Integrated Circuit and System Design. Power and Timing Modeling, Optimization and Simulation: 17th International Workshop, PATMOS 2007, Gothenburg, Sweden, September 3-5, 2007, Proceedings (Lecture Notes in Computer Science 4644)

Table of contents:
Title Page
Preface
Organization
Table of Contents
System-Level Application-Specific NoC Design for Network and Multimedia Applications
Introduction
Related Work
NoC Design Methodology and Simulator Description
NoC Design Methodology
NoC Simulator Description
Experimental Results
Methodology Applied to Multimedia Applications
Methodology Applied to Network Application
Conclusion
References
Fast and Accurate Embedded Systems Energy Characterization Using Non-intrusive Measurements
Introduction
Related Works
Model Construction Basics
Model Structure and Parameters
Model Construction Case Study
Methodology Application
Model Validation
Conclusion
References
A Flexible General-Purpose Parallelizing Architecture for Nested Loops in Reconfigurable Platforms
Introduction
Definitions-Notation
Architectural - Functional Overview
Analysis
Case Studies
Image Filtering
Combinatorial Optimization in Hardware/Software Codesign
Conclusion
References
An Automatic Design Flow for Mapping Application onto a 2D Mesh NoC Architecture
Introduction
Context
Platform Description
Formulation Problem
The Design Flow
4G Radio Communication Application
4G Context Specification
IP Block Model
Simulation Results
Simulation Exploration Methodology
Simulation Results
Conclusion and Future Works
References
Template Vertical Dictionary-Based Program Compression Scheme on the TTA
Introduction
Transport Triggered Architecture
Code Characteristics Analysis
Code Compression Algorithm
Template-Based Compression
Vertical Dictionary-Based Compression
Hardware Prototype
Experiments
Compression Ratio
Area and Power Consumption
Conclusions
References
Asynchronous Functional Coupling for Low Power Sensor Network Processors
Introduction
Low Power Design Techniques
TI MSP 430 Microprocessor
Asynchronous SN Processor Implementation
Comparison
References
A Heuristic for Reducing Dynamic Power Dissipation in Clocked Sequential Designs
Introduction
Preliminaries
Cyclic Graph Model
Retiming
Valid Periodic Schedule
Problem Description and Proposed Resolution Approach
Experimental Results
Conclusions
References
Low-Power Content Addressable Memory With Read/Write and Matched Mask Ports
Introduction
12T CAM Cell
Cell Structure
Cell Layout
Design Topology
Architectural Overview
Functional Overview
Simulation and Results
Conclusion
References
The Design and Implementation of a Power Efficient Embedded SRAM
Introduction
Low-Swing Write Techniques
Dual-Rail Decoder
Architecture and Timing
Experimental Results
Conclusions
References
Design of a Linear Power Amplifier with ±1.5V Power Supply Using ALADIN
Introduction
Novel ALADIN Tools
Modified Calculation of Current Density
Heat Simulation
Design of the Linear Power Amplifier
Layout Solutions
Simulation Results
Conclusions
References
Settling Time Minimization of Operational Amplifiers
Introduction
Minimization of Settling Time of a Generic Op-amp
First Order Response
Second Order Response
Third Order Response
Application Example
Conclusions and Future Work
References
Low-Voltage Low-Power Curvature-Corrected Voltage Reference Circuit Using DTMOSTs
Introduction
Theoretical Analysis
The Temperature Behavior of the New CMOS Weak Inversion Bandgap Reference
The Implementation of the CT^0.5 Block
Low-Power Operation Bandgap Reference Using DTMOSTs
The Implementation of PTAT Voltage Generator Using OVF (Offset Voltage Follower) Block
CMOS Implementation of the Low-Voltage Low-Power Weak Inversion DTMOST Bandgap Reference
Experimental Results
Conclusions
References
Computation of Joint Timing Yield of Sequential Networks Considering Process Variations
Introduction
Characterization of Flip Flops
Models for Flip-Flop Parameters
Timing Yield of Sequential Networks
SSTA of Combinational Circuits
Clock Skew Analysis
Timing Constraints
Joint Timing Yield Estimation
Experimental Results
Conclusions
References
A Simple Statistical Timing Analysis Flow and Its Application to Timing Margin Evaluation
Introduction
Statistical Timing Analysis Flow
Standard Cell Characterization
Timing Analyzer Core
Validation
Monte Carlo vs. Statistically Calculated Values
Statistically Calculated Values vs. Silicon Data
Timing Margins and Improved SSTA Flow
Conclusion
References
A Statistical Approach to the Timing-Yield Optimization of Pipeline Circuits
Introduction
Preliminaries
Statistical Timing Models and Analysis
Timing Yield of Sequential Circuits
Timing Yield and Register Configuration
Timing Yield Changed by Latch Replacement
Problem Formulation
Statistical Latch Replacement
Optimization Flow Overview
Statistical Dynamic Programming
Experimental Results
Conclusions and Future Work
References
A Novel Gate-Level NBTI Delay Degradation Model with Stacking Effect
Introduction
NBTI Model
Previous NBTI Models
Our Novel Gate-Level NBTI Model with Stacking Effect
Gate-Level Delay Degradation Analysis
Traditional Gate Delay Model
Our Novel Gate-Level Delay Model
Experimental Results
Conclusion
References
Modelling the Impact of High Level Leakage Optimization Techniques on the Delay of RT-Components
Introduction
Related Work
Gate Level Model
Inverter Model
Modelling Other Gates Also Considering Temperature and Load
Extension to RT Level
Modelling the Critical Path of an RT Component
Varying Critical Paths
Evaluation
Conclusion
References
Logic Style Comparison for Ultra Low Power Operation in 65nm Technology
Introduction
Logic Families Selection for Design Comparison
Previous Work
Selected Logic Families
Multi-threshold Complementary Pass-Gate Logic (MTCPL) for Ultra Low Voltage Operation
Basic Concept
Delay Analysis
Proposed Changes for Ultra Low Power Operation
Setup and Experimental Procedure
Ultra Low-Power Performance
Conclusion
References
Design-In Reliability for 90-65nm CMOS Nodes Submitted to Hot-Carriers and NBTI Degradation
Introduction
Modeling of HCI and NBTI Degradation
HCI in NMOS
NBTI Modeling
Reliability Simulation
Silicon Verification
Analysis of Circuits
Conclusions
References
Clock Distribution Techniques for Low-EMI Design
Introduction
EMI in Digital Design: Previous Work
Spectral Analysis of Digital Signals
Clock Distribution Techniques for EMC-Aware Design
Clock-Tree Synthesis for Low EMI
Clock Skew for Low EMI
Experimental Results
Conclusions
References
Crosstalk Waveform Modeling Using Wave Fitting
Introduction
Methodology Overview
Other Analytical Models
Trapezoid Waveform
Isosceles Triangular Wave Model
Weibull Wave Model
Comprehensive Waveform Set Generation
Reference Waveform Set
Waveform Transformation
Error Matrix Generation
Noise Analysis with Wave Fitting
References
Weakness Identification for Effective Repair of Power Distribution Network
Introduction
PDN Weakness Identification and Fixing Problem
Issues in PDN Fixing
Assumption and Problem Formulation
Solving a PDN Fixing Problem
PDN Weakness Identification: Box-Scan Search
Voltage Recovery Estimation
Simulation Examples
Conclusion
References
New Adaptive Encoding Schemes for Switching Activity Balancing in On-Chip Buses
Introduction
Adaptive Encoding Schemes for Activity Balancing
Coarse-Grained Balancing Scheme
Fine-Grained Balancing Scheme
Codec Architecture
Experimental Results
Experimental Set-Up
Parameter Space Exploration
Experimental Data
Conclusions
References
On the Necessity of Combining Coding with Spacing and Shielding for Improving Performance and Power in Very Deep Sub-micron Interconnects
Introduction
Delay and Transition Activity in VDSM Interconnects
Improving Throughput and Power in Buses by Coding
Coding and Classic Anti-crosstalk Techniques
Simple Coding Schemes for Improving Throughput
Spacing and Shielding
Combining Coding with Spacing and Shielding
Combining Coding with Spacing
Optimizing Delay and Transition Activity
Optimizing Delay and Total Transition Activity
Concluding Remarks
References
Soft Error-Aware Power Optimization Using Gate Sizing
Introduction
Preliminaries
Power and Delay Models
Single Event Upset
System Lifetime and MTTF
Statistical Analysis of Gate Masking
General Approach
Reliability of Results
Problem Formulation
Convexity of the Optimization Problem
Simulation Results
Conclusion
References
Automated Instruction Set Characterization and Power Profile Driven Software Optimization for Mobile Devices
Introduction
Related Work
Characterization Tool
Measurement Circuit
Energy Model
Optimization System
GCC RTL Representation
Optimization Algorithms
Evaluation and Results
Measurement Circuit
Optimization System
Conclusion
References
RTL Power Modeling and Estimation of Sleep Transistor Based Power Gating
Introduction
Related Work
Modeling of Power Gating
Modeling the On-State
Modeling the Off-State
Modeling the Switch-Over
Estimation Framework
Model Evaluation
Estimation Flow Evaluation
Conclusion
References
Functional Verification of Low Power Designs at RTL
Introduction
Power Management Techniques
Power Management Design Structure
Retention Memory Elements
Specifying Design Intent of Low Power Designs
Power Aware Simulation
PA Simulation Flow
Register, Latch, and Memory Recognition
Identification of Power Elements
Elaborating the Power Aware Design
Power Aware Simulation
Example
Sample Design
Power Configuration File
For Further Investigation
Conclusion
References
XEEMU: An Improved XScale Power Simulator
Introduction
Related Work
XTREM – An In-Depth Review
Experimental Setup and Benchmarks
Performance Validation
Power Modeling
Experimental Results
Conclusions and Future Work
References
Low Power Elliptic Curve Cryptography
Introduction
Characteristic 2 Finite Fields
Elliptic Curves
Coordinate System
Point Scalar Multiplication Algorithms
Hardware Implementation
GF(2^m) Components
Reconfigurable Elliptic Curve Processor
Results
Conclusions
References
Design and Test of Self-checking Asynchronous Control Circuit
Introduction
Background
Labeled Petri Net
Direct Mapping
Fail-Stop David Cell
Design and Test of Self-checking Asynchronous Circuits
Linear Structure
Fork/Join Structure
Merge/Choice Structure
Conclusion and Future Work
References
An Automatic Design Flow for Implementation of Side Channel Attacks Resistant Crypto-Chips
Introduction
Motivation
QDI Asynchronous Circuits
Persia: A Synthesis Tool for QDI Circuits
Arithmetic Function Extractor (AFE)
Decomposition
Template Synthesizer (TSYN)
Cell-Library Customization
AES Implementation
Conclusion
References
Analysis and Improvement of Dual Rail Logic as a Countermeasure Against DPA
Introduction
Differential Power Analysis
Dual Rail Logic: A Countermeasure Against DPA
Secure Design Range
Secure Triple Track Logic
Conclusion
References
Performance Optimization of Embedded Applications in a Hybrid Reconfigurable Platform
Introduction
Hybrid Reconfigurable Architecture
Design Flow
FPGA Mapping Procedure
Experimental Results
Set-Up
Experimentation
Conclusions
References
The Energy Scalability of Wavelet-Based, Scalable Video Decoding
Introduction
System Overview
Energy Measurement Setup and First Results
Energy Scalability
Conclusions
References
Direct Memory Access Optimization in Wireless Terminals for Reduced Memory Latency and Energy Consumption
Introduction
Related Work
Overview of the Proposed Methodology
Step 1: Dominant Scenario Definition
Step 2: DMA Parameters Definition in Multi-threaded Systems
Profiling Framework
DMA Word Length and DMA Minimum Block Transfer Size
Analysis of the Proposed HW/SW Modifications for the DMA
Steps 3 and 4: Run-Time Scenario Identification and DMA Parameter Selection
Experimental Results
Conclusions
References
Exploiting Input Variations for Energy Reduction
Introduction
Typical-Case Design Methodologies
Related Works
Razor Logic
Canary Logic
Canary Flip-Flop
Power Reduction with Canary FFs
Canary FF Implementation Via Scan Reuse
Evaluation Methodology
Timing Error Rates
Architectural-Level Simulation Environment
Results
Conclusions
References
A Model of DPA Syndrome and Its Application to the Identification of Leaking Gates
Introduction
Different Differential Power Analyses
Differential Power Analysis of Kocher
Multi-bit DPA
Design Oriented Modelling of DPA Syndrome
Leaking Gates Identification
εpVDD Distribution Analysis
DPA Syndrome Analysis
Critical and Uncritical Gates and Nets
Conclusion
References
Static Power Consumption in CMOS Gates Using Independent Bodies
Introduction
Test Set-Up
Simulation Results
Input Patterns Containing the Controlling Value
Input Patterns Not Containing the Controlling Value
Conclusions
References
Moderate Inversion: Highlights for Low Voltage Design
Introduction
EKV 2.0 MOS Model
Self Cascode Structure
Traditional Approach
Proposed Approach
Temperature Considerations
Measurements and Discussion
Self Cascode Current Reference (SCCR)
Current Reference Topology
Design Equations
Design Methodology
Computer Simulation Results
Conclusion
References
On Two-Pronged Power-Aware Voltage Scheduling for Multi-processor Real-Time Systems
Introduction
Preliminaries
Power-Aware Heuristic for Task Graphs
Task Assignment and Online Voltage Scheduling
Static Voltage Scheduling
Experimental Results
Conclusions
References
Semi Custom Design: A Case Study on SIMD Shufflers
Introduction
Motivation
Design Methodology Choices
Datapath Choice for Case Study
SIMD Shufflers
Datapath Generator (DPG)
Experimental Setup and Results
Conclusions
References
Optimization for Real-Time Systems with Non-convex Power Versus Speed Models
Introduction
Power Model and Related Work
Fundamental Problem
Scheduling for Non-convex Power Model
Experimental Results
Conclusion
References
Triple-Threshold Static Power Minimization in High-Level Synthesis of VLSI CMOS
Introduction
Related Work
Comparison of Standard Cell Libraries
Static Power Calculations
HSPICE Simulation
Triple-Threshold Static Power Minimization Methodology
Simulation
Conclusion
References
A Fast and Accurate Power Estimation Methodology for QDI Asynchronous Circuits
Introduction
Related Work
QDI Asynchronous Synthesis Method
Persia Synthesis Tool
High Level Power Estimation
High-Level Simulation for Power Estimation
Experimental Results
Conclusions
References
Subthreshold Leakage Modeling and Estimation of General CMOS Complex Gates
Introduction
Related Works
Subthreshold Leakage Model
Subthreshold Leakage in Non-series-parallel Networks
On-Transistors in Off-Networks
Results
Conclusions
References
A Platform for Mixed HW/SW Algorithm Specifications for the Exploration of SW and HW Partitioning
Introduction
State of the Art of Platforms Supporting Mixed HW/SW Implementations
Description of the Virtual Socket Platform
Details on HW Implementation and SW Support
The Heart of Simplicity: HDL Modules Virtual Memory Accesses
The Software Support: The Virtual Memory Window Library
The Integration of the HDL Modules in the Platform
Profiling Tools: Testing and Optimizing Data Transfers
Conclusion
References
Fast Calculation of Permissible Slowdown Factors for Hard Real-Time Systems
Introduction
Related Work
Model
Event Streams
Demand Bound
Linear Programme
Experiments
Conclusion and Future Work
References
Design Methodology and Software Tool for Estimation of Multi-level Instruction Cache Memory Miss Rate
Introduction
Proposed Estimation Methodology
Comparison Results
Conclusions
References
A Statistical Model of Logic Gates for Monte Carlo Simulation Including On-Chip Variations
Introduction
Statistical Gate Model
Model Extraction
Case Study
Conclusions
References
Switching Activity Reduction of MAC-Based FIR Filters with Correlated Input Data
Introduction
Correlated Input Data
Proposed Approach
Multiplier and Bus Power Consumption
Selective Negation
Results
Design 1
Design 2
Design 3
Conclusion
References
Performance of CMOS and Floating-Gate Full-Adders Circuits at Subthreshold Power Supply
Introduction
FGMOS Basics
Full-Adder Designs
Full-Adder Simulations
Results
Discussion
Conclusions
References
Low-Power Digital Filtering Based on the Logarithmic Number System
Introduction
LNS Basics
LNS Representation in the Implementation of Digital Filters
Feedback Filters
FIR Filters
Power Dissipation of LNS Addition and Multiplication
Discussion and Conclusions
References
A Power Supply Selector for Energy- and Area-Efficient Local Dynamic Voltage Scaling
Introduction
LDVS Architecture Proposal
The Power Supply Selector
PSS Architecture
Hopping Sequence
Results
Power Efficiency
PSS Area
Transition Simulation
Conclusion
References
Dependability Evaluation of Time-Redundancy Techniques in Integer Multipliers
Introduction
Fault Tolerance
Integer Multipliers
Fault-Tolerant Adders and Multipliers
Time-Redundancy Techniques in Multipliers
Technique 1 – Swapped Inputs
Technique 2 – Inverted Reduction Tree
Technique 3 – Inverted Reduction Tree and Swapped Inputs
Technique 4 – Twin Precision
Evaluation of the Time-Redundancy Techniques
Fault Model
Assessment 1 - 8×8-Bit Multiplier with CSA Reduction Tree
Assessment 2 - 8×8-Bit Multiplier with HPM Reduction Tree
Assessment 3 - 16×16-Bit Multiplier with CSA Reduction Tree
Delay, Power, and Area Penalties
Conclusion
References
Design and Industrialization Challenges of Memory Dominated SOCs
Statistical Static Timing Analysis: A New Approach to Deal with Increased Process Variability in Advanced Nanometer Technologies
Analog Power Modelling
Technological Trends, Design Constraints and Design Implementation Challenges in Mobile Phone Platforms
System Design from Instrument Level Down to ASIC Transistors with Speed and Low Power as Driving Parameters
Author Index

Lecture Notes in Computer Science
Commenced Publication in 1973
Founding and Former Series Editors: Gerhard Goos, Juris Hartmanis, and Jan van Leeuwen

Editorial Board

David Hutchison, Lancaster University, UK
Takeo Kanade, Carnegie Mellon University, Pittsburgh, PA, USA
Josef Kittler, University of Surrey, Guildford, UK
Jon M. Kleinberg, Cornell University, Ithaca, NY, USA
Friedemann Mattern, ETH Zurich, Switzerland
John C. Mitchell, Stanford University, CA, USA
Moni Naor, Weizmann Institute of Science, Rehovot, Israel
Oscar Nierstrasz, University of Bern, Switzerland
C. Pandu Rangan, Indian Institute of Technology, Madras, India
Bernhard Steffen, University of Dortmund, Germany
Madhu Sudan, Massachusetts Institute of Technology, MA, USA
Demetri Terzopoulos, University of California, Los Angeles, CA, USA
Doug Tygar, University of California, Berkeley, CA, USA
Moshe Y. Vardi, Rice University, Houston, TX, USA
Gerhard Weikum, Max-Planck Institute of Computer Science, Saarbruecken, Germany

4644

Nadine Azemard Lars Svensson (Eds.)

Integrated Circuit and System Design
Power and Timing Modeling, Optimization and Simulation
17th International Workshop, PATMOS 2007
Gothenburg, Sweden, September 3-5, 2007
Proceedings


Volume Editors

Nadine Azemard
LIRMM, UMR CNRS/Université de Montpellier II
161 rue Ada, 34392 Montpellier, France
E-mail: [email protected]

Lars Svensson
Chalmers University of Technology
Department of Computer Engineering
412 96 Göteborg, Sweden
E-mail: [email protected]

Library of Congress Control Number: 2007933304
CR Subject Classification (1998): B.7, B.8, C.1, C.4, B.2, B.6, J.6
LNCS Sublibrary: SL 1 – Theoretical Computer Science and General Issues

ISSN: 0302-9743
ISBN-10: 3-540-74441-X Springer Berlin Heidelberg New York
ISBN-13: 978-3-540-74441-2 Springer Berlin Heidelberg New York

This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation, broadcasting, reproduction on microfilms or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer. Violations are liable to prosecution under the German Copyright Law.

Springer is a part of Springer Science+Business Media
springer.com

© Springer-Verlag Berlin Heidelberg 2007
Printed in Germany

Typesetting: Camera-ready by author, data conversion by Scientific Publishing Services, Chennai, India
Printed on acid-free paper
SPIN: 12111398 06/3180 543210

Preface

Welcome to the proceedings of PATMOS 2007, the 17th in a series of international workshops. PATMOS 2007 was organized by Chalmers University of Technology with IEEE Sweden Chapter of the Solid-State Circuit Society technical cosponsorship and IEEE CEDA sponsorship. Over the years, PATMOS has evolved into an important European event, where researchers from both industry and academia discuss and investigate the emerging challenges in future and contemporary applications, design methodologies, and tools required for the development of the upcoming generations of integrated circuits and systems.

The technical program of PATMOS 2007 consisted of state-of-the-art technical contributions, three invited talks and an industrial session on design challenges in real-life projects. The technical program focused on timing, performance and power consumption, as well as architectural aspects with particular emphasis on modeling, design, characterization, analysis and optimization in the nanometer era.

The Technical Program Committee, with the assistance of additional expert reviewers, selected the 55 papers presented at PATMOS. The papers were organized into 9 technical sessions and 3 poster sessions. As is always the case with the PATMOS workshops, full papers were required, and several reviews were received per manuscript.

Beyond the presentations of the papers, the PATMOS technical program was enriched by a series of speeches offered by world-class experts, on important emerging research issues of industrial relevance. Jean Michel Daga spoke about “Design and Industrialization Challenges of Memory Dominated SOCs”, Davide Pandini spoke about “Statistical Static Timing Analysis: A New Approach to Deal with Increased Process Variability in Advanced Nanometer Technologies” and Christer Svensson spoke about “Analog Power Modelling”. Furthermore, the technical program was augmented by two industrial talks, given by leading experts from industry. Fredrik Dahlgren, from Ericsson Mobile Platforms, spoke about “Technological Trends, Design Constraints and Design Implementation Challenges in Mobile Phone Platforms” and Anders Emrich, from Omnisys Instruments AB, spoke about “System Design from Instrument Level down to ASIC Transistors with Speed and Low Power as Driving Parameters”.

We would like to thank the many people that worked voluntarily to make PATMOS 2007 possible, the expert reviewers, the members of the technical program and steering committees, and the invited speakers who offered their skill, time, and deep knowledge to make PATMOS 2007 a memorable event. Last but not least we would like to thank the sponsors of PATMOS 2007, Ericsson, Omnisys, Chalmers University and the city of Göteborg, for their support.

September 2007

Nadine Azemard Lars Svensson

Organization

Organizing Committee

General Chair: Lars Svensson, Chalmers University, Sweden
Technical Program Chair: Nadine Azemard, LIRMM, France
Secretariat: Ewa Wäingelin, Chalmers University, Sweden
Proceedings: Nadine Azemard, LIRMM, France

PATMOS Technical Program Committee

A. Alvandpour, Linköping Univ., Sweden
D. Atienza, EPFL, Switzerland
N. Azemard, LIRMM, France
P. A. Beerel, Univ. of Southern California, USA
D. Bertozzi, Univ. of Ferrara, Italy
N. Chang, Seoul National Univ., Korea
J. J. Chico, Univ. de Sevilla, Spain
J. Figueras, Univ. de Catalunya, Spain
E. Friedman, Univ. of Rochester, USA
C. E. Goutis, Univ. of Patras, Greece
E. Grass, IHP, Germany
J. L. Güntzel, Univ. Fed. de Pelotas, Brazil
R. Hartenstein, Univ. of Kaiserslautern, Germany
N. Julien, LESTER, France
K. Karagianni, Univ. of Patras, Greece
P. Marchal, IMEC, Belgium
P. Maurine, LIRMM, France
V. Moshnyaga, Univ. of Fukuoka, Japan
W. Nebel, Univ. of Oldenburg, Germany
D. Nikolos, Univ. of Patras, Greece
A. Nunez, Univ. de Las Palmas, Spain
V. Paliuras, Univ. of Patras, Greece
D. Pandini, ST Microelectronics, Italy
F. Pessolano, Philips, The Netherlands
H. Pfleiderer, Univ. of Ulm, Germany
C. Piguet, CSEM, Switzerland
M. Poncino, Politecnico di Torino, Italy
R. Reis, Univ. of Porto Alegre, Brazil
M. Robert, Univ. of Montpellier, France
J. Rossello, Balearic Islands Univ., Spain
D. Sciuto, Politecnico di Milano, Italy
J. Segura, Balearic Islands Univ., Spain


D. Soudris, Univ. of Thrace, Greece
L. Svensson, Chalmers Univ. of Technology, Sweden
A. M. Trullemans, Univ. LLN, Belgium
D. Verkest, IMEC, Belgium
R. Wilson, ST Microelectronics, France

PATMOS Steering Committee

Antonio J. Acosta, University of Sevilla/IMSE-CNM, Spain
Nadine Azemard, LIRMM - University of Montpellier, France
Joan Figueras, Universitat Politècnica de Catalunya, Spain
Reiner Hartenstein, University of Kaiserslautern, Germany
Jorge Juan-Chico, University of Sevilla/IMSE-CNM, Spain
Enrico Macii, Politecnico di Torino (POLITO), Italy
Philippe Maurine, LIRMM - University of Montpellier, France
Wolfgang Nebel, OFFIS, Germany
Vassilis Paliouras, University of Patras, Greece
Christian Piguet, CSEM, Switzerland
Dimitrious Soudris, Democritus University of Thrace (DUTH), Greece
Lars Svensson, Chalmers University of Technology, Sweden
Anne-Marie Trullemans, Université Catholique de Louvain (UCL), Belgium
Diederik Verkest, IMEC, Belgium
Roberto Zafalon, ST Microelectronics, Italy

Executive Steering Sub-committee

President: Enrico Macii, Politecnico di Torino (POLITO), Italy
Vice-president: Vassilis Paliouras, University of Patras, Greece
Secretary: Nadine Azemard, LIRMM - University of Montpellier, France

Table of Contents

Session 1 - High-Level Design (1)

System-Level Application-Specific NoC Design for Network and Multimedia Applications ..... 1
Lazaros Papadopoulos and Dimitrios Soudris

Fast and Accurate Embedded Systems Energy Characterization Using Non-intrusive Measurements ..... 10
Nicolas Fournel, Antoine Fraboulet, and Paul Feautrier

A Flexible General-Purpose Parallelizing Architecture for Nested Loops in Reconfigurable Platforms ..... 20
Ioannis Panagopoulos, Christos Pavlatos, George Manis, and George Papakonstantinou

An Automatic Design Flow for Mapping Application onto a 2D Mesh NoC Architecture ..... 31
Julien Delorme

Session 2 - Low Power Design Techniques

Template Vertical Dictionary-Based Program Compression Scheme on the TTA ..... 43
Lai Mingche, Wang Zhiying, Guo JianJun, Dai Kui, and Shen Li

Asynchronous Functional Coupling for Low Power Sensor Network Processors ..... 53
Delong Shang, Chihoon Shin, Ping Wang, Fei Xia, Albert Koelmans, Myeonghoon Oh, Seongwoon Kim, and Alex Yakovlev

A Heuristic for Reducing Dynamic Power Dissipation in Clocked Sequential Designs ..... 64
Noureddine Chabini

Low-Power Content Addressable Memory With Read/Write and Matched Mask Ports ..... 75
Saleh Abdel-Hafeez, Shadi M. Harb, and William R. Eisenstadt

The Design and Implementation of a Power Efficient Embedded SRAM ..... 86
Yijun Liu, Pinghua Chen, Wenyan Wang, and Zhenkun Li

Session 3 - Low Power Analog Circuits

Design of a Linear Power Amplifier with ±1.5V Power Supply Using ALADIN ..... 97
Björn Lipka and Ulrich Kleine

Settling Time Minimization of Operational Amplifiers ..... 107
Andrea Pugliese, Gregorio Cappuccino, and Giuseppe Cocorullo

Low-Voltage Low-Power Curvature-Corrected Voltage Reference Circuit Using DTMOSTs ..... 117
Cosmin Popa

Session 4 - Statistical Static Timing Analysis

Computation of Joint Timing Yield of Sequential Networks Considering Process Variations ..... 125
Amit Goel, Sarvesh Bhardwaj, Praveen Ghanta, and Sarma Vrudhula

A Simple Statistical Timing Analysis Flow and Its Application to Timing Margin Evaluation ..... 138
V. Migairou, R. Wilson, S. Engels, Z. Wu, N. Azemard, and P. Maurine

A Statistical Approach to the Timing-Yield Optimization of Pipeline Circuits ..... 148
Chin-Hsiung Hsu, Szu-Jui Chou, Jie-Hong R. Jiang, and Yao-Wen Chang

Session 5 - Power Modeling and Optimization

A Novel Gate-Level NBTI Delay Degradation Model with Stacking Effect ..... 160
Hong Luo, Yu Wang, Ku He, Rong Luo, Huazhong Yang, and Yuan Xie

Modelling the Impact of High Level Leakage Optimization Techniques on the Delay of RT-Components ..... 171
Marko Hoyer, Domenik Helms, and Wolfgang Nebel

Logic Style Comparison for Ultra Low Power Operation in 65nm Technology ..... 181
Mandeep Singh, Christophe Giacomotto, Bart Zeydel, and Vojin Oklobdzija

Design-In Reliability for 90-65nm CMOS Nodes Submitted to Hot-Carriers and NBTI Degradation ..... 191
CR. Parthasarathy, A. Bravaix, C. Guérin, M. Denais, and V. Huard

Session 6 - Low Power Routing Optimization

Clock Distribution Techniques for Low-EMI Design ..... 201
Davide Pandini, Guido A. Repetto, and Vincenzo Sinisi

Crosstalk Waveform Modeling Using Wave Fitting ..... 211
Mini Nanua and David Blaauw

Weakness Identification for Effective Repair of Power Distribution Network ..... 222
Takashi Sato, Shiho Hagiwara, Takumi Uezono, and Kazuya Masu

New Adaptive Encoding Schemes for Switching Activity Balancing in On-Chip Buses ..... 232
P. Sithambaram, A. Macii, and E. Macii

On the Necessity of Combining Coding with Spacing and Shielding for Improving Performance and Power in Very Deep Sub-micron Interconnects ..... 242
T. Murgan, P.B. Bacinschi, S. Pandey, A. García Ortiz, and M. Glesner

Session 7 - High Level Design (2)

Soft Error-Aware Power Optimization Using Gate Sizing ..... 255
Foad Dabiri, Ani Nahapetian, Miodrag Potkonjak, and Majid Sarrafzadeh

Automated Instruction Set Characterization and Power Profile Driven Software Optimization for Mobile Devices ..... 268
Matthias Grumer, Manuel Wendt, Christian Steger, Reinhold Weiss, Ulrich Neffe, and Andreas Mühlberger

RTL Power Modeling and Estimation of Sleep Transistor Based Power Gating ..... 278
Sven Rosinger, Domenik Helms, and Wolfgang Nebel

Functional Verification of Low Power Designs at RTL ..... 288
Allan Crone and Gabriel Chidolue

XEEMU: An Improved XScale Power Simulator ..... 300
Zoltán Herczeg, Ákos Kiss, Daniel Schmidt, Norbert Wehn, and Tibor Gyimóthy

Session 8 - Security and Asynchronous Design

Low Power Elliptic Curve Cryptography ..... 310
Maurice Keller and William Marnane

Design and Test of Self-checking Asynchronous Control Circuit ..... 320
Jian Ruan, Zhiying Wang, Kui Dai, and Yong Li

An Automatic Design Flow for Implementation of Side Channel Attacks Resistant Crypto-Chips ..... 330
Behnam Ghavami and Hossein Pedram

Analysis and Improvement of Dual Rail Logic as a Countermeasure Against DPA ..... 340
A. Razafindraibe, M. Robert, and P. Maurine

Session 9 - Low Power Applications

Performance Optimization of Embedded Applications in a Hybrid Reconfigurable Platform ..... 352
Michalis D. Galanis, Gregory Dimitroulakos, and Costas E. Goutis

The Energy Scalability of Wavelet-Based, Scalable Video Decoding ..... 363
Hendrik Eeckhaut, Harald Devos, and Dirk Stroobandt

Direct Memory Access Optimization in Wireless Terminals for Reduced Memory Latency and Energy Consumption ..... 373
Miguel Peon-Quiros, Alexandros Bartzas, Stylianos Mamagkakis, Francky Catthoor, Jose M. Mendias, and Dimitrios Soudris

Poster 1 - Modeling and Optimization

Exploiting Input Variations for Energy Reduction ..... 384
Toshinori Sato and Yuji Kunitake

A Model of DPA Syndrome and Its Application to the Identification of Leaking Gates ..... 394
A. Razafindraibe and P. Maurine

Static Power Consumption in CMOS Gates Using Independent Bodies ..... 404
D. Guerrero, A. Millan, J. Juan, M.J. Bellido, P. Ruiz-de-Clavijo, E. Ostua, and J. Viejo

Moderate Inversion: Highlights for Low Voltage Design ..... 413
Fabrice Guigues, Edith Kussener, Benjamin Duval, and Hervé Barthelemy

On Two-Pronged Power-Aware Voltage Scheduling for Multi-processor Real-Time Systems ..... 423
Naotake Kamiura, Teijiro Isokawa, and Nobuyuki Matsui

Semi Custom Design: A Case Study on SIMD Shufflers ..... 433
Praveen Raghavan, Nandhavel Sethubalasubramanian, Satyakiran Munaga, Estela Rey Ramos, Murali Jayapala, Oliver Weiss, Francky Catthoor, and Diederik Verkest

Poster 2 - High Level Design

Optimization for Real-Time Systems with Non-convex Power Versus Speed Models ..... 443
Ani Nahapetian, Foad Dabiri, Miodrag Potkonjak, and Majid Sarrafzadeh

Triple-Threshold Static Power Minimization in High-Level Synthesis of VLSI CMOS ..... 453
Harry I.A. Chen, Edward K.W. Loo, James B. Kuo, and Marek J. Syrzycki

A Fast and Accurate Power Estimation Methodology for QDI Asynchronous Circuits ..... 463
Behnam Ghavami, Mahtab Niknahad, Mehrdad Najibi, and Hossein Pedram

Subthreshold Leakage Modeling and Estimation of General CMOS Complex Gates ..... 474
Paulo F. Butzen, André I. Reis, Chris H. Kim, and Renato P. Ribas

A Platform for Mixed HW/SW Algorithm Specifications for the Exploration of SW and HW Partitioning ..... 485
Christophe Lucarz and Marco Mattavelli

Fast Calculation of Permissible Slowdown Factors for Hard Real-Time Systems ..... 495
Henrik Lipskoch, Karsten Albers, and Frank Slomka

Design Methodology and Software Tool for Estimation of Multi-level Instruction Cache Memory Miss Rate ..... 505
N. Kroupis and D. Soudris

Poster 3 - Low Power Techniques and Applications

A Statistical Model of Logic Gates for Monte Carlo Simulation Including On-Chip Variations ..... 516
Francesco Centurelli, Luca Giancane, Mauro Olivieri, Giuseppe Scotti, and Alessandro Trifiletti

Switching Activity Reduction of MAC-Based FIR Filters with Correlated Input Data ..... 526
Oscar Gustafson, Saeeid Tahmasbi Oskuii, Kenny Johansson, and Per Gunnar Kjeldsberg

Performance of CMOS and Floating-Gate Full-Adders Circuits at Subthreshold Power Supply ..... 536
Jon Alfredsson and Snorre Aunet

Low-Power Digital Filtering Based on the Logarithmic Number System ..... 546
C. Basetas, I. Kouretas, and V. Paliouras

A Power Supply Selector for Energy- and Area-Efficient Local Dynamic Voltage Scaling ..... 556
Sylvain Miermont, Pascal Vivet, and Marc Renaudin

Dependability Evaluation of Time-Redundancy Techniques in Integer Multipliers ..... 566
Henrik Eriksson

Keynotes

Design and Industrialization Challenges of Memory Dominated SOCs ..... 576
J.M. Daga

Statistical Static Timing Analysis: A New Approach to Deal with Increased Process Variability in Advanced Nanometer Technologies ..... 577
D. Pandini

Analog Power Modelling ..... 578
C. Svensson

Industrial Session - Design Challenges in Real-Life Projects

Technological Trends, Design Constraints and Design Implementation Challenges in Mobile Phone Platforms ..... 579
F. Dahlgren

System Design from Instrument Level Down to ASIC Transistors with Speed and Low Power as Driving Parameters ..... 580
A. Emrich

Author Index ..... 581

System-Level Application-Specific NoC Design for Network and Multimedia Applications

Lazaros Papadopoulos and Dimitrios Soudris
VLSI and Testing Center, Dept. of Electrical and Computer Engineering, Democritus University of Thrace, 67100, Xanthi, Greece
{lpapadop,dsoudris}@ee.duth.gr

Abstract. Nowadays, embedded consumer devices execute complex network and multimedia applications that require high performance and low energy consumption. For implementing complex applications on Networks-on-Chip (NoCs), a design methodology is needed for performing exploration at the NoC system level, in order to select the optimal application-specific NoC architecture. The design methodology we present in this paper is based on the exploration of different NoC characteristics and is supported by a flexible NoC simulator which provides the essential evaluation metrics for selecting the optimal communication parameters of the NoC architectures. We show that, with the evaluation metrics provided by the simulator we present, it is possible to perform exploration of several NoC aspects and to select the optimal communication characteristics for NoC platforms implementing network and multimedia applications.

1 Introduction

In recent years, network and multimedia applications have come to be implemented on embedded consumer devices. Modern portable devices (e.g., PDAs and mobile phones) implement complex network protocols in order to access the internet and communicate with each other. Moreover, embedded systems implement text, speech and video processing multimedia applications, such as MPEG and 3D video games, which experience fast growth in their variety, functionality and complexity. Both areas are characterized by a rapidly increasing demand for computational power in order to process complex algorithms. The implementation of these applications on portable devices is a difficult task, due to their limited resources and the stringent power constraints of such systems. Single-processor systems are not capable of providing the required computational power for complex applications with Task-Level Parallelism (TLP) and hard real-time constraints (e.g., frame rate in MPEG). To face these problems, multiprocessor systems are used to provide the computational concurrency required to handle concurrent events in real time.

With technology scaling, the integration of billions of transistors on a chip is enabled. Current MPSoC platforms usually contain bus-based interconnection infrastructures. Bus-based structures suffer from limited scalability, poor performance for large systems, and high energy consumption. The computational power along with the energy efficiency that modern applications require cannot be provided by shared-bus types of interconnections. During the last five years, Network-on-Chip (NoC) has been proposed as the new SoC design paradigm. NoC, instead of using bus structures or dedicated wires, is composed of an on-chip, packet-switched network, which provides more predictability and better scalability compared to bus communication schemes [1].

Satisfactory design quality for modern complex applications implemented on a NoC is possible when both computation and communication refinement are performed. Current NoC simulators focus mainly on communication aspects, at a high abstraction level. However, they are not flexible enough to achieve satisfactory communication refinement. Additionally, they do not provide enough evaluation metrics to enable the designer to choose the optimal characteristics of the NoC architecture.

In this paper we present a NoC design methodology for system-level exploration of NoC parameters. The methodology is based on a new NoC simulator which is an extension of the Nostrum NoC Simulation Environment (NNSE) [2]. Due to the flexibility of this tool, the designer can easily explore many NoC characteristics, such as topology, packet size and routing algorithm, and choose the application-specific NoC architecture which is optimal in terms of performance, communication energy and link utilization. In this work, we chose to explore different NoC topologies for specific network and multimedia applications.

The remainder of the paper is organized as follows. In Section 2, we describe some related work. In Section 3, we analyze the design methodology that is supported by the NoC simulator. In Section 4 our benchmarks are introduced and the experimental results are presented. Finally, in Section 5 we draw our conclusions.

2 Related Work

NoC as a scalable communication architecture is described in [1]. The existing NoC research is presented in [3], which shows that NoC constitutes a unification of current trends of intra-chip communication. Many NoC design methodologies focus on application-specific NoC design: the NoC architecture is customized for each application in order to achieve optimal performance and energy trade-offs. For instance, linear-programming-based techniques for the synthesis of custom NoC architectures are presented in [4]. The work in [5] describes a tool for automatic selection of the optimal topology and core mapping for a given application. Other design methodologies are presented in [6], [7] and [8]. The design approach described in this work is complementary and can be used along with the aforementioned design methodologies, since the flexible NoC simulator we present is useful for exploring and evaluating NoC characteristics such as topology, mapping and routing.

From a different perspective, several NoC implementations have been proposed. For instance, the SPIN network [9] implements a fat-tree topology using wormhole routing. The CHAIN NoC [10] is implemented using asynchronous circuit techniques. MANGO [11] is a clockless NoC which provides both best-effort and guaranteed services. However, the aforementioned approaches are not flexible enough, since they limit the design choices. Also, they do not focus on application-specific NoC design and, therefore, they are not suitable for exploring NoC parameters according to the characteristics of the application under study. Thus, a system-level and flexible simulator is needed for exploring interconnection characteristics. The Nostrum NoC simulator (NNSE) [2] focuses on grid-based, router-driven communication media for on-chip communication. As we show in the next section, the new tool based on Nostrum, which supports our methodology, adds new features to Nostrum and allows the easy exploration of several NoC parameters.

3 NoC Design Methodology and Simulator Description

In this section we analyze the application-specific NoC design methodology and describe the NoC simulator we developed, which supports the design methodology.

3.1 NoC Design Methodology

The NoC design methodology we present is based on the exploration of NoC characteristics at system level and the selection of the optimal architecture that meets the design constraints.
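As a compact preview of the iterative flow that Fig. 1 and the following paragraphs describe, the loop can be sketched as below. This is purely our illustration: the types, names and the simulate() wrapper are assumptions, not an interface of the paper's tool.

```cpp
#include <optional>
#include <vector>

struct Candidate   { /* topology, packet size, buffer size, routing, ... */ };
struct Metrics     { double throughput, link_utilization, energy; };
struct Constraints { double min_throughput, max_energy; };

Metrics simulate(const Candidate& c);   // assumed wrapper around the NoC simulator

// Steps 2-4 of the methodology, iterated until the design constraints are met.
std::optional<Candidate> explore(const std::vector<Candidate>& space,
                                 const Constraints& limits) {
    for (const Candidate& c : space) {
        Metrics m = simulate(c);        // build architecture, schedule tasks, simulate
        if (m.throughput >= limits.min_throughput && m.energy <= limits.max_energy)
            return c;                   // constraints satisfied: select this architecture
    }
    return std::nullopt;                // widen the explored NoC aspects and retry
}
```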

Fig. 1. Application-specific NoC design methodology

Figure 1 presents the NoC design methodology. The input of the methodology is the communication task graph (CTG) of the application, and the output is an optimal application-specific NoC architecture which meets the design constraints. The first step of the design process is the partitioning of the application into a number of distinct tasks and the extraction of the CTG. The parallelization of the application can be done with several methods, such as the one described in [13]. The second step of the methodology is the construction of the system-level NoC architecture, using the NoC simulator. In this step, one or more NoC aspects are explored, such as topology, packet size, buffer size, etc. As we show in the following subsection, the flexibility of the simulator allows easy exploration of several NoC communication characteristics. The next step is the scheduling of application tasks onto the NoC architecture; in this methodology we refer to static scheduling. This step can be implemented with a scheduling algorithm, e.g., [14]. After task scheduling, step 4 refers to the simulation process and the extraction of the evaluation metrics. The designer analyzes the experimental results and determines whether or not the chosen architecture satisfies the imposed design constraints. If they are satisfied, the NoC architecture can be selected; otherwise, the exploration of the same or other NoC aspects continues. Thus, the design methodology is an iterative process that ends when the design constraints are satisfied.

Although the methodology we present can be used for any application domain, in this work we focus specifically on network and multimedia applications. This is because modern applications from both areas are implemented in embedded devices and demand increased computational power.

3.2 NoC Simulator Description

The NoC simulator is developed for performing NoC exploration at system level. The tool emphasizes communication aspects (such as packet rates, buffer size, etc.). Therefore, it abstracts away lower-level aspects, such as cache and other memory effects, in order to keep the complexity of the NoC model under control. For instance, in Section 4 we try to capture the impact of topology on the overall behavior of the network. The high abstraction level of the simulator and its simplicity allow easy exploration and quick modifications. Additionally, it provides a variety of metrics that allow the evaluation of the specific NoC implementation.

Table 1. New features added to Nostrum NoC Simulation Environment

Topologies: irregular topologies
Routing: both XY routing and routing tables
Evaluation metrics: performance, throughput, total number of packets, link utilization, communication energy consumption

The NoC simulator is developed as an extended version of the Nostrum NoC Simulation Environment (NNSE) and provides a number of new features which have been added to Nostrum. The simulator allows the construction of irregular topologies, and routing can be done either using XY routing or routing tables. It also provides more evaluation metrics, such as performance, average throughput, link utilization and communication energy consumption. Thus, the simulator allows an in-depth exploration of different NoC aspects at system level. The additional features are summarized in Table 1.

Fig. 2. The pseudocode of the NoC simulator

The simulator is developed in SystemC, and its pseudocode is depicted in Fig. 2. Resources, switches and channels are implemented as sc_modules. The application's threads are allocated on the resources, which implement the functions read and write. Resources, which are an abstraction of processing elements, provide the required interface for allocating application threads on them, and the required network interface to connect the specific resource to the network. The resources communicate via the channels; the way the resources are connected and the number of channels used are defined by the designer, so various topologies can be implemented. Each channel handles data independently, according to the bandwidth restriction. Every resource contains a switch, which implements the selected routing algorithm; by using adaptive routing, congestion avoidance can be implemented. The communication process between two resources is shown in Fig. 3. First, channels are opened between different resources to construct the specific topology. Then, the application's threads are implemented on the simulator. During simulation, threads trigger the resources to perform read and write functions, and thus packets are injected into the network. Switches and channels handle the packets according to their specific destination.
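Since the pseudocode of Fig. 2 is not reproduced in this text, the following minimal SystemC sketch illustrates how resources, switches and channels could be organized as sc_modules with a deterministic XY routing function. All names, the five-port switch and the FIFO channel model are our assumptions for illustration, not the authors' actual code.

```cpp
#include <systemc.h>

struct Packet {
    int dst_x, dst_y;          // destination resource coordinates
    sc_time sent_at;           // timestamp used for the average packet delay metric
};

// A channel is modelled here as a bounded FIFO; its depth plays the role of
// the explored buffer-size parameter.
typedef sc_fifo<Packet> Channel;

SC_MODULE(Switch) {
    int x = 0, y = 0;          // grid position, set during topology construction
    Channel* in[5]  = {};      // N, E, S, W inputs + local resource (index 4)
    Channel* out[5] = {};      // matching outputs

    // Deterministic XY routing: correct the X coordinate first, then Y;
    // port 4 delivers to the resource attached to this switch.
    int route(const Packet& p) const {
        if (p.dst_x > x) return 1;     // east
        if (p.dst_x < x) return 3;     // west
        if (p.dst_y > y) return 2;     // south
        if (p.dst_y < y) return 0;     // north
        return 4;                      // packet has arrived
    }

    void forward() {
        for (;;) {
            Packet p;
            for (int i = 0; i < 5; ++i)            // poll inputs in turn (round-robin)
                if (in[i] && in[i]->nb_read(p))
                    if (Channel* o = out[route(p)])
                        o->write(p);               // blocks when the output buffer is full
            wait(1, SC_NS);                        // one switching cycle
        }
    }

    SC_CTOR(Switch) { SC_THREAD(forward); }
};

SC_MODULE(Resource) {
    Channel* to_net   = nullptr;   // network interface towards the local switch
    Channel* from_net = nullptr;   // network interface from the local switch

    // write/read are the two functions the application threads invoke.
    void write_data(int dst_x, int dst_y) {
        Packet p{dst_x, dst_y, sc_time_stamp()};
        to_net->write(p);          // packetization of the payload would happen here
    }
    Packet read_data() { return from_net->read(); }

    SC_CTOR(Resource) {}
};
```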

[Fig. 3 diagram: an application thread's data is packetized by the resource's network interface and transmitted to the switch; the switch receives packets in a round-robin fashion, determines the output according to the specified routing algorithm, and stores each packet in the appropriate output buffer; the channel transmits the packet onward; at the destination resource, packets are received from the buffer and the data is reassembled.]

Fig. 3. The communication process between two threads in the NoC simulator
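To make the flow of Fig. 3 concrete, a hypothetical top level (again our illustration; the 2x2 size, the buffer depth and all names are assumptions) could open channels to build the topology and then start the simulation:

```cpp
int sc_main(int, char*[]) {
    const int N = 2;
    Switch*   sw[N][N];
    Resource* res[N][N];

    for (int x = 0; x < N; ++x)
        for (int y = 0; y < N; ++y) {
            sw[x][y] = new Switch(sc_gen_unique_name("sw"));
            sw[x][y]->x = x;
            sw[x][y]->y = y;
            res[x][y] = new Resource(sc_gen_unique_name("res"));
        }

    const int BUFFER_SIZE = 4;                       // an explored NoC parameter
    for (int x = 0; x < N; ++x)
        for (int y = 0; y < N; ++y) {
            // Local ports: bind each resource to its switch.
            Channel* up   = new Channel(BUFFER_SIZE);
            Channel* down = new Channel(BUFFER_SIZE);
            res[x][y]->to_net   = up;   sw[x][y]->in[4]  = up;
            res[x][y]->from_net = down; sw[x][y]->out[4] = down;

            // East-going mesh links; a torus, tree or irregular topology is
            // simply a different set of opened channels.
            if (x + 1 < N) {
                Channel* e = new Channel(BUFFER_SIZE);
                sw[x][y]->out[1] = e;   sw[x+1][y]->in[3] = e;
            }
            // ... remaining links and thread allocation elided ...
        }

    sc_start(sc_time(1, SC_MS));                     // run and collect the metrics
    return 0;
}
```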

The simulator provides the essential evaluation metrics to explore NoC characteristics. Average packet delay refers to the time that a packet needs to be transferred from the source to the destination through the network. Energy consumption is calculated as described in [12] and is affected by the switch architecture and the number of hops passed by packets.
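In other words, the average packet delay is a mean source-to-destination latency, and the energy charged to a packet grows with the number of hops it traverses. In our notation (the precise coefficients are those of the bit-energy model of [12]; the hop-based form below is only the usual shape of such models):

```latex
\mathrm{delay}_{\mathrm{avg}} \;=\; \frac{1}{N}\sum_{i=1}^{N}\bigl(t_i^{\mathrm{recv}} - t_i^{\mathrm{send}}\bigr),
\qquad
E_{\mathrm{packet}} \;\approx\; n_{\mathrm{hops}}\cdot E_{\mathrm{switch}} \;+\; (n_{\mathrm{hops}}-1)\cdot E_{\mathrm{link}}
```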

4 Experimental Results

We used the methodology presented in the previous section to evaluate optimal NoC platforms for network and multimedia applications. The NoC aspect we chose to explore in this work is the topology. The topologies we implemented are: 6x6 2D-Mesh, 6x6 Torus, and Binary Tree and Fat Tree of 31 nodes. Other NoC characteristics, such as buffer size, packet size and routing algorithm, can also be explored. The scheduling algorithm we chose for the third step of the design methodology is presented in [14]. The cost factors we used to evaluate each NoC topology are: throughput, link utilization and communication energy consumption.

The applications we used as benchmarks in this work are taken from the Embedded System Synthesis Benchmark Suite (E3S) [15]. The first one is the Consumer application, which contains the kernels of four multimedia applications (including JPEG compression/decompression). The second one is the Office application, which contains three kernels from the multimedia application domain. The last one is the Networking application, which is comprised of the kernels of three network protocols. The mapping of the IP cores of the E3S benchmarks has been done manually; the purpose of this work is to evaluate the system-level NoC design methodology we present and to prove the effectiveness of the NoC simulator we designed. The development of a mapping algorithm, which will be included in the design process, will be a part of our future work.

4.1 Methodology Applied to Multimedia Applications

We applied the proposed methodology to the Consumer and Office applications from the E3S benchmarks. The Consumer application consists of four multimedia kernels. We implemented the CTG of the application on the four different topologies, and the results are shown in Fig. 4.

Fig. 4. Normalized throughput, link utilization and communication energy consumption of the Consumer application

From Fig. 4, it is deduced that the optimal topology in terms of throughput is the Torus. Implementing the application on the Binary Tree topology, optimal link utilization is achieved, but communication energy consumption is increased by 38% compared to the Torus topology. The designer can choose the topology that satisfies the imposed design constraints of the NoC platform.

The Office application consists of three multimedia kernels. The evaluation metrics obtained by applying the proposed methodology are presented in Fig. 5. From Fig. 5, it is concluded that the Torus implementation leads to increased throughput, but also to high communication energy consumption. On the other hand, the 2D-Mesh implementation results in 41% lower energy consumption. Increased link utilization is achieved by implementing the application on the Fat Tree topology.


Fig. 5. Normalized throughput, link utilization and communication energy consumption of the Office application

4.2 Methodology Applied to Network Application We evaluated the design methodology to the Networking application from the E3S benchmarks. The application is composed of three network kernels, which are Open Shortest Path First (OSPF) protocol, packet flow and route lookup. The evaluation metrics obtained for each topology we explored are depicted in figure 6.

Fig. 6. Normalized throughput, link utilization and communication energy consumption of the Networking application

From Fig. 6, it is deduced that implementing the application on the Torus topology achieves 54% increased throughput. 37% increased link utilization is experienced when the Networking application is implemented on the Binary Tree. Finally, the Fat Tree topology results in 52% less communication energy consumption, compared to the 2D-Mesh.


The experimental results presented above show the importance of the exploration procedure included in our methodology. They prove that there is no general solution to the topology selection problem; instead, for every specific application, exploration is needed in order to identify the optimal topology that meets the design constraints.

5 Conclusion

For the efficient design of future embedded platforms, system-level methodologies and efficient tools are highly desirable. Towards this end, we have presented a systematic methodology for NoC design supported by a flexible NoC simulator. As shown by our case studies, using the proposed methodology we were able to choose the optimal communication architecture for a number of network and multimedia applications. Our future work addresses the systematic use and the further automation of our approach.

References
1. Benini, L., De Micheli, G.: Networks on Chips: A new SoC paradigm. IEEE Computer 35(1) (2002)
2. Millberg, M., Nilsson, E., Thid, R., Kumar, S., Jantsch, A.: The Nostrum backbone - a communication protocol stack for networks on chip. In: Proc. VLSI Design (2004)
3. Bjerregaard, T., Mahadevan, S.: A survey of research and practices of network-on-chip. ACM Computing Surveys (CSUR) 38(1) (2006)
4. Srinivasan, K., Chatha, K.S., Konjevod, G.: Linear programming based techniques for synthesis of network-on-chip architectures. In: Proc. ICCD, pp. 422–429 (2004)
5. Murali, S., De Micheli, G.: SUNMAP: A tool for automatic topology selection and generation for NoCs. In: Proc. DAC, San Diego, pp. 914–919 (2004)
6. Jalabert, A., Murali, S., Benini, L., De Micheli, G.: xpipesCompiler: a tool for instantiating application specific networks on chip. In: Proc. DATE (2004)
7. Ogras, U.Y., Marculescu, R.: Energy- and performance-driven NoC communication architecture synthesis using a decomposition approach. In: Proc. DATE, pp. 352–357 (2005)
8. Pinto, A., Carloni, L.P., Sangiovanni-Vincentelli, A.L.: Efficient synthesis of networks on chip. In: Proc. ICCD (2003)
9. Guerrier, P., Greiner, A.: A generic architecture for on-chip packet-switched interconnections. In: Proc. DATE, pp. 250–256 (2000)
10. Bainbridge, W., Furber, S.: CHAIN: A delay-insensitive chip area interconnect. IEEE Micro 22(5), 16–23 (2002)
11. Bjerregaard, T.: The MANGO clockless network-on-chip: Concepts and implementation. Ph.D. thesis, Information and Mathematical Modeling, Technical University of Denmark, Lyngby, Denmark
12. Ye, T.T., Benini, L., De Micheli, G.: Packetized on-chip interconnect communication analysis for MPSoC. In: Proc. DATE (2003)
13. The Cadence Virtual Component Co-design (VCC), http://www.cadence.com/company/pr/09_25_00vcc.html
14. Meyer, M.: Energy-aware task allocation for network-on-chip architectures. MSc thesis, Royal Institute of Technology, Stockholm, Sweden
15. Dick, R.P.: The embedded system synthesis benchmarks suite (E3S) (2002)

Fast and Accurate Embedded Systems Energy Characterization Using Non-intrusive Measurements

Nicolas Fournel1, Antoine Fraboulet2, and Paul Feautrier1

1 INRIA/Compsys, ENS de Lyon/LIP, Lyon F-69364, France
2 INRIA/Compsys, INSA-Lyon/CITI, Villeurbanne F-69621, France

Abstract. In this paper we propose a complete system energy model based on non-intrusive measurements. This model aims at being integrated in fast cycle accurate simulation tools to give energy consumption feedback for embedded systems software design. Estimations take into account the whole system consumption, including peripherals. Experiments on a complex ARM9 platform show that our model estimates are in error by less than 10% from the real system consumption, which is precise enough for source code application design, while simulation speed remains fast.

1 Introduction

With present day technology, it is possible to build very small platforms with enormous processing power. However, physical laws dictate that high processing power is linked to high energy consumption. Embedded platforms are mostly used in hand held appliances, and since battery capacity does not increase at the same pace as clock frequency, designers are faced with the problem of minimizing power requirements under performance constraints. The first approach is the devising of low-energy technologies, but this is outside the scope of this paper. The second approach is to make the best possible use of the available energy, e.g. by adjusting the processing power to the instantaneous needs of the application, or by shutting down unused parts of the system. These tasks can be delegated to the hardware; however, it is well known that the hardware's only source of knowledge is the past of the application; only software can anticipate future needs. Energy can also be minimized as a side effect of performance optimization. For instance, replacing a conventional Fourier transform by an FFT greatly improves the energy budget; the same can be said of data locality optimization, which aims at replacing costly main memory accesses by low-power cache accesses. The ultimate judge in the matter of energy consumption is measurement of the finished product. However, software designers, compilers and operating systems need handier methods for assessing the qualities of their designs and directing possible improvements. Hence designers need simple analytical models expressed in terms of software visible events like instructions,


cache hits and misses, peripheral activity and the like. There are several ways of constructing such models. One possibility is electrical simulation of the design; this method is too time-consuming for use on systems of realistic size. Another method is to interpolate/extrapolate from measurements on a prototype; this is the method we have applied in this work. The paper is organized as follows. After reviewing state of the art techniques in section 2, we present in section 3 a methodology to build complete platform energy consumption models oriented towards software development. Section 4 presents the resulting model for an ARM9 development platform. This section also validates our model on more significant pieces of code (multimedia applications), thanks to its implementation in a fast and cycle accurate simulation tool. We then conclude and discuss future work.

2 Related Works

Many works focus on the energy characterization of VLSI circuits. We can organize them using two main criteria: their level of hardware abstraction and the calibration method. For the first criterion, we can group the models into three main categories which are, by increasing level of abstraction, transistor/gate level models, architectural level models and finally instruction level models. Among these models there are usually three methods for building consumption models: the first is analytical construction, the second is simulation based, and the third is based on physical measurements. In transistor (gate) level models, all transistor (gate) state changes are computed to give an energy consumption approximation for a VLSI component. This method is highly accurate, but a complete description of the component is needed. The models built at this level of abstraction are generally reserved to hardware designers and are very long to simulate. At the upper level of abstraction, the architectural or RTL level, the system is divided into functional units. Each unit can be represented by a specific model adapted to its internal structure (e.g. bit-dependent or bit-independent models for Chen et al. [1]). To be more accurate, some works, like Kim et al. [5], subdivide a block into sub-blocks and apply a different model to each sub-block. This family of models makes it possible to extend a model to a complete platform, but the models proposed so far are not able to execute a full software system. The highest level is the instruction/system level of abstraction. At this level, models are based on events such as instruction executions ([13,7,9]). Tiwari et al. [13] propose to characterize the inter-instruction energy consumption, which represents the logic switching between two different instructions. Other works also take into account the logic switching due to data parameters [11]. The system considered in these models is generally composed of CPU, bus and memory. Only a few works focus on modeling a complete platform. Among them, EMSIM [12] is a simulator based on the Simunic et al. [10] model for the StrongARM SA110 energy characterization; this simulator poorly characterizes the peripherals. The SoftWatt simulator proposed in [4] uses a complete system simulator based on


SimOS to monitor a complex operating system and the interactions among the CPU, the memory hierarchy and hard disk operations. Their simulator is modified to include analytical energy models, and the output reports kernel and user mode energy consumption down to the granularity of operating system services. Data are sampled during simulation and dumped to log files at a coarser granularity than cycle level, leading to average power consumption figures. The closest work to ours, AEON's model [6], is a complete platform energy consumption model based on measurements, using internal energy counters of the simulator. The targeted platform is an 8 bit AVR micro-controller based sensor network node that does not include CPU pipelines, a complex memory hierarchy or peripherals. Our model allows the simulation of much more complex hardware architectures while being independent of simulator internals. As far as calibration methods are concerned, analytical models are generally based on manufacturers' data; e.g. in Simunic et al. [10] the model is built from datasheet information. Simulation based calibration needs full knowledge of the underlying architecture, which means that it needs a description of the low level hardware (VHDL or Verilog descriptions). Measurement based methods only need little information on the hardware, and works like [13,2] have shown that it is possible to extract internal unit consumption from system measurements. In this paper we propose a methodology for the construction of complete platform energy consumption models based on simple and non-intrusive measurements. The model is built at a level of abstraction close to the system level presented before, but is extended to the complete platform by coupling it with the architectural level principles presented by Kim et al. in [5]. We also take peripheral energy models and dynamic frequency and voltage scaling into account.

3 Model Construction Basics

We present in this section our methodology to build complete platform models. We first give more details on the structure and the parameters of the energy model. Section 4 will present the target dependent model parameters through a case study on an ARM9 based platform.

3.1 Model Structure and Parameters

Our choice among all the modeling methods presented in Sect. 2 is to build an architectural level model, in which the system is divided into its main functional blocks at the platform level, such as CPU, interconnection bus, memory hierarchy, peripherals, etc. The energy consumption of an application Eapp is obtained by adding all block consumptions Ebl. Each block can have its own energy consumption model. To have a platform model better suited for software development, we use an instruction level abstraction for the CPU. The CPU energy consumption ECPU is described in equation (1):

ECPU = Einsn + Ecache + EMMU    (1)


The energy consumption is the sum of the energy consumed by instruction execution, plus the cache and MMU overhead consumptions, plus the consumption of all other blocks of the platform:

Eapp = ECPU + Σ Ebl    (2)

This model aims at being integrated in a full platform cycle accurate simulation tool. The most interesting way of writing the model for this purpose is to define a time slot energy consumption. The chosen time slot is the CPU instruction execution. There are two reasons for choosing this time reference. The first is that it is the finest time reference, since the CPU generally has the highest clock frequency in an embedded system. The second is that interrupt requests, the only means for the hardware peripherals to interact with the software, are managed at the end of instruction execution. From a software point of view, there is no need for a finer time reference to report hardware events more precisely. The model can be rewritten in a form where the consumption of the CPU and of the other blocks is reported for the currently executed instruction. The notation E* is kept for overall application consumptions; for the sake of simplicity, the consumptions at instruction level of granularity are also noted E*. The new model formula is expressed in the following equation:

Eslot = ECPU + Σblocks Ebl    (3)

The last peculiarity of this model is the measurement based data collection. As we only get global measurements of the platform consumption, we can foresee that the base consumptions of the individual blocks will not be easily distinguishable. Once the embedded system is put in its laziest state (idle, with all possible units powered off), the resulting consumption is considered as a base consumption regrouping the base consumptions of all powered peripherals. Obviously, a part of this consumption is static power dissipation. We call this term Ebase; it is important to note that this consumption is attributed to the instruction currently executed on the CPU. As expressed in equation (5), it is proportional to the instruction length linsn in clock cycles. Equation (3) then becomes equation (4):

Eslot = Ebase + ECPU + Σ Ebl    (4)

Ebase = linsn × Ec_base    (5)

The CPU and other block consumptions are then expressed as overheads against the idle state. As described in equation (1), the CPU energy consumption is given by the executed instruction energy cost. This model can be simplified by regrouping instructions in classes, as proposed in [7]. As far as the other blocks are concerned, we can expand them into bus, memories and other peripherals. This is interesting since the bus and memories are subject to events generated by the processor, such as memory writes. The peripherals are then modeled by state machines giving the energy consumption of the peripheral during the time slot.
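A minimal sketch of how equations (1)-(5) compose during simulation is given below. All names and cost numbers are our own placeholders (the base cost merely echoes the roughly 10.8 nJ/cycle found for the case-study platform in section 4); the real calibrated values come from the benchmarks described next.

```python
from collections import namedtuple

# One executed instruction as seen in a cycle-accurate trace (assumed
# trace format, for illustration only).
Insn = namedtuple("Insn", "cls length_cycles cache_events mmu_events periph")

EC_BASE = 10.8   # assumed base platform energy per CPU cycle, in nJ

COSTS = {
    "insn":  {"alu": 16.7, "ldst": 25.0},          # per-class cost (nJ)
    "cache": 0.4,                                   # per cache event (nJ)
    "mmu":   0.3,                                   # per MMU event (nJ)
    "state": {"timer_on": 0.01, "uart_tx": 2.0},    # peripheral states (nJ)
}

def slot_energy(insn):
    """Energy of one instruction slot, following equation (4)."""
    e_base = insn.length_cycles * EC_BASE                    # equation (5)
    e_cpu = (COSTS["insn"][insn.cls]                         # Einsn
             + COSTS["cache"] * insn.cache_events            # Ecache
             + COSTS["mmu"] * insn.mmu_events)               # EMMU, eq. (1)
    e_blocks = sum(COSTS["state"][s] for s in insn.periph)   # other blocks
    return e_base + e_cpu + e_blocks

def app_energy(trace):
    """Eapp: accumulate slot energies over the whole trace, cf. eq. (2)."""
    return sum(slot_energy(i) for i in trace)

trace = [Insn("alu", 1, 0, 0, ("timer_on",)),
         Insn("ldst", 3, 1, 0, ("timer_on",))]
print(app_energy(trace))   # total energy in nJ for this two-slot trace
```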


The last step in model construction consists in defining all possible parameters for these components. Due to the limited information available, the developers will not necessarily know the behavior of the intra-block logic. The parameters for the CPU are already selected, since it is modeled through instruction consumptions. The same can be done for cache, MMU and even co-processor consumptions. The parameters for the other blocks are limited to behavioral parameters (e.g. a UART sending a byte) and their states, such as operating modes. Each energy cost in this model is a function of the running frequency and power supply voltage, which allows the dynamic voltage and frequency scaling capabilities of the platform to be modeled. An example of this is presented in the next section.

4 Model Construction Case Study

In this section we propose an example application of our methodology. The methodology was applied to an ARM based development board. This platform uses an ARM922T and the usual embedded systems peripherals (e.g. UART, timers, network interface) on the same chip. Our hardware architecture exploration reveals that the platform has three distinct levels of memory: a cache, a scratchpad and main memory. All peripherals are accessible through two levels of AMBA bus. We give details about the construction of the energy consumption model for this platform, then check the accuracy of the resulting model.

4.1 Methodology Application

The complete platform modeling method presented in section 3.1 is applied to our ARM9 platform in this section. The measurement setup used for these experiments is close to the one depicted in [9]: we used a digitizing oscilloscope, with the shunt resistor replaced by a current probe, plus a voltage probe.

Calibration benchmarks. We built benchmarks to calibrate our model, more precisely our block models. The hardware exploration gives us the main blocks to be modeled, namely the CPU, the different bus levels, the memory levels, and the other peripherals such as the UART, interrupt controller or timers. For example, the selected parameters for our CPU model are the CPU instructions, or possibly classes of instructions, plus the cache and MMU activities. We thus built benchmarks to evaluate the cost of possible parameters, in order to select only the relevant ones. Here are examples of benchmarks that were used, and their target event:

• loop-calibration: measurement loop overhead benchmark. By running an empty loop, we can estimate the loop overhead.
• insn-XXX: comparison of CPU instruction execution costs (add, mov, ...). The target instruction is executed many times inside a loop.
• XXX-access: calibration of the costs of each bus level (AHB1/2) and memory level (cache, scratchpad or main memory), depending on the address accessed.


Table 1. Benchmarks results for simple operation energy calibration

bench name           length   energy (nJ)   error (pJ)
loop-calibration        4        69.084       5.1777
insn-nop                1        16.747       1.2884
AHB1-access             6       101.33        7.7132
AHB2-access            18       300           22.998
Dcache-access           1        17.146       1.3007
mem-access             40       775.44       54.551
spm-access              8       131.72       10.168
timer-test on(nop)      1        16.754       1.2857

• timer-test: example of peripheral energy characterization; this benchmark allows us to measure the timer power consumption. It is subdivided into two benchmarks, one in which the timer is stopped and one in which the timer is running. The structure of the loop is the same as in the insn-XXX benchmark, with a nop instruction.

Calibration results. Example benchmark energy results are listed in Table 1; full results are available in [3]. For each benchmark, these results give the length of the calibrated event in CPU clock cycles (second column), the per-event raw energy cost measured on the complete platform (third column) and the measurement error (fourth column). The energy costs reported here give the consumption of the complete platform for a full event execution. These raw costs have to be refined to get the final parameters. As an example, the scratchpad memory access benchmark result (spm-access) gives the energy consumption of the CPU executing a load instruction, of the bus conveying the load request and response, and finally of the scratchpad memory itself. The bus access cost includes the register accesses in the targeted peripheral, since it is impossible to dissociate their costs. By removing the consumption of the CPU (one load and seven nops) and the bus consumption, we finally obtain the scratchpad memory access cost. Experiments reported in [3] show that the scratchpad memory does not consume more energy than a register accessed via the bus. Other model simplifications are possible. For example, the CPU cache models are simplified by taking into account only the memory access bursts caused by misses, since the remaining overhead can be neglected. The basic model presented in section 3.1 can be rewritten using the simplifications obtained by calibration. We found that most instructions have the same energy consumption as long as they stay inside the CPU. Currently only the ARM32 instruction set is modeled; the Thumb (16 bit) instruction set can be modeled using the same benchmark methodology. In our setup it is not possible to isolate the instruction cache consumption, which is lumped with the instruction consumption; ICache misses can be modeled as memory accesses. We finally obtain a model in which CPU instructions are grouped in two classes: arithmetic/logic intra-CPU instructions, and load/store instructions. A memory load access is modeled as a load instruction, plus a bus overhead, plus a memory overhead.
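The decomposition step can be illustrated numerically with the raw costs of Table 1. How the cycles are costed is our simplifying assumption (each filler cycle is priced as one nop); the paper only summarizes its refinement procedure, so treat this as a back-of-the-envelope check rather than the authors' exact computation:

```python
# Raw per-event energies from Table 1 (nJ).
nop  = 16.747   # insn-nop: 1 cycle
ahb1 = 101.33   # AHB1-access: 6 cycles
spm  = 131.72   # spm-access: 8 cycles

# Assumption: the access benchmarks behave like one load plus filler
# nops, so the CPU share is approximated by nop-equivalent cycles.
bus_overhead = ahb1 - 6 * nop                 # ~0.85 nJ beyond 6 nop-cycles
spm_overhead = spm - 8 * nop - bus_overhead   # what the SPM cell itself adds
print(f"bus ~ {bus_overhead:.2f} nJ, spm ~ {spm_overhead:.2f} nJ")
```

The scratchpad term comes out slightly negative, i.e. indistinguishable from zero at this level of approximation, which is consistent with the observation above that a scratchpad access costs no more than a register access over the bus.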

[Fig. 1 plot: per-event energy (J) versus clock divisor, from 1 to 17, for the AHB2-reg-write, AHB1-reg-write, loop-cal, insn-cmp_mul and insn-cmp_nop benchmarks]

Fig. 1. Multiple frequencies experiments: this figure shows that the energy per event increases linearly with the clock period

Peripheral energy consumptions are taken into account thanks to state machines that give their consumption during instruction execution. The final model is written in equation (6):

Eslot = Ebase + Einsn + Ebus_access + Emem + Σ Eperiph_state    (6)

Eslot is the energy consumption of the instruction execution time slot, Einsn is the cost of the instruction given by its class cost, Ebus_access is the bus overhead cost for load or store instructions, and Emem is the overhead for memory accesses. The last term is the sum of the energy overheads of the peripheral states. These costs are all overhead costs: the full consumption of a peripheral, for example, is given by its base energy cost included in Ebase plus this overhead.

Frequency Scaling. The model presented so far is valid for full speed software execution. The Integrator CM922T has frequency scaling capabilities but no dynamic voltage scaling (DVS) capabilities; hence, when we reduce the frequency we cannot decrease the energy consumption. Repeating five benchmarks at different frequencies yields the curves of Fig. 1. The figure represents the per-event energy values for the five benchmarks as a function of the clock divisor r = fref / f, where fref is the nominal frequency (198 MHz here). These curves show that the energy per event increases when the frequency is decreased, and this may seem counter-intuitive. To understand these results, observe first that a given event, e.g. the execution of some specific instruction, entails an almost constant number of bit flips, and that each flip uses a fixed amount of energy. Hence, to a first approximation, and in the absence of voltage scaling, the energy for a given event should be constant. However, in our platform, frequency scaling acts only on the processor and the Excalibur embedded peripherals; the consumption of the other peripherals and external memories is not affected. Hence the addition of a parasitic term which is roughly proportional to the duration of the event, i.e. inversely proportional to the frequency. This is clearly the case for the curves of Fig. 1.


Table 2. Linear regression from the curves of Fig. 1, based on formula (7)

Benchmark name     Erp_base (nJ)   Emc (nJ)   error (pJ)
insn-mul               10.91         26.37       572.36
loop-calibration       10.52         19.22       258.90
insn-nop               10.54          6.35       105.61
access-AHB1            11.06         36.72      1085.37
access-AHB2            11.06        106.32      3431.46

We must underline that all five benchmarks generate activity in the modified clock domain (the CPU), but not in the remaining part of the platform. On top of that, we kept all peripherals of the modified clock domain in an idle state. Consider the event energy cost Eevt, where an event can be an instruction execution or a bus access, for example. In this consumption we can identify two types of sources: the energy due to the modified clock domain, Emc, which is constant, and the energy due to the remaining part of the platform, Erp_base. Their relation in the total consumption of an event is given by:

Eevt = Erp_base × linsn × r + Emc    (7)

The first term depends on the frequency ratio r and the instruction length linsn, whereas the second does not. Linear regressions on the results of Fig. 1 are shown in Table 2. As shown in this table, equation (7) gives a good explanation of the clock frequency variation experiments. These results give us an estimation of what we can consider as base energy, which does not change with software execution. The last two columns give the events' real consumption and the regression error. The base energy can be approximated by the mean value of 10.82 nJ per cycle (with a standard deviation of ±2.6 × 10⁻²).
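The regression itself is straightforward; the sketch below assumes ordinary least squares (the paper does not state which fitting method was used) and synthetic sample points consistent with the insn-nop row of Table 2:

```python
def fit_eq7(rs, energies, l_insn):
    """Least-squares fit of Eevt = Erp_base * l_insn * r + Emc (eq. 7)."""
    xs = [r * l_insn for r in rs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(energies) / n
    slope = (sum((x - mx) * (e - my) for x, e in zip(xs, energies))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx          # (Erp_base, Emc)

# Synthetic per-event energies (nJ) for a 1-cycle instruction at several
# clock divisors; the real inputs would be the measurements of Fig. 1.
rs = [1, 3, 5, 9, 17]
energies = [10.54 * r + 6.35 for r in rs]
print(fit_eq7(rs, energies, l_insn=1))     # ~ (10.54, 6.35)
```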

4.2 Model Validation

We describe here our accuracy test experiments. Our model is implemented in a simulator, and its results were compared to physical measurements.

Simulator Integration. Our model is implemented in a simulation tool suite composed of two simulators. The first is a complete platform functional simulator in charge of generating a cycle-accurate execution trace of the software. This trace reports all executed instructions and all peripheral activities (state changes). This first step allows software developers to functionally debug their applications and supplies them with the material for the second simulation step. To fulfill this first task, we implemented the behaviour of the Integrator platform in the open source simulator Skyeye, which we upgraded to cycle accurate trace generation. The second tool is the energy simulation tool. This simulator implements the model presented in the previous


Table 3. Simulator results: execution time and energy consumption measured on real hardware (second and third columns) and simulated (fourth and fifth columns). The last two columns give the error percentage of the simulation.

             Measured values           Simulated values          Error
Benchmark    cycles     energy (J)     cycles     energy (J)     cycles (%)   energy (%)
jpeg          6916836   1.142440e-01    6607531   1.037940e-01     -4.4         -9.1
jpeg2k        7492173   1.268535e-01    7663016   1.200488e-01     +2.2         -5.3
mpeg2        13990961   2.335522e-01   14387358   2.208065e-01     +2.8         -5.4

section. Its main task is to compute the model parameters from the cycle-accurate execution trace. It accumulates all computed energies and reports them in an energy profile file for source level (C code) instrumentation and annotation.

Validation Methodology. To check the accuracy of the resulting model, we compare the consumption estimated by the model, as implemented in our tool, to physical measurements on the real platform. The test applications chosen for this validation are widespread multimedia applications: JPEG, JPEG2000 and MPEG2. The implementations of these three applications are standard Linux libraries; hence they use operating system services and standard libc functions. All experiments could have been made with Linux (or even uClinux), since the simulation tools are complete enough to run these operating systems. Because of the limited measurement duration, we decided to replace these heavy OSes by a lightweight one, Mutek [8]: the Linux hardware abstraction layer makes interrupt request management too long to allow a reasonably sized image to be decoded within our measurement time window. The three applications are executed in the simulation tools to get estimations of their execution.

Accuracy. The results of the model estimations and the physical measurements are presented in Table 3. The second and third columns report the physical measurement results, in terms of execution duration in CPU clock cycles and of energy consumption in Joules. The fourth and fifth columns give the same information for the simulation results. Finally, the last two columns give the percentage error of the simulation results against the physical measurements on the target hardware platform. These results show that a 10% error rate can be achieved by our simple complete platform energy model. The estimations are obtained in roughly less than a minute (25 s for the first simulation plus 20 s for the second). We think that an error rate of 10% is largely acceptable in regard of the simulation time.

5 Conclusion

In this paper we have explained how an accurate energy consumption model for a full embedded system can be built from external measurements and microbenchmarks. Our methodology requires a prototype platform of comparable


technology. Quantitative energy data are gathered at the battery output and are translated into per-instruction energy figures by data analysis. The resulting model is thus driven by the embedded software activity and can be used with a simulation execution trace as input. It is thus possible to very easily add an energy estimator to a software functional simulator, so as to get feedback at the source level. As simulation tool modifications are kept to a minimum, the simulation speed is not impacted. Consumption data clearly identify power hungry operations, thus offering guidelines for software design tradeoffs. The model built on an ARM9 based development board using this methodology achieved an error rate of less than 10% at the source level, which is acceptable given its simplicity of implementation and its fast running time.

References
1. Chen, R.Y., Irwin, M.J., Bajwa, R.S.: Architecture-level power estimation and design experiments. ACM TODAES 6(1), 50–66 (2001)
2. Contreras, G., Martonosi, M., Peng, J., Ju, R., Lueh, G.-Y.: XTREM: a power simulator for the Intel XScale core. In: LCTES '04, pp. 115–125 (2004)
3. Fournel, N., Fraboulet, A., Feautrier, P.: Embedded Systems Energy Characterization using non-Intrusive Instrumentation. Research Report RR2006-37, LIP - ENS Lyon (November 2006)
4. Gurumurthi, S., Sivasubramaniam, A., Irwin, M.J., Vijaykrishnan, N., Kandemir, M., Li, T., John, L.K.: Using complete machine simulation for software power estimation: the SoftWatt approach. In: International Symposium on High Performance Computer Architecture (2002)
5. Kim, N.S., Austin, T., Mudge, T., Grunwald, D.: Challenges for Architectural Level Power Modeling. In: Power Aware Computing. Kluwer Academic Publishers, Dordrecht (2001)
6. Landsiedel, O., Wehrle, K., Götz, S.: AEON: Accurate Prediction of Power Consumption in Sensor Nodes. In: SECON, Santa Clara (October 2004)
7. Lee, M.T.-C., Fujita, M., Tiwari, V., Malik, S.: Power analysis and minimization techniques for embedded DSP software. IEEE Transactions on VLSI Systems (1997)
8. Pétrot, F., Gomez, P.: Lightweight Implementation of the POSIX Threads API for an On-Chip MIPS Multiprocessor with VCI Interconnect. In: DATE 03 Embedded Software Forum, pp. 51–56 (2003)
9. Russell, J.T., Jacome, M.F.: Software power estimation and optimization for high performance, 32-bit embedded processors. In: International Conference on Computer Design (October 1998)
10. Simunic, T., Benini, L., De Micheli, G.: Cycle-accurate simulation of energy consumption in embedded systems. In: 36th Design Automation Conference, May 1999, pp. 867–872 (1999)
11. Steinke, S., Knauer, M., Wehmeyer, L., Marwedel, P.: An accurate and fine grain instruction-level energy model supporting software optimizations. In: PATMOS (2001)
12. Tan, T.K., Raghunathan, A., Jha, N.K.: EMSIM: An Energy Simulation Framework for an Embedded Operating System. In: ISCAS 2002 (May 2002)
13. Tiwari, V., Malik, S., Wolfe, A., Lee, M.: Instruction level power analysis and optimization of software. Journal of VLSI Signal Processing (1996)

A Flexible General-Purpose Parallelizing Architecture for Nested Loops in Reconfigurable Platforms*

Ioannis Panagopoulos1, Christos Pavlatos1, George Manis2, and George Papakonstantinou1

1 Dept. of Electrical and Computer Engineering, National Technical University of Athens, Zografou 15773, Athens, Greece
{ioannis,cpavlatos,papakon}@cslab.ece.ntua.gr
http://www.cslab.ece.ntua.gr
2 Dept. of Computer Science, University of Ioannina, P.O. Box 1186, Ioannina 45110, Giannena, Greece
[email protected]

Abstract. We present an innovative general purpose architecture for the parallelization of nested loops in reconfigurable architectures, in the effort of achieving better execution times while preserving design flexibility. It is based on a new load balancing technique which distributes the initial nested loop's workload to a variable, user-defined number of Processing Elements (PEs) for execution. The flexibility offered by the proposed architecture rests on "algorithm independence", on the possibility of on-demand addition/removal of PEs depending on the performance-area tradeoff, on dynamic reconfiguration for handling different nested loops, and on its availability for any application domain (design reuse). An additional innovative feature of the proposed architecture is the hardware implementation of the dynamic generation of the loop indices of the loop instances that can be executed in parallel (dynamic scheduling), and the flexibility this implementation offers. To the best of our knowledge this is the first hardware dynamic scheduler proposed for fine grain parallelism of nested loops with dependencies. Performance estimation results and limitations are presented both analytically and through the use of two case studies from the image processing and combinatorial optimization application domains.

1 Introduction The platform based design methodology has been proven to be an effective approach for reducing the computational complexity involved in the design process of embedded systems [1]. Reconfigurable platforms consist of several programmable components *

* This work is co-funded by the European Social Fund (75%) and National Resources (25%), under the Operational Program for Educational and Vocational Training II (EPEAEK II), and particularly the Program PYTHAGORAS II.


(microprocessors) interconnected to hardware and reconfigurable components. Reconfigurable components allow the flexibility of selecting specific computationally intensive parts (mainly nested loops) of the initial application to be implemented in hardware during hardware/software partitioning [2], in the effort of achieving the best possible increase in performance while obeying specific area, cost and power consumption design constraints. The most straightforward approach to speeding up the execution of nested loops is to implement the nested loop in hardware and place it in the platform, either on an FPGA or as a dedicated ASIC (Application Specific Integrated Circuit) [3][4]. Another common approach also entails the migration of the nested loop to hardware, but prior to that applies several algorithmic transformations (usually in the effort of parallelizing its execution) to further improve its performance; it has been mainly influenced by similar techniques exploited in general purpose computers [5][6][7]. In the second approach a scheduling algorithm is needed to act as a controller indicating the loop instances that will be executed in parallel at every control step. Existing scheduling algorithms are classified into two main categories: static and dynamic. Static algorithms [8] are executed prior to the loop's execution and produce a mapping of loop instances' execution to specific control steps. Static algorithms require space in memory for storing the execution map. This makes them inappropriate for reconfigurable embedded systems, where memory utilization needs to be minimized and the system needs to handle nested for-loops with different communication and computation times. In contrast, dynamic algorithms attempt to use the runtime state information of the system in order to make informed decisions on balancing the workload, and are executed during the computation of the nested loop. This makes them applicable to a much larger spectrum of applications, and since they neither require a static map of the execution sequence nor are limited to a specific loop, they are the best candidates for loop parallelization in embedded systems. An important class of dynamic scheduling algorithms are the self-scheduling schemes presented in [9]. Our proposed architecture is based on the second approach and is applied to grids. We target fine grain parallelism and apply the proposed scheduling algorithm on FPGAs. To the best of our knowledge this is the first implementation of a dynamic scheduling algorithm which handles data dependencies on embedded reconfigurable systems. The presented architecture allows the parallelization of the initial nested loop through the use of a number of interconnected PEs implemented in hardware that work in parallel and are coordinated by a main execution controller. It does not require any application specific algorithmic transformation of the nested loop or architectural transformation of the hardware implementation, as is the case in most systolic approaches. Grids are a widely known family of graphs and have been applied to a variety of diverse application domains such as arithmetic analysis (e.g. convolution, solving differential equations), image and video processing (e.g. compression, edge detection), digital design, fluid dynamics etc. Our load balancing architecture addresses design flexibility, optimization and generality in those domains.
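To fix ideas, the behaviour of such a dynamic scheduler can be mimicked in software. The sketch below is our illustration only (the paper's scheduler is a hardware unit): it handles a doubly nested loop over an N1 x N2 grid with the uniform dependencies (1,0) and (0,1), dispatching at every control step up to n_pe iterations whose predecessors have completed:

```python
from collections import deque

def dynamic_schedule(N1, N2, n_pe):
    """Dispatch ready loop instances to n_pe PEs, one batch per step."""
    done, steps = set(), []
    ready = deque([(0, 0)])            # (0,0) has no predecessors
    while ready:
        batch = [ready.popleft() for _ in range(min(n_pe, len(ready)))]
        steps.append(batch)
        done.update(batch)
        for (i, j) in batch:           # completing (i,j) may free successors
            for (ni, nj) in ((i + 1, j), (i, j + 1)):
                if (ni < N1 and nj < N2
                        and (ni, nj) not in done and (ni, nj) not in ready
                        and (ni - 1 < 0 or (ni - 1, nj) in done)
                        and (nj - 1 < 0 or (ni, nj - 1) in done)):
                    ready.append((ni, nj))
    return steps

print(len(dynamic_schedule(4, 4, n_pe=2)))   # control steps for a 4x4 grid
```

No execution map is stored ahead of time: readiness is decided at run time from completion state, which is precisely what makes dynamic schemes attractive for memory-constrained reconfigurable systems.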
The proposed architecture is presented as follows: Initially, we establish the notation and definitions that will be used throughout this paper and present the theoretical framework that governs our approach (Section 2). Then, we provide a general overview of the


architecture and pinpoint the issues that need to be tackled and the questions that need to be answered to establish the performance merit of the approach (Section 3). Section 4 deals with those issues and presents the way they are resolved. Finally, in Section 5, two case studies are presented that evaluate the actual speed-up gained by the application of the proposed architecture. Conclusions and future work follow in Section 6.

2 Definitions-Notation

A perfectly nested loop has the general representation illustrated in Listing 1. for (i1=0;i1

[...]

ISi = I0 Wi e^(−(ΣVj + Vt0 − ηVi + γΣVj)/(nVT)) [1 − e^(−Vi/VT)]    (3)

The transistor voltages can be evaluated in three different situations. The analysis assumes that Vdd >> Vi, which drops out all the Vi terms, and it also considers the fact that Vi >> VT, so that the [1 − e^(−Vi/VT)] term can be ignored. The first situation is represented by the voltage V1 in Fig. 1. In this case, it is possible to combine every transistor connected to that node by series-parallel association. The terms Wabove and Wbelow in equation (4) represent the effective widths of the transistors above and below the node Vi, respectively. For this condition, Vi is given by

Vi = (ηVdd + nVT ln(Wabove / Wbelow)) / (1 + 2η + γ)    (4)

The second situation is represented by the voltage V2 in Fig. 1. In this condition, it is not possible to make series-parallel associations between the transistors connected at the i-index node. The term Vabove denotes the voltage on the transistors above the node Vi. For this state, the voltage Vi is given by

Vi = nVT ln( Σ(Wabove e^(ηVabove/(nVT))) / Wbelow ) / (1 + η + γ)    (5)

Finally, the third situation is illustrated by the voltage V3 in the same arrangement. This case only happens at the bottom transistors, and the analysis cannot assume Vi >> VT, so the term (Vi/VT) must not be ignored. To simplify the mathematical calculation, the term e^(−Vi/VT) in (1) can be approximated by (1 − Vi/VT). Then Vi is obtained from equation (6), where C = 1 + η + γ:

(C/n)(Vi/VT) + ln(Vi/VT) = ηVabove/(nVT) + ln(Wabove/Wi) + ln(Vabove/VT)    (6)
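As a numeric illustration (our sketch, not code from the paper), equations (3) and (4) can be evaluated for a two-transistor off-stack with the 130nm parameters reported in section 4. Vt0 is not given in the excerpt, so the value below is an assumed placeholder, and the current is left in units of I0 since the absolute scale is not needed to compare stack configurations:

```python
import math

VDD, VT0 = 1.2, 0.35        # VT0 is an assumed placeholder threshold (V)
ETA, GAMMA, N = 0.078, 0.17, 1.45
VTH = 0.0321                 # thermal voltage kT/q at 100 C (V)

def v_mid(w_above, w_below):
    """Equation (4): internal node voltage of a series off-stack."""
    return (ETA * VDD + N * VTH * math.log(w_above / w_below)) \
           / (1 + 2 * ETA + GAMMA)

def i_sub(w, v_ds, v_src):
    """Equation (3): off-device subthreshold current, in units of I0."""
    expo = -(v_src + VT0 - ETA * v_ds + GAMMA * v_src) / (N * VTH)
    return w * math.exp(expo) * (1 - math.exp(-v_ds / VTH))

v1 = v_mid(0.4, 0.4)                 # equal widths: eta*Vdd/(1+2*eta+gamma)
print(v1)                            # ~0.07 V at the internal node
print(i_sub(0.4, VDD - v1, v1))      # current through the top transistor
```

The stacking effect is visible directly: the roughly 70 mV source voltage both lowers the gate-source voltage and raises the threshold through the body effect, which is why two stacked off-transistors leak far less than one.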


3.1 Subthreshold Leakage in Non-series-parallel Networks

Standard CMOS gates derived from logic equations are usually composed of series-parallel (SP) device arrangements. When a Wheatstone bridge configuration is observed in the transistor schematic, as in some BDD-based networks [11-13], a non-series-parallel topology is identified, as depicted in Fig. 2b. The proposed model, discussed above, can be used to calculate the voltage across each single transistor, accurately estimating the leakage current. When the model is applied to a non-series-parallel configuration, it is sometimes difficult to calculate the voltage across certain transistors, as occurs in Fig. 2b for the transistor controlled by 'c'. In this case, the transistor receiving 'c' may be ignored at first, until the voltage at one of its terminals has been evaluated; for evaluating the other terminal, the device is then included. A similar procedure is suitable for any kind of non-SP network.

Fig. 2. SP (a) and non-SP (b) transistor arrangements of the same logic function

3.2 On-Transistors in Off-Networks

The previous analysis considers only off-networks composed exclusively of transistors that are turned off. In most cases, transistors that are turned on can be treated as ideal short-circuits, because the voltage drop across such devices is some orders of magnitude smaller than the voltage drop across the off-transistors. However, in the case of NMOS transistors switched on and connected to the power supply Vdd, the voltage drop Vdrop across them should be taken into account. In the leakage current analysis, this voltage drop is important when the transistor stack presents only one off-device at the bottom of the stack, as depicted in Fig. 3. In the proposed model, the term Vdd − ΣVj in equation (2) is then replaced by Vdd − Vdrop − ΣVj.

In arrangements with more than one off-transistor in series, the on-devices can accurately be treated as having zero voltage drop (i.e. as ideal short-circuits). A similar analysis is valid for PMOS transistors in off-networks when they are connected to the ground node.


Fig. 3. Influence of on-transistor in off-stack current

4 Results

In order to validate this model, the results obtained analytically were compared to Hspice simulation results, considering commercial 130nm CMOS process parameters and an operating temperature of 100°C. It is known that down to 100nm processes the subthreshold currents represent the main leakage mechanism, while for sub-100nm technologies the model presented herein should be combined with gate oxide leakage estimation, already proposed by other authors [3][9]. The parameters used in the analytical evaluation were: Vdd = 1.2 V, Vdrop = 0.14 V, I0 = 20.56 mA, η = 0.078, γ = 0.17, n = 1.45, and W = 0.4 μm. Initially, transistors with equal sizing were used to simplify the analysis. The leakage current was calculated and correlated with Hspice results for several pull-down NMOS off-networks, depicted in Fig. 4. The results presented in Table 1 show good agreement between the model and the simulation data, with an absolute average error of less than 10%. It is interesting to note that the static currents in networks (f), (g) and (h) of Fig. 4, not treated by previous models, are accurately predicted. The main difference is observed for structures (c) and (d), where the model assumes Vi < VT and the term e^(−Vi/VT) is replaced by (1 − Vi/VT).

Fig. 4. Pull-down NMOS networks (‘•’ and ‘ο’ represent on- and off-device, respectively)


Tables 2 and 3 present the results for the NMOS trees of Fig. 4(f) and Fig. 4(g), respectively. In these tables, the input dependent leakage current is evaluated for all input combinations. In some cases, different input vectors result in equivalent off-device arrangements; for those, the Hspice values are given as the minimum and maximum obtained over the set of equivalent input states. Moreover, the previous model presented in [5] was also calculated for these logic states. Note that the first input vector in both cases, which represents the entire network composed of off-switches, is not treated by the previous model proposed in [5].

Table 1. Subthreshold leakage current (nA) related to the off-networks depicted in Fig. 4

Network   HSPICE   Model   Diff. (%)
(a)       1.26     1.26    0.00
(b)       6.58     6.60    0.30
(c)       0.69     0.75    8.70
(d)       0.72     0.77    6.94
(e)       1.29     1.28    0.78
(f)       1.29     1.28    0.78
(g)       2.52     2.53    0.40
(h)       2.56     2.54    0.79

Table 2. Input dependence leakage estimation (nA) in the logic network from Fig. 4(f)

Input state (abcde)   HSPICE results*   Proposed model   Previous model [5]
00000                 1.29              1.28             -
00001                 9.71              9.65             9.65
00010                 1.43              1.34             1.34
00011                 25.00             25.02            25.02
00100 a               1.36/1.37         1.30             1.31
00101 b               14.91/15.14       14.94            16.69
00110 c               6.30/6.73         6.60             8.34

* The HSPICE value is given as min./max. currents over the equivalent vectors. Equivalent vectors: a. [01000, 01100]; b. [01001, 01101]; c. [01010, 01110, 10000, 10010, 10100, 10110, 11000, 11010, 11100, 11110].

Table 3. Input dependence leakage estimation (nA) in the logic network from Fig. 4(g)

Input state (abcde)   HSPICE results*   Proposed model   Previous model [5]
00000                 2.52              2.53             -
00001 a               10.54/10.54       10.76            10.76
00011 b               16.68/16.68       16.68            16.68
00100                 2.52              2.52             2.52
01000                 7.90              7.90             7.91
01010 c               21.05/21.05       21.54            24.02
01100 d               12.55/13.15       13.20            16.68
10000                 7.90              7.90             9.65

* The HSPICE value is given as min./max. currents over the equivalent vectors. Equivalent vectors: a. [00010]; b. [00101, 00110, 00111]; c. [10001]; d. [10100, 11000, 11100].


Basically, different values are obtained from the two methods when on-transistors are considered in the off-networks, with the proposed model providing a more accurate correlation with the electrical simulation results. Fig. 5 shows the average subthreshold leakage current of the NMOS networks illustrated in Fig. 4(f) and Fig. 4(g). As discussed before, the previous model from [5] cannot estimate the subthreshold current for the first input vector in both cases, so that vector is not considered in the average static current calculation. Unlike the previous model, the proposed one presents results close to the Hspice simulations; the main reason is its handling of on-transistors in off-networks.

Fig. 5. Average subthreshold leakage current for pull-down networks in Fig. 4(f) and Fig. 4(g)

In terms of combinational circuit static dissipation analysis, the technology mapping task divides the entire circuit into multiple logic gates or cells. They can thus be treated separately for the leakage estimation, since the input state of each cell is known from the primary input vector of the circuit. A complex CMOS logic gate, whose transistor sizes were determined using the Logical Effort method [16], is depicted in Fig. 6. Table 4 presents the comparison between the electrical simulation data and the model calculation, validating the method for sized gates. Finally, the proposed model has been verified against variations of the power supply voltage and operating temperature, as depicted in Fig. 7. The influence of temperature variation on the predicted current shows good agreement with the Hspice results.
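The per-cell flow described here is easy to sketch. In the fragment below, the two-cell netlist and all leakage numbers are invented for illustration, and each cell's input state is assumed to be already known from logic simulation of the mapped circuit (cf. Table 4 for a real per-vector characterization of one complex gate):

```python
# Hypothetical per-cell leakage tables (nA), indexed by input state.
LEAK = {
    "nand2": {"00": 1.3, "01": 2.5, "10": 2.6, "11": 6.6},
    "inv":   {"0": 4.0, "1": 2.0},
}

def circuit_leakage(netlist, node_values):
    """Sum per-cell leakage looked up from each cell's local input state."""
    total = 0.0
    for cell_type, pins in netlist:
        state = "".join(str(node_values[p]) for p in pins)
        total += LEAK[cell_type][state]
    return total

netlist = [("nand2", ("a", "b")), ("inv", ("c",))]
print(circuit_leakage(netlist, {"a": 0, "b": 1, "c": 1}))   # 2.5 + 2.0 nA
```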

Fig. 6. CMOS gate, with different transistor sizing, according to the Logical Effort [16]


Table 4. Subthreshold leakage (nA) related to the CMOS complex gate depicted in Fig. 6

Input state (abcd)   HSPICE results   Proposed model   Diff (%)
0000                 4.01             4.13             3.0
0001                 20.67            20.68            0.0
0010                 19.93            19.99            0.3
0011                 44.52            43.27            2.8
0100                 4.44             4.29             3.4
0101                 42.34            42.37            0.1
0110                 19.93            19.99            0.3
0111                 43.38            43.27            0.3
1000                 4.40             4.26             3.2
1001                 36.81            36.50            0.8
1010                 19.93            19.99            0.3
1011                 43.38            43.27            0.3
1100                 19.50            19.99            2.5
1101                 96.67            96.92            0.3
1110                 20.43            19.99            2.2
1111                 20.48            20.21            1.3

Fig. 7. Subthreshold current in (a) supply voltage and (b) operating temperature variations

Fig. 8. Subthreshold current according to the threshold voltage variation


On the other hand, the difference between the subthreshold currents obtained from electrical simulation and from the analytical model under supply voltage variation can be explained by possible inaccuracies in the parameter extraction. Fig. 8 shows the leakage current analysis with respect to threshold voltage variation, validating the proposed method for this factor, which is critical in the most advanced CMOS processes.

5 Conclusions

A new subthreshold leakage current model has been presented for application to general CMOS networks, including series-parallel and non-series-parallel transistor arrangements. The occurrence of on-devices in off-networks is also taken into account. These features make the model useful for different logic styles, including BDD-derived networks, improving on previous works that are not suitable for such structures. The proposed model has been validated considering a 130nm CMOS technology, in which the subthreshold current is the most relevant leakage mechanism. In the case of sub-100nm processes, where gate leakage becomes more significant, the present work should be combined with already published works which address the interaction between the subthreshold and the gate oxide leakage currents, such as proposed by Yang et al. [3], in order to improve the accuracy of the total leakage estimation.

References
1. Roy, K., Mukhopadhyay, S., Meimand, H.M.: Leakage Current Mechanisms and Leakage Reduction Techniques in Deep-Submicrometer CMOS Circuits. Proceedings of the IEEE 91(2), 302–327 (2003)
2. Anderson, J.H., Najm, F.N.: Active Leakage Power Optimization for FPGAs. IEEE Trans. on CAD 25(3), 423–437 (2006)
3. Yang, S., Wolf, W., Vijaykrishnan, N., Xie, Y., Wang, W.: Accurate Stacking Effect Macro-modeling of Leakage Power in Sub-100nm Circuits. In: Proc. Int. Conference on VLSI Design, January 2005, pp. 165–170 (2005)
4. Gu, R.X., Elmasry, M.I.: Power Distribution Analysis and Optimization of Deep Submicron CMOS Digital Circuits. IEEE J. Solid-State Circuits 31 (May 1996)
5. Cheng, Z., Johnson, M., Wei, L., Roy, K.: Estimation of Standby Leakage Power in CMOS Circuits Considering Accurate Modeling of Transistor Stacks. In: Proc. Int. Symposium on Low Power Electronics and Design, August 1998, pp. 239–244 (1998)
6. Roy, K., Prasad, S.: Low-Power CMOS VLSI Circuit Design. John Wiley & Sons, Chichester (2000)
7. Narendra, S.G., Chandrakasan, A.: Leakage in Nanometer CMOS Technologies. Springer, Heidelberg (2006)
8. Rosseló, J.L., Bota, S., Segura, J.: Compact Static Power Model of Complex CMOS Gates. In: Vounckx, J., Azemard, N., Maurine, P. (eds.) PATMOS 2006. LNCS, vol. 4148, Springer, Heidelberg (2006)
9. Lee, D., Kwong, W., Blaauw, D., Sylvester, D.: Analysis and Minimization Techniques for Total Leakage Considering Gate Oxide Leakage. In: Proc. DAC, June 2003, pp. 175–180 (2003)
10. Gavrilov, S., et al.: Library-less Synthesis for Static CMOS Combinational Logic Circuits. In: Proc. ICCAD, November 1997, pp. 658–662 (1997)


11. Yang, C., Ciesielski, M.: BDS: a BDD-based logic optimization system. IEEE Trans. CAD 21(7), 866–876 (2002)
12. Lindgren, P., Kerttu, M., Thornton, M., Drechsler, R.: Low power optimization technique for BDD mapped circuits. In: Proc. ASP-DAC 2001, pp. 615–621 (2001)
13. Shelar, R.S., Sapatnekar, S.: BDD decomposition for delay oriented pass transistor logic synthesis. IEEE Trans. VLSI 13(8), 957–970 (2005)
14. Sheu, B.J., Scharfetter, D.L., Ko, P.K., Jeng, M.C.: BSIM: Berkeley Short-Channel IGFET Model for MOS Transistors. IEEE J. Solid-State Circuits SC-22 (August 1987)
15. Butzen, P.F., Reis, A.I., Kim, C.H., Ribas, R.P.: Modeling Subthreshold Leakage Current in General Transistor Networks. In: Proc. ISVLSI (May 2007)
16. Sutherland, I., Sproull, B., Harris, D.: Logical Effort: Designing Fast CMOS Circuits. Morgan Kaufmann, San Francisco, CA (1999)

A Platform for Mixed HW/SW Algorithm Specifications for the Exploration of SW and HW Partitioning

Christophe Lucarz and Marco Mattavelli

Ecole Polytechnique Fédérale de Lausanne, Switzerland
{christophe.lucarz,marco.mattavelli}@epfl.ch

Abstract. The increasing complexity of video and multimedia processing in particular has led to the need of developing algorithm specifications as software implementations, which become in practice generic reference implementations. Mapping such software models directly onto platforms made of processors and dedicated HW elements becomes harder and harder, given the complexity of the models and the large choice of possible partitioning options. This paper describes a new platform aiming at supporting the mapping of software specifications onto mixed SW and HW implementations. The platform is supported by profiling capabilities specifically conceived to study data transfers between SW and HW modules. Such optimization capabilities can be used to achieve different objectives, such as the optimization of memory architectures or low power designs by the minimization of data transfers.

1 Introduction

Due to the ever increasing complexity of integrated processing systems, software verification models are necessary to test performance and specify a system behaviour accurately. A reference software, whether the processing is specified by a standard body or is any other "custom" algorithm, is the starting point of every real implementation. This is the typical case for MPEG video coding, where the "reference software" is now the real specification. There is an intrinsic difference between real implementations, which can be made of HW and SW components working in parallel using a specific mapping of data onto different logical or physical memories, and the "reference software", which is usually based on a sequential model with a monolithic memory architecture; such difference is generically called the "architectural gap". In the process of transforming the reference SW into a real implementation, the possibility of exploring different architectural solutions for specific modules and studying the resulting data exchanges so as to define optimal memory architectures is a very attractive approach. This system exploration option is particularly important for complex systems, where conformance and validation of the new module designs need to be performed at each stage of the design, otherwise incurring the risk of


developing solutions not respecting their original specification or not providing adequate performance. This paper presents an integrated platform, and the associated profiling capabilities, that supports mixed SW and HW specifications and enables the hardware designer to seamlessly transform a reference software into software plus hardware modules mapped on an FPGA. The paper is organized as follows: section 2 presents a brief state of the art of integrated HW/SW platforms. Section 3 provides a general view of the platform, introducing the innovative elements. Section 4 describes the details of the platform that enable HW/SW support. Section 5 presents the capabilities of the profiling tool and explains how it can be used to study and optimize data transfers satisfying different criteria.

2 State of the Art of Platforms Supporting Mixed HW/SW Implementations

Many platforms have been designed to support mixed SW/HW implementations, but all of them suffer from the fact that there is no easy procedure capable of seamlessly plugging hardware modules described in HDL into a pure software algorithm: either the memory management is a burdensome task, or the call of the hardware module is done by an embedded processor on the platform. Environments which support HW/SW implementations are generally based on a platform containing an embedded processor and some dedicated hardware logic like an FPGA, as described in the work of Andreas Koch [2]. The control program lies in the embedded processor; data on the host are easily available thanks to virtual serial ports, but the plugging of hardware modules inside the reference software running on the host remains the most difficult task. The work of Martyn Edwards and Benjamin Fozard [3] is interesting in the way an FPGA-based algorithm can be activated from the host processor. This platform is based on the Celoxica RC1000-PP board and communicates with the host using the PCI bus. The control program is on the host processor; it sends control information to the FPGA and transfers data into a small shared memory which is part of the hardware platform. In this case, the designer/programmer must perform the data transfers between the host and the local memory explicitly. Many other works about coprocessors have been reported in the literature; some examples are given in [4] [5]. However, the problem of the seamless plug-in of HDL modules is still there, with the data transfers left in the charge of the designer, which can be a very burdensome task when dealing with complex data-dominated video or multimedia algorithms. In some works on coprocessors, data transfers can be generated automatically by the host, like for instance in [6], but data are copied into the local memory at a fixed place. Thus, the HDL module must be aware of the physical addresses


The management of these addresses can be a burdensome task when dealing with complex algorithms. The Virtual Socket concept and the associated platform have been presented in [10] [9] [7]; they have been developed to support the mixed specification of MPEG-4 Part 2 and Part 10 (AVC/H.264) in terms of reference SW including the plug-in of HDL modules. The platform consists of a standard PC, where the SW is executed, and of a PCMCIA card that contains an FPGA and a local memory. The data transfers between the host memory and the local memory on the FPGA must be explicitly specified by the designer/programmer. Specifying the data transfers explicitly would not constitute a serious burden when dealing with simple deterministic algorithms, for which the data required by the HDL module are known exactly. Unfortunately, for very complex design cases, where design trade-offs are much more convenient than worst-case designs (and often are the only viable solutions), data transfers cannot be explicitly specified in advance by the designer. Our work is based on the Virtual Socket platform, to which we add a virtual memory capability to allow automatic data transfers from the host to the local memory. The goal of our platform implementation is to provide a "direct map" of any SW portion to a corresponding HDL specification without the need to specify any data transfer explicitly; in other words, to extend the concept of the Virtual Socket for plugging HDL modules into SW partitions with the concept of virtual memory. HDL modules and the software algorithm share a unified virtual memory space. Having a shared memory - enforced by a cache-coherence protocol - between the CPU running the SW sections and the platform supporting the HW avoids the need to specify all the data transfers explicitly. The clear advantage is that the data transfer needs of the developed HDL module can be directly profiled so as to explore different memory architecture solutions. Another advantage of such a direct map is that conformance with the original SW specification is guaranteed at any stage, and the generation of test vectors is naturally provided by the way the HDL module is plugged into the SW section.

3 Description of the Virtual Socket Platform

The Virtual Socket platform is composed of a PC and a PCMCIA card that includes an FPGA and a local memory. The Virtual Socket handles the communications between the host (the PC environment) and the HDL modules (in the FPGA inside the PCMCIA card). Given that the HDL modules are implemented on the FPGA, in principle they would only have access to the local memory (see figure 1). This was the case in the first implementation of the Virtual Socket platform, with the consequence that all the data transfers from the host to the local memory had to be specified in advance by the designer/programmer himself. Such an operation, besides being error prone or being implemented by transferring more data than necessary, is not straightforward and may become difficult to handle when the volume of data is comparable with the size of the (small) local memory.

Fig. 1. The Virtual Socket platform overview

Therefore, an extension has been conceived and implemented to handle these data transfers automatically. The Virtual Memory Extension (VME) is implemented by two components: a hardware extension of the Virtual Socket platform, the Window Memory Unit (WMU), and the Virtual Memory Window (VMW) library on the host PC. The cache-coherence protocol is implemented in the WMU using a TLB (Translation Lookaside Buffer) and is handled by the software support (VMW). The HDL module is designed simply to generate virtual addresses relative to the user virtual memory space (on the host) in order to request data and execute its processing tasks. The processing of data on the platform using the virtual memory feature proceeds as follows. The algorithm starts its execution on the PC, using the associated host memory. The Virtual Socket environment allows the HDL module to have seamless direct access to the host memory thanks to the Virtual Memory Extension, and allows the HDL module to be started easily from the software algorithm thanks to the VMW library. Figure 2 shows the relations between the host memory, the reference software algorithm, the hardware function call and the HDL module. Given an algorithm described in a mixed HW/SW form (1), some parts are executed in software on the host processor (5), while other parts are executed by hardware HDL modules (4) on the Virtual Socket platform hardware. To deal with mixed HW/SW algorithms, it is very convenient if the HDL and C functions have access to the same user memory space (6), which is part of the host hardware and where the data to process are stored. The main memory space is trivially available to the parts of the algorithm executed in software, which is much less evident for the parts executed in hardware. The section of C code the programmer intends to execute in hardware is replaced by the hardware call function (2). The latter is based on the Virtual Memory Window library. The programmer sets the parameters to be passed to the HDL module.

Fig. 2. Interactions between the C function, the HDL module and the shared memory space

The Start_module() function drives the Virtual Socket platform and the VME (3) to activate the HDL module (4). The VMW library manages all the data transfers between the main memory (6) and the local memory of the platform (3), because the HDL module, being in an FPGA, has access only to this local memory. Thanks to the VME, the HDL module has access to the host memory without intervention of the programmer: data are sent to the HDL module and results are updated in the main memory automatically by the software library support. When the HDL module finishes its work, the hardware call function is terminated by closing the platform, and the reference software algorithm can continue on the host PC.

4 Details on HW Implementation and SW Support

The following section describes in more detail how the Virtual Socket platform supporting the Virtual Memory Extension is implemented. The first part explains how virtual memory accesses are possible from the HDL modules. Then the Virtual Memory Window library, i.e., the software support, is described in detail to show how virtual memory accesses are handled. The final part explains how HDL modules can be integrated in the platform using a well-defined protocol.

4.1 The Heart of Simplicity: HDL Module Virtual Memory Accesses

The HDL modules are implemented on the FPGA, so that they have access only to the local memory of the Virtual Socket platform. With the implementation of the Virtual Memory Extension, the HDL modules gain direct access to the software virtual memory space located on the host PC. The right part of figure 1 shows in detail how the connections between an HDL module, the Virtual Socket platform and the Virtual Memory Extension are implemented (in the hardware of the PCMCIA card). The virtual addresses generated by the HDL modules are handled by the Virtual Memory Controller (VMC) and the Window Memory Unit (WMU). The WMU is a component taken from the work of Vuletić et al. [8]; it translates virtual addresses into physical addresses. The VMC is in charge of intercepting the relevant signals at the right time on the interface between the HDL module and the platform, in order to send information to the WMU, which executes the translation. Among the signals intercepted by the VMC are the address signal, the count signal (the number of data words requested by the HDL module) and the strobe signal. The virtual addresses refer to the unified virtual memory space, and the physical addresses refer to the local memory on the card. A physical address is composed of an offset and a page number. The local memory (on the current PCMCIA card platform) is composed of 32 pages of 2 kB. The offset corresponds to the location of the data in the page. The software support library (on the host PC) fills the pages of the local memory with the requested data coming from the virtual memory. When the WMU receives an unknown virtual address, it raises an interrupt through the interrupt controller of the card. The interrupt is handled by the software support (on the host PC), and the requested data are written from the host memory to the local memory. From the point of view of a designer/programmer using the Virtual Memory Extension, the whole process of data transfers is completely transparent. The only thing the designer/programmer has to take care of is to generate the virtual addresses according to the data contained in the host memory space. The whole task of transferring data to the local memory is done by the platform and its software support.
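The following C fragment sketches the translation mechanism described above for the 32-page, 2 kB-per-page local memory. It is only an illustration of the principle: all names, the table layout and the fault handler are hypothetical and do not come from the actual WMU or VMW sources.

#include <stdint.h>

#define NUM_PAGES 32
#define PAGE_SIZE 2048                     /* bytes per local-memory page   */

typedef struct {
    uint32_t virt_page;                    /* virtual page currently mapped */
    int      valid;
} tlb_entry_t;

static tlb_entry_t tlb[NUM_PAGES];

/* Hypothetical handler: raises the interrupt, lets the host-side VMW
   support copy the page from user memory, and returns the victim slot. */
extern int raise_page_fault(uint32_t virt_page);

/* Translate a virtual address issued by the HDL module into an address
   in the local memory on the card. */
uint32_t wmu_translate(uint32_t vaddr)
{
    uint32_t vpage  = vaddr / PAGE_SIZE;
    uint32_t offset = vaddr % PAGE_SIZE;

    for (int p = 0; p < NUM_PAGES; p++)
        if (tlb[p].valid && tlb[p].virt_page == vpage)
            return p * PAGE_SIZE + offset; /* hit: page already local */

    /* miss: the host fills a page of the local memory, then we retry */
    int victim = raise_page_fault(vpage);
    tlb[victim].virt_page = vpage;
    tlb[victim].valid     = 1;
    return victim * PAGE_SIZE + offset;
}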

4.2 The Software Support: The Virtual Memory Window Library

The Virtual Memory Window (VMW) library is built on the FPGA card driver (Wildcard II API), on the Virtual Socket API developed by Yifeng Qiu and Wael Badawy based on the works [9] [10], and on the WIN32 API. The Virtual Socket platform can be used with or without the Virtual Memory Extension: the designer/programmer is free to choose whether the data transfers between the main memory on the host and the local memory on the card are done automatically (virtual mode) or manually (explicit mode).


The following piece of C code shows how an HDL module can easily be called from the reference software by using the Virtual Memory Extension:

int main(int argc, char *argv[])
{
    /* [. . .] Reference Software Algorithm stops here */
    /* Beginning of the HDL module calling procedure */

    /******* CONFIGURING THE PLATFORM *******/
    Platform_Init();                 // Virtual Socket
    VMW_Init();                      // Virtual Memory Extension

    /******* PARAMETERS SETTINGS *******/
    Module_Param.nb_param = 4;       // number of parameters
    Module_Param.Param[0] = A;       // parameter 1
    Module_Param.Param[1] = B;       // parameter 2
    Module_Param.Param[2] = C;       // parameter 3
    Module_Param.Param[3] = D;       // parameter 4

    /******* HDL MODULE START *******/
    Start_module(1, &Module_Param);

    /******* CLOSING THE PLATFORM *******/
    VMW_Stop();                      // Virtual Memory Extension
    Platform_Stop();                 // Virtual Socket

    /* End of the HDL module calling procedure */
    /* [. . .] the Reference Software Algorithm continues */
}

First, the designer/programmer must configure the platform by using the Platform_Init() and VMW_Init() functions from the Virtual Socket API and the VMW API. HDL modules are activated by the function Start_module() from the VMW API. The designer/programmer must set the parameters needed for the configuration of the HDL module; this is done through the data structure Module_Param. Sixteen parameters are available for each HDL module.
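The paper does not give the declaration of this structure; a plausible form, consistent with the snippet above and with the sixteen-parameter limit, could look as follows (the field types are an assumption):

#define MAX_PARAM 16

typedef struct {
    int          nb_param;            /* number of parameters in use    */
    unsigned int Param[MAX_PARAM];    /* up to 16 parameters per module */
} module_param_t;

module_param_t Module_Param;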

4.3 The Integration of the HDL Modules in the Platform

The HDL module is linked to the Virtual Socket platform through a well-defined interface and a precise communication protocol. Figure 3 illustrates the essential elements of the communication protocol. An HDL module can issue two types of requests: read or write data (in the main or in the local memory, depending on the operating mode: virtual or explicit).

Fig. 3. The communication protocol between a HDL module and the Virtual Socket Platform

Read and write protocols are very similar. The following is the description of the communication protocol for a read (write) request:

1. The user HDL module asks to read (write) data: it issues a request for reading (writing) the memory.
2. The platform accepts the read (write) request and, if the data are available in the local memory, it generates an acknowledgement signal to the user HDL module. Otherwise, the Virtual Memory Extension first copies the requested data from the host memory into the local memory and then generates the acknowledgement.
3. Once the user HDL module receives the acknowledgement signal, it asks to read (write) the data directly from (to) the memory space. This request is performed by asserting a "memory read" ("output valid") signal, together with setting up some other parameter signals: the identification number of the HDL module used, the virtual address, and how much data must be read (written).
4. The platform accepts those signals and reads (writes) the data from (to) the memory space. When the platform finishes each read (write), it asserts "input valid" ("output valid") and the data are ready at the input of the user HDL module (platform).
5. The user HDL module receives (sends) the data from (to) the interface.
6. When finished, the user HDL module asserts a request asking to release the read (write) operations.
7. The platform generates an acknowledgement signal to release the read (write) operations.
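A compact way to summarise the handshake is as a state machine. The sketch below is a purely behavioural C rendering of the seven steps, with illustrative state names; the actual protocol is implemented by hardware signals between the module and the platform.

typedef enum {
    REQ_ISSUED,      /* step 1: module requests a read (write)           */
    REQ_ACKED,       /* step 2: platform acks once data are in local mem */
    PARAMS_SET,      /* step 3: module id, virtual address, data count   */
    TRANSFERRING,    /* steps 4-5: one "input valid"/"output valid" per  */
                     /*            datum until the count is reached      */
    RELEASE_ASKED,   /* step 6: module asks to release the operation     */
    RELEASED         /* step 7: platform acks the release                */
} vs_protocol_state_t;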


In the virtual mode, the read and write addresses contain the addresses of the data in the unified virtual memory space: it is as if the HDL modules saw the host memory directly.

5 Profiling Tools: Testing and Optimizing Data Transfers

Optimization of data transfers is a very important issue, particularly for data-dominated systems such as multimedia and video processing. Minimizing data transfers is also important for achieving the low-power implementations that are fundamental for mobile communication terminals: data transfers contribute to the power dissipation and need to be optimized to achieve low-power designs. The profiling tools supported by the platform allow the programmer to receive feedback on the data requested by the HDL module.

Fig. 4. The optimization methodology

Figure 4 shows the methodology used to obtain a hardware function (HDL module) optimized with respect to data transfers. The first step concerns the validation of the design: using the Virtual Memory Extension, the equivalence of the C and HDL functions is verified. The virtual memory feature allows the designer/programmer to focus exclusively on the conformance checking of the HDL module and to forget about memory management during this phase. The second phase consists in understanding and obtaining a global overview of the data transfers made between the platform and the HDL module; the way the data are accessed and the re-organization of the data can be sources of optimization. Once the data required by the HDL module have been profiled, the designer/programmer enters the last phase, in which the data transfers between the HDL module and the cache memory are optimized.
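As an illustration of the kind of feedback collected in phase (2), the page-fault handler of the Virtual Memory Extension could maintain one counter per virtual page touched by the HDL module; dumping the counters after a run then reveals the access pattern to be optimized. The sketch below is hypothetical and is not part of the actual VMW library.

#include <stdint.h>
#include <stdio.h>

#define PROFILED_PAGES 4096      /* assumed size of the profiled region */

static uint32_t transfer_count[PROFILED_PAGES];

/* called on every miss handled by the software support */
void profile_page_fault(uint32_t virt_page)
{
    if (virt_page < PROFILED_PAGES)
        transfer_count[virt_page]++;
}

/* printed after the run: pages with high counts are candidates for a
   dedicated memory or a better data organization */
void profile_dump(void)
{
    for (uint32_t p = 0; p < PROFILED_PAGES; p++)
        if (transfer_count[p] > 0)
            printf("page %u: %u transfers\n", p, transfer_count[p]);
}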

6 Conclusion

This paper describes the implementation of a platform capable of supporting the execution of algorithms described in mixed SW and HW form. The platform provides a seamless environment for migrating sections of the SW into HDL modules, including a "virtual memory space" common to the SW sections and to the HW modules. On one side, conformance of the HDL modules with the reference SW is guaranteed at any stage of the design; on the other side, the programmer/designer can focus on different aspects of the design: first, the design effort can be concentrated on the module functionality without worrying about data transfers, and then the profiled data transfers can be used for the design of appropriate memory architectures or for any other optimization that matches the specific criteria of the design.

References

1. Annapolis Micro Systems: WILDCARD-II Reference Manual, 12968-000 Revision 2.6 (January 2004)
2. Koch, A.: A comprehensive prototyping-platform for hardware-software codesign. In: Rapid System Prototyping (RSP 2000) Proceedings, 11th International Workshop, 21-23 June 2000, pp. 78-82 (2000)
3. Edwards, M., Fozard, B.: Rapid prototyping of mixed hardware and software systems. In: Digital System Design 2002, Proceedings, Euromicro Symposium, 4-6 September 2002, pp. 118-125 (2002)
4. Pradeep, R., Vinay, S., Burman, S., Kamakoti, V.: FPGA based agile algorithm-on-demand coprocessor. In: Design, Automation and Test in Europe 2005 (March 2005)
5. Plessl, C., Platzner, M.: TKDM - a reconfigurable co-processor in a PC's memory slot. In: Field-Programmable Technology (FPT) Proceedings, 2003 IEEE International Conference, 15-17 December 2003, pp. 252-259. IEEE Computer Society Press, Los Alamitos (2003)
6. Sukhsawas, S., Benkrid, K., Crookes, D.: A reconfigurable high level FPGA-based coprocessor. In: Computer Architectures for Machine Perception, 2003 IEEE International Workshop, 12-16 May 2003, p. 4 (2003)
7. Schumacher, P., Mattavelli, M., Chirila-Rus, A., Turney, R.: A Virtual Socket Framework for Rapid Emulation of Video and Multimedia Designs. In: Multimedia and Expo 2005 (ICME 2005), IEEE International Conference, 6-8 July 2005, pp. 872-875 (2005)
8. Vuletic, M., Pozzi, L., Ienne, P.: Virtual memory window for application-specific reconfigurable coprocessors. In: Proceedings of the 41st Design Automation Conference, San Diego, Calif. (June 2004)
9. Amer, I., Rahman, C.A., Mohamed, T., Sayed, M., Badawy, W.: A hardware-accelerated framework with IP-blocks for application in MPEG-4. In: System-on-Chip for Real-Time Applications, Proceedings, Fifth International Workshop, 20-24 July 2005, pp. 211-214 (2005)
10. Mohamed, T.S., Badawy, W.: Integrated hardware-software platform for image processing applications. In: System-on-Chip for Real-Time Applications, Proceedings, 4th IEEE International Workshop, IEEE Computer Society Press, Los Alamitos (2004)

Fast Calculation of Permissible Slowdown Factors for Hard Real-Time Systems

Henrik Lipskoch¹, Karsten Albers², and Frank Slomka²

¹ Carl von Ossietzky Universität Oldenburg, [email protected]
² Universität Ulm, {karsten.albers,frank.slomka}@uni-ulm.de

(This work is supported by the German Research Foundation (DFG), grants GRK 1076/1 and SL 47/3-1.)

Abstract. This work deals with the problem of optimising the energy consumption of an embedded system. On the system level, tasks are assumed to have a certain CPU usage they need for completion. While respecting their deadlines, slowing down the task system reduces the energy consumption. For periodically occurring tasks several works exist, but when jitter has to be taken into account these approaches do not suffice. The event stream model can handle this at an abstract level, and the goal of this work is to present and solve the optimisation problem formulated with the event stream model. To reduce the complexity we introduce an approximation to the problem that allows a precision/performance trade-off.

1 Introduction

Reducing the energy consumption of an embedded system can be done by shutting down (zero voltage), freezing (zero frequency, e.g. clock gating) or stepping the circuits with a slower clock and a lower voltage (Dynamic Voltage Scaling or Adaptive Body Biasing). On the system level, tasks, i.e. programmes or parts of them, are assigned to a processing unit. Here we are interested in tasks having a deadline not to miss and with some sort of repeated occurrence, that is, tasks that are executed repeatedly as long as the embedded system is up and running. The mentioned possibilities to reduce the overall energy consumption result in a delay or slowdown from the view of a program running on the system. The slowdown has a lower bound for those tasks having deadlines to meet, with the side effect that the available processing time for other tasks running on the same processor will be reduced. Thus, any technique regarding energy reduction which influences the processing speed has to take these limits into account. The problem we focus on here is to minimise the system's total power consumption for a task set, where each task may have its own trigger and its own relative deadline, using static slowdown to guarantee hard real-time feasibility. In the next section we will describe work on similar problems. In the model section we will specify our assumptions and describe the event stream model along with the demand bound function. We will improve the number of test points and then show how this is incorporated into a linear programme that solves the problem of calculating slowdown factors while guaranteeing hard real-time feasibility. Before we conclude the paper, we will present an example and demonstrate the advantages gained with the developed theory.

2 Related Work

There are several works regarding the energy optimisation of hard real-time systems. Here we are interested in optimising without granting the circuits any loss of precision and without missing any deadlines, and we want a true optimal solution if possible; thus we focus on linear programming solutions. Ishihara and Yasuura [1] incorporate in their integer linear program different discrete voltage/frequency pairs and an overall deadline for their task set to meet, but no intertask dependency and only dynamic power consumption. They consider mode-switching overhead negligible and assume for every task a number of needed CPU cycles, which in worst-case analysis corresponds to its worst-case execution time. Andrei et al. [2] formulate four different problems, each regarding intertask dependency, a deadline per task, and per task a number of clock cycles to complete it, which can, again, be considered as worst-case time. The four problems vary in whether they regard mode-switching overhead and whether they use integer linear programming for the number of clock cycles. The task set is considered non-preemptive. The authors prove the non-polynomial complexity of the integer linear problem with overheads. Hua and Qu [3] take a somewhat different approach: they look for the number and the values of voltages yielding the best solution with dynamic voltage scaling. However, their problem formulation only explores the slack between execution time and relative deadline of a task, with the assumption that the task system is schedulable even if every task uses the complete deadline as execution time. In hard real-time analysis this assumption does not hold; for example, if each task has its own period, a common way is to set its deadline equal to its period. Rong and Pedram [4] formulate their problem with intertask dependencies, leakage and dynamic power consumption, different CPU execution modes, and external device usage, each with different execution modes. They state that mode-switching overhead on the CPU is negligible, especially when "normal" devices (e.g. hard disks etc.) are involved in the calculation. They also state the non-polynomial complexity of the mixed integer linear program. Their task graph is assumed to be triggered periodically with a period they use as overall deadline for the task set, and the tasks are considered non-preemptive; the tasks themselves do not have their own deadlines. In [5] Jejurikar and Gupta present a work on calculating slowdown factors for the energy optimisation of earliest-deadline-first scheduled embedded systems with periodic task sets. The work does not consider static power consumption and therefore does not take switching off the CPU into account.


They first present a method to calculate a slowdown factor for a whole task set using state-of-the-art real-time feasibility tests. Then they develop a method with ellipsoids to obtain a convex optimisation problem. This test incorporates the feasibility test of Baruah [6], which turned out to be very computationally intensive, because it requires testing all intervals up to the hyper period of the task system in the case of a periodic task system, and correspondingly large bounds (again, see [6]) in other cases. To face the problem of computational intensity, Albers and Slomka developed another test [7] with a fast approximation [8], offering the possibility of a performance/precision trade-off, based on the event stream methodology by Gresser [9]. In this work, we want to show how this approximation applies to the problem of calculating slowdown factors.

3 Model

We do not want to limit ourselves to periodically triggered task systems. Therefore, we assume that a task can be triggered not only periodically, but also periodically with a jitter, sporadically with a minimal distance between two consecutive triggers, or in other forms. We assume the triggers of a task to be given in the form of an event stream, see below. For the scheduling algorithm, we assume preemptive earliest-deadline-first scheduling. When we speak of a task system, we speak of tasks sharing one single processor and competing for the available processing time. Additionally, we assume it to be synchronous, i.e. all tasks can be triggered at the same point in time. An invocation of a task is called a job, and the point in time when a task is triggered an event. Each task of our task systems is assumed to have a relative deadline d, measured from the time when the task is triggered, a worst-case execution time c, and an event stream denoting the worst-case trigger of that task. The latter is described in the following subsection.

3.1 Event Streams

Motivated by the idea of determining bottlenecks in the available processing time, Gresser [9] introduced the event stream model, which is used in [7] to provide a fast real-time analysis for embedded systems. We understand a bottleneck as a shortest time span in which the largest amount of processing time is needed, i.e. a time span with the highest density of needed processing time. The event stream model covers this: time spans with maximal density are ordered by their length (maximal among time spans having the same length means: go through all intervals on the time axis having the same length and take the one with the least available processing time within). This is achieved by calculating the minimal time span for each number of task triggers. Definition 1. Let τ be a task. An event stream E(τ) is a sequence of real numbers a1, a2, . . . , where for each i ∈ N, ai denotes the length of the shortest interval in time in which i events of type τ can happen. (See [7] for a more detailed definition.)


Event streams are sub-additive sequences, i.e. ai + aj ≤ ai+j for all i, j ∈ N. Albers and Slomka explain how to gather this information for periodic, periodic with jitter, and sporadically triggered tasks [7].

Example 1. Consider the following three tasks.
1. Let τ1 be triggered with a period of 100 ms. Then the shortest time span to trigger two tasks is 100 ms; for three it is 200 ms, and so on. Thus the resulting event stream is E(τ1): a1 = 0 s, an = (n − 1) · 100 ms.
2. Let τ2 be triggered sporadically with a minimal distance of 150 ms between two events. Then the shortest time span to trigger two tasks is 150 ms; for one task more it is 300 ms. The resulting event stream is E(τ2): a1 = 0 s, an = (n − 1) · 150 ms.
3. Let τ3 be triggered periodically every 60 ms, but with the possibility of being triggered 5 ms before or after its period. The shortest time span to trigger two tasks is then 50 ms, which corresponds to one trigger 5 ms after one period and the next trigger 5 ms before the next period. The earliest task after both cannot be triggered sooner than 60 ms later, which is 5 ms before the over-next period; this corresponds to a time length of 110 ms to trigger 3 tasks. Following this argumentation, the resulting event stream is E(τ3): a1 = 0 s, a2 = 50 ms, an = 50 ms + (n − 2) · 60 ms.
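For concreteness, the three event streams of Example 1 can be written down as small helper functions; this is only an illustrative transcription of the closed forms above (times in ms, 1-based index n):

/* tau_1: strictly periodic task, a_n = (n-1) * period */
double a_periodic(int n, double period)
{
    return (n - 1) * period;
}

/* tau_2: sporadic task with a minimal distance between events */
double a_sporadic(int n, double min_distance)
{
    return (n - 1) * min_distance;
}

/* tau_3: periodic task with jitter; two events may be as close as
   period - 2*jitter once, afterwards the period dominates */
double a_jitter(int n, double period, double jitter)
{
    if (n == 1)
        return 0.0;
    return (period - 2.0 * jitter) + (n - 2) * period;
}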

3.2 Demand Bound

To guarantee the deadline of a task, one has to consider the current workload of the resource the task runs on. The demand bound function (see [10] and [7]) provides a way to describe this, and the NP-hard feasibility test using this function can be approximated in polynomial time [7]. For the workload we now calculate the maximal amount of needed processing time within an interval of length Δt. Allowing the simultaneous triggering of different tasks, which was our assumption, leads to synchronising the event streams, that is, to assuming that all the intervals out of which we obtained the time lengths for our event streams have a common start. Thus, the sum of the worst-case execution times of all events in all event streams happening during a time of length Δt and having their deadline within that time gives us an upper bound on the execution demand for any interval of length Δt. Note that we only have to process jobs with deadlines within this time span. Formulated with the notation of Definition 1, the demand bound function turns out as follows. Definition 2. The demand bound function denotes for every time span an upper bound on the workload on a resource to be finished within that time span (see for example [10]). Lemma 1. Let τ1, . . . , τn be tasks running on the same resource, each with worst-case execution time ci and relative deadline di, i = 1, . . . , n, and let E1, . . . , En be their event streams.


Define a0 := −∞. Under the assumption that all tasks can be triggered at the same time, the demand bound function can be written as

$$D(\Delta t) = \sum_{i=1}^{n} \max\{j \in \mathbb{N}_0 : a_j \in E_i \cup \{a_0\},\; a_j \le \Delta t - d_i\} \cdot c_i$$

(see [7]) Example 2. Consider the task set of example 1. Let the deadlines be 30 ms for the first, 20 ms for the second, and 10 ms for the third task; let the worst-case execution times for each task be 25 ms, 15 ms, and 5 ms, respectively. Out of these properties, we obtain the demand bound function, which is ! D(Δt) =

" ! " Δt + 70 ms Δt + 130 ms · 25 ms + · 15 ms 100 ms 150 ms  # $ ! " Δt Δt − 10 ms + max 0, + · 5 ms. |Δt − 10 ms| 60 ms

The next step proceeds with matching the needed processing time against the available processing time; this is the feasibility test of the real-time system. Since one can process exactly t seconds of processing time within an interval of t seconds, provided every consumer of processing time is modelled within the task set, the feasibility test results in proving

$$D(\Delta t) \le \Delta t \qquad \forall \Delta t > 0. \qquad (1)$$

Lemma 2. Let τ1, . . . , τn be tasks and E(τ1), . . . , E(τn) their corresponding event streams. A sufficient set of test points for the demand bound function is $E' := \bigcup_{i=1}^{n} \{a_i + d_i : a_i \in E(\tau_i)\}$. Proof. The demand bound function remains constant between two points e1, e2 ∈ E'. The values of E' can be bounded above, and the remaining set will still be a sufficient test set. If the event streams contain only periodic behaviour, it is feasible to use their hyper period; since this is defined as the least common multiple of all involved periods, it grows as the prime numbers contained in the periods grow (cf. p1 = 31 · 2 = 62, p2 = 87: H = p1 · p2 = 5394, whereas p1 = 60 and p2 = 90 yield H = 180). Another test bound exists [6] covering also non-periodic behaviour. It depends on the utilisation U: $\Delta t_{\max} = \frac{U}{1-U} \cdot \max\{T_i - d_i\}$. It cannot be used here, because in slowing down the system we will increase its utilisation (more processing time due to less speed), and thus formulas similar to the one mentioned will result in infinite test bounds (which is the reason that such formulas are only valid for utilisations strictly less than 1). Instead of using such test bounds, we improve the model in another way.


Definition 3. A bounded event stream with slope s from k on is an event stream E with the property

$$a_{i+1} - a_i \ge s \qquad \forall i \ge k,\; a_i \in E. \qquad (2)$$

The index k is called the approximation's start-index. Because of the sub-additivity, the pair (s, k) = (a2, 2) always forms a bounded event stream; this is used in [8]. But in changing the index, we may change the precision, as the following example shows.

Example 3. Let there be a jittering task with period 100 ms and a jitter of 5 ms. Then approximating with a2 = 90 ms will cause a significant error. Starting the approximation at index 2 with a3 − a2 = 100 ms will end up in no error at all!

We summarise this information more formally in the following lemma.

Lemma 3. Let task τ have a bounded event stream E with slope s from k; then an upper bound on its demand is

$$D_\tau(\Delta t) = \begin{cases} c \cdot \max\{j : a_j \in E,\; a_j + d \le \Delta t\} & \Delta t < a_k + d \\ c \cdot \left(k - 1 + \dfrac{\Delta t - a_k - d}{s}\right) & \Delta t \ge a_k + d. \end{cases} \qquad (3)$$

Note that the growth of the function has its maximum between 0 and ak + d, because for values greater than ak + d the function grows with Δt/s, which must be less than or equal to the maximal growth according to the sub-additivity of the underlying event stream. The definition reduces our set of test points depending on the wanted precision.

Theorem 1. Let τ1, . . . , τn be tasks, c1, . . . , cn their worst-case execution times, and let E1, . . . , En be their bounded event streams with slopes s1, . . . , sn and approximation start-indices k1, . . . , kn. Define $\tilde{E} := \bigcup_{i=1}^{n} \{a_{i,j} + d_i : a_{i,j} \in E_i,\; a_{i,j} \le a_{i,k_i-1}\}$. A sufficient feasibility test is then

$$D(\Delta t) \le \Delta t \qquad \forall \Delta t \in \tilde{E} \qquad (4)$$

and

$$\sum_{i=1}^{n} \frac{c_i}{s_i} \le 1. \qquad (5)$$

Proof. In Lemma 2 we stated that the demand bound function is constant between the test points of $\tilde{E}$. For values greater than $A := \max\{a \in \tilde{E}\}$ we approximate the demand bound function by a sum of straight lines, one for each task, cf. Lemma 3:

$$D(\Delta t) \le \bar{D}(\Delta t) := \sum_{i=1}^{n} g_i(\Delta t) \quad \forall \Delta t > A, \qquad \text{with} \quad g_i(\Delta t) = c_i \cdot \left(k_i - 1 + \frac{\Delta t - a_{i,k_i} - d_i}{s_i}\right).$$


The growth of $\bar{D}$ has its maximum between 0 and A, because this is true for the elements of the sum (compare the note in Lemma 3). If the function $\bar{D}$ is below the straight line h(x) := x for values between 0 and A, then it will cut h for values greater than A if and only if its derivative there is greater than 1. That results in proving

$$1 \ge \frac{\partial}{\partial \Delta t}\bar{D}(\Delta t) = \sum_{i=1}^{n} \frac{\partial}{\partial \Delta t} g_i(\Delta t) = \sum_{i=1}^{n} \frac{\partial}{\partial \Delta t}\, c_i \cdot \left(k_i - 1 + \frac{\Delta t - a_{i,k_i} - d_i}{s_i}\right) = \sum_{i=1}^{n} \frac{c_i}{s_i} \qquad (\Delta t > A).$$

If a task system's demand bound function allows some "slack", that is, it does not use the full available processing time, we are interested in what happens to the calculation if we introduce another task into the system. The argumentation is clear: it has to fit into the remaining available processing time. To be more formal, we state the following lemma, which basically expresses that we do not have to recalculate the demand bound as a whole, but only at the new test points.

Lemma 4. Let Γ be a real-time feasible task system and let D be its demand bound function in the notation of the theorem. Let τ be a task with event stream E, deadline d and worst-case execution time c. Then the task system Γ ∪ {τ} is real-time feasible if

$$D(\Delta t) + \max\{j \in \mathbb{N} : a_j \in E,\; a_j + d \le \Delta t\} \cdot c \le \Delta t \qquad \forall \Delta t \in \{a_i + d : a_i \in E\} \qquad (6)$$

(cf. [7]). Proof. Since the function D will, by prerequisite, not exceed the line h(x) = x, a violation can only occur at points where the task τ needs to be finished. Clearly, if the introduced task has a bounded event stream with some slope s from some index k on, the set of test points reduces to those induced by indices less than k. We summarise the gained complexity reduction along with the accuracy of the test.

Lemma 5. The complexity of the test for a periodic-only task system is linear in the number of tasks; if for all tasks the deadlines are equal to the periods, no accuracy will be lost. The complexity of the test for a periodic task system with m tasks having a jitter is linear in the number of tasks plus m.
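Lemma 4 suggests a simple incremental check. The sketch below reuses the demand_bound() helper given after Example 2 and assumes the new task's event stream prefix is strictly increasing, so that exactly j + 1 events have occurred at test point a_new[j] + d_new; it is an illustrative transcription, not the authors' implementation.

/* incremental feasibility check of Lemma 4 for a new task tau */
int still_feasible(int n_tasks,
                   const double *const *a, const int *len,
                   const double *c, const double *d,     /* existing set */
                   const double *a_new, int len_new,
                   double c_new, double d_new)           /* new task     */
{
    for (int j = 0; j < len_new; j++) {
        double dt    = a_new[j] + d_new;   /* new test point             */
        double extra = (j + 1) * c_new;    /* j+1 events of tau fit in   */
        if (demand_bound(dt, n_tasks, a, len, c, d) + extra > dt)
            return 0;                      /* a deadline would be missed */
    }
    return 1;
}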


Two reasons for loss in accuracy exist. On the one hand, there is an error caused by the assumption of synchronicity; on the other hand, there is an approximation error due to the linearisation.

3.3 Linear Programme

Our optimisation problem can now be formulated as a linear programme. Since our goal is to slow down the task system as much as is allowed, the corresponding formulation (in the notation of the theorem) of the objective is

$$\text{Maximize:} \quad \sum_{i=1}^{n} \frac{\alpha_i \cdot c_i}{s_i}, \qquad (7)$$

where αi is the slowdown for task i. Note that a slowdown factor < 1 means speeding up the task, as this shortens its execution time; therefore we have to ensure the opposite:

$$\alpha_i \ge 1. \qquad (8)$$

As stated in the theorem, the long-term utilisation must not exceed 1; this gives us the constraint

$$\sum_{i=1}^{n} \frac{\alpha_i \cdot c_i}{s_i} \le 1. \qquad (9)$$

Clearly, the optimum will never exceed 1. Let ai,j denote the j-th element of the event stream belonging to task i. The constraint limiting the demand is then

$$\forall \Delta t \in \tilde{E}: \quad \sum_{i=1}^{n} \max\{j \in \mathbb{N} : a_{i,j} + d_i \le \Delta t\} \cdot \alpha_i \cdot c_i \le \Delta t. \qquad (10)$$

The max-term in the equation is calculated beforehand, because it does not change during the optimisation; if its value reaches the start-index k of some task, it is replaced by equation (3) given in Lemma 3.
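Since the max-terms are constants, the whole programme can be written out mechanically for any LP solver. The authors used the GNU MathProg language with glpsol (see section 4); the C sketch below instead emits the programme (7)-(10) in the generic LP file format, with the coefficients coeff[t*n + i] = max{j : a_{i,j} + d_i <= dt[t]} assumed to be precomputed (all names are illustrative):

#include <stdio.h>

void emit_lp(FILE *f, int n, const double *c, const double *s,
             int n_dt, const double *dt, const int *coeff)
{
    fprintf(f, "Maximize\n obj:");                       /* objective (7) */
    for (int i = 0; i < n; i++)
        fprintf(f, " + %g alpha%d", c[i] / s[i], i + 1);

    fprintf(f, "\nSubject To\n util:");                  /* constraint (9) */
    for (int i = 0; i < n; i++)
        fprintf(f, " + %g alpha%d", c[i] / s[i], i + 1);
    fprintf(f, " <= 1\n");

    for (int t = 0; t < n_dt; t++) {                     /* constraints (10) */
        fprintf(f, " dbf%d:", t);
        for (int i = 0; i < n; i++)
            if (coeff[t * n + i] > 0)
                fprintf(f, " + %g alpha%d",
                        coeff[t * n + i] * c[i], i + 1);
        fprintf(f, " <= %g\n", dt[t]);
    }

    fprintf(f, "Bounds\n");                              /* bounds (8) */
    for (int i = 0; i < n; i++)
        fprintf(f, " alpha%d >= 1\n", i + 1);
    fprintf(f, "End\n");
}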

4 Experiments

As a first experiment we chose a rather simple example given in [11], describing seven periodic tasks, with deadlines equal to their periods, on a Palm Pilot (see Table 1). It has a utilisation of 86.1%. Calculation with the unimproved test results in slowing down task 7 by 3.075 with the help of 45 constraints. Exactly the same slowdown was calculated by our fast approach with only 7 constraints. For demonstration purposes, we then chose a periodic task system with some tasks having a deviation in their period, whose maximal value is found as jitter in Table 2. The example was taken from [12]. The task set has a utilisation of about 65.2%.

Table 1. Task set of the Palm Pilot

Task  Exec. Time [ms]  Period [ms]
 1           5              100
 2           7               40
 3          10              100
 4           6               30
 5           6               50
 6           3               20
 7          10              150

Table 2. Task set of processor one

Task  Exec. Time [μs]  Jitter [μs]  Deadline [μs]  Period [μs]
 1          150            0             800            800
 2         2277            0            5000         200000
 3          420         8890           15000          40000
 4          552        10685           20000          20000
 5          496         9885           20000          20000
 6         1423            0           12000          25000
 7         3096            0           50000          50000
 8         7880            0           59000          59000
 9         1996        15786           10000          50000
10         3220        34358           10000         100000
11         3220        55558           10000         100000
12          520            0           10000         200000
13         1120       107210           20000         200000
14          954       141521           20000        1000000
15         1124            0           20000         200000
16         3345            0           20000         200000
17         1990            0          100000        1000000

We first applied the slowdown calculation without test-point reduction, testing up to the hyper-period of the task periods, which is 59,000,000 μs; the test resulted in 78562 constraints concerning the demand bound function. It yielded a slowdown of about 9.7 for task 9 and a utilisation of exactly 1, both of which are the same results as those obtained with the improved linear program suggested by our developed theory, which needs only 20 constraints regarding the demand bound function. All linear programs were written in the GNU MathProg modelling language and solved with glpsol, version 4.15 [13].

5 Conclusion and Future Work

We have shown a very fast and yet accurate method for calculating static slowdown factors while guaranteeing hard real-time feasibility. In contrast to other methods, it does not rely on periodic task behaviour, and its complexity does not increase when other forms of trigger, like sporadic with a minimal distance between two consecutive triggers or periodic with a certain jitter, are part of the optimisation problem.


In future work we want to embed criteria that allow modelling different system states, such as sleep states, and with our methodology we want to investigate in which cases a common slowdown factor will be sufficient.

References

1. Ishihara, T., Yasuura, H.: Voltage scheduling problem for dynamically variable voltage processors. In: Proceedings of the International Symposium on Low Power Electronics and Design, pp. 197-202 (1998)
2. Andrei, A., Schmitz, M., Eles, P., Peng, Z., Al-Hashimi, B.M.: Overhead conscious voltage selection for dynamic and leakage energy reduction of time-constrained systems. In: Proceedings of the Design Automation and Test in Europe Conference (2004)
3. Hua, S., Qu, G.: Voltage setup problem for embedded systems with multiple voltages. IEEE Transactions on Very Large Scale Integration (VLSI) Systems (2005)
4. Rong, P., Pedram, M.: Power-aware scheduling and dynamic voltage setting for tasks running on a hard real-time system. In: Proceedings of the Asia and South Pacific Design Automation Conference (2006)
5. Jejurikar, R., Gupta, R.: Optimized slowdown in real-time task systems. In: Proceedings of the 16th Euromicro Conference on Real-Time Systems, pp. 155-164 (2004)
6. Baruah, S.K., Rosier, L.E., Howell, R.R.: Algorithms and complexity concerning the preemptive scheduling of periodic, real-time tasks on one processor. Real-Time Systems 2(4), 301-324 (1990)
7. Albers, K., Slomka, F.: An event stream driven approximation for the analysis of real-time systems. In: Proceedings of the Euromicro Conference on Real-Time Systems, pp. 187-195 (2004)
8. Albers, K., Slomka, F.: Efficient feasibility analysis for real-time systems with EDF scheduling. In: Proceedings of the Design Automation and Test in Europe Conference (2005)
9. Gresser, K.: An event model for deadline verification of hard realtime systems. In: Proceedings of the Fifth Euromicro Workshop on Real Time Systems, pp. 118-123 (1993)
10. Baruah, S., Chen, D., Gorinsky, S., Mok, A.: Generalized multiframe tasks. Real-Time Systems 17(1), 5-22 (1999)
11. Lee, T.M., Henkel, J., Wolf, W.: Dynamic runtime re-scheduling allowing multiple implementations of a task for platform-based designs. In: Proceedings of the Design, Automation and Test in Europe Conference (2002)
12. Tindell, K., Clark, J.: Holistic schedulability analysis for distributed hard real-time systems. Microprocessing and Microprogramming - Euromicro Journal (Special Issue on Parallel Embedded Real-Time Systems) 40(2-3), 117-134 (1994)
13. Makhorin, A.: GLPK linear programming/MIP solver (2005), http://www.gnu.org/software/glpk/glpk.html

Design Methodology and Software Tool for Estimation of Multi-level Instruction Cache Memory Miss Rate

N. Kroupis and D. Soudris

VLSI Design Centre, Dept. of Electrical and Computer Eng., Democritus University of Thrace, 67100 Xanthi, Greece
{nkroup,dsoudris}@ee.duth.gr

Abstract. A typical design exploration process using simulation tools for various cache parameters is a rather time-consuming process, even for applications of low complexity. The main goal of the estimation methodology introduced in this paper is to provide fast and accurate estimates of the instruction cache miss rate of data-intensive applications implemented on a programmable embedded platform with a multi-level instruction cache memory hierarchy, during the early design phases. Information is extracted from both the high-level description code (C code) of the application and its corresponding assembly code, without carrying out any kind of simulation. The proposed methodology requires only a single execution of the application on a general-purpose processor and uses only the assembly code of the targeted embedded processor. In order to automate the estimation procedure, a novel software tool named m-FICA implements the proposed methodology. The miss rate of a two-level instruction cache can be estimated with high accuracy (>95%) compared with simulation-based results, while the required time cost is much smaller (orders of magnitude) than that of the simulation-based approaches.

1 Introduction

Cache memories have become a major factor in bridging the bottleneck between the relatively slow access time of main memory and the faster clock rate of today's processors. Nowadays, programmable systems usually contain two levels of caches in order to reduce the main memory transfer delay. The simulation of cache memories is common practice to determine the best configuration of caches during the design of computer architectures. It has also been used to evaluate compiler optimizations with respect to cache performance. Unfortunately, the cache analysis of a program can significantly increase the program's execution time, often by two orders of magnitude. Thus, cache simulation has been limited to the analysis of programs with a small or moderate execution time, and it still requires considerable experimentation time before yielding results. In reality, programs often execute for a long time, and cache simulation simply becomes unfeasible with conventional methods. The large overhead of cache simulation is imposed by the necessity of tracking the execution order of instructions. In [3], an automated method for adjusting a two-level cache memory hierarchy in order to reduce energy consumption in embedded applications is presented.


The proposed heuristic, the Two-level Cache Exploration Heuristic considering Cycles, consists of making a small search in the space of configurations of the two-level cache hierarchy, analyzing the impact of each parameter in terms of energy and number of cycles spent for a given application. Zhang and Vahid [1] introduce a cache architecture that can find the best set of cache configurations for a given application. Such an architecture would be very useful in prototyping platforms, eliminating the need for time-consuming simulations to find the best cache configurations. Gordon et al. [4] present an automated method for tuning two-level caches to embedded applications for reduced energy consumption. The method is applicable to both a simulation-based exploration environment and a hardware-based system prototyping environment. The heuristic interlaces the exploration of the two cache levels and searches the various cache parameters in a specific order based on their impact on energy. Platune, introduced by Givargis and Vahid in [2], is used to automatically explore the large configuration space of such an SOC platform. The power estimation techniques for processors, caches, memories, buses, and peripherals, combined with the design space exploration algorithm deployed by Platune, form a methodology for the design of tuning frameworks for parameterized SOC platforms in general. Additionally, a number of techniques have been presented whose main goal was the reduction of the simulation time cost [5], [6], [7] and [8]. A technique called inline tracing can be used to generate the trace of addresses with much less overhead than trapping or simulation: measurement instructions are inserted in the program to record the addresses that are referenced during the execution. Borg, Kessler, and Wall [5] modified some programs at link time to write addresses to a trace buffer, and these addresses were analyzed by a separate higher-priority process. The time required to generate the trace of addresses was reduced by reserving five of the general-purpose registers, to avoid memory references in the trace generation code. Mueller and Whalley [6] provided a method for instruction cache analysis which outperforms the conventional trace-driven methods. This method, named static cache simulation, analyzes a program for a given cache configuration and determines, prior to execution time, whether an instruction reference will always result in a cache hit or miss. In order to use this technique, however, the designer has to modify the compiler of the processor, which is usually not possible when commercial tools and compilers are used. A simulation-based methodology, focused on an approximate model of the cache and the multi-tasking reactive software, that allows one to trade off smoothly between accuracy and simulation speed, has been proposed by Lajolo et al. [7]. The methodology reduces the simulation time by taking into account the intra-task conflicts and considering only a finite number of previous task executions. This method achieved a speed-up of about 12 times over the simulation process, with an error of 2% in the cache miss rate estimation. Nohl et al. [8] presented a simulation-based technique which meets the requirements for both high simulation speed and maximum flexibility. This simulation technique, called the just-in-time cache compiled simulation technique, can be utilized for architecture design, as well as for end-user software development.
However, the simulation performance increases only by about 4 times compared with the trace-driven techniques, which is not enough for exploring various cache sizes and parameters.


In this paper, a novel systematic methodology is introduced, aiming at the estimation of the optimal cache memory sizes, i.e. those with the smallest cache miss rate, of a multi-level instruction cache hierarchy (Figure 1). High accuracy can be achieved within an affordable estimation time cost. The high-level estimation is very useful for a fast exploration among many instruction cache configurations. The basic concept of the new methodology is the straightforward relationship of specific characteristics (e.g. the number of loop iterations) between the high-level application description code and its corresponding assembly code. Using the proposed methodology, a new tool has been developed, achieving orders of magnitude speedup in the miss rate estimation time cost compared to existing methods, with an estimation accuracy higher than 95%. We estimate the miss rate of a two-level instruction cache consisting of a first-level, L1, and a second-level, L2, cache memory. The proposed approach is fully software-supported by a CAD tool, named m-FICA, which automates the whole estimation procedure. In addition, the methodology can be applied to every processor without any compiler or application modification.

Fig. 1. The instruction memory hierarchy of the system, with L1 and L2 instruction cache off-chip

2 Proposed Estimation Methodology

In order to model the number of cache misses of a nested loop, analytical formulas have been proposed in [10]. Given a nested loop with N iterations and a total size of instructions in assembly code L_s, a cache memory with size C_s (in instructions), and a block size B_s (cache line length), the number of misses, Num_misses, can be calculated using the following formulas [10]:

Loop Type 1: if $L_s \le C_s$ then:

$$\mathit{Num\_misses} = \frac{L_s}{B_s} \qquad (1)$$

Loop Type 2: if $C_s < L_s < 2 \times C_s$ then:

$$\mathit{Num\_misses} = \frac{L_s}{B_s} + (N - 1) \times 2 \times \frac{L_s \bmod C_s}{B_s} \qquad (2)$$

Loop Type 3: if $2 \times C_s \le L_s$ then:

$$\mathit{Num\_misses} = N \times \frac{L_s}{B_s} \qquad (3)$$
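Formulas (1)-(3), together with the miss-rate definition that follows them (eqs. (4)-(5) below), transcribe directly into a small helper. This is only an illustrative sketch, with all sizes expressed in instructions as above and integer division mirroring the block granularity:

/* misses of a loop of L_s instructions iterated N times in a cache of
   C_s instructions with block size B_s instructions, eqs. (1)-(3) */
long loop_misses(long L_s, long C_s, long B_s, long N)
{
    if (L_s <= C_s)                      /* Loop Type 1, eq. (1) */
        return L_s / B_s;
    if (L_s < 2 * C_s)                   /* Loop Type 2, eq. (2) */
        return L_s / B_s + (N - 1) * 2 * ((L_s % C_s) / B_s);
    return N * (L_s / B_s);              /* Loop Type 3, eq. (3) */
}

/* miss rate according to eqs. (4)-(5) */
double loop_miss_rate(long L_s, long C_s, long B_s, long N)
{
    long refs = (L_s / B_s) * N;         /* eq. (5): references  */
    return (double)loop_misses(L_s, C_s, B_s, N) / refs;
}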


The miss rate is given by the formula

$$\mathit{Miss\_rate} = \frac{\mathit{Num\_misses}}{\mathit{Num\_references}} \qquad (4)$$

where Num_references is the number of references from the processor to the memory, with

$$\mathit{Num\_references} = \frac{L_s}{B_s} \times N. \qquad (5)$$

Depending on the loop size mapped to the cache size, the assumed loops are categorized into three types. This example assumes that the instruction cache block size is equal to the instruction size. In such a systematic way, the number of misses can be calculated for every loop of the application. The proposed miss rate estimation methodology is based on the correlation between the high-level description code (e.g. C) of the application and its associated assembly code. Using the compiler of the chosen processor, we can create the assembly code of the application. The crucial point of the methodology is that the number of conditional branches in both the C code and its assembly code is equal. Thus, by executing the C code we can find the number of passes through every branch. The values correspond to the assembly code, and thus we can find how many times each assembly branch instruction is executed. By creating the Control Flow Graph (CFG) of the assembly code, the number of executions of all the application's assembly instructions can be calculated. The miss rate estimation is accomplished by the assembly code processing procedure and the data extracted from the application execution. Thus, the estimation time depends on the code (assembly and C) processing time and the application execution time on a general-purpose processor. The total estimation time cost is much smaller than that obtained by the trace-driven techniques. The proposed methodology has as input the high-level description, i.e., in C, of the application code, which includes:

(i) the conditional statements (if/else, case),
(ii) the application function calls, and
(iii) the nested loops (for/while), with the following assumptions: (a) perfectly / non-perfectly nested loops; (b) the loop indices may exhibit constant or variable lower and upper loop boundaries, constant or variable step, and interdependences between the loop indices (e.g. an affine formula ik = ik-1 + c); (c) the loop can contain, among others, conditional statements and function calls.

The proposed methodology consists of three stages, shown in Figure 2. The first stage of the proposed methodology contains three steps: (i) determination of the branches in the C code of the application, (ii) insertion of counters into the branches of the C code, and (iii) execution of the C code. The first stage aims at the calculation of the number of executions (passes) of all branches of the application C code. Thus, the number of executions of every leaf of the CFG is evaluated by the application execution. Determining the branches of the high-level application code, we can find the number of executions within these branches by executing the code. This stage is a platform-independent process and, thus, its results can be used in any programmable platform.


The second stage estimates the number of executions of each microinstruction and, eventually, the total number of executed instructions. It consists of: (i) the determination of the assembly code branches, (ii) the creation of the Control Flow Graph, (iii) the assignment of counter values to Control Flow Graph nodes, and (iv) the computation of the execution cost of the remaining CFG nodes. The basic idea, not only of the second stage but of the whole proposed methodology, is the use of behavioral-level information (i.e. C code) at the processor level. The number of branches of the assembly code is equal to the number of branches in the C code, and remains unchanged even if compiler optimization techniques are applied. In the case of unrolled loops, the methodology can detect the repeated assembly instructions in the assembly code and eventually detect the location of the unrolled branch. Moreover, the methodology can handle optimized and compressed assembly code without any change or limitation. Concluding, the proposed methodology is compiler and processor independent and can be applied to every programmable processor.

Fig. 2. The proposed methodology for estimating the miss rate of a two-level instruction memory cache hierarchy


The derived assembly code can be grouped into blocks of code, each of which is executed in a sequential fashion; these are called basic blocks. A basic block can be defined as the group of instructions between two successive labels, or between a conditional instruction and a label, and vice versa. The third stage of the methodology is platform-dependent and contains two steps: (i) the creation of all the unique execution paths of each loop and (ii) the computation of the number of instructions and iterations associated with each unique path. Exploring all the paths of the CFG of an application, we determine the loops and their size (in number of instructions), as well as the number of executions of each loop. Furthermore, from the rest of the conditional branches (if/else), we create all the unique execution paths inside every loop, together with the number of executions of each unique path. The methodology is able to handle applications which include perfectly or non-perfectly nested loops and any type of loop structure. Considering the target embedded processor, which is MIPS IV, we count the instructions of the assembly code. Using eqs. (1)-(5) and the unique execution paths of each loop, the number of instruction cache misses and the cache miss rate can be estimated. These equations can be applied to every cache level in a similar way, estimating the cache misses in every level of the instruction cache.
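A natural data structure for the second and third stages is a CFG node annotated with the measured execution count; the record below is a hypothetical illustration of how the branch counters of stage one can be attached to the assembly basic blocks (field names are assumptions, not taken from m-FICA):

/* one basic block of the assembly CFG, annotated with the execution
   count propagated from the C-level branch counters */
typedef struct cfg_node {
    int   first_instr;               /* index of the first instruction   */
    int   last_instr;                /* index of the last instruction    */
    long  exec_count;                /* how many times the block runs    */
    struct cfg_node *taken;          /* successor if the branch is taken */
    struct cfg_node *fallthrough;    /* successor otherwise              */
} cfg_node_t;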

3 Comparison Results

In order to evaluate the proposed estimation technique, we compare the results obtained with the developed tool against simulation-based measurements. We considered as implementation platform the 64-bit processor core MIPS IV, while the measurements were taken with the SimpleScalar tool [11], an accurate instruction set simulator for the MIPS processor. SimpleScalar includes an instruction set simulator, a fast instruction simulator and a cache simulator, and can simulate architectures with instruction, data and mixed instruction-data caches with one or two memory hierarchy levels. In order to evaluate the proposed methodology, a set of benchmarks from various signal processing applications, such as MPEG-4, JPEG, filtering and H.263, is used. From the video compression domain we use five Motion Estimation algorithms: (i) Full Search (FS) [12], (ii) Hierarchical Search (HS) [13], (iii) Three Step Logarithmic Search (3SLOG) [12], (iv) Parallel Hierarchical One-Dimensional Search (PHODS) [12] and (v) Spiral Search (SS) [14]. It has been noted that their complexity ranges from 60 to 80% of the total complexity of video encoding (MPEG-4) [12]. In addition, we have used the 1-D Wavelet transformation [15], the Cavity Detector [16] and the Self-Organized Feature Map Color Quantization (CQ) [17]. The C code size of these algorithms ranges from 2 Kbytes to 22 Kbytes, while the corresponding MIPS assembly code size ranges from 9 Kbytes to 46 Kbytes. We assumed a direct-mapped L1 instruction cache with sizes ranging from 64 bytes to 1024 bytes and a block size of 8 bytes, and an L2 instruction cache with sizes between 128 bytes and 4 Kbytes. We performed both simulation and estimation of the miss rate of the instruction cache at L1 and L2. Moreover, we computed the actual time cost of running the


simulation- and estimation-based approaches, as well as the average accuracy of the proposed methodology. Every cache level has its own local miss rate, i.e., the misses in that cache divided by the total number of memory accesses to that cache. The average miss rate is the misses in that cache divided by the total number of memory accesses generated by the processor. For example, with two levels of cache memory, the average miss rate is given by the product of the two local miss rates (Miss Rate_L1 × Miss Rate_L2); in Table 1, for instance, the 256-byte L1 row has a local L1 miss rate of 99.8% and a local L2 miss rate of 77.0%, giving an average miss rate of 0.998 × 0.770 ≈ 76.8%. The average miss rate is what matters to overall performance, while the local miss rate is a factor in evaluating the effectiveness of each cache level. The accuracy of the proposed estimation technique is given by the average estimation error. Tables 1-8 present the average percentage error of the proposed methodology with respect to the simulation results obtained with the SimpleScalar tool, for the above-mentioned eight DSP applications. The last row of each table provides the average estimation error of the miss rate of a two-level instruction cache memory hierarchy for each application. Due to lack of space, we present results only for a two-level cache hierarchy and, to keep the number of tables manageable, only for the case where the L2 cache is four times larger than L1. Depending on the application, the corresponding average estimation error ranges from 1% to 12%, while the total average estimation error of the proposed approach is less than 4% (i.e., 3.77%). The latter value implies that the proposed methodology exhibits high accuracy.

Table 1. Comparison between the estimation and the simulation results of the miss rate in L1 and L2 caches when the size of L2 is 4 times greater than L1 cache for Full Search application

L1 size (bytes)   L1 miss rate (%)          L2 size (bytes)   L2 miss rate (%)          Average miss rate (%)
                  Simplescalar   m-FICA                       Simplescalar   m-FICA     Simplescalar   m-FICA
64                100.0          100.0      256               99.8           99.9       99.8           99.9
128               100.0          100.0      512               99.2           99.6       99.2           99.6
256               99.8           99.9       1024              77.0           72.0       76.8           71.9
512               99.2           99.6       2048              0.1            0.2        0.1            0.2
1024              76.8           71.9       4096              0.0            0.2        0.0            0.1
Error             1.10%                     Error             1.16%                     1.13%

Table 2. Comparison between the estimation and the simulation results of the miss rate in L1 and L2 caches when the size of L2 is 4 times greater than L1 cache for Hierarchical Search application

L1 size (bytes)   L1 miss rate (%)          L2 size (bytes)   L2 miss rate (%)          Average miss rate (%)
                  Simplescalar   m-FICA                       Simplescalar   m-FICA     Simplescalar   m-FICA
64                99.9           100.0      256               92.7           87.5       92.5           87.5
128               97.3           96.0       512               68.2           63.3       66.4           60.8
256               92.6           87.5       1024              3.0            3.9        2.8            3.4
512               66.4           60.8       2048              2.3            5.3        1.6            3.2
1024              2.8            3.4        4096              53.3           15.9       1.5            0.5
Error             2.5%                      Error             10.2%                     2.8%

512

N. Kroupis and D. Soudris

Table 3. Comparison between the estimation and the simulation results of the miss rate in L1 and L2 caches when the size of L2 is 4 times greater than L1 cache for 3SLOG application

L1 size (bytes)   L1 miss rate (%)          L2 size (bytes)   L2 miss rate (%)          Average miss rate (%)
                  Simplescalar   m-FICA                       Simplescalar   m-FICA     Simplescalar   m-FICA
64                100.0          100.0      256               93.1           96.9       93.1           96.9
128               99.7           99.4       512               15.9           7.4        15.9           7.4
256               93.1           96.9       1024              2.0            1.0        1.9            0.9
512               15.9           7.4        2048              11.0           7.2        1.7            0.5
1024              1.9            0.9        4096              0.4            2.2        0.0            0.0
Error             2.7%                      Error             3.8%                      2.9%

Table 4. Comparison between the estimation and the simulation results of the miss rate in L1 and L2 caches when the size of L2 is 4 times greater than L1 cache for PHODS application

L1 size (bytes)   L1 miss rate (%)          L2 size (bytes)   L2 miss rate (%)          Average miss rate (%)
                  Simplescalar   m-FICA                       Simplescalar   m-FICA     Simplescalar   m-FICA
64                100.0          100.0      256               99.6           98.8       99.6           98.8
128               100.0          100.0      512               96.8           96.1       96.7           96.1
256               99.6           98.8       1024              31.8           23.0       31.7           22.7
512               96.7           96.1       2048              0.8            1.0        0.8            1.0
1024              31.7           22.7       4096              0.7            4.2        0.2            1.0
Error             2.1%                      Error             2.8%                      2.3%

Table 5. Comparison between the estimation and the simulation results of the miss rate in L1 and L2 caches when the size of L2 is 4 times greater than L1 cache for SS application

L1 size (bytes)   L1 miss rate (%)          L2 size (bytes)   L2 miss rate (%)          Average miss rate (%)
                  Simplescalar   m-FICA                       Simplescalar   m-FICA     Simplescalar   m-FICA
64                99.9           100.0      256               99.0           98.4       98.8           98.4
128               99.9           99.2       512               80.0           75.6       79.9           75.0
256               98.8           98.4       1024              0.5            0.0        0.5            0.0
512               79.9           75.0       2048              0.1            0.0        0.1            0.0
1024              0.5            0.0        4096              0.1            9.4        0.0            0.0
Error             1.3%                      Error             2.9%                      1.2%

Table 6. Comparison between the estimation and the simulation results of the miss rate in L1 and L2 caches when the size of L2 is 4 times greater than L1 cache for 1-D Wavelet application

L1 size (bytes)   L1 miss rate (%)          L2 size (bytes)   L2 miss rate (%)          Average miss rate (%)
                  Simplescalar   m-FICA                       Simplescalar   m-FICA     Simplescalar   m-FICA
64                98.7           99.3       256               50.9           43.6       50.3           43.3
128               89.9           92.7       512               1.5            0.4        1.3            0.4
256               50.3           43.3       1024              2.1            0.2        1.1            0.1
512               1.3            0.4        2048              1.5            4.4        0.0            0.0
1024              1.1            1.1        4096              1.4            14.3       0.0            0.2
Error             2.3%                      Error             5.2%                      1.8%


Table 7. Comparison between the estimation and the simulation results of the miss rate in L1 and L2 caches when the size of L2 is 4 times greater than L1 cache for Cavity Detector application

L1 size (bytes)   L1 miss rate (%)          L2 size (bytes)   L2 miss rate (%)          Average miss rate (%)
                  Simplescalar   m-FICA                       Simplescalar   m-FICA     Simplescalar   m-FICA
64                100.0          100.0      256               94.3           94.6       94.3           94.6
128               100.0          100.0      512               61.4           45.7       61.4           45.7
256               94.3           94.6       1024              17.9           0.8        16.9           0.8
512               61.4           45.7       2048              0.5            0.0        0.3            0.0
1024              16.9           0.8        4096              0.5            0.0        0.1            0.0
Error             6.4%                      Error             6.8%                      6.5%

Table 8. Comparison between the estimation and the simulation results of the miss rate in L1 and L2 caches when the size of L2 is 4 times greater than L1 cache for CQ application

L1 size (bytes)   L1 miss rate (%)          L2 size (bytes)   L2 miss rate (%)          Average miss rate (%)
                  Simplescalar   m-FICA                       Simplescalar   m-FICA     Simplescalar   m-FICA
64                100.0          100.0      256               89.2           84.2       89.1           84.2
128               99.4           98.7       512               46.8           3.5        46.5           3.5
256               89.1           84.2       1024              10.8           0.0        9.6            0.0
512               46.5           3.5        2048              0.8            0.0        0.3            0.0
1024              9.6            0.0        4096              0.4            100.0      0.0            0.0
Error             11.2%                     Error             17.9%                     11.6%

Apart from the accuracy of an estimation methodology (and tool), a second parameter crucial for its efficiency is the time required to obtain the estimates. Table 9 provides the (average) time cost, in seconds, of the simulation and estimation procedures for all benchmarks. Assuming an architecture with two levels of instruction cache, with L1 sizes from 64 bytes to 1024 bytes and L2 sizes from 128 bytes up to 4096 bytes under the constraint L2 > L1, there are 20 different size combinations. Using cache block sizes from 8 bytes to 32 bytes for the L1 and L2 caches, with L1_block_size ≤ L2_block_size, there are in total 6 block-size combinations. Hence, to completely explore the two-level instruction cache architecture, 20 × 6 = 120 simulation runs are needed for every application. The estimation and simulation computations were performed on a personal computer with a Pentium IV processor at 2 GHz and 1 Gbyte of RAM. It can be inferred that the proposed methodology offers a huge speedup (orders of magnitude) compared with the simulation-based approach. Consequently, the new methodology/tool is suitable for performing estimations with very high accuracy in the early design phases of an application. The exploration time of the simulation-based approach is proportional to the size of the trace file of the application considered (on the order of GBs). In contrast, the corresponding time cost of the proposed methodology is (almost) proportional to the size of the assembly code (on the order of KBs). From Table 9 it can be seen that the larger the number of loop iterations in the C code (and, of course, in the assembly code), the larger the speedup factor of the new methodology. Regarding the proposed


approach, we achieved time cost reductions between 40 and 70,000 times (i.e., up to four orders of magnitude), depending on the application characteristics. Thus, accurate estimation within an affordable time cost allows a designer to explore a larger design space (i.e., additional design parameters). In addition, the increasing complexity of modern applications, for instance image/video frames with higher resolution, will render the use of simulation tools impractical. Thus, for designing such complex systems, a high-level estimation tool will be the only viable solution.
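As a side check, the 20 × 6 = 120 explored configurations mentioned above can be reproduced by enumerating the size and block-size combinations stated in the text; this is only a counting sketch, not part of the estimation tool.

/* Counting the explored two-level configurations: 5 L1 sizes x 6 L2 sizes
 * with L2 > L1 give 20 size pairs, and 3 block sizes per level with
 * L1 block <= L2 block give 6 block pairs, i.e., 20 x 6 = 120 runs.    */
#include <stdio.h>

int main(void)
{
    const int l1[]  = { 64, 128, 256, 512, 1024 };
    const int l2[]  = { 128, 256, 512, 1024, 2048, 4096 };
    const int blk[] = { 8, 16, 32 };
    int sizes = 0, blocks = 0;

    for (int i = 0; i < 5; i++)
        for (int j = 0; j < 6; j++)
            if (l2[j] > l1[i]) sizes++;
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++)
            if (blk[i] <= blk[j]) blocks++;
    printf("%d x %d = %d simulations per application\n",
           sizes, blocks, sizes * blocks);
    return 0;
}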

Table 9. Speedup comparison of the proposed methodology against the simulation time, on a host machine with an Intel Pentium IV CPU at 2 GHz

                        FS       HS     3SLOG   PHODS   SS       Wavelet   Cavity     CQ
Simulation time (s)     73200    1920   2760    3480    77520    4320      1081080    795240
Estimation time (s)     4.8      27.3   7.2     9.45    7.2      105       15.3       27.45
Speedup                 15,250   70     383     368     10,767   41        70,659     28,970

4 Conclusions

A novel methodology for estimating the cache misses of multilevel instruction caches in an embedded programmable platform was presented. The methodology is based on the straightforward relationship between the high-level description code of an application and its corresponding assembly code. Having both types of code as inputs, we extract specific features from them. Using the proposed methodology, we can estimate critical application parameters during the early design phases, avoiding time-consuming simulation-based approaches. The m-FICA tool implements the proposed methodology and is an accurate instruction cache miss rate estimator. The proposed methodology achieves its estimates at a time cost that is orders of magnitude smaller than that of the simulation process.

Acknowledgments

This paper is part of the 03ED593 research project, implemented within the framework of the "Reinforcement Programme of Human Research Manpower" (PENED) and co-financed by National and Community Funds (75% from the E.U. European Social Fund and 25% from the Greek Ministry of Development, General Secretariat of Research and Technology).

References

[1] Zhang, D., Vahid, F.: Cache configuration exploration on prototyping platforms. In: 14th IEEE International Workshop on Rapid System Prototyping, June 2003, vol. 00, p. 164. IEEE, Los Alamitos (2003)
[2] Givargis, T., Vahid, F.: Platune: A tuning framework for system-on-a-chip platforms. IEEE Trans. Computer-Aided Design 21, 1–11 (2002)


[3] Silva-Filho, A.G., Cordeiro, F.R., Sant'Anna, R.E., Lima, M.E.: Heuristic for Two-Level Cache Hierarchy Exploration Considering Energy Consumption and Performance. In: Vounckx, J., Azemard, N., Maurine, P. (eds.) PATMOS 2006. LNCS, vol. 4148, pp. 75–83. Springer, Heidelberg (2006)
[4] Gordon-Ross, A., Vahid, F., Dutt, N.: Automatic Tuning of Two-Level Caches to Embedded Applications. In: DATE, pp. 208–213 (February 2004)
[5] Borg, A., Kessler, R., Wall, D.: Generation and analysis of very long address traces. In: International Symposium on Computer Architecture, May 1990, pp. 270–279 (1990)
[6] Mueller, F., Whalley, D.: Fast Instruction Cache Analysis via Static Cache Simulation. In: Proc. of 28th Annual Simulation Symposium, pp. 105–114 (1995)
[7] Lajolo, M., Lavagno, L., Sangiovanni-Vincentelli, A.: Fast instruction cache simulation strategies in a hardware/software co-design environment. In: Proc. of the Asian and South Pacific Design Automation Conference, ASP-DAC 1999 (January 1999)
[8] Nohl, A., Braun, G., Schliebusch, O., Leupers, R., Meyr, H.: A Universal Technique for Fast and Flexible Instruction-Set Architecture Simulation. In: Proc. of the 39th Conference on Design Automation, DAC 2002, New Orleans, Louisiana, USA, pp. 22–27 (2002)
[9] Edler, J., Hill, M.D.: A cache simulator for memory reference traces, http://www.neci.nj.nec.com/homepages/edler/d4
[10] Liveris, N., Zervas, N., Soudris, D., Goutis, C.: A Code Transformation-Based Methodology for Improving I-Cache Performance of DSP Applications. In: Proc. of DATE 2002, Paris, pp. 977–984 (2002)
[11] Austin, T., Larson, E., Ernst, D.: SimpleScalar: An Infrastructure for Computer System Modeling. Computer 35(2), 59–67 (2002)
[12] Kuhn, P.: Algorithms, Complexity Analysis and VLSI Architectures for MPEG-4 Motion Estimation. Kluwer Academic Publishers, Boston (1999)
[13] Nam, K., Kim, J.-S., Park, R.-H., Shim, Y.S.: A fast hierarchical motion vector estimation algorithm using mean pyramid. IEEE Transactions on Circuits and Systems for Video Technology 5(4), 344–351 (1995)
[14] Cheung, C.-K., Po, L.-M.: Normalized Partial Distortion Search Algorithm for Block Motion Estimation. IEEE Transactions on Circuits and Systems for Video Technology 10(3), 417–422 (2000)
[15] Lafruit, G., Nachtergaele, L., Vanhoof, B., Catthoor, F.: The Local Wavelet Transform: A Memory-Efficient, High-Speed Architecture Optimized to a Region-Oriented Zero-Tree Coder. Integrated Computer-Aided Engineering 7(2), 89–103 (2000)
[16] Danckaert, K., Catthoor, F., De Man, H.: Platform independent data transfer and storage exploration illustrated on a parallel cavity detection algorithm. In: ACM Conference on Parallel and Distributed Processing Techniques and Applications III, pp. 1669–1675 (1999)
[17] Dekker, A.H.: Kohonen neural networks for optimal colour quantization. Network: Computation in Neural Systems 5, 351–367 (1994)

A Statistical Model of Logic Gates for Monte Carlo Simulation Including On-Chip Variations

Francesco Centurelli, Luca Giancane, Mauro Olivieri, Giuseppe Scotti, and Alessandro Trifiletti

Dipartimento di Ingegneria Elettronica, Università di Roma "La Sapienza", Via Eudossiana 18, 00184 Roma, Italy {centurelli,giancane,olivieri,scotti,trifiletti}@mail.die.uniroma1.it

Abstract. Process variations are becoming a paramount design problem in nano-scale VLSI. We present a framework for the statistical model of logic gates that describes both inter-die and intra-die variations of performance parameters such as propagation delay and leakage currents. This allows fast but accurate behavioral-level Monte-Carlo simulations, that could be useful for full-custom digital design optimization and yield prediction, and enables the development of a yield-aware digital design flow. The model can incorporate correlation between mismatch parameters and dependence on distance and position, and can be extracted by fitting of Monte-Carlo transistor level simulations. An example implementation using Verilog-A hardware description language in Cadence environment is presented.

1 Introduction

Fluctuations in manufacturing process parameters can cause random deviations in the device parameters, which in turn can significantly impact yield and performance of both analog and digital circuits [1]. With each technology node, process variability has become more prominent, and has become an increasing concern in integrated circuits as circuit complexity continues to increase and feature sizes continue to shrink [2]. As integrated circuits have to be insensitive to such fluctuations to avoid parametric yield loss, appropriate analyses are needed in the design phase to assess that circuit performance under process variations remains inside the acceptability region in the performance space, and new yield-oriented design methodologies are required [3]. Process variations can be classified into two categories:

− inter-die variations (process variations), that affect all the devices on chip in the same way, and are due to variations in the manufacturing process;
− intra-die variations (mismatch variations), that correspond to variations of the parameters of the devices in the same chip, due to spatial non-uniformities in the manufacturing process.

Traditionally, inter-die variations were largely dominant, so that intra-die variations could be safely neglected in the design phase. However, in modern sub-micron


technologies, intra-die variations are rapidly and steadily growing and can significantly affect the variability of performance parameters on a chip [4]. Intra-die variations are spatially correlated, and are affected by the circuit layout and the surrounding environment. The problem of variability has usually been handled by analyzing the circuit at multiple process corners; in particular, digital designers adopt worst-case corner analysis (WCCA), assuming that a circuit that performs adequately at the extremes will also perform properly at nominal conditions [5]. However, worst-case methods do not provide adequate information about the yield and robustness of the design [6], and do not take intra-die variations into account. Moreover, WCCA is a deterministic analysis, which makes it an inadequate approach: good accuracy could only be obtained by considering a large number of process corners, thus losing the advantage of corner analysis in terms of computational efficiency. These issues with WCCA have led to the development of statistical techniques that rely on a more accurate representation of uncertainty and its impact on circuit functionality and performance [7]. Recently, as far as timing analysis in digital design is concerned, Statistical Static Timing Analysis (SSTA) has been proposed as an alternative to traditional Static Timing Analysis [8]-[10]. SSTA computes the probability distribution of the circuit delay, given the probability distributions of the delays of the logic gates and taking into account their possible correlations: thus intra-die variations are also considered. With increasing clock frequency and transistor scaling, both dynamic and static power dissipation have become a major source of concern in recent deep sub-micron technologies. At the nanometer scale, leakage currents make up a significant portion of the total power consumption in high-performance digital circuits [11], and show large fluctuations both between different dies and between devices on the same chip. Digital IC design can no longer consider only timing performance: a leakage-aware design methodology is needed to address power dissipation issues as well. Some authors [12]-[13] have underlined the need for a statistical approach to the analysis of leakage current as a key area in future high-performance IC design. However, the standard digital design flow used nowadays takes into account neither the leakage power consumption nor its statistical variability, and post-silicon tuning techniques have been developed to tighten the distribution of maximum operating frequency and maximum power consumption [14]. This scenario has to change to cope with the issues of future steps of deep sub-micron CMOS technology, to allow a design that provides high performance in terms of both speed and power consumption, with an acceptable yield. Yield prediction has to be integrated into CAD tools, and floorplanning and routing have to optimize yield, performance, power, signal integrity and area simultaneously [15]. As a first step in this direction, yield prediction based on Monte-Carlo (MC) simulations is sometimes used on the complete chip or on a specific part of it, such as the critical path. However, this approach requires a very large number of transistor-level simulations to reach good accuracy, and is thus highly demanding in terms of simulation time. In this paper we propose a statistical model of logic gates implemented in the Verilog-A language.
The model is extracted by fitting transistor-level MC simulations,


and describes the gate performance figures of interest (delay, leakage current, etc.) as functions of a certain number of technology-related statistical variables. Both process and mismatch variations can be considered, thus obtaining a model well suited to the needs of present-day and future technologies. An example implementation in the Cadence analog CAD environment is considered, since it makes available an MC simulation engine that is not part of the standard digital design flow. This model can be used to perform gate-level yield prediction based on MC simulation, thus obtaining a net decrease in simulation time with an accuracy comparable to transistor-level simulations, and it is a first step towards the development of a yield-aware design flow. Moreover, the analog extensions of the Verilog language can be used to accurately describe the transient behavior of the output signals, therefore also allowing fast but accurate simulations in a mixed-signal environment. The paper is structured as follows: Section 2 describes the structure of the logic gate model and its implementation in the Cadence environment, and Section 3 presents the extraction procedure. A case study is shown in Section 4 to assess the validity of the model and an example application, and some conclusions on possible developments of this work are drawn in Section 5.

2 Statistical Gate Model

A logic gate in a digital environment is described by a structural VHDL model, which contains information both on the function performed (RTL-level description) and on physical characteristics, such as propagation times and leakage currents, that can be functions of the input configuration. These characteristics, which in the following we call the gate figures of merit (FOMs), are described by a set of parameters: usually a certain number of gate libraries are defined, using different sets of parameters related to different process corners and environmental conditions (temperature and supply voltage). To perform gate-level statistical analyses, such as SSTA, this corner-based approach, where a set of values is defined for each parameter and the 'corners' take into account the correlation between parameters, has to be replaced by a probabilistic approach, where the parameters are defined as stochastic variables and a single statistical library is used. The correlation between the stochastic variables should be maintained, so that the resulting model can be effectively used for simultaneous optimization of all the FOMs and for yield prediction. To guarantee that the correlation is correctly taken into account, we propose a model structure where all the FOMs are described as functions of a set of stochastic variables that represent transistor-level parameters such as the threshold voltage, the body effect parameter and the unit-area gate-source capacitance. To minimize the number of stochastic variables while maintaining good accuracy in the description of the FOM statistics, a Principal Component Analysis (PCA) could be needed to determine the optimum set of variables to be used. The FOMs are also defined as functions of environmental variables (temperature, supply voltage) and circuit parameters such as transistor sizes and fan-out, thus allowing the definition of a scalable model that can be used for optimization.


The digital design flows in use nowadays do not include a Monte-Carlo simulation engine; therefore, we consider the implementation of the proposed model structure in the Cadence analog CAD environment. In this context, the analog extensions of hardware description languages, such as Verilog-A [16] or VHDL-AMS [17], can be used to also describe the transient behavior of the input and output signals, leading to more realistic time-domain simulations. This allows fast and accurate mixed-signal statistical simulations, where the digital blocks are described using this approach, whereas the analog part can be simulated at transistor level. An accurate characterization of the transient behavior can also be useful in a digital environment, since it allows a more accurate estimation of propagation times [18] and power consumption, and makes it possible to cope with issues such as reduced noise margins and metastability that are becoming more and more important in deep-submicron technologies [15]. In particular, the following analog functions can be useful:

- transition: to describe the transient behavior of the output variables (so that they can be interpreted as analog variables, i.e. voltages and currents);
- cross: to interpret the input variables as analog variables, and thus to consider their transient behavior and introduce the noise margins.

To implement the model in the Cadence environment, the logic gates are described by Verilog-A code where the stochastic variables are defined as parameters, and a single library file (*.scs) contains the statistical description of these parameters. Their average values are used for deterministic (typical) simulations, whereas the MC engine calculates the parameter values used in the single MC iterations, starting from the description in the library file. This approach makes it possible to describe both process and mismatch variations of the parameters through a suitable definition of the library file: a generic transistor-level parameter pi (e.g. the threshold voltage) can be defined as

pi = pi0 (1 + εi)    (1)

where pi0 is a stochastic variable that defines the process variation and is the same for all the gates on the same chip, and εi is another stochastic variable that describes the mismatch between different gates on the same chip. A mismatch variable is needed for each gate in the circuit to be simulated; therefore the library file has to contain a set of pi parameters for each gate, all sharing the same stochastic variables pi0 but with different mismatch coefficients εi. Mismatch correlation and dependence on position and distance can also be included in the model, through a suitable definition of the mismatch parameters.
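A minimal sketch of how one Monte-Carlo instance of such a parameter could be drawn according to eq. (1), with a uniform process term and Gaussian mismatch terms as used in the case study of Section 4; the nominal range and mismatch spread here are illustrative placeholders, not the library values.

/* Sketch of eq. (1): p_i = p_i0 (1 + eps_i). One uniform process draw is
 * shared by all gates on the die; each gate gets its own Gaussian mismatch. */
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

static double uniform(double lo, double hi)        /* process variation */
{
    return lo + (hi - lo) * rand() / (double)RAND_MAX;
}

static double gauss(double sigma)                  /* mismatch, Box-Muller */
{
    double u1 = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
    double u2 = rand() / (double)RAND_MAX;
    return sigma * sqrt(-2.0 * log(u1)) * cos(6.283185307179586 * u2);
}

int main(void)
{
    double vth0 = uniform(0.28, 0.32);             /* one draw per die */
    for (int g = 0; g < 4; g++) {
        double vth = vth0 * (1.0 + gauss(0.02));   /* per-gate instance */
        printf("gate %d: Vth = %.4f V\n", g, vth);
    }
    return 0;
}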

3 Model Extraction

A key issue in achieving good accuracy for the statistical model of the logic gate is the extraction procedure used to obtain the statistical description of the transistor-level parameters pi defined in the previous section. As a first step, appropriate equations have to be selected for the FOMs to be included in the model. These equations define the FOMs as functions of environmental variables, circuit variables and technological


parameters; some of the latter have to be chosen as the random variables pi. The statistical description of the random variables could be obtained directly from the statistical technology library, thus allowing the model to be used for technology comparison or performance prediction for future technology nodes [19]. However, for a more accurate fit, an extraction procedure that obtains the statistical description of the parameters pi from transistor-level simulations is needed, since the model tries to describe a much more complex statistical variability with a limited set of stochastic variables. The extraction procedure starts from Monte-Carlo transistor-level simulations of the logic gate, performed considering process variations and mismatch variations separately, as is usually possible with the statistical libraries of recent CMOS technologies. From these simulations, a statistical characterization of the FOMs of interest, in terms of average value and variance under process or mismatch variations, is obtained. The average values of the stochastic variables pi0 can be obtained by fitting the average values of the FOMs with the appropriate equations (the mismatch parameters εi have zero mean). The standard deviations of the stochastic variables can then be obtained by fitting the distributions of the FOMs obtained from transistor-level MC simulations with those obtained from MC simulations at the Verilog level; this step can be performed through an optimization procedure that computes the optimal values of the standard deviations of pi0 and εi to fit the standard deviations of the FOMs, using the appropriate equations relating the standard deviations and minimizing a suitable error function. The resulting model can be scaled as a function of circuit parameters such as MOS channel width and fan-out of the logic gate, thus providing a useful tool for fast circuit simulation and optimization.
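The sketch below illustrates this fitting step in its simplest one-dimensional form: the parameter spread is rescaled until the model's FOM standard deviation matches the transistor-level target. The function fom_std() is only a stand-in for a Verilog-level MC run, and its response curve is invented for the example.

/* Sketch of the fitting step: find sigma_p such that the model's FOM
 * spread matches the transistor-level target, by iterative rescaling. */
#include <math.h>
#include <stdio.h>

static double fom_std(double sigma_p)   /* placeholder for an MC estimate */
{
    return 0.8 * sigma_p + 0.05 * sigma_p * sigma_p;  /* monotone response */
}

int main(void)
{
    double target = 0.160;              /* e.g. Std(process) of log(IL00) */
    double sigma  = 0.1;                /* initial guess                  */

    for (int it = 0; it < 20; it++) {   /* fixed-point scaling iteration  */
        double got = fom_std(sigma);
        sigma *= target / got;          /* rescale by the mismatch ratio  */
        if (fabs(got - target) < 1e-6) break;
    }
    printf("fitted sigma_p = %.6f\n", sigma);
    return 0;
}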

4 Case Study

To verify the feasibility of the presented model framework in a real analog CAD environment, we have developed a model for a 2-input NAND gate in a 90 nm CMOS technology. The model has been implemented in the Cadence Design Framework II environment, using the Verilog-A hardware description language and exploiting its analog extensions as described in Section 2. We have considered as FOMs the propagation delay with a single active input and the leakage current, described by the following equations:

- for the propagation delay [20]:

tp = (FO + α) C VDD / (2 Wnp Cox vsat (VDD − Vth)) + k C / DF    (2)

where VDD is the supply voltage, Vth is the MOS threshold voltage, Cox is the oxide capacitance, vsat is the carrier saturation velocity, Wnp is the channel width averaged between the NMOS and PMOS transistors, C is the gate input capacitance, FO is the fan-out, α is the ratio between the output and the input capacitance of the logic gate, DF is the driving factor (the number of unity-width gates driving the gate under test) and k is a fitting parameter. The equation for the propagation delay


has been adapted empirically from [20]; the driving factor models the steepness of the input ramp (a higher driving factor corresponds to a steeper ramp).

- for the leakage current, the model in [21] has been modified as follows:

     IL00 = Ion exp(−Vth / (n VT)) exp(−η VDD / (n VT))   for A = B = 0
IL = IL01 = Ionp exp(−Vth / (n VT))                       for A ≠ B          (3)
     IL11 = 2 IL01 (Iop / Ionp)                           for A = B = 1

where VT = kT/q, n is the ideality coefficient in weak inversion, η is the DIBL (drain-induced barrier lowering) coefficient, and the terms Ion, Iop and Ionp are given by

Ioj = μj Cox (Wj / L) VT² exp(1.8)    (4)

where μ is the mobility, L the channel length, and W the channel width of the NMOS devices; the subscript j can be n, p or np (average between the n and p values). Eqs. (3) have been obtained empirically to fit transistor-level simulations of the leakage current. We consider as stochastic variables pi the threshold voltage Vth, the gate input capacitance C and the DIBL coefficient η, using a uniform distribution for the process variations and a Gaussian distribution for the mismatch coefficients εi, similarly to what is done in the statistical model of the transistor. The distribution of the leakage current is well matched by a lognormal probability distribution [22], due to the exponential relationship between leakage and the underlying device model parameters; therefore the logarithm of IL has been considered for histogram matching. Using minimum-size NMOS transistors (L = 90 nm, Wn = 120 nm), Wp/Wn = 2.5 and a supply voltage of 1.2 V, we obtain from process-only and mismatch-only MC transistor-level simulations the statistical characterization summarized in Table 1, using 100 Monte-Carlo iterations, which have proven to be enough to obtain stable results. Note that the values reported in Table 1 for the leakage current with different inputs (IL01) are averaged over the results for the 01 and 10 input configurations.
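For reference, eqs. (2)-(4) can be evaluated directly; in the sketch below all numerical values are illustrative placeholders in SI units, not the actual 90 nm library data, so the printed figures only indicate orders of magnitude.

/* Direct evaluation of the gate FOMs from eqs. (2)-(4), with invented
 * parameter values purely for illustration. Compile with -lm.         */
#include <math.h>
#include <stdio.h>

int main(void)
{
    /* eq (2): propagation delay */
    double FO = 1, alpha = 1.2, C = 2e-15, VDD = 1.2, Vth = 0.30;
    double Wnp = 210e-9, Cox = 1.5e-2, vsat = 8e4, k = 5e3, DF = 1;
    double tp = (FO + alpha) * C * VDD
                / (2 * Wnp * Cox * vsat * (VDD - Vth)) + k * C / DF;

    /* eq (4): I_on for the NMOS (j = n), then eq (3) for input A = B = 0 */
    double mu_n = 3e-2, L = 90e-9, Wn = 120e-9, VT = 0.0259;
    double n = 1.5, eta = 0.1;
    double Ion  = mu_n * Cox * (Wn / L) * VT * VT * exp(1.8);
    double IL00 = Ion * exp(-Vth / (n * VT)) * exp(-eta * VDD / (n * VT));

    printf("tp = %.3g s, IL00 = %.3g A\n", tp, IL00);
    return 0;
}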

Table 1. Results from transistor-level Monte-Carlo simulations

FOM         Average            Std (process)   Std (mismatch)
tp          15.568 ps          768 fs          635 fs
log(IL00)   −9.31 (487 pA)     0.160           0.359
log(IL01)   −8.76 (1.739 nA)   0.255           0.409

Technological parameters such as the saturation velocity, the mobility and the oxide capacitance have been extracted from the technology library, and the mean values (Avg) of the stochastic parameters have been obtained by matching equations (2) – (3) with the results of transistor-level simulations in typical conditions. As a next step, the


standard deviations (Std) of the parameters, for process and mismatch variations, have been extracted by matching the results of Verilog Monte-Carlo simulations to the values in Table 1. Table 2 summarizes the results, reporting the relative errors with respect to the transistor-level simulations. Very good results are obtained for the delay tp, whereas the accuracy of the model (3) affects the results for the leakage current IL00. The model has been checked by comparing the statistical description of the FOMs in the case of combined process and mismatch variations: Figs. 1 and 2 show some histograms for transistor-level and Verilog MC simulations, and Table 3 summarizes the results, showing a good agreement between the model and transistor-level simulations.

Table 2. Results from Verilog Monte-Carlo simulations

FOM         Average     ε%      Std (process)   ε%      Std (mismatch)   ε%
tp          15.593 ps   0.16%   768 fs          0.08%   635 fs           0.03%
log(IL00)   −9.34       0.28%   0.183           14%     0.375            4.3%
log(IL01)   −8.56       0.24%   0.184           0.16%   0.409            0.11%

Fig. 1. Histograms of the propagation delay under combined process and mismatch variations: a) transistor-level simulations; b) Verilog simulations

Fig. 2. Histograms of the log of the leakage current (for A≠B) under combined process and mismatch variations: a) transistor-level simulations; b) Verilog simulations


Table 3. Standard deviation of the FOMs under combined process and mismatch variations

FOM         CMOS     Verilog    ε%
tp          958 fs   1.027 ps   7.23%
log(IL00)   0.380    0.434      14.2%
log(IL01)   0.450    0.480      6.7%

As an example application, the cascade of 15 NAND gates has been considered to simulate the critical path of a microprocessor, and the distribution of the overall propagation delay under process and mismatch variations has been studied using the proposed model, and compared with transistor-level simulation. Tab. 4 summarizes the results, and Fig. 3 shows the histograms under combined process and mismatch variations.

Fig. 3. Histograms of the delay of the cascade of 15 gates under combined process and mismatch variations: a) transistor-level simulations; b) Verilog simulations

Table 4. Statistics of the 15-NAND delay under combined process and mismatch variations

FOM   CMOS         Verilog      ε%
Avg   261.164 ps   255.658 ps   2.11%
Std   12.908 ps    11.995 ps    7.08%

We have also checked the scalability of the model, which would make it possible to obtain a statistical library of gates with different transistor sizes by extracting the model of a single logic gate. The statistical description of the propagation delay tp extracted for a minimum-size NAND gate has been used to predict the delays of gates with larger transistors (with constant PMOS-to-NMOS width ratio). We have considered the scaled NAND gate driven and loaded by DF and FO minimum-size gates, respectively, and eq. (2) has been modified to take into account the scaling factor S = Wnp/Wmin. An empirical relationship has been obtained to correctly describe the dependence of tp on S and C:

tp = (FO + Sα) C VDD / (2 S Wmin Cox vsat (VDD − Vth)) + (k/DF) [γ (S − 1) C + C]    (5)


Table 5 compares the statistical characterization of the logic gate obtained from transistor-level and Verilog MC simulations, showing that a good agreement is maintained as the scaling factor increases.

Table 5. Statistical characterization of propagation delay vs. channel width (process variations)

S   Avg (CMOS)   Avg (Verilog)   ε%      Std (CMOS)   Std (Verilog)   ε%
1   15.568 ps    15.593 ps       0.16%   768 fs       768 fs          0.08%
2   15.818 ps    15.949 ps       0.83%   792 fs       805 fs          1.59%
3   16.677 ps    16.863 ps       1.11%   834 fs       827 fs          0.81%
4   17.813 ps    17.818 ps       0.03%   870 fs       844 fs          3%

5 Conclusions

We have presented a framework for a statistical model of logic gates that allows Monte-Carlo behavioral-level simulations in a standard analog CAD environment (Cadence). The model fits the statistics obtained from transistor-level simulations for both process and mismatch variations, and takes into account the correlation between the performance parameters of the gate (delays, leakage currents, etc.). The framework also makes it possible to include correlations between different gates and position-dependent mismatches, allowing a more realistic description of a digital IC. The proposed model allows very fast statistical simulations for circuit optimization and yield prediction of complex digital designs. An accurate analog description of the transient behavior can also be obtained by exploiting the analog extensions of the Verilog hardware description language, and can be useful for mixed-signal simulations or full-custom digital design optimization. The proposed model framework could also be used to develop some form of yield-aware design optimization that finds the best trade-off between performance parameters such as maximum delay, dynamic power consumption, leakage power consumption, etc. A simple case study has been presented to assess the feasibility of the proposed framework inside the Cadence analog CAD environment, and to test the extraction procedure for the stochastic variables used in the model. A good agreement is obtained for both the average value and the standard deviation of the performance parameters of interest, both for process and mismatch variations taken separately and when the two effects are considered together.

References

1. Chandrakasan, A., Bowhill, W.J., Fox, F.: Design of high-performance microprocessor circuits. Wiley, New York (2001)
2. Gneiting, T., Jalowiecki, I.P.: Influence of process parameter variations on the signal distribution behavior of wafer scale integration devices. IEEE Trans. Components, Packaging and Manufacturing Technology Part B 18(3), 424–430 (1995)


3. Chang, H., Qian, H., Sapatnekar, S.S.: The certainty of uncertainty: randomness in nanometer design. In: Macii, E., Paliouras, V., Koufopavlou, O. (eds.) PATMOS 2004. LNCS, vol. 3254, pp. 36–47. Springer, Heidelberg (2004)
4. Nassif, S.: Design for variability in DSM technologies. In: IEEE Int. Symp. Quality Electronic Design, pp. 451–454. IEEE Computer Society Press, Los Alamitos (2000)
5. Nardi, A., Neviani, A., Zanoni, E., Quarantelli, M., Guardiani, C.: Impact of unrealistic worst case modeling on the performance of VLSI circuits in deep submicron CMOS technologies. IEEE Trans. Semiconductor Manufacturing 12(4), 396–402 (1999)
6. Singhal, K., Visvanathan, V.: Statistical device models from worst case files and electrical test data. IEEE Trans. Semiconductor Manufacturing 12(4), 470–484 (1999)
7. Mutlu, A.A., Kwong, C., Mukherjee, A., Rahman, M.: Statistical circuit performance variability minimization under manufacturing variations. In: ISCAS 06 IEEE Int. Symp. on Circuits and Systems, pp. 3025–3028. IEEE Computer Society Press, Los Alamitos (2006)
8. Jyu, H.-F., Malik, S., Devadas, S., Keutzer, K.W.: Statistical timing analysis of combinatorial logic circuits. IEEE Trans. VLSI Systems 1(2), 126–137 (1993)
9. Chang, H., Sapatnekar, S.S.: Statistical timing analysis under spatial correlations. IEEE Trans. on CAD 24(9), 1467–1482 (2005)
10. Jess, J.A.G., Kalafala, K., Naidu, S.R., Otten, R.H.J.M., Visweswariah, C.: Statistical timing for parametric yield prediction of digital integrated circuits. IEEE Trans. on CAD 25(11), 2376–2392 (2006)
11. Agarwal, A., Mukhopadhyay, S., Raychowdhury, A., Roy, K., Kim, C.H.: Leakage power analysis and reduction for nanoscale circuits. IEEE Micro 26(2), 68–80 (2006)
12. Rao, R., Srivastava, A., Blaauw, D., Sylvester, D.: Statistical estimation of leakage current considering inter- and intra-die process variation. In: ISLPED 03 Int. Symp. Low-Power Electronics and Design, pp. 84–89 (2003)
13. Chang, H., Sapatnekar, S.S.: Full-chip analysis of leakage power under process variations, including spatial correlations. In: DAC 05 Proc. Design Automation Conf., pp. 523–528 (2005)
14. Chen, T., Naffziger, S.: Comparison of Adaptive Body Bias (ABB) and Adaptive Supply Voltage (ASV) for improving delay and leakage under the presence of process variation. IEEE Trans. VLSI Systems 11(5), 888–899 (2005)
15. ITRS: The International Technology Roadmap for Semiconductors, 2005 edn. (2005)
16. Verilog-A language reference manual, Version 1.0. Open Verilog International (1996)
17. Ashenden, P.J., Peterson, G.D., Teegarden, D.A.: The system designer's guide to VHDL-AMS. Morgan Kaufmann, San Francisco (2002)
18. Auvergne, A., Daga, J.M., Rezzoug, M.: Signal transition time effect on CMOS delay evaluation. IEEE Trans. Circuits and Systems I 47(9), 1362–1369 (2000)
19. Bowman, K.A., Duvall, S.G., Meindl, J.D.: Impact of die-to-die and within-die parameter fluctuations on the maximum clock frequency distribution for gigascale integration. IEEE J. Solid-State Circuits 37(2), 183–190 (2002)
20. Bowman, K.A., Austin, B.L., Eble, J.C., Tang, X., Meindl, J.D.: A physical alpha-power law MOSFET model. IEEE J. Solid-State Circuits 34(10), 1410–1414 (1999)
21. Gu, R.X., Elmasry, M.I.: Power dissipation analysis and optimization of deep submicron CMOS digital circuits. IEEE J. Solid-State Circuits 31(5), 707–713 (1996)
22. Rao, R., Devgan, A., Blaauw, D., Sylvester, D.: Parametric yield estimation considering leakage variability. In: DAC 04 Proc. Design Automation Conf., pp. 442–447 (2004)

Switching Activity Reduction of MAC-Based FIR Filters with Correlated Input Data

Oscar Gustafson1, Saeeid Tahmasbi Oskuii2, Kenny Johansson1, and Per Gunnar Kjeldsberg2

1 Department of Electrical Engineering, Linköping University, SE-581 83 Linköping, Sweden {oscarg,kennyj}@isy.liu.se
2 Department of Electronics and Telecommunications, Norwegian University of Science and Technology (NTNU), NO-7491 Trondheim, Norway {saeeid.oskuii,per.gunnar.kjeldsberg}@iet.ntnu.no

Abstract. In this work we consider coefficient reordering for low power realization of FIR filters on fixed-point multiply-accumulate (MAC) based architectures, such as DSP processors. Compared to previous work we consider the input data correlation in the ordering optimization. For this we model the input data using the dual bit type approach. Results show that compared with just optimizing the number of switches between coefficients, the proposed method works better when the input data is correlated, which can be assumed for most applications. Keywords: FIR filter, MAC, dual bit type, switching activity, coefficient reordering.

1 Introduction

Energy consumption is becoming the major cost measure when implementing integrated circuits. This trend is motivated both by the benefit of increased battery life for portable products and by reduced cooling problems. Many of these systems include a digital signal processing (DSP) subsystem which performs a convolution or a sum-of-products computation. These computations are often performed using a, possibly embedded, programmable DSP processor. Probably the most common form of convolution algorithm is the finite-length impulse response (FIR) filter. The output of an N:th-order FIR filter is computed as

y(n) = Σ_{i=0}^{N} h(i) x(n − i)    (1)

where the filter coefficients, h(n), determine the frequency response of the filter. The transfer function of the FIR filter is

H(z) = Σ_{i=0}^{N} h(i) z^{−i}    (2)


Fig. 1. (above) Direct form and (below) transposed direct form fifth-order FIR filter


Fig. 2. Multiply-accumulate (MAC) architecture suitable for realizing direct form FIR filters

The two most common filter structures for realizing the transfer function in (2) are the direct form and the transposed direct form structures depicted in Fig. 1. As can be seen from Fig. 1, the basic arithmetic operation is a multiplication followed by an addition. This is usually called a multiply-accumulate (MAC) operation and is commonly supported in programmable DSP processors [1]. If a direct form FIR filter is realized, the input data is stored in one memory, while the coefficients are stored in another memory. Each output is then computed by performing N + 1 MAC operations. An abstracted suitable architecture is shown in Fig. 2. It is also possible to use a similar architecture when implementing FIR filters in application-specific integrated circuits (ASICs) and field-programmable gate arrays (FPGAs) [2, 3]. Many FPGAs have dedicated general fixed-point multipliers and some even have a complete fixed-point MAC as a dedicated building block. For integrated circuits implemented in CMOS technology, the sources of power dissipation can be classified as dynamic, short-circuit, and leakage power. Even though the impact of leakage power increases with decreasing feature size, the


main source of power dissipation for many integrated circuits is still the dynamic power. The dynamic power consumption of a CMOS circuit is expressed as

Pdynamic = α C f VDD²    (3)

where α is the switching activity, C is the capacitance, f is the frequency, and VDD is the power supply voltage. In this work we focus on reducing the switching activity in fixed-point MAC-based realizations of direct form FIR filters. The same ideas can be applied to other convolutions and sum-of-products computations, but for clarity we only discuss FIR filters here. We focus on reducing the number of switches on the inputs of the multiplier. It should be noted that this will also decrease the number of switches on the buses connecting the memories to the multiplier. Previous work considering switching activity reduction of FIR filters on MAC-based architectures can be divided into two classes. The first class assumes that the MAC operations are performed in increasing (or decreasing) order and optimizes the coefficient values such that the number of switches between adjacent coefficients is small [4,5]. In [4] a heuristic optimization method is proposed, while in [5] an optimal method based on mixed integer linear programming is presented. The second class aims at reordering the computations such that the number of switches between succeeding coefficients is small [4,6]. In [4] a framework that optimizes both the coefficients and the order was proposed. However, the optimization and reordering are not performed simultaneously. Furthermore, the work in [4] neglects the fact that the input data is correlated. Input data correlation is treated in [6] by determining a lookup table based on simulation. This lookup table grows rapidly with increased filter length, and, hence, only short filters and convolutions are considered in [6]. There are also works where several output samples are computed interleaved with, typically, more than one accumulator, leading to reduced switching activity [7,8]. In this work we characterize the input data using the dual bit type method [9] and derive equations for computing the correlation between samples more than one sample period apart. This is used to formulate a Hamiltonian path (or traveling salesman, TSP) problem that is solved to find the best ordering of the computations. While the focus of this work is on FIR filters, similar techniques can be applied to other applications based on subsequent MAC operations. In the next section we review the issues related to correlated input data and derive the correlation equations for the input data. Then, in Section 3 the proposed optimization approach is presented. This approach is extended to include possible negation of coefficients in Section 3.2. In Section 4 results are presented that highlight the importance of the contribution. Finally, in Section 5 some concluding remarks are given.

2 Correlated Input Data

Fig. 3. Illustration of the dual bit type properties

Signals in real-world applications can in many cases be approximated as Gaussian stochastic variables. This means that their binary representations have

different switching probabilities for different bit positions. However, certain properties can be observed for different regions [9, 10, 11]. In this work we focus on two's complement representation, but a similar derivation can be performed using, e.g., sign-magnitude representation. The dual bit type (DBT) method [9] is based on the fact that the binary representation of most real-world signals can be divided into a few regions, where the bits of each region have a well-defined switching activity. In [9] three regions, LSB, linear, and MSB, were defined as illustrated in Fig. 3. Because of the linear approximation of the middle region, it was stated that two bit types are sufficient. In the LSB region the switching probability is 1/2, which corresponds to random switching. Hence, the bits are divided into a uniform white-noise (UWN) region, U, and a sign region, S, as shown in Fig. 3. The word-level statistics, i.e., mean, μ, variance, σ², and correlation, ρ, of the signal are used to determine the breakpoints and the switching activities. The correlation of a signal is computed as

ρ = (μΔ − μ²) / σ²    (4)

where μΔ is the average value of the signal multiplied by the signal delayed one sample. Typically, we have μ = 0, which implies that the probability of a bit in the two's complement representation being one, p, is 1/2. In [9] the breakpoints of the regions are defined as

BP0 = log2 σ + log2(√(1 − ρ²) + |ρ|/8)    (5)
BP1 = log2(|μ| + 3σ)    (6)
BP = (BP0 + BP1)/2    (7)

With a data wordlength of W bits, the number of bits in the region S is

WS = W − BP − 1    (8)

If pQ is the probability that a single-bit signal, Q, is one and the temporal correlation of Q is ρQ, then the switching activity, αQ, of Q is defined as [10]

αQ = 2 pQ (1 − pQ)(1 − ρQ)    (9)
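As a numerical sketch of eqs. (5)-(8), the snippet below computes the breakpoints and the sign-region width for an example signal; the word-level statistics are invented, and rounding BP to an integer bit index is our assumption.

/* DBT breakpoints per eqs. (5)-(8) for illustrative word-level statistics. */
#include <math.h>
#include <stdio.h>

int main(void)
{
    double sigma = 1024.0, rho = 0.9, mu = 0.0;   /* example statistics */
    int W = 16;                                   /* data wordlength    */

    double bp0 = log2(sigma) + log2(sqrt(1 - rho * rho) + fabs(rho) / 8.0);
    double bp1 = log2(fabs(mu) + 3.0 * sigma);
    double bp  = (bp0 + bp1) / 2.0;
    int ws = W - (int)ceil(bp) - 1;               /* eq (8), rounded up */

    printf("BP0 = %.2f, BP1 = %.2f, BP = %.2f, WS = %d\n", bp0, bp1, bp, ws);
    return 0;
}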


Fig. 4. Resulting switching activity for the MSB region, S, after reordering

The probability of a one is assumed to be 1/2 for all bits in the two's complement representation of a signal, as the mean value is 0. Furthermore, it was stated in [10] that the temporal correlation for bits in the MSB region is close to the word-level correlation, ρ. Hence, the switching activity in the MSB region can be computed as

αMSB = (1 − ρ)/2    (10)

Now, when the filter coefficients are reordered, the switching probability between adjacent data words changes. Let αm,D denote the switching probability between two bits at position m with time indices i and i + D. We have

αm,D = 0                                              for D = 0
αm,1 = 1/2 for m ∈ U, and αm,1 = αMSB for m ∈ S       for D = 1       (11)
αm,D = (1 − αm,1) αm,D−1 + αm,1 (1 − αm,D−1)          for D ≥ 2

In Fig. 4 the effect of reordering on the switching probability is shown for some initial switching probabilities, αMSB. From this it can be seen that the switching probability increases monotonically toward 1/2 with increasing distance between samples. Hence, while reordering may decrease the switching probability of the coefficients, it will increase the switching probability of the input data sent to the multiplier.
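The recursion in eq. (11) is straightforward to evaluate; the sketch below reproduces the trend of Fig. 4 for a single sign-region bit (the value αMSB = 0.1 corresponds to one of the plotted curves).

/* Switching activity alpha(m,D) per eq. (11) for a sign-region bit:
 * alpha approaches 1/2 monotonically as the distance D grows.        */
#include <stdio.h>

static double alpha_d(double alpha1, int D)
{
    if (D == 0) return 0.0;
    double a = alpha1;                       /* D = 1 */
    for (int d = 2; d <= D; d++)             /* D >= 2 recursion */
        a = (1 - alpha1) * a + alpha1 * (1 - a);
    return a;
}

int main(void)
{
    double alpha_msb = 0.1;                  /* one of the Fig. 4 curves */
    for (int D = 1; D <= 100; D *= 2)
        printf("D = %3d: alpha = %.4f\n", D, alpha_d(alpha_msb, D));
    return 0;
}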

3 Proposed Approach

Let the coefficient h(i) be represented using a B-bit two's complement representation as

h(i) = −b_{i,B−1} + Σ_{k=0}^{B−2} b_{i,k} 2^{−(B−1−k)}    (12)


where b_{i,j} ∈ {0, 1}. Hence, the number of switches when changing the coefficient from h(i) to h(j) (or vice versa) is

c_{h,i→j} = c_{h,j→i} = Σ_{k=0}^{B−1} b_{i,k} ⊕ b_{j,k}    (13)

This is the Hamming distance measure used for the coefficients in [4, 5, 6]. For the input data it is not possible to explicitly compute the number of switches. Instead, the switching probability from (11) is used to obtain

c_{x,i→j} = c_{x,j→i} = Σ_{k=0}^{W−1} α_{k,|i−j|}    (14)

The total transition cost for selecting h(j) as the input coefficient after h(i) is now

c_{tot,i→j} = c_{tot,j→i} = c_{h,i→j} + c_{x,i→j}    (15)

By forming a fully connected weighted graph with N + 1 nodes, where each node corresponds to a filter coefficient, h(i), and each edge weight is obtained from (15), the ordering problem becomes the problem of finding a path that visits all nodes in the graph once with minimum weight. This problem is known as a symmetric traveling salesman problem (TSP). The TSP problem is NP-hard, but it is in general possible to solve rather large instances in reasonable time. We have used GLPK [12] to solve problems with about 100 coefficients to optimality in tens of seconds.
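A sketch of how the cost matrix of eq. (15) can be assembled and used follows. The paper solves the resulting symmetric TSP exactly with GLPK; the greedy nearest-neighbour pass below is only a simple stand-in to show the costs in use, and both the coefficient values and the per-distance data costs cx[] are made up.

/* Cost matrix per eqs. (13)-(15) and a greedy ordering illustration. */
#include <stdio.h>
#include <stdlib.h>

#define N 4                                    /* toy filter: 4 taps */

static int hamming(unsigned a, unsigned b)     /* eq (13) */
{
    int d = 0;
    for (unsigned x = a ^ b; x; x >>= 1) d += x & 1;
    return d;
}

int main(void)
{
    unsigned h[N] = { 0x12, 0x7f, 0x15, 0x03 };  /* hypothetical coefficients */
    double cx[N]  = { 0.0, 8.0, 8.6, 9.0 };      /* eq (14): sum of alpha(k,D) */
    double cost[N][N];

    for (int i = 0; i < N; i++)                  /* eq (15): c_h + c_x */
        for (int j = 0; j < N; j++)
            cost[i][j] = hamming(h[i], h[j]) + cx[abs(i - j)];

    int used[N] = { 1 }, order[N] = { 0 };       /* greedy nearest neighbour */
    for (int s = 1; s < N; s++) {
        int best = -1;
        for (int j = 0; j < N; j++)
            if (!used[j] &&
                (best < 0 || cost[order[s-1]][j] < cost[order[s-1]][best]))
                best = j;
        order[s] = best;
        used[best] = 1;
    }
    for (int s = 0; s < N; s++) printf("h(%d) ", order[s]);
    printf("\n");
    return 0;
}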

3.1 Multiplier and Bus Power Consumption

It should be noted that switches at different inputs of the multiplier affect the power consumption differently [13]. Hence, if it is possible to characterize the multiplier used, it is also possible to weight the c_{h,i→j} and c_{x,i→j} terms. However, the results in [13] also indicate that the variation is not large. Our own simulations likewise show that when all inputs are randomly distributed with a switching and one-probability of 0.5, except for one input which is known to switch every cycle, the variation in power consumption is insignificant. Hence, this aspect is not included in the results. For the cases where we implement a custom multiplier rather than using an existing one in a DSP, an FPGA, or a macro library, it is worth noticing that it is possible to optimize the power consumption of the multiplier based on the expected switching probability [14]. For buses the traditional power model has been to count the number of switches. However, in deep sub-micron technology the inter-wire capacitances dominate over the wire-to-ground capacitances [15]. Hence, one should possibly include these in the cost function as well. In general it is hard to determine the exact bus structure, especially for DSPs and FPGAs, and, hence, in the results section we only consider the number of switches.


3.2 Selective Negation

In [4] it was proposed that if the MAC operation is able to conditionally subtract the output of the multiplier, it is possible to negate some of the coefficients to reduce the switching even further. However, no solution was provided as to how to decide which coefficients should be negated. In this work we evaluate these ideas by considering the case where we change all coefficients to positive values and selectively subtract the results that correspond to negative coefficients. In general, this could be solved using a modified TSP formulation known as the equality generalized TSP (E-GTSP) [16].
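A behavioural sketch of this selective negation: the coefficient memory stores magnitudes plus one sign bit, and the accumulator adds or subtracts accordingly; the coefficients and data below are hypothetical.

/* Selective negation: store |h(i)| plus a sign flag, and let the
 * accumulator add or subtract the product, giving the same output. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const int h[4] = { 3, -5, 2, -1 };       /* hypothetical coefficients */
    const int x[4] = { 10, 20, 30, 40 };     /* current data window       */
    int habs[4], sub[4], acc = 0;

    for (int i = 0; i < 4; i++) {            /* coefficient memory setup  */
        habs[i] = abs(h[i]);                 /* magnitude stored          */
        sub[i]  = h[i] < 0;                  /* extra sign bit stored     */
    }
    for (int i = 0; i < 4; i++) {            /* MAC loop with add/sub     */
        int prod = habs[i] * x[i];
        acc = sub[i] ? acc - prod : acc + prod;
    }
    printf("y = %d\n", acc);                 /* same result as direct MAC */
    return 0;
}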

4 Results

To illustrate the results of the proposed design technique we consider three FIR filters of varying lengths. These are optimized according to the methodology in Section 3 using different data distributions. All FIR filters are designed using the Remez exchange algorithm for the given specifications. For simplicity we assign the same weights to the passband and stopband ripples. The filter coefficients are scaled by a power of two such that the magnitude of the largest coefficient is represented using all available bits, i.e., 0.5 ≤ max(|h(i)|) < 1. Finally, the coefficients are rounded to the used wordlength. It should be noted that the used wordlengths are sufficient even for harder specifications [17]. Hence, it would be possible to design filters with shorter wordlengths for most designs. However, this aspect is not considered here; the rounded coefficients are used to demonstrate the properties of the proposed reordering methodology.

4.1 Design 1

For the first design, the passband and stopband edges are at 0.2π rad and 0.3π rad, respectively. Here we target a general-purpose DSP with a 24 × 24-bit multiplier. With a filter order of 65 we obtain the results shown in Table 1, where the switching activity denotes the total number of switches at the bus and multiplier inputs for a complete FIR filter computation. The results show that savings in switching activity between 1.5% and 7.2% are obtained by taking the correlation of the input data into account.

4.2 Design 2

For the second design, we consider implementation in an FPGA which includes a general 18 × 18-bit multiplier. Again we use an FIR filter designed using the Remez algorithm with identical maximum passband and stopband ripples. For the passband and stopband edges we select 0.6π rad and 0.8π rad, respectively. The filter coefficients are scaled as in the previous design. To obtain reasonable stopband attenuation we select a filter order of 40.


Table 1. Total switching activity of the data and coefficient values for Design 1

Data characteristics    Natural   Optimized for coefficient   Optimized using data   Reduction
                        order¹    Hamming distance [4]        characteristics
Random                  1472.0    1020.0                      1020.0                 -
WS = 4,  αMSB = 0.1     1366.4    1012.0                      996.6                  1.5%
WS = 4,  αMSB = 0.01    1342.6    943.5                       929.2                  1.5%
WS = 8,  αMSB = 0.1     1260.8    1004.0                      964.6                  3.9%
WS = 8,  αMSB = 0.01    1213.3    867.0                       837.7                  3.4%
WS = 12, αMSB = 0.1     1155.2    996.1                       924.0                  7.2%
WS = 12, αMSB = 0.01    1083.9    790.5                       744.3                  5.8%

Fig. 5. Resulting estimated switching activity for different coefficient orders (random data and orderings optimized for WS = 4, 8, and 12) with respect to the actual WS; αMSB = 0.1

As the statistical properties of the input signal are estimated, it is of interest to know how a change in the actual input characteristics affects the switching activity. For this we have considered the case where αMSB = 0.1 and varied WS for four different orderings: one optimized for coefficient Hamming distance [4] (corresponding to WS = 0) and ones optimized for WS = 4, 8, and 12. The results are shown in Fig. 5, where it can be seen that the orderings that consider the input data correlation in most cases result in less switching activity than not considering it (as in [4]), independent of the actual value of WS. Only for small values of WS are the orderings designed for large WS worse than the ordering designed for Hamming distance (corresponding to WS = 0). Hence, the proposed design methodology will reduce the switching activity as long as the estimated parameters are reasonably close to the actual parameters of the input data.

¹ The filter coefficients processed as h0, h1, h2, . . .


Table 2. Total switching activity of the data and coefficient values for Design 3

                        Original sign         Positive sign         Reduction
Data characteristics    Natural   Optimized   Natural   Optimized   Natural   Optimized
Random                  1890.0    1152.0      1604.0    1128.0      15.1%     2.1%
WS = 6,  αMSB = 0.1     1635.6    1111.1      1349.6    1068.1      17.5%     3.9%
WS = 6,  αMSB = 0.01    1578.4    963.8       1292.4    929.9       18.1%     3.5%
WS = 10, αMSB = 0.1     1470.0    1062.0      1184.0    1000.5      19.5%     5.8%
WS = 10, αMSB = 0.01    1374.9    829.2       1088.9    779.2       20.8%     6.0%

4.3 Design 3

In the third design we again consider implementation in an FPGA, and now study the effect of making all coefficient signs positive. This can easily be realized by replacing the adder in Fig. 2 with an adder/subtracter; to control it, an additional bit is required in the coefficient memory. It should be noted that this approach is similar to using the sign-magnitude number representation. The sign bit should possibly be included in the switching activity analysis: from a bus point of view it should be, while from a multiplier point of view it should not, as it does not affect the power consumption of the multiplier. We choose not to include it in the cost function in this design study. For this design we consider a 105th-order FIR filter with passband and stopband edges at 0.5π rad and 0.53π rad, respectively. The results are shown in Table 2 for the cases where the original signs are used and where all coefficients are transformed to positive sign. It is clear that while significant savings can be obtained by using absolute-valued coefficients when the multiplications are realized in their natural order, the advantage decreases when coefficient reordering is applied. The reason for the larger savings with natural ordering is that the coefficient values vary along the impulse response, leading to many switches in the sign bits; for the optimized version this is already accounted for by the optimization. One would expect that the switching activity can be reduced even further by using the E-GTSP formulation.

5 Conclusion

In this work we have proposed an approach to the low-power realization of FIR filters on MAC-based architectures that takes the input data correlation into account. The reordering of computations to reduce the switching activity now also depends on the input data correlation, which is represented using the dual bit type method. Furthermore, we proposed how to formulate the problem when coefficients may be negated to reduce the switching activity further. The proposed approach provides more accurate modeling than [4], which did not consider input data correlation. The results show that as long as the input data is correlated and the dual bit type parameter estimation is reasonably correct, the proposed methodology yields lower switching activity than [4]. Compared to [6], the proposed method can handle arbitrarily large FIR filters, while


the modeling in [6] depended on simulations, making it complex to obtain results for long filters due to the large look-up tables required. For this work the corresponding results are easily computed from the presented equations, although this requires that the input data is characterized for use with the dual bit type method.

References
1. Lapsley, P., Bier, J., Shoham, A., Lee, E.A.: DSP Processor Fundamentals: Architectures and Features. Wiley-IEEE Press (1997)
2. Wanhammar, L.: DSP Integrated Circuits. Academic Press, London (1999)
3. Meyer-Baese, U.: Digital Signal Processing with Field Programmable Gate Arrays. Springer, Heidelberg (2001)
4. Mehendale, M., Sherlekar, S.D., Venkatesh, G.: Low-power realization of FIR filters on programmable DSPs. IEEE Trans. VLSI Systems 6(4), 546–553 (1998)
5. Gustafsson, O., Wanhammar, L.: Design of linear-phase FIR filters with minimum Hamming distance. In: Proc. IEEE Nordic Signal Processing Symp., Hurtigruten, Norway, October 4–7, 2002 (2002)
6. Masselos, K., Merakos, P., Theoharis, S., Stouraitis, T., Goutis, C.E.: Power efficient data path synthesis of sum-of-products computations. IEEE Trans. VLSI Systems 11(3), 446–450 (2003)
7. Arslan, T., Erdogan, A.T.: Data block processing for low power implementation of direct form FIR filters on single multiplier CMOS DSPs. In: Proc. IEEE Int. Symp. Circuits Syst., Monterey, CA, vol. 5, pp. 441–444 (1998)
8. Parhi, K.K.: Approaches to low-power implementations of DSP systems. IEEE Trans. Circuits Syst.–I 48(10), 1214–1224 (2001)
9. Landman, P.E., Rabaey, J.M.: Architectural power analysis: The dual bit type method. IEEE Trans. VLSI Systems 3(2), 173–187 (1995)
10. Ramprasad, S., Shanbhag, N.R., Hajj, I.N.: Analytical estimation of transition activity from word-level signal statistics. In: Proc. Design Automat. Conf., June 1997, pp. 582–587 (1997)
11. Lundberg, M., Muhammad, K., Roy, K., Wilson, S.K.: A novel approach to high-level switching activity modeling with applications to low-power DSP system synthesis. IEEE Trans. Signal Processing 49(12), 3157–3167 (2001)
12. GNU Linear Programming Kit 4.16, http://www.gnu.org/software/glpk/
13. Hong, S., Chin, S.-S., Kim, S., Hwang, W.: Multiplier architecture power consumption characterization for low-power DSP applications. In: Proc. IEEE Int. Conf. Elec. Circuits Syst., Dubrovnik, Croatia, September 15–18, 2002 (2002)
14. Oskuii, S.T., Kjeldsberg, P.G., Gustafsson, O.: Transition-activity aware design of reduction-stages for parallel multipliers. In: Proc. Great Lakes Symp. on VLSI, Stresa-Lago Maggiore, Italy, March 11–13, 2007 (2007)
15. Caputa, P., Fredriksson, H., Hansson, M., Andersson, S., Alvandpour, A., Svensson, C.: An extended transition energy cost model for buses in deep submicron technologies. In: Proc. Int. Workshop on Power and Timing Modeling, Optimization and Simulation, Santorini, Greece, September 15–17, 2004, pp. 849–858 (2004)
16. Fischetti, M., Salazar González, J.J., Toth, P.: The generalized traveling salesman problem. In: The Traveling Salesman Problem and Its Variations, pp. 609–662. Kluwer Academic Publishers, Dordrecht (2002)
17. Kodek, D.M.: Performance limit of finite wordlength FIR digital filters. IEEE Trans. Signal Processing 53(7), 2462–2469 (2005)

Performance of CMOS and Floating-Gate Full-Adders Circuits at Subthreshold Power Supply

Jon Alfredsson¹ and Snorre Aunet²

¹ Department of Information Technology and Media, Mid Sweden University, SE-851 70 Sundsvall, Sweden, [email protected]
² Department of Informatics, University of Oslo, Postbox 1080 Blindern, 0316 Oslo, Norway, [email protected]

Abstract. To reduce power consumption in electronic designs, new circuit design techniques must always be considered. Floating-gate MOS (FGMOS) is one such technique and has previously shown potentially better performance than standard static CMOS circuits for ultra-low power designs. One reason for this is that FGMOS requires only a few transistors per gate while still retaining a large fan-in. Another reason is that CMOS circuits become very slow in the subthreshold region and are not suitable for many applications, whereas FGMOS can shift its threshold voltage to increase speed. This paper investigates how the performance of an FGMOS full-adder circuit compares with two common CMOS full-adder designs. Simulations in a 120 nm process show that FGMOS can have up to 9 times better EDP performance at 250 mV. The simulations also show that the FGMOS full-adder is 32 times faster, but has two orders of magnitude higher power consumption than CMOS.

1 Introduction

It has become more and more important to reduce the power consumption of circuits while still achieving as high a switching speed as possible. The increasing demand for longer lifetimes in portable and battery-driven applications is one of the strongest driving forces pushing the limits of ultra-low power consumption. According to the ITRS Roadmap for Semiconductors [18], the two most important of the five "grand challenges" for future nanoscale CMOS are to reduce power consumption and to design for manufacturability. In this work we focus on reducing power consumption; the challenge of designing for manufacturability is left for future work within this topic. One way to reduce power is to explore new types of circuits in order to find better circuit techniques for energy savings. Floating-gate MOS (FGMOS) is a circuit technique that has been proposed in several previous works as a potentially good technique to reduce power consumption while maintaining relatively high speed [1],[2],[3]. FGMOS is normally fabricated in a standard CMOS process, where an extra capacitance is connected to the transistor's gate node. This capacitance, called the floating-gate capacitance, makes it possible to shift the


threshold voltage level of the MOS transistors. The required effective threshold voltage of the gate will thereby change; the shift is controlled by the floating-gate's node charge voltage [1],[3]. A shift in threshold voltage also changes the static current (and power consumption), normally to a higher value, while the propagation delay of the circuit changes as well (normally becoming smaller). The maximum propagation delay, tp, and the power consumption, P, are two figures of merit that are important in FGMOS designs and must be considered when simulating with different fan-ins. In our simulations we have used power consumption (P), power-delay product (PDP) and energy-delay product (EDP) as figures of merit to determine differences in performance [4]. The approach taken in this work to reduce power consumption and increase performance is to lower the circuit's power supply voltage into the subthreshold region. Previous work in this area has shown that FGMOS circuits working in subthreshold should not have a fan-in higher than 3 in order to retain advantages over CMOS [14]. This advice has been taken into account in this work, where we use an FGMOS full-adder with a maximum fan-in of 3 and compare it to two common basic CMOS full-adders with respect to power and speed performance. The aim of this work is to determine whether the FGMOS full-adder shows better performance than normal CMOS full-adders when the power supply is reduced below the threshold voltage. This is important knowledge, since subthreshold designs have frequently been proposed as good candidates for ultra-low power consumption [19]. In this article we show that when the power supply is reduced into the subthreshold region (250 mV), the FGMOS circuits have up to 9 times better EDP and 32 times higher speed than the CMOS circuits. However, FGMOS also pays a penalty of over two orders of magnitude higher power consumption and a worse PDP.

2 FGMOS Basics

The FGMOS technique is based on normal MOSFET transistors and CMOS process technology. The transistors are manufactured with an extra gate capacitance in series with the transistor's gate; because of it, FGMOS can shift the effective threshold potential required by the transistor. The shift in threshold is made by charging the node between the extra gate capacitance and the normal transistor gate. If there is no charge leakage, the node is said to be floating and the circuit is called a true floating-gate circuit. The added capacitance is called the floating-gate capacitance (CFG). Figure 1 shows a floating-gate transistor and a majority gate with fan-in 3 designed in FGMOS. Depending on the size of the floating-gate charge voltage (VFG), the effective threshold voltage varies. VFG is determined during the design process; the floating-gate circuits are subsequently programmed with the selected VFG once and remain fixed during operation [6]. The floating-gate potential VFG can be implemented via a variety of methods. For true floating gates, hot-electron injection, electron tunnelling or UV-exposure is normally used [3],[10],[11]. If the CMOS process has a gate-oxide thickness of 70 Å or less [7], some kind of refresh or auto-biasing technique is also required, as gate charge leakage becomes significant [8].


Fig. 1. FGMOS transistor (left) and FGMOS Majority-3 gate with fan-in 3 (right)

3 Full-Adder Designs

The full-adder is one of the most used basic circuits, since addition of binary numbers is one of the most common operations in digital electronics. Full-adders exist everywhere in electronic systems, and a large amount of research has been done in this area in order to achieve the best possible performance [12],[13],[15]. Many different full-adder designs exist; this work focuses on two of the most commonly used basic CMOS full-adders. A standard static CMOS full-adder design (Figure 4) and a mirrored-gate based full-adder (Figure 3) have

Fig. 2. FGMOS full-adder with fan-in 3


been used in our simulations to determine speed and power performance compared to a floating-gate full-adder. The floating-gate full-adder is a recently improved adder structure with a maximum fan-in of 3 [14]. This full-adder is shown in Figure 2 and has shown potential to be better than CMOS at subthreshold.

Fig. 3. Mirrored CMOS full-adder

Fig. 4. Standard static CMOS full-adder

4 Full-Adder Simulations

The simulations have been performed in Cadence with the Spectre simulator in a 120 nm CMOS process technology; the transistors used are of low-leakage type.


The transistors use minimum gate lengths, 120 nm (effective), with a width of 150 nm for NMOS and 380 nm for PMOS. The threshold voltage, Vth, of these low-leakage transistors is 383 mV for NMOS and -368 mV for PMOS according to the simulations. Previous research on full-adders at subthreshold power supply suggests that the EDP of FGMOS can be better than that of CMOS if the fan-in of the floating-gate circuit is below four [14]. For this reason, a floating-gate full-adder structure with a fan-in of three has been used. In this work the simulations have been performed for three types of full-adders: one FGMOS and two CMOS. The power supply for the simulations is chosen between 150 mV and 250 mV, since previous simulations have shown that this is the subthreshold range with the best performance. The propagation delay, tp, of a full-adder varies with each different state change on the inputs; therefore, the results from our simulations are based on the slowest input-to-output change [15]. EDP is calculated from the average power consumption (P) and the minimum signal propagation delay, tp, according to Eq. (1); it is the power consumed to drive the output to 90% of its final value multiplied by the propagation delay squared.

EDP = PDP · tp = Iavg · Vdd · tp · tp = P · tp²    (1)

Iavg is the average switching current and tp is the inverter’s minimum propagation delay [4].
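For reference, these figures of merit amount to a few lines of Python; the current, voltage and delay values are assumed to come from the circuit simulations.

def figures_of_merit(i_avg, vdd, t_p):
    p = i_avg * vdd      # average power consumption P
    pdp = p * t_p        # power-delay product
    edp = pdp * t_p      # energy-delay product, Eq. (1)
    return p, pdp, edp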

5 Results

The simulation results from this work should determine whether FGMOS can be used to design better full-adder circuits than static and mirrored CMOS. Figure 5 and Figure 6 show plots of EDP for the circuits at 150 mV and 250 mV power supply. As seen, the

Fig. 5. EDP for Floating-gate and CMOS full-adders at 250 mV


Fig. 6. EDP for Floating-gate and CMOS full-adders at 150 mV

Fig. 7. PDP for different full-adders at 250 mV power supply

EDP can be up to 9 times better for FGMOS at 250 mV, depending on how the floating-gate voltage VFGp is chosen. In all figures, the EDP of CMOS is plotted as straight horizontal lines for easy comparison with FGMOS. The plots show the limit of how large the floating-gate voltage can be while the circuit's gain remains higher than one: if the floating-gate voltage VFGp is set more negative than in these plots, the signal is attenuated by each gate. Figure 7 shows the PDP (at 250 mV), which is almost constant over all applied floating-gate voltages and is approximately 4 times worse than the PDP of each


of the CMOS full-adders. Similar results are obtained from simulations at 150 mV. Plots of the simulated propagation delay can be seen in Figure 8 and Figure 9; the FGMOS full-adder has up to 33 times shorter delay than the CMOS versions at 250 mV.

Fig. 8. Propagation delay for the different full-adders at 250 mV. The horizontal lines are for the CMOS circuits.

Figure 10 shows the power consumption at 250 mV and it is more than two orders of magnitude higher for FGMOS (114 times).

Fig. 9. Propagation delay for the different full-adders at 150 mV. The horizontal lines are for the CMOS circuits.


Fig. 10. Power consumption for the three types of full-adders

6 Discussion

Previous studies have shown that FGMOS circuits can achieve better EDP performance in the subthreshold region than normal static CMOS, provided the fan-in is not more than three [2],[14]. While there is an advantage in EDP for FGMOS in subthreshold, there is also a penalty of worse PDP and power consumption that needs to be taken into account. The simulation results in this work show that the EDP can be up to 9 times better for the FGMOS full-adder compared to the static CMOS design. They also show an advantage in switching speed, which is 33 times higher for FGMOS than for the CMOS full-adders at 250 mV; even at 150 mV, the switching speed is more than 3 times better for FGMOS. The mirrored CMOS and static CMOS full-adder circuits were chosen for comparison with FGMOS since they have shown some of the best results among commonly used full-adders in terms of P, PDP and EDP [12],[13]. Note also that the mirrored-gate full-adder has better performance than the static CMOS full-adder in all three figures of merit. Even though the results from the simulations performed in this work show a clear advantage for FGMOS when certain design constraints are fulfilled, it must be taken into account that it might not be possible to design the FGMOS with a true floating gate. Some kind of refresh circuit could be required, either a large resistance or a switch that retains or recharges the voltage on the floating-gate node [16],[17]. This will of course have an impact on performance. Especially for state-of-the-art and future process technologies, where the gate-oxide thickness decreases with every generation, this will be an issue to look into carefully during the design process. There is still a lot of research to be done within the field of subthreshold FGMOS to find out more advantages or limitations. Work closely related to the topic of this article could be a more detailed analysis of netlists extracted from layout and performing real


measurements. It would also be interesting to find out how statistical process variations and mismatches between components affect the performance.

7 Conclusions

Using FGMOS circuits at subthreshold power supply can give a several-times improvement in EDP and more than one order of magnitude better gate propagation delay than comparable CMOS circuits. These performance advantages will hopefully lead to more ultra-low power circuits with higher switching-frequency requirements. While the FGMOS circuits can be much faster and have better EDP than CMOS, they also have significantly higher power consumption, which on the other hand decreases the number of possible applications for FGMOS. The performance constraints for FGMOS designs in subthreshold, especially the power consumption, will be one of the major limiting factors deciding whether floating-gate circuits can be used in a specific design.

References
[1] Shibata, T., Ohmi, T.: A Functional MOS Transistor Featuring Gate-Level Weighted Sum and Threshold Operations. IEEE Transactions on Electron Devices 39 (1992)
[2] Alfredsson, J., Aunet, S., Oelmann, B.: Basic speed and power properties of digital floating-gate circuits operating in subthreshold. In: IFIP VLSI-SoC 2005, Proc. of the IFIP International Conference on Very Large Scale Integration, Australia, October 2005 (2005)
[3] Hasler, P., Lande, T.S.: Overview of floating-gate devices, circuits and systems. IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing 48(1) (January 2001)
[4] Stan, M.R.: Low-power CMOS with subvolt supply voltages. IEEE Transactions on VLSI Systems 9(2) (April 2001)
[5] Rodríguez-Villegas, E., Huertas, G., Avedillo, M.J., Quintana, J.M., Rueda, A.: A Practical Floating-Gate Muller-C Element Using vMOS Threshold Gates. IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing 48(1) (January 2001)
[6] Aunet, S., Berg, Y., Ytterdal, T., Næss, Ø., Sæther, T.: A method for simulation of floating-gate UV-programmable circuits with application to three new 2-MOSFET digital circuits. In: The 8th IEEE International Conference on Electronics, Circuits and Systems, vol. 2, pp. 1035–1038 (2001)
[7] Rahimi, K., Diorio, C., Hernandez, C., Brockhausen, M.D.: A simulation model for floating-gate MOS synapse transistors. In: ISCAS 2002, Proc. of the 2002 IEEE International Symposium on Circuits and Systems, May 2002, vol. 2, pp. 532–535 (2002)
[8] Ramírez-Angulo, J., López-Martín, A.J., González Carvajal, R., Muñoz Chavero, F.: Very low-voltage analog signal processing based on quasi-floating gate transistors. IEEE Journal of Solid-State Circuits 39(3), 434–442 (2004)
[9] Schrom, G., Selberherr, S.: Ultra-Low-Power CMOS Technologies (invited paper). In: Proc. of the International Semiconductor Conference, vol. 1, pp. 237–246 (1996)


[10] Aunet, S.: Real-time reconfigurable devices implemented in UV-light programmable floating-gate CMOS. Ph.D. Dissertation 2002:52, Norwegian University of Science and Technology, Trondheim, Norway (2002)
[11] Rabaey, J.M.: Digital Integrated Circuits – A Design Perspective, pp. 188–193. Prentice Hall, Englewood Cliffs (2003)
[12] Alioto, M., Palumbo, G.: Impact of Supply Voltage Variations on Full Adder Delay: Analysis and Comparison. IEEE Transactions on Very Large Scale Integration (VLSI) Systems 14(12) (December 2006)
[13] Granhaug, K., Aunet, S.: Six Subthreshold Full Adder Cells Characterized in 90 nm CMOS Technology. In: Design and Diagnostics of Electronic Circuits and Systems, pp. 25–30 (April 2006)
[14] Alfredsson, J., Aunet, S., Oelmann, B.: Small Fan-in Floating-gate Circuits with Application to an Improved Adder Structure. In: Proc. of the 20th International Conference on VLSI Design, Bangalore, India, January 2007 (2007)
[15] Shams, A.M., Bayoumi, M.A.: A Framework for Fair Performance Evaluation of 1-bit Full Adder Cells. In: 42nd Midwest Symposium on Circuits and Systems, vol. 1, pp. 6–9 (1999)
[16] Seo, I., Fox, R.M.: Comparison of Quasi-/Pseudo-Floating Gate Techniques. In: Proceedings of the International Symposium on Circuits and Systems (ISCAS 2004), May 2004, vol. 1, pp. 365–368 (2004)
[17] Alfredsson, J., Oelmann, B.: Influence of Refresh Circuits Connected to Low Power Digital Quasi-Floating Gate Designs. In: Proceedings of the 13th IEEE International Conference on Electronics, Circuits and Systems (ICECS 2006), Nice, France, December 2006 (2006)
[18] International Technology Roadmap for Semiconductors, web page documents, http://public.itrs.net
[19] Lande, T.S., Wisland, D.T., Sæther, T., Berg, Y.: Flogic – Floating Gate Logic for Low-Power Operation. In: Proceedings of the International Conference on Electronics, Circuits and Systems (ICECS'96), April 1996, vol. 2, pp. 1041–1044 (1996)

Low-Power Digital Filtering Based on the Logarithmic Number System

Ch. Basetas, I. Kouretas, and V. Paliouras

Electrical and Computer Engineering Department, University of Patras, Greece

Abstract. This paper investigates the use of the Logarithmic Number System (LNS) as a low-power design technique for signal processing applications. In particular, we focus on power reduction in implementations of FIR and IIR filters. It is shown that LNS requires a reduced word length compared to linear representations in cases of practical interest. Synthesis of circuits that perform basic arithmetic operations using a 0.18 μm 1.8 V CMOS standard-cell library reveals that power dissipation savings of more than 60% are possible in some cases.

1 Introduction

Data representation plays an important role in low-power signal processing system design, since it affects both the switching activity and the processing circuit complexity [1][2]. Over the last decades, the Logarithmic Number System (LNS) has been investigated as an efficient way to represent data in VLSI processors. Traditionally, the motivation for considering LNS as a possibly efficient data representation in VLSI is the inherent simplification of the basic arithmetic operations: multiplication, division, roots, and powers are reduced to addition, subtraction, and right and left shifts, respectively, due to the properties of the logarithm. Beyond the simplification of basic arithmetic, LNS provides interesting roundoff-error behavior, resembling that of floating-point arithmetic; in fact, LNS-based systems have been proposed with characteristics similar to the 32-bit single-precision floating-point representation [3]. Recently, LNS has been proposed as a means to reduce power dissipation in signal processing applications, ranging from hearing-aid devices [4] and subband coding [5] to video processing [6] and error control [7]. The properties of logarithmic arithmetic have been studied [8,9], and it has been demonstrated that under particular conditions the choice of the representation parameters can reduce the switching activity while guaranteeing the quality of the output, evaluated in terms of measures such as the signal-to-noise ratio (SNR). The impact of the selection of the base b of the logarithm has been investigated as a means to explore trade-offs between precision and dynamic range for a given word length. However, these works treat the subject at a representational level only, without power estimation data based on simulations.


In this paper we demonstrate that there are practical cases where a reduced LNS representation can replace a linear representation of longer word length without imposing any degradation of the signal quality. Furthermore, we quantify the power dissipated by synthesized LNS arithmetic circuits and equivalent fixed-point circuits, demonstrating that substantial power savings are possible. Finally, the implementation of the LNS processing units in a 0.18 μm library is discussed. It is shown that for a practical range of word lengths, gate-level implementations of the required look-up tables provide significant benefits over the linear approach, while the use of low-power techniques, such as enabled latched inputs to independent blocks, significantly reduces the power dissipation; indeed, the basic structure of the LNS adder allows easy application of this technique. The remainder of the paper is organized as follows: Section 2 describes the basics of LNS operation. Section 3 discusses cases where a smaller LNS word length suffices when compared to a linear fixed-point representation. Section 4 discusses the power aspects of this approach, while conclusions are drawn in Section 5.

2 LNS Basics

In LNS, a number X is represented as the triplet

X = (z_x, s_x, x),    (1)

where z_x is asserted in the case that X is zero, s_x is the sign of X, and x = log_b(|X|), with b the base of the logarithm and of the representation. The choice of b plays a crucial role in the representational capabilities of the triplet in (1), as well as in the computational complexity of the processing units and of the forward and inverse conversion units. Due to the basic properties of the logarithm, the multiplication of X and Y is reduced to the computation of the triplet Z,

Z = (z_z, s_z, z),    (2)

where z_z = z_x or z_y, s_z = s_x xor s_y, and z = x + y; similarly for the case of division. The derivation of the logarithm s of the sum S of two triplets is more involved, as it relies on the computation of

s = max{x, y} + log_b(1 + b^(-|x-y|)).    (3)

Similarly, the derivation of the difference of two numbers requires the computation of

d = max{x, y} + log_b(1 - b^(-|x-y|)).    (4)
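A behavioural Python sketch of these operations on triplets (z, s, x) is given below. It is a software illustration of equations (2)-(4) only; in the hardware adder of Section 4 the logarithm evaluation is performed by the add/subtract look-up tables.

import math

def lns_mul(X, Y):
    (zx, sx, x), (zy, sy, y) = X, Y
    return (zx or zy, sx ^ sy, x + y)         # Eq. (2): signs XOR, logs add

def lns_add(X, Y, b):
    (zx, sx, x), (zy, sy, y) = X, Y
    if zx:
        return Y
    if zy:
        return X
    m, d = max(x, y), abs(x - y)
    if sx == sy:                               # same signs: use Eq. (3)
        return (False, sx, m + math.log(1 + b ** -d, b))
    if d == 0:                                 # equal magnitudes cancel to zero
        return (True, False, 0.0)
    s_larger = sx if x > y else sy             # sign of the larger operand
    return (False, s_larger, m + math.log(1 - b ** -d, b))   # Eq. (4)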


Assume that a two’s-complement word is used to represent the logarithm x, composed of a k-bit integral part and an l-bit fractional part. The range DLNS spanned by x is   −l k−1 −l  * * k−1 −l 2−l DLNS = b2 , b2 −2 , (5) {0} −b2 −2 ,−b to be compared with the range of (−2i−1 , 2i−1 −2−f ) of a linear two’s-complement representation of i integral bits and f fractional bits. In general, LNS offers a superior range, over the linear two’s-complement representation. This is achieved using comparable word lengths, by departing from the strategy of equispaced representable values and by resorting to a scheme that resembles floating-point arithmetic.

3 LNS Representation in the Implementation of Digital Filters

By capitalizing on the representational properties of LNS, this section investigates cases of filters and shows that a word-length reduction due to LNS is feasible in practical cases.

3.1 Feedback Filters

Fig. 1 depicts a second-order structure, the ideal impulse response of which is shown in Fig. 2(a), for the case a1 = 489/256 and a2 = -15/16. The impulse response for various word length choices, assuming a two's-complement fixed-point implementation, is shown in Fig. 2(b). Similarly, the impulse response of the same structure implemented in LNS is shown in Fig. 3(a), for different word lengths and various values of the base b. The choice of the LNS base b is important as it affects the required word length and therefore the complexity of the underlying VLSI implementation of the LNS adder and multiplier. Furthermore, the selection of the base b greatly affects the representational precision of the filter coefficients.
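The ideal response is easy to reproduce; the sketch below assumes the structure of Fig. 1 computes y(n) = x(n) + a1·y(n-1) + a2·y(n-2), which is how we read the feedback taps drawn there.

a1, a2 = 489 / 256, -15 / 16
y1 = y2 = 0.0
impulse_response = []
for n in range(300):
    x = 1.0 if n == 0 else 0.0       # unit impulse input
    y = x + a1 * y1 + a2 * y2        # ideal (unquantized) arithmetic
    impulse_response.append(y)
    y1, y2 = y, y1
# quantized variants round a1, a2 and every product/sum to the word under test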

Fig. 1. A second-order feedback structure


Fig. 2. Impulse response of a second-order structure: (a) ideal and (b) for various fractional word lengths (4+5, 4+6 and 4+8 bits), assuming fixed-point representation

Fig. 3. Impulse response of a second-order structure (a) for various word lengths (k, l) of k integral and l fractional bits, assuming logarithmic representations with different values of the base b (b = 1.2955, k = 3, l = 3; b = 1.6782, k = 3, l = 4; b = 1.2301, k = 5, l = 5), and (b) for various values of b (1.2301, 1.5700, 1.5800). Notice the difference of the impulse responses for b = 1.57 and b = 1.58.

Indeed, even a difference of 0.01 in the value of the base b can severely alter the impulse response of the second-order structure; the behavior of the impulse response as a function of the base is depicted in Fig. 3(b). The experimental results tabulated in Tables 1(a) and 1(b) show that an LNS-based system using b = 1.2301 and a 9-bit word, which includes a 5-bit fractional part, is equivalent to a 12-bit fixed-point two's-complement system. In every case an additional sign bit is implied.

3.2 FIR Filters

The word lengths required by LNS and fixed-point implementations of practical filters have been determined. The performance of a raised-cosine filter is studied, assuming a zero-mean gaussian input with variance 1/3, i.e., taking values in (-1, 1); various values of the input signal correlation factor ρ have been tested, namely -0.99, -0.5, 0, 0.5 and 0.99.
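The paper does not state how the input sequences are generated; one standard choice consistent with the stated mean, variance and correlation factor is a first-order autoregressive gaussian process, sketched below.

import numpy as np

def correlated_input(n, rho, var=1.0 / 3.0, seed=0):
    rng = np.random.default_rng(seed)
    x = np.empty(n)
    x[0] = rng.normal(0.0, np.sqrt(var))
    sigma_w = np.sqrt(var * (1.0 - rho ** 2))   # keeps the variance stationary
    for i in range(1, n):
        x[i] = rho * x[i - 1] + rng.normal(0.0, sigma_w)
    return x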


Table 1. SNR for various word lengths of (a) a linear and (b) a logarithmic implementation of a second-order structure

(a)
word length (bits)   SNR (dB)
4+5                  9.65
4+6                  12.15
4+8                  2.45
4+9                  39.23

(b)
word length (bits)   b        SNR (dB)
3+3                  1.2955   14.29
3+4                  1.6782   18.28
4+5                  1.2301   25.54
5+5                  1.2301   31.64

The experimental results shown in Tables 2(a) and 2(b) reveal that an LNS-based implementation demonstrates behavior equivalent to the fixed-point system, using 9 bits instead of 10 bits for ρ = -0.5 and ρ = 0. Furthermore, a 9-bit LNS-based system is found to be equivalent to a linear system of more than 10 bits for the case of ρ = -0.99, while for ρ = 0.5 and ρ = 0.99 both systems exhibit identical performance for the same number of bits (9 bits). Further experiments were conducted employing filters designed using the Parks-McClellan algorithm. The filters were excited using an identical set of input signals. The relation of achieved SNR to the employed word length for the LNS and linear fixed-point implementations is tabulated in Tables 3(a) and 3(b).

Table 2. Word lengths and SNR for various values of the input signal correlation ρ, assuming (a) fixed-point two's-complement representation and (b) LNS word organization

(a)
word length (bits)   SNR (dB)   ρ
0+10                 15.62      -0.99
0+11                 20.71      -0.99
0+12                 26.21      -0.99
0+10                 29.45      -0.5
0+11                 35.14      -0.5
0+12                 40.60      -0.5
0+10                 32.27      0
0+11                 38.76      0
0+12                 44.11      0
0+9                  29.18      0.5
0+10                 34.08      0.5
0+11                 40.25      0.5
0+9                  30.08      0.99
0+10                 36.40      0.99
0+11                 41.45      0.99

(b)
word length (bits)   b     SNR (dB)   ρ
4+5                  1.5   18.04      -0.99
4+6                  1.7   21.30      -0.99
4+5                  1.5   29.49      -0.5
4+6                  1.5   33.54      -0.5
4+5                  1.5   29.79      0
4+6                  1.5   35.13      0
4+5                  1.5   30.13      0.5
4+6                  1.5   35.84      0.5
4+5                  1.5   28.89      0.99
4+6                  1.5   34.39      0.99


Table 3. (a) SNR and integral and fractional word lengths in LNS for various values of ρ. (b) Word lengths and SNR in a fixed-point filter, for various values of ρ. An additional sign bit is implied.

(a)
word length (bits)   b     SNR (dB)   ρ
4+5                  1.5   29.47      -0.99
4+6                  1.5   34.37      -0.99
4+5                  1.5   34.34      -0.5
4+6                  1.5   39.89      -0.5
4+5                  1.5   34.58      0
4+6                  1.5   41.10      0
4+5                  1.5   28.29      0.5
4+6                  1.5   34.81      0.5
4+5                  1.5   22.00      0.99
4+6                  1.5   28.83      0.99

(b)
word length (bits)   SNR (dB)   ρ
1+10                 22.56      -0.99
1+11                 28.06      -0.99
1+12                 34.65      -0.99
0+10                 32.73      -0.5
0+11                 38.91      -0.5
0+12                 44.57      -0.5
0+9                  27.70      0
0+10                 33.48      0
0+11                 39.47      0
0+9                  25.92      0.5
0+10                 31.69      0.5
0+11                 37.82      0.5
0+9                  10.02      0.99
0+10                 15.54      0.99
0+11                 21.95      0.99

The results show that the LNS-based system demonstrates behavior equivalent to the linear implementation, requiring only 9 bits instead of 10 bits for the cases ρ = -0.5, ρ = 0 and ρ = 0.5, while for the case ρ = 0.99, 9 bits are required instead of 11 bits for the linear case. The experimental results reveal that LNS achieves acceptable behavior using a lower word length than a fixed-point structure. The area, delay and power dissipation required for the LNS basic operations, in comparison to the corresponding fixed-point operations, are detailed in the following section.

4 Power Dissipation of LNS Addition and Multiplication

The basic organization of an LNS adder/subtractor is shown in Fig. 4. Power measurements assuming gaussian input distributions of zero mean and various values of variance give the results of Table 4. The results refer to a 0.18 μm CMOS library operating with a supply voltage of 1.8 V. The particular LNS adder requires an area of 8785 μm² and has a delay of 4.76 ns, while the corresponding linear two's-complement multiplier, organized as a carry-save adder array, requires 11274 μm² and has a delay of 4.22 ns. Fixed-point addition and LNS multiplication are both implemented by binary adders; the area and delay of addition in two's-complement fixed-point (833.45 μm², 2.06 ns) is compared to the cost of multiplication in LNS (639 μm², 1.58 ns). The corresponding power dissipation figures are offered in Table 5.


Fig. 4. The organization of an LNS adder/subtractor

Table 4. Power dissipation of an LNS adder/subtractor compared to the power dissipation of an equivalent linear fixed-point multiplier

σ²    LNS (mW)   FXP (mW)   savings %
0.1   1.214      2.286      46.9
0.5   1.307      2.233      41.5
1.0   1.338      2.275      41.2
1.5   1.359      2.318      41.6
1.7   1.356      2.277      40.4

Table 5. Power dissipation of an LNS multiplier compared to the power dissipation of an equivalent linear fixed-point adder

σ²    LNS (μW)   FXP (μW)   savings %
0.1   84.78      116.27     27.1
0.5   83.75      115.07     27.2
1.0   85.57      116.8      26.7
1.5   85.17      116.12     26.5
1.7   85.05      115.57     26.4

Table 5 shows that power savings are achieved by adopting LNS, due to the reduced word length employed. The organization of the LNS adder comprises two look-up tables, one of which is used when the operands have the same sign (add LUT in Fig. 4), while the other is used when the operands have different signs (subtract LUT). Significant power savings are achieved by latching the


Fig. 5. Latched inputs to the LUTs

Table 6. Comparison of the power dissipated by the enhanced LNS adder to an equivalent fixed-point multiplier

σ²    LNS (mW)   FXP (mW)   savings %
0.1   0.619      2.286      72.9
0.5   0.833      2.233      62.6
1.0   0.889      2.275      60.3
1.5   0.896      2.318      61.3
1.7   0.903      2.277      61.0

inputs of the look-up table which is not used in a particular addition: an exclusive-or operation on the signs of the operands provides an enable signal to an input latch and thus inhibits any unnecessary switching in the LUT not required for a particular operation, as shown in Fig. 5. When employing this scheme with level-sensitive latches, care should be taken to avoid timing violations related to latch set-up times. The benefits of this scheme are tabulated in Table 6; the enhanced adder requires 10290 μm² and has a delay of 3.08 ns. Table 7 summarizes the area, time, and power dissipation of synthesized LNS adders, while Table 8 presents synthesized multiplier performance data for comparison purposes. In this paper the issue of conversion between LNS and a linear representation is not dealt with: it is assumed that a sufficiently large amount of computation can be performed within LNS and that any conversions are limited to input/output, so any conversion overhead, if required, is compensated by the overall benefits.

Table 7. Area, time, and power dissipation of synthesized LNS adders

              LNS adder                            with latches
word length   Area (μm²)  Delay (ns)  Power (mW)   Area (μm²)  Delay (ns)  Power (mW)
4/7           5370.70     3.66        0.723        6118.73     2.46        0.678
5/9           8785.73     4.76        1.338        10289.98    3.08        0.889
6/10          15058.89    5.33        2.049        17091.92    3.91        1.525
7/11          24722.73    6.40        2.887        28931.14    4.50        1.966

Table 8. Synthesized linear multiplier data

word length   Area (μm²)   Delay (ns)   Power (mW)
11            9485.4       3.94         1.877
12            11274.3      4.22         2.275
13            13173.0      4.37         2.759

All power figures assume a 50 MHz word data rate and are obtained by the following procedure: scripts developed in Matlab generate the VHDL descriptions of the LNS adders, as well as the input signals for the simulations; the VHDL models are synthesized, and simulations provide switching activity information which subsequently back-annotates the designs, to improve the accuracy of the power estimation.

5 Discussion and Conclusions

It has been shown that the use of LNS in certain DSP kernels can significantly reduce the required word length. By exploiting the reduced word length, combined with the simplified circuits enabled by the use of LNS and the isolation of the look-up table not used in a particular addition, significant power dissipation reduction is achieved in practical cases. It should be noted that further power reduction due to LNS can be sought by applying look-up table size reduction techniques, dependent on the particular accuracy [10][7]. The use of LNS is therefore worth investigating for possible adoption in low-power signal processing systems.

References
1. Stouraitis, T., Paliouras, V.: Considering the alternatives in low-power design. IEEE Circuits and Devices 17, 23–29 (2001)
2. Landman, P.E., Rabaey, J.M.: Architectural power analysis: The dual bit type method. IEEE Transactions on VLSI Systems 3, 173–187 (1995)
3. Arnold, M.G., Bailey, T.A., Cowles, J.R., Winkel, M.D.: Applying features of the IEEE 754 to sign/logarithm arithmetic. IEEE Transactions on Computers 41, 1040–1050 (1992)
4. Morley Jr., R.E., Engel, G.L., Sullivan, T.J., Natarajan, S.M.: VLSI based design of a battery-operated digital hearing aid. In: Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 2512–2515. IEEE Computer Society Press, Los Alamitos (1988)
5. Sacha, J.R., Irwin, M.J.: Number representation for reducing switched capacitance in subband coding. In: Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 3125–3128 (1998)


6. Arnold, M.G.: Reduced power consumption for MPEG decoding with LNS. In: Proceedings of the IEEE International Conference on Application-Specific Systems, Architectures and Processors (ASAP 2002). IEEE Computer Society Press, Los Alamitos (2002)
7. Kang, B., Vijaykrishnan, N., Irwin, M.J., Theocharides, T.: Power-efficient implementation of a turbo decoder in an SDR system. In: Proceedings of the IEEE International SOC Conference, pp. 119–122 (2004)
8. Paliouras, V., Stouraitis, T.: Low-power properties of the Logarithmic Number System. In: Proceedings of the 15th Symposium on Computer Arithmetic (ARITH15), Vail, CO, June 2001, pp. 229–236 (2001)
9. Paliouras, V., Stouraitis, T.: Logarithmic number system for low-power arithmetic. In: Soudris, D.J., Pirsch, P., Barke, E. (eds.) PATMOS 2000. LNCS, vol. 1918, pp. 285–294. Springer, Heidelberg (2000)
10. Taylor, F., Gill, R., Joseph, J., Radke, J.: A 20 bit Logarithmic Number System processor. IEEE Transactions on Computers 37, 190–199 (1988)

A Power Supply Selector for Energy- and Area-Efficient Local Dynamic Voltage Scaling

Sylvain Miermont¹, Pascal Vivet¹, and Marc Renaudin²

¹ CEA-LETI/LIAN, MINATEC, 38054 Grenoble, France, {sylvain.miermont,pascal.vivet}@cea.fr
² TIMA Lab./CIS group, 38031 Grenoble, France, [email protected]

Abstract. In systems-on-chip, dynamic voltage scaling allows energy savings. If only one global voltage is scaled down, that voltage cannot be lower than the voltage required by the most constrained functional unit to meet its timing constraints. Fine-grained dynamic voltage scaling allows better energy savings, since each functional unit has its own independent clock and voltage, making the chip globally asynchronous and locally synchronous. In this paper we propose a local dynamic voltage scaling architecture, adapted to globally asynchronous and locally synchronous systems, based on a technique called Vdd-hopping. Compared to traditional power converters, the proposed power supply selector is small and power-efficient, with no need for large passives or costly technological options. The design has been validated in a STMicroelectronics CMOS 65 nm low-power technology.

1 Introduction

With the demand for more autonomy in mobile equipment, and with general-purpose microprocessors hitting the power wall, there is an unquestionable need for techniques that increase the power efficiency of computing. As CMOS technology scales down, the share of leakage energy in the total energy budget tends to increase; however, reducing dynamic power still remains a major issue. This is caused by the use of new telecommunication standards, highly-compressed multimedia formats, high-quality graphics and, more generally, the greater algorithmic complexity of today's applications. For digital circuits, P_total ∼ k1·Vdd² + k2·Vdd and F_max ∼ k·Vdd, so the energy-per-operation E_op = P_total/F_max scales with Vdd. Hence, DVS (Dynamic Voltage Scaling) techniques allow an energy gain whenever the computing speed can be scaled down without violating real-time constraints. On the other hand, with the increased complexity of current and future integrated circuits, global clock distribution, design reuse, intra-chip communications and whole-chip verification are becoming severe engineering challenges. Thus new


engineering methodologies are emerging to replace the 'all synchronous' way of designing circuits, one of them being the GALS (Globally Asynchronous Locally Synchronous) technique. In a GALS chip there is no global clock, but rather a set of synchronous 'islands' communicating through an asynchronous interconnection network. An example of a GALS telecommunication SoC (System-on-Chip) is given in [1]; for GALS microprocessors, see [2]. As each synchronous island in a GALS chip has its own independent clock, each clock frequency can be individually modified and locally managed, so it is possible to change the voltage of each island: that is the principle of LDVS (Local Dynamic Voltage Scaling). The purpose of this work is to design an energy- and area-efficient way to do LDVS on a GALS chip; see [3] for more details on the interest of GALS and LDVS. In the first section we describe our architecture proposal for LDVS, then the power supply selector architecture and its behaviour, and, finally, the results obtained so far for this PSS and our conclusions.

2 LDVS Architecture Proposal

In a GALS-SoC, functional units, classically called 'IP blocks', form synchronous islands. In our proposal (see fig. 1), the chip is composed of many units (the functional part of a unit is called 'the core') and a GALS interconnect for inter-unit communications (e.g. an asynchronous network-on-chip, see [1]).

Fig. 1. LDVS chip architecture proposal: a GPM (Global Power Manager) per chip, and for each synchronous unit, a LCG (Local Clock Generator), a LPM (Local Power Manager) and a PSS (Power Supply Selector)

One unit of the chip acts as a global power manager. The global power manager enforces the chip energy policy by sending commands through the GALS interconnect to local power managers and receiving back some status information. Local power managers can be specialized for their units in order to reduce energy by self-adapting to local conditions, such as temperature or process, but


also by using power-optimization algorithms (e.g. Earliest Deadline First if the unit is a processor with a scheduler, as seen in [4]).

Local Clock Generation. Each unit has its own LCG (Local Clock Generator), designed to run the unit at the maximum frequency allowed by local conditions. This LCG can be a programmable ring oscillator, as suggested in [3].

Local Power Supply. Each unit has its own power converter, driven by the local power manager. In conventional DVS techniques, the converter is either inductive, capacitive or linear. Inductive and capacitive converters have good maximum efficiency but need passives (inductances and capacitances), and a typical SoC can have tens of units, so using external components is not an option. Passives can be integrated, but that would take a lot of area or require special technological options in the manufacturing process. Linear converters do not use any passives, but their efficiency decreases as the output voltage is lowered, so the energy-per-operation does not scale down. All these converters also need a regulation circuit that consumes power and lowers the total efficiency of the device. These three types of converters have been integrated for various purposes [5,6,7] but do not fit the needs of current and future SoC platforms. On the other hand, a 'discrete' DVS technique, using only two set-points instead of a continuously adjustable voltage, is presented in [8] and is better suited for integration. That paper showed that significant power is saved using two set-points and that the additional gain of using an infinite number of set-points is low. Architectures using this principle, called 'Vdd-hopping', have been presented in [9,10,11]. In our LDVS architecture (see fig. 1), Vdd-hopping is implemented with two external supply voltages, called Vhigh (nominal voltage, unit running at nominal speed) and Vlow (reduced voltage, unit running at reduced speed), and a PSS (Power Supply Selector) for each unit. Typically, the two voltages will be supplied by two fixed-voltage inductive DC/DC converters, which can be partially integrated (passives on the board) or not. The efficiency of these converters must be taken into account when calculating the total system efficiency (see section 4.1). Power switch architectures have been proposed (see [10,12,13]) for hopping from one source to another (this is called a transition), but their simplicity comes with serious disadvantages. The first problem is that there is no control over the transition duration (i.e. the dV/dt slope on Vcore). The power lines from the external sources to the PSS form an RLC (resistive-inductive-capacitive) network, so if the current changes too rapidly, the supply voltage of a unit could fall under a safe voltage limit, leading to memory losses. Moreover, the unit power grid delays the voltage transition, so a fast transition from Vhigh to Vlow creates a large ΔV over the unit power grid. If the clock generator is a ring oscillator, a large ΔV can cause a mismatch between the generator delay element and the critical-path delay, leading to errors. So, without a careful


analysis of power supply networks of the chip, there is no guarantee that the core will make no error during transitions. Besides, if both power switches are off at the same time, the unit voltage can drop, while if both power switches are on, a large current will flow from the higher voltage source to the lower voltage source. This is a second problem to solve because voltage regulators are usually not designed to receive current from the load.

3 The Power Supply Selector

The principle we use is the following: when no transition is needed, the system is in a 'sleep' mode, and one of the sources supplies the power with minimum losses. When a transition is needed, the power element between Vcore and Vhigh is used as a linear regulator so that Vcore ≈ Vref, and the power element between Vlow and Vcore is switched on and off only when Vref ≈ Vlow. With this principle, using the appropriate sequence, we can switch smoothly between the two power supplies. Notice that all elements of the PSS are powered by the Vhigh source. For the design of the power supply selector, the system constraints were:

– the device area must be moderate with respect to the unit core area (< 20%),
– the power efficiency must be as good as possible (> 80%), and the total energy-per-operation must scale down with computing speed,
– the voltage must always be sufficient for the unit to run without error, even during transitions from one source to another,
– current must never flow from one source to the other.

3.1 PSS Architecture

Power Switches and Soft-Switch. The power switches are the main elements of the power supply selector (fig. 2). They are made of a group of PMOS transistors. Between Vlow and Vcore is the Tlow transistor, sized to minimize resistive

Fig. 2. Power Supply Selector architecture


losses when in full conduction with the unit at maximum power. The inverter driving its gate is sized so that the transistor switches on and off relatively slowly (over a dozen clock periods). Between Vhigh and Vcore are the Ntrans Thigh transistors (Ntrans = 24), connected in parallel with common drain, source and bulk, but separate gates. The sum of the Thigh transistor widths is chosen to minimize losses; the width of each transistor is chosen so that Vcore changes by ±Vhigh/Ntrans when a transistor is switched on or off using a thermometer code. The Ntrans inverters driving their gates are sized to minimize switch-on and switch-off times (much less than Tclk, ≈100 ps in our case). The soft-switch is a synchronous digital element acting as a digital integrator with a 1-bit input and an Ntrans-bit thermometer-coded output. When this element is enabled, at each clock cycle, if its input is a binary '1', it switches on the output bit previously off with the lowest index number (1 to Ntrans in this case); if its input is a binary '0', it switches off the output bit previously on with the highest index number. Combining the Ntrans Thigh transistors with the soft-switch element gives a 'virtual' Thigh transistor whose effective width can be increased or decreased (in predetermined non-uniform steps) according to a binary signal.

DAC and Comparator. The DAC is a 5-bit R-2R digital-to-analog converter used to generate voltage ramps between Vhigh and Vlow - Δs, with Δs ≈ Δr, the resistive losses through Tlow when Vlow is the selected power supply (see section 3.2 for the use of Δs). The ramp duration is set in the controller, which generates codes for the DAC accordingly. The DAC output impedance must be low enough to drive the comparator's input (a capacitive load) without limiting the minimum duration of the ramp. The comparator must be fast (≈150 ps in our case) and must be able to compare voltages close to the supply voltage. None of these elements' characteristics depend on the unit, so no redesign is needed until the technology is changed. When the comparator's output is connected to the soft-switch input, these two elements and the Thigh power switches become a closed-loop system acting as a linear voltage regulator, maintaining Vcore = Vref. This regulation is integral, with a 1-bit error signal, so Vcore oscillates around Vref with an amplitude depending on the loop delay, the clock period Tclk, and the number of Thigh transistors Ntrans.

Other Elements. To avoid any dependence on the core LCG, our PSS has its own clock generator, a ring oscillator with a programmable delay element (so the clock frequency is digitally adjustable). The clock frequency does not need to be accurate, but must be fast enough for the closed control loop to regulate variations of the unit current (in our PSS, Fclk ≈ 1 GHz). In our implementation of the PSS, the ramp duration Tramp, the low reference point offset Δs and the clock frequency Fclk are tunable, so there are elements, not represented on the diagram, that allow the GPM to configure these parameters. Finally, the controller plays the hopping sequence and disables elements of the PSS to save power when no transition is needed.
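A behavioural model of the soft-switch makes the thermometer-code behaviour explicit; this is a software sketch for illustration, not the synthesized element.

class SoftSwitch:
    def __init__(self, ntrans=24):
        self.ntrans = ntrans
        self.on_count = 0               # number of Thigh transistors switched on

    def clock(self, up):
        # one cycle of the 1-bit-input integrator described above
        if up and self.on_count < self.ntrans:
            self.on_count += 1          # switch on the lowest-index off bit
        elif not up and self.on_count > 0:
            self.on_count -= 1          # switch off the highest-index on bit
        return [i < self.on_count for i in range(self.ntrans)]  # thermometer code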


3.2 Hopping Sequence

Falling Transition. A falling transition occurs when switching from Vhigh to Vlow. When the LPM sends a signal to the PSS asking it to hop, the PSS clock generator wakes up and the controller starts its hopping sequence.

Fig. 3. Falling transition chronogram

First, the controller sends an enable signal to all elements and waits a few clock periods for them to stabilize (‘init’ on fig. 3). The comparator output is connected to the soft-switch input, enabling the closed-loop regulation. In this state, Vref = Vhigh and Vcore = Vhigh − Δr, Δr representing the resistive losses through the Thigh power transistors; since Vref > Vcore, all Thigh transistors stay on. Then, the controller sends codes to the DAC so that Vref falls with a controlled slope, Vcore follows Vref, and the voltage across the Thigh transistors rises (‘transi’ on fig. 3). This phase ends when Vref reaches its lowest point, a voltage equal to Vlow − Δs. As Vcore ≈ Vref < Vlow, the Tlow transistor can be switched on without any reverse current flowing through it. In this phase (‘switch’ on fig. 3), the Tlow transistor slowly switches on (thanks to an undersized driver), while more Thigh transistors switch off to compensate. At this point, Vref ≈ Vlow − Δs ≈ Vlow − Δr (with Δr now the resistive losses through the Tlow transistor), so Thigh might not be totally off. In the last phase (‘end’ on fig. 3), the controller opens the control loop, forces all Thigh transistors to switch off, disables all elements, waits a few clock periods and stops the clock generator. Core power is then supplied only by the Vlow source, and the PSS does not consume any dynamic power.

Rising Transition. When switching from Vlow to Vhigh, the controller starts by enabling all elements and closing the control loop (‘init’ on fig. 4). Then, Tlow is switched off and some Thigh transistors switch on, so Vcore ≈ Vlow (‘switch’ on fig. 4). Then Vref slowly rises, Vcore follows, and this phase (‘transi’ on fig. 4) ends when Vref reaches its highest point and the soft-switch is in the ‘all-on’ state. Finally, all elements are disabled and the clock is stopped. Power to the unit is supplied only by the Vhigh source, and the PSS does not consume any dynamic power.
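The falling sequence can be summarised as a small state machine. The sketch below is a hypothetical C rendering of the four phases named in fig. 3 (the settling-cycle count and DAC stepping are illustrative assumptions, not values from the design):

```c
#include <stdio.h>

/* Phases of the falling transition, as named in fig. 3. */
enum phase { INIT, TRANSI, SWITCH, END };

int main(void)
{
    enum phase p = INIT;
    int dac_code = 31;            /* 5-bit DAC: 31 = Vhigh, 0 = Vlow - Ds */
    int tlow_on = 0, loop_closed = 0;

    for (int cycle = 0; ; cycle++) {
        switch (p) {
        case INIT:                /* enable all elements, close the loop  */
            loop_closed = 1;
            if (cycle >= 4)       /* a few settling cycles (illustrative) */
                p = TRANSI;
            break;
        case TRANSI:              /* ramp Vref down with controlled slope */
            if (--dac_code == 0)
                p = SWITCH;
            break;
        case SWITCH:              /* Vref < Vlow: Tlow can turn on safely */
            tlow_on = 1;
            p = END;
            break;
        case END:                 /* open loop, force Thigh off, sleep    */
            loop_closed = 0;
            printf("done at cycle %d (Tlow on: %d, loop closed: %d)\n",
                   cycle, tlow_on, loop_closed);
            return 0;
        }
    }
}
```

The rising transition follows the same pattern with the ‘switch’ phase first and the DAC code counting up.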



Fig. 4. Rising transition chronogram

4 Results

The PSS is a mixed-mode device, so its design is made using VHDL for the digital elements, SPICE for the analog elements, and VHDL-AMS to bind digital and analog elements together. For co-simulation, we use ADVance MS, ModelSim and Eldo from Mentor Graphics. For implementation, we use an STMicroelectronics 65 nm low-power triple-Vt CMOS technology, with synthesis and standard cells for the digital parts and manual layout for the analog parts. The corner case is Vhigh = 1.2 V, Vlow = 0.8 V, T = 25 °C, and the process is considered nominal. The Vlow value was chosen so that the unit logic cells and memories remain functional with a reduced power consumption.

4.1 Power Efficiency

Outside transition phases, the losses in the PSS are the sum of the resistive losses in the power transistors and the leakage current in the device. Thanks to the use of a triple-Vt technology, the leakage power of the PSS is 10⁻⁵ to 10⁻⁴ W, a negligible value (less than 1%) compared to the unit active power. The power transistors Thigh and Tlow are sized so that the resistive loss ratio ρres is less than 3% of Punit, the useful unit power. So, when the PSS is not performing a transition, the power efficiency is around 97%.

During transitions, there are two sources of losses: the energy dissipated by the Thigh transistors when used as a voltage regulator, and the dynamic consumption of the PSS elements. During the ramp time Tramp, a linear ramp is performed from Vhigh to Vlow and the Thigh power loss ratio is ρramp = (Vhigh − Vcore)/Vhigh. In our implementation, for a whole transition, it is the average of ρramp+ = 3% and ρramp− = (Vhigh − Vlow)/Vhigh = 33.3%, so we obtain ρramp = 18%. During the transition time Ttransi = Tramp + Tinit,switch,end, the PSS is active, and its dynamic power is Pdyn.

The total PSS efficiency ηPSS = Punit/(Punit + PPSS) is given by:

PPSS = ((Thop − Ttransi)/Thop) · ρres · Punit + (Tramp/Thop) · ρramp · Punit + (Ttransi/Thop) · Pdyn



As an example, for an average time between transitions of Thop = 1 μs (1 MHz hopping frequency), with Tramp = 100 ns, Ttransi = 140 ns, ρramp = 18% and Pdyn = 4 mW (post-synthesis estimation for Fclk = 1 GHz), we obtain the following PSS losses: PPSS = 4.18% · Punit + 0.56 mW. With these parameters:
– for a 15 mW unit, PPSS = 1.19 mW, so the PSS efficiency is ηPSS = 92.7%;
– for a 150 mW unit, PPSS = 6.83 mW, so the PSS efficiency is ηPSS = 95.6%.
To calculate the total system efficiency ηsystem, we must also take into account the efficiency of the external or internal voltage regulators used to supply the global voltages Vhigh and Vlow. If fixed-voltage inductive DC/DC regulators are used, with an efficiency in the 90% range, the total system efficiency is ηsystem ≈ 80%. The efficiency of our system is high compared to systems using linear regulators, and equivalent to the efficiency of systems based on integrated inductive or capacitive converters.
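As a quick plausibility check of the loss formula, the sketch below evaluates PPSS and ηPSS for the parameters quoted above. The results land close to the paper's 1.19 mW/92.7% and 6.83 mW/95.6%; the small differences come from the rounding of ρramp and of the 4.18% coefficient in the text.

```c
#include <stdio.h>

int main(void)
{
    /* Parameters from section 4.1 of the paper. */
    double thop   = 1e-6;    /* average time between transitions [s] */
    double tramp  = 100e-9;  /* ramp duration [s]                    */
    double ttrans = 140e-9;  /* total transition time [s]            */
    double rho_res  = 0.03;  /* resistive loss ratio outside ramps   */
    double rho_ramp = 0.18;  /* average loss ratio during the ramp   */
    double pdyn   = 4e-3;    /* PSS dynamic power when active [W]    */

    double punits[] = { 15e-3, 150e-3 };
    for (int i = 0; i < 2; i++) {
        double punit = punits[i];
        /* P_PSS = ((Thop-Ttransi)/Thop)*rho_res*Punit
                 + (Tramp/Thop)*rho_ramp*Punit + (Ttransi/Thop)*Pdyn */
        double ppss = (thop - ttrans) / thop * rho_res * punit
                    + tramp / thop * rho_ramp * punit
                    + ttrans / thop * pdyn;
        double eta = punit / (punit + ppss);
        printf("Punit = %5.1f mW: P_PSS = %.2f mW, eta = %.1f%%\n",
               punit * 1e3, ppss * 1e3, eta * 100.0);
    }
    return 0;
}
```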

4.2 PSS Area

The area of the PSS is divided into a fixed part, for its digital logic and analog elements, and a variable part, for the power switches and their drivers. The fixed part is composed of the DAC, the PSS clock generator, the comparator and hundreds of logic standard cells (≈ 2500 μm² after synthesis), which gives an estimated total size of ≈ 4400 μm².

Table 1. Area results of the PSS for 3 different units

         peak power   total area        PSS area           relative PSS area
unit A   11.4 mW      322 · 10³ μm²     ≈ 6.6 · 10³ μm²    2.06 %
unit B   23.1 mW      899 · 10³ μm²     ≈ 7.4 · 10³ μm²    0.83 %
unit C   61.1 mW      1242 · 10³ μm²    ≈ 10.8 · 10³ μm²   0.87 %

For the variable part, the power switches are sized according to the unit power consumption, by choosing an acceptable frequency penalty and measuring the unit current over an averaging time window. The detailed sizing method is out of the scope of this paper; see [14] for more on sizing power switches. Table 1 summarizes the area results obtained for some telecom signal processing units (the area is given after place-and-route). Even for small units, the area cost of the proposed PSS is very low compared to inductive or capacitive integrated voltage converters, and is equivalent to that of linear converters.

4.3 Transition Simulation

The results of a PSS simulation with a simplified load model are shown in fig. 5: Vcore and Vref ramp down, the oscillation of Vcore around Vref is clearly visible, as is the current slowly switching from one source to the other. In this simulation, Ttransi ≈ 300 ns and Fclk ≈ 500 MHz.



Fig. 5. Simulation chronogram of a falling transition

5 Conclusion

This paper presents a local dynamic voltage scaling architecture proposal and the power supply selector on which it is based. The presented power supply selector is as efficient as the best integrated inductive or capacitive regulators, but is much more compact and solves the problems of simple voltage selectors. This work will be implemented in a future technological demonstrator, allowing us to characterize the PSS and measure the power gained by using our LDVS architecture. If the presented results and numbers are confirmed in the final design, we will be able to demonstrate significant power gains in globally asynchronous, locally synchronous systems-on-chip.

References

1. Lattard, D., Beigne, E., Bernard, C., Bour, C., Clermidy, F., Durand, Y., Durupt, J., Varreau, D., Vivet, P., Penard, P., Bouttier, A., Berens, F.: A Telecom Baseband Circuit Based on an Asynchronous Network-on-Chip. In: Proceedings of Intl. Solid-State Circuits Conf. (ISSCC) (February 2007)
2. Iyer, A., Marculescu, D.: Power and Performance Evaluation of Globally Asynchronous Locally Synchronous Processors. In: Proceedings of Intl. Symp. on Computer Architecture (ISCA) (May 2002)
3. Njølstad, T., Tjore, O., Svarstad, K., Lundheim, L., Vedal, T.Ø., Typpö, J., Ramstad, T., Wanhammar, L., Aas, E.J., Danielsen, H.: A Socket Interface for GALS using Locally Dynamic Voltage Scaling for Rate-Adaptive Energy Saving. In: Proceedings of ASIC/SOC Conf. (September 2001)
4. Zhu, Y., Mueller, F.: Feedback EDF Scheduling Exploiting Dynamic Voltage Scaling. In: Proceedings of Real-Time and Embedded Technology and Applications Symp. (RTAS) (May 2004)



5. Ichiba, F., Suzuki, K., Mita, S., Kuroda, T., Furuyama, T.: Variable Supply-Voltage Scheme with 95%-Efficiency DC-DC Converter for MPEG-4 Codec. In: Proceedings of Intl. Symp. on Low Power Electronics and Design (ISLPED) (August 1999)
6. Li, Y.W., Patounakis, G., Jose, A., Shepard, K.L., Nowick, S.M.: Asynchronous Datapath with Software-Controlled On-Chip Adaptive Voltage Scaling for Multirate Signal Processing Applications. In: Proceedings of Intl. Symp. on Asynchronous Circuits and Systems (ASYNC) (May 2003)
7. Hammes, M., Kranz, C., Kissing, J., Seippel, D., Bonnaud, P.-H., Pelos, E.: A GSM Baseband Radio in 0.13 μm CMOS with Fully Integrated Power-Management. In: Proceedings of Intl. Solid-State Circuits Conf. (ISSCC) (February 2007)
8. Lee, S., Sakurai, T.: Run-Time Voltage Hopping for Low-Power Real-Time Systems. In: Proceedings of Design Automation Conf. (DAC) (June 2000)
9. Kawaguchi, H., Zhang, G., Lee, S., Sakurai, T.: An LSI for VDD-Hopping and MPEG4 System Based on the Chip. In: Proceedings of Intl. Symp. on Circuits and Systems (ISCAS) (May 2001)
10. Xu, Y., Miyazaki, T., Kawaguchi, H., Sakurai, T.: Fast Block-Wise Vdd-Hopping Scheme. In: Proceedings of IEICE Society Conf. (September 2003)
11. Calhoun, B.H., Chandrakasan, A.P.: Ultra-Dynamic Voltage Scaling using Subthreshold Operation and Local Voltage Dithering in 90 nm CMOS. In: Proceedings of Intl. Solid-State Circuits Conf. (ISSCC) (February 2005)
12. Kuemerle, M.W.: System and Method for Power Optimization in Parallel Units. US Patent 6289465 (September 2001)
13. Cohn, J.M., et al.: Power Reduction by Stage in Integrated Circuit. US Patent 6825711 (November 2004)
14. Anis, M.H., Areibi, S., Elmasry, M.I.: Design and Optimization of Multi-Threshold CMOS (MTCMOS) Circuits. IEEE Trans. on CAD 22(10) (October 2003)

Dependability Evaluation of Time-Redundancy Techniques in Integer Multipliers

Henrik Eriksson

SP Technical Research Institute of Sweden, Box 857, SE-501 15 Borås, Sweden
[email protected]

Abstract. An evaluation of the fault tolerance which can be achieved by the use of time-redundancy techniques in integer multipliers has been conducted. The evaluated techniques are: swapped inputs, inverted reduction tree, a novel use of the half-precision mode in a twin-precision multiplier, and a combination of the first two techniques. The faults which have been injected are single stuck-at-zero or stuck-at-one faults. Error detection coverage has been the evaluation criterion. Depending on the technique, the attained error detection coverage spans from 25% to 90%.

1 Introduction

Dependable computer systems, where a malfunction can lead to loss of human lives, economic loss, or an environmental disaster, are no longer limited to space, aerospace, and nuclear applications, but are rapidly moving toward consumer electronics and automotive applications. In a few years' time, mechanical and hydraulic automotive subsystems for braking and steering will be replaced by electronic systems, also known as by-wire systems. In these systems, faults will occur, and thus fault tolerance is paramount. Modern microprocessors have several on-chip error-detection mechanisms in memories and registers, but in general arithmetic units such as multipliers do not. In this paper, four different time-redundancy techniques as a means to achieve fault tolerance in an integer multiplier are evaluated.

1.1 Fault Tolerance

There are different means to attain a dependable system [1], e.g.:
• Fault prevention
• Fault removal
• Fault tolerance

Fault prevention aims to reduce the faults which are introduced during development. The use of a mature development process, a strongly typed programming language, and design rules for hardware development are examples of fault prevention. Fault removal is present both during development and when the system is in use. During development, faults are removed by verification and validation activities



which consist of static analysis methods and dynamic testing methods. Preventive and/or corrective maintenance removes faults during operation. Despite the fact that fault prevention and removal are used, there might still be design faults left in the system, or faults will be caused by external sources such as radiation from space (neutrons) or EMI (electromagnetic interference). As a consequence, there is a need for a certain level of fault tolerance in the system. Fault tolerance is achieved by error detection and error recovery. To be able to detect and recover from errors, redundancy in time, space (hardware or software), or information is needed. A parity bit is an example of information redundancy, and mirrored hard disk drives in a RAID (Redundant Array of Inexpensive Disks) system are an example of hardware (spatial) redundancy. If a piece of software is executed twice on the same hardware component, it is denoted time redundancy. The effectiveness of a fault-tolerance technique is called coverage and is defined as the probability that the fault-tolerance technique is effective given that a fault has occurred. In order to determine the coverage, an assumption on the types, location, arrival rate, and persistence of the faults, i.e. a fault model, has to be made. The fault model used in this study is described in Section 3.1.

1.2 Integer Multipliers

An 8×8-bit parallel multiplier of carry-save type [2] is shown in Fig. 1.


Fig. 1. An 8×8-bit integer multiplier of carry-save type. Three parts can be distinguished: partial product bit generation (AND gates), partial product reduction tree (half-adder cells, HAs, and full-adder cells, FAs), and the final adder.

The first step in the multiplier is to generate all partial product bits. In a simple unsigned integer multiplier this is performed by conventional AND gates. The second step is to feed all partial-product bits to the reduction tree where the bits are reduced to a number which is suitable for the final adder. Different reduction trees exist, ranging from slow and simple, e.g. a carry-save tree, to fast and complex, e.g. Wallace [3] or Dadda [4] trees. To get optimal performance, the final adder which computes the



resulting product should be tailored to the delay profile of the reduction tree and could consist of both ripple-carry and carry-lookahead parts [5].

1.3 Fault-Tolerant Adders and Multipliers

Many techniques have been proposed to add fault tolerance to adders and multipliers. The techniques described in this section are a small but representative set of the possible techniques which can be used. A fault-tolerant adder was proposed [6] which uses TMR (Triple Modular Redundancy) in the carry logic of the full-adder cells as well as TMR at the ALU (Arithmetic Logic Unit) level. The two extra ALUs operate on either shifted or rotated inputs. Hence, a fault in a bit slice will manifest itself as an error in another bit position in the results from the redundant ALUs. Using TMR is an expensive approach. Time redundancy is used in the RETWV (recomputing with triplication with voting) technique [7]. The idea here is to divide the input(s) and the adder or multiplier into three parts; for each computation, one third of the end result is computed and decided by voting. In the adder case, both input operands are divided into three parts, but for the multiplier only one of the operands is divided. An extra redundant bit slice is added in a Wallace multiplier to give the possibility of self-repair [8]. A test sequence is used to detect errors, and if an error is detected the multiplier is reconfigured by the use of a configuration memory. Residue number systems (RNSs) exhibit error detection and correction capabilities by means of information redundancy. RNS has been used to design a 36-bit single-fault-tolerant multiplier [9].

2 Time-Redundancy Techniques in Multipliers

The use of time redundancy to achieve fault tolerance is not new, and many techniques have been proposed [10]. Common to most time-redundancy techniques is that during the first computation the operands are used as presented, while during the second computation the operands are encoded before the computation and decoded after (in general no extra bits are added, i.e. no information redundancy is added). Examples of encoding functions are inversion and arithmetic shift. Not all components are self-dual, i.e. able to use inversion without modification, but the adder, for example, is. In order to use arithmetic shift, extra bit slices are needed to accommodate the larger word length. In this paper, the multiplier is the target, and the operation we wish to perform is

P = x · y.    (1)

Here x and y are the input operands and P is the product. In the following, the techniques used during the second computation of the multiplier are explained.

2.1 Technique 1 – Swapped Inputs

Technique 1 is as simple as a technique can be. During the second computation, the input operands are swapped, and thus the product is calculated as

P = y · x.    (2)



Although the difference is small, all partial-product bits where i ≠ j (see Fig. 1) will enter the reduction tree at different locations. As a consequence, a fault might affect the resulting product differently, making error detection possible.

2.2 Technique 2 – Inverted Reduction Tree

The multiplier is not self-dual, but inversion can still be exploited. Technique 2 harnesses the inverting property of the full-adder cell [11], i.e.

S′(a, b, c) = S(a′, b′, c′) and CO′(a, b, c) = CO(a′, b′, c′).    (3)

Here S and CO are the sum and carry outputs, respectively. Since the reduction tree more or less consists of a network of full-adder cells, it is possible to invert the partial-product bits entering the reduction tree and then invert the output product to get the correct product. The cost for an N-bit multiplier is (N×N)−1 XOR gates at the input and 2N−1 XOR gates at the output. However, since the inverted versions of the partial-product bits are available “inside” the AND gates, these could be used together with multiplexers to get the inverted mode of operation, at least in a full-custom design. The half-adder cells of the reduction tree do not fulfil the inverting property, and as a consequence they have to be changed to full-adder cells. The extra input of the new full-adder cells is set according to inverted mode (1) or not (0). In Technique 2 the multiplication becomes

P = x · y = x0y0 + 2¹(x1y0 + x0y1) + … + 2^(2N−2) xN−1yN−1
  = ((x0y0)′ + 2¹((x1y0)′ + (x0y1)′) + … + 2^(2N−2)(xN−1yN−1)′)′.    (4)
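The cell-level inverting property of Eq. (3), on which Eq. (4) rests, can be verified exhaustively. The following short C check (written for this text, not code from the paper) enumerates all eight input combinations of a full adder:

```c
#include <assert.h>
#include <stdio.h>

/* Full-adder sum and carry-out at the bit level. */
static int fa_sum(int a, int b, int c)   { return a ^ b ^ c; }
static int fa_carry(int a, int b, int c) { return (a & b) | (a & c) | (b & c); }

int main(void)
{
    for (int a = 0; a <= 1; a++)
        for (int b = 0; b <= 1; b++)
            for (int c = 0; c <= 1; c++) {
                /* Eq. (3): S'(a,b,c) = S(a',b',c'), CO'(a,b,c) = CO(a',b',c') */
                assert((fa_sum(a, b, c) ^ 1)   == fa_sum(!a, !b, !c));
                assert((fa_carry(a, b, c) ^ 1) == fa_carry(!a, !b, !c));
            }
    printf("inverting property holds for all 8 input combinations\n");
    return 0;
}
```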

2.3 Technique 3 – Inverted Reduction Tree and Swapped Inputs

Technique 3 is a combination of Techniques 1 and 2. Hence the multiplication becomes

P = y · x = y0x0 + 2¹(y1x0 + y0x1) + … + 2^(2N−2) yN−1xN−1
  = ((y0x0)′ + 2¹((y1x0)′ + (y0x1)′) + … + 2^(2N−2)(yN−1xN−1)′)′.    (5)

2.4 Technique 4 – Twin Precision

With appropriate modifications, it is possible to perform two N/2-bit multiplications in parallel in an N-bit multiplier, see Fig. 2. This is referred to as a twin-precision multiplier [12]. One N/2×N/2-bit multiplication is performed using the N/2×N/2 partial-product bits in the least significant part of the multiplier, and another N/2×N/2-bit multiplication using partial-product bits in the most significant part. The resulting products are the least significant and most significant halves of the output, respectively. Other implementations of a twin- or dual-precision multiplier exist [13], where three of the four base half-precision multipliers are used to obtain a fault-tolerant mode of operation using majority voting. The idea behind Technique 4 is to divide the input operands into halves, and then these halves are multiplied with each other according to

P = x · y = (xh + xl) · (yh + yl) = xhyh + xhyl + xlyh + xlyl.    (6)



Besides the extra (third) computation (compared with the other techniques), there is also a need for extra shifts and additions to obtain the final product. The extra computation is not that costly, since two N/2×N/2-bit multiplications in parallel do not take as long as a single N×N-bit multiplication [12].
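Equation (6) is easy to validate in software. The sketch below (our illustration; the function name twin_precision_mul is hypothetical, and the code models only the arithmetic identity, not the shared reduction tree of Fig. 2) splits 8-bit operands into halves, with the high halves keeping their weight so that xh + xl = x, and checks the four-product sum against a direct multiplication for all operand pairs:

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

#define N 8  /* operand width; half precision is N/2 = 4 bits */

/* Eq. (6): split x and y into high/low halves (the high halves keep
   their weight, so xh + xl == x) and sum the four partial products. */
static uint32_t twin_precision_mul(uint32_t x, uint32_t y)
{
    uint32_t xl = x & 0x0F, xh = x & 0xF0;  /* xh keeps its weight 2^4 */
    uint32_t yl = y & 0x0F, yh = y & 0xF0;
    return xh * yh + xh * yl + xl * yh + xl * yl;
}

int main(void)
{
    for (uint32_t x = 0; x < (1u << N); x++)
        for (uint32_t y = 0; y < (1u << N); y++)
            assert(twin_precision_mul(x, y) == x * y);
    printf("eq. (6) verified for all %u x %u operand pairs\n",
           1u << N, 1u << N);
    return 0;
}
```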


Fig. 2. An 8×8-bit twin-precision integer multiplier of carry-save type. The partial-product bits represented by white squares are unused (zeroed) when the multiplier is performing two 4-bit multiplications in parallel.

3 Evaluation of the Time-Redundancy Techniques

The four different techniques are evaluated by injecting faults in a gate-level model of a multiplier. After a fault has been injected, the base computation (Eq. 1) is performed, as well as computations using all the time-redundancy techniques T1–T4. The output of the base computation is compared with a golden (fault-free) multiplication to see if the fault is effective. If the fault is effective, i.e. the result from the base computation differs from the golden computation, the error-detection capability of a technique is checked by comparing the output of the base computation with that of the presently evaluated technique. If the outputs are different, an error has been detected. The error detection coverage is obtained by dividing the number of detected errors by the number of effective faults (detected errors + wrong outputs). A code sketch of this procedure is given after the campaign list in Section 3.2.

3.1 Fault Model

The faults which are injected are permanent (lasting at least two computations) single stuck-at-one or stuck-at-zero faults. The faults are injected at the outputs of the gates in the multiplier.

3.2 Assessment 1 - 8×8-Bit Multiplier with CSA Reduction Tree

In the first assessment, faults are injected in an 8×8 multiplier of carry-save type. Faults are injected at all possible gate outputs in the multiplier, and for each fault all



possible input combinations are evaluated, i.e. an exhaustive evaluation. The results from the different fault-injection campaigns are collected in Table 1. The campaigns are:
• pi – the outputs of the input AND gates
• p – the outputs of the inverting XOR gates before the reduction tree
• s – the sum outputs of the FA gates in the reduction tree
• co – the carry outputs of the FA gates in the reduction tree
• o – the outputs of the inverting XOR gates after the reduction tree
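As a sketch of the evaluation procedure, the fragment below models the ‘pi’ campaign in C for a single injected fault: a stuck-at fault is forced on one partial-product AND-gate output, the faulty base computation is compared against the golden product to decide whether the fault is effective, and technique T1 (swapped operands) is recomputed to decide whether the error is detected. This is our own illustration of the described flow, not the authors' tool; the fault_t descriptor and mul8 model are assumptions, and the printed coverage is for one example fault rather than a whole campaign.

```c
#include <stdint.h>
#include <stdio.h>

/* Fault at the output of one partial-product AND gate (the 'pi'
   campaign): bit position (i, j), stuck at 0 or 1. */
typedef struct { int i, j, stuck_at; } fault_t;

/* 8x8 unsigned multiply built from partial products, so a stuck-at
   fault can be forced on a single pij before the summation.
   f == NULL gives the golden (fault-free) circuit. */
static uint16_t mul8(uint8_t x, uint8_t y, const fault_t *f)
{
    uint32_t p = 0;
    for (int i = 0; i < 8; i++)
        for (int j = 0; j < 8; j++) {
            int pij = (x >> i & 1) & (y >> j & 1);
            if (f && f->i == i && f->j == j)
                pij = f->stuck_at;           /* inject the fault */
            p += (uint32_t)pij << (i + j);
        }
    return (uint16_t)p;
}

int main(void)
{
    long effective = 0, detected = 0;
    fault_t f = { 3, 5, 1 };                 /* example: p35 stuck-at-1 */
    for (int x = 0; x < 256; x++)            /* exhaustive 8-bit inputs */
        for (int y = 0; y < 256; y++) {
            uint16_t golden = mul8((uint8_t)x, (uint8_t)y, NULL);
            uint16_t base   = mul8((uint8_t)x, (uint8_t)y, &f);
            if (base == golden) continue;    /* fault not effective    */
            effective++;
            /* Technique T1: recompute with swapped operands.          */
            if (mul8((uint8_t)y, (uint8_t)x, &f) != base)
                detected++;                  /* outputs differ: caught */
        }
    printf("effective: %ld, detected: %ld, coverage: %.0f%%\n",
           effective, detected, 100.0 * detected / effective);
    return 0;
}
```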

Some interesting observations can be made in Table 1:
• The number of effective faults is roughly 50%. Thus, on average, every second fault will have an effect on the multiplier result, i.e. cause an error.
• Using time redundancy, it is difficult to detect errors at the output of the multiplier. However, if technique T4 is used, it is possible to detect at least 70% of the errors.
• The techniques exploiting the inverting property of the reduction tree, T2 and T3, tolerate all possible stuck-at faults in the reduction tree, i.e. the coverage is 100%.
• Errors caused by faults in the input AND gates cannot be detected by technique T2, which makes sense since the input bits are fed to the same AND gates as for the base computation.

The relations between the coverages of the different techniques are as expected: in terms of extra hardware, the less costly techniques have lower coverage than the expensive ones.

Table 1. 8×8-bit CSA reduction tree. Exhaustive. (sta0 – stuck-at-zero, sta1 – stuck-at-one)

Campaign   Injected faults   Effective faults   Coverage by technique
                                                T1     T2     T3     T4
pi-sta0    4194304           1048576            66%    0%     66%    89%
pi-sta1    4194304           3145728            22%    0%     22%    93%
p-sta0     4194304           1032192            67%    100%   100%   90%
p-sta1     4194304           3096576            22%    100%   100%   93%
s-sta0     3670016           1636774            33%    100%   100%   80%
s-sta1     3670016           2033242            27%    100%   100%   8%
co-sta0    3670016           613916             57%    100%   100%   88%
co-sta1    3670016           3056100            12%    100%   100%   97%
o-sta0     1048576           418276             0%     0%     0%     70%
o-sta1     1048576           564764             0%     0%     0%     78%
Total      33554432          16646144           27%    69%    77%    90%
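The Total row of Table 1 is the effective-fault-weighted average of the campaign rows. The short check below recomputes the T1 column total from the table data (the per-row percentages are rounded, so the recomputed value can differ from the printed 27% by a fraction of a percentage point):

```c
#include <stdio.h>

int main(void)
{
    /* Effective faults and T1 coverage per campaign, from Table 1. */
    long   eff[] = { 1048576, 3145728, 1032192, 3096576, 1636774,
                     2033242,  613916, 3056100,  418276,  564764 };
    double t1[]  = { 0.66, 0.22, 0.67, 0.22, 0.33,
                     0.27, 0.57, 0.12, 0.00, 0.00 };
    double num = 0; long den = 0;
    for (int i = 0; i < 10; i++) {
        num += eff[i] * t1[i];  /* detected errors in this campaign */
        den += eff[i];          /* effective faults                 */
    }
    printf("T1 total coverage: %.1f%%\n", 100.0 * num / den); /* ~27% */
    return 0;
}
```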

3.3 Assessment 2 - 8×8-Bit Multiplier with HPM Reduction Tree

To check whether the interconnection scheme of the reduction tree has an effect on the error-detection capabilities of the techniques, a multiplier having another reduction tree is



evaluated in this assessment. The selected tree is the HPM reduction tree [14], which has a logarithmic logic depth (the CSA tree has linear logic depth) but still a regular connectivity. The results from this assessment are collected in Table 2. As can be seen in the table, the dependence of the coverage on the reduction tree is insignificant.

Table 2. 8×8-bit HPM reduction tree. Exhaustive. (sta0 – stuck-at-zero, sta1 – stuck-at-one)

Campaign   Injected faults   Effective faults   Coverage by technique
                                                T1     T2     T3     T4
pi-sta0    4194304           1048576            66%    0%     66%    89%
pi-sta1    4194304           3145728            22%    0%     22%    93%
p-sta0     4194304           1032192            67%    100%   100%   90%
p-sta1     4194304           3096576            22%    100%   100%   93%
s-sta0     3670016           1590058            29%    100%   100%   81%
s-sta1     3670016           2079958            22%    100%   100%   84%
co-sta0    3670016           613916             42%    100%   100%   89%
co-sta1    3670016           3056100            9%     100%   100%   97%
o-sta0     1048576           418276             0%     0%     0%     70%
o-sta1     1048576           564764             0%     0%     0%     78%
Total      33554432          16646144           25%    69%    77%    90%

3.4 Assessment 3 - 16×16-Bit Multiplier with CSA Reduction Tree

Besides the interconnection scheme of the reduction tree, it is interesting to study the impact of the multiplier size on the coverage of the different techniques. In this assessment, a 16×16-bit multiplier with a CSA reduction tree is used. For this size it is no longer possible to evaluate all possible input combinations, since then the total number of injected faults would be more than 8.5·10¹².

Table 3. 16×16-bit CSA reduction tree. Random. (sta0 – stuck-at-zero, sta1 – stuck-at-one)

Campaign   Injected faults   Effective faults   Coverage by technique
                                                T1     T2     T3     T4
pi-sta0    25600000          6375286            71%    0%     71%    92%
pi-sta1    25600000          19124714           24%    0%     24%    94%
p-sta0     25600000          6375286            71%    100%   100%   92%
p-sta1     25600000          19124714           24%    100%   100%   94%
s-sta0     24000000          11348556           43%    100%   100%   81%
s-sta1     24000000          12651444           39%    100%   100%   82%
co-sta0    24000000          4940279            68%    100%   100%   91%
co-sta1    24000000          19059721           18%    100%   100%   95%
o-sta0     3200000           1464353            0%     0%     0%     66%
o-sta1     3200000           1735647            0%     0%     0%     72%
Total      204800000         102200000          34%    72%    81%    90%

Note: not all input combinations are used.



Therefore, for each stuck-at fault injected, 100 000 random input combinations are used. For a specific fault, the same random input combinations are used for the evaluation of all techniques. The results from this assessment are collected in Table 3. Although there are some minor differences compared with the results from the assessments on the 8×8 multipliers, these probably originate from the fact that only a fraction of all possible input combinations is evaluated for each fault.

3.5 Delay, Power, and Area Penalties

Adding redundancy to tolerate faults always comes at a cost in delay (time), power, area, or a combination of these. Time redundancy has a small area cost but a large delay cost. For spatial redundancy, on the other hand, the situation is the opposite: a small delay cost but a large area cost. The power dissipation is at least doubled in both cases. In Table 4, the delay, power, and area penalties for the different techniques are estimated. The area and power penalties are estimated based on added gate equivalents, and the delay penalty for T4 is based on the delay values presented by Sjalander et al. [12]. All values are normalized to those of the base computation, i.e. a conventional multiplication. The goal of this paper has been to compare the different time-redundancy techniques; therefore the cost of the detection mechanism (comparison), which is the same for all techniques, has been omitted from the values in Table 4.

Table 4. Delay, power, and area penalties

Technique   Delay   Power   Area
T1          2       2       1
T2          2       2.6     1.3
T3          2       2.6     1.3
T4*         2.7     2.2     1

* Penalties from extra shifts and addition are not included.

The least costly technique, T1, is also the one having the worst coverage. Delay, power, and area budgets have to be considered when a selection is made among the other three techniques. There is, however, no reason for not selecting T3 when the budgets permit the use of T2 or T3: the two have the same delay, power, and area penalties, and T3 has the better coverage.

4 Conclusion

Four different time-redundancy techniques have been evaluated. The techniques range from less costly ones, where the inputs to the multiplier are swapped, to advanced ones, where the half-precision mode of a twin-precision multiplier is harnessed. The latter technique is novel with respect to time redundancy in multipliers.



The coverage for single stuck-at faults at the outputs of the gates was assessed for all techniques. Different reduction trees and multiplier sizes were used during the assessment, and some interesting observations were made.
• As expected, the most costly technique (in terms of delay and power), twin precision, was also the technique having the best coverage.
• The techniques using inverted inputs to the reduction tree detect all errors caused by a fault in the reduction tree.
• The only technique which can detect some of the errors caused by a fault at the multiplier output is the twin-precision technique.
• If a technique where the partial-product bits to the reduction tree are inverted is used, the input operands should also be swapped, since this yields a better coverage at no extra cost.

Acknowledgements

The author wishes to thank Professor Per Larsson-Edefors and Dr. Jonny Vinter, who have provided fruitful input to this manuscript.

References

1. Avizienis, A., Laprie, J.-C., Randell, B., Landwehr, C.: Basic Concepts and Taxonomy of Dependable and Secure Computing. IEEE Transactions on Dependable and Secure Computing 1(1), 11–33 (2004)
2. Parhami, B.: Computer Arithmetic – Algorithms and Hardware Designs, 1st edn. Oxford University Press, Oxford (2000)
3. Wallace, C.S.: A Suggestion for a Fast Multiplier. IEEE Transactions on Electronic Computers 13, 14–16 (1964)
4. Dadda, L.: Some Schemes for Parallel Multipliers. Alta Frequenza 42(5), 349–356 (1965)
5. Oklobdzija, V.G., Villeger, D., Liu, S.S.: A Method for Speed Optimized Partial Product Reduction and Generation of Fast Parallel Multipliers Using an Algorithmic Approach. IEEE Transactions on Computers 45(8), 294–306 (1995)
6. Alderighi, M., D'Angelo, S., Metra, C., Sechi, G.R.: Novel Fault-Tolerant Adder Design for FPGA-Based Systems. In: Proceedings of the 7th International On-Line Testing Workshop, pp. 54–58 (2001)
7. Hsu, Y.-M., Swartzlander Jr., E.E.: Reliability Estimation for Time Redundant Error Correcting Adders and Multipliers. In: Proceedings of IEEE International Workshop on Defect and Fault Tolerance in VLSI Systems (DFT), pp. 159–167. IEEE Computer Society Press, Los Alamitos (1994)
8. Namba, K., Ito, H.: Design of Defect Tolerant Wallace Multiplier. In: Proceedings of the 11th IEEE Pacific Rim International Symposium on Dependable Computing (PRDC), pp. 159–167. IEEE Computer Society Press, Los Alamitos (2005)
9. Radhakrishnan, D., Preethy, A.P.: A Novel 36 Bit Single Fault-Tolerant Multiplier Using 5 Bit Moduli. In: Proceedings of IEEE Region 10 International Conference (TENCON'98), pp. 128–130. IEEE Computer Society Press, Los Alamitos (1998)
10. Pradhan, D.K.: Fault-Tolerant Computer System Design, 1st edn. Prentice-Hall, Englewood Cliffs (1996)



11. Rabaey, J.M., Chandrakasan, A., Nikolic, B.: Digital Integrated Circuits, 2nd edn. Prentice-Hall, Englewood Cliffs (2003)
12. Sjalander, M., Eriksson, H., Larsson-Edefors, P.: An Efficient Twin-Precision Multiplier. In: Proceedings of the IEEE International Conference on Computer Design (ICCD'04), IEEE Computer Society Press, Los Alamitos (2004)
13. Mokrian, P., Ahmadi, M., Jullien, G., Miller, W.C.: A Reconfigurable Digital Multiplier Architecture. In: Proceedings of the Canadian Conference on Electrical and Computer Engineering (CCECE) (2003)
14. Eriksson, H., Larsson-Edefors, P., Sheeran, M., Sjalander, M., Johansson, D., Scholin, M.: Multiplier Reduction Tree with Logarithmic Logic Depth and Regular Connectivity. In: Proceedings of the IEEE International Symposium on Circuits and Systems (ISCAS), IEEE Computer Society Press, Los Alamitos (2006)

Design and Industrialization Challenges of Memory Dominated SOCs J.M. Daga ATMEL, France

The quest for the universal memory has attracted many talented researchers and a number of investors for years now. The objective is to develop a low-cost, high-speed, low-power, and reliable non-volatile memory. In practice, the universal memory system is more like an optimized combination of execution and storage memories, each of them having its own characteristics. Typically, execution memories manage temporary data and must be fast, with no endurance limitations. Different types of RAM memories are used to build an optimized hierarchy, including different levels of cache. In addition to RAM memories, non-volatile memories such as ROM or NOR flash used for code storage can be considered as execution memories when in-place execution of the code is possible. There are several advantages in having execution memories embedded with the CPU, such as speed and power optimization and improved code security. There is a trend, confirmed by SIA forecasts, that memories will represent more than 80% of the total area of SOCs by the end of the decade. SOCs in the future will be more and more memory dominated. As a result, memory management decisions will have a major impact on system cost, performance and reliability. Memory IP availability (including embedded SRAM, DRAM, FLASH, etc.) will become the main differentiator, especially for fabless companies. This will be developed during the presentation. A detailed comparison of different types of embedded memories (SRAM, DRAM, ROM, EEPROM and FLASH) and their related challenges will be reviewed. Practical examples of SOC implementation, including for example flash-based MCUs versus ROM-based ones, will be presented.


Statistical Static Timing Analysis: A New Approach to Deal with Increased Process Variability in Advanced Nanometer Technologies D. Pandini STMicroelectronics, Italy

As process parameter dimensions continue to scale down, the gap between the designed layout and what is really manufactured on silicon is increasing. Due to the difficulty of process control in nanometer technologies, manufacturing-induced variations are growing both in number and as a percentage of feature size and electrical parameters. Therefore, characterization and modeling of the underlying sources of variability, along with their correlations, is becoming more and more difficult and costly. Furthermore, the process parameter variations make the prediction of digital circuit performance an extremely challenging task. Traditionally, the methodology adopted to determine the performance spread of a design in the presence of variability is to run multiple Static Timing Analyses at different process corners, where standard cells and interconnects have the worst/best combinations of delay. Unfortunately, as the number of variability sources increases, the corner-based method is becoming too computationally expensive. Moreover, with the larger parameter spread this approach results in overly conservative and suboptimal designs, leaving most of the advantages offered by the new technologies on the table. Statistical Static Timing Analysis (SSTA) is a promising innovative approach to deal with process variations in nanometer technologies, especially the intra-die variations that cannot be handled properly by existing corner-based techniques. In this keynote, the SSTA methodology is presented, showing the potential advantages over the traditional STA approach. Moreover, the most important challenges for SSTA, such as the required additional process data, the characterization effort, and its integration into the design flow, are outlined and discussed. Experimental results obtained from pilot projects in nanometer technologies will be presented, demonstrating the potential benefits of SSTA, along with optimization techniques based on SSTA and the parametric yield improvement that can be achieved.


Analog Power Modelling C. Svensson Linköping University, Sweden

Digital power modelling is well developed today, through many years of active research. However, analog power modelling lags behind. The aim of this paper is to discuss possible fundamentals of analog power modelling. Modelling is based on noise, precision, linearity, and process constraints. Simple elements such as samplers, amplifiers and comparators are discussed. Analog-to-digital converters are used to compare predicted minimum power constraints with real circuits.


Technological Trends, Design Constraints and Design Implementation Challenges in Mobile Phone Platforms F. Dahlgren Ericsson Mobile Platforms, Sweden

Mobile phones have already become far more than the traditional voice-centric device. A large number of capabilities are being integrated into the higher-end phones, competing with dedicated devices: camera, camcorder, music player, positioning, mobile TV, and high-speed internet access. The huge volumes push the employment of the very latest silicon and packaging technologies, while taking cost and high-volume production into account. While on the one hand the technology allows for the integration of more features and higher performance, issues such as low hardware-cost requirements, power dissipation, thermal issues, and handling the software complexity are increasingly challenging. This presentation aims at surveying current market and technological trends, including networking technology, multimedia, application software, services, and technology enablers. Furthermore, it will go through a set of design constraints and design tradeoffs, and finally cover some of the implementation challenges going forward.


System Design from Instrument Level Down to ASIC Transistors with Speed and Low Power as Driving Parameters A. Emrich Omnisys Instruments AB, Sweden

For wide-bandwidth spectrometers there are several competing technologies to consider: digital, optical and various analog schemes. For applications demanding wide bandwidth and low power consumption in combination, autocorrelation-based digital designs take advantage of Moore's law and will take a dominating position in the coming years. Omnisys implementations have shown an order of magnitude better performance with respect to bandwidth versus power consumption compared to what other teams have presented over the last decade. The reason for this is that concurrent engineering and optimisation have been performed at all levels in parallel, from the instrument level down to the transistor level. We now have a single-chip spectrometer core providing 8 GHz of bandwidth with 1024 resolution channels and a power consumption of less than 3 W. The design approach will be presented with examples of how decisions on different levels interact.


Author Index

Abdel-Hafeez, Saleh 75 Albers, Karsten 495 Alfredsson, Jon 536 Aunet, Snorre 536 Azemard, N. 138

Feautrier, Paul 10 Fournel, Nicolas 10 Fraboulet, Antoine 10

Bacinschi, P.B. 242 Barthelemy, Hervé 413 Bartzas, Alexandros 373 Basetas, Ch. 546 Bellido, M.J. 404 Bhardwaj, Sarvesh 125 Blaauw, David 211 Bravaix, A. 191 Butzen, Paulo F. 474 Cappuccino, Gregorio 107 Catthoor, Francky 373, 433 Centurelli, Francesco 516 Chabini, Noureddine 64 Chang, Yao-Wen 148 Chen, Harry I.A. 453 Chen, Pinghua 86 Chidolue, Gabriel 288 Chou, Szu-Jui 148 Cocorullo, Giuseppe 107 Crone, Allan 288

Galanis, Michalis D. 352 Ghanta, Praveen 125 Ghavami, Behnam 330, 463 Giacomotto, Christophe 181 Giancane, Luca 516 Glesner, M. 242 Goel, Amit 125 Goutis, Costas E. 352 Grumer, Matthias 268 Guérin, C. 191 Guerrero, D. 404 Guigues, Fabrice 413 Gustafson, Oscar 526 Gyimóthy, Tibor 300 Hagiwara, Shiho 222 Harb, Shadi M. 75 He, Ku 160 Helms, Domenik 171, 278 Herczeg, Zoltán 300 Hoyer, Marko 171 Hsu, Chin-Hsiung 148 Huard, V. 191 Isokawa, Teijiro 423

Dabiri, Foad 255, 443 Daga, J.M. 576 Dahlgren, F. 579 Dai, Kui 320 Delorme, Julien 31 Denais, M. 191 Devos, Harald 363 Dimitroulakos, Gregory 352 Duval, Benjamin 413


Jayapala, Murali 433 Jiang, Jie-Hong R. 148 JianJun, Guo 43 Johansson, Kenny 526 Juan, J. 404 352

Eeckhaut, Hendrik 363 Eisenstadt, William R. 75 Emrich, A. 580 Engels, S. 138 Eriksson, Henrik 566

Kamiura, Naotake 423 Keller, Maurice 310 Kim, Chris H. 474 Kim, Seongwoon 53 ´ Kiss, Akos 300 Kjeldsberg, Per Gunnar 526 Kleine, Ulrich 97 Koelmans, Albert 53 Kouretas, I. 546

582

Author Index

Kroupis, N. 505 Kui, Dai 43 Kunitake, Yuji 384 Kuo, James B. 453 Kussener, Edith 413

Pandey, S. 242 Pandini, Davide 201, 577 Papadopoulos, Lazaros 1 Papakonstantinou, George 20 Parthasarathy, CR. 191 Pavlatos, Christos 20 Pedram, Hossein 330, 463 Peon-Quiros, Miguel 373 Popa, Cosmin 117 Potkonjak, Miodrag 255, 443 Pugliese, Andrea 107

Li, Shen 43 Li, Yong 320 Li, Zhenkun 86 Lipka, Bj¨ orn 97 Lipskoch, Henrik 495 Liu, Yijun 86 Loo, Edward K.W. 453 Lucarz, Christophe 485 Luo, Hong 160 Luo, Rong 160 M¨ uhlberger, Andreas 268 Macii, A. 232 Macii, E. 232 Mamagkakis, Stylianos 373 Manis, George 20 Marnane, William 310 Masu, Kazuya 222 Matsui, Nobuyuki 423 Mattavelli, Marco 485 Maurine, P. 138, 340, 394 Mendias, Jose M. 373 Miermont, Sylvain 556 Migairou, V. 138 Millan, A. 404 Mingche, Lai 43 Munaga, Satyakiran 433 Murgan, T. 242 Nahapetian, Ani 255, 443 Najibi, Mehrdad 463 Nanua, Mini 211 Nebel, Wolfgang 171, 278 Neffe, Ulrich 268 Niknahad, Mahtab 463 Oh, Myeonghoon 53 Oklobdzija, Vojin 181 Olivieri, Mauro 516 Ortiz, A. Garc´ıa 242 Oskuii, Saeeid Tahmasbi Ostua, E. 404 Paliouras, V. 546 Panagopoulos, Ioannis

Raghavan, Praveen 433 Ramos, Estela Rey 433 Razafindraibe, A. 340, 394 Reis, Andr´e I. 474 Renaudin, Marc 556 Repetto, Guido A. 201 Ribas, Renato P. 474 Robert, M. 340 Rosinger, Sven 278 Ruan, Jian 320 Ruiz-de-Clavijo, P. 404 Sarrafzadeh, Majid 255, 443 Sato, Takashi 222 Sato, Toshinori 384 Schmidt, Daniel 300 Scotti, Giuseppe 516 Sethubalasubramanian, Nandhavel Shang, Delong 53 Shin, Chihoon 53 Singh, Mandeep 181 Sinisi, Vincenzo 201 Sithambaram, P. 232 Slomka, Frank 495 Soudris, Dimitrios 1, 373, 505 Steger, Christian 268 Stroobandt, Dirk 363 Svensson, C. 578 Syrzycki, Marek J. 453 Trifiletti, Alessandro

526

20

Uezono, Takumi Verkest, Diederik Viejo, J. 404

516

222 433

433

Author Index Vivet, Pascal 556 Vrudhula, Sarma 125

Wilson, R. 138 Wu, Z. 138

Wang, Ping 53 Wang, Wenyan 86 Wang, Yu 160 Wang, Zhiying 320 Wehn, Norbert 300 Weiss, Oliver 433 Weiss, Reinhold 268 Wendt, Manuel 268

Xia, Fei 53 Xie, Yuan 160 Yakovlev, Alex 53 Yang, Huazhong 160 Zeydel, Bart 181 Zhiying, Wang 43
