CAD for Hardware Security 9783031268953, 9783031268960



Language: English · Pages: 417 · Year: 2023


Table of contents:
Preface
Acknowledgements
Contents
1 Introduction to CAD for Hardware Security
1.1 Introduction
1.1.1 Emergence of Threats in Hardware Supply Chain
1.1.2 SoC Security Development Life-cycle (SDL)
1.1.3 Security Requirements
1.2 CAD Tools in SoC Supply Chain
1.2.1 Significance of CAD Tools in SoC Life-cycle Security
1.2.2 CAD Tools in SoC Life-cycle Threats
1.2.3 CAD Tools as Vulnerability Source: The Other Side of the Coin
1.2.4 The Need for CAD Solutions For SoC Security Verification
1.3 Summary
References
2 CAD for Security Asset Identification
2.1 Introduction
2.2 Motivation and Background
2.2.1 Motivation
2.2.2 Classification of Security Assets
2.2.3 Assessing Security Assets
2.3 CAD for Security Asset Identification
2.3.1 Inputs
2.3.2 Asset Propagation Analysis
2.3.3 Candidate Component Identification
2.3.4 Pruning
2.3.4.1 Information Leakage Assessment
2.3.4.2 Power Side-Channel Assessment
2.3.4.3 Fault Injection Assessment
2.3.4.4 Results
2.4 Summary
References
3 Metrics for SoC Security Verification
3.1 Introduction
3.2 Motivating Example
3.3 Threat Model
3.3.1 IP Piracy
3.3.2 Power Side Channel (PSC) Leakage
3.3.3 Fault Injection
3.3.4 Malicious Hardware
3.3.5 Supply Chain
3.4 IP-level Security Metrics and Design Parameters Contributing to the IP-level Security
3.4.1 Metrics to Assess an IP's Vulnerability to Piracy and Reverse Engineering
3.4.2 IP-Level Parameters Contributing IP Piracy Security Metrics
3.4.3 Metrics to Assess an IP's Vulnerability to Power Side-Channel (PSC) Attacks
3.4.4 IP-Level Parameters Contributing Power Side-Channel (PSC) Security Metrics
3.4.5 IP-level Parameters Contributing to an IP's Vulnerability to Fault Injection Attacks
3.4.6 Metrics to Assess an IP's Vulnerability to Malicious Hardware
3.4.7 IP-Level Parameters Contributing Malicious Hardware Security Metrics
3.4.8 Metrics to Assess an IP's Vulnerabilities to Supply Chain Attacks
3.5 Transition from IP to Platform
3.5.1 Platform-level Parameters for IP Piracy
3.5.2 Platform-level Parameters for Power Side-Channel Analysis
3.5.3 Platform-level Parameters for Fault Injection
3.5.4 Platform-level Parameters for Malicious Hardware
3.5.5 Supply Chain
3.6 Security Measurement and Estimation
3.7 Platform-level Security Measurement and Estimation Approaches
3.7.1 Platform-level Security Measurement and Estimation Approaches for IP Piracy
3.7.1.1 Platform-Level SAT Resiliency Measurement Flow
3.7.1.2 Platform-level SAT Resiliency Estimation Flow
3.7.2 Result and Analysis
3.8 Challenges
3.8.1 Challenges in Platform-Level Security Estimation and Measurement
3.8.2 Challenges in Achieving Accurate Estimation
3.9 Summary
References
4 CAD for Information Leakage Assessment
4.1 Introduction
4.2 Motivation and Background
4.2.1 Motivation
4.2.2 Information Flow Tracking
4.3 Information Flow Tracking Methodologies
4.3.1 Software-Based IFT Techniques
4.3.2 Hardware-Based IFT Techniques
4.3.2.1 Gate-Level Information Flow Tracking
4.3.2.2 Finding Timing Channels Using GLIFT
4.3.2.3 Register-Transfer Level Information Flow Tracking
4.3.3 HDL-Level Based IFT Technique
4.4 Summary
References
5 CAD for Hardware Trojan Detection
5.1 Introduction
5.2 Literature Survey
5.3 SymbA: Symbolic Execution at C-level for Hardware Trojan Activation
5.3.1 Preliminary Concepts
5.3.2 Why C-level Symbolic Execution?
5.3.3 SymbA Trojan Detection Steps
5.3.4 Tackling Scalability
5.3.5 SymbA Results
5.4 Summary
References
6 CAD for Power Side-Channel Detection
6.1 Introduction
6.2 Background on Security Evaluation
6.2.1 Power Side-Channel Analysis
6.2.2 General Workflow of Security Evaluation
6.2.3 Security Vulnerability Evaluation Metrics
6.2.4 Pre- and Post-silicon Evaluation
6.3 Literature Survey
6.3.1 Machine Learning-Based Side-Channel Leakage Detection
6.3.1.1 Methodology
6.3.1.2 Experimental Setup
6.3.1.3 Evaluation Results
6.3.1.4 Strength and Limitations
6.3.2 Side-Channel Leakage Detection at Register-Transfer Level
6.3.2.1 RTL-PSC Evaluation Framework and Metrics
6.3.2.2 Results and Analysis
6.3.2.3 Strengths, Limitations, and Future Work
6.3.3 Computer-Aided SCA Design Environment (CASCADE)
6.3.3.1 Methodology
6.3.3.2 Experimental Results
6.3.3.3 Strengths and Limitations
6.3.4 System Side-Channel Leakage Emulation for HW/SW Security Co-verification
6.3.4.1 Methodology
6.3.4.2 Results
6.3.4.3 Strengths and Limitations
6.3.5 Holistic Power Side-Channel Leakage Assessment (HAC)
6.3.5.1 Methodology
6.3.5.2 Experimental Results
6.3.5.3 Strengths and Limitations
6.4 Summary
References
7 CAD for Fault Injection Detection
7.1 Introduction
7.2 Background on FI Prevention and Detection
7.2.1 Delay-Based Countermeasures
7.2.2 Hardware Platforms for FI Vulnerability Assessment
7.2.3 Analyzing Vulnerabilities in FSMs (AVFSMs)
7.3 Literature Survey Methodology
7.3.1 The Efficiency of a Glitch Detector Against Fault Injection
7.3.2 HackMyMCU: Low-Cost Security Assessment Platform
7.3.3 Analyzing Vulnerabilities in Finite State Machine (AVFSM)
7.3.4 Security-Aware FSM
7.3.5 Multi-fault Attacks Vulnerability Assessment
7.4 Summary
References
8 CAD for Electromagnetic Fault Injection
8.1 Introduction
8.2 Background Study
8.3 Literature Survey Methodology
8.3.1 Electromagnetic Security Tests for SoC
8.3.2 Electromagnetic Fault Injection Against a System-on-Chip, Toward New Micro-architectural Fault Models
8.3.3 Security Evaluation Against Electromagnetic Analysis at Design Time
8.3.4 Design for EM Side-Channel Security Through Quantitative Assessment of RTL Implementations
8.3.5 Resilience of Error Correction Codes Against Harsh Electromagnetic Disturbances
8.4 Summary
References
9 CAD for Hardware/Software Security Verification
9.1 Introduction
9.2 Background
9.3 Literature Survey
9.3.1 System-on-Chip Platform Security Assurance: Architecture and Validation
9.3.1.1 Asset Definition and Security Policies
9.3.1.2 Sources of Vulnerabilities
9.3.1.3 Secure Architecture Components
9.3.1.4 Security Validation
9.3.2 Secure RISC-V System-on-Chip
9.3.3 Symbolic Assertion Mining for Security Validation
9.3.4 Hardware/Software Co-verification Using Interval Property Checking
9.3.5 Verification Driven Formal Architecture and Microarchitecture Modeling
9.4 Summary
References
10 CAD for Machine Learning in Hardware Security
10.1 Introduction
10.2 Background on the Problem
10.2.1 Machine Learning Techniques
10.2.1.1 Deep Learning
10.2.1.2 Decision Tree
10.2.1.3 K-Nearest Neighbors (KNN)
10.2.1.4 Support Vector Machine (SVM)
10.2.2 Threat Models
10.2.2.1 Side-Channel Attacks
10.2.2.2 Hardware Trojan Attacks
10.3 Literature Survey
10.3.1 ML-Based Side-Channel Analysis Techniques
10.3.1.1 DL-LA: Deep Learning Leakage Assessment
10.3.1.2 Assessment of Common Side-Channel Countermeasures with Respect To Deep Learning-Based Profiled Attacks
10.3.1.3 Test Generation Using Reinforcement Learning for Delay-Based Side-Channel Analysis
10.3.1.4 Machine Learning-Based Side-Channel Evaluation of Elliptic-Curve Cryptographic FPGA Processor
10.3.2 Trojan Detection
10.3.2.1 SVM-Based Real-Time Hardware Trojan Detection for Many-Core Platform
10.3.2.2 Adaptive Real-Time Trojan Detection Framework Through Machine Learning
10.3.2.3 HW2VEC: A Graph Learning Tool for Automating Hardware Security
10.3.2.4 Automated Test Generation for Hardware Trojan Detection Using Reinforcement Learning
10.3.2.5 Contrastive Graph-Convolutional Networks for Hardware Trojan Detection in Third-Party IP Cores
10.4 Summary
References
11 CAD for Securing IPs Based on Logic Locking
11.1 Introduction
11.2 Background and Related Work
11.2.1 Threat Model
11.2.2 EPIC: Ending Piracy of Integrated Circuits
11.2.3 Fault Analysis-Based Logic Encryption
11.2.4 Security Analysis of Logic Obfuscation and Strong Logic Locking
11.2.5 On Improving the Security of Logic Locking
11.2.6 Evaluation of the Security of Logic Encryption Algorithms
11.2.7 SARLock: SAT Attack Resistant Logic Locking
11.2.8 Anti-SAT: Mitigating SAT Attack on Logic Locking
11.2.9 Attacking Logic Locked Circuits by Bypassing Corruptible Output and Trade-off Analysis Against all known Logic Locking Attacks based on BDD
11.2.10 AppSAT: Approximately Deobfuscating Integrated Circuits
11.2.11 CAS-Lock: A Logic Locking Scheme without Trade-off between Security and Corruptibility
11.3 Summary
References
12 CAD for High-Level Synthesis
12.1 Introduction
12.2 Background on the Problem
12.2.1 Trojan Attack
12.2.2 Task Scheduling
12.3 Literature Survey Methodology
12.3.1 Secure High-Level Synthesis: Challenges and Solutions
12.3.2 Examining the Consequences of High-Level Synthesis Optimizations on Power Side Channel
12.3.3 High-Level Synthesis with Timing-Sensitive Information Flow Enforcement
12.3.4 TL-HLS: Security-Aware Scheduling with Optimal Loop Unrolling Factor
12.3.5 Secure by Construction: Addressing Security Vulnerabilities Introduced During High-Level Synthesis
12.3.5.1 Detecting Vulnerability Source
12.3.5.2 Modifying Configuration Settings
12.3.5.3 Modifying Algorithms
12.3.6 Analyzing Security Vulnerabilities Introduced by High-Level Synthesis
12.3.6.1 Tool for Detection of Unbalanced Pipeline
12.3.6.2 Tool for Power Side-Channel Leakage Assessment
12.3.6.3 Tool for Side-Channel Assessment
12.3.6.4 Tool for Fault Injection Vulnerability Assessment
12.4 Summary
References
13 CAD for Anti-counterfeiting
13.1 Introduction
13.2 Background
13.2.1 Threat Model
13.2.2 Supply Chain Vulnerabilities
13.2.3 Challenges
13.3 Counterfeit Avoidance
13.4 Counterfeit Detection Using CDIR
13.5 Protection Against Untrusted Foundry
13.5.1 Counterfeit Test Method Selection
13.5.2 Counterfeit Chip Defects in Infrared (IR) Domain
13.5.3 Defective Pin Detection
13.6 Summary
References
14 CAD for Anti-probing
14.1 Introduction
14.2 Background
14.2.1 Probing Techniques
14.2.2 Threat Model
14.3 Detection of Probing Attempts
14.3.1 Principle Operation
14.3.2 Advantages
14.3.3 Limitations
14.4 Cryptographically Secure Shields
14.4.1 Operation Principle
14.4.2 Back-side Attack
14.4.3 Limitations
14.5 Vulnerability Assessment to Probing Attacks
14.5.1 Advantages
14.5.2 Limitations
14.6 Layout-driven Assessment Framework
14.6.1 Bypass Attack Assessment
14.6.2 Reroute Attack Assessment
14.6.3 Evaluation
14.7 Anti-probing Physical Design Flow
14.7.1 Evaluation
14.8 Summary
References
15 CAD for Reverse Engineering
15.1 Introduction
15.2 Literature Survey
15.3 Inference
15.3.1 CLARION
15.3.2 FSM Extraction Methods
15.3.2.1 FSM Extraction Founded on Control Signal Identification
15.3.2.2 ReFSM
15.3.3 FSM Synthesis
15.3.4 RERTL
15.4 Summary
References
16 CAD for PUF Security
16.1 Introduction
16.2 Background
16.3 Literature Survey
16.3.1 Error Correction
16.3.2 Real-Valued Physical Unclonable Functions (RV-PUF)
16.3.3 Soft Decision IBS Encoder
16.3.4 Soft Decision IBS Decoder
16.3.5 Security Analysis
16.4 DNN Based Modeling Attack (PUFNet)
16.4.1 XOR-Inverter RO-PUF Design Analysis and Vulnerability Assessment by Machine Learning
16.4.2 Modeling Attacks of PUF on Silicon Data
16.5 Summary
References
17 CAD for FPGA Security
17.1 Introduction
17.2 FPGA Security Background
17.2.1 CAD for FPGA
17.2.1.1 FPGA Security Issues
17.2.2 Information Assurance
17.2.2.1 Confidentiality
17.2.2.2 Integrity
17.2.2.3 Authentication
17.2.3 Anti-tamper
17.2.3.1 Defense Against Run-Time Malicious Circuit Insertion
17.2.3.2 Bitstream Decryption Key Protection
17.2.3.3 Restricted Access to Dedicated Circuitry
17.2.3.4 Defense Against Invasive and Physical Attacks
17.3 SoC FPGA Security
17.3.1 Security Architectural Features
17.3.2 Attack Vectors and Possible Mitigation
17.3.2.1 Direct Memory Attack (DMA)
17.3.2.2 Cache Timing Attack (CTA)
17.3.2.3 Rowhammer Attack
17.4 Cloud FPGA Security
17.4.1 Remote Side-Channel Attacks
17.4.2 Remote Fault Injection Attacks
17.4.3 Countermeasures
17.5 FPGA Initialization Security
17.5.1 Threat Model
17.5.2 SeRFI Protocol
17.5.3 Protocol Timeline
17.5.4 SeRFI Attack Resiliency
17.6 Summary
References
18 The Future of CAD for Hardware Security
18.1 Introduction
18.2 Summary
18.2.1 Introduction to CAD for Hardware Security
18.2.2 CAD for Detecting Hardware Threats
18.2.3 CAD for Frontend Security
18.2.4 CAD for Physical Assurance
18.2.5 CAD for FPGA Security
18.3 Conclusion and Future Directions
References
Index


Farimah Farahmandi M. Sazadur Rahman Sree Ranjani Rajendran Mark Tehranipoor

CAD for Hardware Security


Farimah Farahmandi University of Florida Gainesville, FL, USA

M. Sazadur Rahman University of Florida Gainesville, FL, USA

Sree Ranjani Rajendran University of Florida Gainesville, FL, USA

Mark Tehranipoor University of Florida Gainesville, FL, USA

ISBN 978-3-031-26895-3    ISBN 978-3-031-26896-0 (eBook)
https://doi.org/10.1007/978-3-031-26896-0

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2023

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG.
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland

Farimah Farahmandi would like to dedicate this book to her parents, Fatemeh Hashmei-Kashani and Mohammad Farahmandi; sisters, Farzaneh Farahmandi and Fargol Farahmandi; and friends, Roshanak Mohammdivojdan and Masi Rajabi, for their constant support through her ups and downs.

M. Sazadur Rahman would like to dedicate this book to his parents, Ghulmaur Rahman and Shamima Rahman; wife, Tasnuva Farheen; siblings, Shaikhur Rahman and Sabrina Rahman; and friend, Adib Nahiyan, for their constant support, encouragement, and effort whenever he needed it.

Sree Ranjani Rajendran would like to dedicate this book to her daughter, Tanusree S.; parents, A. R. Rajendran and R. Thilagavathy; sibling, R. Rajpradeep; and her friends and teachers for their encouragement and constant support.

Mark Tehranipoor would like to dedicate this book to his project sponsors.

Preface

Emerging hardware security vulnerabilities are especially menacing because it is almost impossible to amend a design after fabrication. Recent studies have reported vulnerabilities including side-channel leakage, information leakage, access control violations, and malicious functionality. Attacks exploiting such vulnerabilities can easily bypass software-level security mechanisms and put devices and systems at risk. Increasing design complexity, aggressive time-to-market pressure, and evolving hardware attacks all threaten the security of hardware designs. Ensuring the security of a system-on-chip (SoC) in terms of trustworthiness, privacy, and reliability is demanding given its wide usage. However, existing techniques lack automation; they rely on manual approaches that are neither efficient nor scalable for complex designs. The semiconductor industry is therefore looking for automated computer-aided design (CAD) tools that perform design verification and validation efficiently, increasing design accuracy with minimal testing time. Hardware engineers use such CAD tools to analyze, identify, root-cause, and mitigate SoC security problems and thereby ensure the trustworthiness of the design.

This book covers the utilization of CAD tools in hardware security. Vulnerabilities in SoCs arise from design mistakes, lack of security understanding, design transformations, various attack surfaces, and malicious intent. Furthermore, the CAD tools used in the SoC design flow can themselves unintentionally introduce additional vulnerabilities. Considering these challenges and potential solutions, this book presents a comprehensive summary of hardware security defenses, describes the fundamentals of CAD tool usage, and highlights significant research results. It systematizes the knowledge of CAD tools used in hardware security and elaborates on their imperative features. The book contains 18 chapters and an appendix on VLSI testing. Each chapter has been planned to emphasize the utilization of CAD tools in the domain of hardware security.


We anticipate that this book will provide comprehensive knowledge to graduate students, researchers, and professionals in SoC design and CAD tool development.

Gainesville, FL, USA
August 30, 2022

Farimah Farahmandi M Sazadur Rahman Sree Ranjani Rajendran Mark Tehranipoor

Acknowledgements

A handful of books in the community cover CAD tools for electronic design automation. However, writing the first-ever textbook dedicated to the security-aware usage of CAD tools was no small feat, given the many obstacles along the way. It was a long and relentless journey to plan the book, prepare the chapters, and finally combine them into a printable format. The outcome of that journey, however, was more rewarding than we had imagined. This book was enriched by our friends, colleagues, and students, and it would not have been possible without the generous contributions of many researchers and experts in the field of hardware security from industry and academia. Their valuable inputs have shaped various elements of the book, e.g., chapter contents, illustrations, exercises, and results. We thank the following authors for contributing to the academic research behind the CAD for Hardware Security book chapters:

• Mohammad, Sajeed, University of Florida, USA
• Ayalasomayajula, Avinash, University of Florida, USA
• Md Kawser Bepary, University of Florida, USA
• Md Rafid Muttaki, University of Florida, USA
• Henian Li, University of Florida, USA
• Shuvagata Saha, University of Florida, USA
• Dr. Nitin Pundir, IBM, USA
• Tanvir Rahman, University of Florida, USA
• Arash Vafaei, University of Florida, USA
• Amit Mazumder Shuvo, University of Florida, USA
• Nusrat Farzana Dipu, University of Florida, USA
• Md Sami Ul Islam, University of Florida, USA
• Pantha Protim Sarker, University of Florida, USA
• Nashmin Alam, University of Florida, USA
• Tao Zhang, University of Florida, USA
• Ahmed, Bulbul, University of Florida, USA
• Dr. Dhwani Mehta, AMD, USA


CAD for Hardware/Software Security Verification . . . . . . . . . . . . . . . . . . . . 9.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9.2 Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9.3 Literature Survey . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9.3.1 System-on-Chip Platform Security Assurance: Architecture and Validation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9.3.2 Secure RISC-V System-on-Chip . . . . . . . . . . . . . . . . . . . . . . . . . 9.3.3 Symbolic Assertion Mining for Security Validation. . . . . 9.3.4 Hardware/Software Co-verification Using Interval Property Checking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

187 187 188 189

153 154 156 160 162 165 166

174 176 179 182 183 184

189 194 198 201

Contents

xv

9.3.5

Verification Driven Formal Architecture and Microarchitecture Modeling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 204 9.4 Summary. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 208 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209 10

CAD for Machine Learning in Hardware Security . . . . . . . . . . . . . . . . . . . . . 10.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.2 Background on the Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.2.1 Machine Learning Techniques . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.2.2 Threat Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.3 Literature Survey . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.3.1 ML-Based Side-Channel Analysis Techniques . . . . . . . . . . 10.3.2 Trojan Detection. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.4 Summary. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

211 211 212 213 214 218 218 223 228 228

11

CAD for Securing IPs Based on Logic Locking . . . . . . . . . . . . . . . . . . . . . . . . . 11.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11.2 Background and Related Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11.2.1 Threat Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11.2.2 EPIC: Ending Piracy of Integrated Circuits . . . . . . . . . . . . . . 11.2.3 Fault Analysis-Based Logic Encryption . . . . . . . . . . . . . . . . . 11.2.4 Security Analysis of Logic Obfuscation and Strong Logic Locking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11.2.5 On Improving the Security of Logic Locking . . . . . . . . . . . 11.2.6 Evaluation of the Security of Logic Encryption Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11.2.7 SARLock: SAT Attack Resistant Logic Locking . . . . . . . . 11.2.8 Anti-SAT: Mitigating SAT Attack on Logic Locking . . . 11.2.9 Attacking Logic Locked Circuits by Bypassing Corruptible Output and Trade-off Analysis Against all known Logic Locking Attacks based on BDD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11.2.10 AppSAT: Approximately Deobfuscating Integrated Circuits. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11.2.11 CAS-Lock: A Logic Locking Scheme without Trade-off between Security and Corruptibility . . . . . . . . . . 11.3 Summary. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

231 231 232 233 233 234

CAD for High-Level Synthesis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12.2 Background on the Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12.2.1 Trojan Attack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12.2.2 Task Scheduling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12.3 Literature Survey Methodology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

259 259 260 261 261 263

12

236 239 242 244 247

249 251 253 256 256

xvi

Contents

12.3.1 12.3.2

Secure High-Level Synthesis: Challenges and Solutions Examining the Consequences of High-Level Synthesis Optimizations on Power Side Channel. . . . . . . . 12.3.3 High-Level Synthesis with Timing-Sensitive Information Flow Enforcement . . . . . . . . . . . . . . . . . . . . . . . . . . . 12.3.4 TL-HLS: Security-Aware Scheduling with Optimal Loop Unrolling Factor . . . . . . . . . . . . . . . . . . . . . . . . . . 12.3.5 Secure by Construction: Addressing Security Vulnerabilities Introduced During High-Level Synthesis 12.3.6 Analyzing Security Vulnerabilities Introduced by High-Level Synthesis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12.4 Summary. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

263

13

CAD for Anti-counterfeiting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13.2 Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13.2.1 Threat Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13.2.2 Supply Chain Vulnerabilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13.2.3 Challenges . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13.3 Counterfeit Avoidance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13.4 Counterfeit Detection Using CDIR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13.5 Protection Against Untrusted Foundry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13.5.1 Counterfeit Test Method Selection . . . . . . . . . . . . . . . . . . . . . . . 13.5.2 Counterfeit Chip Defects in Infrared (IR) Domain . . . . . . 13.5.3 Defective Pin Detection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13.6 Summary. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

289 289 290 290 292 293 294 296 299 300 304 307 311 311

14

CAD for Anti-probing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14.2 Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14.2.1 Probing Techniques . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14.2.2 Threat Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14.3 Detection of Probing Attempts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14.3.1 Principle Operation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14.3.2 Advantages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14.3.3 Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14.4 Cryptographically Secure Shields . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14.4.1 Operation Principle. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14.4.2 Back-side Attack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14.4.3 Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14.5 Vulnerability Assessment to Probing Attacks. . . . . . . . . . . . . . . . . . . . . . . 14.5.1 Advantages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14.5.2 Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14.6 Layout-driven Assessment Framework . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

315 315 317 317 318 318 319 321 321 321 322 323 324 324 325 325 326

265 267 269 272 278 286 286

Contents

xvii

14.6.1 Bypass Attack Assessment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14.6.2 Reroute Attack Assessment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14.6.3 Evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14.7 Anti-probing Physical Design Flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14.7.1 Evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14.8 Summary. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

326 329 330 331 332 334 335

15

CAD for Reverse Engineering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15.2 Literature Survey . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15.3 Inference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15.3.1 CLARION . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15.3.2 FSM Extraction Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15.3.3 FSM Synthesis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15.3.4 RERTL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15.4 Summary. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

337 337 339 339 339 341 347 349 353 354

16

CAD for PUF Security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16.2 Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16.3 Literature Survey . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16.3.1 Error Correction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16.3.2 Real-Valued Physical Unclonable Functions (RV-PUF). 16.3.3 Soft Decision IBS Encoder . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16.3.4 Soft Decision IBS Decoder . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16.3.5 Security Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16.4 DNN Based Modeling Attack (PUFNet) . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16.4.1 XOR-Inverter RO-PUF Design Analysis and Vulnerability Assessment by Machine Learning . . . . . . . . 16.4.2 Modeling Attacks of PUF on Silicon Data . . . . . . . . . . . . . . . 16.5 Summary. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

357 357 358 359 359 360 361 361 362 363

CAD for FPGA Security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17.2 FPGA Security Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17.2.1 CAD for FPGA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17.2.2 Information Assurance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17.2.3 Anti-tamper . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17.3 SoC FPGA Security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17.3.1 Security Architectural Features . . . . . . . . . . . . . . . . . . . . . . . . . . . 17.3.2 Attack Vectors and Possible Mitigation . . . . . . . . . . . . . . . . . . 17.4 Cloud FPGA Security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17.4.1 Remote Side-Channel Attacks . . . . . . . . . . . . . . . . . . . . . . . . . . . .

373 373 374 374 375 378 380 380 381 383 383

17

365 366 368 370

xviii

18

Contents

17.4.2 Remote Fault Injection Attacks. . . . . . . . . . . . . . . . . . . . . . . . . . . 17.4.3 Countermeasures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17.5 FPGA Initialization Security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17.5.1 Threat Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17.5.2 SeRFI Protocol . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17.5.3 Protocol Timeline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17.5.4 SeRFI Attack Resiliency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17.6 Summary. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

388 389 391 391 392 394 394 395 395

The Future of CAD for Hardware Security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18.2 Summary. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18.2.1 Introduction to CAD for Hardware Security . . . . . . . . . . . . . 18.2.2 CAD for Detecting Hardware Threats . . . . . . . . . . . . . . . . . . . 18.2.3 CAD for Frontend Security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18.2.4 CAD for Physical Assurance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18.2.5 CAD for FPGA Security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18.3 Conclusion and Future Directions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

397 397 397 398 398 399 400 400 401 401

Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 405

Chapter 1

Introduction to CAD for Hardware Security

1.1 Introduction

We live in a world where embedded and internet-of-things (IoT) devices have become a part of our daily lives. In addition to smart consumer products, electronic system-on-chips (SoCs) are used in industrial automation solutions as well as military and space applications. Over the last four decades, digital convergence has created demand for functionally complex integrated circuits (ICs) at mass-market costs every six to nine months. As shown in Fig. 1.1, the number of IoT devices reached 30 billion in 2020 [58], against a human population of eight billion, which works out to roughly four devices per person on average. VLSI system-on-chip (SoC) designers face enormous challenges as VLSI technologies grow in speed and shrink in size; billions of transistors are now integrated into a single chip together with digital and analog circuits. An SoC is a single unified structure that integrates formerly separate microelectronic devices. Through system-level architecture, the semiconductor industry has achieved profound technological advances in modern electronic devices. The main objective of SoC design is to build a system by integrating pre-designed hardware and software blocks, collectively known as intellectual properties (IPs). Based on the target specification, the SoC integration team collects IPs, either in-house or from third-party vendors, and assembles them into the target device. In the context of SoC design, the behavior of a design emerges from the operations of its IPs and the communication across IP interfaces. High SoC integration is achieved through advances in IC process technology, computer-aided design (CAD) tools, and system-level IP blocks. The primary purpose of component integration in SoC products is to reduce cost, improve performance, and shorten time to market; system reliability and low power dissipation are further advantages of SoC integration.

However, SoC design integration is not straightforward, and many challenges arise while meeting tight time-to-market deadlines. The facets of complexity in today's SoCs include functional complexity along with architectural and verification challenges.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023. F. Farahmandi et al., CAD for Hardware Security, https://doi.org/10.1007/978-3-031-26896-0_1


Fig. 1.1 Electronic devices have become a part of our daily lives, from smart homes and smart cars to handheld PDAs and wearables. According to [58], 30 billion electronic devices are in use worldwide among a population of eight billion

Due to the complexity of modern computing devices, challenges in SoC design are rapidly increasing along dimensions such as performance, functionality, speed, reliability, cost, and, more importantly, security. The heterogeneity of components contributes significantly to design complexity, which may span the problem domain, the development process, component choices, testing, and packaging. The arrival of electronic design automation (EDA) tools in the early 1980s, however, resolved many of these challenges for design engineers. Automated EDA tools reduce the time and cost of designing complex integrated circuits and enable virtual testing of electronic circuits, decreasing production costs. EDA companies exploit CAD tools to overcome design-complexity challenges. CAD tools used in the SoC design flow automate the design workflow by supporting logical design, circuit schematic design, layout generation, and design checks. Their usage has also been extended to verification processes, where they have proven highly effective in the IC design process.

1.1.1 Emergence of Threats in Hardware Supply Chain

A modern computing system is illustrated in Fig. 1.2 in terms of different fields of security. Network security concerns the vulnerabilities of the networks that connect computer systems and the mechanisms to safeguard their integrity and usability under attack. Software vulnerabilities, such as inconsistent error handling and buffer overflows, can be exploited by malicious attacks; software security develops techniques to ensure reliable software operation even when

Fig. 1.2 The spectrum of hardware security threats at various user application levels [4]. The overall threats can be categorized as network threats (denial of service, man-in-the-middle attacks, phishing/spoofing attacks, social engineering), information threats (corruption, leakage, unavailability), software threats (IP theft/piracy, privilege escalation, man-in-the-middle, malware, denial of service), and hardware threats (IP theft/piracy, hardware Trojans, reverse engineering, cloning/overproduction, side-channel attacks, fault-injection attacks, privilege escalation)

potential security risks are present. Information security ensures confidentiality, integrity, and availability by protecting information against unauthorized access, use, modification, or destruction. Hardware security, in contrast, deals with attacks that target the hardware itself and with protection against those attacks. It forms the foundation of system security, providing a trust anchor for the other system components that interact with it. While ubiquitous computing has many benefits, its rapid growth has raised security and trust concerns. The modern electronics industry is vulnerable to both hardware-assisted and software-centric attacks, which have been prevalent for decades. The tools and techniques used for designing, implementing, optimizing performance, verifying designs, performing advanced failure analyses (FA), localizing defects, testing manufacturing processes, and analyzing reliability can themselves pose a security threat to hardware if an adversary possesses them. IP piracy, hardware Trojans, malicious modification, counterfeiting, side-channel analysis, probing, fault injection, photon emission, and reverse engineering may allow attackers to extract secret information or intellectual property (IP) from electronic systems. SoCs today are built in collaboration with various parties, including the system owner, designer, foundry, assembly, and test facility. Academic, industry, and government researchers have been reporting security issues in hardware used across a wide range of applications for over a decade [4, 28, 44, 53]. Figure 1.3 illustrates a simplified supply chain flow of electronic devices from the equipment owner to the end user, in which any party can be potentially untrusted and pose a severe threat to the original equipment manufacturer (OEM). One recent example of a hardware attack is the "Big Hack" incident [46] shown in Fig. 1.3.
Bloomberg reported that several Supermicro data centers were found to contain motherboards infected with


Fig. 1.3 Several events of hardware security breaches in the last five years. These threats span from hardware Trojans to IP theft and remote attacks, which make user data and credentials insecure. The supply chain from owner to designer, foundry, OSAT, and end user involves tens of parties; annotated incidents include a design stolen by an employee from ASML (Bloomberg, 2015) [47], a tampered chip affecting Amazon, Apple, CIA, DoD, NASA, and the US Navy (Bloomberg, 2018) [45], and a remote attacker unlocking and starting a Tesla (The Verge, 2022) [65]

malware every time they booted up, due to a small malicious chip on the motherboard. Though the attack was initiated in 2014, the incident was not publicly exposed until 2018. Supermicro's motherboards are used in the data centers of NASA, the US Navy, DoD, CIA, Amazon, Apple, and others. More recently, Bloomberg reported another incident [45] in which an insider engineer at ASML, the largest manufacturer of the lithography equipment that shrinks and prints transistor patterns onto silicon wafers, stole intellectual property and fled to a different country. Another recent incident [65] shows that attackers can exploit network security to unlock and start a smart car in seconds without any key. All these threats to the electronic hardware supply chain raise concerns about the security of personal data and credentials stored in potentially compromised devices. Hardware attacks such as side-channel attacks, exploitation of test/debug infrastructure, fault injection, information leakage, and malicious hardware known as Hardware Trojans (HTs) can also be potentially detected using CAD tools [5]. Such hardware attacks must be carefully addressed because of their possible impact on the hardware and the software and firmware running above it. Based on a common vulnerability exposure (CVE-MITRE) [37] report, the overall system


vulnerability is reduced by 43% if hardware vulnerabilities are expunged from a design. These hardware vulnerabilities pose significant challenges to design and manufacturing companies. Today, more than ever, design houses are willing to invest in security solutions to provide security assurance and build trust in their products, creating high demand for hardware security experts in government and industry. The role of a hardware security team is to include security features and support throughout design, testing, manufacturing, and validation. These teams strive to incorporate security features into the product development life-cycle (PLC) and to develop a security development life-cycle (SDL) [17] that reduces security risk before deployment [28]. Even though security solutions included in life-cycle development add some advantage, the EDA tools used to design and manufacture hardware play a major role in incorporating security objectives. In addition to power, performance, and area (PPA), EDA tools should provide security assurance with minimal cost and effort and high efficiency. In 2015, EDA companies began using CAD tools for hardware verification and validation during the pre- and post-silicon stages of hardware design, culminating in hardware assurance [1]. Hence, researchers introduced CAD-for-security techniques to build the trust of end users and product developers [16, 54]. CAD-for-security techniques are being increasingly applied to SoC design to enhance security assurance with concurrent validation [5, 57].

1.1.2 SoC Security Development Life-cycle (SDL)

Figure 1.4a, b shows the various stages of the SoC and security development life-cycles. In any SoC design, IPs developed by different IP vendors, known as third-party IP (3PIP) vendors, are integrated, synthesized using CAD tools, and fabricated. The end products are used in many applications, including the internet of things (IoT), cyber-physical systems (CPS), and embedded computing systems.

Fig. 1.4 SoC development life-cycle alongside the SoC security development life-cycle (SDL). (a) The SoC development life-cycle, which utilizes CAD tools throughout, spans third-party IP acquisition, SoC integration, physical synthesis with DFT and DFF insertion, layout, and fabrication to the end product. (b) The security development life-cycle spans product concept and definition, threat modeling, security architecture, design review, security testing, and incident response


An SoC used in any such system should be verified and validated against hardware and software security threats. However, the design complexity of modern SoC chips makes verification a bottleneck, and chip manufacturing companies spend more than 70% of their effort and resources ensuring the correctness of the chip design in all aspects of performance, functionality, timing, and reliability [13]. Verification techniques are also expected to provide the required security against hardware bugs or vulnerabilities across the SoC life-cycle. Hardware bugs are especially challenging because patching is not always possible at every level of abstraction, and they can result in persistent or permanent denial-of-service, IP/IC leakage, or exposure of assets to untrusted entities. The semiconductor industry has taken extensive measures to provide security assurance with existing simulation, emulation, and formal verification techniques that detect or prevent hardware bugs in the design. The security development life-cycle, in turn, starts from product conceptualization and definition, as shown in Fig. 1.4b, and then performs threat modeling, secure architecture development and deployment, design review, testing, and critical incident response.

1.1.3 Security Requirements

This section describes the security requirements to be considered while designing an SoC. These requirements are based on a threat model derived from the potential vulnerabilities at different stages of the SoC life-cycle. Computing devices are employed in banking, shopping, and personal information tracking, including health monitoring and fitness tracking, all of which must be protected from malicious or unauthorized access. Beyond personal use, device architectures are widely used for highly confidential applications such as cryptographic and digital rights management (DRM) keys, programmable fuses, on-chip debug instrumentation, and defeature bits. Hence, when designing a modern SoC, several sensitive assets must be protected from unauthorized access. System-critical and security-sensitive information stored in the chip are the two major asset classes to consider. In [14], assets are used to develop security policies that provide valid authentication of the design. In any SoC, security assets are disseminated across the IPs, and access control is provided based on the requirements of security policies. A security property is a statement that checks conditions, assumptions, and expected behavior of a design [36]. Security properties are specific to a security vulnerability and formally describe the expected behavior; they are developed from the essential assets and their corresponding security levels. An asset is a resource with a security-critical property that must be protected from adversaries [3]. Assets may differ from one abstraction level to another, depending on the adversarial intention. In any SoC, assets can be classified into two categories, as listed in Fig. 1.5 [14]:


Fig. 1.5 Classification of security assets in an SoC [14]: primary assets (high-priority protection), which are either static (stored in the SoC, e.g., key, password, firmware) or dynamic (generated at run-time, e.g., on-chip generated key, true random numbers), and secondary assets (supporting the primary assets, e.g., a shared bus). Assets in an SoC are classified based on their abstraction level and adversarial intention

1. Primary Assets: Assets given the highest priority of protection by the designer. Examples include device keys, passwords, firmware, true random number generators (TRNGs), and on-chip key generators (e.g., physically unclonable functions (PUFs)). Primary assets can be further classified as:
   (a) Static: assets stored within the SoC and used whenever required, e.g., a key, password, or firmware.
   (b) Dynamic: assets generated during SoC run-time and used for authentication.
2. Secondary Assets: Infrastructure that supports the protection of primary assets at rest or in transition. For example, a shared bus ensures the protection of a key moving from an on-chip ROM to an encryption engine within an SoC.
The security properties developed from these assets provide security assurance to the SoC. Threat models violating confidentiality, integrity, and availability are identified in [14], and security properties are developed according to the assets. Security property/rule databases such as [57] are widely used by researchers to verify and validate an SoC design.
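As an illustration, the taxonomy above can be captured in a small asset inventory. The following Python sketch (all asset names are hypothetical) tags each asset with its class so that later analyses can prioritize protection; it is a conceptual model, not a real CAD data structure:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Role(Enum):
    PRIMARY = "primary"      # protected directly (keys, passwords, firmware)
    SECONDARY = "secondary"  # infrastructure that carries primary assets

class Lifetime(Enum):
    STATIC = "static"    # stored in the SoC (e.g., fused key, firmware image)
    DYNAMIC = "dynamic"  # generated at run-time (e.g., TRNG output, PUF response)

@dataclass(frozen=True)
class Asset:
    name: str
    role: Role
    lifetime: Optional[Lifetime] = None  # only meaningful for primary assets

# Hypothetical asset inventory for a small SoC, mirroring Fig. 1.5
inventory = [
    Asset("device_key", Role.PRIMARY, Lifetime.STATIC),
    Asset("firmware", Role.PRIMARY, Lifetime.STATIC),
    Asset("puf_response", Role.PRIMARY, Lifetime.DYNAMIC),
    Asset("trng_output", Role.PRIMARY, Lifetime.DYNAMIC),
    Asset("shared_bus", Role.SECONDARY),
]

def high_priority(assets):
    """Primary assets demand the highest-priority protection."""
    return [a.name for a in assets if a.role is Role.PRIMARY]

print(high_priority(inventory))
```

Such an inventory is the natural input to the asset-driven security policies discussed above: each policy can be attached to the assets it governs.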


1 Introduction to CAD for Hardware Security

1.2 CAD Tools in SoC Supply Chain

CAD tools play a vital role in managing the rapid increase in SoC design complexity. The full span of the design flow utilizes CAD tools: high-level synthesis (discussed in Chap. 12), verification (discussed in Chap. 9), logic synthesis, placement and routing, static timing analysis, post-silicon validation, and manufacturing testing. Figure 1.6 describes how CAD tools are utilized for synthesizing, analyzing, and testing in the SoC design flow. CAD tools may be used in behavioral, RTL, FPGA, logic, physical, and DSP synthesis, and during optimization they can perform transistor sizing, process-variation analysis, and statistical design. When analyzing an SoC design for threats, CAD tools can serve as checkers and verifiers. Checkers perform a design rule check (DRC) to ensure that designers do not violate design rules, so that the design achieves high overall yield and reliability. Electrical rule check (ERC), netlist compare, ratio checkers, fan-in/fan-out checkers, and power checkers are applied to confirm that the design meets its specification. Verifiers, in contrast, check conditions explicitly specified as part of the design. Verifiers are of two types: timing and functional. Timing verifiers optimize circuit performance by determining the longest delay path and checking for a correct clock cycle. Functional verifiers are symbolic checkers, which compare the symbolic description of circuit functionality against the behavior derived from its individual parts. In both cases, checkers ensure that the design meets its specification. CAD tools also act as testers by generating suitable test patterns for the design, performed by automatic test pattern generation (ATPG) and design-for-test (DFT). Synopsys, Mentor, and Cadence are the three major commercial CAD tool vendors; Fig. 1.8 shows the usage of their tools at various stages of the SoC design flow.
Beyond these commercial offerings, a vast and vibrant ecosystem of open-source tools also exists. Post-silicon validation tools, however, are less common; Mentor Graphics' Tessent is a notable example.
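To make the checker idea above concrete, the sketch below implements a toy fan-out rule check over a hypothetical netlist. Real DRC/ERC engines operate on far richer layout and electrical data, but the pattern of scanning the design against a rule limit is the same:

```python
# Minimal sketch of a CAD-style rule checker: flag gates whose fan-out
# exceeds a technology limit. Netlist format, gate names, and the limit
# are illustrative only.

# net -> (driver gate, list of sink gates)
NETLIST = {
    "n1": ("g1", ["g2", "g3", "g4", "g5", "g6"]),
    "n2": ("g2", ["g7"]),
    "n3": ("g3", ["g7", "g8"]),
}

def check_fanout(netlist, max_fanout=4):
    """Return {driver gate: fan-out} for every net that violates the limit."""
    violations = {}
    for net, (driver, sinks) in netlist.items():
        if len(sinks) > max_fanout:
            violations[driver] = len(sinks)
    return violations

print(check_fanout(NETLIST))  # g1 drives 5 sinks -> violation
```

A production fan-out checker would additionally weight sinks by input capacitance and consider the driver's strength, but the report-violations structure carries over directly.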

Fig. 1.6 Steps in semiconductor design where CAD tools are used, such as synthesis, analysis, and testing in the SoC design flow. Synthesis tools cover behavioral, RTL, FPGA, logic, physical, and DSP synthesis and circuit optimization (transistor sizing, statistical design); analysis tools comprise checkers (DRC, ERC, netlist compare, ratio, fan-in/fan-out, and power checkers) and verifiers (timing, ICE/hardware, and formal verifiers); testing tools include ATPG and DFT


1.2.1 Significance of CAD Tools in SoC Life-cycle Security

This section describes how CAD tools address the supply chain challenges of the SoC life-cycle and the hardware threats associated with each stage. Because of globalization, the SoC development cycle is distributed worldwide, which increases the opportunity to embed threats into the design. As a result, end-users and IC manufacturing industries lose trust in their products. The significance of commercially available and newly developed CAD tools in hardware security is highlighted below.
• With the exponential growth in design complexity and in the number of potential attack surfaces, the effort required to secure electronic circuits grows drastically. There is essentially one way to address this mismatch between demand and supply: improve the qualitative security that CAD tools can offer. By qualitative security, we mean that the CAD tools not only perform design implementation, optimization, verification, and testing, as mentioned at the beginning of Sect. 1.2, but also ensure that the underlying electronic circuit meets the security requirements discussed in Sect. 1.1.3.
• CAD tools can ensure that hardware security requirements are assessed and addressed (when required) during the design and implementation stage of the semiconductor supply chain. It is preferable to detect any vulnerability at an early stage, for instance at the RT level, rather than at later stages such as physical layout. As presented in Fig. 1.6, CAD tools are already used throughout the SoC design flow for various tests, checks, and verification tasks; for example, logical equivalency is checked between the input and output of every synthesis step to ensure functional similarity during the flow. If the security requirements of Sect. 1.1.3 can be broken down into tangible rules and checks, those rules can likewise be verified during the design flow using CAD tools, and scalable hardware security can be ensured.
• Researchers have proposed several solutions [4] to meet hardware security requirements. However, their practical usage is severely challenged by their ad-hoc nature, lack of automation, implementation overhead, and limited scalability. CAD tools can bring enhanced productivity, scalability, and reduced turnaround times with minimal cost to meet security requirements in the supply chain.

1.2.2 CAD Tools in SoC Life-cycle Threats

Figure 1.7 highlights the existing hardware security threats in the SoC life-cycle, while Fig. 1.8 shows the CAD tools from different vendors that are used in industry throughout the SoC design flow. The


Fig. 1.7 Hardware threats in the globally distributed supply chain across the SoC life-cycle. The life-cycle stages (IP vendor, SoC design house, foundry, and deployment) face threats including hardware Trojans, hidden backdoors, information leakage, IP piracy, Trojan insertion by tools, Trojan implants, IC overproduction, reverse engineering, fault injection, and side-channel attacks

following briefly explains the major hardware security threats from Fig. 1.7 and the CAD tools used to exploit them.
1. IP Piracy: Intellectual property is the original design idea of an IC. An attacker may steal the intellectual property of the design without the designer's knowledge [6, 25, 32, 38]. During the design phase, any of the synthesis and place-and-route CAD tools from Fig. 1.8 can be used to exploit this threat. Techniques to mitigate IP piracy are discussed in Chap. 11.
2. Hidden Backdoor: Hidden backdoors are logic functions added to the design that enable remote control of the IC, allowing the adversary to access the design while the IC is functioning [30]. The adversary can leak secret information or cause malfunctions in the design. Such threats are usually introduced during the design synthesis or manufacturing phase; hence, synthesis and place-and-route CAD tools from Fig. 1.8 are most often used to insert these vulnerabilities.
3. Reverse Engineering: IC reverse engineering is the process of identifying device functionality by extracting the gate-level netlist [35, 56]. The attacker may reverse engineer either the end product or the GDSII layout of the design [10, 56]. Reverse engineering tools and techniques are now available at low cost [9, 11] to steal or pirate a design (discussed in Chap. 15).
4. IC Overbuilding: An attacker in the foundry may overproduce the IC and sell the excess illegally on the market [7, 48]. Overproduction does not require any specific CAD tools: the GDSII shared with the foundry for fabrication is sufficient to build more chips than contracted and sell them on the open market.
5. Counterfeiting: Counterfeit ICs are produced and distributed on the market at a lower price without the knowledge of the original component manufacturer [30] (discussed in Chap. 13).

Fig. 1.8 The list of CAD tools used at different stages of the SoC design flow. Representative tools include: system integration (Platform Architect); high-level synthesis (Vivado HLS, Stratus, LegUp, Bambu, Catapult C); functional verification (VCS, ModelSim, NCSim, JasperGold, Incisive); logic synthesis (Design Compiler, Genus, Precision, Yosys); DFT (DFT Compiler, Modus); post-synthesis verification (Conformal LEC, Formality); static timing (PrimeTime, StarRC, Tempus, Quantus); placement and routing (IC Compiler, Innovus, Xpedition); power and DFM (PrimePower, Voltus, RedHawk); physical verification and signoff (IC Validator, Calibre, Assura); manufacturing testing (TetraMax, Encounter Test, FastScan); and post-silicon validation (SigSeT, Tessent). Security verification accompanies each of these stages


6. Hardware Trojans: Hardware Trojans are malicious modifications to the circuit that, once triggered, may alter the design functionality, cause a denial-of-service, or leak secret design information [27, 31, 39, 52]. A detailed description of Trojans and their classification is given in [49, 51]. Trojan detection is difficult because of their stealthy nature, and with aggressive technology scaling it is very hard to distinguish a Trojan's effect from process variation [50]. Trojans can be embedded at any stage: the specification, design, fabrication, testing, or assembly and packaging phase. Several CAD tools from Fig. 1.8, such as synthesis, DFT, verification, and testing tools, can be used to embed hardware Trojans in the design. Still, no specific tools or standard measurements exist to detect or prevent Trojan insertion. Salmani et al. [50] proposed a vulnerability analysis flow to determine likely Trojan locations in a design and developed a Trojan detectability metric to quantify Trojan activation and effect; the metric exposes the weaknesses and strengths of Trojan detection techniques (discussed in Chap. 5).
7. Information Leakage: At run-time, an attacker can obtain confidential information either through side-channel analysis [23] or through a deployed Trojan [18], for example the private key of a crypto module (elaborated in Chap. 4). Researchers have shown that several synthesis, verification, testing, and DFT tools can be exploited to leak sensitive design information. These threats are thoroughly discussed in Chap. 4.
8. Side-channel Attacks: By exploiting side-channel signals such as power, electromagnetic emissions, timing, acoustics, optics, and memory caches, together with hardware weaknesses, an attacker can extract information from crypto modules [34, 55]. Using side-channel attacks, the attacker can extract the secret key without learning the direct relation between plaintext and ciphertext; with the secret key, it is easy to decipher the encrypted information (elaborated in Chaps. 6 and 8). Researchers exploit power analysis and design-for-manufacturability (DFM) tools from Fig. 1.8 to analyze the side-channel behavior of a design, localize the target asset, and then carry out the side-channel attack on the device [34, 55]. On the other hand, the associated mitigations can also be integrated into the design using synthesis and place-and-route CAD tools.
9. Fault Injection Attacks (FIAs): A transient fault is induced during normal chip operation, resulting either in the disabling of security features and countermeasures or in the leakage of secret information from crypto modules [15, 26]. Clock glitching, voltage glitching, electromagnetic (EM) pulses, light and laser exposure, and focused ion beam (FIB) are the most prominent FIA approaches to violating device integrity, confidentiality, and availability [61]. The attacks are performed on the fabricated device; however, several DFT, testing, verification, power, and manufacturability tools from Fig. 1.8 are widely used to fully or partially localize the asset in the device. Fault injection attacks and the use of CAD tools for their detection and mitigation are discussed in Chap. 7.
To overcome these hardware security challenges [16] and to provide trustworthy hardware, design-for-trust techniques have been proposed. CAD tools are utilized at the SoC


design stage to develop design-for-trust techniques such as watermarking [24], IC metering [2, 29], split manufacturing [19, 21, 22, 59], IC camouflaging [12, 42, 60, 64], and logic encryption [40, 41, 43, 48, 62, 63]. Among all design-for-trust techniques, logic locking is the most significant, as it provides protection at all stages of the IC supply chain (discussed in Chap. 11), whereas techniques such as IC camouflaging and split manufacturing protect the design only against particular malicious entities; for logic locking, applying CAD tools at the logic synthesis level is recommended. Generally, the hardware security and trust schemes developed using CAD tools target the detection or prevention of hardware threats in the SoC supply chain. Researchers have employed CAD tools at each stage of the SoC design flow to provide end-to-end security verification and assurance, resulting in a security life-cycle that runs parallel to the SoC design flow, as described below. The security development life-cycle (SDL) is one of the measures included in the SoC development life-cycle to provide security assessment, as shown in Fig. 1.3. The security requirements, including the adversary threat model, are treated as the specification of the SDL flow. From the security specifications, the list of assets, the capabilities of an adversary, and the objectives of the security architecture are derived to mitigate any threat. Along with the security test cases, these objectives are translated into micro-architectural security specifications. Once the design is implemented, pre-silicon security verification is carried out using dynamic verification, formal verification, or manual RTL analysis. Only chips that pass security verification and meet the security specifications are taped out. Post-silicon security verification begins once the chip is taped out.
Bugs identified in both the pre-silicon and post-silicon phases are fixed according to their severity rating. The challenge in the SDL flow is that the security specification varies for every threat model; therefore, human expertise is required to define and run the test cases. Nowadays, CAD tools are widely used to validate the security assessments in the SDL flow and to confirm that industry security requirements are met.

1.2.3 CAD Tools as Vulnerability Source: The Other Side of the Coin

While CAD tools are an integral part of the modern SoC design flow, several researchers have explored the possibility of CAD tools unintentionally inserting vulnerabilities into the design. The authors of [8] analyze IEEE P1735, which describes methods for encrypting electronic-design intellectual property (IP) and managing access rights, and show that the standard contains several cryptographic errors. By exploiting the most egregious of these errors, the authors were able to recover entire plaintext IP. Padding-oracle attacks, for instance, are among the well-known attack vectors exploited in [8]. Because the underlying IP must support typical applications, new attack capabilities emerge, for instance against commercial system-on-chip


(SoC) tools that combine multiple IP pieces into a fully specified chip design. In a black-box oracle setting, an attacker can exploit various mistakes made in a commercial SoC tool. Besides recovering plaintext IP, the authors of [8] demonstrate how to create ciphertexts of IP that include targeted hardware Trojans in a standard-compliant way. Researchers have also shown that circuit-design CAD tools can be leveraged to insert hardware Trojans and to evade their detection [47]. Figure 1.8 shows how CAD tools are used in IC design to verify the security assurance of the chip. Due to scaling, the entire RTL-to-GDSII design flow has moved from standalone synthesis, placement, and routing algorithms to an integrated construction-and-analysis approach. Beyond the traditional functional design implementation, optimization, and verification steps, the SoC design flow must undergo security verification after every design transformation step, as depicted in Fig. 1.8. These security verification steps ensure that the final GDSII is free from potential security vulnerabilities. Moreover, the effort and resources required to identify and fix a security vulnerability grow substantially as the design moves from one abstraction level to another. Therefore, the subsequent chapters of this book discuss how security vulnerabilities can be detected and mitigated at the early stages of the SoC design flow.
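To illustrate the padding-oracle class of attack mentioned above, the sketch below mounts the classic CBC/PKCS#7 padding-oracle attack. The per-byte substitution "block cipher" is a stand-in (the real P1735 flaw involves AES-CBC) used only so the example is self-contained; what mirrors the real attack is that the adversary recovers a plaintext block using nothing but the oracle's valid/invalid padding answers:

```python
# Toy CBC padding-oracle demonstration in the spirit of the P1735
# teardown [8]. All keys and messages are synthetic.
import random

random.seed(7)
BLOCK = 8
_S = list(range(256)); random.shuffle(_S)        # secret substitution table
_S_INV = [0] * 256
for i, v in enumerate(_S):
    _S_INV[v] = i

def _enc_block(b): return bytes(_S[x] for x in b)
def _dec_block(b): return bytes(_S_INV[x] for x in b)

def encrypt(plaintext, iv):
    """CBC mode with PKCS#7 padding over the toy block 'cipher'."""
    pad = BLOCK - len(plaintext) % BLOCK
    data = plaintext + bytes([pad]) * pad
    out, prev = b"", iv
    for i in range(0, len(data), BLOCK):
        blk = _enc_block(bytes(a ^ b for a, b in zip(data[i:i+BLOCK], prev)))
        out += blk; prev = blk
    return out

def padding_oracle(iv, ct_block):
    """The only thing the attacker may ask: is the padding valid?"""
    p = bytes(a ^ b for a, b in zip(_dec_block(ct_block), iv))
    pad = p[-1]
    return 1 <= pad <= BLOCK and p.endswith(bytes([pad]) * pad)

def attack(iv, ct_block):
    """Recover one plaintext block using only oracle answers."""
    inter = [0] * BLOCK                          # recovered D(ct_block)
    for pos in range(BLOCK - 1, -1, -1):
        pad = BLOCK - pos
        for g in range(256):
            prev = bytearray(BLOCK)
            prev[pos] = g
            for j in range(pos + 1, BLOCK):
                prev[j] = inter[j] ^ pad         # force known bytes to pad
            if padding_oracle(bytes(prev), ct_block):
                if pos == BLOCK - 1:             # rule out accidental 0x02.. hit
                    prev[pos - 1] ^= 1
                    if not padding_oracle(bytes(prev), ct_block):
                        continue
                inter[pos] = g ^ pad
                break
    return bytes(i ^ v for i, v in zip(inter, iv))

iv = bytes(range(BLOCK))
ct = encrypt(b"TOPSECRE", iv)
print(attack(iv, ct[:BLOCK]))                    # recovers the first block
```

The lesson for CAD tooling is that the encryption wrapper around an IP block is itself an attack surface: a tool that reports padding errors verbatim hands the attacker exactly this oracle.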

1.2.4 The Need for CAD Solutions for SoC Security Verification

Hardware validation is especially challenging at the SoC level because of the stealthy nature of potential attacks and the diversity of vulnerabilities. EDA companies [20, 33] face security challenges while designing SoC chips. Challenges such as design complexity, integration of third-party IPs, customized functionality, and a globally distributed supply chain are addressed using the verification and validation capabilities of CAD tools. Figure 1.8 describes the usage of CAD tools in IC design for the verification and validation of security assurance. The evolution of CAD tools has enabled compact electronic gadgets and systems while largely resolving design-complexity challenges, and CAD tools used in the SDL can provide security assurance against most existing attacks. This book is a collection of existing CAD techniques that provide security assurance by validating security specifications. It aims to build a foundation for understanding the CAD tools used to enhance the security assurance offered by hardware verification and validation techniques. It presents a comprehensive summary of threat models and attack scenarios and describes the fundamental principles alongside highlighted research results. The book systematizes the application of CAD tools across the SoC life-cycle to assess existing security development techniques, and it groups similar analysis and verification techniques to explain their common principles


in detail. Important concepts are elaborated with illustrative circuit examples. The book comprises 18 chapters, each highlighting the fundamental principles behind the application of CAD techniques in existing security assessments across the SoC development life-cycle. This first chapter introduces SoC life-cycle development together with the security development life-cycle; the following chapters focus on the application of CAD approaches for assessment at the pre-silicon level of the SoC life-cycle. Below is a conspectus of each chapter:
• Chapter 2 describes security assets and their classification. It also discusses the existing challenges in identifying security vulnerabilities and the necessity of an automated framework for security asset identification, and it closes with an overview of such an automated framework.
• Chapter 3 concerns security metrics. The security of a system greatly depends on the standard on which it is founded, and a mitigation technique for one threat may weaken protection against another. This chapter therefore discusses IP-level security metrics, platform-level security metrics, the transition of a metric from IP to platform, and security quantification and estimation.
• Chapter 4 focuses on the use of CAD techniques for information leakage assessment. It discusses state-of-the-art techniques that track the flow of information at different abstraction levels, including the software, hardware, and HDL levels. The chapter summarizes information flow tracking in three categories and presents methodologies that may prevent system violations.
• Chapter 5 presents computer-aided hardware Trojan detection techniques. The focus of this chapter is to introduce tools developed in academia and to provide an overview of the concepts behind Trojan detection schemes.
• Chapter 6 surveys CAD for power side-channel detection, collecting the CAD techniques used for power side-channel analysis at various stages of the design flow.
• Chapter 7 elaborates on fault injection attacks, addressing challenges associated with clock glitching. It covers the limitations of current vulnerability assessment tools and how CAD tools are used to detect fault injection attacks and safeguard the device.
• Chapter 8 focuses on electromagnetic (EM) fault injection attacks on SoCs. The chapter reviews CAD techniques used to inject EM attacks on a targeted SoC and possible countermeasures that can be incorporated at the design stage. It consolidates attack models against SoCs, security evaluation metrics for a design at the pre-silicon stage, and a triplication-based error correction code that is resilient to varying electromagnetic fields.
• Chapter 9 elaborates on a collection of CAD techniques used to enhance security verification at the hardware and software levels to find design vulnerabilities.
• Chapter 10 describes the machine learning (ML) techniques used in hardware security verification and validation. This chapter includes various machine

learning techniques used for the different threat models addressed in the domain of hardware security.
• Chapter 11 focuses on the application of CAD tools to reinforce logic locking. It elaborates on cutting-edge logic locking techniques, along with their advantages and limitations, to ensure trust in the design.
• Chapter 12 discusses the vulnerabilities encountered when designing hardware in high-level languages (HLLs) using high-level synthesis (HLS) tools. It surveys prominent research in this domain and highlights work that ensures security-aware HLS translation.
• Chapter 13 elaborates on anti-counterfeiting techniques and how machine learning algorithms are applied to detect counterfeit ICs accurately. It presents a detailed taxonomy of counterfeit types together with counterfeit IC detection and existing countermeasures.
• Chapter 14 focuses on countermeasures against probing attacks. It surveys existing probing attacks, the limitations of detecting them, and the assessment of IC vulnerability through a layout-driven framework.
• Chapter 15 compiles a collection of CAD tools applicable to reverse engineering and presents a high-level algorithm for extracting gate-level netlists with reverse engineering techniques.
• Chapter 16 discusses CAD techniques applied to physical unclonable function (PUF) security, covering error correction technology for PUFs and numerical modeling attacks on several PUF implementations.
• Chapter 17 elaborates on the state of the art in field-programmable gate array (FPGA) security, including general FPGA security mechanisms, system-on-chip (SoC) FPGA security, cloud FPGA security, and FPGA initialization security. High-level security issues in FPGAs are discussed to give an overview of concerns across a wide range of applications.
• Chapter 18 concludes the book with a summary of the chapter contents and directions for future research on using CAD tools for hardware security.

1.3 Summary

This chapter elaborated on the hardware vulnerability challenges in the SoC design flow and the utilization of CAD tools to address the need for security assurance. The hardware threats related to the SoC life-cycle were discussed, and the SDL flow, developed with CAD tools to provide security assessment at all stages of SoC design, was elaborated. The chapter also gave a brief description of how each subsequent chapter applies CAD tools in the domain of hardware security.

References

1. S. Aftabjahani, R. Kastner, M. Tehranipoor, F. Farahmandi, J. Oberg, A. Nordstrom, N. Fern, A. Althoff, Special session: CAD for hardware security-automation is key to adoption of solutions, in 2021 IEEE 39th VLSI Test Symposium (VTS) (IEEE, Piscataway, 2021), pp. 1–10
2. Y. Alkabani, F. Koushanfar, Active hardware metering for intellectual property protection and security, in USENIX Security Symposium (2007), pp. 291–306
3. A. ARM, Security Technology Building a Secure System Using TrustZone Technology (White Paper) (ARM Limited, Cambridge, 2009)
4. S. Bhunia, M. Tehranipoor, Hardware Security: A Hands-on Learning Approach (Morgan Kaufmann, Burlington, 2018)
5. CAD/IP for security, trust-hub. https://www.trust-hub.org/#/cad-ip-sec/cad-solutions. Accessed 04 Aug 2021
6. E. Castillo, U. Meyer-Baese, A. García, L. Parrilla, A. Lloris, IPP@HDL: efficient intellectual property protection scheme for IP cores. IEEE Trans. Very Large Scale Integr. VLSI Syst. 15(5), 578–591 (2007)
7. R.S. Chakraborty, S. Bhunia, HARPOON: an obfuscation-based SoC design methodology for hardware protection. IEEE Trans. Comput. Aided Des. Integr. Circuits Syst. 28(10), 1493–1502 (2009)
8. A. Chhotaray, A. Nahiyan, T. Shrimpton, D. Forte, M. Tehranipoor, Standardizing bad cryptographic practice: a teardown of the IEEE standard for protecting electronic-design intellectual property, in Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security (2017), pp. 1533–1546
9. Chipworks, Reverse engineering software. http://www.chipworks.com/en/technicalcompetitive-analysis/resources/reerse-engineering-software
10. L. Chow, J.P. Baukus, W.M. Clark Jr, Integrated circuits protected against reverse engineering and method for fabricating the same using an apparent metal contact line terminating on field oxide (2007). US Patent 7,294,935
11. Degate, http://www.degate.org/documentation/
12. M. El Massad, S. Garg, M.V. Tripunitara, Integrated circuit (IC) decamouflaging: reverse engineering camouflaged ICs within minutes, in NDSS (2015), pp. 1–14
13. F. Farahmandi, Y. Huang, P. Mishra, System-on-Chip Security (Springer, Berlin, 2020)
14. N. Farzana, F. Farahmandi, M. Tehranipoor, SoC security properties and rules. Cryptology ePrint Archive (2021)
15. O. Faurax, T. Muntean, Security analysis and fault injection experiment on AES, in Proceedings of SAR-SSI, vol. 2007 (2007)
16. D. Gardner, P. Ramrakhani, S. Jeloka, P. Song, C. Vishik, S. Aftabjahani, R. Cammarota, M. Chen, A. Xhafa, J. Oakley, et al., Research needs: trustworthy and secure semiconductors and systems (T3S), Semiconductor Research Corporation (2019)
17. M. Howard, S. Lipner, The Security Development Lifecycle, vol. 8 (Microsoft Press, Redmond, 2006)
18. N. Hu, M. Ye, S. Wei, Surviving information leakage hardware Trojan attacks using hardware isolation. IEEE Trans. Emerg. Top. Comput. 7(2), 253–261 (2017)
19. F. Imeson, A. Emtenan, S. Garg, M. Tripunitara, Securing computer hardware using 3D integrated circuit (IC) technology and split manufacturing for obfuscation, in 22nd USENIX Security Symposium (USENIX Security 13) (2013), pp. 495–510
20. Intel, Intel bug bounty program. https://www.intel.com/content/www/us/en/security-center/bug-bounty-program.html
21. M. Jagasivamani, P. Gadfort, M. Sika, M. Bajura, M. Fritze, Split-fabrication obfuscation: metrics and techniques, in 2014 IEEE International Symposium on Hardware-Oriented Security and Trust (HOST) (IEEE, Piscataway, 2014), pp. 7–12
22. R.W. Jarvis, M.G. Mcintyre, Split manufacturing method for advanced semiconductor circuits (2007). US Patent 7,195,931
23. M. Joye, F. Olivier, Side-channel analysis (2011). https://marcjoye.github.io/papers/JO05encyclo.pdf
24. A.B. Kahng, J. Lach, W.H. Mangione-Smith, S. Mantik, I.L. Markov, M. Potkonjak, P. Tucker, H. Wang, G. Wolfe, Watermarking techniques for intellectual property protection, in Proceedings of the 35th Annual Design Automation Conference (ACM, New York, 1998), pp. 776–781
25. A.B. Kahng, J. Lach, W.H. Mangione-Smith, S. Mantik, I.L. Markov, M. Potkonjak, P. Tucker, H. Wang, G. Wolfe, Constraint-based watermarking techniques for design IP protection. IEEE Trans. Comput. Aided Des. Integr. Circuits Syst. 20(10), 1236–1252 (2001)
26. M. Karpovsky, K.J. Kulikowski, A. Taubin, Robust protection against fault-injection attacks on smart cards implementing the advanced encryption standard, in International Conference on Dependable Systems and Networks (IEEE, Piscataway, 2004), pp. 93–101
27. R. Karri, J. Rajendran, K. Rosenfeld, M. Tehranipoor, Trustworthy hardware: identifying and classifying hardware Trojans. Computer 43(10), 39–46 (2010)
28. H. Khattri, N.K.V. Mangipudi, S. Mandujano, HSDL: a security development lifecycle for hardware technologies, in 2012 IEEE International Symposium on Hardware-Oriented Security and Trust (IEEE, Piscataway, 2012), pp. 116–121
29. F. Koushanfar, Provably secure active IC metering techniques for piracy avoidance and digital rights management. IEEE Trans. Inf. Forensics Secur. 7(1), 51–63 (2011)
30. F. Koushanfar, I. Markov, Designing chips that protect themselves, in Front End Topics. Proceedings of the Conference on Design Automation Conference (2010)
31. F. Koushanfar, A. Mirhoseini, A unified framework for multimodal submodular integrated circuits Trojan detection. IEEE Trans. Inf. Forensics Secur. 6(1), 162–174 (2010)
32. F. Koushanfar, G. Qu, Hardware metering, in Proceedings of the 38th Design Automation Conference (IEEE Cat. No. 01CH37232) (IEEE, Piscataway, 2001), pp. 490–493
33. Lenovo, Taking action on product security. https://www.lenovo.com/us/en/productsecurity/about-lenovo-product-security, 2017
34. Y. Lyu, P. Mishra, A survey of side-channel attacks on caches and countermeasures. J. Hardw. Syst. Secur. 2(1), 33–50 (2018)
35. G. Masalskis, et al., Reverse engineering of CMOS integrated circuits. Elektronika ir elektrotechnika 88(8), 25–28 (2008)
36. P. Mishra, M. Tehranipoor, S. Bhunia, Security and trust vulnerabilities in third-party IPs, in Hardware IP Security and Trust (Springer, Berlin, 2017), pp. 3–14
37. Mitre, Common vulnerabilities and exposures (2005)
38. D.C. Musker, Protecting and exploiting intellectual property in electronics, in IBC Conferences, vol. 10 (1998)
39. S.R. Rajendran, M.N. Devi, Malicious hardware detection and design for trust: an analysis. Elektrotehniski Vestnik 84(1/2), 7 (2017)
40. S.R. Rajendran, M.N. Devi, Enhanced logical locking for a secured hardware IP against key-guessing attacks, in International Symposium on VLSI Design and Test (Springer, Berlin, 2018), pp. 186–197
41. J. Rajendran, Y. Pino, O. Sinanoglu, R. Karri, Security analysis of logic obfuscation, in Proceedings of the 49th Annual Design Automation Conference (ACM, New York, 2012), pp. 83–89
42. J. Rajendran, M. Sam, O. Sinanoglu, R. Karri, Security analysis of integrated circuit camouflaging, in Proceedings of the 2013 ACM SIGSAC Conference on Computer & Communications Security (ACM, New York, 2013), pp. 709–720
43. J. Rajendran, H. Zhang, C. Zhang, G.S. Rose, Y. Pino, O. Sinanoglu, R. Karri, Fault analysis-based logic encryption. IEEE Trans. Comput. 64(2), 410–424 (2013)
44. S.R. Rajendran, R. Mukherjee, R.S. Chakraborty, SoK: physical and logic testing techniques for hardware Trojan detection, in Proceedings of the 4th ACM Workshop on Attacks and Solutions in Hardware Security (2020), pp. 103–116
45. J. Robertson, M. Riley, The big hack: how China used a tiny chip to infiltrate US companies. Bloomberg Businessweek 4 (2018)
46. J. Robertson, M. Riley, The big hack: how China used a tiny chip to infiltrate US companies. Bloomberg Businessweek 4 (2018)
47. J. Roy, F. Koushanfar, I. Markov, Extended abstract: circuit CAD tools as a security threat, in 2008 IEEE International Workshop on Hardware-Oriented Security and Trust (2008), pp. 65–66
48. J.A. Roy, F. Koushanfar, I.L. Markov, Ending piracy of integrated circuits. Computer 43(10), 30–38 (2010)
49. H. Salmani, M. Tehranipoor, Trojan benchmarks [Online]. Available: https://trust-hub.org/benchmarks/trojan
50. H. Salmani, M. Tehranipoor, R. Karri, On design vulnerability analysis and trust benchmarks development, in 2013 IEEE 31st International Conference on Computer Design (ICCD) (IEEE, Piscataway, 2013), pp. 471–474
51. B. Shakya, T. He, H. Salmani, D. Forte, S. Bhunia, M. Tehranipoor, Benchmarking of hardware Trojans and maliciously affected circuits. J. Hardw. Syst. Secur. 1(1), 85–102 (2017)
52. M. Tehranipoor, F. Koushanfar, A survey of hardware Trojan taxonomy and detection. IEEE Des. Test Comput. 27(1), 10–25 (2010)
53. M. Tehranipoor, C. Wang, Introduction to Hardware Security and Trust (Springer Science & Business Media, Berlin, 2011)
54. M. Tehranipoor, R. Cammarota, S. Aftabjahani, Microelectronics security and trust-grand challenges. TAME: Trusted and Assured MicroElectronics Working Group Report (2019)
55. K. Tiri, Side-channel attack pitfalls, in 2007 44th ACM/IEEE Design Automation Conference (IEEE, Piscataway, 2007), pp. 15–20
56. R. Torrance, D. James, The state-of-the-art in semiconductor reverse engineering, in 2011 48th ACM/EDAC/IEEE Design Automation Conference (DAC) (IEEE, Piscataway, 2011), pp. 333–338
57. Trust-hub Benchmark [Online]. https://www.trust-hub.org
58. I. Ullah, Q.H. Mahmoud, Design and development of a deep learning-based model for anomaly detection in IoT networks. IEEE Access 9, 103906–103926 (2021)
59. K. Vaidyanathan, B.P. Das, E. Sumbul, R. Liu, L. Pileggi, Building trusted ICs using split fabrication, in 2014 IEEE International Symposium on Hardware-Oriented Security and Trust (HOST) (IEEE, Piscataway, 2014), pp. 1–6
60. A. Vijayakumar, V.C. Patil, D.E. Holcomb, C. Paar, S. Kundu, Physical design obfuscation of hardware: a comprehensive investigation of device and logic-level techniques. IEEE Trans. Inf. Forensics Secur. 12(1), 64–77 (2016)
61. H. Wang, H. Li, F. Rahman, M.M. Tehranipoor, F. Farahmandi, SoFI: security property-driven vulnerability assessments of ICs against fault-injection attacks. IEEE Trans. Comput. Aided Des. Integr. Circuits Syst. 41(3), 452–465 (2022)
62. Y. Xie, A. Srivastava, Mitigating SAT attack on logic locking, in International Conference on Cryptographic Hardware and Embedded Systems (Springer, Berlin, 2016), pp. 127–146
63. M. Yasin, B. Mazumdar, J. Rajendran, O. Sinanoglu, SARLock: SAT attack resistant logic locking, in 2016 IEEE International Symposium on Hardware Oriented Security and Trust (HOST) (IEEE, Piscataway, 2016), pp. 236–241
64. M. Yasin, B. Mazumdar, O. Sinanoglu, J.V. Rajendran, CamoPerturb: secure IC camouflaging for minterm protection, in 2016 IEEE/ACM International Conference on Computer-Aided Design (ICCAD) (IEEE, Piscataway, 2016), pp. 1–8
65. K. Zetter, New attack can unlock and start a Tesla Model Y in seconds, say researchers. The Verge (2022). https://www.theverge.com/2022/9/12/23348765/tesla-model-y-unlock-drivecar-thief-nfc-relay-attack
Pileggi, Building trusted ICs using split fabrication, in 2014 IEEE International Symposium on Hardware-oriented Security and Trust (HOST) (IEEE, Piscataway, 2014), pp. 1–6 60. A. Vijayakumar, V.C. Patil, D.E. Holcomb, C. Paar, S. Kundu, Physical design obfuscation of hardware: a comprehensive investigation of device and logic-level techniques. IEEE Trans. Inf. Forensics Secur. 12(1), 64–77 (2016) 61. H. Wang, H. Li, F. Rahman, M.M. Tehranipoor, F. Farahmandi, SoFI: security property-driven vulnerability assessments of ICs against fault-injection attacks. IEEE Trans. Comput. Aided Des. Integr. Circuits Syst. 41(3), 452–465 (2022) 62. Y. Xie, A. Srivastava, Mitigating sat attack on logic locking, in International Conference on Cryptographic Hardware and Embedded Systems (Springer, Berlin, 2016), pp. 127–146 63. M. Yasin, B. Mazumdar, J. Rajendran, O. Sinanoglu, SARLock: sat attack resistant logic locking, in 2016 IEEE International Symposium on Hardware Oriented Security and Trust (HOST) (IEEE, Piscataway, 2016), pp. 236–241 64. M. Yasin, B. Mazumdar, O. Sinanoglu, J.V. Rajendran, CamoPerturb: secure IC camouflaging for minterm protection, in 2016 IEEE/ACM International Conference on Computer-Aided Design (ICCAD) (IEEE, Piscataway, 2016), pp. 1–8 65. K. Zetter, New attack can unlock and start a Tesla Model Y in seconds, say researchers. The Verge (2022). https://www.theverge.com/2022/9/12/23348765/tesla-model-y-unlock-drivecar-thief-nfc-relay-attack

Chapter 2

CAD for Security Asset Identification

2.1 Introduction

Systems on Chips (SoCs) have become integral to every computing system. SoCs integrate multiple hardware functional blocks, called intellectual property (IP) blocks, to provide the various functions demanded by current computing systems. Each IP block performs a specific function that the SoC requires; examples include an ALU IP block performing arithmetic operations, crypto IP blocks performing cryptographic functions, etc. As the SoC design process integrates many hardware IP blocks, hidden beneath this growing functionality lies the threat of security vulnerabilities. As SoCs grow more complex, the lack of security awareness in the SoC design process has resulted in various security threats such as Spectre [18], Meltdown [20], and MDS [8, 29]. As computing devices become ubiquitous and user data is digitized, it is no longer safe to assume that secure software is enough to protect user data. In the current horizontal model of silicon fabrication, with design, fabrication, and assembly spread over various parts of the world, threats can be introduced through unintentional design practices or through malicious implants [3, 4, 31]. Comprehensive hardware security is paramount throughout the device's life-cycle, starting at the design phase. Incorporating security from the design phase gives designers greater flexibility for design changes; it can reduce security vulnerabilities at the post-silicon stage and save the design house money and time [1]. Achieving comprehensive security of the SoC requires complete knowledge of the SoC functionality and the threats that the SoC faces under various operating conditions. With this knowledge, designers can identify the critical components in the SoC design that carry user- and device-sensitive data and need to be protected. These critical components are called security assets.
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
F. Farahmandi et al., CAD for Hardware Security, https://doi.org/10.1007/978-3-031-26896-0_2

As more devices connect to the Internet of Things, more and more user data is being stored in devices, and the critical security assets of these devices need to be secured. Security assets in current computing devices range from the hardware

registers storing user bank details, health data, passwords, photos, etc., to the intricate state machines controlling the mechanisms that can grant a user or adversary access to that information. These control mechanisms can be finite state machines that check for password or fingerprint matches and can be attacked by fault injection [21]. Security assets can also include firmware that is stored on-chip and interacts with the hardware, and can thus cause security leaks.

Identifying security assets is the first and one of the most crucial steps toward ensuring SoC security. However, it is currently a manual and laborious task. It requires a designer/security engineer to understand the functionality of each hardware IP block and its interactions with the rest of the system, as well as the various environmental conditions in which the SoC operates, and to assess all possible security threats that the SoC faces. Gathering all such information, the designer/security engineer then identifies the various design components that must be protected. Any lapse in judgment about either the threat faced or the importance of a design component can result in design choices that leave a security asset vulnerable. These risks demand a technique for the automated identification of security assets in an SoC.

The rest of the chapter is organized as follows. Section 2.2 describes the background for security assets and their classification. Section 2.2.1 surveys the literature on security asset identification. Section 2.3 explores some of the tools developed for the automated identification of security assets. Section 2.4 discusses future work and concludes the chapter.

2.2 Motivation and Background

2.2.1 Motivation

Securing an SoC requires a complete understanding of the functional boundary of each hardware IP block and insight into which functional behaviors can cause a security vulnerability under adversarial conditions. As SoC designs become more compact, integrating ever more hardware IP blocks, it becomes impossible for designers/security engineers to manually analyze the SoC design and identify the security concerns related to each design component. Thus, it becomes imperative for designers/security engineers to identify those design components, called security assets, that carry information that, when under threat, can leak device secrets. By protecting the security assets from various threats, designers/security engineers can ensure that the secrets of the SoC are safe. However, identifying these security assets is not intuitive and varies from design to design and with the adversarial threat model under consideration.

SoCs integrating tens of hardware IP blocks containing a few hundred design components can be manually analyzed to identify the security assets for the given

threat models. However, for bigger SoCs containing hundreds of IP blocks and thousands of signals, manual analysis of the design for security assets is not pragmatic. This problem requires an automated approach that can easily integrate into the current design process.

2.2.2 Classification of Security Assets

As described in Sect. 2.2.1, as designs and threat models change, so do the security assets. Hence it is nearly impossible to give a single definition of security assets that helps designers/security engineers identify them. To address this, Nusrat et al. [14] describe a classification of security assets whose characteristics can guide designers/security engineers in security asset identification. The two broad categories of security assets described in [14] are:

1. Primary Assets: Primary assets are the design components that are the ultimate target of the adversary/attack. These can be design components that contain hardware secrets such as hardware keys, firmware, users' passwords, and personal information. Other design components that provide security for authentication and integrity, such as PCR registers, the entropy source of a true random number generator (TRNG), physical unclonable functions (PUFs), etc., can also be considered primary assets.
2. Secondary Assets: Secondary assets are design components that interact with or propagate the primary asset to various design regions. Secondary assets can be design components that inherit sensitive information from primary assets through functionality, or components that help propagate the sensitive information from the primary asset to its target location in the design. Some secondary assets in an SoC are system buses, peripheral ports, internal registers, etc.

An overview of security asset classification can be seen in Fig. 2.1. Primary security assets are further classified as static and dynamic assets. Static security assets are secrets embedded in the hardware during design and manufacturing; they are embedded into the ROM or flash of the SoC. Examples of static assets are hardware encryption keys, logic locking keys, etc.
Dynamic security assets are generated during the runtime of the SoC in the field. True random numbers, PUF responses, on-chip generated keys, etc. are examples of dynamically generated security assets. This classification of security assets aids designers in observing the characteristics of various design components and categorizing them as primary and secondary assets. This categorization helps design houses identify the regions of the design that require protective measures, i.e., the primary and secondary security assets. It further helps designers focus security efforts on protecting the


Fig. 2.1 Classification of security assets into Primary and Secondary assets. Primary assets are the ultimate target of protection and can further be classified as static and dynamic assets. Static assets are stored in the chip from design time, whereas dynamic assets are generated during the run-time. Secondary assets are all design components that help propagate or store the primary asset

security assets, thus providing a higher level of security with less effort than attempting to protect all design components.
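This primary/secondary and static/dynamic taxonomy can be captured in a simple data model. The following Python sketch is purely illustrative; the component names and fields are assumptions for demonstration, not part of any published tool:

```python
from dataclasses import dataclass
from enum import Enum

class AssetClass(Enum):
    PRIMARY_STATIC = "primary/static"    # secrets embedded at design/manufacture time (ROM/flash)
    PRIMARY_DYNAMIC = "primary/dynamic"  # secrets generated at runtime (TRNG, PUF, on-chip keys)
    SECONDARY = "secondary"              # components that propagate or store a primary asset

@dataclass
class Asset:
    name: str          # RTL-level component name (hypothetical)
    kind: AssetClass
    note: str = ""

assets = [
    Asset("rom.key_region", AssetClass.PRIMARY_STATIC, "embedded encryption key"),
    Asset("puf.response", AssetClass.PRIMARY_DYNAMIC, "on-chip generated key material"),
    Asset("system_bus.data", AssetClass.SECONDARY, "carries keys to the crypto cores"),
]

# Primary assets are the ultimate protection targets; secondary assets follow from them
primary = [a.name for a in assets if a.kind is not AssetClass.SECONDARY]
```

Keeping such annotations alongside the design makes the later, per-threat-model analysis steps tractable.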

2.2.3 Assessing Security Assets

A vulnerability-free, secure SoC requires a tight and secure integration of the various individual secure hardware IP blocks. The security assets for an SoC include the design components of each hardware IP that are crucial for the security of that IP block, as well as the design components that integrate these various hardware IP blocks. Figure 2.2 shows a sample SoC. It consists of a processing core that acts as the brain of the SoC, and two types of memory: an external ROM containing a few secure memory blocks and an internal RAM, with a DMA block to access the ROM. Various security-related IP blocks are present, such as a symmetric encryption core, a public key encryption core, and security primitive blocks such as a TRNG and a PUF. A system bus connects these various hardware blocks. The SoC includes peripherals such as general-purpose I/O (GPIO) ports and a debug port, through which inputs and outputs flow to the processor core, aided by the peripheral bus and system bus. A designer can identify various security assets for the SoC shown based on the threat model under consideration. Below we present how and what security assets are identified for this SoC.

1. Confidentiality Threat Model: Under the confidentiality threat model, no secret or sensitive information should leak from a high-security region of the design to a low-security region. For the SoC shown, the security IP blocks are high-security regions. IP blocks such as the GPIO and debug ports are low-security regions, as adversaries can access them. Sensitive information, such as encryption keys, should never flow to these peripheral ports. The encryption keys become our primary assets, our ultimate protection target. The encryption keys can either be


Fig. 2.2 A sample SoC consisting of a processor core; security IPs such as a symmetric encryption core, public key encryption core, TRNG, and PUF; RAM and ROM memories with a DMA memory controller; and peripherals such as GPIO and UART, with a system bus and a peripheral bus connecting all the components together

embedded in the secure memory regions of the ROM or generated from the PUF responses, making them static or dynamic primary assets, respectively. The keys flow to/from the encryption cores through the system bus. As the system bus propagates these security assets, it is our secondary asset.

2. Integrity Threat Model: Under the integrity threat model, a low-security region must have no access that can modify secure data in a high-security region. The "program counter" value stored in registers in the processor core keeps track of the running program. An adversary who gains access to the program counter and changes it can alter program execution, resulting in a malicious program or a denial of service. The program counter thus becomes our primary asset.

2.3 CAD for Security Asset Identification

From the previous section, we have seen that even for an SoC integrating a few diverse hardware IP blocks, multiple design components are identified as security assets depending on the threat model under consideration. For an SoC integrating a few hundred hardware IPs with tens of threat models, this task becomes nigh impossible to perform manually. However, there has not been much progress in this area of research. The works in [2, 24] describe security policy enforcement built on identified security assets but do not describe how those assets were identified. The


works described in [10, 22] analyze the confidentiality and integrity of hardware designs with DfT inserted, utilizing security assets. They chose the primary assets through complete manual analysis or selected every primary input as an asset, but did not specify any methodology for asset identification. Reference [25] emphasizes the need for automating security asset identification but does not provide a tool to accomplish it. All the previous works discuss how security assets can be utilized for security assurance but do not lay out any methodology for security asset identification. In [13], Nusrat et al. developed an automated CAD framework titled "Secondary Asset Identification Tool" (SAIF) to help designers identify the secondary assets in their designs for various threat models. It is the first-of-its-kind tool developed to detect security assets in an SoC design. We explore SAIF in detail in the following sections.

2.3.1 Inputs

Figure 2.3 shows an overview of the SAIF tool. SAIF identifies secondary assets in an SoC at the register-transfer level (RTL) and requires the following inputs.

Fig. 2.3 Overview of the SAIF workflow. It consists of three steps: asset propagation analysis to identify the common components, candidate component identification to identify the candidate components, and pruning steps to identify the candidate components vulnerable to threats and output the final set of secondary assets


1. Primary Assets: SAIF defines primary assets as input ports or design components into which secure or sensitive information flows for the SoC or hardware IP block under consideration. These primary assets can easily be discerned by analyzing the design specification and documentation. As observed in Sect. 2.2.3, the security of the program counter (PC) value is paramount to prevent any modification to the program execution. Hence the PC register is annotated as a primary asset and can be input into the SAIF tool.
2. Observable Points: SAIF defines observable points as the design points interacting with the outside world. Information flow control defines and classifies data into trusted and untrusted lattices [11]. The observable points to which the primary asset information may flow, as defined by the architectural specification, are annotated as trusted observable points; all other observable points are annotated as untrusted. Returning to our sample SoC in Fig. 2.2, with the PC as the primary asset, the PC value may flow to the debug port in the "HALT" debug mode of operation, when the processor halts for debugging [9, 32]. The processor then allows the user to follow program execution via the PC value and debug through the debug port [23]. However, in the normal mode of operation, the PC value should never flow to the debug port, as it exposes the program execution. Hence, in the normal mode of operation, the "debug port" of the SoC is an untrusted observable point, and all other observable points are annotated as trusted.

The designer must identify and annotate the primary assets and the trusted and untrusted observable points for the SoC. SAIF can read multiple primary assets and observable points and output the secondary assets. Once the inputs are given, SAIF performs three major steps to identify the secondary assets.
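These annotations can be expressed as a small input specification. The sketch below is hypothetical — SAIF's actual input format is not described here — but it shows the three pieces of information the designer supplies for the PC example in the normal operating mode (the signal names are illustrative):

```python
# Hypothetical SAIF-style input annotation (signal names are illustrative)
saif_inputs = {
    "primary_assets": ["cpu.pc_reg"],                  # program counter register
    "trusted_observable_points": ["gpio.out", "uart.tx"],
    "untrusted_observable_points": ["debug.port"],     # PC must never reach this in normal mode
}

def is_untrusted(point: str) -> bool:
    """A flow of primary-asset data to an untrusted point is a violation."""
    return point in saif_inputs["untrusted_observable_points"]
```

Note that the same signal (here the debug port) could be annotated as trusted under a different operating mode, so annotations are per threat model, not global.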

2.3.2 Asset Propagation Analysis

Searching for secondary assets in an SoC consisting of thousands of design components can be computationally expensive. SAIF tackles this issue through its "Asset Propagation Analysis" step, which prunes the design search space for secondary asset identification using structural analysis of the design. Modern CAD tools such as Synopsys Design Compiler [26] and Cadence Synthesis [7] can analyze a design and identify all the components connected structurally, i.e., pairs of nodes between which a path for information propagation exists. SAIF relies on the following principle: "While the presence of a structural path between two nodes in a design does not necessarily imply an information flow, the absence of a structural path confirms the absence of any information flow between the nodes." Hence, by detecting all the design components that are not structurally connected to the primary input asset, we can prune out the design components that cannot be security assets. The asset propagation analysis step is made up of three smaller sub-steps:


1. Forward Analysis: SAIF performs a fan-out analysis for the primary inputs annotated as primary assets. Fan-out analysis examines the RTL design and, for a given component, identifies its cone of influence. Using modern CAD design tools [7, 26], SAIF performs this fan-out analysis for the primary assets and identifies all the design components structurally connected to them. All the identified components are stored as a set.
2. Backward Analysis: To further prune the search space, SAIF identifies the design components accessible from the observable points. An adversary can exploit any structural path from a design component to an observable point to gather information from it or manipulate its value. For this, SAIF employs a technique called fan-in analysis, which takes a design component and identifies all components that affect it. Applying fan-in analysis to the trusted observable points, SAIF identifies all the design components structurally connected to them. All the identified components are stored in a second set.
3. Intersection Analysis: The final sub-step is intersection analysis. With the result sets from the forward and backward analyses, SAIF identifies the components common to both. These are the design components in the RTL that are structurally connected to the primary asset, and hence carry a non-zero probability of propagating sensitive information, and that are also connected to observable points, and hence carry a non-zero probability of being attacked by an adversary.
This set of common design components is termed "Common Components." Thus, SAIF utilizes the asset propagation analysis step to prune the design search space, retaining only design components with a non-zero probability of propagating secret information and a non-zero probability of being accessed through an observable point.
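The three sub-steps reduce to reachability analysis over the design's structural connectivity graph. A minimal Python sketch, using a toy adjacency list in place of a synthesized netlist (in practice the connectivity would come from a synthesis tool, not hand-written dictionaries):

```python
from collections import deque

def reachable(graph, start):
    """Breadth-first search over a directed adjacency list."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

def common_components(graph, primary_asset, observable_point):
    # Forward analysis: fan-out cone of the primary asset
    forward = reachable(graph, primary_asset)
    # Backward analysis: fan-in cone of the observable point, via reversed edges
    reverse = {}
    for src, dsts in graph.items():
        for dst in dsts:
            reverse.setdefault(dst, []).append(src)
    backward = reachable(reverse, observable_point)
    # Intersection analysis: components on some structural path between the two
    return (forward & backward) - {primary_asset, observable_point}

# Toy netlist: the key reaches the output through the bus and crypto core;
# the timer also drives the output but never sees the key
netlist = {"key": ["bus"], "bus": ["crypto"], "crypto": ["out"], "timer": ["out"]}
```

Here `common_components(netlist, "key", "out")` yields `{"bus", "crypto"}`: the timer is pruned because no structural path connects it to the primary asset.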

2.3.3 Candidate Component Identification

A structural connection from the primary asset to a design component does not entail that the component carries any of the primary asset's sensitive information. A functional path that propagates the sensitive information from the primary asset to the design component must exist for it to be annotated as a secondary asset. The sensitive information undergoes various operations along the functional path, so it is also essential to identify how much of the sensitive information is preserved in the information finally reaching the design component. In the candidate component identification step, SAIF takes the set of common components identified in the asset propagation analysis step, identifies all the design components with a functional path connection to the primary asset input, and calculates the


amount of sensitive information from the primary asset that propagates to the design component.

To achieve these two goals, SAIF utilizes fault simulation. Fault simulation lets designers define and test the design against various fault attacks at the pre-silicon level. Taking the design files at RTL, fault simulation tools can inject faults and track their propagation through the design, using a fault model defined by the designer. Synopsys Z01X [33] is one such fault simulation tool. The fault model should be defined with the following characteristics, as described in [30]:

1. Fault Category: The designer can define a global fault, where faults are injected throughout the design, or a local fault, where the fault is injected in a particular design region.
2. Fault Injection Location: The designer can define three levels of control over the fault injection location. "Complete control" means the fault affects a single specific design component; "some control" means a few groups of design components are targeted for fault injection; "no control" means faults can be injected anywhere. Designers can choose any of the three depending on the threat model and fault injection method.
3. Fault Injection Time: Similarly, the designer can define three control levels. "Complete control" indicates that the fault is injected at the specified clock cycle; "some control" indicates that the fault is injected within a delta of the specified time; "no control" indicates that the fault can be injected at any time.
4. Fault Type: The designer can define the type of fault being injected, such as stuck-at faults, bit-flip faults, etc.
5.
Fault Duration: Faults can have different durations: transient faults last for a short time, while permanent faults last throughout the operation of the design.

SAIF utilizes permanent stuck-at fault simulation to identify functional paths and calculate the information preserved at each common component. SAIF introduces stuck-at-zero and stuck-at-one permanent faults at the primary input asset and performs a fault simulation to identify the design components to which sensitive information propagates. After injecting the faults, SAIF calculates the fault coverage, i.e., the number of injected faults captured at the common component. Any common component with a fault coverage higher than zero has information propagating along a functional path from the primary input asset. Once SAIF has identified all common components with non-zero fault coverage, it utilizes a metric called "Observation Hardness" (OH) to quantify how much sensitive information propagates to the design component and is preserved. The observation hardness metric is defined below:

    OH(P, R) = FD_CC / FI_PA    (2.1)


where FD_CC is the number of faults detected at the common component and FI_PA is the number of faults injected at the primary asset. A higher OH value indicates greater information propagation to that common component. SAIF lets the designer input an observation hardness threshold, the minimum OH value required to consider a design component a candidate secondary asset. Any common component with an OH value greater than the threshold is added to the set termed "Candidate Components."
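Given per-component detection counts from a fault simulator, the OH computation and threshold filtering are straightforward. A sketch with made-up counts (the component names and numbers are illustrative, not simulation results):

```python
def observation_hardness(faults_detected_cc: int, faults_injected_pa: int) -> float:
    """OH = FD_CC / FI_PA, following Eq. (2.1)."""
    return faults_detected_cc / faults_injected_pa

# Hypothetical stuck-at fault campaign: 200 faults injected at the primary asset
faults_injected = 200
faults_detected = {"system_bus.data": 180, "crypto.key_reg": 150, "timer.count": 0}

oh_threshold = 0.5  # designer-chosen minimum OH
candidate_components = {
    comp for comp, fd in faults_detected.items()
    if observation_hardness(fd, faults_injected) > oh_threshold
}
```

With these numbers the bus (OH = 0.9) and key register (OH = 0.75) survive the filter, while the timer (OH = 0) is discarded.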

2.3.4 Pruning

The candidate component identification step provides the user with all design components that carry or propagate the sensitive information from the primary asset. While all these design components could be considered secondary assets, with SoC designs containing thousands of design components and billions of transistors, this list may be enormous, and focusing all efforts on securing every candidate component may be costly in both money and labor. To address this bottleneck, SAIF performs a "pruning" step to identify the candidate components most vulnerable to threats. SAIF considers three threat models and assesses the candidate components against each of them:

1. Information Leakage Threat Model: The candidate components should not leak information to any of the untrusted observable points.
2. Power Side-Channel Threat Model: The power emissions for the different information the candidate components carry should not be distinguishable.
3. Fault Injection Threat Model: The candidate components should not be vulnerable to fault injection attacks.

SAIF integrates various techniques to analyze the candidate components against these three threat models and assess their vulnerability.

2.3.4.1 Information Leakage Assessment

SAIF utilizes formal analysis, specifically a method named security path verification (SPV) [5], to analyze the candidate components for information leakage vulnerability. SPV uses a mathematical model of the design to identify whether unintentional functional paths exist from a source node to a destination node. As in formal property verification, assertions are written and checked by a formal tool. Cadence JasperGold [6], Synopsys VC Formal [27], and Cycuity Radix-S [12] are examples of formal tools that can check for these unintentional paths. SAIF uses a template-based methodology to generate the required SPV assertions for information leakage assessment. Table 2.1 shows the SPV templates for Cadence JasperGold and Synopsys VC Formal. SAIF sets the candidate component


Table 2.1 SPV templates for commercial CAD formal tools, used by SAIF to generate the SPV assertions that are formally verified to identify any information leakage

No.  Formal tool           Template
1    Cadence JasperGold    check_path -from <source> -to <destination>
2    Synopsys VC Formal    fsv_generate -src <source> -dst <destination>

as the source node and lets the formal tools check for any unintentional functional paths to the untrusted observable points, which are the destination nodes. A failed assertion denotes the presence of an unintentional functional path from the candidate component to an untrusted observable point, and hence an information leakage path. These candidate components are then annotated as vulnerable to information leakage.
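The template-based generation is essentially string substitution over every (candidate component, untrusted observable point) pair. An illustrative sketch, with command strings following Table 2.1 (the signal names are hypothetical, and the generated commands would be run inside the respective formal tool, not from Python):

```python
SPV_TEMPLATES = {
    "jaspergold": "check_path -from {src} -to {dst}",
    "vcformal": "fsv_generate -src {src} -dst {dst}",
}

def generate_spv_checks(candidates, untrusted_points, tool="jaspergold"):
    """Emit one SPV check per (candidate component, untrusted point) pair."""
    template = SPV_TEMPLATES[tool]
    return [template.format(src=c, dst=p) for c in candidates for p in untrusted_points]

checks = generate_spv_checks(["system_bus.data"], ["debug.port"])
```

Any check that fails in the formal tool marks its source component as vulnerable to information leakage.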

2.3.4.2 Power Side-Channel Assessment

Miao et al. [16] present RTL-PSC, a pre-silicon power side-channel assessment tool. RTL-PSC utilizes functional simulation data to calculate the switching activity of each design component. The switching activity is converted into a probability density function (PDF) called the design component's power profile. Once RTL-PSC generates the power profiles for all the design components, it utilizes the Kullback-Leibler (KL) divergence metric [17] to calculate the vulnerability to power side-channel attacks. KL divergence is a statistical distance measuring how one PDF varies from another; it is mathematically defined as:

    KL(Ti ‖ Tj) = ∫ f_Ti(TC) log( f_Ti(TC) / f_Tj(TC) ) d(TC)    (2.2)

where f_Ti(TC) and f_Tj(TC) are the probability density functions of the transition counts (TC) for given test patterns Ti and Tj. SAIF uses functional simulation to generate the power profiles for all the candidate components as directed in [16], and then uses completely random stimuli to generate a second set of power profiles for the same components. Once both sets of power profiles are generated, SAIF calculates the KL divergence for each candidate component. The designer can input a threshold value for the KL divergence, and any component with a value higher than the threshold is annotated as vulnerable to power side-channel attacks.
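For simulation data binned into transition-count histograms, the integral in Eq. (2.2) becomes a sum over bins. A minimal sketch with toy power profiles (real profiles would come from functional simulation, as in [16]):

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """Discrete KL(p || q) over normalized transition-count histograms.

    eps guards against empty bins in q; bins with p == 0 contribute nothing.
    """
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q) if pi > 0)

# Toy power profiles for one candidate component (three transition-count bins)
profile_functional = [0.1, 0.6, 0.3]  # stimuli exercising the primary asset
profile_random = [0.3, 0.4, 0.3]      # fully random stimuli

kl = kl_divergence(profile_functional, profile_random)
kl_threshold = 0.05  # designer-chosen threshold
psc_vulnerable = kl > kl_threshold
```

A large divergence means the component's power behavior depends visibly on the data it processes, which is exactly what a power side-channel adversary exploits.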

2.3.4.3 Fault Injection Assessment

SAIF uses fault simulation to assess the candidate components against fault injection. As described in Sect. 2.3.3, fault simulation lets designers inject faults in the design and observe their propagation at the RTL. SAIF calculates a


metric called "Vulnerability Factor for Fault Injection" (VFFI), which quantifies the vulnerability of a candidate component to fault injection, i.e., the probability that a fault injected at the primary asset propagates to an observable point through the candidate component. VFFI is defined as:

    VFFI = FC_(PA→CC) / P_(PA→CC) + FC_(CC→UOP) / P_(CC→UOP)    (2.3)

VFFI is calculated utilizing FC_{PA \to CC}, the fault coverage for each candidate component when a fault is injected at the primary asset, and FC_{CC \to UOP}, the fault coverage at the untrusted observable point when a fault is injected at the candidate component. SAIF also calculates P_{PA \to CC}, the number of functional paths from the primary asset to the candidate component, and P_{CC \to UOP}, the number of functional paths from the candidate component to the untrusted observable point. Similar to previous steps, the designer can input a VFFI threshold value; any component whose VFFI exceeds the threshold is more susceptible to fault injection and can leak sensitive information. SAIF then outputs as secondary assets all the candidate components that have been found susceptible to all the mentioned threat models. SAIF also outputs the results for each threat model individually, giving the designer the flexibility to include candidates found vulnerable to only a subset of these threat models.
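A minimal sketch of the VFFI computation and thresholding, assuming the fault coverages and path counts have already been produced by a fault-simulation flow. All component names and numbers below are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class CandidateStats:
    """Per-component numbers reported by a fault-simulation flow (hypothetical)."""
    name: str
    fc_pa_to_cc: float   # fault coverage, primary asset -> candidate component
    p_pa_to_cc: int      # functional paths, primary asset -> candidate component
    fc_cc_to_uop: float  # fault coverage, candidate component -> untrusted observable point
    p_cc_to_uop: int     # functional paths, candidate component -> untrusted observable point

def vffi(s: CandidateStats) -> float:
    """Eq. (2.3): VFFI = FC/P (asset -> candidate) + FC/P (candidate -> untrusted OP)."""
    return s.fc_pa_to_cc / s.p_pa_to_cc + s.fc_cc_to_uop / s.p_cc_to_uop

def flag_vulnerable(stats, threshold):
    """Candidates whose VFFI exceeds the designer-supplied threshold."""
    return [s.name for s in stats if vffi(s) > threshold]

stats = [CandidateStats("round_reg", 0.9, 3, 0.8, 2),     # VFFI = 0.30 + 0.40 = 0.70
         CandidateStats("status_reg", 0.1, 10, 0.05, 20)]  # VFFI = 0.0125
secondary_candidates = flag_vulnerable(stats, threshold=0.5)  # ["round_reg"]
```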

2.3.4.4 Results

In this section, we discuss some of the results provided by SAIF. For the asset propagation analysis step, the authors utilized Synopsys Design Compiler [26]; for fault simulation, they utilized Synopsys Z01X [33]. Cadence JasperGold [6] was utilized for SPV analysis, and VCS [28] was used for functional simulation. Table 2.2 shows the results provided by SAIF. Row 1 shows the name of the benchmark. The authors tested SAIF against the open-source microcontroller MSP430 [15] and a subsystem of the MIT CEP SoC [19]. Row 2 shows the total number of design components in the benchmarks; all internal registers, logic blocks, communication protocols, and IPs in an SoC are collectively called design components. Row 3 shows the number of primary assets (PA), trusted (TOP), and untrusted observable points (UOP). Row 4 details the number of common components identified after the asset propagation analysis step, and Row 5 the number of candidate components identified after the candidate component identification step. Rows 6, 7, and 8 detail the candidate components vulnerable to information leakage, power side channel, and fault injection, respectively. Row 9 shows the number of secondary assets output by SAIF, and Row 10 the tool's runtime. As seen in Table 2.2, SAIF can take in an RTL design of any size and output the secondary assets within minimal compute time.


Table 2.2 Results of SAIF implementation for the MSP430 and CEP subsystem. The table details the results obtained after the asset propagation analysis step, the candidate component identification step, and the pruning step. The results also show the number of identified candidate components that are vulnerable to information leakage, fault injection, and power side-channel attacks.

Design                                       | MSP430 | CEP SoC
#Total components                            | 5500   | 400,000
#PA/#TOP/#UOP                                | 1/1/1  | 1/1/
#Common components (registers)               | 99     | 500
#Candidate components                        | 15     | 80
#Candidate components after IFA              | 14     | 70
#Candidate components after SCA assessment   | 7      | 22
#Candidate components after fault inj. assessment | 10 | 31
#Identified secondary assets                 | 10     | 44
Average execution time for pruning (s)       | 183.27 | 618.65

2.4 Summary

This chapter discusses the importance of security assets in the overall security assurance of an SoC. As SoC designs grow more complex every day, so do the attack vectors that try to exploit them. This raises the need to ensure SoC security and protect users' data. Security requirements must be introduced into the design process, and security asset identification is the initial step toward that goal. Security assets are the hardware design components that store and propagate users' sensitive information and thus must be protected from attacks. Different types of security assets can be present in an SoC and can be classified based on the specific characteristics of the design components that make them security assets. However, identifying security assets is complex, and neither design houses nor the research community has made much progress on it. As SoCs become more complex, the need for a security asset identification tool becomes paramount. SAIF is a novel security asset identification tool that can be integrated into the SoC design process. It combines techniques such as structural analysis and fault simulation-based information quantification with various metrics to assess vulnerability to each threat. SAIF outputs the security-critical secondary security assets that designers then need to protect. In this way, security assets are identified and protected to ensure higher overall security of the SoC.

References

1. D.M. Anderson, Design for Manufacturability: How to Use Concurrent Engineering to Rapidly Develop Low-Cost, High-Quality Products for Lean Production (CRC Press, 2020)
2. A. Basak, S. Bhunia, S. Ray, A flexible architecture for systematic implementation of SoC security policies, in 2015 IEEE/ACM International Conference on Computer-Aided Design (ICCAD) (2015), pp. 536–543. https://doi.org/10.1109/ICCAD.2015.7372616


3. S. Bhunia, M.S. Hsiao, M. Banga, S. Narasimhan, Hardware Trojan attacks: threat analysis and countermeasures. Proc. IEEE 102(8), 1229–1247 (2014). https://doi.org/10.1109/JPROC.2014.2334493
4. Bloomberg, Planting tiny spy chips in hardware can cost as little as $200. https://www.bloomberg.com/news/features/2018-10-04/the-big-hack-how-china-used-a-tiny-chip-to-infiltrate-america-s-top-companies
5. Cadence, JasperGold Formal Verification. https://www.cadence.com/en_US/home/tools/system-design-and-verification/formal-and-static-verification/jasper-gold-verification-platform/security-path-verification-app.html
6. Cadence, JasperGold FPV App. https://www.cadence.com/en_US/home/tools/system-design-and-verification/formal-and-static-verification/jasper-gold-verification-platform.html
7. Cadence, Synthesis. https://www.cadence.com/en_US/home/tools/digital-design-and-signoff/synthesis.html
8. C. Canella, D. Genkin, L. Giner, D. Gruss, M. Lipp, M. Minkin, D. Moghimi, F. Piessens, M. Schwarz, B. Sunar, J. Van Bulck, Y. Yarom, Fallout: leaking data on Meltdown-resistant CPUs, in Proceedings of the ACM SIGSAC Conference on Computer and Communications Security (CCS) (ACM, 2019)
9. I.S. Committee, et al., IEEE Standard Test Access Port and Boundary-Scan Architecture (2001)
10. G.K. Contreras, A. Nahiyan, S. Bhunia, D. Forte, M. Tehranipoor, Security vulnerability analysis of design-for-test exploits for asset protection in SoCs, in 2017 22nd Asia and South Pacific Design Automation Conference (ASP-DAC) (2017), pp. 617–622. https://doi.org/10.1109/ASPDAC.2017.7858392
11. G.K. Contreras, A. Nahiyan, S. Bhunia, D. Forte, M. Tehranipoor, Security vulnerability analysis of design-for-test exploits for asset protection in SoCs, in 2017 22nd Asia and South Pacific Design Automation Conference (ASP-DAC) (IEEE, 2017), pp. 617–622
12. Cycuity, Radix Solutions. https://cycuity.com/solutions/
13. N. Farzana, A. Ayalasomayajula, F. Rahman, F. Farahmandi, M. Tehranipoor, SAIF: automated asset identification for security verification at the register transfer level, in 2021 IEEE 39th VLSI Test Symposium (VTS) (2021), pp. 1–7. https://doi.org/10.1109/VTS50974.2021.9441039
14. N. Farzana, F. Rahman, M. Tehranipoor, F. Farahmandi, SoC security verification using property checking, in 2019 IEEE International Test Conference (ITC) (2019), pp. 1–10. https://doi.org/10.1109/ITC44170.2019.9000170
15. O. Girard, MSP430 (2009, accessed 4 Aug 2009). https://github.com/olgirard/openmsp430
16. M. He, J. Park, A. Nahiyan, A. Vassilev, Y. Jin, M. Tehranipoor, RTL-PSC: automated power side-channel leakage assessment at register-transfer level, in 2019 IEEE 37th VLSI Test Symposium (VTS) (2019), pp. 1–6. https://doi.org/10.1109/VTS.2019.8758600
17. J.M. Joyce, Kullback-Leibler Divergence (Springer, Berlin, Heidelberg, 2011), pp. 720–722. https://doi.org/10.1007/978-3-642-04898-2_327
18. P. Kocher, J. Horn, A. Fogh, D. Genkin, D. Gruss, W. Haas, M. Hamburg, M. Lipp, S. Mangard, T. Prescher, M. Schwarz, Y. Yarom, Spectre attacks: exploiting speculative execution, in 40th IEEE Symposium on Security and Privacy (S&P'19) (2019)
19. M.L. Laboratory, CEP SoC. https://github.com/mit-ll/CEP. Accessed 12 June 2020
20. M. Lipp, M. Schwarz, D. Gruss, T. Prescher, W. Haas, A. Fogh, J. Horn, S. Mangard, P. Kocher, D. Genkin, Y. Yarom, M. Hamburg, Meltdown: reading kernel memory from user space, in 27th USENIX Security Symposium (USENIX Security 18) (2018)
21. A. Nahiyan, F. Farahmandi, P. Mishra, D. Forte, M.M. Tehranipoor, Security-aware FSM design flow for identifying and mitigating vulnerabilities to fault attacks. IEEE Trans. Comput. Aided Des. Integr. Circuits Syst. 38, 1003–1016 (2019)
22. A. Nahiyan, J. Park, M. He, Y. Iskander, F. Farahmandi, D. Forte, M. Tehranipoor, SCRIPT: a CAD framework for power side-channel vulnerability assessment using information flow tracking and pattern generation. ACM Trans. Des. Autom. Electron. Syst. 25(3), 1–27 (2020). https://doi.org/10.1145/3383445
23. Z. Ning, F. Zhang, Understanding the security of ARM debugging features, in 2019 IEEE Symposium on Security and Privacy (SP) (2019), pp. 602–619


24. S. Ray, Y. Jin, Security policy enforcement in modern SoC designs, in 2015 IEEE/ACM International Conference on Computer-Aided Design (ICCAD) (2015), pp. 345–350. https://doi.org/10.1109/ICCAD.2015.7372590
25. S. Ray, E. Peeters, M.M. Tehranipoor, S. Bhunia, System-on-chip platform security assurance: architecture and validation. Proc. IEEE 106(1), 21–37 (2018). https://doi.org/10.1109/JPROC.2017.2714641
26. Synopsys, Synopsys Design Compiler. https://www.synopsys.com/implementation-and-signoff/rtl-synthesis-test/dc-ultra.html
27. Synopsys, VC Formal. https://www.synopsys.com/verification/static-and-formal-verification/vc-formal.html
28. Synopsys, VCS Functional Verification Solution. https://www.synopsys.com/verification/simulation/vcs.html
29. S. van Schaik, A. Milburn, S. Österlund, P. Frigo, G. Maisuradze, K. Razavi, H. Bos, C. Giuffrida, RIDL: rogue in-flight data load, in S&P (2019)
30. H. Wang, H. Li, F. Rahman, M.M. Tehranipoor, F. Farahmandi, SoFI: security property-driven vulnerability assessments of ICs against fault-injection attacks. IEEE Trans. Comput. Aided Des. Integr. Circuits Syst. 41(3), 452–465 (2022). https://doi.org/10.1109/TCAD.2021.3063998
31. Wired, https://www.wired.com/story/plant-spy-chips-hardware-supermicro-cheap-proof-of-concept/
32. X. Xu, J. Xu, D. Lee, D. Park, J. Park, Design of halt-mode and monitoring-mode on-chip debugger 2G for Core-A. Int. J. Inf. Electron. Eng. 3(3), 324 (2013)
33. Synopsys, Z01X Functional Safety Assurance. https://www.synopsys.com/verification/simulation/z01x-functional-safety.html

Chapter 3

Metrics for SoC Security Verification

3.1 Introduction

Electronic products today and in the near future rely heavily on Systems-on-Chip (SoCs). SoCs are the heart of almost every computing system (e.g., mobile phones, payment gateways, IoT devices, medical equipment, automotive systems, avionic devices, etc.). A modern SoC contains a wide range of sensitive information, commonly referred to as security assets (e.g., keys, biometrics, personal info, etc.). It is important to protect these assets from a variety of attack models, including IP piracy [82], power side-channel analysis [33], fault injection, malicious hardware attacks [64], and supply chain attacks [25]. After a hardware design is fabricated and deployed, security vulnerabilities usually cannot be patched; any change to the hardware design requires redoing the entire design, which takes time and costs money. Vulnerabilities should therefore be addressed at an early design stage, such as the RTL or gate level. For an overall evaluation of the security of the design, a quantitative measurement or estimation of security is much needed. In spite of much testing research [6–9, 37, 42, 65, 68, 83], security verification challenges continue to rise, and many Electronic Design Automation (EDA) tools lack the capability to automatically evaluate and optimize security. Most of the time, security is an afterthought. Various detection [5, 36, 56, 71] and defense mechanisms [32, 47, 51, 53, 57, 58, 88] have been developed to address this issue. The hardware security community has also developed a number of methodologies for quantifying defense mechanisms. These approaches, however, can only be applied to intellectual property (IP) blocks, which are the platform's primary building blocks. After an IP is integrated into a platform, its security may not remain the same: the integration process introduces additional parameters that directly impact security.
Once an IP has been placed on the platform, these parameters either degrade or enhance the IP's security. Consequently, this motivates new methodologies for estimating and measuring security at the platform level.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023. F. Farahmandi et al., CAD for Hardware Security, https://doi.org/10.1007/978-3-031-26896-0_3

It is important that the methods of security estimation should



Fig. 3.1 Platform-level security estimation approach. Based on this bottom-up approach, IP-level security estimation is carried out first, followed by platform integration and platform-level security estimation

accurately estimate the security of the silicon at an early stage of the design process, whereas the measurement approach should accurately measure the platform-level security through exhaustive simulation and emulation. It is also important to note that threats are typically considered individually when mitigating them. When a vulnerability is mitigated for one threat, the design may become more susceptible to another. Therefore, security optimization is necessary to achieve the best collective security possible in the context of diverse threat models. Figure 3.1 provides an overview of the platform-level security estimation approach. Following a bottom-up approach, the IP-level security estimation is followed by the platform integration and the platform-level security estimation. This chapter begins with a motivating example in Sect. 3.2. Following that, the threat model is discussed in Sect. 3.3. The background work is discussed in Sect. 3.4, including IP-level security metrics and design parameters that contribute to the platform-level security metric. A description of the transition from IP to SoC is provided in Sect. 3.5. Section 3.6 presents the approach for measuring and estimating security, followed by challenges in Sect. 3.8. Lastly, the final section concludes the chapter.

3.2 Motivating Example

This section provides a motivating example and discusses why security estimation and measurement techniques should be developed at the SoC level. In particular, we consider the power side-channel (PSC) model to be the threat model of interest. An illustration of the transition from IP to SoC can be found in Fig. 3.2. Consider a cryptographic element, IP2, shown in Fig. 3.2, assuming it has already been evaluated as a standalone module against the PSC vulnerability. Our next objective is to determine whether IP2 remains robust against PSC attacks after it is integrated into a system. If not, by how much does its robustness decrease or increase? To gain a better understanding of how SoC parameters affect the PSC robustness of IP2, let us consider the power distribution network (PDN) as the SoC-level parameter.


Fig. 3.2 Transition from IP to Platform. This repository contains a wide range of IPs that are required for the development of a SoC platform. The IPs are integrated with additional glue logic, such as wrappers, buses, firewalls, etc., to form the SoC platform

(Figure: an IP repository containing IP1–IP4 and an AES core; after integration, each IP sits behind a wrapper and firewall on the system bus, alongside a memory encrypter and a security IP.)
An analysis of power side-channel leakage on IP2 begins with collecting its power traces. It is important to note that, depending on the way the PDN is implemented at the SoC level, different power rails may be shared among different IPs. Therefore, power traces cannot be obtained from IP2 alone at the platform level. Furthermore, when evaluating the IP-level PSC robustness, an assumption is made that the attacker has the privilege to measure the IP's power trace. At the platform level, however, this assumption does not hold: the attacker is only able to collect the aggregate power trace of all active IPs along with IP2 on the platform. As a result, the collected power traces may be noisier than those collected from standalone IP2, potentially complicating the attacker's task. In other words, IP2 integrated into a system-on-chip might be more secure against PSC attacks. Therefore, the level of security associated with an IP at the platform level may differ depending on a number of parameters that are introduced during the transition from IP to platform. Table 3.2 provides a list of such parameters affecting platform-level security for five different threats. An accurate estimation has a significant impact on the design decision. PSC assessments, for instance, may not require


additional countermeasures if the estimate of SoC-level security of IP2 is high enough, which significantly reduces design effort, area, power, and cost. It is clear from this example that the transition from IP to platform affects the security of the platform by introducing different parameters, resulting in the need for platform-level security estimation and measurement methods.

3.3 Threat Model

This section discusses the threat models for the five different types of attacks. A brief description of how these attacks are conducted at the IP level is provided.

3.3.1 IP Piracy

This attack involves making an illegal copy or clone of the IP. Most commonly, an attacker makes little or no modification to the original IP and sells it to a chip designer, claiming ownership of it. Logic locking has been a popular technique to prevent IP piracy. In this approach, key gates are inserted at the gate level to lock the design. The IP functions properly only if the key inputs are correctly set; incorrect key values result in a corrupted output, preventing an attacker from gaining access to the IP [38–40]. In recent years, Boolean satisfiability-based attacks (SAT attacks) have proven highly efficient at retrieving the correct key from a locked gate-level netlist. It has been shown that even for a considerably large key space [62], a SAT-based attack can retrieve the correct key within a few hours. In this attack, a modern SAT solver iteratively solves a SAT formula in Conjunctive Normal Form (CNF) until the correct key is retrieved. The attack model can be summarized as follows:
• A gate-level netlist of the locked IP is available to the attacker.
• Additionally, the attacker has access to a functional copy of the IC, called the oracle, to which she can apply inputs and observe the correct outputs.
• The attacker can access the scan chain of the integrated circuit. The SAT attack requires scan-chain access in sequential designs: gaining access to the scan chain and performing scan shift operations provides full control of the circuit's internal states and reduces the complexity of the SAT attack to that of attacking a combinational design [20].
The SAT solving tools produce distinguishing input patterns (DIPs) [62] and prune out the wrong key space, eventually finding the correct key.
A DIP is an input pattern x_d for which there exist at least two different key values, k_1 and k_2, that generate two different outputs, o_1 and o_2. After determining a DIP x_d, the SAT attack tool applies it to the oracle IC to obtain the correct output o_d. A comparison is made between the input-output pair of the locked design and the correct input-output pair


of the design for (x_d, o_d). If they do not match, the key values used to find the DIP are pruned out. A single DIP cannot prune out all the wrong keys, so this process continues until no more DIPs can be identified. At that point, all wrong key values have been removed, leaving only the correct key.
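The DIP-based pruning loop can be illustrated on a toy locked circuit. This sketch replaces the SAT solver with exhaustive search over a 2-bit input and key space, which is only feasible at toy scale; the locked function and correct key below are hypothetical.

```python
from itertools import product

CORRECT_KEY = (0, 0)  # hypothetical key loaded into the unlocked chip

def locked(x, k):
    """Toy locked netlist: two key gates inserted into a 2-input function."""
    return (x[0] & x[1]) ^ k[0] ^ (x[0] & k[1])

def oracle(x):
    """Functional (unlocked) chip the attacker can query."""
    return locked(x, CORRECT_KEY)

keys = set(product((0, 1), repeat=2))    # surviving key candidates
inputs = list(product((0, 1), repeat=2))
iterations = 0
while True:
    # A DIP is an input on which two surviving keys still disagree
    dip = next((x for x in inputs if len({locked(x, k) for k in keys}) > 1), None)
    if dip is None:
        break  # no DIP left: every surviving key is functionally correct
    iterations += 1
    o_d = oracle(dip)                                  # query the oracle
    keys = {k for k in keys if locked(dip, k) == o_d}  # prune mismatching keys
```

After the loop, `keys` contains only the correct key; `iterations` is the SAT-iteration count that Sect. 3.4.1 uses as a resilience metric.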

3.3.2 Power Side Channel (PSC) Leakage

The threat here is that power consumption analysis reveals secret assets (keys) within critical IPs. Power side-channel attacks attempt to extract the secret key from a cryptographic implementation (e.g., the Advanced Encryption Standard (AES)) based on the trace of the IP's power consumption. An AES algorithm generates round keys from the secret key, and each round substitutes bytes, shifts rows, mixes columns, and adds the round key to the state. The transistor switching activity of a chip determines the power consumption of a device, and there is an apparent correlation between the energy consumed at different phases of cryptographic operations and the value of the secret key. In the first round of s-box computation, it is possible to statistically distinguish power traces based on the least significant bit of the s-box output [34]. A platform-level security measurement and estimation process is described with regard to cryptographic IPs in which the attacker's objective is to leak the key by means of differential power analysis (DPA) or correlation power analysis (CPA) attacks [16, 33]. It is assumed that the attacker has full access to measurements of the target chip's power consumption. To obtain the secret key, an adversary collects power traces from the chip during cryptographic operations for a large number of known plaintexts. Using a divide-and-conquer approach, the adversary pieces the secret key together byte by byte. According to the value of the least significant bit (0 or 1) of the first s-box output, the adversary divides the traces into two sets. The two sets are then compared to determine whether there is a statistically significant difference between them; the correct hypothetical key causes the two sets to differ significantly [34].
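The following sketch illustrates this attack flow using correlation power analysis (CPA) with a Hamming-weight leakage model on simulated traces. The 4-bit s-box (PRESENT's) stands in for the AES s-box, and the key value, trace count, and noise level are arbitrary assumptions for the illustration.

```python
import numpy as np

# Illustrative 4-bit s-box (PRESENT's), standing in for the AES s-box
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]
HW = [bin(v).count("1") for v in range(16)]  # Hamming-weight leakage model

rng = np.random.default_rng(1)
SECRET = 0xA                                 # hypothetical key nibble to recover
plaintexts = rng.integers(0, 16, 4000)
# Simulated single-point traces: HW of the first-round s-box output plus Gaussian noise
traces = np.array([HW[SBOX[p ^ SECRET]] for p in plaintexts]) + rng.normal(0, 1.0, 4000)

def cpa_score(guess):
    """|Pearson correlation| between hypothesized leakage and measured traces."""
    hyp = np.array([HW[SBOX[p ^ guess]] for p in plaintexts])
    return abs(np.corrcoef(hyp, traces)[0, 1])

best_guess = max(range(16), key=cpa_score)  # divide-and-conquer: one nibble at a time
```

The correct guess produces the hypothesis most correlated with the measured traces, so `best_guess` recovers the secret nibble; a full attack repeats this per key byte.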

3.3.3 Fault Injection

Fault injection attacks are among the most common non- and semi-invasive side-channel attacks in the hardware domain [29, 89]. To perform this attack, an attacker creates transient or permanent faults during the device's normal operation and observes the erroneous outputs. Using this technique, an attacker is able to drastically reduce the number of experiments that


Fig. 3.3 Different fault injection techniques. (a) Fault injection by voltage glitch. (b) Fault injection by clock glitch. (c) Laser fault injection. (d) Electromagnetic (EM) fault injection

are required to guess the secret, a technique known as Differential Fault Analysis (DFA) [19]. A number of fault injection techniques have been developed, including voltage and clock glitching, EM/radiation injection, and laser/optical injection [89]. As shown in Fig. 3.3a, voltage glitching is characterized by a momentary drop in the supply voltage during operation. Voltage spikes or drops adversely affect critical paths, resulting in latching failures of signals [49]. A voltage glitch can be caused either by disabling the main power supply, causing a global effect, or by running a power-hungry circuit such as ring oscillators (ROs) to cause a localized effect. It has recently been demonstrated that remote fault injection attacks can also be carried out by manipulating the supply voltage remotely using software-accessible registers, or by using ROs to cause voltage drops in remote FPGAs. Clock glitching involves adding glitches to the clock supply or otherwise disturbing its normal behavior, as shown in Fig. 3.3b. These clock glitches can violate setup and hold times [4]. It should also be noted that voltage starvation can affect the clock and circuit timing; therefore, voltage and clock glitching attacks are often combined to cause timing errors. A transient fault may be injected using optical/laser fault injection attacks that apply different wavelengths of laser/optics from the front or backside of the chip,


as illustrated in Fig. 3.3c. The front-side attack is accomplished using a longer-wavelength laser (1300 nm), whereas the backside attack uses a near-infrared laser (1064 nm) due to its lower absorption coefficient. The injected laser causes electron-hole pairs to drift apart in the active region under the influence of the electric field, producing transient currents. These transient currents make the transistors conduct, charging and discharging the capacitive loads. In general, laser/optical fault injection involves de-packaging the chip to expose the die, which qualifies it as a semi-invasive attack. Notably, with laser/optics an attacker can pinpoint the exact location and time of the injection, making it a very powerful attack [70]. Similarly, EM fault injection attacks alter the electric or magnetic field flux to influence normal device operation, as shown in Fig. 3.3d. The electromagnetic field causes voltage and current fluctuations inside the device, leading to faults [18].

3.3.4 Malicious Hardware

Malicious hardware poses a significant security threat to modern SoC platforms. In most cases, this attack is carried out by inserting malicious hardware or maliciously modifying the original design [15, 45, 67]. Malicious hardware may be implanted by third-party IP vendors or by unreliable design, fabrication, and testing facilities. This type of attack can potentially violate any of the following properties:
• Confidentiality: it can leak sensitive information to untrusted observable points.
• Integrity: it can illegally alter or modify data.
• Availability: it can affect the availability of the device.

3.3.5 Supply Chain

The modern SoC supply chain includes the definition of design specifications, RTL implementation, IP integration, verification, synthesis, insertion of design-for-test (DFT) and design-for-debug (DFD) structures, physical layout, fabrication, testing, packaging, and distribution [11–13, 61, 74, 77, 79, 80]. IC designs have become increasingly vulnerable as a result of the globalization of these steps, and the design is inevitably exposed to different supply chain attacks as it proceeds through the flow. Among the most common supply chain attacks that compromise the hardware design flow are recycling, remarking, and cloning.
• Recycling: an electronic component that has been used for several years can be recycled and passed off as new. In this type of attack, attackers extract


a chip from a used system and then sell it as a new product without much modification. A recycled chip usually has reduced performance and a shorter life expectancy [11–13, 15, 61, 74, 77, 80].
• Remarking: each electronic chip is marked with information that uniquely identifies it, such as part identifying numbers (PINs), lot identification codes, date codes, manufacturer's identification codes, country of manufacture, electrostatic discharge (ESD) sensitivity identifiers, and certification marks [15]. This marking information is undoubtedly crucial. Space-grade chips, for example, can withstand extreme conditions, including temperatures and radiation beyond the capabilities of commercial-grade chips, and are therefore more expensive. Attackers can remark commercial-grade chips as space-grade and sell them at a higher price; using such a remarked chip in a spacecraft could result in disastrous failure.
• Cloning: the malicious intent of IP piracy can lead to the cloning of a design. For example, a dishonest platform integrator can steal an IP and sell it to another platform integrator.

3.4 IP-level Security Metrics and Design Parameters Contributing to the IP-level Security

This section presents IP-level security metrics for five different threat models and discusses the parameters that contribute to the IP-level security metric.

3.4.1 Metrics to Assess an IP's Vulnerability to Piracy and Reverse Engineering

As discussed in Sect. 3.3, logic locking has been one of the most effective methods for preventing IP piracy by locking the design with a key. A modern SAT solver can, however, retrieve the key within a relatively short amount of time. This subsection discusses a number of metrics that represent resilience to SAT attacks as a measure of robustness against IP piracy. The most commonly used metrics for evaluating IP piracy resistance are the following.
• Output Corruptibility [59]: for a given input pattern, output corruptibility is defined as the probability that the outputs of a locked IP with a wrong key and of an unlocked, functional chip are not equal. Taking C_e and C_o to represent the locked and functional IP, respectively, and I and K to represent the input pattern space and wrong key space, respectively, the output corruptibility Cr may be expressed as:

Cr(C_e, C_o) ≡ Pr[C_e(i, k) ≠ C_o(i)], where i ∈ I, k ∈ K


Although output corruptibility is an efficient measure of how well the design's functionality is hidden, a high level of output corruption may help SAT solving tools converge faster.
• Number of SAT Iterations [78]: another important metric, this indicates how many iterations the SAT attacker must perform to uncover the key. Generally, the higher this number, the more resilient the design. However, a higher iteration count does not guarantee a longer key retrieval time; that depends on the initial conditions and the way the SAT solver navigates the DIP search space. There is no guarantee that a design requiring more iterations will be more resilient than one requiring fewer, and the time per iteration is not necessarily equal. Generally, however, a large number of iterations is expected to require a longer period of time.
• CPU Time [46]: CPU time represents the time required by the SAT attacking engine to extract the correct key. Higher CPU times indicate a more robust design; an ideal SAT-resilient design drives the attack to CPU timeout.
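Output corruptibility can be estimated by Monte-Carlo sampling of random inputs and wrong keys. The locked function below is a hypothetical toy, not a real locked netlist.

```python
import random

CORRECT_KEY = 0x9  # hypothetical 4-bit key

def locked(x, k):
    """Toy locked block: 8-bit input x, 4-bit key k, 1-bit output (illustrative only)."""
    masked = (x & 0x0F) ^ k                       # key gates XORed into the low nibble
    return bin(masked & (x >> 4)).count("1") & 1  # parity of the gated bits

def output_corruptibility(trials=20000, seed=7):
    """Monte-Carlo estimate of Cr = Pr[Ce(i, k) != Co(i)] over random i and wrong k."""
    prng = random.Random(seed)
    wrong_keys = [k for k in range(16) if k != CORRECT_KEY]
    hits = 0
    for _ in range(trials):
        i = prng.randrange(256)
        k = prng.choice(wrong_keys)
        hits += locked(i, k) != locked(i, CORRECT_KEY)
    return hits / trials

cr = output_corruptibility()
```

For this toy function the estimate lands near 0.5, i.e., close to the ideal 50% corruption discussed under the output-corruption parameter below.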

3.4.2 IP-Level Parameters Contributing to IP Piracy Security Metrics

Various design parameters of an IP contribute to quantifying the resiliency of that IP against a specific threat. These are referred to as IP-level parameters. This subsection discusses a number of parameters that contribute to the evaluation of IP piracy robustness against SAT attacks. Table 3.1 lists the relevant IP-level parameters for each of the five threats.
• Locking Key Size: an important IP-level parameter contributing to IP piracy resiliency is the key size. In an ideal SAT-attack-resilient locking mechanism, the attacker should be unable to retrieve the key without brute force; the locking mechanism is commonly implemented so that recovering the key takes as many iterations as brute-force recovery. Under this assumption, SAT resilience should increase exponentially with key size.
• Locking Key Distribution: the distribution of the key is another important IP-level parameter. Essentially, the key distribution indicates whether a particular locking key is shared among multiple IPs. A shared-key locking mechanism allows the functionality of more than one IP to be recovered by retrieving a single key.
• Output Corruption: this term refers to the deviation of a locked IP's output from the correct output. Considering that the primary purpose of locking an IP is to conceal its functionality, the deviated output under an incorrect key serves this purpose by preventing the attacker from using that functionality. Hamming distance is usually used to measure the degree of output corruption between the correct and incorrect outputs. In ideal circumstances, a Hamming

46

3 Metrics for SoC Security Verification

Table 3.1 IP-level parameters that contribute to the security metric of IP piracy, Power sidechannel analysis, Fault injection, Malicious hardware, Supply chain attack Parameters threat Locking key size Locking key distribution Output corruption Locking mechanism Key error rate (KER) Input error rate (IER) Mode of operation Cryptographic key size Number of clock cycles per operation Data dependent switching activity Data independent switching activity Gate type/sizing Nearby cells Static probability Signal rate Toggle rate Fan-out Immediate fan-in Lowest controllability of inputs

IP Piracy ✓ ✓ ✓ ✓ ✓ ✓ – – – – – – – – – – – – –

Power Side-channel – – – – – – ✓ ✓ ✓ ✓ ✓ – ✓ – – – – – –

Fault Injection – – – – – – – – – – – ✓ ✓ – – – – – –

Malicious Hardware – – – ✓ – – – – – – – – – ✓ ✓ ✓ ✓ ✓ ✓

Supply chain – – – – – – – – – – – – – – – – – – –

distance of 50% should constitute the maximum deviation. In contrast, highly deviated output (higher hamming distance) usually assists SAT solvers in pruning out a broader number of DIPs in one iteration. • Locking Mechanism: Generally, locking mechanisms refer to the manner in which locking is actually implemented in a design. It represents the placement of the key gates within the design, the number of fanouts affected by the key gate, and finally, the cause of the output corruption. It was demonstrated by the researchers that locking with randomly inserted key gates can result in sensitization of the key bit to the output that results in retrieving the correct key[54]. • Key Error Rate (KER): KER [41] The KER value of a key [41] represents the fraction of the input minterms that have been corrupted by this key. In the case of an input size of n bits, if XK represents the set of corrupted input minterms by the key K, then the key error rate for the key K would be calculated as follows: KER =

|Xk | 2n

An increase in KER is likely to increase the overall output corruption of the design.


• Input Error Rate (IER): IER [41] is the ratio of the number of wrong keys that corrupt a given input minterm to the total number of wrong keys. For a given input minterm X, if K_X is the set of wrong keys that corrupt X and K_WK denotes the set of all wrong keys, then IER is defined as

IER = |K_X| / |K_WK|

A high IER contributes to functional corruption in the design, thereby hiding its correct functionality.
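As a rough illustration of the two definitions, the following sketch computes KER and IER exhaustively for a hypothetical 3-input, 2-bit-key locked function; the function and correct key are invented, not taken from the text.

```python
from itertools import product

# Invented 3-input, 2-bit-key locked function for illustration.
K_CORRECT = (1, 0)

def locked(x, k):
    return (x[0] ^ k[0]) & ((x[1] | x[2]) ^ k[1])

INPUTS = list(product((0, 1), repeat=3))
KEYS = list(product((0, 1), repeat=2))
WRONG_KEYS = [k for k in KEYS if k != K_CORRECT]

def ker(k):
    """KER(K) = |X_K| / 2^n: fraction of input minterms corrupted by key K."""
    corrupted = sum(locked(x, k) != locked(x, K_CORRECT) for x in INPUTS)
    return corrupted / len(INPUTS)

def ier(x):
    """IER(X) = |K_X| / |K_WK|: fraction of wrong keys corrupting minterm X."""
    corrupting = sum(locked(x, k) != locked(x, K_CORRECT) for k in WRONG_KEYS)
    return corrupting / len(WRONG_KEYS)

for k in WRONG_KEYS:
    print(f"KER{k} = {ker(k):.3f}")
print("max IER over minterms =", max(ier(x) for x in INPUTS))
```

For real designs these quantities are estimated by simulation rather than exhaustive enumeration, since 2^n input patterns quickly become intractable.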

3.4.3 Metrics to Assess an IP's Vulnerability to Power Side-Channel (PSC) Attacks

This subsection discusses some existing security metrics for power side-channel analysis at the IP level.

• Signal-to-Noise Ratio (SNR): SNR is defined as the ratio of the variance of the actual power consumption to that of the noise [81]. If P_signal and P_noise represent the power consumption of the target attack gates and the additive noise, respectively, the SNR is defined as

SNR = Var(P_signal) / Var(P_noise)   (3.1)

where Var(·) denotes the variance. Using the signal-to-noise ratio, one can estimate how difficult it would be to identify the correct key.
• Measurement to Disclose (MTD): MTD is defined as the number of power traces required to successfully reveal the correct key [44]. The MTD depends both on the signal-to-noise ratio (SNR) and on the correlation coefficient ρ_0 between the power model and the signal component of the power consumption. Mathematically,

MTD ∝ 1 / (SNR × ρ_0²)   (3.2)

• Test Vector Leakage Assessment (TVLA): TVLA [23] evaluates side-channel vulnerabilities using Welch's t-test. TVLA examines two sets of power traces: in one, a fixed key is used with fixed plaintext; in the other, the same fixed key is used with random plaintext. A hypothesis test is then carried out under the null hypothesis that the two sets of traces are identical. If the null hypothesis is accepted, the collected traces are unlikely to leak information about the key; if it is rejected, the traces are likely to leak sensitive information about the key. TVLA is defined as

TVLA = (μ_r − μ_f) / √(σ_r²/n_r + σ_f²/n_f)   (3.3)

Here, μ_f and σ_f are the mean and standard deviation of the set of traces with fixed key and fixed plaintext, and μ_r and σ_r are the mean and standard deviation of the set with fixed key and random plaintext; n_r and n_f are the numbers of traces in the random-plaintext and fixed-plaintext sets, respectively.
• Kullback-Leibler (KL) Divergence: The KL divergence [35] is generally used to calculate the statistical distance between two probability distribution functions. In power side-channel analysis, this notion of statistical distance can identify vulnerable designs. For example, if the power consumption distribution of a cryptographic algorithm varies with the keys [26], the power consumption can be exploited to reveal key information. Assuming f_{T|ki}(t) and f_{T|kj}(t) are the probability density functions of the power consumption for the given keys k_i and k_j, respectively, the KL divergence is defined as

D_KL(k_i ∥ k_j) = ∫ f_{T|ki}(t) log [ f_{T|ki}(t) / f_{T|kj}(t) ] dt   (3.4)

The higher the KL divergence, the more significant the differences between power consumption for different keys, and the easier it is to exploit the design by analyzing power side channels. Conversely, a smaller KL divergence indicates greater resistance to power side-channel attacks.
• Success Rate (SR): The success rate is defined as the ratio of successful attacks to the total number of attempts [60]:

SR = (Number of successful attacks) / (Total number of attacks)   (3.5)

• Side-Channel Vulnerability (SCV): The side-channel vulnerability (SCV) metric is functionally analogous to the signal-to-noise ratio (SNR). Unlike SNR, however, SCV is useful in formal methods based on information flow tracking (IFT) for assessing PSC vulnerability at the pre-silicon design stage with a few simulated traces, rather than the thousands of silicon traces required by SNR. It is defined as

SCV = P_signal / P_noise = (P_{T,hi} − P_{T,hj}) / P_noise   (3.6)

Here, P_{T,hi} and P_{T,hj} denote the average power consumption of the target function when the Hamming weight (HW) of the output is h_i = HW(T_i) and h_j = HW(T_j) for the i-th and j-th input patterns, respectively. For the PSC assessment, the difference between P_{T,hi} and P_{T,hj} is estimated as the signal power.
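The SNR and TVLA definitions above can be exercised on synthetic data. The sketch below fabricates Hamming-weight-based "power traces" with Gaussian noise and computes an Eq. (3.1)-style SNR and the Welch t-statistic of Eq. (3.3); all constants (leakage coefficient, noise level, trace counts, the fixed class's Hamming weight) are invented, and the |t| > 4.5 pass/fail threshold is the one conventionally used with TVLA.

```python
import random
import statistics as st

random.seed(7)

# Invented leakage model: per-trace "power" = 0.5 * HW + Gaussian noise.
NOISE_SD = 2.0

def trace(hw):
    return 0.5 * hw + random.gauss(0.0, NOISE_SD)

# Fixed-plaintext class assumed to drive the target value to HW = 8;
# random-plaintext class draws HW uniformly (both are illustrative choices).
fixed = [trace(hw=8) for _ in range(2000)]
rand = [trace(hw=random.randint(0, 8)) for _ in range(2000)]

# SNR = Var(P_signal) / Var(P_noise)  (Eq. 3.1), using the generator's parts.
signal_var = st.pvariance([0.5 * hw for hw in range(9)])
snr = signal_var / NOISE_SD ** 2

# TVLA = (mu_r - mu_f) / sqrt(s_r^2/n_r + s_f^2/n_f)  (Eq. 3.3, Welch's t).
def tvla(r, f):
    return (st.mean(r) - st.mean(f)) / (
        (st.variance(r) / len(r) + st.variance(f) / len(f)) ** 0.5)

t = tvla(rand, fixed)
print(f"SNR ~ {snr:.2f}, |TVLA| = {abs(t):.2f}  (|t| > 4.5 flags leakage)")
```

On real silicon the signal and noise variances are not known separately and must be estimated from measured traces; here they are available only because the traces are synthetic.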

3.4.4 IP-Level Parameters Contributing Power Side-Channel (PSC) Security Metrics

The following IP-level design parameters contribute to the IP-level PSC metrics.

• Mode of Operation: In cryptographic operations, the mode of operation can have a significant impact on the amount of power consumed. The AES block cipher, for example, can operate in several modes, such as Electronic Code Book (ECB), Cipher Block Chaining (CBC), Cipher Feedback (CFB), and Counter (CTR). PSC resilience inevitably varies across modes of operation [30], as the power consumption and noise level differ owing to the additional logic and the use of initialization vectors, counters, nonces, etc.
• Cryptographic Key Size: Another important factor is the key size of the cryptographic algorithm, primarily because it determines the number of rounds in the AES algorithm. For 128-, 192-, and 256-bit keys, the number of rounds is 10, 12, and 14, respectively. The number of rounds affects the amount of parallel activity, which results in varying resilience to side-channel attacks.
• Number of Clock Cycles per Cryptographic Operation: An important factor when evaluating an implementation's resiliency against side-channel attacks is the number of clock cycles required to perform an AES operation. In a loop-unrolled AES architecture, one or more rounds may be performed in the same clock cycle. Usually, only one round of the algorithm is implemented as a combinational processing element, and the results are stored in data registers at the end of each clock cycle. With a pipelined architecture, multiple data blocks are processed simultaneously in a clock cycle, typically requiring multiple registers and processing elements. In addition, key expansion can be performed concurrently with the round operations or during the first clock cycle. Consequently, the number of cycles required for the algorithm affects both the amount of parallel activity and the level of noise in the power trace.
• Data Dependent Switching Activities: Transistor activities that are directly related to the key are known as data-dependent switching activities. For example, the first AddRoundKey and SubBytes operations use the key value, which directly contributes to the power consumption. Both the key and plaintext values affect data-dependent activities.
• Data Independent Switching Activities: Transistor activity that is uncorrelated with the input key is considered data-independent switching activity and adds noise to the power trace. For example, after the first round of AES there is sufficient confusion and diffusion that the correlation between the key and the power consumption diminishes significantly in the subsequent rounds. Furthermore, the additional logic and circuitry of different AES implementations and architectures provide varying amounts of data-independent switching.

3.4.5 IP-level Parameters Contributing to an IP's Vulnerability to Fault Injection Attacks

Several parameters are important in fault injection attacks and can impact the overall feasibility of such attacks, as discussed below.

• Spatial Controllability: Spatial controllability enables an attacker to target a specific net/gate within the design. Note that, owing to the large design size, not all components of the design (nets/gates/registers) are essential to a successful attack. An attacker attempts to inject faults into the specific components whose violation will assist the exploit, causing integrity or confidentiality violations. Thus, the higher the spatial controllability of the fault injection method, the greater the susceptibility of the design. Several factors affect spatial controllability, such as the fault method (clock, voltage, laser, etc.), the timing information of the design (path delays, clocks, etc.), and library information (types of cells and gates). For instance, a laser or optical fault method provides far greater spatial controllability, enabling the attacker to target specific locations on a chip. In contrast, clock glitching and voltage glitching can violate multiple paths within the design, resulting in multiple faults. Likewise, the delay distribution of the paths influences which registers are affected by voltage and clock glitching.
• Temporal Controllability: Temporal controllability allows an attacker to control the time of fault injection during the execution of a design. For example, an attacker would inject faults in the eighth round of AES execution to perform differential fault analysis with minimal faults [22, 69].
In contrast, attacking after the eighth round would require more faults, and attacking before the eighth round would require a more complex differential fault analysis, making key retrieval impractical. Consequently, whether an attacker can precisely trigger the clock, voltage, or laser injection directly impacts the ability to inject faults and, in turn, the design's susceptibility to fault injection.


• Fault Type and Duration: A fault's type, such as permanent, semi-permanent, or transient, can also affect fault injection attacks. Lasers can inject transient faults, but higher laser power can permanently damage the silicon. While transient faults flip bits at a gate's output, permanent faults can cause outputs to stick at 0/1. Therefore, different types of faults require different analyses from the attacker to leak information [85]. A design's susceptibility is also affected by the fault duration: the fault may not get latched if it is too short or falls within the slack of the timing path. Additionally, the laser duration impacts the number of clock cycles over which the transient effect of the laser-induced current persists.
• Fault Propagation: For an attack to succeed, injected faults must propagate to the observable points (for example, the ciphertext in AES encryption). If not latched into a register, clock/voltage glitches, laser injections, etc. do not affect the design's execution. Several factors affect the successful latching of faults, including the timing of the paths, the laser stimulation period, the fault duration, and the system clock.
• Fault Method: The fault injection method also plays a major role when evaluating or developing fault injection security metrics. Clock and voltage glitching-based fault injection methods, for example, are global: it is hard to control the specific fault location, and a single glitch can cause faults in multiple paths across the design. Laser and optical fault injection, on the other hand, are localized methods, and an adversary can inject faults at specific fault nodes. Furthermore, different methods require different physical access. Laser injection, for instance, requires an exposed backside, whereas clock glitching requires access to the system's clock.
The parameters discussed above are essential to determining an IP's susceptibility. In IPs, faults may occur in the control paths or the datapaths.
Datapaths are composed of combinational gates that transfer information from one register to another, whereas control paths consist of registers that control the data flow. To date, no security metrics exist to evaluate fault injection in datapaths. Evaluation should take into account the design and the threat model (e.g., the type of crypto). For instance, fault injections in the eighth, ninth, and tenth rounds of AES can leak keys, whereas earlier faults are more difficult to exploit [19]. In contrast, faults in the control paths, such as the FSM, can allow attackers to bypass design states. When an attacker bypasses round operations in AES and reaches the done state directly, the security of the encryption is destroyed. The FSM's vulnerability to fault injection is therefore measured in [49] using the vulnerability factor (VF_FI):

VF_FI = (PVT(%), ASF)   (3.7)

PVT(%) = (ΣVT / ΣT) × 100%,   ASF = ΣSF / ΣVT   (3.8)

where PVT(%) is the ratio of the number of vulnerable transitions (ΣVT) to the total number of transitions (ΣT), and ASF is the average of the susceptibility factors (ΣSF) over the vulnerable transitions. The susceptibility factor of a transition is defined as

SF_T = (min(PV) − max(PO)) / avg(P_FS)   (3.9)

where min(PV) is the minimum delay among the violated paths and max(PO) is the maximum delay among the paths not violated. Accordingly, the higher the PVT(%) and ASF values, the more susceptible the FSM is to fault injection attacks.
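As a sketch of Eqs. (3.7)–(3.9), the snippet below computes PVT(%), ASF, and VF_FI for a hypothetical table of FSM transitions; the transition names, delay values, and avg(P_FS) are invented for illustration.

```python
# Hypothetical per-transition data (all values invented): whether the transition
# is vulnerable (lets an attacker skip to a protected state), the minimum delay
# among violated paths min(PV), and the maximum delay among intact paths max(PO).
transitions = {
    "ROUND8->DONE":   (True,  4.2, 3.1),
    "ROUND8->ROUND9": (False, 4.0, 3.6),
    "IDLE->LOAD":     (False, 3.9, 3.0),
    "ROUND9->DONE":   (True,  4.5, 2.8),
}
AVG_PFS = 3.5  # assumed average path delay, stands in for avg(P_FS)

def susceptibility(min_pv, max_po):
    """SF_T = (min(PV) - max(PO)) / avg(P_FS)   (Eq. 3.9)"""
    return (min_pv - max_po) / AVG_PFS

vulnerable = {n: t for n, t in transitions.items() if t[0]}
pvt = 100.0 * len(vulnerable) / len(transitions)           # PVT(%)  (Eq. 3.8)
asf = sum(susceptibility(pv, po)
          for _, pv, po in vulnerable.values()) / len(vulnerable)
vf_fi = (pvt, asf)                                         # VF_FI   (Eq. 3.7)
print(f"PVT = {pvt:.0f}%, ASF = {asf:.3f} -> VF_FI = {vf_fi}")
```

In practice these delay figures would come from static timing analysis of the FSM's next-state logic rather than a hand-written table.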

3.4.6 Metrics to Assess an IP's Vulnerability to Malicious Hardware

The rest of this subsection discusses some of the existing metrics for evaluating malicious hardware in a design.

• Controllability: In the context of a design, controllability [31] refers to the ability to control the inputs of a component from the primary inputs of the design. Consider a component within a design with input variables x_1 to x_n and output variables z_1 to z_m. If controllability is denoted by CY, then the value of CY for each output of the component can be calculated as

CY(z_j) = CTF × (1/n) Σ_{i=1}^{n} CY(x_i)   (3.10)

Here, CTF is the controllability transfer function of the component, defined as

CTF ≅ (1/m) Σ_{j=1}^{m} [ 1 − |N_j(0) − N_j(1)| / 2^n ]   (3.11)

where N_j(0) and N_j(1) are the numbers of input patterns for which z_j takes the value 0 and 1, respectively.
• Observability: Observability is the ability to observe the output of a component from the primary outputs of the design. A component's observability OY for each input can be calculated as

OY(x_i) = OTF × (1/m) Σ_{j=1}^{m} OY(z_j)   (3.12)

where OTF is the observability transfer function, measuring the probability that a fault at an input of the component propagates to its outputs. If NS_i is the number of input patterns for which a change on x_i causes a change in the output, then OTF is defined as

OTF ≅ (1/n) Σ_{i=1}^{n} NS_i / 2^n   (3.13)

• Statement Hardness: According to [55], statement hardness evaluates how vulnerable a design is to malicious hardware insertion at the behavioral level. More specifically, it represents the difficulty of executing a statement in the RTL source code. Statements with a low statement-hardness value are vulnerable to malicious hardware insertion: an attacker will most likely target such areas, hoping the inserted hardware will be activated rarely under specific trigger conditions.
• Hard-to-Detect: [66] proposes the hard-to-detect metric, which quantifies the areas of a gate-level netlist that are vulnerable to malicious hardware insertion. Attackers frequently target these areas, as inserting malicious hardware there reduces the probability of detection during validation and verification.
• Code Coverage: Code coverage [48] measures how much of the design source code is executed during functional verification. Higher code coverage indicates fewer suspicious areas in the design, which reduces the likelihood of malicious hardware insertion. [66] proposes a technique called Unused Circuit Identification (UCI) that uses code coverage analysis; an attacker can insert malicious hardware into code lines that are not executed during functional verification.
• Observation Hardness (OH): OH is defined as the percentage of primary inputs propagating to an intermediate node [21]. To determine the OH value, a stuck-at-0 or stuck-at-1 fault is injected at a primary input. The fault at an intermediate node can be fully detected, potentially detected, or undetected, indicating how dependent that intermediate node is on the primary input. A node's observation hardness may be expressed as

OH = (Total number of detected faults) / (Total number of faults injected)   (3.14)
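A minimal sketch of the building blocks of Eqs. (3.10)–(3.13), computing CTF and OTF exhaustively for a 2-input AND gate. Treating the component as single-output (m = 1) and using 2^n in both denominators are simplifying assumptions of this illustration, not statements from the text.

```python
from itertools import product

def and2(x):
    return x[0] & x[1]

def ctf(fn, n, m=1):
    """CTF ~= (1/m) * sum_j (1 - |N_j(0) - N_j(1)| / 2^n)   (Eq. 3.11),
    specialized here to a single-bit-output component (m = 1)."""
    n0 = sum(fn(x) == 0 for x in product((0, 1), repeat=n))
    n1 = 2 ** n - n0
    return (1 - abs(n0 - n1) / 2 ** n) / m

def otf(fn, n):
    """OTF ~= (1/n) * sum_i NS_i / 2^n   (Eq. 3.13), where NS_i counts the
    input patterns on which toggling input x_i toggles the output."""
    total = 0.0
    for i in range(n):
        ns_i = sum(
            fn(x) != fn(x[:i] + (1 - x[i],) + x[i + 1:])
            for x in product((0, 1), repeat=n))
        total += ns_i / 2 ** n
    return total / n

print("2-input AND: CTF =", ctf(and2, 2), " OTF =", otf(and2, 2))
```

An XOR gate, whose output is balanced and always sensitive to each input, yields CTF = OTF = 1.0 under these definitions, matching the intuition that it is easy to control and observe.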

3.4.7 IP-Level Parameters Contributing Malicious Hardware Security Metrics

This subsection discusses a set of IP-level parameters that contribute to assessing an IP's vulnerability to malicious hardware.


• Static Probability: A signal's static probability is the fraction of time the signal is driven to a logic high (1'b1). Ideally, the probabilities of being at logic high and logic low should not be skewed. When the probability of a signal being logic high is significantly greater than the probability of it being logic low (or vice versa), the signal may be a potential trigger of malicious hardware [28].
• Signal Rate: Signal rate refers to the number of transitions from logic high to logic low, or vice versa, at a wire per second [28]. Wires with lower signal rates may be potential candidates for triggering malicious circuits.
• Toggle Rate: The toggle rate of a signal is the frequency at which it switches from its previous value [28]. Attackers generally use signals with a very low toggle rate to implement the trigger condition of a malicious circuit.
• Fan-out: Fan-out refers to the number of gate inputs driven by a gate's output. In a denial-of-service attack, the attacker would likely target a payload that impacts a larger part of the design; therefore, a net with a higher fan-out may be an attractive choice [28].
• Immediate Fan-in: A large number of immediate fan-ins usually indicates a large, low-entropy function with rare activation conditions. Accordingly, the immediate fan-in is an extremely crucial IP-level parameter for the malicious hardware attack [28].
• Lowest Controllability of Inputs: Logic signals that have a lower impact on the design outputs are considered suspicious nets [72]. These nets are difficult to control from the primary inputs and are therefore vulnerable to malicious hardware insertion.
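The first three parameters can be estimated directly from simulation waveforms. The sketch below computes static probability, toggle rate, and signal rate for a few invented nets and flags rare-activity trigger candidates; the net names, waveforms, clock frequency, and screening thresholds are all hypothetical.

```python
CLOCK_HZ = 100e6  # assumed simulation clock frequency

# Invented per-cycle waveforms for three nets of a hypothetical design.
waveforms = {
    "alu_carry":  [0, 1, 1, 0, 1, 0, 1, 1, 0, 1],
    "dbg_unlock": [0, 0, 0, 0, 0, 0, 0, 0, 1, 0],  # rarely high, rarely toggles
    "bus_valid":  [1, 0, 1, 0, 1, 0, 1, 0, 1, 0],
}

def static_probability(w):
    """Fraction of cycles the net is at logic 1."""
    return sum(w) / len(w)

def toggle_rate(w):
    """Fraction of cycle boundaries on which the net changes value."""
    return sum(a != b for a, b in zip(w, w[1:])) / (len(w) - 1)

def signal_rate(w):
    """Transitions per second, given the assumed clock."""
    return toggle_rate(w) * CLOCK_HZ

# Screen for skewed, mostly-quiet nets (thresholds chosen arbitrarily).
suspects = [net for net, w in waveforms.items()
            if static_probability(w) < 0.2 and toggle_rate(w) < 0.25]
print("potential trigger nets:", suspects)
```

Real Trojan screening operates on millions of simulated cycles from switching-activity (e.g., VCD/SAIF) dumps, but the per-net statistics are exactly these.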

3.4.8 Metrics to Assess an IP's Vulnerabilities to Supply Chain Attacks

This section briefly discusses some existing metrics used to assess recycling, remarking, and cloning as part of the supply chain attack discussed in Sect. 3.3.

• Metrics for Cloning: Recent studies have shown that strong PUFs [27] are highly effective in detecting cloned devices. The degree of confidence in detecting cloned chips depends on the quality of the PUF, which makes PUF-quality metrics an excellent measure for determining whether a chip has been cloned. Uniqueness, randomness, and reproducibility are some of the major PUF-quality metrics.
Uniqueness: Uniqueness refers to the distinctiveness of a challenge-response pair (CRP) between two chips. Ideally, a PUF on one chip produces a CRP unique to that chip, which is what makes PUFs so effective for authenticating chips and detecting clones. A common measure of uniqueness among multiple PUFs is the inter-chip Hamming distance (inter-HD) [43]. For n PUFs, with k the bit length of the PUF response and R_i and R_j the responses from PUFs i and j, the inter-HD is calculated as

HD_inter = [2 / (n(n − 1))] Σ_{i=1}^{n−1} Σ_{j=i+1}^{n} [HD(R_i, R_j) / k] × 100%   (3.15)

An inter-HD value of 50% is considered the ideal level of uniqueness, since it represents the maximum difference between two PUF responses.
Randomness: A PUF's randomness indicates the unpredictability of its response. A PUF can be used to identify cloned chips only if an attacker cannot predict its response; the response should exhibit no correlation that would allow it to be modeled. PUFs with good diffusive properties exhibit a high degree of randomness: even small changes in the input challenge result in significant changes in the output response.
Reproducibility: The reproducibility of a PUF is its ability to reproduce the same challenge-response pair in different environments and at different times. A quantitative measure of reproducibility is the intra-PUF Hamming distance [43]:

HD_intra = (1/m) Σ_{y=1}^{m} [HD(R_i, R′_{i,y}) / k] × 100%   (3.16)

Here, m is the number of samples for a PUF, k is the length of the response, and HD(R_i, R′_{i,y}) represents the Hamming distance between the response R_i and the response R′_{i,y} of the y-th sample. An ideal PUF is expected to always produce the same response to a particular challenge under different operating conditions, i.e., 0% intra-PUF HD.
• Metric for Recycling and Remarking: The Counterfeit Defect Coverage (CDC) developed by Guin et al. [24] has been used to evaluate counterfeit detection techniques.
Counterfeit Defect Coverage (CDC): CDC represents the confidence level of detecting a chip as counterfeit after performing a set of tests. It is defined as

CDC = [ Σ_{j=1}^{n} (XR_j × DF_j) / Σ_{j=1}^{m} DF_j ] × 100%   (3.17)

where DF_j is the defect frequency of defect j, indicating how frequently the defect appears in the supply chain, and XR_j is the confidence level of detecting defect j with test method R.
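The PUF and CDC metrics above reduce to short computations. The sketch below evaluates inter-HD (Eq. 3.15), intra-HD (Eq. 3.16), and CDC (Eq. 3.17) on invented response bit strings and invented defect-frequency/test-confidence values.

```python
def hd(a, b):
    """Hamming distance between two equal-length bit strings."""
    return sum(x != y for x, y in zip(a, b))

def inter_hd(responses, k):
    """HD_inter (Eq. 3.15): mean pairwise HD across n PUFs, as a percentage."""
    n = len(responses)
    total = sum(hd(responses[i], responses[j]) / k
                for i in range(n - 1) for j in range(i + 1, n))
    return 2.0 / (n * (n - 1)) * total * 100.0

def intra_hd(reference, samples, k):
    """HD_intra (Eq. 3.16): mean HD between a PUF's reference response and m
    re-evaluations of the same challenge, as a percentage."""
    return sum(hd(reference, s) / k for s in samples) / len(samples) * 100.0

def cdc(xr, df):
    """CDC (Eq. 3.17): sum(XR_j * DF_j) / sum(DF_j) * 100%."""
    return sum(x * d for x, d in zip(xr, df)) / sum(df) * 100.0

chips = ["10110100", "01101001", "11010010"]   # invented responses, 3 PUFs
reruns = ["10110100", "10110110", "10100100"]  # same chip re-measured 3 times
print(f"inter-HD = {inter_hd(chips, 8):.1f}%, "
      f"intra-HD = {intra_hd(chips[0], reruns, 8):.1f}%")
print(f"CDC = {cdc([0.9, 0.6, 0.3], [0.5, 0.3, 0.2]):.1f}%")
```

A healthy PUF population would show inter-HD near 50% and intra-HD near 0%; the toy strings here are only meant to exercise the formulas.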


3.5 Transition from IP to Platform

During the transition from IP to platform, different design parameters are introduced that may impact security metrics at the platform level. When an IP is integrated into a platform, it is surrounded by neighboring IPs and connected to them through glue logic, none of which factors into the development of metrics at the IP level. The introduction of a platform-level testing architecture may also impact security at the platform level. Thus, a thorough analysis of the transition from IP to platform is required to obtain a comprehensive list of such additional platform-level parameters. Table 3.2 summarizes both IP- and platform-level parameters for all five threats: IP piracy, power side-channel analysis, fault injection, malicious hardware, and supply chain attack. The parameters in the rightmost column of Table 3.2 are the platform-level parameters that appear after integration. Security at the IP level is not assessed based on these parameters; from a platform perspective, however, they have a significant impact on security. This section briefly discusses each of these additional parameters, along with their impact on robustness against the five threats at the platform level.

3.5.1 Platform-level Parameters for IP Piracy

To better understand the platform-level parameters associated with IP piracy, consider Fig. 3.4. As illustrated in the figure, SAT resilience at the platform level is affected by various platform-level parameters. For instance, each IP is wrapped with a wrapper, and compression and decompression circuits are added for accelerated testing. Wrappers vary in functionality, offering, for example, bypass capability or access to the internal scan chain. Because of these platform-level parameters, the way IPs can be attacked at the platform level differs significantly from the way they can be attacked as standalone IPs. As discussed in Sect. 3.3, access to the internal scan chain is a critical element of SAT attacks on sequential designs. Figure 3.4 illustrates that scan access to a target IP within the platform differs from that of a standalone IP; for example, the platform wrapper adds an additional boundary scan chain. This section briefly describes these SoC-level parameters affecting SAT attack resiliency against IP piracy.

• Type of Platform-level Testing Architecture: Platform-level SAT resiliency greatly depends on the testing structure of the platform, as the SAT attack leverages this structure to retrieve the correct key. Different testing structures, such as flattened, direct I/O, and concatenated scan chains, impact SAT resiliency differently.


Table 3.2 List of IP- and platform-level design parameters contributing to the security metric for IP protection, fault injection, power side channel, malicious hardware, and supply chain

IP Piracy
  IP-level: Locking key size; Locking key distribution; Output corruption; Locking mechanism; Key error rate (KER); Input error rate (IER)
  Platform-level: Type of platform-level testing arch.; Bypassing capability; Wrapper locking; Accessibility to the internal flip flops; Compression ratio; Scan obfuscation
Power side channel
  IP-level: Mode of operation; Cryptographic key size; Number of clock cycles for one cryptographic operation; Data dependent switching activity; Data independent switching activity
  Platform-level: On-chip voltage regulator; Pipeline depth; CPU scheduling; IP-level parallelism; Shared power distribution network (PDN); Dynamic voltage and frequency scaling (DVFS); Clock jitter; Decoupling capacitance; Communication bandwidth; Clock-gating
Fault injection
  IP-level: Gate type/sizing; Nearby cells
  Platform-level: Power distribution network; Decoupling capacitance
Malicious hardware
  IP-level: Static probability; Signal rate; Toggle rate; Fan-out; Immediate fan-in; Lowest controllability of inputs
  Platform-level: IP wrapper/sandboxing; IP firewall; Bus monitor; Security policy; Probability of the malicious IP's activation during critical operation; Isolation of trusted subsystems
Supply chain
  IP-level: –
  Platform-level: Electronic chip ID (ECID); Physically unclonable function (PUF); Path delay; Aging; Wear-out; Asset management infrastructure

• Bypassing Capability: The bypassing capability enables the attacker to target a specific IP during structural testing, giving the attacker the same attacking capability at the platform level as at the IP level.


Fig. 3.4 An overview of the parameters influencing the security metric for IP piracy at the platform level. Boundary scan cells, decompressors, and compressors are introduced after the platform has been integrated

• Wrapper Locking: It is standard practice for all IPs to be wrapped within a wrapper during platform integration. A locked wrapper increases SAT attack resiliency, as the attacker must breach the wrapper lock before performing SAT attacks at the platform level.
• Accessibility to the Internal Flip-flops: Another key feature of the wrapper is access to the internal scan chain, which also significantly impacts SAT resilience at the platform level. Without this access, attackers must rely on the input-output response of the IP through its boundary scan cells, making their task much more difficult.
• Compression Ratio: To succeed in the SAT attack, it is important to control and observe the internal state of the design through the scan chain. With decompression and compaction, observing the exact responses of internal flip-flops and controlling internal states is difficult. The decompressor prevents the attacker from applying inputs of her choosing to the internal flip-flops, while the compactor prevents the attacker from comparing the original responses, since it compacts the shifted-out output. Moreover, the compaction circuits used in industry act as one-way functions, preventing attackers from inverting the compacted response and making the SAT attack significantly more difficult.
• Scan Obfuscation: This technique is an effective way to block adversaries' access to the scan chain [52, 76]. Scan obfuscation scrambles the scanned-in stimuli and scanned-out responses, and only with the correct key can the original values of the scan chain be retrieved. As a result, it greatly reduces the controllability and observability of internal flip-flops within a design, as well as the feasibility of a SAT attack.


3.5.2 Platform-level Parameters for Power Side-Channel Analysis

Several parameters introduced after platform integration greatly influence platform-level power side-channel analysis. Some of these parameters are discussed below.

• Pipeline Depth: This platform-level parameter indicates the degree of instruction-level parallelism in the CPU. Higher pipeline depths are likely to increase the activity occurring concurrently with the cryptographic IP's operation, increasing the noise in the platform's power trace. A higher noise level may mask the power consumption of cryptographic operations and enhance resilience to power side-channel attacks.
• CPU Scheduling: CPU scheduling determines the CPU's ability to perform parallel operations. A good scheduling algorithm increases CPU utilization, implying a higher probability of simultaneous operations and thus more noise in the measured power.
• Shared Power Distribution Network (PDN): A shared PDN means that different components share the same power rail of the power distribution network with the cryptographic IP. Shared-PDN information allows estimation of the noise that might be inserted into the main rail from which attackers measure power. The more IPs share the same power rail with the target IP, the more noise insertion is likely.
• On-Chip Voltage Regulator: To achieve low noise levels, better transient response, and high power efficiency, modern integrated circuits (ICs) include on-chip voltage regulators. By scrambling the power profile or enhancing its entropy, on-chip voltage regulators can help make the platform more resilient to power analysis attacks [84].
• Dynamic Voltage and Frequency Scaling (DVFS): Random dynamic voltage and frequency scaling approaches add noise to randomize power consumption and reduce data-dependent correlations. These techniques can be employed as a platform-level countermeasure against DPA attacks [14], improving system robustness.
• Clock Jitter: Differential power analysis requires aligning the collected power traces using the clock edges for synchronization [1]. At the platform level, however, the clock generator may introduce clock jitter, potentially making the analysis more challenging.
• Communication Bandwidth: The bandwidth of the communication bus affects parallel activity in other IPs. A higher bandwidth allows more IPs to be active during crypto operations, adding noise to the power trace that is uncorrelated with the crypto operations. As a result, higher communication bus bandwidth increases resistance to power side-channel attacks.


• Clock-Gating: Clock gates are used in modern circuits to reduce the dynamic power consumed by clock signals driving large load capacitances. By shutting off registers during idle periods, clock-gated circuits minimize power dissipation. Clock-gating can also improve a platform's power side-channel resiliency: for example, latch-based random clock-gating [63] obfuscates the power trace via time-shifting. The presence of clock-gating logic can therefore affect the power side-channel vulnerability metric of a platform.
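The masking effect of platform noise on data-dependent leakage can be sketched numerically: the correlation between a Hamming-weight leakage model and the "measured" trace drops as Gaussian noise, standing in for concurrent IP activity, regulators, and jitter, grows. The setup below is an illustrative simplification of a real correlation-based PSC analysis, not a full CPA flow.

```python
import random

random.seed(0)

def hw(x):
    # Hamming weight: the classic data-dependent power leakage model
    return bin(x).count("1")

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

data = [random.randrange(256) for _ in range(5000)]
leak = [hw(d) for d in data]          # ideal, noise-free leakage

rs = []
for sigma in (0.0, 2.0, 8.0):         # increasing platform noise level
    trace = [l + random.gauss(0, sigma) for l in leak]
    rs.append(pearson(leak, trace))

print(["%.2f" % r for r in rs])       # correlation shrinks as noise grows
```

With no noise the correlation is perfect; each increase in the noise level weakens it, which is exactly why an attacker needs more traces (a higher MTD) on a noisy platform.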

3.5.3 Platform-level Parameters for Fault Injection

This section describes the parameters that affect IPs' resistance to fault injection at the platform level.
• Nearby IPs: The fault resistance of a critical IP may be significantly affected by nearby IPs. For example, if a critical IP is surrounded by IPs with very high switching activity, the effect of the decaps is diminished and supply noise increases, which may amplify voltage and clock glitching attacks.
• Interconnect: Fault tolerance is also affected by interconnects. A higher density of metal interconnects makes laser fault injection from the front side more difficult. Similarly, the length and width of interconnects affect path delays and hence susceptibility to timing faults.
• Power Distribution Networks: The power and ground rails form a large mesh inside the chip with many horizontal and vertical loops, making them particularly susceptible to EM injection. EM injection can cause voltage drops and spikes on both power and ground rails. Due to asymmetrical coupling between the two rails, each rail may experience drops/overshoots at a different rate depending on the EM pulse power [18]. During EM injection, the resulting voltage swing (i.e., Vdd − Gnd) propagates toward the pads but is attenuated along the way; this propagating swing results in sampling faults.
• Decaps: The power grid is a primary source of noise, and lower supply voltages shrink the circuit's noise margins. Power-hungry circuits (such as ROs) can further increase these noise variations, resulting in delay violations and voltage drops. Decaps are an effective means of suppressing this noise. As a result, the distance between the critical registers and the power grid, the supply voltage, the width of the power lines, and other factors can significantly affect a critical register's susceptibility to voltage fault injection.
• Error Correction and Redundant Circuitry: Error correction circuitry such as bus parity also contributes to the design's resilience against fault injection. For example, depending on the error correction circuit, parity bits can detect or correct single- or few-bit faults on the data bus. Similarly, redundant circuitry in the design can be used to verify whether faults were injected into the design


[50]. Such measures, however, carry a heavy area and latency overhead for the design as a whole.
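As a minimal sketch of the parity idea (illustrative only; real buses use stronger codes such as SEC-DED ECC), an even-parity bit detects any single-bit fault on a bus word but misses an even number of flips:

```python
def parity(bits):
    # even parity: 0 if the number of 1-bits is even
    return sum(bits) % 2

# Sender appends an even-parity bit to the data word on the bus.
word = [1, 0, 1, 1, 0, 1, 0, 0]
sent = word + [parity(word)]

# A single-bit fault injected in transit flips one wire.
faulty = sent[:]
faulty[3] ^= 1

assert parity(sent) == 0     # clean word passes the check
assert parity(faulty) == 1   # single-bit fault is detected

# Blind spot: an even number of flipped bits escapes a parity check.
double = sent[:]
double[1] ^= 1
double[5] ^= 1
assert parity(double) == 0   # two faults go undetected
```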

3.5.4 Platform-level Parameters for Malicious Hardware

This section discusses the platform-level parameters that affect security against malicious hardware.
• IP Wrapper: IP wrappers are used at the platform level as checkpoints to ensure that an IP does not engage in unauthorized activity. Because it is monitored and controlled by its wrapper, an IP, even one containing malicious hardware, cannot easily commit malicious acts, which limits platform-level maliciousness to some extent.
• IP Firewall: The IP firewall is similar to the IP wrapper in that it monitors and controls the activity of an IP and ensures that it behaves as expected. A firewall with a well-defined policy can prevent an IP from performing malicious actions, potentially stopping a malicious hardware attack on the platform.
• Bus Monitor: Bus monitors are components that observe the activity of the bus connecting different IPs. Monitoring bus activity can dramatically reduce the chance of malicious data snooping or of message flooding aimed at causing a denial-of-service attack.
• Bus Protocol: Each component on a platform should adhere to the bus protocol to ensure the safe, correct, and secure operation of the bus. A well-designed bus protocol is critical for reducing the risk of malicious activity during communication over the bus.
• Security Policy: Policy enforcers on a platform enforce security policies to maintain secure functionality and prevent potentially unauthorized activity. A comprehensive list of security policies can significantly reduce malicious activity within the platform; however, developing such a list is a non-trivial task.
• Probability of the Malicious IP's Activation During Critical Operations: Not all IPs on a platform are intended to interact with one another; each IP has an interaction probability determined by its design requirements. An IP holding a security asset interacting with a malicious IP is a larger concern than one interacting with a benign IP. Interaction probability is therefore an essential parameter when assessing platform-level security.
• Isolation of Trusted and Untrusted Subsystems: Isolating untrusted IPs from security-critical IPs significantly reduces the likelihood of malicious activity.
• Memory Management Unit (MMU) Policy: Memory is an important component of any platform. The MMU policy ensures that no unauthorized


application can access the protected memory region, thereby reducing the possibility of malicious activity in the secure memory.
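The wrapper/firewall checks above amount to validating each bus transaction against an access-control policy. A minimal sketch follows, in which the initiator names and address windows are invented for illustration:

```python
# Toy IP-firewall policy: each initiator may access only whitelisted
# address windows. The names and ranges below are illustrative.
POLICY = {
    "cpu":        [(0x0000_0000, 0x3FFF_FFFF), (0x8000_0000, 0x8000_FFFF)],
    "dma_engine": [(0x2000_0000, 0x2FFF_FFFF)],
}

def allowed(initiator, addr):
    # a transaction passes only if the initiator has a window covering addr
    return any(lo <= addr <= hi for lo, hi in POLICY.get(initiator, []))

assert allowed("cpu", 0x8000_0010)             # CPU may touch the secure window
assert not allowed("dma_engine", 0x8000_0010)  # DMA to the secure region blocked
assert not allowed("rogue_ip", 0x0)            # unknown initiators are denied
```

A real firewall would additionally check the access type (read/write), transaction size, and the platform's current security state, but the containment principle is the same.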

3.5.5 Supply Chain

Several parameters introduced at the chip level have a significant impact on supply chain resilience at the platform level. Some of these parameters are discussed in the remainder of this section.
• Electronic Chip ID (ECID): The ECID is a unique identifier used throughout the supply chain to identify the chip; tracking ECIDs enables the detection of remarked chips.
• Physically Unclonable Function (PUF): A physically unclonable function (PUF) [27] exploits the inherent process variation introduced in every chip during manufacturing. This variation is unpredictable and uncontrollable, yielding a unique per-chip fingerprint used for chip identification and authentication [10, 75]. PUFs are very efficient at detecting the cloning of a platform chip.
• Path Delay: In [87], the authors use path delay as a fingerprint to detect recycled chips. Because a recycled chip has been in use for some time, its performance has typically degraded. Due to negative/positive bias temperature instability (NBTI/PBTI) and hot carrier injection (HCI), the path delay of a used chip becomes measurably larger, which can be used to detect whether the chip is recycled. The higher the path delay, the higher the probability that the chip is recycled.
• Aging and Wear-out: Bias temperature instability (BTI), hot carrier injection (HCI), electromigration (EM), and time-dependent dielectric breakdown (TDDB) are the key mechanisms behind the aging degradation and wear-out of a platform chip [17, 24, 25, 73, 86]. An aged, worn-out chip usually exhibits significantly degraded performance compared to a new chip.
• Asset Management Infrastructure: Asset management infrastructures (AMI) are cloud-based infrastructures that allow platforms to be tested after they have been fabricated.
In addition to setting up secure communication with the chip on the manufacturing floor, the AMI is responsible for unlocking and authenticating the chip, preventing overproduction.

As the discussion above shows, all of these parameters contribute to platform security to some extent. When quantifying security at the platform level, these parameters are either helpful or detrimental to the base (IP-level) security. Section 3.7 illustrates how these parameters are considered when measuring and estimating security for two different threats: IP piracy and power side channels.
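The path-delay screening of [87] mentioned above can be sketched as a simple threshold test. The delay values, the 3-sigma threshold, and the helper name `looks_recycled` are all illustrative assumptions; real detection uses on-chip sensors and more careful statistics.

```python
import statistics

# Golden (fresh-chip) path-delay distribution, in nanoseconds (illustrative).
golden_delays_ns = [1.02, 0.98, 1.01, 0.99, 1.00, 1.03, 0.97, 1.00]
mu = statistics.mean(golden_delays_ns)
sigma = statistics.stdev(golden_delays_ns)

def looks_recycled(measured_ns, k=3.0):
    # NBTI/PBTI and HCI aging slow paths down, so only an unusually
    # *large* delay is flagged; the k-sigma threshold is a design choice.
    return measured_ns > mu + k * sigma

assert not looks_recycled(1.01)  # within the fresh-chip spread
assert looks_recycled(1.15)      # aged part: delay well above threshold
```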


3.6 Security Measurement and Estimation

Previous sections discussed IP-level security, the transition from IP to platform, and how the parameters evolve along the way. This section introduces two terms, security measurement and security estimation, and discusses both processes from the platform perspective.
• Security Measurement: Security measurement refers to the exhaustive security verification of a design through simulation, emulation, formal verification, etc. In other words, security measurement means performing actual attacks and analyzing the design's resiliency against a particular threat. For example, measuring the PSC robustness of a platform requires performing a power side-channel analysis attack on the platform by collecting and analyzing its power traces. If MTD is the security measure for power side-channel analysis, then the number of power traces required to reveal the cryptographic key indicates the measured security of the platform. Similarly, measuring security against IP piracy requires performing a SAT attack on the platform and measuring the time to reveal the locking key. To measure the maliciousness of a platform, exhaustive security property verification is a good choice for measuring confidentiality violations through information leakage; the same approach can measure data integrity violations. PUFs, ECIDs, and age-detecting sensors can be used to measure the probability that a chip is recycled, remarked, or cloned.
• Security Estimation: Security estimation, on the other hand, does not depend on simulation or emulation. Instead, it leverages models of the design and of the different parameters to estimate security, making estimation much faster than measurement.
Security estimation is an efficient technique that can streamline the process of building a secure design. With its help, the designer can quickly estimate the security of the entire chip early in the design phase and then make the modifications necessary to meet the PPA (Power, Performance, Area) targets without compromising security. At the core of estimation, a model of the impact of the SoC-level parameters is developed first; this model can then predict the security of an unseen SoC without requiring the actual parameter values. Unlike measurement, estimation remains practical for large designs; ideally, it should not depend on design size at all. It lets the designer choose among different components and SoC configurations to find the one that provides the best security. However, estimated security lags behind measured security in accuracy.


3.7 Platform-level Security Measurement and Estimation Approaches

This section presents the platform-level security estimation and measurement approaches for IP piracy and power side-channel analysis attacks and explains each step of estimation and measurement with examples.

3.7.1 Platform-level Security Measurement and Estimation Approaches for IP Piracy

3.7.1.1 Platform-Level SAT Resiliency Measurement Flow

This section provides the step-by-step process for assessing an IP's platform-level SAT resilience. As discussed in Sect. 3.5, the transition from IP to platform introduces additional parameters that significantly impact SAT resiliency. Platform-level SAT resiliency can therefore be measured by first replicating the platform-level environment, i.e., adding the platform-level parameters to the target IP. The first step in performing a SAT attack on a design is converting the gate-level netlist to conjunctive normal form (CNF); this conversion is sometimes referred to as SAT modeling. To determine the platform-level SAT resiliency of a design, SAT modeling is performed first, including the additional platform-level parameters. Second, the SAT attack is conducted to measure the time the SAT attacking tool takes to retrieve the platform-level key. To keep the demonstration simple, only one platform-level parameter, decompression/compaction, is considered: the objective is to determine the SAT attack time for breaking the lock of an IP in conjunction with its compression/decompression circuits. The measurement involves two steps, platform-level SAT modeling of the design and executing the SAT attack, which are described in the subsections below.

SAT Modeling for SoC-level Attack
Existing SAT attacking tools cannot perform attacks directly on HDL code; the design must be converted into conjunctive normal form (CNF) before being fed to the SAT attacking tool. The CNF conversion flow for both combinational and sequential designs is shown in Fig. 3.5.

Modeling of Combinational Designs
The netlist of a combinational design can readily be converted to CNF.
There are open-source synthesis tools such as abc [2] that can convert the HDL description of a combinational circuit into CNF.

Modeling of Sequential Designs
This section describes the modeling of sequential designs for performing SAT attacks at both the IP and platform level.
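The gate-to-CNF step that tools like abc automate can be illustrated with a toy Tseitin-style encoding (a hand-rolled sketch, not abc's actual flow). Each gate becomes a few clauses over integer variables, using the DIMACS convention that a negative literal denotes negation; the same encoding applies to the framed sequential designs discussed next, since framing reduces them to combinational logic.

```python
# Tseitin-style clauses for two gate types. Variables are integers;
# a negative literal means the negated variable (DIMACS convention).

def and_gate(out, a, b):
    # clauses encoding: out <-> (a AND b)
    return [[-a, -b, out], [a, -out], [b, -out]]

def or_gate(out, a, b):
    # clauses encoding: out <-> (a OR b)
    return [[a, b, -out], [-a, out], [-b, out]]

# Toy netlist: y = (x1 AND x2) OR x3, with x1=1, x2=2, x3=3, t=4, y=5
cnf = and_gate(4, 1, 2) + or_gate(5, 4, 3)

def satisfied(cnf, assign):
    # assign maps variable -> bool; a clause holds if any literal is true
    return all(any(assign[abs(lit)] == (lit > 0) for lit in clause)
               for clause in cnf)

# One consistent input/output assignment: x1=1, x2=0, x3=1 gives y=1.
assign = {1: True, 2: False, 3: True, 4: False, 5: True}
assert satisfied(cnf, assign)
```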


Fig. 3.5 Different steps for the modeling of the HDL design for SAT attack. Modeling sequential design requires an additional step, framing, as compared to modeling combinational design

• IP-level SAT Modeling: Converting a sequential design into CNF requires an additional step named "framing." Existing SAT attacking tools cannot attack sequential designs directly; to address this limitation, the sequential design goes through the framing process, which converts it into a one-cycle equivalent, i.e., an equivalent combinational design. For example, consider a SAT attack on an AES design that takes 11 clock cycles to complete one cryptographic operation. Framing breaks the AES implementation into 11 one-cycle pieces; stitching the 11 frames together recovers the exact behavior of the sequential design. To perform the SAT attack for measuring SAT resiliency, however, one frame is sufficient: the SAT attack proceeds by comparing the outputs of the locked and unlocked designs and pruning wrong keys from the search space, and one frame is adequate for comparing outputs. Both the locked and unlocked designs must be framed. One question that arises here is how one can frame the physical chip used as the oracle. Scan access to each flip-flop does the same job as framing: shifting values in through the scan chain, running the design for one clock cycle in functional mode, and shifting out all flip-flop values gives the same output as one frame would provide. An example of framing a sequential design is shown in Fig. 3.6. A sequential design combines sequential elements (flip-flops) with combinational logic. During framing, all flip-flops are removed from the design, and the corresponding Q and D pins are added as primary inputs and outputs, respectively, while their connections to the combinational logic are maintained. This frame is then converted into CNF.
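The framing transformation itself can be sketched over a netlist represented as a dictionary; the representation and names below are illustrative.

```python
# Sketch of the framing step: flip-flops are removed, and their Q/D pins
# become extra primary inputs/outputs of a one-cycle combinational frame.

def frame(design):
    framed = {
        "inputs":  list(design["inputs"]),
        "outputs": list(design["outputs"]),
        "gates":   list(design["gates"]),   # combinational part kept as-is
    }
    for ff in design["flip_flops"]:
        framed["inputs"].append(f"{ff}/Q")   # FF output now drives the logic
        framed["outputs"].append(f"{ff}/D")  # FF input is observed directly
    return framed

seq = {
    "inputs":  ["PI0", "PI1"],
    "outputs": ["PO0"],
    "gates":   ["g0", "g1"],
    "flip_flops": ["FF0", "FF1", "FF2"],
}
one_cycle = frame(seq)
assert one_cycle["inputs"]  == ["PI0", "PI1", "FF0/Q", "FF1/Q", "FF2/Q"]
assert one_cycle["outputs"] == ["PO0", "FF0/D", "FF1/D", "FF2/D"]
```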
• Platform-level SAT Modeling: Platform-level parameters must be taken into account when modeling the design for a platform-level SAT attack. In this example measurement flow, the compression ratio (CR) is considered the platform-level parameter, since it is the most important factor in determining platform-level SAT resilience. Figure 3.7 shows a typical sequential design with decompressor and compactor circuits. Consider the design in Fig. 3.7, which has three scan chains with three flip-flops in each


Fig. 3.6 High-level overview of converting the sequential design to a one-cycle frame. The frame contains no sequential elements such as flip-flops; instead, the outputs and inputs of the flip-flops become the inputs and outputs of the frame, respectively

Fig. 3.7 Sequential design with the decompressor and compactor. The decompressor decompresses the test input and applies it to the internal flip-flops, while the compactor compacts the test output

chain. Applying a desired input pattern requires going through the decompressor; after three clock cycles, the pattern is shifted into all nine flip-flops. Similarly, values captured from the flip-flops pass through the compactor circuits, and it takes three clock cycles to shift out all the flip-flops' values. Note that the framed design is a one-cycle design. So to mimic the shifting


in and shifting out of the data in three cycles, three copies of the decompressor and three copies of the compactor should be instantiated instead of one. For example, consider the SAT modeling with the compactor circuit. While shifting out the values, FF2, FF5, and FF8 are applied to the compactor in the first clock cycle; similarly, the group of FF1, FF4, and FF7 and the group of FF0, FF3, and FF6 are applied to the compactor in the second and third clock cycles, respectively. This reflects resource sharing: one compactor compacts three groups of flip-flops at different times. To shift out the values of all three groups in one clock cycle, one possible solution is to connect a separate compactor to each group; with three compactors connected to the three groups, all the values can be shifted out in one cycle. This technique is leveraged when modeling the design with the decompressor and compactor circuits. Figure 3.8 shows the approach to modeling a framed design with the compactor circuit. In a framed design, all the flip-flops' D and Q pins are introduced as primary outputs and primary inputs, respectively, which makes connecting the compactor even easier. Three copies of the same compactor are instantiated and each is assigned to one group of flip-flops: one compactor is connected to the group FF2, FF5, FF8, and the two other copies are connected to the groups FF1, FF4, FF7 and FF0, FF3, FF6. This connection can be made in either Verilog or bench format. In this example, the design without the decompressor and compactor is first converted into bench format (CNF); then the bench-format decompressor and compactor are connected to the bench-format design.
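The compactor-replication idea can be sketched with a toy XOR-tree compactor (an assumption for illustration; the text notes that industrial compactors are effectively one-way functions): one copy is instantiated per shift-out cycle, so all three flip-flop groups are compacted within a single frame cycle.

```python
def xor_compact(bits):
    # toy 3:1 XOR-tree compactor output for one group of scan cells
    out = 0
    for b in bits:
        out ^= b
    return out

# Illustrative flip-flop states captured in the one-cycle frame.
ff = {"FF0": 1, "FF1": 0, "FF2": 1,
      "FF3": 1, "FF4": 1, "FF5": 0,
      "FF6": 0, "FF7": 0, "FF8": 1}

# Groups shifted out on cycles 1..3 in the original design (Fig. 3.7).
groups = [("FF2", "FF5", "FF8"), ("FF1", "FF4", "FF7"), ("FF0", "FF3", "FF6")]

# Three instantiated compactor copies, one per group, produce all
# compacted outputs in one cycle instead of three.
compacted = [xor_compact([ff[name] for name in g]) for g in groups]
assert compacted == [0, 1, 0]
```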
Performing SAT Attack
The next step of the platform-level SAT resiliency measurement is performing the actual attack. For this purpose, an open-source SAT tool named Pramod [3] is used. After modeling both the locked and oracle designs with the compression and decompression circuits as discussed above, both models are fed to the SAT attacking tool to measure the number of iterations and the CPU time needed to extract the key.
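The oracle-guided pruning loop at the heart of the SAT attack can be sketched on a toy locked circuit. Here a brute-force search over a 2-bit keyspace stands in for the SAT solver used by Pramod [3], and the locked function itself is invented for illustration:

```python
# Toy oracle-guided key pruning: find an input on which surviving key
# candidates disagree, query the unlocked oracle, and discard keys that
# mismatch. Real attacks find such inputs with a SAT solver.

def locked(x, key):
    # hypothetical 4-bit locked function with a 2-bit key
    return ((x ^ key) * 3 + 1) & 0xF

CORRECT_KEY = 0b10
oracle = lambda x: locked(x, CORRECT_KEY)   # the unlocked chip

candidates = set(range(4))                  # all 2-bit keys survive initially
queries = 0
while len(candidates) > 1:
    for x in range(16):
        outs = {locked(x, k) for k in candidates}
        if len(outs) > 1:                   # x distinguishes surviving keys
            good = oracle(x)
            queries += 1
            candidates = {k for k in candidates if locked(x, k) == good}
            break
    else:
        break   # remaining keys are functionally equivalent

assert CORRECT_KEY in candidates
print(f"{len(candidates)} key(s) left after {queries} oracle query/queries")
```

Platform-level parameters such as the compactor lengthen this loop: when outputs are compacted, fewer inputs distinguish key candidates, so more iterations (and more solver time) are needed per pruned key.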

Fig. 3.8 High-level overview of framing a sequential design in the presence of the decompressor and compactor. To convert the multi-cycle design into a one-cycle design, multiple copies of the decompressor and compactor are instantiated and connected to the main frame of the design

3.7.1.2 Platform-level SAT Resiliency Estimation Flow

A data-driven estimation methodology is used to estimate the platform-level SAT attack time, given the IP-level SAT attack time and the values of the SoC-level parameters (in this case, the compression ratio (CR)). Toward this goal, a large dataset is built by measuring the platform-level SAT attack time for different IPs with different combinations of key length and compression ratio. Analyzing the results reveals an interesting relationship between compression ratio and SAT attack time: nearly all IPs' platform-level SAT attack times follow a general trend as the compression ratio increases. This behavior is leveraged to build the estimation model, which is later used to predict the platform-level SAT attack time for an unknown IP.

Data Collection and Modeling
As the first step, SAT attacks are performed on ten benchmarks with different compression ratio values. The benchmarks are divided into three categories by gate count: benchmarks with fewer than 1000 gates are defined as small, those with 1000-2500 gates as medium, and those with more than 2500 gates as large. Each benchmark is locked with a random locking mechanism at four different key lengths, and each version of the design is modeled with decompressors and compactors at up to five compression ratio values (1, 2, 4, 8, 16). With all these combinations, a total of 146 experiments were performed; Table 3.3 summarizes the setup. From each experiment, the following features and their values are extracted: key length, number of gates, number of primary inputs, number of primary outputs, number of scan-equivalent I/O (FF IO), compression ratio, CPU

Table 3.3 Summary of benchmarks used in the experiment. The experimental scenarios vary key length and compression ratio

Category  Benchmark  # of gates  Compression ratios  # of key lengths  # of experiments
Small     c499          212      1, 2, 4, 8, 16             4                 20
          c880          404      1, 2, 4, 8                 4                 16
          c1355         574      1, 2, 4, 8, 16             3                 15
          c1908         925      1, 2, 4, 8                 4                 16
Medium    k2           1908      1, 2, 4, 8, 16             3                 15
          c3540        1754      1, 2, 4, 8                 3                 12
          c5315        2427      1, 2, 4, 8, 16             3                 15
Large     seq          3697      1, 2, 4, 8, 16             3                 15
          c7552        3695      1, 2, 4, 8, 16             2                 10
          apex4        5628      1, 2, 4                    4                 12
Total                                                                        146

Fig. 3.9 Dominant pattern of SAT attacking time vs. compression ratio among the 146 experimental data points, shown for the benchmarks k2_enc10, c5315_enc25, c499_enc50, and c7552_enc10. The common trend is an increase in SAT attacking time with increasing compression ratio

time, number of total inputs, number of total outputs, and number of iterations. Analyzing the results, a dominant pattern emerges that characterizes most of the designs. Figure 3.9 shows this dominant pattern for four benchmarks: c499_enc50, k2_enc10, c5315_enc25, and c7552_enc10. The figure plots how the SAT attack time (Y-axis) changes as the compression ratio (X-axis) increases. The SAT attack time at compression ratio 1 is equivalent to the IP-level SAT attack time, while the other compression ratio values mimic platform-level scenarios with the corresponding decompressor/compactor. In the plot for the k2_enc10 benchmark, for example, the SAT attack time at compression ratio 1 is around 1 s: had k2_enc10 been attacked as a standalone IP, retrieving the key would have taken about 1 s. If this IP is instead placed in a platform implementing decompressor and compactor circuits with compression ratio 16, the SAT attack time increases to 15 s (the last point in the plot). The IP-level SAT attack time thus does not carry over to the platform level; here, a higher compression ratio increases SAT resiliency at the platform level. The impact of other platform-level parameters should be investigated similarly, but such a study is out of scope here. The next step is building a model from the collected data to estimate the SAT resiliency of an unknown IP at the platform level. The dominant patterns shown in Fig. 3.9 appear to follow the same nonlinearity. However, when they are plotted on the

Fig. 3.10 Dominant patterns of SAT attacking time vs. compression ratio for the four benchmarks (c5315_enc25, c7552_enc10, k2_enc10, c499_enc50) on the same scale. SAT attacking time increases with compression ratio for all four designs; however, the rate of increase varies from design to design

same scale, their rising rates differ. Figure 3.10 plots the dominant patterns on the same scale; from this figure, it is obvious that a single model cannot estimate/predict the SAT attack time for an unknown IP at the platform level. To address this, 20 of the total 146 experiments were carefully selected for the security estimation modeling, with a focus on achieving as much diversity as possible. These 20 experiments form the training dataset of the estimation model, based on which the platform-level SAT resiliency of unknown IPs is estimated. The estimation process comprises two steps: model selection and estimation. As discussed above, the estimation model is built from 20 sub-models. When estimating the SAT resiliency of an unknown design, the first task is to identify which of the 20 sub-models best suits it. Toward this goal, IP-level metadata of the locked IP are leveraged. The metadata consist of the following items:

• Key length
• Number of gates
• Number of primary inputs
• Number of primary outputs
• Number of flip-flops

A cosine similarity metric is used to find the best match between the metadata of the unknown design and the sub-models. Cosine similarity measures the angular distance between two vectors; in this case, the metadata


Table 3.4 Estimated and measured SAT attacking time for different values of compression ratio

                                                     Platform-level SAT time (s)
IP                  Compression ratio  IP-level SAT time (s)  Estimated  Measured
b18_enc_128_cr4             4               1472.9            6489.619   7806.53
b18_enc_128_cr8             8               1472.9            3170.813   2414.11
b18_enc_128_cr16           16               1472.9            1624.404   1500.63
c2670_enc_128_cr4           4                242.734          5056.956   6189.06
c2670_enc_128_cr8           8                242.734          4197.503   4002.7
c2670_enc_128_cr16         16                242.734           134.111      5.58977

is treated as the vector of these items. The cosine similarity metric is calculated between the metadata of the unknown design and each of the 20 sub-models, and the sub-model with the highest similarity is selected as the estimation model.
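The sub-model selection step can be sketched directly; the metadata values for the two sub-models below are illustrative placeholders, not taken from the 20 actual sub-models.

```python
import math

def cosine(u, v):
    # cosine similarity: 1.0 means identical direction, 0.0 orthogonal
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Metadata vector: [key length, gates, primary inputs, primary outputs, FFs]
# Illustrative stand-ins for two of the 20 trained sub-models.
sub_models = {
    "model_small": [32,   900, 41, 32,  60],
    "model_large": [128, 5600, 20, 18, 450],
}
unknown = [128, 5200, 24, 19, 400]

best = max(sub_models, key=lambda name: cosine(unknown, sub_models[name]))
assert best == "model_large"
```

As the surrounding text notes, this simple angular match can be fooled when one feature (here, gate count) dominates the vector magnitude, which is one motivation for the ML-based matching discussed in Sect. 3.8.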

3.7.2 Result and Analysis

To evaluate platform-level SAT resilience estimation and measurement, two benchmarks are used with a key length of 128 bits and compression ratios of 4, 8, and 16. Table 3.4 reports both the estimated and the measured SAT attack times. The measured values are obtained by performing SAT attacks after properly modeling the design with the decompressor and compactor circuits; the estimated values are derived from the data-driven model. For both benchmarks, a higher compression ratio generally results in a longer SAT attack time to retrieve the key. It is important to note that IP security does not remain the same at the platform level: the added platform-level parameters must be taken into consideration during security assessment.

3.8 Challenges

This section presents several challenges in estimating and measuring security at the platform level.

3.8.1 Challenges in Platform-Level Security Estimation and Measurement

The major challenges in estimating and measuring platform-level security can be itemized as follows:


1. Identifying the representative parameters contributing to platform-level security.
2. Modeling the impact of the platform-level parameters for platform-level security estimation.
3. Introducing the platform-level parameters into the design when performing the actual attack during measurement.

During the transition from IP to platform, several new parameters are introduced at different stages and abstraction levels of the design flow. However, not all of these parameters impact the system's security, and identifying those that matter for each threat is challenging, as there is no systematic approach for doing so. Modeling the impact of the different platform-level parameters on security is also non-trivial. Challenges for the individual threats are listed below.

IP Piracy
• An RT-level platform provides no information about the test structure introduced in later design stages, yet a platform's robustness against IP piracy through SAT attack greatly depends on that test structure. Estimating security based on parameters that do not yet exist is therefore quite challenging.
• As discussed in Sect. 3.7, the proposed estimation model for IP piracy is data-driven: data from a set of designs and platform-level parameter values (in this case, compression ratio) are used to predict/estimate the security of an unknown design. Selecting a suitable set of such designs and parameter values is difficult.
• When estimating an unknown platform's security against IP piracy, the first step is matching that design's metadata against all the designs used in modeling. A poor choice of metadata can steer the estimation method toward a poor security estimation outcome, and developing a representative set of metadata is a very challenging task.
Also, even with well-developed metadata, the simple cosine similarity-based matching algorithm does not always select the most suitable model, which motivates future machine learning (ML) based approaches.

PSC Analysis

• Parallel activities on the system bus and active IPs that share the power rail with the target IP are the most critical elements in platform-level PSC vulnerability assessment. Without extensive effort from verification engineers, identifying and controlling IP activities in complex SoC designs becomes extremely challenging. Moreover, HW/SW co-verification of a platform is required to compute the simulated power traces needed for PSC vulnerability estimation and measurement.
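The metadata-matching step can be illustrated concretely. The following is a minimal sketch, not the chapter's actual tool flow; the feature-vector layout and design names are hypothetical:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two metadata feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def best_matching_model(unknown_meta, model_library):
    """Pick the pre-characterized design whose metadata vector is
    closest (by cosine similarity) to the unknown design's metadata."""
    return max(model_library,
               key=lambda name: cosine_similarity(unknown_meta,
                                                  model_library[name]))

# Hypothetical metadata: [gate count (k), flip-flop count (k),
# scan-chain count, test compression ratio]
library = {
    "design_A": [120.0, 15.0, 8.0, 4.0],
    "design_B": [40.0, 5.0, 2.0, 2.0],
}
unknown = [110.0, 14.0, 8.0, 4.0]
print(best_matching_model(unknown, library))  # prints "design_A"
```

Note that roughly proportional feature vectors score near 1.0 even for designs of very different scale, which illustrates why a plain cosine match may fail to select the most suitable model.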

3.8 Challenges


• Because RT-level designs do not include any physical information about the blocks, the simulated power model may fail to capture factors such as the shared power distribution network, voltage regulators, clock jitter, etc. Analysis at a lower abstraction level (e.g., gate level, layout level) may improve the accuracy of the power models. However, each abstraction level increases the analysis time roughly tenfold, making it nearly impossible to perform gate-level or layout-level analysis for a full-blown SoC design.
• The data-driven model proposed for fast estimation of PSC vulnerability may not predict the switching activity of the IPs accurately. It relies heavily on the IP properties chosen to match the switching activities of the existing IP repository. ML-based algorithms could be useful for mapping metadata to estimate IP switching activity; however, the training phase would have to incorporate power models for a large number of IPs with varying workloads, temporal overlap, etc. This is challenging due to the sheer volume of training data and the time required to train the model.

Fault Injection

• Modeling the physical parameters involved in fault injection methods, such as laser, clock, and voltage, is the most challenging part of assessing the fault injection susceptibility of a pre-silicon design/SoC.
• Considering the large size of SoCs, the number of possible fault locations grows exponentially in the design/SoC. Therefore, exhaustively simulating all fault nodes and analyzing their impact on the design's security is challenging.

Malicious Hardware

• A major platform-level parameter contributing to platform-level security against malicious hardware is the set of security policies. It is very unlikely that an exhaustive set of security policies will be available at the earlier design stages, which significantly hinders estimation.
Even with a set of security policies, quantifying their contribution against platform-level malicious operation is a daunting task.
• The interaction probability between IPs significantly impacts the security of the entire platform. An IP holding a security asset that has a high interaction probability with a malicious IP poses a greater risk of malicious operation than one that interacts only with non-malicious IPs. However, obtaining this parameter at an early platform stage is challenging.

Supply Chain

• As discussed in the previous section, the widely used counterfeit IC detection mechanisms rely on the security primitive PUF. Because a PUF is implemented using the process variation of the chip, it is practically impossible to measure security against cloning attacks at the RT level, where there is no notion of process variation.
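The fault-simulation challenge above can be seen even in a toy setting: every candidate fault site must be re-simulated against input patterns, so the campaign scales with (fault sites × patterns) and quickly explodes for a full SoC. The following is an illustrative sketch only, assuming a hypothetical two-net netlist and single bit-flip faults:

```python
from itertools import product

def circuit(a, b, c, faults=frozenset()):
    """Toy netlist: y = (a AND b) OR c. A fault on a named net
    inverts that net's value (bit-flip fault model)."""
    def f(net, val):
        return val ^ (net in faults)  # apply fault if net is targeted
    n1 = f("n1", a & b)               # internal net
    y = f("y", n1 | c)                # primary output
    return y

nets = ["n1", "y"]                    # candidate fault sites
vulnerable = set()
for net in nets:                      # one single-fault campaign per site
    for a, b, c in product([0, 1], repeat=3):
        # Compare fault-free vs. faulty response for every input pattern
        if circuit(a, b, c) != circuit(a, b, c, frozenset({net})):
            vulnerable.add(net)
            break
print(sorted(vulnerable))             # prints ['n1', 'y']
```

Even this 2-site, 8-pattern toy needs 16 faulty simulations; for millions of nets and realistic workloads, exhaustive enumeration is clearly infeasible, which is exactly the challenge noted above.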



3.8.2 Challenges in Achieving Accurate Estimation

Security estimation should provide a highly accurate estimate of platform-level security, closely resembling the security measured on a full-blown SoC in silicon. The challenge is that the estimation is performed at the earlier stages (e.g., RTL, gate level), yet it is expected to estimate the security of the SoC in silicon. Consider estimating platform-level security against PSC attacks at the RT level. It is quite challenging to accurately model the impact of platform-level parameters such as the PDN, decap, and DVFS on security at the RT level, because these parameters only become available later in the design flow. The only information about the power traces obtainable at this stage is the switching activity, which potentially leads to a poor estimate. Thus, producing an RT-level estimate that is closely representative of the platform-level security in silicon is very challenging, mainly because most platform-level information is unavailable at the RTL or gate level.
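The switching-activity-only view of power available at RTL can be made concrete with the common toggle-count proxy, where each sample of the estimated trace is the Hamming distance between consecutive register states. A minimal sketch with hypothetical register values; a real flow would derive the states from RTL simulation:

```python
def hamming_distance(x, y):
    """Number of bit positions in which x and y differ."""
    return bin(x ^ y).count("1")

def power_trace(register_states):
    """Estimate a relative power trace from RTL switching activity:
    each sample is the Hamming distance between consecutive register
    states (a toggle-count power proxy, one sample per clock cycle)."""
    return [hamming_distance(a, b)
            for a, b in zip(register_states, register_states[1:])]

# Hypothetical 8-bit register values over five clock cycles
states = [0x00, 0xFF, 0x0F, 0x0F, 0x1F]
print(power_trace(states))  # prints [8, 4, 0, 1]
```

Such a proxy captures toggling but, as discussed above, says nothing about the PDN, decap, or DVFS effects that shape the real silicon trace, which is why RT-level PSC estimates can diverge from measured results.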

3.9 Summary

This chapter discusses how the transition from IP to SoC affects overall SoC security. It shows how additional parameters are introduced during SoC integration and discusses their possible impact on developing SoC-level security metrics. A step-by-step procedure for measuring and estimating security against IP piracy is shown as a case study. The major challenges that must be addressed to obtain the full benefit of this approach are also discussed. This discussion opens up new research directions that require more attention from the hardware security community.


Chapter 4

CAD for Information Leakage Assessment

4.1 Introduction

After decades of research on hardware and software security, most of today's computing systems remain vulnerable to basic exploit techniques. One of the main reasons is that traditional design flows do not treat hardware security as being as vital as marketed commercial features like performance, area, and power optimization. New and novel attacks have emerged as a result of exploitable vulnerabilities. With research advancements, computing systems are becoming more and more complex, with ever more features and functionality on a single chip. This growth in system complexity leaves a wide attack surface for adversaries to exploit. Today's systems are often developed by integrating third-party intellectual property (3PIP), which could contain intentional or unintentional malicious code able to compromise the whole system. The system's complexity makes it impractical to eliminate all such design flaws during the verification phase of development.

Computing systems are affected by various attacks such as buffer overflows, return-oriented programming, format string vulnerabilities, etc. These vulnerabilities primarily aim to grant unprivileged access to malicious users, leak information, corrupt or modify data, or cause denial of service. In [21], a buffer-overrun vulnerability was exploited on a Microsoft SQL server; it resulted in the loss of hundreds of millions of dollars by shutting down tens of thousands of machines within a few minutes. Many tools [4, 9] have been developed to detect a few types of security attacks. However, all of them target specific vulnerabilities and fail to detect unknown software vulnerabilities. Moreover, these tools provide little information about the attacks themselves, such as how to replicate an attack or its input pattern. One way to gather information about attacks is through system log events.
Many systems continuously monitor and log event interactions between various processes communicating and exchanging

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 F. Farahmandi et al., CAD for Hardware Security, https://doi.org/10.1007/978-3-031-26896-0_4




information. However, practical systems entail a trade-off between the granularity of log information and run-time performance overhead.

Similar to software, attacks on the underlying hardware can aim to leak information, inject faults, or violate access control, and can even compromise the whole system. Two policies are required to uphold the system's security: non-interference [15] and confidentiality. The non-interference policy states that no untrustworthy entity should be able to influence a trusted entity. The confidentiality policy states that information from a trusted entity should never reach, or leak to, untrustworthy entities. Recent studies in hardware security have given insight into restricting the flow of information among hardware entities. Physical isolation of resources, access control, and information flow tracking (IFT) are ways to enforce these policies. Hardware IFT techniques have demonstrated their effectiveness in identifying and mitigating security vulnerabilities such as timing-channel information leakage in caches and cryptographic cores, information leakage through hardware Trojans, and unintended interference between IP components of different privilege levels [6, 18, 19, 33, 34, 39]. Recent works [8] show that IFT can also be a promising methodology for detecting many software security attacks that compromise and corrupt control-flow data. In addition to detecting an attack, IFT can trace back and generate the input data pattern that exploits the vulnerability. These features were effective in building a network defense system [22]. IFT can also be used to detect information leakage, as discussed in [22, 23, 36]. In [28], language-based IFT methods are discussed, whereas [11, 17] focus on OS-level security policies with IFT. IFT in hardware has been implemented at various levels of abstraction.
IFT designs targeting the instruction set architecture (ISA) were demonstrated in [10, 31], and an IFT implementation at the register-transfer level (RTL) is shown in [3]. ISA-level implementations proved to be overly conservative in that they reported unintentional information flows. RT-level IFT goes one level down and implements tracking as part of the HDL code, trading off information-flow granularity against computational complexity. Gate-level IFT [35] implements the tracking logic on the synthesized gate-level design, giving more precise data tracking at the bit level at the cost of area and computation.

Researchers have shown that SoC vulnerabilities can be formulated as a comprehensive set of IFT-based security properties [13] to reduce the validation effort. By mapping these security properties to different design stages, the verification effort, and thereby the total verification time, can be reduced, provided that the new vulnerabilities introduced at the different abstraction levels are addressed appropriately. To meet this challenge, AutoMap [2] extends and expands the security properties and their mapping to identify new vulnerabilities introduced as the design moves from higher to lower levels of abstraction. Finally, these property-driven security verification tools are usually driven by computationally expensive formal methods; hence, automation is key to reducing verification costs. The authors of [1] focused on how CAD tools can be configured for security-aware assurance to enhance hardware security coverage.



The rest of this chapter is organized as follows. Section 4.2 describes the motivation and background, Sect. 4.3 presents the literature survey, and the chapter is concluded in Sect. 4.4.

4.2 Motivation and Background

4.2.1 Motivation

As systems get increasingly complex, with multiple hardware intellectual property blocks (IPs) packed onto a single chip and tight software/hardware integration, tracking the flow of information in these systems becomes challenging. The system must be protected so that critical information (e.g., encryption keys, the program counter, etc.) is not leaked to an adversary. As represented in Fig. 4.1, tracking this information flow across all layers of abstraction of a system is a challenging task. Ensuring a secure flow of information requires the designer/security engineer to track how highly secure information/modules interact with information/modules of lower security across and between the software and hardware levels.

Let us consider the subsystem shown in Fig. 4.2, which consists of a processor, an on-chip RAM, an AES encryption core, a true random number generator (TRNG), and a UART module. Consider the processor, on-chip RAM, AES core, and TRNG to be highly secure and trusted modules and the UART to be an untrusted module. It is vital from a security point of view that any information generated or stored by the trusted modules, such as data from the processor, encryption/decryption keys used by the AES core, the seed used to instantiate the TRNG, or data stored in the on-chip RAM, may pass between the trusted modules (green dashed line) but must not pass to the UART without authorization (red dashed line). It is also required that any information generated by the UART not influence the operation of any of the trusted modules.

Fig. 4.1 Different abstraction levels of information flow tracking. Tracking this information flow across all layers of abstraction for a system is a challenging task

(Levels shown in Fig. 4.1: OS, compiler, instruction set, HDL, logic gates, physical layer)



Fig. 4.2 An example of a hardware subsystem with trusted/untrusted modules and secure/insecure information flows. A secure flow involves only trusted modules while an insecure flow involves at least one untrusted module, for instance, UART in the figure

Such ever-present security threat scenarios give rise to the need for techniques and tools to help track the flow of information across and between various system abstractions. Information Flow Tracking has emerged as an attractive solution for designers and security engineers to help better understand the design and its security.

4.2.2 Information Flow Tracking

IFT is a technique developed to track the flow of information across complicated and tightly integrated computing systems. Any system admits four types of information flows:

1. Secure/Trusted Module to Secure/Trusted Module
2. Secure/Trusted Module to Unsecure/Untrusted Module
3. Unsecure/Untrusted Module to Secure/Trusted Module
4. Unsecure/Untrusted Module to Unsecure/Untrusted Module

For any system, information flows of types 2 and 3 are identified as insecure flows that need to be prevented, and IFT helps identify these two types of flows. It uses the concept of tainted data to check where data flows during the operation of the system. Tainting means attaching a taint bit to every input data bit and "tainting" (i.e., setting the taint bit to 1) every intermediate data value that this input flows to. Using tainting, a security engineer can take a system design with the critical input data bits tainted (taint set to 1) and the others untainted (taint set to 0) and check where the input data flows in the system. Information tainting can be of two types:
• Conservative: With conservative tainting, every intermediate data value or output is tainted if it is logically linked to a tainted input. This includes both the case where a change in a tainted input affects the value of the intermediate/output data and the case where it does not. Conservative tainting is used when false positives (reports with no actual flow of information) are of no consequence.
• Precision: With precision tainting, only those intermediate data values or outputs are tainted whose values are directly affected by the input. This approach propagates a taint only when a change in the tainted information causes a change in the intermediate/output data values. Precision tainting reduces the number of false positives but increases the complexity of the tainting logic.

Fig. 4.3 A generalized level of IFT operation. (a) shows a grey box module with 4 input bits, 2 intermediate data bits, and 3 output bits and (b) shows the IFT introduced module with each input, intermediate, and output bit associated with a security tag

A high-level view of IFT in a system is shown in Fig. 4.3. Figure 4.3a shows a grey box module (here, a module can be a software program or a hardware IP) with four input bits I0, I1, I2, I3 and three output bits O0, O1, O2. Assuming that a 2-bit intermediate signal C0, C1 is observable, IFT can help track where the trusted/security-critical input bits (I0, I1) flow and check whether they reach untrustworthy output ports. Furthermore, Fig. 4.3b shows that in the IFT introduced module there is an additional bit for every input, intermediate, and output bit. This additional bit is the taint bit, which is set high for a security-critical input (T1) and low for the other inputs (T0). By simulating the design, the security engineer can observe whether any intermediate signals are tainted (C0 in the figure), indicating a flow of information from the inputs I0 and I1. By following these tainted intermediate data bits, the security engineer can identify information flows to the outputs. The taint bits of outputs O0 and O2 indicate a flow of information from the security-critical/trusted inputs I0 and I1.
A flow to O0 is of no consequence, as O0 is a trusted output port. However, the flow to the untrusted output port O2 could be a functional flaw or a security weakness that an adversary can exploit to extract the values of bits I0 and I1. IFT can thus give deep insight into how information flows in a computational system.

Fig. 4.4 Different types of information flows in a system. (a) Explicit flows, (b) Implicit flows, and (c) Timing flows

IFT can help a designer identify the following types of data flows in a system:
• Explicit Flows: Explicit flows are information flows between variables defined explicitly by the designer. These are generally direct data dependencies, as seen in Fig. 4.4a.
• Implicit Flows: Implicit flows are information flows in which the conditional execution of an operation on one variable depends on the value of another variable. An example can be seen in Fig. 4.4b.
• Timing Flows: Timing flows are information flows in which the timing delay of a logical operation depends on the value of another variable. An example can be found in Fig. 4.4c.
Computational systems are a closely packed structure of software (application, OS, kernel, firmware) and hardware (microarchitecture, logic gates, transistors). Identifying how information flows across all these layers is a complex task. Thus, different techniques have been described in the literature for performing IFT at a single layer of abstraction or across two layers. These methodologies are described in detail in Sect. 4.3.
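The grey-box tainting idea of Fig. 4.3 can be sketched in a few lines of Python. This is a toy model: the module's logic function and its dependency structure are our own illustrative choices, not taken from the figure.

```python
# Toy conservative taint propagation through a 4-input module (illustrative
# logic; the taint rule mirrors the module's data dependencies).

def module(i):
    """Compute intermediate bits C0, C1 and outputs O0..O2 from inputs I0..I3."""
    c0 = i[0] & i[1]          # C0 depends on I0, I1 (security-critical)
    c1 = i[2] | i[3]          # C1 depends only on I2, I3
    return [c0 ^ i[2], c1 & i[3], c0 | c1]   # O0, O1, O2

def taint(i_t):
    """Conservative rule: a bit is tainted if any bit it depends on is tainted."""
    c0_t = i_t[0] | i_t[1]
    c1_t = i_t[2] | i_t[3]
    return [c0_t | i_t[2], c1_t | i_t[3], c0_t | c1_t]   # taints of O0, O1, O2

# Taint only the security-critical inputs I0, I1.
print(taint([1, 1, 0, 0]))    # -> [1, 0, 1]: I0/I1 flow to O0 and O2
```

The result matches the scenario in the text: the taints of O0 and O2 go high, revealing a flow from I0/I1, while O1 stays untainted.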

4.3 Information Flow Tracking Methodologies

4.3.1 Software-Based IFT Techniques

IFT tags or labels input data coming from untrusted sources or channels as "unsafe" data, and the tag is propagated during program computation. Data derived from unsafe data are labeled unsafe as well. Through propagation, the IFT logic tracks and detects the use of unsafe data throughout the computation that could maliciously cause unwanted and unintentional program actions such as context switching, branch explosion, etc. IFT in software can be implemented in two different ways: at compile time and at run time. In [23], information tracking is done at compile time by


using particular type-safe programming languages. Hence, this is applicable only to programs written in specific languages and inapplicable to type-unsafe languages like C/C++, which are widely used in application and operating system development. Converting legacy programs from type-unsafe to type-safe languages is either not possible or possible only with some loss of information, and it could introduce unintentional new bugs. In addition, this method fails to detect run-time attacks, such as alteration of the return address, due to the lack of run-time information. Most IFT tools developed with this approach can detect information leakage but not security attacks. There is no run-time performance overhead because no run-time information is needed in this approach. Information flow can also be tracked at run time using source-code or binary-code instrumentation. However, source-code instrumentation fails to detect information flow, and hence security exploits, in third-party library code. Additionally, third-party library owners must provide a detailed summary of all the functions in their libraries; this information is required to track information flow through library calls. Providing such detailed information can be tedious, complex, and error-prone, which could cause many unintentional side effects and vulnerabilities. IFT using source-code instrumentation has a lower overhead than binary-code instrumentation [38]. In [24], binary-code instrumentation-based information flow tracking was proposed to accurately track information flow in third-party libraries, at the cost of substantial overhead: program execution is slowed down by 37 times, which is too large to accept. The authors in [27] proposed a software-based low-overhead information flow tracking system called LIFT, which exploits run-time binary instrumentation to reduce overhead.
LIFT detects security attacks that can corrupt and manipulate control data, e.g., return addresses, function pointers, etc. Dynamically tracking information at the binary level with accurate run-time information leaves room for optimizations. LIFT employs three run-time binary optimizations:
• Fast Path (FP)
• Merged Check (MC)
• Fast Switch (FS)
LIFT uses StarDBT, a dynamic binary translator developed by Intel Corporation. StarDBT holds a cache for the translated code so that the base code is translated once but executed multiple times. LIFT uses StarDBT at run time to instrument the translated code and perform IFT. LIFT assigns a one-bit tag (1 for "unsafe" and 0 for "safe") to each byte of data in the system's memory. A user can use multi-bit tag values to distinguish different levels of trusted and untrusted channels, for example, using dedicated tag bits for input from network channels or external memory sources. All tags are stored in a dedicated memory region called the tag space, with a one-to-one mapping to the corresponding data byte in memory. Multi-bit tags allow finer-grained tracking but increase the space overhead of the tag space. The tag space itself is protected by turning off access permissions for the pages associated with it. Similarly, the LIFT code is made read-only to protect it from attacks and corruption. The Fast Path optimization removes unnecessary tracking, because the majority of safe data propagates only to safe destinations: based on the experimental data collected by the authors in [27], 98.05% of safe data flows to a safe destination. Dividing the program into blocks of code and checking the tag values at the beginning and end of each block gives enough information to tell whether the block needs to be tracked. The Merged Check (MC) combines multiple tag-checking operations into one to reduce the number of tag comparisons. Here, MC leverages two fundamental principles of computer architecture: temporal locality and spatial locality. If an instruction accesses the same memory location multiple times, MC checks the tag value only once. Similarly, when an instruction accesses a memory location, MC combines the tags of the surrounding data bytes into a single tag and checks it only once to eliminate redundant checks. Because there is dedicated code for the IFT logic, LIFT uses two stacks to store the intermediate data of the IFT code and the normal code. On conventional x86 architectures, saving and restoring the flags on the stack uses the expensive pushfq/popfq instructions; LIFT instead uses the two cheaper instructions lahf/sahf to perform the same save and restore. Table 4.1 compares LIFT's effectiveness with five existing tools in detecting a wide range of security attacks that corrupt control data. LIFT detected all 18 attack benchmarks developed by [37], covering a wide range of security exploits. The comparison covers StackGuard [9], Stack Shield [30], ProPolice [12], LibSafe+LibVerify [4, 5], and LIFT [27].

Table 4.1 Control data vulnerability detection using LIFT [27] and comparison with five existing solutions

Exploit targets        StackGuard [9]  Stack Shield [30]  ProPolice [12]  LibSafe+LibVerify [4, 5]  LIFT [27]
Return address (3)     3/3             3/3                2/3             1/3                       3/3
Base pointer (3)       2/3             3/3                2/3             1/3                       3/3
Function pointer (6)   0/6             0/6                3/6             1/6                       6/6
Longjmp buffer (6)     0/6             0/6                3/6             1/6                       6/6
Total (18)             5/18            6/18               10/18           4/18                      18/18
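LIFT's one-tag-bit-per-byte scheme can be sketched with a simplified model. This is our own illustrative abstraction, not LIFT's actual x86 instrumentation: a tag array shadows memory byte-for-byte, and every data movement or arithmetic operation propagates tags alongside data.

```python
# Simplified model of a LIFT-style tag space: one tag bit per byte of memory
# (1 = "unsafe", 0 = "safe"); tags propagate with the data they shadow.

MEM_SIZE = 64
memory = bytearray(MEM_SIZE)
tags = [0] * MEM_SIZE          # one-to-one mapping: tags[i] shadows memory[i]

def store(dst, value, tag):
    """Write a byte and its provenance tag (e.g., tag=1 for network input)."""
    memory[dst] = value
    tags[dst] = tag

def copy(dst, src):
    """Data movement carries the tag along with the data."""
    memory[dst] = memory[src]
    tags[dst] = tags[src]

def add(dst, a, b):
    """A derived value is unsafe if either operand is unsafe."""
    memory[dst] = (memory[a] + memory[b]) & 0xFF
    tags[dst] = tags[a] | tags[b]

store(0, 0x41, 1)   # byte from an untrusted channel: tagged unsafe
store(1, 0x02, 0)   # constant: safe
add(2, 0, 1)        # derived value inherits the unsafe tag
print(tags[2])      # -> 1
```

If a tagged byte later reaches a control-data sink (a return address, a function pointer), a checker over this tag array would flag the use, which is the essence of LIFT's detection.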

4.3.2 Hardware-Based IFT Techniques

IFT at the hardware level gives deeper insight into how data flows at bit-level granularity. Software-based IFT techniques involve a large number of context switches between source code and instrumented code, as seen in LIFT [27], causing considerable overhead. IFT techniques at the software level also fail to capture how data flows at the hardware level. Any insecure information flow at the hardware level can result in drastic system data leakages that cannot be easily patched. Such a vulnerability would require a recall of the entire system hardware and cause significant losses to the design house. Thus, it is crucial to ensure the proper flow of information in the hardware, and this can be done in the pre-silicon stages.

4.3.2.1 Gate-Level Information Flow Tracking

Gate-level information flow tracking (GLIFT) [16] was one of the first techniques designed to track the flow of data in hardware. Hardware microarchitecture is designed using hardware description languages (HDLs) like Verilog, VHDL, and SystemVerilog, or in high-level programming languages such as C++ and SystemC that are then converted into HDL using high-level synthesis (HLS) tools. The microarchitecture HDL, either designed in-house or obtained from third-party vendors, is synthesized into logic gates, such as AND, OR, XOR, etc., using a CAD tool. The gate level is a lower level of abstraction that contains information on the placement of each logic gate and the routing between them. The gate-level information, in a GDSII file, is then sent for fabrication into physical transistors on silicon. The high level of detail available at the gate level before fabrication makes it an excellent point for tracking the flow of data in the hardware. GLIFT uses a novel technique of shadow logic generation to track how information flows through each logic gate. The shadow logic of a logic gate is the taint logic that helps track the flow of information through that gate. For example, Fig. 4.5a shows the AND function with inputs A, B and output f, and Fig. 4.5b shows its corresponding shadow logic function, with A_t and B_t as the taints of inputs A and B. The shadow logic function in Fig. 4.5b implements precision tracking: the output of the AND gate is tainted only if a change in a tainted input causes a change in the output. These shadow logic functions can be synthesized and aggregated into a library. The synthesized AND gate can be seen in Fig. 4.5c and its synthesized shadow logic in Fig. 4.5d. GLIFT can create the shadow logic library through the following two methods.

f = A·B        sh(f) = A·B_t + B·A_t + A_t·B_t

Fig. 4.5 (a) Logic function AND, (b) Shadow logic function for AND, (c) Logic AND gate, and (d) Shadow logic gate of AND
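The precision shadow function for AND in Fig. 4.5b can be checked exhaustively: its output should be 1 exactly when flipping only tainted inputs can change the gate's output. A small Python check (variable names are ours):

```python
from itertools import product

def and_shadow(a, b, a_t, b_t):
    # sh(f) = A·B_t + B·A_t + A_t·B_t, the precision taint for f = A·B
    return (a & b_t) | (b & a_t) | (a_t & b_t)

def output_can_change(a, b, a_t, b_t):
    """1 iff flipping only the tainted inputs can change f = a & b."""
    f = a & b
    for da, db in product([0, 1], repeat=2):
        a2 = a ^ (da & a_t)   # only tainted inputs may be flipped
        b2 = b ^ (db & b_t)
        if (a2 & b2) != f:
            return 1
    return 0

# The closed-form shadow function matches the precision-taint definition
# on all 16 (input, taint) combinations.
assert all(and_shadow(a, b, at, bt) == output_can_change(a, b, at, bt)
           for a, b, at, bt in product([0, 1], repeat=4))
print("shadow function matches precision-taint definition")
```

Note the asymmetry this formula captures: with A = 0 untainted, a tainted B does not taint the output, because the AND output is forced to 0 regardless of B.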



Fig. 4.6 The overview of the GLIFT architecture. Logic gates are created by synthesizing HDL code from microarchitecture (Verilog or VHDL). For every logic gate in the synthesized netlist, a shadow logic gate is added using the GLIFT shadow logic library

• Brute Force: The brute force method applies every combination of input vectors to the logic function and observes the output response. A min-term is added for each input combination that causes a change in the output. Shadow logic can be generated for all design library components and then synthesized and added to the GLIFT library. This method is computationally complex, with a complexity of O(2^(2n)) (where n is the number of inputs to the gate).
• Constructive: The constructive method builds shadow functions from logic primitives such as AND, OR, and XOR gates, much like the technology mapping libraries used when synthesizing a design from HDL to netlist. This method is computationally cheaper than the brute force method, with a complexity of O(n), but it produces false positives and is not as precise as brute force.
Figure 4.6 shows an overview of the GLIFT technique. The HDL code of the microarchitecture (Verilog or VHDL) is synthesized into logic gates using any standard design library. Using the GLIFT library of shadow logic, a shadow logic gate is added for every logic gate in the synthesized netlist, resulting in a GLIFT extended netlist. The GLIFT extended netlist can then be simulated for information flow tracking. The gate level is the lowest level of abstraction at which information flow is tracked; hence, GLIFT captures dependencies that are inherently invisible at higher levels. This approach helps find insecure flows that an adversary can exploit. Also, since IFT is done at the pre-silicon stage of the design process, any violations can be detected and fixed by the designer. The synthesized shadow logic can also be sent for fabrication to build dynamic information flow trackers that check for information flow violations in-field.
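The brute force method can be sketched for an arbitrary n-input gate: enumerate all 2^(2n) (input, taint) combinations (the source of the O(2^(2n)) complexity) and mark the taint output 1 whenever flipping only the tainted inputs can alter the gate output. This is our illustrative implementation, not GLIFT's tooling:

```python
from itertools import product

def brute_force_shadow(gate, n):
    """Build the precision shadow function of an n-input gate as a truth
    table keyed by (inputs, taints), by exhaustive enumeration."""
    table = {}
    for ins in product([0, 1], repeat=n):
        for taints in product([0, 1], repeat=n):
            f = gate(ins)
            tainted_out = 0
            # Try every way of flipping only the tainted inputs.
            for flips in product([0, 1], repeat=n):
                alt = tuple(i ^ (fl & t) for i, fl, t in zip(ins, flips, taints))
                if gate(alt) != f:
                    tainted_out = 1
                    break
            table[(ins, taints)] = tainted_out
    return table

or_shadow = brute_force_shadow(lambda v: v[0] | v[1], 2)
# For OR, a tainted input propagates its taint only when the other input is 0.
print(or_shadow[((1, 0), (0, 1))])   # B tainted, but A=1 forces the output -> 0
print(or_shadow[((0, 0), (0, 1))])   # B tainted and A=0 -> 1
```

The resulting truth table is what would be minimized into min-terms and synthesized into the GLIFT shadow logic library; the constructive method instead composes pre-built shadow primitives, trading this precision for O(n) cost.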
GLIFT provides designers and security engineers with a high level of observability into data flow in the hardware, but it is also computationally complex and adds significant overhead. GLIFT was tested on standard benchmarks such as "74 283" (a 4-bit adder), "alu4_cl" (a 4-bit ALU), "s344/s349" (a 4-bit multiplier), and "DES" (the Data Encryption Standard).

Table 4.2 Area of the original design and circuits from different shadow logic generation methods. Due to the large number of inputs, the brute force and complete sum approaches could not be tested (−) on DES. All the results are generated using ABC

Benchmark    Original  Brute force  Complete sum  Constructive
74 283            109          187           177           179
alu4_cl           671         5824          2876          2292
s344/s349         488         1760          1761          1765
DES            14,539            −             −        75,262

Table 4.3 Computation time of various GLIFT implementations for shadow logic generation on several benchmark circuits. Due to the large number of inputs, the brute force approach could not be tested (−) on DES. All the results are generated using ABC

Benchmark    Brute force (hh:mm:ss.cs)  Constructive (ss.cs)
74 283                      0:08:39.26                  0.07
alu4_cl                     1:53:23.74                0.2323
s344/s349                   0:01:28.51                  0.24
DES                                  −                  2.56

Table 4.2 shows the area overhead of implementing the GLIFT technique on these benchmarks, with area obtained using the ABC synthesis tool. As can be seen, GLIFT adds significant area overhead to all of these benchmark designs, and the brute force method is so computationally costly that the test bed used could not generate the shadow logic for DES to obtain an area figure. Table 4.3 gives an outlook on the computation time for each benchmark to generate the GLIFT shadow logic using the brute force and constructive methods.

4.3.2.2 Finding Timing Channels Using GLIFT

GLIFT gives the information flow paths in any given design. These paths can be explicit, implicit, or timing flows. GLIFT thus enables us to identify timing channels [26] in a design and protect them from being exploited by adversaries. To understand how timing channels are detected, let us define the essential terms and differentiate between functional and timing flows below.
• Event: An event is a pair of an input or output value and the time of its occurrence during simulation. For example, the output event e_i = (00, 2) indicates that the output value is 00 at time 2 in the simulation run.
• Distinct Outputs: The distinct set of output events is the longest subset of the set of output events that contains all the events with distinct output values. For example, in a two-bit trace of output events A = [(00,1), (00,2), (01,3), (01,4), (11,5), (10,6)], the distinct event set is d(A) = [(00,1), (01,3), (11,5), (10,6)].


Fig. 4.7 An exemplary circuit showing how GLIFT [26] finds a timing channel in a design. (a) A simple multiplier module. (b) FSM of the multiplier module

• Functional Flow: Functional flows are flows in which a change to a tainted input event causes a change in the values in the set of distinct outputs of the design, but not in the times of the generated outputs.
• Timing Flow: Timing flows are flows in which a change in the tainted input events causes a difference in the times of the generated output events but no change in the set of distinct output events.
A designer/security engineer can obtain all the information flow paths using GLIFT and then check whether any change in the tainted input values causes a change in the values of the distinct event set of the output. If no change is observed, there is no functional flow in the design, and the information path identified by GLIFT is a timing channel that could be a potential security weakness. However, if a functional flow is detected, it is not possible to tell whether a timing channel also exists in the design. Consider the simple design shown in Fig. 4.7a and its corresponding state machine in Fig. 4.7b. The multiplier module takes inputs "A" and "B" and outputs their product "Out". The multiplier also takes an input "Fast" which, when set high, uses the fast multiplier circuit instead of the slow multiplier circuit used in normal operation. Apply the GLIFT technique, tainting the input "Fast" and leaving the other inputs untainted; GLIFT will report an information path from "Fast" to the output "Out". Given input traces I0 = {A0, B0, 0, t0}, where t0 is the start of the simulation, and I1 = {A0, B0, 1, t0}, the output traces S = {Out, t} and S' = {Out', t'} are obtained. Since A and B are the same for both input traces, Out = Out'. But careful observation shows that t ≠ t', as input trace I1 uses the faster circuit and delivers the output sooner. This indicates the absence of a functional flow, meaning the observed information flow is a timing flow that an adversary can exploit.
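The distinct-output computation and the functional-vs-timing classification above can be sketched in Python. This is an illustrative simplification: we keep the first event of each run of equal values (which reproduces the d(A) example), and we compare two traces obtained under two values of the tainted input.

```python
def distinct(trace):
    """Distinct event set: keep the first event of each run of equal values,
    reproducing d(A) from the text (a simplification of the definition)."""
    out, prev = [], object()
    for value, t in trace:
        if value != prev:
            out.append((value, t))
            prev = value
    return out

A = [("00", 1), ("00", 2), ("01", 3), ("01", 4), ("11", 5), ("10", 6)]
print(distinct(A))   # -> [('00', 1), ('01', 3), ('11', 5), ('10', 6)]

def classify(trace0, trace1):
    """Compare traces for two values of a tainted input: different values imply
    a functional flow; same values at different times imply a timing flow."""
    d0, d1 = distinct(trace0), distinct(trace1)
    if [v for v, _ in d0] != [v for v, _ in d1]:
        return "functional flow"
    if [t for _, t in d0] != [t for _, t in d1]:
        return "timing flow"
    return "no observable flow"

# Multiplier of Fig. 4.7: same product, delivered earlier when Fast = 1
# (the values and cycle counts here are hypothetical).
slow = [("06", 8)]        # Fast = 0
fast = [("06", 3)]        # Fast = 1
print(classify(slow, fast))   # -> timing flow
```

Tainting "Fast" and comparing the two runs yields identical distinct values but different times, which is exactly the timing-flow verdict derived in the text.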

4.3.2.3 Register-Transfer Level Information Flow Tracking

GLIFT allows the designer/security engineer to track data at the lowest level of abstraction. However, GLIFT can be computationally complex and has a large area overhead. At such a low level of abstraction, it cannot detect high-level dependencies (i.e., correlations between variables), resulting in lower precision and more false positives. To overcome these weaknesses of GLIFT, IFT can be performed at a higher level of abstraction, such as the microarchitecture. As discussed previously, microarchitecture is generally written at the Register-Transfer Level (RTL). Register-Transfer Level Information Flow Tracking (RTLIFT) [3] gives the designer the advantage of clearly differentiating between explicit and implicit flows. RTLIFT defines a library of tracking modules that replace the corresponding modules in the architecture. For example, Verilog code for a simple AND gate is shown in Fig. 4.8a; a Verilog module that introduces conservative tracking at the RTL is shown in Fig. 4.8b, and a precision tracking Verilog module in Fig. 4.8c. RTLIFT parses the RTL to construct the data flow graph and then introduces the tracking logic. Explicit information flows can be captured by constructing such tracking modules for every logic module and including them in the RTLIFT library. To address implicit flows, RTLIFT can again implement either a conservative or a precision tracking approach. For every branch statement, such as if-else, RTLIFT can replace the branch with one that tracks the implicit flow of information. Consider a simple if-else branch as in Fig. 4.9a. RTLIFT can parse

Fig. 4.8 An example AND gate HDL showing RTLIFT utilizing a conservative or precision tracking approach. (a) Verilog code for AND module. (b) Conservative RTLIFT Verilog module for AND. (c) Precision RTLIFT Verilog module for AND

Fig. 4.9 Another example showing how to use RTLIFT for precision tracking. (a) Simple if-else RTL scenario. (b) Conservative approach for implicit flow tracking. (c) Precision approach for implicit flow


Fig. 4.10 An overview of the full RTLIFT [3] flow. RTLIFT offers the benefit of IFT to detect information leakage at a higher level of abstraction with improved computational complexity and time

the RTL to identify the control flow graph. In the conservative approach, it taints the data variables if the control variable is tainted, as shown in Fig. 4.9b. For precision tracking, the control values in the control flow graph are inverted to see whether the data variable within the branch would change; if it changes, the data variable is tainted, otherwise it is left untainted, as shown in Fig. 4.9c. In this way, RTLIFT can identify implicit flows. An overview of the full RTLIFT flow can be seen in Fig. 4.10. RTLIFT offers the benefit of IFT for detecting information leakage at a higher level of abstraction. RTLIFT's higher vantage point helps identify correlations between variables and track flows with more precision. It also gives the designer a chance to decrease the computational complexity of IFT by allowing a choice between conservative and precision tracking. Prioritizing precision tracking for implicit flows and conservative tracking for data flows in a control system, such as those used in aircraft [30], can reduce verification time while giving higher confidence in security; the inverse holds for systems whose data values are of primary importance. RTLIFT offers better computational complexity and lower computation times. It also provides lower area overhead when the tracking logic is synthesized: in GLIFT, shadow tracking logic is added on top of the original logic, increasing the area, whereas in RTLIFT the logic modules are replaced with new modules from the RTLIFT library that perform the logic functionality and the information flow tracking in the same module. RTLIFT is also more precise than GLIFT, resulting in fewer false-positive information flows, as seen in Table 4.4, where the false-positive rates (%FP) of GLIFT reflect RTLIFT's higher precision.
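The two implicit-flow options of Fig. 4.9 can be contrasted in Python. This is an illustrative if-else with control variable c and data result d; the taint rules follow the description above, not the book's Verilog.

```python
def branch(c, a, b):
    """The guarded assignment of Fig. 4.9a: d = a if c else b."""
    return a if c else b

def conservative_taint(c_t, a_t, b_t):
    # Conservative rule (Fig. 4.9b): taint d whenever the control
    # variable or any assigned value is tainted.
    return c_t | a_t | b_t

def precision_taint(c, a, b, c_t, a_t, b_t):
    # Precision rule (Fig. 4.9c): invert the tainted control value and
    # taint d only if the assigned data value actually changes.
    data_t = a_t if c else b_t
    if c_t and branch(1 - c, a, b) != branch(c, a, b):
        return 1
    return data_t

# c is tainted, but both branches assign the same value: no real implicit flow.
print(conservative_taint(1, 0, 0))         # -> 1 (false positive)
print(precision_taint(1, 5, 5, 1, 0, 0))   # -> 0 (precise)
```

The divergence on this corner case is exactly the precision gap quantified by the %FP column of Table 4.4: the conservative rule reports a flow that the precision rule correctly rejects.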


Table 4.4 Precision & complexity of RTLIFT vs GLIFT. RTLIFT is more precise and generates fewer false positives than GLIFT

Operation          RTLIFT tainted flows  RTLIFT area  GLIFT tainted flows  GLIFT area   %FP
8-bit adder                   8,477,103          271            8,535,900         222   5.6%
16-bit adder                 16,441,632          603           16,524,855         556   7.9%
32-bit adder                 32,385,907         1215           32,549,597        1243  15.6%
8-bit multiplier             15,029,971          847           15,310,281        1759  26.7%
16-bit multiplier            31,816,947         2078           32,200,870        7647  36.6%
4-way case                      849,874           70              883,810          54   3.2%
8-way case                      869,915          226              958,070         129   8.4%
16-way case                     869,799          199              997,874         289  12.2%

4.3.3 HDL-Level Based IFT Technique

Modern computing systems rely extensively on hardware to secure critical software and security components. However, designing the hardware itself is error-prone, which calls for a static way of verifying the hardware while it is being designed. User processes are isolated from supervisor processes by protection rings to separate safe and unsafe software runs. This approach can be seen in state-of-the-art hardware security architectures, including ARM TrustZone, Intel SGX, and IBM SecureBlue, which can protect software even when the underlying operating system (OS) is malicious or compromised. But there is no guarantee that these systems are bug-free themselves, which calls for techniques that provide formal guarantees about the security properties enforced by the hardware. More specifically, bugs can still go unchecked if they are not tracked in the HDL during the design process. If the HDL design includes extensive tracking and analysis of information flow, there can be high assurance that the design is secure and error-free. One such technique is proposed in [14], where the hardware is developed using a "modified" HDL such that the information flow between different components is tracked, analyzed, and verified statically during the design process by the CAD tools. This Information Flow Control Hardware Description Language (IFC HDL) is used to design and demonstrate a multi-core prototype of the ARM TrustZone architecture. The hardware and the "modified" HDL (i.e., the IFC HDL) used to design it are described below.

The Hardware or Processor Designed The hardware designed is the ARM TrustZone architecture, a security processor widely used in embedded systems and smartphones.
The hardware mechanisms used in this processor provide an isolated execution environment for low- and high-priority software to maintain confidentiality (i.e., no information flow between secure and non-secure cores, represented by NS in Fig. 4.11) and integrity (no information flow from a normal memory core can affect a secure memory core), as shown in


Fig. 4.11 The prototype of the ARM TrustZone architecture [25], where NS denotes non-secure, AC denotes access control, and CHK denotes checker

Fig. 4.11. Here, NS (non-secure) is a security tag that isolates information between the secure and normal worlds, AC represents an access control, and CHK is a checker.

The "Modified" HDL (IFC HDL) The IFC HDL is SecVerilogBL, which is built upon the SecVerilog language. SecVerilog is an extension of Verilog (a standard HDL) whose syntax annotates wires and registers with security labels; it can thus help track information flow and prevent security policy violations. SecVerilogBL extends the capabilities of SecVerilog by adding security labels to individual bits (hence the suffix BL, for Bit Level) rather than to a whole wire or packet of data. By doing this, SecVerilogBL can reason precisely about the flow of data and information between different nodes and addresses within the architecture. The security policies (for confidentiality (C) and integrity (I)) implemented in TrustZone are expressed in terms of CT, CU, PT, and PU, where C stands for Confidential, P for Public, T for Trusted, and U for Untrusted, as shown in Fig. 4.12a; the arrows represent the direction of allowed information flow between different cores or blocks of the architecture. The label CT represents variables in the secure world, such as processing cores, memory, and hardware IP blocks, while PU represents variables in the normal world. Signals that must be trusted but are not confidential are labeled PT; for example, the NS bits, the TrustZone control registers, and the clock and reset variables are labeled PT. Since most hardware resources can be switched between the two worlds, the security labels (defining the "world") for such modules use a dependent type expressed as a function of the NS bit (ns) associated with that module. For example, world(ns) maps the value of ns to a security label: 1 maps to PU and 0 maps to CT.
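The four labels and the dependent world(ns) type can be sketched in Python. The encoding of the flow relation is our assumption about the lattice of Fig. 4.12a: a flow is legal if the destination is at least as confidential (P below C) and at most as trusted (T above U) as the source.

```python
# Assumed encoding of the TrustZone IF lattice: confidentiality P < C,
# integrity T > U; src -> dst is legal iff dst is at least as confidential
# and at most as trusted as src (PT at the bottom, CU at the top).
CONF = {"P": 0, "C": 1}
INTEG = {"T": 1, "U": 0}

def flows_to(src, dst):
    """Legality of an information flow between two labels (e.g., 'CT' -> 'PU')."""
    return CONF[src[0]] <= CONF[dst[0]] and INTEG[src[1]] >= INTEG[dst[1]]

def world(ns):
    """Dependent label: ns = 1 maps to PU (normal world), ns = 0 to CT (secure)."""
    return "PU" if ns else "CT"

print(flows_to(world(0), "PU"))   # secure world -> normal world: False
print(flows_to("PT", world(0)))   # trusted public signal (e.g., NS bit) -> secure: True
```

Under this encoding, confidentiality forbids secure-world (CT) data from reaching the normal world (PU), while integrity still lets trusted public signals such as clock, reset, and the NS bits feed both worlds, matching the TrustZone policies described above.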


Fig. 4.12 The information flow (IF) policy implemented in TrustZone [25] using the IFC HDL, where (a) shows the security lattice of IF between different elements and (b) shows the translation of TrustZone security policies into IF policies

Table 4.5 Programming overhead in terms of the lines of code used

The highlighted numbers in the last column signify the total number of lines of code for the unverified and verified versions

Contributions, Assumptions, and Limitations The IFC HDL is used to implement and design a secure architecture, the ARM TrustZone processor. Previous works, such as those presented in [7, 32] and [29], have implemented similar HDL support for architecture design; however, those architectures were simple and had only single cores. The IFC HDL designs and analyzes a complicated, multi-core security architecture with full support for information flow tracking. It also analyzes and detects bugs within TrustZone implementations that might have gone unnoticed without the new IF policies. The bugs detected (presented in Table 4.5) are


4 CAD for Information Leakage Assessment

some of the recently discovered vulnerabilities in modern safety-critical architectures. However, it is somewhat unclear whether these bugs would go completely undetected by the TrustZone architecture itself. Furthermore, while this methodology could be used to design other architectures, the work does not provide a detailed analysis of what that would entail or of the possible challenges. Nevertheless, the methodology has broad applicability: it can help find bugs and vulnerabilities during the design process, especially for safety-critical hardware devices. Similarly, the overhead results (presented in Table 4.5) only cover lines of code, not area or performance. The techniques proposed in [18, 19, 40] were among the first to analyze a multi-core processor in terms of information flow and to provide an HDL-based design methodology incorporating the tracking of information flow and the analysis of the security features and support provided by the hardware. The results of the IFC HDL methodology include the evaluation of the designed architecture in terms of bug detection and overhead. The assessment validates how effective IFC HDL is at finding security vulnerabilities already detected by conventional methods. The bugs mainly involve turning normal (non-secure) world information into secure world information, and they are as follows:

• Access Control Omission—Allows the normal world to access trusted or confidential state information.
• Cache Poisoning—Allows a user-mode (normal world) process to execute arbitrary code in system management code (secure world).
• NS Bit Flip—Requests from the normal world are interpreted as coming from the secure world.
• Network Routing Bug—Incorrectly routes information from the secure world to the normal world.
• World Switching Bug—Switches the world context from normal to secure.

Figure 4.13 shows the bug detection methodology for the Access Control Omission bug.
The line highlighted in blue shows how the control checks for the partition register are normally implemented. The proposed IFC HDL method detects a bug because the variable data_in cannot be trusted: data_in has the label CT when ns is 0 and the label PU when ns is 1. Based on the proposed modified HDL, the line of code highlighted in red adds a check which ensures that the ns bit is 0, so that the variable data_in can now be trusted. The overhead results, mainly in terms of lines of code (programming overhead), area, power, and performance, are shown in Table 4.5. The table lists the number of lines of code for the unverified version (without the IF-policy extensions in the HDL, as shown in Fig. 4.12) versus the verified version (with the IF-policy extensions). The red column in Table 4.5 shows a programming overhead of about 2.9% on average. These low numbers are understandable because the architecture already has a hardware mechanism in place to analyze its security. This means that the extra lines of code are only implemented to add those extra checks

Fig. 4.13 Detection of the Access Control Omission bug in the TrustZone architecture. The line highlighted in blue shows how the control checks for the partition register are normally implemented. The modified line of code highlighted in red adds a check which ensures that the ns bit is 0, so that the variable data_in can be trusted

and do not modify the design or implement new functionality in any way. The overhead in terms of area and power is about 0.37% and 0.32%, respectively, which is reasonable given that the lines of code increased only slightly, resulting in only a slight impact on the performance of the TrustZone architecture [25]. The proposed technique can aid and complement other forms of information flow tracking, for example GLIFT (hardware-based IF tracking) and other language-level tracking methods [20]. Furthermore, the verification methodology is static and performed at compile time, making it the first of its kind to analyze information flow in a multi-core architecture. Nevertheless, an implementation of a single-core architecture and a direct performance comparison against [18, 19, 40] would have strengthened the claims.
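The Access Control Omission fix discussed above can be mimicked in a few lines of Python (a toy behavioral model with hypothetical names such as write_partition_reg; the real check is an HDL guard whose labels SecVerilogBL verifies statically, not run-time code):

```python
# Toy model of the Access Control Omission fix: the TrustZone partition
# register must only accept trusted writes. data_in carries the dependent
# label world(ns): trusted (CT) when ns == 0, untrusted (PU) when ns == 1.

def data_in_trusted(ns: int) -> bool:
    return ns == 0          # data_in is labeled CT (trusted) iff ns == 0

def write_partition_reg(reg, data_in, ns, wr_en, checked=True):
    """checked=False models the buggy HDL (no ns guard); the red-line fix
    in Fig. 4.13 corresponds to checked=True, which adds 'ns == 0'.
    The AssertionError stands in for the label violation SecVerilogBL
    would flag at compile time."""
    if wr_en and (not checked or ns == 0):
        if not data_in_trusted(ns):
            raise AssertionError("IF violation: untrusted data_in -> trusted reg")
        return data_in      # legal trusted write
    return reg              # write ignored

# Fixed version: a normal-world (ns = 1) write is simply ignored.
assert write_partition_reg(0xAB, 0xFF, ns=1, wr_en=True) == 0xAB
# Secure-world (ns = 0) writes go through.
assert write_partition_reg(0xAB, 0xFF, ns=0, wr_en=True) == 0xFF
```

The unchecked variant reproduces the bug: a normal-world write reaches the trusted partition register, which is exactly the flow the label check rejects.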

4.4 Summary

This chapter describes information flow tracking techniques and various proposed methodologies for monitoring data flow among different trust levels, in order to maintain the two main information flow policies: confidentiality and integrity. This tracking can be done at different abstraction levels, as shown in Fig. 4.1. The evaluation results discuss state-of-the-art IFT techniques at the hardware, software, and HDL levels. The reader should now understand what IFT is, why it matters, and what strategies can be used to perform IFT so that information flow policies are not violated and the integrity, confidentiality, and security of the system are preserved. The chapter also helps designers decide which tracking technique best suits their designs and security requirements. Software-based IFT can track data flow between entities and detect information leakage and security attacks.

Hardware-based IFT enables bit-level information tracking, giving more granular insight into the design and helping identify malicious policy violations at the hardware level. Hardware designs can also be verified at design time using information flow analysis and an extension of SecVerilog called SecVerilogBL, which enables static verification of the architecture and the assertion of information flow security policies. In this way, the various described IFT methodologies can be adapted by designers to develop and manufacture more secure systems.

References

1. S. Aftabjahani, R. Kastner, M. Tehranipoor, F. Farahmandi, J. Oberg, A. Nordstrom, N. Fern, A. Althoff, Special session: CAD for hardware security-automation is key to adoption of solutions, in 2021 IEEE 39th VLSI Test Symposium (VTS) (IEEE, 2021), pp. 1–10
2. B. Ahmed, F. Rahman, N. Hooten, F. Farahmandi, M. Tehranipoor, AutoMap: automated mapping of security properties between different levels of abstraction in design flow, in 2021 IEEE/ACM International Conference on Computer Aided Design (ICCAD) (IEEE, 2021), pp. 1–9
3. A. Ardeshiricham, W. Hu, J. Marxen, R. Kastner, Register transfer level information flow tracking for provably secure hardware design, in Design, Automation & Test in Europe Conference & Exhibition (DATE), 2017 (2017), pp. 1691–1696. https://doi.org/10.23919/DATE.2017.7927266
4. A. Baratloo, N. Singh, T. Tsai, Transparent Run-Time Defense Against Stack Smashing Attacks (USENIX Association, USA, 2000)
5. A. Baratloo, T. Tsai, N. Singh, Libsafe: protecting critical elements of stacks (2001)
6. M.M. Bidmeshki, Y. Makris, Toward automatic proof generation for information flow policies in third-party hardware IP, in 2015 IEEE International Symposium on Hardware Oriented Security and Trust (HOST) (2015), pp. 163–168. https://doi.org/10.1109/HST.2015.7140256
7. R. Boivie, P. Williams, SecureBlue++: CPU support for secure execution. IBM Research Division, RC25287 (WAT1205-070), pp. 1–9 (2012)
8. M. Costa, J. Crowcroft, M. Castro, A. Rowstron, L. Zhou, L. Zhang, P. Barham, Vigilante: End-to-End Containment of Internet Worms (Association for Computing Machinery, New York, NY, USA, 2005). https://doi.org/10.1145/1095810.1095824
9. C. Cowan, C. Pu, D. Maier, J. Walpole, P. Bakke, S. Beattie, A. Grier, P. Wagle, Q. Zhang, H. Hinton, StackGuard: automatic adaptive detection and prevention of buffer-overflow attacks, in 7th USENIX Security Symposium (USENIX Security 98) (USENIX Association, San Antonio, TX, 1998). https://www.usenix.org/conference/7th-usenix-security-symposium/stackguard-automatic-adaptive-detection-and-prevention
10. M. Dalton, H. Kannan, C. Kozyrakis, Raksha: A Flexible Information Flow Architecture for Software Security (Association for Computing Machinery, New York, NY, USA, 2007). https://doi.org/10.1145/1250662.1250722
11. P. Efstathopoulos, M. Krohn, S. VanDeBogart, C. Frey, D. Ziegler, E. Kohler, D. Mazières, F. Kaashoek, R. Morris, Labels and event processes in the Asbestos operating system. 39(5) (2005). https://doi.org/10.1145/1095809.1095813
12. H. Etoh, GCC extension for protecting applications from stack-smashing attacks (2004)
13. N. Farzana, F. Rahman, M. Tehranipoor, F. Farahmandi, SoC security verification using property checking, in 2019 IEEE International Test Conference (ITC) (IEEE, 2019), pp. 1–10

14. A. Ferraiuolo, R. Xu, D. Zhang, A.C. Myers, G.E. Suh, Verification of a practical hardware security architecture through static information flow analysis, in Proceedings of the Twenty-Second International Conference on Architectural Support for Programming Languages and Operating Systems (2017), pp. 555–568
15. J.A. Goguen, J. Meseguer, Security policies and security models, in 1982 IEEE Symposium on Security and Privacy (1982), pp. 11–11. https://doi.org/10.1109/SP.1982.10014
16. W. Hu, J. Oberg, A. Irturk, M. Tiwari, T. Sherwood, D. Mu, R. Kastner, Theoretical fundamentals of gate level information flow tracking. IEEE Trans. Comput. Aided Des. Integr. Circuits Syst. 30(8), 1128–1140 (2011). https://doi.org/10.1109/TCAD.2011.2120970
17. M. Krohn, A. Yip, M. Brodsky, N. Cliffer, M.F. Kaashoek, E. Kohler, R. Morris, Information flow control for standard OS abstractions. 41(6) (2007). https://doi.org/10.1145/1323293.1294293
18. X. Li, V. Kashyap, J.K. Oberg, M. Tiwari, V.R. Rajarathinam, R. Kastner, T. Sherwood, B. Hardekopf, F.T. Chong, Sapper: A Language for Hardware-Level Security Policy Enforcement (Association for Computing Machinery, New York, NY, USA, 2014). https://doi.org/10.1145/2541940.2541947
19. X. Li, M. Tiwari, J. Oberg, V. Kashyap, F. Chong, T. Sherwood, B. Hardekopf, Caisson: a hardware description language for secure information flow (2011), pp. 109–120. https://doi.org/10.1145/1993498.1993512
20. L. Lourenço, L. Caires, Dependent information flow types, in Proceedings of the 42nd Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (2015), pp. 317–328
21. D. Moore, V. Paxson, S. Savage, C. Shannon, S. Staniford, N. Weaver, Inside the Slammer worm. IEEE Secur. Priv. 1(4), 33–39 (2003). https://doi.org/10.1109/MSECP.2003.1219056
22. A.C. Myers, JFlow: Practical Mostly-Static Information Flow Control (Association for Computing Machinery, New York, NY, USA, 1999). https://doi.org/10.1145/292540.292561
23. A.C. Myers, B. Liskov, Protecting privacy using the decentralized label model. 9(4) (2000). https://doi.org/10.1145/363516.363526
24. J. Newsome, D. Song, Dynamic taint analysis for automatic detection, analysis, and signature generation of exploits on commodity software (2005)
25. B. Ngabonziza, D. Martin, A. Bailey, H. Cho, S. Martin, TrustZone explained: architectural features and use cases, in 2016 IEEE 2nd International Conference on Collaboration and Internet Computing (CIC) (IEEE, 2016), pp. 445–451
26. J. Oberg, S. Meiklejohn, T. Sherwood, R. Kastner, Leveraging gate-level properties to identify hardware timing channels. IEEE Trans. Comput. Aided Des. Integr. Circuits Syst. 33(9), 1288–1301 (2014). https://doi.org/10.1109/TCAD.2014.2331332
27. F. Qin, C. Wang, Z. Li, H.S. Kim, Y. Zhou, Y. Wu, LIFT: a low-overhead practical information flow tracking system for detecting security attacks, in 2006 39th Annual IEEE/ACM International Symposium on Microarchitecture (MICRO'06) (2006), pp. 135–148. https://doi.org/10.1109/MICRO.2006.29
28. A. Sabelfeld, A. Myers, Language-based information-flow security. IEEE J. Sel. Areas Commun. 21(1), 5–19 (2003). https://doi.org/10.1109/JSAC.2002.806121
29. M. Sabt, M. Achemlal, A. Bouabdallah, Trusted execution environment: what it is, and what it is not, in 2015 IEEE Trustcom/BigDataSE/ISPA, vol. 1 (IEEE, 2015), pp. 57–64
30. Stack Shield, A "stack smashing" technique protection tool for Linux (2006). http://www.angelfire.com/sk/stackshield/
31. G.E. Suh, J.W. Lee, D. Zhang, S. Devadas, Secure program execution via dynamic information flow tracking. 32(5) (2004). https://doi.org/10.1145/1037947.1024404
32. J. Szefer, R.B. Lee, Architectural support for hypervisor-secure virtualization. ACM SIGPLAN Not. 47(4), 437–450 (2012)
33. M. Tiwari, J.K. Oberg, X. Li, J. Valamehr, T. Levin, B. Hardekopf, R. Kastner, F.T. Chong, T. Sherwood, Crafting a usable microkernel, processor, and I/O system with strict and provable information flow security. 39(3) (2011). https://doi.org/10.1145/2024723.2000087

34. M. Tiwari, H.M. Wassel, B. Mazloom, S. Mysore, F.T. Chong, T. Sherwood, Complete information flow tracking from the gates up. 44(3) (2009). https://doi.org/10.1145/1508284.1508258
35. M. Tiwari, H.M. Wassel, B. Mazloom, S. Mysore, F.T. Chong, T. Sherwood, Complete information flow tracking from the gates up. 37(1) (2009). https://doi.org/10.1145/2528521.1508258
36. N. Vachharajani, M. Bridges, J. Chang, R. Rangan, G. Ottoni, J. Blome, G. Reis, M. Vachharajani, D. August, RIFLE: an architectural framework for user-centric information-flow security, in 37th International Symposium on Microarchitecture (MICRO-37'04) (2004), pp. 243–254. https://doi.org/10.1109/MICRO.2004.31
37. J. Wilander, M. Kamkar, A comparison of publicly available tools for dynamic buffer overflow prevention (2003)
38. W. Xu, S. Bhatkar, R. Sekar, Taint-enhanced policy enforcement: a practical approach to defeat a wide range of attacks, in 15th USENIX Security Symposium (USENIX Security 06) (USENIX Association, Vancouver, B.C., Canada, 2006). https://www.usenix.org/conference/15th-usenix-security-symposium/taint-enhanced-policy-enforcement-practical-approach
39. D. Zhang, Y. Wang, G.E. Suh, A.C. Myers, A Hardware Design Language for Timing-Sensitive Information-Flow Security (Association for Computing Machinery, New York, NY, USA, 2015). https://doi.org/10.1145/2694344.2694372
40. D. Zhang, Y. Wang, G.E. Suh, A.C. Myers, A hardware design language for timing-sensitive information-flow security. ACM SIGPLAN Not. 50(4), 503–516 (2015)

Chapter 5

CAD for Hardware Trojan Detection

5.1 Introduction

With extensive competition to add functionality to system-on-chips (SoCs), integrators have been forced to use IPs developed by outside vendors. Entities worldwide supply these designs, and in some cases these design houses are untrusted. The final model might contain additional malicious functionality aimed at jeopardizing the security of the system by leaking design assets, manipulating user data, causing denial-of-service, or even total system shutdown [22]. This extra circuitry injected by the attacker is referred to as a hardware Trojan, and a variety of designs have been introduced so far, each implemented at a specific abstraction layer of the design process [22]. A hardware Trojan designed at higher levels of abstraction (e.g., SystemC or VHDL models) is potentially more complex and more targeted in achieving the attacker's goal with lower overheads. At the same time, the latitude of attack decreases as the design gets closer to the fabrication phase, owing to restrictions on the components available to implement the attack, such as the particular gate types used in the gate-level netlist or the unutilized spaces of a layout. In contrast, the detection probability also decreases closer to the manufacturing phase because fewer stages of model checking and functional verification remain; the Trojan design becomes more entangled with the actual functionality of the golden model, and detection becomes less probable. Figure 5.1 shows the stages at which a Trojan is most likely to be inserted into a design [9]. Figure 5.1 also presents design-time attacks during IP design and integration, placement and routing, and the final manufacturing stage as the most common phases for a Trojan to be inserted into a design. Hardware Trojans consist of two main parts that work in tandem to deliver the attack. The first part is the Trojan trigger, which works as the attacker's initiator.
Triggers can be always-on, activated periodically, or controlled by input signals [5, 20]. The second part is the Trojan payload, which is the actual effect of the Trojan on the design. The payload is the logic that grants unauthorized access to the assets of

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
F. Farahmandi et al., CAD for Hardware Security, https://doi.org/10.1007/978-3-031-26896-0_5

Fig. 5.1 The simplified manufacturing steps and the exact stages where a Trojan can be injected. As can be seen, the Trojan insertion scope spans from pre-silicon to post-silicon stages

the system and jeopardizes their confidentiality, integrity, or availability [9]. Printed circuit board (PCB) and system-level Trojans are a real threat, with instances of attacks like the infamous "Big Hack," which showed how a small chipset added to a PCB can act as a hardware Trojan and leak confidential information to untrusted entities [21]. At this level, additional electronic elements like tiny microchips are added to the layers of the PCB design, where they typically observe the functional behavior and manipulate or redirect the information flow to untrusted parties. Detection methods at the system level range from utilizing the JTAG boundary scan chains for delay testing to image processing methods that try to obtain a bill of materials of the board and automatically detect any abnormal modules placed on it [14]. Many data sets have been gathered on the different sorts of electronic elements used in standard PCB integration, which help pattern matching and image processing algorithms increase the precision of their Trojan module categorization [12, 24]. Most of the research on Trojan detection is at the IP level and concerns how to detect a Trojan through either destructive or non-destructive approaches, depending on whether the chip is usable afterward [19]. Figure 5.2 presents a categorization of the different Trojan detection mechanisms developed in this scope. Post-silicon Trojan detection can become a tedious task involving invasive or semi-invasive analysis that could lead to the chip being destroyed. Destructive methods include complete reverse engineering of the chip, electrical probing, and imaging [27]. However, these methods are not fully practical because they are extremely expensive and because, even in the case of detection, one cannot conclude that all manufactured chips are infected, since the Trojan could have been inserted in only a small portion of them [3].
On the other hand, non-destructive methods can also be invasive or non-invasive. The invasive label originates from detection methods that use additional logic to amplify the Trojan's payload effect and facilitate detection by providing better observability of the design's inner workings. While the mechanisms mentioned above help detect Trojans when the manufacturer is untrusted, there is another crucial attack surface when the IP designer is the rogue party. For example, in a large system-on-chip (SoC), many third-party IPs are gathered from design houses around the world, which can potentially endanger the

Fig. 5.2 Categorization of the Trojan detection methods. Trojan detection methods primarily consist of destructive and non-destructive approaches; non-destructive methods are further divided into invasive (preventive or assistive) and non-invasive (test-time, i.e., logic testing and side-channel analysis, or run-time) techniques

security of the system as a whole. Most methods developed for these attacks focus on detecting Trojans at the soft-IP level, whether at RTL, gate level, or even layout level. Two main detection categories exist in this area: logic testing and side-channel analysis. Logic testing is the process of finding triggers by investigating the details of the design and observing any anomalies relative to a golden model. Side-channel analysis focuses on the effect a Trojan's payload has on covert channels such as power, timing, and electromagnetic emissions, observed by close analysis of these channels; it is an especially good approach for Trojans that leak their payload through such channels. Section 5.2 presents some of the most promising approaches toward Trojan detection, with background on why each of these methods took a particular path toward tackling the issue, focusing on previous soft-IP Trojan detection work using logic testing and side-channel analysis.

5.2 Literature Survey

This section includes two parts: first, methods that incorporate side-channel analysis are introduced, and then different approaches to logic testing for trigger detection are discussed. Side-channel-based Trojan detection utilizes various standard IC side-channel signals, such as voltage, delay, and even temperature, as tools for Trojan identification. This is because digital hardware Trojans tamper

with the measurements on these channels, causing, for example, an increase in path delay and power consumption. These values are compared to a set of reference IC measurements obtained from either simulation or actual prototypes, and a mismatch can reveal the existence of additional circuitry. The limitation of side-channel-based approaches is that process variation in smaller technology nodes can lead to false positives, and for smaller Trojan designs this can decrease detection accuracy drastically [7]. The studies in [25, 30] use delay-based Trojan detection to build delay fingerprints of the golden model and the actual IC for comparison. Recently, machine learning approaches to classifying side-channel leakage behaviors have been on the rise. In [11], side-channel analysis is combined with support vector machines (SVMs) for classification; the channel used is power consumption, and the targeted Trojan is one that leaks the AES key through a resistive leakage channel. Another tool for statistical test generation using side-channel leakage is MERS [13]. This tool starts from an initial test set based on either Hamming distance or simulation and mutates it so that rare nodes with rare switching activity are excited more than a certain threshold. This results in a higher probability of activating the Trojan trigger and, thus, more observability for the verification engineer through side-channel analysis. Several methods stand out when it comes to logic testing, such as Functional Analysis for Nearly Unused Circuit Identification (FANCI), which flags suspicious wires in a design that may be malicious [29]. FANCI uses scalable, approximate Boolean functional analysis to detect stealthy malicious logic.
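The "control value" at the heart of FANCI's functional analysis (formalized in Algorithm 5.1) can be sketched in Python: for a wire's truth table, the control of one input column is the fraction of rows where flipping that input flips the output. This is a simplified scalar version for a single output, and the toy trigger function is our own example, not one from [29]:

```python
from itertools import product

def control_value(f, n_inputs, col):
    """Fraction of truth-table rows where flipping input `col` of the
    Boolean function f (over n_inputs bits) flips the output. Near-zero
    control marks a 'nearly unused' (suspicious) wire in FANCI terms."""
    affected = 0
    rows = 0
    for bits in product([0, 1], repeat=n_inputs - 1):
        row = list(bits)
        lo = f(*(row[:col] + [0] + row[col:]))   # input col fixed to 0
        hi = f(*(row[:col] + [1] + row[col:]))   # input col fixed to 1
        affected += (lo != hi)
        rows += 1
    return affected / rows

# A stealthy trigger input t only matters when the rare pattern a=b=c=1
# holds: out = (a & b & c) ^ (a & b & c & t).  t controls 1 of 8 rows.
trigger = lambda a, b, c, t: (a & b & c) ^ (a & b & c & t)
# control_value(trigger, 4, 3) == 0.125, far below an ordinary XOR
# input, for which control_value(lambda a, b: a ^ b, 2, 0) == 1.0.
```

Wires whose inputs all score near zero rarely influence any output under any stimulus, which is exactly the stealth property FANCI's heuristics threshold against.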
An advantage of FANCI is that it is not hindered by incomplete test suite coverage: it tests all logic equally, regardless of whether or not it is an input interface, so no portion of the logic can go untested. FANCI also operates with a low degree of false negatives and low false-positive rates (less than 1%), and it can analyze all Trust-Hub Trojan-infected designs in a day or less. The previous work, Unused Circuit Identification (UCI), was deterministic and discrete-valued in its approach [10]: given a test suite, UCI flags a wire only if it is entirely unused, regardless of its relations to other wires. FANCI, in contrast, also catches nearly unused wires, i.e., wires that are not completely unused but rarely affect output signals; this is what makes FANCI significant, since, unlike UCI, it can catch such Trojans with high probability. Algorithm 5.1 shows how the FANCI algorithm flags suspicious wires in an untrusted design. The outputs of each gate in a module are examined (lines 3–12). A functional truth table is constructed in line 4 for each output wire over its fan-in inputs. Then, in line 6, FANCI iterates through each of the input columns of the truth table. When evaluating one column, all other columns are held fixed, and each row determines whether the value of the column in question affects the output. Mathematically, two Boolean functions are compared: one fixes the input to digital zero, the other to digital one, and the Boolean difference between them is computed; this determines the fraction of rows controlled by the input column.

Algorithm 5.1: FANCI algorithm to flag suspicious wires in a design [29]

Result: flag suspicious wires in a design
1   for all modules m do
2     for all gates g in m do
3       for all output wires w of g do
4         T ← TruthTable(FanInTree(w))
5         V ← empty vector of control values
6         for all columns c in T do
7           Compute control of c
8           Add Control(c) to vector V
9         end for
10        Compute heuristics for V
11        Denote w as suspicious or not suspicious
12      end for
13    end for
14  end for

After computing control values for each input, heuristics are applied in line 10 to differentiate suspicious wires from the other candidates. Because of a hardware Trojan's stealthy nature, it is often hidden in wires that rarely affect the outputs. The authors in [29] observe that the control values are zero or nearly zero for wires that only weakly affect the output, i.e., wires belonging to stealthy backdoor triggers. Thus, after heuristics are computed over the control vector, the result is compared with a predefined threshold: if it is smaller than the threshold, the wire is flagged as suspicious; otherwise it is not. Though the authors claim that FANCI can be applied to real-world designs today, it has some limitations. FANCI assumes that a backdoor is hidden in rarely used circuits and thus identifies only rare or unused circuit parts. Hence, a less stealthy backdoor can evade FANCI, though it is likely to be caught during ordinary testing; such an attack is called a Frequent-Action Backdoor, where the idea is to put the backdoor in plain sight. Moreover, if a malicious designer includes hundreds of backdoor-like circuits of which only one triggers a payload, FANCI will flag all of them and generate false positives; this attack is called "false-positive flooding." Another attack, the Pathological Pipeline Backdoor, uses a vast and deep sequential state machine that may also be able to evade FANCI detection. Beyond statistical approaches, there are many formal tools in the literature for Trojan detection. The authors of [8] use polynomial representations of both the implementation and the specification at the gate level, and by investigating the remainder of dividing these two sets of polynomials, they attempt to localize the actual gate that has changed. This tool uses Gröbner basis reduction as the foundation for equivalence checking.
Figure 5.3 shows the flow of the steps toward Trojan localization. The main problem with this approach, and with many formal methods, is scalability when dealing with larger benchmarks; in this specific method it manifests as term explosion during polynomial reduction.

Fig. 5.3 Hardware Trojan localization using Gröbner basis and polynomial representations [8]. Specification polynomials are derived from the golden netlist and the final implementation, and the remainder of reducing the specification set is computed: if all remainder elements are zero the implementation is trusted; otherwise it is untrusted, and the Trojan is localized from the nonzero remainder elements

Because of the scalability issues of purely formal implementations, many detection approaches use a hybrid mechanism in which dynamic simulation serves as a scalable tool to help the formal engine prune the search space and improve its performance. The tool introduced in [6] uses automatic test pattern generation (ATPG) tools in addition to model checking to tackle the limited observability of sequential designs with partial-scan architecture. In the first step, rare branches and suspicious gates are identified through random simulation and statistical analysis of the gate-level netlist. In a partial-scan architecture, these signals are primarily located after non-scan flip-flops because of their low controllability. By replacing the scan flip-flops with primary inputs, many signals become controllable, allowing model checkers to generate traces that activate rare candidate signals. These traces include all the assignments to the signals in the zone of influence of the target signal and might not cover all the primary inputs or exchanged scan flip-flops. The traces are passed to the ATPG via a simple primitive AND gate with all the signal assignments as inputs and the target stuck-at fault at the output. This way, a fault deep in the sequential logic is brought closer to the design inputs, and ATPG can generate test patterns faster than with common sequential ATPG. The limitations of this tool come from both methodologies used. The main issue is the many false positives produced by gate-level simulation, since many functional parts of the netlist are never activated; every candidate from this step must be examined in the consecutive stages of the tool. Also, while the assumption that signals after non-scan flip-flops are rarely activated is generally true, it depends on the design and may introduce many false candidates to the model checker in cases where a simple ATPG process would have been sufficient.
The results confirm this for smaller benchmarks, where the performance is worse than with a standalone ATPG process. The approach is compared against using only ATPG, using only a model checker, and one other technique that uses both. The results in Table 5.1 show the detection of the different Trojan benchmarks and compare the performance of the methods. The MERO tool [4] is another statistical approach that uses ATPG to prune test patterns and obtain a more compact set of vectors triggering the greatest number of rare signals; MERO focuses on full-scan architectures and uses the same underlying tools as the study at hand.
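The rare-signal identification step shared by MERS, MERO, and the hybrid flow of [6] can be sketched as follows; the tiny AND-tree netlist and the 10% threshold are illustrative choices of ours, not values from those papers:

```python
import random

# Tiny combinational netlist as (output, op, inputs). The AND-tree node
# 'rare' goes high only when all four inputs are 1 -- a classic site for
# a Trojan trigger, since random stimulus almost never excites it.
NETLIST = [
    ("n1", "AND", ["a", "b"]),
    ("n2", "AND", ["c", "d"]),
    ("rare", "AND", ["n1", "n2"]),   # high with probability 1/16
    ("out", "OR",  ["a", "rare"]),
]

def evaluate(inputs):
    """Evaluate the netlist in topological order for one input vector."""
    sig = dict(inputs)
    for out, op, ins in NETLIST:
        vals = [sig[i] for i in ins]
        sig[out] = int(all(vals)) if op == "AND" else int(any(vals))
    return sig

def rare_nodes(n_patterns=2000, threshold=0.1, seed=0):
    """Flag internal signals whose '1' frequency under random stimulus is
    below `threshold` -- candidates for trigger logic that directed test
    generation (MERS/MERO) or model checking should then target."""
    rng = random.Random(seed)
    counts = {out: 0 for out, _, _ in NETLIST}
    for _ in range(n_patterns):
        sig = evaluate({x: rng.randint(0, 1) for x in "abcd"})
        for out in counts:
            counts[out] += sig[out]
    return [s for s, c in counts.items() if c / n_patterns < threshold]
```

On this netlist, rare_nodes() flags only 'rare' (activation probability 1/16, well below the threshold), while 'n1', 'n2', and 'out' are excited often enough to pass.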

| Benchmark  | Scan FFs (scan/total) | #Rare bran. | Test cov. | ATPG Detect / Time | MC Detect / Time | MERO Detect / Time | ATPG+MC [6] Detect / Time |
|------------|-----------------------|-------------|-----------|--------------------|------------------|--------------------|---------------------------|
| AES-T1000  | 6448/6933             | 2           | 99%       | ✓ / 0.02 s         | ✓ / 85.6 s       | ✗ / TO             | ✓ / 8.80 s                |
| AES-T2000  | 6468/7108             | 5           | 99%       | ✓ / 0.90 s         | ✓ / 216.5 s      | ✗ / TO             | ✓ / 22.0 s                |
| RS232-T400 | 30/59                 | 2           | 97%       | ✓ / 0.24 s         | ✓ / 3600 s       | ✓ / 2810 s         | ✓ / 0.52 s                |
| RS232-T800 | 26/58                 | 1           | 97%       | ✓ / 0.06 s         | ✓ / 7.23 s       | ✓ / 3157 s         | ✓ / 0.12 s                |
| cba es1 5  | 5006/5889             | 1           | 99%       | ✓ / 28,800 s       | ✗ / MO           | ✗ / 15,720 s       | ✓ / 7.85 s                |
| cba es2 0  | 7262/7809             | 1           | 99%       | ✓ / 28,800 s       | ✗ / MO           | ✗ / 16,740 s       | ✓ / 38.3 s                |

Table 5.1 The results of the ATPG and model checking approach [6] compared to ATPG only, model checking (MC) only, and the statistical approach MERO [4] (TO = timeout, MO = memory out)


Fig. 5.4 The overall flow of single-target Concolic testing for Trojan activation: (a–b) target pruning, (c–d) path selection, (e) concrete simulation, and (f) symbolic execution

One promising approach for fast and scalable test pattern generation at the RTL abstraction is Concolic testing. This method interleaves normal concrete simulation with symbolic model checking that leads the simulation down new paths for test generation. In Concolic testing, the model is first simulated with a random input seed; then, based on the constraints of the path taken in that iteration, a new path is selected for the next simulation session. The novelty of this study is that the intelligent selection of these alternative paths is based on the distance from the currently traversed path to the target Trojan candidates [1]. Figure 5.4 provides a good overview of the steps involved in a Concolic testing platform. First, through random simulation for sufficiently many cycles, the rare branches of the RTL code are identified. Then the overlapping targets are pruned using static analysis, resulting in a smaller set of targets to activate. The approach takes a single target per iteration and, through simulation, observes a control-flow path execution for a predefined number of cycles. Branch selection is the process of negating one of the conditions on the traversed path and, if the result is satisfiable, taking the new path for concrete execution while checking whether the target is reached. The selection of which condition to negate next is crucial in this study, and it is done through distance evaluation. First, all concurrent blocks are converted to control-flow graphs based on the dependency of each condition on global assignments in the design, and extra edges are added to connect these graphs. A search is conducted to evaluate the distance from each target to each node of this graph, and the condition closest to the target is selected for the next negation. This approach provides far better performance and coverage than pure model checking methods.
Whereas pure model checking suffers from state-space explosion, Concolic testing avoids this problem by executing paths concretely, converting the problem into one of path-selection quality. The main weakness is the approach's dependence on the first random seed: if that seed is not close to a target, the execution time increases drastically. Further optimizations save the closest random seeds for each target and reuse those previous efforts in future searches. The results of this method show the power of Concolic testing, indicating that EBMC [17] uses exponentially more memory as the size of the AES design increases. The results are sorted by design size, and the timing and memory usage are far better than with property definition and pure model checking.
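The distance-guided branch selection described above can be sketched in a few lines. The graph, node names, and distances below are hypothetical; a real implementation would build the control-flow graph from the RTL and repeat the search per target:

```python
from collections import deque

def distances_to_target(cfg, target):
    """BFS over reversed edges: distance from every node to the target."""
    rev = {n: [] for n in cfg}
    for n, succs in cfg.items():
        for s in succs:
            rev[s].append(n)
    dist = {target: 0}
    q = deque([target])
    while q:
        n = q.popleft()
        for p in rev[n]:
            if p not in dist:
                dist[p] = dist[n] + 1
                q.append(p)
    return dist

def pick_branch_to_negate(path_branches, dist):
    """Among branches on the traversed path, pick the one closest to the target."""
    reachable = [b for b in path_branches if b in dist]
    return min(reachable, key=lambda b: dist[b]) if reachable else None

# Toy CFG: edges point from a branch node to its successors; "t" is the target.
cfg = {"b1": ["b2", "b3"], "b2": ["b4"], "b3": ["t"], "b4": [], "t": []}
dist = distances_to_target(cfg, "t")
print(pick_branch_to_negate(["b1", "b2"], dist))  # b1 (it can still reach t)
```

Negating the chosen condition then yields the path constraint handed to the solver for the next concrete run.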

5.3 SymbA: Symbolic Execution at C-level for Hardware Trojan Activation

With all the lessons learned from the approaches mentioned above, it can be concluded that the most efficient way of tackling Trojan detection is to use formal methods in conjunction with simulation-based strategies that guide the formal tool toward identifying the Trojan. SymbA [26] is a Concolic execution tool for detecting Trojans in 3PIPs designed at the register-transfer level. This section discusses the details of this approach as a possible solution that outperforms the previously mentioned approaches.

5.3.1 Preliminary Concepts

SymbA utilizes symbolic execution as its formal tool and directs the exploration of the search space toward specific lines of code that are not activated during a simulation session with random inputs. Symbolic execution is a concept widely used in the software domain for verification and debugging. At its core, symbolic execution uses symbolic variables instead of concrete values. It explores different execution paths at once instead of going down only one route dictated by input values, which reduces verification run-time while increasing the coverage of simulation-based validation. The possible execution paths can be represented as a tree, with the root being the program's entry point and the leaves acting as exit points. If concrete values are applied to the inputs, one of these paths is traversed at a time, whereas in symbolic execution a formula over the symbolic variables is constructed at each node and passed down to the next node along the path. By solving the final formula at a leaf, one can obtain the input values that lead execution to that leaf node [15]. Assertions exist to encourage specific scenarios and pinpoint the execution paths that direct the model toward those user-defined conditions. Figure 5.5 presents a simple example of how symbolic execution explores different execution paths in the simple C++ program shown in Fig. 5.5a. The conditions are determined by the input values a, b, and c, and an assertion statement is defined in line 13. In practice, it is hard to explore all


5 CAD for Hardware Trojan Detection


Fig. 5.5 An example showing how symbolic execution provides the means to explore different execution paths in a simple C++ program. (a) A simple C++ code with input variables a, b, and c. (b) Possible execution paths shown in graph form. The red cross marks the specific branch that is the target of test generation

possible paths in large software codebases because of the enormous number of execution flows. A symbolic engine builds the path constraints dynamically and prunes the search space for symbolic execution while keeping track of unexplored paths and of concrete values that change during the execution of each path. The final formulas obtained at the exit points of the code are shown at the leaves of the tree in Fig. 5.5b; they are solved by a satisfiability modulo theories (SMT) solver that returns concrete values for each symbolic variable on every path. If no consistent answer to a formula is found, a value indicating unsatisfiability is returned. The assertion statement is only triggered on the execution path marked with a red cross, and the solver generates the counterexample leading to that specific path. By modeling rare branches of the design with such assertions, SymbA generates test patterns for targeted Trojan activation. A symbolic execution engine has been integrated that investigates the translated design at the C level.
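As a toy illustration of this path-enumeration-and-solve loop (not SymbA's actual engine; the program, predicate names, and the brute-force search standing in for an SMT solver are all hypothetical):

```python
from itertools import product

def paths():
    """Enumerate path constraints as lists of predicates over inputs (a, b, c).

    The predicates mirror the Fig. 5.5 style (a != 0, b > 3, c > 0) but are
    illustrative only, not taken from the actual figure.
    """
    A  = lambda a, b, c: a != 0
    B3 = lambda a, b, c: b > 3
    C0 = lambda a, b, c: c > 0
    neg = lambda p: (lambda a, b, c: not p(a, b, c))
    return {
        "assert_hit":  [A, B3, C0],          # path guarded by a && b>3 && c>0
        "other":       [A, B3, neg(C0)],
        "fallthrough": [neg(A)],
    }

def solve(constraints, domain=range(-5, 6)):
    """Stand-in for an SMT solver: search a small domain for a witness."""
    for a, b, c in product(domain, repeat=3):
        if all(p(a, b, c) for p in constraints):
            return a, b, c
    return None  # unsatisfiable over this domain

witness = solve(paths()["assert_hit"])
print(witness)  # a concrete (a, b, c) that drives execution to the asserted path
```

A real engine replaces the exhaustive search with an SMT query, which also yields unsatisfiability proofs instead of merely failing to find a witness.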

5.3.2 Why C-level Symbolic Execution?

The proposed software-level, model-based technique generates test patterns for Trojan activation because abstracting the design at the C/C++ level enables faster simulation and therefore covers more scenarios. Moreover, it can use a large set of tools that, unfortunately, are not available at RTL. These tools provide a high degree of configurability and parallelism. This simplification does not reduce the search space in a way that eliminates a portion of the possible solutions. The functionality of the

module Trig( input clk, input rst, input [127:0] plaintext );
  reg State0, State1;
  reg Trojan_Trig;
  always @(posedge clk) begin
    if (rst == 1) begin
      State0

(>90%) suggest that RTL-PSC's vulnerability assessment results are nearly as accurate as GTL's. The SAKURA-G board uses two SPARTAN-6 FPGAs for silicon validation. Based on significant correlation coefficient values (>80%), RTL-level PSC vulnerability assessment results are nearly as accurate as the FPGA evaluation. RTL-PSC evaluates AES-GF in 46.3 min and AES-LUT in 24.03 min; the same experiments on gate-level designs would take around 31 h. RTL-PSC is therefore 42X more efficient than comparable gate-level analyses. However, many implementation details, including clock-gating, gate delay, datapath gating, clock and power network topology, and retiming, are not available at RTL.

6.3.2.3 Strengths, Limitations, and Future Work

The strength of RTL-PSC is its ability to provide a complete PSC vulnerability assessment at both the RTL and block levels of granularity. Designers can determine which blocks are vulnerable using much faster simulation, albeit with lower precision than post-silicon evaluation. Experimental results validate the effectiveness of the proposed metrics against gate-level and FPGA metrics. For the time being, however, the framework has only been demonstrated on standalone AES implementations; it can be extended to the SoC level to show which modules are vulnerable. The power model also ignores disturbances such as clock-gating, gate delay, and clock and power network layout. As a result, power estimation based on the toggle count will be far from the actual power consumption in large designs. A more exact power estimation model can be adopted in the future to account for minor fluctuations.
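Toggle-count-based power estimation of this kind is commonly approximated by the Hamming distance between successive register states. The following is an illustrative sketch of that proxy, not RTL-PSC's actual implementation:

```python
def toggle_count(prev_state: int, next_state: int) -> int:
    """Hamming distance between successive register snapshots: a common
    proxy for dynamic power at RTL (ignores glitches, gate delay, wire
    capacitance, and the other effects noted above)."""
    return bin(prev_state ^ next_state).count("1")

def power_trace(states):
    """Per-cycle toggle counts for a sequence of register snapshots."""
    return [toggle_count(a, b) for a, b in zip(states, states[1:])]

print(power_trace([0b0000, 0b1111, 0b1110, 0b1110]))  # [4, 1, 0]
```

Because the proxy counts only bit flips, two cycles with identical states contribute zero estimated power, which is exactly why the estimate diverges from silicon in large designs.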

6.3.3 Computer-Aided SCA Design Environment (CASCADE)

To reduce the design time of a system, side-channel leakage evaluation and design must proceed in parallel. Until now, there has been no automation for such parallel design and side-channel evaluation, which increases time-to-market and consumes expensive engineering time. In [24], the authors propose a framework that enables automated side-channel evaluation at design time, saving cost by reducing time-to-market and the expert engineering hours spent in the design phase.

6.3.3.1 Methodology

Figure 6.5 shows the overall framework of the proposed method. All tool invocations and other modifications are performed through a command-line interface (CLI). The framework consists of a library manager (LM), a session manager (SM), handlers, synthesis phases, simulation phases, test generators, parsers, and analyzers. The SM is the main driver of the framework: it controls the whole design process and, with the help of the LM, integrates the relevant parts of the design with the corresponding libraries from a given set of configuration parameters. First, the RTL design of the system is taken as input. Synthesis and translation phases are then applied at various stages of the RTL. After that, logic-level, physical-level, and SPICE simulations are performed on the synthesized design; generators provide the test stimuli these simulations work with. Next, parsers convert the simulation files into power frame file (PFF) format, which is required for SCA. The PFF files are analyzed using TVLA [23] or CPA techniques, and a continuous power waveform is reconstructed and written to an analyzed frame data (AFD) file, which is then used to estimate the side-channel resistance of the design. Based on this estimate, modifications can be advised and applied to make the design more resistant to power side channels.

Figure 6.6 shows the design flow using the framework. The authors integrated industrial-grade tools into the framework, which raises the level of confidence in the assessment. The tools used in the different stages of the framework are listed in Table 6.4. To quantify side-channel vulnerabilities, the authors incorporated timing and power models into the simulation of the design, allowing them to complete a side-channel evaluation with greater efficiency.
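The TVLA analysis applied to such per-sample trace data is, at its core, a Welch's t-test per sample point against the commonly used |t| > 4.5 threshold. The sketch below is a generic illustration of TVLA, not CASCADE's code, and the trace values are made up:

```python
import math

def welch_t(x, y):
    """Welch's t-statistic between two groups of leakage samples."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    return (mx - my) / math.sqrt(vx / nx + vy / ny)

def tvla_leaky(fixed_traces, random_traces, threshold=4.5):
    """Flag sample points whose |t| exceeds the usual TVLA threshold."""
    return [i for i, (f, r) in enumerate(zip(zip(*fixed_traces),
                                             zip(*random_traces)))
            if abs(welch_t(f, r)) > threshold]

# Four fixed-input traces and four random-input traces, two samples each.
fixed = [[10, 0], [11, 1], [10, 0], [11, 1]]
rnd   = [[ 2, 0], [ 3, 1], [ 2, 0], [ 3, 1]]
print(tvla_leaky(fixed, rnd))  # [0]: only sample point 0 leaks
```

A design passes this (first-order, univariate) test when no sample point exceeds the threshold over a large trace set.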
For the power model, the authors relied on the composite current source (CCS) model, as it requires less computation while producing timing and power estimates close to SPICE models [7]. Until the



Fig. 6.5 High-level architecture of CASCADE [24]. CASCADE can be accessed via a Command Line Interface (CLI). In the framework, the session manager (SM) plays a central role


Fig. 6.6 Standard cell (SC) design flow using CASCADE [24]. CASCADE enables SCA evaluation at every step. When security requirements for the current stage are met, proceeding to the next stage is permitted


6 CAD for Power Side-Channel Detection

Table 6.4 Commercial EDA tools used in the framework

Acronym  Function             Tool
LS       Logic synthesis      Synopsys Design Compiler
LT       Logic translation    Synopsys Design Compiler
PS       Physical synthesis   Cadence Innovus
LSIM     Logic simulation     Mentor Graphics QuestaSim
PSIM     Physical simulation  Synopsys PrimeTime with PX
SSIM     SPICE simulation     Synopsys HSPICE

Table 6.5 Process time for 1 million PAR traces of AES-U and other circuits. AES-U is larger than many embedded devices' security-dedicated areas. Using a single thread of an i7-7700 workstation, a million traces can be simulated and processed in less than 32 h [24]

Design                 Area [kGE]   Frame [ps]   LSIM [h]
TI PRESENT S-box       0.35         1800         0.02
WDDL PRESENT S-layer   2.98         1800         0.14
BP AES S-Box           5.45         5000         0.41
AES-U                  127.18       30000        25.81

LP [h] 100K
TVLA 96.5 30.7 32.3 11.7
HAC 0.80 0.57 0.40 0.10

ways to create two matrices J01 and J10. A distance matrix C is calculated from J01 and J10; it contains the distances between points of one matrix and the other, as well as the distances among rows within each matrix. The distance matrix C is then used to calculate a set-membership matrix Q, which specifies the probability of a member being from either J01 or J10. The authors assume that the strength of the design lies in this probability matrix: the closer a member of Q is to having equal probability of belonging to J01 or J10, the more secure the design is. To measure the strength of the design, the matrix Q is sorted, and a value z and a confidence interval are calculated. The slope of z signifies the strength of the design: a smaller slope of z indicates that the design is more secure against power SCA.
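A much-simplified, hypothetical sketch of the nearest-neighbor intuition behind Q (using squared Euclidean distance and only the single nearest neighbor, unlike the full multi-neighbor method):

```python
def nearest_neighbor_mix(group0, group1):
    """Fraction of traces whose nearest neighbor lies in the *other* group.

    Values near 0.5 mean the two trace groups are statistically
    interchangeable (hard to distinguish, hence more SCA-resistant);
    values near 0.0 mean the groups separate cleanly (leaky design).
    """
    labeled = [(t, 0) for t in group0] + [(t, 1) for t in group1]
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    cross = 0
    for i, (t, lab) in enumerate(labeled):
        others = [(dist(t, u), l) for j, (u, l) in enumerate(labeled) if j != i]
        _, nn_label = min(others)          # nearest neighbor's group label
        cross += nn_label != lab
    return cross / len(labeled)

# Two well-separated clusters of 2-sample traces: a clearly leaky case.
distinct = nearest_neighbor_mix([(0, 0), (0, 1)], [(9, 9), (9, 8)])
print(distinct)  # 0.0
```

The actual metric builds the full matrix Q, sorts it, and examines the slope of the resulting curve z rather than a single fraction.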

6.3.5.2 Experimental Results

The study investigated traces from four distinct S-Box implementations on an FPGA, captured with a PXIe-5186 oscilloscope from a SAKURA-G evaluation board. HAC diagrams were produced for the four FPGA designs tested in [1]. The fraction of column numbers in Q is represented on the x-axis, and the fraction of joint traces from L0||L1 whose kth neighbors are in L1||L0 instead of L0||L1 is represented on the y-axis. A larger slope of the line z means that the groups of traces in D0 and D1 are not interchangeable, which results in a less secure DUT. The results show that the HLS implementation gives the best power SCA resistance, whereas the LUT implementation is the most vulnerable to power SC attacks. To validate the claims of this approach, these results were compared with the established SCA methods CPA MTD and TVLA in Table 6.7. The comparison shows that, regarding power SCA, HAC reaches the same conclusions as CPA MTD and TVLA.

6.3.5.3 Strengths and Limitations

The strengths of the approach lie in its bivariate assessment, its holistic assessment, and its ability to quantify the resistance of a design. The bivariate method enables superior assessment results, and the holistic approach allows the method to determine the overall resistance of a design, eliminating the need to identify points of interest (POI). In contrast to the "pass/fail" verdict of established SCA methods, this work quantifies the resistance of designs, making it possible to compare them against one another. Although the approach uses bivariate analysis of power traces, the authors do not address analysis with more than two variables. Moreover, they do not specify a cut-off threshold for the quantified value at which a design passes the SCA tests, leaving the designer to choose the cut-off. Also, because the method evaluates the strength of a design in its entirety, it cannot point out a smaller area within the design that must be addressed to increase power SCA resiliency.

6.4 Summary

Several approaches relevant to CAD tools for power SCA have been discussed in this chapter. The review covers topics such as integrating power side-channel analysis into the design flow [11, 17, 18, 20, 21, 30], metrics for side-channel evaluation [13, 24], and a new machine learning method for power SCA [1]. Although machine learning methods for side-channel assessment require fewer traces, investigations reveal that they produce less accurate results. Furthermore, a new multivariate-analysis-based metric can be used to assess a design's power side-channel hardness more precisely. Design-flow approaches that utilize power SCA techniques at design time can aid the design process by allowing power side-channel countermeasures to be integrated into the system during the design phase.

References

1. A. Althoff, J. Blackstone, R. Kastner, Holistic power side-channel leakage assessment: towards a robust multidimensional metric, in 2019 IEEE/ACM International Conference on Computer-Aided Design (ICCAD) (IEEE, Piscataway, 2019), pp. 1–8
2. G. Becker, J. Cooper, E. De Mulder, G. Goodwill, J. Jaffe, G. Kenworthy, Test vector leakage assessment (TVLA) derived test requirements (DTR) with AES, in International Cryptographic Module Conference (2013)
3. S. Bhasin, J.L. Danger, T. Graba, Y. Mathieu, D. Fujimoto, M. Nagata, Physical security evaluation at an early design-phase: a side-channel aware simulation methodology, in Proceedings of International Workshop on Engineering Simulations for Cyber-Physical Systems (2013), pp. 13–20
4. E. Brier, C. Clavier, F. Olivier, Correlation power analysis with a leakage model, in International Workshop on Cryptographic Hardware and Embedded Systems (Springer, Berlin, 2004), pp. 16–29
5. M. Bucci, R. Luzzi, F. Menichelli, R. Menicocci, M. Olivieri, A. Trifiletti, Testing power-analysis attack susceptibility in register-transfer level designs. IET Inform. Secur. 1(3), 128–133 (2007)
6. Z. Chen, P. Schaumont, Early feedback on side-channel risks with accelerated toggle-counting, in 2009 IEEE International Workshop on Hardware-Oriented Security and Trust (IEEE, Piscataway, 2009), pp. 90–95
7. T. El Motassadeq, CCS vs NLDM comparison based on a complete automated correlation flow between PrimeTime and HSPICE, in 2011 Saudi International Electronics, Communications and Photonics Conference (SIECPC) (IEEE, Piscataway, 2011), pp. 1–5
8. D. Fujimoto, M. Nagata, T. Katashita, A. Sasaki, Y. Hori, A. Satoh, A fast power current analysis methodology using capacitor charging model for side channel attack evaluation, in 2011 IEEE International Symposium on Hardware-Oriented Security and Trust (IEEE, Piscataway, 2011), pp. 87–92
9. B. Gierlichs, L. Batina, P. Tuyls, B. Preneel, Mutual information analysis, in International Workshop on Cryptographic Hardware and Embedded Systems (Springer, Berlin, 2008), pp. 426–442
10. B. Gladman, Implementations of AES (Rjindael) in C/C++ and assembler (2010). http://gladman.plushost.co.uk/oldsite/cryptography technology/index.php
11. M. He, J. Park, A. Nahiyan, A. Vassilev, Y. Jin, M. Tehranipoor, RTL-PSC: automated power side-channel leakage assessment at register-transfer level, in 2019 IEEE 37th VLSI Test Symposium (VTS) (IEEE, Piscataway, 2019), pp. 1–6
12. P. Kocher, J. Jaffe, B. Jun, Differential power analysis, in Annual International Cryptology Conference (Springer, Berlin, 1999), pp. 388–397
13. A. Krieg, J. Grinschgl, C. Steger, R. Weiss, H. Bock, J. Haid, System side-channel leakage emulation for HW/SW security coverification of MPSoCs, in 2012 IEEE 15th International Symposium on Design and Diagnostics of Electronic Circuits & Systems (DDECS) (IEEE, Piscataway, 2012), pp. 139–144
14. T.H. Le, J. Clédière, C. Canovas, B. Robisson, C. Servière, J.L. Lacoume, A proposition for correlation power analysis enhancement, in International Workshop on Cryptographic Hardware and Embedded Systems (Springer, Berlin, 2006), pp. 174–186
15. S. Mangard, Hardware countermeasures against DPA—a statistical analysis of their effectiveness, in Cryptographers' Track at the RSA Conference (Springer, Berlin, 2004), pp. 222–235
16. T.S. Messerges, E.A. Dabbish, R.H. Sloan, Examining smart-card security under the threat of power analysis attacks. IEEE Trans. Comput. 51(5), 541–552 (2002)
17. A. Nahiyan, J. Park, M. He, Y. Iskander, F. Farahmandi, D. Forte, M. Tehranipoor, SCRIPT: a CAD framework for power side-channel vulnerability assessment using information flow tracking and pattern generation. ACM Trans. Design Autom. Electron. Syst. 25(3), 1–27 (2020)
18. J. Park, N.N. Anandakumar, D. Saha, D. Mehta, N. Pundir, F. Rahman, F. Farahmandi, M.M. Tehranipoor, in IACR Cryptol. ePrint Arch., vol. 2022 (2022), p. 527
19. A. Poschmann, A. Moradi, K. Khoo, C.W. Lim, H. Wang, S. Ling, Side-channel resistant crypto for less than 2300 GE. J. Cryptol. 24(2), 322–345 (2011)
20. N. Pundir, J. Park, F. Farahmandi, M. Tehranipoor, Power side-channel leakage assessment framework at register-transfer level, in IEEE Transactions on Very Large Scale Integration (VLSI) Systems (2022)
21. S. Roy et al., Self-timed sensors for detecting static optical side channel attacks, in 2022 23rd International Symposium on Quality Electronic Design (ISQED) (IEEE, Piscataway, 2022), pp. 1–6
22. H. Satyanarayana, AES128 (2012). http://opencores.org/project. AES crypto core
23. T. Schneider, A. Moradi, Leakage assessment methodology, in International Workshop on Cryptographic Hardware and Embedded Systems (Springer, Berlin, 2015), pp. 495–513
24. D. Sijacic, J. Balasch, B. Yang, S. Ghosh, I. Verbauwhede, Towards efficient and automated side channel evaluations at design time. Kalpa Publ. Comput. 7, 16–31 (2018)
25. F.X. Standaert, T.G. Malkin, M. Yung, A unified framework for the analysis of side-channel key recovery attacks, in Annual International Conference on the Theory and Applications of Cryptographic Techniques (Springer, Berlin, 2009), pp. 443–461
26. K. Tiri, D. Hwang, A. Hodjat, B.C. Lai, S. Yang, P. Schaumont, I. Verbauwhede, Prototype IC with WDDL and differential routing–DPA resistance assessment, in International Workshop on Cryptographic Hardware and Embedded Systems (Springer, Berlin, 2005), pp. 354–365
27. K. Tiri, I. Verbauwhede, Simulation models for side-channel information leaks, in Proceedings of the 42nd Design Automation Conference, 2005 (IEEE, Piscataway, 2005), pp. 228–233
28. L. Zhang, A.A. Ding, Y. Fei, P. Luo, A unified metric for quantifying information leakage of cryptographic devices under power analysis attacks, in International Conference on the Theory and Application of Cryptology and Information Security (Springer, Berlin, 2015), pp. 338–360
29. L. Zhang, D. Mu, W. Hu, Y. Tai, Machine-learning-based side-channel leakage detection in electronic system-level synthesis. IEEE Netw. 34(3), 44–49 (2020)
30. T. Zhang, J. Park, M. Tehranipoor, F. Farahmandi, in 2021 58th ACM/IEEE Design Automation Conference (DAC) (2021), pp. 709–714

Chapter 7

CAD for Fault Injection Detection

7.1 Introduction

In the last decade, there has been an exponential increase in the number of system-on-chip (SoC) devices available on the market. These embedded systems exist globally, ranging from consumer electronics to space applications [9, 14, 15, 17], and ordinary users trust them to securely maintain and operate our data. SoC devices rely on hardware-implemented cryptographic algorithms [2, 27] to guarantee our data's confidentiality and integrity. Although security is built into many of these devices, they are still deployed in uncontrolled, hostile environments where adversaries exploit system vulnerabilities through physical attacks [16]. Adversaries use a variety of physical attacks to bypass a chip's built-in security mechanisms [22] and observe unencrypted confidential data or cryptographic information [29]. Physical attacks can be categorized into two classes: passive attacks and active attacks [16]. Of these two, only active attacks fall within the scope of this chapter. An active attack's goal is to cause a malfunction on the target device by modifying its operating environment [16, 31]. Adversaries perform active attacks with fault injection (FI) to either bypass security mechanisms or extract secret information from faulty outputs [3, 8, 13, 16]. FI attacks can have two different effects on a target device: global or local. Global faults occur when a fault spreads to affect the entire device; local faults occur only in the vicinity of the injection probe [31]. FI attacks are a severe threat to SoCs because many FI techniques [5, 7, 17, 20] can be carried out by a single attacker using low-cost equipment and mid-tier expertise [16]. FI techniques can be split into invasive, semi-invasive, and non-invasive attacks [6].
In invasive attacks, reverse engineering and imaging are performed layer by layer to understand the device's inner workings; the device is then modified using a focused ion beam (FIB) to create a test or injection point, and finally microprobes inject faults into the device [4]. Previous works have integrated shielding and tamper sensors into the metal layers to prevent an attacker from using microprobing to extract sensitive data from the device [4]. These countermeasures aim to detect any added capacitance introduced to the device by semi-invasive FI attacks that penetrate the backside of integrated circuits (ICs). An attacker can then inject faults into the target device using flash glitching or laser glitching: applying a strong laser beam to the device changes state and data registers so as to leak the cryptographic key [25]. Laser glitching may allow an attacker to inject faults into a specific or broad area of the device [4]. Researchers have used tamper sensors and UV and IR protection to prevent attackers from successfully using semi-invasive FI techniques [26]. Non-invasive attacks are the third type. This chapter focuses on non-invasive attacks because they are low cost and can be performed by attackers with little knowledge of the device [4, 16, 31]. Unlike invasive and semi-invasive attacks, non-invasive attacks do not require any modification to the device. Clock glitching is a common type of FI attack: it tampers with the clock pin to speed up or slow down the clock, causing faulty outputs in all clocked operations [4, 16]. Clock glitching keeps flip-flops (FFs) from latching the correct data and can cause setup- and hold-time violations that allow states and instructions to be bypassed in a design [4]. Previous works used SASEBO and SAKURA boards to evaluate the resilience of an FPGA against FI attacks. However, this scheme can only detect clock glitches in FPGAs and ASICs [16] and overlooks the vulnerability of microcontrollers, which poses a significant security risk since many SoC devices rely on microcontrollers [1, 21].

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023. F. Farahmandi et al., CAD for Hardware Security, https://doi.org/10.1007/978-3-031-26896-0_7
Furthermore, researchers have proposed linear, nonlinear, multilinear, and multi-robust codes to protect FSMs from FI attacks [22], preventing the setup- and hold-time violation attacks commonly mounted on FSM designs. However, these techniques incur significant area overhead; because many SoCs are already area-restricted, integrating them into the device is often not feasible. In addition to clock glitching, attackers can use power supply and electromagnetic (EM) glitch attacks to inject faults. Power supply glitching is performed by lowering the voltage below the expected level to cause a faulty output; EM attacks are performed by generating EM signals and directing them at the device [4]. An attacker uses these methods to violate timing constraints in the targeted device [31]. Delay-based countermeasures (CMs) have been proposed and implemented to combat these types of FI attacks. However, existing delay-based CMs can fail to trigger an alarm when a fault is induced by an EM disturbance located far away from the detector [31]. FI techniques can be performed individually or combined in a multi-fault injection attack, in which multiple fault models are used simultaneously to inject faults into the device. Previous works have addressed portions of the multi-fault attack challenge using genetic algorithms and deep learning [18, 19, 24], but none of these solutions address all the difficulties associated with multi-fault attacks [29].

7.2 Background on FI Prevention and Detection

151

This chapter provides a literature review of state-of-the-art FI countermeasures and CAD-based vulnerability assessment tools. The rest of the chapter is organized as follows. Section 7.2 provides background on the issues surveyed. Section 7.3 reviews schemes proposed in the literature and details their methodology, results, and limitations. Section 7.4 concludes the chapter by summarizing and comparing the relevant research.

7.2 Background on FI Prevention and Detection

This section covers the vulnerabilities of various digital circuits to fault injection attacks and methods to assess devices and prevent them from being compromised.

7.2.1 Delay-Based Countermeasures

Digital integrated circuits are driven by their internal clocking schemes, which impose specific timing requirements through setup and hold times. Aggressors may mount various attacks to tamper with a design's physical properties, particularly its clocks [16]. There are two types of clock-based aggression:

• Clock glitching
• Overclocking

Clock glitching is a sudden, momentary change of the system clock's frequency, whereas overclocking persistently raises the target board's clock frequency. Depending on the complexity of a design, multiple system clocks and divided peripheral clocks may exist; this increases the vulnerability of the target device, as it raises the probability of multi-stage aggression [31]. The presence of these clocks also introduces timing requirements that ensure data values are latched into register circuits or fed into combinational circuits. Clock-based attacks can impose timing violations that corrupt data transfers and state-machine transitions. Equation (7.1) states the condition that must hold to uphold the timing requirements:

Tclk > Dclk2q + DpMax + Tsetup + Tskew + Tjitter    (7.1)

This equation says that a circuit's clock period must be larger than the sum of all internal delays and setup/hold margins, a surplus often referred to as positive slack [31]. Fault injection aggressors can target a single clock generator or synchronizer to alter a circuit and create a negative-slack state. Delay-based countermeasures have been developed to trigger alarms when aggressors attempt to impose clock glitches or alter supply currents, resulting in power glitches. However, with the development of complex circuits, the efficiency of single-glitch-detector architectures has been challenged by Loic Zussa et al.
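The timing condition of Eq. (7.1) reduces to a slack computation that can be checked directly; the picosecond values below are illustrative, not from the chapter:

```python
def slack_ps(t_clk, d_clk2q, d_pmax, t_setup, t_skew, t_jitter):
    """Positive slack means Eq. (7.1) holds; negative slack means the
    timing violation that a clock-glitch attack tries to induce."""
    return t_clk - (d_clk2q + d_pmax + t_setup + t_skew + t_jitter)

# Illustrative picosecond values (hypothetical design margins).
nominal  = slack_ps(10000, 300, 8200, 150, 100, 200)  # 1050 ps of margin
glitched = slack_ps(8000, 300, 8200, 150, 100, 200)   # clock period shortened
print(nominal > 0, glitched > 0)  # True False
```

Shortening only the clock period flips the slack negative, which is why delay-based detectors watch the clock rather than the datapath itself.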

7.2.2 Hardware Platforms for FI Vulnerability Assessment

Various authors have proposed hardware assessment platforms to determine a device's vulnerability to fault injection attacks [10–12, 24, 28, 30]. These platforms allow silicon manufacturers and embedded engineers to assess their designs against a list of attacks, thereby decreasing the probability of producing a vulnerable product. However, assessing fault injection attacks is not trivial. In most platforms there exists a gap between fault injection analysis and fault injection exploitation, both of which are necessary steps in vulnerability analysis. Because these platforms were designed for single-stage fault attacks, performing both analysis stages became inefficient with the emergence of multi-stage fault attacks [29]. Additionally, assessment platforms tend not to consider combined fault attacks, which require a more extensive security coverage metric since they must account for various fault models and attack paths [29]. An optimization problem arises from these combined attack models and from the multitude of possible fault injection settings engineers must explore for vulnerability exploitation.

7.2.3 Analyzing Vulnerabilities in FSMs (AVFSMs)

Current evaluation platforms aim to analyze the vulnerability of datapaths or data-manipulation circuits, excluding finite state machines. These circuits are crucial to digital ICs, as they drive a datapath's select lines and latch enables. With little analysis in the pre-silicon stage, the probability of a severe fault injection attack increases, and post-silicon study poses its own challenge, as aggressive time-to-market schedules hinder further design changes.

7.3 Literature Survey Methodology

This section examines the state-of-the-art approaches researchers have used to solve the challenges described in Sect. 7.2.


7.3.1 The Efficiency of a Glitch Detector Against Fault Injection

This section reviews the analysis of glitch detector efficiency provided by Zussa et al. [31]. Before their work, delay-based countermeasures for electromagnetic glitches had been proposed but not thoroughly evaluated. The authors' contributions to the hardware security research community include the following:

• Establishing concrete hardware guidelines for designing a delay countermeasure
• Assessing fault injection threats to various industry applications, particularly chip manufacturing [31].

Their analysis argues that single glitch detectors do not efficiently indicate ongoing fault injection probes, and important manufacturing questions are presented to facilitate further research in the hardware security community. The researchers evaluated a delay-based countermeasure presented by another group (Endo et al.). Before introducing the countermeasure in detail, Zussa et al. recalled the condition for violating a timing constraint, shown in Eq. (7.1). This condition states that a digital circuit's clock period must be greater than the sum of all internal propagation delays [31]. Presenting this condition is valuable, as it supplies the background necessary to establish a timing guideline for hardware designers. The countermeasure shown in Fig. 7.1 continually attempts to detect any induced timing violation by monitoring a delayed version of the clock. According to Zussa et al., a glitch detector performs a phase comparison between the input clock and a delayed version of it; if the signals do not match, the comparator pulls high, triggering the alarm signal [31]. This approach also helps to detect power supply violations, particularly changes in supply voltage, since fault injection techniques can disturb power supply units and create timing violations.
With certain power injections occurring near the power pads, the authors raise the question of such a countermeasure's spatial limitations [31]. To implement a platform for assessing the countermeasure's efficiency, a Xilinx Spartan 700 FPGA was selected to realize five instantiations of glitch detectors alongside an AES-128 module. The hard macro schema was used to allow for post-synthesis location flexibility and bitstream generation [31]. The five glitch detectors were placed and routed across the FPGA die in order to determine the spatial limitations. By including this methodology, Zussa et al. consider invasive fault injection attacks, which alter the current that flows through the integrated circuit [31]. Two electrode tips were used to inject electromagnetic pulses in the 20–200 V range [31]. Scans of the FPGA die were completed to analyze the triggering rate and overall countermeasure efficiency. The results show that clock-based attack analysis might require more glitch detectors for adequate coverage.

Fig. 7.1 General principle governing delay-based glitch detectors (D, CK, Delay, DCK, DFF, Q)
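The timing condition behind Eq. (7.1) can be illustrated with a minimal sketch. The delay names below are illustrative, not Zussa et al.'s notation; the point is only that a glitch shortening the effective clock period below the total internal delay creates a setup-time violation.

```python
# Illustrative check of the timing condition behind delay-based glitch
# detection: the clock period must exceed the sum of the internal
# propagation delays (delay names are illustrative, not [31]'s notation).

def timing_violated(clock_period_ns: float,
                    clk_to_q_ns: float,
                    logic_delay_ns: float,
                    setup_ns: float) -> bool:
    """Return True if the effective clock period is too short, i.e. a
    setup-time violation (and hence a potential fault) can occur."""
    return clock_period_ns <= clk_to_q_ns + logic_delay_ns + setup_ns

# Nominal 10 ns clock with 8 ns of total path delay: no violation.
assert not timing_violated(10.0, 0.5, 6.5, 1.0)

# A glitch that shortens one period to 7 ns violates the constraint.
assert timing_violated(7.0, 0.5, 6.5, 1.0)
```

A delay-based detector exploits exactly this margin: the delayed clock copy is tuned so that any injected glitch large enough to violate the constraint also produces a detectable phase mismatch.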

7.3.2 HackMyMCU: Low-Cost Security Assessment Platform

Fig. 7.2 General setup between HackMyMCU modules and the target board: the controlling PC passes configuration to the fault injection module, which exchanges communication, trigger, and clock signals with the targeted board

A vulnerability platform is introduced to assess clock glitching attacks on a microcontroller. The HackMyMCU framework was established by Kazemi et al. as an open hardware–software platform that assesses a device's vulnerability to a range of attacks, including fault injection and side-channel attacks [16]. The authors argue that with the increase of Internet of Things (IoT) applications within the medical and defense industries, the probability of a severe fault injection attack also increases. Additionally, Kazemi et al. claim that an open hardware framework provides a robust assessment to software engineers designing these complex and fragile systems [16]. Before HackMyMCU, security assessment platforms such as SASEBO and SAKURA allowed designers to evaluate how fault injection security applications run on an FPGA or ASIC [16]. HackMyMCU, however, also enables embedded system engineers to test their general-purpose microcontroller applications against physical and active attacks. In addition, the platform was designed to assess the extent of clock-based attacks through two clock generator architectures: the Two-Variable Phase Shift Generator and the Switching Clock Glitch Generator. Each fault injection module follows a simple hardware setup, as shown in Fig. 7.2, where the user passes configuration parameters to the module and target board. With this information, injection modules generate various system clocks and receive a trigger signal from the target board to initiate clock-based aggression. Overall, the two-variable phase shift glitch generator was implemented using Xilinx's Arty-S7-50,


while the switching clock glitch generator was implemented on a Kintex-7 FPGA [16]. The Two-Variable Phase Shift Generator exposes three parameters set by the user application: glitch width, glitch location, and glitch delay. Glitch width sets the shifted clock's frequency, glitch location specifies the phase of the aggressor's clock, and glitch delay indicates the number of clock cycles to wait before triggering a clock glitch [16]. Two phase-shifted clock signals with the same frequency but different phases are generated from the FPGA's PLLs, and the module switches between them on a rising edge of the trigger signal, inducing a glitch stream that forms a newly generated "glitch" clock. Figure 7.3 shows the logic circuits of the phase shift generator. HackMyMCU's switching generator (Fig. 7.4) consists of two primary clock sources, a fast and a slow clock, and is the converse of the phase shift glitch generator. Each clock signal is mixed, synchronized, and buffered to the target device via the Kintex-7 FPGA. The switching generator requires an input stream from the user PC and a trigger edge from the target device before sending out the output clock. Both generators utilize the FPGA's UART module [16]. This differs from the two-variable phase shift generator in that the glitch generation is not set by user configuration settings; the clock switching schema relies primarily on the two clock sources. Operating in such a fashion allows for overclocking-based aggressions.

Fig. 7.3 Phase shift clock glitch generator logic [16] (Clock, DCM1, DCM2, Trigger, Generated Clock)

Fig. 7.4 Functional block diagram of HackMyMCU's switching glitch generator [16]

Experiments and analysis were performed on the platform to determine its effectiveness in injecting faults on a target device running the AES AddRoundKey operation [16]. Various data streams were passed into the AES module, and the output ciphertext was analyzed to categorize potential faults. According to Kazemi et al., the clock switch glitch generator produced more faults than the phase shift glitch generator [16]. A strength of the provided analysis is that the experimenters recorded the attack faults with respect to an operation's clock cycle, which demonstrates the effectiveness of a glitch generator schema in inducing a precise clock-based attack on a module's operation timeline. It also supports the authors' claim that HackMyMCU is modular and applicable to all microcontrollers. There is extensive information on the effects of attacking a target MCU's overall operation, but no comparison to other security frameworks; including another framework's aggression results would be more persuasive in showing that HackMyMCU is favorable for microcontroller applications. The setup was based on an encryption application, which can limit the effectiveness of the authors' argument that HackMyMCU's IoT aggressions are successful. However, the authors' ongoing experiments within medical applications emphasize the importance of protecting such systems [16].
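The phase-shift switching idea can be sketched with a toy waveform model. The parameter names loosely mirror the description above (phase of the aggressor clock, trigger point, glitch duration), but the sampled-square-wave model and all values are illustrative, not HackMyMCU's implementation.

```python
# Toy model of the two-variable phase-shift scheme: switching between two
# equal-frequency clocks with different phases at a trigger point yields
# a shortened ("glitch") pulse in the output stream. Illustrative only.

def clock(t: int, period: int, phase: int) -> int:
    """Square wave sampled at integer time steps: 1 in the first half."""
    return 1 if ((t - phase) % period) < period // 2 else 0

def glitched_stream(n: int, period: int, phase_shift: int,
                    trigger_at: int, glitch_cycles: int) -> list:
    """Output the nominal clock; inside the trigger window, output the
    phase-shifted aggressor clock instead, creating a glitch."""
    out = []
    for t in range(n):
        in_window = trigger_at <= t < trigger_at + glitch_cycles * period
        out.append(clock(t, period, phase_shift if in_window else 0))
    return out

stream = glitched_stream(n=40, period=10, phase_shift=3,
                         trigger_at=20, glitch_cycles=1)
# Outside the window the stream is a clean 50% duty-cycle clock.
assert stream[:10] == [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
# At the switch point the output drops where the nominal clock was high.
assert stream[20] == 0
```

The shortened high pulse at the switch point is what violates the target's timing margin, which is why recording faults against the operation's clock cycle (as the authors do) pinpoints the attacked operation.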

7.3.3 Analyzing Vulnerabilities in Finite State Machines (AVFSM)

Clock glitch attacks may cause security threats in an FSM controller. This section introduces the AVFSM framework proposed by Nahiyan et al. for identifying and mitigating vulnerabilities in FSMs. The authors have shown that synthesizing an FSM may introduce don't-care states and transitions, which affect security, and that an IC can be attacked by taking advantage of these don't-care conditions. However, Nahiyan et al. state that there had been no systematic technique to identify vulnerabilities in FSMs; according to the authors' investigation, AVFSM is the first systematic approach proposed for analyzing FSM vulnerabilities [23]. The automatic vulnerability assessment in the AVFSM method mainly focuses on FSM vulnerabilities against fault injection attacks and hardware Trojan-based attacks. Due to the limited scope of this chapter, only the vulnerabilities of FSMs against fault injection are discussed here. AVFSM uses the FSM and the netlist as inputs to identify FSM vulnerabilities. Figure 7.5 shows AVFSM's high-level flow. The vulnerability analysis


Fig. 7.5 Flowchart for the AVFSM framework [23]

Fig. 7.6 Original and modified FSM netlist generated by AVFSM [23]

of Nahiyan et al. assumes that designers have access to and understand the netlist, FSM, and circuit functionality [23]. The inner workings of AVFSM are as follows. First, the RTL code is synthesized, and a synthesis report is generated in which the state registers and the state transitions can be found. AVFSM takes the state FFs, checks their fan-in cone, and identifies any non-state FFs. Then, AVFSM generates a transformed netlist, as shown in Fig. 7.6. The transformed netlist's primary inputs are the non-state FFs (PIFNS) and the state FFs (PIPresentState). Furthermore, XOR2 gates are placed where the state registers are in the original FSM netlist, as shown in Fig. 7.6. The other input to each XOR2 gate is left as a primary input (PIXOR), and all of the XOR2 gates are ORed together to generate the primary output (POOR) [23]. The modified netlist is used with an ATPG tool to create the State Transition Graph (STG) by determining which PIPresentState, PIFNS, and FSM input-pin conditions could create transitions to one specific state, applying that state's logical values as constraints on PIXOR. Test patterns are generated to induce a stuck-at-1 fault at POOR, and by doing this the entire gate-level STG is extracted. The STG extracted from the gate level is compared to the RTL STG to identify potential unexpected don't-care states and state transitions, and the gate-level STG is then used to analyze the FSM's fault injection vulnerability. Nahiyan et al. considered several powerful low-cost fault injection attacks, including attacks based on setup time violations caused by clock glitching, voltage glitching,


local heating, etc. [23]. The authors used AVFSM to estimate the vulnerability to a setup time violation-based fault injection attack. The first step in performing fault vulnerability analysis is applying the AVFSM framework to the extracted STG and the protected states. A protected state is a state in the FSM, determined by the designer, that contains sensitive information and can be detrimental if an adversary gains access to it. AVFSM examines each state transition to determine whether it is a Vulnerable Transition (VT). VTs are the set of transitions into which a fault can be injected to jump to the protected state. The AVFSM framework returns the classification of each VT as PathViolated, PathOK, or PathNoEffect, along with dangerous don't-care states (DDCSs). PathViolated means that a setup time violation is needed to cause a fault, PathOK means that the setup time constraint needs to be maintained, PathNoEffect means the path does not impact the vulnerability analysis, and DDCSs are those don't-care states that can access the protected states [23]. Using the reported VTs, a static timing analysis (STA) is performed to quantify the FI vulnerability of each VT, because different VTs have different threat levels. From the STA, the susceptibility factor (SF) metric can be used as a quantitative measurement of the fault injection vulnerability of each transition; the SF is a function of the path difference divided by the average path delay of the state FFs. The vulnerability factor of fault injection (VFFI) is a computed metric representing the vulnerability of the FSM when facing a fault injection attack. Thus, a higher SF and VFFI indicate that the FSM is more vulnerable to fault injection [23]. The AVFSM framework uses CAD tools from Synopsys and Intel (previously Altera). Synopsys dc_shell is used to synthesize the design, and the tool's report_fsm command generates the FSM report.
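The don't-care discovery at the heart of AVFSM — comparing the extracted gate-level STG against the RTL STG — can be sketched as a simple set difference. The example FSM below is hypothetical; AVFSM itself derives the gate-level STG from the ATPG patterns described above.

```python
# Sketch of AVFSM's STG comparison step: states and transitions present
# in the gate-level STG but absent from the RTL STG are candidate
# don't-care states/transitions introduced by synthesis. The example
# FSM is hypothetical.

def stg_diff(rtl_stg: set, gate_stg: set):
    """Each STG is a set of (src, dst) transitions; return the
    transitions and states that exist only at the gate level."""
    extra_transitions = gate_stg - rtl_stg
    rtl_states = {s for t in rtl_stg for s in t}
    gate_states = {s for t in gate_stg for s in t}
    dont_care_states = gate_states - rtl_states
    return extra_transitions, dont_care_states

rtl = {("IDLE", "LOAD"), ("LOAD", "ROUND"), ("ROUND", "FINAL")}
gate = rtl | {("XTRA", "FINAL")}   # synthesis introduced a don't-care state

extra, dcs = stg_diff(rtl, gate)
assert extra == {("XTRA", "FINAL")}
assert dcs == {"XTRA"}   # a don't-care state reaching FINAL is a DDCS
```

A don't-care state found this way that can reach a protected state (here, FINAL) is exactly what the framework flags as a dangerous don't-care state.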
Synopsys TetraMax is used to generate the test patterns; the tool's n-detect command is used to find all possible current-state and primary-input combinations causing a transition to the specific states. Finally, the Synopsys PrimeTime tool is used for STA, and Intel Quartus is used to extract the STG from the RTL code. Nahiyan et al. used AVFSM to perform a fault injection vulnerability assessment on AES and RSA encryption modules. Two encoding schemes for the AES FSM module were applied. The first scheme was {Wait Key, Wait Data, Initial Round, Do Round, Final Round} = {000, 001, 010, 011, 100} and the second scheme was {Wait Key, Wait Data, Initial Round, Do Round, Final Round} = {001, 010, 011, 100, 000} [23]. For both schemes, the Final Round was set as the protected state. The AES encryption module for both schemes was run through the AVFSM framework. Table 7.1 shows the vulnerability analysis. Based on VFFI, it can be observed that scheme 1 is resilient to fault injection attacks, while scheme 2 is not. One reason is that scheme 2 had more don't-care states, which ultimately made it more vulnerable.

Table 7.1 AES encryption vulnerability analysis [23]

          VFFI            VFTro
Scheme 1  (0, 0)          0
Scheme 2  (58.9%, 0.15)   0.18

Similar to AES, two schemes for RSA were used: scheme 1: {Idle, Init, Load1, Load2, Multiply, Square, Result} = {000, 001, 010, 011, 100, 101, 110} and scheme 2: {Idle, Init, Load1, Load2, Multiply, Square, Result} = {001, 010, 011, 100, 101, 110, 000} [23]. The protected state for both schemes was the Result state. Table 7.2 shows the vulnerability analysis of the RSA module. Note that scheme 2 has more VTs than scheme 1 and is more vulnerable to a fault injection attack because VFFI is much higher at 0.66. Overall, both schemes are vulnerable to fault injection attacks.

Table 7.2 RSA encryption vulnerability analysis [23]

          VT  VFFI          VFTro
Scheme 1  1   (10%, 0.66)   0
Scheme 2  3   (30%, 0.15)   0.1

The applications for this framework are broad. Any design that uses an FSM architecture can use AVFSM to determine how susceptible its control logic is to FI attacks in the pre-silicon stage, which can significantly reduce cost and save a considerable amount of post-silicon testing time, although it is challenging to compare the results of this framework to other known works. Due to the rule of ten, the AVFSM framework can save designers thousands of dollars because the vulnerability analysis is done in the pre-silicon stage; the method finds potential flaws that would otherwise lead to fault injection attacks post-silicon or in the field. Furthermore, AVFSM allows designers to compare different implementations of the same module to determine which is the most resilient to fault injection attacks. The framework can produce a vulnerability analysis report within minutes; the bottleneck is the synthesis of the design, which means design implementations can be rapidly iterated to find the best possible layout. The limitations of the AVFSM framework should be noted.
First, AVFSM requires the designer to have knowledge of the RTL code, the netlist, and the synthesis FSM report, which can be problematic if the design is outsourced and the third-party IP vendor returns a firm or hard IP of the design. In that case, AVFSM cannot report the vulnerability analysis, because the framework requires the RTL code; RTL can be recovered from the netlist, but doing so requires a significant amount of time. Furthermore, AVFSM assumes only single-fault injection attacks; by construction, its algorithms cannot find multi-fault attacks. Work remains to determine whether AVFSM can produce vulnerability analysis reports for more sophisticated fault injection methods.


Fig. 7.7 Flowchart for the security-aware FSM framework

7.3.4 Security-Aware FSM

Security-aware FSM is an extension of the AVFSM framework that tries to mitigate the vulnerabilities detected by AVFSM. Previous schemes have been proposed to protect FSMs from fault injection attacks. Sunar et al. proposed a linear error detection technique; however, fault injection attacks are possible even with linear error detection, because the detection approach does not take into consideration that the synthesis tool may introduce don't-care states [22]. Karpovsky and Taubin proposed a nonlinear error detection code (EDC) technique; however, adversaries who understand the states and transitions in the FSM controller can still introduce fault injection attacks [22]. Security-aware FSM is a continuation of the AVFSM framework proposed by Nahiyan et al. to prevent fault injection attacks in FSMs. Figure 7.7 shows the security-aware framework. First, the FSM design is given a one-hot or critical-transition encoding scheme. Then the design is synthesized and passed through the AVFSM framework. The proposed technique determines whether the FSM is secure using the vulnerability results from AVFSM; if not, additional circuitry is added to the RTL design to secure the FSM. Binary encoding is typically used to encode FSMs, but it brings more fault injection vulnerabilities. Therefore, Nahiyan et al. proposed two different encoding approaches to make the FSM secure. The first encoding scheme is one-hot, as shown in Algorithm 7.1. Full one-hot encoding can cause a massive number of don't-care states and increase area overhead. However, if only the protected states are encoded one-hot while normal states are encoded with the binary scheme, the number of don't-care states is reduced, ensuring that the attacker cannot access the protected states without passing through a normal state. This is due to the upper bits of the protected states being fixed to zero [22].



Fig. 7.8 Secure FSM logic cascaded to the original design

The second scheme encodes the critical transitions, as shown by Algorithm 7.2. The algorithm's input parameters include the name of each state and the prohibited transitions; certain transitions are predefined as not permitted. The purpose of the algorithm is to choose an encoding scheme that does not conflict with the prohibited transitions. A mask is generated from the tentative encoding scheme to determine whether an attacker could inject faults on a transition to gain direct access to a protected state. The mask is generated by identifying the bits that change during the transition: the changed bits are marked with an "x" and the fixed bits are kept. Comparing the encoding of the protected state with the mask determines whether the fixed bits differ in at least one position; if there is such a one-bit difference, the encoding is safe for that transition. Otherwise, the algorithm searches again for a safe encoding [22]. After the encoding of the FSM is complete, the design is synthesized and run through the AVFSM framework. The VFFI from the vulnerability analysis determines whether the FSM is secure from fault injection attacks; if the VFFI is non-zero, additional circuitry is added to the RTL code to secure it. The additional circuitry consists of an extra set of state FFs and a "secure FSM logic" that cascades to the original design, as shown in Fig. 7.8 [22]. The additional state FFs allow for the


next state and present state to be analyzed in the same clock cycle to determine whether the next state transition is valid. If the next state is protected and the present state is not authorized, the FSM is sent to reset; otherwise, the present state is allowed to access the protected state. The logic is routed by hand to ensure a uniform path delay, so that adversaries cannot perform a fault injection attack by exploiting a non-uniform path delay in the "secure FSM logic." A vulnerability analysis was conducted using four benchmark FSM circuits and a variety of encoding schemes: the AES Controller, MIPS Controller, Memory Controller, and RSA Controller. Based on the results collected after fault exploitation, the binary encoding scheme yielded the highest VFFI metric, meaning it increases an FSM's vulnerability to FI attacks. In contrast, the proposed encoding schemes overall yielded a lower VFFI [22]. However, the proposed encoding schemes could not secure the MIPS Controller alone, primarily due to the mass generation of don't-care states by the synthesis tool. To examine the delay distributions between state transitions, a vulnerability analysis of an AES controller with the secure FSM architecture was performed. Delays are one source of vulnerability, as they provide aggressors a wider attack window in which to induce an unauthorized state transition. In the authors' results demonstrating the delay distribution among the secure FSM transition logic, the maximum and minimum delay times between unprotected and protected states are roughly the same, which supports the authors' argument that the secure FSM architecture protects the next-state transition logic from fault injection aggressions. Overall, the AVFSM scheme is fairly strong, because its examination thoroughly analyzed various benchmark circuits. Metrics such as overhead, area, and vulnerability factors were reported for each circuit to determine which encoding scheme creates security risks. However, the proposed encoding scheme has limitations with synthesis-generated don't-care states: an additional layer of security was needed to uphold FSM security, leading to the inclusion of the proposed secure FSM architecture. This could be difficult to realize in the aggressive silicon manufacturing market, with transistor miniaturization driving shorter design life cycles.
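Two pieces of the security-aware flow described above can be sketched together: the critical-transition mask test used during encoding selection, and the runtime guard that resets the FSM on an unauthorized jump into a protected state. The state names, encodings, and authorized-predecessor set below are hypothetical, not the benchmark circuits' actual values.

```python
# Sketch of two checks from the security-aware FSM flow (state names and
# encodings are hypothetical).

def transition_mask(src: str, dst: str) -> str:
    """Bits that toggle during the transition become 'x' (attackable);
    the fixed bits are kept."""
    return "".join("x" if a != b else a for a, b in zip(src, dst))

def encoding_safe(mask: str, protected: str) -> bool:
    """Safe if the protected state differs from the mask in at least one
    fixed bit, so faulting only the changing bits cannot reach it."""
    return any(m != "x" and m != p for m, p in zip(mask, protected))

# Critical-transition check: LOAD (0010) -> ROUND (0011) toggles one bit.
mask = transition_mask("0010", "0011")
assert mask == "001x"
assert encoding_safe(mask, "1000")       # fixed bits differ -> safe
assert not encoding_safe(mask, "0010")   # one fault away -> unsafe

# Runtime guard added by the secure FSM logic: a transition into a
# protected state from an unauthorized present state forces a reset.
PROTECTED = {"FINAL_ROUND"}
AUTHORIZED = {"FINAL_ROUND": {"DO_ROUND"}}

def checked_next(present: str, requested: str,
                 reset_state: str = "WAIT_KEY") -> str:
    if requested in PROTECTED and present not in AUTHORIZED[requested]:
        return reset_state               # unauthorized access attempt
    return requested

assert checked_next("DO_ROUND", "FINAL_ROUND") == "FINAL_ROUND"
assert checked_next("WAIT_DATA", "FINAL_ROUND") == "WAIT_KEY"
```

In hardware, `checked_next` corresponds to the hand-routed "secure FSM logic" of Fig. 7.8; the uniform routing is what keeps its own path delays from becoming a new fault injection target.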

7.3.5 Multi-fault Attacks Vulnerability Assessment

So far, FI attacks using a single attack model have been discussed in this chapter. This section demonstrates an end-to-end approach for multi-fault attack vulnerability assessment proposed by Werner et al., because modern-day security evaluation must consider multi-fault attacks. Three challenges arise when looking at multi-fault attacks:

1. Explore the connection between fault analysis and fault exploitation.
2. Consider combined fault attacks.


3. Improve the selection of fault injection settings [29].

Past researchers have attempted and partially solved some of the challenges above; however, no one before Werner et al. [29] addressed all three for multi-fault attacks. Challenge 1 was previously addressed by Dureuil et al.; however, they were looking at single faults, and their fault injection settings were selected manually [29]. Similarly, Given-Wilson et al., Riviere et al., and Laurent et al. proposed methods to reduce the gap between fault analysis and fault exploitation through hardware and software approaches or RTL fault models [29], but they failed to address challenges 2 and 3. Carpi et al., Wu et al., and Maldini et al. proposed search strategies for fault injection settings using genetic algorithms or deep learning, but were unable to propose a solution for combined fault attacks and gap reduction [29]. Thus, Werner et al. claim that no previous work had successfully created an end-to-end approach for multi-fault attack vulnerability assessment. Werner et al.'s end-to-end approach is a simulation-based methodology with three steps for identifying multi-fault attacks: tool-assisted fault model inference, tool-assisted fault analysis, and tool-assisted fault exploitation. The first step, fault model inference, simultaneously performs characterization and fault injection simulation; Fig. 7.9 shows the flow of tool-assisted fault model inference. Characterization helps to select the most efficient fault injection settings for the UUT. During characterization, a test

Fig. 7.9 The high-level functionality of the fault model generation step [29]


program is loaded onto the chip, and the program propagates the computational errors caused by faults injected over a grid search of settings. The observed output is compared to the expected outcome of the test program to determine whether a result is faulty. Characterization alone does not show where the fault occurred in the UUT; therefore, characterization is combined with CELTIC to perform fault injection simulations of matching quality. CELTIC is a simulation-based fault injection tool operating at the binary level, which simulates instruction set architecture (ISA) fault models [29]. When the same faulty output is observed during simulation and characterization, the fault injection settings (si) and the fault model (mi) are paired (si, mi) to produce the Target Specific Fault Model (TSFM) [29]. Fault analysis is then performed using the most probable TSFMs for the chip. Fault analysis is automated using CELTIC to find all successful single- and multi-fault attacks within the TSFMs. After all single-fault attacks have been assessed, CELTIC returns a list of the successful attacks, with the oracle and the selected fault models [29]. Finding single-fault attacks is a straightforward process. To find combined fault attacks, CELTIC first generates an execution trace of the program without faults, called the reference execution trace; a successful combined fault attack is then found using CELTIC's FindCFA function, whose pseudocode is shown in Algorithm 7.3. CELTIC iterates through all the faults and can be configured to simulate any fault model at the ISA level. Finally, the settings for fault injection can be generated automatically from the successful attack results produced by CELTIC, by considering the high-probability TSFMs and combining the optimal fault injection settings, allowing a complete equipment configuration to be assembled for fault exploitation. The evaluation of this approach was applied to a VerifyPIN algorithm on an ARM Cortex-M4 32-bit microcontroller.
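The TSFM pairing step described above can be sketched as matching faulty outputs between the two campaigns. The settings, model names, and output values below are hypothetical; the real flow matches CELTIC simulation results against physical characterization results.

```python
# Sketch of the TSFM pairing step: when a faulty output observed on the
# physical chip (characterization) also appears in simulation under some
# ISA-level fault model, the injection settings and the fault model are
# paired. All data values are hypothetical.

def build_tsfm(characterization: dict, simulation: dict) -> list:
    """characterization maps injection settings -> faulty output;
    simulation maps fault model -> faulty output. Return the
    (settings, model) pairs that produced the same faulty output."""
    pairs = []
    for settings, out_chip in characterization.items():
        for model, out_sim in simulation.items():
            if out_chip == out_sim:
                pairs.append((settings, model))
    return pairs

chip = {("delay=120ns", "x=3,y=7"): 0x83ED,
        ("delay=250ns", "x=1,y=2"): 0x2F9B}
sim = {"instruction-skip": 0x2F9B, "bit-flip-r0": 0xDEAD}

assert build_tsfm(chip, sim) == [(("delay=250ns", "x=1,y=2"),
                                  "instruction-skip")]
```

Each surviving pair says "these physical knob settings behave like this ISA-level fault model on this chip", which is what lets the later fault-analysis step stay purely in simulation.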
The fault injection was induced by employing two independent laser beams. During the characterization process, approximately 12,000 (24%) faulty outputs were generated from 50,000 injected faults [29]; the remaining 76% caused fatal errors. Injection delays of successful attacks were used to validate the generated configurations, and the generated configuration was compared to an exhaustive search over the injection delays. The exhaustive search took about one week to complete and found 1401 of the 1744 possible injection delays, while Werner et al.'s approach covered 894 of 1744 [29]. The authors stated that detection for their


method could be optimized to improve injection delay detection. The overall end-to-end approach was compared to two other techniques for multi-fault attacks. The first was a naive approach in which the evaluator had no prior information about the injection delays or positions; this approach was unable to complete the experiment within a reasonable time. The second was a hybrid approach, in which the attacker uses a heatmap for injection positions but has no knowledge of the injection delays. According to the experimental results, the proposed method was, on average, three times faster than the hybrid approach, which shows that it is a fast and repeatable way to perform a multi-fault attack vulnerability assessment. The limitations of Werner et al.'s proposed method should be noted. If a faulty output is not found when characterizing and simulating faults, the rest of the process fails to find that specific fault. On the other hand, this method considers specific hidden states that may be present, which other methods cannot. Using a grid search to generate faulty outputs in the characterization step can be challenging due to hardware countermeasures [29]; one potential solution is to follow a smart search of fault injection settings, as other researchers have done. Another limitation is low faulty-output coverage compared to state-of-the-art fault models, in which case CELTIC cannot find any chip vulnerabilities in step 2, and the generated configuration may not find any successful attacks in fault exploitation. In theory, this multi-fault vulnerability assessment tool can be applied to any design. However, the method could not detect nearly as many possible attacks as an exhaustive search, and it also reported false positives, bringing the actual identification of possible attacks below 50%.
The method was also only tested using lasers as the FI technique; other techniques, such as power glitch FI, were not evaluated. Nevertheless, the approach is roughly three times faster than comparable methods. Overall, this is a promising fault injection vulnerability assessment tool, but it requires more work to improve its accuracy in detecting possible attacks.

7.4 Summary

As discussed in Sects. 7.2 and 7.3, various developments have been made in fault injection vulnerability analysis. Each methodology introduced crucial discussions on a device's capability to detect clock-based glitching, power-based glitching, and unauthorized FSM transitions, and on developing fault injection analysis and exploitation platforms. Embedded engineers have few tools to determine whether applications are secure from clock-based intrusions; HackMyMCU provides a simple, reconfigurable framework for such developers to exercise various fault attacks and possibly improve their designs. Security-aware AVFSM provides engineers with a way to analyze FSM vulnerabilities in the pre-silicon stage. However, AVFSM requires specific design files that may not be available to designers if the IPs were outsourced.

166

7 CAD for Fault Injection Detection

Despite the introduction of novel techniques, these methodologies may provide little motivation in the current state of silicon manufacturing. Aggressive time-to-market pressure and enormous manufacturing costs can increase the difficulty of implementing and testing additional hardware in the pre-silicon stages to prevent fault injection attacks, especially when devices are tested using multi-fault FI attack models. An exhaustive search remains preferable to the method proposed by Werner et al., given that approach's poor accuracy results; yet a comprehensive investigation can take weeks to produce results for a design, which is infeasible for today's aggressive time-to-market components. If designers are primarily concerned with the security of their devices, however, these novel techniques provide a robust way to ensure their designs cannot be exploited through fault injection attacks by an adversary.

References

1. K. Bae, S. Moon, D. Choi, Y. Choi, D.S. Choi, J. Ha, Differential fault analysis on AES by round reduction, in 2011 6th International Conference on Computer Sciences and Convergence Information Technology (ICCIT) (2011), pp. 607–612
2. A. Barenghi, L. Breveglieri, I. Koren, D. Naccache, Fault injection attacks on cryptographic devices: theory, practice, and countermeasures. Proc. IEEE 100(11), 3056–3076 (2012)
3. S. Bhasin, D. Mukhopadhyay, Fault injection attacks: attack methodologies, injection techniques and protection mechanisms, in International Conference on Security, Privacy, and Applied Cryptography Engineering (Springer, 2016), pp. 415–418
4. S. Bhunia, M. Tehranipoor, Hardware Security: A Hands-On Learning Approach (Morgan Kaufmann, 2018)
5. A. Dehbaoui, J.M. Dutertre, B. Robisson, A. Tria, Electromagnetic transient faults injection on a hardware and a software implementations of AES, in 2012 Workshop on Fault Diagnosis and Tolerance in Cryptography (2012), pp. 7–15. https://doi.org/10.1109/FDTC.2012.15
6. S. Dey, J. Park, N. Pundir, D. Saha, A.M. Shuvo, D. Mehta, N. Asadi, F. Rahman, F. Farahmandi, M. Tehranipoor, Secure physical design. Cryptology ePrint Archive, Paper 2022/891 (2022)
7. T. Farheen, S. Roy, S. Tajik, D. Forte, A twofold clock and voltage-based detection method for laser logic state imaging attack. IEEE Trans. Very Large Scale Integr. (VLSI) Syst. (2022)
8. T. Given-Wilson, A. Legay, Formalising fault injection and countermeasures, in Proceedings of the 15th International Conference on Availability, Reliability and Security (2020), pp. 1–11
9. S. Govindavajhala, A.W. Appel, Using memory errors to attack a virtual machine, in 2003 Symposium on Security and Privacy (2003), pp. 154–165
10. J. He, X. Guo, M. Tehranipoor, A. Vassilev, Y. Jin, EM side channels in hardware security: attacks and defenses. IEEE Des. Test 39(2), 100–111 (2022). https://doi.org/10.1109/MDAT.2021.3135324
11. J. He, H. Ma, M. Panoff, H. Wang, Y. Zhao, L. Liu, X. Guo, Y. Jin, Security oriented design framework for EM side-channel protection in RTL implementations. IEEE Trans. Comput. Aided Des. Integr. Circuits Syst. 41(8), 2421–2434 (2022). https://doi.org/10.1109/TCAD.2021.3112884


12. M. He, J. Park, A. Nahiyan, A. Vassilev, Y. Jin, M. Tehranipoor, RTL-PSC: automated power side-channel leakage assessment at register-transfer level, in 2019 IEEE 37th VLSI Test Symposium (VTS) (2019), pp. 1–6. https://doi.org/10.1109/VTS.2019.8758600
13. M.C. Hsueh, T.K. Tsai, R.K. Iyer, Fault injection techniques and tools. Computer 30(4), 75–82 (1997)
14. M. Hutter, J. Schmidt, T. Plos, Contact-based fault injections and power analysis on RFID tags, in 2009 European Conference on Circuit Theory and Design (2009), pp. 409–412
15. M. Karpovsky, A. Taubin, New class of nonlinear systematic error detecting codes. IEEE Trans. Inf. Theory 50(8), 1818–1819 (2004)
16. Z. Kazemi, A. Papadimitriou, I. Souvatzoglou, E. Aerabi, M.M. Ahmed, D. Hely, V. Beroulle, On a low cost fault injection framework for security assessment of cyber-physical systems: clock glitch attacks, in 2019 IEEE 4th International Verification and Security Workshop (IVSW) (IEEE, 2019), pp. 7–12
17. D.F. Kune, J. Backes, S.S. Clark, D. Kramer, M. Reynolds, K. Fu, Y. Kim, W. Xu, Ghost talk: mitigating EMI signal injection attacks against analog sensors, in 2013 IEEE Symposium on Security and Privacy (2013), pp. 145–159
18. L. Lin, J. Wen, H. Shrivastav, W. Li, H. Chen, G. Ni, S. Chowdhury, C. Chow, N. Chang, Layout-level vulnerability ranking from electromagnetic fault injection, in 2022 IEEE International Symposium on Hardware Oriented Security and Trust (HOST) (2022), pp. 17–20. https://doi.org/10.1109/HOST54066.2022.9840146
19. L. Lin, D. Zhu, J. Wen, H. Chen, Y. Lu, N. Chang, C. Chow, H. Shrivastav, C.W. Chen, K. Monta, M. Nagata, Multiphysics simulation of EM side-channels from silicon backside with ML-based auto-POI identification, in 2021 IEEE International Symposium on Hardware Oriented Security and Trust (HOST) (2021), pp. 270–280. https://doi.org/10.1109/HOST49136.2021.9702270
20. A.P. Mirbaha, J.M. Dutertre, A. Tria, Differential analysis of round-reduced AES faulty ciphertexts, in 2013 IEEE International Symposium on Defect and Fault Tolerance in VLSI and Nanotechnology Systems (DFTS) (2013), pp. 204–211. https://doi.org/10.1109/DFT.2013.6653607
21. N. Moro, A. Dehbaoui, K. Heydemann, B. Robisson, E. Encrenaz, Electromagnetic fault injection: towards a fault model on a 32-bit microcontroller, in 2013 Workshop on Fault Diagnosis and Tolerance in Cryptography (2013), pp. 77–88
22. A. Nahiyan, F. Farahmandi, P. Mishra, D. Forte, M. Tehranipoor, Security-aware FSM design flow for identifying and mitigating vulnerabilities to fault attacks. IEEE Trans. Comput. Aided Des. Integr. Circuits Syst. 38(6), 1003–1016 (2018)
23. A. Nahiyan, K. Xiao, K. Yang, Y. Jin, D. Forte, M. Tehranipoor, AVFSM: a framework for identifying and mitigating vulnerabilities in FSMs, in Proceedings of the 53rd Annual Design Automation Conference (2016), pp. 1–6
24. N. Pundir, H. Li, L. Lin, N. Chang, F. Farahmandi, M. Tehranipoor, Security properties driven pre-silicon laser fault injection assessment, in 2022 IEEE International Symposium on Hardware Oriented Security and Trust (HOST) (IEEE, 2022), pp. 9–12
25. C. Roscian, J.M. Dutertre, A. Tria, Frontside laser fault injection on cryptosystems - application to the AES' last round, in 2013 IEEE International Symposium on Hardware-Oriented Security and Trust (HOST) (2013), pp. 119–124. https://doi.org/10.1109/HST.2013.6581576
26. S.P. Skorobogatov, Semi-invasive attacks: a new approach to hardware security analysis (2005)
27. M. Tehranipoor, C. Wang, Introduction to Hardware Security and Trust (Springer, 2011)
28. H. Wang, H. Li, F. Rahman, M.M. Tehranipoor, F. Farahmandi, SoFI: security property-driven vulnerability assessments of ICs against fault-injection attacks. IEEE Trans. Comput. Aided Des. Integr. Circuits Syst. 41(3), 452–465 (2021)
29. V. Werner, L. Maingault, M.L. Potet, An end-to-end approach for multi-fault attack vulnerability assessment, in 2020 Workshop on Fault Diagnosis and Tolerance in Cryptography (FDTC) (IEEE, 2020), pp. 10–17


30. T. Zhang, J. Park, M. Tehranipoor, F. Farahmandi, PSC-TG: RTL power side-channel leakage assessment with test pattern generation, in 2021 58th ACM/IEEE Design Automation Conference (DAC) (2021), pp. 709–714. https://doi.org/10.1109/DAC18074.2021.9586210
31. L. Zussa, A. Dehbaoui, K. Tobich, J.M. Dutertre, P. Maurine, L. Guillaume-Sage, J. Clediere, A. Tria, Efficiency of a glitch detector against electromagnetic fault injection, in 2014 Design, Automation & Test in Europe Conference & Exhibition (DATE) (IEEE, 2014), pp. 1–6

Chapter 8

CAD for Electromagnetic Fault Injection

8.1 Introduction

The start of the twenty-first century saw a significant rise in portable electronic devices. These devices have become highly diverse, ranging from ATM cards to personal computers that fit in our pockets, and contain various processing elements, from small chips to complex microprocessors, responsible for handling multiple tasks. As the demand for devices that simplify human life increases, so does each device's responsibility and the importance of securing its data. For example, devices that were thought to be very simple a decade ago, such as ATM cards, now require security measures to counteract the threat of stealing the confidential data stored on the card. Now consider the complexity of smartphones and personal computers used in banking, construction, communication, the military, state security, and the Internet of Things (IoT). The security of these devices becomes increasingly important, as an insecure device could lead to espionage or other types of attacks, harming people, businesses, and governments. Systems-on-chip (SoCs) are developed to accommodate such devices and support complex applications. The high integration of circuits and their miniaturization led to combining elements such as the processor, system memory, data memory, and controllers into an SoC. With specific hardware being developed for applications, SoCs are coupled with respective intellectual property (IP) blocks. This offloads particular tasks from the CPU, with the hardware accelerating the functionality compared to a software implementation. IP modules such as video encoders/decoders and cryptographic modules significantly reduce the burden on the CPU, allowing many efficient designs to work in portable form factors. As data moves between different blocks in an SoC, the possibility of hardware attacks that disrupt or leak that information increases. Cryptographic modules are a high-interest attack point, as they store the encryption keys responsible for safeguarding confidential data.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 F. Farahmandi et al., CAD for Hardware Security, https://doi.org/10.1007/978-3-031-26896-0_8


Fault injection has existed for a long time [2]. Examples of hijacking and snooping on telephone conversations have been known since the days of World War II [10]. These attacks were initially simple, in that their prime target was to disrupt the service under attack. As time and technology progressed, the attacks grew in complexity, collecting or leaking data while disrupting the service, and attacks were carried out without the victim's knowledge. Relevant to this chapter, the scope of attacks concentrates on electromagnetic fault injection (EMFI). These attacks are carried out by introducing a strong electromagnetic field, which corrupts the data flow in the targeted device, an SoC in this case, thus disrupting its ability to perform correctly. Similarly, side-channel analysis is performed by observing the electromagnetic fields generated by the SoC's switching activity and correlating the collected data to predict secure information, such as a key in a cryptographic module. The EMFI attack was first reported as a threat to the security of circuits in [23, 24]. The first concrete results on the attack were published in [25]; with reference to those early reports, the authors showed that a low-cost EM-based attack is possible. The authors of [3] then demonstrated that a low-cost setup can apply a Bellcore attack. In [4, 5], different EMFI setups based on a high-voltage pulse generator were presented. Another setup, presented in [16], uses a ferrite-core coil, which provides the advantage of more controllable fault injection. Following these initial investigations, the authors in [14, 18, 22] evaluated the efficacy of EM attacks on different platforms by analyzing the embedded code. Recently, the impact of EMFI on the system clock frequency has been evaluated in [11], and EMFI has also been analyzed on bit-sliced post-quantum implementations in [26]. In [7], EMFI has been used to cause skipping or faulting of multiple instructions on a 320 MHz RISC-V processor. For FPGA platforms, EM faults have been emulated without an actual fault injection setup in [13]. Countermeasures have to be developed to safeguard against these attacks with a thorough understanding of the attack model performed; as a result, attack models are established with respect to the countermeasures to be built, and the resulting designs are tested and measured quantitatively to ascertain their weaknesses or probability of exploitation. Moreover, because attacked hardware cannot be patched later like software, the countermeasures must be introduced in the pre-silicon stage of the design workflow. In [20], a security-aware FSM design flow based on a secured encoding scheme has been proposed to prevent fault injection attacks. Recently, EM side channels have been analyzed from both the attack and defense points of view in [8]. Considering different fault attacks, it is also important to analyze and protect the most vulnerable locations of the design; in this regard, [30] presents a framework that identifies these vulnerable locations based on design-specific security properties. In conjunction with these preventive measures, detection mechanisms are presented in [6, 17, 19]. In [6], a sensor based on detecting timing changes at the gate level has been proposed, whereas in [17] a PLL-based sensor has been presented to detect EM activity. Finally, in [19], a universal detection mechanism has been proposed that can detect fault attacks by tracking delay changes.


This chapter discusses research articles describing different attack models and the corresponding countermeasures that address these hardware security issues using CAD tools [1]. In [15], the authors proposed a fault injection attack that allowed them to analyze the emitting field and the area where the error was occurring. In [28], the researchers performed a state-of-the-art fault injection attack on an SoC and introduced countermeasures against the resulting fault models. In [12], the authors proposed a simulation methodology to be implemented in an integrated circuit (IC) design workflow. In [9], the authors propose an evaluation technique based on side channels at the pre-silicon stage and suggest security-focused design rules. In [29], the researchers proposed a new triplication-based error-correcting code that is resilient to harsh EM disturbances. This chapter aims to consolidate the attack models against SoCs, security evaluation metrics of designs against EM fault injection at the pre-silicon stage, and a triplication-based error-correcting code that is resilient against varying electromagnetic fields.

8.2 Background Study

Attackers can exploit the physical characteristics of devices to perform fault injection attacks. Attacks can take various forms [27] and are broadly classified into the following categories:

• Clock Fault Injection: Faults are introduced into the device clock by increasing or reducing the clock speed at a specific time. However, this is only suitable for devices that use an external clock; an SoC uses an internal clock, so this scenario is not feasible.
• Voltage Fault Injection: The fault is introduced by lowering VDD at a specific time. This works best with larger designs in which different power domains are present, so that injecting a fault does not disturb the overall system.
• Electromagnetic Fault Injection: Faults are injected by utilizing EM emissions; a high current is driven through a coil. This technique is well suited to SoCs, as no direct physical contact is needed for the fault injection.
• Optical Fault Injection: Faults are introduced using a laser beam focused directly on the chip. However, the SoC package prevents such attacks, so the chip must be de-packaged to become vulnerable.

Fault injection attacks can be combined with side-channel analysis to ease finding the proper fault parameters. A side channel is a physical characteristic that manifests during the operation of the chip; side-channel information falls into several categories, including power, timing, and EM radiation. Given the complexity of an SoC and its placement in a closed package, EM side-channel analysis is an appealing choice for the attacker, as it works without risking damage to the device. The EM analysis can be as simple as the attacker obtaining the key from the observed traces of the encryption, classified as simple electromagnetic analysis (SEMA). In other


cases, where the information is insufficient, the attacker must apply differential electromagnetic analysis (DEMA) to the obtained traces. The mentioned attacks can also be utilized as a security test method to find the potential vulnerabilities of chips; the evaluation can be performed by an impartial third party to prove the security level of the components. Security tests for microcontrollers have previously been widely studied, as their small size and low speed make such attacks easy to perform. Testing SoCs, however, can be overwhelming for designers due to their complexity and high speed. The rest of the chapter is organized as follows: Sect. 8.3 covers an exhaustive literature review of EM security tests for SoCs, EM fault injection, emerging micro-architectural fault models, security evaluation against EM analysis at design time, quantitative assessment of the EM side channel at the RT level, error correction resiliency against harsh EM disturbances, etc. At the end of the chapter, we provide a summary of the existing works and possible future directions.

8.3 Literature Survey Methodology

This section describes the research techniques considered over the past decade in electromagnetic fault injection attacks.

8.3.1 Electromagnetic Security Tests for SoC

In [15], an experimental security test study applied a side-channel attack based on the electromagnetic emissions of an advanced encryption standard (AES) module. The experiment is performed on a commercial SoC containing a 32-bit processor designed in 40 nm technology. The test is applied to the AES modules in the SoC and can be adapted to other modules with different parameters. The methodology exploits an EM side channel to localize the AES activity, obtained as follows: data stimuli are input to the AES, and statistical measurement of the results underlines the AES activity. AES activity analysis is performed to find the right time to inject faults; pulse injection is then done at that time, covering the entire surface of the SoC. Whenever a fault injection succeeds, a behavioral analysis of the SoC is conducted. A Langer HH150 microprobe is used for sampling the EM emission and is placed near the surface of the SoC to obtain a better signal. To maximize the signal-to-noise ratio (SNR), a t-test strategy is used, as shown in Eqs. (8.1) and (8.2). Data are separated into three sets, setk, setm, and setc, representing the 128-bit key, message, and ciphertext, respectively. To localize where EM emissions are related to the AES, statistical tools are used on


all spatial points, as shown in Eq. (8.1). The data to be considered must be above the threshold given by Eq. (8.2). By applying the side channel, the localized spatial emission of the HW-AES can be seen, and the proper place to inject faults can be selected. The experiment follows a classical attack, Piret and Quisquater's DFA [21], in which the attacker injects the fault in the 9th round of AES encryption. The fault injection step revealed some behaviors of the SoC, including areas that would result in a mute state of the SoC. Furthermore, three types of faults can be identified, as shown in Fig. 8.1.

S_X(t) = \left( \frac{\bar{X}_0(t) - \bar{X}_{FF}(t)}{\sqrt{\sigma_0(t)^2 / n_0 + \sigma_{FF}(t)^2 / n_{FF}}} \right)^2    (8.1)

Thr(S_X(t)) = S_X(t) + 3 \times \sigma_{S_X}(t)    (8.2)
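Equation (8.1) is a squared Welch-style t-statistic, and Eq. (8.2) a 3-sigma threshold over it. The computation can be sketched in a few lines of Python (illustrative only, not the authors' tooling; the threshold here is read as the mean of S over time plus three standard deviations, which is one plausible interpretation of Eq. (8.2)):

```python
import math

def s_statistic(set0, set_ff):
    """Squared Welch t-statistic (Eq. 8.1) comparing two groups of EM
    samples taken at the same time instant t."""
    n0, nff = len(set0), len(set_ff)
    m0, mff = sum(set0) / n0, sum(set_ff) / nff
    v0 = sum((x - m0) ** 2 for x in set0) / (n0 - 1)
    vff = sum((x - mff) ** 2 for x in set_ff) / (nff - 1)
    return ((m0 - mff) / math.sqrt(v0 / n0 + vff / nff)) ** 2

def threshold(s_over_time):
    """Detection threshold (one reading of Eq. 8.2): mean of S plus 3 sigma."""
    n = len(s_over_time)
    mean = sum(s_over_time) / n
    sigma = math.sqrt(sum((s - mean) ** 2 for s in s_over_time) / n)
    return mean + 3 * sigma

# Toy data: at this spatial point the emission depends on the data group,
# so S is large and the point is a candidate for fault injection.
set0 = [1.0, 1.1, 0.9, 1.05, 0.95]
set_ff = [0.1, 0.12, 0.08, 0.11, 0.09]
s = s_statistic(set0, set_ff)
```

A spatial point whose S value exceeds the threshold is retained as a candidate location for pulse injection.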

In Fig. 8.1, repeated values inside CF1 indicate buffer errors, CF2 shows errors while transferring the cipher to the output buffer, and CF3 shows corrupted AES values, which is what the experiments aim to retrieve. Although the investigation succeeded in injecting faults into the SoC, the approach does not clearly define which faults are related directly to the AES process. The method could be significantly enhanced if the injected faults resulted in known states of the SoC. Based on the analysis of the SoC's behavior, control over how the injected fault affects the chip remains inconclusive.

[Figure: example 16-byte faulty ciphertext blocks for each fault class]

Fig. 8.1 Three main types of faults, CF1 (54%), CF2 (32%), and CF3 (14%), identified after injecting faults in the 9th round of AES. Repeated values inside CF1 indicate buffer errors, CF2 shows errors while transferring the cipher to the output buffer, and CF3 shows corrupted AES values, which the experiments aim to retrieve
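The three fault classes can be told apart mechanically when triaging collected outputs. A minimal sketch (hypothetical classifier; the 0xCC filler byte and the class semantics are taken from the figure, and the word-repetition heuristic for CF1 is an assumption):

```python
def classify_fault(block, reference):
    """Classify a 16-byte faulty AES output block against the fault-free
    reference ciphertext, following the CF1/CF2/CF3 taxonomy of Fig. 8.1."""
    if block == reference:
        return "no fault"
    # CF2: output buffer largely unfilled (filler bytes): transfer error.
    if block.count(0xCC) >= len(block) // 2:
        return "CF2"
    # CF1: whole 4-byte words repeated inside the block (buffer error).
    words = [bytes(block[i:i + 4]) for i in range(0, len(block), 4)]
    if len(set(words)) < len(words):
        return "CF1"
    # CF3: ciphertext corrupted by the fault itself: the exploitable case.
    return "CF3"

ref = bytes(range(16))
cf1 = bytes([0x52, 0x4F, 0xF4, 0x9C] * 4)            # repeated words
cf2 = bytes([0xCC] * 14 + [0x50, 0x00])              # filler-dominated
cf3 = bytes([b ^ 0x2A if i in (0, 7, 10) else b      # a few flipped bytes
             for i, b in enumerate(ref)])
print(classify_fault(cf1, ref), classify_fault(cf2, ref), classify_fault(cf3, ref))
```

Only CF3-class outputs feed the subsequent differential fault analysis; CF1 and CF2 outputs are discarded.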


8.3.2 Electromagnetic Fault Injection Against a System-on-Chip, Toward New Micro-architectural Fault Models

In [28], researchers aimed to inject faults into the memory hierarchy and the memory management unit (MMU), and demonstrated how faults can alter the execution of instructions. Some instructions in the level 1 and level 2 caches are targeted, which changes the application's output. They also altered the mappings of the MMU, whose primary role in every system is to translate between virtual addresses and physical addresses; this approach introduces faults to change the values of these mappings. The experiment uses JTAG to stop the chip's execution when faults are observed and to read the register values and memory as the CPU sees them. Each subsystem was targeted individually, which is claimed to be a new fault model. The targeted chip in this experiment is the Raspberry Pi 3B. The chip has four cores, but only one core is activated to simplify the experiments. Furthermore, the applications run bare metal (no OS is running in these experiments) to avoid context switching and the error recovery mechanisms provided by an operating system. Faults under a bare-metal setup and a Linux setup are discussed to support the choice of a bare-metal system, with the conclusion that under Linux the effect of a fault would be at the micro-architectural level, whereas under a bare-metal setup the impact would be at the instruction level. The cache experiment raises a trigger, waits, and invalidates the instruction cache before entering a test loop:

trigger_up();
wait_us(2);
invalidate_icache();
for (int i = 0; i ...

The MMU experiment starts from the correct identity mapping shown in Listing 8.3.

VA      -> PA
0x0     -> 0x0
0x10000 -> 0x10000
0x20000 -> 0x20000
0x30000 -> 0x30000
0x40000 -> 0x40000
0x50000 -> 0x50000
0x60000 -> 0x60000
0x70000 -> 0x70000
0x80000 -> 0x80000
0x90000 -> 0x90000
0xa0000 -> 0xa0000
0xb0000 -> 0xb0000
0xc0000 -> 0xc0000
0xd0000 -> 0xd0000
0xe0000 -> 0xe0000
0xf0000 -> 0xf0000

Listing 8.3 Correct identity mapping [28]

After the fault injection step is performed, faults are observed in the MMU in three ways, as shown in Listing 8.4. Correct mappings still exist up to 0x70000; pages from 0x80000 to 0xb0000 are incorrectly mapped to 0x0; and pages from 0xc0000 to 0xf0000 map to incorrect pages.

VA      -> PA
0x0     -> 0x0
0x10000 -> 0x10000
0x20000 -> 0x20000
0x30000 -> 0x30000
0x40000 -> 0x40000
0x50000 -> 0x50000
0x60000 -> 0x60000
0x70000 -> 0x70000
0x80000 -> 0x0
0x90000 -> 0x0
0xa0000 -> 0x0
0xb0000 -> 0x0
0xc0000 -> 0x80000
0xd0000 -> 0x90000
0xe0000 -> 0xa0000
0xf0000 -> 0xb0000

Listing 8.4 Incorrect mapping [28]
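Detecting the corrupted entries amounts to diffing the observed table against the expected identity mapping. A brief sketch (hypothetical data layout mirroring the listings, not the authors' JTAG tooling):

```python
def diff_mapping(observed, step=0x10000, entries=16):
    """Compare an observed VA->PA table against the expected identity
    mapping and return the faulted virtual addresses with their PAs."""
    faults = {}
    for i in range(entries):
        va = i * step
        pa = observed.get(va)
        if pa != va:
            faults[va] = pa
    return faults

# Faulted table from the experiment: identity up to 0x70000, then
# 0x80000-0xb0000 aliased to 0x0, and 0xc0000-0xf0000 shifted down.
observed = {i * 0x10000: i * 0x10000 for i in range(8)}
observed.update({(8 + i) * 0x10000: 0x0 for i in range(4)})
observed.update({(12 + i) * 0x10000: (8 + i) * 0x10000 for i in range(4)})

faults = diff_mapping(observed)
print(len(faults))  # 8 of the 16 pages are mis-mapped
```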


The third fault in this experiment is injecting faults into the modified loop shown in Listing 8.5. Step-by-step execution using JTAG shows that the program is stuck in an infinite loop; applying the same parameters to the modified loop results in a similar infinite loop. The approach was able to find vulnerabilities in the SoC and introduce faults in different SoC subsystems (L1 cache, L2 cache, and MMU). However, the authors did not show what types of instructions, memory mappings, or memory blocks would be affected by the faults. The presented approach corrupts data randomly, with little information about the expected results. Future work should show which types of faults affect which regions.

489bc: a9be7bfd  stp x29, x30, [sp,#-32]!
489c0: 910003fd  mov x29, sp
489c4: b9001fbf  str wzr, [x29,#28]
489c8: b9001bbf  str wzr, [x29,#24]
489cc: b90017bf  str wzr, [x29,#20]
489d0: 900001a0  adrp x0, 7c000
489d4: 912d2000  add x0, x0, #0xb48
489d8: d2802002  mov x2, #0x100
489dc: 52800001  mov w1, #0x0
489e0: 94000b28  bl 4b680
489e4: 97fefe67  bl 8380
489e8: d2800040  mov x0, #0x2
489ec: 97feffe2  bl 8974
489f0: 94008765  bl 6a784
489f4: 940087ad  bl 6a8a8
489f8: b9001fbf  str wzr, [x29,#28]
489fc: 14000010  b 48a3c
48a00: b9001bbf  str wzr, [x29,#24]
48a04: 14000008  b 48a24
48a08: 940087c1  bl 6a90c
48a0c: b94017a0  ldr w0, [x29,#20]
48a10: 11000400  add w0, w0, #0x1
48a14: b90017a0  str w0, [x29,#20]
48a18: b9401ba0  ldr w0, [x29,#24]
48a1c: 11000400  add w0, w0, #0x1
48a20: b9001ba0  str w0, [x29,#24]
48a24: b9401ba0  ldr w0, [x29,#24]
48a28: 7100c41f  cmp w0, #0x31
48a2c: 54fffeed  b.le 48a08

Listing 8.5 Loop target application assembly [28]

8.3.3 Security Evaluation Against Electromagnetic Analysis at Design Time

In this scheme, the authors proposed a design-time evaluation methodology to analyze the EM leakage of processors. Their proposed simulation method involves


first separating the system into two partitions: one partition is the chip, and the other is the package. The chip is modeled in a circuit simulator, and the package is modeled and simulated based on its lumped characteristics, such as R, L, and C. The current consumption of the resulting circuit is then simulated, yielding a security evaluation metric for that circuit. The authors claim that the proposed methodology can be easily added to an IC design workflow to analyze systematic EM characteristics. Using an EM simulator tends to be very slow for complex circuit designs; as a result, the authors perform a current simulation of the circuit, since the magnetic and electric fields are a function of the current flowing through it. The chip is simulated with a circuit simulator such as SPICE, and the entire package is simulated using EM simulation and modeled based on its lumped elements. Figure 8.2 shows the design flow with the EM analysis. The Verilog/SPICE co-simulation allows users to target specific IPs, such as cryptographic modules, simulated with their respective testbenches. Once the EM analysis data is generated, it can be processed using tools like MATLAB. Figure 8.3 shows the data processing pipeline. As shown in the shaded region of the figure, near-field and far-field electromagnetic sensors capture data. Near-field sensors are used to capture direct EM fields, whereas far-field sensors are

Fig. 8.2 Digital design flow with EM analysis [12]: HDL (Verilog or VHDL) design code is synthesized against a standard-cell library to a gate-level netlist; floor planning, place-and-route, and extraction yield parasitics; Verilog/HSPICE co-simulation with the testbench, transistor-level design, technology models, and lumped-element package models feeds the EM analysis. The co-simulation allows users to target specific IPs, such as cryptographic modules, simulated with their respective testbenches; the resulting EM analysis data can be processed using tools like MATLAB
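The premise that the emitted fields can be derived from the simulated current admits a first-order post-processing sketch (Python; the long-conductor approximation, probe distance, and sampling step are illustrative assumptions, whereas the authors' flow uses full lumped-element package simulation):

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability (H/m)

def near_field_b(current, r=1e-3):
    """Near-field magnetic flux density of a long conductor,
    B = mu0 * I / (2*pi*r), per sample of the simulated current I(t)."""
    return [MU0 * i / (2 * math.pi * r) for i in current]

def far_field_proxy(current, dt=1e-9):
    """Far-field radiation scales with dI/dt; finite-difference proxy."""
    return [(b - a) / dt for a, b in zip(current, current[1:])]

i_t = [0.0, 0.01, 0.03, 0.02, 0.0]  # simulated Idd(t) samples (A)
b_t = near_field_b(i_t)
didt = far_field_proxy(i_t)
```

The near-field trace tracks I(t) directly, which is why the current simulation alone is sufficient for the evaluation metric.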

Fig. 8.3 EM analysis simulation procedure [12] (data processing pipeline): functional/power simulation of the Verilog and SPICE netlists with the testbench, technology models, and chip parasitics and package lumped elements produces Idd(t); the data files are synchronized, re-sampled according to the measuring setup, the EM emissions are simulated and low-pass filtered, yielding DEMA traces

Fig. 8.4 EMA simulation for S-XAP processor performing XOR with different operands [12]

used to receive modulated EM signals. The received signal is passed through a low-pass circuit to remove noise. The obtained EM analysis (EMA) traces, EMA1 and EMA2, are subtracted from each other to obtain a differential EM analysis (DEMA) trace; the peaks in the DEMA trace represent the leaked key information. The current consumption data was obtained by the authors using Synopsys Nanosim. EMA1 is obtained when the processor performs the 00 XOR 55 operation, and EMA2 is obtained while performing the 55 XOR 55 operation. Figure 8.4


Fig. 8.5 EMA experimental measurement for S-XAP processor performing XOR with different operands [12]

shows the simulated results and Fig. 8.5 shows the measurement results; the DEMA trace is shown at the bottom of each. Comparing the simulated results (Fig. 8.4) with the measured results (Fig. 8.5), the DEMA trace has a peak when the XOR operation is performed. This peak shows that the EM emissions are related to the data operation being performed. Furthermore, the simulated curves are lower in amplitude than the measured curves because the authors do not model the memory elements for store operations. The proposed methodology is based on time-domain analysis of the captured signals, which is simple to execute. It is not suited to frequency-domain analysis, which is more complicated; frequency-domain analysis can extract additional information, such as data execution in a loop, but requires a complex signal processing system. The CMOS characteristics of the chip being manufactured can also significantly affect the simulation; therefore, the efficacy of the proposed method needs to be tested with recent technology files.
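The subtraction-based DEMA step described above can be sketched as follows (toy traces and a hypothetical moving-average filter length; the traces are assumed to be already synchronized and re-sampled):

```python
def low_pass(trace, k=3):
    """Simple moving-average low-pass filter used to suppress noise."""
    half = k // 2
    out = []
    for i in range(len(trace)):
        window = trace[max(0, i - half): i + half + 1]
        out.append(sum(window) / len(window))
    return out

def dema(ema1, ema2):
    """Differential EM analysis: pointwise difference of two filtered
    traces; peaks mark samples where the emission depends on the data."""
    f1, f2 = low_pass(ema1), low_pass(ema2)
    return [a - b for a, b in zip(f1, f2)]

# Toy EMA traces for '00 XOR 55' vs '55 XOR 55': identical except at
# the XOR sample, where an operand-dependent peak appears.
ema1 = [0.2, 0.2, 0.9, 0.2, 0.2]
ema2 = [0.2, 0.2, 0.2, 0.2, 0.2]
d = dema(ema1, ema2)
```

The differential trace is flat where the two operations emit identically and non-zero around the data-dependent XOR sample.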

8.3.4 Design for EM Side-Channel Security Through Quantitative Assessment of RTL Implementations

This scheme proposes an optimization flow that depends on t-test evaluation results, obtaining fine-grained, localized information leakage metrics from the


register-transfer level (RTL) implementation. It is claimed that this is the first time a quantitative representation has been established at the RTL. Some generic security design rules are also suggested to make designs resilient against EM side-channel attacks. Additionally, it is reported that although different countermeasures for EM leakage are available, no comprehensive quantitative assessment techniques have been proposed to analyze EM leakage. The optimization framework is run on three different implementations of the AES module, a cryptographic module present in most SoC designs to perform AES encryption and decryption. The implementation based on NIST is named AES_NIST. Another implementation, designed by Satoh Lab, is named AES_LUT. The final implementation, based on the design from the evaluation platform by MIT Lincoln Lab, is named AES_CEP. AES_CEP has the same schematic as AES_NIST; however, the encryption rounds in AES_CEP are instantiated separately, so its execution is pipelined. This makes AES_CEP faster, but its area is much higher. The proposed framework is divided into two parts: the first evaluates the design, and the second optimizes it based on the evaluation. Figure 8.6 shows the design flow of the EM security quantitative evaluation method. The first step of the evaluation method (step 1 in the figure) is the generation of the logical components and the driven capabilities of those components from the RTL code. Then, L-traces representing the side-channel behavior of the chip are obtained in

Fig. 8.6 Cryptographic algorithm for EM security evaluation. The first step generates logical components and the driving capabilities of those components from the RTL code. L-traces representing the side-channel behavior of the chip are obtained in the second step. The final step compares t-test and TVLA values for quantitative assessment


step 2. The t-test is then performed on the L-traces. In the final step, step 3, the t-test values and the univariate TVLA results are compared to obtain a quantitative assessment. The proposed evaluation method works for both ASIC and FPGA implementations. For an FPGA implementation, the EDA tool maps the RTL code to the registers and logic components; for an ASIC implementation, technology libraries and synthesis strategies are required. Hamming distance (HD) and Hamming weight (HW) based EM radiation models are generated at the gate and register-transfer levels: the HD model is appropriate for simulating EM radiation across sequential state transitions, whereas the HW model is suitable for simulating EM radiation for each individual state. The L-trace simulation algorithm is shown in Algorithm 8.1. The RTL implementation, along with the input vectors for each clock cycle, is given as input. Logical components are generated from the RTL implementation, and the driving capabilities of these components are synthesized from them. For every clock cycle, HW-model and HD-model data are drawn; L-traces are a function of these HW and HD simulation data points at the respective clock intervals. Welch's t-test is the most widely used metric to check how different two given datasets are. The t-value is a function of the mean, variance, and cardinality of each dataset. If the value of t falls outside the range of ±4.5, the means of the two datasets are distinguishable at the given sample size, indicating that side-channel leakage is present. However, a high t-value does not guarantee that a side-channel attack will succeed.
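To make the leakage models and the ±4.5 TVLA criterion concrete, the sketch below implements a toy HW/HD leakage model and Welch's t-test on a fixed-vs-random comparison. This is a minimal illustration, not the authors' tool; all state values, noise levels, and group sizes are invented for the example.

```python
import math
import random

def hamming_weight(x: int) -> int:
    """HW model: leakage of an individual state = number of set bits."""
    return bin(x).count("1")

def hamming_distance(a: int, b: int) -> int:
    """HD model: leakage of a state transition = number of toggled bits."""
    return hamming_weight(a ^ b)

def l_trace(states):
    """A toy L-trace: one HW sample per state plus one HD sample per transition."""
    hw = [hamming_weight(s) for s in states]
    hd = [hamming_distance(a, b) for a, b in zip(states, states[1:])]
    return hw + hd

def welch_t(xs, ys):
    """Welch's t-statistic: (m1 - m2) / sqrt(v1/n1 + v2/n2)."""
    n1, n2 = len(xs), len(ys)
    m1, m2 = sum(xs) / n1, sum(ys) / n2
    v1 = sum((x - m1) ** 2 for x in xs) / (n1 - 1)
    v2 = sum((y - m2) ** 2 for y in ys) / (n2 - 1)
    return (m1 - m2) / math.sqrt(v1 / n1 + v2 / n2)

# TVLA-style fixed-vs-random check: |t| > 4.5 flags distinguishable means,
# i.e., evidence of side-channel leakage (though not a guaranteed attack).
rng = random.Random(0)
fixed_group = [sum(l_trace([0xAA, 0x55, 0xAA])) + rng.gauss(0, 0.5)
               for _ in range(200)]
random_group = [sum(l_trace([rng.randrange(256) for _ in range(3)])) + rng.gauss(0, 0.5)
                for _ in range(200)]
t = welch_t(fixed_group, random_group)
print(f"|t| = {abs(t):.1f} -> {'leaky' if abs(t) > 4.5 else 'no evidence'}")
```

Because the fixed group always exercises the same transitions (a constant HW/HD sum plus noise) while the random group does not, the two means separate and the t-statistic far exceeds the ±4.5 threshold.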

For the evaluation, three different AES implementations are chosen, as side-channel leakage varies across RTL implementations. In the AES_NIST implementation, the S-box signals are stored in a register, whereas in AES_LUT the S-box signals are passed to the mix-column operation. Finally, in AES_CEP, the S-box signals are passed directly to the internal flip-flops. The t-test results show that AES_NIST generates more EM leakage than AES_LUT, and AES_CEP presents the most leakage according to the evaluation method. However, when the EM


leakage is measured for the three AES implementations, the result differs. This difference is due to the FPGA implementation of AES_CEP, in which the registers are mapped into read-only RAM. Due to its sequential nature, this storage does not show any correlation, thereby reducing the chance of leaking data through EM side-channel analysis. This shows that the proposed evaluation works, but also that EM leakage can be significantly reduced by the implementation platform regardless of the evaluation. Integrating an implementation-aware metric into the evaluation algorithm could significantly improve the accuracy of the quantitative assessment. Furthermore, more work can be done in providing optimization metrics for the designer to follow.

8.3.5 Resilience of Error Correction Codes Against Harsh Electromagnetic Disturbances

In this approach [29], the authors evaluate three triplication-based error correction code variants. When each triplication code is simulated and tested for false-negative occurrence rates under single-frequency interference, the code that includes inversion is appreciably more robust to those disturbances. The experimental setup is simple. The device under test (DUT) consists of a PCB with a 50-ohm microstrip. A source with 50-ohm output impedance drives the microstrip with 1 V, and a 50-ohm load is connected at the other end. The bottom of the PCB is grounded, and the strip is 3 mm wide and 5 cm long. An encoder on one side sends data encoded with the error-correcting code; the data is transmitted over the channel and received by the data consumer. The entire transfer is subjected to EM radiation, and the induced voltages are calculated at all ports. The errors are divided into five categories, with thresholds set for the lower and upper voltage limits.

• Category 1: Decoded data is correct with no correction required (marked in green).
• Category 2: Decoded data is correct only when the correction method is applied (marked in yellow).
• Category 3: Decoded data is incorrect even after the correction method is used (marked in orange).
• Category 4: Decoded data is incorrect, and the correction method did not detect any error (marked in red).
• Category 5: The threshold voltage is crossed (marked in blue).

The simple triplication method is denoted TEC. For a 4-bit word ABCD, the TEC encoding is TEC(ABCD) = [AAA BBB CCC DDD]. On top of TEC, three variants are used for better EMI resilience. The triplication-based error correction codes are as follows.

Table 8.1 Experimental results (in %) of the fault categorization distribution [29]. Categories 1, 2, 3, 4, and 5 are marked green, yellow, orange, red, and blue, respectively

Code    Blue      Green     Yellow    Orange    Red
TEC     4.66E−1   5.06E−1   1.44E−2   1.05E−2   2.28E−3
TAN     4.67E−1   5.06E−1   1.48E−2   1.13E−2   2.84E−4
TCILT   4.68E−1   5.06E−1   1.44E−2   1.12E−2   2.52E−6
TDILT   4.66E−1   5.06E−1   1.71E−2   1.02E−2   0.00

• Triplication with Nominal encoding and Altered bit-order: TAN(ABCD) = [ABCD CADB DCBA]
• Triplication setting the bits Close with Inversion at the Last Triplet: TCILT(ABCD) = [AA A− BB B− CC C− DD D−]
• Triplication with Dispersed data bits and Inversion at the Last Triplet: TDILT(ABCD) = [ABCD ABCD A−B−C−D−]

Here X− denotes the inverted bit X. The results of the fault categorization distribution are summarized in Table 8.1. In the red category, which represents the false negatives, the triplication-based code TAN is 10 times better than TEC, while TCILT is 1000 times better than TEC. Furthermore, the false negatives for TDILT are zero. The simulation results demonstrate the efficacy of the TDILT error correction code. The researchers also injected faults to measure how the triplication-based codes behave, and all codes exhibited significant false negatives in that analysis. They conclude that a code based on inversion can provide greater dependability over a communication channel.
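To make the triplication idea concrete, the sketch below implements TEC encoding with majority-vote decoding, plus the TDILT variant with an inverted last group. This is an illustrative reconstruction under the encodings quoted above, not the authors' code.

```python
def tec_encode(bits):
    """TEC: triplicate every data bit, e.g. ABCD -> AAA BBB CCC DDD."""
    return [b for b in bits for _ in range(3)]

def tec_decode(code):
    """Majority vote over each triplet; corrects one flip per triplet."""
    return [1 if code[3 * i] + code[3 * i + 1] + code[3 * i + 2] >= 2 else 0
            for i in range(len(code) // 3)]

def tdilt_encode(bits):
    """TDILT: dispersed copies with the last group inverted: ABCD ABCD A'B'C'D'.
    The inverted group means a common-mode disturbance driving every wire to
    the same level no longer looks like a valid codeword."""
    return bits + bits + [1 - b for b in bits]

def tdilt_decode(code):
    n = len(code) // 3
    # Re-invert the last group, then take a per-bit majority vote.
    groups = [code[:n], code[n:2 * n], [1 - b for b in code[2 * n:]]]
    return [1 if sum(g[i] for g in groups) >= 2 else 0 for i in range(n)]

word = [1, 0, 1, 1]
for enc, dec in [(tec_encode, tec_decode), (tdilt_encode, tdilt_decode)]:
    cw = enc(word)
    cw[2] ^= 1                  # single bit flip induced by the disturbance
    assert dec(cw) == word      # the flip is corrected by majority voting
print("single-bit faults corrected by both codes")
```

The difference between the codes shows up not in this single-flip case, which all variants correct, but in the false-negative rates of Table 8.1 under sustained interference.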

8.4 Summary

In summary, EM fault injection and side-channel analysis are powerful attack methods with which attackers can disrupt the behavior of a targeted system and leak sensitive data. The reviewed literature examines EM attacks targeting SoCs, an emergent technology deemed challenging to attack due to its complexity and high speed. The surveyed schemes have found vulnerabilities in the security module, memory cache, and MMU when state-of-the-art attack models are employed. However, the effects of the faults are yet to be fully explored. While other research articles provide various attack models and countermeasures, determining a strict standard of practice still seems unsolved. The current fault-tolerant models are adequate for specific attacks, but more effort is needed to increase their efficacy. Future work on tools that automate the formulation of mitigation strategies in the design workflow will interest academia.


References

1. S. Aftabjahani, R. Kastner, M. Tehranipoor, F. Farahmandi, J. Oberg, A. Nordstrom, N. Fern, A. Althoff, Special session: CAD for hardware security: automation is key to adoption of solutions, in 2021 IEEE 39th VLSI Test Symposium (VTS) (IEEE, 2021), pp. 1–10
2. S. Bhasin, D. Mukhopadhyay, Fault injection attacks: attack methodologies, injection techniques and protection mechanisms, in International Conference on Security, Privacy, and Applied Cryptography Engineering (Springer, 2016), pp. 415–418
3. D. Boneh, R.A. DeMillo, R.J. Lipton, On the importance of eliminating errors in cryptographic computations. J. Cryptol. 14(2), 101–119 (2001)
4. A. Dehbaoui, J.M. Dutertre, B. Robisson, P. Orsatelli, P. Maurine, A. Tria, Injection of transient faults using electromagnetic pulses: practical results on a cryptographic system. IACR Cryptol. ePrint Archive (2012)
5. A. Dehbaoui, J.M. Dutertre, B. Robisson, A. Tria, Electromagnetic transient faults injection on a hardware and a software implementations of AES, in 2012 Workshop on Fault Diagnosis and Tolerance in Cryptography (IEEE, 2012), pp. 7–15
6. D. El-Baze, J.B. Rigaud, P. Maurine, An embedded digital sensor against EM and BB fault injection, in 2016 Workshop on Fault Diagnosis and Tolerance in Cryptography (FDTC) (2016), pp. 78–86. https://doi.org/10.1109/FDTC.2016.14
7. M.A. Elmohr, H. Liao, C.H. Gebotys, EM fault injection on ARM and RISC-V, in 2020 21st International Symposium on Quality Electronic Design (ISQED) (2020), pp. 206–212. https://doi.org/10.1109/ISQED48828.2020.9137051
8. J. He, X. Guo, M. Tehranipoor, A. Vassilev, Y. Jin, EM side channels in hardware security: attacks and defenses. IEEE Des. Test 39(2), 100–111 (2022). https://doi.org/10.1109/MDAT.2021.3135324
9. J. He, H. Ma, X. Guo, Y. Zhao, Y. Jin, Design for EM side-channel security through quantitative assessment of RTL implementations (2020), pp. 62–67. https://doi.org/10.1109/ASP-DAC47756.2020.9045426
10. D.M. Helfeld, A study of justice department policies on wire tapping. Law. Guild Rev. 9, 57 (1949)
11. S. Koffas, P.K. Vadnala, On the effect of clock frequency on voltage and electromagnetic fault injection, in Applied Cryptography and Network Security Workshops, ed. by J. Zhou, S. Adepu, C. Alcaraz, L. Batina, E. Casalicchio, S. Chattopadhyay, C. Jin, J. Lin, E. Losiouk, S. Majumdar, W. Meng, S. Picek, J. Shao, C. Su, C. Wang, Y. Zhauniarovich, S. Zonouz (Springer International Publishing, Cham, 2022), pp. 127–145
12. H. Li, A.T. Markettos, S. Moore, Security evaluation against electromagnetic analysis at design time, in Cryptographic Hardware and Embedded Systems – CHES 2005 (Springer, Berlin, Heidelberg, 2005), pp. 280–292
13. P. Maistri, J. Po, A low-cost methodology for EM fault emulation on FPGA, in 2022 Design, Automation & Test in Europe Conference & Exhibition (DATE) (2022), pp. 1185–1188. https://doi.org/10.23919/DATE54114.2022.9774507
14. F. Majéric, E. Bourbao, L. Bossuet, Electromagnetic security tests for SoC, in 2016 IEEE International Conference on Electronics, Circuits and Systems (ICECS) (IEEE, 2016), pp. 265–268
15. F. Majeric, E. Bourbao, L. Bossuet, Electromagnetic security tests for SoC (2016), pp. 265–268. https://doi.org/10.1109/ICECS.2016.7841183
16. P. Maurine, Techniques for EM fault injection: equipments and experimental results, in 2012 Workshop on Fault Diagnosis and Tolerance in Cryptography (IEEE, 2012), pp. 3–4
17. N. Miura, Z. Najm, W. He, S. Bhasin, X.T. Ngo, M. Nagata, J.L. Danger, PLL to the rescue: a novel EM fault countermeasure, in 2016 53rd ACM/EDAC/IEEE Design Automation Conference (DAC) (2016), pp. 1–6. https://doi.org/10.1145/2897937.2898065
18. N. Moro, A. Dehbaoui, K. Heydemann, B. Robisson, E. Encrenaz, Electromagnetic fault injection: towards a fault model on a 32-bit microcontroller, in 2013 Workshop on Fault Diagnosis and Tolerance in Cryptography (IEEE, 2013), pp. 77–88
19. M.R. Muttaki, T. Zhang, M. Tehranipoor, F. Farahmandi, FTC: a universal sensor for fault injection attack detection, in 2022 IEEE International Symposium on Hardware Oriented Security and Trust (HOST) (2022), pp. 117–120. https://doi.org/10.1109/HOST54066.2022.9840177
20. A. Nahiyan, F. Farahmandi, P. Mishra, D. Forte, M. Tehranipoor, Security-aware FSM design flow for identifying and mitigating vulnerabilities to fault attacks. IEEE Trans. Comput. Aided Des. Integr. Circuits Syst. 38(6), 1003–1016 (2019). https://doi.org/10.1109/TCAD.2018.2834396
21. G. Piret, J.J. Quisquater, A differential fault attack technique against SPN structures, with application to the AES and KHAZAD (2003), pp. 77–88. https://doi.org/10.1007/978-3-540-45238-6_7
22. J. Proy, K. Heydemann, F. Majéric, A. Cohen, A. Berzati, Studying EM pulse effects on superscalar microarchitectures at ISA level. Preprint (2019). arXiv:1903.02623
23. J.J. Quisquater, D. Samyde, Eddy current for magnetic analysis with active sensor, in Proceedings of eSMART, vol. 2002 (2002)
24. D. Samyde, S. Skorobogatov, R. Anderson, J.J. Quisquater, On a new way to read data from memory, in First International IEEE Security in Storage Workshop, 2002. Proceedings (IEEE, 2002), pp. 65–69
25. J.M. Schmidt, M. Hutter, Optical and EM fault-attacks on CRT-based RSA: concrete results, in Austrochip 2007, 15th Austrian Workshop on Microelectronics, 11 October 2007, Graz, Austria, Proceedings (Verlag der Technischen Universität Graz, 2007), pp. 61–67
26. R. Singh, S. Islam, B. Sunar, P. Schaumont, An end-to-end analysis of EMFI on bit-sliced post-quantum implementations. Preprint (2022). arXiv:2204.06153
27. N. Timmers, A. Spruyt, M. Witteman, Controlling PC on ARM using fault injection, in 2016 Workshop on Fault Diagnosis and Tolerance in Cryptography (FDTC) (2016), pp. 25–35. https://doi.org/10.1109/FDTC.2016.18
28. T. Trouchkine, S.K. Bukasa, M. Escouteloup, R. Lashermes, G. Bouffard, Electromagnetic fault injection against a system-on-chip, toward new micro-architectural fault models. arXiv (2019)
29. J. Van Waes, D. Vanoost, J. Vankeirsbilck, J. Lannoo, D. Pissoort, J. Boydens, Resilience of error correction codes against harsh electromagnetic disturbances: fault elimination for triplication-based error correction codes. IEEE Trans. Electromagn. Compat. PP, 1–10 (2020). https://doi.org/10.1109/TEMC.2019.2948478
30. H. Wang, H. Li, F. Rahman, M.M. Tehranipoor, F. Farahmandi, SoFI: security property-driven vulnerability assessments of ICs against fault-injection attacks. IEEE Trans. Comput. Aided Des. Integr. Circuits Syst. 41(3), 452–465 (2022). https://doi.org/10.1109/TCAD.2021.3063998

Chapter 9

CAD for Hardware/Software Security Verification

9.1 Introduction

Security validation of an SoC involves numerous challenges due to the complexity of the SoC design. Modern SoCs are composed of complex hardware and software/firmware intellectual properties (IPs) that interact through a system bus architecture. In addition, SoCs deployed in Internet of Things (IoT) and smart devices contain several security assets that need protection from possible attacks. Therefore, design houses must adopt a comprehensive security verification and validation plan, in parallel with functional verification, to ensure the confidentiality, integrity, and availability (CIA) of the security assets and protect the chip from vulnerabilities. The plan includes identifying security assets, defining appropriate security properties and threat models, designing hardware-based security architectures that protect the chip from the lowest levels of the system, and validating the hardware and software/firmware to find vulnerabilities introduced during design [6, 28]. Furthermore, researchers currently emphasize developing secure hardware architectures to protect the SoC [5, 16, 20]. For several decades, firmware has been a primary attack surface. Attackers exploit vulnerable points of the firmware to obtain access to the system: for instance, they can run malicious code using a vulnerable update routine [19], or gain access to a protected memory region through a faulty hardware configuration. Random input simulation may not detect all vulnerable spots of the design, and exhaustive simulation or formal verification is not feasible for a complete SoC. Therefore, a comprehensive and sophisticated validation method is essential to identify the potential weak spots of the firmware. Current industrial practices for firmware/software validation rely primarily on simulation. On the other hand, formal methods are gaining popularity for validating hardware designs against security vulnerabilities.
Furthermore, formal methods are very helpful for validating complex processors against their instruction set architecture (ISA) model.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023. F. Farahmandi et al., CAD for Hardware Security, https://doi.org/10.1007/978-3-031-26896-0_9

Instead of validating firmware/software and hardware


separately, formal validation for hardware can be extended to the low-level software to enable hardware/software co-verification. This approach provides thorough insight through an application programmer interface (API) that can guarantee the correct behavior of the hardware and software. Based on the above discussion, this chapter explains a comprehensive security verification/validation flow for the different abstraction levels of an SoC design and analyzes the corresponding security aspects. It focuses on surveying schemes that propose promising security validation techniques for different abstraction levels. In [28], the authors discuss security assurance requirements and identify security vulnerabilities and the challenges in mitigating them. ITUS [20] provides a secure SoC design that protects the chip from software/firmware vulnerabilities. DOVE [8] uses symbolic simulation and assertion mining to generate formal assertions automatically and find security vulnerabilities for firmware security validation. The authors in [25] extend the usage of interval property checking (IPC) from hardware to hardware/software co-verification. Finally, the authors in [22] propose formally analyzable models for abstraction levels above the register-transfer level (RTL). The chapter is organized as follows. Section 9.2 presents the background and some related works. Section 9.3 discusses the literature review. Finally, Sect. 9.4 concludes the chapter.

9.2 Background

SoC security is driven by the requirement to protect system assets against unauthorized access. Security vulnerabilities can be introduced at different stages of the SoC design life-cycle [10], and attackers can exploit them to gain access to security assets. Therefore, security validation and verification efforts are required at different stages of the SoC design life-cycle. Though validation and verification are used interchangeably in industry, there are some differences between the two concepts, primarily to do with the role of specifications. Validation checks whether the specification captures the customer's needs, whereas verification is the process of checking that the implementation meets the specification [11, 33]. Security validation approaches enforce security policies, whereas security verification involves verifying security properties. Security policy enforcement is performed at the system level; security properties are verified during the design and integration phases. A security policy aims to map the requirements to "actionable" design constraints that IP designers or SoC integrators can use to develop protection mechanisms, whereas security properties are statements that a design should hold to fulfill the specification criteria [9]. A computational system is typically composed of hardware, firmware, and software levels, as shown in Fig. 9.1. Any vulnerability or functional bug at any of these abstraction levels can put the whole system at risk. However, the overall system cannot be verified by a single verification approach. Therefore

Fig. 9.1 Different levels of abstraction of a computational system: user application software, operating system, firmware, hypervisor, and hardware (architecture, micro-architecture, register-transfer level, gate, and physical layout)

different verification approaches have been developed, focusing on one or more abstraction levels. In this chapter, we will present various verification and validation approaches using formal methods.

9.3 Literature Survey

9.3.1 System-on-Chip Platform Security Assurance: Architecture and Validation

In this Internet of Things (IoT) era, billions of devices are connected to cloud networks and data centers through the Internet, processing vast amounts of personal and security-critical data. These data are protected through cryptographic operations, which depend on several security assets such as encryption/decryption keys, true random numbers, asymmetric keys, e-fuse configurations, etc. The system-on-chips (SoCs) integrated inside these devices, which are responsible for managing the devices' security assets, need protection against possible attack scenarios and vulnerabilities introduced during design. The surveyed work [28] aims to provide a comprehensive survey of the security vulnerabilities and potential threats associated with modern SoC designs, security assurance requirements, and mitigation practices. Security assurance of SoCs requires a comprehensive analysis of asset identification, the attack model, the countermeasures to prevent such attacks, and risk assessment, all of which should proceed along the SoC life-cycle [27, 28]. Figure 9.2 shows the typical life-cycle of an SoC along with a security assessment and validation plan. An overview of the process is discussed in the following sections.


Fig. 9.2 Life-cycle of a typical SoC with security flow

Fig. 9.3 Classification of policies required for hardware/software security verification: access control, information flow, liveness, secure boot, message immutability, impersonation, and non-observability

9.3.1.1 Asset Definition and Security Policies

During the design phase, the design house needs to identify all security assets, both static assets and those generated during run-time, and define appropriate security policies for each asset to prevent unauthorized access and ensure the CIA of the assets. For example, "During boot time, data transmitted by the crypto engine cannot be observed by any IP in the SoC other than its intended target" is a confidentiality requirement policy [28]. Figure 9.3 represents a classification of policies, which are explained as follows.

• Access Control: These policies define how any IP inside the chip can access security assets. For example, an unauthorized IP must not have access to the protected memory address region of the one-time programmable (OTP) memory, which contains the static security assets.
• Information Flow: Due to design vulnerabilities, security assets can be inferred through indirect observation or snooping. Information flow policies restrict such inferences or information flows.
• Liveness: These policies ensure that the system functions correctly without any stagnation during operation.
• Secure Boot: During the boot process, significant communication involving the security assets (e.g., configuration data stored in the eFuse, priority-based access control, different types of cryptographic keys, firmware, etc.) occurs among different IPs. Secure boot policies ensure the security of these assets.
• Message Immutability: These policies ensure the integrity of a message during transfer among different IPs.
• Impersonation: These policies ensure that an impersonator cannot imitate the behavior of the receiver or the sender during secure communication.


• Non-observability: These policies ensure that other IPs cannot access any private message.
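As a small illustration, an access-control policy like the OTP example above can be phrased as a rule that a validation harness checks against every observed bus transaction. The address range, IP names, and allow-list below are hypothetical, invented only for this sketch.

```python
# A toy reference monitor for an access-control policy (all names, address
# ranges, and the allow-list below are hypothetical, not from the chapter).
OTP_PROTECTED = range(0x1000, 0x2000)          # assumed protected OTP window
AUTHORIZED = {"crypto_engine", "key_mgmt"}     # assumed authorized IP blocks

def access_allowed(ip: str, addr: int) -> bool:
    """Policy: only authorized IPs may touch the protected OTP region."""
    return addr not in OTP_PROTECTED or ip in AUTHORIZED

# A validation harness replays a trace of bus transactions against the rule.
trace = [("crypto_engine", 0x1004), ("dma_ctrl", 0x1010), ("dma_ctrl", 0x3000)]
violations = [(ip, hex(addr)) for ip, addr in trace if not access_allowed(ip, addr)]
print(violations)  # only the DMA access into the OTP region is flagged
```

In practice such rules are mapped into hardware checkers or formal properties rather than software monitors, but the mapping from policy statement to checkable rule is the same.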

9.3.1.2 Sources of Vulnerabilities

After asset identification and policy definition, the design house needs to identify the sources of vulnerabilities and the appropriate attack model. The SoC life-cycle passes through several stages, from design and manufacturing through in-field operation to end of life, introducing many possible vulnerabilities and attack surfaces. Possible sources of vulnerabilities can be divided into three categories.

Design Challenges: High device complexity, reduced time-to-market, high diversity, and continuous connectivity are four critical challenges to the security assurance of modern computing devices [26, 28]. In addition, a focus on functionality-agnostic validation, limited expertise in security-aware design and verification, and reduced time-to-market, which leaves less time for adequate validation, contribute to the unintentional introduction of vulnerabilities into the SoC design.

Supply Chain Challenges: Due to the globalization of the SoC manufacturing process, each stage of the SoC life-cycle introduces vulnerabilities that attackers can exploit. Driven by reduced time-to-market, SoC integrators integrate untrusted third-party intellectual properties (3PIPs) into the design, which may contain hardware Trojans or hidden backdoors. On the other hand, an untrusted SoC design house can overuse 3PIPs or commit IP piracy to steal a design. An untrusted foundry can overproduce integrated circuits (ICs), perform IC piracy/cloning, and supply defective chips into the supply chain. Moreover, end-users may reverse engineer the chip to clone it, and remarked/recycled chips can reenter the supply chain. Figure 9.4 shows the SoC life-cycle with associated vulnerabilities.

In-Field Challenges: During in-field operation, the chip and its security assets may face different attacks such as information leakage, access control/privilege mode changes through fault injection attacks, software attacks, etc.
Modern embedded SoCs are built on hardware–software interaction, which provides numerous attack surfaces to the attacker. The situation is exacerbated by the proliferation of smart computing devices. For example, modern smartphones run different apps with access to device-specific or personal information and assets, and such apps can insert malware or perform network attacks, hardware attacks, etc. In addition, the authors of [28] provide a taxonomy of adversaries based on the entry points of attacks (Fig. 9.5). A short explanation of these adversaries is given below:

• Unprivileged Software Adversary: In this model, the adversary has no access to any part of the architecture that contains privileged information, and the attacks are performed through user-level application software. The attacks rely on design bugs (hardware and software) to evade the protection


Fig. 9.4 Life-cycle of an SoC with supply chain vulnerabilities. The green boxes represent the stages of the SoC life-cycle (IP vendor, SoC design house, foundry, deployment), and the red boxes highlight the associated threats (hardware Trojan insertion and hidden backdoors; IP piracy/cloning and Trojans inserted in the design, e.g., by tools; Trojan implantation, overproduction, and cloning; secret information leakage and magnetic field attacks)

Fig. 9.5 Taxonomy of adversaries based on the attack entry points: unprivileged software, system software, software covert channel, naïve hardware, hardware reverse engineering, and malicious hardware intrusion

mechanisms. Examples of these attacks include buffer overflow, code injection, etc.
• System Software Adversary: In this model, the assumption is that the operating system is malicious in addition to the user-level software. Therefore, the


protection and mitigation techniques must rely on the hardware level to enforce security policies.
• Software Covert Channel Adversary: In this model, the adversary is assumed to perform side-channel attacks and, in addition to system and application software, to have access to non-functional system characteristics (e.g., power consumption).
• Naïve Hardware Adversary: In this model, the attacks are performed through debug interfaces and by glitching control or data lines. The assumption is that the attacker has control over the hardware.
• Hardware Reverse Engineering Adversary: The assumption of this model is that the adversary can reverse engineer the SoC and gain access to on-chip secrets.
• Malicious Hardware Intrusion Adversary: In this model, the adversary implants malicious hardware (e.g., a Trojan) into the chip and gains access to the security assets through fault injection, information leakage, etc.

9.3.1.3 Secure Architecture Components

The design house needs to develop a secure architecture to ensure policy enforcement and authentication mechanisms. A significant amount of effort has gone into developing and standardizing secure architectures. Most architectures create a trusted execution environment (TEE) to isolate software and sensitive data. The most common platform for establishing a TEE is the trusted platform module (TPM) [31]. It stores and protects security assets (e.g., passwords, certificates, cryptographic keys, etc.) and provides secure authentication and attestation for secure computing. In addition, ARM TrustZone [2], Intel Software Guard Extensions (SGX) [18], and Samsung KNOX [29] are some widely used secure architectures that isolate the malicious/untrusted software (the normal world) from the trusted hardware (the trusted world).

9.3.1.4 Security Validation

Security validation is one of the crucial aspects of security assurance and resilient design. The goal of security validation is to validate security conditions that are not covered by functional validation. Formal methods are promising for verifying security policies and finding vulnerabilities inside the design during pre-silicon security verification [21]. For the post-silicon phase, however, the authors emphasize two validation techniques: fuzz testing and penetration testing.

Fuzzing: During fuzz testing [14], random or unexpected inputs are provided to the design under test, and results such as crashes, failing assertions, and memory leaks are monitored [3]. The results can identify the attacker's potential entry points, such as buffer overflows, rare conditions, denial-of-service, etc. However, because it depends on random inputs, fuzzing suffers from coverage issues and may miss some possible attack points. Figure 9.6 shows a high-level block diagram of fuzz testing.
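The fuzzing loop of Fig. 9.6 can be sketched in a few lines: a fuzzer draws random vectors, applies them to the design under test, and records the inputs that trigger a crash. The toy DUT below, with its deliberate length-field bug, is purely illustrative and not from the chapter.

```python
import random

def dut(packet: bytes) -> int:
    """Toy design under test with a deliberate bug: the first byte is a
    length field, and a mismatch with the actual payload size (a missing
    bounds check downstream) causes a crash."""
    length = packet[0]
    buf = packet[1:1 + length]
    if len(buf) != length:          # models the buffer-overflow condition
        raise ValueError("buffer overflow")
    return sum(buf) & 0xFF

def fuzz(runs: int = 1000, seed: int = 0):
    """Feed random inputs to the DUT and collect crashing test cases."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(runs):
        packet = bytes(rng.randrange(256) for _ in range(rng.randrange(1, 8)))
        try:
            dut(packet)
        except Exception:
            crashes.append(packet)  # each crash is a potential attack entry point
    return crashes

crashes = fuzz()
print(f"{len(crashes)} crashing inputs found out of 1000")
```

The crashing inputs collected here are exactly the kind of entry points that the penetration-testing flow described next would then exploit and analyze in depth.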


Fig. 9.6 A high-level block diagram of Fuzz testing. A fuzzer utilizes random inputs from a random number generator and provides vectors that are applied to the design under test for security validation

Penetration Testing: Penetration testing requires deep knowledge of the design and implementation [3]. It iteratively follows three steps to find the possible attack surfaces.

• Attack Surface Enumeration: First, the possible attack features or aspects are identified. This requires extensive testing, such as random testing or fuzz testing.
• Vulnerability Exploitation: Depending on the vulnerabilities, the attack entry points are exploited against possible target areas.
• Result Analysis: Once an attack is successful, the outcome is compared with the security objectives and policies to determine whether the design is compromised.

In short, the authors provide a systematic approach to ensuring the security of the assets of chips deployed in security-critical applications.

9.3.2 Secure RISC-V System-on-Chip

The above discussion describes an overall process for protecting security assets and validating the design against possible vulnerabilities. However, the rising tide of attacks on SoCs has led researchers to focus on hardware-based solutions to protect the SoC and its security assets. Secure hardware protects the chip from possible software-based attacks, making the system more resilient. The authors of [20] proposed a secure SoC, ITUS, built around the RISC-V instruction set architecture. Its security features are secure boot, key management, a cryptographic accelerator, and a memory protection unit. Figure 9.7 shows the high-level block diagram of the ITUS architecture. The hardware accelerator block provides public-key cryptography based on elliptic-curve cryptography (ECC) and Rivest–Shamir–Adleman (RSA) for authentication protocols in remote communication, secure boot, software updates, and digital signatures. For data transmission and storage, it uses symmetric encryption. The key management unit contains a physical unclonable function (PUF) and a true random number generator (TRNG) for key generation. It has two RISC-

9.3 Literature Survey

195

Fig. 9.7 High-level block diagram of ITUS architecture. The hardware accelerator block consists of public key cryptography based on elliptic-curve cryptography (ECC) and the RSA for authentication protocols in remote communication, secure boot, software update, and digital signature

V rocket cores used as processors, and one of them is used for TEE, and the other is for a rich execution environment (REE). Thus, the SoC provides isolation between secure and regular software programs. The secure boot process of the SoC maintains a chain of trust (CoT) where the hardware is responsible for authentication of the bootloader, BIOS/Firmware, the operating system, and the user-level software. Conventional unified extensible firmware interface (UEFI)-based secure boot is a software-based protocol where the software running on the processor runs the authentication of the bootloader and image files. Figure 9.8 represents the conventional and ITUS secure boot process. The processor reads the zero-stage boot loader (ZSBL) from the read-only memory (ROM) and authenticates it. After authentication, the bootloader reads the first stage bootloader from the external memory, and the processor authenticates it. Similarly, the newly authentic bootloader is responsible for bringing the image files inside the chip, and the processor performs the authentication. In ITUS, a hardware boot sequencer and the code authentication unit are designed to authenticate the bootloader and image files.


9 CAD for Hardware/Software Security Verification

Fig. 9.8 Conventional and ITUS secure boot process. In contrast to the conventional secure boot, ITUS deploys a hardware boot sequencer and code authentication unit to authenticate the bootloader and image files. (a) Conventional secure boot. (b) Hardware secure boot (this work)
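The chain-of-trust idea shared by both flows in Fig. 9.8 can be sketched as follows. This is a toy model: real secure boot authenticates each stage with digital signatures (e.g., ECDSA), whereas plain SHA-256 digests stand in here, and the stage contents are invented.

```python
# A toy sketch of the chain of trust in Fig. 9.8: each stage carries the
# digest of the next stage, so tampering with any later stage breaks the
# chain. SHA-256 digests stand in for real signature verification.
import hashlib

def digest(blob: bytes) -> bytes:
    return hashlib.sha256(blob).digest()

def verify_chain(stages):
    """stages: list of (image_bytes, expected_digest_of_next) pairs,
    rooted in immutable ROM. Returns True only if every link verifies."""
    for (image, expected_next), (next_image, _) in zip(stages, stages[1:]):
        if digest(next_image) != expected_next:
            return False        # chain of trust broken at this link
    return True

zsbl = b"zero-stage bootloader"        # stored in ROM, implicitly trusted
fsbl = b"first-stage bootloader"
os_image = b"operating system image"

chain = [(zsbl, digest(fsbl)), (fsbl, digest(os_image)), (os_image, None)]
assert verify_chain(chain)             # untampered boot succeeds

chain[2] = (b"malicious OS", None)     # attacker swaps the OS image
assert not verify_chain(chain)         # the chain detects the tampering
```

In ITUS, the `verify_chain` role is played by the hardware boot sequencer and code authentication unit rather than by software running on the processor.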

A hardware-based authentication mechanism provides more security than software, along with better performance and better control over sensitive data. In ITUS, the external memory is divided into trusted and non-trusted regions. The trusted region is protected from software interception or modification. The memory protection unit (MPU), designed for secure storage, is responsible for data reads/writes in the trusted region. The MPU has secure memory range registers (SMRRs) that contain the memory range partitioning of the trusted region. AES-GCM is used to ensure the confidentiality and integrity of the data stored in the trusted region. To ensure the freshness of memory content and prevent replay attacks, ITUS uses a Bonsai Merkle Tree (BMT) structure. ITUS also supports partial memory encryption. Figure 9.9 shows the block diagram of the MPU. The key management unit (KMU) has a PUF with error-correcting code and operates in two modes. In one mode, it generates the PUF response by taking either a key ID or a challenge–syndrome pair as input. In the other mode, a table is created with user-supplied or TRNG-generated challenge, syndrome, and key ID entries. The key ID works as a handle for the PUF response stored inside the chip memory. The PUF response is hashed with SHA3 to obtain the final key. The ITUS KMU generates elliptic-curve key pairs using the PUF: the secret key is derived from the hashed PUF response, and the public key is computed by multiplying the base point of the curve by the private key. For ephemeral key generation, ITUS uses the TRNG. Figure 9.10 shows the block diagram of the KMU. The authors in [20] presented the area overhead of their implementation; however, they do not clearly discuss the threat model or the attack scenarios from which ITUS protects the security assets or sensitive information. Tables 9.1, 9.2, and 9.3 show the area overhead results.

Fig. 9.9 Block diagram of the memory protection unit. The MPU has secure memory range registers (SMRRs) that contain the memory range partitioning of the trusted region

Fig. 9.10 Block diagram of the key management unit. The KMU has a PUF with error-correcting code that operates in two modes: in one mode, it generates the PUF response by taking either a key ID or a challenge–syndrome pair as input; in the other, a table is created with user-supplied or TRNG-generated challenge, syndrome, and key ID entries

Table 9.1 Secure boot resource usage

                                                    Slice LUTs       FFs
Secure boot with ECDSA (sect233r1) including SHA3   27,170 (13.3%)   6722 (1.6%)
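The replay-protection role of the Merkle tree can be sketched as follows. This toy model hashes four raw memory blocks with SHA-256; a real Bonsai Merkle Tree instead hashes per-block counters, and the block contents here are invented.

```python
# A toy Merkle tree over four memory blocks, sketching how the MPU can
# detect replayed (stale) memory content: replaying an old block changes
# the recomputed root, which no longer matches the on-chip trusted root.
import hashlib

def h(*parts: bytes) -> bytes:
    return hashlib.sha256(b"".join(parts)).digest()

def merkle_root(blocks):
    level = [h(b) for b in blocks]          # leaf hashes
    while len(level) > 1:                   # hash pairs up to the root
        level = [h(level[i], level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

memory = [b"block0", b"block1", b"secret=42", b"block3"]
trusted_root = merkle_root(memory)          # kept on-chip, out of attacker reach

memory[2] = b"secret=41"                    # attacker replays an older block
assert merkle_root(memory) != trusted_root  # freshness check fails
```

Keeping only the root on-chip is the point of the construction: external memory can hold the tree itself, yet any replay of stale data is caught when the root is recomputed.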


Table 9.2 ECDSA (binary curve sect233r1) implementation

                              Baseline SW    Vectorized SW      HW
                              Intel IVB      Intel IVB + AVX    FPGA
Cycles                        2,226,927      405,330            6720
Frequency (MHz)               2000           2000               100
Time (ms)                     1.11           0.22               0.067
Power (mW)                    2100           2600               386
Energy (mJ)                   2331           572                0.025
Energy efficiency (mJ × ms)   2587.4         125.84             0.0016

Table 9.3 KMU resource usage (with sub-components)

           Slice LUTs    FFs
ECkeygen   21,473        249
PUF        6965          2027
BCHenc     71            333
BCHdec     251           371
PUFraw     5554          364
KMU        29,529        3344

9.3.3 Symbolic Assertion Mining for Security Validation

According to a National Vulnerability Database survey, firmware vulnerabilities have been rising over the past decades, as shown in Fig. 9.11. These vulnerabilities are exploited to perform various firmware attacks [13]. Recent works combine symbolic execution and assertion checking to verify the firmware execution flow. However, the assertions are most often derived from naturally written language; they are difficult to derive, error-prone, and may miss potential vulnerabilities. Therefore, an automated way of generating assertions can greatly ease the security verification of firmware. The scope of [13, 30] is assertion mining for the formal verification of firmware in an embedded system. In [8], the authors proposed an approach named DOVE for automatic assertion generation, based on the observation that rarely exercised behaviors can hide potential security vulnerabilities. The inputs to the framework are firmware in binary form and an abstract model of the hardware on which the firmware runs. As depicted in Fig. 9.12, DOVE consists of three steps. In the first step, a symbolic simulation of the firmware is performed to obtain a symbolic tree [8]. In the second step, a probability mapper annotates the edges of the symbolic tree with the probability of path execution. In the third step, for each node of the symbolic tree, assertions are generated using a linear temporal logic (LTL) template, G(antecedent → consequent), where the antecedent is the constraint derived for a node of the symbolic tree using symbolic execution, and the consequent is the value of the variable observed during the execution of the path. The assertion corresponding to the lowest-ranked probability is the mined assertion.

Fig. 9.11 Firmware vulnerability survey showing a sharp increase over the past decades

Fig. 9.12 Architectural view of DOVE, which consists of three steps. The first step performs a symbolic simulation of the firmware; the second step maps the probability of path execution onto the symbolic tree; the third step generates the assertions

According to the authors, DOVE can perform a selective symbolic simulation that targets the firmware functionalities most exposed to attacks. The authors utilized the symbolic simulation tool KLEE to obtain the symbolic tree in the first step. Following the method described in [12], the nodes of the symbolic tree are annotated with the probability of path execution. For assertion generation in the third step, the LTL template described earlier is translated into an assertion using an always operator and a next operator to connect the antecedent and the consequent. The authors applied the proposed framework to validate a memory protection mechanism and an interrupt service handler. For case study 1, Fig. 9.13 shows the probabilities of the assertions generated by DOVE, with (blue) and without (orange) the firmware vulnerability. The analysis of the assertion with the lowest probability (a1) reveals a situation where the IP interface asks the firmware to (1) disable the interrupts, (2) enable the protected memory, and (3) perform a write in its own code. Assertion a1 highlights that the system can be attacked by exploiting the firmware. In Fig. 9.14, the x axis corresponds to the memory addresses of the instructions pointed to by the PC during the simulation, and the y axis represents the probability of the PC pointing to those memory addresses. The top-ranked assertions generated by DOVE pinpoint corner cases in the execution flow of the firmware where unauthorized operations are performed. In particular, the assertion associated with the most unlikely execution path is as follows:


Fig. 9.13 Case study 1: probabilities of assertions generated by DOVE. The ranking shows the probability of assertions with (blue) and without (orange) the firmware vulnerability

Fig. 9.14 Case study 2: probability of reading the memory location addressed by the PC


G(X[37](mem[0] = 0) ∧ X[48](mem[4] = 0x4108) → X[68](PC = 0x3910)), with probability 1.4e−18. This assertion shows how specific values processed by the interrupt handler allow the CPU to fetch an instruction from memory address 0x3910, where a fake (malicious) handler reset value was previously stored. Therefore, the results obtained from these experiments validate the effectiveness and efficiency of the DOVE framework. The primary advantage of the proposed assertion mining technique is that it mines assertions in an automated manner, without manual effort. Since manual intervention is not required, the identified assertions are less error-prone. Design engineers can utilize the identified assertions to patch their design, as these assertions describe undetected vulnerabilities in the corner cases of the firmware execution. According to the authors, DOVE can mine security assertions in less time; therefore, utilizing it will reduce verification time. Despite these benefits, the proposed assertion mining technique has some limitations:

• The assertions mined by the framework are not generic; they apply only to a given firmware running on particular hardware.
• The assertion mining depends on a probabilistic approach. Hence, it may identify many false-positive corner cases that do not lead to any vulnerability scenario, wasting both verification time and effort.
• In the experimental results, the performance of DOVE was not compared to any other assertion mining tools, so its relative effectiveness cannot be validated.
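The essence of steps 2 and 3, ranking candidate assertions by path-execution probability, can be sketched as follows. The tree shape, edge probabilities, constraints, and observed values are invented; a real flow would derive them with a symbolic executor such as KLEE.

```python
# A minimal sketch of DOVE-style assertion ranking: enumerate root-to-leaf
# paths of a (hand-made) symbolic tree, annotate each with its execution
# probability, and emit one G(antecedent -> consequent) assertion per path.
from dataclasses import dataclass, field

@dataclass
class Node:
    constraint: str          # path constraint from symbolic execution
    observation: str         # variable value observed on this path
    children: list = field(default_factory=list)  # (edge_probability, Node)

def mine_assertions(root):
    """Return (probability, assertion) pairs sorted ascending, so the
    least likely path (the interesting corner case) comes first."""
    results = []
    def walk(node, prob):
        if not node.children:
            results.append((prob, f"G({node.constraint} -> {node.observation})"))
            return
        for edge_p, child in node.children:
            walk(child, prob * edge_p)   # path probability = product of edges
    walk(root, 1.0)
    return sorted(results)

# Toy firmware control flow: a rare branch (p = 0.001) redirects the PC.
leaf_common = Node("irq_enabled = 1", "PC = 0x2000")
leaf_rare   = Node("irq_enabled = 0 and mem_prot = 1", "PC = 0x3910")
root = Node("start", "", children=[(0.999, leaf_common), (0.001, leaf_rare)])

ranked = mine_assertions(root)
lowest_prob, mined = ranked[0]
print(lowest_prob, mined)
```

The lowest-probability entry plays the role of assertion a1 above: it flags the rarely executed path where the suspicious PC value is reached.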

9.3.4 Hardware/Software Co-verification Using Interval Property Checking

Embedded system verification requires applying various verification techniques at different abstraction levels, making it a daunting task for verification engineers. Though hardware/software co-verification approaches have become popular in recent years [7, 15, 17], they suffer from scalability issues. In traditional RTL hardware verification, properties typically span up to around a hundred clock cycles. In HW/SW co-verification, however, the formulated properties span thousands of clock cycles. Hence, the computational model for verification becomes intractable even for the most powerful solvers. The authors in [25] consider the hardware and the software program of a complex embedded system as a joint computational model so that the flaws in the interplay



Fig. 9.15 Construction of an embedded system at the hardware level as a design under verification (DUV)

of hardware events and corresponding software actions can be detected. As shown in Fig. 9.15, a design under verification (DUV) can be built for an embedded system at the hardware level. The executable code of a software program is loaded into a program memory connected to the processor. The execution sequence of an individual segment of the program can be considered a sequential circuit C, which can be implemented using a finite state machine (FSM) M. Therefore, multiple FSMs need to be implemented for the overall program. Because of the various FSMs in the design, a long sequence of operations becomes involved. As a result, the security properties required to formally verify the DUV are no longer straightforward; they span thousands of clock cycles. For a long unrolled circuit of M_L FSMs, the corresponding security property has been defined as a long property φ_L by the authors in [25]. A powerful solver may return unreachable counter-examples even if a long property φ_L holds in the design. According to the authors, the main reason behind such scenarios is the lack of proper reachability constraint definitions. The proposed formal-verification-based approach, named interval property checking (IPC), can handle models of large complexity. In this approach, a property-based abstraction method has been developed. The goal is to prove a "long" property φ_L with a large length L. To prove the long property, the approach proceeds in the following way (as shown in Fig. 9.16):

• Step 1: Check whether an IPC checker can prove the long property φ_L.
• Step 2: If the proof fails, simplify the computational model and decompose the long property φ_L into a set of short properties φ_l1^1, φ_l2^2, ..., φ_ln^n.


Fig. 9.16 The approach taken for proving long property: (a) proof goal and (b) proof steps

• Step 3: Check whether the IPC checker can prove all short properties; if so, the long property is also proven.
• Step 4: If the IPC checker is not able to prove all short properties, decompose the short properties into even shorter properties and repeat Steps 1–3.

As a DUV, an embedded system has been considered for experimental purposes, with an open-source 32-bit processor [1], two memory modules (one RAM and one ROM), and a peripheral module (UART). The Verilog code of these hardware components is 6850 lines long. The processor executes a C program that is compiled and loaded into the ROM module. The software program variables can be translated to the hardware level using a converter tool to generate the properties. The authors implemented the proposed verification approach for two case studies: (1) a system that runs a Fibonacci program and (2) a system with a Local Interconnect Network (LIN) protocol. The results indicate that the proposed abstract model can verify longer properties in less time while utilizing fewer resources compared to a concrete model (Fig. 9.17). The authors pointed out some advantages of the abstraction-model-based HW/SW co-verification method. In this approach, a model checker performs formal verification on an abstraction model of a system rather than on its concrete model. Decomposing a long security property into short properties reduces the computational complexity, verification time, and memory utilization during verification. It also lessens the probability of getting false counter-examples caused by reachability issues. Moreover, abstraction-model-based verification can also be utilized for memory model verification [24, 32]. However, this method is not free of limitations:

• Although the decomposition of long properties into short properties is mentioned, no automatic approach to perform the decomposition is provided.


Fig. 9.17 Increase in design size over the years. This rapid increase in the design size has left verification capability lag behind the design/fabrication capability, resulting in a “verification gap”

• The authors assume that if a long property fails, the only reason is a lack of proper constraint definitions. However, the property may also fail because the property itself is improperly defined.
• If the long property is erroneous and the short properties derived from it somehow pass the checking, unknown vulnerabilities may remain undetected in the design.
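Steps 1–4 above amount to a simple worklist loop. The sketch below models them abstractly: prove() is a hypothetical stand-in for an interval property checker, and a property is represented only by the number of clock cycles it spans.

```python
# Sketch of the IPC decompose-and-retry loop: try to prove a property; if
# the checker cannot handle it, split it into shorter interval properties
# and retry, until everything is proven or nothing can be split further.
def prove(prop_len, solver_limit=100):
    """Pretend IPC checker: succeeds only on properties short enough
    to unroll within the solver's capacity."""
    return prop_len <= solver_limit

def split(prop_len, parts=2):
    """Decompose a long property into shorter interval properties."""
    step = -(-prop_len // parts)   # ceiling division
    return [min(step, prop_len - i * step)
            for i in range(parts) if prop_len - i * step > 0]

def check_long_property(prop_len):
    worklist = [prop_len]
    while worklist:
        p = worklist.pop()
        if prove(p):
            continue               # this interval property holds
        if p <= 1:
            return False           # cannot decompose further
        worklist.extend(split(p))  # Steps 2/4: decompose and retry
    return True                    # all short properties proven

print(check_long_property(1000))   # prints True
```

The real method additionally simplifies the computational model at each decomposition and must argue that the conjunction of the short properties implies the long one; the loop above only captures the control flow.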

9.3.5 Verification Driven Formal Architecture and Microarchitecture Modeling

With growing user demand and shrinking time-to-market, design houses are fabricating more chips in less time. However, verification effort does not grow at the same rate. Therefore, a "verification gap" is growing, and hardware security verification is becoming challenging [4]. Since many design and state variables are associated with a design, functional verification suffers from state-space explosion. Moreover, mapping design specifications to an implementation requires extensive human effort. Sometimes user-defined specifications are incomplete or hard to analyze, resulting in inappropriate design modeling. Each of these factors can jeopardize verification efforts. High-level hardware models (e.g., SystemC and SystemVerilog) aim to raise the abstraction level to facilitate the designer's productivity. However, the provided abstraction yields executable but not analyzable descriptions. On the other hand, an RTL implementation of a specification provides structural and timing information, but it does not provide high-level computation information. To overcome the computational and analyzability shortcomings of the different abstraction levels and minimize the gap between them, a higher level of abstraction


has been proposed in [23]. The authors claim that the proposed abstraction levels help verification engineers obtain computation-based transaction information about the design under verification. The proposed abstraction levels are (i) the architectural model, which provides a data-centric view of computation, and (ii) the μ-architectural model, which provides a detailed implementation of transactions and information on shared physical resources. To describe these two abstraction levels, the authors define the following semantics for the architectural model:

• Transaction: any concurrent unit of data computation
• Architectural state: any register, memory location, or queue
• Local state elements: elements whose lifetime equals the transaction lifetime
• Transaction step: a step in which a state updates
• Transaction graph: a series of transaction steps
• Transaction class: a tuple (Γ, S), where Γ is a transaction graph and S is a set of local state elements that may update Γ

Figure 9.18 shows the transaction graph of an example processor. Here, fetch, decode, and execute are transaction steps, and each instruction execution corresponds to a transaction graph; the overall model is called the transaction class. The semantics of the μ-architectural model are similar to those of the architectural one. The main difference between the μ-architectural and architectural states lies in the resources associated with the states. Moreover, the μ-architectural model captures transaction information for shared resources. When multiple transactions share the same resource, the authors assume that the resource is managed by an FSM, named the resource manager, which grants resource allocation based on a transaction step's request. Figure 9.19 shows the transaction graph of a pipelined processor, presented using the μ-architectural model. In the processor example, the ALU resource pool is accessed during the execution stage of different instructions (e.g., load, store, and add). The resource manager takes care of the resource allocation. The authors in [23] claim that the architecture and μ-architecture models can be used in different verification methods. For example:

• Functional Simulation: Since these abstraction levels provide a notion of simulation coverage metrics by providing information about every transaction graph, transaction, etc., they can be used to generate automated test benches.
• Formal Verification Across Models: As the μ-architectural model utilizes some semantics of the architectural model, correspondence functions fα and c(fα) can be established between the levels to check equivalence.

Fig. 9.18 Architectural transaction class for the processor—(a) state (b) transaction graph


Fig. 9.19 μ-Architectural transaction class for the pipelined processor—(a) state (b) transaction graph


• Verifying the RTL: Since an RTL implementation can be obtained by synthesizing the μ-architecture model, equivalence can be checked between these two levels of abstraction.
• Run-Time Validation: The μ-architecture model has some salient features that make it highly suitable for adding online monitoring/checking and recovery techniques for run-time validation.

The proposed levels of abstraction in [23] clearly have some advantages. The idea of using architecture and microarchitecture to create a multilevel framework for verification is generalizable. The distinction between the proposed model and traditional models such as RTL is that it considers a static view of the design and provides a mechanism to associate events with the corresponding specification-level transaction. Thus, the μ-architecture makes this information easy to understand and access for formal analysis. Therefore, inappropriate hardware modeling can be reduced to a great extent by filling the gap between the specification and implementation of hardware designs, as described in [23]. Some limitations of the proposed abstraction levels are mentioned below:

• When an RTL implementation is synthesized from such higher levels of abstraction (e.g., architectural and microarchitectural), the synthesis tool may introduce vulnerabilities through its optimization effort (e.g., vulnerable FSM encoding), which may hamper the verification effort.
• The high-level synthesized RTL code may suffer from redundancy. The existence of dead code in the synthesized RTL implementation may increase verification time and complexity.
• Although the models described in [23] provide a simple high-level description that minimizes the gap between implementation and specification by describing how units of work are implemented using hardware blocks, these models may not be applicable to designs that do not divide their units of data computation.
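The notions of a transaction graph and a resource manager can be sketched as follows. The class names, the FSM encoding, and the load/add example are illustrative stand-ins, not the concrete formalism of [23].

```python
# A minimal sketch of the transaction-class idea: a transaction graph as an
# ordered tuple of steps, plus a toy resource-manager FSM that grants a
# shared ALU to one transaction at a time.
from dataclasses import dataclass, field

@dataclass
class TransactionClass:
    name: str
    graph: tuple                 # ordered transaction steps
    local_state: set = field(default_factory=set)

class ResourceManager:
    """FSM with two states: FREE (owner is None) and BUSY(owner).
    Grants the shared resource to the first requester; others must stall."""
    def __init__(self):
        self.owner = None
    def request(self, txn):
        if self.owner is None:
            self.owner = txn
            return True          # grant
        return self.owner == txn # re-grant only to the current owner
    def release(self, txn):
        if self.owner == txn:
            self.owner = None

load = TransactionClass("load", ("fetch", "decode", "execute", "mem"))
add  = TransactionClass("add",  ("fetch", "decode", "execute"))

alu = ResourceManager()
assert alu.request(load)     # load's execute step gets the ALU
assert not alu.request(add)  # add must stall: the resource is busy
alu.release(load)
assert alu.request(add)      # after release, add proceeds
```

In the verification methods above, a model of this shape is what test-bench generation and cross-model equivalence checks would operate on.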

9.4 Summary

Chapter 9 analyzes and surveys existing techniques that use CAD tools for security verification in both software and hardware. It describes how formal methods provide better insights for security verification and validation of both hardware and software. The first two techniques [20, 28] aim to validate system-level security, whereas the rest focus on security verification efforts at lower levels of abstraction. The authors in [28] presented a comprehensive overview of platform security. The systematic approach described in [28] has opened many emerging research directions that should be taken into account when creating trustworthy computing devices. In [20], V.B.Y. Kumar et al. proposed a secure RISC-V SoC that incorporates various security features such as trustworthy boot, confidentiality/integrity of memory, PUF-based key management, and enclaving support. The proposed scheme suggests that design houses consider the trade-off between security management features and performance while building a secure architecture. Moreover, vulnerabilities at lower levels of abstraction (e.g., firmware, hardware/software interface, hardware implementation) manifest at the system level when a device is deployed in the field. In [8], A. Danese et al. proposed a framework named "DOVE" to detect firmware vulnerabilities in an automated manner. The framework can identify vulnerabilities in the form of assertions in less time and paves the way to reduced verification effort and time. M.D. Nguyen et al. [25] presented a novel property-based abstraction scheme that proves long and complex properties in HW/SW systems spanning thousands of clock cycles. Lastly, in [22], the authors came up with higher levels of abstraction to fill the "verification gap," shown in Fig. 9.17, between specification and implementation. The proposed abstraction levels reduce the risk of inappropriate RTL implementation and enhance the designer's productivity. After reviewing these verification techniques, we can see that overall system verification cannot be completed, nor can a secure architecture be obtained, by utilizing only a single verification approach at a time. Different verification approaches at different levels of abstraction should come into play.

References

1. T. Aitch, A Pipelined RISC CPU Aquarius (SuperH-2 ISA Compatible CPU Core) (2003). http://www.opencores.info/projects.cgi/web/aquarius/
2. ARM Security Technology: Building a Secure System using TrustZone Technology. https://developer.arm.com/documentation/genc009492/c
3. K.Z. Azar, M.M. Hossain, A. Vafaei, H. Al Shaikh, N.N. Mondol, F. Rahman, M. Tehranipoor, F. Farahmandi, Fuzz, penetration, and AI testing for SoC security verification: challenges and solutions. Cryptology ePrint Archive (2022)
4. B. Bailey, A new vision for scalable verification. EE Times (February 18, 2004)
5. S. Bhasin, T.E. Carlson, A. Chattopadhyay, V.B.Y. Kumar, A. Mendelson, R. Poussier, Y. Tavva, Secure your SoC: building system-on-chip designs for security, in 2020 IEEE 33rd International System-on-Chip Conference (SOCC) (IEEE, 2020), pp. 248–253
6. B. Chen, K. Cong, Z. Yang, Q. Wang, J. Wang, L. Lei, F. Xie, End-to-end concolic testing for hardware/software co-validation, in 2019 IEEE International Conference on Embedded Software and Systems (ICESS) (IEEE, Piscataway, 2019), pp. 1–8
7. L. Cordeiro, B. Fischer, H. Chen, J. Marques-Silva, Semiformal verification of embedded software in medical devices considering stringent hardware constraints, in 2009 International Conference on Embedded Software and Systems (IEEE, Piscataway, 2009), pp. 396–403
8. A. Danese, V. Bertacco, G. Pravadelli, Symbolic assertion mining for security validation, in 2018 Design, Automation & Test in Europe Conference & Exhibition (DATE) (IEEE, Piscataway, 2018), pp. 1550–1555
9. F. Farahmandi, Y. Huang, P. Mishra, SoC security verification using property checking, in System-on-Chip Security (Springer, Berlin, 2020), pp. 137–152
10. N. Farzana, F. Rahman, M. Tehranipoor, F. Farahmandi, SoC security verification using property checking, in 2019 IEEE International Test Conference (ITC) (2019), pp. 1–10
11. N. Farzana, F. Farahmandi, M. Tehranipoor, SoC security properties and rules. Cryptology ePrint Archive (2021)
12. A. Filieri, C.S. Păsăreanu, W. Visser, Reliability analysis in symbolic pathfinder, in 2013 35th International Conference on Software Engineering (ICSE) (IEEE, Piscataway, 2013), pp. 622–631
13. Flood of New Advisories Expose Massive Gaps in Firmware Security. https://eclypsium.com/wp-content/uploads/2019/11/Flood-of-New-Advisories.pdf
14. Fuzzing for Software Security Testing and Quality Assurance, Second Edition. https://ieeexplore.ieee.org/abstract/document/9100404/
15. D. Große, U. Kühne, R. Drechsler, HW/SW co-verification of embedded systems using bounded model checking, in Proceedings of the 16th ACM Great Lakes Symposium on VLSI (2006), pp. 43–48
16. T.T. Hoang, C. Duran, R. Serrano, M. Sarmiento, K.D. Nguyen, A. Tsukamoto, K. Suzaki, C.K. Pham, Trusted execution environment hardware by isolated heterogeneous architecture for key scheduling. IEEE Access 10, 46014–46027 (2022)
17. A. Horn, M. Tautschnig, C. Val, L. Liang, T. Melham, J. Grundy, D. Kroening, Formal co-validation of low-level hardware/software interfaces, in 2013 Formal Methods in Computer-Aided Design (IEEE, Piscataway, 2013), pp. 121–128
18. Intel® Software Guard Extensions (Intel® SGX). https://www.intel.com/content/www/us/en/architecture-and-technology/software-guard-extensions.html
19. C. Kallenberg, J. Butterworth, X. Kovah, S. Cornwell, Defeating Signed BIOS Enforcement (2014). https://www.mitre.org/publications/technical-papers/defeating-signedbios-enforcement
20. V.B.Y. Kumar, A. Chattopadhyay, J. Haj-Yahya, A. Mendelson, ITUS: a secure RISC-V system-on-chip, in 2019 32nd IEEE International System-on-Chip Conference (SOCC) (2019), pp. 418–423
21. E. Love, Y. Jin, Y. Makris, Proof-carrying hardware intellectual property: a pathway to trusted module acquisition. IEEE Trans. Inf. Forensics Secur. 7(1), 25–40 (2012)
22. Y. Mahajan, C. Chan, A. Bayazit, S. Malik, W. Qin, Verification driven formal architecture and microarchitecture modeling, in 2007 5th IEEE/ACM International Conference on Formal Methods and Models for Codesign (MEMOCODE 2007) (2007), pp. 123–132
23. Y. Mahajan, C. Chan, A. Bayazit, S. Malik, W. Qin, Verification driven formal architecture and microarchitecture modeling, in 2007 5th IEEE/ACM International Conference on Formal Methods and Models for Codesign (MEMOCODE 2007) (IEEE, Piscataway, 2007), pp. 123–132
24. Y.A. Manerkar, D. Lustig, M. Martonosi, M. Pellauer, RTLCheck: verifying the memory consistency of RTL designs, in Proceedings of the 50th Annual IEEE/ACM International Symposium on Microarchitecture (2017), pp. 463–476
25. M.D. Nguyen, M. Wedler, D. Stoffel, W. Kunz, Formal hardware/software co-verification by interval property checking with abstraction, in Proceedings of the 48th Design Automation Conference (2011), pp. 510–515
26. S. Ray, J. Bhadra, Security challenges in mobile and IoT systems, in 2016 29th IEEE International System-on-Chip Conference (SOCC) (2016), pp. 356–361
27. S. Ray, W. Chen, R. Cammarota, Protecting the supply chain for automotives and IoTs, in Proceedings of the 55th Annual Design Automation Conference (2018), pp. 1–4
28. S. Ray, E. Peeters, M.M. Tehranipoor, S. Bhunia, System-on-chip platform security assurance: architecture and validation. Proc. IEEE 106(1), 21–37 (2018)
29. Samsung Knox | Secure mobile platform and solutions. https://www.samsungknox.com/en
30. A. Takanen, Fuzzing: the past, the present and the future. Actes du, 202–212 (2009)
31. TPM 1.2 Main Specification. https://trustedcomputinggroup.org/resource/tpm-mainspecification/
32. C. Trippel, Y.A. Manerkar, D. Lustig, M. Pellauer, M. Martonosi, TriCheck: memory model verification at the trisection of software, hardware, and ISA. ACM SIGPLAN Not. 52(4), 119–133 (2017)
33. Verification vs validation. https://www.easterbrook.ca/steve/2010/11/the-difference-betweenverification-and-validation

Chapter 10

CAD for Machine Learning in Hardware Security

10.1 Introduction

The recent transition from a vertical development model to a horizontal development model in the hardware industry has led to new threat models. With the outsourcing of fabrication, design, and integration steps, there is a lack of trust among entities in the supply chain. Untrusted parties work together during development, which increases the openings for malicious entities to reverse engineer, reproduce, or tamper with intellectual property. Recent strides have been made in the hardware security industry to close some of these openings by developing new architectures, more advanced verification, and Trojan detection frameworks, among a myriad of other efforts. In response, attackers have developed new and more complex attacks to achieve their aims. The arms race between attackers and hardware designers has continued to escalate, with both sides employing various machine learning (ML) techniques to develop new attacks and protections. The application of machine learning techniques has been especially fruitful in the domains of Trojan detection and side-channel analysis [1, 25, 31]. Designers, system integrators, and fabricators are heavily invested in better hardware Trojan detection frameworks because a Trojan in their product may result in heavy financial losses and loss of trust. Trojans can be detected at different times, such as design, run, and/or test time. Detection of Trojans at run-time poses a unique challenge because detection techniques must be flexible enough to detect unexpected new attacks in real time while also having minimal hardware overhead [11]. Deep learning offers the potential to augment existing hardware Trojan detection frameworks, detecting previously unknown attacks while also demonstrating high accuracy on known attacks.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 F. Farahmandi et al., CAD for Hardware Security, https://doi.org/10.1007/978-3-031-26896-0_10


Recent work has explored the use of machine learning and deep learning techniques to detect Trojans in multi-core router architectures [9, 11]. Various algorithms can be used depending on the case, demonstrating the flexibility of machine learning in these domains. Hardware Trojans come in a variety of forms, which is best expressed in the taxonomy created in [32]; the taxonomy provides a framework for identifying Trojans at the chip level. Machine learning can also be effectively applied to the realm of side-channel analysis [7, 13]. Machine learning-based methods are often used to augment side-channel attacks: they infer powerful relationships between measurements that can retrieve protected or sensitive information. Also prevalent is their use in side-channel leakage assessment, identifying whether an attacker can access any information at all [17]. Even if the attacker cannot recover a full key, for example, any side-channel leakage remains a threat because it can aid in the recovery of the asset. Side-channel leakages can be of different orders, which usually refers to the power emanating from the system; an order-n leakage manifests in the n-th statistical moment of the measurements. The leakages can also be univariate or multivariate, depending on whether the leakage model involves one variable or more than one. Attackers can analyze these side-channel leakages to snoop on critical information/assets that are meant to be inaccessible to unauthorized parties. The literature survey in this chapter aims to synthesize the advances in these two domains and highlight the applicability of machine learning to them. This chapter does not present novel implementations of any of these techniques; instead, it compiles previous work and presents it in a novel way.

10.2 Background on the Problem

Machine learning technology is often defined by its primary goal: to understand the underlying structure of input data, fitting it into models that we can use and analyze. There are two primary forms of machine learning utilized in the surveyed works:
• Supervised Learning—The algorithm is provided with example data points labeled with the expected outputs.
• Unsupervised Learning—The algorithm is provided with example data points with no labels. It must find the commonalities among the data on its own.
The various machine learning techniques utilized in the evaluated implementations are discussed in this section.


10.2.1 Machine Learning Techniques

10.2.1.1 Deep Learning

This approach involves using neural networks to process the data. A network contains layers, each holding a matrix of weights that is linearly applied to the layer's inputs, followed by a non-linear activation applied to each result. The final output layer contains as many output coordinates as there are classes [4].
• Training—In training, a loss function quantifies the efficacy of the current weights in the network. The weights are adjusted after each batch (a subdivision of the data set) based on the gradient of this loss function. An epoch has occurred once all the batches of data have passed through the network. The step performed after training is inference, in which we measure the accuracy of the trained model as described below.
• Measuring Accuracy—The accuracy of the model is usually evaluated using a validation accuracy metric, where a separate validation data set is used to test how well the learned traits generalize. Typically, one does not test the network on the data used for its training because the goal is to extract traits from the data, not to memorize (overfit) it.
The structure of weights and activation functions is commonly referred to as the network architecture. Several architectures have become popular across various domains due to their broad capabilities. For instance, different architectures can perform image segmentation, defective sample identification, data synthesis, text recognition, and much more. As a result, the hardware assurance community has adopted several of these architectures, employing them on sub-problems of PCB and component analysis.
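The batch/epoch/validation vocabulary above can be made concrete with a minimal gradient-descent loop for a single logistic unit on synthetic data (an illustrative sketch, not any specific framework; all names, data, and hyperparameters here are invented):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class data: label = 1 when the feature sum is positive.
X = rng.normal(size=(200, 4))
y = (X.sum(axis=1) > 0).astype(float)

w = np.zeros(4)
b = 0.0
lr = 0.5

def predict_proba(samples):
    # Logistic output in [0, 1] from the current weights.
    return 1.0 / (1.0 + np.exp(-(samples @ w + b)))

# One epoch = every batch of the data set passes through the model once.
for epoch in range(20):
    for start in range(0, len(X), 32):          # batches of 32 samples
        xb, yb = X[start:start + 32], y[start:start + 32]
        err = predict_proba(xb) - yb            # gradient of the log-loss
        w -= lr * xb.T @ err / len(xb)          # weights adjusted per batch
        b -= lr * err.mean()

# "Validation accuracy": measured on fresh data the model never trained on.
Xv = rng.normal(size=(100, 4))
yv = (Xv.sum(axis=1) > 0).astype(float)
val_acc = ((predict_proba(Xv) > 0.5) == yv).mean()
```

Testing on held-out data, as in the last three lines, is exactly the validation step described above: it checks generalization rather than memorization.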

10.2.1.2 Decision Tree

A tree-formatted model is used to predict an output value based on input variables. The branches are splits on the data's determined attributes, and the leaves are the conclusions made about the data's expected value [24]. Processes like defect detection or side-channel analysis can have tens of thousands of inputs. In these cases, it is not possible to manually tune an equation that determines counterfeits or chip behavior as a function of the inputs, and decision trees can be quite helpful. The tree structure effectively pits the features against one another in a tournament style, determining which inputs are most informative for a given output.
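The branch/leaf idea can be illustrated with a one-level tree (a decision stump) that scans every feature for the single most informative threshold (a toy sketch with invented data, not a production learner):

```python
def best_stump(samples, labels):
    """Pick the (feature, threshold) split that misclassifies fewest samples.

    samples: list of feature tuples; labels: list of 0/1 outcomes.
    Each candidate split is a 'branch' on one attribute; the majority
    label on each side of the split becomes the 'leaf' prediction.
    """
    best = None
    n_features = len(samples[0])
    for f in range(n_features):
        for threshold in sorted({s[f] for s in samples}):
            left = [l for s, l in zip(samples, labels) if s[f] <= threshold]
            right = [l for s, l in zip(samples, labels) if s[f] > threshold]
            # Errors under a majority vote at each leaf.
            errors = (min(left.count(0), left.count(1))
                      + min(right.count(0), right.count(1)))
            if best is None or errors < best[0]:
                leaf_left = int(left.count(1) > left.count(0))
                leaf_right = int(right.count(1) > right.count(0))
                best = (errors, f, threshold, leaf_left, leaf_right)
    return best[1:]  # (feature, threshold, left leaf, right leaf)

# Toy data: the label depends only on the second feature (index 1).
samples = [(1, 5), (2, 7), (3, 6), (1, 2), (2, 1), (3, 3)]
labels = [0, 0, 0, 1, 1, 1]
feature, threshold, leaf_left, leaf_right = best_stump(samples, labels)
```

The exhaustive scan over features is the "tournament" described above: the split that survives is the most informative input for this output.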

10.2.1.3 K-Nearest Neighbors (KNN)

This approach is a pattern recognition model used for classification, regression, and search. The algorithm calculates the distances between a query and the data examples, selects the K instances that are closest to the query, and votes for the most common label (classification) or averages the labels (regression) [23]. In the context of hardware assurance, KNN is often employed as a highly computationally efficient anomaly detection method. For instance, several EM traces of expected behavior and suspicious behavior are collected during side-channel analysis. When an unknown sample is encountered, KNN can be used to determine whether it is more similar to the suspicious or the genuine data, and a decision can be made as to that sample's authenticity.
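The distance-then-vote procedure above can be sketched in plain Python (toy numeric "traces"; real EM traces would be long vectors, and all data here is invented):

```python
from collections import Counter

def knn_classify(query, examples, labels, k=3):
    """Vote among the k stored examples closest to the query."""
    # Squared Euclidean distance from the query to every stored example.
    dists = [sum((q - e) ** 2 for q, e in zip(query, ex)) for ex in examples]
    nearest = sorted(range(len(examples)), key=dists.__getitem__)[:k]
    votes = Counter(labels[i] for i in nearest)
    return votes.most_common(1)[0][0]

# Toy side-channel "traces": genuine behavior clusters near (1, 1),
# suspicious behavior clusters near (5, 5).
examples = [(1.0, 1.2), (0.9, 1.0), (1.1, 0.8),
            (5.0, 5.1), (4.8, 5.2), (5.2, 4.9)]
labels = ["genuine", "genuine", "genuine",
          "suspicious", "suspicious", "suspicious"]

verdict = knn_classify((1.05, 1.1), examples, labels)  # near the genuine cluster
```

There is no training stage: all computation happens at query time, which is why KNN is attractive when an inexpensive anomaly check is needed.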

10.2.1.4 Support Vector Machine (SVM)

This approach is a linear model for classification and regression. It creates a line or hyper-plane that separates the data into classes. Similar to KNN, it can determine whether an unknown sample is closer to one data category than another, so both methods are commonly used for the same purpose. Unlike KNN, SVMs require an initialization (training) stage to determine where these hyper-planes should be drawn between groups in the data, making SVM the more computationally intensive method to set up. In return, it provides significant advantages, such as rapid classification time and nonlinear relationship detection [3].

10.2.2 Threat Models

Threat modeling is the process of identifying the vulnerabilities in a given system so that those threats can be mitigated. Various vulnerabilities can lead to a host of challenges in the two sub-domains investigated; these vulnerabilities are also explored in [30]. This section discusses existing attack models and their challenges as described in [32].

10.2.2.1 Side-Channel Attacks

Side channels are sources of nonfunctional information from which the inner functionality of a device or process can be inferred. Side-channel attacks pose a dangerous threat to devices that require confidentiality, where the privacy of sensitive information is paramount. For instance, cryptographic modules need private keys to remain confidential, or they risk undermining the entire cryptographic scheme and the confidentiality of any data encrypted with it. Research into how side-channel attacks can be resisted, and into state-of-the-art side-channel analysis, aids in designing architectures that resist them.
• Invasive vs. Noninvasive—Side-channel attacks are generally noninvasive, meaning they do not usually rely on techniques like removing the passivation layer of a chip or de-packaging it.


• Passive vs. Active—Active attacks focus on manipulating side-channel inputs to influence device behavior. Passive attacks, instead, focus on correlating the side-channel outputs with the internal functionality of the design or circuit.
• Simple vs. Differential—Simple attacks utilize a small number of traces from a given side channel to infer the operation of the circuit or design. Differential attacks rely on many traces to correlate data values with the experimental side-channel measurements.
• Countermeasures—Several countermeasures exist to protect against side-channel attacks: information hiding, masking, design partitioning, and anti-tamper protection. Information hiding attempts to reduce side-channel leakage or observability by either increasing the noise or reducing the signal (e.g., noise generators, low-power design, shielding). Masking or blinding attempts to remove the relationship between input data and the side-channel emissions of intermediate functional nodes. Design partitioning attempts to decrease side-channel leakage by separating regions that work on the plaintext from areas that work on the ciphertext of a cryptographic module. Anti-tamper protection attempts to prevent tampering with, or measurement of, side-channel emissions.

Challenges in Side-Channel Analysis Attacks

Effective side-channel analysis poses numerous challenges, including efficiency, the number of traces needed, and the ability to converge to a given value of sensitive information. These challenges are discussed below, with machine learning as a critical factor in side-channel analysis attacks.
• Efficiency of a machine learning-aided side-channel attack—Machine learning-aided side-channel attacks may take longer than traditional ones due to the lengthy periods needed to train their underlying models properly. This training period can be quite extensive depending on the number of traces and features.
• Number of required traces—Machine learning- or deep learning-aided side-channel analyses and attacks require many leakage traces (a sufficient dataset) to train and verify the underlying models effectively. These traces often number in the thousands. Additionally, the more complicated the model (i.e., the more parameters that need tuning), the more traces are needed. The unavailability of such large data sets makes it challenging to train machine learning models that achieve accurate results.
• Ability to converge—Attacks utilizing machine learning are evaluated on their ability to converge to a value of sensitive information with a high degree of confidence. The model's performance is evaluated against its ability to converge to the correct value of this sensitive information during the testing phase (in supervised learning, where labels are provided). One of the chief difficulties in this approach is achieving a high degree of confidence and accuracy in converging to these values.
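As a toy illustration of why the masking countermeasure mentioned above drives up the required number of traces: under a Hamming-weight leakage model, the first-order correlation between a secret-dependent value and the simulated trace collapses once a fresh random mask is XORed in (synthetic data only; the secret, noise level, and trace count are assumptions for the sketch):

```python
import numpy as np

rng = np.random.default_rng(1)

def hamming_weight(x):
    # Number of set bits in each byte: a common toy power model.
    return np.unpackbits(x[:, None], axis=1).sum(axis=1)

secret = np.uint8(0x3C)
plaintexts = rng.integers(0, 256, size=5000, dtype=np.uint8)
intermediate = plaintexts ^ secret          # secret-dependent value
noise = rng.normal(0, 0.5, size=5000)       # measurement noise

# Unmasked device: the trace tracks HW(p ^ k) directly.
trace_unmasked = hamming_weight(intermediate) + noise

# Masked device: a fresh random mask decorrelates first-order leakage.
masks = rng.integers(0, 256, size=5000, dtype=np.uint8)
trace_masked = hamming_weight(intermediate ^ masks) + noise

model = hamming_weight(intermediate)
corr_unmasked = abs(np.corrcoef(model, trace_unmasked)[0, 1])
corr_masked = abs(np.corrcoef(model, trace_masked)[0, 1])
```

The unmasked correlation is strong while the masked one is statistically indistinguishable from zero, which is why attacks on masked implementations must move to higher-order statistics and many more traces.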


The challenges mentioned above are addressed in various ways by the works discussed in this chapter, which employ unique training, data collection, and data generation approaches to overcome them.

10.2.2.2 Hardware Trojan Attacks

The threat of hardware Trojan insertion creates a need for detection mechanisms that identify and locate malicious changes. The various properties of chip-level hardware Trojans are crafted into a taxonomy, as described in Fig. 10.1:
• Insertion Phase—The phase of the design process in which the Trojan is inserted into the system.
• Abstraction Level—The level of abstraction at which the Trojan is implemented.
• Activation Mechanism—The condition that causes a Trojan to unleash its payload.
• Effect—The payload or effect that a Trojan unleashes on the system.
• Location—The location of the Trojan in the system.
• Physical Characteristic—The actual physical manifestation of the Trojan: its distribution, size, type (parametric or functional), and the structural change made to the system to implant it (same layout or changed layout).
These critical properties of chip-level hardware Trojans paint a clearer picture of the threat's complexity, which must be countered with Trojan detection strategies [32].

Challenges in Hardware Trojan Attacks

The various challenges of addressing the threat of hardware Trojans with machine learning approaches are discussed below:
• Relevant feature extraction and its hardware implementation feasibility—Feature selection always poses issues for machine learning algorithms because defining which features are important is challenging. Implementing the Trojan detection mechanism in hardware is a further source of difficulty.
• Memory requirement to meet the required number of observations for training data—The necessary memory increases with the number of observations in the training data, creating a resource challenge for the system.
• Hardware complexity of the machine learning algorithm—If the technique is to be implemented in hardware, the hardware complexity (i.e., the area) is an important metric and can pose issues for the framework design.
• Effective placement of the machine learning kernel—The placement of this kernel is significant, as it requires memory of its own and quick processing.
• Creating a low-overhead run-time hardware Trojan detection framework—Having a low-overhead framework is critical to the run-time nature of the application. Detecting Trojans at run-time is difficult, but it has many advantages, such as detecting Trojans during the lifespan of the IC and detecting Trojans that escaped detection during design and testing.

Fig. 10.1 Taxonomy of chip-level hardware Trojans [32]

This chapter discusses the existing hardware Trojan detection works that meet these challenges with clever techniques.

10.3 Literature Survey

This section describes machine learning-based techniques addressed in the literature over more than a decade. The methods used to enhance side-channel analysis and Trojan detection are discussed along with their pros and cons.

10.3.1 ML-Based Side-Channel Analysis Techniques

This section describes various ML-based techniques applied to side-channel analysis.

10.3.1.1 DL-LA: Deep Learning Leakage Assessment

Overview

The authors of [17] propose a novel deep learning-based side-channel leakage assessment technique called DL-LA (Deep Learning Leakage Assessment). Their central claim is that the chosen network offers a degree of universality and robustness, along with a low false-positive rate. The authors also assume that the validation set is split into equally sized validation groups. DL-LA is implemented with deep learning and tested against Welch's t-test and Pearson's χ²-test [8]. The specific architecture used to implement the framework is a neural network with four fully connected layers. The input layer uses the Rectified Linear Unit (ReLU) as its activation function, and the last layer is a softmax, since the final layer must output the probability distribution over the two classes. The layers have 120, 90, 50, and 2 neurons, and a batch normalization layer separates each inner layer to help prevent over-fitting, in which the model memorizes the input data rather than learning generalizable traits. The network is developed in the Python library Keras, with TensorFlow as the backend, to implement the architecture above and test its application. The authors produce seven case studies that vary the parameters that may affect the analysis. The tests are performed on a SAKURA-G board featuring two Spartan-6 field-programmable gate arrays (FPGAs). The varied parameters are the following:

• Unprotected vs. protected block cipher circuit
• Aligned vs. misaligned traces
• 6 MHz clock or randomized clock for the FPGA
• Univariate or multivariate leakages
• Single-order vs. higher-order leakages

For the first and second case studies, the unprotected block cipher is analyzed with aligned and misaligned traces. The traces are misaligned by shifting the moment at which the oscilloscope registers a rising clock edge, causing distortion. For the third case study, a randomized clock provides further misalignment in the traces rather than the oscilloscope method. The protected block cipher is analyzed with aligned and misaligned traces in the fourth and fifth case studies. For the sixth and seventh case studies, the architecture of the block cipher is varied to produce multivariate leakage, with aligned and misaligned traces, respectively. The approach improves on many of the weaknesses of the t-test and Pearson's χ²-test. In challenging scenarios with fully multivariate or horizontal leakages, the traces can be fed into the network as training data without pre-processing. Another advantage is that the evaluator does not need a high skill level, and no prior knowledge of the underlying implementation is necessary. Finally, the method carries a much lower risk of false positives. The approach also has limitations. First, its result is a single confidence value, so sensitivity analysis must be performed as an additional step. Another disadvantage is the massive number of training traces necessary to detect multivariate higher-order leakage (tens of millions). If training data is available this is not a limitation, but the training time, in terms of the number of epochs necessary, remains a challenge. The results of the implementation are quite interesting. In case study 1, the t-test, χ²-test, and DL-LA found the leakage after around 20, 90, and 10 traces, respectively. In case study 2, with the misaligned traces, the results were around 100, 100, and 70 traces to find leakage, respectively; under misalignment, Pearson's test improved the most relative to the t-test.
Case study 3 is an exacerbated form of case study 2, and all the tests suffer greatly: the t-test finds the leakage in 2000 traces, the χ²-test struggles to find the leakage at all, and DL-LA finds the leakage after training on around 150 traces. In case studies 4 and 5, the first-order t-test completely misses the leakage in the protected design, while the second- and third-order t-tests find leakage after 3 million and 1.5 million traces. The χ²-test performs much better, finding the leakage after around 500,000–800,000 traces, similar to DL-LA. For case studies 6 and 7, with multivariate leakage, all three t-tests and the χ²-test fail to detect any leakage, while DL-LA can detect it after training on around 20–25 million traces. This remarkable finding shows that the method does meet its claim for multivariate leakages.
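For illustration, the four fully connected layers reported for DL-LA (120, 90, 50, and 2 neurons, ReLU activations, softmax output) can be reconstructed as a NumPy forward pass. This is only a structural sketch: the weights are random stand-ins for trained parameters, the input trace length is assumed equal to the first layer's width, and the batch-normalization layers are omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(0)
layer_sizes = [120, 90, 50, 2]   # neurons per layer, as reported for DL-LA
n_samples, trace_len = 4, 120    # trace length assumed (not given here)

# Random weights stand in for the trained parameters.
weights = [rng.normal(0, 0.1, size=(m, n))
           for m, n in zip([trace_len] + layer_sizes[:-1], layer_sizes)]
biases = [np.zeros(n) for n in layer_sizes]

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

x = rng.normal(size=(n_samples, trace_len))   # a small batch of "traces"
for W, b in zip(weights[:-1], biases[:-1]):
    x = np.maximum(x @ W + b, 0.0)            # ReLU on the inner layers
probs = softmax(x @ weights[-1] + biases[-1])  # two-class probability output
```

The softmax output is the per-trace probability distribution over the two leakage-assessment groups, matching the two-neuron final layer described above.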

10.3.1.2 Assessment of Common Side-Channel Countermeasures with Respect to Deep Learning-Based Profiled Attacks

Cryptographic implementations are secured against traditional side-channel attacks with countermeasures like hiding and masking. Simple SCA attacks often assume an underlying distribution of the side-channel data (usually Gaussian); machine learning models can operate without this assumption. SCA attacks aided by deep learning models can attack cryptographic implementations even when traditional SCA countermeasures like Boolean masking, jitter, and random delay insertion are utilized. Houssem Maghrebi, from Underwriters Laboratories Identity Management and Security in France, investigated how current state-of-the-art SCA countermeasures hold up against deep learning-based side-channel attacks [16]. The author chose to test a profiling attack on an AES implementation against masking schemes based on Shamir's secret sharing, shuffling, and the 1-amongst-N countermeasure. Three machine learning models are tested with this profiling attack: a two-layer convolutional neural network (CNN_2LAYERS), a multi-layer perceptron (MLP), and a long short-term memory network (LSTM). All three countermeasures are found to be vulnerable to DL-SCA attacks, and each result is verified through simulation and on-board implementation. Maghrebi utilized the ChipWhisperer platform, which features an 8-bit AVR microprocessor. Shamir's secret sharing (SSS) is a countermeasure against SCA that works by splitting sensitive data into n ≥ 2d + 1 shares, where n is the number of shares and d is the degree of a random polynomial with constant term Z [5, 29]. Implementations are denoted with the notation (n, d)-SSS. SSS is tested with two implementations, (3,1)-SSS and (5,2)-SSS, and both are found to leak data that can be used to profile the implementation. Maghrebi performed simulations with both implementations and confirmed the (5,2)-SSS results on the ChipWhisperer platform. During attack preparation, 256,000 traces are generated for profiling/training.
For validation of the model, 1000 traces are used, and 2000 traces for the test phase. Three different DL models are trained: CNN_2LAYERS, MLP, and LSTM. Each model is then used to perform key recovery and calculate the average rank of the correct key, in simulation and on the ChipWhisperer board. All three deep learning models are able to converge to the correct key rank given enough traces (>20 for (3,1)-SSS, >50 for (5,2)-SSS). LSTM is particularly suited to attacking and defeating an SSS cryptographic implementation, as it can exploit data that depends on sequentially leaked shares. Maghrebi also tested the 1-amongst-N countermeasure against DL-SCA, again with simulations and implementation on the ChipWhisperer board, with N = 8 [16]. Here, 200,000 traces are generated for profiling, 500 traces for validation, and 1000 traces for the attack. DL-SCA successfully defeated 1-amongst-N, with the CNN architecture performing best under the experimental conditions tested. The resistance of 1-amongst-N can be improved by adding fake keys and plaintexts for dummy computations and by increasing the number of fake computations N. Maghrebi further tested DL-SCA against the shuffling countermeasure, again with simulations and on-board implementation [16]. Shuffling randomizes the execution order of operations that do not depend on one another, making a program execute in a non-deterministic fashion. For testing the robustness of shuffling against DL-SCA, 200,000 traces were generated for profiling, 500 for verification, and 1000 for testing. Two S-box outputs (the first and fourth) were targeted in the attack phase. The shuffling countermeasure was successfully defeated by DL-SCA using all three models; since the three DL models proved equally efficient here, no conclusion can be drawn about which is most effective. In summary, all three countermeasures were vulnerable to a deep learning-aided profiling attack. The three machine learning models converged on the correct key's average rank with varying efficiency: LSTM performed best against SSS, CNN performed best against 1-amongst-N, and all models performed similarly against shuffling. By defeating various countermeasures with various machine learning models, the work makes a strong argument for applying deep learning to side-channel attacks. Maghrebi explored multiple defensive schemes and utilized multiple machine learning models, which shows that the exhibited results are not an exception but rather a common trend in the field [16]. Additionally, the work's suggestions for improving the efficiency of deep learning-aided side-channel attacks are quite helpful for future research directions. However, the author only tests masking-style countermeasures, which says nothing about the current strength of other side-channel countermeasures such as hiding. The countermeasures chosen are also not the current state-of-the-art.
The work's significance is limited by this choice of countermeasures.
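The (n, d)-SSS scheme discussed above can be sketched over a small prime field: the secret is the constant term of a random degree-d polynomial, the n ≥ 2d + 1 shares are its evaluations, and Lagrange interpolation at zero recovers it (a toy sketch using a byte-sized field, prime 257; not a side-channel-hardened implementation):

```python
import random

def split(secret, n, d, prime=257):
    """Evaluate a random degree-d polynomial with constant term `secret`
    at the points x = 1..n; each (x, y) pair is one share."""
    coeffs = [secret] + [random.randrange(prime) for _ in range(d)]
    def poly(x):
        return sum(c * pow(x, i, prime) for i, c in enumerate(coeffs)) % prime
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares, prime=257):
    """Lagrange interpolation at x = 0 recovers the constant term."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % prime
                den = den * (xi - xj) % prime
        # Multiply by the modular inverse of den (prime field, Fermat).
        secret = (secret + yi * num * pow(den, prime - 2, prime)) % prime
    return secret

random.seed(0)
shares = split(0xAB, n=5, d=2)       # a (5, 2)-SSS split: n = 5 >= 2*2 + 1
recovered = reconstruct(shares[:3])  # any d + 1 = 3 shares interpolate exactly
```

Reconstruction needs only d + 1 shares; the stricter n ≥ 2d + 1 condition quoted above is what secure computation on the shares requires.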

10.3.1.3 Test Generation Using Reinforcement Learning for Delay-Based Side-Channel Analysis

The authors in [21] note that the path delay difference between a Trojan-injected design and a golden design is negligible compared to process variations and environmental noise. Existing approaches rely on manually designed rules for test generation and require numerous simulations, making the process unscalable; current delay-based side-channel analysis techniques are therefore inefficient. In this work, the authors introduce reinforcement learning for detecting delay-based Trojans, utilizing critical paths to generate test vectors that increase side-channel sensitivity by 50% on average while achieving a 17× speedup in test generation. The work's assumption that the attacker constructs trigger conditions based only on rare nodes is not convincing: an attacker can consider observability as well as rareness when designing trigger conditions, and according to [27], rare nodes are not always the chosen trigger nodes, and vice versa. An intelligent attacker can also use a combination of genuine and rare nodes to generate trigger conditions, whereas the proposed approach generates its initial test patterns based on rare nodes only. The work then introduces and provides the algorithm used for reinforcement learning; however, it does not describe the data collected and used for reinforcement learning and does not discuss the model parameters. The work shows higher hardware Trojan detection results for the proposed approach on the five benchmarks present in [8]. It fails, however, to consider scenarios in which the attacker designs the trigger from both genuine and rare nodes, and it lacks a thorough comparison with existing schemes.

10.3.1.4 Machine Learning-Based Side-Channel Evaluation of Elliptic-Curve Cryptographic FPGA Processor

Another example of machine learning's strength in aiding side-channel attacks comes from the ML-based side-channel evaluation of an elliptic-curve cryptographic FPGA processor [20]. In this work, the authors test the security of an elliptic-curve cryptography (ECC) implementation against multiple machine learning-based side-channel analyses; very little prior work existed assessing ECC against machine learning-aided side-channel analysis. The authors implemented an elliptic-curve cryptosystem on a Kintex-7 FPGA operating at 24 MHz and captured power signals using the SAKURA-X specialized side-channel analysis board. In total, 14,000 power traces were collected, each consisting of 10,000 sampling points. These traces were used with classification algorithms to classify and recover the secret key bits. ECC revolves around the relationship between two chosen points P and Q that lie on the elliptic curve: an integer k relates the two points through Q = k · P, and the security of ECC rests on the difficulty of finding k. ECC is also particularly well-suited to resource-constrained and low-power devices, as the most computationally intense operation required is scalar multiplication. In the particular ECC implementation the authors focus on, the security of the Double-and-Add-Always algorithm is examined under the threat of ML-aided SCA. This algorithm was specifically designed to resist differential power analysis, improving on the Double-and-Add algorithm, which can be defeated using simple power analysis. The side-channel attack on the ECC double-and-add-always implementation operates by attacking one bit at a time. Only three bits of the key are targeted in this analysis: the fourth, third, and second-least-significant bits of the 256-bit key. The first bit is not needed, as the encryption process uses the second bit of the key. The authors tested the performance of the algorithms with and without pre-processing steps to see whether pre-processing filters can increase classification accuracy. The collected power traces are labeled and used as a dataset for training. In the labeling process, traces are assigned to either group 0 (GB0) or group 1 (GB1), depending on whether the sample represents a "0" bit or a "1" bit. Additionally, three "attack levels" are designated, corresponding to the bits under attack, which the authors refer to as LB2, LB3, and LB4: LB2 corresponds to the second-least-significant bit, LB3 to the third, and LB4 to the fourth. For example, each power trace corresponding to a key with a "0" at the second-bit location is labeled "GB0" for LB2. The first bit is not attacked at all. Instead of using the raw power traces directly as features for training, properties of the power traces are used. Signal properties are effective features because they reduce redundant information and prioritize informative data, which also reduces the likelihood of over-fitting on uninformative information or noise. The features used for training include:
• Mean of the absolute value of the signal
• Kurtosis—the sharpness of the frequency-distribution curve peak
• Median PSD—the median in the frequency domain
• Frequency Ratio—the ratio of the recorded frequencies
• Median Amplitude Spectrum of the signals

Four different classification algorithms are used in the classification step: Support Vector Machine (SVM), Naive Bayes (NB), Multi-layer Perceptron (MLP), and Random Forest (RF). The models were validated using k-fold cross-validation before testing. Pre-processing did not appear to improve the accuracy of the RF, NB, and MLP classifiers. However, PCA is effective as a pre-processing step for SVM because it extracts relevant features, improving SVM's accuracy to up to 86%. All four algorithms effectively attacked ECC by extracting the secret key bits, showing the strength of ML-based SCA, with Random Forest performing best of the four models. The authors showed that machine learning and deep learning techniques apply to attacking ECC, albeit with varying performance levels, and demonstrated this across various models, which helps future researchers decide what direction to take in future ECC side-channel analyses. However, the authors only exhibited the performance of the four classification algorithms in converging to the first few bits of the key; this is not particularly convincing, and the performance cannot be reliably extrapolated to other sections of the key. The authors also did not explore deep learning algorithms like CNNs and LSTM networks to improve attack accuracy, though neural networks may perform better, as they are more robust in the face of non-linearity in a system.
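The double-and-add-always routine examined above processes one key bit per iteration and performs both a double and an add regardless of the bit value, which is what blunts simple power analysis. A sketch with the group operation abstracted to integer addition, so that k · P reduces to plain multiplication (a real implementation would use elliptic-curve point arithmetic):

```python
def double_and_add_always(k, P, bit_len=8):
    """Compute k * P, scanning the key bits MSB-first.

    Both the double and the add are executed on every iteration; the key
    bit only selects which result is kept, so the sequence of operations
    (and, ideally, the power profile) is independent of the key."""
    R0 = 0  # group identity (the point at infinity on a real curve)
    for i in reversed(range(bit_len)):
        R0 = R0 + R0            # always double
        R1 = R0 + P             # always add
        if (k >> i) & 1:        # the key bit only selects the result
            R0 = R1
    return R0

# With integer "points", k * P is ordinary multiplication: 11 * 3 = 33.
result = double_and_add_always(11, 3, bit_len=4)
```

As the chapter notes, the uniform operation sequence defeats simple power analysis, but the bit-dependent selection still leaves data-dependent leakage that the ML-based classifiers above exploit bit by bit.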

10.3.2 Trojan Detection

Trojan detection schemes have been proposed for more than a decade. Many schemes, with and without a golden reference, have been put forward, but there is no prominent or generalized solution, owing to the stealthy nature of hardware Trojans in any chip design. Recently, machine learning techniques have been widely used in Trojan detection, and the results are promising. Some of the machine learning-based Trojan detection techniques are discussed here.

10.3.2.1 SVM-Based Real-Time Hardware Trojan Detection for Many-Core Platform

An architecture for run-time Trojan detection based on an ML technique is proposed for a custom many-core platform in [9]. The authors' central claim is that machine learning approaches to Trojan detection can significantly increase detection accuracy on the many-core system. The authors also assume that the processor cores and memories are secure. The specific machine learning algorithm used to implement their framework is the SVM; three other techniques were also evaluated (Decision Tree, Linear Regression, and K-Nearest Neighbors). Because the implementation targets a many-core system, the features used reflect that architecture:
• Source Core—Source core number
• Destination Core—Destination core number
• Packet Transfer Path—The unique path over which the packet is transferred between source and destination
• Distance—The number of router hops the packet makes to reach its destination

SVM Hardware Architecture

The implementation realizes the SVM architecture in hardware on an FPGA rather than in software. Training is done offline, while test cases are predicted online. The SVM is trained offline on a "Golden Data Set" using the SVM linear library, formulating a decision function that is then evaluated on a feature vector to establish whether a Trojan is detected. The 64-core many-core architecture uses 10 features (7260 training records) to build this "Golden Data Set." The three main blocks of the SVM implementation are feature extraction, SVM computation, and post-processing; within the SVM computation block are the dot product, SVM controller, weight vector, and bias vector addition units.

Tools and Application

To test and implement the above architecture, the tools used include the Matlab toolbox and the SVM linear library (liblinear-1.94). The test set-up was done on a Xilinx Virtex-7 FPGA, and the test design was a security framework for a seizure detection algorithm on the many-core platform.
The overall architecture comprises three main components: the 64-core many-core architecture, the attack detection module described above, and a Trojan insertion module. The Trojan insertion module injects one of three attacks: routing loop, core spoofing, or traffic diversion. A significant advantage of this approach is its low computational complexity. Another benefit is flexibility: because it is implemented in FPGA hardware, the design is portable to other platforms with comparable memory and LUTs. A further advantage of this scheme is that the attack detection module can be distributed across memory to reduce latency.
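The offline-train/online-predict split described above can be miniaturized as follows. This is an illustrative sketch, not the paper's implementation: the traffic data is synthetic (a packet is labeled Trojan-routed when its hop count exceeds the minimal source-to-destination distance, mimicking a routing-loop attack), and a plain hinge-loss subgradient trainer stands in for the liblinear/FPGA pipeline.

```python
# Toy sketch of an offline-trained linear SVM for run-time NoC Trojan
# detection. Features per packet: (hop count, minimal distance, bias).

def make_dataset():
    data = []
    for min_dist in range(1, 7):          # minimal source-to-dest distance
        for extra_hops in range(0, 4):    # detour added by a routing loop
            hops = min_dist + extra_hops
            label = 1 if extra_hops > 0 else -1   # +1 = Trojan-routed
            data.append(((hops, min_dist, 1.0), label))   # 1.0 = bias term
    return data

def train_svm(data, lam=0.01, epochs=500):
    """Pegasos-style subgradient descent on the L2-regularized hinge loss."""
    w = [0.0, 0.0, 0.0]
    t = 0
    for _ in range(epochs):
        for x, y in data:
            t += 1
            eta = 1.0 / (lam * t)
            score = sum(wi * xi for wi, xi in zip(w, x))
            w = [wi * (1 - eta * lam) for wi in w]   # regularization shrink
            if y * score < 1:                        # margin violation
                w = [wi + eta * y * xi for wi, xi in zip(w, x)]
    return w

def predict(w, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else -1

data = make_dataset()
w = train_svm(data)                       # "offline" training phase
accuracy = sum(predict(w, x) == y for x, y in data) / len(data)
```

In the hardware version, `predict` corresponds to the dot-product and bias-vector-addition blocks applied to features extracted online, while `train_svm` happens once, offline, on the "Golden Data Set."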


There are also some limitations caused by implementing the SVM in hardware. First, features must be pre-processed to allow offline training of the algorithm. Another serious limitation is the area overhead of the hardware implementation of the computational model. A final drawback is managing memory transfers, which are expensive and complex to implement in hardware. The training set also contains only 64-core and 16-core records, making 32-core or 8-core implementations challenging. The results of the implementation are compelling: the proposed security framework achieves an average of 93% detection accuracy, while each attack detection module adds 2% area overhead. This justifies the authors' claims, as the challenge of many-core Trojan detection is met with a low-overhead, practical implementation and high accuracy. The work tests the capability of a hardware SVM to detect Trojans and uses the four ML approaches in software to compare their relative competency. SVM performs best among the options, on par with Decision Tree, whose hardware complexity is too large to be considered for this implementation.

10.3.2.2 Adaptive Real-Time Trojan Detection Framework Through Machine Learning

Amey Kulkarni et al. [10] analyzed the effectiveness of real-time online learning for detecting hardware Trojans at run-time. They chose a Modified Balanced Winnow (MBW) online learning algorithm and compared it against Support Vector Machine (SVM) and K-Nearest Neighbor (KNN) detection methods. Their research is a direct response to, and improvement upon, the SVM-based real-time hardware Trojan detection for many-core platforms discussed above [9]. The models were trained on core spoofing and route looping attacks and tested on an unknown traffic diversion attack. The training data contained 7260 records with 10 features and class labels corresponding to Trojan or Trojan-free. The models were tested on 10,055 records, 25% Trojan-free and 75% containing Trojans; 42% of the records included the unknown traffic diversion attack. The reported results show the differences between offline and online methods in successfully detecting Trojans on this test set. The online method, represented by the MBW model, consistently outperformed the SVM and KNN methods. In particular, the MBW algorithm was robust at adapting to unknown attacks and successfully identified Trojans over time, as indicated by detection accuracy changes per 1000-record interval. Overall, MBW exhibited 5–8% higher overall detection accuracy than SVM and KNN, along with a 56% reduction in area overhead and a 50% reduction in latency overhead when comparing post-place-and-route implementations on FPGA. The work successfully defended the assumption that online learning methods would outperform offline learning methods for identifying unknown Trojan attacks in real-time, backed up by the MBW model identifying the unknown attacks more consistently than SVM and KNN. However, while the authors accurately describe the implementation and approach, the paper leaves out critical details about the FPGA implementation of the seizure detection algorithm, forcing readers back to [9] for insight. Additionally, it is fairly self-evident that an online approach will be more successful than an offline one at detecting an unknown attack at run-time; the performance of MBW would have been much more convincing had it also been compared against other online learning approaches.
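The balanced-Winnow update at the heart of this comparison can be sketched in a few lines. This is a bare-bones illustration of the positive/negative-weight ("balanced") Winnow scheme with a margin, in the spirit of MBW; the promotion rate, margin, and synthetic NoC features below are illustrative choices, not the paper's parameters.

```python
# Minimal balanced-Winnow-style online learner: two non-negative weight
# vectors u and v whose difference acts as the effective weight, updated
# multiplicatively whenever the margin is violated.

ETA, MARGIN = 0.2, 0.5          # illustrative promotion rate and margin

def make_stream():
    # Synthetic NoC packets: (hops/10, min_distance/10, bias). A packet is
    # Trojan-routed (+1) when it takes more hops than the minimal distance.
    stream = []
    for min_dist in range(1, 7):
        for extra in range(0, 4):
            hops = min_dist + extra
            x = (hops / 10.0, min_dist / 10.0, 1.0)
            stream.append((x, 1 if extra > 0 else -1))
    return stream

def score(u, v, x):
    return sum((ui - vi) * xi for ui, vi, xi in zip(u, v, x))

def mbw_pass(u, v, stream):
    """One online pass; returns the number of prediction errors made."""
    errors = 0
    for x, y in stream:
        s = score(u, v, x)
        if (1 if s > 0 else -1) != y:
            errors += 1
        if y * s <= MARGIN:          # mistake or too-small margin: update
            for i, xi in enumerate(x):
                if y > 0:
                    u[i] *= (1 + ETA * xi)
                    v[i] *= (1 - ETA * xi)
                else:
                    u[i] *= (1 - ETA * xi)
                    v[i] *= (1 + ETA * xi)
    return errors

stream = make_stream()
u, v = [1.0] * 3, [1.0] * 3
errors_per_pass = [mbw_pass(u, v, stream) for _ in range(30)]
```

The multiplicative updates are what make this class of learners cheap to adapt online, which is the property the authors exploit against unknown attacks; the error count per pass should drop as the learner adapts to the stream.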

10.3.2.3 HW2VEC: A Graph Learning Tool for Automating Hardware Security

This work addresses the growing importance of hardware security: as hardware becomes more and more complex, time-to-market shrinks and the integrated supply chain globalizes. Graph-based learning to model circuits and improve design flows has recently gained the attention of the electronic design automation (EDA) community. This work proposes an open-source graph learning tool for users who want to experiment with hardware security. The tool automates the extraction of a graph representation from a hardware design at multiple levels of abstraction (register-transfer level or gate-level netlist), and the work demonstrates it on hardware Trojan detection and IP piracy detection. The first stage of the pipeline is hardware-to-graph: pre-processing, a graph generation engine, and post-processing produce a JSON representation that is converted into a graph object in subsequent steps. The next major stage is graph-to-graph-embedding, which performs low-level chores such as caching data on disk to speed up repeated model testing, performing the train-test split, and discovering the unique set of node labels among all graph data instances; the pipeline then applies model components such as convolution layers, pooling layers, and readout operations to produce the graph embedding. The final stage is model training and evaluation. The dataset preparation and algorithms described in this work are detailed and well described; all in all, this is a good starting point for users beginning research in the hardware security domain.
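The design-to-graph-to-embedding pipeline can be miniaturized as follows. This sketch is illustrative only: the hand-written "netlist" dictionary and the neighbor-averaging propagation stand in for HW2VEC's Verilog parsing and graph-convolution layers, but the shape of the pipeline (labeled graph, message passing, permutation-invariant readout) is the same.

```python
# Miniature hardware-to-graph-to-embedding pipeline: a gate-level netlist
# becomes a labeled graph, node features are propagated along wires, and a
# permutation-invariant readout (mean pooling) yields the graph embedding.

GATE_TYPES = ["input", "and", "or", "xor", "not"]

def one_hot(gate_type):
    return [1.0 if t == gate_type else 0.0 for t in GATE_TYPES]

def embed(netlist, rounds=2):
    """netlist: {node: (gate_type, [fanin nodes])}"""
    h = {n: one_hot(t) for n, (t, _) in netlist.items()}
    for _ in range(rounds):                  # graph-convolution-style pass
        new_h = {}
        for n, (_, fanin) in netlist.items():
            if fanin:
                msg = [sum(h[f][i] for f in fanin) / len(fanin)
                       for i in range(len(GATE_TYPES))]
                new_h[n] = [(a + b) / 2 for a, b in zip(h[n], msg)]
            else:
                new_h[n] = h[n]
        h = new_h
    # Readout: mean over node embeddings -> fixed-size graph embedding.
    return [sum(h[n][i] for n in netlist) / len(netlist)
            for i in range(len(GATE_TYPES))]

half_adder = {"a": ("input", []), "b": ("input", []),
              "sum": ("xor", ["a", "b"]), "carry": ("and", ["a", "b"])}
renamed = {"x": ("input", []), "y": ("input", []),
           "s": ("xor", ["x", "y"]), "c": ("and", ["x", "y"])}
or_gate = {"a": ("input", []), "b": ("input", []),
           "o": ("or", ["a", "b"])}

emb_ha, emb_iso, emb_or = embed(half_adder), embed(renamed), embed(or_gate)
```

Because the readout is permutation-invariant, structurally identical designs with different signal names map to the same embedding, while structurally different designs do not; that is the property downstream Trojan and piracy detectors rely on.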

10.3.2.4 Automated Test Generation for Hardware Trojan Detection Using Reinforcement Learning

System-on-Chip (SoC) designs are vulnerable to malicious insertions because of the globalization of the electronic supply chain. With the ever-increasing complexity of SoCs, it has become difficult to detect hardware Trojans through validation tests. There is a constant effort to develop test generation approaches for Trojan detection; however, existing ones have limited detection accuracy and are not scalable. The authors in [22] propose a logic testing technique leveraging reinforcement learning for Trojan detection. The work makes three major contributions: rare node identification, testability analysis, and reinforcement-learning-based test generation. The work summarizes the "rareness heuristic" and "test generation complexity," noting that, per the evaluation in [27], rare nodes need not be trigger nodes, and that the high computational complexity of existing test generation proves a major drawback. The authors pursue two goals (test effectiveness and test generation efficiency) in their algorithm: effectiveness exploits signal testability and rareness, whereas efficiency exploits feedback at intermediate steps. The first step of the approach applies dynamic simulation (to gather rare node information) and static testability analysis (computing SCOAP testability parameters for each node). The collected intermediate results are fed to the reinforcement learning model, which generates test vectors; as the model learns over time, the final model is used to generate tests automatically. The work explains the algorithms for dynamic simulation and reinforcement learning in detail and discusses comprehensive results and comparisons against existing techniques on the same benchmarks. The results show that neither TARMAC [14] nor MERO [2] delivers consistent results on large designs; the authors' approach outperforms both TARMAC (by 14.5% on average) and MERO (by 77.1% on average) in trigger coverage. Test generation time results likewise show improvement over the other two methods.
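The "dynamic simulation" step that feeds the RL model can be sketched as follows: apply random patterns to a small gate-level netlist and flag internal nets whose signals rarely activate. The netlist, threshold, and pattern count here are illustrative, and the SCOAP testability computation the paper pairs with this step is omitted.

```python
import random

# Rare-node identification by random simulation: nets that activate with
# low probability are candidate Trojan trigger sites.

NETLIST = [                       # (net, gate, fanin) in topological order
    ("n1", "and", ("a", "b")),
    ("n2", "and", ("n1", "c")),
    ("n3", "and", ("n2", "d")),   # deep AND chain -> rarely-activated net
    ("out", "or", ("n3", "a")),
]
INPUTS = ["a", "b", "c", "d"]

def simulate(pattern):
    vals = dict(pattern)
    for net, gate, (f0, f1) in NETLIST:
        if gate == "and":
            vals[net] = vals[f0] & vals[f1]
        elif gate == "or":
            vals[net] = vals[f0] | vals[f1]
    return vals

def rare_nodes(num_patterns=2000, threshold=0.1, seed=0):
    rng = random.Random(seed)     # fixed seed for reproducibility
    ones = {net: 0 for net, _, _ in NETLIST}
    for _ in range(num_patterns):
        pattern = {i: rng.randint(0, 1) for i in INPUTS}
        vals = simulate(pattern)
        for net in ones:
            ones[net] += vals[net]
    return {net for net, cnt in ones.items()
            if cnt / num_patterns < threshold}

rare = rare_nodes()
```

The net `n3` (an AND of four inputs, activation probability 1/16) is flagged as rare, while shallower nets are not; in the paper, such rareness information, combined with SCOAP values, forms the state fed to the reinforcement learner that assembles test vectors.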

10.3.2.5 Contrastive Graph-Convolutional Networks for Hardware Trojan Detection in Third-Party IP Cores

The authors aim to identify malicious implants inserted within third-party intellectual property (3PIP) cores in an IC design. Their work leverages a deep learning model to detect triggers embedded in ICs without using a golden sample; the model of choice is a graph-convolutional network trained with supervised learning. The work's highlights include trigger detection on a netlist instead of against a golden sample, modeling trigger detection over unstructured circuit data by deploying the graph-convolutional network, analysis and comparison of various state-of-the-art classification models, and a technique for collecting data to build a large dataset. The proposed work averages 21.91% and 46.99% performance improvement for detecting sequential and combinatorial triggers, respectively; however, the dataset used for sequential and combinatorial triggers is not very large. The model's supervised contrastive pretraining (embedding, projection, loss formulation, pretraining, and classification) is explained in detail.


The experiments performed in the work are extensive: the proposed approach (GATE-Net: Graph-Aware Trigger Detection Network) is tested against state-of-the-art approaches, using both extrapolation-based testing and random shuffle testing. However, it would have been interesting to see the graph-convolutional network compared with other types of neural networks, and which (structural and functional) features would need to be extracted to attain comparable accuracy. The work also lists the structural and functional features used by two state-of-the-art approaches, [12] and [6], and how they compare with each other. The authors additionally experiment with a variant of GATE-Net without contrastive learning and show that supervised contrastive learning helps GATE-Net achieve higher HT detection performance. However, the work provides little discussion of false positive detection. The datasets and code are to be made publicly available, and the authors intend GATE-Net to become a baseline for hardware Trojan detection research.
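The supervised contrastive pretraining objective mentioned above can be illustrated on toy embeddings. This sketch assumes embeddings have already been produced by some encoder (the vectors and temperature below are invented for illustration); it only shows the loss: anchors are pulled toward same-class (Trojan vs. Trojan-free) embeddings and pushed from different-class ones.

```python
import math

# Supervised contrastive loss over precomputed embeddings. Lower loss
# means same-class embeddings are closer together than cross-class ones.

def dot(p, q):
    return sum(a * b for a, b in zip(p, q))

def supcon_loss(embeddings, labels, tau=0.5):
    n = len(embeddings)
    total, anchors = 0.0, 0
    for i in range(n):
        positives = [j for j in range(n)
                     if j != i and labels[j] == labels[i]]
        if not positives:
            continue                      # anchor with no positive pair
        denom = sum(math.exp(dot(embeddings[i], embeddings[j]) / tau)
                    for j in range(n) if j != i)
        for j in positives:
            sim = dot(embeddings[i], embeddings[j]) / tau
            total += -math.log(math.exp(sim) / denom) / len(positives)
        anchors += 1
    return total / anchors

# Well-separated classes (e.g., Trojan vs. Trojan-free embeddings) ...
separated = supcon_loss([(1, 0), (1, 0), (0, 1), (0, 1)], [0, 0, 1, 1])
# ... versus embeddings where the two classes are intermixed.
mixed = supcon_loss([(1, 0), (0, 1), (1, 0), (0, 1)], [0, 0, 1, 1])
```

Pretraining the encoder to minimize this loss is what gives the subsequent classifier its head start; the separated configuration scores a strictly lower loss than the mixed one.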

10.4 Summary

This chapter investigated and surveyed machine learning in two subfields of hardware security: hardware Trojan detection and side-channel analysis. The chapter evaluated a variety of works, all of which use machine learning techniques to augment existing hardware security frameworks or create new ones entirely. Three of the works analyze the benefits of machine learning for inferring new relationships between side-channel measurements and extracting sensitive information. The other two works use machine learning to develop real-time hardware Trojan detection frameworks on a many-core platform, which had rarely been investigated in prior work. These approaches were innovative in a variety of ways, for example, replacing classical approaches with machine learning techniques such as SVMs and deep learning. The experimental design of all the works successfully supported their claims.

References

1. M. Azhagan, D. Mehta, H. Lu, S. Agrawal, P. Chawla, M. Tehranipoor, D. Woodard, N. Asadi, A review on automatic bill of material generation and visual inspection on PCBs, in ISTFA 2019: Conference Proceedings from the 45th International Symposium for Testing and Failure Analysis (2019), pp. 256–265
2. R.S. Chakraborty, F. Wolff, S. Paul, C. Papachristou, S. Bhunia, MERO: A statistical approach for hardware Trojan detection, in International Workshop on Cryptographic Hardware and Embedded Systems (Springer, Berlin, Heidelberg, 2009), pp. 396–410
3. T.S. Furey, N. Cristianini, N. Duffy, D.W. Bednarski, M. Schummer, D. Haussler, Support vector machine classification and validation of cancer tissue samples using microarray expression data. Bioinformatics 16(10), 906–914 (2000)


4. I. Goodfellow, Y. Bengio, A. Courville, Deep Learning (MIT Press, 2016)
5. L. Goubin, A. Martinelli, Protecting AES with Shamir's secret sharing scheme, in International Workshop on Cryptographic Hardware and Embedded Systems (Springer, Berlin, Heidelberg, 2011), pp. 79–94
6. T. Hoque, J. Cruz, P. Chakraborty, S. Bhunia, Hardware IP trust validation: Learn (the untrustworthy), and verify, in 2018 IEEE International Test Conference (ITC) (IEEE, 2018), pp. 1–10
7. G. Hospodar, B. Gierlichs, E. De Mulder, I. Verbauwhede, J. Vandewalle, Machine learning in side-channel analysis: a first study. J. Cryptogr. Eng. 1(4), 293 (2011)
8. ISCAS89 Sequential Benchmark Circuits, https://filebox.ece.vt.edu/-mhsiao/iscas89.html
9. A. Kulkarni, Y. Pino, T. Mohsenin, Adaptive real-time Trojan detection framework through machine learning, in 2016 IEEE International Symposium on Hardware Oriented Security and Trust (HOST) (2016), pp. 120–123. https://doi.org/10.1109/HST.2016.7495568
10. A. Kulkarni, Y. Pino, T. Mohsenin, Adaptive real-time Trojan detection framework through machine learning, in 2016 IEEE International Symposium on Hardware Oriented Security and Trust (HOST) (IEEE, 2016), pp. 120–123
11. A. Kulkarni, Y. Pino, T. Mohsenin, SVM-based real-time hardware Trojan detection for many-core platform, in 2016 17th International Symposium on Quality Electronic Design (ISQED) (2016)
12. T. Kurihara, K. Hasegawa, N. Togawa, Evaluation on hardware-Trojan detection at gate-level IP cores utilizing machine learning methods, in 2020 IEEE 26th International Symposium on On-Line Testing and Robust System Design (IOLTS) (IEEE, 2020), pp. 1–4
13. L. Lerman, R. Poussier, G. Bontempi, O. Markowitch, F.-X. Standaert, Template attacks vs. machine learning revisited (and the curse of dimensionality in side-channel analysis), in International Workshop on Constructive Side-Channel Analysis and Secure Design (Springer, Cham, 2015), pp. 20–33
14. Y. Lyu, P. Mishra, Automated trigger activation by repeated maximal clique sampling, in 2020 25th Asia and South Pacific Design Automation Conference (ASP-DAC) (IEEE, 2020), pp. 482–487
15. H. Maghrebi, Assessment of common side channel countermeasures with respect to deep learning based profiled attacks, in 2019 31st International Conference on Microelectronics (ICM) (2019)
16. H. Maghrebi, Assessment of common side channel countermeasures with respect to deep learning based profiled attacks, in 2019 31st International Conference on Microelectronics (ICM) (IEEE, 2019), pp. 126–129
17. T. Moos, F. Wegener, A. Moradi, DL-LA: Deep learning leakage assessment: A modern roadmap for SCA evaluations. Cryptology ePrint Archive (2019)
18. A. Moradi, B. Richter, T. Schneider, F.-X. Standaert, Leakage detection with the χ²-test. IACR Trans. Cryptogr. Hardw. Embed. Syst., 209–237 (2018)
19. N. Mukhtar, M. Mehrabi, Y. Kong, A. Anjum, Machine-learning-based side-channel evaluation of elliptic-curve cryptographic FPGA processor. Applied Sciences 9(1), 64 (2018)
20. N. Mukhtar, M.A. Mehrabi, Y. Kong, A. Anjum, Machine-learning-based side-channel evaluation of elliptic-curve cryptographic FPGA processor. Applied Sciences 9(1), 64 (2019)
21. Z. Pan, J. Sheldon, P. Mishra, Test generation using reinforcement learning for delay-based side-channel analysis, in 2020 IEEE/ACM International Conference On Computer Aided Design (ICCAD) (IEEE, 2020), pp. 1–7
22. Z. Pan, P. Mishra, Automated test generation for hardware Trojan detection using reinforcement learning, in Proceedings of the 26th Asia and South Pacific Design Automation Conference (2021), pp. 408–413
23. L.E. Peterson, K-nearest neighbor. Scholarpedia 4(2), 1883 (2009)
24. J.R. Quinlan, Induction of decision trees. Machine Learning 1(1), 81–106 (1986)


25. M.T. Rahman, N. Asadizanjani, Backside security assessment of modern SoCs, in 2019 20th International Workshop on Microprocessor/SoC Test, Security and Verification (MTV) (IEEE, 2019), pp. 18–24
26. F. Regazzoni, S. Bhasin, A.A. Pour, I. Alshaer, F. Aydin, A. Aysu, . . . , V. Yli-Mäyry, Machine learning and hardware security: Challenges and opportunities (invited talk), in 2020 IEEE/ACM International Conference On Computer Aided Design (ICCAD) (IEEE, 2020), pp. 1–6
27. H. Salmani, COTD: Reference-free hardware Trojan detection and recovery based on controllability and observability in gate-level netlist. IEEE Trans. Inf. Forensics Secur. 12(2), 338–350 (2016)
28. G.A.F. Seber, A.J. Lee, Linear Regression Analysis, vol. 329 (Wiley, 2012)
29. A. Shamir, How to share a secret. Commun. ACM 22(11), 612–613 (1979)
30. A. Shostack, Threat Modeling: Designing for Security (Wiley, 2014)
31. A. Stern, D. Mehta, S. Tajik, F. Farahmandi, M. Tehranipoor, SPARTA: A laser probing approach for Trojan detection, in 2020 IEEE International Test Conference (ITC) (IEEE, 2020), pp. 1–10
32. M. Tehranipoor, Trust-Hub. [Online]. Available: https://trust-hub.org/#/benchmarks/chip-level-trojan. Accessed: 02-May-2021

Chapter 11

CAD for Securing IPs Based on Logic Locking

11.1 Introduction

The horizontal model of the integrated circuit (IC) supply chain means that different steps of the design process are performed by various entities. For example, the intellectual property (IP) design house takes care of the pre-fabrication steps, whereas offshore foundries handle fabrication and testing of the finished products. Moreover, time-to-market has become very short with the increased competition to produce ICs, so the use of third-party IP blocks in system-on-chips (SoCs) has risen significantly. The IC supply chain's horizontal model has facilitated different attacks on hardware, such as reverse engineering the functionality [26], stealing and claiming ownership [1], and inserting malicious circuits into the design [7]. These incidents have led stakeholders of the IP/IC design process to re-strategize how to secure the hardware. To address the lack of security against such attacks, countermeasures such as split manufacturing [10], IC camouflaging [19], and IC metering [1] were proposed. However, logic locking is one of the most widely proposed solutions against IP piracy, IC reverse engineering, and overbuilding. Logic locking is a method that hides the original functionality and implementation of a design by inserting additional circuitry [11–16, 21–24, 32]. Chapter 11 focuses on logic locking solutions in the digital domain and their limitations against certain attacks. Locking effectiveness and defense against specific attacks are the two main parameters behind each of the locking techniques proposed during the past decade; here, effectiveness corresponds to the corruptibility introduced at the output when an incorrect key combination is applied. As researchers assessed the performance of proposed locking mechanisms, they also devised specific attacks exploiting the weaknesses of certain techniques [6, 17, 18, 31].
As a result, locking techniques were proposed that are resilient to specific attacks. This incremental work has become a trend: new locking techniques are proposed to address known attacks, and in turn, newer attacks are proposed to exploit each technique's vulnerabilities. The main goal of this chapter is to perform a comprehensive analysis of locking techniques based on the mentioned parameters, the underlying algorithms and their efficacy, and the performance of the methods against proposed attacks. In addition, a brief analysis of significant attack methodologies is presented. The rest of the chapter is organized as follows: Sect. 11.2 covers the existing logic locking techniques, the target threat model, and the attacks available in the literature, while Sect. 11.3 discusses how the presented methods compare to other locking techniques.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023. F. Farahmandi et al., CAD for Hardware Security, https://doi.org/10.1007/978-3-031-26896-0_11

11.2 Background and Related Work

One of the first locking techniques was introduced in [25] to address the vulnerabilities introduced by an untrusted foundry. In [25], XOR/XNOR gates were inserted at randomly chosen locations of the gate-level netlist. In [18], the key sensitization attack was proposed, which could break the locking technique from [25]; to resist it, a new approach was proposed based on an interference graph that uses the interdependency of key bits to prevent one key bit from propagating to the output without knowledge of the others. A locking technique based on fault analysis was proposed in [21] to ensure maximum output corruptibility, and another method based on a modified interference graph was proposed in [38] to thwart sensitization attacks. At HOST 2015, perhaps the most significant algorithmic attack [31] was proposed, based on Boolean satisfiability (SAT). The SAT attack broke all logic locking techniques that existed at the time. It is an oracle-based attack that uses a miter circuit to find distinguishing input patterns (DIPs); each discovered DIP is applied to the oracle, and upon comparison with the oracle's response, incorrect keys are pruned away. SARLock and Anti-SAT were proposed in [34, 40] to defend against the SAT attack by ensuring that the number of SAT queries needed to differentiate all incorrect keys remains exponential in the locking key size. A few attacks were proposed afterward to defeat such SAT-resistant locking: in [36], a signal probability skew (SPS) based attack was presented against Anti-SAT; the bypass attack [35] was able to break both SARLock and Anti-SAT; and in [37], a removal attack was presented that could identify and remove the SAT-resistant protection circuitry wrapping the unmodified logic cone. Techniques such as stripped functionality logic locking (SFLL) [38] and Cascaded Locking (CAS-Lock) [27] have since been proposed to address these attacks.
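The DIP-pruning loop of the SAT attack can be illustrated on a toy locked circuit. This is a sketch under simplifying assumptions: brute-force enumeration over the tiny input and key spaces stands in for an actual SAT solver and miter construction, and the two-XOR locked circuit is invented for illustration.

```python
from itertools import product

# Oracle-guided SAT-attack sketch: repeatedly find a distinguishing input
# pattern (DIP) on which two surviving keys disagree, query the unlocked
# chip (oracle), and prune every key that contradicts the oracle.

def locked(x, k):
    a, b = x
    k0, k1 = k
    return ((a ^ k0) ^ b) ^ k1    # two XOR key gates on a 2-input XOR cone

CORRECT_KEY = (0, 0)
oracle = lambda x: locked(x, CORRECT_KEY)    # unlocked chip as a black box

def find_dip(candidates):
    """Miter stand-in: an input where surviving keys produce different outputs."""
    for x in product((0, 1), repeat=2):
        if len({locked(x, k) for k in candidates}) > 1:
            return x
    return None

candidates = set(product((0, 1), repeat=2))  # all four possible keys
dips = []
while True:
    dip = find_dip(candidates)
    if dip is None:               # no DIP left: remaining keys are equivalent
        break
    dips.append(dip)
    good = oracle(dip)
    candidates = {k for k in candidates if locked(dip, k) == good}
```

Note that the attack terminates with an equivalence class of keys, here {(0,0), (1,1)}, every member of which reproduces the oracle on all inputs; SARLock/Anti-SAT defend by forcing this loop to need exponentially many DIPs.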


11.2.1 Threat Model

For the comparative analysis among the proposed techniques, the considered threat model includes third-party vendors and the untrusted foundry (having oracle access), which can compromise the security of the design in any of the following ways:

• Reverse engineering (RE) by the foundry to gain critical design information
• Overproduction of ICs without the IP owner's knowledge
• Malicious insertion (i.e., a Trojan) upon retrieving the unlocked design

11.2.2 EPIC: Ending Piracy of Integrated Circuits

During the IP design process, all the critical paths are known to the designer after the gate-level netlist is generated from the register-transfer level and placement of the components is done [25]. In this locking technique, XOR/XNOR gates are added to selected non-critical wires of the modules intended for protection. The process uses asymmetric encryption, which can be described as follows. The design becomes transparent upon applying the correct Common Key (CK); with any other key, the response of the design is altered. The Common Key is randomly generated so that it cannot be stolen at earlier stages, and it is sent securely to the IP design house. After routing and fabricating the locked design, chips need activation before the testing phase. A random chip key (RCK) public/private pair is generated and engraved into an electronic fuse unit (EFU) during the initial power-up of the fabricated chip. The foundry then establishes secure communication with the IP design house to transmit RCK-public for the chip being activated; this communication is authenticated with the foundry's private key. In return, the IP owner sends the input key (IK), representing the Common Key encrypted with MK-private and RCK-public. Using RCK-public to encrypt communications provides resilience against statistical attacks, and the IK can additionally be encrypted with the foundry's public key so that only the fab can receive it. When the IK is entered into the chip, it is decrypted with RCK-private and MK-public, authenticating it as sent by the IP owner; after decryption, the CK is produced to unlock the chip. Finally, the chip can be tested and is ready for sale. Figure 11.1 shows an overview of this technique. Combinational designs are locked by selecting k wires and inserting k new XOR/XNOR gates at their locations. For each chosen wire, the driver output is disconnected from the sink, and an XOR/XNOR gate is inserted so that one input of the new gate reconnects to the driver output while the other input is a key input; the output of the gate connects to the original sink. The correct key values are 0 and 1 for inserted XOR and XNOR gates, respectively.
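The key-gate insertion rule above can be checked on a toy circuit. This sketch is illustrative (the circuit and wire choices are invented): one XOR key gate (correct key bit 0) and one XNOR key gate (correct key bit 1) are inserted, and exhaustive simulation confirms that the correct key is transparent while a wrong key corrupts the output.

```python
from itertools import product

# XOR/XNOR key-gate insertion on a toy netlist: a selected wire is cut and
# routed through a key gate; the correct key bit (0 for XOR, 1 for XNOR)
# makes the gate behave as a plain wire.

def original(a, b, c):
    return (a & b) | c

def locked(a, b, c, k0, k1):
    w = (a ^ k0) & b        # XOR key gate on wire 'a' (correct k0 = 0)
    w = 1 - (w ^ k1)        # XNOR key gate on the AND output (correct k1 = 1)
    return w | c

CORRECT = (0, 1)
inputs = list(product((0, 1), repeat=3))
matches_with_correct_key = all(
    locked(a, b, c, *CORRECT) == original(a, b, c) for a, b, c in inputs)
corrupts_with_wrong_key = any(
    locked(a, b, c, 1, 1) != original(a, b, c) for a, b, c in inputs)
```

Mixing XORs and XNORs matters: if all key gates were XORs, the all-zero key would always unlock the design, so the XOR/XNOR choice itself encodes the key.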


Fig. 11.1 The semiconductor supply chain based on EPIC [25]. Unlike the traditional design flow, additional gates, termed "key gates," are inserted after the logic synthesis step. The correct input value to those key gates, known as the "unlocking key," restores the original functionality and is stored in the master key holder to unlock the fabricated chip

Random logic locking is one of the earlier approaches to logic locking in pursuit of securing IPs, and as a result, the authors' perspective on the threat model is limited. The security of this technique rests solely on the computational complexity of breaking Rivest–Shamir–Adleman (RSA)-type cryptosystems. Consequently, with the emergence of different attack models, i.e., algorithmic [31], structural [35], and machine learning-based [4] attacks, the technique cannot hide the correct key patterns. To access the IK and acquire the correct keys without any modification to the chip, an attacker must obtain RCK-public, MK-private, and CK; the need for all of these keys makes the technique resilient at that level. However, as mentioned, the work does not consider this wider array of attacks or how the technique would fare against them. Additionally, there is no measure of whether input patterns can activate a key gate under an incorrect key to produce output corruptibility.

11.2.3 Fault Analysis-Based Logic Encryption

In the EPIC [25] technique, where a key gate is inserted at a random place, it is not certain that an incorrect key will corrupt the output patterns. It may happen that, for most input patterns, a wrong key adds no corruptibility to the output, meaning the locking is largely ineffective. In this regard, logic encryption based on fault analysis ensures that, for each incorrect key combination, the Hamming distance (HD) between correct and incorrect outputs is 50%. In the fault analysis-based logic encryption technique [21], the goal is to insert key gates so that wrong key combinations impact 50% of the total output bits for any input pattern. To achieve this, a fault impact is evaluated for each net of the design using Eq. 11.1, where the numbers of patterns that can detect a stuck-at-0 and a stuck-at-1 fault are N_OP0 and N_OP1, respectively,

Fig. 11.2 The two algorithms proposed in fault analysis-based logic locking [21] using XOR and MUX gates, respectively

whereas the total numbers of impacted output bits for s-a-0 and s-a-1 are N_OO0 and N_OO1, respectively. Incorrect keys are guaranteed to have maximum impact on the output, ensuring 50% corruptibility, by choosing the locations of highest fault impact for inserting the key gates.

FaultImpact = (N_OP0 × N_OO0 + N_OP1 × N_OO1)    (11.1)
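Eq. 11.1 can be evaluated exhaustively on a toy netlist. This sketch is illustrative: the two-gate circuit is invented, and brute-force enumeration over all input patterns stands in for a real fault simulator. For every net it counts the patterns detecting the stuck-at-0/1 faults (N_OP0, N_OP1) and the total output bits those patterns corrupt (N_OO0, N_OO1).

```python
from itertools import product

# Exhaustive fault simulation of Eq. 11.1: the net with the highest
# FaultImpact is the preferred key-gate insertion site.

INPUTS = ["a", "b", "c"]
GATES = [("n1", "and", ("a", "b")), ("n2", "or", ("n1", "c"))]
OUTPUTS = ["n2"]

def evaluate(pattern, stuck=None):
    vals = dict(pattern)
    if stuck and stuck[0] in vals:          # stuck-at fault on an input net
        vals[stuck[0]] = stuck[1]
    for net, gate, (f0, f1) in GATES:
        v = vals[f0] & vals[f1] if gate == "and" else vals[f0] | vals[f1]
        vals[net] = stuck[1] if stuck and stuck[0] == net else v
    return vals

def fault_impact(net):
    impact = {}
    for sv in (0, 1):                       # stuck-at-0, then stuck-at-1
        n_op = n_oo = 0                     # detecting patterns / output bits
        for bits in product((0, 1), repeat=len(INPUTS)):
            pattern = dict(zip(INPUTS, bits))
            good = evaluate(pattern)
            bad = evaluate(pattern, stuck=(net, sv))
            diff = sum(good[o] != bad[o] for o in OUTPUTS)
            if diff:
                n_op += 1
                n_oo += diff
        impact[sv] = (n_op, n_oo)
    (p0, o0), (p1, o1) = impact[0], impact[1]
    return p0 * o0 + p1 * o1                # Eq. 11.1

nets = INPUTS + [g[0] for g in GATES]
impacts = {net: fault_impact(net) for net in nets}
best_net = max(impacts, key=impacts.get)
```

On this circuit the output net n2 dominates (FaultImpact 34 versus 10 for n1), matching the intuition that faults near the outputs corrupt the most observable bits.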

Two algorithms are proposed in this technique; their flow diagrams are shown in Fig. 11.2. In both algorithms, the fault impact of each net in the design is determined using Eq. 11.1.


For the XOR-based approach, the net with the highest fault impact is selected for inserting an XOR gate. After the XOR gate insertion, the fault impact is recalculated on the updated netlist, and this process continues until the defined number of key gates has been inserted. Finally, some of the inserted XOR gates are changed to XNOR, and inverters are inserted at the outputs of XOR/XNOR gates to ensure that the correct key combination provides the correct output. For the MUX-based approach, on the other hand, the highest fault-impact net serves as the true wire, while the false wire is selected based on the contradiction metric, which maximizes the probability of differing values at the true and false wires. Here P_{0,true} and P_{1,true} represent the probabilities of a 0 and a 1 at the true net, whereas P_{0,false} and P_{1,false} represent the probabilities of the false wire's value being a 0 and a 1. The MUX is inserted, the parameters are recalculated on the updated netlist, and the highest fault-impact and contradiction-metric nets are selected again; the process continues, as in the XOR-based approach, until the defined number of key gates has been inserted. Finally, the true/false input wires of the MUXes are swapped at random locations. The contradiction metric is defined as:

ContradictionMetric = (P_{0,true} × P_{1,false}) + (P_{1,true} × P_{0,false})    (11.2)
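Eq. 11.2 can likewise be computed from signal probabilities estimated by exhaustive simulation. The two-input circuit and the true/false wire choices below are illustrative, not from the paper; the sketch shows why a false wire whose value tends to disagree with the true wire scores higher.

```python
from itertools import product

# Contradiction metric (Eq. 11.2): estimate P0/P1 per net by exhaustive
# simulation, then score candidate false wires against a chosen true wire.

INPUTS = ["a", "b"]
GATES = [("n_and", "and", ("a", "b")), ("n_or", "or", ("a", "b"))]

def signal_prob(net):
    ones = 0
    patterns = list(product((0, 1), repeat=len(INPUTS)))
    for bits in patterns:
        vals = dict(zip(INPUTS, bits))
        for name, gate, (f0, f1) in GATES:
            vals[name] = (vals[f0] & vals[f1] if gate == "and"
                          else vals[f0] | vals[f1])
        ones += vals[net]
    p1 = ones / len(patterns)
    return 1 - p1, p1                       # (P0, P1)

def contradiction_metric(true_net, false_net):
    p0_t, p1_t = signal_prob(true_net)
    p0_f, p1_f = signal_prob(false_net)
    return p0_t * p1_f + p1_t * p0_f        # Eq. 11.2

# True wire n_and (P1 = 0.25) against two candidate false wires:
cm_vs_a = contradiction_metric("n_and", "a")      # a: P1 = 0.5
cm_vs_or = contradiction_metric("n_and", "n_or")  # n_or: P1 = 0.75
```

Here the OR net (which is usually 1 when the AND net is usually 0) scores 0.625 against the AND net, beating the primary input's 0.5, so it would be chosen as the false wire for the MUX.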

The main objective of fault analysis-based logic locking is to ensure a 50% HD between the outputs under correct and wrong key combinations, but with lower overhead compared to random insertion of the key gates. The overhead becomes higher for smaller designs (.