Fiber Bundles: Statistical Models and Applications 3031147960, 9783031147968

​This book presents a critical overview of statistical fiber bundle models, including existing models and potential new


English Pages 163 [164] Year 2023



Table of contents :
Preface
Acknowledgments
Contents
Acronyms
1 Introduction and Preliminaries
1.1 Overall Introduction
1.1.1 Early Origins of Fiber Bundles Model
1.1.2 Organization of This Book
1.2 Preliminaries
1.2.1 Elements of Probability
1.2.1.1 Sample Space and Events
1.2.1.2 Axioms of Probability
1.2.1.3 Conditional Probability and Independence
1.2.2 Random Variables
1.2.2.1 Expectation of a Random Variable
1.2.2.2 Variance, Covariance, and Correlation of Random Variables
1.2.2.3 Moments of a Random Variable
1.2.2.4 Survival Function
1.2.2.5 Hazard Function
1.2.2.6 Quantile Function
1.2.2.7 Distributions of Minimum and Maximum
1.2.3 Some Commonly Used Discrete Distributions
1.2.3.1 Binomial Distribution
1.2.3.2 Poisson Distribution
1.2.4 Some Commonly Used Continuous Distributions
1.2.4.1 Uniform Distribution
1.2.4.2 Normal Distribution
1.2.4.3 Exponential Distribution
1.2.4.4 Weibull Distribution
1.2.4.5 Other Log-Location-Scale Distributions
1.2.4.6 Other Lifetime Distributions
1.2.5 Likelihood Inference
1.2.5.1 Likelihood and Fisher Information Matrices
1.2.5.2 General Maximum Likelihood Theory
1.2.6 Statistical Inference
1.2.7 Model Selection Criteria
1.2.8 Regression
1.2.8.1 Simple Regression Analysis
1.2.8.2 Parametric Lifetime Regression Models (Weibull Regression, Exponential Regression)
1.2.8.3 Semiparametric Regression Model (Cox Proportional Hazards Model)
1.2.9 Censoring
1.2.10 Kaplan–Meier Estimator of cdf
Part I Physical Aspects of Fiber Bundle Models
2 Electrical Circuits of Ordinary Capacitors
2.1 Electrical Laws for Circuits of Capacitors
2.2 Conservation Laws for Series and Parallel Circuits
2.2.1 Conservation Laws for Series and Parallel Circuits
2.2.2 Consequences of the Conservation Laws: The Capacitor Laws
2.2.3 Parallel and Series Circuits of Capacitors with the Same Capacitance
2.2.4 Behavior of the Charge and Voltage Load Distributions for Series Circuits of Capacitors
3 Breakdown of Thin-Film Dielectrics
3.1 Quantum Theory of Electron States in Solids
3.2 The Two Dielectric Materials Being Examined
3.2.1 Structure of Silicon Dioxide Thin Films
3.2.2 Structure of Hafnium Oxide Thin Films
3.3 Mechanisms of Conduction Through Dielectrics
3.3.1 Electrode-Limited Conduction Mechanisms
3.3.2 Bulk-Limited Conduction Mechanisms
3.4 Breakdown in Silicon Dioxide Dielectrics
3.5 Breakdown in Hafnium Oxide Dielectrics
4 Cell Models for Dielectrics
Part II Statistical Aspects of Fiber Bundle Models
5 Electrical Breakdown and the Breakdown Formalism
5.1 The Breakdown Formalism
5.2 Time-to-Breakdown (TBD) Formalism: Static Loads
5.2.1 TBD Formalism: Dynamic Loads
6 Statistical Properties of a Load-Sharing Bundle
6.1 Load-Sharing Rules
6.2 The Bundle Strength Distribution as an Affine Mixture
6.3 The Bundle Strength Density as a Gamma-Type of Mixed Distribution
6.4 The Gibbs Representation of the Distribution of the States of a Bundle
6.5 Examples of Size Effects
7 An Illustrative Application: Fibers and Fibrous Composites
7.1 The Weibull Distribution and the Weakest Link Hypothesis
7.1.1 The Bader–Priest Fiber Data
7.1.2 The Bader–Priest Impregnated Tow Data
7.1.3 Cumulative Damage Models
7.2 Discussion of Rosen's Experiments
7.2.1 Description of the Series A Experiments and the Analysis of the Specimen A-7 Photographs
7.2.2 Discussion Regarding the Shape of the Bundle in the Chain-of-Bundles Model
8 Statistical Analysis of Time-to-Breakdown Data
8.1 Fitting Breakdown Data with Different Statistical Distributions
8.2 Breakdown-Time Regression Models
8.2.1 Proportional Hazard Models for Kim and Lee (2004)'s Figure 6 Data
8.2.2 Fitting Kim and Lee (2004)'s Figure 3 data with different parametric models and link functions
8.3 Prediction of Hard Breakdown Based on Soft Breakdown Time
9 Circuits of Ordinary Capacitors
9.1 Voltage Breakdown (VBD) of Series and Parallel Circuits Based on Kim and Lee (2004)'s Figure 6 Data
9.2 Parallel–Series Circuits Based on Kim and Lee (2004)'s Figure 6 Data
9.3 TBD and Cycle Times to Breakdown (CTBD) of Series Circuits
10 Simulated Size Effects Relationships Motivated by the Load-Sharing Cell Model
10.1 Background
10.2 Size Effect Simulations
11 Concluding Comments and Future Research Directions
11.1 Book Summary
11.2 Some Future Research Directions
11.2.1 Curvature in Weibull Plots
11.2.2 Modeling Roughness
11.2.3 Degradation
11.2.4 Nano-Sensors
A Appendices of Supplementary Topics
A.1 Curvature in Weibull Plots and Its Implications
A.1.1 Reliability Systems and Curvature in Related Weibull plots
A.1.2 Curvature in Weibull Plots
A.1.3 Size Effects and Mixed Hazards
A.1.4 Weibull Plots of Mixed Weibull Hazards: Convex Curvature
A.1.5 An Example of an Exact Weibull Plot with Concave Curvature
A.1.6 The Weibull Chain-of-Links Hypothesis and Linearity in Weibull Plots
A.2 Load-Sharing Networks and Absorbing State Load-Sharing Rules
A.2.1 Load-Sharing Networks
A.2.2 Absorbing State Load-Sharing Rules
A.3 Gibbs Measure Potentials and the Stresses and Potential Energies in Load-Sharing Bundles
References
Index

James U. Gleaton · David Han · James D. Lynch · Hon Keung Tony Ng · Fabrizio Ruggeri

Fiber Bundles Statistical Models and Applications

Fiber Bundles

James U. Gleaton • David Han • James D. Lynch • Hon Keung Tony Ng • Fabrizio Ruggeri

Fiber Bundles Statistical Models and Applications

James U. Gleaton Worcester, MA, USA

David Han Department of Management Science and Statistics University of Texas at San Antonio San Antonio, TX, USA

James D. Lynch Department of Statistics University of South Carolina Columbia, SC, USA

Hon Keung Tony Ng Department of Mathematical Sciences Bentley University Waltham, MA, USA

Fabrizio Ruggeri Istituto di Matematica Applicata e Tecnologie Informatiche Consiglio Nazionale delle Ricerche Milano, Italy

ISBN 978-3-031-14796-8    ISBN 978-3-031-14797-5 (eBook)
https://doi.org/10.1007/978-3-031-14797-5

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This Springer imprint is published by the registered company Springer Nature Switzerland AG The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland

Preface

The evolution of the book started during the 2019–2020 academic year, when SAMSI (Statistical and Applied Mathematical Sciences Institute, in Durham, NC, USA) hosted a one-year program called "Games and Decisions in Risk and Reliability." There, the authors organized a SAMSI program reliability working group on Load-Sharing Systems, which was the start of their three-year collaboration on the topic. The original working group plan was to write an overview paper, but the material that we collected and produced far exceeded the usual size of a paper. We also realized that there was a need to produce something more structured and detailed for a wider audience.

An important class of load-sharing systems is the class of fiber bundle models, which have applications in the physical and material sciences. Many in the statistical community may not be that acquainted with the physical aspects of these applications. In addition, the failure of fiber bundles and of chains of such bundles is described by stochastic reliability models and methods with which some in the physical and material science community may not be familiar. Therefore, we thought that a book describing both the physical and the statistical modeling in a rigorous but accessible way would be an important contribution to both communities.

We start the book by introducing the basic elements of probability and statistics concerning distributions, classical inference, and stochastic models, mostly related to reliability. This is followed by the two main parts of the book. In Part I, we discuss classical electrical circuits of ordinary capacitors, including circuit laws. This is followed by a discussion of the solid-state physics of thin-film dielectrics, including structure, conduction mechanisms, and dielectric breakdown for both silica and hafnia dielectrics, as well as cell models for thin-film dielectrics. In Part II, the statistical fiber bundle model is applied to the breakdown phenomenon, as well as to the failure of fibrous composite materials. The book closes with a summary and some suggestions for future research.

Worcester, MA, USA        James U. Gleaton
San Antonio, TX, USA      David Han
Columbia, SC, USA         James D. Lynch
Waltham, MA, USA          Hon Keung Tony Ng
Milano, Italy             Fabrizio Ruggeri

June 2022

Acknowledgments

We would like to thank David Fecko at the Pennsylvania State University’s Materials Research Institute (MRI) for arranging a visit to MRI’s dielectrics testing center. We would especially like to thank Eugene Furman of MRI for several insightful discussions and numerous references on dielectrics. We want to thank J. C. Lee and J-L. Le for their efforts regarding the data in Kim and Lee (2004) and Le (2012). This material was based upon work partially supported by the National Science Foundation under Grant DMS-1127914 to the Statistical and Applied Mathematical Sciences Institute (SAMSI). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.


Contents

1 Introduction and Preliminaries ..... 1
  1.1 Overall Introduction ..... 1
    1.1.1 Early Origins of Fiber Bundles Model ..... 1
    1.1.2 Organization of This Book ..... 2
  1.2 Preliminaries ..... 4
    1.2.1 Elements of Probability ..... 4
    1.2.2 Random Variables ..... 6
    1.2.3 Some Commonly Used Discrete Distributions ..... 11
    1.2.4 Some Commonly Used Continuous Distributions ..... 12
    1.2.5 Likelihood Inference ..... 19
    1.2.6 Statistical Inference ..... 21
    1.2.7 Model Selection Criteria ..... 22
    1.2.8 Regression ..... 23
    1.2.9 Censoring ..... 26
    1.2.10 Kaplan–Meier Estimator of cdf ..... 27

Part I Physical Aspects of Fiber Bundle Models

2 Electrical Circuits of Ordinary Capacitors ..... 31
  2.1 Electrical Laws for Circuits of Capacitors ..... 31
  2.2 Conservation Laws for Series and Parallel Circuits ..... 32
    2.2.1 Conservation Laws for Series and Parallel Circuits ..... 33
    2.2.2 Consequences of the Conservation Laws: The Capacitor Laws ..... 33
    2.2.3 Parallel and Series Circuits of Capacitors with the Same Capacitance ..... 34
    2.2.4 Behavior of the Charge and Voltage Load Distributions for Series Circuits of Capacitors ..... 35

3 Breakdown of Thin-Film Dielectrics ..... 39
  3.1 Quantum Theory of Electron States in Solids ..... 39
  3.2 The Two Dielectric Materials Being Examined ..... 40
    3.2.1 Structure of Silicon Dioxide Thin Films ..... 40
    3.2.2 Structure of Hafnium Oxide Thin Films ..... 41
  3.3 Mechanisms of Conduction Through Dielectrics ..... 44
    3.3.1 Electrode-Limited Conduction Mechanisms ..... 45
    3.3.2 Bulk-Limited Conduction Mechanisms ..... 47
  3.4 Breakdown in Silicon Dioxide Dielectrics ..... 49
  3.5 Breakdown in Hafnium Oxide Dielectrics ..... 51

4 Cell Models for Dielectrics ..... 53

Part II Statistical Aspects of Fiber Bundle Models

5 Electrical Breakdown and the Breakdown Formalism ..... 59
  5.1 The Breakdown Formalism ..... 59
  5.2 Time-to-Breakdown (TBD) Formalism: Static Loads ..... 59
    5.2.1 TBD Formalism: Dynamic Loads ..... 62

6 Statistical Properties of a Load-Sharing Bundle ..... 65
  6.1 Load-Sharing Rules ..... 65
  6.2 The Bundle Strength Distribution as an Affine Mixture ..... 66
  6.3 The Bundle Strength Density as a Gamma-Type of Mixed Distribution ..... 68
  6.4 The Gibbs Representation of the Distribution of the States of a Bundle ..... 68
  6.5 Examples of Size Effects ..... 71

7 An Illustrative Application: Fibers and Fibrous Composites ..... 75
  7.1 The Weibull Distribution and the Weakest Link Hypothesis ..... 75
    7.1.1 The Bader–Priest Fiber Data ..... 76
    7.1.2 The Bader–Priest Impregnated Tow Data ..... 80
    7.1.3 Cumulative Damage Models ..... 82
  7.2 Discussion of Rosen's Experiments ..... 84
    7.2.1 Description of the Series A Experiments and the Analysis of the Specimen A-7 Photographs ..... 84
    7.2.2 Discussion Regarding the Shape of the Bundle in the Chain-of-Bundles Model ..... 86

8 Statistical Analysis of Time-to-Breakdown Data ..... 91
  8.1 Fitting Breakdown Data with Different Statistical Distributions ..... 91
  8.2 Breakdown-Time Regression Models ..... 93
    8.2.1 Proportional Hazard Models for Kim and Lee (2004)'s Figure 6 Data ..... 93
    8.2.2 Fitting Kim and Lee (2004)'s Figure 3 data with different parametric models and link functions ..... 97
  8.3 Prediction of Hard Breakdown Based on Soft Breakdown Time ..... 100

9 Circuits of Ordinary Capacitors ..... 103
  9.1 Voltage Breakdown (VBD) of Series and Parallel Circuits Based on Kim and Lee (2004)'s Figure 6 Data ..... 103
  9.2 Parallel–Series Circuits Based on Kim and Lee (2004)'s Figure 6 Data ..... 108
  9.3 TBD and Cycle Times to Breakdown (CTBD) of Series Circuits ..... 111

10 Simulated Size Effects Relationships Motivated by the Load-Sharing Cell Model ..... 113
  10.1 Background ..... 113
  10.2 Size Effect Simulations ..... 116

11 Concluding Comments and Future Research Directions ..... 125
  11.1 Book Summary ..... 125
  11.2 Some Future Research Directions ..... 127
    11.2.1 Curvature in Weibull Plots ..... 127
    11.2.2 Modeling Roughness ..... 128
    11.2.3 Degradation ..... 129
    11.2.4 Nano-Sensors ..... 129

A Appendices of Supplementary Topics ..... 131
  A.1 Curvature in Weibull Plots and Its Implications ..... 131
    A.1.1 Reliability Systems and Curvature in Related Weibull plots ..... 131
    A.1.2 Curvature in Weibull Plots ..... 132
    A.1.3 Size Effects and Mixed Hazards ..... 133
    A.1.4 Weibull Plots of Mixed Weibull Hazards: Convex Curvature ..... 134
    A.1.5 An Example of an Exact Weibull Plot with Concave Curvature ..... 136
    A.1.6 The Weibull Chain-of-Links Hypothesis and Linearity in Weibull Plots ..... 142
  A.2 Load-Sharing Networks and Absorbing State Load-Sharing Rules ..... 145
    A.2.1 Load-Sharing Networks ..... 145
    A.2.2 Absorbing State Load-Sharing Rules ..... 145
  A.3 Gibbs Measure Potentials and the Stresses and Potential Energies in Load-Sharing Bundles ..... 147

References ..... 151
Index ..... 155

Acronyms

AD      Anderson-Darling
AFR     Accelerated failure rate
AFT     Accelerated failure time
AFM     Atomic force microscopy
AIC     Akaike information criterion
BIC     Bayesian information criterion
BD      Breakdown
CAFM    Conductive atomic force microscopy
CBD     Current breakdown
cdf     Cumulative distribution function
CTBD    Cycle times to breakdown
EOT     Equivalent oxide thickness
FBM     Fiber bundle model
GB      Grain boundaries
GOF     Goodness-of-fit
HBD     Hard breakdown
i.i.d.  Independent and identically distributed
KM      Kaplan–Meier
MEV     Minimum extreme value
MLE     Maximum likelihood estimator
pdf     Probability density function
PE      Polyethylene
pmf     Probability mass function
r.v.    Random variable
RVE     Representative volume element
SBD     Soft breakdown
TBD     Time to breakdown
TAT     Trap-assisted tunneling
VBD     Voltage breakdown


Chapter 1

Introduction and Preliminaries

1.1 Overall Introduction

1.1.1 Early Origins of Fiber Bundles Model

Over the last sixty years, fiber bundle models (FBMs) have played an indispensable role in "Modelling Critical and Catastrophic Phenomena." The phrase in quotes is part of the title of a book on FBM (Bhattacharyya & Chakrabarti, 2006). This book consists of several tutorial introductory chapters, one of which is by Kun et al. (2006), entitled "Extensions of fibre bundle models," where they state that "The fibre bundle model is one of the most important theoretical approaches to investigate the fracture and breakdown (BD) of disordered media extensively used both by the engineering and physics community." The chapters after the introductory ones are specialized applications of the FBM in the geosciences. A related reference that is an excellent introduction to FBM and accessible to non-physicists is Hansen et al. (2015)'s book entitled "The Fibre Bundle Model: Modeling Failure in Materials." Another is Bažant and Le (2017)'s book entitled "Probabilistic Mechanics of Quasibrittle Structures—Strength, Lifetime, and Size Effect." The point of the current book is to present a friendly introduction to this important topic for statisticians who are not familiar with it, and an introduction to statistical methods for FBMs for non-statisticians. This is accomplished by concentrating on both the physical and statistical aspects of a specific load-sharing example: the BD of circuits of capacitors and related dielectrics. By concentrating on this specific situation, the presentation can be done in an axiomatic framework that is more comfortable for statisticians and probabilists; e.g., the load-sharing rule can be derived from first principles, and the physical aspects of dielectric breakdown are discussed at an elementary level. On the other hand, materials scientists might also find the overview enlightening; the presentation of the statistical aspects is self-contained.


The starting point of FBM originated with Daniels' (1945) seminal work on the distribution of the breaking strength of a bundle of threads. Here, in equilibrium, Hooke's law is $\sigma = Y\varepsilon$, where $\sigma$, $\varepsilon$, and $Y$ are, respectively, stress, strain, and Young's modulus. For homogeneous bundles, where all the threads have the same Young's modulus, Hooke's law leads to the equal load-sharing rule when all the threads have the same cross-sectional area (homogeneous case) and to proportional load-sharing if they have different areas (inhomogeneous case), where the proportions depend on the cross-sectional areas. These are equilibrium rules that are abruptly violated when a thread fails under increasing load. A thread breakage initiates a violent process, not easy to model, that can cause a cascade of thread failures. After this cascade, once the bundle is again in equilibrium, the surviving threads share the load equally (homogeneous case) or proportionally (inhomogeneous case).
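To make the equal load-sharing rule concrete, here is a minimal simulation sketch (not from the book; the i.i.d. Weibull thread strengths are an assumption made purely for illustration). Under equal load-sharing, a bundle of n threads with ordered strengths x_(1) <= ... <= x_(n) fails at total load max_k (n - k + 1) x_(k), since once the k - 1 weakest threads have failed the remaining n - k + 1 threads share the load equally.

```python
import numpy as np

rng = np.random.default_rng(2022)

def bundle_strength(strengths):
    """Total load at failure of an equal load-sharing bundle:
    max over k of (n - k + 1) * x_(k), x_(k) the k-th smallest strength."""
    x = np.sort(strengths)
    n = len(x)
    return np.max((n - np.arange(1, n + 1) + 1) * x)

# Illustrative thread strengths: i.i.d. Weibull(shape k=5, scale 1) -- an assumption.
n_threads, n_bundles = 50, 2000
strengths = rng.weibull(5.0, size=(n_bundles, n_threads))
per_bundle = np.array([bundle_strength(s) for s in strengths])

print("mean bundle strength per thread:", per_bundle.mean() / n_threads)
# Daniels (1945): the per-thread strength concentrates around max_x x*(1 - F(x));
# for a standard Weibull with shape k this is (1/k)**(1/k) * exp(-1/k).
k = 5.0
print("Daniels asymptotic value:      ", (1 / k) ** (1 / k) * np.exp(-1 / k))
```

The simulated per-thread strength should sit close to the asymptotic value, illustrating why large homogeneous bundles behave almost deterministically.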

1.1.2 Organization of This Book

In Part I, we consider series circuits of capacitors to illustrate FBM. Much like stressed threads that store potential energy, capacitors are electrical devices that store electrical energy, and series circuits of capacitors behave like bundles of threads. In addition, the capacitor law for a given capacitor, $V = C^{-1}Q$, where V, C, and Q are, respectively, the voltage, capacitance, and charge for that capacitor, is analogous to Hooke's law. Here, V, $C^{-1}$, and Q, respectively, play the roles of stress, Young's modulus, and strain, leading to the equal load-sharing rule if all the capacitances in the circuit are the same. Besides series circuits of ordinary capacitors, we also discuss the electrical breakdown of thin dielectrics and the cell models that have been used to model them. In particular, we discuss the load-sharing cell model, where the thin dielectric is modeled as a parallel circuit of cells and where each cell consists of a series circuit of nanocapacitors subject to the electrical laws for ordinary capacitors. This conceptualization leads to a weakest-link chain-of-bundles/cell model where, for an infinite chain, extreme value asymptotics leads to a Weibull distribution for the BD distribution of the dielectric. Lack of fit for the Weibull is considered a size effect and leads to consideration of the finite weakest-link model to account for this.
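The analogy can be made tangible with a short sketch (not from the book; the capacitance values are made up). Applying the capacitor law $V = Q/C$ to a series circuit: every capacitor in series carries the same charge Q, so the applied voltage divides in proportion to 1/C, which is the proportional (equal, when all capacitances agree) load-sharing behavior described above.

```python
import numpy as np

def series_voltages(total_voltage, capacitances):
    """Split an applied voltage over capacitors in series.
    In series every capacitor holds the same charge Q = V_total * C_eq,
    with 1/C_eq = sum_i 1/C_i, and V_i = Q / C_i."""
    capacitances = np.asarray(capacitances, dtype=float)
    c_eq = 1.0 / np.sum(1.0 / capacitances)
    charge = total_voltage * c_eq
    return charge / capacitances

# Hypothetical example: equal capacitances -> equal sharing of the voltage load.
print(series_voltages(10.0, [2.0, 2.0, 2.0, 2.0]))   # [2.5 2.5 2.5 2.5]

# Unequal capacitances -> the load is shared in proportion to 1/C (the capacitor
# with smaller C carries more voltage), mirroring inhomogeneous thread bundles.
print(series_voltages(10.0, [1.0, 2.0, 4.0]))
```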

In Part II, we consider the statistical aspects of fibers and fibrous composites and of circuits of ordinary capacitors and thin dielectrics. Statistical analyses of these materials and electric circuits are given, and related size effects are illustrated and discussed. We give a critical overview of the model assumptions and propose modifications.

We close the overall introduction with a discussion of further background and other applications of FBM. In addition to the physics-based reliability analysis of semiconductor dielectrics, there exist limitless applications of the FBM, many found in the fields of material science, mechanical and structural engineering, and nanotechnology. Its application to understand and explain the physical failure process has a long and rich history. During World War II, it was used to analyze the sudden and unexpected failure of the American Liberty cargo ships. They were the first ships built with hulls that were welded rather than riveted, and some of them broke in half without warning. Another catastrophic example is the in-flight structural failure of a Boeing 737 airplane in April 1988 in Hawaii. An explosive decompression occurred, and part of the fuselage was ripped away in mid-air. After the investigation based on the FBM, it was realized that the failure process had started long before as a small crack near a rivet, due to metal fatigue initiated by crevice corrosion. The crack grew due to the cyclic pressure loading from flying and being on the ground. Recently, Mishnaevsky (2013) used FBM for the micromechanics of wind turbine blade composites. The strength, stiffness, and fatigue life of composite materials were predicted, and the microstructural effects and suitability of different groups of materials were analyzed for applications in wind turbine blades. Pugno (2014) reviewed the mechanics of nanotubes, graphene, and related fibers. For designing super-strong carbon nanotube and graphene fibers and composites, FBMs were applied to quantify the effect of thermodynamically unavoidable atomistic defects on the fracture strength. Using FBM, Orgéas et al. (2015) discussed the rheology of highly concentrated fiber suspensions. Polymer composites reinforced with fibers or fiber bundles are suitable for many aeronautic, automotive, shipbuilding, electrical, electronic, health, and sports applications. Among these materials, sheet molding compounds, bulk molding compounds, glass mat thermoplastics, and carbon mat thermoplastics are the subject of several ongoing research efforts, and their structural properties with respect to material reliability are understood using the FBM. More recently, Boufass et al. (2020) studied the composite material energy for the FBM when the fibers in the composite are randomly oriented. Also, Leckey et al. (2020) described the construction of prediction intervals for the time at which a given number of components fail in a load-sharing system. Their interest was in the successive failure of tension wires (the components) in prestressed concrete beams.

Although unconventional, FBM can also be applied to understand natural phenomena in which rapid mass movements are triggered. Reiweger et al. (2009) used an FBM to describe slab snow avalanches, the most dangerous type of snow avalanche, accounting for 99% of fatal avalanches in Canada during the period 1972–1991. The slab snow avalanche presupposes the existence of a weak layer below the surface, which triggers a complete sheet of snow to slide. Further down the slope, the slab may break up into smaller pieces. Among different types and causes of landslides, some landslide models involve fiber bundles. Like the slab snow avalanches, a buried weak layer may cause a shallow landslide to occur. The weak layer is usually caused by infiltration of water, by rapid snow melting, or by heavy rainfall. This results in a reduction of the soil strength. The water-induced weakening is modeled by making the strength distribution of the fibers in the fiber bundle depend on the water content. To estimate the time to failure, Lehmann and Or (2012) modeled the time-dependent water infiltration for a given rainfall.

As roots have a stabilizing effect in soil and may inhibit landslides, Cohen et al. (2011) developed an FBM for shallow landslides by treating roots as fibers.

Finally, a Markov chain random walk on a graph is basic to making local load-sharing rules well defined, since these rules do not fully describe the load-sharing over all possible configurations of component failures. In addition, it also gives a way to obtain the "equilibrium" joint distribution of the states of the components of the related FBM. To do this, let the nodes in the Markov chain graph correspond to the components in the FBM, where the edges indicate how the load is transferred locally. The local load-sharing rules define the one-step transition probabilities of the chain. Consider a set of surviving components in the FBM and their corresponding nodes, which are now considered as absorbing states of the chain. The absorption probabilities are used to extend the local load-sharing rule to this set of components. The above is a generic model for the failure of an FBM network based on the Markov chain graph. As the load increases and components/nodes fail, one considers the BD of the network. This, together with the components'/nodes' BD distributions, can be used to construct the Gibbs measure for the state of the network (Sect. 6.4). The Gibbs measure indicates what routes are available, if any, between two components. This approach was used to determine the shape of a bundle in Li et al. (2019), where a bundle failed when there was a route/crack across the bundle. This network model may also have implications for the reliability of certain types of nano-sensors. Ebrahimi et al. (2013a, 2013b) used a lattice structure for the nano-sensor, where the sensor fails when there is no conductive route across the lattice, but they use a percolation model to produce a conductive route. We do not elaborate on this abstraction in the sequel, except for discussing it as an area for future research in Sect. 11.2.
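A small numerical sketch of this construction may help (not from the book; the graph, the failed component, and the uniform transition rule are hypothetical choices made only to show the mechanics): the load shed by a failed node is pushed along the edges of the graph, and the absorption probabilities of a random walk absorbed at the surviving nodes give the fractions of the shed load that each survivor picks up.

```python
import numpy as np

# Hypothetical 4-component load-sharing network (a path 0 - 1 - 2 - 3).
# Edges indicate where a failed component's load can be transferred locally.
adjacency = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}

failed = {1}                                   # assume component 1 has just failed
survivors = sorted(set(adjacency) - failed)

# One-step transition matrix: from a failed (transient) node the load moves to a
# uniformly chosen neighbour; surviving nodes are absorbing states.
n = len(adjacency)
P = np.zeros((n, n))
for i in range(n):
    if i in failed:
        nbrs = adjacency[i]
        P[i, nbrs] = 1.0 / len(nbrs)
    else:
        P[i, i] = 1.0

# Absorption probabilities: B = (I - Q)^{-1} R with Q = P[T, T], R = P[T, A],
# T the transient (failed) nodes and A the absorbing (surviving) nodes.
T, A = sorted(failed), survivors
Q, R = P[np.ix_(T, T)], P[np.ix_(T, A)]
B = np.linalg.solve(np.eye(len(T)) - Q, R)

for t, row in zip(T, B):
    shares = ", ".join(f"node {a}: {p:.2f}" for a, p in zip(A, row))
    print(f"load shed by failed node {t} is shared as -> {shares}")
```

In this toy case the two neighbors of the failed node each absorb half of its load, while the distant node absorbs nothing, which is the qualitative behavior a local load-sharing rule is meant to capture.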

1.2 Preliminaries

1.2.1 Elements of Probability

1.2.1.1 Sample Space and Events

In the book, we consider random phenomena (or experiments) whose individual outcomes are uncertain, although we know all the possible realizations. The set of all possible outcomes is called the sample space of the experiment, and we denote it by S. Any outcome s of the experiment is called an elementary event, and, more generally, any subset A of the sample space S is called an event. Therefore, an event is an outcome or a set of outcomes of the random phenomenon. As an example, if we roll a die once, the sample space consists of all the possible outcomes, i.e., $S = \{1, 2, 3, 4, 5, 6\}$, whereas an event is any set of outcomes, e.g., the set of even outcomes is $A = \{2, 4, 6\}$.

For any two events A and B, we define the new event $A \cup B$, called the union of A and B, to consist of all outcomes that are in A, in B, or in both. We define the event AB (or $A \cap B$), called the intersection of A and B, to consist of all outcomes that are in both A and B. The definitions can be generalized to more than two events. Therefore, given the events $A_1, \ldots, A_n$, their union, denoted by $\cup_{i=1}^{n} A_i$, is defined to consist of all outcomes that are in any of the $A_i$, whereas their intersection, denoted by $\cap_{i=1}^{n} A_i$, is defined to consist of all outcomes that are in all of the $A_i$. For any event A, the event $A^C$, referred to as the complement of A, consists of all outcomes in the sample space S that are not in A. Therefore, $A^C$ occurs if and only if A does not. Since the outcome of the experiment must lie in the sample space S, it follows that $S^C$ contains no outcome and thus cannot occur. We call $S^C$ the null set and designate it by $\emptyset$. If $A \cap B = \emptyset$, so that A and B cannot both occur, we say that A and B are mutually exclusive, and the events A and B are disjoint.

1.2.1.2 Axioms of Probability

For each event A of a random phenomenon having sample space S, we consider a number, denoted by $\Pr(A)$, which is called the probability of the event A. A more formal definition, based on measure theory, requires a measurable space $(S, \mathcal{S})$, where $\mathcal{S}$ is a $\sigma$-algebra over S (e.g., all the subsets of S if S has finite cardinality). The probability $\Pr$ is defined as a function over $\mathcal{S}$ taking values in the interval $[0, 1]$. The probability is characterized by the following three axioms:
• Axiom 1: $0 \le \Pr(A) \le 1$, for any $A \in \mathcal{S}$.
• Axiom 2: $\Pr(S) = 1$.
• Axiom 3: For any sequence of mutually exclusive events $A_1, A_2, \ldots \in \mathcal{S}$,
$$\Pr\left(\cup_{i=1}^{n} A_i\right) = \sum_{i=1}^{n} \Pr(A_i), \quad n = 1, 2, \ldots, \infty.$$
These axioms can be used to prove a variety of results about probabilities, like:
• $\Pr(A^C) = 1 - \Pr(A)$, for any $A \in \mathcal{S}$ (complement rule).
• $\Pr(\emptyset) = 0$.
• $\Pr(A \cup B) = \Pr(A) + \Pr(B) - \Pr(A \cap B)$, for any $A, B \in \mathcal{S}$.

1.2.1.3 Conditional Probability and Independence

We denote by $\Pr(A \mid B)$ the conditional probability of A given that B has occurred. If the event B occurs, then, in order for A to occur, the actual occurrence must be a point in both A and B, i.e., it must be in $A \cap B$. Since we know that B has occurred, B becomes our new sample space, and hence the probability that the event A occurs will equal the probability of $A \cap B$ relative to the probability of B, i.e.,
$$\Pr(A \mid B) = \frac{\Pr(A \cap B)}{\Pr(B)}, \quad \text{with } \Pr(B) > 0.$$
The determination of the probability that some event A occurs is often simplified by considering a second event B and then determining both the conditional probability of A given that B occurs and the conditional probability of A given that B does not occur. Since $A = (A \cap B) \cup (A \cap B^C)$, and $(A \cap B)$ and $(A \cap B^C)$ are mutually exclusive, this gives
$$\Pr(A) = \Pr(A \cap B) + \Pr(A \cap B^C) = \Pr(A \mid B)\Pr(B) + \Pr(A \mid B^C)\Pr(B^C).$$

1.2.2 Random Variables

A random variable (r.v.) X is a variable whose numerical value depends on the outcome of a random phenomenon. Therefore, a r.v. can be defined as a measurable function on the sample space S taking values in an adequate space (e.g., the real numbers for a real-valued r.v.). A r.v. that can take either a finite or at most a countable number of possible values is said to be discrete. The cumulative distribution function (cdf), F, of the r.v. X is defined, for any real number x, by
$$F(x) = \Pr(X \le x).$$
The cdf is a right-continuous function such that $\lim_{x\to-\infty} F(x) = 0$ and $\lim_{x\to\infty} F(x) = 1$. For a discrete r.v. X, its probability mass function (pmf) $f(x)$ is given by
$$f(x) = \Pr(X = x).$$
If X is a discrete r.v. that takes on one of the possible values $x_1, x_2, \ldots$, then we have $\sum_{i=1}^{\infty} f(x_i) = 1$.

A r.v. X is a continuous real r.v. if there is a nonnegative function $f(x)$, defined for all real numbers x, such that, for any measurable set C of real numbers, it holds that
$$\Pr(X \in C) = \int_{C} f(x)\,dx.$$
The function f is called the probability density function (pdf) of the r.v. X. The definition can be extended to the multivariate case. The relationship between the cdf $F(\cdot)$ and the pdf $f(\cdot)$ is given, in the real case, by
$$F(x) = \int_{-\infty}^{x} f(u)\,du.$$

For two r.v.'s X and Y, the joint cdf of X and Y is given by
$$F(x, y) = \Pr(X \le x, Y \le y).$$
If X and Y are both discrete r.v.'s, then
$$f(x, y) = \Pr(X = x, Y = y)$$
is the joint pmf of X and Y. We say that X and Y are jointly continuous, with joint pdf $f(x, y)$, if for any sets of real numbers C and D, it holds that
$$\Pr(X \in C, Y \in D) = \int_{y \in D}\int_{x \in C} f(x, y)\,dx\,dy.$$
The r.v.'s X and Y are said to be independent if, for any two measurable sets of real numbers C and D, it follows that $\Pr(X \in C, Y \in D) = \Pr(X \in C)\Pr(Y \in D)$. Two discrete r.v.'s X and Y will be independent if and only if, for all x, y,
$$\Pr(X = x, Y = y) = \Pr(X = x)\Pr(Y = y),$$
whereas two continuous r.v.'s X and Y will be independent if and only if, for all x, y,
$$f(x, y) = f_X(x)\,f_Y(y),$$
where $f_X(x)$ and $f_Y(y)$ are the marginal pdfs of X and Y, respectively.

1.2.2.1 Expectation of a Random Variable

If X is a discrete r.v. that takes on one of the possible values $x_1, x_2, \ldots$, then the expectation or expected value of X, also called the mean of X (denoted as $E(X)$), is defined by
$$E(X) = \sum_{i} x_i \Pr(X = x_i).$$
The mean measures the central location of the distribution. If X is a continuous r.v. having pdf f, then the expected value of X is given by
$$E(X) = \int_{-\infty}^{\infty} x f(x)\,dx.$$
Given a r.v. X, the expected value of $g(X)$, where g is any function, is given by
$$E[g(X)] = \sum_{x} g(x) f(x)$$
for a discrete r.v. having pmf $f(x)$, and by
$$E[g(X)] = \int_{-\infty}^{\infty} g(x) f(x)\,dx$$
for a continuous r.v. with pdf $f(x)$. Observe that the expectations exist only when the previous sums and integrals are finite. It can be proved that $E[aX + b] = aE(X) + b$, for any constants a and b, and $E\left[\sum_{i=1}^{n} X_i\right] = \sum_{i=1}^{n} E(X_i)$, for a sequence of r.v.'s $X_1, \ldots, X_n$.

1.2.2.2 Variance, Covariance, and Correlation of Random Variables

If X is a r.v. with mean $\mu = E(X)$, then the variance of X, denoted by $\mathrm{Var}(X)$, is defined, if the expectation exists, by
$$\mathrm{Var}(X) = E[(X - \mu)^2].$$
The variance measures the variability of X. As a consequence of the definition, an alternative formula is
$$\mathrm{Var}(X) = E(X^2) - [E(X)]^2.$$
It can be proved that $\mathrm{Var}(aX + b) = a^2\,\mathrm{Var}(X)$, for any constants a and b. The covariance of two r.v.'s X and Y, denoted by $\mathrm{Cov}(X, Y)$, is defined by
$$\mathrm{Cov}(X, Y) = E[(X - \mu_X)(Y - \mu_Y)],$$
where $\mu_X = E(X)$ and $\mu_Y = E(Y)$. The covariance can be expressed as $\mathrm{Cov}(X, Y) = E[XY] - E(X)E(Y)$. For a sequence of r.v.'s $X_1, \ldots, X_n$ and constants $a_1, \ldots, a_n$, we have
$$\mathrm{Var}\left(\sum_{i=1}^{n} a_i X_i\right) = \sum_{i=1}^{n} a_i^2\,\mathrm{Var}(X_i) + 2\sum_{i,j\,:\,i<j} a_i a_j\,\mathrm{Cov}(X_i, X_j).$$
For two r.v.'s X and Y, it becomes
$$\mathrm{Var}(X + Y) = \mathrm{Var}(X) + \mathrm{Var}(Y) + 2\,\mathrm{Cov}(X, Y).$$
The correlation of X and Y is the standardized covariance,
$$\mathrm{Corr}(X, Y) = \frac{\mathrm{Cov}(X, Y)}{\sqrt{\mathrm{Var}(X)\,\mathrm{Var}(Y)}},$$
which always lies in $[-1, 1]$.

1.2.2.3 Moments of a Random Variable

For a positive integer k, the k-th moment of a r.v. X is $E(X^k)$ and the k-th central moment is $E[(X - \mu)^k]$, whenever these expectations exist; in particular, the first moment is the mean and the second central moment is the variance.

1.2.2.4 Survival Function

Given the lifetime T of a unit, the function
$$S(t) = \Pr(T > t), \quad t > 0,$$

is defined as the survival function (or reliability function) of T. $S(t)$ gives the probability that a unit survives beyond time t. Furthermore, $S(t)$ is the complement of the cdf, i.e.,
$$S(t) = 1 - F(t) \equiv \bar{F}(t),$$
and for a continuous r.v. with pdf $f(\cdot)$, it holds that
$$S(t) = \int_{t}^{\infty} f(x)\,dx.$$
The survival function is sometimes defined as $S(t) = \Pr(T \ge t)$. If T is continuous, the two definitions are equivalent, but when T is discrete they differ at the support points: the first definition leads to a right-continuous function, whereas the second one gives a left-continuous one.

1.2.2.5 Hazard Function

Given a lifetime T of a unit, we define the hazard function $h(t)$, which describes the propensity of a unit that is still surviving at time t to fail immediately after t:
$$h(t) = \lim_{\Delta t \to 0} \frac{\Pr(t < T \le t + \Delta t \mid T > t)}{\Delta t} = \lim_{\Delta t \to 0} \frac{\Pr(t < T \le t + \Delta t)}{\Delta t\,\Pr(T > t)}.$$
For a continuous r.v., we have
$$h(t) = \frac{f(t)}{S(t)}, \quad \text{for } t > 0.$$

The hazard function is also known as the failure rate function, instantaneous failure rate, force of mortality, conditional mortality rate, and age-specific failure rate. Given the pdf $f(t)$ (and, consequently, the survival function $S(t)$), the definition leads to the hazard function $h(t)$. It can be proved that the reverse is also possible: given $h(t)$, it holds that
$$S(t) = e^{-\int_0^t h(x)\,dx} \quad \text{and} \quad f(t) = h(t)S(t) = h(t)\,e^{-\int_0^t h(x)\,dx}.$$
The term $H(t) = \int_0^t h(x)\,dx$ is called the cumulative hazard function.
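As a quick numerical check of these identities (a sketch, not from the book), one can start from a hazard function, recover S(t) by numerical integration of the cumulative hazard, and compare with a direct evaluation; the increasing hazard h(t) = 3t^2 used below is just an assumed example (it is the hazard of a Weibull with k = 3 and lambda = 1).

```python
import numpy as np

def survival_from_hazard(hazard, t_grid):
    """S(t) = exp(-integral_0^t h(x) dx), via the trapezoidal rule on t_grid."""
    h_vals = hazard(t_grid)
    cum_hazard = np.concatenate(([0.0], np.cumsum(
        0.5 * (h_vals[1:] + h_vals[:-1]) * np.diff(t_grid))))
    return np.exp(-cum_hazard)

hazard = lambda t: 3.0 * t**2          # assumed example: Weibull(k=3, lambda=1) hazard
t = np.linspace(0.0, 2.0, 2001)

S_numeric = survival_from_hazard(hazard, t)
S_exact = np.exp(-t**3)                # the corresponding Weibull survival function

print("max abs error:", np.max(np.abs(S_numeric - S_exact)))   # should be ~1e-6
```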

1.2.2.6 Quantile Function

Given the cdf F, we define the p-th quantile of F as the smallest time $t_p$ such that
$$\Pr(T \le t_p) = F(t_p) \ge p,$$
where $0 < p < 1$. When $F(t)$ is strictly increasing, there is a unique value $t_p$ that satisfies $F(t_p) = p$, and we write $t_p = F^{-1}(p)$. When $F(t)$ is constant over some intervals, there can be more than one solution t to the equation $F(t) \ge p$. Taking $t_p$ equal to the smallest t-value satisfying $F(t) \ge p$ is a standard convention.

1.2.2.7 Distributions of Minimum and Maximum

Let $X_1, \ldots, X_n$ be independent and identically distributed (i.i.d.) r.v.'s with common cdf $F(x)$. Let $X_{n:n}$ and $X_{1:n}$ be, respectively, the maximum and the minimum of the n r.v.'s. The cdf of $X_{n:n}$ is given by
$$\Pr(X_{n:n} \le x) = \Pr(X_1 \le x, \ldots, X_n \le x) = \prod_{i=1}^{n} F(x) = [F(x)]^n.$$
The cdf of $X_{1:n}$ is given by
$$\Pr(X_{1:n} \le x) = 1 - \Pr(X_{1:n} > x) = 1 - \Pr(X_1 > x, \ldots, X_n > x) = 1 - \prod_{i=1}^{n} [1 - F(x)] = 1 - [1 - F(x)]^n.$$
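The distribution of the minimum is what drives the weakest-link arguments used throughout this book, so a quick simulation check may be helpful (a sketch, not from the book; the exponential parent distribution is an arbitrary choice): the empirical cdf of $X_{1:n}$ should match $1 - [1 - F(x)]^n$.

```python
import numpy as np

rng = np.random.default_rng(7)
n, n_rep = 10, 200_000

# Parent distribution: exponential with mean 1 (arbitrary choice), F(x) = 1 - exp(-x).
samples = rng.exponential(1.0, size=(n_rep, n))
minima = samples.min(axis=1)

for x in (0.05, 0.1, 0.2):
    F = 1.0 - np.exp(-x)
    empirical = np.mean(minima <= x)
    theoretical = 1.0 - (1.0 - F) ** n       # 1 - [1 - F(x)]^n
    print(f"x={x:.2f}  empirical={empirical:.4f}  theoretical={theoretical:.4f}")
```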

1.2.3 Some Commonly Used Discrete Distributions

1.2.3.1 Binomial Distribution

A r.v. X has a binomial distribution $\mathrm{Bin}(n, p)$ if its pmf is given by
$$f(x) = \binom{n}{x} p^x (1-p)^{n-x}, \quad x = 0, 1, \ldots, n,\ 0 < p < 1.$$
A binomial $\mathrm{Bin}(1, p)$ r.v. is called a Bernoulli r.v. (sometimes denoted as $\mathrm{Bern}(p)$), whose outcome is typically presented as a "success" with probability p in a trial. Similarly, the realization of a binomial distribution $\mathrm{Bin}(n, p)$ is presented as the number of "successes" in n independent trials, each of which results in a "success" with probability p. Mathematically, the link is given by $X = \sum_{i=1}^{n} X_i$, where X is a binomial $\mathrm{Bin}(n, p)$ r.v. and $X_1, \ldots, X_n$ are independent Bernoulli $\mathrm{Bern}(p)$ r.v.'s. The mean and the variance of a binomial $\mathrm{Bin}(n, p)$ r.v. X are given, respectively, by
$$E(X) = np \quad \text{and} \quad \mathrm{Var}(X) = np(1-p).$$

1.2.3.2 Poisson Distribution

A r.v. X has a Poisson distribution $\mathcal{P}(\lambda)$ if its pmf is given by
$$f(x) = \frac{e^{-\lambda}\lambda^x}{x!}, \quad x = 0, 1, \ldots, \text{ and } \lambda > 0.$$
A Poisson r.v. may be used to approximate the distribution of the number of successes in a large number of trials when each of them has a small probability of being a success. The mean and the variance of a Poisson $\mathcal{P}(\lambda)$ r.v. X coincide and are given by
$$E(X) = \mathrm{Var}(X) = \lambda.$$
Thus, if X represents the number of events occurring within a unit period of time, then $\lambda$ is the average rate of occurrence per unit time. An additive property holds: if $X_i$ is Poisson distributed with parameter $\lambda_i$, $i = 1, 2, \ldots, n$, and if these r.v.'s are mutually independent, then $\sum_{i=1}^{n} X_i$ has a Poisson distribution with parameter $\sum_{i=1}^{n} \lambda_i$.

Given two independent Poisson r.v.'s $X_1$ and $X_2$, with parameters $\lambda_1$ and $\lambda_2$, respectively, another important property concerns the conditional distribution of $X_1$ when their sum s is fixed, i.e., $\Pr(X_1 = t \mid X_1 + X_2 = s)$, for any $0 \le t \le s$. It can be proved that this conditional distribution is $\mathrm{Bin}(s, \pi)$, where $\pi = \lambda_1 / (\lambda_1 + \lambda_2)$.
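Both properties are easy to verify by simulation (a minimal sketch, not from the book; the parameter values are arbitrary): the sum of independent Poisson variables behaves like a Poisson with the summed rate, and conditionally on a fixed sum the first component behaves like a binomial.

```python
import numpy as np

rng = np.random.default_rng(0)
lam1, lam2, n_rep = 2.0, 3.0, 500_000

x1 = rng.poisson(lam1, n_rep)
x2 = rng.poisson(lam2, n_rep)
total = x1 + x2

# Additive property: X1 + X2 should be Poisson(lam1 + lam2) = Poisson(5),
# so mean and variance should both be close to 5.
print("mean of sum:", total.mean(), " variance of sum:", total.var())

# Conditional property: given X1 + X2 = s, X1 should be Bin(s, pi)
# with pi = lam1 / (lam1 + lam2) = 0.4.
s = 5
cond = x1[total == s]
print("conditional mean of X1 given sum=5:", cond.mean(),
      " binomial mean s*pi:", s * lam1 / (lam1 + lam2))
```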

1.2.4 Some Commonly Used Continuous Distributions

1.2.4.1 Uniform Distribution

A r.v. X has a continuous uniform distribution $\mathcal{U}(a, b)$ over the interval $(a, b)$, $a < b$, if its pdf is given by
$$f(x) = \frac{1}{b-a}, \quad \text{for } a \le x \le b.$$
The cdf of X is given, for $a < x < b$, by
$$F(x) = \int_{a}^{x} \frac{1}{b-a}\,dx = \frac{x-a}{b-a}.$$
The mean and variance of a $\mathcal{U}(a, b)$ r.v. X are
$$E(X) = \frac{a+b}{2} \quad \text{and} \quad \mathrm{Var}(X) = \frac{(b-a)^2}{12},$$
respectively.

1.2.4.2 Normal Distribution

A r.v. X has a normal (or Gaussian) distribution $N(\mu, \sigma^2)$ with mean $\mu$ and variance $\sigma^2$ if its pdf is given by
$$f(x) = \frac{1}{\sqrt{2\pi}\,\sigma}\,e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2}, \quad \text{for } -\infty < x < \infty.$$
An important fact about normal r.v.'s is that, if X is normal with mean $\mu$ and variance $\sigma^2$, then for any constants a and b, $aX + b$ is normally distributed with mean $a\mu + b$ and variance $a^2\sigma^2$. If X is normal with mean $\mu$ and variance $\sigma^2$, then $Z = (X - \mu)/\sigma$ is normal with mean 0 and variance 1, and it is said to have a standard normal distribution. The notations $\phi$ and $\Phi$ are typically used to denote the pdf and cdf of the standard normal distribution, i.e.,
$$\phi(x) = \frac{1}{\sqrt{2\pi}}\,e^{-x^2/2} \quad \text{and} \quad \Phi(x) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{x} e^{-y^2/2}\,dy, \quad \text{for } -\infty < x < \infty.$$
The normal distribution plays an important role in Statistics, with many relevant properties. We just mention its use in approximating distributions, as a consequence of the Central Limit Theorem, which asserts that the normalized sum of a large number of independent r.v.'s has approximately a normal distribution. More formally, if $X_1, X_2, \ldots$ is a sequence of independent and identically distributed r.v.'s having finite mean $\mu$ and finite variance $\sigma^2$, then
$$\lim_{n\to\infty} \Pr\!\left(\frac{X_1 + X_2 + \cdots + X_n - n\mu}{\sigma\sqrt{n}} \le x\right) = \Phi(x), \quad \text{for } -\infty < x < \infty.$$

1.2.4.3 Exponential Distribution

A r.v. T has an exponential distribution $\mathcal{E}(\lambda)$ if its pdf is given by
$$f(t) = \frac{1}{\lambda}\,e^{-t/\lambda}, \quad \text{for } t > 0 \text{ and } \lambda > 0.$$

The mean and variance of an $\mathcal{E}(\lambda)$ r.v. T are
$$E(T) = \lambda \quad \text{and} \quad \mathrm{Var}(T) = \lambda^2,$$
respectively. A key property of exponential r.v.'s is that they possess the "memoryless property," i.e.,
$$\Pr(T > s + t \mid T > s) = \Pr(T > t), \quad \text{for all } s, t > 0.$$
Another useful property of exponential r.v.'s is that they remain exponential when multiplied by a positive constant. The parameter $\lambda$ is the mean of an exponential r.v., but it is also the reciprocal of its (constant) hazard function $h(t)$. The constancy of $h(t)$ means that the propensity of a unit that has survived up to time t to fail right after t does not change over time.

1.2.4.4 Weibull Distribution

A r.v. T has a Weibull distribution $\mathcal{W}(\lambda, k)$ with shape parameter k and scale parameter $\lambda$ if its pdf and cdf are given by
$$f(t) = \frac{k}{\lambda}\left(\frac{t}{\lambda}\right)^{k-1} e^{-(t/\lambda)^k}, \quad \text{for } t > 0,\ k > 0 \text{ and } \lambda > 0,$$
and
$$F(t) = 1 - e^{-(t/\lambda)^k}, \quad \text{for } t > 0,\ k > 0 \text{ and } \lambda > 0,$$
respectively. The mean and variance of a $\mathcal{W}(\lambda, k)$ r.v. T are
$$E(T) = \lambda\,\Gamma\!\left(1 + \frac{1}{k}\right) \quad \text{and} \quad \mathrm{Var}(T) = \lambda^2\left[\Gamma\!\left(1 + \frac{2}{k}\right) - \Gamma^2\!\left(1 + \frac{1}{k}\right)\right],$$
respectively, where $\Gamma(a) = \int_0^{\infty} y^{a-1} e^{-y}\,dy$ is the complete gamma function. When $k = 1$, the Weibull distribution reduces to an exponential $\mathcal{E}(\lambda)$ distribution. The hazard and cumulative hazard functions of $\mathcal{W}(\lambda, k)$ are given by
$$h(t) = \frac{k}{\lambda}\left(\frac{t}{\lambda}\right)^{k-1} \quad \text{and} \quad H(t) = \left(\frac{t}{\lambda}\right)^k, \quad \text{for } t > 0,\ k > 0 \text{ and } \lambda > 0,$$

respectively. The parameter k plays a relevant role in determining the propensity to fail of a Weibull distributed unit that has survived up to time t. For $k < 1$, the hazard function decreases over time, modeling the case of a sample of items where the defective ones fail early ("infant mortality"). For $k > 1$, the hazard function is increasing, modeling the case of a sample of items subject to aging ("obsolescence"). For $k = 1$, the Weibull distribution becomes an exponential one, with constant hazard rate and constant propensity to fail over time.

An interesting property of the Weibull distribution is that its Weibull plot
$$\{(\ln t, \ln[-\ln S(t)]) : t > 0\} = \{(\ln t, k\ln t - k\ln\lambda) : t > 0\}$$
is linear. As such, given a sample, the Weibull probability plot is a practical tool used to visually assess whether the Weibull model provides a good fit to the data. The sample is made of i.i.d. r.v.'s $T_1, \ldots, T_n$ with cdf $F(x)$. The empirical cdf is defined as
$$\hat{F}_n(t) = \frac{1}{n}\sum_{i=1}^{n} 1_{T_i \le t},$$
where $1_A$ is the indicator function of the set A. Data are drawn in the Weibull plot, whose axes are $\ln t$ and $\ln[-\ln(1 - \hat{F}_n(t))]$. Should the data come from a Weibull distribution, then the Weibull plot should be approximately linear.

Another interesting property of the Weibull distribution is that it is a minimum extreme value (MEV) stable law. In particular, from Sect. 1.2.2.7, for the minimum of n i.i.d. observations, $X_{1:n} = \min\{X_1, \ldots, X_n\}$, from a $\mathcal{W}(\lambda, k)$,
$$\Pr(X_{1:n} > x) = [S(x)]^n = e^{-n(x/\lambda)^k} = e^{-(n^{1/k}x/\lambda)^k},$$
which is a $\mathcal{W}(n^{-1/k}\lambda, k)$. That is, the distribution of $X_{1:n}$ is still Weibull, and it is stable since $n^{1/k}X_{1:n}$ follows $\mathcal{W}(\lambda, k)$. In addition, it is an MEV stable law. That is, let $X_1, \ldots, X_n$ be i.i.d. from a distribution F with survival function $S_F(x)$ and cumulative hazard function $H_F(x)$, where $S_F(x) = \exp\{-H_F(x)\}$; then, a necessary and sufficient condition for the distribution of $n^{1/k}X_{1:n}$ to converge to $\mathcal{W}(\lambda, k)$ is that $H_F(x) \cong (x/\lambda)^k$ (or, equivalently, that $F(x) \cong (x/\lambda)^k$) for $x \approx 0$.
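The construction of the Weibull plot just described is easy to code (a sketch, not from the book; the simulated breakdown-time data and the plotting positions i/(n+1) are assumptions made for illustration): plotting ln t against ln[-ln(1 - F_hat(t))] should give roughly a straight line with slope k when the data are Weibull.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated "time-to-breakdown" sample, assumed Weibull(k=2, lambda=5) for illustration.
k_true, lam_true, n = 2.0, 5.0, 100
t = lam_true * rng.weibull(k_true, n)

# Weibull-plot coordinates: x = ln t, y = ln(-ln(1 - F_hat)), using the
# plotting positions F_hat(t_(i)) = i / (n + 1) so that F_hat never reaches 1.
t_sorted = np.sort(t)
F_hat = np.arange(1, n + 1) / (n + 1)
x = np.log(t_sorted)
y = np.log(-np.log(1.0 - F_hat))

# If the Weibull model holds, y ~ k * x - k * ln(lambda): a straight line.
slope, intercept = np.polyfit(x, y, 1)
print("estimated shape k ~", slope)
print("estimated scale lambda ~", np.exp(-intercept / slope))

# import matplotlib.pyplot as plt; plt.plot(x, y, "o"); plt.show()  # optional visual check
```

Curvature in such plots, rather than linearity, is exactly the kind of lack of fit that the book later interprets as a size effect (Sect. 11.2.1 and Appendix A.1).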


1.2.4.5 Other Log-Location-Scale Distributions

Lognormal Distribution

A r.v. T has a lognormal distribution $\mathcal{LN}(\mu, \sigma)$ with scale parameter $\exp(\mu) > 0$ and shape parameter $\sigma > 0$ if its pdf is given by
$$f(t) = \frac{1}{t\sigma\sqrt{2\pi}}\,e^{-\frac{1}{2}\left(\frac{\ln t - \mu}{\sigma}\right)^2},$$
for $t > 0$. The mean and variance of a lognormal r.v. T are
$$E(T) = e^{\mu + \frac{\sigma^2}{2}} \quad \text{and} \quad \mathrm{Var}(T) = \left(e^{\sigma^2} - 1\right)e^{2\mu + \sigma^2},$$
respectively. If the r.v. T is $\mathcal{LN}(\mu, \sigma)$ distributed, then $X = \ln T$ has a normal distribution with mean $\mu$ and standard deviation $\sigma$.

Log-logistic Distribution

A r.v. T has a log-logistic distribution $\mathcal{LL}(\alpha, \beta)$ with scale parameter $\alpha > 0$ and shape parameter $\beta > 0$ if its pdf is given by
$$f(t) = \frac{(\beta/\alpha)(t/\alpha)^{\beta-1}}{\left[1 + (t/\alpha)^{\beta}\right]^2},$$
for $t > 0$, $\alpha > 0$, and $\beta > 0$, with the corresponding cdf
$$F(t) = \frac{1}{1 + (t/\alpha)^{-\beta}}.$$
The mean of the log-logistic r.v. T exists only for $\beta > 1$, and it is equal to
$$E(T) = \frac{\alpha\pi/\beta}{\sin(\pi/\beta)}.$$
The variance exists only for $\beta > 2$, and it is given by
$$\mathrm{Var}(T) = \alpha^2\left[\frac{2(\pi/\beta)}{\sin(2\pi/\beta)} - \frac{(\pi/\beta)^2}{\sin^2(\pi/\beta)}\right].$$
The quantile function is given by
$$Q(p) = \alpha\left(\frac{p}{1-p}\right)^{1/\beta},$$
for $0 \le p \le 1$.

1.2.4.6 Other Lifetime Distributions

Gamma Distribution

A r.v. T has a gamma distribution $\mathcal{G}(\alpha, \beta)$ with shape parameter $\alpha$ and rate $\beta$ if its pdf is given by
$$f(t) = \frac{\beta^{\alpha}}{\Gamma(\alpha)}\,t^{\alpha-1}e^{-\beta t},$$
for $t > 0$, $\alpha > 0$, and $\beta > 0$. The mean and variance of a $\mathcal{G}(\alpha, \beta)$ r.v. T are
$$E(T) = \frac{\alpha}{\beta} \quad \text{and} \quad \mathrm{Var}(T) = \frac{\alpha}{\beta^2},$$
respectively. Observe that the gamma distribution becomes an exponential distribution $\mathcal{E}(1/\beta)$ when $\alpha = 1$. An additive property holds: if $T_i$ is gamma distributed with shape parameter $\alpha_i$ and the same rate $\beta$, $i = 1, 2, \ldots, n$, and if these r.v.'s are mutually independent, then $\sum_{i=1}^{n} T_i$ is a gamma variate with shape parameter $\sum_{i=1}^{n} \alpha_i$ and rate $\beta$. As a consequence, the sum of n i.i.d. $\mathcal{E}(\lambda)$ r.v.'s is a gamma distributed $\mathcal{G}(n, 1/\lambda)$ r.v.

Inverse Gaussian Distribution

A r.v. T has an inverse Gaussian distribution $\mathcal{IG}(\mu, \lambda)$ with mean $\mu$ and shape parameter $\lambda$ if its pdf is given by
$$f(t) = \sqrt{\frac{\lambda}{2\pi t^3}}\,\exp\left\{-\frac{\lambda(t-\mu)^2}{2\mu^2 t}\right\},$$
for $t > 0$, $\mu > 0$, and $\lambda > 0$. The mean and variance of an $\mathcal{IG}(\mu, \lambda)$ r.v. T are
$$E(T) = \mu \quad \text{and} \quad \mathrm{Var}(T) = \frac{\mu^3}{\lambda},$$
respectively.

Birnbaum–Saunders Distribution

A r.v. T has a Birnbaum–Saunders distribution $\mathcal{BS}(\beta, \gamma)$ with scale parameter $\beta$ and shape parameter $\gamma$ if its pdf is given by
$$f(t) = \frac{\sqrt{t/\beta} + \sqrt{\beta/t}}{2\gamma t}\,\phi\!\left(\frac{\sqrt{t/\beta} - \sqrt{\beta/t}}{\gamma}\right),$$
for $t > 0$, $\beta > 0$, and $\gamma > 0$, with the corresponding cdf
$$F(t) = \Phi\!\left(\frac{\sqrt{t/\beta} - \sqrt{\beta/t}}{\gamma}\right).$$
The mean and variance of a $\mathcal{BS}(\beta, \gamma)$ r.v. T are
$$E(T) = \beta\left(1 + \frac{\gamma^2}{2}\right) \quad \text{and} \quad \mathrm{Var}(T) = (\gamma\beta)^2\left(1 + \frac{5\gamma^2}{4}\right),$$
respectively. The Birnbaum–Saunders distribution is based on a physical argument of cumulative damage that produces fatigue in materials; it originates from renewal theory, via an idealization of the number of cycles necessary to force a fatigue crack to grow past a threshold.
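Chapter 8 fits breakdown data with several of the lifetime distributions above; a minimal sketch of that kind of comparison is given here (not from the book; the simulated data and the use of scipy's built-in maximum likelihood fitting are assumptions), comparing maximized log-likelihoods of Weibull, lognormal, and gamma fits.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Simulated lifetimes, assumed Weibull(k=1.5, scale=10) purely for illustration.
data = 10.0 * rng.weibull(1.5, 80)

candidates = {
    "Weibull":   stats.weibull_min,
    "lognormal": stats.lognorm,
    "gamma":     stats.gamma,
}

for name, dist in candidates.items():
    # floc=0 keeps the fitted support on (0, infinity), as appropriate for lifetimes.
    params = dist.fit(data, floc=0)
    loglik = np.sum(dist.logpdf(data, *params))
    print(f"{name:9s}  max log-likelihood = {loglik:8.2f}  params = {np.round(params, 3)}")
```

Comparing maximized log-likelihoods (or information criteria such as AIC/BIC, Sect. 1.2.7) across candidate models is the basic idea behind the distributional comparisons carried out later for time-to-breakdown data.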

Beta and Dirichlet Distributions

A r.v. X has a beta distribution $\mathcal{B}(\alpha, \beta)$ over the interval $(0, 1)$ if its pdf is given by
$$f(x) = \frac{\Gamma(\alpha + \beta)}{\Gamma(\alpha)\Gamma(\beta)}\,x^{\alpha-1}(1-x)^{\beta-1}, \quad \text{for } 0 < x < 1,\ \alpha > 0,\ \beta > 0,$$
where $\Gamma(\cdot)$ is the gamma function. The mean and variance of a $\mathcal{B}(\alpha, \beta)$ r.v. X are
$$E(X) = \frac{\alpha}{\alpha + \beta} \quad \text{and} \quad \mathrm{Var}(X) = \frac{\alpha\beta}{(\alpha + \beta)^2(\alpha + \beta + 1)}.$$
A multivariate extension is provided by the Dirichlet distribution $\mathcal{D}(\alpha_1, \alpha_2, \ldots, \alpha_n)$, with $\alpha_i > 0$, $i = 1, 2, \ldots, n$. Its pdf is given by
$$f(\mathbf{x}) = \frac{\Gamma\left(\sum_{i=1}^{n}\alpha_i\right)}{\prod_{i=1}^{n}\Gamma(\alpha_i)}\prod_{i=1}^{n} x_i^{\alpha_i - 1}, \quad \text{for } 0 < x_i < 1,\ i = 1, 2, \ldots, n, \text{ and } \sum_{i=1}^{n} x_i = 1.$$
If the r.v. $\mathbf{X} = (X_1, X_2, \ldots, X_n)$ has a $\mathcal{D}(\alpha_1, \alpha_2, \ldots, \alpha_n)$ distribution, then it follows that
$$E(X_i) = \frac{\alpha_i}{\alpha} \quad \text{and} \quad \mathrm{Var}(X_i) = \frac{\alpha_i(\alpha - \alpha_i)}{\alpha^2(\alpha + 1)}, \quad \text{for } i = 1, 2, \ldots, n, \text{ where } \alpha = \sum_{i=1}^{n}\alpha_i.$$
Furthermore, the covariance is always negative and given by
$$\mathrm{Cov}(X_i, X_j) = \frac{-\alpha_i\alpha_j}{\alpha^2(\alpha + 1)}, \quad \text{for any } i \ne j.$$

1.2.5 Likelihood Inference

1.2.5.1 Likelihood and Fisher Information Matrices

If $x_1, x_2, \ldots, x_n$ are the values of a sample from a population with density function dependent on the parameter $\theta$, the likelihood function of the sample is given by
$$L(\theta) = f(x_1, x_2, \ldots, x_n; \theta)$$
for the values of $\theta$ within a given domain. Here $f(x_1, x_2, \ldots, x_n; \theta)$ is the value of the joint probability distribution or the joint probability density of the r.v.'s $X_1, \ldots, X_n$ at $X_1 = x_1, \ldots, X_n = x_n$. As an example, consider a binomial sample, like the flip of a coin three times, when the probability of getting a head in a toss is $p \in (0, 1)$. Let X be the number of heads appearing in these three tosses. Based on this sample, the likelihood function of p is
$$L(p) = \binom{3}{x} p^x (1-p)^{3-x}.$$
Although the joint pdf and the likelihood function look the same, they have different interpretations. The former is a function of the sample and denotes the probability of obtaining the observations $(x_1, x_2, \ldots, x_n)$ given $\theta$. The latter, which is solely a function of the parameter, denotes how likely the value of $\theta$ is if the observations $(x_1, x_2, \ldots, x_n)$ are obtained. Such an interpretation of the likelihood function leads to the most popular method, in classical statistics, to estimate the parameter $\theta$: the maximum likelihood estimator (MLE) of $\theta$ is the value $\hat{\theta}$ that maximizes $L(\theta)$.


Consider now n independent, but not necessarily identically distributed, observations. Each one of them gives a multiplicative contribution $L_i(\theta)$, $i = 1, \ldots, n$, to the likelihood
$$L(\theta) = \prod_{i=1}^{n} L_i(\theta).$$
If we consider the logarithms $l_i(\theta) = \ln L_i(\theta)$, we get the log-likelihood
$$l(\theta) = \sum_{i=1}^{n} l_i(\theta).$$
Given that the logarithm is an increasing function, the MLE $\hat{\theta}$ of $\theta$ can be obtained by maximizing either the likelihood or the log-likelihood (often the latter leads to simpler computations). We now consider the more general case of a multivariate parameter $\theta$ and the corresponding MLE $\hat{\theta}$. We define the Fisher information matrix $\mathbf{I}_{\theta}$ for $\theta$ as
$$\mathbf{I}_{\theta} = E\left[-\frac{\partial^2 l(\theta)}{\partial\theta\,\partial\theta^T}\right] = \sum_{i=1}^{n} E\left[-\frac{\partial^2 l_i(\theta)}{\partial\theta\,\partial\theta^T}\right].$$
The Fisher information matrix approximately quantifies the amount of information that we expect to get from our future data. Whereas $\mathbf{I}_{\theta}$ denotes the expected information matrix, it is useful and practical to introduce the observed (local) information matrix, given by
$$\hat{\mathbf{I}}_{\theta} = -\left.\frac{\partial^2 l(\theta)}{\partial\theta\,\partial\theta^T}\right|_{\theta=\hat{\theta}} = \sum_{i=1}^{n}\left(-\left.\frac{\partial^2 l_i(\theta)}{\partial\theta\,\partial\theta^T}\right|_{\theta=\hat{\theta}}\right).$$
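A compact numerical illustration of these quantities (a sketch, not from the book; the exponential model and the simulated data are assumptions): for an i.i.d. exponential sample with mean $\lambda$, the log-likelihood is $l(\lambda) = -n\ln\lambda - \sum t_i/\lambda$, the MLE is the sample mean, and the observed information $-l''(\hat{\lambda})$ equals $n/\hat{\lambda}^2$.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(3)
t = rng.exponential(2.0, 50)          # assumed data: E(lambda) with lambda = 2
n = len(t)

def neg_loglik(lam):
    # -l(lambda) = n ln(lambda) + sum(t_i) / lambda  (exponential with mean lambda)
    return n * np.log(lam) + np.sum(t) / lam

res = minimize_scalar(neg_loglik, bounds=(1e-6, 100.0), method="bounded")
lam_hat = res.x
print("MLE lambda_hat (numerical):", lam_hat, " sample mean:", t.mean())

# Observed information via a central finite difference of the negative log-likelihood.
eps = 1e-4
obs_info = (neg_loglik(lam_hat + eps) - 2 * neg_loglik(lam_hat)
            + neg_loglik(lam_hat - eps)) / eps**2
print("observed information (numeric):", obs_info,
      " closed form n/lambda_hat^2:", n / lam_hat**2)
```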

1.2.5.2 General Maximum Likelihood Theory

Under regularity conditions, it can be shown that the limiting distribution of the MLE $\hat{\theta}$ is approximately multivariate normal with mean $\theta$ and variance–covariance matrix $\Sigma_{\hat{\theta}} = \mathbf{I}_{\theta}^{-1}$. In general, one is interested in inferences on functions of $\theta$, say $g(\theta)$. The MLE of $g(\theta)$ is $\hat{g} = g(\hat{\theta})$. Under regularity conditions, the limiting distribution of $g(\hat{\theta})$ is approximately multivariate normal with mean $g(\theta)$ and variance–covariance matrix
$$\Sigma_{\hat{g}} = \left[\frac{\partial g(\theta)}{\partial\theta}\right]^T \Sigma_{\hat{\theta}}\left[\frac{\partial g(\theta)}{\partial\theta}\right].$$


The results on asymptotics are useful to build approximate confidence regions (or intervals). An approximate 100(1 − α)% confidence region for θ is the set of all values of θ in the ellipsoid

(θ̂ - θ)^T \hat{\Sigma}_{\hat{θ}}^{-1} (θ̂ - θ) \le \chi^2_{(1-α, r)},

where r is the length of θ and χ²_{(1−α, r)} denotes the quantile of order 1 − α of a χ² distribution with r degrees of freedom. This is sometimes called the "Wald method" or "normal-approximation method." More generally, let g(θ) be a vector function of θ. An approximate 100(1 − α)% confidence region for an r_1-dimensional subset g_1 = g_1(θ) is the set of all values of g_1 in the ellipsoid

(ĝ_1 - g_1)^T \hat{\Sigma}_{\hat{g}_1}^{-1} (ĝ_1 - g_1) \le \chi^2_{(1-α, r_1)}.

When r_1 = 1, so that g_1 = g_1(θ) is a scalar function of θ, an approximate 100(1 − α)% normal-approximation confidence interval is obtained from

ĝ_1 \pm z_{1-α/2} \sqrt{\widehat{Var}[g_1(θ̂)]}.
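As a small illustration of the normal-approximation (Wald) interval, the following sketch (Python, hypothetical exponential data, not from the book) applies the delta method to the scalar function g(λ) = 1/λ, the mean lifetime.

    # Wald interval for g(lambda) = 1/lambda via the delta method.
    import numpy as np
    from scipy.stats import norm

    t = np.array([1.2, 0.7, 2.5, 1.9, 0.4])   # hypothetical lifetimes
    n = t.size
    lam_hat = n / t.sum()                      # MLE of the rate
    var_lam = lam_hat**2 / n                   # inverse observed information

    g_hat = 1.0 / lam_hat                      # MLE of the mean lifetime
    dg = -1.0 / lam_hat**2                     # derivative of g at lam_hat
    var_g = dg**2 * var_lam                    # delta-method variance

    z = norm.ppf(0.975)                        # 95% normal quantile
    print(g_hat - z * np.sqrt(var_g), g_hat + z * np.sqrt(var_g))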

1.2.6 Statistical Inference

The results and notions introduced earlier allow one to make inferences on some characteristics of a population when a sample is collected from it. For example, consider the number of defective items in a batch or the lifetime of a light bulb. In both cases, a sample (of batches or light bulbs) should be collected, and the number of defective items in each batch and the lifetime of each light bulb should be recorded. The random phenomenon should be modeled in the first case with a binomial distribution (for a known, "small" batch size) or approximated by a Poisson distribution (for a "large" batch size). In the second case, an exponential distribution (or its extensions, such as the gamma and Weibull distributions) could be considered. The ultimate goal of the statistical inference would be to learn about the parameter p of the binomial distribution and the parameter λ of the exponential distribution. In this book, we present a classical (or frequentist) approach, without covering the Bayesian one (except for a discussion of a Bayesian nonparametric analysis in Sect. 7.2), which makes use not only of the results of statistical experiments but also of the available knowledge on the random phenomenon. In a classical framework, a point estimate of the parameter is usually obtained considering the MLE. Point estimates are often combined with (or replaced by) confidence intervals (or regions), i.e., intervals of values that may contain the


unknown parameter with some degree of confidence. In general, the confidence intervals for any parameter are built around the point estimate, and the size of the interval depends on the sampling error and a confidence coefficient. While the sampling error is related to the variability within the sample, the confidence level, C, gives the probability that the random interval

estimate ± (confidence coefficient × sampling error)

captures the true parameter value in repeated samples. Note that it is not the probability that any one specific interval calculated from a random sample captures the true parameter. Users can choose the confidence level, usually 90% or higher, because they want to be quite sure of their conclusions. The most common confidence level is 95%.

The other pillar of statistical inference is hypothesis testing, in which one makes a claim about a population parameter (hypothesis) and tests this claim by using sample information. First of all, one has to identify the statement to be tested; such a statement is called the null hypothesis (H_0). This is assumed "true" and compared to the data to see if there is evidence against it. Typically, H_0 is a statement of "no difference" or "no effect," and the experimenter, in general, would like to reject it, or show that it has to be rejected. The alternative hypothesis (H_a) is the statement about the population parameter that one hopes or suspects is true, being interested to see if the data support this hypothesis. The test depends on the choice of a significance level (typically 0.1, 0.05, or 0.01), the probability threshold below which a difference is considered significant. After identifying the sampling distribution and its test statistic, and collecting the data to calculate the latter, the p-value of the test is computed, and the statistical decision is made. Observe that the p-value is the probability, under the null hypothesis, of obtaining a value of the test statistic at least as extreme as the one observed.

1.2.7 Model Selection Criteria

As stated by George E. P. Box, "All models are wrong, but some are useful." Thus, statisticians need methods to select among different models, looking at their physical and mathematical properties, their ability to generate the actual sample, parsimony in the number of parameters, and forecasting performance. Here, we take for granted that the proposed models are compatible with the nature of the random phenomenon and that they are tractable from a mathematical viewpoint. We will present two criteria to choose a parsimonious model (not the "right" one but the most useful among the entertained ones) that best fits the data. The first criterion is the Akaike information criterion (AIC), which provides a trade-off between simplicity of the model and goodness-of-fit (GOF). The former is represented by the number k of parameters involved, whereas the latter is provided by L(θ̂), i.e., the maximum value of the likelihood function, obtained for the MLE


θ̂. Thus, the AIC is given by

AIC = 2k - 2 \ln L(θ̂).

Given a set of competing models, this criterion leads one to choose the one with the smallest AIC. The other criterion is the Bayesian information criterion (BIC), which is very similar to the AIC but penalizes the complexity of the model more heavily. In fact, the BIC is defined as

BIC = k \ln n - 2 \ln L(θ̂),

where n is the number of observations, or sample size. As for the AIC, in a family of models, one chooses the one with the lowest BIC.
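The following sketch (Python, hypothetical data, not from the book) computes AIC and BIC for an exponential fit (k = 1) and a Weibull fit (k = 2) to the same sample; the model with the smaller value of each criterion would be preferred.

    # AIC and BIC for two competing lifetime models fitted by maximum likelihood.
    import numpy as np
    from scipy.stats import expon, weibull_min

    t = np.array([1.2, 0.7, 2.5, 1.9, 0.4, 3.1, 1.1, 0.9])   # hypothetical lifetimes
    n = t.size

    # Exponential fit (k = 1 free parameter: the scale)
    loc_e, scale_e = expon.fit(t, floc=0)
    ll_e = expon.logpdf(t, loc=0, scale=scale_e).sum()

    # Weibull fit (k = 2 free parameters: shape and scale)
    c_w, loc_w, scale_w = weibull_min.fit(t, floc=0)
    ll_w = weibull_min.logpdf(t, c_w, loc=0, scale=scale_w).sum()

    for name, k, ll in [("exponential", 1, ll_e), ("Weibull", 2, ll_w)]:
        aic = 2 * k - 2 * ll
        bic = k * np.log(n) - 2 * ll
        print(name, aic, bic)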

1.2.8 Regression

1.2.8.1 Simple Regression Analysis

A key issue in statistics is the study of the relationship between variables in order to forecast one of them (the response variable Y) based on the related behavior of the others (predictors or covariates X). The relationship may be expressed as a mathematical equation. The simplest case is when there is a linear relationship between two variables Y and X, so that the equation of a straight line may be used as the mathematical equation of the relationship. Given a sample (Y_i, X_i), i = 1, ..., n, the relationship can be specified as

Y_i = β_0 + β_1 X_i + ε_i, \quad i = 1, \ldots, n,

where ε_i is the error, while β_0 (intercept) and β_1 (slope) are parameters whose values are unknown and hence must be estimated. Using the ordinary least squares method and minimizing the sum of squared errors

S = \sum_{i=1}^{n} e_i^2 = \sum_{i=1}^{n} (Y_i - b_0 - b_1 X_i)^2,

the parameters are estimated as

b_1 = \frac{S_{xy}}{S_{xx}} = \frac{\sum_{i=1}^{n} (X_i - \bar{X})(Y_i - \bar{Y})}{\sum_{i=1}^{n} (X_i - \bar{X})^2},


b_0 = \bar{Y} - b_1 \bar{X}.

One way to evaluate the regression model is to test the hypothesis that the population slope β_1 equals 0 (indicating no linear relationship):

H_0: β_1 = 0 \quad \text{vs.} \quad H_a: β_1 \neq 0.

The appropriate statistical test is a t-test:

t = \frac{b_1 - 0}{s.e.(b_1)},

where s.e.(b_1) is the standard error of the slope, given by

s.e.(b_1) = \sqrt{\frac{\sum_{i=1}^{n} e_i^2 / (n - 2)}{S_{xx}}}.

Under H_0, the above t-statistic has a t-distribution with n − 2 degrees of freedom.
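A compact numerical sketch of these formulas (Python, hypothetical data, not from the book): it computes b_0, b_1, the standard error of the slope, the t-statistic, and the two-sided p-value from the t-distribution with n − 2 degrees of freedom.

    # Ordinary least squares estimates and the t-test for H0: beta1 = 0.
    import numpy as np
    from scipy.stats import t as t_dist

    X = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
    Y = np.array([2.1, 2.9, 3.8, 5.2, 5.9, 7.1])   # hypothetical responses
    n = X.size

    Sxx = np.sum((X - X.mean())**2)
    Sxy = np.sum((X - X.mean()) * (Y - Y.mean()))
    b1 = Sxy / Sxx
    b0 = Y.mean() - b1 * X.mean()

    resid = Y - b0 - b1 * X
    se_b1 = np.sqrt(np.sum(resid**2) / (n - 2) / Sxx)
    t_stat = (b1 - 0.0) / se_b1
    p_value = 2 * t_dist.sf(abs(t_stat), df=n - 2)  # two-sided p-value
    print(b0, b1, t_stat, p_value)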

1.2.8.2 Parametric Lifetime Regression Models (Weibull Regression, Exponential Regression)

The regression models considered so far have been linear in the parameters β. A natural extension is to nonlinear regression models, defined as Y = f(X, β) + ε, where Y is the response variable, X is the vector of predictors (covariates), β is the parameter vector, f is some known regression function, and ε is an error term whose distribution may or may not be normal. One example of a nonlinear model is the exponential regression model,

Y = γ_0 + γ_1 \exp\{β^T X\} + ε,

where ε has a normal distribution with mean 0 and variance σ², and γ_0 and γ_1 are further parameters.

Another example, used in this book, is the Weibull regression model, which considers the influence of covariates on the survival function S(t) = e^{-(t/λ)^k} and the hazard function h(t) = (k/λ)(t/λ)^{k-1}. In this case, the covariates are introduced via the parameter λ, so that in a sample of n subjects it becomes λ_i = exp{β^T X_i} for the i-th subject. As a consequence, the corresponding survival function is

S_i(t) = e^{-(t \exp\{-β^T X_i\})^k},

while the hazard function is

h_i(t) = k t^{k-1} \exp\{-k(β^T X_i)\}.

An important class of survival models is the class of proportional hazards models, in which a unit increase of a covariate has a multiplicative effect on the hazard rate. The Weibull regression model is a parametric proportional hazards model, and the proportionality is evident when comparing the hazard functions of two subjects, since it holds that

\frac{h_i(t)}{h_j(t)} = \exp\{-k β^T (X_i - X_j)\}.
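The following sketch (Python, with hypothetical values of β, k, and the covariates) evaluates the Weibull-regression survival and hazard functions for one subject and the time-free hazard ratio of two subjects, using λ_i = exp{β^T X_i} as above.

    # Weibull regression quantities for hypothetical parameter values.
    import numpy as np

    beta = np.array([0.5, -0.3])          # hypothetical regression coefficients
    k = 1.5                               # hypothetical Weibull shape parameter
    X_i = np.array([1.0, 2.0])            # covariates of subject i
    X_j = np.array([0.0, 1.0])            # covariates of subject j

    lam_i = np.exp(beta @ X_i)
    t = 2.0                               # a time point at which to evaluate

    S_i = np.exp(-(t / lam_i)**k)                       # survival function S_i(t)
    h_i = (k / lam_i) * (t / lam_i)**(k - 1)            # hazard function h_i(t)
    hazard_ratio = np.exp(-k * beta @ (X_i - X_j))      # h_i(t) / h_j(t), free of t
    print(S_i, h_i, hazard_ratio)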

1.2.8.3 Semiparametric Regression Model (Cox Proportional Hazards Model)

The Weibull regression model is an example of a parametric proportional hazards model. Its parametric structure eases computations at the cost of reduced flexibility. This drawback can be overcome by considering semiparametric models. The Cox proportional hazards model is the most relevant one among them; it models the hazard functions for n subjects through two multiplicative components: a baseline function h_0(t) common to all subjects and a function exp{β^T X} depending on the values taken by the covariates X for each subject. The Cox proportional hazards model is a semiparametric model because it makes no assumptions about the form of h_0(t) (the nonparametric part of the model), while it assumes a parametric form for the effect of the predictors on the hazard. As a consequence, the hazard function for the i-th subject has the form

h_i(t) = h_0(t) \exp\{β^T X_i\},

where X_i are the covariates of that subject and β are the parameters. The proportionality is evident when comparing the hazard functions of two subjects, since it holds that

\frac{h_i(t)}{h_j(t)} = \exp\{β^T (X_i - X_j)\}.

To estimate the model parameter β, Cox (1972) proposed the partial likelihood approach. Specifically, let τ_i be the lifetime of the i-th item; the risk set at time t_j is denoted by

R(t_j) = \{ i : τ_i \ge t_j \}.

The conditional probability that subject j fails at t_j, given that exactly one individual from the risk set R(t_j) fails at t_j, is simply


\frac{h_j(t)}{\sum_{i \in R(t_j)} h_i(t)} = \frac{\exp\{β^T X_j\}}{\sum_{i \in R(t_j)} \exp\{β^T X_i\}}.

The overall partial likelihood is

L(β) = \prod_{j=1}^{n} \left[ \frac{\exp\{β^T X_j\}}{\sum_{i \in R(t_j)} \exp\{β^T X_i\}} \right]^{δ_j},

where δ_j denotes the censoring indicator. The maximum (partial) likelihood estimate of β can be obtained by maximizing the partial log-likelihood function, ln L(β), with respect to β.
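The following sketch (Python, hypothetical data, not from the book) evaluates the Cox partial log-likelihood for a single covariate at a trial value of β; a real analysis would maximize this function numerically, for example with scipy.optimize.

    # Partial log-likelihood of the Cox model for one covariate (hypothetical data).
    import numpy as np

    time  = np.array([2.0, 3.0, 5.0, 7.0, 8.0])   # observed times, assumed distinct
    delta = np.array([1,   0,   1,   1,   0  ])   # 1 = failure, 0 = censored
    x     = np.array([0.5, 1.2, -0.3, 0.8, 0.0])  # a single covariate

    def partial_log_lik(beta):
        ll = 0.0
        for j in range(time.size):
            if delta[j] == 1:
                risk = time >= time[j]            # risk set R(t_j)
                ll += beta * x[j] - np.log(np.sum(np.exp(beta * x[risk])))
        return ll

    print(partial_log_lik(0.0), partial_log_lik(0.5))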

1.2.9 Censoring

Consider a nonnegative, continuous r.v. T denoting the survival time of an item until its failure or of an individual until his/her death. Sometimes it is not possible to observe the process until the failure/death occurs: censoring is a source of difficulty in statistical analysis since it provides incomplete information about T. Censoring might be due to different causes, such as loss to follow-up, early termination of the study, death from other causes, or a very reliable unit. There are different kinds of censoring:

• Right-censoring: A survival time is said to be right-censored if the event of interest occurs after the subject is observed in the study.
• Left-censoring: A survival time is said to be left-censored if the event of interest occurs before the subject is observed in the study.

In some studies, it may happen that some survival times are left-censored and some are right-censored. In this case, we say that the sampling scheme of the study is doubly censored. Censoring could be due to early stopping of an experiment in which many items are tested until their failure or, in general, until some event occurs (e.g., death, recovery, infection). An experiment could be stopped at a pre-specified time c, so that failures of items surviving beyond c cannot be observed. This is known as Type-I censoring. In this case, the number of failures being observed is a r.v., while the experimental time is fixed. There is also a generalized Type-I censoring due to subjects entering the study at different time points, with the study terminated at a pre-specified time c. The determination of the censoring time c is a critical aspect. If c is large, the expense of the experiment is large. If c is small, it might turn out that only a small


portion of the sample can be observed. To avoid these situations, one might terminate the experiment after the first r failures have been observed. This is known as Type-II censoring. In this case, the number of failures being observed is fixed, while the experimental time is a r.v. In some situations, the lifetime is known to occur only within an interval; in this case, we have interval censoring. In a typical cancer research study involving n individuals, the i-th subject might drop out of the study for many reasons, e.g., death from other causes. For such a subject, there is not only the survival time T_i but also the censoring time C_i, a r.v. in this case. Thus, the observed data would be min(T_i, C_i). An important assumption, in general, is that C_i and T_i are independent r.v.'s, i.e., the reason for observing a censored observation is completely unrelated to the disease process of interest. The survival data are usually coded as (X_i, δ_i), i = 1, 2, ..., n, where X_i is the observed time and δ_i = 1 if the i-th observation is uncensored, and 0 otherwise.

The notion of censoring applies to a sample from a population, while the term truncation applies when considering the entire population. As an example, we talk about a left-censored sample when n integrated circuits are put on test and a burn-in process of 1000 h is carried out before any failures are recorded. A left-truncated sample is obtained when a sample of n integrated circuits is randomly drawn from a batch that has already been burned-in for 1000 h.

The different censoring mechanisms have a different impact on the statistical inferences, namely on the likelihood. Suppose the lifetime T is a continuous r.v. with cdf F(t) and pdf f(t), t > 0. We consider an experiment in which n subjects are observed in the time interval [a, b]. The contributions of the i-th subject, failing at time t_i, to the likelihood are the following (a numerical sketch follows the list):

• f(t_i), if a ≤ t_i ≤ b (exact observation).
• F(a), if t_i < a (left-censored observation).
• 1 − F(b), if t_i > b (right-censored observation).
• F(b) − F(a), if t_i is unknown except for a ≤ t_i ≤ b (interval-censored observation).
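A minimal sketch of these four contributions (Python, assuming a Weibull lifetime distribution and a hypothetical observation window [a, b]); it simply adds the appropriate log-contribution for each observation type.

    # Log-likelihood contributions under an observation window [a, b].
    import numpy as np
    from scipy.stats import weibull_min

    a, b = 0.5, 10.0                      # hypothetical observation window
    dist = weibull_min(c=1.5, scale=5.0)  # assumed lifetime distribution

    def log_contribution(ti, kind):
        if kind == "exact":               # a <= ti <= b, failure observed at ti
            return dist.logpdf(ti)        # ln f(ti)
        if kind == "left":                # ti < a
            return np.log(dist.cdf(a))    # ln F(a)
        if kind == "right":               # ti > b
            return np.log(dist.sf(b))     # ln(1 - F(b))
        if kind == "interval":            # only a <= ti <= b is known
            return np.log(dist.cdf(b) - dist.cdf(a))

    obs = [(3.2, "exact"), (None, "left"), (None, "right"), (None, "interval")]
    print(sum(log_contribution(ti, kind) for ti, kind in obs))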

1.2.10 Kaplan–Meier Estimator of cdf

The Kaplan–Meier (KM) estimator (Kaplan & Meier, 1958), also known as the product limit estimator, is a nonparametric approach to estimating the cdf from survival data with right-censoring. Based on the right-censored data described in Sect. 1.2.9, let 0 = t_0 < t_1 < t_2 < ... be the observed failure times, and define the following:

• n: the total sample size
• d_i: the number of failures at time t_i
• r_i: the number of right-censored observations at time t_i
• n_i: the size of the risk set at time t_i, i.e.,


n_i = n - \sum_{j=0}^{i-1} d_j - \sum_{j=0}^{i-1} r_j.

The KM nonparametric estimator of the cdf at time t_i, F(t_i), can be obtained as

\hat{F}(t_i) = 1 - \prod_{j=1}^{i} (1 - \hat{p}_j),

where \hat{p}_j = d_j / n_j.
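The following sketch (Python, hypothetical failure and censoring counts, not from the book) computes the KM estimate of the cdf step by step from the quantities defined above.

    # Kaplan-Meier estimate of the cdf from right-censored data (hypothetical counts).
    import numpy as np

    t = np.array([1.0, 2.0, 3.0, 5.0])   # distinct observed failure times t_1 < t_2 < ...
    d = np.array([1,   2,   1,   1  ])   # failures d_i at each t_i
    r = np.array([0,   1,   0,   2  ])   # right-censored observations r_i at each t_i
    n = 10                               # total sample size

    F_hat, surv = [], 1.0
    for i in range(t.size):
        n_i = n - d[:i].sum() - r[:i].sum()   # size of the risk set at t_i
        p_i = d[i] / n_i
        surv *= (1.0 - p_i)
        F_hat.append(1.0 - surv)

    print(list(zip(t, F_hat)))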

Part I

Physical Aspects of Fiber Bundle Models

In Chap. 2, we discuss plate capacitors (devices used to store electric charge), the electrical laws for series, parallel, and parallel–series circuits of ordinary capacitors, and the dynamic behavior of the charge and voltage distributions in a series circuit. In Chaps. 3 and 4, the physical aspects of the breakdown of thin-film dielectrics and cell models for this breakdown are presented, and the size effects for the load-sharing cell models are discussed.

Chapter 2

Electrical Circuits of Ordinary Capacitors

Here, we give a brief review regarding traditional circuits of ordinary capacitors where current is described as a flow in the classical theory of circuits. We distinguish this from later chapters regarding thin dielectrics that are solid-state electronic devices. For such devices, the flow analogy is only approximate because, at the nanoscale, quantum effects have to be taken into consideration. This is discussed in Chap. 3. The discussion of capacitors, resistors, and classic electric circuits that follows is based on Jones (1971). Plate type capacitors are discussed in Sect. 2.1, while in Sect. 2.2, the electrical laws for parallel and series circuits of ordinary capacitors and the behavior of the charge distribution on a series circuit are given.

2.1 Electrical Laws for Circuits of Capacitors

A capacitor is an electrical device that stores electric charge; equivalently, it stores electric potential energy. The capacitance of a plate-type capacitor, with plate area A, separation d between the plates, and filled with a vacuum (or air, approximately), is a structural constant of the device given by

C = \frac{ε_0 A}{d}.    (2.1)

Here, ε_0 is the electric permittivity of free space, 8.85418 × 10^{-12} coul²/(nt·m²). If we replace the vacuum (air) between the plates by a dielectric material with permittivity ε = κε_0, the capacitance will be larger by a factor κ (> 1), the relative permittivity (dielectric constant) of the dielectric material. Thus,

C = \frac{κ ε_0 A}{d}.    (2.2)



For silicon dioxide (SiO2), the dielectric constant is approximately 3.9. For silicon (Si), the dielectric constant is approximately 11.0–12.0, and that of hafnium dioxide (HfO2), a high-κ dielectric, is 25. Notice that, from Eqs. (2.1) and (2.2), part of the variation of the capacitance, C, is captured in the manufacturing variation of A and d of the capacitor. It is instructive to let A = A_n = nA_0 and d = d_m = md_0 in Eqs. (2.1) and (2.2), where m and n are positive integers and A_0 and d_0 are positive constants. Then, Eqs. (2.1) and (2.2) become

C_{n,m} \propto \frac{A_n}{d_m} = \frac{n A_0}{m d_0}.    (2.3)

This indicates that the capacitance increases in the cross-sectional area nA_0 and decreases in the thickness md_0. These relationships are analogues of the electrical circuit laws for capacitors, where the capacitance of a parallel system of capacitors is the sum of the individual components' capacitances, while for a series system, it is the harmonic sum. These circuit laws are discussed in the next section.

2.2 Conservation Laws for Series and Parallel Circuits

Below we state the capacitor laws for series and parallel circuits. The results are based on the capacitance law, which relates the capacitance, C, of a capacitor to the electric potential, or voltage V, across the capacitor and the charge Q on the capacitor, together with the conservation laws of energy/voltage and of charge/current.

The Capacitance Law

C = Q/V.    (2.4)

The capacitance is thus the amount by which the stored charge increases for a unit increase in the electric potential across the capacitor.

Remark The equivalent form of the law

V = C^{-1} Q    (2.5)

is analogous to Hooke's law, where C^{-1} is Young's modulus, the voltage is the force/stress, and the charge is the extension/strain.


2.2.1 Conservation Laws for Series and Parallel Circuits

In this and the next subsection, we just highlight the conservation laws relevant to capacitor circuits and their consequences. For this, C_i, Q_i, and V_i are, respectively, the capacitance of capacitor i, the charge on it, and the voltage across it, for i = 1, 2, ..., m, where C_i = Q_i/V_i. We use V_s, Q_s and V_p, Q_p to denote the respective voltages and charges of series circuits (subscript s) and parallel circuits (subscript p). In a series circuit, the voltage is the sum of the voltages across all the capacitors, whereas its charge equals the one on each capacitor.

Series Circuit Laws

Series Voltage Law    V_s = \sum_{i=1}^{m} V_i.    (2.6)

Series Charge Law    Q_s = Q_i for i = 1, 2, ..., m.    (2.7)

In a parallel circuit, the charge is the sum of the charges on all the capacitors, whereas its voltage equals the one across each capacitor.

Parallel Circuit Laws

Parallel Charge Law    Q_p = \sum_{i=1}^{m} Q_i.    (2.8)

Parallel Voltage Law    V_p = V_i for i = 1, 2, ..., m.    (2.9)

2.2.2 Consequences of the Conservation Laws: The Capacitor Laws

Series Capacitor Law

\sum_{i=1}^{m} \frac{1}{C_i} = \sum_{i=1}^{m} \frac{V_i}{Q_s} = \frac{V_s}{Q_s} \equiv \frac{1}{C_s}, \quad \text{i.e.,} \quad C_s = \left( \sum_{i=1}^{m} \frac{1}{C_i} \right)^{-1}.    (2.10)

Justification Identity (2.10) follows from (2.6) and (2.7) since C_i = Q_i/V_i = Q_s/V_i, and so,


\frac{1}{C_i} = \frac{V_i}{Q_s}.

Parallel Capacitor Law

\sum_{i=1}^{m} C_i = \sum_{i=1}^{m} \frac{Q_i}{V_p} = \frac{Q_p}{V_p} = C_p.    (2.11)

Justification Identity (2.11) follows from (2.8) and (2.9) since C_i = Q_i/V_i = Q_i/V_p.

Comments Consider a plate capacitor whose capacitance C_{1,1} is given by (2.3) with m = 1 = n. If we increase the thickness of the capacitor by a factor m, the capacitance is C_{1,m}, and this stacking of plate capacitors is equivalent to a series circuit of capacitors, each with capacitance C_{1,1}, by the series capacitor law (2.10). Similarly, by increasing the plate area by a factor n, the capacitance is C_{n,1}, and this increase in plate area is equivalent to a parallel circuit of capacitors, each with capacitance C_{1,1}, by the parallel capacitor law (2.11).
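A small sketch of the two capacitor laws (Python; the capacitance values are hypothetical): the series law (2.10) is a harmonic sum and the parallel law (2.11) is a plain sum, so m equal capacitors give C/m in series and mC in parallel.

    # Equivalent capacitance of series and parallel circuits of capacitors.
    def series_capacitance(caps):
        # Harmonic sum: 1/C_series = sum of 1/C_i
        return 1.0 / sum(1.0 / c for c in caps)

    def parallel_capacitance(caps):
        # Plain sum: C_parallel = sum of C_i
        return sum(caps)

    caps = [2e-12, 2e-12, 2e-12]          # three 2 pF capacitors
    print(series_capacitance(caps))       # 2/3 pF: m equal capacitors give C/m
    print(parallel_capacitance(caps))     # 6 pF: m equal capacitors give m*C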

2.2.3 Parallel and Series Circuits of Capacitors with the Same Capacitance

In this subsection, we consider series and parallel circuits of m capacitors with the same capacitance, C. In Facts A and B below, we see that the load-sharing on each capacitor in the circuit is the same (the so-called equal load-sharing rule) in terms of the voltage and charge, respectively, for series and parallel circuits.

Fact A Consider a series circuit of m capacitors each having the same capacitance, C, where the voltage across the circuit is V_s; then the voltage across the i-th capacitor is V_i = V_s/m.

Justification From (2.7), C = C_i = Q_i/V_i = Q_s/V_i. Thus, V_i = Q_s/C ≡ V, and from (2.6), V_s = \sum_{i=1}^{m} V_i = mV. So, V_i = V = V_s/m.

Remark 2.2.3 (i) Note that as capacitors fail, the circuit is still a series one, but the failed capacitors now become conductors. Assuming the resistance of a failed capacitor is negligible, it follows that the voltage across each of the working capacitors is V_s/M, where M is the number of working capacitors. This is the equal load-sharing rule, and the series circuit is a load-sharing system in terms of the voltage. (ii) If one has capacitors of k < m different capacitances, the above


argument can be modified to show how the voltage/load is distributed proportionally among the different types.

Fact B Consider a parallel circuit of m capacitors each having the same capacitance, C, where the charge on the circuit is Q_p; then the charge on the i-th capacitor is Q_i = Q_p/m.

Justification From (2.9), C = C_i = Q_i/V_i = Q_i/V_p. Thus, Q_i = C V_p ≡ Q, and from (2.8), Q_p = \sum_{i=1}^{m} Q_i = mQ. So, Q_i = Q = Q_p/m.

2.2.4 Behavior of the Charge and Voltage Load Distributions for Series Circuits of Capacitors

The previous electrical laws were valid for the equilibrium case. Here, we study the dynamic behavior of the charge and voltage load distributions for a series circuit of capacitors and how they converge to equilibrium, based on Kirchhoff's circuit laws (Jones, 1971). First, consider a series circuit loop in which there are two capacitors with equal capacitance C, a battery with electromotive force V, and a switch. Let r denote the (nearly negligible) resistance of the circuit wires. If the resistance of the circuit wires were 0, then the approach to equilibrium would be instantaneous. Also, when the breakdown of a capacitor happens, the capacitor ceases to be a capacitor and becomes a resistor. An ohmic (linear) resistor is a passive circuit component that limits the flow of electric current and dissipates electromagnetic energy as heat. The relationship between the applied voltage (electromotive force), V, and the current, I, flowing through the circuit is called Ohm's law, V = IR, where R is a characteristic, called resistance, of the resistor; the greater the value of R, the lower the current for a given voltage. The energy dissipation equation for the resistor is P = I²R, where P is the power, or energy dissipated per unit time. When the circuit is closed, the equation governing the approach to equilibrium of the charge, q(t), is

V = r \frac{dq}{dt} + \frac{2}{C} q.

The solution to this equation is

q(t) = \frac{CV}{2} \left( 1 - e^{-2t/(rC)} \right) \equiv C V(t),

for t ≥ 0, where V(t) is the voltage load at a capacitor. Then, the current is


\dot{q}(t) = \frac{V}{r} e^{-2t/(rC)}.

If one of the capacitors fails, becoming a resistor with resistance R_1, the differential equation governing the return to equilibrium is

V = (r + R_1) \frac{dq}{dt} + \frac{1}{C} q.

The solution to this equation is

q(t) = CV \left( 1 - e^{-t/((r + R_1)C)} \right),

and the current in the circuit as the system approaches equilibrium is

\dot{q}(t) = \frac{V}{r + R_1} e^{-t/((r + R_1)C)}.

If the second capacitor fails, it becomes a resistor with resistance R_2. The current in the circuit is then constant:

\dot{q}(t) = \frac{V}{r + R_1 + R_2}.

Note that we cannot assume that the two new resistors have equal resistances. The capacitors may have had equal capacitance, but when they fail, they do not necessarily fail in the exact same way. The resistance depends on the breakdown path(s) through the dielectric insulating material that causes capacitor failure. Further discussion of this is in Chap. 3. For a series circuit with k (≥ 2) capacitors, each with capacitance C, the equation governing the approach to equilibrium, after j < k capacitors fail, is

V = (r + R_T) \frac{dq}{dt} + \frac{k - j}{C} q,

where R_T = \sum_{i=1}^{j} R_i. The solution is

q(t) = \frac{CV}{k - j} \left( 1 - e^{-(k - j)t/((r + R_T)C)} \right) \equiv C V(t),

with current

\dot{q}(t) = \frac{V}{r + R_T} e^{-(k - j)t/((r + R_T)C)}.


In general, if k is the number of working capacitors, the equilibrium charges and voltage loads at the working capacitors are

q(t) \to \frac{CV}{k} \equiv Q \quad \text{and} \quad V(t) \to \frac{V}{k},

where V/k is the equal load-sharing rule.
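The following sketch (Python, with hypothetical circuit values) evaluates the relaxation of the charge and of the voltage load derived above for a series circuit of k capacitors after j have failed, and checks the equilibrium limits.

    # Relaxation of charge and voltage load in a series circuit after j failures.
    import numpy as np

    V, C, r = 5.0, 1e-6, 10.0             # source voltage, capacitance, wire resistance
    k, j = 4, 1                           # total capacitors, number failed
    R_T = 50.0                            # total resistance of the failed capacitors

    t = np.linspace(0.0, 5e-4, 6)
    q = (C * V / (k - j)) * (1.0 - np.exp(-(k - j) * t / ((r + R_T) * C)))
    V_load = q / C                        # voltage load at each working capacitor
    print(q[-1], C * V / (k - j))         # q(t) approaches C*V/(k - j)
    print(V_load[-1], V / (k - j))        # V(t) approaches the equal load-sharing value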

Chapter 3

Breakdown of Thin-Film Dielectrics

We are considering the breakdown mechanisms of a thin-film dielectric. The earlier type of dielectric used in electronic circuits, silicon dioxide, is generated on a metal substrate by a chemical vapor deposition process (Chu, 2014). The deposition process is usually performed at relatively low temperatures to avoid defect formation, diffusion, and degradation of the metal layers. However, at lower temperatures, more impurities are generated in the silicon dioxide. The vapor involved in the deposition process is a mixture of silane (SiH.4 ) and nitrous oxide (N.2 O) in nitrogen. The result is that there is a relatively small number of silanol (SiOH) and water sites in the silicon dioxide thin film. Due to the effects of these impurities on the reliability of the dielectric and the need for thinner dielectric films in newer electronic devices, materials other than silicon dioxide have been evaluated in recent years. One promising material that appears to outperform others (Bersuker et al., 2007; Iglesias et al., 2011; McKenna & Shluger, 2011; Pirrotta et al., 2013) is hafnium oxide (hafnia). The following brief discussion of some concepts from solid-state physics follows Jones (1971). In the next section, there is a basic discussion of the quantum band structure of crystalline solids. Subsequent sections include a discussion of the electronic structure of a thin-film silicon dioxide dielectric layer, followed by a discussion of the structure of a thin-film hafnium oxide layer, on a silicon substrate. Afterward, the conduction mechanisms through such dielectrics are discussed. Finally, there is a discussion of the occurrence of soft and hard breakdowns of dielectrics.

3.1 Quantum Theory of Electron States in Solids

For an isolated atom, atomic energy levels are degenerate (Jones, 1971), i.e., there may be several electrons with the same energy but different values of the other quantum numbers, such as spin and orbital angular momentum. A level may be


of degeneracy 2, 8, 18, 32, etc. When the atom is embedded in a crystal of the same element, with perhaps 10^19 atoms, interactions among the atoms remove the degeneracy, spreading out the states. For an energy level of degeneracy m in a crystal of N atoms, the level is spread into Nm states that are close to each other in energy, so that the level may be approximated by a continuous band. In an insulating material, there is a full band (all states are occupied), called the valence band, separated by a (relatively large) energy gap from the higher and empty conduction band. An electron in the valence band would need to absorb an amount of energy at least as large as the band gap to jump to a state in the conduction band. In a semiconducting material, there is a smaller band gap, so that thermal excitations may be sufficient to cause electrons to jump into the conduction band and be able to conduct current. In a metal, the valence band is full, and the conduction band is partially full, so that electrons in that band are able to conduct current when an electric field is applied across the metal crystal.

3.2 The Two Dielectric Materials Being Examined

3.2.1 Structure of Silicon Dioxide Thin Films

The electronic band structure of solids applies to molecular as well as atomic solids, and to amorphous as well as crystalline solids. Silicon dioxide, in the amorphous form used in electronic devices, also has a band structure (Nekrashevish & Gritsenko, 2014). A thin film of silicon dioxide is dielectric (insulating). With no applied electric field across the dielectric, the valence band in the thin film is fully occupied, while the conduction band is empty. The applied field would need to be relatively strong to give enough energy to a valence band electron to enable it to jump to a state in the conduction band. In a SiO2 thin film with impurities, however, there may be additional states, called trapping states (Alam et al., 1999, 2002; Houssa et al., 2000), located in the band gap between the valence band and the conduction band. An electron in the valence band may gain enough energy, perhaps from the applied electric field, to jump to one of these intermediate states, even if it does not gain enough energy to jump to a state in the conduction band. An electron in the conduction band could lose energy, perhaps to thermal excitation of the dielectric, and fall into one of the trapping states. With silanol impurities in the dielectric, there will be mobile protons (Houssa et al., 2000), which can jump from a silanol site in the film to a neighboring SiO2 site as the energy input from the applied electric field leads to disruption and re-establishment of atomic bonds. Thus, trapping sites in the dielectric are not fixed but can rearrange themselves under the influence of an applied electric field. The mobility of these protons can then generate percolation paths across the dielectric.


A percolation path is a sequence of traps spanning the distance from the cathode to the anode. The number of traps in a percolation path can vary, depending on the thickness of the dielectric, the density of trapping sites in the dielectric, and the applied electric field, which, as stated above, can cause traps to shift from one site to another.

3.2.2 Structure of Hafnium Oxide Thin Films

Recent developments in electronic technology have motivated a search for more efficient dielectric materials than silicon dioxide, in order to allow for smaller, more efficient electronic components. In this context, more efficient means three things: (a) the dielectric material needs to have a higher relative permittivity, leading to the same performance (e.g., capacitance) with a thinner layer of material, (b) a relatively large band gap between the valence band and the conduction band, and (c) higher reliability, namely a longer time to BD (TBD) and a higher breakdown voltage. One promising candidate for such a material is hafnium oxide (HfO2), or hafnia (Lee et al., 2002; Smirnova et al., 2008; Zhang et al., 2019). This material has a much higher relative permittivity than silicon dioxide (κ ≅ 25 for HfO2, as opposed to κ ≅ 3.8 for SiO2). The band gap for HfO2 is approximately 5.68 eV, which, though smaller than the band gap for SiO2 (8.9 eV), is large enough for use in similar electronic applications. Also, as discussed below, the characteristics of trapping sites and the relative strengths of the charge transport mechanisms differ from those of SiO2 in favorable ways.

There are different mechanisms used for creating a hafnia layer on a silicon substrate. One method involves vapor deposition using various organometallic compounds as precursors (Smirnova et al., 2008). These vapors were transported to the substrate by an argon (inert) carrier stream at a flow rate of 50–200 cm³/min. The films were found, using infrared examination and energy-dispersive X-ray spectroscopy, to contain residual organics. The organics were removed by annealing the hafnia at 1070 °K. It was then found that the films grown from hafnium dipivaloylmethanate were free of organics to the limit of the IR spectroscopy. The resulting films were then examined using infrared radiation, X-ray photoelectron detection, energy-dispersive X-ray spectroscopy, X-ray diffraction, ellipsometry, and electrophysical methods. It was found that the films also contained HfSi and HfSiO4 due to chemical reactions at the interface with the silicon substrate. This interface layer would affect the current conduction mechanisms of the hafnia.

Annealing the hafnia removes some of the impurities but also changes its structure. It produces a dielectric layer with a mixture of amorphous and polycrystalline forms (Zhang et al., 2019). Within the microcrystals (or grains), the hafnia has a regular cubic lattice structure, with few or no defects or deformities. Within the amorphous phase, there are randomly scattered irregularities, in the form of elongated hafnium–oxygen bonds. These elongated bonds are also more concentrated in the grain boundaries (GBs) (Bersuker et al., 2007; Iglesias et al.,


2011; Pirrotta et al., 2013), between adjacent grains and between grains and the amorphous part of the dielectric.

Bersuker et al. (2007) examined electron trapping under constant voltage stress. They found evidence of two types of trapping: (a) Fast trapping occurs when electrons tunnel from the cathode to pre-existing (as-grown) defects; these defects occur primarily at GBs and consist of elongated hafnium–oxygen bonds, due to distortion/dislocation of the crystalline structure at the boundaries. These dislocations contain under-coordinated oxygen ions (an ion's coordination number is the number of adjacent oppositely charged ions in the regular crystal lattice), resulting in electron traps with energy levels within the band gap. These sites are also locally positively charged, acting to attract electrons. (b) Slow trapping occurs when thermally activated trapped electrons migrate to unoccupied traps.

In several of the studies discussed below, the dielectric surface structure and current flow through the dielectric were examined using Atomic Force Microscopy (AFM) (Binnig et al., 1986) and Conductive Atomic Force Microscopy (CAFM) (Lanza, 2017). Atomic Force Microscopy moves a cantilevered sharp probe (resolution of fractions of a nanometer) over a surface, together with a laser beam focused on the probe tip. As the probe dips and rises, the amount of reflected laser light hitting a photodiode changes, generating a map of the traced surface. CAFM adds a current-to-voltage preamplifier to the AFM probe to measure the current flow at the point of probe contact when a potential difference is applied between the base of the sample of material and the probe tip.

Iglesias et al. (2011) used CAFM to map the topographical structure of the surface of polycrystalline hafnia and to examine the conductive properties of the dielectric. They manufactured an HfO2 film with a nominal thickness of 5 nm using atomic layer deposition on a 1 nm thick interface layer of SiO2 produced by oxidation on a silicon substrate. The hafnia layer was then annealed at 1000 °C to induce crystallization. The CAFM process uses a relatively thin electric probe to resolve nanometer-level structure and current flow in materials. The probe had a silicon tip coated with a layer of conductive material. The process was performed both in air and in ultrahigh vacuum (10^−10 mbar). The results showed a granular structure of the dielectric, with GBs having an average depth of ∼1.6 ± 0.4 nm below the level of the grains at the anode surface. Electrical conductance was examined using a −6.5 volt bias, with the CAFM tip grounded and electrons injected into the silicon substrate. Analysis of the ∼50 GBs showed an average width of ∼4 nm. The current map showed a correlation with the topographic map, confirming that the leakage conduction occurs primarily at the GBs. Breakdown (BD) spots were also identified, with diameters of ∼20 nm. The larger diameters indicated that the BD spots spread outside of the GB leakage sites into the grains. Further analysis indicated that the BD tends to propagate along the GBs.


Pirrotta et al. (2013) also reported the use of CAFM to investigate the role of grains and GBs in conduction through polycrystalline hafnia. Their analysis confirmed that the leakage current flows primarily through the GBs. The hafnia samples were produced by the same method and to the same specifications as in the experiment by Iglesias et al. (2011). They found that the average diameter of the grains was ∼15 nm, but with substantial variation. The measured leakage currents at GBs were found to be at least an order of magnitude higher than those at the grains. It was also found that ∼5% of the total area of the dielectric consisted of GBs. A simulation study was also conducted, using the results of the CAFM analysis. The density of defects at grains was estimated to be 3 × 10^19 cm^−3. The defect density at GBs was estimated to be 0.9 × 10^21 cm^−3 for a 3 nm HfO2 thickness, and 2.1 × 10^21 cm^−3 for a 5 nm HfO2 thickness. Thus, the defect density was estimated to be more than an order of magnitude greater at the GBs than at the grains.

A more recent study (Zhang et al., 2019) examined the temperature dependence of the creation of a hafnia thin film on a silicon substrate. Remote plasma atomic layer deposition was used at a temperature of 250 °C to deposit a thin film of amorphous hafnia on the substrate for a number of specimens. Rapid thermal annealing was then used at various temperatures (450 °C, 500 °C, 550 °C, and 600 °C). The structural changes and crystallization properties of the hafnia thin films were then examined using atomic force microscopy, grazing incidence X-ray diffraction, X-ray photoelectron spectroscopy, and high-resolution transmission electron microscopy. The temperature dependence of the evolution of the HfO2/Si interface layer was also examined. It was found that both the structure and the electrical properties of the dielectric were modified by annealing. At higher annealing temperatures, CAFM showed that, in agreement with the previous studies, the surface of the hafnia layer became more irregular. The grains grew, and the crevasses between grains became deeper. Before annealing, the hafnia was an amorphous layer on the silicon substrate. During annealing, however, the hafnia became polycrystalline, and a SiO2 layer grew between the substrate and the hafnia. It was inferred that oxygen atoms from the hafnia diffused toward, and reacted with, the silicon. In the process, at higher annealing temperatures, the SiO2 interface layer became monocrystalline. The hafnia layer's structure gradually evolved from amorphous through a monoclinic polycrystalline phase to an orthorhombic polycrystalline phase (in a monoclinic crystal, the lattice is described by three vectors forming a rectangular prism with a parallelogram as its base; in an orthorhombic crystal, the vectors form a rectangular prism, with all three vectors intersecting at right angles (Kittel, 2004)). The dielectric constant of the hafnia was also affected by annealing, at first increasing to a maximum of 17.2 at an annealing temperature of 500 °C, and then decreasing as the temperature approached 600 °C. The authors concluded that an annealing temperature of 500 °C produced the best results. At that temperature, there was a thin monocrystalline layer of SiO2 atop the silicon substrate. Above that layer was another thin layer consisting of a combination of amorphous SiO2 and amorphous hafnia, and above that layer was a layer of monoclinic polycrystalline


hafnia. In monoclinic crystalline hafnia, the lattice unit is approximately cubic, described by three lattice vectors, a, b, and c. It has been found (Perevalov et al., 2007) that:

1. a ≅ 0.5106 nm, b ≅ 0.5165 nm, c ≅ 0.5281 nm.
2. a ⊥ b.
3. ∠(a, c) = ∠(b, c) ≅ 99.35°.

Within the lattice unit, there are four hafnium ions and eight oxygen ions. In summary, these studies showed the importance of the GBs for leakage current and for dielectric breakdown, with currents preferentially being channeled along the GBs, rather than through the grains. Since approximately 5% of the volume of the hafnia consists of the GBs, there is a relatively small fraction of the dielectric that accounts for leakage current and breakdown current. When a breakdown occurs, it happens at the GBs, but a breakdown path spreads from the boundary into the surrounding grains, although the spread tends to be greater along the GBs.

3.3 Mechanisms of Conduction Through Dielectrics

At absolute 0 °K in a dielectric material, the valence band will be completely filled, and the conduction band empty. No current will flow under the influence of an electric field. At non-zero temperatures, there is a non-zero, small probability that an electron will be thermally excited across the band gap into a state in the conduction band. If so, then an applied electric field across the dielectric will induce a current. However, the current through the dielectric for usual applied electric fields will be very low. For large electric fields, there will be noticeable current due to various mechanisms (Chu, 2014). The observed mechanisms will depend on, among other factors, the configuration of the electronic component. It could be either: (a) a dielectric sandwiched between two metal plates, an MIM component, or (b) a dielectric sandwiched between a metal plate and a semiconductor, an MIS component. The observed conduction mechanisms will differ somewhat for the two types of devices. The conduction mechanisms are discussed below. Some of the conduction mechanisms depend on the properties of the electrode–dielectric contact and are called electrode-limited mechanisms. They include: (i) Schottky or thermionic emission, (ii) Fowler–Nordheim tunneling, (iii) direct tunneling, and (iv) thermionic-field emission. Other mechanisms, called bulk-limited mechanisms, depend on the properties of the dielectric itself. These are: (i) Poole–Frenkel emission, (ii) hopping conduction, (iii) ohmic conduction, (iv) space-charge-limited conduction, (v) ionic conduction, and (vi) grain-boundary-limited conduction (in polycrystalline dielectrics).


3.3.1 Electrode-Limited Conduction Mechanisms

(a) Schottky or Thermionic Emission Schottky emission is a mechanism by which, if electrons in the metal obtain enough energy through thermal excitation, they will be able to overcome the energy barrier at the interface and enter the dielectric. In this mechanism, the current density is an increasing function of temperature. In the absence of an electric field across the dielectric, the emission current density is given by

J_{Sc} = A^* T^2 e^{-W/kT}.

Here, W is the work function, the minimum thermal energy needed for an electron to escape the surface of the cathode and enter the dielectric, k = Boltzmann's constant, T = Kelvin temperature, and

A^* = \frac{4\pi q k^2 m^*}{h^3},

where q = electron charge, h = Planck's constant, and m* = effective electron mass in the dielectric (defined by 1/m* = \partial^2 \varepsilon / \partial p^2, where \varepsilon = electron energy and p = electron momentum). The work function, W = qϕ_B, is the Schottky barrier height at the surface, where ϕ_B is the interface barrier potential. In the presence of an electric field, E, the work function is reduced by

\Delta W = \sqrt{\frac{q^3 E}{4\pi \varepsilon_r \varepsilon_0}},

where ε_r is the optical dielectric constant (equal to the square of the refraction index) and ε_0 is the permittivity of the vacuum, defined at the beginning of Chap. 2. At T = 0 °K, the emission current density is 0. At an operating temperature of 400 °K (260.31 °F) and a constant electric field stress of 1 MV/cm, the emission current density is approximately 1 µA/cm². If there are traps and interface states in the dielectric, so that the electron mean free path, l, is smaller than the dielectric thickness, t_d, then the equation for the current density must be modified to

J_{Sc} = \alpha T^{1.5} \left( \frac{m^*}{m_0} \right)^{1.5} E \mu \, e^{-(W - \Delta W)/kT},

where m_0 = free electron mass and α = 3 × 10^{-4} A·sec/(cm³·K^{1.5}). In this case, the temperature dependence of the emission current density is somewhat reduced. The electron mobility in the dielectric is denoted by μ. In a vacuum, if there is an electric field, a charge carrier, such as an electron, will be accelerated in the direction opposite to the electric field. However, in a dielectric, collisions between electrons and the atoms or molecules of the dielectric will lead to an average constant electron velocity opposite to the direction of the field. The electron mobility is the constant of proportionality between this average velocity and the field strength.

(b) Fowler–Nordheim Tunneling This process occurs when the applied electric field across the dielectric is large enough to induce a triangular interface potential barrier between the metal and the dielectric. This mechanism is independent of temperature; it depends on the electric field strength and the surface barrier height. The equation for the current density is

J_{FN} = \frac{q^3 E^2}{8\pi h q \varphi_B} \exp\left( -\frac{8\pi \sqrt{2q m^*_T}}{3hE} \, \varphi_B^{1.5} \right),

where m*_T is the tunneling effective mass of the electron in the dielectric. If the dielectric thickness is large enough (for example, greater than about 4 nm in SiO2), the tunneling effective mass is approximately the same as the effective mass of the electron in the dielectric, m*. At low voltage stress, the thermionic emission current is negligible, and this tunneling current is dominant. As the electric field strength increases, the Fowler–Nordheim tunneling current density increases.

(c) Direct Tunneling For lower applied electric fields, direct tunneling can occur across the dielectric. In this case, the tunneling occurs across the full dielectric thickness. This type of current would of course be smaller than the Fowler–Nordheim tunneling current. Again, this mechanism is independent of temperature but does depend on the dielectric thickness, being smaller for thicker dielectrics. The direct tunneling current density is given approximately by

J_{Tun} \cong \exp\left( -\frac{8\pi \sqrt{2q}}{3h} (m^* \varphi_B)^{0.5} \, \kappa \, t_{EOT} \right).

Here, t_{EOT} = t \, (\kappa_{SiO_2}/\kappa) is the equivalent oxide thickness of the dielectric, κ is the dielectric constant of the dielectric, and κ_{SiO2} is the dielectric constant of silicon dioxide. Obviously, if the dielectric is silicon dioxide, then t_EOT = t. Note that the direct tunneling current density is (approximately) independent of the voltage stress. At low temperatures and low applied voltage, direct tunneling is the dominant mode of conduction, and it is a decreasing function of dielectric thickness.


(d) Thermionic-Field Emission This mechanism is intermediate between Schottky emission and Fowler–Nordheim tunneling. The electrons gain some energy by thermal excitation so that they "see" an even smaller triangular potential barrier and are able to tunnel through more easily. This mechanism is, of course, temperature-dependent. The current density is given by

J_{TF} = \frac{q^2 \sqrt{m k T} \, E}{2 h^2 \pi^{0.5}} \exp\left( -\frac{q\varphi_B}{kT} \right) \exp\left( \frac{h^2 q^2 E^2}{96 \pi^2 m (kT)^3} \right).

This conduction mechanism is mildly dependent on the temperature and strongly dependent on the electric field across the dielectric.

3.3.2 Bulk-Limited Conduction Mechanisms

(e) Poole–Frenkel Emission This type of conduction occurs when electrons in trapping sites gain enough energy through thermal excitation to jump out of the trap into the conduction band of the dielectric. This type of emission is more likely to occur if an electric field is applied across the dielectric, thus reducing the potential energy of the trapped electron. The current density is given by

J_{PF} = q \mu N_C E \exp\left[ \frac{-q\left( \varphi_T - \sqrt{qE/(\pi \varepsilon_i \varepsilon_0)} \right)}{kT} \right].

Here, N_C is the density of states in the conduction band of the dielectric, and ϕ_T is the trap energy level. This current density will be larger for a larger density of available states in the conduction band and for a larger applied electric field (thus increasing the available energy to be absorbed by a trapped electron). It is a decreasing function, however, of the trap potential: the larger the gap between the trap and the conduction band, the smaller the current density. Poole–Frenkel emission has been observed to be the dominant conduction mechanism in certain dielectric materials when there is a combination of high electric fields (>1 MV/cm) and temperatures between 300 °K and 400 °K.

(f) Hopping Conduction This conduction occurs when an electron in a trapping site is able to tunnel from one trap to another. If there is a complete sequence of trapping sites across the dielectric (a percolation path), the electron may tunnel from trap to trap across to the anode. The current density is

J_{Hop} = q a n \nu \exp\left( \frac{qaE}{kT} - \frac{E_a}{kT} \right),


where a is the mean hopping distance from one trap to the next, n is the electron concentration in the conduction band of the dielectric, ν is the frequency of thermal vibration of electrons at trap sites, and E_a is the activation energy, the energy level from the trap states to the bottom of the conduction band. This mechanism differs from Poole–Frenkel emission in that the electrons are not thermally excited into the conduction band, but tunnel from one trap site to another.

(g) Ohmic Conduction This occurs when there are electrons in the conduction band of the dielectric and holes in the valence band. A hole is a vacant state in the valence band, which acts effectively as a positive charge that can conduct current, although a hole current moves in the opposite direction to a current of electrons. This conduction mechanism is somewhat temperature-dependent. Its current density is

J_{Ohm} = \sigma E = q \mu E N_C \exp\left( \frac{-E_g}{2kT} \right),

where σ is the electrical conductivity, μ is the electron mobility, N_C is the effective density of states in the conduction band, and E_g is the band gap (the difference between the top state in the valence band and the lowest state in the conduction band). The magnitude of the ohmic current is very small until dielectric breakdown.

(h) Space-Charge-Limited Conduction This mechanism is caused by the injection of electrons into the dielectric at an ohmic (metallic) contact. The electrons diffuse through the dielectric toward the anode under the influence of the electric field. If the electrons were injected into a vacuum instead of into the dielectric (as in a vacuum tube device), they would accelerate toward the anode. Since there is a solid (the dielectric material) between the cathode and anode, the injected electrons experience collisions with the lattice sites, leading to diffusion across the dielectric with a constant average velocity. If there are trapping sites in the dielectric, the space-charge-limited conduction current density is given by

J_{SCL} = \frac{9}{8} \varepsilon \mu \frac{N_C}{g_n N_t} \exp\left( \frac{E_t - E_C}{kT} \right) \frac{E^2}{d},

where g_n is the degeneracy of energy states in the conduction band, N_t is the density of traps in the dielectric, E_t is the trap energy level (assumed to be single-valued), E_C is the lowest energy level in the conduction band, and ε is the static dielectric constant. The other quantities are as defined previously. In the case of very strong injection of electrons into the dielectric (a strong potential difference, approaching breakdown), all traps are filled with electrons, and the space-charge-limited conduction current density follows Child's law:

J_{Child} = \frac{9}{8} \varepsilon \mu \frac{E^2}{d}.


This current density has a square-voltage dependence, since E = V/d for a constant electric field across the dielectric.

(j) Ionic Conduction This occurs when ions move due to an applied electric field. For example, if there are defects or impurities in the dielectric film, the field may cause ions to jump from one defect site to another. The current density is

J_{Ion} = J_0 \exp\left[ -\left( \frac{q\varphi_B}{kT} - \frac{qEd}{2kT} \right) \right],

where J_0 is a proportionality constant and d is the spacing of two nearby jumping sites. Since ion masses are relatively large, this mechanism is not usually significant in dielectric films in CMOS applications.

(k) Grain-Boundary-Limited Conduction In a polycrystalline dielectric, the resistivity at boundaries between microcrystals may be higher than that within the microcrystals. At the grain boundary, there will be a potential energy barrier that is proportional to the square of the trap density at the boundary and inversely proportional to the relative dielectric permittivity of the dielectric:

\Phi_B = \frac{q^2 n_b^2}{2 \varepsilon N},

where ε is the relative permittivity, n_b is the trap density at the boundary, and N is the dopant concentration. The greater the trap density at the boundary, the larger the potential barrier; the greater the relative permittivity, the lower the potential barrier. In hafnia, this type of conduction is important, since empirical evidence given above shows that leakage current and breakdown current are both preferentially channeled along the GBs.

3.4 Breakdown in Silicon Dioxide Dielectrics

In thick dielectrics, it is unlikely that there will be quantum tunneling of electrons from the cathode to the anode. However, in thin-film dielectrics, with thicknesses of the order of several nanometers, tunneling probabilities are larger. Prior to the formation of a complete percolation path across the dielectric, tunneling is the primary means of current flow between the cathode and the anode. Once a complete percolation path is formed, electrons can also jump from trap to trap due to thermal excitation. In this manner, they pass from cathode to anode. In addition, an electron in a trapping site is closer to the anode than an electron at the cathode and thus has a higher probability of tunneling from the site across to the anode. This sudden increase in conduction through the dielectric is called soft breakdown. There may be several soft breakdown occurrences before the final failure of the dielectric.


Consider a capacitor consisting of two metal plates with a thin film of SiO2 dielectric sandwiched between them. Let the area of the plates (and the dielectric) be A. Let the capacitor be subjected to a voltage stress, either constant or time-varying. Prior to the formation of a complete percolation path, at reference time t = 0, tunneling is the only means of current flow, as represented by the equation

A J_{stress} = A J_{Tun},

where the tunneling current density is a decreasing function of the dielectric thickness, T_{ox}, but an increasing function of the applied voltage V(t):

J_{Tun} = \alpha t_{EOT} e^{-\beta t_{EOT}/V(t)}.

Once a complete percolation path has been formed, additional current begins to flow, and the current equation becomes

A J_{stress} = A J_{Tun} + A J_{disp} + I_{perc}.

Here:

(a) J_{disp} = \frac{\varepsilon}{t_{EOT}} \frac{dV}{dt} is called the displacement current, and ε is the dielectric permittivity. This current is due to time variation in the electric dipole moments of the SiO2 (and silanol and water) molecules. If the voltage stress across the dielectric is constant, there is no displacement current.
(b) I_{perc} = \{G_0(t)\} \left[ V(t) \right]^{\delta} is the percolation current, or hopping across a complete path. It is area-dependent and sample-specific. Each sample has a different time-dependent conductance, {G_0(t)}, which also depends on the density of trapping sites in the dielectric (and so on the particulars of the deposition process) and on the power being dissipated by the percolation current (this dependency is denoted by the braces around the conductance and the current). The exponent δ of the voltage is found empirically (Alam et al., 2002) to be less than 1.

It is clear that each current density term in the previous equation contains more than one of the conduction mechanisms described in the previous chapter. For example, the tunneling current density term includes not only direct tunneling but also Fowler–Nordheim tunneling, which also depends on the electric field strength. This phase will also include low levels of conduction due to the various temperaturedependent mechanisms. The percolation current density includes several conduction mechanisms, with dependency on field strength, temperature, electron concentration in the conduction band of the dielectric, mean hopping distance from one trap to another, etc. Hopping, or trap-assisted tunneling (TAT), would appear to be the dominant conduction mechanism during this phase until the local temperature at the percolation path becomes large enough to trigger hard breakdown (HBD) and the ohmic conduction phase.

3.5 Breakdown in Hafnium Oxide Dielectrics

51

As percolation current flows across the dielectric, power is dissipated through the percolation path .Pperc (t > 0) = V (t > 0)Iperc (t > 0). If this power dissipation is large enough, the local temperature will be high enough near the percolation path to melt silicon, leading to a short circuit through the dielectric. In addition, the increase in temperature due to the percolation current may locally increase the current flow due to the temperature-dependent mechanisms discussed above. There is thus a feedback mechanism; as the percolation current increases the local temperature, thus increasing the currents due to other mechanisms, the temperature increases further. The current following this short circuit is found empirically (Alam et al., 2002) to be proportional to the voltage stress, thus obeying Ohm’s law. The result is called a HBD of the dielectric, as the dielectric becomes a resistor.

3.5 Breakdown in Hafnium Oxide Dielectrics In a hafnium oxide thin-film dielectric with a silicon dioxide interface layer, the breakdown mechanism is similar to that of a silicon dioxide thin film, but also different in some ways. In the dielectric specimens described in the most recent study (Zhang et al., 2019), there is a monocrystalline SiO.2 layer atop the silicon substrate, then a thin mixed layer of amorphous SiO.2 and amorphous hafnia atop that, and a thicker layer of monocline polycrystalline hafnia above that. Electrons emitted by the silicon substrate cathode may cross the monocrystalline SiO.2 layer by various methods discussed above, particularly through direct tunneling and Fowler–Nordheim tunneling (Perevalov et al., 2007). When they reach the amorphous layer of SiO.2 and hafnia, they encounter a potential barrier, due to the larger concentration of defects in this layer. They are then channeled along with this layer until reaching a grain boundary of the polycrystalline hafnia layer. They may then hop from trap to trap in this grain boundary to reach the anode. It is unlikely that leakage will cross the grains since the defects (elongated hafnium–oxygen bonds) are much more concentrated in the GBs. This means that roughly 5% of the volume of hafnia primarily accounts for current flow. As electrons hop from trap to trap in the GB, the local varying electric field which they produce generates phonons2 in the adjacent grains (Vandelli et al., 2013), increasing the local temperature. The vibrations also tend to generate more oxygen vacancies (trapping sites) in the grain boundary, increasing the trap density. The increased temperature also enhances the various temperature-dependent conduction mechanisms, leading to increasing leakage current. The increased trap density leads to increased trap-assisted tunneling, as the distance between adjacent trapping sites is smaller. The increased leakage current, in turn, leads to the generation of more trapping sites and higher local temperature.

2 A phonon is a quasiparticle of sound in a crystal lattice; it is quantized due to the regularity of the lattice structure.

52

3 Breakdown of Thin-Film Dielectrics

If the damage is relatively small, and the local percolation path is primarily confined to the GBs, then the capacitor may discharge as a soft breakdown event, with the subsequent buildup of charge due to the continuing voltage stress (Chatterjee et al., 2006). If, however, the temperature-induced damage is significant, then the positive feedback leads to a runaway leakage current that melts an ohmic conduction path along the grain boundary, producing hard dielectric breakdown. There will be a primary, initial grain boundary ohmic conduction channel, but there may also be secondary channels through the dielectric. There may be several SB events prior to the HB event. After HB, the capacitor is converted into a resistor, with the resistance dependent on the details of the channels of ohmic conduction.

Chapter 4

Cell Models for Dielectrics

Le et al. (2009), Le (2012), and Bažant and Le (2017, Chapter 14) proposed a subcell model for high-gate dielectrics. Here, the dielectric is viewed as a parallel circuit of cells and where each cell is a series circuit of nanocapacitor subcells. Because it is a parallel circuit, each cell (also referred to as a representative volume element (RVE) by Bažant and Le (2017)) is a bundle load-sharing system where the capacitor laws dictate the load on each working subcell. As such, the dielectric is a chainof-bundles load-sharing reliability system where the system fails when one of the bundles/cells fails. Other cell types of models have been considered earlier to analyze the BD of a dielectric, but these are defect-based rather than based on load-sharing. See Strong et al. (2009, Sections 3.2–3.4) where percolation and analytic cell models are discussed in detail. The cells in the analytic model are akin to the subcells in Le et al. (2009)’s model without regard to load-sharing and are considered defective cells if they contain a defect. Let N be the number of cells, and let .Xi = 0 if the ith cell is not defective, or .Xi = 1 if the ith cell is defective, for .i = 1, 2, . . . , N. The .Xi ’s are independent Bernoulli r.v.’s where the probability of being defective is .λ, the mean fraction of defective cells in the dielectric. The dielectric fails when there is a collection of defective cells that go directly across the thickness of the dielectric since this causes current to flow across the dielectric. In the percolation model, which is a more appropriate model for thin-film silicon dioxide dielectrics, defects are impurity sites in the amorphous material. Most of the sites in the material consist of silicon dioxide molecules bound within the dielectric. However, as discussed in the previous chapter, the manufacture of the thin film leads to the inclusion of impurity sites, consisting of either silicon hydroxide (silanol) or water. A hydrogen ion (proton) in a defect site may, under the influence of an electric field across the dielectric, exchange positions with an adjacent oxide ion in a silicon dioxide site. In this way, the impurity site, which acts as an electron trap, may migrate through the dielectric. If it happens that the various migrating traps © The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 J. U. Gleaton et al., Fiber Bundles, https://doi.org/10.1007/978-3-031-14797-5_4

53

54

4 Cell Models for Dielectrics

temporarily form a chain (percolation path) of adjacent traps extending across the dielectric, then there will be a temporary increase in the leakage current, a soft breakdown (SBD), which will also lead to an increase in the local temperature around the chain of traps. However, the continued migration of the trapping sites will lead to dispersal of the percolation path. Another percolation path may form later at a different place in the dielectric. If it happens that the SBD current in a path leads to a sufficient increase in the local temperature to produce melting of an ohmic conduction path through the dielectric, then HBD is said to occur. In a thin-film hafnium oxide dielectric, on the other hand, the percolation model does not work so well. The trapping sites, which consist of electron energy states in the band gap between the valence band and the conduction band, are generated when a dislocation, consisting of an elongated hafnium–oxygen bond, occurs. These trapping sites do not migrate through the dielectric but are confined largely to the GBs. The leakage current through a GB increases the local temperature, leading to the generation of more dislocations (trapping sites) in the GB. The consequence is a temporary increase in leakage current (SBD), with concurrent capacitor discharge. However, the traps do not migrate through the dielectric but remain localized in the GB. Repeated SBD events at a GB may lead to a sufficient increase in the local temperature to begin generating dislocations (traps) in the grains adjacent to the GB, producing a permanent conductive path across the dielectric, a HBD. The cell model is more appropriate for thin-film hafnium oxide dielectrics, because of the localization of the paths for SBDs and HBDs. Each cell may be conceptually considered to be a series of SBD events, leading to an HBD. A SBD occurs when an increase in local temperature due to leakage current leads to generation of a new trapping site (dislocation) in the GB. This new site increases the probability of electron tunneling from trap to trap, increasing the leakage current and further increasing the local temperature in the GB. At sufficiently high local temperature, dislocations start to be produced in the grains adjacent to the GB, leading to the melting of a large-scale conduction path across the dielectric. HBD is then said to occur. The cells are parallel, due to localization. A series of pseudo-nanocapacitors in a cell, for which sequential subcell failure leads to cell failure and HBD, may approximate the sequence of SBDs leading to HBD. Each SBD event increases the local temperature at the GB, increasing the probability of future SBDs at that location, and of the eventual HBD. The above models are related to chains-of-links (bundles) types of models where the chain fails when one of the links fails (the weakest link model). The number of links is determined by the size (cross-sectional area) of the links, and the number of cells/subcells in a link is determined by the size of the cell and the thickness of the dielectric. If the cells/elements in the links are independent and identically distributed (i.i.d.), extreme value asymptotics suggests that the BD statistics would be approximately Weibull for long chains. Lack of fit of the Weibull is attributed to size effects of the finite length of the chain where Le (2012) and Bažant and Le (2017) referred to the model as a finite weakest link model. 
Though they based their model on a bundle/cell being a series circuit of nanocapacitors, this is just an approximate model that is not consistent with the

4 Cell Models for Dielectrics

55

nano-physics of BD as indicated in the previous chapter. Rather than using their load-sharing cell model to study the size effect related to Kim and Lee (2004, Figure 14) data and to correct for this inconsistency, they empirically fit what they call a grafted distribution for the BD cycle time. The grafted distribution behaves like a Weibull at the origin and like a normal distribution in the right tail and serves as a reference distribution to the size effect. Size effects were observed earlier in the study of the BD of fibrous composites (Phoenix, 1983; Phoenix & Tierney, 1983; Taylor, 1987). They modeled composites as chains-of-(load-sharing) bundles where size effects need to be addressed in the analysis of the chains. They used a reference distribution to discover the length, m, of the chain and the size, n, of the bundle. (See the discussions regarding the “weakest link transform” and the “reverse weakest link relationship” in Section 6 of Phoenix and Tierney (1983) and Section 5 of Taylor (1987), respectively.) Their way of selecting the bundle size (the RVE) depended on the bundles being approximately i.i.d. In Chap. 10, we study the size effects of a finite weakest link/chain-of-bundles model for Kim and Lee (2004, Figure 14)’s cycle time data. Our study is based on a hybrid of Le et al. (2009)’s model and the fibrous composite model that are both chain-of-bundles load-sharing systems. In the hybrid, we just choose a reference distribution without regard to the details of the statistical behavior of the elements of the bundle, which is quite complicated to model for complex materials. The reference distribution is simply based on load-sharing bundles of n i.i.d. Weibull components where the load-sharing within the bundle is the equal load-sharing rule. The Weibull parameters and n are tuning parameters to study the size effect of the chain length and the bundle size that can be used to incorporate the physical structure of the dielectric. Note that the relationship of the Weibull shape parameter, .β, in the asymptotic distribution of the TBD statistic under static testing protocols is .β ∼ = Tox /a0 . Here, .Tox is the thickness of the HfO.2 dielectric, and .a0 is the lattice constant for the grain. (See formula (3.17) of Strong et al. (2009) where, here, we ignore the thickness of the interface, since it is negligible for more modern fabrications of HfO.2 dielectrics.) We also consider the fabrication of the HfO.2 dielectrics, where the conductive regions of the GBs are where the BD of the dielectric occurs since the crystalline structure of the grains makes it magnitudes more difficult for electrons to move through/jump around in the grain until the behavior of the electrons becomes irreversible. As indicated in the previous chapter, electrons in the cathode end traverse the interface with the HfO.2 dielectric laterally until it reaches a GB that it is physically capable of entering. The electrons attempt to move across to the anode through the GBs described in the previous chapter. Thus, the length of the chain, m, is dictated not by the total cross-sectional area but just that consisting of the GBs. Finally, another consideration we discuss is how the topography of the HfO.2 surface at the anode end can be incorporated into the model.

Part II

Statistical Aspects of Fiber Bundle Models

The basic model for BD is the chain-of-bundles model. This is a weakest link model for materials where the chain/material fails when one of its bundles/links fails. Rosen (1964, 1965) used such a model in the analysis of his seminal experiments regarding unidirectional 2-D glass fibers composites. He discovered that around a break in a fiber in the composite all the load was transferred to fibers horizontally in the composite. He referred to this horizontal distance as the ineffective length and used this to conceive of the composite as a chain-of-bundles where each bundle was a horizontal collection of ineffective length fibers. Following Daniels (1945)’s work on the breaking strength of a bundle of threads, Rosen (1964, 1965) used the equal load-sharing rule among the surviving fiber elements in a bundle. This rule, though reasonable for a bundle of threads or a dry bundle of fibers, was unrealistic for fibrous composites. Zweben and Rosen (1970) used more realistic local load-sharing rules for 3-D composites. Rosen’s work initiated a considerable amount of work on chain-of-bundles loadsharing systems for fibrous composites in 1970s and 1980s. Notable was the work by a group of material scientists and statisticians at Cornell (see, for example, Harlow and Phoenix, 1978a, 1978b; Harlow and Phoenix, 1982; Harlow et al., 1983, and the references cited in Chaps. 4 and 5). In Chap. 5, electrical breakdown (BD) and the BD formalism that relate various types of breakdown distributions are discussed, while in Chap. 6, the statistical properties of the load-sharing bundles and chains of such bundles are presented. A mixture representation is given for the bundle breaking strength distribution when the bundle component strength distributions are i.i.d. The shape parameter depends on the bundle size, and the scale parameter is determined by the load-sharing rule in this representation. The Gibbs measure for the set of surviving components is also given, and a description of the stochastic process for BD is given that is consistent with the BD of a series circuit as a load-sharing system and the distribution of the BD of the stochastic process. In addition, size effects are illustrated for chain-ofbundles for bundle sizes 2, 3, 4, and 5 and the equal and a local load-sharing rule. In Chap. 7, we discuss fiber bundle models in the context of fibers and unidirectional fibrous composites. There, the physical aspects are from a mechanical

58

II

Statistical Aspects of Fiber Bundle Models

perspective; tensile load is applied to the fibers or the composites where the load is parallel to the fibers. For brittle fibers, this load is primarily vertical and simplifies the analysis. There, several fiber and fibrous composite data sets are used to illustrate statistical methods and conclusions drawn from the analysis using these methods for chain-of-bundles models. Kim and Lee (2004) did a thorough study of reliability characteristics of the BD of HfO2 dielectrics. Statistical analyses of their data for Figures 3, 6, and 14 regarding the BD formalism are given in Chap. 8. (The data for Figure 14 were kindly provided by Le (2012) who used these data in his analysis of his load-sharing cell model. The data from Figures 3 and 6 were reconstructed from the figures since J. C. Lee was not successful in locating these data.) In Chap. 9, the analysis of Kim and Lee (2004)’s Figure 6 is used to study the BD of circuits of ordinary capacitors. This is for illustrative purposes only to illustrate size effects since the capacitor BD is based on dielectrics and not classical plate capacitors. We also study parallel–series circuits (chain-of-bundles) to study simulation size effects as well the size effect of the chain length. We also discuss the load-sharing aspects that have to be considered in time to BD and cycles to BD. Le (2012) and Bažant and Le (2017) discussed the size effect of the length of the chain in the chain-of-cells/bundles model and refer to this as the finite weakest link model. We study this in Chap. 10 and suggest hybrids of their load-sharing cell model that is described in Chap. 4 with earlier work by Phoenix (1983), Phoenix and Tierney (1983), Taylor (1987) who studied methods for discovering the length of a chain and the size of a bundle. These hybrids suggest how: (i) one may account for the coarseness in the metal anode interface with the HfO2 surface and (ii) a birth process akin to Taylor (1987)’s model to account for the creation of an ohmic path that causes BD in the dielectric. A summary of the book is given in the last chapter on concluding comments as are some other areas of future research relevant to the topics in the book.

Chapter 5

Electrical Breakdown and the Breakdown Formalism

In the testing of capacitors and capacitor circuits, one is interested in their reliability. Thus, one studies various types of breakdowns under accelerated stress conditions: e.g., stressed under increasing voltage or current to determine voltage or current breakdown (VBD or CBD) and time to failure under static voltage or current load or cycles to failure. The BD formalism allows one to relate BD under different testing protocols and to project the reliability to normal operating conditions.

5.1 The Breakdown Formalism Fundamental to the understanding of electrical breakdown of plate capacitors is a formalism based on the Weibull distribution. Much of the justification for the use of the Weibull is empirically based and motivated by weakest link arguments. Here the necessary background regarding this formalism for static and dynamic loads is given. This is based on Chapters 2 and 3 from Strong et al. (2009) and Chapters 14–17 from Dissado and Fothergill (2008). See also Phoenix (1983) and Phoenix and Tierney (1983) who gave a more general formalism in a different format for composites that include general load history in BD and derived the power law from first principles. In Chapter 8, statistical methods are given for studying the validity of the formalism and illustrated based on Kim and Lee’s (2004) work regarding the BD statistics of metal oxide (HfO2 ) capacitors.

5.2 Time-to-Breakdown (TBD) Formalism: Static Loads In this section, we present the assumptions/axioms A.1–3 and their consequences of the formalism for static loads.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 J. U. Gleaton et al., Fiber Bundles, https://doi.org/10.1007/978-3-031-14797-5_5

59

60

5 Electrical Breakdown and the Breakdown Formalism

A.1 Weakest Link Principle Let FA denote the TBD distribution of a plate capacitor with area A. Then, 1 − FA = (1 − F1 )A , where F1 is the TBD distribution of a plate capacitor of unit area. Note that it follows from this that 1 − FA+B = (1 − F1 )A+B . So, the voltage BD statistics for two disjoint areas are independent. A.2 Weibull Distribution for the TBD, T , under a static voltage load, V , and unit gate area (A = 1)   a  t Survival Function: S(t) ≡ S(t; τ, V , a) = exp − ; τ (V ) Hazard Function: h(t) = aτ (V )−a t a−1 .

A.3 The Inverse Power Law Relationship of the Parameters of T to that of V τ (V ) = DV −n ≡ C −1/a V −b/a .

Consequences of A.2 and A.3 (i) The scale parameter, τ (V ), the characteristic lifetime, is the 1 − e−1 percentile of T under a static load V . Let tp (V ) and μ(V ) denote the pth percentile and mean of T under a static load V . Then, since  a tp (V ) = [τ (V )]a [− ln(1 − p)] and μ(V ) = τ (V )





at a e−t dt, a

0

it follows that the percentiles and the mean of the TBD also satisfy the inverse power law. The shape parameter, a, is related to the thickness of the capacitor (see Section 3.2.5 of Strong et al., 2009). (ii) Since ln τ (V ) = −n ln V + ln D, plots of ln τ (V ), ln μ(V ), and ln tp (V ), p = 0.5, versus ln V are used to verify the appropriateness of the inverse power law model. The reader is referred to

5.2 Time-to-Breakdown (TBD) Formalism: Static Loads

61

Figure 14.7 in Bažant and Le (2017), Figure 3.43b in Strong et al. (2009), and Figures 14.8b and 14.5a in Dissado and Fothergill (2008), respectively, which support the power law for different materials. Note, though, that the related Figures 3.43a in Strong et al. (2009) and 14.5b in Dissado and Fothergill (2008) suggest that the exponential law is a reasonable alternative to the inverse power law. Section 14.2.1 of Dissado and Fothergill (2008) compared the inverse power and exponential laws and indicated why the inverse power law is preferred. Section 3.4.1 of Strong et al. (2009) discussed in depth the deficiencies of the exponential law and stated on page 271–2 that “At lower voltages or for thinner oxides, the TBD power law dependence remains valid over 12 orders of magnitude as seen in Figure 3.46(b). Recently, the empirical TBD power law was also confirmed by different groups. These independent studies from different research groups unambiguously demonstrate that an exponential law is invalid to characterize the TBD voltage dependence for both thin and thick oxides.” In addition, Phoenix (1983) page 227 and Phoenix and Tierney (1983) page 215 stated that “In summary, the notion that the exponential breakdown rule is theoretically justified from kinetic theory, and that the power-law is only an ‘empirical’ law is without solid foundation. Not only is the power-law theoretically justified, it also has the added advantage of mathematical tractability.” Their justification of the power law is in the context of molecular slippage and is based on a better and more appropriate approximation to the potential function than the linear approximation used to justify the exponential power law. The rationale for preferring the power law is based on the following problems with the linear approximation to the potential function. The rate at which the thermal activation potential decreases with increasing stress should be unbounded at 0; near 0 stress, any small increase in stress should produce a large decrease in the thermal activation potential. This is incompatible with a Maclaurin linear approximation to the potential. In addition, in the process of material fracture under a tensile load, the stresses involved must be large enough to cause bond rupture with a relatively high probability within an observable time interval. This means that the stress will be outside of the range of a linear approximation to the potential. The potential function is observed to be actually curved over the stress range of interest. Their more reasonable approximation models this and results in the power law. See Phoenix (1983) and Phoenix and Tierney (1983) for more details. Consequences of A.1 and A.2 At a fixed voltage V and gate area A, the characteristic lifetime, τ (V , A), is τ (V , A) = τ (V )A1/a . It follows from Consequences of A.2 & A.3(i) how the pth percentile, tp (V , A), and mean, μ(V , A), also preserve this relationship. Kim and Lee (2004)’s Figures 8 and 9 are consistent with A.1 & A.2.

62

5 Electrical Breakdown and the Breakdown Formalism

5.2.1 TBD Formalism: Dynamic Loads Assumption A.1 for static loads is a special case of the proportional hazards model where the proportionality is proportional to the area. In our analysis in Sect. 8.2 of Kim and Lee’s (2004) data for increasing voltage load, we find that a proportional hazards model was appropriate, but it was not proportional to the area. For dynamic loads, we replace A.1 by A.1’ but keep A.2 and A.3. A.1’ Weakest Link Principle Let FA denote the TBD distribution of a plate capacitor with area A. Then, 1 − FA = (1 − F1 )m(A) , where F1 is the TBD distribution of a plate capacitor of unit area and m(A) is a nonnegative increasing function. Note that it follows from this that if A < C with C = A + B, 1 − FC = (1 − F1 )m(C) = (1 − F1 )m(C)−m(A) (1 − F1 )m(A) = (1 − F1 )m(C)−m(B) (1 − F1 )m(B) . So, the voltage BD statistics for two disjoint areas are independent. A.4 Time and Voltage Relationship for the Probability of Breakdown Plugging τ (V ) − C −1/a V −b/a from A.2 into the hazard rate in A.1 gives h(t) = at a−1 CV b or h(t) = at a−1 C(V (t))b if the voltage is a function of t. When V (t) = V t, then h(t) = Cat a+b−1 V b . Thus, the hazard function of T  H (t) = 0

t



t

h(s)ds =

Cat 0

a−1

[V (s)] ds = b

Ct a V b , a+b

if V (t) ≡ V , b

C at a+bV ,

if V (t) ≡ V t,

and the survival function is ⎧ ⎨exp{−Ct a V b },  if V (t) ≡ V ,

S(t) = exp{H (t)} = a+b b ⎩exp C at a+bV , if V (t) ≡ V t.

(5.1)

Thus, from Eq. (5.1), under an increasing voltage, assumption A.2 and V (t) = V t, the breakdown time, TBD , has a Weibull distribution with shape parameter ρ and scale parameter (characteristic life) τ where

5.2 Time-to-Breakdown (TBD) Formalism: Static Loads

ρ = a + b, τ

−1



Vb = Ca a+b

63

1/(a+b) and n =

b . a

(5.2)

Remark Since V (t) = V t, from Eq. (5.2), the BD voltage VBD , VBD = V TBD , has a Weibull distribution with shape parameter ρ and scale parameter (characteristic life) V τ . Similar assumptions to those above are relevant to cyclic testing. For example, for AC tests, the pulse voltage amplitude, V , will be constant for static testing, while, for dynamic testing, the amplitude rate, V , is proportional to time. (See Formulas (16.28–29) and (17.5–7b) of Dissado and Fothergill, 2008). Kim and Lee’s (2004) static cyclic testing data has the added feature that each cycle has a duty cycle (an on-cycle) when the capacitor is under static cyclic testing and an off-cycle when it is under no stress. For this limited amount of data (Kim & Lee, 2004) suggested that changes in the frequency and the length of the on-cycle seemed to only affect the scale but not the shape parameter. A statistical analysis of this is discussed in Sect. 8.1 and Chap. 10 as well as other data in their paper.

Chapter 6

Statistical Properties of a Load-Sharing Bundle

As indicated in the introduction of Part II, some physical systems can be modeled using fiber bundles where a bundle is a load-sharing parallel reliability system of components. Here, a parallel system (bundle) fails when all the components in the system fail. Below we discuss load-sharing rules for a bundle and its consequences and then give the survival distribution and mixed distribution of the strength of a bundle under increasing load. After that, we give the joint distribution, the Gibbs measure, of the state (failed/working) of the components of a bundle and discuss the stochastic failure process for the bundle. Finally, we close the chapter with a discussion of size effects for the equal load-sharing rule and a local load-sharing rule.

6.1 Load-Sharing Rules The strength of a load-sharing bundle is based on the nominal load per component (the load per component), say x. We assume that there are n components in the bundle .N = {1, 2, . . . , n}. Let .M ⊆ N denote the set of working components in N. Then the load at component .i ∈ M for a nominal load per component x is given by .λi (M)x. These nonnegative constants, .λi (M), define the load-sharing, and the collection .{λi (M) : i ∈ M, M ⊆ N} is called a load-sharing rule. A load-sharing rule is said to be monotone if λi (A) ≤ λi (B) whenever i ∈ B ⊂ A ⊆ N.

.

The component failure process for a fiber bundle is complicated and occurs in cycles. The first component failure in a cycle is due to increased load and referred to as a Phase I failure. This failure causes a cascade of failures referred to as Phase II failures due to load transfer from the failed components in the cycle. After this Phase I/II cycle, the next cycle begins with the next failure due to increased load. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 J. U. Gleaton et al., Fiber Bundles, https://doi.org/10.1007/978-3-031-14797-5_6

65

66

6 Statistical Properties of a Load-Sharing Bundle

A complete description of the failure process for a load-sharing system when the load-sharing rule is monotone can be found in Li and Lynch (2011). In the sequel, we only consider monotone sharing rules. Examples (i) The equal load-sharing rule is .λi (M) = n/M, where .|M| is the cardinality of M. Notice that the equal load-sharing rule is obviously monotone. (ii) Consider the local load-sharing rule where load is transferred equally to adjacent components. This rule is not well-defined since it does not define .λi (A) for all i and A. It can be well-defined by using an absorbing state Markov chain on N where the local load-sharing describes the one-step transition probabilities of the chain. This local load-sharing rule is a special case of the local loadsharing discussed in Sect. 7.2.2 for rectangular grid bundles, .Gr,c , but, here, the grid is a horizontal bundle with .r = 1 and .c = n. The discussion in Appendix A.2 shows that such absorbing state load-sharing rules are monotone and describes how such rules are calculated. Also, notice that, when .n = 2, the equal and local load-sharing rules are the same.

6.2 The Bundle Strength Distribution as an Affine Mixture Consider m sets of working components in the bundle .N ⊇ S1 ⊃ . . . ⊃ Sm ⊃ Sm+1 ≡ ∅ (also written as .S1 → . . . → Sm → ∅) that will be referred as a path P . Let .F and .H = − ln F be the bundle component survival distribution and cumulative hazard function, respectively. The signature   of this path and its ν(P )−1   path coefficient are defined as .ν(P ) = m + 1 and . H (xλi (Sk )) , k=1

i∈Sk \Sk+1

respectively, where x is the load per component in N . Following Lee et al. (1995), the bundle survival distribution can written in terms of the path signatures and coefficients as ⎧ ⎡ )−1 ⎨ ν(P   ⎣ .G(x) = (−1)ν(P ) exp − ⎩ P

k=1

 i∈Sk \Sk+1

⎤⎫ ⎬ H (xλi (Sk ))⎦ . ⎭

(6.1)

Since .G(0−) = 1, formula (6.1) shows that the survival distribution of the bundle strength is an affine mixture over the path coefficients. If the component strength distribution is .W(τ, ρ), this affine mixture can be written as G(x) =



.

AP exp{−CP (x/τ )ρ },

P

where .AP = (−1)ν(P ) and .CP =

ν(P )−1 



k=1 i∈Sk \Sk+1

λi (Sk ).

(6.2)

6.2 The Bundle Strength Distribution as an Affine Mixture

67

n+1  Let .n = (−1) λn dA(λ). It follows from Theorem 6.3 and its proof in Lee n! et al. (1995) and from (6.2) above that .G(x) ∼ (x/τ )nρ k as .x → 0. This shows that the survival distribution of the breakdown strength of a series system of such bundles is asymptotically Weibull with survival distribution .exp{−(x/τ )nρ n } and how the asymptotic shape parameter and scale parameter depend on n, .ρ, .τ , and the n-th moment, .n , of the affine distribution .A(λ). In Sect. 6.3, we shall see that .n is also the n-th moment of the probabilistic mixture distribution in the gamma-type mixture representation for the bundle strength distribution. From (6.2), we will derive a gamma-type mixture for the density of the bundle strength distribution in the next section. To do this, replace .(x/τ )ρ by y in (6.2). This gives

G0 (y) =



.

 AP exp{−CP y} ≡

exp(−λy)dA(λ).

(6.3)

P

Note that this is the bundle strength distribution when the component strengths have an exponential distribution, but the load-sharing rule is now .(λi (M))ρ . Remark (Daniels’ Equal Load-Sharing Rule (Daniels, 1945)) Let .X1 < . . . < Xn denote the ordered component strengths. Then, under equal load-sharing, the bundle breaking strength is .

max{(n − j + 1)Xj /n} = sup xF n (x), j ≤n

x

where .Fn is the empirical cdf of the component strengths. In addition, symmetry considerations lead to considerable simplification of formula (6.2) when the component survival distribution is Weibull with shape parameter .ρ. This is our interest here for series circuits. In this case, we can just consider the number of working components, .0, 1, . . . , n, rather than the set of working components. Since the contribution of .k → j , where .k > j , is ρ ρ to the path, coefficients .C and .A for path .k → k → . . . → .n (k − j )/k p p 1 2 kt → kt+1 = 0 are, respectively,  ρ .n

(k1 − k2 ) ρ

k1

+ ··· +

(kt−1 − kt ) ρ

kt−1

kt

+ ρ kt

 and (−1)t+1

      n k1 kt−1 kt ··· . kt kt k1 k2

Formula (6.1) was derived for a parallel system of components. However, consider another reliability structure, for the system of n components, say system .Rn . Consider any path, P , where the complement of .Sm in P , .N − Sm , is a set of components whose failure causes the failure of .Rn . Then, because of monotonicity of the load-sharing rule, .λi (Sm ) can be modified and set equal to infinity. Thus, such terms are zero in the sums in (6.1) and (6.2). This modification was needed in the calculation of the survival distribution of the grid bundles

68

6 Statistical Properties of a Load-Sharing Bundle

in Sect. 7.2.2 since the grid fails not only if all the components fail but also if there is just a “crack” across the bundle.

6.3 The Bundle Strength Density as a Gamma-Type of Mixed Distribution The density .g0 of .G0 given by (6.3) is the affine mixture  .g0 (y)

=

 λ exp(−λy)dA(λ) =

 =

 ∞ y exp(−θy) θ

 ∞ y exp(−θy)dθdA(λ)

λ λ

λdA(λ)dθ ≡

 y exp(−θy)a1 (θ)dθ.

Repeating the same interchange of order of integration .n − 2 more times gives  .g0 (y)

=

 y n−1 exp(−θy)an−1 (θ)dθ ≡

θ n y n−1 exp(−θy)b(θ)dθ. (n − 1)!

(6.4)

Formula (6.4) is a bona fide gamma mixture representation where the mixing is over the gamma scale parameter and where the mixing density, .b(θ), is a sized biased convolution of uniforms (see Durham and Lynch, 2000; Li and Lynch, 2011, for details and interesting consequences of this representation). If .G1 (y) = G0 (y ρ ), .g1 (y) = ρy ρ−1 g0 (y) is a gamma-type mixture. Note that, from (6.4), .g1 (y)y

−(nρ−1) = ρ



y→0 θn exp(−θy ρ )b(θ)dθ −→ ρ (n − 1)!



θn b(θ)dθ. (n − 1)!

(6.5) Formula (6.5) shows that the distribution of the breakdown strength of a series system of such bundles, a so-called chain-of-bundles, is asymptotically Weibull with shape parameter .nρ and scale parameter based on the n-th moment of .b(θ).

6.4 The Gibbs Representation of the Distribution of the States of a Bundle Below is a summary of results from Sections 2 and 4 in Li et al. (2019) (see also, Li and Lynch, 2019). Here, the set A denotes the set of working components in the bundle that work under a load per component s, and .Ac , the complement of A, is the set that has failed. A starting point is to model the probability that the set of working components, .Ps (A), is a Gibbs measure with energy .Us (A), .A ⊆ N with .Us (∅) = 0, where

6.4 The Gibbs Representation of the Distribution of the States of a Bundle

.Ps (A)

=

exp{−Us (A)} . Z(s)



Here, the normalizing constant .Z(s) =

69

(6.6)

exp{−Us (A)} = 1/Ps (∅) is referred to as the

A⊆N

partition function in statistical mechanics and is just the reciprocal of the probability that none of the components works under a load s per component. The local structure of .Ps (A) defined in (6.6) is the log-odds for component .i ∈ A. An attractive feature of load-sharing systems is that the local structure is simply .σi (A, s)

≡ ln

F i (λi (A)s) Ps (A) = ln . Ps (A − {i}) Fi (λi (A)s)

(6.7)

If .Us (A) is the energy, define its potential .Vs (A) as .Vs (A)



≡−

(−1)|A−B| Us (B),

(6.8)

B⊆A

where .|A| denotes the cardinality of the set A. We have immediately .Us (A)

≡−



(6.9)

Vs (B)

B⊆A

since (6.8) and (6.9) are just the Möbius inversion formulas that relate .Us and .Vs . Thus, from (6.6), (6.7), and (6.9), .σi (A, s)

Ps (A) = Us (A \ {i}) − Us (A) Ps (A \ {i})   = Vs (L) − Vs (L) = ln

L⊆A

=

L⊆A\{i}



(6.10)

Vs (L).

L⊆A:i∈L

Identity (6.10) gives a way to calculate the log-odds ratios by summing potentials. The following theorem shows, via the Möbius inversion formula, how to obtain the potentials from the log-odds ratios.  Theorem 6.1 (Li et al. 2019, Theorem 2.1) Let .σs (A) ≡ σi (A, s). Then, for .L = ∅, i∈A

 .Vs (L)

=

(−1)|L\A| σs (A)

A⊆L

|L|

.

(6.11)

There is some simplification in the quantities .U (A), .V (A), and .σ (A) defined above when the load-sharing rule is the equal load-sharing rule; they only depend on .|A|. This is useful in the situation we consider here where the details are spelled out in the following example.

70

6 Statistical Properties of a Load-Sharing Bundle

Example (i) Consider series circuits of capacitors of equal capacitance under increasing voltage load. Thus, the equal load-sharing rule, .λi (A) = n/|A|, applies. Then, if .|A| = a, (6.7) becomes .σi (A, s)

= ln

F (ns/|A|) ≡ σ ∗ (|A|, s) for i ∈ A, F (ns/|A|)

(6.12)

and also, if .|L| = l, then (6.11) becomes  .Vs (L)

=

(−1)l−a aσ ∗ (a, s)

a≤l

l

≡ Vs∗ (l),

(6.13)

where .aσ ∗ (a, s) = σs (A) in Theorem 6.1 for .i ∈ A ⊆ L. Thus, from (6.9), (6.12), (6.13), .Us (A)

≡−

 B⊆A

Vs (B) = −



Vs∗ (|B|) = −

B⊆A

 a  V ∗ (b) ≡ Us∗ (a). b s

(6.14)

B≤|a|

From (6.6), (6.12), (6.13), and (6.14), if .|A| = a, then .Ps (A)

=

  exp{−Us∗ (a)} n exp{−Us∗ (a)} and pa,s = Ps ({A : ∀|A| = a}) = , Z(s) Z(s) a

(6.15) where .pa,s is the probability distribution of the number of working capacitors. (ii) Furthermore, when the capacitors have BD voltages whose survival function is ρ .W (x; τ, ρ) = exp{−(x/τ ) } ≡ W (x). Then, (6.12) becomes        |A| ρ ρ W (λi (A)s) |A| ρ ρ =− .σi (A, s) = ln s − ln 1 − exp − s W (λi (A)s) τ τ = σ ∗ (|A|, s) for i ∈ A      |A| ρ ρ |A| ρ ρ s − ln s (1 + o(1)) =− τ τ ρ    |A| ρ |A| s ρ − ln − ρ ln s + o(1). =− τ τ Remark (Description of the Stochastic Failure Process for the Bundle) As noted earlier, the failure of components is a failure of two types of component failures as the load increases. As the load increases, a component fails due to the increasing load. We refer to this as a Phase I failure. When this component fails, it can cause a sequence of component failures (Phase II failures) due to load transfer from the Phase I failure and the sequence of failures in the Phase II cycle. The bundle fails through a sequence of Phase I/II cycles.

6.5 Examples of Size Effects

71

Let .N ∗ denote the collection of working components right after a Phase I failure. Then, the Gibbs measure for the sub-bundle .N ∗ gives the distribution for the set of components that survive after the Phase II cycle for the Phase I failure. If the load per component is s when the Phase I failure occurs, then the load per component on the sub-bundle is .N s/N ∗ . The Gibbs measure is the equilibrium distribution of a Markov random field (Preston, 1974). Here the field is on the set .N ∗ . The dynamics on the field is given by the local structure (6.7) where the state of component is 0 (failed) or 1 (working). Formula (6.7) indicates the log-odds that the i-th component will fail when the set of working components is A. Here, the Gibbs measure reflects that after the Phase II failures in the cycle for .N ∗ , in equilibrium, the load-sharing rule indicates how the load is shared by the surviving nodes.

6.5 Examples of Size Effects In this section, we investigate size effects for chain-of-bundles where the bundles are horizontal collections of .k = 2, 3, 4, 5 components and the bundle distribution is either based on the equal load-sharing rule or a local load-sharing rule. It is assumed that the component strength distribution is .W(2, 2). Figures 6.1 and 6.2 give the distribution and Weibull plots for the equal and local loadsharing rules, and Fig. 6.3 is the overlay of these plots. Figures 6.4 and 6.5 give the plots of the linear least squares fit to the Weibull plots lower tail for the equal and local load-sharing rules, respectively. In Figs. 6.4 and 6.5, the linear least square fit is on interval [0,t*], where t* is that value above which the Weibull plots start deviating from linearity (i.e., .R 2 < 0.999). Table 6.1 gives values of .t ∗ and the survival distribution for a bundle, .G(t ∗ ), for the two rules and for .k = 2, 3, 4, 5 components. Other columns give values for the survival n n distribution, .G (t ∗ ), for various chain lengths, n. Note that small values of .G (t ∗ ) indicate very little size effect and that, for samples of size m, for a chain-of-bundles of length n n, .mG (t ∗ ) is the expected number of observations in the sample greater than .t ∗ . Large

Fig. 6.1 Distribution and Weibull plots for the equal load-sharing rule. Bundle size is .k = 2, 3, 4, and 5

72

6 Statistical Properties of a Load-Sharing Bundle

Fig. 6.2 Distribution and Weibull plots for the local load-sharing rule. Bundle size is .k = 2, 3, 4, and 5

Fig. 6.3 Distribution and Weibull plots for both the equal and local load-sharing rules. Bundle size is .k = 2, 3, 4, and 5

expected values indicate when size effects are possible for that sample size. Note for .k = 2, = 40, and .m = 2,500,000, the Weibull approximation based on the linear least squares fit would not be likely to indicate any size effect even if we had a sample of size 2,500,000. n Similar calculations based on the .G (t ∗ ) column can be used to assess at what chain lengths n and sample sizes m the difference between the Weibull linear least square fit and the actual chain-of-bundles distribution is negligible. n Figure 6.6 gives the overlayed plots of .(n, G (t ∗ )) for both the equal load-sharing and local load-sharing rules. These plots indicate how long the chain has to be for the Weibull linear least square fit to give reasonable approximations to the chain of bundles distribution. The chain needs to be very long for bundles of sizes 4 and 5 and relatively short for bundle sizes 2 and 3.

.n

Fig. 6.4 Linear least squares fit to the Weibull plots lower tail for the equal load-sharing rule. Bundle size is .k = 2, 3, 4, and 5

Fig. 6.5 Linear least squares fit to the Weibull plots lower tail for the local load-sharing rule. Bundle size is .k = 2, 3, 4, and 5

74

6 Statistical Properties of a Load-Sharing Bundle n

Table 6.1 Values of .t ∗ , for .k = 2, 3, 4, 5 and .G (t ∗ ) for .n = 10, 20, 30, 40 for the equal and local load-sharing rules Load-sharing Equal and Local Equal Equal Equal Local Local Local

k 2 3 4 5 3 4 5

.t



1.1160661 0.7474775 0.6116817 0.5082182 0.7280781 0.5728829 0.4661862

∗) 0.6904407 0.9117727 0.9710638 0.9935315 0.9140129 0.9738128 0.9937406

.G(t

= 10 0.0246186 0.3970687 0.7455513 0.9371654 0.4069331 0.7669286 0.9391399

.n

= 20 0.0006061 0.1576636 0.5558468 0.8782790 0.1655945 0.5881794 0.8819838

.n

= 30 0.0000149 0.0626033 0.4144123 0.8230927 0.0673859 0.4510916 0.8283062

.n

= 40 0.0000004 0.0248578 0.3089657 0.7713740 0.0274216 0.3459550 0.7778955

.n

n

Fig. 6.6 Chain size effects: Plots of .(n, G (t ∗ )) for both the equal and local load-sharing rules. n ∗ (t ) is the bundle strength distribution where the bundle size is .k = 2, 3, 4, and 5. The length of the chain is n

.G

Comment Fibers and fibrous composites are discussed in the next chapter. Though the Gibbs measures are not emphasized there, in Appendix A.3, we relate the Gibbs measure potentials and energies to the stresses and potential energies for load-sharing bundles of fibers.

Chapter 7

An Illustrative Application: Fibers and Fibrous Composites

In this chapter, we consider load-sharing systems where load is shared among working components and transferred to other working components as components fail. The argument for the survival distribution, F , and its cumulative hazard function, H , will either be time or increasing load in the context of load-sharing systems. Pertinent to this discussion are the weakest link hypothesis, size effects, Weibull analysis, well-defining load-sharing rules, size effects, and Weibull analysis. In the next section, we discuss curvature in the Weibull plots of the survival distribution S and size effects. The relevance of the weakest link chain-of-bundle load-sharing model is illustrated for fibers and fibrous composites. The load-sharing fiber bundle model is an accelerated failure load or lifetime model. Related to such acceleration models is cumulative damage type of models. These are not emphasized here, but we do describe their role in modeling the failure of fibrous materials. In the last section, we discuss the particular application of the load-sharing fiber model to Rosen’s Specimen-A data where the specimens are unidirectional glass fibers embedded in an epoxy material.

7.1 The Weibull Distribution and the Weakest Link Hypothesis The Weibull has been a popular distribution to model the strength and time to failure of fibers and fibrous composites. Here we consider its appropriateness to model (Bader & Priest, 1982) strength data for (1, 10, 20, 50 mm) carbon fibers and (20, 50, 150, 300 mm) impregnated 1000 fiber tows of the same type of fibers used in the fiber tests. The tows were impregnated with a liquid epoxy resin.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 J. U. Gleaton et al., Fiber Bundles, https://doi.org/10.1007/978-3-031-14797-5_7

75

76

7 An Illustrative Application: Fibers and Fibrous Composites

7.1.1 The Bader–Priest Fiber Data This data can be found in Smith (1991) and Crowder et al. (1991) with discussion where (Smith, 1991) states that “Given the wide acceptance of the weakest-link notion in statistical studies of strength of materials, these results must be considered disturbing, though in none of the studies so far is it clear whether the discrepancy is due to failures of experimental procedure or to something more fundamental.” Watson and Smith (1985) found that the Weibull weakest link hypothesis was not supported by the data for all the lengths since the p-value of the GOF test they used was less than 0.01. The same GOF test based on Table 7.1, though, for just the 1, 20, and 50 mm fibers has a large p-value of 0.2784616 (calculated later) that supports length proportional Weibull hazards model for those lengths. This suggests that the non-Weibull behavior of the 10 mm fibers results in the rejection of the Weibull weakest link hypothesis. Our analysis is based on Table 7.1 and Figs. 7.1, 7.2, 7.3, and 7.4. The Weibull plots in Figs. 7.1 and 7.2 indicate the Weibull is a good fit for the 1, 20, and 50 mm fibers. Figure 7.4, though, indicates that the Weibull is not a good fit for the 10 mm fibers. (Perhaps this discrepancy may be due to a testing anomaly suggested in the above references or a size effect.) The non-Weibull behavior of the 10 mm fibers results in the rejection of the Weibull weakest link hypothesis for all the lengths, but the Weibull weakest link fit in Table 7.1 for the 1, 20, 50 mm fibers does support Weibull weakest link hypothesis. This is confirmed by the large p-values for the length proposal Weibull hazards model in Table 7.1. (Also the p-value of the GOF test stated by Watson and Smith (1985) is less than 0.01, while its p-value is 0.28 for the 1, 20, and 50 mm fibers.) The overlay of the Weibull plots for this data adjusted for the length proportional hazards in Fig. 7.4 further confirms the Weibull weakest link hypothesis. To formalize the GOF analysis, let F (x; L) denote the probability that a fiber of length L fails under tensile stress x. The weakest link hypothesis yields the Weibull model Table 7.1 Parameter estimates and the corresponding maximum log-likelihood values of fitting the fiber data in Watson and Smith (1985) with model (7.1) and individually using the Weibull model Fit the data with difference values of L Fit the data with difference values of L individually (no. of parameters = 6) based on Eq. (7.1) (no. of parameters = 2) ˆ ˆ ˆ Max. log-likelihood β(L) η(L) ˆ Max. log-likelihood L β(L) = βˆ η(L) δˆ 5.592967 4.575247 −71.02396 1 mm 4.591984 5.836908 4.591984 −71.11730 5.524439 2.648193 −49.92875 20 mm 4.591984 5.836908 2.748547 −51.31214 6.038520 2.424450 −36.16505 50 mm 4.591984 5.836908 2.349236 −37.23194 Total −159.66138 Total −157.11785

7.1 The Weibull Distribution and the Weakest Link Hypothesis

77

Probability Plot of 1 mm, 20 mm, 50 mm Weibull - 99% CI 1 mm

99.9

20 mm

99.9 90

90

1 mm Shape 5.593 Scale 4.575 N 57 AD 0.388 P-Value >0.250

50 50 10 10

Percent

1 0.1

1 5

2 50 mm

99.9 90

1

2

20 mm Shape 5.709 Scale 2.630 N 67 AD 0.169 P-Value >0.250 50 mm Shape 6.048 Scale 2.422 N 64 AD 0.246 P-Value >0.250

50 10 1 0.1 1

2

Fig. 7.1 Weibull plots of the 1, 20, 50 mm fiber data where the shape and scale are estimated separately

Fig. 7.2 Weibull plots of the 1, 20, 50 mm fiber data where the shape is 5.651475 and the scale is estimated based on the Weibull weakest link analysis in Table 7.1

78

7 An Illustrative Application: Fibers and Fibrous Composites

−2 −4

ln[−ln(1−F)]

0

2

Weibull probability plot based on normalized Weibull distributions

−6

1 mm 20 mm 50 mm

−1.5

−1.0

−0.5

0.0

0.5

1.0

log(standardized tensile stress)

Fig. 7.3 Weibull plots of the 1, 20, 50 mm fiber data based on normalized Weibull distributions

Probability Plot of 10 mm Weibull - 95% CI 99.9 Shape Scale N AD P-Value

Percent

99 90 80 70 60 50 40 30 20 10 5 3 2 1 1.5

2

3

10 mm

Fig. 7.4 Weibull plot of the 10 mm fiber data

4

5

5.029 3.318 62 0.915 0.019

7.1 The Weibull Distribution and the Weakest Link Hypothesis

 F (x; L) = 1 − exp −L

 x β  δ

79

  β(L) x , L > 0, x > 0, (7.1) = 1 − exp − η(L)

−1

where β(L) = β and η(L) = δL β . Here, η(L) is the scale parameter, and β(L) is the shape parameter of the Weibull model. To test the validity of Eq. (7.1), a likelihood ratio test based on fitting the single fiber data in Watson and Smith (1985) (see, also, Smith, 1991) with three different gauge lengths L = 1, 20, and 50 mm is used (L = 10 mm is not used). The parameter estimates and the corresponding maximum log-likelihood values of fitting these data sets together and individually using the Weibull model are presented in Table 7.1. The likelihood ratio test statistic is −2 × [−159.66138 − (−157.11785)] = 5.087251. Under the null hypothesis that Eq. (7.1) is valid, the likelihood ratio test statistic is asymptotically distributed as the chi-square distribution with degrees 2 ). Therefore, the p-value is Pr(χ 2 > of freedom (6 − 2) = 4 (denoted as χ(4) (4) 5.087251) = 0.2784616, and the null hypothesis that Eq. (7.1) is valid is not rejected. As an alternative to the above GOF analysis, we now consider the Weibull regression model presented in Sect. 1.2.8.2 with the length of the fiber as a covariate, i.e., z = L,   β t Pr(T < t; β, η(z)) = F (t; β, η(z)) = 1 − exp − , t > 0, (7.2) η(z) where β is the shape parameter and η(z) is the scale parameter of the Weibull model. The link function is η(z) = exp [ν0 + ν1 ln(z)] .

(7.3)

Based on the single fiber data in Watson and Smith (1985) with three different gauge lengths L = 1, 20, and 50 mm, we obtain the estimates of the model parameters as βˆ = 5.651475, νˆ 0 = 1.5126677, and νˆ 1 = −0.1676359. The estimates of the Weibull parameters based on model (7.2) with link function (7.3) and the maximum log-likelihood values are summarized in Table 7.2. Table 7.2 Parameter estimates and the corresponding maximum log-likelihood values of fitting these data sets together and individually using the Weibull regression model

z=L 1 mm 20 mm 50 mm

Parameter estimates η(z) ˆ βˆ 5.651475 4.538823 5.651475 2.746907 5.651475 2.355781 Total

Max. log-likelihood −71.10181 −51.22476 −37.03629 −159.36287

80

7 An Illustrative Application: Fibers and Fibrous Composites

From the Weibull regression model, it is clear that the logarithm of the length, ln(L), is a factor significantly contributed to the tensile stress (with p-value < 2 × 10−16 for testing the null hypothesis νˆ 1 = 0). Similarly, to test the validity of Eq. (7.2), a likelihood ratio test can be used. The likelihood ratio test statistic is −2[−159.3627 − (−157.11785)] = 4.490221. Under the null hypothesis that Eq. (7.2) is valid, in comparing to fitting the data for different lengths L with individual Weibull distributions, the likelihood ratio test statistic is asymptotically distributed as the chi-square distribution with degrees of freedom (6 − 3) = 3. Therefore, the p-value is 0.21316, and the hypothesis that Eq. (7.2) is valid is not rejected. Based on the Weibull regression model, the hazard ratio of fiber with length L0 and fiber with length L1 can be obtained as      h(t; L1 ) L1 η(L ˆ 0) β ˆ = exp −β νˆ 1 ln = . h(t; L0 ) η(L ˆ 1) L0 For ln(L1 /L0 ) = 1, i.e., the log-length increase by 1 unit, the hazard increases by exp(−βˆ νˆ 1 ) = 2.57897 times.

7.1.2 The Bader–Priest Impregnated Tow Data Below we discuss the weakest link hypothesis for the impregnated tow data. The Weibull plots indicate that the Weibull is an appropriate fit for this data since all the p-values for the Anderson–Darling (AD) GOF test are greater than 0.25. Except for the 300 mm bundles, the shape parameter is about 19 for the other lengths. This suggests that a proportional Weibull hazards model may be used to determine if the weakest link hypothesis (i.e., length proportional hazards) holds for the other lengths. This is doubtful since all the scale parameters are about 2.8, but a formal analysis is based on Table 7.2. Before discussing this analysis, we note that the shape parameter for the 300 mm data is about 13, which is considerably less than 19. Based on the discussion in Appendix A.2, this suggests that the difference may be due to a size effect, possibly based on a Weibull competing risk with shape about 13. This risk is accentuated for the 300 mm length but attenuated for the other lengths because their sample sizes are too small to identify this risk. Watson and Smith (1985) considered a chain-of-bundle model for the impregnated tows. The length of a bundle is based on Rosen’s experimental work described in the next section. Rosen (1964) discovered that around a fiber break in a bundle the broken fiber could not bear any load for a distance that he referred to as the ineffective length and that the load was transferred to the adjacent unbroken fibers. Watson and Smith (1985) assumed that the ineffective length was 0.04 mm that is about five times the mean fiber diameter of 0.008 mm.


Thus, for example, a 300 mm tow of fibers consisted of 300/0.04 = 7500 horizontal bundles. This assumes that a bundle is a horizontal collection of 1000 ineffective-length fiber elements. Furthermore, physical considerations suggested that the critical number of failed adjacent fibers that cause bundle failure was k∗ = 3 or 4. This is consistent with the ratio of the shape parameter of 20 for the 20, 50, 150 mm tows to the shape parameter of 6 for the 1, 20, and 50 mm fibers, i.e., 3.33 = 20/6, while that for the 300 mm tows was about 13/6 = 2.2. Relevant to this is the discussion in Appendix A.2 regarding extrinsic strength that is caused by defects and is more prevalent at greater sizes. This is apparent in the Weibull plots of the tow data in Figs. 7.5 and 7.6. The larger length of 300 mm increases the possibility of defects that cause tow failure and decreases the shape parameter. Notice that the Weibull plots in Figs. 7.5 and 7.6 progress from fairly linear (almost purely extrinsic strength) for the 300 mm tows, to fairly convex (competing risks, both extrinsic and intrinsic strength) for the 150 and 50 mm tows, and then to fairly concave (almost purely intrinsic strength) for the 20 mm tows. In the last two sections, we discuss Rosen's experiments to give further insights into the failure of composites. There we discuss how his discovery of the ineffective length gives insights into the relationship between the shape parameters of the ineffective-length fiber strength distribution, the bundle size, and the length of the chain in the chain-of-bundles model for his Series A composite specimens, as well as the critical configuration of fibers in a bundle that causes its failure. A similar discussion was given in Chap. 4 for the cell model for the failure of a dielectric.

Fig. 7.5 Weibull Plots and GOF Tests for the Bader–Priest Impregnated Bundle Data. Fitted Weibull parameters: 20 mm: shape 20.27, scale 2.897, N 28, AD 0.425, P > 0.250; 50 mm: shape 19.41, scale 2.884, N 30, AD 0.302, P > 0.250; 150 mm: shape 18.96, scale 2.763, N 32, AD 0.264, P > 0.250; 300 mm: shape 12.86, scale 2.605, N 29, AD 0.220, P > 0.250



Fig. 7.6 Weibull Plots and GOF Tests for the Bader–Priest Impregnated Bundle Data

Before doing that, we want to discuss modifications that model the extrinsic strength that is caused by defects. These are cumulative damage models.

7.1.3 Cumulative Damage Models

Watson and Smith (1985), Smith (1991), and Crowder et al. (1991) all expressed reservations regarding the Weibull weakest link model, but Watson and Smith noted that it is practical for prediction purposes. Motivated by some of the curvature in the Weibull plots of the Bader–Priest data, Durham and Padgett (1997) proposed a cumulative damage model for the failure of carbon fibers and composites. (See also three other papers by Durham and Padgett and their coauthors related to the Bader–Priest data and cumulative damage: Black et al., 1990; Durham & Padgett, 1991; Padgett et al., 1995.) This model incorporates the intrinsic strength (referred to as "theoretical strength" by them) together with its reduction as damage accumulates. Their model is based on the most severe flaw in a fiber or composite test specimen. The damage accumulates at this flaw under increasing tensile load and reduces the intrinsic strength until it reaches zero and causes specimen failure. Size effects are incorporated into the spatial distribution of the flaws; the size of the specimen stochastically increases the initial damage caused by the most severe flaw and the initial reduction of the intrinsic strength. Their approach is a generalization


of the Birnbaum–Saunders model that incorporates size/scaling effects into the analysis and relates the intrinsic to the extrinsic strengths. Related to Durham and Padgett's cumulative damage approach under increasing load is Taylor (1987)'s work on the static fatigue of semicrystalline polymer materials. Here the polymer is modeled as a weakest link chain-of-bundles model. Defects occur in the amorphous regions in a bundle, where cracks are initiated in the polymer material at random times. A pure birth process models the growth of a particular crack, where the times between births have exponential distributions with birth rates based on stress factors at the boundary of the crack. These are explosive birth processes: there is an infinite number of births in finite time, and the time at which this occurs is referred to as the explosion time. The time it takes for a given crack to cause bundle failure is the sum of the time it takes the defect to initiate crack growth plus its explosion time. The polymer failure time is the minimum of the bundle failure times. Taylor (1987) used this formulation to construct what he calls the characteristic distribution function, which has the exact weakest link relationship, and used it as a reference distribution to study the size effect (see Taylor (1987)'s formulas (5.4)–(5.9)).

Remarks
(i) Durham and Padgett's and Taylor's approaches are constructions that incorporate damage to reduce the intrinsic strength or lifetime of an ideal physical structure. These approaches are accelerated failure load or accelerated failure lifetime (AFL) models, since they reduce the strength or survival time (as opposed to accelerated failure rate (AFR) models, which increase the failure rate as damage accumulates).
(ii) Taylor's study is for semicrystalline polymer materials. Such materials are "typically, 40% – 80%" crystalline, and the rest is amorphous. A pure crystalline material's strength would be based on molecular bonds that are magnitudes stronger than those of the amorphous material. In the latter, there are deformations and dislocations where defects occur, or much weaker regions where damage is initiated. Thus, modeling the amorphous material is fundamental to the failure of disorganized materials. In the last chapter of this book, we discuss such a model in the context of HfO2 dielectrics.
(iii) Another important consideration is how to relate TBD and BD under increasing load. In Chap. 5, we discussed this relationship under a power law assumption when the BD distribution is Weibull. There, we saw that, under a static load x, a power law relationship for the Weibull scale parameter (the so-called characteristic lifetime), τ(x) ∝ x^(−a), a > 0, links the time to failure under a static load to the breaking strength under a uniformly increasing load.


7.2 Discussion of Rosen's Experiments

Rosen (1964, 1965) conducted a number of elaborate experiments on fibrous composites that gave fundamental insights into their failure. Here we are interested in his Series A experiments. These experiments consisted of nine specimens that were single layers of unidirectional glass fibers embedded in an epoxy material. The specimens were put under increasing load until failure. Brief descriptions of the specimens and their failure analysis are given in the next two subsections. Many more details can be found in Grego et al. (2014) and Li et al. (2019).

7.2.1 Description of the Series A Experiments and the Analysis of the Specimen A-7 Photographs

The test section of each specimen was 0.5 × 1 inch in size with a thickness of 0.06 inches. The number of fibers in each test specimen is given in Table 7.3, along with the tensile load on the specimen at which it failed. Specimen A-7, which failed at an ultimate load of 116, was given special consideration. Photographs of this specimen were taken under polarized light at various applied loads. Under no load, the fibers are dark, and the epoxy binder is light. Under increasing load, the fibers brighten, and they are dark, vertically, around fiber breaks for a distance that Rosen referred to as the ineffective length. This is the distance around a break over which the fiber does not support load. Rosen used the ineffective length to consider such fibrous composites as a chain-of-bundles. A bundle was a parallel horizontal system of ineffective-length fiber components with equal load-sharing in a bundle. His Specimen A-7 photographs indicate otherwise, though, since there are X-shaped regions of increased brightness around breaks, indicating that the load at a break is transferred diagonally and horizontally to adjacent fibers but not vertically. This suggests (i) that a bundle is not a horizontal collection of ineffective-length fibers and (ii) that the load-sharing is local around a break. In fairness to Rosen, he certainly was aware that this was a crude idealization of a 2-dimensional (2-D) fibrous composite. In fact, his work with Zweben (Zweben & Rosen, 1970) discussed local load-sharing in the context of 3-D unidirectional fibrous composites.

Table 7.3 Data for Rosen's nine Series A specimens

Specimen        1    2    3    4    5    6    7    8    9
Failure load    114  84+  111  125  116  117  116  65+  107
No. of fibers   92   93   93   94   92   94   93   ∗    83

+ Right-censored observation; specimen failed in grip section
∗ The number of fibers was not given


Grego et al. (2014) used the local load-sharing criteria, in which (1/6)-th of the load is transferred to each of the diagonally and horizontally adjacent fibers around a break, to determine the "in viva" ineffective-length fiber strength distribution. Here, they consider the fibrous composite to consist of a 22 × 93 grid where the nodes in the grid are the ineffective-length fiber components. The value 22 was determined by the ineffective length, and 93 was the number of fibers in the Specimen A-7 composite. (The number of rows in the grid was determined by taking the difference in vertical distance between the break closest to the top of the specimen and that closest to the bottom, which was 0.822 inch. This was divided by the median ineffective length, 0.03725 inch, of the 92 fiber breaks that occurred during the experiment, giving about 22 (∼0.822/0.03725).) Due to the nature of the experiments (photographs taken at various loads as the load on the specimen increased), all the actual breaking loads of failed nodes were censored. Of the 92 fiber breaks in the photographs, 84 were isolated pure Phase I breaks, while the other 8 gave 4 pairs that were possible Phase I/II cycle breaks. The local load-sharing described above for the way load was transferred around breaks to adjacent fibers was sufficient to determine the censoring sets. It was straightforward to determine censoring intervals for the breaking loads of the pure Phase I breaks. The censoring for the possible Phase I/II breaks and for all of the 1954 = 22 × 93 − 92 unfailed nodes in the grid representation of Specimen A-7 was a bit more complicated but resulted in sets of intervals for these cases.1 These censoring sets and intervals determined a partition consisting of 22 disjoint intervals

P = {Ii : i = 1, 2, . . . , 22}  with  ∪(i=1 to 22) Ii = [0, ∞).

Each interval or censoring set can be represented as a union of intervals in the partition. See Section 7 and Tables 1–6 in Grego et al. (2014) for further details about the partition and its construction. This partitioning suggests using a partition-based prior approach to estimate the survival distribution of the ineffective-length fiber breaking load based on Sethuraman and Hollander (2009)'s work. Specialized to the Specimen A-7 data, it is simply a multinomial problem where the censoring partition, P, determines the 22 multinomial categories. Let p = (p1, p2, . . . , p22) be the multinomial category probabilities. Then, if the prior for p is a Dirichlet with parameter α = (α1, α2, . . . , α22) (where the sum α = α1 + α2 + . . . + α22 is referred to as the weight), the posterior given the censored observations is a mixture of Dirichlet distributions. The parameter α of the Dirichlet prior in Grego et al. (2014) is based on some experimental work by Zhao and Takeda (2000) and an earlier crude digitized version of Rosen's data. There, α(Ii) = αi/α, where α(I) is the probability

1 Do not confuse the 22 in 22 × 93, which corresponds to the 22 rows determined by the ineffective length (where 93 is the number of fibers/columns of fibers embedded in the specimen), with the 22 intervals in the partition. They are different entities.


measure induced by the Weibull, W(τ, ρ), with shape parameter τ = 1.5 and scale parameter ρ = 6.5. Figures 8 and 9 and Table 7 in Grego et al. (2014) summarize their findings when the weight is α = 50. Their Figure 8 overlays the graphs of the posterior mean survival distribution, S(t), with 95% Bayesian credibility bands and the prior mean. Linear least squares fits of ln(−ln(S(t))) versus ln(t) were used to obtain values of the Weibull shape and scale parameters, τ and ρ, respectively. These are given in Table 7 of Grego et al. (2014) for various weights, along with a W(2.05, 4.95) distribution fit to the posterior mean survival function for α = 50. The Weibull plot for the W(2.05, 4.95) in their Figure 9, overlaid with the Weibull plots of the other quantities from their Figure 8, indicates that the lower tail of the W(2.05, 4.95) distribution is well within the confidence bands. This was the justification for the use of the W(2, 5) as the ineffective-length fiber strength distribution in Li et al. (2019)'s study of the shape of a bundle in the chain-of-bundles model for Rosen's Series A data in Table 7.3. This is discussed in the next section.
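To illustrate how the Weibull probability measure induces the Dirichlet prior parameters over a censoring partition, the sketch below computes αi = α · α(Ii) for a W(1.5, 6.5) prior mean and a total weight of α = 50. The partition endpoints used here are hypothetical placeholders; the actual partition is given in Tables 1–6 of Grego et al. (2014).

import numpy as np
from scipy.stats import weibull_min

shape, scale = 1.5, 6.5        # W(tau, rho) prior-mean Weibull from Grego et al. (2014)
total_weight = 50.0            # Dirichlet weight alpha

# Hypothetical partition of [0, infinity) into 22 disjoint intervals (placeholder endpoints)
edges = np.concatenate(([0.0], np.linspace(1.0, 11.0, 21), [np.inf]))

# alpha_i = alpha * Pr_W(I_i): the Weibull probability mass of each partition interval
cdf = weibull_min.cdf(edges, shape, scale=scale)
alpha_i = total_weight * np.diff(cdf)

print(len(alpha_i), alpha_i.round(3), alpha_i.sum())   # 22 values summing to 50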

7.2.2 Discussion Regarding the Shape of the Bundle in the Chain-of-Bundles Model

In this section, we discuss the chain-of-bundles model in the context of the data given in Table 7.3. Li et al. (2019) used these data in a simulation study to determine which bundle shapes for the chain-of-bundles 22 × 93 grid model give satisfactory approximations to the estimated breaking load distribution based on Table 7.3. Their use of this size grid and the W(2, 5) distribution for the node breaking strength distribution is based on Grego et al. (2014)'s analysis of Specimen A-7. The maximum likelihood estimates of the Weibull parameters for a Weibull fit to the breaking load distribution are given in Table 7.4. Figure 7.7 gives the distribution and Weibull plots of the MLE Weibull fit and the KM fit with both the KM and MLE 95% confidence bands. Both the KM and MLE fits account for the censoring of the data in Table 7.3. Li et al. (2019) investigated the chain-of-bundles model to model the Series-A data. They studied bundles that are rectangular r × c grids, 3 ≤ r, c ≤ 6, and base their investigation on the load-sharing criteria that Grego et al. (2014) used to analyze the Specimen A-7 photographs. Li et al. (2019) used an absorbing state load-sharing rule that is consistent with these criteria.

Table 7.4 Weibull parameter estimates for the breaking load distribution

Parameter   Estimate   Standard error
Shape       22.0406    6.02677
Scale       117.69     2.14405


Fig. 7.7 Distribution and Weibull Plot of the Kaplan–Meier (KM) and Weibull MLE fits for Series-A data in Table 7.3

Fig. 7.8 3 × 3 node schematic:

    1  2  3
    4  5  6
    7  8  9

For example, the 3 × 3 grid has 9 nodes, depicted in the Fig. 7.8 schematic. The sets of nodes C = {1, 3, 7, 9}, I = {5}, and E = {2, 6, 8, 4} are, respectively, the sets of corner nodes, of interior nodes, and of edge nodes that are not corner nodes. The local load-sharing criterion only allows load to transfer around a break, equally, diagonally and horizontally to the nearest adjacent nodes but not vertically. Thus, the interior node transfers load only to the corner nodes and to edge nodes 4 and 6 (5's nearest neighbors) but not to the other 2 edge nodes. The corner nodes only transfer their load to two nodes, their adjacent horizontal and diagonal nodes (their nearest neighbors). The edge nodes each have 3 adjacent nodes to which they can transfer load. The nine nodes with their nearest neighbors define a graph with directed edges from a node to its nearest neighbors. A random walk on the graph is defined where, if the walk is at a node, it moves along a directed edge to a nearest neighbor, and the move along each directed edge is equally likely. For example, if the walk is at node 5, it is equally likely (probability 1/6) to move to each of its 6 nearest neighbors. This defines the one-step probabilities for a Markov chain that describes the random walk on the graph. The formulas for the absorbing state load-sharing rule are given in (A.23)–(A.27) of Appendix A.2. With this information, Li et al. (2019)


simulated the breaking strength distribution for the rectangular grid bundles, Gr,c, 3 ≤ r, c ≤ 6, and used this to assess the fit of the chain-of-Gr,c-grid-bundles model for the breaking strength distribution of Rosen's Series A data in Table 7.3. To do this assessment, the scale parameters for the grid chains' breaking strength have to be calibrated with the estimated value of the scale parameter, τ̂ = 117.69, given in Table 7.4 for Rosen's Series A data. This calibration is required since (i) the units in Table 7.4 are in terms of the total load on the specimens, while (ii) the units in the calculation of the grid chains are in terms of the nominal load per component in a chain-of-grid-bundles where the bundles are stacked. In addition, (iii) the adjustment, Ag, is also needed since the grid bundles are not horizontal but rectangular and the bundles are not vertically stacked (see Li et al. (2019), Section 4.2.2 and Table 3, for more details regarding the adjustment Ag). Figures 7.9, 7.10, and 7.11 give the distribution and Weibull plots of this fit and the KM estimator overlaid with the chain-of-bundle fits for bundles that are grids of various sizes. Notice that all these fits are within the KM 95% confidence bands in these plots and the blue confidence region for the MLE Weibull fit. The pointwise 95% confidence intervals were calculated using the parametric bootstrap method with the MLEs of the Weibull parameters. A complete sample of size 9 (without censoring) was assumed for each bootstrap sample, and 5000 bootstrap samples were generated for each strength point for computing the percentiles. Figure 7.11 indicates that, except for the 3 × 3 grid, the square grids give good fits to the Weibull fit, with the 6 × 6 fit being the best.
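A minimal sketch of the parametric bootstrap just described, assuming complete samples of size 9 drawn from the fitted Weibull in Table 7.4 (shape 22.0406, scale 117.69); the strength grid below is illustrative.

import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(2019)
shape_mle, scale_mle = 22.0406, 117.69      # Table 7.4 MLEs
B, n = 5000, 9                              # bootstrap replicates, sample size
grid = np.linspace(90, 140, 60)             # strength points (illustrative)

cdf_boot = np.empty((B, grid.size))
for b in range(B):
    sample = weibull_min.rvs(shape_mle, scale=scale_mle, size=n, random_state=rng)
    c_b, _, s_b = weibull_min.fit(sample, floc=0)          # refit Weibull (location fixed at 0)
    cdf_boot[b] = weibull_min.cdf(grid, c_b, scale=s_b)

lower, upper = np.percentile(cdf_boot, [2.5, 97.5], axis=0)  # pointwise 95% bands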

Fig. 7.9 Distributions and Weibull Plots for 3 × c grids, c = 3, 4, 5, and 6. Blue-shaded area is the MLE Weibull fit confidence region


Fig. 7.10 Distributions and Weibull plots for 6 × c grids, c = 3, 4, 5, and 6. Blue-shaded area is the MLE Weibull fit confidence region

Fig. 7.11 Distributions and Weibull plots for square grids. Blue-shaded area is the MLE Weibull fit confidence region

The y-axis in the Weibull plot in Fig. 7.12 indicates that, for −2.875 < y < 2, the 6 × 6 grid fit and the Weibull fit from Table 7.4 are not discernible. These y-values correspond to percentiles in [1 − exp(−exp(−2.875)), 1 − exp(−exp(2.000))] = [0.055, 0.999]. For y < −2.875, Fig. 7.12 indicates that the 6 × 6 grid chain overestimates the breaking load for small percentiles, but the overestimation is modest for percentiles close to 0.055.


Fig. 7.12 Distribution and Weibull plots for the 6 × 6 grid chain. The Weibull has been adjusted for scale in the comparison with the chain-of-bundles distribution for the 6 × 6 grid

In Fig. 7.12, the chain length of the 6×6 grids is (22×93)/(6×6) = 57 (rounded up). Since the required adjustment for the 6×6 grid is Ag = 125.58 and τˆ = 117.69, the adjusted MLE Weibull fit has scale parameter τˆ /Ag = 117.69/125.58 = 0.94.

Chapter 8

Statistical Analysis of Time-to-Breakdown Data

In this chapter, we consider the Weibull analyses of Figures 3, 6, and 14 in Kim and Lee (2004) and Le (2012)'s analysis of their Figure 14 (Figures 4 and 5 of Le, 2012).1 We use this to see the role that the Weibull plays in their analysis, the physical interpretation of the Weibull shape and scale, and the BD formalism. In addition, other distributions are used to compare with the Weibull and to study the legitimacy of some of the BD formalism assumptions.

8.1 Fitting Breakdown Data with Different Statistical Distributions

In this section, we consider the lifetime data of HfO2-based gate dielectrics (gate area ≈ 4 × 10−4 mm2 and thickness ≈ 4.8–5 nm) presented in Kim and Lee (2004)'s Figure 14 (see also Le, 2012, Figure 4) with different frequency unipolar AC voltage stresses (10 kHz and 0.1 kHz) and different duty cycles Ton/T0 = 0.1 and 0.5, where T0 is the duration of each cycle and Ton is the duration of the "on" period in each cycle. Different lifetime distributions are fitted to the data. The following lifetime distributions are considered:
• Weibull distribution W(η, β)
• Lognormal distribution LN(μN, σN)
• Log-logistic distribution LL(exp(μL), 1/σL)
• Birnbaum–Saunders distribution BS(α, γ)

1 The original data for Figures 3 and 6 of Kim and Lee’s is not available. The figures and analyses in Chapter 8 are based on data we constructed visually from Figures 3 and 6. We want to thank Professor Jack C. Lee for his efforts, though unsuccessful, in trying to provide the original data. We also want to thank Professor Jia-Liang Le for the cycles to failure data for Kim and Lee’s (2004) Figure 14; Le used this in his (2012) Figure 4 Weibull plots.


Table 8.1 Parameter estimates of different distributions for the data in (Kim and Lee, 2004, Figure 14)

Distribution        Quantity   10 kHz, Ton/T0 = 0.1   10 kHz, Ton/T0 = 0.5   0.1 kHz, Ton/T0 = 0.1
Weibull             η̂          81.989                 222.645                263.030
                    β̂          2.799                  2.575                  3.098
                    MTTF       73.006                 197.700                235.221
                    −2 ln L    189.044                232.610                232.513
Lognormal           μ̂N         4.213                  5.180                  5.395
                    σ̂N         0.399                  0.483                  0.364
                    MTTF       73.137                 199.683                235.471
                    −2 ln L    188.513                234.880                232.158
Log-logistic        μ̂L         4.216                  5.220                  5.409
                    σ̂L         0.223                  0.277                  0.210
                    MTTF       73.636                 210.592                240.443
                    −2 ln L    188.459                235.434                232.931
Birnbaum–Saunders   α̂          0.408                  0.496                  0.369
                    γ̂          67.261                 175.506                219.847
                    MTTF       72.851                 197.074                234.804
                    −2 ln L    188.598                234.746                232.038

The values of −2 ln L corresponding to the model with the largest likelihood are highlighted in bold

In Table 8.1, we tabulate the parameter estimates for the different fitted distributions as well as their mean time to failure (MTTF) estimates and log-likelihoods (−2 ln L). The values of the MTTF estimates and the −2 ln L's are all comparable. The values of the Weibull shapes are comparable to those in Kim and Lee (2004)'s Figure 14, where the range is given as 2.4–2.9. The fitted distributions presented in Table 8.1 are plotted on the Weibull-scale probability plot in Fig. 8.1. As noted in Chaps. 4 and 5, the Weibull shape parameter is related to the thickness of the capacitor. Here, we are considering static cycle testing. Thus, the shape is proportional to the thickness, while the scale/characteristic lifetime depends on the cyclic protocol. For pure crystalline HfO2, the proportionality constant is the crystalline lattice constant (see Chap. 3), which is about 1. Since the thickness is about 4.9 nm, if the dielectric were pure crystalline, the shape parameter would be around 5. As noted in Chap. 3, in modern HfO2 dielectrics, BD begins in the grain boundaries, not in the (pure crystalline) grains. This would reduce the shape parameter and is consistent with Kim and Lee (2004)'s discussion of their Figure 10 regarding this. Le (2012)'s analysis suggested it is a "size effect" that is reflected in the "curvature" in Le (2012)'s Figure 4. The curvature is not confirmed in the Weibull plots with 95% confidence bands in Fig. 8.2, where the p-values for the AD GOF test indicate an adequate Weibull fit. We discuss "size effect" issues in Chap. 10.
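A minimal sketch of the model comparison underlying Table 8.1: fit a Weibull and a lognormal by maximum likelihood and compare −2 ln L and MTTF. The data vector below is a hypothetical placeholder, since the original Figure 14 data are not available to us (as noted in the footnote above).

import numpy as np
from scipy.stats import weibull_min, lognorm

ctbd = np.array([22., 35., 48., 60., 71., 83., 95., 110., 130., 160.])  # placeholder CTBD data

# Weibull W(eta, beta): fit with the location fixed at 0
beta, _, eta = weibull_min.fit(ctbd, floc=0)
weib_m2ll = -2 * weibull_min.logpdf(ctbd, beta, scale=eta).sum()
weib_mttf = weibull_min.mean(beta, scale=eta)

# Lognormal LN(mu, sigma): scipy's s = sigma and scale = exp(mu)
sigma, _, sc = lognorm.fit(ctbd, floc=0)
ln_m2ll = -2 * lognorm.logpdf(ctbd, sigma, scale=sc).sum()
ln_mttf = lognorm.mean(sigma, scale=sc)

print(f"Weibull:   beta={beta:.3f}, eta={eta:.3f}, MTTF={weib_mttf:.2f}, -2lnL={weib_m2ll:.2f}")
print(f"Lognormal: mu={np.log(sc):.3f}, sigma={sigma:.3f}, MTTF={ln_mttf:.2f}, -2lnL={ln_m2ll:.2f}")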


Fig. 8.1 Probability plot of different fitted parametric models for the data from Kim and Lee (2004)’s Figure 14 in Weibull scale

8.2 Breakdown-Time Regression Models

Below we consider Weibull and semiparametric proportional hazards models to investigate the validity of the Weakest Link Principles A.1 and A.1'. The analyses below do not refute the A.1' Weakest Link Principle of the BD formalism for both EOTs, but the Weibull analysis does refute the A.1 Weakest Link Principle for EOT = 1.4 nm and suggests that the principle depends on the thickness of the dielectric. The analyses cannot be any more definitive since all but one of the confidence intervals are very wide because of the small sample sizes.

8.2.1 Proportional Hazard Models for Kim and Lee (2004)'s Figure 6 Data

For the data in Figure 6 of Kim and Lee (2004), there are two covariates, the area and the equivalent oxide thickness (EOT), that are considered to be related to the breakdown voltage (VBD):


[Figure legend: Weibull probability plot with 95% CI; (a) shape 2.799, scale 81.99, N 20, AD 0.601, P > 0.109; (b) shape 2.575, scale 222.6, N 20, AD 0.153, P > 0.250; (c) shape 3.098, scale 263.0, N 20, AD 0.306, P > 0.250]

Fig. 8.2 Weibull probability plot of the data from Kim and Lee (2004)’s Figure 14 in Weibull scale: (a) 10 kHz, Ton /T0 = 0.1, (b) 10 kHz, Ton /T0 = 0.5, (c) 0.1 kHz, Ton /T0 = 0.1

Area: x1 ∈ {1.6 × 10−5 cm2, 0.1 × 10−5 cm2};   EOT: x2 ∈ {1.4 nm, 2.5 nm}.

Parametric Proportional Hazard Model First, we consider a parametric proportional hazard model (i.e., a Weibull regression model) for the breakdown voltage data presented in Figure 6 of Kim and Lee (2004) as follows:

Pr(VBD ≤ v) = FW(v; ρ(x2), τ(x)) = 1 − exp{−[v/τ(x)]^ρ(x2)},  v > 0,   (8.1)

where ρ(x2) is the shape parameter depending on the EOT and τ(x) is the scale parameter depending on both the area and the EOT. We use the link functions

τ(x) = exp(ν0 + ν1 x1 + ν2 x2),
ρ(x2) = 1/(γ0 + γ1 x2).

Fig. 8.3 Weibull probability plots of the data from Kim and Lee (2004)'s Figure 6

Based on the breakdown voltage data presented in Figure 6 of Kim and Lee (2004), we obtain the maximum likelihood estimates of the model parameters as ν̂0 = 1.10671, ν̂1 = −0.02979, ν̂2 = 0.14570, γ̂0 = 0.10702, γ̂1 = −0.03731. The p-values for individually testing whether each of the parameters γ0, γ1, ν0, ν1, ν2 differs from 0 are all less than 10−8, so each parameter is statistically significant. The Weibull probability plots of the breakdown voltage data presented in Figure 6 of Kim and Lee (2004) with the fitted models are presented in Fig. 8.3. In summary, the MLEs of the Weibull parameters for the different areas and EOTs are as follows:
• Area = 1.6 × 10−5 cm2, EOT = 1.4 nm: shape parameter ρ̂ = 18.2536; scale parameter τ̂ = 3.5361.
• Area = 1.6 × 10−5 cm2, EOT = 2.5 nm: shape parameter ρ̂ = 72.7921; scale parameter τ̂ = 4.1507.


• Area = 0.1 × 10−5 cm2, EOT = 1.4 nm: shape parameter ρ̂ = 18.2536; scale parameter τ̂ = 3.6977.
• Area = 0.1 × 10−5 cm2, EOT = 2.5 nm: shape parameter ρ̂ = 72.7921; scale parameter τ̂ = 4.3404.

To evaluate the effect of the area, based on the proportional hazards assumption in the Weibull regression model, we have

Pr(T > t; x1 = 1.6, x2) = [Pr(T > t; x1 = 0.1, x2)]^φ(x2),

where φ(x2) is the proportionality constant for a fixed EOT (i.e., x2). The proportionality constant, if it is an integer (say, n), can be interpreted as the number of independent and identically distributed components in a series system, in which Pr(n-component series system lifetime > t) = [Pr(component lifetime > t)]^n. Since the ratio of the two cross-sectional areas is 1.6/0.1 = 16 for both EOTs, under A.1, φ(1.4) = 16 = φ(2.5). For EOT = 1.4 nm and 2.5 nm, we have φ(1.4) = 2.2606 and φ(2.5) = 25.8563, with 95% normal-approximation confidence intervals (1.2650, 3.2562) and (0, 214.4775), respectively. Since 16 is not in the 95% confidence interval (1.2650, 3.2562), the A.1 Weakest Link Principle is rejected for EOT = 1.4 nm.

Semiparametric Proportional Hazards Model In addition to fitting Kim and Lee (2004)'s Figure 6 data with a parametric proportional hazards model, we consider the Cox semiparametric proportional hazards model here. In analogy to the parametric proportional hazard model in Eq. (8.1), in which the proportionality constant for the area effect depends on the EOT, we consider the semiparametric Cox proportional hazards model with an interaction term, i.e.,

h(t; x1, x2) = h(t; x10, x20) exp(θ1 x1 + θ2 x2 + θ12 x1 x2),  t > 0,   (8.2)
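Since the two Weibull VBD distributions for a given EOT share the shape parameter ρ(x2) and differ only in scale, the proportionality constant can be computed directly from the fitted link functions as φ(x2) = [τ(0.1, x2)/τ(1.6, x2)]^ρ(x2). A minimal sketch using the estimates reported above:

import numpy as np

nu = (1.10671, -0.02979, 0.14570)      # (nu0, nu1, nu2) for tau(x) = exp(nu0 + nu1*x1 + nu2*x2)
gam = (0.10702, -0.03731)              # (gamma0, gamma1) for rho(x2) = 1/(gamma0 + gamma1*x2)

def tau(x1, x2):
    return np.exp(nu[0] + nu[1] * x1 + nu[2] * x2)

def rho(x2):
    return 1.0 / (gam[0] + gam[1] * x2)

def phi(x2, a_large=1.6, a_small=0.1):
    # S(t; area = 1.6) = S(t; area = 0.1)**phi for Weibulls with the common shape rho(x2)
    return (tau(a_small, x2) / tau(a_large, x2)) ** rho(x2)

print(phi(1.4), phi(2.5))   # ~2.26 and ~25.9, to be compared with 16 under A.1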

where h(t; x1, x2) is the hazard function at time t with covariates (x1, x2), (x10, x20) are the baseline covariates, and θ = (θ1, θ2, θ12) is the vector of parameters. Alternatively, we can express the Cox proportional hazards model in Eq. (8.2) in terms of the survival probabilities as

Pr(T > t; x1, x2) = [Pr(T > t; x10, x20)]^exp(θ1 x1 + θ2 x2 + θ12 x1 x2).   (8.3)

Based on Kim and Lee (2004)’s Figure 6 data, we obtain the estimates of θ as θˆ = (θˆ1 , θˆ2 , θˆ12 ) = (1.0500, −4.9421, 0.3469). For EOT = 1.4 nm and 2.5 nm, we have the proportionality constants (i.e., the hazard ratios of area = 1.6×10−5 cm2 to area = 0.1 × 10−5 cm2 for the area) as



Fig. 8.4 Fitted survival curves based on Cox proportional hazards model

h(t; x1 = 1.6, x2 = 1.4) / h(t; x1 = 0.1, x2 = 1.4) = exp[(1.6 − 0.1)θ̂1 + 1.4(1.6 − 0.1)θ̂12] = 10.0107

and

h(t; x1 = 1.6, x2 = 2.5) / h(t; x1 = 0.1, x2 = 2.5) = exp[(1.6 − 0.1)θ̂1 + 2.5(1.6 − 0.1)θ̂12] = 17.7454,

with 95% normal-approximation confidence intervals (0, 37.1487) and (0, 83.5950), respectively. The fitted survival curves based on the Cox proportional hazards model are presented in Fig. 8.4. The fits in Fig. 8.4 support the proportional hazards model for both EOTs. In addition, since the confidence intervals for both thicknesses contain 16, they do not refute the cross-sectional area proportional hazards model. The intervals are so wide, though, that this does not convincingly support this latter model.
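For completeness, a Cox model with the area × EOT interaction can be fitted with standard survival software. The sketch below uses the lifelines package on a hypothetical data frame (a stand-in for the Figure 6 breakdown-voltage data, with all breakdowns observed) and then reproduces the two hazard-ratio expressions from the fitted coefficients.

import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical stand-in for the Figure 6 breakdown-voltage data
df = pd.DataFrame({
    "x1": [0.1, 0.1, 1.6, 1.6] * 5,
    "x2": [1.4, 2.5, 1.4, 2.5] * 5,
    "VBD": np.random.default_rng(6).uniform(3.0, 5.0, 20),
    "event": 1,
})
df["x1x2"] = df["x1"] * df["x2"]

cph = CoxPHFitter()
cph.fit(df, duration_col="VBD", event_col="event")   # covariates: x1, x2, x1x2

theta = cph.params_                                  # estimated (theta1, theta2, theta12)
for eot in (1.4, 2.5):
    hr = np.exp((1.6 - 0.1) * (theta["x1"] + eot * theta["x1x2"]))
    print(f"EOT = {eot} nm: area hazard ratio = {hr:.3f}")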

8.2.2 Fitting Kim and Lee (2004)'s Figure 3 Data with Different Parametric Models and Link Functions

In Figure 3 of Kim and Lee (2004), the soft breakdown (SBD) and hard breakdown (HBD) of HfO2 (EOT = 1.4 nm) at different stress voltages (−2.6 V, −2.7 V, and −2.8 V) are presented. To fit the data with a lifetime regression model, we define the following variables:

z1 = 0 for SBD, 1 for HBD;   z2 = −(voltage stress).


Weibull Regression Model We assume that the SBD and HBD follow Weibull distributions with different shape and scale parameters. A Weibull regression model with covariates z1 and z2 can be expressed as

Pr(T < t; ρ(z), τ(z)) = FW(t; ρ(z), τ(z)) = 1 − exp{−[t/τ(z)]^ρ(z)},  t > 0,   (8.4)

where ρ(z) is the shape parameter and τ(z) is the scale parameter. For the scale parameter, based on the power law relationship, we consider a log-linear link function

τ(z) = exp(ν0 + ν1 z1 + ν2 ln z2).   (8.5)

For the shape parameter, in order to study the effect of different relations between the covariates and the shape parameter, we consider the following link functions:
• Log-linear: ρLL(z) = exp(δ0 + δ1 z1 + δ2 ln z2).
• Linear: ρL(z) = δ0 + δ1 z1 + δ2 ln z2.
• Inverse linear: ρIL(z) = 1/(δ0 + δ1 z1 + δ2 ln z2).
We also consider the Weibull regression models in which the Weibull shape parameter does not depend on the voltage stress (i.e., δ2 = 0). The maximum likelihood estimates and the p-values for testing the significance of the parameters for the Weibull regression model with different link functions are presented in Table 8.2, along with the values of the log-likelihood for the different models. We observe that the results with different link functions are similar, especially in the estimates of MTTF. Moreover, the shape parameters for SBD and HBD are around 1.4 and 2.0, which agree with the results obtained in Kim and Lee (2004).

Lognormal Regression Model We assume that the SBD and HBD follow lognormal distributions with different shape and scale parameters. A lognormal regression model with covariates z1 and z2 can be expressed as

Pr(T < t; μ(z), σ(z)) = FLN(t; μ(z), σ(z)) = Φ([ln t − μ(z)]/σ(z)),  t > 0,   (8.6)

where Φ(·) is the cumulative distribution function of the standard normal distribution, σ(z) is the shape parameter, and μ(z) is the scale parameter. For the scale parameter, based on the power law relationship, we consider a log-linear link function for exp(μ(z)), i.e.,

μ(z) = ν0 + ν1 z1 + ν2 ln z2.   (8.7)
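A minimal sketch of how the Weibull regression (8.4)–(8.5) with the log-linear shape link ρLL(z) can be fitted by direct maximization of the log-likelihood. The data arrays below are hypothetical placeholders for the Figure 3 SBD/HBD times, and the starting values are only rough guesses.

import numpy as np
from scipy.optimize import minimize

# Hypothetical placeholders: failure times t, breakdown type z1 (0 = SBD, 1 = HBD),
# and voltage magnitude z2 = -(stress voltage)
t = np.array([310., 450., 620., 80., 95., 12., 2400., 3900., 500., 640., 90., 110.])
z1 = np.array([0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1])
z2 = np.array([2.6, 2.6, 2.6, 2.7, 2.7, 2.8, 2.6, 2.6, 2.7, 2.7, 2.8, 2.8])

def negloglik(par):
    d0, d1, d2, n0, n1, n2 = par
    rho = np.exp(d0 + d1 * z1 + d2 * np.log(z2))      # log-linear shape link rho_LL(z)
    tau = np.exp(n0 + n1 * z1 + n2 * np.log(z2))      # log-linear scale link (8.5)
    logpdf = (np.log(rho) - np.log(tau)
              + (rho - 1) * (np.log(t) - np.log(tau))
              - (t / tau) ** rho)                      # Weibull log-density
    return -logpdf.sum()

start = np.array([0.3, 0.3, 0.0, 6.0, 2.0, 0.0])       # rough starting values
fit = minimize(negloglik, start, method="Nelder-Mead",
               options={"maxiter": 20000, "xatol": 1e-8, "fatol": 1e-8})
print(fit.x, -fit.fun)                                 # parameter estimates and maximized log-likelihood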


Table 8.2 Maximum likelihood estimates and the p-values for testing the significance of parameters for the Weibull regression model with different link functions

                          With voltage as a covariate for the shape parameter            Without voltage as a covariate for the shape parameter (i.e., δ2 = 0)
                          ρLL(z)            ρL(z)             ρIL(z)                     ρLL(z)            ρL(z)             ρIL(z)
Est. (p-value)
δ̂0                        0.9926 (0.3220)   2.3585 (0.2280)   0.3907 (0.3539)            0.3538 (0.0001)   1.4245 (0.0000)   0.7020 (0.0000)
δ̂1                        0.3208 (0.0107)   0.5388 (0.0132)   −0.1928 (0.0113)           0.3218 (0.0105)   0.5406 (0.0129)   −0.1932 (0.0103)
δ̂2                        −0.6413 (0.3831)  −0.9373 (0.3839)  0.3126 (0.3828)            –                 –                 –
ν̂0                        54.1708 (0.0000)  54.1403 (0.0000)  54.0537 (0.0000)           53.9475 (0.0000)  53.9472 (0.0000)  53.9445 (0.0000)
ν̂1                        1.9314 (0.0000)   1.9315 (0.0000)   1.9321 (0.0000)            1.9323 (0.0000)   1.9324 (0.0000)   1.9324 (0.0000)
ν̂2                        −50.0240 (0.0000) −49.9933 (0.0000) −49.9063 (0.0000)          −49.7996 (0.0000) −49.7994 (0.0000) −49.7966 (0.0000)

SBD, −2.6 V   ρ̂           1.4620            1.4628            1.4505                     1.4245            1.4245            1.4245
              τ̂           585.4282          584.7454          582.6511                   580.2213          580.1897          580.1598
              MTTF        530.2220          529.5652          528.2759                   527.4334          527.4012          527.3792
HBD, −2.6 V   ρ̂           2.0150            2.0016            2.0136                     1.9651            1.9651            1.9653
              τ̂           4039.0270         4034.5050         4022.4910                  4006.7440         4006.8680         4006.5586
              MTTF        3579.0320         3575.4350         3564.4180                  3552.1640         3552.2750         3551.9940
SBD, −2.7 V   ρ̂           1.4271            1.4275            1.4261                     1.4245            1.4245            1.4245
              τ̂           88.6257           88.6249           88.5978                    88.5846           88.5804           88.5815
              MTTF        80.5410           80.5372           80.5238                    80.5253           80.5210           80.5260
HBD, −2.7 V   ρ̂           1.9668            1.9662            1.9669                     1.9651            1.9651            1.9653
              τ̂           611.4525          611.4758          611.6590                   611.7249          611.7485          611.7650
              MTTF        542.0706          542.0948          542.2529                   542.3224          542.3434          542.3572
SBD, −2.8 V   ρ̂           1.3942            1.3934            1.4033                     1.4245            1.4245            1.4245
              τ̂           14.3704           14.3863           14.4274                    14.4814           14.4808           14.4831
              MTTF        13.1061           13.1218           13.1446                    13.1639           13.1633           13.1654
HBD, −2.8 V   ρ̂           1.9215            1.9321            1.9239                     1.9651            1.9651            1.9653
              τ̂           99.1449           99.2594           99.6037                    100.0020          100.0066          100.0193
              MTTF        87.9475           88.0354           88.3512                    88.6564           88.6605           88.6716
LL                        −117.8087         −117.8129         −117.8089                  −117.8400         −117.8400         −117.8400

For the shape parameter, in order to study the effect of different relations between the covariates and the shape parameter, we consider the following link functions:
• Log-linear: σLL(z) = exp(δ0 + δ1 z1 + δ2 ln z2).
• Linear: σL(z) = δ0 + δ1 z1 + δ2 ln z2.
• Inverse linear: σIL(z) = 1/(δ0 + δ1 z1 + δ2 ln z2).
We also consider the lognormal regression models in which the shape parameter does not depend on the voltage stress (i.e., δ2 = 0). The maximum likelihood estimates and the p-values for testing the significance of the parameters for the lognormal regression model with different link functions are presented in Table 8.3, along with the values of the log-likelihood for the different models.


Table 8.3 Maximum likelihood estimates and the p-values for testing the significance of parameters for the lognormal regression model with different link functions

                          With voltage as a covariate for the shape parameter            Without voltage as a covariate for the shape parameter (i.e., δ2 = 0)
                          σLL(z)            σL(z)             σIL(z)                     σLL(z)            σL(z)             σIL(z)
Est. (p-value)
δ̂0                        0.6460 (0.3909)   2.0565 (0.0857)   −1.4026 (0.3381)           −0.2648 (0.0023)  0.7676 (0.0000)   1.3027 (0.0000)
δ̂1                        −0.2573 (0.0245)  −0.1770 (0.0248)  0.3986 (0.0236)            −0.2679 (0.0215)  −0.1806 (0.0230)  0.4007 (0.0230)
δ̂2                        −0.9376 (0.3450)  −1.3009 (0.1947)  2.7325 (0.2103)            –                 –                 –
ν̂0                        53.8620 (0.0000)  52.9790 (0.0000)  53.0099 (0.0000)           53.5532 (0.0000)  53.4246 (0.0000)  53.4228 (0.0000)
ν̂1                        2.0210 (0.0000)   2.0251 (0.0000)   2.0254 (0.0000)            2.0233 (0.0000)   2.0238 (0.0000)   2.0238 (0.0000)
ν̂2                        −50.0891 (0.0000) −49.2045 (0.0000) −49.2366 (0.0000)          −49.7812 (0.0000) −49.6522 (0.0000) −49.6504 (0.0000)

SBD, −2.6 V   σ̂           0.7788            0.8134            0.8276                     0.7673            0.7676            0.7676
              μ̂           6.0013            5.9635            5.9637                     5.9867            5.9813            5.9812
              MTTF        547.0701          541.5037          547.9512                   534.3846          531.6253          531.5794
HBD, −2.6 V   σ̂           0.6021            0.6364            0.6223                     0.5870            0.5870            0.5871
              μ̂           8.0223            7.9886            7.9891                     8.0100            8.0051            8.0050
              MTTF        3653.9350         3608.9280         3578.7860                  3577.0890         3559.5960         3559.4160
SBD, −2.7 V   σ̂           0.7518            0.7643            0.7625                     0.7673            0.7676            0.7676
              μ̂           4.1109            4.1065            4.1055                     4.1079            4.1074            4.1074
              MTTF        80.9211           81.3391           81.1452                    81.6430           81.6179           81.6163
HBD, −2.7 V   σ̂           0.5812            0.5873            0.5848                     0.5870            0.5870            0.5871
              μ̂           6.1319            6.1317            6.1309                     6.1313            6.1312            6.1312
              MTTF        545.0053          546.8269          545.6207                   546.5059          546.4879          546.4968
SBD, −2.8 V   σ̂           0.7266            0.7170            0.7088                     0.7673            0.7676            0.7676
              μ̂           2.2893            2.3171            2.3149                     2.2975            2.3017            2.3017
              MTTF        12.8485           13.1200           13.0151                    13.3555           13.4142           13.4148
HBD, −2.8 V   σ̂           0.5617            0.5400            0.5527                     0.5870            0.5870            0.5871
              μ̂           4.3103            4.3422            4.3403                     4.3208            4.3255            4.3255
              MTTF        87.1856           88.9446           89.3928                    89.3999           89.8174           89.8247
LL                        −116.1585         −116.0007         −116.0017                  −116.3284         −116.3264         −116.3264

8.3 Prediction of Hard Breakdown Based on Soft Breakdown Time

Based on the SBD and HBD data in Figure 3 of Kim and Lee (2004), we are interested in predicting the HBD based on the SBD. As Kim and Lee (2004) suggested, "the time between soft and hard breakdown significantly decreases as stress voltage increases"; therefore, to predict HBD based on the observed SBD, we should take the stress voltage into account. Due to the strong linear relation between the log-transformed HBD and the log-transformed SBD, we consider a simple linear regression of ln(HBD) on ln(SBD) and the voltage stress (i.e., z2):

ln(HBD) = β0 + β1 ln(SBD) + β2 z2,   (8.8)

Fig. 8.5 Simple linear regression of Kim and Lee (2004)’s Figure 3 data for predicting HBD based on SBD and voltage stress

Fig. 8.6 Relationship between log-failure-time and ln(V )


where β0, β1, and β2 are the regression parameters. Based on the data in Figure 3 of Kim and Lee (2004), we obtain the estimates of the regression parameters as β̂0 = 15.1795, β̂1 = 0.7487, and β̂2 = −4.4881, with the corresponding p-values for testing their significance being 4.61 × 10−16, < 2 × 10−16, and 1.70 × 10−13, respectively. The original data and the fitted regression lines are presented in Fig. 8.5. In addition, Fig. 8.6 indicates the linear relationship between the log percentile and ln V given in Consequences of A.2 & A.3 (ii) of the BD formalism.
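Using the reported coefficients, the fitted regression (8.8) gives a simple point predictor of HBD from an observed SBD and the stress level; a small sketch (the SBD value of 100 below is illustrative):

import numpy as np

b0, b1, b2 = 15.1795, 0.7487, -4.4881   # estimates of beta0, beta1, beta2 from (8.8)

def predict_hbd(sbd, z2):
    """Point prediction of HBD (same time units as SBD) at voltage magnitude z2."""
    return np.exp(b0 + b1 * np.log(sbd) + b2 * z2)

for v in (2.6, 2.7, 2.8):
    print(v, predict_hbd(100.0, v))   # predicted HBD for an observed SBD of 100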

Chapter 9

Circuits of Ordinary Capacitors

In this chapter, we illustrate how the BD distributional behavior of ordinary capacitors affects that of a series circuit of such capacitors. In Sect. 9.1, we investigate the VBD of the circuit when the VBD distribution is Weibull. The parameters are based on the earlier analysis of Kim and Lee (2004)'s Figure 6 data for illustrative purposes, even though the data is not for ordinary capacitors but for thin dielectrics. We also examine simulation size effects and chain-of-bundles (parallel–series circuits) size effects for these capacitors in Sects. 6.2 and 6.4. Such effects are related to the finite weakest link models considered by Le et al. (2009), Le (2012), and Bažant and Le (2017) for dielectrics and to related models for finite length chains considered earlier by Phoenix (1983), Phoenix and Tierney (1983), and Taylor (1987) for polymer fibers. This was discussed in Chap. 4. Since the testing protocol for Kim and Lee (2004)'s Figure 6 is an increasing voltage load (VBD), the analysis is greatly simplified (e.g., Daniels' formula in Chap. 6 applies) compared with that for circuits based on Kim and Lee (2004)'s Figures 3 and 14 data. In these latter two data sets, the respective protocols are the TBD and cycle times to BD (CTBD) under static voltage loads. Recall that the AC amplitude is constant over the "on" period in a duty cycle for the static case of cycles to failure. In Sect. 9.3, we discuss how the distributional behavior of ordinary capacitors can be used to assess the TBD or CTBD reliability of a series circuit for static loads on the circuit. In Chap. 10, we discuss size effects related to Kim and Lee (2004)'s Figure 14 (Le (2012)'s Figure 4).

9.1 Voltage Breakdown (VBD) of Series and Parallel Circuits Based on Kim and Lee (2004)'s Figure 6 Data

Here, we illustrate the statistical behavior of the BD of series and parallel circuits for ordinary capacitors whose breakdown distributions are based on the Weibull fits


for Kim and Lee's Figure 6 data. For the reader's convenience, the MLE Weibull fit parameters from Sect. 8.2 are:
• Area = 1.6 × 10−5 cm2, EOT = 1.4 nm: shape parameter = 18.2536; scale parameter = 3.5361.
• Area = 1.6 × 10−5 cm2, EOT = 2.5 nm: shape parameter = 72.7921; scale parameter = 4.1507.
• Area = 0.1 × 10−5 cm2, EOT = 1.4 nm: shape parameter = 18.2536; scale parameter = 3.6977.
• Area = 0.1 × 10−5 cm2, EOT = 2.5 nm: shape parameter = 72.7921; scale parameter = 4.3404.
In Figs. 9.1 and 9.2, exact Weibull plots and survival distributions are given for series circuits of two of these capacitors. The curvature in the Weibull plots is due to the fact that the distribution has a mixed gamma-type of distribution with density of the form (6.5). Note that the plots for the series circuits suggest that mixtures of two truncated Weibull distributions will be a very good empirical approximation, similar to the grafted distribution suggested by Le (2012) and Bažant and Le (2017) for Kim and Lee (2004)'s Figure 14 data. In addition, in Fig. 9.2, we have graphed the survival distributions for the transformed mixed gamma densities given by (6.4). The transformed distributions


Fig. 9.1 Weibull plots of VBD of series circuits composed of two EOT = 1.4 nm capacitors with two different oxide areas along with the VBD of their respective single EOT = 2.5 nm capacitors


Fig. 9.2 The survival functions of VBD of a series circuit of 2 EOT = 1.4 nm capacitors: (a) Area = 1.6 cm2 —the number of capacitor failures for the series circuit to fail was found to be 1 with probability 1. The fitted gamma distribution (in green) has the shape 1.0149 and the scale 2.0366; (b) Area = 0.1 cm2 —the number of capacitor failures for the series circuit to fail was found to be 1 with probability 1. The fitted gamma distribution (in green) has the shape 0.9951 and the scale 2.0484

are gammas with shape parameters equal to 2, while the fitted gammas have shape parameters approximately equal to 1. These fitted gammas are not discernible from the transformed gamma. This is due to the fact that the probability of two single Phase I failures causing circuit failure is negligible. This is discussed next. In Figs. 9.3 and 9.4, the VBD survival functions and boxplots are given for series circuits of size k = 2, 3, 4, and 5 for capacitors with EOT = 1.4 nm and area 1.6 × 10−5 cm2. Similar figures are given in the supplementary material for EOT = 1.4 nm capacitors with area 0.1 × 10−5 cm2 and for EOT = 2.5 nm capacitors with areas 0.1 × 10−5 cm2 and 1.6 × 10−5 cm2. The boxplots are of the VBD voltage given the number of components that precipitated the series failure. The boxplots in Fig. 9.4 are for simulations of size 40,000. The MLEs of the shape and scale are given, in Table 9.1, for gammas fitted to the transformed data, as are the numbers of capacitors that precipitate the Phase II failures of the remaining working capacitors and cause series failure. Notice that the shape parameters of the fitted gammas are all ≈ 1 and the scale parameters are ≈ k. This is especially misleading if one is interested in the minimum extreme value behavior of minimums of BD voltages for such series circuits. In particular, the asymptotic distribution of the minimum is a Weibull with shape parameter kρ (not 1 × ρ), where k is the number of capacitors in the series circuit and ρ is the shape parameter of the capacitor VBD distribution (see the discussion regarding formula (6.5)). This is further discussed in the next section on parallel–series circuits. Notice that the physical thickness of an EOT = 2.5 nm capacitor is about double that of an EOT = 1.4 nm capacitor (9.65 nm to 4.9 nm). Consequently,



Fig. 9.3 The survival distributions of VBD for series circuits of k capacitors, k = 2, 3, 4, and 5 with EOT = 1.4 nm, area = 1.6 × 10−5 cm2


Fig. 9.4 Boxplots of VBD given the number of failures that precipitate failure for series circuits of k capacitors, k = 2, 3, 4, and 5 with EOT = 1.4 nm, area = 1.6 × 10−5 cm2 . Based on simulations of size 40,000


Table 9.1 Fitted transformed gamma for series circuits of k capacitors, k = 2, 3, 4, and 5. Estimated parameters and the number of capacitors that precipitate circuit failure with EOT = 1.4 nm, area = 1.6 cm2

k    Shape          Scale          Number = 1   Number = 2
2    1.0149 (≈ 1)   2.0366 (≈ 2)   100.000%
3    1.0074 (≈ 1)   2.9907 (≈ 3)   99.975%      0.025%
4    1.0646 (≈ 1)   4.1741 (≈ 4)   99.325%      0.675%
5    1.0270 (≈ 1)   5.1775 (≈ 5)   97.875%      2.125%


Fig. 9.5 The survival functions of VBD of parallel circuits composed of 16 capacitors, each with Area = 0.1 × 10−5 cm2 , and a single capacitor with Area = 1.6 × 10−5 cm2 (EOT = 1.4 nm for all these circuits)

a series circuit of two EOT = 1.4 nm capacitors has roughly the same capacitance as that of an EOT = 2.5 nm capacitor, but the Weibull plots in Fig. 9.1 indicate that their BD statistical behavior is quite different. We now compare a series circuit of 16 0.1 × 10−5 cm2 area capacitors with EOT = 1.4 nm and 2.5 nm to a single 0.1 × 10−5 cm2 area capacitor with the same EOT. Since a capacitor failure in a series circuit changes the capacitance, but the circuit need not fail, we compare the single capacitor's VBD survival distribution with a k-out-of-16 system where k failures cause the circuit to fail. This is done in Figs. 9.5 and 9.6, which show that the survival distributions are increasingly ordered as k increases and



Fig. 9.6 The survival functions of VBD of parallel circuits composed of 16 capacitors, each with Area = 0.1 × 10−5 cm2 , and a single capacitor with Area = 1.6 × 10−5 cm2 (EOT = 2.5 nm, for all these circuits)

that the single capacitor VBD survival distribution is less reliable than all the other k-out-of-16 systems for all k when EOT = 2.5 nm, while the comparison is less clear when EOT = 1.4 nm.
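For reference, k-out-of-16:F survival curves of the kind compared above can be computed from the single-capacitor Weibull VBD distribution when the 16 capacitors are treated as independent and identically distributed; any load redistribution after a failure, as in the circuits simulated in this chapter, is ignored in this simple sketch.

import numpy as np
from scipy.stats import weibull_min, binom

rho, tau = 18.2536, 3.6977          # Weibull VBD fit: Area = 0.1e-5 cm^2, EOT = 1.4 nm
n = 16

def survival_k_out_of_n_F(v, k):
    # the circuit survives as long as fewer than k of the n capacitors have broken down
    F = weibull_min.cdf(v, rho, scale=tau)
    return binom.cdf(k - 1, n, F)

v = np.linspace(2.0, 5.0, 7)
print(survival_k_out_of_n_F(v, 1))    # 1-out-of-16:F (weakest link)
print(survival_k_out_of_n_F(v, 16))   # 16-out-of-16:F (all must fail)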

9.2 Parallel–Series Circuits Based on Kim and Lee (2004)'s Figure 6 Data

In this section, we consider the simulation size effects for series and parallel–series circuits based on the BD distributions for Kim and Lee (2004)'s Figure 6 data. In Fig. 9.7a, b, c, exact Weibull plots are given for the VBD distributions of series circuits consisting of k = 2, 3, and 4 EOT = 1.4 nm, area 0.1 × 10−5 cm2 capacitors, as well as Weibull plots for simulations of sizes 100,000, 1 million, and 10 million for these circuits. These plots indicate that the lower tail in the three series circuits does not stabilize until the simulation size is 1 million for circuits of sizes 3 and 4 and 10 million for a circuit of size 2. Table 9.2 gives information on the simulation size effect with regard to the lower tail, that is, the number of capacitor



Fig. 9.7 Weibull plots of VBD for series circuits of k capacitors, k = 2, 3, and 4 with EOT = 1.4 nm and Area = 0.1 × 10−5 cm2 . Solid lines are exact Weibull plots. (a) Based on 100,000 simulations. (b) Based on 1,000,000 simulations. (c) Based on 10,000,000 simulations

failures that initiate the final Phase II cycle of failures. The boxplots in Fig. 9.8 show that at most 2 capacitor failures cause circuit failure. Weibull plots are given in Fig. 9.9a, b, c of the VBD for parallel–series circuits where the series circuits are of size k = 2, 3, and 4, and there are n = 1, 400, 40,000, and 4,000,000 series circuits in parallel. These plots document the statistical size effect for the Weibull extreme value asymptotics. Notice that the Weibull asymptotic minimum extreme value distribution is only marginally apparent for n = 4 million and k = 3 and 4, and, for k = 2, there is considerable curvature. Also, note that the lower tails in the Weibull plots are picking up on the Weibull asymptotics.


Table 9.2 The simulation-based proportions of the number of failures that precipitate failure for series circuits of k capacitors, k = 2, 3, and 4 with EOT = 1.4 nm and Area = 0.1 cm2

                 No. of capacitor failures
Simulation size  for the series circuit to fail   k = 2       k = 3       k = 4
100,000          1                                0.99998     0.99900     0.99295
                 2                                0.00002     0.00100     0.00705
1,000,000        1                                0.999991    0.999066    0.992968
                 2                                0.000009    0.000934    0.007032
10,000,000       1                                0.9999941   0.9990939   0.9929901
                 2                                0.0000059   0.0009061   0.0070099


Fig. 9.8 Boxplots of VBD given the number of failures that precipitate failure for series circuits of k capacitors, k = 2, 3, and 4 with EOT = 1.4 nm and Area = 0.1 × 10−5 cm2 . (a) Based on 100,000 simulations. (b) Based on 1,000,000 simulations. (c) Based on 10,000,000 simulations


Fig. 9.9 Weibull plots of simulated VBD among parallel–series circuits composed of a different number of cells with EOT = 1.4 nm and Area = 0.1 × 10−5 cm2

9.3 TBD and Cycle Times to Breakdown (CTBD) of Series Circuits

Here, we describe how the reliability of a series circuit is determined for TBD or CTBD from BD information for the capacitors. Consider a series circuit of N0 capacitors for TBD and CTBD. Here, we have Phase I failures that are due to fatiguing or wear over time. Initially, the series circuit is under a constant voltage or voltage amplitude, say V, and the time of the Phase I failure is the minimum of the failure times of the capacitors, i.e., the minimum of N0 Weibull distributions with shape ρ and scale τ(V). At the first Phase I failure, there is a cascade of Phase II failures of the remaining working capacitors due to the increased voltage or voltage amplitude caused by load transfer. The number of Phase II failures is governed by the Gibbs measure (6.6), where the load per capacitor is N0V/(N0 − 1), which has to be adjusted for the number of working capacitors after this Phase I failure, and the log-odds that define the Gibbs measure are based on the VBD distribution of the capacitors. Incorporating the conditional information that the capacitors surviving when the Phase I failure occurs have a VBD greater than V is straightforward, since the log-odds (6.7) and the conditional log-odds are the same in the presence of this conditioning. (See also Examples (i) and (ii), where the Gibbs measure is specialized to the equal load-sharing rule and the Weibull distribution.) After the first Phase I/II cycle, there remain N1 working capacitors. The voltage or voltage amplitude per remaining component has increased and is N0V/N1. Because of the power law, this increase is reflected in the scale parameter of the Weibull failure rate for the N1 working capacitors, so that it is now τ(N0V/N1), while the shape parameter does not change.


If the circuit has not failed, then we consider the next Phase I/II cycle and repeat the arguments in the preceding two paragraphs with N0 replaced by N1 and N1 replaced by N2, the number of capacitors that survive the second Phase I/II cycle. If the circuit has not failed, repeat this recursive argument until circuit failure. Note that the above recursive argument depends on knowing the function τ(·). If the BD formalism applies, then, from Consequences of A.2 & A.3 (ii) of the formalism, ln(τ(V)) is a linear function of ln(V). For example, such a relationship is given in Fig. 8.6, based on Kim and Lee (2004)'s Figure 3 HBD and SBD data.
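The recursion just described can be organized as a short simulation skeleton. The sketch below is only schematic: the Phase II cascade should be drawn from the Gibbs measure (6.6), which is not reproduced in this chapter, so it is abstracted as a pluggable function with a trivial placeholder; the power-law coefficients in tau() are hypothetical, and the residual-life conditioning of the surviving capacitors is ignored for simplicity.

import numpy as np

rng = np.random.default_rng(0)

def tau(V, c=1.0e6, a=3.0):
    # characteristic lifetime under static load V (hypothetical power law: tau(V) = c * V**(-a))
    return c * V ** (-a)

def phase2_failures(n_working, load_per_capacitor):
    # placeholder for the Gibbs-measure (6.6) draw of the Phase II cascade size
    return 0

def series_circuit_tbd(N0, V, rho=2.8):
    """One simulated TBD of a series circuit of N0 capacitors under static load V."""
    t, n = 0.0, N0
    while n > 0:
        load = N0 * V / n                                 # load per working capacitor
        t += tau(load) * rng.weibull(rho, size=n).min()   # Phase I: first wear-out failure
        n -= 1
        if n > 0:
            n -= phase2_failures(n, N0 * V / n)           # Phase II cascade (placeholder)
    return t

print(series_circuit_tbd(N0=5, V=2.6))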

Chapter 10

Simulated Size Effects Relationships Motivated by the Load-Sharing Cell Model

In this chapter, we discuss the load-sharing (LS) cell model (Le et al., 2009; Le, 2012; Bažant & Le, 2017) and the size effects (Le, 2012, Figures 4 and 5) raised regarding the Kim and Lee (2004) Figure 14 data. The data in Figure 14 are generated from static loading tests where three different protocols are considered. Here, the cycle time to failure is treated from an accelerated failure time (AFT) perspective, where the load-sharing directly reduces the cycle lifetime. That is, the role played by increased voltage for voltage BD is functionally replaced by increased time, or number of cycles, to cycle breakdown. In the next section, we discuss further background regarding Kim and Lee's data, size effects, and the load-sharing cell model. This is followed by a section where simulations based on these data are used to illustrate size effects.

10.1 Background

Our earlier analysis of the Kim and Lee (2004) Figure 14 data was given in Sect. 8.1, where: (i) the Weibull was not refuted as a model for the CTBD data; (ii) the curvature emphasized in the Weibull plots in Le (2012, Figure 4) is not discernible from background variation in our Fig. 8.2; and (iii) the Weibull plots of the lognormal, the log-logistic, and the Birnbaum–Saunders fits in Fig. 8.1 accentuate the curvature.1

1 Such curvature is very apparent in the raw Weibull plots given in Figure 5 of Ntenga et al. (2019). These plots are for the tensile strength of plant fibers (PFs). The authors use a Weibull in their analysis even though they note that a lognormal has a better fit due to this curvature. However, “PF properties are considerably influenced by their hierarchic composite microstructure” that consists of “cellulose microfibrils about the axis of the fibre.” This suggests that a chain-of-bundles may be an appropriate model for the PF and that the curvature could be due to a finite weakest link size effect discussed earlier in Chap. 4 and later, below.


The Weibull plots for this data are given in Fig. 10.1, but with the shape parameter constrained to be the same for all three protocols. The plots support the BD formalism regarding static CTBD: the shape is the same when the cycle voltage amplitude is the same, while differences in the protocol for the on/off-cycle length and the frequencies affect the scale parameter. In this model, a cell consists of a series circuit of nanocapacitors (subcells) where, as subcells fail, the voltage load (amplitude, since we consider static cyclic loading under different testing protocols) is distributed equally to the surviving subcells. How faithful this model is to the actual physics of a dielectric cell BD is not clear, given the quantum effects that govern its behavior. A modification of their cell model that addresses some of this is discussed below.

Le (2012) and Bažant and Le (2017) discussed size effects for the LS cell model like those considered in Sects. 9.1 and 9.2 for parallel–series circuits of ordinary capacitors. Here, the dielectric is viewed as a parallel circuit of cells where the number of cells is proportional to the cross-sectional area of the dielectric, and the thickness of the dielectric determines the number of subcells in a cell. They distinguished the weakest link model from the finite weakest link model, where the former is an infinitely long parallel circuit and the minimum extreme value asymptotics lead to the BD distribution being Weibull. Le (2012) used what he called a grafted distribution (see Le (2012), formulas (17)–(18), or Bažant and Le (2017), formulas (14.8)–(14.9)) for the BD cycle time for a cell and used it as the reference distribution to quantify the size effects between the BD distributions for the two models (see Figure 5 of Le (2012), or Figure 14.9 of Bažant and Le (2017)).

As mentioned in Chap. 4, such size effects were considered earlier. Of particular note for this section is Taylor's (1987) work on the static fatigue of semicrystalline polymer materials. There, the polymer is modeled as a weakest link chain-of-bundles model. Defects occur in the amorphous regions in a bundle where cracks are initiated in the polymer material at random times. A pure birth process models the growth of a particular crack, where the times between births have exponential distributions with birth rates based on stress factors at the boundary of the crack. These are explosive birth processes; there are infinitely many births in finite time, and this (finite) time is referred to as the explosion time. The time it takes for a given crack to cause bundle failure is the sum of the time it takes the defect to initiate crack growth and its explosion time. The polymer failure time is the minimum of the bundle failure times. Taylor (1987) used this formulation to construct what he calls the characteristic distribution function, which has the exact weakest link relationship, and uses it as a reference distribution to study the size effect (see Taylor (1987), formulas (5.4)–(5.9)).

As indicated in Sects. 3.3 and 3.4, the breakdown mechanisms are very different for SiO2 dielectrics versus HfO2 dielectrics. In the former, complete percolation paths form and disperse under the influence of the electric field, leading to repeated



SBDs. The concurrent physical changes in the dielectric are reversible until the local temperature along a complete path remains high enough, for a time that is long enough, to melt an ohmic conduction path through the dielectric, producing a HBD. Notice that the times of SiO2 SBDs of the dielectric behave like Weibull distributions (see Kim and Lee 2004, Figure 3 and Strong et al. 2009, Figure 3.81), where the shape parameter of the kth SBD Weibull increases to that of the HBD shape parameter. Unfortunately, it is not clear how this can be used to model the SBD time of a cell that initiates irreversible damage that leads to failure of the cell. On the other hand, for the latter, the leakage current leads to irreversible damage, due to the physical structure of the dielectric, with crystal grains separated by amorphous GBs containing the defects (trapping sites).

Taylor's (1987) approach suggested the following modification that makes it more faithful to the physics of dielectric BD as described in Sect. 3.5. Dielectric BD initiates in the grain boundary where electron trapping/detrapping increases local vibrational and temperature effects. These change the structure of the GB and create a feedback loop that increases the leakage current. At some point, the capacitor discharges through the GB (a SBD); the charge must then rebuild, leading to further SBDs. Eventually, the local temperature along the GB remains high enough, for a time long enough, to melt an ohmic conduction path along the GB, which concurrently expands somewhat into the adjacent grains. The times between the breakdown events may be viewed as a birth process. Taylor (1987) considered a chain-of-bundles/weakest link model for a polymer and used a “spatial” Poisson process in the (t, y) plane, where the rate of the Poisson process at (t, y) is vf(y); v is the rate of occurrence of a defect that appears through time, and f(y) is the density of the explosion times. Here, v depends on the size of the bundle and on the load on the bundle. Emulating Taylor (1987), though, would be more complicated since the rate v and the density f would depend on the coarseness of the anode interface.

As indicated in Chap. 3, other modifications need to be made to account for information regarding the coarseness of the HfO2 metal interface with the anode and for the percentage of the dielectric that is grain boundary. Pirrotta et al. (2013) discussed the roles of the grains and grain boundaries in the conduction of electrons through the dielectric, while Zhang et al. (2019) indicated how the coarseness depends on the annealing temperature. This is depicted in Figure 1 of Zhang et al. (2019) (see also Figure 2 of Zhang et al. (2019)). This coarseness is reflected in the peaks and valleys of the interface where the valleys determine the grain boundaries. The asymptotic survival distribution for the minimum CTBD time for a given height is Weibull, where the shape parameter, β, is proportional to its height. The distribution of bundle sizes of the grain boundary structure can be incorporated by mixing over β, where the mixing proportions are determined by the fraction of the grain boundary sizes in these peaks and valleys. The scale parameter, τ, is more problematic, but one could imagine that its relationship to voltage, V, is that ln τ is linear in ln V for a static load V. Determining good models for the joint behavior


of (τ, β) is an open question but would give ways to realistically model the BD distribution of such HfO2 dielectrics as a (weakest link) chain-of-bundles model.

Another possibility is to statistically model the coarseness as follows. It is clear from Pirrotta et al. (2013) and Zhang et al. (2019) that experimentalists can extract very detailed data regarding the coarseness. If one has such data for a number of specimens, one could construct a Dirichlet/gamma process for the heights, z, over an (x, y) grid cell, where the cell and the height are based on the resolution of the measuring device. With this structure, a nonparametric posterior empirical Bayes estimator of the surface can be constructed based on the data to model the coarseness. The above approaches to modeling the coarseness are open areas of possible future research and are discussed further in Sect. 11.2.

10.2 Size Effect Simulations

Now, we consider the load-sharing cell model and look at size effects based on subcell CTBD distributions consistent with the CTBD Weibull plots given in Fig. 10.2. Since the lattice constant is about 1 nm for some types of pure HfO2 crystals and the physical thickness is about 5 nm for Kim and Lee's (2004) data, a pure grain cell would consist of about 5 pure subcells. This is consistent with Le's (2012) description of the cell size, where a defect is about the size of 1 subcell. Since the actual failure of the dielectric is initiated in the grain boundary and the Weibull plots have a shape parameter of about 3 (hence 3 subcells), this suggests that the behavior of the load-sharing distribution for a grain boundary cell at the origin can be approximated by a mixture of gammas with shape parameter 3. Hence, subcell CTBD times are assumed to be i.i.d. exponential in the discussion of cell BD and size effects for dielectric breakdown given below. In addition, the length of the chain reflects the amount of the grain boundary that makes up the cross-sectional area of the dielectric. Furthermore, unit exponentials (scale parameter equal to 1) are chosen in the computations, since the analysis for an arbitrary scale parameter can easily be adjusted. (Daniels' equal load-sharing rule justifies this: multiply the unit exponential time to failure of a subcell by the characteristic lifetime/scale parameter for a subcell. Also, the BD distributions for the 3 different cycles-to-failure protocols differ only by a scale change, so they can be adjusted in the same way.) With this choice of the subcell BD cycle times and with the equal load-sharing rule for the cell, we investigate the size effects for this model. This investigation is similar to the one in Sects. 9.1 and 9.2, where series circuits of sizes 2, 3, 4, and 5 were considered, except that the subcell BD distributions here are compatible with the information in Fig. 10.1.

Figures 10.2, 10.3, 10.4, 10.5, 10.6, and 10.7 and Tables 10.1 and 10.2 indicate simulation size effects for series circuits of capacitors of size k = 2, 3, 4, and 5, where the CTBD distribution of the capacitors is unit exponential. Figures 10.2 and 10.3 give the exact Weibull plots and survival distributions for these series circuits based


Fig. 10.1 Weibull plots of Kim and Lee (2004)’s Figure 14 data with the same shape parameter = 2.810153; (a) 10 kHz, 10% duty cycle: scale parameter = 4.407255; (b) 10 kHz, 50% duty cycle: scale parameter = 5.420054; (c) 0.1 kHz, 10% duty cycle: scale parameter = 5.558458
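The common-shape, protocol-specific-scale structure reported in this caption can be obtained by joint maximum likelihood. The sketch below shows one way to compute such a constrained Weibull fit with scipy; the data arrays are hypothetical placeholders (not the Kim and Lee (2004) values), and the parametrization (one shared log-shape, one log-scale per protocol) is only illustrative.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical cycles-to-breakdown samples for the three test protocols
# (placeholders, not the Kim and Lee (2004) Figure 14 values).
groups = [
    np.array([45.0, 80.0, 120.0, 150.0, 210.0]),
    np.array([90.0, 160.0, 230.0, 310.0, 400.0]),
    np.array([110.0, 180.0, 260.0, 330.0, 450.0]),
]

def neg_loglik(theta):
    """Weibull log-likelihood with one shape shared by all groups and a
    separate scale per group; theta is on the log scale for positivity."""
    rho = np.exp(theta[0])
    nll = 0.0
    for x, log_tau in zip(groups, theta[1:]):
        tau = np.exp(log_tau)
        z = x / tau
        nll -= np.sum(np.log(rho / tau) + (rho - 1) * np.log(z) - z ** rho)
    return nll

theta0 = np.concatenate(([0.0], [np.log(x.mean()) for x in groups]))
fit = minimize(neg_loglik, theta0, method="Nelder-Mead")
print("common shape:", np.exp(fit.x[0]), "scales:", np.exp(fit.x[1:]))
```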

on formula (6.4), since the capacitors' CTBD distribution is unit exponential. Notice that the curvature in the Weibull plots increases as k increases, where the slope of the plots at the right end of the plot is around 2 and at the left end is k for the series circuit of k capacitors. Based on simulations of size 100,000, the boxplots in Fig. 10.4 of CTBD, given the number of capacitor failures that precipitate circuit failure for a series circuit of size k, Nk, are very similar. Thus, the number of capacitor failures prior to circuit failure carries very little information for predicting circuit failure. Information, based on simulations of size 100,000, about the distribution of the Nk is given in Table 10.1. Also, note that the MLE-fitted gammas have shape parameters that are related to the circuit size.
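The gamma MLEs of the kind reported in Table 10.1 can be computed with standard tools. The following sketch fits a gamma by maximum likelihood with scipy; the data here are a synthetic stand-in (a gamma sample), not the simulated circuit CTBD values.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Stand-in CTBD sample: a gamma draw playing the role of the simulated
# circuit CTBD values (shape near the circuit size, as in Table 10.1).
ctbd = rng.gamma(shape=3.0, scale=1.3, size=100_000)

# MLE of the gamma shape and scale with the location fixed at 0,
# the same kind of fit reported in Table 10.1.
shape_hat, _, scale_hat = stats.gamma.fit(ctbd, floc=0)
print(f"shape MLE = {shape_hat:.3f}, scale MLE = {scale_hat:.3f}")
```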


Fig. 10.2 Weibull plots for CTBD for series circuits with k capacitors, k = 2, 3, 4, 5. Capacitors' CTBD distribution is the unit exponential distribution


Fig. 10.3 The survival distributions of CTBD for series circuits with k capacitors, k = 2, 3, 4, 5. Capacitors CTBD distribution is the unit exponential distribution


Fig. 10.4 Boxplots of CTBD given the number of capacitor failures that precipitate circuit breakdown for series circuits with k capacitors, k = 2, 3, 4, 5. Based on simulations of size 100,000


Fig. 10.5 Weibull plots of CTBD for series circuits with k capacitors, k = 2, 3, 4, 5. Based on simulations of sizes 10, 1000, and 100,000. (a) Based on simulations of size 10. (b) Based on simulations of size 1000. (c) Based on simulations of size 100,000


Fig. 10.6 Histograms for the number of capacitor failures that precipitate circuit breakdown for series circuits with k capacitors, k = 2, 3, 4, 5. Based on simulations of sizes 10, 1000, and 100,000. (a) Based on simulations of size 10. (b) Based on simulations of size 1000. (c) Based on simulations of size 100,000


Fig. 10.7 Boxplots of CTBD given the number of capacitor failures that precipitate circuit breakdown for series circuits with k capacitors, k = 2, 3, 4, 5. Based on simulations of sizes 10, 1000, and 100,000. (a) Based on simulations of size 10. (b) Based on simulations of size 1000. (c) Based on simulations of size 100,000

Figure 10.5 indicates the simulation size effect in the Weibull plots for simulations of sizes 10, 1000, and 100,000 for the four different circuit sizes. Table 10.2 and Fig. 10.6 give information regarding these simulations of the distribution of Nk. These indicate just how strongly the simulation size affects the Weibull plots and


Table 10.1 Estimated probabilities for the number of capacitor failures that precipitate circuit breakdown for series circuits with k capacitors, k = 2, 3, 4, 5. Based on simulations of size 100,000. MLEs are for the fitted gamma parameters

Number of    Number of capacitor failures for the series circuit to fail      Gamma parameter MLE
capacitors   1         2         3         4         5                        Scale     Shape
2            0.33240   0.66760   –         –         –                        2.3566    1.9620 (≈ 2)
3            0.14058   0.38581   0.47361   –         –                        3.8881    2.8824 (≈ 3)
4            0.06584   0.22308   0.36679   0.34429   –                        5.4926    3.7426 (≈ 4)
5            0.03276   0.13083   0.25729   0.32117   0.25795                  7.2019    4.6004 (≈ 5)

Table 10.2 Estimated probabilities for the number of capacitor failures that precipitate circuit breakdown for series circuits with k capacitors, k = 2, 3, 4, 5. Based on simulations of sizes 10, 1000, and 100,000

Simulation   Number of    Number of capacitor failures for the series circuit to fail
size         capacitors   1         2         3         4         5
10           2            0.5       0.5       –         –         –
             3            0.0       0.5       0.5       –         –
             4            0.0       0.1       0.4       0.5       –
             5            0.0       0.1       0.3       0.4       0.2
1000         2            0.331     0.669     –         –         –
             3            0.133     0.370     0.497     –         –
             4            0.067     0.220     0.363     0.350     –
             5            0.027     0.152     0.252     0.318     0.251
100,000      2            0.33240   0.66760   –         –         –
             3            0.14058   0.38581   0.47361   –         –
             4            0.06584   0.22308   0.36679   0.34429   –
             5            0.03276   0.13083   0.25729   0.32117   0.25795

the estimates of the distributions. Finally, the boxplots in Fig. 10.7 illustrate the simulation size effects for the conditional distribution of the CTBD given Nk.

Figure 10.8 gives part of the exact Weibull plots for parallel–series circuits of capacitors with unit exponential CTBD distributions. Let F̄_k(t) denote the CTBD survival distribution of a series circuit of size k. Then, a parallel circuit of n series circuits of size k has CTBD survival distribution F̄_{k,n}(t) ≡ [F̄_k(t)]^n. Thus, the y-coordinate of the Weibull plot of F̄_{k,n}(t) is

ln[−ln(F̄_{k,n}(t))] = ln n + ln[−ln(F̄_k(t))],

which is a vertical shift of ln(−ln(F̄_k(t))) at ln t. Figure 10.8 gives insights into the size effects due to the size of the parallel circuit. From Fig. 10.8a, the curvature in the plots is around ln CTBD = 0, or CTBD = 1. Since F̄_{2,n}(1) ≈ 0 for n = 40,000 and 4,000,000, the Weibull asymptotics are appropriate for these values of n, but a finite weakest link adjustment is necessary


Fig. 10.8 Weibull plots of CTBD among parallel–series circuits composed of a different number of cells. Each cell is composed of a different number of capacitors where the capacitor CTBD distribution is the unit exponential distribution. (a) Each cell composed of 2 capacitors. (b) Each cell composed of 3 capacitors. (c) Each cell composed of 4 capacitors. (d) Each cell composed of 5 capacitors

for n = 400 for the right tail of F̄_{2,400}. Similar statements can be made regarding circuits of size k = 3, 4, and 5. The above shows how to assess the size effect for the load-sharing cell model where a cell is a series circuit of k capacitors with unit exponential CTBD. As mentioned earlier, to account for the coarseness of the interface surface at the metal anode end of HfO2 dielectrics, one can mix over k where the mixing proportions are determined by the coarseness. Also, the size of the parallel circuit is determined by the percentage of the dielectric's grain boundary.
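The vertical-shift identity for the Weibit of a parallel circuit of n series circuits is easy to verify numerically. In the sketch below, a gamma survival function stands in for F̄_k (an illustrative choice; it is not formula (6.4)), and the chain survival [F̄_k(t)]^n is compared with the single-circuit Weibit shifted by ln n.

```python
import numpy as np
from scipy import stats

# Stand-in for the series-circuit CTBD survival F_k: a gamma(k, 1) survival
# function (illustrative only; not formula (6.4)).
def surv_k(t, k=2):
    return stats.gamma.sf(t, a=k)

def weibit(surv):
    # Weibull-plot y-coordinate: ln(-ln(survival))
    return np.log(-np.log(surv))

t = np.linspace(0.05, 2.0, 100)
for n in (1, 100, 400):
    lhs = weibit(surv_k(t) ** n)             # parallel circuit of n series circuits
    rhs = np.log(n) + weibit(surv_k(t))      # single-circuit Weibit shifted up by ln n
    assert np.allclose(lhs, rhs)
print("Weibit of F_{k,n} equals the Weibit of F_k shifted vertically by ln n")
```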

Chapter 11

Concluding Comments and Future Research Directions

11.1 Book Summary

Over the last 60 years, FBMs have played a fundamental role in the analysis of the reliability of certain types of complex load-sharing systems. They were first used in the study of the strength of yarns and threads, followed by modeling the reliability of fibrous composites. This was followed by their use in the investigation of the fracture and breakdown (BD) of disordered materials studied by material scientists. The basic models for BD were the FBM and the chain-of-bundles (links) model, where there is load-sharing between the elements in the bundle to accommodate physical considerations. The latter model is also a weakest link model where the chain breaks down when one of the links fails (the so-called weakest link). Limitless applications of the FBM can be found in the fields of material science, mechanical and structural engineering, and nanotechnology since modern disordered materials are extensively used in these areas. In addition, FBMs have been used in the geosciences to understand how rapid mass movements, such as avalanches, are triggered.

Since FBMs are stochastic in nature and because of their importance in the analysis of system BD, we presented an overview of the statistical FBM. Here the emphasis was on the statistical aspects to make it accessible to those not familiar with the topic. This was accomplished by first discussing the basic statistical distribution theory for load-sharing bundles (Chap. 6), with more technical details relegated to appendices (Appendices A.2 and A.3), and then by concentrating on the physical and statistical aspects of specific load-sharing examples: the BD for circuits of capacitors and related dielectrics as well as polymer fibers and fibrous composites (Chaps. 7–9). For series and parallel–series circuits (series–parallel reliability systems) of ordinary capacitors, the load-sharing rules were derived from the electrical laws (Chaps. 2, 5, and 9). This, with the BD formalism (Chap. 5), was then used to obtain the BD distribution of the circuit. Simulation size effects were illustrated


for simulations of series and parallel–series circuits (Chap. 6 and Appendix A.1) as well as the generic load-sharing cell model (Chaps. 9 and 10). Simulation size effects are related to the finite weakest link adjustments for the BD distribution that arise in large series–parallel reliability load-sharing systems, such as dielectric BD, from their minimum extreme value (MEV) Weibull distribution approximations (Sect. 6.5). The weakest link size effects are related to the length of the chain rather than the simulation size; the behavior of very long chains is determined by the lower tail of the link/bundle distribution, while determining the behavior of the lower tail of the bundle distribution requires large simulation sizes. Ideas regarding how to quantify the size effect are also given (Sects. 6.5, A.1.5, and A.1.6) and are now discussed along with some related topics.

If the links/bundles are independent and each link has the same BD distribution, then MEV theory gives the Weibull as the asymptotic BD distribution of the chain when the bundle distribution (or equivalently, its cumulative hazard function) behaves like a Weibull distribution at the origin with scale parameter τ and shape parameter ρ∗. The scale parameter τ is the characteristic lifetime of the elements/components in the bundle and is determined by the load-sharing rule. The shape parameter ρ∗ is related to the number of elements in the bundle that need to fail for the bundle to fail (e.g., n is the number of elements in a parallel reliability system such as a series circuit of capacitors). The generic load-sharing bundle/cell model further assumes that the cell/link consists of n independent subcells with the same BD distribution. If the subcell BD distribution behaves, at the origin, like a Weibull with shape parameter ρ, then the BD distribution of the cell behaves, at the origin, like a Weibull with shape parameter ρ∗ = nρ, and the scale parameter is a function of τ and the nth moment of the load-sharing distribution. Thus, the Weibull shape and scale can have physical interpretations. This is part of the BD formalism that is discussed. Statistical analyses using proportional hazards models and other statistical methods (such as Weibull analyses and a nonparametric Bayes analysis) are used to study the rubrics of the formalism (Chaps. 7 and 8). A critique of the role of the power and of the exponential laws in this formalism, which link the time-to-failure distribution under a static load to that for an increasing load, is also given (Sect. 5.2).

The Weibull distribution is used as an approximation for the chain's BD distribution. For a bundle with n independent components, each with a Weibull survival distribution with scale and shape parameters λ and ρ, respectively, the bundle breaking strength distribution is a mixed gamma-type of distribution with shape nρ, where the mixing is over the scale parameter λ and is totally determined by the load-sharing rule. Thus, exact Weibull plots of the distribution are not linear, and those given in Figs. 6.1, 6.2, 6.3, 7.9, 7.10, 7.11, 9.1, and 9.7 are, in fact, concave. This nonlinearity leads to size effects based on the length of the chain of such bundles (the so-called finite weakest link model). For concave Weibull plots for the bundle strength distribution, the slope is related to the number of component failures in a bundle that cause chain failure.


An elementary but in-depth discussion of the physical aspects of SiO2 and HfO2 dielectrics and cell models was given (Chaps. 3 and 4). This was used to study a load-sharing cell model for the BD of HfO2 dielectrics and the BD formalism. The latter was based on Kim and Lee's (2004) comprehensive study of such dielectrics and an analysis of their data. Here, several BD distributions were compared in the analysis, and proportional hazards regression models were used to study the BD formalism (Chap. 8). Critical findings were as follows. It was discovered that, for very thin dielectrics, the A.1 Weakest Link Principle of the BD formalism, a proportional hazards model, no longer had a hazard proportional to the area, but a proportional hazards model (A.1') was still obtained. Also, regression models were used to predict HBDs based on the first SBD time. In addition, simulation size effects were studied for the load-sharing cell model based on this data (Chap. 10).

11.2 Some Future Research Directions

In this section, four topics for future research are addressed:

(i) Curvature in Weibull plots
(ii) Roughness/smoothness of a dielectric
(iii) Incorporating degradation into the FBM
(iv) Application to nano-sensors

11.2.1 Curvature in Weibull Plots

Curvature in a Weibull plot plays an important role in size effects and, as seen in Appendix A.1, is measured by

d(t) = h(t) t / H(t) = h(t) / A(t).

The use of d(t) to study the curvature naturally leads to change point models in the consideration of piecewise linear fits in the Weibull plots, or in Weibull plots of mixtures of exact bundle distributions where the mixtures are over the number of elements in the bundle. Further discussion of this, to account for roughness in hafnia dielectrics, is given in the next subsection. Linear fits of the left tails in exact bundle Weibull plots required R² > 99.9% in Sect. 6.5. The left tail fit approximates the MEV Weibull for the model and dictates the length-of-the-chain size effects for the chain-of-bundles model. The effect of this computational error on size effects needs further study. Data Weibull plots will also have statistical uncertainty (error of estimation, etc.). The exact plots are of


theoretical use, but for practical empirical work the statistical uncertainty needs to be considered in the study of size effects.

11.2.2 Modeling Roughness

As indicated in Chap. 3, roughness is an important consideration in dielectric behavior. Annealing improves some of the qualities of the dielectric, for example increasing the dielectric constant, up to a point. The annealing temperature affects the roughness at the anode end of the dielectric; a low annealing temperature reduces the roughness but diminishes quality, while high temperatures make it too rough and also reduce the dielectric constant. It was found by Zhang et al. (2019) that the optimal annealing temperature for the dielectric constant is 500◦C, producing a dielectric constant of 17.2.

Material scientists have been able to accurately measure smoothness/roughness at the micro-level and have provided means and standard deviations (macro-information) of these measurements (Zhang et al., 2019). An important question is how to model the roughness and incorporate this macro-information. Roughness in the dielectric is a very jagged process; perhaps a Lévy process, such as a gamma process, would better reflect this jaggedness than a Wiener process. Also, bivariate macro-information regarding smoothness, such as the correlation, would be useful in this study. For example, positive correlation might reflect smoothness, while negative correlation might reflect roughness. Also, gamma processes with covariates, where one of the covariates is the annealing temperature, would exploit hidden replications and give a more precise analysis.

In the study by Zhang et al. (2019), roughness/smoothness at the anode surface of the dielectric was determined using Atomic Force Microscopy (Ray, 2013). This method moves a very fine-tipped (∼10 to 20 nm in diameter) probe across a surface to achieve very high-resolution images of the surface. The tip of the probe moves vertically in response to interactions between the tip and the surface, and the movement is measured using a focused laser beam. The images were of a hafnia surface measuring 2 µm by 2 µm with a resolution of 256 points by 256 lines. To describe this, let the base of the anode be B = {(x, y) : 0 < x < w, 0 < y < l}, where w and l are, respectively, the width and length of B. Let φ(x, y) = z denote the height at (x, y), which is the distance of the pure hafnia to the grain boundary at (x, y). Thus, the base B should be considered a grid of squares G = {Sj}, where each square is a face of a cube in the hafnia lattice and the height at Sj is the nonnegative set function φ(Sj).


Note that G is a finite partition of B, and φ can be extended, in the obvious way,1 to a finite measure on σ(G), the collection of all unions of sets in G together with the empty set. As such, one can consider the finite measure φ (where φ(A) is the total height over A ∈ σ(G)) as the parameter for a Lévy process on σ(G). For gamma processes, this leads to Dirichlet processes and to partition-based Dirichlet distributions for constructing the gamma process and its partition-based counterparts. This construction can be very attractive for analysis purposes (Sethuraman & Hollander, 2009).

Modeling the roughness gives a chain-of-bundles model for the dielectric where the base of the bundle is an element in G = {Sj} and the number of elements in a bundle is determined by the height. The length of the chain is the number of elements in G. Thus, bundles are not identical in structure but depend on their height. Related to the previous section, a possible approach is to model the bundle in the chain as a mixture of bundles where the mixing is over the height, the number of elements in the bundle. The curvature of the Weibull plots of such a bundle distribution need not be concave. As such, the size effects can be dramatically related to the mixing proportions for the heights.
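A minimal numerical sketch of this kind of construction is given below. The grid follows the 256 × 256 AFM resolution quoted above, while the base measure and concentration are hypothetical values; the sketch only illustrates the gamma-increment/Dirichlet-normalization relationship, not the full posterior empirical Bayes analysis.

```python
import numpy as np

rng = np.random.default_rng(7)

# Grid of squares G = {S_j}: a 256 x 256 partition of the 2 um x 2 um base B,
# matching the AFM resolution quoted from Zhang et al. (2019).
n = 256
base_height = np.full((n, n), 2.0)     # hypothetical base measure phi(S_j) (mean height per square)
concentration = 4.0                    # hypothetical concentration (shape per unit of phi)

# Independent gamma increments over the squares: a discrete gamma process on sigma(G).
heights = rng.gamma(shape=concentration * base_height, scale=1.0 / concentration)

# Additivity: the "measure" of any union A of squares is the sum of its squares' heights.
A = (slice(0, 64), slice(0, 64))       # an example set A in sigma(G): the lower-left 64 x 64 block
print("estimated phi(A) =", heights[A].sum())

# Normalizing the increments gives a Dirichlet-distributed random partition of the
# total height, the partition-based counterpart mentioned in the text.
dirichlet_weights = heights / heights.sum()
print("weights sum to", dirichlet_weights.sum())
```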

11.2.3 Degradation

Under a testing regime for a material, the change in the degradation level would be dependent on its current level. Processes with independent increments would not be suitable; related state-dependent processes, for example the transformed gamma process, should be considered instead to assess the material's reliability (Giorgio et al., 2018).
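As one simple illustration of a state-dependent (non-independent-increment) degradation path, the sketch below lets a transformed level g(W) accrue gamma increments, so the raw increment of W depends on its current level. The transformation and parameters are hypothetical choices and are not the Giorgio et al. (2018) specification.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_degradation(t_grid, a=1.5, b=0.8):
    """State-dependent degradation sketch: the transformed level sqrt(W)
    accrues gamma-distributed increments, so the raw increment of W depends
    on its current level. a and b are hypothetical values."""
    w = np.zeros_like(t_grid)
    for i in range(1, len(t_grid)):
        dt = t_grid[i] - t_grid[i - 1]
        z = np.sqrt(w[i - 1]) + rng.gamma(shape=a * dt, scale=b)  # increment on the transformed scale
        w[i] = z ** 2                                             # invert the transformation
    return w

path = simulate_degradation(np.linspace(0.0, 10.0, 101))
print("final degradation level:", path[-1])
```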

11.2.4 Nano-Sensors

One type of nano-sensor is a network of nanowires. Such sensors are used to detect certain types of gases since they change their resistivity in the presence of the gas. As such, a nano-sensor is a fuse-type network of nanowires that can have geometrical structure, where the fuses/resistors correspond to the nanowires in the network. Here the physics of the network is based on its conductivity (or resistivity), and changes in it are used to detect gases, for example. For reliability purposes, Ebrahimi et al. (2013a,b) have used fuse-type percolation models to model a nano-sensor as a network of fuses where the fuses are based on the nanowires. This is similar to dielectrics that are viewed as chains-of-bundles where the bundle is a series circuit of nanocapacitors and the physics is based on capacitance rather than resistance. An open question is:

1 There is no need for measure theory in this construction since G is a finite collection of sets.


• Can such nano-sensors be modeled from a load-sharing perspective?
• Related to this question is that the network for a nano-sensor can have geometrical structure. Viewing this from a load-sharing chain-of-bundles perspective, one can ask:
  – Can the geometrical structure be considered as a load-sharing chain?
  – What are the shapes of the bundles in the chain?

Another consideration is that a nanowire can increase in volume in the presence of a gas and then decrease in volume when the gas is purged. Such a decrease can cause irreversible damage to the nanowire; atoms in the nanowire can be dislocated. This is reminiscent of SBD and HBD discussed in Sects. 3.4, 3.5, and 8.3. Cumulative damage models discussed in Sects. 7.1.3 and 11.2.3 suggest ways to model such degradation.

Appendix A

Appendices of Supplementary Topics

A.1 Curvature in Weibull Plots and Its Implications

A.1.1 Reliability Systems and Curvature in Related Weibull Plots

A series system of components is a system that fails when one of the components in the system fails, while a parallel system is one that fails when all the components fail. A series–parallel system is a system where you have parallel subsystems in series. This system fails when one of the parallel subsystems fails. Another name for a series–parallel system is a chain-of-bundles model. Here, the parallel subsystems are the bundles or links in the chain, and the chain fails when one of the links fails; as such, it is also referred to as the weakest link model, and its assumption is referred to as the weakest link hypothesis.

For a chain with N links where the links are independent with the same survival distribution F̄(t) = exp{−H(t)} (where F̄(t) = 1 − F(t) and F is the cdf), the survival distribution of the chain is F̄^N(t) = exp{−N H(t)}, where H(t) = −ln F̄(t) is the cumulative hazard function of F. That is, the chain is a length proportional hazards model. Relevant to this model is how curvature in the Weibull plot of F̄ affects the behavior of the plot for F̄^N. Curvature contributes to the size effects one sees in the Weibull plots of F̄^N and is addressed in the next sections. A discussion regarding size effects and curvature in these plots is given, and examples of convex and concave curvature are considered. The first is observed in competing Weibull risks/mixed hazards, while the second is illustrated for a k component backup system.


A.1.2 Curvature in Weibull Plots

Let g(t) = ln[−ln F̄(t)] ≡ ln H(t) for a distribution F, where H is the cumulative hazard function of F. The function g(t) is sometimes referred to as the Weibit function. Throughout the remainder, we assume that F has a density f. Then, the derivative of H is the failure rate or hazard function

h(t) = f(t)/F̄(t),   and   H(t) = ∫_0^t h(s) ds.

The exact Weibull plot of F is the graph

(ln t, g(t)) ↔ (y, g(e^y)) ≡ (y, w(y)).

The curvature in the plot is dictated by the behavior of the derivative of w, w′. Note that

w′(y) = g′(e^y) e^y = H′(e^y) e^y / H(e^y) = h(e^y) e^y / H(e^y).   (A.1)

Let d(t) ≡ h(t)t/H(t). The following lemma follows immediately from (A.1).

Lemma A.1.2.1
(a) A Weibull plot is convex (concave) if and only if d(t) is an increasing (decreasing) function.
(b) In addition, the graph is convex (concave) on an open interval I if and only if d(t) is increasing (decreasing) on I.

Comments A.1.2.1
(i) This lemma has an interpretation from a reliability perspective. To see this, let

A(t) = H(t)/t = (1/t) ∫_0^t h(s) ds.

The function A(t) is the average failure rate, and the ratio

d(t) = h(t)t/H(t) = h(t)/A(t)

compares the failure rate to the average failure rate. If the failure rate is increasing, an increasing failure rate (IFR), then A(t) is increasing (increasing failure rate average, IFRA). If the Weibull plot of F is concave, this indicates that the average failure rate is increasing faster than the failure rate. An example of this is the exact Weibull plot of the G(1, k), where k = 2 and 3. This is shown in Sect. A.1.5, while figures there strongly suggest that the plots for k = 4 and 5 are also concave.


(ii) Let Y = σ X^{1/ρ}, where X is a nonnegative r.v. and σ, ρ are positive. Then, the distribution of Y is F_Y(t) = F_X((t/σ)^ρ), and the family of distributions {F_X((·/σ)^ρ) : σ > 0, ρ > 0} is the scale-shape family generated by F_X. Also, note that

d_Y(t) = ρ d_X((t/σ)^ρ).

Thus, from Lemma A.1.2.1(b), the curvature of the exact Weibull plot for Y is the same as that for X. Also, d_X is monotone if and only if d_Y is, and if X is the standard exponential, then Y is a Weibull with scale σ and shape parameter ρ with d_Y(t) = ρ. This latter result just indicates that the exact Weibull plot of Y is linear with slope ρ.

A.1.3 Size Effects and Mixed Hazards

This section is motivated by Fischer and Nissen (1976). First, we consider the size (scaling) effect on mixed hazards and then show the convexity of Weibull probability plots of mixed Weibull hazards. Consider the competing risk survival distribution of a material of size S,

F̄_S(t) = exp{−H_S(t)},   (A.2)

where

H_S(t) = L_1(S) H_1(t) + L_2(S) H_2(t).   (A.3)

Here, H_1(t) and H_2(t) are the cumulative hazard functions of two competing risks, where L_1(S) and L_2(S) quantify the size effects of the competing risks of the failure of material of size S. Fischer and Nissen (1976) suggested that there are two competing risks for the time of breakdown of polyethylene (PE)—intrinsic strength and extrinsic strength. The former, risk 1, is due to the material strength of the PE (free of defects), and the latter, risk 2, is the strength affected by defects. Note that for a given size, S, of the PE material, from (A.3), the failure/hazard rate function of the material is

h_S(t) ≡ (d/dt) H_S(t) = L_1(S) (d/dt) H_1(t) + L_2(S) (d/dt) H_2(t) ≡ L_1(S) h_1(t) + L_2(S) h_2(t).   (A.4)

From (A.4), given that the material has just failed at time t, the probability the failure is due to risk i = 1 or 2 is

L_i(S) h_i(t) / [L_1(S) h_1(t) + L_2(S) h_2(t)].   (A.5)
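A small numerical illustration of (A.5) is given below, with Weibull cumulative hazards for the two risks and illustrative size-effect factors L_1(S) = 1 and L_2(S) = S (all values hypothetical); it shows the defect-driven risk dominating the attribution as the size grows.

```python
import numpy as np

# Competing-risk attribution (A.5) with Weibull cumulative hazards H_i(t) = (t / tau_i)**rho_i
# and illustrative size-effect factors L_1(S) = 1, L_2(S) = S (all values hypothetical).
def prob_failure_due_to_risk2(t, S, tau=(1.0, 2.0), rho=(4.0, 1.2)):
    h1 = (rho[0] / tau[0]) * (t / tau[0]) ** (rho[0] - 1)   # hazard of the intrinsic risk
    h2 = (rho[1] / tau[1]) * (t / tau[1]) ** (rho[1] - 1)   # hazard of the defect-driven risk
    L1, L2 = 1.0, S
    return L2 * h2 / (L1 * h1 + L2 * h2)

for S in (0.01, 1.0, 100.0):
    print(S, prob_failure_due_to_risk2(t=0.5, S=S))
```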

Remarks A.1.3.1 Let |S| denote the magnitude of size S.
(i) For very small size PE material, |S| ≈ 0 and L_2(S) ≈ 0, ...

... is a mixture of Weibull hazards. Hence, its Weibull plot is convex.


A.1.5 An Example of an Exact Weibull Plot with Concave Curvature

Consider a k component reliability system where components 2, . . . , k are backups that immediately go into service when the preceding component fails. There is no aging of a component until it goes into service and is under use. Let T_j, j = 1, . . . , k, denote the use failure time of the k components. Then, the use failure time of the component backup system is S_k = T_1 + · · · + T_k. If T_1, . . . , T_k are independent standard exponentials, then the distribution of S_k is

G_k(t) = Σ_{j=k}^{∞} t^j e^{−t} / j!  ∼  t^k / k!   as t → 0,

a gamma distribution with scale parameter 1 and shape parameter k (i.e., a G(1, k) distribution). Lemma A.1.2.1, Comment A.1.2.1(ii), and the graphs of d_k(t) for k = 2, 3, 4, and 5 in Fig. A.1 suggest that, for σ > 0, d_k(t/σ) is decreasing. This (as well as the Weibull plots in Fig. A.2) indicates that the G(σ, k) distributions, k = 2, 3, 4, and 5, may have exact Weibull plots that are concave. In Lemma A.1.5.1, it is actually proved that d_k(t) is decreasing (equivalently, that the exact Weibull plot is concave) for k = 2 and 3. Note that, for a positive integer k, the G(σ, k) distributions are also called Erlang distributions.

The density, survival function, hazard rate, and cumulative hazard function of G_k are, respectively,

g_k(t) = t^{k−1} e^{−t} / (k − 1)!,

Ḡ_k(t) = Σ_{j=0}^{k−1} t^j e^{−t} / j!,

h_k(t) = g_k(t) / Ḡ_k(t) = t^{k−1} / [ (k − 1)! Σ_{j=0}^{k−1} t^j / j! ],

Fig. A.1 (a) Left: Overlaid plots and (b) Right: Individual graphs of d_k(t) for k = 2, 3, 4, and 5
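The monotonicity displayed in Fig. A.1 can be checked numerically from the definition d(t) = h(t)t/H(t) using standard gamma routines, as in the following sketch (a numerical check on a grid, not a proof).

```python
import numpy as np
from scipy import stats

def d_gamma(t, k):
    """d(t) = h(t) * t / H(t) for the G(1, k) (Erlang) distribution."""
    surv = stats.gamma.sf(t, a=k)
    h = stats.gamma.pdf(t, a=k) / surv     # hazard rate h_k(t)
    H = -np.log(surv)                      # cumulative hazard H_k(t)
    return h * t / H

t = np.linspace(0.05, 20.0, 2000)
for k in (2, 3, 4, 5):
    dk = d_gamma(t, k)
    print(f"k={k}: decreasing on grid: {bool(np.all(np.diff(dk) < 0))}, "
          f"d_k near 0 ~ {dk[0]:.3f}, d_k(20) ~ {dk[-1]:.3f}")
```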



Fig. A.2 Weibull plots of G_k(t) for k = 2, 3, 4, and 5

H_k(t) = −ln Ḡ_k(t) = −ln( Σ_{j=0}^{k−1} t^j e^{−t} / j! ) = t − ln( Σ_{j=0}^{k−1} t^j / j! ) = ∫_0^t h_k(s) ds.   (A.9)

Also,

d_k(t) = h_k(t) t / H_k(t) = t^k / { (k − 1)! [ Σ_{j=0}^{k−1} t^j / j! ] [ t − ln( Σ_{j=0}^{k−1} t^j / j! ) ] }.   (A.10)

We now prove the following.

Lemma A.1.5.1 The Weibull plot of a G(1, k) distribution is concave for k = 2 and 3.

Proof From Lemma A.1.2.1(a), we need to show that d_k(t) in (A.10) is decreasing. To do this, we only need to show that its derivative, d_k′(t), is non-positive. From the quotient rule, this amounts to showing that, for t > 0,

0 ≥ Λ_k(t) ≡ H_k²(t) d_k′(t) = [h_k′(t) t + h_k(t)] H_k(t) − (h_k(t))² t.   (A.11)

We first prove (A.11) for k = 2 since its proof is more straightforward than the one for k = 3. For k = 2, h_2(t) = t/(1 + t) and H_2(t) = t − ln(1 + t), so (A.11) is equivalent to

0 ≥ λ_2(t) ≡ [(1 + t)²/t] Λ_2(t) = (2 + t)(t − ln(1 + t)) − t² = 2t + t² − (2 + t) ln(1 + t) − t² = 2t − (2 + t) ln(1 + t),   (A.12)

and (A.12) is equivalent to

f(t) ≡ ln(1 + t) ≥ 2t/(2 + t) ≡ g(t).   (A.13)

Note that

f′(t) = 1/(1 + t) ≥ 4/(2 + t)² = g′(t)   (since (2 + t)² − 4(1 + t) = t² ≥ 0)   and   f(0) = 0 = g(0).

So,

f(t) = f(t) − f(0) = ∫_0^t f′(s) ds ≥ ∫_0^t g′(s) ds = g(t) − g(0) = g(t).

This proves (A.13) and completes the proof for k = 2.

The following identities from (A.9) and (A.10), written in terms of S_1(t) ≡ 1 + t and S_2(t) ≡ 1 + t + t²/2, are used in the proof for k = 3:

H_3(t) = t − ln S_2(t),
h_3(t) = t² / (2 S_2(t)),
t h_3′(t) = t (2 S_2(t))^{−2} [ 4t S_2(t) − 2t² S_1(t) ],
(h_3(t))² t = t^5 (2 S_2(t))^{−2}.   (A.14)

From (A.11) and (A.14), (A.11) for k = 3 is equivalent to

0 ≥ Λ_3(t) = [h_3′(t) t + h_3(t)] H_3(t) − (h_3(t))² t = t² (2 S_2(t))^{−2} { [ 6 S_2(t) − 2t S_1(t) ] [ t − ln S_2(t) ] − t³ },   (A.15)

and (A.15) is equivalent to

0 ≥ λ_3(t) ≡ [ (2 S_2(t))² / t ] Λ_3(t) = t { [ 6 S_2(t) − 2t S_1(t) ] [ t − ln S_2(t) ] − t³ }.   (A.16)

Canceling t and rearranging the terms in (A.16) give the inequality

P_3(t) f_3(t) ≡ [ 6 S_2(t) − 2t S_1(t) ] ln S_2(t) ≥ [ 6 S_2(t) − 2t S_1(t) ] t − t³ ≡ P_3(t) t − t³,   (A.17)

where P_3(t) ≡ 6 S_2(t) − 2t S_1(t) and f_3(t) ≡ ln S_2(t). Now, for t ≥ 0,

P_3(t) = 6 + 6t + 3t² − 2t − 2t² = 6 + 4t + t² > 0.   (A.18)

From (A.18), we can divide both sides of (A.17) by P_3(t) to see that (A.17) is equivalent to

f_3(t) = ln S_2(t) ≥ t − t³ / P_3(t) ≡ g_3(t).   (A.19)

Notice that

f_3′(t) = S_1(t) / S_2(t)   and   g_3′(t) = 1 − [ 3t² P_3(t) − t³ P_3′(t) ] / [P_3(t)]²,   with f_3(0) = 0 = g_3(0).   (A.20)

We will show that

f_3′(t) = S_1(t) / S_2(t) ≥ 1 − [ 3t² P_3(t) − t³ P_3′(t) ] / [P_3(t)]² = g_3′(t).   (A.21)

Cross-multiplying the (positive) denominators in (A.21), using S_2(t) − S_1(t) = t²/2, and dividing by t²/2 show that, for t > 0, (A.21) is equivalent to

S_2(t) [ 6 P_3(t) − 2t P_3′(t) ] − [P_3(t)]² ≥ 0.   (A.22)

If we prove (A.22), then (A.21) holds, and from (A.20),

f_3(t) = f_3(t) − f_3(0) = ∫_0^t f_3′(s) ds ≥ ∫_0^t g_3′(s) ds = g_3(t) − g_3(0) = g_3(t),

which proves (A.19) and hence the lemma for k = 3. To prove (A.22), note that

P_3(t) = 6 + 4t + t²,   P_3′(t) = 4 + 2t,   [P_3(t)]² = 36 + 48t + 28t² + 8t³ + t⁴,

so that 6 P_3(t) − 2t P_3′(t) = 36 + 16t + 2t². The left-hand side of inequality (A.22) then becomes

S_2(t) [ 36 + 16t + 2t² ] − [P_3(t)]² = (1 + t + t²/2)(36 + 16t + 2t²) − (36 + 48t + 28t² + 8t³ + t⁴)
= (36 + 52t + 36t² + 10t³ + t⁴) − (36 + 48t + 28t² + 8t³ + t⁴) = 4t + 8t² + 2t³ ≥ 0.

This completes the proof for k = 3.

It follows immediately from Comment A.1.2.1(ii) and Lemma A.1.5.1 that, for σ > 0, d_k(t/σ) is decreasing for k = 2 and 3. Thus:

Corollary A.1.5.1 For σ > 0, the G(σ, k) distributions have exact Weibull plots that are concave for k = 2 and 3.

Comment The argument in Lemma A.1.5.1 for k = 2 and 3 can be repeated to establish that the exact Weibull plots for G(σ, k) distributions for integers k > 3 are concave, which is equivalent to showing that certain polynomials are nonnegative for t > 0. We leave it to the interested reader to verify this.

Figures A.2 through A.6 depict the behavior of the exact Weibull plot of G_k(t) for k = 2, 3, 4, and 5. Figure A.2 supports the concavity and suggests that the plot is almost linear for k = 2, but the concavity is more pronounced as k increases. The least squares linear fit to the plot for k = 2 in Fig. A.3 indicates that the curvature is not linear, but the piecewise linear fits for the plots in Figs. A.4, A.5, and A.6, restricted to the intervals (−∞, −6.21], [−6.21, −1.77], and [−1.77, ∞), give a very good approximation. Here, e^{−6.21} and e^{−1.77} correspond, respectively, to the 0.0000020 and 1.29600 percentiles of the G(1, 2) distribution. Since the minimum extreme value (MEV) distribution for G_2 is Weibull with scale 2^{0.5} and shape 2, the Weibit for the Weibull survival distribution W̄(t) = exp{−(t/√2)²} is ln(−ln(W̄(t))) = ln{(t/√2)²} = −ln 2 + 2 ln t. This is essentially the least squares fit given in Fig. A.4.

An important warranty consideration is the minimum of the times of failure for such systems in use. How well the Weibull MEV distribution serves as an approximation for this purpose is a (sample) size effect and depends on the actual number of systems under warranty. An analogous size effect is discussed in detail for k = 2, 3, 4, and 5 component equal load-sharing parallel systems in Chap. 10, where the components have standard exponential distributions E(1). There, the relevant distribution for breakdown (BD)


Fig. A.3 The regression equation is .ln(− ln(1−G(t))) = −1.337+1.537 ln t for the exact Weibull plot of the .G(1, 2) distribution

of the system is a mixed gamma with shape parameter k, where the mixing is over the scale parameter and is determined by the load-sharing rule. It is shown that these mixed gammas are well approximated by a gamma. When the components are i.i.d. with a Weibull BD distribution with shape parameter ρ, the system BD distribution is a mixture of a gamma-type of distribution over the scale parameter, where the shape parameter is kρ. The much more in-depth discussion of size effects given in Chap. 10 is relevant to the size effects considered here. In Figs. A.4, A.5, and A.6, the blue dots are the exact values of the Weibull plot for the G(1, 2) distribution, and the solid line is the least squares linear fit when broken into the indicated three regions of the values of ln t. This linearity is discussed further in the next section.

A.1.6 The Weibull Chain-of-Links Hypothesis and Linearity in Weibull Plots

In the previous section, we saw that the minimum extreme value (MEV) distribution for G_2 is Weibull with scale 2^{0.5} and shape 2. This, at best, suggests that long chains of i.i.d. links with link distribution G_2 should approximately satisfy the Weibull weakest link hypothesis. Below we give some heuristic insights that quantify this based on Figs. A.4, A.5, and A.6, for an arbitrary bundle distribution G whose Weibull plot's left tail is approximately linear.


Fig. A.4 Exact Weibull plot and linear fit on ln t < −6.21 for a G(1, 2) distribution. The linear fit, part I of a piecewise linear fit, is ln(−ln(1−G(t))) = −0.6964 + 2.000 ln t (n = 200, R² = 100.0%)

Fig. A.5 Exact Weibull plot and linear fit on −6.21 < ln t < −1.77 for a G(1, 2) distribution. The linear fit, part II of a piecewise linear fit, is ln(−ln(1−G(t))) = −0.8344 + 1.968 ln t (n = 16,850, R² = 100.0%)

Fig. A.6 Exact Weibull plot and linear fit on −1.77 < ln t for a G(1, 2) distribution. The linear fit, part III of a piecewise linear fit, is ln(−ln(1−G(t))) = −1.213 + 1.456 ln t (n = 982,950, R² = 99.5%)

Notice that the Weibull plot for a chain of length n is the graph G_n = {(ln t, ln n + ln(−ln Ḡ(t))) : t > 0}. If this graph is concave, similar to Fig. A.4, let t∗ denote a value such that the linear fit of the left tail holds for t ∈ (0, t∗), which is

ln(−ln Ḡ(t)) ≈ a∗ ln t + b∗.

Here the relationship of a∗ and b∗ to the Weibull shape parameter ρ∗ and scale parameter τ∗ is

a∗ = ρ∗   and   b∗ = −a∗ ln τ∗.

Now choose n∗ such that Ḡ^{n∗}(t∗) ≈ 0. Note that, for all n > n∗, Ḡ^n(t∗) ≤ Ḡ^{n∗}(t∗). From this and the linearity of the Weibull plot of G for t ≤ t∗, it follows that Ḡ^n(t) ≈ exp{−n(t/τ∗)^{ρ∗}} on almost all of the support of Ḡ^n. That is, for long chains of length n > n∗, the chain failure distribution is approximately W(τ∗/n^{1/ρ∗}, ρ∗). This supports the approximate Weibull weakest link hypothesis for long chains.

Remark A.1.6.1 Similar ideas for the left tail can be modified to understand how linearity in the Weibull plots in other regions (e.g., Fig. A.4 versus Figs. A.5 and A.6) affects size effects related to the chain length. Here the linearity results in truncated Weibull distributions that can be used in the analysis to find what chain lengths


apply to give good Weibull approximations for that region and where other chain lengths will be affected by two or more regions and result in curvature in the exact Weibull plots for those chain lengths. Details regarding this are left to the reader.
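The heuristic above can be checked numerically for G = G(1, 2): fit the left tail of the Weibit to get a∗ and b∗, convert to (ρ∗, τ∗), and compare the chain survival Ḡ^n with the Weibull approximation W(τ∗/n^{1/ρ∗}, ρ∗). The chain length n below is an illustrative choice.

```python
import numpy as np
from scipy import stats

# Left-tail Weibit fit for G(1, 2): regress ln(-ln Gbar(t)) on ln t for small t.
t_tail = np.exp(np.linspace(-11.0, -6.3, 200))
weibit = np.log(-np.log(stats.gamma.sf(t_tail, a=2)))
a_star, b_star = np.polyfit(np.log(t_tail), weibit, 1)       # slope and intercept
rho_star, tau_star = a_star, np.exp(-b_star / a_star)        # a* = rho*, b* = -a* ln(tau*)

# Chain of n i.i.d. links: survival Gbar(t)**n versus the Weibull approximation
# W(tau*/n**(1/rho*), rho*).  n is an illustrative chain length.
n = 10_000
t = np.exp(np.linspace(-6.0, -3.0, 50))                       # region carrying most of the chain's probability
chain_surv = np.exp(n * np.log(stats.gamma.sf(t, a=2)))       # Gbar(t)**n computed stably
weibull_surv = np.exp(-(t / (tau_star / n ** (1.0 / rho_star))) ** rho_star)
print("max abs difference (should be small):", np.max(np.abs(chain_surv - weibull_surv)))
print("rho* ~", rho_star, "tau* ~", tau_star)
```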

A.2 Load-Sharing Networks and Absorbing State Load-Sharing Rules

A.2.1 Load-Sharing Networks

In Chap. 6, load-sharing bundles were used to model fibrous composites as chains-of-bundles. This application will not be emphasized here since we are only interested in the theoretical implications of fiber bundles. However, one problem with the local load-sharing described for the analysis of Rosen's experiments was that the load-sharing rule was not really well-defined. Here, we will consider a fiber bundle simply as a generic load-sharing network of components where an absorbing state load-sharing rule is defined in terms of a transition diagram of a Markov chain on the graph. The transition diagram for the one-step probabilities is defined by the local load-sharing.

Consider a network of components. The network is defined by a directed graph G = G(N, E), where N and E are the sets of nodes/components and edges in G. The transition diagram for the edges describes how load is transferred between nodes with given edges. The load is determined by a load-sharing rule, referred to as an absorbing state load-sharing rule, defined in terms of an absorbing state Markov chain on the graph G, as discussed in the following section. In essence, the graph is the basis of a transition diagram for a Markov chain that will be used to calculate the load-sharing rule.

Another consideration is the states of the nodes in the network. This is a binary state classification where the node either has failed or is working. The distribution for the states is given by the Gibbs measure given in Sect. 6.4. The Gibbs measure is determined by the load-sharing rule and the log-odds of the strength distributions for each node. The log-odds determine the potentials and energy function that define the Gibbs measure in Sect. 6.4. An in-depth study of the Gibbs measure and its potentials, which is based on Rosen's Series-A specimen 7 discussed in Sect. 7.2, can be found in Section 4 of Li et al. (2019).

A.2.2 Absorbing State Load-Sharing Rules

Consider a network of components/nodes $N = \{1, 2, \ldots, n\}$. The state of a node is either 1 or 0, indicating that the node works or has failed. Each component i has a strength $S_i$. Let $s > 0$ denote the load per component when there is a total load of ns on the network, and let A denote the set of working components. Then the load at component $i \in A$ is given by $\lambda_i(A)s$, and the component fails if $\lambda_i(A)s > S_i$.

We now describe a class of rules based on the absorption probabilities for a Markov chain on the graph G. Let $P = \{p_{i,j} : i, j = 1, \ldots, n\}$ be a one-step transition probability matrix on the nodes N of G; there is a directed edge from i to j in G if $p_{i,j} > 0$. For the set A of working components, let $\{{}^{A}u_{i,j} : i \in A^c,\ j \in A\}$ denote the set of absorption probabilities for the Markov chain on G into the set of nodes in A. Interpret ${}^{A}u_{i,j}$ as the proportion of the load at $i \in A^c$ that is transferred to $j \in A$. Then we define an absorbing state load-sharing rule by

$$\lambda_j(A) = 1 + \sum_{i \in A^c} {}^{A}u_{i,j}. \qquad (A.23)$$

For the set A, let ${}^{A}P = \{{}^{A}p_{i,j}\}$, where, for $i \in A$, ${}^{A}p_{i,j} = 1$ if $j = i$ and ${}^{A}p_{i,j} = 0$ otherwise, and, for $i \in A^c$, ${}^{A}p_{i,j} = p_{i,j}$. By rearranging the rows and columns of ${}^{A}P$, it can be rewritten as the partitioned matrix

$$ {}^{A}P = \begin{pmatrix} {}^{A}I & 0 \\ {}^{A}R & {}^{A}Q \end{pmatrix}, \qquad (A.24)$$

where ${}^{A}I$ is the $|A| \times |A|$ identity matrix, ${}^{A}Q$ is the subprobability matrix of one-step transition probabilities from states in $A^c$ to states in $A^c$, and ${}^{A}R$ is the matrix of one-step transition probabilities from states in $A^c$ to states in A. Then the absorption probabilities $\{{}^{A}u_{i,j} : i \in A^c,\ j \in A\}$ satisfy the system of equations

$$ {}^{A}u_{i,j} = \sum_{k \in A^c} {}^{A}q_{i,k}\, {}^{A}u_{k,j} + {}^{A}r_{i,j}, \qquad (A.25)$$

which can be written in matrix form as

$$ {}^{A}U = {}^{A}Q\, {}^{A}U + {}^{A}R. \qquad (A.26)$$

The system of equations (A.26) has solution

$$ {}^{A}U = \sum_{m=0}^{\infty} {}^{A}Q^{m}\, {}^{A}R \equiv ({}^{A}I - {}^{A}Q)^{-1}\, {}^{A}R. \qquad (A.27)$$
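The sketch below (Python with NumPy assumed; the 4-node nearest-neighbor transition matrix and the choice of failed component are hypothetical, used only for illustration) computes the absorption probabilities of (A.27) and the load factors of (A.23) for one working set A.

```python
import numpy as np

# Minimal sketch of Eqs. (A.23)-(A.27): absorption probabilities and the
# absorbing state load-sharing rule for a hypothetical 4-node network.
P = np.array([[0.0, 1.0, 0.0, 0.0],      # one-step transition matrix p_{i,j}
              [0.5, 0.0, 0.5, 0.0],
              [0.0, 0.5, 0.0, 0.5],
              [0.0, 0.0, 1.0, 0.0]])

A = [0, 1, 3]                            # working components; component 2 has failed
Ac = [i for i in range(P.shape[0]) if i not in A]

Q = P[np.ix_(Ac, Ac)]                    # ^AQ: transitions within A^c
R = P[np.ix_(Ac, A)]                     # ^AR: transitions from A^c into A

# ^AU = (^AI - ^AQ)^{-1} ^AR, Eq. (A.27)
U = np.linalg.solve(np.eye(len(Ac)) - Q, R)

# lambda_j(A) = 1 + sum over i in A^c of ^Au_{i,j}, Eq. (A.23)
lam = 1.0 + U.sum(axis=0)
print(dict(zip(A, lam)))                 # load factors on the working components
print(lam.sum(), P.shape[0])             # total load is conserved: equals |N| (Corollary A.2.2.3 below)
```

Using np.linalg.solve rather than forming an explicit matrix inverse is simply the numerically safer way to evaluate $({}^{A}I - {}^{A}Q)^{-1}\, {}^{A}R$.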

Proposition A.2.2.1 An absorbing state load-sharing rule is monotone.

Proof Let $j \in A \subset B$. From Eq. (A.23), to show that $\lambda_j(B) \le \lambda_j(A)$, we only need to show that

$$ {}^{B}u_{k,j} \le {}^{A}u_{k,j} \quad \text{for } k \in B^c \subset A^c. \qquad (A.28)$$


To see (A.28), note that from (A.27)

$$ {}^{A}U = \sum_{m=0}^{\infty} {}^{A}Q^{m}\, {}^{A}R \quad \text{and} \quad {}^{B}U = \sum_{m=0}^{\infty} {}^{B}Q^{m}\, {}^{B}R. $$

Comparing the entries in the kth row and the jth column of ${}^{B}Q^{m}\, {}^{B}R$ and ${}^{A}Q^{m}\, {}^{A}R$ shows that $\sum_{i \in B^c} {}^{B}q^{(m)}_{k,i}\, p_{i,j} \le \sum_{i \in A^c} {}^{A}q^{(m)}_{k,i}\, p_{i,j}$, since ${}^{B}q^{(m)}_{k,i} \le {}^{A}q^{(m)}_{k,i}$. This proves (A.28).

The formulas in the above proof provide insight into the functional relationship between the load-sharing rule and the related absorption probabilities. In particular, let $T_A$ denote the first time the chain $\{X_n : n = 0, 1, 2, \ldots\}$ enters A. Then, for any $k \in A^c$, $P_k(T_A < \infty) \equiv \Pr(T_A < \infty \mid X_0 = k) = 1$, and $\sum_{m} \sum_{i \in A^c} {}^{A}q^{(m)}_{k,i}\, p_{i,j} = P_k(X_{T_A} = j)$.

Let $P_{A^c}$ denote the distribution of the Markov chain when the initial distribution is the uniform distribution on $A^c$. Then $\sum_{k \in A^c} \sum_{m} \sum_{i \in A^c} {}^{A}q^{(m)}_{k,i}\, p_{i,j} \equiv |A^c|\, P_{A^c}(X_{T_A} = j)$, where $|A^c|$ denotes the cardinality of $A^c$. So, from (A.23), we have the following.

Lemma A.2.2.2 $\lambda_j(A) = 1 + |A^c|\, P_{A^c}(X_{T_A} = j)$.

The following corollary shows that absorbing state Markovian load-sharing rules conserve load.

Corollary A.2.2.3 $\sum_{j \in A} \lambda_j(A) = |N|$.

Proof From Lemma A.2.2.2,

$$ \sum_{j \in A} \lambda_j(A) = \sum_{j \in A} \left( 1 + |A^c|\, P_{A^c}(X_{T_A} = j) \right) = |A| + |A^c| = |N|. $$
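Lemma A.2.2.2 can also be checked by simulation. The short sketch below (Python with NumPy assumed; it reuses the hypothetical 4-node example from the earlier sketch) starts the chain uniformly on $A^c$, runs it until it first enters A, and estimates $\lambda_j(A) = 1 + |A^c|\, P_{A^c}(X_{T_A} = j)$ by Monte Carlo.

```python
import numpy as np

# Minimal Monte Carlo sketch of Lemma A.2.2.2 for the same hypothetical network.
rng = np.random.default_rng(0)
P = np.array([[0.0, 1.0, 0.0, 0.0],
              [0.5, 0.0, 0.5, 0.0],
              [0.0, 0.5, 0.0, 0.5],
              [0.0, 0.0, 1.0, 0.0]])
A, Ac = [0, 1, 3], [2]

hits = {j: 0 for j in A}
n_sim = 50_000
for _ in range(n_sim):
    x = rng.choice(Ac)                   # X_0 ~ uniform distribution on A^c
    while x not in A:
        x = rng.choice(4, p=P[x])        # one step of the Markov chain
    hits[x] += 1                         # record the absorbing state X_{T_A}

# lambda_j(A) = 1 + |A^c| * P_{A^c}(X_{T_A} = j); compare with the exact values from (A.27)
lam_mc = {j: 1.0 + len(Ac) * hits[j] / n_sim for j in A}
print(lam_mc)
```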

A.3 Gibbs Measure Potentials and the Stresses and Potential Energies in Load-Sharing Bundles

Here we discuss the interplay of statistics and physics in the context of a load-sharing unidirectional bundle of brittle fibers, where the bundle is under a load/stress per component s that is parallel with the fibers. Hooke's law relates extension/strain, e, and stress as $s = Ye$, where Y is Young's modulus for the fibers. Note that the extension/strain energy density is $\int_0^e Y\varepsilon\, d\varepsilon = Ye^2/2$, which is the potential energy for an unfailed fiber, and that the stress $s = \frac{d}{de}\bigl(\tfrac{Ye^2}{2}\bigr)$ is the derivative of the potential energy. Here, we give explicitly the relationship of the potential energy and the stress to the energy function that defines the Gibbs measure for the states of the fibers in the bundle when the fiber strength distribution is a g-log-logistic (g.l.l.) distribution with $g(x) = (x/\tau)^{\rho}$.

Consider the g-log-logistic (g.l.l.) distribution

$$F(s) = \frac{g(s)}{1 + g(s)},$$

where $g(s)$ is a nonnegative nondecreasing function on $[0, \infty]$ with $g(0) = 0$ and $g(\infty) = \infty$. The function g is the odds, $g(s) = F(s)/\bar{F}(s)$. Note that the breaking extension distribution is g.l.l. with odds given by $g(Ye)$.

We first specialize our study to $g(x) = (x/\tau)^{\rho}$, which also happens to be the cumulative hazard function of the Weibull with shape and scale parameters $\rho$ and $\tau$. For the load-sharing rule $\lambda_i(A)$ and, using the notation in (6.7), the log-odds for fiber i contained in the set A of unfailed fibers are

$$\sigma_i(A, s) \equiv \ln g(\lambda_i(A)s) = \ln\!\left(\frac{\lambda_i(A)s}{\tau}\right)^{\!\rho} = \rho \ln\!\left(Y\,\frac{\lambda_i(A)s}{Y}\right) - \rho \ln \tau = \rho \ln(Y e_i) - \rho \ln \tau \equiv \sigma_i^*(A, s) - \rho \ln \tau. \qquad (A.29)$$

This shows that the log-odds for fiber i are the sum of two quantities. The first is in terms of the extension, $e_i$, and the second is a constant that depends on the shape and scale parameters $\rho$ and $\tau$ but not on the extension. Using (A.29), define

$$\sigma(A, s) \equiv \sum_{i \in A} \sigma_i(A, s) \equiv \sum_{i \in A} \sigma_i^*(A, s) - |A|\rho \ln \tau \equiv \sigma^*(A, s) + |A|\sigma^*(\tau, \rho), \qquad (A.30)$$

where $\sigma^*(\tau, \rho) \equiv -\rho \ln \tau$.

(A.30) Formula (6.11) of Theorem 6.1 with (A.30) shows that the Gibbs measure potential, .V (L, s), for the set L can be written as  V (L, s) =

.

A⊆L (−1)

 =

|L\A| σ (L, s)

|L|

A⊆L (−1)

|L\A| σ ∗ (L, s)

|L| ∗



≡ V (L, s) + V (|L|; τ, ρ),

 ∗

+ σ (τ, ρ)

A⊆L (−1)

|L\A| |A|

|L| (A.31)

where .V ∗ (L, s) is a function of the stresses of the fibers in all the subsets of the set L and  |L\A| |A| A⊆L (−1) ∗ ∗ .V (|L|; τ, ρ) = σ (τ, ρ) |L|

A

Appendices of Supplementary Topics

' =

149

0,

if |L| > 1,

σ ∗ (τ, ρ),

if |L| = 1,

(A.32)

 where the last identity follows since . A⊆L (−1)|L\A| |A| = 0 if .|L| = 1 (see Li et al. (2019) Lemma 4.1.1 for a proof). Since .U (A, s) ≡ − V (B, s) defines the Gibbs measure, .Ps (A) = B⊆A exp{−U (A,s)} , Z(s)
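This identity is easy to check numerically; the minimal sketch below (Python; the sets L are just small index sets chosen for illustration) evaluates the alternating sum $\sum_{A \subseteq L} (-1)^{|L \setminus A|}\, |A|$ for a few cardinalities.

```python
from itertools import combinations

# Minimal sketch: the alternating sum behind (A.32) is 1 when |L| = 1 and 0 when |L| > 1.
def alternating_sum(L):
    total = 0
    for r in range(len(L) + 1):
        for A in combinations(L, r):
            total += (-1) ** (len(L) - len(A)) * len(A)
    return total

for n in range(1, 6):
    print(n, alternating_sum(range(n)))
```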

Since $U(A, s) \equiv -\sum_{B \subseteq A} V(B, s)$ defines the Gibbs measure, $P_s(A) = \frac{\exp\{-U(A, s)\}}{Z(s)}$, (A.31) and (A.32) show that

$$U(A, s) = U^*(A, s) + \sigma^*(\tau, \rho)\, 2^{|A|-1}, \qquad (A.33)$$

where $U^*(A, s)$ is a function of the stresses of the fibers in all the subsets of the set A, and $\sigma^*(\tau, \rho)\, 2^{|A|-1}$ depends only on the cardinality of the set A and the scale and shape parameters $\tau$ and $\rho$, but not on the extensions. Also note that (A.29) can be rewritten as

$$\sigma_i(A, s) = \frac{\rho}{2} \ln(Y^2 e_i^2) - \rho \ln \tau = \frac{\rho}{2} \ln\!\left(\frac{Y e_i^2}{2}\right) + \frac{\rho}{2} \ln(2Y) - \rho \ln \tau \equiv \sigma_i^{**}(A, s) + \rho \ln\!\left(\frac{[2Y]^{1/2}}{\tau}\right). \qquad (A.34)$$

Formula (A.34) shows that the log-odds for fiber i are again the sum of two quantities. Unlike (A.29), though, the first is in terms of the fiber's extension energy rather than its stress, and the second is a constant that depends on the shape and scale parameters $\rho$ and $\tau$ and on Young's modulus. Thus, the same argument that showed the energy function defining the Gibbs measure to be an explicit function of the stresses can be modified to show that it is an explicit function of the potential energies of the fiber elements. Also note that, from an information theory standpoint, it is the information content of the individual extension energies and stresses on the fiber elements and of the parameters in the problem (the natural logarithms of all these quantities) that determines the energy function defining the Gibbs measure.

In addition, for an arbitrary life distribution, $F(s) = 1 - e^{-H(s)}$, the odds function is

$$\frac{F(s)}{\bar{F}(s)} = \frac{1 - e^{-H(s)}}{\bar{F}(s)} \cong H(s) \quad \text{for } s \cong 0.$$

If $H(s) \sim (s/\tau)^{\rho}$ as $s \to 0$ (as is the case for F in the minimum extreme value domain of attraction of the Weibull), then the above shows that the Gibbs measure for F is approximately the Gibbs measure for a g.l.l. distribution with $g(s) = (s/\tau)^{\rho}$. The implication of this is that, for infinite chains with such fiber strength distributions, the Gibbs measure is given by the one for the g.l.l. distribution with $g(s) = (s/\tau)^{\rho}$.

The Gibbs measure will also be approximately the one for the g.l.l. distribution if the component fibers are brittle and elastic, meaning that the induced strain is proportional to the applied stress up to the point of failure. For a relatively strong material, this also means that Young's modulus of elasticity is relatively large. For a Weibull fiber-failure distribution, this implies that (a) the shape parameter exceeds 1 and (b) the scale parameter is relatively large. If these two conditions hold, then the cdf is approximately

$$F(s) = \frac{(s/\tau)^{\rho}}{1 + (s/\tau)^{\rho}}$$

for $s > 0$.
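A quick numerical illustration of this approximation (Python with NumPy assumed; the shape and scale values are arbitrary) compares the Weibull cdf with the g.l.l. cdf at stresses well below the scale parameter, where both are approximately $(s/\tau)^{\rho}$.

```python
import numpy as np

# Minimal sketch: for small s, the Weibull cdf 1 - exp{-(s/tau)^rho} and the
# g.l.l. cdf (s/tau)^rho / (1 + (s/tau)^rho) nearly coincide.
rho, tau = 2.0, 10.0                    # arbitrary illustrative parameters
s = np.linspace(0.0, 2.0, 5)            # stresses well below the scale tau

h = (s / tau) ** rho                    # the common small-s approximation (s/tau)^rho
F_weibull = 1.0 - np.exp(-h)
F_gll = h / (1.0 + h)
print(np.column_stack([s, F_weibull, F_gll]))
```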

Finally, to emphasize the information theoretic relationship mentioned earlier, for a fixed s, consider the exponential family of distributions $\{P_{\eta,\nu}(A)\}$ generated by (A.33) with the following notation:

$$U(A, s) = U^*(A, s) + \sigma^*(\tau, \rho)\, 2^{|A|-1} \equiv U_1(A) + U_2(A),$$

where

$$P_{\eta,\nu}(A) = \frac{\exp\{-[\eta U_1(A) + \nu U_2(A)]\}}{Z(\eta, \nu)}$$

and $P_s(A) = P_{1,1}(A)$ in this notation. This family is the collection of solutions to the following max entropy optimization: for fixed $\eta$ and $\nu$, the Gibbs measure $P_{\eta,\nu}$ maximizes the entropy

$$-\sum_{A} P(A) \ln P(A)$$

among all measures P for which

$$\sum_{A} U_i(A)\, P(A) = \sum_{A} U_i(A)\, P_{\eta,\nu}(A) \quad \text{for } i = 1, 2.$$

The above approach is based on Jaynes's (1957a, 1957b) foundational work on the use of max entropy to reconcile statistical mechanics and thermodynamics, and on its popularization as a way to model random phenomena. His work was inspired by Shannon's (1948a, 1948b) seminal papers on an information theory approach to the communication of data. The set function $U_1$ is related to stress and potential energy, while $U_2$ is based on the statistical parameters and the magnitude of A, $|A|$. Note, though, that here they were obtained from $P_s(A) = P_{1,1}(A)$, which was derived from probabilistic and statistical load-sharing considerations and not from the max entropy formalism.
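To illustrate the notation of this exponential family, the sketch below (Python with NumPy assumed) enumerates the subsets of a three-fiber bundle and builds $P_{\eta,\nu}$ from two set functions $U_1$ and $U_2$; the numerical values of $U_1$ and $U_2$ are placeholders chosen for illustration, not the energies derived above.

```python
import numpy as np
from itertools import combinations

# Minimal sketch of the family {P_{eta,nu}} over the subsets A of a three-fiber
# bundle; U1 and U2 below are placeholder set functions, for illustration only.
fibers = (1, 2, 3)
subsets = [A for r in range(len(fibers) + 1) for A in combinations(fibers, r)]

rng = np.random.default_rng(1)
U1 = {A: rng.uniform(0.0, 2.0) for A in subsets}   # stands in for the stress-based U1(A)
U2 = {A: 0.3 * len(A) for A in subsets}            # stands in for the parameter-based U2(A)

def P(eta, nu):
    w = {A: np.exp(-(eta * U1[A] + nu * U2[A])) for A in subsets}
    Z = sum(w.values())                            # the partition function Z(eta, nu)
    return {A: w[A] / Z for A in subsets}

P11 = P(1.0, 1.0)                                  # P_{1,1}, the Gibbs measure exp{-U}/Z
print(round(sum(P11.values()), 12))                # a probability measure: sums to 1
```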

References

Alam, M. A., Weir, B., Bude, J., Silverman, P. J., & Monroe, D. (1999). Explanation of soft and hard breakdown and its consequences for area scaling. In International electron devices meeting 1999. Technical digest (Cat. No.99CH36318) (pp. 449–452). Alam, M. A., Weir, B. E., & Silverman, P. J. (2002). A study of soft and hard breakdown—part I: Analysis of statistical percolation conductance. IEEE Transactions on Electron Devices, 49(2), 232–238. Bader, M. G., & Priest, A. M. (1982). Statistical aspects of fibre and bundle strength in hybrid composites. In T. Hayashi, K. Kawata, & S. Umekawa (Eds.) Progress in science and engineering of composites (pp. 1129–1136). ICCM-IV. Bažant, Z. P., & Le, J.-L. (2017). Probabilistic mechanics of quasibrittle structures—strength, lifetime, and size effect. Cambridge University Press. Bersuker, G., Sim, J. H., Park, C. S., Young, C. D., Nadkarni, S. V., Choi, R., & Lee, B. H. (2007). Mechanism of electron trapping and characteristics of traps in HfO.2 gate stacks. IEEE Transactions on Device and Materials Reliability, 7(1), 138–145. Bhattacharyya, P., & Chakrabarti, B. K. (2006). Modelling critical and catastrophic phenomena in geoscience—a statistical physics approach. Springer. Binnig, G., Quate, C. F., & Gerber, C. (1986). Atomic force microscope. Physical Review Letters, 56, 930–933. Black, C. M., Durham, S. D., & Padgett, W. J. (1990). Parameter estimation for a new distribution for the strength of brittle fibers: A simulation study. Communications in Statistics–Simulation and Computation, 19, 809–825. Boufass, S., Hader, A., Tanasehte, M., Sbiaai, H., Achik, I., & Boughaleb, Y. (2020). Modelling of composite materials energy by fiber bundle model. The European Physical Journal Applied Physics, 92(1), 10401. Chatterjee, S., Kuo, Y., Lu, J., Tewg, J.-Y., & Majhi, P. (2006). Electrical reliability aspects of HfO.2 high-k gate dielectrics with TaN metal gate electrodes under constant voltage stress. Microelectronics Reliability, 46(1), 69–76. Chu, F. (2014). A review on conduction mechanisms in dielectric films. Advances in Materials Science and Engineering, 2014, 578168. Cohen, D., Schwarz, M., & Or, D. (2011). An analytical fiber bundle model for pullout mechanics of root bundles. Journal of Geophysical Research: Earth Surface, 116, F03010. Cox, D. R. (1972). Regression models and life-tables. Journal of the Royal Statistical Society. Series B (Methodological), 34(2), 187–220. Crowder, M. J., Kimber, A. C., Smith, R. L., & Sweeting, T. J. (1991). Statistical analysis of reliability data. Chapman and Hall.


Daniels, H. E. (1945). The statistical theory of the strength of bundles of threads I. Proceedings of the Royal Society of London. Series A, Mathematical and Physical Sciences, 183(995), 405– 435. Dissado, L. A., & Fothergill, J. C. (1992/2008). Electrical degradation and breakdown in polymers. The Institution of Engineering and Technology, London, UK. Original work published 1992; Reprinted in 2008. Durham, S., & Lynch, J. (2000). A threshold representation for the strength distribution of a complex load sharing system. Journal of Statistical Planning and Inference, 83(1), 25–46. Durham, S. D., & Padgett, W. J. (1991). A probabilistic stress-strength model and its application to fatigue failure in gun barrels. Journal of Statistical Planning and Inference, 29, 67–74. Durham, S. D., & Padgett, W. J. (1997). Cumulative damage models for system failure with application to carbon fibers and composites. Technometrics, 39(1), 34–44. Ebrahimi, N., McCullough, K., & Xiao, Z. (2013a). Reliability of sensors based on nanowire networks. IIE Transactions, 45(2), 215–228. Ebrahimi, N., McCullough, K., & Xiao, Z. (2013b). Reliability of sensors based on nanowire networks with either an equilateral triangle lattice or a hexagonal lattice structure. IEEE Transactions on Nanotechnology, 12(1), 81–95. Fischer, P. H., & Nissen, K. (1976). The short-time electric breakdown behavior of polyethylene. IEEE Transactions on Electrical Insulation, EI-11, 37–40. Giorgio, M., Guida, M., Postiglione, F., & Pulcini, G. (2018). Bayesian estimation and prediction for the transformed gamma degradation process. Quality and Reliability Engineering International, 34(7), 1315–1328. Grego, J., Lynch, J., Li, S., & Sethuraman, J. (2014). Partition-based priors and multiple event censoring: An analysis of Rosen’s fibrous composite experiment. Technometrics, 56(3), 359– 371. Griffiths, A. A. (1921). The phenomena of rupture and flow in solids. Philosophical Transactions of the Royal Society of London, A, 221(582–593), 163–198. Hansen, A., Hemmer, P. C., & Pradhan, S. (2015). The fiber bundle model: Modeling failure in materials. Wiley-VCH, Weinheim. Harlow, D. G., & Phoenix, S. L. (1978a). The chain-of-bundles probability model for the strength of fibrous materials I: Analysis and conjectures. Journal of Composite Materials, 12(2), 195– 214. Harlow, D. G., & Phoenix, S. L. (1978b). The chain-of-bundles probability model for the strength of fibrous materials II: A numerical study of convergence. Journal of Composite Materials, 12(3), 314–334. Harlow, D. G., & Phoenix, S. L. (1982). Probability distributions for the strength of fibrous materials under local load sharing I: Two-level failure and edge effects. Advances in Applied Probability, 14(1), 68–94. Harlow, D. G., Smith, R. L., & Taylor, H. M. (1983). Lower tail analysis of the distribution of the strength of load-sharing systems. Journal of Applied Probability, 20(2), 358–367. Houssa, M., Stesmans, A., Naili, M., & Heyns, M. M. (2000). Charge trapping in very thin highpermittivity gate dielectric layers. Applied Physics Letters, 77(9), 1381–1383. Iglesias, V., Porti, M., Nafria, M., Aymerich, X., Dudek, P., & Bersuker, G. (2011). Dielectric breakdown in polycrystalline hafnium oxide gate dielectrics investigated by conductive atomic force microscopy. Journal of Vacuum Science Technology, B, 29(1), 01AB02. Jaynes, E. T. (1957a). Information theory and statistical mechanics I. Physical Review, 106, 620– 630. Jaynes, E. T. (1957b). 
Information theory and statistical mechanics II. Physical Review, 108, 171– 190. Jones, E. R. (1971). Solid state electronics. Intext Educational Publishers. Kaplan, E. L., & Meier, P. (1958). Nonparametric estimation from incomplete observations. Journal of the American Statistical Association, 53(282), 457–481. Kim, Y.-H., & Lee, J. C. (2004). Reliability characteristics of high-k dielectrics. Microelectronics Reliability, 44(2), 183–193.


Kittel, C. (2004). Introduction to solid state physics (8th ed.). John Wiley & Sons. Kun, F., Raischel, F., Hidalgo, R. C., & Herrmann, H. J. (2006). Extensions of fibre bundle models. In Modelling critical and catastrophic phenomena in geoscience (pp. 57–92). Springer. Lanza, M. (Ed.). (2017). Conductive atomic force microscopy—applications in nanomaterials. John Wiley & Sons. Le, J.-L. (2012). A finite weakest-link model of lifetime distribution of high-k gate dielectrics under unipolar AC voltage stress. Microelectronics Reliability, 52(1), 100–106. Le, J.-L., Bažant, Z. P., & Bazant, M. Z. (2009). Lifetime of high-k gate dielectrics and analogy with strength of quasibrittle structures. Journal of Applied Physics, 106(10), 104119. Leckey, K., Müller, C. H., Szugat, S., & Maurer, R. (2020). Prediction intervals for load-sharing systems in accelerated life testing. Quality and Reliability Engineering International, 36(6), 1895–1915. Lee, S., Durham, S., & Lynch, J. (1995). On the calculation of the reliability of general load sharing systems. Journal of Applied Probability, 32(3), 777–792. Lee, S. J., Jeon, T. S., & Kwong, D. L. (2002). Hafnium oxide gate stack prepared in situ rapid thermal chemical vapor deposition process for advanced gate dielectrics. Journal of Applied Physics, 92(5), 2807. Lehmann, P., & Or, D. (2012). Hydromechanical triggering of landslides: From progressive local failures to mass release. Water Resources Research, 48(3), W03535. Li, S., Gleaton, J., & Lynch, J. (2019). What is the shape of a bundle? An analysis of Rosen’s fibrous composites experiments using the chain-of-bundles model. Scandinavian Journal of Statistics, 46(1), 59–86. Li, S., & Lynch, J. (2011). On a threshold representation for complex load-sharing systems. Journal of Statistical Planning and Inference, 141(8), 2811–2823. Li, S., & Lynch, J. (2019). A sketch of some stochastic models and analysis methods for fiber bundle failure under increasing tensile load. https://arxiv.org/abs/1903.02546 McKenna, K. P., & Shluger, A. L. (2011). Electron and hole trapping in polycrystalline metal oxide materials. Proceedings of the Royal Society, A, 467, 2043–2053. Mishnaevsky, L. (2013). Micromechanical modelling of wind turbine blade materials. In P. Brøndsted, & R. P. Nijssen (Eds.), Advances in wind turbine blade design and materials (pp. 298–324). Woodhead Publishing Series in Energy. Nekrashevish, S. S., & Gritsenko, V. A. (2014). Electronic structure of silicon dioxide (a review). Physics of the Solid State, 56(2), 207–222. Ntenga, R., SAÏDJO, S., Beda, T., & Béakou, A. (2019). Estimation of the effects of the crosshead speed and temperature on the mechanical strength of kenaf bast fibers using Weibull and Monte-Carlo statistics. Fibers, 7(10), 89. Orgéas, L., Dumont, P., & Corre, S. L. (2015). Rheology of highly concentrated fiber suspensions. In F. Chinesta, & G. Ausias (Eds.), Rheology of non-spherical particle suspensions (pp. 119– 166). Elsevier. Padgett, W. J., Durham, S. D., & Mason, A. M. (1995). Weibull analysis of the strength of carbon fibers using linear and power law models for the length effect. Journal of Composite Materials, 29(14), 1873–1884. Perevalov, T. V., Gritsenko, V. A., Erenburg, S. B., Badalyan, A. M., Wong, H., & Kim, C. W. (2007). Atomic and electronic structure of amorphous and crystalline hafnium oxide: X-ray photoelectron spectroscopy and density functional calculations. Journal of Applied Physics, 101, 053704. Phoenix, S., & Tierney, L.-J. (1983). 
A statistical model for the time dependent failure of unidirectional composite materials under local elastic load-sharing among fibers. Engineering Fracture Mechanics, 18(1), 193–215. Phoenix, S. L. (1983). Statistical modeling of the time and temperature dependent failure of fibrous composites. In Proceedings of the Ninth U.S. National Congress of Applied Mechanics. The American Society of Mechanical Engineers.


Pirrotta, O., Larcher, L., Lanza, M., Padovani, A., Porti, M., Nafria, M., & Bersuker, G. (2013). Leakage current through the poly-crystalline HfO.2 : Trap densities at grains and grain boundaries. Journal of Applied Physics, 114, 134503. Preston, C. J. (1974). Gibbs states on countable sets: Gibbs states and Markov random fields. Cambridge University Press. Pugno, N. (2014). A review on the design of superstrong carbon nanotube or graphene fibers and composites. In M. J. Schulz, V. N. Shanov, & Z. Yin (Eds.), Nanotube superfiber materials (pp. 495–518). William Andrew Publishing. Ray, S. S. (2013). Statistical aspects of fibre and bundle strength in hybrid composites. In T. Hayashi, K. Kawata & S. Umekawa (Eds.), Environmentally friendly polymer nanocomposites—types, processing and properties (pp. 74–87). Woodhead Publishing Limited. Reiweger, I., Schweizer, J., Dual, J., & Herrmann, H. J. (2009). Modelling snow failure with a fibre bundle model. Journal of Glaciology, 55(194), 997–1002. Rosen, B. (1965). Mechanics of composite strengthening. In Fibre composite materials (pp. 37– 75). American Society of Metals. Chapter 3. Rosen, B. W. (1964). Tensile failure of fibrous composites. AIAA Journal, 2(11), 1985–1991. Sethuraman, J., & Hollander, M. (2009). Nonparametric Bayes estimation in repair models. Journal of Statistical Planning and Inference, 139, 1722–1733. Shannon, C. E. (1948a). A mathematical theory of communication. The Bell System Technical Journal, 27(3), 379–423. Shannon, C. E. (1948b). A mathematical theory of communication. The Bell System Technical Journal, 27(4), 623–656. Smirnova, T. P., Yakovkina, L. V., Kitchai, V. N., Kaichev, V. V., Shubin, Y. V., Morozova, N. B., & Zherikova, K. V. (2008). Chemical vapor deposition and characterization of hafnium oxide films. Journal of Physics and Chemistry of Solids, 69, 685–687. Smith, R. L. (1991). Weibull regression models for reliability data. Reliability Engineering & System Safety, 34(1), 55–76. Strong, A. W., Wu, E. Y., Vollertsen, R.-P., Sune, J., La Rosa, G., Rauch III, S. E., & Sullivan, T. D. (2009). Reliability wearout mechanisms in advanced CMOS technologies. John Wiley & Sons. Taylor, H. M. (1987). A model for the failure process of semicrystalline polymer materials under static fatigue. Probability in the Engineering and Informational Sciences, 1(2), 133–162. Vandelli, L., Padovani, A., Larcher, L., & Bersuker, G. (2013). Microscopic modeling of electrical stress-induced breakdown in poly-crystalline hafnium oxide dielectrics. IEEE Transactions on Electron Devices, 60(5), 1754–1762. Watson, A. S., & Smith, R. L. (1985). An examination of statistical theories for fibrous materials in the light of experimental data. Journal of Materials Science, 20, 3260–3270. Zhang, X.-Y., Hsu, C.-H., Lien, S.-Y., Wu, W.-Y., Ou, S.-L., Chen, S.-Y., Huang, W., Zhu, W.Z., Xiong, F.-B., & Zhang, S. (2019). Temperature-dependent HfO.2 /Si interface structural evolution and its mechanism. Nanoscale Research Letters, 14(1), 83. Zhao, F., & Takeda, N. (2000). Effect of interfacial adhesion and statistical fiber strength on tensile strength of unidirectional glass fiber/epoxy composites. part I: Experiment results. Composites Part A: Applied Science and Manufacturing, 31(11), 1203–1214. Zweben, C., & Rosen, B. (1970). A statistical theory of material strength with application to composite materials. Journal of the Mechanics and Physics of Solids, 18(3), 189–206.

Index

A Accelerated failure load, 75 Accelerated failure time (AFT), 113 Akaike information criterion (AIC), 22, 23 Annealing, 41, 43, 115, 128 Atomic force microscopy (AFM), 42 B Band conduction, 40, 41, 44, 47, 48, 50, 54 valence, 40, 41, 44, 48, 54 Bayesian credibility bands, 86 Bayesian information criterion (BIC), 23 Birth process, 83, 114, 115 Breakdown current (CBD), 59 cycle time to (CTBD), 103, 111, 116, 117, 122–124 hard (HBD), 51, 54, 100 soft (SBD), 54, 100 time to (TBD), 41, 59–62, 83, 103, 111 voltage (VBD), 59, 103–105, 107–109, 111 Bundles chain of, 113–116, 131, 145 grid, 66, 68, 88 load-sharing, 74, 145, 147 C Capacitance, 2, 31–36, 41, 70, 107 Capacitors, 1, 2, 31–37, 59, 70, 103–105, 107–109, 111, 112, 116, 117, 125 parallel circuit of, 34, 35, 107, 108, 114, 123, 125 series circuit of, 34–36, 70, 105–107, 109, 110, 114, 116, 117, 123, 125, 126

Censoring, 26, 27, 85, 86, 88 interval, 27 left, 26 right, 26, 27 Characteristic lifetime, 60, 61, 83, 92, 116 Conditional probability, 5, 6, 25 Conduction grain-boundary limited, 44, 49 hopping, 44 ionic, 44, 49 ohmic, 44, 48, 51, 52, 54, 115 space-charge limited, 44, 48 Conduction mechanism, 39, 41, 44, 45, 47, 48, 50, 51 Conductive atomic force microscopy (CAFM), 42, 43 Confidence interval, 21, 22, 88, 93, 96, 97 Correlation, 8, 9, 42 Covariance, 8, 13, 19, 20 Crack growth, 83, 114 Cumulative damage model, 82 Cumulative distribution function, 6, 7, 10–16, 18, 27, 28, 98 Cumulative hazard function, 10, 14, 15, 66, 75, 131–133, 136 Current displacement, 50 percolation, 50, 51

D Data Bader–Priest fiber, 76, 82 Bader–Priest impregnated tow, 80 Rosen’s Series A, 88


156 Dielectrics, 1, 2, 31, 39, 44, 46, 49, 51–55, 83, 91, 92, 103, 114, 124, 125, 127, 128 Dirichlet process, 116, 129 Distribution Bernoulli, 11, 53 beta, 18 binomial, 11, 19, 21 Birnbaum–Saunders, 18, 83, 92, 113 chi-square, 79, 80 Dirichlet, 18, 85 exponential, 13–15, 17, 21, 67, 83, 114, 116–118, 123, 124, 133, 136, 141 gamma, 17, 21, 68, 104, 105, 107, 123, 136, 142 grafted, 55, 104, 114 inverse Gaussian, 17 log-logistic, 16, 92, 113, 148 lognormal, 16, 91, 92, 98, 113 multinomial, 85 normal (or Gaussian), 13, 16, 20, 24, 55 Poisson, 12, 21 uniform, 12, 68, 147 Weibull, 2, 14, 15, 21, 54, 55, 59, 60, 62, 63, 67, 68, 78, 86, 88, 91, 92, 95, 98, 103, 105, 111, 113–115, 126, 133, 148, 150

E Electron states, 39 Emission Poole-Frenkel, 44, 47, 48 Schottky/thermionic, 44, 45, 47 thermionic-field, 44 Energy dissipation equation, 35 Event, 4–6, 12, 26, 52, 54, 115 Expectation, 7–9 Expected value, 7, See also Expectation

F Failure Phase I, 70, 71, 105, 111 Phase II, 65, 70, 71, 105, 111 Failure rate function, 10, See also Hazard function Fiber bundle models (FBM), 1–4, 125, 127 Fisher information matrix, 20

G Gamma process, 116, 128, 129 Gibbs measure, 4, 65, 68, 71, 74, 111, 145, 147–150

Index Goodness-of-fit, 22, 76, 81, 92 Anderson-Darling, 92 Grain boundary (GB), 41–44, 49, 51, 52, 54, 55, 115 H Hafnium dioxide, 32 Hazard function, 10, 14, 15, 24, 25, 60, 62, 96, 132, 148 Hooke’s law, 2, 32, 147 Hypothesis testing, 22 I Independence, 5 Ineffective length, 80, 81, 84–86 K Kaplan–Meier estimator, 27, 28, 86, 88 Kurtosis, 9 L Law conservation, 32, 33 Kirchoff’s circuit, 35 Ohm’s, 51 parallel capacitor, 34 parallel charge, 33 parallel voltage, 33 series capacitor, 33 series charge, 33 series voltage, 33 Leakage, 42–44, 49, 51, 52, 54, 115 Least-squares method, 23 Likelihood function, 19, 22, 26 Likelihood ratio test, 79, 80 Load-sharing, 1–4, 34, 37, 53, 55, 69–72, 74, 75, 84–87, 111, 113, 116, 124–127, 130, 141, 142, 145–147, 150 M Markov chain, 4, 66, 87, 145–147 Markov random field, 71 Maximum likelihood, 19, 20, 86, 95, 98, 99 Mean, 7–9, 11–14, 16–18, 20, 24, 48, 50, 53, 60, 80, 86, 92 Mixture affine, 66, 68 Dirichlets, 85 gamma-type, 67, 68, 116 Weibull, 104, 135 Möbius inversion formula, 69 Moment, 9, 50, 67, 68, 126

Index P Partition function, 69 Path coefficient, 66 Path signature, 66 Percentile, 60, 61, 88, 89, 102, 141, See also Quantile Percolation path, 40, 41, 47, 49–52, 54, 114 Permittivity electric, 31 relative, 31, 41, 49 Poisson process, 115 Power law, 60, 61, 83, 98, 111 Probability density function, 6–8, 10, 12–14, 16–19, 27 Probability mass function, 6–8, 11, 12 p-value, 22, 76, 80, 92, 95, 98, 99, 102

Q Quantile function, 10, 16

R Random variable, 6–14, 16–19, 26, 27, 133 Random walk, 4, 87 Regression, 23, 24, 93, 97, 100, 102, 127, 142 exponential, 24 lognormal, 98–100 proportional hazards, 25, 93, 127 Weibull, 24, 25, 79, 80, 93, 94, 96, 98, 99 Reliability function, 10, See also Survival function Representative volume element (RVE), 53 Resistance, 34–36, 52, 129 Resistor, 31, 35, 36, 51, 52, 129

S Sample space, 4, 5 Silicon, 32, 39, 41–43, 51 Silicon dioxide, 32, 39–41, 46, 49, 51, 53 Silicon hydroxide, 53 Size effect, 1, 2, 54, 55, 65, 71, 72, 75, 76, 80, 82, 83, 92, 103, 109, 131, 133, 134, 141, 142, 144 simulation, 103, 108, 127

157 Skewness, 9 Standard deviation, 9, 16, 128 Strain, 2, 32, 147, 150 Stress, 2, 32, 42, 45, 46, 50–52, 59, 61, 74, 76, 80, 83, 91, 97–100, 102, 114 Survival distribution, 65–67, 71, 75, 85, 86, 104, 105, 107, 108, 115–117, 123, 126, 131, 133, 135, 141, See also Survival function Survival function, 9, 10, 15, 24, 60, 62, 70, 86, 105, 107, 108, 136

T Thread, 2 Trapping states, 40 t-test, 24 Tunneling, 46, 49–51, 54 direct, 44, 46, 50, 51 Fowler–Nordheim, 44, 46, 47, 50, 51

V Variance, 8, 9, 11–14, 16–18, 20, 24 Voltage, 2, 32–35, 37, 41, 42, 46, 49–52, 59–63, 70, 91, 93–95, 97–100, 102, 103, 105, 111, 113–115 Voltage load dynamic, 35 static, 103

W Wald method, 21 Weakest link model, 54, 82, 114, 115, 131 finite, 103, 114 Weakest link principle, 60, 62, 93, 96, 127 Weibull plot, 15, 71, 72, 75–78, 80–82, 86–90, 92, 94, 95, 104, 107–109, 111, 113, 114, 116–118, 120, 122–124, 126, 127, 129, 131–137, 141–145 linear least squares fit, 72, 73

Y Young’s modulus, 2, 32, 150