Optimal Inspection Models with Their Applications 303122020X, 9783031220203

This book surveys recent applications of inspection models, maintenance models and cumulative damage models, as well as d


English | 265 [266] pages | 2023


Table of contents:
Preface
Contents
1 Introduction
1.1 Importance of Reliability Engineering
1.2 Stochastic Models and Practical Applications
1.3 Two Problems of Inspection
1.4 Standard and Random Inspections
1.5 General Inspection
1.6 Inspection Model with Minimal Repair
1.7 Hierarchical Structure Reliability
1.8 Application Examples
1.9 Problems
References
2 Standard Inspection Models
2.1 Standard Inspection Policy
2.1.1 Asymptotic Inspection Times
2.2 Inspection for Finite Interval
2.2.1 Asymptotic Inspection Times
2.3 Inspection for Random Interval
2.4 Modified Inspection Models
2.4.1 Imperfect Inspection
2.4.2 Intermittent Fault
2.4.3 Inspection for Scale
2.5 Problems
References
3 Random Inspection Models
3.1 Periodic and Random Inspections
3.1.1 Sequential Inspection
3.1.2 Comparison of Periodic and Random Inspections
3.2 Random Inspection
3.3 Inspection First and Last
3.3.1 Inspection First
3.3.2 Inspection Last
3.3.3 Comparison of Inspection First and Last
3.4 Inspection Overtime
3.4.1 Comparisons of Periodic Inspection and Inspection Overtime
3.4.2 Comparisons of Inspection Overtime with First and Last
3.5 Problems
References
4 General Inspection Models
4.1 Inspection First and Last
4.1.1 Inspection First
4.1.2 Inspection Last
4.2 General Inspection Models
4.2.1 Inspection First
4.2.2 Inspection Last
4.3 Inspection Models with Three Variables
4.3.1 Inspection First, Last and Middle
4.3.2 Comparisons of Inspection First, Last and Middle
4.4 Modified Inspection Models
4.4.1 Modified Inspection First
4.4.2 Modified Inspection Last
4.4.3 Comparisons of Modified Inspection First and Last
4.5 Problems
References
5 Inspection Models with Minimal Repair
5.1 Inspection with Two Failures
5.1.1 Periodic Inspection
5.1.2 Sequential Inspection
5.2 Modified Inspection Model
5.2.1 Periodic Inspection
5.2.2 Sequential Inspection
5.3 Inspection for a Finite Interval
5.3.1 Standard Inspection
5.3.2 Modified Inspection Model
5.4 Problems
References
6 Hierarchical Structure Reliability
6.1 Basic System
6.1.1 n-Unit Parallel System
6.1.2 n-Unit Series System
6.1.3 k-Out-of-n System
6.2 Random K-Out-of-n System
6.2.1 Series-Parallel System
6.2.2 Parallel-Series System
6.2.3 Consecutive K-Out-of-5:F System
6.3 Comparison of Series-Parallel and Parallel-Series Systems
6.3.1 Reliability Properties
6.3.2 Replacement Time
6.3.3 Inspection Time
6.4 Problems
References
7 Application Examples of Storage System
7.1 Basic Inspection Model
7.2 Modified Inspection Model
7.3 Finite Number of Inspections
7.4 Degradation at Inspection
7.5 Problems
References
8 Application Examples of Phased Array Radar
8.1 Cost Model
8.1.1 Cyclic Maintenance
8.1.2 Delayed Maintenance
8.2 Availability Model
8.2.1 Cyclic Maintenance
8.2.2 Delayed Maintenance
8.2.3 Combined Cyclic and Delayed Maintenances
8.3 Problems
References
9 Application Examples of Power Generator
9.1 Self-Diagnosis Policy for Gas Turbine FADEC
9.1.1 Standard FADEC System
9.2 Maintenance of Power Generator
9.2.1 Shock Model with Damage Level
9.2.2 Two Modified Shock Models
9.2.3 Shock Model with Multi-echelon Risks
9.3 Problems
References
10 Application Examples of Airframe
10.1 Optimal Multi-echelon Maintenance of Aircraft
10.1.1 Multi-echelon Maintenance with Hazard Rates
10.1.2 Operation Policy for Commercial Aircraft
10.2 Airframe Maintenance with Crack Growth
10.2.1 Airframe Cracks (1)
10.2.2 Airframe Cracks (2)
10.3 Commercial Airframe with Damage and Crack
10.3.1 Damage Level and Crack Number
10.3.2 Multiple Damage Levels and Crack Number
10.4 Problems
A.1 H(t)/t Increases Strictly with t to h(∞)
A.2 L3(n, x) Increases Strictly with x to ∞
A.3 QK(N) Increases Strictly with N from exp(−ωK) to 1
A.4 QKM(N) Increases Strictly with N to 1
A.5 Qf(N) Increases Strictly with N to 1 − exp(−λ)
References
Appendix Answers to Selected Problems


Springer Series in Reliability Engineering

Kodo Ito Toshio Nakagawa

Optimal Inspection Models with Their Applications

Springer Series in Reliability Engineering Series Editor Hoang Pham, Department of Industrial and Systems Engineering Rutgers University, Piscataway, NJ, USA

Today’s modern systems have become increasingly complex to design and build, while the demand for reliability and cost effective development continues. Reliability is one of the most important attributes in all these systems, including aerospace applications, real-time control, medical applications, defense systems, human decision-making, and home-security products. Growing international competition has increased the need for all designers, managers, practitioners, scientists and engineers to ensure a level of reliability of their product before release at the lowest cost. The interest in reliability has been growing in recent years and this trend will continue during the next decade and beyond. The Springer Series in Reliability Engineering publishes books, monographs and edited volumes in important areas of current theoretical research development in reliability and in areas that attempt to bridge the gap between theory and application in areas of interest to practitioners in industry, laboratories, business, and government. Now with 100 volumes! **Indexed in Scopus and EI Compendex** Interested authors should contact the series editor, Hoang Pham, Department of Industrial and Systems Engineering, Rutgers University, Piscataway, NJ 08854, USA. Email: [email protected], or Anthony Doyle, Executive Editor, Springer, London. Email: [email protected].

Kodo Ito · Toshio Nakagawa

Optimal Inspection Models with Their Applications

Kodo Ito Faculty of Engineering, Social Systems and Civil Engineering Tottori University Tottori, Japan

Toshio Nakagawa Department of Business Administration Aichi Institute of Technology Toyota, Japan

ISSN 1614-7839 ISSN 2196-999X (electronic) Springer Series in Reliability Engineering ISBN 978-3-031-22020-3 ISBN 978-3-031-22021-0 (eBook) https://doi.org/10.1007/978-3-031-22021-0 © Springer Nature Switzerland AG 2023 This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This Springer imprint is published by the registered company Springer Nature Switzerland AG The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland

Preface

Inspections are indispensable for anything that must be highly reliable. Today, hard-time maintenance is rarely applied to new products because of its high operating cost. Instead, condition-monitoring maintenance is widely applied to most systems: system status is judged from inspection results, and only the minimum necessary maintenance is performed. Most people know the importance of inspections; however, we do not feel their necessity in daily life because the reliabilities of products are so high. Occasional catastrophes, such as a tunnel collapse or the rupture of a deteriorated reactor pipe, remind us of the importance of inspection for our social infrastructures. The reliabilities of various devices around us have improved; however, even systems with high reliability still cannot satisfy the expectations of society. When a space rocket is launched, pre-launch inspections are indispensable for a successful launch, but such complicated and time-consuming work should be reduced through advances in automation technologies. The purpose of this book is to bridge theory and practice by providing various theories and application examples of inspection. Many books have already been published on the theories, so principal inspection models are only outlined here, and new themes such as inspection first, last and overtime are described in this book. One special topic is the problem of failure detection: there exist two types of failures, one that can be detected immediately and another that cannot be detected unless an inspection is undergone. Inspection policies for these two types of failures are taken up. Another topic is the problem of hierarchical structure reliabilities, whose theoretical study is important. Application examples of a storage system, phased array radar, gas turbine, thermal power plant and airframe are introduced. Each system has its own unique characteristics and requires appropriate modifications for application.
The logical dependence of the chapters is shown in Fig. 1. Chapter 1 explains the importance of reliability engineering, the gap between mathematical models and their applications, and the problems of inspection, and briefly reviews the following nine chapters. Chapter 2 summarizes the results of past inspection models.


Fig. 1 Logical dependence of chapters

Chapter 3 is devoted to inspection first, last and overtime, which are new models. Chapter 4 summarizes the extension of inspection first and last to general models. Chapter 5 is devoted to inspection models with two types of failures. Chapter 6 summarizes hierarchical structure reliability. Chapters 7–10 are devoted to application examples of a storage system, phased array radar, power generator, and airframe. This book provides readers with inspiration for theoretical extensions based on practical ideas. Every system has various unique characteristics from the viewpoint of the people involved in its design and operation. When conventional inspection methods are improved by automation technologies, it is necessary to estimate quantitatively how much they have improved compared to conventional ones, and for that, extended models that reflect the characteristics of actual systems are needed. We wish to thank Mr. K. D. A. Truong for Chaps. 6, 8 and 10, and Mr. S. Kawakami for Chap. 3. We wish to thank all members of the Nagoya Computer and Reliability Research Group for their cooperation and valuable discussions. Especially, we wish to express our special thanks to Prof. S. Mizutani and Prof. X. Zhao for their cooperation in our works. Finally, we would like to express our sincere appreciation to Prof. Hoang Pham, Rutgers University, and Editor Kavitha Sathish, Springer-Verlag, London, for providing us the opportunity to write this book.

Tottori, Japan
Toyota, Japan

Kodo Ito Toshio Nakagawa


Chapter 1

Introduction

1.1 Importance of Reliability Engineering

Reliability engineering was established in the 1950s with the aim of maintaining the reliability of vacuum tubes, and has been applied widely, mainly to space and military systems [1]. Currently, the reliability of solid-state semiconductor devices, including large-scale integrated circuits, is a central need in reliability engineering. Highly reliable devices have led to the widespread use of computer systems in various fields, and this spread has increased the importance of new system elements. Furthermore, such devices have brought sophisticated functions and performance not only to electronic systems but also to conventional mechanical systems, resulting in greater complexity. The emergence of new systems has increased the risks of their development and maintenance, because system complications increase development costs and system troubles may cause secondary disasters. These new situations bring new challenges and subjects for reliability engineering. Three specific actual cases are described as follows:

(1) Civil Aircraft

The development of aircraft requires huge assets, which has led to an oligopoly of aircraft manufacturers [2]. The enormous development cost of aircraft is linked not only to the modernization of the various subsystems installed, but also to the discovery of Widespread Fatigue Damage (WFD), a new defect phenomenon related to airframe reliability. WFD is defined as the simultaneous presence of cracks at multiple structural details of sufficient size and density that the structure might no longer meet its damage tolerance requirement [3]. This phenomenon occurs in aluminum alloy airframes and leads to catastrophic accidents within a short operation time. It was first discovered in the Boeing 737-200 Aloha incident in 1988. After that, WFD was introduced into FAA regulations, and a full-scale fatigue test certifying damage tolerance was mandated. Manufacturers were obligated to certify freedom from WFD during the whole service period using real airframes [4], which became a heavy burden on airframe development.

(2) IT System of a Bank

IT systems have automated financial transactions such as deposits, withdrawals and remittances at banks, have become one of the fundamental social infrastructures, and are now used by many people in their daily lives. From February 28, 2021 to March 12, 2021, serious IT system failures occurred at a big bank in Japan. Customers suffered great damage, including strange behavior of Automatic Teller Machines (ATMs) and their operation suspension. In these failures, bankbooks and cards became stuck in ATMs and were not returned when customers tried to make transactions. On February 28, such strange behavior and operation suspension were reported at 5,244 and 4,318 ATMs, respectively [5]. In fact, from April 1 to early May 2002, the first serious IT system failures had occurred at the same bank. The banking system failed to process 105,000 automatic debits scheduled for April 1, and an additional 30,000 double debits occurred over 5 days [6]. This trouble was caused by human factors on the side of designing and operating the system [5]: (1) detailed hardware information was not delivered to the IT system designers; (2) the IT designers did not understand how harmful a card stuck in an ATM is to customers; (3) the troubleshooting response was delayed because the information transmission route was complicated.

(3) Social Infrastructure

In Japan, 39% of road bridges and 27% of tunnels will be more than 50 years old in 2023, and their large-scale maintenance has become a big national concern, because social infrastructures such as roads, tunnels and bridges are fundamental and indispensable to people's lives and economic activities [7]. Roads of 100 km, tunnels with a total length of several kilometers, and bridges of several tens of kilometers over the ocean require a lot of time and manpower just to inspect their deterioration status. Annual inspections and repairs of roads are performed by local governments, which face serious financial difficulties amid an aging workforce and regional economic downturns. Innovative methods to solve the above issues are therefore strongly awaited.


1.2 Stochastic Models and Practical Applications

One innovative way to solve the various problems in reliability engineering is to establish systematic management based on unified indicators such as system availability and total loss cost. When failure distributions and the various elemental costs and times related to failures are estimated, total expected costs and times can be clarified, and system availability and total loss cost can be obtained. Furthermore, the risks of reliability management can be estimated using system availability and total loss cost. Risks are estimated using mathematical models at the planning stage before the start of a project, and afterwards, actual data are collected to check whether the project is proceeding well without serious troubles. The purpose of reliability engineering is to design a system with a required high reliability. When system reliability is fixed, system maintainability is designed. Next, the system is inspected during operation, preventive maintenance (PM) is performed before system failure, and corrective maintenance (CM) is performed after system failure. A maintenance plan arranges inspection, PM, and CM, and establishes their logical structure. Many studies have already been published on maintenance theory using applied stochastic processes [1, 16]. Engineers select a suitable mathematical model created by researchers, modify it appropriately, and try to make an appropriate maintenance plan. However, such attempts may not work well, because there are real systems to which researchers' models cannot be applied, as shown in Fig. 1.1.
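As a minimal numerical illustration of such a unified indicator (a sketch under assumed values, not an example from the book), steady-state availability can be obtained from the mean time between failures (MTBF) and the mean time to repair (MTTR):

```python
def availability(mtbf: float, mttr: float) -> float:
    """Steady-state availability A = MTBF / (MTBF + MTTR)."""
    return mtbf / (mtbf + mttr)

# Hypothetical system: 2000 h mean time between failures, 8 h mean repair.
A = availability(2000.0, 8.0)
print(f"availability = {A:.4f}")  # 0.9960
```

Once the elemental times (and analogously, the elemental costs) are estimated, such an indicator lets dissimilar design and maintenance alternatives be compared on a single scale.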

(1) Characteristics of Real Systems

Most systems in maintenance theory are assumed to run in operation, where run means that the system is energized and working. However, some systems, such as airbags and missiles, spend most of their lifetimes stored in a non-operating state, so conventional maintenance models cannot be applied to them.

Fig. 1.1 Storage reliability with periodic test [8]

A missile operates [8] for only a few tens of seconds to a few tens of minutes from launch until target hit, and is in storage for most of its lifetime, except for periodic maintenance, until its operation. Conventional models may not be applicable to real systems whose characteristics are far from those of the models' virtual systems. In such cases, new models have to be established by considering the characteristics of real systems.

(2) Necessity of Switching Evaluation Indicators

It may be necessary to change evaluation indicators depending on the operational situation. For example, in the case of military systems, availability is prioritized over operating cost during wartime, but cost may be prioritized over availability in peacetime. Maintenance personnel in peacetime tend to refrain from replacing failed parts with new spares, in order to secure a stock of spare parts for wartime, because military system parts are expensive. Therefore, it is more appropriate to switch the evaluation indicator of such systems between wartime and peacetime: cost models should be changed to availability ones when they are applied to wartime.

(3) Sensitive Parameters

Theoretical models may need to be modified into practical ones. For example, imperfect maintenance models have parameters that indicate imperfect inspection. It is desirable that such parameters do not appear explicitly, because even slight changes in them might cause large differences in calculation results. The factor of safety (FS) in airframe design is similar to one of these parameters; it is defined as FS = allowable stress / design stress [9]. The FSs of an airplane, a manned space vehicle and a missile are 1.5, 1.4, and 1.29, respectively [10]. When FS is low, airframe weight can be reduced and more personnel and luggage can be loaded, but strength decreases and the damage risk during operation increases. In airframe design, the finite element method is applied to obtain an extremely lightweight structure which can resist external loads while satisfying FS. Even a slight change of FS may greatly change the airframe calculation and airframe weight, so once FS is set, it cannot be changed easily. Designers do not want sensitive parameters that significantly change the design to appear in technical documentation. In applying imperfect models to airframe maintenance, we take care to modify them suitably so that no sensitive parameter appears.
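The sensitivity of structural sizing to FS can be sketched with a toy model (for illustration only; the load and allowable-stress values below are hypothetical, and real airframe sizing uses the finite element method as noted above). Since design stress = load/area, satisfying allowable/design ≥ FS makes the required cross-sectional area, and hence weight, scale linearly with FS:

```python
def required_area(load_n: float, allowable_pa: float, fs: float) -> float:
    """Toy sizing: smallest cross-sectional area (m^2) such that the
    design stress load/area satisfies allowable/design >= FS."""
    return fs * load_n / allowable_pa

load = 5.0e5                 # N, hypothetical limit load
allowable = 450.0e6          # Pa, hypothetical alloy allowable stress
for fs in (1.29, 1.4, 1.5):  # missile, manned space vehicle, airplane [10]
    area = required_area(load, allowable, fs)
    print(f"FS = {fs:.2f}: area = {area * 1e4:.2f} cm^2")
```

In this toy model, moving from FS = 1.29 to FS = 1.5 adds about 16% of structural material, which illustrates why even a small change of FS propagates into a large change of airframe weight.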


(4) Difficulties in Solving Problems Researchers form stochastic reliability models under the assumption that various functions in these models are exponential, because exponential functions make such models easy to analyze and complex equations simple to solve. However, exponential functions are sometimes not adequate to represent phenomena in practice. For example, a Weibull distribution is most often used as a failure distribution because it can express both the initial and wearout failure periods. Compared to an exponential distribution, it is difficult to form equations theoretically and to solve them numerically for a Weibull one. For example, the Stieltjes convolutions of functions for shock and damage models [11, p. 14] become simple in the exponential case (Problem 1) as follows: When $G(x) = 1 - e^{-\omega x}$,

$$G^{(j)}(x) \equiv \int_0^x G^{(j-1)}(x-y)\,\mathrm{d}G(y) = \sum_{i=j}^{\infty} \frac{(\omega x)^i}{i!}\, e^{-\omega x}, \qquad M(x) \equiv \sum_{j=1}^{\infty} G^{(j)}(x) = \omega x, \qquad (1.1)$$

where $G^{(j)}(x)$ $(j = 1, 2, \dots)$ denotes the $j$-fold Stieltjes convolution of $G(x)$, with $G^{(0)}(x) \equiv 1$. It is very difficult to calculate the convolution for non-exponential functions. For example, when $G(x) = 1 - \exp[-(\omega x)^m]$, an approximation formula for the Weibull distribution is [12]

$$G^{(j)}(x) \approx \sum_{i=j}^{\infty} \frac{u^i}{i!}\, e^{-u}, \qquad u \equiv (\omega p x)^m, \qquad p \equiv \frac{\Gamma(j + 1/m)}{\Gamma(1 + 1/m)\, j!}. \qquad (1.2)$$
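The contrast between (1.1) and (1.2) can be checked numerically. The sketch below truncates the infinite sums (parameter values are illustrative, and function names are mine); for m = 1 the approximation reduces to the exponential result, since then p = 1:

```python
import math

def poisson_tail(j, u, terms=150):
    """sum_{i=j}^{inf} u**i / i! * exp(-u), truncated; stable term recursion."""
    term, total = math.exp(-u), 0.0
    for i in range(terms):
        if i >= j:
            total += term
        term *= u / (i + 1)   # next Poisson term without large factorials
    return total

def conv_exp(j, x, w):
    # j-fold Stieltjes convolution of G(x) = 1 - exp(-w*x), Eq. (1.1)
    return poisson_tail(j, w * x)

def conv_weibull(j, x, w, m):
    # Approximate j-fold convolution of G(x) = 1 - exp(-(w*x)**m), Eq. (1.2)
    p = math.gamma(j + 1 / m) / (math.gamma(1 + 1 / m) * math.factorial(j))
    return poisson_tail(j, (w * p * x) ** m)

# Renewal-function check for the exponential case: M(x) = sum_j G^(j)(x) = w*x
w, x = 0.5, 3.0
M = sum(conv_exp(j, x, w) for j in range(1, 100))
```

Summing the convolutions recovers M(x) = ωx for the exponential case, confirming (1.1).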

1.3 Two Problems of Inspection

The purpose of inspections is to observe system conditions and to determine suitable maintenance actions according to their results. Even trivial disorders can be found immediately during operation when systems are energized and in use. However, problems in systems such as airbags and missiles cannot be detected immediately, even serious ones, because such systems spend most of their lifetimes in a non-energized condition. To avoid mission failures of airbags and missiles at their operations, it is necessary to inspect their conditions periodically, which makes it possible to observe system health and improve system availability. From the above viewpoints, the following two problems of inspection exist in practice.


(1) Inspection is Imperfect Inspections cannot clarify system conditions perfectly. This is not only due to human errors of those in charge of inspections; there are also cases where inspections cannot be performed perfectly. For example, airframes are generally semi-monocoque structures made of aluminum alloy or FRP (Fiber Reinforced Plastics), composed of aluminum outer skin panels and structural members such as stringers, bulkheads, and frames. The cabin and the airframe structure are separated by inner skin panels, and occupants do not directly touch the aluminum members in Fig. 1.2. In daily inspections, only outer skin panels are inspected from the outside, and even if there exist cracks in stringers, they cannot be found. To find these cracks, we have to wait for a heavy maintenance, which is undergone at every 22,500 flight hours; then, all critical cracks are inspected and repaired by removing inner skin panels [13]. Even when inspection facilities are connected to systems and their functions are inspected electrically, it may be as difficult to perform perfect inspections as for visual ones. For example, missiles are inspected periodically in storage and just before launch, to confirm electrically that each of them can function perfectly. Inspection facilities are designed with proper consideration so that all functions of missiles can be inspected through umbilical cables connected physically between missiles and inspection facilities. Even if all functions of components can be inspected perfectly, there still may exist some portions which cannot be inspected. Periodic inspections during storage are undergone on the ground, and pre-launch ones are undergone in launchers on the ground, in ASROC (Anti-Submarine ROCket) launchers on ships, and under aircraft pylons. Missiles in launch and flight are exposed to much more severe environments than on the ground, offshore, or on manned aircraft.

MIL-HDBK-217 is used to estimate failure rates of electronic and electrical parts quantitatively, and contains environmental factors for estimating failure rates under various environments. For example, failure rates of microcircuits on the ground are doubled in ASROC launchers on ships, quadrupled under fighter pylons, and multiplied six times for missile launches, because the environmental factors of microcircuits are 2.0 on the ground, 4.0 in canisters on ships, 8.0 under fighter pylons, and 12 for missile launches [14]. When there exist soldering defects on printed circuit boards of missiles, they may pass inspections on the ground because all contacts are connected, but malfunctions due to poor contacts may occur from vibration or impact after launch.
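The ratios quoted above follow directly from the environmental factors; a minimal sketch (the dictionary keys are my naming, the factor values are those listed in the text [14]):

```python
# Environmental factors (pi_E) for microcircuits as quoted in the text [14]
PI_E = {
    "ground": 2.0,
    "ship_canister": 4.0,    # e.g., ASROC launchers on ships
    "fighter_pylon": 8.0,
    "missile_launch": 12.0,
}

def scaled_failure_rate(lambda_ground: float, env: str) -> float:
    # Failure rate scales with the ratio of environmental factors
    return lambda_ground * PI_E[env] / PI_E["ground"]
```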

Fig. 1.2 Cross-sectional structure of airframe


(2) Inspection is Expensive Inner skin panels of airframes must be removed to find any cracks on stringers. Airframe disassembly and overhaul are undergone at periodic inspections performed at every 22,500 flight hours and are expensive. For example, a B747-400 D-check conducted in 2010 ranged from 4 to 4.5 million dollars [15]. To inspect missiles in storage, they have to be removed from storage, pulled from missile canisters, and attached to a special inspection facility. In the case of cruise missiles several meters long, cranes and forklifts must be used for the transfer work, which is time-consuming and labor-intensive. These inspections are performed automatically by test facilities, but the transfers and installations require human work with inspection costs. In inspections of social infrastructure such as roads and bridges, their size and number are obstacles. In the case of roads, it is not easy to inspect an entire road because it has a long length and many branches. Japanese regulation has adopted a two-step method: a visual inspection over the entire length of a road is conducted every 5 years, and then detailed inspections and maintenance are made. In tunnels especially, inspection results are classified into 4 stages and different maintenance actions are undergone. In the case of large-scale bridges over the ocean, automatic inspections are made by structural health monitoring. However, a large number of sensors and data loggers must be installed on bridges to realize structural monitoring, which may require a large initial investment. To maintain systems with expensive and imperfect inspections, it is necessary to establish unified indicators such as system availability and total economic loss, and to make optimal inspection plans based on them. Furthermore, to formulate such unified indicators, it is important to quantify imperfect inspection and to establish models which reflect the characteristics of systems.

In this book, we describe optimal inspection plans from both theoretical and practical directions, with examples of various kinds of real systems.

1.4 Standard and Random Inspections

Many theoretical inspection models have already been proposed [1, 16–18]. Among them, random inspection is a relatively new theme. Standard inspection models are summarized in Chap. 2. For inspections specified in regulations, periodic inspections are widely adopted because they are easy to manage, and so, it is unlikely that periodic inspections adopted in the past will be changed to random ones. In the future, however, manual inspections could be switched to automatic computer inspections in many systems to save labor. For example, rockets before launch need a great number of inspections, which are extremely complicated and time-consuming. By automating these inspections, the Japanese Epsilon Launch Vehicle has reduced the time and manpower needed before launch [19]. The Epsilon is the successor to the M-V Launch Vehicle, and is one of Japan's leading rockets along with the H-II rocket. The M-V Launch Vehicle needed a lot of inspection equipment, skills, and manpower. In the Epsilon, these inspections are undergone automatically by on-board devices called Responsive Operation Support Equipment (ROSE), which has eliminated the connections between various pieces of inspection equipment and the rocket, and significantly shortened inspection times. For such automatic inspections, inspection plans can be set flexibly, and there is a possibility that random inspections can be realized. It is expected that random inspections will be applied to a number of systems in the near future. Random inspection models are described in Chap. 3.

1.5 General Inspection

Reliability engineering was established to realize highly reliable systems in an era when only unreliable elements such as vacuum tubes were available. When a system is inspected at a planned time or at a random working time, whichever occurs first, this is called inspection first, based on an idea which prioritizes reliability over other indicators. This is appropriate for military systems in wartime, but such inspection policies should be changed depending on whether it is wartime or peacetime, because modern military systems are expensive and require huge maintenance costs. When there is a shortage of spare parts for military systems in peacetime, parts may be diverted from less important systems to more important ones. This diversion is called cannibalization. An example is given as follows: Suppose that system A is not mission capable because of a failure of its subsystem C, and that system B has a working subsystem C but is not mission capable because of a failure of its subsystem D. Field workers may then move C from system B to system A. This makes system A available for missions and leaves system B missing subsystem C in addition to its failed subsystem D [20]. While some systems can return to operating states by cannibalization, the donor systems cannot operate. In fiscal years 1996 to 2000, the U.S. Air Force and U.S. Navy reported a total of about 850,000 cannibalizations [21]. In peacetime, it is practical to replace failed subsystems when they have accumulated to a certain amount, instead of replacing them immediately. This idea is based on minimizing system maintenance costs in peacetime, and an inspection last policy can be accepted when this idea is adopted. This is the policy that a system is inspected at a planned time or at a random working time, whichever occurs last. General inspection models with inspection first and last are introduced in Chap. 4.

1.6 Inspection Model with Minimal Repair

When defects are found at inspection, defective parts are replaced with new ones. Such replacement parts are designed in advance to be easy to replace so as to improve system availability. In the case of military supplies, these replacement parts are called spare parts, which are purchased for replacement, replenishment of stock, or for use in maintenance, overhaul, and repair of systems [20]. In most inspection models, system reliability is restored by replacing failed parts with new ones. These models are applicable when spare parts are constantly supplied, but they cannot be applied when spare parts are not supplied. For example, systems used on ships must be maintained with the limited spare parts on board once ships have departed from their home ports and support from supply ships is not available. When spare parts are exhausted and new parts are not available, used ones have to be employed. Such second-hand parts are not limited to those kept in storage, but may also be cannibalized from similar systems of lower importance. Such cannibalization is not ideal for system operation; however, it might occur in actual operation. When spare parts taken from other systems are used, system reliability cannot be expected to be restored significantly because the parts are not new. When reliabilities before and after a repair are unchanged, this is called minimal repair. There are cases where failures can be found during operation and cases where they cannot; the latter are explained in detail in Sect. 1.9. When failures cannot be found during operation, systems stop their operation and inspections are undergone. In Chap. 5, we consider optimal inspection policies with minimal repair and two types of failures, i.e., one which can be detected during operation and another which can be detected only at inspection.

1.7 Hierarchical Structure Reliability

A system is composed of subsystems, and a subsystem is composed of lower-level subsystems. Such system hierarchy is important in planning system maintenance and spare parts supply. For example, a fighter aircraft houses engines, hydraulic equipment, and avionics inside the airframe, as shown in Fig. 1.3. Avionics can be replaced by opening inspection panels on the sides of the aircraft. There are various types of avionics, such as communication equipment, navigation systems, autopilot systems, and flight management systems, which are compact cabinets and can easily be taken out by removing connector wirings. Maintenance performed beside aircraft is called line maintenance, and the units replaced here are such cabinets. These are called line-replaceable units (LRU): parts or assemblies which are typically removed directly from aircraft when maintenance other than adjustment, calibration, or servicing is undergone [22]. Fighters return to missions when maintenance workers replace failed LRUs with normal ones. There are many printed circuit boards inside an LRU, and failed cabinets are repaired by replacing failed boards with normal ones. Inspections to confirm that printed circuit boards are normal are performed by special inspection devices, and normal LRUs are returned to line maintenance. Such printed circuit boards are called shop-replaceable units (SRU): subassemblies of LRUs which are typically replaced during repairs of LRUs [22]. Repairs of failed LRUs are undergone by intermediate maintenance [23]. Failed SRUs are repaired by replacing failed parts at depot maintenance or manufacturers [23].

Fig. 1.3 Hierarchical structure example of jet fighter

Such maintenance makes it possible to attain high availability of fighter aircraft on the line, and to request spare parts such as SRUs from manufacturers. We examine relationships between hierarchical structures and system reliabilities [24]: series systems composed of parallel units and parallel systems composed of series units are considered in Chap. 6. Reliabilities of both systems are obtained and compared, and optimal inspection intervals to minimize expected cost rates are computed and compared theoretically and numerically.
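The comparison studied in Chap. 6 can be previewed with independent, identical units of reliability p: a series system of m parallel subsystems (redundancy at the unit level) is never less reliable than a parallel system of n series branches (redundancy at the system level). A sketch assuming independence, with illustrative values:

```python
def series_of_parallels(p: float, n: int, m: int) -> float:
    # m subsystems in series, each a parallel set of n units
    return (1.0 - (1.0 - p) ** n) ** m

def parallel_of_series(p: float, n: int, m: int) -> float:
    # n branches in parallel, each a series of m units
    return 1.0 - (1.0 - p ** m) ** n
```

For p = 0.9, n = 2, m = 3 the reliabilities are about 0.970 versus 0.927, so redundancy applied at the lower level of the hierarchy wins.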


1.8 Application Examples

(1) Storage System We have already mentioned that inspection models are greatly needed for storage systems. In order to address these issues, it is necessary to reform inspection models in consideration of the characteristics of storage systems. In past models, total expected costs are introduced which contain inspection costs and loss costs for the downtime from failure to its repair. Usually, general systems are operating constantly and their downtime costs are given; however, storage systems are kept in a non-operating state and have no downtime cost. For such cases, inspection models were established considering overhaul costs [25, 26]. When storage systems are in non-operating states, do they deteriorate with time and eventually fail? Past studies answered this question: systems kept in storage without operation certainly deteriorate and might fail [27]. Degradation rates of storage systems are slower than those of operating ones, and the number of failed systems increases gradually with time. In order to avoid operational losses of storage systems, it is necessary to undergo periodic inspections to confirm that they can operate normally. Therefore, overhauls of storage systems have to be undergone after a certain period of storage time. By using overhaul costs instead of loss costs, inspection models of storage systems are established in Chap. 7.

(2) Phased Array Radar The antenna of a phased array radar is composed of a large number of homogeneous element antennas, as shown in Fig. 1.4. Radar performance is not affected when a small number of these element antennas fail, but the required performance of the radar system is not satisfied when a large number of them fail. Such systems can automatically identify failed elements, but failed ones cannot be inspected and replaced during operation. So, it is necessary to interrupt operation at periodic times, undergo inspection, and replace failed element antennas.

Fig. 1.4 Conceptual diagram of phased array radar antenna and element antenna

Radar systems are required to operate continuously, so it is necessary to minimize the impacts of inspection and replacement from the viewpoint of availability. When operation continues without inspection, many element antennas may fail, and system performance degrades and falls below a required level. Therefore, it is necessary to undergo optimal inspections, considering both losses due to low availability and those due to deterioration of performance [25, 26, 28]. Optimal inspection models of the phased array radar are described in Chap. 8.
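The performance requirement above is naturally modeled as a k-out-of-n structure: the radar satisfies its requirement while at least k of its n element antennas operate. A sketch assuming independent, identical elements with binomial failures; the parameter values in the usage note are illustrative only:

```python
from math import comb

def radar_ok_probability(n: int, k: int, p: float) -> float:
    # P(at least k of n independent elements work), each working w.p. p
    return sum(comb(n, i) * p ** i * (1.0 - p) ** (n - i)
               for i in range(k, n + 1))
```

For example, with n = 4 elements of reliability p = 0.9 and a requirement of k = 2 working elements, the radar meets its requirement with probability about 0.9963.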

(3) Gas Turbine FADEC Controllers of internal combustion engines, including gas turbines, are called governor controllers. The purpose of a governor controller is to maintain a target rotational speed by controlling fuel flow in response to load fluctuations, as shown in Fig. 1.5. Governor controllers were originally mechanical systems; recently, they have been digitized and can operate engines with higher efficiency and performance. Control software can perform multiple tasks such as engine control calculation and self-diagnosis by using time-sharing methods with interrupt control. An increase in the load of one task may affect another one when hardware resources are limited. So, the execution cycle of self-diagnosis must consider both its influence on engine control calculation and the damage due to diagnostic delay. Complicated computer software could be processed using high-performance computers with advanced technologies; however, when gas turbine control systems are designed, they adopt matured technologies with stable production quality and confirmed reliability.

Fig. 1.5 Principle diagram of mechanical governor controller

When self-diagnoses of gas turbines are designed, we consider the following trade-off: When self-diagnosis intervals are short, the burden of self-diagnosis increases, which adversely affects engine control calculations. Conversely, when self-diagnosis intervals are long, failure detections and emergency stops are delayed, and damage to engines increases. Therefore, it is necessary to set diagnosis intervals adequately, considering both the delay of failure detection and the adverse effect on engine control calculations, which are expressed by cost weights [25, 26]. Optimal diagnosis intervals to minimize expected cost rates are derived in Sect. 9.1.

Fig. 1.6 Schematic diagram of cogeneration system
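This trade-off can be illustrated with a deliberately simplified cost-rate model (my illustration, not the model of Sect. 9.1): if each self-diagnosis imposes an overhead cost c_o, failures occur at rate λ, and delayed detection costs c_d per unit of delay (average delay T/2 for interval T), then C(T) = c_o/T + c_d λT/2 is minimized at T* = √(2c_o/(c_d λ)):

```python
import math

def cost_rate(T, c_o, c_d, lam):
    # Illustrative trade-off: diagnosis overhead per cycle vs. damage from
    # delayed failure detection (average delay T/2); NOT the book's model.
    return c_o / T + c_d * lam * T / 2.0

def optimal_interval(c_o, c_d, lam):
    # Balance point of the two terms: T* = sqrt(2*c_o / (c_d*lam))
    return math.sqrt(2.0 * c_o / (c_d * lam))

T_star = optimal_interval(c_o=1.0, c_d=10.0, lam=0.01)  # hypothetical weights
```

Short intervals inflate the first (overhead) term, long intervals the second (delay-damage) term, exactly the tension described above.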

(4) Power Generator A standard cumulative damage model assumes that damage accumulates over time and systems eventually fail [11]. A simple damage example is a metal test piece that eventually breaks when it is damaged repeatedly and the total damage reaches a failure level. Corrective maintenance after reaching a failure level causes great damage in operation. So, a managerial level lower than the failure level is set, and preventive maintenance is undergone when the total damage has exceeded the managerial level. Real systems are composed of various subsystems, subsystems are composed of various lower subsystems, and such hierarchical structures continue down to the component parts. A huge number of component types is required for power plants because they are composed of various types of subsystems. A schematic diagram of a cogeneration system is shown in Fig. 1.6 as an example of a power plant.


Fig. 1.7 Schematic diagram of power generation motor with open-cycle gas turbine

Figure 1.7 shows a schematic diagram of the power generation motor in the cogeneration system. A power generation motor consists of a gas turbine, fuel valves, and a governor controller, where gas turbines are mechanical systems, fuel valves are electrical components, and governor controllers are mechanical or electronic devices. A cogeneration system consists of a power generation motor, a generator, and an HRSG (Heat Recovery Steam Generator), where generators are electrical devices and HRSGs are mechanical devices. Units replaced at maintenance vary from subsystems to component parts, and failure levels vary with unit types [29–31]. We discuss managerial levels for preventive maintenance of systems with various types of units and failure levels in Sect. 9.2.

(5) Airframe Airframe inspections of the structures in Fig. 1.2 are imperfect, and there exist sensitive parameters whose small changes cause big changes in calculation results. Such parameters have to be avoided in airframe design. So, we establish imperfect preventive maintenance models in which such parameters do not appear in Sect. 10.1.1.

Airline companies need to earn operating profits from their aircraft operations. Utilization rates of old aircraft are predicted to decrease with time, and their maintenance costs increase with the growth of deteriorated units. Values of commercial aircraft decline over time [32, 33]. So, airline companies need to decide to censor operations of aircraft at some point, after which the following fates are waiting [33]:

Reuse: returning aircraft to other services in which sufficient operating profit can be secured.
Recycle: returning components to other active aircraft.
Energy: returning materials to industrial processes.
Landfilling: discarding materials in landfills.

Aircraft censored by airline companies may be reused by other companies with different customers, and so, companies need quantitative guidelines for censoring operations. Using optimal maintenance models of aircraft, we establish optimal censoring policies, considering decreases of aircraft price with time, in Sect. 10.1.2.

Cracks in structural members expand over time as stresses act on the members, which then cannot maintain sufficient strength. This problem is a threat to all structures of machines and buildings, and it is especially important in airframes because their strength is designed close to the limits for weight reduction. A well-known crack accident was the in-flight breakup of the DH.106 Comet in 1954. This was the first commercial jet aircraft and flew at higher altitudes than conventional aircraft, so large stresses acted on the airframe due to pressure differences inside and outside the aircraft, and fractures due to cracks progressed in a short time. After this accident, fail-safe designs were incorporated into airframe design. Furthermore, damage tolerant designs were incorporated into airframe design after the Boeing 707 crash in 1977, and were established after the Boeing 737-200 Aloha incident in 1988. Cracks can cause various accidents: In particular, brittle fractures have crack propagation speeds of up to 1,000 m/s and cause fatal damage to entire structures in a short time. Fracture mechanics has been developed to clarify such fracture phenomena and to utilize them in actual mechanical designs. Analysis software tools such as AFGROW and NASGRO have been developed to predict the growth of cracks precisely [34, 35]. In addition to these crack growth prediction tools, it is necessary to clarify what kind of policies should be taken against crack growth. We establish managerial crack levels at which to undergo preventive maintenance before cracks reach fatal levels in Sect. 10.2.

When we take up damage due to cracks, their sizes and numbers should be considered together, because airframe strength is greatly reduced not only by a single large crack but also by a large number of small cracks. Therefore, we derive optimal maintenance policies with both size and number of cracks in Sect. 10.3.


1.9 Problems

1. Derive (1.1).

References 1. Barlow RE, Proschan F (1965) Mathematical theory of reliability. Wiley, New York 2. Harrison GJ (2011) Challenge to the Boeing-Airbus duopoly in civil aircraft: issues for competitiveness specialist in industry policy. Congressional Res Serv Rep for Congress 7–5700:R41925 3. FAA (1998) Advisory circular 25.571C damage tolerance and fatigue evaluation of structure. US Department of Transportation 4. FAA (1998) 14 CFR Part 25 Amendment 25-96 5. Special Investigative Committee (2021) Receipt of the investigative report of the system failure. Mizuho Financial Group Inc 6. JEF (July/Aug 2002) Mizuho Systems Debacle. J Jpn Trade Ind 7. Ministry of Land (2017) White paper on land, infrastructure, transport and tourism in Japan 8. Eugene C, Martinez EC (1984) Storage reliability with periodic test. Proc Annu Reliab Maintainab Symp 181–185 9. Ramsey JK (2019) Calculating factors of safety and margins of safety from interaction equations. NASA/TM-2019-220153 10. Muller GE, Schmid CJ (1978) Factors of safety-USAF design practice. National Technical Information Service (NTIS), AFFDL-TR-78-8 11. Nakagawa T (2007) Shock and damage models in reliability theory. Springer, London 12. Schlenker GJ (1986) Methods for calculating the probability distribution of sums of independent random variables. (AD-A170465) US Army Armament, Munitions and Chemical Command Systems Analysis Office 13. Dixon M (2006) The maintenance costs of aging aircraft: insights from commercial aviation. RAND Corporation, California 14. DOD, Washington DC (1991) Military handbook: reliability prediction of electronic equipment. (MIL-HDBK-217F) 15. Ackert SP (2010) Basics of aircraft maintenance programs for financiers. Evaluation & insights of commercial aircraft maintenance programs 16. Nakagawa T (2005) Maintenance theory of reliability. Springer, London 17. Nakagawa T (2014) Random maintenance policies. Springer, London 18. 
Ito K, Nakamura S, Nakagawa T (2017) A summary of replacement policies for continuous damage models. In: Nakamura S, Qian CH, Nakagawa T (eds) Reliability modeling with computer and maintenance applications. World Scientific, Singapore, pp 331–343 19. Yui T, Kitai Y, Nohara M, Obara H (2012) Development of ground systems for efficient checkout and launch operation of the Epsilon launch vehicle. The Japan Society for Aeronautical and Space Sciences 20. Vasquez RB (2019) Audit of Navy and defense logistics agency spare parts for F/A-18 E/F Super Hornets. (DODIG-2020-030) Department of defense office of inspector general 21. Curtin NP (2001) Military aircraft: cannibalizations adversely affect personnel and maintenance. (GAO-01-693T, A01046) United States general accounting office 22. Adams JL, Abell JB, Isaacson KE (1993) Modeling and forecasting the demand for aircraft recoverable spare parts. (ADA282492) Rand Corp, Santa Monica 23. Mullen III WF (2020) Maintenance operations. (MCTP 3-40E) Marine corps tactical publication


24. Ito K, Nakagawa T (2019) Reliability properties of K-out-of-N: G systems. In: Ram M, Dohi T (eds) Systems engineering reliability analysis using k-out-of-n structures. CRC Press, Boca Raton, pp 25–40 25. Ito K, Nakagawa T (2009) Applied maintenance models. In: Ben-Daya M, Duffuaa SO, Raouf A, Knezevic J, Ait-Kadi D (eds) Handbook of maintenance management and engineering. Springer, Dordrecht, pp 363–395 26. Ito K, Nakagawa T (2009) Maintenance models of miscellaneous systems. In: Nakamura S, Nakagawa T (eds) Stochastic reliability modeling, optimization and applications. World Scientific, Singapore, pp 243–278 27. Menke JT (1983) Deterioration of electronics in storage. Natl SAMPE Symp 966–972 28. Nakagawa T, Ito K (2007) Optimal availability models of a phased array radar. In: Dohi T, Osaki S, Sawaki K (eds) Recent advances in stochastic operations research. World Scientific, Singapore, pp 115–130 29. Ito K, Nakagawa T (2006) Maintenance of a cumulative damage model and its application to gas turbine engine of co-generation system. In: Pham H (ed) Reliability modeling, analysis and optimization. World Scientific, Singapore, pp 429–438 30. Ito K, Nakagawa T (2009) Optimal censoring policies for the operation of a damage system. In: Dohi T, Osaki S, Sawaki K (eds) Recent advances in stochastic operations research II. World Scientific, Singapore, pp 201–210 31. Ito K (2013) Maintenance models of miscellaneous systems. In: Nakamura N, Qian CH, Chen M (eds) Reliability modeling with applications. World Scientific, Singapore, pp 307–330 32. Towle I, Johnston C, Lingwood R, Grant PS (2004) The aircraft at end of life sector: a preliminary study. Department of Materials, Oxford University, Oxford 33. Oliveira Junior FS (2019) Identification and assessment of the economic outcomes of commercial aircraft decommissioning: a theoretical and mathematical approach to support decision-making regarding end-of-life aircraft treatment issues. Master's thesis in production engineering, Federal University of Rio de Janeiro 34. Harter JA (1999) AFGROW users guide and technical manual. Defense Technical Information Center, Fort Belvoir 35. Skorupa M, Machniewicz T, Schijve J, Skorupa A (2007) Application of the strip-yield model from the NASGRO software to predict fatigue crack growth in aluminium alloys under constant and variable amplitude loading. Eng Fract Mech 74:291–313

Chapter 2

Standard Inspection Models

System reliability can be improved by providing redundant and standby units. Standby units play an especially important role when failures of an operating unit are costly or dangerous. A typical example is standby electric generators in nuclear power plants, hospitals, and other public facilities. However, it would be gravely serious if a standby generator had already failed when the electric power supply stops. So, frequent and economical inspections are necessary to avoid such emergency situations. Similar examples can be found in defense systems in which all weapons are on standby, and hence, they have to be checked at suitable times. For example, missiles are stored for a great part of their lifetimes after delivery. However, their reliabilities are known to decrease with time because some parts deteriorate. Thus, it is important to check whether they can operate normally, and it is necessary to check them periodically to monitor them and to repair their parts.

Earlier studies have been done on the problem of checking a single unit. Optimal schedules of inspections to minimize two expected costs, until failure detection and per unit of time, were summarized [1, 2]. Inspection models with a finite interval [3–5] were considered, and some modified periodic inspection models for complex systems were discussed [6–8]. Furthermore, imperfect inspection [9] and approximation methods using an exponential distribution [10] were proposed. Using Brender's algorithm [11], optimal inspection times to minimize the expected cost rates were computed [12, 13].

This chapter reviews the results [2]: Sect. 2.1 summarizes the standard models with successive and periodic checking times, and introduces two simple and easy asymptotic methods of optimal policies. Most units are usually working over a finite interval. Sections 2.2 and 2.3 rewrite the results of Sect. 2.1 for a finite interval S, when S is constant and when it is a random variable. Section 2.4 proposes three modified models

© Springer Nature Switzerland AG 2023 K. Ito and T. Nakagawa, Optimal Inspection Models with Their Applications, Springer Series in Reliability Engineering, https://doi.org/10.1007/978-3-031-22021-0_2

19

20

2 Standard Inspection Models

with imperfect inspection, inspection of intermittent faults and inspection for a scale. These would be mainly assumed based on practical applications of inspection models such as phased array radars, power generators and aircrafts in thereafter chapters.

2.1 Standard Inspection Policy

An operating unit operates for an infinite time span and is checked at successive times T_k (k = 1, 2, ...), where T_0 ≡ 0. Any failure is detected at the next checking time, and the unit is replaced immediately. The unit has a failure distribution F(t) with finite mean μ ≡ ∫_0^∞ F̄(t) dt, where Φ̄(t) ≡ 1 − Φ(t) for any function Φ(t), and a failure rate h(t) ≡ f(t)/F̄(t) which is unchanged by any check, where f(t) is the density function of F(t), i.e., f(t) ≡ dF(t)/dt. It is assumed that all times needed for checks and replacements are negligible. Let c_T be the cost of one check and c_D be the downtime cost per unit of time for the time elapsed between a failure and its detection at the next checking time. Then, the total expected cost until failure detection is [1, p. 108], [2, p. 203]

  C(T) = Σ_{k=0}^∞ ∫_{T_k}^{T_{k+1}} [c_T (k+1) + c_D (T_{k+1} − t)] dF(t)
       = Σ_{k=0}^∞ [c_T + c_D (T_{k+1} − T_k)] F̄(T_k) − c_D μ ,          (2.1)

where T = {T_1, T_2, ...}. Differentiating C(T) with respect to T_k (k = 1, 2, ...) and setting it equal to zero,

  T_{k+1} − T_k = [F(T_k) − F(T_{k−1})] / f(T_k) − c_T/c_D .          (2.2)

Algorithm 1 for computing the optimal inspection schedule is [1, p. 112], [2, p. 203]:

Algorithm 1
1. Choose T_1 to satisfy c_T = c_D ∫_0^{T_1} F(t) dt.
2. Compute T_2, T_3, ... recursively from (2.2).
3. If any δ_k > δ_{k−1}, reduce T_1 and repeat, where δ_k ≡ T_{k+1} − T_k. If any δ_k < 0, increase T_1 and repeat.
4. Compute until T_1 < T_2 < ... are determined to the degree of accuracy required.

The mean time until failure detection is

  M(T) = Σ_{k=0}^∞ T_{k+1} [F(T_{k+1}) − F(T_k)] = Σ_{k=0}^∞ (T_{k+1} − T_k) F̄(T_k) .          (2.3)
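As an illustrative sketch (mine, not the authors' code), Algorithm 1 can be programmed directly, with Step 3 implemented as a bisection on T_1; the Weibull distribution F(t) = 1 − exp[−(t/500)²] and the ratio c_T/c_D = 10 are the numerical case tabulated later in Table 2.1, and the resulting first checking time comes out close to its 205.6:

```python
import math

# Assumed illustrative case: Weibull F(t) = 1 - exp[-(t/500)^2], cT/cD = 10.
F = lambda t: 1.0 - math.exp(-(t / 500.0) ** 2)
f = lambda t: (2.0 * t / 500.0 ** 2) * math.exp(-(t / 500.0) ** 2)
cT_over_cD = 10.0

def schedule(T1, K=12):
    """Run recursion (2.2) from a trial T1 for K steps, with Step-3 diagnostics."""
    T = [0.0, T1]
    for _ in range(K):
        delta = (F(T[-1]) - F(T[-2])) / f(T[-1]) - cT_over_cD
        if delta <= 0.0:
            return T, "too_small"                # some delta_k < 0: increase T1
        if len(T) > 2 and delta > T[-1] - T[-2]:
            return T, "too_large"                # delta_k > delta_{k-1}: reduce T1
        T.append(T[-1] + delta)
    return T, "ok"

lo, hi = 50.0, 400.0                             # bisection on T1 (Step 3 repeated)
for _ in range(60):
    T1 = 0.5 * (lo + hi)
    verdict = schedule(T1)[1]
    if verdict == "too_small":
        lo = T1
    elif verdict == "too_large":
        hi = T1
    else:
        break
times = schedule(T1)[0]
```

The bracket [50, 400] and the step count K are ad hoc choices; the point is only that Step 3 turns the recursion into a one-dimensional search on T_1.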

Thus, the expected cost rate is

  R(T) ≡ C(T)/M(T) = [c_T Σ_{k=0}^∞ F̄(T_k) − c_D μ] / [Σ_{k=0}^∞ (T_{k+1} − T_k) F̄(T_k)] + c_D .          (2.4)

Furthermore, we define [1, p. 115]

  D(T, α) ≡ C(T) − α M(T) = Σ_{k=0}^∞ [c_T + (c_D − α)(T_{k+1} − T_k)] F̄(T_k) − c_D μ .          (2.5)

Differentiating D(T, α) with respect to T_k and setting it equal to zero,

  T_{k+1} − T_k = [F(T_k) − F(T_{k−1})] / f(T_k) − c_T/(c_D − α) .          (2.6)

Algorithm 2 for computing the optimal schedule is [1, p. 116], [13]:

Algorithm 2
1. Choose T_1^(1) = T_1^∗ and α^(1) = C(T^∗)/M(T^∗) = R(T^∗), using Algorithm 1.
2. Compute T_2^(1), T_3^(1), ... recursively from (2.6).
3. If any δ_k > δ_{k−1}, reduce T_1^(1).
4. Compute T^(i) and α^(i) = R(T^(i)) (i = 1, 2, ...) until T_1^(i) < T_2^(i) < ... are determined and the differences α^(i) − α^(i−1) become as small as actually required.

When F(t) is a Weibull distribution, we give one example of computing optimal inspection times to minimize the expected cost rate R(T) in (2.4), using Algorithm 2 (Problem 1). In particular, when the unit is checked at periodic times, i.e., T_k = kT (k = 0, 1, 2, ...), the total expected cost in (2.1) is

  C(T) = (c_T + c_D T) Σ_{k=0}^∞ F̄(kT) − c_D μ .          (2.7)

In addition, when F(t) = 1 − exp(−λt),

  C(T) = (c_T + c_D T)/(1 − e^{−λT}) − c_D/λ .          (2.8)

Optimal T^∗ (0 < T^∗ < ∞) to minimize C(T) satisfies

  e^{λT} − 1 − λT = c_T/(c_D/λ) ,          (2.9)

and the resulting cost is

  C(T^∗) = (c_D/λ)(e^{λT^∗} − 1) .          (2.10)
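Since the left-hand side of (2.9) is strictly increasing in T, a simple bisection solves it; the following sketch (with the assumed illustrative values 1/λ = 100 and c_T/c_D = 2, which reappear in later tables) also checks (2.10) against a direct evaluation of (2.8):

```python
import math

lam, cT_over_cD = 0.01, 2.0      # assumed values: 1/lam = 100, cT/cD = 2

def T_star():
    """Solve (2.9): e^{lam T} - 1 - lam T = lam (cT/cD); the LHS increases in T."""
    lo, hi = 1e-9, 10.0 / lam
    for _ in range(200):
        T = 0.5 * (lo + hi)
        if math.exp(lam * T) - 1.0 - lam * T < lam * cT_over_cD:
            lo = T
        else:
            hi = T
    return 0.5 * (lo + hi)

T = T_star()                                          # about 19.36
cost_210 = (1.0 / lam) * (math.exp(lam * T) - 1.0)    # resulting cost (2.10), cD = 1
cost_28 = (cT_over_cD + T) / (1.0 - math.exp(-lam * T)) - 1.0 / lam   # via (2.8)
```

The two cost evaluations agree, confirming that (2.10) follows from substituting (2.9) into (2.8).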

Next, we consider a modified inspection model with a finite number of checks, because a system such as a missile involves some parts which are replaced when the total operating time has exceeded a prespecified time of quality warranty. The unit is checked at times T_k (k = 1, 2, ..., N − 1) and is replaced at time T_N (N = 1, 2, ...), i.e., it is replaced at time T_N or at failure, whichever occurs first [2, p. 204], [4]. Then, the expected cost when a unit failure is detected and the unit is replaced at time T_k (k = 1, 2, ..., N) is

  Σ_{k=1}^N ∫_{T_{k−1}}^{T_k} [c_T k + c_D (T_k − t) + c_R] dF(t) ,

and when it is replaced without failure at time T_N it is

  (c_T N + c_R) F̄(T_N) ,

where c_R is the replacement cost. Thus, the total expected cost until replacement is

  Σ_{k=0}^{N−1} [c_T + c_D (T_{k+1} − T_k)] F̄(T_k) − c_D ∫_0^{T_N} F̄(t) dt + c_R .

Similarly, the mean time until replacement is

  Σ_{k=1}^N ∫_{T_{k−1}}^{T_k} T_k dF(t) + T_N F̄(T_N) = Σ_{k=0}^{N−1} (T_{k+1} − T_k) F̄(T_k) .

Therefore, the expected cost rate is

  C(T_1, T_2, ..., T_N) = [c_T Σ_{k=0}^{N−1} F̄(T_k) − c_D ∫_0^{T_N} F̄(t) dt + c_R] / [Σ_{k=0}^{N−1} (T_{k+1} − T_k) F̄(T_k)] + c_D .          (2.11)

In particular, when the unit is checked at periodic times kT (k = 1, 2, ...), i.e., T_k = kT, the expected cost rate is, from (2.11),

  C(T; N) = (1/T){c_T − [c_D ∫_0^{NT} F̄(t) dt − c_R] / Σ_{k=0}^{N−1} F̄(kT)} + c_D ,          (2.12)

which decreases strictly with N to

  C(T; ∞) = (1/T){c_T − (c_D μ − c_R) / Σ_{k=0}^∞ F̄(kT)} + c_D .          (2.13)

In addition, when F(t) = 1 − exp(−λt), (2.12) is

  C(T; N) = (1/T){c_T − (1 − e^{−λT})[c_D/λ − c_R/(1 − e^{−λNT})]} + c_D .          (2.14)

We find optimal T^∗ to minimize C(T; N) for a fixed N ≥ 1. Differentiating C(T; N) with respect to T and setting it equal to zero,

  [c_D/λ − c_R/(1 − e^{−λNT})][1 − (1 + λT)e^{−λT}] − c_R λNT e^{−λNT}(1 − e^{−λT})/(1 − e^{−λNT})² = c_T ,          (2.15)

whose left-hand side increases strictly with T from −c_R/N to c_D/λ − c_R for c_D/λ > c_R. Thus, if c_D/λ > c_T + c_R, then there exists a finite and unique T^∗ (0 < T^∗ < ∞) which satisfies (2.15). Furthermore, noting that the left-hand side of (2.15) increases strictly with N, T^∗ decreases with N (Problem 2). When N = 1, (2.15) becomes

  1 − (1 + λT)e^{−λT} = (c_T + c_R)/(c_D/λ) ,          (2.16)

and when N = ∞,

  1 − (1 + λT)e^{−λT} = c_T/(c_D/λ − c_R) .          (2.17)

Because (c_T + c_R)/(c_D/λ) > c_T/(c_D/λ − c_R), T_∞^∗ < T^∗ ≤ T_1^∗, and T^∗ decreases with N from T_1^∗ to T_∞^∗, where T_1^∗ and T_∞^∗ are the respective solutions of (2.16) and (2.17); C(T^∗; N) decreases with N from C(T_1^∗; 1) to C(T_∞^∗; ∞) in (2.13).
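The left-hand side of (2.15) being increasing in T, it too can be solved by bisection. A sketch under assumed parameter values (λ = 0.01, c_T = 2, c_D = 1, c_R = 1, chosen so that c_D/λ > c_T + c_R holds):

```python
import math

lam, cT, cD, cR = 0.01, 2.0, 1.0, 1.0   # assumed values with cD/lam > cT + cR

def lhs(T, N):
    """Left-hand side of (2.15), strictly increasing in T."""
    a = 1.0 - math.exp(-lam * N * T)
    return ((cD / lam - cR / a) * (1.0 - (1.0 + lam * T) * math.exp(-lam * T))
            - cR * lam * N * T * math.exp(-lam * N * T)
              * (1.0 - math.exp(-lam * T)) / a ** 2)

def T_star(N):
    """Solve (2.15) for the optimal periodic time T* with N checks."""
    lo, hi = 1e-6, 2000.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if lhs(mid, N) < cT:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For N = 1 the solution also satisfies (2.16), and T_star(N) decreases with N toward the root of (2.17), as claimed in the text.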

2.1.1 Asymptotic Inspection Times

The computing procedure for obtaining the optimal inspection schedule was specified in [1, 2]. However, it is difficult to carry out the algorithm numerically, because the computations have to be repeated, changing the first checking time T_1, until the schedule is determined to the required accuracy. To avoid such troublesome computations, four asymptotic inspection policies were suggested [2, p. 207]. This section introduces two simple approximate calculations of optimal checking times and compares them numerically.

(1) Periodic Method

It is assumed that a random variable X has a failure distribution F(t) ≡ Pr{X ≤ t}. Then, denoting H(t) ≡ ∫_0^t h(u) du, H(t) represents the expected number of failures in (0, t], and F(t) = 1 − exp[−H(t)]. Furthermore, the random variable Y ≡ H(X), which represents the expected number of failures until the failure time, has the following distribution [14, p. 32]:

  Pr{Y ≤ t} = Pr{H(X) ≤ t} = Pr{X ≤ H^{−1}(t)} = 1 − e^{−t} ,          (2.18)

where H^{−1} is the inverse function of H. Thus, Y has an exponential distribution with mean 1. In other words, we can transform the checking times T_1, T_2, ... for a general distribution into those for an exponential distribution with mean 1. As a simple example, assume that the failure time has a Weibull distribution F(t) = 1 − exp[−(λt)^m] with mean μ, i.e., H(t) = (λt)^m and μ = Γ(1 + 1/m)/λ. Then, the random variable Y = H(X) has the exponential distribution [1 − exp(−t)], and from (2.9), the optimal checking time T^∗ of the unit with [1 − exp(−t)] is easily given by

  e^T − 1 − T = c_T/(c_D μ) .

Thus, setting H(T_k) = (λT_k)^m = kT^∗,

  T_k = (1/λ)(kT^∗)^{1/m}   (k = 1, 2, ...) .          (2.19)

For example, when c_T/c_D = 10, m = 2 and 1/λ = 500, μ = 500 × √π/2 ≈ 443 and T^∗ = 0.2052, so that T_1 = 226, T_2 = 320, T_3 = 392, ..., T_{10} = 716 from (2.19), which will be given in Table 2.1.

(2) Keller's Method

The following inspection intensity n(t) has been defined [2, p. 208], [15]: Letting n(t)dt denote the probability that the unit is checked in the interval (t, t + dt], when it is checked at times T_k (k = 1, 2, ...), we have the relation

  ∫_0^{T_k} n(t) dt = k .          (2.20)

Furthermore, suppose that the mean time from a failure at time t to its detection at time t + a is half of a checking interval, i.e.,

  ∫_t^{t+a} n(u) du = 1/2 ,

Table 2.1 Comparisons of Barlow's, Periodic, and Keller's methods when F(t) = 1 − exp[−(t/500)²] and c_T/c_D = 10

  k    Barlow (Algorithm 1)   Barlow (Algorithm 2)   Periodic   Keller
  1     205.6                  205.6                  226.5      177.8
  2     307.6                  307.6                  320.3      282.3
  3     392.3                  392.3                  392.3      369.9
  4     467.5                  467.5                  453.0      448.1
  5     536.4                  536.5                  506.4      520.0
  6     600.7                  600.8                  554.8      587.2
  7     661.5                  661.6                  599.2      650.8
  8     719.2                  719.4                  640.6      711.4
  9     774.6                  774.8                  679.5      769.5
  10    827.8                  828.2                  716.2      825.5
  11    879.3                  879.7                  751.2      879.6
  12    929.1                  929.7                  784.6      932.2
  13    977.4                  978.7                  816.6      983.3
  14   1024.5                 1025.6                  847.5     1033.1

which can be approximated by

  ∫_t^{t+a} n(u) du ≈ a n(t) = 1/2 ,

so that a = 1/[2n(t)]. By the same argument, when the unit was checked at time T_k, the next checking interval is approximately 1/n(T_k). Therefore, the total expected cost in (2.1) is

  C(n(t)) = ∫_0^∞ [c_T n(t) + c_D h(t)/(2n(t))] F̄(t) dt .

Differentiating C(n(t)) with respect to n(t) and setting it equal to zero,

  n(t) = √(c_D h(t)/(2c_T)) .          (2.21)

Thus, from (2.20), approximate checking times satisfy

  ∫_0^{T_k} √(c_D h(t)/(2c_T)) dt = k   (k = 1, 2, ...) .          (2.22)

Example 2.1 Suppose that the failure time has a Weibull distribution F(t) = 1 − exp[−(λt)²] with μ = (1/λ)√π/2. In this case, T^∗ = 0.2052 from (2.9), and for the Periodic method, from (2.19),

  T_k = (1/λ)(kT^∗)^{1/2} .

For Keller's method, from (2.22),

  T_k = [3k/(2λ)]^{2/3} (c_T/c_D)^{1/3} .

Table 2.1 presents the optimal checking times of Barlow's method and the approximate checking times of the Periodic and Keller's methods when c_T/c_D = 10 and 1/λ = 500, i.e., μ = (1/λ)√π/2 = 443.0. Both approximations are good ones for Barlow's method. In particular, the Periodic method is better than Keller's for small k. In general, it would be unnecessary to compute checking times beyond the mean failure time μ = 443.0. Thus, the Periodic method would be very useful for computing optimal times approximately. This means that we first compute the optimal T^∗ of periodic inspection and, replacing T_1 in Step 1 of Algorithm 1 with T_1 in (2.19), we can compute T_k^∗ easily. Furthermore, it can be suggested that we adopt the Periodic method until just after the mean failure time μ, and Keller's method after that.

2.2 Inspection for Finite Interval

This section rewrites the standard inspection models for an infinite interval into modified models on a finite interval (0, S], and summarizes optimal inspection policies for a fixed S (0 < S < ∞) [3, p. 64]. We consider periodic and sequential inspection policies, and give their asymptotic versions for a finite interval.

An operating unit is checked at successive times 0 = T_0 < T_1 < T_2 < ... < T_N (N = 1, 2, ...), where T_N ≡ S. Then, from (2.1), the total expected cost until failure detection or time S is

  C(N; S) ≡ C(T_1, T_2, ..., T_N) = Σ_{k=0}^{N−1} [c_T + c_D (T_{k+1} − T_k)] F̄(T_k) − c_D ∫_0^{T_N} F̄(t) dt ,          (2.23)

and from (2.2),

  T_{k+1} − T_k = [F(T_k) − F(T_{k−1})] / f(T_k) − c_T/c_D   (k = 1, 2, ..., N − 1) .          (2.24)

Thus, the resulting expected cost is

  C̃(N; S) ≡ C(N; S) + c_D ∫_0^{T_N} F̄(t) dt = Σ_{k=0}^{N−1} [c_T + c_D (T_{k+1} − T_k)] F̄(T_k) .          (2.25)

From the above results, we compute T_k (k = 1, 2, ..., N − 1) which satisfy (2.24), and substituting them into (2.25), we obtain the expected cost C̃(N; S). Next, comparing C̃(N; S) for all N ≥ 1, we can get the optimal number N^∗ and times T_k^∗ (k = 1, 2, ..., N^∗), where T_{N^∗} = S.

In particular, when the unit is checked at periodic times kT (k = 1, 2, ..., N), where N ≡ S/T, the total expected cost C(N; S) until failure detection or time S is, from (2.23),

  C(N; S) = Σ_{k=0}^{N−1} [c_T + c_D (S/N)] F̄(kS/N) − c_D ∫_0^S F̄(t) dt .          (2.26)

Noting that

  C(1; S) = c_T + c_D ∫_0^S F(t) dt ,   C(∞; S) = lim_{N→∞} C(N; S) = ∞ ,

there exists a finite number N^∗ (1 ≤ N^∗ < ∞) to minimize C(N; S).

there exists a finite number N ∗ (1 ≤ N ∗ < ∞) to minimize C(N ; S). In addition, when F(t) = 1 − exp(−λt), (2.26) is 

1 − e−λS cD S cD  1 − e−λS . − C(N ; S) = cT + −λS/N N 1−e λ

(2.27)

Forming the inequality C(N + 1; S) − C(N ; S) ≥ 0, 

e−λS/(N +1) − e−λS/N cD S   . ≥ −λS/(N +1) −λS/N cT 1−e /N − 1 − e /(N + 1)

(2.28)

Using two approximations by a Taylor expansion, λS λS − , N N +1

2

2 1 λS λS 1 ≈ − . 2(N + 1) N 2N N + 1

e−λS/(N +1) − e−λS/N ≈ 1 − e−λS/(N +1) 1 − e−λS/N − N N +1

(2.28) becomes approximately

  Σ_{j=1}^N j = N(N + 1)/2 ≥ (λS/4)(c_D S/c_T) .          (2.29)

Thus, the approximate number Ñ to minimize C(N; S) is easily given by (2.29). Furthermore, setting T ≡ S/N in (2.27),

  C(T; S) = (c_T + c_D T)(1 − e^{−λS})/(1 − e^{−λT}) − (c_D/λ)(1 − e^{−λS}) .          (2.30)

Differentiating C(T; S) with respect to T and setting it equal to zero, we have (2.9). Therefore, when T̃ satisfies (2.9), we have the following optimal partition policy [3, p. 42]:

(i) If T̃ < S and [S/T̃] ≡ N, then calculate C(N; S) and C(N + 1; S) from (2.27), where [x] denotes the greatest integer contained in x. If C(N; S) ≤ C(N + 1; S), then N^∗ = N, and conversely, if C(N; S) > C(N + 1; S), then N^∗ = N + 1.
(ii) If T̃ ≥ S, then N^∗ = 1.

, optimal number N ∗ and time Example 2.2 Table 2.2 presents approximate time T ∗ ∗ ∗ T = S/N , the resulting expected cost C(N ; S)/c D in (2.27), and approximate  from (2.29) for S and cT /c D when 1/λ = 100. This indicates that both number N ∗ T and N ∗ increase with cT /c D and S, and C(N ∗ ; S) increase with both S and cT /c D . Comparing N ∗ for S = 100 and S = 200, the values of N ∗ for S = 200 are almost  ≥ N ∗ , however, they are almost the same two times of those for S = 100, and N ∗ as N and become about n times when S becomes n times. As a result, it would be  in (2.29) for sufficient in actual fields to adopt the approximate checking number N a finite interval (0, S] when the failure time is exponential. 

Table 2.2 Approximate time T̃ and number Ñ, optimal time T^∗ and number N^∗, and resulting cost C(N^∗; S)/c_D when F(t) = 1 − exp(−t/100)

  S     c_T/c_D   T̃        Ñ     T^∗      N^∗   C(N^∗; S)/c_D
  100    2        19.355    5    20.000    5     13.506
  100    5        30.040    3    33.333    3     22.269
  100   10        41.622    2    50.000    2     33.180
  100   25        63.271    1    50.000    2     57.287
  200    2        19.355   10    20.000   10     18.475
  200    5        30.040    7    28.571    7     30.336
  200   10        41.622    5    40.000    5     44.671
  200   25        63.271    3    66.667    3     76.427

2.2.1 Asymptotic Inspection Times

We introduce two simple approximate calculations of optimal checking times for a finite interval S.

(1) Periodic Method

When F(t) = 1 − exp(−λt), we can get the optimal N^∗ and T^∗ = S/N^∗ from (2.28). Thus, using (1) of Sect. 2.1.1 when F(t) = 1 − exp[−(λt)^m], approximate checking times are

  T_k = (1/λ)(kS/N^∗)^{1/m} .

However, T_{N^∗} does not become S. To correct this, we newly define (Problem 3)

  T̃_k = (k/N^∗)^{1/m} S   (k = 1, 2, ..., N^∗) .          (2.31)

For example, when S = 100, c_T/c_D = 2 and 1/λ = 100, N^∗ = 5. Thus, when m = 2, approximate checking times are T̃_1 = 44.72, T̃_2 = 63.25, T̃_3 = 77.46, T̃_4 = 89.44, and T̃_5 = 100, which will be given in Table 2.4.

(2) Keller's Method

Using the inspection intensity n(t) defined in (2) of Sect. 2.1.1, the approximate total expected cost for a finite interval S is [2, p. 209]

  C(n(t); S) = ∫_0^S [c_T ∫_0^t n(x) dx + c_D/(2n(t))] dF(t) + c_T F̄(S) ∫_0^S n(t) dt .

Differentiating C(n(t); S) with respect to n(t) and setting it equal to zero,

  n(t) = √(c_D h(t)/(2c_T)) ,

which agrees with (2.21).

We compute approximate checking times T̃_k (k = 1, 2, ..., N − 1) and a checking number Ñ from (2.22). First, we set

  ∫_0^S √(c_D h(t)/(2c_T)) dt = X ,

and [X] ≡ N, where [x] denotes the greatest integer contained in x. Then, we obtain A_N (0 < A_N < 1) such that

  A_N ∫_0^S √(c_D h(t)/(2c_T)) dt = N ,

and define a modified intensity as

  ñ(t) ≡ A_N √(c_D h(t)/(2c_T)) .

Using (2.20), we compute checking times T̃_k which satisfy

  ∫_0^{T̃_k} ñ(t) dt = k   (k = 1, 2, ..., N) ,          (2.32)

where T̃_N = S. Next, we replace N by N + 1 and do similar computations. At last, we compare the expected costs C̃(N; S) (N = 1, 2, ...) in (2.25) and choose the smallest C̃(N; S) and its checking times T̃_k (k = 1, 2, ..., N) as an asymptotic policy.

Example 2.3 Suppose that F(t) = 1 − exp[−(λt)²] and μ = S, i.e.,

  ∫_0^∞ e^{−(λt)²} dt = √π/(2λ) = S ,

and when S = 100, 1/λ = 200/√π ≈ 112.838. From (2.24), computing T_k and comparing C̃(N; S) in (2.25) for N = 1, 2, ..., 8 when c_T/c_D = 2, we present Table 2.3. In this case, the expected cost is minimum at N^∗ = 4.

In the Periodic method, when 1/λ = 112.838 and S = 100, i.e., λS = √π/2, from (2.28), N^∗ = 4 or 5. Thus, from (2.25),

  C̃(N; S)/c_D = Σ_{k=0}^{N−1} (2 + T_{k+1} − T_k) e^{−(T_k/112.838)²} .

Similarly, in Keller's method, [X] = N = 4, A_N = 4/X ≈ 0.957 and ñ(t) = 6√t/10³. Thus, from (2.32), the checking times satisfy

  ∫_0^{T̃_k} (6√t/1000) dt = T̃_k^{3/2}/250 = k   (k = 1, 2, 3) .

Table 2.3 Checking times T_k and expected cost C̃(N; S)/c_D when S = 100 and c_T/c_D = 2

  N               1        2        3        4        5        6        7        8
  T_1           100.0     64.1     50.9     44.1     40.3     38.1     36.8     36.3
  T_2                    100.0     77.1     66.0     60.0     56.2     54.3     53.3
  T_3                             100.0     84.0     75.4     70.5     67.8     66.6
  T_4                                      100.0     88.6     82.3     78.9     77.3
  T_5                                               100.0     91.1     87.9     85.9
  T_6                                                        100.0     94.9     92.5
  T_7                                                                 100.0     97.2
  T_8                                                                          100.0
  C̃(N; S)/c_D  102.00    93.55    91.52    91.16    91.47    92.11    92.91    93.79

Table 2.4 Approximate T̃_k and resulting cost C̃(N; S)/c_D for N = 4, 5 when S = 100 and c_T/c_D = 2

                Periodic method        Keller's method
                N = 4     N = 5       N = 4     N = 5
  T̃_1           50.00     44.72       39.69     34.20
  T̃_2           70.71     63.25       63.00     54.29
  T̃_3           86.60     77.46       82.55     71.14
  T̃_4          100.00     89.44      100.00     86.18
  T̃_5                    100.00                100.00
  C̃(N; S)/c_D   91.29     91.53       91.22     91.58
When N = 5, A_N = 5/X ≈ 1.197 and ñ(t) = 3√t/(4 × 10²), and

  ∫_0^{T̃_k} [3√t/400] dt = T̃_k^{3/2}/200 = k   (k = 1, 2, 3, 4) .

Table 2.4 presents the approximate T̃_k for N = 4, 5 in the Periodic and Keller's methods. Comparing Tables 2.3 and 2.4, N^∗ = 4, and the optimal T_k^∗ in Table 2.3 lie between the T̃_k of the two methods.
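For the square-root intensity of Example 2.3 all the integrals have closed forms, so the finite-interval Keller procedure collapses to a few lines; a sketch (assuming the same setting: 1/λ = 200/√π, S = 100, c_T/c_D = 2):

```python
import math

# Setting of Example 2.3: F(t) = 1 - exp[-(lam t)^2], mu = S = 100, cT/cD = 2.
S, r = 100.0, 2.0                      # r = cT/cD
lam = math.sqrt(math.pi) / 200.0

def n(t):                              # intensity (2.21) with h(t) = 2 lam^2 t
    return lam * math.sqrt(t / r)

X = (2.0 / 3.0) * n(S) * S             # closed form of the integral of n over (0, S]
N = int(X)                             # [X] = 4
A = N / X                              # modified intensity n~(t) = A n(t)

# (2.32): the integral of n~ over (0, T] equals c T^{3/2} with c = (2/3) A lam / sqrt(r),
# so T~_k = (k/c)^{2/3}; by construction T~_N = S.
c = (2.0 / 3.0) * A * lam / math.sqrt(r)
Tk = [(k / c) ** (2.0 / 3.0) for k in range(1, N + 1)]
```

The computed times reproduce the Keller N = 4 column of Table 2.4.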

2.3 Inspection for Random Interval

It is assumed that S is a random variable with a general distribution L(t) ≡ Pr{S ≤ t} with finite mean l ≡ ∫_0^∞ L̄(t) dt. Suppose that the unit is checked at periodic times kT (k = 1, 2, ...). Then, the total expected cost until failure detection or time S is classified into the following three cases [16, p. 198]:

(a) When the unit fails at time t (t < S) and the next check is done before time S, the expected cost is

  Σ_{k=0}^∞ L̄[(k + 1)T] ∫_{kT}^{(k+1)T} {kc_T + c_D[(k + 1)T − t]} dF(t) .

(b) When the unit fails at time t (t < S) and the next check is done after time S, the expected cost is

  Σ_{k=0}^∞ ∫_{kT}^{(k+1)T} {∫_t^{(k+1)T} [kc_T + c_D(u − t)] dL(u)} dF(t) .

(c) When the unit does not fail until time S, the expected cost is

  c_T Σ_{k=0}^∞ k ∫_{kT}^{(k+1)T} F̄(t) dL(t) .

Summing up (a)–(c), the total expected cost until failure detection or time S is

  C(T; L) = c_T Σ_{k=0}^∞ k [∫_{kT}^{(k+1)T} L̄(t) dF(t) + ∫_{kT}^{(k+1)T} F̄(t) dL(t)]
            + c_D Σ_{k=0}^∞ ∫_{kT}^{(k+1)T} [∫_t^{(k+1)T} L̄(u) du] dF(t)
          = Σ_{k=0}^∞ F̄(kT) [c_T L̄(kT) + c_D ∫_{kT}^{(k+1)T} L̄(t) dt] − c_T − c_D ∫_0^∞ L̄(t)F̄(t) dt .          (2.33)

Note that C(0; L) = ∞ and

  C(∞; L) ≡ lim_{T→∞} C(T; L) = c_D ∫_0^∞ L̄(t)F(t) dt .

Thus, there exists a positive T^∗ (0 < T^∗ ≤ ∞) to minimize C(T; L). In particular, when F(t) = 1 − exp(−λt) and L(t) = 1 − exp(−t/l),

  C(T; l) = [c_T + c_D l(1 − e^{−T/l})] / [1 − e^{−(λ+1/l)T}] − c_T − c_D/(λ + 1/l) .          (2.34)

We find optimal T^∗ to minimize C(T; l) in (2.34). Differentiating C(T; l) with respect to T and setting it equal to zero,

  [e^{λT} − e^{−T/l}]/(λ + 1/l) − l(1 − e^{−T/l}) = c_T/c_D ,          (2.35)

whose left-hand side increases strictly with T from 0 to ∞. Thus, there exists a finite and unique T^∗ (0 < T^∗ < ∞) which satisfies (2.35). Because the left-hand side of (2.35) increases with l from 0 to that of (2.9), T^∗ decreases with l to the T^∗ given in (2.9).

Example 2.4 Table 2.5 presents optimal T^∗ for l and c_T/c_D when F(t) = 1 − exp(−t/100) and L(t) = 1 − exp(−t/l). This indicates that T^∗ increases with c_T/c_D and decreases slowly with l to the T^∗ for l = ∞. This means that even if l is small, we need not check the unit much earlier. It is of interest that if c_T becomes four times larger, then T^∗ becomes almost two times larger (Problem 4).

Table 2.5 Optimal T^∗ when F(t) = 1 − exp(−t/100) and L(t) = 1 − exp(−t/l)

  c_T/c_D   l = 50    l = 100   l = 200   l = 500   l = 1000   l = ∞
  0.5       10.161     9.996     9.915     9.868     9.852      9.836
  1.0       14.458    14.130    13.972    13.878    13.847     13.817
  2.0       20.615    19.967    19.656    19.475    19.415     19.355
  3.0       25.397    24.434    23.977    23.709    23.622     23.534
  4.0       29.463    28.191    27.590    27.240    27.125     27.011
  5.0       33.069    31.492    30.751    30.321    30.181     30.040

Next, the unit is checked at successive times T_k (k = 1, 2, ...), where T_0 ≡ 0. The total expected cost is, by replacing kT with T_k in (2.33),

  C(T; L) = Σ_{k=0}^∞ F̄(T_k) [c_T L̄(T_k) + c_D ∫_{T_k}^{T_{k+1}} L̄(t) dt] − c_T − c_D ∫_0^∞ L̄(t)F̄(t) dt .          (2.36)

When L(t) = 1 − exp(−t/l),

  C(T; L) = Σ_{k=0}^∞ F̄(T_k) [(c_T + c_D l)e^{−T_k/l} − c_D l e^{−T_{k+1}/l}] − c_T − c_D ∫_0^∞ e^{−t/l} F̄(t) dt .          (2.37)

Differentiating C(T; L) with respect to T_k and setting it equal to zero,

  [F(T_k) − F(T_{k−1})]/f(T_k) − l(1 − e^{−(T_{k+1}−T_k)/l}) = (c_T/c_D)[1 + F̄(T_k)/(l f(T_k))] ,          (2.38)

which agrees with (2.2) when l → ∞. Using Algorithm 1 in Sect. 2.1, we can compute the optimal T_k^∗ which satisfy (2.38).

Example 2.5 Table 2.6 presents optimal T_k^∗ (k = 1, 2, ..., 10) for l when F(t) = 1 − exp[−(t/100)²] and c_T/c_D = 5. For example, when l = 200, T_2^∗ is almost the same as the mean failure time μ = 88.6, and T_9^∗ is almost the same as the mean interval l. This indicates that T_k^∗ decrease slowly with l to the optimal checking times given by (2.2), and the differences between T_k^∗ and T_{k+1}^∗ also decrease with k and l.

Table 2.6 Optimal T_k^∗ when F(t) = 1 − exp[−(t/100)²], L(t) = 1 − exp(−t/l) and c_T/c_D = 5

  k     l = 100   l = 200   l = 500   l = 1000   l = ∞
  1      56.90     55.96     55.39     55.20      55.01
  2      84.60     83.37     82.63     82.38      82.12
  3     107.43    106.01    105.14    104.85     104.55
  4     127.62    126.04    125.07    124.74     124.74
  5     146.05    144.34    143.28    142.92     142.55
  6     163.19    161.34    160.20    159.80     159.40
  7     179.30    177.32    176.08    175.64     175.20
  8     194.55    192.41    191.05    190.58     190.09
  9     209.01    206.68    205.18    204.65     204.10
  10    222.65    220.07    218.38    217.78     217.16
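The periodic case (2.35) again has a left-hand side increasing in T, so it is solved directly by bisection; a sketch reproducing some entries of Table 2.5 (1/λ = 100 and c_T/c_D = 2 assumed):

```python
import math

lam, cT_over_cD = 0.01, 2.0      # the 1/lam = 100, cT/cD = 2 row of Table 2.5

def T_star(l):
    """Solve (2.35): (e^{lam T} - e^{-T/l})/(lam + 1/l) - l(1 - e^{-T/l}) = cT/cD."""
    lo, hi = 1e-9, 10.0 / lam
    for _ in range(200):
        T = 0.5 * (lo + hi)
        lhs = ((math.exp(lam * T) - math.exp(-T / l)) / (lam + 1.0 / l)
               - l * (1.0 - math.exp(-T / l)))
        if lhs < cT_over_cD:
            lo = T
        else:
            hi = T
    return 0.5 * (lo + hi)
```

T_star(l) decreases with l, and for very large l it approaches the root 19.355 of (2.9), as the text observes.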

2.4 Modified Inspection Models

We give the following three modified inspection models: imperfect inspection [2, p. 183], inspection of intermittent faults [2, p. 220], and inspection for a scale [3, p. 97].

2.4.1 Imperfect Inspection

Suppose that the unit is checked at successive times T_k (k = 1, 2, ...), and a failure is detected at the next checking time with probability q (0 < q ≤ 1), while the unit remains in a failed state with probability p ≡ 1 − q. Then, the total expected cost until failure detection is

  C(T; P) = Σ_{k=0}^∞ ∫_{T_k}^{T_{k+1}} {Σ_{j=0}^∞ p^j q [c_T (k + j + 1) + c_D (T_{k+j+1} − t)]} dF(t)
          = q Σ_{k=0}^∞ p^k (c_T k + c_D T_k) − c_D μ + q Σ_{k=0}^∞ [c_T + c_D (T_{k+1} − T_k)] Σ_{j=0}^k p^{k−j} F̄(T_j) .          (2.39)

Differentiating C(T; P) with respect to T_k and setting it equal to zero,

  T_{k+1} − T_k = Σ_{j=1}^k p^{k−j} [F(T_j) − F(T_{j−1})] / f(T_k) − c_T/c_D ,          (2.40)

which agrees with (2.2) when p = 0. Therefore, using Algorithm 1, we can compute optimal T_k^∗ (k = 1, 2, ...).

which agrees with (2.2) when p = 0. Therefore, using Algorithm 1, we can compute optimal Tk∗ (k = 1, 2, . . .). In particular, when the unit is checked at periodic times and the failure time is exponential, i.e., Tk = kT and F(t) = 1 − exp(−λt), the total expected cost in (2.39) is 

cD 1 p − + . (2.41) C(T ; P) = (cT + c D T ) q 1 − e−λT λ Optimal T p∗ (0 < T p∗ < ∞) to minimize C(T ; P) satisfies eλT − 1 − λT +

 cT p  λT e − 1 1 − e−λT = , q c D /λ

(2.42)

which agrees with (2.9) when p = 0. Noting that the left-hand side of (2.42) increases strictly with p to ∞, T p∗ decreases from T ∗ given in (2.9) to 0. Example 2.6 Table 2.7 presents optimal T p∗ for p and cT /c D when 1/λ = 100. This indicates that T p∗ decrease with p and increase with cT /c D . When p = 0, T p∗ = T ∗ given in (2.9). This means that we should shorten the checking intervals as the failure probability p of inspections become large. Furthermore, Table 2.8 presents optimal Tk∗ for p when F(t) = 1 − exp[−(t/500)2 ] and cT /c D = 10. When p = 0, Tk∗ agree with Table 2.1. This also indicates that Tk∗ decrease with p. 

Table 2.7 Optimal T_p^∗ when F(t) = 1 − exp(−t/100)

  p       c_T/c_D = 2   c_T/c_D = 5   c_T/c_D = 10
  0.00    19.35         30.04         41.62
  0.01    19.18         29.78         41.28
  0.02    19.01         29.52         40.94
  0.05    18.49         28.76         39.93
  0.10    17.65         27.50         38.27
  0.20    16.03         25.07         35.00
  0.50    11.47         18.05         25.40

Table 2.8 Optimal T_k^∗ when F(t) = 1 − exp[−(t/500)²] and c_T/c_D = 10

  k     p = 0.00   p = 0.01   p = 0.05   p = 0.10
  1      205.6      200.7      181.6      159.1
  2      307.6      299.6      268.7      232.7
  3      392.3      381.8      341.5      294.9
  4      467.5      454.9      406.7      350.8
  5      536.4      522.0      466.6      402.6
  6      600.7      584.7      522.8      451.3
  7      661.5      643.9      576.0      497.6
  8      719.2      700.2      629.9      542.0
  9      774.6      754.2      675.7      584.8
  10     827.8      806.2      722.8      626.1
  11     879.3      856.5      768.3      666.3
  12     929.1      905.2      812.6      705.3
  13     977.4      952.6      855.7      743.4
  14    1024.5      998.7      897.7      780.6

2.4.2 Intermittent Fault

Digital systems usually have two types of faults from the viewpoint of operational failures [2, p. 220]: permanent faults due to hardware failures or software errors,

and intermittent faults due to transient failures. Intermittent faults are automatically detected by the error-correcting code and are corrected by the error control or by a restart. However, some faults occur repeatedly and consequently become permanent faults. Some checks should be applied to detect and isolate faults, but it would waste time and money to check too frequently. This section applies the standard inspection policy to intermittent faults, where checks are planned at periodic times kT (k = 1, 2, ...) to detect such faults in Fig. 2.1. We obtain the mean time to detect a fault and the expected number of checks. Furthermore, we discuss optimal times to minimize the expected cost until fault detection and to maximize the probability of detecting the first fault.


Fig. 2.1 Process of periodic inspection with intermittent faults

(1) Perfect Check

Suppose that faults occur intermittently, i.e., the unit repeats an operating state (State 0) and a fault state (State 1) alternately. The times of the respective operating and fault states are independent and have exponential distributions [1 − exp(−λt)] and [1 − exp(−μt)] with μ > λ. Periodic checks to detect faults are done at times kT (k = 1, 2, ...). It is assumed that faults are investigated only through checks, which are perfect, i.e., faults are always detected by checks when they have occurred, and are then isolated. The time required for checks is negligible. The transition probabilities P_ij(t) (i, j = 0, 1) from State i to State j are [2, p. 43]

  P_00(t) = μ/(λ + μ) + [λ/(λ + μ)] e^{−(λ+μ)t} ,   P_01(t) = [λ/(λ + μ)][1 − e^{−(λ+μ)t}] ,
  P_10(t) = [μ/(λ + μ)][1 − e^{−(λ+μ)t}] ,          P_11(t) = λ/(λ + μ) + [μ/(λ + μ)] e^{−(λ+μ)t} .

Using the above equations, we have the following reliability quantities: The expected number M(T) of checks until fault detection is

  M(T) = Σ_{j=0}^∞ (j + 1)[P_00(T)]^j P_01(T) = 1/P_01(T) ,

and the mean time l(T) until fault detection is

  l(T) = Σ_{j=0}^∞ (j + 1)T [P_00(T)]^j P_01(T) = T/P_01(T) .

Thus, the expected cost until fault detection is

  C(T) = c_T M(T) + c_O l(T) = (c_T + c_O T)/P_01(T) ,          (2.43)

where c_T is the cost of one check and c_O is the operational cost rate of the unit.

We find optimal T^∗ to minimize C(T). Differentiating C(T) with respect to T and setting it equal to zero,

  [1/(λ + μ)][e^{(λ+μ)T} − 1] − T = c_T/c_O ,          (2.44)

which agrees with (2.9) when μ = 0 and c_O = c_D. Thus, there exists a finite and unique T^∗ (0 < T^∗ < ∞) which satisfies (2.44).

(2) Imperfect Check

Suppose that checks are imperfect, i.e., a fault is detected with probability q (0 < q ≤ 1) and is not detected with probability p ≡ 1 − q. Then, the expected numbers M_j(T) (j = 0, 1) of checks until fault detection from State j at time 0 are given by the following renewal equations:

  M_0(T) = P_00(T)[1 + M_0(T)] + p P_01(T)[1 + M_1(T)] ,
  M_1(T) = P_10(T)[1 + M_0(T)] + p P_11(T)[1 + M_1(T)] .

Solving the above equations,

  M_0(T) = {(1 − p)P_00(T) + p[P_01(T) + P_10(T)]} / [(1 − p)P_01(T)] .          (2.45)

Similarly, the mean times l_j(T) (j = 0, 1) until fault detection from State j at time 0 are

  l_0(T) = P_00(T)[T + l_0(T)] + P_01(T)[T + p l_1(T)] ,
  l_1(T) = P_10(T)[T + l_0(T)] + P_11(T)[T + p l_1(T)] .

Solving the above equations for l_0(T),

  l_0(T) = T{1 − p + p[P_01(T) + P_10(T)]} / [(1 − p)P_01(T)] ,          (2.46)

where note that l_0(T) = T[1 + M_0(T)]. Therefore, the total expected cost until fault detection is

  C(T; p) = {(1 − p)[c_T P_00(T) + c_O T] + p(c_T + c_O T)[P_01(T) + P_10(T)]} / [(1 − p)P_01(T)] .          (2.47)

We find optimal T_p^∗ to minimize C(T; p). Differentiating C(T; p) with respect to T and setting it equal to zero,

  [1/(λ + μ)]{e^{(λ+μ)T} − 1 − p[1 − e^{−(λ+μ)T}]} − (1 − p)T = (1 − p)c_T/c_O ,          (2.48)

which agrees with (2.44) when p = 0. Noting that the left-hand side of (2.48) increases strictly with T from 0 to ∞, there exists a finite and unique T_p^∗ (0 < T_p^∗ < ∞) which satisfies (2.48), and T_p^∗ decreases with p from the T^∗ given in (2.44) to 0.

Example 2.7 Table 2.9 presents optimal T_p^∗ for p, μ/λ and c_T/c_O when 1/μ = 100. This indicates that T_p^∗ increase with μ/λ and c_T/c_O, and decrease with p from the T^∗ given in (2.44).

Table 2.9 Optimal T_p^∗ when 1/μ = 100

            p = 0                                   p = 0.1
  μ/λ       c_T/c_O = 1   5     10    50    100     c_T/c_O = 1   5     10    50    100
  1.2        79           139   170   250   286      75           133   164   244   280
  1.5        84           148   182   269   309      80           142   176   263   303
  2.0        90           159   196   292   337      85           153   189   286   330
  5.0       103           185   230   348   403      97           177   222   339   394
  10.0      108           196   244   372   432     102           188   235   363   422
  50.0      113           206   258   395   459     106           197   248   385   449

2.4.3 Inspection for Scale

Suppose that there is a manufacturing process in which we weigh products with a scale in the final stage to check their exact weights [3, p. 97], [17]. However, the scale occasionally becomes uncalibrated and produces inaccurate weights for individual products. To prevent such incorrect weights, the scale itself is checked every day. If the scale is detected to be uncalibrated at the inspection, then it is adjusted, and we reweigh some volume of products. Two modified models, in which inspection activities involve adjustment operations and are executed only for detecting scale inaccuracy, were proposed [18, 19].

When we have many products to weigh every day, we can regard the volume of products to be weighed as continuous. Let t (t > 0) denote the total volume of products to weigh every day. For example, when we weigh chemical products with a scale, we may denote by t the total expected chemical products per day. When the scale is checked only once, in the evening, and is detected to be uncalibrated, some volume T is reweighed by the adjusted scale.

Let a random variable X (0 ≤ X < ∞) be the time at which the scale becomes uncalibrated, measured by the volume of weighed products. Thus, if X > t, then the


Fig. 2.2 Four cases of a reweighing process for a scale

scale is correct at the inspection, and all products are shipped out simultaneously. Conversely, if X ≤ t, then the scale becomes uncalibrated. In this case, the scale is adjusted, and a volume T (0 ≤ T ≤ t) of products is reweighed by this scale. In addition, let Y denote the time when the scale becomes inaccurate again, measured by the volume of reweighed products. If Y < T , then the scale becomes inaccurate again, and some defective products are shipped out. Let U denote the volume of defective products to be shipped out. Then, we consider the following four cases in Fig. 2.2 [3, p. 98]: (i) (ii) (iii) (iv)

U U U U

= 0 for X > t, or t − T < X ≤ t and Y = T − Y for t − T < X ≤ t and Y ≤ T = t − T − X for X ≤ t − T and Y > T = t − X − Y for X ≤ t − T and Y ≤ T

> T in Case (1). in Case (2). in Case (3). in Case (4).

It is assumed that X and Y are independent and have an identical distribution Pr{X ≤ x} = Pr{Y ≤ x} ≡ F(x). Then, we have

E{U} = F(t) ∫_0^T (T − x) dF(x) + ∫_0^{t−T} (t − T − x) dF(x)
     = F(t) ∫_0^T F(x) dx + ∫_0^{t−T} F(x) dx .   (2.49)

Next, let cD denote the cost incurred for shipping out a unit volume of defective products, and cR denote the cost for reweighing a unit volume of products. Then, the total expected cost during (0, t], including the cost of reweighing when the scale is uncalibrated, is

C(T|t) = cD [ F(t) ∫_0^T F(x) dx + ∫_0^{t−T} F(x) dx ] + cR T F(t) .   (2.50)

Note that

C(0|t) = cD ∫_0^t F(x) dx ,   C(t|t) = F(t) [ cD ∫_0^t F(x) dx + cR t ] .

Differentiating E{U} in (2.49) with respect to T and setting it equal to zero,

F(t)F(T) − F(t − T) = 0 ,   (2.51)

whose left-hand side increases strictly with T from −F(t) to F(t)². Thus, there exists a finite and unique T1∗ to minimize E{U}. Next, we find optimal T2∗ to minimize C(T|t) in (2.50). Differentiating C(T|t) with respect to T and setting it equal to zero,

[ F(t) − F(t − T) ] / F(t) + F(T) = (cD − cR)/cD .   (2.52)

It can be easily shown that T2∗ = T1∗ when cR/cD = 0. The left-hand side of (2.52) increases strictly with T from 0 to 1 + F(t). Thus, if cD > cR, then there exists a finite and unique T2∗ (0 < T2∗ < t) which satisfies (2.52). Conversely, if cD ≤ cR, then T2∗ = 0, i.e., we should not reweigh any products.

Example 2.8 Suppose that the failure time has a gamma distribution of order k, i.e., F(x) = Σ_{j=k}^∞ [(λx)^j / j!] exp(−λx). In this case, T1∗ = 0.865 and 0.732 when λ = 0.182 for k = 1 and λ = 0.731 for k = 2, i.e., F(t) ≈ 1/6 in both cases when t = 1. Table 2.10 presents optimal T2∗ and the expected cost C(T2∗|1) for k = 1, 2 and cD/cR when t = 1 and λ = 0.182 for k = 1 and λ = 0.731 for k = 2. This indicates that T2∗ increase from 0 to 0.865 for k = 1 and from 0 to 0.732 for k = 2 as cD/cR increase from 1 to ∞. Optimal T2∗ for k = 2 are less than those for k = 1, because the mean time to failure 1/0.182 = 5.495 for k = 1 is larger than 2/0.731 = 2.736 for k = 2.
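Both optimality conditions are single-variable root-finding problems on [0, t]. A minimal sketch for the exponential case k = 1 of Example 2.8 (λ = 0.182, t = 1), assuming the forms of (2.51) and (2.52) as given above:

```python
import math

def F(x, lam=0.182):
    # Failure distribution for k = 1 (exponential special case of the gamma)
    return 1.0 - math.exp(-lam * x)

def bisect(g, lo, hi):
    # g increases from g(lo) < 0 to g(hi) > 0; returns the root
    for _ in range(100):
        mid = (lo + hi) / 2.0
        if g(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

t = 1.0
# T1*: root of (2.51), F(t)F(T) - F(t - T) = 0
T1 = bisect(lambda T: F(t) * F(T) - F(t - T), 0.0, t)
# T2*: root of (2.52) for cD/cR = 10
target = (10.0 - 1.0) / 10.0          # (cD - cR)/cD
T2 = bisect(lambda T: (F(t) - F(t - T)) / F(t) + F(T) - target, 0.0, t)
print(round(T1, 3), round(T2, 3))     # close to 0.865 and 0.783 in Table 2.10
```

The computed values agree with the k = 1 column of Table 2.10.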

Table 2.10 Optimal T2∗ and resulting cost C(T2∗|1)/cR when F(x) = Σ_{j=k}^∞ [(λx)^j/j!] exp(−λx)

cD/cR    k = 1: T2∗    C(T2∗|1)/cR    k = 2: T2∗    C(T2∗|1)/cR
1        0.000         0.086          0.000         0.063
2        0.445         0.134          0.329         0.098
3        0.587         0.158          0.447         0.115
4        0.658         0.176          0.510         0.127
5        0.700         0.192          0.550         0.136
10       0.783         0.261          0.635         0.174
50       0.849         0.763          0.711         0.422
100      0.857         1.384          0.722         0.725
∞        0.865         ∞              0.732         ∞

2.5 Problems

1. Compute optimal Tk∗ to minimize the expected cost rate R(T) in (2.4) when F(t) is a Weibull distribution, using Algorithm 2 [13].
2. Prove that the left-hand side of (2.15) increases strictly with T from −cR/N to cD/λ − cR, and increases strictly with N from [3, p. 205]

   (cD/λ) [ 1 − (1 + λT)e^{−λT} ] − cR   to   (cD/λ − cR) [ 1 − (1 + λT)e^{−λT} ] .

3. Consider how (2.31) is defined.
4. Explain why T∗ approximately doubles when cT becomes 4 times larger.

References

1. Barlow RE, Proschan F (1965) Mathematical theory of reliability. Wiley, New York
2. Nakagawa T (2005) Maintenance theory of reliability. Springer, London
3. Nakagawa T (2008) Advanced reliability models and maintenance policies. Springer, London
4. Vaurio JK (1999) Availability and cost functions for periodically inspected preventively maintained units. Reliab Eng Syst Saf 63:133–140
5. Nakagawa T, Mizutani S (2009) A summary of maintenance policies for a finite interval. Reliab Eng Syst Saf 94:89–96
6. Taghipour S, Banjevic D, Jardine AKS (2010) Periodic inspection optimization model for a complex repairable system. Reliab Eng Syst Saf 95:944–952
7. Taghipour S, Banjevic D (2011) Periodic inspection optimization models for a repairable system subject to hidden failures. IEEE Trans Reliab 60:275–285
8. Taghipour S, Banjevic D (2012) Optimal inspection of a complex system subject to periodic and opportunistic inspections and preventive replacements. Eur J Oper Res 220:649–660
9. Berrade MD, Cavalcante AV, Scarf PA (2012) Maintenance scheduling of a protection system subject to imperfect inspection and replacement. Eur J Oper Res 218:716–725
10. Zhao X, Al-Khalifa KN, Nakagawa T (2015) Approximate methods for optimal replacement, maintenance, and inspection policies. Reliab Eng Syst Saf 144:68–73
11. Brender DM (1963) A surveillance model for recurrent event. IBM Watson Research Center Report
12. Osaki T, Dohi T, Kaio N (2009) Numerical computation algorithm for checkpoint placement. Perform Eval 66:311–326
13. Mizutani S, Zhao X, Nakagawa T (2022) Optimal inspection policies to minimize expected cost rates. Int J Reliab Qual Saf Eng 29:2250001 (15 pages)
14. Nakagawa T (2011) Stochastic processes with applications to reliability theory. Springer, London
15. Keller JB (1974) Optimum checking schedules for systems subject to random failure. Manage Sci 21:256–260
16. Nakagawa T (2014) Random maintenance policies. Springer, London
17. Sandoh H, Nakagawa T (2003) How much should we reweigh? J Oper Res Soc 54:318–321
18. Sandoh H, Igaki N (2001) Inspection policies for a scale. Qual Maint Eng 7:220–231
19. Sandoh H, Igaki N (2003) Optimal inspection policies for a scale. Comput Math Appl 46:1119–1127

Chapter 3

Random Inspection Models

Some units in offices and industries successively execute jobs and computer processes. It would be impossible or impractical to maintain such units in the strictly periodic fashion shown in Chap. 2. In this chapter, we consider an operating unit which executes jobs with random working times Yj (j = 1, 2, ...), and let Sj ≡ Σ_{i=1}^j Yi with S0 ≡ 0. It is assumed that the random variables Yj are independent and have an identical distribution G(t) ≡ Pr{Yj ≤ t} with finite mean 1/θ. Then, the probability that the unit works exactly j times in [0, t] is G^{(j)}(t) − G^{(j+1)}(t), where G^{(j)}(t) (j = 1, 2, ...) denotes the j-fold Stieltjes convolution of G(t) with itself, G^{(0)}(t) ≡ 1 for t ≥ 0, and M(t) ≡ Σ_{j=1}^∞ G^{(j)}(t) represents the expected number of random works in [0, t]. Suppose that the unit fails according to a general distribution F(t) with density function f(t) ≡ dF(t)/dt and finite mean μ ≡ ∫_0^∞ F̄(t) dt, irrespective of the working times Yj, where Φ̄(t) ≡ 1 − Φ(t) for any function Φ(t).

We apply the inspection policies introduced in Chap. 2 to the unit with random working times Yj [1, p. 253], [2–4]: It is assumed in Sect. 3.1 that the unit is checked at successive working times Sj and also at periodic times kT (k = 1, 2, ...). The total expected costs until failure detection are obtained, and optimal policies to minimize them are derived for periodic and random inspections when the failure time is exponential, i.e., F(t) = 1 − exp(−λt). It is shown that periodic inspection is better than random inspection when the costs of periodic and random checks are the same. However, if the random inspection cost is half of the periodic one, both expected costs are almost the same. Furthermore, when the unit is checked at successive times Tk (k = 1, 2, ...), optimal checking times are computed numerically.

This chapter is written based on [5, p. 87]: It is assumed in Sect. 3.2 that the unit is checked at every completion of the Nth (N = 1, 2, ...) working time. Optimal number N∗ to minimize the total expected cost is derived. Furthermore, when both failure and working times are exponential, we propose three modified inspection policies in which the unit is checked at a planned time T or at a working time Y, whichever occurs first or last, or at the first completion of working times over time T, which are called inspection first, inspection last and inspection overtime in Sect. 3.3. We obtain the total expected cost of each policy, derive optimal policies to minimize them, and compare them analytically and numerically. It is shown that either policy can be better than the other according to the ratio of checking costs to the downtime cost from failure to its detection. Finally, in Sect. 3.4, we propose an extended inspection policy in which the unit is checked at the first completion of working times over time T. The total expected cost of inspection overtime is obtained and compared with those of periodic inspection and of inspection first and last.

© Springer Nature Switzerland AG 2023
K. Ito and T. Nakagawa, Optimal Inspection Models with Their Applications, Springer Series in Reliability Engineering, https://doi.org/10.1007/978-3-031-22021-0_3

3.1 Periodic and Random Inspections

Suppose that the unit is checked at successive working times Sj (j = 1, 2, ...) and at periodic times kT (k = 1, 2, ...) for a specified T (0 < T ≤ ∞) in Fig. 3.1 [1, p. 88]. The failure is detected at the next random or periodic checking time, whichever occurs first. The probability that the failure is detected at a periodic check is

Σ_{k=0}^∞ ∫_{kT}^{(k+1)T} { Σ_{j=0}^∞ ∫_0^t Ḡ[(k+1)T − x] dG^{(j)}(x) } dF(t) ,   (3.1)

and the probability that it is detected at a random check is

Σ_{k=0}^∞ ∫_{kT}^{(k+1)T} { Σ_{j=0}^∞ ∫_0^t ( G[(k+1)T − x] − G(t − x) ) dG^{(j)}(x) } dF(t) .   (3.2)

Fig. 3.1 Process of random and periodic inspections

Let cT be the cost of a periodic check, cR the cost of a random check, and cD the downtime cost per unit of time elapsed between failure and its detection at the next check. Then, the total expected cost until failure detection is

C_R(T) = Σ_{k=0}^∞ ∫_{kT}^{(k+1)T} Σ_{j=0}^∞ { (k+1)cT + j cR + cD[(k+1)T − t] } [ ∫_0^t Ḡ[(k+1)T − x] dG^{(j)}(x) ] dF(t)
  + Σ_{k=0}^∞ ∫_{kT}^{(k+1)T} Σ_{j=0}^∞ ∫_0^t { ∫_{t−x}^{(k+1)T−x} [ k cT + (j+1)cR + cD(x + y − t) ] dG(y) } dG^{(j)}(x) dF(t)
= cT Σ_{k=0}^∞ F̄(kT) + cR ∫_0^∞ M(t) dF(t)
  − (cT − cR) Σ_{k=0}^∞ ∫_{kT}^{(k+1)T} { G[(k+1)T] − G(t) + ∫_0^t ( G[(k+1)T − x] − G(t − x) ) dM(x) } dF(t)
  + cD Σ_{k=0}^∞ ∫_{kT}^{(k+1)T} { ∫_t^{(k+1)T} Ḡ(y) dy + ∫_0^t [ ∫_{t−x}^{(k+1)T−x} Ḡ(y) dy ] dM(x) } dF(t) .   (3.3)

In particular, when T = ∞, i.e., the unit is checked only at random inspection times, the total expected cost is

C_R(∞) ≡ lim_{T→∞} C_R(T) = (cR + cD/θ) ∫_0^∞ [ 1 + M(t) ] dF(t) − cD μ .   (3.4)

Next, when G(t) = 1 − exp(−θt) (0 < θ < ∞), i.e., M(t) = θt, the total expected cost in (3.3) is (Problem 1)

C_R(T) = cT Σ_{k=0}^∞ F̄(kT) + cR θμ + (cR − cT + cD/θ) Σ_{k=0}^∞ ∫_{kT}^{(k+1)T} { 1 − e^{−θ[(k+1)T−t]} } dF(t) .   (3.5)

We find optimal checking time T∗ to minimize C_R(T). Differentiating C_R(T) with respect to T and setting it equal to zero,

[ Σ_{k=0}^∞ (k+1) ∫_{kT}^{(k+1)T} θ e^{−θ[(k+1)T−t]} dF(t) ] / [ Σ_{k=1}^∞ k f(kT) ] − ( 1 − e^{−θT} ) = cT / (cR − cT + cD/θ) ,   (3.6)

for cR + cD/θ > cT (Problem 1). This is a necessary condition for optimal T∗ to minimize C_R(T). In addition, when F(t) = 1 − exp(−λt) (0 < λ < ∞), (3.5) is, for λ ≠ θ,

C_R(T) = cT / (1 − e^{−λT}) + cR θ/λ + (cR − cT + cD/θ) { 1 − [λ/(θ−λ)] (e^{−λT} − e^{−θT}) / (1 − e^{−λT}) } ,   (3.7)

and (3.6) is

[θ/(θ−λ)] [ 1 − e^{−(θ−λ)T} ] − ( 1 − e^{−θT} ) = cT / (cR − cT + cD/θ) ,   (3.8)

whose left-hand side increases strictly with T from 0 to λ/(θ−λ) for λ < θ and to ∞ for λ > θ. Thus, if cR + cD/θ > cT θ/λ, then there exists a finite and unique T∗ (0 < T∗ < ∞) which satisfies (3.8). Conversely, if cR + cD/θ ≤ cT θ/λ, then T∗ = ∞, i.e., periodic checks should not be made, and the total expected cost is

C_R(θ) ≡ lim_{T→∞} C_R(T) = cR ( θ/λ + 1 ) + cD/θ .   (3.9)

In particular, when θ → 0, i.e., 1/θ → ∞, (3.8) agrees with (2.9) in Sect. 2.1. On the other hand, when λ = θ, (3.5) is

C_R(T) = cT / (1 − e^{−θT}) + cR + (cR − cT + cD/θ) [ 1 − (1 + θT)e^{−θT} ] / (1 − e^{−θT}) ,   (3.10)

and (3.6) is

θT − ( 1 − e^{−θT} ) = cT / (cR − cT + cD/θ) ,   (3.11)

whose left-hand side increases strictly with T from 0 to ∞. Thus, if cR + cD/θ > cT, then there exists a finite and unique T∗ (0 < T∗ < ∞) which satisfies (3.11). If cR + cD/θ ≤ cT, then T∗ = ∞. The physical meaning of the condition cR + cD/θ > cT is that the total of the random checking cost cR and the mean downtime cost cD/θ between random checks is higher than the periodic checking cost cT. It follows that C_R(θ) in (3.9) is larger than the total of the periodic cost cT and the expected cost of random checks until failure, given by

cR ∫_0^∞ θt λe^{−λt} dt = cR θ/λ ,  i.e.,  C_R(θ) > cT + cR θ/λ .
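For the exponential case, (3.8) has a single unknown and a strictly increasing left-hand side, so bisection applies directly. A minimal sketch, assuming the reconstructed form of (3.8); the test values follow the m = 1, 1/θ = 15 entry of Table 3.1 (cT/cD = 2, cR/cD = 1, 1/λ = 100):

```python
import math

def lhs_38(T, lam, theta):
    # Left-hand side of (3.8); increases strictly with T (lam != theta assumed)
    return (theta / (theta - lam)) * (1.0 - math.exp(-(theta - lam) * T)) \
        - (1.0 - math.exp(-theta * T))

def optimal_T(lam, theta, cT, cR, cD):
    # Returns None when cR + cD/theta <= cT*theta/lam, i.e., T* = infinity
    if cR + cD / theta <= cT * theta / lam:
        return None
    rhs = cT / (cR - cT + cD / theta)
    lo, hi = 0.0, 1.0
    while lhs_38(hi, lam, theta) < rhs:
        hi *= 2.0
    for _ in range(100):
        mid = (lo + hi) / 2.0
        if lhs_38(mid, lam, theta) < rhs:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

print(optimal_T(0.01, 1.0 / 15.0, 2.0, 1.0, 1.0))   # about 49.94 (Table 3.1)
print(optimal_T(0.01, 1.0 / 10.0, 2.0, 1.0, 1.0))   # None: T* = infinity
```

The existence test mirrors the condition cR + cD/θ > cT θ/λ stated below (3.8).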



Table 3.1 Optimal T∗ and resulting cost C_R(T∗)/cD when cT/cD = 2, cR/cD = 1, and F(t) = 1 − exp(−t^m/100)

1/θ    m = 1: T∗    C_R(T∗)/cD    m = 2: T∗    C_R(T∗)/cD    m = 3: T∗    C_R(T∗)/cD
1      ∞            102.000       ∞            10.862        ∞            6.144
2      ∞            53.000        23.680       7.432         7.017        4.854
3      ∞            37.333        16.699       6.803         6.512        4.613
4      ∞            30.000        14.011       6.748         6.303        4.551
5      ∞            26.000        12.264       6.783         6.187        4.541
10     ∞            21.000        8.081        6.914         5.969        4.589
15     49.941       22.165        7.183        6.937         5.898        4.630
20     32.240       22.210        6.819        6.945         5.861        4.757
50     22.568       21.799        6.266        6.953         5.794        4.716
∞      19.355       21.487        5.954        5.966         5.748        4.771

Example 3.1 Suppose that the failure time has a Weibull distribution and the working time is exponential, i.e., F(t) = 1 − exp(−λt^m) (m ≥ 1) and G(t) = 1 − exp(−θt). Then, from (3.6),

[ Σ_{k=0}^∞ (k+1) ∫_{kT}^{(k+1)T} θ e^{−θ[(k+1)T−t]} mλt^{m−1} e^{−λt^m} dt ] / [ Σ_{k=1}^∞ k mλ(kT)^{m−1} e^{−λ(kT)^m} ] − ( 1 − e^{−θT} ) = cT / (cR − cT + cD/θ) ,   (3.12)

which agrees with (3.8) when m = 1. When 1/θ = ∞, (3.12) becomes

[ Σ_{k=0}^∞ e^{−λ(kT)^m} ] / [ Σ_{k=1}^∞ k mλ(kT)^{m−1} e^{−λ(kT)^m} ] − T = cT/cD ,   (3.13)

which corresponds to periodic inspection with a Weibull distribution in Sect. 2.1. Table 3.1 presents optimal T∗ and its resulting cost C_R(T∗)/cD for m and 1/θ when 1/λ = 100, cT/cD = 2 and cR/cD = 1. When m = 1, if 1/θ ≤ 10, then T∗ = ∞. This indicates that T∗ decrease with 1/θ and m. However, if the mean working time 1/θ exceeds a threshold level, optimal T∗ vary little for given m. Thus, it would be sufficient to check the unit at the smallest T∗ satisfying (3.13) for large 1/θ. It is of great interest that C_R(T∗) has no monotone property in 1/θ. This suggests that there might exist a combined inspection policy in which the unit is checked at periodic times kT and at every Nth working time, which will be discussed in Sect. 3.2.
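The infinite series in (3.13) converge quickly and can simply be truncated. A sketch for m = 2, λ = 1/100, cT/cD = 2 (the 1/θ = ∞ column of Table 3.1); the truncation limit kmax and the bracketing interval are assumptions chosen for these parameter values:

```python
import math

def lhs_313(T, lam=0.01, m=2, kmax=200):
    # Left-hand side of (3.13): sum of F-bar(kT) over sum of k*f(kT), minus T
    num = sum(math.exp(-lam * (k * T) ** m) for k in range(kmax))
    den = sum(k * m * lam * (k * T) ** (m - 1) * math.exp(-lam * (k * T) ** m)
              for k in range(1, kmax))
    return num / den - T

def periodic_weibull_T(c, lo=1.0, hi=30.0):
    # Bisection for lhs_313(T) = c = cT/cD; the function crosses c once upward here
    for _ in range(100):
        mid = (lo + hi) / 2.0
        if lhs_313(mid) < c:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

print(periodic_weibull_T(2.0))   # about 5.95 (Table 3.1, m = 2, 1/theta = infinity)
```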



Fig. 3.2 Process of random and sequential inspections

We consider a modified inspection policy in which failures are detected only at periodic times kT, and cD is the cost per working cycle of a failed unit until its failure detection. Then, the total expected cost until failure detection is

C_R(T) = Σ_{k=0}^∞ ∫_{kT}^{(k+1)T} { (k+1)cT + cD [ M((k+1)T) − M(t) ] } dF(t)
       = Σ_{k=0}^∞ { cT + cD [ M((k+1)T) − M(kT) ] } F̄(kT) − cD ∫_0^∞ M(t) dF(t) .   (3.14)

In particular, when G(t) = 1 − exp(−θt), i.e., M(t) = θt,

C_R(T) = (cT + cD θT) Σ_{k=0}^∞ F̄(kT) − cD θμ ,   (3.15)

which agrees with the total expected cost in (2.7) of Sect. 2.1 when θ = 1 (Problem 2).

3.1.1 Sequential Inspection

Suppose that the unit is checked at working times Sj (j = 1, 2, ...) and at successive times Tk (k = 1, 2, ...) in Fig. 3.2, where S0 = T0 = 0 and T ≡ (T1, T2, ...). Then, by a method similar to that used to obtain (3.3), the total expected cost until failure detection is

C_R(T) = cT Σ_{k=0}^∞ F̄(Tk) + cR ∫_0^∞ M(t) dF(t)
  − (cT − cR) Σ_{k=0}^∞ ∫_{Tk}^{Tk+1} { G(Tk+1) − G(t) + ∫_0^t [ G(Tk+1 − x) − G(t − x) ] dM(x) } dF(t)
  + cD Σ_{k=0}^∞ ∫_{Tk}^{Tk+1} { ∫_t^{Tk+1} Ḡ(y) dy + ∫_0^t [ ∫_{t−x}^{Tk+1−x} Ḡ(y) dy ] dM(x) } dF(t) .   (3.16)

In particular, when G(t) = 1 − exp(−θt), (3.16) is

C_R(T) = cT Σ_{k=0}^∞ F̄(Tk) + cR θμ + (cR − cT + cD/θ) Σ_{k=0}^∞ ∫_{Tk}^{Tk+1} [ 1 − e^{−θ(Tk+1−t)} ] dF(t) .   (3.17)

Differentiating C_R(T) with respect to Tk and setting it equal to zero,

1 − e^{−θ(Tk+1−Tk)} = [ ∫_{Tk−1}^{Tk} θ e^{−θ(Tk−t)} dF(t) ] / f(Tk) − cT / (cR − cT + cD/θ) ,   (3.18)

which agrees with (2.2) when θ → 0. Therefore, by using Algorithm 1 in Sect. 2.1, we can compute optimal checking times Tk∗ which satisfy (3.18).

Example 3.2 Suppose that the failure time has a Weibull distribution F(t) = 1 − exp(−λt²). Then, (3.18) is

1 − e^{−θ(Tk+1−Tk)} = [ ∫_{Tk−1}^{Tk} θ e^{−θ(Tk−t)} t e^{−λt²} dt ] / ( Tk e^{−λTk²} ) − cT / (cR − cT + cD/θ) .

When 1/θ → ∞, from (2.2),

Tk+1 − Tk = ( e^{−λTk−1²} − e^{−λTk²} ) / ( 2λTk e^{−λTk²} ) − cT/cD .



Table 3.2 Optimal Tk∗ when cT/cD = 2, cR/cD = 1, 1/λ = 100, and F(t) = 1 − exp(−t²/100)

k     1/θ = 10    1/θ = 50    1/θ = ∞
1     9.85        8.63        8.36
2     14.25       12.75       12.42
3     17.81       16.12       15.75
4     20.92       19.09       18.68
5     23.75       21.79       21.35
6     26.36       24.29       23.83
7     28.80       26.63       26.16
8     31.11       28.84       28.36
9     33.30       30.90       30.46
10    35.34       32.70       32.48

Table 3.2 presents optimal Tk∗ (k = 1, 2, ..., 10) for 1/θ when cT/cD = 2, cR/cD = 1, and 1/λ = 100. This indicates that Tk∗ decrease slowly with 1/θ, vary little for 1/θ, and increase gradually with k. Compared to Table 3.1 when m = 2, T1∗ > T∗ > T2∗ − T1∗ for given 1/θ.

3.1.2 Comparison of Periodic and Random Inspections Suppose that the failure time has an exponential distribution F(t) = 1 − exp(−λt). The unit is checked at periodic times kT (k = 1, 2, . . .) and its failure is detected at the next check. The total expected cost is given in (2.8), optimal T ∗ to minimize C(T ) satisfies (2.9), and the resulting cost is given in (2.10). On the other hand, the unit is checked at random working times S j ( j = 1, 2, . . .) where S0 ≡ 0, and Y j = S j − S j−1 ( j = 1, 2, . . .) are independent and have an identical exponential distribution Pr{Y j ≤ t} = 1 − exp(−θt). Then, the total expected cost C R (θ) until failure detection is given in (3.9). Optimal θ∗ to minimize C(θ) is λ = θ∗

!

cR , c D /λ

(3.19)

and the resulting cost is C R (θ∗ ) = c D /λ



λ θ∗

2

 +2

λ θ∗

 =

! cR cR +2 . c D /λ c D /λ

(3.20)

Example 3.3 Table 3.3 presents optimal T ∗ , 1/θ∗ , and their resulting costs C(T ∗ )/ c D , and C R (θ∗ )/c D for cT /c D when λ = 1 and cT = c R . Both T ∗ and 1/θ∗ increase

3.1 Periodic and Random Inspections

53

Table 3.3 Optimal T ∗ , 1/θ∗ , and resulting costs C(T ∗ )/c D and C R (θ∗ )/c D when cT = c R and F(t) = 1 − exp(−t) cT /c D T∗ C(T ∗ )/c D 1/θ∗ C R (θ∗ )/c D 0.001 0.002 0.005 0.010 0.020 0.050 0.100 0.200 0.500 1.000

0.0444 0.0626 0.0984 0.1382 0.1935 0.3004 0.4162 0.5722 0.8577 1.1462

0.0454 0.0646 0.1034 0.1482 0.2135 0.3504 0.5162 0.7722 1.3577 2.1462

0.0316 0.0447 0.0707 0.1000 0.1414 0.2236 0.3152 0.4472 0.7071 1.0000

0.0642 0.0914 0.1464 0.2100 0.3028 0.4972 0.7324 1.0944 1.9142 3.0000

with cT /c D . This indicates that T ∗ > 1/θ∗ and C(T ∗ ) < C R (θ∗ ), i.e., periodic checking times are larger than random ones, and periodic inspection is better than random one. This also indicates that if a random checking cost c R is the half of cT , both total expected costs of periodic and random inspections are almost the same. For example, C(T ∗ )/c D = 0.0646 for cT /c D = 0.002 and C R (θ∗ )/c D = 0.0642 when c R /c D = 0.001. In other words, if c R is smaller than the half of cT , then random inspection might be better than periodic one.  We compare periodic and random inspections theoretically when cT = c R . It is assumed for the simplicity of notations that λ = 1 and c ≡ λcT /c D ≤ 1 because the downtime cost for the mean failure time 1/λ would be much higher than one checking cost for most inspection models. When cT /c D = 1, T ∗ = 1.1462, and 1/θ∗ = 1.0. Thus, it is noted that 0 < T ∗ ≤ 1.1462 and 0 ≤ 1/θ∗ ≤ 1.0. From (2.9) and (3.19), a solution of the equation Q(T ) ≡ eT − (1 + T + T 2 ) = 0 , is T = 1.79 > 1.1462, which follows that Q(T ) < 0 for 0 < T < 1.79. Thus, 0 < 1/θ∗ < T ∗ ≤ 1.1462. Next, prove that 2/θ∗ > T ∗ . From (2.9), c = eT − (1 + T ) > i.e., T ∗
2c > T ∗ , ∗ θ

54

3 Random Inspection Models

Table 3.4 Values of " c R /c D and " c R /cT when C(T ∗ ) = C R (θ∗ ) cT /c D " c R /c D " c R /cT 0.001 0.002 0.005 0.010 0.020 0.050 0.100 0.200 0.500 1.000

0.0005 0.0010 0.0025 0.0051 0.0103 0.0263 0.0535 0.1097 0.2868 0.5987

0.5039 0.5054 0.5086 0.5118 0.5160 0.5253 0.5352 0.5485 0.5735 0.5987

which follows that 1/θ∗ < T ∗ < 2/θ∗ . In addition, from (2.10) and (3.20), C R (θ∗ ) − C(T ∗ ) = cD



1 θ∗

2 +

2 2 ∗ − eT + 1 > ∗ − T ∗ > 0 . θ∗ θ

From the above results, T ∗ > 1/θ∗ and C(T ∗ ) < C R (θ∗ ), i.e., periodic inspection is better than random one and optimal T ∗ is larger than 1/θ∗ . It has been assumed until now that both checking costs of periodic and random inspections are the same. Usually, the cost of random check would be smaller than that of periodic one because the unit is checked at the completion of working times. We compute a random checking cost " c R when the total expected costs of two inspections are the same one from Table 3.1: # " cR " c ∗ R +2 . C(T ∗ ) = eT − 1 = cD cD Example 3.4 Table 3.4 presents " c R /c D and " c R /cT for cT /c D , and indicates that the checking cost " c R is a little larger than the half of cT . It is noted from (2.10) and (3.20), 1 eλT − 1 = . T →0 (λT )2 + 2λT 2 lim

This shows that if cT → 0, then C(T ∗ ) → C R (θ)/2, i.e., as cT → 0, the total expected cost of periodic inspection is the half of that of random one. Therefore, it would be estimated that if cT → 0 and c R /cT = 1/2, then both expected costs of periodic and random inspections would be the same, as shown in Table 3.4 (Problem 3). 

3.2 Random Inspection

55

3.2 Random Inspection Suppose that the unit is checked at every N th (N = 1, 2, . . .) working times S j N ( j = 1, 2, . . .), i.e., the j N th number of working times, and also at periodic times kT (k = 1, 2, . . .), whichever occurs first. Then, the total expected cost until failuredetection is, by replacing formally G(t) and M(t) with G (N ) (t) and ( j N) (t) (N = 1, 2, . . .) in (3.3) where M (1) (t) ≡ M(t), respecM (N ) (t) ≡ ∞ j=1 G tively, C R (T, N ) = cT

∞ 

 ∞  

−(cT − c R ) G

+c D

(N )

∞  

 t  0



G (N ) ((k + 1)T ) − G (N ) (t)

((k + 1)T − x) − G

(k+1)T



kT

k=0

+

(k+1)T kT

k=0

 t$ 0

M (N ) (t)dF(t)

0

k=0

+



F(kT ) + c R

(k+1)T

(N )

%

(t − x) dM

(N )

 (x) dF(t)

  1 − G (N ) (y) dy

t

(k+1)T −x



   1 − G (N ) (y) dy dM (N ) (x) dF(t) .

(3.21)

t−x

In general, it is very difficult to derive analytically both optimal T ∗ and N ∗ to minimize C R (T, N ). In particular, when T = ∞, i.e., the unit is checked only at every N th (N = 1, 2, . . .) working time, the total expected cost is C R (N ) ≡ lim C R (T, N ) T →∞     ∞ N cD 1+ = cR + F(t)dM (N ) (t) − c D μ . θ 0

(3.22)

In addition, when F(t) = 1 − exp(−λt), 



e 0

−λt

dM

(N )

(t) =

∞   j=1



e−λt dG ( j N ) (t) =

0

where G ∗ (s) is the LS transform of G(t), i.e., G ∗ (s) ≡ Thus, the total expected cost is, from (3.22), C R (N ) =

∞ 0

c R + N c D /θ c D . − 1 − G ∗ (λ) N λ

G ∗ (λ) N , 1 − G ∗ (λ) N e−st dG(t) for Re(s) > 0.

(3.23)

56

3 Random Inspection Models

We find optimal N ∗ to minimize C R (N ). Forming the inequality C R (N + 1) − C R (N ) ≥ 0, N  

1 G ∗ (λ)

j=1

j −N ≥

cR , c D /θ

(3.24)

whose left-hand side increases strictly with N from 1/G ∗ (λ) − 1 to ∞. Thus, there exists a finite and unique minimum N ∗ (1 ≤ N ∗ < ∞) which satisfies (3.24). Example 3.5 When G(t) = 1 − exp(−θt), i.e., G ∗ (λ) = θ/(λ + θ), the total expected cost is, from (3.23), C R (N ) =

c R + N c D /θ cD − , N 1 − [θ/(λ + θ)] λ

(3.25)

and from (3.24), optimal N ∗ satisfies N   j=1

λ 1+ θ

j −N ≥

cR , c D /θ

(3.26)

whose left-hand side increases strictly with N from λ/θ to ∞. Thus, there exists a√finite and unique minimum N ∗ (1 ≤ N ∗ < ∞) which satisfies (3.26). If 1/θ ≥ c R /(c D λ), then N ∗ = 1. It can be easily shown that N ∗ decreases with 1/θ from ∞ to 1.  Comparing random inspection with periodic one in which the expected cost C(T ) is given in (2.8), optimal T ∗ satisfies (2.9) and the resulting cost C(T ∗ ) is given in (2.10). Setting T ≡ N /θ, i.e., θ ≡ N /T in (3.25), it can be easily shown that 

θ+λ θ

N

  λT N = 1+ N

increases strictly with N to exp(λT ), i.e., for any N , 

N N + λT

N

≥ e−λT .

Thus, by comparing (2.8) and (3.25), C R (N ) > C(N /θ), which follows that when c R = cT , periodic inspection is better than random one, which has been already shown in Sect. 3.1.2. Example 3.6 Table 3.5 presents optimal N ∗ and its resulting cost C R (N ∗ )/c D for 1/θ and c R /c D when 1/λ = 100, and optimal T ∗ and its resulting cost C(T ∗ )/c D . This indicates that optimal N ∗ decrease with 1/θ and increase with c R /c D , however,

3.2 Random Inspection

57

Table 3.5 Optimal N ∗ and T ∗ , and resulting costs C R (N ∗ )/c D and C(T ∗ )/c D when F(t) = 1 − exp(−t/100) 1/θ c R /c D = 5 c R /c D = 10 N∗ C R (N ∗ )/c D N∗ C R (N ∗ )/c D 1 2 3 4 5 10 15 20 25 50 T∗ C(T ∗ )/c D

30 15 10 8 6 3 2 2 1 1 30.04 35.04

35.6 36.2 36.8 37.4 37.9 40.7 43.5 47.3 50.0 65.0

42 21 14 11 8 4 3 2 2 1 41.62 51.62

52.2 52.8 53.4 54.1 54.7 57.7 60.6 63.6 75.0 80.0

N ∗ /θ is almost the same for small 1/θ. Furthermore, T ∗ are almost the same as N ∗ /θ  for small 1/θ and C R (N ∗ ) > C(T ∗ ). Example 3.7 Suppose that the failure time has a Weibull distribution F(t) = 1 − ∞ exp(−λt m ) (m ≥ 1), μ = (1 + 1/m)/λ1/m where (α) ≡ 0 x α−1 exp(−x)dx for α > 0, and G(t) = 1 − exp(−θt). In this case, noting that the renewal density is [5, p. 101], [6, p. 57], [7, p. 52] m

(N )



dM (N ) (t)  θ(θt) N j−1 −θt = e , (t) ≡ dt (N j − 1)! j=1

the total expected cost in (3.22) is ⎤ ⎡   ∞  ∞ N j−1  N cD ⎣ m θ(θt) C R (N ) = c R + e−θt dt ⎦ − c D μ . e−λt 1+ θ (N j − 1)! 0 j=1

(3.27)

Table 3.6 presents optimal N ∗ for m and 1/θ when 1/λ = 100 and c R /c D = 5. When m = 1, N ∗ are equal to Table 3.5 when c R /c D = 5. Because the failure rate h(t) = λmt m−1 (m > 1) increases rapidly, N ∗ become much smaller than those for m = 1 when 1/θ is small, and N ∗ /θ are almost constant, i.e., N ∗ /θ ≈ 30, 9, 6 for m = 1, 2, 3, respectively. On the other hand, when 1/θ is very large, the unit should be checked at every working time, i.e., N ∗ = 1. This indicates that optimal N ∗ decrease with 1/θ and m, and increase with c R /c D , however, N ∗ /θ are almost the same for small 1/θ. 

58

3 Random Inspection Models

Table 3.6 Optimal N ∗ when c R /c D = 5 and F(t) = 1 − exp(−t m /100) 1/θ m=1 m=2 m=3 1 2 3 4 5 10 15 20 25

30 15 10 8 6 3 2 2 1

9 5 3 2 2 1 1 1 1

6 3 2 2 2 1 1 1 1

3.3 Inspection First and Last As extended inspection policies, we propose the following three random policies of inspection first, last and overtime, and derive and compare their optimal policies to minimize the total expected costs theoretically and numerically [5, p. 101].

3.3.1 Inspection First Suppose that the unit is checked at a planned time T (0 < T ≤ ∞) or at a working time Y j ( j = 1, 2, . . .), whichever occurs first. That is, the unit is checked at interval times Z j ≡ min{T, Y j } ( j = 1, 2, . . .) in Fig. 3.3, and Y j has an identical distribution G(t) ≡ Pr{Y j ≤ t}. In this case, Z j forms a renewal process with an interarrival distribution Pr{Z j ≤ t} = G(t) for t < T , 1 for t ≥ T . It is assumed that the failure time has an exponential distribution F(t) = 1 − exp(−λt). Then, the probability that the unit does not fail and is checked at time T is G(T )F(T ) ,

Fig. 3.3 Process of inspection first

3.3 Inspection First and Last

59

the probability that it does not fail and is checked at time Y j is 

T

F(t)dG(t) ,

0

the probability that it fails and its failure is detected at time T is G(T )F(T ) , and the probability that it fails and its failure is detected at time Y j is 

T

F(t)dG(t) .

0

Thus, using the above equations, the mean downtime l D from failure to its detection is given by a renewal equation 





T

l D = l D G(T )F(T ) +

F(t)dG(t) 0



T

+G(T )



T

(T − t)dF(t) +

0

0



t

 (t − u)dF(u) dG(t) .

0

By solving the above renewal equation, T l D = 0 T

G(t)F(t)dt

0

G(t)dF(t)

.

(3.28)

Similarly, the expected number MT of checks at time T until failure detection is given by a renewal equation  MT = (1 + MT )G(T )F(T ) + MT

T

F(t)dG(t) + G(T )F(T ) ,

0

i.e., MT =  T 0

G(T ) G(t)dF(t)

.

(3.29)

The expected number M R of checks at working time Y j until failure detection is given by a renewal equation 

T

M R = (1 + M R ) 0

 F(t)dG(t) + M R G(T )F(T ) + 0

T

F(t)dG(t) ,

60

3 Random Inspection Models

i.e., MR =  T 0

G(T ) G(t)dF(t)

.

(3.30)

Therefore, the total expected cost until failure detection is C F (T ) = cT MT + c R M R + c D l D

T cT G(T ) + c R G(T ) + c D 0 G(t)F(t)dt = , T 0 G(t)dF(t)

(3.31)

where cT , c R , and c D are given in (3.3). When G(t) = 1 − exp(−θt) and F(t) = 1 − exp(−λt), the total expected cost is   cT + (c R − cT + c D /θ) 1 − e−θT cD   , (3.32) C F (T ) = − λ [λ/(θ + λ)] 1 − e−(θ+λ)T which agrees with (2.8) when θ → 0 and (3.9) when T → ∞. This includes periodic and random inspection policies. We find optimal TF∗ to minimize C F (T ) in (3.32) for c R + c D /θ > cT . Differentiating C F (T ) with respect to T and setting it equal to zero,   cT λ  θ  λT e −1 − 1 − e−θT = , θ+λ θ+λ c R − cT + c D /θ

(3.33)

whose left-hand side increases strictly with T from 0 to ∞. Thus, there exists a finite and unique TF∗ (0 < TF∗ < ∞) which satisfies (3.33), and the resulting cost is ∗

λC F (TF∗ ) = [θ(c R − cT ) + c D ]eλTF − c D .

(3.34)

In particular, when c R = cT , (3.33) is   cT  λ 1  λT e −1 − 1 − e−θT = , θ+λ θ(θ + λ) cD

(3.35)

whose left-hand side decreases with θ, and TF∗ increases with θ from T ∗ given in (2.9) to ∞ (Problem 4).

3.3 Inspection First and Last

61

3.3.2 Inspection Last Suppose that the unit is checked at a planned time T (0 ≤ T < ∞) or at a working time Y j ( j = 1, 2, . . .), whichever occurs last. That is, the unit is checked at interval times Z j ≡ max{T, Y j } ( j = 1, 2, . . .) with G(t) ≡ Pr{Y j ≤ t} in Fig. 3.4. In this case, Z j forms a renewal process with an interarrival distribution Pr{ Z j ≤ t} = 0 for t < T , and G(t) for t ≥ T . It is assumed that the failure time has an exponential distribution F(t) = 1 − exp(−λt). Then, the probability that the unit does not fail and is checked at time T is G(T )F(T ) , the probability that it does not fail and is checked at time Y j is 



F(t)dG(t) ,

T

the probability that it fails and its failure is detected at time T is G(T )F(T ) , and the probability that it fails and its failure is detected at time Y j is 



F(t)dG(t) .

T

Thus, the mean downtime l D from failure to its detection is given by a renewal equation   l D = l D G(T )F(T ) +





F(t)dG(t) T



T

+G(T )





(T − t)dF(t) +

0

Fig. 3.4 Process of inspection last

T

 0

t

 (t − u)dF(u) dG(t) .

62

3 Random Inspection Models

By solving the above renewal equation, T lD =

0

∞ F(t)dt + T G(t)F(t)dt ∞ . 1 − T G(t)dF(t)

(3.36)

Similarly, the expected number MT of checks at time T until failure detection is given by a renewal equation 



MT = (1 + MT )G(T )F(T ) + MT

F(t)dG(t) + G(T )F(T ) ,

T

i.e., MT =

1−

G(T ) ∞ . T G(t)dF(t)

(3.37)

The expected number M R of checks at working time Y j until failure detection is given by a renewal equation 





M R = (1 + M R )



F(t)dG(t) + M R G(T )F(T ) +

T

F(t)dG(t) ,

T

i.e., MR =

G(T ) ∞ . 1 − T G(t)dF(t)

(3.38)

Therefore, the total expected cost until failure detection is C L (T ) = cT MT + c R M R + c D l D

$ % ∞ T cT G(T ) + c R G(T ) + c D 0 F(t)dt + T G(t)F(t)dt ∞ . = 1 − T G(t)dF(t)

(3.39)

Clearly, lim C L (T ) = lim C F (T ) ,

T →0

T →∞

lim C L (T ) = lim C F (T ) = ∞ .

T →∞

T →0

When F(t) = 1 − exp(−λt) and G(t) = 1 − exp(−θt), the total expected cost is C L (T ) =

    cT 1 − e−θT + c R e−θT + (c D /θ) θT + e−θT cD − , 1 − e−λT + [λ/(θ + λ)]e−(θ+λ)T λ

(3.40)

3.3 Inspection First and Last

63

which agrees with (2.8) when θ → ∞ and with (3.9) when T → 0. We find optimal T_L^* to minimize C_L(T) in (3.40). Differentiating C_L(T) with respect to T and setting it equal to zero,

\[
\frac{c_T - c_R}{e^{\theta T} - 1}\left[\frac{\theta}{\lambda}\left(e^{\lambda T} - 1\right) + \frac{\theta}{\theta+\lambda} + \frac{\lambda}{\theta+\lambda}\left(1 - e^{-\theta T}\right)\right]
+ \frac{c_D}{\theta}\left[\frac{\theta}{\lambda}\left(e^{\lambda T} - (1 + \lambda T)\right) - \frac{\lambda}{\theta+\lambda}\,e^{-\theta T}\right] = c_T . \tag{3.41}
\]

In particular, when c_T = c_R, (3.41) is

\[
\frac{1}{\lambda}\left[e^{\lambda T} - (1 + \lambda T)\right] - \frac{\lambda}{\theta(\theta+\lambda)}\,e^{-\theta T} = \frac{c_T}{c_D}, \tag{3.42}
\]

whose left-hand side increases strictly with T from −λ/[θ(θ + λ)] to ∞. Thus, there exists a finite and unique T_L (0 < T_L < ∞) which satisfies (3.42). Therefore, optimal T_L^* satisfies T_L^* ≥ T_L for c_R ≥ c_T and T_L^* < T_L for c_R < c_T, and T_L decreases with θ to T^* given in (2.9) of Sect. 2.1 (Problem 5).
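Because the left-hand side of (3.42) is strictly increasing, its root can be located by simple bisection. The following sketch is an illustration of mine, not part of the book; the parameters λ = 1, θ = 2 (i.e., 1/θ = 0.5) and c_T/c_D = 0.1 are chosen so that the root can be compared with one entry of Table 3.7:

```python
import math

def bisect(f, lo, hi, tol=1e-12):
    # f is strictly increasing with f(lo) < 0 < f(hi)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

lam, theta, c_ratio = 1.0, 2.0, 0.10  # lambda, theta, c_T/c_D (with c_T = c_R)

def eq342(T):
    # left-hand side of (3.42) minus c_T/c_D
    return ((math.exp(lam * T) - (1.0 + lam * T)) / lam
            - lam * math.exp(-theta * T) / (theta * (theta + lam))
            - c_ratio)

TL = bisect(eq342, 0.0, 10.0)
print(f"T_L = {TL:.4f}")  # close to 0.5161 in Table 3.7 (1/theta = 0.5)
```

The same routine works for any parameter set, since the left-hand side of (3.42) starts negative at T = 0 and diverges as T → ∞.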

3.3.3 Comparison of Inspection First and Last

We compare optimal policies for inspection first and last when c_T = c_R, G(t) = 1 − exp(−θt) and F(t) = 1 − exp(−λt). In this case, the total expected cost of inspection first is, from (3.32),

\[
C_F(T) = \frac{c_T + (c_D/\theta)\left(1 - e^{-\theta T}\right)}{[\lambda/(\theta+\lambda)]\left[1 - e^{-(\theta+\lambda)T}\right]} - \frac{c_D}{\lambda}, \tag{3.43}
\]

optimal T_F^* satisfies (3.35), and the resulting cost is, from (3.34),

\[
C_F(T_F^*) = \frac{c_D}{\lambda}\left(e^{\lambda T_F^*} - 1\right). \tag{3.44}
\]

The total expected cost of inspection last is, from (3.40),

\[
C_L(T) = \frac{c_T + (c_D/\theta)\left(\theta T + e^{-\theta T}\right)}{1 - e^{-\lambda T} + [\lambda/(\theta+\lambda)]\,e^{-(\theta+\lambda)T}} - \frac{c_D}{\lambda}, \tag{3.45}
\]

optimal T_L^* satisfies (3.42), and the resulting cost is

\[
C_L(T_L^*) = \frac{c_D}{\lambda}\left(e^{\lambda T_L^*} - 1\right). \tag{3.46}
\]


By comparing (2.9) with (3.35) for 0 < T < ∞,

\[
\frac{e^{\lambda T} - (1 + \lambda T)}{\lambda} > \frac{e^{\lambda T} - 1}{\theta+\lambda} - \frac{\lambda\left(1 - e^{-\theta T}\right)}{\theta(\theta+\lambda)},
\]

which implies that T_F^* > T^*. Similarly, by comparing (2.9) with (3.42), T_L^* > T^*. Thus, from (2.10), (3.44) and (3.46), periodic inspection with only time T^* is better than both inspection first and last. Furthermore, to compare (3.35) and (3.42), denoting

\[
Q(T) \equiv \frac{\theta}{\lambda}\left[e^{\lambda T} - (1 + \lambda T)\right] - \frac{\lambda}{\theta+\lambda}\,e^{-\theta T}
- \left[\frac{\theta}{\theta+\lambda}\left(e^{\lambda T} - 1\right) - \frac{\lambda}{\theta+\lambda}\left(1 - e^{-\theta T}\right)\right]
= \frac{\theta}{\lambda}\left[e^{\lambda T} - (1 + \lambda T)\right] + \frac{\lambda}{\theta+\lambda}\left(1 - 2e^{-\theta T}\right) - \frac{\theta}{\theta+\lambda}\left(e^{\lambda T} - 1\right),
\]

Q(T) increases strictly with T from −λ/(θ + λ) to ∞. Thus, there exists a finite and unique T_I (0 < T_I < ∞) which satisfies Q(T) = 0. Thus, from (3.35) and (3.42), if

\[
L(T_I) \equiv \frac{\theta}{\theta+\lambda}\left(e^{\lambda T_I} - 1\right) - \frac{\lambda}{\theta+\lambda}\left(1 - e^{-\theta T_I}\right) \ge \frac{c_T}{c_D/\theta}, \tag{3.47}
\]

then T_F^* ≤ T_L^*, and inspection first is better than inspection last; conversely, if L(T_I) < c_T/(c_D/θ), then T_L^* < T_F^*, and inspection last is better than inspection first.

Example 3.8 Table 3.7 presents optimal T_F^* and T_L^* which satisfy (3.35) and (3.42), respectively, and T_I and L(T_I), for c_T/c_D and 1/θ when 1/λ = 1 and c_T = c_R. When 1/θ = ∞, the values of T^* are given in Table 3.3. The table indicates that both T_F^* and T_L^* increase with c_T/c_D. When c_T/c_D is small, i.e., L(T_I) > c_T/c_D, T_F^* < T_L^* and inspection first is better than inspection last. Conversely, when c_T/c_D is large, i.e., L(T_I) < c_T/c_D, T_L^* < T_F^* and inspection last is better than inspection first. Optimal T_F^* decreases with 1/θ to T^* and T_L^* increases with 1/θ from T^*, so that inspection first becomes better than inspection last as 1/θ becomes larger. It is of interest that when 1/θ = 0.5 and c_T/c_D = 0.100, T_F^* = 0.4739 < 1/θ = 0.5 < T_L^* = 0.5161, and both inspection times are almost the same as 1/θ.


Table 3.7 Optimal T_F^*, T_L^*, and T_I when F(t) = 1 − exp(−t) and G(t) = 1 − exp(−θt)

             1/θ = 0.1           1/θ = 0.2           1/θ = 0.5           1/θ = ∞
c_T/c_D      T_F^*     T_L^*     T_F^*     T_L^*     T_F^*     T_L^*     T^*
0.001        0.0479    0.0939    0.0461    0.1698    0.0450    0.3746    0.0444
0.002        0.0697    0.1012    0.0660    0.1737    0.0639    0.3762    0.0626
0.005        0.1168    0.1216    0.1069    0.1850    0.1016    0.3811    0.0984
0.010        0.1764    0.1511    0.1553    0.2030    0.1446    0.3891    0.1382
0.020        0.2727    0.1993    0.2279    0.2362    0.2061    0.4048    0.1936
0.050        0.5004    0.3017    0.3859    0.3190    0.3307    0.4492    0.3004
0.100        0.7884    0.4165    0.5817    0.4239    0.4739    0.5161    0.4162
0.200        1.1939    0.5723    0.8744    0.5747    0.6787    0.6297    0.5722
0.500        1.8871    0.8577    1.4350    0.8580    1.0792    0.8785    0.8577
1.000        2.4932    1.1462    1.9742    1.1462    1.4985    1.1539    1.1462
T_I          0.1259              0.2444              0.5643
L(T_I)       0.0057              0.0226              0.1400
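The columns of Table 3.7 can be reproduced from the optimality conditions themselves. The sketch below is my illustration (not the book's code): it solves (3.35) for T_F^* and (3.42) for T_L^* by bisection with λ = 1 and c_T = c_R, c_T/c_D = 0.1, and reports which policy has the smaller optimal time (and hence, by (3.44) and (3.46), the smaller cost):

```python
import math

def bisect(f, lo, hi, tol=1e-12):
    # f strictly increasing with f(lo) < 0 < f(hi)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

lam, c = 1.0, 0.10  # lambda and c_T/c_D, with c_T = c_R

def TF(theta):
    # inspection first, condition (3.35)
    return bisect(lambda T: (math.exp(lam*T) - 1.0)/(theta + lam)
                  - lam*(1.0 - math.exp(-theta*T))/(theta*(theta + lam)) - c,
                  0.0, 20.0)

def TL(theta):
    # inspection last, condition (3.42)
    return bisect(lambda T: (math.exp(lam*T) - (1.0 + lam*T))/lam
                  - lam*math.exp(-theta*T)/(theta*(theta + lam)) - c,
                  0.0, 20.0)

for theta in (10.0, 2.0):  # 1/theta = 0.1 and 0.5
    tf, tl = TF(theta), TL(theta)
    better = "first" if tf < tl else "last"
    print(f"1/theta={1/theta:.1f}: T_F*={tf:.4f}, T_L*={tl:.4f} -> inspection {better}")
```

For 1/θ = 0.1 the run gives T_F^* ≈ 0.7884 > T_L^* ≈ 0.4165 (inspection last better), while for 1/θ = 0.5 it gives T_F^* ≈ 0.4739 < T_L^* ≈ 0.5161 (inspection first better), matching the row c_T/c_D = 0.100 of Table 3.7.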

3.4 Inspection Overtime

Suppose that the unit is checked at the first completion of working times over time T (0 ≤ T < ∞), which is called inspection overtime [5, p. 108]. It is assumed that the failure time and each working time have the respective exponential distributions F(t) = 1 − exp(−λt) and G(t) = 1 − exp(−θt). Then, the probability that the unit does not fail at some checking interval is

\[
\sum_{j=0}^{\infty} \int_0^T \left[\int_T^\infty \overline{F}(u)\,dG(u - t)\right] dG^{(j)}(t) = \frac{\theta}{\theta+\lambda}\,e^{-\lambda T},
\]

and the probability that it fails at some interval is

\[
\sum_{j=0}^{\infty} \int_0^T \left[\int_T^\infty F(u)\,dG(u - t)\right] dG^{(j)}(t) = 1 - \frac{\theta}{\theta+\lambda}\,e^{-\lambda T}.
\]

Thus, the mean time from failure to its detection is

\[
\sum_{j=0}^{\infty} \int_0^T \left\{\int_T^\infty \left[\int_0^u (u - x)\,dF(x)\right] dG(u - t)\right\} dG^{(j)}(t)
= T + \frac{1}{\theta} - \frac{1}{\lambda} + \frac{\theta}{\lambda(\theta+\lambda)}\,e^{-\lambda T}. \tag{3.48}
\]


The expected number M_R of checking times until failure detection is given by a renewal equation

\[
M_R = (1 + M_R)\sum_{j=0}^{\infty} \int_0^T \left[\int_T^\infty \overline{F}(u)\,dG(u - t)\right] dG^{(j)}(t)
+ \sum_{j=0}^{\infty} \int_0^T \left[\int_T^\infty F(u)\,dG(u - t)\right] dG^{(j)}(t),
\]

i.e.,

\[
M_R = \frac{1}{\sum_{j=0}^{\infty} \int_0^T \left[\int_T^\infty F(u)\,dG(u - t)\right] dG^{(j)}(t)}
= \frac{1}{1 - [\theta/(\theta+\lambda)]\,e^{-\lambda T}} . \tag{3.49}
\]

Therefore, from (3.48) and (3.49), the total expected cost until failure detection is (Problem 6)

\[
C_O(T) = \frac{c_R + c_D\left(T + 1/\theta\right)}{1 - [\theta/(\theta+\lambda)]\,e^{-\lambda T}} - \frac{c_D}{\lambda}, \tag{3.50}
\]

which agrees with (3.9) when T = 0. We find optimal T_O^* to minimize C_O(T). Differentiating C_O(T) with respect to T and setting it equal to zero,

\[
\left(\frac{1}{\lambda} + \frac{1}{\theta}\right)\left(e^{\lambda T} - 1\right) - T = \frac{c_R}{c_D}, \tag{3.51}
\]

whose left-hand side increases strictly with T from 0 to ∞. Thus, there exists a finite and unique T_O^* (0 < T_O^* < ∞) which satisfies (3.51), and the resulting cost is

\[
\frac{C_O(T_O^*)}{c_D/\lambda} = \left(1 + \frac{\lambda}{\theta}\right)e^{\lambda T_O^*} - 1. \tag{3.52}
\]

Note that optimal T_O^* increases strictly with θ from 0 to T^* given in (2.9), and T_O^* ≤ T^*.
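The relation T_O^* < T^* can be checked numerically. The sketch below (my illustration, with assumed parameters λ = 1, θ = 10 and c_R = c_T = 0.1 c_D) solves (3.51) for T_O^* and (2.9) for the periodic time T^* by bisection:

```python
import math

def bisect(f, lo, hi, tol=1e-12):
    # f strictly increasing with f(lo) < 0 < f(hi)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

lam, theta, c = 1.0, 10.0, 0.10  # lambda, theta, c_R/c_D = c_T/c_D

# (3.51): inspection overtime
TO = bisect(lambda T: (1.0/lam + 1.0/theta)*(math.exp(lam*T) - 1.0) - T - c,
            0.0, 20.0)
# (2.9): periodic inspection
T_star = bisect(lambda T: (math.exp(lam*T) - (1.0 + lam*T))/lam - c,
                0.0, 20.0)

print(f"T_O* = {TO:.4f}, T* = {T_star:.4f}")  # T_O* < T*, as stated
```

The values agree with the row c_T/c_D = 0.100, 1/θ = 0.1 of Table 3.8 (T_O^* ≈ 0.3299) and the 1/θ = ∞ column of Table 3.7 (T^* ≈ 0.4162).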

3.4.1 Comparisons of Periodic Inspection and Inspection Overtime

Compare periodic inspection, in which the total expected cost C(T) is given in (2.8), with inspection overtime when c_T = c_R. In this case, from (3.51), T_O^* decreases with 1/θ from T^* to 0, and T_O^* < T^*. On the other hand,

\[
\frac{1}{\lambda}\left[e^{\lambda(T_O^* + 1/\theta)} - 1\right] - \left(T_O^* + \frac{1}{\theta}\right)
> \left(\frac{1}{\lambda} + \frac{1}{\theta}\right)\left(e^{\lambda T_O^*} - 1\right) - T_O^* = \frac{c_T}{c_D},
\]

which implies that T_O^* + 1/θ > T^* > T_O^*. Thus, comparing (3.52) with (2.10), C(T^*) < C_O(T_O^*), i.e., periodic inspection is better than inspection overtime.

Next, assume that c_R < c_T. Then, from (2.10) and (3.52), if

\[
c_T + c_D T^* > c_R + c_D\left(T_O^* + \frac{1}{\theta}\right),
\]

then inspection overtime is better than the periodic one. Furthermore, we obtain \hat{c}_R in the case where C(T^*) = C_O(T_O^*) for given c_T and c_D. First, we compute T^* from (2.9) and C(T^*) from (2.10). Using T^* and C(T^*), we obtain \hat{T}_O which satisfies

\[
\left(\frac{1}{\lambda} + \frac{1}{\theta}\right)\left(e^{\lambda \hat{T}_O} - 1\right) + \frac{1}{\theta} = T^* + \frac{c_T}{c_D},
\]

and from (3.51),

\[
\frac{\hat{c}_R}{c_D} = T^* + \frac{c_T}{c_D} - \left(\hat{T}_O + \frac{1}{\theta}\right).
\]

Example 3.9 Table 3.8 presents optimal T_O^* and \hat{c}_R/c_D for 1/θ and c_T/c_D when F(t) = 1 − exp(−t). Optimal T_O^* and \hat{c}_R/c_D increase with c_T/c_D and decrease with 1/θ. Compared to Table 3.7, T_O^* < T^* < T_O^* + 1/θ. This indicates that \hat{c}_R/c_D approaches c_T/c_D as c_T/c_D becomes larger. In other words, if both c_T and c_R with c_T > c_R become larger, then T_O^* and T^* become larger, and inspection overtime and periodic inspection are almost the same. If \hat{T}_O + 1/θ ≥ T^* + c_T/c_D, then no positive \hat{c}_R exists (the dashes in Table 3.8), i.e., inspection overtime cannot be better than periodic inspection.

3.4.2 Comparisons of Inspection Overtime with First and Last

Compare inspection overtime with inspection first introduced in Sect. 3.3.1 and inspection last introduced in Sect. 3.3.2 when c_R = c_T. From (3.33) and (3.51), noting that

\[
\left(\frac{1}{\lambda} + \frac{1}{\theta}\right)\left(e^{\lambda T} - 1\right) - T
- \left[\frac{1}{\theta+\lambda}\left(e^{\lambda T} - 1\right) - \frac{\lambda}{\theta(\theta+\lambda)}\left(1 - e^{-\theta T}\right)\right]
> \frac{\lambda^2 T}{\theta(\theta+\lambda)} + \frac{\lambda}{\theta(\theta+\lambda)}\left(1 - e^{-\theta T}\right) > 0,
\]


Table 3.8 Optimal T_O^* for c_T = c_R, and \hat{c}_R/c_D for C(T^*) = C_O(T_O^*), when F(t) = 1 − exp(−t) and G(t) = 1 − exp(−θt)

             1/θ = 0.01             1/θ = 0.05             1/θ = 0.1
c_T/c_D      T_O^*    \hat{c}_R/c_D   T_O^*    \hat{c}_R/c_D   T_O^*    \hat{c}_R/c_D
0.001        0.0355   –              0.0170   –              0.0095   –
0.002        0.0534   0.0012         0.0303   –              0.0182   –
0.005        0.0889   0.0045         0.0606   –              0.0407   –
0.010        0.1285   0.0097         0.0972   0.0010         0.0713   –
0.020        0.1838   0.0198         0.1503   0.0133         0.1190   –
0.050        0.2906   0.0498         0.2550   0.0454         0.2181   0.0323
0.100        0.4064   0.0998         0.3698   0.0964         0.3299   0.0863
0.200        0.5624   0.1998         0.5250   0.1972         0.4830   0.1892
0.500        0.8478   0.4999         0.8098   0.4979         0.7658   0.4919
1.000        1.1363   0.9999         1.0980   0.9982         1.0531   0.9931

we have T_O^* < T_F^* (Problem 7). From (3.42) and (3.51), noting that

\[
\left(\frac{1}{\lambda} + \frac{1}{\theta}\right)\left(e^{\lambda T} - 1\right) - T
- \left\{\frac{1}{\lambda}\left[e^{\lambda T} - (1 + \lambda T)\right] - \frac{\lambda}{\theta(\theta+\lambda)}\,e^{-\theta T}\right\}
= \frac{1}{\theta}\left(e^{\lambda T} - 1\right) + \frac{\lambda}{\theta(\theta+\lambda)}\,e^{-\theta T} > 0,
\]

we have T_O^* < T_L^*. In addition, because (Problem 7)

\[
\frac{1}{\lambda}\left[e^{\lambda(T + 1/\theta)} - \left(1 + \lambda T + \frac{\lambda}{\theta}\right)\right]
- \frac{\lambda}{\theta(\theta+\lambda)}\,e^{-\theta(T + 1/\theta)}
- \left[\left(\frac{1}{\lambda} + \frac{1}{\theta}\right)\left(e^{\lambda T} - 1\right) - T\right] > 0,
\]

we have T_O^* < T_L^* < T_O^* + 1/θ. Furthermore, T_L^* increases with 1/θ from T^* to ∞ and T_O^* decreases with 1/θ from T^* to 0.

Example 3.10 Table 3.9 presents optimal T_F^*, T_L^*, and T_O^* for 1/θ and c_T/c_D when λ = 1. This indicates that all of T_i^* (i = F, L, O) increase with c_T/c_D, T_F^* decreases with 1/θ to T^*, T_L^* increases with 1/θ from T^*, T_O^* decreases with 1/θ from T^* to 0, and T_O^* < T_F^* and T_O^* < T_L^* < T_O^* + 1/θ.
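The chain T_O^* < T_L^* < T_O^* + 1/θ (and T_O^* < T_F^*) can be confirmed numerically. The following sketch is my illustration with assumed parameters λ = 1, θ = 2 (1/θ = 0.5) and c_T/c_D = 0.1, solving (3.51), (3.35), and (3.42) by bisection:

```python
import math

def bisect(f, lo, hi, tol=1e-12):
    # f strictly increasing with f(lo) < 0 < f(hi)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

lam, theta, c = 1.0, 2.0, 0.10

TO = bisect(lambda T: (1/lam + 1/theta)*(math.exp(lam*T) - 1) - T - c,
            0.0, 20.0)                                              # (3.51)
TF = bisect(lambda T: (math.exp(lam*T) - 1)/(theta + lam)
            - lam*(1 - math.exp(-theta*T))/(theta*(theta + lam)) - c,
            0.0, 20.0)                                              # (3.35)
TL = bisect(lambda T: (math.exp(lam*T) - (1 + lam*T))/lam
            - lam*math.exp(-theta*T)/(theta*(theta + lam)) - c,
            0.0, 20.0)                                              # (3.42)

print(f"T_O*={TO:.4f} < T_F*={TF:.4f}")
print(f"T_O*={TO:.4f} < T_L*={TL:.4f} < T_O*+1/theta={TO + 1/theta:.4f}")
```

The computed values (about 0.1596, 0.4739, and 0.5161) agree with the 1/θ = 0.5 row of Table 3.9 for c_T/c_D = 0.10.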


Table 3.9 Optimal T_F^*, T_L^*, and T_O^* when F(t) = 1 − exp(−t) and G(t) = 1 − exp(−θt)

          c_T/c_D = 0.01               c_T/c_D = 0.05               c_T/c_D = 0.10
1/θ       T_F^*    T_L^*    T_O^*      T_F^*    T_L^*    T_O^*      T_F^*    T_L^*    T_O^*
0.001     2.3989   0.1382   0.1372     3.9328   0.3004   0.2994     4.6161   0.4162   0.4152
0.002     1.7938   0.1382   0.1362     3.2601   0.3004   0.2984     3.9338   0.4162   0.4142
0.005     1.1036   0.1382   0.1333     2.4029   0.3004   0.2955     3.0495   0.4162   0.4113
0.010     0.7031   0.1382   0.1285     1.8017   0.3004   0.2906     2.4078   0.4162   0.4064
0.020     0.4253   0.1382   0.1197     1.2726   0.3004   0.2812     1.8116   0.4162   0.3968
0.050     0.2307   0.1392   0.0973     0.7419   0.3004   0.2550     1.1474   0.4162   0.3698
0.100     0.1764   0.1511   0.0713     0.5004   0.3017   0.2181     0.7884   0.4165   0.3299
0.200     0.1553   0.2030   0.0441     0.3859   0.3190   0.1643     0.5817   0.4239   0.2666
0.500     0.1446   0.3891   0.0194     0.3307   0.4492   0.0880     0.4739   0.5161   0.1596
1.000     0.1413   0.6546   0.0099     0.3149   0.6876   0.0477     0.4436   0.7269   0.0914

3.5 Problems

1. Derive (3.5) and (3.6).
2. Derive optimal T^* to minimize C_R(T).
3. Show that T^*/(1/θ^*) < √2 in Table 3.3.
4. Prove that T_F^* increases strictly with θ from T^* to ∞.
5. Prove that T_L decreases strictly with θ to T^*.
6. Derive (3.50).
7. Prove that T_O^* < T_F^* and T_O^* + 1/θ > T_L^*.

References

1. Nakagawa T (2005) Maintenance theory of reliability. Springer, London
2. Nakagawa T (2010) A summary of periodic and random inspection policies. Reliab Eng Syst Saf 95:906–911
3. Nakagawa T, Zhao X, Yun WY (2011) Optimal age replacement and inspection policies with random failure and replacement times. Int J Reliab Qual Saf Eng 18:1–12
4. Ito K, Nakamura S, Nakagawa T (2017) A summary of replacement policies for continuous damage models. In: Nakamura S, Qian CH, Nakagawa T (eds) Reliability modeling with computer and maintenance applications. World Scientific, Singapore, pp 331–343
5. Nakagawa T (2014) Random maintenance policies. Springer, London
6. Barlow RE, Proschan F (1965) Mathematical theory of reliability. Wiley, New York
7. Nakagawa T (2011) Stochastic processes with applications to reliability theory. Springer, London

Chapter 4

General Inspection Models

When the failure time follows an exponential distribution, the policies of inspection first, last, and overtime have been proposed, and their optimal policies have been derived and compared analytically and numerically in Sects. 3.3 and 3.4. As a theoretical study, it is of interest to formulate general inspection models, combining the periodic and random policies to satisfy commonly planned and randomly needed inspection times. This chapter takes up random inspection policies in which the unit is checked at random times forming a renewal process [1]. By formulating the interarrival distributions of checking times in renewal theory, the policies of inspection first, last, and overtime are investigated in Sect. 4.1. Furthermore, general models of inspection first, last, and overtime with a constant time T and n variables of random times are formulated, and their optimal policies are derived and compared in Sect. 4.2 [2]. As a simple example, inspection models with three variables T, Y_1 and Y_2 are introduced, and a new inspection model called inspection middle is proposed. Furthermore, two modified inspection models are considered, and when all inspection costs are the same, it is shown theoretically which policy is better than the others. These formulations could be applied to computer systems with faults [3] and to maintenance policies for reliability models [4].

4.1 Inspection First and Last

An operating unit has a failure distribution F(t) with finite mean μ ≡ ∫_0^∞ \overline{F}(t) dt, where \overline{Φ}(t) ≡ 1 − Φ(t) for any function Φ(t). The unit is checked at random times S_j = Y_1 + Y_2 + ⋯ + Y_j (j = 1, 2, ...), where S_0 ≡ Y_0 ≡ 0, and the random variables Y_j are independent of each other and have an identical distribution G(t) ≡ Pr{Y_j ≤ t} with finite mean 1/θ ≡ ∫_0^∞ \overline{G}(t) dt < ∞. Letting G^{(j)}(t) ≡ Pr{S_j ≤ t} (j = 1, 2, ...) be the j-fold Stieltjes convolution of G(t) with itself, where G^{(0)}(t) ≡ 1 for t ≥ 0, M(t) ≡ \sum_{j=1}^∞ G^{(j)}(t) represents the expected number of random inspections in (0, t]. Let c_R be the inspection cost of one check and c_D be the downtime cost per unit of time for the time elapsed between failure and its detection. Then, the total expected cost until failure detection is [1, p. 254], [4]

\[
C_R(G) = \sum_{j=0}^{\infty} \int_0^\infty \left\{\int_0^t \left[\int_{t-x}^\infty \left[(j+1)c_R + c_D(x + y - t)\right] dG(y)\right] dG^{(j)}(x)\right\} dF(t)
= \left(c_R + \frac{c_D}{\theta}\right)\left[1 + \int_0^\infty \overline{F}(t)\,dM(t)\right] - c_D\mu, \tag{4.1}
\]

which agrees with (3.4) in Sect. 3.1. In particular, when F(t) = 1 − exp(−λt), noting that

\[
\int_0^\infty e^{-\lambda t}\,dM(t) = \frac{G^*(\lambda)}{1 - G^*(\lambda)},
\]

where Φ^*(λ) ≡ ∫_0^∞ e^{−λt} dΦ(t) for λ > 0 and any function Φ(t), the total expected cost is

\[
C_R(G) = \frac{c_R + c_D/\theta}{1 - G^*(\lambda)} - \frac{c_D}{\lambda} . \tag{4.2}
\]

© Springer Nature Switzerland AG 2023
K. Ito and T. Nakagawa, Optimal Inspection Models with Their Applications, Springer Series in Reliability Engineering, https://doi.org/10.1007/978-3-031-22021-0_4

4.1.1 Inspection First

Suppose that the unit is checked at time T (0 < T ≤ ∞) or at time Y_j, whichever occurs first. Then, by setting Z_j ≡ min{T, Y_j} (j = 1, 2, ...), Z_j has a distribution

\[
G_F(t) \equiv \Pr\{Z_j \le t\} =
\begin{cases} G(t) & \text{for } t < T, \\ 1 & \text{for } t \ge T, \end{cases}
\]

and has a mean time

\[
\frac{1}{\theta_F} \equiv E\{Z_j\} = \int_0^\infty \overline{G}_F(t)\,dt = \int_0^T \overline{G}(t)\,dt .
\]

Thus, the total expected cost is, from (4.1),

\[
C_R(G_F) = \left(c_R + \frac{c_D}{\theta_F}\right)\left[1 + \int_0^\infty \overline{F}(t)\,dM_F(t)\right] - c_D\mu, \tag{4.3}
\]

where M_F(t) ≡ \sum_{j=1}^∞ G_F^{(j)}(t). When F(t) = 1 − exp(−λt), noting that

\[
G_F^*(\lambda) \equiv \int_0^\infty e^{-\lambda t}\,dG_F(t) = e^{-\lambda T} + \int_0^T \lambda e^{-\lambda t} G(t)\,dt,
\]

we have

\[
1 + M_F^*(\lambda) = \frac{1}{\int_0^T \lambda e^{-\lambda t}\,\overline{G}(t)\,dt} .
\]

Because C_R(G_F) is a function of T, (4.3) is rewritten as

\[
C_F(T) = \frac{c_R + c_D \int_0^T \overline{G}(t)\,dt}{\int_0^T \lambda e^{-\lambda t}\,\overline{G}(t)\,dt} - \frac{c_D}{\lambda}, \tag{4.4}
\]

which agrees with (3.31) when c_T = c_R and F(t) = 1 − exp(−λt). We find optimal T_F^* to minimize C_F(T) in (4.4). Differentiating C_F(T) with respect to T and setting it equal to zero,

\[
L_F(T) \equiv \int_0^T \left[e^{\lambda(T-t)} - 1\right]\overline{G}(t)\,dt = \frac{c_R}{c_D}, \tag{4.5}
\]

whose left-hand side increases strictly with T from 0 to ∞. Thus, there exists a finite and unique T_F^* (0 < T_F^* < ∞) which satisfies (4.5), and the resulting cost is

\[
\lambda C_F(T_F^*) = c_D\left(e^{\lambda T_F^*} - 1\right). \tag{4.6}
\]
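Condition (4.5) holds for a general working-time distribution G, so its root can be found by evaluating L_F(T) with numerical quadrature inside a bisection loop. The sketch below is my illustration (composite trapezoidal rule; not the book's code); with the exponential G(t) = 1 − e^{−θt}, θ = 2, λ = 1 and c_R/c_D = 0.1, the root reproduces the value 0.4739 found in Table 3.7 of Chap. 3:

```python
import math

def L_F(T, Gbar, lam, n=4000):
    # composite trapezoid for (4.5): int_0^T (e^{lam(T-t)} - 1) * Gbar(t) dt
    if T <= 0.0:
        return 0.0
    h = T / n
    s = 0.0
    for k in range(n + 1):
        t = k * h
        w = 0.5 if k in (0, n) else 1.0
        s += w * (math.exp(lam * (T - t)) - 1.0) * Gbar(t)
    return s * h

def solve(Gbar, lam, c, hi=20.0, tol=1e-9):
    # bisection on the strictly increasing L_F(T) - c
    lo = 0.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if L_F(mid, Gbar, lam) < c else (lo, mid)
    return 0.5 * (lo + hi)

theta = 2.0
TF = solve(lambda t: math.exp(-theta * t), lam=1.0, c=0.10)
print(f"T_F* = {TF:.4f}")  # about 0.4739
```

Replacing the lambda with any other survival function \overline{G}(t) (for example a Weibull or uniform working time) gives the corresponding T_F^* without new algebra.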

4.1.2 Inspection Last

Suppose that the unit is checked at time T (0 ≤ T < ∞) or at time Y_j, whichever occurs last. Thus, by setting \widetilde{Z}_j ≡ max{T, Y_j} (j = 1, 2, ...), \widetilde{Z}_j has a distribution

\[
G_L(t) \equiv \Pr\{\widetilde{Z}_j \le t\} =
\begin{cases} 0 & \text{for } t < T, \\ G(t) & \text{for } t \ge T, \end{cases}
\]

and has a mean time

\[
\frac{1}{\theta_L} \equiv E\{\widetilde{Z}_j\} = \int_0^\infty \overline{G}_L(t)\,dt = T + \int_T^\infty \overline{G}(t)\,dt .
\]

Thus, the total expected cost is, from (4.1),

\[
C_R(G_L) = \left(c_R + \frac{c_D}{\theta_L}\right)\left[1 + \int_0^\infty \overline{F}(t)\,dM_L(t)\right] - c_D\mu, \tag{4.7}
\]

where M_L(t) ≡ \sum_{j=1}^∞ G_L^{(j)}(t). When F(t) = 1 − exp(−λt), noting that

\[
G_L^*(\lambda) = \int_0^\infty e^{-\lambda t}\,dG_L(t) = \int_T^\infty \lambda e^{-\lambda t} G(t)\,dt,
\]

we have

\[
1 + M_L^*(\lambda) = \frac{1}{1 - \int_T^\infty \lambda e^{-\lambda t} G(t)\,dt} .
\]

Thus, the total expected cost is

\[
C_L(T) = \frac{c_R + c_D\left[T + \int_T^\infty \overline{G}(t)\,dt\right]}{1 - \int_T^\infty \lambda e^{-\lambda t} G(t)\,dt} - \frac{c_D}{\lambda}, \tag{4.8}
\]

which agrees with (3.39) when c_T = c_R and F(t) = 1 − exp(−λt). We find optimal T_L^* to minimize C_L(T) in (4.8). Differentiating C_L(T) with respect to T and setting it equal to zero,

\[
L_L(T) \equiv \frac{1}{\lambda}\left(e^{\lambda T} - \lambda T - 1\right) - \int_T^\infty \left[1 - e^{-\lambda(t-T)}\right]\overline{G}(t)\,dt = \frac{c_R}{c_D}, \tag{4.9}
\]

whose left-hand side increases strictly with T from L_L(0) < 0 to ∞. Thus, there exists a finite and unique T_L^* (0 < T_L^* < ∞) which satisfies (4.9), and the resulting cost is

\[
\lambda C_L(T_L^*) = c_D\left(e^{\lambda T_L^*} - 1\right). \tag{4.10}
\]

Next, compare inspection first and last. From (4.5) and (4.9),

\[
L_{LF}(T) \equiv L_L(T) - L_F(T)
= \frac{1}{\lambda}\left(e^{\lambda T} - 1 - \lambda T\right)
- \int_T^\infty \left[1 - e^{-\lambda(t-T)}\right]\overline{G}(t)\,dt
- \int_0^T \left[e^{\lambda(T-t)} - 1\right]\overline{G}(t)\,dt,
\]


whose right-hand side increases strictly with T from L_{LF}(0) < 0 to ∞ (Problem 1). Thus, there exists a finite and unique T_I (0 < T_I < ∞) which satisfies L_{LF}(T) = 0. From (4.5), if

\[
L_F(T_I) \equiv \int_0^{T_I} \left[e^{\lambda(T_I - t)} - 1\right]\overline{G}(t)\,dt \ge \frac{c_R}{c_D},
\]

then T_F^* ≤ T_L^*, and from (4.6) and (4.10), C_F(T_F^*) ≤ C_L(T_L^*), i.e., inspection first is better than inspection last. Conversely, if L_F(T_I) < c_R/c_D, then T_L^* < T_F^*, and C_L(T_L^*) < C_F(T_F^*), i.e., inspection last is better than inspection first. This means that inspection last becomes better than inspection first as c_R/c_D becomes larger. When F(t) = 1 − exp(−λt) and G(t) = 1 − exp(−θt), optimal T_F^* and T_L^* are computed and their comparisons are given in Example 3.8 of Chap. 3.

4.2 General Inspection Models

We extend inspection first and last to general models with a constant checking time T and n (n = 1, 2, ...) random checking times, and derive their optimal policies theoretically.

4.2.1 Inspection First

Suppose that the unit is checked at time T (0 < T ≤ ∞) or at random times Y_{j1}, Y_{j2}, ..., Y_{jn} (j = 1, 2, ...), whichever occurs first, where the Y_{ji} (i = 1, 2, ..., n) have an identical distribution G_i(t) ≡ Pr{Y_{ji} ≤ t} with finite mean 1/θ_i (0 < θ_i < ∞) for j = 1, 2, .... Then, by setting Z_{jn} ≡ min{T, Y_{j1}, Y_{j2}, ..., Y_{jn}} (j = 1, 2, ...), Z_{jn} has a distribution

\[
G_{Fn}(t) \equiv \Pr\{Z_{jn} \le t\} =
\begin{cases} 1 - \prod_{i=1}^{n} \overline{G}_i(t) & \text{for } t < T, \\ 1 & \text{for } t \ge T, \end{cases}
\]

and has a mean time

\[
\frac{1}{\theta_{Fn}} \equiv E\{Z_{jn}\} = \int_0^T \overline{G}_{Fn}(t)\,dt = \int_0^T \left[\prod_{i=1}^{n} \overline{G}_i(t)\right] dt .
\]

Thus, the total expected cost is, from (4.1),

\[
C_R(G_{Fn}) = \left(c_R + \frac{c_D}{\theta_{Fn}}\right)\left[1 + \int_0^\infty \overline{F}(t)\,dM_{Fn}(t)\right] - c_D\mu, \tag{4.11}
\]

where M_{Fn}(t) ≡ \sum_{j=1}^∞ G_{Fn}^{(j)}(t). In particular, when F(t) = 1 − exp(−λt), noting that

\[
G_{Fn}^*(\lambda) \equiv \int_0^\infty e^{-\lambda t}\,dG_{Fn}(t) = 1 - \int_0^T \lambda e^{-\lambda t}\left[\prod_{i=1}^{n} \overline{G}_i(t)\right] dt,
\]

we have

\[
1 + M_{Fn}^*(\lambda) = \frac{1}{\int_0^T \lambda e^{-\lambda t}\left[\prod_{i=1}^{n} \overline{G}_i(t)\right] dt} .
\]

Thus, the total expected cost is, from (4.11),

\[
C_{Fn}(T) = \frac{c_R + c_D \int_0^T \left[\prod_{i=1}^{n} \overline{G}_i(t)\right] dt}{\int_0^T \lambda e^{-\lambda t}\left[\prod_{i=1}^{n} \overline{G}_i(t)\right] dt} - \frac{c_D}{\lambda}, \tag{4.12}
\]

which agrees with (4.4) when n = 1. We find optimal T_{Fn}^* to minimize C_{Fn}(T) in (4.12). Differentiating C_{Fn}(T) with respect to T and setting it equal to zero,

\[
L_{Fn}(T) \equiv \int_0^T \left[e^{\lambda(T-t)} - 1\right]\left[\prod_{i=1}^{n} \overline{G}_i(t)\right] dt = \frac{c_R}{c_D}, \tag{4.13}
\]

whose left-hand side increases strictly with T from 0 to ∞. Thus, there exists a finite and unique T_{Fn}^* (0 < T_{Fn}^* < ∞) which satisfies (4.13), and the resulting cost is

\[
\lambda C_{Fn}(T_{Fn}^*) = c_D\left(e^{\lambda T_{Fn}^*} - 1\right). \tag{4.14}
\]

Noting that the left-hand side of (4.13) decreases with n from that of (4.5) to 0, T_{Fn}^* increases with n from T_F^* given in (4.5) to ∞.
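The monotonicity of T_{Fn}^* in n is easy to illustrate for identical exponential G_i(t) = 1 − e^{−θt}, since then ∏_{i=1}^n \overline{G}_i(t) = e^{−nθt} and (4.13) has the closed form e^{λT}(1 − e^{−(nθ+λ)T})/(nθ+λ) − (1 − e^{−nθT})/(nθ) = c_R/c_D. The parameters below (λ = 1, θ = 2, c_R/c_D = 0.1) are my assumption for the sketch:

```python
import math

lam, theta, c = 1.0, 2.0, 0.10

def L_Fn(T, n):
    a = n * theta  # combined rate: prod Gbar_i(t) = e^{-n*theta*t}
    return (math.exp(lam*T) * (1.0 - math.exp(-(a + lam)*T)) / (a + lam)
            - (1.0 - math.exp(-a*T)) / a)

def T_Fn(n, lo=0.0, hi=30.0, tol=1e-10):
    # bisection on the strictly increasing L_Fn(T, n) - c
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if L_Fn(mid, n) < c else (lo, mid)
    return 0.5 * (lo + hi)

roots = [T_Fn(n) for n in (1, 2, 3)]
print([round(r, 4) for r in roots])  # strictly increasing with n
```

For n = 1 the root coincides with T_F^* ≈ 0.4739 of the single-random-time model, and each additional random checking variable pushes the optimal planned time T_{Fn}^* upward, as stated above.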

4.2.2 Inspection Last

Suppose that the unit is checked at time T or at random times Y_{j1}, Y_{j2}, ..., Y_{jn} (j = 1, 2, ...), whichever occurs last. Then, by setting \widetilde{Z}_{jn} ≡ max{T, Y_{j1}, Y_{j2}, ..., Y_{jn}}, \widetilde{Z}_{jn} has a distribution

\[
G_{Ln}(t) \equiv \Pr\{\widetilde{Z}_{jn} \le t\} =
\begin{cases} 0 & \text{for } t < T, \\ \prod_{i=1}^{n} G_i(t) & \text{for } t \ge T, \end{cases}
\]

and has a mean time

\[
\frac{1}{\theta_{Ln}} \equiv E\{\widetilde{Z}_{jn}\} = \int_0^\infty \overline{G}_{Ln}(t)\,dt = T + \int_T^\infty \left[1 - \prod_{i=1}^{n} G_i(t)\right] dt .
\]

Thus, the total expected cost is, from (4.1),

\[
C_R(G_{Ln}) = \left(c_R + \frac{c_D}{\theta_{Ln}}\right)\left[1 + \int_0^\infty \overline{F}(t)\,dM_{Ln}(t)\right] - c_D\mu, \tag{4.15}
\]

where M_{Ln}(t) ≡ \sum_{j=1}^∞ G_{Ln}^{(j)}(t). In particular, when F(t) = 1 − exp(−λt), noting that

\[
G_{Ln}^*(\lambda) \equiv \int_0^\infty e^{-\lambda t}\,dG_{Ln}(t) = \int_T^\infty \lambda e^{-\lambda t}\left[\prod_{i=1}^{n} G_i(t)\right] dt,
\]

we have

\[
1 + M_{Ln}^*(\lambda) = \frac{1}{1 - \int_T^\infty \lambda e^{-\lambda t}\left[\prod_{i=1}^{n} G_i(t)\right] dt} .
\]

Thus, the total expected cost is, from (4.15),

\[
C_{Ln}(T) = \frac{c_R + c_D\left\{T + \int_T^\infty \left[1 - \prod_{i=1}^{n} G_i(t)\right] dt\right\}}{1 - \int_T^\infty \lambda e^{-\lambda t}\left[\prod_{i=1}^{n} G_i(t)\right] dt} - \frac{c_D}{\lambda}, \tag{4.16}
\]

which agrees with (4.8) when n = 1. We find optimal T_{Ln}^* to minimize C_{Ln}(T) in (4.16). Differentiating C_{Ln}(T) with respect to T and setting it equal to zero,

\[
L_{Ln}(T) \equiv \frac{1}{\lambda}\left(e^{\lambda T} - 1 - \lambda T\right)
- \int_T^\infty \left[1 - e^{-\lambda(t-T)}\right]\left[1 - \prod_{i=1}^{n} G_i(t)\right] dt = \frac{c_R}{c_D}, \tag{4.17}
\]

whose left-hand side increases strictly with T from L_{Ln}(0) < 0 to ∞. Thus, there exists a finite and unique T_{Ln}^* (0 < T_{Ln}^* < ∞) which satisfies (4.17), and the resulting cost is

\[
\lambda C_{Ln}(T_{Ln}^*) = c_D\left(e^{\lambda T_{Ln}^*} - 1\right). \tag{4.18}
\]


Noting that the left-hand side of (4.17) decreases with n from that of (4.9), T_{Ln}^* increases with n from T_L^* given in (4.9). Next, compare inspection first and last. From (4.13) and (4.17),

\[
L_n(T) \equiv L_{Ln}(T) - L_{Fn}(T)
= \frac{1}{\lambda}\left(e^{\lambda T} - 1 - \lambda T\right)
- \int_T^\infty \left[1 - e^{-\lambda(t-T)}\right]\left[1 - \prod_{i=1}^{n} G_i(t)\right] dt
- \int_0^T \left[e^{\lambda(T-t)} - 1\right]\left[\prod_{i=1}^{n} \overline{G}_i(t)\right] dt,
\]

whose right-hand side increases strictly with T from L_n(0) < 0 to ∞ (Problem 2). Thus, there exists a finite and unique T_{In} (0 < T_{In} < ∞) which satisfies L_n(T) = 0. From (4.13), if

\[
L_{Fn}(T_{In}) \equiv \int_0^{T_{In}} \left[e^{\lambda(T_{In}-t)} - 1\right]\left[\prod_{i=1}^{n} \overline{G}_i(t)\right] dt \ge \frac{c_R}{c_D},
\]

then T_{Fn}^* ≤ T_{Ln}^*, and from (4.14) and (4.18), C_{Fn}(T_{Fn}^*) ≤ C_{Ln}(T_{Ln}^*), i.e., inspection first is better than inspection last. Conversely, if L_{Fn}(T_{In}) < c_R/c_D, then T_{Ln}^* < T_{Fn}^*, and C_{Ln}(T_{Ln}^*) < C_{Fn}(T_{Fn}^*), i.e., inspection last is better than inspection first.

Example 4.1 Table 4.1 presents optimal T_{Fn}^* in (4.13) and T_{Ln}^* in (4.17) when F(t) = 1 − exp(−t) and G_i(t) = 1 − exp(−10t) for n and c_R/c_D. This indicates that both T_{Fn}^* and T_{Ln}^* increase with c_R/c_D and n. In addition, when n = 5, T_{Fn}^* > T_{Ln}^* for all c_R/c_D, and when n = 10, T_{Fn}^* < T_{Ln}^* for c_R/c_D ≤ 0.05 and T_{Fn}^* > T_{Ln}^* for c_R/c_D ≥ 0.10. This means that if c_R/c_D and n are large, inspection last is better than inspection first. In other words, if inspection cost c_R is large or n is large, we should not check the unit early, and inspection last is better than inspection first.

Furthermore, when c_T is the inspection cost at time T and c_i (i = 1, 2, ..., n) are the respective inspection costs at times Y_{ji} (j = 1, 2, ...), (4.12) becomes

\[
C_{Fn}(T) = \frac{c_T \prod_{i=1}^{n} \overline{G}_i(T)
+ c_D \int_0^T \left[\prod_{i=1}^{n} \overline{G}_i(t)\right] dt
+ \sum_{i=1}^{n} c_i \int_0^T \left[\prod_{j=1,\,j\ne i}^{n} \overline{G}_j(t)\right] dG_i(t)}
{\int_0^T \lambda e^{-\lambda t}\left[\prod_{i=1}^{n} \overline{G}_i(t)\right] dt} - \frac{c_D}{\lambda}, \tag{4.19}
\]

and (4.16) becomes

\[
C_{Ln}(T) = \frac{c_T\left[1 - \prod_{i=1}^{n} G_i(T)\right]
+ c_D\left\{T + \int_T^\infty \left[1 - \prod_{i=1}^{n} G_i(t)\right] dt\right\}
+ \sum_{i=1}^{n} c_i \int_T^\infty \left[\prod_{j=1,\,j\ne i}^{n} G_j(t)\right] dG_i(t)}
{1 - \int_T^\infty \lambda e^{-\lambda t}\left[\prod_{i=1}^{n} G_i(t)\right] dt} - \frac{c_D}{\lambda} . \tag{4.20}
\]


Table 4.1 Optimal T_{Fn}^* and T_{Ln}^* when F(t) = 1 − exp(−t) and G_i(t) = 1 − exp(−10t)

            n = 5                  n = 10
c_R/c_D     T_{Fn}^*   T_{Ln}^*   T_{Fn}^*   T_{Ln}^*
0.01        1.051      0.151      1.064      2.479
0.02        1.548      0.201      1.563      2.480
0.05        2.319      0.303      2.336      2.483
0.10        2.952      0.417      2.970      2.487
0.20        3.607      0.573      3.625      2.495
0.50        4.488      0.858      4.506      2.520
1.00        5.159      1.146      5.176      2.560
2.00        5.830      1.505      5.846      2.635
5.00        6.717      2.091      6.732      2.831
10.00       7.387      2.611      7.401      3.091

When c_T = c_i, it has been shown that both T_{Fn}^* and T_{Ln}^* increase with n from T_F^* and T_L^*, respectively. So that, from (4.14) and (4.18), the optimal policies with n random times become worse as n increases. However, when c_T > c_i, the optimal policy with random times might be better than that with time T, as shown in Sect. 3.1.2.

4.3 Inspection Models with Three Variables

We consider inspection models with time T and random times Y_i (i = 1, 2) when F(t) = 1 − exp(−λt), where the Y_i have different distributions Pr{Y_i ≤ t} = G_i(t) with means 1/θ_i (i = 1, 2).

4.3.1 Inspection First, Last and Middle

Suppose that the unit is checked at time T, Y_1, or Y_2, whichever occurs first. Then, the total expected cost is, from (4.12),

\[
C_F(T) = \frac{c_R + c_D \int_0^T \overline{G}_1(t)\overline{G}_2(t)\,dt}{\int_0^T \lambda e^{-\lambda t}\,\overline{G}_1(t)\overline{G}_2(t)\,dt} - \frac{c_D}{\lambda}, \tag{4.21}
\]

optimal T_F^* satisfies

\[
L_F(T) \equiv \int_0^T \left[e^{\lambda(T-t)} - 1\right]\overline{G}_1(t)\overline{G}_2(t)\,dt = \frac{c_R}{c_D}, \tag{4.22}
\]

and the resulting cost is

\[
\lambda C_F(T_F^*) = c_D\left(e^{\lambda T_F^*} - 1\right). \tag{4.23}
\]

(4.24)

optimal TL∗ satisfies L L (T ) ≡

 1  λT e − 1 − λT − λ



∞ T

  cR 1 − e−λ(t−T ) [1 − G 1 (t)G 2 (t)]dt = , cD (4.25)

and its resulting cost is  ∗  λC L (TL∗ ) = c D eλTL − 1 .

(4.26)

Finally, suppose that the unit is checked at time T, Y_1, or Y_2, whichever occurs middle, i.e., it is checked at time T for Y_1 < T < Y_2 and Y_2 < T < Y_1, at time Y_1 for T < Y_1 < Y_2 and Y_2 < Y_1 < T, and at time Y_2 for T < Y_2 < Y_1 and Y_1 < Y_2 < T. Then, the probability that the unit is checked at time T before failure is

\[
\overline{F}(T)\left[G_1(T)\overline{G}_2(T) + \overline{G}_1(T)G_2(T)\right],
\]

the probability that it is checked at time Y_1 before failure is

\[
\int_0^T \overline{F}(t)G_2(t)\,dG_1(t) + \int_T^\infty \overline{F}(t)\overline{G}_2(t)\,dG_1(t),
\]

and the probability that it is checked at time Y_2 before failure is

\[
\int_0^T \overline{F}(t)G_1(t)\,dG_2(t) + \int_T^\infty \overline{F}(t)\overline{G}_1(t)\,dG_2(t).
\]

The probability that the failure is detected at time T is

\[
F(T)\left[G_1(T)\overline{G}_2(T) + \overline{G}_1(T)G_2(T)\right],
\]

and the corresponding mean time from failure to its detection is

\[
\left[G_1(T)\overline{G}_2(T) + \overline{G}_1(T)G_2(T)\right]\int_0^T (T - t)\,dF(t). \tag{4.27}
\]


The probability that the failure is detected at time Y_1 is

\[
\int_0^T F(t)G_2(t)\,dG_1(t) + \int_T^\infty F(t)\overline{G}_2(t)\,dG_1(t),
\]

and the corresponding mean time from failure to its detection is

\[
\int_0^T G_2(t)\left[\int_0^t (t - u)\,dF(u)\right] dG_1(t)
+ \int_T^\infty \overline{G}_2(t)\left[\int_0^t (t - u)\,dF(u)\right] dG_1(t). \tag{4.28}
\]

The probability that the failure is detected at time Y_2 is

\[
\int_0^T F(t)G_1(t)\,dG_2(t) + \int_T^\infty F(t)\overline{G}_1(t)\,dG_2(t),
\]

and the corresponding mean time from failure to its detection is

\[
\int_0^T G_1(t)\left[\int_0^t (t - u)\,dF(u)\right] dG_2(t)
+ \int_T^\infty \overline{G}_1(t)\left[\int_0^t (t - u)\,dF(u)\right] dG_2(t). \tag{4.29}
\]

Thus, summing up (4.27), (4.28) and (4.29), the mean time l_D(T) from failure to its detection is given by a renewal equation

\[
l_D(T) = l_D(T)\left\{1 - \int_0^T \left[1 - G_1(t)G_2(t)\right] dF(t) - \int_T^\infty \overline{G}_1(t)\overline{G}_2(t)\,dF(t)\right\}
+ \int_0^T \left[1 - G_1(t)G_2(t)\right] F(t)\,dt + \int_T^\infty \overline{G}_1(t)\overline{G}_2(t)F(t)\,dt,
\]

i.e.,

\[
l_D(T) = \frac{\int_0^T \left[1 - G_1(t)G_2(t)\right] F(t)\,dt + \int_T^\infty \overline{G}_1(t)\overline{G}_2(t)F(t)\,dt}
{\int_0^T \left[1 - G_1(t)G_2(t)\right] dF(t) + \int_T^\infty \overline{G}_1(t)\overline{G}_2(t)\,dF(t)} . \tag{4.30}
\]

The expected number M_M(T) of checks until failure detection is given by a renewal equation

\[
M_M(T) = 1 + M_M(T)\left\{1 - \int_0^T \left[1 - G_1(t)G_2(t)\right] dF(t) - \int_T^\infty \overline{G}_1(t)\overline{G}_2(t)\,dF(t)\right\},
\]

i.e.,

\[
M_M(T) = \frac{1}{\int_0^T \left[1 - G_1(t)G_2(t)\right] dF(t) + \int_T^\infty \overline{G}_1(t)\overline{G}_2(t)\,dF(t)} . \tag{4.31}
\]


Therefore, the total expected cost until failure detection is

\[
C_M(T) = c_R M_M(T) + c_D l_D(T)
= \frac{c_R + c_D\left\{\int_0^T \left[1 - G_1(t)G_2(t)\right] dt + \int_T^\infty \overline{G}_1(t)\overline{G}_2(t)\,dt\right\}}
{\int_0^T \lambda e^{-\lambda t}\left[1 - G_1(t)G_2(t)\right] dt + \int_T^\infty \lambda e^{-\lambda t}\,\overline{G}_1(t)\overline{G}_2(t)\,dt}
- \frac{c_D}{\lambda} . \tag{4.32}
\]

Clearly,

\[
\lim_{T\to 0} C_M(T) = \lim_{T\to\infty} C_F(T), \qquad
\lim_{T\to\infty} C_M(T) = \lim_{T\to 0} C_L(T).
\]

We find optimal T_M^* to minimize C_M(T) in (4.32). Differentiating C_M(T) with respect to T and setting it equal to zero,

\[
L_M(T) \equiv \int_0^T \left[e^{\lambda(T-t)} - 1\right]\left[1 - G_1(t)G_2(t)\right] dt
- \int_T^\infty \left[1 - e^{-\lambda(t-T)}\right]\overline{G}_1(t)\overline{G}_2(t)\,dt = \frac{c_R}{c_D}, \tag{4.33}
\]

whose left-hand side increases strictly with T from L_M(0) < 0 to ∞. Thus, there exists a finite and unique T_M^* (0 < T_M^* < ∞) which satisfies (4.33), and the resulting cost is

\[
\lambda C_M(T_M^*) = c_D\left(e^{\lambda T_M^*} - 1\right). \tag{4.34}
\]

Note that when the unit is checked at max{Y_1, Y_2} before time T, at time T for {Y_1 < T < Y_2} and {Y_2 < T < Y_1}, or at min{Y_1, Y_2} after time T, this policy agrees with inspection middle in (4.32), i.e., the approaches of whichever occurs first and last are included within inspection middle.
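For exponential G_i(t) = 1 − e^{−θ_i t}, both integrals in (4.33) have closed forms, since 1 − G_1(t)G_2(t) = e^{−θ_1 t} + e^{−θ_2 t} − e^{−(θ_1+θ_2)t} and ∫_T^∞ (1 − e^{−λ(t−T)}) e^{−at} dt = e^{−aT} λ/[a(a+λ)]. The sketch below is my illustration (assumed parameters 1/θ_1 = 0.1, 1/θ_2 = 0.5, λ = 1, c_R/c_D = 0.01); it solves (4.33) and checks that T_M^* falls below T_F^* of (4.22) for these parameters:

```python
import math

lam, th1, th2, c = 1.0, 10.0, 2.0, 0.01

def I(a, T):
    # closed form of int_0^T (e^{lam(T-t)} - 1) e^{-a t} dt
    return (math.exp(lam*T) * (1.0 - math.exp(-(a + lam)*T)) / (a + lam)
            - (1.0 - math.exp(-a*T)) / a)

def L_M(T):
    a12 = th1 + th2
    first = I(th1, T) + I(th2, T) - I(a12, T)            # integrand 1 - G1*G2
    tail = math.exp(-a12*T) * lam / (a12 * (a12 + lam))  # Gbar1*Gbar2 tail term
    return first - tail

def L_F(T):
    return I(th1 + th2, T)  # integrand Gbar1*Gbar2 = e^{-(th1+th2) t}

def root(f, lo=0.0, hi=20.0, tol=1e-10):
    # bisection; f is strictly increasing with f(0) <= 0 < f(hi) - c
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < c else (lo, mid)
    return 0.5 * (lo + hi)

TM, TF = root(L_M), root(L_F)
print(f"T_M* = {TM:.4f} < T_F* = {TF:.4f}")
```

Here L_M(0) = −λ/[(θ_1+θ_2)(θ_1+θ_2+λ)] < 0, matching the statement that the left-hand side of (4.33) starts negative.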

4.3.2 Comparisons of Inspection First, Last and Middle

It is easily seen from (4.23), (4.26), and (4.34) that if optimal T_i^* (i = F, L, M) is the smallest among them, policy i is the best one. We first compare inspection first and middle. Letting

\[
L_{MF}(T) \equiv L_M(T) - L_F(T)
= \int_0^T \left[e^{\lambda(T-t)} - 1\right]\left[G_1(t)\overline{G}_2(t) + \overline{G}_1(t)G_2(t)\right] dt
- \int_T^\infty \left[1 - e^{-\lambda(t-T)}\right]\overline{G}_1(t)\overline{G}_2(t)\,dt,
\]


L_{MF}(T) increases strictly with T from L_{MF}(0) < 0 to ∞. Thus, there exists a finite and unique T_{MF} (0 < T_{MF} < ∞) which satisfies L_{MF}(T) = 0. Therefore, we have the following results:

(i) If L_M(T_{MF}) ≥ c_R/c_D, then T_F^* ≤ T_M^*, i.e., inspection first is better than inspection middle.
(ii) If L_M(T_{MF}) < c_R/c_D, then T_F^* > T_M^*, i.e., inspection middle is better than inspection first.

This indicates that if c_R/c_D becomes larger, we should adopt inspection middle. In other words, if inspection cost c_R is smaller or downtime cost c_D is larger, then the unit should be checked as early as possible, and vice versa. Next, we compare inspection last and middle. Letting

\[
L_{LM}(T) \equiv L_L(T) - L_M(T)
= \int_0^T \left[e^{\lambda(T-t)} - 1\right] G_1(t)G_2(t)\,dt
- \int_T^\infty \left[1 - e^{-\lambda(t-T)}\right]\left[G_1(t)\overline{G}_2(t) + \overline{G}_1(t)G_2(t)\right] dt,
\]

L_{LM}(T) increases strictly with T from L_{LM}(0) < 0 to ∞. Thus, there exists a finite and unique T_{LM} (0 < T_{LM} < ∞) which satisfies L_{LM}(T) = 0. Therefore, we have the following results:

(iii) If L_M(T_{LM}) ≥ c_R/c_D, then T_M^* ≤ T_L^*, i.e., inspection middle is better than inspection last.
(iv) If L_M(T_{LM}) < c_R/c_D, then T_M^* > T_L^*, i.e., inspection last is better than inspection middle.

This indicates that if c_R/c_D becomes smaller, we should adopt inspection middle. From (i)–(iv), we can summarize the following results of inspection first, last, and middle (Problem 3):

(i) If L_M(T_{MF}) ≥ c_R/c_D and L_M(T_{LM}) ≥ c_R/c_D, then C_F(T_F^*) ≤ C_M(T_M^*) ≤ C_L(T_L^*).
(ii) If L_M(T_{MF}) < c_R/c_D < L_M(T_{LM}) and L_F(T_{LF}) ≥ c_R/c_D, then C_M(T_M^*) < C_F(T_F^*) ≤ C_L(T_L^*).
(iii) If L_M(T_{LM}) < c_R/c_D < L_M(T_{MF}) and L_F(T_{LF}) < c_R/c_D, then C_L(T_L^*) < C_F(T_F^*) < C_M(T_M^*).
(iv) If L_M(T_{MF}) < c_R/c_D and L_M(T_{LM}) ≤ c_R/c_D, then C_L(T_L^*) ≤ C_M(T_M^*) < C_F(T_F^*).

This means that as c_R/c_D becomes larger, the ranking of inspections moves from inspection first to middle and then to last, and inspection middle never becomes the worst among the three policies.

L L M (T ) increases strictly with T from L L M (0) < 0 to ∞. Thus, there exists a finite and unique TL M (0 < TL M < ∞) which satisfies L L M (T ) = 0. Therefore, we have the following results : (iii) If L M (TL M ) ≥ c R /c D , then TM∗ ≤ TL∗ , i.e., inspection middle is better than inspection last. (iv) If L M (TL M ) < c R /c D , then TM∗ > TL∗ , i.e., inspection last is better than inspection middle. This indicates that if c R /c D becomes smaller, we should adopt inspection middle. From (i)–(iv), we can summarize the following results of inspection first, last, and middle (Problem 3): (i) If L M (TM F ) ≥ c R /c D and L M (TL M ) ≥ c R /c D , then C F (TF∗ ) ≤ C M (TM∗ ) ≤ C L (TL∗ ). (ii) If L M (TM F ) < c R /c D < L M (TL M ) and L F (TL F ) ≥ c R /c D , then C M (TM∗ ) < C F (TF∗ ) ≤ C L (TL∗ ). (iii) If L M (TL M ) < c R /c D < L M (TM F ) and L F (TL F ) < c R /c D , then C L (TL∗ ) < C F (TF∗ ) < C M (TM∗ ). (iv) If L M (TM F ) < c R /c D and L M (TL M ) ≤ c R /c D , then C L (TL∗ ) ≤ C M (TM∗ ) < C F (TF∗ ). This means that as c R /c D becomes larger, the ranking of inspections moves to inspection first, middle, and last, and inspection middle does not become the worst among the three policies.


Table 4.2 Optimal T_F^*, T_L^*, T_M^*, L_M(T_{MF}) and L_M(T_{LM}) when F(t) = 1 − exp(−t) and G_i(t) = 1 − exp(−θ_i t) (i = 1, 2)

             1/θ_1 = 0.1, 1/θ_2 = 0.5     1/θ_1 = 0.5, 1/θ_2 = 1.0
c_R/c_D      T_F^*    T_L^*    T_M^*      T_F^*    T_L^*    T_M^*
0.01         0.186    0.336    0.141      0.148    0.642    0.226
0.02         0.293    0.355    0.200      0.213    0.650    0.258
0.05         0.550    0.408    0.321      0.348    0.675    0.338
0.10         0.869    0.485    0.460      0.507    0.715    0.442
0.20         1.304    0.610    0.660      0.741    0.789    0.598
0.50         2.026    0.871    1.054      1.201    0.974    0.907
1.00         2.645    1.151    1.468      1.674    1.207    1.237
2.00         3.299    1.507    1.979      2.234    1.532    1.659
5.00         4.191    2.091    2.765      3.060    2.098    2.350
10.00        4.875    2.611    3.410      3.722    2.613    2.950
L_M(T_{MF})  0.000                        0.043
L_M(T_{LM})  0.129                        0.803

Example 4.2 Table 4.2 presents optimal T_F^* in (4.22), T_L^* in (4.25), and T_M^* in (4.33) when F(t) = 1 − exp(−t) and G_i(t) = 1 − exp(−θ_i t) (i = 1, 2) for 1/θ_1 = 0.1, 1/θ_2 = 0.5 and for 1/θ_1 = 0.5, 1/θ_2 = 1.0. This indicates that T_M^* < T_F^* or T_M^* < T_L^*, as shown above. All of T_i^* (i = F, L, M) increase with c_R/c_D, T_F^* decreases and T_L^* increases with 1/θ_i (i = 1, 2); however, T_M^* has no monotone property in 1/θ_i. When 1/θ_1 = 0.5 and 1/θ_2 = 1.0, T_F^* < T_M^* < T_L^* for c_R/c_D < 0.043, T_M^* < T_F^* < T_L^* for 0.043 < c_R/c_D < 0.803, and T_L^* < T_M^* < T_F^* for c_R/c_D > 0.803. This means that inspection middle is better than inspection first when c_R/c_D is large, and better than inspection last when c_R/c_D is small.
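The T_F^* column of Table 4.2 can be reproduced directly from (4.22), which for exponential G_i reduces to e^{λT}(1 − e^{−(θ_1+θ_2+λ)T})/(θ_1+θ_2+λ) − (1 − e^{−(θ_1+θ_2)T})/(θ_1+θ_2) = c_R/c_D. A brief check of mine with λ = 1:

```python
import math

def T_F(th1, th2, c, lam=1.0, hi=30.0, tol=1e-10):
    a = th1 + th2  # Gbar1(t)*Gbar2(t) = e^{-(th1+th2) t}
    def L(T):
        # closed form of the left-hand side of (4.22)
        return (math.exp(lam*T) * (1.0 - math.exp(-(a + lam)*T)) / (a + lam)
                - (1.0 - math.exp(-a*T)) / a)
    lo = 0.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if L(mid) < c else (lo, mid)
    return 0.5 * (lo + hi)

print(round(T_F(10.0, 2.0, 0.01), 3))  # Table 4.2: 0.186 (1/th1=0.1, 1/th2=0.5)
print(round(T_F(2.0, 1.0, 0.01), 3))   # Table 4.2: 0.148 (1/th1=0.5, 1/th2=1.0)
print(round(T_F(10.0, 2.0, 1.00), 3))  # Table 4.2: 2.645
```

Note that only the sum θ_1 + θ_2 enters (4.22), so inspection first with two exponential random times behaves like a single-random-time model with rate θ_1 + θ_2.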

4.4 Modified Inspection Models

We propose two modified inspection models with time T and random times Y_i (i = 1, 2) when F(t) = 1 − exp(−λt) and Pr{Y_i ≤ t} = G_i(t).

4.4.1 Modified Inspection First

Suppose that the unit is checked at time $T$ ($0 < T \le \infty$) or at time $\max\{Y_1, Y_2\}$, whichever occurs first. Then, the probability that the unit is checked at time $T$ before failure is
$$\bar{F}(T)[1 - G_1(T) G_2(T)],$$
the probability that it is checked at time $Y_1$ before failure is
$$\int_0^T \bar{F}(t) G_2(t)\, \mathrm{d}G_1(t),$$
the probability that it is checked at time $Y_2$ before failure is
$$\int_0^T \bar{F}(t) G_1(t)\, \mathrm{d}G_2(t),$$
the probability that the failure is detected at time $T$ is
$$F(T)[1 - G_1(T) G_2(T)],$$
the probability that the failure is detected at time $Y_1$ is
$$\int_0^T F(t) G_2(t)\, \mathrm{d}G_1(t),$$
and the probability that the failure is detected at time $Y_2$ is
$$\int_0^T F(t) G_1(t)\, \mathrm{d}G_2(t).$$

Thus, the mean time $l_D(T)$ from failure to its detection satisfies the renewal equation
$$l_D(T) = l_D(T)\Big[1 - F(T)[1 - G_1(T)G_2(T)] - \int_0^T F(t)G_2(t)\,\mathrm{d}G_1(t) - \int_0^T F(t)G_1(t)\,\mathrm{d}G_2(t)\Big] + [1 - G_1(T)G_2(T)]\int_0^T (T - t)\,\mathrm{d}F(t) + \int_0^T G_2(t)\Big[\int_0^t (t - u)\,\mathrm{d}F(u)\Big]\mathrm{d}G_1(t) + \int_0^T G_1(t)\Big[\int_0^t (t - u)\,\mathrm{d}F(u)\Big]\mathrm{d}G_2(t),$$
and the expected number $M_F(T)$ of checks until failure detection satisfies
$$M_F(T) = 1 + M_F(T)\Big[1 - F(T)[1 - G_1(T)G_2(T)] - \int_0^T F(t)G_2(t)\,\mathrm{d}G_1(t) - \int_0^T F(t)G_1(t)\,\mathrm{d}G_2(t)\Big].$$

Solving the above renewal equations,
$$l_D(T) = \frac{\int_0^T [1 - G_1(t)G_2(t)]F(t)\,\mathrm{d}t}{\int_0^T [1 - G_1(t)G_2(t)]\,\mathrm{d}F(t)}, \qquad (4.35)$$
$$M_F(T) = \frac{1}{\int_0^T [1 - G_1(t)G_2(t)]\,\mathrm{d}F(t)}. \qquad (4.36)$$

Therefore, the total expected cost until failure detection is
$$C_{FM}(T) = c_R M_F(T) + c_D l_D(T) = \frac{c_R + c_D \int_0^T [1 - G_1(t)G_2(t)]\,\mathrm{d}t}{\int_0^T \lambda e^{-\lambda t}[1 - G_1(t)G_2(t)]\,\mathrm{d}t} - \frac{c_D}{\lambda}. \qquad (4.37)$$

We find optimal $T_{FM}^*$ to minimize $C_{FM}(T)$. Differentiating $C_{FM}(T)$ with respect to $T$ and setting it equal to zero,
$$L_{FM}(T) \equiv \int_0^T \big[e^{\lambda(T - t)} - 1\big][1 - G_1(t)G_2(t)]\,\mathrm{d}t = \frac{c_R}{c_D}, \qquad (4.38)$$
whose left-hand side increases strictly with $T$ from 0 to $\infty$. Thus, there exists a finite and unique $T_{FM}^*$ which satisfies (4.38), and the resulting cost rate is
$$\lambda C_{FM}(T_{FM}^*) = c_D\big(e^{\lambda T_{FM}^*} - 1\big). \qquad (4.39)$$
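As a quick numerical illustration (not code from the book), equation (4.38) can be solved by bisection because $L_{FM}(T)$ increases strictly from 0 to $\infty$. The sketch below assumes $\lambda = 1$ and exponential checking distributions $G_i(t) = 1 - e^{-\theta_i t}$ with $1/\theta_1 = 0.1$, $1/\theta_2 = 0.5$ as in Table 4.2; the function names `L_FM` and `optimal_T_FM` are chosen here for illustration, and the integral is approximated by the midpoint rule.

```python
import math

def L_FM(T, lam=1.0, th1=10.0, th2=2.0, n=4000):
    """Midpoint-rule approximation of
    L_FM(T) = int_0^T (e^{lam(T-t)} - 1)[1 - G1(t)G2(t)] dt  (4.38)."""
    h = T / n
    s = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        g1g2 = (1.0 - math.exp(-th1 * t)) * (1.0 - math.exp(-th2 * t))
        s += (math.exp(lam * (T - t)) - 1.0) * (1.0 - g1g2)
    return s * h

def optimal_T_FM(ratio):
    """Bisection for the unique root of L_FM(T) = c_R/c_D."""
    lo, hi = 0.0, 1.0
    while L_FM(hi) < ratio:   # L_FM increases strictly to infinity
        hi *= 2.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if L_FM(mid) < ratio:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Because L_FM(T) > L_F(T), the root should lie below T_F* = 2.026 of
# Table 4.2 at c_R/c_D = 0.5.
print(optimal_T_FM(0.5))
```

Since the left-hand side is strictly increasing, any bracketing root finder works equally well here; bisection is used only for transparency.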

4.4.2 Modified Inspection Last

Suppose that the unit is checked at time $T$ or at time $\min\{Y_1, Y_2\}$, whichever occurs last. Then, the probability that the unit is checked at time $T$ before failure is
$$\bar{F}(T)[1 - \bar{G}_1(T)\bar{G}_2(T)],$$
the probability that it is checked at time $Y_1$ before failure is
$$\int_T^\infty \bar{F}(t)\bar{G}_2(t)\,\mathrm{d}G_1(t),$$
the probability that it is checked at time $Y_2$ before failure is
$$\int_T^\infty \bar{F}(t)\bar{G}_1(t)\,\mathrm{d}G_2(t),$$
the probability that the failure is detected at time $T$ is
$$F(T)[1 - \bar{G}_1(T)\bar{G}_2(T)],$$
the probability that the failure is detected at time $Y_1$ is
$$\int_T^\infty F(t)\bar{G}_2(t)\,\mathrm{d}G_1(t),$$
and the probability that the failure is detected at time $Y_2$ is
$$\int_T^\infty F(t)\bar{G}_1(t)\,\mathrm{d}G_2(t).$$

Thus, the mean time from failure to its detection satisfies
$$l_D(T) = l_D(T)\Big[1 - F(T)[1 - \bar{G}_1(T)\bar{G}_2(T)] - \int_T^\infty F(t)\bar{G}_2(t)\,\mathrm{d}G_1(t) - \int_T^\infty F(t)\bar{G}_1(t)\,\mathrm{d}G_2(t)\Big] + [1 - \bar{G}_1(T)\bar{G}_2(T)]\int_0^T (T - t)\,\mathrm{d}F(t) + \int_T^\infty \bar{G}_2(t)\Big[\int_0^t (t - u)\,\mathrm{d}F(u)\Big]\mathrm{d}G_1(t) + \int_T^\infty \bar{G}_1(t)\Big[\int_0^t (t - u)\,\mathrm{d}F(u)\Big]\mathrm{d}G_2(t),$$
and the expected number of checks until failure detection satisfies
$$M_L(T) = 1 + M_L(T)\Big[1 - F(T)[1 - \bar{G}_1(T)\bar{G}_2(T)] - \int_T^\infty F(t)\bar{G}_2(t)\,\mathrm{d}G_1(t) - \int_T^\infty F(t)\bar{G}_1(t)\,\mathrm{d}G_2(t)\Big].$$

Solving the above renewal equations,
$$l_D(T) = \frac{\int_0^T F(t)\,\mathrm{d}t + \int_T^\infty F(t)\bar{G}_1(t)\bar{G}_2(t)\,\mathrm{d}t}{F(T) + \int_T^\infty \bar{G}_1(t)\bar{G}_2(t)\,\mathrm{d}F(t)}, \qquad (4.40)$$
$$M_L(T) = \frac{1}{F(T) + \int_T^\infty \bar{G}_1(t)\bar{G}_2(t)\,\mathrm{d}F(t)}. \qquad (4.41)$$

Therefore, the total expected cost until failure detection is
$$C_{LM}(T) = \frac{c_R + c_D\big[T + \int_T^\infty \bar{G}_1(t)\bar{G}_2(t)\,\mathrm{d}t\big]}{1 - e^{-\lambda T} + \int_T^\infty \lambda e^{-\lambda t}\bar{G}_1(t)\bar{G}_2(t)\,\mathrm{d}t} - \frac{c_D}{\lambda}. \qquad (4.42)$$

We find optimal $T_{LM}^*$ to minimize $C_{LM}(T)$. Differentiating $C_{LM}(T)$ with respect to $T$ and setting it equal to zero,
$$L_{LM}(T) \equiv \frac{1}{\lambda}\big(e^{\lambda T} - 1 - \lambda T\big) - \int_T^\infty \big[1 - e^{-\lambda(t - T)}\big]\bar{G}_1(t)\bar{G}_2(t)\,\mathrm{d}t = \frac{c_R}{c_D}, \qquad (4.43)$$
whose left-hand side increases strictly with $T$ from $L_{LM}(0) < 0$ to $\infty$. Thus, there exists a finite and unique $T_{LM}^*$ ($0 < T_{LM}^* < \infty$) which satisfies (4.43), and the resulting cost rate is
$$\lambda C_{LM}(T_{LM}^*) = c_D\big(e^{\lambda T_{LM}^*} - 1\big). \qquad (4.44)$$

4.4.3 Comparisons of Modified Inspection First and Last

We compare modified inspection first and last. From (4.38) and (4.43),
$$L_{LF}(T) \equiv L_{LM}(T) - L_{FM}(T) = \frac{1}{\lambda}\big(e^{\lambda T} - 1 - \lambda T\big) - \int_T^\infty \big[1 - e^{-\lambda(t - T)}\big]\bar{G}_1(t)\bar{G}_2(t)\,\mathrm{d}t - \int_0^T \big[e^{\lambda(T - t)} - 1\big][1 - G_1(t)G_2(t)]\,\mathrm{d}t,$$
which increases strictly with $T$ from $L_{LF}(0) < 0$ to $\infty$. Thus, there exists a finite and unique $T_I$ ($0 < T_I < \infty$) which satisfies $L_{LF}(T) = 0$. Thus, from (4.38), if $L_{FM}(T_I) \ge c_R/c_D$, then $T_{FM}^* \le T_{LM}^*$, and from (4.39) and (4.44), $C_{FM}(T_{FM}^*) \le C_{LM}(T_{LM}^*)$, i.e., modified inspection first is better than modified inspection last. Conversely, if $L_{FM}(T_I) < c_R/c_D$, then $T_{LM}^* < T_{FM}^*$, and $C_{LM}(T_{LM}^*) < C_{FM}(T_{FM}^*)$, i.e., modified inspection last is better than modified inspection first.

Next, we compare inspection first in Sect. 4.3.1 and modified inspection first in Sect. 4.4.1. From (4.22) and (4.38),
$$L_{FM}(T) - L_F(T) = \int_0^T \big[e^{\lambda(T - t)} - 1\big]\big[1 - G_1(t)G_2(t) - \bar{G}_1(t)\bar{G}_2(t)\big]\,\mathrm{d}t > 0,$$
because for $0 < t < \infty$,
$$1 - G_1(t)G_2(t) > \bar{G}_1(t)\bar{G}_2(t),$$
which follows that $T_{FM}^* < T_F^*$. Thus, from (4.26) and (4.39), modified inspection first is better than inspection first.

Similarly, we compare inspection last in Sect. 4.3.1 and modified inspection last in Sect. 4.4.2. From (4.25) and (4.43),
$$L_{LM}(T) - L_L(T) = \int_T^\infty \big[1 - e^{-\lambda(t - T)}\big]\big[1 - G_1(t)G_2(t) - \bar{G}_1(t)\bar{G}_2(t)\big]\,\mathrm{d}t > 0,$$
which follows that $T_{LM}^* < T_L^*$. Thus, from (4.26) and (4.44), modified inspection last is better than inspection last.

Finally, we compare inspection middle in Sect. 4.3.1 and modified inspection first and last. It is easily shown from (4.33) and (4.38) that $L_{FM}(T) > L_M(T)$, and from (4.33) and (4.43) that $L_{LM}(T) > L_M(T)$, i.e., $T_{FM}^* < T_M^*$ and $T_{LM}^* < T_M^*$. This means that when the checking costs at time $T$ and at $Y_i$ ($i = 1, 2$) are all the same, inspections centering around time $T$ are better than the other ones (Problem 4).
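The comparison of modified inspection first and last can be carried out numerically, in the spirit of Problem 4. The sketch below is an illustration under assumed parameters $\lambda = 1$, $\theta_1 = 10$, $\theta_2 = 2$ (exponential $G_i$, so that $\bar{G}_1(t)\bar{G}_2(t) = e^{-(\theta_1+\theta_2)t}$ gives $L_{LM}$ a closed form); $L_{FM}$ is evaluated by the midpoint rule, and all function names are chosen here for illustration.

```python
import math

LAM, TH1, TH2 = 1.0, 10.0, 2.0     # assumed illustrative parameters
TH = TH1 + TH2

def L_FM(T, n=4000):
    # (4.38) by the midpoint rule
    h = T / n
    s = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        g1g2 = (1 - math.exp(-TH1 * t)) * (1 - math.exp(-TH2 * t))
        s += (math.exp(LAM * (T - t)) - 1) * (1 - g1g2)
    return s * h

def L_LM(T):
    # (4.43); for exponential G_i the tail integral has the closed form
    # int_T^inf (1 - e^{-lam(t-T)}) e^{-TH t} dt = e^{-TH T} lam / (TH (TH + lam))
    return ((math.exp(LAM * T) - 1) / LAM - T
            - math.exp(-TH * T) * LAM / (TH * (TH + LAM)))

def root(L, ratio):
    lo, hi = 0.0, 1.0
    while L(hi) < ratio:
        hi *= 2.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if L(mid) < ratio else (lo, mid)
    return 0.5 * (lo + hi)

for ratio in (0.5, 10.0):
    t_fm, t_lm = root(L_FM, ratio), root(L_LM, ratio)
    better = "first" if t_fm <= t_lm else "last"
    print(f"c_R/c_D = {ratio}: T_FM* = {t_fm:.3f}, T_LM* = {t_lm:.3f} "
          f"-> modified inspection {better}")
```

Because the resulting cost rates (4.39) and (4.44) are both $c_D(e^{\lambda T^*} - 1)$, the policy with the smaller optimal checking time has the smaller cost.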

4.5 Problems

1. Prove that $L_{LF}(T)$ increases strictly with $T$ from $L_{LF}(0) < 0$ to $\infty$.
2. Prove that $L_n(T)$ increases strictly with $T$ from $L_n(0) < 0$ to $\infty$.
3. Draw the figures of $L_F(T)$ in (4.22), $L_L(T)$ in (4.25) and $L_M(T)$ in (4.33), and derive (i)–(iv).
4. Give numerical examples of $T_{FM}^*$ and $T_{LM}^*$ and compare them with Table 4.2.

References

1. Nakagawa T (2005) Maintenance theory of reliability. Springer, London
2. Chen M, Zhao X, Nakagawa T (2017) General inspection models. In: Nakamura S, Qian CH, Nakagawa T (eds) Reliability modeling with computer and maintenance applications. World Scientific, Singapore, pp 313–330
3. Zhao X, Chen M, Nakagawa T (2014) Optimum time and random inspection policies for computer systems. Appl Math Inform Sci 8:413–417
4. Chen M, Zhao X, Nakagawa T (2019) Replacement policies with general models. Ann Oper Res 277:47–61

Chapter 5

Inspection Models with Minimal Repair

© Springer Nature Switzerland AG 2023 K. Ito and T. Nakagawa, Optimal Inspection Models with Their Applications, Springer Series in Reliability Engineering, https://doi.org/10.1007/978-3-031-22021-0_5

The supply of spare parts is a big issue in maintaining complex systems such as aircraft and warships, because spare parts come in a wide variety of items and are stocked at multi-echelon facilities such as bases and depots. The Multi-Echelon Technique for Recoverable Item Control (METRIC) is the inventory theory of spare parts with multi-echelon facilities developed for the US Air Force [1, 2]. Dyna-METRIC is an expanded analytical method of METRIC for studying transient behaviors of system inventories under time-dependent operational demand [3, 4]. For good inventory management, accurate demand forecasting is essential: Regattieri [5] considered the forecasting of aircraft spare parts in the case of lumpy demand, which is random with zero demand in many time periods. Yoon [6] researched the optimal concurrent spare parts inventory level for aircraft. Lee et al. [7] studied a simulation optimization method for multiple objectives such as cost and spare parts satisfaction rate for aircraft spare parts. Smidt-Destombes et al. [8] proposed a heuristic method to solve the inventory problem, considering the cost-effective balance among maintenance frequency, spare parts inventory and repair capacity to achieve an objective availability. Moon et al. [9] presented a forecasting method for consumable spare parts of the South Korean Navy in the case of lumpy demand. Costantino et al. [10] studied the two-echelon, multi-item spare parts inventory control of the Italian Air Force under constraints of budget and target availability.

Logistics support is indispensable in maintaining defense systems; however, there may exist situations in which systems have to operate without logistics support [11, 12]. For example, the open-sea navigation of warships may take several weeks, during which they have to be maintained from their own stocks. Replacement with brand-new parts improves system reliability; however, repair with degraded parts may leave the system failure rate undisturbed, which is called minimal repair [13].

It has been well known that the following replacement policy is called periodic replacement with minimal repair at failures [14, 15]: A unit is replaced periodically at planned times and only minimal repair is made after each failure, so that the failure rate remains undisturbed by any repair of failures between successive replacements. Minimal repair can be performed by interchanging parts, modules, and subsystems of other systems, which is called cannibalization. Fisher [16] studied the expected number of inoperative systems under a constraint of maintenance resources, considering cannibalization.

Electronic equipment is usually checked automatically; however, not every failure can be detected because automatic checks have their own detection limitation, which is called test efficiency [17]. Thus, there exist two types of failures, i.e., a failure detected effortlessly by a check and another one detected only by precise diagnosis. These two kinds of failures have to be considered in the maintenance plan of defense systems. A typical model with two types of failures is a phased array radar, which will be taken up in Chap. 8.

Chapter 5 proposes inspection models with two types of failures [18]: One type is detected easily and restored with minimal repair, and another one is detected only by diagnosis. It is important to distinguish between the two failures in the maintenance plan, because their maintenance costs are different and are investigated through inspections, and the total maintenance cost is affected greatly by them. First, we consider a standard inspection policy where the unit is checked at periodic times $kT$ or at successive times $T_k$ ($k = 1, 2, \dots$). Using methods similar to those in Chap. 2, optimal checking times are derived theoretically and computed numerically. Next, we propose a modified inspection policy where failures are detected at checking times or at minimal repair, whichever occurs first. By a modified method of the standard inspection policy, optimal checking times are also derived. Most reliability models such as defense and logistic systems have to operate for a finite interval. We consider inspection models for a finite interval [19] and discuss their optimal policies.

5.1 Inspection with Two Failures

We assume the following two types of failures:

(1) A unit fails according to a failure distribution $F(t)$ with finite mean $\mu \equiv \int_0^\infty \bar{F}(t)\,\mathrm{d}t$ and a density function $f(t) \equiv \mathrm{d}F(t)/\mathrm{d}t$, where $\bar{\Phi}(t) \equiv 1 - \Phi(t)$ for any function $\Phi(t)$. This unit failure is called major failure, to distinguish it from minimal failure.

(2) Another type of failure, which is called minimal failure, occurs according to a nonhomogeneous Poisson process with a hazard rate $h(t)$ and a cumulative hazard function $H(t) \equiv \int_0^t h(u)\,\mathrm{d}u$. The probability that exactly $j$ failures occur in the time interval $[0, t]$ is $p_j(t) \equiv [H(t)^j/j!]\exp[-H(t)]$ ($j = 0, 1, \dots$), and the probability that some failures occur in $[0, t]$ is $G(t) \equiv \sum_{j=1}^\infty p_j(t) = 1 - \exp[-H(t)]$. Furthermore, the unit undergoes minimal repair at each minimal failure and can operate again immediately, and its hazard rate $h(t)$ increases from $h(0)$ to $h(\infty) \equiv \lim_{t\to\infty} h(t)$ [13]. Thus, $H(t)$ represents the expected number of minimal failures, i.e., minimal repairs, during the interval $(0, t]$.

In general, such failures have different phenomena from major failures and occur independently of them.

5.1.1 Periodic Inspection

It is assumed that major failure is detected only at checking times $kT$ ($k = 1, 2, \dots$) for $0 < T < \infty$. Let $c_T$ be the cost of one check, $c_D$ be the loss cost per unit of time for the time elapsed between major failure and its detection, and $c_M$ be the cost of minimal repair. Then, because the expected number of minimal repairs in $[0, (k+1)T]$ is $H((k+1)T)$, the total expected cost until major failure detection is, from (2.1),
$$C(T) = \sum_{k=0}^\infty \int_{kT}^{(k+1)T}\big\{c_T(k+1) + c_D[(k+1)T - t] + c_M H((k+1)T)\big\}\,\mathrm{d}F(t) = \sum_{k=0}^\infty \big\{c_T + c_D T + c_M[H((k+1)T) - H(kT)]\big\}\bar{F}(kT) - c_D\mu. \qquad (5.1)$$

In particular, when $F(t) = 1 - e^{-\lambda t}$,
$$C(T) = \frac{c_T + c_D T}{1 - e^{-\lambda T}} + c_M\sum_{k=0}^\infty [H((k+1)T) - H(kT)]e^{-k\lambda T} - \frac{c_D}{\lambda}. \qquad (5.2)$$

In addition, when $H(t) = \beta t$, (5.2) is
$$C(T) = \frac{c_T + c_D T + c_M\beta T}{1 - e^{-\lambda T}} - \frac{c_D}{\lambda}. \qquad (5.3)$$

Optimal $T^*$ ($0 < T^* < \infty$) to minimize $C(T)$ satisfies
$$e^{\lambda T} - 1 - \lambda T = \frac{c_T\lambda}{c_D + c_M\beta}, \qquad (5.4)$$
which agrees with (2.9) when $c_M\beta = 0$.
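Equation (5.4) can be solved by bisection, since its left-hand side $e^{\lambda T} - 1 - \lambda T$ increases strictly from 0 to $\infty$. The following minimal sketch (an illustration, not code from the book) assumes $1/\lambda = 100$ and $c_T/(c_D + c_M\beta) = 2$, the $m = 1$ case of Table 5.1, for which the sequential policy reduces to this periodic one.

```python
import math

def optimal_periodic_T(lam, cost_ratio):
    """Bisection for e^{lam T} - 1 - lam T = lam * c_T/(c_D + c_M beta)  (5.4)."""
    rhs = lam * cost_ratio
    lo, hi = 0.0, 1.0
    while math.exp(lam * hi) - 1.0 - lam * hi < rhs:
        hi *= 2.0                      # the left-hand side increases to infinity
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if math.exp(lam * mid) - 1.0 - lam * mid < rhs:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

T_star = optimal_periodic_T(lam=0.01, cost_ratio=2.0)
print(f"T* = {T_star:.2f}")   # about 19.4, matching the m = 1 column of Table 5.1
```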


5.1.2 Sequential Inspection

Suppose that the unit is checked at successive times $T_k$ ($k = 1, 2, \dots$), where $T_0 \equiv 0$. Then, the total expected cost until failure detection is, replacing $kT$ with $T_k$ in (5.1),
$$C(\mathbf{T}) = \sum_{k=0}^\infty \int_{T_k}^{T_{k+1}}\big\{c_T(k+1) + c_D(T_{k+1} - t) + c_M H(T_{k+1})\big\}\,\mathrm{d}F(t) = \sum_{k=0}^\infty \big\{c_T + c_D(T_{k+1} - T_k) + c_M[H(T_{k+1}) - H(T_k)]\big\}\bar{F}(T_k) - c_D\mu, \qquad (5.5)$$
where $\mathbf{T} \equiv (T_1, T_2, \dots)$, and which agrees with (2.1) when $c_M = 0$.

We find optimal $\{T_k^*\}$ to minimize $C(\mathbf{T})$. Differentiating $C(\mathbf{T})$ with respect to $T_k$ and setting it equal to zero (Problem 1),
$$T_{k+1} - T_k + \frac{c_M}{c_D}[H(T_{k+1}) - H(T_k)] = \Big[1 + \frac{c_M}{c_D}h(T_k)\Big]\frac{F(T_k) - F(T_{k-1})}{f(T_k)} - \frac{c_T}{c_D}. \qquad (5.6)$$

In particular, when $H(t) = \beta t$,
$$T_{k+1} - T_k = \frac{F(T_k) - F(T_{k-1})}{f(T_k)} - \frac{c_T}{c_D + c_M\beta}, \qquad (5.7)$$
which agrees with (2.2) when $c_M = 0$. When $F(t) = 1 - e^{-\lambda t}$, (5.6) is
$$T_{k+1} - T_k + \frac{c_M}{c_D}[H(T_{k+1}) - H(T_k)] = \Big[1 + \frac{c_M}{c_D}h(T_k)\Big]\frac{1}{\lambda}\big(e^{\lambda(T_k - T_{k-1})} - 1\big) - \frac{c_T}{c_D}. \qquad (5.8)$$

In addition, when $H(t) = \beta t$, (5.8) is
$$T_{k+1} - T_k = \frac{1}{\lambda}\big(e^{\lambda(T_k - T_{k-1})} - 1\big) - \frac{c_T}{c_D + c_M\beta}. \qquad (5.9)$$

Setting $T_1$ to be a solution of the equation
$$T_1 = \frac{1}{\lambda}\big(e^{\lambda T_1} - 1\big) - \frac{c_T}{c_D + c_M\beta},$$


we have
$$T_2 - T_1 = \frac{1}{\lambda}\big(e^{\lambda T_1} - 1\big) - \frac{c_T}{c_D + c_M\beta} = T_1,$$
which follows that $T_2 = 2T_1$. Repeating the above procedure, $T_k = kT_1$, i.e., this policy corresponds to periodic inspection.

Table 5.1 Optimal $T_k^*$ when $F(t) = 1 - \exp(-t^m/100)$ and $c_T/(c_D + c_M\beta) = 2$

  k    m = 1.0   1.1     1.2     1.3    1.4    1.5    2.0
  1      19.4    17.4    15.7    14.3   13.0   12.0    8.4
  2      38.7    33.1    28.7    25.2   22.3   19.9   12.4
  3      58.1    48.4    41.0    35.2   30.6   26.8   15.7
  4      77.4    63.5    52.9    44.8   38.4   33.3   18.7
  5      96.8    78.3    64.5    54.0   45.9   39.5   21.4
  6     116.1    93.0    75.9    63.0   53.1   45.3   23.8
  7     135.5   107.5    87.1    71.8   60.1   51.0   26.2
  8     154.8   122.0    98.1    80.4   66.9   56.5   28.4
  9     174.2   136.3   109.0    88.8   73.5   61.8   30.5
 10     193.5   150.6   119.8    97.1   80.1   67.0   32.4

Example 5.1 When $F(t) = 1 - \exp(-\lambda t^m)$, Table 5.1 presents optimal $T_k^*$ in (5.7) for $m$ when $1/\lambda = 100$ and $c_T/(c_D + c_M\beta) = 2$. The differences $T_{k+1}^* - T_k^*$ decrease with $k$ when $m > 1$, and $T_{k+1}^* - T_k^* = T_1^*$, i.e., $T_k^* = kT_1^*$ ($k = 1, 2, \dots$) when $m = 1$, as already shown above. Furthermore, $T_k^*$ decrease with $m$ because the failure rate $\lambda m t^{m-1}$ increases with time $t$.
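The collapse of the sequential schedule to periodic inspection when $F$ is exponential can be verified numerically: solve the fixed-point equation for $T_1$ and then iterate (5.9). The sketch below assumes $1/\lambda = 100$ and $c_T/(c_D + c_M\beta) = 2$ as in Table 5.1 ($m = 1$ column).

```python
import math

LAM, C = 0.01, 2.0   # 1/lambda = 100, c_T/(c_D + c_M beta) = 2

def g(T):
    # fixed point: T = (1/lam)(e^{lam T} - 1) - C
    return (math.exp(LAM * T) - 1.0) / LAM - C - T

# g(0) = -C < 0 and g grows like lam*T^2/2 for large T, so bisect
lo, hi = 0.0, 1000.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if g(mid) < 0 else (lo, mid)
T1 = 0.5 * (lo + hi)

# forward recursion (5.9): T_{k+1} = T_k + (1/lam)(e^{lam(T_k - T_{k-1})} - 1) - C
Ts = [0.0, T1]
for _ in range(9):
    Ts.append(Ts[-1] + (math.exp(LAM * (Ts[-1] - Ts[-2])) - 1.0) / LAM - C)

print([round(t, 1) for t in Ts[1:]])   # multiples of T1, cf. the m = 1 column of Table 5.1
```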

5.2 Modified Inspection Model

It has been assumed in Sect. 5.1 that major failure is detected only at inspection times. However, when the unit undergoes minimal repair, major failure might also be detected at the minimal repair. For example, a precise diagnosis may be undertaken together with minimal repair, and failures undetected so far might be found at automatic checks. Using the methods in Sect. 2.1, we propose the periodic and sequential inspection policies in which major failure is detected at an inspection time or at minimal repair, and give their numerical examples.


5.2.1 Periodic Inspection

Suppose that when the unit fails, major failure is detected at checking time $(k+1)T$ or at minimal repair, whichever occurs first. Then, the probability that major failure is detected at time $(k+1)T$ when the unit fails at time $t$ ($kT < t \le (k+1)T$) is
$$\int_{kT}^{(k+1)T}\frac{\bar{G}((k+1)T)}{\bar{G}(t)}\,\mathrm{d}F(t),$$
and the probability that it is detected at minimal repair is
$$\int_{kT}^{(k+1)T}\frac{G((k+1)T) - G(t)}{\bar{G}(t)}\,\mathrm{d}F(t).$$

Thus, when major failure is detected at periodic inspection, the total expected cost is
$$\sum_{k=0}^\infty \int_{kT}^{(k+1)T}\big\{c_T(k+1) + c_D[(k+1)T - t] + c_M H(t)\big\}\frac{\bar{G}((k+1)T)}{\bar{G}(t)}\,\mathrm{d}F(t),$$
and when it is detected at minimal repair, it is
$$\sum_{k=0}^\infty \int_{kT}^{(k+1)T}\Big\{\int_t^{(k+1)T}\big[c_T k + c_D(u - t) + c_M(1 + H(t))\big]\,\mathrm{d}G(u)\Big\}\frac{1}{\bar{G}(t)}\,\mathrm{d}F(t).$$

Therefore, the total expected cost until failure detection is (Problem 2)
$$C(T) = c_T\sum_{k=0}^\infty \int_{kT}^{(k+1)T}\Big[(k+1)\frac{\bar{G}((k+1)T)}{\bar{G}(t)} + k\int_t^{(k+1)T}\frac{\mathrm{d}G(u)}{\bar{G}(t)}\Big]\mathrm{d}F(t) + c_D\sum_{k=0}^\infty \int_{kT}^{(k+1)T}\Big[[(k+1)T - t]\frac{\bar{G}((k+1)T)}{\bar{G}(t)} + \int_t^{(k+1)T}(u - t)\frac{\mathrm{d}G(u)}{\bar{G}(t)}\Big]\mathrm{d}F(t) + c_M\sum_{k=0}^\infty \int_{kT}^{(k+1)T}\Big[H(t)\frac{\bar{G}((k+1)T)}{\bar{G}(t)} + [1 + H(t)]\int_t^{(k+1)T}\frac{\mathrm{d}G(u)}{\bar{G}(t)}\Big]\mathrm{d}F(t)$$
$$= (c_T - c_M)\sum_{k=0}^\infty \int_{kT}^{(k+1)T}\frac{\bar{G}((k+1)T)}{\bar{G}(t)}\,\mathrm{d}F(t) + c_T\sum_{k=0}^\infty \bar{F}((k+1)T) + c_D\sum_{k=0}^\infty \int_{kT}^{(k+1)T}\Big[\int_t^{(k+1)T}\bar{G}(u)\,\mathrm{d}u\Big]\frac{1}{\bar{G}(t)}\,\mathrm{d}F(t) + c_M\int_0^\infty [1 + H(t)]\,\mathrm{d}F(t). \qquad (5.10)$$

When $G(t) = 1 - e^{-\beta t}$ and $H(t) = \beta t$, (5.10) is
$$C(T) = \Big(c_T - c_M - \frac{c_D}{\beta}\Big)\sum_{k=0}^\infty \int_{kT}^{(k+1)T}e^{-\beta[(k+1)T - t]}\,\mathrm{d}F(t) + c_T\sum_{k=1}^\infty \bar{F}(kT) + c_M(1 + \beta\mu) + \frac{c_D}{\beta}. \qquad (5.11)$$

In addition, when $F(t) = 1 - e^{-\lambda t}$, for $\lambda \ne \beta$, (5.11) is
$$C(T) = \Big(c_T - c_M - \frac{c_D}{\beta}\Big)\frac{\lambda}{\beta - \lambda}\cdot\frac{e^{-\lambda T} - e^{-\beta T}}{1 - e^{-\lambda T}} + \frac{c_T}{e^{\lambda T} - 1} + c_M\Big(1 + \frac{\beta}{\lambda}\Big) + \frac{c_D}{\beta}, \qquad (5.12)$$
and for $\lambda = \beta$,
$$C(T) = \frac{1}{e^{\lambda T} - 1}\Big[\Big(c_T - c_M - \frac{c_D}{\lambda}\Big)\lambda T + c_T\Big] + 2c_M + \frac{c_D}{\lambda}. \qquad (5.13)$$

Suppose that $c_M + c_D/\beta > c_T$. Then, we find optimal $T^*$ to minimize $C(T)$ in (5.12). Differentiating $C(T)$ with respect to $T$ and setting it equal to zero,
$$\frac{1}{\lambda - \beta}\Big[e^{-\beta T}\big(e^{\lambda T} - 1\big) - \frac{\lambda}{\beta}\big(1 - e^{-\beta T}\big)\Big] = \frac{c_T}{c_D + (c_M - c_T)\beta}, \qquad (5.14)$$
whose left-hand side increases strictly with $T$ from 0 to $\infty$ for $\lambda > \beta$, and to $\lambda/[\beta(\beta - \lambda)]$ for $\beta > \lambda$. Thus, if $\lambda/[\beta(\beta - \lambda)] > c_T/[c_D + (c_M - c_T)\beta]$, then there exists a finite and unique $T^*$ ($0 < T^* < \infty$) which satisfies (5.14). When $\lambda = \beta$, optimal $T^*$ satisfies
$$\lambda T - 1 + e^{-\lambda T} = \frac{c_T}{c_D/\lambda + c_M - c_T}, \qquad (5.15)$$
which agrees with (2.9) when $c_M = c_T$.
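Equation (5.14) can again be solved by bisection whenever the existence condition holds. The sketch below uses illustrative costs assumed here ($c_T = 2$, $c_D = 1$, $c_M = 1$) and rates $\lambda = 0.01$, $\beta = 0.015$, so that $\beta > \lambda$ and $c_M + c_D/\beta > c_T$.

```python
import math

LAM, BETA = 0.01, 0.015
cT, cD, cM = 2.0, 1.0, 1.0         # assumed illustrative costs
RHS = cT / (cD + (cM - cT) * BETA)

def lhs(T):
    # left-hand side of (5.14)
    return ((math.exp(-BETA * T) * (math.exp(LAM * T) - 1.0)
             - (LAM / BETA) * (1.0 - math.exp(-BETA * T))) / (LAM - BETA))

# beta > lam: lhs increases from 0 to lam/(beta*(beta - lam)); check existence
assert LAM / (BETA * (BETA - LAM)) > RHS

lo, hi = 0.0, 1.0
while lhs(hi) < RHS:
    hi *= 2.0
for _ in range(80):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if lhs(mid) < RHS else (lo, mid)
T_star = 0.5 * (lo + hi)
print(f"T* = {T_star:.2f}")   # around 21.7 for these assumed costs
```

Note that the periodic checking time is slightly longer than in the standard model of Sect. 5.1.1, consistent with Tables 5.1 and 5.2, because major failures might also be caught at minimal repairs.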


5.2.2 Sequential Inspection

Suppose that the unit is checked at successive times $T_k$ ($k = 1, 2, \dots$). Then, the total expected cost is, from (5.10),
$$C(\mathbf{T}) = (c_T - c_M)\sum_{k=0}^\infty \int_{T_k}^{T_{k+1}}\frac{\bar{G}(T_{k+1})}{\bar{G}(t)}\,\mathrm{d}F(t) + c_T\sum_{k=0}^\infty \bar{F}(T_{k+1}) + c_D\sum_{k=0}^\infty \int_{T_k}^{T_{k+1}}\Big[\int_t^{T_{k+1}}\bar{G}(u)\,\mathrm{d}u\Big]\frac{1}{\bar{G}(t)}\,\mathrm{d}F(t) + c_M\int_0^\infty [1 + H(t)]\,\mathrm{d}F(t). \qquad (5.16)$$

When $G(t) = 1 - e^{-\beta t}$ and $H(t) = \beta t$,
$$C(\mathbf{T}) = \Big(c_T - c_M - \frac{c_D}{\beta}\Big)\sum_{k=0}^\infty \int_{T_k}^{T_{k+1}}e^{-\beta(T_{k+1} - t)}\,\mathrm{d}F(t) + c_T\sum_{k=1}^\infty \bar{F}(T_k) + c_M(1 + \beta\mu) + \frac{c_D}{\beta}. \qquad (5.17)$$

In addition, when $F(t) = 1 - e^{-\lambda t}$, for $\beta \ne \lambda$,
$$C(\mathbf{T}) = \Big(c_T - c_M - \frac{c_D}{\beta}\Big)\frac{\lambda}{\beta - \lambda}\sum_{k=0}^\infty \Big[e^{-\lambda T_{k+1}} - e^{-\beta(T_{k+1} - T_k) - \lambda T_k}\Big] + c_T\sum_{k=1}^\infty e^{-\lambda T_k} + c_M\Big(1 + \frac{\beta}{\lambda}\Big) + \frac{c_D}{\beta}. \qquad (5.18)$$

Differentiating $C(\mathbf{T})$ in (5.17) with respect to $T_k$ and setting it equal to zero,
$$\frac{1 - e^{-\beta(T_{k+1} - T_k)}}{\beta} = \frac{1}{f(T_k)}\int_{T_{k-1}}^{T_k}e^{-\beta(T_k - t)}\,\mathrm{d}F(t) - \frac{c_T}{c_D + (c_M - c_T)\beta}. \qquad (5.19)$$

In addition, when $F(t) = 1 - e^{-\lambda t}$, for $\beta \ne \lambda$,
$$\frac{1 - e^{-\beta(T_{k+1} - T_k)}}{\beta} = \frac{1 - e^{-(\beta - \lambda)(T_k - T_{k-1})}}{\beta - \lambda} - \frac{c_T}{c_D + (c_M - c_T)\beta}, \qquad (5.20)$$
and for $\beta = \lambda$,
$$\frac{1 - e^{-\beta(T_{k+1} - T_k)}}{\beta} = T_k - T_{k-1} - \frac{c_T}{c_D + (c_M - c_T)\beta}. \qquad (5.21)$$

Setting $T_1$ to be a solution of (5.20) with $T_0 = 0$, i.e.,
$$\frac{1 - e^{-\beta T_1}}{\beta} = \frac{1 - e^{-(\beta - \lambda)T_1}}{\beta - \lambda} - \frac{c_T}{c_D + (c_M - c_T)\beta},$$
we have
$$\frac{1 - e^{-\beta(T_2 - T_1)}}{\beta} = \frac{1 - e^{-(\beta - \lambda)T_1}}{\beta - \lambda} - \frac{c_T}{c_D + (c_M - c_T)\beta} = \frac{1 - e^{-\beta T_1}}{\beta},$$
which follows that $T_2 = 2T_1$. Repeating the above procedure, $T_k = kT_1$, i.e., this policy corresponds to periodic inspection.

Example 5.2 When $F(t) = 1 - \exp(-t^m/100)$, Table 5.2 presents optimal $T_k^*$ in (5.19) for $m$ when $\beta = 0.010, 0.015$ and $c_T/(c_D + c_M\beta) = 2$. This indicates that $T_k^*$ decrease with $m$ and increase with $\beta$. The differences $T_{k+1}^* - T_k^*$ decrease with $k$ when $m > 1$, and $T_{k+1}^* - T_k^* = T_1^*$, i.e., $T_k^* = kT_1^*$ ($k = 1, 2, \dots$) when $m = 1$. Furthermore, $T_{k+1}^* - T_k^*$ are almost the same for $k$ and $m$ in Tables 5.1 and 5.2. Comparing Table 5.2 with Table 5.1, optimal $T_k^*$ in Table 5.2 are larger than those in Table 5.1, because major failures might be detected at minimal repair.

Table 5.2 Optimal $T_k^*$ when $F(t) = 1 - \exp(-t^m/100)$ and $c_T/(c_D + c_M\beta) = 2$

                        β = 0.010                                    β = 0.015
  k   m=1.0    1.1    1.2    1.3    1.4   1.5   2.0    m=1.0    1.1    1.2    1.3    1.4   1.5   2.0
  1    20.9   18.4   16.6   14.9   13.5  12.4   8.5     21.8   18.8   17.0   15.3   13.8  12.6   8.6
  2    41.8   35.1   30.2   26.2   23.1  20.5  12.6     43.6   35.7   30.9   26.8   23.5  20.8  12.7
  3    62.7   51.2   43.1   36.7   31.6  27.6  16.0     65.4   51.9   43.9   37.4   32.2  28.0  16.1
  4    83.6   66.9   55.5   46.6   39.7  34.2  18.9     87.3   67.7   56.5   47.5   40.3  34.7  19.1
  5   104.5   82.4   67.6   56.1   47.4  40.5  21.6    109.1   83.0   68.7   57.1   48.1  41.1  21.8
  6   125.4   97.6   79.5   65.4   54.7  46.5  24.1    130.9   97.9   80.5   66.5   55.6  47.1  24.3
  7   146.3  112.7   91.1   74.5   61.9  52.3  26.5    152.7  112.4   92.0   75.6   62.8  52.9  26.6
  8   167.3  127.5  102.6   83.3   68.9  57.8  28.7    174.5  126.3  103.1   84.4   69.8  58.6  28.8
  9   188.2  142.0  114.0   92.0   75.7  63.3  30.7    196.3  139.5  113.9   93.0   76.7  64.1  30.8
 10   209.1  156.3  125.2  100.6   82.4  68.6  32.4    218.1  152.0  124.2  101.3   83.3  69.4  32.4


5.3 Inspection for a Finite Interval

Most aircraft and warships have to operate for a finite time horizon [19]. Suppose that the unit has to operate for a finite interval $[0, S]$ with a specified $S$ ($0 < S < \infty$), as in Sect. 2.2. We propose standard and modified inspection models for a finite interval, and give numerical examples.

5.3.1 Standard Inspection

(1) Periodic Inspection

Suppose in Sect. 2.1 that the unit is checked at periodic times $kT$ ($k = 1, 2, \dots, N$), where $NT \equiv S$. Then, the total expected cost until failure detection or time $S$ is, from (5.1),
$$C(T; S) = \sum_{k=0}^{N-1}\big\{c_T + c_D T + c_M[H((k+1)T) - H(kT)]\big\}\bar{F}(kT) - c_D\int_0^S \bar{F}(t)\,\mathrm{d}t. \qquad (5.22)$$

In particular, when $F(t) = 1 - e^{-\lambda t}$,
$$C(T; S) = (c_T + c_D T)\frac{1 - e^{-\lambda S}}{1 - e^{-\lambda T}} + c_M\sum_{k=0}^{N-1}[H((k+1)T) - H(kT)]e^{-k\lambda T} - \frac{c_D}{\lambda}\big(1 - e^{-\lambda S}\big). \qquad (5.23)$$

In addition, when $H(t) = \beta t$, (5.23) is
$$C(T; S) = (c_T + c_D T + c_M\beta T)\frac{1 - e^{-\lambda S}}{1 - e^{-\lambda T}} - \frac{c_D}{\lambda}\big(1 - e^{-\lambda S}\big). \qquad (5.24)$$

Optimal $\widetilde{T}$ to minimize $C(T; S)$ for constant $S$ satisfies
$$e^{\lambda T} - 1 - \lambda T = \frac{c_T\lambda}{c_D + c_M\beta}, \qquad (5.25)$$
which agrees with (5.4).

Therefore, using the policy in Sect. 2.2, if $\widetilde{T} \ge S$, then $N^* = 1$. If $\widetilde{T} < S$, we obtain $N_1 = [S/\widetilde{T}]$, $N_2 = [S/\widetilde{T}] + 1$, and $T_1 = S/N_1$, $T_2 = S/N_2$. Comparing $C(T_1; S)$ and $C(T_2; S)$ in (5.23), we can determine optimal $N^* = N_1$ or $N^* = N_2$, as shown in Sect. 2.2.
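The finite-interval procedure — find $\widetilde{T}$ from (5.25), then compare the two candidate partitions of $[0, S]$ — can be sketched as follows. The parameter values are assumed here for illustration: $\lambda = 0.01$, $c_T = 2$, $c_D = 0.9$, $c_M\beta = 0.1$ (so $c_T/(c_D + c_M\beta) = 2$) and $S = 50$.

```python
import math

LAM, S = 0.01, 50.0
cT, cD, cMb = 2.0, 0.9, 0.1   # cMb stands for c_M * beta

def C(T):
    # total expected cost (5.24) with NT = S
    return ((cT + (cD + cMb) * T) * (1 - math.exp(-LAM * S)) / (1 - math.exp(-LAM * T))
            - (cD / LAM) * (1 - math.exp(-LAM * S)))

# solve (5.25): e^{lam T} - 1 - lam T = lam cT/(cD + cM beta), by bisection
rhs = LAM * cT / (cD + cMb)
lo, hi = 0.0, 1000.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if math.exp(LAM * mid) - 1 - LAM * mid < rhs:
        lo = mid
    else:
        hi = mid
T_tilde = 0.5 * (lo + hi)

if T_tilde >= S:
    N_star = 1
else:
    N1 = int(S / T_tilde)                    # N1 = [S/T~], N2 = N1 + 1
    N_star = N1 if C(S / N1) <= C(S / (N1 + 1)) else N1 + 1
print(T_tilde, N_star, C(S / N_star))
```

Here $\widetilde{T} \approx 19.4 < S$, so the two candidates are $N_1 = 2$ and $N_2 = 3$, and the cheaper partition is chosen.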


(2) Sequential Inspection

Suppose that the unit is checked at times $T_k$ ($k = 1, 2, \dots, N$), where $T_N \equiv S$. Then, the total expected cost until failure detection or time $S$ is, from (5.22),
$$C(T_1, T_2, \dots, T_N; S) = \sum_{k=0}^{N-1}\big\{c_T + c_D(T_{k+1} - T_k) + c_M[H(T_{k+1}) - H(T_k)]\big\}\bar{F}(T_k) - c_D\int_0^S \bar{F}(t)\,\mathrm{d}t. \qquad (5.26)$$

We find optimal $\{T_k^*\}$ to minimize $C(T_1, T_2, \dots, T_N; S)$. Differentiating $C(T_1, T_2, \dots, T_N; S)$ with respect to $T_k$ and setting it equal to zero, we have the following results (Problem 3): For $N = 2$ and $T_2 = S$,
$$S - T_1 + \frac{c_M}{c_D}[H(S) - H(T_1)] = \Big[1 + \frac{c_M}{c_D}h(T_1)\Big]\frac{F(T_1)}{f(T_1)} - \frac{c_T}{c_D}. \qquad (5.27)$$

For $N \ge 3$ and $k = 1, 2, \dots, N - 2$,
$$T_{k+1} - T_k + \frac{c_M}{c_D}[H(T_{k+1}) - H(T_k)] = \Big[1 + \frac{c_M}{c_D}h(T_k)\Big]\frac{F(T_k) - F(T_{k-1})}{f(T_k)} - \frac{c_T}{c_D}, \qquad (5.28)$$
and for $k = N - 1$, putting $T_N = S$ in (5.28),
$$S - T_{N-1} + \frac{c_M}{c_D}[H(S) - H(T_{N-1})] = \Big[1 + \frac{c_M}{c_D}h(T_{N-1})\Big]\frac{F(T_{N-1}) - F(T_{N-2})}{f(T_{N-1})} - \frac{c_T}{c_D}. \qquad (5.29)$$

In particular, when $H(t) = \beta t$, (5.27), (5.28), and (5.29) are, respectively,
$$S - T_1 = \frac{F(T_1)}{f(T_1)} - \frac{c_T}{c_D + c_M\beta},$$
$$T_{k+1} - T_k = \frac{F(T_k) - F(T_{k-1})}{f(T_k)} - \frac{c_T}{c_D + c_M\beta},$$
$$S - T_{N-1} = \frac{F(T_{N-1}) - F(T_{N-2})}{f(T_{N-1})} - \frac{c_T}{c_D + c_M\beta}. \qquad (5.30)$$

The above calculation would be easier than that of the infinite case in Sect. 5.1 because $S$ is given. If $S$ is large, then this would be almost the same as the infinite case.


Table 5.3 Optimal $T_k^*$ and resulting cost $\widetilde{C}(N)/(c_D + c_M\beta)$ when $F(t) = 1 - \exp(-t^2/100)$, $c_T/(c_D + c_M\beta) = 2$ and $S = 20.0$

                                 N
  k                  1      2      3      4      5      6
  1                20.0   11.1    9.1    8.5    8.3    8.3
  2                       20.0   14.1   12.7   12.3   12.3
  3                              20.0   16.3   15.5   15.4
  4                                     20.0   18.1   18.0
  5                                            20.0   19.7
  6                                                   20.0
  C̃(N)/(c_D+c_Mβ) 22.00  16.28  15.24  15.03  15.02  15.06

Example 5.3 When $F(t) = 1 - \exp(-\lambda t^2)$, Table 5.3 presents optimal $T_k^*$ satisfying (5.30) and the resulting cost $\widetilde{C}(N)/(c_D + c_M\beta)$ when $1/\lambda = 100$, $c_T/(c_D + c_M\beta) = 2$, and $S = 20.0$, where
$$\frac{\widetilde{C}(N)}{c_D + c_M\beta} \equiv \frac{C(T_1, T_2, \dots, T_N; S) + c_D\int_0^S \bar{F}(t)\,\mathrm{d}t}{c_D + c_M\beta} = \sum_{k=0}^{N-1}\Big[\frac{c_T}{c_D + c_M\beta} + (T_{k+1} - T_k)\Big]\bar{F}(T_k). \qquad (5.31)$$

This indicates that when $N = 5$, the total expected cost is the smallest, and optimal checking times are 8.3, 12.3, 15.5, 18.1 and 20.0. Comparing Table 5.3 with Table 5.1 for $m = 2$ and $k = 1, 2, 3, 4, 5$, optimal values are smaller than those in Table 5.1, because the unit does not need to operate beyond $S = 20.0$.
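The schedule in Table 5.3 can be reproduced with a shooting method on (5.30): guess $T_1$, run the forward recursion, and bisect $T_1$ so that $T_N = S$. This sketch assumes $F(t) = 1 - \exp(-t^2/100)$, $c_T/(c_D + c_M\beta) = 2$, $S = 20$ and $N = 5$; the bracket for $T_1$ is chosen empirically.

```python
import math

S, N, C = 20.0, 5, 2.0
F = lambda t: 1.0 - math.exp(-t * t / 100.0)
f = lambda t: (t / 50.0) * math.exp(-t * t / 100.0)

def schedule(T1):
    """Forward recursion of (5.30); returns None if the interval lengths
    collapse (T1 too small), stops early if clearly overshooting."""
    Ts = [0.0, T1]
    for _ in range(N - 1):
        nxt = Ts[-1] + (F(Ts[-1]) - F(Ts[-2])) / f(Ts[-1]) - C
        if nxt <= Ts[-1]:
            return None
        Ts.append(nxt)
        if nxt > 5 * S:
            break
    return Ts

def terminal(T1):
    Ts = schedule(T1)
    return -1e9 if Ts is None else Ts[-1]   # undershoot sentinel

lo, hi = 5.0, 12.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if terminal(mid) < S else (lo, mid)
Ts = schedule(0.5 * (lo + hi))

# resulting cost (5.31)
cost = sum((C + Ts[k + 1] - Ts[k]) * math.exp(-Ts[k] ** 2 / 100.0) for k in range(N))
print([round(t, 1) for t in Ts[1:]], round(cost, 2))   # cf. Table 5.3, N = 5
```

Repeating this for $N = 1, 2, \dots$ and comparing the costs recovers the optimal $N^* = 5$ of Table 5.3.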

5.3.2 Modified Inspection Model

(1) Periodic Inspection

The unit is checked at periodic times $kT$ ($k = 1, 2, \dots, N$), where $NT \equiv S$. Then, when major failure is detected at time $(k+1)T$ or at minimal repair, whichever occurs first, the total expected cost is, from (5.10),
$$C(T; S) = (c_T - c_M)\sum_{k=0}^{N-1}\int_{kT}^{(k+1)T}\frac{\bar{G}((k+1)T)}{\bar{G}(t)}\,\mathrm{d}F(t) + c_T\sum_{k=0}^{N-1}\bar{F}((k+1)T) + c_D\sum_{k=0}^{N-1}\int_{kT}^{(k+1)T}\Big[\int_t^{(k+1)T}\bar{G}(u)\,\mathrm{d}u\Big]\frac{1}{\bar{G}(t)}\,\mathrm{d}F(t) + c_M\int_0^S [1 + H(t)]\,\mathrm{d}F(t), \qquad (5.32)$$
which agrees with (5.10) when $S \to \infty$, i.e., $N \to \infty$. When $\bar{G}(t) \equiv 1$, i.e., $H(t) = 0$, (5.32) becomes
$$C(N; S) = \sum_{k=0}^{N-1}\Big(c_T + \frac{c_D S}{N}\Big)\bar{F}\Big(\frac{kS}{N}\Big) - c_D\int_0^S \bar{F}(t)\,\mathrm{d}t, \qquad (5.33)$$
which agrees with (5.26) when $H(t) \equiv 0$.

When $G(t) = 1 - e^{-\beta t}$ and $H(t) = \beta t$, (5.32) is
$$C(T; S) = \Big(c_T - c_M - \frac{c_D}{\beta}\Big)\sum_{k=0}^{N-1}\int_{kT}^{(k+1)T}e^{-\beta[(k+1)T - t]}\,\mathrm{d}F(t) + c_T\sum_{k=1}^{N}\bar{F}(kT) + \frac{c_D}{\beta}F(S) + c_M\Big\{F(S) + \beta\int_0^S [F(S) - F(t)]\,\mathrm{d}t\Big\}. \qquad (5.34)$$

In addition, when $F(t) = 1 - e^{-\lambda t}$, for $\lambda \ne \beta$,
$$C(T; S) = \Big(c_T - c_M - \frac{c_D}{\beta}\Big)\frac{\lambda}{\beta - \lambda}\cdot\frac{\big(e^{-\lambda T} - e^{-\beta T}\big)\big(1 - e^{-\lambda S}\big)}{1 - e^{-\lambda T}} + c_T\frac{1 - e^{-\lambda S}}{e^{\lambda T} - 1} + \frac{c_D}{\beta}\big(1 - e^{-\lambda S}\big) + c_M\Big[\Big(1 + \frac{\beta}{\lambda}\Big)\big(1 - e^{-\lambda S}\big) - \beta S e^{-\lambda S}\Big], \qquad (5.35)$$
and for $\lambda = \beta$,
$$C(T; S) = \Big[\Big(c_T - c_M - \frac{c_D}{\lambda}\Big)\lambda T + c_T\Big]\frac{1 - e^{-\lambda S}}{e^{\lambda T} - 1} + \frac{c_D}{\lambda}\big(1 - e^{-\lambda S}\big) + c_M\Big[2\big(1 - e^{-\lambda S}\big) - \lambda S e^{-\lambda S}\Big]. \qquad (5.36)$$

We find optimal $T^*$ to minimize $C(T; S)$ in (5.35). Differentiating $C(T; S)$ with respect to $T$ and setting it equal to zero for constant $S$,
$$\frac{1}{\lambda - \beta}\Big[e^{-\beta T}\big(e^{\lambda T} - 1\big) - \frac{\lambda}{\beta}\big(1 - e^{-\beta T}\big)\Big] = \frac{c_T}{c_D + (c_M - c_T)\beta}, \qquad (5.37)$$
which agrees with (5.14). Therefore, if $\widetilde{T}$ is a solution which satisfies (5.37), we obtain $N_1 \equiv [S/\widetilde{T}]$, $N_2 \equiv [S/\widetilde{T}] + 1$, and $T_1 \equiv S/N_1$, $T_2 \equiv S/N_2$. Comparing $C(T_1; S)$ and $C(T_2; S)$, we can determine optimal $N^* = N_1$ or $N^* = N_2$, as shown in Sect. 2.2. When $\lambda = \beta$, we can obtain similar results.

(2) Sequential Inspection

Suppose that the unit is checked at times $T_k$ ($k = 1, 2, \dots, N$), where $T_N \equiv S$. Then, the total expected cost is, from (5.16),
$$C(T_1, T_2, \dots, T_N; S) = (c_T - c_M)\sum_{k=0}^{N-1}\int_{T_k}^{T_{k+1}}\frac{\bar{G}(T_{k+1})}{\bar{G}(t)}\,\mathrm{d}F(t) + c_T\sum_{k=0}^{N-1}\bar{F}(T_{k+1}) + c_D\sum_{k=0}^{N-1}\int_{T_k}^{T_{k+1}}\Big[\int_t^{T_{k+1}}\bar{G}(u)\,\mathrm{d}u\Big]\frac{1}{\bar{G}(t)}\,\mathrm{d}F(t) + c_M\int_0^S [1 + H(t)]\,\mathrm{d}F(t). \qquad (5.38)$$

When $G(t) = 1 - e^{-\beta t}$ and $H(t) = \beta t$, (5.38) is
$$C(T_1, T_2, \dots, T_N; S) = \Big(c_M + \frac{c_D}{\beta} - c_T\Big)\sum_{k=0}^{N-1}\int_{T_k}^{T_{k+1}}\big[1 - e^{-\beta(T_{k+1} - t)}\big]\,\mathrm{d}F(t) + c_T\sum_{k=0}^{N-1}\bar{F}(T_k) + c_M\beta\int_0^S [F(S) - F(t)]\,\mathrm{d}t. \qquad (5.39)$$

We find optimal $\{T_k^*\}$ ($k = 1, 2, \dots, N$) to minimize $C(T_1, T_2, \dots, T_N; S)$ in (5.39). Differentiating $C(T_1, T_2, \dots, T_N; S)$ with respect to $T_k$ and setting it equal to zero, we have the following results: For $N = 2$ and $T_2 = S$,
$$\frac{1 - e^{-\beta(S - T_1)}}{\beta} = \frac{1}{f(T_1)}\int_0^{T_1}e^{-\beta(T_1 - t)}\,\mathrm{d}F(t) - \frac{c_T}{c_D + (c_M - c_T)\beta},$$
for $N \ge 3$ and $k = 1, 2, \dots, N - 2$,
$$\frac{1 - e^{-\beta(T_{k+1} - T_k)}}{\beta} = \frac{1}{f(T_k)}\int_{T_{k-1}}^{T_k}e^{-\beta(T_k - t)}\,\mathrm{d}F(t) - \frac{c_T}{c_D + (c_M - c_T)\beta},$$
and for $k = N - 1$,
$$\frac{1 - e^{-\beta(S - T_{N-1})}}{\beta} = \frac{1}{f(T_{N-1})}\int_{T_{N-2}}^{T_{N-1}}e^{-\beta(T_{N-1} - t)}\,\mathrm{d}F(t) - \frac{c_T}{c_D + (c_M - c_T)\beta}. \qquad (5.40)$$

The total expected cost is
$$\frac{\widetilde{C}(T_1, T_2, \dots, T_N; S)}{c_D + c_M\beta} \equiv \frac{C(T_1, T_2, \dots, T_N; S) - c_M\beta\int_0^S [F(S) - F(t)]\,\mathrm{d}t}{c_D + c_M\beta} = \sum_{k=0}^{N-1}\Big\{\frac{c_T}{c_D + c_M\beta}\bar{F}(T_k) + \Big[\frac{1}{\beta} - \frac{c_T}{c_D + c_M\beta}\Big]\int_{T_k}^{T_{k+1}}\big[1 - e^{-\beta(T_{k+1} - t)}\big]\,\mathrm{d}F(t)\Big\}. \qquad (5.41)$$

Using the same method as in Example 5.3, we can give a table similar to Table 5.3 (Problem 4).
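Problem 4 can be attacked the same way as Example 5.3: a shooting method on (5.40), with the inner integral evaluated numerically. The sketch below is an illustration under assumed values $F(t) = 1 - \exp(-t^2/100)$, $\beta = 0.01$, $c_T/[c_D + (c_M - c_T)\beta] = 2$, $S = 20$ and $N = 5$.

```python
import math

S, N, BETA, C = 20.0, 5, 0.01, 2.0
F = lambda t: 1.0 - math.exp(-t * t / 100.0)
f = lambda t: (t / 50.0) * math.exp(-t * t / 100.0)

def R(a, b, n=400):
    """(1/f(b)) * int_a^b e^{-beta(b-t)} dF(t) - C, by the midpoint rule."""
    h = (b - a) / n
    s = sum(math.exp(-BETA * (b - (a + (i + 0.5) * h))) * f(a + (i + 0.5) * h)
            for i in range(n)) * h
    return s / f(b) - C

def terminal(T1):
    Ts = [0.0, T1]
    for _ in range(N - 1):
        r = R(Ts[-2], Ts[-1])
        if r <= 0.0:
            return -1e9                 # next interval collapses: T1 too small
        if BETA * r >= 1.0:
            return 1e9                  # (5.40) has no finite solution: T1 too large
        # invert (5.40): T_{k+1} = T_k - (1/beta) ln(1 - beta * r)
        Ts.append(Ts[-1] - math.log(1.0 - BETA * r) / BETA)
    return Ts[-1]

lo, hi = 5.0, 12.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if terminal(mid) < S else (lo, mid)
T1 = 0.5 * (lo + hi)
print(round(T1, 2), round(terminal(T1), 6))   # T_N = S at the chosen T1
```

Because $\beta$ is small here, the resulting checking times are only slightly larger than those of the standard finite-interval model, consistent with the comparison of Tables 5.1 and 5.2.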

5.4 Problems

1. Derive (5.6).
2. Derive (5.10).
3. Derive (5.27)–(5.29).
4. When $F(t) = 1 - \exp(-\lambda t^2)$ and both $c_T/(c_D + c_M\beta)$ and $\beta$ are given, compute $T_k^*$ ($k = 1, 2, \dots, N$) for $N = 1, 2, \dots$, compare the expected costs in (5.41), and decide the sequence $\{T_k^*\}$ for optimal $N^*$.

References

1. Sherbrooke CC (1966) METRIC: a multi-echelon technique for recoverable item control. (RM-5078-PR) United States Air Force Project RAND
2. Sherbrooke CC (2004) Optimal inventory modeling of systems, multi-echelon techniques. Kluwer Academic Publishers, Boston
3. Hillestad RJ (1982) Dyna-METRIC: dynamic multi-echelon technique for recoverable item control. (R-2785-AF) United States Air Force Project RAND
4. Pyles R (1984) The Dyna-METRIC readiness assessment model: motivation, capabilities, and use. (R-2886-AF, AD-A145699) United States Air Force Project RAND
5. Regattieri A, Gamberi M, Gamberini R, Manzini R (2005) Managing lumpy demand for aircraft spare parts. J Air Trans Manag 11:426–431
6. Yoon KB, Sohn SY (2007) Finding the optimal CSP inventory level for multi-echelon system in Air Force using random effects regression model. Eur J Oper Res 180:1076–1085
7. Lee LH, Chew EP, Teng S, Chen Y (2008) Multi-objective simulation-based evolutionary algorithm for an aircraft spare parts allocation problem. Eur J Oper Res 189:476–491
8. De Smidt-Destombes KS, van der Heijden MC, van Harten A (2009) Joint optimisation of spare part inventory, maintenance frequency and repair capacity for k-out-of-N systems. Int J Prod Econ 118:260–268
9. Moon S, Hicks C, Simpson A (2012) The development of a hierarchical forecasting method for predicting spare parts demand in the South Korean Navy: a case study. Int J Prod Econ 140:794–802
10. Costantino F, Di Gravio G, Tronci M (2013) Multi-echelon, multi-indenture spare parts inventory control subject to system availability and budget constraints. Reliab Eng Syst Saf 119:95–101


11. Iverson RG, Fisher RD, Wenzel RF (1962) The growth of a rational, system approach to Naval repair parts inventories: the introduction of military essentiality. Thesis, Naval Postgraduate School
12. Vogel RH (1966) Inventory management of shipboard material. Thesis, Naval Postgraduate School
13. Nakagawa T (2005) Maintenance theory of reliability. Springer, London
14. Barlow RE, Proschan F (1965) Mathematical theory of reliability. Wiley, New York
15. Tadj L, Ouali S, Yacout S, Ait-Kadi S (eds) (2011) Replacement models with minimal repair. Springer, London
16. Fisher WW (1990) Markov process modeling of a maintenance system with spares, repair, cannibalization and manpower constraints. Math Comp Model 13:119–125
17. Bauer J, Cottrell DF, Gagnier TR, Kimball EW (1973) Dormancy and power on-off cycling effects on electronic equipment and part reliability. (AD-768619) Martin Marietta Aerospace
18. Ito K, Mizutani S, Nakagawa T (2020) Optimal inspection models with minimal repair. Reliab Eng Syst Saf 201:106946
19. Nakagawa T, Mizutani S (2009) A summary of maintenance policies for a finite interval. Reliab Eng Syst Saf 94:89–96

Chapter 6

Hierarchical Structure Reliability

Some systems are hierarchical, i.e., a system consists of several subsystems, each subsystem consists of several sub-subsystems, and such hierarchical relations continue from the system down to discrete parts in Fig. 1.3. One typical example is a jet fighter with maintenance hatches on its airframe: When a hatch is opened, there are equipment chassis called LRUs (Line Replaceable Units) which are easy to exchange at failures. An LRU has knobs for easy handling and electrical connector sockets for connecting to other LRUs, and it can be removed by disconnecting its connector sockets, releasing its fixed attachments, and pulling it out. An LRU contains many printed circuit boards; failed boards are extracted one by one, checked by a special function tester, and exchanged for normal ones. Such LRU repair work is performed at military maintenance facilities, and failed boards are repaired at civilian factories. Some systems in medical electronics and avionics have to be highly reliable and have special hierarchical constitutions. Because the flight control system (FCS) of a fly-by-wire aircraft has to be reliable, the Boeing company adopts complicated redundancy structures. In the Boeing 777 FCS, there is one type of computer called the Primary Flight Computer (PFC) [1]: It consists of 3 computers, i.e., left, center, and right PFCs, whose software and hardware are the same in Fig. 6.1. The outputs of these 3 PFCs enter a voter, which outputs a correct command by a 2-out-of-3 majority decision structure, and the command controls the actuators of the aircraft. Each PFC consists of 3 lanes, each of which has its own power supply and processor, and the 3 processors are of different types: AMD 29050, MOTOROLA 68040, and INTEL 80486. Their outputs also enter a voter which outputs a correct command by a 2-out-of-3 structure.
Thus, the Boeing FCS is an example of a hierarchical 2-out-of-3 system with 2 stages, which is called a triple-triple redundant architecture [2].

© Springer Nature Switzerland AG 2023 K. Ito and T. Nakagawa, Optimal Inspection Models with Their Applications, Springer Series in Reliability Engineering, https://doi.org/10.1007/978-3-031-22021-0_6


Fig. 6.1 Boeing 777 flight control system with hierarchical structures [2]

Highly reliable systems can be realized by employing redundancy. A standard parallel system, which consists of n identical subsystems, is the most typical redundant model. It was shown illustratively that a redundant system can be utilized for a specified mean time by either changing the replacement time or increasing the number of subsystems [3, p. 65]. In addition, reliabilities of many kinds of redundant systems were discussed [4]. Redundant systems with multiple failure modes [5] and with dependent failures of units [6] were considered. Furthermore, optimization methods of redundancy allocation for series-parallel systems were surveyed [7]. A variety of redundant systems affect maintenance policies. Lately, a new classification of three preventive maintenance models was suggested [8], and optimal maintenance policies for 2-phase systems with wearout states were investigated [9]. A k-out-of-n (1 ≤ k ≤ n) system can operate if and only if at least k of the total n units are operable [3, p. 216]. It is widely used in practical systems such as data transmission, redundant networks, and redundant copies, because such systems can realize high reliability with a moderate number of components [10, p. 7], and their reliability characteristics were investigated [11, 12]. The managers of mass transit and computer systems have to put n units on line with the assurance that at least k units will be available to complete the mission, and for this purpose, solutions for computing availabilities were provided [13]. A k-out-of-n system is also used as a self-checking checker for error detecting codes [14]. Studies of multi-state and consecutive k-out-of-n systems were surveyed [15, 16]. Airbus has produced the commercial aircraft A320, A340, and A380, which are of comparable scale to the Boeing 777. They also adopt fly-by-wire control systems; however, their architectures differ from that of the Boeing 777.
Which architecture is appropriate for an FCS when the Boeing and Airbus FCSs are compared? To answer this question, we propose the following hierarchical redundant system with j stages: System 1 has an original reliability $R_1$, and System 2 has reliability $R_2(R_1)$, which is a function of $R_1$. Repeating the above procedure, System (j + 1)


has reliability $R_{j+1}(R_j)$ (j = 1, 2, ...), which is called a hierarchical system with (j + 1) stages. Reliabilities $R_j$ of various systems are computed, and optimal replacement times and expected cost rates of Series-parallel and Parallel-series systems with 4 units are discussed and compared. Optimal inspection policies for several reliability models have been discussed theoretically and numerically in Chaps. 2–5 under the assumption that an operating unit has a failure distribution F(t). It is shown in Sect. 6.2 that the reliability of any redundant system with identical units is given by $\bar{F}_K(t) \equiv 1 - F_K(t)$, using a random K-out-of-n system. Accordingly, replacing F(t) with $F_K(t)$, we can discuss optimal inspection policies for any redundant system. As such examples, we take up Series-parallel, Parallel-series, and Consecutive k-out-of-n systems, and discuss their optimal inspection policies in Sect. 6.3.3. These techniques would be greatly useful for analyzing general redundant systems.

6.1 Basic System

Reliabilities $R_{j+1}(R_j)$ and mean times to failure (MTTF) $\mu_j$ of a parallel system, a series system, and a k-out-of-n system with n units are obtained, and their properties are summarized.

6.1.1 n-Unit Parallel System

We consider a parallel system with n (n = 1, 2, ...) identical units, each of which has the same reliability q (0 < q < 1). System 1 has reliability $R_1 = 1 - (1 - q)^n$, System 2 has reliability $R_2 = 1 - (1 - R_1)^n = 1 - (1 - q)^{2n}$, and generally, System j (j = 1, 2, ...) has reliability

\[ R_j = 1 - (1 - q)^{jn}, \tag{6.1} \]

which increases strictly with j from $R_1$ to 1. When $q \equiv 1 - F(t) \equiv \bar{F}(t)$, MTTF $\mu_j$ of System j is

\[ \mu_j = \int_0^\infty \left[ 1 - F(t)^{jn} \right] dt. \tag{6.2} \]


In particular, when $F(t) = 1 - \exp(-\lambda t)$ (Problem 1),

\[ \mu_j = \int_0^\infty \left[ 1 - (1 - e^{-\lambda t})^{jn} \right] dt = \frac{1}{\lambda} \sum_{i=1}^{jn} \frac{1}{i}. \tag{6.3} \]

In addition, the following approximations are given [17, p. 142]:

\[ \frac{1}{\lambda} \left[ \log(jn) + \frac{1}{jn} \right] < \mu_j < \frac{1}{\lambda} \left[ \log(jn) + 1 \right], \qquad \mu_j \approx \frac{1}{\lambda} \left[ \log(jn) + \gamma \right], \tag{6.4} \]

where $\gamma$ is Euler's constant, $\gamma = 0.5772156649\cdots$. When $F(t) = 1 - \exp[-(\lambda t)^m]$ ($m \ge 1$), which is a Weibull distribution, MTTF is

\[ \mu_j = \int_0^\infty \left[ 1 - \left( 1 - e^{-(\lambda t)^m} \right)^{jn} \right] dt, \tag{6.5} \]

and its approximation is

\[ \mu_j \approx \frac{1}{\lambda} \left( \sum_{i=1}^{jn} \frac{1}{i} \right)^{1/m} \approx \frac{1}{\lambda} \left[ \log(jn) + \gamma \right]^{1/m}. \tag{6.6} \]

Example 6.1 When $F(t) = 1 - \exp(-t^2)$, Table 6.1 presents $\mu_j$ and its approximations for a parallel system with j stages for n = 2, 3. These approximations almost match the exact values to the first decimal place. In particular, the approximation $(\sum_{i=1}^{jn} 1/i)^{1/m}/\lambda$ could be used in actual fields as an upper limit of $\mu_j$.
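As a quick numerical check of (6.3)–(6.6) and Table 6.1, note that for $F(t) = 1 - \exp(-t^2)$ the exact MTTF has a closed form by binomial expansion, since $\int_0^\infty e^{-it^2} dt = \frac{1}{2}\sqrt{\pi/i}$. The following Python sketch (function names are ours, not from the book) reproduces the first rows of Table 6.1:

```python
import math

GAMMA = 0.5772156649  # Euler's constant

def mttf_parallel(j, n):
    # mu_j = integral_0^inf [1 - (1 - e^{-t^2})^{jn}] dt, expanded binomially:
    # sum_{i=1}^{jn} C(jn, i) (-1)^{i+1} (sqrt(pi)/2) / sqrt(i)
    jn = j * n
    return (math.sqrt(math.pi) / 2) * sum(
        math.comb(jn, i) * (-1) ** (i + 1) / math.sqrt(i)
        for i in range(1, jn + 1)
    )

def approx_log(j, n, m=2):
    # [log(jn) + gamma]^{1/m} / lambda with lambda = 1, as in (6.6)
    return (math.log(j * n) + GAMMA) ** (1.0 / m)

def approx_harmonic(j, n, m=2):
    # [sum_{i=1}^{jn} 1/i]^{1/m} / lambda with lambda = 1, as in (6.6)
    return sum(1.0 / i for i in range(1, j * n + 1)) ** (1.0 / m)

print(round(mttf_parallel(1, 2), 4),    # 1.1458
      round(approx_log(1, 2), 4),       # 1.1271
      round(approx_harmonic(1, 2), 4))  # 1.2247
```

The three printed values match the first row (n = 2) of Table 6.1. The alternating binomial sum should only be used for moderate jn, where it is numerically stable.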

6.1.2 n-Unit Series System

We consider a series system with n identical units, each of which has the same reliability q (0 < q < 1). System 1 has reliability $R_1 = q^n$, System 2 has reliability $R_2 = R_1^n = q^{2n}$, and generally, System j (j = 1, 2, ...) has reliability

\[ R_j = q^{jn}, \tag{6.7} \]


Table 6.1 MTTF $\mu_j$ and approximations when $F(t) = 1 - \exp(-t^2)$

       |                 n = 2                    |                 n = 3
   j   | mu_j   (ln 2j+g)^{1/2}  (Sum 1/i)^{1/2} | mu_j   (ln 3j+g)^{1/2}  (Sum 1/i)^{1/2}
     1 | 1.1458     1.1271          1.2247       | 1.2904     1.2945          1.3540
     2 | 1.3885     1.4013          1.4434       | 1.5203     1.5391          1.5652
     3 | 1.5203     1.5391          1.5652       | 1.6446     1.6657          1.6820
     4 | 1.6092     1.6299          1.6486       | 1.7285     1.7499          1.7616
     5 | 1.6757     1.6970          1.7114       | 1.7914     1.8125          1.8216
     6 | 1.7285     1.7499          1.7616       | 1.8414     1.8621          1.8695
     7 | 1.7722     1.7934          1.8032       | 1.8828     1.9031          1.9093
     8 | 1.8093     1.8302          1.8387       | 1.9179     1.9379          1.9432
     9 | 1.8414     1.8621          1.8695       | 1.9485     1.9680          1.9727
    10 | 1.8698     1.8902          1.8968       | 1.9754     1.9946          1.9968
    20 | 2.0473     2.0655          2.0685       | 2.1447     2.1614          2.1633
    30 | 2.1447     2.1614          2.1633       | 2.2379     2.2532          2.2545
    40 | 2.2112     2.2269          2.2283       | 2.3018     2.3162          2.3171
    50 | 2.2615     2.2765          2.2776       | 2.3502     2.3639          2.3646
    60 | 2.3018     2.3162          2.3171       | 2.3889     2.4021          2.4027
    70 | 2.3353     2.3492          2.3500       | 2.4212     2.4340          2.4345
    80 | 2.3640     2.3775          2.3781       | 2.4489     2.4613          2.4617
    90 | 2.3889     2.4021          2.4027       | 2.4730     2.4851          2.4855
   100 | 2.4111     2.4239          2.4245       | 2.4944     2.5062          2.5065

Here $g = \gamma$ denotes Euler's constant, and the harmonic sums run over $i = 1, \ldots, 2j$ (for n = 2) and $i = 1, \ldots, 3j$ (for n = 3).

which decreases strictly with j from $R_1$ to 0. When $q = \bar{F}(t)$, MTTF $\mu_j$ of System j is

\[ \mu_j = \int_0^\infty \bar{F}(t)^{jn} dt. \tag{6.8} \]

In particular, when $F(t) = 1 - \exp(-\lambda t)$,

\[ \mu_j = \int_0^\infty e^{-jn\lambda t} dt = \frac{1}{jn\lambda}. \tag{6.9} \]

When $F(t) = 1 - \exp[-(\lambda t)^m]$ ($m \ge 1$),

\[ \mu_j = \int_0^\infty e^{-jn(\lambda t)^m} dt. \tag{6.10} \]

Example 6.2 When $F(t) = 1 - \exp(-t^2)$, Table 6.2 presents $\mu_j = \int_0^\infty \exp(-jnt^2) dt$ and its approximation $\tilde{\mu}_j = [1/(jn)]^{1/2}$ for n = 2, 3. This indicates that both $\mu_j$ and $\tilde{\mu}_j$ decrease with j and n, and that $\tilde{\mu}_j$ gives a good upper limit of $\mu_j$.
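The series case of Example 6.2 is even simpler, because $\int_0^\infty e^{-jnt^2} dt = \frac{1}{2}\sqrt{\pi/(jn)}$ in closed form; Table 6.2 can be regenerated directly. A minimal sketch (function names ours):

```python
import math

def mttf_series(j, n):
    # exact: integral_0^inf exp(-jn t^2) dt = (1/2) sqrt(pi / (jn))
    return 0.5 * math.sqrt(math.pi / (j * n))

def upper_limit(j, n):
    # approximation [1/(jn)]^{1/2}; an upper limit since sqrt(pi)/2 < 1
    return math.sqrt(1.0 / (j * n))

print(round(mttf_series(1, 2), 4), round(upper_limit(1, 2), 4))  # 0.6267 0.7071
```

The printed pair matches the first row (n = 2) of Table 6.2.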


Table 6.2 MTTF $\mu_j$ and approximation $\tilde{\mu}_j$ when $F(t) = 1 - \exp(-t^2)$

   j   |   n = 2: mu_j   [1/(2j)]^{1/2}  |  n = 3: mu_j   [1/(3j)]^{1/2}
     1 |     0.6267         0.7071       |    0.5117         0.5774
     2 |     0.4431         0.5000       |    0.3618         0.4082
     3 |     0.3618         0.4082       |    0.2954         0.3333
     4 |     0.3133         0.3536       |    0.2558         0.2887
     5 |     0.2803         0.3162       |    0.2288         0.2582
     6 |     0.2558         0.2887       |    0.2089         0.2357
     7 |     0.2369         0.2673       |    0.1934         0.2182
     8 |     0.2216         0.2500       |    0.1809         0.2041
     9 |     0.2089         0.2357       |    0.1706         0.1925
    10 |     0.1982         0.2236       |    0.1618         0.1826
    20 |     0.1401         0.1581       |    0.1144         0.1291
    30 |     0.1144         0.1291       |    0.0934         0.1054
    40 |     0.0991         0.1118       |    0.0809         0.0913
    50 |     0.0886         0.1000       |    0.0724         0.0817
    60 |     0.0809         0.0913       |    0.0661         0.0745
    70 |     0.0749         0.0845       |    0.0612         0.0690
    80 |     0.0701         0.0791       |    0.0572         0.0646
    90 |     0.0661         0.0745       |    0.0539         0.0609
   100 |     0.0627         0.0707       |    0.0512         0.0577

6.1.3 k-Out-of-n System

We consider a k-out-of-n system which can operate if and only if at least k (k = 1, 2, ..., n) of the total n units are operable, and each unit has the same reliability q (0 < q < 1). System 1 has reliability [17, p. 155], [18]

\[ R_1 = \sum_{i=0}^{n-k} \binom{n}{i} (1-q)^i q^{n-i} = \sum_{i=k}^{n} \binom{n}{i} q^i (1-q)^{n-i}, \tag{6.11} \]

and System 2 has reliability

\[ R_2 = \sum_{i=0}^{n-k} \binom{n}{i} (1 - R_1)^i R_1^{n-i} = \sum_{i=0}^{n-k} \binom{n}{i} \left[ \sum_{l=0}^{k-1} \binom{n}{l} q^l (1-q)^{n-l} \right]^i \left[ \sum_{l=0}^{n-k} \binom{n}{l} (1-q)^l q^{n-l} \right]^{n-i}. \]


Repeating the above computing procedure, the reliability of System (j + 1) (j = 1, 2, ...) is

\[ R_{j+1} = \sum_{i=0}^{n-k} \binom{n}{i} (1 - R_j)^i R_j^{n-i}. \tag{6.12} \]

For example, for a 2-out-of-3 system,

\[ R_1 = q^3 + 3(1-q)q^2 = 3q^2 - 2q^3, \]
\[ R_2 = R_1^3 + 3(1 - R_1) R_1^2 = (3q^2 - 2q^3)^2 \left[ 3 - 2q^2(3 - 2q) \right]. \]

Repeating the above procedure, reliabilities $R_j$ of a 2-out-of-3 system can be obtained. A natural question is whether the reliability of a k-out-of-n (n ≥ 2) system increases or decreases with j. For a parallel system, because $R_1 = 1 - (1-q)^n > q$, reliability is larger than that of a 1-unit system, and for a series system, because $R_1 = q^n < q$, reliability is smaller than that of a 1-unit system. For a 2-out-of-3 system, setting

\[ q^3 + 3(1-q)q^2 = q, \tag{6.13} \]

we have q = 0.5. That is, the reliability of a 2-out-of-3 system with j (j ≥ 2) stages increases with j for q > 0.5, is equal to 0.5 for any j when q = 0.5, and decreases with j for q < 0.5. For a general (n + 1)-out-of-(2n + 1) system, reliability is

\[ R_1 = \sum_{i=0}^{n} \binom{2n+1}{i} (1-q)^i q^{2n+1-i}. \tag{6.14} \]

Noting that

\[ \sum_{i=n+1}^{2n+1} \binom{2n+1}{i} = \sum_{i=n+1}^{2n+1} \binom{2n+1}{2n+1-i} = \sum_{i=0}^{n} \binom{2n+1}{i}, \]

we have

\[ \sum_{i=0}^{n} \binom{2n+1}{i} + \sum_{i=n+1}^{2n+1} \binom{2n+1}{i} = 2^{2n+1}. \]


Table 6.3 Reliabilities $R_j$ of a 2-out-of-3 system

   j | q = 0.3    0.4     0.5     0.6     0.7
   1 |  0.2160  0.3520  0.5000  0.6480  0.7840
   2 |  0.1198  0.2845  0.5000  0.7155  0.8802
   3 |  0.0396  0.1967  0.5000  0.8033  0.9604
   4 |  0.0046  0.1009  0.5000  0.8991  0.9954
   5 |  0.0001  0.0285  0.5000  0.9715  0.9999
   6 |  0.0000  0.0024  0.5000  0.9976  1.0000
   7 |  0.0000  0.0000  0.5000  1.0000  1.0000
   8 |  0.0000  0.0000  0.5000  1.0000  1.0000

Thus,

\[ \sum_{i=0}^{n} \binom{2n+1}{i} = 2^{2n}, \quad \text{i.e.,} \quad \sum_{i=0}^{n} \binom{2n+1}{i} \left( \frac{1}{2} \right)^{2n+1} = \frac{1}{2}. \]

Setting

\[ \sum_{i=0}^{n} \binom{2n+1}{i} (1-q)^i q^{2n+1-i} = q, \tag{6.15} \]

we have q = 0.5 for all n ≥ 1. In other words, when q = 0.5, the reliability $R_j$ of any (n + 1)-out-of-(2n + 1) system is the same and equal to 0.5. Thus, $R_j$ increases with j for q > 0.5 and decreases with j for q < 0.5.

Example 6.3 Table 6.3 presents reliabilities $R_j$ of a 2-out-of-3 system with j stages, which decrease with j to 0 for q < 0.5 and increase with j to 1 for q > 0.5, as already shown above.
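The recursion (6.12) is easy to iterate numerically. A minimal Python sketch (function names ours) that reproduces Table 6.3 for the 2-out-of-3 system:

```python
from math import comb

def next_stage(R, n, k):
    # eq. (6.12): R_{j+1} = sum_{i=0}^{n-k} C(n,i) (1-R)^i R^{n-i}
    return sum(comb(n, i) * (1 - R) ** i * R ** (n - i) for i in range(n - k + 1))

def stage_reliability(q, n, k, j):
    # reliability R_j of the hierarchical k-out-of-n system with j stages
    R = q
    for _ in range(j):
        R = next_stage(R, n, k)
    return R

print(round(stage_reliability(0.6, 3, 2, 2), 4))  # 0.7155, as in Table 6.3
```

Iterating from q = 0.5 leaves the reliability fixed at 0.5 for every stage, as shown above.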

6.2 Random K-Out-of-n System

We consider a K-out-of-n system when K is random and give 3 examples of hierarchical systems. Suppose that K is a random variable with probability function $p_k \equiv \Pr\{K = k\}$ (k = 1, 2, ..., n) for specified n. Denote the distribution of K by $P_k \equiv \Pr\{K \le k\} = \sum_{i=1}^{k} p_i$, where $P_0 \equiv 0$, $P_n = 1$, and $P_k$ increases with k from 0 to 1. Then, the reliability of a K-out-of-n system is, from (6.12) [17, p. 158], [19],

\[ R_1 = \sum_{k=1}^{n} p_k \sum_{i=k}^{n} \binom{n}{i} q^i (1-q)^{n-i} = \sum_{k=1}^{n} P_k \binom{n}{k} q^k (1-q)^{n-k}, \tag{6.16} \]

and when $q \equiv \bar{F}(t)$, MTTF is

\[ \mu_{n,P} = \sum_{k=1}^{n} P_k \binom{n}{k} \int_0^\infty \bar{F}(t)^k F(t)^{n-k} dt. \tag{6.17} \]

Note that $P_k$ represents the probability that the system operates when k units are operable. In particular, when

\[ P_k = \begin{cases} 0 & k \le K - 1, \\ 1 & k \ge K, \end{cases} \]

the system corresponds to a K-out-of-n system. Furthermore, when $F(t) = 1 - \exp(-\lambda t)$, MTTF is (Problem 1)

\[ \mu_{n,P} = \sum_{k=1}^{n} P_k \binom{n}{k} \int_0^\infty e^{-k\lambda t} \left( 1 - e^{-\lambda t} \right)^{n-k} dt = \frac{1}{\lambda} \sum_{k=1}^{n} \frac{P_k}{k}. \tag{6.18} \]

Optimal inspection policies for an operating unit with a failure distribution F(t) have been discussed in Chaps. 2–5. Replacing $\bar{F}(t)$ and $\mu$ with $R_1$ in (6.16) and $\mu_{n,P}$ in (6.17), respectively, we can discuss optimal policies for any redundant system with identical units, as will be shown in Sect. 6.3.3.
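Equation (6.18) reduces MTTF computation to a one-line sum once the distribution $P_k$ is known. A small sketch (function name ours; the $P_k$ vectors below are those derived for the Series-parallel, Parallel-series, and consecutive 2-out-of-5:F systems later in this chapter):

```python
def mttf_random_k(P, lam=1.0):
    # eq. (6.18): mu_{n,P} = (1/lam) * sum_{k=1}^{n} P_k / k,
    # where P[k-1] = Pr{K <= k}
    return sum(Pk / k for k, Pk in enumerate(P, start=1)) / lam

print(mttf_random_k([0, 1/3, 1, 1]))        # 0.75 = 3/(4 lam), Series-parallel
print(mttf_random_k([0, 2/3, 1, 1]))        # 11/12, Parallel-series
print(mttf_random_k([0, 1/10, 3/5, 1, 1]))  # 0.7 = 7/(10 lam), consecutive 2-out-of-5:F
```

The three outputs agree with the closed-form MTTFs (6.22), (6.29), and (6.33) obtained by direct integration.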

6.2.1 Series-Parallel System

We consider the Series-parallel system which consists of 4 units in Fig. 6.2. The system fails when any of the pairs of units (1, 3), (1, 4), (2, 3), and (2, 4) fails, and remains operable when pair (1, 2) or (3, 4) fails. When $q_i$ denotes the reliability of unit i (i = 1, 2, 3, 4), the reliability of the system is

Fig. 6.2 Series-parallel system with four units


\[
\begin{aligned}
R_1 = {}& q_1 q_2 q_3 q_4 + (1-q_1) q_2 q_3 q_4 + q_1 (1-q_2) q_3 q_4 + q_1 q_2 (1-q_3) q_4 \\
& + q_1 q_2 q_3 (1-q_4) + (1-q_1)(1-q_2) q_3 q_4 + q_1 q_2 (1-q_3)(1-q_4).
\end{aligned} \tag{6.19}
\]

When $q_i = q$,

\[ R_1 = q^4 + 4(1-q)q^3 + 2(1-q)^2 q^2 = (2 - q^2) q^2. \tag{6.20} \]

Thus, when $P_k$ is given by $P_1 = 0$, $P_2 = 1/3$, $P_3 = 1$, and $P_4 = 1$, from (6.16),

\[ R_1 = \frac{1}{3} \binom{4}{2} (1-q)^2 q^2 + \binom{4}{3} (1-q) q^3 + q^4 = (2 - q^2) q^2, \tag{6.21} \]

which agrees with (6.20). In addition, setting $(2 - q^2) q^2 = q$, we have $q = (\sqrt{5} - 1)/2 \approx 0.6180$, which is a golden ratio [17, p. 11]. Thus, if q > 0.6180, then reliability with j stages increases strictly with j to 1. Furthermore, when $q = \exp(-\lambda t)$, MTTF is

\[ \mu_1 = \int_0^\infty e^{-2\lambda t} \left( 2 - e^{-2\lambda t} \right) dt = \frac{3}{4\lambda}, \tag{6.22} \]

which is smaller than that of a 1-unit system. It is easily obtained from (6.18) that when $P_1 = 0$, $P_2 = 1/3$, and $P_3 = P_4 = 1$,

\[ \mu_1 = \frac{1}{\lambda} \left( \frac{1}{2} \times \frac{1}{3} + \frac{1}{3} + \frac{1}{4} \right) = \frac{3}{4\lambda}. \]

Next, we consider the system with 2 stages. From (6.21), reliability is

\[ R_2 = (2 - R_1^2) R_1^2 = \left[ 2 - q^4 (2 - q^2)^2 \right] (2 - q^2)^2 q^4. \tag{6.23} \]

When $q = \exp(-\lambda t)$, MTTF is

\[ \mu_2 = \int_0^\infty e^{-4\lambda t} \left( 2 - e^{-2\lambda t} \right)^2 \left[ 2 - e^{-4\lambda t} \left( 2 - e^{-2\lambda t} \right)^2 \right] dt = \frac{1051}{1680\lambda} \approx \frac{0.6256}{\lambda} < \frac{3}{4\lambda}. \tag{6.24} \]

Generally, reliability and MTTF with j (j = 1, 2, ...) stages can be calculated because

\[ R_{j+1} = R_j^2 \left( 2 - R_j^2 \right). \tag{6.25} \]

Clearly, $R_{j+1}$ increases strictly with j from $(2 - q^2) q^2$ to 1 for q > 0.6180.


Table 6.4 Reliability $R_j$ of Series-parallel system

   j | q = 0.60   0.61    0.6180  0.63    0.64
   1 |  0.5904  0.6057  0.6180  0.6363  0.6514
   2 |  0.5756  0.5992  0.6180  0.6458  0.6686
   3 |  0.5529  0.5892  0.6180  0.6602  0.6943
   4 |  0.5180  0.5738  0.6180  0.6817  0.7317
   5 |  0.4646  0.5501  0.6180  0.7134  0.7841
   6 |  0.3852  0.5136  0.6180  0.7589  0.8517
   7 |  0.2747  0.4580  0.6180  0.8202  0.9246
   8 |  0.1452  0.3755  0.6180  0.8929  0.9789
   9 |  0.0417  0.2621  0.6180  0.9589  0.9983
  10 |  0.0035  0.1327  0.6180  0.9935  1.0000

Example 6.4 Table 6.4 presents reliabilities $R_j$ of the Series-parallel system with 4 units for several values of q. When q = 0.6180, reliabilities $R_j$ remain equal to 0.6180; they decrease with j for q < 0.6180 and increase with j for q > 0.6180.
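The fixed point $q = (\sqrt{5} - 1)/2$ in Table 6.4 can be confirmed by iterating (6.20) and (6.25); a small sketch (function name ours):

```python
def sp_stage(q, j):
    # Series-parallel with 4 units: R_1 = (2 - q^2) q^2 (eq. 6.20),
    # then R_{j+1} = R_j^2 (2 - R_j^2) (eq. 6.25)
    R = (2 - q * q) * q * q
    for _ in range(j - 1):
        R = R * R * (2 - R * R)
    return R

golden = (5 ** 0.5 - 1) / 2            # about 0.6180
print(round(sp_stage(0.60, 2), 4))     # 0.5756, as in Table 6.4
```

Starting at the golden-ratio value leaves the reliability essentially unchanged through several stages, matching the middle column of Table 6.4.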

6.2.2 Parallel-Series System

We consider the Parallel-series system with 4 units in Fig. 6.3. The system fails when pair of units (1, 3) or (2, 4) fails, and remains operable when any of the pairs (1, 2), (1, 4), (2, 3), and (3, 4) fails. When $q_i$ denotes the reliability of unit i, the reliability of the system is

\[
\begin{aligned}
R_1 = {}& q_1 q_2 q_3 q_4 + (1-q_1) q_2 q_3 q_4 + q_1 (1-q_2) q_3 q_4 + q_1 q_2 (1-q_3) q_4 \\
& + q_1 q_2 q_3 (1-q_4) + (1-q_1)(1-q_2) q_3 q_4 + (1-q_1) q_2 q_3 (1-q_4) \\
& + q_1 (1-q_2)(1-q_3) q_4 + q_1 q_2 (1-q_3)(1-q_4).
\end{aligned} \tag{6.26}
\]

Fig. 6.3 Parallel-series system with four units


When $q_i = q$,

\[ R_1 = q^4 + 4(1-q)q^3 + 4(1-q)^2 q^2 = (2q - q^2)^2. \tag{6.27} \]

Thus, when K is random and $P_k$ is given by $P_1 = 0$, $P_2 = 2/3$, $P_3 = 1$, and $P_4 = 1$, from (6.16),

\[ R_1 = \frac{2}{3} \binom{4}{2} (1-q)^2 q^2 + \binom{4}{3} (1-q) q^3 + q^4 = (2q - q^2)^2, \tag{6.28} \]

which is larger than that of the Series-parallel system. In addition, setting $(2q - q^2)^2 = q$, we have $q = (3 - \sqrt{5})/2 \approx 0.3820$. Thus, if q > 0.3820, then reliabilities $R_j$ with j stages increase strictly with j to 1. Furthermore, when $q = \exp(-\lambda t)$, MTTF is

\[ \mu_1 = \int_0^\infty \left( 2e^{-\lambda t} - e^{-2\lambda t} \right)^2 dt = \frac{11}{12\lambda}, \tag{6.29} \]

which is smaller than that of a 1-unit system. It is easily obtained from (6.18) that when $P_1 = 0$, $P_2 = 2/3$, and $P_3 = P_4 = 1$,

\[ \mu_1 = \frac{1}{\lambda} \left( \frac{1}{2} \times \frac{2}{3} + \frac{1}{3} + \frac{1}{4} \right) = \frac{11}{12\lambda}. \]

Next, we consider the system with 2 stages. From (6.27), reliability $R_2$ is

\[ R_2 = \left( 2R_1 - R_1^2 \right)^2 = \left( 2q - q^2 \right)^4 \left[ 2 - (2q - q^2)^2 \right]^2. \tag{6.30} \]

When $q = \exp(-\lambda t)$, MTTF is

\[ \mu_2 = \int_0^\infty \left( 2e^{-\lambda t} - e^{-2\lambda t} \right)^4 \left[ 2 - \left( 2e^{-\lambda t} - e^{-2\lambda t} \right)^2 \right]^2 dt = \frac{4931}{6930\lambda} \approx \frac{0.7115}{\lambda}. \tag{6.31} \]

Example 6.5 Table 6.5 presents reliabilities $R_j$ of the Parallel-series system with j stages for several values of q. When q = 0.3820, $R_j$ remain equal to 0.3820; they decrease with j for q < 0.3820 and increase with j for q > 0.3820.


Table 6.5 Reliability $R_j$ of Parallel-series system

   j | q = 0.36   0.37    0.3820  0.39    0.40
   1 |  0.3486  0.3637  0.3820  0.3943  0.4096
   2 |  0.3314  0.3542  0.3820  0.4008  0.4244
   3 |  0.3057  0.3398  0.3820  0.4108  0.4471
   4 |  0.2683  0.3183  0.3820  0.4262  0.4820
   5 |  0.2159  0.2866  0.3820  0.4499  0.5354
   6 |  0.1483  0.2411  0.3820  0.4864  0.6148
   7 |  0.0754  0.1798  0.3820  0.5420  0.7253
   8 |  0.0211  0.1071  0.3820  0.6245  0.8548
   9 |  0.0017  0.0411  0.3820  0.7379  0.9583
  10 |  0.0000  0.0065  0.3820  0.8673  0.9965

6.2.3 Consecutive K-Out-of-5:F System

(1) k = 2

We consider the consecutive 2-out-of-5:F system in Figs. 6.4, 6.5, and 6.6, in which each unit has the same reliability q: When the number of operating units is 2, the system is operable in 1 case of Fig. 6.5, and when the number of operating units is 3, the system is operable in 6 cases of Fig. 6.6. Setting $P_0 = P_1 = 0$, $P_2 = 1/10$, $P_3 = 3/5$, and $P_4 = P_5 = 1$ in (6.16), the reliability of the system is

\[ R_1 = \frac{1}{10} \binom{5}{2} q^2 (1-q)^3 + \frac{3}{5} \binom{5}{3} q^3 (1-q)^2 + \binom{5}{4} q^4 (1-q) + \binom{5}{5} q^5 = q^2 \left( 1 + 3q - 4q^2 + q^3 \right). \tag{6.32} \]

Thus, setting $q^2 (1 + 3q - 4q^2 + q^3) = q$, we have $1 - 3q^2 + q^3 = 0$, and its solution is $q \approx 0.6527$. Furthermore, when $q = e^{-\lambda t}$, MTTF is

\[ \mu_1 = \int_0^\infty e^{-2\lambda t} \left( 1 + 3e^{-\lambda t} - 4e^{-2\lambda t} + e^{-3\lambda t} \right) dt = \frac{7}{10\lambda}. \tag{6.33} \]

Fig. 6.4 Consecutive 2-out-of-5 system

Fig. 6.5 Consecutive 2-out-of-5:F system is operable when 2 units operate


Fig. 6.6 Consecutive 2-out-of-5:F system is operable when 3 units operate

Table 6.6 Reliability $R_j$ of consecutive 2-out-of-5:F system

   j | q = 0.63   0.64    0.6527  0.66    0.67
   1 |  0.6162  0.6323  0.6527  0.6643  0.6802
   2 |  0.5937  0.6199  0.6527  0.6712  0.6961
   3 |  0.5571  0.5998  0.6527  0.6821  0.7207
   4 |  0.4974  0.5670  0.6527  0.6990  0.7577
   5 |  0.4022  0.5136  0.6527  0.7252  0.8104
   6 |  0.2628  0.4276  0.6527  0.7643  0.8777
   7 |  0.1057  0.2980  0.6527  0.8195  0.9459
   8 |  0.0142  0.1390  0.6527  0.8881  0.9888
   9 |  0.0002  0.0259  0.6527  0.9543  0.9995
  10 |  0.0000  0.0007  0.6527  0.9919  1.0000

It is easily obtained from (6.18) that when $P_0 = P_1 = 0$, $P_2 = 1/10$, $P_3 = 3/5$, and $P_4 = P_5 = 1$,

\[ \mu_1 = \frac{1}{\lambda} \left( \frac{1}{2} \times \frac{1}{10} + \frac{1}{3} \times \frac{3}{5} + \frac{1}{4} + \frac{1}{5} \right) = \frac{7}{10\lambda}. \]

Repeating the relation

\[ R_{j+1} = R_j^2 \left( 1 + 3R_j - 4R_j^2 + R_j^3 \right), \]

we can calculate reliabilities $R_{j+1}$ with (j + 1) stages.

Example 6.6 Table 6.6 presents reliabilities $R_j$ of a consecutive 2-out-of-5:F system with j stages for several values of q. When q = 0.6527, $R_j$ remain equal to 0.6527; they decrease with j for q < 0.6527 and increase with j for q > 0.6527.
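The values $P_2 = 1/10$ (1 operable arrangement out of the 10 with two units up) and $P_3 = 3/5$ (6 out of 10 with three units up) can be verified by brute-force enumeration of all $2^5$ unit states. A sketch (function names ours):

```python
from itertools import product

def has_run_of_failures(state, k):
    # True if the line arrangement contains k consecutive failed units
    run = 0
    for up in state:
        run = 0 if up else run + 1
        if run >= k:
            return True
    return False

def reliability(q, n=5, k=2):
    # exact reliability of a consecutive-k-out-of-n:F system by enumeration
    r = 0.0
    for state in product([0, 1], repeat=n):
        if not has_run_of_failures(state, k):
            p = 1.0
            for up in state:
                p *= q if up else 1 - q
            r += p
    return r

# operable arrangements with exactly 2 and 3 units up (k = 2):
counts = [sum(1 for s in product([0, 1], repeat=5)
              if sum(s) == m and not has_run_of_failures(s, 2))
          for m in (2, 3)]
print(counts)  # [1, 6], giving P_2 = 1/10 and P_3 = 6/10 = 3/5
```

Since all arrangements with the same number of operating units are equally likely, `reliability(q)` agrees exactly with the polynomial in (6.32).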


Table 6.7 Reliability $R_j$ of consecutive 3-out-of-5:F system

   j | q = 0.05   0.1     0.15    0.2     0.25
   1 |  0.0569  0.1252  0.2016  0.2832  0.3672
   2 |  0.0657  0.1629  0.2859  0.4231  0.5605
   3 |  0.0773  0.2223  0.4277  0.6455  0.8199
   4 |  0.0929  0.3205  0.6522  0.8980  0.9846
   5 |  0.1150  0.4852  0.9030  0.9970  1.0000
   6 |  0.1474  0.7312  0.9974  1.0000  1.0000
   7 |  0.1975  0.9522  1.0000  1.0000  1.0000
   8 |  0.2791  0.9997  1.0000  1.0000  1.0000
   9 |  0.4162  1.0000  1.0000  1.0000  1.0000
  10 |  0.6353  1.0000  1.0000  1.0000  1.0000

(2) k = 3

We consider a consecutive 3-out-of-5:F system: Setting $P_0 = 0$, $P_1 = 1/5$, $P_2 = 7/10$, and $P_3 = P_4 = P_5 = 1$ (Problem 2), reliability is, from (6.16),

\[ R_1 = \frac{1}{5} \binom{5}{1} q (1-q)^4 + \frac{7}{10} \binom{5}{2} q^2 (1-q)^3 + \binom{5}{3} q^3 (1-q)^2 + \binom{5}{4} q^4 (1-q) + \binom{5}{5} q^5 = 1 - (1-q)^3 (1 + 2q) \ge q, \tag{6.34} \]

which increases with j. Furthermore, when $q = e^{-\lambda t}$, MTTF is

\[ \mu_1 = \int_0^\infty \left[ 1 - (1 - e^{-\lambda t})^3 (1 + 2e^{-\lambda t}) \right] dt = \frac{4}{3\lambda}. \tag{6.35} \]

Repeating the relation

\[ R_{j+1} = 1 - (1 - R_j)^3 (1 + 2R_j), \tag{6.36} \]

we can obtain reliability with (j + 1) stages.

Example 6.7 Table 6.7 presents reliabilities $R_j$ of a consecutive 3-out-of-5:F system with j stages. This indicates that $R_j$ increase with q and j, and approach 1 quickly even if q is small.
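Iterating (6.34) and (6.36) reproduces Table 6.7; a minimal sketch (function name ours):

```python
def consec3of5_stage(q, j):
    # R_1 = 1 - (1-q)^3 (1+2q) (eq. 6.34);
    # R_{j+1} = 1 - (1-R_j)^3 (1+2R_j) (eq. 6.36)
    R = q
    for _ in range(j):
        R = 1 - (1 - R) ** 3 * (1 + 2 * R)
    return R

print(round(consec3of5_stage(0.1, 2), 4))  # 0.1629, as in Table 6.7
```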


6.3 Comparison of Series-Parallel and Parallel-Series Systems

6.3.1 Reliability Properties

As one example of modified hierarchical systems, we consider a series system with parallel units and a parallel system with series units. A well-known question in reliability theory is: which is better, redundancy of units or redundancy of systems? The Parallel-series system is the simplest one with redundancy of units, and the Series-parallel system is the simplest one with redundancy of systems. Suppose that the 4 units are identical and have reliability $q = \bar{F}(t)$. Then, the reliability $R_S$ and MTTF $\mu_S$ when $F(t) = 1 - \exp(-\lambda t)$ of the Series-parallel system are given in (6.21) and (6.22), respectively, and its failure rate is

\[ h_S(t) = \frac{4\lambda e^{-2\lambda t} - 4\lambda e^{-4\lambda t}}{2e^{-2\lambda t} - e^{-4\lambda t}} = \frac{4\lambda \left( 1 - e^{-2\lambda t} \right)}{2 - e^{-2\lambda t}}, \tag{6.37} \]

which increases strictly with t from 0 to $2\lambda$. The reliability $R_P$ and MTTF $\mu_P$ when $F(t) = 1 - \exp(-\lambda t)$ of the Parallel-series system are given in (6.27) and (6.29), respectively, and its failure rate is

\[ h_P(t) = \frac{4\lambda \left( 2e^{-\lambda t} - e^{-2\lambda t} \right) \left( e^{-\lambda t} - e^{-2\lambda t} \right)}{\left( 2e^{-\lambda t} - e^{-2\lambda t} \right)^2} = \frac{4\lambda \left( 1 - e^{-\lambda t} \right)}{2 - e^{-\lambda t}}, \tag{6.38} \]

which increases strictly with t from 0 to $2\lambda$. It can easily be shown that $R_P > R_S$, $\mu_P > \mu_S$, and $h_P(t) < h_S(t)$ for 0 < t < ∞, i.e., all three properties of the Parallel-series system are better than those of the Series-parallel system, as previously expected. Note that the Parallel-series system consists of 4 units and 4 paths, and the Series-parallel system consists of 4 units and 2 paths [10, p. 188]. For example, it is assumed that $c_1$ is the cost of a unit and $c_2$ is the cost of each path in the network. Then, the expected cost rates [10, p. 10] are, for the Parallel-series system,

\[ \frac{C_P}{\lambda} = \frac{4c_1 + 4c_2}{11/12} = \frac{48(c_1 + c_2)}{11}, \]

and for the Series-parallel system,

\[ \frac{C_S}{\lambda} = \frac{4c_1 + 2c_2}{3/4} = \frac{8(2c_1 + c_2)}{3}. \]

Thus, if $4c_1 > 7c_2$, then the Parallel-series system is economically better than the Series-parallel one, and conversely, if $4c_1 < 7c_2$, then the Series-parallel system is better than


the Parallel-series one. This means that when the network cost $c_2$ is high, the Series-parallel system is preferable to the Parallel-series system.
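The break-even condition $4c_1 = 7c_2$ follows directly from equating the two cost rates; a two-line check (the unit costs below are hypothetical):

```python
def cost_rate_P(c1, c2):
    # C_P / lam = (4 c1 + 4 c2) / (11/12) = 48 (c1 + c2) / 11
    return 48 * (c1 + c2) / 11

def cost_rate_S(c1, c2):
    # C_S / lam = (4 c1 + 2 c2) / (3/4) = 8 (2 c1 + c2) / 3
    return 8 * (2 * c1 + c2) / 3

print(cost_rate_P(7, 4), cost_rate_S(7, 4))  # 48.0 48.0, equal since 4*7 == 7*4
```

With $c_1 = 8$, $c_2 = 4$ (so $4c_1 > 7c_2$) the Parallel-series rate is the smaller one, and with $c_1 = 6$, $c_2 = 4$ the Series-parallel rate is, as stated above.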

6.3.2 Replacement Time

Suppose that the system is replaced at time T (0 < T ≤ ∞) or at failure, whichever occurs first, that $c_i$ (i = S, P) is the respective cost of the Series-parallel and Parallel-series systems, and that $c_R$ is the replacement cost for a failed system. It is assumed that each unit has a failure distribution F(t) with finite mean $\mu \equiv \int_0^\infty \bar{F}(t) dt$ and failure rate $h(t) \equiv f(t)/\bar{F}(t)$, where f(t) is a density function of F(t). Then, the expected cost rates [10, p. 10] are, for the Series-parallel system,

\[ C_S(T) = \frac{c_S + c_R \left[ 1 - R_S(T) \right]}{\int_0^T R_S(t) dt} = \frac{2(2c_1 + c_2) + c_R \left[ 1 - 2\bar{F}(T)^2 + \bar{F}(T)^4 \right]}{\int_0^T \left[ 2\bar{F}(t)^2 - \bar{F}(t)^4 \right] dt}, \tag{6.39} \]

and for the Parallel-series system,

\[ C_P(T) = \frac{c_P + c_R \left[ 1 - R_P(T) \right]}{\int_0^T R_P(t) dt} = \frac{4(c_1 + c_2) + c_R \left\{ 1 - \left[ 2\bar{F}(T) - \bar{F}(T)^2 \right]^2 \right\}}{\int_0^T \left[ 2\bar{F}(t) - \bar{F}(t)^2 \right]^2 dt}, \tag{6.40} \]

where $R_S(t)$ and $R_P(t)$ are the respective reliabilities of the Series-parallel and Parallel-series systems. We find optimal $T_S^*$ and $T_P^*$ to minimize $C_S(T)$ and $C_P(T)$, respectively, when the failure rate h(t) increases with t. Differentiating $C_P(T)$ with respect to T and setting it equal to zero,

\[ h_P(T) \int_0^T \left[ 2\bar{F}(t) - \bar{F}(t)^2 \right]^2 dt - 1 + \left[ 2\bar{F}(T) - \bar{F}(T)^2 \right]^2 = \frac{4(c_1 + c_2)}{c_R}, \tag{6.41} \]

where

\[ h_P(T) \equiv \frac{4h(T) F(T)}{1 + F(T)} = \frac{4h(T)}{1 + 1/F(T)}, \]

which increases strictly with T from 0 to 2h(∞). Thus, if






\[ h(\infty) \int_0^\infty \left[ 2\bar{F}(t) - \bar{F}(t)^2 \right]^2 dt > \frac{4(c_1 + c_2) + c_R}{2c_R}, \]

then there exists a finite and unique $T_P^*$ (0 < $T_P^*$ < ∞) which satisfies (6.41), and the resulting cost rate is

\[ C_P(T_P^*) = c_R h_P(T_P^*). \tag{6.42} \]

Next, differentiating $C_S(T)$ with respect to T and setting it equal to zero,

\[ h_S(T) \int_0^T \left[ 2\bar{F}(t)^2 - \bar{F}(t)^4 \right] dt - 1 + 2\bar{F}(T)^2 - \bar{F}(T)^4 = \frac{2(2c_1 + c_2)}{c_R}, \tag{6.43} \]

where

\[ h_S(T) \equiv \frac{4h(T) \left[ 1 - \bar{F}(T)^2 \right]}{2 - \bar{F}(T)^2} = \frac{4h(T)}{1 + 1/\left[ 1 - \bar{F}(T)^2 \right]}, \]

which increases strictly with T from 0 to 2h(∞). Thus, if

\[ h(\infty) \int_0^\infty \left[ 2\bar{F}(t)^2 - \bar{F}(t)^4 \right] dt > \frac{2(2c_1 + c_2) + c_R}{2c_R}, \]

then there exists a finite and unique $T_S^*$ (0 < $T_S^*$ < ∞) which satisfies (6.43), and the resulting cost rate is

\[ C_S(T_S^*) = c_R h_S(T_S^*). \tag{6.44} \]

In particular, when $F(t) = 1 - \exp(-\lambda t)$, (6.41) is

\[ \frac{4\left( 1 - e^{-\lambda T} \right)}{2 - e^{-\lambda T}} \left[ 2\left( 1 - e^{-2\lambda T} \right) - \frac{4}{3}\left( 1 - e^{-3\lambda T} \right) + \frac{1}{4}\left( 1 - e^{-4\lambda T} \right) \right] - 1 + \left( 2e^{-\lambda T} - e^{-2\lambda T} \right)^2 = \frac{4(c_1 + c_2)}{c_R}, \tag{6.45} \]

whose left-hand side increases strictly with T from 0 to 5/6. Thus, if $5/24 > (c_1 + c_2)/c_R$, then there exists a finite and unique $T_P^*$ (0 < $T_P^*$ < ∞) which satisfies (6.45), and the resulting cost rate is

\[ C_P(T_P^*) = \frac{4\lambda c_R \left( 1 - e^{-\lambda T_P^*} \right)}{2 - e^{-\lambda T_P^*}}. \tag{6.46} \]


Table 6.8 Optimal $T_P^*$, $T_S^*$ and resulting cost rates $C_P(T_P^*)/(\lambda c_R)$, $C_S(T_S^*)/(\lambda c_R)$ when $c_2/c_R = 0.05$ and $F(t) = 1 - e^{-\lambda t}$

  c_1/c_R | lam T_P*  C_P(T_P*)/(lam c_R) | lam T_S*  C_S(T_S*)/(lam c_R)
   0.05   |  0.8631        1.4654         |  0.6327        1.6716
   0.06   |  0.9573        1.5249         |  0.7511        1.7495
   0.07   |  1.0604        1.5812         |  0.9011        1.8202
   0.08   |  1.1750        1.6348         |  1.1093        1.8850
   0.09   |  1.3046        1.6862         |  1.4610        1.9447
   0.10   |  1.4544        1.7356         |  ∞             2.0000

On the other hand, (6.43) is

\[ \frac{4\left( 1 - e^{-2\lambda T} \right)}{2 - e^{-2\lambda T}} \left[ \left( 1 - e^{-2\lambda T} \right) - \frac{1}{4}\left( 1 - e^{-4\lambda T} \right) \right] - \left( 1 - e^{-2\lambda T} \right)^2 = \frac{2(2c_1 + c_2)}{c_R}, \tag{6.47} \]

whose left-hand side increases strictly with T from 0 to 1/2. Thus, if $1/4 > (2c_1 + c_2)/c_R$, then there exists a finite and unique $T_S^*$ (0 < $T_S^*$ < ∞) which satisfies (6.47), and the resulting cost rate is

\[ C_S(T_S^*) = \frac{4\lambda c_R \left( 1 - e^{-2\lambda T_S^*} \right)}{2 - e^{-2\lambda T_S^*}}. \tag{6.48} \]

Example 6.8 Table 6.8 presents optimal $\lambda T_P^*$, $\lambda T_S^*$, and their resulting cost rates $C_P(T_P^*)/(\lambda c_R)$, $C_S(T_S^*)/(\lambda c_R)$ for $c_1/c_R$ when $c_2/c_R = 0.05$ and $F(t) = 1 - \exp(-\lambda t)$. Both $\lambda T_i^*$ and $C_i(T_i^*)/(\lambda c_R)$ (i = P, S) increase with $c_1/c_R$. In this case, $C_S(T_S^*)/(\lambda c_R)$ is greater than $C_P(T_P^*)/(\lambda c_R)$; $T_P^*$ is greater than $T_S^*$ when $c_1/c_R \le 0.08$, and $T_P^*$ is less than $T_S^*$ when $c_1/c_R \ge 0.09$.
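The optimal $\lambda T_P^*$ in Table 6.8 can be recomputed by solving (6.45) numerically, since its left-hand side increases strictly in T. A bisection sketch with $\lambda = 1$ (function names and the search bracket are our choices):

```python
import math

def lhs_45(T):
    # left-hand side of (6.45) with lam = 1
    e = math.exp
    h = 4 * (1 - e(-T)) / (2 - e(-T))
    integ = (2 * (1 - e(-2 * T)) - (4 / 3) * (1 - e(-3 * T))
             + 0.25 * (1 - e(-4 * T)))
    return h * integ - 1 + (2 * e(-T) - e(-2 * T)) ** 2

def solve_45(rhs, lo=1e-9, hi=50.0):
    # lhs_45 increases strictly from 0 to 5/6, so bisection applies for rhs < 5/6
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if lhs_45(mid) < rhs else (lo, mid)
    return 0.5 * (lo + hi)

# c1/cR = c2/cR = 0.05 gives rhs = 4(c1 + c2)/cR = 0.4
print(round(solve_45(0.4), 4))  # about 0.8631, the first row of Table 6.8
```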

6.3.3 Inspection Time

We consider the periodic inspection policy for the Series-parallel and Parallel-series systems with 4 units in which the system is checked at periodic times kT (k = 1, 2, ...). When $q = \bar{F}(t)$, the total expected cost for the Series-parallel system is, from (2.7) and (6.27),

\[ C_S(T) = (c_T + c_D T) \sum_{k=0}^{\infty} \left[ 2\bar{F}(kT) - \bar{F}(kT)^2 \right]^2 - c_D \mu_S, \tag{6.49} \]

where $c_T$ is the cost of one check, $c_D$ is the downtime cost per unit of time for the time elapsed between failure and its detection, and $\mu_S \equiv \int_0^\infty \left[ 2\bar{F}(t) - \bar{F}(t)^2 \right]^2 dt$.


In particular, when $F(t) = 1 - e^{-\lambda t}$, the total expected cost is

\[ C_S(T) = (c_T + c_D T) \sum_{k=0}^{\infty} \left( 2e^{-\lambda kT} - e^{-2\lambda kT} \right)^2 - \frac{11c_D}{12\lambda}. \tag{6.50} \]

We find optimal $T_S^*$ to minimize $C_S(T)$. Clearly,

\[ \lim_{T \to 0} C_S(T) = \lim_{T \to \infty} C_S(T) = \infty. \]

Thus, a finite $T_S^*$ (0 < $T_S^*$ < ∞) exists. Differentiating $C_S(T)$ with respect to T and setting it equal to zero, $T_S^*$ satisfies

\[ \frac{e^{2\lambda T} \left[ 4/\left( 1 - e^{-2\lambda T} \right) - 4/\left( 1 - e^{-3\lambda T} \right) + 1/\left( 1 - e^{-4\lambda T} \right) \right]}{4 \left[ 2/\left( 1 - e^{-2\lambda T} \right)^2 - 3e^{-\lambda T}/\left( 1 - e^{-3\lambda T} \right)^2 + e^{-2\lambda T}/\left( 1 - e^{-4\lambda T} \right)^2 \right]} - \lambda T = \frac{c_T}{c_D/\lambda}. \tag{6.51} \]

The total expected cost for the Parallel-series system is

\[ C_P(T) = (c_T + c_D T) \sum_{k=0}^{\infty} \left[ 2\bar{F}(kT)^2 - \bar{F}(kT)^4 \right] - c_D \mu_P, \tag{6.52} \]

where $\mu_P \equiv \int_0^\infty \left[ 2\bar{F}(t)^2 - \bar{F}(t)^4 \right] dt$. In particular, when $F(t) = 1 - e^{-\lambda t}$, the total expected cost is

\[ C_P(T) = (c_T + c_D T) \sum_{k=0}^{\infty} \left( 2e^{-2\lambda kT} - e^{-4\lambda kT} \right) - \frac{3c_D}{4\lambda}. \tag{6.53} \]

We find optimal $T_P^*$ to minimize $C_P(T)$. Clearly,

\[ \lim_{T \to 0} C_P(T) = \lim_{T \to \infty} C_P(T) = \infty. \]

Thus, a finite $T_P^*$ (0 < $T_P^*$ < ∞) exists. Differentiating $C_P(T)$ with respect to T and setting it equal to zero, $T_P^*$ satisfies

\[ \frac{\left( e^{2\lambda T} - 1 \right) \left[ 2 - 1/\left( 1 + e^{-2\lambda T} \right) \right]}{4 \left[ 1 - e^{-2\lambda T}/\left( 1 + e^{-2\lambda T} \right)^2 \right]} - \lambda T = \frac{c_T}{c_D/\lambda}. \tag{6.54} \]


Table 6.9 Optimal $\lambda T_S^*$, $\lambda T_P^*$ and resulting costs $C_S(T_S^*)/(c_D/\lambda)$, $C_P(T_P^*)/(c_D/\lambda)$ when $F(t) = 1 - \exp(-\lambda t)$

  c_T/(c_D/lam) | lam T_S*  C_S(T_S*)/(c_D/lam) | lam T_P*  C_P(T_P*)/(c_D/lam)
        1       |  1.1634          1.931        |  1.0010          2.120
        5       |  1.8689          6.525        |  1.6213          6.752
       10       |  2.1952         11.825        |  1.9225         12.071
       15       |  2.3906         17.010        |  2.1058         17.265
       20       |  2.5309         22.144        |  2.2386         22.406
       25       |  2.6404         27.250        |  2.3429         27.517

Example 6.9 Table 6.9 presents optimal $\lambda T_S^*$ and $\lambda T_P^*$ for $c_T/(c_D/\lambda)$ when $F(t) = 1 - \exp(-\lambda t)$. Both $\lambda T_S^*$ and $\lambda T_P^*$ increase with $c_T/(c_D/\lambda)$. This indicates that $T_S^* > T_P^*$ and $C_S(T_S^*) < C_P(T_P^*)$.

Furthermore, suppose that the system is checked at successive times $T_k$ (k = 1, 2, ...) as in Sect. 2.1. Then, replacing kT with $T_k$, the total expected cost for the Series-parallel system is, from (6.49),

\[ C_S(\mathbf{T}) = \sum_{k=0}^{\infty} \left[ c_T + c_D (T_{k+1} - T_k) \right] \left[ 2\bar{F}(T_k) - \bar{F}(T_k)^2 \right]^2 - c_D \mu_S, \tag{6.55} \]

and for the Parallel-series system, from (6.52),

\[ C_P(\mathbf{T}) = \sum_{k=0}^{\infty} \left[ c_T + c_D (T_{k+1} - T_k) \right] \left[ 2\bar{F}(T_k)^2 - \bar{F}(T_k)^4 \right] - c_D \mu_P. \tag{6.56} \]

Using the same method given in Sect. 2.1, we can compute optimal $T_k^*$ which minimize the total expected costs $C_i(\mathbf{T})$ (i = S, P) (Problem 3).
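The first row of Table 6.9 can be reproduced by evaluating (6.50) as printed and minimizing over T. A golden-section sketch with $\lambda = 1$ and $c_T = c_D = 1$ (series truncation and search bracket are our choices):

```python
import math

def cost_S(T, cT=1.0, cD=1.0):
    # total expected cost (6.50) with lam = 1; the series is truncated
    # once its terms become negligible
    s, k, r = 0.0, 0, 1.0
    while r > 1e-15:
        r = (2 * math.exp(-k * T) - math.exp(-2 * k * T)) ** 2
        s += r
        k += 1
    return (cT + cD * T) * s - cD * 11 / 12

def argmin_T(f, lo=0.2, hi=5.0):
    # golden-section search for the minimizer of a unimodal function
    g = (math.sqrt(5) - 1) / 2
    while hi - lo > 1e-10:
        c, d = hi - g * (hi - lo), lo + g * (hi - lo)
        if f(c) < f(d):
            hi = d
        else:
            lo = c
    return 0.5 * (lo + hi)

T_star = argmin_T(cost_S)
print(round(T_star, 4), round(cost_S(T_star), 3))  # about 1.1634 and 1.931
```

The same routine applied to (6.53) recovers the $\lambda T_P^*$ column of Table 6.9.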

6.4 Problems

1. Derive (6.3) and (6.18).
2. Derive that $P_0 = 0$, $P_1 = 1/5$, $P_2 = 7/10$, and $P_3 = P_4 = P_5 = 1$ in (6.34).
3. Using the method in Sect. 2.1, compute $\mathbf{T}^*$ to minimize $C_i(\mathbf{T})$ (i = S, P) numerically.


References

1. Yeh YC (1998) Design considerations in Boeing 777 fly-by-wire computers. In: Proceedings of the Third IEEE International High-Assurance Systems Engineering Symposium, pp 64–72
2. Yeh YC (1996) Triple-triple redundant 777 primary flight computer. In: Proceedings of the IEEE Aerospace Conference, pp 293–307
3. Barlow RE, Proschan F (1965) Mathematical theory of reliability. Wiley, New York
4. Ushakov IA (1994) Handbook of reliability engineering. Wiley, New York
5. Pham H (2003) Reliability of systems with multiple failure modes. In: Pham H (ed) Handbook of reliability engineering. Springer, London, pp 19–36
6. Blokus A (2006) Reliability analysis of large systems with dependent components. Inter J Reliab Qual Saf Eng 13:1–14
7. Zia L, Coit DW (2010) Redundancy allocation for series-parallel systems using a column generation approach. IEEE Trans Reliab 59:706–717
8. Wu S, Zuo MJ (2010) Linear and nonlinear preventive maintenance models. IEEE Trans Reliab 59:242–249
9. MacPherson AJ, Glazebrook KD (2011) A dynamic programming policy improvement approach to the development of maintenance policies for 2-phase systems with aging. IEEE Trans Reliab 60:448–459
10. Nakagawa T (2008) Advanced reliability models and maintenance policies. Springer, London
11. Linton DG, Saw JG (1974) Reliability analysis of the k-out-of-n: F system. IEEE Trans Reliab R-23:97–103
12. Nakagawa T (1985) Optimization problems in k-out-of-n systems. IEEE Trans Reliab R-34:248–250
13. Kenyon RL, Newell RL (1983) Steady-state availability of k-out-of-n: G system with single repair. IEEE Trans Reliab R-32:188–190
14. Lala PK (1985) Fault tolerant and fault testable hardware design. Prentice Hall, London
15. Chang CJ, Cui L, Hwang FK (2000) Reliability of consecutive-k systems. Kluwer, Dordrecht
16. Kuo W, Zuo MJ (2003) Optimal reliability modeling: principles and applications. Wiley, Hoboken
17. Nakagawa T (2014) Random maintenance policies. Springer, London
18. Ito K, Nakagawa T (2019) Reliability properties of K-out-of-N: G systems. In: Ram M, Dohi T (eds) Systems engineering: reliability analysis using k-out-of-n structures. CRC Press, Boca Raton, pp 25–40
19. Ito K, Zhao X, Nakagawa T (2017) Random number of units for K-out-of-n systems. Appl Math Modell 45:563–572

Chapter 7

Application Examples of Storage System

This chapter describes optimal inspection and maintenance policies for storage systems. Systems such as missiles and spare parts of aircraft are kept in storage for a long time, from transportation to usage [1, 2], and are called storage systems. To keep reliability high, such a system has to be checked and maintained at periodic times and, at last, is overhauled if its reliability becomes lower than a prespecified level [3, 4]. Section 7.1 takes up the fundamental optimal inspection policies for storage systems [5–11]: A system is assumed to be composed of two units, where unit 1 can be maintained periodically and replaced with a new one at failure, and unit 2 cannot, as shown in Fig. 7.1. The system is overhauled if its reliability becomes lower than a prespecified value. The expected number of inspections and the mean time to overhaul are obtained, and the optimal inspection time to minimize the total expected cost is discussed. Section 7.2 takes up modified policies of Sect. 7.1 in which the system is overhauled when its reliability becomes lower than a prespecified level or at failure, whichever occurs first. Extended inspection policies for a storage system are discussed in Sects. 7.3 and 7.4. In Sect. 7.3, optimal inspection policies for a storage system with a finite number of inspections are considered [12]: A system cannot be used after a finite number of inspections because it contains some parts which have to be replaced when their total operating times have exceeded a prespecified time of quality warranty. It is well-known that large electric currents occur at power on and off in electronic circuits with inductive parts such as coils and motors. Storage systems are constituted of various kinds of parts, and some of them are degraded by the power on-off cycles of each inspection [3]. The system is overhauled at the next inspection after it has failed or at a specified number of inspections, whichever occurs first. Then, the expected cost rate is derived and the optimal time to minimize it is discussed. Section 7.4 takes up optimal inspection policies for a storage system which degrades with time and at each inspection [13, 14]: The cumulative hazard function of such system degradation is introduced, and the mean time to system failure and the expected number of inspections before failure are obtained. Using these quantities, we derive the total expected cost until failure detection and compute numerically the optimal inspection time to minimize it.

© Springer Nature Switzerland AG 2023
K. Ito and T. Nakagawa, Optimal Inspection Models with Their Applications, Springer Series in Reliability Engineering, https://doi.org/10.1007/978-3-031-22021-0_7

7.1 Basic Inspection Model

A system consists of two units, where unit i has a cumulative hazard function H_i(t) (i = 1, 2), and units 1 and 2 are a broad division of storage system components in Fig. 7.1 [15, p. 216]: When the system is checked at periodic times NT (N = 1, 2, ...), unit 1 is maintained and is like new after every time NT, and unit 2 is not, i.e., its hazard rate remains unchanged by any inspections. From these assumptions, reliability of the system with no inspection is

$$\overline{F}(t) \equiv e^{-H_1(t)-H_2(t)}. \qquad (7.1)$$

If the system is checked and maintained at time t, reliability just after t is $\overline{F}(t+0) = e^{-H_2(t)}$. Thus, reliabilities just before and after NT are, respectively,

$$\overline{F}(NT-0) = e^{-H_1(T)-H_2(NT)}, \qquad \overline{F}(NT+0) = e^{-H_2(NT)}. \qquad (7.2)$$

Fig. 7.1 Simple storage system with units 1 and 2

Suppose that if reliability of the system is equal to or lower than q (0 < q ≤ 1), then the overhaul is made, and units 1 and 2 are renewed after overhaul. Then, if $\overline{F}(NT-0) > q \ge \overline{F}[(N+1)T-0]$, i.e.,

$$e^{-H_1(T)-H_2(NT)} > q \ge e^{-H_1(T)-H_2((N+1)T)} \quad (N = 0, 1, 2, \ldots), \qquad (7.3)$$

the overhaul is made, and its mean time is NT + t0, where t0 (0 < t0 ≤ T) satisfies

$$e^{-H_1(t_0)-H_2(NT+t_0)} = q. \qquad (7.4)$$

This means that reliability is greater than q just before the Nth inspection and is equal to q at time NT + t0. Therefore, the expected cost rate until overhaul is

$$C(T, N) = \frac{N c_T + c_R}{NT + t_0}, \qquad (7.5)$$

where cT is the cost of inspection and cR is the cost of overhaul. If no inspection is made, the expected cost rate is

$$C(T, 0) = \frac{c_R}{t_0}. \qquad (7.6)$$

Suppose that the failure time of units has an exponential distribution, i.e., H_i(t) = λ_i t (i = 1, 2). Then, (7.3) is

$$\frac{1}{Na+1}\ln\frac{1}{q} \le \lambda T < \frac{1}{(N-1)a+1}\ln\frac{1}{q}, \qquad (7.7)$$

where λ ≡ λ1 + λ2 and a ≡ H2(T)/[H1(T) + H2(T)] = λ2/λ, i.e., λ2 = aλ and λ1 = (1 − a)λ. Here a (0 ≤ a ≤ 1) represents an efficiency of inspection [3] and is adopted widely in practical reliability calculations of a storage system. When inspection time T is given, the optimal inspection number N∗ which satisfies (7.7) is determined. In particular, if ln(1/q) ≤ λT then N∗ = 0, and N∗ diverges as λT tends to 0. In this case, (7.4) is

$$N^* \lambda_2 T + \lambda t_0 = \ln\frac{1}{q}. \qquad (7.8)$$

Thus, the total mean time to overhaul is

$$N^* T + t_0 = N^*(1-a)T + \frac{1}{\lambda}\ln\frac{1}{q}, \qquad (7.9)$$

and the expected cost rate is

$$C(T, N^*) = \frac{N^* c_T + c_R}{N^*(1-a)T + \ln(1/q)/\lambda}. \qquad (7.10)$$
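As a numerical check of these formulas, N∗ from (7.7) and the resulting time λ(N∗T + t0) from (7.9) can be computed directly; the following is a minimal sketch (the function names are ours, not from the text):

```python
import math

def optimal_N(lam_T, a, q):
    """N* from (7.7): the unique integer N with
    ln(1/q)/(N*a + 1) <= lam_T < ln(1/q)/((N-1)*a + 1)."""
    x = (math.log(1.0 / q) / lam_T - 1.0) / a
    return max(0, math.ceil(x))

def mean_time_to_overhaul(lam_T, a, q):
    """Returns (N*, lambda * (N*T + t0)) using (7.9)."""
    N = optimal_N(lam_T, a, q)
    return N, N * (1.0 - a) * lam_T + math.log(1.0 / q)

# Reproduce a row of Table 7.1 (a = 0.1, q = 0.8) at lam_T = 0.2:
N, t = mean_time_to_overhaul(0.2, 0.1, 0.8)  # N = 2, t ≈ 0.583
```

For λT = 0.2 this lands in the row N∗ = 2 of Table 7.1, with λ(N∗T + t0) ≈ 0.583 inside the listed interval [0.558, 0.588).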

Table 7.1 Optimal N∗ and resulting time λ(N∗T + t0) when a = 0.1 and q = 0.8

N∗   λT               λ(N∗T + t0)
0    [0.223, ∞)       0.223
1    [0.203, 0.223)   [0.406, 0.424)
2    [0.186, 0.203)   [0.558, 0.588)
3    [0.172, 0.186)   [0.687, 0.725)
4    [0.159, 0.172)   [0.797, 0.841)
5    [0.149, 0.159)   [0.893, 0.940)
6    [0.139, 0.149)   [0.976, 1.026)
7    [0.131, 0.139)   [1.050, 1.102)
8    [0.124, 0.131)   [1.116, 1.168)
9    [0.117, 0.124)   [1.174, 1.227)
10   [0.112, 0.117)   [1.227, 1.280)

When inspection time T and q are given, we compute optimal N∗ from (7.7) and N∗T + t0 from (7.9). Substituting these values into (7.10), we obtain the expected cost rate C(T, N∗). Changing T from 0 to ln(1/q)/[λ(1 − a)], we can compute optimal T∗ to minimize C(T, N∗). In the particular case of λT ≥ ln(1/q)/(1 − a), N∗ = 0 and the expected cost rate is

$$C(T, 0) = \frac{c_R}{t_0} = -\frac{\lambda c_R}{\ln q}. \qquad (7.11)$$

Suppose that the failure time of units has a Weibull distribution, i.e., H_i(t) = (λ_i t)^m (i = 1, 2) for m ≥ 1. Then, (7.3) and (7.4) are, respectively,

$$\left\{\frac{1}{a[(N+1)^m - 1] + 1}\ln\frac{1}{q}\right\}^{1/m} \le \lambda T < \left\{\frac{1}{a(N^m - 1) + 1}\ln\frac{1}{q}\right\}^{1/m}, \qquad (7.12)$$

$$(1-a)t_0^m + a(NT + t_0)^m = \frac{1}{\lambda^m}\ln\frac{1}{q}, \qquad (7.13)$$

where λ^m ≡ λ1^m + λ2^m and a ≡ λ2^m/λ^m.

7.2 Modified Inspection Model

$$\overline{F}(KT) \ge q > \overline{F}[(K+1)T], \qquad (7.15)$$

i.e., K (K = 0, 1, 2, ...) satisfies the following equation:

$$[\overline{F}_1(T)]^K \overline{F}_2(KT) \ge q > [\overline{F}_1(T)]^{K+1}\, \overline{F}_2[(K+1)T]. \qquad (7.16)$$

Noting that $\overline{F}(KT)$ decreases with T from 1 to 0, K decreases with T from ∞ to 0.

(6) The system is overhauled at detection of failure or at time (K + 1)T, whichever occurs first.

Introduce the following costs: Cost cT is required for one inspection, cost cD is required per unit of time for the downtime elapsed between failure and its detection, and cost cR is required for overhaul. The expected cost when the system is overhauled at time (K + 1)T before failure is $\overline{F}[(K+1)T](K c_T + c_R)$, and when the system fails and is detected at time (k + 1)T (k = 0, 1, ..., K),

$$\sum_{k=0}^{K} \int_{kT}^{(k+1)T} \bigl\{k c_T + [(k+1)T - t]\, c_D + c_R\bigr\}\, dF(t).$$

Thus, the total expected cost until overhaul is

$$C(T) = \overline{F}[(K+1)T](K c_T + c_R) + \sum_{k=0}^{K} \int_{kT}^{(k+1)T} \bigl\{k c_T + [(k+1)T - t]\, c_D + c_R\bigr\}\, dF(t)$$
$$= (c_T + c_D T) \sum_{k=0}^{K} \overline{F}(kT) - c_D \int_0^{(K+1)T} \overline{F}(t)\, dt + c_R - c_T. \qquad (7.17)$$

Furthermore, from (7.14), the total expected cost is

$$C(T) = (c_T + c_D T) \sum_{k=0}^{K} [\overline{F}_1(T)]^k\, \overline{F}_2(kT) - c_D \sum_{k=0}^{K} [\overline{F}_1(T)]^k \int_{kT}^{(k+1)T} \overline{F}_1(t - kT)\, \overline{F}_2(t)\, dt + c_R - c_T. \qquad (7.18)$$

When T and $\overline{F}_i(t)$ are given, K and C(T) are computed from (7.16) and (7.18), respectively. Changing T, we can determine optimal T∗ to minimize C(T). We consider the following three particular cases:

(i) When $\overline{F}_1(t) \equiv 1$, i.e., the system is unchanged at any inspection and $\overline{F}(t) = \overline{F}_2(t)$, the total expected cost is

$$C(T) = (c_T + c_D T) \sum_{k=0}^{K} \overline{F}_2(kT) - c_D \int_0^{(K+1)T} \overline{F}_2(t)\, dt + c_R - c_T, \qquad (7.19)$$

where K is given by $\overline{F}_2(KT) \ge q > \overline{F}_2[(K+1)T]$. If K = ∞, then this corresponds to the usual periodic inspection model in Sect. 2.1 when cR = cT.

(ii) When $\overline{F}_2(t) \equiv 1$, i.e., the system is like new after every inspection and $\overline{F}(t) = [\overline{F}_1(T)]^k\, \overline{F}_1(t - kT)$ for kT ≤ t < (k + 1)T,

$$C(T) = (c_T + c_D T) \sum_{k=0}^{K} [\overline{F}_1(T)]^k - c_D \sum_{k=0}^{K} [\overline{F}_1(T)]^k \int_{kT}^{(k+1)T} \overline{F}_1(t - kT)\, dt + c_R - c_T, \qquad (7.20)$$

where K is given by $[\overline{F}_1(T)]^K \ge q > [\overline{F}_1(T)]^{K+1}$.

(iii) When K = 0, we should overhaul the system at time T0, where T0 satisfies $\overline{F}(T_0) = q$. Then, the total expected cost is

$$C(T_0) = c_D \int_0^{T_0} \overline{F}(t)\, dt + c_R. \qquad (7.21)$$

It can easily be seen from the above results that, changing T from 0 to T0, we can determine optimal T∗ to minimize C(T) in (7.18).

When $\overline{F}_i(t) \equiv \exp(-\lambda_i t^m)$ (i = 1, 2) for m ≥ 1, from (7.14), reliability of the system is, for kT ≤ t < (k + 1)T,

$$\overline{F}(t) = e^{-k\lambda_1 T^m}\, e^{-\lambda_1 (t-kT)^m - \lambda_2 t^m}. \qquad (7.22)$$

Thus, the total expected cost is, from (7.18),

$$C(T) = (c_T + c_D T) \sum_{k=0}^{K} e^{-k\lambda_1 T^m - \lambda_2 (kT)^m} - c_D \sum_{k=0}^{K} e^{-k\lambda_1 T^m} \int_{kT}^{(k+1)T} e^{-\lambda_1 (t-kT)^m - \lambda_2 t^m}\, dt + c_R - c_T, \qquad (7.23)$$

where K is given by

$$K\lambda_1 + K^m \lambda_2 \le \frac{1}{T^m}\ln\frac{1}{q} < (K+1)\lambda_1 + (K+1)^m \lambda_2.$$

Furthermore, when λ1 = 0 and $\overline{F}(t) = e^{-\lambda_2 t^m}$, (7.23) is

$$C(T) = (c_T + c_D T) \sum_{k=0}^{K} e^{-\lambda_2 (kT)^m} - c_D \int_0^{(K+1)T} e^{-\lambda_2 t^m}\, dt + c_R - c_T, \qquad (7.24)$$

where K is given by

$$\frac{1}{T}\left(\frac{1}{\lambda_2}\ln\frac{1}{q}\right)^{1/m} - 1 < K \le \frac{1}{T}\left(\frac{1}{\lambda_2}\ln\frac{1}{q}\right)^{1/m}.$$

In particular, when m = 1, i.e., the failure time has an exponential distribution 1 − exp(−λ2 t), (7.23) is

$$C(T) = \left[1 - e^{-(K+1)\lambda_2 T}\right]\left(\frac{c_T + c_D T}{1 - e^{-\lambda_2 T}} - \frac{c_D}{\lambda_2}\right) + c_R - c_T, \qquad (7.25)$$

where K is given by

$$\frac{1}{\lambda_2 T}\ln\frac{1}{q} - 1 < K \le \frac{1}{\lambda_2 T}\ln\frac{1}{q}.$$

Noting that $1 - e^{-(K+1)\lambda_2 T} > 1 - q$,

we have

$$C(T) > (1-q)\left(\frac{c_T + c_D T}{1 - e^{-\lambda_2 T}} - \frac{c_D}{\lambda_2}\right) + c_R - c_T,$$

whose right-hand side goes to ∞ in both cases of T → 0 and T → ∞. Thus, there exists a finite T∗ (0 < T∗ < ∞) to minimize C(T) in (7.25). We find optimal T∗ to minimize C(T) in (7.25). Differentiating C(T) with respect to T and setting it equal to zero,

$$\sum_{k=0}^{K} e^{-k\lambda_2 T} - (K+1)e^{-(K+1)\lambda_2 T} - T\sum_{k=1}^{K} k\lambda_2 e^{-k\lambda_2 T} = \frac{c_T}{c_D}\sum_{k=1}^{K} k\lambda_2 e^{-k\lambda_2 T}.$$

Using

$$\left(1 - e^{-\lambda_2 T}\right)\sum_{k=0}^{K}(k+1)e^{-k\lambda_2 T} = \sum_{k=0}^{K} e^{-k\lambda_2 T} - (K+1)e^{-(K+1)\lambda_2 T},$$

we have

$$\frac{1 - e^{-\lambda_2 T}}{\lambda_2}\, P_K(T) - T = \frac{c_T}{c_D}, \qquad (7.26)$$

where

$$P_K(T) \equiv \frac{\sum_{k=0}^{K}(k+1)e^{-k\lambda_2 T}}{\sum_{k=0}^{K} k\, e^{-k\lambda_2 T}},$$

which increases strictly with T from (K + 2)/K to ∞ and decreases strictly with K from P1(T) to $e^{\lambda_2 T}$ (Problem 1). Furthermore, letting L_K(T) be the left-hand side of (7.26), L_K(0) = 0,

$$\lim_{T\to\infty} L_K(T) = \lim_{T\to\infty} \frac{(1 - e^{-\lambda_2 T})\sum_{k=0}^{K}(k+1)e^{-k\lambda_2 T} - \lambda_2 T \sum_{k=0}^{K} k\, e^{-k\lambda_2 T}}{\lambda_2 \sum_{k=0}^{K} k\, e^{-k\lambda_2 T}} = \infty,$$

and

$$L_K'(T) = \frac{1 - e^{-\lambda_2 T}}{\lambda_2}\, P_K'(T) + \frac{(K+1)e^{-(K+1)\lambda_2 T}}{\sum_{k=0}^{K} k\, e^{-k\lambda_2 T}} > 0,$$

which implies that L_K(T) increases strictly with T from 0 to ∞. Thus, there exists a finite and unique T∗ (0 < T∗ < ∞) which satisfies (7.26) for any K, and T∗ increases with K. In particular, when K = 1, (7.26) is

$$\frac{1}{\lambda_2}\left(e^{\lambda_2 T} + 1 - 2e^{-\lambda_2 T}\right) - T = \frac{c_T}{c_D}, \qquad (7.27)$$

Table 7.4 Inspection time T and expected cost C(T)/cD when cT/cD = 10, λ1 = 0, λ2 = 29.24 × 10−6/h and q = 0.8

K    T              C(T)/cD
0    (7631, ∞]      (792, ∞]
1    (3816, 7631]   (398, 1433]
2    (2544, 3816]   (275, 570]
3    (1908, 2544]   (219, 357]
4    (1526, 1908]   (190, 269]
5    (1272, 1526]   (173, 224]
6    (1090, 1272]   (163, 199]
7    (954, 1090]    (159, 185]
8    (848, 954]     (157, 177]
9    (763, 848]     (157, 173]
10   (694, 763]     (159, 172]
11   (636, 694]     (162, 173]
12   (587, 636]     (166, 175]
13   (545, 587]     (171, 178]
14   (509, 545]     (176, 183]
15   (477, 509]     (182, 187]
16   (449, 477]     (188, 193]
17   (424, 449]     (195, 199]
18   (402, 424]     (202, 205]
19   (382, 402]     (208, 211]
20   (363, 382]     (216, 218]
21   (347, 363]     (223, 225]

and when K = ∞,

$$\frac{1}{\lambda_2}\left[e^{\lambda_2 T} - (1 + \lambda_2 T)\right] = \frac{c_T}{c_D}. \qquad (7.28)$$
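For m = 1 these expressions are easy to evaluate directly; the following minimal sketch (helper names are ours) computes K and C(T) from (7.25) and can be checked against Table 7.4:

```python
import math

def K_and_cost(T, lam2, cT, cD, cR, q):
    """K from (1/(lam2*T)) ln(1/q) - 1 < K <= (1/(lam2*T)) ln(1/q),
    and C(T) from (7.25) with m = 1."""
    K = int(math.floor(math.log(1.0 / q) / (lam2 * T)))
    C = (1.0 - math.exp(-(K + 1) * lam2 * T)) * (
        (cT + cD * T) / (1.0 - math.exp(-lam2 * T)) - cD / lam2
    ) + cR - cT
    return K, C

# One entry of Table 7.4 (cT/cD = 10, lam2 = 29.24e-6/h, q = 0.8, cR = 0):
K, C = K_and_cost(1700.0, 29.24e-6, 10.0, 1.0, 0.0, 0.8)
```

For T = 1700 this gives K = 4, and C(T)/cD falls in the interval (190, 269] listed for K = 4.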

Letting T1∗ and T∞∗ be the respective solutions of (7.27) and (7.28), T∗ increases with K from T1∗ to T∞∗, and T1∗ ≤ T∗ ≤ T∞∗. When λ2 = 0, the expected cost is, from (7.23),

$$C(T) = (c_T + c_D T) \sum_{k=0}^{K} e^{-k\lambda_1 T^m} - c_D \sum_{k=0}^{K} e^{-k\lambda_1 T^m} \int_{kT}^{(k+1)T} e^{-\lambda_1 (t-kT)^m}\, dt + c_R - c_T, \qquad (7.29)$$

where K is given by

$$\frac{1}{\lambda_1 T^m}\ln\frac{1}{q} - 1 < K \le \frac{1}{\lambda_1 T^m}\ln\frac{1}{q}.$$

In the particular case of m = 1 and λ1 = λ2, (7.29) agrees with (7.25).

Example 7.2 It is assumed that cR = 0 for simplicity of computations. When $\overline{F}(t) = \exp(-\lambda_2 t^m)$, Table 7.4 presents the inspection time T and the resulting expected cost C(T)/cD from (7.25) for K when m = 1, cT/cD = 10, λ2 = 29.24 × 10−6/h and q = 0.8 [16]. For example, when 1526 < T ≤ 1908, K = 4 and C(T)/cD increases from 190 to 269 with T. For T > 7631, K = 0, i.e., no inspection is made, the system should be overhauled at T0 = 7631, and the resulting cost is

792.

Table 7.5 presents optimal T∗ and the resulting cost C(T∗)/cD for m, λ2, and cT/cD when q = 0.8. This indicates that both T∗ and C(T∗)/cD decrease with λ and m, and increase with cT/cD, and K decreases with λ, m, and cT/cD. Finally, when $\overline{F}_1(t) = \exp(-\lambda_1 t^m)$ and λ2 = 0, Table 7.5 also presents optimal T∗ and resulting costs C(T∗)/cD. This indicates that when λ2 = 0, these values have the same tendency as those when λ1 = 0; however, optimal T∗ when λ2 = 0 are slightly larger than those when λ1 = 0 because the system is like new at each inspection.

Table 7.5 Optimal T∗ and expected cost C(T∗)/cD when q = 0.8

                                λ1 = 0 and λ2 = λ        λ1 = λ and λ2 = 0
m    λ             cT/cD    T∗    C(T∗)/cD   K      T∗    C(T∗)/cD   K
0.8  29.24 × 10−6  10       2460  497        28     1417  355        22
                   15       2972  607        23     1799  442        18
                   20       3567  700        19     2067  517        16
                   25       3963  780        17     2417  583        14
                   30       4196  853        16     2634  643        13
     58.48 × 10−6  10       1579  320        18     1016  239        14
                   15       2000  390        14     1215  297        12
                   20       2307  449        12     1498  346        10
                   25       2500  500        11     1687  390        9
                   30       2727  546        10     1924  430        8
1.0  29.24 × 10−6  10       848   157        8      848   157        8
                   15       954   190        7      954   190        7
                   20       1090  217        6      1090  217        6
                   25       1272  240        5      1272  240        5
                   30       1526  261        4      1526  261        4
     58.48 × 10−6  10       545   109        6      545   109        6
                   15       763   131        4      763   131        4
                   20       763   149        4      763   149        4
                   25       954   163        3      954   163        3
                   30       954   177        3      954   177        3
1.2  29.24 × 10−6  10       430   69         3      542   77         3
                   15       430   83         3      542   90         3
                   20       574   92         2      689   99         2
                   25       574   101        2      689   108        2
                   30       860   110        1      966   116        1
     58.48 × 10−6  10       322   50         2      387   54         2
                   15       322   59         2      387   63         2
                   20       483   65         1      542   68         1
                   25       483   69         1      542   73         1
                   30       483   74         1      542   77         1


7.3 Finite Number of Inspections

We consider the inspection of a storage system with two kinds of units, where unit 1 is checked and maintained at times Tk (k = 1, 2, ..., N) and unit 2 is continuously degraded with time. The system is overhauled at detection of failure or at the Nth inspection (N = 1, 2, ...), whichever occurs first. Under the above assumptions, we obtain the expected cost rates and derive optimal checking times to minimize them. First, when checking times are periodic, i.e., Tk ≡ kT, we derive optimal time T∗ to minimize the expected cost in two cases where failure times have exponential and Weibull distributions. Next, using Barlow's algorithm in Sect. 2.1, we compute the optimal schedule {Tk∗} for a Weibull distribution and compare it with that of periodic times.

The system is checked and maintained at times Tk, where T0 ≡ 0. Any failure is detected at the next checking time and the system is overhauled immediately. Suppose that a prespecified inspection number of warranty is N, i.e., the system is overhauled at checking time TN. Any checking and overhaul times are negligible. The system is roughly divided into two kinds of units, where unit 1 is like new after every inspection; however, unit 2 remains unchanged by any inspections. Unit i (i = 1, 2) has hazard rate h_i(t) at time t. Then, hazard rate h(t) of the system is, for T_{k−1} < t < T_k (k = 1, 2, ..., N),

$$h(t) = h_1(t - T_{k-1}) + h_2(t), \qquad (7.30)$$

and the cumulative hazard function is

$$H(t) \equiv \int_0^t h(u)\, du = \sum_{j=1}^{k-1} H_1(T_j - T_{j-1}) + H_1(t - T_{k-1}) + H_2(t), \qquad (7.31)$$

where $H_i(t) \equiv \int_0^t h_i(u)\, du$ (i = 1, 2), and reliability of the system is $\overline{F}(t) = \exp[-H(t)]$. Then, the total expected cost when a failure is detected and the system is overhauled at time T_k (k = 1, 2, ..., N) is

$$\sum_{k=1}^{N} \int_{T_{k-1}}^{T_k} [k c_T + (T_k - t)\, c_D + c_R]\, dF(t),$$

and when it is overhauled without failure at time T_N,

$$\overline{F}(T_N)(N c_T + c_R),$$

where costs cT, cD and cR are given in (7.17). Thus, the total expected cost until overhaul is


$$C(\mathbf{T}) = \sum_{k=1}^{N} \int_{T_{k-1}}^{T_k} [k c_T + (T_k - t)\, c_D + c_R]\, dF(t) + \overline{F}(T_N)(N c_T + c_R)$$
$$= \sum_{k=0}^{N-1} \left[c_T + c_D(T_{k+1} - T_k)\right] \overline{F}(T_k) - c_D \int_0^{T_N} \overline{F}(t)\, dt + c_R, \qquad (7.32)$$

where $\mathbf{T} = (T_1, T_2, \ldots, T_N)$. Next, the mean time to system overhaul is

$$\sum_{k=1}^{N} \int_{T_{k-1}}^{T_k} T_k\, dF(t) + \overline{F}(T_N)\, T_N = \sum_{k=0}^{N-1} (T_{k+1} - T_k)\, \overline{F}(T_k). \qquad (7.33)$$

Therefore, the expected cost rate is, from (7.32) and (7.33),

$$C(\mathbf{T}) = \frac{\sum_{k=0}^{N-1} [c_T + c_D(T_{k+1} - T_k)]\, \overline{F}(T_k) - c_D \int_0^{T_N} \overline{F}(t)\, dt + c_R}{\sum_{k=0}^{N-1} (T_{k+1} - T_k)\, \overline{F}(T_k)}$$
$$= c_D + \frac{c_T \sum_{k=0}^{N-1} \overline{F}(T_k) - c_D \int_0^{T_N} \overline{F}(t)\, dt + c_R}{\sum_{k=0}^{N-1} (T_{k+1} - T_k)\, \overline{F}(T_k)}. \qquad (7.34)$$

Next, when the system is checked at periodic times kT (k = 1, 2, ..., N), the expected cost rate is

$$C(T, N) = c_D + \frac{c_T \sum_{k=0}^{N-1} \overline{F}(kT) - c_D \int_0^{NT} \overline{F}(t)\, dt + c_R}{T \sum_{k=0}^{N-1} \overline{F}(kT)}, \qquad (7.35)$$

which agrees with (2.12). When the failure times of unit i (i = 1, 2) have a Weibull distribution, i.e., H_i(t) = λ_i t^m, the expected cost rate in (7.35) is

$$C(T, N) = c_D + \frac{c_T}{T} - \frac{c_D \sum_{k=0}^{N-1} e^{-k\lambda_1 T^m} \int_{kT}^{(k+1)T} e^{-\lambda_1 (t-kT)^m - \lambda_2 t^m}\, dt - c_R}{T \sum_{k=0}^{N-1} e^{-k\lambda_1 T^m - \lambda_2 (kT)^m}}. \qquad (7.36)$$

Thus, changing T, we can compute T∗ to minimize C(T, N) for given N.

Example 7.3 Suppose λ1 = aλ and λ2 = (1 − a)λ (0 < a < 1), where a is an efficiency of inspection. Table 7.6 presents optimal T∗ to minimize C(T, N) in (7.36) for λ, m and N when a = 0.9, cT/cD = 10, and cR/cD = 100. This indicates that T∗ decreases with λ, m and N.
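A brute-force search for T∗ in (7.35) is straightforward; the sketch below (helper names ours, parameter values taken from Example 7.3) evaluates C(T, N) with a trapezoidal integral and scans a grid of T:

```python
import math

def cost_rate(T, N, lam1, lam2, m, cT, cD, cR, steps=1000):
    """C(T, N) from (7.35), with the piecewise reliability of (7.22)."""
    def fbar(t):
        k = min(int(t // T), N - 1)           # interval index, clamped at t = N*T
        return math.exp(-k * lam1 * T**m - lam1 * (t - k * T)**m - lam2 * t**m)
    S = sum(math.exp(-k * lam1 * T**m - lam2 * (k * T)**m) for k in range(N))
    h = N * T / steps                          # trapezoidal rule over [0, N*T]
    integral = h * (0.5 * fbar(0.0) + sum(fbar(i * h) for i in range(1, steps))
                    + 0.5 * fbar(N * T))
    return cD + (cT * S - cD * integral + cR) / (T * S)

# Example 7.3 case N = 1, m = 1.0, lam = 1.0e-3, a = 0.9, cT = 10, cD = 1, cR = 100:
lam, a = 1.0e-3, 0.9
Tstar = min(range(300, 900),
            key=lambda T: cost_rate(float(T), 1, a * lam, (1 - a) * lam,
                                    1.0, 10.0, 1.0, 100.0))
```

The grid minimum lands at T∗ ≈ 563, matching the entry for N = 1, m = 1.0, λ = 1.0 × 10−3 in Table 7.6.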

Table 7.6 Optimal T∗ when a = 0.9, cT/cD = 10 and cR/cD = 100

     λ = 1.0 × 10−3          λ = 1.1 × 10−3          λ = 1.2 × 10−3
     m                       m                       m
N    1.0  1.1  1.2  1.3      1.0  1.1  1.2  1.3      1.0  1.1  1.2  1.3
1    563  435  355  309      543  422  347  307      525  411  341  307
2    395  300  235  192      380  289  228  188      366  280  222  185
3    327  245  189  150      314  236  183  146      302  228  177  142
4    289  215  164  128      277  206  158  123      266  199  153  120
5    264  195  148  114      252  187  142  110      243  180  137  106
6    246  181  136  104      235  173  131  100      226  167  126  97
7    232  170  127  97       222  163  122  93       213  157  118  90
8    222  162  121  91       212  155  116  88       203  149  111  85
9    213  155  115  87       204  149  111  84       195  143  106  80
10   206  150  111  84       197  143  106  80       189  138  102  77
∞    157  116  88   67       151  112  85   65       146  109  82   63

We find optimal Tk∗ (k = 1, 2, ..., N) to minimize $C(\mathbf{T})$ in (7.34) for given N. Differentiating $C(\mathbf{T})$ with respect to T_k and setting it equal to zero, we have the following results: For N = 1,

$$\int_0^{T_1} \left[\overline{F}(t) - \overline{F}(T_1)\right] dt = \frac{c_T + c_R}{c_D}, \qquad (7.37)$$

whose left-hand side increases with T1 to $\mu \equiv \int_0^\infty \overline{F}(t)\, dt$. For N = 2,

$$T_2 - T_1 = \frac{F(T_1)}{f(T_1)} - \frac{c_T}{c_D - \alpha}, \qquad \frac{\overline{F}(T_1)}{\overline{F}(T_2)} = \frac{c_D}{c_D - \alpha}, \qquad (7.38)$$

where α ≡ c_D F(T_1). For general N (N = 3, 4, ...),

$$T_2 - T_1 = \frac{F(T_1)}{f(T_1)} - \frac{c_T}{c_D - \alpha},$$
$$T_{k+1} - T_k = \frac{F(T_k) - F(T_{k-1})}{f(T_k)} - \frac{c_T}{c_D - \alpha} \quad (k = 2, 3, \ldots, N-1),$$
$$\frac{\overline{F}(T_{N-1})}{\overline{F}(T_N)} = \frac{c_D}{c_D - \alpha}. \qquad (7.39)$$

By making a modification of Algorithm 1 in Sect. 2.1, we can specify a Modified Algorithm for computing the optimal schedule under the assumption that μ > (cT + cR)/cD [17, p. 116], [18].

Modified Algorithm
1. When N = 1, compute T1 which satisfies

$$\int_0^{T_1} \left[\overline{F}(t) - \overline{F}(T_1)\right] dt = \frac{c_T + c_R}{c_D}, \quad \text{i.e.,} \quad \frac{c_T + c_R + c_D \int_0^{T_1} F(t)\, dt}{T_1} = c_D F(T_1),$$

and set α1 ≡ c_D F(T_1).
2. Compute Tk^(1) (k = 1, 2, ..., N) from (7.39) for α1, and from (7.34), set α2 ≡ C(T) when Tk = Tk^(1).
3. Repeat the above computing procedures until α1 > α2 > ··· are determined to the degree of accuracy required.
4. Changing α and repeating Steps 1 to 3, compute α∗ which satisfies

$$D(\mathbf{T}^*, \alpha) \equiv E_C(\mathbf{T}^*) - \alpha^* E_T(\mathbf{T}^*) = 0, \quad \text{i.e.,} \quad \alpha^* = \frac{E_C(\mathbf{T}^*)}{E_T(\mathbf{T}^*)} = C(\mathbf{T}^*),$$

where $E_C(\mathbf{T})$ and $E_T(\mathbf{T})$ are the respective numerator and denominator of (7.34).

where E C (T) and E T (T) are the respective numerator and denominator of (7.34). ∗ ) and resulting cost Example 7.4 Table 7.7 presents optimal T∗ = (T1∗ , T2∗ , . . . , T10 ∗ −3 rate C(T )/c D for m when λ = 1.0 × 10 , N = 10, a = 0.9, cT = 10, c D = 1 and c R = 100. This indicates that Tk∗ decrease with m, and the differences δk = ∗ decrease with k and their differences of δk increase with m. Comparing Tk∗ − Tk−1 with Table 7.6, T1∗ and TN∗ in Table 7.7 are larger than T ∗ and N T ∗ for each m, respectively, Furthermore, Table 7.8 presents C(T)/c D for m and N , and indicates that C(T)/c D decrease with N and decrease with m. 

Table 7.7 Optimal T∗ when N = 10, a = 0.9, cT/cD = 10, cR/cD = 100 and λ = 1.0 × 10−3

            m
k           1.1     1.2     1.3
1           187     179     162
2           372     354     317
3           556     527     470
4           740     698     621
5           923     868     770
6           1106    1037    917
7           1288    1206    1064
8           1470    1374    1209
9           1652    1541    1353
10          1833    1707    1496
C(T∗)/cD    0.344   0.482   0.656

Table 7.8 Expected cost rates C(T∗)/cD when a = 0.9, cT/cD = 10, cR/cD = 100 and λ = 1.0 × 10−3

     m
N    1.1     1.2     1.3
1    0.724   0.812   0.938
2    0.501   0.605   0.746
3    0.429   0.542   0.693
4    0.394   0.514   0.673
5    0.375   0.500   0.664
6    0.363   0.492   0.660
7    0.355   0.488   0.658
8    0.350   0.485   0.657
9    0.347   0.483   0.657
10   0.344   0.482   0.656

7.4 Degradation at Inspection

We consider a storage system with two kinds of units, where unit 1 is maintained at periodic times NT (N = 1, 2, ...) for a specified T (0 < T < ∞), unit 2 is degraded with time and at each inspection, and the system is overhauled if its reliability becomes lower than a prespecified value q. For such an inspection model, the time NT + t0 and the total expected cost C(T) until overhaul are obtained. Using these quantities, we derive optimal times to maximize NT + t0 and to minimize C(T) [19]. We make the following assumptions of the periodic inspection policy for a storage system which has to operate whenever it is used:


(1) The system is new at time 0, and is checked and maintained if necessary at periodic times NT (N = 1, 2, ...).
(2) The system has a failure distribution F(t), and its reliability $\overline{F}(t) \equiv 1 - F(t)$ has to remain higher than a prespecified value q (0 < q ≤ 1), i.e.,

$$\overline{F}(NT) \ge q > \overline{F}[(N+1)T]. \qquad (7.40)$$

Furthermore, reliability $\overline{F}(t)$ is q at time NT + t0, i.e.,

$$\overline{F}(NT + t_0) = q. \qquad (7.41)$$

(3) The system consists of two independent units, where unit 1 is like new after every inspection; however, unit 2 does not become like new and is degraded with time and at each inspection.
(4) Unit 1 has hazard rate h1(t), which is given by h1(t − NT) for NT < t ≤ (N + 1)T, because it is like new at time NT.
(5) Unit 2 has two hazard rates h2(t) and h3(t), which indicate the hazard rates of system degradation with time and at each inspection, respectively. Hazard rate h2(t) remains undisturbed by any inspection. In addition, because unit 2 is degraded by power on-off cycles during each inspection interval, h3(t) increases by a constant rate λ3 at each inspection, denoted as h3(t) = Nλ3 for NT < t ≤ (N + 1)T [3, 13].
(6) Hazard rate h(t) of the system is, from (4) and (5), for NT < t ≤ (N + 1)T,

$$h(t) \equiv h_1(t - NT) + h_2(t) + N\lambda_3. \qquad (7.42)$$

Under the above assumptions, we derive reliability $\overline{F}(t)$ of the system at time t. Cumulative hazard function $H(t) \equiv \int_0^t h(u)\, du$ is, from (7.42), for NT < t ≤ (N + 1)T (N = 0, 1, 2, ...),

$$H(t) = N H_1(T) + H_1(t - NT) + H_2(t) + \sum_{j=0}^{N-1} j\lambda_3 T + N\lambda_3 (t - NT), \qquad (7.43)$$

where $H_i(t) \equiv \int_0^t h_i(u)\, du$ (i = 1, 2). Thus, reliability $\overline{F}(t)$ is, for NT < t ≤ (N + 1)T,

$$\overline{F}(t) = \exp\left[-N H_1(T) - H_1(t - NT) - H_2(t) - N\lambda_3\left(t - \frac{(N+1)T}{2}\right)\right]. \qquad (7.44)$$

From (7.44), (7.40) is

$$N H_1(T) + H_2(NT) + \frac{N(N-1)\lambda_3 T}{2} \le \ln\frac{1}{q} < (N+1)H_1(T) + H_2[(N+1)T] + \frac{N(N+1)\lambda_3 T}{2}, \qquad (7.45)$$

and (7.41) is

$$N H_1(T) + H_1(t_0) + H_2(NT + t_0) + N\lambda_3\left[\frac{(N-1)T}{2} + t_0\right] = \ln\frac{1}{q}. \qquad (7.46)$$

Thus, the expected cost rate until overhaul is

$$C(T) \equiv \frac{N c_T + c_R}{NT + t_0}. \qquad (7.47)$$

When the failure time has a Weibull distribution, i.e., H_i(t) = λ_i t^m, (7.45) is

$$\lambda_1 N T^m + \lambda_2 (NT)^m + \frac{N(N-1)\lambda_3 T}{2} \le \ln\frac{1}{q} < \lambda_1 (N+1) T^m + \lambda_2 [(N+1)T]^m + \frac{N(N+1)\lambda_3 T}{2}, \qquad (7.48)$$

and (7.46) is

$$\lambda_1 t_0^m + \lambda_2 (NT + t_0)^m + N\lambda_3 t_0 = \ln\frac{1}{q} - N\lambda_1 T^m - \frac{N(N-1)\lambda_3 T}{2}. \qquad (7.49)$$
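Given T, the number N follows from (7.48) by a scan and the residual time t0 from (7.49) by bisection, since its left-hand side increases with t0 and (7.48) guarantees a sign change on (0, T]. A minimal sketch (the numerical parameter values are our own illustrative assumptions):

```python
import math

def overhaul_point(T, lam1, lam2, lam3, m, q):
    """N from (7.48) and t0 from (7.49); returns (N, t0)."""
    lnq = math.log(1.0 / q)
    def lower(n):  # left-hand side of (7.48) as a function of n
        return lam1 * n * T**m + lam2 * (n * T)**m + n * (n - 1) * lam3 * T / 2.0
    N = 0
    while lower(N + 1) <= lnq:   # largest N satisfying the left inequality
        N += 1
    rhs = lnq - N * lam1 * T**m - N * (N - 1) * lam3 * T / 2.0
    def g(t0):                    # left side of (7.49) minus its right side
        return lam1 * t0**m + lam2 * (N * T + t0)**m + N * lam3 * t0 - rhs
    lo, hi = 0.0, T               # g(0) <= 0 < g(T) by (7.48)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if g(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return N, 0.5 * (lo + hi)

# Illustrative values (ours): m = 1, T = 200, lam1 = 0.9e-3, lam2 = 1.0e-4, lam3 = 1.0e-5, q = 0.8
N, t0 = overhaul_point(200.0, 0.9e-3, 1.0e-4, 1.0e-5, 1.0, 0.8)
cost_rate = (N * 10.0 + 100.0) / (N * 200.0 + t0)   # (7.47) with cT = 10, cR = 100
```

With these values the scan gives N = 1 and the bisection t0 ≈ 22.9, which can be verified by substituting back into (7.46).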

When the failure time has an exponential distribution, i.e., m = 1, (7.48) is

$$\frac{1}{2\lambda_3}\left\{\sqrt{[2(\lambda_1+\lambda_2)-\lambda_3]^2 + \frac{8\lambda_3}{T}\ln\frac{1}{q}} - [2(\lambda_1+\lambda_2)+\lambda_3]\right\} < N \le \frac{1}{2\lambda_3}\left\{\sqrt{[2(\lambda_1+\lambda_2)-\lambda_3]^2 + \frac{8\lambda_3}{T}\ln\frac{1}{q}} - [2(\lambda_1+\lambda_2)-\lambda_3]\right\}. \qquad (7.50)$$

$$L(\infty) = (\lambda T - 1 + e^{-\lambda T}) \sum_{k=0}^{\infty}\sum_{j=0}^{N-1} p_j(kT) + N - \lambda T \sum_{k=0}^{\infty}\sum_{j=0}^{N-1} p_j(kT) = N - (1 - e^{-\lambda T}) \sum_{k=0}^{\infty}\sum_{j=0}^{N-1} p_j(kT).$$

Thus, if

$$N > \frac{\lambda c_1}{c_3} + (1 - e^{-\lambda T}) \sum_{k=0}^{\infty}\sum_{j=0}^{N-1} p_j(kT),$$

then there exists a finite and unique minimum M∗ (1 ≤ M∗ < ∞) which satisfies (8.4).


8 Application Examples of Phased Array Radar

8.1.2 Delayed Maintenance

We consider the following delayed maintenance of a PAR:

(4') All failed elements are replaced with new ones only when the number of failed elements has exceeded N_D (N_D = 1, 2, ..., N) at a periodic time kT (k = 1, 2, ...).

Other assumptions are the same as those in Sect. 8.1.1. A conceptual diagram of delayed maintenance is drawn in Fig. 8.2. The expected cost when failed elements are fewer than N_D at time (k − 1)T and have exceeded N_D at time kT is

$$\sum_{k=1}^{\infty}\sum_{j=0}^{N_D-1} p_j[(k-1)T] \sum_{i=N_D-j}^{N-j-1} [c_1 + (i+j)c_2]\, p_i(T),$$

and when failed elements are fewer than N_D at time (k − 1)T and have exceeded N at time kT,

$$\sum_{k=1}^{\infty}\sum_{j=0}^{N_D-1} p_j[(k-1)T] \sum_{i=N-j}^{\infty} \left\{[c_1 + (i+j)c_2]\, p_i(T) + c_3 \int_{(k-1)T}^{kT} (kT - t)\, d\, p_i[t - (k-1)T]\right\}.$$

Thus, the total expected cost until replacement is

Fig. 8.2 Schematic diagram of delayed maintenance

$$c_1 + c_2 \lambda T \sum_{k=0}^{\infty}\sum_{j=0}^{N_D-1} p_j(kT) + \frac{c_3}{\lambda} \sum_{k=0}^{\infty}\sum_{j=0}^{N_D-1} p_j(kT) \sum_{i=N-j}^{\infty} (i - N + j)\, p_i(T). \qquad (8.5)$$

Similarly, the mean time to replacement is

$$\sum_{k=1}^{\infty}\sum_{j=0}^{N_D-1} p_j[(k-1)T] \sum_{i=N_D-j}^{N-j-1} kT\, p_i(T) + \sum_{k=1}^{\infty}\sum_{j=0}^{N_D-1} p_j[(k-1)T] \sum_{i=N-j}^{\infty} kT\, p_i(T) = T \sum_{k=0}^{\infty}\sum_{j=0}^{N_D-1} p_j(kT). \qquad (8.6)$$

Therefore, the expected cost rate is, from (8.5) and (8.6),

$$C_D(N_D) = \frac{c_1 + (c_3/\lambda) \sum_{k=0}^{\infty}\sum_{j=0}^{N_D-1} p_j(kT) \sum_{i=N-j}^{\infty} (i-N+j)\, p_i(T)}{T \sum_{k=0}^{\infty}\sum_{j=0}^{N_D-1} p_j(kT)} + c_2 \lambda, \qquad (8.7)$$

which agrees with (8.3) when M = ∞ and N_D = N.

which agrees with (8.3) when M = ∞ and N D = N . We find optimal N D∗ to minimize C D (N D ). Forming the inequality C D (N D + 1) − C D (N D ) ≥ 0, ∞ N D −1  k=0 j=0

⎡ p j (kT ) ⎣

∞ 

(i − N + N D ) pi (T ) −

i=N −N D

∞ 

⎤ (i − N + j) pi (T )⎦

i=N − j

λc1 ≥ , c3

(8.8)

whose left-hand side increases with N D to N (Problem3). Thus, if N > λc1 /c3 then there exists a finite and unique minimum N D∗ (1 ≤ N D∗ ≤ N ) which satisfies (8.8). Example 8.1 When c2 = 0, because it does not affect M ∗ and N D∗ , Table 8.1 presents optimal M ∗ and N D∗ , and the resulting cost rates CC (M ∗ ) and C D (N D∗ ) for T = 24, 48, 72, . . . , 168 h and λ = 1, 2, 3, . . . , 10 × 10−1 /h, when N = 100 and c1 = c3 = 1. All cases in Table 8.1 satisfy the sufficient conditions (8.4) and (8.8). This indicates that M ∗ and N D∗ decrease with T and λ. It is of interest that M ∗ T are approximately 720 when λ = 1 × 10−1 . In this calculation, CC (M ∗ ) is greater than C D (N D∗ ) and CC (M ∗ )/C D (N D∗ )  1.2. Therefore, delayed maintenance is more efficient than cyclic one. However, we count the number of failed elements for delayed maintenance and do not need to do for cyclic one. Considering such works into maintenance costs, cyclic maintenance might be better than delayed one. 
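Optimal N_D∗ in (8.8) is easy to compute numerically, assuming (as the numerical example suggests) that p_j(t) is the Poisson probability ((λt)^j/j!)e^{−λt}; the following sketch (truncation limits and helper names are ours) reproduces the Table 8.1 entry for λ = 1 × 10−1 and T = 24:

```python
import math

def pois(mean, i):
    """Poisson pmf, computed via logs for numerical stability."""
    if mean == 0.0:
        return 1.0 if i == 0 else 0.0
    return math.exp(-mean + i * math.log(mean) - math.lgamma(i + 1))

def excess(mean, n, tail=400):
    """sum_{i >= n} (i - n) p_i(mean), i.e., E[(X - n)^+] for X ~ Poisson(mean)."""
    return sum((i - n) * pois(mean, i) for i in range(n, n + tail))

def optimal_ND(N, lam, T, c1, c3, kmax=200):
    """Smallest N_D satisfying inequality (8.8)."""
    mT = lam * T
    S = [excess(mT, N - x) for x in range(N + 1)]   # S[x] = sum_{i=N-x} (i-N+x) p_i(T)
    P = [[pois(mT * k, j) for j in range(N)] for k in range(kmax)]
    for ND in range(1, N + 1):
        lhs = sum(P[k][j] * (S[ND] - S[j]) for k in range(kmax) for j in range(ND))
        if lhs >= lam * c1 / c3:
            return ND
    return N

ND_star = optimal_ND(100, 0.1, 24.0, 1.0, 1.0)   # Table 8.1 lists ND* = 93
```

The infinite sums over k converge quickly because p_j(kT) with j < N becomes negligible once λkT greatly exceeds N, so a modest truncation suffices.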


Table 8.1 Optimal M∗, N_D∗, and resulting cost rates C_C(M∗) and C_D(N_D∗) when N = 100 and c1 = c3 = 1

λ           T       M∗   M∗T   C_C(M∗)       N_D∗   C_D(N_D∗)     C_C(M∗)/C_D(N_D∗)
1 × 10−1    24      31   744   1.38 × 10−3   93     1.07 × 10−3   1.29
            24 × 2  15   720   1.41 × 10−3   89     1.10 × 10−3   1.28
            24 × 3  10   720   1.42 × 10−3   86     1.13 × 10−3   1.26
            24 × 4  7    672   1.49 × 10−3   83     1.16 × 10−3   1.28
            24 × 5  6    720   1.42 × 10−3   80     1.18 × 10−3   1.20
            24 × 6  5    720   1.42 × 10−3   77     1.21 × 10−3   1.17
            24 × 7  4    672   1.49 × 10−3   74     1.24 × 10−3   1.20
2 × 10−1    24      16   384   2.75 × 10−3   90     2.19 × 10−3   1.26
3 × 10−1    24      11   264   4.19 × 10−3   87     3.34 × 10−3   1.25
4 × 10−1    24      8    192   5.41 × 10−3   85     4.54 × 10−3   1.19
5 × 10−1    24      6    144   6.98 × 10−3   82     5.77 × 10−3   1.21
6 × 10−1    24      5    120   8.36 × 10−3   80     7.03 × 10−3   1.19
7 × 10−1    24      5    120   1.04 × 10−2   77     8.34 × 10−3   1.25
8 × 10−1    24      4    96    1.06 × 10−2   75     9.69 × 10−3   1.09
9 × 10−1    24      3    72    1.39 × 10−2   73     1.10 × 10−2   1.26
10 × 10−1   24      3    72    1.39 × 10−2   71     1.26 × 10−2   1.10

8.2 Availability Model

We have discussed optimal maintenance policies to minimize the expected cost rates based on maintenance costs and mean times to replacement. Phased array radars are mainly used in air defence systems because they have a high functional capability against enemy attacks. Expected cost rates are important measures in civilian systems; however, in defence systems, availabilities are more important than expected costs. We take up maintenance policies that maximize availability as an objective function, using cyclic and delayed maintenances [13–15]. In numerical examples, we decide which maintenance is better by comparing availabilities. Furthermore, we propose combined models with cyclic and delayed maintenances, obtain their expected cost rates and availabilities, and discuss optimal policies theoretically and numerically.

8.2.1 Cyclic Maintenance

We consider the following cyclic maintenance of a PAR:

(3') Failed elements cannot be detected during operation and can be ascertained only by the diagnosis software executed by a PAR system computer. Failed elements are usually detected at periodic times kT (k = 1, 2, ..., M), which spends time T0.

(4'') All failed elements are replaced with new ones at time MT (M = 1, 2, ...) or at the time when failed elements have exceeded N, whichever occurs first, which spends time T1.

Assumptions (1) and (2) are the same as those in Sect. 8.1.1. When the replacement time of failed elements is the regeneration point, availability of the system is defined as [8, p. 9]

$$A = \frac{\text{Operation time between regeneration points}}{\text{Total time between regeneration points}}. \qquad (8.9)$$

The mean time until replacement when the number of failed elements is less than N at time MT is

$$MT \sum_{j=0}^{N-1} p_j(MT),$$

and when failed elements have exceeded N at time kT,

$$\sum_{k=1}^{M} \sum_{j=0}^{N-1} p_j[(k-1)T] \sum_{i=N-j}^{\infty} \int_{(k-1)T}^{kT} t\,\mathrm{d}p_i[t-(k-1)T] = \sum_{k=0}^{M-1} \sum_{j=0}^{N-1} p_j(kT) \sum_{i=N-j}^{\infty} \left[(k+1)T - \frac{i-N+j}{\lambda}\right] p_i(T).$$

Thus, the total mean operation time until replacement is (Problem 4)

$$T \sum_{k=0}^{M-1} \sum_{j=0}^{N-1} p_j(kT) - \frac{1}{\lambda} \sum_{k=0}^{M-1} \sum_{j=0}^{N-1} p_j(kT) \sum_{i=N-j}^{\infty} (i-N+j)\, p_i(T). \qquad (8.10)$$

Next, the mean time between two regeneration points when the number of failed elements is less than N at time MT is

$$[M(T+T_0)+T_1] \sum_{j=0}^{N-1} p_j(MT),$$

and when failed elements have exceeded N at time kT,

$$\sum_{k=1}^{M} [k(T+T_0)+T_1] \sum_{j=0}^{N-1} p_j[(k-1)T] \sum_{i=N-j}^{\infty} p_i(T).$$


Thus, the total mean time between two regeneration points is (Problem 5)

$$T_1 + (T+T_0) \sum_{k=0}^{M-1} \sum_{j=0}^{N-1} p_j(kT). \qquad (8.11)$$

Therefore, from (8.10) and (8.11), availability of cyclic maintenance is

$$A_C(M) = \frac{T \sum_{k=0}^{M-1} \sum_{j=0}^{N-1} p_j(kT) - (1/\lambda) \sum_{k=0}^{M-1} \sum_{j=0}^{N-1} p_j(kT) \sum_{i=N-j}^{\infty} (i-N+j) p_i(T)}{T_1 + (T+T_0) \sum_{k=0}^{M-1} \sum_{j=0}^{N-1} p_j(kT)}$$

$$= \frac{T}{T+T_0} \left[ 1 - \frac{T_1/(T+T_0) + [1/(\lambda T)] \sum_{k=0}^{M-1} \sum_{j=0}^{N-1} p_j(kT) \sum_{i=N-j}^{\infty} (i-N+j) p_i(T)}{T_1/(T+T_0) + \sum_{k=0}^{M-1} \sum_{j=0}^{N-1} p_j(kT)} \right] \equiv \frac{T}{T+T_0} \left[ 1 - \widetilde{A}_C(M) \right]. \qquad (8.12)$$

We find optimal M* to maximize availability $A_C(M)$, i.e., to minimize $\widetilde{A}_C(M)$. Forming $\widetilde{A}_C(M+1) - \widetilde{A}_C(M) \ge 0$,

$$Q(M) \left[ \frac{T_1}{T+T_0} + \sum_{k=0}^{M-1} \sum_{j=0}^{N-1} p_j(kT) \right] - \sum_{k=0}^{M-1} \sum_{j=0}^{N-1} p_j(kT) \sum_{i=N-j}^{\infty} (i-N+j) p_i(T) \ge \frac{\lambda T T_1}{T+T_0}, \qquad (8.13)$$

whose left-hand side agrees with (8.4) when T_1/(T+T_0) = 0. Letting L(M) be the left-hand side of (8.13),

$$L(\infty) = \left(\lambda T - 1 + e^{-\lambda T}\right) \frac{T_1}{T+T_0} + N - \left(1 - e^{-\lambda T}\right) \sum_{k=0}^{\infty} \sum_{j=0}^{N-1} p_j(kT).$$

Thus, if

$$N > \left(1 - e^{-\lambda T}\right) \left[ \frac{T_1}{T+T_0} + \sum_{k=0}^{\infty} \sum_{j=0}^{N-1} p_j(kT) \right],$$

then there exists a finite and unique M* (1 ≤ M* < ∞) which satisfies (8.13).
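The search for M* can equally be carried out by evaluating $\widetilde{A}_C(M)$ of (8.12) directly. Below is a minimal numerical sketch, assuming (as an illustration) that the number of failed elements by time t is Poisson with mean λt, i.e., p_j(t) = [(λt)^j/j!]e^{−λt}, which is consistent with the limit λT − 1 + e^{−λT} of Q(M) above; the function names and the parameter values (first row of Table 8.2) are purely illustrative.

```python
from math import exp, lgamma, log

def p(j, t, lam):
    # assumed model: number of failed elements by time t ~ Poisson(lam * t)
    if t <= 0:
        return 1.0 if j == 0 else 0.0
    return exp(j * log(lam * t) - lam * t - lgamma(j + 1))

def excess(n, T, lam):
    # sum_{i=n}^infinity (i - n) p_i(T), computed via the lower tail for stability
    return max(0.0, lam * T - n + sum((n - i) * p(i, T, lam) for i in range(n)))

def unavail_cyclic(M, N, T, T0, T1, lam):
    # A~_C(M) of (8.12)
    exc = [excess(n, T, lam) for n in range(N + 1)]
    S = sum(p(j, k * T, lam) for k in range(M) for j in range(N))
    D = sum(p(j, k * T, lam) * exc[N - j] for k in range(M) for j in range(N))
    r = T1 / (T + T0)
    return (r + D / (lam * T)) / (r + S)

# grid search for M*; parameters as in the first row of Table 8.2
N, T, T0, T1, lam = 100, 24 * 7.0, 1.0, 8.0, 0.1
M_star = min(range(1, 21), key=lambda M: unavail_cyclic(M, N, T, T0, T1, lam))
```

Under this Poisson assumption the direct minimization and condition (8.13) select the same M*, since $\widetilde{A}_C(M)$ is unimodal in M.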


8.2.2 Delayed Maintenance

We consider the following delayed maintenance of a PAR:

(4''') All failed elements are replaced with new ones only when the number of failed elements has exceeded a number N_D (N_D = 1, 2, …, N) at periodic times kT (k = 1, 2, …), which takes time T_1.

We have the same assumptions (1) and (2) in Sect. 8.1.1, and assumption (3') in Sect. 8.2.1. The mean operation time until replacement when failed elements are between N_D and N is

$$\sum_{k=1}^{\infty} \sum_{j=0}^{N_D-1} p_j[(k-1)T] \sum_{i=N_D-j}^{N-j-1} kT\, p_i(T),$$

and when failed elements have exceeded N,

$$\sum_{k=1}^{\infty} \sum_{j=0}^{N_D-1} p_j[(k-1)T] \sum_{i=N-j}^{\infty} \int_{(k-1)T}^{kT} t\,\mathrm{d}p_i[t-(k-1)T] = \sum_{k=0}^{\infty} \sum_{j=0}^{N_D-1} p_j(kT) \sum_{i=N-j}^{\infty} \left[(k+1)T - \frac{i-N+j}{\lambda}\right] p_i(T).$$

Thus, the total mean operation time until replacement is (Problem 6)

$$T \sum_{k=0}^{\infty} \sum_{j=0}^{N_D-1} p_j(kT) - \frac{1}{\lambda} \sum_{k=0}^{\infty} \sum_{j=0}^{N_D-1} p_j(kT) \sum_{i=N-j}^{\infty} (i-N+j)\, p_i(T). \qquad (8.14)$$

Similarly, the mean time between two regeneration points when failed elements are between N_D and N is

$$\sum_{k=1}^{\infty} \sum_{j=0}^{N_D-1} p_j[(k-1)T] \sum_{i=N_D-j}^{N-j-1} [k(T+T_0)+T_1]\, p_i(T),$$

and when failed elements have exceeded N,

$$\sum_{k=1}^{\infty} \sum_{j=0}^{N_D-1} p_j[(k-1)T] \sum_{i=N-j}^{\infty} [k(T+T_0)+T_1]\, p_i(T).$$

Thus, the total mean time between two regeneration points is


$$T_1 + (T+T_0) \sum_{k=0}^{\infty} \sum_{j=0}^{N_D-1} p_j(kT). \qquad (8.15)$$

Therefore, availability of delayed maintenance is, from (8.14) and (8.15),

$$A_D(N_D) = \frac{T \sum_{k=0}^{\infty} \sum_{j=0}^{N_D-1} p_j(kT) - (1/\lambda) \sum_{k=0}^{\infty} \sum_{j=0}^{N_D-1} p_j(kT) \sum_{i=N-j}^{\infty} (i-N+j) p_i(T)}{T_1 + (T+T_0) \sum_{k=0}^{\infty} \sum_{j=0}^{N_D-1} p_j(kT)}$$

$$= \frac{T}{T+T_0} \left[ 1 - \frac{T_1/(T+T_0) + [1/(\lambda T)] \sum_{k=0}^{\infty} \sum_{j=0}^{N_D-1} p_j(kT) \sum_{i=N-j}^{\infty} (i-N+j) p_i(T)}{T_1/(T+T_0) + \sum_{k=0}^{\infty} \sum_{j=0}^{N_D-1} p_j(kT)} \right] \equiv \frac{T}{T+T_0} \left[ 1 - \widetilde{A}_D(N_D) \right], \qquad (8.16)$$

which agrees with (8.12) when M = ∞ and N_D = N.

We find optimal N_D* to minimize $\widetilde{A}_D(N_D)$. Forming the inequality $\widetilde{A}_D(N_D+1) - \widetilde{A}_D(N_D) \ge 0$,

$$\sum_{i=N-N_D}^{\infty} (i-N+N_D) p_i(T) \left[ \frac{T_1}{T+T_0} + \sum_{k=0}^{\infty} \sum_{j=0}^{N_D-1} p_j(kT) \right] - \sum_{k=0}^{\infty} \sum_{j=0}^{N_D-1} p_j(kT) \sum_{i=N-j}^{\infty} (i-N+j) p_i(T) \ge \frac{\lambda T T_1}{T+T_0}, \qquad (8.17)$$

whose left-hand side increases strictly with N_D to N + λT T_1/(T+T_0) (Problem 7). Thus, there exists a finite and unique minimum N_D* (1 ≤ N_D* < N) which satisfies (8.17).

Example 8.2 Table 8.2 presents optimal M* and N_D*, and unavailabilities Ã_C(M*) and Ã_D(N_D*) for N = 70, 90, 100, T = 168, 240, 336 h, T_0 = 0.1, 0.5, 1.0, T_1 = 2, 5, 8, and λ = 0.1, 0.2, 0.3/h. In all cases, both left-hand sides of (8.13) and (8.17) increase strictly with M and N_D, respectively. The table indicates that M* and N_D* increase with N, 1/T, T_1, and 1/λ, and that changes of T_0 hardly affect M* and N_D*. In this calculation, Ã_C(M*) is greater than Ã_D(N_D*). Therefore, delayed maintenance is more available than cyclic maintenance, as already shown in Table 8.1.
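The optimal N_D* of (8.17) can likewise be found by direct evaluation of $\widetilde{A}_D(N_D)$ in (8.16). The sketch below again assumes, purely for illustration, that the number of failed elements by time t is Poisson with mean λt, and truncates the infinite sum over k where p_j(kT) becomes negligible; names and parameter values (first row of Table 8.2) are assumptions of this sketch.

```python
from math import exp, lgamma, log

def p(j, t, lam):
    # assumed model: number of failed elements by time t ~ Poisson(lam * t)
    if t <= 0:
        return 1.0 if j == 0 else 0.0
    return exp(j * log(lam * t) - lam * t - lgamma(j + 1))

N, T, T0, T1, lam = 100, 24 * 7.0, 1.0, 8.0, 0.1   # first row of Table 8.2
kmax = 40                                          # truncation of the infinite k-sum
exc = [max(0.0, lam * T - n + sum((n - i) * p(i, T, lam) for i in range(n)))
       for n in range(N + 1)]                      # sum_{i=n}^inf (i - n) p_i(T)
r = T1 / (T + T0)

S = D = 0.0
best_a, best_ND = float("inf"), 0
for ND in range(1, N):     # grow the j-sums one column at a time, evaluate A~_D(ND)
    j = ND - 1
    col = [p(j, k * T, lam) for k in range(kmax)]
    S += sum(col)
    D += sum(c * exc[N - j] for c in col)
    a = (r + D / (lam * T)) / (r + S)
    if a < best_a:
        best_a, best_ND = a, ND
```

Growing the double sums incrementally in N_D avoids recomputing them from scratch for each candidate, which keeps the search over all N_D = 1, …, N − 1 cheap.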


Table 8.2 Optimal M* and N_D*, and resulting unavailabilities Ã_C(M*) and Ã_D(N_D*)

N    T      T0   T1  λ    M*  Ã_C(M*)       N_D*  Ã_D(N_D*)     Ã_C(M*)/Ã_D(N_D*)
100  24×7   1.0  8   0.1  5   1.141×10^-2   78    0.936×10^-2   1.22
90   24×7   1.0  8   0.1  4   1.186×10^-2   68    1.058×10^-2   1.12
70   24×7   1.0  8   0.1  3   1.574×10^-2   49    1.430×10^-2   1.10
100  24×10  1.0  8   0.1  3   1.097×10^-2   70    0.998×10^-2   1.10
100  24×14  1.0  8   0.1  2   1.174×10^-2   60    1.113×10^-2   1.05
100  24×7   0.5  8   0.1  5   1.144×10^-2   78    0.938×10^-2   1.22
100  24×7   0.1  8   0.1  5   1.146×10^-2   78    0.941×10^-2   1.22
100  24×7   1.0  5   0.1  4   0.735×10^-2   77    0.593×10^-2   1.24
100  24×7   1.0  2   0.1  4   0.295×10^-2   75    0.242×10^-2   1.22
100  24×7   1.0  8   0.2  2   2.312×10^-2   62    2.143×10^-2   1.08
100  24×7   1.0  8   0.3  1   4.520×10^-2   48    3.830×10^-2   1.18

8.2.3 Combined Cyclic and Delayed Maintenances

We consider the following combined cyclic and delayed maintenance of a PAR:

(4'''') All failed elements are replaced with new ones at time MT (M = 1, 2, …) or at the time when the number of failed elements has exceeded a number N_D (N_D = 1, 2, …, N), whichever occurs first, which takes time T_1; when failed elements have exceeded a number N before time MT, they are replaced, which takes time T_2 (T_2 > T_1).

We have the same assumptions (1) and (2) in Sect. 8.1.1, and assumption (3') in Sect. 8.2.1. The expected cost when failed elements are less than N_D at time MT is

$$\sum_{j=0}^{N_D-1} (c_1 + c_2 j)\, p_j(MT),$$

when failed elements are between N_D and N at time kT,

$$\sum_{k=1}^{M} \sum_{j=0}^{N_D-1} p_j[(k-1)T] \sum_{i=N_D-j}^{N-j} [c_1 + (i+j)c_2]\, p_i(T),$$

and when failed elements are more than N at time kT,

$$\sum_{k=1}^{M} \sum_{j=0}^{N_D-1} p_j[(k-1)T] \sum_{i=N-j}^{\infty} \left\{ [c_1 + (i+j)c_2]\, p_i(T) + c_3 \int_{(k-1)T}^{kT} (kT-t)\,\mathrm{d}p_i[t-(k-1)T] \right\}.$$


Thus, the total expected cost until replacement is

$$c_1 + c_2 \lambda T \sum_{k=0}^{M-1} \sum_{j=0}^{N_D-1} p_j(kT) + \frac{c_3}{\lambda} \sum_{k=0}^{M-1} \sum_{j=0}^{N_D-1} p_j(kT) \sum_{i=N-j}^{\infty} (i-N+j)\, p_i(T), \qquad (8.18)$$

and the mean time to replacement is

$$MT \sum_{j=0}^{N_D-1} p_j(MT) + \sum_{k=1}^{M} kT \sum_{j=0}^{N_D-1} p_j[(k-1)T] \sum_{i=N_D-j}^{\infty} p_i(T) = T \sum_{k=0}^{M-1} \sum_{j=0}^{N_D-1} p_j(kT). \qquad (8.19)$$

Therefore, the expected cost rate is, from (8.18) and (8.19),

$$C_{CD}(M, N_D) = \frac{c_1 + (c_3/\lambda) \sum_{k=0}^{M-1} \sum_{j=0}^{N_D-1} p_j(kT) \sum_{i=N-j}^{\infty} (i-N+j)\, p_i(T)}{T \sum_{k=0}^{M-1} \sum_{j=0}^{N_D-1} p_j(kT)} + c_2 \lambda, \qquad (8.20)$$

which agrees with C_C(M) in (8.3) when N_D = N, and with C_D(N_D) in (8.7) when M = ∞.

We find optimal M* and N_D* to minimize C_CD(M, N_D) in (8.20). Forming the inequality C_CD(M+1, N_D) − C_CD(M, N_D) ≥ 0,

$$Q(M, N_D) \sum_{k=0}^{M-1} \sum_{j=0}^{N_D-1} p_j(kT) - \sum_{k=0}^{M-1} \sum_{j=0}^{N_D-1} p_j(kT) \sum_{i=N-j}^{\infty} (i-N+j)\, p_i(T) \ge \frac{\lambda c_1}{c_3}, \qquad (8.21)$$

where

$$Q(M, N_D) \equiv \frac{\sum_{j=0}^{N_D-1} p_j(MT) \sum_{i=N-j}^{\infty} (i-N+j)\, p_i(T)}{\sum_{j=0}^{N_D-1} p_j(MT)},$$

which increases strictly with M to $\sum_{i=N+1-N_D}^{\infty} (i-N-1+N_D)\, p_i(T)$ (Problem 2), because when N_D = N, (8.21) is equal to (8.4). Thus, letting L(M, N_D) be the left-hand side of (8.21), it increases strictly with M, and if L(∞, N_D) > λc_1/c_3, then there exists a finite and unique minimum M* (1 ≤ M* < ∞) which satisfies (8.21). Forming the inequality C_CD(M, N_D+1) − C_CD(M, N_D) ≥ 0,


Table 8.3 Optimal M* and N_D*, and resulting cost rates C_CD(M*, N_D*)

λ         T     M*  N_D*  C_CD(M*, N_D*)
1×10^-1   24    79  92    1.07×10^3
1×10^-1   24×2  39  88    1.10×10^3
1×10^-1   24×3  25  85    1.13×10^3
1×10^-1   24×4  19  82    1.16×10^3
1×10^-1   24×5  15  79    1.18×10^3
1×10^-1   24×6  12  76    1.21×10^3
1×10^-1   24×7  10  73    1.22×10^3
2×10^-1   24    39  89    2.19×10^3
3×10^-1   24    26  86    3.34×10^3
4×10^-1   24    19  84    4.54×10^3
5×10^-1   24    15  81    5.77×10^3
6×10^-1   24    13  79    7.03×10^3
7×10^-1   24    11  77    8.30×10^3
8×10^-1   24    9   75    9.56×10^3
9×10^-1   24    8   73    1.07×10^4
10×10^-1  24    7   73    1.19×10^4

$$\sum_{k=0}^{M-1} \sum_{j=0}^{N_D-1} p_j(kT) \left[ \sum_{i=N-N_D}^{\infty} (i-N+N_D)\, p_i(T) - \sum_{i=N-j}^{\infty} (i-N+j)\, p_i(T) \right] \ge \frac{\lambda c_1}{c_3}, \qquad (8.22)$$

which agrees with (8.8) when M = ∞, and whose left-hand side increases strictly with N_D to $N - \sum_{j=0}^{N} (N-j)\, p_j(MT)$. Thus, if

$$N - \sum_{j=0}^{N} (N-j)\, p_j(MT) > \frac{\lambda c_1}{c_3},$$

then there exists a finite and unique minimum N_D* (1 ≤ N_D* ≤ N) which satisfies (8.22).

Example 8.3 We compute optimal M* and N_D* when c_2 = 0. Table 8.3 presents optimal M* and N_D*, and the resulting costs C_CD(M*, N_D*) for T = 24, 48, 72, …, 168 h and λ = 1, 2, 3, …, 10 × 10^-1/h, when N = 100 and c_1 = c_3 = 1. All cases in Table 8.3 satisfy the sufficient conditions of (8.21) and (8.22). Comparing with Table 8.1, C_C(M*) > C_D(N_D*) ≥ C_CD(M*, N_D*).

The mean operation time until replacement when failed elements are less than N_D at time MT is


$$MT \sum_{j=0}^{N_D-1} p_j(MT),$$

when failed elements have exceeded N_D,

$$\sum_{k=1}^{M} \sum_{j=0}^{N_D-1} p_j[(k-1)T] \sum_{i=N_D-j}^{N-j-1} kT\, p_i(T),$$

and when failed elements have exceeded N,

$$\sum_{k=1}^{M} \sum_{j=0}^{N_D-1} p_j[(k-1)T] \sum_{i=N-j}^{\infty} \int_{(k-1)T}^{kT} t\,\mathrm{d}p_i[t-(k-1)T].$$

Thus, the total mean operation time until replacement is

$$T \sum_{k=0}^{M-1} \sum_{j=0}^{N_D-1} p_j(kT) - \sum_{k=0}^{M-1} \sum_{j=0}^{N_D-1} p_j(kT) \sum_{i=N-j}^{\infty} \frac{i-N+j}{\lambda}\, p_i(T). \qquad (8.23)$$

Next, the mean time between two regeneration points when failed elements are less than N_D at time MT is

$$[M(T+T_0)+T_1] \sum_{j=0}^{N_D-1} p_j(MT),$$

when failed elements have exceeded N_D and are less than N at time kT,

$$\sum_{k=1}^{M} \sum_{j=0}^{N_D-1} p_j[(k-1)T] \sum_{i=N_D-j}^{N-j-1} [k(T+T_0)+T_1]\, p_i(T),$$

and when failed elements have exceeded N at time kT,

$$\sum_{k=1}^{M} \sum_{j=0}^{N_D-1} p_j[(k-1)T] \sum_{i=N-j}^{\infty} [k(T+T_0)+T_2]\, p_i(T).$$

Thus, the total mean time between two regeneration points is

$$(T+T_0) \sum_{k=0}^{M-1} \sum_{j=0}^{N_D-1} p_j(kT) + T_1 + (T_2-T_1) \sum_{k=0}^{M-1} \sum_{j=0}^{N_D-1} p_j(kT) \sum_{i=N-j}^{\infty} p_i(T). \qquad (8.24)$$


Therefore, availability is, from (8.23) and (8.24),

$$A_{CD}(M, N_D) = \frac{T \sum_{k=0}^{M-1} \sum_{j=0}^{N_D-1} p_j(kT) - (1/\lambda) \sum_{k=0}^{M-1} \sum_{j=0}^{N_D-1} p_j(kT) \sum_{i=N-j}^{\infty} (i-N+j) p_i(T)}{(T+T_0) \sum_{k=0}^{M-1} \sum_{j=0}^{N_D-1} p_j(kT) + T_1 + (T_2-T_1) \sum_{k=0}^{M-1} \sum_{j=0}^{N_D-1} p_j(kT) \sum_{i=N-j}^{\infty} p_i(T)}, \qquad (8.25)$$

which agrees with A_C(M) in (8.12) when N_D = N and T_2 = T_1, and with A_D(N_D) in (8.16) when M = ∞ and T_2 = T_1. In particular, when T_2 = T_1, availability in (8.25) is

$$A_{CD}(M, N_D) = \frac{T}{T+T_0} \left[ 1 - \frac{T_1/(T+T_0) + [1/(\lambda T)] \sum_{k=0}^{M-1} \sum_{j=0}^{N_D-1} p_j(kT) \sum_{i=N-j}^{\infty} (i-N+j) p_i(T)}{T_1/(T+T_0) + \sum_{k=0}^{M-1} \sum_{j=0}^{N_D-1} p_j(kT)} \right] \equiv \frac{T}{T+T_0} \left[ 1 - \widetilde{A}_{CD}(M, N_D) \right]. \qquad (8.26)$$

We find optimal M* and N_D* to minimize $\widetilde{A}_{CD}(M, N_D)$. Forming the inequality $\widetilde{A}_{CD}(M+1, N_D) - \widetilde{A}_{CD}(M, N_D) \ge 0$,

$$Q(M, N_D) \left[ \frac{T_1}{T+T_0} + \sum_{k=0}^{M-1} \sum_{j=0}^{N_D-1} p_j(kT) \right] - \sum_{k=0}^{M-1} \sum_{j=0}^{N_D-1} p_j(kT) \sum_{i=N-j}^{\infty} (i-N+j)\, p_i(T) \ge \frac{\lambda T T_1}{T+T_0}, \qquad (8.27)$$

where Q(M, N_D) is given in (8.21). Letting L(M, N_D) be the left-hand side of (8.27), it increases strictly with M, and if L(∞, N_D) > λT T_1/(T+T_0), then there exists a finite and unique minimum M* (1 ≤ M* < ∞) which satisfies (8.27). Forming the inequality $\widetilde{A}_{CD}(M, N_D+1) - \widetilde{A}_{CD}(M, N_D) \ge 0$,

$$\sum_{i=N-N_D}^{\infty} (i-N+N_D) p_i(T) \left[ \frac{T_1}{T+T_0} + \sum_{k=0}^{M-1} \sum_{j=0}^{N_D-1} p_j(kT) \right] - \sum_{k=0}^{M-1} \sum_{j=0}^{N_D-1} p_j(kT) \sum_{i=N-j}^{\infty} (i-N+j)\, p_i(T) \ge \frac{\lambda T T_1}{T+T_0}, \qquad (8.28)$$


Table 8.4 Optimal M* and N_D*, and unavailabilities Ã_CD(M*, N_D*)

N    T      T0   T1  λ    M*  N_D*  Ã_CD(M*, N_D*)
100  24×7   1.0  8   0.1  10  78    0.936×10^-2
90   24×7   1.0  8   0.1  10  68    1.058×10^-2
70   24×7   1.0  8   0.1  8   49    1.430×10^-2
100  24×10  1.0  8   0.1  7   70    0.998×10^-2
100  24×14  1.0  8   0.1  5   60    1.113×10^-2
100  24×7   0.5  8   0.1  11  78    0.938×10^-2
100  24×7   0.1  8   0.1  10  78    0.941×10^-2
100  24×7   1.0  5   0.1  10  77    0.593×10^-2
100  24×7   1.0  2   0.1  10  75    0.242×10^-2
100  24×7   1.0  8   0.2  5   62    2.143×10^-2
100  24×7   1.0  8   0.3  3   48    3.830×10^-2

which agrees with (8.22) when T_0 = 0, and whose left-hand side increases strictly with N_D to

$$\frac{\lambda T T_1}{T+T_0} + N - \sum_{j=0}^{N} (N-j)\, p_j(MT).$$

Thus, there exists a finite and unique minimum N_D* (1 ≤ N_D* ≤ N) which satisfies (8.28).

Example 8.4 Table 8.4 presents optimal M* and N_D*, and unavailabilities Ã_CD(M*, N_D*) for N = 70, 90, 100, T = 168, 240, 336 h, T_0 = 0.1, 0.5, 1.0, T_1 = 2, 5, 8, and λ = 1, 2, 3 × 10^-1/h. All cases in Table 8.4 satisfy the sufficient conditions of (8.27) and (8.28). Comparing with Table 8.2, Ã_C(M*) > Ã_D(N_D*) ≥ Ã_CD(M*, N_D*).
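A joint grid search over (M, N_D) for the combined model can be sketched by maximizing A_CD(M, N_D) of (8.25) directly. As in the earlier numerical illustrations, the number of failed elements by time t is assumed Poisson with mean λt, and T_2 = T_1 is taken so that (8.25) reduces to the special case (8.26); all names and parameter values (first row of Table 8.4) are illustrative assumptions.

```python
from math import exp, lgamma, log

def p(j, t, lam):
    # assumed model: number of failed elements by time t ~ Poisson(lam * t)
    if t <= 0:
        return 1.0 if j == 0 else 0.0
    return exp(j * log(lam * t) - lam * t - lgamma(j + 1))

N, T, T0, T1, lam = 100, 24 * 7.0, 1.0, 8.0, 0.1   # first row of Table 8.4
T2 = T1        # with T2 = T1, (8.25) coincides with the special case (8.26)
Mmax = 12

P = [[p(j, k * T, lam) for j in range(N)] for k in range(Mmax)]
exc = [max(0.0, lam * T - n + sum((n - i) * p(i, T, lam) for i in range(n)))
       for n in range(N + 1)]                      # sum_{i=n}^inf (i - n) p_i(T)
tail = [max(0.0, 1.0 - sum(p(i, T, lam) for i in range(n)))
        for n in range(N + 1)]                     # sum_{i=n}^inf p_i(T)

def avail(M, ND):
    # A_CD(M, ND) of (8.25)
    S = sum(P[k][j] for k in range(M) for j in range(ND))
    D = sum(P[k][j] * exc[N - j] for k in range(M) for j in range(ND))
    E = sum(P[k][j] * tail[N - j] for k in range(M) for j in range(ND))
    return (T * S - D / lam) / ((T + T0) * S + T1 + (T2 - T1) * E)

a_best, M_best, ND_best = max((avail(M, ND), M, ND)
                              for M in range(1, Mmax + 1) for ND in range(1, N))
```

Precomputing the table P of p_j(kT) and the two inner tail sums keeps the double grid search inexpensive; setting T_2 > T_1 in the same code exercises the general model of (8.25).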

8.3 Problems

1. Derive (8.1) and (8.2).
2. Prove that Q(M) increases strictly with M from 0 to λT − 1 + e^{-λT}.
3. Prove that the left-hand side of (8.8) increases strictly with N_D to N.
4. Derive (8.10).
5. Derive (8.11).
6. Derive (8.14).
7. Prove that the left-hand side of (8.17) increases strictly with N_D to N + λT T_1/(T+T_0).

References

1. Brookner E (1985) Phased array radars. Sci Am 252(2):94–102
2. Skolnik MI (1980) Introduction to radar systems. McGraw-Hill, Singapore
3. Brookner E (1991) Practical phased-array antenna systems. Artech House, Boston
4. Bucci OM, Capozzoli A, D'Elia G (2000) Diagnosis of array faults from far-field amplitude-only data. IEEE Trans Antennas Propag 48:647–652
5. Keithley HM (1966) Maintainability impact on system design of a phased array radar. In: Conference record of 1966 7th annual New York conference on electronic reliability, vol 9, pp 1–10
6. Hevesh AH (1967) Maintainability of phased array radar systems. IEEE Trans Reliab R-16:61–66
7. Hevesh AH (1967) Maintainability of phased array radar systems. In: Proceedings of 1967 annual reliability and maintainability symposium, pp 547–554
8. Nakagawa T (2005) Maintenance theory of reliability. Springer, London
9. Hesse JL (1975) Maintainability analysis and prototype operations. In: Proceedings of 1975 annual reliability and maintainability symposium, pp 194–199
10. Nakagawa T (1986) Modified discrete preventive maintenance policies. Nav Res Logist Q 33:703–715
11. Ito K, Nakagawa T, Teramoto K (1999) Optimal maintenance policy for a phased array radar. J Reliab Eng Assoc Jpn 21:229–236
12. Ito K, Nakagawa T (2004) Comparison of cyclic and delayed maintenances for a phased array radar. J Oper Res Soc Jpn 47:51–61
13. Nakagawa T, Ito K (2007) Optimal availability models of a phased array radar. In: Dohi T, Osaki S, Sawaki K (eds) Recent advances in stochastic operations research. World Scientific, Singapore, pp 115–130
14. Ito K, Nakagawa T (2009) Applied maintenance models. In: Ben-Daya M, Duffuaa SO, Raouf A, Knezevic J, Ait-Kadi D (eds) Handbook of maintenance management and engineering. Springer, Dordrecht, pp 363–395
15. Ito K, Nakagawa T (2009) Maintenance models of miscellaneous systems. In: Nakamura S, Nakagawa T (eds) Stochastic reliability modeling, optimization and applications. World Scientific, Singapore, pp 243–278

Chapter 9

Application Examples of Power Generator

Electric power is one of the important infrastructures that support our society, and its inexpensive supply is required because power costs are reflected in product costs. Various renewable energies hold great expectations for the future; however, the current mainstream power source is thermal power generation, which will continue to be used while measures against global warming are taken. In thermal power plants, steam and gas turbines are used to generate electricity. Gas turbine generation systems have the advantage of smaller equipment for the same output than steam turbine ones. Extended systems of gas turbines, such as cogeneration systems, have recently attracted the attention of industries. Figure 1.6 of Chap. 1 shows a schematic diagram of a cogeneration system, which is an example of a thermal power plant. The cogeneration system provides two outputs, electricity and steam, and is used mainly in papermaking factories. The example in Fig. 1.7 is a power generation motor for open-cycle gas turbines. The gas turbine is controlled by the governor controller so that the output is constant when the load fluctuates; the controller manages the rotational speed of the gas turbine by throttling a fuel valve. The recent digital governor controller is called FADEC (Full Authority Digital Engine Controller), and its self-diagnosis policies are performed to maintain normal operation, because abnormal operations of a FADEC may cause damage to the gas turbine; this is dealt with in Sect. 9.1.

Thermal power plants are composed of various components such as mechanical and electronic parts. These components deteriorate with time and need to be replaced in order to continue operation. They are replaced when their deteriorations have exceeded certain levels, but their deterioration speeds and levels vary greatly with circumstances. Optimal maintenance policies for thermal power plants are considered in Sect. 9.2, using shock and damage models.

© Springer Nature Switzerland AG 2023
K. Ito and T. Nakagawa, Optimal Inspection Models with Their Applications, Springer Series in Reliability Engineering, https://doi.org/10.1007/978-3-031-22021-0_9


9.1 Self-Diagnosis Policy for Gas Turbine FADEC

A FADEC is an electric governor controller which performs complicated digital signal processing of engine data [1–3]. Aircraft FADECs, which are expected to have high mission reliability and need to be light in weight, become complicated; generally, their electric constructions form a duplicated system [4–6]. Industrial gas turbine engines have advanced by absorbing key technologies established for aircraft ones, and aircraft FADECs have also been adopted for industrial use. Comparing industrial gas turbine FADECs with aircraft ones, the following differences are recognized:

(1) Aircraft FADECs have to perform high-speed data processing because rapid response to aircraft maneuvers is necessary, and inlet pressure and temperature change greatly with altitude. On the other hand, industrial FADECs need not have such high performance because they are operated at steady speed on the ground.

(2) Aircraft FADECs have to be reliable and fault tolerant, and compose a duplicated system, because their malfunction in operation may cause serious damage to aircraft and crews. Industrial FADECs also have to be reliable and fault tolerant, and still be low cost because they have to be competitive on the market.

Due to the advance of microelectronics, small, high-performance, and low-cost programmable logic controllers (PLCs) have been distributed on the market. Applying the numerical calculation abilities of microprocessors, these PLCs incorporate analogue-digital and digital-analogue transformers and can perform numerical control. High cost performance of a FADEC can be realized by such PLCs. However, these PLCs are developed as industrial controllers, and PLC makers do not permit their use in high-temperature and pressurized hot gas controllers. Thus, gas turbine makers which apply these PLCs to FADECs have to design protective mechanisms and assure their high reliability.

9.1.1 Standard FADEC System

We consider the following self-diagnosis policy for a FADEC system, obtain the expected costs, and derive optimal policies to minimize them [7–9]:

(1) The system has a failure distribution F(t) with finite mean $\mu \equiv \int_0^{\infty} \bar{F}(t)\,\mathrm{d}t$ and reliability $\bar{F}(t) \equiv 1 - F(t)$, where $\bar{\Phi}(t) \equiv 1 - \Phi(t)$ for a general function Φ(t).

(2) The control calculation of the system is made at time T_0 (0 < T_0 < ∞) and the self-diagnosis calculation is made at time NT_0 (N = 1, 2, …), where its coverage is perfect.

(3) The total amount of control calculations becomes smaller with the frequency of self-diagnosis ones, which reduces control quality, because the control calculation stops during the self-diagnosis. That is, if the number of self-diagnoses increases, it degrades the quality of control calculations. Let one cycle be the interval from one self-diagnosis to the next, and T_10 be the self-diagnosis calculation time. Then, the degradation rate of control calculations for one cycle is proportional to time T_0 and inversely proportional to the total of the cycle interval NT_0 and self-diagnosis time T_10 ≡ αT_0, and is assumed to be

$$\frac{c_1 T_0}{N T_0 + T_{10}} = \frac{c_1}{N + \alpha},$$

where α is the ratio of T_10 to T_0, i.e., α ≡ T_10/T_0, and c_1 is a coefficient of the degradation rate.

(4) The interval from failure occurrence to its detection increases with N, and the failure causes damage to the system which is given by c_2(NT_0 − t), where t is the time of failure.

(5) Cost c_1 is the loss cost of the degradation rate in (3), cost c_2 is the loss cost of system failure per unit of time in (4), and cost c_3 is the cost of self-diagnosis, which does not depend on N and T_10.

The total expected cost of one cycle is

$$C_1(N) = \frac{c_1}{N+\alpha} + c_2 \int_0^{NT_0} (NT_0 - t)\,\mathrm{d}F(t) + c_3 = \frac{c_1}{N+\alpha} + c_2 \int_0^{NT_0} F(t)\,\mathrm{d}t + c_3. \qquad (9.1)$$

Therefore, the expected cost rate until self-diagnosis is

$$C_2(N) = \frac{c_1/(N+\alpha) + c_2 \int_0^{NT_0} F(t)\,\mathrm{d}t + c_3}{N T_0}. \qquad (9.2)$$

We find optimal N_1* and N_2* to minimize the total expected cost C_1(N) and the expected cost rate C_2(N), respectively. Forming the inequality C_1(N+1) − C_1(N) ≥ 0,

$$(N+\alpha)(N+1+\alpha) \int_{NT_0}^{(N+1)T_0} F(t)\,\mathrm{d}t \ge \frac{c_1}{c_2}, \qquad (9.3)$$

whose left-hand side increases with N to ∞. Thus, there exists a finite and unique minimum N_1* (1 ≤ N_1* < ∞) which satisfies (9.3), and N_1* decreases with T_0 and α. Next, forming the inequality C_2(N+1) − C_2(N) ≥ 0,

$$\int_0^{NT_0} \bar{F}(t)\,\mathrm{d}t - N \int_{NT_0}^{(N+1)T_0} \bar{F}(t)\,\mathrm{d}t - \frac{c_1}{c_2} \frac{2N+1+\alpha}{(N+\alpha)(N+1+\alpha)} \ge \frac{c_3}{c_2},$$

or


$$\frac{(N+\alpha)(N+1+\alpha)}{2N+1+\alpha} \left[ \int_0^{NT_0} \bar{F}(t)\,\mathrm{d}t - N \int_{NT_0}^{(N+1)T_0} \bar{F}(t)\,\mathrm{d}t - \frac{c_3}{c_2} \right] \ge \frac{c_1}{c_2}. \qquad (9.4)$$

Note that the left-hand side of (9.4) increases strictly with N to μ (Problem 1). Therefore, we give the following optimal policy:

(i) If μ > c_3/c_2, then there exists a finite and unique minimum N_2* (1 ≤ N_2* < ∞) which satisfies (9.4).
(ii) If μ ≤ c_3/c_2, then N_2* = ∞.

Next, when μ > c_3/c_2, we investigate the properties of T_0 and α as follows: It is noted (Problem 1) that the left-hand side L(N, T_0) of (9.4) increases strictly with T_0 to L(N, ∞). When T_0 = ∞, (9.4) is

$$\left( \mu - \frac{c_3}{c_2} \right) \frac{(N+\alpha)(N+1+\alpha)}{2N+1+\alpha} \ge \frac{c_1}{c_2}. \qquad (9.5)$$

Letting $\underline{N}$ be the minimum N which satisfies (9.5), N_2* decreases with T_0 to $\underline{N}$, and N_2* ≥ $\underline{N}$. Furthermore, N_2* decreases with α, because the left-hand side of (9.5) increases with α. When α = 0, (9.4) is

$$\int_0^{NT_0} \bar{F}(t)\,\mathrm{d}t - N \int_{NT_0}^{(N+1)T_0} \bar{F}(t)\,\mathrm{d}t - \frac{c_1}{c_2} \frac{2N+1}{N(N+1)} \ge \frac{c_3}{c_2}. \qquad (9.6)$$

Letting $\overline{N}$ be the minimum N which satisfies (9.6), N_2* decreases with α from $\overline{N}$, and $\overline{N}$ ≥ N_2*. In addition, N_2* increases with c_1/c_2 and c_3/c_2.

We compare N_1* and N_2* given in (9.3) and (9.4). From (9.3) and (9.4) (Problem 2),

$$(N+\alpha)(N+1+\alpha) \int_{NT_0}^{(N+1)T_0} F(t)\,\mathrm{d}t - \frac{(N+\alpha)(N+1+\alpha)}{2N+1+\alpha} \left[ \int_0^{NT_0} \bar{F}(t)\,\mathrm{d}t - N \int_{NT_0}^{(N+1)T_0} \bar{F}(t)\,\mathrm{d}t - \frac{c_3}{c_2} \right] > 0. \qquad (9.7)$$

Thus, the left-hand side of (9.3) is greater than that of (9.4), and N_2* ≥ N_1*. In particular, when F(t) = 1 − exp(−λt), (9.1) is

$$C_1(N) = \frac{c_1}{N+\alpha} + \frac{c_2}{\lambda} \left( N\lambda T_0 - 1 + e^{-N\lambda T_0} \right) + c_3, \qquad (9.8)$$

and (9.3) is

$$(N+\alpha)(N+1+\alpha) \left[ \lambda T_0 - e^{-N\lambda T_0} \left( 1 - e^{-\lambda T_0} \right) \right] \ge \frac{c_1}{c_2/\lambda}. \qquad (9.9)$$


Table 9.1 Optimal N1* and N2*, and resulting costs C1(N1*)/c2 and C2(N2*)/c2

c1/c2  c3/c2  λ      α    N1*  C1(N1*)/c2   N2*  C2(N2*)/c2
0.1    0.1    0.001  0.1  5    1.321×10^-1  15   1.457×10^-2
0.5    0.1    0.001  0.1  8    1.936×10^-1  18   1.604×10^-2
1.0    0.1    0.001  0.1  10   2.488×10^-1  20   1.742×10^-2
0.1    0.5    0.001  0.1  5    5.321×10^-1  32   3.155×10^-2
0.1    1.0    0.001  0.1  5    1.032        46   4.444×10^-2
0.1    0.1    0.005  0.1  3    1.546×10^-1  7    3.360×10^-2
0.1    0.1    0.01   0.1  2    1.675×10^-1  5    4.851×10^-2
0.1    0.1    0.001  0.5  4    1.302×10^-1  15   1.456×10^-2
0.1    0.1    0.001  1.0  4    1.280×10^-1  15   1.455×10^-2

Similarly, (9.2) is

$$C_2(N) = \frac{1}{NT_0} \left( \frac{c_1}{N+\alpha} + c_3 \right) + c_2 \left( 1 - \frac{1 - e^{-N\lambda T_0}}{N\lambda T_0} \right), \qquad (9.10)$$

and (9.4) is

$$1 - (N+1)e^{-N\lambda T_0} + N e^{-(N+1)\lambda T_0} - \frac{c_1}{c_2} \frac{(2N+1+\alpha)\lambda}{(N+\alpha)(N+1+\alpha)} \ge \frac{c_3}{c_2/\lambda}. \qquad (9.11)$$

Example 9.1 When T_0 is a unit time, i.e., T_0 = 1, Table 9.1 presents optimal N_1* and N_2*, and the resulting costs C_1(N_1*)/c_2 and C_2(N_2*)/c_2 for c_1/c_2, c_3/c_2, λ, and α. This indicates that both N_1* and N_2* increase with c_1/c_2 and 1/λ, and that N_2* increases with c_3/c_2 while N_1* is not changed by it. Furthermore, N_1* decreases with α; however, N_2* and C_2(N_2*) change little with α, and N_2* ≥ N_1*, as shown already above.
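The exponential-case optima can be checked numerically from (9.8) and (9.10). The following sketch reproduces the first row of Table 9.1 (c_1/c_2 = c_3/c_2 = 0.1, λ = 0.001, α = 0.1, T_0 = 1, with c_2 = 1 so that the computed costs are already divided by c_2); the function and variable names are illustrative.

```python
from math import exp

def C1(N, c1, c2, c3, lam, T0, alpha):
    # total expected cost of one cycle, (9.8)
    return c1 / (N + alpha) + (c2 / lam) * (N * lam * T0 - 1 + exp(-N * lam * T0)) + c3

def C2(N, c1, c2, c3, lam, T0, alpha):
    # expected cost rate until self-diagnosis, (9.10)
    x = N * lam * T0
    return (c1 / (N + alpha) + c3) / (N * T0) + c2 * (1.0 - (1.0 - exp(-x)) / x)

args = dict(c1=0.1, c2=1.0, c3=0.1, lam=0.001, T0=1.0, alpha=0.1)
N1 = min(range(1, 1000), key=lambda n: C1(n, **args))
N2 = min(range(1, 1000), key=lambda n: C2(n, **args))
# N1 = 5, N2 = 15, matching the first row of Table 9.1
```

The resulting costs C1(N1)/c2 ≈ 1.321×10^-1 and C2(N2)/c2 ≈ 1.457×10^-2 also agree with Table 9.1, and N2 ≥ N1 as shown analytically by (9.7).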

9.2 Maintenance of Power Generator

The number of aged fossil-fired power plants is increasing in Japan. For example, 33% of these plants have currently been operated from 150,000 to 199,999 h (from 17 to 23 years) and 26% of them above 200,000 h (23 years) [10]. The Japanese government has eliminated regulations of the electric power industry; however, most companies refrain from investment in new plants and prefer to operate current plants efficiently because of the long-term recession. Deliberate maintenance plans are indispensable to operate these aged plants without serious troubles such as an emergency stop of operation. The importance of maintenance for aged plants is much higher than that for new ones, because occurrence probabilities of severe troubles increase and new failure phenomena might


unexpectedly appear as plants degrade. Furthermore, actual life spans of plant components mostly differ from predicted ones, affected by various kinds of factors such as material qualities and operational circumstances [11]. Thus, maintenance plans should be established for a variety of components based on their failure probabilities.

Maintenance is mainly classified into two types: preventive maintenance (PM) and corrective maintenance (CM). Many authors have studied PM policies for systems, because CM cost at failure is much higher than PM cost and consideration of effective PM is significant [12, 13]. The occurrence of failures is discussed by utilizing the cumulative damage model, in which a system suffers damage due to shocks and fails when the total damage has exceeded a failure level. Such cumulative damage models are treated theoretically as cumulative processes [14]. Some aspects of damage models from the reliability viewpoint were discussed in [15, 16].

In Sect. 9.2.1, we take up optimal PM policies for a shock model with a given damage level [8, 9, 15, 17–19]: Shocks occur according to a nonhomogeneous Poisson process, and a system such as a power plant fails only by degradation. CM is undergone when the total damage has exceeded a failure level K, and PM is undergone at time T (0 < T < ∞) or when the total damage has exceeded a managerial level Z (Z ≤ K). In Sect. 9.2.2, we propose a system with N kinds of multi-echelon damage levels, in which failures occur when the total damage surpasses these levels, and a managerial level is assigned for the various failure levels [20]. The expected cost rates are obtained, and optimal maintenance policies to minimize them are discussed in Sects. 9.2.1 and 9.2.2.

9.2.1 Shock Model with Damage Level

Suppose that shocks occur at a nonhomogeneous Poisson process with an intensity function λ(t) and a mean-value function $R(t) \equiv \int_0^t \lambda(u)\,\mathrm{d}u$, which is called a cumulative hazard function in Chap. 5. Then, the probability that shocks occur exactly j (j = 0, 1, 2, …) times during the interval (0, t] is

$$H_j(t) \equiv \frac{[R(t)]^j}{j!}\, e^{-R(t)},$$

where $\sum_{j=0}^{\infty} H_j(t) = 1$. Letting F_j(t) denote the probability that the jth shock occurs during (0, t] (Problem 3),

$$F_{j+1}(t) \equiv \int_0^t \lambda(u) H_j(u)\,\mathrm{d}u = \sum_{i=j+1}^{\infty} H_i(t), \qquad (9.12)$$

where F_0(t) ≡ 1 for t ≥ 0, F(t) ≡ F_1(t) = 1 − exp[−R(t)], and $\mu \equiv \int_0^{\infty} \bar{F}(t)\,\mathrm{d}t = \int_0^{\infty} \exp[-R(t)]\,\mathrm{d}t$.


Furthermore, an amount Y_j (j = 0, 1, 2, …) of damage due to the jth shock has an identical distribution G(x) ≡ Pr{Y_j ≤ x} with finite mean, and each damage is additive. Then, the total damage $W_j \equiv \sum_{i=1}^{j} Y_i$ (j = 0, 1, 2, …) to the jth shock, where W_0 ≡ 0, has a distribution

$$\Pr\{W_j \le x\} = G^{(j)}(x) \quad (j = 1, 2, \dots),$$

where $\Phi^{(j)}(x)$ (j = 1, 2, …) denotes the j-fold Stieltjes convolution of Φ(x) with itself, and $\Phi^{(0)}(x) \equiv 1$ for x ≥ 0. Then, the probability that the total damage exceeds a failure level K exactly at the jth shock is $G^{(j)}(K) - G^{(j+1)}(K)$. Letting W(t) be the total damage at time t, its distribution is [14, 15]

$$\Pr\{W(t) \le x\} = \sum_{j=0}^{\infty} G^{(j)}(x)\, [F_j(t) - F_{j+1}(t)] = \sum_{j=0}^{\infty} G^{(j)}(x)\, H_j(t). \qquad (9.13)$$
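Equation (9.13) is straightforward to evaluate numerically. The sketch below assumes, as an illustration, a Poisson shock process (λ(t) = λ, so H_j(t) is a Poisson probability) and exponential damage G(x) = 1 − e^{−μx}, whose j-fold convolution G^(j) is the Erlang distribution; the function names and parameters are assumptions of this sketch, not part of the text.

```python
from math import exp, lgamma, log

def poisson_pmf(j, m):
    # H_j(t) with R(t) = m = lam * t
    if m <= 0:
        return 1.0 if j == 0 else 0.0
    return exp(j * log(m) - m - lgamma(j + 1))

def erlang_cdf(j, x, mu):
    # G^{(j)}(x): j-fold Stieltjes convolution of G(x) = 1 - exp(-mu x)
    if j == 0:
        return 1.0                    # G^{(0)}(x) = 1 for x >= 0
    return 1.0 - sum(poisson_pmf(n, mu * x) for n in range(j))

def total_damage_cdf(x, t, lam, mu, jmax=200):
    # Pr{W(t) <= x} of (9.13)
    return sum(erlang_cdf(j, x, mu) * poisson_pmf(j, lam * t) for j in range(jmax))

p0 = total_damage_cdf(0.0, 1.0, 2.0, 1.0)   # = Pr{no shock by t} = e^{-2}
```

A quick consistency check: at x = 0 only the j = 0 term survives, so the value equals H_0(t) = e^{−λt}, the probability of no shock by time t.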

The system with damage level z_0 (0 ≤ z_0 < ∞) begins to operate at time 0, and the total damage due to shocks increases gradually. When the total damage has exceeded a prespecified failure level K (K > z_0), the system has to undergo CM immediately because the safety warranty has expired. CM cost is c_K and might be very expensive for the following reasons: CM interrupts the system operation suddenly, and its system down causes various kinds of penalties; moreover, users cannot prepare for the maintenance, and urgent maintenance needs excessive expenses. To avoid such CM, PM is performed at a managerial level Z (z_0 ≤ Z ≤ K) or at time T (0 < T ≤ ∞), whichever occurs first. PM cost is c_0 and is much less than c_K (c_K > c_0). After CM or PM, the time returns to 0 and the system with some damage level begins to operate. For example, such maintenance policies are applicable to fossil-fired power plants, which need PM for their steady operation, and whose sudden downs might incur expensive CM cost.

Then, the probability that the system undergoes PM at time T before damage Z is

$$P_T = \sum_{j=0}^{\infty} G^{(j)}(Z-z_0)\, H_j(T),$$

the probability that the system undergoes PM when the total damage has exceeded damage Z is

$$P_Z = \sum_{j=0}^{\infty} \int_0^{Z-z_0} \left[ G(K-z_0-x) - G(Z-z_0-x) \right] \mathrm{d}G^{(j)}(x)\, F_{j+1}(T),$$

and the probability that the total damage has exceeded failure level K at the (j+1)th shock when its damage was less than Z at the jth shock is


$$P_K = \sum_{j=0}^{\infty} \int_0^{Z-z_0} \bar{G}(K-z_0-x)\,\mathrm{d}G^{(j)}(x)\, F_{j+1}(T).$$

Thus, the mean time to some maintenance is

$$T \sum_{j=0}^{\infty} G^{(j)}(Z-z_0)\, H_j(T) + \sum_{j=0}^{\infty} \left[ G^{(j)}(Z-z_0) - G^{(j+1)}(Z-z_0) \right] \int_0^T t\,\mathrm{d}F_{j+1}(t) = \sum_{j=0}^{\infty} G^{(j)}(Z-z_0) \int_0^T H_j(t)\,\mathrm{d}t, \qquad (9.14)$$

and the total expected cost until maintenance is

$$c_0 (P_T + P_Z) + c_K P_K = c_0 + (c_K - c_0) P_K. \qquad (9.15)$$

Therefore, from (9.14) and (9.15), the expected cost rate is

$$C(T, Z) = \frac{c_0 + (c_K - c_0) \sum_{j=0}^{\infty} \int_0^{Z-z_0} \bar{G}(K-z_0-x)\,\mathrm{d}G^{(j)}(x)\, F_{j+1}(T)}{\sum_{j=0}^{\infty} G^{(j)}(Z-z_0) \int_0^T H_j(t)\,\mathrm{d}t}. \qquad (9.16)$$

When shocks occur at a renewal process with a renewal function $M_F(t) \equiv \sum_{j=1}^{\infty} F^{(j)}(t)$, by replacing H_j(t) with $F^{(j)}(t) - F^{(j+1)}(t)$, (9.16) is

$$C(T, Z) = \frac{c_0 + (c_K - c_0) \sum_{j=0}^{\infty} \int_0^{Z-z_0} \bar{G}(K-z_0-x)\,\mathrm{d}G^{(j)}(x)\, F^{(j+1)}(T)}{\sum_{j=0}^{\infty} G^{(j)}(Z-z_0) \int_0^T \left[ F^{(j)}(t) - F^{(j+1)}(t) \right] \mathrm{d}t}. \qquad (9.17)$$

When shocks occur at a Poisson process with rate λ, i.e., λ(t) = λ, $H_j(t) = F_j(t) - F_{j+1}(t) = [(\lambda t)^j / j!]\, e^{-\lambda t}$ and $F_j(t) = F^{(j)}(t) = \sum_{i=j}^{\infty} [(\lambda t)^i / i!]\, e^{-\lambda t}$ (j = 0, 1, 2, …), (9.16) is

$$\frac{C(T, Z)}{\lambda} = \frac{c_0 + (c_K - c_0) \sum_{j=0}^{\infty} \int_0^{Z-z_0} \bar{G}(K-z_0-x)\,\mathrm{d}G^{(j)}(x)\, F^{(j+1)}(T)}{\sum_{j=0}^{\infty} G^{(j)}(Z-z_0)\, F^{(j+1)}(T)}. \qquad (9.18)$$

We find optimal policies to minimize C(T, Z)/λ for the following four cases.

(1) Replacement with first shock
Suppose that the system undergoes PM at time T (0 < T < ∞) or at the first shock, whichever occurs first. Then, putting Z = z_0 in (9.18),

9.2 Maintenance of Power Generator

183

c0 C(T, z 0 ) = + (c K − c0 )G(K − z 0 ) , λ 1 − e−λT

(9.19)

which decreases strictly with T to C(∞, z 0 ) = c0 + (c K − c0 )G(K − z 0 ) . λ Thus, optimal T ∗ to minimize C(T, z 0 ) is T ∗ = ∞, i.e., no PM should be performed, and the system undergoes only at the first shock. (2) Replacement with time T Suppose that the system undergoes PM only at time T (0 < T < ∞). Then, putting that Z = K in (9.18), the expected cost rate is

C(T, K ) = λ

c0 + (c K − c0 )

∞ 

 G ( j) (K − z 0 ) − G ( j+1) (K − z 0 ) F ( j+1) (T )

j=0 ∞ j=0

G ( j) (K − z 0 )F ( j+1) (T )

.

(9.20) Differentiating C(T, K ) with T and setting it equal to zero, Q(T, K )



G ( j) (K − z 0 )F ( j+1) (T )

j=0



∞ 

 G ( j) (K − z 0 ) − G ( j+1) (K − z 0 ) F ( j+1) (T ) =

j=0

c0 , c K − c0

(9.21)

where ∞  Q(T, K ) ≡

j=0

 G ( j) (K − z 0 ) − G ( j+1) (K − z 0 ) H j (T ) ∞ . ( j) j=0 G (K − z 0 )H j (T )

If [G ( j) (x) − G ( j+1) (x)]/G ( j) (x) increases strictly with j to 1, then Q(T, K ) increases strictly with T to 1, and side of (9.21) increases strictly the left-hand ( j) G (K − z with T from 0 to MG (K − z 0 ) ≡ ∞ 0 ) (Problem 4). j=1 Therefore, we have the following optimal policy: (i) If MG (K − z 0 ) > c0 /(c K − c0 ), then there exists a finite and unique T ∗ (0 < T ∗ < ∞) which satisfies (9.21), and the resulting cost rate is C(T ∗ , K ) = (c K − c0 )Q(T ∗ , K ) . λ

(9.22)

184

9 Application Examples of Power Generator

(ii) If MG (K − z 0 ) ≤ c0 /(c K − c0 ), then T ∗ = ∞ and C(∞, K ) c0 = . λ 1 + MG (K − z 0 )

(9.23)

(3) Replacement with damage Z Suppose that the system undergoes PM only at damage Z (z 0 ≤ Z ≤ K ). Then, putting T = ∞ and C(Z ) ≡ C(∞, Z ) in (9.18), the expected cost rate is C(Z ) = λ

   Z −z c0 + (c K − c0 ) G(K − z 0 ) + 0 0 G(K − z 0 − x)dMG (x) 1 + MG (Z − z 0 )

.

(9.24) Differentiating C(Z ) with respect to Z and setting it equal to zero, 

K −z 0 K −Z

[1 + MG (K − z 0 − x)]dG(x) =

c0 , c K − c0

(9.25)

whose left-hand side increases strictly with Z from 0 to MG (K − z 0 ). Thus, we have the following optimal policy: (i) If MG (K − z 0 ) > c0 /(c K − c0 ), then there exists a finite and unique Z ∗ (z 0 < Z ∗ < K ) which satisfies (9.25), and the resulting cost rate is C(Z ∗ ) = (c K − c0 )G(K − Z ∗ ) . λ

(9.26)

(ii) If MG (K − z 0 ) ≤ c0 /(c K − c0 ), then Z ∗ = K and the expected cost rate is given in (9.23). (4) Replacement with time T and damage Z We find optimal T ∗ and Z ∗ to minimize C(T, Z ) in (9.18). Differentiating C(T, Z ) with respect to Z and setting it equal to zero, ∞

j=0

F ( j+1) (T )



Z −z 0

[G(K − Z ) − G(K − z 0 − x)]dG ( j) (x) =

0

c0 , c K − c0 (9.27)

whose left-hand side L 1 (Z ; T ) increases strictly with Z from 0 to L 1 (K ; T ) ≡



j=1

G ( j) (K − z 0 )F ( j) (T ) .

9.2 Maintenance of Power Generator

185

Thus, if L 1 (K ; T ) > c0 /(c K − c0 ), then there exists a finite and unique Z ∗ (z 0 ≤ Z ∗ < K ) which satisfies (9.27). Differentiating C(T, Z ) with respect to T and setting it equal to zero, ∞

F ( j+1) (T )

j=0



Z −z 0

[Q(T, Z ) − G(K − z 0 − x)]dG ( j) (x) =

0

c0 , (9.28) c K − c0

where ∞  Z −z0 Q(T, Z ) ≡

j=0 0

G(K − z 0 − x)dG ( j) (x)H j (T ) ∞ < G(K − Z ) . ( j) j=0 G (Z − z 0 )H j (T )

Letting L 2 (T ; Z ) be the left-hand side of (9.28) and noting Q(T, Z ) < G(K − Z ) , L 2 (T ; Z ) < L 1 (Z ; T ) =

c0 . c K − c0

Thus, L 2 (T ; Z ∗ ) < c0 /(c K − c0 ), and L 2 (T ; Z ∗ ) decreases with T , i.e., T ∗ = ∞, and Z ∗ is given in (9.25).  Z −z If 0 0 G(K − z 0 − x)dG ( j) (x)/G ( j) (Z − z 0 ) increases strictly with j to G(K − Z ), then L 2 (T, Z ) increases strictly with T to ∞ 

j=0

Z −z 0

G(K − z 0 − x)dG ( j) (x) ,

0

which agrees with that of (9.24) (Problem 5). Thus, if Z > Z ∗ in (9.25), then there exists a finite and unique T ∗ (0 < T ∗ < ∞) which satisfies (9.28). Conversely, if Z ≤ Z ∗ , then T ∗ = ∞.

9.2.2 Two Modified Schock Models In the previous models, we might not perform PM at time T or at some shock for sudden and unprepared attempts. We propose the following two modified PM models in which PM is performed at the first shock over time T and when the total damage has been between Z and K at some shock [16, p. 72]. (1) Model 1 Suppose that CM is performed before time T immediately when the total damage has exceeded a failure level K at some shock, and its cost is c K . Furthermore, PM

186

9 Application Examples of Power Generator

is performed at the first shock over time T when the total damage has not exceeded level K before T , and its cost is c0 , where c0 < c K . Then, the mean time to CM is ∞



G ( j) (K − z 0 ) − G ( j+1) (K − z 0 )





T

tdF ( j+1) (t) ,

0

j=0

and the mean time to PM is ∞

G

( j)





T

(K − z 0 )

uλ(u)e 0

j=0



−[R(u)−R(t)]



du dF ( j) (t) .

T

Thus, the mean time to maintenance is (Problem 6) ∞

G ( j) (K − z 0 )

=

T



G ( j) (K − z 0 )



T 0

j=0



H j (t)dt + H j (T )

0

j=0 ∞



e−[R(u)−R(T )] du



T







e−R(u)+R(t) du dF ( j) (t) .

(9.29)

t

Therefore, from (9.20) and (9.29), the expected cost rate is  ( j) c0 + (c K − c0 ) ∞ j=0 1 − G (K − z 0 ) H j (T )  C1 (T ) = . T ∞ ∞ ( j) −R(u)+R(t) du dF ( j) (t) j=0 G (K − z 0 ) 0 t e

(9.30)

(2) Model 2 PM is performed at the first shock when the total damage has been between Z and K at some shock. The mean time to CM is ∞ 

j=0

Z −z 0

G(K − z 0 − x)dG

( j)





(x)

0

tdF ( j+1) (t) ,

0

and the mean time to PM is ∞ 

j=0

Z −z 0

[G(K − z 0 − x) − G(Z − z 0 − x)] dG

0

Thus, the mean time to maintenance is (Problem 7)

( j)





(x) 0

tdF ( j+2) (t) .

9.2 Maintenance of Power Generator

187

Table 9.2 Optimal λT ∗ and resulting cost rate C(T ∗ , K )/(λc0 ) and C(∞, z 0 )/(λc0 ) ω(K − z 0 ) c K /c0 λT ∗ C(T ∗ )/(λc0 ) C(∞, z 0 )/(λc0 ) 1 2 4.79 5 10 20 10 10 10

2 2 2 2 2 2 5 10 20



36.36 15.04 7.98 7.99 10.13 16.87 5.75 4.38 3.47

G ( j) (Z − z 0 )

∞ 

j=0

= μ+



1.3679 1.1353 1.0083 1.0067 1.0000 1.0000 1.0002 1.0004 1.0009

H j (t)dt

0

j=0

+



0.9937 0.6667 0.3403 0.3275 0.1676 0.0798 0.2598 0.3338 0.4200

Z −z 0

[G(K − z 0 − x) − G(Z − z 0 − x)] dG ( j) (x)



H j+1 (t)dt

0

0

∞ 

j=0



Z −z 0 0

G(K − z 0 − x)dG ( j) (x)





H j+1 (t)dt .

(9.31)

0

Therefore, from (9.24) and (9.31), the expected cost rate is    c0 + (c K − c0 ) G(K − z 0 ) + 0Z −z0 G(K − z 0 − x)dMG (x) C2 (Z ) = . ∞  Z −z0 μ+ ∞ G(K − z 0 − x)dG ( j) (x) 0 H j+1 (t)dt j=0 0 (9.32)

Example 9.2 Table 9.2 presents optimal λT ∗ and its resulting cost rates C(∞, z 0 ) in (9.19) and C(T ∗ , K )/(λc0 ) in (9.22) for ω(K − z 0 ) and c K /c0 . This indicates that C(T ∗ , K )/(λc0 ) decrease with ω(K − z 0 ) and c0 /c K . It is of interest that when  c K /c0 = 2, λT ∗ is minimum at 7.98 when ω(K − z 0 ) = 4.79. Example 9.3 Table 9.3 presents optimal ω(Z ∗ − z 0 ) and its resulting cost rate C(Z ∗ ) /(λc0 ) in (9.26). Variations of ω(K − z 0 ) and c K /c0 are the same as ones in Table 9.2. This indicates that ω(Z ∗ − z 0 ) increase and C(Z ∗ )/(λc0 ) decrease with ω(K − z 0 ) and c0 /c K . Table 9.3 also presents optimal λT1∗ and its resulting cost rate C1 (T1∗ )/(λc0 ) in t (9.30) when R(t) = 0 λdt = λt. This indicates that λT1∗ increases and C1 (T1∗ )/(λc0 ) decreases with ω(K − z 0 ) and c0 /c K .

188

9 Application Examples of Power Generator

Table 9.3 Optimal ω(Z ∗ − z 0 ) and resulting cost rates C(Z ∗ )/(λc0 ), λT1∗ and C1 (T1∗ )/(λc0 ), w(Z 2∗ − z 0 ) and C2 (Z 2∗ )/(λc0 ) C(Z ∗ )/(λc0 ) λT1∗

C1 (T1∗ )/(λc0 ) ω(Z 2∗ − z 0 )

C2 (Z 2∗ )/(λc0 )

1.00

1.0000

0.98

4.6980

0.21

0.8279

1.56

0.6422

1.21

2.5881

0.93

0.5191

2

3.69

0.2708

1.97

1.3947

3.33

0.2311

10

2

7.93

0.1261

3.29

1.0818

7.73

0.1146

20

2

17.16

0.0583

5.92

1.0055

17.05

0.0554

10

5

6.71

0.1490

2.79

1.1299

6.56

0.1323

10

10

6.02

0.1664

2.45

1.1816

5.87

0.1458

10

20

5.38

0.1861

2.13

1.2510

5.22

0.1608

ω(K − z 0 )

c K /c0

1

2

2

2

5

ω(Z ∗ − z 0 )

Table 9.3 also presents optimal ω(Z 2∗ − z 0 ) and its resulting cost rate C2 (Z 2∗ )/(λc0 ) in (9.32). This indicates that changes of ω(Z 2∗ − z 0 ) and C2 (Z 2∗ )/(λc0 ) show the same tendencies as ω(Z ∗ − z 0 ) and C(Z ∗ ) /(λc0 ). When ω(K − z 0 ) and c K /c0 are same,  ω(Z 2∗ − z 0 ) and C2 (Z 2∗ )/(λc0 ) are lower than ω(Z ∗ − z 0 ) and C(Z ∗ ) /(λc0 ).

9.2.3 Shock Model with Multi-echelon Risks When a failure level K is multi-stage, we propose the following extended maintenance models: (1) Model 1 We consider the system which operates for an infinite time span and make the following assumptions: (1) Successive shocks occur at time intervals X j ( j = 1, 2, . . .) which have  ∞ an identical distribution F(t) ≡ Pr{X j ≤ t} with finite mean μ ≡ E{X j } = 0 F(t)dt. That is, shocks occur at a renewal process with inter-arrival time distribution F(t), and hence, the probability that shocks occur exactly j ( j = 1, 2, . . .) times during (0, t] is H j (t) ≡ F ( j) (t) − F ( j+1) (t). (2) An amount Y j of damage due to the jth shock has an identical distribution G(x) ≡ Pr{Y j ≤ x} ( j = 1, 2, . . .) with finite mean, and each damage is addi j tive. Then, the total damage W j ≡ i=1 Yi has a distribution Pr{W j ≤ x} = G ( j) (x), where W0 ≡ 0. (3) Assume n (n = 1, 2, . . .) kinds of failures such that 0 ≤ K 0 < K 1 < K 2 < · · · < K n < K n+1 = ∞. When the total damage has exceeded a failure level K i (i = 1, 2, . . . , n), CM is performed, and its cost is ci (< ci+1 ). (4) PM is performed when the total damage has exceeded a managerial level Z ≡ K 0 < K 1 , and its cost is c Z (≡ c0 < c1 ).

9.2 Maintenance of Power Generator

189

Let W (t) denote the total damage at time t, where its probability Pr{W (t) ≤ x} is given in (9.13). The probability that the system undergoes PM when the total damage is between Z and K 1 at some shock is ∞ 

PZ =

Z

[G(K 1 − x) − G(Z − x)]dG ( j) (x) ,

0

j=0

and the probability Pi (i = 1, 2, . . . , n) that the system undergoes CM when the total damage is between K i and K i+1 at some shock is Pi =

∞ 

j=0

Z

[G(K i+1 − x) − G(K i − x)]dG ( j) (x) (i = 1, 2, . . . , n) ,

0

where note that PZ + ∞



G

( j)

n i=1

(Z ) − G

Pi = 1. Thus, the mean time to maintenance is

( j+1)

(Z )







tdF

( j+1)

(t) = μ

0

j=0



G ( j) (Z ) ,

(9.33)

j=0

and the total expected cost is, c Z PZ +

n

ci Pi = c Z +

i=1

n

(ci − c Z )Pi .

(9.34)

i=1

Therefore, from (9.33) and (9.34), the expected cost rate is n

∞  Z

[G(K i+1 − x) − G(K i − x)]dG ( j) (x) ∞ ( j) j=0 G (Z )   Z n c Z + i=1 (ci − ci−1 ) G(K i ) + 0 G(K i − x)dMG (x) = , (9.35) 1 + MG (Z )

μC1 (Z ) =

cZ +

i=1 (ci

− cZ )

j=0 0

( j) where MG (x) ≡ ∞ j=1 G (x). ∗ We find optimal Z to minimize C1 (Z ) in (9.35). Differentiating C1 (Z ) with respect to Z and setting it equal to zero (Problem 8),  n

(ci − ci−1 ) i=1

Ki K i −Z

[1 + MG (K i − x)]dG(x) = c Z ,

(9.36)

whose left-hand side L 1 (Z ) increases strictly with Z from 0 to L 1 (K 1 ). Therefore, we have the following optimal policy:

190

9 Application Examples of Power Generator

(i) If L 1 (K 1 ) > c Z , then there exists a finite and unique Z ∗ (0 < Z ∗ < K 1 ) which satisfies (9.36), and the resulting cost rate is μC1 (Z ∗ ) =

n

(ci − ci−1 )G(K i − Z ∗ ) .

(9.37)

i=1

(ii) If L 1 (K 1 ) ≤ c Z , then Z ∗ = K 1 , i.e., the system undergoes only CM, and the expected cost rate is

μC1 (K 1 ) =

c1 +

   K1 (c − c ) G(K ) + G(K − x)dM (x) i i−1 i i G i=2 0

n

1 + MG (K 1 )

. (9.38)

n n Noting that L 1 (K 1 ) ≤ i=1 (ci − ci−1 )MG (K i ), if i=1 (ci − ci−1 )MG (K i ) ≤ c Z , then Z ∗ = K 1 . When n = 1, (9.35) is μC1 (Z ) =

  Z c Z + (c1 − c Z ) G(K 1 ) + 0 G(K 1 − x)dMG (x) 1 + MG (Z )

,

(9.39)

and (9.36) is 

K1 K 1 −Z

[1 + MG (K 1 − x)]dG(x) =

cZ . c1 − c Z

(9.40)

Therefore, the optimal policy is simplified as follows : (i) If MG (K 1 ) > c Z /(c1 − c Z ), then there exists a finite and unique Z ∗ (0 < Z ∗ < K 1 ) which satisfies (9.40), and the resulting cost rate is μC1 (Z ∗ ) = (c1 − c Z )G(K 1 − Z ∗ ) .

(9.41)

(ii) If MG (K 1 ) ≤ c Z /(c1 − c Z ), then Z ∗ = K 1 , and its expected cost rate is μC1 (K 1 ) =

c1 . 1 + MG (K 1 )

(9.42)

When G(x) ≡ 1 − exp(−ωx) and MG (x) = ωx, (9.35) is μC1 (Z ) = and (9.36) is

cZ +

n

− ci−1 )e−ω(K i −Z ) , 1 + ωZ

i=1 (ci

(9.43)

9.2 Maintenance of Power Generator

ωZ

n

191

(ci − ci−1 )e−ω(K i −Z ) = c Z .

(9.44)

i=1

Therefore, the optimal policy is rewritten as follows: n ωK i (ci − ci−1 ) > c Z , then there exists a finite and unique Z ∗ (0 < (i) If i=1 ∗ Z < K 1 ) which satisfies (9.44), and the resulting cost rate is μC1 (Z ∗ ) =

n

∗ (ci − ci−1 )e−ω(K i −Z ) .

(9.45)

i=1

(ii) If

n i=1

ωK i (ci − ci−1 ) ≤ c Z , then Z ∗ = K 1 , and the resulting cost rate is μC1 (K 1 ) =

c1 +

n

− ci−1 )e−ω(K i −K 1 ) . 1 + ωK 1

i=2 (ci

(9.46)

Example 9.4 Table 9.4 presents optimal ωZ ∗ and its resulting cost rate μC1 (Z ∗ ) for n when ci = 5i, 10i and ωK i = 5i, 5i + 5 (i = 1, 2, . . . , n). These CM costs ci are arranged based on a financial evaluation model of atomic reactor plant [21] and the failure damage levels ωK i are proportional to CM costs. This indicates that ωZ ∗ decrease and μC1 (Z ∗ ) increase when ci and 1/(ωK i ) increase.  (2) Model 2 Assumptions from (1)–(4) are the same as ones in Model 1, and the following assumption is added: Table 9.4 Optimal ωZ ∗ and resulting cost rate μC1 (Z ∗ ) when ci = 5i and ωK i = 5i n ci = 5i, ωK i = 5i ci = 10i, ωK i = 5i ci = 5i, ωK i = 5i + 5 ωZ ∗ μC1 (Z ∗ ) ωZ ∗ μC1 (Z ∗ ) ωZ ∗ μC1 (Z ∗ ) 1 2 5 10 1 2 5 10 1 2 5 10

0.304 0.193 0.135 0.125 0.438 0.334 0.296 0.294 1.060 1.010 1.005 1.005

329 517 740 799 228 300 339 341 95 99 100 100

0.157 0.099 0.069 0.064 0.238 0.180 0.158 0.156 0.685 0.650 0.645 0.645

638 1009 1451 1567 420 558 634 638 146 154 155 155

0.438 0.289 0.207 0.193 0.816 0.656 0.594 0.592 2.640 2.570 2.565 2.565

228 346 483 518 122 152 168 169 38 39 39 39

192

9 Application Examples of Power Generator

(5) PM is performed at time T (0 < T < ∞) when the total damage is less than Z and its cost is c Z . The probability PT that the system undergoes PM at time T when the total damage is less than Z is PT =



G ( j) (Z )H j (T ) ,

j=0

when the total damage is between Z and K 1 before time T , PZ =

∞ 

Z

[G(K 1 − x) − G(Z − x)]dG ( j) (x)F ( j+1) (T ) ,

0

j=0

and when the total damage is between K i and K i+1 (i = 1, 2, . . . , n) before time T ,

Pi =

∞ 

j=0

Z

[G(K i+1 − x) − G(K i − x)]dG ( j) (x)F ( j+1) (T ) .

0

Thus, the mean time to maintenance is, from (9.33), ∞



G ( j−1) (Z ) − G ( j) (Z )





T

tdF ( j) (t) + T PT =

0

j=1



G ( j) (Z )

j=0



T

H j (t)dt ,

0

(9.47) and the total expected cost is, from (9.34), c Z (PT + PZ ) +

n

ci Pi = c Z +

i=1

n

(ci − c Z )Pi .

(9.48)

i=1

Therefore, the expected cost rate is C2 (Z , T ) =

cZ +

n

i=1 (ci

Z ( j+1) − ci−1 ) ∞ (T ) 0 G(K i − x)dG ( j) (x) j=0 F , T ∞ ( j) j=0 G (Z ) 0 H j (t)dt (9.49)

which agree with (9.35) when T = ∞. Differentiating C2 (Z , T ) with respect to Z and setting it equal to zero,

9.2 Maintenance of Power Generator

 n ∞

(ci − ci−1 ) F ( j+1) (T ) i=1

j=0

193 Ki K i −Z

G ( j) (K i − x)dG(x) = c Z ,

(9.50)

whose left-hand side L 2 (Z ) increases strictly with Z from 0 to L 2 (K 1 ). Therefore, we have following optimal policy: (i) If L 1 (K 1 ) > c Z , then there exists a finite and unique Z ∗ (0 < Z ∗ < K 1 ) which satisfies (9.50), and the resulting cost rate is n ∗

C2 (Z , T ) =

i=1 (ci

( j+1) − ci−1 ) ∞ (T )G(K i − Z ∗ )G ( j) (Z ∗ ) j=0 F . (9.51) T ∞ ( j) ∗ j=0 G (Z ) 0 H j (t)dt

(ii) If L 2 (K 1 ) ≤ c Z , then Z ∗ = K 1 . When G(x) = 1 − exp(−ωx) and F(t) = 1 − exp(−λt), G ( j) (x) = i j ( j) [(ωx) ∞ /i!] iexp(−ωx), H j (t) = [(λt) /j!] exp(−λt), and F (t) = i= j [(λt) /i!] exp(−λt), (9.49) is n −ωK i cZ + i=1 (ci − c i−1 )e ∞ ∞ j × j=0 (ωZ ) /j! m= j+1 [(λT )m /m!]e−λT C2 (Z , T ) ∞ = ∞ ∞ , i −ωZ m −λT λ j=0 i= j+1 [(ωZ ) /i!]e m= j+1 [(λT ) /m!]e

∞ i= j

(9.52)

and (9.50) is n ∞ ∞ ∞

(ωZ )l −ωZ (λT )m −λT e e (ci − ci−1 )e−ω(K i −Z ) = cZ . l! m! i=1 j=0 l= j+1 m= j+1

(9.53)

When T = ∞, (9.52) and (9.53) agree with (9.43) and (9.44), respectively.

Example 9.5 Table 9.5 presents optimal ωZ ∗ and their resulting cost rates C2 (Z ∗ , T )/λ for n when ci = 5i, ωK i = 5i + 5, and λT = 8760 h (1 year), 17520 h (2 years), 43800 h (5 years). Optimal ωZ ∗ increase and C2 (Z ∗ , T )/λ decrease with λT . The changes of ωZ ∗ and C2 (Z ∗ , T )/λ become slight when λT increase, and also, C2 (Z ∗ , T )/λ increase with n. These indicate that we can recognize that both optimal ωZ ∗ and their resulting costs C2 (Z ∗ , T )/λ are greatly affected by n, ci and ωK i . Thus, when PM plan is established, all serious failure phenomena should be considered thoroughly by estimating FMECA and CM costs, and damage levels should be scheduled accurately to operate a plant efficiently [22]. 

194

9 Application Examples of Power Generator

Table 9.5 Optimal ωZ ∗ and resulting cost rate C2 (Z ∗ , T )/λ when ci = 10i and ωK i = 5i + 5 n

λT = 8760

λT = 17520

λT = 43800

ωZ ∗

C2 (Z ∗ , T )/λ ωZ ∗

C2 (Z ∗ , T )/λ ωZ ∗

C2 (Z ∗ , T )/λ

1

0.711

300

0.533

251

0.447

230

2

0.476

417

0.351

368

0.294

347

5

0.344

554

0.251

504

0.211

484

10

0.321

589

0.234

540

0.196

520

1

1.284

196

0.998

147

0.836

125

2

1.044

225

0.800

176

0.670

155

5

0.952

240

0.724

192

0.608

170

10

0.946

241

0.722

192

0.604

171

1

3.815

122

3.250

69

2.765

43

2

3.725

123

3.160

70

2.690

44

5

3.715

123

3.155

70

2.680

44

10

3.715

123

3.155

70

2.680

44

9.3 Problems 1. Derive (9.4), and prove that its left-hand side increases strictly with N to μ and increases strictly with T0 . 2. Derive (9.7). 3. Derive (9.12). 4. Derive (9.21) and when [G ( j) (x) − G ( j+1) (x)]/G ( j) (x) increases strictly with j to 1, the left-hand side of (9.21) increases strictly with T from 0 to 1.  Z −z 5. Prove that when 0 0 G(K − z 0 − x)dG ( j) (x)/G ( j) (Z − z 0 ) increases strictly with j to G(K − Z ), Q(T, Z ) increases strictly with T to 1 and L 2 (T, Z )  Z −z0 increases strictly with T to ∞ G(K − z 0 − x)dG ( j) (x). j=0 0 6. Derive (9.29). 7. Derive (9.31). 8. Derive (9.36) and prove that the left-hand side L 1 (Z ) of (9.36) increases strictly with Z from 0 to L 1 (K 1 ).

References 1. Robinson K (1987) Digital controls for gas turbine engines. In: Presented at the gas turbine conference and exhibition 2. Kendell R (1981) Full-authority digital electronic controls for civil aircraft engines. In: Presented at the gas turbine conference and Prod show 3. Scoles RJ (1986) FADEC–every jet engine should have one. In: Presented at aerospace technology conference and exhibition

References

195

4. Eccles ES, Simons ED, Evans JFO (1980) Redundancy concepts in full authority electronic engine control–particularly dual redundancy. In: Presented at AGARD conference 5. Davies WJ, Hoelzer CA, Vizzini RW (1983) F-14 aircraft and propulsion control integration evaluation. J Eng Gas Turbines Power 105:663–668 6. Cahill MJ, Underwood FN (1987) Development of digital engine control system for the Harrier II. In: Presented at AIAA/SAE/ASME/ASEE 23rd joint propulsion conference 7. Ito K, Nakagawa T (2003) Optimal self-diagnosis policy for FADEC of gas turbine engines. Math Comput Model 38:1243–1248 8. Ito K, Nakagawa T (2009) Applied maintenance models. In: Ben-Daya M, Duffuaa SO, Raouf A, Knezevic J, Ait-Kadi D (eds) Handbook of maintenance management and engineering. Springer, Dordrecht, pp 363–395 9. Ito K, Nakagawa T (2009) Maintenance models of miscellaneous systems. In: Nakamura S, Nakagawa T (eds) Stochastic reliability modeling, optimization and applications. World Scientific, Singapore, pp 243–278 10. Hisano K (2000) Preventive maintenance and residual life evaluation technique for power plant (I. Preventive maintenance). Therm Nucl Power 51:491–517 (in Japanese) 11. Hisano K (2001) Preventive maintenance and residual life evaluation technique for power plant (V. Review of future advances in preventive maintenance technology). Therm Nucl Power 52:363–370 (in Japanese) 12. Barlow RE, Proschan F (1965) Mathematical theory of reliability. Wiley, New York 13. Nakagawa T (2005) Maintenance theory of reliability theory. Springer, London 14. Cox DR (1962) Renewal theory. Methuen, London 15. Nakagawa T (2007) Shock and damage models in reliability theory. Springer, London 16. Zhao X, Nakagawa T (2018) Advanced maintenance policies for shock and damage models. Springer, London 17. Qian CH, Ito K, Nakagawa T (2005) Optimal preventive maintenance policies for a shock model with given damage level. J Qual Maint Eng 11:216–227 18. 
Ito K, Nakagawa T (2006) Maintenance of a cumulative damage model and its application to gas turbine engine of co-generation system. In: Pham H (ed) Reliability modeling, analysis and optimization. World Scientific, Singapore, pp 429–438 19. Ito K, Nakagawa T (2009) Optimal censoring policies for the operation of a damage system. In: Dohi T, Osaki S, Sawaki K (eds) Recent advances in stochastic operations research II. World Scientific, Singapore, pp 201–210 20. Nakagawa T, Ito K (2006) Optimal maintenance policies for a system with multiechelon risks. IEEE Trans Syst Man Cybern Part A Syst Humans 38:461–469 21. Sagisaka M, Isobe Y, Yoshimura S, Yagawa G (2004) Economic evaluation of maintenance strategies of steam generator tubes using probabilistic fracture mechanics and financial method. J Nucl Sci Technol 3:11–24 (in Japanese) 22. US Department of Defense (1989) Procedure for performing a failure mode, effects, and criticality analysis. (MIL-STD-1629A)

Chapter 10

Application Examples of Airframe

The airframe for civil aviation has to be lightweight because its weight affects the range and fuel-efficiency of aircraft and is different from military aviation. The airframe has to be tolerant for damage due to statistical and dynamic mechanical stresses produced by changes of air pressure and temperature, vibration of engine and aerodynamic turbulence and shocks when it takes off and lands. Aluminum alloys 2024 and 7075 are adopted for the principal substance of construction because its tensile strength is almost as same as that of ferrous alloy and its specific weight is from one third to one fourth of ferrous alloy. The stressed skin semi-monocoque structure is adopted and its stress is analyzed strictly utilizing a finite element method for reducing weight [1]. The remaining life of mechanical structures under cyclic stresses is estimated by S-N curve, and the ferrous alloy structure has infinite lifespan when it is designed to hold stress below a threshold of tolerance limitation. Whereas, aluminum alloy has no such specific limitation and fails finally under a small stress condition. Hence, the lifespan of airframe with aluminum alloy is finite and the total of tiny stresses might cause serious failures in a long period. Therefore, airframe maintenance is indispensable to operate aircraft without any serious troubles. Four-echelon preventive maintenances (PMs) are undergone [2]: (1) A check : Inspections of landing gear, control surfaces, fluid levels, oxygen systems, lighting and auxiliary power systems are performed at every three to five days at airport gate. (2) B check : A check plus inspections of internal control systems, hydraulic systems and energy equipments are performed at every eight months at airport gate. (3) C check : Aircraft is opened up extensively, and inspections of wear, corrosion and cracks are performed at every 12–17 months at hub airport. 
(4) D check : Aircraft is disassembled and overhauled perfectly at every 22,500 flight hours at specialized facility. © Springer Nature Switzerland AG 2023 K. Ito and T. Nakagawa, Optimal Inspection Models with Their Applications, Springer Series in Reliability Engineering, https://doi.org/10.1007/978-3-031-22021-0_10

197

198

10 Application Examples of Airframe

A and B checks are minor level maintenance, whereas, C and D checks are heavy maintenance. Especially, D check affects aircraft availability and lifecycle cost. Fourechelon PMs are adopted and their optimal policies are discussed theoretically in Sect. 10.1. Some cracks cause damage to airframes and grow with operation steadily. When such cracks have exceeded a certain scale, airframes cannot be operated because its strength cannot be maintained. So that, maintenance policies should be discussed for airframes with cracks in Sect. 10.2. Furthermore, the number of cracks should be considered for maintenance of airframes because multiple ones may not maintain sufficient strength. Optimal PM policies are discussed in Sect. 10.3, considering both scales and number of cracks.

10.1 Optimal Multi-echelon Maintenance of Aircraft There have existed some parameters that change optimal policies significantly even if they change slightly for imperfect PM models [3–5]. Thus, sensitive parameters have to be payed much attention to making the design of high reliability systems. In Sect. 10.1.1, some maintenance models without such sensitive parameters are discussed. Airline companies own their aircrafts for certain periods and sell them to another companies because their values decrease with time. It is another important issue in securing profits for these companies when to sell their aircrafts. In Sect. 10.1.2, optimal censoring policies are derived, using the maintenance model in Sect. 10.1.1.

10.1.1 Multi-echelon Maintenance with Hazard Rates We consider the imperfect maintenance of airframes when their PMs are imperfect. The imperfect maintenance has several derivative models because there exist some differences in parameters which express the imperfectness of inspection [3, p.171]. We make the following assumptions of airframes: (1) The airframe is maintained preventively at times kT (k = 1, 2, 3, . . . , M − 1) and is overhauled at time S ≡ M T (M = 1, 2, . . .). (2) Failure rate h(t) increases strictly with t and remains undisturved by maintenance. The age becomes x units younger at each PM, where x (0 ≤ x ≤ T ) is constant and previously specified. Furthermore, airframe is replaced if it oerates for the time interval S = M T . (3) PM cost is c1 , overhaul cost is c S and repair cost of airframe failure is c R . The expected cost rate of the imperfect maintenance where the age becomes younger at each PM is [3, p. 179]

10.1 Optimal Multi-echelon Maintenance of Aircraft

1 C0 (M) = MT

 cR

M−1   (ka+1)T k=0

199

 h(t)dt + c1 M + c S

,

(10.1)

kaT

where a ≡ (T − x)/T and a is a very sensitive parameter because subtle changes have a big impact on results. Designers have to pay attention to such sensitive parameters. One example of such parameters is the factor of safety (FS), which is the factor how a system is stronger than it needs to be for the certain load and is equal to the yield stress divided by the working stress. The main part FS of airframes is settled in advance, for example, FS of pressurized fuselage is 2.0, that of landing gear structures is 1.25, and that of other parts are equal to 1.5. Once these FSs are settled, they are not changed until the end of development because a slight difference of FS might cause a major redesign. Designers do not wish to deal with such sensitive parameters in their making. Therefore, we propose the imperfect maintenance model in which sensitive parameters do not appear [4].

(1) Model 1 We consider the following airframe maintenance: t (2) Cumulative hazard function H (t) ≡ 0 h(u)du of airframe is composed of three parts such as H (t) ≡ H0 (t) + H1 (t) + HS (t) ,

(10.2)

where H1 (t) becomes new at time kT because all failures of this part are detected and repaired, H1 (t) + HS (t) becomes new at time S because airframe is opened up, inspected and repaired extensively, and H0 (t) is undisturbed at any times of maintenance and overhaul. We have the same assumptions (1) and (3). Cumulative hazard functions H (t) before (−) and after (+) PM at time kT are denoted as, respectively, H− (kT ) = H1 (T ) + HS (kT ) + H0 (kT ) , H+ (kT ) = HS (kT ) + H0 (kT ) . Cumulative hazard functions H (t) before and after overhaul at time S are denoted as, respectively, H− (S) = H1 (T ) + HS (S) + H0 (S) , H+ (S) = H0 (S) . The change of cumulative hazard functions with time is illustrated in Fig. 10.1. Thus, cumulative hazard function after overhaul at time S is

200

10 Application Examples of Airframe

Fig. 10.1 Change of cumulative hazard function H (t) with time t M−1   (k+1)T k=0

 h(t)dt − H1 (T ) − HS (S)

kT

= H (S) − M H1 (T ) − HS (S) ≡ H A (S) − M H1 (T ) ,

(10.3)

where H A (t) ≡ H0 (t) + H1 (t). The failure during operation is not so serious and failure rate h(t) ≡ d H (t)/dt remains undisturbed by repair. The expected cost rate until time S is C1 (M) =

1 {c R [H A (M T ) − M H1 (T )] + c1 M + c S } . MT

(10.4)

We find optimal M ∗ to minimize C1 (M). Forming the inequality C1 (M + 1) − C1 (M) > 0,  H A (M T ) cS H A ((M + 1)T ) − > M(M + 1) . M +1 M cR 

(10.5)

Suppose that failure rate h(t) ≡ d H (t)/dt increases strictly with t to ∞. Then, H (t)/t increases strictly with t to h(∞) (Appendix A.2). Thus, the left-hand side L 1 (M) of (10.5) increases strictly with M to ∞ (Problem 1). Therefore, we have the following optimal policy: (i) If L 1 (1) ≥ c S /c R , then M ∗ = 1, i.e., airframe is overhauled at every time T . (ii) If L 1 (1) < c S /c R , then there exists a finite and unique minimum M ∗ (1 < M ∗ < ∞) which satisfies (10.5). Example 10.1 Suppose that Hi (t) = λi t m (i = 1, 2) and H0 (t) + HS (t) = (λ0 + λ S )t m . Table 10.1 presents optimal M ∗ in (10.5) for λ0 + λ S , λ1 , m, c S /c R and c1 /c R .

10.1 Optimal Multi-echelon Maintenance of Aircraft

201

Table 10.1 Optimal M ∗ and resulting cost rate C1 (M ∗ )/c R when T = 5 λ0 + λ S λ1 m c S /c R c1 /c R M∗ 0.1 0.2 0.3 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1

0.9 0.9 0.9 1.0 1.1 0.9 0.9 0.9 0.9 0.9 0.9

1.05 1.05 1.05 1.05 1.05 1.06 1.07 1.05 1.05 1.05 1.05

100 100 100 100 100 100 100 110 120 100 100

1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.1 1.2

279 254 234 254 234 219 178 305 331 279 279

C1 (M ∗ )/c R 0.7325 0.8758 1.0185 0.7674 0.8017 0.8219 0.9137 0.7394 0.7457 0.7525 0.7725

This indicates that M ∗ decrease with λ0 + λ S , λ1 , m and c R /c S , and C1 (M ∗ )/c R increase with all of parameters λ0 + λ S , λ1 , m, c S /c R and c1 /c R . 

(2) Model 2 One kind of checks has been undergone in Model 1. We undergo two kinds of checks in Model 2, i.e., Check A is undergone periodically and Check B is undergone at multiple period times of Check A in Fig. 10.2. (1) Airframe is maintained preventively at times kT1 (k = 1, 2, 3, . . . , M1 − 1) and j T2 ≡ j M1 T1 ( j = 1, 2, 3, . . . , M2 − 1), and is overhauled at time S ≡ M2 M1 T1 = M2 T2 . (2) Cumulative hazard function H (t) is composed of four parts such as H (t) = H0 (t) + H1 (t) + H2 (t) + HS (t) ,

Fig. 10.2 PM of Model 2

(10.6)

202

10 Application Examples of Airframe

where H1 (t) becomes new at times kT1 because all failures of this part are detected and repaired, H1 (t) + H2 (t) becomes new at times j T2 because all failures of this part are detected and repaired, H1 (t) + H2 (t) + HS (t) becomes new at time S because airframe is opened up, inspected and repaired extensively, and H0 (t) is undisturbed at any maintenance and overhaul. (3) PM costs are c1 at times kT1 and c2 at times j T2 , overhaul cost is c S at time S, and repair cost is c R at failure. Cumulative hazard functions H (t) before (−) and after (+) PM at time kT1 are, respectively, H− (kT1 ) = H1 (T1 ) + H2 (kT1 ) + HS (kT1 ) + H0 (kT1 ), H+ (kT1 ) =

H2 (kT1 ) + HS (kT1 ) + H0 (kT1 ).

Similarly, H (t) before (−) and after (+) PM at time j T2 are, respectively, H− ( j T2 ) = H1 (T1 ) + H2 (T2 ) + HS ( j T2 ) + H0 ( j T2 ), H+ ( j T2 ) =

HS ( j T2 ) + H0 ( j T2 ),

and H(t) before (−) and after (+) overhaul at time S are, respectively,

H−(S) = H1(T1) + H2(T2) + HS(S) + H0(S),
H+(S) = H0(S).

Thus, the expected number of failures in [0, S] is

Σ_{j=0}^{M2−1} { Σ_{k=0}^{M1−1} [ ∫_{kT1+jT2}^{(k+1)T1+jT2} h(t)dt − H1(T1) ] − H2(T2) } − HS(S)
= HB(S) − M2M1H1(T1) − M2H2(M1T1),    (10.7)

where HB(t) ≡ H0(t) + H1(t) + H2(t). The expected cost rate until time S is

C2(M1, M2) = (1/(M2M1T1)) { cR [HB(M2M1T1) − M2M1H1(T1) − M2H2(M1T1)] + c1M2M1 + c2M2 + cS } .    (10.8)

We find optimal M1∗ and M2∗ to minimize C2(M1, M2) when hB(t) ≡ dHB(t)/dt increases strictly with t to ∞. Forming the inequality C2(M1, M2 + 1) − C2(M1, M2) > 0,

M2(M2 + 1) [ HB((M2 + 1)M1T1)/(M2 + 1) − HB(M2M1T1)/M2 ] > cS/cR .    (10.9)

10.1 Optimal Multi-echelon Maintenance of Aircraft


Letting L21(M2) be the left-hand side of (10.9), L21(M2) increases strictly with M2 to ∞ by the same argument as for (10.5) in Model 1. Therefore, we have the following optimal policy:
(i) If L21(1) ≥ cS/cR, then M2∗ = 1, i.e., Model 2 corresponds to Model 1.
(ii) If L21(1) < cS/cR, then there exists a finite and unique minimum M2∗ (1 < M2∗ < ∞) which satisfies (10.9) for a fixed M1.
Forming the inequality C2(M1 + 1, M2) − C2(M1, M2) > 0,

M2M1(M1 + 1) { [ HA(M2(M1 + 1)T1)/(M2(M1 + 1)) − HA(M2M1T1)/(M2M1) ]
+ [ H2(M2(M1 + 1)T1)/(M2(M1 + 1)) − H2(M2M1T1)/(M2M1) − H2((M1 + 1)T1)/(M1 + 1) + H2(M1T1)/M1 ] } > (c2M2 + cS)/cR .    (10.10)

Letting L2(M1) be the left-hand side of (10.10), L2(M1) increases strictly with M1 to ∞ (Problem 2). Therefore, we have the following optimal policy:
(i) If L2(1) ≥ (c2M2 + cS)/cR, then M1∗ = 1.
(ii) If L2(1) < (c2M2 + cS)/cR, then there exists a finite and unique minimum M1∗ (1 < M1∗ < ∞).

Example 10.2 Maximum M1 and M1M2 in Model 2 are prespecified by the full-scale test and defined as M1 ≤ 73 (1 year) and M1M2 ≤ 438 (6 years), respectively, when T1 = 5 days. Under the same assumptions as Example 10.1, Table 10.2 presents optimal M1∗ and resulting cost rate C2(M1∗, M2)/cR for M2 when λ0 + λS = 0.10, λ1 = 0.5, λ2 = 0.4, m = 1.05, T1 = 5, cS/cR = 100, c1/cR = 0.10 and c2/cR = 1.0.

Table 10.2 Optimal M1∗ and resulting cost rate C2(M1∗, M2)/cR when λ0 + λS = 0.1, λ1 = 0.5, λ2 = 0.4, m = 1.05, T1 = 5, cS/cR = 100, c1/cR = 0.10 and c2/cR = 1.0

M2   M1∗   M1∗M2   C2(M1∗, M2)/cR
1    457   457     0.4056
2    226   452     0.4261
3    150   450     0.4379
4    113   452     0.4463
5     90   450     0.4529
6     76   456     0.4582
7     65   455     0.4628*
8     57   456     0.4668
9     51   459     0.4703
10    45   457     0.4735
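The optimal pair (M1∗, M2∗) can also be located by direct enumeration of the cost rate (10.8). The sketch below assumes power-law cumulative hazards Hi(t) = λi t^m, as used in Example 10.3; the function names and all parameter values are illustrative, not the book's calibration.

```python
# Brute-force minimization of the expected cost rate C2(M1, M2) of (10.8).
# Assumption: power-law cumulative hazards Hi(t) = lam_i * t**m (illustrative).

def cost_rate(M1, M2, T1, lam0S, lam1, lam2, m, cR, c1, c2, cS):
    S = M2 * M1 * T1                         # overhaul time S = M2*M1*T1
    HB = (lam0S + lam1 + lam2) * S ** m      # H_B(S) = H0(S) + H1(S) + H2(S)
    H1 = lam1 * T1 ** m                      # H1(T1), renewed at every kT1
    H2 = lam2 * (M1 * T1) ** m               # H2(M1*T1), renewed at every jT2
    failures = HB - M2 * M1 * H1 - M2 * H2   # expected number of failures in [0, S]
    return (cR * failures + c1 * M2 * M1 + c2 * M2 + cS) / S

def best_pair(M1_max=80, M2_max=12, **p):
    # Exhaustive search over the grid 1 <= M1 <= M1_max, 1 <= M2 <= M2_max.
    return min(((cost_rate(M1, M2, **p), M1, M2)
                for M1 in range(1, M1_max + 1)
                for M2 in range(1, M2_max + 1)))

params = dict(T1=5.0, lam0S=1e-4, lam1=5e-4, lam2=4e-4, m=1.05,
              cR=1.0, c1=0.10, c2=1.0, cS=100.0)
rate, M1_opt, M2_opt = best_pair(**params)
```

When hB(t) increases strictly, the sequential conditions (10.9) and (10.10) should locate the same pair as this exhaustive search, at far lower cost for large grids.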


Table 10.3 Optimal M1∗, M2∗ and resulting cost rate C2(M1∗, M2∗)/cR when T1 = 5, cS/cR = 100, c1/cR = 0.10 and c2/cR = 1.0

λ0 + λS   λ1   λ2   m     M1∗   M2∗   M1∗M2∗   C2(M1∗, M2∗)/cR
0.1       0.5  0.4  1.05  65    7     455      0.4628
0.2       0.5  0.4  1.05  66    6     396      0.6049
0.3       0.5  0.4  1.05  70    5     350      0.7451
0.1       0.6  0.4  1.05  66    6     396      0.4965
0.1       0.7  0.4  1.05  70    5     350      0.5284
0.1       0.5  0.5  1.05  64    7     448      0.4764
0.1       0.5  0.6  1.05  63    7     441      0.4901
0.1       0.5  0.4  1.06  70    5     350      0.5261
0.1       0.5  0.4  1.07  70    4     280      0.5924

Table 10.4 Optimal M1∗, M2∗ and resulting cost rate C2(M1∗, M2∗)/cR when λ0 + λS = 0.1, λ1 = 0.5, λ2 = 0.4, m = 1.05 and T1 = 5

cS/cR   c1/cR   c2/cR   M1∗   M2∗   M1∗M2∗   C2(M1∗, M2∗)/cR
100     0.10    1.0     65    7     455      0.4628
110     0.10    1.0     62    8     496      0.4710
120     0.10    1.0     67    8     536      0.4748
100     0.11    1.0     65    7     455      0.4648
100     0.12    1.0     65    7     455      0.4668
100     0.10    1.1     66    7     462      0.4631
100     0.10    1.2     66    7     462      0.4634

When M2 = 7, M1∗ = 65 minimizes C2(M1∗, M2)/cR subject to M1∗ ≤ 73. This indicates that Check D is undergone at M1∗(M2∗ − 1) + M1 = 65 × 6 + 48 = 438 < 455. Table 10.3 presents optimal M1∗, M2∗ and resulting cost rate C2(M1∗, M2∗)/cR for λ0 + λS, λ1, λ2 and m when T1 = 5, cS/cR = 100, c1/cR = 0.10 and c2/cR = 1.0. This indicates that M1∗M2∗ decreases and C2(M1∗, M2∗)/cR increases with λ0 + λS, λ1, λ2 and m, which shows similar tendencies to Table 10.1 when M∗ = M1∗M2∗. Table 10.4 presents optimal M1∗, M2∗ and resulting cost rate C2(M1∗, M2∗)/cR for cS/cR, c1/cR and c2/cR when λ0 + λS = 0.1, λ1 = 0.5, λ2 = 0.4, m = 1.05 and T1 = 5. This indicates that M1∗M2∗ and C2(M1∗, M2∗)/cR increase with cS/cR, while M1∗M2∗ does not change and C2(M1∗, M2∗)/cR increases with c1/cR and c2/cR.


10.1.2 Operation Policy for Commercial Aircraft

Approximately 50% of aircraft are retired within 25 years and 90% within 35 years [6]. After retirement, aircraft are converted to freight transportation, reused for other purposes, or disassembled. There are secondary markets for aircraft, and the gain on sale declines with time. Therefore, considering their market prices, newly purchased aircraft should be sold on a secondary market at an appropriate point [7]. Using the maintenance model in Sect. 10.1.1, we propose optimal operation censoring policies for commercial aircraft to maximize the profit of airline companies [4]. We consider two echelons of PM for aircraft and make the following assumptions:
(1) An airline company purchases the aircraft at time 0, maintains it preventively at times kT (k = 1, 2, . . . , M − 1) and overhauls it at time S ≡ MT (M = 1, 2, . . .) for a specified T > 0, where T usually means a month, a year and so on. The aircraft operation is censored and the aircraft is sold to another company at time NS (N = 1, 2, . . .).
(2) If PM is not undergone, cumulative hazard function H(t) is composed of three parts such as

H(t) = H0(t) + H1(t) + HS(t),    (10.11)

where H1(t) becomes new at times kT because all failures of this part are detected and repaired, H1(t) + HS(t) becomes new at time S because aircraft is opened up, inspected and repaired extensively, and H0(t) is undisturbed at any times of maintenance and overhaul.
(3) The purchasing cost of aircraft is c0, the average return per unit time for aircraft operation is cP, PM cost is c1, the overhaul cost is cS and the repair cost at airframe failure is cR.
Cumulative hazard functions H(t) before (−) and after (+) PM at times kT (k = 1, 2, . . . , M − 1) are, respectively,

H−(kT) = H1(T) + HS(kT) + H0(kT),
H+(kT) = HS(kT) + H0(kT).

Similarly, H(t) before and after overhaul at times jS (j = 1, 2, . . .) are, respectively,

H−(jS) = H1(T) + HS(S) + H0(jS),
H+(jS) = H0(jS).

Thus, the expected number of failures in [0, NS] is

206

10 Application Examples of Airframe

Σ_{j=1}^{N} { Σ_{k=(j−1)M}^{jM−1} [ ∫_{kT}^{(k+1)T} h(t)dt − H1(T) ] − HS(S) }
= Σ_{j=1}^{N} { H(jMT) − H((j−1)MT) − MH1(T) − HS(S) }
= H(NS) − NMH1(T) − NHS(S).    (10.12)

The total expected cost of airframe maintenance in [0, NS] is [4]

CM(M, N) = cR [H(NMT) − NHS(MT) − NMH1(T)] + c1NM + cSN.    (10.13)

The expected profit rate in [0, NS] is [9]

P(M, N) = [(Profit) − (Loss)] / (Operation period)
= [(Investment income) + (Gain on sale) − (Maintenance cost)] / (Operation period)
= [−c0 + cP NMT − CM(M, N) + CL(NMT)] / (NMT)  (M, N = 1, 2, . . .),    (10.14)

where CL(t) is a capital gain when the company trades its aircraft to another one at time t.
We find optimal N∗ and M∗ to maximize P(M, N) in (10.14) when h(t) ≡ dH(t)/dt increases strictly with t to ∞. Forming the inequality P(M, N + 1) − P(M, N) ≤ 0,

N(N + 1) [ H((N + 1)S)/(N + 1) − H(NS)/N ]
≥ (1/cR) { c0 + N(N + 1) [ CL((N + 1)S)/(N + 1) − CL(NS)/N ] }.    (10.15)

Forming the inequality P(M + 1, N) − P(M, N) ≤ 0,

cR N(M + 1)S [ H(N(M + 1)T)/(N(M + 1)T) − H(NS)/(NS) − HS((M + 1)T)/((M + 1)T) + HS(S)/S ]
≥ c0 + cSN + N(M + 1)S [ CL(N(M + 1)T)/(N(M + 1)T) − CL(NS)/(NS) ].    (10.16)

It is assumed that the capital gain decreases strictly with time t from c0 to cb (cb < c0) as in Fig. 10.3, for example,

CL(t) = (c0 − cb)e^{−at} + cb.    (10.17)

Then, (10.15) is


Fig. 10.3 Sample of function C L (t) with time t

[ H((N + 1)S)/(N + 1) − H(NS)/N ] / { (1 − e^{−aNS})/N − (1 − e^{−a(N+1)S})/(N + 1) } ≥ (c0 − cb)/cR .    (10.18)
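A minimal numerical sketch of this policy scans N = 1, 2, . . . for the first crossing of (10.18), assuming a Weibull-type hazard H(t) = λt^m and the capital-gain curve (10.17). All parameter values below are illustrative, not the book's calibration.

```python
import math

# First N satisfying (10.18), assuming H(t) = lam * t**m and
# C_L(t) = (c0 - cb)*exp(-a*t) + cb from (10.17).  ratio = (c0 - cb)/cR.

def optimal_N(S, lam, m, a, ratio, N_max=500):
    H = lambda t: lam * t ** m
    for N in range(1, N_max + 1):
        La = H((N + 1) * S) / (N + 1) - H(N * S) / N          # numerator of (10.18)
        Lb = (1 - math.exp(-a * N * S)) / N \
             - (1 - math.exp(-a * (N + 1) * S)) / (N + 1)     # denominator of (10.18)
        if La / Lb >= ratio:
            return N
    return None                                               # no crossing found

N_star = optimal_N(S=7.0, lam=1.0, m=1.2, a=0.1, ratio=15.0)
```

For these illustrative numbers the crossing occurs at N∗ = 4; a smaller cost ratio (c0 − cb)/cR pushes the sale earlier, consistent with case (i) of the optimal policy.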

It is easily noted that the numerator La(N) of the left-hand side of (10.18) increases strictly with N to h(∞) and the denominator Lb(N) decreases strictly with N to 0 (Problem 3). Therefore, we have the following optimal policy:
(i) If La(1)/Lb(1) ≥ (c0 − cb)/cR, then N∗ = 1.
(ii) If La(1)/Lb(1) < (c0 − cb)/cR, then there exists a finite and unique N∗ (1 < N∗ < ∞) which satisfies (10.18).
Equation (10.16) is

cR { M H(N(M + 1)T) − (M + 1)H(NMT) − NM HS((M + 1)T) + N(M + 1)HS(MT) }
≥ cSN + (c0 − cb) { (M + 1)[1 − e^{−aNMT}] − M[1 − e^{−aN(M+1)T}] },    (10.19)

whose right-hand side LR(M) increases strictly with M from LR(1) to cSN + c0 − cb (Problem 4). When h(x) = dH(x)/dx and hS(x) = dHS(x)/dx, the bracket LL(M) of the left-hand side of (10.19) is

LL(M) = Σ_{k=0}^{M−1} { ∫_{NMT}^{N(M+1)T} h(u)du − ∫_{NkT}^{N(k+1)T} h(u)du − N [ ∫_{MT}^{(M+1)T} hS(u)du − ∫_{kT}^{(k+1)T} hS(u)du ] },

which increases strictly with M to ∞. Thus, there exists a finite M∗ (1 ≤ M∗ < ∞) which satisfies (10.19) (Problem 4).


Table 10.5 Optimal N∗ when M = 10 and T = S/10

(c0 − cb)/cR   m     S         λ0    λS    λ1    1/a        N∗
1000           1.20  365 × 7   0.10  0.10  0.10  365 × 10   6
1500           1.20  365 × 7   0.10  0.10  0.10  365 × 10   7
2000           1.20  365 × 7   0.10  0.10  0.10  365 × 10   8
1000           1.25  365 × 7   0.10  0.10  0.10  365 × 10   4
1000           1.30  365 × 7   0.10  0.10  0.10  365 × 10   3
1000           1.20  365 × 12  0.10  0.10  0.10  365 × 10   5
1000           1.20  365 × 17  0.10  0.10  0.10  365 × 10   4
1000           1.20  365 × 7   0.15  0.10  0.10  365 × 10   4
1000           1.20  365 × 7   0.20  0.10  0.10  365 × 10   3
1000           1.20  365 × 7   0.10  0.15  0.10  365 × 10   7
1000           1.20  365 × 7   0.10  0.20  0.10  365 × 10   9
1000           1.20  365 × 7   0.10  0.10  0.15  365 × 10   6
1000           1.20  365 × 7   0.10  0.10  0.20  365 × 10   6
1000           1.20  365 × 7   0.10  0.10  0.10  365 × 20   5
1000           1.20  365 × 7   0.10  0.10  0.10  365 × 30   4

Example 10.3 When Hi(t) = λi t^m (i = 0, 1, S), Table 10.5 presents optimal N∗ to maximize P(M, N) in (10.14) for (c0 − cb)/cR, m, S, λ0, λS and 1/a. This indicates that N∗ decreases with cR/(c0 − cb), m, S, λ0, 1/λS and 1/a. It is of interest that λ1 hardly affects N∗.

10.2 Airframe Maintenance with Crack Growth

Airframe has to be tolerant of damage during a designed operation period. To assure the airframe damage tolerance, the fatigue test using a full-scale airplane structure is required by Federal Aviation Administration (FAA) regulation [10, 11]. The regulation directs that sufficient full-scale test evidence has to cover more than two times a prespecified operation interval to guarantee the operation safety. During this interval, optimal PM (A, B, C and D checks) arrangement policies to minimize the total expected cost are discussed in Sect. 10.1. In such policies, the airframe failure during a guaranteed period is minor and its cost is constant.
Airframe failures due to cracks are discovered at A, B, C and D checks by visual and non-destructive inspections [12]. At the start of operation, cracks cannot be discovered because their sizes are so small [13]; they grow with various kinds of mechanical stresses during operation, and can be discovered when their size is greater than a certain level [14, 15]. Airframe with cracks can operate when the number and the size of cracks are less than a prespecified level [16] because it adopts damage


tolerant design [17–19]. We should clarify the crack growth rate for PM models and study optimal PM policies based on such a model [20, 21]. In Sect. 10.2, we examine two optimal maintenance models with degradation of strength caused by crack growth, using a Markov chain.

10.2.1 Airframe Cracks (1)

We consider the following PM policies for airframe failures caused by damage due to cracks, based on the stochastic model [22]:

(1) Model 1
We make the following assumptions on PM of airframes:
(1) Airframe begins to operate at time 0 and suffers stresses caused by damage due to cracks. The length and depth of each crack become wider with the number of cracks as the stress accumulates. Airframe can operate when the total damage is less than Z2, and cannot operate when it is more than Z2, because a number of cracks of critical size might cause a catastrophic phenomenon such as mid-air disintegration. We set up a managerial level Z1 (0 < Z1 < Z2) and can undergo PM before the total damage has exceeded Z2.
(2) We assume the following states of airframe:
State 0: When the total damage is less than Z1, airframe can operate normally.
State 1: When the total damage is more than Z1 and less than Z2, airframe can operate and PM is undergone.
State 2: When the total damage is more than Z2, airframe cannot operate until CM.
The airframe states defined above form a Markov chain where State 2 is an absorbing state [23], and the transition diagram between states is shown in Fig. 10.4. Airframe starts its operation at State 0 and finishes its operation at State 2.
(3) Airframe starts its operation at time 0 and is inspected at times kT (k = 1, 2, . . .) during operation for given T > 0. The damage Wk from (k − 1)T to kT can be evaluated at each inspection and has an identical distribution G(x) ≡ Pr{Wk ≤ x}.
(4) The respective damage levels in States 0 and 1 become 0 and Z1 after inspection for the simplicity of the model.
(5) Inspection cost is c1, PM cost is c2, and CM cost is c3, where c3 > c2 > c1.
One-step transition probabilities Qij (i = 0, 1, 2; j = 0, 1, 2) from State i to State j are


Fig. 10.4 Transition diagram among states of Model 1

Qij =
| G(Z1)   G(Z2) − G(Z1)   Ḡ(Z2)      |
| 0       G(Z2 − Z1)      Ḡ(Z2 − Z1) |    (10.20)
| 0       0               1          |

The expected numbers Ii (i = 0, 1) of inspections from State i to State 2 satisfy

I0 = Q00(1 + I0) + Q01(1 + I1) + Q02,
I1 = Q11(1 + I1) + Q12.

The expected numbers Mi (i = 0, 1) of PMs from State i to State 2 satisfy

M0 = Q00M0 + Q01(1 + M1),
M1 = Q11(1 + M1).

Solving the above equations for I0 and M0, respectively,

I0 = (1 + Q01 − Q11) / [(1 − Q00)(1 − Q11)] = [Ḡ(Z2 − Z1) + G(Z2) − G(Z1)] / [Ḡ(Z1)Ḡ(Z2 − Z1)],
M0 = Q01 / [(1 − Q00)(1 − Q11)] = [G(Z2) − G(Z1)] / [Ḡ(Z1)Ḡ(Z2 − Z1)],    (10.21)

where Ḡ(x) ≡ 1 − G(x). The respective total expected cost and expected cost rate from the start of operation to State 2 are

C1(Z1) = c1I0 + c2M0 + c3,    (10.22)
C̃1(Z1) = C1(Z1)/(I0T) = (1/T) [ c1 + (c2M0 + c3)/I0 ] .    (10.23)

In particular, when G(x) = 1 − e^{−ωx}, from (10.21),

I0 = e^{ω(Z2−Z1)} + e^{ωZ1} − 1,  M0 = e^{ω(Z2−Z1)} − 1.    (10.24)

Thus, from (10.22) and (10.23), respectively,


C1(Z1) = (c1 + c2)[e^{ω(Z2−Z1)} − 1] + c1e^{ωZ1} + c3,    (10.25)
C̃1(Z1) = (1/T) { c1 + [c2e^{−ωZ1} + (c3 − c2)e^{−ωZ2}] / [e^{−ωZ1} + e^{−ωZ2}(e^{ωZ1} − 1)] } .    (10.26)

We find optimal Z1∗ to minimize C1(Z1). Differentiating C1(Z1) with respect to Z1 and setting it equal to zero,

Z1∗ = (1/2) [ Z2 + (1/ω) ln((c1 + c2)/c1) ] .    (10.27)

If Z1∗ ≥ Z2, then we should not undergo PM.
Next, we find optimal Z̃1 to minimize C̃1(Z1). Differentiating C̃1(Z1) with respect to Z1 and setting it equal to zero (Problem 5),

Z̃1 = (1/ω) ln { −c2e^{ωZ2}/(c3 − c2) + √[ (c2e^{ωZ2}/(c3 − c2))² + c3e^{ωZ2}/(c3 − c2) ] } .    (10.28)
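The closed forms (10.24) and the threshold (10.27) can be checked numerically by solving the first-step equations directly. A small sketch with illustrative values of ω, Z1, Z2 (not the book's example data):

```python
import math

# Check of (10.21) and (10.24) for exponential damage G(x) = 1 - exp(-w*x).
w, Z1, Z2 = 0.5, 3.0, 8.0
G = lambda x: 1.0 - math.exp(-w * x)

Q00, Q01, Q11 = G(Z1), G(Z2) - G(Z1), G(Z2 - Z1)

# First-step equations: I1 = Q11(1 + I1) + Q12,  I0 = Q00(1 + I0) + Q01(1 + I1) + Q02
I1 = 1.0 / (1.0 - Q11)
I0 = (1.0 + Q01 * I1) / (1.0 - Q00)
M1 = Q11 / (1.0 - Q11)                       # from M1 = Q11(1 + M1)
M0 = Q01 * (1.0 + M1) / (1.0 - Q00)          # from M0 = Q00*M0 + Q01(1 + M1)

# Closed forms (10.24)
I0_closed = math.exp(w * (Z2 - Z1)) + math.exp(w * Z1) - 1.0
M0_closed = math.exp(w * (Z2 - Z1)) - 1.0

# Managerial level (10.27) minimizing the total expected cost C1(Z1)
c1, c2 = 1.0, 10.0
Z1_star = 0.5 * (Z2 + math.log((c1 + c2) / c1) / w)
```

The back-substituted values agree with (10.24) to machine precision, which is a quick sanity check before tabulating costs as in Table 10.6.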

(2) Model 2
We make the following assumptions, where assumptions (3)–(5) are the same as in Model 1:
(1') Airframe begins to operate at time 0 and suffers stresses caused by damage due to cracks. The number of cracks increases, and the length and depth of each crack become wider with stresses. Airframe can operate when the total damage is less than Z3, and cannot operate when it is more than Z3. At the beginning of operation, airframe is just delivered from the factory and cracks cannot be detected at inspection because all cracks are of micro-size. Cracks grow with stresses and can be detected at the total damage Z1 (< Z3). We set up a managerial level Z2 (Z1 < Z2 < Z3) and can undergo PM before the total damage has exceeded Z3.
(2') We assume the following states of airframe:
State 0: When the total damage is less than Z1, airframe can operate normally and cracks cannot be detected at inspection.
State 1: When the total damage is more than Z1 and less than Z2, airframe can operate normally and cracks can be detected at inspection.
State 2: When the total damage is more than Z2 and less than Z3, airframe can operate and PM is undergone.
State 3: When the total damage is more than Z3, airframe cannot operate until CM.
The airframe states defined above form a Markov chain where State 3 is an absorbing state. The transition diagram between states is shown in Fig. 10.5. Airframe starts


Fig. 10.5 Transition diagram among states of Model 2

its operation at State 0 and finishes its operation at State 3. The respective damage levels in States 0, 1 and 2 become 0, Z1 and Z2 after inspection for the simplicity of the model.
One-step transition probabilities Qij (i = 0, 1, 2, 3; j = 0, 1, 2, 3) from State i to State j are

Qij =
| G(Z1)   G(Z2) − G(Z1)   G(Z3) − G(Z2)            Ḡ(Z3)      |
| 0       G(Z2 − Z1)      G(Z3 − Z1) − G(Z2 − Z1)  Ḡ(Z3 − Z1) |    (10.29)
| 0       0               G(Z3 − Z2)               Ḡ(Z3 − Z2) |
| 0       0               0                        1          |

The expected numbers Ii (i = 0, 1, 2) of inspections from State i to State 3 satisfy

I0 = Q00(1 + I0) + Q01(1 + I1) + Q02(1 + I2) + Q03,
I1 = Q11(1 + I1) + Q12(1 + I2) + Q13,
I2 = Q22(1 + I2) + Q23.

The expected numbers Mi (i = 0, 1, 2) of PMs from State i to State 3 satisfy

M0 = Q00M0 + Q01M1 + Q02(1 + M2),
M1 = Q11M1 + Q12(1 + M2),
M2 = Q22(1 + M2).

Solving the above equations for I0 and M0, respectively,

I0 = [(1 − Q11)(1 − Q22) + Q01(1 + Q12 − Q22) + Q02(1 − Q11)] / [(1 − Q00)(1 − Q11)(1 − Q22)],
M0 = [Q01Q12 + Q02(1 − Q11)] / [(1 − Q00)(1 − Q11)(1 − Q22)].    (10.30)

The total expected cost and the expected cost rate are, respectively,


C2(Z2) = c1I0 + c2M0 + c3,    (10.31)
C̃2(Z2) = C2(Z2)/(I0T) = (1/T) [ c1 + (c2M0 + c3)/I0 ] .    (10.32)

Using (10.29), I0 and M0 are, respectively,

I0 = { Ḡ(Z2 − Z1)Ḡ(Z3 − Z2) + [G(Z3) − G(Z1)]Ḡ(Z2 − Z1) + [G(Z2) − G(Z1)][Ḡ(Z3 − Z2) − Ḡ(Z3 − Z1)] } / [ Ḡ(Z1)Ḡ(Z2 − Z1)Ḡ(Z3 − Z2) ],
M0 = { [G(Z3) − G(Z1)]Ḡ(Z2 − Z1) − [G(Z2) − G(Z1)]Ḡ(Z3 − Z1) } / [ Ḡ(Z1)Ḡ(Z2 − Z1)Ḡ(Z3 − Z2) ].    (10.33)

Substituting I0 and M0 into (10.31) and (10.32), respectively, we can get C2(Z2) and C̃2(Z2).
In particular, when G(x) = 1 − e^{−ωx}, I0 and M0 are

I0 = e^{ω(Z3−Z2)} + e^{ω(Z2−Z1)} + e^{ωZ1} − 2,  M0 = e^{ω(Z3−Z2)} − 1.    (10.34)

Thus, from (10.31) and (10.32), respectively,

C2(Z2) = (c1 + c2)[e^{ω(Z3−Z2)} − 1] + c1[e^{ω(Z2−Z1)} + e^{ωZ1} − 1] + c3,    (10.35)
C̃2(Z2) = (1/T) { c1 + [(c3 − c2)e^{ωZ2} + c2e^{ωZ3}] / [e^{ωZ2}(e^{ω(Z2−Z1)} + e^{ωZ1} − 2) + e^{ωZ3}] } .    (10.36)

We find optimal Z2∗ to minimize C2(Z2). Differentiating C2(Z2) with respect to Z2 and setting it equal to zero,

Z2∗ = (1/2) [ Z1 + Z3 + (1/ω) ln((c1 + c2)/c1) ] .    (10.37)

We find optimal Z̃2 to minimize C̃2(Z2). Differentiating C̃2(Z2) with respect to Z2 and setting it equal to zero (Problem 5),

Z̃2 = (1/ω) ln { −c2e^{ωZ3}/(c3 − c2) + √[ (c2e^{ωZ3}/(c3 − c2))² − [c2(e^{ωZ1} − 1) − c3]e^{ω(Z1+Z3)}/(c3 − c2) ] } .    (10.38)
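The same first-step argument checks out for the four-state chain: back-substituting I2, I1, I0 and M2, M1, M0 from (10.29) reproduces the closed forms (10.34) when G is exponential. A sketch with illustrative thresholds:

```python
import math

# Check of (10.30)/(10.33) against the closed forms (10.34) for exponential damage.
w, Z1, Z2, Z3 = 0.4, 2.0, 5.0, 9.0
G = lambda x: 1.0 - math.exp(-w * x)

Q00, Q22 = G(Z1), G(Z3 - Z2)
Q01, Q02 = G(Z2) - G(Z1), G(Z3) - G(Z2)
Q11, Q12 = G(Z2 - Z1), G(Z3 - Z1) - G(Z2 - Z1)

# Back-substitution of the first-step equations (States 2, then 1, then 0)
I2 = 1.0 / (1.0 - Q22)
I1 = (1.0 + Q12 * I2) / (1.0 - Q11)
I0 = (1.0 + Q01 * I1 + Q02 * I2) / (1.0 - Q00)

M2 = Q22 / (1.0 - Q22)
M1 = Q12 * (1.0 + M2) / (1.0 - Q11)
M0 = (Q01 * M1 + Q02 * (1.0 + M2)) / (1.0 - Q00)

# Closed forms (10.34)
I0_closed = math.exp(w*(Z3 - Z2)) + math.exp(w*(Z2 - Z1)) + math.exp(w*Z1) - 2.0
M0_closed = math.exp(w*(Z3 - Z2)) - 1.0
```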

Example 10.4 Table 10.6 presents optimal Z1∗ and the resulting cost C1(Z1∗) − c3 of Model 1 for c1, c2, ω and Z2 when G(x) = 1 − e^{−ωx}. This indicates that Z1∗ decreases with c1, 1/c2, ω and 1/Z2, and C1(Z1∗) − c3 increases with c1, c2, ω and Z2.


Table 10.6 Optimal Z1∗ and resulting cost C1(Z1∗) − c3 of Model 1, and optimal Z2∗ and resulting cost C2(Z2∗) − c3 of Model 2 when G(x) = 1 − e^{−ωx}

              Model 1                      Model 2
c1  c2  ω   Z2   Z1∗    C1(Z1∗) − c3    Z1  Z3   Z2∗    C2(Z2∗) − c3
1   10  1   10    6.20  9.73 × 10²      1   10    6.70  5.88 × 10²
2   10  1   10    5.90  1.44 × 10³      1   10    6.40  8.73 × 10²
3   10  1   10    5.73  1.84 × 10³      1   10    6.23  1.12 × 10³
1   20  1   10    6.52  1.34 × 10³      1   10    7.02  8.06 × 10²
1   30  1   10    6.72  1.66 × 10³      1   10    7.22  9.73 × 10²
1   10  2   10    5.60  1.46 × 10⁵      1   10    6.10  5.37 × 10⁴
1   10  3   10    5.40  2.17 × 10⁷      1   10    5.90  4.84 × 10⁶
1   10  1   20   11.20  1.46 × 10⁵      2   10    7.20  3.58 × 10²
1   10  1   30   16.20  2.17 × 10⁷      3   10    7.70  2.28 × 10²
1   10  1   —    —      —               1   20   11.70  8.86 × 10⁴
1   10  1   —    —      —               1   30   16.70  1.32 × 10⁷

Table 10.6 also presents optimal Z2∗ and resulting cost C2(Z2∗) − c3 of Model 2 for c1, c2, ω, Z1 and Z3 when G(x) = 1 − e^{−ωx}. This indicates that Z2∗ decreases with c1, 1/c2, ω and 1/Z3, and C2(Z2∗) − c3 increases with c1, c2, ω and Z3. This tendency is the same as that of Model 1; moreover, Z2∗ increases and C2(Z2∗) − c3 decreases with Z1.

(3) Model 3
We extend the model to a Markov chain with n + 1 states and make the following assumptions, where assumptions (3) and (5) are the same as in Model 1:
(1'') Airframe begins to operate at time 0 and suffers stresses caused by damage due to cracks. The number of cracks increases and the length and depth of each crack become wider with stresses. Airframe can operate when the total damage is less than Zn, and cannot operate when it is more than Zn. Cracks grow with stresses and can be detected at Z1 (< Zn). There are n − 2 levels from Z1 to Zn such that Z1 < Z2 < Z3 < · · · < Zn−1 < Zn, where Zn−1 is a managerial stress level and PM is undergone before the total damage exceeds Zn.
(2'') We denote the following states of airframe:
State 0: The total damage is less than Z1, and airframe can operate normally and cracks cannot be detected at inspection.
State i (i = 1, 2, . . . , n − 2): The total damage is more than Zi and less than Zi+1. Airframe can operate normally and cracks can be detected at inspection.


State n − 1: The total damage is more than Zn−1 and less than Zn. Airframe can operate and PM is undergone.
State n: The total damage is more than Zn and airframe cannot operate until CM.
The airframe states defined above form a Markov chain where State n is an absorbing state. Airframe starts its operation at State 0 and finishes its operation at State n.
(4') The respective damage levels in States 0, 1, . . . , n − 2 and n − 1 become 0, Z1, Z2, . . . , Zn−2 and Zn−1 after inspection for the simplicity of the model.
One-step transition probabilities Qij (i = 0, 1, . . . , n; j = 0, 1, . . . , n) from State i to State j are

Qij ≡ G(Zj+1 − Zi) − G(Zj − Zi),    (10.39)

where Z0 ≡ 0, Zn+1 ≡ ∞, G(0) ≡ 0 and G(∞) ≡ 1. The expected numbers Ii (i = 0, 1, . . . , n − 1) of inspections from State i to State n satisfy

Ii = Σ_{j=i}^{n−1} Qij(1 + Ij) + Qin,

and the expected numbers Mi (i = 0, 1, . . . , n − 1) of PMs from State i to State n satisfy

Mi = Σ_{j=i}^{n−1} QijMj + Qi,n−1.

By solving the above equations for Ii and Mi,

Ii = [ Σ_{j=i+1}^{n−1} QijIj + 1 ] / (1 − Qii),  Mi = [ Σ_{j=i+1}^{n−1} QijMj + Qi,n−1 ] / (1 − Qii).    (10.40)

The total expected cost from the start of operation to State n and the expected cost rate are, respectively,

Cn−1(Zn−1) = c1I0 + c2M0 + c3,    (10.41)
C̃n−1(Zn−1) = (1/T) [ c1 + (c2M0 + c3)/I0 ] .    (10.42)

In particular, when G(x) = 1 − e^{−ωx},

Qii = 1 − e^{−ω(Zi+1−Zi)},
Qij = e^{−ω(Zj−Zi)} − e^{−ω(Zj+1−Zi)}  (j = i + 1, . . . , n − 1),
Qin = e^{−ω(Zn−Zi)}.


Thus, from (10.40),

Ii e^{−ωZi+1} = e^{−ωZi} + Σ_{j=i+1}^{n−1} (e^{−ωZj} − e^{−ωZj+1}) Ij  (i = 0, 1, . . . , n − 1).

By obtaining Ii (i = 0, 1, . . . , n − 1) successively (Problem 6),

Ii = Σ_{j=i}^{n−1} e^{ω(Zj+1−Zj)} − (n − 1 − i).    (10.43)

Similarly,

Mi e^{−ωZi+1} = Σ_{j=i+1}^{n−1} (e^{−ωZj} − e^{−ωZj+1}) Mj + e^{−ωZn−1} − e^{−ωZn}.

By obtaining Mi (i = 0, 1, . . . , n − 1) successively (Problem 6),

Mi = e^{ω(Zn−Zn−1)} − 1.    (10.44)
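The recursions (10.40) and the closed forms (10.43)–(10.44) can be cross-checked by back-substitution. A sketch for exponential damage with illustrative levels (n = 5):

```python
import math

# Back-substitution of (10.40) versus the closed forms (10.43) and (10.44)
# for exponential damage G(x) = 1 - exp(-w*x).  Illustrative levels Z_1 < ... < Z_n.
w = 0.3
Z = [0.0, 2.0, 4.5, 6.0, 8.0, 11.0]          # Z_0, Z_1, ..., Z_n with n = 5
n = len(Z) - 1
G = lambda x: 1.0 - math.exp(-w * x)

def Qij(i, j):                               # (10.39) for 0 <= i <= j <= n - 1
    return G(Z[j + 1] - Z[i]) - G(Z[j] - Z[i])

I = [0.0] * n
M = [0.0] * n
for i in range(n - 1, -1, -1):               # solve (10.40) from State n-1 down to 0
    I[i] = (sum(Qij(i, j) * I[j] for j in range(i + 1, n)) + 1.0) / (1.0 - Qij(i, i))
    M[i] = (sum(Qij(i, j) * M[j] for j in range(i + 1, n)) + Qij(i, n - 1)) / (1.0 - Qij(i, i))

I0_closed = sum(math.exp(w * (Z[j + 1] - Z[j])) for j in range(n)) - (n - 1)
M0_closed = math.exp(w * (Z[n] - Z[n - 1])) - 1.0
```

Note that every Mi collapses to the same value e^{ω(Zn−Zn−1)} − 1, as (10.44) states: only the last gap Zn − Zn−1 governs the expected number of PMs.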

Therefore, from (10.41) and (10.42), respectively,

Cn−1(Zn−1) = c1 [ Σ_{j=0}^{n−1} e^{ω(Zj+1−Zj)} − (n − 1) ] + c2e^{ω(Zn−Zn−1)} + c3 − c2,    (10.45)
C̃n−1(Zn−1) = (1/T) { c1 + [ c2e^{ω(Zn−Zn−1)} + c3 − c2 ] / [ Σ_{j=0}^{n−1} e^{ω(Zj+1−Zj)} − (n − 1) ] } .    (10.46)

We find optimal Zn−1∗ and Z̃n−1 to minimize Cn−1(Zn−1) and C̃n−1(Zn−1). Differentiating Cn−1(Zn−1) with respect to Zn−1 and setting it equal to zero,

Zn−1∗ = (1/2) [ Zn−2 + Zn + (1/ω) ln((c1 + c2)/c1) ] .    (10.47)

Next, differentiating C̃n−1(Zn−1) with respect to Zn−1 and setting it equal to zero,

[ Σ_{j=0}^{n−1} e^{ω(Zj+1−Zj)} − (n − 1) ] / [ 1 − e^{−ω(Zn−2Zn−1+Zn−2)} ] − e^{ω(Zn−Zn−1)} = (c3 − c2)/c2 .    (10.48)

Optimal Z̃n−1 can be obtained by solving (10.48) numerically.
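With the other levels fixed, (10.48) is easily solved by bisection; for the data below the left-hand side increases in Zn−1 up to the point where the denominator 1 − e^{−ω(Zn−2Zn−1+Zn−2)} vanishes, so the search is restricted to that range. All values are illustrative (n = 4):

```python
import math

# Bisection solution of (10.48) for Z_{n-1}, with the other levels fixed.
# Illustrative data; G(x) = 1 - exp(-w*x).
w, c2, c3 = 0.5, 10.0, 100.0
Z = [0.0, 2.0, 4.0, None, 10.0]              # Z_0, ..., Z_4; Z_3 (= Z_{n-1}) unknown
n = 4

def lhs(z):                                  # left-hand side of (10.48)
    Zs = Z[:]; Zs[n - 1] = z
    s = sum(math.exp(w * (Zs[j + 1] - Zs[j])) for j in range(n)) - (n - 1)
    return s / (1.0 - math.exp(-w * (Zs[n] - 2.0 * z + Zs[n - 2]))) \
           - math.exp(w * (Zs[n] - z))

target = (c3 - c2) / c2
lo = Z[n - 2] + 1e-9                         # bisect on (Z_{n-2}, (Z_n + Z_{n-2})/2),
hi = 0.5 * (Z[n] + Z[n - 2]) - 1e-9          # where the denominator stays positive
for _ in range(200):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if lhs(mid) < target else (lo, mid)
z_opt = 0.5 * (lo + hi)
```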


Table 10.7 Optimal Z̃1 and resulting cost rate C̃1(Z̃1) of Model 1, and optimal Z̃2 and resulting cost rate C̃2(Z̃2) of Model 2 when G(x) = 1 − e^{−ωx}, c1 = 1, ω = 1 and T = 1

          Model 1                Model 2
c2   c3   Z2   Z̃1    C̃1(Z̃1)    Z1   Z3   Z̃2    C̃2(Z̃2)
10   100  10   1.61  11.01      1    10   2.42  11.02
20   100  10   0.92  21.01      1    10   1.50  21.01
30   100  10   0.51  31.00      1    10   0.79  31.00
10   200  10   2.30  11.05      1    10   3.20  11.10
10   300  10   2.70  11.10      1    10   3.63  11.24
10   100  20   1.61  11.00      1.5  10   2.68  11.02
10   100  30   1.61  11.00      2    10   2.59  11.01
10   100  —    —     —          1    20   2.42  11.00
10   100  —    —     —          1    30   2.42  11.00

Example 10.5 Table 10.7 presents optimal Z̃1 and the resulting cost rate C̃1(Z̃1) of Model 1 for c2, c3 and Z2 = 10, 20, 30 when c1 = 1, ω = 1, T = 1 and G(x) = 1 − e^{−ωx}. This indicates that Z̃1 decreases with c2 and 1/c3, and C̃1(Z̃1) increases with c2 and c3. It is of interest that Z2 hardly affects Z̃1 and C̃1(Z̃1).
Table 10.7 also presents optimal Z̃2 and resulting cost rate C̃2(Z̃2) of Model 2 for c2, c3, Z1 and Z3 when c1 = 1, ω = 1, T = 1 and G(x) = 1 − e^{−ωx}. This indicates that Z̃2 decreases with c2 and 1/c3, and C̃2(Z̃2) increases with c2 and c3, which is the same tendency as Model 1. It is of interest that Z3 hardly affects Z̃2 and C̃2(Z̃2).

10.2.2 Airframe Cracks (2)

We propose another PM policy for a commercial airframe by considering transition probabilities among states. First, we introduce models with 3 and 4 states, and then extend them to a general model [24]. The expected costs are derived, and optimal policies to minimize them are discussed. Finally, numerical examples are given. We make the following assumptions:
(1) Airframe begins to operate at time 0 and suffers stresses caused by damage due to cracks and erosion. The number of cracks increases, and the length and depth of each crack become wider with stresses. Airframe is inspected periodically, and cracks can be detected and their scales can be monitored. Crack scales are proportional to the total damage, and n + 1 states of damage levels increase from Z0 ≡ 0 to Zn, where 0 < Z1 < Z2 < · · · < Zn−1 < Zn. At the beginning of operation, airframe is just delivered from the factory and cracks cannot be detected. Cracks due to stresses grow and can be detected when the damage level is more than Z1. Airframe cannot operate when the total damage has exceeded Zn, and Zn−1 is


a managerial damage level and PM is undergone before the total damage has exceeded Zn.
(2) We define the following states of airframe:
State 0: When the total damage is less than Z1, airframe can operate normally and no cracks can be detected at inspection.
State i (i = 1, 2, . . . , n − 2): When the total damage is more than Zi and less than Zi+1, airframe can operate normally and cracks can be detected at inspection.
State n − 1: When the total damage is more than Zn−1 and less than Zn, airframe can operate and PM is undergone.
State n: When the total damage is more than Zn, airframe cannot operate until CM.
The airframe states defined above form a transition diagram between states where State n is an absorbing state. Airframe starts its operation at State 0 and stops its operation at State n.
(3) Airframe starts its operation at time 0 and is inspected at times kT (k = 1, 2, . . .) for given T > 0. Airframe suffers damage Wk during the interval ((k − 1)T, kT], which has an identical distribution G(x) ≡ Pr{Wk ≤ x} with finite mean 1/ω, and each damage is additive. The total damage Σ_{i=1}^{j} Wi has a distribution Pr{ Σ_{i=1}^{j} Wi ≤ x } = G^{(j)}(x), where G^{(j)}(x) (j = 1, 2, . . .) is the j-fold Stieltjes convolution of G(x) and G^{(0)}(x) ≡ 1 for x ≥ 0.
(4) The respective damage levels in States 0, 1, . . . , n − 2 and n − 1 become 0, Z1, Z2, . . . , Zn−2 and Zn−1 by inspection.
(5) Inspection cost is c1, PM cost is c2 and CM cost is c3, where c3 > c2 > c1.

(1) Model with 3 States
Damage levels in States 0, 1 and 2 are 0, Z1 and Z2 in Fig. 10.4. When the total damage has exceeded Z1, airframe transits from State 0, and its probability is Ḡ(Z1) ≡ 1 − G(Z1). The probability that airframe has transited from State 0 by the jth inspection is 1 − G^{(j)}(Z1). The expected number of inspections from State 0 to State i (i = 1, 2) is

Σ_{j=0}^{∞} (j + 1) [ G^{(j)}(Z1) − G^{(j+1)}(Z1) ] = Σ_{j=0}^{∞} G^{(j)}(Z1) ≡ MG(Z1) + 1,    (10.49)

where MG(x) ≡ Σ_{j=1}^{∞} G^{(j)}(x) is the renewal function of G(x). The expected number I1 of inspections from State 1 to State 2 satisfies

I1 = G(Z2 − Z1)(1 + I1) + Ḡ(Z2 − Z1).

Solving the above equation for I1,

I1 = 1/Ḡ(Z2 − Z1) .    (10.50)

Thus, the total expected number of inspections is

MG(Z1) + 1 + 1/Ḡ(Z2 − Z1) .    (10.51)

The expected number M1 of PMs from State 1 to State 2 satisfies

M1 = G(Z2 − Z1)(M1 + 1).

Solving the above equation for M1,

M1 = G(Z2 − Z1)/Ḡ(Z2 − Z1) .    (10.52)

Thus, the total expected number of PMs is

M1 + 1 = 1/Ḡ(Z2 − Z1) .    (10.53)
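The identity (10.49) can be checked by simulation: for exponential damage, the renewal function is MG(x) = ωx, so the expected number of inspections until the total damage first exceeds Z1 is ωZ1 + 1. A Monte-Carlo sketch with illustrative values:

```python
import random

# Monte-Carlo check of (10.49): E[number of inspections until total damage > Z1]
# equals MG(Z1) + 1, and MG(x) = w*x for exponential G(x) = 1 - exp(-w*x).

def inspections_until_exceed(w, Z1, rng):
    total, count = 0.0, 0
    while total <= Z1:
        total += rng.expovariate(w)          # damage W_k suffered in one period
        count += 1
    return count

rng = random.Random(12345)
w, Z1, trials = 1.0, 5.0, 100_000
est = sum(inspections_until_exceed(w, Z1, rng) for _ in range(trials)) / trials
exact = w * Z1 + 1.0                         # MG(Z1) + 1
```

For general G the renewal function rarely has a closed form, so in practice MG(x) would be computed from the convolution series or estimated by exactly this kind of simulation.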

(2) Model with 4 States
Damage levels in States 0, 1, 2 and 3 are 0, Z1, Z2 and Z3 in Fig. 10.5, respectively. When the total damage has exceeded Z1 and Z2 − Z1, airframe transits from States 0 and 1, respectively. From (10.49), their respective expected numbers of inspections are MG(Z1) + 1 and MG(Z2 − Z1) + 1, and summing them,

MG(Z1) + MG(Z2 − Z1) + 2 .    (10.54)

The expected number of inspections from State 2 to State 3 is, from (10.50), 1/Ḡ(Z3 − Z2). Thus, the total expected number is

MG(Z1) + MG(Z2 − Z1) + 2 + 1/Ḡ(Z3 − Z2) .    (10.55)

The expected number M2 of PMs from State 2 to State 3 is, from (10.53),

M2 + 1 = 1/Ḡ(Z3 − Z2) .    (10.56)


(3) Model with n + 1 States
Extending the number of states to n + 1 (n = 1, 2, . . .), the total expected number of inspections is, from (10.51) and (10.55),

Σ_{i=1}^{n−1} [ MG(Zi − Zi−1) + 1 ] + 1/Ḡ(Zn − Zn−1) .    (10.57)

The total expected number M3 of PMs from State n − 1 to State n is, from (10.53) and (10.56),

M3 + 1 = 1/Ḡ(Zn − Zn−1) .    (10.58)

The total expected cost C(Zn−1) and the expected cost rate C̃(Zn−1) are, respectively,

C(Zn−1) = c1 [ Σ_{j=1}^{n−1} MG(Zj − Zj−1) + n − 1 + 1/Ḡ(Zn − Zn−1) ] + c2/Ḡ(Zn − Zn−1) + c3 ,    (10.59)

C̃(Zn−1) = C(Zn−1) / [ Σ_{j=1}^{n−1} MG(Zj − Zj−1) + n − 1 + 1/Ḡ(Zn − Zn−1) ]
= c1 + [ c2/Ḡ(Zn − Zn−1) + c3 ] / [ Σ_{j=1}^{n−1} MG(Zj − Zj−1) + n − 1 + 1/Ḡ(Zn − Zn−1) ] .    (10.60)

When G(x) = 1 − Σ_{j=0}^{k−1} [(ωx)^j/j!] e^{−ωx} (k = 1, 2, . . .), (10.59) is

C(Zn−1) = c1(ωZn−1 + n − 1) + (c1 + c2)e^{ω(Zn−Zn−1)} + c3 .    (10.61)

Differentiating C(Zn−1) with respect to Zn−1 and setting it equal to zero,

Zn−1∗ = Zn + (1/ω) ln(1 + c2/c1) .    (10.62)

In this case, (10.60) is

C̃(Zn−1) = c1 + [ c2e^{ω(Zn−Zn−1)} + c3 ] / [ ωZn−1 + n − 1 + e^{ω(Zn−Zn−1)} ] .    (10.63)

Differentiating C̃(Zn−1) with respect to Zn−1 and setting it equal to zero,


Table 10.8 Optimal Z̃1 and resulting cost rate C̃(Z̃1) when n = 2

ω     c1    c2   c3   Z2   Z̃1     C̃(Z̃1)
0.01  0.5   1    10   50   35.44  0.844
0.1   0.5   1    10   50   42.55  0.564
1.0   0.5   1    10   50    9.00  0.500
0.01  0.1   1    10   50   35.44  0.444
0.01  0.05  1    10   50   35.44  0.394
0.01  0.5   2    10   50   22.02  1.095
0.01  0.5   3    10   50    9.97  1.276
0.01  0.5   1    20   50   42.60  0.871
0.01  0.5   1    30   50   45.04  0.880
0.01  0.5   1    10   60   44.40  0.827
0.01  0.5   1    10   70   53.35  0.812

ω Z n−1 + n c3 = , −ω(Z −Z ) n n−1 1−e c2

(10.64)

whose left-hand side L 1 (Z n−1 ) increases strictly from L 1 (Z n−2 ) to L 1 (Z n ) = ∞ . Therefore, we have the following optimal policy : (i) If L 1 (Z n−2 ) < c3 /c2 , then there exists optimal  Z n−1 (Z n−2 <  Z n−1 < Z n ) which satisfies (10.64), and the resulting cost rate is  C( Z n−1 ) = c1 +

c2 1−

Z n−1 ) eω(Z n − 

.

(10.65)

(ii) If L 1 (Z n−2 ) ≥ c3 /c2 , then  Z n−1 = Z n−2 .  Z 1 ) for ω, Example 10.6 Table 10.8 presents optimal  Z 1 and resulting cost rate C(  Z 1 and C( Z 1 ) increase with c1 , c2 , c3 and Z 2 when n = 2. This indicates that both   Z 1 ) increase with c2 and 1/Z 2 . It is of interest that  Z1 1/ω and  Z 1 decrease, and C(  hardly change and C( Z 1 ) decreases with c1 . 
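As a numerical sketch (my own, not from the book), the following Python solves (10.64) for Ẑ_{n-1} by bisection in the case n = 2 with exponential damage (k = 1), and checks that the cost rate (10.63) evaluated at the root agrees with the closed form (10.65); all parameter values are illustrative assumptions.

```python
import math

# Illustrative parameters (assumptions, not taken from Table 10.8)
omega, c1, c2, c3, Z2, n = 0.01, 0.5, 1.0, 10.0, 50.0, 2

def L1(z):
    # Left-hand side of (10.64) with Z_n = Z2
    return (omega * z + n) / (1.0 - math.exp(-omega * (Z2 - z)))

def cost_rate(z):
    # Expected cost rate (10.63) for n = 2 (the sum term reduces to omega*z)
    e = math.exp(omega * (Z2 - z))
    return c1 + (c2 * e + c3) / (omega * z + n - 1 + e)

# Bisection for L1(z) = c3/c2 on (0, Z2); L1 increases strictly in z
lo, hi = 0.0, Z2 - 1e-9
for _ in range(200):
    mid = (lo + hi) / 2
    if L1(mid) < c3 / c2:
        lo = mid
    else:
        hi = mid
z_hat = (lo + hi) / 2

# At the root of (10.64), (10.63) equals the closed form (10.65)
closed = c1 + c2 / (1.0 - math.exp(-omega * (Z2 - z_hat)))
print(round(z_hat, 2), round(cost_rate(z_hat), 6), round(closed, 6))
```

The agreement of the last two printed values is an algebraic identity of the stationarity condition, so it holds for any parameter choice.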

10.3 Commercial Airframe with Damage and Crack

Commercial aircraft have become a fundamental infrastructure. Airfare reduction is a prime issue for airline companies because cost competition among them has become severe, and these companies pursue efficiency in all of their activities, including maintenance. In FY2011, maintenance accounted for 12% of total airline operational cost, amounting to $46.9 billion [25], and its reduction is one of the preoccupations of airline corporations.


Aluminum alloys 2024 and 7075 are utilized as major materials of airframe structures because their tensile strength is almost the same as that of ferrous alloys while their specific weight is one third to one fourth of the ferrous one [18]. The consumed life of mechanical structures under cyclic stress is assessed by the S-N curve, and a ferrous alloy structure has an infinite lifetime when it is designed to hold stress below the endurance limit. Aluminum alloy, however, has no such distinct limit and eventually fails even under a slight stress condition. Thus, the lifetime of an aluminum alloy airframe is finite, and the accumulation of tiny stresses may cause serious damage over a long operation [19]. Therefore, maintenance of airframes is indispensable to operate them without serious trouble. Various kinds of PMs, whose time periods and maintenance contents differ, are assembled appropriately. We called such maintenance multi-echelon maintenance and studied optimal maintenance policies for a fossil-fired power plant with multi-echelon risks [26], and four-echelon PMs were adopted for commercial aviation [2]. An airframe has to be tolerant of failures during a designed operation period. To assure airframe tolerance, a fatigue test using a full-scale airplane structure is required by Federal Aviation Administration (FAA) regulation [11]. The regulation directs that sufficient full-scale test evidence has to cover more than two times a prespecified operation interval to guarantee operation safety. From the above viewpoints, optimal PM policies to minimize the total expected cost are discussed in Sects. 10.1 and 10.2, under the assumption that airframe failures during a guaranteed period are minor and their costs are constant. Airframe failures are discovered at visual inspections and non-destructive inspections [12].
At the start of operation, cracks of airframes cannot be discovered because their sizes are so small [13]; however, they grow with various kinds of mechanical stresses and can be discovered when their sizes are greater than a certain level [14]. An airframe with cracks can still operate because it adopts a damage-tolerant design [18, 19], and it may operate while the number and size of cracks are less than a prespecified level [16]. We have to clarify the growth rate of cracks for planning PM policies. Optimal PM policies for airframes with the deterministic crack growth model [20] are discussed. In Sect. 10.3.1, maintenance measures with two parameters of damage level K and crack number M are examined, and an extended model with multi-echelon K is proposed in Sect. 10.3.2.

10.3.1 Damage Level and Crack Number

The airframe is inspected at times NT (N = 1, 2, 3, ...) and fails when the total damage has exceeded a certain level K or the total number of cracks has exceeded a certain crack number M, whichever occurs first. We obtain the total expected cost rate and derive optimal N to minimize it [28].

The airframe begins to operate at time 0 and suffers stresses caused by damage due to cracks and erosion. The number of cracks increases, and the length and depth of each crack grow with stresses. The airframe inspection is undergone periodically, at which cracks can be detected and their scales monitored. At the beginning of operation, the airframe is just delivered from a factory and cracks cannot be detected because all cracks are micro-sized. Cracks grow with stresses and can be detected when the damage level is more than a certain size. The airframe can operate while the total damage is below a critical size, and cannot operate once it has exceeded that size, because some number of critical cracks might cause a catastrophic phenomenon such as mid-air disintegration. We make the following assumptions of PM policies:

(1) The airframe suffers damage due to shocks, and its amount is measured only at periodic times kT (k = 1, 2, ...) for a specified T > 0. Each amount W_k of damage between periodic times has an identical distribution G(x) ≡ Pr{W_k ≤ x} [27]. In addition, the airframe suffers some number of cracks, and their number Y_k is measured only at periodic times and has a probability function p_n ≡ Pr{Y_k = n} and P_n ≡ \sum_{i=0}^{n} p_i = Pr{Y_k ≤ n} (n = 0, 1, 2, ...), independently of damage W_k. The airframe fails when the total damage has exceeded a failure level K (0 < K < ∞) or the total number of cracks has exceeded a specified number M (M = 1, 2, ...), whichever occurs first.

(2) Replacement cost at time NT is c_N, and replacement costs at damage K and at number M are c_K (c_K > c_N) and c_M (c_M > c_N), respectively.
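The survival probability to time NT, i.e., G^{(N)}(K)P^{(N)}(M) in the model below, can be checked by simulation. A minimal sketch of my own, under the assumptions (not fixed by the text) that W_k is exponential with rate w and Y_k is Poisson with mean λ, so that G^{(N)}(K) = Pr{Poisson(wK) ≥ N}:

```python
import math, random

random.seed(1)
w, K, lam, M, N = 1.0, 8.0, 1.0, 20, 3

def survives_to_NT():
    # One sample path: does the airframe survive the first N periods?
    damage, cracks = 0.0, 0
    for _ in range(N):
        damage += random.expovariate(w)          # W_k ~ Exp(w)
        u, p, y = random.random(), math.exp(-lam), 0   # Y_k ~ Poisson(lam)
        c = p
        while u > c:
            y += 1
            p *= lam / y
            c += p
        cracks += y
    return damage <= K and cracks <= M

mc = sum(survives_to_NT() for _ in range(20000)) / 20000

# Exact: G^(N)(K) = Pr{Poisson(wK) >= N}, P^(N)(M) = Pr{Poisson(N*lam) <= M}
GN = 1.0 - sum(math.exp(-w*K) * (w*K)**j / math.factorial(j) for j in range(N))
PN = sum(math.exp(-N*lam) * (N*lam)**i / math.factorial(i) for i in range(M + 1))
print(round(mc, 3), round(GN * PN, 3))
```

The two printed numbers should agree to Monte Carlo accuracy (about ±0.01 here).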

(1) Model 1 with Constant M

Suppose that the airframe is replaced preventively at time NT (N = 1, 2, ...). The probability that the airframe is replaced at time NT is

$$\Pr\{W_1 + W_2 + \cdots + W_N \le K \text{ and } Y_1 + Y_2 + \cdots + Y_N \le M\} = G^{(N)}(K)P^{(N)}(M),$$

the probability that it is replaced when the total damage has exceeded damage K is

$$\sum_{k=0}^{N-1}\left[G^{(k)}(K) - G^{(k+1)}(K)\right]P^{(k)}(M),$$

where P^{(k)}(M) = Pr{Y_1 + Y_2 + ... + Y_k ≤ M} (k = 1, 2, ...), P^{(0)}(M) ≡ 1, and M_P(M) ≡ \sum_{k=1}^{\infty}P^{(k)}(M), and the probability that it is replaced at number M is

$$\sum_{k=0}^{N-1}\left[P^{(k)}(M) - P^{(k+1)}(M)\right]G^{(k+1)}(K).$$

Thus, the mean time to replacement is


$$\begin{aligned} &NT\,G^{(N)}(K)P^{(N)}(M) + \sum_{k=0}^{N-1}(k+1)T\left[G^{(k)}(K) - G^{(k+1)}(K)\right]P^{(k)}(M)\\ &\qquad + \sum_{k=0}^{N-1}(k+1)T\left[P^{(k)}(M) - P^{(k+1)}(M)\right]G^{(k+1)}(K) = T\sum_{k=0}^{N-1}G^{(k)}(K)P^{(k)}(M). \end{aligned} \tag{10.66}$$

Therefore, the expected cost rate is

$$T C(N; K, M) = \frac{\begin{aligned}&c_N + (c_K - c_N)\sum_{k=0}^{N-1}\left[G^{(k)}(K) - G^{(k+1)}(K)\right]P^{(k)}(M)\\ &\quad + (c_M - c_N)\sum_{k=0}^{N-1}\left[P^{(k)}(M) - P^{(k+1)}(M)\right]G^{(k+1)}(K)\end{aligned}}{\sum_{k=0}^{N-1}G^{(k)}(K)P^{(k)}(M)}. \tag{10.67}$$

We find optimal N^* for given K and M to minimize the expected cost rate T C(N; K, M) [27]. When c_M = c_K > c_N,

$$T C(N; K, M) = \frac{c_K - (c_K - c_N)G^{(N)}(K)P^{(N)}(M)}{\sum_{k=0}^{N-1}G^{(k)}(K)P^{(k)}(M)}. \tag{10.68}$$

When M = ∞, i.e., the airframe is replaced at time NT or at damage K,

$$T C(N; K) = \frac{c_K - (c_K - c_N)G^{(N)}(K)}{\sum_{k=0}^{N-1}G^{(k)}(K)}, \tag{10.69}$$

and when K = ∞, i.e., it is replaced at time NT or at number M,

$$T C(N; M) = \frac{c_M - (c_M - c_N)P^{(N)}(M)}{\sum_{k=0}^{N-1}P^{(k)}(M)}. \tag{10.70}$$
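The rate (10.67) is easy to evaluate numerically. A sketch of my own (exponential damage and Poisson crack counts are my assumed special case): when M is made very large, the crack constraint never binds and (10.67) must collapse to the damage-only rate (10.69), which gives a handy consistency check.

```python
import math

def G(k, wK):
    # G^(k)(K) = Pr{Poisson(wK) >= k} for exponential damage
    return 1.0 if k == 0 else \
        1.0 - sum(math.exp(-wK) * wK**j / math.factorial(j) for j in range(k))

def P(k, lam, M):
    # P^(k)(M) = Pr{Poisson(k*lam) <= M}, summed iteratively to avoid overflow
    m = k * lam
    term = math.exp(-m)
    s = term
    for i in range(1, M + 1):
        term *= m / i
        s += term
    return s

def rate_KM(N, wK, lam, M, cN, cK, cM):            # (10.67)
    num = cN
    num += (cK - cN) * sum((G(k, wK) - G(k+1, wK)) * P(k, lam, M) for k in range(N))
    num += (cM - cN) * sum((P(k, lam, M) - P(k+1, lam, M)) * G(k+1, wK) for k in range(N))
    return num / sum(G(k, wK) * P(k, lam, M) for k in range(N))

def rate_K(N, wK, cN, cK):                         # (10.69)
    return (cK - (cK - cN) * G(N, wK)) / sum(G(k, wK) for k in range(N))

a = rate_KM(5, 8.0, 1.0, 500, 1.0, 3.0, 3.0)   # M = 500: effectively M = infinity
b = rate_K(5, 8.0, 1.0, 3.0)
print(round(a, 6), round(b, 6))
```

The two printed rates coincide to floating-point accuracy.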

First, we find optimal N^* to minimize C(N; K) in (10.69) and C(N; M) in (10.70). Forming the inequality C(N + 1; K) − C(N; K) ≥ 0,

$$Q_K(N)\sum_{k=0}^{N-1}G^{(k)}(K) - \left[1 - G^{(N)}(K)\right] \ge \frac{c_N}{c_K - c_N}, \tag{10.71}$$

where

$$Q_K(N) \equiv \frac{G^{(N)}(K) - G^{(N+1)}(K)}{G^{(N)}(K)}.$$

Thus, if Q_K(N) increases strictly with N, then the left-hand side of (10.71) increases strictly with N to Q_K(∞)[1 + M_G(K)] − 1, where M_G(x) ≡ \sum_{k=1}^{\infty}G^{(k)}(x). Therefore, if Q_K(∞)[1 + M_G(K)] > c_K/(c_K − c_N), then there exists a finite and unique minimum N^* (1 ≤ N^* < ∞) which satisfies (10.71).

Similarly, forming the inequality C(N + 1; M) − C(N; M) ≥ 0,

$$Q_M(N)\sum_{k=0}^{N-1}P^{(k)}(M) - \left[1 - P^{(N)}(M)\right] \ge \frac{c_N}{c_M - c_N}, \tag{10.72}$$

where

$$Q_M(N) \equiv \frac{P^{(N)}(M) - P^{(N+1)}(M)}{P^{(N)}(M)}.$$
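Instead of solving (10.71), one can also minimize (10.69) by direct search. The short Python sketch below (exponential damage, the special case treated in the text; the search itself is my own) reproduces the first rows of Table 10.9:

```python
import math

def G(k, wK):
    # G^(k)(K) = Pr{Poisson(wK) >= k} when G(x) = 1 - exp(-w x)
    return 1.0 if k == 0 else \
        1.0 - sum(math.exp(-wK) * wK**j / math.factorial(j) for j in range(k))

def rate_K(N, wK, cN, cK):
    # Expected cost rate T C(N; K) of (10.69), in units of c_N
    return (cK - (cK - cN) * G(N, wK)) / sum(G(k, wK) for k in range(N))

def optimal_N(wK, cN, cK, Nmax=50):
    return min(range(1, Nmax), key=lambda N: rate_K(N, wK, cN, cK))

for wK, ratio in [(8.0, 3.0), (10.0, 3.0), (8.0, 5.0)]:
    N = optimal_N(wK, 1.0, ratio)
    print(wK, ratio, N, round(rate_K(N, wK, 1.0, ratio), 3))
# prints: 8.0 3.0 6 0.237 / 10.0 3.0 7 0.183 / 8.0 5.0 5 0.283 (as in Table 10.9)
```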

Thus, if Q_M(N) increases strictly with N and Q_M(∞)[1 + M_P(M)] > c_M/(c_M − c_N), then there exists a finite and unique minimum N^* (1 ≤ N^* < ∞) which satisfies (10.72).

In particular, when G(x) = 1 − exp(−wx),

$$Q_K(N) \equiv \frac{(wK)^N/N!}{\sum_{k=N}^{\infty}(wK)^k/k!}$$

increases strictly with N from exp(−wK) to 1 (Appendix A.3), and M_G(K) = wK. Thus, if wK > c_N/(c_K − c_N), then a finite N^* (1 ≤ N^* < ∞) exists.

Next, we find optimal N^* to minimize C(N; K, M) in (10.68). Forming the inequality C(N + 1; K, M) − C(N; K, M) ≥ 0,

$$Q_{KM}(N)\sum_{k=0}^{N-1}G^{(k)}(K)P^{(k)}(M) - \left[1 - G^{(N)}(K)P^{(N)}(M)\right] \ge \frac{c_N}{c_K - c_N}, \tag{10.73}$$

where

$$Q_{KM}(N) \equiv \frac{G^{(N)}(K)P^{(N)}(M) - G^{(N+1)}(K)P^{(N+1)}(M)}{G^{(N)}(K)P^{(N)}(M)}.$$

Thus, if Q_{KM}(N) increases strictly with N and

$$Q_{KM}(\infty)\left[1 + M_{GP}(K, M)\right] > \frac{c_K}{c_K - c_N},$$

then there exists a finite and unique minimum N^* (1 ≤ N^* < ∞) which satisfies (10.73), where M_{GP}(K, M) ≡ \sum_{k=1}^{\infty}G^{(k)}(K)P^{(k)}(M).

In particular, when G(x) = 1 − e^{−wx} and p_n = (λ^n/n!)e^{−λ} (n = 0, 1, 2, ...), i.e., P_n = \sum_{i=0}^{n}(λ^i/i!)e^{−λ}, using the property of the superposition of a Poisson distribution [23, p. 22],


$$P^{(N)}(M) = \sum_{i=0}^{M}\frac{(N\lambda)^i}{i!}e^{-N\lambda}.$$

Thus,

$$Q_{KM}(N) \equiv 1 - \frac{\sum_{k=N+1}^{\infty}\left[(wK)^k/k!\right]\sum_{i=0}^{M}\left\{\left[(N+1)\lambda\right]^i/i!\right\}e^{-\lambda}}{\sum_{k=N}^{\infty}\left[(wK)^k/k!\right]\sum_{i=0}^{M}\left[(N\lambda)^i/i!\right]},$$

which increases strictly with N to 1 (Appendix A.4). Thus, if

$$\sum_{k=0}^{\infty}G^{(k)}(K)P^{(k)}(M) \ge \frac{c_K}{c_K - c_N},$$

then there exists a finite and unique minimum N^* (1 ≤ N^* < ∞) which satisfies (10.73).

Finally, forming the inequality C(N + 1; K, M) − C(N; K, M) ≥ 0 in (10.67),

$$\begin{aligned} &(c_K - c_N)\left\{Q_K(N)\sum_{k=0}^{N-1}G^{(k)}(K)P^{(k)}(M) - \sum_{k=0}^{N-1}\left[G^{(k)}(K) - G^{(k+1)}(K)\right]P^{(k)}(M)\right\}\\ &\quad + (c_M - c_N)\left\{Q_M(N)\sum_{k=0}^{N-1}G^{(k)}(K)P^{(k)}(M) - \sum_{k=0}^{N-1}\left[P^{(k)}(M) - P^{(k+1)}(M)\right]G^{(k+1)}(K)\right\} \ge c_N. \end{aligned} \tag{10.74}$$

Thus, if both Q_K(N) and Q_M(N) increase with N and

$$\left\{(c_K - c_N)\left[Q_K(\infty) - 1\right] + (c_M - c_N)\left[Q_M(\infty) - 1\right]\right\}\sum_{k=0}^{\infty}G^{(k)}(K)P^{(k)}(M) + (c_K - c_M)\sum_{k=0}^{\infty}G^{(k+1)}(K)P^{(k)}(M) \ge c_N,$$

then there exists a finite and unique minimum N^* (1 ≤ N^* < ∞) which satisfies (10.74).


(2) Model 2 with Random M

Suppose that the crack number M is a random variable with a probability function f_m ≡ Pr{M = m} (m = 1, 2, ...) and F_n ≡ Pr{M ≤ n} = \sum_{m=1}^{n}f_m (n = 1, 2, ...), where \sum_{m=1}^{\infty}f_m ≡ 1. Then, the expected cost rate in (10.67) is, for N = 1, 2, ...,

$$T C(N; K, f) = \frac{\begin{aligned}&c_N + (c_K - c_N)\sum_{k=0}^{N-1}\left[G^{(k)}(K) - G^{(k+1)}(K)\right]\sum_{m=1}^{\infty}f_m P^{(k)}(m)\\ &\quad + (c_M - c_N)\sum_{k=0}^{N-1}G^{(k+1)}(K)\sum_{m=1}^{\infty}f_m\left[P^{(k)}(m) - P^{(k+1)}(m)\right]\end{aligned}}{\sum_{k=0}^{N-1}G^{(k)}(K)\sum_{m=1}^{\infty}f_m P^{(k)}(m)}. \tag{10.75}$$

When K = ∞,

$$T C(N; f) = \frac{c_N + (c_M - c_N)\sum_{k=0}^{N-1}\sum_{m=1}^{\infty}f_m\left[P^{(k)}(m) - P^{(k+1)}(m)\right]}{\sum_{k=0}^{N-1}\sum_{m=1}^{\infty}f_m P^{(k)}(m)}. \tag{10.76}$$

We find optimal N_f^* to minimize C(N; f) in (10.76). Forming the inequality C(N + 1; f) − C(N; f) ≥ 0,

$$Q_f(N)\sum_{k=0}^{N-1}\sum_{m=1}^{\infty}f_m P^{(k)}(m) - \sum_{k=0}^{N-1}\sum_{m=1}^{\infty}f_m\left[P^{(k)}(m) - P^{(k+1)}(m)\right] \ge \frac{c_N}{c_M - c_N}, \tag{10.77}$$

where

$$Q_f(N) \equiv \frac{\sum_{m=1}^{\infty}f_m\left[P^{(N)}(m) - P^{(N+1)}(m)\right]}{\sum_{m=1}^{\infty}f_m P^{(N)}(m)}.$$

Thus, letting L_f(N) be the left-hand side of (10.77), if Q_f(N) increases strictly with N and L_f(∞) > c_N/(c_M − c_N), then there exists a finite and unique minimum N_f^* (1 ≤ N_f^* < ∞) which satisfies (10.77).

In particular, when p_n = (λ^n/n!)exp(−λ) (n = 0, 1, 2, ...) and f_m = [θ^{m−1}/(m − 1)!]exp(−θ) (m = 1, 2, ...),

$$Q_f(N) = 1 - \frac{\sum_{k=0}^{\infty}\left\{\left[(N+1)\lambda\right]^k/k!\right\}e^{-\lambda}\sum_{m=k}^{\infty}\left(\theta^m/m!\right)}{\sum_{i=0}^{\infty}\left[(N\lambda)^i/i!\right]\sum_{m=i}^{\infty}\left(\theta^m/m!\right)},$$

which increases strictly with N to 1 − exp(−λ) (Appendix A.5). Thus, if


Table 10.9 Optimal N^* and resulting cost rate T C(N^*; K)/c_N

wK   c_K/c_N   N^*   T C(N^*; K)/c_N
8    3         6     2.37 × 10^-1
10   3         7     1.83 × 10^-1
12   3         9     1.48 × 10^-1
8    5         5     2.83 × 10^-1
8    7         4     3.15 × 10^-1

Table 10.10 Optimal N^* and resulting cost rate T C(N^*; M)/c_N, and optimal N_f^* and resulting cost rate T C(N_f^*; f)/c_N

λ     c_M/c_N   M    N^*   T C(N^*; M)/c_N   θ    N_f^*   T C(N_f^*; f)/c_N
1.0   6         20   12    9.23 × 10^-2      20   11      1.11 × 10^-1
1.0   6         15   9     1.35 × 10^-1      15   8       1.62 × 10^-1
1.0   6         10   5     2.32 × 10^-1      10   5       2.78 × 10^-1
2.0   6         20   6     1.85 × 10^-1      20   6       2.22 × 10^-1
3.0   6         20   4     2.77 × 10^-1      20   4       3.32 × 10^-1
1.0   4         20   13    8.70 × 10^-2      20   13      1.00 × 10^-1
1.0   2         20   16    7.56 × 10^-2      20   17      7.91 × 10^-2

$$\sum_{i=0}^{\infty}\frac{(N\lambda)^i}{i!}e^{-N\lambda}\sum_{m=i}^{\infty}\frac{\theta^m}{m!}e^{-\theta} - \sum_{i=0}^{\infty}\frac{(N\lambda)^i}{i!}e^{-N\lambda}\frac{\theta^i}{i!}e^{-\theta} > \frac{c_N}{c_M - c_N}\left(1 - e^{-\lambda}\right),$$

then a finite N_f^* (1 ≤ N_f^* < ∞) exists.

Example 10.7 Table 10.9 presents optimal N^* and resulting cost rate T C(N^*; K)/c_N for wK and c_K/c_N. This indicates that N^* decreases with 1/wK and c_K/c_N. Table 10.10 presents optimal N^* and resulting cost rate T C(N^*; M)/c_N for M, λ and c_M/c_N. This indicates that N^* decreases with 1/M, λ and c_M/c_N. Table 10.10 also presents optimal N_f^* and resulting cost rate T C(N_f^*; f)/c_N for θ, λ and c_M/c_N. This indicates that N_f^* decreases with 1/θ, λ and c_M/c_N, and N_f^* ≥ N^*. Table 10.11 presents optimal N^* and resulting cost rate T C(N^*; K, M)/c_N for wK, λ, M, c_K/c_N and c_M/c_N. This indicates that N^* decreases with c_K/c_N, wK and λ. It is of interest that changes of c_M/c_N and M do not affect N^*.
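The random-M rate (10.76) can be coded directly. As a sanity check of my own (not from the book), a degenerate f_m concentrated at a single value m = M must reproduce the constant-M rate (10.70):

```python
import math

def P(k, lam, M):
    # P^(k)(M) = Pr{Poisson(k*lam) <= M}
    m, term = k * lam, math.exp(-k * lam)
    s = term
    for i in range(1, M + 1):
        term *= m / i
        s += term
    return s

def rate_M(N, lam, M, cN, cM):
    # (10.70): replacement at time NT or at crack number M
    return (cM - (cM - cN) * P(N, lam, M)) / sum(P(k, lam, M) for k in range(N))

def rate_f(N, lam, f, cN, cM):
    # (10.76): crack number M random with probability function f = {m: f_m}
    num = cN + (cM - cN) * sum(
        sum(fm * (P(k, lam, m) - P(k + 1, lam, m)) for m, fm in f.items())
        for k in range(N))
    den = sum(sum(fm * P(k, lam, m) for m, fm in f.items()) for k in range(N))
    return num / den

a = rate_f(5, 1.0, {20: 1.0}, 1.0, 6.0)   # f degenerate at M = 20
b = rate_M(5, 1.0, 20, 1.0, 6.0)
print(round(a, 6), round(b, 6))
```

The equality follows from the telescoping sum \sum_k [P^{(k)} − P^{(k+1)}] = 1 − P^{(N)}.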

Table 10.11 Optimal N^* and resulting cost rate T C(N^*; K, M)/c_N

wK   λ     M    c_K/c_N   c_M/c_N   N^*   T C(N^*; K, M)/c_N
10   1.0   20   6         2.0       8     1.35 × 10^-1
11   1.0   20   6         2.0       7     1.47 × 10^-1
12   1.0   20   6         2.0       6     1.67 × 10^-1
10   1.5   20   6         2.0       5     2.03 × 10^-1
10   2.0   20   6         2.0       4     2.68 × 10^-1
10   1.0   40   6         2.0       8     1.35 × 10^-1
10   1.0   60   6         2.0       8     1.35 × 10^-1
10   1.0   20   12        2.0       7     1.43 × 10^-1
10   1.0   20   24        2.0       7     1.44 × 10^-1
10   1.0   20   6         3.5       8     1.35 × 10^-1
10   1.0   20   6         5.0       8     1.35 × 10^-1

10.3.2 Multiple Damage Levels and Crack Number

We should consider multiple damage levels of maintenance because maintenance costs depend greatly on the amount of damage. In Sect. 10.3.1, PM is undergone when the total damage has exceeded a failure level K or a total number M of cracks, whichever occurs first. In this section, the airframe is inspected at time NT and fails when the total damage has exceeded a failure level K_i (i = 1, 2, ..., m) or the total number of cracks has exceeded a certain failure number M, whichever occurs first. We obtain the total expected cost rate and derive optimal number N^* to minimize it [29]. We assume the following maintenance policy:

(1) The airframe suffers damage due to shocks, and its amount is measured only at periodic times kT (k = 1, 2, ...) for a specified T > 0. Each amount W_k of damage between periodic times has an identical distribution G(x) ≡ Pr{W_k ≤ x}. In addition, the airframe suffers some number of cracks, and their number Y_k is measured only at periodic times and has a probability function p_n ≡ Pr{Y_k = n} and P_n ≡ \sum_{i=0}^{n} p_i = Pr{Y_k ≤ n} (n = 0, 1, 2, ...). The airframe fails when the total damage has exceeded a failure level K_i (K_{i-1} < K_i < ∞, i = 1, 2, ..., m, K_0 ≡ 0) or the total number of cracks has exceeded a failure number M (M = 1, 2, ...), whichever occurs first.

(2) Replacement cost at time NT is c_0, replacement cost when the total damage is between K_{i-1} and K_i is c_i (i = 1, 2, ..., m), and replacement cost when the total number of cracks exceeds M is c_M.

The probability that the airframe is replaced at time NT is

$$G^{(N)}(K_1)P^{(N)}(M),$$

the probability that it is replaced when the total damage has exceeded K_i and is less than K_{i+1} (i = 1, 2, ..., m − 1) is

$$\sum_{k=0}^{N-1}\int_{0}^{K_1}\left[G(K_{i+1} - x) - G(K_i - x)\right]dG^{(k)}(x)\,P^{(k)}(M),$$


which includes the probability that the total damage exceeds K_i and the total number exceeds M. The probability that it is replaced when the total damage has exceeded K_m is

$$\sum_{k=0}^{N-1}\int_{0}^{K_1}\overline{G}(K_m - x)\,dG^{(k)}(x)\,P^{(k)}(M),$$

and the probability that it is replaced when the total number has exceeded M is

$$\sum_{k=0}^{N-1}G^{(k+1)}(K_1)\left[P^{(k)}(M) - P^{(k+1)}(M)\right].$$

Thus, the mean time to replacement is

$$\begin{aligned} &NT\,G^{(N)}(K_1)P^{(N)}(M) + \sum_{k=0}^{N-1}(k+1)T\sum_{i=1}^{m}\int_{0}^{K_1}\left[G(K_{i+1} - x) - G(K_i - x)\right]dG^{(k)}(x)\,P^{(k)}(M)\\ &\qquad + \sum_{k=0}^{N-1}(k+1)T\,G^{(k+1)}(K_1)\left[P^{(k)}(M) - P^{(k+1)}(M)\right]\\ &\quad = T\sum_{k=0}^{N-1}G^{(k)}(K_1)P^{(k)}(M), \end{aligned} \tag{10.78}$$

where K_{m+1} = ∞. Thus, the expected cost until replacement is

$$\begin{aligned} &\sum_{i=1}^{m}c_i\sum_{k=0}^{N-1}\int_{0}^{K_1}\left[G(K_{i+1} - x) - G(K_i - x)\right]dG^{(k)}(x)\,P^{(k)}(M)\\ &\qquad + c_0 G^{(N)}(K_1)P^{(N)}(M) + c_M\sum_{k=0}^{N-1}G^{(k+1)}(K_1)\left[P^{(k)}(M) - P^{(k+1)}(M)\right]\\ &\quad = c_0 + (c_M - c_0)\sum_{k=0}^{N-1}G^{(k+1)}(K_1)\left[P^{(k)}(M) - P^{(k+1)}(M)\right]\\ &\qquad + \sum_{i=1}^{m}(c_i - c_{i-1})\sum_{k=0}^{N-1}\int_{0}^{K_1}\overline{G}(K_i - x)\,dG^{(k)}(x)\,P^{(k)}(M). \end{aligned} \tag{10.79}$$

Therefore, the expected cost rate is, from (10.78) and (10.79),


$$T C(N; K_i, M) = \frac{\begin{aligned}&c_0 + (c_M - c_0)\sum_{k=0}^{N-1}G^{(k+1)}(K_1)\left[P^{(k)}(M) - P^{(k+1)}(M)\right]\\ &\quad + \sum_{i=1}^{m}(c_i - c_{i-1})\sum_{k=0}^{N-1}\int_{0}^{K_1}\overline{G}(K_i - x)\,dG^{(k)}(x)\,P^{(k)}(M)\end{aligned}}{\sum_{k=0}^{N-1}G^{(k)}(K_1)P^{(k)}(M)}. \tag{10.80}$$

When M = ∞,

$$T C(N; K_i) = \frac{c_0 + \sum_{i=1}^{m}(c_i - c_{i-1})\sum_{k=0}^{N-1}\int_{0}^{K_1}\overline{G}(K_i - x)\,dG^{(k)}(x)}{\sum_{k=0}^{N-1}G^{(k)}(K_1)}. \tag{10.81}$$

We find optimal N^* to minimize C(N; K_i) in (10.81). Forming the inequality C(N + 1; K_i) − C(N; K_i) ≥ 0,

$$\sum_{i=1}^{m}(c_i - c_{i-1})\sum_{k=0}^{N-1}G^{(k)}(K_1)\left[\frac{\int_{0}^{K_1}\overline{G}(K_i - x)\,dG^{(N)}(x)}{G^{(N)}(K_1)} - \frac{\int_{0}^{K_1}\overline{G}(K_i - x)\,dG^{(k)}(x)}{G^{(k)}(K_1)}\right] \ge c_0. \tag{10.82}$$

Let L(N) be the left-hand side of (10.82). Then, if

$$\overline{G}(K_i) \le \frac{\int_{0}^{K_1}\overline{G}(K_i - x)\,dG^{(k)}(x)}{G^{(k)}(K_1)} \le \overline{G}(K_i - K_1)$$

increases strictly with k and L(∞) > c_0, then there exists a finite and unique N^* (1 ≤ N^* < ∞) which satisfies (10.82).

Next, we find optimal N^* to minimize C(N; K_i, M) in (10.80). Forming the inequality C(N + 1; K_i, M) − C(N; K_i, M) ≥ 0,

$$\begin{aligned} &(c_M - c_0)\sum_{k=0}^{N-1}G^{(k)}(K_1)P^{(k)}(M)\\ &\quad\times\left\{\frac{G^{(N+1)}(K_1)\left[P^{(N)}(M) - P^{(N+1)}(M)\right]}{G^{(N)}(K_1)P^{(N)}(M)} - \frac{G^{(k+1)}(K_1)\left[P^{(k)}(M) - P^{(k+1)}(M)\right]}{G^{(k)}(K_1)P^{(k)}(M)}\right\}\\ &\quad + \sum_{i=1}^{m}(c_i - c_{i-1})\sum_{k=0}^{N-1}G^{(k)}(K_1)P^{(k)}(M)\\ &\quad\times\left[\frac{\int_{0}^{K_1}\overline{G}(K_i - x)\,dG^{(N)}(x)}{G^{(N)}(K_1)} - \frac{\int_{0}^{K_1}\overline{G}(K_i - x)\,dG^{(k)}(x)}{G^{(k)}(K_1)}\right] \ge c_0. \end{aligned} \tag{10.83}$$


Thus, if both

$$\frac{G^{(k+1)}(K_1)\left[P^{(k)}(M) - P^{(k+1)}(M)\right]}{G^{(k)}(K_1)P^{(k)}(M)} \quad\text{and}\quad \frac{\int_{0}^{K_1}\overline{G}(K_i - x)\,dG^{(k)}(x)}{G^{(k)}(K_1)}$$

increase strictly with k, then there exists a finite and unique N^* (1 ≤ N^* < ∞) which satisfies (10.83).

In particular, when G(x) = 1 − exp(−wx),

$$\frac{\int_{0}^{K_1}\overline{G}(K_i - x)\,dG^{(k)}(x)}{G^{(k)}(K_1)} = \frac{\left[(wK_1)^k/k!\right]e^{-w(K_i - K_1)}}{\sum_{j=k}^{\infty}(wK_1)^j/j!}$$

increases strictly with k from exp(−wK_i) to exp[−w(K_i − K_1)]. Therefore, (10.82) is

$$\sum_{i=1}^{m}(c_i - c_{i-1})e^{-w(K_i - K_1)}\sum_{k=0}^{N-1}G^{(k)}(K_1)\left[\frac{(wK_1)^N/N!}{\sum_{j=N}^{\infty}(wK_1)^j/j!} - \frac{(wK_1)^k/k!}{\sum_{j=k}^{\infty}(wK_1)^j/j!}\right] \ge c_0,$$

whose left-hand side increases strictly with N to wK_1\sum_{i=1}^{m}(c_i - c_{i-1})\exp[-w(K_i - K_1)]. Thus, if wK_1\sum_{i=1}^{m}(c_i - c_{i-1})\exp[-w(K_i - K_1)] > c_0, then there exists a finite and unique N^* (1 ≤ N^* < ∞) which satisfies (10.82).

Similarly, when p_i = [λ^{i-1}/(i − 1)!]exp(−λ) (i = 1, 2, ...), P(M) = \sum_{i=0}^{M-1}(λ^i/i!)exp(−λ) and P^{(k)}(M) = \sum_{i=0}^{M-1}\left[(kλ)^i/i!\right]exp(−kλ), (10.83) is

$$\begin{aligned} &(c_M - c_0)\sum_{k=0}^{N-1}G^{(k)}(K_1)P^{(k)}(M)\\ &\quad\times\left\{\left[1 - \frac{(wK_1)^N/N!}{\sum_{i=N}^{\infty}(wK_1)^i/i!}\right]\left[1 - \frac{\sum_{i=0}^{M-1}\left\{\left[(N+1)\lambda\right]^i/i!\right\}e^{-\lambda}}{\sum_{i=0}^{M-1}(N\lambda)^i/i!}\right]\right.\\ &\qquad\left. - \left[1 - \frac{(wK_1)^k/k!}{\sum_{i=k}^{\infty}(wK_1)^i/i!}\right]\left[1 - \frac{\sum_{i=0}^{M-1}\left\{\left[(k+1)\lambda\right]^i/i!\right\}e^{-\lambda}}{\sum_{i=0}^{M-1}(k\lambda)^i/i!}\right]\right\}\\ &\quad + \sum_{i=1}^{m}(c_i - c_{i-1})e^{-w(K_i - K_1)}\sum_{k=0}^{N-1}G^{(k)}(K_1)P^{(k)}(M)\\ &\quad\times\left[\frac{(wK_1)^N/N!}{\sum_{i=N}^{\infty}(wK_1)^i/i!} - \frac{(wK_1)^k/k!}{\sum_{i=k}^{\infty}(wK_1)^i/i!}\right] \ge c_0. \end{aligned} \tag{10.84}$$

If the left-hand side of (10.84) increases strictly with N and its limit is larger than c_0, then there exists a finite and unique N^* (1 ≤ N^* < ∞) which satisfies (10.84).
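For the exponential case, the integral \int_0^{K_1}\overline{G}(K_i - x)\,dG^{(k)}(x) has the closed form e^{-wK_i}(wK_1)^k/k! used above. A small Python check of my own compares it against direct numerical integration of the k-fold gamma density:

```python
import math

w, K1, Ki, k = 1.0, 2.0, 5.0, 3   # illustrative values (assumed)

# Closed form: integral of exp(-w(Ki - x)) over (0, K1) against dG^(k)(x)
closed = math.exp(-w * Ki) * (w * K1) ** k / math.factorial(k)

# Numerical midpoint rule: dG^(k)(x) = w (wx)^(k-1) / (k-1)! * exp(-wx) dx
n = 200000
h = K1 / n
num = 0.0
for j in range(n):
    x = (j + 0.5) * h
    dens = w * (w * x) ** (k - 1) / math.factorial(k - 1) * math.exp(-w * x)
    num += math.exp(-w * (Ki - x)) * dens * h

print(round(closed, 8), round(num, 8))
```

The exponentials cancel inside the integrand, which is why the closed form is exact.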


Table 10.12 Optimal N^* and resulting cost T C(N^*; K_i) when ω = 8 and c_0 = 0.1, and optimal N^* and resulting cost T C(N^*; K_i, M) when ω = 8, c_0 = 0.1, M = 10 and λ = 1.0

α      m   N^*   T C(N^*; K_i)    N^*   T C(N^*; K_i, M)
0.1    1   10    2.96 × 10^-1     14    2.76 × 10^-1
0.1    2   10    3.66 × 10^-1     14    3.37 × 10^-1
0.1    5   9     4.14 × 10^-1     12    3.82 × 10^-1
0.1    9   9     4.18 × 10^-1     12    3.86 × 10^-1
0.05   1   15    1.89 × 10^-1     15    1.89 × 10^-1
0.05   2   14    2.36 × 10^-1     14    2.36 × 10^-1
0.05   5   13    3.01 × 10^-1     13    3.01 × 10^-1
0.05   9   12    3.24 × 10^-1     12    3.24 × 10^-1
0.01   1   19    1.18 × 10^-1     19    1.18 × 10^-1
0.01   2   17    1.32 × 10^-1     17    1.32 × 10^-1
0.01   5   15    1.66 × 10^-1     15    1.66 × 10^-1
0.01   9   14    2.00 × 10^-1     14    2.00 × 10^-1

Table 10.13 Optimal N^* and resulting cost T C(N^*; K_i, M) when m = 9, α = 0.1 and c_0 = 0.1

           ω = 7                    ω = 8                    ω = 9
M    λ     N^*   T C(N^*;K_i,M)    N^*   T C(N^*;K_i,M)    N^*   T C(N^*;K_i,M)
10   1.0   12    3.86 × 10^-1      9     4.31 × 10^-1      7     4.69 × 10^-1
10   1.1   15    3.51 × 10^-1      11    4.15 × 10^-1      8     4.65 × 10^-1
10   1.2   21    3.16 × 10^-1      13    3.83 × 10^-1      8     4.56 × 10^-1
12   1.0   10    4.16 × 10^-1      8     4.42 × 10^-1      7     4.69 × 10^-1
12   1.1   11    4.08 × 10^-1      9     4.41 × 10^-1      7     4.69 × 10^-1
12   1.2   13    3.88 × 10^-1      9     4.35 × 10^-1      7     4.70 × 10^-1
14   1.0   9     4.21 × 10^-1      8     4.40 × 10^-1      7     4.68 × 10^-1
14   1.1   10    4.20 × 10^-1      8     4.41 × 10^-1      7     4.68 × 10^-1
14   1.2   10    3.17 × 10^-1      8     4.42 × 10^-1      7     4.69 × 10^-1

Example 10.8 When c_i = αi + c_0 and K_i = i, Table 10.12 presents optimal N^* and resulting cost T C(N^*; K_i) in (10.81) for α and m when ω = 8 and c_0 = 0.1. This indicates that N^* decreases with α and m. Table 10.12 also presents optimal N^* and resulting cost T C(N^*; K_i, M) in (10.80) for α and m when ω = 8, c_0 = 0.1, M = 10 and λ = 1.0. The variations of N^* and T C(N^*; K_i, M) are the same as those of N^* and T C(N^*; K_i). Table 10.13 presents optimal N^* and resulting cost T C(N^*; K_i, M) in (10.80) for ω, M and λ when m = 9, c_0 = 0.1 and α = 0.1. In this illustration, N^* increases with 1/ω, 1/M and λ.


10.4 Problems

1. Prove that the left-hand side L_1(M) of (10.5) increases strictly with M to ∞.
2. Prove that the left-hand side L_2(M_1) increases strictly with M_1 to ∞.
3. Prove that the numerator of the left-hand side of (10.18) increases strictly with N to h(∞) and the denominator decreases strictly with N to 0.
4. Prove that the right-hand side L_R(M) of (10.19) increases strictly with M from L_R(1) = c_S N + (c_0 − c_b)(1 − e^{−aNT})^2 to L_R(∞) = c_S N + c_0 − c_b, and that the left-hand side bracket L_L(M) of (10.19) satisfies L_L(1) = H(2NT) − 2H(NT) − N H_S(2T) + 2N H_S(T) and L_L(∞) = ∞.
5. Derive (10.28) and (10.38).
6. Derive (10.43) and (10.44).

Appendix

A.1 H(t)/t Increases Strictly with t to h(∞)

Assume that the failure rate h(t) increases strictly with t from h(0) to h(∞).

Proof Noting that

$$\lim_{t\to 0}\frac{H(t)}{t} = h(0), \qquad \lim_{t\to\infty}\frac{H(t)}{t} = h(\infty),$$

and

$$\frac{d}{dt}\left[\frac{H(t)}{t}\right] = \frac{th(t) - H(t)}{t^2} = \frac{1}{t^2}\int_{0}^{t}\left[h(t) - h(u)\right]du > 0,$$

H(t)/t increases strictly with t from h(0) to h(∞).
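A tiny numerical illustration (my own sketch, using the Weibull cumulative hazard H(t) = (λt)^m with m > 1 so that h(t) is strictly increasing):

```python
lam, m = 0.002, 1.5   # illustrative Weibull parameters (assumed)

def H(t):
    # Weibull cumulative hazard
    return (lam * t) ** m

ts = [10.0 * i for i in range(1, 200)]
ratios = [H(t) / t for t in ts]
print(all(b > a for a, b in zip(ratios, ratios[1:])))  # True
```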

A.2 L_3(n, x) Increases Strictly with x to ∞

Proof Note that for n > 1,

$$L_3(n, x) \equiv \frac{H(nx) - nH(x)}{nx} = \frac{1}{nx}\left[\int_{0}^{nx}h(t)\,dt - n\int_{0}^{x}h(t)\,dt\right] = \frac{1}{nx}\sum_{k=1}^{n}\left[\int_{(k-1)x}^{kx}h(t)\,dt - \int_{0}^{x}h(t)\,dt\right] > 0,$$

$$\lim_{x\to\infty}L_3(n, x) = \lim_{x\to\infty}\left[\frac{H(nx)}{nx} - \frac{H(x)}{x}\right] = \lim_{x\to\infty}\left[h(nx) - h(x)\right] = \infty.$$

Differentiating L_3(n, x) with respect to x, the numerator of the derivative is

$$\left[nh(nx) - nh(x)\right]x - \left[H(nx) - nH(x)\right] = \int_{0}^{nx}\left[h(nx) - h(t)\right]dt - n\int_{0}^{x}\left[h(x) - h(t)\right]dt = \sum_{k=1}^{n}\left\{\int_{(k-1)x}^{kx}\left[h(nx) - h(t)\right]dt - \int_{0}^{x}\left[h(x) - h(t)\right]dt\right\} > 0,$$

from which it follows that L_3(n, x) increases strictly with x to ∞.

A.3 Q_K(N) Increases Strictly with N from exp(−wK) to 1

Proof Noting that for N = 0, 1, 2, ...,

$$Q_K(N) \equiv \frac{\left[(wK)^N/N!\right]e^{-wK}}{\sum_{k=N}^{\infty}\left[(wK)^k/k!\right]e^{-wK}},$$

we easily have

$$Q_K(0) = e^{-wK}, \qquad \lim_{N\to\infty}Q_K(N) = \lim_{N\to\infty}\frac{1}{1 + \left[N!/(wK)^N\right]\sum_{k=N+1}^{\infty}\left[(wK)^k/k!\right]} = 1.$$

Furthermore, forming Q_K(N + 1) − Q_K(N), the sign of the difference agrees with that of

$$\frac{(wK)^{N+1}}{(N+1)!}\sum_{k=N}^{\infty}\frac{(wK)^k}{k!} - \frac{(wK)^N}{N!}\sum_{k=N+1}^{\infty}\frac{(wK)^k}{k!} = \sum_{k=N+1}^{\infty}\frac{(wK)^{N+k}}{(N+1)!\,k!}\left[k - (N+1)\right] > 0,$$

from which it follows that Q_K(N) increases strictly with N from exp(−wK) to 1.
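Numerically (a small check of my own), Q_K(N) can be computed stably from the Poisson tail ratio:

```python
import math

def Q_K(N, wK):
    # Q_K(N) = [(wK)^N/N!] / sum_{k>=N} (wK)^k/k!: divide through by (wK)^N/N!
    term, s = 1.0, 1.0
    k = N
    while term > 1e-18 * s:
        k += 1
        term *= wK / k
        s += term
    return 1.0 / s

wK = 8.0
vals = [Q_K(N, wK) for N in range(0, 201)]
print(round(vals[0], 6), all(b > a for a, b in zip(vals, vals[1:])), vals[-1] > 0.95)
```

The first value is exp(−8), the sequence is strictly increasing, and it slowly approaches 1.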

A.4 Q_{KM}(N) Increases Strictly with N to 1

Proof From A.3,

$$L_1(N) \equiv \frac{\sum_{k=N+1}^{\infty}\left[(wK)^k/k!\right]}{\sum_{k=N}^{\infty}\left[(wK)^k/k!\right]} = 1 - \frac{(wK)^N/N!}{\sum_{k=N}^{\infty}\left[(wK)^k/k!\right]}$$

decreases strictly with N from 1 − exp(−wK) to 0. Letting

$$L_2(x) \equiv \frac{\sum_{j=0}^{M}\left[(x+1)^j/j!\right]}{\sum_{j=0}^{M}\left(x^j/j!\right)},$$

we have

$$L_2(0) = \sum_{j=0}^{M}\frac{1}{j!}, \qquad \lim_{x\to\infty}L_2(x) = 1.$$

Differentiating L_2(x) with respect to x, the numerator of the derivative is

$$\sum_{j=0}^{M-1}\frac{(x+1)^j}{j!}\sum_{j=0}^{M}\frac{x^j}{j!} - \sum_{j=0}^{M}\frac{(x+1)^j}{j!}\sum_{j=0}^{M-1}\frac{x^j}{j!} = \frac{x^M}{M!}\sum_{j=0}^{M-1}\frac{x^j}{j!}\left[\left(1+\frac{1}{x}\right)^j - \left(1+\frac{1}{x}\right)^M\right] < 0,$$

from which it follows that L_2(x) decreases with x from L_2(0) to 1, and hence \sum_{j=0}^{M}\left\{\left[(N+1)\lambda\right]^j/j!\right\}e^{-\lambda}\big/\sum_{j=0}^{M}\left[(N\lambda)^j/j!\right] decreases with N from \left[\sum_{j=0}^{M}\left(\lambda^j/j!\right)\right]e^{-\lambda} to e^{-\lambda}. Thus, Q_{KM}(N) increases strictly with N from

$$Q_{KM}(0) = 1 - \left(1 - e^{-wK}\right)\sum_{j=0}^{M}\frac{\lambda^j}{j!}e^{-\lambda}$$

to 1.
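A numerical spot check of A.4 (my own sketch, with exponential damage and Poisson cracks so that G^{(N)}(K) = Pr{Poisson(wK) ≥ N} and P^{(N)}(M) = Pr{Poisson(Nλ) ≤ M}):

```python
import math

wK, lam, M = 8.0, 1.0, 10   # illustrative values (assumed)

def G(N):
    # Pr{Poisson(wK) >= N}, summed from the tail to avoid cancellation
    term = math.exp(-wK) * wK**N / math.factorial(N)
    s, k = term, N
    while term > 1e-20 * s:
        k += 1
        term *= wK / k
        s += term
    return s

def P(N):
    # Pr{Poisson(N*lam) <= M}
    mu, term = N * lam, math.exp(-N * lam)
    s = term
    for i in range(1, M + 1):
        term *= mu / i
        s += term
    return s

def Q_KM(N):
    return 1.0 - G(N + 1) * P(N + 1) / (G(N) * P(N))

vals = [Q_KM(N) for N in range(0, 41)]
print(all(b > a for a, b in zip(vals, vals[1:])), vals[-1] > 0.8)
```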

A.5 Q_f(N) Increases Strictly with N to 1 − exp(−λ)

Proof Letting for 0 < x < ∞,

$$M(x) \equiv \frac{\sum_{j=0}^{\infty}\left[(x+1)^j/j!\right]M_j}{\sum_{j=0}^{\infty}\left(x^j/j!\right)M_j}, \qquad\text{where}\quad M_j \equiv \sum_{m=j}^{\infty}\frac{\theta^m}{m!}e^{-\theta},$$

we have

$$M(0) = \sum_{j=0}^{\infty}\frac{1}{j!}M_j, \qquad M(\infty) \equiv 1.$$

Differentiating M(x) with respect to x, the numerator of the derivative is

$$\sum_{j=0}^{\infty}\sum_{i=0}^{\infty}\frac{x^j x^i}{j!\,i!}M_{j+1}M_i\left[\left(1+\frac{1}{x}\right)^j - \left(1+\frac{1}{x}\right)^i\right].$$

Splitting this double sum at i = j and exchanging the roles of i and j in the part with i < j, the total of the two parts is

$$\sum_{j=0}^{\infty}\sum_{i=j}^{\infty}\frac{x^j x^i}{j!\,i!}M_j M_i\left[\left(1+\frac{1}{x}\right)^j - \left(1+\frac{1}{x}\right)^i\right]\left[\frac{M_{j+1}}{M_j} - \frac{M_{i+1}}{M_i}\right] < 0,$$

from which it follows that Q_f(N) increases strictly with N from Q_f(0) to 1 − e^{−λ}.

References 1. Paul D, Kelly L, Venkayya V (2002) Evolution of U.S. military aircraft structures technology. J Aircr 39:18–29 2. Dixon M (2006) The maintenance costs of aging aircraft: insights from commercial aviation. RAND Corporation, California 3. Nakagawa T (2005) Maintenance theory of reliability. Springer, London 4. Ito K (2013) Maintenance models of miscellaneous systems. In: Nakamura N, Qian CH, Chen M (eds) Reliability modeling with applications. World Scientific, Singapore, pp 307–330 5. Ito K, Nakagawa T (2011) Optimal multi-echelon maintenance of aircraft. Int J Reliab Qual Perform 3:67–73 6. Gill M (2018) Best industry practices for aircraft decommissioning (BIPAD). Int Air Transp Assoc (IATA) 7. Ackert S (2012) Basics of aircraft market analysis. Aircr Monit 8. Ito K, Nakagawa T (2015) Optimal operation censoring policy of aircraft. Int J Reliab Qual Perform 6:19–25

238

10 Application Examples of Airframe

9. Senju S, Fushimi T, Fujita S, Knight JE (1980) Analysis for managerial and engineering decisions. Asian Productivity Organization, Tokyo 10. FAA (1998) Fatigue evaluation of structure; final rule. 14 CFR Part 25, Docket No. 27358, Amendment No. 25–96 March 31,63,61:15708-15715 11. FAA Advisory Circular (1998) Damage tolerance and fatigue evaluation of structure. AC 25, 571-1C 12. FAA Advisory Circular (1998) Acceptable methods, techniques, and practices—aircraft inspection and repair, AC 43. 13-1B 13. Rinaldi A, Krajcinovic D (2006) Statistical damage mechanics—constitutive relations. J Theor Appl Mech 44(3):585–602 14. Levis WH, Dodd BD, Sproat WH, Hamilton JM (1978) Reliability of nondestructive inspections, AD A072097 15. Heida JH, Grooteman FP (1998) Aircraft inspection reliability using field inspection data. In: RTO AVT workshop on aircraft inspection reliability under field/depot conditions 16. Goranson Ulf G (1997) Fatigue issues in aircraft maintenance and repair. Int J Fatigue 20(6):413–431 17. US Department of Defense (1974) Airplane damage tolerance requirements. (MIL-A-83444) 18. Niu Michael CY (1988) Aircraft structural design. Technical Book Company, Los Angeles 19. Chen D (1990) Bulging of fatigue cracks in a pressurized aircraft fuselage. Report LR-647, Delft University of Technology 20. Nechval KN, Nechval NA, Purgailis M, Rosevskis U, Strelchonok VF (2010) Optimal adaptive planning in-service inspections of aircraft structures in terms of terminal control problem. Inf Technol, Manage Soc 3(1):27–41 21. Tang R, Elias B (2012) Offshoring of airline maintenance: implications for domestic jobs and aviation safety. CRS Rep for Congr 7-5700, R42876, USA 22. Ito K, Nakagawa T (2014) Optimal maintenance policy of airframe cracks. Int J Reliab Qual Saf Eng 21:1450014(16 p) 23. Nakagawa T (2011) Stochastic processes with applications to reliability theory. Springer, London 24. Ito K, Nakagawa T (2014) Optimal maintenance policy of airframe. 
Proceedings of the AsiaPacific International Symposium (APARM2014), pp 192–199 25. IATA’s Maintenance Cost Task Force (2013) Airline maintenance cost executive commentary— an exclusive benchmark analysis. FY2011 data 26. Nakagawa T, Ito K (2006) Optimal maintenance policies for a system with multiechelon risks. IEEE Trans Syst Man Cybern Part A Syst Humans 38:461–469 27. Nakagawa T (2007) Shocks and damage models in reliability theory. Springer, London 28. Ito K, Nakagawa T (2015) A stochastic maintenance model of commercial airframe with damage level and crack number. Proceedings of the 21st ISSAT international conference on reliability and quality in design, pp 247–251 29. Ito K, Nakagawa T (2015) Stochastic maintenance model of airframe with multi stages damage level. Proceedings of the Asia-Pacific International Symposium (APARM2016), pp 179–186

Appendix

Answers to Selected Problems

Chapter 1

1.1 Using mathematical induction,

$$G^{(2)}(x) = \int_{0}^{x}G(x - y)\,dG(y) = \omega\int_{0}^{x}\left(e^{-\omega y} - e^{-\omega x}\right)dy = 1 - e^{-\omega x} - \omega x e^{-\omega x} = \sum_{i=2}^{\infty}\frac{(\omega x)^i}{i!}e^{-\omega x},$$

$$G^{(j+1)}(x) = \int_{0}^{x}G^{(j)}(x - y)\,dG(y) = \omega\int_{0}^{x}\sum_{i=j}^{\infty}\frac{\left[\omega(x-y)\right]^i}{i!}e^{-\omega(x-y)}e^{-\omega y}\,dy = \omega e^{-\omega x}\sum_{i=j}^{\infty}\int_{0}^{x}\frac{(\omega z)^i}{i!}\,dz = \sum_{i=j+1}^{\infty}\frac{(\omega x)^i}{i!}e^{-\omega x},$$

and

$$\sum_{j=1}^{\infty}G^{(j)}(x) = \sum_{j=1}^{\infty}\sum_{i=j}^{\infty}\frac{(\omega x)^i}{i!}e^{-\omega x} = \sum_{i=1}^{\infty}\sum_{j=1}^{i}\frac{(\omega x)^i}{i!}e^{-\omega x} = \sum_{i=1}^{\infty}i\,\frac{(\omega x)^i}{i!}e^{-\omega x} = \omega x.$$
Chapter 2

2.1 When F(t) = 1 − exp[−(λt)^m], c_T = 10, c_D = 1 and λ = 1/500, Table A.1 presents optimal T_k^* for m = 1.1, 1.2 and 1.5.

© Springer Nature Switzerland AG 2023 K. Ito and T. Nakagawa, Optimal Inspection Models with Their Applications, Springer Series in Reliability Engineering, https://doi.org/10.1007/978-3-031-22021-0

Table A.1 Optimal T_k^*

k    m = 1.1   m = 1.2   m = 1.5      k    m = 1.1   m = 1.2   m = 1.5
1    109       121       155          11   1034      1007      947
2    208       221       258          12   1122      1089      1013
3    304       316       350          13   1211      1170      1078
4    398       408       435          14   1298      1251      1142
5    491       498       515          15   1386      1331      1205
6    583       586       593          16   1473      1410      1267
7    674       672       667          17   1561      1489      1329
8    765       757       740          18   1647      1568      1389
9    855       841       810          19   1734      1646      1449
10   945       925       879          20   1820      1724      1508

2.2 Letting A_N(T) be the left-hand side of (2.15),

$$\lim_{T\to 0}A_N(T) = -\frac{c_R}{N}, \qquad \lim_{T\to\infty}A_N(T) = \frac{c_D}{\lambda} - c_R,$$

because

$$\lim_{T\to 0}\frac{\lambda N T e^{-\lambda N T}\left(1 - e^{-\lambda T}\right)}{\left(1 - e^{-\lambda N T}\right)^2} = \frac{1}{N}.$$

First, note that the first term of A_N(T) increases strictly with T from 0. Differentiating the second term of A_N(T) with respect to T yields a term whose sign agrees with that of

$$\lambda N T\left(1 - e^{-\lambda T}\right)\left(1 + e^{-\lambda N T}\right) - \left[\left(1 - e^{-\lambda T}\right) + \lambda T e^{-\lambda T}\right]\left(1 - e^{-\lambda N T}\right).$$

Letting L_N(T) (N = 1, 2, ...) be the above equation,

$$L_1(T) = \left(1 - e^{-\lambda T}\right)\left(e^{-\lambda T} - 1 + \lambda T\right) > 0,$$

$$\begin{aligned} L_{N+1}(T) - L_N(T) &= \left(1 - e^{-\lambda T}\right)\left\{\lambda T\left[1 - N\left(1 - e^{-\lambda T}\right)e^{-\lambda N T}\right] - \left(1 - e^{-\lambda T}\right)e^{-\lambda N T}\right\}\\ &> \left(1 - e^{-\lambda T}\right)^2\left\{\left[1 - N\left(1 - e^{-\lambda T}\right)e^{-\lambda N T}\right] - e^{-\lambda N T}\right\}\\ &= \left(1 - e^{-\lambda T}\right)^2\left[1 - (N+1)e^{-\lambda N T} + N e^{-\lambda(N+1)T}\right] > 0, \end{aligned}$$

from which it follows that A_N(T) increases strictly with T from −c_R/N to c_D/λ − c_R.

Next, forming A_{N+1}(T) − A_N(T), the difference consists of a term proportional to 1 − (1 + λT)e^{−λT}, which is positive, and a second term

$$\frac{e^{-\lambda T}\left[N\left(e^{\lambda T} - 1\right)\left(1 - e^{-(2N+1)\lambda T}\right) - \left(1 - e^{-\lambda N T}\right)^2\right]}{\left(1 - e^{-\lambda T}\right)^2\left(1 - e^{-\lambda(N+1)T}\right)^2} > 0.$$

Noting that

$$A_1(T) = \frac{c_D}{\lambda}\left[1 - (1 + \lambda T)e^{-\lambda T}\right] - c_R, \qquad A_\infty(T) \equiv \lim_{N\to\infty}A_N(T) = \left(\frac{c_D}{\lambda} - c_R\right)\left[1 - (1 + \lambda T)e^{-\lambda T}\right],$$

it follows that A_N(T) increases strictly with N from A_1(T) to A_∞(T).

2.3 Setting k = N^*, S = T_N = (1/λ)S^{1/m}, i.e., S^{1/m} = λS, which leads to (2.31).

2.4 Using the approximation exp(a) ≈ 1 + a + a^2/2, (2.35) becomes

$$\frac{\lambda T^2}{2} = \frac{c_T}{c_D}, \quad\text{i.e.,}\quad T = \sqrt{\frac{2c_T}{\lambda c_D}}.$$

Chapter 3 3.1 When G(t) = 1 − exp(−θ t) and M(t) = θ t, (3.3) is cT

∞ 

F(kT ) + c R

∞

t + 0 +c D = cT

∞   k=0 ∞ 

(k+1)T

θ tdF(t) + (c R − cT )

0

k=0





k=0

kT

 e−θt − e−θ(k+1)T

  −θ(t−x) −θ[(k+1)T −x] e θ dx dF(t) −e

 (k+1)T t

kT

∞   (k+1)T

e

−θ y

dy +

F(kT )+c R θ μ + (c R − cT )

 t  (k+1)T −x 0

t−x

e

−θ y

+ cθD

∞  k=0

 (k+1)T  kT



dy θ dx dF(t)

∞    (k+1)T  1 − e−θ[(k+1)T −t] dF(t) kT

k=0

k=0



 1 − e−θ[(k+1)T −t] dF(t) ,

242

Appendix: Answers to Selected Problems

which follows (3.5). Differentiating C R (T ) with respect to T and setting it equal to zero, −cT

 cD  k f (kT ) + c R − cT + θ k=1

∞ 



  ∞ − 1 − e−θ T k=1 k f (kT )

 ∞   (k+1)T −θ[(k+1)T −t] (k + 1) kT θe dF(t) = 0 , + k=0

which follows (3.6). R (T ) with respect to T and setting it equal to zero, 3.2 Differentiating C ∞ cT k=0 F(kT ) ∞ . −T = c k f (kT ) Dθ k=1 ∗ Thus, by replacing c D in (3.13) with c D θ , we can compute √ T easily. ∗ ∗ 3.3 It can be easily shown in Table 3.3 that T /(1/θ ) < 2. 3.4 Letting

L 1 (θ ) ≡

   λ  λT λ2 e −1 − 1 − e−θ T , θ +λ θ (θ + λ)

we have L 1 (∞) = 0 and L 1 (0) ≡ lim L(θ ) = eλT − 1 − λT , θ→0

which agrees with the left-hand side of (2.9). Differentiating L 1 (θ ) with respect to θ ,  −

λ θ +λ

2 



eλT − 1 1 − e−θ T θ +λ −θ T − − 1 − (1 + θ T )e . λ θ θ2

Letting L 2 (T ) be the bracket of the above equation, L 2 (0) = 0 and

L 2’ (T ) = e−θ T e(θ+λ)T − 1 − (θ + λ)T > 0 , which follows that L 2 (T ) > 0, and L 1 (θ ) decreases strictly with θ from L 1 (0) to 0. Thus, TF∗ increases with θ from T ∗ to ∞. 3.5 Noting that θ (θ + λ) exp(θ T ) increases with θ to ∞, the left-hand side of (3.42) L decreases with θ to T ∗ given increases with θ to [exp(λT ) − (1 + λT )]/λ and T in (2.9). 3.6 The mean downtime l D from failure to its detection is given by a renewal equation

Appendix: Answers to Selected Problems

 lD = lD

θ e−λT θ +λ

243

 +T +

1 1 θ − + e−λT , θ λ λ(θ + λ)

i.e., lD =

T + 1/θ 1 − . 1 − [θ/(θ + λ)]e−λT λ

Thus, from (3.49), C O (T ) = c R M R + c D l D =

c R + c D (T + 1/θ ) cD . − −λT 1 − [θ/(θ + λ)]e λ

3.7 From (3.33) and (3.51), 

     1 1  λT 1  λT λ + e −1 −T − e −1 + 1 − e−θ T λ θ θ +λ θ (θ + λ)       1 1 λ 1 + − eλT − 1 − T + 1 − e−θ T = λ θ θ +λ θ (θ + λ) 2   λ λ T + 1 − e−θ T > 0 , > θ (θ + λ) θ (θ + λ) which follows that TO∗ < TF∗ . From (3.42) and (3.51),    λ λ 1 λ(T +1/θ) e − e−θ(T +1/θ) − 1 + λT + λ θ θ (θ + λ)    1  λT 1 + e −1 +T − λ θ ⎡ ⎤   ∞ λ ⎦ 1 ⎣ λT  (λ/θ ) j λ − 1+ e−θ(T +1/θ) = e − λ j! θ θ (θ + λ) j=0    1  λT 1 + e −1 − λ θ ⎤ ⎡ ∞ λ2 1 ⎣ λT  (λ/θ ) j − e−θ(T +1/θ) ⎦ = e λ j! θ (θ + λ) j=2 ⎡ ⎤ ∞ θ λ ⎣ λT  (λ/θ ) j−2 − e−θ(T +1/θ) ⎦ = 2 e θ j! θ + λ j=2

244

Appendix: Answers to Selected Problems

⎡ ⎤   ∞ λ 1 1 λ ⎣ (λ/θ ) j θ −1 ⎦ > 2 > 2 − e − > 0, θ ( j + 2)! θ + λ θ 2 e j=0 which follows that TO∗ + 1/θ > TL∗ .

Chapter 4 4.1 Note that 



  1 − e−λt G(t)dt < 0 , 0  T   T  λt  λt   e − 1 dt − e − 1 G(T − t)dt lim L L F (T ) = lim L L F (0) = −

T →∞

T →∞



= lim

0 T

T →∞ 0

0

 λt  e − 1 G(T − t)dt = ∞ .

Differentiating L L F (T ) with respect to T , 

T

λe

λ(T −t)





G(t)dt +

0

λe−λ(t−T ) G(t)dt > 0 ,

T

which follows that L L F (T ) increases strictly with T from L L F (0) < 0 to ∞. 4.2 Note that "  ∞ n !   −λt G i (t) dt < 0 , 1−e 1− L n (0) = − 0



lim L n (T ) = lim

T →∞

T →∞

 = lim

T →∞ 0

i=1 T

 λt  e − 1 dt −



0

0

T

n !

 λt  e −1 1−

T

"  n  λt  ! e −1 G i (T − t) dt "

i=1

G i (T − t) dt = ∞ .

i=1

Differentiating L n (T ) with respect to T , 

T 0

λe

λ(T −t)

1−

n ! i=1

"





G i (t) dt + T

λe

−λ(t−T )

1−

n !

" G i (t) dt > 0 ,

i=1

which follows that L n (T ) increases strictly with T from L n (0) < 0 to ∞. 4.3 Draw the outline figures of L F (T ), L L (T ), and L M (T ), noting that 0 = L F (0) > L M (0) > L L (0), and L F (∞) = L M (∞) = L L (∞) = ∞. Then, certain that there do not exist of the cases {L M (TL M ) < c R /c D < L M (TM F ), L F (TL F ) < c R /c D }.

Appendix: Answers to Selected Problems

245

4.4 Compute TF∗ M in (4.38) and TL∗M in (4.43) like Table 4.2, and compare them with TM∗ in Table 4.2.

Chapter 5 5.1 Differentiating C(T) with respect t to Tk and setting it equal to zero, 

  

cM cT F(Tk ) cM − 1+ − H (Tk+1 ) − H (Tk ) h(Tk ) + Tk+1 − Tk + cD f (Tk ) cD cD   F(Tk−1 ) cM = 0, h(Tk ) + 1+ cD f (Tk ) which follows (5.6). 5.2 The total expected cost until failure detection is ∞  

 G((k + 1)T ) cT (k + 1) + c D [(k + 1)T − t] + c M H (t) dF(t) G(t) k=0 kT ∞  (k+1)T   (k+1)T   cT k + c D (u − t) + (k+1)T

k=0

= cT

kT

∞  

(k+1)T

t



 1 +c M [1 + H (t)] dG(u) dF(t) G(t)

G((k + 1)T ) + kG(t)

k=0 kT ∞  (k+1)T 

+c D

k=0 kT ∞  (k+1)T 





1 G(t)

dF(t) 

(k+1)T

(k + 1)T − t −

G(u)du t

1 G(t)

dF(t)

 1 [1 + H (t)]G(t) − G((k + 1)T ) dF(t) G(t) k=0 kT   (k+1)T ∞   1 F((k + 1)T ) + G((k + 1)T ) dF(t) = cT G(t) kT k=0  ∞  (k+1)T  (k+1)T  1 G(u)du dF(t) +c D G(t) t k=0 kT   (k+1)T ∞  (k+1)T  1 [1 + H (t)]dF(t) − G((k + 1)T ) dF(t) + cM G(t) kT kT k=0 +c M

246

Appendix: Answers to Selected Problems

= (cT − c M ) +c D

∞   k=0

∞  k=0

(k+1)T

 G((k + 1)T )

(k+1)T

G(t)

kT





(k+1)T

G(u)du kT

t

1

1 G(t)

dF(t) + cT

∞ 

F((k + 1)T )

k=0





dF(t) + c M

[1 + H (t)]dF(t) .

0

5.3 When N = 2 and T2 = S,  S C(T1 ; S) = cT + c D T1 + c M H (T1 ) − c D F(t)dt 0  + cT + c D (S − T1 ) + c M [H (S) − H (T1 )] F(T1 ). Differentiating C(T1 ; S) with respect to T1 and setting it equal to zero, we have (5.27). When N ≥ 3 and k = 1, 2, · · · , N − 2, differentiating (5.26) with respect to Tk and setting it equal to zero,  −[c D + c M h(Tk )]F (Tk ) − cT + c D (Tk+1 − Tk ) + c M [H (Tk+1 ) − H (Tk )] f (Tk ) + [c D + c M h(Tk )] F (Tk−1 ) = 0, which follows (5.28). When k = N − 1, putting that TN = S, we easily have (5.29) 5.4 Make a table similar to Table 5.3.

Chapter 6 6.1 Setting that x ≡ 1 − e−λt , i.e., dt = dx/[λ(1 − x)], (6.3) is μj =

1 λ

 0

1

jn−1  jn 1  1 i 1 − x jn 11 dx = . x dx = 1−x λ i=0 0 λ i=1 i

Similarly, (6.18) is   ∞ n (1 − e−λt )n−k e−kλt dt k 0 k=1   1 n  1 n = Pk x n−k (1 − x)k−1 dx k λ 0

μn,P =

n 

Pk

k=1

Appendix: Answers to Selected Problems

247

Fig. A.1 7 cases that 3-out-of-5:F system is operable

   1 k−1 n x n−k+1 (1 − x)k−2 dx k n−k+1 0    1 n 1 k−2 k−1 n = Pk x n−k+2 (1 − x)k−3 dx k n−k+1n−k+2 0 λ k=1

1 = Pk λ k=1 n

.. .

1 = Pk λ k=1 n

   n 11 n (k − 1)!(n − k)! 1 n−1 Pk . x dx = k (n − 1)! λ k=1 k 0

6.2 When the number of operating units is 1, the system is operable in 1 case, and when its number is 2, the system is operable in 7 cases of Fig. A.1. Thus, we have P0 = 0, P1 = 1/5, P2 = 7/10, P3 = P4 = P5 = 1.

Chapter 7 7.1 Note that  K +1 k K +2 , PK (0) = k=1 = K K k=1 k "  K +1 −kλ2 T ke K + 1 PK (∞) = lim k=1 eλ2 T = lim eλ2 T +  K = ∞, K −kλ2 T (K −k)λ2 T T →∞ T →∞ k=1 ke k=1 ke ∞ (k + 1)e−kλ2 T ∞ = e λ2 T . P∞ (T ) = k=0 −kλ2 T k=0 ke

248

Appendix: Answers to Selected Problems

Differentiating PK (T ) with respect to T , −

K 

k(k + 1)e

−kλ2 T

k=0

K 

ie

−λ2 T

+

i=0

K 

K 

2 −iλ2 T

i e

i=0

(k + 1)e−kλ2 T

k=0

K K   = (k + 1)e−kλ2 T ie−iλ2 T (i − k) k=0

=

i=0

K 

(k + 1)e

−kλ2 T

k=0

=

ie

−iλ2 T

(i − k) +

i=0

K 

" ie

−iλ2 T

(i − k)

i=k

K K K K     (k + 1)e−kλ2 T ie−iλ2 T (i − k) − ke−kλ2 T (i + 1)e−iλ2 T (i − k) k=0

=

k 

K 

i=k

e−kλ2 T

k=0

K 

k=0

i=k

e−iλ2 T (i − k)2 > 0 ,

i=k

which follows that PK (T ) increases strictly with T from (K + 2)/K to ∞. Furthermore, forming PK (T ) − PK +1 (T ), K 

e

−kλ2 T

k=0

K +1 

ie

−iλ2 T



i=0

K 

ke

−kλ2 T

K +1 

k=0

= (K + 1)e−(K +1)λ2 T

K 

e−iλ2 T

i=0

e−kλ2 T − e−(K +1)λ2 T

k=0

= e−(K +1)λ2 T

K 

K 

ke−kλ2 T

k=0

ke−kλ2 T (K + 1 − k) > 0 ,

k=0

which follows that PK (T ) decreases strictly with K from P1 (T ) to exp(λ2 T ). 7.2 Denoting

A N ≡ (N + 2) e−N (N +1)λ3 T /2 − e−(N +1)(N +2)λ3 T /2 , B N ≡ (N + 1)(N + 2)λ3 e−(N +1)(N +2)λ3 T /2 , we have (N +1)λ3 T

1 AN e = −1 = BN (N + 1)λ3



T

e(N +1)λ3 t dt,

0

which increases strictly with N from [exp(λ3 T ) − 1]/λ3 to ∞, and A N /B N = 0 when T = 0. Thus, noting that

Appendix: Answers to Selected Problems

249

N A0 AN j=0 A j ≤ N ≤ , i.e., B0 BN j=0 B j

N e λ3 T − 1 j=0 A j ≤ N , λ3 j=0 B j

the left-hand side of (7.61) goes with T from 0 to ∞. 7.3 We show one simple example of approximation: The left-hand side of (7.61) is (N +1)λ T

∞ −(N +1)(N +2)λ3 T /2 3 e −1 N =0 (N + 2)e ∞ −T −(N +1)(N +2)λ3 T /2 N =0 (N + 1)(N + 2)λ3 e ∞  e(N +1)λ3 T − 1 ≤ −T, (N + 1)λ3 N =0  be a solution of equation which increases strictly with T from 0 to ∞. Letting T ∞  e(N +1)λ3 T − 1 cT −T = , (N + 1)λ c 3 D N =0

 < T ∗. we have 0 < T

Chapter 8

We use the following relations throughout this chapter when p j (t) = (λt) j /j! e−λt ( j = 0, 1, 2, · · · ): For 0 < T1 , T2 < ∞ and N = 0, 1, 2, · · · , N 

N− j

p j (T1 )



j=0 N 

pi (T2 ) =

i=0



j=0

(i + j) pi (T2 ) =

i=0

p j (T1 + T2 ),

j=0

N− j

p j (T1 )

N 

N 

j p j (T1 + T2 ),

j=0

and for 0 < T < ∞, N −1 ∞  

p j (kT )

k=0 j=0

p j (kT )

pi (T ) = 1,

i=N − j

k=0 j=0 ∞  N −1 

∞ 

∞  i=N − j

(i + j) pi (T ) = λT

∞  N −1  k=0 j=0

p j (kT ).

250

Appendix: Answers to Selected Problems

8.1 Summing up the terms of cost c1 in the above equations, N −1 

=

p j (M T ) +

j=0

k=1 j=0

N −1 

N −1 M  

p j (M T ) +

j=0

=

N −1 M  

∞ 

p j [(k − 1)T ]

pi (T )

i=N − j



p j [(k − 1)T ] 1 −

k=1 j=0

N −1 

p j (M T ) +

j=0

M−1 N −1 

"

N − j−1

pi (T )

i=0

p j (kT ) −

k=0 j=0

M  N −1 

p j (kT ) = 1 ,

k=1 j=0

where note that p j (0) = 0 for j ≥ 1 and p0 (0) = 1. The term of cost c2 is N −1 

=

j p j (M T ) +

j=0

k=1 j=0

N −1 

M  N −1 

j p j (M T ) +

j=0

=

N −1 M  

N −1 

(i + j) pi (T )

i=N − j

j p j (M T ) + λT

p j [(k − 1)T ] λT + j −

M  N −1 

j p j (kT ) = λT

k=1 j=0



(i + j) pi (T )

i=0

p j [(k − 1)T ] +

k=1 j=0

M  N −1 

"

N − j−1

k=1 j=0

j=0



∞ 

p j [(k − 1)T ]

M  N −1 

M  N −1 

j p j [(k − 1)T ]

k=1 j=0

p j (kT ) .

k=0 j=0

Noting that ∞   i=N − j

=

kT (k−1)T

(kT − t)d pi [t − (k − 1)T ] =

∞   i=N − j

T

pi (t)dt

0

∞ 1  (i − N + j) pi (T ), λ i=N − j

the term of c3 is M  N −1  k=1 j=0

=

p j [(k − 1)T ]

∞   i=N − j

kT (k−1)T

(kT − t)d pi [t − (k − 1)T ]

M−1 N −1 ∞  1  p j (kT ) (i − N + j) pi (T ) . λ k=0 j=0 i=N − j

Appendix: Answers to Selected Problems

251

Similarly, MT

= MT

N −1 

p j (M T ) +

k=1

j=0

N −1 

M 

N −1 

p j (M T ) +

N −1 

kT

k=1

p j (M T ) +

j=0

=T

kT

N −1 

j=0

j=0

= MT

M 

M−1 N −1 

p j [(k − 1)T ]

∞ 

pi (T )

i=N − j



p j [(k − 1)T ] 1 −

j=0

M−1 

pi (T )

i=0

(k + 1)T

k=0

"

N − j−1

N −1 

p j (kT ) −

j=0

M 

kT

k=0

N −1 

p j (kT )

j=0

p j (kT ) .

k=0 j=0

8.2 Note that Q(0) = 0,

lim Q(M) =

M→∞

∞ 

(i − 1) pi (T ) = λT − 1 + e−λT .

i=1

∞ Next, note that A j ≡ i=N − j (i − N + j) pi (T ) increases strictly with j. Forming Q(M + 1) − Q(M), N −1 

p j [(M + 1)T ]A j

j=0

=

N −1 

i=0

Aj

j=0

=

N −1 

N −1  

pi (M T ) −

N −1  j=0

p j (M T )A j

N −1 

pi [(M + 1)T ]

i=0

p j [(M + 1)T ] pi (M T ) − p j (M T ) pi [(M + 1)T ]



i=0

Aj

 j 

j=0

+

N −1 

N −1   i= j

p j [(M + 1)T ] pi (M T ) − p j (M T ) pi [(M + 1)T ]

i=0

p j [(M + 1)T ] pi (M T ) − p j (M T ) pi [(M + 1)T ]



.



252

Appendix: Answers to Selected Problems

The first term is N −1 N −1  

 A j p j [(M + 1)T ] pi (M T ) − p j (M T ) pi [(M + 1)T ]

i=0 j=i

=

N −1 N −1  

 Ai pi [(M + 1)T ] p j (M T ) − pi (M T ) p j [(M + 1)T ] .

j=0 i= j

Summing up it to the second term, N −1  N −1 

 (Ai − A j ) p j (M T ) pi (M T )

j=0 i= j

pi [(M + 1)T ] p j [(M + 1)T ] − pi (M T ) p j (M T )

 > 0,

because pi [(M + 1)T ] = pi (M T )



M +1 M

i

e−λT

increases with i. Thus, Q(M) increases strictly with M from 0 to λT − 1 + e−λT . 8.3 When N D = N , the left-hand side of (8.8) is N −1 ∞   k=0 j=0

=

N −1 ∞  

⎡ p j (kT ) ⎣

∞ 

i pi (T ) −

i=0

∞  N −1  k=0 j=0

⎤ (i − N + j) pi (T )⎦

i=N − j N − j−1

p j (kT ) N − j +

k=0 j=0

=

∞ 



"

(i − N + j) pi (T )

i=0

(N − j) p j (kT ) −

∞  N −1 

(N − j) p j [(k + 1)T ] = N .

k=0 j=0

Thus, because the left-hand increases strictly with N D , it increases strictly with N D to N .

Appendix: Answers to Selected Problems

253

8.4 Summing up the terms of the above equations, MT

N −1 

M−1 

p j (M T ) +

j=0

= MT

N −1 

+

+

M−1 N −1 ∞  1  p j (kT ) (i − N + j) pi (T ) λ k=0 j=0 i=N − j

N −1 

p j (kT )

j=0

(k + 1)T

N −1 

 ∞

1 λ

N −1 M−1 

M−1 N −1 

p j (kT ) −

∞ 

p j (kT )

pi (T )

(i − N + j) pi (T )

i=N − j M−1 

(k + 1)T

N −1 

k=0

p j (kT ) −

k=0 j=0

 i=0

k=0 j=0

j=0



N − j−1

pi (T ) −

i=0

p j (M T ) −

k=0

=T

pi (T )

i=N − j

p j (M T ) −

j=0 M−1 

p j (kT )

j=0

M−1 N −1 ∞  1  p j (kT ) (i − N + j) pi (T ) λ k=0 j=0 i=N − j

(k + 1)T

N −1 

∞ 



k=0

= MT

(k + 1)T

k=0

j=0 M−1 

N −1 

p j [(k + 1)T ]

j=0

M−1 N −1 ∞  1  p j (kT ) (i − N + j) pi (T ). λ k=0 j=0 i=N − j

8.5 Noting that M 

=

[k(T + T0 ) + T1 ]

k=1

j=0

M 

N −1 

[k(T + T0 ) + T1 ]

k=1

=

M 

M 

∞ 

p j [(k − 1)T ]

pi (T )

i=N − j

[k(T + T0 ) + T1 ]

N −1 

p j [(k − 1)T ] 1 − p j [(k − 1)T ] −

[(k − 1)(T + T0 ) + T1 ]

+(T + T0 )



pi (T )

i=0

j=0

k=1

"

N −1− j

j=0

k=1

=

N −1 

M 

[k(T + T0 ) + T1 ]

k=1 N −1 

N −1 

p j (kT )

j=0

p j [(k − 1)T ]

j=0 M  N −1 

p j [(k − 1)T ] −

k=1 j=0

= T1 + (T + T0 )

M−1 N −1  k=0 j=0

which follows (8.11).

M 

[k(T + T0 ) + T1 ]

k=1

p j (kT ) − [M(T + T0 ) + T1 ]

N −1 

p j (kT )

j=0 N −1  j=0

p j (M T ) ,

254

Appendix: Answers to Selected Problems

8.6 Note that ∞ 

kT

N D −1

k=1

=

∞ 

N − j−1



p j [(k − 1)T ]

j=0

kT

k=1

pi (T ) −



i=0

N D −1 j=0



pi (T )

i=0

N − j−1

p j [(k − 1)T ]

"

N D − j−1

pi (T ) −

i=0

∞ 

kT

k=0

N D −1

p j (kT ) .

j=0

Summing up the above equations, ∞ 

(k + 1)T

k=0

N D −1

pj( jT ) −

j=0

∞  k=0

kT

N D −1

p j (kT )

j=0

∞ N D −1 ∞  1  − p j (kT ) (i − N + j) pi (T ) λ k=0 j=0 i=N − j

=T

∞ N D −1 

p j (kT ) −

k=0 j=0

∞ N D −1 ∞  1  p j (kT ) (i − N + j) pi (T ) . λ k=0 j=0 i=N − j

8.7 Letting L(N D ) be the left-hand side of (8.17), ⎤ ∞  N −1  T 1 L(N ) = i pi (T ) ⎣ + p j (kT )⎦ T + T0 k=0 j=0 i=0 ∞ 



∞  N −1  k=0 j=0



p j (kT )

∞ 

(i − N + j) pi (T ).

i=N − j

The first term of L(N ) is ⎡

⎤ ∞  N −1  T 1 λT ⎣ + p j (kT )⎦ , T + T0 k=0 j=0 and the second term of L(N ) is

Appendix: Answers to Selected Problems ∞  N −1 

255

p j (kT ) λT − N + j −

k=0 j=0

=

N −1 ∞  



(i − N + j) pi (T )

i=0

p j (kT )(λT − N + j) −

k=0 j=0

= λT

"

N − j−1

∞  N −1 

N −1 ∞  

p j (kT )(N − j)

k=0 j=0

p j (kT ) − N .

k=0 j=0

Summing up two terms, L(N ) = N +

λT T1 . T + T0

Furthermore, we easily have L(N D + 1) − L(N D ) > 0, which follows that L(N D ) increases strictly with N D to L(N ).

Chapter 9 9.1 Forming C2 (N + 1) − C2 (N ) ≥ 0,   N c1

   (N +1)T0 1 1 − + c2 F(t)dt N +1+α N +α N T0  N T0 c1 − c2 F(t)dt − N +α 0  N T0   (N +1)T0 c1 (2N + 1 + α) ≥ c3 , = c2 F(t)dt − N F(t)dt − (N + α)(N + 1 + α) 0 N T0 which follows (9.4). Letting L(N , T0 ) be the left-hand side of (9.4), lim L(N , T0 ) = μ ,

N →∞



L(N + 1, T0 ) − L(N , T0 ) = (N + 1)

(N +1)T0 N T0

+

 F(t)dt −



(N +2)T0

F(t)dt (N +1)T0

2(N + 1) c1 > 0, c2 (N + α)(N + 1 + α)(N + 2 + α)

which follows that L(N , T0 ) increases strictly with N to μ. Similarly,

256

Appendix: Answers to Selected Problems

L(N , ∞) ≡ lim L(N , T0 ) = μ − T0 →∞

2N + 1 + α c1 , c2 (N + α)(N + 1 + α)



dL(N , T0 ) = N F(N T0 ) − N (N + 1)F((N + 1)T0 ) − N F(N T0 ) dT0

= N (N + 1) F(N T0 ) − F((N + 1)T0 ) > 0 , which follows that L(N , T0 ) increases strictly with T0 to L(N , ∞). 9.2 Letting L 1 (N ) and L 2 (N ) be the respective left-hand sides of (9.3) and (9.4),  L 1(N ) − L 2 (N ) = (N + 1 + α)(N + α)

(N +1)T0

F(t)dt N T0

  N T0  (N +1)T0 (N + α)(N + 1 + α) c3 F(t)dt − N F(t)dt − 2N + 1 + α c2 0 N T0   (N +1)T0 (N + α)(N + 1 + α) (2N + 1 + α)T0 − (2N + 1 + α) F(t)dt = 2N + 1 + α N T0   (N +1)T0  N T0 c3 F(t)dt + N F(t)dt + − c2 0 N T0   (N +1)T0  N T0 (N + α)(N + 1 + α) (2N + 1) > F(t)dt + F(t)dt 2N + 1 + α N T0 0   (N +1)T0 F(t)dt −N −

N T0

   (N +1)T0  N T0 (N + α)(N + 1 + α) = (N + 1) F(t)dt + F(t)dt > 0 , 2N + 1 + α N T0 0 which follows that L 1 (N ) > L 2 (N ). 9.3 In (9.12),  t   λ(u)H j (u)du = e−R(u) d [R(u)] j+1 /( j + 1)! 0 0  t  t λ(u)H j+1 (u)du = H j+1 (t) + H j+2 (t) + λ(u)H j+2 (u)du . = H j+1 (t) +  t

0

0

Repeating the above calculations, we have (9.12). 9.4 Using dF j+1 (t)/dt = λ(t)H j (t), and differentiating C(T ) with respect to T and setting it equal to zero, we have (9.21). Furthermore, when x ≡ K − z 0 , lim Q(T, x) = lim

T →∞

j→∞

G ( j) (x) − G ( j+1) (x) = 1, G ( j) (x)

lim Q(T, x) = G(x) .

T →0

Differentiating Q(T, x) with respect to T , i.e., differentiating ∞ (i) [R(T ) j /j!]/ i=0 G (x)[R(T )i /i!] with respect to T ,

∞ j=0

G ( j+1) (x)

Appendix: Answers to Selected Problems

257

∞ ∞ λ(T )  ( j+1) R(T ) j  (i) R(T )i ( j − i) G (x) G (x) R(T ) j! i! j=0

=

λ(T ) R(T )

i=0

 ∞

G ( j+1) (x)

j=0

j R(T ) j  (i) R(T )i ( j − i) G (x) j! i! i=0

∞ 



j=0

λ(T ) = R(T )

∞ 

G

( j)

(x)

G

( j)

 j R(T ) j  (i+1) R(T )i ( j − i) (x) G (x) j! i! i=0

j R(T ) j 

j!

j=0

i=0

G (i+1) (x) G ( j+1) (x) R(T )i − ( j − i) G (x) ( j) i! G (x) G (i) (x)

"

(i)

0 because H (t)/t increases strictly with t to h(∞) from Appendix A.1. Furthermore, L 1 (M + 1) − L 1 (M) = (M + 1) [H A ((M + 2)T ) − 2H A ((M + 1)T ) + H A (M T )]  (M+2)T   (M+1)T = (M + 1) h A (t)dt − h A (t)dt > 0 , (M+1)T

10.2

MT

which follows that L 1 (M) increases strictly with M from L 1 (1) > 0 to ∞. Using the same argument in (10.1), the term including H2 in the left-hand side of (10.10) is, from Appendix A.1,  L 2 (M1 ) ≡ M2 M1 (M1 + 1)T1

H2 (M2 (M1 + 1)T1 ) H2 (M2 M1 T1 ) − M2 (M1 + 1)T1 M2 M1 T1

H2 ((M1 + 1)T1 ) H2 (M1 T1 ) − + (M1 + 1)T1 M1 T1 = M2 M1 (M1 + 1)T1 [L(M2 , (M1 + 1)T1 ) − L(M2 , M1 T1 )] > 0 ,



where L(n, x) ≡ [H (nx) − n H (x)]/(nx), and L 2 (M1 + 1) − L 2 (M1 )   M2 (M1 +2)T1  = (M1 + 1) h 2 (u)du − −M2

10.3

M2 (M1 +1)T1   (M1 +2)T1 (M1 +1)T1

h 2 (u)du −

M2 (M1 +1)T1

M2 M1 T1  (M1 +1)T1

h 2 (u)du 

h 2 (u)du

> 0.

M1 T1

Thus, L 2 (M1 ) is always positive and increases strictly with M1 to ∞. It is proved from Appendix A.1 that the numerator L a (N ) increases strictly with N to ∞. Letting L b (N ) ≡

1 − e−a(N +1)S 1 − e−a N S − , N N +1

we have L b (∞) = 0. Forming L b (N ) − L b (N + 1) and t ≡ aS, L(t) ≡ we have

1 − e−N t 1 − e−(N +1)t 1 − e−(N +2)t −2 + , N N +1 N +2

260

Appendix: Answers to Selected Problems

2 , N (N + 1)(N + 2) L (t) = e−N t (1 − e−t )2 > 0 , L(0) = 0 ,

10.4

L(∞) =

which follows that L b (N ) decreases strictly with N from L b (1) to 0. Letting L R (M) be the right-hand side of (10.19), L R (1) = c S N + (c0 − cb )(1 − e−a N T )2 ,

L R (∞) = c S N + (c0 − cb ) ,

L R (M + 1) − L R (M) = (c0 − cb )(M + 1)(1 − e−a N T )2 e−a N M T > 0 , which follows that L R (M) increases strictly with M from L R (1) to c S N + c0 − cb . Letting L L (M) be the bracket of the left-hand side of (10.19), L L (1) = [H (2N T ) − 2H (N T ) − N HS (2T ) + 2N HS (T )] ,  N (k+1)T M−1   N (M+1)T L L (M) = h(u)du − h(u)du N MT

k=0



−N ≥

M−1   (M+1)T MT

k=0

 −

=

M−1   (M+1)T MT

k=0

≥T

M−1 

N kT (M+1)T MT

h(u)du 

h S (u)du +

h 1 (u)du −

 h S (u)du

(k+1)T kT

MT

(k+1)T

kT



h(u)du − (M+1)T



h S (u)du −



(k+1)T

 h S (u)du

kT (k+1)T



h 1 (u)du

kT

[h 1 (M T ) − h 1 ((k + 1)T )] ,

k=0

when h(t) ≡ h 1 (t) + h S (t). Thus, when h 1 (t) increases strictly with t to ∞, L L (∞) = ∞. 10.5 Setting that xi ≡ exp(ωZ i ) (i = 1, 2), (10.26) is   c2 x2 + (c3 − c2 ) x1 1  c1 + . C1 (x1 ) = T x2 + x1 (x1 − 1) 1 (x1 ) with respect to x1 and setting it equal to zero, Differentiating C x12 + 2

c2 x2 c3 x1 − x2 = 0 . c3 − c2 c3 − c2

Appendix: Answers to Selected Problems

261

Thus, ⎤ ⎡    ωZ 2 ωZ 2 2 ωZ 2 c e e e 1 c c 2 2 3 ⎦. + + Z 1∗ = log ⎣− ω c3 − c2 c3 − c2 c3 − c2

10.6

Similarly, we have (10.38). Solving Ii for i = n − 1, n − 2, n − 3 · · · , we easily have In−1 = eω(Z n −Z n−1 ) ,

In−2 =

n−1 

eω(Z k+1 −Z k ) − 1 ,

k=n−2

In−3 =

n−1 

eω(Z k+1 −Z k ) − 2 , · · · ,

k=n−3

and consequently, Ij =

n−1 

eω(Z k+1 −Z k ) − (n − 1 − j) .

k= j

Thus, Ii = eω(Z i+1 −Z i ) + eωZ i+1

n−1   −ωZ k  e − e−ωZ k+1 k=i+1

⎤ ⎡ n−1  eω(Z j+1 −Z j ) − (n − 1 − k)⎦ ×⎣ j=k

= eω(Z i+1 −Z i ) + eωZ i+1

  n−1

eω(Z j+1 −Z j )

j=i+1



n−1 

j   −ωZ k  e − e−ωZ k+1 k=i+1

 −ωZ k  e − e−ωZ k+1 (n − 1 − k)

k=i+1

=

n−1 

eω(Z j+1 −Z j ) − (n − 1 − i) .

j=i

Similarly, solving Mi for i = n − 1, n − 2, n − 3,· · · , we have Mi = eω(Z n −Z n−1 ) − 1 .