Structural Durability: Methods and Concepts: Enabling Cost and Mass Efficient Products [1st ed.] 9783030481728, 9783030481735

This book provides methods and concepts which enable engineers to design mass and cost efficient products.


English · Pages XV, 209 [221] · Year 2020


Table of contents:
Front Matter ....Pages i-xv
Motivation (Ruediger Heim)....Pages 1-11
Operating Life Analysis (Ruediger Heim)....Pages 13-66
Influencing Factors for Fatigue Strength (Ruediger Heim)....Pages 67-107
Fatigue Strength Under Spectrum Loading (Ruediger Heim)....Pages 109-145
Structural Durability (Ruediger Heim)....Pages 147-208
Back Matter ....Pages 209-209

Structural Integrity 17 Series Editors: José A. F. O. Correia · Abílio M. P. De Jesus

Ruediger Heim

Structural Durability: Methods and Concepts Enabling Cost and Mass Efficient Products

Structural Integrity Volume 17

Series Editors
José A. F. O. Correia, Faculty of Engineering, University of Porto, Porto, Portugal
Abílio M. P. De Jesus, Faculty of Engineering, University of Porto, Porto, Portugal

Advisory Editors
Majid Reza Ayatollahi, School of Mechanical Engineering, Iran University of Science and Technology, Tehran, Iran
Filippo Berto, Department of Mechanical and Industrial Engineering, Faculty of Engineering, Norwegian University of Science and Technology, Trondheim, Norway
Alfonso Fernández-Canteli, Faculty of Engineering, University of Oviedo, Gijón, Spain
Matthew Hebdon, Virginia State University, Virginia Tech, Blacksburg, VA, USA
Andrei Kotousov, School of Mechanical Engineering, University of Adelaide, Adelaide, SA, Australia
Grzegorz Lesiuk, Faculty of Mechanical Engineering, Wrocław University of Science and Technology, Wrocław, Poland
Yukitaka Murakami, Faculty of Engineering, Kyushu University, Higashiku, Fukuoka, Japan
Hermes Carvalho, Department of Structural Engineering, Federal University of Minas Gerais, Belo Horizonte, Minas Gerais, Brazil
Shun-Peng Zhu, School of Mechatronics Engineering, University of Electronic Science and Technology of China, Chengdu, Sichuan, China
Stéphane Bordas, University of Luxembourg, Esch-sur-Alzette, Luxembourg
Nicholas Fantuzzi, DICAM Department, University of Bologna, Bologna, Italy
Luca Susmel, Civil Engineering, University of Sheffield, Sheffield, UK

The Structural Integrity book series is a high-level academic and professional series publishing research on all areas of Structural Integrity. It promotes and expedites the dissemination of new research results and tutorial views in the structural integrity field. The Series publishes research monographs, professional books, handbooks, edited volumes and textbooks with worldwide distribution to engineers, researchers, educators, professionals and libraries. Topics of interest include but are not limited to:

– Structural integrity
– Structural durability
– Degradation and conservation of materials and structures
– Dynamic and seismic structural analysis
– Fatigue and fracture of materials and structures
– Risk analysis and safety of materials and structural mechanics
– Fracture mechanics
– Damage mechanics
– Analytical and numerical simulation of materials and structures
– Computational mechanics
– Structural design methodology
– Experimental methods applied to structural integrity
– Multiaxial fatigue and complex loading effects of materials and structures
– Fatigue corrosion analysis
– Scale effects in the fatigue analysis of materials and structures
– Fatigue structural integrity
– Structural integrity in railway and highway systems
– Sustainable structural design
– Structural loads characterization
– Structural health monitoring
– Adhesives connections integrity
– Rock and soil structural integrity

** Indexing: The books of this series are submitted to Web of Science, Scopus, Google Scholar and Springerlink ** This series is managed by team members of the ESIS/TC12 technical committee. Springer and the Series Editors welcome book ideas from authors. Potential authors who wish to submit a book proposal should contact Dr. Mayra Castro, Senior Editor, Springer (Heidelberg), e-mail: [email protected]

More information about this series at http://www.springer.com/series/15775

Ruediger Heim

Structural Durability: Methods and Concepts Enabling Cost and Mass Efficient Products


Ruediger Heim
Research Division Structural Durability
Fraunhofer Institute for Structural Durability and System Reliability LBF
Darmstadt, Germany

ISSN 2522-560X ISSN 2522-5618 (electronic)
Structural Integrity
ISBN 978-3-030-48172-8 ISBN 978-3-030-48173-5 (eBook)
https://doi.org/10.1007/978-3-030-48173-5

© Springer Nature Switzerland AG 2020. This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, express or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland

Preface

Often, the term metal fatigue is introduced alongside highly visible, spectacular mechanical failures of bridges, aircraft structures, power turbines, or other structures. In 1982, an extensive study by the Battelle Columbus Laboratories was published in which the annual costs of material fracture to the US economy were estimated at around 119 billion USD, which is equivalent to about 300 billion USD and roughly 1.5% of the GDP today. More than 53% of those costs were considered reducible, to a large extent through the adoption of best practices, technology transfer, and education.

Materials and structures which are subjected to fluctuating loads in sufficient numbers are liable to fail by fatigue. In that study, hundreds of components were examined with regard to their individual failure cause, and about 44% failed due to improper maintenance. Half of those incidents showed fatigue-related fractures. For other failure causes—such as material defects or manufacturing issues—fatigue was the dominant mechanism that finally led to the loss of structural integrity. Fatigue is the primary cause of unintentional material fractures of parts which are dynamically loaded. At the component level, about 80% of mechanical failures are somehow related to fatigue, alongside other mechanisms such as wear, corrosion, or creep.

Many people still recall that in 1979 a McDonnell Douglas DC-10 crashed into the ground shortly after taking off from the runway at O'Hare International Airport in Chicago. All passengers and the entire crew were killed, as well as two more people on the ground, because the left engine had separated from its wing due to maintenance-induced fatigue damage. Another dramatic accident was the Eschede derailment in 1998, when 101 people were killed because an ICE-type high-speed train of Deutsche Bahn derailed and crashed into a concrete road bridge in the north of Germany. A few wheel sets of that ICE were equipped with resilient wheels which had a rubber damping ring between the wheel body and the rail-contacting steel tyre. The latter became thinner due to wear, which limited its load-carrying capability and resulted in crack initiation and growth until it fractured.


The word fatigue is derived from the Latin 'fatigo', which means 'to get tired'; in a technical context, it was used for the very first time in 1839 in a book on mechanics by Jean-Victor Poncelet, who had gained experience with mechanical failures of cast iron axles and supposed that the axles became tired, or fatigued, after a certain period of operational use before they finally broke. Preliminary assumptions about the nature of metal fatigue were developed during the industrial revolution in Europe, when rail vehicles failed under cyclic loads. Later, this topic became even more important because of serious accidents which took place around World War II and in the years after. At that time, simple and low-cost Liberty-class naval cargo ships were built in large numbers using extensive welding instead of time-consuming riveting. Soon, subzero temperatures led to brittle fractures of the welds and the parent metals.

We now understand that fatigue failures can result in injuries, downtime and reduced availability, repair and rework, scrap, recalls, lost output and liability claims. Fatigue-related incidents simply constitute a significant part of costs for a manufacturer.

It sounds like fatigue is synonymous with disaster—a micro-mechanical mechanism initiating cracks which start to grow and reach a macroscopic scale, violating structural integrity. While the fatigue of structural materials is virtually inevitable with certain types of loading, the use of advanced design methods is concerned with avoiding, or at least mitigating, the often very negative effects of material fatigue. Structural durability is one such method, which leads to very useful engineering findings on suitability for use and operating life. But structural durability is much more than just a risk assessment method, and here we want to look at it in a slightly different way: Structural durability is a genuine design method for particularly sustainable products and for reliable lightweight construction.

We basically understand that fatigue is progressive structural damage of materials and components under cyclic loads. Hence, fatigue life is an important characteristic of an engineering component and is measured by the number of cycles before a failure occurs. Since safety, reliability, and profitability are core criteria for the development and successful commercialization of products, managing fatigue life according to customer expectations and product requirements is an enabler for sustainability: Lifetime-oriented design makes it possible to use the minimum quantity of material, or the most cost-efficient material, without compromising fatigue life. Once we understand how to design a structure for a specified operational life, overengineered features and inefficient material utilization can be avoided. In this way, structural durability becomes a powerful method enabling sustainable design.

So, our mission is to analyze the in-service conditions carefully and to get the component's strength characteristics aligned to the requirements from the operational loading. Having a well-balanced product which recognizes both stress and strength in a proper way is much more than a safety indicator—it is a fundamental approach for a minimal-effort design. That is the path toward mass and cost-efficient products which serve profitability while not violating future requirements of the environment and societies.


The structure of this book is based on my lecture at the University of Applied Sciences in Kaiserslautern. For many years, I have been teaching the basics of structural durability there in an international master's course of study, in which particular importance is attached to a consistent treatment of both loading and strength. In structural durability, material and component strength properties are combined with typical operational load sequences to draw a conclusion regarding the expected service life. A consistent and coherent presentation in this respect for students and engineers in mechanical engineering and mechatronics has been missing until now. The present book is intended to close this gap. It is a rather brief introduction, even in terms of its scope, which actually aims to be read straight from the first to the last page. This book is a quick introduction to structural durability and is intended to provide basic concepts and methods, i.e., an understanding of how structural durability works. It is therefore not a specialist book for specialists—there are other excellent works by well-known authors for this purpose.

I would like to thank Springer Nature for its courage to get involved in such an experiment. I thank the Fraunhofer LBF and the University of Applied Sciences in Kaiserslautern for the opportunity to work, research, and teach in the field of structural durability. And, finally, I would like to thank my wife Sandra for her patience with a writing spouse who, for more than a year, put his thoughts down on paper every weekend at half past five in the morning. Of course, that is not quite accurate: even I was working on a computer rather than on paper. Nevertheless, my wife was really patient with me.

Darmstadt, Germany

Ruediger Heim

Contents

1 Motivation .... 1
    1.1 Environment and Emissions .... 1
    1.2 Business Sustainability .... 5
    References .... 11

2 Operating Life Analysis .... 13
    2.1 Stress–Strength Interference .... 13
    2.2 Load Analysis .... 21
        2.2.1 Counting Methods .... 37
        2.2.2 Spectrum .... 44
        2.2.3 Finite Element Analysis .... 45
    2.3 Strength Analysis .... 46
        2.3.1 Fatigue .... 46
    2.4 Damage Calculation .... 56
    References .... 65

3 Influencing Factors for Fatigue Strength .... 67
    3.1 Repeating Tests .... 67
    3.2 Material- and Process-Related Effects .... 72
        3.2.1 Material Strength .... 73
        3.2.2 Size Effects .... 75
        3.2.3 Geometry .... 77
        3.2.4 Surface Conditions .... 86
        3.2.5 Applied Loads .... 88
        3.2.6 Mean Stress .... 93
        3.2.7 Process-Related Conditions .... 97
        3.2.8 Environmental Conditions .... 103
    References .... 106

4 Fatigue Strength Under Spectrum Loading .... 109
    4.1 Blocked-Program Testing .... 109
    4.2 Standardized Random-Type Testing .... 111
    4.3 Limitations of the Miner's Rule .... 116
    4.4 Microplastic Straining .... 117
        4.4.1 Local Stress Approaches by Math Modeling .... 120
    4.5 Weldings .... 126
        4.5.1 Standards and Guidelines .... 126
        4.5.2 Structural Hot Spot Stress Concept .... 136
        4.5.3 Effective Notch Stress Approach .... 139
        4.5.4 Fracture Mechanics Approach .... 142
    References .... 144

5 Structural Durability .... 147
    5.1 Working with Numbers .... 147
    5.2 Experimental Testing .... 151
        5.2.1 Severity Level of Operational Loads .... 159
        5.2.2 Sufficient Amount of Test Results .... 162
        5.2.3 Accelerated Life Testing .... 164
        5.2.4 Probability of Failure .... 170
    5.3 An Application: Biaxial Wheel Fatigue .... 172
        5.3.1 Introduction .... 172
        5.3.2 Wheel Testing .... 174
        5.3.3 ZWARP .... 181
        5.3.4 Math Modeling .... 185
    5.4 Beyond 'Safe Life' and Traditional Durability Performance .... 190
        5.4.1 Introduction .... 190
        5.4.2 Fracture Mechanics Approach for a Multitiered Integrity Concept .... 196
        5.4.3 Standards for Axle Design .... 199
        5.4.4 Metro Rail Vehicle Integrity Concept .... 200
    5.5 Conclusion .... 206
    References .... 207

Index .... 209

Abbreviations

AC – Alternating current
ASD – Acceleration spectral density
ASIP – Aircraft structural integrity program
ASME – American Society of Mechanical Engineers
ASTM – American Society for Testing and Materials
BCC – Body-centered cubic
BDC – Bottom dead center
BRICS – Brazil, Russia, India, China, and South Africa
BS – British Standard
C – Complexity of object
CAD – Computer-aided design
CARLOS – Car loading standard
CoG – Center of gravity
CPSC – Consumer Product Safety Commission
CS – Coordinate system
DAQ – Data acquisition
DC – Direct current
DIN – Deutsche Industrie Norm
DOF – Degree of freedom
EAC – Environmentally assisted cracking
EN – European Norm
EoL – End of life
EPS – Equivalent pre-crack size
EV – Electric vehicle
FAT – Fatigue class
FAW – Front axle weight
FCC – Face-centered cubic
FEA – Finite element analysis
FEM – Finite element method
GDP – Gross domestic product
GFC – Guide function for cornering
GM – General Motors
HAZ – Heat-affected zone
HCF – High cycle fatigue
HCP – Hexagonal closest packed
HFP – High frequency peening
ICE – Internal combustion engine
IIW – International Institute of Welding
LCF – Low cycle fatigue
LEFM – Linear elastic fracture mechanics
M – Aesthetic measure
MEMS – Microelectromechanical systems
MIC – Microbiologically influenced corrosion
MPI – Magnetic particle inspection
MY – Model year
NDI – Non-destructive inspection
NVH – Noise, vibration, and harshness
O – Order/symmetry
OECD – Organization for Economic Co-operation and Development
OEM – Original equipment manufacturer
PDE – Partial differential equations
pdf – Probability density function
PG – Proving ground
PoD – Probability of detection
PSD – Power spectral density
PWT – Post-weld treatment
Q&T – Quenched and tempered
RAW – Rear axle weight
RFS – Required fatigue strength
RMS – Root-mean-square
SAE – Society of Automotive Engineers
SIF – Stress intensity factor
S-N – Stress vs. life
SUV – Sports utility vehicle
TDC – Top dead center
TIG – Tungsten inert-gas
UPT – Ultrasonic peening treatment
USAF – US Air Force
UTS – Ultimate tensile strength
VHCF – Very high cycle fatigue
Z – Standard score
ZWARP – Zweiaxiale Räderprüfung (biaxial wheel testing)

Symbols

A – Cross section after fracture
A0 – Initial cross section
a – Crack size
a0 – Initial crack growing from material discontinuities
a* – Notch-related length
a – Constant in Wöhler equation
b – Coefficient in Wöhler equation
C – Confidence
D – Wheel press fit diameter
D – Damage
Drel – Relative damage
d – Adjacent cross section to wheel press fit
da/dN – Fatigue crack growth rate
E – Elastic (Young's) modulus
Fv,stat – Static force in vertical direction
Fx – Force in x-direction
Fy – Force in y-direction
Fz – Force in z-direction
H – Cumulative number of cycles
H0 – Reference number of cycles
i – Number of repetitions
K – Stress intensity factor
KC – Critical stress intensity factor
Kf – Fatigue notch factor
Kt – Theoretical stress concentration factor
k – Slope of S-N curve
k* – Slope of S-N curve after kneepoint
k' – Slope of S-N curve after kneepoint for damage accumulation
l – Number of lifetimes
M – Mean stress sensitivity
Mx – Moment around x-axis
My – Moment around y-axis
Mz – Moment around z-axis
m – Actual number of load cycles
N – Number of cycles to failure
Nk – Number of cycles to failure at kneepoint
Ncalc – Calculated number of cycles to failure
Ntest – Number of cycles to failure from tests
n – Fatigue life cycles
ntotal – Total number
P – Probability
Pe – Probability of exceedance
Ps – Probability of survival
p – Interval
Q – Constant crack growth term
R – Load ratio/stress ratio
R² – Correlation coefficient
Ra – Average roughness
Rz – Mean roughness depth
r – Crack growth exponent
r – Effective notch root radius
Sa – Stress amplitude
Sa,e – Stress amplitude at endurance limit
Sa,k – Stress amplitude at knee point
Sa,max – Maximum stress amplitude
Smean – Mean stress of cyclic loading
SD – Standard deviation
s – Growth rate adjustment parameter of the Walker equation
TN – Scatter related to number of cycles
TSa – Scatter related to stress amplitudes
tflight – Operational flight hours
t – Thickness
V – Highly stressed volume
X – Stress as random variable
X – Normally distributed stress
Y – Strength as random variable
Y – Normally distributed strength
C – Parameter of the Palmgren equation
ΔKth – Threshold stress intensity
U – Interference random variable
εa – Strain amplitude
d – Test time factor
η – Length of success run/sample size
µ – Expected value/mean value
m – Safety margin
σ – Sigma units
σallow – Allowable stress
σhs – Hot-spot stress
σyield – Yield stress
σUTS – Ultimate tensile stress
σ∞ – Far-field stress value
u – Load severity factor
/ – Ductility parameter
v – Spectrum shape parameter
w – Geometry parameter
x – Geometry and load dependent parameter

Chapter 1

Motivation

Abstract Sustainability is a widely used, yet new degree subject that attempts to bridge social science with future technologies. Often, we think about reducing carbon emissions and protecting the environment, while driving innovation and not compromising the way of life we are used to. For transport and mobility, we have to look for systems and solutions which are far more eco-friendly than everything we did in the past. But it is not only about electric vehicles and renewable fuel sources: Transportation is very much about lightweighting and energy efficiency too. Hence, there is a strong need for methods and concepts to design products which are ready to manufacture, lightweight, and yet durable enough to withstand the rigors of challenging use. In the first chapter of this book, emphasis is placed on the impact of lightweight design on energy savings in transportation, which is the hidden background of structural durability.

1.1 Environment and Emissions

The Sustainable Development Goals of the United Nations are a blueprint to achieve a better and more sustainable future for all. These goals address the challenges we face globally, including those related to climate and environmental degradation. The concept of sustainability was initially defined by Lester Brown in 1981 and expressed as a society that is able to satisfy its needs without diminishing the chances of future generations.

In these societies, a number of indicators related to society, environment and economy point to a high level of wellbeing. Since 2006, a Sustainable Society Index has been published which today is available for 154 countries, comprising 99% of the world population. While the scores of human wellbeing and economic wellbeing have increased on average since then, the situation for the environment has become unhealthier because of negative effects related to renewable water resources, energy use and greenhouse gas emissions. Energy is the dominant contributor to climate change and accounts for 60% of the global greenhouse gas emissions. But it is not only about carbon dioxide: Pollution is a severe issue, especially for the huge cities in India—half of the 20 most polluted cities in the world are in India. Two-thirds of all households in India even today rely on wood, coal, charcoal or animal waste for cooking and heating—globally, that concerns 3 billion people. The population of India will grow to 1.6 billion in 2040, and the electricity demand will be four times higher than today.

What do we have to expect for the future when talking about environmental issues due to energy use? While the world population today is 7.8 billion people, growing at a rate of around 1.05% per year, the total population is expected to be more than 9 billion by 2040. As an effect of social and economic sustainability, the per capita gross domestic product (GDP) is projected to rise significantly, particularly in the countries outside of the Organization for Economic Co-operation and Development (OECD): A huge number of people will join the global middle class. By 2030, the global middle class will likely expand from 3 billion to more than 5 billion people. For this expanding population, the rising living standards come with a high need for energy, which raises the global energy demand by about 25% until 2040. Nearly all growth will be in non-OECD countries, where the energy demand will increase by about 40% and the electricity demand will almost double.

That is more than understandable when we look at the figures of the energy demand per person and the GDP per capita for different countries (Fig. 1.1). For India, Japan and the USA, we find almost the same value when the energy demand is divided by the GDP per capita, though the individual economic power is quite different. The same is found for China, Germany and Singapore, but those are far more energy efficient.

Fig. 1.1 Energy demand versus GDP per capita for different countries

The simple message from such a chart is that a significant expansion of the middle class in the emerging countries and their greater access to modern energy in homes, as well as the significant increases in personal and commercial transportation needs, will raise the energy demand per person radically. Hence, we have to look for substantial energy efficiency improvements to curb the growth in the global demand for energy.

The growth of economic activity and personal income inevitably leads to an increase in commercial transportation as well as personal mobility demands. The global fleet will go up from about 1 billion cars today to likely reach the two billion mark in 2040—not including commercial vehicles, which today make up approximately 26% of the total annual motor vehicle production in the world. While in Europe the number of vehicles per 1000 people will increase only slightly, by 10%, until 2040, we will see a much bigger increase in some of the BRICS¹ countries. In Brazil, the access to four-wheeled vehicles for personal mobility will double, and in China it will triple. A huge increase is expected for India too, but remaining at a comparatively lower number because that market will see an extraordinarily huge number of two- and three-wheelers. Motorcycles offer a lower-cost entry point to personal mobility, with ownership particularly high in India and other countries in Asia Pacific.

¹ BRICS is an international political organization of leading emerging market countries consisting of Brazil, Russia, India, China and South Africa.

Though we will see significant changes in the future fleet mix and a huge number of hybrids and electric vehicles (EV), the liquid fuel demand for light-duty vehicles is still expected to be relatively flat to 2040. Currently, there are slightly more than 2 million electric vehicles in the global fleet, or about 0.2% of the total, but for 2040 it is expected that 55% of all new car sales and one-third of the global fleet will be electric [1].

All those numbers and figures immediately make it clear that we have to anticipate an increasing global population and number of middle-class people, which will make the global fleet much, much bigger than today. Consequently, we have to look into every single aspect where efficiency gains could come from. Among other aspects, our special attention is immediately aimed at vehicle lightweight design.

The fundamental relation between the mass of a vehicle and its energy consumption is generally known by the public: Once people ride a heavy bicycle and may have to transport additional luggage, they understand the value of lightweight design. All vehicles need energy to operate because of the driving resistances which they have to overcome. Most of those resistances depend on the mass of the vehicle. Only the aerodynamic drag of the vehicle is not influenced by the mass. Different from that, the rolling resistance, the inclination resistance and the acceleration resistance show a linear relation to the vehicle's weight.

For a mix of routes—including city, country road and motorway—the fuel savings of classical car models range from 0.15 to 0.5 l per 100 km for a 100 kg weight reduction. The higher fuel efficiency is achieved when lightweighting comes together with adjustments in the transmission to benefit from the different power-to-weight ratio [2]. As a rule of thumb, a value of 0.3 l per 100 km is often used to explain the effect of a 100 kg weight reduction for a passenger car under normal operation.

Now, let us think about the potential savings throughout the whole operating lifecycle of such a car—assuming it would have 100 kg less weight and travels a mileage of 200,000 km. The energy savings would be in a range of 25 GJ for a car. Much higher savings would be achieved for such cars when used for taxi operation: Here, the operating lifecycle comes with a much higher mileage and the mass-related stop-and-go of urban driving, which then would account for lifecycle savings in the range of almost 120 GJ [3].

Substantial, weight-related savings are expected for classical car models, but for electric vehicles too. Though lightweighting for EVs is sometimes thought not to be of vital importance because those cars get energy back from braking, the vehicle's weight is still a major parameter for its energy consumption. Based on my own research work with an EV fleet, the energy consumption can be determined as 0.1–0.11 Wh km⁻¹ kg⁻¹. That fleet included a range of vehicles from 1345 to 2125 kg. Certainly, brake energy recovery helps the heavy electric car models to improve their efficiency, but the first thing one has to consider is the energy that is needed to accelerate a heavy mass: You may get some energy back when braking, but you have spent so much more energy to get such a car up to speed—and you love to do that as often as you can. Hence, there is no serious reason to neglect the aspect of lightweighting an electric car.

Due to the mass and size of the Li-ion energy storage systems, that mission is not an easy one. Automakers therefore like the idea of producing sports utility vehicles (SUV), which have become one of the most popular vehicle segments in the past years: The SUVs are bigger and taller and have more packaging space for the battery system under the floor. A premium electric SUV then has an engine power of 300 kW and a mass of more than 2500 kg. Though they are zero-emission vehicles, they are far from being energy efficient. Consequently, the European Union wants to see for future urban mobility a totally different kind of city car: a small, light quadricycle with a maximum power of 6 kW and an enclosed driving and passenger compartment accessible through a maximum of three sides. Those vehicles are four-wheeled and have a top speed ≤ 45 kph, while the operational mass is ≤ 425 kg, which does not include the battery. Compared to the premium electric SUV, that bespoke city car needs only half of the space, and its mass is reduced to a fifth of the SUV's. The SUV itself has 50 times more power than the light quadricycle, which makes it clear that electric mobility is not automatically a fundamental part of sustainable future transport.

For heavy commercial vehicles—such as a truck or an express route bus—the operational mileage is typically more than 1 million km. For a 100 kg weight reduction, the lifecycle energy savings then turn out to be about 30 GJ, which is even more than for average passenger cars. And it does not sound too difficult to reduce the weight of those vehicles by 100 kg, does it? A European 4 × 2 tractor unit with 500 HP weighs 7500 kg.
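To make the arithmetic behind these per-vehicle figures explicit, the following minimal Python sketch reproduces them; the fuel heating value of about 36 MJ/l and the 0.105 Wh km⁻¹ kg⁻¹ midpoint are illustrative assumptions, not values taken from the cited studies.

    # Back-of-the-envelope lifecycle energy savings for a 100 kg weight reduction.
    MILEAGE_KM = 200_000                          # assumed passenger-car lifetime mileage

    # Classical car: 0.3 l/100 km rule of thumb, ~36 MJ/l heating value (assumed).
    fuel_saved_l = 0.3 * MILEAGE_KM / 100         # -> 600 l over the vehicle life
    ice_saving_gj = fuel_saved_l * 36e6 / 1e9     # -> ~21.6 GJ tank-to-wheel

    # Electric car: 0.1-0.11 Wh per km and kg, midpoint 0.105 (assumed).
    ev_saving_kwh = 0.105 * 100 * MILEAGE_KM / 1000   # -> ~2100 kWh from the battery
    ev_saving_gj = ev_saving_kwh * 3.6e6 / 1e9        # -> ~7.6 GJ

    print(f"ICE car: {ice_saving_gj:.1f} GJ, EV: {ev_saving_gj:.1f} GJ")

The simple tank-to-wheel estimate of roughly 22 GJ lands somewhat below the ~25 GJ quoted above, which plausibly also covers upstream fuel production; the taxi (almost 120 GJ) and truck (about 30 GJ) figures scale accordingly with mileage and duty cycle.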


Trains and trams can save even more energy when getting lighter, due to frequent stops at train stations and, again, because of their huge operational mileage—which can be 1 million km annually for a high-speed train. A 100 kg weight reduction of trains and trams then counts for 65–125 GJ savings throughout their lifecycle.

The most effective lightweighting starts when doing the mass optimization for aircraft. Does that come as any surprise to us? Isn't the aircraft industry the major player when it comes to the use of high-end composite materials or titanium? Indeed, a 100 kg weight reduction would give lifecycle energy savings in the range of 10,000–20,000 GJ for short-distance aircraft and even higher savings for long-distance planes. Thus, the lifecycle-related energy savings for an aircraft are expected to be more than 100 times higher than for rail vehicles and almost 1000 times higher than for passenger cars.

Where do we go from here? Does it make any sense to look for lightweight design in automotive engineering? Isn't it right to focus on ride comfort and NVH (noise, vibration and harshness) and build heavy SUVs, because there is a comparatively small impact of automotive lightweighting? Shouldn't we spend all the effort for mass optimization on trains and planes only? The answer is: Certainly not, because there is such a huge number of cars and duty vehicles in the world. Though the individual impact of a 100 kg weight reduction is much smaller for road vehicles, the extraordinarily large number of vehicles makes the difference: Assuming a 100 kg weight reduction for all road vehicles, trains and aircraft in the world, the annual energy savings today would be more than 2600 Peta-Joule, a nicely shaped number in the range of 10¹⁸, and the same energy as 420 million barrels of oil equivalent. Almost 90% of these savings would come from lightweighting of the road vehicles, while the contribution of the trains would be negligible. And there is another interesting aspect we have to look at: For a sustainable future, we primarily have to get the urban environment managed, since even today the cities contain more than half of the world's population and create almost 80% of the greenhouse gas emissions, while they cover just 2% of the globe's surface.
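As a quick plausibility check on that fleet-level number, converting 2600 PJ into barrels of oil equivalent, assuming the common value of roughly 6.1 GJ per barrel, lands very close to the figure quoted above:

    # Convert the assumed fleet-wide annual savings into barrels of oil equivalent.
    PJ = 1e15                # joules per petajoule
    BOE = 6.1e9              # joules per barrel of oil equivalent (assumed ~6.1 GJ)

    annual_savings_j = 2600 * PJ
    print(f"{annual_savings_j / BOE / 1e6:.0f} million barrels of oil equivalent")
    # -> about 426 million, consistent with the ~420 million quoted in the text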

1.2 Business Sustainability

Often, sustainable business simply means managing a business's financial, environmental and social opportunities, obligations and risks. Managing that triple bottom line keeps the business aligned to healthy economic, environmental and social systems and helps companies to ensure resiliency over time. Among a range of best practices to foster business sustainability, we often find the term life cycle analysis, which means the systematic analysis of the environmental and social impact of the products a company uses and produces. Tech firms therefore may introduce a paperless office environment, and cell phone manufacturers may look for conflict-free supply chains for the minerals and metals they need.


But business does not work without profitability, which is often a short-term goal for stock corporations looking to the next quarterly report. As an example, we may look back to the early 1990s, when a Spanish-born auto executive helped General Motors (GM) to slash costs and to stop the carmaker's huge losses both in Europe and the USA by ripping up long-standing contracts with GM suppliers and demanding ever-lower prices and faster deliveries. The annual savings started at US$ 1.1 billion in 1991 and more than doubled in the following year. For a while, GM got a steroid-type boost in profitability: At the end of April 1992, the corporation announced its first quarterly profit since the second quarter of 1990, and a year later, it reported a first-quarter net income of more than US$ 500 million because revenue went up and many costs down. When looking at the average retail price of cars, GM's procurement strategy seems comprehensible, because purchased materials and parts contribute more than 40% of the pro-rata costs. Engineering, assembly and labor costs are 15% of the retail price, while overhead costs, depreciation and distribution add up to slightly more than 35% (Fig. 1.2). But in the long term, the damage to GM from beating up on suppliers for lower costs was immense: GM went from being the most preferred customer of the automotive supply base to becoming the least preferred one. That happened at a time when technology leadership started to shift from the original equipment manufacturer (OEM) to the vendors, and consequently, GM became the last customer to receive innovative products and technologies. In a supply industry survey, GM ranked last on supplier relations among all carmakers having production sites in the USA [4]. GM won immediate savings but ignored total cost: Too often, parts from lower-cost alternatives flunked quality tests. GM's quality hurt for years after that Spanish-born auto executive left the company.

Fig. 1.2 Material, warranty and profit as part of the retail price in the car industry


Typically, warranty-related costs are somewhat more than 4% of the retail price. With an average price of a new car in Germany of more than 31,000 Euro in 2018, that means a price tag of almost 1400 Euro for repair parts and services within the 24-month warranty period. Often, the company's profit from car production and sales is smaller than those pro-rata costs for warranty claims. And that is a point a sustainable business strategy has to consider: Quality and durability improvements lead to better profit margins by lowering the expenses for unwanted repair parts and services.

When talking about the term durability, we mean the ability of a piece of equipment, a machine or a material to last for a long time without significant deterioration. Hence, durable products resist the effects of heavy use, drying, wetting, heating, freezing, corrosion, oxidation and many more. Avoiding wear and tear simply means a much smaller effort for the manufacturer for product fixes, or even for retrieving and replacing defective goods for its customers.

In January 2017, a leading smartphone manufacturer from South Korea summarized that battery fires and explosions—quite literally—sparked two recalls for a specific model, due to irregularly sized batteries causing the first round of battery fires in August 2016 and a number of different manufacturing issues which created trouble a second time. The part itself was a sizable 3500 mAh (milliamp hour) lithium battery that was packaged in a 7.9 mm thin phone. Those batteries were prone to exploding because of missing insulation tape and sharp edge protrusions, which caused severe issues in nearly 100 cases in the USA alone. The manufacturer, together with the US Consumer Product Safety Commission (CPSC), officially recalled the complete product line in the USA. Finally, the OEM got back 96% of the 3 million devices which were sold and started to extract more than 150 t of valuable rare earth metals, gold, silver, copper and cobalt to reuse them for new products. The ill-fated smartphone created total costs of more than $5 billion for the company [5].

Errors both in design and manufacturing, which affected batteries from two different suppliers, were the root cause of such a huge recall—and that can happen any time again, not only in the smartphone business. Electric cars heavily rely on the technology of lithium-ion batteries too. And while EV battery fires or explosions do not yet happen with unusually high frequency, the battery itself shows gradual energy or power loss with time and use. Again, durability could be an issue—specifically when the battery has to be replaced before the end of vehicle life. Since the lithium-ion battery system of an EV is the most expensive single part, its useful life should be aligned to the vehicle's life, which is—as we saw before—12–15 years or a mileage of at least 200,000 km. Does a customer accept a costly replacement much before that time? That is certainly not a genuine question, but a rhetorical one: An EV customer is typically not even interested in size, mass or layout of the battery pack, but simply wants to be sure that it operates properly over a longer period.

Talking about what a customer normally wants brings us to the importance of determining the target customers' needs and wants. Having product features which help to achieve customer satisfaction and to develop a minimum viable product is key for product development. That can be shown by the Kano model, which was developed by Noriaki Kano in the 1980s to classify product features depending on the value they provide to customers (Fig. 1.3). Understanding the Kano model helps to keep the focus on optimizing important features as well as to fade out features which are unnecessary or superfluous.

Fig. 1.3 Positioning basic durability performance in the Kano diagram

According to the Kano model, a feature can be categorized by its level of functionality and the satisfaction provided to the user. Depending on these parameters of the Kano model, a feature is placed into one of the following categories:

• must-be,
• one-dimensional,
• attractive.

Features falling into the must-be category are those which users deem essential for the product to operate as expected. Those features have to be present, because otherwise the product would not hold any value for the user.

But, while features of the pure must-be category introduce a certain level of functionality, that does not mean a positive level of customer satisfaction. Much more important is to follow that curve to the left side of the graph: The absence of such a feature absolutely crushes the overall experience for the user.

A one-dimensional feature provides a level of satisfaction that correlates directly with the level of functionality, or—much simpler—'the more, the better'; hence it is called a performance feature. Here, enhanced functionality typically leads to higher levels of satisfaction among customers, but again, the absence of the feature clearly results in the opposite. Looking at that category, a company may want to find a 'sweet spot' in which the value to the customer is obvious, while the profit the product brings in is also maximized.

Attractive features can be viewed as the inverse of must-be features: Introducing these features is appropriate to enhance customer satisfaction, but the absence of an attractive feature does not affect the user negatively. Hence, a manufacturer certainly benefits from including these features in a product, but the goal should be to 'wow' the customers with as little investment as possible. As technology becomes more and more advanced, features which were originally categorized as attractive may come down to the performance or even the must-be level after some time.

When looking at the durability performance level of established products, customers today simply expect that as a must-be feature. The product has to be reliable and its functions have to be available. A smartphone that cannot make phone calls would be as unacceptable as a car without a steering wheel—assuming we are not talking about a fully autonomous car yet. The critical position of the must-be feature is where it comes to a low level of functionality: A company cannot win too much by having the functionality available but may lose customer satisfaction completely without a certain level of functionality. In such a way, it becomes clear that durability is often groundwork but not a feature for product differentiation. It is expected by the user but certainly does not create excitement. The users' satisfaction ranges from delight to total dissatisfaction or frustration:

• delighted,
• satisfied,
• neutral,
• dissatisfied,
• frustrated.

When product features—such as durability and reliability—are simply expected by users and the product does not have them, it is considered to be incomplete or just plain bad. No matter how much a company invests in such a feature, the satisfaction level never even reaches the positive side of the dimension. And often the company’s management may think that once a basic level of expectations is achieved, they do not have to keep investing in it.


That happened at GM when they won immediate savings due to their revised supplier integration but ignored the total costs which then came together with the quality and reliability issues, putting them into a position on the very left side of the must-be curve. According to the knowledge which comes from the Kano model, a manufacturer has to look for the opportunity to combine a must-be feature with another one having at least performance attributes. In most customers' quality perception, upmarket material and design lead to a higher level of satisfaction. Consequently, a manufacturer may introduce new and advanced products which attract attention from both design and durability. That is something we can see in the automotive wheel market, which is currently worth more than US$ 80 billion and expected to grow to US$ 100 billion by 2024 [6]. In terms of styling, functionality and durability, a basic steel wheel is certainly nothing else than a must-be product—it would be unacceptable not to have four wheels on one's own car.² Those four basic wheels are commodity products having together a mass of more than 35 kg—not including plastic covers, wheel bolts or tires—and the OEM has to ensure that those wheels do not fail during the whole vehicle life, though there can be rough operational use, no maintenance and huge environmental impact by corrosion. Nothing else than proper durability is expected from those wheels—in the eyes of the user, it is a must-be feature made from rolled steel. The engineering, manufacturing and testing efforts related to that product are not much appreciated because the product itself goes almost unnoticed on our car. That becomes significantly different when the wheel is made of aluminum and has an attractive styling which upgrades the whole car. Though an aluminum cast wheel is not necessarily much lighter³ than a steel wheel, it can be nicely shaped, looks more appealing and—from a business perspective most important—may even introduce excitement attributes. The combination of advanced styling and material makes it an upmarket product for which the OEM gets paid beneficially. Though it still is a wheel product that must not have any structural failure or cause an accident, it is now noticed by the customer and much more related to satisfaction than the steel wheel. For the manufacturer, the effort related to engineering, production and testing remains unchanged, but now it is possible to increase profit because the product is more than a simple 'must-be' and the customer is willing to pay for that.

² Years ago, a car was actually equipped with five solid wheels, including the spare wheel. Since that put huge additional cost and weight onto the car and was uncomfortable to mount, the OEM successfully replaced it with a tire repair kit—an action which is totally aligned to the Kano model.
³ Hence, the term 'light alloy' is not perfectly correct when it comes to using aluminum for technical products. Compared to steel, it is a lower-density metal, but it does not always enable lightweight design to a larger extent than the more traditional steel or cast iron. We will fully understand that a bit later when introducing strength and stiffness.


Moving from steel to aluminum started in the 1970s for the upper-class or flagship models in order to give them a distinctive individual touch. Today, aluminum wheels come in cast or forged designs—in Europe, cast ones represent more than 80% of the market and are mainly produced by low-pressure die casting. Creating even more satisfaction from wheel products comes with materials such as magnesium or carbon composites. Magnesium is roughly 20% more expensive than aluminum, and the wheel products can be 20–25% lighter than those made from aluminum, making them interesting for sports and race cars. Using carbon composites for wheels is a relatively new approach, available only in small numbers yet, which makes those wheels extraordinarily expensive but creates a huge level of excitement too. People today have to pay considerably more than EUR 10,000 to get a full set of composite wheels, which are clearly positioned in the attractive category of the Kano model. Hence, even if there are unwanted durability issues⁴, the users do not necessarily see those as plain bad, because it is a new technology which may show some flaws initially.

References
1. Bloomberg New Energy Finance (2018) Electric vehicle outlook
2. Eberle R: Methodik zur ganzheitlichen Bilanzierung im Automobilbau. Dissertation, Technische Universität Berlin
3. Helms H, Lambrecht U (2006) The potential contribution of light-weighting to reduce transport energy consumption. Int J LCA
4. Lewis JD (1995) The connected corporation—how leading companies win through customer-supplier alliances. The Free Press, A Division of Simon & Schuster Inc., New York
5. BBC News (2017) Samsung confirms battery faults as cause of Note 7 fires
6. Global Market Insights, Inc. (2019) Automotive wheel market trends 2018–2024. Industry size report. Selbyville, Delaware

⁴ Compared to the long history and huge experience with steel products, carbon composites are a relatively new material for which engineers do not have that level of knowledge. Hence, design, manufacturing and testing are more prone to mistakes and issues.

Chapter 2

Operating Life Analysis

Abstract An operating life analysis is the backbone of managing structural durability successfully and is thus the focus of this chapter. It requires an understanding of the component's strength behavior as well as of the loads acting on the structure. The stress–strength interference is introduced, which relies equally on the analysis of the loads and on the characteristics of the structure. Hence, load and stress analyses are described in this chapter, and fundamental strength behavior and fatigue are introduced. We will learn to bring the results into a consistent form and to perform a damage calculation that answers the question as to whether the design is safe and durable enough.

2.1 Stress–Strength Interference

Structural durability should be viewed as the ability of a system or component to perform its required functions under stated conditions for a specified period of time. Hence, durability is always related to an operating life requirement, which makes it necessary to carefully specify that time period. The life requirement of a component certainly depends on various aspects such as the product itself as well as product alternatives, customer expectations, legal regulations, sustainability requirements and others. Bicycle riding in Germany was examined by the Federal Ministry of Transport and Digital Infrastructure, showing that about 82% of the population in Germany has bicycles [1]. According to those figures, a good third of all riders cover more than 2500 km annually, and almost 9% even more than 5000 km. From experience, it can be assumed that passionate cyclists will replace their bike after a while and do not continue with the same one for more than a few years. But, nevertheless, we may have to consider an operating life of six to eight years and a mileage of 30,000 km. That is the period for which a bicycle should not show any structural failures from normal use. This is a well-balanced compromise taking into account the need for lightweight design as well as for vehicle safety.


The bike's frame could easily be designed for double the mileage, but it would then be more expensive and heavier because of the additional quantity of material used for such a long operating life. Other structures have much different usage periods: For heavy commercial vehicles, the operating life is 15+ years and a mileage of more than 1 million km; for railway vehicles, it is 35+ years; for steel cranes, typically 20 years; and for road and railroad bridges, it is 100+ years. It can be seen that both vehicles and infrastructure do have a design life—ranging from five years for a bicycle to more than 100 years for bridges. In Germany, there are about 25,000 railroad bridges having a mean age of 57 years [2]. More than 9000 of those bridges are 100 years old or even older—a bridge in Saxony is 180 years old and used by high-speed trains on a daily basis. Throughout such a long time, the structure itself has to resist the traffic and wind loads, while corrosion and other environmental impacts may weaken the materials from which the bridge is built. In August 2018, the Morandi Bridge in Genoa, Italy, collapsed after just 50 years, in which it had connected the A10 motorway toward France and the A7 to Milan. It was a cable-stayed bridge which had only two stays per tower, one on each side, which made that design quite different from traditional bridges having multiple stays that fan out from the towers and attach to multiple points on the deck (Fig. 2.1). While most cable-stayed bridges have stays made of woven metal cables, the Morandi Bridge had pre-stressed concrete around metal tie-rods. Covering the reinforcing metal with concrete made it impossible to examine the condition of the tie-rods, and with only a pair of stays on each side of the tower, a failure of a single stay does not provide any redundancy for the load transfer characteristics and the stability of the bridge. Hence, this cable-stayed bridge had its own characteristics regarding material, surface and shape. There was a combination of pre-stressed concrete and metal tie-rods, and because of the chloride buildup in the concrete, the cables could have been corroded.

Fig. 2.1 Cable-stayed bridge in Zwingenberg/Neckar—Germany


Fig. 2.2 Operating life analysis using information from load analysis and component analysis

Basically, the operating life depends on how the load acts on the component. The analysis of the failed section of the Morandi Bridge has to take into account the characteristic strength that is given by material, surface and shape, as well as the loads on the structure which interact with the component's strength and may lead to degradation over time (Fig. 2.2). The analysis of the operating life is a four-step process which starts with the load analysis, continues with the analysis of the component, and combines that information for an operating life analysis. Finally, it has to be evaluated whether the predicted operating life meets the minimum lifetime requirement. That is the fundamental task of a durability performance evaluation, which can be performed in the design process or for a post-mortem analysis. It is not an easy task because

• failure modes can be too complex to be fully understood,
• the loads on the structure, including demanding usage scenarios, have to be examined carefully,
• the component has to be analyzed with regard to material properties, shape and surface to evaluate its resistance over time,
• the required lifetime can be many years or even decades, in which both the loading and the component may undergo major changes,
• uncertainties about the loads as well as the model's and the component's characteristics have to be quantified,


• analytical and/or numerical models are often simplified and may not include all relevant aspects,
• physical testing can be extraordinarily expensive and time consuming.

From the fundamentals of engineering, we know that strength measures how much load can be applied to a structure before it deforms permanently or fractures. In the figure above, the load changes with time—hence, we not only have to ask how much load is applied, but also at which specific time it is applied. That is true not only for road vehicles, for which we readily accept that the load time history is heavily influenced by vehicle speed or road conditions, but for bridges too. A structure such as the Morandi Bridge is loaded statically by its own mass, while dynamic loading is introduced by wind loads and vehicles crossing the bridge. Hence, the operational loading often can be expressed as a combination of static and dynamic loads. That makes it difficult to describe the loads on a structure, because a non-constant load means we have to look at the load magnitude, at the number of relevant events, and at the time when certain events happened.

The strength of materials looks at the behavior of materials when loads are applied. Those loads introduce different types of stress upon and within the material in all directions. Studying the reactions of materials usually begins with looking at the static forces on and within the material to determine all of the forces affecting it. Once this examination is completed, we can look at the reactions of the material. A relatively simple type of load application is implemented in tensile testing that is performed as per ASTM, BS, DIN or EN standards. It involves slowly applying a force to the opposite ends of a specimen, pulling outward until the part breaks. A strain gage or an extensometer is used to measure the elongation of the specimen. The main product of a tensile test is a load versus elongation curve which is then converted into a stress versus strain curve. The stress obtained at the highest applied force corresponds to the ultimate tensile strength (UTS). The yield strength is related to the stress at which a prescribed amount of plastic deformation is created—typically 0.2% plastic strain (Fig. 2.3). Such a test is a fundamental mechanical test in which a carefully prepared specimen is loaded in a very controlled manner. It is used to determine the proportional limit, which can be seen in Fig. 2.3 at position A, the yield point (B), the yield strength (C), the tensile strength (D), the elongation (E) and other tensile properties. Since the stress is—in a simplified way—obtained by referencing load and elongation to the initial specimen geometry, the UTS is higher than the stress at fracture, which is at first a bit confusing. That is the so-called engineering stress, which is always referenced to the initial cross-section of the specimen. The tensile strength is the maximum engineering stress level reached in a tensile test. If the true stress, based on the actual cross-sectional area of the specimen, is used instead, the stress–strain curve increases continuously up to fracture. In ductile materials—such as shown in the above figure—the UTS is well outside of the elastic portion, in the plastic portion of the stress–strain curve.


Fig. 2.3 Engineering stress–strain curve from tensile testing
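The difference between engineering and true stress is easy to make concrete in a few lines of code. The following sketch (my own illustration, not taken from the book; specimen geometry and test data are invented) converts a load versus elongation record into engineering and true stress–strain values, assuming uniform deformation up to necking:

```python
import numpy as np

# Assumed specimen geometry and synthetic test data (illustration only)
d0 = 8.0e-3                      # initial diameter in m
L0 = 50.0e-3                     # initial gage length in m
A0 = np.pi * (d0 / 2.0) ** 2     # initial cross-sectional area in m^2

force = np.array([0.0, 10e3, 20e3, 24e3, 25e3])            # load in N
elong = np.array([0.0, 0.12e-3, 0.6e-3, 2.0e-3, 5.0e-3])   # elongation in m

eng_strain = elong / L0          # engineering strain
eng_stress = force / A0          # engineering stress in Pa

# True values, valid only up to the onset of necking (uniform elongation)
true_strain = np.log(1.0 + eng_strain)
true_stress = eng_stress * (1.0 + eng_strain)

print(f"UTS (engineering): {eng_stress.max() / 1e6:.0f} MPa")
print(f"true stress at UTS: {true_stress[np.argmax(eng_stress)] / 1e6:.0f} MPa")
```

Because the true stress references the shrinking cross-section, it keeps rising toward fracture even where the engineering curve drops after the UTS.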

The UTS may not be completely representative of the highest level of stress a material can support, but it is rarely used in the design of components anyway. For ductile materials, the current design practice is to use the yield strength for sizing statically loaded components. That is the point at which the stress–strain curve deviates from the straight-line relationship and the strain increases faster than the stress. From this point on, some permanent deformation occurs and the specimen reacts plastically to any further increase in load or stress—which means it will not return to its original condition when the load is removed. For most materials, the exact point at which plastic deformation begins is difficult to determine. Hence, the yield strength is defined as the stress required to produce a small amount of plastic deformation—typically 0.2%.

Though the tensile test produces results in a very controlled manner, there is some variability in yield strength, UTS or elongation to break that is primarily caused by chemistry and metallurgy in material processing as well as by manufacturing and thermal processing conditions of semi-finished products or final parts. While [3] reported a variation of more than 10% in the mechanical properties of hot rolled steel strips, more recent investigations on the variability of high strength low alloy steels reported a standard deviation about the mean yield value of 4.4% [4]. The variability of those hot rolled strips was primarily influenced by the strip thickness, the alloy content—such as niobium and carbon—and a combination of processing variables. That is the reason for defining minimum strength values and for accepting some scatter. Hence, it is usually not possible to state the exact strength value, but only to have an idea about the minimum or sometimes the most likely value of the strength parameter. In this way, we have to accept that material strength is not a figure which is determined exactly by tensile testing but one that is related to probability. Most of the specimens we would test are pretty much in the range we expect—that is the nominal or specified strength.


But a few specimens are outside of the nominal strength, either better or worse than nominal. The more specimens we test, the higher the number of specimens below or above that range becomes, though the number of test results which are really outside of the specifications is typically small. Hence, we need to think about the probability that the actual strength of an arbitrary specimen is close to a single number—and we capture the notion of being close to a number with a probability density function. If that probability density around a certain point is large, the strength as the random variable is likely to be close to that point. The density of the random variable is a function that is used to specify the probability of being within a particular range of values. The probability density function (pdf) is a statistical expression and most commonly follows a normal distribution, also known as the Gaussian distribution, which is symmetric about the mean, showing that data near the mean occurs more frequently than data far from the mean. In the figure above, we can see the mean UTS as well as the deviation about that mean strength in the form of a Gaussian distribution. Here, the material strength is a random variable which is likely to be close to the mean, but not always. That is true not only for the tensile strength, but for the yield strength too. Since for ductile materials current design practice often uses the yield strength for sizing components, we have to take into account the deviation about the mean yield strength—especially the situation when the strength is lower than the mean. Hence, we cannot use the nominal strength but have to consider a certain scatter and lower than average strength values. That is the lower bound strength that limits the load which can be applied to a structure before it deforms permanently or fractures.

But what about the load itself? Is that something which we can determine exactly, or do we have to treat it as another random variable? When asking about the nature of loading, we quickly come to the conclusion that this is random again: A person who rides a bike may weigh more than 100 kg,1 although the bicycle frame is designed for just 100 kg. Hence, the actual load on the structure can be higher than anticipated in the design process—and may then lead to unwanted stress levels.2 A truly worst case would be a structure that has lower than average strength becoming loaded much higher than expected: Though we may believe to have an exact idea about the stress at which the material starts to yield and about the operational stress that is introduced by the loading, the reality comes as a random number. While our design considers the maximum operational stress to be significantly lower than the nominal yield stress to avoid unwanted plastic deformation, there is a certain likelihood of not having a proper safety margin—simply because of the scatter of both the operational load and the yield strength (Fig. 2.4).

1 We learned that about 82% of the population in Germany has bicycles: Do we believe that this number does not include heavyweight people?
2 A truly heavyweight gentleman may ride his bike off-road though it is designed for city roads only: What does that mean for the loads on the structure and finally for the structural stresses? And does it make any difference when that gentleman repeats this regularly?
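The practical consequence of this scatter is easy to quantify. As a minimal sketch, assuming a Gaussian yield strength with a mean of 355 MPa (an invented value), the 4.4% relative standard deviation cited from [4], and an illustrative specified minimum of 320 MPa, the probability of an individual specimen falling below the specification follows directly:

```python
from scipy.stats import norm

mu = 355.0             # assumed mean yield strength in MPa
sd = 0.044 * mu        # 4.4% standard deviation about the mean, as in [4]
spec_min = 320.0       # assumed specified minimum yield strength in MPa

# Probability that an arbitrary specimen yields below the specified minimum
p_below = norm.cdf(spec_min, loc=mu, scale=sd)
print(f"P(yield strength < {spec_min} MPa) = {p_below:.1%}")  # about 1.3%
```

Even a comfortably chosen minimum is thus undercut by roughly one specimen in eighty, which is exactly why design rules work with minimum values plus accepted scatter.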


Fig. 2.4 Yield stress versus operational stress

In engineering design and analysis, dealing with uncertainties often means taking into account a safety margin ν such as:

$$\nu = \frac{\sigma_\text{yield}}{\sigma_\text{allow}} = 1.25 \ldots 1.5 \qquad (2.1)$$

That does not always ensure that the allowable stress σ_allow is limited to values below the yield stress σ_yield: The uncertainty about the operational load and the component's strength is mapped to a single static figure, while the scatter of both load and strength is a rather dynamic condition. A static safety margin may work sufficiently well because of experience with similar designs from the past, but new products often come with new materials, functions or design features which can motivate customers to change their usage habits.3

Dealing with bicycle frame design, the biker's weight seems to be a good starting point for the load analysis. That is a static load first of all, introducing quasi-static stresses when riding slowly on a smooth road. But when it comes to different road conditions, or when the biker brakes at a traffic light, the stresses become different from those on a smooth road at constant speed. What about the loads of a bike which is operated off-road, or even downhill at high speed?4

3 Just before Apple introduced its first iPhone in 2007, the market leader at that time presented the new Nokia N95 slider phone which came with a 21-mm-thick body and a 2.6 in. display. In April 2019, Samsung planned to introduce the world's first smartphone with a foldable OLED display, 7.3 in. in size. According to the manufacturer, the folding mechanism should work more than 100,000 times, and they started a final trial with testers who quickly reported failures. Mark Gurman from Bloomberg sent a Twitter statement (@markgurman) on April 17th: 'The screen on my Galaxy Fold review unit is completely broken and unusable just two days in. Hard to know if this is widespread or not. The phone comes with a protective layer/film. Samsung says you are not supposed to remove it. I removed, not knowing you're not supposed to (consumers won't know either).'


Those loads can be extraordinarily large and change frequently with time: This non-stationary type of loading is certainly different in character from the static load which was our initial idea when designing a bike frame that has to resist the weight of a heavy rider. Hence, we are not only talking about six to eight years of operational usage with a mileage of 30,000 km for the design life of a bicycle, but about a complex load time history with many peaks and lows too. We need to know that time history pretty well, because we want to design our structure as mass efficient as possible without ever compromising on quality. And we have to consider that each biker introduces an individual load profile too—so it is quite difficult to know accurate time histories of all bikers, or at least of a representative group of our bicycle customers.

That is why we meet the term probability again: Not only the strength but the operational stress is a random variable too. Although the mean strength may be much greater than the mean stress, there can be some likelihood of failure, because there is a region where the distributions may overlap (Fig. 2.5). When looking at images II and III of Fig. 2.5, it is obvious that both distributions overlap and the stress can actually exceed the strength and cause material failures. The overlap region is referred to as the interference region, and from a qualitative perspective, such an interference implies that there is a probability of failure5 for the design. Although in II and III the mean strength is greater than the mean stress, and the resulting safety factor is much greater than 1, failure is not precluded but occurs with a certain probability.6

For an operating life analysis, the most fundamental task is to examine the stress–strength interference and to review the probability of failure within the design life of the structure. In a stress–strength interference model, both stress and strength are described as random variables having their own probability density functions. The term 'stress' may include any applied load or load-induced response quantity that has the potential to cause a failure—which can be stress, force, moment, torque, temperature, pressure and so on. And the term 'strength' is considered to be the capacity of the structure to withstand the stress—and may include yield stress, ultimate stress, fatigue-related stress figures and the like. Structural durability to us now means a practical problem of applied statistics: With the random variable X representing the 'stress' experienced by the structure and the random variable Y representing the 'strength' of the structure which is available to overcome the stress, durability is defined as the probability P of not failing—or mathematically: P(Y > X).

4 The first downhill mountain biking race took place at Fairfax, California, in 1976—and today it is pretty popular to ride a bike at the speed of a car down a steep, muddy or rocky slope of a mountain.
5 Hence, the probability of failure has its own density function—and that is what we can see in II and III of Fig. 2.5.
6 The larger the interference region, the more likely a failure occurs—hence, failure is more likely for III than for II.
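For the special case of two independent, normally distributed random variables, this probability has a closed form: the margin M = Y − X is again Gaussian, and failure corresponds to M falling below zero. The sketch below is my own illustration with assumed numbers, not data from the book:

```python
from math import hypot
from scipy.stats import norm

def failure_probability(mu_stress, sd_stress, mu_strength, sd_strength):
    """Stress-strength interference for independent Gaussian variables:
    the margin M = Y - X is normal with mean mu_Y - mu_X and standard
    deviation sqrt(sd_X**2 + sd_Y**2); failure means M < 0."""
    mu_m = mu_strength - mu_stress
    sd_m = hypot(sd_stress, sd_strength)
    return norm.cdf(0.0, loc=mu_m, scale=sd_m)

# Assumed values in MPa: mean safety factor of 1.6, yet a finite risk
p_f = failure_probability(mu_stress=250.0, sd_stress=40.0,
                          mu_strength=400.0, sd_strength=30.0)
print(f"probability of failure: {p_f:.2%}")   # about 0.13%
```

Despite a mean safety factor of 1.6, roughly one structure in 750 would fail; such a number only becomes visible once the scatter of both variables is considered.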


Fig. 2.5 Stress–strength interference for safe (I), failure prone (II) and unsafe (III) design

2.2 Load Analysis

In a static problem, the load is constant with respect to time. By contrast, a dynamic problem is time varying in nature—because both loading and responses vary with time. There are different types of dynamic responses, such as displacement or acceleration, while a static problem has a single response only, that is displacement. A static load is a load that—after it is applied—does not change in magnitude or direction. A dynamic load, on the other hand, may change in magnitude as well as in direction—for example, the traffic of varying weight passing a bridge in both directions.


Dynamic loads are rarely monotonically increasing, but often occur as fluctuating loads and can then be differentiated as either deterministic or stochastic. Deterministic events can be predicted—such as a sine oscillation—but stochastic loads have to be handled by statistics. A stochastic process can be viewed as a family of random variables: We can treat the variation in the load as a random variable. Stochastic loads are stationary when their statistical parameters remain unchanged whenever we look at them, and they are quasi-stationary when the parameters remain unchanged over a certain period of time.

Fatigue loads are fluctuating loads which are created from operational load time histories. To cause fatigue, it is necessary to have a large enough variation in the applied stress and a sufficiently large number of cycles of the applied stress too. Depending on the structure and how it is used, the operational loads are somewhere in between purely deterministic and completely stochastic. A fundamental type of fatigue load is the sinusoid: A sine wave is a continuous wave that characterizes a smooth periodic oscillation. Its pattern repeats, and the peak deviation from zero is known as its amplitude. In this way, it can be seen as constant amplitude loading, which is the most basic type of load to characterize fatigue strength.7 The number of oscillations that occur each second of time is the frequency, and the reciprocal of that is the duration of one cycle. The higher the frequency, the bigger the total number of cycles in a certain time period. To get a feeling for those numbers, we may look at the crankshaft rotation of an engine, which starts below 1000 rpm at idle speed and easily goes up to 6000 rpm or beyond for a gasoline engine at full speed. Traveling for 1 h at that speed means that the crankshaft rotates 360,000 times—in less than 3 h that component sees a million cycles from its rotation.8 Such a repetitive loading has a certain character, which in most cases is either tensile loading, bending, torsion or a combination of these types. Hence, fatigue loads may come in an extraordinarily huge number and cause progressive structural damage under cyclic loads. The damage itself is induced by the application of fluctuating stresses and strains which result from the loads.9

So, we are talking about loads and stresses and actually have to differentiate them from each other. Loads always create stresses in a structure, but stresses can occur without loads on a structure.10 Loads act on a structure, while in continuum mechanics stress is a physical quantity characterizing material-related effects of internal forces. Consequently, we would have to talk about load and stress analysis, but this chapter is named 'load analysis' for the sake of simplicity. In the first instance, loads rather than stresses are the focus of the following sections.

7 August Wöhler performed pioneering work during 1860–1870 when he examined fatigue failures by applying controlled load cycles under constant amplitudes [5].
8 We have to consider that this is the number of load cycles which lead to internal forces—stresses—because of the rotating mass. The obvious source of forces applied to a crankshaft is the product of combustion chamber pressure acting on top of the piston as well as the piston acceleration.
9 Loads are different from stresses and strains—that is something we assume the reader knows.
10 Residual stresses occur without external loads and are an important factor in the fatigue life of structures.

A load cycle is characterized by the load ratio

$$R = \frac{\text{min. load}}{\text{max. load}} \qquad (2.2)$$

which helps to differentiate the type of cyclic loading. For R = −1, the minimum and maximum load have the same magnitude but their sign is inverted—hence that is a fully reversed fluctuating load. For R = 0, the minimum load is zero and the rest of the load cycle has positive values. R > 0 means that the complete load cycle is shifted upward and the minimum load is positive too. The mean load is the arithmetic mean of a load cycle, and from that level the load amplitude extends in the direction of either the maximum or the minimum load. The load range—sometimes noted as double amplitude—is then the difference between maximum and minimum load (Fig. 2.6). The load cycles in this figure are constant amplitude loading having a constant mean load too. Once the number of load cycles is counted, the load time history is characterized completely. That is a pretty simple deterministic type of load which does not describe typical operational loads accurately but helps to introduce the fundamentals of cyclic load properties.

Fig. 2.6 Load cycle and characteristic parameter
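These characteristic parameters are simple arithmetic on a cycle's minimum and maximum, as the following sketch shows (my own illustration, not from the book):

```python
def cycle_parameters(load_min, load_max):
    """Characteristic parameters of a constant amplitude load cycle,
    using the terminology of Fig. 2.6."""
    mean = 0.5 * (load_max + load_min)        # mean load
    amplitude = 0.5 * (load_max - load_min)   # load amplitude
    load_range = load_max - load_min          # load range ("double amplitude")
    ratio = load_min / load_max               # load ratio R, Eq. (2.2)
    return mean, amplitude, load_range, ratio

print(cycle_parameters(-5.0, 5.0))   # fully reversed loading: R = -1
print(cycle_parameters(0.0, 10.0))   # pulsating loading: R = 0
print(cycle_parameters(2.0, 10.0))   # whole cycle positive: R = 0.2 > 0
```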

When looking at an internal combustion engine (ICE), there is the connecting rod which connects the piston pin to the crankshaft pin. Its main function is to convert the translational motion of the piston into the rotational motion of the crankshaft. In the compression stroke, the piston travels from the bottom dead center (BDC) to the top dead center (TDC). In this process, both the pressure and the temperature increase, and the piston works on the gas in the cylinder. In the expansion stroke, the piston moves toward BDC and the gases push the piston. The analysis of what happens in an internal combustion engine makes it clear that

• the pressure of the gases exerts a force on the piston,
• all parts of an ICE have their individual mass and therefore an inertia; thus, a certain fraction of the input forces is used to accelerate or decelerate those masses,
• some of the input is 'spent' to overcome friction on the piston walls and at the bearings.

Because of that, the connecting rod primarily undergoes tensile and compressive loading during the up and down motion of the piston. The major forces on the conrod come from the maximum combustion pressure acting on the piston during the combustion stroke as well as from the inertia of the conrod itself and the reciprocating masses. The mass forces are pretty much constant amplitude loading as long as the engine speed remains unchanged. Understanding the physics of a system obviously helps to analyze fatigue relevant loads: When looking at the kinematics of an ICE, it becomes clear that components such as a connecting rod undergo fluctuating loads with constant amplitudes.

Other components of an automobile can have a completely different type of loading, and that is why we now take a look at suspension parts. The suspension itself protects the vehicle body from road shocks and provides comfort. By having springs, shock absorbers and links, a suspension system allows relative motion between the vehicle body and its wheels. For passenger cars, a wishbone type is often used for an independent front suspension. In this type, coil springs are used between the two suspension arms and are controlled at the open ends of the upper and lower wishbones, which are connected to the chassis frame. The closed ends of both arms are connected with the steering knuckle and are supported by means of a kingpin. Generally, it can be said that a suspension system consists of elastic members which are designed to cushion the impact from road irregularities and driving maneuvers. Ideally, the vehicle body would move steadily in the driving direction without any movement in the vertical or lateral direction. That is certainly different for the tires and wheels: Those have to follow the road and must not lose contact, for the sake of vehicle stability. Curbs and potholes lead to vertical movements which can be huge and pretty transient—and that is the motion we do not want to be introduced into the body. Hence, the suspension system is in a 'sandwich' position and has to ensure that the road impact on the body motion is minimized. The suspension loads can create

• basic constant stresses because of dead weight,
• basic (quasi)static variable stresses because of payload or temperature effects,
• additional stresses because of operational conditions, rare events or vibration effects.


Again, we have to look into the physics of the system: The basic stresses caused by dead weight and payload remain constant section by section, and operational loads overlap whenever the vehicle is moving. Hence, there is a superposition of static and dynamic loading that can be characterized as a semi-continuous random process as long as a perfect road is considered. Taking into account that simplification first of all, the dynamic loads in the center of gravity (CoG) come from

• straight driving: moving forward or backward with additional pitch movement whenever a higher speed change is applied,
• cornering: moving right or left with yaw and additional roll movement whenever higher speed is applied.

Depending on the path the vehicle is following and the acceleration11 that is applied, the frequency of load change can vary significantly. On a motorway in Germany, the minimum curve radius is 720–900 m, which means a distance of at least 565 m for a 45° curve arc. At a speed of 130 kph, a vehicle needs more than 15 s for that distance, which makes it a quasi-static driving maneuver. For the time the vehicle is cornering in such a way, the loads are constant and then change for a new path or speed. That is a semi-continuous process—and since it is unknown what comes next in terms of path and speed, this part of the load time history is a semi-continuous random process. But real-world car driving has another level of load effects: Road irregularities may imply huge dynamic effects in the vertical direction, while curbs, potholes or even unpaved roads additionally come with longitudinal and lateral dynamic loads. Here, the frequency of load changes is typically much higher than for those related to driving maneuvers, and the loading itself cannot be determined easily. Hence, these variable, short duration loads from road irregularities are stochastic in nature and an additional part of the complete load time history of a vehicle driving at speed.

11 When talking about speed change related to straight driving, that implies both acceleration and deceleration. For the sake of simplicity, only acceleration is noted here, but that is correct anyway because deceleration is simply a negative acceleration.

The complete vehicle loading is a complex superposition of static and dynamic parts which are all relevant for damaging effects and finally material fatigue. An analysis of the load time history has to use techniques for characterizing the individual sources of loading by features such as:

• graphical representation of the data,
• period duration and propagation speed of fluctuating data,
• frequency and magnitude.

The type of data which is usually acquired and analyzed for chassis durability is

• displacement,
• acceleration,
• force or


• strain.

Specific sensors for each type of data are available and are installed at those locations which create proper and meaningful responses. For suspension applications, the most relevant displacement is the one in the vertical direction, which is often measured using cable-actuated position sensors. Those are easy to install and make it possible to perform linear position measurements: They use the length of a spring-loaded cable which is pulled out of the sensor unit to convert mechanical motion into an electrical signal. For measuring accelerations, two different classes of sensors are available: DC-response and AC-response accelerometers. While the latter cannot be used to measure static accelerations such as gravity or a constant centrifugal acceleration, a DC-response accelerometer can respond down to zero Hertz, which makes it possible to measure static as well as dynamic accelerations. DC-accelerometers use either capacitive or piezoresistive sensing technology. The capacitive type is based on the capacitive changes in a seismic mass under acceleration; that is today the most common technology and is built into many applications such as air bags or smartphones. Although those MEMS sensors (microelectromechanical systems) have incredibly low manufacturing costs today,12 for durability data acquisition they lack dynamic range and signal-to-noise ratio. The maximum range is typically limited to less than 200 g, which makes those sensors most suitable for on-board monitoring applications or for measuring low frequency motion such as vibration measurements in civil engineering. For durability data acquisition, a much better choice is the piezoresistive sensing technology for DC-response accelerometers. These sensors produce resistance changes in the strain gages which are part of the accelerometer's seismic system. The bandwidth of those sensors can reach upwards of 7 kHz, and their signal-to-noise ratio is outstanding because the output is differential and purely resistive. The dynamic range is only limited by the quality of the DC bridge amplifier, and from its acceleration output the desired velocity or displacement information can be derived without any integration error.

Forces are often directly measured using packaged load cell assemblies. By installing them into a complex structure such as a wheel, a wheel force transducer can be created. A six-component wheel force transducer is capable of measuring three forces Fx, Fy, Fz and three moments Mx, My, Mz directly at the wheel using individual load cells13 which are connected to the structure. The sensor signals are amplified in the load cells and brought to the wheel electronics, where the signals are filtered, digitized and coded. Finally, the data is transmitted by telemetry via a rotor and stator pair to a data acquisition box.

12 It is quite easy to find low power three-axis MEMS sensors for maximum accelerations of ±16 g at a price of less than 1 €. In one of the biggest manufacturing sites, more than 4 million MEMS sensors are produced every day (according to a press release of BOSCH Sensortec GmbH—2015).
13 Other designs may use a complex circuitry of numerous strain gages applied to specific locations of the wheel structure, which makes it possible to rely on the original wheel structure instead of having a specific design to which the load cells are attached.


Measuring strains helps to understand how an object reacts to various forces—and that is the most important information related to material fatigue. Basically, strain is the ratio of the change in length of a material to the original, unaffected length, and it can be axial, bending, shear or torsional. The most common method to measure it is with a strain gage. A strain gage's electrical resistance varies in proportion to the strain it experiences; the strain is measured by a very fine wire or, more often, a metallic foil arranged in a grid pattern. This grid is bonded to a thin carrier which is directly attached to the test object. Hence, the strain experienced by the test object is transferred directly to the gage, which responds with a linear change in electrical resistance. Typically, strain gages have a gage factor around 2, which expresses the ratio of the fractional change in electrical resistance to the fractional change in strain. Since in practical applications the measured quantities are rarely larger than just a few millistrains, the small changes in electrical resistance have to be measured pretty accurately. Therefore, gage configurations are based on the concept of a Wheatstone bridge, which is fundamentally the equivalent of two parallel voltage divider circuits. The number of active elements in the Wheatstone bridge gives the type of strain gage configuration, which can be quarter-, half- or full-bridge. Depending on the bridge type and its configuration, the bridge's sensitivity to strain or the compensation of potential temperature effects can be set.

Strain gages are available in a range of different grid widths: Using a wider grid, if not limited by the installation site, improves heat dissipation and enhances stability. For normal applications, a grid width of 3 mm is a good choice,14 but if the test object has severe strain gradients because of small geometrical features, a narrower grid should be considered to capture the peak strain more accurately.15 Installing strain gages may take some amount of time and resources, and that greatly depends on the bridge configuration and the mounting location. Finally, wiring, signal conditioning and the data acquisition (DAQ) system have to be finished before the strain measurements can be started.

14 About 90% of all strain gages which have been applied at my lab for automotive applications are 3 mm gages, but a few others are significantly smaller to catch local strains at locations which have a huge gradient and need more attention to the variation in strain.
15 That is pretty much the same as in finite element analysis (FEA): Once you face a huge strain or stress gradient, you may want to reduce the size of your elements at least in that region and recompute the analysis to get a better idea of the numbers you are computing. The smaller the element size, the better the geometrical approximation and the more accurate the nodal interpolation becomes—and that typically results in an increasing computational strain or stress at the point of interest. Without diving deeply into the theory of math modeling using FEA, it can be said that a standard element with a linear shape function is a 'constant strain element'—quite similar to a strain gage. To get a proper correlation between gage readings and computational strains, the element size should be really small too. Alternatively, the accuracy of FEA results can be improved by using a quadratic shape function, which means that the strain or stress can vary linearly within an individual element. However, it needs to be taken into consideration that a correlation study between computational and experimental strain analysis is not trivial and relies on careful specifications for the FEA element type and mesh size as well as the strain gage configuration and placement.
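The gage factor relation can be turned into a small readout sketch. The formula below is the standard conversion for a Wheatstone quarter bridge with one active gage; the sign and the exact nonlinearity term depend on which bridge arm the gage occupies, so treat this as an illustration of the principle rather than the wiring-specific formula of any particular amplifier:

```python
GAGE_FACTOR = 2.0  # typical value, as stated in the text

def quarter_bridge_strain(v_ratio, gf=GAGE_FACTOR):
    """Strain from the normalized bridge output V_out/V_ex of a
    Wheatstone quarter bridge with one active gage of gage factor gf.
    Derived from V_out/V_ex = (gf * eps) / (4 + 2 * gf * eps)."""
    return 4.0 * v_ratio / (gf * (1.0 - 2.0 * v_ratio))

# 0.5 mV/V of bridge output corresponds to roughly 1 millistrain:
print(f"{quarter_bridge_strain(0.5e-3) * 1e6:.0f} microstrain")
```

At millistrain levels the small-signal approximation ε ≈ 4 (V_out/V_ex)/GF is usually sufficient; the denominator only corrects the slight bridge nonlinearity.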


Having now some basic knowledge about sensor technology and the data which is usually acquired for chassis durability, we can learn a great deal from what comes out of a data acquisition system: The vehicle for which data from roads and tracks is available to us belongs to the sport utility category. Although designs vary, SUVs have historically been mid-size passenger vehicles with a body-on-frame chassis similar to that found on light trucks and frequently equipped with four-wheel drive for on- or off-road ability. Some SUVs combine the towing capacity of a pickup truck with the passenger-carrying space of a minivan or large sedan. Typically, the center of gravity of those vehicles as well as their weight is relatively high, which limits lateral and longitudinal vehicle dynamics to some extent. The vehicle we want to look at in a short study may have a front axle weight (FAW) of 1050 kg; its rear axle weight (RAW) is 1200 kg. The vehicle coordinate system (CS) used for the data analyses is a Cartesian system according to the standard in the vehicle industry: x points in the longitudinal driving direction, y in the lateral direction and z in the vertical direction. The vehicle is driven on various roads as well as on proving ground (PG) tracks by professional car drivers and in accordance with traffic and/or track regulations (Fig. 2.7).

A proving ground is an optimal test environment to make sure that no manufacturing defect or design miscalculation will cause an unsafe or unpredictable behavior of the vehicle. Hence, on a PG, the cars are put through their paces and tested rigorously against extreme and varying conditions at different speeds with varying passenger loads. A proving ground is typically designed for multilevel testing and contributes to the testing from concept condition to the final product status. A PG makes it possible to have vehicle and system assessment, safety testing and powertrain development as well as climate-controlled chambers for cold and hot conditions.

Fig. 2.7 Load time history from road load data acquisition on proving ground


Until a hundred years ago, automobile testing was done in the same places as most car driving, but as the city streets and country roads filled with cars, this ceased to be feasible because it simply was too dangerous to test cars on public roads. In 1924, the world's first dedicated proving ground was opened in a fairly isolated portion of Michigan in the North of the USA.16 From the air—or the view Google Maps easily provides—a PG consists of loops and whorls, straight lines and circles. On some of the roads, sprinklers are used to water down the surface to test the behavior of cars on slippery, low-friction surfaces. Obstacle courses feature just about every hazard that a car is likely to handle under anything resembling normal use. Other courses provide curves to test a car's ability to take tight corners without going out of control or running off the roadway. Basically, PG roads and tracks represent the conditions found on public roadways plus other specialty surfaces which introduce higher than average loads into the vehicle structure.

For our car, a total number of six different road tracks (labeled A to F) and eleven different PG road sections (labeled 1–11) are available for data analysis. The individual length of the road tracks is 22–196 km, while the PG sections are much shorter, with individual lengths of just 400–1400 m. The vehicle is fully equipped for an extensive data acquisition program related to suspension and chassis. The dynamic vehicle loads can be characterized by wheel forces and moments from wheel force transducers at all vehicle corners, as well as by accelerations of the front and rear suspension and of the vehicle CoG. To get an initial idea about the range of vehicle loads, the time history information is analyzed with regard to its maximum forces. Because the full time history information the wheel force transducers can provide consists of three forces and three moments, a simple determination of the maximum figures does not include a correlation to the time at which the data occurs. Typically, we may expect peak verticals whenever the car hits a curb or a pothole, and huge lateral forces may come with speedy cornering. Longitudinal forces are certainly introduced when the car is braking—and having those fundamental vehicle dynamic properties in mind, we are quickly able to anticipate what happens for a 'bump and brake' load case. To get a more understandable result, the individual figure is normalized to the static wheel force Fv,stat. That makes it easy to gain a fundamental understanding of the loads implied on public roads and on PG tracks (Fig. 2.8). The forces obtained on the public roads showed that the maximum verticals are basically close to double the static wheel force or even beyond, while on most PG tracks the maximum vertical forces are much lower.

16 In 1928, General Motors Company stated about the rationale behind its Milford proving ground: 'No other industry has gone forward so swiftly with so few basic facts—facts that are needed if the motor car is to be of increasing usefulness to a greater number of people. If the industry is to continue its rate of progress it must know more facts about the material used, the economics of design and what happens as the car is being operated mile after mile upon the road in the hands of the user. To get these facts, General Motors five years ago decided to establish a proving ground and to make it the most comprehensive undertaking of the kind in the world.'
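A sketch of that normalization step, using synthetic stand-in data (my own illustration; the channel layout and the static front wheel force of roughly 5150 N, i.e. half the 1050 kg front axle weight times g, are assumptions):

```python
import numpy as np

F_V_STAT = 0.5 * 1050.0 * 9.81   # static vertical force per front wheel in N

def normalized_maxima(forces):
    """Maximum absolute value per channel (columns = Fx, Fy, Fz),
    normalized to the static vertical wheel force F_v,stat."""
    return np.max(np.abs(forces), axis=0) / F_V_STAT

# Synthetic stand-in for a measured wheel force time history
rng = np.random.default_rng(42)
fx = rng.normal(0.0, 1200.0, 200_000)
fy = rng.normal(0.0, 700.0, 200_000)
fz = F_V_STAT + rng.normal(0.0, 1800.0, 200_000)
print(normalized_maxima(np.column_stack([fx, fy, fz])))
```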


Fig. 2.8 Maximum wheel forces of SUV from various different roads and PG tracks

The ultimate peak comes from road E and is almost 2.8 times the static vertical wheel force, which means it is about 25% higher than the peak on proving ground track 7. Roads E and F both give huge normalized forces in all directions: The longitudinal forces are in the range of the static wheel force, and the lateral forces are 60% of that. Those figures give us a good idea of what can happen at the front wheels of such a car. Surprisingly, on PG tracks 4, 5 and 6 there are much lower forces than on the public roads, giving an indication that those PG tracks were not strictly designed for durability testing. On PG tracks 1 and 2, comparatively huge longitudinal forces can be found which may come from vehicle braking conditions; these excessive longitudinal forces are in the range of more than 80% of the static wheel force.

Looking at the individual maximum figures helps to get an idea about what happened on roads and tracks, but does not provide a complete image of the vehicle driving conditions: At every point in time, the wheel force transducer acquires three forces which together represent a specific condition, but simple maximum load statistics give information about just one direction.


Hence, another type of analysis is needed to put information from more than one direction together. When a car is braking heavily, there is not just a longitudinal force; on the front wheel, the vertical force gets higher too. That is a consequence of inertia—Newton's first law of motion—and of the location of the vehicle's center of gravity behind and above the front axle. Hence, there is a strong correlation between longitudinal and vertical wheel forces during braking, and it makes sense to look at such a correlation with a dedicated data analysis method.

Two-dimensional dwell time correlation graphs are helpful to identify the basic maneuver content of roads and tracks by giving information about individual vehicle driving conditions which show strong or weak correlation between specific forces and/or moments. When looking at the individual maximum forces, we found the largest lateral forces on PG track 9. On that track, the peak goes up much beyond the static wheel force, while on all other tracks and roads the lateral forces are limited to roughly 60% of that force. Hence, something very special may have happened on track 9 that introduced such a huge lateral force. When plotting a time basis correlation of lateral and vertical forces together in a diagram, the result is a two-axis plot which shows the total time at which a certain pair of forces occurred (Fig. 2.9). In this diagram, the static wheel force is identified easily, having zero lateral force. The maximum vertical force is almost double the static one, and it comes together with a lateral force of roughly −4000 N. In the other direction, the maximum lateral force is only 2000 N, and then the vertical force is about 500 N only. Much time on track 9 is spent with a pair of 7500 N in the vertical direction and −2000 N in the lateral direction.17 The whole plot looks like a relatively narrow band curve which shows an obvious correlation between vertical and lateral forces. Hence, for this track, we cannot neglect this strong correlation image that puts together high vertical forces and high negative laterals, while in combination with positive lateral forces the verticals become smaller than the static one. From a vehicle dynamics point of view, that is a clear indication for primarily having cornering maneuvers on PG track 9: In bends, the vehicle's roll tendency brings higher vertical forces to the outer wheel, which then enables those huge lateral forces. Concurrently, the inner wheel becomes unloaded during heavy dynamic cornering—and that is exactly what occurs at the low end on the right side of that diagram. Such a clear correlation cannot be found for road E (Fig. 2.10), which had the absolute maximum for longitudinal and vertical forces as well as reasonably large lateral forces. Here, the plot looks like concentric circles with a central spot at a pair of 7250 N in the vertical direction and −1000 N in the lateral direction.

17 Here, the dwell time is shown by colors and goes up from blue to light blue, then green, yellow, orange, red and finally black. Such qualitative information is often good enough, because even that small region in red color means that the original time history offers many different individual events which then accumulate to such a total time.
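Computationally, such a dwell time plot is a two-dimensional histogram in which every sample contributes one sampling interval of 'time spent' to the bin of its force pair. A minimal sketch (my own illustration; the correlation structure and the 500 Hz sampling rate are assumptions):

```python
import numpy as np

def dwell_time_map(f_vert, f_lat, fs, bins=60):
    """2D dwell time histogram of vertical vs. lateral wheel force:
    each sample adds 1/fs seconds to the bin of its force pair."""
    counts, z_edges, y_edges = np.histogram2d(f_vert, f_lat, bins=bins)
    return counts / fs, z_edges, y_edges   # dwell time in seconds per bin

# Synthetic cornering-like data: negative laterals go with high verticals
rng = np.random.default_rng(3)
f_lat = rng.normal(-2000.0, 1200.0, 300_000)
f_vert = 5100.0 - 1.2 * f_lat + rng.normal(0.0, 500.0, 300_000)
dwell, _, _ = dwell_time_map(f_vert, f_lat, fs=500.0)
print(f"total time accounted for: {dwell.sum():.0f} s")  # samples / fs
```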


Fig. 2.9 Dwell time correlation plot of vertical versus lateral forces on track 9

Since there is no strong correlation but an indifferent relation of the forces to each other, we might have a rough road with many holes in front of us. The holes introduce relatively huge vertical forces,18 but some laterals as well. The direction of the lateral forces on harsh roads is not predictable because it depends on the position of the steering wheel when hitting a curb or a hole. Since the concentric circles are not exactly centered at the static wheel force, road E obviously has some more curves and bends in the same direction that dominated on track 9 too. Both analyses—looking at the individual peaks and at the dwell time correlation images—give some indication that the roads are quite different from what happened on the proving ground sections. Since a proving ground is designed to simulate the most demanding parts of public roads, and to eliminate those parts which do not introduce huge forces, PG tracks provide the possibility for accelerated testing. On the other hand, a proving ground is only a subset of public road variations and may lose some content with regard to the vehicle dynamics.

To characterize broadband random signals—which we certainly got from the roads and PG tracks—their power spectral density (PSD) can be analyzed.

18 According to the diagram, there are verticals at roughly 12,500 N which come together with lateral forces from −600 to +1000 N.


Fig. 2.10 Dwell time correlation plot of vertical versus lateral forces on road E

The PSD gives a signal's power content versus frequency, which must not be mixed up with the physical quantity of power as in watts. In vibration analysis, PSD stands for the power spectral density of a signal, and its magnitude is the mean-square value of the signal being analyzed. The PSD represents the distribution of a signal over a spectrum of frequencies, and its magnitude is normalized to a single hertz bandwidth. The mean-square value—in other words, the power—is a pretty convenient measure of the strength of a signal, because the squared signal is a positive quantity for which the mean value can be computed. Mathematically, random vibration is characterized as an ergodic and stationary process. The term 'ergodic' was introduced by Ludwig Boltzmann when working on a problem related to statistical mechanics: A random process is called ergodic if every sequence is equally representative of the whole. To specify random vibration in mechanical engineering, the acceleration spectral density (ASD) is used. The root-mean-square acceleration is the square root of the area under the ASD curve in the frequency domain. It expresses the overall energy of a particular random vibration signal and is a statistical parameter that can be used for structural analysis. Generally, random vibration processes have no simple relationship between their peak and root-mean-square values.


The root-mean-square acceleration can be computed in the frequency domain using Parseval's theorem, and it is a statistical measure of the magnitude of discrete values or of a continuously varying function. Back to our road load data: A detailed comparison of the road types can be done by analyzing the square mean spectra for the individual forces Fx, Fy and Fz. At the first eigenfrequency of the front suspension—which is here about 12 Hz and shown by the upward slope—it can be seen that road type B provides an RMS value > 200 for the vertical force (Fig. 2.11), while this is much lower for the type C road (Fig. 2.12). That is even more obvious for the eigenfrequency at 68 Hz.

Fig. 2.11 RMS spectrum for road B

Fig. 2.12 RMS spectrum for road C
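The relation between the PSD and the overall RMS value is what Parseval's theorem delivers in practice: integrate the PSD over frequency and take the square root. A minimal sketch using Welch's method on a synthetic signal (sampling rate, record length and signal content are my own assumptions):

```python
import numpy as np
from scipy.signal import welch
from scipy.integrate import trapezoid

fs = 1000.0                          # sampling rate in Hz (assumed)
t = np.arange(0.0, 60.0, 1.0 / fs)   # 60 s record
rng = np.random.default_rng(7)
# Synthetic force channel: a 12 Hz resonance-like component plus noise
f_z = 800.0 * np.sin(2.0 * np.pi * 12.0 * t) \
      + 300.0 * rng.standard_normal(t.size)

freq, psd = welch(f_z, fs=fs, nperseg=4096)   # PSD in N^2/Hz
rms = np.sqrt(trapezoid(psd, freq))           # Parseval: area under the PSD
print(f"overall RMS ~ {rms:.0f} N "
      f"(expected ~ {np.sqrt(800**2 / 2 + 300**2):.0f} N)")
```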


Fig. 2.13 Deviation about the mean of maximum forces for PG track cycles on bumpy road

Hence, road type B can be rated as a rough road, while road C is more like a regular road. The situation is different for the longitudinal direction, which is excited by both road irregularities and braking. A comparison between roads B and C at, e.g., a frequency of 30 Hz shows that for the regular road the RMS of the longitudinal force is even higher than that of the vertical direction, and this continues to higher frequencies until the resonance is excited at 68 Hz.

Another parameter we may have to analyze is the probability of exceedance for repeated driving conditions: Basically, PG track loads highly depend on the driver's ability to follow the given specifications as accurately as possible. For rough track sections and events, the loads are influenced primarily by the vehicle's speed, which can be controlled by the driver. Consequently, the repeatability of loads should be appropriate for those sections. For the bumpy road on PG track 6, both vertical and longitudinal forces do not show a huge deviation about their mean when a total number of six repetitions is analyzed (Fig. 2.13). At this point, we have to look into statistics a bit: When repeating a random experiment, the new number might be slightly different from the previous one. Hence, it would be unlikely to get exactly the same forces in the longitudinal and vertical direction when re-running the PG track.19

19 Although those values have to be different from lap to lap, they are not too different: It would be unlikely to have huge force differences, since it is the same PG track and the vehicle drivers know what they are doing. Hence, we often assume a probability density for those values which satisfies a Gaussian distribution representing a real-valued random variable.


Fig. 2.14 Deviation about the mean of maximum forces for successive PG cornering track cycles

To further analyze the data, we may have to look at the rank-size distribution: Here, the individual results are ordered in ascending sequence, giving a total number n_total of six different values from the individual repetitions i on the PG track. According to the approximative equation of Benard-van Elteren, the probability of exceedance Pe can be calculated20:

$$P_e = \frac{i - 0.3}{n_\text{total} + 0.4} \cdot 100 \qquad (2.3)$$

That is a rough and quick approximation of the probability of exceedance associated with an individual result from PG testing. Based on a total number of six repetitions—creating six different values for the maximum forces—the biggest force represents a value which exceeds roughly 90% of those which can be expected on this PG track. That means, on the other hand, that almost 10% may be even higher. For lateral dynamics, the driver's influence as well as the impact of road texture and tire becomes bigger. Hence, the deviation about the mean can be expected to become larger. That is confirmed by the dynamic cornering on PG track 9 (Fig. 2.14). Here, the deviation about the mean is more than double that of the bumpy road. When repeating a certain driving condition, it is absolutely unlikely to meet the exact same point with the same speed and steering position. Although professional test drivers on a proving ground have dedicated instructions they have to follow, there are always slight deviations from the intended lane, speed or braking point.

20 The Benard-van Elteren test from 1953 is an extension of the Friedman test (1937), which utilizes the method of m rankings.
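Equation (2.3) is straightforward to apply to a set of repeated measurements. In the sketch below (my own illustration; the six maximum forces are invented numbers), each ranked result receives its estimated probability of exceedance:

```python
def prob_exceedance(i, n_total):
    """Benard-van Elteren approximation, Eq. (2.3): percentage of
    expected results that the i-th smallest of n_total values exceeds."""
    return (i - 0.3) / (n_total + 0.4) * 100.0

maxima_kN = sorted([11.9, 12.1, 12.4, 12.5, 12.7, 13.0])  # assumed values
for i, force in enumerate(maxima_kN, start=1):
    p_e = prob_exceedance(i, len(maxima_kN))
    print(f"rank {i}: {force:4.1f} kN -> P_e = {p_e:4.1f} %")
# The largest of six results lands at (6 - 0.3) / 6.4 = 89 %, i.e. almost
# 10% of future laps may produce an even higher maximum force.
```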


Basically, all those deviations that we may see in a load time history make us feel uncertain. What is the true content we have to consider for a proper load analysis? In the preface, we learned that structures which are subjected to fluctuating loads in sufficient numbers are liable to fail by fatigue. That gives a clear indication that we have to count the number of load cycles and link those numbers to the load amplitude. With constant amplitude loading that seems to be pretty simple, because then it is just a matter of counting the load cycles—assuming that the amplitude itself is known. When the load frequency is constant too, the task reduces to just looking at the time period during which the operational loads are acting. But variable amplitude loading is different: Here, we may see a completely random type of loading with various amplitudes following each other with no regularity. After a peak load, there can be another load cycle which is similar in range, but it can be much smaller too. To count the number of load cycles and to link those numbers to the amplitude, it is necessary to go through the load time history—like reading a textbook—and to collect the information as requested. That can be done by analyzing the signal with regard to

• its reversals, which can be either a minimum or a maximum,
• the range it covers from a minimum to the next maximum and conversely,
• the level it crosses in positive or negative direction,
• a hysteresis that is closed in positive or negative direction.

2.2.1 Counting Methods

The major information of a load cycle is given by its amplitude and mean value, or by its minimum and maximum. A counting method that processes only the amplitude information is called uni-parametric, while the more complete information—containing amplitude and mean value, or minimum and maximum—is processed by two-parametric counting methods. Early applications of counting methods in structural durability were aligned to the requirements and possibilities of aircraft engineering [6, 7] in the middle of the twentieth century. Various methods were proposed, primarily with the intention of simplicity and efficiency, because data acquisition and storage were not easy at that time. One of the easiest ways to collect information from a time history plot is the level crossing counting method. For this method, a number of equidistant levels are introduced which include the absolute minimum and maximum values of the time history. The lower bound of the first level is smaller than the minimum, and the upper bound of the highest level is bigger than the maximum.


Fig. 2.15 Level crossing counting scheme: a Load time history. b Crossing frequency

Then for the ascending cycle parts of the signal it is analyzed which levels they are crossing. For each upward part of the individual cycles, the absolute values are counted and can be plotted with regard to their crossing frequency (Fig. 2.15). The level crossing counting method does not include the information about amplitude and mean, but gives the information about the absolute values of the signal. The mean value is not counted directly, but can be assessed by looking at the level with the highest crossing frequency. Although in the example from the figure the upper bound of level 3, which is coincident to load #34, has the highest crossing frequency, there are still cycles which do not cross that level. Hence, it is not possible to get an accurate number of the cumulative exceedances by using the level crossing counting method: There can be several cycles which are completely below or beyond the level with the highest crossing frequency. Another issue of such a counting method is that cycles having very small fluctuations within a level are not counted anyway. And there is no guarantee to find the absolute peak accurately, because that is somewhere between the lower and upper bound of the highest level.21 Another popular counting method is the range pair counting. That is looking for an ascending cycle part and a descending part of the same value and put that information together with the mean value. The matching subcycles may occur directly after each other, or much time may be spent to find an appropriate cycle part to finish the individual cycle counting. The most popular counting method nowadays is rainflow counting. That method was introduced in 1968 by the Japanese researchers M. Matsuishi and T. Endo and it is extracting closed loading reversals that are found in the time histories.

21 The computer performance today allows many more equidistant levels than the small number used for the example in Fig. 2.15, so this type of inaccuracy is not necessarily a big issue nowadays.


In 1972, it was published in an English journal paper for the first time [8]. Since this method works like looking at the flow of rain falling on a pagoda and running down the edges of the roof, the name 'rainflow counting' was given. To imagine that the time history is a pagoda, the time history is turned clockwise by 90°, so the starting time is at the top. Now, all the peaks are imagined as sources of water that drip down the pagoda roof (Fig. 2.16). Starting with the tensile load reversals, the number of half-cycles is counted by looking for a termination of the water flow because either

• the end of the time history is reached,
• the flow itself merges with a flow which started earlier at another peak, or
• a trough of a greater magnitude is encountered.

This is repeated for the compressive troughs. Then a magnitude is assigned to each half-cycle, equal to the difference between its start and its termination. Finally, the half-cycles having an identical magnitude, but opposite sense, are paired up, similar to what we mentioned for range pair counting. Typically, there are some half-cycles for which a matching pair cannot be found. Those are then stored in a residual. Although the image of water that drips down a pagoda helps a lot to explain the concept of that counting method, it is actually much more closely related to fatigue mechanics than such an image suggests: A full rainflow cycle is defined as a load range formed by two points which are bounded within adjacent points of higher and lower magnitude. Coming back from the general term 'load' to the more fatigue-specific terms 'stress' and 'strain', we can see a closed stress–strain hysteresis loop when looking at the stress path which returns past the first turning point (Fig. 2.17). Since in material mechanics it is assumed that fatigue damage is attributed to closed hysteresis loops, the rainflow counting method offers a strong physical background too. The area which is enclosed by the closed hysteresis in the cyclic stress–strain curve can be viewed as an energy term which is applied to the material. Such energy deforms the material plastically and leads to fatigue damage effects. Various different algorithms exist for detecting closed hysteresis loops; they typically use three or four successively sequenced peaks or valleys. Standard practices for rainflow cycle counting are given, for example, by ASTM E1049-85(2017), which documents the algebraic formulas using Boolean operators for carrying out the counting process. Rainflow counting results create a matrix in which the individual ascending and descending cycles, together with their maximum and minimum, or their mean and amplitude information, are stored efficiently. Although in this way a fair amount of relevant data can be extracted from the time history, some information is lost: The frequency of the loading as well as the time-related order and the shape of the individual cycles are not processed by the counting methods.
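To make the counting procedure concrete, the following minimal sketch implements a simplified four-point rainflow counter in Python. It is not the full ASTM E1049 pairing procedure—in particular, the treatment of the residual half-cycles is reduced to simply returning them—and all names are illustrative:

```python
def rainflow(series):
    """Simplified four-point rainflow counting.

    Returns closed cycles as (range, mean) tuples plus the residual
    of unmatched turning points (open half-cycles).
    """
    # Reduce the signal to its turning points (reversals).
    tp = []
    for x in series:
        if tp and x == tp[-1]:
            continue                      # skip repeated samples
        if len(tp) >= 2 and (tp[-1] - tp[-2]) * (x - tp[-1]) > 0:
            tp[-1] = x                    # same direction: extend the run
        else:
            tp.append(x)

    cycles, stack = [], []
    for x in tp:
        stack.append(x)
        # Four-point criterion: an inner range bounded by two larger
        # adjacent ranges forms a closed hysteresis loop.
        while len(stack) >= 4:
            a, b, c, d = stack[-4:]
            if abs(c - b) <= abs(b - a) and abs(c - b) <= abs(d - c):
                cycles.append((abs(c - b), 0.5 * (b + c)))
                del stack[-3:-1]          # remove b and c, keep a and d
            else:
                break
    return cycles, stack

# Example: a short sequence of load peaks and valleys
cycles, residual = rainflow([0, 5, 1, 4, 2, 6, -1, 3, 0])
print(cycles)    # closed cycles as (range, mean)
print(residual)  # unmatched reversals stored in the residual
```

For the sample signal, the two inner cycles (1–4–2 and 5–1) close as full cycles, while the large excursions remain in the residual—exactly the behavior the pagoda-roof analogy describes.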


Fig. 2.16 Rainflow counting method scheme: a Load time history. b Analogy to water drips from pagoda roof


Fig. 2.17 Turning points processed in rainflow counting form a closed hysteresis loop

Using counting methods does not mean having access to the complete information of the time history, but relying on an excerpt of data which can be visualized easily and used to capture the most dominant effects for fatigue. A cumulative exceedance plot provides an understandable representation of a time history. Such a spectrum plot shows the information about the peak and the number of occurrences at that load level as well as the information about the spectrum shape and total number of cycles. That can be used as proper information to compare different roads or PG sections (Fig. 2.18). The load characteristics can be visualized and understood much better using a cumulative exceedance plot than with the time history. When repeating specific driving conditions on a PG with different people, the driver-related spectra provide more complete information about what happened on the tracks (Fig. 2.19). And, as we will see a bit later, the spectrum can be used beneficially for serving the stress–strength interference model in an operating life analysis. Basically, the term 'operating life' refers to operational conditions that are related to the intended use. Hence, it is necessary to define those operational conditions carefully, because they are a decisive factor for design-to-durability. If there is no clear definition of operational conditions, it is simply not possible to design for the pursued lifetime. In automotive product development, that is the reason for using dedicated proving grounds: Here, the operational conditions are represented from the rough side. The individual track sections provide harsh conditions one after another, but still give an image of the intended use. But the term 'intended use' has to be aligned completely to the markets in which a product will be sold (Fig. 2.20). Roads, highways and bridges are commonly referred to as the backbone of a transportation system and serve as a lifeline of commerce and economic activity. But even Western countries lack a long-term sustainable transportation funding source to


Fig. 2.18 Comparison of rough road spectra

Fig. 2.19 Comparison of PG spectra from different test drivers


Fig. 2.20 Railroad crossing in Beijing–Tianjin region, China | 2013

pay for needed investments and improvements22—how can that work for emerging countries which rely on cheap road transportation for economic growth? In Africa, the roads are in bad condition and need huge investments: The proportion of paved roads today is only 20% of that in developed countries. As a result, transport costs are 63% higher in Africa than in developed countries, limiting its competitiveness in the international and local markets [9]. Without fundamental knowledge and insight about market-specific conditions, it is difficult to design proper products: After the re-unification of Germany in 1990, people were surprised to see that 58% of all roads in East Germany had severe to catastrophic damage and introduced loads to vehicles that were unknown to West German car manufacturers. And that makes it sometimes challenging for engineers to design their product properly: Intended use has to consider rare events which cannot be avoided entirely. In Europe that might be harsh loads introduced by unintentional curb impact on icy roads. That is special event loading that happens at low speed and may introduce severe loads to the structure, which have to be considered for a small number of occurrences without compromising product safety or durability.

22 'Driving on poor roads with deteriorating conditions costs motorists roughly $67 billion in additional operating and repair costs annually' says a policy analysis of the Committee for Economic Development in the USA in May 2017.


Fig. 2.21 Overloading a light duty truck in Beijing, China | 2013

But impacting such a curb at high speed is certainly nothing a designer has to consider as a permitted operational condition. That is clearly beyond any operational limit and has to be treated as abuse load. The issue of overloading has to be seen similarly: Basically, freight carriers are motivated to set load weight levels that will yield maximum profits and may exceed the weight limits defined by the vehicle manufacturer (Fig. 2.21). Overloading is an inevitable outcome of economic growth: A study [10] reported for Indonesia that 22% of the trucks exceeded the axle weight limit and 38% exceeded the pavement design limit, causing 90% of the pavement damage. In developed countries, the overloading percentage is 2–5%, while in emerging countries it can reach as high as 80% of the total number of trucks on the road.

2.2.2 Spectrum

Sometimes information about loads and spectra is available directly in norms, standards and guidelines: DIN 15018 included such data for the verification and analysis of the steel structures of cranes, an approach which was later transferred to ocean-going ships. Today, EN 13001 replaces the old DIN and gives in its Part 3-1 relevant information about limit states and proof competence of steel structures. The spectra proposed for the steel structures of cranes can be expressed by using an exponent χ > 2 in the following equation:

$$H = H_0^{\,1-\left(S_a/S_{a,\max}\right)^{\chi}} \qquad (2.4)$$


The cumulative number of cycles H is referenced to a constant number H0 (e.g., 10⁶), and the stress amplitude Sa is normalized by the maximum stress amplitude Sa,max. The spectrum shape parameter χ can then describe various different unit spectra such as

• χ = ∞: constant amplitude loading,
• χ > 2: crane structure loading acc. to DIN 15018,
• χ = 2: stationary random process (Gaussian process),
• χ = 1: straight line spectrum in a semi-log axis system (e.g., smooth tracks, sea waves, long-time observations),
• χ = 0.8: log-normal distributed loading (e.g., wind loads).

In this way, constant amplitude loading can be seen as a particular case of a generalized description of spectra. While constant amplitudes come with an infinite exponent, the random nature of variable amplitude loading often shows a huge number of small load cycles and only rare peaks. We will see what that means for initiating fatigue and damaging components.
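As a quick numerical illustration of Eq. (2.4), the following sketch evaluates the cumulative exceedances for a few normalized amplitudes; the chosen values (H0 = 10⁶, χ = 2) are merely illustrative:

```python
def exceedances(sa_norm, chi, h0=1e6):
    """Cumulative number of cycles H exceeding the normalized amplitude
    sa_norm = Sa / Sa,max for a unit spectrum according to Eq. (2.4)."""
    return h0 ** (1.0 - sa_norm ** chi)

# chi = 2 corresponds to a stationary Gaussian random process
for sa in (0.2, 0.5, 0.8, 1.0):
    print(f"Sa/Sa,max = {sa:.1f}  ->  H = {exceedances(sa, chi=2.0):,.0f}")
```

Note that the maximum amplitude (Sa/Sa,max = 1) is exceeded exactly once, while small amplitudes occur in huge numbers—the typical shape of operational spectra mentioned above.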

2.2.3 Finite Element Analysis

Once the conditions of the operational loading as well as their limits are defined, the strains and stresses of relevant components have to be analyzed. That can be done experimentally by using strain gages which are applied directly to those parts, or math modeling techniques are used to determine stresses and strains. While strain gage applications provide strain readings for a very localized area only, math modeling often has the advantage of providing more complete information about the stress field. Finite element analysis (FEA) is a powerful tool to gain stress data of components for different load cases, which then can be used for an in-depth analysis of the operational loading (Fig. 2.22). FEA is a numerical technique that can solve the partial differential equations (PDE) which are used to describe physical phenomena. For running FEA successfully, relevant model properties need accurate calibration and comparison with experiments. FEA is an effective tool and widely used to create stress data for use in operating life analyses, but it always has to be considered that it is an approximation: The quality of results depends on mesh convergence and the discretization error. Using FEA for analyzing mechanical components and systems allows engineers to obtain displacements as direct results from solving the PDE-equivalent matrices. Strains and stresses are then derived from the displacements by using relatively simple interpolation functions, which degrades the accuracy. Simulation engineers have to ask themselves how accurate their computer models are, and they have to be aware of simulation-specific characteristics such as


Fig. 2.22 Functional prototype and FEA simulation results of conrod

• CAD geometries are often stored as B-splines to represent the exact geometry, while for FEA these geometries are discretized, so the mesh may not always fit the exact geometry well,
• the distortion of elements cannot be avoided when meshing complex geometries, but may affect the accuracy of the obtained solution,
• meshing leads to an approximation by the discretized finite model, which makes mesh-independent results impossible and means that the mesh convergence of the results has to be checked (a minimal convergence check is sketched below),
• in the presence of singularities, strains and stresses do not converge,
• linear finite elements often show volumetric locking effects when used in problems involving incompressibility, as in modeling rubber materials or general plasticity.
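One generic way to check mesh convergence—not tied to any particular FEA code, and not a method prescribed by this book—is a Richardson-type extrapolation from results on three successively refined meshes. The refinement ratio r and the sample stresses below are assumed values:

```python
import math

def richardson(f_coarse, f_medium, f_fine, r=2.0):
    """Estimate the observed convergence order p and an extrapolated,
    approximately mesh-independent value from results on three meshes
    refined by a constant ratio r."""
    p = math.log(abs((f_coarse - f_medium) / (f_medium - f_fine))) / math.log(r)
    f_extrap = f_fine + (f_fine - f_medium) / (r ** p - 1.0)
    return p, f_extrap

# assumed peak stresses (MPa) from coarse, medium and fine meshes
p, f_extrap = richardson(182.0, 196.0, 201.0)
print(f"observed order p = {p:.2f}, extrapolated stress = {f_extrap:.1f} MPa")
```

If the extrapolated value is close to the fine-mesh result, the discretization error is small; near a singularity, the estimated order will not stabilize, signaling the non-convergence mentioned above.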

2.3 Strength Analysis

2.3.1 Fatigue

When subjected to repeated cyclic loads, metals, ceramics and plastics exhibit damage by fatigue. Since the stress levels of those cycles are relatively small, fatigue failures are not caused by a single cycle; a large number of cycles is needed. Fatigue comes in the form of initiation or nucleation of a crack, followed by its growth until the critical crack size is reached, which then leads to rupture. The


crack initiation in metals is caused by irreversible dislocation movement leading to intrusions and extrusions. The dislocation itself is a flaw in the lattice of the metal which causes slip to occur along favorably oriented crystallographic planes. Metals have a dislocation density at zero strain and a rate of dislocation accumulation with increasing applied strain.23 These dislocations agglomerate into bundles almost perpendicular to the direction of the slip. The crack initiation is also intensified by environmental effects: When oxygen diffuses into slip bands, they become weakened. That is a reason why new cracks are typically initiated at the surface: Since the surface grains are in intimate contact with the atmosphere, they are more susceptible to fatigue. And a surface grain is not wholly supported by adjoining grains, so it can deform more easily than a grain from inside that is surrounded on all sides by other grains. If the surface of the component is hardened, either metallurgically or by surface hardening, the fatigue strength increases as a whole. Fatigue cracks can nucleate under cyclic loads where nothing would happen under static monotonic loads. Even more important: Fatigue crack initiation and growth is possible at stress levels far below the monotonic tensile strength of the materials. Once a crack is initiated, it advances continuously by very small amounts, driven by the magnitude of load and the geometry of the component. A microstructurally small crack is strongly affected by the microstructure of the metal, and its growth stops at microstructural barriers if the applied stress level is below the fatigue limit. Then the size of such a crack is generally limited to the order of the grain size. The next stage in crack size is physically small cracks, having a length of the order of 3–4 times the grain size. From a practical point of view, fundamental knowledge of microstructurally and physically small cracks is helpful to manage production quality, since those indicate the size of flaws which can be accepted in manufacturing. A damaged surface showing fatigue is characterized by two types of markings: beachmarks24 and striations. They indicate the position of the initial crack tip and show concentric ridges that expand from the crack initiation site, often in a circular or semicircular pattern. Beachmarks can often be observed with the unaided eye, and an individual beachmark band represents a period of time in which the crack propagated. Different from that, striations are microscopic in size and represent that part of a beachmark by which the crack advances during a single load cycle: There is a huge number of striations within a single beachmark. The width of those striations increases with increasing stress range. Fatigue is intrinsically a dissipative process which results in the production of heat, and thus some change in temperature. Particularly in low-cycle fatigue, metals can undergo a significant rise in temperature due to hysteresis heating. The temperature response of a material has been shown to be a good indicator of the fatigue limit in certain metals. Hence, thermography has been used to observe the progression of

23 Dislocation density in metals can be measured by using X-ray diffraction (XRD) analysis.
24 Beachmarks are sometimes called clamshell marks too.


fatigue damage, and some methodologies have been developed for the prediction of fatigue life. Looking at tensile testing as a particular case of cyclic fatigue testing is not quite correct, but may help to find a classification of fatigue. Tensile testing introduces a slowly applied force on the opposite ends of a specimen and pulls it until it breaks: The force time history of such a test can be seen as the very first part of a load cycle which starts at zero and ends at UTS. That is basically one-quarter of a full fatigue load cycle and introduces plastic instability such as necking or the formation of a shear band. The tensile test specimen then no longer deforms uniformly but sees the deformation localized to a small region only, which comes together with a loss of load bearing capacity and a large increase in plastic strain rate in the localized region. Reducing the peak load and continuing with fluctuating loads is then the step from tensile to cyclic fatigue testing. When the load level is still high enough for plastic deformation to occur, this type of fatigue is known as low-cycle fatigue (LCF): In such a case, the number of cycles needed to initiate cracks and to reach fracture is small, often in the range of 1,000 to a few tens of thousands. In the LCF regime, using stresses is less useful than using strains for load characterization. Consequently, low-cycle fatigue is also called strain-based fatigue. When the load level becomes even lower and the deformation is primarily elastic, that regime is called high-cycle fatigue (HCF): Here, the number of cycles needed for fracture is high and may range from 100,000 to a few million. Using stresses for characterizing the load makes more sense, and therefore it is called stress-based fatigue. In such a way, a diagram can be drawn that shows the applied load amplitude and the number of cycles to create a failure (Fig. 2.23). Although the monotonic load increase of a tensile test is different from the cyclic loading of fatigue testing, the tensile strength can be considered an upper limit for fatigue. Because in the LCF regime typically more than 1,000 load cycles are applied, the fatigue results are significantly shifted toward the lower right side of the load versus

Fig. 2.23 Information from tensile testing and cyclic testing in a single graph


cycles-to-failure diagram.25 It is very understandable that the number of cycles-tofailure increase with the load level becoming lower—it is not yet clear for us what that relation precisely looks like. And, basically, in such a diagram, the values on the vertical axis are pretty different from each other: The tensile testing is characterized by monotonic increasing stress, while the cyclic data comes with strains for the LCF regime and with stresses for high-cycle fatigue. Mixing up these different properties does not make much sense and causes confusion—hence, it is a more descriptive diagram than one which is really used to document material strength characteristics. Another question may come into our mind when thinking about the lower end of such a curve: Is there a lower load limit for fatigue which then would give an infinite life or may allow the crack to propagate extremely slowly? Does a material fatigue limit exist that does not allow a nucleated crack to grow at all? What happens if the applied stress is close to or less than such a fatigue limit? Results from early fatigue testing were published in 1870 by August Wöhler after he conducted extensive research work on railway axles. ‘Wöhler’s law’ became a fundamental declaration of the technical contexts related to fatigue [11]: Material can be induced to fail by many repetitions of stresses, all of which are lower than the static strength. The stress amplitudes are decisive for the destruction of the cohesion of the material. The maximum stress is of influence only in so far as the higher it is, the lower are the stress amplitudes which lead to failure.

Wöhler himself presented his results from fatigue testing in the form of tables; later those results were plotted as curves, and that general form of documenting the relation between stress amplitude Sa and number of cycles-to-failure N has been called the 'Wöhler curve' since that term was first introduced by a paper in 1936 [12]. This curve was described as a straight line in a semi-logarithmic graph:

$$\log N = a - b \cdot S_a \qquad (2.5)$$

In 1910, Olin Hanson Basquin reshaped the original form of the 'Wöhler curve' and plotted it in a double-log graph as a line [13], which is still used today to describe the results from a wide range of LCF and HCF tests:

$$\log N = a - b \cdot \log(S_a) \qquad (2.6)$$

Arvid Palmgren—a Swedish researcher who worked on life prediction models for ball and roller bearings—then proposed in 1924 a four-parameter equation that made it possible to describe an extended graph from finite life to a material fatigue limit [14]:

$$\log(N + \Gamma) = a - b \cdot \log(S_a - S_{a,e}) \qquad (2.7)$$

In Palmgren’s paper, we can read: 25 Keep in mind that in such a diagram at least the abscissa is in logarithmic scale—hence, the space

with regard to the number of cycles between different points is much bigger than expected.


Fig. 2.24 S–N graph indicating an endurance limit

If we start out from the assumption that the material has a certain fatigue limit, meaning that it can withstand an unlimited number of cyclic loads on or below a certain, low level of load, the service life curve will be asymptotic.

Hence, the lower end of the 'Wöhler curve' would then be a horizontal line characterizing the stress amplitude of the endurance limit Sa,e (Fig. 2.24). This portion of an S–N curve represents the maximum stress amplitude that the material can withstand for an infinitely large number of cycles. So, it has to be decided how to precisely define such an endurance limit by parameters from testing: At which number of cycles does the material fatigue limit start, and what is its level? One of the widely used procedures to examine the characteristic mean parameters of an endurance limit is the 'staircase method': Based on a pre-assigned limit cycle, specimens are tested at defined load levels which are equidistant from each other. If a specimen reaches the limit cycle without failure, the following specimen is tested at the next higher load level; if a specimen fails before reaching the limit cycle, the following test is performed one load step lower. After some testing, there are a few results for 'no failure at limit cycle' as well as a few for 'failure at limit cycle'. Finally, the evaluation of the endurance limit is done by using the event with the smaller number of occurrences [15]. This method uses a relatively small number of specimens—often less than 25—and provides acceptable results with regard to the mean value of the endurance limit. Other methods—such as the Probit method or the two-point method—are better with regard to a more complete proposition about the statistics of the endurance limit, which then includes the deviation about the mean too.
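The up-down logic of the staircase method can be sketched in a few lines. This is a simplified illustration—the estimator shown just averages the levels of the less frequent event and is not the full Dixon–Mood evaluation of [15]; all numbers are assumed:

```python
def staircase_levels(outcomes, start, step):
    """Load level for each specimen in an up-down (staircase) test:
    after a runout the level is raised by one step, after a failure
    it is lowered by one step.
    outcomes: True = failure before the limit cycle, False = runout."""
    levels = [start]
    for failed in outcomes[:-1]:
        levels.append(levels[-1] - step if failed else levels[-1] + step)
    return levels

# assumed test sequence for 10 specimens
outcomes = [False, False, True, False, True, True, False, True, False, True]
levels = staircase_levels(outcomes, start=200.0, step=10.0)

# evaluate with the less frequent event, cf. the procedure in [15]
rare = True if outcomes.count(True) <= outcomes.count(False) else False
mean_est = (sum(l for l, o in zip(levels, outcomes) if o == rare)
            / outcomes.count(rare))
print(levels)
print(f"estimated mean endurance limit ~ {mean_est:.1f} MPa")
```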


Basically, all relevant work about fatigue and its background until the early twentieth century—including the finite life region and the endurance limit—was motivated by ferrous metal failures. Although the use of ferrous metals started around 1200 BC when iron production became commonplace, non-ferrous metals have been used since the beginning of civilization: Copper has been used since 5000 BC, and that marked the end of the Stone Age. With the later invention of bronze, an alloy of copper and tin, the Bronze Age started around 3000 BC. Aluminum—a more modern type of low-density, non-ferrous metal—was found in 1808 for the first time26 and then used for building airships at the end of the nineteenth century. So, material fatigue meant 'ferrous metal fatigue' for a long time and included the concept of having an endurance limit which represents a stress level below which the material does not fail and can be cycled infinitely. That is due to interstitial elements in ferrous metals which help to pin dislocations, thus preventing the slip mechanism that leads to the formation of microstructurally small cracks. While for non-ferrous metals such as aluminum an endurance limit was never assumed, for ferrous metals with body-centered cubic (BCC) crystal structure it was believed that if the material survives 10⁶ or 10⁷ load cycles, it would never fail with an increasing number of cycles at the same stress level. Because of limitations in test rig hardware as well as limited fatigue life requirements, for a long time there was no need to look at what happens beyond 10⁷ load cycles. Very high-cycle fatigue (VHCF) of engineering materials is a phenomenon that first became acknowledged and evoked scientific interest only a few decades ago. It was observed that some materials, when subjected to a sufficiently high number of load cycles, actually in the range of 10⁸ to 10¹⁰, fail at stress levels that were traditionally considered safe. In the VHCF regime, the process of crack formation consumes about 80–99% of the total fatigue life, while failures at lower numbers of load cycles have a significant portion of crack growth. Fatigue testing up to 10¹⁰ load cycles needs proper test rig hardware which is capable of providing a very high test frequency: Using servo-hydraulic test rig hardware, which runs at a maximum frequency of 100 Hz, a single test is finished after three years. An ultrasonic fatigue test machine can reach a frequency of 20 kHz and makes it possible to reach the 10 billion cycles27 within a week. In 1950, the first test rig hardware with piezoelectric transducers was introduced, transforming 20 kHz electrical signals into mechanical vibrations of the same frequency. Starting from the early 1980s, results from ultrasonic fatigue testing were published. The materials for VHCF testing are primarily ferrous materials, titanium alloys, nickel alloys, aluminum alloys and polycrystalline copper, which are widely used in aeronautics, aerospace, automotive, railway and other industries. Those materials provide the vast majority of parts and components that operate in VHCF conditions: Gas turbine components, cylinder heads and blocks of cars and trucks, ball bearings, high speed machine tools, ship engines as well as high speed train bogies and wheelsets.

26 Actually, aluminum is the most abundant metal in the world and the third most common element, comprising about 8% of the earth's crust, which makes it today the most widely used metal after steel.
27 Going much beyond 10⁶ and even up to 10¹⁰ cycles means to be in the giga-cycle regime, which is another word for VHCF.


Experiments in the VHCF regime have shown that most materials do not have an endurance limit at 10⁶ or 10⁷ cycles; instead their fatigue strength gradually decreases as fatigue life reaches 10⁸ to 10¹⁰ load cycles. Therefore, the concept of a fatigue limit is often not adequate, and it would be more appropriate to introduce a fatigue strength at a certain number of cycles. Moreover, considerable attention has to be paid to the crack initiation mechanisms at VHCF. For a high-performance nickel–chromium–cobalt alloy used in gas turbine hot section components, it was found that its fatigue strength decreases by 200 MPa when moving from 10⁶ to 10⁹ load cycles [16]. The nickel-based super-alloy Inconel 718 had fatigue failures at >10⁷ load cycles, but by further decreasing the stress amplitude a limit of 530 MPa was reached below which failures did not happen even at 10⁹ cycles [17]. For an AZ31 magnesium alloy, a drop of 30 MPa can be found when going from 10⁶ to 10⁹ load cycles, which is a >20% decrease considering the low strength of the alloy [18]. Coming to ferrous metals, an important group is spheroidal graphite cast irons, which contain 3.2–3.6% graphite—i.e., carbon. Here, the graphite has a round nodular shape that adds the important 'spheroidal' to the material's name. Typical components made from 'S.G. iron' are suspension arms, gears or crankshafts, and their required life expectancy exceeds 10⁹ load cycles. In fatigue tests, it was demonstrated that the specimens continue to fail beyond 10⁷ cycles and a fatigue limit was not observed [19]. Further examination showed two different types of crack initiation: While in the HCF regime the cracks started at the specimen surface, with increasing number of cycles the cracks showed interior initiation. Even low-carbon ferritic steel shows a slight decrease of fatigue strength with increasing number of cycles: About 10% is the difference between the fatigue strength at 10⁶ and 10⁹ load cycles [20]. For stainless steels, the situation is quite complicated with regard to a fatigue limit. The type 17-4 PH martensitic stainless steel is the most widely used of all the precipitation-hardening stainless steels and comes with aerospace applications, oil and gas equipment as well as general metalworking: Its fatigue strength drops by almost 25% between 10⁶ and 10⁹ load cycles. Different from the martensitic alloy, the stainless steel type 304 is the most versatile and widely used stainless steel28 and shows a rather asymptotic fatigue strength evolution in the VHCF regime [21]. Nevertheless, it has to be considered that the strength level of type 304 is significantly lower than that of the type 17-4 PH stainless steel and cannot compete regarding its fatigue properties. According to the same paper, pearlitic steel grades that are used for rail track applications showed a 20% drop in fatigue strength between 10⁶ and 10⁹ load cycles too.

28 Actually, that material is often referred to by its old name 18/8, which is derived from its nominal composition with 18% chromium and 8% nickel.


It can be concluded that the majority of materials show a continuous degradation of their fatigue properties with an increasing number of load cycles: The material fatigue limit is actually a non-existent concept. Claude Bathias published his pioneering work about giga-cycle fatigue in many papers and stimulated worldwide research. He showed that the crack often starts at subsurface defects rather than at the surface, which can be explained by the fact that in giga-cycle fatigue the stresses are too low to produce plastic deformation in the form of surface roughening. Hence, internal defects such as non-metallic inclusions or pores become the main source of crack initiation. Here, fatigue crack initiation is provoked by the stress concentration from the metallurgical microstructure. While at high stresses the fatigue life is primarily determined by crack growth, at low stresses most of the cycles-to-failure are related to the process of crack initiation. Taking into account that the region of finite life does not merge with an endurance limit, the S–N curve becomes a multistage graph. While the region in which we predominantly expect surface fatigue has been examined for more than 150 years now, limited results are available for more than 10⁷ load cycles. Depending on the type of material, the trend toward subsurface fatigue at high numbers of cycles seems to be present, but cannot be generalized in a form that gives an accurate representation of the VHCF regime. In a double-logarithmic diagram, the 'classical fatigue life' is represented by a straight line having a slope k and a lower end in the range of 10⁶ to 10⁷ load cycles (Fig. 2.25). Cetin M. Sonsino—one of the grand seigneurs in durability research whom I have the great pleasure to know and to work with—brought those specimen results into an approach for component-related fatigue life assessment and recommended to use a

Fig. 2.25 Multistage S–N curve


Fig. 2.26 Bilinear S–N curve with a slope k* dependent on material type

5% decrease of fatigue strength per decade for iron-based materials and magnesium alloys, and a 10% decrease for aluminum alloys and welded joints, which typically contain tensile residual stresses [22]. Hence, there is a slope k* in the range of 22–45 instead of the horizontal line of the endurance limit. That can be seen as a recommended practice for the continuation of the S–N graph in the HCF and VHCF regime (Fig. 2.26). Although much is said about how fatigue evolves with regard to stress amplitudes and the corresponding number of cycles-to-failure, there is still a missing link between the parts of such a bilinear approximation of the S–N curve: In which way do both lines come together and mark the transition from the finite life region to the more horizontal continuation? When looking at such a graph with experimental data, it is often not easy to define that transition because the available data does not allow an accurate determination (Fig. 2.27). As far as our eyes can see, it is possible to use two lines for an approximation of that data, and the lower end of the first line can be the start of the second one. But, actually, the term 'line' is a bit misleading because the data is plotted in a double-logarithmic graph. What looks like a 'line' here is actually a power function: Two of those may be used to represent the complete experimental data in that graph. Typically, the transition region between those concatenated curves is simplified by just a knee point, which then represents the end of the 'classical fatigue life' (Fig. 2.28). We now have an understanding of the major content of an S–N curve, which puts together the information about individual stress amplitudes and their corresponding fatigue life results. But there is still a knowledge gap with regard to bridging the upper end of the curve to the material strength properties, which are dominated by stresses close to


Fig. 2.27 Experimental data from cyclic tests in a double-logarithmic graph

Fig. 2.28 S–N graph with knee point

the yield limit or even above.29 When a material starts to deform plastically, a small stress increase leads to a large strain increment. Then the induced fatigue damage will be due to global plasticity, and it makes sense to consider strains rather than stresses in a fatigue model.

29 Although high-load amplitudes may be introduced, the static load carrying capacity is not exceeded.


An approach was found in the mid-twentieth century: The Coffin–Manson criterion helps to describe the behavior of metallic materials under cyclic inelastic strain amplitudes and was developed for components which are loaded by relatively few cycles at elevated temperatures [23], for example disks of gas or steam turbines. Although Coffin and Manson performed plastic strain-controlled fatigue tests not even to 10⁴ cycles, their criterion is often used in a more general way than just for cyclic loading with large strain values. They described the relationship between strain amplitude εa and fatigue life Nf30 using three material parameters which are examined from the tensile test: Young's modulus E and the ultimate tensile strength UTS are basic characteristics of such a test, and a ductility parameter φ can be assessed from the tensile specimen cross-section before the test, A0, and after fracture, A*. The ductility parameter largely corresponds to the strain at fracture ε*:

$$\varphi = \log \frac{A_0}{A^*} \cong \varepsilon^* \qquad (2.8)$$

The Coffin–Manson relationship then combines a fatigue model for the elastic part—referred to as 'el'—with another for the plastic part—referred to as 'pl':

$$\varepsilon_a = \varepsilon_{a,el} + \varepsilon_{a,pl} = 1.75 \cdot \frac{UTS}{E} \cdot N_f^{-0.12} + \frac{1}{2} \cdot \varphi^{0.6} \cdot N_f^{-0.6} \qquad (2.9)$$

We can see for each part a specific exponent: the fatigue strength exponent for the elastic part—varying in the range of −0.05 to −0.12—and the fatigue ductility exponent, which is in the range of −0.5 to −0.7. Hence, that gives a complete description of the fatigue strain-life curve (Fig. 2.29). The applicability of that concept much beyond the LCF regime has been discussed ever since Coffin and Manson came up with their results about fatigue life under cyclic thermal stresses. Specifically, the fatigue model for the elastic part often does not approximate experimental data properly. More recent research from Wagener proposes a complete fatigue life curve that is capable of describing a continuous cyclic load versus strength relationship from the LCF to the VHCF regime for aluminum, steel and cast iron, using both strain- as well as stress-controlled tests [24].
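A minimal numerical sketch of Eq. (2.9): the material values below (UTS = 600 MPa, E = 210 GPa, φ = 0.8) are assumed placeholders for a generic steel, not data from this book:

```python
def strain_amplitude(nf, uts, e_mod, phi):
    """Strain amplitude for a given fatigue life N_f according to the
    Coffin-Manson relationship in the form of Eq. (2.9)."""
    elastic = 1.75 * (uts / e_mod) * nf ** -0.12   # stress-dominated part
    plastic = 0.5 * phi ** 0.6 * nf ** -0.6        # plasticity-dominated part
    return elastic + plastic

# assumed values: UTS = 600 MPa, E = 210,000 MPa, ductility phi = 0.8
for nf in (1e2, 1e4, 1e6):
    eps = strain_amplitude(nf, uts=600.0, e_mod=210e3, phi=0.8)
    print(f"N_f = {nf:.0e}:  eps_a = {eps:.4f}")
```

Evaluating the two terms separately makes the crossover visible: at short lives the plastic part dominates, while toward the HCF regime the elastic part takes over.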

30 When using amplitudes, the term Nf actually means the number of reversals-to-failure: Since one reversal is just half of a cycle, pretty often we can read the term 2·Nf.

2.4 Damage Calculation

When adding the information from load data acquisition and fatigue testing to a single graph, a complete representation of load versus strength is available. From the load time history, a spectrum can be derived by using rainflow counting, and the results from fatigue testing are documented by the S–N curve (Fig. 2.30).


Fig. 2.29 Strain-life curve using Coffin–Manson criterion

Fig. 2.30 Load spectrum and material S–N curve in a single, double-logarithmic graph

In that way, relevant information about the cyclic loading as well as the fatigue strength is put together. Hence, that is exactly the information that has to be combined for an operating life analysis. Going back to the year 1924, when Arvid Palmgren published his paper about the fatigue life of ball bearings, a hypothesis for life prediction was introduced:

In the event of a cyclic variable load we obtain a convenient formula by introducing the number of intervals p and designate m as the revolutions in millions that are covered within


a single interval. […] In order to obtain a value for a calculation, the assumption might be conceivable that a bearing which has a life of n million revolutions under constant load at a certain rpm, a portion m/n of its durability will have been consumed.

Though it was aimed at the life prediction of ball bearings, it quickly became a generalized rule for fatigue damage accumulation. The Americans Langer in 1937 and Miner in 1945 [25] came up with the same approach, and today the linear damage accumulation is known as the Palmgren–Miner approach.31 In his paper from 1924, Palmgren assumed that a component may survive a number of n load cycles, which then characterizes its strength under the specific conditions. In a single interval of the component's service life, a number of m load cycles may actually occur at the same conditions. Then, within such an interval, a certain portion of its possible operating life will have been consumed—and that is simply the ratio of the actual load cycles m to the maximum cycles n given by the component's strength. In the case of cyclic variable loading, similar amplitudes can be merged into constant amplitudes for a number p of intervals—that is what a counting method may provide effortlessly. Having several intervals at which the component is individually loaded by a number of load cycles within a single interval, a total proportion of its maximum possible life expectancy is consumed: Using Palmgren's notation leads to

$$D = p \cdot \left( \frac{m_1}{n_1} + \frac{m_2}{n_2} + \frac{m_3}{n_3} + \ldots \right) \qquad (2.10)$$

Here, D stands for 'damage' and means that proportion of the maximum possible life that comes from the accumulation of the individual intervals. The complete life would be consumed when D reaches a value of 1. Palmgren proposed this form of a linear damage accumulation hypothesis for ball bearings; 21 years later, the aircraft engineer Miner published his paper, dealing with the damage calculation in the same way, independently from Palmgren, but with a broader application focus. Looking at the term 'damage' in such a way is a captivating idea: Since the S–N curve contains the information about the number of cycles-to-failure at a certain load level, this characterizes a damage of D = 1.0. Hence, an individual cycle at a certain cyclic stress amplitude Sa,i introduces an incremental damage Di which is in inverse proportion to the life-related cycles ni at that load:

$$D_i = \frac{1}{n_i}, \quad \text{when } S_a = S_{a,i} = \text{const.} \qquad (2.11)$$

Having the load spectrum as well as the S–N curve in a single graph, all the relevant information is available that is needed to perform a damage accumulation.

31 As it often happens with name assignment in science and engineering, Anglo-Saxon people prefer to see their own people as the most important. Hence, in English literature, the damage accumulation is often referred to as 'Miner's rule.'


Fig. 2.31 Partial damage of discrete intervals based on damage calculation

Individual load cycles may be grouped into constant amplitude intervals, and each of those blocks constitutes a partial damage contribution which is accumulated into the total damage D:

$$D = \sum_{i=1}^{p} \frac{m_i}{n_i} \qquad (2.12)$$
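A minimal sketch of Eq. (2.12) for a single-slope S–N curve; the spectrum and curve parameters below are assumed demo values:

```python
def miner_damage(blocks, sa_k, n_k, k):
    """Linear damage accumulation according to Eq. (2.12).
    blocks: list of (stress amplitude, applied cycles m_i)
    S-N curve: n_i = n_k * (sa_k / sa)**k  (single slope k)"""
    damage = 0.0
    for sa, m in blocks:
        n_allow = n_k * (sa_k / sa) ** k   # cycles-to-failure n_i
        damage += m / n_allow              # partial damage m_i / n_i
    return damage

# assumed spectrum of three constant-amplitude blocks
spectrum = [(200.0, 1e4), (150.0, 1e5), (100.0, 5e5)]
d = miner_damage(spectrum, sa_k=100.0, n_k=2e6, k=5)
print(f"damage sum D = {d:.2f}")   # failure assumed at D = 1.0
```

Printing the three partial damages individually shows the effect discussed next: the middle block, not the peak load, contributes the largest share.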

For each interval, a partial damage can be easily calculated and is then accumulated over all intervals, which contribute to a discrete cumulative frequency representing the continuous spectrum (Fig. 2.31). Again, failure is assumed at a damage sum of D = 1.0; any smaller value theoretically confirms that the operational load spectrum will not be severe enough to create failure. When looking at the individual portion of durability that is consumed at a certain level of cyclic stress, it is pretty easy to assess which part of a spectrum is the most damaging. To do so, the distance of the spectrum's boundary to the S–N curve has to be examined: Where the boundary comes close to the S–N curve, the individual damage becomes large. It is interesting that typically the most damaging part of a spectrum is not even close to the peak loads, because those may introduce a significant damage increment by a single cycle, but the corresponding number of cycles is small. Hence, the most damaging part of spectra which are derived from operational loading is more often related to moderate to medium cyclic stresses which appear frequently (Fig. 2.32). The Palmgren–Miner rule is a great example of methods in engineering: In an efficient way, the available information about load and strength is used to find an


Fig. 2.32 Evaluation of the most damaging part of a spectrum

approximative solution for calculating the lifetime. Here, the term damage is just related to the results of constant amplitude testing and does not give a meaningful definition with regard to material mechanics. The total damage D = 1 just means that a specimen is broken because it reached its maximum possible life expectancy, which is expressed by the related number of cycles. Since fatigue is actually the process of progressive and permanent change within the structure of a material which finally leads to the total loss of integrity—which actually happens at the end of a constant amplitude test too—it is questionable in which way the term 'damage' is able to characterize the progressive process of crack initiation and propagation as well. In other words: What does a damage sum D = 0.7 mean? Mathematically, that is just 70% of the maximum possible number of cycles-to-failure, but that value is lacking any information about the progression of the structural change. Again, it should be noted that we are talking about an engineering method which is used to assess the possible life expectancy—it is not a concept of material science. Hence, we may see other issues as well: When assuming an endurance limit, a certain portion of a spectrum may not contribute to the damage accumulation (Fig. 2.33). The complete stress levels which are below the horizontal line can be ignored, because no partial damage is expected to come from them. The Palmgren–Miner rule says that you have to divide by an infinite number of cycles, and now you can even quantify which amount of additional life expectancy you may get when assuming an endurance limit. But can we believe in such an algorithm when it comes to an operational load time history which introduces some damage by high loads initially and then continues with stress cycles below an anticipated material fatigue limit? What if the initial load


Fig. 2.33 Palmgren–Miner’s rule applied to the concept of material fatigue limit

interval initiated a few submicroscopic cracks and the subsequent loads are lower than the horizontal line? Since both the spectrum and the linear damage accumulation hypothesis do not know anything about the load sequence and history, assuming an endurance limit may lead to an underestimation of the damaging effects from low cyclic stresses: The material has a load history which is not accurately represented in the Palmgren–Miner damage accumulation. A widely used modification was introduced by Erwin Haibach in 1970, based on his research work at LBF [26]: He proposed a fictitious continuation of the S–N curve after the knee point, which then takes care of the progressive and permanent structural change when loaded by amplitudes above the endurance limit.32 The modification according to Haibach uses a slope k′ for that fictitious continuation which is calculated from the slope k of the S–N curve in the finite life region (Fig. 2.34). The slope of the fictitious continuation was aligned to a theoretical approach which was published a few years earlier [27] and brought a reliable background to Haibach's rule, though it looks so simple. When the curve before the knee point is continued without any different slope after that point, the damage calculation is still possible but may overestimate the damaging effects caused by smaller cyclic stress amplitudes. That was actually the first modification of the original Palmgren–Miner rule, and it was proposed by Corten and Dolan [28].

32 At that time, it was the majority view to accept an endurance limit for steel and cast iron—hence, the modification was related to the original Palmgren–Miner rule, which assumed a horizontal line below which nothing happened in terms of partial damage. When today using Sonsino's rule of a 5–10% decrease in fatigue strength per decade, that does not make any difference for the modified damage accumulation: Haibach's rule reflects the progressive changes within the material's structure when having a mix of cyclic stress amplitudes.
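The three treatments of the region below the knee point—the original rule with an endurance limit, the continuation without a knee, and Haibach's fictitious continuation—differ only in the allowable cycles assigned to small amplitudes. A sketch follows, using the common choice k′ = 2k − 1 for the Haibach slope (an assumption here; the text only states that k′ is derived from k):

```python
def cycles_to_failure(sa, sa_k, n_k, k, variant="haibach"):
    """Allowable cycles for a bilinear S-N curve with knee (sa_k, n_k).
    variant: 'miner'      -> endurance limit below the knee
             'elementary' -> slope k continued without a knee
             'haibach'    -> fictitious continuation, slope k' = 2k - 1"""
    if sa >= sa_k or variant == "elementary":
        return n_k * (sa_k / sa) ** k
    if variant == "miner":
        return float("inf")                    # no damage below the limit
    return n_k * (sa_k / sa) ** (2 * k - 1)    # Haibach continuation

# amplitude 20% below the knee point (assumed demo values)
for v in ("miner", "elementary", "haibach"):
    print(v, cycles_to_failure(80.0, sa_k=100.0, n_k=2e6, k=5, variant=v))
```

For the amplitude below the knee, the original rule predicts infinite life, the continuation without a knee the shortest life, and Haibach's modification lies in between—mirroring the conservatism ranking discussed in the text.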


Fig. 2.34 Modified damage accumulation hypothesis according to Haibach

Using such an S–N curve without a knee point often leads to pretty conservative results with regard to the maximum possible life expectancy and is today known as the elementary Palmgren–Miner rule. For certain materials or challenging environmental conditions—such as temperature, radiation or corrosion—a continuous slope of the S–N curve even beyond the HCF regime is quite possible; we will come to that topic later. An interesting modification came from Zenner and Liu in the early 1990s [29]: Here, both the cyclic stresses below an assumed endurance limit and the sequence of the stress cycles were considered. To create the reference strength information for the Zenner–Liu approach, the slope k of the S–N curve is modified using a standard or information from crack growth experiments. A linear damage accumulation hypothesis, in whatever form or modification from the original Palmgren–Miner rule, is not really a sophisticated approach, but it is still the most widely used method to assess a structure's fatigue life. To some extent that is confusing, because certainly the Palmgren–Miner rule cannot

• describe the physical processes related to the damage of the material,
• consider the damage progression from the different cyclic load levels which are typically found in an operational spectrum,
• recognize the random nature and sequence of cyclic load amplitudes as it happened in the time history.

In practical engineering, it is good to be aware of those limitations, but also to have something available that makes the concept still work, even though often D ≠ 1 at failure. To assess the fatigue life of components more accurately, we do not only have to look at the fundamental issues of the linear damage accumulation hypothesis, but also learn about the major influencing factors for the cyclic strength properties of materials and components, which comes with the next chapter.


A final topic related to damage calculation should be noted here: Since the term 'damage' is not a user-friendly definition, people often feel more comfortable when looking at an equivalent term which is better understandable. Such a term is the

• damage equivalent stress amplitude.

Instead of calculating a conceptional term such as damage, it is often preferred to calculate a damage equivalent stress, which will lead to the same damage as all loading blocks of a spectrum together. Hence, the damage accumulation hypothesis is used to transform the variable amplitude loading that is given by the spectrum into an equivalent constant amplitude loading. While previously there was information that is hard to understand—namely a stress spectrum which, together with an S–N curve, creates a certain damage sum—the transformation delivers a constant stress amplitude which can then easily be compared to the fatigue strength that is defined by the S–N curve. Although the number of cycles for which the equivalent stress is calculated can be chosen arbitrarily, the value should be—for logical reasons—identical to the number of cycles at the knee point Nk, because then it is especially easy to compare meaningful figures for the stress and the strength directly. Having a bilinear S–N curve, indicating with i what happens before the knee point and with j what happens after it, the damage equivalent stress amplitude Sa,eq can be calculated by [30]:

$$S_{a,eq} = \left[ \frac{\sum_i S_{a,i}^{k} \cdot m_i + S_{a,k}^{\,k-k'} \cdot \sum_j S_{a,j}^{k'} \cdot m_j}{N_k} \right]^{1/k} \qquad (2.13)$$
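Eq. (2.13) translates directly into code; here the i-blocks lie above the knee (slope k applies) and the j-blocks below it (slope k′), with all numerical values assumed for illustration:

```python
def damage_equivalent_stress(blocks_i, blocks_j, sa_k, n_k, k, k2):
    """Damage equivalent stress amplitude at the knee point, Eq. (2.13).
    blocks_i: (Sa_i, m_i) with Sa_i >= sa_k  (slope k applies)
    blocks_j: (Sa_j, m_j) with Sa_j <  sa_k  (slope k2 = k' applies)"""
    s = sum(sa ** k * m for sa, m in blocks_i)
    s += sa_k ** (k - k2) * sum(sa ** k2 * m for sa, m in blocks_j)
    return (s / n_k) ** (1.0 / k)

# assumed demo spectrum around a knee at Sa,k = 100 MPa, N_k = 2e6
sa_eq = damage_equivalent_stress(
    blocks_i=[(200.0, 1e4), (150.0, 1e5)],
    blocks_j=[(80.0, 1e6)],
    sa_k=100.0, n_k=2e6, k=5, k2=9)
print(f"S_a,eq = {sa_eq:.1f} MPa, utilization = {sa_eq / 100.0:.2f}")
```

Dividing the returned Sa,eq by Sa,k gives the utilization ratio discussed next.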

Using the damage equivalent stress, it is quite easy to calculate the utilization ratio on the stress level: That is Sa,eq, referred to the knee point, divided by the stress amplitude at the knee point Sa,k, which is actually the strength given by the S–N curve. This ratio describes how close the equivalent stress is to the allowable one (Fig. 2.35). Another clever concept to overcome the issue of calculating the conceptional term damage is to use the required fatigue strength (RFS): That figure was developed at Fraunhofer LBF and is pretty similar to the damage equivalent stress for a utilization ratio equal to 1. The RFS value is calculated by moving the S–N curve up or down until the stress at the knee point is equal to the damage equivalent stress (Fig. 2.36). This results in an artificial figure that represents the minimum fatigue strength that is necessary for the component to survive. Hence, this computed strength requirement can be compared to the actual fatigue strength and helps to understand the level of utilization without referring to a conceptional figure such as the damage. Both the damage equivalent stress and the required fatigue strength are engineering concepts to evaluate the utilization level of a component under variable amplitude loading while staying with a more understandable figure.


Fig. 2.35 Damage equivalent stress as constant amplitude loading (rectangular spectrum)

Fig. 2.36 Required fatigue strength as minimum S–N curve required


References

1. Winter M, Kindermann R, Humke B (2019) Fahrrad-Monitor 2019—Zahlen und Fakten. Bundesministerium für Verkehr und digitale Infrastruktur
2. Allianz Pro Schiene e.V. Übersicht—Erneuerungsbedarf bei Eisenbahnbrücken in Deutschland
3. Aichbhaumik D (1979) Steel variability effects on low cycle fatigue behavior of high strength low alloy steel. Metall Trans A—Phys Metall Mater Sci 3
4. Bright GW et al (2011) Variability in the mechanical properties and processing conditions of a high strength low alloy steel. ICM11. Procedia Eng 10
5. Wöhler A (1867) Versuche über die Festigkeit der Eisenbahnwagenachsen. Zeitschrift für Bauwesen 10. English summary: Engineering 4:160–161
6. Gaßner E (1941) Auswirkung betriebsähnlicher Belastungsfolgen auf die Festigkeit von Flugzeugbauteilen. Jahrbuch der Deutschen Luftfahrtforschung 1:972–983
7. Schijve J (1963) The analysis of random load-time histories with relation to fatigue tests and life calculations. In: Barrois W, Ripley EL (eds) Fatigue of aircraft structures. Pergamon Press, London
8. Dowling NE (1972) Fatigue failure predictions for complicated stress-strain histories. J Mater 7(JMLSA):71–87
9. United Nations—Economic Commission for Africa (2015) Transport for sustainable development—the case of inland transport. United Nations, UNECE
10. Koniditsiotis C (2017) The economic costs of overloading—case studies. In: Asia-Pacific Economic Cooperation—workshop on regulating high mass heavy road vehicles for safety, productivity and infrastructure
11. Schütz W (1996) A history of fatigue. Eng Fract Mech 54:263–300
12. Kloth W, Stroppel T (1936) Kräfte, Beanspruchungen und Sicherheiten in den Landmaschinen. Z-VDI 80:85–92
13. Basquin OH (1910) The exponential law of endurance tests. In: Proceedings of the annual meeting. American Society for Testing Materials, vol 10, pp 625–630
14. Palmgren A (1924) Die Lebensdauer von Kugellagern. VDI-Zeitschrift 68:339–341
15. Dixon WJ, Mood AM (1948) A method for obtaining and analyzing sensitivity data. J Am Stat Assoc 43:108
16. Bathias C (1999) There is no infinite fatigue life in metallic materials. Fatigue Fract Eng Mater Struct 07:559–565
17. Chen Q et al (2005) Small crack behavior and fracture of nickel-based superalloy under ultrasonic fatigue. Int J Fatigue 10:1227–1232
18. Yang F et al (2008) Crack initiation mechanism of extruded AZ31 magnesium alloy in the very high cycle fatigue regime. Mater Sci Eng Struct Mater: Prop Microstruct Process 491:131–136
19. Wang QY, Bathias C (2004) Fatigue characterization of a spheroidal graphite cast iron under ultrasonic loading. J Mater Sci 39:687–689
20. Marines I, Bin X, Bathias C (2003) An understanding of very high cycle fatigue of metals. Int J Fatigue 25:1101–1107
21. Bathias C et al (2000) How and why the fatigue S–N curve does not approach a horizontal asymptote. Int J Fatigue 23:141–151
22. Sonsino CM (2005) 'Endurance limit'—a fiction. Konstruktion, pp 87–92
23. Coffin LF (1954) A study of the effects of cyclic thermal stresses on a ductile metal. Trans Am Soc Test Mater 76:931–950
24. Wagener R, Melz T (2018) Fatigue life curve—a continuous Wöhler curve from LCF to VHCF. Mater Test 10:924–930
25. Miner MA (1945) Cumulative damage in fatigue. Trans ASME J Appl Mech 12:159–164
26. Haibach E (1970) Modifizierte lineare Schadensakkumulations-Hypothese zur Berücksichtigung des Dauerfestigkeitsabfalls mit fortschreitender Schädigung. LBF—Technische Mitteilung TM 50/70
27. Gatts RR (1962) Application of a cumulative damage concept to fatigue. Trans ASME J Basic Eng 84:403–409
28. Corten HT, Dolan TJ (1956) Cumulative fatigue damage. In: Proceedings of the international conference on fatigue of metals. Institution of Mechanical Engineers, pp 235–245
29. Zenner H, Liu J (1992) Vorschlag zur Verbesserung der Lebensdauerabschätzung nach dem Nennspannungskonzept. Konstruktion 44:9–17
30. McDonald K (2011) Fracture and fatigue of welded joints and structures. Woodhead, Cambridge

Chapter 3

Influencing Factors for Fatigue Strength

Abstract Metal fatigue is a weakened condition induced by repeated stresses which ultimately results in cracks or even fracture and, thus, has to be understood in terms of its influencing factors. In this chapter, the fatigue strength of materials and components will be reviewed including the major parameters which have an effect on the number of cycles-to-failure. Material- and process-related effects will be discussed as well as stress concentration and surface conditions. A major proportion of this chapter is related to the effects of mean stresses and residual stresses to demonstrate effective measures for improving fatigue strength and, thus, optimizing lightweight design.

3.1 Repeating Tests

So far, fatigue strength has been shown as a curve, or its bilinear representation, in a double-logarithmic graph, which seems to be pretty deterministic: Once the number of cycles-to-failure is determined for a specific stress amplitude, that gives a record in the graph. After repeating that experiment at various different load levels, the results in the graph can be connected by a proper regression line to finally come up with the S–N curve. But in this way the fatigue strength is not completely processed, because it would be unlikely to get the exact same number of cycles-to-failure for another specimen at a load level which has already been tested previously. Although precisely machined specimens are used for a material test, there is always some deviation about the mean when repeating the same test. The reasons are manifold and may range from local material properties and slightly different surface conditions to test machine-related influences. Let us assume we have 10 different specimens of the same material tested under fully reversed, constant amplitude loading: For that experiment, the number of cycles-to-failure lies in a certain value range. For the random sample, the mean value and the median can be calculated easily, as well as the rank-size distribution. Assuming that the results are close to a normal distribution, the previously used approximation


Fig. 3.1 Probability of failure when repeating an experiment: a Test results from 10 specimens. b Unnotched cylindrical specimen. Probability of failure for individual specimens

according to Benard–van Elteren makes it possible to get an idea about the probability of failure of the individual specimens (Fig. 3.1). In that example, specimen #5 had the least number of cycles-to-failure, while #9 lasted longest. Though we can see a significant deviation about the mean even for that small random sample, it is unlikely that we got the absolute minimum lifetime for a specimen made from that specific material: When repeating the tests with some more specimens, we would most likely see a new minimum which is even below specimen #5. On the vertical axis of the lower diagram, we can see that this one has a probability of failure of about 6.5%, which means we can expect to see fatigue failures of other specimens before the number of cycles we got for #5. From a statistical point of view, we may have about 15 out of 100 which might fail before 220,000 cycles according to our analysis. And 1% of our specimens might fail even before 190,000 cycles, which is significantly lower than the mean value of our random sampling results. Plotting those as a histogram and a density function, it becomes clear that we get a nicely shaped Gaussian distribution which is capable of representing the scatter of results from that experiment (Fig. 3.2).
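The rank statistics above can be reproduced in a few lines. The following sketch uses Benard's median-rank approximation P ≈ (i − 0.3)/(n + 0.4); the ten cycle counts are made-up illustrative values, not the test data behind Fig. 3.1.

```python
cycles = [265e3, 310e3, 248e3, 330e3, 221e3, 295e3,
          287e3, 302e3, 355e3, 276e3]   # hypothetical cycles-to-failure

n = len(cycles)
for i, N in enumerate(sorted(cycles), start=1):
    p_failure = (i - 0.3) / (n + 0.4)   # Benard's median-rank approximation
    print(f"rank {i:2d}: N = {N:8.0f} cycles, P_f ~ {p_failure:5.1%}")
```

For the weakest of ten specimens this gives a probability of failure of about 6.7%, close to the 6.5% read off the lower diagram.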


Fig. 3.2 Fatigue life scatter characterized by Gaussian distribution: a Unnotched cylindrical specimen. b Histogram and density function based on test results from 10 specimens

Though the majority of specimen fatigue life values can be expected to be close to the mean, those below average are the more interesting ones in terms of fatigue strength evaluation. And that is only the scatter for a certain level of cyclic stresses when repeating fatigue testing. For a complete S–N curve, various different levels have to be examined with regard to the number of cycles-to-failure and the deviation about the mean. Hence, it is more appropriate to talk about a 'Wöhler scatter band' rather than a 'Wöhler curve.' And that is just because of repeating an experiment with visually identical specimens and their inherent differences in strength, which is a statistical uncertainty¹ whenever we are dealing with a discrete figure for fatigue life. But having many results for identical specimens made and tested at the same place is a kind of optimum with regard to the scatter. We may have many results for similar specimens obtained mainly at one place over a period of years, or even many results for similar specimens obtained from many different test sites over a period of years—which level of scatter can be expected? How to treat those results?

¹ In uncertainty quantification, this type of uncertainty—often called aleatoric uncertainty—causes results which differ each time the same experiment is performed. Hence, probability distributions, or alternatively Monte Carlo experiments, have to be used for the quantification of this type of uncertainty.


Basically, there is no simple statistical method suitable for treating all those different sources of scatter, and especially the latter issue is the biggest one with regard to the level of scatter: That is the source of data and scatter which has to be evaluated in the formulation of design rules. Here the data certainly cannot be assumed to belong to the same population—even when the test results were obtained from geometrically similar specimens under the same loading conditions. An all-data analysis would lead to a best-fit S–N curve for which the slope is quite different from the slope indicated by separate analyses of the individual sets of data.
Since the S–N curve is assumed to be linear when plotted in double-log coordinates, the basic method of fatigue data analysis is the calculation of the best-fit regression curve of log(N) on log(Sa) by the method of least squares. The standard deviation of log(N) about the regression line is calculated, and it is used to establish the confidence limits based on the assumption that the data follow a Gaussian distribution. Mathematically, those confidence limits are strictly hyperbolic functions which are closest to the mean regression line at the mean value of the log(Sa) data, but typically that is simplified by drawing tangents to the hyperbolic confidence limits


Fig. 3.3 ‘Wöhler scatter band’

parallel to the mean regression line. The lower 95% confidence limit,² which is approximately two standard deviations below the mean regression line, corresponds theoretically to a probability of failure of 2.5% or to a probability of survival of 97.5%.
In a 'Wöhler scatter band' (Fig. 3.3), there is a certain deviation about the mean in the direction of log(N) as well as in the direction of log(Sa). In the direction of log(N), the scatter is often noted as

TN = 1 : N10%/N90%   (3.1)

using the information about the number of cycles-to-failure related to a probability of survival of 10% and 90%, respectively. In the same way, the scatter in the direction of log(Sa) can be reviewed:

TSa = 1 : Sa10%/Sa90%   (3.2)
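As a minimal sketch of the regression procedure just described—assuming hypothetical (Sa, N) pairs and a purely Gaussian scatter of log(N)—the mean curve, its standard deviation and the resulting scatter band can be computed as follows:

```python
import numpy as np

# hypothetical test results: stress amplitudes (MPa) and cycles-to-failure
Sa = np.array([400, 400, 350, 350, 300, 300, 260, 260])
N  = np.array([9.0e4, 1.4e5, 2.1e5, 3.3e5, 7.2e5, 1.1e6, 2.4e6, 3.9e6])

x, y = np.log10(Sa), np.log10(N)
slope, intercept = np.polyfit(x, y, 1)   # log N = intercept + slope * log Sa
k = -slope                               # slope exponent of the S-N curve
resid = y - (intercept + slope * x)
s_logN = resid.std(ddof=2)               # std deviation of log N about the line

# mean (50% survival) life and an approx. 97.5% survival life at one amplitude
Sa_q = 320.0
logN50 = intercept + slope * np.log10(Sa_q)
print(f"k = {k:.1f}, N50% = {10**logN50:.3g}, "
      f"N97.5% = {10**(logN50 - 2 * s_logN):.3g}")

# scatter in life direction, Eq. (3.1), using the 10%/90% normal quantile 1.28
TN = 10 ** (2 * 1.28 * s_logN)
print(f"TN = 1:{TN:.2f}")
```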

There is a surprisingly large scatter in fatigue results—and that is not yet influenced by too many factors, since we are still looking at tests with small specimens. The specimens themselves are precisely manufactured—machined and finally polished—to remove any unwanted deviation about the mean and to get material-related fatigue results only. Since the fatigue data is typically provided using the mean curve, which represents a 50% probability of survival or average life, that is not a proper way of managing the stress–strength interference. The scatter has to be considered carefully; using data from a mean curve is by far too optimistic and would lead to an unsafe design.

² Here the term 'confidence limit' denotes a scatter band of the test data of a given probability of occurrence, which has to be differentiated from the confidence limit for a certain percentile of the statistical analysis, expressing the uncertainty associated with the calculation of this percentile.


Table 3.1 Empirical scatter values acc. to [1]

Material | geometry | cause of scatter                                        TN      TSa
Steel specimen | notched | accurately machined                                1:2.5   1:1.20
Steel part | normal size and geometrical features                             1:3.2   1:1.26
Cast-iron part | normal size and geometrical features | machined | same batch 1:4.0   1:1.26
Forged steel part | as-forged surface | w/o tool wear                         1:4.5   1:1.30
Forged steel part | as-forged surface | w/ tool wear                          1:5.5   1:1.33
Professional steel welding | controlled conditions                            1:2.5   1:1.30
Professional steel welding | normal conditions                                1:3.0   1:1.45
Professional aluminum welding | normal conditions                             1:5.0   1:1.45

Hence, the nominal fatigue strength—although that seems to be easily available from standards and tables—is never the property used in engineering design but just a figure which may allow a comparison of the fatigue strength of different materials. A few empirical scatter values can be found in [1], showing a range from TN = 1:2.5 to 1:5.5 for specimens and parts. Certainly, manufacturing accuracy and quality have a significant impact. For a typical steel part which has normal size and geometrical features, a scatter of TN = 1:3.2 can be expected. Since the S–N curve of such a part has a certain slope k, the scatter in stress direction is about TSa = 1:1.26, which means the below-average parts have a 10% lower strength than the average (Table 3.1).
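The two scatter measures are linked through the slope k of the S–N curve: a factor in life direction maps into stress direction as TSa = TN^(1/k). A quick check with an assumed slope of k = 5 (a typical value, not one stated here) reproduces the table's pairing of 1:3.2 and 1:1.26:

```python
k = 5.0                       # assumed S-N curve slope
TN = 3.2                      # TN = 1:3.2 for a typical steel part (Table 3.1)
TSa = TN ** (1.0 / k)         # scatter transformed into stress direction
print(f"TSa = 1:{TSa:.2f}")   # -> 1:1.26
```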

3.2 Material- and Process-Related Effects

Having now an idea about the scatter that occurs when identical or similar specimens are tested, we are ready for the next step and may ask ourselves: What about the fatigue strength of components? While specimens are small, simply shaped and polished, genuine parts are much different with regard to their size, shape and surface finish. I would like to say: Components have a purpose, while specimens are material only.³ Furthermore, parts and components have a manufacturing history which often is more complicated than that of the specimens. Those effects, mainly

• material strength,
• geometry and size,
• surface conditions,
• applied loads,
• process-related conditions,
• environmental conditions,

have a huge influence on the fatigue strength of components and may change the strength characteristics of the baseline significantly (Fig. 3.4).

³ That is certainly not true in a narrow sense, because we will see that specimens may include geometrical features too. But such a simplified statement helps us to understand that it cannot be the final stage to use specimen test results for a durability performance evaluation of parts and assemblies.


Fig. 3.4 From material fatigue to component-related fatigue

Accepting that fatigue is not controlled solely by the material means that other influencing factors have to be examined and understood. For component-related fatigue results, we have to scale the specimen fatigue strength using corrections, preferably derived from tests under conditions matching the component as far as possible.

3.2.1 Material Strength

Since performing a tensile test is such an easy task, for most materials the information about the static strength properties—such as UTS or yield strength—is available and widely understood as 'material strength.' Hence, it would be great to find a correlation between this fundamental strength and fatigue strength—though what happens in the material is quite different between static and cyclic loading. For the crack initiation phase, the fundamental material strength is a major parameter that helps to get a correlation to fatigue strength—as long as crack initiation is the dominant part of fatigue life. That is true for a range of materials, and then yield strength or tensile strength can be used to get an assessment of the knee-point fatigue strength of those materials too. For steel and cast-iron materials, a few correlations have been established which help to assess the material fatigue strength from yield and/or ultimate tensile strength. An early correlation, giving the strength data in MPa, came from [2] for ordinary steel material and showed:

Sa,k = 0.2 · (σUTS + σyield) + 57 MPa   (3.3)

That is an assessment for the material fatigue strength without any consideration of component-related features such as geometry, surface conditions or complex loading.


And that figure gives an assessment for the average fatigue strength only—hence it shows a 50% probability of survival without any information about the scatter. A few years after that initial assessment, an even simpler correlation was proposed for steel [3]:

Sa,k = 0.45 · σUTS   (3.4)

as well as for gray iron material:

Sa,k = 0.39 · σUTS   (3.5)
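The correlations (3.3)–(3.5) are easy to wrap into a small helper. Note that these return a 50%-survival estimate for polished specimens only; the function below is a sketch, and the input values are hypothetical.

```python
def fatigue_strength_estimate(uts_mpa, yield_mpa=None, material="steel"):
    """Knee-point fatigue strength estimate (MPa, 50% survival, specimens)."""
    if material == "steel" and yield_mpa is not None:
        return 0.2 * (uts_mpa + yield_mpa) + 57.0   # Eq. (3.3), ordinary steel
    if material == "steel":
        return 0.45 * uts_mpa                       # Eq. (3.4)
    if material == "gray iron":
        return 0.39 * uts_mpa                       # Eq. (3.5)
    raise ValueError("no correlation available for " + material)

print(fatigue_strength_estimate(520, yield_mpa=355))   # -> 232 MPa
print(fatigue_strength_estimate(520))                  # -> 234 MPa
```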

Similar approaches can be found for medium- to high-strength steels or for nodular cast iron. With data from [4], a relationship between tensile strength and fatigue strength for unnotched and polished specimens subjected to rotating bending loads can be demonstrated for steel and aluminum (Fig. 3.5). For steel materials with an UTS ranging from 360 MPa to almost 1500 MPa, an obviously good correlation to the fatigue strength can be shown. Assuming a linear relation between static and cyclic strength, the correlation coefficient is R² = 0.91, which indicates a strong correlation. That is different for the aluminum specimens: Here the coefficient is about R² = 0.5 and can therefore be considered moderately correlated. Hence, for steels and iron-based materials, the relation between static and cyclic strength is more pronounced than for aluminum. Basically, for the fundamental material strength characteristics, it can be concluded that higher static strength is a precondition for improved fatigue strength, but only for a certain range of materials does the relationship between those two properties show a distinctive correlation.
Fig. 3.5 Correlation of static to fatigue strength for steel and aluminum. Data from [4]


Fig. 3.6 Size effect for heat treatable, low alloy steel. Data from [5]

A different behavior is observed for welded joints: They spend most of their life in crack propagation, and their fatigue strength does not depend much on the fundamental material strength.⁴ Such a correlation can help in an early design phase to look for material alternatives, but it certainly does not tell the complete story about fatigue strength and its influencing parameters. Hence, the next step is to look at effects related to size and geometry, which can be pretty different between specimens and final parts.

3.2.2 Size Effects

Actually, it can be assumed that the nominal strength of a structure does not depend on the size of that structure when classical elasticity theories and materials having non-random strength are considered. Any deviation from that assumption is called the size effect: In fatigue tests, typically, larger specimens have lower fatigue strength than smaller ones—assuming that the material as well as the type and level of loading remain unchanged. That is an important effect because typical specimen dimensions are much smaller than the parts and components which are built for various different applications such as bridges, machines or vehicles. Hence, those size-related effects put the fatigue strength characteristics on the more optimistic side whenever we are looking at specimen test results. The size effect comprises three sub-effects: the so-called geometric size effect, the statistical size effect and the technological size effect. Unfortunately, it is difficult to separate those effects, and therefore a single correction factor is typically applied to account for all three sub-effects. For a heat-treatable, low alloy steel containing chromium, nickel and molybdenum, data can be found [5] which show that the size effect follows a power equation with a reasonably good correlation coefficient (Fig. 3.6).

⁴ Welded joints are an important part of engineered structures and, thus, need some special remarks, which will be given in a dedicated chapter later.


Fig. 3.7 Geometric size effect demonstrated by stress gradient caused by bending load

Traditionally, those effects are linked to the thickness of specimens and parts; however, it would be more reasonable to consider the volume of those objects instead. Since the size effect is often more pronounced for bending than for axially loaded specimens and parts, the stress gradient which is introduced by the type of loading is considered to be primarily responsible for the size-related effects. The geometric size effect is related to a non-constant stress distribution which becomes more severe the smaller the cross-sectional property is. Such a non-constant stress distribution is called a stress gradient, and bending loads introduce such a gradient. This stress distribution is well known from early lectures in structural mechanics, and when the load is applied in a way that the nominal surface stress is the same for two different beams, the thinner beam has the more severe stress gradient. Hence, the change in stress is bigger for the smaller part, which here is part 2 (Fig. 3.7). The thickness of the parts has an obvious influence on the stress gradient, and that demonstrates the geometric size effect. The stress gradient effect does not only occur in the case of bending; similar relations are observed at notches under axial loading. This effect is strongly linked to the notch support effect and will be explained later. While the geometric size effect is strongly related to shape and loading, the statistical size effect primarily accounts for the specimen's volume. It considers that the probability of occurrence of a fatigue-relevant defect⁵ is higher in a large part than in a small one. Assuming that the distribution of internal defects is of random nature, the existence as well as the location of these defects cannot be predicted, but from a statistical point of view, it is unlikely that the total number of defects is not related to the volume in which they may occur. A small numerical sketch of this reasoning is given below. Certainly, the number of defects in a part depends on other parameters too—such as the type of material and the manufacturing process. Cast iron, as an example, is more prone to internal defects than steel: The latter typically has a very homogeneous microstructure, while cast iron has inevitable internal defects from the casting process.

⁵ It is important to make a difference between a defect and a fatigue-relevant defect: A structure may have a large number of defects but is not influenced by them in terms of fatigue life, if the defects are not in a highly loaded area of the structure.
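One common way to put this statistical reasoning into numbers—an added illustration, not a formula from this chapter—is a weakest-link model, in which a part of volume V behaves like a chain of V/V0 reference volumes that fail independently:

```python
def survival_probability(Ps0, V, V0):
    """Weakest-link scaling: chain of V/V0 links, each surviving with Ps0."""
    return Ps0 ** (V / V0)

# a part 50 times larger than the reference volume, each link 99% reliable
print(survival_probability(0.99, V=50.0, V0=1.0))   # -> ~0.605
```

The larger volume drags the survival probability down even though the local material quality is unchanged, which is exactly the statistical size effect.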


Another sub-effect related to size and volume is the one that accounts for differences in residual stresses, surface roughness and microstructure: That is called the technological size effect, and it refers to the rougher manufacturing conditions which are typically applied for larger parts and may include effects from thermal treatment and cooling as well. Technological implications from the manufacturing process are relevant for the fatigue strength, and that becomes quite understandable when looking at the slow cooling rate of parts with large cross sections, which often results in a lower strength.

3.2.3 Geometry

The term geometry is related to a specification for the construction of an object that can be a prototype or a product. It is part of the design process, which means the complete realization of a concept into a configuration, and helps to achieve the product's designated objectives. For many products not only features but also an aesthetic quality has to be realized. That is again about the level of functionality and satisfaction provided to the user, which we learned from the Kano diagram before. Altering must-be features into performance or even excitement attributes helps to differentiate quite significantly from the competitors—and that works, among others, by good appearance design. When Philips, the Dutch multinational conglomerate corporation headquartered in Amsterdam, started in 1994 to develop a range of kitchenware in a completely new appearance design based on soft rounded lines, pastel colors, and velvet-textured finishes, these products helped the company to become a leader in kitchenware because of that iconic design. Hence, it is not about technical features and functionality only; it needs 'ground-breaking' appearance design that sets new standards in its field and becomes a benchmark for similar products. Actually, as early as 1933 the mathematician George David Birkhoff tried to give the nature of aesthetic experience a mathematical description. According to Birkhoff, this experience is characterized by three successive phases:

• attention which is necessary for perception—the complexity of the object (C),
• feeling of value which rewards the effort of attention—the aesthetic measure (M),
• realization that the object consists of a certain harmony, symmetry or order (O).

He then gave a formulation of the aesthetic experience [6] in which he suggests that the aesthetic feelings arise primarily because of an unusual degree of harmonious interrelation within the object. More definitely, if we regard M, O, and C as measurable variables, we are led to write M = O/C and thus to embody in a basic formula the conjecture that the aesthetic measure is determined by the density of order relations in the aesthetic object.


Fig. 3.8 Steel wheel design versus aluminum wheel design

Hence, the aesthetic measure according to Birkhoff is a function of order and complexity, and a certain feeling of aesthetic value can be achieved by a wealth of options with regard to the order and complexity. A reduced level of order and complexity can thus result in outstanding success, which has been shown by Apple's products for a while now: The company's products are clean, friendly and, above all, simple to use—a set of principles that has often separated their products from the rest. If design stimulates aesthetic experience, then geometry is the language to express the visual fundamentals. By stringing together geometric features—such as points, curves and surfaces—order and complexity are shaped. In engineering design, most of those features are arranged because of functional requirements, but the aesthetic experience is certainly considered too. When looking at an ordinary steel wheel for passenger cars, we may see a pretty functional design which has a few holes in the wheel disk because of lightweight design and heat removal from the brake system. An aluminum alloy wheel—which does not necessarily have much lower weight—has a totally different style and is much more related to the aesthetic experience (Fig. 3.8). The latter comes mainly with an uneven number of spokes⁶ which individually offer a certain level of complexity. The increased negative space and, thus, the part of the wheel you can look through helps to create an even more technical design by making the brake system visible. In contrast to that, the ordinary steel wheel offers a pretty basic aesthetic experience by having only circular features. In any case, those features are deliberately introduced to achieve a certain aesthetics as well as to be aligned with constraints

⁶ Please have a look at the various different designs you easily find for aluminum alloy wheels and count the spokes: The uneven number that is found for the majority of the wheels is part of Birkhoff's concept of a certain harmony, symmetry or order.


Fig. 3.9 Nominal and actual stress at tensile loaded specimen with central hole

from functionality, material and manufacturing. Having these three-dimensional geometrical features, the structure locally gets a complex shape whereby stresses are concentrated as soon as loads are applied. A stress concentration factor Kt describes the increase in stress level due to changes in the geometry (Fig. 3.9). It is therefore sometimes called the geometric or theoretical stress concentration factor, which we recognize by the subscript t. It is given as the ratio of local stress to the nominal stress in the section; that means Kt = 1 for a geometry without any shape-related stress concentration. Because of a local change in geometry—such as a central hole in a plate—the actual stress is much higher than the nominal one, and it is not uniformly distributed anymore. The equilibrium of the stress itself means that the actual stress decreases quite quickly and is even lower than the nominal stress as soon as a point at some distance from the hole is concerned. For technical products, there are always a large number of significant changes in the local geometry which lead to several spots introducing stress concentrations Kt > 1 (Fig. 3.10). Often the term stress concentration is replaced by the term notch effect. The stress concentration factor primarily depends on the root radius of the notch, furthermore on other geometric parameters of the notch—such as its opening angle—and on the loading type, which can be tension, bending or torsion. The nominal stress, which is sometimes called engineering stress, is easily determined for uniaxially loaded specimens as either the axial, bending or torsional stress or similar expressions, or maybe a combination. In many practical cases, however,


Fig. 3.10 3D geometrical features causing local stress concentrations
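For the plate with a central hole of Fig. 3.9, the stress concentration can be estimated numerically. The sketch below uses Heywood's classical approximation for the net-section factor, which is a general textbook formula and not necessarily the one behind the figure:

```python
def kt_central_hole(d, w):
    """Kt on the net section for a hole of diameter d in a strip of width w."""
    ratio = d / w
    if not 0.0 < ratio < 1.0:
        raise ValueError("need 0 < d/w < 1")
    return 2.0 + (1.0 - ratio) ** 3   # Heywood's approximation

for dw in (0.01, 0.2, 0.5):
    print(f"d/w = {dw:4.2f} -> Kt(net) = {kt_central_hole(dw, 1.0):.2f}")
# d/w -> 0 recovers Kt = 3, the classical value for a hole in an infinite plate
```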

the nominal stress can be very difficult to determine due to complex geometry or loading.⁷ Coming back to the topic of geometry-related influences on fatigue life: An initial estimate for the fatigue strength of a notched component could be the basic material fatigue strength divided by the stress concentration factor. However, the effect of a notch cannot be fully described by the stress concentration due to the local geometry change only; other factors also contribute. Luckily enough, using the material fatigue strength divided by the stress concentration factor would lead to very conservative results, especially in the case of sharp notches. Hence, a fatigue notch factor Kf—sometimes called the fatigue effective stress concentration factor—is introduced, which relates the fatigue strength of a smooth specimen to that of a notched specimen. The fatigue notch factor Kf describes the decrease of fatigue strength because of the notch. It can only be determined accurately from experiments and thus includes the effect of the geometric stress concentration, but also other effects which we have to look at, such as the notch support. Although huge effort has been spent to establish a relation between the stress concentration factor and the fatigue notch factor in order to calculate the latter, there is no generalized equation available yet. But one thing is clear regarding the magnitude of the fatigue notch factor: It is never bigger than the stress concentration factor. Hence, we find 1 ≤ Kf ≤ Kt, and may see for very small notch radii a huge increase of the stress concentration factor, while the fatigue notch factor is not increasing so much and even becomes smaller again for sharp notches (Fig. 3.11). At mild notches, which are characterized here by a bigger radius, Kf has a similar value to Kt, whereas for sharp notches the geometric stress concentration goes up dramatically as the notch radius gets close to zero, but the effect on the fatigue notch factor is significantly different, as shown in the previous figure.

⁷ Hence, strain gages applied to the structure at the change in local geometry, or finite element analysis, can be a measure to get information about the actual stress which results from stress concentration and loading.


Fig. 3.11 Effect of notch radius on stress concentration and fatigue notch factor

The fatigue notch factor first increases moderately as the notch becomes sharper, but then reverses its course. The stress concentration factor is a parameter which depends on the type of loading as well as on the local geometry changes, but it is not influenced by the material itself. The smaller a notch radius becomes, the bigger the stress concentration will be—and we may see no upper limit for Kt, because the local stress can even approach infinity for an extremely sharp notch such as a crack. So, changes of the local geometry can occur due to intentionally designed features, or due to unintentional surface scratches or even cracks—the latter may then introduce almost infinite stresses at the crack tip.⁸ Obviously, the local stresses which result from geometry changes are not as destructive as their level would suggest. That brings us back to the previous paragraph, in which we learned about the effect of stress gradients as a nonuniform stress distribution, especially when slim cross sections are under bending loads. Notches introduce stress gradients too, and the stress distribution is not uniform anymore. Although the notch peak stress goes up with the stress concentration factor and may easily reach a level that is two or three times higher than the nominal stress, the stress gradient helps to limit the negative effect of the stress concentration. That can be demonstrated by results from fatigue tests which were performed with smooth as well as with notched specimens: Using nominal stresses, the number of cycles-to-failure of the smooth specimens was bigger than that of the notched specimens at the same load level. But if those results are shown in terms of local stresses, the picture is more puzzling, because then the notched specimens show better fatigue life. That is rather counterintuitive, but demonstrates clearly that the local peak stress does not completely explain the correlation to fatigue failure (Fig. 3.12).

⁸ The concept of the stress concentration factor is more related to macroscopic features than to microscopically small changes of the local geometry. Hence, the stress at a crack tip is not part of that concept and is actually replaced by the so-called stress intensity factor, which is linearly related to the stress and directly related to the square root of a characteristic crack dimension. We'll come back to that when looking at crack growth and fracture mechanics.


Fig. 3.12 Fatigue data for smooth and notched specimens. Data from [7]

Indeed, something more is needed to properly describe the effect of the notch—and the missing factor is of course Kf. If the data is instead plotted in terms of fatigue effective stresses, the data collapses into a narrow scatter band, indicating that this transformation of the test results fully describes the difference in the data (Fig. 3.13). Looking at the nonuniform stress distribution caused by a notch, the effects on fatigue life due to peak stress and stress gradient go in two different directions: While the stress concentration factor deals with the local peak stress only, the fatigue notch factor additionally takes the stress gradient into account. From a material mechanics point of view, the difference between Kt and Kf is given by the so-called notch support effect: a support from the material which is near the highly stressed zone, but significantly less stressed because there is such a distinct gradient function.
Fig. 3.13 Fatigue data using fatigue effective stress. Data from [7]


Fig. 3.14 Stress averaging approach

Describing the notch support effect is possible by considering the steepness of the stress gradient below the notch. Several concepts exist which help to express the influence of the nonuniform stress distribution [8], and most of them are two-dimensional in nature, whereas the highly stressed volume [9] is an approach to describe spatial differences in the stress field. Neuber examined the stress limit which may exist in a notch because of local plasticity and proposed an approximation to estimate the local elastic–plastic stresses in notches based on linear elastic stress results. The Neuber plasticity correction can help to calculate an averaged value of the notch stress over a material- and notch-related length a* (Fig. 3.14). Compared to the elastically calculated stress concentration factor, the stress averaging approach thus leads to a slightly lower reference figure, which gives a correction to the large value of Kt. Values of a* can be found in textbooks. In the stress gradient approach, the fatigue notch factor is calculated from the stress gradient below the notch. Using the concept of the maximum normalized stress gradient, we have to look at how much the stress decreases at 1 mm depth, while the peak stress is 1 at the notch root (Fig. 3.15). Hence, that is a measure of how fast the stress field decreases under the notch and thus how much material is actually subjected to a high level of stress. Material- and notch-related factors for that approach can be found in textbooks as well. The fatigue notch factor can then be determined by scaling down the stress concentration factor with the notch support-related correction. Experimental data demonstrate in which way the notch support factor gets larger with an increased stress gradient (Fig. 3.16). For the determination of the stress gradient of an arbitrary geometry and loading, FEA can be used beneficially to map the stress to a path along the notch.
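As a sketch of how the notch support correction scales Kt down to Kf, the snippet below applies two classical textbook estimates—Peterson's and Neuber's—with assumed material constants; they are stand-ins for the tabulated material- and notch-related factors mentioned above:

```python
import math

def kf_peterson(kt, r_mm, a_p=0.25):
    """Peterson's estimate; a_p (mm) is an assumed material constant."""
    return 1.0 + (kt - 1.0) / (1.0 + a_p / r_mm)

def kf_neuber(kt, r_mm, a_n=0.4):
    """Neuber's estimate; a_n (mm) is an assumed material constant."""
    return 1.0 + (kt - 1.0) / (1.0 + math.sqrt(a_n / r_mm))

kt = 3.0
for r in (0.1, 0.5, 2.0, 10.0):
    print(f"r = {r:5.1f} mm: Kf(Peterson) = {kf_peterson(kt, r):.2f}, "
          f"Kf(Neuber) = {kf_neuber(kt, r):.2f}")
# mild notches (large r): Kf approaches Kt;
# sharp notches (small r): Kf stays well below Kt due to the notch support
```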


Fig. 3.15 Stress gradient approach

Fig. 3.16 Notch support factor from experimental data. Data from [5]

While the above approaches use two-dimensional information only, true 3D geometrical features and operational loading introduce a more complex stress distribution which is nonuniform in the depth direction too. Here the highly stressed volume approach might help: The highly stressed volume is defined as the volume subjected to a stress higher than some percentage of the peak notch stress, typically 90%, and hence denoted V90 (Fig. 3.17). This volume can be calculated based on the stress gradient for simple geometries, or using FEA for arbitrary and more complex geometries.
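From FEA results, V90 reduces to a sum over element volumes. A minimal sketch with hypothetical element data:

```python
import numpy as np

# hypothetical element-averaged stresses (MPa) and element volumes (mm^3)
stress = np.array([310.0, 295.0, 288.0, 240.0, 180.0, 120.0])
volume = np.array([0.8, 1.1, 1.3, 2.0, 4.5, 9.0])

threshold = 0.9 * stress.max()             # 90% of the peak notch stress
v90 = volume[stress >= threshold].sum()    # sum volumes above the threshold
print(f"V90 = {v90:.1f} mm^3 (threshold {threshold:.0f} MPa)")
```

A fine mesh around the notch is needed for a stable V90 value, since the result depends directly on how well the steep gradient is resolved.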


Fig. 3.17 Highly stressed volume approach | 2D scheme

Fig. 3.18 Local fatigue strength based on highly stressed volume approach. Data from [5]

The idea is that the different fatigue strengths obtained for similar specimens with different notches or loading can be attributed to the difference in the highly stressed volume. Although material parameters for this concept are scarcely available, this approach offers the best predictive capability compared to the stress averaging or the stress gradient approach. Using data from [5], a pretty good correlation can be found for an alloyed high-strength steel using a power equation again (Fig. 3.18). Having a really small value for V90—which is synonymous with a notch creating a huge stress gradient—it can be seen that the local fatigue strength is about 65% of the material's UTS.


Hence, the notch support contributes significantly to the fatigue strength of components which have local changes in geometry.⁹ Since the quantification of V90 is accessible by using a fine mesh density in FEA, the highly stressed volume approach is a good example of how fatigue analysis methods are continuously adapted and improved.

3.2.4 Surface Conditions

The term 'surface conditions' is another difficult technical matter: There are effects related to the surface which are visible—such as tool marks or machining grooves—and others which are more related to technological effects and not easy to detect—such as residual stresses introduced by an inappropriate feed rate in the machining process. Surface conditions often mean surface roughness only and then do not include those technological effects. Hence, we start with the influence of the roughness, which is pretty simple to predict: The rougher the surface of a part, the earlier it fails under cyclic loading. The roughness has a large influence on the fatigue strength in the crack initiation part of the fatigue life: The microscopic ridges in the surface typically obtained after machining act as crack initiation locations. These ridges are a special type of notch—we may call them micro-notches—and they cause pretty localized effects. Another influence related to the surface roughness comes from the manufacturing process—it makes a difference whether a part is rolled, cast, forged or machined, but here it is difficult to separate that from the technological impact of these processes, which may influence more than surface roughness only. So, back to surface roughness, which is specified by the average roughness Ra or the mean roughness depth Rz—and that makes a difference too. The average roughness Ra is calculated by an algorithm that measures the average length between the peaks and valleys and the deviation from the mean line on the entire surface within the sampling length: Hence, all peaks and valleys of the roughness profile are averaged, and extreme points have no significant impact on the final result. On the other side, the mean roughness depth Rz is calculated by measuring the vertical distance from the highest peak to the lowest valley within five sampling lengths and averaging these distances. By averaging only five peaks and five valleys, the extremes have a much greater influence on the final result. And the definition of Rz has also changed over the years. While the mean roughness depth Rz is the more common definition in Europe, in the USA the average roughness Ra is more often used. Since the actual shape of the component's roughness profile has a significant impact on the roughness parameter,

⁹ But it has to be clear that it is not a proper concept to introduce sharp notches in engineering design to make use of the notch support—that would not be appropriate, because the huge increase of the peak stress is what limits the fatigue life of a poorly designed product.


Fig. 3.19 Influence of surface roughness on fatigue strength. Data from [10]

there is no way to exactly convert the one roughness parameter into the other, but it is a safe conversion to use a ratio range of Rz-to-Ra = 4:1 to 7:1. A mirror-polished surface can be achieved by polish lapping and gives a mean roughness depth Rz = 0.05–0.2 μm. A normal polished surface has Rz = 0.8–5.5 μm, which is still very smooth, but significantly rougher than mirror polishing. These surfaces minimize the effect of having micro-notches which may cause fatigue crack initiation. Hence, specimens with polished surfaces are the reference for quantifying roughness-related effects on fatigue strength: The larger the roughness becomes, the lower the strength is—and that is often shown by a normalized roughness-related strength (Fig. 3.19). For a medium strength steel grade, it can be seen that a mean roughness depth Rz = 25 μm leads to a 20% reduction in fatigue strength compared to polished specimen results.¹⁰ And such a value represents the roughness given by machining operations such as turning or countersinking. Hence, it is pretty obvious that it costs a lot of effort and money not to lose more than 25% of the fatigue strength for such a steel material. While specimens typically have a well-prepared surface to exactly eliminate the effect of roughness-related strength decrease and to provide a uniform basis for material characterization, it always has to be considered that a part's finished surface often has a mean roughness depth Rz ≥ 10 μm and rarely below. We saw that crack initiation in metals typically starts at the surface and is caused by irreversible dislocation movement leading to intrusions and extrusions. The dislocation itself is a flaw in the lattice of the metal which causes slip to occur along favorably oriented crystallographic planes. So, there are a few parameters which may suggest that the roughness-related strength decrease is a function of the material's strength too. It is actually found that higher strength materials are more sensitive to surface roughness: While a part made from a medium strength steel with an UTS = 500 MPa has a 20% reduction in fatigue strength when its mean roughness depth Rz is 25 microns, that number goes up significantly when using high-strength steel.

¹⁰ Again, that relation follows a power function pretty nicely, and that is the reason to show it in a semi-log diagram here.


Fig. 3.20 Influence of surface roughness on fatigue strength for steel material. Data from [10]

Hence, the irreversible dislocation movement as a root cause for fatigue crack initiation becomes more pronounced when both the material's strength and the surface roughness increase. Although the fatigue strength basically increases with higher UTS—which we saw before—and the high-strength steel remains tough, that benefit becomes smaller with an unfavorable surface roughness. Looking at good machining conditions—which may introduce a mean roughness depth Rz of 20 microns—a low to medium strength steel with an UTS of 400 MPa still has about 84% of the fatigue strength of a perfectly smooth surface, while a high-strength material with an UTS of 1000 MPa is lowered by an additional 10.5% (Fig. 3.20). We saw that the material's fatigue strength is fundamentally about 45% of the tensile strength, and now the additional effect from the surface roughness limits the fatigue strength to a range of 85 to 70% of the material's fatigue strength for proper machining conditions. By taking that correction into account, we can see that even smaller parts, which do not have a significant size effect but do have a surface roughness, quickly end up at only 32–38% of the UTS when it comes to an assessment of the fatigue strength. And there are still effects related to the loading, the manufacturing process or the environmental conditions which have a major influence on fatigue life too.
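The arithmetic behind the 32–38% range can be traced in a few lines, here for the high-strength case with the chapter's example numbers:

```python
uts = 1000.0                 # MPa, high-strength steel
s_material = 0.45 * uts      # specimen fatigue strength estimate, Eq. (3.4)
c_surface = 0.735            # 84% minus a further 10.5% for Rz = 20 microns
s_component = s_material * c_surface
print(f"{s_component:.0f} MPa = {s_component / uts:.0%} of UTS")   # -> ~33%
```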

3.2.5 Applied Loads

The fatigue strength of materials often is given for axial loading. That is a standard which makes it possible to compare different materials with regard to their fundamental fatigue properties by using specimens for fatigue testing. As we know, the term specimen is used here in the sense of a test piece of simple shape, frequently standardized, of small size and prepared carefully with a good surface finish. The purpose of the simplification is to reduce the variability of the test results and to keep different influential factors under control.


Various different standards for conducting axial fatigue tests are available, as well as practices for the presentation of the test results. Organizations such as ASTM have established a number of standards related to cyclic deformation and fatigue crack formation which can easily be found on their websites. In Germany, the standard DIN 50100 has been established and got its last revision in 2016. The ISO—the International Organization for Standardization—as a worldwide federation of national standards bodies, has a huge family of norms and standards related to fatigue testing and specific types of applied loads. When using fluctuating, pure axial loads and an unnotched specimen for fatigue testing, there is a constant stress distribution throughout the whole cross section. In other words: Axial loading does not introduce any stress gradient by itself. The specimen is stressed uniformly, and the stress level at the surface is the same as in the inside. With bending loads, the situation is completely different: A specimen deforms when a transverse load is applied to it, and the material at the upper side is compressed while the material at the lower side is stretched. The basic formula for the determination of the bending stress in a beam under simple bending is given by the bending moment about the neutral axis and the resistance moment about the neutral axis—that is well known from technical mechanics and does not have to be repeated here. Compressive and tensile stresses develop in the direction of the axis of the specimen under bending loads. The stresses between these two opposing maxima vary linearly, and that represents a gradient again. Hence, there is a significant difference related to fatigue simply because of the type of applied loads: For unnotched specimens, axial loads do not introduce a stress gradient, but bending loads do. Hence, it can be expected that the type of loading has an effect on the fatigue life—although the maximum stress is the same (Fig. 3.21). Here the external loads are determined in a way that the maximum stresses are the same for all three specimens. That means the bending moment is bigger for specimen C than for specimen B, simply because C has an enlarged height and thereby a bigger resistance moment about the neutral axis. Using our knowledge about the geometric size effect and notch support, it is pretty obvious in which order the fatigue test results of those different specimens have to appear: Specimen B—the basic specimen under bending—offers the best fatigue life, followed by specimen C with a less steep stress gradient because of its size, and finally specimen A, which is axially loaded (Fig. 3.22). The type of loading has a significant influence on the fatigue strength—and the results from the example above can easily be explained by the highly stressed volume approach too. Another type of loading is the twisting of an object due to an applied torque: When a shaft is subjected to a torque or twisting, a shear stress is produced in the shaft which varies from zero in the axis to a maximum at the outside surface of the shaft. Despite the fact that steels are used extensively in the industry and many components undergo torsional loadings quite frequently, fatigue properties of this nature


Fig. 3.21 Maximum stress and stress distribution for different types of loading and specimens

Fig. 3.22 S–N curve characteristics of the different types of loading and specimens


are not generally available: The majority of the available fatigue data and material properties resemble axial loading rather than torsion. By using the von Mises or the Tresca criterion, it is possible to assess a relation between axial and torsional material properties, which can then be used for a comparative analysis of the fatigue strength. The results from torsional fatigue testing are lower than those from axial loading. With regard to the fatigue life, it can be concluded that torsion leads to a minimum number of cycles-to-failure compared to bending and axial loading. Operational conditions often are characterized not only by variable amplitude loading, but include multiaxial loads too. That may be a superposition of tension–torsion loads, which means a concurrent application of different types of loads. Basically, it has to be differentiated whether those loads are proportional or non-proportional: Under proportional loading, the stress components are acting in phase and the resulting stress range becomes relatively large. For non-proportional loading, the resulting range of stress becomes smaller, because the two stress components are acting out of phase (Fig. 3.23). To make it clear in which way proportional loading can be differentiated from non-proportional loading, we may think about the stability of the principal stress directions: For proportional loading, the directions of principal stresses are constant, and for non-proportional loading, they may rotate. When looking at results which are obtained for specific combinations of normal stress and shear stress, it can be seen in which way the normal stress may increase when the shear stress declines (Fig. 3.24): Here an ellipse fits the experimental data pretty well—and that is found to be acceptable primarily for proportional loading.
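The comparative assessment via the von Mises or Tresca criterion mentioned above amounts to a simple scaling of the axial fatigue strength; the sketch below uses an assumed axial strength of 200 MPa:

```python
import math

def shear_from_axial(sigma_a, criterion="von Mises"):
    """Estimate the allowable shear stress amplitude from the axial one."""
    if criterion == "von Mises":
        return sigma_a / math.sqrt(3.0)   # tau_a ~ 0.577 * sigma_a
    if criterion == "Tresca":
        return sigma_a / 2.0              # tau_a = 0.5 * sigma_a
    raise ValueError(criterion)

print(shear_from_axial(200.0))                       # -> ~115 MPa
print(shear_from_axial(200.0, criterion="Tresca"))   # -> 100 MPa
```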

Fig. 3.23 Superposition of stress components 1 and 2 to establish a resulting principal stress


Fig. 3.24 Normalized fatigue strength under multiaxial loading. Data from [12]

That simple way to address the problem of multiaxiality refers to the Gough–Pollard equation [11]. When using the principal stress to represent the stress state from the superposition of the individual components of the stress tensor, we may expect a longer fatigue life for non-proportional loading due to the resulting stress range, but most experiments show that the opposite is the case. Hence, it has to be accepted that using the principal stress range often is a poor choice for multiaxial fatigue assessment. Thus, a number of alternatives have been proposed, and frequently the critical plane approach is used to evaluate the effect of multiaxial loading. In that approach, a number of search planes are examined, and finally the plane that maximizes the damage parameter is called the critical plane. Using multiaxial damage criteria means that an equivalent uniaxial stress range is evaluated against the normal uniaxial fatigue strength. The equivalent uniaxial stress range is generally a combination of normal stress ranges and shear stress ranges, and it may include various stress invariants and material-specific fitting parameters: For ductile metals, it is known that the initial part of the crack initiation occurs on the plane of maximum shear stress; consequently, for these materials the multiaxial criteria are based primarily on the shear stress range, while for brittle materials the crack initiation typically occurs at the plane of maximum normal stress. The normal stress criterion is the simplest applied criterion, in which a search is performed for the plane that experiences the maximum damage calculated from the normal stress only. This criterion is used for the evaluation of multiaxial fatigue of brittle materials, but it is in good agreement with results for semi-ductile materials too, and it is consequently found in guidelines for the wind turbine industry [13], which frequently uses GJS-400-18 ductile cast iron. A widely used shear stress-based critical plane approach is the Findley criterion, which predicts fatigue failure on a plane considering the shear stress amplitude, the maximum normal stress occurring over a load cycle and an experimentally determined material parameter that describes the material's sensitivity to the normal stress.
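A minimal critical-plane search for the normal stress criterion can be sketched as follows; the in-phase tension–torsion history and its amplitudes are hypothetical, and plane stress with sigma_y = 0 is assumed:

```python
import numpy as np

t = np.linspace(0.0, 2.0 * np.pi, 360)   # one load cycle
sigma_x = 120.0 * np.sin(t)              # MPa, normal stress component
tau_xy = 70.0 * np.sin(t)                # MPa, shear, in phase with sigma_x

best_amp, best_theta = -1.0, None
for theta in np.radians(np.arange(0.0, 180.0, 1.0)):   # search planes
    c, s = np.cos(theta), np.sin(theta)
    sigma_n = sigma_x * c**2 + 2.0 * tau_xy * s * c    # normal stress on plane
    amp = 0.5 * (sigma_n.max() - sigma_n.min())        # amplitude over cycle
    if amp > best_amp:
        best_amp, best_theta = amp, theta

print(f"critical plane at {np.degrees(best_theta):.0f} deg, "
      f"normal stress amplitude {best_amp:.1f} MPa")
```

A criterion such as Findley's would additionally weight each plane's shear stress amplitude with the maximum normal stress on that plane, but the search structure stays the same.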


Actually, multiaxial fatigue and the corresponding criteria are well beyond the scope of an introduction; hence, they cannot be dealt with in a more detailed manner here. The Czech researcher Jan Papuga published an extraordinarily complete review analyzing 17 different methods for multiaxial damage criteria, which were tested on a series of 407 experiments [14]—so, there is plenty of material available for project research and own reading purposes to dive deeply into this topic.

3.2.6 Mean Stress

The load-related effects which have been discussed up to this point depend on the load range, the type of loading as well as the superposition of load time histories leading to multiaxiality. The general form of cyclic loading is a continuous and repeated application of a load, which may occur as a sinusoidal fluctuation. The mean load is the arithmetic mean of a load cycle, and from that level, the load amplitude extends in the direction of either the maximum or the minimum load. Hence, for a load or stress cycle, there can be a different mean load or mean stress, although the load or stress range remains constant (Fig. 3.25). We may see different mean stress levels within an extended time history, and that is more than understandable, because the mean stress often is given by the static load conditions which—as an example—can change due to the payload and number of passengers in a car. While in Fig. 3.25 the stress time history #A shows positive stresses only, the time history #B has a zero-level minimum stress, which results in a stress ratio

R = Smin/Smax = 0   (3.6)

and the mean stress becomes equivalent to the amplitude, or half of the stress range. The stress time history #C shows R = −1 because the minimum and maximum stress have the same magnitude but their sign is inverted. Then, the mean stress is at zero level. Looking at those time histories, it would be a surprise not to have any influence from the mean stress on the fatigue life: While for #A the complete stress time history is in the tensile regime, half of #C is in the compressive regime. Since tensile mean stresses cause opening of microcracks and make sliding easier, it is generally observed that such mean stresses reduce fatigue life or decrease the allowable stress amplitudes. As opposed to this effect, fatigue life is improved when zero-level or even compressive mean stresses arise. As early as 1941, J. O. Smith published his results about the mean stress sensitivity of metals (Fig. 3.26). The top-left experimental results were examined for zero-level mean stress, which means R = −1 and is often used as the reference for the fatigue strength.


Fig. 3.25 Stress time histories and mean stress

Fig. 3.26 Mean stress sensitivity. Data from: Engineering Experiment Station Bulletin 334, 1941

The bottom-right experiments were performed using a mean stress level which comes close to the UTS from static testing: It is pretty clear that once the mean stress is at the level of the UTS, the material fractures and there is no chance for fluctuating load cycles anymore.


Hence, using those results from fatigue testing, the influence of mean stress can be explained in a vivid manner by such a graph. Most traditional materials show the behavior that the fatigue strength diminishes for increased mean stress. The opposite also holds: In case of compressive mean stresses, the fatigue strength increases.¹¹ The effect of mean stresses on the fatigue strength can be expressed by using corrections: The allowable cyclic stress amplitude is scaled down when tensile mean stress is applied, and that correction often is done by using a linear relation or a parabola. The fatigue strength at a given mean stress Smean is then determined using a correction factor κ which is multiplied with the fatigue strength at R = −1. A simple linear correction is shown by the dotted line in the graph above, which requires the tensile strength as the single material parameter only:

κ = 1 − Smean/σUTS   (3.7)

That is known as the modified Goodman correction and is recommended for the mean stress correction when having high-strength and brittle materials. Another linear mean stress correction reduces the allowable stress amplitude depending on a parameter that is calculated from the fatigue strength at zero mean stress (R = −1) and at R = 0. That parameter is called the mean stress sensitivity M, which is determined from experiments and often is in the range of 0.3:

κ = 1 − M · Smean/Sa,k(R=−1)   with   M = Sa,k(R=−1)/Sa,k(R=0) − 1   (3.8)

Using the mean stress sensitivity parameter M is especially popular in Germany and often referenced by the leading durability experts Harald Zenner and Cetin M. Sonsino from Clausthal and Darmstadt. Smith's data from 1941 certainly was based on what we today call mild and ordinary steels: Obviously, such materials are less sensitive to the mean stress level than is expressed by those linear corrections. A less conservative correction can be achieved by using the Gerber approach, which is mathematically similar to the modified Goodman correction, but has the last term squared and therefore describes a parabola:

κ = 1 − (Smean/σUTS)²   (3.9)

¹¹ Fiber-reinforced plastics and composites show the opposite behavior: Since the fibers have their load-carrying capability under tensile loading, the reinforced plastics get better fatigue strength when there is positive mean stress. Having compressive mean stress, the fibers may start to bend or crush, which then limits the load-carrying capability of the material significantly.


Fig. 3.27 Haigh diagram

The Gerber parabola is recommended for ductile materials, and it makes sense for tensile mean stresses only, because for a stress ratio R < −1 it would contradict the effect previously stated for compressive mean stresses. The results of fatigue tests using nonzero mean stresses are often presented in a Haigh diagram (Fig. 3.27). While for compressive mean stresses the effect on the fatigue strength often can be assessed by extrapolating the linear correction lines, the Gerber parabola is more conservative in this respect because it is often continued as a horizontal line.

The Haigh diagram plots the mean stress, usually tensile, along the x-axis and the oscillatory stress amplitude along the y-axis. Lines of constant life are drawn through the data points. The infinite life region is the region under the curve, and the finite life region is the region above the curve. Since a substantial amount of testing is required to generate a Haigh diagram which provides the fundamental information for relevant combinations of mean and alternating stresses, empirical relationships that relate alternating stress to mean stress have been developed by researchers such as Gerber (Germany, 1874), Goodman (England, 1899), Soderberg (USA, 1930) or Morrow (USA, 1960s). These corrections define various curves to connect the fatigue limit on the alternating stress axis to either the yield strength or the ultimate strength on the mean stress axis. All corrections should be used for tensile mean stress values only. When the mean stress is comparatively small against the alternating stress, the corrections provide very similar results. As the stress ratio approaches R = 1, the different models show significant differences.

Using the mean stress sensitivity parameter, it is relatively easy to derive a single scalar value to compare different types of materials (Fig. 3.28). Aluminum alloys show a certain mean stress sensitivity, while steel materials are more robust against the mean stress influence. The simple lesson learned from looking at the effects of a nonzero mean stress is: To improve fatigue strength, we should try to avoid tensile mean stresses and to have compressive mean stresses instead.
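The corrections above are straightforward to put into code. The following is a minimal sketch of Eqs. (3.7)–(3.9); the material values in the example are hypothetical and merely illustrate the typical ordering of the three corrections.

```python
def goodman(s_mean, uts):
    """Modified Goodman correction, Eq. (3.7): linear knock-down toward the UTS."""
    return 1.0 - s_mean / uts

def gerber(s_mean, uts):
    """Gerber parabola, Eq. (3.9): tensile mean stresses only."""
    return 1.0 - (s_mean / uts) ** 2

def mean_stress_sensitivity(sa_r_minus1, sa_r0):
    """Mean stress sensitivity M from fatigue strengths at R = -1 and R = 0, Eq. (3.8)."""
    return sa_r_minus1 / sa_r0 - 1.0

def kappa_from_m(s_mean, sa_r_minus1, m):
    """Linear correction based on the sensitivity M, Eq. (3.8)."""
    return 1.0 - m * s_mean / sa_r_minus1

# Hypothetical mild steel: UTS = 500 MPa, Sa(R=-1) = 200 MPa, Sa(R=0) = 155 MPa
uts, sa_r1, sa_r0 = 500.0, 200.0, 155.0
m = mean_stress_sensitivity(sa_r1, sa_r0)  # ~0.29, inside the typical ~0.3 range
for s_m in (50.0, 100.0, 150.0):
    print(f"S_mean = {s_m:5.1f} MPa | Goodman {goodman(s_m, uts):.3f} | "
          f"Gerber {gerber(s_m, uts):.3f} | M-based {kappa_from_m(s_m, sa_r1, m):.3f}")
```

For these numbers, M works out to roughly 0.29, within the typical range quoted above, and the Gerber parabola gives the mildest knock-down, consistent with its recommendation for ductile materials.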


Fig. 3.28 Mean stress sensitivity for different materials

Though that sounds right, it is typically not reasonable to manipulate the operational loading to obtain favorable mean stresses: Mean stresses are introduced by the static part of the operational loading and cannot be changed from tension to compression easily. Hence, at that point, we have to close this chapter about the influence of applied loads, not without emphasizing that due to the mean stress sensitivity a full operational load analysis has to consider both the cyclic stress amplitudes and the mean stresses.

3.2.7 Process-Related Conditions

At the end of the previous chapter, we saw that it would be beneficial to improve fatigue strength by avoiding tensile mean stresses and introducing compressive mean stresses instead. This can often be achieved by using residual stresses, which are in equilibrium within a part and do not rely on any external loads to create them. They are called residual stresses because they remain from previous manufacturing processes and exist in most manufactured parts. These are locked-in stresses which are present in components and develop primarily due to nonuniform volumetric change. But it has to be clear that residual stresses are categorically signed: There can be tensile stresses as well as compressive stresses, having the same effect on fatigue life as we saw for mean stresses.

Tensile stresses can be introduced unintentionally, or at least uncontrolled, by processes such as shrinking, welding, fitting or machining. As an example, cast components usually retain tensile residual stresses which may cause cracking on the component surface. Compressive residual stresses can be created in a controlled manner by shot peening, low plasticity burnishing and autofrettage, induction hardening, or carburizing too. Often the target of inducing compressive residual stresses is to balance the detrimental effects of tensile stresses. Stress relief annealing can also be used to reduce residual tensile stresses.


The total stress of a component is the sum of all applied operational stresses and the process-related residual stresses. Hence, residual stresses have the potential to improve or to completely spoil parts which are subjected to fatigue. Introducing intentional residual stresses into materials and structures during design, manufacturing or service to slow down, stop, or divert fatigue crack growth can result in safer methods of maintaining structural integrity. While today it is possible to model how residual stresses develop in a simple coupon during welding, for example, modeling the stresses in an entire structure after manufacturing, and how they alter during service, is important but not well understood. Residual stress engineering is a relatively new concept that might help to extend existing knowledge toward deliberately applying residual stresses to a structure to increase its fatigue life. It is becoming an increasingly important aspect in the design and manufacturing of large aircraft structures, for which typically rolled plate-type materials and forgings are used.

There are three different known types of residual stresses:

• Type-1, or so-called macro-residual stresses, which develop across several grains: Any treatment or process which causes an inhomogeneous distribution of strains produces type-1 residual stresses. Changes in the equilibrium of type-1 residual stresses will result in changing the macroscopic dimensions,
• Type-2, or so-called micro-residual stresses, which develop within one grain and may have different sizes in different grains. Especially the transformation from austenite to martensite—a diffusion-less phase transformation in the solid state of steels—produces type-2 residual stresses, because the volume of martensite is larger than that of austenite and this difference forms residual stresses,
• Type-3, or so-called sub-micro residual stresses, which develop within several atomic distances of the grain due to crystalline defects such as vacancies, dislocations, etcetera.

All those residual stress types have immediate relevance to what happens in practice. For introducing compressive residual stresses intentionally, there are three main groups: mechanical treatment, thermal treatment and plating. The mechanical treatment typically relies on applying external load which creates localized plastic deformation. The most widely used processes to induce compressive surface residual stresses are surface rolling—using the pressure of narrow rolls—and shot peening, which comes with the pressure of the impact of small spherical parts. Both tensile and compressive residual stresses must be present in order to ensure the internal force and moment equilibrium.

Shot peening is successfully used for the treatment of steels, ductile iron, aluminum, titanium and nickel-based alloys, using small spherical parts—balls or shots—that range from 0.2 to more than 3 mm in size and are shot at high speeds against the part's surface. They produce surface caps and plastic stretching due to the impact.


Fig. 3.29 Example for surface residual stresses introduced by shot peening

Compressive stresses are thus produced in the skin: The residual stress directly at the surface itself is a bit smaller than the maximum residual stress, and then the compressive stress becomes smaller with increasing depth (Fig. 3.29). Depending on the material strength and the intensity of the shot peening, the depth of the compressive layer introduced by such a treatment is in a range of 0.25 to almost 1 mm for steels, about 0.75 mm for aluminum alloys and less than 0.5 mm for titanium [15].

The peening intensity itself depends on the size and material of the shot, the impact speed as well as the time of exposure. It is specified using Almen strips, which consist of a steel strip that is peened on one side only. The residual compressive stress from the peening will cause the Almen strip to bend toward the peened side, which is a very repeatable function of the energy of the treatment. Excessive treatment may produce a rough surface together with excessive tensile stresses in the core of the part, while insufficient treatment may fail to provide a proper compressive layer.

Surface rolling is often used as an economical forming operation in the production of bolts and screws for creating their threads. It is also used for introducing compressive residual stresses in fillets of components such as gear teeth, turbine blades, crankshafts or axles. For shot peening as well as surface rolling, the nonuniform inelastic loading leads to surface residual compressive stresses when the load is removed. An adequate depth of the compressive layer is important because it must be deep enough to be able to stop cracks. Due to the compressive layer, fatigue crack nucleation sites and growth can be shifted to subsurface residual tensile stress regions.

While shot peening and surface rolling are the major mechanical methods for introducing compressive residual surface stresses, induction hardening, nitriding and carburizing are widely used thermal methods which create a surface skin that is hard and in compression.


Plating is a surface treatment to increase corrosion resistance and for esthetic appearance: While chromium plating is used to increase wear resistance and to build up undersized parts, electroplating with chromium or nickel creates significant residual tensile stresses in the plating material along with microcracking. Here, hydrogen can be introduced into the base metal, which can cause a susceptibility to hydrogen embrittlement. Hence, it has to be considered that surface treatment sometimes can create detrimental effects with regard to improving fatigue life: The thermal effects from welding may produce huge tensile stresses as well, which reach up to and even beyond the yield strength of the base material.

Finally, it has to be considered that machining operations such as turning, milling, planing or broaching may significantly affect fatigue life too, because those operations may initiate effects related to surface finish, cold working, phase transformations as well as residual stresses. Because of the high machining speed and pressure in those operations, pretty often the surface residual stresses are tensile with subsurface residual compressive stresses. A similar residual stress distribution comes from abrasive operations such as grinding: Conventional or abusive operations with high speed and high feed rate, with water as lubricant or even without any lubricant, can introduce residual surface tensile stresses with high magnitude but shallow depth. Changing these operational parameters to low speed and low feed rate, as well as using oil as a lubricant, may result in a shallow, low-magnitude residual compressive layer.

In welded components, both types of residual stresses, tensile and compressive, can be found. Tensile residual stresses are found in the weld metal area, but the distribution of welding residual stress varies in different locations and depends on welding parameters, types, sequence, component type, component materials and component sizes. Welding itself is a process that creates localized heat from a moving heat source, and the welded structures are heated up rapidly to the melting temperature. After the welding is done and the heat source is removed, the rapid cooling causes microstructural alterations which lead to the generation of residual stress. Hence, the welding-related residual stresses result from the nonuniform expansion and compression of the weld and base material due to the heat distribution during the welding process. Often the residual stresses found in welds are tensile stresses, which have a negative effect on the fatigue life of those components.

The fundamental sources of welding residual stresses are shrinkage, quenching and phase transformation. While tensile residual stresses occur primarily due to shrinkage, quenching and phase transformation processes may create compressive residual stresses even in welds. Hence, tensile residual stresses exist primarily in the weld metal zone as well as in the heat-affected zone (HAZ), and in the base metal there is a compressive layer (Fig. 3.30).

Due to the varying heating and cooling rate in different zones near the weld, residual stresses develop: Different temperature conditions lead to varying strength and volumetric changes in the base metal during welding.


Fig. 3.30 Residual stress distribution in weld metal zone and heat-affected zone

An increasing temperature decreases the yield strength of the material and simultaneously tends to cause thermal expansion of the metal being heated. Since the surrounding low-temperature base metal restricts thermal expansion, compressive strain develops in the metal during heating. Along the weld, the residual stress is generally tensile in nature, while compressive residual stress develops adjacent to the weld in the heat-affected zone.

On the top and bottom surfaces of the weld joint, the cooling rate is higher than in the core or middle portion of the weld and the HAZ. These differences in the cooling rate cause differential thermal expansion through the thickness of the area being welded: While the core material is still hot, the material near the surface starts to contract, which leads to the development of compressive residual stresses at the surface and tensile residual stress in the core.

It was mentioned before that residual stresses are created due to nonuniform volumetric changes. During steel welding, a transformation of the austenite into other metal phases occurs in the heat-affected zone as well as in the weld zone. Such transformations lead to an increase in the specific material volume at the microscopic level. Especially the low-temperature transformation of austenite into martensite comes with a significant increase in the specific volume, which contributes notably to the development of residual stresses.

Residual stresses in as-welded structures may be up to yield magnitude in tension. By preheating and post-weld heat treatment, many issues related to welding residual stresses can be minimized. Hence, by heating up the base metal to a specific temperature prior to the welding operation, the cooling rate can be reduced, which has a positive effect on the shrinkage stresses.


Post-weld heat treatment—which is called stress relief too—helps to reduce and to redistribute the residual stresses which have been introduced by the welding operation. With post-weld heat treatment, the tensile residual stresses are generally relaxed to just 10–25% of the value from the as-welded stage [16].

In summary, it is possible to state that residual stresses come from process-related conditions and can be beneficial for fatigue life, but they may just as well decrease the operational stress level for crack initiation: The positive effect strongly depends on whether a compressive layer is built. Similitude exists between residual stresses and mean stresses, and S–N correction methods can be used for both types of static stresses: It is common to assess residual stress effects on fatigue by treating them as mean stresses. With regard to the presence of those stresses, it has to be clear that there is a significant difference: While mean stresses persist as long as the mean load remains, residual stresses persist only as long as the sum of residual stress and applied stress does not exceed the pertinent yield strength of the material. Hence, in mild steels, which may yield below 300 MPa, rough operational load conditions can decrease the residual stresses pretty quickly. Such load-based relaxation of residual stress is a reason not to shot-peen softer metals, but to apply the surface treatment more beneficially to hard metals with high yield strengths.

When predicting fatigue life, it is important to consider the magnitude and distribution of the residual stresses, both at the surface and below the surface, as well as their stability under service loading. The relaxation behavior is primarily affected by the material hardness and the applied strain amplitude, and not that much by the magnitude of the residual stress itself. The resistance of a material to stress relaxation can be determined by subjecting axial specimens to biased strain cycling and observing the cyclic change in mean stress: Residual stresses would be expected to relax whenever the applied loading results in reversed plastic straining in the material. Because of the tendency of many steels to exhibit cycle-dependent softening, this may occur at lower stresses than would be anticipated based upon monotonic yield strengths. In this context, it has to be mentioned that under variable amplitude loading relatively few high stress levels can trigger cyclic softening to a significant amount.

Depending on the material's hardness, a threshold stress amplitude can be assessed below which relaxation would not be expected to occur. That stress amplitude threshold for the material's specific relaxation behavior shows a trend similar to that of the material's cyclic yield strength. Furthermore, for high-strength steels having a UTS > 1500 MPa, the stress relaxation can be expected at lives of 10⁶ reversals or longer, while the influence of compressive residual stresses on fatigue resistance diminishes rather rapidly with decreasing strength [17].
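Treating a stable residual stress as an extra mean stress, combined with a crude yield-based persistence check, can be sketched in a few lines. This is a simplified illustration rather than an established procedure; the one-shot relaxation criterion and all numbers are assumptions for the example.

```python
def effective_mean_stress(applied_mean, residual, applied_amplitude, yield_strength):
    """Treat a residual stress as an extra mean stress, with a persistence check.

    The residual stress is assumed to persist only while the extreme total
    stresses stay within the material's yield strength; otherwise it is
    assumed to have relaxed to zero. Real relaxation is gradual and
    cycle-dependent, so this is only a first-order screen.
    """
    peak_max = applied_mean + residual + applied_amplitude
    peak_min = applied_mean + residual - applied_amplitude
    if max(abs(peak_max), abs(peak_min)) > yield_strength:
        residual = 0.0  # load-based relaxation wipes out the locked-in stress
    return applied_mean + residual

# Hypothetical shot-peened mild steel: yield ~300 MPa, -150 MPa compressive skin
print(effective_mean_stress(50.0, -150.0, 120.0, 300.0))  # -100.0: layer survives
print(effective_mean_stress(50.0, -150.0, 250.0, 300.0))  # 50.0: layer assumed relaxed
```

The effective mean stress obtained this way can then be fed into the mean stress corrections shown earlier.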


To determine residual stresses is not an easy task—and certainly beyond the scope of this introduction. Today, it is possible to determine the stresses by computational methods using finite element analysis, but more often they are determined experimentally: Surface stresses often are measured in a non-destructive way, while sub-surface stress measurement is mostly destructive.¹² An old-fashioned type¹³ of destructive measurement is to drill a small hole, typically 1.5–3 mm deep, into a part and to measure the relaxation around the hole by using a strain gage rosette. For measuring residual stresses non-destructively, X-ray diffraction can be used: Since residual stresses create a certain level of lattice distortion, measuring the spacing of the lattice gives an indication of the residual stress magnitude. Subsurface residual stresses can be measured by that method too—after thin material layers are successively polished away—but then it is a destructive process as well.

¹² It is easy to understand that a destructive technique for measuring residual stresses is applied if it is expected to have unintentional and unwanted residual stresses.
¹³ Though the hole-drilling method looks like an old-fashioned method, it is even today successfully used and has many applications.
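As a toy illustration of the X-ray approach, a lattice spacing can be derived from a diffraction peak position via Bragg's law and the lattice strain converted into a stress estimate. This is a deliberately crude uniaxial sketch (practical X-ray residual stress analysis uses the sin²ψ technique and plane-specific elastic constants); the peak positions are invented, while the Cr-Kα wavelength is the standard value.

```python
import math

def lattice_spacing(two_theta_deg, wavelength_nm):
    """Bragg's law (n = 1): d = lambda / (2 sin(theta))."""
    theta = math.radians(two_theta_deg) / 2.0
    return wavelength_nm / (2.0 * math.sin(theta))

def residual_stress_estimate(d_measured, d_unstressed, e_modulus_mpa):
    """Crude uniaxial estimate: stress = E * lattice strain."""
    strain = (d_measured - d_unstressed) / d_unstressed
    return e_modulus_mpa * strain

# Ferrite (211) reflection with Cr-K-alpha radiation (0.2291 nm); hypothetical peaks
d0 = lattice_spacing(156.4, 0.2291)   # assumed unstressed reference peak position
d1 = lattice_spacing(156.55, 0.2291)  # peak shifted to a slightly higher angle
print(f"{residual_stress_estimate(d1, d0, 210_000.0):+.0f} MPa")  # roughly -65 MPa
```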

3.2.8 Environmental Conditions

In power plants, steel pipes carry steam at high pressure and high temperatures to create electrical energy by operating turbines. In nuclear power stations, the cooling water temperature reaches nearly 300 °C and its pressure up to 120 bar. These environmental conditions become even more challenging because of high-energy radiation too. Hence, the environment in which the operation takes place obviously can have a significant impact on the fatigue behavior and the durability performance of materials and components. Austenitic steels work best in hot environments where corrosion is a standard condition.

Because of the extraordinary construction times of nuclear plants as well as the large number of containment service water systems, even microbiologically influenced corrosion (MIC) effects may occur, which—again—demonstrates how complex environmental conditions can be. Specifically, when pipes and lines are conditioned with phosphate-based chemicals, the conditions are favorable for anaerobic bacteria to develop and to create pits in stainless steel pipes, especially in weld-affected zones, which has led to leaks in some instances.

While in power plant construction as well as for heat engines or jet engines the effect of temperature cannot be neglected, for many other applications it does not necessarily have to be considered, as long as the temperature does not affect the material properties, which is true in a range of −25 to +60 °C or even +100 °C for materials typically used in mechanical engineering.

On the other hand, a corrosive environment can have a dramatic effect on the fatigue strength: In the crack initiation phase, corrosion pits act as crack initiation sites, and in the crack propagation phase, corrosive media usually accelerate the


crack growth. In such an environment, the S–N curve tends to continue with an almost constant slope from the LCF to the HCF regime, not showing any cut-off even at very small stress amplitudes. Contrary to pure mechanical fatigue, there is no fatigue limit load in corrosion-assisted fatigue. The simultaneous effect of cyclic stresses and chemical attack leads to corrosion fatigue and subsequently to environmentally assisted cracking (EAC). The combined action of alternating stresses and a corrosive environment causes rupture of the protective passive film, upon which corrosion is accelerated. If the metal is simultaneously exposed to a corrosive environment, the failure can take place at even lower loads and after shorter time. The fatigue fracture is brittle, and the cracks are most often trans-granular. No metal is immune from some reduction of its resistance to cyclic stressing if the metal is in a corrosive environment: Even relatively mild corrosive effects can reduce the fatigue strength of aluminum structures considerably, down to 75 to 25% of the fatigue strength in dry air.

In corrosion fatigue, the primary mechanisms are anodic dissolution and hydrogen embrittlement. Anodic dissolution occurs when a crack is opened by cyclic stress and the passive film is broken within the crack. Then, a new surface is exposed to the corrosive environment, and this surface is partially dissolved, which facilitates the growth of the crack and thereby accelerates fatigue damage. Hydrogen-assisted cracking occurs when single hydrogen atoms enter the lattice of a metal and diffuse into voids or microcracks in the material, where they form high-pressure bubbles. That creates localized stress within the material, leading to a reduced fracture strength.

Pitting corrosion is a slightly different form of corrosion: Here, a localized dissolution of the base material occurs, which results in the creation of depressions, or pits, on the surface of the material. Pitting corrosion has been observed to progress in multiple stages: passive film breakdown, pit growth and pit arrest. The breakdown of passive films is pretty sensitive to the temperature, the alloy composition, the corrosion environment and an extraordinary number of other factors. Pit growth is the stage associated with the continuous growth of corrosion pits, which is mostly determined by the mass transport properties of the system: The pit growth rate decreases with time, often proportional to the inverse square root of time. Pitting corrosion can have a significant effect on the fatigue life of metals but does not act in the same way as corrosion fatigue. Hence, it can be concluded that corrosion pits act as pre-existing flaws in the material which nucleate fatigue cracks.

A pretty pragmatic examination of the influence of corrosion on the fatigue behavior of automotive steels was performed in the late 1970s, when a car was driven for 1 or 2 years in Southern Ontario before specimens were made from parts which were exposed to corrosion [18]. The pit depth effect on fatigue life was then evaluated, and from those results fatigue notch factors for pits were derived (Fig. 3.31). Even relatively small pits—having a depth of just 0.2 mm—may induce an effect on the fatigue strength of mild to medium strength steels which is similar to a moderate notch effect.
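The pit-growth kinetics mentioned above translate into a simple closed form: if the growth rate decays with the inverse square root of time, integration gives a depth that grows with the square root of time. A tiny sketch under that assumption; the rate constant and initial depth are arbitrary illustration values.

```python
import math

def pit_depth(t, c=0.01, d0=0.05):
    """Pit depth (mm) when the growth rate decays as c / sqrt(t):
    integrating dD/dt = c / sqrt(t) gives D(t) = d0 + 2 * c * sqrt(t)."""
    return d0 + 2.0 * c * math.sqrt(t)

for hours in (10, 100, 1000):
    # each decade of time only multiplies the added depth by ~3.16 (sqrt(10))
    print(hours, round(pit_depth(hours), 3))
```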


Fig. 3.31 Corrosion pit notch factor for automotive steel grades. Data from [18]

Fig. 3.32 Reduction of S–N data for steel when corrosion pre-pitted. Data from [19]

Similar results can be found for specimens which are pre-pitted in a corrosion solution of 3.5% sodium chloride [19]: Here, the stress concentration factor of the pre-pitted samples is around 1.5, and this significantly reduces the fatigue life, by 40% compared to the non-corroded specimens (Fig. 3.32). Besides the reduction in fatigue strength, the slope k* of the pre-pitted specimens is different from the slope of the un-corroded steel specimens.

But it has to be considered that there is no normative approach available to account for the influence of corrosion on the fatigue strength: With the current state of knowledge, it is not possible to reliably assess the effect of corrosion on the fatigue strength under the exposure of tap water or seawater. Frequently, a knock-down factor is used to describe the reduction in fatigue life in a corrosive environment compared to the performance in air, which is aligned to a surface-related effect (Fig. 3.33). But since the fatigue knock-down due to the exposure to a corrosive environment while cyclically loaded involves such a complex interaction of various different mechanisms, an approach such as shown in that figure is more a tool than a concept to describe the effect of corrosion.
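Such knock-down information can be used in a back-of-the-envelope comparison by shifting a Basquin-type S–N curve. The sketch below is only illustrative: the reference point, the slopes and the 40% strength reduction are assumed numbers in the spirit of Fig. 3.32, not data taken from the study.

```python
def cycles_to_failure(s_a, s_ref, n_ref, k):
    """Basquin-type S-N curve: N = n_ref * (s_a / s_ref) ** (-k)."""
    return n_ref * (s_a / s_ref) ** (-k)

# Hypothetical steel: 200 MPa at 1e6 cycles with slope k = 5 in air;
# pre-pitted material assumed 40% weaker with a different slope k* = 4.
s_a = 150.0
n_air = cycles_to_failure(s_a, 200.0, 1e6, 5.0)
n_pit = cycles_to_failure(s_a, 0.6 * 200.0, 1e6, 4.0)
print(f"air: {n_air:.3g} cycles, pre-pitted: {n_pit:.3g} cycles")
# -> roughly an order of magnitude shorter life for the pitted material
```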


Fig. 3.33 Corrosion knock-down factor of steel materials aligned to surface-related effects

References

1. Haibach E (2006) Betriebsfestigkeit—Verfahren und Daten zur Bauteilberechnung, 3rd edn. VDI-Buch, Springer, Berlin
2. Lang OR (1979) Dimensionierung komplizierter Bauteile aus Stahl im Bereich der Zeit- und Dauerfestigkeit. Zeitschrift Werkstofftechnik 10:24–29
3. Hück M, Thrainer L, Schütz W (1983) Berechnung von Wöhlerlinien für Bauteile aus Stahl, Stahlguss und Grauguss—Synthetische Wöhlerlinien. Report ABF 11, 3rd edn
4. Schijve J (2004) Fatigue of structures and materials, 2nd edn. Springer, Netherlands
5. Härkegaard G, Halleraker G (2010) Assessment of methods for prediction of notch and size effects at the fatigue limit based on test data by Böhm and Magin. Int J Fatigue 32:1701–1709
6. Birkhoff GD (1933) Aesthetic measure. Harvard University Press, Cambridge
7. Marquis G, Solin J (2000) Long-life fatigue design of GRP 500 nodular cast iron components. Research Notes 2043, VTT Technical Research Center of Finland, Espoo
8. Neuber H (1958) Kerbspannungslehre, 2nd edn. Springer, Berlin
9. Kuguel R (1961) A relation between theoretical stress concentration factor and fatigue notch factor deduced from the concept of highly stressed volume. ASTM Proc 61:732–748
10. Gudehus H, Zenner H (2004) Leitfaden für eine Betriebsfestigkeitsberechnung, 4th edn. Stahleisen
11. Gough H, Pollard H (1935) The strength of metals under combined alternating stresses. Proc Inst Mech Eng 31:3–18
12. Bruun A, Härkegaard G (2015) A comparative study of design code criteria for prediction of the fatigue limit under in-phase and out-of-phase tension-torsion cycles. Int J Fatigue 73:1–16
13. Germanischer Lloyd Industrial Services GmbH (2012) GL2012—Guideline for the Certification of Offshore Wind Turbines, edition 2012
14. Papuga J (2011) A survey on evaluating the fatigue limit under multiaxial loading. Int J Fatigue 33(2):153–165
15. Tufft MK (1996) Instrumented single particle impact tests using production shot: the role of velocity, incidence angle and shot size on impact response, induced plastic strain and life behavior. GE Aircraft Engines, Cincinnati, OH
16. Bate SK, Green D (1997) A review of residual stress distributions in welded joints for the defect assessment of offshore structures. Offshore Technology Report 482, AEA Technology plc
17. Landgraf RW, Chernenkoff RA (1988) Residual stress effects on fatigue of surface processed steels. In: Analytical and experimental methods for residual stress effects in fatigue, ASTM STP 1004, Philadelphia, pp 1–12


18. Hiam JR, Pietrowski R (1978) The influence of forming and corrosion on the fatigue behavior of automotive steels. SAE Trans 87:132–140, Section 1: 780006–780229
19. Cerit M, Genel K, Eksi S (2009) Numerical investigation on stress concentration of corrosion pit. Eng Failure Anal 16:2467–2472

Chapter 4

Fatigue Strength Under Spectrum Loading

Abstract In-service loading often is characterized by random variable amplitudes which cannot be sufficiently represented by a sinusoidal type of loading. Rare events may introduce microplastic straining and are followed by a huge number of cycles with moderate to medium stresses. In such a way, in-service loading creates its unique history and may lead to an early termination of serviceability. Chapter 4 looks into fundamental contexts of variable amplitude loading and how to deal with stress analysis. Finite element analysis is introduced, because nowadays an engineer can hardly do without it. A major part is devoted to the fatigue analysis of welded joints, which is an excellent playing field to look at nominal stress, structural stress and local stress concepts.

4.1 Blocked-Program Testing

Applying cyclic loads with a constant amplitude often is just a rough approximation of the true load time history, but it has been the preferred type for fatigue testing because of its simplicity with regard to load application and control. Because of the various influencing factors which were discussed in the previous chapter, it is—first of all—more than understandable to stay with constant amplitudes and, thus, to avoid an almost unmanageable complexity due to the random nature of true load time histories.

But while constant amplitude testing is beneficial for material characterization and comparative studies, the actual type of loading has to be considered for design-to-durability: According to the stress–strength interference model, the component's response and resistance to variable amplitude loading has to be managed. A linking function between constant amplitude and spectrum loading was introduced previously: The linear damage accumulation is a simple approach to use the material's response from constant amplitude loading for an assessment of the fatigue life under variable amplitude loading. But here, too, tests with spectrum loading are necessary to evaluate the accuracy of the simplistic approach that Miner's rule actually is.
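For reference, the linear damage accumulation is easy to write down. Below is a minimal sketch assuming a Basquin-type S–N curve without a knee point; the spectrum and the curve parameters are hypothetical.

```python
def miner_damage(spectrum, s_ref, n_ref, k):
    """Linear (Miner) damage sum for a load spectrum.

    spectrum -- list of (stress_amplitude, applied_cycles) pairs
    s_ref, n_ref, k -- Basquin-type S-N curve: N(S) = n_ref * (S / s_ref) ** (-k)
    """
    damage = 0.0
    for s_a, n_applied in spectrum:
        n_allowable = n_ref * (s_a / s_ref) ** (-k)
        damage += n_applied / n_allowable
    return damage  # failure is predicted when the sum reaches 1.0

# Hypothetical spectrum and S-N curve (reference: 180 MPa at 1e6 cycles, slope 5)
spectrum = [(220.0, 1_000), (160.0, 50_000), (120.0, 400_000)]
d = miner_damage(spectrum, s_ref=180.0, n_ref=1e6, k=5.0)
print(f"damage per pass: {d:.3f}, predicted passes to failure: {1.0 / d:.1f}")
```

Failure is predicted when the damage sum reaches 1.0, which is exactly the assumption the spectrum tests discussed below put to the test.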


Fig. 4.1 Gaßner 8-step blocked-program test

If an amplitude changes in a load time history, this is formally acknowledged as variable amplitude loading, even if only a small number of different amplitude levels occur. In 1939, Ernst Gaßner introduced a procedure for the approximation of fluctuating loads with variable amplitudes: a blocked-program test with a Gaussian-type distribution of a total number of eight different amplitudes (Fig. 4.1). The new standard proposed by Gaßner was a balanced approach, taking into consideration the need for a more realistic type of loading and the technically limited test rig hardware at that time. A basic sequence of that blocked program consisted of 500,000 cycles and a constant mean stress. The peak stress cycle occurred once within the basic sequence.

To evaluate the validity of that approximation, Gaßner performed an extraordinary and remarkable study using true random loading [1]: He used the spring deflection of a passenger car rear axle as the operational load for a series of notched specimens which were kinematically linked to the rear suspension. Hence, that was an actual variable amplitude test which in no way differed from the true operational loading and included mean stress changes, rest periods as well as corrosion. The corresponding load spectra were counted simultaneously using the level-crossing method. From that data, damage-equivalent 8-step blocked-program tests were derived and performed on laboratory test rig hardware. The purpose was to see if the fatigue life in actual service was equivalent to that in the laboratory under the blocked-program test. Actually, the blocked-program test gave a longer life than the random load test, and thus it was proven to be a non-conservative approximation of random operational conditions.
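The level-crossing counting mentioned above reduces a load history to the number of times each reference level is crossed in the upward direction. A minimal sketch; counting standards additionally apply hysteresis filtering and class binning.

```python
def level_crossings(series, levels):
    """Count upward crossings of each reference level in a load history."""
    counts = {level: 0 for level in levels}
    for prev, cur in zip(series, series[1:]):
        for level in levels:
            if prev < level <= cur:  # signal rises through the level
                counts[level] += 1
    return counts

signal = [0, 2, -1, 3, -2, 1, -3, 4, -2, 0]
print(level_crossings(signal, levels=[-2, -1, 0, 1, 2, 3]))
# -> {-2: 1, -1: 3, 0: 4, 1: 4, 2: 3, 3: 2}
```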


Even if the amplitude distribution and the sequence length of blocked-program and random tests are pretty much the same, random loading will shorten the fatigue life by factors of 3–10, depending on material, stress concentration and amplitude distribution. Although the 8-step blocked-program test was the very first approach to simulating random operational conditions, the lifetime differences to actual service loads were too large to use it as a general standard. Soon after servo-hydraulic test rig hardware became available in the 1960s, any type of random loading became applicable for testing.

4.2 Standardized Random-Type Testing

With the availability of closed-loop servo-hydraulic test systems, it was possible to use variable amplitude load standards which were derived for aircraft and helicopters, offshore structures, wind turbines or rolling-mill drives. Hence, a more realistic type of loading for those structures can be applied without spending too much effort on a complete data acquisition and load file development. Application-specific load standards were a huge step toward service load simulation and were extensively used for the

• determination of fatigue-related parameters and allowable stress levels in fatigue design,
• comparative fatigue life evaluation of notched specimens and components made from different materials,
• determination of lifetime scatter,
• evaluation of the accuracy level of math modeling related to lifetime and crack propagation,
• examination of the effect of surface treatment such as shot peening.

The intention of such standardized random-type testing was not to fully approve a specific design, but to have an operational load simulation at an early stage of product development and testing. It was a level beyond the 8-step blocked-program test and helped significantly to move fatigue testing from research into an industry focus.

In 1990, European car manufacturers released standardized load sequences for wheel suspension components—a work that was jointly coordinated by Fraunhofer LBF and IABG in Germany. Those load sequences—known as CARLOS (car loading standard)—specified vertical, lateral and longitudinal force time histories to perform uniaxial fatigue tests. CARLOS actually was a series of fixed load sequences which simulated forces at the tire contact position. The forces themselves were treated as independent from each other and came primarily from data acquisition on public roads. Road load data of about 10 different vehicles—covering a wide range of gross vehicle weight from 1100 to 1800 kg and engine power from 50 to 175 kW—was used.


Fig. 4.2 CARLOS extrapolated frequency distribution for 40,000 km. Data from [2]

Five different road types—rough roads, city roads, normal as well as poor country roads and highways—were considered, simulating a driving distance of 40,000 km for an individual CARLOS sequence. A mileage of approximately 6000 km was measured for each load direction, and cyclic loads within a range smaller than 3% of the car's axle weight were neglected, assuming that these load cycles do not contribute any damaging effects. The data was low-pass filtered at 30 Hz, and a number of 64 load levels were introduced before the cumulative frequency analyses were performed using the rainflow counting method without consideration of the static loads. At that stage, it was proven that the influences from different drivers as well as those from the route were much higher than the effects from the individual vehicles. Hence, stitching the load data from all vehicles together into a uniform load sequence was a valid approach (Fig. 4.2).

Since the load sequences originally consisted of significantly more than one million load cycles to cover a driving distance of just 40,000 km, the total effort to test a vehicle's whole life would be much too high. Assuming that parts made from metals do not get any damage from load cycles which are below 50% of the 'endurance limit,' the number of load cycles was reduced to 83,000 for the longitudinal forces, 95,000 for the laterals and 136,000 for the verticals. The final CARLOS spectra recognized the static loads too and can be applied for uniaxial fatigue testing of suspension-type specimens and components.

Four years later, an initiative for a multiaxial load standard—'CARLOS multi'—was finalized to further improve the load simulation quality for those suspension components by taking into account the different forces acting in different directions which occur at the same time. Again, the three load directions and additionally braking forces were examined, and the load time histories were created using a so-called guide function concept to ensure a proper correlation between the individual forces. These are time-related functions using parameters which give an unambiguous result for a specific type of loading.


The 'guide function for cornering' (GFC), as an example, was calculated by the summation of the lateral forces from both sides within a frequency range of 0–0.65 Hz, which then was corrected by the longitudinal forces. Hence, a thorough understanding of vehicle dynamics was key to getting that job done and to creating realistic load spectra for the usage of passenger cars on public roads.

By means of this service-like loading, product development is supported much better than by using the peak loads only: While a constant peak amplitude for the vertical direction leads to a small number of cycles-to-failure, the CARLOS spectrum gives a result that is significantly different from the simplistic approach using constant amplitudes (Fig. 4.3). For the same equivalent damage value—e.g., D = 1—the number of cycles-to-failure of the CARLOS spectrum is actually 1280 times the number of the constant amplitude approach. Such a huge difference is understandable because of the CARLOS spectrum characteristics, which almost follow a straight-line distribution and, thus, do not introduce huge damage from the medium to high load levels. In other words: By using realistic variable amplitude loading instead of a pretty conservative 'worst case scenario,' the suspension component can be designed much more mass efficiently. Since design-to-durability means to manage the stress–strength interference model, an adequate sizing of the component relies on the availability of proper load information too. For suspension parts, the CARLOS load standards help to reduce unnecessarily conservative assumptions derived from constant amplitudes approaching peak loads.

Counting the number of cycles-to-failure is the only way to compare fatigue life related to constant and variable amplitudes. While for constant amplitude loading sufficient information is given by the single amplitude itself, variable amplitude loading is not fully characterized by the single peak but needs additional information about the shape of the spectrum. A convex type of spectrum—such as a Gaussian spectrum derived from a stationary random process—may give an increased number of cycles-to-failure of about 200, while a straight-line distribution may lead to a 2000-times bigger number compared to constant amplitudes.

Fig. 4.3 Damage accumulation: CARLOS multi versus constant amplitude for vertical direction
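The rainflow counting used to build such spectra can also be sketched compactly. The following is a simplified three-point rainflow counter; production implementations follow ASTM E1049, treat the residue and half-cycles explicitly, and bin the results into classes. The short signal is made up.

```python
def turning_points(series):
    """Reduce a load-time history to its local extremes (peaks and valleys)."""
    tp = []
    for x in series:
        if tp and x == tp[-1]:
            continue                       # ignore repeated values
        if len(tp) >= 2 and (tp[-1] - tp[-2]) * (x - tp[-1]) > 0:
            tp[-1] = x                     # same direction: the ramp continues
        else:
            tp.append(x)
    return tp

def rainflow(series):
    """Simplified three-point rainflow count.

    Returns closed cycles as (range, mean) tuples plus the residue of
    unclosed turning points.
    """
    cycles, stack = [], []
    for point in turning_points(series):
        stack.append(point)
        # close a cycle whenever the newest range swallows the previous one
        while len(stack) >= 3 and abs(stack[-1] - stack[-2]) >= abs(stack[-2] - stack[-3]):
            a, b = stack[-3], stack[-2]
            cycles.append((abs(b - a), (a + b) / 2.0))
            del stack[-3:-1]               # a-b is counted; the newest point stays
    return cycles, stack

signal = [0, 2, -1, 3, -2, 1, -3, 4, -2, 0]
full_cycles, residue = rainflow(signal)
print(full_cycles)  # [(2, 1.0), (4, 1.0), (3, -0.5)]
print(residue)      # [-3, 4, -2, 0]
```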


While for a given material and specimen under, e.g., fully reversed axial loading, the Wöhler curve is unambiguous, a Gaßner curve depends on the specific spectrum for which the life data is examined. Certainly, the numbers of cycles-to-failure related to variable amplitudes are much beyond those of constant amplitudes: Since for many structures spectrum loading is more realistic, these results feed a curve which is set off to the right relative to the S–N curve and often described as a lifetime curve. Usually, the slope of this curve is flatter than that of the S–N curve, and there is no knee point (Fig. 4.4).

Using damage accumulation to calculate individual lifetime results from an S–N curve and an associated spectrum, the number of cycles-to-failure from the calculation N_calc should be close to that from the experiments N_test to validate the usability of Miner's rule. But when looking at the relative damage

D_rel = N_test/N_calc    (4.1)

there can be significant differences between those numbers, which gives a clear indication that Miner's rule does not always lead to accurate results. If the linear damage accumulation worked properly, the relative damage value would be around D_rel = 1.0, but often the mean value of a series of experiments is much lower than that; it is hard to believe that progressive damage effects appear in a linear way. At first sight, it would appear that after a crack has formed in a component, its effective fatigue limit might be expected to drop and the lower-level stress cycles become effective; thus, stress cycles of the spectrum which are below the initial fatigue limit should be damaging.

As an example, we may look at the results of a research study related to spectrum loading of case-hardened steel specimens [3]: Those specimens simulated hollow shafts under torsional loading, and some of them had a cross bore which introduces significant stress gradients similar to those of real gearbox shafts.

Fig. 4.4 Lifetime curve derived from CARLOS spectrum loading


Fig. 4.5 Relative damage of specimens under torsion using Miner’s rule. Data from [3]

For 72 different experiments with the notched as well as un-notched specimens, using three different omission levels and two different spectra, the relative damage value was statistically examined: More than three-fourths of all results showed a relative damage smaller than the theoretical value, and 10% of the results were even below D_rel = 0.3 (Fig. 4.5). For the case-hardened specimens under torsional spectrum loading, the mean value is close to D_rel = 0.5, which means that the actual experimental results showed significantly smaller life than proposed by the damage accumulation.

Such results are even more unfavorable when evaluating steel specimens under bending or axial loading: The same research group around Kotte and Eulitz from Technical University Dresden published data related to the reliability of lifetime assessment. They showed that the relative damage value for steel specimens loaded by in-plane bending is about D_rel = 0.4, and just 0.2 for specimens under axial loading. Hence, we have to accept a huge deviation from the theoretical value, and since most results are much smaller than that, Miner's rule often is non-conservative when used in its original form with D = 1.

Consequently, it is recommended to use a cumulative damage value for spectrum-related lifetime assessment in the range of 0.2–0.5 only, depending on the type of loading, mean stress fluctuations and other parameters. This limited level of accuracy of Miner's rule is hardly a surprise to us: At the beginning of this chapter we saw that—when comparing blocked-program tests


and true random loading—even with the same amplitude distribution and sequence length the random loading shortens the fatigue life by factors of 3–10, although there is no difference to be expected when using Miner’s rule.
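In practice, the recommendation above simply replaces the theoretical failure criterion D = 1.0 with a reduced allowable damage sum. A small sketch with a made-up damage-per-pass value:

```python
def passes_to_failure(damage_per_pass, d_allow=0.3):
    """Spectrum repetitions until the allowable damage sum is consumed.

    d_allow in the recommended 0.2-0.5 band replaces the theoretical
    Miner criterion of 1.0.
    """
    return d_allow / damage_per_pass

# e.g., a spectrum pass accumulating D = 0.083 per repetition:
print(passes_to_failure(0.083))       # ~3.6 passes with d_allow = 0.3
print(passes_to_failure(0.083, 1.0))  # ~12 passes with the theoretical criterion
```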

4.3 Limitations of the Miner's Rule

Originally, Miner's rule was intended to be applied for technical crack occurrences, but there is no fundamental reason why it cannot be applied to fatigue life up to any state of damage, because it lacks a serious material-mechanics background. Hence, it can be concluded that both the mean damage and the deviation about the mean often are not well aligned to the theoretical Miner values. Residual stresses can explain much of the variation, but other factors are relevant too—such as the

• variation of the relative damage rate with stress and associated changes in fatigue mechanism,
• effect of the load at which a component fails,
• effect of low-level stress cycles,
• effect of strain hardening,
• effect of fretting as well as
• crack propagation considerations.

For high-strength steels or aluminum alloys, residual stresses strongly affect cumulative damage behavior and may cause Miner's rule to either overestimate or underestimate fatigue life. These stresses may be present in specimens due to manufacturing processes and may also be produced by local yielding at points of fatigue initiation under high loads in either direction. When residual stresses are introduced either deliberately to improve fatigue performance or unintentionally during the manufacturing process, Miner's rule is not able to fully cover those effects. Residual stresses do not greatly affect the behavior of low and medium strength steels, mainly because the local mean stress is reduced close to zero on the first cycle of any significant amplitude.

With regard to the effect of strain hardening, a distinction has to be made between a monotonically increasing load and cyclic plastic straining: Plastic deformation of metals due to a single monotonically increasing load leads to strain hardening, which comes together with an increase in indentation hardness and yield stress, while cyclic plastic straining may result in either strain hardening or softening. Cyclic strain hardening means an increase in stress amplitude under constant strain conditions, or—vice versa—a decrease in strain amplitude under constant stress loading. For a number of metals subjected to different heat treatments, research studies in the 1960s showed that in general cyclic strain hardening occurs under reversed plastic loading when the ratio of UTS to yield strength is >1.4, and cyclic strain


softening occurs when that ratio is