English | 436 pages | 2022
FUNDAMENTALS OF MULTISCALE MODELING OF STRUCTURAL MATERIALS
Edited by
WENJIE XIA
Assistant Professor, North Dakota State University, USA
LUIS ALBERTO RUIZ PESTANA
Assistant Professor, University of Miami, USA
Elsevier
Radarweg 29, PO Box 211, 1000 AE Amsterdam, Netherlands
The Boulevard, Langford Lane, Kidlington, Oxford OX5 1GB, United Kingdom
50 Hampshire Street, 5th Floor, Cambridge, MA 02139, United States

Copyright © 2023 Elsevier Inc. All rights reserved.

No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording, or any information storage and retrieval system, without permission in writing from the publisher. Details on how to seek permission, further information about the Publisher’s permissions policies and our arrangements with organizations such as the Copyright Clearance Center and the Copyright Licensing Agency, can be found at our website: www.elsevier.com/permissions.

This book and the individual contributions contained in it are protected under copyright by the Publisher (other than as may be noted herein).

Notices
Knowledge and best practice in this field are constantly changing. As new research and experience broaden our understanding, changes in research methods, professional practices, or medical treatment may become necessary. Practitioners and researchers must always rely on their own experience and knowledge in evaluating and using any information, methods, compounds, or experiments described herein. In using such information or methods they should be mindful of their own safety and the safety of others, including parties for whom they have a professional responsibility. To the fullest extent of the law, neither the Publisher nor the authors, contributors, or editors, assume any liability for any injury and/or damage to persons or property as a matter of products liability, negligence or otherwise, or from any use or operation of any methods, products, instructions, or ideas contained in the material herein.

ISBN: 978-0-12-823021-3

For information on all Elsevier publications visit our website at https://www.elsevier.com/books-and-journals
Publisher: Matthew Deans
Acquisitions Editor: Dennis McGonagle
Editorial Project Manager: Isabella C. Silva
Production Project Manager: Prasanna Kalyanaraman
Cover Designer: Miles Hitchen
Typeset by STRAIVE, India
Introduction

In 2011, the US Government presented the Materials Genome Initiative (MGI), one of the most significant efforts in history focused on accelerating the design and development of the next generation of advanced materials for global competitiveness. The traditional paradigm of materials discovery relies primarily on trial-and-error experimentation, and the epitome of this approach was Edison’s assertion: “I haven’t failed. I’ve just found 10,000 ways that won’t work.” The traditional paradigm is neither efficient nor cost-effective, rendering it insufficient for current demands. The launch of the MGI seeded a paradigm shift in the way materials are discovered and optimized. In the new “materials-by-design” paradigm, quantitative theoretical and computational predictions precede manufacturing and experimental testing. As a result, computational modeling methods, which are ideally suited to predict multiscale structure-property relations, took center stage. In the computer, one can model environmental conditions that are challenging to explore experimentally. More importantly, simulations provide an ideal testbed for materials that do not necessarily exist yet. Multiscale modeling methods and simulation tools, which range from quantum mechanical methods that can reveal the electronic properties of materials to continuum models that can shed light on their macroscopic behavior as parts, have emerged as the foundation of the “materials-by-design” paradigm. In the last decade, data-driven approaches based on artificial intelligence methods have also become increasingly important, and the prospect of combining physics-based simulations and data-driven methods is particularly enticing.
It would be hard to overemphasize the potential societal impact of applying such a computational framework to structural materials such as steel, aluminum, or concrete, which are characterized by complex multiscale structures and are among the most used man-made materials worldwide. Predicting the emergent response of structural materials therefore requires a multiscale modeling framework that can investigate phenomena at multiple time and length scales. An essential step in the discovery of the next generation of sustainable, resilient, and environmentally friendly structural materials is to train scientists and engineers in computational multiscale modeling methods, which is the overarching aim of this book.
The content of this book is aimed primarily at upper-level undergraduates, graduate students, and active researchers with interest in structural materials but who may not be experts in computational modeling. The book provides a comprehensive introduction to mainstream multiscale modeling methods (Part One) and offers practical guidelines specific to a variety of structural materials (Part Two). Chapter 1 introduces density functional theory (DFT), arguably the most popular electronic structure method in materials science. In Chapter 2, we introduce atomistic molecular modeling methods, with an emphasis on molecular dynamics (MD) simulations, where the electrons are not explicitly modeled, and the interatomic interactions are captured by empirical functions. Chapter 3 focuses on particle-based mesoscale modeling techniques and coarse-grained methods, which, at the cost of chemical accuracy, can reach microscopic time and length scales with model systems that still retain some molecular features of a material. Chapter 4 focuses on reduced-order models (ROMs) that take advantage of data-driven approaches. Chapter 5 covers advances in computational continuum mechanics based on an immersogeometric formulation for large-scale modeling of free-surface flows. Chapter 6 provides an introductory overview of machine learning and data-driven techniques for materials modeling and design. The second part of the book is focused on the modeling and properties of specific classes of material systems. Chapter 7 discusses the use of bottom-up multiscale models to study the failure behavior of carbon fiber-reinforced polymer (CFRP) composites. Chapter 8 examines the molecular and multiscale mechanisms of elasticity of biopolymers that exhibit exceptional elasticity in vivo with the aim of deducing design principles and mechanisms that can be used to develop novel elastic biopolymers for medical and engineering applications. 
Chapter 9 focuses on multiscale modeling approaches to study metal additive manufacturing, from the manufacturing processes to microstructure evolution and, ultimately, mechanical properties. Finally, Chapter 10 overviews the mechanical behavior of supramolecular assemblies of two-dimensional materials simulated using coarse-grained modeling approaches.

Luis Alberto Ruiz Pestana
Wenjie Xia
Contributors

Amirhadi Alesadi Department of Civil, Construction and Environmental Engineering, North Dakota State University, Fargo, ND, United States
Francisco Manuel Andrade Pires Department of Mechanical Engineering, Faculty of Engineering, University of Porto, Porto, Portugal
Amara Arshad Materials and Nanotechnology, North Dakota State University, Fargo, ND, United States
Miguel Aníbal Bessa Faculty of Mechanical, Maritime and Materials Engineering, Delft University of Technology, Delft, The Netherlands
Jane Breslin Department of Mechanical Engineering, Clemson University, Clemson, SC, United States
Fatima Department of Mathematics, Computer Science and Physics, Roanoke College, Salem, VA; Department of Civil, Construction and Environmental Engineering, North Dakota State University, Fargo, ND, United States
Bernardo Proença Ferreira Department of Mechanical Engineering, Faculty of Engineering, University of Porto, Porto, Portugal; Faculty of Mechanical, Maritime and Materials Engineering, Delft University of Technology, Delft, The Netherlands
Daijun Hu Department of Mechanical Engineering, National University of Singapore, Singapore
Kamrun N. Keya Department of Civil, Construction and Environmental Engineering, North Dakota State University, Fargo, ND, United States
Genevieve Kunkel Department of Mechanical Engineering, University of Connecticut, Storrs, CT, United States
Zhaofan Li Department of Civil, Construction and Environmental Engineering, North Dakota State University, Fargo, ND, United States
Yangchao Liao Department of Civil, Construction and Environmental Engineering, North Dakota State University, Fargo, ND, United States
Mohammad Madani Department of Mechanical Engineering; Department of Computer Science & Engineering, University of Connecticut, Storrs, CT, United States
Zhaoxu Meng Department of Mechanical Engineering, Clemson University, Clemson, SC, United States
Wenjian Nie Department of Civil, Construction and Environmental Engineering, North Dakota State University, Fargo, ND, United States
Luis Alberto Ruiz Pestana Department of Civil and Architectural Engineering, University of Miami, Coral Gables, FL, United States
Qingping Sun College of Aerospace and Civil Engineering, Harbin Engineering University, Harbin, China
Anna Tarakanova Department of Mechanical Engineering; Department of Biomedical Engineering, University of Connecticut, Storrs, CT, United States
Sara A. Tolba Materials and Nanotechnology, North Dakota State University, Fargo, ND, United States
Lu Wang Department of Mechanical Engineering, National University of Singapore, Singapore
Yang Wang School of Materials Science and Engineering, University of Science and Technology Beijing, Beijing, China
Wenjie Xia Department of Civil, Construction and Environmental Engineering; Materials and Nanotechnology, North Dakota State University, Fargo, ND, United States
Jinhui Yan Department of Civil and Environmental Engineering, University of Illinois at Urbana-Champaign, Urbana, IL, United States
Wentao Yan Department of Mechanical Engineering, National University of Singapore, Singapore
Chengeng Yang Department of Biomedical Engineering, University of Connecticut, Storrs, CT, United States
Zhangke Yang Department of Mechanical Engineering, Clemson University, Clemson, SC, United States
Yefeng Yu Department of Mechanical Engineering, National University of Singapore, Singapore
Guowei Zhou Department of Engineering Mechanics, School of Naval Architecture, Ocean and Civil Engineering, Shanghai Jiao Tong University, Shanghai, China
Qiming Zhu Department of Civil and Environmental Engineering, University of Illinois at Urbana-Champaign, Urbana, IL, United States
Preface

Imagine a world where one could accurately predict, without the need for manufacture or experimental testing, the complex behavior of a material that may not even exist yet. In such a world, it would be possible to rationally develop new materials with tailored, optimal properties in a small fraction of the time that it currently takes. Reaching that point is the overarching goal of multiscale materials modeling. In fact, multiscale modeling has already contributed to the momentous shift of the materials discovery paradigm away from the inefficient experimental “trial-and-error” (also known as Edisonian) approach. This book, which primarily focuses on materials with mechanical or structural applications, is thus motivated by the pivotal role that multiscale modeling has played, and continues to play, in the discovery and optimization of the next generation of materials. The motivation to write and edit the book originated from the scarcity of pedagogical materials for self-instruction on the basics and applications of multiscale modeling of structural materials. Simulating and gaining insight into the behavior of structural materials across multiple time and length scales is a complex interdisciplinary affair that requires not only a deep understanding of the specific nuances associated with different classes of materials, from biological structural materials to metals and alloys, but also knowledge of diverse scientific fields ranging from algorithms and computational methods to foundational areas of physics and chemistry, such as quantum or statistical mechanics. While the development and use of structural materials often fall under the purview of engineers, most of the areas of knowledge relevant to multiscale modeling are seldom covered in the traditional engineering curriculum.
Furthermore, while an abundance of texts exists that cover each of those areas separately and in detail, very few books, if any, offer an integrated, introductory treatment of multiscale modeling of structural materials together with applications of those modeling tools to solve challenging problems. This book is our humble but ambitious attempt to fill the gray area between theory and practice, between novice and expert, and between methods and applications. Overall, as the editors and coauthors of several chapters of this book, we have tried to provide a concise, coherent set of notes aimed at upper-level undergraduates, graduate students, and active researchers who are interested in structural materials and want to get started
in the field of multiscale modeling. The book was designed to have two parts: the first part focuses on the basic computational methods that can be broadly applied to different materials systems, and the second part is dedicated to modeling specific important classes of structural materials. Given the broad scope of the book, it was not practical, or even beneficial to our intended readership, to undertake comprehensive coverage of each and every topic. Instead, we have tried to include pertinent references throughout the book, with the goal of providing a somewhat structured framework for further self-learning. We view the book as a first stop for someone who is unfamiliar with multiscale modeling of materials, wants to go beyond typical introductory texts on simulation, but does not have the time to screen the vast literature or to study multiple textbooks in parallel. Although each of us has over 10 years of experience using multiscale modeling to predict and understand materials at multiple scales—an adventure that started during our PhDs in Dr. Sinan Keten’s lab at Northwestern University—a book like this would not have been possible without the amazing contributions of world-class academics and colleagues, experts in the different relevant fields.

Wenjie Xia
Luis Alberto Ruiz Pestana
CHAPTER ONE
Electronic structure and density functional theory

Fatima (a,b), Yangchao Liao (b), Sara A. Tolba (c), Luis Alberto Ruiz Pestana (d), and Wenjie Xia (b,c)

(a) Department of Mathematics, Computer Science and Physics, Roanoke College, Salem, VA, United States
(b) Department of Civil, Construction and Environmental Engineering, North Dakota State University, Fargo, ND, United States
(c) Materials and Nanotechnology, North Dakota State University, Fargo, ND, United States
(d) Department of Civil and Architectural Engineering, University of Miami, Coral Gables, FL, United States
Contents
1. A brief introduction to electronic structure methods
2. The theoretical framework behind density functional theory
   2.1 Where does DFT come from?
   2.2 A formulation of DFT a la Kohn-Sham
   2.3 DFT levels of theory and the zoo of exchange-correlation functionals
   2.4 Where are the Van der Waals interactions in DFT?
   2.5 Basis sets
   2.6 Pseudopotentials
3. Using DFT to calculate properties of solids
   3.1 Crystal structure
   3.2 Elastic constants
   3.3 Surface energy
   3.4 Adsorption energies
   3.5 Band structure
   3.6 Density of states and absorption spectra
   3.7 Finding transition states
4. Recommended further reading
References

1. A brief introduction to electronic structure methods
Fundamentals of Multiscale Modeling of Structural Materials. https://doi.org/10.1016/B978-0-12-823021-3.00007-5. Copyright © 2023 Elsevier Inc. All rights reserved.

Understanding the complex behavior of matter at the levels of individual atoms and molecules is a first step in the quest to exploit and eventually design, from the bottom up, advanced materials with properties that are inaccessible through conventional bulk design. Quantum mechanics
(QM) is currently our best theory to describe the physical behavior of materials at the atomic and subatomic scales. In classical mechanics, systems are defined as a deterministic collection of interacting particles whose positions and velocities evolve according to Newton’s equations of motion. In contrast, in quantum mechanics, systems are described by a mathematical object known as the wavefunction, which depends on the positions of all the electrons and nuclei in the system. The wavefunction, more specifically the square of the wavefunction, represents the probability of finding the electrons in particular quantum states. The wavefunction of the electron evolves over time according to the Schrödinger equation. Unfortunately, computationally solving the many-body, time-dependent Schrödinger equation remains unfeasible beyond trivial systems. To make the problem of solving the Schrödinger equation tractable, a first approximation is to decouple the wavefunctions of the electrons and the nuclei, which is known as the Born-Oppenheimer (BO) approximation. The validity of the BO approximation lies in the fact that, because the atomic nuclei are much heavier than the electrons, electronic relaxation takes place at a much faster rate than the timescale of nuclear motion. A further simplification is to focus on solving the time-independent problem instead, which is reasonable given that most material properties of interest are associated with the equilibrium or ground state of the system and thus do not depend on time. Under the assumption of equilibrium, the scope of most electronic structure calculations (so called because we are only solving for the wavefunction of the electrons) is reduced to solving the time-independent Schrödinger equation for a system of electrons under the external potential created by the “frozen” nuclei.
However, solving even this simplified problem (i.e., electronic and time-independent) for systems of practical importance remains challenging due to the high dimensionality of the electronic wavefunction (three spatial dimensions plus spin for each electron in the system), which makes the computational cost of the calculation increase exponentially with the number of electrons in the system. Many electronic structure methods have been developed over the last several decades, based on different approximations to the many-body electronic problem, aimed at tackling problems of practical importance. These ab initio or first-principles methods, as they are also known, can be broadly classified into two families: wavefunction methods and density functional theory (DFT). Modern wavefunction methods, such as Møller-Plesset perturbation theory [1] or coupled cluster theory [2], are generally favored by computational chemists due to their high accuracy but are too expensive to
simulate systems in the condensed phase, which, however, are central to most materials science and engineering applications. In DFT, the wavefunction is replaced by the electron density as the fundamental variable to solve for in the calculations (hence the term “density” in the name). Because the electron density is only a three-dimensional scalar field, the computational cost of the calculations is dramatically reduced. The favorable balance between computational cost and accuracy has made DFT the most popular electronic structure method to study materials. In this chapter, we provide a beginner’s guide to DFT. We will cover the basic underlying theoretical principles, outline some of its main advantages and shortcomings, and provide some practical guidelines for using DFT to calculate basic material properties based on simple examples. We assume the reader has some basic knowledge of quantum mechanics. For a more in-depth theoretical treatment of electronic structure methods, we refer the reader to R.M. Martin’s book [3] on electronic structure, and for a more practical introduction to DFT (more comprehensive than what is presented in this chapter), we recommend the book by D.S. Sholl and J.A. Steckel [4].
2. The theoretical framework behind density functional theory

2.1 Where does DFT come from?

The main premise that drives the development of density functional methods is the replacement of the many-body electronic wavefunction by the three-dimensional scalar electronic density. This, of course, is not an easy task. The first method for calculating the electronic structure of atoms based on the electronic density alone was independently proposed by Thomas and Fermi in 1927 [5,6]. Although promising, the Thomas-Fermi model had a series of drawbacks, such as the inaccurate representation of the kinetic and exchange energies. Specifically, it completely ignored the exchange contributions to the energy of the system (related to the Pauli exclusion principle for same-spin electrons); as a result, it suffered from self-interaction errors (electrons interacting with their own contribution to the electron density). It also crudely approximated the kinetic energy of the electronic system by that of a uniform electron gas, even though the electron density can vary rapidly in real systems. Although those approximations predictably result in very poor quantitative predictions for molecular systems, these early DFT approaches gained some traction in solid-state applications, where the material systems are more homogeneous.
It was only in the mid-1960s that DFT took a turn toward stardom. In 1964, Hohenberg and Kohn [7] proved that the problem of solving the Schrödinger equation could be recast exactly as the problem of finding the electron density that minimizes the energy of the system (a variational principle for DFT). In the ground state (i.e., equilibrium), the energy is a unique functional of the electron density (hence the term “functional” in DFT). The Hohenberg-Kohn theorems did not, however, offer any guidance on how to find the electron density of the ground state or what the form of the unique energy functional of the electron density might be. Just a year later, in 1965, Kohn and Sham offered a practical breakthrough by formulating the problem as a set of self-consistent equations corresponding to a fictitious system of noninteracting electrons whose density is, remarkably, the same ground-state electron density of the fully interacting system [8]. The Hohenberg-Kohn and Kohn-Sham theorems set the stage for modern DFT approaches by providing both a rigorous theoretical footing (a variational principle) and a practical path forward (i.e., the Kohn-Sham equations) to solve electronic structure problems. For his foundational work on DFT, Walter Kohn won the 1998 Nobel Prize in Chemistry, which he shared with John Pople, who contributed tremendously to the development of computational methods in quantum chemistry. There is a catch to DFT, however. Although DFT is in principle an exact theory, the precise form of the energy functional (i.e., how the energy in the ground state depends on the electron density) is unknown. As a result, DFT is only an approximate method in practice, and a multitude of DFT models exist depending on how the energy functional of the electron density is described.
2.2 A formulation of DFT a la Kohn-Sham

The goal of a DFT calculation is to find the electron density n(r) associated with the ground-state energy of a system of electrons in an external potential of positively charged frozen nuclei. The time-independent electronic Schrödinger equation can be written as \hat{H}\psi = E\psi, where \hat{H} is the Hamiltonian of the system (an operator that acts on the wavefunction), \psi(r_1, r_2, \ldots, r_N) is the electronic wavefunction, which depends on the spatial coordinates r_1, r_2, \ldots, r_N of all the electrons, and E is the energy of the system. The relationship between the electronic density and the wavefunction is n(r) = |\psi(r)|^2. The first step in DFT is to try to formulate
the Schrödinger equation as a function of the electron density, n(r), instead of the wavefunction \psi(r_1, r_2, \ldots, r_N). Using atomic units, where the reduced Planck constant \hbar = 1, the mass of the electron m = 1, and the charge of the electron is 1, the electronic Hamiltonian can be written as:

    \hat{H} = -\frac{1}{2}\sum_{i=1}^{N} \nabla_i^2 + \sum_{i=1}^{N}\sum_{j>i} \frac{1}{|r_i - r_j|} - \sum_{i=1}^{N}\sum_{k=1}^{M} \frac{Z_k}{|r_i - R_k|}    (1)
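The second and third terms of Eq. (1) are classical pairwise Coulomb sums, and it can be instructive to evaluate them directly for point charges. The following sketch (in atomic units, with hypothetical particle positions chosen purely for illustration; the kinetic term, which requires the wavefunction, is omitted) shows the bookkeeping of the double sums:

```python
import numpy as np

# Toy evaluation of the classical Coulomb sums in Eq. (1) for point particles,
# in atomic units. Illustrative only: real electrons are delocalized, and the
# kinetic energy term requires the wavefunction.

def coulomb_terms(r_elec, r_nuc, Z):
    """Return (electron-electron repulsion, electron-nuclei attraction)."""
    n_e = len(r_elec)
    e_ee = sum(1.0 / np.linalg.norm(r_elec[i] - r_elec[j])
               for i in range(n_e) for j in range(i + 1, n_e))   # sum over j > i
    e_en = -sum(Z[k] / np.linalg.norm(r_elec[i] - r_nuc[k])
                for i in range(n_e) for k in range(len(r_nuc)))  # attractive, hence negative
    return e_ee, e_en

# Two electrons 1 bohr apart and one Z = 2 nucleus equidistant from both
r_e = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
r_n = np.array([[0.5, 0.0, 0.0]])
e_ee, e_en = coulomb_terms(r_e, r_n, Z=[2.0])
```

Note the signs: the electron-electron term is positive (repulsive) and the electron-nuclei term negative (attractive), matching Eq. (1).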
where the first term corresponds to the kinetic energy of the electrons, the second term to the classical electron-electron repulsion interactions, and the last term represents the attractive Coulombic interactions between each of the N electrons and the external potential created by the M nuclei with charge Z_k at fixed positions R_k. The energy of the system, which is the expectation value of the Hamiltonian operator, can be written as the sum of those three contributions, corresponding to the kinetic energy, E_kin, the electron-electron repulsion, E_ee, and the electron-nuclei interactions, E_ext:

    E = \langle \psi | \hat{H} | \psi \rangle = \int \cdots \int \psi^* \hat{H} \psi \, dr_1 \cdots dr_N = E_{kin} + E_{ee} + E_{ext}    (2)

where \psi^* is the complex conjugate of the wavefunction. The theorems by Hohenberg and Kohn state that a universal density functional exists such that E[n(r)] = E_{kin}[n(r)] + E_{ee}[n(r)] + E_{ext}[n(r)]. The easiest term is the contribution to the energy from the electron-nuclei interactions, which can be written without approximations as:

    E_{ext}[n(r)] = -\sum_{k=1}^{M} \int \frac{Z_k \, n(r)}{|r - R_k|} \, dr    (3)
The classical Coulombic repulsive interactions between electrons can be written as

    E_{Coul} = \frac{1}{2} \int\!\!\int \frac{n(r)\, n(r^*)}{|r - r^*|} \, dr \, dr^*

where electron exchange and correlation effects (i.e., quantum effects) are neglected. Furthermore, E_{Coul} also includes unphysical self-interactions between the electrons and their own contribution to the electron density. In order to account for all nonclassical effects and to remove the contribution from the self-interactions, we can add a correction term, \Delta E_{ee}, which results in the following expression for the electron-electron interactions:
    E_{ee}[n(r)] = \frac{1}{2} \int\!\!\int \frac{n(r)\, n(r^*)}{|r - r^*|} \, dr \, dr^* + \Delta E_{ee}    (4)
The kinetic energy contribution is the most problematic term to approximate. “Pure” or “orbital-free” DFT methods, such as the Thomas-Fermi model, exist, but remain inaccurate, mostly due to the poor description of the electronic kinetic energy. Alternatively, we can calculate exactly the kinetic energy of a system of noninteracting electrons as a function of the single-electron wavefunctions, \varphi_i, known as molecular orbitals (MOs), and then add a correction term, \Delta E_{kin}, to account for the interacting nature of the electrons:

    E_{kin}[n(r)] = -\frac{1}{2}\sum_{i=1}^{N} \int \varphi_i^* \nabla_i^2 \varphi_i \, dr_i + \Delta E_{kin}    (5)

While the kinetic energy term in Eq. (5) is not completely satisfactory, as it is not exclusively expressed in terms of the electron density, the electron density can be easily calculated from the single-electron MOs:

    n(r) = 2\sum_{i=1}^{N/2} \varphi_i^*(r)\, \varphi_i(r) = 2\sum_{i=1}^{N/2} |\varphi_i(r)|^2    (6)
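The double-occupancy bookkeeping of Eq. (6) is easy to verify numerically. In this sketch the “MOs” are just random orthonormal vectors on a grid, an assumption made purely for illustration; summing the resulting density over the grid recovers the number of electrons:

```python
import numpy as np

# Build n(r) from doubly occupied MOs discretized on a grid, following Eq. (6).
# The orbitals here are random orthonormal columns (illustrative stand-ins for
# real molecular orbitals).
grid_pts, n_occ = 100, 3                                   # 3 doubly occupied MOs -> 6 electrons
rng = np.random.default_rng(0)
phi, _ = np.linalg.qr(rng.normal(size=(grid_pts, n_occ)))  # orthonormal columns
n = 2.0 * np.sum(np.abs(phi) ** 2, axis=1)                 # n(r) = 2 * sum_i |phi_i(r)|^2

# Because each orbital is normalized, summing n over the grid (unit weights)
# gives 2 * n_occ, the total number of electrons.
```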
where the factor of 2 arises from the fact that two electrons with opposite spins can occupy the same MO. Putting together Eqs. (2)–(5), the total energy of the system as a function of the electron density (and of the molecular orbitals, in the case of the kinetic energy) becomes:

    E[n(r)] = -\frac{1}{2}\sum_{i=1}^{N} \int \varphi_i^* \nabla_i^2 \varphi_i \, dr_i + \frac{1}{2} \int\!\!\int \frac{n(r)\, n(r^*)}{|r - r^*|} \, dr \, dr^* - \sum_{k=1}^{M} \int \frac{Z_k \, n(r)}{|r - R_k|} \, dr + E_{xc}    (7)

where E_{xc} = \Delta E_{ee} + \Delta E_{kin} is the exchange-correlation energy, which includes the correction terms that account for the nonclassical electron correlations, the self-interactions introduced by the Coulombic term, and the difference in kinetic energy between the system of noninteracting electrons and the real interacting one. The biggest caveat of this formulation, known as Kohn-Sham DFT, is that E_{xc} is not formally known. Despite that,
approximations to the exchange-correlation functional are, generally, much more accurate than any direct approximations to the kinetic energy of the interacting system of electrons (e.g., in the Thomas-Fermi model). The different existing flavors of DFT arise from the different ways of approximating the exchange-correlation functional E_{xc}, which embodies our ignorance of the unique, exact functional of the energy. Having the functional expression of the energy of the system (Eq. 7), the next step is to find the electron density that minimizes it. Kohn and Sham, in a remarkable stroke of genius, showed that the original system of interacting electrons can be replaced by a set of self-consistent single-electron, time-independent Schrödinger equations that represent a fictitious system of noninteracting electrons whose electronic density is the same as that of the original system [8]. The Kohn-Sham equations (one for each electron) read as:
    \left[ -\frac{1}{2}\nabla^2 + V_{KS}(r) \right] \varphi_i(r) = \varepsilon_i \varphi_i(r)    (8)
where V_{KS}(r) is known as the Kohn-Sham potential (the same for all the electrons), and \varphi_i and \varepsilon_i are the molecular orbital and the single-electron contribution to the total energy of the system, respectively, corresponding to electron i. The total energy of the system is then recovered from the sum of the single-electron energies, E = \sum_i \varepsilon_i (strictly, after correcting for the double counting of the electron-electron and exchange-correlation contributions). The Kohn-Sham potential, V_{KS}(r) = \delta E[n]/\delta n, which is the functional derivative of the energy functional given in Eq. (7) with respect to the electron density, contains the following terms:

    V_{KS}(r) = \int \frac{n(r^*)}{|r - r^*|} \, dr^* - \sum_{k=1}^{M} \frac{Z_k}{|r - R_k|} + \frac{\delta E_{xc}}{\delta n}    (9)
The first term on the right-hand side of Eq. (9) is commonly known as the Hartree potential. To find the MOs that minimize the energy of the system, the Kohn-Sham equations (one for each electron) need to be solved simultaneously. To be able to write each of the equations, we need to construct V_{KS}(r) (Eq. 9), but determining the Hartree potential requires knowing the electron density n(r), which in turn depends on the same MOs (Eq. 6) that we are trying to find. To escape this circularity, we solve the system of equations through an iterative process using a self-consistent field (SCF) method, following the steps schematically illustrated in Fig. 1.
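As an aside, the Hartree potential is simply an integral over the density and can be evaluated on a grid. The following 1D sketch uses a Gaussian density and a soft-Coulomb kernel, both of which are assumptions of this toy example (the softening avoids the 1D singularity at r = r*):

```python
import numpy as np

# Toy Hartree potential V_H(x) = integral of n(x') * k(x, x') dx' on a 1D grid,
# with a soft-Coulomb kernel k = 1/sqrt((x - x')^2 + a^2) to regularize the
# singularity at x = x' (a = 1 is an arbitrary softening choice).
x = np.linspace(-10.0, 10.0, 401)
dx = x[1] - x[0]
n = np.exp(-x**2) / np.sqrt(np.pi)                           # normalized Gaussian density
kernel = 1.0 / np.sqrt((x[:, None] - x[None, :])**2 + 1.0)   # soft-Coulomb kernel matrix
v_hartree = kernel @ n * dx                                  # discretized integral
```

As expected for a repulsive potential generated by a localized charge, v_hartree is symmetric about the origin and peaks where the density does.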
Fig. 1 Self-consistent field (SCF) procedure to find the ground state of the system.
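The SCF loop of Fig. 1 can be sketched generically in a few lines. Everything model-specific is delegated to a build_hamiltonian callback; here, a 1D tight-binding chain with a density-dependent on-site term stands in for a real Kohn-Sham Hamiltonian (the model, the mixing parameter, and the convergence threshold are all assumptions of this sketch, not a production algorithm):

```python
import numpy as np

# Generic SCF loop: guess a density -> build H[n] -> solve the eigenproblem ->
# form the new density -> mix -> repeat until self-consistent (cf. Fig. 1).

def solve_scf(build_hamiltonian, n_elec, n_grid, mix=0.3, tol=1e-10, max_iter=500):
    n = np.full(n_grid, n_elec / n_grid)        # initial guess: uniform density
    for _ in range(max_iter):
        eps, phi = np.linalg.eigh(build_hamiltonian(n))  # eigenvalues ascending
        occ = phi[:, : n_elec // 2]             # doubly occupy the lowest MOs
        n_new = 2.0 * np.sum(occ**2, axis=1)    # Eq. (6) on the grid
        if np.max(np.abs(n_new - n)) < tol:
            return n_new, eps                   # self-consistency reached
        n = (1.0 - mix) * n + mix * n_new       # linear density mixing for stability
    raise RuntimeError("SCF did not converge")

def toy_hamiltonian(n):
    """1D tight-binding chain plus a mean-field on-site repulsion U*n (toy model)."""
    h = -np.eye(len(n), k=1) - np.eye(len(n), k=-1)
    return h + np.diag(0.5 * n)

density, energies = solve_scf(toy_hamiltonian, n_elec=4, n_grid=10)
```

Linear mixing is the simplest way to damp the update; real DFT codes use more sophisticated schemes (e.g., Pulay/DIIS mixing) for the same purpose.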
2.3 DFT levels of theory and the zoo of exchange-correlation functionals
The exact exchange-correlation energy functional is unknown. This functional should account for all nonclassical electron correlations, the electron-electron self-interactions arising from the Hartree potential, and the differences in the kinetic energy between the noninteracting system of electrons and the original interacting system. The accuracy of a DFT calculation, therefore, hinges on how we approximate E_{xc}[n(r)] (typically referred to as just the functional). One way of classifying the exchange-correlation functionals in DFT is based on “levels of theory,” which can be pictorially arranged like the rungs in a ladder (Fig. 2). Each higher rung in the ladder incorporates further physical ingredients into the approximation, generally making the functionals more accurate (in principle, but not always in practice) and more computationally expensive. This representation was first illustrated by John Perdew and coworkers using an analogy to the biblical Jacob’s ladder [9]. Because of the speed-accuracy tradeoff, understanding the strengths and limitations of the different approximations is essential to be able to choose the right functional for the right application. In this section, we briefly examine the first four rungs of the ladder, which are typically used in most engineering and materials science applications. All the different levels of theory in DFT (or rungs in the ladder) are built on top of the simple uniform electron gas (UEG) model, also known as jellium. In the UEG, the interacting electrons experience a positive charge that is uniformly distributed in space, thus resulting in a uniform electron density.
11
Electronic structure and density functional theory
Fig. 2 Jacob's ladder of density functional approximations to the exchange-correlation energy, which links the "Hartree world" of independent electrons to chemical accuracy [9]. Here, n(r), ∇n(r), τ, E_X^HF, and φ_a represent the electron density, the density gradient, the kinetic energy density, the Hartree-Fock exchange, and the unoccupied Kohn-Sham orbitals, respectively. The plus signs next to the quantities on the left of the ladder indicate that those are extra ingredients in the respective rung.
The reason the UEG occupies such a prominent place in DFT is that it is the only system for which the exchange and correlation energy functionals are either exact or can at least be calculated with very high accuracy [10]. In real material systems, however, the electron density is not uniform in space, and we cannot apply the UEG model directly and expect good results. A first approximation is to assume that the contribution to the exchange-correlation energy of each differential volume element of space is equal to what would be expected from the UEG model at the electron density at that particular point in space, ε_XC^UEG[n(r)], which is known as the local density approximation (LDA):

$$E_{XC}^{LDA}[n(\mathbf{r})] = \int n(\mathbf{r})\, \varepsilon_{XC}^{UEG}[n(\mathbf{r})]\, d\mathbf{r} \qquad (10)$$
The more uniform the electron density of the system (e.g., bulk metals), the better the accuracy of LDA calculations. In most molecular systems, however, the electron density can change rapidly in space, making the LDA a poor choice. In the early days, the LDA was the only approximation available, so DFT was mostly employed in solid-state physics (i.e., extended periodic systems) and had a very moderate impact on problems involving chemistry (i.e., molecules). Rungs 2 and 3 are built upon the LDA but incorporate into the description of the energy functional higher-order terms of the electron density, such as the gradient ∇n(r) or the Laplacian ∇²n(r), which results in better accuracy.
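The exchange part of ε_XC^UEG is known in closed form (the Dirac expression, ε_x = −(3/4)(3/π)^(1/3) n^(1/3) in atomic units), so the exchange piece of Eq. (10) can be evaluated on a real-space grid, as sketched below. The correlation part would require a fitted parametrization (e.g., PW92) and is omitted; the hydrogen-like test density is an illustrative choice, not one from the text.

```python
import numpy as np

def lda_exchange_energy(n, dV):
    """Exchange part of Eq. (10) on a real-space grid (atomic units).

    Uses the exact UEG (Dirac) exchange energy per electron,
    eps_x[n] = -(3/4) * (3/pi)**(1/3) * n**(1/3). The correlation part
    of eps_XC would need a fitted parametrization and is omitted here.
    """
    eps_x = -0.75 * (3.0 / np.pi) ** (1.0 / 3.0) * np.cbrt(n)
    return float(np.sum(n * eps_x) * dV)

# Illustrative density: hydrogen-like 1s, n(r) = exp(-2r)/pi, on a Cartesian grid
L, N = 16.0, 96
x = np.linspace(-L / 2, L / 2, N)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
n = np.exp(-2.0 * np.sqrt(X**2 + Y**2 + Z**2)) / np.pi
dV = (x[1] - x[0]) ** 3
Ex = lda_exchange_energy(n, dV)   # close to the analytic -0.213 Ha for this density
```

For this density the analytic (spin-unpolarized) LDA exchange energy is about −0.213 Ha, which the grid sum reproduces to within the discretization error.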
Because of that, functionals in Rungs 2 and 3, which can be considered the workhorses of DFT applications, are known as semilocal. The first successful attempt to improve the LDA came in the early 1980s [11–14] by incorporating information about the gradient of the electron density, ∇n(r), a class of functional approximations known as the generalized gradient approximation (GGA). Because there are numerous ways of incorporating information about the gradient of the electron density into the exchange-correlation energy, a vast number of GGA functionals (with names akin to Star Wars droid characters) exist in the literature. Popular examples of GGA functionals include Perdew-Burke-Ernzerhof (PBE) [15], Perdew-Wang-1991 (PW91) [16], and Becke-Lee-Yang-Parr (BLYP) [17,18], which are named after their creators. GGA functionals capture (to a degree) the nonhomogeneous nature of the electron density in real materials and are relatively accurate when the density of the system varies slowly in space. Conversely, GGAs are not particularly accurate for quantifying energy barriers of chemical reactions, which involve sharp variations in the electron density. The acceptable accuracy and computational efficiency of GGAs, however, make them one of the most widely used classes of functionals in many areas of physics, chemistry, and engineering where system size is of paramount importance. The next rung in the ladder above the GGAs corresponds to meta-GGA functionals, which incorporate, in addition to the local density and the density gradient, the Laplacian of the electron density, ∇²n(r), or the kinetic energy density, τ(r) = Σ_i |∇φ_i(r)|².
Despite historically having played a relatively discreet role, because the extra computational cost of earlier meta-GGAs did not justify their modest gain in accuracy (if any), some of the newly developed functionals, such as B97M-rV [19] and SCAN [20], arguably offer today the best tradeoff between accuracy and computational efficiency across the board [21]. In fact, both B97M-rV and SCAN have recently been used with significant success to simulate water in the condensed phase (a system notoriously hard to model) using ab initio molecular dynamics (AIMD) simulations, which employ DFT to calculate the forces between atoms and molecular dynamics (see Chapter 2) to integrate the equations of motion, and thus capture time-dependent behavior [22–25]. Moving up to Rung 4, above the meta-GGAs we have hybrid GGA functionals, which were pioneered by A.D. Becke in the 1990s [26]. It is worth mentioning at this point that the electron exchange energy can be
calculated exactly using Hartree-Fock (HF) theory, a variational wavefunction method upon which many modern quantum chemistry methods have been built. However, the quality of the electron density predicted by HF methods is relatively poor, and the best results are, in practice, achieved by mixing a fraction of the exact Hartree-Fock exchange with some semilocal treatment of exchange and correlation [10], which is what hybrid-GGA functionals do. We will not dwell on it here, but the success of combining HF and DFT exchange energies can be justified to some extent using the adiabatic connection formula [27,28]. As an example, the B3LYP functional [18,29], arguably the most popular hybrid functional, combines the following fractions and contributions to the exchange and correlation energy:

$$E_{XC}^{B3LYP} = 0.8\,E_{X}^{LDA} + 0.2\,E_{X}^{HF} + 0.72\,\Delta E_{X}^{B88} + 0.81\,E_{C}^{LYP} + 0.19\,E_{C}^{VWN} \qquad (11)$$
For B3LYP, as for other empirical hybrid functionals, the optimal mixing coefficients are usually determined by fitting to a benchmark dataset of molecules. One noteworthy hybrid-GGA functional that, similarly to its GGA predecessor, attempts to remove some of the empiricism involved in determining the mixing parameters of the different contributions to the exchange and correlation energies is PBE0, where E_XC^PBE0 = 0.25E_X^HF + 0.75E_X^PBE + E_C^PBE. Hybrid-GGA functionals are particularly useful for determining energy barriers in chemical processes, where covalent bonds can be broken and formed. They also typically predict more accurate band gaps than GGAs (too narrow) or Hartree-Fock (too wide). Hybrid-GGAs have been the gold standard for applications in molecular chemistry. On the downside, the nonlocal character of the exact HF exchange term makes hybrid functionals more computationally expensive. The optimal choice of DFT functional is a nontrivial matter and ultimately depends on the specific chemical system and application of interest. For an exhaustive review of recent DFT functionals and their performance on different tests (e.g., barrier heights, thermochemistry, noncovalent bonding), the reader is referred to the excellent review by N. Mardirossian and M. Head-Gordon [21]. For other interesting discussions on this topic, we refer the reader to the paper by K. Burke and coworkers [30] and to Section 10.2 in Sholl and Steckel's book [4].
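Once the individual exchange and correlation components have been computed, the hybrid recipes above are simply fixed linear combinations, as this small helper illustrates (the inputs would be precomputed component energies; the unit inputs in the test below are placeholders):

```python
def b3lyp_xc(e_x_lda, e_x_hf, de_x_b88, e_c_lyp, e_c_vwn):
    """Eq. (11): B3LYP exchange-correlation energy from its five components."""
    return (0.8 * e_x_lda + 0.2 * e_x_hf + 0.72 * de_x_b88
            + 0.81 * e_c_lyp + 0.19 * e_c_vwn)

def pbe0_xc(e_x_hf, e_x_pbe, e_c_pbe):
    """PBE0: 25% exact (HF) exchange, 75% PBE exchange, full PBE correlation."""
    return 0.25 * e_x_hf + 0.75 * e_x_pbe + e_c_pbe
```

Note that in B3LYP the two exchange fractions (0.8 + 0.2) and the two correlation fractions (0.81 + 0.19) each sum to one, so each physical contribution is counted exactly once.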
2.4 Where are the Van der Waals interactions in DFT?
Van der Waals or dispersion interactions, also known as London forces, arise from the interaction between an instantaneous dipole moment, caused by spontaneous fluctuations of the electron distribution in some atom or molecule, and the temporary dipole induced by the electric field of that instantaneous dipole on another atom or molecule (i.e., an instantaneous dipole-induced dipole interaction) [31]. Dispersion interactions exist between all molecules (even nonpolar ones, where the average dipole moment over time is zero), are attractive in nature, and are essential to a wide range of phenomena, from the dynamics of liquid hydrocarbons to the self-assembly of supramolecular materials [13]. London showed that two spherically symmetric atoms interact with each other according to V_vdW = −C/r⁶, where r is the distance between the atoms and C is a constant that depends on the chemical identity of the atoms. Dispersion forces, being significant over even nanometer length scales, are considered long-range and are particularly important for molecules with large surface areas (e.g., 2D materials). Dispersion interactions, which are the result of nonlocal correlations between electrons, characterized by large density gradients in regions where the electron density is small, cannot be captured by semilocal DFT functionals, no matter how sophisticated. The most common (also simple and efficient) strategy to ameliorate this shortcoming is to add an empirical correction term that accounts for the dispersion energy between each pair of atoms in the system, E_DFT-D = E_DFT + E_disp, a scheme known as dispersion-corrected DFT or simply DFT-D.
Although the exact form of the dispersion correction depends on the particular flavor of DFT-D used (e.g., DFT-D2 and DFT-D3) [32,33], a rough approximation to the dispersion interaction between a pair of atoms ij can be written as E_disp ≈ −s f_damp(r_ij) C_ij / r_ij⁶, where s is a scaling factor that depends on the XC functional used, f_damp(r_ij) is a damping function that avoids unphysical interactions at very short range, and the C_ij are coefficients, parameterized a priori, that depend on the types of atoms interacting. An alternative to simple empirical corrections for capturing dispersion interactions is a relatively newer class of DFT functionals that include nonlocal electron correlations, which are essential to capture long-range van der Waals interactions. Nonlocal correlation DFT functionals have been developed primarily over the last decade; some notable examples include the different versions of vdW-DF [34] as well as VV10 [35] and subsequent improvements [36], among others. Some of these nonlocal functionals have
been combined with powerful semilocal functionals, e.g., SCAN-rVV10 [37] or B97M-rV [38], to form truly formidable functionals.
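The pairwise DFT-D correction described above can be sketched in a few lines. The Fermi-type damping function and per-atom parameters follow the general shape of Grimme's D2 scheme, but the specific parameter values in the example are illustrative, not tabulated ones.

```python
import numpy as np

def dispersion_energy(coords, c6, r0, s6=1.0, d=20.0):
    """DFT-D2-style pairwise correction:
        E_disp = -s6 * sum_{i<j} f_damp(r_ij) * C6_ij / r_ij**6
    with a Fermi-type damping f_damp(r) = 1 / (1 + exp(-d*(r/(R0_i + R0_j) - 1)))
    that switches the correction off at short range, where the functional
    already describes the interaction. c6 and r0 are per-atom parameters;
    C6_ij = sqrt(C6_i * C6_j) follows the D2 combination rule.
    """
    E = 0.0
    for i in range(len(coords)):
        for j in range(i + 1, len(coords)):
            rij = float(np.linalg.norm(coords[i] - coords[j]))
            c6ij = np.sqrt(c6[i] * c6[j])
            fdamp = 1.0 / (1.0 + np.exp(-d * (rij / (r0[i] + r0[j]) - 1.0)))
            E -= s6 * fdamp * c6ij / rij**6
    return E
```

At long range the damping factor approaches one and the correction recovers the London −C/r⁶ form; well inside the sum of the cutoff radii it is smoothly switched off.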
2.5 Basis sets
The goal of a DFT simulation is to calculate the electron density and the energy of a system in the ground state by solving the Kohn-Sham equations (Eq. 8), which are formulated in terms of single-electron molecular orbitals. In practice, to solve this system of equations, we expand the molecular orbitals as a sum of predetermined functions (the basis set) multiplied by unknown coefficients. Once the MOs have been expanded, the goal of the DFT calculation becomes to determine those expansion coefficients. There are two major types of functions, or basis sets, used in the expansion of the MOs: localized and extended functions. Localized basis sets are particularly useful for describing isolated molecules, because their wavefunction actually decays to zero at long range, and thus they have typically been preferred by the chemistry community. On the other hand, extended functions are ideal for describing bulk periodic systems, which have typically attracted the attention of the physics community.

2.5.1 Localized basis sets

A common practice today, first introduced by Roothaan in the context of Hartree-Fock theory [39], is to expand the MOs as a linear combination of atomic orbitals (LCAO):

$$\varphi_i(\mathbf{r}) = \sum_{\mu=1}^{L} C_{\mu i}\, \eta_\mu(\mathbf{r}) \qquad (12)$$
where {Cμi} are coefficients found by solving the KS equations, and ημ(r) are the atomic orbitals, which are predetermined basis functions centered at the nuclei of the atoms and are known as the basis set (BS). Back in the 1950s, when the LCAO method was first developed, basis functions were chosen to mimic the eigenfunctions (i.e., atomic orbitals) of the hydrogen atom, which are known as Slater-type orbitals (STO). Because numerical integration using STOs is difficult and the atomic orbitals of the hydrogen atom are somewhat limiting, today, basis functions are chosen using more pragmatic criteria based on the type of system, the phenomena one needs to study, and the computational efficiency. According to the variational principle, the more flexible the representation of the MOs is, the more accurate the DFT calculations will be. Popular in the quantum chemistry community
are Gaussian-type orbitals (GTOs), which allow for efficient numerical integration and take the form:

$$\eta^{GTO} = N\, x^{l} y^{m} z^{n}\, e^{-\alpha r^2} \qquad (13)$$
where α determines how compact the orbital is (a large α corresponds to a more compact orbital, and a small α to a more diffuse one), N is a normalization factor that ensures ⟨η_μ|η_μ⟩ = 1, and L = l + m + n categorizes the GTO as an s-function (L = 0), p-function (L = 1), d-function (L = 2), etc. GTOs can be further combined to form contracted Gaussian functions (CGFs), which are the foundation of most basis sets currently available in quantum chemistry software. Besides the size of the basis set, depending on the system studied, it may or may not be important to include polarization or diffuse functions in the BS. Two of the most broadly used basis sets today are the split-valence BSs developed by Pople and coworkers (e.g., the 6-311+G basis set) [40] and the correlation-consistent BSs created by Dunning and others (e.g., aug-cc-pVDZ) [41]. Both are available in most DFT programs. Although any conceivable molecular orbital can, in principle, be represented by Eq. (12) if enough terms are included (known as the complete basis set limit), in practice the basis set must be finite due to computational limitations. The finite nature of the basis set limits the complexity of the electron density that can be described, and thus it can be an important source of error in DFT calculations. In general, larger basis sets are more accurate but more computationally expensive. If possible, a good practice is to study the convergence of the energy or other properties of the system with respect to the basis set size, as the true ground state can only really be reached in the complete basis set limit. Another significant source of error that arises from the finite nature of the basis set, and which is particularly problematic when computing interaction energies between two molecules, is known as the basis set superposition error (BSSE). Let's say that E_AB is the energy of a system where monomers A and B are interacting.
In this combined system, each of the monomers in the complex AB can borrow the overlapping basis functions of the other monomer (which is not possible when they are in isolation), thus effectively increasing the flexibility of the description of the wavefunction, which, consistent with the variational principle, can lead to a lower energy of the combined system. In other words, the basis set for the complex AB is artificially enhanced with respect to that for the monomers A or B, which ultimately leads to an artificial stabilization of the complex. The most common approach to estimating the
BSSE is known as the counterpoise method, introduced by Boys and Bernardi [42]. For a recent review on the topic, we refer the reader to Ref. [43].

2.5.2 Plane waves

To understand why plane-wave basis sets are the natural choice to describe extended periodic systems such as metals, we first need to introduce the concepts of crystal structure and reciprocal space, which are fundamental in solid-state physics. For a more in-depth treatment beyond what is presented in this section, we recommend the introductory text by S.H. Simon [44]. A Bravais lattice is an infinite array of points generated by discrete translation operations of a unit cell defined by some primitive vectors. The structure of a Bravais lattice is captured by three primitive vectors a1, a2, and a3 (Fig. 3), and a lattice vector in the extended system is defined as R = m1a1 + m2a2 + m3a3, where m1, m2, and m3 are integers. The simplest primitive cell of a Bravais lattice is the Wigner-Seitz cell, which contains just one lattice point and is constructed by applying a Voronoi tessellation around a point in the crystal lattice. There are 14 possible Bravais lattices, or symmetry groups, in three-dimensional space. Of those, three are cubic crystal systems (i.e., have a unit cell in the shape of a cube): simple cubic (SC), body-centered cubic (BCC), and face-centered cubic (FCC).
Fig. 3 Examples of cubic crystal structures. The unit vectors along the x, y, and z Cartesian coordinates are i, j, and k, respectively. (a) Polonium has a simple cubic (SC) lattice structure with a lattice constant of |a1| = |a2| = |a3| = 3.359 Å. (b) Iron has a body-centered cubic (BCC) lattice structure, a1 = ai, a2 = aj, a3 = (i + j + k)a/2, with a lattice constant of a = 2.8665 Å. (c) Sodium chloride (NaCl) has a face-centered cubic (FCC) lattice structure, a1 = (i + j)a/2, a2 = (j + k)a/2, a3 = (i + k)a/2, with a lattice constant of a = 5.6402 Å.
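The primitive vectors listed in Fig. 3 can be checked numerically: the primitive-cell volume |a1·(a2×a3)| equals a³ for SC but only a³/2 for the BCC and a³/4 for the FCC primitive cells, reflecting the one-lattice-point-per-primitive-cell convention. A quick sketch (arbitrary units):

```python
import numpy as np

def cell_volume(a1, a2, a3):
    """Volume of the primitive cell spanned by three lattice vectors."""
    return abs(float(np.dot(a1, np.cross(a2, a3))))

a = 1.0                      # lattice constant (arbitrary units)
i, j, k = np.eye(3)          # Cartesian unit vectors, as in Fig. 3
sc = (a * i, a * j, a * k)
bcc = (a * i, a * j, a * (i + j + k) / 2)                  # Fig. 3b primitive vectors
fcc = (a * (i + j) / 2, a * (j + k) / 2, a * (i + k) / 2)  # Fig. 3c primitive vectors
v_sc, v_bcc, v_fcc = (cell_volume(*vecs) for vecs in (sc, bcc, fcc))
```

The cubic conventional cell of volume a³ therefore contains one lattice point for SC, two for BCC, and four for FCC.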
Bloch's theorem states that the solution to Schrödinger's equation for a periodic system takes the following form:

$$\psi = \sum_{\mathbf{k}} \varphi_{\mathbf{k}}(\mathbf{r}) = \sum_{\mathbf{k}} u_{\mathbf{k}}(\mathbf{r})\, e^{i\mathbf{k}\cdot\mathbf{r}} \qquad (14)$$
where u_k(r) is a periodic function with the same periodicity as the lattice, i.e., u_k(r) = u_k(r + R), where R is a lattice vector, and k is the wave vector of the plane wave e^(ik·r). The space of vectors k is known as reciprocal space or k space, and it is the Fourier transform of the real-space lattice (the space of vectors r). The lengths of the primitive vectors in real space, a_i, and in reciprocal space, b_i, are related by |b_i| = 2π/|a_i|, which implies that larger lattice vectors in real space result in shorter lattice vectors in reciprocal space. The Wigner-Seitz cell in reciprocal space is known as the Brillouin zone (BZ), and different real-space vectors in the Wigner-Seitz cell have their correspondence in the BZ. The volumes of the real and reciprocal primitive cells are also inversely related. In practice, to calculate the wavefunction in Eq. (14), we must choose a finite number of wave vectors k. We specify the k vectors by a mesh of points in reciprocal space known as k points. In DFT simulations, the system typically consists of a supercell [4]. The larger the real-space supercell, the smaller the cell in reciprocal space becomes, and the fewer k points are required to achieve the same accuracy. If the supercell is cubic, one usually specifies an even grid of k points M × M × M, where M is an integer. For noncubic supercells, it is good practice to keep the k-point density approximately constant (i.e., use more k points along the shorter dimensions). A common method to determine the mesh of k points, implemented in most electronic structure packages, is the Monkhorst-Pack method [45]. To decide how many k points are adequate for a given system, a good practice is to investigate the convergence of the DFT calculations with respect to the number of k points. The periodic nature of u_k(r) in Eq. (14) lends itself to an expansion in terms of plane waves:

$$u_{\mathbf{k}}(\mathbf{r}) = \sum_{\mathbf{G}} c_{\mathbf{k},\mathbf{G}}\, e^{i\mathbf{G}\cdot\mathbf{r}} \qquad (15)$$

where the G are the reciprocal lattice vectors (G·R = 2πm, where R is a lattice vector and m is an integer), and the c_(k,G) are the coefficients that need to be estimated from the DFT calculation. According to Eq. (15), each
k point involves a summation over the infinite number of values that G can take. In practice, we truncate that sum:

$$\varphi_{\mathbf{k}}(\mathbf{r}) = \sum_{|\mathbf{G}| \le G_{max}} c_{\mathbf{k},\mathbf{G}}\, e^{i(\mathbf{k}+\mathbf{G})\cdot\mathbf{r}} \qquad (16)$$
where G_max is a wave vector cutoff. The cutoff should be chosen such that, above it, the wavefunction is smoothly varying (i.e., the coefficients c_(k,G) become increasingly smaller for larger |k + G|). The cutoff G_max corresponds to the highest kinetic energy of the retained plane waves, E_kin = (ħ²/2m)|k + G|², and the resulting energy cutoff is a critical parameter in DFT calculations with PW basis sets. Typical energy cutoff values lie between 10 and 50 Ha [46]. Because plane waves extend throughout the whole space and are not centered at the nuclei, they implicitly include the concept of periodic boundary conditions. As a result, PWs are typically used to describe solid-state systems [10]. There are also numerical advantages to using a plane-wave basis. First, it is possible to change, using a fast Fourier transform, between a real-space representation, in which the potential energy V has a diagonal representation, and a momentum-space representation, in which the kinetic energy T is diagonal. Second, it is sufficient to monitor the eigenvalues and total energies as a function of the cutoff energy, which facilitates the control of basis-set convergence. Third, the Hellmann-Feynman forces acting on the atoms and the stresses on the unit cell may be calculated straightforwardly. A slight disadvantage of PW basis sets is that the treatment of exact exchange, for example when using hybrid functionals, becomes more difficult.
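Two of the practical choices discussed above, the Monkhorst-Pack k-point mesh and the plane-wave cutoff, can be illustrated with short sketches. The fractional k-point coordinates follow the original Monkhorst-Pack formula u_r = (2r − q − 1)/(2q); the plane-wave counter assumes a simple cubic cell and atomic units, and simply counts G vectors whose kinetic energy lies below the cutoff at the Γ point (k = 0).

```python
import numpy as np
from itertools import product

def monkhorst_pack(q1, q2, q3):
    """Fractional k-point coordinates of a q1 x q2 x q3 Monkhorst-Pack mesh.

    Multiply each row by the reciprocal lattice vectors b_i to obtain
    Cartesian k vectors.
    """
    def coords(q):
        return [(2.0 * r - q - 1.0) / (2.0 * q) for r in range(1, q + 1)]
    return np.array(list(product(coords(q1), coords(q2), coords(q3))))

def count_plane_waves(a, ecut):
    """Number of plane waves with |G|^2 / 2 <= ecut (hartree, a.u.) at k = 0
    for a simple cubic cell of side a, where |b| = 2*pi/a."""
    b = 2.0 * np.pi / a
    gmax = int(np.ceil(np.sqrt(2.0 * ecut) / b))
    count = 0
    for m in product(range(-gmax, gmax + 1), repeat=3):
        if 0.5 * b * b * (m[0]**2 + m[1]**2 + m[2]**2) <= ecut:
            count += 1
    return count
```

Note that a 3 × 3 × 3 mesh contains the Γ point while a 2 × 2 × 2 mesh does not, and that the basis size grows roughly as ecut^(3/2), which is why the cutoff must be converged carefully.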
2.6 Pseudopotentials
Core electrons are strongly bound to the nuclei and rarely participate in the chemistry between atoms or molecules. As a result, material properties barely depend on the behavior of core electrons. Furthermore, core electrons, besides being typically numerous, which makes the calculations more computationally demanding, are associated with wavefunctions that oscillate on short length scales in real space, which makes them problematic to describe using either localized or plane-wave basis sets. For those reasons, core electrons are usually not explicitly taken into account in DFT calculations. The most popular approach to capturing the effect of core electrons without explicitly representing them is the use of smooth effective potentials that mimic the combined effect of the nucleus and the core electrons, known as pseudopotentials (PPs) [47–49]. Pseudopotentials, which are offered in most DFT codes, are designed based on a single atom of a single
element, as they should not, in principle, depend on the chemical environment of that atom. PPs contain several empirical parameters that are optimized for a specific exchange-correlation functional, typically a semilocal one. Transferability of PPs to other functionals is not guaranteed, although it is routinely assumed. A drawback of PPs is that, to capture the nonlinearity of the exchange interaction between valence and core electrons in systems where the overlap between the valence and core electron densities is not negligible, elaborate nonlinear core corrections [50], which are not typically transferable, are required. An alternative approach that circumvents some of the main disadvantages of PPs is the projector-augmented wave (PAW) method, initially introduced by Blöchl [51]. The PAW method, typically combined with the frozen core approximation, transforms the rapidly oscillating core-region wavefunctions into smoother, more computationally amenable functions; unlike PPs, it accounts for the nodal features of the valence orbitals and ensures orthogonality between valence and core wavefunctions.
3. Using DFT to calculate properties of solids
Earlier in this chapter, we stated that the fundamental output of a DFT calculation is the ground-state electron density and energy of the system for a particular configuration of frozen nuclei. In this section, we illustrate how DFT calculations can be used to predict some basic physical and electronic properties of crystalline solids, such as the crystal structure, bulk modulus, band structure, density of states (DOS), optical properties, and adsorption energies. Some of these properties can be calculated directly from the DFT energy of the system or its electron density. Other properties, such as the surface structure or adsorption energies, require finding not just the ground-state energy of the system but the minimum-energy configuration of the nuclei, which can be achieved by coupling optimization schemes, such as gradient descent, to the DFT calculation of the forces on the nuclei based on the Hellmann-Feynman theorem.
3.1 Crystal structure
Predicting the crystal structure of a solid involves determining both the packing arrangement (e.g., FCC or BCC) and the equilibrium lattice constant. DFT calculations cannot be used to directly predict the crystal structure of a solid. Instead, what can be done is to compare the energy of the
system arranged in different crystal packings and identify the crystal structure with the minimum energy. To predict the equilibrium lattice constant, we can define the cohesive energy of a crystalline solid (the energy of the solid with respect to the same collection of atoms in the gas phase) as [46]:

$$E_c(a) = \frac{U(a)}{M} - E_s \qquad (17)$$
where U(a) is the energy of the system with lattice constant a, M is the total number of atoms in the crystal, and E_s is the energy of an isolated atom. Since the primitive lattice vectors and the coordinates of the nuclei depend solely on the lattice constant a, the total potential energy of the system can be written as a function of a only. We can therefore set up the nuclei in a specific crystal packing and calculate E_s as well as U(a) using DFT for systems with different lattice constants. We can then plot E_c vs a and define the equilibrium lattice constant as the one that minimizes the cohesive energy. Let's consider an example. Si has a diamond lattice structure, which consists of two interpenetrating face-centered cubic (FCC) primitive lattices (Fig. 4a). The three primitive lattice vectors of the Si crystal structure are a1 = (i + j)a/2, a2 = (j + k)a/2, a3 = (i + k)a/2, where a is the lattice constant. The primitive unit cell contains only two atoms, called the "basis," with positions R1 = 0 and R2 = (i + j + k)a/4, respectively. By performing discrete translations of this basis using linear combinations of the primitive lattice vectors, a lattice of any size can be generated
Fig. 4 (a) Ball-and-stick model of the Si crystal. The lattice parameter, a, the primitive lattice vectors, and the two atoms forming the basis (red circles) are shown. (b) Calculated cohesive energy as a function of the lattice parameter, a. The equilibrium lattice parameter, a = 5.399 Å, lies at the minimum of the curve, where the cohesive energy is Ec = 5.30 eV. The arrow points to the equilibrium point with the lowest cohesive energy.
[52]. A Si atom has 14 electrons with the configuration (1s²2s²2p⁶)3s²3p², where the electrons in parentheses are considered core and thus are represented by a pseudopotential. The DFT simulation thus comprises two nuclei and eight electrons, and the calculation is performed using the LDA. Fig. 4b shows the plot of cohesive energy vs lattice parameter. The equilibrium lattice parameter from DFT, which minimizes the energy, is a = 5.399 Å. The predicted value is 0.6% smaller than the experimentally measured lattice constant of 5.43 Å. Correspondingly, the calculated cohesive energy is 5.30 eV, which is 15% larger than the experimental value of 4.62 eV. Overall, the agreement with the experimental results is remarkable given the low computational expense. Ravindran et al. [53] used DFT to calculate the elastic constants of orthorhombic titanium disilicide (TiSi2), which were then used to obtain its bulk modulus, shear modulus, Young's modulus, and Poisson's ratio. Before performing the calculations to obtain the elastic constants, however, they carefully optimized the structural parameters of TiSi2. Because it is important to accurately predict the equilibrium structural parameters, they used both the generalized gradient approximation (GGA) and the local density approximation (LDA) exchange-correlation functionals and compared the predicted values to the corresponding experimental ones. In doing this, they first took the experimental values [54] of the a/b and c/b lattice parameter ratios, kept them constant, and optimized the equilibrium volume; then, the theoretically predicted equilibrium volume was used, kept constant, and the a/b and c/b ratios were optimized (as shown in Fig. 5). Finally, the predicted equilibrium
Fig. 5 The structural optimization curves for TiSi2 obtained from GGA calculations: (a) The total energy vs the ratios of a/b and c/b and (b) the total energy vs cell volume. The arrows point to the experimental values of lattice parameters ratios and the experimental cell volume.
structural parameters obtained from the GGA calculations were found to be in better agreement with the experimental values than the LDA predictions: the LDA calculation underestimated the equilibrium volume by 3.7% relative to the experimental value, while the GGA calculation overestimated it by only 0.5%.
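The fit-and-minimize workflow behind Fig. 4b (and behind the TiSi2 optimizations) can be condensed into a few lines: sample E_c(a) at several lattice constants, fit a smooth curve, and locate its minimum. The data below are synthetic placeholders, not the actual DFT numbers from the text.

```python
import numpy as np

def equilibrium_lattice_constant(a_vals, ec_vals, deg=3):
    """Fit E_c(a) samples with a polynomial and return (a_eq, E_c at a_eq)."""
    coeffs = np.polyfit(a_vals, ec_vals, deg)
    stationary = np.roots(np.polyder(coeffs))
    # keep real stationary points inside the sampled window, pick the lowest
    real = [r.real for r in stationary
            if abs(r.imag) < 1e-8 and a_vals.min() <= r.real <= a_vals.max()]
    a_eq = min(real, key=lambda r: np.polyval(coeffs, r))
    return float(a_eq), float(np.polyval(coeffs, a_eq))

# Synthetic example: a parabolic E_c(a) with its minimum placed at a = 5.40
a_vals = np.linspace(5.0, 5.8, 9)
ec_vals = 0.3 * (a_vals - 5.40) ** 2 - 5.30
a_eq, ec_min = equilibrium_lattice_constant(a_vals, ec_vals)
```

Fitting a smooth curve rather than simply taking the lowest sampled point lets the equilibrium lattice constant fall between the DFT sample points.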
3.2 Elastic constants
The mechanical properties of solids are related not only to their structural performance but also to other fundamental solid-state properties, such as the phonon spectrum, that are crucial for many applications. Furthermore, there are correlations between the elastic constants and other physical properties, such as the melting temperature [53,55]. The strategy to calculate the elastic constants using DFT is to apply small deformations to the equilibrium lattice, measure the changes in the total energy, and then use theoretical relations to obtain the elastic constants from that information. Because we are interested in the elastic response of the solid, only small lattice distortions need to be considered within DFT. The internal energy of a crystal under strain, δ, can be Taylor expanded as:

$$E(V, \delta) = E(V_0, 0) + V_0 \left( \sum_i \tau_i \xi_i \delta_i + \frac{1}{2} \sum_{ij} c_{ij} \delta_i \xi_i \delta_j \xi_j \right) + O(\delta^3) \qquad (18)$$

where V_0 and E(V_0, 0) are the volume and the total energy of the undeformed system, respectively; the c_ij are the elastic constants; and τ_i and δ_i represent elements of the stress and strain tensors, respectively. Here, Voigt notation is used, where the xx, yy, zz, yz, xz, and xy components of the stress and strain tensors are replaced by the subindices 1, 2, 3, 4, 5, and 6; ξ_i = 1 if the Voigt index is 1, 2, or 3, and ξ_i = 2 if the Voigt index is 4, 5, or 6. It is worth noting that the elastic behavior of a completely asymmetric material is specified by 21 independent elastic constants, while this number is 2 for isotropic materials. Since an orthorhombic crystal such as TiSi2 has nine independent elastic constants (c11, c22, c33, c44, c55, c66, c12, c13, and c23), nine different strains are needed to determine them. The first three, c11, c22, and c33, are determined by the following distortion matrices D1, D2, and D3, which correspond to straining the lattice along the x, y, and z axes, respectively:
$$D_1 = \begin{pmatrix} 1+\delta & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}, \quad D_2 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1+\delta & 0 \\ 0 & 0 & 1 \end{pmatrix}, \quad D_3 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1+\delta \end{pmatrix} \qquad (19)$$
The volume is changed by these distortions, but the symmetry of the strained lattice remains orthorhombic. By substituting the distortion matrices D1, D2, and D3 into Eq. (18), the energies associated with the distortions are obtained as:

$$E(V, \delta) = E(V_0, 0) + V_0 \left( \tau_i \delta + \frac{c_{ii}}{2}\, \delta^2 \right), \quad i = 1, 2, 3 \qquad (20)$$

Similarly, to calculate the shear-related elastic constants, namely c44, c55, and c66, we apply the following volume-conserving monoclinic shear distortions:

$$D_4 = \begin{pmatrix} \delta^* & 0 & 0 \\ 0 & \delta^* & \delta^*\delta \\ 0 & \delta^*\delta & \delta^* \end{pmatrix}, \quad D_5 = \begin{pmatrix} \delta^* & 0 & \delta^*\delta \\ 0 & \delta^* & 0 \\ \delta^*\delta & 0 & \delta^* \end{pmatrix}, \quad D_6 = \begin{pmatrix} \delta^* & \delta^*\delta & 0 \\ \delta^*\delta & \delta^* & 0 \\ 0 & 0 & \delta^* \end{pmatrix} \qquad (21)$$

where δ* = 1/(1 − δ²)^(1/3). The corresponding energies of the distortions D4, D5, and D6 can be written as:

$$E(V, \delta) = E(V_0, 0) + V_0 \left( \tau_i \delta + 2 c_{ii}\, \delta^2 \right), \quad i = 4, 5, 6 \qquad (22)$$

The three remaining elastic constants, c12, c13, and c23, can be calculated using volume-conserving orthorhombic distortions of the following forms:
$$D_7 = \begin{pmatrix} \delta^*(1+\delta) & 0 & 0 \\ 0 & \delta^*(1-\delta) & 0 \\ 0 & 0 & \delta^* \end{pmatrix}, \quad D_8 = \begin{pmatrix} \delta^*(1+\delta) & 0 & 0 \\ 0 & \delta^* & 0 \\ 0 & 0 & \delta^*(1-\delta) \end{pmatrix}, \quad D_9 = \begin{pmatrix} \delta^* & 0 & 0 \\ 0 & \delta^*(1+\delta) & 0 \\ 0 & 0 & \delta^*(1-\delta) \end{pmatrix} \qquad (23)$$

Here, the D7 distortion decreases b and increases a by the same amount, while c remains constant. The D8 distortion decreases c and increases a by the same amount, while b remains constant. The D9 distortion decreases c and increases b by the same amount, while a remains constant. Substituting the strain matrices D7, D8, and D9 into Eq. (18), the energies associated with these distortions are obtained as:

$$E(V,\delta) = E(V_0,0) + V_0 \begin{cases} (\tau_1-\tau_2)\delta + \left[\dfrac{c_{11}+c_{22}}{2} - c_{12}\right]\delta^2 \\[4pt] (\tau_1-\tau_3)\delta + \left[\dfrac{c_{11}+c_{33}}{2} - c_{13}\right]\delta^2 \\[4pt] (\tau_2-\tau_3)\delta + \left[\dfrac{c_{22}+c_{33}}{2} - c_{23}\right]\delta^2 \end{cases} \qquad (24)$$

The above relations give the values of the elastic constants c12, c13, and c23 in terms of the previously calculated elastic constants c11, c22, and c33.
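In practice, the quadratic coefficients in Eqs. (20), (22), and (24) are extracted by fitting the DFT total energies computed at several small strains. The following sketch illustrates the idea for c11 via Eq. (20); the energy-strain data below are synthetic stand-ins, not results from an actual DFT calculation.

```python
import numpy as np

def elastic_constant_from_fit(strains, energies, volume):
    """Fit E(delta) = E0 + V0*(tau*delta + (c/2)*delta**2), cf. Eq. (20),
    and return c = 2*a2/V0 from the quadratic coefficient a2.
    With energies in eV and volume in A^3, c comes out in eV/A^3
    (1 eV/A^3 = 160.2 GPa)."""
    a2, a1, a0 = np.polyfit(strains, energies, 2)
    return 2.0 * a2 / volume

# Synthetic data for illustration: c11 = 2.0 eV/A^3, V0 = 50 A^3,
# equilibrium lattice (tau1 = 0), with E(V0, 0) set to zero.
V0, c11_true = 50.0, 2.0
deltas = np.linspace(-0.01, 0.01, 9)
energies = 0.5 * c11_true * V0 * deltas**2
c11 = elastic_constant_from_fit(deltas, energies, V0)
print(f"c11 = {c11:.3f} eV/A^3 = {c11 * 160.2:.0f} GPa")
```

In a real workflow the energies would come from nine sets of DFT runs, one per distortion matrix, each converged with respect to the basis set and k-point sampling.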
3.3 Surface energy
Surfaces, which are created by cleaving a bulk material along a plane, are incredibly important for a myriad of applications (Fig. 6). DFT and surface experiments have often been used together to determine the surface structures of metals, metal oxides, nanoparticles, carbides, or sulfides, using ultrahigh-vacuum surface science techniques such as scanning tunneling microscopy, temperature-programmed desorption, X-ray diffraction, and X-ray photoelectron spectroscopy [4]. The energy needed to cleave the bulk crystal is known as the surface energy, σ. For a reversible process, we can calculate this energy by considering that the energy associated with the cutting process is equal to the energy of the two
Fig. 6 Example of a supercell. The unit cell shown in panel (a), corresponding to Si, is translated in x, y, and z to generate the multilayer slab geometry shown in panel (b).
surfaces that are created. The surface energy of a slab can be calculated from DFT using the following equation:

$$\sigma = \frac{1}{A}\left[E_{\mathrm{slab}} - nE_{\mathrm{bulk}}\right] \qquad (25)$$
where Ebulk is the energy of one atom or formula unit of the material in the bulk, Eslab is the total energy of the slab model, n is the number of atoms or formula units in the slab model, and A is the total area of the surfaces (top and bottom) in the slab model. The units of surface energy are energy per unit area, e.g., eV/Å². Because the slab and the bulk systems are simulated under different conditions (e.g., different simulation supercells and hence different k-point meshes), it is hard to assess the robustness of the surface energy coming from Eq. (25). Variables such as how many layers to include in the slab, the size of the bulk supercell, or the k points used in the DFT calculation will influence the value of the surface energy. To minimize these setting-related effects, it is considered best practice to converge each of the calculations with respect to the basis set, k-point mesh, etc.
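Once the converged total energies are in hand, Eq. (25) reduces to simple arithmetic. A minimal sketch follows; the numbers are hypothetical, not from a real calculation, and the factor of 2 accounts for the top and bottom surfaces of the slab.

```python
def surface_energy(E_slab, E_bulk_per_unit, n_units, area_xy):
    """Surface energy from Eq. (25): sigma = (E_slab - n*E_bulk) / A.

    area_xy is the cross-sectional area of the slab cell; since the slab
    exposes both a top and a bottom surface, A = 2 * area_xy.
    Returns sigma in the units implied by the inputs (e.g., eV/A^2)."""
    A = 2.0 * area_xy
    return (E_slab - n_units * E_bulk_per_unit) / A

EV_PER_A2_TO_J_PER_M2 = 16.0218  # 1 eV/A^2 = 16.0218 J/m^2

# Hypothetical energies (eV) and area (A^2) for illustration only:
sigma = surface_energy(E_slab=-483.2, E_bulk_per_unit=-48.5,
                       n_units=10, area_xy=15.0)
print(f"sigma = {sigma:.4f} eV/A^2 = {sigma * EV_PER_A2_TO_J_PER_M2:.2f} J/m^2")
```

A convergence study would repeat this calculation while increasing the number of slab layers until sigma stops changing within the desired tolerance.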
3.4 Adsorption energies
DFT simulations have been extensively used to characterize the adsorption of atoms, ions, or small molecules to substrates like metallic surfaces or larger molecular systems like proteins. Adsorption energy or binding energy calculations are important to a myriad of fields, including electrocatalysis and drug development. DFT can provide insight into the most stable (equilibrium) configuration of the adsorbate-substrate complex (e.g., physical or
chemical adsorption), the charge density difference between adsorbate and substrate, and, perhaps most importantly, the adsorption energy (Ea). The adsorption energy is defined as the difference between the total energy of the adsorbate-substrate complex, Eads+subs, and that of the adsorbate and substrate in isolation, Eads(g) and Esubs, respectively:
$$E_a = E_{\mathrm{ads+subs}} - \left(E_{\mathrm{ads(g)}} + E_{\mathrm{subs}}\right) \qquad (26)$$

Negative values of Ea correspond to strong binding between the adsorbate and the substrate.
As shown in Fig. 7, Tolba et al. [56] investigated the interaction between the H2S molecule and the anatase TiO2 (101) surface, and how doping with V4+, Mn4+, Nb4+, and Cr4+ metals affects that interaction. A 2 × 2 supercell slab of anatase TiO2 with a 15 Å vacuum was used as the pristine substrate, and one surface Ti atom was replaced with the dopant atom to simulate the doped substrate. Initially, they optimized the geometries of the pristine and doped substrate structures using DFT with the generalized gradient approximation (GGA-PW91) functional. The isolated gas-phase H2S molecule was simulated by placing the molecule in a 25 × 25 × 25 Å unit cell and optimizing its geometry with the same setup as that used for the substrate geometry optimization. Finally, the surfaces with H2S bonded to them were geometry optimized using the same setup. The total energies of all the optimized structures were then recorded, and the adsorption energy of the H2S molecule on the pristine and doped surfaces was calculated using Eq. (26). The V-doped TiO2 surface had the most negative H2S adsorption
Fig. 7 (a) Optimized structures of adsorbed H2S on TiO2 (101) surfaces. (b) TiO2 (101) surface represented by a two-TiO2-layer slab structure with a 2 × 2 supercell. (c) H2S molecule.
energy, −0.51 eV, while the Cr-doped surface had the highest (least negative) adsorption energy, −0.45 eV. The enhanced stability of H2S adsorption was found to result from the strong orbital hybridization and charge transfer between H2S and the V-doped TiO2 surface.
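The workflow above boils down to three geometry optimizations followed by Eq. (26). A minimal sketch with hypothetical total energies, chosen only to illustrate the sign convention and not taken from Ref. [56]:

```python
def adsorption_energy(E_complex, E_adsorbate, E_substrate):
    """Eq. (26): Ea = E_{ads+subs} - (E_{ads(g)} + E_{subs}).
    A negative Ea means the adsorbate binds favorably to the substrate."""
    return E_complex - (E_adsorbate + E_substrate)

# Hypothetical DFT total energies (eV) for illustration only:
Ea = adsorption_energy(E_complex=-1250.91,   # relaxed adsorbate-substrate complex
                       E_adsorbate=-11.20,   # isolated gas-phase molecule
                       E_substrate=-1239.20) # clean relaxed slab
print(f"Ea = {Ea:.2f} eV")
```

All three energies must be computed with consistent settings (same functional, basis set or plane-wave cutoff, and comparable k-point sampling), otherwise the small difference in Eq. (26) is dominated by setup noise.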
3.5 Band structure
The band structure represents the energy levels of solids, and it is often used to determine whether a material is a conductor, semiconductor, or insulator. It can also be used to determine whether the material has a direct or indirect band gap and to identify the valence and conduction bands (VB and CB). Fatima et al. [57] applied DFT to calculate the electronic band structure of a silicon nanowire (Si50H40) in its ground state, using the generalized gradient approximation (GGA-PBE) functional. The band structure is the variation of the Kohn-Sham eigenvalues along a specific k-point path in the Brillouin zone (BZ). Thus, they calculated the eigenvalues along certain high-symmetry lines in the BZ of the silicon nanowire unit cell, typically with a minimum of 10 k-points. Fig. 8 shows the band structure of Si50H40 with 17 k-points along the ⟨100⟩ growth crystallographic direction. Notably, it has a direct bandgap, as the CB minimum and VB maximum are at the same k-point (k = 0, i.e., the gamma point). Momentum dispersion is plotted as εi(k), where i runs over all calculated bands. The bandgap underestimation error of the GGA functional calculation was discussed along with a comparison to other functionals [56].
Fig. 8 (a) Side view of the optimized Si ⟨100⟩ NW model. The model is rotated slightly for clarity. The wire axis is aligned with the z-axis, with vacuum interfaces along the x and y axes; red spheres depict H atoms and blue spheres represent Si atoms. The lattice periodicity is az = 11.2 Å. (b) Band structure of the Si ⟨100⟩ nanowire; k-space was sampled with 1 × 1 × 16 k-points.
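The idea of sampling εi(k) along a k-path can be illustrated without DFT using a one-dimensional nearest-neighbor tight-binding chain, whose single band has the closed form ε(k) = ε0 − 2t cos(ka). This toy model is our own illustration and is not part of the nanowire study:

```python
import numpy as np

def tb_band_1d(k, eps0=0.0, t=1.0, a=1.0):
    """Band of a 1D nearest-neighbor tight-binding chain:
    eps(k) = eps0 - 2*t*cos(k*a)."""
    return eps0 - 2.0 * t * np.cos(k * a)

# Sample the first Brillouin zone [-pi/a, pi/a] with 17 k-points,
# mirroring the k-point path sampling described above
k_path = np.linspace(-np.pi, np.pi, 17)
band = tb_band_1d(k_path)
print(f"band minimum {band.min():.2f} at the zone center, "
      f"bandwidth {band.max() - band.min():.2f}")
```

Here the band minimum (−2t) falls at k = 0, the gamma point, which is the 1D analog of the direct-gap CB minimum discussed above; the bandwidth 4t is set by the hopping parameter t.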
3.6 Density of states and absorption spectra
To get more insight into the occupied valence band and unoccupied conduction band, the ground-state electronic density of states (DOS) is calculated using the same setup/functionals as the band structure calculations. The DOS is defined as the number of electron states with energies in the interval (E, E + dE). The key idea of a plane-wave DFT calculation is to expand the electron density in functions of the form e^(ik·r). Electrons associated with plane waves of this form have energy E = ħ²k²/(2m). Consequently, once the DFT calculation is done, the electronic DOS can be determined by integrating the resulting electron density in k-space (Fig. 9a). From the DOS plot near the band gap edges of the Si ⟨100⟩ nanowire, we see that the DOS at the valence band maximum (VBM) is higher than at the conduction band minimum (CBM), which results in faster relaxation of holes than electrons. The partial density of states for a given momentum k is defined as:

$$D_k(\varepsilon) = \sum_i \delta(\varepsilon - \varepsilon_{i,k}) \qquad (27)$$

where εi,k is the energy of orbital i at momentum k. The total density of states (DOS) is calculated as the sum of the partial DOS:

$$D(\varepsilon) = \sum_k D_k(\varepsilon) \qquad (28)$$

The electronic DOS can also give us insight into the band-to-band electronic excitations caused by UV-visible light absorption: the absorption of light waves can cause some electrons to be excited from the
Fig. 9 (a) DOS calculated on Si h100i nanowire with k space sampling at gamma point. Here, the shaded region indicates the occupied orbitals and the unshaded region depicts the unoccupied orbitals. (b) Simulated absorption spectra of Si h100i NW. Features in DOS and the most probable electronic transitions in the absorption spectra are labeled.
valence band (VB) to the higher-energy conduction band (CB). The calculated absorption spectra of the Si ⟨100⟩ NW are shown in Fig. 9b. The high-intensity peaks appear due to transitions with large oscillator strengths. In the absorption spectra, the features originate from transitions between orbitals. Each feature in the absorption spectra shown in Fig. 9b corresponds to contributions from a number of electron-hole pairs, as shown in the DOS plot. The features denoted by A, B, C, and D originate from transitions between specific orbitals [57,58]. A wavelength with a high-intensity peak corresponds to more electron transitions with large oscillator strengths.
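In a numerical implementation of Eqs. (27) and (28), the delta functions are typically broadened into narrow Gaussians and the partial DOS is summed over the sampled k-points. A self-contained sketch with toy eigenvalues (the smearing width sigma is a common practical choice, not prescribed by the text):

```python
import numpy as np

def total_dos(eigenvalues, energy_grid, sigma=0.05):
    """Total DOS from Eqs. (27)-(28): each delta(e - e_{i,k}) is broadened
    into a normalized Gaussian of width sigma; summing over all states i
    and k-points k gives D(e)."""
    e = np.asarray(eigenvalues).ravel()[:, None]              # states as a column
    g = np.exp(-(energy_grid[None, :] - e) ** 2 / (2.0 * sigma**2))
    return g.sum(axis=0) / (sigma * np.sqrt(2.0 * np.pi))

# Toy eigenvalues eps_{i,k}: rows are k-points, columns are bands
eps = np.array([[-1.0, 1.0], [-0.8, 1.2], [-0.9, 1.1]])
grid = np.linspace(-2.0, 2.0, 801)
D = total_dos(eps, grid)
# Integrating D(e) over energy recovers the total number of states (6 here)
n_states = D.sum() * (grid[1] - grid[0])
print(f"integrated DOS = {n_states:.2f}")
```

Because each broadened delta integrates to one, the energy integral of D(ε) equals the number of states, which is a useful sanity check on the smearing width and grid resolution.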
3.7 Finding transition state
In theoretical chemistry and condensed matter physics, it is often useful to identify the lowest-energy path that a system follows to transition from one stable state to another. This path is known as the minimum energy path (MEP), and it is generally used to represent a reaction coordinate for transitions [59], for example, changes in the conformation of molecules, chemical reactions, or diffusion processes in solids. The point of maximum energy along the MEP corresponds to a saddle point on the potential energy surface (PES). The activation energy barrier can be used to estimate the transition rate within harmonic transition state theory [60]. There are different types of methods for finding reaction paths and saddle points [61]. A few schemes start at a local minimum on the potential energy surface, which represents the initial state, and then trace stepwise a path of slowest ascent [62–64]. Such slowest-ascent paths, however, do not necessarily lead to saddle points. The most widely used method for finding transition states in plane-wave DFT calculations is the nudged elastic band (NEB) method developed by Hannes Jónsson and co-workers [65] as a refinement of earlier "chain-of-states" methods (Fig. 10). A chain-of-states calculation aims to define the
Fig. 10 Pictorial representation of a reaction path computed with NEB.
minimum energy path (MEP) between two local minima. This method has already been applied successfully to a broad range of problems, such as diffusion processes at metal surfaces [66], dissociative adsorption of a molecule on a surface [67,68], multiple-atom exchange processes during sputter deposition [69], contact formation between a metal tip and a surface [70], diffusion of rigid water molecules in ice [71], and atomic exchange processes at semiconductor surfaces (using a plane-wave-based DFT method to calculate the atomic forces) [72]. The implementation of the NEB method in DFT is quite straightforward. More details can be found in the recommended further reading listed below.
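The essence of NEB is a chain of images connected by springs: each interior image feels the true PES force perpendicular to the local path tangent plus a spring force along it. A bare-bones sketch on a toy two-dimensional double-well PES (our own illustrative potential, with minima at (±1, 0) and a saddle of height 1 at the origin; no climbing image or improved-tangent refinements):

```python
import numpy as np

def potential(p):
    """Toy 2D double-well PES: minima at (-1, 0) and (1, 0), saddle at (0, 0)."""
    x, y = p
    return (x**2 - 1.0)**2 + y**2

def gradient(p):
    x, y = p
    return np.array([4.0 * x * (x**2 - 1.0), 2.0 * y])

def neb(n_images=9, k_spring=1.0, step=0.01, n_steps=2000):
    """Relax a chain of images between the two minima. Interior images feel
    the true force perpendicular to the tangent plus a spring force along it."""
    x = np.linspace(-1.0, 1.0, n_images)
    y = 0.5 * np.sin(np.linspace(0.0, np.pi, n_images))  # initial guess off the MEP
    path = np.stack([x, y], axis=1)
    for _ in range(n_steps):
        new_path = path.copy()
        for i in range(1, n_images - 1):
            tau = path[i + 1] - path[i - 1]
            tau /= np.linalg.norm(tau)                    # local tangent estimate
            f_true = -gradient(path[i])
            f_perp = f_true - np.dot(f_true, tau) * tau   # PES force, tangent part removed
            f_spring = k_spring * (np.linalg.norm(path[i + 1] - path[i])
                                   - np.linalg.norm(path[i] - path[i - 1])) * tau
            new_path[i] = path[i] + step * (f_perp + f_spring)
        path = new_path
    return path

path = neb()
barrier = max(potential(p) for p in path) - potential(path[0])
print(f"estimated barrier = {barrier:.3f}")  # the true saddle height is 1.0
```

The relaxed chain straightens onto the MEP along y = 0 and the highest image approaches the saddle point. Production implementations add the climbing-image and improved-tangent refinements [72] so that one image converges exactly onto the saddle.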
4. Recommended further reading
The following are comprehensive texts on density functional theory that we found particularly helpful for self-study:
• R. G. Parr and W. Yang, Density-Functional Theory of Atoms and Molecules, Oxford University Press, Oxford, UK, 1989.
• W. Koch and M. C. Holthausen, A Chemist's Guide to Density Functional Theory, Wiley-VCH, Weinheim, 2000.
• R. M. Martin, Electronic Structure: Basic Theory and Practical Methods, Cambridge University Press, Cambridge, UK, 2004.
• D. Sholl and J. A. Steckel, Density Functional Theory: A Practical Introduction, John Wiley & Sons, 2009.
• T. Tsuneda, Density Functional Theory in Quantum Chemistry, Springer, 2014.
• F. Giustino, Materials Modelling Using Density Functional Theory: Properties and Predictions, first ed., Oxford University Press, 2014.
References
[1] C. Møller, M.S. Plesset, Note on an approximation treatment for many-electron systems, Phys. Rev. 46 (7) (1934) 618–622, https://doi.org/10.1103/PhysRev.46.618. [2] J. Čížek, On the correlation problem in atomic and molecular systems. Calculation of wavefunction components in Ursell-type expansion using quantum-field theoretical methods, J. Chem. Phys. 45 (11) (1966) 4256–4266, https://doi.org/10.1063/1.1727484. [3] R.M. Martin, Electronic Structure: Basic Theory and Practical Methods, Cambridge University Press, 2020. [4] D.S. Sholl, J.A. Steckel, Density Functional Theory: A Practical Introduction, John Wiley & Sons, 2009. [5] L.H. Thomas, The calculation of atomic fields, Math. Proc. Camb. Philos. Soc. 23 (5) (1927), https://doi.org/10.1017/S0305004100011683. [6] E. Fermi, Eine statistische Methode zur Bestimmung einiger Eigenschaften des Atoms und ihre Anwendung auf das Periodensystem der Elemente, Z. Phys. 48 (902) (1928).
[7] P. Hohenberg, W. Kohn, Inhomogeneous electron gas, Phys. Rev. 136 (3B) (1964), https://doi.org/10.1103/PhysRev.136.B864. [8] W. Kohn, L.J. Sham, Self-consistent equations including exchange and correlation effects, Phys. Rev. 140 (4A) (1965), https://doi.org/10.1103/PhysRev.140.A1133. [9] O.A. Vydrov, G.E. Scuseria, J.P. Perdew, A. Ruzsinszky, G.I. Csonka, Scaling down the Perdew-Zunger self-interaction correction in many-electron regions, J. Chem. Phys. 124 (9) (2006), https://doi.org/10.1063/1.2176608. [10] M.C.H. Wolfram Koch, A Chemist’s Guide to Density Functional Theory, second ed., Wiley-VCH Verlag GmbH, 2001. [11] J.P. Perdew, W. Yue, Accurate and simple density functional for the electronic exchange energy: generalized gradient approximation, Phys. Rev. B 33 (12) (1986) 8800–8802, https://doi.org/10.1103/PhysRevB.33.8800. [12] J.P. Perdew, K. Burke, M. Ernzerhof, Generalized gradient approximation made simple, Phys. Rev. Lett. 77 (18) (1996) 3865–3868, https://doi.org/10.1103/ PhysRevLett.77.3865. [13] J.P. Perdew, et al., Atoms, molecules, solids, and surfaces: applications of the generalized gradient approximation for exchange and correlation, Phys. Rev. B 46 (11) (1992) 6671–6687, https://doi.org/10.1103/PhysRevB.46.6671. [14] J.P. Perdew, K. Burke, Generalized gradient approximation for the exchangecorrelation hole of a many-electron system, Phys. Rev. B Condens. Matter Mater. Phys. 54 (23) (1996) 16533–16539, https://doi.org/10.1103/PhysRevB.54.16533. [15] J.P. Perdew, K. Burke, M. Ernzerhof, Erratum: generalized gradient approximation made simple (Physical Review Letters (1996) 77 (3865)), Phys. Rev. Lett. 78 (8) (1997) 1396, https://doi.org/10.1103/PhysRevLett.78.1396. [16] J.P. Perdew, et al., Erratum: Atoms, molecules, solids, and surfaces: applications of the generalized gradient approximation for exchange and correlation (Physical Review B (1993) 48, 7, (4978)), Phys. Rev. B 48 (7) (1993), https://doi.org/10.1103/PhysRevB. 48.4978.2. 
[17] A.D. Becke, Density-functional thermochemistry. III. The role of exact exchange, J. Chem. Phys. 98 (7) (1993) 5648–5652, https://doi.org/10.1063/1.464913. [18] P.J. Stephens, F.J. Devlin, C.F. Chabalowski, M.J. Frisch, Ab Initio calculation of vibrational absorption and circular dichroism spectra using density functional force fields, J. Phys. Chem. 98 (45) (1994), https://doi.org/10.1021/j100096a001. [19] N. Mardirossian, M. Head-Gordon, Mapping the genome of meta-generalized gradient approximation density functionals: the search for B97M-V, J. Chem. Phys. 142 (7) (2015), https://doi.org/10.1063/1.4907719. [20] Y. Zhang, J. Sun, J.P. Perdew, X. Wu, Comparative first-principles studies of prototypical ferroelectric materials by LDA, GGA, and SCAN meta-GGA, Phys. Rev. B 96 (3) (2017), https://doi.org/10.1103/PhysRevB.96.035143. [21] N. Mardirossian, M. Head-Gordon, Thirty years of density functional theory in computational chemistry: an overview and extensive assessment of 200 density functionals, Mol. Phys. 115 (19) (2017) 2315–2372, https://doi.org/10.1080/00268976. 2017.1333644. [22] L.R. Pestana, O. Marsalek, T.E. Markland, T. Head-Gordon, The quest for accurate liquid water properties from first principles, J. Phys. Chem. Lett. 9 (17) (2018) 5009–5016, https://doi.org/10.1021/acs.jpclett.8b02400. [23] L. Ruiz Pestana, N. Mardirossian, M. Head-Gordon, T. Head-Gordon, Ab initio molecular dynamics simulations of liquid water using high quality meta-GGA functionals, Chem. Sci. 8 (5) (2017), https://doi.org/10.1039/c6sc04711d. [24] J. Wiktor, F. Ambrosio, A. Pasquarello, Note: assessment of the SCAN +rVV10 functional for the structure of liquid water, J. Chem. Phys. 147 (21) (2017), https://doi.org/ 10.1063/1.5006146.
[25] A. Patra, J.E. Bates, J. Sun, J.P. Perdew, Properties of real metallic surfaces: effects of density functional semilocality and van der Waals nonlocality, Proc. Natl. Acad. Sci. U. S. A. 114 (44) (2017) E9188–E9196, https://doi.org/10.1073/pnas.1713320114. [26] A.D. Becke, A new mixing of Hartree-Fock and local density-functional theories, J. Chem. Phys. 98 (2) (1993) 1372–1377, https://doi.org/10.1063/1.464304. [27] J. Harris, R.O. Jones, The surface energy of a bounded electron gas, J. Phys. F Met. Phys. 4 (8) (1974) 1170–1186, https://doi.org/10.1088/0305-4608/4/8/013. [28] O. Gunnarsson, B.I. Lundqvist, Exchange and correlation in atoms, molecules, and solids by the spin-density-functional formalism, Phys. Rev. B 13 (10) (1976) 4274–4298, https://doi.org/10.1103/PhysRevB.13.4274. [29] A.D. Becke, Density-functional thermochemistry. I. The effect of the exchange-only gradient correction, J. Chem. Phys. 96 (3) (1992) 2155–2160, https://doi.org/ 10.1063/1.462066. [30] D. Rappoport, N.R.M. Crawford, F. Furche, K. Burke, Approximate density functionals: which should I choose? in: Encyclopedia of Inorganic and Bioinorganic Chemistry, John Wiley & Sons, 2011. [31] J.N. Israelachvili, Intermolecular and Surface Forces, Academic Press, 2011. [32] S. Grimme, Density functional theory with London dispersion corrections, Wiley Interdiscip. Rev. Comput. Mol. Sci. 1 (2) (2011) 211–228, https://doi.org/ 10.1002/wcms.30. [33] S. Grimme, A. Hansen, J.G. Brandenburg, C. Bannwarth, Dispersion-corrected meanfield electronic structure methods, Chem. Rev. 116 (9) (2016) 5105–5154, https://doi. org/10.1021/acs.chemrev.5b00533. [34] K. Berland, et al., van der Waals forces in density functional theory: a review of the vdW-DF method, Rep. Prog. Phys. 78 (6) (2015), https://doi.org/10.1088/00344885/78/6/066501. [35] O.A. Vydrov, T. Van Voorhis, Nonlocal van der Waals density functional: the simpler the better, J. Chem. Phys. 133 (24) (2010), https://doi.org/10.1063/1.3521275. 
[36] R. Sabatini, T. Gorni, S. De Gironcoli, Nonlocal van der Waals density functional made simple and efficient, Phys. Rev. B Condens. Matter Mater. Phys. 87 (4) (2013), https://doi.org/10.1103/PhysRevB.87.041108. [37] H. Peng, Z.H. Yang, J.P. Perdew, J. Sun, Versatile van der Waals density functional based on a meta-generalized gradient approximation, Phys. Rev. X 6 (4) (2016), https://doi.org/10.1103/PhysRevX.6.041005. [38] N. Mardirossian, L. Ruiz Pestana, J.C. Womack, C.K. Skylaris, T. Head-Gordon, M. Head-Gordon, Use of the rVV10 nonlocal correlation functional in the B97M-V density functional: defining B97M-rV and related functionals, J. Phys. Chem. Lett. 8 (1) (2017) 35–40, https://doi.org/10.1021/acs.jpclett.6b02527. [39] C.C.J. Roothaan, New developments in molecular orbital theory, Rev. Mod. Phys. 23 (2) (1951), https://doi.org/10.1103/RevModPhys.23.69. [40] W.J. Hehre, K. Ditchfield, J.A. Pople, Self-consistent molecular orbital methods. XII. Further extensions of gaussian-type basis sets for use in molecular orbital studies of organic molecules, J. Chem. Phys. 56 (5) (1972), https://doi.org/10.1063/1.1677527. [41] K.A. Peterson, T.H. Dunning, Benchmark calculations with correlated molecular wave functions. VII. Binding energy and structure of the HF dimer, J. Chem. Phys. 102 (5) (1995), https://doi.org/10.1063/1.468725. [42] S.F. Boys, F. Bernardi, The calculation of small molecular interactions by the differences of separate total energies. Some procedures with reduced errors, Mol. Phys. 19 (4) (1970) 553–566, https://doi.org/10.1080/00268977000101561. [43] R.M. Richard, B.W. Bakr, C.D. Sherrill, Understanding the many-body basis set superposition error: beyond Boys and Bernardi, J. Chem. Theory Comput. 14 (5) (2018) 2386–2400, https://doi.org/10.1021/acs.jctc.7b01232.
[44] S.H. Simon, The Oxford Solid State Basics, OUP, Oxford, 2013. [45] H.J. Monkhorst, J.D. Pack, Special points for Brillouin-zone integrations, Phys. Rev. B 13 (12) (1976) 5188–5192, https://doi.org/10.1103/PhysRevB.13.5188. [46] F. Giustino, Materials Modelling Using Density Functional Theory Properties and Predictions, first ed., Oxford University Press, 2014. [47] G. Kresse, J. Hafner, Norm-conserving and ultrasoft pseudopotentials for first-row and transition elements, J. Phys. Condens. Matter 6 (40) (1994) 8245–8257, https://doi. org/10.1088/0953-8984/6/40/015. [48] N. Troullier, J.L. Martins, Efficient pseudopotentials for plane-wave calculations, Phys. Rev. B 43 (3) (1991) 1993–2006, https://doi.org/10.1103/PhysRevB.43.1993. [49] D.R. Hamann, Generalized norm-conserving pseudopotentials, Phys. Rev. B 40 (5) (1989) 2980–2987, https://doi.org/10.1103/PhysRevB.40.2980. [50] S.G. Louie, S. Froyen, M.L. Cohen, Nonlinear ionic pseudopotentials in spin-densityfunctional calculations, Phys. Rev. B 26 (4) (1982), https://doi.org/10.1103/ PhysRevB.26.1738. [51] P.E. Bl€ ochl, Projector augmented-wave method, Phys. Rev. B 50 (24) (1994), https:// doi.org/10.1103/PhysRevB.50.17953. [52] C. Kittel, Introduction to Solid State Physics, eighth ed., Wiley Sons, New York, NY, 2004. [53] P. Ravindran, L. Fast, P.A. Korzhavyi, B. Johansson, J. Wills, O. Eriksson, Density functional theory for calculation of elastic properties of orthorhombic crystals: application to TiSi2, J. Appl. Phys. 84 (9) (1998) 4891–4904, https://doi.org/10.1063/ 1.368733. [54] G. Rosenkranz, R. Frommeyer, Microstructures and properties of the refractory compounds TiSi2 and ZrSi2, Z. Met. 83 (9) (1992) 685–689. [55] M. Nakamura, Elastic constants of some transition-metal-disilicide single crystals, Metall. Mater. Trans. A 25 (2) (1994) 331–340, https://doi.org/10.1007/BF02647978. [56] S.A. Tolba, I. Sharafeldin, N.K. 
Allam, Comparison between hydrogen production via H2S and H2O splitting on transition metal-doped TiO2 (101) surfaces as potential photoelectrodes, Int. J. Hydrog. Energy 45 (51) (2020) 26758–26769, https://doi. org/10.1016/j.ijhydene.2020.07.077. [57] Fatima, D.J. Vogel, Y. Han, T.M. Inerbaev, N. Oncel, D.S. Kilin, First-principles study of electron dynamics with explicit treatment of momentum dispersion on Si nanowires along different directions, Mol. Phys. 117 (17) (2019) 2293–2302, https://doi.org/ 10.1080/00268976.2018.1538624. [58] F. Fatima, Y. Han, D.J. Vogel, T.M. Inerbaev, N. Oncel, E.K. Hobbie, D.S. Kilin, Photoexcited electron lifetimes influenced by momentum dispersion in silicon nanowires, J. Phys. Chem. C 123 (12) (2019) 7457–7466, https://doi.org/10.1021/acs. jpcc.9b00639. [59] R.A. Marcus, On the analytical mechanics of chemical reactions. quantum mechanics of linear collisions, J. Chem. Phys. 45 (12) (1966), https://doi.org/10.1063/1.1727528. [60] G.H. Vineyard, Frequency factors and isotope effects in solid state rate processes, J. Phys. Chem. Solids 3 (1–2) (1957), https://doi.org/10.1016/0022-3697(57) 90059-8. [61] K.B. Lipkowitz, D.B. Boyd, Reviews in Computational Chemistry, vol. 5, John Wiley & Sons, 2009. [62] C.J. Cerjan, W.H. Miller, On finding transition states, J. Chem. Phys. 75 (6) (1981) 2800–2801, https://doi.org/10.1063/1.442352. [63] D.T. Nguyen, D.A. Case, On finding stationary states on large-molecule potential energy surfaces, J. Phys. Chem. 89 (19) (1985) 4020–4026, https://doi.org/10.1021/ j100265a018.
[64] W. Quapp, A gradient-only algorithm for tracing a reaction path uphill to the saddle of a potential energy surface, Chem. Phys. Lett. 253 (3–4) (1996) 286–292, https://doi.org/ 10.1016/0009-2614(96)00255-2. [65] H. Jo´nsson, G. Mills, K.W. Jacobsen, Nudged elastic band method for finding minimum energy paths of transitions, 1998, https://doi.org/10.1142/ 9789812839664_0016. [66] M. Villarba, H. Jo´nsson, Diffusion mechanisms relevant to metal crystal growth: Pt/Pt (111), Surf. Sci. 317 (1–2) (1994), https://doi.org/10.1016/0039-6028(94)90249-6. [67] G. Mills, H. Jo´nsson, G.K. Schenter, Reversible work transition state theory: application to dissociative adsorption of hydrogen, Surf. Sci. 324 (2–3) (1995) 305–337, https://doi.org/10.1016/0039-6028(94)00731-4. [68] G. Mills, H. Jo´nsson, Quantum and thermal effects in H2 dissociative adsorption: evaluation of free energy barriers in multidimensional quantum systems, Phys. Rev. Lett. 72 (7) (1994) 1124–1127, https://doi.org/10.1103/PhysRevLett.72.1124. [69] M. Villarba, H. Jo´nsson, Atomic exchange processes in sputter deposition of Pt on Pt(111), Surf. Sci. 324 (1) (1995), https://doi.org/10.1016/0039-6028(94)00631-8. [70] M.R. Sørensen, K.W. Jacobsen, H. Jo´nsson, Thermal diffusion processes in metal-tipsurface interactions: contact formation and adatom mobility, Phys. Rev. Lett. 77 (25) (1996), https://doi.org/10.1103/PhysRevLett.77.5067. [71] E.R. Batista, S.S. Xantheas, H. Jo´nsson, Molecular multipole moments of water molecules in ice Ih, J. Chem. Phys. 109 (11) (1998), https://doi.org/10.1063/1.477058. [72] G. Henkelman, B.P. Uberuaga, H. Jo´nsson, Climbing image nudged elastic band method for finding saddle points and minimum energy paths, J. Chem. Phys. 113 (22) (2000), https://doi.org/10.1063/1.1329672.
CHAPTER TWO
Atomistic molecular modeling methods
Luis Alberto Ruiz Pestana^a, Yangchao Liao^b, Zhaofan Li^b, and Wenjie Xia^b
^a Department of Civil and Architectural Engineering, University of Miami, Coral Gables, FL, United States
^b Department of Civil, Construction and Environmental Engineering, North Dakota State University, Fargo, ND, United States
Contents
1. The history and significance of atomistic simulations
2. What is atomistic modeling and what is it good for?
3. The zoo of atomistic modeling methods
4. Modeling interatomic interactions using empirical force fields
   4.1 Bonded interactions
   4.2 Nonbonded interactions
   4.3 A short comment on force field parameterization
   4.4 Challenges and limitations of empirical force fields
5. Integrating the dynamics of atoms: Molecular dynamics (MD)
6. Ensembles and molecular dynamics at constant temperature and/or pressure
7. How to calculate properties from an MD simulation
   7.1 Structural and thermodynamic properties
   7.2 Dynamical properties
8. Some odds and ends of atomistic simulations
9. Concluding remark
References
1. The history and significance of atomistic simulations
Since the pioneering simulations of hard spheres by Alder and Wainwright in 1956 [1], and the first simulations of a realistic material (liquid argon) reported by Rahman in 1964 [2], the field of atomistic modeling has evolved from a promising tool to facilitate theoretical developments into a mainstream technique routinely used to make discoveries in disciplines ranging from molecular biology to engineering mechanics, geochemistry, and energy storage, to mention but a few fields of application. Atomistic simulations, by reaching spatial and temporal resolutions hard to reach using
current experimental techniques, and by allowing precise control over the simulated system and its environmental conditions, offer a unique window to study the potentially complex behavior of materials at the molecular and nano scales. Atomistic simulations, however, are very computationally demanding. As a result, the evolution of the field has gone hand in hand with the advancement of high-performance computing, which has progressed exponentially in the last decades. Particularly revolutionary has been the advent of supercomputers, which now make it routine to simulate systems with millions of atoms over time scales of a few microseconds. Both atomistic modeling methods and supercomputing resources have been crucial contributors to establishing scientific computing and simulation as the third pillar of science, along with experiment and theory. A recent recognition of this fact is, for example, the 2013 Nobel Prize awarded to Martin Karplus, Michael Levitt, and Arieh Warshel for developing atomistic and multiscale techniques that blend quantum and classical mechanics. As with other well-established computational simulation tools, atomistic modeling has become increasingly accessible to nonspecialists through well-developed software, such as the freely available open-source program LAMMPS [3], which requires only high-level instructions to carry out a simulation. While it is our opinion that using standard atomistic methods does not necessarily require being an expert in the foundational, and often challenging, disciplines involved in atomistic and molecular modeling, which include statistical mechanics, quantum chemistry, and scientific computing, one does need sufficient understanding of what is going on under the hood to use these methods to their full potential and to do so responsibly.
Unfortunately, these areas of knowledge are not easily approachable by nonexperts, and most of the existing literature on atomistic or molecular modeling, while of great value for developers and researchers in the field, consists of books that are either too extensive and/or too advanced for the self-taught novice. This high entry barrier deters scientists unfamiliar with atomistic simulation from incorporating this tool into their research. On the other hand, there is a reasonable wealth of hands-on tutorials associated with the different atomistic simulation programs, but their aim is to teach the user how to apply the different knobs and settings of a program rather than the rationale for using those settings. The illusion of proficiency gained by being able to run simulations can lead to the misuse of the techniques. Our goal in the following sections is to provide the reader with a balanced mix of fundamentals and practical advice on performing atomistic simulations.
2. What is atomistic modeling and what is it good for?
Atomistic modeling is an amalgam of mathematical models used to describe the interactions between atoms in a system, known as force fields, combined with a set of algorithms used to generate or simulate realistic configurations of the system of interacting atoms (e.g., Monte Carlo sampling or Verlet integration). The simulated system is subject to certain thermodynamic constraints or control variables (e.g., constant temperature, volume, and number of particles) as well as boundary conditions. Using the theoretical framework of statistical mechanics, the configurations of the system generated at each step of the simulation (known as microstates) can be treated as an ensemble and used to calculate the equilibrium properties of the system in a particular state, thus linking the microscopic information obtained from the simulation to the equilibrium, macroscopic properties of the system. Atomistic simulations are typically aimed at calculating some equilibrium property X of the system. For each different configuration or microstate i of the system, the property of interest will, in general, take a different value, Xi. Statistical mechanics tells us that, for a system in thermodynamic equilibrium, the expectation value of the equilibrium property X, which corresponds to an experimental macroscopic measurement, is equal to the weighted average over the ensemble of microstates that are compatible with the given thermodynamic control variables applied to the system. In other words, ⟨X⟩ = Σi piXi, where pi is the probability that the system is found in configuration i (with Σi pi = 1, obviously), and the sum runs over all the possible microstates of the system. The exact functional form of pi depends on the thermodynamic control variables imposed. For example, for an isolated system (constant number of particles, constant volume, and constant energy), each microstate is equally likely.
However, if instead the number of particles, volume, and temperature are kept constant, p_i ∝ exp(-E_i/k_BT), which means that low-energy states are much more likely than high-energy ones. From the expression X = Σ_i p_i X_i, it is easy to see that the goals of an atomistic simulation are essentially twofold. The first goal is to calculate X_i as accurately as possible, which requires simulating configurations of the system that resemble those of the real physical system, which in turn depends on the quality of the model used to describe the interatomic interactions. The second goal, because we only have access to finite resources and thus can only
sample a finite number of microscopic configurations, is to efficiently sample the microstates of the system associated with a high probability p_i (i.e., those that contribute most significantly to the ensemble average). Given the incredibly high dimensionality of the problem (the configuration of a system is determined by 3N degrees of freedom in three dimensions, where N is the number of atoms), together with the fact that most configurations of the system generated at random will be associated with low probabilities and therefore will not contribute significantly to the ensemble average, numerical methods based on uniform sampling of the configuration space are hopelessly impractical in atomistic simulations. Generating configurations of the system in proportion to the theoretical p_i requires efficient sampling algorithms. The greatest advantages of atomistic modeling methods are their predictive power and wide applicability to a myriad of material systems. From a set of minimal assumptions about how atoms interact with each other, a virtually limitless variety of systems, ranging from 2D materials to proteins, as well as a broad range of phenomena, from self-assembly to heat transfer, can be simulated and studied. No other computational tool offers that versatility. Another essential advantage of atomistic methods is their high resolution, out of reach of continuum modeling approaches and most experimental techniques, and achieved at a significantly lower cost than electronic structure methods. Furthermore, in atomistic simulations the environmental conditions can be precisely controlled and, because it is a bottom-up approach, it is possible to simulate systems that cannot currently be manufactured or synthesized, making atomistic modeling an ideal testbed for materials discovery and design, or for understanding the behavior of systems under extreme conditions that are challenging or impossible to achieve experimentally.
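The ensemble average X = Σ_i p_i X_i with Boltzmann weights p_i ∝ exp(-E_i/k_BT) can be illustrated with a toy calculation; the energies and observable values below are made-up numbers, chosen only to show how low-energy states dominate the average:

```python
import numpy as np

# Toy illustration of an ensemble average <X> = sum_i p_i * X_i with
# canonical (Boltzmann) weights p_i ~ exp(-E_i / kT). All numbers are
# illustrative, in arbitrary units.
kT = 0.6                             # thermal energy
E = np.array([0.0, 0.5, 1.0, 3.0])   # microstate energies
X = np.array([1.0, 2.0, 3.0, 10.0])  # value of the observable in each microstate

w = np.exp(-E / kT)
p = w / w.sum()        # normalized probabilities, sum(p) == 1
X_avg = np.dot(p, X)   # ensemble (expectation) value of X

print(p)      # low-energy states carry most of the weight
print(X_avg)  # much closer to the low-energy X values than a plain mean
```

Note that the high-energy state (E = 3.0) barely contributes, even though its X value is large; this is exactly why uniform random sampling of configurations is so wasteful.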
If you want to study iron crystallization in the core of Pluto, or you want to predict the mechanical properties of a new, not-yet-existing nanocomposite, you had better do it on a computer. The biggest disadvantage of atomistic simulations is their high computational cost (compared with continuum techniques), which limits the time and length scales that can be simulated. State-of-the-art classical MD simulations using empirical force fields can deal with systems containing millions of atoms (a cube a few tens of nanometers on a side) and can reach the millisecond time scale [4]. Besides such tour-de-force calculations, everyday simulations are typically in the range of a few hundred thousand
atoms and tens to a few hundred nanoseconds. The number of atoms that can be simulated is limited by how the sampling algorithm scales with the number of atoms, N, which is typically O(N logN). This, together with the fact that doubling the length scale of a system translates into an eightfold increase in the number of atoms (V ∝ L^3 and N ∝ V, and thus N ∝ L^3), means that increasing the system size quickly becomes unmanageable. Furthermore, the time that can be simulated in an MD simulation is limited by the small time step required to numerically integrate the dynamics of the fastest motion in the system, which usually corresponds to the vibrations of light, covalently bonded atoms, such as hydrogen, whose period of oscillation is about 10 fs. The time step in atomistic MD simulations typically ranges from 0.5 to 2 fs, depending on the model used for the intramolecular covalent interactions. In the case of Monte Carlo simulations there is no notion of time, but the efficiency of the calculations is limited by the small incremental variations that are required between subsequently generated configurations. The cost of an atomistic simulation can increase significantly further if the methods used go beyond simple empirical potentials or classical physics, and polarization effects (e.g., polarizable force fields), chemical reactivity (e.g., reactive force fields), the electronic structure of atoms (e.g., ab initio MD), or nuclear quantum effects (e.g., ring polymer MD) are taken into account. Current research in atomistic simulation methods is largely focused on pushing the time and length scales that can be simulated through enhanced sampling techniques [5], coarse-graining methods (see Chapter 4), or more computationally efficient models and algorithms (e.g., machine learning force fields [6,7]).
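These scaling arguments can be turned into a quick back-of-the-envelope estimate. The sketch below assumes liquid water at 1 g/cm^3 and a 2 fs time step; all numbers are illustrative:

```python
# Back-of-the-envelope cost estimate for an atomistic MD simulation of
# liquid water (assumed density and composition; illustrative only).
N_A = 6.022e23          # Avogadro's number, 1/mol
rho = 1.0               # g/cm^3, liquid water
M = 18.0                # g/mol
atoms_per_molecule = 3  # H2O

def n_atoms(box_nm):
    """Number of atoms in a cubic box of side box_nm nanometers."""
    volume_cm3 = (box_nm * 1e-7) ** 3
    molecules = rho * volume_cm3 / M * N_A
    return atoms_per_molecule * molecules

# Doubling the box side gives ~8x more atoms (N ~ L^3):
print(n_atoms(5.0))   # ~1.25e4 atoms
print(n_atoms(10.0))  # ~1.0e5 atoms

# Integration steps needed to reach 100 ns with a 2 fs time step:
steps = 100e-9 / 2e-15
print(steps)  # 5e7 steps
```

The 50 million force evaluations needed for a routine 100 ns trajectory make clear why the per-step cost of the force field dominates the practical limits discussed above.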
3. The zoo of atomistic modeling methods
Many atomistic modeling methods exist, and choosing the right atomistic technique—that which captures the physics of the problem of interest at a minimal computational cost—is a crucial step in the modeling pipeline. To make an informed decision, one needs to know what options are available and understand the scope, advantages, and drawbacks of those different options. In this section, we offer a bird’s-eye view of the landscape of atomistic simulation methods. The simple definition of atomistic modeling that we gave at the beginning of the previous section reveals the two major axes that we will here use to classify atomistic methods: (1) the description of the
interatomic interactions, and (2) the sampling algorithm used to generate configurations of the system. Molecular dynamics methods where the electrons are modeled explicitly and the energy of the system and interatomic forces are calculated using electronic structure methods, such as density functional theory (DFT), are known as ab initio molecular dynamics (AIMD). The dynamics of the nuclei in AIMD are still treated classically, and depending on how the equations of motion are integrated, two popular flavors of AIMD exist: Born-Oppenheimer MD (BOMD) and Car-Parrinello MD (CPMD) [8]. The former approach, which is typically more numerically stable, has become more popular in recent times. Using AIMD simulations, one can study, with high accuracy, systems with hundreds of atoms for tens to hundreds of picoseconds. As such, AIMD is primarily used to solve problems where the zero-temperature, single minimum energy state (the only state accessible in electronic structure calculations) is not informative enough, and instead averaging over the ensemble of microstates is required, which is the case when studying chemical reactions in the condensed phase (e.g., aqueous chemistry) [9,10]. When the electrons in the system are not treated explicitly, one enters the realm of force-field methods, where the true quantum mechanical interactions in the system are approximated, piecewise, and additively, by a collection of empirical mathematical functions and parameters, such as the equilibrium length of a covalent bond, that are fitted using data from experiments and electronic structure calculations. In a force field, the contributions to the energy of the system and the resulting interatomic forces from different types of interactions, such as covalent bonds, van der Waals forces, Coulombic interactions, or Pauli repulsion, are individually described by different functions that involve only a small number of atoms. 
The complexity of a force field can range from simple spring-mass models to polarizable models, such as the atomic multipole optimized energetics for biomolecular applications (AMOEBA [11]), or many-body and bond-order potentials, such as the embedded atom method (EAM) or ReaxFF [12,13], which can simulate chemical transformations and materials like metals that are governed by neither covalent nor ionic interactions, and where many-body effects cannot be neglected. Of course, the more sophisticated the treatment of the interatomic interactions, the higher the computational cost. There is no free lunch in atomistic modeling. Besides capturing the interatomic interactions accurately, sampling configurations of the system that significantly contribute to the ensemble
average (i.e., those with high p_i) is paramount to the success of an atomistic simulation. There are two main, qualitatively distinct, sampling approaches: molecular dynamics (MD) and Monte Carlo (MC). In MD, the configurations of the system are generated by integrating in time the classical equations of motion of the atomic nuclei. In MC, where time is not well defined and the forces between atoms do not need to be calculated, new configurations of the system are generated by introducing small stochastic perturbations (i.e., moves) to the positions of the atoms or molecules in the system, which are accepted according to a criterion that satisfies the form of the theoretical p_i (e.g., the Metropolis-Hastings algorithm). The main distinction between MD and MC is the absence of dynamics in the latter, which carries some advantages and disadvantages. On the one hand, MC allows for nonphysical moves that may help the system undergo transitions between states that are inaccessible through thermal fluctuations on the timescale of an MD simulation (e.g., across torsional barriers), thus enhancing the exploration of the configurational space of the system. On the other hand, access to the dynamics of the system in MD simulations allows the study of transport properties, such as diffusion, and the calculation of time correlation functions. Furthermore, MD offers high versatility toward studying systems under nonequilibrium conditions (e.g., ballistic impact and driven flow), which are harder to achieve in MC simulations. Another, less common, classification axis for MD methods is the level of physics, classical or quantum, employed to describe the dynamics of the atomic nuclei.
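The Metropolis acceptance rule can be sketched for a single particle in a 1D harmonic well; this is a minimal illustration in arbitrary units, not a production MC code. For U(x) = k x^2/2 sampled at temperature kT, the exact canonical average is <x^2> = kT/k, which the chain should recover:

```python
import math, random

# Minimal Metropolis Monte Carlo: one particle in a 1D harmonic well
# U(x) = 0.5*k*x^2 at temperature kT (arbitrary units).
random.seed(0)
k, kT = 1.0, 0.5
x, max_move = 0.0, 0.5
samples = []

for step in range(200000):
    x_new = x + random.uniform(-max_move, max_move)  # small trial move
    dU = 0.5 * k * (x_new**2 - x**2)                 # energy change of the move
    # Metropolis criterion: always accept downhill moves; accept uphill
    # moves with probability exp(-dU/kT), which enforces p_i ~ exp(-E_i/kT).
    if dU <= 0 or random.random() < math.exp(-dU / kT):
        x = x_new
    samples.append(x * x)  # rejected moves recount the current state

x2 = sum(samples) / len(samples)
print(x2)  # should approach the exact result kT/k = 0.5
```

Note that when a move is rejected, the current configuration is counted again; dropping rejected states is a common beginner's error that biases the averages.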
Because all atomistic modeling techniques and most electronic structure methods rely on the Born-Oppenheimer approximation (the motion of the much heavier nuclei is assumed to be decoupled from that of the electrons), whether the nuclei are treated quantum mechanically is typically independent of whether the interatomic interactions are. For example, in AIMD, the interatomic forces are computed quantum mechanically, but the nuclei are still treated classically (i.e., they obey Newton's equations of motion). On the other hand, there are empirical force fields that have been parameterized for use with nuclear quantum effects (NQEs) [14], such as zero-point energy, delocalization, or tunneling, which are particularly important for atoms with light nuclei, such as hydrogen, especially at low temperatures. Methods that attempt to reproduce (approximately) the quantum nature of the nuclear dynamics fall under the umbrella of path integral molecular dynamics
(PIMD). Path integral methods require simulating multiple coupled replicas of the system simultaneously (on the order of several dozen), which dramatically increases the cost of a simulation. The most widespread and computationally efficient PIMD methods are centroid molecular dynamics (CMD) and ring polymer molecular dynamics (RPMD), which make use of an isomorphism between the path integral formulation of quantum mechanics and the Hamiltonian of a classical ring polymer, and which significantly reduce the number of replicas needed for convergence. Because of the steep tradeoff between computational cost and accuracy, we suggest following Einstein's advice when trying to select an atomistic simulation method: "Everything should be made as simple as possible, but not simpler." If describing the electronic structure of your system is essential to your application, or force field parameters do not exist for your system, you will have to use AIMD and limit yourself to simulating systems with a few hundred atoms for times below one nanosecond. Reactive force fields, such as ReaxFF, can be a suitable alternative to AIMD for studying reactive systems, but extensive validation of the simulations is often required, as the transferability of the force-field parameters is limited. If force field parameters do not exist for your system of interest, and the system size that you need to simulate is beyond the capabilities of AIMD, you will have to parameterize your own force field, which is challenging, time-consuming, and requires a high level of expertise and domain knowledge. Empirical force fields, despite their simplicity, are still the method of choice for most applications. Regarding the sampling algorithm, both MD and MC techniques are similarly computationally efficient, and researchers typically use one or the other according to their familiarity with the respective techniques.
MD is more versatile in terms of the phenomena that can be investigated, and most atomistic simulation programs are based on MD (e.g., LAMMPS, GROMACS, NAMD, and TINKER). Lastly, if you need NQEs in your simulations, we recommend the program i-PI for its simplicity and easy integration with other simulation programs. In the sections that follow, we will center our discussion on MD, the most widely employed atomistic simulation technique. As a roadmap, Fig. 1 shows the main steps in an MD simulation, which we will discuss in some detail in the following sections.
Fig. 1 Typical steps of a molecular dynamics (MD) simulation.
4. Modeling interatomic interactions using empirical force fields
Generating new configurations of the system in an MD simulation requires calculating the forces acting on each atom i, f_i = -∂U(r)/∂r_i, where U(r) is the potential energy of the system and r = (r_1, r_2, …, r_N) are the positions of all the atoms in the system. Therefore, U(r) provides a quantitative description of how the atoms in the system interact with each other as a function of their relative positions. A force field is just the collection of analytical functions and parameters that approximate the true many-body quantum mechanical energy of the system by a sum of few-body potentials that capture different contributions to the energy (e.g., electrostatics, van der Waals, etc.). In force fields, the electrons are essentially coarse-grained, and their effect is implicitly captured by the functional forms and parameters of the different potential terms. Different functional forms are used to approximate the different types of interactions. The functional form therefore captures the physics of the interaction, and the precise parameterization of the function captures the chemistry, i.e., the quantitative differences between different chemical species, or between atoms of the same species in different chemical environments (e.g., the hybridization of a chemical
bond). For example, a covalent bond can be modeled as a harmonic spring, which, while a reasonable approximation close to equilibrium, will not capture the physics of the bond at either long or very short bond distances. The chemical environment, for example whether a carbon-carbon covalent interaction is a single, double, or triple bond, is captured by the spring constant and equilibrium bond length. Approximating the many-body energy of the electronic system by a sum of few-body potential terms carries stupendous computational savings. As a result, force field methods are routinely used to simulate condensed-phase systems with hundreds of thousands of atoms, such as biomolecules or nanostructured materials, which are completely out of reach of electronic structure methods. Table 1 offers a summary of some of the most used force fields and the types of materials that are typically modeled with them. Since condensed matter is made of interacting molecules, and molecules are made of covalently bonded atoms, we typically distinguish between two distinct groups of interactions in force fields: bonded, or intramolecular (e.g., covalent bonds), and nonbonded, or intermolecular (e.g., Coulomb electrostatics, van der Waals interactions).

Table 1 List of commonly used force fields in MD or MC simulations.

Force field family | Variants | Most applicable systems
CHARMM [1] | CGenFF, CHARMM22, CHARMM27, and CHARMM36 | Organic molecules, solutions, polymers, biochemical molecules, etc.
AMBER [2] | GAFF, GLYCAM, ff94, ff96, ff98, ff99, ff02, ff02EP, etc. | Biochemical molecules (proteins, nucleic acids, polysaccharides, etc.)
DREIDING [15] | – | Organic, biological, polymeric, and main-group inorganic molecules
OPLS [3,4] | OPLS-AA and OPLS-UA | Biological macromolecules, solutions, etc.
MMX [9,10] | MM2, MM3, MM4, MM+, etc. | Organic chemical compounds, free radicals, ions, etc.
CFF [14] | COMPASS, COMPASS II, CFF95, CFF91, and PCFF | Organic small molecules, biomolecules, molecular sieves, polymers, metals, etc.
ReaxFF [16] | – | Metals, ceramics, silicon, polymers
ClayFF [17] | – | Hydrated and multicomponent mineral systems and their interfaces with aqueous solutions

Fig. 2 Different types of interactions typically included in empirical force fields. Bond, angle, dihedral, and improper interactions are considered bonded, and electrostatics and van der Waals nonbonded.

One reason for this separation is that the models of bonded and nonbonded interactions have qualitatively different physical characters and computational requirements. For example, in contrast to covalently bonded atoms, the nonbonded neighbors of an atom change frequently during a simulation, which requires frequent updating of the list of nonbonded interactions and translates into an additional computational cost. On the other hand, because covalent interactions are very stable and expected to last over the time scale of the simulation under most conditions of interest, the bonded topology of a system does not typically need to be updated. Furthermore, because covalent interactions remain close to equilibrium under most conditions of interest, covalent bonds can be modeled by simple short-range harmonic potentials that capture their behavior around equilibrium. Fig. 2 shows the different types of interactions typically accounted for in force fields. In the following subsections, we discuss the potential terms most used to describe them.
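As a sketch of how a classical force field assembles the total potential energy from few-body terms, consider the snippet below. The functional forms are the standard ones discussed in the following subsections, but every parameter value is a generic placeholder, not taken from any published force field:

```python
import math

# Sketch of a classical force-field energy as a sum of few-body terms.
# All parameter values below are illustrative placeholders.
def bond_energy(r, k_b, r0):
    return k_b * (r - r0) ** 2          # harmonic bond stretching

def angle_energy(theta, k_a, theta0):
    return k_a * (theta - theta0) ** 2  # harmonic angle bending

def lj_energy(r, eps, sigma):
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6**2 - sr6)   # van der Waals (Lennard-Jones)

def coulomb_energy(r, qi, qj, ke=332.06):   # ke ~ kcal*A/(mol*e^2)
    return ke * qi * qj / r                  # point-charge electrostatics

# Total energy = sum of bonded + nonbonded contributions for a tiny example
U = (bond_energy(1.01, 450.0, 0.96)
     + angle_energy(math.radians(106.0), 55.0, math.radians(104.5))
     + lj_energy(3.2, 0.15, 3.15)
     + coulomb_energy(3.2, -0.8, 0.4))
print(U)
```

In a real code, each term would be summed over every bond, angle, and nonbonded pair in the system; the few-body structure is what makes the evaluation tractable for hundreds of thousands of atoms.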
4.1 Bonded interactions
We refer to bond stretching, angle bending, bond torsions (or dihedrals), and impropers, which occur between neighboring atoms within the same molecule, as bonded interactions. The behavior of a bond or an angle vibrating
about its equilibrium state can be accurately captured, in most chemical environments and at most temperatures, by a harmonic potential (Fig. 3), e.g., V(d) = k_d(d - d_0)^2 or V(θ) = k_θ(θ - θ_0)^2 for a bond and an angle potential, respectively, where d is the bond distance and θ is the angle.

Fig. 3 Harmonic potential, E_bond(r) = K(r - r_0)^2, where K = 100 kcal/mol/Å^2 is the bond spring constant, and r_0 = 2.75 Å is the equilibrium bond length.

The chemical identity of the bonded interaction, whether it is an O-H bond in water or a C-C bond in graphene, is captured by the value of the spring constant and the equilibrium bond length or angle. The different orbital hybridization of the atoms involved in the covalent interaction, which leads to different molecular configurations (e.g., linear or tetrahedral), is captured by the equilibrium parameters of the angle potential. The harmonic approximation obviously fails to describe the behavior of covalent interactions far from equilibrium, such as during bond dissociation or the short-range repulsion due to Pauli exclusion. It also becomes an inaccurate description of covalent bonds at very low temperatures, when nuclear quantum effects such as zero-point energy or tunneling can be significant, or in situations where bond dissociation is expected, for example under large deformations or in reactive environments. Anharmonicity, another feature clearly not captured by harmonic potentials, can also be important in some applications. Anharmonic models exist, such as the Morse potential, albeit at a slightly higher computational cost. Materials governed by metallic or ionic bonds would also require a treatment of the interatomic interactions beyond pairwise harmonic potentials, which, again, are only a reasonable approximation of covalent interactions close to equilibrium.
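To see where the harmonic approximation breaks down, one can compare it against a Morse potential, U(r) = D(1 - exp(-a(r - r_0)))^2, whose curvature at the minimum has been matched to the harmonic constant. All parameter values below are illustrative:

```python
import math

# Harmonic vs. Morse bond potentials (illustrative parameters).
# The Morse potential flattens toward the well depth D at large r
# (bond dissociation), which the harmonic form cannot capture.
D, a, r0 = 100.0, 1.5, 1.0   # well depth, width parameter, equilibrium length
k = 2.0 * D * a**2           # harmonic constant matching the Morse curvature

def harmonic(r):
    return 0.5 * k * (r - r0) ** 2

def morse(r):
    return D * (1.0 - math.exp(-a * (r - r0))) ** 2

# Near equilibrium the two agree; far from it they diverge:
print(harmonic(1.05), morse(1.05))  # nearly identical
print(harmonic(3.0), morse(3.0))    # harmonic keeps rising; Morse approaches D
```

The harmonic energy grows without bound, so a harmonically bonded pair can never dissociate, no matter how hard the system is pulled; the Morse energy saturates at D instead.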
While in a real system the chemical environment surrounding the bonded interactions will affect their behavior, there is a certain transferability of covalent behavior between chemical environments, which is one of the main reasons why the field of molecular mechanics has been able to prosper. For example, carbon double bonds exhibit relatively similar properties even when present in different molecules (i.e., chemical environments). Bond torsions, also known as dihedral angles or just dihedrals, are typically needed to capture the behavior of macromolecules such as proteins or polymers. Dihedral interactions involve four consecutive (i.e., covalently bonded) atoms, and the torsion angle of the central bond is measured between the two planes defined by atoms ijk and jkl (see Fig. 2). In contrast to bond stretching and angle bending, torsional potentials can exhibit multiple minima (e.g., cis and trans conformations) separated by relatively low energy barriers (2–5 kcal/mol). Because of the low energy barriers, the system can visit those different local minima, driven by thermal fluctuations alone, even during the short time scale of a simulation. It is worth noting that a harmonic potential would not allow that. Bond torsions are therefore typically described by periodic functions, such as V(φ) = k_φ[1 + cos(nφ - δ)] in the CHARMM force field (Fig. 4), where the number and location of the minima and the activation barrier between them can be tuned.

Fig. 4 Energy as a function of the dihedral angle for the CHARMM dihedral potential, E_dihedral(φ) = K[1 + cos(nφ - d)], where φ is the dihedral angle and K is the height of the energy barrier. (a) Potentials with different periodicity, i.e., different values of n, and (b) with different locations of the minima.

The last common contribution to the bonded interactions in empirical force fields is known as improper torsions, or just impropers, which are typically implemented to maintain the planarity (or not) of certain functional groups, such as hydrogen atoms in aromatic rings. Like dihedrals, impropers involve four bonded atoms; unlike dihedrals, however, the atoms do not need to be consecutively bonded. Harmonic approximations are used for improper torsions, as the aim is to describe the behavior close to equilibrium and there is typically only a single minimum state. Depending on the treatment of the bonded interactions, different classes of empirical force fields exist. Class I force fields, such as AMBER, CHARMM, DREIDING, GROMOS, or OPLS, are the most popular and, with a few minor differences, include the simple potential terms explained above: bond stretching, angle bending, dihedral torsions, and improper torsions. Class II and Class III force fields, which are more sophisticated albeit less popular, incorporate coupled terms in the bonded interactions. For example, the angle bending energy may depend on the length of the bonds defining the angle.
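The periodic dihedral term discussed above can be evaluated directly; K, n, and d below are assumed, illustrative values:

```python
import math

# CHARMM-style periodic dihedral term, U(phi) = K*(1 + cos(n*phi - d)),
# with illustrative parameters. Unlike a harmonic potential, it has n
# equivalent minima per full rotation, separated by barriers of height 2K.
K, n, d = 1.5, 3, 0.0  # kcal/mol, periodicity, phase (assumed values)

def dihedral(phi):
    return K * (1.0 + math.cos(n * phi - d))

# For n = 3, d = 0 the minima sit at phi = 60, 180, and 300 degrees,
# and the maxima (barriers of height 2K) at 0, 120, and 240 degrees:
for deg in (0, 60, 120, 180):
    print(deg, dihedral(math.radians(deg)))
```

This threefold form is the classic choice for rotation about an sp3-sp3 carbon bond, where the three staggered conformations are equivalent.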
4.2 Nonbonded interactions
Nonbonded interactions exist between atoms that do not directly share electrons with each other, whether they are in the same molecule or in different molecules. There are two types of nonbonded interactions considered in empirical force fields: point-charge electrostatics and van der Waals forces. More sophisticated force fields can also include higher-order electrostatic terms, such as dipole-dipole interactions, or polarizable parameters, which can be important for applications where complex and varying electrostatic fields exist. Point-charge electrostatic interactions are captured by a Coulombic potential, V(r) ∝ q_iq_j/r. According to this description of the system (atoms as charged point particles), the electron density of molecules, which in general is a 3D nonuniform field that depends on the local chemical environment of the molecule, is approximated by a discrete system of fixed charges assigned to each atom. The charges do not necessarily need to be placed at the center of the atomic nuclei (e.g., see the TIP4P model of water [18]), and, to be able to capture electrostatic interactions between polar molecules and not just ions, noninteger charges, known as partial charges, can be assigned to the atoms. Van der Waals forces, also known as dispersion interactions, arise from instantaneous dipole-induced dipole interactions, which exist between every atom and molecule even if they are uncharged or nonpolar. Van der Waals forces in empirical force fields are universally described by the Lennard-Jones (LJ) potential, V_LJ(r) = 4ε[(σ/r)^12 - (σ/r)^6], where r is the distance between the pair of atoms, ε is the dissociation energy or depth of the potential well, and σ is a length-scale parameter related to the equilibrium distance of two nonbonded atoms, which can easily be shown to be r_0 = 2^(1/6)σ (Fig. 5). The scaling of the attractive branch of the potential, ∝ r^-6, is physically accurate, while the repulsive branch, ∝ r^-12, is chosen for computational efficiency and does not capture the true exponential dependence with distance that corresponds to Pauli repulsion.

Fig. 5 The Lennard-Jones (LJ) potential. In the figure, r is the distance between a pair of atoms, r_cut is the cutoff distance (2.5σ is a typical value), ε is the depth of the energy minimum, and σ is a length-scale parameter related to the equilibrium distance between atoms. The dotted and dashed lines show the repulsion and attraction branches of the LJ potential, respectively.

Both the Coulomb and LJ potentials are pairwise, and in principle every atom interacts with every other atom in the system, no matter how far apart they are. However, because the nonbonded interaction energy between atoms that are far apart is much smaller than that between nearby atoms (e.g., Lennard-Jones interactions decay as r^-6), these interactions are truncated at some cutoff distance beyond which they are neglected. In Section 4.8, we discuss in detail some of the implications of truncating the pairwise potentials. Furthermore, the nonbonded interactions between bonded neighboring atoms are typically ignored because their effect is implicitly included in the parameterization of the bonded potentials. These are known as "bonded exclusion" rules. Typically, nonbonded interactions between atoms separated by one or two bonds (1-2 and 1-3 pairs) are excluded, and those between atoms separated by three bonds (1-4 pairs) are often scaled down. Exclusion rules are particularly important for modeling macromolecules.
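A minimal sketch of the LJ potential with a cutoff follows; the ε, σ, and r_cut values are illustrative:

```python
# Lennard-Jones potential with a cutoff, as used for nonbonded van der
# Waals interactions (epsilon, sigma, and cutoff values are illustrative).
def lj(r, eps=0.2, sigma=3.0, r_cut=None):
    if r_cut is not None and r >= r_cut:
        return 0.0          # interactions beyond the cutoff are neglected
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6**2 - sr6)

sigma, eps = 3.0, 0.2
r_min = 2 ** (1 / 6) * sigma   # location of the minimum, r0 = 2^(1/6)*sigma
print(r_min)                   # ~3.367 for sigma = 3.0
print(lj(r_min))               # -eps: the well depth
print(lj(10.0, r_cut=2.5 * sigma))  # 0.0 beyond r_cut = 7.5
```

Note that truncating the potential this way introduces a small discontinuity at r_cut; common remedies (shifting or smoothly switching the potential) are among the truncation issues discussed in Section 4.8.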
4.3 A short comment on force field parameterization
The parameters of the different potential terms that constitute the force field, such as the values of the atomic point charges or the equilibrium bond lengths and angles, are calculated by fitting the potential function to either experimental results (e.g., vibrational frequencies or structural information) or ab initio calculations. It is worth noting that both the experiments and the ab initio calculations carried out to parameterize the force field are typically not done on the exact system that we wish to simulate, but on simplified model systems that represent parts of the material system of interest, such as small molecular fragments in ab initio simulations or simple molecular liquids in experiments. For example, in the case of proteins, ab initio simulations of groups of two or three amino acids have traditionally been used for parameterization purposes. Because the systems that we ultimately want to simulate are much bigger and more complex (e.g., proteins) than those used to parameterize the force field (e.g., small peptide fragments), the parameterization is not always completely accurate. The quality of a simulation, therefore, depends on how far the simulated system is from the parameterization conditions. Only recently have full-scale simulations, combined with solution experiments and machine learning (ML) methods (if you are not familiar with ML methods, you can think of them as efficient optimization algorithms), been used to parameterize force fields from the top down [19].
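As a toy illustration of the fitting step, the harmonic bond parameters K and r_0 can be recovered from a set of reference energies. The "reference" data below are synthetic stand-ins for ab initio results, not real values:

```python
import numpy as np

# Sketch of bottom-up parameterization: fit harmonic bond parameters
# K and r0 to reference energies of a stretched bond. The data here
# are synthetic stand-ins for ab initio results.
r_ref = np.array([0.90, 0.95, 1.00, 1.05, 1.10])  # bond lengths, A
E_ref = np.array([0.50, 0.125, 0.0, 0.125, 0.50])  # energies, kcal/mol

# E(r) = K*(r - r0)^2 is a quadratic in r, so a polynomial fit suffices:
c2, c1, c0 = np.polyfit(r_ref, E_ref, 2)
K = c2                   # curvature of the parabola -> spring constant
r0 = -c1 / (2.0 * c2)    # vertex of the parabola -> equilibrium length

print(K, r0)  # recovers K = 50 kcal/mol/A^2, r0 = 1.0 A for this data
```

Real parameterization is, of course, far harder: many coupled parameters are fitted simultaneously against heterogeneous experimental and ab initio targets, which is why the internal consistency of a force field matters so much (see Section 4.4).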
4.4 Challenges and limitations of empirical force fields
It is the implicit treatment of the electronic structure of atoms by force fields that primarily limits their transferability. Because the force field parameters are fitted assuming some fixed electronic environment and thermodynamic conditions, and once fitted they remain constant, an empirical force field cannot reproduce changes in the interatomic interactions that arise when the same atoms interact in a different chemical environment. For example, the dipole moment of a water molecule, measured experimentally, is significantly different in the gas, liquid, or solid phase. In an empirical force field, however, the equilibrium bond length, bond stiffness, or partial charges of the oxygen and hydrogen atoms, which determine its molecular properties like the molecular dipole moment or the vibrational frequencies, are fixed regardless of whether the molecule is in the gas, liquid, or solid phase. As a result, a force field is only reliable for simulating systems under thermodynamic conditions and chemical environments that are reasonably close to
those used during its parameterization. Using a force field to simulate a system outside that regime requires extensive validation against experimental results. Further limiting the transferability of force fields is the description of covalent interactions by harmonic terms and the fixed topology, or bonded connectivity, of the system. For example, empirical force fields cannot capture situations where the system is driven to states where chemical reactions or bond dissociation events could arise (e.g., under extreme temperatures or at high deformations). Lastly, the omission of many-body effects by pairwise potentials (that is, the fact that the interaction between two atoms depends on the positions of the other atoms in the system) is a major shortcoming in the simulation of materials like metals or polarizable systems, where many-body effects are crucial. A force field should be considered as a single entity. It is not only the accuracy of the individual terms and parameters that makes a force field accurate, but mainly their relative strength with respect to each other. Therefore, if, for example, one wants to parameterize a new molecule within an existing force field, the exact same protocols that were used to calibrate the original parameters of the force field should be followed. Using more sophisticated techniques, for example higher-level electronic structure calculations, would overall lead to internal inconsistencies rather than better results. For similar reasons, parameters from different force fields should not, in principle, be mixed. Consider, for example, the use of the TIP3P model of water with the CHARMM force field for proteins. Although more sophisticated models exist that capture the behavior of water much better than TIP3P, using those better models together with the CHARMM parameters for biomolecular systems can lead to poor results.
5. Integrating the dynamics of atoms: Molecular dynamics (MD)
In an MD simulation, time is discretized into short time steps Δt. A new configuration of the system {r(t + Δt), v(t + Δt)} is generated every step from the previous positions and velocities of the atoms, {r(t), v(t)}. The time sequence of the atomic positions, r(t), is known as the trajectory. The equations of motion are integrated numerically according to the equations of classical mechanics. In other words, we assume that the dynamics of the atomic nuclei can be described by Newton’s equations of motion. To update
54
Luis Alberto Ruiz Pestana et al.
the positions and velocities of the atoms each step, we first calculate the forces acting on each atom i using fi = −∂U(r)/∂ri, where U(r) is the energy of the system according to the force field. Once the forces on the atoms are known, we use Newton's second law to calculate the accelerations ai = fi/mi of the atoms, which we can then input into a numerical integrator to calculate the new velocities v(t + Δt) and the new positions r(t + Δt). For example, the equations of a simple but reliable integration algorithm known as Velocity Verlet, which is conventionally used to perform MD simulations, are:

r(t + Δt) = r(t) + v(t)Δt + a(t)Δt²/2    (1)

v(t + Δt) = v(t) + [a(t) + a(t + Δt)]Δt/2    (2)
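To make the update rule concrete, Eqs. (1) and (2) translate almost line by line into code. The sketch below is our own illustration, not taken from any particular MD package: it integrates a hypothetical 1D harmonic oscillator in reduced units (k = m = 1 are assumptions of the example) and uses the near-conservation of energy as a sanity check.

```python
import numpy as np

def velocity_verlet_step(r, v, a, force, m, dt):
    """Advance one time step using Eqs. (1) and (2)."""
    r_new = r + v * dt + 0.5 * a * dt**2      # Eq. (1)
    a_new = force(r_new) / m                  # forces evaluated at the new positions
    v_new = v + 0.5 * (a + a_new) * dt        # Eq. (2)
    return r_new, v_new, a_new

# Toy system: 1D harmonic oscillator with k = m = 1 (reduced units),
# integrated over one full period (2*pi) with dt = 0.01.
k, m, dt = 1.0, 1.0, 0.01
force = lambda x: -k * x
r, v = 1.0, 0.0
a = force(r) / m
energies = []
for _ in range(int(2 * np.pi / dt)):
    r, v, a = velocity_verlet_step(r, v, a, force, m, dt)
    energies.append(0.5 * m * v**2 + 0.5 * k * r**2)
```

Note that the forces are evaluated once per step, at the new positions, and reused as a(t) in the next step; the total energy stays essentially constant over the trajectory, as expected for this integrator.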
Notice that in the Velocity Verlet algorithm, the equation for the new velocities requires that we calculate the forces in both the current configuration, a(t), and the new configuration, a(t + Δt). Because the integrator is seldom a setting that you will need to choose or change if you perform simulations in available programs, and because integrators have been extensively reviewed in other publications [20], we will not discuss different integration schemes here. We will mention, however, that the laws of classical mechanics conserve energy and are time reversible, and both features are desirable in an integration scheme. The reader is referred to Section 4.3 in Frenkel and Smit's book for a more in-depth treatment of the issue. One essential parameter related to the numerical integration of the equations of motion, and one which you will need to choose in your simulations, is the duration of the time step. Ideally, one would use the largest time step that still allows for a stable numerical integration of the system dynamics: given the same number of steps, which is constrained by the available computational resources, the larger the time step, the longer the time that can be simulated. The time step, however, needs to be small enough that the numerical integration of the fastest atoms in the system remains stable, which depends on the stiffness of the stiffest interaction. For example, imagine two atoms of a monoatomic gas moving toward each other in the simulation. In the real world, if the trajectories of the atoms intersect, the atoms collide and bounce off each other due to hard-core repulsive interactions. In a simulation, that is exactly what will happen if the time step is small enough. However, if the time step is too long, the two atoms will either pass through each other
Fig. 6 Schematic illustration of a bond displacement from its equilibrium as a function of time. Here, we take the example of a bond composed of hydrogen atoms, where the period of the fastest displacement oscillation without considering damping is around 10 fs. An appropriate time step, illustrated in red, would be 1 fs.
without having interacted or will interact in some unphysical configuration (e.g., nearly overlapping). The stiffest interactions in atomistic systems are covalent bonds involving the lightest atoms, which can oscillate at frequencies of about 100 THz. That corresponds to 10¹⁴ oscillations per second, or a period of about 10 fs. To integrate this extremely fast motion numerically, a time step of 1 fs is typically needed, according to the rule of thumb Δt ≈ T/10 (Fig. 6). Because it is the stiffest interaction in the system that limits the size of the time step, some force fields describe molecules as rigid bodies, thus removing the need to integrate the motion due to the covalent bonds. Examples include some models of water (e.g., TIP3P) or polymers, and the time step typically used in those cases is 2 fs.
6. Ensembles and molecular dynamics at constant temperature and/or pressure
If you ever read papers or have attended talks where atomistic simulations were used, you have likely run into terms like "NVT" or "microcanonical ensemble." Even if you are familiar with statistical mechanics and the concept of an ensemble, you may be wondering why or how to choose a particular ensemble over others for a given simulation, and what the implications of that choice are. Selecting an ensemble is an unavoidable and essential step in setting up a molecular simulation, and the purpose of this
section is to briefly explain the meaning of ensemble and how to choose the right one for your specific purpose. Until now, we have learned how to model the interatomic interactions using empirical force fields (Section 3.4), which we need in order to generate new configurations of the system by numerically integrating the classical equations of motion of the atoms (Section 4.5). Although we did not explicitly discuss it until now, an atomistic simulation must always take place under certain thermodynamic constraints or control variables, which apply to the whole system and reproduce the interactions between the system and its surrounding environment. In the lingo of statistical mechanics, and of atomistic simulations by association, the set of thermodynamic control variables applied to the system determines the ensemble to which the configurations or microstates of the system generated during the simulation belong. Typically, we refer to an ensemble by three capital letters, which indicate the thermodynamic parameters that remain constant during a simulation (some variables, such as the volume or number of particles, can be kept strictly constant; others, such as the temperature or the pressure, fluctuate around a target value but remain constant in a statistical sense). We use N for the number of atoms, T for temperature, P for pressure, V for volume, E for energy, and μ for chemical potential. Obviously, combinations of conjugate variables, such as pressure and volume, cannot be controlled at the same time. As a consequence of Gibbs' phase rule, we can only control three thermodynamic variables at a time. Depending on the thermodynamic control variables, a different thermodynamic potential is optimized at equilibrium, such as the entropy, the Helmholtz free energy, or the Gibbs free energy. For an in-depth treatment of the role of statistical mechanics in atomistic simulations, the reader is referred to M. Tuckerman's book [16]. The idea of the ensemble was first put forward by J.W.
Gibbs, and can be thought of as the set of all possible configurations or microstates that a system can be in which are compatible with certain thermodynamic control variables that determine the macrostate of the system. There are (infinitely) many microstates compatible with a single macrostate. As stated earlier in this chapter, the goal of an atomistic simulation is to approximate a given equilibrium property of the system using the weighted sum ⟨X⟩ = Σi piXi, where pi is the probability of configuration i and Xi is the value of the property of the system in that configuration or microstate. Rigorously speaking, the system microstate is defined by the positions and momenta of the atoms (i.e., as a point in phase space), while the configuration of the system involves just the atomic positions. We use both interchangeably here
because the contribution of the momentum degrees of freedom to the probability of a certain microstate can be easily integrated out for systems in equilibrium, and the nontrivial component that remains is due to the configuration of the system. Because of our finite computational resources, only a finite number of configurations or microstates can be sampled from the infinitely many in the ensemble during a simulation. That implies that we can only calculate the expectation value of X approximately. In an MD simulation, the configurations of the system are generated as a trajectory over time, and the microstates are sampled from the ensemble directly with the right probability, so there is no need to re-weight them. In other words, the property of interest is calculated as a simple time average, ⟨X⟩ ≈ (1/nsteps) Σi Xi, where the sum is over the number of steps of the simulation and Xi is the value of the property of interest at step i. One key assumption that we make when we use MD simulations to calculate equilibrium properties is that the configurations that we generate during a trajectory are representative of the ensemble (i.e., have high associated probabilities), and that over long enough times the time averages become equal to the ensemble averages, which is known as the ergodic hypothesis. The probability that the system accesses different microstates depends on the thermodynamic control variables that define the ensemble. For example, in the microcanonical or NVE ensemble, every configuration of the system (compatible with the NVE constraints) has the same weight or probability. If your system is, for example, a collection of noninteracting atoms in a box, a configuration where all the atoms are bunched up in one corner of the box and one where the atoms are randomly distributed in the box are both equally likely.
In the canonical or NVT ensemble, the probability of a given configuration depends on its energy according to the Boltzmann weight, pi ∝ e^(−Ei/kBT), which implies that high-energy configurations occur with very low probability during a simulation. An immediate consequence is that MD simulations have trouble sampling high-energy configurations of the system. While we are not very interested in such microstates per se, as they contribute very little to the ensemble average, those states sometimes correspond to transitions between different minima. As a result, systems can easily become trapped in local minima during MD simulations, thus violating the ergodic assumption and making the time averages unreliable. Three ensembles are mostly used in practice: the NVE, NVT, and NPT ensembles, also known as the microcanonical, canonical, and isothermal-isobaric ensembles, respectively. They all correspond to closed systems that cannot exchange matter with their
surroundings; in other words, particles cannot be created or destroyed within the system. The variable conjugate to N is the chemical potential μ, and simulating systems at constant μ, while relevant to some applications, requires a special algorithm to add or remove atoms or molecules from the system, which is not straightforward and thus not covered here. In the following, we describe the strengths and limitations of the NVE, NVT, and NPT ensembles, which are, by far, the most used ensembles. Let's start with the NVE or microcanonical ensemble. This ensemble corresponds to a system placed in a closed simulation box of fixed volume; there is no exchange of energy, particles, or work between the system and its environment. If we use a time-reversible integrator, the energy of the system will be automatically conserved without the need for further action (the time symmetry of the classical equations of motion implies the conservation of energy, as the mathematician Emmy Noether showed). As a result, in the microcanonical ensemble, the system will evolve toward an equilibrium state where the entropy of the system is maximized. In practice, the energy is only approximately conserved, due to inaccuracies of the numerical integration. As shown in Fig. 7, in the NVE ensemble, the initial potential energy of the system is transformed into kinetic energy and vice versa, until reaching equilibrium at a certain temperature. We would like to note that the results shown in this chapter are based on simulations of a coarse-grained (CG) model of ortho-terphenyl (OTP) [21], a small-molecule glass-forming
Fig. 7 Temperature as a function of time in the NVE ensemble. The fluctuating gray line shows the actual temperature, and the blue curve shows a moving average. (Inset) Mapping from the all-atomistic model to the coarse-grained model of OTP. The force centers of the coarse-grained beads are located at the center of mass of each phenyl ring.
liquid, where each phenyl ring is grouped into one CG bead with the force center located at the center of mass of each ring (inset in Fig. 7) (see Chapter 4 to learn more about mesoscale coarse-graining methods). This system can be thought of as a simple LJ molecular fluid. The dynamics of the atoms in the NVE ensemble are uncorrupted, or physically accurate (we will see later how controlling the temperature corrupts the dynamics); therefore, it is the ensemble of choice when dynamical and transport properties need to be computed, such as diffusivities or time correlation functions. A typical protocol to calculate dynamical and transport properties at some target temperature is to first equilibrate the system in an isothermal ensemble, which we will discuss later, and, once the system is at the right temperature, switch to the NVE ensemble to generate uncorrupted atomistic dynamics. While controlling the temperature or pressure experimentally is relatively straightforward, realizing a completely isolated system (i.e., a system in the NVE ensemble) is extremely challenging in experiments. So, comparing our simulation results to experiments requires that we carry out simulations at constant temperature or pressure. The key to controlling T or P is to understand how they can be formulated from the microscopic degrees of freedom of the system. The expression for the temperature can be derived using the equipartition theorem (which states that if the energy of the system has only quadratic terms, which is a reasonable assumption close to equilibrium, each degree of freedom contributes kBT/2 to the thermal energy, on average), resulting in:

T = (1/(3NkB)) Σi mi vi²    (3)
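As an illustration of Eq. (3), the snippet below (an assumed toy setup, with argon-like masses, not from the chapter) draws equilibrium velocities at 300 K, whose Cartesian components are Gaussian with variance kBT/m, and recovers the target temperature from the kinetic energy:

```python
import numpy as np

kB = 1.380649e-23  # Boltzmann constant, J/K

def instantaneous_temperature(m, v):
    """Eq. (3): T = (1 / (3 N kB)) * sum_i m_i |v_i|^2, for N atoms in 3D."""
    N = len(m)
    twice_kinetic = np.sum(m * np.sum(v**2, axis=1))  # sum_i m_i |v_i|^2
    return twice_kinetic / (3 * N * kB)

# Example: equilibrium velocities for N argon-like atoms at 300 K; each
# Cartesian component is Gaussian with variance kB*T/m, so the
# equipartition estimate should recover ~300 K (up to statistical noise).
rng = np.random.default_rng(0)
N, T_target = 100_000, 300.0
m = np.full(N, 6.63e-26)  # kg (mass of an argon atom)
v = rng.normal(0.0, np.sqrt(kB * T_target / m[0]), size=(N, 3))
T_measured = instantaneous_temperature(m, v)
```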
Furthermore, in equilibrium, the distribution of the atomic speeds follows the Maxwell-Boltzmann distribution:

p(v) = 4πv² (m/(2πkBT))^(3/2) e^(−mv²/(2kBT))    (4)
where the speed v = (vx² + vy² + vz²)^(1/2). The microscopic interpretation of pressure is the average momentum carried across a given area per unit time. Momentum can be carried through two distinct mechanisms: (1) a particle moving across the area, and (2) two particles on either side of the area interacting, which correspond,
respectively, to the first and second terms on the right-hand side of the equation below:

P = NkBT/V + (1/(3V)) Σi Σj>i rij · fij    (5)
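A direct, if naive, O(N²) transcription of Eq. (5) might look as follows. This is a sketch only: the `pair_force` callable and the reduced units (kB = 1) are assumptions of the example, not part of the chapter.

```python
import numpy as np

def virial_pressure(N, T, V, r, pair_force, kB=1.0):
    """Eq. (5): P = N*kB*T/V + (1/(3V)) * sum_{i<j} r_ij . f_ij.
    `pair_force(rij)` returns the force on atom i due to atom j;
    reduced units (kB = 1) are assumed for simplicity."""
    virial = 0.0
    for i in range(N):
        for j in range(i + 1, N):
            rij = r[i] - r[j]
            virial += np.dot(rij, pair_force(rij))  # r_ij . f_ij
    return N * kB * T / V + virial / (3.0 * V)

# Sanity check: with no interatomic forces the virial term vanishes and
# Eq. (5) reduces to the ideal-gas law, P = N*kB*T/V = 10*2/5 = 4.
P_ideal = virial_pressure(10, 2.0, 5.0, np.zeros((10, 3)),
                          lambda rij: np.zeros(3))
```

Production codes accumulate the virial during the force loop rather than in a separate pass, but the bookkeeping is the same.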
The NVT or canonical ensemble is of great importance to atomistic and molecular simulations, as it allows us to simulate the system under conditions (constant volume, mass, and temperature) that can be easily controlled in experiments. In the NVT ensemble, the system is allowed to exchange energy with an infinitely large fictitious reservoir at some target temperature (the reservoir is a theoretical construct). In other words, the system is closed, but not isolated; thus, the energy of the combined reservoir and system (i.e., the universe) is conserved, but not that of the system alone. In practice, to maintain the system at a certain temperature, the velocities of the atoms are periodically adjusted so that their statistical distribution is compatible with the Maxwell-Boltzmann distribution at the target temperature. The algorithm used to do that is known as a thermostat, and multiple thermostats have been proposed, e.g., Berendsen, Andersen, and Nosé-Hoover [17,22]. While the mathematical details of the implementations are not critical for most applications, choosing how strong the coupling between the thermostat and the system is, which in practice means how often the velocities of the atoms are readjusted, is an important input in an MD simulation. Manipulating the atom velocities very infrequently is analogous to a weak coupling, and the temperature in that case is allowed to fluctuate more. In Fig. 8, we show how the temperature fluctuates as a function of time for two different couplings of the thermostat.
Fig. 8 Fluctuations in temperature in the NVT ensemble for a thermostat coupling that acts (a) every 100 time steps and (b) every 1000 time steps.
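To give a flavor of how the simplest thermostats act on the velocities, here is a minimal Berendsen-style weak-coupling sketch (an illustration in reduced units only; the function name and setup are our own, and production codes typically implement Nosé-Hoover and related schemes):

```python
import numpy as np

kB = 1.0  # reduced units (an assumption of this sketch)

def berendsen_rescale(v, m, T_target, dt_over_tau):
    """Weak-coupling (Berendsen-style) thermostat: rescale all velocities by
    a factor lambda that drives the instantaneous temperature toward
    T_target. dt_over_tau sets the coupling strength (small = weak)."""
    N = len(m)
    T_inst = np.sum(m * np.sum(v**2, axis=1)) / (3 * N * kB)
    lam = np.sqrt(1.0 + dt_over_tau * (T_target / T_inst - 1.0))
    return lam * v

# Example: velocities initially "hot" (T ~ 600) relax toward the target
# (T = 300) under repeated weak rescaling, mimicking the behavior in Fig. 8.
rng = np.random.default_rng(3)
m = np.ones(500)
v = rng.normal(0.0, np.sqrt(600.0), size=(500, 3))
for _ in range(1000):
    v = berendsen_rescale(v, m, 300.0, 0.01)
T_final = np.sum(m * np.sum(v**2, axis=1)) / (3 * len(m) * kB)
```

With dt_over_tau = 0.01, the temperature decays geometrically toward the target, which is exactly the "weak coupling, larger fluctuations" trade-off discussed above.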
It should be apparent that artificially readjusting the atom velocities, while not an issue (in fact, it is needed) for calculating equilibrium properties, creates artifacts in the dynamics of the atoms. Also, because the energy of the system is not conserved in isothermal ensembles, the time-reversal symmetry is strictly broken. As a result, dynamical properties should not be calculated when using constant-temperature ensembles. Again, the fact that the individual dynamics of the atoms are corrupted does not interfere with the fact that the configurations of the system that we are generating are sampled correctly in the given ensemble. The level of corruption depends on how strongly the thermal reservoir is coupled to the system (i.e., how often the thermostat acts on the system). For systems far from equilibrium (i.e., starting at a temperature far from the target), you can adjust the thermostat to be strongly coupled for a while and, when the system reaches the target temperature, decrease the strength of the coupling, which will create fewer artifacts in the dynamics. You may be wondering, why do we allow the temperature to fluctuate at all? Why not adjust the distribution of velocities at every step of the simulation? The main reason is that fluctuations in temperature are rigorously expected in a system in the NVT ensemble. For further details on how thermostats work, we refer the reader to Chapter 6 in Frenkel and Smit's book [23]. In the NPT ensemble, the system is maintained at a target pressure using a barostat. The work by Parrinello and Rahman, Shinoda, or Martyna, and their collaborators [24–26] on developing algorithms to control the pressure in simulations is worthy of mention. The NPT ensemble is typically used to evolve systems to their equilibrium density. It is worth noting that, given the small size of the systems in MD simulations, small variations in volume can lead to large changes in pressure.
Thus, while the fluctuations in volume are typically small, the fluctuations in pressure can be very large, as shown in Fig. 9.
Fig. 9 (a) Pressure fluctuations and (b) density fluctuations in an MD simulation under the NPT ensemble.
7. How to calculate properties from an MD simulation
In this section, we will explain, and illustrate using a case study, how to calculate some basic properties of a system using MD simulations. The basic outputs of an MD simulation are the positions and velocities of all the atoms in the system at different time steps, {r(t), v(t)}. Any other property of the system must be calculated from that basic information. Therefore, the challenge becomes how to distill the huge amount of data in the trajectory into meaningful equilibrium properties of the system such as the elastic constants or its viscosity, which can be experimentally measured and validated. In the next few sections, we will show how to calculate different thermodynamic, structural, and dynamical properties of the system from an MD simulation.
7.1 Structural and thermodynamic properties
In the previous section, we discussed that if the simulation is ergodic, we can approximately calculate any equilibrium property of the system as Xmacro = ⟨X⟩, where the angular brackets indicate an ensemble average, which we approximate as the average over the microstates generated during the simulation:

Xmacro = ⟨X⟩ ≈ (1/nsteps) Σi Xi    (6)
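In code, Eq. (6) is a plain average over the sampled values. The sketch below also attaches a crude error bar from block averages, a standard practice (not discussed in this chapter) that mitigates the time correlation between consecutive samples:

```python
import numpy as np

def time_average(x, n_blocks=10):
    """Eq. (6): estimate <X> as a simple time average over the sampled
    configurations, with a rough standard error from block averages
    (blocking reduces the bias from time correlations between samples)."""
    x = np.asarray(x, dtype=float)
    mean = x.mean()
    block_means = np.array([b.mean() for b in np.array_split(x, n_blocks)])
    stderr = block_means.std(ddof=1) / np.sqrt(n_blocks)
    return mean, stderr

# Example: a hypothetical noisy "property" fluctuating around 2.5.
rng = np.random.default_rng(1)
samples = 2.5 + 0.1 * rng.standard_normal(10_000)
mean, err = time_average(samples)
```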
To perform that operation, we need to be able to express the property of interest as a function of the positions and velocities of the atoms in the system Xi ¼ X({r, v}i). Thermodynamic properties include simple functions of the Hamiltonian, which is the total energy of the system in a particular microstate, such as internal energy, temperature, or pressure, as well as response functions, such as heat capacity, compressibility, or the thermal expansion coefficient. Thermodynamic properties can only be calculated for systems in equilibrium. The potential energy of the system at a particular microstate depends only on the atomic positions and can be calculated directly using the expression of the force field. Temperature and pressure can be calculated from the positions and velocities of the atoms in the system using Eqs. (3) and (5), respectively. Thermodynamic response functions involve partial
derivatives of a thermodynamic variable with respect to another. For example, the heat capacity is the derivative of the internal energy with respect to temperature:

CV = (∂U/∂T)V    (7)

Response functions can be of interest in their own right, but can also be used to monitor, for example, phase transitions, where the linear response diverges. There are two ways of calculating thermodynamic response functions. The first method involves calculating the partial derivative numerically by simulating the system multiple times under different imposed values of one of the variables while measuring the other one. In the case of the heat capacity, we would simulate the system in the NVT ensemble at different temperatures T1, T2, … and measure the internal energy of the system in each of those simulations, U1, U2, …. Using that data, it is possible to reconstruct the function U(T), which we can then differentiate numerically, either directly or after fitting some polynomial function to the data. This method involves performing multiple simulations of the system, which can be very costly. The second method requires only one simulation and relies on analytical expressions of the partial derivative in terms of the fluctuations of some thermodynamic variable. In the case of the heat capacity:

CV = (∂U/∂T)V = (⟨E²⟩ − ⟨E⟩²)/(kBT²)    (8)
where E is the energy of a configuration or microstate of the system and the internal energy U = ⟨E⟩. The downside of this method is that, because it involves small differences between large numbers, which is an operation carrying large uncertainty, it is often hard to converge the numerator. Structural properties can be precisely calculated from an MD simulation, as we have access to the positions of all, or a group of, the atoms. A useful description of the structure of the system, particularly for liquids, is the radial distribution function (RDF), which can be written as:

g(r) = ρ(r)/ρ0 = (1/ρ0) ⟨ Σj δ(r ± Δr/2) / (4πr²Δr) ⟩    (9)
Fig. 10 Spatial discretization for the evaluation of the radial distribution function (RDF). The gray bead represents the reference particle, and orange particles are those whose centers are within the pink circular shell.
where Σj δ(r ± Δr/2) is the number of atoms at a distance between r − Δr/2 and r + Δr/2 from a reference atom, 4πr²Δr is the volume of the spherical shell at distance r, and ρ0 is the bulk density of the system (Fig. 10). The average indicated by the brackets is over all the atoms in the system and over all the configurations of the system. In practice, to calculate the RDF, for each atom at each step, one calculates the distances to every other atom and produces a histogram using bins of width Δr. The histograms are averaged over the different atoms in the different configurations through the simulation. Finally, the value of each bin in the resulting averaged histogram is normalized by the volume of that bin, which is 4πr²Δr, and all the bins are also normalized by ρ0. It is worth noting that the RDF is dimensionless and corresponds to the probability of finding a pair of atoms at a distance r relative to the probability of finding that pair of atoms at the same distance if the system were completely uniform. At very short distances, smaller than the atomic diameter (e.g., the Lennard-Jones equilibrium distance), the RDF is equal to zero. At intermediate distances, the RDF is characterized by peaks and valleys, where the peaks correspond to distances where it is much more likely to find neighboring atoms and the valleys correspond to interstitial spaces, which indicates order and structure. At large distances, the RDF tends toward 1, which means that the probability of finding atoms at long range is, on average, equal to that of the bulk density. Crystalline solids display deeper valleys and higher peaks that persist to much longer distances than in liquids, which illustrates the long-range order in solids (Fig. 11).
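The histogram procedure just described can be sketched as follows for a single configuration in a cubic periodic box (the box shape and units are assumptions of the example; a production calculation would also average over frames):

```python
import numpy as np

def radial_distribution(r, box, dr, r_max):
    """Single-frame RDF following Eq. (9): histogram all pair distances into
    shells of width dr, then normalize by the shell volume 4*pi*r^2*dr and
    by the bulk density rho0. Assumes a cubic periodic box of side `box`
    and r_max < box/2 (minimum image convention)."""
    N = len(r)
    rho0 = N / box**3
    bins = np.arange(0.0, r_max + dr, dr)
    counts = np.zeros(len(bins) - 1)
    for i in range(N - 1):
        d = r[i + 1:] - r[i]
        d -= box * np.round(d / box)      # minimum image
        dist = np.linalg.norm(d, axis=1)
        counts += np.histogram(dist, bins=bins)[0]
    r_mid = 0.5 * (bins[1:] + bins[:-1])
    shell_vol = 4.0 * np.pi * r_mid**2 * dr
    # each pair was counted once; the per-atom count in Eq. (9) needs 2/N
    return r_mid, 2.0 * counts / (N * shell_vol * rho0)

# Example: for an ideal gas (uniformly random positions), g(r) ~ 1 at all r.
rng = np.random.default_rng(4)
pos = rng.uniform(0.0, 10.0, size=(2000, 3))
r_mid, g = radial_distribution(pos, box=10.0, dr=0.1, r_max=4.0)
```

The uniform-gas check is a useful validation of the normalization: any structured system (Fig. 11) would show peaks and valleys on top of this flat baseline.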
Fig. 11 (a) Schematic of the solid, liquid, and gas phases of the system. (b) Radial distribution functions (RDF) of coarse-grained ortho-terphenyl (OTP) bulk system at different temperatures. The system is in the solid, liquid, and gas phase at 150 K, 400 K, and 1000 K, respectively.
In Fig. 11, we show the RDF of an ortho-terphenyl (OTP) bulk system modeled using a coarse-grained force field (see Chapter 4 to learn about coarse-graining methods) and simulated at different temperatures. The differences between the solid, liquid, and gas phases can be easily observed. The integral of the RDF, which gives the number of atoms within a distance r, can be used to compute the number of nearest neighbors and thus distinguish between different crystal packings:

n(r) = ρ0 ∫0..r g(r′) 4πr′² dr′    (10)
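Given a tabulated g(r), Eq. (10) reduces to a one-dimensional quadrature. A trapezoidal-rule sketch (the function name and grid are our own):

```python
import numpy as np

def coordination_number(r_grid, g, rho0, r_cut):
    """Eq. (10): n(r_cut) = rho0 * integral_0^r_cut g(r') 4*pi*r'^2 dr',
    evaluated with the trapezoidal rule on a tabulated g(r)."""
    mask = r_grid <= r_cut
    r, y = r_grid[mask], g[mask] * 4.0 * np.pi * r_grid[mask]**2
    return rho0 * np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(r))

# Example: for an ideal gas, g(r) = 1 and n(r) is just rho0 * (4/3) pi r^3.
r_grid = np.linspace(0.0, 2.0, 201)
n = coordination_number(r_grid, np.ones_like(r_grid), rho0=0.5, r_cut=2.0)
```

In practice one integrates up to the first minimum of g(r) to count the first coordination shell.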
The RDF is also a very convenient property because it can be measured experimentally through X-ray and neutron diffraction: the Fourier transform of the neutron structure factor yields the RDF. Besides static
structure, a range of other order parameters also exists to characterize bond-orientational order, such as the Ackland-Jones order parameters [27] or those developed by Steinhardt and co-workers [28].
7.2 Dynamical properties
Generating the microstates of the system as a time series of the system dynamics, as is done in MD simulations, is not a necessary condition to calculate thermodynamic or structural equilibrium properties of the system, which require only an ensemble average that is insensitive to the order in which the microstates were generated. However, the atomistic trajectory opens the door to the study of dynamical properties, which is a unique feature of MD simulations with respect to MC simulations. It is important to remember that simulations designed to calculate dynamical properties of the system should be carried out in the NVE ensemble to avoid artifacts coming from thermostatting. If NVT or NPT is used to calculate dynamical quantities, the coupling between the thermostat and the system must be very weak, and the results need to be validated. For example, the thermostat has only a small effect on the dynamics of the systems shown in Fig. 12. The dynamics of the system are associated with transport coefficients, such as viscosity or diffusivity, which can be calculated from time-autocorrelation functions:

C(t) = ⟨A(t0)A(t0 + t)⟩    (11)
Fig. 12 Comparison of the mean-squared displacement for an NVE simulation and NVT simulations with different couplings of a Nosé-Hoover thermostat, which acts every 10 steps (blue) and every 100 steps (orange).
where the average is over multiple time origins (in equilibrium, a correlation function depends on the time interval but not on the time origin), and over atoms if A is a single-atom property (it could also be a system property, such as the total dipole of the system). At t = 0, C(t) = ⟨A²⟩, which is the highest correlation possible. The correlation function is, as a result, typically normalized by ⟨A²⟩, so that it takes a value of 1 at t = 0. For long time intervals, if the signal A(t) is nonperiodic, the two values become uncorrelated and C(t) = ⟨A⟩². The characteristic decay time of the autocorrelation function is known as the relaxation time. Transport coefficients can be calculated by integrating correlation functions over time, through what are known as Green-Kubo relations. For example, the integral of the particle velocity autocorrelation function (VACF) gives the self-diffusion coefficient of the atoms or molecules in the system:

D = (1/3) ∫0..∞ ⟨v(t)·v(0)⟩ dt    (12)
There are also properties that can be calculated from the autocorrelation of properties of the whole system. For example, from the autocorrelation function of the stress (the negative of the pressure), one can calculate the shear modulus (the value of the autocorrelation function at t = 0) or the viscosity (its integral). However, those calculations, while easier to compute, are typically less accurate and hard to converge. The most common way of calculating the diffusivity, however, is using Einstein's relation, which relates transport coefficients to the long-time behavior of some correlation function:

D = lim(t→∞) ⟨[r(t + t0) − r(t0)]²⟩ / (6t)    (13)

The numerator, ⟨[r(t + t0) − r(t0)]²⟩, is known as the mean-squared displacement (MSD). The average is over atoms in the system and time origins. The diffusivity, therefore, is just the slope of the MSD curve at long times (divided by 6 in three dimensions). Diffusive behavior involves many atomic collisions over time, and it is characterized by a constant slope in the MSD curve. At short time scales, however, systems display ballistic dynamics. Fig. 13 shows the mean-squared displacement ⟨r²⟩, on a logarithmic scale, of the OTP system at different temperatures. Increasing the temperature increases the ⟨r²⟩ of the system significantly due to enhanced molecular mobility (Fig. 13).
Fig. 13 The mean-squared displacement (MSD) hr2i results for the CG OTP model at different temperatures.
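A compact way to compute the MSD and extract D via Eq. (13) is sketched below, using a random walk as an assumed toy trajectory (for a walk with per-component step variance σ² per unit time step, MSD(t) = 3σ²t = 6Dt, so D = σ²/2):

```python
import numpy as np

def mean_squared_displacement(traj):
    """Eq. (13): MSD(t) averaged over atoms and time origins.
    traj has shape (n_frames, n_atoms, 3), with unwrapped coordinates."""
    n_frames = traj.shape[0]
    msd = np.zeros(n_frames)
    for t in range(1, n_frames):
        disp = traj[t:] - traj[:-t]        # all time origins for lag t
        msd[t] = np.mean(np.sum(disp**2, axis=2))
    return msd

# Toy trajectory: an unbiased 3D random walk of 200 "atoms" over 500 steps,
# with per-component step standard deviation 0.1, so D = 0.1**2 / 2 = 0.005.
rng = np.random.default_rng(2)
steps = rng.normal(0.0, 0.1, size=(500, 200, 3))
traj = np.cumsum(steps, axis=0)
msd = mean_squared_displacement(traj)
D_est = msd[100] / (6 * 100)               # slope/6 at a long lag, Eq. (13)
```

Note the use of unwrapped coordinates: if the positions are wrapped back into the periodic box (Section 8), the MSD is meaningless.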
8. Some odds and ends of atomistic simulations
As we previously mentioned, the pairwise potentials used for modeling nonbonded electrostatic and van der Waals interactions involve, in principle, all the atom pairs in the system. To calculate the energy of the system, therefore, we need to add up the energy from the interactions between every pair of atoms:

V(r) = Σi Σj>i V(rij)    (14)
From Eq. (14), it is easy to see that we need to evaluate V(rij), where rij is the distance between atoms i and j, approximately N² times, which quickly becomes an unaffordable operation as the number of atoms in the system increases. As a result, the most expensive calculation during an MD simulation is that of the forces between atoms; comparatively, the time spent integrating the equations of motion is small. To reduce the cost of the simulation by reducing the number of interacting atom pairs, we can take advantage of the fact that Lennard-Jones interactions decay quickly with distance, as r⁻⁶. This means that the interaction energy between atoms that are far apart contributes very little to the total energy. For example, the LJ interaction energy between two atoms at r = 2.5σ, a typical cutoff distance, is just 1.6% of the energy at the equilibrium distance. In practice, for short-range interactions, which are those that decay faster than the dimensionality of the space (e.g., faster than r⁻³ in 3D),
we simply ignore the contributions from atom pairs that are separated by more than some cutoff distance. Truncating the potential generates a discontinuity in energy and force when the distance between two atoms crosses the cutoff, which affects the cohesive energy and the pressure of the system. These effects are, however, relatively small and usually ignored when reasonable cutoffs are used. A typical cutoff for an LJ system could be 2.5σ, and in force fields like CHARMM or OPLS, cutoffs from 9 to 12 Å are customary. To really save computational time, the potential cutoff must be used in conjunction with neighbor lists. Otherwise, we would still need to calculate the distances between all atom pairs, which is just as costly as calculating the energy or the forces, to determine which contributions to ignore. Neighbor lists are based on the idea that, in molecular liquids (and even more so in solids), an atom's neighbors (i.e., those within the cutoff distance) do not significantly change over multiple time steps. The list of neighbors can be stored in an array and updated only infrequently, every 10 or 20 steps. Only the interactions between each atom and its neighbors according to the neighbor list are considered. We will not delve into the details of programming neighbor lists here, but how often the neighbor list is updated is an important parameter in the simulation that you will need to set. Simulation programs will typically come with default values that are reasonable for most systems. However, if you are simulating fast dynamical processes (e.g., shockwave propagation or evaporation), updating the neighbor list more frequently is something you may have to consider. The result of a poorly chosen update frequency will be noticeable, as your simulation will likely crash. The boundary conditions of your simulation box are typically either shrink-wrapped or periodic.
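A minimal version of the neighbor-list idea discussed above is a Verlet list with a "skin" (the function name, cubic box, and reduced units are our own assumptions):

```python
import numpy as np

def build_neighbor_list(r, box, r_cut, skin=0.3):
    """Verlet-style neighbor list: store, for each atom, all atoms within
    r_cut + skin, using the minimum image convention in a cubic box.
    The skin lets the list be reused for several steps; it should be
    rebuilt once any atom has moved more than skin/2."""
    N = len(r)
    r_list = r_cut + skin
    neighbors = [[] for _ in range(N)]
    for i in range(N - 1):
        d = r[i + 1:] - r[i]
        d -= box * np.round(d / box)      # minimum image
        dist = np.linalg.norm(d, axis=1)
        for k in np.nonzero(dist < r_list)[0]:
            j = i + 1 + int(k)
            neighbors[i].append(j)
            neighbors[j].append(i)
    return neighbors

# Example: three atoms on a line; only the first two are within the list range.
pos = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [5.0, 0.0, 0.0]])
nl = build_neighbor_list(pos, box=12.0, r_cut=2.0, skin=0.3)
```

This construction is itself O(N²); large-scale codes combine it with cell (linked-list) decompositions so that only nearby cells are searched.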
Using the former, the simulation box accommodates the motion of the atoms no matter how far they move; shrink-wrapped boundaries are used to simulate finite systems and surfaces. Under periodic boundary conditions (PBCs), the atoms interact across the boundaries, and when they exit the simulation box on one end, they re-enter the box on the opposite end (Fig. 14). PBCs are used to reproduce the behavior of infinite, or bulk, systems by removing the surface and interface effects introduced by the finite simulation box. Depending on how the simulation program of your choice implements periodic boundary conditions, you may have to make the cutoff distance smaller than half of the simulation box size to avoid unphysical interactions between periodic images. To this end, the minimum image convention is also commonly adopted, where each atom interacts only with the closest image of each other atom in the system. We use the term image to refer to the copies of the simulation cell in its infinite tiling. The treatment of long-range interactions under PBCs merits a short discussion. Electrostatic interactions between point charges, which decay slowly as r⁻¹, are considered long-range. Imposing cutoffs, and thus truncating long-range interactions, leads to very significant errors that cannot be ignored. Fortunately, a variety of methods are readily available in most atomistic simulation programs to cleverly integrate the contribution of the long-range tail of the Coulombic potential beyond the cutoff. Examples include the Ewald summation and the particle-particle particle-mesh (PPPM) methods. While the mathematics of these methods is interesting, it is not essential here; it is important to know, however, that they must be used when simulating systems containing charged chemical species.

Luis Alberto Ruiz Pestana et al.

Fig. 14 Two-dimensional representation of periodic boundary conditions. The central cell (filled with gray) represents the simulation box. Cyan beads represent particles in the simulation box, and dark green beads represent their periodic images in other cells. Bold and dashed red lines with arrows show the movement of two particles near the boundary; as a particle leaves the simulation box, its image enters the box from the opposite end.
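The cutoff, neighbor-list, and minimum-image machinery described above can be sketched in a few lines of Python. This is a minimal illustration, assuming a cubic box and an O(N²) list build for clarity (production codes use cell lists to make the build itself cheap); the `skin` margin is what lets the list stay valid for several timesteps between rebuilds:

```python
import numpy as np

def minimum_image(dr, box):
    """Minimum image convention under PBCs: wrap each component of the
    displacement vector dr into [-box/2, box/2) so an atom interacts only
    with the nearest periodic image of every other atom."""
    return dr - box * np.round(dr / box)

def build_neighbor_list(positions, box, r_cut, skin=0.3):
    """Verlet neighbor list: for each atom, store the indices of atoms
    within r_cut + skin. The skin lets the list remain valid over several
    timesteps before a rebuild. O(N^2) build, shown here for clarity."""
    n = len(positions)
    r_list = r_cut + skin
    neighbors = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            dr = minimum_image(positions[i] - positions[j], box)
            if np.linalg.norm(dr) < r_list:
                neighbors[i].append(j)
                neighbors[j].append(i)
    return neighbors
```

Note how two atoms sitting near opposite faces of the box are correctly identified as neighbors through the minimum image wrap, even though their raw separation is almost a full box length.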
9. Concluding remark
In conclusion, MD simulations are akin to a powerful microscope with sub-Angstrom spatial resolution and femtosecond temporal resolution. Using atomistic molecular modeling methods, systems with hundreds of thousands of atoms can be simulated for hundreds of nanoseconds. In contrast, computational quantum chemistry methods, which offer resolution at the level of the electronic structure and are thus tremendously transferable between different materials and applications, can only be used to simulate systems with a few hundred atoms. As a result, techniques such as molecular dynamics (MD) have become the workhorse for studying the properties of materials and an incredible variety of physical phenomena at the nanoscale. The connection between the microscopic degrees of freedom accessible through atomistic simulations and the macroscopic equilibrium properties of interest is provided by the theoretical framework of statistical mechanics, and the interactions between the simulated system and its environment are captured by the choice of ensemble, defined by constraining three compatible thermodynamic variables (e.g., NVE, NPT, or NVT). The success of an atomistic simulation hinges on both the quality of the force field and the algorithm used to sample configurations of the system in equilibrium. The force field is the element that provides both the main strength (coarse-graining the electrons reduces the computational expense of the simulations by orders of magnitude) and the main weakness (the parameters are typically not transferable between different material systems or thermodynamic conditions) of these methods. Empirical force fields rely on relatively strong assumptions about the interatomic interactions, namely the pairwise additive treatment of van der Waals and electrostatic interactions, and the description of covalent interactions by harmonic potentials, which are accurate only close to equilibrium. While a wide range of force fields has been developed to capture increasingly complex interactions, for example, bond-order potentials that can capture metallic bonding or even chemical reactions, those models come at a significant computational expense. The future of atomistic modeling is bright and lies in pushing the limits of these techniques in terms of the time and length scales that can be simulated. Exciting new developments are continuously taking place to improve the efficiency and transferability of force fields and to develop clever enhanced sampling methods to map the rugged free energy landscapes of complex material systems.
CHAPTER THREE
Particle-based mesoscale modeling and coarse-graining methods

Zhaofan Li(a), Yang Wang(b), Amirhadi Alesadi(a), Luis Alberto Ruiz Pestana(c), and Wenjie Xia(a)

(a) Department of Civil, Construction and Environmental Engineering, North Dakota State University, Fargo, ND, United States
(b) School of Materials Science and Engineering, University of Science and Technology Beijing, Beijing, China
(c) Department of Civil and Architectural Engineering, University of Miami, Coral Gables, FL, United States
Contents
1. Introduction to mesoscale modeling of materials
2. Overview of coarse-graining modeling strategies
3. Particle-based mesoscale modeling techniques
3.1 Classical molecular dynamics
3.2 Langevin dynamics
3.3 Dissipative particle dynamics (DPD)
3.4 Multiscale coarse-graining methods
4. Concluding remarks
References

1. Introduction to mesoscale modeling of materials
Multiscale modeling of materials involves the representation of material systems by computational models across two or more spatiotemporal scales, where the fundamental physical principles and simulation algorithms may be quite different. Fig. 1 illustrates the hierarchy of commonly used computational modeling methods and their associated time and length scales. Among them, computer-aided "mesoscale" modeling techniques have become increasingly important, as they make it possible to calculate material properties at time and length scales out of reach of atomistic modeling methods (see Chapter 2 and this chapter) and to simulate the behavior of complex hierarchical materials. At the highest level of accuracy, ab initio methods based on quantum mechanics can provide insights into how physical properties of materials arise from their electronic structures [1]. One step further, atomistic molecular modeling methods based on empirical force fields (e.g., classical molecular dynamics [MD] simulations) have become widely employed to investigate how material properties depend on their molecular and nanostructures. However, despite their success in their respective domains of application, atomistic methods are computationally expensive, and the systems that can be simulated are limited to a small number of atoms (e.g., fewer than 1000 for ab initio calculations). In order to simulate material systems at larger time and length scales, mesoscale modeling methods are required that significantly speed up computation by reducing the number of degrees of freedom while preserving the most essential atomic or molecular features, such as segmental structure, shape, topology, and chemical interactions. Particle-based mesoscale coarse-grained (CG) modeling approaches based on MD, in particular, have played a vital role in the multiscale modeling of complex hierarchical materials, as they can reach extended spatiotemporal scales while retaining molecular resolution. Similar to classical MD simulations, commonly used CG modeling approaches revolve around the concept of a "force field," which describes the potential energy of a Hamiltonian system based on the coordinates of point particles interacting via analytical or tabulated functional forms. In CG models, the point particles, typically called beads or "superatoms," represent multiple atoms. CG models are therefore computationally efficient owing to the reduced number of degrees of freedom, simpler potential functions, the possibility of using a larger integration timestep, and accelerated dynamics (which, however, may not be desirable in all cases).

Fundamentals of Multiscale Modeling of Structural Materials https://doi.org/10.1016/B978-0-12-823021-3.00004-X
Copyright © 2023 Elsevier Inc. All rights reserved.

Fig. 1 Schematics of the multiscale modeling techniques across different time and length scales of material systems, from quantum, all-atom, coarse-graining, and mesoscale to continuum. The paradigm shows the approximate ranges of time and length scales associated with each modeling technique along with relevant applications.
The unique advantages of CG modeling techniques allow simulating systems and phenomena at extended time and length scales that can be difficult or outright impossible to reach using atomistic modeling methods. Different CG approaches are designed to preserve different selected physical features, such as thermodynamic properties, conformational and structural properties, and tacticity, providing semiquantitative predictive capabilities. Despite their relative success, several major challenges remain (in particular for soft materials with complex microstructural and physical features), including (i) the difficulty of simultaneously retaining chemical, mechanical, structural, and dynamical properties in the upscaling process; (ii) the limited transferability of CG force fields for chemistry-specific material systems; and (iii) the limited predictive power beyond the quantities of interest used in model calibration.
Zhaofan Li et al.
2. Overview of coarse-graining modeling strategies
Generally, CG models can be categorized into two major classes: (i) generic bead-spring CG models, which qualitatively represent more than one type or class of materials, and (ii) chemistry-specific CG models, which represent a particular material chemistry. In generic CG models [2], the difference between two material systems modeled with the same molecular topology (e.g., two different polymers) can be captured via different force field parameters. These largely simplified models are particularly useful for qualitatively exploring the influence of different molecular parameters, such as the cohesive energy or the rigidity of a molecule or polymer chain, on various physical properties. On the other hand, chemistry-specific CG models preserve the most essential features of real material systems, including segmental structure and molecular topology, and thus offer more predictive power. The force fields of chemistry-specific CG models are also typically more complicated than those of generic CG models, but still much simpler than atomistic models. These physics-informed CG models can be tailored to quantitatively predict the thermomechanical properties and dynamics of materials in good agreement with atomistic models or experiments. Therefore, we will focus on chemistry-specific CG modeling approaches in this chapter. Developing a chemistry-specific CG model generally involves three steps: first, determine the mapping from the underlying atomistic structure to CG beads, reducing the number of degrees of freedom by removing "nonessential" chemical features; second, derive the force field parameters in a systematic way to capture the interactions between the CG beads; and third, validate the CG model performance and refine the derived force field. Force field components are often parameterized via three different paradigms: top-down, bottom-up, and hybrid approaches, as shown in Fig. 2.
"Top-down" models tailor the molecular interactions to address phenomena that are experimentally observed for a particular system on length and time scales that are accessible to the CG model. Chemically specific top-down models are characterized by potential terms with simple functional forms that are parameterized to reproduce thermodynamic properties of the real system [3]. One popular example of the top-down approach is the nonbonded interactions of the Martini force field [4], which was developed to explore the effects of hydrophobic, van der Waals, and electrostatic interactions between sites as a function of charge and polarity. By contrast, in
Fig. 2 General framework of the top-down and bottom-up coarse-graining approaches. The top-down approach utilizes experimental results to tune the nonbonded interaction, and the bottom-up approach reproduces the static structures of the system at a coarser description under certain thermodynamic state points. The hybrid approach combines the top-down and bottom-up approaches to derive the CG model and potential.
the bottom-up paradigm, the interactions between CG beads are parameterized based on the predictions of an underlying atomistic model. Typically, bottom-up methods use more complex analytical forms to capture the more detailed information of the underlying atomistic model, which is sometimes inaccessible experimentally. One example of this approach is the energy-renormalization method, which requires matching the segmental dynamics
of polymers in the CG model to those of the atomistic model [5]. Since bottom-up models do not rely on experimental observations, they can be ideal for making predictions in cases where experimental information is lacking (e.g., for new materials). Finally, the hybrid bottom-up/top-down approach can potentially describe both the structural and thermodynamic properties of materials. In this hybrid strategy, some of the force field parameters are directly informed by underlying atomistic models, and both experimentally and computationally reported properties are used to parameterize the remaining force field parameters. For example, a hybrid CG modeling approach was used to model methacrylate polymers [6], where the bonded potentials were determined by the inverse Boltzmann method (IBM) [7], but the nonbonded parameters were optimized to capture the density, mechanical properties, and glass transition behavior obtained from experiments. The fundamental premise of CG model development is to sacrifice accuracy for a significant improvement in computational efficiency in order to overcome the spatiotemporal limitations of atomistic methods. Most CG approaches average out nonessential degrees of freedom and treat groups of chemically connected atoms as individual interaction sites (also called "superbeads") according to a predefined mapping scheme, forming so-called pseudo bonds (i.e., bonds that exist only in the CG model but not in the atomistic model). It should be emphasized, however, that there is no universal template for coarse-graining materials once and for all: depending on the properties of interest and the purpose of the study, one coarse-graining method may be more suitable than another. In this chapter, we will review several particle-based modeling techniques and mainstream coarse-graining strategies that are frequently used to develop mesoscale models.
We will focus on the application of these methods to model soft materials such as polymers, which are good examples due to their complex microstructure and physical behavior. We will also cover some coarse-graining approaches for modeling crystalline materials with ordered molecular structures. As CG modeling is an ever-growing and evolving field of research, the aim of this chapter is to provide a background of coarse-graining methods and to guide readers in the exploration of suitable CG methods for different material systems. We also discuss current achievements and remaining challenges of CG modeling methods and offer a perspective for developing future CG models for material design and prediction.
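As a minimal illustration of the mapping step described above, the sketch below computes CG bead positions as mass-weighted centers of the atoms assigned to each bead. The `bead_groups` argument (a list of atom-index lists, one per bead) is a hypothetical stand-in for a predefined mapping scheme; real CG workflows derive it from the molecular topology:

```python
import numpy as np

def map_to_beads(atom_positions, atom_masses, bead_groups):
    """Center-of-mass mapping from an atomistic configuration to CG beads.
    Each entry of bead_groups lists the atom indices lumped into one bead."""
    beads = []
    for group in bead_groups:
        m = atom_masses[group]
        r = atom_positions[group]
        beads.append(np.average(r, axis=0, weights=m))  # mass-weighted COM
    return np.array(beads)
```

A center-of-geometry variant (uniform weights) is equally common; which one is appropriate depends on the coarse-graining method used to derive the force field.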
3. Particle-based mesoscale modeling techniques
In this section, we provide an overview of three commonly used simulation techniques for numerical integration of the equations of motion of particle-based systems in mesoscale CG modeling. Each of these techniques relies on different algorithms and assumptions, which impact the accuracy and resolution of the resulting CG models.
3.1 Classical molecular dynamics
Classical molecular dynamics (MD) is a commonly applied computer-aided simulation technique to numerically solve the classical equations of motion (i.e., Newton's second law) for both atomistic and CG modeling:

$$m_i \frac{d^2 \mathbf{r}_i}{dt^2} = \mathbf{f}_i, \qquad \mathbf{f}_i = -\frac{\partial U(\mathbf{r}^N)}{\partial \mathbf{r}_i}, \qquad i = 1, \ldots, N \tag{1}$$
where $\mathbf{f}_i$ stands for the force acting on atom $i$, $m_i$ is the atom mass, $\mathbf{r}_i$ is the position vector of atom $i$, $N$ is the number of atoms, and $U(\mathbf{r}^N)$ represents the potential energy, which depends on the spatial positions of the atoms or superatoms in the system. The potential energy plays a key role in MD simulations since the forces are derived from $U(\mathbf{r}^N)$, as shown in Eq. (1). Determining the potential energy, or force field, for CG modeling will be reviewed in the following sections. One can solve the equations of motion by discretizing them in time. There are many integration schemes used in MD for this purpose. The Velocity Verlet (VV) algorithm is one of the most popular methods for integrating Newton's equations of motion [8]. As discussed in the previous chapter, in the VV algorithm the positions and velocities are updated as follows:

$$\mathbf{r}_i(t + \Delta t) = \mathbf{r}_i(t) + \mathbf{v}_i(t)\,\Delta t + \frac{1}{2}\,\mathbf{a}_i(t)\,\Delta t^2 \tag{2}$$

$$\mathbf{v}_i(t + \Delta t) = \mathbf{v}_i(t) + \frac{1}{2}\left[\mathbf{a}_i(t) + \mathbf{a}_i(t + \Delta t)\right]\Delta t \tag{3}$$
This approach advances the coordinates and momenta over a timestep Δt. More detailed information regarding the VV and classical MD frameworks can be found in Chapter 2. Note also that since classical MD simulations explicitly capture atomic details, this scheme is particularly useful for CG modeling at relatively high resolution (i.e., a low degree of coarse-graining). Therefore, the accessible time and length scales
are typically limited to microsecond and micrometer scales, which still covers an enlarged range compared with atomistic molecular modeling.
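The VV update of Eqs. (2) and (3) can be sketched in a few lines of Python. This is a minimal illustration, not a production integrator; `force_fn` is an assumed user-supplied function that returns the forces derived from the potential:

```python
import numpy as np

def velocity_verlet_step(r, v, a, force_fn, m, dt):
    """One Velocity Verlet step: Eq. (2) advances the positions; the
    velocities are then updated with the average of the old and new
    accelerations, Eq. (3)."""
    r_new = r + v * dt + 0.5 * a * dt**2   # Eq. (2)
    a_new = force_fn(r_new) / m            # forces at the new positions
    v_new = v + 0.5 * (a + a_new) * dt     # Eq. (3)
    return r_new, v_new, a_new
```

For a simple test case such as a 1D harmonic oscillator (`force_fn = lambda r: -k * r`), the total energy computed along the trajectory stays bounded close to its initial value, which is the long-time stability property that makes VV the default choice in MD codes.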
3.2 Langevin dynamics
In Langevin dynamics, the effects of the solvent molecules, the friction of the atoms with air molecules, or occasional high-velocity collisions (which would have to be represented explicitly in an MD simulation) are captured implicitly by a stochastic force and a friction term. The Langevin equation is a stochastic differential equation in which two force terms are added to Newton's second law to capture the effects of the omitted degrees of freedom:

$$\mathbf{f}_i - \gamma_i \mathbf{v}_i + \mathbf{f}_i^R(t) = m_i \mathbf{a}_i \tag{4}$$
As mentioned earlier, the first added term, $\gamma_i \mathbf{v}_i$, represents the frictional force, and the second, $\mathbf{f}_i^R(t)$, accounts for the random forces. Here, $\gamma_i$ is the damping constant and $\mathbf{f}_i^R(t)$ is a delta-correlated stationary Gaussian process that satisfies

$$\left\langle \mathbf{f}_i^R(t) \right\rangle = 0, \qquad \int \left\langle \mathbf{F}_i^R(0) \cdot \mathbf{F}_i^R(t) \right\rangle dt = 6 k_B T \gamma_i \tag{5}$$

where $T$ is the temperature and $k_B$ is Boltzmann's constant. The random force is assumed to be uncorrelated over time, so Eq. (5) takes the form

$$\left\langle \mathbf{F}_i^R(t) \cdot \mathbf{F}_i^R(t') \right\rangle = 6 k_B T \gamma_i \,\delta(t - t') \tag{6}$$

As shown in Eq. (6), the temperature $T$ of the system is coupled to the damping constant $\gamma$. Indeed, the parameter $\gamma$ determines both the magnitude of the frictional force and the variance of the random force [9]. The larger $\gamma$ is, the greater the influence of the surrounding environment on the system. For small values of $\gamma$ the motion is termed inertial, whereas for large values, in the overdamped limit, the motion is purely diffusive (Brownian motion).
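A minimal Langevin integrator can be sketched as follows. This uses a simple Euler-Maruyama discretization of Eq. (4) (an assumed, illustrative choice; production codes use more sophisticated splitting schemes), with the per-component variance of the random kick chosen so that the fluctuation-dissipation balance of Eq. (6) drives a free particle toward the target temperature:

```python
import numpy as np

rng = np.random.default_rng(0)

def langevin_step(r, v, force_fn, m, gamma, kB_T, dt):
    """One Euler-Maruyama step of the Langevin equation, Eq. (4). The
    random kick has per-component variance 2*gamma*kB_T*dt (the discrete
    analog of Eq. 6), so a free particle equilibrates to <v^2> = kB_T/m
    per degree of freedom."""
    f = force_fn(r)
    f_random = np.sqrt(2.0 * gamma * kB_T * dt) * rng.normal(size=np.shape(r))
    v_new = v + (f - gamma * v) / m * dt + f_random / m
    r_new = r + v_new * dt
    return r_new, v_new
```

Running this for a free particle and averaging the kinetic energy over time recovers the imposed temperature to within the O(Δt) bias of the discretization, which is a useful sanity check when implementing any stochastic thermostat.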
3.3 Dissipative particle dynamics (DPD)
Dissipative particle dynamics (DPD) is an alternative approach to address the simulation of complex fluid and soft matter at the mesoscale when hydrodynamics play a critical role. This technique was first introduced by
Hoogerbrugge and Koelman [10] to simulate the dynamic behavior of fluids. In typical DPD simulations, "bead-spring" type particles represent clusters of molecules (i.e., with a much lower resolution of chemical features than the classical MD scheme), whose motions are governed by certain collision rules. Similar to MD, the time evolution of the motion of each particle can be expressed by Newton's second law:

$$\frac{d\mathbf{r}_i}{dt} = \mathbf{v}_i, \qquad \frac{d\mathbf{P}_i}{dt} = \sum_{j \neq i} \mathbf{F}_{ij} \tag{7}$$

$$\mathbf{F}_{ij} = \mathbf{F}_{ij}^C + \mathbf{F}_{ij}^D + \mathbf{F}_{ij}^R \tag{8}$$

where $\mathbf{r}_i$, $\mathbf{v}_i$, and $\mathbf{P}_i$ are the position, velocity, and momentum vectors of particle $i$, and $\mathbf{F}_{ij}$ is the total interparticle force exerted on particle $i$ by particle $j$. The force is usually assumed to be pairwise additive and consists of three parts: a purely repulsive conservative force $\mathbf{F}_{ij}^C$ deriving from a potential, a dissipative or friction force $\mathbf{F}_{ij}^D$ representing the effects of viscosity, which reduces radial velocity differences between the particles, and a random (stochastic) force $\mathbf{F}_{ij}^R$ representing the thermal or vibrational energy of the system (shown in Fig. 3a). Among these force components, the conservative force $\mathbf{F}_{ij}^C$ characterizes a soft interaction acting along the line of particle centers, which is defined as follows:

$$\mathbf{F}_{ij}^C = a_{ij}\, w^C(r)\, \hat{\mathbf{r}}_{ij} \tag{9}$$

$$w^C(r) = \begin{cases} 1 - r, & r < 1.0 \\ 0, & r \geq 1.0 \end{cases} \tag{10}$$
Fig. 3 (a) Illustration of dissipative particle pairwise interaction with three force components: A conservative linear repulsive force, a Brownian dashpot showing a friction force, and a stochastic force. These forces vanish beyond a finite cutoff radius rc, which is usually taken as the unit length in DPD models [11]. (b) Snapshot of capillary imbibition of fluid into nanochannels using DPD.
where $a_{ij}$ is the maximum repulsion between particles $i$ and $j$, $\mathbf{r}_{ij} = \mathbf{r}_i - \mathbf{r}_j$, $r = |\mathbf{r}_{ij}|$, and $\hat{\mathbf{r}}_{ij} = \mathbf{r}_{ij}/|\mathbf{r}_{ij}|$. Here, $w^C(r)$ is the weight function for $\mathbf{F}_{ij}^C$, which describes a soft repulsive force and allows the DPD framework to reach much larger time and length scales than classical MD. Moreover, the dissipative force $\mathbf{F}_{ij}^D$ depends on both the relative positions and velocities of the particles, whereas the random force $\mathbf{F}_{ij}^R$ depends only on the relative positions; these two components can be written as

$$\mathbf{F}_{ij}^D = -\gamma\, w^D(r_{ij})\, (\hat{\mathbf{r}}_{ij} \cdot \mathbf{v}_{ij})\, \hat{\mathbf{r}}_{ij} \tag{11}$$

$$\mathbf{F}_{ij}^R = \sigma\, w^R(r_{ij})\, \theta_{ij}\, \hat{\mathbf{r}}_{ij} \tag{12}$$

where $\mathbf{v}_{ij} = \mathbf{v}_i - \mathbf{v}_j$, and $w^D(r_{ij})$ and $w^R(r_{ij})$ are bell-shaped, distance-dependent weight functions with finite support that render the dissipative interactions local [11]. The term $\theta_{ij}$ is a Gaussian white noise function with the symmetry property $\theta_{ij} = \theta_{ji}$ and the following stochastic properties:

$$\left\langle \theta_{ij}(t) \right\rangle = 0, \qquad \left\langle \theta_{ij}(t)\, \theta_{kl}(t') \right\rangle = \left( \delta_{ik}\delta_{jl} + \delta_{il}\delta_{jk} \right) \delta(t - t') \tag{13}$$

The parameters $\gamma$ and $\sigma$ are the coefficients of the dissipative and random forces, respectively. Español and Warren [12] showed that the two weight functions and their amplitudes must be related as follows in order to recover the proper thermodynamic equilibrium for DPD fluids at a prescribed $T$:

$$w^D(r) = \left[ w^R(r) \right]^2 \tag{14}$$

$$\gamma = \frac{\sigma^2}{2 k_B T} \tag{15}$$
where $k_B$ is the Boltzmann constant and $T$ is the equilibrium temperature. All interaction energy terms are expressed in units of $k_B T$, which is usually assigned a value of unity. The random fluctuating force $\mathbf{F}_{ij}^R$ acts to heat up the system, while the dissipative force $\mathbf{F}_{ij}^D$ acts to restrain the relative velocity of the particles, thus cooling the system down by removing kinetic energy. Consequently, DPD simulations are effectively thermostatted MD simulations with softer particle-particle interactions, as the fluctuating and dissipative forces act together to maintain a constant $T$ with small fluctuations [13]. As an example, Fig. 3b presents a CG model to simulate fluid imbibition into a nanochannel using the DPD approach. Historically, the aim of DPD was to make fluid dynamic behavior correctly emerge from
a particle-based simulation method, while using the simplest interactions possible.
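The three DPD force contributions of Eqs. (8)-(15) can be sketched for a single particle pair as follows. This is a minimal illustration assuming the common choice $w^R(r) = 1 - r/r_c$ with the unit cutoff $r_c = 1$, and discretizing the white noise $\theta_{ij}$ as a Gaussian of variance $1/\Delta t$:

```python
import numpy as np

rng = np.random.default_rng(1)

def dpd_pair_force(ri, rj, vi, vj, a_ij, gamma, kB_T, dt, r_c=1.0):
    """Total DPD force on particle i from particle j: conservative (Eq. 9),
    dissipative (Eq. 11), and random (Eq. 12) parts, with w^D = (w^R)^2
    (Eq. 14) and sigma = sqrt(2 gamma kB_T) (Eq. 15)."""
    rij = ri - rj
    r = np.linalg.norm(rij)
    if r >= r_c:
        return np.zeros_like(ri)            # all forces vanish beyond r_c
    e = rij / r                             # unit vector along centers
    w_R = 1.0 - r / r_c                     # assumed weight function
    w_D = w_R**2                            # Eq. (14)
    sigma = np.sqrt(2.0 * gamma * kB_T)     # Eq. (15)
    F_C = a_ij * w_R * e                    # soft repulsion, Eq. (9)
    F_D = -gamma * w_D * np.dot(e, vi - vj) * e   # Eq. (11)
    theta = rng.normal() / np.sqrt(dt)      # discretized white noise
    F_R = sigma * w_R * theta * e           # Eq. (12)
    return F_C + F_D + F_R
```

Note that all three contributions act along the line of centers $\hat{\mathbf{r}}_{ij}$, which is what makes DPD momentum-conserving and lets hydrodynamic behavior emerge.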
3.4 Multiscale coarse-graining methods

As mentioned earlier, coarse-graining alleviates the spatiotemporal limitations and costly simulations associated with atomistic models by systematically clumping groups of atoms to form CG beads, by enabling longer simulation times and larger system sizes, and by improving the sampling of the phase space. Over the past decades, several approaches have been developed to systematically derive and optimize CG force fields from underlying AA models (i.e., a "bottom-up" strategy), including the iterative Boltzmann inversion (IBI) [14], force matching (also called multiscale coarse-graining) [15], relative entropy [16], inverse Monte Carlo methods [17], and other approaches based on statistical mechanics [18]. In the following section, we introduce some of the widely used methods for developing effective force fields or potentials in CG modeling.

3.4.1 Generic coarse-graining methods

A classical generic coarse-graining model: FENE model
Generic coarse-graining (CG) models are highly effective for simulating materials when the chemical structures of the molecules can be largely simplified and represented by a sequence of beads connected by springs. Owing to this simplified chemical structure, generic CG models can explore a broad range of length and time scales, which is beneficial for simulating polymers, fluids, and solutions. One of the most frequently used CG models is the finitely extensible nonlinear elastic (FENE) model, which has attracted much attention for simulations of polymers and solutions [2,19]. These simplified, idealized, general models, with fewer force field parameters and hence fewer chemical details, are easy to use while providing significantly improved computational efficiency. Although such largely simplified models cannot provide quantitative predictions owing to the lack of chemical detail, they are efficient tools for exploring qualitative trends and understanding the complex physical behavior of materials. In particular, since Kremer and Grest [19] employed the FENE model to study the dynamics of a polymeric liquid, it has become one of the most widely used CG models for predicting physical trends of "ideal" polymers, making it extensively popular in the polymer physics community.
The force field of the FENE model is composed of two potential components: the bonded potential between connected beads and the nonbonded interactions. Generally, FENE models do not include any angular or torsional stiffness in their force field; therefore, in the original FENE model description, the simulated polymer is treated as a freely jointed chain with a linear bead-spring topology. Different from a simple harmonic spring potential, the bonded potential between bonded neighbors along the chain is expressed as the sum of a FENE potential and a 12-6 Lennard-Jones (LJ) potential, defined as follows [19]:

U_bond(r) = U_FENE(r) + U_LJ(r)   (16)
where r is the distance between the beads. U_FENE(r) and U_LJ(r) are given by the following equations:

U_FENE(r) = −0.5 K R_0^2 ln[1 − (r/R_0)^2],   r ≤ R_0   (17)

U_LJ(r) = 4ε[(σ/r)^12 − (σ/r)^6],   r ≤ R_cut   (18)

where K is the spring constant and the bond elongation r cannot exceed the limit R_0 in U_FENE; ε is the depth of the potential well; and σ is the distance at which the LJ potential energy between two beads is zero in U_LJ. The LJ function is usually truncated at R_cut = 2.5σ. To get a sense of the magnitude of these parameters, note that Kremer and Grest [19] used K = 30 ε/σ^2 and R_0 = 1.5σ for their linear polymer model. Fig. 4 shows the bonded potential energy in the FENE model.

Fig. 4 Schematic illustration of a Lennard-Jones (LJ) potential (blue line), a finitely extensible nonlinear elastic (FENE) potential (green line), and the sum of both contributions (yellow line).

From the plot, one can observe that the bonded potential has an equilibrium bond length r_eq = 0.9609σ. For the nonbonded interactions, only U_LJ is applied to nonbonded pairs of beads within R_cut:

U_nonbonded(r) = U_LJ(r)   (19)
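As a minimal numerical sketch, the bonded potential of Eqs. (16)-(18) can be evaluated in reduced LJ units with the Kremer-Grest parameters quoted above; the scan below simply locates the minimum of U_bond(r):

```python
import numpy as np

# Bonded FENE + LJ potential of Eqs. (16)-(18) in reduced LJ units
# (sigma = epsilon = 1), with the Kremer-Grest parameters quoted above.
K, R0, eps, sigma = 30.0, 1.5, 1.0, 1.0

def u_fene(r):
    # Finitely extensible spring; diverges as r -> R0
    return -0.5 * K * R0**2 * np.log(1.0 - (r / R0) ** 2)

def u_lj(r):
    return 4.0 * eps * ((sigma / r) ** 12 - (sigma / r) ** 6)

def u_bond(r):
    return u_fene(r) + u_lj(r)

# Locate the equilibrium bond length by scanning for the potential minimum
r = np.linspace(0.7, 1.3, 100001)
r_eq = r[np.argmin(u_bond(r))]
print(f"r_eq = {r_eq:.4f} sigma")
```

The recovered minimum reproduces the equilibrium bond length r_eq = 0.9609σ quoted above.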
Note that the units reported here are Lennard-Jones (LJ) reduced units, which we describe next. For convenience, all quantities are often reported in reduced LJ units for the FENE and other generic CG models. In this reduced unit style, the Boltzmann constant is set to k_B = 1, and the fundamental quantities mass (m), ε, and σ are taken as the units of mass, energy, and length, respectively; all masses, distances, and energies are multiples of these fundamental values. Temperature is measured in units of ε/k_B, and time (τ) in units of sqrt(mσ^2/ε). The reduced LJ units and the corresponding physical values for argon are listed in Table 1. It should be noted that these reduced units can be transformed into laboratory units relevant to real materials [20]. For instance, if we consider a polymer chain with a typical segmental size of 1-2 nm (σ = 1 corresponds to 1 nm) and a nonbonded interaction energy scale on the order of 1 kJ/mol, then time should be measured in picoseconds (τ = 1 corresponds to 1 ps), and the CG model can be employed to approximate a typical polymer such as polystyrene (PS), whose glass transition temperature Tg for a moderate chain length is about 100°C [21].

Generalized generic coarse-graining models
Table 1 Reduced LJ units used in the FENE model and the corresponding real SI values of the physical quantities for liquid argon (Ar).

Physical quantity   Symbol   Reduced unit    Real unit for Ar
Length              r*       σ               3.4 × 10^−10 m
Energy              U*       ε               1.65 × 10^−21 J
Mass                m*       m               6.69 × 10^−26 kg
Time                t*       sqrt(mσ^2/ε)    2.17 × 10^−12 s
Velocity            v*       (ε/m)^1/2       1.57 × 10^2 m/s
Force               F*       ε/σ             4.85 × 10^−12 N
Pressure            p*       ε/σ^3           4.20 × 10^7 N/m^2
Temperature         T*       ε/k_B           120 K

Built upon the FENE model, more generalized versions of the generic CG models have often been employed by modifying the force-field functional forms and parameters and adding other components, making the CG models more representative and realistic than the original one. For instance, a harmonic potential can be used to replace the original bonded interaction in the FENE models:

U_bond(r) = (K_b/2)(r − r_0)^2   (20)
where K_b stands for the stiffness of the spring and r_0 is the equilibrium bond distance between two beads. These considerably simplified harmonic models have become popular in the polymer community for exploring the dynamics and thermomechanical properties of polymers [21,22]. By adjusting K_b and r_0 (e.g., K_b = 2000 ε/σ^2 and r_0 = 1σ), crystallization can be avoided and the formation of an amorphous polymer system can be facilitated, even with such a simplified CG model. To simulate materials such as polymers in a more realistic manner, one may need to include angular and torsional potential components in the model, since for many materials the angular and torsional stiffness are determining factors. For instance, polymers can be divided into several categories based on the relative rigidities of the backbone and side chains; one possible combination is a chain with a relatively stiff backbone and flexible side chains, such as a conjugated polymer. To embody this important feature in polymer simulations, the chain stiffness is often incorporated via harmonic-style potentials describing the fluctuations of the bond, angle, and dihedral interactions. For the angular stiffness between three neighboring beads, a harmonic angle potential can be defined:

U_angle(θ) = (K_θ/2)(θ − θ_0)^2   (21)

where θ_0 is the equilibrium value of the angle and K_θ determines the angular stiffness. Similarly, for the dihedral angle, that is, the angle between two intersecting planes, a periodic cosine-style potential is often used:

U_dihedral(ϕ) = K_ϕ[1 + cos(nϕ − d)]   (22)

where K_ϕ is the dihedral force constant, n is the multiplicity of the function, ϕ is the dihedral angle, and d stands for the phase shift. Illustrations of bonds, angles, and dihedrals in bead-spring models are exhibited in Fig. 5; they are quite similar to those in other classic force fields for atomistic molecular simulations.
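As a brief sketch, the bonded terms of Eqs. (20)-(22) can be evaluated directly; the parameter values below are illustrative only (reduced units), not taken from any fitted CG model:

```python
import numpy as np

# Evaluating the generalized bonded terms of Eqs. (20)-(22); the parameter
# values are illustrative only (reduced units), not from any fitted model.
Kb, r0 = 2000.0, 1.0          # harmonic bond, Eq. (20)
Ktheta, theta0 = 50.0, np.pi  # harmonic angle, Eq. (21): straight chain
Kphi, n, d = 1.0, 3, 0.0      # cosine dihedral, Eq. (22)

def u_bond(r):
    return 0.5 * Kb * (r - r0) ** 2

def u_angle(theta):
    return 0.5 * Ktheta * (theta - theta0) ** 2

def u_dihedral(phi):
    return Kphi * (1.0 + np.cos(n * phi - d))

# All three terms reach their minimum at the equilibrium geometry
print(u_bond(1.0), u_angle(np.pi), u_dihedral(np.pi / 3))
```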
Particle-based mesoscale modeling and coarse-graining methods
Fig. 5 Schematic representation of (a) bond, (b) angle, and (c) dihedral interactions and corresponding potential curves from Eqs. (20)–(22).
It is well known that molecular or polymer topology can have a significant influence on the physical characteristics of materials in both solution and melt states. Beyond the linear-chain polymer, generalized generic CG models have also been widely utilized to explore the relationship between polymer architecture and the physical properties of polymer materials. A schematic illustrating typical molecular architectures is presented in Fig. 6 [23]. Furthermore, a diverse library of polymers with varying backbone chemistries, backbone lengths, sidechain lengths, and sidechain grafting densities is essential for understanding the complex behaviors of diverse polymers.

3.4.2 Chemistry-specific coarse-graining methods

United-atom
The united-atom (UA) approximation is a more precise CG model with higher resolution (i.e., a lower degree of coarse-graining) than Martini and most other CG models; it is the most elementary CG modeling approach, retaining molecular-scale resolution. The UA approach
Fig. 6 Schematic illustration of the topological architectures of the bottlebrush, linear chain, regular star, and unknotted ring polymers. Typical molecular conformations of bottlebrush polymers with the variation of the sidechain length n. The backbone core segments are represented in orange color, and the sidechain segments are represented in cyan color.
Fig. 7 Incorporating the hydrogens into the directly bonded carbon of the (top) AA model yields the (bottom) UA model, with hydrogens represented implicitly.
incorporates each carbon atom and its directly bonded hydrogen atoms (e.g., methylene CH2 or methyl CH3 units) into one unified interaction site; that is, the hydrogen atoms are represented implicitly, as shown in Fig. 7. This preserves the chemical structure and thus yields a moderate increase in computational efficiency without much sacrifice of atomistic accuracy. Since roughly half of the atoms in biological and other organic macromolecules are hydrogens, UA models are often used for modeling larger hydrocarbon systems, including alkane chains and lipid bilayer systems with long alkane hydrophobic tails, to reduce the computational burden [24]. Many UA force fields have been developed in previous works based on the characterization of densities, heats of vaporization, and other thermophysical properties of hydrocarbons, linear and branched alkenes, and alkylbenzenes, including OPLS-UA [25], GROMOS [26], TraPPE-UA [27], and Dreiding [28]. The simplest representation of the UA model
is polyethylene. After eliminating the hydrogens, the CH2 units and the CH3 end groups are treated as homogeneous beads, yielding a linear bead-spring model. Bond stretching, angle bending, torsion, and 12-6 LJ-type nonbonded potentials, similar to the forms in Eqs. (19)-(22), are usually utilized to describe the total energy of the system. In addition, the bond lengths (and sometimes the bond angles) are often held fixed at their equilibrium values during the MD simulation to alleviate time-consuming computation.

Iterative Boltzmann inversion (IBI)
The iterative Boltzmann inversion (IBI) method has become a popular choice for deriving chemistry-specific CG potentials, owing to its straightforward implementation, its preservation of structural properties, and its general applicability over a diverse range of material systems, such as polymers and biopolymers, glasses, liquids, small molecules, and even inorganic solids [29]. The first step in the coarse-graining process is to define the AA-to-CG mapping scheme. Generally, the center of mass (COM) of a cluster of atoms is defined as the effective interaction site (i.e., force center) of the CG bead. It should be noted that electrostatic interactions must be considered when parameterizing the nonbonded interactions. Often, Coulombic interactions are accounted for only implicitly and are thus not treated explicitly at the CG level, which offers a significant computational advantage. Only in situations where the electrostatic interactions play a critical role, such as CG polyelectrolytes or ionic liquids, are charges and Coulombic interactions incorporated to capture the detailed behaviors. Taking regioregular poly(3-hexylthiophene) (rr-P3HT) as a representative model system, a three-bead-per-monomer mapping scheme is shown in Fig. 8a, where a repeat unit is represented by three types of CG beads, denoted "P1," "P2," and "P3" in the CG model, corresponding to the thiophene ring, the first three hexyl sidechain methyl groups, and the last three sidechain methyl groups, respectively [30]. Sometimes, however, the force center of the CG bead is placed at a specific location (e.g., a particular atomic position) to simplify the potential derivation by reducing the statistical variation of the probability distributions of the bonded terms. The main feature of the IBI method is to derive the effective bead-bead interactions by matching a set of target structural properties.
These target structural properties typically include the probability distributions of bonds, angles, and dihedrals, and the radial distribution function (RDF) for the nonbonded interactions, obtained from a more detailed reference simulation (i.e.,
Fig. 8 (a) Mapping from the AA model to CG model of P3HT. The probability distribution functions and corresponding relative potentials for CG P3HT (b) bonds, (c) angles, and (d) dihedrals.
atomistic model) mapped to the CG level. The potential is iteratively optimized according to the following equation:
U_{i+1}(x) = U_i(x) + k_B T ln[P_i(x)/P_target(x)]   (23)

where k_B is the Boltzmann constant and T is the absolute temperature; P_i(x) represents the probability distribution in the ith iteration (x can be the bond length l, bond angle θ, dihedral angle ϕ, or distance r for bond stretching, bond bending, torsion, or nonbonded interactions, respectively); and P_target(x) denotes the corresponding target probability distribution derived directly from the reference simulation. Potentials are usually optimized in decreasing order of relative strength, that is, U_bond → U_angle → U_nonbonded → U_dihedral. One assumption in deriving the potentials in IBI is that each interaction term is treated relatively independently. The standard procedure for updating potentials in IBI is to optimize one potential individually until convergence is achieved before moving on to the next in the order of optimization, and then update the other potentials while keeping the converged potentials fixed. Generally, the determination of the set of CG interactions rests on the underlying assumption that the total potential energy, U^CG, can be separated into bonded, U_b^CG, and nonbonded, U_nb^CG, contributions:

U^CG = Σ U_b^CG + Σ U_nb^CG   (24)

For the CG bonded interactions, the structural distributions P^CG are characterized in terms of the CG bond lengths l between adjacent pairs of beads, the bond angles θ between three subsequent neighbors, and the dihedral (torsion) angles ϕ between neighboring quadruplets of beads, similar to the bonded-term definitions in the atomistic system. Note that these bonded terms in the CG description are "pseudo" ones and may change if the AA-to-CG mapping is altered. To obtain the bonded potentials, it is assumed that the different internal CG degrees of freedom are independent at a given temperature:

P^CG(l, θ, ϕ, T) = P^CG(l, T) P^CG(θ, T) P^CG(ϕ, T)   (25)
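The update rule of Eq. (23) can be illustrated on a toy one-dimensional coordinate whose distribution follows the Boltzmann weight of the current potential exactly (an idealized assumption made here for illustration; in a real IBI workflow, P_i is sampled from a CG-MD run at each iteration):

```python
import numpy as np

# Toy IBI update, Eq. (23): a single bonded coordinate x whose distribution
# follows P(x) ~ exp(-U(x)/kBT) exactly (idealized assumption; in practice
# P_i is measured from a CG-MD trajectory at every iteration).
kBT = 1.0
x = np.linspace(-2.0, 2.0, 401)

def distribution(U):
    p = np.exp(-U / kBT)
    return p / p.sum()                        # normalized on the grid

P_target = distribution(0.5 * 10.0 * x**2)    # "reference" harmonic, K = 10
U = 0.5 * 2.0 * x**2                          # deliberately wrong initial guess

for i in range(10):
    P_i = distribution(U)
    U = U + kBT * np.log(P_i / P_target)      # Eq. (23)

err = np.max(np.abs(distribution(U) - P_target))
print(f"max deviation from target after 10 iterations: {err:.1e}")
```

In this idealized setting a single update is already exact, because the distribution maps one-to-one onto the potential; in practice, sampling noise and the coupling between interaction terms require repeated iterations.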
The individual distributions P^CG(l, T), P^CG(θ, T), and P^CG(ϕ, T) are initially obtained by sampling l, θ, and ϕ from the AA trajectories based on the predefined mapping scheme, and these are used as the target structural distributions. Having the probability distribution functions, the initial CG bonded potentials can be estimated using direct Boltzmann inversion:

U_1^CG(l, T) = −k_B T ln[P_target^CG(l, T)/l^2]   (26)

U_1^CG(θ, T) = −k_B T ln[P_target^CG(θ, T)/sin θ]   (27)

U_1^CG(ϕ, T) = −k_B T ln[P_target^CG(ϕ, T)]   (28)

Note that the probability distributions for bond length and angle are normalized by the corresponding volume elements, namely, l^2 and sin θ, respectively. These initial CG bonded potentials are then iteratively optimized by the IBI approach via Eq. (23):

U_{i+1}^CG(l, T) = U_i^CG(l, T) + k_B T ln[P_i^CG(l, T)/P_target^CG(l, T)]   (29)

U_{i+1}^CG(θ, T) = U_i^CG(θ, T) + k_B T ln[P_i^CG(θ, T)/P_target^CG(θ, T)]   (30)

U_{i+1}^CG(ϕ, T) = U_i^CG(ϕ, T) + k_B T ln[P_i^CG(ϕ, T)/P_target^CG(ϕ, T)]   (31)

where the subscript i denotes the ith iteration. The CG potentials are obtained after several rounds of IBI, until convergence to the target probability distributions of the atomistic counterpart is reached (i.e., the relative error is within an acceptable threshold, such as 2%), as depicted in Fig. 8b-d as an example. On the other hand, the nonbonded CG potentials U_nb^CG(r) are derived from a given target intermolecular RDF between nonbonded CG interaction sites, which can be obtained from the atomistic reference system. In practice, the Boltzmann inversion of the RDF, g^target(r), is usually used as a reasonable initial guess [14]:

U_{nb,1}^CG(r, T) = −k_B T ln[g^target(r)]   (32)
The CG simulation with the first guessed nonbonded potential U_{nb,1}^CG(r, T) yields a corresponding RDF, g_1^CG(r), which differs from g^target(r). The potential is then updated by the following relation:

U_{nb,i+1}^CG(r, T) = U_{nb,i}^CG(r, T) + k_B T ln[g_i^CG(r)/g^target(r)]   (33)

This step is iterated until the reference g^target(r) is reproduced and convergence is achieved. Depending on the complexity of the potential,
the number of iterations can vary greatly for the different bonded and nonbonded terms, as well as for different systems. Generally, fewer iterations are required for bond and angle interactions than for dihedral and nonbonded interactions. Moreover, the CG potentials can be described either by analytical potentials (e.g., harmonic and Lennard-Jones potentials) or by numerically derived potentials in tabulated form, which are available in most MD software packages, for example, LAMMPS [31].

Inverse Monte Carlo (IMC)
For a given structure-based CG system with a reduced number of degrees of freedom, inverse Monte Carlo (IMC) is another method to invert ensemble averages and derive an effective potential that reproduces the same set of configurations as the AA system. The underlying principle of the IMC method is to reconstruct the Hamiltonian from radial distribution functions (RDFs), which give a detailed description of the structural properties and the potential of mean force (PMF) [17]. The pair potentials obtained by the IMC method are not limited to any specific functional form and are typically chosen in tabulated form. Within the IMC framework, the tabulated potentials are updated at each iteration. The total potential energy (Hamiltonian) of the system can then be presented as follows [32]:

H = Σ_α V_α S_α   (34)
where V_α refers to a set of tabulated values defining a step-wise potential function, and S_α is the number of particle pairs whose mutual distance falls inside the α-slice. For the nonbonded interactions, S_α serves as an estimator of the RDF; for the intramolecular interactions, S_α corresponds to histograms of the distributions of the relevant bond lengths and angles. The average values of S_α are functions of the potential V_α, with the expansion:

Δ⟨S_α⟩ = Σ_γ (∂⟨S_α⟩/∂V_γ) ΔV_γ + O(ΔV^2)   (35)
where the derivatives ∂⟨S_α⟩/∂V_γ, using the statistical-mechanics expression for averages in the canonical ensemble, can be written as:

∂⟨S_α⟩/∂V_γ = −(⟨S_α S_γ⟩ − ⟨S_α⟩⟨S_γ⟩)/(k_B T)   (36)
The IMC method then iteratively adjusts V_α to mitigate the differences in the averages ⟨S_α⟩ between the CG and reference AA systems (i.e., Δ⟨S_α⟩ = ⟨S_α⟩ − S*_α → 0 for all α). The iterative procedure proceeds until convergence is achieved, resulting in corrected values of the potentials V_α:

V_α^(k+1) = V_α^(k) + ΔV_α^(k)   (37)
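A toy sketch of the IMC update, Eqs. (35)-(37), for a system small enough that the canonical averages ⟨S_α⟩ and the cross-correlation matrix of Eq. (36) can be computed exactly; the three-bin setup and its target distribution are invented for illustration:

```python
import numpy as np

# Toy IMC illustration (invented setup): a single degree of freedom falls
# into one of three distance bins; S_a is the indicator of bin a, so the
# ensemble average <S_a> is the bin probability p_a = w_a exp(-V_a/kT)/Z.
kT = 1.0
w = np.array([1.0, 2.0, 1.0])          # bin degeneracies
p_target = np.array([0.2, 0.5, 0.3])   # target <S_a> from the "AA" reference

def averages(V):
    boltz = w * np.exp(-V / kT)
    p = boltz / boltz.sum()
    # cross-correlation matrix of Eq. (36); indicators give diag(p) - p p^T
    C = np.diag(p) - np.outer(p, p)
    return p, C

V = np.zeros(3)                        # initial tabulated potential
for _ in range(20):
    p, C = averages(V)
    # Eqs. (35)-(36): d<S>/dV = -C/kT, so solve -(C/kT) dV = p_target - p.
    # V is defined only up to an additive constant, so V[0] is held fixed.
    dV = np.zeros(3)
    dV[1:] = np.linalg.solve(-C[1:, 1:] / kT, (p_target - p)[1:])
    V += dV                            # Eq. (37)

p_final, _ = averages(V)
print(np.round(p_final, 6))  # -> [0.2 0.5 0.3]
```

In a real IMC run, the averages and correlations in Eq. (36) are estimated from sampled trajectories rather than computed exactly, which is why extensive sampling is needed for the update to be stable.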
Therefore, the IMC method requires extensive sampling of both the distributions and their correlations for the scheme to reduce the statistical noise and converge to a satisfactory solution.

Energy renormalization (ER)
Although CG potential parameters are optimized to satisfactorily reproduce the physical properties of atomistic simulations or available experiments at and near certain thermodynamic states, the general use of CG methods is limited to predicting those properties under the specified thermodynamic conditions, which is known as the "transferability issue." Specifically, the "temperature transferability" of CG modeling remains challenging due to a lack of understanding of the strong temperature dependence of the molecular friction parameters and relaxation properties of glass-forming (GF) materials [33]. This occurs mainly because of the reduced number of degrees of freedom under coarse-graining, which can be viewed as reducing the fluctuation and frictional forces associated with the lost fine atomistic details. To address this issue, a coarse-graining strategy called the energy renormalization (ER) approach has recently been proposed, aiming to preserve the dynamics of polymers and other glass-forming systems over a wide temperature range and thereby effectively address the temperature transferability issue in CG modeling. This approach borrows ideas from the Adam-Gibbs (AG) theory [34] and the generalized entropy theory (GET) [35] of glass formation, both of which emphasize the central role of the configurational entropy, s_c, in the dynamics of GF systems. On the basis of the AG theory, CG models, with their loss of s_c due to coarse-graining, often exhibit an artificial speedup of the molecular motions and divergent activation energies in the regime of incipient glass formation relative to their AA counterparts. The ER method compensates for this effect by correspondingly renormalizing the enthalpy of the system, based on the established "entropy-enthalpy compensation" effect [36]:

ΔG(T) ≡ ΔH(T) − TΔS(T) = const.   (38)

ΔH(T) = a + bΔS(T)   (39)
where ΔG, ΔH, and ΔS refer to the change in free energy, enthalpy, and entropy of activation, respectively. Mechanistically, to preserve the ΔG of the AA system after coarse-graining at any fixed temperature, the loss of entropy ΔS due to the reduced number of degrees of freedom, and thus of s_c, can be effectively compensated by elevating the enthalpy ΔH through an increase in cohesive energy. In CG modeling, the cohesive interaction strength, which is often parametrically related to the commonly used Lennard-Jones (LJ) energetic parameter ε, has a strong influence on the dynamics and mechanical response of the materials through its influence on s_c. In practice, by renormalizing ε as a function of temperature, ε(T), one aims to compensate for the reduction in overall s_c and effectively "correct" the activation free energy, thereby preserving the dynamics of the CG model, especially through the glass transition. More recently, Simmons et al. [37] introduced the generalized localization model (LM) based on the local free-volume perspective, which suggests a scaling relationship between the Debye-Waller factor ⟨u^2⟩, a fast-dynamics property defined at the picosecond timescale, and the structural relaxation time τ, namely τ ∝ exp[(u_0^2/⟨u^2⟩)^(α/2)], where the exponent α is a measure of the shape of the free volume (e.g., one expects α = 3 for roughly spherical free volume). Going one step further, Betancourt et al. [38] reduced the unspecified parameters of the generalized LM by defining them directly from measurements of ⟨u^2⟩ and τ at the onset temperature T_A (i.e., the characteristic temperature below which the complex fluid starts to exhibit non-Arrhenius behavior due to glass formation), rather than treating them as fitting parameters (i.e., u_A^2 ≡ ⟨u^2(T_A)⟩ and τ_A ≡ τ(T_A)), leading to a highly predictive relation:

τ(T) = τ_A exp[(u_A^2/⟨u^2(T)⟩)^(α/2) − 1]   (40)

Here, this LM relation (i.e., Eq. 40) is applied as the basis of a strategy for coarse-graining the dynamics of polymer melts and deriving the CG potential. Taking polycarbonate (PC) as a representative model system (Fig. 9a), one aims to derive the ER factor α(T), that is, ε(T) = α(T)ε_0, where ε is the cohesive interaction strength parameter and ε_0 is a constant estimated from the RDF using the IBI method, by preserving the Debye-Waller factor ⟨u^2⟩ of the AA system (i.e., a fast dynamic property at the picosecond timescale, related to molecular "mobility" and "free volume"), so as to recover the T-dependent dynamics of the AA model over a wide range of T with its CG analog. As shown in Fig. 9b, the first step is to examine the influence
Fig. 9 (a) Mapping from the AA model to the CG model of polycarbonate (PC). (b) Influence of cohesive interaction strength on dynamics of the CG model.
of the cohesive interaction strength by systematically varying α and monitoring ⟨u^2⟩ for the CG model system. For each fixed α value, ⟨u^2⟩ increases with T in a nonlinear fashion for both the AA and CG systems over a wide T range, which is typical for GF materials. Moreover, as the α value increases, ⟨u^2⟩ decreases at any given T, indicating suppressed molecular motion upon increasing the cohesive interaction strength of the CG system. For each α value, the measured CG ⟨u^2⟩ intersects the AA ⟨u^2⟩ at a different T. Accordingly, ε(T) can be determined by demanding that the CG ⟨u^2⟩ equal the AA ⟨u^2⟩ at each T (inset in Fig. 9b), leading to a typical sigmoidal variation in ε(T). Furthermore, to maintain T-dependent structural properties such as the density, the length-scale parameter σ(T) is also adjusted through a renormalization factor β(T), defined as β(T) = β_2T^2 + β_1T + β_0. All parameters related to α(T) and β(T) are presented in Table 2. Another model example is ortho-terphenyl (OTP), a small-molecule glass-forming liquid, as shown in Fig. 10a [39]. To coarse-grain the atomistic OTP model, each phenyl ring is grouped into one CG bead with the force center located at the ring's center of mass, resulting in three consecutive CG beads per molecule. Preserving ⟨u^2⟩ under coarse-graining by renormalizing the cohesive interaction strength allows for quantitative prediction of both the short- and long-time dynamics over the wide T range of glass formation, indicating that the ER approach is also applicable to small GF molecules. Furthermore, note that in the high-T Arrhenius and low-T glassy regimes, ε(T) tends to plateau values, while it varies strongly with T in the non-Arrhenius regime at intermediate T between the glassy and Arrhenius regimes. This type of sigmoidal variation in ε(T) is also observed for different polymers: polycarbonate (PC, a
Table 2 Functional forms and parameters of the temperature-dependent energy renormalization functions for the nonbonded potentials of the CG model of polycarbonate (PC).

Functional form                                     Parameters
U_nb(r, T) = 4ε(T)[(σ(T)/r)^12 − (σ(T)/r)^6]        ε_AA = α(T) × 0.0761 kcal/mol, σ_AA = β(T) × 7.55 Å;
                                                    ε_BB = α(T) × 0.4599 kcal/mol, σ_BB = β(T) × 5.81 Å;
                                                    ε_CC = α(T) × 0.4016 kcal/mol, σ_CC = β(T) × 4.21 Å
α(T) = (α_A − α_g)/(1 + exp[−k(T − T_T)]) + α_g     α_A = 2.659 kcal/mol, α_g = 5.304 kcal/mol;
                                                    k = 0.0075 K^−1, T_T = 419.6 K
β(T) = β_2T^2 + β_1T + β_0                          β_2 = 2.227 × 10^−7 Å/K^2, β_1 = 1.087 × 10^−4 Å/K,
                                                    β_0 = 0.8245 Å
Fig. 10 (a) Self-diffusion coefficient D of ortho-terphenyl (OTP) molecules for the AA and CG models using ER at varying temperatures, and their comparison with the experimental data. The inset shows the cohesive interaction strength ε(T) for the CG model determined by matching the T-dependent ⟨u^2⟩ of the AA model to the CG model. (b) ER factor α as a function of temperature for the CG models of polycarbonate (PC), polystyrene (PS), and polybutadiene (PB) developed via the ER approach to achieve T-transferable coarse-graining of their dynamics.
relatively fragile polymer), polystyrene (PS, a semiflexible polymer), and polybutadiene (PB, a relatively strong and flexible polymer) with the following functional form [33]:
α(T) = (α_A − α_g)/(1 + e^(−k(T − T_T))) + α_g   (41)
where α_A and α_g are the values of the renormalization factor α(T) in the Arrhenius and glassy regimes, respectively; k sets the temperature breadth of the transition; and T_T is the crossover temperature of the sigmoidal function. As shown in Fig. 10b, although the chemical structures and GF properties of these three polymers are appreciably different, their α(T) universally exhibits a sigmoidal variation with T, with different magnitudes. This sigmoidal variation of α(T) is also largely consistent with the predictions of the AG theory and the GET, emphasizing the crucial role of the cohesive energy in the activation energy and dynamics upon glass formation. By implementing ε(T) based on the condition that ⟨u^2⟩ is preserved, the calibrated CG model can replicate the dynamics of the AA model over the entire temperature range of glass formation.
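The sigmoidal form of Eq. (41) can be evaluated directly with the PC parameters listed in Table 2; here α(T) is treated as a dimensionless factor multiplying the baseline cohesive strength ε_0:

```python
import numpy as np

# Energy-renormalization factor of Eq. (41) with the polycarbonate
# parameters from Table 2; alpha(T) is treated here as a dimensionless
# factor multiplying the baseline cohesive strength eps_0.
alpha_A, alpha_g = 2.659, 5.304   # Arrhenius- and glassy-regime plateaus
k, T_T = 0.0075, 419.6            # breadth (1/K) and crossover T (K)

def alpha(T):
    return (alpha_A - alpha_g) / (1.0 + np.exp(-k * (T - T_T))) + alpha_g

# Glassy plateau at low T, Arrhenius plateau at high T,
# and the midpoint exactly at the crossover temperature T_T
print(alpha(100.0), alpha(T_T), alpha(1000.0))

# Renormalized cohesive strength for the BB pair of Table 2 at 300 K
eps_BB0 = 0.4599  # kcal/mol
print(f"eps_BB(300 K) = {alpha(300.0) * eps_BB0:.3f} kcal/mol")
```

Note that α(T) decreases monotonically from α_g toward α_A with increasing T, so the CG cohesive interactions are strengthened most in the glassy regime, where the loss of configurational entropy is most consequential.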
Force matching
Force matching (FM) was initially introduced by Ercolessi and Adams [40] to obtain numerically optimal interatomic potentials from first-principles ab initio calculations. The method is based on fitting interatomic potentials to the ab initio atomic forces of different atomic configurations, such as surfaces, clusters, liquids, and crystals. Later, Izvekov and Voth [41] used the FM approach to coarse-grain an interparticle force field for liquids by matching the forces on the CG beads computed from the atomistic trajectory and force data of the targeted system. In essence, they utilized a reference all-atomistic simulation and then iterated to find an optimal CG model satisfying optimality criteria. The CG beads are simply defined as the centers of mass of atomic groups, and the resulting description was also called the multiscale coarse-graining (MS-CG) representation. The method was initially tested on polar fluids, for example, water and methanol. Given M fitting parameters g_1, ..., g_M and a set of atomic configurations, force matching is achieved in a least-squares sense by directly minimizing the objective function:

χ^2 = (1/3LN) Σ_{l=1}^{L} Σ_{i=1}^{N} |F_il^ref − F_il^p(g_1, ..., g_M)|^2   (42)
where N is the number of atoms in each atomic configuration; L is the total number of atomic configurations used for fitting; and F_il^ref and F_il^p represent the reference ab initio force and the force predicted via the analytical form, respectively, acting on the ith atom in the lth atomic configuration.
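Because the predicted forces are linear in the expansion coefficients, minimizing Eq. (42) reduces to a linear least-squares problem. The sketch below uses an invented two-term force basis and synthetic reference forces purely for illustration:

```python
import numpy as np

# Least-squares force matching in the spirit of Eq. (42): the pair force is
# expanded in M = 2 assumed basis functions, F(r) = g1/r^2 + g2/r^7, and the
# coefficients are fitted to synthetic noisy "reference" forces.
rng = np.random.default_rng(0)
g_true = np.array([1.5, -0.8])

r = rng.uniform(0.9, 2.5, size=200)            # pair distances from L configs
basis = np.column_stack([r**-2, r**-7])        # model is linear in g
F_ref = basis @ g_true + rng.normal(0.0, 1e-3, r.size)

# Minimizing chi^2 = sum |F_ref - F(g)|^2 is a linear least-squares problem
g_fit, *_ = np.linalg.lstsq(basis, F_ref, rcond=None)
print(np.round(g_fit, 2))  # -> [ 1.5 -0.8]
```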
Despite its wide usage, the FM approach usually faces difficulties as the number of fitting parameters increases, that is, when many different kinds of interactions are involved in the calculations. In addition, fitting the potentials to the entire coordinate and force database simultaneously limits the accuracy and makes it difficult to use large sets of atomic configurations; the resulting potentials informed by ab initio calculations often do not perform satisfactorily in reproducing macroscopic properties. Thus, Izvekov and Voth [41] replaced the least-squares minimization of Eq. (42) with a configurational averaging scheme. This improved methodology is more consistent with the development of a CG force field, since the averaging helps capture the configurational-entropy contribution to the potential of mean force between CG beads.
Relative entropy
The relative entropy method was developed by Shell and coworkers [42,43] to derive CG potentials by minimizing an objective function called the relative entropy, presented here in its discrete form for coarse-graining:

S_rel = Σ_i ℘_FP(i) ln[℘_FP(i)/℘_CG(M(i))] + ⟨S_map⟩_FP   (43)

Here, i is an index that runs over the configurational microstates of the first-principles (FP) system; M(i) is the operation of the mapping function on FP configuration i, defining a corresponding CG configuration I = M(i); the microstate probabilities ℘ are determined by the respective interaction potentials of the FP and CG systems; and ⟨S_map⟩_FP is the mapping entropy, which measures the degeneracy of the mapping:

S_map(I) = ln Σ_i δ_{I,M(i)}   (44)
where δ is the Kronecker delta function. It should be noted that the mapping entropy is not a function of the CG interaction potential but a unique function of the mapping operator and the FP configurational weights. The same expressions for the relative entropy of the systems with continuous degrees of freedom could be written as follows:
S_rel = ∫ ℘_FP(r) ln[℘_FP(r)/℘_CG(M(r))] dr + ⟨S_map⟩_FP   (45)

S_map(R) = ln ∫ δ[M(r) − R] dr   (46)
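A minimal numerical sketch of the discrete relative entropy, Eqs. (43)-(44), for an invented four-state FP system mapped two-to-one onto two CG states:

```python
import numpy as np

# Discrete relative entropy of Eqs. (43)-(44) for a toy 4-state FP system
# mapped two-to-one onto 2 CG states (an illustrative example, not a real
# molecular mapping).
p_fp = np.array([0.3, 0.3, 0.2, 0.2])   # FP microstate probabilities
M = np.array([0, 0, 1, 1])              # mapping operator M(i)

def s_rel(p_cg):
    # mapping entropy S_map(I) = ln(number of FP states mapping onto I)
    omega = np.array([np.sum(M == I) for I in range(p_cg.size)])
    s_map = np.log(omega)[M]            # evaluated per FP microstate
    return np.sum(p_fp * (np.log(p_fp / p_cg[M]) + s_map))

# The CG model that reproduces the mapped FP distribution scores lower;
# here the FP weights are uniform within each CG state, so S_rel = 0.
print(s_rel(np.array([0.6, 0.4])), s_rel(np.array([0.5, 0.5])))
```

The CG model whose state probabilities match the mapped FP distribution attains the minimum, consistent with the scoring-metric interpretation discussed next.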
A key concept in the relative entropy method is that optimal CG models have minimal values of the relative entropy. The relative entropy provides a scoring metric for discriminating the quality of CG models with different configurational designs. In addition, the interaction potential for a given CG model can be optimized by minimizing the relative entropy with respect to its free parameters. In particular, it has been shown that the form of the CG potential directly determines which of the coarse-graining errors can be eliminated by relative entropy minimization.

Martini approach
Martini CG models and the corresponding Martini force field are widely used and highly recognized for studying the structure and thermodynamics of biomolecular systems. The computational efficiency and conceptual simplicity of the Martini model make it prevalent in the exploration of soft matter over broad spatiotemporal scales. The model was originally developed by Marrink and coworkers in 2004 for CG-MD simulations of lipids [4] and was later extended to proteins [44], sugars [45], and diverse other molecules [46]. The Martini force field allows a broad range of applications to diverse organic and solvent molecules without the need to reparameterize the force field each time. The Martini model is based on a four-to-one mapping scheme; that is, on average, four heavy atoms with their accompanying hydrogens are grouped into one interaction site. Accordingly, four water molecules are mapped to a single CG bead. For ring-like molecules such as benzene, a more detailed CG mapping is introduced, that is, a two- or three-to-one mapping of ring atoms onto one CG bead, to preserve the ring structure. Fig. 11 shows representative mappings of selected molecules. The Martini model classifies the interaction sites into four main types: polar (P), intermediately polar (N), apolar (C), and charged (Q). Each particle type has several subtypes, enabling a more accurate description of the chemical character of the underlying AA model. Specifically, subtypes are distinguished either by a letter representing the hydrogen-bonding capability (d = donor, a = acceptor, da = both, 0 = none) or by a number indicating the degree of polarity (from 1 = low polarity to 5 = high polarity). Generally, the Martini force field is parameterized by a combination of bottom-up and top-down approaches, namely, a hybrid approach [47]. Specifically, the nonbonded interactions are determined by reproducing
Particle-based mesoscale modeling and coarse-graining methods
103
Fig. 11 Selected molecules with superimposed Martini mappings. (a) Standard water bead representing four AA water molecules; (b) dimethoxyethane unit constituting poly(ethylene oxide); (c) citrate; (d) dipalmitoylphosphatidylcholine (DPPC) lipid; (e) organic solvent toluene molecule; (f) benzene ring; (g) poly(3-hexylthiophene) trimer; (h) polystyrene segment (five monomers shown); and (i) sarcosine-n chain (here, n = 5, denoting five monomers). Martini beads are shown as colored translucent spheres, and topological connections between beads are represented as dotted lines.
experimental partitioning free energies between polar and apolar phases of various chemical compounds, whereas the bonded interactions are defined and calibrated by reference to the underlying all-atomistic simulations. In the Martini force field, the nonbonded interaction of all particle pairs follows the 12-6 LJ potential of Eq. (18), where the well depth ε is determined by the polarity of the interacting particle types, ranging from ε = 5.6 kJ/mol for interactions between strongly polar groups to ε = 2.0 kJ/mol for interactions between polar and apolar groups, reflecting the hydrophobic effect. The effective particle size, set by the parameter σ, is 0.47 nm for all normal particle types. Exceptionally, for ring structures, σ is slightly reduced to 0.43 nm and ε is scaled to 75% of the standard value. In addition to the LJ interaction, charged particles (Q) possessing charge q interact with each other via a Coulombic energy function:

U_Coulombic = q_i q_j / (4π ε_0 ε_rel r_ij)    (47)
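The Martini nonbonded pair energy, combining the 12-6 LJ term of Eq. (18) with the Coulombic term of Eq. (47), can be sketched as follows. This is a minimal illustration, not the official Martini implementation: the function names are ours, the electric conversion factor 1/(4πε0) is expressed in kJ·mol⁻¹·nm·e⁻² units, and the shift function applied between 0.9 and 1.2 nm is omitted for brevity.

```python
import math

F_COULOMB = 138.935  # 1/(4*pi*eps0) in kJ mol^-1 nm e^-2
EPS_REL = 15.0       # Martini relative dielectric constant for explicit screening

def lj_energy(r, eps, sigma=0.47):
    """12-6 Lennard-Jones energy (kJ/mol) at separation r (nm), Eq. (18)."""
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 ** 2 - sr6)

def coulomb_energy(r, qi, qj):
    """Screened Coulombic energy (kJ/mol) between charges qi, qj (in e), Eq. (47)."""
    return F_COULOMB * qi * qj / (EPS_REL * r)

# Example: two strongly polar beads (eps = 5.6 kJ/mol) sit at the LJ minimum,
# located at r = 2^(1/6)*sigma, where the energy equals minus the well depth.
r_min = 2.0 ** (1.0 / 6.0) * 0.47
print(lj_energy(r_min, eps=5.6))    # approximately -5.6 (minus the well depth)
print(coulomb_energy(0.47, 1, -1))  # attractive Q-Q interaction at contact
```

The well depth and σ values above are the standard Martini defaults quoted in the text; in practice they are read from the force field's interaction matrix for the specific bead-type pair.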
104
Zhaofan Li et al.
where q_i and q_j represent the charges of particles i and j; ε_rel is the relative dielectric constant for explicit screening, with ε_rel = 15; and r_ij denotes the separation distance between beads i and j. Note that the nonbonded potentials in the Martini force field are applied in a shifted manner, that is, the LJ potential goes smoothly to zero from r = 0.9 nm to r_cut = 1.2 nm. In addition to the nonbonded interactions, the bonded part of the Martini force field is similar to Eqs. (20)–(22), accompanied by an additional improper dihedral angle potential:

U_Improper = K_iϕ (ϕ − ϕ_id)²    (48)

where K_iϕ is the energy constant of the improper dihedral angle potential and ϕ_id is the equilibrium improper dihedral angle. This potential is utilized to prevent out-of-plane distortions in more complicated geometries, such as ring structures.

Strain energy conservation
The “strain energy conservation” method is another systematic way to calibrate a CG potential from an atomistic reference, and it is particularly effective for systems with well-defined molecular ordering, such as crystalline materials. Under the presumption of small deformations, one assumes that the elastic strain energy stored in the AA model and in its corresponding CG model must be equal. Specifically, for crystalline materials (i.e., entropic effects assumed to be negligible), one can derive a direct link between atomistic potentials and the stress-strain behavior of the material using the “Cauchy-Born” (CB) rule, where the strain energy density under a uniform strain field is defined as follows:

ψ_AA(r) = (1/Ω) ∫_Ω U(r) dΩ = ψ_CG    (49)
where Ω is the volume of the system and U is the potential energy as a function of the atomic coordinates. Furthermore, a mapping function that relates the strain tensor to positions, r = D_Ω(ε_ij), can be inserted into Eq. (49), resulting in an expression of the strain energy density as a function of strain:

ψ_AA(ε_ij) = (1/Ω) Σ_α ϕ(r(ε_ij)) = ψ_CG(ε_ij)    (50)

where ϕ(r) is the functional form of the interaction potential (e.g., the LJ potential) used in the simulations.
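The logic of Eqs. (49) and (50) can be illustrated with a deliberately simple one-dimensional toy model (our own construction, not taken from the cited works): a harmonic atomistic chain coarse-grained 4:1, where enforcing equal strain energy density under the same uniform strain fixes the CG spring constant.

```python
# Toy 1D strain-energy-conservation example (our construction): an atomistic
# chain of harmonic bonds (stiffness k_aa, spacing a) is coarse-grained 4:1.
# Four springs in series give an effective stiffness k_aa/4, which is exactly
# the CG stiffness that makes psi_CG equal psi_AA under any uniform strain.
def strain_energy_density(k, r0, n_bonds, strain, length):
    # sum of (1/2)*k*(strain*r0)^2 over all bonds, divided by system length
    return n_bonds * 0.5 * k * (strain * r0) ** 2 / length

k_aa, a, n_bonds, eps = 100.0, 0.15, 400, 0.01
L = n_bonds * a
psi_aa = strain_energy_density(k_aa, a, n_bonds, eps, L)

k_cg = k_aa / 4.0  # calibrated CG stiffness (4:1 mapping, springs in series)
psi_cg = strain_energy_density(k_cg, 4 * a, n_bonds // 4, eps, L)
assert abs(psi_aa - psi_cg) < 1e-12  # psi_AA == psi_CG, as asserted by Eq. (50)
```

For realistic 2D/3D lattices the same energy-matching condition is imposed numerically over a suite of deformation modes rather than solved in closed form.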
The principle of this approach is to calibrate the CG potentials by enforcing energy conservation between the AA model and its CG analog. As a result, the mechanical properties of the target material system, such as Young's modulus, shear modulus, bending stiffness, and adhesion energy per surface area, predicted by the mesoscopic CG model are in good agreement with those obtained from atomistic simulations. Taking the graphene sheet as a representative model system (Fig. 12a), a series of atomistic mechanical test cases is performed to extract a set of CG potential parameters that capture the mechanical behavior of graphene sheets upon coarse-graining onto a square lattice geometry [48]. Specifically, the test suite includes four cases, as shown in Fig. 12c:
I. Uniaxial tensile loading to determine the elastic modulus from the stress-strain relation by applying a strain along a principal axis.
II. In-plane shear loading to determine the shear modulus from the stress-strain relation by fixing a single edge and loading the opposite edge in the transverse direction.
III. Out-of-plane bending to determine the bending stiffness from the bending test.
IV. An assembly of two or more graphene layers in equilibrium to determine the interfacial adhesion energy (normalized by the interfacial area) and the equilibrium spacing between the layers.
More recently, Ruiz et al. proposed another CG model of graphene with a 4:1 mapping scheme based on the hexagonal lattice geometry (Fig. 12b) [49], which was derived directly from atomistic simulations by asserting equivalence between the elastic strain energy and a CG molecular potential.
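Test case I, for instance, amounts to fitting the small-strain slope of the atomistic stress-strain record. A minimal sketch with synthetic data (not actual atomistic output; the modulus value is made up for illustration) is:

```python
import numpy as np

# Estimate an elastic modulus as the small-strain slope of a stress-strain
# record.  The data here are synthetic: a linear response of slope E_true
# with a mild cubic softening term standing in for atomistic nonlinearity.
strain = np.linspace(0.0, 0.02, 11)
E_true = 1000.0                                  # GPa, synthetic value
stress = E_true * strain - 2.0e4 * strain ** 3   # synthetic "measured" stress

E_fit = np.polyfit(strain, stress, 1)[0]         # slope of the linear regime
assert abs(E_fit - E_true) / E_true < 0.05       # recovered within a few percent
```

The same fitting step, applied to the shear and bending records of cases II and III, yields the remaining elastic constants used to back out the CG force field parameters.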
This CG model was shown to quantitatively reproduce complex mechanical features comparable to atomistic reactive force fields at a fraction of the computational cost, faithfully capturing the asymmetries between the armchair and zigzag directions in the nonlinear large-deformation regime, the interlayer shear response, and anisotropic features such as superlubricity. A summary of the force field contributions, their functional forms, and the calibrated parameters of the CG graphene model is presented in Table 3. The strain energy conservation paradigm has since been employed for a variety of CG models to preserve equivalent physical processes from atomistic simulations. For example, approaches based on strain energy conservation have been applied to capture the anisotropic mechanical properties of aligned cellulose nanocrystal (CNC) films, including their modulus, strength, and failure properties [50].
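As a quick sanity check, the Morse-type bond term of Table 3 can be evaluated directly from its tabulated parameters (the helper names below are ours):

```python
import math

# Bond term of the CG graphene model in Table 3:
#   V_b(d) = D0 * [1 - exp(-alpha*(d - d0))]^2, for d < d_cut
D0, alpha, d0 = 196.38, 1.55, 2.8  # kcal/mol, 1/angstrom, angstrom

def v_bond(d):
    return D0 * (1.0 - math.exp(-alpha * (d - d0))) ** 2

assert v_bond(d0) == 0.0                           # minimum at the equilibrium length
assert v_bond(d0 + 0.1) > 0 and v_bond(d0 - 0.1) > 0

# Near d0 the Morse form reduces to (1/2)*k_eff*(d - d0)^2 with k_eff = 2*D0*alpha^2,
# the effective harmonic bond stiffness of the CG lattice.
k_eff = 2.0 * D0 * alpha ** 2  # kcal/mol/angstrom^2
```

The angle, dihedral, and nonbonded terms of Table 3 can be checked the same way against their tabulated constants.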
Fig. 12 (a) Atomistic graphene sheet with the overlaid coarse-graining model, where the distance between neighboring mesoscale particles is r0 [48]. (b) Illustration of the 4:1 mapping of graphene, where each bead in the coarse-graining model represents four atoms [49]. (c) Different loading geometries used in the all-atomistic simulations to derive the mesoscale model parameters.
Table 3 Four types of interactions, corresponding functional forms, and related parameters of the CG model of graphene obtained using the strain energy conservation approach.

Bond:      V_b(d) = D0 [1 − e^(−α(d − d0))]², for d < d_cut
           D0 = 196.38 kcal/mol, α = 1.55 Å⁻¹, d0 = 2.8 Å, d_cut = 3.25 Å
Angle:     V_a(θ) = k_θ (θ − θ0)²
           k_θ = 409.40 kcal/mol, θ0 = 120 degrees
Dihedral:  V_d(ϕ) = k_ϕ [1 − cos(2ϕ)]
           k_ϕ = 4.15 kcal/mol
Nonbonded: V_nb(r) = 4ε_LJ [(σ_LJ/r)¹² − (σ_LJ/r)⁶], for r < r_cut
           ε_LJ = 0.82 kcal/mol, σ_LJ = 3.46 Å, r_cut = 12 Å

4. Concluding remarks
In conclusion, the main goal of CG modeling is to simulate complex hierarchical molecular systems by bridging the atomistic and mesoscopic scales. Coarse-graining eliminates unessential degrees of freedom, which results in a smoother energy landscape, in turn allowing a longer timestep than in an atomistic simulation and enabling the exploration of important phenomena (e.g., phase separation and self-assembly) at extended spatiotemporal scales that are often inaccessible to fully atomistic models. Although CG models provide significant computational efficiency, most suffer from severe representability and transferability issues. Bottom-up and top-down CG methods aim at reproducing features of atomistic-scale simulations and at satisfying macroscopic experimental properties, respectively. It should be noted that the electrostatic, Coulombic, and hydrogen-bonding interactions are already embedded in the fully atomistic system from which the CG bonded and nonbonded interactions are derived in the “bottom-up” scheme; therefore, these chemical details are usually not explicitly incorporated in the CG models. However, for systems where such chemical details, for example, hydrogen bonds and atomic charges, are important, one must choose the CG model judiciously to avoid physically inaccurate or wrong outcomes due to the lack of those details. Moreover, the optimized
CG potential depends on the appropriate selection of the initial thermodynamic condition and of the target physical quantities to be preserved. For example, CG models derived at one thermodynamic state are usually not transferable to another set of thermodynamic states, because the temperature dependence of the activation or free energy is altered, which impedes their practical usage. In addition, the smoother energy landscape and the loss of atomic friction can lead to a significant acceleration of the dynamics and a softer mechanical response, due to the decrease in configurational entropy upon coarse-graining. As a result, the derived CG potential yields less accurate predictions of dynamic and mechanical properties at varying thermodynamic states, necessitating the development of temperature-transferable CG models. Several approaches have been developed to address these issues, such as the DPD, pressure-matching, and energy renormalization approaches. These and other methods can, to some extent, compensate for the missing atomic degrees of freedom and molecular friction in different fashions: the incorporation of additional friction force terms in the DPD method, the correction of the instantaneous excess pressure in pressure-matching, and the application of the “entropy-enthalpy compensation” principle in the ER approach. However, the ideal outcome would be a CG model and potential that faithfully preserve the thermodynamic, structural, and dynamical properties of the AA counterpart at varying states, which remains an open research question. It is important to realize that there is no universal strategy to coarse-grain a complex material such as a polymer; different CG models are best suited to different scenarios. For instance, the FENE model was devised to qualitatively study the physical behavior of polymers with an emphasis on simplicity and computational efficiency.
To represent real polymer chains more faithfully, certain chemical features, such as chain stiffness and architecture, can be incorporated into the generic bead-spring CG model through modified force field terms. The Martini force field is a very popular choice for biomolecular simulations and has been extended, through reasonable modifications, to many other common polymers, such as PS, PEO, and P3HT, and to small organic molecules. The CG model derived from the IBI approach retains the most chemical detail of the polymer and can be further improved by the recently developed energy renormalization approach, leading to more accurate predictions of the dynamics and thermomechanical properties of polymers over a wide temperature range. For systems with well-defined molecular ordering and for crystalline materials, such as graphene and cellulose nanocrystals, the entropy change upon coarse-graining is not as significant; one can calibrate
the CG potential of such systems using the strain energy conservation approach against reference counterparts (e.g., DFT, atomistic simulations, or experiments) to preserve the elasticity, strength, and failure strain of the system. For a given material system, one needs to choose a suitable CG modeling strategy carefully, based on the material type, chemistry, and properties of interest, and then systematically carry out the corresponding CG system construction, potential calibration, and subsequent application. Developing a CG model can also become burdensome for a complex material system with multiple chemical components involving heterogeneous chemical interactions and structures. For instance, due to the highly complex structure of conjugated polymers, the corresponding CG models often contain multiple CG bead species and interactions, complicating the development and calibration of the CG potential because of the difficulty of optimizing high-dimensional parameter sets simultaneously. To address this issue, machine learning (ML) has emerged as one of the most promising and powerful tools across materials science and engineering. Inspired by the success of big data and artificial intelligence (e.g., deep neural networks or kernel methods) in designing and discovering materials, these methods are currently being applied to the development of CG models that faithfully capture both chemistry-specific and temperature-dependent properties. Top-down and bottom-up strategies, in conjunction with the discussed coarse-graining methods, can be incorporated into an ML framework that is trained on experimental and simulation data, respectively, to optimize the CG potentials and match defined target properties, achieving high efficiency in solving high-dimensional problems.
References
[1] D. Marx, J. Hutter, Ab Initio Molecular Dynamics: Basic Theory and Advanced Methods, Cambridge University Press, 2009.
[2] G.S. Grest, K. Kremer, Molecular dynamics simulation for polymers in the presence of a heat bath, Phys. Rev. A 33 (1986) 3628.
[3] W.G. Noid, Perspective: coarse-grained models for biomolecular systems, J. Chem. Phys. 139 (2013) 090901.
[4] S.J. Marrink, A.H. De Vries, A.E. Mark, Coarse grained model for semiquantitative lipid simulations, J. Phys. Chem. B 108 (2004) 750–760.
[5] W. Xu, W. Xia, Energy renormalization for coarse-graining polymers with different fragilities: predictions from the generalized entropy theory, Macromol. Theory Simul. (2020) 1900051.
[6] D.D. Hsu, W. Xia, S.G. Arturo, S. Keten, Systematic method for thermomechanically consistent coarse-graining: a universal model for methacrylate-based polymers, J. Chem. Theory Comput. 10 (2014) 2514–2527.
[7] F. Müller-Plathe, Coarse-graining in polymer simulation: from the atomistic to the mesoscopic scale and back, ChemPhysChem 3 (2002) 754–769.
[8] L. Verlet, Computer “experiments” on classical fluids. I. Thermodynamical properties of Lennard-Jones molecules, Phys. Rev. 159 (1967) 98.
[9] T. Schlick, Molecular Modeling and Simulation: An Interdisciplinary Guide, vol. 2, Springer, 2010.
[10] P.J. Hoogerbrugge, J.M.V.A. Koelman, Simulating microscopic hydrodynamic phenomena with dissipative particle dynamics, EPL 19 (1992) 155–160.
[11] P. Español, P.B. Warren, Perspective: dissipative particle dynamics, J. Chem. Phys. 146 (2017) 1–16.
[12] P. Español, P. Warren, Statistical mechanics of dissipative particle dynamics, EPL 30 (1995) 191–196.
[13] M.B. Liu, G.R. Liu, L.W. Zhou, J.Z. Chang, Dissipative particle dynamics (DPD): an overview and recent developments, Arch. Comput. Methods Eng. 22 (2015) 529–556.
[14] D. Reith, M. Pütz, F. Müller-Plathe, Deriving effective mesoscale potentials from atomistic simulations, J. Comput. Chem. 24 (2003) 1624–1636.
[15] S. Izvekov, G.A. Voth, A multiscale coarse-graining method for biomolecular systems, J. Phys. Chem. B 109 (2005) 2469–2473.
[16] M.S. Shell, Coarse-graining with the relative entropy, Adv. Chem. Phys. 161 (2016) 395–441.
[17] A.P. Lyubartsev, A. Laaksonen, Calculation of effective interaction potentials from radial distribution functions: a reverse Monte Carlo approach, Phys. Rev. E 52 (1995) 3730–3737.
[18] M. Grmela, H.C. Öttinger, Dynamics and thermodynamics of complex fluids. I. Development of a general formalism, Phys. Rev. E 56 (1997) 6620–6632.
[19] K. Kremer, G.S. Grest, Dynamics of entangled linear polymer melts: a molecular-dynamics simulation, J. Chem. Phys. 92 (1990) 5057–5086.
[20] S. Cheng, M.O. Robbins, Capillary adhesion at the nanometer scale, Phys. Rev. E 89 (2014) 1–14.
[21] W.-S. Xu, J.F. Douglas, K.F. Freed, Influence of cohesive energy on the thermodynamic properties of a model glass-forming polymer melt, Macromolecules 49 (2016) 8341–8354.
[22] S. Zhang, et al., Toward the prediction and control of glass transition temperature for donor–acceptor polymers, Adv. Funct. Mater. (2020) 2002221.
[23] A. Chremos, J.F. Douglas, A comparative study of thermodynamic, conformational, and structural properties of bottlebrush with star and ring polymer melts, J. Chem. Phys. 149 (2018).
[24] R. Tjörnhammar, O. Edholm, Reparameterized united atom model for molecular dynamics simulations of gel and fluid phosphatidylcholine bilayers, J. Chem. Theory Comput. 10 (2014) 5706–5715.
[25] W.L. Jorgensen, J.D. Madura, C.J. Swenson, Optimized intermolecular potential functions for liquid hydrocarbons, J. Am. Chem. Soc. 106 (1984) 6638–6646.
[26] L.D. Schuler, X. Daura, W.F. van Gunsteren, An improved GROMOS96 force field for aliphatic hydrocarbons in the condensed phase, J. Comput. Chem. 22 (2001) 1205–1218.
[27] C.D. Wick, M.G. Martin, J.I. Siepmann, Transferable potentials for phase equilibria. 4. United-atom description of linear and branched alkenes and alkylbenzenes, J. Phys. Chem. B 104 (2000) 8008–8016.
[28] A. Izumi, Y. Shudo, K. Hagita, M. Shibayama, Molecular dynamics simulations of cross-linked phenolic resins using a united-atom model, Macromol. Theory Simul. 27 (2018).
[29] T.C. Moore, C.R. Iacovella, C. McCabe, Derivation of coarse-grained potentials via multistate iterative Boltzmann inversion, J. Chem. Phys. 140 (2014).
[30] Y. Wang, Z. Li, K. Niu, W. Xia, Energy renormalization for coarse-graining of thermomechanical behaviors of conjugated polymer, Polymer 256 (2022) 125159.
[31] S. Plimpton, Fast parallel algorithms for short-range molecular dynamics, J. Comput. Phys. 117 (1995) 1–19.
[32] A.P. Lyubartsev, A. Laaksonen, On the reduction of molecular degrees of freedom in computer simulations, in: M. Karttunen, et al. (Eds.), Novel Methods in Soft Matter Simulations. Lecture Notes in Physics, vol. 640, Springer, 2004, pp. 219–244.
[33] W. Xia, et al., Energy renormalization for coarse-graining polymers having different segmental structures, Sci. Adv. 5 (2019) eaav4683.
[34] G. Adam, J.H. Gibbs, On the temperature dependence of cooperative relaxation properties in glass-forming liquids, J. Chem. Phys. 43 (1965) 139–146.
[35] J. Dudowicz, K.F. Freed, J.F. Douglas, Generalized entropy theory of polymer glass formation, Adv. Chem. Phys. 137 (2008) 125–222.
[36] R. Lumry, S. Rajender, Enthalpy–entropy compensation phenomena in water solutions of proteins and small molecules: a ubiquitous property of water, Biopolymers 9 (1970) 1125–1227.
[37] D.S. Simmons, M.T. Cicerone, Q. Zhong, M. Tyagi, J.F. Douglas, Generalized localization model of relaxation in glass-forming liquids, Soft Matter 8 (2012) 11455–11461.
[38] B.A. Pazmiño Betancourt, P.Z. Hanakata, F.W. Starr, J.F. Douglas, Quantitative relations between cooperative motion, emergent elasticity, and free volume in model glass-forming polymer materials, Proc. Natl. Acad. Sci. U. S. A. 112 (2015) 2966–2971.
[39] W. Xia, et al., Energy renormalization for coarse-graining the dynamics of a model glass-forming liquid, J. Phys. Chem. B 122 (2018) 2040–2045.
[40] F. Ercolessi, J.B. Adams, Interatomic potentials from first-principles calculations: the force-matching method, EPL 26 (1994) 583–588.
[41] S. Izvekov, G.A. Voth, Multiscale coarse graining of liquid-state systems, J. Chem. Phys. 123 (2005) 1–13.
[42] M.S. Shell, The relative entropy is fundamental to multiscale and inverse thermodynamic problems, J. Chem. Phys. 129 (2008) 1–7.
[43] A. Chaimovich, M.S. Shell, Coarse-graining errors and numerical optimization using a relative entropy framework, J. Chem. Phys. 134 (2011) 1–15.
[44] L. Monticelli, et al., The MARTINI coarse-grained force field: extension to proteins, J. Chem. Theory Comput. 4 (2008) 819–834.
[45] J.J. Uusitalo, H.I. Ingólfsson, S.J. Marrink, I. Faustino, Martini coarse-grained force field: extension to carbohydrates, Biophys. J. 113 (2017) 246–256.
[46] S.J. Marrink, D.P. Tieleman, Perspective on the Martini model, Chem. Soc. Rev. 42 (2013) 6801–6822.
[47] C. Hegedüs, Á. Telbisz, T. Hegedüs, B. Sarkadi, C. Özvegy-Laczka, Lipid regulation of the ABCB1 and ABCG2 multidrug transporters, Adv. Cancer Res. 125 (2015) 97–137.
[48] S. Cranford, D. Sen, M.J. Buehler, Meso-origami: folding multilayer graphene sheets, Appl. Phys. Lett. 95 (2009) 3–5.
[49] L. Ruiz, W. Xia, Z. Meng, S. Keten, A coarse-grained model for the mechanical behavior of multi-layer graphene, Carbon 82 (2015) 103–115.
[50] X. Qin, S. Feng, Z. Meng, S. Keten, Optimizing the mechanical properties of cellulose nanopaper through surface energy and critical length scale considerations, Cellulose 24 (2017) 3289–3299.
CHAPTER FOUR
Fast homogenization through clustering-based reduced-order modeling
Bernardo Proença Ferreira a,b, Francisco Manuel Andrade Pires a, and Miguel Aníbal Bessa b
a Department of Mechanical Engineering, Faculty of Engineering, University of Porto, Porto, Portugal
b Faculty of Mechanical, Maritime and Materials Engineering, Delft University of Technology, Delft, The Netherlands
Contents
1. Introduction
2. Computational homogenization and multiscale modeling
3. Clustering-based reduced-order modeling
4. Overview of self-consistent clustering analysis
5. Preliminaries on micromechanics
   5.1 Introduction
   5.2 Problem formulation
   5.3 The auxiliary homogeneous problem
   5.4 Green operator under linear elastic isotropy
   5.5 The Lippmann-Schwinger integral equation
6. Offline stage
   6.1 Step 1: Conduct DNS linear elastic analyses
   6.2 Step 2: Perform clustering-based domain decomposition
   6.3 Step 3: Compute cluster interaction tensors
7. Online stage
   7.1 Continuous Lippmann-Schwinger integral equation
   7.2 Discretized Lippmann-Schwinger integral equation
   7.3 Numerical solution of the reduced microscale equilibrium problem
   7.4 The homogenized consistent tangent modulus
   7.5 The reference homogeneous elastic material
   7.6 Self-consistent scheme
8. Numerical application
   8.1 Definition of the heterogeneous material RVE
   8.2 Offline stage: Conduct DNS linear elastic analyses
   8.3 Offline stage: Perform clustering-based domain decomposition
   8.4 Offline stage: Compute cluster interaction tensors
   8.5 Online stage: Multiscale equilibrium problem
9. Concluding remarks and future directions
Appendix
   A Linearization of the discretized Lippmann-Schwinger equilibrium equations
   B Self-consistent scheme optimization problem
References

Fundamentals of Multiscale Modeling of Structural Materials
https://doi.org/10.1016/B978-0-12-823021-3.00012-9
Copyright © 2023 Elsevier Inc. All rights reserved.
Abbreviations
BVP    boundary value problem
CIT    cluster interaction tensor
CPU    central processing unit
CROM   clustering-based reduced-order model
CRVE   cluster reduced representative volume element
DNS    direct numerical simulation
DOF    degrees of freedom
FEM    finite element method
FFT    fast Fourier transform
ICME   integrated computational materials engineering
PMVP   principle of multiscale virtual power
ROM    reduced-order model
RVE    representative volume element
SCA    self-consistent clustering analysis
SCT    strain concentration tensor
Notation
a    scalar
a    first-order tensor
A    second-order tensor
A    fourth-order tensor
a    computational vector storage
A    computational matrix storage
A    body, space, or set
Operators and symbols
d(•)        infinitesimal
Δ(•)        incremental quantity
δ(•)        iterative quantity
∂(•)/∂a     partial derivative with respect to a
F{•}        Fourier transform
F⁻¹{•}      inverse Fourier transform
DFT{•}      discrete Fourier transform
DFT⁻¹{•}    inverse discrete Fourier transform
div[•]      spatial divergence operator
div₀[•]     material divergence operator
∇(•)        spatial gradient operator
∇₀(•)       material gradient operator
tr[•]       matrix trace
‖(•)‖       Frobenius norm
(•) ⊗ (•)   tensorial product
(•) : (•)   tensorial double contraction
(•) * (•)   appropriate product
Subscripts and superscripts
(•)^(I)   cluster-related quantity (also (J) and (K))
(•)_0     reference material-related quantity
(•)^(k)   general iteration of the Newton-Raphson method
(•)_n     general macroscopic increment
(•)_m     general microscopic increment
(•)_0     reference configuration, initial time
(•)_μ     microscale-related quantity
(•)_s     sampled quantity
(•)^e     elastic nature
(•)^p     plastic nature
Accents
(•)˙   first-order time derivative
(•)¨   second-order time derivative
(•)^   incremental constitutive function
(•)~   fluctuation quantity
(•)¯   defined in the frequency domain
Sets, domains, and boundaries
Ωμ       RVE domain in the current configuration
∂Ωμ      RVE domain boundary in the current configuration
Ωμ,0     RVE domain in the reference configuration
∂Ωμ,0    RVE domain boundary in the reference configuration
1. Introduction

One of the most important challenges in the mechanics of materials community is the development of accurate and efficient predictions of the macroscopic behavior of materials with complex heterogeneous microstructures. Such predictions are crucial to the development of new materials with improved functionality, enhanced properties, and reduced development cost through a paradigm designated Integrated Computational Materials Engineering (ICME) [1, 2]. ICME can be defined as an integrative computational framework that aims at designing new materials, products, or structures that meet specific performance criteria and/or reduce the associated costs. In its most general form, coined hybrid ICME [2], it involves the integration of process-structure-property-performance relationships (horizontal ICME) with multiscale process-structure and structure-property modeling over different length scales (vertical ICME). Through the continuous integration of the most advanced experimental, computational, and data science methodologies, this framework can effectively provide the basis of virtual materials testing standards. However, this is only possible when large and reliable datasets for the quantities of interest are available. This chapter focuses on the important challenge of predicting macroscopic mechanical behavior both accurately and efficiently from microscale simulations of representative volume elements (RVEs) of materials via clustering-based reduced-order models (CROMs), one of the most important bottlenecks in ICME frameworks. The chapter starts with a short introduction to homogenization and multiscale modeling, then discusses CROMs with special emphasis on the self-consistent clustering analysis (SCA), and finishes with an illustrative application.
2. Computational homogenization and multiscale modeling

The main idea in multiscale mechanics consists of extracting predictive macroscopic properties and behavior of heterogeneous materials by resolving the topological, physical, and statistical details of the underlying microstructure. Of particular importance here is the proper definition and quantification of the relationships bridging different scales. The neighboring field of mathematical homogenization (or field averaging theorems) inspired
engineers to apply homogenization methods to engineering problems; the homogenization method, originally coined by Babuška [3], is nowadays a well-established discipline for the analysis of heterogeneous materials in multiscale mechanics [4]. One of the most powerful methods in the field is computational homogenization, a topic that has been the object of extensive research and is fundamental to the most recent paradigms of multiscale modeling [4–8]. This method relies on the continuous interchange of information between scales through spatial homogenization and is based on the nested solution of boundary value problems (BVPs) at different scales. Usually, the approach considers two scales,a (1) the macroscale, where the material's macroscopic constitutive response is sought, and (2) the microscale, where numerical computations are conducted over RVEs in order to account for the microstructural heterogeneities and mechanisms. Originally proposed by Hill [9], an RVE consists of a representative computational domain of the material's microstructure in an average sense: a domain that captures the effective material behavior (physically meaningful) and preserves the geometrical and morphological complexity of the microstructure (statistically representative). Along with the suitable choice and enforcement of boundary conditions, the proper definition of the RVE is critical to achieving high-fidelity predictive capabilities. A complete variational foundation for a multiscale constitutive theory of solids based on computational homogenization has been established by de Souza Neto and coworkers [10–16]. Since the late 1990s, several multiscale models have been built upon the principles of computational homogenization.
A particular numerical approach based on the finite element method (FEM) started with the pioneering work of Renard and Marmonier [17], who first introduced the idea of using a finite element discretization of the microscale representative domain. Smit et al. [18] introduced a multilevel finite element model, where a unique discretized RVE is attached to each macroscopic integration point. This approach received much attention, and further developments followed in subsequent years [19–30]. From these contributions emerged the so-called FE2 method, originally coined by Feyel [31], where finite element discretizations and computations are performed at both the macroscale

a This approach can be readily extended to multiple scales as long as the principle of scale separation is satisfied and a meaningful homogenization procedure can be performed between scales.
Fig. 1 Schematic of a finite strain first-order strain-driven hierarchical multiscale model based on computational homogenization. F(x,t) denotes the macroscale deformation gradient, P(x,t) denotes the macroscale homogenized first Piola-Kirchhoff stress tensor, and A(x,t) denotes the consistent tangent modulus.
and microscale in a coupled manner. The solution of a microscale BVP is associated with each macroscale integration point (see Fig. 1). Given its quantitative reliability and completeness, the first-order formulation is well established and frequently used in both academia and industry. However, a major disadvantage of this method lies in its high computational cost, both in terms of computational time and memory requirements, which limits its practical applicability and often calls for simplified representative domains and/or suboptimal mesh refinement. Nonetheless, some valuable efforts have been put forth to alleviate these limitations, namely through FE-FFT schemes, in which the microscale equilibrium problem is solved by fast Fourier transform (FFT)-based homogenization [32, 33]. Despite the highly accurate solutions provided by so-called direct numerical simulation (DNS) methods (e.g., FEM, FFT methods), the associated computational costs may severely limit the engineering applicability of the approaches mentioned earlier within ICME frameworks, as discussed in Section 1. In this context, the so-called reduced-order models (ROMs) become essential and can be considered a major piece of the ICME framework, given current computational resources.
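The core homogenization operation, averaging the microscale stress field over the RVE to obtain the macroscale stress, can be sketched as follows. The per-element data here are synthetic, standing in for an actual microscale BVP solution, and the variable names are ours.

```python
import numpy as np

# First-order homogenization sketch: the macroscale stress is the volume
# average of the microscale stress field over the RVE.  In a real FE / FFT
# solution, `vol` and `stress` would come from the discretized microscale BVP.
rng = np.random.default_rng(0)
n_elem = 200
vol = rng.uniform(0.5, 1.5, n_elem)        # element (or voxel) volumes
stress = rng.normal(size=(n_elem, 3, 3))   # element stress tensors

# volume-weighted average: (1/V) * sum_e vol_e * stress_e
stress_macro = np.einsum("e,eij->ij", vol, stress) / vol.sum()
assert stress_macro.shape == (3, 3)
```

The homogenized deformation measure and consistent tangent are obtained analogously, which is precisely the information exchanged between scales in Fig. 1.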
Fast homogenization through clustering-based reduced-order modeling
3. Clustering-based reduced-order modeling

Although particularly leveraged in recent years, a myriad of so-called ROMs has been developed over the last decades, aiming to find an appropriate balance between accuracy and computational cost. Different approaches and strategies have proven successful, and one can enumerate the following among the most well-known ROMs: analytical micromechanical methods [34], transformation field analysis [35], nonuniform transformation field analysis [36, 37], principal component analysis [38], proper orthogonal decomposition [39], proper generalized decomposition [40–42], the reduced basis method [43, 44], the high-performance reduced-order model [45], the empirical interpolation method [46], the best point interpolation method [47], missing point estimation [48], the discrete empirical interpolation method [49], energy conserving sampling and weighting [50], the empirical cubature method [51], and the wavelet reduced-order model [52]. These methods have advantages and disadvantages, mainly related to the nature of the applications where they are employed, and, in some cases, the potential to be coupled with each other to combine the best characteristics of each method.

A particular family of ROMs, coined clustering-based ROMs (CROMs), has been recently introduced and comprises models that rely on unsupervised machine learning cluster analysis algorithms to perform the model reduction. This class of ROMs emerged from the pioneering contribution of Liu and coworkers [53], who proposed the self-consistent clustering analysis (SCA). Several contributions by Liu and coworkers have been published since then with the aim of improving the formulation, integrating the method into state-of-the-art modeling frameworks, and exploring different applications. Bessa et al. [54] incorporated the SCA in their proposed data-driven modeling framework in order to build the material response database in an accurate and efficient way. Liu et al. [55] formulated a regularized multiscale damage method (coupling a nonlocal macroscopic damage model with a stable microscale damage homogenization algorithm) relying on efficient SCA analyses to conduct FE-SCA multiscale simulations. Yan et al. [56–58] proposed a data-driven multiscale multiphysics modeling framework to link process-structure-property-performance in additive manufacturing, taking advantage of the SCA in the structure-property multiscale modeling. Building on the work laid down in Liu's Ph.D. thesis, Refs. [59, 60] generalized the SCA formulation to finite strains, which was readily applied to perform FE-SCA multiscale simulations of polycrystalline materials and to predict the nucleation and growth of
voids in ductile materials by debonding and fragmentation, as well as to develop a fatigue strength prediction model [61]. More recently, He et al. [62], Gao et al. [63], and Han et al. [64] further explored the SCA formulation in the multiscale modeling of composite materials. Other relevant contributions stem from Wulfinghoff and coworkers [65], who proposed a CROM derived from the Hashin-Shtrikman variational principle (HSFE), which turned out to be equivalent to the SCA formulation. This method was later integrated into an efficient FE2-like multiscale homogenization approach by Cavaliere et al. [66]. Recently, Schneider [67] provided valuable mathematical foundations for both the SCA and HSFE CROMs, namely by demonstrating the well-posedness of the equilibrium equations and convergence upon cluster refinement. Taking inspiration from the work of Liu and coworkers, Li et al. [68] reformulated the SCA in a consistent FEM framework, proposing the FEM-cluster-based analysis (FCA) method. This method was later enhanced through the principle of minimum complementary energy in a cluster-based variational way [69].

With the hope of providing a fundamental understanding of the main ideas behind the CROMs mentioned earlier, the remainder of this chapter is entirely focused on the pioneering SCA method. Given its wide application in the literature and following the original paper of the SCA method [53], a standard macroscale/microscale hierarchical model is considered here. Nonetheless, it is remarked that the method can be applied between any two length scales, ranging from the macroscale to the nanoscale, as long as the homogenization procedure between them is physically meaningful. Moreover, the formulation is suitable for the solution of quasistatic equilibrium problems; to the best of the authors' knowledge, a dynamic equilibrium formulation has not yet been addressed. In this context, the main underlying assumptions, key ideas, and formulation are presented in a step-by-step manner, and the numerical details required for a suitable computational implementation are discussed. A simple numerical example considering standard J2 nonlinear elastoplastic material behavior is then presented to illustrate the main features and performance of the method, as well as to further encourage readers to perform their own implementation. Finally, some concluding remarks are established and future directions are addressed.
4. Overview of self-consistent clustering analysis

Before diving into the details behind the SCA formulation, it is helpful to have an initial overview of the method, as schematically illustrated in Fig. 2.

Fig. 2 Schematic of the SCA clustering-based reduced-order model proposed by Liu and coworkers [53].

Similar to other methods belonging to the CROM family, the SCA is a two-stage algorithm involving an offline training/learning stage and an online prediction stage. The offline stage performs the clustering-based model reduction procedure, compressing the high-fidelity discretized RVE^b into a cluster-reduced RVE (CRVE). The main idea is to perform a cluster analysis over the RVE and decompose its spatial domain into a given number of material clusters, each grouping points with similar mechanical behavior. As detailed in the following section, Liu and coworkers [53] chose to evaluate such similarity based on the so-called local elastic strain concentration tensors, whose computation demands a given set of DNS linear elastic analyses. At last, the cluster interaction tensors that naturally arise in the formulation must be computed, establishing a strain-stress-type relation between all pairs of clusters.

The online stage is then formulated over the CRVE and involves the actual solution of the multiscale equilibrium problem when the CRVE is subjected to macroscale strain and/or stress constraints. The weak formulation of the equilibrium problem is based on the well-known Lippmann-Schwinger equation which, after a suitable discretization, leads to a cluster-wise Lippmann-Schwinger system of equilibrium equations. In general, the microscale equilibrium problem is nonlinear and, after being properly linearized, can be efficiently solved through a standard Newton-Raphson iterative scheme.^c Once the microscale equilibrium problem is solved, the macroscale mechanical response is computed through computational homogenization.

As reported in the previous section, the SCA has proven its usefulness in accelerating the predictions of quasistatic equilibrium analyses under both infinitesimal and finite strains. It has been successfully applied to different types of heterogeneous materials with constitutive behavior ranging from linear elasticity to nonlinear elasto-viscoplasticity. It is truly remarkable that although the SCA offline stage only requires conducting a well-defined set of linear elastic analyses under orthogonal loading conditions, the online stage may include any set of nonlinear constitutive laws associated with each material phase and can be performed for any set of macroscale loading conditions without redoing the offline stage. Moreover, this CROM has the desirable characteristic of allowing the analyst to choose a suitable degree of model compression to achieve the desired degree of accuracy, that is, a tradeoff between accuracy and computational cost. From a computational point of view, it is also relevant to point out that the method is inherently nonintrusive, and its integration in multiscale coupled analyses based on computational homogenization, multiscale data-driven frameworks, and even commercial software [54, 57, 60] is straightforward.

On the downside, the need for an offline stage to perform the model reduction represents a significant computational overhead. Although one may be tempted to attribute such cost to the DNS linear elastic analyses, the cluster analysis and the computation of the cluster interaction tensors contribute the most to the overhead. In addition, the method does not scale well with the number of clusters. On the one hand, the computation of the cluster interaction tensors is computationally demanding and must be performed for every pair of material clusters. On the other hand, the method is nonlocal in nature, in the sense that the equilibrium of a given material cluster accounts for the interaction with all the remaining clusters. Consequently, the nonlocal formulation results in a dense tangent matrix in the numerical solution of the equilibrium problem.

Having the SCA's big picture in mind, each step of the underlying formulation is discussed in the following sections by following the scheme of Fig. 2. The infinitesimal strain formulation is addressed here following the original SCA proposal in Ref. [53], the finite strain extension being straightforward and done in analogy with the generalization of FFT-based solution schemes [59, 60, 65].

^b The designation "high-fidelity" RVE is here employed to describe the RVE associated with a given DNS homogenization method, for example, FEM or FFT-based homogenization schemes, where all the degrees of freedom resulting from a suitable spatial discretization procedure are accounted for in the solution process.
^c As later discussed, the solution of the microscale equilibrium problem is intrinsically related to a fictitious reference elastic material, a self-consistent scheme being employed to deal with such dependency in the SCA formulation.
5. Preliminaries on micromechanics

5.1 Introduction

Despite the simplicity of the SCA [53], it is believed that understanding some of its most important underlying concepts requires some background knowledge on micromechanics. With this in mind, this section is included in order to provide at least a basic comprehension of such concepts and a deeper insight into the SCA formulation. Given its reasonably comprehensible formulation, the approach followed here is based on the well-known pioneering contribution of Moulinec and Suquet [70]. In that work, Moulinec and Suquet propose a micromechanics-based homogenization method based on Fourier series and on the solution of a Lippmann-Schwinger-type equilibrium
equation. Despite being classified as a DNS solution method, its theoretical formulation shares many common aspects with the SCA, making it a suitable basis to fulfill the objective of this section. As discussed in Section 6.1, it turns out that this method can also be efficiently employed to perform the required DNS analyses in the SCA offline stage.

In order to clearly distinguish quantities associated with the two scales, the following notation is considered hereafter: at the macroscale, the domain and its boundary are denoted by $\Omega$ and $\partial\Omega$, and the coordinates of a material point are given by $\mathbf{X}$ and $\mathbf{x}$ in the reference and deformed configurations, respectively; at the microscale, the domain and its boundary are denoted by $\Omega_\mu$ and $\partial\Omega_\mu$, and the coordinates of a material point are given by $\mathbf{Y}$ and $\mathbf{y}$ in the reference and deformed configurations, respectively. In addition, the reference configuration is generally denoted by the subscript $(\cdot)_0$ and microscale fields are denoted by the subscript $(\cdot)_\mu$.
5.2 Problem formulation

Consider a heterogeneous RVE of domain $\Omega_{\mu,0}$ and boundary $\partial\Omega_{\mu,0}$ in its reference configuration, composed of a given number of perfectly bonded linear elastic phases and characterized by a given tangent modulus $\mathbf{D}^{e}(\mathbf{Y})$. Following standard procedures of computational homogenization, consider that a given macroscopic strain, $\boldsymbol{\varepsilon}(\mathbf{X})$, is prescribed such that

$$\boldsymbol{\varepsilon}(\mathbf{X}) = \frac{1}{v_{\mu,0}} \int_{\Omega_{\mu,0}} \boldsymbol{\varepsilon}_{\mu}(\mathbf{Y})\,\mathrm{d}v, \tag{1}$$

where $v_{\mu,0}$ denotes the volume of the RVE in the reference configuration, and that periodic boundary conditions are assumed.^d In addition, without any loss of generality, the microscale strain field can be decomposed as

$$\boldsymbol{\varepsilon}_{\mu}(\mathbf{Y}) = \boldsymbol{\varepsilon}(\mathbf{X}) + \tilde{\boldsymbol{\varepsilon}}_{\mu}(\mathbf{Y}), \tag{2}$$

where $\boldsymbol{\varepsilon}(\mathbf{X})$ and $\tilde{\boldsymbol{\varepsilon}}_{\mu}(\mathbf{Y})$ are, respectively, the average and fluctuation components of the microscale strain field. In the absence of body forces, the microscale quasistatic equilibrium problem consists of finding the microscale strain field that satisfies the strong equilibrium equations

$$\operatorname{div}_{0}[\boldsymbol{\sigma}_{\mu}(\mathbf{Y})] = \mathbf{0} \quad \text{with} \quad \boldsymbol{\sigma}_{\mu}(\mathbf{Y}) = \mathbf{D}^{e}(\mathbf{Y}) : \boldsymbol{\varepsilon}_{\mu}(\mathbf{Y}), \quad \forall\, \mathbf{Y} \in \Omega_{\mu,0}. \tag{3}$$

^d Periodic boundary conditions assume that the heterogeneous material microstructure can be reproduced by a pattern of identical RVEs, thus implying compatibility between opposite sides of the RVE boundary.
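As a quick numerical illustration of Eqs. (1) and (2), the following numpy sketch verifies that the fluctuation field averages to zero over a voxelized RVE; the grid size and the random strain field are purely illustrative assumptions.

```python
import numpy as np

# Illustrative 32 x 32 voxel grid with a synthetic microscale strain field
# stored in Voigt notation (eps_11, eps_22, 2*eps_12) at each sampling point.
rng = np.random.default_rng(0)
eps_mu = rng.normal(size=(32, 32, 3))

# Eq. (1): with uniform voxels, the volume average reduces to a plain mean.
eps_macro = eps_mu.mean(axis=(0, 1))

# Eq. (2): split the microscale field into average and fluctuation parts.
eps_fluct = eps_mu - eps_macro

# The fluctuation component averages to zero by construction.
assert np.allclose(eps_fluct.mean(axis=(0, 1)), 0.0)
```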
5.3 The auxiliary homogeneous problem

In order to solve the microscale equilibrium problem, a suitable eigenstress problem can be formulated by considering a reference (fictitious) homogeneous linear elastic solid occupying the same region $\Omega_{\mu,0}$ of boundary $\partial\Omega_{\mu,0}$ and characterized by a given tangent modulus $\mathbf{D}^{e,0}$. Note that in this auxiliary homogeneous problem, the fictitious RVE is solely subjected to a prescribed eigenstress field, $\boldsymbol{\sigma}^{*}_{\mu}(\mathbf{Y})$, such that the associated microscale quasistatic equilibrium problem reads

$$\operatorname{div}_0[\boldsymbol{\sigma}_\mu(\mathbf{Y})] = \mathbf{0} \quad \text{with} \quad \boldsymbol{\sigma}_\mu(\mathbf{Y}) = \mathbf{D}^{e,0} : \tilde{\boldsymbol{\varepsilon}}_\mu(\mathbf{Y}) + \boldsymbol{\sigma}^{*}_\mu(\mathbf{Y}), \quad \forall\, \mathbf{Y} \in \Omega_{\mu,0}. \tag{4}$$

By performing the following Fourier transforms,^e

$$\mathcal{F}\big\{\operatorname{div}_0[\boldsymbol{\sigma}_\mu(\mathbf{Y})]\big\}_i = \mathrm{i}\, \hat{\sigma}_{\mu,ij}(\boldsymbol{\zeta})\, \zeta_j, \tag{5}$$

$$\mathcal{F}\big\{\sigma_{\mu,ij}(\mathbf{Y})\big\} \equiv \hat{\sigma}_{\mu,ij}(\boldsymbol{\zeta}) = \mathrm{i}\, D^{e,0}_{ijkl}\, \zeta_l\, \hat{\tilde{u}}_{\mu,k}(\boldsymbol{\zeta}) + \hat{\sigma}^{*}_{\mu,ij}(\boldsymbol{\zeta}), \tag{6}$$

where $\boldsymbol{\zeta}$ is the frequency wave vector and $\hat{(\cdot)}$ denotes a quantity defined in the frequency domain, the auxiliary microscale quasistatic equilibrium problem can be established in the frequency domain as

$$\mathrm{i}\, \hat{\sigma}_{\mu,ij}(\boldsymbol{\zeta})\, \zeta_j = 0 \quad \text{with} \quad \hat{\sigma}_{\mu,ij}(\boldsymbol{\zeta}) = \mathrm{i}\, D^{e,0}_{ijkl}\, \zeta_l\, \hat{\tilde{u}}_{\mu,k}(\boldsymbol{\zeta}) + \hat{\sigma}^{*}_{\mu,ij}(\boldsymbol{\zeta}). \tag{7}$$

After replacing the constitutive relation in the strong equilibrium equation and performing some straightforward algebraic manipulations, it follows that

$$\hat{K}^{0}_{ik}(\boldsymbol{\zeta})\, \hat{\tilde{u}}_{\mu,k}(\boldsymbol{\zeta}) = \mathrm{i}\, \hat{\sigma}^{*}_{\mu,ij}(\boldsymbol{\zeta})\, \zeta_j, \tag{8}$$

where $\hat{\mathbf{K}}^{0}$ is defined as

$$\hat{K}^{0}_{ik}(\boldsymbol{\zeta}) = D^{e,0}_{ijkl}\, \zeta_j\, \zeta_l. \tag{9}$$

From the previous equation, the microscale displacement field can be obtained as

$$\hat{\tilde{u}}_{\mu,k}(\boldsymbol{\zeta}) = \mathrm{i}\, \hat{Q}^{0}_{ki}(\boldsymbol{\zeta})\, \hat{\sigma}^{*}_{\mu,ij}(\boldsymbol{\zeta})\, \zeta_j, \tag{10}$$

where $\hat{\mathbf{Q}}^{0} = (\hat{\mathbf{K}}^{0})^{-1}$, and, attending to the symmetry of the eigenstress field, $\hat{\boldsymbol{\sigma}}^{*}_\mu(\boldsymbol{\zeta})$, it can be conveniently rewritten as

^e Recall that in the infinitesimal strain formulation one has the explicit relation between the displacement and strain fields given by $\tilde{\boldsymbol{\varepsilon}}_\mu(\mathbf{Y}) = \nabla^{s}_0\, \tilde{\mathbf{u}}_\mu(\mathbf{Y})$, where $\nabla^{s}_0$ denotes the symmetric material gradient operator.
$$\hat{\tilde{u}}_{\mu,k}(\boldsymbol{\zeta}) = \frac{\mathrm{i}}{2}\left[\hat{Q}^{0}_{ki}(\boldsymbol{\zeta})\, \zeta_j + \hat{Q}^{0}_{kj}(\boldsymbol{\zeta})\, \zeta_i\right] \hat{\sigma}^{*}_{\mu,ij}(\boldsymbol{\zeta}). \tag{11}$$

The microscale strain field can then be obtained through the symmetric material gradient of the microscale displacement field as

$$\hat{\tilde{\varepsilon}}_{\mu,kl}(\boldsymbol{\zeta}) = \frac{\mathrm{i}}{2}\left(\zeta_l\, \hat{\tilde{u}}_{\mu,k}(\boldsymbol{\zeta}) + \zeta_k\, \hat{\tilde{u}}_{\mu,l}(\boldsymbol{\zeta})\right). \tag{12}$$

After substitution of Eq. (11), the microscale strain field can be finally expressed as

$$\hat{\tilde{\varepsilon}}_{\mu,kl}(\boldsymbol{\zeta}) = -\hat{\Phi}^{0}_{klij}(\boldsymbol{\zeta})\, \hat{\sigma}^{*}_{\mu,ij}(\boldsymbol{\zeta}), \tag{13}$$

where $\hat{\boldsymbol{\Phi}}^{0}(\boldsymbol{\zeta})$ is the Green operator defined as

$$\hat{\Phi}^{0}_{klij}(\boldsymbol{\zeta}) = \frac{1}{4}\left[\hat{Q}^{0}_{ki}(\boldsymbol{\zeta})\, \zeta_j\, \zeta_l + \hat{Q}^{0}_{kj}(\boldsymbol{\zeta})\, \zeta_i\, \zeta_l + \hat{Q}^{0}_{li}(\boldsymbol{\zeta})\, \zeta_j\, \zeta_k + \hat{Q}^{0}_{lj}(\boldsymbol{\zeta})\, \zeta_i\, \zeta_k\right]. \tag{14}$$

The solution of the auxiliary homogeneous equilibrium problem can then be established in the frequency domain as^f

$$\hat{\tilde{\boldsymbol{\varepsilon}}}_\mu(\boldsymbol{\zeta}) = -\hat{\boldsymbol{\Phi}}^{0}(\boldsymbol{\zeta}) : \hat{\boldsymbol{\sigma}}^{*}_\mu(\boldsymbol{\zeta}), \quad \forall\, \boldsymbol{\zeta} \neq \mathbf{0}, \qquad \hat{\tilde{\boldsymbol{\varepsilon}}}_\mu(\mathbf{0}) = \mathbf{0}. \tag{15}$$

Attending to the following inverse Fourier transform,

$$\mathcal{F}^{-1}\left\{\hat{\boldsymbol{\Phi}}^{0}(\boldsymbol{\zeta}) : \hat{\boldsymbol{\sigma}}^{*}_\mu(\boldsymbol{\zeta})\right\} = \int_{\Omega_{\mu,0}} \boldsymbol{\Phi}^{0}(\mathbf{Y}-\mathbf{Y}') : \boldsymbol{\sigma}^{*}_\mu(\mathbf{Y}')\, \mathrm{d}v', \tag{16}$$

where the volume integration is performed over $\mathbf{Y}'$ as denoted by the prime $(\cdot)'$, the solution of the auxiliary homogeneous equilibrium problem is given in the spatial domain by

$$\tilde{\boldsymbol{\varepsilon}}_\mu(\mathbf{Y}) = -\int_{\Omega_{\mu,0}} \boldsymbol{\Phi}^{0}(\mathbf{Y}-\mathbf{Y}') : \boldsymbol{\sigma}^{*}_\mu(\mathbf{Y}')\, \mathrm{d}v', \quad \forall\, \mathbf{Y} \in \Omega_{\mu,0}. \tag{17}$$

From a physical point of view, it transpires from this solution that the Green operator component $\Phi^{0}_{ijkl}(\mathbf{Y}-\mathbf{Y}')$ relates the strain $\tilde{\varepsilon}_{\mu,ij}$ at the point $\mathbf{Y}$ to a unit stress $\sigma^{*}_{\mu,kl}$ applied at the point $\mathbf{Y}'$, thus effectively establishing a strain-stress relationship between two points of the domain.

^f Note that the zero-frequency term of the microscale strain field, $\hat{\tilde{\boldsymbol{\varepsilon}}}_\mu(\boldsymbol{\zeta} = \mathbf{0})$, is associated with its average value over the spatial domain, which in the case of the auxiliary homogeneous equilibrium problem is null.
5.4 Green operator under linear elastic isotropy

Despite having established the solution of the auxiliary homogeneous equilibrium problem in Eq. (17), it still remains to define the Green operator explicitly. While for anisotropic materials the closed-form expression of the Green operator is usually complex [71], under isotropy conditions it assumes a particularly simple explicit form in the frequency domain, as demonstrated here. Consider the tangent modulus of an isotropic linear elastic material,

$$D^{e,0}_{ijkl} = \lambda^{0}\, \delta_{ij}\, \delta_{kl} + \mu^{0}\, (\delta_{ik}\, \delta_{jl} + \delta_{il}\, \delta_{jk}), \tag{18}$$

where $(\lambda^{0}, \mu^{0})$ are the reference material Lamé parameters. Substitution in Eq. (9) yields

$$\hat{K}^{0}_{ik}(\boldsymbol{\zeta}) = (\lambda^{0} + \mu^{0})\, \zeta_i\, \zeta_k + \mu^{0}\, \|\boldsymbol{\zeta}\|^{2}\, \delta_{ik}, \tag{19}$$

and, after some algebraic manipulations, the inverse of the previous expression leads to

$$\hat{Q}^{0}_{ik} \equiv \big(\hat{\mathbf{K}}^{0}\big)^{-1}_{ik} = \frac{1}{\mu^{0}\, \|\boldsymbol{\zeta}\|^{2}}\left(\delta_{ki} - \frac{\lambda^{0} + \mu^{0}}{\lambda^{0} + 2\mu^{0}}\, \frac{\zeta_k\, \zeta_i}{\|\boldsymbol{\zeta}\|^{2}}\right). \tag{20}$$

Finally, the substitution of this result in Eq. (14) renders

$$\hat{\Phi}^{0}_{klij}(\boldsymbol{\zeta}) = \frac{1}{4\mu^{0}\, \|\boldsymbol{\zeta}\|^{2}}\left(\delta_{ki}\, \zeta_j\, \zeta_l + \delta_{kj}\, \zeta_i\, \zeta_l + \delta_{li}\, \zeta_j\, \zeta_k + \delta_{lj}\, \zeta_i\, \zeta_k\right) - \frac{\lambda^{0} + \mu^{0}}{\mu^{0}\, (\lambda^{0} + 2\mu^{0})}\, \frac{\zeta_i\, \zeta_j\, \zeta_k\, \zeta_l}{\|\boldsymbol{\zeta}\|^{4}}. \tag{21}$$
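For concreteness, the isotropic Green operator of Eq. (21) can be assembled on a discrete frequency grid in a few lines of numpy; the grid construction anticipates the discretization introduced in Section 6.1.1, and the function and argument names are illustrative assumptions.

```python
import numpy as np

def green_operator_hat(n1, n2, l1, l2, lam0, mu0):
    """Isotropic reference Green operator of Eq. (21), sampled on the discrete
    frequency grid; returned with shape (n1, n2, 2, 2, 2, 2) as Phi0[k, l, i, j]."""
    # Discrete angular frequencies 2*pi*s_i/l_i (upper half aliased to negative
    # frequencies, the standard DFT convention)
    z1 = 2.0 * np.pi * np.fft.fftfreq(n1, d=l1 / n1)
    z2 = 2.0 * np.pi * np.fft.fftfreq(n2, d=l2 / n2)
    zeta = np.stack(np.meshgrid(z1, z2, indexing="ij"))        # (2, n1, n2)
    nrm2 = np.einsum("i...,i...->...", zeta, zeta)             # ||zeta||^2
    nrm2[0, 0] = 1.0                                           # avoid 0/0 at zero frequency
    d = np.eye(2)
    w = nrm2[..., None, None, None, None]
    # First term of Eq. (21)
    phi = (np.einsum("ki,j...,l...->...klij", d, zeta, zeta)
           + np.einsum("kj,i...,l...->...klij", d, zeta, zeta)
           + np.einsum("li,j...,k...->...klij", d, zeta, zeta)
           + np.einsum("lj,i...,k...->...klij", d, zeta, zeta)) / (4.0 * mu0 * w)
    # Second term of Eq. (21)
    phi -= ((lam0 + mu0) / (mu0 * (lam0 + 2.0 * mu0))
            * np.einsum("i...,j...,k...,l...->...klij", zeta, zeta, zeta, zeta) / w**2)
    phi[0, 0] = 0.0    # zero-frequency term set to zero (see footnote f)
    return phi

phi = green_operator_hat(8, 8, 1.0, 1.0, lam0=1.0, mu0=1.0)
# The operator inherits the minor and major symmetries visible in Eq. (21).
assert np.allclose(phi, phi.transpose(0, 1, 3, 2, 4, 5))
assert np.allclose(phi, phi.transpose(0, 1, 4, 5, 2, 3))
```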
5.5 The Lippmann-Schwinger integral equation

Having the solution of the auxiliary homogeneous equilibrium problem at hand, it is now possible to solve the actual problem formulated in Section 5.2. In order to do so, it is convenient to introduce the reference (fictitious) homogeneous linear elastic material with tangent modulus $\mathbf{D}^{e,0}$ into the constitutive formulation as

$$\boldsymbol{\sigma}_\mu(\mathbf{Y}) = \left(\mathbf{D}^{e}(\mathbf{Y}) + \mathbf{D}^{e,0} - \mathbf{D}^{e,0}\right) : \boldsymbol{\varepsilon}_\mu(\mathbf{Y}), \quad \forall\, \mathbf{Y} \in \Omega_{\mu,0}, \tag{22}$$

which can be rewritten as

$$\boldsymbol{\sigma}_\mu(\mathbf{Y}) = \mathbf{D}^{e,0} : \boldsymbol{\varepsilon}_\mu(\mathbf{Y}) + \left(\mathbf{D}^{e}(\mathbf{Y}) - \mathbf{D}^{e,0}\right) : \boldsymbol{\varepsilon}_\mu(\mathbf{Y}), \quad \forall\, \mathbf{Y} \in \Omega_{\mu,0}. \tag{23}$$

In the first place, the last term on the right-hand side can be regarded as an eigenstress field defined as

$$\boldsymbol{\sigma}^{*}_\mu(\mathbf{Y}) = \left(\mathbf{D}^{e}(\mathbf{Y}) - \mathbf{D}^{e,0}\right) : \boldsymbol{\varepsilon}_\mu(\mathbf{Y}). \tag{24}$$

From a physical point of view, $\boldsymbol{\sigma}^{*}_\mu(\mathbf{Y})$ represents, at each point $\mathbf{Y}$, the difference between the stress state of the actual heterogeneous material and that of the reference homogeneous material for a given strain state $\boldsymbol{\varepsilon}_\mu(\mathbf{Y})$. In addition, taking into account Eq. (2), the constitutive formulation can be finally established as

$$\boldsymbol{\sigma}_\mu(\mathbf{Y}) = \mathbf{D}^{e,0} : \tilde{\boldsymbol{\varepsilon}}_\mu(\mathbf{Y}) + \boldsymbol{\sigma}^{*}_\mu(\mathbf{Y}) + \mathbf{D}^{e,0} : \boldsymbol{\varepsilon}(\mathbf{X}). \tag{25}$$

Given that the term $\mathbf{D}^{e,0} : \boldsymbol{\varepsilon}(\mathbf{X})$ is constant with respect to $\mathbf{Y}$, that is, a divergence-free term, the microscale quasistatic equilibrium problem can then be restated as

$$\operatorname{div}_0[\boldsymbol{\sigma}_\mu(\mathbf{Y})] = \mathbf{0} \quad \text{with} \quad \boldsymbol{\sigma}_\mu(\mathbf{Y}) = \mathbf{D}^{e,0} : \tilde{\boldsymbol{\varepsilon}}_\mu(\mathbf{Y}) + \boldsymbol{\sigma}^{*}_\mu(\mathbf{Y}), \quad \forall\, \mathbf{Y} \in \Omega_{\mu,0}, \tag{26}$$

having a functional format similar to the auxiliary homogeneous problem solved in Section 5.3. Observing Eq. (15), the microscale quasistatic equilibrium problem reads, in the frequency and spatial domains, respectively,

$$\hat{\boldsymbol{\varepsilon}}_\mu(\boldsymbol{\zeta}) = -\hat{\boldsymbol{\Phi}}^{0}(\boldsymbol{\zeta}) : \hat{\boldsymbol{\sigma}}^{*}_\mu(\boldsymbol{\zeta}), \quad \forall\, \boldsymbol{\zeta} \neq \mathbf{0}, \qquad \hat{\boldsymbol{\varepsilon}}_\mu(\mathbf{0}) = \boldsymbol{\varepsilon}(\mathbf{X}), \tag{27}$$

and

$$\boldsymbol{\varepsilon}_\mu(\mathbf{Y}) = -\int_{\Omega_{\mu,0}} \boldsymbol{\Phi}^{0}(\mathbf{Y}-\mathbf{Y}') : \boldsymbol{\sigma}^{*}_\mu(\mathbf{Y}')\, \mathrm{d}v' + \boldsymbol{\varepsilon}(\mathbf{X}), \quad \forall\, \mathbf{Y} \in \Omega_{\mu,0}. \tag{28}$$

This nonlinear integral equation can be identified as a Fredholm equation of the second kind [72] and has the same functional form as the so-called Lippmann-Schwinger equation of quantum mechanical scattering theory [73]. Early applications of the Lippmann-Schwinger nonlinear integral equilibrium equation in the context of elastic mechanical problems are due to Kröner [74], Dederichs and Zeller [75], and Zeller and Dederichs [76]; as shown in the following sections, it plays a major role in the SCA formulation.
6. Offline stage

6.1 Step 1: Conduct DNS linear elastic analyses

The first step consists of generating a high-fidelity RVE, which contains the morphological and topological details of the microstructure under analysis. Following the discussion presented in Section 4, one is then interested in reducing the computational cost of the RVE analysis without losing the ability to capture the material mechanical behavior accurately. In order to meet these seemingly incompatible requirements, one can think of grouping points with similar mechanical behavior and treating them in a unified sense. The most efficient and straightforward way of characterizing the mechanical similarity between different points of the RVE is to perform one or more low-demanding linear elastic analyses and base such characterization on appropriate physical metrics. Two immediate questions emerge from this line of thought: first, how can one ensure that a group of points will behave similarly under general and distinct loading conditions; and second, to what extent do elastic analyses provide sufficient information to characterize the mechanical behavior similarity in the subsequent nonlinear regime involving plasticity, damage, and localization phenomena? The answer to both questions is perhaps the most crucial point underlying all the following discussions on the SCA formulation. For now, it is assumed that performing one or more elastic analyses over the high-fidelity RVE, together with all the information that can be extracted from them, is a suitable and efficient starting point to perform the model reduction.

Concerning the DNS solution of one or more microscale equilibrium problems in the linear elastic regime, one may employ a standard first-order homogenization model based on FEM. After performing a suitable discretization of the high-fidelity RVE into $n_{\text{elem}}$ finite elements and solving the equilibrium problem, all the fields of interest (e.g., displacement, strain, stress) can then be determined at the FE mesh nodes. However, despite this being the conventional approach in computational homogenization, homogenization methods based on so-called FFT algorithms have attracted a lot of attention, as they outperform other methods in terms of speed and memory footprint [77]. Given the evident interest in performing the offline stage in the most efficient way possible, such alternative homogenization methods are a suitable tool to be integrated into the SCA offline stage. As previously pointed out in Section 5, the pioneering contribution of the FFT-based homogenization approach is due to Moulinec and Suquet [70], often referred to as the FFT-based homogenization basic scheme.
Over the past two decades, the original method has been the object of several developments in order to deal with material nonlinear behavior [78, 79], to account for finite strains [80], and to improve its convergence and efficiency [72, 81–84]. More recently, it has been reformulated in an equivalent Galerkin variational framework [85] and successfully applied to finite strain problems accounting for material constitutive nonlinear behavior [77, 86]. Despite the potential of the most recent FFT-based homogenization methods, the FFT-based homogenization basic scheme [70] is considered here without any loss in what concerns the objectives of this chapter. Given its similarity with the SCA formulation, the RVE spatial discretization related to the FFT-based homogenization basic scheme is discussed here; the interested reader is referred to the original publication [70] for a complete numerical treatment of this method.

6.1.1 FFT-based homogenization basic scheme

The starting point of the method is the spatial discretization of the RVE into a regular grid of $n_v = n_{v,1} \times n_{v,2}$ pixels (two-dimensional [2D] problem) or $n_v = n_{v,1} \times n_{v,2} \times n_{v,3}$ voxels (three-dimensional [3D] problem). As schematically illustrated in Fig. 3 for a 2D problem, the coordinates of any sampling point,^g $\mathbf{Y}_s(s_1, s_2) \in \Omega_{\mu,0}$, are defined as^h

$$\mathbf{Y}_s(s_1, s_2) = (s_1\, l_{s,1},\; s_2\, l_{s,2}), \quad s_i = 0, 1, \ldots, n_{v,i} - 1, \quad i = 1, 2, \tag{29}$$

where $l_{s,i} \equiv l_i / n_{v,i}$ and $l_i$ are, respectively, the sampling period and the RVE dimension in direction $i$. By performing a discrete Fourier transform (DFT) and jumping to the (discrete) frequency domain, each sampling angular frequency is then characterized by a wave vector, $\boldsymbol{\zeta}_s(s_1, s_2)$, defined as

$$\boldsymbol{\zeta}_s(s_1, s_2) = \left(\frac{2\pi}{l_1}\, s_1,\; \frac{2\pi}{l_2}\, s_2\right), \quad s_i = 0, 1, \ldots, n_{v,i} - 1, \quad i = 1, 2. \tag{30}$$

Fig. 3 Spatial discretization of a 2D RVE in a regular grid of 7 × 7 pixels.

Having performed a suitable discretization of the RVE, Moulinec and Suquet [70] propose to solve the discretized Lippmann-Schwinger nonlinear integral equilibrium equation (see Eq. 28) through a simple fixed-point iteration procedure. The main idea behind the basic iterative scheme relies on alternating evaluations between the spatial and frequency domains at each iteration: the Lippmann-Schwinger equation is evaluated in the frequency domain, where the Green operator convolution can be efficiently computed, and the constitutive response is evaluated in the spatial domain, where the microscale strain field is sought.

^g Note that each pixel (2D problem)/voxel (3D problem) has one centered sampling point, which means that the number of pixels/voxels and the number of sampling points coincide.
^h The notation $(\cdot)_s$ emphasizes the discrete nature of the sampling points (spatial domain) and frequencies (frequency domain).
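To make the alternating spatial/frequency structure of the basic scheme concrete, the following self-contained numpy sketch solves a 2D, small-strain, linear elastic two-phase laminate. The microstructure, the reference Lamé parameters, and the simple strain-increment convergence check are illustrative assumptions (the original scheme tests equilibrium in the frequency domain instead).

```python
import numpy as np

def isotropic_tangent(lam, mu):
    """Elastic tangent D_ijkl = lam*d_ij*d_kl + mu*(d_ik*d_jl + d_il*d_jk)."""
    d = np.eye(2)
    return (lam * np.einsum("ij,kl->ijkl", d, d)
            + mu * (np.einsum("ik,jl->ijkl", d, d) + np.einsum("il,jk->ijkl", d, d)))

def basic_scheme(D, E, lam0, mu0, tol=1e-8, max_iter=1000):
    """Fixed-point iteration on Eq. (28): D is the per-voxel tangent with shape
    (n1, n2, 2, 2, 2, 2) and E the prescribed macroscale strain (2, 2)."""
    n1, n2 = D.shape[:2]
    # Green operator of Eq. (21) on the discrete frequency grid (unit cell size 1)
    zeta = np.stack(np.meshgrid(2 * np.pi * np.fft.fftfreq(n1),
                                2 * np.pi * np.fft.fftfreq(n2), indexing="ij"))
    nrm2 = np.einsum("i...,i...->...", zeta, zeta)
    nrm2[0, 0] = 1.0
    d = np.eye(2)
    w = nrm2[..., None, None, None, None]
    phi = (np.einsum("ki,j...,l...->...klij", d, zeta, zeta)
           + np.einsum("kj,i...,l...->...klij", d, zeta, zeta)
           + np.einsum("li,j...,k...->...klij", d, zeta, zeta)
           + np.einsum("lj,i...,k...->...klij", d, zeta, zeta)) / (4 * mu0 * w)
    phi -= ((lam0 + mu0) / (mu0 * (lam0 + 2 * mu0))
            * np.einsum("i...,j...,k...,l...->...klij", zeta, zeta, zeta, zeta) / w**2)
    phi[0, 0] = 0.0
    D0 = isotropic_tangent(lam0, mu0)
    eps = np.broadcast_to(E, (n1, n2, 2, 2)).copy()  # initial guess: uniform strain
    for _ in range(max_iter):
        # Spatial domain: constitutive law and polarization stress (Eq. 24)
        tau = (np.einsum("...ijkl,...kl->...ij", D, eps)
               - np.einsum("ijkl,...kl->...ij", D0, eps))
        # Frequency domain: Green operator convolution (Eqs. 27 and 28)
        tau_hat = np.fft.fft2(tau, axes=(0, 1))
        eps_hat = -np.einsum("...klij,...ij->...kl", phi, tau_hat)
        eps_hat[0, 0] = E * (n1 * n2)  # zero frequency carries the average strain
        eps_new = np.real(np.fft.ifft2(eps_hat, axes=(0, 1)))
        err = np.linalg.norm(eps_new - eps) / max(np.linalg.norm(eps_new), 1e-30)
        eps = eps_new
        if err < tol:
            break
    return eps

# Two-phase laminate: the lower half of the grid is twice as stiff
D = np.empty((16, 16, 2, 2, 2, 2))
D[:] = isotropic_tangent(1.0, 1.0)
D[8:] = isotropic_tangent(2.0, 2.0)
E = np.array([[0.01, 0.0], [0.0, 0.0]])
eps = basic_scheme(D, E, lam0=1.5, mu0=1.5)
assert np.allclose(eps.mean(axis=(0, 1)), E)  # Eq. (1) holds at convergence
```

Choosing the reference Lamé parameters midway between those of the phases is the classic heuristic for keeping the fixed-point iteration's convergence rate reasonable.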
6.2 Step 2: Perform clustering-based domain decomposition

Liu and coworkers [53] propose to evaluate the mechanical behavior similarity between different points of the domain based on the fourth-order local elastic strain concentration tensor, $\mathbf{H}^{e}$,

$$\boldsymbol{\varepsilon}^{e}_\mu(\mathbf{Y}) = \mathbf{H}^{e}(\mathbf{Y}) : \boldsymbol{\varepsilon}^{e}(\mathbf{X}), \quad \forall\, \mathbf{Y} \in \Omega_{\mu,0}, \tag{31}$$

which establishes the relationship between the macroscale and microscale strains at each point of the microscale domain. In a 2D problem, the previous relation can be written in matrix form using Voigt notation, $\boldsymbol{\varepsilon}^{e}_\mu = \mathbf{H}^{e}\, \boldsymbol{\varepsilon}^{e}$, as

$$\begin{bmatrix} \varepsilon^{e}_{\mu,11} \\ \varepsilon^{e}_{\mu,22} \\ \varepsilon^{e}_{\mu,12} \end{bmatrix} = \begin{bmatrix} H^{e}_{1111} & H^{e}_{1122} & H^{e}_{1112} \\ H^{e}_{2211} & H^{e}_{2222} & H^{e}_{2212} \\ H^{e}_{1211} & H^{e}_{1222} & H^{e}_{1212} \end{bmatrix} \begin{bmatrix} \varepsilon^{e}_{11} \\ \varepsilon^{e}_{22} \\ 2\varepsilon^{e}_{12} \end{bmatrix}, \tag{32}$$

hence the local elastic strain concentration tensor has nine independent components, which can be computed by solving three linear elastic
microscale equilibrium problems over the high-fidelity RVE under orthogonal loading conditions (e.g., the first column of the strain concentration tensor matrix, $\mathbf{H}^{e}(:, 1)$, is determined by imposing a macroscopic strain defined as $\boldsymbol{\varepsilon}^{e} = [1, 0, 0]^{T}$). In a similar way, a 3D problem renders

$$\begin{bmatrix} \varepsilon^{e}_{\mu,11} \\ \varepsilon^{e}_{\mu,22} \\ \varepsilon^{e}_{\mu,33} \\ \varepsilon^{e}_{\mu,12} \\ \varepsilon^{e}_{\mu,23} \\ \varepsilon^{e}_{\mu,13} \end{bmatrix} = \begin{bmatrix} H^{e}_{1111} & H^{e}_{1122} & H^{e}_{1133} & H^{e}_{1112} & H^{e}_{1123} & H^{e}_{1113} \\ H^{e}_{2211} & H^{e}_{2222} & H^{e}_{2233} & H^{e}_{2212} & H^{e}_{2223} & H^{e}_{2213} \\ H^{e}_{3311} & H^{e}_{3322} & H^{e}_{3333} & H^{e}_{3312} & H^{e}_{3323} & H^{e}_{3313} \\ H^{e}_{1211} & H^{e}_{1222} & H^{e}_{1233} & H^{e}_{1212} & H^{e}_{1223} & H^{e}_{1213} \\ H^{e}_{2311} & H^{e}_{2322} & H^{e}_{2333} & H^{e}_{2312} & H^{e}_{2323} & H^{e}_{2313} \\ H^{e}_{1311} & H^{e}_{1322} & H^{e}_{1333} & H^{e}_{1312} & H^{e}_{1323} & H^{e}_{1313} \end{bmatrix} \begin{bmatrix} \varepsilon^{e}_{11} \\ \varepsilon^{e}_{22} \\ \varepsilon^{e}_{33} \\ 2\varepsilon^{e}_{12} \\ 2\varepsilon^{e}_{23} \\ 2\varepsilon^{e}_{13} \end{bmatrix}, \tag{33}$$

thus requiring the solution of six linear elastic microscale equilibrium problems in order to determine the 36 independent components of the strain concentration tensor.

The choice of such a physical metric provides an initial answer to the first question left in the previous section: it is assumed that if two domain points have similar strain concentration tensors, then their mechanical behavior will be the same under any loading condition within the elastic regime. At the same time, in view of the second question, given that plastic flow will generally occur at points with equivalently high strain concentrations, some similarity between these points is expected even in the nonlinear plastic regime.

Having established the physical metric, a group of points with similar mechanical behavior can now be formally defined as a material cluster, where it is assumed that every local field, here denoted $a_\mu(\mathbf{Y})$ in a completely general sense, is uniform within each material cluster. This assumption can be mathematically formulated as

$$a_\mu(\mathbf{Y}) = \sum_{I=1}^{n_c} a^{(I)}_\mu\, \chi^{(I)}(\mathbf{Y}), \qquad \chi^{(I)}(\mathbf{Y}) = \begin{cases} 1 & \text{if } \mathbf{Y} \in \Omega^{(I)}_{\mu,0}, \\ 0 & \text{otherwise}, \end{cases} \tag{34}$$
where $a^{(I)}_\mu$ is the homogeneous field in the $I$th material cluster and $\chi^{(I)}(\mathbf{Y})$ is the characteristic function of the $I$th material cluster (see Fig. 4).

Fig. 4 Schematic of a material cluster, $\Omega^{(I)}_\mu$, within the microscale domain, $\Omega_{\mu,0}$, and the associated uniformity assumption for a generic field $a_\mu(\mathbf{Y})$.

The RVE domain, $\Omega_\mu$, which in a standard DNS microscale analysis would be discretized into $n_{\text{elem}}$ finite elements (FEM) or $n_v$ sampling points (FFT), can then be decomposed into $n_c$ material clusters such that $n_c \ll n_{\text{elem}}, n_v$; such a procedure materializes the model reduction and leads to a cluster-reduced RVE (CRVE). It is important to notice that the material clusters may have arbitrarily shaped domains and may include points that are not adjacent to each other (see Fig. 2). Moreover, from the definition in Eq. (34), the following relation can be readily established:

$$\int_{\Omega_{\mu,0}} \chi^{(I)}(\mathbf{Y})\, [\cdot]_\mu\, \mathrm{d}v = \int_{\Omega^{(I)}_{\mu,0}} [\cdot]_\mu\, \mathrm{d}v. \tag{35}$$

Finally, a suitable clustering algorithm must be chosen to perform the cluster-based domain decomposition. Among the extensive number of clustering algorithms that have been developed and applied in different fields, Liu and coworkers [53] selected a particular and well-known centroid-based clustering algorithm named k-means clustering. This unsupervised machine learning clustering technique can partition a set of data points into k clusters through two main steps:
1. Evaluation. Define the quantity of interest based on which similar points are to be grouped and evaluate this quantity at each point of the set.
2. Clustering. Partition the set of points into k clusters such that each cluster contains the points whose quantity of interest is closer to the average of that quantity over that cluster than to the average of any other cluster.
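Linking Step 1 to Step 2, the quantity of interest is evaluated pointwise by reusing the DNS analyses of Section 6.1. The sketch below assembles the 2D concentration tensor of Eq. (32) column by column; `solve_elastic_rve` is a hypothetical stand-in for an actual FEM- or FFT-based solver, chosen here so that the expected result (identity tensors for a homogeneous RVE) is known.

```python
import numpy as np

def solve_elastic_rve(eps_macro, n1=8, n2=8):
    """Hypothetical stand-in for a DNS solver (FEM or FFT based): for a
    homogeneous RVE the microscale strain field simply equals the prescribed
    macroscale strain (Voigt vector [eps_11, eps_22, 2*eps_12]) everywhere."""
    return np.broadcast_to(eps_macro, (n1, n2, 3)).copy()

def strain_concentration_tensors(n1=8, n2=8):
    """Assemble H^e with shape (n1, n2, 3, 3): column k of Eq. (32) is the
    strain field obtained under the k-th orthogonal unit loading."""
    H = np.empty((n1, n2, 3, 3))
    for k in range(3):
        eps_macro = np.zeros(3)
        eps_macro[k] = 1.0                 # e.g. [1, 0, 0]^T for the first column
        H[:, :, :, k] = solve_elastic_rve(eps_macro, n1, n2)
    return H

H = strain_concentration_tensors()
# For a homogeneous RVE every local concentration tensor is the identity.
assert np.allclose(H, np.eye(3))
```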
Bernardo Proenc¸a Ferreira et al.
134
In the particular case of interest envisaged here, the quantity of interest is the strain concentration tensor, $\mathbf{H}^{e}$, and all the points of the RVE domain are to be grouped into $n_c$ clusters such that cluster $I$ contains the points whose $\mathbf{H}^{e}$ is closer to the average of cluster $I$, $\bar{\mathbf{H}}^{e,(I)}$, than to the average of any other cluster $J \neq I$, $\bar{\mathbf{H}}^{e,(J)}$. The technique can be mathematically formulated as a minimization problem as follows:

Problem 1. (RVE domain decomposition through k-means clustering) For a given set of strain concentration tensors, $\mathbf{H}^{e}$, evaluated at each point of the RVE domain, find $n_c$ sets of points, $\mathcal{S} = \{\mathcal{S}^{(1)}, \mathcal{S}^{(2)}, \ldots, \mathcal{S}^{(n_c)}\}$, such that the within-cluster least-squares sum with respect to the cluster average is minimized:

$$\mathcal{S} = \underset{\mathcal{S}^{(I)}}{\operatorname{argmin}} \left[\sum_{I=1}^{n_c}\; \sum_{\mathbf{Y} \in \mathcal{S}^{(I)}} \left\| \mathbf{H}^{e}(\mathbf{Y}) - \bar{\mathbf{H}}^{e,(I)} \right\|^{2}\right], \tag{36}$$

where $\bar{\mathbf{H}}^{e,(I)}$ is the average of all the strain concentration tensors $\mathbf{H}^{e}(\mathbf{Y})$ at the points within the cluster $\mathcal{S}^{(I)}$ and $\|\cdot\|$ denotes the Frobenius norm.

There are several well-developed algorithms to solve the previous k-means minimization problem. Among these, Liu and coworkers [53] employed the standard Lloyd's algorithm [87], schematically illustrated in Fig. 5. Given that each material cluster groups points with similar mechanical behavior, the clustering procedure may be applied independently on each phase of the RVE domain. This approach takes advantage of a priori knowledge of the problem (the material phases) by narrowing the domains where the unsupervised machine learning technique k-means is employed to perform the domain decomposition. A similar level of clustering refinement among the different phases can be achieved, for instance, by prescribing the number of clusters in proportion to the respective volume fractions.

It is important to remark that the number of clusters $n_c$ is directly related to the achieved degree of data compression, that is, the model reduction from the RVE to the CRVE. Given that the analyst can choose the number of clusters used to perform the domain decomposition, a suitable choice should balance the model reduction against the accuracy of the CRVE mechanical response. In general, an increased number of clusters stores more information about the microstructure details in the offline stage, leading to more accurate predictions in the online stage at the expense of an increased computational cost.
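A minimal numpy implementation of Lloyd's two alternating steps (assignment and centroid update), applied to synthetic flattened concentration tensors, reads as follows; in practice a library implementation (e.g., scikit-learn's KMeans) would typically be used, and the two-phase data below are an illustrative assumption.

```python
import numpy as np

def kmeans_lloyd(features, n_c, n_iter=100, seed=0):
    """Lloyd's algorithm for the k-means problem of Eq. (36).
    features: (n_points, n_features) array of flattened H^e tensors."""
    rng = np.random.default_rng(seed)
    # Initialization: pick n_c data points as the initial centroids
    centroids = features[rng.choice(len(features), size=n_c, replace=False)]
    labels = np.zeros(len(features), dtype=int)
    for _ in range(n_iter):
        # Assignment: each point joins the cluster with the nearest centroid
        # (squared Frobenius distance, as in Eq. 36)
        dist2 = ((features[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        labels = dist2.argmin(axis=1)
        # Update: recompute each centroid as the within-cluster average
        new_centroids = np.array([features[labels == I].mean(axis=0)
                                  for I in range(n_c)])
        if np.allclose(new_centroids, centroids):
            break  # centroids stationary: the partition has converged
        centroids = new_centroids
    return labels, centroids

# Synthetic two-phase data: flattened 3 x 3 tensors scattered around two
# distinct concentration-tensor values
rng = np.random.default_rng(1)
soft = np.eye(3).ravel() + 0.01 * rng.normal(size=(50, 9))
stiff = 0.5 * np.eye(3).ravel() + 0.01 * rng.normal(size=(50, 9))
labels, _ = kmeans_lloyd(np.vstack([soft, stiff]), n_c=2)
# Each well-separated synthetic phase ends up entirely within one cluster.
assert labels[:50].min() == labels[:50].max()
assert labels[50:].min() == labels[50:].max()
```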
Fast homogenization through clustering-based reduced-order modeling
Fig. 5 Schematic of the solution of the k-means clustering minimization problem through the standard Lloyd's algorithm [87] in order to perform the RVE cluster-based domain decomposition.
6.3 Step 3: Compute cluster interaction tensors

The final step of the offline stage consists of the computation of the so-called interaction tensors between clusters (see Fig. 6). As later demonstrated in Section 7.2, these tensors naturally arise from the clusterwise discretization of the Lippmann-Schwinger equilibrium equation and are defined as

$$\mathbf{T}^{(I)(J)} = \frac{1}{f^{(I)} v_{\mu}} \int_{\Omega_{\mu,0}} \int_{\Omega_{\mu,0}} \chi^{(I)}(Y)\, \chi^{(J)}(Y')\, \boldsymbol{\Phi}^{0}(Y - Y')\, \mathrm{d}v'\, \mathrm{d}v, \quad I, J = 1, 2, \ldots, n_c, \tag{37}$$

where $f^{(I)}$ is the volume fraction of the $I$th cluster,

$$f^{(I)} = \frac{v^{(I)}_{\mu}}{v_{\mu}}. \tag{38}$$
Bernardo Proenc¸a Ferreira et al.
Fig. 6 Schematic of the cluster interaction tensors in a CRVE with three material clusters.
Recalling the physical meaning of the reference homogeneous material Green operator introduced in Section 5.2, it transpires from the previous definition that the fourth-order interaction tensor $\mathbf{T}^{(I)(J)}$ physically represents the influence of the stress in the $J$th cluster on the strain in the $I$th cluster, that is, it describes the strain-stress interaction between clusters $I$ and $J$. Following Moulinec and Suquet [70], Liu and coworkers [53] also assume a reference (fictitious) homogeneous linear elastic material under isotropic conditions,

$$\mathbf{D}^{e,0} = \lambda^{0}\, \mathbf{I} \otimes \mathbf{I} + 2\mu^{0}\, \mathbf{I}_s, \tag{39}$$

where $\mathbf{I}_s$ is the fourth-order symmetric projection identity tensor, so that the associated Green operator takes the explicit form of Eq. (21) in the frequency domain. Moreover, as in the FFT-based homogenization basic scheme [70], the required convolution over the cluster $J$,

$$\int_{\Omega_{\mu,0}} \chi^{(J)}(Y')\, \boldsymbol{\Phi}^{0}(Y - Y')\, \mathrm{d}v' = \mathcal{F}^{-1}\!\left( \breve{\chi}^{(J)}(\boldsymbol{\zeta})\, \breve{\boldsymbol{\Phi}}^{0}(\boldsymbol{\zeta}) \right), \tag{40}$$

can be computed in the discrete frequency domain through efficient FFT algorithms.
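As a rough illustration of Eqs. (37) and (40), the following sketch evaluates a cluster interaction quantity with a scalar stand-in for the fourth-order Green operator (the actual computation carries the full tensor structure and the tensor contraction); the function name and grid layout are assumptions, not the chapter's implementation.

```python
import numpy as np

def interaction_scalar(chi_I, chi_J, green_hat, cell_volume=1.0):
    """Scalar sketch of Eq. (37):
    T^(I)(J) = (1 / (f^(I) v_mu)) * integral of chi_I(Y) [chi_J * Phi0](Y) dv.
    `chi_I`, `chi_J` are 0/1 characteristic arrays on the regular RVE grid,
    `green_hat` is a (precomputed) scalar Green-operator stand-in in the
    frequency domain, sampled on the same grid of discrete frequencies."""
    # Convolution over cluster J at every grid point (Eq. 40), computed
    # as an inverse FFT of a frequency-domain product.
    conv = np.fft.ifftn(np.fft.fftn(chi_J) * green_hat).real
    v_I = chi_I.sum() * cell_volume        # volume of cluster I
    # Average of the convolution over cluster I (outer integral / f^(I) v_mu).
    return (chi_I * conv).sum() * cell_volume / v_I
```

Note that the zero-frequency entry of `green_hat` is expected to be set to zero, in line with the singularity discussion in Section 7.1.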
7. Online stage

7.1 Continuous Lippmann-Schwinger integral equation

The Lippmann-Schwinger nonlinear integral equilibrium equation has already been derived in Section 5 (see Eq. 28) and is the basis of the SCA's formulation of the equilibrium problem. Nonetheless, Liu and coworkers [53] formulate the equilibrium problem by introducing a homogeneous far-field strain, $\boldsymbol{\varepsilon}^{0}_{\mu}$, asⁱ

$$\boldsymbol{\varepsilon}_{\mu}(Y) = -\int_{\Omega_{\mu,0}} \boldsymbol{\Phi}^{0}(Y - Y') : \boldsymbol{\sigma}^{*}_{\mu}(Y')\,\mathrm{d}v' + \boldsymbol{\varepsilon}^{0}_{\mu}, \quad \forall Y \in \Omega_{\mu,0}, \tag{41}$$

which, attending to the definition of the eigenstress field (see Eq. 24), can be conveniently expanded as

$$\boldsymbol{\varepsilon}_{\mu}(Y) = -\int_{\Omega_{\mu,0}} \boldsymbol{\Phi}^{0}(Y - Y') : \left(\boldsymbol{\sigma}_{\mu}(Y') - \mathbf{D}^{e,0} : \boldsymbol{\varepsilon}_{\mu}(Y')\right)\mathrm{d}v' + \boldsymbol{\varepsilon}^{0}_{\mu}, \quad \forall Y \in \Omega_{\mu,0}. \tag{42}$$

Several comments are in order to complement the original publication:

• Homogeneous far-field strain. In comparison with Eq. (28), the homogeneous far-field strain, $\boldsymbol{\varepsilon}^{0}_{\mu}$, has been introduced as a replacement of the macroscale strain tensor, $\boldsymbol{\varepsilon}(X)$, and is assumed to be an additional unknown of the equilibrium problem. This tensor is here interpreted as a convenient numerical artifact to enforce general macroscale strain and/or stress tensors formulated through homogenization-based constraints as

$$\frac{1}{v_{\mu}} \int_{\Omega_{\mu,0}} \boldsymbol{\varepsilon}_{\mu}(Y)\,\mathrm{d}v = \boldsymbol{\varepsilon}(X), \quad \forall Y \in \Omega_{\mu,0}, \tag{43}$$

and

$$\frac{1}{v_{\mu}} \int_{\Omega_{\mu,0}} \boldsymbol{\sigma}_{\mu}(Y)\,\mathrm{d}v = \boldsymbol{\sigma}(X), \quad \forall Y \in \Omega_{\mu,0}. \tag{44}$$

It is numerically observed that the homogeneous far-field strain tensor, $\boldsymbol{\varepsilon}^{0}_{\mu}$, recovers the macroscale strain tensor, $\boldsymbol{\varepsilon}(X)$, in the most common case of a pure strain homogenization-based constraint. This observation contrasts with the original paper [53], where such recovery is presented as a

ⁱ The meaning of the introduced homogeneous far-field strain, $\boldsymbol{\varepsilon}^{0}_{\mu}$, in the integral Lippmann-Schwinger equilibrium equation is discussed in Section 7.5.
consequence of adopting a self-consistent scheme to set the reference material Lamé parameters. In the more general case of mixed strain and stress constraints, the prescribed macroscale strain tensor components are numerically recovered as well.

• Green operator singularity. It should be noticed that the explicit form of the Green operator derived in Eq. (21) has a singularity at $\boldsymbol{\zeta} = \mathbf{0}$. This singularity is circumvented in the FFT-based homogenization basic scheme [70] because the macroscale strain tensor is directly enforced at the zero frequency of the microscale strain field, that is, $\breve{\boldsymbol{\varepsilon}}_{\mu}(\mathbf{0}) = \boldsymbol{\varepsilon}(X)$. Given that, in the SCA's formulation, the macroscale constraints are explicitly enforced in the spatial domain by Eqs. (43), (44), the zero frequency of the Green operator should be set to zero, that is, $\breve{\boldsymbol{\Phi}}^{0}(\boldsymbol{\zeta} = \mathbf{0}) = \mathbf{0}$. This enforcement seems to numerically guarantee that the homogeneous far-field strain tensor, $\boldsymbol{\varepsilon}^{0}_{\mu}$, recovers the macroscale strain tensor, $\boldsymbol{\varepsilon}(X)$, as expected from the Lippmann-Schwinger integral equation derived in Eq. (28).

• Extension to constitutive nonlinearity. The derivation of the Lippmann-Schwinger integral equation described in Section 5 assumed linear elastic phases. The extension to the case where the material phases have a general nonlinear behavior calls for a suitable time discretization of the Lippmann-Schwinger integral equilibrium equation and can be achieved by means of an implicit scheme based on approximated incremental constitutive functions, following standard procedures. Therefore, considering the general (pseudo-)time increment $[t_m, t_{m+1}]$, the incremental Lippmann-Schwinger integral equilibrium equation can be written as

$$\boldsymbol{\varepsilon}_{\mu,m+1}(Y) = -\int_{\Omega_{\mu,0}} \boldsymbol{\Phi}^{0}(Y - Y') : \left(\hat{\boldsymbol{\sigma}}_{\mu,m+1}(Y') - \mathbf{D}^{e,0} : \boldsymbol{\varepsilon}_{\mu,m+1}(Y')\right)\mathrm{d}v' + \boldsymbol{\varepsilon}^{0}_{\mu,m+1}, \quad \forall Y \in \Omega_{\mu,0}, \tag{45}$$

as well as the macroscale constraints,

$$\frac{1}{v_{\mu}} \int_{\Omega_{\mu,0}} \boldsymbol{\varepsilon}_{\mu,m+1}(Y)\,\mathrm{d}v = \boldsymbol{\varepsilon}_{m+1}(X), \quad \forall Y \in \Omega_{\mu,0}, \tag{46}$$

and

$$\frac{1}{v_{\mu}} \int_{\Omega_{\mu,0}} \hat{\boldsymbol{\sigma}}_{\mu,m+1}(Y)\,\mathrm{d}v = \boldsymbol{\sigma}_{m+1}(X), \quad \forall Y \in \Omega_{\mu,0}, \tag{47}$$
where σ^ μ denotes the incremental constitutive function for the Cauchy stress tensor.
7.2 Discretized Lippmann-Schwinger integral equation

Solving the continuous incremental Lippmann-Schwinger integral equilibrium equation at every point of the domain would be very time consuming, and even slower than the actual simulation of the high-fidelity RVE with a standard DNS analysis [53]. Therefore, the model reduction achieved in the offline stage provides a key advantage toward performing accurate and efficient microscale analyses. The incremental Lippmann-Schwinger integral equation can then be averaged over each material cluster $I$ of the CRVE asʲ

$$\frac{1}{v^{(I)}_{\mu}} \int_{\Omega^{(I)}_{\mu,0}} \Delta\boldsymbol{\varepsilon}_{\mu,m+1}(Y)\,\mathrm{d}v = -\frac{1}{v^{(I)}_{\mu}} \int_{\Omega^{(I)}_{\mu,0}} \left[ \int_{\Omega_{\mu,0}} \boldsymbol{\Phi}^{0}(Y - Y') : \left( \Delta\hat{\boldsymbol{\sigma}}_{\mu,m+1}(Y') - \mathbf{D}^{e,0} : \Delta\boldsymbol{\varepsilon}_{\mu,m+1}(Y') \right) \mathrm{d}v' \right] \mathrm{d}v + \frac{1}{v^{(I)}_{\mu}} \int_{\Omega^{(I)}_{\mu,0}} \Delta\boldsymbol{\varepsilon}^{0}_{\mu,m+1}\,\mathrm{d}v, \quad I = 1,2,\ldots,n_c. \tag{48}$$

Attending to the relationship established in Eq. (35) and to the definition of the cluster volume fraction (see Eq. 38), the previous equation can be rewritten as

$$\frac{1}{f^{(I)} v_{\mu}} \int_{\Omega_{\mu,0}} \chi^{(I)}(Y)\, \Delta\boldsymbol{\varepsilon}_{\mu,m+1}(Y)\,\mathrm{d}v = -\frac{1}{f^{(I)} v_{\mu}} \int_{\Omega_{\mu,0}} \chi^{(I)}(Y) \left[ \int_{\Omega_{\mu,0}} \boldsymbol{\Phi}^{0}(Y - Y') : \left( \Delta\hat{\boldsymbol{\sigma}}_{\mu,m+1}(Y') - \mathbf{D}^{e,0} : \Delta\boldsymbol{\varepsilon}_{\mu,m+1}(Y') \right) \mathrm{d}v' \right] \mathrm{d}v + \Delta\boldsymbol{\varepsilon}^{0}_{\mu,m+1}, \quad I = 1,2,\ldots,n_c. \tag{49}$$
In addition, the cluster piecewise uniform assumption established in Eq. (34) yields

$$\Delta\boldsymbol{\varepsilon}_{\mu,m+1}(Y') = \sum_{J=1}^{n_c} \chi^{(J)}(Y')\, \Delta\boldsymbol{\varepsilon}^{(J)}_{\mu,m+1}, \tag{50}$$

$$\Delta\boldsymbol{\sigma}_{\mu,m+1}(Y') = \sum_{J=1}^{n_c} \chi^{(J)}(Y')\, \Delta\boldsymbol{\sigma}^{(J)}_{\mu,m+1}, \tag{51}$$
ʲ Given the additive nature of the infinitesimal strain tensor and the Cauchy stress tensor, Liu and coworkers [53] formulate the incremental Lippmann-Schwinger integral equation with the incremental infinitesimal strain tensor as the primary unknown.
which after substitution in Eq. (49) renders

$$\frac{1}{f^{(I)} v_{\mu}} \int_{\Omega_{\mu,0}} \chi^{(I)}(Y)\, \Delta\boldsymbol{\varepsilon}_{\mu,m+1}(Y)\,\mathrm{d}v = -\frac{1}{f^{(I)} v_{\mu}} \int_{\Omega_{\mu,0}} \chi^{(I)}(Y) \left[ \int_{\Omega_{\mu,0}} \boldsymbol{\Phi}^{0}(Y - Y') : \left[ \sum_{J=1}^{n_c} \chi^{(J)}(Y') \left( \Delta\hat{\boldsymbol{\sigma}}^{(J)}_{\mu,m+1} - \mathbf{D}^{e,0} : \Delta\boldsymbol{\varepsilon}^{(J)}_{\mu,m+1} \right) \right] \mathrm{d}v' \right] \mathrm{d}v + \Delta\boldsymbol{\varepsilon}^{0}_{\mu,m+1}, \quad I = 1,2,\ldots,n_c. \tag{52}$$
Finally, by performing some rearrangements and noticing that the first term is actually the homogeneous incremental strain in cluster $I$,

$$\frac{1}{f^{(I)} v_{\mu}} \int_{\Omega_{\mu,0}} \chi^{(I)}(Y)\, \Delta\boldsymbol{\varepsilon}_{\mu,m+1}(Y)\,\mathrm{d}v = \Delta\boldsymbol{\varepsilon}^{(I)}_{\mu,m+1}, \quad I = 1,2,\ldots,n_c, \tag{53}$$

the discretized incremental Lippmann-Schwinger integral equilibrium equation can be written as

$$\Delta\boldsymbol{\varepsilon}^{(I)}_{\mu,m+1} = -\sum_{J=1}^{n_c} \left( \frac{1}{f^{(I)} v_{\mu}} \int_{\Omega_{\mu,0}} \int_{\Omega_{\mu,0}} \chi^{(I)}(Y)\, \chi^{(J)}(Y')\, \boldsymbol{\Phi}^{0}(Y - Y')\,\mathrm{d}v'\,\mathrm{d}v \right) : \left( \Delta\hat{\boldsymbol{\sigma}}^{(J)}_{\mu,m+1} - \mathbf{D}^{e,0} : \Delta\boldsymbol{\varepsilon}^{(J)}_{\mu,m+1} \right) + \Delta\boldsymbol{\varepsilon}^{0}_{\mu,m+1}, \quad I = 1,2,\ldots,n_c, \tag{54}$$

or, in view of the already introduced cluster interaction tensors (see Eq. 37), as

$$\Delta\boldsymbol{\varepsilon}^{(I)}_{\mu,m+1} = -\sum_{J=1}^{n_c} \mathbf{T}^{(I)(J)} : \left( \Delta\hat{\boldsymbol{\sigma}}^{(J)}_{\mu,m+1} - \mathbf{D}^{e,0} : \Delta\boldsymbol{\varepsilon}^{(J)}_{\mu,m+1} \right) + \Delta\boldsymbol{\varepsilon}^{0}_{\mu,m+1}, \quad I = 1,2,\ldots,n_c. \tag{55}$$
Concerning the discretization of the macroscale strain constraint (see Eq. 46), applying once again the cluster piecewise uniform assumption (see Eq. 34), one gets

$$\frac{1}{v_{\mu}} \int_{\Omega_{\mu,0}} \sum_{I=1}^{n_c} \chi^{(I)}(Y)\, \Delta\boldsymbol{\varepsilon}^{(I)}_{\mu,m+1}\,\mathrm{d}v = \Delta\boldsymbol{\varepsilon}_{m+1}(X). \tag{56}$$
Recalling the relation established in Eq. (35), it is possible to write

$$\sum_{I=1}^{n_c} \left( \frac{1}{v_{\mu}} \int_{\Omega^{(I)}_{\mu,0}} \mathrm{d}v \right) \Delta\boldsymbol{\varepsilon}^{(I)}_{\mu,m+1} = \sum_{I=1}^{n_c} \frac{v^{(I)}_{\mu}}{v_{\mu}}\, \Delta\boldsymbol{\varepsilon}^{(I)}_{\mu,m+1} = \Delta\boldsymbol{\varepsilon}_{m+1}(X), \tag{57}$$

which finally yields the discretized macroscale strain constraint

$$\sum_{I=1}^{n_c} f^{(I)}\, \Delta\boldsymbol{\varepsilon}^{(I)}_{\mu,m+1} = \Delta\boldsymbol{\varepsilon}_{m+1}(X). \tag{58}$$

Following an analogous procedure for the macroscale stress constraint (see Eq. 47), its discretized counterpart becomes

$$\sum_{I=1}^{n_c} f^{(I)}\, \Delta\hat{\boldsymbol{\sigma}}^{(I)}_{\mu,m+1} = \Delta\boldsymbol{\sigma}_{m+1}(X). \tag{59}$$
In summary, collecting Eqs. (55), (58), (59), the reduced microscale equilibrium problem consists of the solution of a system of nonlinear equations composed of (i) $n_c$ Lippmann-Schwinger integral equilibrium equations and (ii) macroscale strain and/or stress constraints. This system, which is henceforth called the Lippmann-Schwinger system of equilibrium equations, is explicitly established as

$$\begin{cases}
\Delta\boldsymbol{\varepsilon}^{(1)}_{\mu,m+1} = -\displaystyle\sum_{J=1}^{n_c} \mathbf{T}^{(1)(J)} : \left( \Delta\hat{\boldsymbol{\sigma}}^{(J)}_{\mu,m+1}\!\left(\Delta\boldsymbol{\varepsilon}^{(J)}_{\mu,m+1}, \boldsymbol{\beta}_m\right) - \mathbf{D}^{e,0} : \Delta\boldsymbol{\varepsilon}^{(J)}_{\mu,m+1} \right) + \Delta\boldsymbol{\varepsilon}^{0}_{\mu,m+1}, \\[4pt]
\Delta\boldsymbol{\varepsilon}^{(2)}_{\mu,m+1} = -\displaystyle\sum_{J=1}^{n_c} \mathbf{T}^{(2)(J)} : \left( \Delta\hat{\boldsymbol{\sigma}}^{(J)}_{\mu,m+1}\!\left(\Delta\boldsymbol{\varepsilon}^{(J)}_{\mu,m+1}, \boldsymbol{\beta}_m\right) - \mathbf{D}^{e,0} : \Delta\boldsymbol{\varepsilon}^{(J)}_{\mu,m+1} \right) + \Delta\boldsymbol{\varepsilon}^{0}_{\mu,m+1}, \\
\quad\vdots \\
\Delta\boldsymbol{\varepsilon}^{(n_c)}_{\mu,m+1} = -\displaystyle\sum_{J=1}^{n_c} \mathbf{T}^{(n_c)(J)} : \left( \Delta\hat{\boldsymbol{\sigma}}^{(J)}_{\mu,m+1}\!\left(\Delta\boldsymbol{\varepsilon}^{(J)}_{\mu,m+1}, \boldsymbol{\beta}_m\right) - \mathbf{D}^{e,0} : \Delta\boldsymbol{\varepsilon}^{(J)}_{\mu,m+1} \right) + \Delta\boldsymbol{\varepsilon}^{0}_{\mu,m+1}, \\[4pt]
\displaystyle\sum_{I=1}^{n_c} f^{(I)}\, \Delta\boldsymbol{\varepsilon}^{(I)}_{\mu,m+1} = \Delta\boldsymbol{\varepsilon}_{m+1}(X) \quad \text{and/or} \quad \displaystyle\sum_{I=1}^{n_c} f^{(I)}\, \Delta\hat{\boldsymbol{\sigma}}^{(I)}_{\mu,m+1}\!\left(\Delta\boldsymbol{\varepsilon}^{(I)}_{\mu,m+1}, \boldsymbol{\beta}_m\right) = \Delta\boldsymbol{\sigma}_{m+1}(X),
\end{cases} \tag{60}$$

and must then be solved for the unknowns

$$\Delta\boldsymbol{\varepsilon}_{\mu,m+1} = \left\{ \Delta\boldsymbol{\varepsilon}^{(1)}_{\mu,m+1},\, \Delta\boldsymbol{\varepsilon}^{(2)}_{\mu,m+1},\, \ldots,\, \Delta\boldsymbol{\varepsilon}^{(n_c)}_{\mu,m+1},\, \Delta\boldsymbol{\varepsilon}^{0}_{\mu,m+1} \right\}. \tag{61}$$
Note that the relation between the incremental stress, $\Delta\boldsymbol{\sigma}^{(I)}_{\mu}$, and the incremental strain, $\Delta\boldsymbol{\varepsilon}^{(I)}_{\mu}$, in each material cluster $I$ is established by the corresponding incremental constitutive function.ᵏ
7.3 Numerical solution of the reduced microscale equilibrium problem

The Lippmann-Schwinger system of equilibrium equations (see Eq. 60) is generally nonlinear due to the nonlinearity associated with the constitutive behavior of the material clusters. The solution can be efficiently found through the well-known Newton-Raphson method, with its associated quadratic convergence rate. In order to do so, it is first convenient to define the global residual function as

$$\mathbf{R}_{m+1} = \begin{bmatrix} \mathbf{R}^{(I)}_{m+1} \\[4pt] \mathbf{R}^{(n_c+1)}_{m+1} \end{bmatrix} = \begin{bmatrix} \mathbf{0} \\[4pt] \mathbf{0} \end{bmatrix}, \quad \forall I = 1,2,\ldots,n_c, \tag{62}$$

where the $n_c + 1$ residual functions are defined as

$$\mathbf{R}^{(I)}_{m+1} = \Delta\boldsymbol{\varepsilon}^{(I)}_{\mu,m+1} + \sum_{J=1}^{n_c} \mathbf{T}^{(I)(J)} : \left( \Delta\hat{\boldsymbol{\sigma}}^{(J)}_{\mu,m+1} - \mathbf{D}^{e,0} : \Delta\boldsymbol{\varepsilon}^{(J)}_{\mu,m+1} \right) - \Delta\boldsymbol{\varepsilon}^{0}_{\mu,m+1}, \quad \forall I = 1,\ldots,n_c, \tag{63}$$

$$\mathbf{R}^{(n_c+1)}_{m+1} = \sum_{I=1}^{n_c} f^{(I)}\, \Delta\boldsymbol{\varepsilon}^{(I)}_{\mu,m+1} - \Delta\boldsymbol{\varepsilon}_{m+1}(X) \quad \text{or} \quad \mathbf{R}^{(n_c+1)}_{m+1} = \sum_{I=1}^{n_c} f^{(I)}\, \Delta\hat{\boldsymbol{\sigma}}^{(I)}_{\mu,m+1} - \Delta\boldsymbol{\sigma}_{m+1}(X). \tag{64}$$
Then, each Newton-Raphson iteration $(k)$ consists of finding the solution of the linearized version of Eq. (62) (see Box 1) as

$$\Delta\boldsymbol{\varepsilon}^{(k)}_{\mu,m+1} = \Delta\boldsymbol{\varepsilon}^{(k-1)}_{\mu,m+1} + \delta\Delta\boldsymbol{\varepsilon}^{(k)}_{\mu}, \qquad \delta\Delta\boldsymbol{\varepsilon}^{(k)}_{\mu} = -\left[ \mathbf{J}\!\left(\Delta\boldsymbol{\varepsilon}^{(k-1)}_{\mu,m+1}\right) \right]^{-1} \mathbf{R}\!\left(\Delta\boldsymbol{\varepsilon}^{(k-1)}_{\mu,m+1}\right), \tag{65}$$

ᵏ The dependency on both the incremental strain and the constitutive internal variables is made explicit in Eq. (60) to emphasize the unknowns of the Lippmann-Schwinger system of equilibrium equations.
Box 1 Solution procedure for a given load increment m + 1 of the Lippmann-Schwinger system of equilibrium equations through the Newton-Raphson method.

(i) Initialize the iteration counter, $k := 1$
(ii) Set the initial guess for the incremental strains, $\Delta\boldsymbol{\varepsilon}^{(0)}_{\mu,m+1} = \mathbf{0}$
(iii) Perform the material state update for each cluster:
$$\Delta\boldsymbol{\sigma}^{(I)(k-1)}_{\mu,m+1} = \Delta\hat{\boldsymbol{\sigma}}_{\mu,m+1}\!\left(\Delta\boldsymbol{\varepsilon}^{(I)(k-1)}_{\mu,m+1}, \boldsymbol{\alpha}^{(I)}_{m}\right), \quad \forall I = 1,2,\ldots,n_c$$
(iv) Compute the global residual functions:
$$\mathbf{R}^{(I)}_{m+1}\!\left(\Delta\boldsymbol{\varepsilon}^{(k-1)}_{\mu,m+1}\right) = \Delta\boldsymbol{\varepsilon}^{(I)(k-1)}_{\mu,m+1} + \sum_{J=1}^{n_c} \mathbf{T}^{(I)(J)} : \left( \Delta\hat{\boldsymbol{\sigma}}^{(J)(k-1)}_{\mu,m+1} - \mathbf{D}^{e,0} : \Delta\boldsymbol{\varepsilon}^{(J)(k-1)}_{\mu,m+1} \right) - \Delta\boldsymbol{\varepsilon}^{0,(k-1)}_{\mu,m+1}, \quad \forall I = 1,2,\ldots,n_c$$
$$\mathbf{R}^{(n_c+1)}_{m+1}\!\left(\Delta\boldsymbol{\varepsilon}^{(k-1)}_{\mu,m+1}\right) = \sum_{I=1}^{n_c} f^{(I)}\, \Delta\boldsymbol{\varepsilon}^{(I)(k-1)}_{\mu,m+1} - \Delta\boldsymbol{\varepsilon}_{m+1}(X) \quad \text{or} \quad \mathbf{R}^{(n_c+1)}_{m+1} = \sum_{I=1}^{n_c} f^{(I)}\, \Delta\boldsymbol{\sigma}^{(I)(k-1)}_{\mu,m+1} - \Delta\boldsymbol{\sigma}_{m+1}(X)$$
(v) Check for convergence:
1. if converged, update the incremental solution, $(\bullet)_{m+1} = (\bullet)^{(k)}_{m+1}$, and exit
(vi) Compute the Jacobian matrix:
$$\mathbf{J}\!\left(\Delta\boldsymbol{\varepsilon}^{(k-1)}_{\mu,m+1}\right) = \begin{bmatrix} \dfrac{\partial \mathbf{R}^{(I)}_{m+1}}{\partial \Delta\boldsymbol{\varepsilon}^{(K)}_{\mu,m+1}} & \dfrac{\partial \mathbf{R}^{(I)}_{m+1}}{\partial \Delta\boldsymbol{\varepsilon}^{0}_{\mu,m+1}} \\[10pt] \dfrac{\partial \mathbf{R}^{(n_c+1)}_{m+1}}{\partial \Delta\boldsymbol{\varepsilon}^{(K)}_{\mu,m+1}} & \dfrac{\partial \mathbf{R}^{(n_c+1)}_{m+1}}{\partial \Delta\boldsymbol{\varepsilon}^{0}_{\mu,m+1}} \end{bmatrix}_{\left(\Delta\boldsymbol{\varepsilon}^{(k-1)}_{\mu,m+1}\right)}, \quad \forall I, K = 1,2,\ldots,n_c,$$
where the residual derivatives are defined by Eqs. (A.5), (A.7), (A.10) or (A.13), and (A.14)
(vii) Solve the system of linear equations:
$$\delta\Delta\boldsymbol{\varepsilon}^{(k)}_{\mu} = -\left[ \mathbf{J}\!\left(\Delta\boldsymbol{\varepsilon}^{(k-1)}_{\mu,m+1}\right) \right]^{-1} \mathbf{R}\!\left(\Delta\boldsymbol{\varepsilon}^{(k-1)}_{\mu,m+1}\right)$$
(viii) Update the incremental strains:
$$\Delta\boldsymbol{\varepsilon}^{(k)}_{\mu,m+1} = \Delta\boldsymbol{\varepsilon}^{(k-1)}_{\mu,m+1} + \delta\Delta\boldsymbol{\varepsilon}^{(k)}_{\mu}$$
(ix) Increment the iteration counter, $k := k + 1$
(x) Go to step (iii)
where $\Delta\boldsymbol{\varepsilon}_{\mu,m+1}$ is the matricial form of the vector of unknowns defined in Eq. (61), $\mathbf{R}$ is the vectorial form of the global residual defined in Eq. (62), and $\mathbf{J}$ is the matricial form of the Jacobian matrix,

$$\mathbf{J}_{m+1} = \frac{\partial \mathbf{R}_{m+1}}{\partial \Delta\boldsymbol{\varepsilon}_{\mu,m+1}} = \begin{bmatrix} \dfrac{\partial \mathbf{R}^{(I)}_{m+1}}{\partial \Delta\boldsymbol{\varepsilon}^{(K)}_{\mu,m+1}} & \dfrac{\partial \mathbf{R}^{(I)}_{m+1}}{\partial \Delta\boldsymbol{\varepsilon}^{0}_{\mu,m+1}} \\[10pt] \dfrac{\partial \mathbf{R}^{(n_c+1)}_{m+1}}{\partial \Delta\boldsymbol{\varepsilon}^{(K)}_{\mu,m+1}} & \dfrac{\partial \mathbf{R}^{(n_c+1)}_{m+1}}{\partial \Delta\boldsymbol{\varepsilon}^{0}_{\mu,m+1}} \end{bmatrix}, \quad \forall I, K = 1,\ldots,n_c. \tag{66}$$

The linearization of the Lippmann-Schwinger system of equilibrium equations is provided in Appendix A.
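The Newton-Raphson loop of Box 1 can be illustrated in a deliberately reduced setting: scalar (1D) cluster strains, linear elastic clusters, and a pure strain constraint, so all "tensors" collapse to numbers. This is a sketch of the structure of Eqs. (62)-(66) under those stated assumptions, not the chapter's implementation.

```python
import numpy as np

def solve_increment(T, f, E, E0, d_eps_macro, tol=1e-10, max_iter=50):
    """Minimal 1D (scalar-strain) sketch of the Newton-Raphson loop in Box 1,
    assuming linear elastic clusters (d_sigma = E[I] * d_eps[I]) and a pure
    strain constraint. T is the (nc x nc) matrix of interaction 'tensors',
    f the cluster volume fractions, E the cluster moduli, E0 the reference
    material modulus, d_eps_macro the prescribed macroscale strain increment.
    Unknowns: the nc cluster strain increments plus the far-field strain."""
    nc = len(f)
    x = np.zeros(nc + 1)                       # [d_eps^(1..nc), d_eps0]
    for _ in range(max_iter):
        d_eps, d_eps0 = x[:nc], x[nc]
        d_sig = E * d_eps                      # material state update (iii)
        # Residuals (iv): cluster equilibrium + macroscale strain constraint.
        R = np.empty(nc + 1)
        R[:nc] = d_eps + T @ (d_sig - E0 * d_eps) - d_eps0
        R[nc] = f @ d_eps - d_eps_macro
        if np.linalg.norm(R) < tol:            # convergence check (v)
            return d_eps, d_eps0
        # Jacobian (vi): exact for the linear constitutive law.
        J = np.zeros((nc + 1, nc + 1))
        J[:nc, :nc] = np.eye(nc) + T * (E - E0)[None, :]
        J[:nc, nc] = -1.0
        J[nc, :nc] = f
        x = x - np.linalg.solve(J, R)          # steps (vii)-(viii)
    raise RuntimeError("Newton-Raphson did not converge")
```

Because the clusters are linear here, the loop converges in a single correction; the same structure carries over to nonlinear constitutive functions, where the Jacobian uses the cluster consistent tangents.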
7.4 The homogenized consistent tangent modulus

Suppose the SCA is to be employed, for instance, in FE-SCA or FFT-SCA multiscale simulations. In that case, the solution of a macroscale Newton-Raphson iteration requires the computation of the homogenized consistent tangent modulus of the RVE (see Fig. 1). In view of the homogeneous far-field strain, $\boldsymbol{\varepsilon}^{0}_{\mu,m+1}$, introduced in the Lippmann-Schwinger integral equation (see Section 7.2), the homogenized consistent tangent modulus of the CRVE at the loading increment $m + 1$ is defined as

$$\bar{\mathbf{D}}_{m+1} = \frac{\partial \Delta\boldsymbol{\sigma}_{m+1}(X)}{\partial \Delta\boldsymbol{\varepsilon}^{0}_{\mu,m+1}} = \frac{\partial}{\partial \Delta\boldsymbol{\varepsilon}^{0}_{\mu,m+1}} \left( \sum_{I=1}^{n_c} f^{(I)}\, \Delta\hat{\boldsymbol{\sigma}}^{(I)}_{\mu,m+1} \right). \tag{67}$$

Applying the chain rule and noticing that the material consistent tangent modulus of cluster $I$ is defined as

$$\mathbf{D}^{(I)}_{m+1} = \frac{\partial \Delta\hat{\boldsymbol{\sigma}}^{(I)}_{\mu,m+1}\!\left(\Delta\boldsymbol{\varepsilon}^{(I)}_{\mu,m+1}, \boldsymbol{\beta}_m\right)}{\partial \Delta\boldsymbol{\varepsilon}^{(I)}_{\mu,m+1}}, \tag{68}$$

the CRVE homogenized consistent tangent modulus can be written as

$$\bar{\mathbf{D}}_{m+1} = \sum_{I=1}^{n_c} f^{(I)}\, \mathbf{D}^{(I)}_{m+1} : \mathbf{H}^{(I)}_{m+1}, \tag{69}$$

where $\mathbf{H}^{(I)}_{m+1}$ is the $I$th cluster fourth-order strain concentration tensor defined by

$$\Delta\boldsymbol{\varepsilon}^{(I)}_{\mu,m+1} = \mathbf{H}^{(I)}_{m+1} : \Delta\boldsymbol{\varepsilon}^{0}_{\mu,m+1}, \quad \forall I = 1,2,\ldots,n_c, \tag{70}$$

such that

$$\mathbf{H}^{(I)}_{m+1} = \frac{\partial \Delta\boldsymbol{\varepsilon}^{(I)}_{\mu,m+1}}{\partial \Delta\boldsymbol{\varepsilon}^{0}_{\mu,m+1}}, \quad \forall I = 1,2,\ldots,n_c. \tag{71}$$
In order to compute $\mathbf{H}^{(I)}_{m+1}$ at the equilibrium state of cluster $I$, it is possible to establish

$$\mathrm{d}\mathbf{R}^{(I)}_{m+1} = \frac{\partial \mathbf{R}^{(I)}_{m+1}}{\partial \Delta\boldsymbol{\varepsilon}^{(K)}_{\mu,m+1}} : \mathrm{d}\Delta\boldsymbol{\varepsilon}^{(K)}_{\mu,m+1} + \frac{\partial \mathbf{R}^{(I)}_{m+1}}{\partial \Delta\boldsymbol{\varepsilon}^{0}_{\mu,m+1}} : \mathrm{d}\Delta\boldsymbol{\varepsilon}^{0}_{\mu,m+1} = \mathbf{0}, \quad \forall I, K = 1,2,\ldots,n_c, \tag{72}$$

where both $\partial \mathbf{R}^{(I)}_{m+1} / \partial \Delta\boldsymbol{\varepsilon}^{(K)}_{\mu,m+1}$ and $\partial \mathbf{R}^{(I)}_{m+1} / \partial \Delta\boldsymbol{\varepsilon}^{0}_{\mu,m+1}$ (provided in Appendix A) are evaluated at the converged state variables. Assembling the previous equation for all material clusters $I = 1,2,\ldots,n_c$ and taking the associated matricial forms, one can readily establish the following system of linear equations:

$$\mathbf{M}\, \mathbf{S}_{m+1} = \mathbf{Q}, \tag{73}$$

where $\mathbf{M}$ and $\mathbf{Q}$ contain the residual derivatives as

$$\mathbf{M} = \begin{bmatrix} \dfrac{\partial \mathbf{R}^{(1)}_{m+1}}{\partial \Delta\boldsymbol{\varepsilon}^{(1)}_{\mu,m+1}} & \cdots & \dfrac{\partial \mathbf{R}^{(1)}_{m+1}}{\partial \Delta\boldsymbol{\varepsilon}^{(n_c)}_{\mu,m+1}} \\ \vdots & \ddots & \vdots \\ \dfrac{\partial \mathbf{R}^{(n_c)}_{m+1}}{\partial \Delta\boldsymbol{\varepsilon}^{(1)}_{\mu,m+1}} & \cdots & \dfrac{\partial \mathbf{R}^{(n_c)}_{m+1}}{\partial \Delta\boldsymbol{\varepsilon}^{(n_c)}_{\mu,m+1}} \end{bmatrix}, \qquad \mathbf{Q} = -\begin{bmatrix} \dfrac{\partial \mathbf{R}^{(1)}_{m+1}}{\partial \Delta\boldsymbol{\varepsilon}^{0}_{\mu,m+1}} \\ \vdots \\ \dfrac{\partial \mathbf{R}^{(n_c)}_{m+1}}{\partial \Delta\boldsymbol{\varepsilon}^{0}_{\mu,m+1}} \end{bmatrix}, \tag{74}$$

and $\mathbf{S}_{m+1}$ contains the clusters' strain concentration tensors as

$$\mathbf{S}_{m+1} = \begin{bmatrix} \dfrac{\partial \Delta\boldsymbol{\varepsilon}^{(1)}_{\mu,m+1}}{\partial \Delta\boldsymbol{\varepsilon}^{0}_{\mu,m+1}} & \cdots & \dfrac{\partial \Delta\boldsymbol{\varepsilon}^{(n_c)}_{\mu,m+1}}{\partial \Delta\boldsymbol{\varepsilon}^{0}_{\mu,m+1}} \end{bmatrix}^{T} \equiv \begin{bmatrix} \mathbf{H}^{(1)}_{m+1} & \cdots & \mathbf{H}^{(n_c)}_{m+1} \end{bmatrix}^{T}. \tag{75}$$

Given that

$$\frac{\partial \mathbf{R}^{(I)}_{m+1}}{\partial \Delta\boldsymbol{\varepsilon}^{0}_{\mu,m+1}} = -\mathbf{I}, \quad \forall I = 1,2,\ldots,n_c, \tag{76}$$
the solution of the previous system of linear equations shows that the $I$th cluster strain concentration tensor is computed as

$$\mathbf{H}^{(I)}_{m+1} = \sum_{K=1}^{n_c} \left( \mathbf{M}^{-1} \right)^{(I)(K)}, \quad \forall I = 1,2,\ldots,n_c. \tag{77}$$
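A scalar-strain sketch of Eqs. (74), (77), and (69), under the same simplifying assumptions as before (1D strains, so M is an ordinary matrix): assemble M from the cluster residual derivatives, recover the concentration factors, and volume-average the cluster tangents. The function name is illustrative.

```python
import numpy as np

def homogenized_tangent(T, f, D, D0):
    """1D scalar sketch of Eqs. (74)-(77) and (69).
    T: (nc x nc) interaction matrix, f: cluster volume fractions,
    D: cluster consistent tangents, D0: reference material tangent."""
    nc = len(f)
    # dR^(I)/d(d_eps^(K)) of the linearized cluster residuals (scalar case).
    M = np.eye(nc) + T * (D - D0)[None, :]
    H = np.linalg.inv(M).sum(axis=1)   # Eq. (77): row sums of M^{-1}
    return f @ (D * H)                 # Eq. (69): weighted average of D.H
```

As a sanity check, with no cluster interaction (T = 0) every concentration factor is 1 and the result reduces to the plain volume (Voigt) average of the cluster tangents.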
7.5 The reference homogeneous elastic material

In order to solve the microscale equilibrium problem and formulate the so-called Lippmann-Schwinger integral equilibrium equation, a reference (fictitious) homogeneous linear elastic material is conveniently introduced into the constitutive formulation, as shown in Section 5.5. As pointed out by Liu and coworkers [53] and recently addressed by Schneider [67], the solution of the continuous Lippmann-Schwinger integral equilibrium equation does not depend on the choice of the homogeneous elastic tangent modulus of the reference material. In contrast, the solution of the discrete clustered Lippmann-Schwinger equation does depend on the reference material elastic properties, as discussed in detail by Schneider [67] and references therein. Despite the recent progress in understanding the polarization schemes popular in FFT-based computational micromechanics, the mathematical quantification of such dependency remains, to the best of the authors' knowledge, an open and crucial challenge. For this reason, and given the fundamental purposes sought here, a mathematical coverage of this topic is out of the scope of this chapter. Although not addressing this mathematical quantification, Liu and coworkers [53] proposed an ingenious self-consistent micromechanical approach to find an "optimal" choice of the reference material Lamé parameters. The details underlying the so-called regression-based self-consistent scheme are described in the following section, together with its coupling with the solution of the Lippmann-Schwinger system of equilibrium equations.
7.6 Self-consistent scheme

Given that the reference homogeneous elastic material is assumed to be isotropic (see Eq. 39), finding an exact match with the CRVE homogenized consistent tangent modulus is typically impossible due to the generally nonlinear and/or anisotropic heterogeneous material behavior. Therefore, in order to set the homogeneous tangent modulus of the reference material, $\mathbf{D}^{e,0}$, as close as possible to the effective tangent modulus of the CRVE, $\bar{\mathbf{D}}$, Liu and coworkers [53] propose a self-consistent approach inspired by the
well-known self-consistent method in classical micromechanics. The underlying optimization problem can be stated as follows:

Problem 2. (Self-consistent scheme optimization problem) Assuming a reference homogeneous linear elastic isotropic material with the tangent modulus at each load increment $m + 1$ given by $\mathbf{D}^{e,0}_{m+1}(\lambda^{0}_{m+1}, \mu^{0}_{m+1})$, find the two Lamé parameters $(\lambda^{0}_{m+1}, \mu^{0}_{m+1})$ such that the difference with respect to the homogenized consistent tangent modulus of the CRVE, $\bar{\mathbf{D}}_{m+1}$, is minimized.

With the purpose of solving the previous minimization problem, two types of self-consistent approaches are proposed by Liu and coworkers: (1) a regression-based scheme [53], which seeks to approximate the actual homogenized stress field by the homogeneous stress field of the reference elastic material when subjected to the same macroscale strain, and (2) a projection-based scheme [55], based on the isotropic projection of the effective tangent modulus. Without any shortcoming concerning the illustrative purpose of this chapter, only the regression-based self-consistent scheme is described here. It can be mathematically formulated as

Problem 3. (Regression-based self-consistent scheme optimization problem) Assuming a reference homogeneous linear elastic isotropic material with the tangent modulus at each load increment $m + 1$ given by $\mathbf{D}^{e,0}_{m+1}(\lambda^{0}_{m+1}, \mu^{0}_{m+1})$, find the two Lamé parameters $(\lambda^{0}_{m+1}, \mu^{0}_{m+1})$ such that the difference with respect to the effective tangent modulus of the CRVE, $\bar{\mathbf{D}}_{m+1}$, is minimized in the sense that

$$\left(\lambda^{0}_{m+1}, \mu^{0}_{m+1}\right) = \underset{\{\lambda^{0},\, \mu^{0}\}}{\arg\min}\; \left\| \Delta\bar{\boldsymbol{\sigma}}_{m+1} - \mathbf{D}^{e,0}_{m+1}(\lambda^{0}, \mu^{0}) : \Delta\bar{\boldsymbol{\varepsilon}}_{m+1} \right\|^{2}, \tag{78}$$

where $\|[\bullet]\|$ denotes the Frobenius norm of the matrix $[\bullet]$ and the homogenized incremental strain and stress are defined in Eqs. (58), (59). In order to solve the minimization problem, one can define the cost function

$$g(\lambda^{0}, \mu^{0}) = \left\| \Delta\bar{\boldsymbol{\sigma}}_{m+1} - \mathbf{D}^{e,0}_{m+1}(\lambda^{0}, \mu^{0}) : \Delta\bar{\boldsymbol{\varepsilon}}_{m+1} \right\|^{2}, \tag{79}$$

and find the optimum point $(\lambda^{0}_{m+1}, \mu^{0}_{m+1})$ by solving the system of two linear equations defined as
$$\begin{cases} \dfrac{\partial g(\lambda^{0}, \mu^{0})}{\partial \lambda^{0}} = 0, \\[8pt] \dfrac{\partial g(\lambda^{0}, \mu^{0})}{\partial \mu^{0}} = 0. \end{cases} \tag{80}$$

The details on the establishment and solution of the previous system of linear equations are provided in Appendix B, the solution being given by

$$\begin{bmatrix} \lambda^{0}_{m+1} \\ \mu^{0}_{m+1} \end{bmatrix} = \begin{bmatrix} \mathrm{tr}[\mathbf{I}]\; \mathrm{tr}[\Delta\bar{\boldsymbol{\varepsilon}}_{m+1}] & 2\, \mathrm{tr}[\Delta\bar{\boldsymbol{\varepsilon}}_{m+1}] \\ \left(\mathrm{tr}[\Delta\bar{\boldsymbol{\varepsilon}}_{m+1}]\right)^{2} & 2\, \Delta\bar{\boldsymbol{\varepsilon}}_{m+1} : \Delta\bar{\boldsymbol{\varepsilon}}_{m+1} \end{bmatrix}^{-1} \begin{bmatrix} \mathrm{tr}[\Delta\bar{\boldsymbol{\sigma}}_{m+1}] \\ \Delta\bar{\boldsymbol{\sigma}}_{m+1} : \Delta\bar{\boldsymbol{\varepsilon}}_{m+1} \end{bmatrix}. \tag{81}$$

A closer inspection of the previous solution reveals two shortcomings of this self-consistent scheme: first, for pure shear loadings ($\mathrm{tr}[\Delta\bar{\boldsymbol{\varepsilon}}] = 0$), the system of linear equations becomes undetermined and one of the two Lamé parameters must be estimated; second, for hydrostatic loadings, the system of linear equations also becomes undetermined because both equations become linearly dependent.

Irrespective of the adopted self-consistent scheme, given that the tangent modulus of the reference homogeneous elastic material is updated at each load increment, the solution procedure of the Lippmann-Schwinger system of equilibrium equations summarized in Box 1 must be enriched. Besides the explicit dependency of the cluster equilibrium residuals on $\mathbf{D}^{e,0}$ (see Eq. 63), notice that the cluster interaction tensors, $\mathbf{T}^{(I)(J)}$, also depend on the reference material elastic properties through the associated Green operator (see Eq. 21) and, therefore, must be updated. For convenience, it is important to notice that the Green operator can be rewritten as

$$\breve{\boldsymbol{\Phi}}^{0}(\boldsymbol{\zeta}) = c_1(\lambda^{0}, \mu^{0})\, \breve{\boldsymbol{\Phi}}^{0}_{1}(\boldsymbol{\zeta}) + c_2(\lambda^{0}, \mu^{0})\, \breve{\boldsymbol{\Phi}}^{0}_{2}(\boldsymbol{\zeta}), \tag{82}$$

where the reference material-independent components are defined as

$$\breve{\Phi}^{0}_{1,ijkl}(\boldsymbol{\zeta}) = \frac{\delta_{ik}\, \zeta_{l}\, \zeta_{j} + \delta_{il}\, \zeta_{k}\, \zeta_{j} + \delta_{jk}\, \zeta_{l}\, \zeta_{i} + \delta_{jl}\, \zeta_{k}\, \zeta_{i}}{\|\boldsymbol{\zeta}\|^{2}}, \qquad \breve{\Phi}^{0}_{2,ijkl}(\boldsymbol{\zeta}) = \frac{\zeta_{i}\, \zeta_{j}\, \zeta_{k}\, \zeta_{l}}{\|\boldsymbol{\zeta}\|^{4}}, \tag{83}$$

and the associated coefficients as

$$c_1(\lambda^{0}, \mu^{0}) = \frac{1}{4\mu^{0}}, \qquad c_2(\lambda^{0}, \mu^{0}) = -\frac{\lambda^{0} + \mu^{0}}{\mu^{0}\,(\lambda^{0} + 2\mu^{0})}. \tag{84}$$
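The closed-form solution of Eq. (81) amounts to a 2x2 linear solve from the homogenized incremental strain and stress. A sketch, assuming tr[I] equals the spatial dimension of the matrices passed in and ignoring the degenerate (pure shear, hydrostatic) loadings noted above; the function name is illustrative.

```python
import numpy as np

def regression_sc_lame(d_eps, d_sig):
    """Solve the 2x2 regression system of Eq. (81) for the reference
    material Lame parameters, given the homogenized incremental strain and
    stress as symmetric matrices. Degenerate loadings make the system
    singular and are not handled in this sketch."""
    tr_e, tr_s = np.trace(d_eps), np.trace(d_sig)
    dim = d_eps.shape[0]                 # stand-in for tr[I]
    A = np.array([[dim * tr_e, 2.0 * tr_e],
                  [tr_e ** 2, 2.0 * np.sum(d_eps * d_eps)]])
    b = np.array([tr_s, np.sum(d_sig * d_eps)])
    lam0, mu0 = np.linalg.solve(A, b)
    return lam0, mu0
```

For a stress increment that is exactly isotropic-elastic in the strain increment, the regression recovers the underlying Lamé pair exactly, since the cost function (79) then vanishes at its unique minimum.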
This means that only the coefficients $c_1(\lambda^{0}, \mu^{0})$ and $c_2(\lambda^{0}, \mu^{0})$ must be recomputed in each iteration of the self-consistent scheme in order to update the cluster interaction tensors, $\mathbf{T}^{(I)(J)}$; that is, the computation of the fourth-order Green operator components $\breve{\boldsymbol{\Phi}}^{0}_{1}(\boldsymbol{\zeta})$ and $\breve{\boldsymbol{\Phi}}^{0}_{2}(\boldsymbol{\zeta})$ is carried out only once, in the offline stage.

Liu and coworkers [53] adopt a partitioned (or staggered) solution scheme, where the iterative solution procedure of the Lippmann-Schwinger system of equilibrium equations is embedded within an iterative self-consistent scheme, as described in Box 2. The convergence of the self-consistent scheme can be evaluated through the iterative relative change of the reference material Lamé parameters.

Box 2 Partitioned (or staggered) scheme adopted to solve a given loading increment m + 1 of the clustered Lippmann-Schwinger system of equilibrium equations while updating the reference material Lamé parameters through a self-consistent scheme.

(i) Set the initial guess for the reference material Lamé parameters, $\lambda^{0}_{m+1}$ and $\mu^{0}_{m+1}$
(ii) Update the tangent modulus of the reference homogeneous elastic material:
$$\mathbf{D}^{e,0}_{m+1} = \lambda^{0}_{m+1}\, \mathbf{I} \otimes \mathbf{I} + 2\mu^{0}_{m+1}\, \mathbf{I}_s$$
(iii) Recompute the cluster interaction tensors by updating the Green operator coefficients:
$$\mathbf{T}^{(I)(J)} = \frac{1}{f^{(I)} v_{\mu}} \int_{\Omega_{\mu,0}} \int_{\Omega_{\mu,0}} \chi^{(I)}(Y)\, \chi^{(J)}(Y')\, \boldsymbol{\Phi}^{0}(Y - Y')\,\mathrm{d}v'\,\mathrm{d}v, \qquad \breve{\boldsymbol{\Phi}}^{0}(\boldsymbol{\zeta}) = c_1(\lambda^{0}, \mu^{0})\, \breve{\boldsymbol{\Phi}}^{0}_{1}(\boldsymbol{\zeta}) + c_2(\lambda^{0}, \mu^{0})\, \breve{\boldsymbol{\Phi}}^{0}_{2}(\boldsymbol{\zeta})$$
(iv) Solve the Lippmann-Schwinger system of equilibrium equations through the Newton-Raphson method (described in Box 1), without performing the incremental solution update when convergence is achieved:
$$\delta\Delta\boldsymbol{\varepsilon}^{(k)}_{\mu} = -\left[ \mathbf{J}\!\left(\Delta\boldsymbol{\varepsilon}^{(k-1)}_{\mu,m+1}\right) \right]^{-1} \mathbf{R}\!\left(\Delta\boldsymbol{\varepsilon}^{(k-1)}_{\mu,m+1}\right)$$
(v) Solve the self-consistent scheme optimization problem (see Problem 3) and update the reference material elastic properties, $\lambda^{0}_{m+1}$ and $\mu^{0}_{m+1}$
(vi) Check for convergence:
1. if converged, update the incremental solution, $(\bullet)_{m+1} = (\bullet)^{(k)}_{m+1}$, and exit
2. else, proceed to step (ii)
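Box 2 can be illustrated with the same scalar (1D) reduction used earlier: a single reference modulus E0 stands in for the Lamé pair, the interaction matrix is rescaled from its precomputed reference-independent part (mimicking the decomposition of Eq. 82), and the "self-consistent" update matches E0 to the homogenized tangent. Purely illustrative, with hypothetical names, and linear elastic clusters so the inner solve is a single linear system.

```python
import numpy as np

def staggered_increment(T1, f, E, d_eps_macro, E0=1.0, tol=1e-8, max_sc=50):
    """Scalar sketch of Box 2: an outer self-consistent loop updating the
    reference modulus E0, wrapped around an inner solve of the clustered
    equations. T1 is the reference-material-independent part of the
    interaction matrix, rescaled by 1/E0 in each outer iteration."""
    nc = len(f)
    for _ in range(max_sc):
        T = T1 / E0                        # step (iii): rescale, no FFTs
        # Step (iv): solve the linear clustered system for
        # [d_eps^(1..nc), d_eps0] under a pure strain constraint.
        A = np.zeros((nc + 1, nc + 1))
        A[:nc, :nc] = np.eye(nc) + T * (E - E0)[None, :]
        A[:nc, nc] = -1.0
        A[nc, :nc] = f
        b = np.zeros(nc + 1)
        b[nc] = d_eps_macro
        d_eps = np.linalg.solve(A, b)[:nc]
        # Step (v): self-consistent update of the reference modulus from
        # the homogenized incremental stress and strain.
        E0_new = (f @ (E * d_eps)) / d_eps_macro
        # Step (vi): convergence on the relative change of E0.
        if abs(E0_new - E0) <= tol * abs(E0_new):
            return d_eps, E0_new
        E0 = E0_new
    raise RuntimeError("self-consistent scheme did not converge")
```

The fixed-point iteration on E0 plays the role of the relative-change convergence check on the Lamé parameters described above.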
8. Numerical application

A simple 2D example in the infinitesimal strain context is adopted here to illustrate the numerical application of the SCA. The analysis presented next follows as closely as possible the different formulation steps described in the previous sections.
8.1 Definition of the heterogeneous material RVE

The heterogeneous material is a fiber-reinforced composite characterized by unidirectional circular cross-section fibers (phase 2, $f = 30\%$) randomly distributed and embedded in a matrix (phase 1, $f = 70\%$). Note that this type of microstructure is suitable to be modeled under plane strain conditions by considering any transversal plane relative to the direction of the fibers. By further assuming periodic boundary conditions, the unitary side-length 2D RVE shown in Fig. 7 is accepted as representative of the heterogeneous material. The matrix phase constitutive behavior is elastoplastic and governed by the standard von Mises constitutive model with an isotropic strain hardening law, while the fiber phase constitutive behavior is assumed to be linear elastic. The material properties of both phases are listed in Fig. 7.
Fig. 7 2D RVE of a fiber-reinforced composite characterized by randomly distributed unidirectional circular cross-section fibers ($f = 30\%$) embedded in a matrix ($f = 70\%$), together with the constitutive models and associated properties assumed for each material phase.
8.2 Offline stage: Conduct DNS linear elastic analyses

The first step to perform the clustering-based model reduction is the computation of the fourth-order local elastic strain concentration tensors (see Eq. 31). As shown in Eq. (32), this requires the DNS solution of three microscale equilibrium problems under orthogonal unitary macroscale strain tensors ($\boldsymbol{\varepsilon}^{e} = [1, 0, 0]^{T}$, $\boldsymbol{\varepsilon}^{e} = [0, 1, 0]^{T}$, and $\boldsymbol{\varepsilon}^{e} = [0, 0, 1]^{T}$), where the material phases are assumed to be linear elastic. Given that the microscale equilibrium problems are linear, the macroscale strain tensor can be prescribed entirely in a single increment. The DNS solutions are here obtained with the FFT-based homogenization basic scheme [70] described in Section 6.1. Concerning the RVE spatial discretization (see Fig. 3), the same discretization is performed in both problem dimensions ($n_{v,1} = n_{v,2}$). A preliminary discretization convergence study has been performed, and a resolution of $n_v = 600 \times 600$ is deemed acceptable to capture the topological details and constitutive behavior of the microstructure. Once the DNS solutions are obtained, the fourth-order local elastic strain concentration tensors, $\mathbf{H}^{e}(Y_s)$, are thus known at every sampling point $Y_s(i_1, i_2) \in \Omega_{\mu,0}$ (see Eq. 29) of the spatially discretized RVE.
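Assembling H^e from the three DNS solutions reduces, in 2D Voigt notation, to stacking the three microscale strain responses as columns of a 3x3 matrix at every sampling point. A sketch under that assumed data layout (the function name is hypothetical):

```python
import numpy as np

def strain_concentration_tensors(eps_fields):
    """Assemble the local elastic strain concentration tensors from the
    three DNS solutions under orthogonal unit macroscale strains (2D,
    Voigt notation, 3 components). `eps_fields[k]` is the microscale
    strain field (n_points x 3) obtained by prescribing the k-th unit
    macroscale strain; column k of H^e(Y) is the local response at Y
    to that loading."""
    # Stack the three responses as columns: H[p] has shape (3, 3).
    return np.stack([eps_fields[0], eps_fields[1], eps_fields[2]], axis=2)
```

For a homogeneous medium the local response to each unit loading is that loading itself, so H^e(Y) is the identity everywhere, which is a useful sanity check on the pipeline.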
8.3 Offline stage: Perform clustering-based domain decomposition

The following step consists of compressing the RVE into a CRVE with a given number of material clusters according to the desired accuracy.
The cluster analysis is performed here with the well-known k-means clustering algorithm (see Eq. 36), based on the fourth-order local elastic strain concentration tensors. For illustrative purposes, the RVE is decomposed into different numbers of clusters, as shown in Fig. 8. The clustering procedure is performed independently for each material phase, and the degree of data compression is set proportional to the respective volume fractions. Given the volume fractions of the two phases, $f_1 = 70\%$ and $f_2 = 30\%$, the ratio $n_{c,1}/n_{c,2} = 2$ is set for all CRVEs. After the clustering is successfully performed for a prescribed number of clusters, $n_c$, a cluster label is then assigned to each sampling point of the spatially discretized RVE.
8.4 Offline stage: Compute cluster interaction tensors

The last step to complete the offline stage is the computation of the cluster interaction tensors between every pair of clusters (see Eq. 37). Recall that the cluster interaction tensors, $\mathbf{T}^{(I)(J)}$, depend on the reference material elastic properties through the associated Green operator. Given that an isotropic reference elastic material has been adopted, the Green operator can be conveniently decomposed, as shown in Eq. (82). Substituting this decomposition in Eq. (37) immediately shows that each cluster interaction tensor may consequently be decomposed in a similar way. This means that only the reference material-independent components (see Eq. 83) need to be considered in the computation of the cluster interaction tensors in the offline stage, that is, setting $c_1(\lambda^{0}, \mu^{0}) = c_2(\lambda^{0}, \mu^{0}) = 1$. This expedites the update of the cluster interaction tensors in each iteration of the online stage self-consistent scheme, given that computations in the frequency domain are avoided and only a scaling is required after both $c_1(\lambda^{0}, \mu^{0})$ and $c_2(\lambda^{0}, \mu^{0})$ are updated.
8.5 Online stage: Multiscale equilibrium problem

Having completely defined the CRVE in the offline stage, it is now possible to solve the multiscale equilibrium problem in the so-called online stage through the solution of the discretized Lippmann-Schwinger system of equilibrium equations (see Eq. 60). Two macroscale strain-stress loading cases are considered: (1) a uniaxial tension strain loading ($\varepsilon_{xx} = 5 \times 10^{-2}$, $\sigma_{yy} = \sigma_{xy} = 0$) and (2) a pure
Fig. 8 Clustering-based domain decomposition (k-means clustering) of the fiber-reinforced composite RVE for different numbers of clusters. The ratio between the number of clusters of the matrix phase ($f = 70\%$) and the fiber phase ($f = 30\%$) is set to 2.
shear strain loading ($\varepsilon_{xy} = 2.5 \times 10^{-2}$, $\sigma_{xx} = \sigma_{yy} = 0$). In both cases, the macroscale loading is enforced in a total of 200 increments of equal magnitude. The convergence tolerance of the equilibrium Newton-Raphson iterative scheme is set to $1 \times 10^{-6}$, and the convergence tolerance of the self-consistent scheme is set to $1 \times 10^{-3}$. Moreover, the volume averages of the material phases' elastic properties are adopted as the initial guess for the reference material Lamé parameters at the first macroscale loading increment. In the following increments, the initial guess is simply set from the converged values of the previous increment.

The macroscale homogenized responses of the fiber-reinforced composite under the uniaxial and pure shear macroscale loadings are presented in Figs. 9 and 10, respectively. The relative error with respect to the DNS solution is also shown to evaluate the accuracy of the SCA's solution for the different numbers of clusters. The high-fidelity DNS solution is obtained with a first-order computational homogenization FEM model considering a regular grid of $n_e = 600 \times 600$ standard quadratic quadrilateral elements (QUAD8). The comparison between the computational cost of the SCA's online stage and the FEM DNS solution is shown in Fig. 11 for the uniaxial tension strain loading case, with similar results obtained for the pure shear
Fig. 9 Homogenized response of the fiber-reinforced composite under uniaxial tension strain-stress loading conditions obtained with the SCA for different numbers of clusters: (a) homogenized stress and (b) relative error with respect to the FEM DNS solution. (*) The DNS solution is obtained with a first-order computational homogenization FEM model considering standard quadratic quadrilateral elements.
Fig. 10 Homogenized response of the fiber-reinforced composite under pure shear strain-stress loading conditions obtained with the SCA for different numbers of clusters: (a) homogenized stress and (b) relative error with respect to the FEM DNS solution. (*) The DNS solution is obtained with a first-order computational homogenization FEM model considering standard quadratic quadrilateral elements.
Fig. 11 SCA online stage computational cost in the solution of the fiber-reinforced composite equilibrium problem under uniaxial tension strain loading conditions: (a) comparison between the computational time of the SCA online stage and the FEM DNS solution time and (b) ratio between the computational time of the SCA online stage and the FEM DNS solution time. The FEM DNS solution is obtained with a solver parallelization of 20 CPU cores, while the SCA numerical simulation is performed with one CPU core.
Bernardo Proença Ferreira et al.
loading. In this regard, it is important to remark that the FEM DNS solutions are obtained with a solver parallelization of 20 CPU cores, while the SCA numerical simulations are performed with a single CPU core (although the solver parallelization benefits the DNS in comparison with the SCA computational times, this aspect ends up highlighting the efficiency of the reduced-order analyses).

In the first place, it is possible to conclude that the SCA can capture the homogenized nonlinear elastoplastic behavior of the fiber-reinforced composite with considerable model compression. Even for the case with the highest number of clusters (nc = 315), the number of degrees of freedom (approximately 1.3 × 10³) is significantly lower than in the FEM-based homogenization DNS solution (approximately 2.0 × 10⁶). After the initial linear elastic regime, where the error relative to the DNS solution is generally low, a sudden error spike coincident with the onset of matrix plastic yielding is observed, followed by a decrease to lower error values in the postyielding region. This error spike is comprehensible and highlights one of the limitations of the SCA: the inability to capture localized phenomena. Because the high-fidelity RVE is decomposed into a substantially lower number of finite subdomains (clusters based on the similarity of the elastic mechanical response), the CRVE cannot accurately capture the localized onset of plastic yielding. As the plastic flow starts to diffuse through the matrix after the initial yielding, the error decreases for the same reason.

In the second place, it is clearly seen from the homogenized responses and associated errors that the accuracy of the SCA predictions increases as the number of clusters increases (i.e., as the degree of model reduction decreases). This result is expected, since more information about the microstructure details is stored in the offline stage (a more refined clustering-based domain decomposition and the associated cluster interaction tensors) and is readily available in the subsequent online stage to solve the microscale equilibrium problem.

Of course, an increase in the number of clusters also raises the overall computational cost of the analysis. This tradeoff between accuracy and computational cost is evidenced in Fig. 11. The SCA online stage computational time increases monotonically with the number of clusters (notice that the time scale is logarithmic). Together with the relative errors shown in Figs. 9b and 10b, this plot highlights the balance between accuracy and computational cost that must be struck by the analyst. On one side, a low number of clusters may lead to massive speedups compared with the DNS solution, but the obtained solution may not be sufficiently accurate (accuracy lower bound). On the other side, a high number of clusters may yield highly accurate solutions but at the expense of prohibitive computational times (computational cost upper bound). Moreover, as the computational time of the SCA analysis approaches the DNS time and the speedup becomes less significant, the use of the ROM itself loses its purpose. For instance, if the analyst accepts an overall homogenized stress relative error below 1% in these numerical examples, a number of clusters around 50 would be a suitable choice: the fiber-reinforced composite SCA analysis with nc = 54 took approximately 9 min, while the DNS took almost 7 h. This desirable characteristic of the SCA as a CROM allows the analyst to choose a suitable degree of model compression to achieve the desired accuracy. Nonetheless, it is remarkable that, even when considering only three clusters, the accuracy of the homogenized response in the nonlinear elastoplastic regime is reasonable given the massive degree of data compression. This aspect emphasizes the potential of performing a domain decomposition through a clustering procedure based on the local elastic strain concentration tensors.
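The clustering-based decomposition discussed above can be illustrated in a few lines with Lloyd's k-means algorithm [87] applied to per-voxel features, such as the components of the local elastic strain concentration tensors obtained in the offline stage. The sketch below is a generic illustration on synthetic two-phase data; the `kmeans` helper, its deterministic initialization, and the feature construction are assumptions for demonstration, not the chapter's implementation:

```python
import numpy as np

def kmeans(features, n_clusters, n_iter=20):
    """Plain Lloyd's k-means: group points (e.g., voxels of the RVE described
    by components of their elastic strain concentration tensors) into
    clusters of similar mechanical response."""
    # deterministic initialization: evenly spaced samples as initial centers
    idx = np.linspace(0, len(features) - 1, n_clusters).astype(int)
    centers = features[idx].astype(float).copy()
    labels = np.zeros(len(features), dtype=int)
    for _ in range(n_iter):
        # assignment step: nearest center in Euclidean distance
        d = np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # update step: each center becomes the mean of its assigned points
        for k in range(n_clusters):
            if np.any(labels == k):
                centers[k] = features[labels == k].mean(axis=0)
    return labels, centers

# synthetic two-phase stand-in: 200 "matrix-like" and 100 "fiber-like" voxels
rng = np.random.default_rng(0)
feats = np.vstack([rng.normal(0.0, 0.01, (200, 4)),
                   rng.normal(1.0, 0.01, (100, 4))])
labels, centers = kmeans(feats, n_clusters=2)
```

With well-separated responses, the two material phases are recovered as two clusters; in the SCA each cluster then carries a single strain unknown in the online stage.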
9. Concluding remarks and future directions

This chapter introduces CROMs. In particular, focus is given to the pioneering SCA method [53] as a fast homogenization tool for multiscale simulations of the linear and nonlinear behavior of materials. This CROM involves two stages: offline and online. In the offline stage, material subdomains with similar mechanical behavior are created by a clustering machine learning algorithm applied to computationally cheap linear elastic simulations of the RVE. The subsequent online stage is then conducted for fast prediction of nonlinear plasticity by solving the Lippmann-Schwinger integral equilibrium equation. CROMs such as the SCA assume that the computational cost associated with the offline stage is significantly compensated by the speedup gained in the online stage without considerable loss of accuracy. Several examples and references are shown throughout the chapter, where this is observed for a wide range of materials. The method does not need additional empirical parameters and can be readily applied to any number of nested scales, considering both infinitesimal and finite deformations [88]. However, there are still important future developments to consider.
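In both stages, the macroscale quantities reported throughout (and appearing in the constraints underlying Eqs. A.8 and A.11) are volume-fraction-weighted averages over the clusters. A short numpy sketch makes this concrete; the data below are purely illustrative, not the chapter's material data:

```python
import numpy as np

def homogenize(f, cluster_fields):
    """Homogenized (macroscale) field as the volume-fraction-weighted
    average of the cluster fields: sum_I f^(I) x^(I).
    f: (nc,) cluster volume fractions; cluster_fields: (nc, d, d) tensors."""
    return np.einsum('i,ijk->jk', f, cluster_fields)

# two clusters with illustrative volume fractions and stress tensors
f = np.array([0.25, 0.75])
sig = np.array([[[2.0, 0.0], [0.0, 0.0]],
                [[1.0, 0.5], [0.5, 0.0]]])
sig_bar = homogenize(f, sig)
# sig_bar[0, 0] = 0.25 * 2.0 + 0.75 * 1.0 = 1.25
```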
One of the most challenging aspects of the SCA and CROMs in general is capturing localization and failure phenomena such as plastic strain localization, damage, and fracture. Although these methods can predict the average behavior of RVEs quickly and accurately, the largest errors arise when intense localization occurs. This limitation stems from the fact that (i) the CROMs' clustering-based domain decomposition is intrinsically nonlocal and (ii) the CROMs' offline-stage material clustering remains constant irrespective of the conditions found in the actual equilibrium problem. Recall that the number of material clusters in these methods determines the degree of compression (the coarsening of the discretization), so this number directly affects the computational cost and accuracy of both the offline and online stages. In this context, we believe that clustering adaptivity, that is, the ability to dynamically refine the clustering discretization throughout the deformation path, is key to addressing this limitation while maintaining a delicate balance between low computational cost and high accuracy. This approach may also improve the CROMs' accuracy when the offline-stage data and the resulting clustering are not sufficiently representative, namely when dealing with finite strains and complex nonmonotonic loading paths. The so-called adaptive clustering-based reduced-order models (ACROMs) have been recently proposed by Ferreira et al. [89], where the coined adaptive self-consistent clustering analysis (ASCA) delivers encouraging results when modeling highly localized plasticity. Another important challenge concerns the anisotropy of the reference elastic material and its influence on the equilibrium problem solution. Accounting for it may lead to a significant improvement of accuracy when dealing with the many materials of interest whose RVEs exhibit significant anisotropic behavior.
In fact, even initially isotropic RVEs can become anisotropic as a result of accumulated plasticity or damage when subjected to large deformations. However, anisotropic reference materials imply the use of a more complex Green operator that needs to be computed efficiently. Moreover, the self-consistent scheme optimization problem must be enriched to account for an increased number of reference material properties. Considering nonperiodic boundary conditions in CROMs can also become a relevant extension of these methods, for instance, when materials exhibit nonperiodic localization or fracture patterns. Additional topics of interest include dealing with nonuniform discretization grids when using FFT-based formulations and the implementation of CROMs in high-performance computing systems with multiple CPUs and GPUs.
Appendix A. Linearization of the discretized Lippmann-Schwinger equilibrium equations

In order to solve the discretized Lippmann-Schwinger system of nonlinear equilibrium equations with the Newton-Raphson method, it is necessary to properly linearize the associated residual functions. The required linearizations are performed here in a comprehensive way. The residuals of the Lippmann-Schwinger system of equilibrium equations are defined by Eqs. (63), (64). The Jacobian matrix resulting from the linearization of the problem (see Eq. 66) is reproduced here,

$$
\mathbf{J}_{m+1} = \frac{\partial \mathbf{R}_{m+1}}{\partial \Delta\boldsymbol{\varepsilon}_{\mu,m+1}} =
\begin{bmatrix}
\dfrac{\partial \mathbf{R}^{(I)}_{m+1}}{\partial \Delta\boldsymbol{\varepsilon}^{(K)}_{\mu,m+1}} & \dfrac{\partial \mathbf{R}^{(I)}_{m+1}}{\partial \Delta\boldsymbol{\varepsilon}^{0}_{\mu,m+1}} \\[2ex]
\dfrac{\partial \mathbf{R}^{(n_c+1)}_{m+1}}{\partial \Delta\boldsymbol{\varepsilon}^{(K)}_{\mu,m+1}} & \dfrac{\partial \mathbf{R}^{(n_c+1)}_{m+1}}{\partial \Delta\boldsymbol{\varepsilon}^{0}_{\mu,m+1}}
\end{bmatrix}, \qquad \forall\, I,K = 1,2,\dots,n_c. \tag{A.1}
$$

The first-order derivatives of the residuals with respect to each unknown (see Eq. 61) are required. The first derivative can be written in its expanded form as

$$
\frac{\partial \mathbf{R}^{(I)}_{m+1}}{\partial \Delta\boldsymbol{\varepsilon}^{(K)}_{\mu,m+1}}
= \frac{\partial \Delta\boldsymbol{\varepsilon}^{(I)}_{\mu,m+1}}{\partial \Delta\boldsymbol{\varepsilon}^{(K)}_{\mu,m+1}}
+ \sum_{J=1}^{n_c} \mathbf{T}^{(I)(J)} : \frac{\partial}{\partial \Delta\boldsymbol{\varepsilon}^{(K)}_{\mu,m+1}} \left( \Delta\hat{\boldsymbol{\sigma}}^{(J)}_{\mu,m+1} - \mathbf{D}^{e,0} : \Delta\boldsymbol{\varepsilon}^{(J)}_{\mu,m+1} \right)
- \underbrace{\frac{\partial \Delta\boldsymbol{\varepsilon}^{0}_{\mu,m+1}}{\partial \Delta\boldsymbol{\varepsilon}^{(K)}_{\mu,m+1}}}_{\mathbf{0}}, \tag{A.2}
$$

and applying the derivation chain rule one gets

$$
\frac{\partial \mathbf{R}^{(I)}_{m+1}}{\partial \Delta\boldsymbol{\varepsilon}^{(K)}_{\mu,m+1}}
= \delta_{(I)(K)}\,\mathbf{I}
+ \sum_{J=1}^{n_c} \mathbf{T}^{(I)(J)} : \left( \delta_{(J)(K)}\,\mathbf{I} : \frac{\partial \Delta\hat{\boldsymbol{\sigma}}^{(K)}_{\mu,m+1}}{\partial \Delta\boldsymbol{\varepsilon}^{(K)}_{\mu,m+1}} - \mathbf{D}^{e,0} : \delta_{(J)(K)}\,\mathbf{I} \right), \tag{A.3}
$$

where I is the fourth-order identity tensor. Noticing that the material consistent tangent modulus of cluster K is defined as

$$
\mathbf{D}^{(K)}_{m+1} = \frac{\partial \Delta\hat{\boldsymbol{\sigma}}^{(K)}_{\mu,m+1}\big(\Delta\boldsymbol{\varepsilon}^{(K)}_{\mu,m+1},\, \boldsymbol{\beta}_m\big)}{\partial \Delta\boldsymbol{\varepsilon}^{(K)}_{\mu,m+1}}, \tag{A.4}
$$

it finally results in

$$
\frac{\partial \mathbf{R}^{(I)}_{m+1}}{\partial \Delta\boldsymbol{\varepsilon}^{(K)}_{\mu,m+1}} = \delta_{(I)(K)}\,\mathbf{I} + \mathbf{T}^{(I)(K)} : \left( \mathbf{D}^{(K)}_{m+1} - \mathbf{D}^{e,0} \right). \tag{A.5}
$$

The derivative of the same residual with respect to the homogeneous incremental far-field strain is expanded as

$$
\frac{\partial \mathbf{R}^{(I)}_{m+1}}{\partial \Delta\boldsymbol{\varepsilon}^{0}_{\mu,m+1}}
= \underbrace{\frac{\partial}{\partial \Delta\boldsymbol{\varepsilon}^{0}_{\mu,m+1}} \left( \Delta\boldsymbol{\varepsilon}^{(I)}_{\mu,m+1} + \sum_{J=1}^{n_c} \mathbf{T}^{(I)(J)} : \left( \Delta\hat{\boldsymbol{\sigma}}^{(J)}_{\mu,m+1} - \mathbf{D}^{e,0} : \Delta\boldsymbol{\varepsilon}^{(J)}_{\mu,m+1} \right) \right)}_{\mathbf{0}}
- \frac{\partial \Delta\boldsymbol{\varepsilon}^{0}_{\mu,m+1}}{\partial \Delta\boldsymbol{\varepsilon}^{0}_{\mu,m+1}}, \tag{A.6}
$$

being defined as

$$
\frac{\partial \mathbf{R}^{(I)}_{m+1}}{\partial \Delta\boldsymbol{\varepsilon}^{0}_{\mu,m+1}} = -\mathbf{I}. \tag{A.7}
$$

Concerning the macroscale strain constraint, the third residual derivative yields

$$
\frac{\partial \mathbf{R}^{(n_c+1)}_{m+1}}{\partial \Delta\boldsymbol{\varepsilon}^{(K)}_{\mu,m+1}}
= \frac{\partial}{\partial \Delta\boldsymbol{\varepsilon}^{(K)}_{\mu,m+1}} \left( \sum_{I=1}^{n_c} f^{(I)}\, \Delta\boldsymbol{\varepsilon}^{(I)}_{\mu,m+1} \right)
- \underbrace{\frac{\partial \Delta\boldsymbol{\varepsilon}_{m+1}}{\partial \Delta\boldsymbol{\varepsilon}^{(K)}_{\mu,m+1}}}_{\mathbf{0}}, \tag{A.8}
$$

which can be developed as

$$
\frac{\partial \mathbf{R}^{(n_c+1)}_{m+1}}{\partial \Delta\boldsymbol{\varepsilon}^{(K)}_{\mu,m+1}}
= \sum_{I=1}^{n_c} f^{(I)}\, \frac{\partial \Delta\boldsymbol{\varepsilon}^{(I)}_{\mu,m+1}}{\partial \Delta\boldsymbol{\varepsilon}^{(K)}_{\mu,m+1}}
= \sum_{I=1}^{n_c} f^{(I)}\, \delta_{(I)(K)}\,\mathbf{I}, \tag{A.9}
$$

and results in

$$
\frac{\partial \mathbf{R}^{(n_c+1)}_{m+1}}{\partial \Delta\boldsymbol{\varepsilon}^{(K)}_{\mu,m+1}} = f^{(K)}\,\mathbf{I}. \tag{A.10}
$$

In a similar manner, the residual derivative associated with a macroscale stress constraint is given by

$$
\frac{\partial \mathbf{R}^{(n_c+1)}_{m+1}}{\partial \Delta\boldsymbol{\varepsilon}^{(K)}_{\mu,m+1}}
= \frac{\partial}{\partial \Delta\boldsymbol{\varepsilon}^{(K)}_{\mu,m+1}} \left( \sum_{I=1}^{n_c} f^{(I)}\, \Delta\hat{\boldsymbol{\sigma}}^{(I)}_{\mu,m+1} \right)
- \underbrace{\frac{\partial \Delta\boldsymbol{\sigma}_{m+1}}{\partial \Delta\boldsymbol{\varepsilon}^{(K)}_{\mu,m+1}}}_{\mathbf{0}}, \tag{A.11}
$$

which after application of the derivation chain rule becomes

$$
\frac{\partial \mathbf{R}^{(n_c+1)}_{m+1}}{\partial \Delta\boldsymbol{\varepsilon}^{(K)}_{\mu,m+1}}
= \sum_{I=1}^{n_c} f^{(I)}\, \frac{\partial \Delta\hat{\boldsymbol{\sigma}}^{(I)}_{\mu,m+1}}{\partial \Delta\boldsymbol{\varepsilon}^{(K)}_{\mu,m+1}}
= \sum_{I=1}^{n_c} f^{(I)}\, \delta_{(I)(K)}\,\mathbf{I} : \mathbf{D}^{(K)}_{m+1}, \tag{A.12}
$$

and results in

$$
\frac{\partial \mathbf{R}^{(n_c+1)}_{m+1}}{\partial \Delta\boldsymbol{\varepsilon}^{(K)}_{\mu,m+1}} = f^{(K)}\, \mathbf{D}^{(K)}_{m+1}. \tag{A.13}
$$

Finally, the fourth and last derivative is null,

$$
\frac{\partial \mathbf{R}^{(n_c+1)}_{m+1}}{\partial \Delta\boldsymbol{\varepsilon}^{0}_{\mu,m+1}} = \mathbf{0}. \tag{A.14}
$$
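Assuming a matrix (Mandel/Voigt-type) representation in which each strain or stress increment is a vector of d components and each modulus a d × d matrix, the Jacobian blocks of Eqs. (A.5), (A.7), (A.10), and (A.14) for macroscale strain control can be assembled as in the following numpy sketch. The function name and the stand-in inputs are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def assemble_jacobian(T, D, De0, f):
    """Assemble the Newton-Raphson Jacobian of the discretized
    Lippmann-Schwinger system under macroscale strain control.

    T   : (nc, nc, d, d) cluster interaction tensors  T^(I)(J)
    D   : (nc, d, d)     consistent tangent moduli    D^(K)_{m+1}
    De0 : (d, d)         reference material tangent   D^{e,0}
    f   : (nc,)          cluster volume fractions     f^(K)
    """
    nc, d = D.shape[0], D.shape[1]
    n = (nc + 1) * d
    J = np.zeros((n, n))
    I = np.eye(d)
    for i in range(nc):
        for k in range(nc):
            # Eq. (A.5): delta_(I)(K) I + T^(I)(K) : (D^(K) - D^{e,0})
            block = T[i, k] @ (D[k] - De0)
            if i == k:
                block = block + I
            J[i*d:(i+1)*d, k*d:(k+1)*d] = block
        # Eq. (A.7): derivative w.r.t. the far-field strain is -I
        J[i*d:(i+1)*d, nc*d:] = -I
    for k in range(nc):
        # Eq. (A.10): macroscale strain constraint row, f^(K) I
        J[nc*d:, k*d:(k+1)*d] = f[k] * I
    # Eq. (A.14): the last diagonal block is the zero matrix
    return J

# sanity check: with no cluster interaction (T = 0) and D = D^{e,0},
# the strain rows reduce to the identity
T = np.zeros((2, 2, 3, 3))
D = np.tile(np.eye(3), (2, 1, 1))
De0 = np.eye(3)
f = np.array([0.5, 0.5])
J = assemble_jacobian(T, D, De0, f)
```

Under macroscale stress control, the constraint row would instead use Eq. (A.13), i.e., blocks f^(K) D^(K)_{m+1}.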
Appendix B. Self-consistent scheme optimization problem

In order to set the homogeneous tangent modulus of the reference material, D^{e,0}, as close as possible to the effective tangent modulus of the CRVE, Liu et al. [53] proposed a self-consistent approach. The establishment and solution of the optimization problem underlying the regression-based self-consistent scheme are detailed here.

The regression-based self-consistent scheme optimization problem is stated in Problem 3 and aims to minimize the cost function defined as

$$
g(\lambda^0, \mu^0) = \left\| \Delta\boldsymbol{\sigma}_{m+1} - \mathbf{D}^{e,0}_{m+1}(\lambda^0, \mu^0) : \Delta\boldsymbol{\varepsilon}_{m+1} \right\|^2, \tag{B.1}
$$

where D^{e,0}_{m+1} is the tangent modulus of the reference homogeneous elastic material at loading increment m + 1, defined as

$$
\mathbf{D}^{e,0}_{m+1}(\lambda^0, \mu^0) = \lambda^0_{m+1}\, \mathbf{I} \otimes \mathbf{I} + 2\mu^0_{m+1}\, \mathbf{I}_s. \tag{B.2}
$$

Denoting the tensorial quantity in Eq. (B.1) as A for the sake of compact notation,

$$
\mathbf{A} = \Delta\boldsymbol{\sigma}_{m+1} - \mathbf{D}^{e,0}_{m+1}(\lambda^0, \mu^0) : \Delta\boldsymbol{\varepsilon}_{m+1}, \tag{B.3}
$$

and applying the differentiation chain rule, the derivative of the cost function with respect to λ⁰ is given by

$$
\frac{\partial g(\lambda^0,\mu^0)}{\partial \lambda^0}
= \frac{\partial \|\mathbf{A}\|^2}{\partial \|\mathbf{A}\|}\, \frac{\partial \|\mathbf{A}\|}{\partial \mathbf{A}} : \frac{\partial \mathbf{A}}{\partial \lambda^0}
= -2\,\mathbf{A} : \left( \frac{\partial \mathbf{D}^{e,0}_{m+1}(\lambda^0,\mu^0)}{\partial \lambda^0} : \Delta\boldsymbol{\varepsilon}_{m+1} \right). \tag{B.4}
$$

Given that

$$
\frac{\partial \mathbf{D}^{e,0}_{m+1}(\lambda^0,\mu^0)}{\partial \lambda^0} = \mathbf{I} \otimes \mathbf{I}, \tag{B.5}
$$

and that

$$
(\mathbf{I} \otimes \mathbf{I}) : \Delta\boldsymbol{\varepsilon}_{m+1} = \mathrm{tr}[\Delta\boldsymbol{\varepsilon}_{m+1}]\, \mathbf{I}, \tag{B.6}
$$

it is possible to write

$$
\frac{\partial g(\lambda^0,\mu^0)}{\partial \lambda^0} = -2\, \mathrm{tr}[\Delta\boldsymbol{\varepsilon}_{m+1}]\, \mathbf{A} : \mathbf{I} = -2\, \mathrm{tr}[\Delta\boldsymbol{\varepsilon}_{m+1}]\, \mathrm{tr}[\mathbf{A}]. \tag{B.7}
$$

After expanding A and some simple algebraic manipulations, the derivative can be finally written as

$$
\frac{\partial g(\lambda^0,\mu^0)}{\partial \lambda^0}
= -2\, \mathrm{tr}[\Delta\boldsymbol{\varepsilon}_{m+1}] \left( \mathrm{tr}[\Delta\boldsymbol{\sigma}_{m+1}] - \lambda^0\, \mathrm{tr}[\mathbf{I}]\, \mathrm{tr}[\Delta\boldsymbol{\varepsilon}_{m+1}] - 2\mu^0\, \mathrm{tr}[\Delta\boldsymbol{\varepsilon}_{m+1}] \right). \tag{B.8}
$$

Concerning the derivative with respect to μ⁰, a similar procedure yields

$$
\frac{\partial g(\lambda^0,\mu^0)}{\partial \mu^0}
= \frac{\partial \|\mathbf{A}\|^2}{\partial \|\mathbf{A}\|}\, \frac{\partial \|\mathbf{A}\|}{\partial \mathbf{A}} : \frac{\partial \mathbf{A}}{\partial \mu^0}
= -2\,\mathbf{A} : \left( \frac{\partial \mathbf{D}^{e,0}_{m+1}(\lambda^0,\mu^0)}{\partial \mu^0} : \Delta\boldsymbol{\varepsilon}_{m+1} \right). \tag{B.9}
$$

Given that

$$
\frac{\partial \mathbf{D}^{e,0}_{m+1}(\lambda^0,\mu^0)}{\partial \mu^0} = 2\,\mathbf{I}_s, \tag{B.10}
$$

and that the infinitesimal strain tensor is symmetric,

$$
\mathbf{I}_s : \Delta\boldsymbol{\varepsilon}_{m+1} = \Delta\boldsymbol{\varepsilon}_{m+1}, \tag{B.11}
$$

it comes

$$
\frac{\partial g(\lambda^0,\mu^0)}{\partial \mu^0} = -4\,\mathbf{A} : \Delta\boldsymbol{\varepsilon}_{m+1}. \tag{B.12}
$$

After expanding A and performing some algebraic manipulations, the derivative is finally given by

$$
\frac{\partial g(\lambda^0,\mu^0)}{\partial \mu^0}
= -4 \left( \Delta\boldsymbol{\sigma}_{m+1} : \Delta\boldsymbol{\varepsilon}_{m+1} - \lambda^0\, \mathrm{tr}[\Delta\boldsymbol{\varepsilon}_{m+1}]\, \mathrm{tr}[\Delta\boldsymbol{\varepsilon}_{m+1}] - 2\mu^0\, \Delta\boldsymbol{\varepsilon}_{m+1} : \Delta\boldsymbol{\varepsilon}_{m+1} \right). \tag{B.13}
$$

The following system of two linear equations can then be established to solve the minimization problem:

$$
\begin{cases}
\dfrac{\partial g(\lambda^0,\mu^0)}{\partial \lambda^0} = -2\, \mathrm{tr}[\Delta\boldsymbol{\varepsilon}_{m+1}] \left( \mathrm{tr}[\Delta\boldsymbol{\sigma}_{m+1}] - \lambda^0\, \mathrm{tr}[\mathbf{I}]\, \mathrm{tr}[\Delta\boldsymbol{\varepsilon}_{m+1}] - 2\mu^0\, \mathrm{tr}[\Delta\boldsymbol{\varepsilon}_{m+1}] \right) = 0, \\[2ex]
\dfrac{\partial g(\lambda^0,\mu^0)}{\partial \mu^0} = -4 \left( \Delta\boldsymbol{\sigma}_{m+1} : \Delta\boldsymbol{\varepsilon}_{m+1} - \lambda^0\, \mathrm{tr}[\Delta\boldsymbol{\varepsilon}_{m+1}]\, \mathrm{tr}[\Delta\boldsymbol{\varepsilon}_{m+1}] - 2\mu^0\, \Delta\boldsymbol{\varepsilon}_{m+1} : \Delta\boldsymbol{\varepsilon}_{m+1} \right) = 0,
\end{cases} \tag{B.14}
$$

and the optimum point (λ⁰_{m+1}, μ⁰_{m+1}) is found as

$$
\begin{bmatrix} \lambda^0_{m+1} \\ \mu^0_{m+1} \end{bmatrix}
= \begin{bmatrix}
\mathrm{tr}[\mathbf{I}]\, \mathrm{tr}[\Delta\boldsymbol{\varepsilon}_{m+1}] & 2\, \mathrm{tr}[\Delta\boldsymbol{\varepsilon}_{m+1}] \\
\mathrm{tr}[\Delta\boldsymbol{\varepsilon}_{m+1}]^2 & 2\, \Delta\boldsymbol{\varepsilon}_{m+1} : \Delta\boldsymbol{\varepsilon}_{m+1}
\end{bmatrix}^{-1}
\begin{bmatrix} \mathrm{tr}[\Delta\boldsymbol{\sigma}_{m+1}] \\ \Delta\boldsymbol{\sigma}_{m+1} : \Delta\boldsymbol{\varepsilon}_{m+1} \end{bmatrix}. \tag{B.15}
$$
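Eq. (B.15) is just a 2 × 2 linear solve, which is what makes the regression-based self-consistent scheme inexpensive. The numpy sketch below (an illustrative implementation of Eq. B.15, not the authors' code) solves it and, as a consistency check, recovers the Lamé parameters exactly when the stress increment is generated by an isotropic elastic law:

```python
import numpy as np

def self_consistent_lame(dstrain, dstress):
    """Solve Eq. (B.15) for the reference material Lame parameters
    (lambda0, mu0), given the macroscale strain and stress increments
    as d x d symmetric second-order tensors."""
    d = dstrain.shape[0]                      # tr[I] = d
    tr_e, tr_s = np.trace(dstrain), np.trace(dstress)
    A = np.array([[d * tr_e,     2.0 * tr_e],
                  [tr_e * tr_e,  2.0 * np.sum(dstrain * dstrain)]])
    b = np.array([tr_s, np.sum(dstress * dstrain)])
    lam0, mu0 = np.linalg.solve(A, b)
    return lam0, mu0

# consistency check: an isotropic elastic increment is recovered exactly
lam, mu = 2.0, 1.5
eps = np.array([[1.0, 0.3, 0.0],
                [0.3, 0.5, 0.0],
                [0.0, 0.0, 0.2]])
sig = lam * np.trace(eps) * np.eye(3) + 2.0 * mu * eps
lam0, mu0 = self_consistent_lame(eps, sig)
```

In the SCA online stage this solve is repeated within the self-consistent loop of each loading increment, updating (λ⁰, μ⁰) until the tolerance is met.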
References

[1] M.F. Horstemeyer, Integrated Computational Materials Engineering (ICME) for Metals: Using Multiscale Modeling to Invigorate Engineering Design With Science, John Wiley & Sons, 2012.
[2] M.F. Horstemeyer, Integrated Computational Materials Engineering (ICME) for Metals: Concepts and Case Studies, John Wiley & Sons, 2018.
[3] I. Babuška, Homogenization and its application. Mathematical and computational problems, in: B. Hubbard (Ed.), Numerical Solution of Partial Differential Equations—III, Academic Press, 1976, pp. 89–116, https://doi.org/10.1016/B978-0-12-358503-5.50009-9.
[4] K. Matouš, M.G.D. Geers, V.G. Kouznetsova, A. Gillman, A review of predictive nonlinear theories for multiscale modeling of heterogeneous materials, J. Comput. Phys. 330 (2017) 192–220, https://doi.org/10.1016/j.jcp.2016.10.070.
[5] M.G.D. Geers, V.G. Kouznetsova, W.A.M. Brekelmans, Multi-scale computational homogenization: trends and challenges, J. Comput. Appl. Math. 234 (7) (2010) 2175–2182, https://doi.org/10.1016/j.cam.2009.08.077.
[6] V.P. Nguyen, M. Stroeven, L.J. Sluys, Multiscale continuous and discontinuous modeling of heterogeneous materials: a review on recent developments, J. Multiscale Model. 3 (4) (2011) 229–270, https://doi.org/10.1142/S1756973711000509.
[7] S. Saeb, P. Steinmann, A. Javili, Aspects of computational homogenization at finite deformations: a unifying review from Reuss' to Voigt's bound, Appl. Mech. Rev. 68 (5) (2016) 050801, https://doi.org/10.1115/1.4034024.
[8] M.G.D. Geers, V.G. Kouznetsova, K. Matouš, J. Yvonnet, Homogenization methods and multiscale modeling: nonlinear problems, in: Encyclopedia of Computational Mechanics, second ed., American Cancer Society, 2017, pp. 1–34, https://doi.org/10.1002/9781119176817.ecm2107.
[9] R. Hill, Elastic properties of reinforced solids: some theoretical principles, J. Mech. Phys. Solids 11 (5) (1963) 357–372, https://doi.org/10.1016/0022-5096(63)90036-X.
[10] E. de Souza Neto, R. Feijóo, Variational foundations of multi-scale constitutive models of solid: small and large strain kinematical formulation, LNCC Res. Dev. Rep. 16 (2006) 1–53.
[11] E. de Souza Neto, R. Feijóo, On the equivalence between spatial and material volume averaging of stress in large strain multi-scale solid constitutive models, Mech. Mater. 40 (2008) 803–811, https://doi.org/10.1016/j.mechmat.2008.04.006.
[12] D. Peric, E. de Souza Neto, R. Feijóo, M. Partovi, A. Molina, On micro-to-macro transitions for multi-scale analysis of non-linear heterogeneous materials: unified variational basis and finite element implementation, Int. J. Numer. Methods Eng. 87 (1–5) (2011) 149–170.
[13] E. de Souza Neto, P. Blanco, P.J. Sánchez, R. Feijóo, An RVE-based multiscale theory of solids with micro-scale inertia and body force effects, Mech. Mater. 80 (2014), https://doi.org/10.1016/j.mechmat.2014.10.007.
[14] P. Blanco, P. Sánchez, E. de Souza Neto, R. Feijóo, Variational foundations and generalized unified theory of RVE-based multiscale models, Arch. Comput. Methods Eng. (2014), https://doi.org/10.1007/s11831-014-9137-5.
[15] P. Blanco, P.J. Sánchez, E. de Souza Neto, R. Feijóo, The method of multiscale virtual power for the derivation of a second order mechanical model, Mech. Mater. 99 (2016), https://doi.org/10.1016/j.mechmat.2016.05.003.
[16] R. Feijóo, E. de Souza Neto, Variational foundations of large strain multiscale solid constitutive models: kinematical formulation, in: Advanced Computational Materials Modeling: From Classical to Multi-Scale Techniques, 2011, pp. 341–378, https://doi.org/10.1002/9783527632312.ch9.
[17] J. Renard, M.F. Marmonier, Étude de l'initiation de l'endommagement dans la matrice d'un matériau composite par une méthode d'homogénéisation, Aerosp. Sci. Technol. 6 (1987) 37–51.
[18] R.J.M. Smit, W.A.M. Brekelmans, H.E.H. Meijer, Prediction of the mechanical behavior of nonlinear heterogeneous systems by multi-level finite element modeling, Comput. Methods Appl. Mech. Eng. 155 (1) (1998) 181–192, https://doi.org/10.1016/S0045-7825(97)00139-4.
[19] C. Miehe, J. Schröder, J. Schotte, Computational homogenization analysis in finite plasticity simulation of texture development in polycrystalline materials, Comput. Methods Appl. Mech. Eng. 171 (3) (1999) 387–418, https://doi.org/10.1016/S0045-7825(98)00218-7.
[20] F. Feyel, J.-L. Chaboche, FE2 multiscale approach for modelling the elastoviscoplastic behaviour of long fibre SiC/Ti composite materials, Comput. Methods Appl. Mech. Eng. 183 (3) (2000) 309–330, https://doi.org/10.1016/S0045-7825(99)00224-8.
[21] N. Takano, Y. Ohnishi, M. Zako, K. Nishiyabu, The formulation of homogenization method applied to large deformation problem for composite materials, Int. J. Solids Struct. 37 (44) (2000) 6517–6535, https://doi.org/10.1016/S0020-7683(99)00284-X.
[22] K. Terada, N. Kikuchi, A class of general algorithms for multi-scale analyses of heterogeneous media, Comput. Methods Appl. Mech. Eng. 190 (40) (2001) 5427–5464, https://doi.org/10.1016/S0045-7825(01)00179-7.
[23] C. Miehe, Strain-driven homogenization of inelastic microstructures and composites based on an incremental variational formulation, Int. J. Numer. Methods Eng. 55 (11) (2002) 1285–1322, https://doi.org/10.1002/nme.515.
[24] C. Miehe, A. Koch, Computational micro-to-macro transitions of discretized microstructures undergoing small strains, Arch. Appl. Mech. 72 (4) (2002) 300–317, https://doi.org/10.1007/s00419-002-0212-2.
[25] C. Miehe, J. Schröder, C. Bayreuther, On the homogenization analysis of composite materials based on discretized fluctuations on the micro-structure, Acta Mechanica 155 (1) (2002) 1–16, https://doi.org/10.1007/BF01170836.
[26] J. Segurado, J. Llorca, A numerical approximation to the elastic properties of sphere-reinforced composites, J. Mech. Phys. Solids 50 (10) (2002) 2107–2121, https://doi.org/10.1016/S0022-5096(02)00021-2.
[27] F. Feyel, A multilevel finite element method (FE2) to describe the response of highly non-linear structures using generalized continua, Comput. Methods Appl. Mech. Eng. 192 (28) (2003) 3233–3244, https://doi.org/10.1016/S0045-7825(03)00348-7.
[28] K. Terada, I. Saiki, K. Matsui, Y. Yamakawa, Two-scale kinematics and linearization for simultaneous two-scale analysis of periodic heterogeneous solids at finite strain, Comput. Methods Appl. Mech. Eng. 192 (31) (2003) 3531–3563, https://doi.org/10.1016/S0045-7825(03)00365-7.
[29] S. Klinge, K. Hackl, Application of the multiscale FEM to the modeling of nonlinear composites with a random microstructure, Int. J. Multiscale Comput. Eng. 10 (3) (2012), https://doi.org/10.1615/IntJMultCompEng.2012002059.
[30] J. Schröder, A numerical two-scale homogenization scheme: the FE2-method, in: J. Schröder, K. Hackl (Eds.), Plasticity and Beyond: Microstructures, Crystal-Plasticity and Phase Transitions, Springer, Vienna, 2014, pp. 1–64, https://doi.org/10.1007/978-3-7091-1625-8_1.
[31] F. Feyel, Application du Calcul Parallèle aux Modèles à Grand Nombre de Variables Internes (Ph.D. thesis), École Nationale Supérieure des Mines de Paris, 1998.
[32] J. Kochmann, S. Wulfinghoff, S. Reese, J.R. Mianroodi, B. Svendsen, Two-scale FE-FFT- and phase-field-based computational modeling of bulk microstructural evolution and macroscopic material behavior, Comput. Methods Appl. Mech. Eng. 305 (2016) 89–110, https://doi.org/10.1016/J.CMA.2016.03.001.
[33] J. Kochmann, S. Wulfinghoff, L. Ehle, J. Mayer, B. Svendsen, S. Reese, Efficient and accurate two-scale FE-FFT-based prediction of the effective material behavior of elasto-viscoplastic polycrystals, Comput. Mech. 61 (2018) 751–764, https://doi.org/10.1007/S00466-017-1476-2.
[34] J.A. Moore, R. Ma, A.G. Domel, W.K. Liu, An efficient multiscale model of damping properties for filled elastomers with complex microstructures, Compos. B Eng. 62 (2014) 262–270, https://doi.org/10.1016/j.compositesb.2014.03.005.
[35] G.J. Dvorak, Transformation field analysis of inelastic composite materials, Proc. R. Soc. Lond. A Math. Phys. Sci. 437 (1900) (1992) 311–327, https://doi.org/10.1098/rspa.1992.0063.
[36] J.C. Michel, P. Suquet, Nonuniform transformation field analysis, Int. J. Solids Struct. 40 (25) (2003) 6937–6955, https://doi.org/10.1016/S0020-7683(03)00346-9.
[37] S. Roussette, J.C. Michel, P. Suquet, Nonuniform transformation field analysis of elastic–viscoplastic composites, Compos. Sci. Technol. 69 (1) (2009) 22–27, https://doi.org/10.1016/j.compscitech.2007.10.032.
[38] I.T. Jolliffe, Principal Component Analysis, Springer Series in Statistics, second ed., Springer-Verlag, New York, 2002.
[39] J. Yvonnet, Q.-C. He, The reduced model multiscale method (R3M) for the non-linear homogenization of hyperelastic media at finite strains, J. Comput. Phys. 223 (1) (2007) 341–368, https://doi.org/10.1016/j.jcp.2006.09.019.
[40] P. Ladevèze, J.-C. Passieux, D. Neron, The LATIN multiscale computational method and the proper generalized decomposition, Comput. Methods Appl. Mech. Eng. 199 (21) (2010) 1287–1296, https://doi.org/10.1016/j.cma.2009.06.023.
[41] E. Cueto, D. González, I. Alfaro, Proper Generalized Decompositions, Springer International Publishing, Cham, 2016, https://doi.org/10.1007/978-3-319-29994-5.
[42] R. Ibáñez, E. Abisset-Chavanne, F. Chinesta, A. Huerta, E. Cueto, A local multiple proper generalized decomposition based on the partition of unity, Int. J. Numer. Methods Eng. 120 (2) (2019) 139–152, https://doi.org/10.1002/nme.6128.
[43] S. Boyaval, Reduced-basis approach for homogenization beyond the periodic setting, Multiscale Model. Simul. 7 (1) (2008) 466–494, https://doi.org/10.1137/070688791.
[44] N.C. Nguyen, A multiscale reduced-basis method for parametrized elliptic partial differential equations with multiple scales, J. Comput. Phys. 227 (23) (2008) 9807–9822, https://doi.org/10.1016/j.jcp.2008.07.025.
[45] J.A. Hernández, J. Oliver, A.E. Huespe, M.A. Caicedo, J.C. Cante, High-performance model reduction techniques in computational multiscale homogenization, Comput. Methods Appl. Mech. Eng. 276 (2014) 149–189, https://doi.org/10.1016/j.cma.2014.03.011.
[46] M. Barrault, Y. Maday, N.C. Nguyen, A.T. Patera, An 'empirical interpolation' method: application to efficient reduced-basis discretization of partial differential equations, C. R. Math. 339 (9) (2004) 667–672, https://doi.org/10.1016/j.crma.2004.08.006.
[47] N.C. Nguyen, A.T. Patera, J. Peraire, A 'best points' interpolation method for efficient approximation of parametrized functions, Int. J. Numer. Methods Eng. 73 (4) (2008) 521–543, https://doi.org/10.1002/nme.2086.
[48] P. Astrid, S. Weiland, K. Willcox, T. Backx, Missing point estimation in models described by proper orthogonal decomposition, IEEE Trans. Autom. Control 53 (2008) 2237–2251, https://doi.org/10.1109/TAC.2008.2006102.
[49] S. Chaturantabut, D. Sorensen, Nonlinear model reduction via discrete empirical interpolation, SIAM J. Sci. Comput. 32 (5) (2010) 2737–2764, https://doi.org/10.1137/090766498.
[50] C. Farhat, P. Avery, T. Chapman, J. Cortial, Dimensional reduction of nonlinear finite element dynamic models with finite rotations and energy-based mesh sampling and weighting for computational efficiency, Int. J. Numer. Methods Eng. 98 (2014), https://doi.org/10.1002/nme.4668.
[51] J.A. Hernández, M.A. Caicedo, A. Ferrer, Dimensional hyper-reduction of nonlinear finite element models via empirical cubature, Comput. Methods Appl. Mech. Eng. 313 (2017) 687–722, https://doi.org/10.1016/j.cma.2016.10.022.
[52] R.A. van Tuijl, C. Harnish, K. Matouš, J.J.C. Remmers, M.G.D. Geers, Wavelet based reduced order models for microstructural analyses, Comput. Mech. 63 (3) (2019) 535–554, https://doi.org/10.1007/s00466-018-1608-3.
[53] Z. Liu, M.A. Bessa, W.K. Liu, Self-consistent clustering analysis: an efficient multi-scale scheme for inelastic heterogeneous materials, Comput. Methods Appl. Mech. Eng. 306 (2016) 319–341, https://doi.org/10.1016/j.cma.2016.04.004.
[54] M.A. Bessa, R. Bostanabad, Z. Liu, A. Hu, D.W. Apley, C. Brinson, W. Chen, W.K. Liu, A framework for data-driven analysis of materials under uncertainty: countering the curse of dimensionality, Comput. Methods Appl. Mech. Eng. 320 (2017) 633–667, https://doi.org/10.1016/j.cma.2017.03.037.
[55] Z. Liu, M. Fleming, W.K. Liu, Microstructural material database for self-consistent clustering analysis of elastoplastic strain softening materials, Comput. Methods Appl. Mech. Eng. 330 (2018) 547–577, https://doi.org/10.1016/j.cma.2017.11.005.
[56] W. Yan, S. Lin, O.L. Kafka, C. Yu, Z. Liu, Y. Lian, S. Wolff, J. Cao, G.J. Wagner, W.K. Liu, Modeling process-structure-property relationships for additive manufacturing, Front. Mech. Eng. 13 (4) (2018) 482–492, https://doi.org/10.1007/s11465-018-0505-y.
[57] W. Yan, S. Lin, O.L. Kafka, Y. Lian, C. Yu, Z. Liu, J. Yan, S. Wolff, H. Wu, E. Ndip-Agbor, M. Mozaffar, K. Ehmann, J. Cao, G.J. Wagner, W.K. Liu, Data-driven multi-scale multi-physics models to derive process-structure-property relationships for additive manufacturing, Comput. Mech. 61 (5) (2018) 521–541, https://doi.org/10.1007/s00466-018-1539-z.
[58] W. Yan, Y. Lian, C. Yu, O.L. Kafka, Z. Liu, W.K. Liu, G.J. Wagner, An integrated process-structure-property modeling framework for additive manufacturing, Comput. Methods Appl. Mech. Eng. 339 (2018) 184–204, https://doi.org/10.1016/j.cma.2018.05.004.
[59] Z. Liu, Reduced-Order Homogenization of Heterogeneous Material Systems: From Viscoelasticity to Nonlinear Elasto-Plastic Softening Material (Ph.D. thesis), Northwestern University, 2017.
[60] C. Yu, O.L. Kafka, W.K. Liu, Self-consistent clustering analysis for multiscale modeling at finite strains, Comput. Methods Appl. Mech. Eng. 349 (2019) 339–359, https://doi.org/10.1016/j.cma.2019.02.027.
[61] M. Shakoor, O.L. Kafka, C. Yu, W.K. Liu, Data science for finite strain mechanical science of ductile materials, Comput. Mech. 64 (1) (2019) 33–45, https://doi.org/10.1007/s00466-018-1655-9.
[62] C. He, J. Gao, H. Li, J. Ge, Y. Chen, J. Liu, D. Fang, A data-driven self-consistent clustering analysis for the progressive damage behavior of 3D braided composites, Compos. Struct. 249 (2020) 112471, https://doi.org/10.1016/j.compstruct.2020.112471.
[63] J. Gao, M. Shakoor, G. Domel, M. Merzkirch, G. Zhou, D. Zeng, X. Su, W.K. Liu, Predictive multiscale modeling for unidirectional carbon fiber reinforced polymers, Compos. Sci. Technol. 186 (2020) 107922, https://doi.org/10.1016/j.compscitech.2019.107922.
[64] X. Han, J. Gao, M. Fleming, C. Xu, W. Xie, S. Meng, W.K. Liu, Efficient multiscale modeling for woven composites based on self-consistent clustering analysis, Comput. Methods Appl. Mech. Eng. 364 (2020) 112929, https://doi.org/10.1016/j.cma.2020.112929.
[65] S. Wulfinghoff, F. Cavaliere, S. Reese, Model order reduction of nonlinear homogenization problems using a Hashin-Shtrikman type finite element method, Comput. Methods Appl. Mech. Eng. 330 (2017), https://doi.org/10.1016/j.cma.2017.10.019.
[66] F. Cavaliere, S. Reese, S. Wulfinghoff, Efficient two-scale simulations of engineering structures using the Hashin-Shtrikman type finite element method, Comput. Mech. (2019), https://doi.org/10.1007/s00466-019-01758-4.
[67] M. Schneider, On the mathematical foundations of the self-consistent clustering analysis for non-linear materials at small strains, Comput. Methods Appl. Mech. Eng. 354 (2019) 783–801, https://doi.org/10.1016/j.cma.2019.06.003.
[68] H. Li, O.L. Kafka, J. Gao, C. Yu, Y. Nie, L. Zhang, M. Tajdari, S. Tang, X. Guo, G. Li, S. Tang, G. Cheng, W.K. Liu, Clustering discretization methods for generation of material performance databases in machine learning and design optimization, Comput. Mech. 64 (2) (2019) 281–305, https://doi.org/10.1007/s00466-019-01716-0.
[69] Y. Nie, G. Cheng, X. Li, L. Xu, K. Li, Principle of cluster minimum complementary energy of FEM-cluster-based reduced order method: fast updating the interaction matrix and predicting effective nonlinear properties of heterogeneous material, Comput. Mech. 64 (2) (2019) 323–349, https://doi.org/10.1007/s00466-019-01710-6.
[70] H. Moulinec, P. Suquet, A fast numerical method for computing the linear and nonlinear mechanical properties of composites, Comptes Rendus de l'Académie des Sciences. Série II. Mécanique, Physique, Chimie, Astronomie 318 (11) (1994) 1417–1423.
[71] T. Mura, Micromechanics of Defects in Solids, Springer Science & Business Media, 1987.
[72] J. Yvonnet, A fast method for solving microstructural problems defined by digital images: a space Lippmann-Schwinger scheme: the SLS method, Int. J. Numer. Methods Eng. 92 (2) (2012) 178–205, https://doi.org/10.1002/nme.4334.
[73] B.A. Lippmann, J. Schwinger, Variational principles for scattering processes. I, Phys. Rev. 79 (3) (1950) 469–480, https://doi.org/10.1103/PhysRev.79.469.
[74] E. Kröner, Statistical Continuum Mechanics: Course Held at the Department of General Mechanics, October 1971, Springer, 1971.
[75] P.H. Dederichs, R. Zeller, Variational treatment of the elastic constants of disordered materials, Z. Phys. Hadrons Nuclei 259 (2) (1973) 103–116, https://doi.org/10.1007/BF01392841.
[76] R. Zeller, P.H. Dederichs, Elastic constants of polycrystals, Phys. Status Solidi (B) 55 (2) (1973) 831–842, https://doi.org/10.1002/pssb.2220550241.
[77] T.W.J. de Geus, J. Vondrejc, J. Zeman, R.H.J. Peerlings, M.G.D. Geers, Finite strain FFT-based non-linear solvers made simple, Comput. Methods Appl. Mech. Eng. 318 (2017) 412–430, https://doi.org/10.1016/j.cma.2016.12.032.
[78] H. Moulinec, P. Suquet, A numerical method for computing the overall response of nonlinear composites with complex microstructure, Comput. Methods Appl. Mech. Eng. 157 (1–2) (1998) 69–94, https://doi.org/10.1016/S0045-7825(97)00218-1.
[79] J.C. Michel, H. Moulinec, P. Suquet, Effective properties of composite materials with periodic microstructure: a computational approach, Comput. Methods Appl. Mech. Eng. 172 (1–4) (1999) 109–143, https://doi.org/10.1016/S0045-7825(98)00227-8.
[80] N. Lahellec, J.C. Michel, H. Moulinec, P. Suquet, Analysis of inhomogeneous materials at large strains using fast Fourier transforms, in: C. Miehe (Ed.), IUTAM Symposium on Computational Mechanics of Solid Materials at Large Strains, Springer Netherlands, Dordrecht, 2003, pp. 247–258, https://doi.org/10.1007/978-94-017-0297-3_22.
[81] J.C. Michel, H. Moulinec, P. Suquet, A computational method based on augmented Lagrangians and fast Fourier transforms for composites with high contrast, Comput. Model. Eng. Sci. 1 (2000) 79–88.
[82] J.C. Michel, H. Moulinec, P. Suquet, A computational scheme for linear and non-linear composites with arbitrary phase contrast, Int. J. Numer. Methods Eng. 52 (12) (2001) 139–160, https://doi.org/10.1002/nme.275.
[83] J. Zeman, J. Vondrejc, J. Novák, I. Marek, Accelerating a FFT-based solver for numerical homogenization of periodic media by conjugate gradients, J. Comput. Phys. 229 (21) (2010) 8065–8071, https://doi.org/10.1016/j.jcp.2010.07.010.
[84] V. Monchiet, G. Bonnet, A polarization-based FFT iterative scheme for computing the effective properties of elastic composites with arbitrary contrast, Int. J. Numer. Methods Eng. 89 (11) (2012) 1419–1436, https://doi.org/10.1002/nme.3295.
[85] J. Vondrejc, J. Zeman, I. Marek, An FFT-based Galerkin method for homogenization of periodic media, Comput. Math. Appl. 68 (3) (2014) 156–173, https://doi.org/10.1016/j.camwa.2014.05.014.
[86] J. Zeman, T.W.J. de Geus, J. Vondrejc, R.H.J. Peerlings, M.G.D. Geers, A finite element perspective on nonlinear FFT-based micromechanical simulations, Int. J. Numer. Methods Eng. 111 (10) (2017) 903–926, https://doi.org/10.1002/nme.5481.
[87] S. Lloyd, Least squares quantization in PCM, IEEE Trans. Inform. Theory 28 (2) (1982) 129–137, https://doi.org/10.1109/TIT.1982.1056489.
[88] C. Yu, O.L. Kafka, W.K. Liu, Multiresolution clustering analysis for efficient modeling of hierarchical material systems, Comput. Mech. 67 (5) (2021) 1293–1306, https://doi.org/10.1007/s00466-021-01982-x.
[89] B.P. Ferreira, F.M.A. Pires, M.A. Bessa, Adaptive clustering-based reduced-order modeling framework: fast and accurate modeling of localized history-dependent phenomena, arXiv:2109.11897 [cond-mat] (2021).
CHAPTER FIVE
Immersogeometric formulation for free-surface flows
Qiming Zhu and Jinhui Yan
Department of Civil and Environmental Engineering, University of Illinois at Urbana-Champaign, Urbana, IL, United States
Contents
1. Introduction
2. Governing equations of free-surface flow
   2.1 Level set method
   2.2 Navier-Stokes equations of incompressible flows
3. Semidiscrete formulation
   3.1 Residual-based variational multiscale method
   3.2 Redistancing and mass conservation
   3.3 Weak enforcement of Dirichlet boundary conditions
4. Tetrahedral finite cell method
   4.1 Recursive refinement of quadrature points
   4.2 Inside-outside test by ray-tracing method
5. Time integration
   5.1 Generalized-α method
   5.2 Fully coupled linear solver
6. Numerical examples
   6.1 Solitary wave impacting a stationary platform
   6.2 Dam break with obstacle
   6.3 Planing of a DTMB 5415 ship model
7. Summary and future work
References
Fundamentals of Multiscale Modeling of Structural Materials, https://doi.org/10.1016/B978-0-12-823021-3.00008-7. Copyright © 2023 Elsevier Inc. All rights reserved.

1. Introduction
Free-surface flow simulations play an important role in the design and optimization of many marine engineering structures, such as floating offshore wind turbines, tidal turbines, ships, and underwater vehicles. The finite element method is well suited to free-surface flows because of its ability to handle the complex geometries of marine structures. In addition to handling high Reynolds number turbulent flows, there are two key challenging problems
in free-surface flow simulations. One is how to treat the fluid-fluid interface and the associated large property ratio between the two fluid phases, the pressure discontinuity, and possibly violent topological changes of the interface. The other is how to treat the fluid-structure interface, which typically has a complicated geometry for real engineering structures and involves thin surrounding boundary layers. The methods for treating the free surface can be classified into two categories: interface tracking and interface capturing [1, 2]. Interface-tracking methods, such as arbitrary Lagrangian-Eulerian (ALE) methods [3], front-tracking methods [4], boundary-integral methods [5], and space-time methods [6], explicitly represent the free surface by using a deformable mesh that moves with the free-surface deformation. Interface-tracking methods possess higher accuracy per degree of freedom and have been applied to several challenging problems in offshore engineering and additive manufacturing [7, 8]. However, mesh motion and remeshing techniques are often required if the free surface undergoes singular topological changes, which can be very challenging in some scenarios. Interface-capturing methods, including level set [9, 10], front-capturing [11], volume of fluid [12], phase field [13, 14], and diffuse-interface methods [15, 16], define an auxiliary field in the computational domain and use an implicit function to represent the free surface. Although interface-capturing methods typically need higher mesh resolution around the free surface to compensate for their lower accuracy in resolving the interface, they are more flexible and do not require any mesh motion or remeshing procedures. Free-surface topological changes are handled automatically by solving an additional scalar partial differential equation.
Interface-capturing methods have been applied to a wide range of interfacial problems, including bubble dynamics [17–19], jet atomization [20], and free-surface flows [21, 22]. The methods for treating the fluid-structure interface can likewise be classified into two categories: boundary-fitted methods and immersed (or nonboundary-fitted) methods. Among boundary-fitted methods, ALE and space-time methods are the two most frequently used approaches. Both use meshes that represent the fluid-structure interface explicitly. One major difficulty of boundary-fitted methods is that the generation of high-quality volumetric meshes that conform to a complex fluid-structure interface is extremely difficult to automate. It often requires labor-intensive steps, such as defeaturing, geometry cleanup, and mesh manipulation, which are time consuming in the design-through-analysis
loop. In the context of fluid-structure interaction simulations, sophisticated mesh motion and remeshing techniques [23–25] (similar to those of interface-tracking methods) are often required, which makes the problem even more challenging. On the other hand, immersed methods [26] make use of nonboundary-fitted fluid meshes to approximate the solutions of the fluid equations. Unlike their boundary-fitted counterparts, the fluid mesh can be independent of the surface representation. This class of methods relaxes the strict mesh-conforming constraint, circumvents mesh motion and remeshing procedures, and simplifies volumetric mesh generation significantly, especially for structures with complex boundary geometries. The first immersed boundary method can be found in Ref. [27], which dealt with computational fluid dynamics (CFD) analysis of cardiovascular flows with moving boundaries. Since then, research on immersed methods has grown significantly; some recent developments using the immersed approach can be found in Refs. [28–32]. Although immersed methods for single-phase flows around complex geometries are widely documented in the literature, immersed methods based on variational principles for free-surface flows are still lacking. In this chapter, an immersed free-surface formulation is developed by integrating the immersogeometric methods developed in Refs. [33, 34] with the free-surface flow formulation developed in Refs. [35–38]. The term immersogeometric method, inspired by isogeometric analysis [39, 40], denotes immersed methods that include techniques to accurately represent the immersed structure boundary and thereby reduce the geometric errors that prevent higher-order accuracy of immersed methods. For example, immersogeometric methods can directly immerse the boundary representation (B-rep) of CAD models into a nonbody-fitted background fluid mesh [41, 42].
Some applications to heart valve modeling and compressible flow modeling of rotorcraft can be found in Refs. [43–47]. To model the free-surface flow, the techniques previously developed in Refs. [35–38] are adopted. In this formulation, the level set method is chosen to capture the free surface because of its ease of implementation and its ability to represent complicated free-surface shapes with an implicit function. The level set field is convected by the fluid velocity. The free-surface flow motion is governed by the unified Navier-Stokes equations of incompressible flows, in which the fluid properties are evaluated with the assistance of the level set field. A residual-based variational multiscale (RBVMS) formulation [48], which is widely used for turbulence modeling in CFD simulations [49–51], is adopted for the coupled Navier-Stokes and level set
equations. The combination of level set and RBVMS has proven to be an effective technique for modeling multiphase flows, with applications that include offshore floating wind turbines [36], tidal turbines [35], bubble dynamics [52], and metallic manufacturing [53]. Immersed methods by nature preclude the strong enforcement of Dirichlet boundary conditions. To this end, a Nitsche-type weak enforcement of essential boundary conditions for free-surface flows, which can be applied to both boundary-fitted and nonboundary-fitted meshes, is incorporated into the current immersogeometric formulation. The structure of this chapter is as follows. Section 2 presents the continuous governing equations of free-surface flows, which comprise the Navier-Stokes equations of incompressible flows and the level set convection equation. Section 3 presents the semidiscrete formulations, which include RBVMS, redistancing, mass balancing, and weak boundary conditions. Section 4 presents the details of implementing the tetrahedral finite cell method (FCM). Section 5 presents the time integration and the linear solver. Section 6 presents the application of the proposed formulation to three challenging problems in marine engineering: a solitary wave impacting a stationary platform, a dam break with an obstacle, and the planing of a DTMB 5415 ship model. Simulated results are compared with experimental results and with computational results based on boundary-fitted methods from other researchers. Section 7 presents the conclusions and future work.
2. Governing equations of free-surface flow
2.1 Level set method
In this section, we summarize the governing equations of free-surface flows based on the level set method. Let Ω ⊂ ℝ³ denote the air-water two-fluid domain and Γ denote its boundary. In Ω, a scalar function ϕ(x, t) is defined at each point. The free surface, denoted by Γ_l, is implicitly defined as
\Gamma_l = \{ x \in \Omega \mid \phi(x, t) = 0 \}.  (1)
In the air subdomain Ω_a and the water subdomain Ω_w, ϕ(x, t) is a signed distance function with respect to the free surface. In the present work, ϕ(x, t) takes negative values in the air phase and positive values in the water phase, namely,
\Omega_a = \{ x \in \Omega \mid \phi(x, t) < 0 \},  (2)
\Omega_w = \{ x \in \Omega \mid \phi(x, t) > 0 \}.  (3)
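As a minimal sketch of this sign convention, the level set of a flat air-water interface can be written as follows (the helper function is hypothetical, for illustration only):

```python
def phi_flat_surface(z, z_surface=0.0):
    """Signed-distance level set for a flat air-water interface at z = z_surface.

    Follows the sign convention of Eqs. (2)-(3): phi < 0 in the air
    subdomain (above the surface), phi > 0 in the water subdomain (below).
    """
    return z_surface - z

# Points above the surface lie in air, points below lie in water.
assert phi_flat_surface(0.5) < 0.0   # air
assert phi_flat_surface(-0.5) > 0.0  # water
assert phi_flat_surface(0.0) == 0.0  # on the interface Gamma_l
```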
For a given point in Ω, the fluid density ρ(ϕ) and viscosity μ(ϕ) are computed as
\rho(\phi) = \rho_a (1 - H(\phi)) + \rho_w H(\phi),  (4)
\mu(\phi) = \mu_a (1 - H(\phi)) + \mu_w H(\phi),  (5)
where the subscripts a and w denote the quantities for air and water, respectively, and H(ϕ) is the Heaviside function. In practice, a regularized Heaviside function is used:
H_\epsilon(\phi) =
\begin{cases}
0 & \phi < -\epsilon, \\
\dfrac{1}{2}\left( 1 + \dfrac{\phi}{\epsilon} + \dfrac{1}{\pi}\sin\dfrac{\pi\phi}{\epsilon} \right) & |\phi| \le \epsilon, \\
1 & \phi > \epsilon,
\end{cases}
where ϵ is the free-surface thickness, which scales with the element length around the free surface. Using the regularized smoothed Heaviside function requires the level set field to satisfy the signed distance property. However, the level set field may lose this property as it is convected by the fluid velocity. To recover it, a redistancing approach is included, which is based on the Eikonal equation with a constraint on the air-water interface. The Eikonal equation reads
\| \nabla \phi_d \| = 1 \quad \text{in } \Omega_a,  (28)
\| \nabla \phi_d \| = 1 \quad \text{in } \Omega_w,  (29)
\phi_d = 0 \quad \text{on } \Gamma_l,  (30)
where ϕ_d is the redistanced level set field. In the present work, a pseudo-time \tilde{t}, which scales with the element length around the free surface, is introduced to make the equation time dependent. The strong form of the redistancing process can then be stated as: given ϕ^h, find ϕ_d^h that satisfies
\frac{\partial \phi_d}{\partial \tilde{t}} + \mathrm{sign}(\phi_d)\left( \| \nabla \phi_d \| - 1 \right) = 0 \quad \text{in } \Omega,  (31)
\phi_d(x, \tilde{t} = 0) = \phi(x, t) \quad \text{in } \Omega.  (32)
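The regularized Heaviside function and the property interpolation of Eqs. (4)-(5) can be sketched as follows; the air and water densities are illustrative values, not taken from the chapter:

```python
import math

def heaviside_reg(phi, eps):
    """Regularized Heaviside H_eps(phi); eps is the free-surface thickness."""
    if phi < -eps:
        return 0.0
    if phi > eps:
        return 1.0
    return 0.5 * (1.0 + phi / eps + math.sin(math.pi * phi / eps) / math.pi)

def density(phi, eps, rho_a=1.2, rho_w=1000.0):
    """Smoothed density of Eq. (4); rho_a and rho_w are illustrative values."""
    H = heaviside_reg(phi, eps)
    return rho_a * (1.0 - H) + rho_w * H

eps = 0.01
assert heaviside_reg(-0.02, eps) == 0.0            # pure air
assert heaviside_reg(0.02, eps) == 1.0             # pure water
assert abs(heaviside_reg(0.0, eps) - 0.5) < 1e-12  # midpoint of the smeared interface
assert abs(density(0.02, eps) - 1000.0) < 1e-9     # water density recovered
```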
The variational multiscale (VMS) method is employed to solve the earlier equation. The weak formulation of the redistancing problem is stated as follows: given ϕ^h, find ϕ_d^h such that, for all test functions η_d^h ∈ W_s^h,
\int_{\Omega} \eta_d^h \left( \frac{\partial \phi_d^h}{\partial \tilde{t}} + S_\epsilon(\phi_d^h)\left( \|\nabla \phi_d^h\| - 1 \right) \right) d\Omega
+ \sum_e \int_{\Omega^e \cap \Omega} \tau_{d\phi}\, S_\epsilon(\phi_d^h)\, \frac{\nabla \phi_d^h}{\|\nabla \phi_d^h\|} \cdot \nabla \eta_d^h \, \frac{\partial \phi_d^h}{\partial \tilde{t}} \, d\Omega
+ \sum_e \int_{\Omega^e \cap \Omega} \tau_{d\phi}\, S_\epsilon(\phi_d^h)\, \frac{\nabla \phi_d^h}{\|\nabla \phi_d^h\|} \cdot \nabla \eta_d^h \, S_\epsilon(\phi_d^h)\left( \|\nabla \phi_d^h\| - 1 \right) d\Omega
+ \sum_e \int_{\Omega^e \cap \Omega} \eta_d^h \, \lambda_{pen} \frac{\partial H_\epsilon}{\partial \phi_d^h} \left( \phi_d^h - \phi^h \right) d\Omega = 0,  (33)
where S_\epsilon(\phi_d^h) = 2 H_\epsilon(\phi_d^h) - 1 is the regularized sign function, S_\epsilon(\phi_d^h)\, \nabla\phi_d^h / \|\nabla\phi_d^h\| is an equivalent "convective velocity," τ_{dϕ} is a streamline upwind Petrov-Galerkin (SUPG) stabilization parameter, and λ_pen is a penalty parameter that enforces the air-water interface position given by ϕ^h, the level set field obtained from the coupled Navier-Stokes and level set convection equations. With the help of ∂H_ϵ/∂ϕ_d^h, the penalty term is independent of the mesh size [35] and is active only around the free surface. The level set method is not mass conservative by nature. To restore the global mass balance, we shift the level set field by a global constant \bar{\phi}_0, whose value is obtained by enforcing the mass conservation equation, which reads
\int_{\Omega} \frac{\partial H_\epsilon(\phi_d^h + \bar{\phi}_0)}{\partial t} \, d\Omega = - \int_{\partial\Omega} H_\epsilon(\phi_d^h + \bar{\phi}_0) \, u^h \cdot n \, d\partial\Omega.  (34)
This equation follows from the global mass conservation law,
\int_{\Omega} \frac{\partial \rho(\phi^h)}{\partial t} \, d\Omega = - \int_{\partial\Omega} \rho(\phi^h) \, u^h \cdot n \, d\partial\Omega.  (35)
The mass-balancing procedure is performed after the redistancing process. This scheme is very efficient because only a scalar equation needs to be solved. Since the level set field is shifted by a global constant, it does not change the signed distance property obtained in the redistancing stage.
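A discrete 1D analogue of this mass-balancing step can be sketched as follows. The chapter states only that a scalar equation is solved; the bisection used here is an assumption, justified because the smeared water volume is monotone in the shift:

```python
import math

def heaviside_reg(phi, eps):
    """Regularized Heaviside function, as in Section 2.1."""
    if phi < -eps:
        return 0.0
    if phi > eps:
        return 1.0
    return 0.5 * (1.0 + phi / eps + math.sin(math.pi * phi / eps) / math.pi)

def mass_shift(phi_cells, cell_vol, eps, target_volume, tol=1e-10):
    """Find the global shift phi0 so that the smeared water volume
    sum_i H_eps(phi_i + phi0) * V_i matches target_volume (cf. Eq. (34))."""
    def volume(shift):
        return sum(heaviside_reg(p + shift, eps) for p in phi_cells) * cell_vol
    lo, hi = -1.0, 1.0  # bracket wide enough for this toy problem
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if volume(mid) < target_volume:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# 1D water column on [0, 1] with 100 cells; interface initially at x = 0.5.
phi = [0.5 - (i + 0.5) / 100 for i in range(100)]  # positive (water) for x < 0.5
shift = mass_shift(phi, cell_vol=0.01, eps=0.02, target_volume=0.6)
vol = sum(heaviside_reg(p + shift, 0.02) for p in phi) * 0.01
assert abs(vol - 0.6) < 1e-6   # recovered water volume matches the target
assert abs(shift - 0.1) < 1e-3 # interface shifts from x = 0.5 to x = 0.6
```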
3.3 Weak enforcement of Dirichlet boundary conditions
Strongly imposing Dirichlet boundary conditions on the fluid-structure interface is not feasible in an immersed approach. In the present work, the Dirichlet boundary condition is enforced weakly by using a Nitsche-based method [62, 63]. To this end, the terms in Eq. (36) are added to the left-hand side of Eq. (15) if a no-slip boundary condition is applied, and the terms in Eq. (37) are added if a no-penetration boundary condition is applied:
- \int_{\Gamma_D} w^h \cdot \left( \sigma(u^h, p^h)\, n \right) d\Gamma
- \int_{\Gamma_D} \left( 2\mu(\phi)\,\varepsilon(w^h)\, n + q^h n \right) \cdot \left( u^h - u_g \right) d\Gamma
+ \int_{\Gamma_D} \tau_B \, w^h \cdot \left( u^h - u_g \right) d\Gamma,  (36)

- \int_{\Gamma_D} w^h \cdot n \left( \sigma(u^h, p^h) : (n \otimes n) \right) d\Gamma
- \int_{\Gamma_D} \left( 2\mu(\phi)\,\varepsilon(w^h)\, n + q^h n \right) \cdot n \left( u^h \cdot n - u_g \cdot n \right) d\Gamma
+ \int_{\Gamma_D} \tau_B \, w^h \cdot n \left( u^h \cdot n - u_g \cdot n \right) d\Gamma.  (37)
The earlier formulation can be derived by an augmented Lagrangian approach; a detailed interpretation of the terms can be found in Ref. [63]. The parameter τ_B is a penalty-like stabilization parameter that helps satisfy the Dirichlet boundary condition and improves the stability of the formulation. τ_B needs to be carefully defined. If τ_B is too large, the penalty term dominates the formulation, overshadowing the variational
consistency and resulting in an ill-conditioned stiffness matrix. If τ_B is too small, the solution is not stable. Further discussion of the appropriate choice of τ_B for immersed methods can be found in Ref. [33]. In this work, τ_B = Cμ(ϕ)/μ_a is used, where C is a constant. Because the structure boundary may intersect the free surface, τ_B is scaled with μ(ϕ) in order to provide a larger penalty in the water phase. In the present work, C is set to 10³, which is calibrated by numerical experiments to achieve a good balance between accuracy and stability.
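The role of the Nitsche terms can be illustrated on a much simpler model problem: a 1D Poisson equation, -u'' = 0 on [0, 1], with the Dirichlet condition u(1) = g enforced weakly. This sketch is an analogy rather than the chapter's formulation; because the exact linear solution lies in the finite element space, the consistent Nitsche terms reproduce it exactly for any sufficiently large penalty:

```python
def solve_nitsche_1d(n=4, g=2.0, tau=None):
    """Solve -u'' = 0 on [0,1] with u(0)=0 (strong) and u(1)=g enforced weakly
    by the symmetric Nitsche method, using n linear elements. Returns nodal values."""
    h = 1.0 / n
    if tau is None:
        tau = 4.0 / h  # penalty scaling ~ C/h, as in immersed methods
    m = n  # unknowns u_1..u_n (u_0 = 0 eliminated)
    K = [[0.0] * m for _ in range(m)]
    b = [0.0] * m
    # standard stiffness matrix: tridiag(-1/h, 2/h, -1/h), last diagonal 1/h
    for i in range(m):
        K[i][i] = 2.0 / h if i < m - 1 else 1.0 / h
        if i + 1 < m:
            K[i][i + 1] = K[i + 1][i] = -1.0 / h
    # Nitsche terms at x = 1: basis values and derivatives there
    val = [0.0] * m; val[m - 1] = 1.0
    der = [0.0] * m; der[m - 1] = 1.0 / h; der[m - 2] = -1.0 / h
    for i in range(m):
        for j in range(m):
            # consistency, adjoint consistency, and penalty contributions
            K[i][j] += -der[j] * val[i] - der[i] * val[j] + tau * val[i] * val[j]
        b[i] += -g * der[i] + tau * g * val[i]
    # tiny dense Gaussian elimination with back substitution
    for k in range(m):
        for j in range(k + 1, m):
            f = K[j][k] / K[k][k]
            for c in range(k, m):
                K[j][c] -= f * K[k][c]
            b[j] -= f * b[k]
    u = [0.0] * m
    for k in range(m - 1, -1, -1):
        u[k] = (b[k] - sum(K[k][c] * u[c] for c in range(k + 1, m))) / K[k][k]
    return [0.0] + u

u = solve_nitsche_1d()
# The exact solution u(x) = g*x lies in the FE space, so the weak Dirichlet
# value u(1) = g is hit exactly despite never being imposed strongly.
assert all(abs(u[i] - 2.0 * i / 4) < 1e-9 for i in range(5))
```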
4. Tetrahedral finite cell method The main challenge of immersed methods is the geometrically accurate evaluation of volume and surface integrals in the variational formulation posed on intersected elements. The immersogeometric method in the present work largely draws on the tetrahedral FCM, which uses a volume quadrature method based on the recursive subdivision of intersected elements and a surface quadrature method based on an independent surface representation. In this section, we briefly present the key techniques of FCM and an octree-based point location query that can quickly determine whether a given point is located inside or outside of the fluid domain.
4.1 Recursive refinement of quadrature points
Fig. 1 (extracted from [34]) shows the basic concept of FCM for a two-dimensional (2D) case. In FCM, the original computational domain Ω_phy is extended by a fictitious domain Ω_fict to an embedding domain Ω that can be easily meshed irrespective of the boundary Γ. This introduces a number of elements that are arbitrarily cut by the immersed structure boundary. Complex integration domains are therefore created, and the accuracy of the volume integration cannot be guaranteed if standard Gauss quadrature schemes are used. Following the idea of Ref. [64], we use a volume quadrature based on recursive subdivision of cut elements to faithfully capture the immersed boundary Γ. For elements with all nodes inside the computational domain Ω_phy, the standard four-point quadrature rule for linear tetrahedral elements is used. For elements with all nodes outside Ω_phy, no quadrature points are generated. A cut element is split into four subtetrahedral elements, and each subtetrahedron intersected by Γ is subdivided further, recursively, until a preset level is reached. For clarity, we use a
Fig. 1 The physical domain is extended by a fictitious domain in FCM [34]; the red curve represents the physical interface.
triangular mesh in Fig. 2 to show the subdivision process of cut elements up to level = 3. A sufficient recursion level is important for an accurate geometric representation of the immersed structure boundary; in the present work, recursion level = 2 is chosen. After the subdivision, the standard four-point Gauss quadrature rule is used for each of the subtetrahedra. Fig. 3A shows the quadrature points generated with level = 1. The quadrature points inside the computational domain Ω_phy (green points) are used in the numerical integration, while the quadrature points outside Ω_phy (magenta points) are discarded. Afterward, if an element has at least one active quadrature point, all of its associated degrees of freedom (DOFs) are included. The remaining DOFs, which are associated only with elements that have no quadrature points, contribute nothing to the system matrix and are therefore discarded. Fig. 3B shows the DOFs, of which only the one marked in green is discarded. Another challenge is performing the surface integration of the weak boundary condition formulation. The quadrature points for the surface integration lie on an independent surface discretization. To perform the surface integration, the coordinates of these surface quadrature points must first be located in the parameter space of the tetrahedral finite elements in which they fall. This requires us to invert the mapping from the finite element
Fig. 2 Recursive subdivision of cut elements of a triangular mesh, from level = 0 to level = 3 (the red circle denotes the immersed structure boundary, blue triangles denote the subdivision of cut elements, and black elements denote the uncut elements).
parameter space to physical space. Basis functions of the background volume elements are then evaluated at these surface quadrature points to interpolate the weak boundary condition terms and assemble the surface integrals into the stiffness matrix. To speed up the location query, an octree is constructed: the tetrahedral elements are represented by tight bounding boxes and sorted into a hierarchical octree. When we query the background element of a quadrature point, most elements are eliminated by the octree search. At the deepest level of the octree, we only need to calculate
Fig. 3 Quadrature points and degrees of freedom (recursive level = 1). In (A), green quadrature points are kept and magenta quadrature points are dropped; in (B), magenta DOFs are active and the green DOF is inactive.
the parametric coordinates for a few candidate elements and judge whether the quadrature point falls in each of them. Note that enough surface quadrature points are necessary to correctly enforce the weak boundary conditions, as studied and demonstrated in Ref. [41]. Remark 1. The separation of the volume and surface discretizations makes it possible to immerse arbitrary types of surface representation into a background fluid mesh, as long as a surface quadrature rule can be proposed. This facilitates research on direct immersogeometric analysis of boundary representations, such as STL [34], NURBS [41], and analytic surfaces [42, 47], which is in line with the ultimate goal of integrating design and analysis through isogeometric analysis [39, 40].
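The recursive subdivision of Section 4.1 can be sketched in a 2D analogue, with a one-point centroid rule per leaf triangle standing in for the four-point tetrahedral rule and a quarter disk standing in for the physical domain; all names are hypothetical:

```python
import math

def inside(p, r=0.5):
    """Inside-outside test: is p within the disk x^2 + y^2 <= r^2?"""
    return p[0] ** 2 + p[1] ** 2 <= r ** 2

def tri_area(a, b, c):
    return 0.5 * abs((b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1]))

def centroid(a, b, c):
    return ((a[0] + b[0] + c[0]) / 3.0, (a[1] + b[1] + c[1]) / 3.0)

def quad_points(a, b, c, level):
    """Recursive subdivision of a (possibly cut) triangle: fully inside
    triangles get a quadrature point immediately; other triangles are split
    into four children via edge midpoints until the preset level is reached,
    where quadrature points falling outside the domain are dropped."""
    if inside(a) and inside(b) and inside(c):       # uncut: standard rule
        return [(centroid(a, b, c), tri_area(a, b, c))]
    if level == 0:                                  # deepest level reached
        cen = centroid(a, b, c)
        return [(cen, tri_area(a, b, c))] if inside(cen) else []
    ab = ((a[0] + b[0]) / 2.0, (a[1] + b[1]) / 2.0)
    bc = ((b[0] + c[0]) / 2.0, (b[1] + c[1]) / 2.0)
    ca = ((c[0] + a[0]) / 2.0, (c[1] + a[1]) / 2.0)
    pts = []
    for t in ((a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)):
        pts += quad_points(t[0], t[1], t[2], level - 1)
    return pts

# Total quadrature weight approximates the quarter-disk area pi*r^2/4 = pi/16.
est = sum(w for _, w in quad_points((0.0, 0.0), (1.0, 0.0), (0.0, 1.0), 6))
assert abs(est - math.pi / 16.0) < 0.02
```

Deeper recursion shrinks the misclassified band along the cut boundary, which is why a sufficient recursion level matters for geometric accuracy.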
4.2 Inside-outside test by ray-tracing method
When generating the quadrature points, an inside-outside test is often needed. In ray-tracing techniques, a common practice is to use a tree structure for improved performance. First, all the surface triangle elements are inserted into a hierarchical octree based on their bounding boxes. Second, ray-octree intersections are performed recursively to avoid unnecessary ray-triangle intersection tests. Third, ray-triangle intersections are computed for the candidate triangles at the deepest level of the octree. Finally, the number of intersections between the ray and the immersed
Fig. 4 Inside-outside test by octree search (a quadtree is shown for simplicity; a white box is a leaf node, a gray box is a tree node, and a blue box is a target node).
surface triangles is counted, denoted by N. If N is odd, the point is inside the surface; if N is even, the point is outside. We assume the immersed surface mesh is closed, so only one ray is needed for the inside-outside test. Fig. 4 illustrates the octree search with triangle elements.
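A 2D analogue of this even-odd counting, with a polygon standing in for the closed surface mesh and edge crossings standing in for ray-triangle intersections, can be sketched as follows (the octree acceleration is omitted for brevity):

```python
def point_in_polygon(p, poly):
    """Even-odd ray casting: cast a ray in the +x direction and count edge
    crossings N. Odd N means inside, even N means outside (closed polygon)."""
    x, y = p
    n = 0
    for i in range(len(poly)):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % len(poly)]
        if (y1 > y) != (y2 > y):  # edge straddles the horizontal ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:
                n += 1
    return n % 2 == 1

square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
assert point_in_polygon((0.5, 0.5), square)       # N = 1 (odd): inside
assert not point_in_polygon((1.5, 0.5), square)   # N = 0 (even): outside
assert not point_in_polygon((-0.5, 0.5), square)  # N = 2 (even): outside
```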
5. Time integration
5.1 Generalized-α method
The generalized-α method [65, 66] is used for time integration. In the generalized-α method, the residuals of the free-surface flow equations are evaluated with intermediate-level fluid velocity and level set solutions at each time step, namely,
\dot{u}_{n+\alpha_m} = \alpha_m \dot{u}_{n+1} + (1 - \alpha_m) \dot{u}_n,  (38)
u_{n+\alpha_f} = \alpha_f u_{n+1} + (1 - \alpha_f) u_n,  (39)
\dot{\phi}_{n+\alpha_m} = \alpha_m \dot{\phi}_{n+1} + (1 - \alpha_m) \dot{\phi}_n,  (40)
\phi_{n+\alpha_f} = \alpha_f \phi_{n+1} + (1 - \alpha_f) \phi_n,  (41)
where the quantities with subscript n + 1 are the unknown solutions at time step n + 1 and the quantities with subscript n are the known solutions at the previous time step n. In addition, the relationship between the nodal degrees of freedom and their time derivatives is given by the following Newmark-β scheme:
u_{n+1} = u_n + \Delta t \left( (1 - \gamma) \dot{u}_n + \gamma \dot{u}_{n+1} \right),  (42)
\phi_{n+1} = \phi_n + \Delta t \left( (1 - \gamma) \dot{\phi}_n + \gamma \dot{\phi}_{n+1} \right).  (43)
In Eqs. (38)–(43), α_m, α_f, and γ are parameters of the generalized-α and Newmark-β methods, chosen based on the requirements of unconditional stability and second-order accuracy. With the previous definitions, applying the time integration to the coupled free-surface flow formulation leads to the following nonlinear equations at each time step:
N_M(\dot{u}_{n+\alpha_m}, p_{n+1}, \dot{\phi}_{n+\alpha_m}) = 0,
N_C(\dot{u}_{n+\alpha_m}, p_{n+1}, \dot{\phi}_{n+\alpha_m}) = 0,  (44)
N_L(\dot{u}_{n+\alpha_m}, p_{n+1}, \dot{\phi}_{n+\alpha_m}) = 0,
where N_M, N_C, and N_L are the vectors of nodal residuals of the fluid momentum, fluid continuity, and level set convection equations, respectively. To solve these equations, Newton's method is adopted, which results in the following two-stage predictor-multicorrector algorithm.
5.1.1 Predictor stage
\dot{u}_{n+1}^0 = \frac{\gamma - 1}{\gamma} \dot{u}_n,  (45)
u_{n+1}^0 = u_n,  (46)
p_{n+1}^0 = p_n,  (47)
\dot{\phi}_{n+1}^0 = \frac{\gamma - 1}{\gamma} \dot{\phi}_n,  (48)
\phi_{n+1}^0 = \phi_n,  (49)
where the quantities with superscript "0" are initial guesses, and "0" denotes the initial value of the Newton-iteration counter.
5.1.2 Multicorrector stage
Repeat the following procedure until convergence.
1. Evaluate the intermediate levels
\dot{u}_{n+\alpha_m}^l = \alpha_m \dot{u}_{n+1}^l + (1 - \alpha_m) \dot{u}_n,  (50)
u_{n+\alpha_f}^l = \alpha_f u_{n+1}^l + (1 - \alpha_f) u_n,  (51)
\dot{\phi}_{n+\alpha_m}^l = \alpha_m \dot{\phi}_{n+1}^l + (1 - \alpha_m) \dot{\phi}_n,  (52)
\phi_{n+\alpha_f}^l = \alpha_f \phi_{n+1}^l + (1 - \alpha_f) \phi_n,  (53)
where l is the Newton-iteration counter.
2. Use the intermediate-level solution to evaluate the right-hand-side residuals and the corresponding Jacobian matrix:
\left. \frac{\partial R_M}{\partial \dot{u}_{n+1}} \right|_l \Delta\dot{u}_{n+1}^l + \left. \frac{\partial R_M}{\partial p_{n+1}} \right|_l \Delta p_{n+1}^l + \left. \frac{\partial R_M}{\partial \dot{\phi}_{n+1}} \right|_l \Delta\dot{\phi}_{n+1}^l = - R_M^l,
\left. \frac{\partial R_C}{\partial \dot{u}_{n+1}} \right|_l \Delta\dot{u}_{n+1}^l + \left. \frac{\partial R_C}{\partial p_{n+1}} \right|_l \Delta p_{n+1}^l + \left. \frac{\partial R_C}{\partial \dot{\phi}_{n+1}} \right|_l \Delta\dot{\phi}_{n+1}^l = - R_C^l,  (54)
\left. \frac{\partial R_L}{\partial \dot{u}_{n+1}} \right|_l \Delta\dot{u}_{n+1}^l + \left. \frac{\partial R_L}{\partial p_{n+1}} \right|_l \Delta p_{n+1}^l + \left. \frac{\partial R_L}{\partial \dot{\phi}_{n+1}} \right|_l \Delta\dot{\phi}_{n+1}^l = - R_L^l.
These linear equations are solved to obtain the increments of the velocity, pressure, and level set unknowns.
3. Correct the solutions as follows:
\dot{u}_{n+1}^{l+1} = \dot{u}_{n+1}^l + \Delta\dot{u}_{n+1}^l,  (55)
u_{n+1}^{l+1} = u_{n+1}^l + \gamma \Delta t \, \Delta\dot{u}_{n+1}^l,  (56)
p_{n+1}^{l+1} = p_{n+1}^l + \Delta p_{n+1}^l,  (57)
\dot{\phi}_{n+1}^{l+1} = \dot{\phi}_{n+1}^l + \Delta\dot{\phi}_{n+1}^l,  (58)
\phi_{n+1}^{l+1} = \phi_{n+1}^l + \gamma \Delta t \, \Delta\dot{\phi}_{n+1}^l.  (59)
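The predictor-multicorrector loop of Section 5.1 can be sketched on a scalar model ODE, du/dt = λu, where the "Jacobian solve" reduces to a scalar division. The α_m, α_f, γ formulas below follow the standard first-order generalized-α parameterization in terms of ρ_∞, which is an assumption, since the chapter does not state how its parameters are chosen beyond stability and accuracy:

```python
import math

def gen_alpha_scalar(lam=-1.0, u0=1.0, T=1.0, dt=1e-3, rho_inf=0.5, tol=1e-12):
    """Generalized-alpha predictor-multicorrector time stepping for du/dt = lam*u."""
    a_m = 0.5 * (3.0 - rho_inf) / (1.0 + rho_inf)
    a_f = 1.0 / (1.0 + rho_inf)
    gamma = 0.5 + a_m - a_f          # second-order accuracy condition
    u, ud = u0, lam * u0             # consistent initial rate
    for _ in range(round(T / dt)):
        ud_new = (gamma - 1.0) / gamma * ud   # predictor, cf. Eq. (45)
        for _ in range(20):                    # multicorrector (Newton)
            u_new = u + dt * ((1 - gamma) * ud + gamma * ud_new)  # Newmark, Eq. (42)
            # residual evaluated at the intermediate levels, cf. Eqs. (38)-(39)
            R = a_m * ud_new + (1 - a_m) * ud - lam * (a_f * u_new + (1 - a_f) * u)
            if abs(R) < tol:
                break
            J = a_m - lam * a_f * gamma * dt   # scalar "Jacobian"
            ud_new -= R / J
        u = u + dt * ((1 - gamma) * ud + gamma * ud_new)
        ud = ud_new
    return u

u_end = gen_alpha_scalar()
assert abs(u_end - math.exp(-1.0)) < 1e-4  # second-order accurate in dt
```

For this linear model one Newton correction solves the residual exactly; the nonlinear free-surface equations require several corrections per step.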
5.2 Fully coupled linear solver
The multicorrector stage requires the solution of a large linear system, given by Eq. (54), which couples the different components of the free-surface flow formulation. To increase the robustness of the formulation, Eq. (54) is solved by a direct coupling approach [67] based on a GMRES solver [68], in which the Jacobian matrix is fully constructed with all terms in RBVMS represented, namely,
J = \begin{bmatrix}
\dfrac{\partial R_M}{\partial \dot{u}_{n+1}} & \dfrac{\partial R_M}{\partial p_{n+1}} & \dfrac{\partial R_M}{\partial \dot{\phi}_{n+1}} \\
\dfrac{\partial R_C}{\partial \dot{u}_{n+1}} & \dfrac{\partial R_C}{\partial p_{n+1}} & \dfrac{\partial R_C}{\partial \dot{\phi}_{n+1}} \\
\dfrac{\partial R_L}{\partial \dot{u}_{n+1}} & \dfrac{\partial R_L}{\partial p_{n+1}} & \dfrac{\partial R_L}{\partial \dot{\phi}_{n+1}}
\end{bmatrix}.  (60)
The condition number of this matrix is typically very large due to the complexity of free-surface flow problems. To improve the efficiency and
robustness, the following preconditioning matrix is used, which represents the inverse of the decoupled Jacobian matrices of the individual Navier-Stokes and level set problems, namely,
M = \begin{bmatrix}
\dfrac{\partial R_M}{\partial \dot{u}_{n+1}} & \dfrac{\partial R_M}{\partial p_{n+1}} & 0 \\
\dfrac{\partial R_C}{\partial \dot{u}_{n+1}} & \dfrac{\partial R_C}{\partial p_{n+1}} & 0 \\
0 & 0 & \dfrac{\partial R_L}{\partial \dot{\phi}_{n+1}}
\end{bmatrix}^{-1}.  (61)
The preconditioning problems are solved by a diagonally preconditioned GMRES solver. Remark 2. While the level set convection is solved within the Newton iterations, the redistancing and mass balancing of the level set field are performed after the predictor-multicorrector stage at each time step. This choice is made to reduce the computational cost.
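The effect of block preconditioning in the spirit of Eqs. (60)-(61) can be sketched on a toy 4×4 system, with a 2×2 "Navier-Stokes" block, a 2×2 "level set" block, and weak off-diagonal coupling. A preconditioned Richardson iteration stands in for GMRES here purely for brevity; the numbers are illustrative:

```python
def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def solve2(a, b, c, d, r1, r2):
    """Direct 2x2 solve (Cramer's rule) for one diagonal block."""
    det = a * d - b * c
    return [(d * r1 - b * r2) / det, (-c * r1 + a * r2) / det]

# Coupled system mimicking Eq. (60): two diagonal blocks plus weak coupling.
A = [[4.0, 1.0, 0.2, 0.0],
     [1.0, 3.0, 0.0, 0.1],
     [0.1, 0.0, 5.0, 1.0],
     [0.0, 0.2, 1.0, 4.0]]
b = [1.0, 2.0, 3.0, 4.0]

# Preconditioned Richardson: apply the inverses of the two decoupled blocks,
# as in Eq. (61); the coupling enters only through the residual.
x = [0.0] * 4
for _ in range(50):
    r = [bi - ri for bi, ri in zip(b, matvec(A, x))]
    dx_ns = solve2(4.0, 1.0, 1.0, 3.0, r[0], r[1])  # "Navier-Stokes" block
    dx_ls = solve2(5.0, 1.0, 1.0, 4.0, r[2], r[3])  # "level set" block
    x = [xi + di for xi, di in zip(x, dx_ns + dx_ls)]

r = [bi - ri for bi, ri in zip(b, matvec(A, x))]
assert max(abs(ri) for ri in r) < 1e-10  # weak coupling: fast convergence
```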
6. Numerical examples
In this section, the proposed formulation is applied to three marine engineering problems. The first problem is a solitary wave impacting a stationary platform, a well-known benchmark widely used for validating free-surface flow simulations; a refinement study is performed on this problem. The second problem is a three-dimensional (3D) dam break with an obstacle, which involves violent free-surface evolution and for which rich experimental pressure data are available. The third problem is the free-surface simulation of the planing of a scaled DTMB 5415 ship model. All the simulations make use of linear tetrahedral elements. The simulated results are compared against experimental results and computational results from other researchers in the literature to validate the accuracy of the proposed formulation.
6.1 Solitary wave impacting a stationary platform
The computational setup of this problem is defined as follows. As shown in Fig. 5, the computational domain is a rectangular box with dimensions 10 m × 1 m × 1 m. The platform, with dimensions 1.524 m × 0.4 m × 0.3 m, is located inside the computational domain, with its front face 5 m away from the inlet. A second-order solitary wave profile based on potential flow theory is initialized in the computational domain. The initial level set field and velocity components are defined as
Fig. 5 Computational domain of the solitary wave past a box obstacle (the magenta box denotes the obstacle; dimensions and pressure point locations are shown in the subfigure).
\phi = d \left( E\, \mathrm{sech}^2(q) - \tfrac{3}{4} E^2 \mathrm{sech}^2(q) \tanh^2(q) \right) - z,  (62)
u = \sqrt{gd} \left[ E\, \mathrm{sech}^2(q) + E^2 \mathrm{sech}^2(q) \left( \tfrac{1}{4} - \mathrm{sech}^2(q) + \tfrac{3}{4} \left( \tfrac{s}{d} \right)^2 \left( 2 - 3\, \mathrm{sech}^2(q) \right) \right) \right],  (63)
v = 0,  (64)
w = \sqrt{gd}\, E \sqrt{3E}\, \tfrac{s}{d}\, \mathrm{sech}^2(q) \tanh(q) \left[ 1 - E \left( \tfrac{3}{8} + 2\, \mathrm{sech}^2(q) + \tfrac{1}{2} \left( \tfrac{s}{d} \right)^2 \left( 1 - 3\, \mathrm{sech}^2(q) \right) \right) \right],  (65)
where (u, v, w) are the velocity components in the stream-wise, span-wise, and vertical directions, g is the magnitude of the gravitational acceleration, d is the still water depth, H is the wave height, E = H/d is the ratio between the wave height and the still water depth, c = \sqrt{gd} \left( 1 + \tfrac{1}{2} E - \tfrac{3}{20} E^2 \right) is the wave speed, q = \tfrac{\sqrt{3E}}{2d} \left( 1 - \tfrac{5}{8} E \right) (x - ct), s = z + d, and z is the distance from the still water surface. The air-water interface (far from the peak) in the hydrostatic configuration is located at z = 0. The parameters in this simulation are chosen as follows: d = 0.234696 m, E = 0.42, and zero clearance (the distance between the bottom surface of the platform and the still water level). The distance between the wave peak and the front surface of the platform is 2 m.
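The second-order solitary-wave celerity c = √(gd)(1 + E/2 − 3E²/20) and the phase variable q can be evaluated for this setup as follows; g = 9.81 m/s² is an assumed value, since the chapter does not state it:

```python
import math

g = 9.81       # gravitational acceleration (m/s^2), assumed
d = 0.234696   # still water depth (m)
E = 0.42       # wave height / still water depth ratio

# Second-order wave celerity.
c = math.sqrt(g * d) * (1.0 + E / 2.0 - 3.0 * E ** 2 / 20.0)

def q(x, t):
    """Phase variable: the wave peak corresponds to q = 0."""
    return math.sqrt(3.0 * E) / (2.0 * d) * (1.0 - 5.0 * E / 8.0) * (x - c * t)

assert abs(c - 1.7959) < 1e-3  # the wave travels at roughly 1.8 m/s
assert q(c * 2.0, 2.0) == 0.0  # the peak (q = 0) moves at speed c
```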
The boundary conditions are defined as follows. A strong no-penetration boundary condition is used for the inlet, outlet, side, and bottom boundaries of the computational domain. A traction-free boundary condition is used for the top surface. Finally, a weakly enforced no-slip boundary condition, based on Eq. (36), is used for the fluid-platform interface. The simulations are performed using two meshes. To better capture the free-surface evolution and the hydrodynamic load, the region around the air-water interface and the platform is refined. The total numbers of nodes and elements and the element lengths of the two meshes are summarized in Tables 1 and 2. Fig. 6 shows the coarse mesh and the fine mesh on the central plane (y = 0). Fig. 7 shows the instantaneous free-surface shape colored by velocity magnitude. Both the velocity magnitude and the free-surface shape look very natural, and the simulation is able to capture the flow separation near the sharp edges of the platform. Fig. 8 shows the normalized pressure at the two points whose locations are given in Fig. 5. To validate the proposed formulation, the experimental measurements obtained by French [69] are also plotted. Both meshes generate quite accurate results; however, the coarse mesh overpredicts the pressure. One possible reason is that the thickness of the air-water interface scales with the local mesh size in the formulation. As a result, when integrating along the depth direction, the coarse mesh, with its thicker interface, naturally gives a higher hydrostatic pressure.
6.2 Dam break with obstacle
The dam break case investigates how a column of water, initially at rest, collapses under gravity and impacts a stationary obstacle. The computational domain is a rectangular box with dimensions 3.22 m × 1.0 m × 1.0 m.

Table 1 Element lengths employed in the solitary wave case.
             Near platform   Near outer boundary
Coarse mesh  0.028 m         0.15 m
Fine mesh    0.014 m         0.15 m

Table 2 Number of elements and nodes in the solitary wave case.
             Number of elements   Number of nodes
Coarse mesh  1,839,451            302,061
Fine mesh    8,614,067            1,400,329
Fig. 6 Meshes of the solitary wave case.
Fig. 7 A snapshot of free surface colored by velocity magnitude of the solitary wave case.
The water column, with dimensions 1.22 m × 1.0 m × 0.5 m, is initially located on the left of the domain, at a distance of 2.3955 m from the center of a stationary obstacle with dimensions 0.403 m × 0.161 m × 0.161 m. The computational setup of this problem is shown in Fig. 9. The region around the obstacle is refined to capture the pressure. The element lengths of the mesh and the number of nodes and elements are given in Tables 3 and 4. Fig. 10 shows the mesh on the central plane. The initial and boundary conditions are set as follows. For initialization, zero velocity is used, and the level set function is defined based on the signed distance with respect to the initial free surface of the water column. The no-penetration boundary condition is set strongly for all the boundaries of the computational domain, while the no-penetration boundary condition is applied weakly, based on Eq. (37), for the fluid-obstacle interface. Δt = 0.0005 s is used for this case. The simulation is performed until t = 6.0 s.
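To illustrate this signed-distance initialization of the level set field (a sketch only, not the authors' code; the corner-at-origin placement of the water column and the negative-in-water sign convention are assumptions):

```python
import math

def sdf_box(p, center, half):
    """Signed distance from point p to an axis-aligned box.

    Negative inside the box (here: water), positive outside (air).
    The sign convention is an assumption; level set codes differ.
    """
    q = [abs(p[i] - center[i]) - half[i] for i in range(3)]
    outside = math.sqrt(sum(max(qi, 0.0) ** 2 for qi in q))
    inside = min(max(q), 0.0)  # distance to the nearest face when inside
    return outside + inside

# Water column of 1.22 m x 1.0 m x 0.5 m, assumed to sit in the left
# corner of the 3.22 m x 1.0 m x 1.0 m domain (placement assumed).
center = (0.61, 0.5, 0.25)
half = (0.61, 0.5, 0.25)

phi_wet = sdf_box((0.61, 0.5, 0.25), center, half)  # inside the column
phi_dry = sdf_box((2.00, 0.5, 0.25), center, half)  # dry region to the right
```

At each mesh node, φ < 0 would mark water and φ > 0 air; the free surface is the φ = 0 isosurface.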
Fig. 8 Time history of the normalized pressure (P1/γd, left, and P2/γd, right) at the two points versus normalized time t√(g/d), comparing experiment, coarse mesh, and fine mesh. Here γ = ρg.
Fig. 9 Problem setup of the dam break with obstacle (blue box denotes the initial water location, magenta box denotes the obstacle; dimensions and pressure point locations are shown in the subfigure).
Table 3 Element lengths employed in the mesh of the dam break case.
                 Near obstacle box   Near outer boundary
Element length   0.0045 m            0.03 m
Table 4 Number of elements and nodes of the mesh in the dam break case.
Number of elements   1,461,086
Number of nodes      241,252
Fig. 10 Mesh of the dam break case on the central plane (magenta box denotes the obstacle).
Qiming Zhu and Jinhui Yan
Fig. 11 shows the free surface at t = 0.50 s, t = 1.25 s, t = 1.75 s, and t = 4.75 s. When the water hits the obstacle, the free-surface evolution is more violent than in the previous solitary wave case. After impacting the outlet of the tank, the water runs up the back wall quickly and touches the top of the tank. At the later stages of the simulation, wave breaking occurs. All these free-surface features are also observed in the experiments reported in Ref. [70]. We report the time history of the pressure at four points on the obstacle in Fig. 12; the locations of the four points can be found in Fig. 9. Experimental data from the Maritime Research Institute Netherlands (MARIN) [70] and computational results based on a boundary-fitted approach [37] are also plotted to validate the simulated results in the present work. For Points 1 and 2, excellent agreement is achieved. Although the computational results of both the boundary-fitted approach and the present immersed approach deviate from the experimental measurements for Points 3 and 4, the comparison shows that the immersogeometric approach produces at least the same level of accuracy as the boundary-fitted approach for this problem.
6.3 Planing of a DTMB 5415 ship model
In this section, the planing of the David Taylor Model Basin (DTMB) 5415 ship model is simulated using the proposed formulation. Fig. 13 shows the CAD model of the DTMB 5415 bare ship; for the geometry details, the reader is referred to Ref. [71]. The length L of the model is 5.72 m, the draft T is 0.248 m, and the Froude number Fr is 0.28. Fig. 14 shows the computational setup. The computational domain is a box with dimensions 3L × L × L. Fig. 15 shows the mesh on the central plane; the mesh is refined around the air-water interface and the ship. The element lengths employed in the mesh and the number of elements and nodes are summarized, respectively, in Tables 5 and 6. The boundary conditions are defined as follows. A uniform water speed and zero air speed are applied strongly at the inflow, a hydrostatic pressure condition is used at the outflow, free-slip and no-penetration conditions are applied strongly on the side boundaries, and a no-slip boundary condition is applied weakly on the fluid-ship interface. The time step Δt is set to 0.003 s for this case. The simulation is performed until no noticeable free-surface change is observed (quasistatic stage).
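For orientation, the inflow speed implied by these values can be back-calculated from the standard length-based Froude number, Fr = U/√(gL); the number below is our illustration, not a value quoted in the chapter:

```python
import math

g = 9.81   # gravitational acceleration, m/s^2
L = 5.72   # model length, m
Fr = 0.28  # Froude number given in the text

# Length-based Froude number for ship hydrodynamics: Fr = U / sqrt(g * L)
U = Fr * math.sqrt(g * L)  # uniform inflow (towing) speed, m/s
```

With these inputs, U works out to roughly 2.1 m/s.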
Fig. 11 Free surface of the dam break case (t = 0.50 s, 1.25 s, 1.75 s, and 4.75 s, from top to bottom).
Fig. 12 Time history of the pressure (Pa) at the four points over t = 0–6 s (locations shown in Fig. 9), comparing experiment, the present approach, and a boundary-fitted approach.
Fig. 13 Geometry of DTMB 5415 ship hull.
Fig. 14 Computational setup of the DTMB 5415 ship case (domain of 3 × 5.72 m × 5.72 m × 5.72 m; uniform inflow, hydrostatic outflow, no-penetration side and bottom boundaries, and no-slip ship surface).
Fig. 15 Mesh of the DTMB 5415 ship case on the central plane (mesh is refined near the free surface and the ship structure).
Table 5 Element lengths employed in the mesh of the DTMB 5415 ship case.
            Ship and free surface   Top boundary   Bottom boundary
Mesh size   0.012 m                 0.174 m        0.350 m
Table 6 Number of elements and nodes of the mesh of the DTMB 5415 ship case.
Number of elements   15,272,253
Number of nodes      2,788,077
Fig. 16 Free surface colored by wave height at the quasistatic stage from two different view angles.
Fig. 16 shows the free surface colored by the water elevation from two view angles. The wave height profile is very symmetric with respect to the center line. Fig. 17 shows the wave heights normalized by L along the center line and along the line y/L = 0.172, respectively. The experimental data from Ref. [72] are plotted for comparison. Close agreement is again achieved, which indicates the accuracy of the proposed formulation.
Fig. 17 Wave height z/L versus x/L along the center line and along the line y/L = 0.172, comparing experiment and the present results.
7. Summary and future work
This chapter summarizes an immersogeometric formulation for free-surface flow simulations around complex geometries that integrates the level set method, residual-based variational multiscale modeling, and the finite cell method. The Dirichlet boundary condition on the fluid-structure interface is enforced by a weak formulation. An FCM-based adaptive quadrature is employed to better resolve the immersed structure boundary, and an octree-based ray-tracing method is used to perform the inside-outside test for complex geometry. The proposed framework is applied to challenging marine engineering problems, including a solitary wave impacting a stationary platform, a dam break with an obstacle, and the planing of a DTMB 5415 ship model. The computational results agree well with experimental data and with computational results from boundary-fitted methods. Together with its high accuracy, the flexibility of the method facilitates high-quality analysis of marine structures with complicated geometry in free-surface flows by circumventing the labor-intensive volumetric meshing step. In the future, we plan to include adaptive mesh refinement around the free surface and a cavitation model in this immersogeometric formulation.
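The inside-outside test can be pictured with its classic 2D analogue, even-odd ray casting: shoot a ray from the query point and count how many times it crosses the boundary. The sketch below is only a planar illustration; the chapter's octree-accelerated 3D version plays the same role against the immersed surface mesh:

```python
def point_in_polygon(pt, poly):
    """Even-odd ray casting: cast a ray in +x and count edge crossings.

    `poly` is a list of (x, y) vertices; an odd crossing count means the
    point is inside. A 3D octree version would instead intersect the ray
    with the surface triangles stored in the tree's leaf cells.
    """
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        # Does the horizontal ray from (x, y) cross edge (x1,y1)-(x2,y2)?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:
                inside = not inside
    return inside

square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
```

For the unit square above, a point such as (0.5, 0.5) yields one crossing (inside), while (1.5, 0.5) yields none (outside).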
References [1] A. Prosperetti, G. Tryggvason, Computational Methods for Multiphase Flow, Cambridge University Press, 2009. [2] T.E. Tezduyar, Finite element methods for flow problems with moving boundaries and interfaces, Arch. Comput. Methods Eng. 8 (2001) 83–130. [3] T.J.R. Hughes, W.K. Liu, T.K. Zimmermann, Lagrangian-Eulerian finite element formulation for incompressible viscous flows, Comput. Methods Appl. Mech. Eng. 29 (1981) 329–349. [4] S.O. Unverdi, G. Tryggvason, A front-tracking method for viscous, incompressible, multi-fluid flows, J. Comput. Phys. 100 (1) (1992) 25–37. [5] J.P. Best, The formation of toroidal bubbles upon the collapse of transient cavities, J. Fluid Mech. 251 (1993) 79–107. [6] T.E. Tezduyar, M. Behr, J. Liou, A new strategy for finite element computations involving moving boundaries and interfaces—the deforming-spatial-domain/spacetime procedure: I. The concept and the preliminary numerical tests, Comput. Methods Appl. Mech. Eng. 94 (3) (1992) 339–351. [7] H. Braess, P. Wriggers, Arbitrary Lagrangian Eulerian finite element analysis of free surface flow, Comput. Methods Appl. Mech. Eng. 190 (1–2) (2000) 95–109. [8] Z. Gan, G. Yu, X. He, S. Li, Numerical simulation of thermal behavior and multicomponent mass transfer in direct laser deposition of co-base alloy on steel, Int. J. Heat Mass Transf. 104 (2017) 28–38. [9] M. Sussman, P. Smereka, S. Osher, A level set approach for computing solutions to incompressible two-phase flow, J. Comput. Phys. 114 (1) (1994) 146–159.
[10] S. Osher, J.A. Sethian, Fronts propagating with curvature-dependent speed: algorithms based on Hamilton-Jacobi formulations, J. Comput. Phys. 79 (1) (1988) 12–49. [11] E. Shirani, N. Ashgriz, J. Mostaghimi, Interface pressure calculation based on conservation of momentum for front capturing methods, J. Comput. Phys. 203 (1) (2005) 154–175. [12] C.W. Hirt, B.D. Nichols, Volume of fluid (VOF) method for the dynamics of free boundaries, J. Comput. Phys. 39 (1981) 201–225. [13] D. Jacqmin, Calculation of two-phase Navier-Stokes flows using phase-field modeling, J. Comput. Phys. 155 (1) (1999) 96–127. [14] J. Liu, Thermodynamically Consistent Modeling and Simulation of Multiphase Flows (Ph.D. thesis), The University of Texas at Austin, 2014. [15] P. Yue, J.J. Feng, C. Liu, J. Shen, A diffuse-interface method for simulating two-phase flows of complex fluids, J. Fluid Mech. 515 (2004) 293–317. [16] L. Amaya-Bower, T. Lee, Single bubble rising dynamics for moderate Reynolds number using Lattice Boltzmann method, Comput. Fluids 39 (7) (2010) 1191–1207. [17] S. Nagrath, K.E. Jansen, R.T. Lahey, Computation of incompressible bubble dynamics with a stabilized finite element level set method, Comput. Methods Appl. Mech. Eng. 194 (42) (2005) 4565–4587. [18] M.K. Tripathi, K.C. Sahu, R. Govindarajan, Dynamics of an initially spherical bubble rising in quiescent liquid, Nat. Commun. 6 (2015) 1–9. [19] M. van Sint Annaland, N.G. Deen, J.A.M. Kuipers, Numerical simulation of gas bubbles behaviour using a three-dimensional volume of fluid method, Chem. Eng. Sci. 60 (11) (2005) 2999–3011. [20] J.M. Gimenez, N.M. Nigro, S.R. Idelsohn, E. Oñate, Surface tension problems solved with the particle finite element method using large time-steps, Comput. Fluids 141 (2016) 90–104. [21] R. Calderer, L. Zhu, R. Gibson, A. Masud, Residual-based turbulence models and arbitrary Lagrangian-Eulerian framework for free surface flows, Math. Models Methods Appl. Sci. 25 (12) (2015) 2287–2317.
[22] L. Zhu, S. Goraya, A. Masud, Interface-capturing method for free-surface plunging and breaking waves, J. Eng. Mech. 145 (2019) 1–15. [23] T.E. Tezduyar, S. Sathe, J. Pausewang, M. Schwaab, J. Christopher, J. Crabtree, Interface projection techniques for fluid-structure interaction modeling with moving-mesh methods, Comput. Mech. 43 (2008) 39–49. [24] A.A. Johnson, T.E. Tezduyar, Mesh update strategies in parallel finite element computations of flow problems with moving boundaries and interfaces, Comput. Methods Appl. Mech. Eng. 119 (1994) 73–94. [25] M.-C. Hsu, Y. Bazilevs, Fluid-structure interaction modeling of wind turbines: simulating the full machine, Comput. Mech. (2012), https://doi.org/10.1007/s00466-012-0772-0. [26] C.S. Peskin, The immersed boundary method, Acta Numerica 11 (2002) 479–517. [27] C.S. Peskin, Flow patterns around heart valves: a numerical method, J. Comput. Phys. 10 (2) (1972) 252–271. [28] W.K. Liu, Y. Liu, D. Farrell, L. Zhang, X.S. Wang, Y. Fukui, N. Patankar, Y. Zhang, C. Bajaj, J. Lee, et al., Immersed finite element method and its applications to biological systems, Comput. Methods Appl. Mech. Eng. 195 (13–16) (2006) 1722–1749. [29] A. Main, G. Scovazzi, The shifted boundary method for embedded domain computations. Part II: linear advection-diffusion and incompressible Navier-Stokes equations, J. Comput. Phys. 372 (2018) 996–1026. [30] A. Main, G. Scovazzi, The shifted boundary method for embedded domain computations. Part I: Poisson and Stokes problems, J. Comput. Phys. 372 (2018) 972–995.
[31] Y. Bazilevs, K. Kamran, G. Moutsanidis, D.J. Benson, E. Oñate, A new formulation for air-blast fluid-structure interaction using an immersed approach. Part I: basic methodology and FEM-based simulations, Comput. Mech. 60 (1) (2017) 83–100. [32] Y. Bazilevs, G. Moutsanidis, J. Bueno, K. Kamran, D. Kamensky, M.C. Hillman, H. Gomez, J.S. Chen, A new formulation for air-blast fluid-structure interaction using an immersed approach: Part II—coupling of IGA and meshfree discretizations, Comput. Mech. 60 (1) (2017) 101–116. [33] D. Kamensky, M.-C. Hsu, D. Schillinger, J.A. Evans, A. Aggarwal, Y. Bazilevs, M.S. Sacks, T.J.R. Hughes, An immersogeometric variational framework for fluid-structure interaction: application to bioprosthetic heart valves, Comput. Methods Appl. Mech. Eng. 284 (2015) 1005–1053. [34] F. Xu, D. Schillinger, D. Kamensky, V. Varduhn, C. Wang, M.-C. Hsu, The tetrahedral finite cell method for fluids: immersogeometric analysis of turbulent flow around complex geometries, Comput. Fluids 141 (2016) 135–154. [35] J. Yan, X. Deng, A. Korobenko, Y. Bazilevs, Free-surface flow modeling and simulation of horizontal-axis tidal-stream turbines, Comput. Fluids 158 (2016) 157–166. [36] J. Yan, Computational Free-Surface Fluid-Structure Interaction With Applications on Offshore Wind and Tidal Energy (Ph.D. thesis), University of California San Diego, 2016. [37] I. Akkerman, Y. Bazilevs, C.E. Kees, M.W. Farthing, Isogeometric analysis of free-surface flow, J. Comput. Phys. 230 (2011) 4137–4152. [38] I. Akkerman, Y. Bazilevs, D.J. Benson, M.W. Farthing, C.E. Kees, Free-surface flow and fluid-object interaction modeling with emphasis on ship hydrodynamics, J. Appl. Mech. 79 (2012) 010905. [39] J.A. Cottrell, T.J.R. Hughes, Y. Bazilevs, Isogeometric Analysis. Toward Integration of CAD and FEA, Wiley, 2009. [40] T.J.R. Hughes, J.A. Cottrell, Y. Bazilevs, Isogeometric analysis: CAD, finite elements, NURBS, exact geometry, and mesh refinement, Comput. Methods Appl.
Mech. Eng. 194 (2005) 4135–4195. [41] M.-C. Hsu, C. Wang, F. Xu, A.J. Herrema, A. Krishnamurthy, Direct immersogeometric fluid flow analysis using B-rep CAD models, Comput. Aided Geom. Des. 43 (2016) 143–158. [42] C. Wang, F. Xu, M.-C. Hsu, A. Krishnamurthy, Rapid B-rep model preprocessing for immersogeometric analysis using analytic surfaces, Comput. Aided Geom. Des. 52–53 (2017) 190–204. [43] M.-C. Hsu, D. Kamensky, Y. Bazilevs, M.S. Sacks, T.J.R. Hughes, Fluid-structure interaction analysis of bioprosthetic heart valves: significance of arterial wall deformation, Comput. Mech. 54 (2014) 1055–1071. [44] M.-C. Hsu, D. Kamensky, F. Xu, J. Kiendl, C. Wang, M.C.H. Wu, J. Mineroff, A. Reali, Y. Bazilevs, M.S. Sacks, Dynamic and fluid-structure interaction simulations of bioprosthetic heart valves using parametric design with T-splines and Fung-type material models, Comput. Mech. 55 (2015) 1211–1225. [45] M.C.H. Wu, R. Zakerzadeh, D. Kamensky, J. Kiendl, M.S. Sacks, M.-C. Hsu, An anisotropic constitutive model for immersogeometric fluid-structure interaction analysis of bioprosthetic heart valves, J. Biomech. 74 (2018) 23–31. [46] F. Xu, S. Morganti, R. Zakerzadeh, D. Kamensky, F. Auricchio, A. Reali, T.J.R. Hughes, M.S. Sacks, M.-C. Hsu, A framework for designing patient-specific bioprosthetic heart valves using immersogeometric fluid-structure interaction analysis, Int. J. Numer. Methods Biomed. Eng. 34 (4) (2018) e2938. [47] F. Xu, Y. Bazilevs, M.-C. Hsu, Immersogeometric analysis of compressible flows with application to aerodynamic simulation of rotorcraft, Math. Models Methods Appl. Sci. 29 (5) (2019) 905–938.
[48] Y. Bazilevs, V.M. Calo, J.A. Cottrell, T.J.R. Hughes, A. Reali, G. Scovazzi, Variational multiscale residual-based turbulence modeling for large eddy simulation of incompressible flows, Comput. Methods Appl. Mech. Eng. 197 (2007) 173–201. [49] S. Xu, N. Liu, J. Yan, Residual-based variational multi-scale modeling for particle-laden gravity currents over flat and triangular wavy terrains, Comput. Fluids 188 (2019) 114–124. [50] J. Yan, A. Korobenko, A.E. Tejada-Martínez, R. Golshan, Y. Bazilevs, A new variational multiscale formulation for stratified incompressible turbulent flows, Comput. Fluids 158 (2016) 150–156. [51] T.M. van Opstal, J. Yan, C. Coley, J.A. Evans, T. Kvamsdal, Y. Bazilevs, Isogeometric divergence-conforming variational multiscale formulation of incompressible turbulent flows, Comput. Methods Appl. Mech. Eng. 316 (2016) 859–879. [52] J. Yan, S. Lin, Y. Bazilevs, G.J. Wagner, Isogeometric analysis of multi-phase flows with surface tension and with application to dynamics of rising bubbles, Comput. Fluids 179 (2019) 777–789. [53] J. Yan, W. Yan, S. Lin, G.J. Wagner, A fully coupled finite element formulation for liquid-solid-gas thermo-fluid flow with melting and solidification, Comput. Methods Appl. Mech. Eng. 336 (2018) 444–470. [54] A.N. Brooks, T.J.R. Hughes, Streamline upwind/Petrov-Galerkin formulations for convection dominated flows with particular emphasis on the incompressible Navier-Stokes equations, Comput. Methods Appl. Mech. Eng. 32 (1982) 199–259. [55] T.E. Tezduyar, Stabilized finite element formulations for incompressible flow computations, Adv. Appl. Mech. 28 (1992) 1–44. [56] T.E. Tezduyar, Y. Osawa, Finite element stabilization parameters computed from element matrices and vectors, Comput. Methods Appl. Mech. Eng. 190 (2000) 411–430. [57] T.J.R. Hughes, L. Mazzei, A.A. Oberai, A. Wray, The multiscale formulation of large eddy simulation: decay of homogeneous isotropic turbulence, Phys. Fluids 13 (2001) 505–512. [58] T.J.R.
Hughes, G. Scovazzi, L.P. Franca, Multiscale and stabilized methods, in: Encyclopedia of Computational Mechanics, John Wiley & Sons, 2004. [59] M.C. Hsu, Y. Bazilevs, V.M. Calo, T.E. Tezduyar, T.J.R. Hughes, Improving stability of stabilized and multiscale formulations in flow simulations at small time steps, Comput. Methods Appl. Mech. Eng. 199 (2010) 828–840. [60] C. Johnson, Numerical Solution of Partial Differential Equations by the Finite Element Method, Cambridge University Press, Sweden, 1987. [61] T.E. Tezduyar, Finite element methods for fluid dynamics with moving boundaries and interfaces, in: Encyclopedia of Computational Mechanics, 2004. [62] J. Nitsche, Über ein Variationsprinzip zur Lösung von Dirichlet-Problemen bei Verwendung von Teilräumen, die keinen Randbedingungen unterworfen sind, in: Abhandlungen aus dem Mathematischen Seminar der Universität Hamburg, vol. 36, 1971, pp. 9–15. [63] Y. Bazilevs, T.J.R. Hughes, Weak imposition of Dirichlet boundary conditions in fluid mechanics, Comput. Fluids 36 (2007) 12–26. [64] V. Varduhn, M.-C. Hsu, M. Ruess, D. Schillinger, The tetrahedral finite cell method: higher-order immersogeometric analysis on adaptive non-boundary-fitted meshes, Int. J. Numer. Methods Eng. 107 (12) (2016) 1054–1079. [65] J. Chung, G.M. Hulbert, A time integration algorithm for structural dynamics with improved numerical dissipation: the generalized-α method, J. Appl. Mech. 60 (2) (1993) 371–375. [66] K.E. Jansen, C.H. Whiting, G.M. Hulbert, A generalized-α method for integrating the filtered Navier-Stokes equations with a stabilized finite element method, Comput. Methods Appl. Mech. Eng. 190 (3–4) (2000) 305–319.
[67] Y. Bazilevs, K. Takizawa, T.E. Tezduyar, Computational Fluid-Structure Interaction: Methods and Applications, John Wiley & Sons, 2013. [68] Y. Saad, M. Schultz, GMRES: a generalized minimal residual algorithm for solving nonsymmetric linear systems, SIAM J. Sci. Stat. Comput. 7 (1986) 856–869. [69] J.A. French, Wave Uplift Pressures on Horizontal Platforms (Ph.D. thesis), California Institute of Technology, 1970. [70] K.M.T. Kleefsman, G. Fekken, A.E.P. Veldman, B. Iwanowski, B. Buchner, A volume-of-fluid based simulation method for wave impact problems, J. Comput. Phys. 206 (1) (2005) 363–393. [71] Geometry of DTMB 5415 model, 2008. http://www.simman2008.dk/5415/5415_geometry.htm. [72] J. Longo, F. Stern, Uncertainty assessment for towing tank tests with example for surface combatant DTMB model 5415, J. Ship Res. 49 (1) (2005) 55–68.
CHAPTER SIX
Machine learning in materials modeling and design Kamrun N. Keyaa, Amara Arshadb, Sara A. Tolbab, Wenjian Niea, Amirhadi Alesadia, Luis Alberto Ruiz Pestanac, and Wenjie Xiaa,b a
Department of Civil, Construction and Environmental Engineering, North Dakota State University, Fargo, ND, United States b Materials and Nanotechnology, North Dakota State University, Fargo, ND, United States c Department of Civil and Architectural Engineering, University of Miami, Coral Gables, FL, United States
Contents
1. Introduction 203
   1.1 What is data science? 204
   1.2 What is machine learning? 205
   1.3 Types of machine learning 205
2. Math preliminaries for machine learning 207
3. Overview of machine learning algorithms 209
   3.1 Data selection and feature selection 209
   3.2 Classification of machine learning 209
   3.3 Feature reduction methods 213
   3.4 Regression models 215
   3.5 Deep learning 217
4. Applications of machine learning in materials design and modeling 218
   4.1 Machine learning in prediction of materials properties 219
   4.2 Material classification via machine learning 223
   4.3 Advances of machine learning in molecular model development 227
   4.4 Application of machine learning in designing biomaterials 228
5. Future outlook 233
References 233
Fundamentals of Multiscale Modeling of Structural Materials, https://doi.org/10.1016/B978-0-12-823021-3.00010-5. Copyright © 2023 Elsevier Inc. All rights reserved.

1. Introduction
Materials development plays a very important role in our daily lives and in the technological revolution. Progress in human society is directly connected to the development of new materials with advanced properties. The design of new materials is imperative to solving pressing societal challenges related to health, energy, and sustainability. It is important to know
how to achieve the desired properties of designed materials for many advanced applications [1]. Traditionally, materials design has been driven by empirical laws and experimental trial and error, which is an inefficient and costly approach [2]. The emergence of machine learning (ML), a subfield of artificial intelligence (AI), offers great promise for addressing the challenges in materials design and discovery from a data-driven perspective. With datasets collected from measurements and calculations, it is now possible to rapidly develop new materials and navigate the vast design space of hypothetical materials using ML techniques [3]. Combining experimentation and molecular and multiscale modeling with ML methods enables faster identification of high-performance material candidates and the establishment of design principles by uncovering complex structure-processing-property relationships.
1.1 What is data science?
Data science is an interdisciplinary field that extracts knowledge and insights from raw, structured, and unstructured data using algorithms, processes, and scientific methods. It sits at the confluence of computer science, statistics, and subject-matter expertise for the task at hand (Fig. 1). Data science draws on many fields, such as mathematics, statistics, computer science, software programming, informatics, and domain knowledge, and its growth has accompanied the emergence of ML. Currently, the term "data science" is mostly
Fig. 1 Concepts of data science.
used in academic programs and, in general, does not refer to any specific mathematical algorithms.
1.2 What is machine learning?
Developers have long imagined building machines that are able to think. When computer programs were first conceived many years ago, people speculated whether such devices could be intelligent. Nowadays, AI is a broad branch of computer science that deals with building intelligent machines capable of performing actions normally performed by humans. ML is a subset of AI that analyzes data with a series of algorithms, allowing a system to learn automatically and improve from experience. This automatic learning is further enabled by deep learning (DL) techniques that can analyze enormous amounts of unstructured data, such as text, images, or videos. Specifically, DL is a branch of ML that uses multilayered artificial neural networks to achieve cutting-edge accuracy in tasks like object detection, speech recognition, and language translation. Several important architectures, such as feedforward neural networks, convolutional neural networks, autoencoders, deep belief networks, and recurrent neural networks, are considered the most prominent methods in DL.
1.3 Types of machine learning
There are multiple well-developed ML algorithms that are used to analyze data for materials design and a myriad of other problems. Generally, ML models can be classified into four major categories (Fig. 2) based on the nature of the input data and the purpose of the application: supervised learning, unsupervised learning, semisupervised learning, and reinforcement learning, which are briefly reviewed in the following sections.
1.3.1 Supervised learning
Supervised ML models can be classified into two major groups: regression and classification. Several supervised algorithms are available, such as support vector machines (SVM), linear discriminant analysis (LDA), linear regression, random forests, and others. In supervised learning, a computer algorithm is trained on input data that have been labeled with a particular output to create an ML model. Training continues until the model can detect the patterns and relationships between the input data and the output labels, allowing it to accurately predict new data. Supervised learning excels at classification and regression problems, such as determining the properties of materials or chemicals based
Fig. 2 Types of machine learning algorithms.
on their characteristics or predicting a material's properties as a function of molecular parameters. Such algorithms can also compare their output with the intended output (i.e., the ground-truth label) and detect errors so that the model can adjust itself accordingly (e.g., via backpropagation).
1.3.2 Unsupervised learning
The opposite of supervised learning is unsupervised learning. In unsupervised learning, the algorithm is supplied with unlabeled data and is designed to find patterns or similarities on its own. In this category, only input data are used, and there are no corresponding output labels. These techniques analyze and cluster unlabeled datasets. There are two major types of unsupervised models: clustering and association. Because of its capacity to detect similarities and differences in information, unsupervised learning is well suited for exploratory data analysis, cross-selling strategies, consumer segmentation, and image identification. Principal component analysis (PCA), K-means, mixture models, and others are examples of models in this family.
1.3.3 Semisupervised learning
Semisupervised learning is a form of ML that falls somewhere between supervised and unsupervised learning. During the training stage, it uses a mix of labeled and unlabeled datasets. Between supervised and
unsupervised ML, semisupervised learning is an important category. Semisupervised learning acts on data in which only a small fraction of the samples are labeled; labels are expensive to obtain, yet a few may suffice to guide learning on a largely unlabeled dataset. These algorithms are often based on a few assumptions, such as the continuity, cluster, and manifold assumptions [4].
1.3.4 Reinforcement learning
Reinforcement learning (RL) is an ML technique that allows an agent to learn by trial and error in an interactive environment using feedback from its own actions and experiences; it is, however, a relatively long-term, iterative process. Although both supervised learning and RL involve mapping between inputs and outputs, RL uses rewards and punishments as signals for positive and negative behaviors rather than ground-truth labels. As a result, RL provides the agent with the right set of behaviors for executing a task. In terms of objectives, RL also differs from unsupervised learning: in unsupervised learning, the goal is to detect similarities and differences between data points, whereas in RL the goal is to develop an action model that maximizes the agent's total cumulative reward. Several algorithms are often used as RL methods, such as Q-learning, deep adversarial networks, and temporal-difference learning.
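As a toy sketch of this reward-driven, trial-and-error loop (a hypothetical example with made-up states and rewards, not tied to any materials application), tabular Q-learning on a five-state chain lets an agent discover that moving right toward a rewarding terminal state maximizes its cumulative reward:

```python
import random

random.seed(0)

N_STATES = 5                        # states 0..4; state 4 is terminal
ACTIONS = (-1, +1)                  # move left / move right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.3   # learning rate, discount, exploration

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for _ in range(2000):               # episodes of trial and error
    s = 0
    for _ in range(200):            # cap episode length
        # epsilon-greedy action selection
        if random.random() < EPS:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s_next == N_STATES - 1 else 0.0  # reward only at the goal
        # Q-learning update: nudge Q(s,a) toward r + gamma * max_a' Q(s',a')
        best_next = max(Q[(s_next, b)] for b in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s_next
        if s == N_STATES - 1:
            break

# Greedy policy for each nonterminal state; it should prefer +1 (move right)
policy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)]
```

After training, the greedy policy should move right in every nonterminal state, the behavior that maximizes the discounted cumulative reward.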
2. Math preliminaries for machine learning
ML is a field that combines statistical, probabilistic, computer science, and algorithmic aspects, learning iteratively from data to find hidden insights that can be used to build intelligent applications. Despite the immense possibilities of ML, a thorough mathematical understanding of many of these techniques is necessary for a good grasp of the inner workings of the algorithms and for obtaining good results, including:
1. selecting the right algorithm, which includes considerations of accuracy, training time, model complexity, number of parameters, and number of features;
2. choosing parameter settings and validation strategies;
3. identifying underfitting and overfitting by understanding the bias-variance trade-off;
4. estimating the right confidence interval and uncertainty.
Research into mathematical formulations and theoretical advancement of ML is ongoing, and the relative importance of the various topics is illustrated in Fig. 3. We summarize
Fig. 3 Importance of math topics needed for machine learning.
the math topics commonly involved in ML algorithms in the following sections.
Linear Algebra: Linear algebra is the basis of ML methodologies [5]. Topics such as principal component analysis (PCA), singular value decomposition (SVD), eigendecomposition of a matrix, lower-upper (LU) decomposition, QR decomposition/factorization (orthogonal matrix Q and upper triangular matrix R), symmetric matrices, orthogonalization and orthonormalization, matrix operations, projections, eigenvalues and eigenvectors, vector spaces, and norms are needed for understanding the optimization methods used in ML.
Probability Theory and Statistics: Some of the fundamental statistical and probability theory needed for ML includes combinatorics, probability rules and axioms, Bayes' theorem, random variables, variance and expectation, conditional and joint distributions, standard distributions (Bernoulli, binomial, multinomial, uniform, and Gaussian), moment-generating functions, maximum-likelihood estimation (MLE), prior and posterior distributions, maximum a posteriori (MAP) estimation, and sampling methods [6].
Multivariate Calculus: Necessary topics include differential and integral calculus, partial derivatives, vector-valued functions, directional gradients, and the Hessian, Jacobian, Laplacian, and Lagrangian [7].
Algorithms and Complex Optimizations: This is important for understanding the computational efficiency and scalability of ML algorithms and for exploiting sparsity in datasets. Knowledge of data structures (binary trees, hashing, heaps, stacks, etc.), dynamic programming, randomized and
sublinear algorithms, graphs, gradient/stochastic descent, and primal-dual methods is needed [8].

Other Concepts: Some additional math topics not covered in the four major areas described earlier include real and complex analysis (sets and sequences, topology, metric spaces, single-valued and continuous functions, limits, the Cauchy kernel, Fourier transforms), information theory (entropy, information gain), function spaces, and manifolds.
3. Overview of machine learning algorithms

In the previous section, the different types of ML algorithms and the math used in ML were discussed. In this section, we will go through the key concepts of commonly used ML algorithms, as well as how to select data and implement those algorithms.
3.1 Data selection and feature selection

The first and most important step in materials design is feature selection and data cleaning. Feature selection is a technique for reducing the input variables of a chosen model by keeping only useful data and eliminating noise. It refers to the process of automatically choosing relevant features for an ML model based on the type of problem to be solved, and it can be performed without changing the features themselves. In addition to reducing the size of the input data, it helps reduce noise, since raw data often contain a large amount of additional meaningless information.

Feature selection methods can be based on supervised or unsupervised models. Supervised feature selection techniques use the target variable, for example, methods that remove variables irrelevant to the target, whereas unsupervised feature selection ignores the target variable, for example, methods that eliminate redundant variables using correlation. There are three types of supervised feature selection methods: filter, wrapper, and intrinsic methods. In the filter method, each predictor is scored for relevance against a statistical criterion, and only the best predictors are passed to the model. The wrapper method searches for the combination of predictors that maximizes model performance by adding or removing predictors. The intrinsic (embedded) method performs feature selection as part of model training and combines aspects of both the filter and wrapper approaches [9].
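As a minimal sketch of the unsupervised, correlation-based elimination described above (the function name and threshold are our own choices), redundant features can be dropped with a few lines of NumPy:

```python
import numpy as np

def drop_correlated_features(X, threshold=0.95):
    """Unsupervised filter: drop any feature whose absolute Pearson
    correlation with an earlier-kept feature exceeds `threshold`."""
    corr = np.abs(np.corrcoef(X, rowvar=False))  # feature-feature correlations
    keep = []
    for j in range(X.shape[1]):
        if all(corr[j, k] <= threshold for k in keep):
            keep.append(j)
    return X[:, keep], keep

# Three features: the third duplicates the first, so it is removed as redundant.
rng = np.random.default_rng(0)
x0 = rng.normal(size=200)
x1 = rng.normal(size=200)
X = np.column_stack([x0, x1, 2.0 * x0 + 1e-6 * rng.normal(size=200)])
X_red, kept = drop_correlated_features(X)
print(kept)  # -> [0, 1]
```

The same scan generalizes to any pairwise redundancy measure by swapping out the correlation matrix.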
3.2 Classification of machine learning

There are many classification algorithms used in ML, and no general rule establishes that one algorithm is better than the others. Based on the types of data
and the application, the choice of classification algorithm varies. Here, several popular classification algorithms will be discussed: the decision tree, the random forest, and the k-nearest neighbor.

3.2.1 Decision tree algorithm
The decision tree (DT) algorithm is a powerful classification algorithm used for approximating discrete-valued functions. It induces a set of classification rules from the training set with the objective of correctly classifying examples. The DT algorithm proceeds through feature selection, decision tree generation, and decision tree pruning. It belongs to the supervised learning algorithms and can be used for classification and regression analyses of both numerical and categorical data.

The DT starts from the root of the tree (sometimes called a node). Each node corresponds to a test on some attribute, and each edge leads to a possible answer to the test. The process is recursive in nature and is repeated for each subtree rooted at a new node [10]. Several assumptions are made within the DT: at first, the whole training set is placed at the root; then, if feature values are continuous, they are discretized before building the model. Identifying the attribute for the root node at each level is the challenging part of the algorithm. Two feature selection measures are commonly used: "information gain" and the "Gini index" [11,12].

"Information gain" is the main measure for building a decision tree (DT). It reduces the amount of information needed to classify the tuples. The original information required to classify the tuple dataset D is given by the following equation:

E(S) = − Σ_{i=1}^{c} p_i log₂ p_i    (1)
where p_i indicates the probability that a tuple in D belongs to class i, and E(S) is the expected information required to classify the dataset, also known as the entropy. To compute the information gain of an attribute, the expected information after splitting on that attribute is also needed:

E(T, X) = Σ_{c∈X} P(c) E(c)    (2)
Here, T is the dataset being split, X is the attribute used for the split, P(c) is the weight of partition c (the fraction of examples of T that fall into that partition), and E(c) is the entropy of partition c.
Information gain is the difference between the original information and the expected information needed to classify the dataset D after the split; it measures the reduction in entropy obtained by splitting the subset T on attribute X. By knowing the value of X, it is possible to reduce the information needed to classify the dataset:

Gain(T, X) = E(T) − E(T, X)    (3)
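Eqs. (1)-(3) can be sketched in a few lines of Python (the helper names are illustrative):

```python
from collections import Counter
from math import log2

def entropy(labels):
    """E(S) = -sum(p_i * log2(p_i)) over the class proportions (Eq. 1)."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(labels, partitions):
    """Gain(T, X) = E(T) - sum(|c|/|T| * E(c)) over the partitions (Eqs. 2-3)."""
    n = len(labels)
    expected = sum(len(p) / n * entropy(p) for p in partitions)
    return entropy(labels) - expected

labels = ["yes", "yes", "no", "no"]
# A perfect split separates the two classes, so the gain equals E(T) = 1 bit.
print(information_gain(labels, [["yes", "yes"], ["no", "no"]]))  # -> 1.0
```

A DT learner would evaluate this gain for every candidate attribute and split on the largest one.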
The "Gini index" can be used for binary variables and regression analysis. It measures the impurity of the training dataset D as follows:

Gini(t) = 1 − Σ_i p(i|t)²    (4)

where p(i|t) is the probability that a record in node t of the training dataset belongs to class i. For a dataset D split into k partitions, the Gini index of the split can be calculated as follows:

GINI_split = Σ_{i=1}^{k} (n_i / n) GINI(i)    (5)
where n_i is the number of records in the i-th partition of dataset D and n is the total number of records.

Although the DT is very useful for regression and classification, it faces overfitting problems if the dataset contains many features [13]. To solve this overfitting issue, it is important to stop growing the tree at some point. There are two methods to address overfitting: prepruning and postpruning. Prepruning applies a "termination condition" to determine when it is desirable to terminate some of the branches prematurely while the decision tree is being generated; it stops the model from over-developing at an early stage, but choosing a good stopping point is difficult. A postpruning approach generates a complete decision tree and then removes some of the branches with the aim of improving the classification accuracy on unseen data; cross-validation can be used to check whether expanding the tree will make improvements or lead to overfitting [12].

3.2.2 Random forest
The random forest (RF) algorithm constructs a multitude of decision trees (DTs) and is considered one of the most powerful supervised learning algorithms for classification and regression tasks [13]. It shows excellent performance in both classification and regression settings.
In terms of accuracy, the RF algorithm provides more accurate results and may be less prone to overfitting than other classification methods. Since it is a combination of multiple decision trees, it usually provides a more accurate prediction than a single DT. The RF algorithm does not require the feature scaling needed for the DT, and it is more robust with respect to the selection of training samples from the dataset. Although the RF algorithm is difficult to interpret, its hyperparameters are easier to tune than those of the DT classifier [13,14].

3.2.3 k-nearest neighbor
The k-nearest neighbor (KNN) is a supervised algorithm used in statistical prediction and pattern recognition. The algorithm classifies objects using the neighbors closest to them: for a test sample, it selects the k nearest samples from the training set, and the majority class among these nearest samples (or the class of the single nearest sample when k = 1) is returned as the prediction for that test sample. The distance between two samples can be measured using a variety of methods [14,15].

The Euclidean distance is the most common method for computing distances between data points; however, the distance metric should be chosen based on the size and dimensionality of the dataset. Among the available metrics, the Minkowski distance is commonly used to measure the distance between two points x and y in n-dimensional space:

d(x, y) = ( Σ_{i=1}^{n} |x_i − y_i|^w )^{1/w}    (6)
Usually, a smaller (integer) value of w provides a better option for measuring distances in high-dimensional datasets. The Hamming distance is another method, defined for integer-valued vector spaces. It counts the positions at which the corresponding entries differ and is typically used to compare strings of equal length:

d(x, y) = Σ_{i=1}^{k} |x_i − y_i|    (7)
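The Minkowski and Hamming distances of Eqs. (6) and (7) can be sketched as follows (for the Hamming case we count differing positions, the usual string form):

```python
def minkowski(x, y, w=2):
    """Minkowski distance of order w (Eq. 6); w=2 gives the Euclidean distance."""
    return sum(abs(a - b) ** w for a, b in zip(x, y)) ** (1.0 / w)

def hamming(x, y):
    """Hamming distance (Eq. 7): number of positions where x and y differ."""
    return sum(a != b for a, b in zip(x, y))

print(minkowski((0, 0), (3, 4)))       # -> 5.0 (Euclidean)
print(minkowski((0, 0), (3, 4), w=1))  # -> 7.0 (Manhattan)
print(hamming("karolin", "kathrin"))   # -> 3
```

Setting w = 1 or w = 2 recovers the Manhattan and Euclidean distances as special cases of the Minkowski metric.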
Choosing the correct value of k affects the results. The advantages of this algorithm are that it is very easy to use and that no explicit model needs to be built; it can be used for both regression and classification. After increasing
the number of independent variables/predictors, this algorithm becomes slower, which is one of its disadvantages. Moreover, other classification and regression algorithms can generate more accurate results. Although the KNN cannot solve all types of problems, it is very useful for problems whose solutions are based on identifying similar objects.
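A minimal KNN classifier along the lines described above (majority vote among the k closest training samples; the names are illustrative):

```python
from collections import Counter

def knn_predict(X_train, y_train, x, k=3, dist=None):
    """Classify x by majority vote among its k nearest training samples.

    `dist` defaults to squared Euclidean distance; any metric
    (e.g., Minkowski or Hamming) can be supplied instead.
    """
    if dist is None:
        dist = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    neighbors = sorted(zip(X_train, y_train), key=lambda p: dist(p[0], x))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

X_train = [(0, 0), (0, 1), (1, 0), (5, 5), (5, 6), (6, 5)]
y_train = ["A", "A", "A", "B", "B", "B"]
print(knn_predict(X_train, y_train, (0.5, 0.5)))  # -> A
print(knn_predict(X_train, y_train, (5.5, 5.5)))  # -> B
```

Note that no training step exists: all work happens at prediction time, which is exactly why the method slows down as the dataset grows.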
3.3 Feature reduction methods

Feature reduction is the technique of reducing the number of features without losing important and valuable information from the dataset; it is also known as dimensionality reduction. Feature reduction is normally used to avoid overfitting, improve the performance and accuracy of ML models, and reduce computational cost and time. These techniques can be divided into two groups, namely, feature selection and feature extraction. Among the many feature reduction methods, principal component analysis (PCA), T-distributed stochastic neighbor embedding (t-SNE), and linear discriminant analysis (LDA) are the most popular.

3.3.1 Principal component analysis
Principal component analysis (PCA) is a technique that uses the covariance between features to identify uncorrelated variables among them and thus reduce the dimensionality of a large dataset [16,17]. PCA is normally performed by calculating the eigenvalues and eigenvectors of the covariance matrix. The main purpose of this technique is to use a smaller number of independent factors that capture the majority of the variance of the original variables and to eliminate their redundancy. It is particularly useful for analyzing high-dimensional data, exploiting the dependencies between the variables to represent the data in a more tractable, lower-dimensional form without losing too much information [18]. As a dimensionality reduction technique, PCA can be used for recognizing patterns in multidimensional datasets so that the data can be more easily visualized. For instance, using the PCA technique, Mumtaz and coworkers showed that different polymers with similar dielectric properties are distinguishable in terahertz time-domain spectroscopy, facilitating the identification of materials with no characteristic features in the spectral range of interest [19].
Other benefits of PCA include reduction of noise in the data, feature selection (to a certain extent), and the ability to produce independent, uncorrelated features of the data. However, PCA assumes linearity, which is its main shortcoming, particularly for nonlinear problems.
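A short sketch of PCA via eigendecomposition of the covariance matrix, as described above (pure NumPy; the function name is ours):

```python
import numpy as np

def pca(X, n_components=2):
    """PCA via eigendecomposition of the covariance matrix.

    Returns the projected data and the fraction of variance explained
    by each retained component."""
    Xc = X - X.mean(axis=0)                 # center each feature
    cov = np.cov(Xc, rowvar=False)          # covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1]       # re-sort descending
    components = eigvecs[:, order[:n_components]]
    explained = eigvals[order[:n_components]] / eigvals.sum()
    return Xc @ components, explained

rng = np.random.default_rng(1)
t = rng.normal(size=300)
# Three features dominated by one latent direction plus small noise.
X = np.column_stack([t, 2 * t, -t]) + 0.05 * rng.normal(size=(300, 3))
Z, explained = pca(X, n_components=1)
print(Z.shape)  # -> (300, 1); the first component explains nearly all variance
```

Here a three-dimensional dataset collapses to one coordinate because a single latent direction generated it, which is the pattern-recognition use of PCA mentioned above.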
3.3.2 T-distributed stochastic neighbor embedding
Generally, in materials design, material data have multiple features spanning a high-dimensional space, and it is important to visualize, present, and perceive such data. Therefore, dimensionality reduction into a low-dimensional space of two or three dimensions is a primary requirement for visualization. T-distributed stochastic neighbor embedding (t-SNE) is a nonlinear dimensionality reduction technique that can be used to visualize high-dimensional data. The technique is based on unbiased data and the correlation of descriptors at a lower complexity level (reduced dimension) [20]. It reduces the dimensionality of a dataset by minimizing the Kullback-Leibler divergence between the high-dimensional space and the latent space. Although the t-SNE method is an excellent technique for data visualization in a low-dimensional space, it requires a large computational cost. The technique solves the crowding problem, found in the original SNE method, by using a heavier-tailed probability distribution. The similarity of two objects x_i and x_j in the original space is expressed by the probability p_ij:

p_ij = exp(−||x_i − x_j||² / 2σ_i²) / Σ_{k≠l} exp(−||x_k − x_l||² / 2σ_i²)    (8)
Here, p_ij is the probability distribution in the original space, and q_ij is the corresponding distribution between the mapped points y_i and y_j in the latent space:

q_ij = (1 + ||y_i − y_j||²)⁻¹ / Σ_{k≠l} (1 + ||y_k − y_l||²)⁻¹    (9)
The cost function minimized is the Kullback-Leibler divergence between the high-dimensional distribution P and the latent-space distribution Q:

C = KL(P ∥ Q) = Σ_i Σ_j p_ij log(p_ij / q_ij)    (10)

which measures how the distribution Q diverges from the distribution P. This minimization is carried out by
gradient descent. The technique is used for identifying patterns in data with multiple features, mainly for visualization purposes and unsupervised learning [21].
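The Kullback-Leibler cost of Eq. (10) reduces to a one-line sum once the pairwise similarities are flattened into probability vectors; a minimal sketch (assuming normalized inputs):

```python
from math import log

def kl_divergence(P, Q):
    """KL(P || Q) = sum_i p_i * log(p_i / q_i), the t-SNE cost of Eq. (10)
    applied to flattened, normalized pairwise similarity matrices."""
    return sum(p * log(p / q) for p, q in zip(P, Q) if p > 0)

P = [0.5, 0.3, 0.2]
print(kl_divergence(P, P))                     # -> 0.0 (identical distributions)
print(kl_divergence(P, [0.2, 0.3, 0.5]) > 0)   # -> True (mismatch is penalized)
```

Because KL(P ∥ Q) is asymmetric and penalizes placing near neighbors (large p_ij) far apart (small q_ij), minimizing it preserves local structure, which is the property t-SNE exploits.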
pC b p ance: max SðpÞ ¼ pC T , where Cb and Cw are the between-class and wp within-class variances, respectively, and p is the space of the dataset, which is also known as eigenvector obtained from CbC1 w . This technique is commonly used for classifying multiple problems, and it can be used for reducing the number of features from the data. LDA makes assumptions about the data during analysis, such as data point distributions being normally or Gaussian, and each class having equal covariance matrices [20,22].
3.4 Regression models

Regression modeling is commonly used in ML to develop a predictive relationship between feature inputs and outputs. Generally, a regression model predicts a continuous function y as a function of one or more independent variables/descriptors/features x. There are several regression algorithms, such as linear, polynomial, logistic, and stepwise regression. Here, we will only introduce three popular regression algorithms.

3.4.1 Linear regression
Linear regression is one of the most popular ML regression algorithms. To use it, the dependent variable must be continuous, while the independent variable(s) can be either continuous or discrete [23]. Linear regression creates a relationship between the dependent variable y and the independent variable(s) x in the form of a regression line defined by the following equations:

y = a + b x + err    (11)

y = a + b₁x₁ + b₂x₂ + ⋯ + bₙxₙ + err    (12)
Here, a and b are the linear coefficients such that a is the intercept, b is the slope of the line, and err is the error term. Eq. (11) is for the simple regression
model, and Eq. (12) is the multiple regression formula, which is considered an extension of the simple linear regression.

3.4.2 Polynomial regression
Polynomial regression is a regression model for nonlinear datasets that uses an nth-degree polynomial. It is similar to multiple linear regression, but it fits a nonlinear relationship between the independent and dependent variables [24]. The linear regression equation is a first-degree polynomial, so the polynomial equation is derived from the linear one. The polynomial regression equation is defined as follows:

y = b₀ + b₁x₁ + b₂x₂ + ⋯ + bₙxₙ + b₍ₙ₊₁₎x² + b₍ₙ₊₂₎x³ + ⋯ + b₍ₙ₊ₘ₎x^(m+1) + err    (13)
where y is the predicted/target output, b₀, …, b₍ₙ₊ₘ₎ are the regression coefficients, x is an independent/input variable, and err is the error term.

3.4.3 Regularized linear regression
This form of regression regularizes the coefficients of the linear regression and shrinks their values toward zero; in other words, it simplifies the model to avoid overfitting. Two commonly used regularized linear regression models are discussed in the following text.

Ridge Regression: Ridge regression (also known as Tikhonov regularization) is a popular regularization method for the linear regression algorithm in ML [23]. It is used for dirty datasets that contain outliers, interdependent features, and multicollinearity (highly correlated variables). It does not set coefficient values to zero; instead, it produces nonsparse solutions and forces the weights to be small. In this method, the ordinary least squares objective is modified to also minimize the squared sum of the coefficients (L2 regularization) [24]. Thus, the complexity of the ML model can be reduced via ridge regression.

LASSO Regression: The least absolute shrinkage and selection operator (LASSO), like ridge regression, is used for building regression ML models from datasets with a large number of features [24]. It reduces the complexity of the model to prevent overfitting. In the LASSO regression, the ordinary least squares objective is modified to also minimize the absolute sum of the coefficients (L1 regularization). LASSO minimizes the prediction error for a quantitative response variable by shrinking some coefficients to exactly zero, thereby finding the best subset of predictors. The LASSO regression is mainly used to eliminate less important features to obtain a better subset of variables [23].
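Ridge regression has a convenient closed form; a minimal NumPy sketch (for simplicity, the intercept column is penalized along with the other coefficients):

```python
import numpy as np

def ridge_fit(X, y, alpha=1.0):
    """Closed-form ridge regression: minimizes ||y - Xb||^2 + alpha*||b||^2,
    giving b = (X^T X + alpha*I)^{-1} X^T y."""
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_features), X.T @ y)

# y = 1 + 2*x with small noise; a small alpha barely shrinks the solution.
rng = np.random.default_rng(3)
x = rng.uniform(-1, 1, size=100)
y = 1.0 + 2.0 * x + 0.01 * rng.normal(size=100)
X = np.column_stack([np.ones_like(x), x])  # intercept column plus feature
b = ridge_fit(X, y, alpha=1e-3)
print(b)  # approximately [1.0, 2.0]
```

Increasing `alpha` shrinks the coefficients toward zero, trading a little bias for lower variance; LASSO has no such closed form and is typically solved with coordinate descent.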
Machine learning in materials modeling and design
3.5 Deep learning

Deep learning (DL) is a type of ML approach that imitates the way the brain gains knowledge, using applied mathematics and statistics. Nowadays, DL is widely applied in predictive modeling to process substantial amounts of data and to automate predictive reasoning through algorithms stacked in a hierarchy of increasing complexity. There are many well-developed DL algorithms, such as the multilayer perceptron (MLP) and convolutional neural networks (CNNs) [24]. The biggest challenge for DL algorithms is learning from small datasets with only one source, which cannot be representative of wide-ranging functional areas [25]. If the data contain biases that are reproduced in the predictions, this can be a vexing concern for DL models [26,27]; the model should therefore be trained to differentiate sensitive variations in the data. Reinforcement learning is a crowning extension of DL that learns on the basis of trial and error. It mostly performs tasks without a human operator's guidance, and this kind of DL approach is capable of accomplishing human-level performance in fields such as robotics [28].

The artificial neural network (ANN) (also called a deep neural network, which underpins DL) is one of the most widely used DL algorithms. The ANN is inspired by simulating a sophisticated network of neurons in computers; it mimics the sensory processing of neurons by applying different algorithms and learning from the provided input datasets [29]. The ANN has practical implementations in real-world problems such as speech recognition, prediction of complex 3D protein structures, gene prediction, and classification of cancers [30]. A simple model of a neuron and its function is illustrated in Fig. 4. In Fig. 4, the threshold unit of the McCulloch-Pitts neuron model receives input data from N external sources (numbered 1 to N), where W_i is the weight associated with input X_i.
The total input is the weighted sum of all inputs, Σ_{i=1}^{N} W_i X_i = W₁X₁ + W₂X₂ + ⋯ + W_N X_N. If this value is above or below the threshold t, the output is 1 or 0, respectively. Therefore, the output can be expressed as g(Σ_{i=1}^{N} W_i X_i − t), where g is the so-called continuous transfer function (Fig. 4A). In three dimensions, a threshold can be used to classify points that can be separated by a hyperplane. Each green dot denotes input values X₁, X₂, and X₃ of class 1, and red dots correspond to class 2. In Fig. 4B, the red and green crosses illustrate the task of finding a plane that separates the red dots from the green dots. The feed-forward network shown takes seven input units, with five units in the hidden layer and one in the output layer.
Fig. 4 McCulloch-Pitts neuron model in graphical representation. (A) Input data from different sources converge into sum weight through a function. (B) Input values from different classes are distributed onto the plane. (C) Distribution of data into the feedforward neural network.
It is also known as a two-layer network, as the input layer is not counted because it does not perform any computation (Fig. 4C). The inputs from external sources are weighted and summed by the network; the set of weights that places every input at a certain threshold level defines what is called a hyperplane, and this threshold is mainly used for solving classification problems. Computer programs simulate learning by changing the weights of the inputs with each new example in a way that improves the classification [31]. In most cases, nonlinear problems are handled by introducing extra hidden layers of threshold units to separate the data, with the final layer assembling the results for the final classification. This so-called feed-forward network is mainly used for regression problems [32].
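A McCulloch-Pitts threshold unit takes only a few lines of code; with suitable weights and threshold it reproduces simple logic gates:

```python
def mcculloch_pitts(inputs, weights, threshold):
    """McCulloch-Pitts threshold unit: fires (returns 1) when the
    weighted sum of inputs meets or exceeds the threshold t."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

# With unit weights and threshold 2, the unit implements logical AND.
print(mcculloch_pitts([1, 1], [1, 1], 2))  # -> 1
print(mcculloch_pitts([1, 0], [1, 1], 2))  # -> 0
# Lowering the threshold to 1 turns the same unit into logical OR.
print(mcculloch_pitts([1, 0], [1, 1], 1))  # -> 1
```

A single unit of this kind can only separate classes with a hyperplane; stacking layers of such units, as the feed-forward network in Fig. 4C does, is what allows nonlinear decision boundaries.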
4. Applications of machine learning in materials design and modeling

ML algorithms have been increasingly implemented to develop surrogate or reduced-order models with excellent predictive capability in materials
design and discovery. By extracting molecular features based on the chemical structure of the molecules, ML models can be advantageous for materials design compared with a trial-and-error design process, saving enormous time and resources in developing high-performance materials. In this section, we briefly review several examples of applications of ML in materials design and modeling.
4.1 Machine learning in prediction of materials properties

Here, we review two ML models that offer predictive frameworks to calculate the glass transition temperature (Tg) of polymers from the geometry of the repeat unit. Although both models are developed based on regression analysis, each follows a specific feature selection technique, since their defined molecular features are inherently different.

4.1.1 A surrogate model to predict glass transition of conjugated polymers
Semiconducting conjugated polymers (CPs) have exhibited potential in a wide range of optoelectronic applications due to their easy processability and tunable optoelectronic and mechanical performance. The Tg is a fundamental property that governs the chain dynamics and thermomechanical properties of CPs. Despite the vital role of Tg in the applicability of these materials, it remains challenging to measure the Tg of CPs experimentally due to their heterogeneous chain architectures and diverse chemical structures. To address this challenge, Alesadi and coworkers developed a simplified ML model to predict Tg for a wide range of CPs directly from the geometry of the repeat unit [33]. With 154 diverse Tg data points collected from experiments and simulations, an ML model was developed based on ordinary least squares (OLS) multiple linear regression. At first, more than 30 structural features were defined from the geometry of the monomer. Next, as Fig. 5 shows, a backward elimination approach was employed as a feature selection technique to find the structural features that govern Tg [35]. In the backward elimination, a significance level of 5% was used to test the null hypothesis for each feature based on its p-value. The regression model is trained on the dataset, and the p-value of each feature is calculated, which determines whether the feature would generalize to any population beyond the current sample.
Iterating over several steps, the feature with the highest p-value is eliminated from the predictive model, and this process is repeated for the rest of the features in the dataset. This mechanism continues until all p-values of the remaining features in the
Fig. 5 Flowchart of the integrated machine learning framework to predict glass transition temperature (Tg) of conjugated polymers [34]. At the bottom, comparison between Tg predicted by the regression model and experimental and simulation data is shown. Eighty percent of the dataset was divided into the training set, and 20% was preserved for out-of-sample testing. The contribution of features in the regression model was assessed by the value of the t-statistic (t-stat). Larger absolute values of t-statistic mean a more noticeable contribution.
dataset are less than the significance level. Then, the final regression model is trained on the selected features to make the Tg prediction. As the outcome of the backward elimination technique, six chemical structural features (i.e., side chain fraction, isolated rings, fused rings, bridged rings, double/triple bonds, and halogen atoms) were retained as input molecular features, showing a noticeable statistical correlation with Tg. The multiple linear regression model in terms of structural features was finally determined as follows:

T_g = β₀ + Σ_i β_i X_i,    X_i = [NSC, NFCL, NDTB, NIR, NFR, NBR] / N    (14)
where β₀ is a constant, β_i are the contribution coefficients of each feature in the ML model, N stands for the total number of atoms in the repeat unit (excluding hydrogens), NSC is the number of carbon atoms in the alkyl side chains, NFCL stands for the total number of fluorine and chlorine atoms in the monomer, NDTB is the number of double/triple bonds that are not part of aromatic rings, NIR is the number of free/isolated rings either in the backbone or in side groups, NFR stands for the number of fused rings in which two adjacent carbon atoms are shared, and NBR is the number of bridged rings where adjacent rings are bridged via valence bonds. Eventually, the contribution of each feature (β_i) was determined by fitting the regression model to the training set, and the ML model for Tg prediction is presented as follows:

T_g (°C) = 109.1 − 232.8 (NSC/N) − 296.2 (NFCL/N) + 161.9 (NDTB/N) + 449 (NIR/N) + 670.4 (NFR/N) + 1475.3 (NBR/N)    (15)
A comparison between predicted Tg and experimental/computational values is shown in Fig. 5 for the training and test sets. The evaluation of the model exhibits appropriate predictive performance, as measured by a coefficient of determination (R²) of 0.86 and an RMSE of 23.5°C for the training set. In addition, the test set yields an R² of 0.81 and an RMSE of 25.3°C, showing satisfactory performance on an out-of-sample dataset for the proposed ML model. Furthermore, the t-statistic parameter was employed to determine the contribution of each structural feature to the prediction of Tg, where aromatic rings and side chains were found to have the most significant influence. The evaluation of the results demonstrates that the proposed simplified ML model offers an effective strategy to predict the Tg of newly designed CPs before going through any synthesis process.
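As an illustration, Eq. (15) is trivial to evaluate in code; the coefficients below follow the reconstruction of the equation given above (the signs of the side-chain and halogen terms are inferred), and the example repeat-unit counts are hypothetical:

```python
def tg_celsius(n_atoms, nsc, nfcl, ndtb, nir, nfr, nbr):
    """Surrogate Tg model of Eq. (15), with all structural counts
    normalized by N, the number of non-hydrogen atoms in the repeat unit.
    Coefficients as reconstructed in the text; for illustration only."""
    return (109.1
            - 232.8 * nsc / n_atoms
            - 296.2 * nfcl / n_atoms
            + 161.9 * ndtb / n_atoms
            + 449.0 * nir / n_atoms
            + 670.4 * nfr / n_atoms
            + 1475.3 * nbr / n_atoms)

# Hypothetical repeat unit: 20 heavy atoms, 6 side-chain carbons,
# one isolated ring, and one fused ring -- illustrative numbers only.
print(round(tg_celsius(20, nsc=6, nfcl=0, ndtb=0, nir=1, nfr=1, nbr=0), 1))  # -> 95.2
```

The signs reflect the physical trends reported in the text: flexible side chains and halogens depress Tg, while rings and unsaturated bonds raise it.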
4.1.2 Quantitative structure-property relationship (QSPR) for Tg prediction of polymers
Quantitative structure-property (or activity) relationship (QSPR or QSAR) models are regression or classification approaches that are broadly utilized in many scientific fields, including materials discovery. These models establish a mathematical relationship between a property (i.e., the target variable) and a set of physicochemical properties called molecular descriptors [34]. The molecular descriptors are generated through a mathematical procedure that transforms the chemical structure of the molecules into numerical features. These descriptors represent various physical and chemical properties, such as polarizability, dipole moment, toxicity, and molecular topology. Overall, interpreting the molecular descriptors in QSPR models is somewhat more complex than in the simplified ML model reviewed previously; however, these models generally provide a higher level of statistical accuracy.

Here, we describe a QSPR model recently developed by Karuth and coworkers to predict the Tg of a diverse set of polymers [36]. In that work, they collected 100 experimental Tg data points of different polymers from previously published articles. Then, molecular descriptors were extracted based on the chemical structure of the monomers, with more than 4500 descriptors corresponding to 0D-, 1D-, 2D-, and 3D-structure-based classes included. To find the molecular descriptors most relevant to Tg (i.e., maximizing the predictive power of the models), the genetic algorithm (GA) approach was utilized, which evaluates descriptors/features based on their performance on the training partition of the dataset.
Eventually, their regression model was developed based on the seven most important molecular descriptors (see Table 1 for a description of each of these molecular features):

log T_g = 0.42 AVS_B(e) + 0.05 Mor06v + 0.60 RARS + 0.05 nCt − 0.32 nOxiranes + 0.20 SsssSiH + 0.11 B02[N-O] + 0.650    (16)

For the regression analysis, as used by Karuth and coworkers, R² is often used to examine the goodness of fit. For the reviewed QSPR model, as shown in Fig. 6A, R² values of 0.75 and 0.74 were obtained for the training and test sets, respectively [36]. Furthermore, as Fig. 6B indicates, Y-scrambling (randomization) was implemented to check whether the independent variables are correlated by chance with the response variable (i.e., Tg). To this end, the Tg vector was randomly shuffled while the descriptor matrix was kept intact. During this procedure, over several iterations, each randomly generated QSPR model has to show an R² or Q² (i.e., cross-validated
Table 1 Description of molecular descriptors employed in the proposed seven-variable QSPR model.

Descriptor | Description | Type
AVS_B(e) | Average vertex sum from Burden matrix weighted by Sanderson electronegativity | 2D matrix-based descriptors
Mor06v | Signal 06 / weighted by van der Waals volume | 3D-MoRSE descriptors
RARS | R matrix average row sum | GETAWAY descriptors
nCt | Number of total tertiary C(sp3) | Functional group counts
nOxiranes | Number of ethylene oxide (oxirane) groups | Functional group counts
B02[N-O] | Presence/absence of N-O at topological distance 2 | 2D atom pairs
SsssSiH | Sum of sssSiH E-states | Atom-type E-state indices
correlation coefficient) significantly lower than that of the original model, which can be considered strong evidence that the proposed model is well established and not merely the result of chance correlation (see Fig. 6B). Furthermore, the authors implemented coarse-grained (CG) molecular dynamics (MD) simulations to examine the interpretability of their predictive model from a physicochemical perspective (Fig. 6C). Interestingly, the CG-MD simulations confirmed the contribution of the selected molecular features to the determination of Tg. As Fig. 6D indicates, the seven molecular descriptors map onto three physical parameters: cohesive energy, chain stiffness, and grafting density. In the CG-MD simulations, these parameters were systematically varied, and the results showed that increasing the backbone stiffness and the cohesive energy of the backbone particles considerably enhances Tg (see Fig. 6E). Moreover, variation of the cohesive energy of the side-chain CG particles showed a dual impact on Tg: for lower values of side-chain cohesive energy, increasing the side-chain length tended to reduce Tg; however, beyond a critical value, larger cohesive energies reverse the role of the side-chain groups and eliminate their Tg-decreasing influence (see Fig. 6F). Overall, integrating CG-MD simulations and QSAR models provided a physically validated and reliable method to quantify Tg.
4.2 Material classification via machine learning
Compared with conventional simulation approaches, ML can efficiently identify patterns in large, high-dimensional datasets, quickly extract useful information, and uncover hidden relationships. It is therefore well suited to material classification. In
224
Kamrun N. Keya et al.
Fig. 6 (A) Correlation between the experimental and predicted values of Tg of polymers in the QSPR model. (B) Representation of the y-scrambling plot; blue and yellow dots (highlighted by a dashed circle) represent the R2 and Q2 values of the original QSPR model, respectively. (C) Snapshots of a linear polymer chain with grafting density f = 0, an alternately branched polymer chain with f = 0.5, and a fully branched polymer chain with f = 1. (D) Categorization of the selected molecular descriptors into three physical parameters of the CG models. (E) Contour plot of glass transition temperature (Tg) in the plane of chain rigidity versus cohesive energy of the chain backbone of the CG model. (F) Contour plot of Tg in the plane of grafting density versus cohesive energy of the side chains [36].
Machine learning in materials modeling and design
225
this section, we will present two case studies of applying ML to material classification.

4.2.1 Decision tree to synthesize new AB2C Heusler compounds
As discussed in Section 3, the decision tree (DT) is a typical classification method that induces a set of classification rules from the training set with the objective of correctly classifying examples. Fig. 7 illustrates the architecture of a DT. The DT method typically consists of three steps: feature selection, DT generation, and DT pruning. The objective of feature selection is to retain features that exhibit sufficient classification performance; the objective of pruning is to make the tree simpler and thus more generalizable. Multiple DTs may correctly classify the training data; hence, it is crucial to choose a DT that is minimally inconsistent with the training data while remaining sufficiently generalizable. Oliynyk and coworkers used DTs to guide the synthesis of new AB2C Heusler compounds [37]. The training data were collected from Pearson's Crystal Data and the ASM Alloy Phase Diagram Database under the following conditions: (a) the phases do not contain hydrogen, noble gases, or radioactive or actinide elements; and (b) the phases exhibit exact 1:2:1 stoichiometry, contain three components, and are thermodynamically stable. Twenty-two features (e.g., the group number of element B, the total number of p valence electrons, the radius difference between A and B, and the electronegativity of each element) were selected to represent the Heusler compounds. The random forest algorithm (which consists of multiple DTs) was applied to train multiple predictors and combine their outputs into a single final prediction. Each subpredictor is a DT that
Fig. 7 Schematic representation of a single decision tree, with a fraction of descriptors applied to a fraction of data, for prediction of the Heusler structure in a candidate AB2C.
has been trained on a small subset of the training data, where the possible descriptors at each branch point are a random subset of the descriptor list. An example of one such decision tree is illustrated in Fig. 7.

4.2.2 Random forest for studying the chemical toxicity of quantum dots
An alternative approach to improving predictive power is to generate ensembles of decision trees. In the random forest approach, for example, an ensemble of decision trees is created by introducing a stochastic element into the tree-construction algorithm. The prediction is then based on the mode (for classification) or the average (for regression) of the predictions made by the members of the ensemble. As early as 1993, the use of ML to study the solubility of C60 was proposed [38]. ML has since been widely used to predict the toxicity of nanomaterials, discover new nontoxic nanoparticles, develop multistructure/single-property relationships of nanoparticles, study quantum-mechanical observables of molecular systems, analyze chemical reactions of nanomaterials, and solve kinetic systems [39–42]. Oh et al. successfully applied metaanalysis to study the chemical toxicity of quantum dots (QDs). They used text mining to extract relevant QD toxicity data from 307 studies in the literature and applied a random forest regression model to analyze the data [42]. According to their results, toxicity was closely related to the surface properties (e.g., the shell, ligand, and surface modification), the diameter, the assay type, and the exposure time of the QDs. Fig. 8 illustrates the 14 attributes found to influence QD toxicity [43].
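The aggregation rule described above (mode for classification, average for regression) can be sketched directly with bootstrap-resampled decision trees. The data here are synthetic placeholders, not the QD toxicity dataset, and a full random forest would additionally randomize the features considered at each split:

```python
import numpy as np
from collections import Counter
from sklearn.tree import DecisionTreeClassifier, DecisionTreeRegressor
from sklearn.utils import resample

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 5))
y_class = (X[:, 0] > 0).astype(int)                 # toy class labels
y_reg = X[:, 0] ** 2 + 0.1 * rng.normal(size=300)   # toy continuous target

def bagged_trees(tree_cls, X, y, n_trees=25):
    """Train an ensemble of trees, each on a bootstrap sample of the data."""
    trees = []
    for seed in range(n_trees):
        Xb, yb = resample(X, y, random_state=seed)  # sample with replacement
        trees.append(tree_cls(random_state=seed).fit(Xb, yb))
    return trees

x_new = np.zeros((1, 5))
x_new[0, 0] = 1.5  # query point

# Classification: each tree votes; the mode of the votes is the prediction.
votes = [t.predict(x_new)[0] for t in bagged_trees(DecisionTreeClassifier, X, y_class)]
class_pred = Counter(votes).most_common(1)[0][0]

# Regression: the prediction is the average over the ensemble members.
preds = [t.predict(x_new)[0] for t in bagged_trees(DecisionTreeRegressor, X, y_reg)]
reg_pred = float(np.mean(preds))

print("class:", class_pred, "value:", round(reg_pred, 2))
```

In practice one would use scikit-learn's ready-made RandomForestClassifier/RandomForestRegressor, which combine this bagging with per-split feature subsampling.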
Fig. 8 Metaanalysis of cellular toxicity for cadmium-containing quantum dots (QDs). Researchers used data mining to collect toxicity data of QDs, and the random forest was used to identify relevant QD data attributes and to develop robust data-driven models of QD toxicity.
4.3 Advances of machine learning in molecular model development

4.3.1 Development of molecular force fields via machine learning
To meet the growing need for realistic simulations of large systems, improved interatomic potentials have been developed using ML. These improved potentials are called machine learning potentials, also known as ML force fields or interatomic ML potentials. ML potentials evaluate the energies and forces of the simulated system efficiently while retaining near first-principles accuracy. They are usually fitted to ab initio or DFT-calculated reference data for training and parameterization, and there is a trade-off between the accuracy and computational speed of the different methods used to generate such reference data [44]. Different types of ML potentials are already available, built with approaches such as neural network and kernel methods. Typically, the ML method constructs a relationship between the atomic configuration of the system and its energy from a set of precomputed atomic structures and energies, without invoking any physical approximations. Specifically, the first step when using an ML potential in MD simulations is to transform the atomic positions into a set of coordinates that serve as input descriptors for the chosen ML method. Many methods can be used to compute geometric features of atomic configurations, such as atom-centered symmetry functions, the bispectrum of the neighbor density, and the smooth overlap of atomic positions [45–47]. Subsequently, the functional form provided by the ML method assigns an energy to the structure. However, because their physical basis is not transparent, ML potentials must be carefully tested against reference data [44].

4.3.2 Machine learning-informed coarse-grained modeling
Understanding the complex behaviors of soft materials at the fundamental molecular level is a critical step in their design, characterization, and development.
Evaluating these properties using all-atomistic (AA) MD simulations is inherently challenging since such simulations are limited to restricted time and length scales. To overcome this computational barrier, CG models aim to eliminate unimportant structural features and provide more computationally efficient models. By reducing the degrees of freedom of the CG-MD models, extended time and length scales become accessible for probing the properties of interest. Among the different CG methods reviewed in previous chapters, the energy renormalization (ER) approach has shown strong performance in simulating soft materials, since ER-based CG models can
capture the “transferability” of the system [48]. Inherently, CG models fail to capture the temperature dependence of molecular friction parameters and relaxation processes, resulting in artificially accelerated dynamics. The ER approach addresses this issue by matching the dynamics of the CG model to those of the underlying AA model through renormalizing the enthalpy of the system, based on the established “entropy-enthalpy compensation” effect. Going one step further with ML techniques, Keten et al. extended the ER strategy to a CG model for epoxy resins, focusing on transferability across the degree of crosslinking (DC) while simultaneously matching the density, dynamics, and mechanical properties of the models [49]. To this end, as shown in Fig. 9, the force field parameters are calibrated with an ML method based on Gaussian processes [50]. To develop the model, seven different CG bead types are defined to capture the chemical heterogeneity of the epoxy system. This CG strategy results in 14 adjustable parameters for the nonbonded interactions (i.e., ε and σ for each Lennard-Jones potential). The DC-dependent parameters are then calibrated simultaneously against four target properties of the AA system (i.e., density, Debye-Waller factor, Young's modulus, and yield stress). Finally, the ML approach is used to find the optimal set of calibration parameters in this high-dimensional space from a training set of CG and AA simulations. This ER-ML approach offers extremely efficient parametrization of high-dimensional models by incorporating multiresponse calibration, avoiding costly AA and CG simulations. The resulting CG model is found to be 10^3 times faster than the AA system, allowing exploration of a larger class of epoxy resins.
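A minimal sketch of this kind of Gaussian-process-based calibration is given below, with a cheap analytic function standing in for the expensive CG-MD simulation and a single target property instead of the four used in the actual ER-ML study; the parameter ranges, the toy property function, and the kernel settings are illustrative assumptions only:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# Stand-in for an expensive CG-MD run: maps (epsilon, sigma) to a scalar
# property such as density. A real workflow would call the simulator here.
def cg_simulation(params):
    eps, sigma = params
    return 1.2 * eps + 0.4 * sigma**2

aa_target = cg_simulation((0.8, 1.1))  # target property taken from the AA model

# Train a GP surrogate on a handful of simulated parameter sets.
X_train = rng.uniform([0.2, 0.8], [1.5, 1.4], size=(30, 2))
y_train = np.array([cg_simulation(p) for p in X_train])
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), normalize_y=True)
gp.fit(X_train, y_train)

# Calibration: search a dense candidate set for the parameters whose
# surrogate-predicted property best matches the AA target.
candidates = rng.uniform([0.2, 0.8], [1.5, 1.4], size=(5000, 2))
best = candidates[np.argmin(np.abs(gp.predict(candidates) - aa_target))]
print("calibrated (epsilon, sigma):", best)
```

The surrogate replaces thousands of simulator calls with cheap predictions, which is what makes calibrating 14 coupled parameters against multiple responses tractable.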
4.4 Application of machine learning in designing biomaterials
The dynamic and exciting field of biomaterials is hampered by a relatively expensive and time-consuming development cycle. In recent years, driven by the dire need to accelerate biomaterial development, the combination of ML with theoretical predictions has shifted traditional high-throughput experimentation toward a data-driven paradigm. ML algorithms are used in the discovery phase of biomaterials to predict bioactive moieties, optimize biomaterial formulations, screen fabrication conditions, and learn material interactions with biological systems, such as antifouling behavior.
Fig. 9 Flowchart of the coarse-graining parametrization for molecular modeling using Gaussian processes and the energy renormalization approach.
4.4.1 AlphaFold prediction of protein structure
Proteins are an essential part of life, and the significance of their 3D structures cannot be overstated. Protein structures help explain biological function, support the mechanistic understanding of various biological interactions, and guide drug design. Many life-threatening diseases, such as Parkinson's and Alzheimer's diseases, are related to misfolded protein structures. Meanwhile, determining the structure of a single protein through experimental techniques is bottlenecked by painstaking effort that takes months to years and costs tens of thousands of dollars. There is a great demand for accurate computational approaches to address this challenge and fill the large-scale gap in structural bioinformatics. The automated prediction of protein structure from the amino acid sequence, known as the protein folding problem, has been a fundamental open research challenge in biology for more than 50 years and is considered the holy grail of molecular biophysics [50–52]. DeepMind caught researchers' attention with its AI system AlphaFold by winning the 14th Critical Assessment of Structure Prediction (CASP14). AlphaFold is a new ML approach that demonstrates atomic accuracy competitive with experimental structures and outperforms all other available methods [53,54]. In CASP14, AlphaFold-predicted protein structures were far more accurate than those of competing methods, with a backbone precision of 0.96 Å r.m.s.d.95 (Cα root-mean-square deviation at the 95% confidence interval). AlphaFold produces accurate side chain predictions when the backbone is highly accurate and improves substantially when strong templates are available. In the CASP14 competition, the all-atom accuracy of AlphaFold was 1.5 Å r.m.s.d.95 (95% confidence interval = 1.2–1.6 Å), compared with the 3.5 Å r.m.s.d.95 (95% confidence interval = 3.1–4.2 Å) of the best alternative method. The AlphaFold Protein Structure Database is a new resource created in collaboration between the EMBL-European Bioinformatics Institute (EMBL-EBI) and DeepMind. The first release of the database covers the human proteome and 20 other model organisms, and the database is still expanding to cover a large proportion of all cataloged proteins (over 130 million protein cluster representatives) [53,54].

4.4.2 AlphaFold network
AlphaFold combines novel neural network architectures with training procedures that exploit biological, physical, and evolutionary knowledge as geometric constraints on the 3D protein structure, feeding the sequence into the deep learning algorithm. Fig. 10A depicts the architecture and the handling
Fig. 10 (A) Overview of the AlphaFold architecture. Arrows show the information flow among the various components as described in the text. The MSA representation array (orange) continuously exchanges information with the pair representation array (blue) to refine the structural predictions. An example of a well-predicted AlphaFold serum albumin protein: (B) predicted aligned error (PAE) of the top five models of predicted albumin (the blue region shows high confidence in the atomic positions); (C) the AlphaFold-predicted albumin structure (AlphaFold P02769, blue) compared with the true experimental structure (PDB 6RJV, green), with 0.88 Å RMSD between the aligned structures; (D) high per-residue pLDDT scores likewise indicate a highly confident structure at the respective positions.
of inputs, including MSA construction, use of templates, and databases. The network comprises two main stages. The first is the trunk, which processes the inputs through multiple repeated layers of the neural network to produce an Nseq × Nres array (Nseq, the number of sequences; Nres, the number of residues) representing the processed multiple sequence alignment (MSA), along with an array of residue pairs [55]. The trunk is followed by the structure module, which operates on the structure representation directly and encodes the 3D backbone as a translation and a rotation of each residue with respect to the global frame [56]. An example of the structural alignment of the bovine serum albumin protein, comparing the AlphaFold-predicted and experimentally determined structures, is provided in Fig. 10C. The predicted protein structures comprise atomic coordinates and per-residue confidence estimates on a scale of 0–100, with higher values indicating higher confidence. The predicted aligned error (PAE) gives the expected error at position x if the predicted and experimental structures are aligned on position y (using the C, N, and Cα atoms), as shown in Fig. 10B. PAE can be used to assess the confidence in the orientation and relative position of different domains of the structure [57]. This per-residue confidence measure corresponds to the predicted score on the local distance difference test (lDDT) metric and is called the predicted lDDT (pLDDT). lDDT is a preexisting metric used to assess accuracy in the protein structure prediction field; it awards a high score to well-predicted domains even when the full protein prediction cannot be aligned to the experimental structure (Fig. 10D). AlphaFold demonstrates the value of AI in advancing discoveries in some of the primary fields of science.
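A simplified, superposition-free lDDT score can be sketched as follows; this toy version checks only pairwise Cα distance preservation against the standard 0.5/1/2/4 Å thresholds and omits the filtering and stereochemistry details of the full metric:

```python
import numpy as np

def lddt(ref, model, cutoff=15.0, thresholds=(0.5, 1.0, 2.0, 4.0)):
    """Simplified lDDT for Calpha traces, scaled to 0-100.

    For every residue pair closer than `cutoff` in the reference, checks
    whether the model preserves that distance within each threshold; the
    score is the mean preserved fraction. No superposition is needed, so
    a well-predicted domain scores highly even if the rest of the chain
    cannot be aligned to the experimental structure.
    """
    d_ref = np.linalg.norm(ref[:, None] - ref[None, :], axis=-1)
    d_mod = np.linalg.norm(model[:, None] - model[None, :], axis=-1)
    n = len(ref)
    mask = (d_ref < cutoff) & ~np.eye(n, dtype=bool)
    diffs = np.abs(d_ref - d_mod)[mask]
    return 100.0 * float(np.mean([(diffs < t).mean() for t in thresholds]))

# Toy Calpha trace: 30 residues spaced ~3.8 angstroms apart.
ref = np.cumsum(np.full((30, 3), [3.8, 0.2, -0.1]), axis=0)
noisy = ref + np.random.default_rng(0).normal(scale=1.0, size=ref.shape)
print(lddt(ref, ref))    # an identical model scores 100.0
print(lddt(ref, noisy))  # coordinate noise lowers the score
```
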
The explosion of genome sequencing methods and large databases has revolutionized bioinformatics, but the inherent difficulty of experimental structure determination has limited the expansion of structural knowledge. We can accelerate the development of structural bioinformatics by developing precise structure prediction algorithms and coupling them with the existing well-curated sequence and structure databases compiled through experimental techniques [58]. Moreover, AlphaFold uses a learned ML reference state to predict a protein-specific statistical potential, rather than a physics-based reference state. The performance of AlphaFold represents a huge leap in protein 3D structure prediction; however, it is not the endgame of the field. The average accurately predicted global distance topology score determined by AlphaFold is less than 40, where 100 is the perfect score [58,59]. These AlphaFold predictions are highly promising to serve
as a starting point for the scientific community to compute and enrich protein structure models, improving the next generation of structural biology knowledge.
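The backbone and all-atom accuracies quoted in Section 4.4.1 are RMSD values computed after optimal superposition of the predicted and experimental coordinates. A minimal sketch of that calculation, using the Kabsch algorithm on arbitrary toy coordinates (not real protein data), is:

```python
import numpy as np

def kabsch_rmsd(P, Q):
    """RMSD between two (N, 3) coordinate sets after optimal superposition.

    Centers both sets, finds the optimal rotation with the Kabsch algorithm
    (SVD of the covariance matrix), and returns the root-mean-square
    deviation. Applied to Calpha coordinates, this is the quantity reported
    as backbone r.m.s.d. in structure-prediction assessments.
    """
    P = P - P.mean(axis=0)
    Q = Q - Q.mean(axis=0)
    U, _, Vt = np.linalg.svd(P.T @ Q)
    d = np.sign(np.linalg.det(U) * np.linalg.det(Vt))  # forbid reflections
    P_rot = P @ U @ np.diag([1.0, 1.0, d]) @ Vt        # optimally rotated P
    return float(np.sqrt(np.mean(np.sum((P_rot - Q) ** 2, axis=1))))

# Demo: a rigid transform of the same points gives RMSD ~ 0.
rng = np.random.default_rng(7)
P = rng.normal(size=(50, 3))
R, _ = np.linalg.qr(rng.normal(size=(3, 3)))
if np.linalg.det(R) < 0:
    R[:, 0] *= -1                                # ensure a proper rotation
Q = P @ R.T + np.array([4.0, -2.0, 9.0])         # rotate and translate
print(kabsch_rmsd(P, Q))                         # effectively zero
```

The r.m.s.d.95 variant reported at CASP additionally restricts the sum to the best-fitting 95% of residues, which reduces the influence of a few badly modeled loops.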
5. Future outlook
In this chapter, we reviewed several machine learning (ML) algorithms and their recent applications in the field of materials design and modeling. The development of ML approaches greatly facilitates finding novel materials with tailored performance and leads to noticeable technological advances by learning complex structure-property relationships. The ML-based materials-by-design approach has enormous potential to replace purely empirical approaches. Despite tremendous progress, the use of ML in materials research is still limited by the scarcity of available datasets. More data will undoubtedly enable the development of more realistic and versatile ML models for materials development. Furthermore, the storage and management of high-volume datasets remain major challenges that need to be addressed in materials discovery research. To date, many small datasets are generated and stored within individual groups and research laboratories, and many of them are not publicly available. A comprehensive platform to safely store and unify these datasets would accelerate materials development. In addition, despite employing powerful statistical methods, ML models do not necessarily obey the fundamental principles of the physics and chemistry of materials; in such scenarios, the models may not generalize to classes of materials they have not been trained on. To avoid such biased models, ML outputs need to be examined by materials scientists, who can probe model explanations with the aid of experimental and physics-based modeling approaches.
References
[1] K. Guo, Z. Yang, C.-H. Yu, M.J. Buehler, Artificial intelligence and machine learning in design of mechanical materials, Mater. Horiz. 8 (4) (2021) 1153–1172.
[2] S. Kumar, I.M. Nezhurina, Machine learning applications for design of new materials: a review, Int. Sci. J. "Industry 4.0" 3 (4) (2018) 186–189.
[3] S. Axelrod, D. Schwalbe-Koda, S. Mohapatra, J. Damewood, K.P. Greenman, R. Gómez-Bombarelli, Learning matter: materials design with machine learning and atomistic simulations, Acc. Mater. Res. 3 (3) (2022) 343–357.
[4] Introduction to Semi-Supervised Learning, Javatpoint, www.javatpoint.com, 2022. Retrieved from: https://www.javatpoint.com/semi-supervised-learning (Retrieved 25 August 2022).
[5] J. Brownlee, Basics of Linear Algebra for Machine Learning: Discover the Mathematical Language of Data in Python, Machine Learning Mastery, 2018.
[6] A. DasGupta, Probability for Statistics and Machine Learning: Fundamentals and Advanced Topics (Springer Texts in Statistics), 2011th ed., Springer, New York, 2011.
[7] N. Ratliff, Multivariate Calculus II: The Geometry of Smooth Maps, Lecture Notes: Mathematics for Intelligent Systems series, 2014.
[8] A.J. Kulkarni, S.C. Satapathy (Eds.), Optimization in Machine Learning and Applications, Springer, Heidelberg, 2020.
[9] K. Menon, Everything You Need to Know About Feature Selection in Machine Learning, Simplilearn.com, 2021, September 16. https://www.simplilearn.com/tutorials/machine-learning-tutorial/feature-selection-in-machine-learning.
[10] M.F. Ak, A comparative analysis of breast cancer detection and diagnosis using data visualization and machine learning applications, Healthcare 8 (2) (2020) 111.
[11] J. Wei, X. Chu, X. Sun, K. Xu, H. Deng, J. Chen, Z. Wei, M. Lei, Machine learning in materials science, InfoMat 1 (3) (2019) 338–358.
[12] M.N. Sohail, R. Jiadong, M.M. Uba, M. Irshad, A comprehensive look at data mining techniques contributing to medical data growth: a survey of researcher reviews: Proceedings of ICCD 2017, in: Recent Developments in Intelligent Computing, Communication and Devices, Springer, 2019, pp. 21–26.
[13] Random Forest Simple Explanation, 2022. Retrieved from: https://williamkoehrsen.medium.com/random-forest-simple-explanation-377895a60d2d (Retrieved 24 August 2022).
[14] T. Şanlı, Ç. Sıcakyüz, O.H. Yüregir, Comparison of the accuracy of classification algorithms on three data-sets in data mining: example of 20 classes, Int. J. Eng. Sci. Technol. 12 (3) (2020) 81–89.
[15] Machine Learning: Supervised Learning-Classification, 2022. Retrieved from: https://medium.com/machine-learning-bites/machine-learning-supervised-learning-classification-4f44a91d767 (Retrieved 24 August 2022).
[16] B. Greenwell, Principal components analysis, in: Hands-On Machine Learning with R, 2022 (Chapter 17).
Retrieved 25 May 2022, from https://bradleyboehmke.github.io/HOML/pca.html.
[17] A step-by-step explanation of principal component analysis (PCA), Built In, 2022 [Online]. Available from: https://builtin.com/data-science/step-step-explanation-principal-component-analysis.
[18] J.C. Snyder, M. Rupp, K. Hansen, K.-R. Müller, K. Burke, Finding density functionals with machine learning, Phys. Rev. Lett. 108 (25) (2012) 253002.
[19] M. Mumtaz, A. Mahmood, S.D. Khan, M.A. Zia, M. Ahmed, I. Ahmad, Investigation of dielectric properties of polymers and their discrimination using terahertz time-domain spectroscopy with principal component analysis, Appl. Spectrosc. 71 (3) (2017) 456–462.
[20] W. Zhu, Z.T. Webb, K. Mao, J. Romagnoli, A deep learning approach for process data visualization using T-distributed stochastic neighbor embedding, Ind. Eng. Chem. Res. 58 (22) (2019) 9564–9575.
[21] L. van der Maaten, G. Hinton, Visualizing data using t-SNE, J. Mach. Learn. Res. 9 (2008) 2579–2605. http://jmlr.org/papers/v9/vandermaaten08a.html.
[22] What Is Linear Discriminant Analysis (LDA)? KnowledgeHut, 2022. Retrieved 19 May 2022, from https://www.knowledgehut.com/blog/data-science/linear-discriminant-analysis-for-machine-learning.
[23] I.H. Sarker, Machine learning: algorithms, real-world applications and research directions, SN Comput. Sci. 2 (3) (2021) 160.
[24] G. Bonaccorso, Machine Learning Algorithms: Popular Algorithms for Data Science and Machine Learning, second ed., Packt Publishing Ltd, 2018.
[25] D.E. Rumelhart, G.E. Hinton, R.J. Williams, Learning representations by back-propagating errors, Nature 323 (6088) (1986) 533–536.
[26] K. Cho, B. van Merrienboer, C. Gulcehre, D. Bahdanau, F. Bougares, H. Schwenk, Y. Bengio, Learning Phrase Representations Using RNN Encoder-Decoder for Statistical Machine Translation, 2014.
[27] M.D. Zeiler, R. Fergus, Visualizing and understanding convolutional networks, in: Computer Vision – ECCV 2014, 2014, pp. 818–833.
[28] N. Qian, T.J. Sejnowski, Predicting the secondary structure of globular proteins using neural network models, J. Mol. Biol. 202 (4) (1988) 865–884.
[29] J.A. Anderson, E. Rosenfeld, A. Pellionisz (Eds.), Neurocomputing, vol. 2, MIT Press, 1988.
[30] M.L. Minsky, S.A. Papert, Perceptrons: An Introduction to Computational Geometry, MIT Press, Cambridge, 1988.
[31] A. Krogh, What are artificial neural networks? Nat. Biotechnol. 26 (2) (2008) 195–197.
[32] V. Mnih, K. Kavukcuoglu, D. Silver, A.A. Rusu, J. Veness, M.G. Bellemare, A. Graves, M. Riedmiller, A.K. Fidjeland, G. Ostrovski, S. Petersen, C. Beattie, A. Sadik, I. Antonoglou, H. King, D. Kumaran, D. Wierstra, S. Legg, D. Hassabis, Human-level control through deep reinforcement learning, Nature 518 (7540) (2015) 529–533.
[33] A. Alesadi, Z. Cao, Z. Li, S. Zhang, H. Zhao, X. Gu, W. Xia, Machine learning prediction of glass transition temperature of conjugated polymers from chemical structure, Cell Rep. Phys. Sci. 3 (6) (2022) 100911.
[34] A. Tropsha, Best practices for QSAR model development, validation, and exploitation, Mol. Inform. 29 (6–7) (2010) 476–488.
[35] R.R. Hocking, A biometrics invited paper. The analysis and selection of variables in linear regression, Biometrics 32 (1) (1976) 1.
[36] A. Karuth, A. Alesadi, W. Xia, B. Rasulev, Predicting glass transition of amorphous polymers by application of cheminformatics and molecular dynamics simulations, Polymer 218 (2021) 123495.
[37] A.O. Oliynyk, E. Antono, T.D. Sparks, L. Ghadbeigi, M.W. Gaultois, B. Meredig, A. Mar, High-throughput machine-learning-driven synthesis of full-Heusler compounds, Chem. Mater. 28 (20) (2016) 7324–7331.
[38] R.S. Ruoff, D.S. Tse, R. Malhotra, D.C. Lorents, Solubility of fullerene (C60) in a variety of solvents, J. Phys. Chem. 97 (13) (1993) 3379–3383.
[39] M. Wang, T. Wang, P. Cai, X. Chen, Nanomaterials discovery and design through machine learning, Small Methods 3 (5) (2019) 1900025.
[40] B. Sanchez-Lengeling, A. Aspuru-Guzik, Inverse molecular design using machine learning: generative models for matter engineering, Science 361 (6400) (2018) 360–365.
[41] F. Amato, J.L. González-Hernández, J. Havel, Artificial neural networks combined with experimental design: a "soft" approach for chemical kinetics, Talanta 93 (2012) 72–78.
[42] M. Maghsoudi, M. Ghaedi, A. Zinali, A.M. Ghaedi, M.H. Habibi, Artificial neural network (ANN) method for modeling of sunset yellow dye adsorption using zinc oxide nanorods loaded on activated carbon: kinetic and isotherm study, Spectrochim. Acta A Mol. Biomol. Spectrosc. 134 (2015) 1–9.
[43] E. Oh, R. Liu, A. Nel, K.B. Gemill, M. Bilal, Y. Cohen, I.L. Medintz, Meta-analysis of cellular toxicity for cadmium-containing quantum dots, Nat. Nanotechnol. 11 (5) (2016) 479–486.
[44] J. Behler, Perspective: machine learning potentials for atomistic simulations, J. Chem. Phys. 145 (17) (2016) 170901.
[45] A.P. Bartók, M.C. Payne, R. Kondor, G. Csányi, Gaussian approximation potentials: the accuracy of quantum mechanics, without the electrons, Phys. Rev. Lett. 104 (13) (2010) 136403.
[46] A.P. Bartók, R. Kondor, G. Csányi, On representing chemical environments, Phys. Rev. B 87 (18) (2013) 184115.
[47] J. Behler, M. Parrinello, Generalized neural-network representation of high-dimensional potential-energy surfaces, Phys. Rev. Lett. 98 (14) (2007) 146401.
[48] W. Xia, N.K. Hansoge, W.-S. Xu, F.R. Phelan, S. Keten, J.F. Douglas, Energy renormalization for coarse-graining polymers having different segmental structures, Sci. Adv. 5 (4) (2019).
[49] A. Giuntoli, N.K. Hansoge, A. van Beek, Z. Meng, W. Chen, S. Keten, Systematic coarse-graining of epoxy resins with machine learning-informed energy renormalization, npj Comput. Mater. 7 (1) (2021) 168.
[50] D.J.C. MacKay, Information Theory, Inference and Learning Algorithms, Cambridge University Press, 2003.
[51] C.B. Anfinsen, Principles that govern the folding of protein chains, Science 181 (4096) (1973) 223–230.
[52] A.W. Senior, R. Evans, J. Jumper, J. Kirkpatrick, L. Sifre, T. Green, C. Qin, A. Žídek, A.W.R. Nelson, A. Bridgland, H. Penedones, S. Petersen, K. Simonyan, S. Crossan, P. Kohli, D.T. Jones, D. Silver, K. Kavukcuoglu, D. Hassabis, Improved protein structure prediction using potentials from deep learning, Nature 577 (7792) (2020) 706–710.
[53] S. Wang, S. Sun, Z. Li, R. Zhang, J. Xu, Accurate de novo prediction of protein contact map by ultra-deep learning model, PLoS Comput. Biol. 13 (1) (2017) e1005324.
[54] M. AlQuraishi, AlphaFold at CASP13, Bioinformatics 35 (22) (2019) 4862–4865.
[55] A. Kryshtafovych, T. Schwede, M. Topf, K. Fidelis, J. Moult, Critical assessment of methods of protein structure prediction (CASP)-round XIV, Proteins Struct. Funct. Bioinform. 89 (12) (2021) 1607–1617.
[56] J. Jumper, R. Evans, A. Pritzel, T. Green, M. Figurnov, O. Ronneberger, K. Tunyasuvunakool, R. Bates, A. Žídek, A. Potapenko, A. Bridgland, C. Meyer, S.A.A. Kohl, A.J. Ballard, A. Cowie, B. Romera-Paredes, S. Nikolov, R. Jain, J. Adler, T. Back, S. Petersen, D. Reiman, E. Clancy, M. Zielinski, M. Steinegger, M. Pacholska, T. Berghammer, S. Bodenstein, D. Silver, O. Vinyals, A.W.
Senior, K. Kavukcuoglu, P. Kohli, D. Hassabis, Highly accurate protein structure prediction with AlphaFold, Nature 596 (7873) (2021) 583–589. [57] Z. Tu, X. Bai, Auto-context and its application to high-level vision tasks and 3D brain image segmentation, IEEE Trans. Pattern Anal. Mach. Intell. 32 (10) (2010) 1744–1757. [58] A. David, S. Islam, E. Tankhilevich, M.J.E. Sternberg, The AlphaFold database of protein structures: a biologist’s guide, J. Mol. Biol. 434 (2) (2022) 167336. [59] H. Bagdonas, C.A. Fogarty, E. Fadda, J. Agirre, The case for post-predictional modifications in the AlphaFold protein structure database, Nat. Struct. Mol. Biol. 28 (11) (2021) 869–870.
CHAPTER SEVEN
Multiscale modeling of failure behaviors in carbon fiber-reinforced polymer composites
Qingping Sun (a), Guowei Zhou (b), Zhangke Yang (c), Jane Breslin (c), and Zhaoxu Meng (c)
(a) College of Aerospace and Civil Engineering, Harbin Engineering University, Harbin, China
(b) Department of Engineering Mechanics, School of Naval Architecture, Ocean and Civil Engineering, Shanghai Jiao Tong University, Shanghai, China
(c) Department of Mechanical Engineering, Clemson University, Clemson, SC, United States
Contents
1. Introduction 239
2. Synopsis of the multiscale modeling framework 241
3. Nanoscale characterization of the interphase region 242
4. Microscale model development for UD CFRP composites 246
   4.1 UD RVE model and constitutive laws for microstructure components 246
   4.2 Boundary conditions and RVE size 249
   4.3 Failure analysis of UD CFRP composites under uniaxial stress state 250
   4.4 Failure envelopes of UD CFRP composites under multiaxial stress state 252
   4.5 Elastic-plastic-damage model for homogenized UD CFRP composites 263
5. Mesoscale model development for woven composites 269
   5.1 Woven composites description 269
   5.2 Mesoscale RVE model generation for woven composites 270
   5.3 Constitutive and damage laws 272
   5.4 Results predicted by the woven RVE model 273
6. Macroscale model of U-shaped part made of UD and woven CFRP composites 281
7. Conclusions 287
References 288
1. Introduction
With the growing attention on energy conservation and environmental protection, lightweight design of structural parts is becoming increasingly attractive in the automotive industry [1]. Generally speaking,
material replacement, structural optimization, and application of new manufacturing technology can be adopted in the lightweight design process, among which material replacement is arguably the most effective approach [2]. Carbon fiber-reinforced polymer (CFRP) composites, with a tensile strength of up to 2000 MPa along the fiber direction and very low overall density, stand as one of the most promising classes of materials to replace the engineered metals for automotive structural components. In particular, woven composites have great potential in automotive manufacturing due to their excellent mechanical performance, moderate cost, and easier fabrication compared to unidirectional (UD) CFRP composites [3,4]. To fully exploit the potential of these CFRP composites, we need to target increasing the accessibility of CFRP component designs, improving the robustness of initial designs, and most importantly, reducing the development to deployment lead time of CFRP components. The typical material design (or replacement) cycle consists of appropriate material selection followed by component size, shape design, and optimization [5]. In woven CFRP composites, the material selection also involves the selection of representative microstructure. For instance, changing the size and/or angle of fiber tows for woven composites significantly alters the anisotropic behavior of the material and its failure mechanisms, which in turn influences failure strength and fracture toughness of the material [6–8]. Thus, material design includes not only the size and geometry of constituents but also the microstructural design. Traditionally, the characterization of materials is realized by carrying out expensive and time-consuming material tests to establish stiffness (e.g., elastic modulus) and strength values in all material coordinates. 
With the advancements in computational materials science and engineering as well as multiscale modeling, the “validation through simulation” approach has been increasingly adopted to decrease the time and cost of applying new materials to automotive components. Compared to advanced metallic systems, CFRP composites have the unique features of anisotropic and heterogeneous multiscale microstructures and even larger variations in material properties induced by the injection/compression molding processes. Currently, two major challenges remain in developing a multiscale modeling framework for CFRP composites: (1) the development of high-fidelity computational models; and (2) the integration of the validated models and computational tools into an automated workflow so that the material processing, microstructure, and component structure can be optimized simultaneously [9].
Carbon fiber-reinforced polymer composites
Our previous work and that of other researchers have established computational models of CFRP composites at different scales [7,10–23]. These earlier studies on computational model development have addressed the first challenge to a large extent. The primary objective of the present work is to address the second challenge, i.e., to bridge different computational models and demonstrate an integrated framework for CFRP composites, with a particular focus on two types of thermoset epoxy-based CFRP composites widely used in structural parts: long-fiber UD composites and woven composites. We will show that the integrated framework greatly benefits the design and characterization of CFRP composites, with a targeted application of lightweight design for structural parts in automobiles.
2. Synopsis of the multiscale modeling framework

In this chapter, we present a bottom-up multiscale modeling framework that effectively captures the intrinsic relationship between CFRP structures at different length scales (i.e., constituent interface/interphase, fiber volume fraction of UD CFRP composites, woven yarn angle, ply layups, etc.) and the mechanical performance of CFRP composites. As shown in Fig. 1, we establish models at four different scales to describe the cured CFRP composites: properties of constituents (e.g., resin matrix and interphase region) at the nanoscale, a UD representative volume element (RVE) at the microscale, a woven RVE at the mesoscale, and lastly, the
Fig. 1 Schematic of the bottom-up multiscale modeling approach.
structural part at the macroscale. More importantly, these models are bridged using a bottom-up approach, in which the information obtained at lower scales is passed on to higher scales. Specifically, molecular dynamics (MD) simulations are utilized, as they offer a promising way to determine the material properties of the constituents from their molecular structures [24,25]. In particular, MD simulations are used to quantify the influence of the material gradient on the mechanical properties within the interphase region [10], which plays a critical role in the mechanical behavior of the microscale UD RVE. The UD RVE is then used to investigate the mechanical properties and failure envelopes of UD CFRP composites under multiple loading conditions, which informs the formulation of an effective elasto-plastic constitutive law with damage evolution for the homogenized UD composites. Fiber tows (i.e., yarns) in the mesoscale woven RVE have properties very similar to those of the UD RVE. Therefore, the elasto-plastic-damage material law and results from the UD RVE are used to describe the properties of the fiber tows, which are the basic structures of the mesoscale woven RVE. Furthermore, mesoscale woven models are constructed to gain in-depth insight into their complex failure mechanisms under various loading conditions. Finally, the effective properties of the mesoscale woven RVE are used to predict the homogenized mechanical performance of a U-shaped part at the macroscale. Along the way, the accuracy of the multiscale models developed by the bottom-up approach is validated by comparison with the corresponding experimental results at different length scales. In summary, the multiscale modeling framework integrates four length scales by communicating information from lower to upper scales and simultaneously achieves the capability of designing and optimizing the global and local structures and constituents down to the micro- and nanoscale.
3. Nanoscale characterization of the interphase region

Down at the nanoscale, the carbon fiber surface is quite rough, partially due to the treatments applied to the carbon fibers during the fiber manufacturing process [26]. In addition, there is a significant nanoconfinement effect from the carbon fibers on the matrix resin [27–29]. As a result, a submicron-thick interphase region exists between the carbon fibers and the matrix, as shown schematically in Fig. 2A. It has been demonstrated that the interphase properties have a significant influence on the composite mechanical performance [10,30]. For CFRP composites specifically, the
Fig. 2 (A) Schematic of the cross-section of UD CFRP composites with the interphase region highlighted. (B) Variations of Young’s modulus or strength inside the interphase region.
thickness ti of the interphase region has been evaluated to be about 200 nm [26]. To achieve a good balance of accuracy and simplicity, we have assumed the interphase region to be a cylindrical shell adjacent to the fiber, with the inner radius rf equal to the fiber radius and outer radius ri = rf + ti. In the following text, subindices f, i, and m denote the fiber, interphase region, and matrix, respectively. To characterize the average properties of the interphase region, we adopted an analytical gradient model from our previous study [10] to describe the modulus and strength profiles inside the interphase region. The essential details of the analytical gradient model are briefly summarized here. As
shown in Fig. 2B, the proposed gradient model includes two parts. In Part I (from rf to ris), Young’s modulus and strength decrease from the fiber values to the lowest values, i.e., Ems or σms. In Part II (from ris to ri), the values gradually increase from the lowest values to the properties of the matrix. The decrease in Part I is due to the attenuation of the fiber confinement effect, and the reason the mechanical properties at ris are lower than those at ri is insufficient cross-linking caused by incompatibility between the fiber treatment (sizing) and the resin matrix. The increase in Part II is attributed to the intrinsic epoxy resin stiffening through sufficient cross-linking. The position of the lowest values (ris) is assumed to be at three-quarters of the interphase width away from the fiber surface (ris − rf = 0.75(ri − rf)). The variations of the properties of the interphase region are assumed to follow exponential functions, consistent with previous studies [31]. Specifically, for Part I of the interphase region, in which the modulus and strength decay from those of the carbon fiber to the weakest point at ris:

Ei = Ems + (Ef − Ems)R(r),  σi = σms + (σf − σms)R(r)        (1)

For Part II of the interphase region, the modulus and strength recover to those of the bulk matrix:

Ei = Em + (Ems − Em)Q(r),  σi = σm + (σms − σm)Q(r)        (2)

The functions R(r) and Q(r) in Eqs. (1) and (2) are constructed to match the boundary conditions:

R(r) = [1 − (r/ris) exp(1 − r/ris)] / [1 − (rf/ris) exp(1 − rf/ris)],
Q(r) = [1 − (r/ri) exp(1 − r/ri)] / [1 − (ris/ri) exp(1 − ris/ri)]        (3)
where Ei and σi are the predicted modulus and strength within the interphase region; Ef, Em, and Ems are the effective moduli of the fiber, the matrix, and the lowest value inside the interphase region; and σf, σm, and σms are the tensile strengths of the fiber, the matrix, and the lowest value inside the interphase region. Since the interphase region mainly comprises the sizing and part of the matrix, which show isotropic behavior, we use isotropic mechanical properties for the interphase. To accommodate the isotropic properties of the
interphase region, Young’s modulus on the left (fiber surface) boundary r = rf of the interphase region is taken as the average of the fiber moduli in the three dimensions, Ei(r = rf) = (Ef11 + Ef22 + Ef33)/3 = 95 GPa. Young’s modulus on the right boundary r = ri of the interphase region is taken as the modulus of the isotropic matrix, Ei(r = ri) = Em = 3.8 GPa. For the strength values, only the tensile strength is available for the carbon fiber, and the tensile strength is adequate to describe the failure strength given that the composites fail by tension even during compressive deformation [12]. Thus, we formulate the strength boundaries according to the tensile strengths of the fiber and the matrix. Specifically, the left-bound value of the interphase region is taken as the tensile strength of the carbon fibers, σi(r = rf) = σf = 3 GPa. The right-bound strength value is taken as the tensile strength of the matrix, σi(r = ri) = σm = 68 MPa. More details about the property selection can be found in our previous study [10]. MD has been used to characterize Ems and σms inside the interphase region. The effect of the curing degree on mechanical properties such as Young’s modulus and the intrinsic yield and maximum stress of the epoxy resin has been investigated [24]. The MD simulation results show an increasing trend of Young’s modulus and strength with an increasing degree of cross-linking. Specifically, the difference in Young’s modulus between undercured epoxy (70% curing degree) and fully cured epoxy (95% curing degree) is about 20%, and the difference in the strength values can be up to 50%. The MD simulation results provide reasonable lower bounds for the interphase mechanical properties due to insufficient curing, and the ratios between the lower bounds and the matrix values have been selected as Ems/Em = 0.8 and σms/σm = 0.5. The average modulus and strength in the interphase region can then be obtained as

Ēi = ∫[rf, ri] Ei(r) dr / (ri − rf),  σ̄i = ∫[rf, ri] σi(r) dr / (ri − rf)        (4)
The resulting average modulus (Ēi) and average strength (σ̄i) of the interphase region are 22.5 GPa and 670 MPa, respectively. Compared to the modulus (Em = 3.8 GPa) and tensile strength (σft = 68 MPa) of the bulk resin matrix as measured from experiments, the average Young’s modulus and strength of the interphase region are about 5 and 9 times higher, respectively. As a result, the interphase region shows an obvious
stiffened response on average compared to the bulk matrix, although a portion of the interphase region is weaker due to insufficient cross-linking. This also demonstrates that the confinement effect by the fiber surface plays a dominant role in the interphase region. We note that because we assign isotropic properties to the interphase, the modulus of the interphase becomes larger than the transverse modulus of the carbon fiber (Table 1). However, our previous studies indicate that the failure behaviors and failure strength of the composites have a minor dependence on the elastic properties assigned to the interphase region; it is mainly the interfacial properties (to be discussed in Section 4.1) between the fiber and the interphase that influence the failure strength of the UD composites [10]. We note that the model could be improved by considering transversely isotropic behavior for the interphase, with different moduli in the longitudinal and transverse directions [32]. A recent study adopted a similar analytical function to represent the interphase region in fiber-reinforced composites [31]. That study also showed the capability to represent different types of interphase regions by adjusting ris and the boundary values. For instance, imperfect adhesion between fiber and interphase, an interphase region with impurities or damage, and inadequate cross-linking between interphase and matrix can be considered by replacing Ef, Em, Ems (or σf, σm, σms) with lower values. Additionally, the effect of ris has been investigated systematically, and it was found that as the normalized position of ris increases from 0 to 1, both the effective Young’s modulus and the shear modulus of the interphase region increase. In sum, the analytical gradient function used in this chapter is versatile enough to represent the interphase regions in other fiber- and particle-reinforced composite systems.
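The gradient profile of Eqs. (1)–(3) and the averaging of Eq. (4) are straightforward to reproduce numerically. The sketch below assumes a fiber radius of 3.5 μm (from the 7 μm diameter quoted in Section 4.1) and uses the MD-informed bound Ems/Em = 0.8; it recovers the boundary values exactly, while the averaged modulus depends on the assumed geometric parameters (the chapter reports 22.5 GPa).

```python
import numpy as np

# Interphase gradient model, Eqs. (1)-(4). Geometric inputs are assumptions
# for illustration: r_f from the 7-um fiber diameter, t_i = 200 nm.
r_f, t_i = 3.5, 0.2            # um
r_i = r_f + t_i                # outer radius of the interphase
r_is = r_f + 0.75 * t_i        # position of the weakest point
E_f, E_m = 95.0, 3.8           # GPa (averaged fiber modulus, matrix modulus)
E_ms = 0.8 * E_m               # MD-informed lower bound, E_ms/E_m = 0.8

def R(r):  # Part I decay function, Eq. (3)
    return ((1 - (r / r_is) * np.exp(1 - r / r_is)) /
            (1 - (r_f / r_is) * np.exp(1 - r_f / r_is)))

def Q(r):  # Part II recovery function, Eq. (3)
    return ((1 - (r / r_i) * np.exp(1 - r / r_i)) /
            (1 - (r_is / r_i) * np.exp(1 - r_is / r_i)))

def E_interphase(r):  # piecewise modulus profile, Eqs. (1) and (2)
    return np.where(r <= r_is,
                    E_ms + (E_f - E_ms) * R(r),
                    E_m + (E_ms - E_m) * Q(r))

# Average over the interphase thickness, Eq. (4), by a midpoint rule
r = np.linspace(r_f, r_i, 20001)
r_mid = 0.5 * (r[:-1] + r[1:])
E_avg = E_interphase(r_mid).mean()
print(f"E at fiber surface: {E_interphase(np.array([r_f]))[0]:.1f} GPa")  # = E_f
print(f"average interphase modulus: {E_avg:.1f} GPa")
```

The same loop with the strength bounds (σms/σm = 0.5) yields the averaged interphase strength.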
4. Microscale model development for UD CFRP composites

4.1 UD RVE model and constitutive laws for microstructure components

For model development, a cross-section of the microstructure of the UD RVE model, with cylindrical fibers randomly distributed in the matrix, is generated using the algorithm proposed by Melro et al. [36]. An average fiber diameter of 7 μm and a fiber volume fraction of 51.4% are utilized based on experimental material characterization. In addition to the fiber and matrix phases, the UD RVE model includes a finite-thickness (200 nm) interphase region adjacent to the fibers, consistent with the nanoscale model of the
Table 1 Constituent properties used in the microscale UD RVE model.

Carbon fiber: E11 = 245 GPa; E22 = E33 = 19.8 GPa; G12 = G13 = 29.191 GPa; G23 = 5.922 GPa; v12 = 0.28; σT = 3.46 GPa.

Epoxy matrix [10]: Em = 3.8 GPa; vm = 0.38; vp = 0.3; σft = 61.6 MPa (ASTM D638 [33]); σfc = 300 MPa (ASTM D695 [34]); GIC = 334.1 J/m² (ASTM E399 [35]).

Interphase region [10]: Ei = 22.5 GPa; σi = 670 MPa.

Fiber/interphase region interface [10]: K = 10^8 MPa/mm; σ1 = 70 MPa; σ2 = σ3 = 80 MPa; GIC = 32 J/m²; GIIC = GIIIC = 2 J/m².
interphase region described in Section 3. A zero-thickness interface between the fiber and the interphase region is also considered, by inserting cohesive elements, to capture a realistic failure strength and the debonding failure mechanism, as shown in Fig. 3. The details of the UD RVE model can be found in our previous study [10]. In the UD RVE model, the carbon fibers are assumed to be transversely isotropic and linearly elastic. The five independent material constants of the AKSACA carbon fibers and the fiber tensile strength (σT) are listed in Table 1. The matrix adopted herein is an epoxy developed by the Dow Chemical Company. Uniaxial tensile, compressive, and Mode I fracture toughness tests were conducted according to ASTM D638 [33], ASTM D695 [34], and ASTM E399 [35], respectively. Table 1 summarizes the basic epoxy properties obtained through these experiments, where Em is Young’s modulus, vm is Poisson’s ratio in the elastic region, vp is Poisson’s ratio in the plastic region, σft is the tensile strength, σfc is the compressive strength, and GIC is the Mode I fracture toughness. The polymeric epoxy matrix is modeled as an isotropic elasto-plastic solid and follows the isotropic damage law proposed by Melro et al. [37], implemented as a VUMAT user subroutine in the commercial finite element software Abaqus [38]. The constitutive behavior and damage model of the interphase region are justified to be similar to those of the matrix material. In addition to the three phases (fiber, matrix, and interphase), we also add a zero-thickness interface between the fiber and the interphase region, which plays a key role in stress transfer between the constituent materials and functions as the initiation site for debonding failure. We add the interface
Fig. 3 A schematic representation of the cross-section microstructure of UD CFRP composites used in the UD RVE model.
between the fiber and the interphase, rather than between the interphase and the matrix, for two reasons. First, the failure of the matrix and the interphase is already well described by the constitutive model, and the interphase is treated as a mechanically enhanced matrix region. Second, experimental evidence shows that interfacial debonding is more likely to occur toward the fiber side [39]. We have previously found that the interfacial parameters are crucial for accurately capturing the failure strength of UD CFRP composites relative to experimental results. More details about the constitutive law calibration and material property selection can be found in our previous study [10]. Recently, an analytical method has been proposed in Refs. [40,41] to predict interfacial debonding under an arbitrary load with the input of only the transverse tensile strength and the constituent properties of the UD composite. This approach has the advantage of avoiding errors due to the transformation of scales. Compared to this approach, the cohesive zone model (CZM) has been more extensively used among the available computational methods for the investigation of interfacial debonding [42]. We adopt the CZM approach, in which interfacial debonding is captured by inserting cohesive elements at the interface between the fiber and the interphase region, with a constitutive response defined by a bilinear mixed-mode softening law [10].
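The bilinear softening law just mentioned can be written down compactly. The sketch below is a single-mode (mode I) illustration, not the authors' mixed-mode implementation; the penalty stiffness, interface strength, and toughness are illustrative inputs loosely based on Table 1.

```python
# Bilinear traction-separation law for a cohesive interface (single mode):
# linear loading with penalty stiffness K up to the interface strength,
# then linear softening such that the area under the curve equals G_c.
def bilinear_traction(delta, K, sigma_max, G_c):
    d0 = sigma_max / K            # separation at damage onset
    df = 2.0 * G_c / sigma_max    # separation at complete failure
    if delta <= d0:
        return K * delta                              # undamaged branch
    if delta < df:
        return sigma_max * (df - delta) / (df - d0)   # softening branch
    return 0.0                                        # fully debonded

K = 1.0e8         # penalty stiffness (MPa/mm), illustrative
sigma_max = 70.0  # normal interface strength (MPa), illustrative
G_c = 0.032       # mode I toughness: 32 J/m^2 = 0.032 N/mm
df = 2 * G_c / sigma_max
# dissipated energy = triangle area = 0.5 * sigma_max * df = G_c
print(round(0.5 * sigma_max * df, 6))  # 0.032
```

The mixed-mode law used in the chapter couples the normal and shear openings through an effective separation and a mode-mixity-dependent toughness; the single-mode triangle above is its building block.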
4.2 Boundary conditions and RVE size

The boundary conditions applied to an RVE influence the assessment of its mechanical properties. There is also an interplay between the boundary conditions and the RVE size [43,44]. In particular, independence from the boundary conditions is usually considered an indicator of the sufficiency of the RVE size. The classical approach to introduce periodic boundary conditions (PBCs) in an RVE is through the definition of constraint equations between periodic nodes, hence imposing constraints on their allowed displacements. The traditional PBC approach is appropriate for standard implicit integration numerical schemes [45,46] but exhibits several drawbacks when explicit dynamic time integration is used. It has been observed that, with explicit dynamic time integration, PBCs introduce intense high-frequency oscillations into the system that compromise the numerical solutions [47]. Moreover, simulations using traditional PBCs are computationally very expensive. An alternative approach is to apply uniform boundary conditions (displacement or traction). It has been shown that for sufficiently large RVEs, the results obtained from uniform boundary conditions are close to those obtained from PBCs
[48–50]. Thus, in this work, loading is applied by imposing uniform displacements (tractions) on the boundary nodes. To choose a sufficiently large RVE, we adopted the size convergence approach [43], in which the RVE size is gradually enlarged and the size at which the results converge is selected. Previous studies have shown that the results obtained with 30 fibers in the RVE are equivalent to those computed with 70 fibers in terms of the stress-strain curves and the dominant failure micromechanisms under transverse compression and shear [46]. Following this research, we chose an RVE size that includes around 50 fibers, which has been reported to be sufficient to capture the essential microscale features at relatively low computational cost [51,52]. The RVE is built in Abaqus/Explicit [38] as follows: an orphan mesh with predominantly first-order reduced-integration hexahedral elements (C3D8R) and wedge elements (C3D6) is adopted for the three bulk phases, while first-order cohesive elements (COH3D8) are used to represent the interface. To generate a well-structured, high-quality mesh, a seed density of two elements through the thickness of the interphase region is used, leading to an average element size of 0.1 μm for the interphase region, while the element size for the fiber and matrix is slightly larger, around 0.35 μm. Mass scaling artificially increases the mass of elements. Although adding some “nonphysical” mass increases the stable time increment, it can strongly affect the results, especially in a dynamic analysis where inertia effects become dominant. A common technique to check the significance of mass scaling is to compare the kinetic energy with the internal energy of the system; a ratio below 5%–10% is often treated as insignificant [53].
In the RVE model presented herein, the mass scaling (with a target stable time increment of 1e-5) is selected to ensure that it has a negligible influence on the results. Additionally, the linear bulk viscosity parameter in Abaqus/Explicit is set to 0.06, and the quadratic bulk viscosity parameter to 1.2.
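The kinetic-to-internal energy check described above is easy to automate in post-processing. A minimal sketch, assuming the whole-model energy histories (e.g., the Abaqus ALLKE and ALLIE history outputs) have been exported as arrays; the histories below are synthetic placeholders standing in for real simulation data:

```python
import numpy as np

# Synthetic placeholder histories standing in for exported ALLKE/ALLIE data
t = np.linspace(0.0, 1.0, 200)
internal_energy = 1.0e3 * t**2   # ALLIE-like history (arbitrary units)
kinetic_energy = 20.0 * t**2     # ALLKE-like history

mask = internal_energy > 1e-8    # skip the near-zero start of the run
ratio = float(np.max(kinetic_energy[mask] / internal_energy[mask]))
print(f"max KE/IE ratio: {ratio:.3f}")   # 0.020 here, well below 5%-10%
assert ratio < 0.05, "mass scaling may be too aggressive"
```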
4.3 Failure analysis of UD CFRP composites under uniaxial stress state

The failure modes of UD CFRP composites are quite complex, and different failure mechanisms operate depending on the loading conditions. In the following text, we first summarize the failure behaviors of UD CFRP composites under uniaxial loading conditions. Pure transverse tensile loading causes debonding at the fiber/interphase interface, which leads to matrix
microcracking and matrix yielding, and subsequently results in a transverse crack perpendicular to the loading axis, as illustrated in Fig. 4A. In the case of transverse compressive loading, the final failure of the UD CFRP composite takes place through the development of a matrix shear band, with an angle of inclination of 57.12 degrees with respect to the plane perpendicular to the loading axis, as shown in Fig. 4B. This fracture surface orientation is very close to the experimental value (56.15 degrees). In the case of in-plane shear, interfacial debonding starts at different positions on the front surface of the UD RVE. The debonded regions link up with matrix cracks, leading to complete rupture. The matrix cracks are not parallel to the fibers but form an angle of about 45 degrees with the fiber direction, as shown in Fig. 4C. For out-of-plane shear, failure is triggered by interfacial debonding. Through the process of damage accumulation, these local failures coalesce to form a dominant interfacial crack oriented at 45 degrees to the applied shear load, as shown in Fig. 4D. Longitudinal compression failure of the UD CFRP composite can be very sensitive to the initial fiber waviness angle introduced during the manufacturing process, which leads to a localized hinge zone of fibers (kink band), as shown in Fig. 4E. It is, however, difficult to determine the exact initial fiber misalignment angle, since manufacturing defects appear
Fig. 4 Predicted failure modes of UD CFRP composite under different uniaxial loading conditions: (A) transverse tension, (B) transverse compression, (C) in-plane shear, (D) out-of-plane shear, and (E) longitudinal compression.
stochastically. Recently, Bai et al. [54] presented a 3D RVE investigation of the formation of kink bands under longitudinal compression of the composites. Here, we further extend that work in the fiber longitudinal direction by considering local fiber waviness with a cosine wave shape, following the work of Bishara and co-workers [42,43]. The maximum waviness angle (θmax) is located at the inflection point between the trough and the adjacent crest of the cosinusoidal waveform. We note that, due to computational limitations, we only introduce this simple yet representative form into the computational model to represent the initial fiber misalignment angle, but θmax should be related to the averaged misalignment angle outlined and discussed by Yurgartis [55]. A parametric study relating the mechanical properties under longitudinal compression to θmax was conducted in our previous work [12]. The predicted values of the fiber rotation angle, α = 20.8–21.5 degrees, and the band width, W = 187.2–193.7 μm, are in good agreement with the experimental results (α = 20.9–22.5 degrees, W = 188–335 μm [12]). The material properties of UD composites obtained from experiments and from UD RVE model predictions under uniaxial loading conditions are summarized in Tables 2 and 3. The reported numerical results are the average of 10 different UD RVEs, and they show very good agreement with the experimentally measured properties.
4.4 Failure envelopes of UD CFRP composites under multiaxial stress state

4.4.1 Failure envelopes of σ22-τ12 and σ22-τ23

In this section, we utilize the computational micromechanics RVE model to predict the failure envelopes under multiaxial loading conditions, i.e., the failure loci for the whole range of combined stress states. First, the focus is on the prediction of the failure envelopes in the σ22-τ12 and σ22-τ23 stress planes. We adopt an RVE model with a thickness of 2R, where R is the fiber radius. We then apply different proportional amounts of transverse and in-plane shear displacement to the RVE model, as shown in Fig. 5A and B. The failure envelopes are obtained by plotting the failure strength values in the σ22-τ12 and σ22-τ23 stress spaces, as shown in Fig. 5C and D. The failure envelopes show that the maximum shear strength increases with applied transverse compressive stress up to a transition point, indicating a hardening effect on the shear strength under moderate transverse compression. In this regime, the failure is shear dominated, as also marked in Fig. 5C. With a further increase in the magnitude of the transverse compression, failure of the matrix under compressive loading starts to
Table 2 Experimentally measured and UD RVE predicted material properties of UD CFRP composites.

Transverse tension (ASTM-D3039 [56]):
Modulus E22T (GPa): experiment 9.2 (Cov: 2.2%); UD RVE 8.62 (Cov: 1.5%)
Strength YT (MPa): experiment 62.1 (Cov: 0.9%); UD RVE 64.95 (Cov: 2.1%)

Transverse compression (ASTM-D6641 [57]):
Modulus E22C (GPa): experiment 9.29 (Cov: 7.8%); UD RVE 8.81 (Cov: 1.3%)
Strength YC (MPa): experiment 185.9 (Cov: 7.4%); UD RVE 182.32 (Cov: 2.5%)

In-plane shear (10-degree off-axis tensile test [10]):
Modulus G12 (GPa): experiment 4.95 (Cov: 6.3%); UD RVE 4.86 (Cov: 1.8%)
Strength SL (MPa): experiment 81.6 (Cov: 1.7%); UD RVE 81.4 (Cov: 2.3%)

Out-of-plane shear (SSB 3PB test [58]):
Modulus G23 (GPa): experiment 3.29 (Cov: 5.7%); UD RVE 2.74 (Cov: 1.2%)
Strength ST (MPa): experiment 52.5 (Cov: 12.0%); UD RVE 60.1 (Cov: 3.5%)

Longitudinal compression (ASTM-D6641 [57]):
Modulus E11C (GPa): experiment 114.97 (Cov: 5.9%); UD RVE 126.93 (θmax = 0.9 degrees) (Cov: 3.7%)
Strength XC (MPa): experiment 1098.74 (Cov: 7.6%); UD RVE 1066.26 (θmax = 0.9 degrees) (Cov: 4.2%)
Table 3 Summary of the elastic properties of UD composites predicted from the UD RVE model.

E11 = 125.9 GPa; E22 = E33 = 8.6 GPa; G12 = G13 = 4.86 GPa; G23 = 2.74 GPa; v12 = v13 = 0.32; v23 = 0.606
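The engineering constants in Table 3 fully define the compliance of the homogenized, transversely isotropic UD ply. A minimal sketch of assembling it (Voigt order 11, 22, 33, 23, 13, 12; direction 1 along the fiber):

```python
import numpy as np

# Engineering constants of the homogenized UD ply (Table 3)
E11, E22, G12, G23, v12, v23 = 125.9, 8.6, 4.86, 2.74, 0.32, 0.606

S = np.zeros((6, 6))                 # compliance matrix (1/GPa)
S[0, 0] = 1 / E11
S[1, 1] = S[2, 2] = 1 / E22
S[0, 1] = S[1, 0] = S[0, 2] = S[2, 0] = -v12 / E11
S[1, 2] = S[2, 1] = -v23 / E22
S[3, 3] = 1 / G23
S[4, 4] = S[5, 5] = 1 / G12

C = np.linalg.inv(S)                 # stiffness matrix (GPa)

# Consistency check for transverse isotropy: G23 should be close to
# E22 / (2 (1 + v23)) if the 2-3 plane is isotropic.
print(round(E22 / (2 * (1 + v23)), 2))  # 2.68, close to the reported 2.74
```

The small gap between 2.68 and 2.74 GPa is a measure of how closely the RVE predictions satisfy exact transverse isotropy.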
Fig. 5 Schematic of the RVE of the UD CFRP composites subjected to (A) transverse load (σ 22) and in-plane shear (τ12), (B) transverse load (σ 22) and out-of-plane shear (τ23). (C) and (D) show the predicted failure envelopes of combined loading conditions corresponding to (A) and (B). The numbers next to the red points represent the ratios of shear displacement (δs) to transverse displacement (δt/c for either tension or compression).
dominate the failure process of the composites, and the shear strength starts to decrease. On the transverse tension side, we observe that the shear strength decreases monotonically with the magnitude of the transverse tension. Fig. 5C and D show the entire failure envelopes of σ22-τ12 and σ22-τ23 and the corresponding three dominant failure mechanisms or modes.

4.4.2 Failure envelopes of σ11-τ12

In this section, we further include the loading conditions in the fiber longitudinal direction by considering local fiber waviness, which is
characterized in an interval x1 ≤ x ≤ x2 in the longitudinal direction, located at the middle of the 3D RVE model, as shown in Fig. 6A. The total length of the 3D RVE model is LT = 700 μm. We use half of the wavelength of a cosine wave to represent the fiber waviness, and the wavelength (L = 1000 μm), estimated from the experimental sample [12], is assumed to be constant for all the fibers in the computational micromechanics RVE model. Different maximum waviness angles (θmax) can be achieved by changing the wave amplitude (A). Fig. 6A shows the imperfection area and the parameters for the local fiber waviness.
Fig. 6 (A) Schematic of the computational model considering local fiber waviness (LW = L/2 = 500 μm, LT = 700 μm). (B) Comparison between the predicted longitudinal compressive strength values from RVE models and the analytical solution proposed by Pinho et al. [59]. (C) Schematic of the computational micromechanics model subjected to the stress state of σ11-τ12. (D) Failure envelopes of σ11-τ12 for different θmax. (E) and (F) show the failure modes that correspond to points I and II in panel (D) for θmax = 0.90 degrees, respectively.
The waviness function of the bottom boundary is given by

y = 0 for 0 ≤ x < x1;  y = A cos(2πx/L) for x1 ≤ x ≤ x2;  y = A for x2 < x ≤ LT        (5)

The initial misalignment angle is introduced geometrically through the derivative of y(x):

tan θ(x) = −(2πA/L) sin(2πx/L)        (6)
We have shown in our previous study [12] that the longitudinal compressive strength of the composites is governed by the initial fiber waviness angle. A parametric study on θmax is first conducted using our computational model. The compressive strength decreases with increasing θmax, as shown in Fig. 6B. Pinho and co-workers [59] have proposed a theoretical model to predict the compressive strength as a function of the initial fiber waviness angle under pure longitudinal compressive loading:

XC = Vf / [(1 − Vf)/Gm + θmax/SLis]        (7)
where Vf is the fiber volume fraction, Gm is the matrix shear modulus, θmax is the maximum fiber waviness angle, and SLis is the in-situ in-plane shear strength of the composite. Fig. 6B also shows the comparison of the longitudinal compressive strength obtained computationally and theoretically using Eq. (7); the computational and analytical results show excellent agreement. We would like to point out that for θmax = 5.4 degrees, the compressive strength decreases by approximately 70%, which indicates that the compressive strength is very sensitive to the initial fiber waviness introduced during manufacturing. The experimental determination of the failure envelope of σ11-τ12 is very difficult to set up. To overcome these difficulties and determine the failure envelope of σ11-τ12, an in-plane shear load (τ12) is applied simultaneously with longitudinal compression (σ11) using our computational micromechanics model, as shown in Fig. 6C. Taking advantage of the RVE model we have set up, we further investigate the effect of fiber misalignment, i.e., the maximum waviness angle θmax, on the shape and magnitude of the failure envelope in
the space of σ11-τ12. From Fig. 6D, we can see that the σ11-τ12 envelopes rotate around the point (σ11 = 0, τ12 = SL), and increasing θmax leads to a significant decrease of the domain defined by the area under the failure envelopes. In addition, the longitudinal compressive strength decreases with increasing τ12. Interestingly, we find that the in-plane shear load changes the kink-band failure mechanism of UD CFRP composites: the main failure mode changes from fiber kinking to matrix cracking/splitting with increasing τ12. Meanwhile, the area of matrix cracking grows with increasing τ12, as shown in the comparison between Fig. 6E and F.
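Eq. (7) can be evaluated directly. In the sketch below, Vf and SLis are taken from this chapter, while Gm is estimated from Em and vm as Em/(2(1 + vm)), an assumption since the chapter does not quote Gm explicitly; θmax is converted to radians before use.

```python
import math

Vf = 0.514                      # fiber volume fraction
SL_is = 113.3                   # in-situ in-plane shear strength (MPa)
Gm = 3.8e3 / (2 * (1 + 0.38))   # assumed matrix shear modulus, ~1377 MPa

def Xc(theta_max_deg):
    """Kinking strength of Eq. (7) for a maximum waviness angle in degrees."""
    theta = math.radians(theta_max_deg)
    return Vf / ((1 - Vf) / Gm + theta / SL_is)

for ang in (0.9, 2.0, 5.4):
    print(f"theta_max = {ang} deg -> Xc = {Xc(ang):.0f} MPa")
```

With these inputs, Xc at θmax = 0.9 degrees comes out near 1.0 GPa, of the same order as the RVE prediction in Table 2, and the value at 5.4 degrees drops by well over half, consistent with the roughly 70% reduction noted above.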
4.4.3 Proposed failure criteria

Based on these failure envelopes, a new set of failure criteria for σ22-τ12, σ22-τ23, and σ11-τ12 has been proposed [11]. We have identified three dominant failure mechanisms or modes from the computational UD RVE analyses under the multiaxial loading conditions of σ22-τ12 and σ22-τ23: tension-, shear-, and compression-dominated failure modes. The failure envelopes of σ22-τ12 and σ22-τ23 show that the maximum shear strength increases with applied transverse compressive stress up to a transition point, indicating a hardening effect on the shear strength under moderate transverse compression; in this regime, the failure is shear dominated. With a further increase in the magnitude of the transverse compression, failure of the matrix under compressive loading starts to dominate the failure process of the composites, and the shear strength starts to decrease. The transition point is defined as the transition between compression-dominated and shear-dominated failure. On the transverse tension side, we observe that the shear strength decreases monotonically with the magnitude of the transverse tension, as shown in Fig. 5C and D. The same transition point is expected to appear in the σ33-τ12 and σ33-τ23 stress spaces, as the transverse directions (directions 2 and 3) are identical. These three dominant failure mechanisms resemble those proposed by the NU-Daniel failure criteria [60,61]. Based on these observations, we propose a new set of failure criteria for σ22-τ12 and σ22-τ23 building upon the NU-Daniel failure criteria. For the multiaxial loading case of σ11-τ12, the shape of the failure envelope in σ11-τ12 space is different, as the main failure mechanism changes to fiber kinking, as investigated in our previous studies [12]. We modify the Tsai-Wu failure criterion [62] by considering the dependence of the compressive strength on the maximum waviness angle θmax.
All of the proposed failure criteria based on UD RVE model results are summarized in Eqs. (8)–(12).
Qingping Sun et al.
Tension-dominated failure (σ22 > 0):

σ22/YT + (τ12/SL)² = 1    (8)

Shear-dominated failure (σ22^Tran < σ22 ≤ 0):

(τ12/SL)² + α(σ22/YT) = 1,  α = (YT/|σ22^Tran|)[(τ12^Tran/SL)² − 1]    (9)

Compression-dominated failure (−YC ≤ σ22 ≤ σ22^Tran):

(σ22/YC)² + β(τ12/SL)² = 1,  β = [1 − (σ22^Tran/YC)²](SL/τ12^Tran)²    (10)

Fiber-compression-dominated failure (σ11 ≤ 0):

|σ11|/XC + (τ12/SL)² = 1,  XC = S_is^L/[θmax + (1 − Vf)S_is^L/Gm]    (11)

Fiber-tension-dominated failure (σ11 > 0):

σ11/XT = 1    (12)
where σ22^Tran and τ12^Tran are the transverse normal and in-plane shear stresses at the transition point; Vf is the fiber volume fraction, which is 51.4% in our case; Gm is the matrix shear modulus; α and β are material-based parameters; and S_is^L is the in-situ in-plane shear strength of the composite accounting for the in-situ constraining effect, which can be calculated theoretically using the fracture mechanics model proposed by Camanho et al. [63]. By replacing τ12 and SL with τ23 and ST in Eqs. (8)–(11), we obtain the failure criteria for σ22-τ23. Material properties used in Eqs. (8)–(12) are listed in Table 4.
Table 4 Material properties used in Eqs. (8)–(12) for UD CFRP composites.
XT = 2022.44 MPa, XC = 1066.26 MPa, YT = 64.95 MPa, YC = 182.32 MPa, SL = 81.4 MPa, S_is^L = 113.3 MPa, ST = 60.1 MPa;
(|σ22^Tran|, |τ12^Tran|) = (53, 103.7) MPa; (|σ22^Tran|, |τ23^Tran|) = (49, 71) MPa.
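To make the piecewise structure of the σ22-τ12 criteria concrete, the short sketch below evaluates Eqs. (8)–(10) with the Table 4 constants. The branch bounds and the forms of α and β follow our reading of the reconstructed criteria, chosen so that the envelope is continuous at the transition point; this is an illustrative sketch, not the authors' implementation.

```python
import math

# Constants from Table 4 (MPa); compressive stress is taken as negative,
# so the transition point is (sigma_22, tau_12) = (-53, 103.7).
YT, YC, SL = 64.95, 182.32, 81.4
S22_TRAN, T12_TRAN = -53.0, 103.7

ALPHA = (YT / abs(S22_TRAN)) * ((T12_TRAN / SL) ** 2 - 1.0)   # Eq. (9)
BETA = (1.0 - (S22_TRAN / YC) ** 2) * (SL / T12_TRAN) ** 2    # Eq. (10)

def shear_strength(s22):
    """In-plane shear stress tau_12 at failure for a transverse stress s22,
    evaluated branch by branch from Eqs. (8)-(10)."""
    if s22 > 0.0:                      # tension-dominated, Eq. (8)
        return SL * math.sqrt(max(0.0, 1.0 - s22 / YT))
    if s22 >= S22_TRAN:                # shear-dominated, Eq. (9)
        return SL * math.sqrt(1.0 - ALPHA * s22 / YT)
    # compression-dominated, Eq. (10)
    return SL * math.sqrt(max(0.0, 1.0 - (s22 / YC) ** 2) / BETA)
```

Consistent with Fig. 5, moderate transverse compression raises the shear strength above SL up to the transition point, after which it decreases and the envelope closes at σ22 = −YC.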
Carbon fiber-reinforced polymer composites
Fig. 7 Comparison of the failure envelopes obtained from the computational RVE model and the proposed failure criteria under multiaxial stress states: (A) σ22-τ12, (B) σ22-τ23, and (C) σ11-τ12.
A comparison of the computational results and the proposed failure criteria for several stress states is shown in Fig. 7. The proposed failure criteria are in good agreement with the computational UD RVE results.

4.4.4 Validation of the proposed failure criteria
Failure criteria of σ22-τ12
To evaluate the failure envelope of σ22-τ12 obtained from the proposed failure criteria, we also conduct off-axis tests to measure the failure strengths under multiaxial loading conditions [64,65]. We choose five off-axis angles (θ = 10, 30, 45, 60, and 90 degrees) of UD CFRP specimens for the experimental analysis. The geometries of these specimens are identical, as shown in Fig. 8A. According to the given off-axis angles, the off-axis specimens are cut from the laminated plates using a diamond saw and polished with standard techniques. To reduce gripping effects, woven glass/epoxy tabs are applied to the specimens. Compression and tension tests are conducted in accordance with ASTM Standards D6641 [52] and D3039 [53],
Fig. 8 (A) Geometries of the off-axis specimens (unit: mm). (B) Comparison between the failure envelopes of σ22-τ12 obtained from the proposed failure criteria and our experimental tests.
respectively. An MTS Sintech test machine fitted with a 30,000 lb load cell is used to determine the transverse behavior of the UD CFRPs at a crosshead speed of 2 mm/min. For an off-axis specimen, the stresses in a ply with fibers oriented at an angle θ to the direction of the applied stress are obtained as follows:

σ11 = σx1 cos²θ, σ22 = σx1 sin²θ, τ12 = σx1 sinθ cosθ    (13)

where σ11, σ22, and τ12 are the stresses in the material axes and σx1 is the off-axis loading stress. The corresponding ultimate values σ11u, σ22u, and τ12u are obtained by substituting the off-axis ultimate strength σxu into Eq. (13). The experimental results show little scatter, so this data set is well suited for checking the applicability of the proposed failure criteria. Fig. 8B shows the comparison between the failure envelopes of σ22-τ12 obtained from the experimental results and the analytical results; the proposed failure criteria are in very good agreement with the experiments. Thus, the failure criteria proposed here prove sufficient to describe the failure envelopes of UD CFRP composites in the σ22-τ12 stress space. To check whether the proposed failure criteria are generally applicable to other UD reinforced composites in the σ22-τ12 space, a comparison is made with six sets of experimental data from the literature [66–71]. The mechanical properties needed to generate the corresponding failure envelopes are reported in Table 5. Fig. 9 shows the
Table 5 Material properties of composites used in different references (all values in MPa).
Material (Ref.) | XT | XC | YT | YC | SL | |σ22^Tran| | |τ12^Tran|
AS4/3501-6 [66] | 2300 | 1725 | 60.2 | 275.8 | 73.4 | 140 | 130
IM7/8552 [67] | 2280 | 1725 | 80 | 290 | 90 | 140 | 143
E-glass/RP528 [68] | – | – | 47 | 134 | 47 | 65 | 65
E-glass/LY556 [69] | 1140 | 570 | 37.5 | 130.3 | 66.5 | 55 | 91
AS4/55A [70] | – | – | 27 | 91.8 | 51.3 | 39 | 75
T800/3900-2 [71] | – | – | 48.8 | 201.7 | 53 | 73 | 127
Fig. 9 Comparison between the failure envelopes of σ22-τ12 predicted by the proposed failure criteria and experimental data on different materials: (A) AS4/3501-6 [66], (B) IM7/8552 [67], (C) E-glass/RP528 [68], (D) E-glass/LY556 [69], (E) AS4/55A [70], and (F) T800/3900-2 [71].
comparisons between the failure envelopes of σ22-τ12 obtained from the proposed failure criteria and these experimental data. Good agreement between the predictions and the experimental data is generally observed.

Failure criteria of σ11-τ12
Experimental determination of the failure envelopes for combined in-plane shear/longitudinal compression (σ11-τ12) is much more complex. Here, we refer to the limited experimental data found in the literature. Fig. 10 shows the failure envelopes generated by the proposed failure criteria and the corresponding experimental data determined from torsion and compression tests of tubes [72–74]. The relevant mechanical properties of the materials are listed in Table 6. The predictions of the proposed failure criteria are in good agreement with the experimental data of Soden et al. [74], as shown in Fig. 10A and B. However, some
Fig. 10 Comparison between the failure envelopes of σ11-τ12 predicted by the proposed failure criteria and experimental data on different materials: (A) and (B) T300/914C [74], (C) T300/LY556/HY917/DY070 [73], and (D) E-glass/411-C50 [72].
Table 6 Material properties of previously investigated composites.
Material | XC (MPa) | SL (MPa)
T300/914C Set 1 [74] | 898.6 | 125
T300/914C Set 2 [74] | 784 | 101.25
T300/LY556/HY917/DY070 [73] | 900 | 80
E-glass/411-C50 [72] | 629.8 | 55
discrepancies between the predicted results and the experimental data of Michaeli et al. [73] and Chandra et al. [72] are visible in Fig. 10C and D. Considerable scatter also exists in these two data sets, possibly due to large variations in fiber misalignment or waviness among the specimens. Nevertheless, the general trend of the failure strengths in the σ11-τ12 stress space is captured by the proposed failure criteria.

Failure envelopes of σ22-τ23
The stress states of σ22-τ23 are extremely difficult to realize experimentally because the required test specimen geometries are impractical to produce. However, computational analyses have been used to generate failure envelopes of σ22-τ23, so we validate the proposed failure criteria against computational results from the literature [70,75]. Table 7 lists the material properties used in these computational models. Fig. 11A and B show the comparison between the failure envelopes of σ22-τ23 from these two studies and the failure criteria proposed here. Again, they agree very well.
4.5 Elastic-plastic-damage model for homogenized UD CFRP composites
4.5.1 Proposed elastic-plastic-damage model

Table 7 Material properties used in previous studies (all values in MPa).
Study | YC | YT | ST | (|σ22^Tran|, |τ23^Tran|)
Melro et al. [70] | 114.4 | 48.32 | 46.28 | (26, 47)
Daniel et al. [75] | 115.5 | 51.47 | 42.67 | (33.6, 45.3)

In mesoscale woven structures, the basic elements, fiber tows (yarns), need to be treated as homogenized UD CFRP composites to enable
Fig. 11 Comparison between the failure envelopes of σ22-τ23 predicted by the proposed failure criteria and computational results from (A) Melro et al. [70] and (B) Daniel et al. [75].
more efficient characterization of these larger-scale structures. As such, we need to condense the failure behaviors and failure envelopes obtained in the previous sections into constitutive material laws and damage behaviors for homogenized UD CFRP composites. In this section, the homogenized UD CFRP composites are modeled using an elastic-plastic-damage model combining the Liu-Huang-Stout (LHS) yield criterion [76] and the UD RVE-based failure criteria. The LHS yield criterion is used to describe the elasto-plastic behavior of UD CFRP composites and is defined as

φ = √[F(σ22 − σ33)² + G(σ33 − σ11)² + H(σ11 − σ22)² + 2Lτ23² + 2Mτ13² + 2Nτ12² + Iσ11 + Jσ22 + Kσ33] − 1    (14)

where F, G, H, L, M, N, I, J, and K are parameters characterizing the current state of anisotropy. These parameters can be defined as follows:

F = ½(Σ2² + Σ3² − Σ1²), G = ½(Σ3² + Σ1² − Σ2²), H = ½(Σ1² + Σ2² − Σ3²),
Σ1 = (σ1c + σ1t)/(2σ1cσ1t), Σ2 = (σ2c + σ2t)/(2σ2cσ2t), Σ3 = (σ3c + σ3t)/(2σ3cσ3t),
L = 1/[2(τ23^y)²], M = 1/[2(τ31^y)²], N = 1/[2(τ12^y)²],
I = (σ1c − σ1t)/(2σ1cσ1t), J = (σ2c − σ2t)/(2σ2cσ2t), K = (σ3c − σ3t)/(2σ3cσ3t)    (15)

where σ1t and σ1c are the longitudinal tensile and compressive yield stresses, respectively; σ2t (σ3t) and σ2c (σ3c) are the transverse tensile and compressive yield stresses, respectively; and τ12^y (τ31^y) and τ23^y are the in-plane and out-of-plane shear yield stresses, respectively. An associative flow rule is adopted to describe the evolution of the yield surface:

ε̇ = γ̇ ∂φ/∂σd    (16)
where φ has been defined in Eq. (14), γ̇ represents the plastic multiplier, determined by the Newton-Raphson method, σd is the stress tensor, and the partial derivative represents the gradient vector. The RVE-based failure criteria incorporating the various damage mechanisms are adopted to describe damage initiation. The damage evolution of homogenized UD CFRP composites is captured by the reduction of the stiffness matrix C(D), whose nonzero entries are

C(D)11 = d1²C11, C(D)12 = d1d2C12, C(D)13 = d1d3C13, C(D)22 = d2²C22, C(D)23 = d2d3C23, C(D)33 = d3²C33, C(D)44 = d4C44, C(D)55 = d5C55, C(D)66 = d6C66    (17)

with C(D) symmetric and all other entries zero,
where Cij (i = 1–6, j = 1–6) are the components of the undamaged stiffness matrix, and the parameters di are defined as follows:

d1 = 1 − dL, d2 = d3 = 1 − dT, d4 = d5 = 2d2²/(1 + d2), d6 = d2²    (18)

where dL and dT are given by

dL = εLf(ε − εL0)/[ε(εLf − εL0)], dT = εTf(ε − εT0)/[ε(εTf − εT0)]    (19)
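The damage bookkeeping of Eqs. (18) and (19) can be sketched in a few lines; the coupling form of d4 = d5 follows our reading of the reconstructed expression and should be treated as an assumption.

```python
def damage_variable(eps, eps0, eps_f):
    """Linear-softening damage variable of Eq. (19): zero at the initiation
    strain eps0 and one at the failure strain eps_f."""
    if eps <= eps0:
        return 0.0
    if eps >= eps_f:
        return 1.0
    return eps_f * (eps - eps0) / (eps * (eps_f - eps0))

def reduction_factors(d_L, d_T):
    """Stiffness reduction factors d1..d6 of Eq. (18) entering C(D) in
    Eq. (17); d_L and d_T are the longitudinal and transverse damage
    variables. The shear coupling in d4 = d5 is our reading of the text."""
    d1 = 1.0 - d_L
    d2 = d3 = 1.0 - d_T
    d4 = d5 = 2.0 * d2 ** 2 / (1.0 + d2)
    d6 = d2 ** 2
    return d1, d2, d3, d4, d5, d6
```

With no damage the factors are all one (undamaged stiffness is recovered), and with full transverse damage the transverse and shear terms vanish.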
In Eq. (19), εLf is the tensile or compressive failure strain along the fiber direction, εL0 is the tensile or compressive damage-initiation strain along the fiber direction, εTf is the tensile or compressive failure strain along the transverse direction, and εT0 is the tensile or compressive damage-initiation strain along the transverse direction. The damage-initiation strains (εL0, εT0) are defined by the appropriate failure criteria (Eqs. 8–12), which depend only on the stress state. The postfailure behavior is governed by the damage evolution law, which is
associated with the fracture toughness, the element stress, and the element characteristic length Lc. To avoid element size effects, the ultimate failure strains (εLf, εTf) are determined from the fracture toughness based on the smeared crack formulation [77]:

εLf = 2GL/(X_{T,C} Lc), εTf = 2GT/(Y_{T,C} Lc)    (20)

where GL and GT are the fracture toughness in the longitudinal and transverse directions, respectively, adopted from the UD CFRP composite results of Pinho [69], and the subscript T,C indicates that the tensile or compressive strength (XT or XC, YT or YC) is used as appropriate. The damage model above is implemented at the integration points of every element of the homogenized UD CFRP composites through a VUMAT user subroutine in ABAQUS [38].

4.5.2 Validation of the proposed elastic-plastic-damage model
Open-hole off-axis tension/compression and four-point bending simulations are conducted to evaluate and validate the proposed elastic-plastic-damage model by comparing it with experimental tests. Specimen geometries for the different tests are shown in Fig. 12A–C. For the off-axis
Fig. 12 Open-hole specimen geometry for (A) off-axial tension, (B) off-axial compression, and (C) four-point bending.
specimen, the off-axis angle θ is defined as the angle between the fiber orientation (0-degree direction) and the loading direction along the X1-axis, as shown in Fig. 12A and B. The layups of the open-hole off-axis tension/compression and four-point bending samples are [θ]12 and [0/90/90/0/0/0]s, respectively. The hole diameter in all cases is 10 mm. Three off-axis angles (θ = 45, 60, and 90 degrees) are selected for tension, and two (θ = 45 and 60 degrees) for compression. The open-hole samples are meshed using eight-node linear brick elements (C3D8R) with reduced integration and hourglass control, with five C3D8R elements through the thickness of the laminated layers. Both global and local coordinate systems are defined to interpret the ply orientation and to describe the mechanical behavior of the laminated layers accurately. The mechanical property inputs for the open-hole samples are obtained from the UD RVE model described in Section 4.3 and listed in Tables 3 and 4. As for the fiber-interphase interface, the interface between adjacent plies of the laminated composites is modeled using zero-thickness cohesive elements (COH3D8) with a constitutive response defined by a bilinear mixed-mode softening law, but with different parameters: the properties of the interface between adjacent plies, obtained from experimental and computational results in our previous work [13], are listed in Table 8. Comparisons between the predicted and experimental load-displacement curves as well as the failure modes in off-axis tension and compression are shown in Fig. 13A and B. The load-displacement curves predicted by the computational models are in close agreement with the experimental results.
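The ply-level stresses for such off-axis specimens follow the rotation of Eq. (13); a minimal sketch:

```python
import math

def off_axis_ply_stresses(sigma_x, theta_deg):
    """Stresses in the material axes (fiber 1-direction, transverse
    2-direction) for a ply whose fibers make an angle theta with the applied
    uniaxial stress sigma_x, per Eq. (13)."""
    c = math.cos(math.radians(theta_deg))
    s = math.sin(math.radians(theta_deg))
    s11 = sigma_x * c * c   # longitudinal normal stress
    s22 = sigma_x * s * s   # transverse normal stress
    t12 = sigma_x * s * c   # in-plane shear stress
    return s11, s22, t12
```

At θ = 0 the load is carried entirely along the fibers, while at θ = 45 degrees the normal and shear components are equal, which is why intermediate off-axis angles probe the combined σ22-τ12 stress states.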
In off-axis tension and compression, splitting at the hole boundary and along the orientation of the off-axis angle is observed as the main failure mode, which also agrees well with the experimental observations, as shown in Fig. 13C and D. Thus, the proposed elastic-plastic-damage constitutive model appears to be sufficiently accurate in describing the failure mechanisms of UD CFRP composites in the σ22-τ12 stress space.

Table 8 Interfacial parameters between adjacent plies used in the model for the open-hole specimen.
K (MPa/mm) | σ1 (MPa) | σ2, σ3 (MPa) | GIc (J/m²) | GIIc (J/m²)
1×10⁵ | 17 | 60 | 550 | 913
Fig. 13 Comparison between computational and experimental force-displacement curves of (A) off-axial tension and (B) off-axial compression. Comparison of failure modes of (C) off-axial tension and (D) off-axial compression.
The modeling and experimental results for the open-hole specimen under four-point bending are shown in Fig. 14. Again, good general agreement between the computational predictions and the experimental load-displacement curves is observed, as shown in Fig. 14A. The computationally predicted peak and stable loads are 1.19 kN and 0.57 kN, respectively, with relative errors of less than 1%. The proposed elastic-plastic-damage model can thus adequately predict the load-carrying capacity of the UD CFRP laminated composite under four-point bending. The sequence of damage initiation and propagation at various points along the load-displacement curve is further investigated using the computational model. As shown in Fig. 14B, 0-degree layer breakage occurs when the compressive stress on the compressive bending side reaches the longitudinal compressive strength of the UD CFRP composite, which results in a dramatic load drop to half of the maximum load. Subsequently, the load remains stable (0.57 kN) with severe delamination around the hole and edge areas. Finally, failure initiation and propagation in the 0-degree and 90-degree layers on the tensile bending side lead to catastrophic failure of the open-hole cross-ply laminates, as shown in
Fig. 14 A comparison between computational and experimental results of the open-hole specimen under four-point bending. (A) Experimental and computational force-displacement curves. (B)–(F) show a sequence of predicted and experimental failure modes at different stages of the load-displacement curve: the computationally predicted failure modes in (B)–(D) correspond to points A, B, and C, respectively; the experimentally observed failure modes in (E) and (F) correspond to points A and B, respectively.
Fig. 14C and D. This sequence of damage initiation and propagation is in good agreement with experimental results, as shown in Fig. 14E and F.
5. Mesoscale model development for woven composites

5.1 Woven composites description
The woven fabric composites are produced using the same DowAksa A42 carbon fiber and Dow thermoset epoxy as the UD CFRP composites. The fiber diameter and fiber volume fraction of the fiber tows (yarns) in the woven composites are 7 μm
Fig. 15 Typical initial microstructures of woven composites along warp and weft directions.
and 51.4%, respectively, the same as for the UD CFRP composites. For the woven composite, 2 × 2 twill weave plaques of 300 × 300 mm² are formed by hot compression of four prepreg plies stacked in the same layup direction to obtain a nominal thickness of 2.5 mm. The prepregs have an areal density of 660 g/m² and a weave density of 4 picks/cm. Both warp and weft yarns have the same bundle size of 12K (the number of fibers in a bundle). As shown in Fig. 15, the twill composite has a complex initial configuration. Both warp and weft yarns have a quasielliptical shape, but due to the weaving, the yarns in the two directions have different cross-section geometries. Meanwhile, the yarns in the warp direction show less undulation (a lower crimp ratio) than those in the weft direction [7,8].
5.2 Mesoscale RVE model generation for woven composites
In this section, the existing challenges in RVE modeling and mesh generation are illustrated, and a new method to address them is proposed. The first challenge is that complex geometric features, including gaps and penetrations, usually prevent high-quality meshing. Fig. 16A shows that although two contacting yarns share the same geometric boundary, certain mismatches between the discrete elements can remain after regular meshing. In addition, the gradual separation of yarn boundaries introduces tiny features that lead to poor-quality elements. The second challenge is that, with the previous method, the yarn volume fraction can only reach a relatively low value, whereas it is much higher in real applications, over 87% in the current work. Additionally, yarns are usually modeled by sweeping a constant cross-section along an idealized path; in practice, however, both the yarn shape and path evolve considerably during compression molding, depending on the neighboring contacts.
Fig. 16 (A) Penetrations and gaps are generated in the meshing process even with the shared geometry boundary. (B) After the modifications to the meshes, the processed meshes show significant improvement.
In the current work, a mesh-dependent method is proposed that processes the nodes and mesh directly to eliminate small overlaps and gaps, as shown in Fig. 16B. Gaps among mesh elements are eliminated by shifting nodes within a cut-off distance to the same location and adding nodes when necessary. TexGen is used to generate the initial mesh for the dry fabric only. Then, a compression molding simulation is performed on the dry fabric model with Abaqus. The effects of the compression simulation are twofold: first, it captures the influence of compression molding on the yarn geometry and the variations of the cross-sections along the yarn; second, it increases the yarn volume fraction in the model. A MATLAB code is developed to process the mesh model before and after the compression simulation to eliminate any tiny gaps or penetrations. A layer of zero-thickness cohesive elements and a high-quality matrix mesh are also generated with the MATLAB code after the compression simulation. The yarns are modeled with C3D8R and C3D6R elements, and the interface is modeled with COH3D6 and COH3D8 elements. To describe the complex matrix pockets accurately, tetrahedral C3D4 elements are adopted. The size of the RVE is 12.5 mm × 12.5 mm × 2.5 mm (4 × 4 yarns in-plane and four layers in the thickness direction), and the yarn volume fraction is 87%, as shown in Fig. 17. Homogeneous traction boundary conditions are also applied to the woven RVE model. The basic steps of the proposed mesh generation method are summarized below:
(1) Generate a rough mesh of the dry woven fabric using TexGen.
Fig. 17 Schematic of the woven RVE model.
(2) Apply a MATLAB code to rearrange the nodes, elements, and orientations, and generate a model for the subsequent compression molding simulation.
(3) Apply boundary conditions in the compression molding simulation according to the target geometry, and output the new node coordinates and orientations for postprocessing.
(4) Postprocess with a MATLAB code to rearrange the nodes, elements, and orientations again and establish the final RVE model, including the cohesive layer and the matrix, based on the dry fabric mesh information.
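The gap-elimination idea in step (4), shifting nodes within a cut-off distance onto a common location, can be sketched as a simple greedy merge. This is only an illustrative O(n²) stand-in for the MATLAB processing described above, not the authors' code.

```python
import math

def merge_close_nodes(nodes, cutoff):
    """Greedy node-merging sketch: any node within `cutoff` of an already
    accepted representative is snapped onto it, eliminating tiny gaps and
    overlaps between adjacent yarn meshes. Returns the merged coordinates
    and an old-index -> merged-index map for updating element connectivity."""
    reps = []       # coordinates of the merged (representative) nodes
    index_map = []  # index_map[i] = merged node index for original node i
    for p in nodes:
        for j, q in enumerate(reps):
            if math.dist(p, q) <= cutoff:
                index_map.append(j)
                break
        else:
            index_map.append(len(reps))
            reps.append(p)
    return reps, index_map
```

In a production setting a spatial hash or k-d tree would replace the inner loop, but the index map is the essential output: elements on both sides of a former gap end up referencing the same node.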
5.3 Constitutive and damage laws
In addition to the constructed mesoscale RVE models, constitutive and damage laws for each constituent are required for the failure analysis. A comprehensive schematic of the constitutive and damage law for each constituent is presented in Fig. 18. The constitutive law of the matrix is established using a paraboloidal yield criterion and an isotropic damage model, the same as in the UD RVE model; the material parameters of the epoxy matrix are listed in Table 1. The fiber tows of the woven RVE are modeled by the proposed elastic-plastic-damage model combining the LHS yield criterion and the UD RVE-based failure criteria, as discussed in Section 4.5. Critical parameters of the fiber tows in the mesoscale RVE models are obtained from the previous UD RVE model as described in Sections 4.3 and 4.4 and listed in Tables 3 and 4. The interface between matrix
Fig. 18 Constitutive and damage laws developed for RVE models of woven composites.
Table 9 Interfacial parameters used in the woven RVE model.
Mesoscale RVE | K (N/mm³) | Nn (MPa) | Ss (MPa) | GIc (J/m²) | GIIc (J/m²) | m
Woven [78] | 1×10⁵ | 20 | 20 | 360 | 480 | 1.2
and fiber tow is modeled using cohesive elements with a bilinear mixed-mode traction-separation law. The interfacial properties are listed in Table 9.
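The bilinear traction-separation response can be sketched in its single-mode form (the model itself uses a mixed-mode version); the relation delta_f = 2Gc/t0 ties the softening branch to the fracture toughness. This is a generic sketch, not the specific cohesive implementation used in the RVE model.

```python
def bilinear_traction(delta, K, t0, Gc):
    """Single-mode bilinear cohesive law: linear loading with stiffness K up
    to the onset traction t0, then linear softening to zero traction at
    delta_f, chosen so the area under the curve equals the fracture
    toughness Gc (units: K in MPa/mm, t0 in MPa, Gc in N/mm, delta in mm)."""
    delta0 = t0 / K               # separation at damage onset
    delta_f = 2.0 * Gc / t0       # separation at complete failure
    if delta <= delta0:
        return K * delta
    if delta >= delta_f:
        return 0.0
    return t0 * (delta_f - delta) / (delta_f - delta0)
```

With the Table 9 normal-mode values (K = 1×10⁵ MPa/mm, Nn = 20 MPa, GIc = 360 J/m² = 0.36 N/mm), damage onset occurs at a separation of 2×10⁻⁴ mm and complete decohesion at 0.036 mm.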
5.4 Results predicted by the woven RVE model
5.4.1 Experimental and computational stress-strain curves
Tensile samples of dimensions 200 mm × 25 mm are prepared in the warp and weft directions following ASTM standard D3039 [56]. Similarly, compression samples of dimensions 140 mm × 12 mm are prepared in the warp and weft directions following ASTM standard D6641 [57]. The experimental and computationally predicted stress-strain curves of the woven composite subjected to tensile and compressive loads in the warp and weft directions are presented in Fig. 19A and B, respectively. The predicted shapes and positions of the stress-strain curves for both tension and compression are in good agreement with the corresponding experimental curves. The mechanical properties of the woven composites obtained from the experiments and from the woven RVE model predictions under uniaxial loading are summarized in Tables 10 and 11. It should be noted that
Fig. 19 Comparison of computationally predicted and experimentally measured stress-strain curves for both warp and weft directions under uniaxial (A) tension and (B) compression.
the predicted ultimate strength is slightly lower than the experimental results. This could be attributed to differences between the microstructure of the woven RVE model and the actual microstructure of the test specimens. Under tensile loading, a clear transition region is observed in both warp and weft directions, where the tangent modulus drops after the initial elastic stage. After that, the stress increases quasilinearly again until final failure. Based on the previous experimental analysis, the stress-strain evolution is determined by multiple hardening and softening mechanisms [6,7], and the rapid development of damage is the main reason for the transition region. Meanwhile, due to the difference in crimp ratio, the warp direction carries higher stress than the weft direction, as shown in Fig. 19A. Significantly different mechanical behavior is observed under compressive loading along the warp and weft directions. The smaller initial crimp ratio gives rise to a higher compressive strength in the warp direction, although the trends of the stress-strain behavior in the two directions are similar, as shown in Fig. 19B. Moreover, the compressive stress-strain curves increase almost linearly until the damage develops to a certain extent, indicating that initial local damage of the matrix has less effect on the stiffness of the woven composite under compressive loading. Once the stress reaches the longitudinal compressive strength of the fiber tows, causing fiber breakage, there is a sharp drop in the stress-strain curves, indicating catastrophic failure of the woven composite. By comparing the experimental and computational results, we can observe that the current woven RVE model captures the distinctive mechanical behaviors under tension and compression accurately. In particular, the transition region is well reproduced, and the difference in stress-strain
Table 10 Computationally predicted vs experimentally measured mechanical properties of woven composites under uniaxial tension and compression.
Warp (1-direction), tension: modulus E11T = 62.8 GPa (CoV: 1.0%) experiment vs 62.6 GPa woven RVE; strength XT11 = 805 MPa (CoV: 1.2%) experiment vs 687.6 MPa woven RVE.
Warp (1-direction), compression: modulus E11C = 62.6 GPa (CoV: 1.2%) experiment vs 62.9 GPa woven RVE; strength XC11 = 376 MPa (CoV: 11.5%) experiment vs 344.4 MPa woven RVE.
Weft (2-direction), tension: modulus E22T = 59.5 GPa (CoV: 1.1%) experiment vs 59.1 GPa woven RVE; strength YT22 = 559 MPa (CoV: 4.6%) experiment vs 577.4 MPa woven RVE.
Weft (2-direction), compression: modulus E22C = 55.2 GPa (CoV: 1.8%) experiment vs 55.7 GPa woven RVE; strength YC22 = 303 MPa (CoV: 10.8%) experiment vs 258.3 MPa woven RVE.
Experimental tension and compression values follow ASTM D3039 [56] and ASTM D6641 [57], respectively.
Table 11 Summary of the elastic properties predicted from the woven RVE model.
E11 (GPa) | E22 (GPa) | E33 (GPa) | G12 (GPa) | G13 (GPa) | G23 (GPa) | ν21 | ν31 | ν32
62.6 | 59.1 | 12.89 | 4.63 | 4.56 | 4.56 | 0.6 | 0.04 | 0.12
curves along the warp and weft directions is preserved, which demonstrates the validity of the proposed multiscale modeling framework.

5.4.2 Damage initiation and propagation process
Specimens are loaded to specific strain levels (0.40%, 0.75%, and 1.20%) in interrupted tests to obtain further information about the failure evolution and failure modes along the warp direction. The deformed specimens are subsequently sectioned for microstructural analysis using a digital optical microscope (Keyence VHX 2000). The strain levels of 0.40% and 0.75% correspond approximately to the initiation of nonlinearity and to an inflection point (or transition stage) in the stress-strain curves, as shown in Fig. 20A. Due to the relatively
Fig. 20 (A) Experimentally measured stress-strain curves under tension along warp direction. (B)–(D) show the damage accumulation process of woven composites at specific strain levels of 0.40%, 0.75%, and 1.20%.
low transverse strength of the fiber tows, interfiber transverse fracture initiates first in the weft fiber tows at a strain of 0.40%, as shown in Fig. 20B. As the strain increases to 0.75%, more transverse cracks develop in the fiber tows, along with matrix cracking and interply delamination (see Fig. 20C), and the rapidly accumulating damage leads to the transition stage in the stress-strain curve. At a strain of 1.20%, transverse cracks, matrix cracks, and delamination cover the whole observation area, as shown in Fig. 20D. Finally, fiber breakage in the warp tows occurs at a strain of 1.5%, which eventually causes the ultimate tensile failure of the woven composites. For a more in-depth investigation of the damage evolution process, the woven RVE model is further utilized to investigate the microscopic failure mechanisms of the woven composites under tensile and compressive loadings. In the stress-strain curves predicted by the woven RVE model, the damage initiation and propagation points of the constituents (weft tows, matrix, interface between fiber tows and matrix, warp tows) under tensile load in the warp and weft directions are marked in Figs. 21A and 22A, respectively. The damage variables dWeft and dWarp in the weft and warp tows, dMatrix in the matrix, and dInterface in the interface between fiber tows and matrix appear and gradually accumulate with the applied loading. Because the transverse strength of the fiber tows (62.1 MPa) and the tensile strength of the matrix (61.6 MPa) are nearly the same and relatively low, interfiber transverse cracks and matrix cracks initiate first in the fiber tows and matrix at a strain of 0.40% under both warp and weft tension, and then propagate along a path perpendicular to the loading axis, as shown in Figs. 21B and C and 22B and C. Subsequently, interfacial cracks at the interface between fiber tows and matrix are triggered at a strain of 0.75% (Figs. 21D and 22D), and the accumulation of transverse/matrix cracks as well as interfacial cracks to a high level leads to an obvious turning point in the stress-strain curve. Finally, longitudinal fiber damage initiates in the fiber tows parallel to the loading direction at a strain of about 1.35% for warp tension (1.36% for weft tension) and results in the final failure of the woven RVE (Figs. 21E and 22E), causing a sudden drop of the stress-strain curves. The predicted failure behaviors show good agreement with the experimental observations for warp tension. In the compression case, the samples also show a catastrophic failure mode. Matrix cracks initiate first in the transverse fiber tows at a strain of 0.44% under both warp and weft compression and propagate as the load increases (Figs. 23A and 24A), leading to the slightly nonlinear behavior of the compressive stress-strain curves, as shown in Figs. 23B and 24B. However, the
Fig. 21 A sequence of failure initiation and propagation observed in the woven RVE model under tension along the warp direction (X-direction): (A) a few selected cases illustrating damage initiation and propagation in different constituents, marked as solid points on the stress-strain curve; (B) transverse crack initiation and propagation in weft fiber tows; (C) matrix damage initiation and propagation; (Continued)
Carbon fiber-reinforced polymer composites
Fig. 21, cont’d (D) interfacial debonding initiation and propagation; (E) fiber breakage initiation and propagation in warp fiber tows. SDV39 and SDV37 represent the transverse and longitudinal tensile damage evolution indices, respectively.
woven RVE still exhibits good load-bearing capacity with a certain accumulation of matrix damage, as indicated in the stress-strain curves. The fiber failure, on the other hand, is much more brittle than the matrix failure. Once fiber compressive failure initiates, at ε = 0.56% for warp compression or ε = 0.52% for weft compression, the damage propagates rather quickly perpendicular to the loading direction and finally leads to the catastrophic failure of the woven composites, as shown in Figs. 23C and 24C. In contrast to the gradual damage accumulation under tension, no clear interfacial or transverse fiber-tow cracks are observed in the compression simulations until the final rupture. The transverse fiber-tow damage (Figs. 23D and 24D) and interfacial delamination (Figs. 23E and 24E) are therefore caused by the fiber breakage and occur simultaneously at the final stage.
Fig. 22 A sequence of failure initiation and propagation observed in the woven RVE model under tension along the weft direction (Y-direction): (A) a few selected cases illustrating damage initiation and propagation in different constituents, marked as solid points on the stress-strain curve; (B) transverse crack initiation and propagation in warp fiber tows; (C) matrix damage initiation and propagation; (Continued)
Fig. 22, cont’d (D) interfacial debonding initiation and propagation; (E) fiber breakage initiation and propagation in weft fiber tows.
The above analysis shows that the proposed woven RVE model provides reliable predictions of the failure strength under tension and compression and captures the realistic damage mechanisms of woven composites under different loading conditions.
6. Macroscale model of U-shaped part made of UD and woven CFRP composites

In this section, we use a U-shaped part subjected to a four-point bending load to demonstrate the final stage of the proposed bottom-up multiscale modeling framework. The detailed geometry of the U-shaped
Fig. 23 (A) Matrix crack initiation and propagation stages corresponding to the marked points in the (B) stress-strain curve under compression along the warp direction. (C)–(E) showcase the damage of warp fiber tows, weft fiber tows, and the interface at ε = 0.56%, respectively. SDV40 represents the transverse compressive damage evolution index.
Fig. 24 (A) Matrix crack initiation and propagation stages corresponding to the marked points in the (B) stress-strain curve under compression along the weft direction. (C)–(E) showcase the damage of weft fiber tows, warp fiber tows, and the interface at ε = 0.52%, respectively.
Fig. 25 (A) Geometry and dimension of the U-shaped sample (unit: mm). (B) Schematic of the testing setup. (C) Macroscale model of a beam sample under four-point bending load.
part is shown in Fig. 25A. The UD CFRP U-shaped part is made up of [0/90/90/0/0/0]s and [0/60/60/0/60/60]s layups with 12 layers (denoted UD 0-90 and UD 0-60, respectively), and the layups of the woven CFRP U-shaped part are [90/0/90/0]s and [45/45/45/45]s with four layers (denoted woven 0-90 and woven 45-45, respectively). The warp yarn orientation of 0 degrees is parallel to the length direction of the U-shaped sample. For the experimental setup, a backplate of the same layup and thickness is glued to the bottom of the U-shaped sample at the location of the lower roller. The testing setup of the four-point bending is shown in Fig. 25B. The specimen is tested at a constant displacement rate of 0.2 mm/min. The experimental load-displacement curves and failure modes are obtained for comparison with the computational predictions. Fig. 25C shows the macroscale model of the beam under four-point bending load in LS-DYNA [79] with 12 layers (UD) or 4 layers (woven) of thick shell elements. The thick shell element is an 8-noded element following the kinematics of shell theory but with an added strain component through
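Symmetric layup codes like [0/90/90/0/0/0]s expand mechanically by mirroring the half-stack about the laminate midplane. A small helper (illustrative only, not part of the chapter's toolchain) makes the resulting 12-ply UD stacking explicit:

```python
def expand_symmetric_layup(half_stack):
    """Expand a symmetric layup [a/b/.../n]s into its full ply-angle
    list by mirroring the half-stack about the laminate midplane."""
    return list(half_stack) + list(reversed(half_stack))

# UD 0-90 layup from the text: [0/90/90/0/0/0]s -> 12 plies
ud_0_90 = expand_symmetric_layup([0, 90, 90, 0, 0, 0])
```

The expanded list is symmetric by construction, so it reads the same from either laminate face, which is the property the subscript "s" encodes.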
the thickness. Each layer of thick shell elements represents one ply. The mesh size is 4 mm. A layer of cohesive elements of 0.01 mm thickness is inserted between adjacent layers to simulate delamination. A linear-elastic composite material model, MAT_54 [80] in LS-DYNA, combined with the Chang/Chang failure criteria [81], is adopted to model the ply behavior with the thick shell elements. Both global and local coordinate systems are defined to interpret the ply orientations and accurately describe the layers' mechanical behavior. We note that the input parameters of the LS-DYNA model are informed by the microscale and mesoscale models and the computational results discussed in earlier sections. The material input parameters, including the elastic properties and strengths predicted from the UD and woven RVE models, are listed in Tables 3, 4, 7, and 8 in Sections 4 and 5. A cohesive material model considering strain-rate effects, MAT_240 [82] in LS-DYNA, is adopted to simulate the delamination. It uses a bilinear traction-separation law with a quadratic yield and damage-initiation criterion in mixed mode, and the damage evolution is governed by a power-law formulation. The input parameters of the cohesive elements for the different modes are obtained from experimental and computational results in our previous work [13]. The interfacial parameters between adjacent plies are listed in Table 12. Fig. 26A and B show comparisons of the load-displacement curves from computational modeling and experimental testing of the UD 0-90, UD 0-60, woven 0-90, and woven 45-45 specimens. The computational models capture the overall behaviors reasonably well for all the U-shaped samples. For the UD CFRP U-shaped samples, the load increases linearly at first and then becomes slightly nonlinear toward the final failure stage, while for the woven CFRP U-shaped samples, nonlinear behavior occurs rather early, especially for woven 45-45.
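The two cohesive-zone checks described above, quadratic nominal-stress initiation and power-law energy-based propagation, can be sketched in a few lines. The strengths, toughnesses, and the exponent alpha below are illustrative placeholders, not the calibrated MAT_240 inputs of Table 12:

```python
def quadratic_initiation_index(t_n, t_s, t_t, s_n, s_s, s_t):
    """Quadratic nominal-stress damage-initiation index for a cohesive
    interface: damage starts when the index reaches 1. Only a tensile
    (positive) normal traction t_n contributes; compression does not
    open the interface."""
    return (max(t_n, 0.0) / s_n) ** 2 + (t_s / s_s) ** 2 + (t_t / s_t) ** 2

def power_law_propagation_index(G_I, G_II, G_Ic, G_IIc, alpha=1.0):
    """Power-law mixed-mode energy criterion: the delamination
    propagates when the index reaches 1."""
    return (G_I / G_Ic) ** alpha + (G_II / G_IIc) ** alpha
```

For pure mode I the initiation index reduces to (t_n/s_n)^2 and the propagation index to G_I/G_Ic, so each criterion recovers the single-mode strength and toughness limits as special cases.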
For both the UD 0-60 and woven 45-45 samples, the failure is concentrated around the direct loading locations. All the samples show significant corner failure (local failure) due to stress concentration during deformation. As the corner failure becomes more severe, more fiber-direction failure/fiber breakage develops, which results in a dramatic load drop and catastrophic laminated-layer failure of the UD CFRP U-shaped samples, as shown in Fig. 26C and D.

Table 12 Interfacial parameters between adjacent plies [13].
K (MPa/mm): 105
σ1 (MPa): 913
σ2, σ3 (MPa): 17
GIc (J/m2): 60
GIIc (J/m2): 550

Fig. 26 Comparison of load-displacement curves between modeling and experimental testing of (A) UD and (B) woven CFRP beam samples. Comparison of experimentally observed and computationally predicted failure modes of (C) UD 0-60 and (D) woven 45-45 under four-point bending load.

Due to a slight deviation in the locations of the lower and upper rollers during the testing setup, the experimentally observed failure stages of UD 0-60 at the two loading points (Fig. 26C) differ. Compared to experiments, the computational model has the advantage of clearly revealing the failure initiation and evolution process of UD 0-60 and woven 45-45 under a four-point bending load. Symmetric local failure first occurs at the corners of the sample in contact with the upper rollers, where the stress concentration is higher, and the final failure of the laminated layers takes place when the load increases to a critical value, as shown in Fig. 26C and D. The predicted failure modes of the proposed macroscale U-shaped part models agree well with the experimental observations.
7. Conclusions

This chapter presents a systematic investigation into the failure behaviors of UD and woven CFRP composites through a multiscale modeling framework built with a bottom-up approach. The multiscale models combine MD simulations at the nanoscale, UD RVE models at the microscale, woven RVE models at the mesoscale, and, finally, macroscale models of U-shaped parts. The major conclusions and takeaways are summarized below.
(1) At the nanoscale, the average mechanical properties of the interphase region are obtained from an analytical gradient model and MD simulations. The average Young's modulus and strength of the interphase region are found to increase by factors of about 5 and 9, respectively, compared to the bulk epoxy resin matrix.
(2) A microscale UD RVE model has been adopted to investigate the failure mechanisms and failure criteria of UD CFRP composites under uniaxial and multiaxial stress states. Based on the computational results and analyses, a new elasto-plastic constitutive law with damage evolution is proposed for homogenized UD CFRP composites. The newly proposed elastic-plastic-damage model significantly improves the efficiency of modeling the fiber tows of woven composites.
(3) Conforming model/mesh generation methods are adopted to construct mesoscale woven RVE models. The complex failure mechanisms of woven composites under various loading conditions are captured by the mesoscale computational models. Building upon the mesoscale model, we also present an effective computational approach to analyze the complex failure behaviors of woven composites, which is of significant interest for the design and optimization of CFRP composite-based structural components.
(4) A macroscale U-shaped part consisting of UD or woven CFRP composites under four-point bending is presented, which integrates the models and results at the nano-, micro-, and mesoscales. The macroscale model shows good agreement with the testing results and captures the failure strength and behaviors.
This successful demonstration of integrating lower-scale structural and mechanical attributes into large-scale models provides a promising solution to the grand challenge of realistically modeling failure behaviors of composites used in structural applications. In conclusion, the validated framework with well-integrated multiscale models and tools presented in this chapter has been demonstrated to link the structures and mechanical properties effectively and accurately from
different levels and predict the failure behaviors of CFRP composites, including, but not limited to, UD and woven CFRP composites. The multiscale modeling framework has the potential to significantly shorten the time from composite development to deployment.
References
[1] H.C. Kim, T.J. Wallington, Life-cycle energy and greenhouse gas emission benefits of lightweighting in automobiles: review and harmonization, Environ. Sci. Technol. 47 (2013) 6089–6097.
[2] Q. Liu, Y. Lin, Z. Zong, G. Sun, Q. Li, Lightweight design of carbon twill weave fabric composite body structure for electric vehicle, Compos. Struct. 97 (2013) 231–238.
[3] J. Xu, S.V. Lomov, I. Verpoest, S. Daggumati, W. Van Paepegem, J. Degrieck, A comparative study of twill weave reinforced composites under tension–tension fatigue loading: experiments and meso-modelling, Compos. Struct. 135 (2016) 306–315.
[4] G. Zhou, Q. Sun, J. Fenner, D. Li, D. Zeng, X. Su, et al., Crushing behaviors of unidirectional carbon fiber reinforced plastic composites under dynamic bending and axial crushing loading, Int. J. Impact Eng. 140 (2020) 103539.
[5] A. Shaik, Y. Kalariya, R. Pathan, A. Salvi, ICME based hierarchical design using composite materials for automotive structures, in: Proceedings of the 4th World Congress on Integrated Computational Materials Engineering (ICME 2017), Springer, 2017, pp. 33–43.
[6] G. Zhou, Q. Sun, D. Li, Z. Meng, Y. Peng, Z. Chen, et al., Meso-scale modeling and damage analysis of carbon/epoxy woven fabric composites under in-plane tension and compression loadings, Int. J. Mech. Sci. 190 (2021) 105980.
[7] G. Zhou, Q. Sun, D. Li, Z. Meng, Y. Peng, D. Zeng, et al., Effects of fabric architectures on mechanical and damage behaviors in carbon/epoxy woven composites under multiaxial stress states, Polym. Test. 90 (2020) 106657.
[8] G. Zhou, Q. Sun, Z. Meng, D. Li, Y. Peng, D. Zeng, et al., Experimental investigation on the effects of fabric architectures on mechanical and damage behaviors of carbon/epoxy woven composites, Compos. Struct. (2020).
[9] H. Xu, Y. Li, D. Zeng, Process integration and optimization of ICME carbon fiber composites for vehicle lightweighting: a preliminary development, SAE Int. J. Mater. Manuf. 10 (2017) 274–281.
[10] Q. Sun, Z. Meng, G. Zhou, S.-P. Lin, H. Kang, S. Keten, et al., Multi-scale computational analysis of unidirectional carbon fiber reinforced polymer composites under various loading conditions, Compos. Struct. 196 (2018) 30–43.
[11] Q. Sun, G. Zhou, Z. Meng, H. Guo, Z. Chen, H. Liu, et al., Failure criteria of unidirectional carbon fiber reinforced polymer composites informed by a computational micromechanics model, Compos. Sci. Technol. 172 (2019) 81–95.
[12] Q. Sun, H. Guo, G. Zhou, Z. Meng, Z. Chen, H. Kang, et al., Experimental and computational analysis of failure mechanisms in unidirectional carbon fiber reinforced polymer laminates under longitudinal compression loading, Compos. Struct. 203 (2018) 335–348.
[13] Q. Sun, G. Zhou, H. Guo, Z. Meng, Z. Chen, H. Liu, et al., Failure mechanisms of cross-ply carbon fiber reinforced polymer laminates under longitudinal compression with experimental and computational analyses, Compos. Part B 167 (2019) 147–160.
[14] Q. Sun, G. Zhou, Z. Meng, M. Jain, X. Su, An integrated computational materials engineering framework to analyze the failure behaviors of carbon fiber reinforced polymer composites for lightweight vehicle applications, Compos. Sci. Technol. (2020) 108560.
[15] J. Gao, M. Shakoor, G. Domel, M. Merzkirch, G. Zhou, D. Zeng, et al., Predictive multiscale modeling for unidirectional carbon fiber reinforced polymers, Compos. Sci. Technol. 186 (2020) 107922.
[16] W. Tao, P. Zhu, C. Xu, Z. Liu, Uncertainty quantification of mechanical properties for three-dimensional orthogonal woven composites. Part II: Multiscale simulation, Compos. Struct. 235 (2020) 111764.
[17] Z. Ullah, X.-Y. Zhou, L. Kaczmarczyk, E. Archer, A. McIlhagger, E. Harkin-Jones, A unified framework for the multi-scale computational homogenisation of 3D-textile composites, Compos. Part B 167 (2019) 582–598.
[18] T. Zheng, L. Guo, J. Huang, G. Liu, A novel mesoscopic progressive damage model for 3D angle-interlock woven composites, Compos. Sci. Technol. 185 (2020) 107894.
[19] S.L. Omairey, P.D. Dunning, S. Sriramula, Multiscale surrogate-based framework for reliability analysis of unidirectional FRP composites, Compos. Part B 173 (2019) 106925.
[20] Q. Sun, G. Zhou, H. Tang, Z. Meng, M. Jain, X. Su, et al., In-situ effect in cross-ply laminates under various loading conditions analyzed with hybrid macro/micro-scale computational models, Compos. Struct. 261 (2021) 113592.
[21] Q. Sun, G. Zhou, H. Tang, Z. Chen, J. Fenner, Z. Meng, et al., A combined experimental and computational analysis of failure mechanisms in open-hole cross-ply laminates under flexural loading, Compos. Part B 215 (2021) 108803.
[22] H. Tang, Q. Sun, Z. Li, X. Su, W. Yan, Longitudinal compression failure of 3D printed continuous carbon fiber reinforced composites: an experimental and computational study, Compos. A Appl. Sci. Manuf. 146 (2021) 106416.
[23] G. Zhou, Q. Sun, Z. Meng, D. Li, Y. Peng, D. Zeng, et al., Experimental investigation on the effects of fabric architectures on mechanical and damage behaviors of carbon/epoxy woven composites, Compos. Struct. 257 (2021) 113366.
[24] Z. Meng, M.A. Bessa, W. Xia, W. Kam Liu, S. Keten, Predicting the macroscopic fracture energy of epoxy resins from atomistic molecular simulations, Macromolecules 49 (2016) 9474–9483.
[25] N. Vu-Bac, M. Bessa, T. Rabczuk, W.K. Liu, A multiscale model for the quasi-static thermo-plastic behavior of highly cross-linked glassy polymers, Macromolecules 48 (2015) 6713–6723.
[26] Q. Wu, M. Li, Y. Gu, Y. Li, Z. Zhang, Nano-analysis on the structure and chemical composition of the interphase region in carbon fiber composite, Compos. A Appl. Sci. Manuf. 56 (2014) 143–149.
[27] C. Shao, S. Keten, Stiffness enhancement in nacre-inspired nanocomposites due to nanoconfinement, Sci. Rep. 5 (2015) 1–12.
[28] W. Xia, J. Song, Z. Meng, C. Shao, S. Keten, Designing multi-layer graphene-based assemblies for enhanced toughness in nacre-inspired nanocomposites, Mol. Syst. Des. Eng. 1 (2016) 40–47.
[29] T. Li, Z. Meng, S. Keten, Interfacial mechanics and viscoelastic properties of patchy graphene oxide reinforced nanocomposites, Carbon 158 (2020) 303–313.
[30] W. Zhao, R.P. Singh, C.S. Korach, Effects of environmental degradation on near-fiber nanomechanical properties of carbon fiber epoxy composites, Compos. A Appl. Sci. Manuf. 40 (2009) 675–678.
[31] B. Wang, G. Fang, S. Liu, J. Liang, Effect of heterogeneous interphase on the mechanical properties of unidirectional fiber composites studied by FFT-based method, Compos. Struct. 220 (2019) 642–651.
[32] C. Kiritsi, N. Anifantis, Load carrying characteristics of short fiber composites containing a heterogeneous interphase region, Comput. Mater. Sci. 20 (2001) 86–97.
[33] ASTM Standard D638, Standard Test Method for Tensile Properties of Plastics, ASTM International, West Conshohocken, PA, 2010.
[34] ASTM Standard D695-15, Standard Test Method for Compressive Properties of Rigid Plastics, 2015.
[35] ASTM International, Standard Test Method for Linear-Elastic Plane-Strain Fracture Toughness K[Ic] of Metallic Materials, 2013.
[36] A. Melro, P. Camanho, S. Pinho, Generation of random distribution of fibres in long-fibre reinforced composites, Compos. Sci. Technol. 68 (2008) 2092–2102.
[37] A. Melro, P. Camanho, F.A. Pires, S. Pinho, Micromechanical analysis of polymer composites reinforced by unidirectional fibres: part I, constitutive modelling, Int. J. Solids Struct. 50 (2013) 1897–1905.
[38] Hibbitt, Karlsson and Sorensen Inc., ABAQUS/Explicit: User's Manual, 2001.
[39] D.A. Vajari, B.F. Sørensen, B.N. Legarth, Effect of fiber positioning on mixed-mode fracture of interfacial debonding in composites, Int. J. Solids Struct. 53 (2015) 58–69.
[40] Y. Zhou, Z.M. Huang, L. Liu, Prediction of interfacial debonding in fiber-reinforced composite laminates, Polym. Compos. 40 (2019) 1828–1841.
[41] Z.-M. Huang, On micromechanics approach to stiffness and strength of unidirectional composites, J. Reinf. Plast. Compos. 38 (2019) 167–196.
[42] M. Heidari-Rarani, M. Shokrieh, P. Camanho, Finite element modeling of mode I delamination growth in laminated DCB specimens with R-curve effects, Compos. Part B 45 (2013) 897–903.
[43] M. Galli, J. Botsis, J. Janczak-Rusch, An elastoplastic three-dimensional homogenization model for particle reinforced composites, Comput. Mater. Sci. 41 (2008) 312–321.
[44] T. Sadowski, T. Nowicki, Numerical investigation of local mechanical properties of WC/Co composite, Comput. Mater. Sci. 43 (2008) 235–241.
[45] E. Totry, C. González, J. LLorca, Prediction of the failure locus of C/PEEK composites under transverse compression and longitudinal shear through computational micromechanics, Compos. Sci. Technol. 68 (2008) 3128–3136.
[46] E. Totry, C. González, J. LLorca, Failure locus of fiber-reinforced composites under transverse compression and out-of-plane shear, Compos. Sci. Technol. 68 (2008) 829–839.
[47] S.H.M. Sádaba, F. Naya, et al., Special-purpose elements to impose periodic boundary conditions for multiscale computational homogenization of composite materials with the explicit finite element method, Compos. Struct. 208 (2019) 434–441.
[48] T. Kanit, S. Forest, I. Galliet, V. Mounoury, D. Jeulin, Determination of the size of the representative volume element for random composites: statistical and numerical approach, Int. J. Solids Struct. 40 (2003) 3647–3679.
[49] D. Pahr, H. Böhm, Assessment of mixed uniform boundary conditions for predicting the mechanical behavior of elastic and inelastic discontinuously reinforced composites, Comput. Model. Eng. Sci. 34 (2008) 117–136.
[50] G. Chen, U. Ozden, A. Bezold, C. Broeckmann, A statistics based numerical investigation on the prediction of elasto-plastic behavior of WC–Co hard metal, Comput. Mater. Sci. 80 (2013) 96–103.
[51] C. González, J. LLorca, Mechanical behavior of unidirectional fiber-reinforced polymers under transverse compression: microscopic mechanisms and modeling, Compos. Sci. Technol. 67 (2007) 2795–2806.
[52] F. Naya, C. González, C. Lopes, S. Van der Veen, F. Pons, Computational micromechanics of the transverse and shear behavior of unidirectional fiber reinforced polymers including environmental effects, Compos. A Appl. Sci. Manuf. 92 (2017) 146–157.
[53] F. Ducobu, E. Rivière-Lorphèvre, E. Filippi, On the introduction of adaptive mass scaling in a finite element model of Ti6Al4V orthogonal cutting, Simul. Model. Pract. Theory 53 (2015) 1–14.
[54] X. Bai, M.A. Bessa, A.R. Melro, P.P. Camanho, L. Guo, W.K. Liu, High-fidelity micro-scale modeling of the thermo-visco-plastic behavior of carbon fiber polymer matrix composites, Compos. Struct. 134 (2015) 132–141.
[55] S. Yurgartis, Measurement of small angle fiber misalignments in continuous fiber composites, Compos. Sci. Technol. 30 (1987) 279–293.
[56] ASTM Standard D3039/D3039M-14, Standard Test Method for Tensile Properties of Polymer Matrix Composite Materials, 2014.
[57] ASTM Standard D6641/D6641M-14, Standard Test Method for Compressive Properties of Polymer Matrix Composite Materials Using a Combined Loading Compression (CLC) Test Fixture, ASTM International, West Conshohocken, 2014.
[58] J.S. Fenner, I.M. Daniel, Testing the 2-3 shear strength of unidirectional composite, in: Mechanics of Composite, Hybrid and Multifunctional Materials, vol. 5, Springer, 2019, pp. 77–84.
[59] R. Gutkin, S. Pinho, P. Robinson, P. Curtis, A finite fracture mechanics formulation to predict fibre kinking and splitting in CFRP under combined longitudinal compression and in-plane shear, Mech. Mater. 43 (2011) 730–739.
[60] I.M. Daniel, S.M. Daniel, J.S. Fenner, A new yield and failure theory for composite materials under static and dynamic loading, Int. J. Solids Struct. 148 (2018) 79–93.
[61] I.M. Daniel, Yield and failure criteria for composite materials under static and dynamic loading, Prog. Aerosp. Sci. 81 (2016) 18–25.
[62] K.-S. Liu, S.W. Tsai, A progressive quadratic failure criterion for a laminate, Compos. Sci. Technol. 58 (1998) 1023–1032.
[63] P.P. Camanho, C.G. Dávila, S.T. Pinho, L. Iannucci, P. Robinson, Prediction of in situ strengths and matrix cracking in composites under transverse tension and in-plane shear, Compos. A Appl. Sci. Manuf. 37 (2006) 165–176.
[64] N.T. Chowdhury, J. Wang, W.K. Chiu, W. Yan, Matrix failure in composite laminates under compressive loading, Compos. A Appl. Sci. Manuf. 84 (2016) 103–113.
[65] N.T. Chowdhury, J. Wang, W.K. Chiu, W. Yan, Matrix failure in composite laminates under tensile loading, Compos. Struct. 135 (2016) 61–73.
[66] I. Daniel, B. Werner, J. Fenner, Strain-rate-dependent failure criteria for composites, Compos. Sci. Technol. 71 (2011) 357–364.
[67] I.M. Daniel, S.M. Daniel, J.S. Fenner, A new yield and failure theory for composite materials under static and dynamic loading, Int. J. Solids Struct. (2017).
[68] K.W. Gan, T. Laux, S.T. Taher, J.M. Dulieu-Barton, O.T. Thomsen, A novel fixture for determining the tension/compression-shear failure envelope of multidirectional composite laminates, Compos. Struct. 184 (2018) 662–673.
[69] S.T. Pinho, Modelling Failure of Laminated Composites Using Physically-Based Failure Models, 2005.
[70] P. Camanho, A. Arteiro, A. Melro, G. Catalanotti, M. Vogler, Three-dimensional invariant-based failure criteria for fibre-reinforced composites, Int. J. Solids Struct. 55 (2015) 92–107.
[71] S.T. Pinho, C.G. Davila, P.P. Camanho, L. Iannucci, P. Robinson, Failure models and criteria for FRP under in-plane or three-dimensional stress states including shear nonlinearity, 2005.
[72] C.S. Yerramalli, A.M. Waas, A failure criterion for fiber reinforced polymer composites under combined compression–torsion loading, Int. J. Solids Struct. 40 (2003) 1139–1164.
[73] W. Michaeli, M. Mannigel, F. Preller, On the effect of shear stresses on the fibre failure behaviour in CFRP, Compos. Sci. Technol. 69 (2009) 1354–1357.
[74] P. Soden, M. Hinton, A. Kaddour, Biaxial test results for strength and deformation of a range of E-glass and carbon fibre reinforced composite laminates: failure exercise benchmark data, in: Failure Criteria in Fibre-Reinforced-Polymer Composites, Elsevier, 2004, pp. 52–96.
[75] D.A. Vajari, C. González, J. Llorca, B.N. Legarth, A numerical study of the influence of microvoids in the transverse mechanical response of unidirectional composites, Compos. Sci. Technol. 97 (2014) 46–54.
[76] C. Liu, Y. Huang, M. Stout, On the asymmetric yield surface of plastically orthotropic materials: a phenomenological study, Acta Mater. 45 (1997) 2397–2406.
[77] S. Pinho, L. Iannucci, P. Robinson, Physically based failure models and criteria for laminated fibre-reinforced composites with emphasis on fibre kinking. Part II: FE implementation, Compos. A Appl. Sci. Manuf. 37 (2006) 766–777.
[78] A. Turon, C.G. Davila, P.P. Camanho, J. Costa, An engineering solution for mesh size effects in the simulation of delamination using cohesive zone models, Eng. Fract. Mech. 74 (2007) 1665–1682.
[79] Y.D. Murray, Users Manual for LS-DYNA Concrete Material Model 159, Federal Highway Administration, Office of Research, United States, 2007.
[80] B. Wade, P. Feraboli, M. Osborne, M. Rassaian, Simulating Laminated Composite Materials Using LS-DYNA Material Model MAT54: Single-Element Investigation, Federal Aviation Administration, DOT/FAA/TC-14/19, 2015, pp. 1–36.
[81] F.-K. Chang, K.-Y. Chang, A progressive damage model for laminated composites containing stress concentrations, J. Compos. Mater. 21 (1987) 834–855.
[82] J.K. Sønstabø, D. Morin, M. Langseth, A Cohesive Element Model for Large-Scale Crash Analyses in LS-DYNA®, 14th International Users Conference, Bamberg, 2016.
CHAPTER EIGHT
Engineering elasticity inspired by natural biopolymers

Mohammad Madani a,b,⁎, Chengeng Yang c,⁎, Genevieve Kunkel a,⁎, and Anna Tarakanova a,c

a Department of Mechanical Engineering, University of Connecticut, Storrs, CT, United States
b Department of Computer Science & Engineering, University of Connecticut, Storrs, CT, United States
c Department of Biomedical Engineering, University of Connecticut, Storrs, CT, United States
Contents
1. Introduction 293
2. Sequence and structure in elastomeric biopolymers 295
2.1 Elastomeric sequences and motifs 295
2.2 Secondary and tertiary structure of elastomeric protein polymers 298
3. Cross-linking for tuning elastomeric biopolymer properties 299
4. Intrinsic and extrinsic factors modulating elastomeric protein elasticity 303
4.1 Conformational entropic effects in elastin-based materials 303
4.2 Solvent and hydration effects in elastomeric proteins 305
4.3 Temperature as a trigger for modulating elastomeric biomaterials 308
5. Computational approaches to elastomeric protein polymers 314
5.1 Case study 1: Computational smart polymer design based on elastin protein mutability 317
5.2 Case study 2: Elasticity, structure, and relaxation of extended proteins under force 318
5.3 Case study 3: Effect of sodium chloride on the structure and stability of spider silk’s N-terminal protein domain 318
5.4 Case study 4: Molecular model of human tropoelastin and implications of associated mutations 319
6. Conclusion 320
References 320

1. Introduction
⁎ These authors contributed equally to this work.

Fundamentals of Multiscale Modeling of Structural Materials, https://doi.org/10.1016/B978-0-12-823021-3.00011-7

Elastomeric proteins such as collagen [1], elastin [2,3], fibrillin [4], resilin [5], spider silk [6], and abductin [7] represent a unique class of biomolecular materials that exhibit large extensibility, reversible deformation, and
high resilience to applied forces [2]. The ability of a protein to exhibit elasticity lies in the molecular and structural organization of its amino acids, which encode protein self-assembly into a network of chains connected by covalent linkages, so that applied stress is distributed throughout the structure and modulated by noncovalent linkages [2,8,9]. Leveraging the mechanical properties of elastomeric proteins, including but not limited to collagen [1], fibrillin [10], elastin [11,12], and resilin [13–15], to understand and develop high-performance biomaterials has remained of significant research interest [2,16–20]. Elastomeric proteins are ubiquitous in nature. Consider collagen, which, together with elastic fibers, imparts elasticity and resilience to connective tissues and bone [21,22]. Wheat gluten proteins form networks that give baked goods their viscoelastic stretch [23]. Tropoelastin is a dominant component of the elastic fibers that provide strength and elasticity to tissues such as blood vessels, skin, and lungs [11,24]. These examples also demonstrate the characteristics that make elastomeric proteins attractive candidates for designing elastic and resilient biopolymers. In this chapter, we will focus on elastin and resilin protein polymers as example systems to survey essential features and molecular mechanisms of biopolymer elasticity, discuss how such biopolymers can be tailored with respect to specific properties, and review associated applications. Elastin provides tissues with resilience through stretch and recoil cycles and is primarily made up of cross-linked tropoelastin monomers [3]. Encoded by the ELN gene, tropoelastin is a 60–70 kDa protein translated from 36 exons, with a total length of 698 residues [25]. Mutations in the ELN gene may alter tropoelastin production levels or interaction properties, leading to the emergence of diseases such as cutis laxa [26,27], supravalvular aortic stenosis [28], or Marfan syndrome [29].
Tropoelastin exons vary in length but occur in alternating patterns of hydrophobic and cross-linking domains [24,25,30]. Studies on the hierarchical assembly behavior of tropoelastin have shown that the flexibility and shape exhibited by the wild-type (WT) protein are essential for the formation of higher-order elastic fibers, through a process called elastogenesis [25,31,32]. Tropoelastin is primarily deposited during prenatal development and childhood and has an estimated half-life of 70 years due to its remarkable durability and resistance to degradation. Resilin is an isotropic rubber-like protein found in the cuticle that composes much of the exoskeleton of insects [33,34]. Full-length resilin in Drosophila melanogaster contains three significant domains constituting a total of 620 residues [35]. It was first identified in 1960, during an investigation of the wing-hinge ligament of locusts and the elastic tendons of dragonflies [33]. More recently, resilin has also been found in some parasites, such as monogenean fish parasites [36], and in the tymbal structures of cicadas [37]. In insects,
resilin provides the rubber elasticity necessary for flight [38], sound production [39], enhanced attachment efficiency of clamps [36], flexibility of wing vein joints [40], feeding [41], and energy storage for movement [42]. Some studies suggest that in most insects, chitin, a relatively hard polysaccharide, rather than resilin, is largely responsible for energy storage [40]. In these species, resilin and the chitinous cuticle form a composite material that stores and releases energy [40]. In Anisoptera insects, resilin provides the conditions for returning a muscle to its native position after contraction [43]. In damselflies, resilin patches distributed along the longitudinal veins on the wing surfaces play a pivotal role in the elasticity and flexibility of the wing [44–46]. In the hind wings of beetles, resilin efficiently enhances flight performance by preventing torsion, improving the tensile strength of the wings, and mitigating the stress and strain caused by the folding and unfolding of the wings [47]. Given the abundance of elastomeric proteins in nature, recombinant approaches have been used to synthesize novel elastomeric polypeptides that mimic naturally occurring proteins, which we term XLPs in the remainder of the chapter. XLPs offer great advantages for varied applications such as tissue engineering [48], drug delivery [49,50], and biomaterial design [2,51]. XLPs mimic the properties of naturally occurring elastomeric proteins and are highly tunable to exhibit customized properties [50,52,53]. While natural elastomeric proteins are optimized for their various functions, XLPs can offer several benefits over their naturally occurring counterparts. For example, in vivo, elastomeric proteins may be incorporated with other molecules or extensively cross-linked, making them difficult to isolate; repetitive sequences are also easier to produce recombinantly.
In this chapter, we will discuss the defining properties of naturally occurring elastomeric proteins and widely used XLPs, focusing on elastin-like polypeptides (ELPs) and resilin-like polypeptides (RLPs), although we note that recombinant approaches have surveyed a much wider spectrum of XLPs, including spider-silk-like polypeptides (SSLPs) [54], silkworm-like polypeptides (SWLPs) [55], and collagen-like peptides (CLPs) [56], among others. We also discuss the related structural, mechanical, and thermal characteristics of elastomeric protein polymers, and review design strategies for tuning XLP properties.
2. Sequence and structure in elastomeric biopolymers
2.1 Elastomeric sequences and motifs
A defining feature of elastomeric protein sequences is their distinct domain structure consisting of repeating elastomeric motifs interspersed with other
Mohammad Madani et al.
nonelastic domains [57]. Naturally occurring elastomeric proteins (Fig. 1) form intra- and intermolecular cross-links via their nonelastic domains to assemble into higher order structures [2,25,57]. Elastomeric motifs and domains are typically rich in glycine, arginine, proline, valine, and glutamine [57,58]. Glycine and glutamine are known to contribute to higher flexibility [59]. The lack of a true side chain in glycine allows for a high degree of chain mobility [58]. Proline, on the other hand, promotes rigidity, precluding the formation of stable secondary structures [58]. Thus, elastomeric proteins and XLPs in particular often display properties associated with a high degree of disorder, such as high mobility, aggregation, and flexibility, all encoded by sequence simplicity and high hydrophobicity [52]. The length and composition of elastomeric domains determine resulting elastic properties such as Young’s modulus, extensibility, and resilience [2]. The ability of predictable and simple sequence patterns—relative to other classes of proteins—to yield a spectrum of different elastic properties makes them highly tunable and thus desirable candidates for biomaterial design. Recombinant methods offer a powerful tool for synthesis of designed XLPs that mimic the properties of naturally occurring elastomeric proteins [60]. Numerous and varied XLP-based materials such as fibers, films, and hydrogels have been designed by adjusting sequence composition, length, and cross-linking [50,53,61–63]. XLPs are often expressed in Escherichia coli and are synthesized usually by concatemerization or recursive directional ligation [64]. Synthesis of recombinant polypeptides includes three stages: (1) recombinant DNA development and transfer [65], (2) expression [66], and (3) purification [67]. Since XLPs are genetically encoded, polypeptide sequence and length can be explicitly controlled. 
The high tunability of both mechanically functional domains and biologically active regions makes XLPs attractive for biomaterial applications. XLPs are typically less complex but can still exhibit structural and mechanical properties similar to those of their naturally occurring counterparts [68,69]. Urry and coworkers performed pioneering work in this field, being the first to synthesize XLPs of repetitive elastin peptides, their oligomers, and polymers from the hydrophobic domains of elastin [70]. Representative elastin polymer building blocks, VPGXG and GVGVP, are among the most commonly studied ELP sequences [2,5,53,61,70]. The guest amino acid, X, can be any amino acid except proline [2,60,71]. Altering the guest residue and the conditions of its environment allows the polypeptide to explore various levels of disorder and elasticity [52]. A summary of XLP hallmarks is shown in Fig. 2.
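As a toy illustration of this design freedom (the helper below is our own sketch, not code from the recombinant-synthesis literature), an ELP sequence can be assembled by repeating the VPGXG pentapeptide with a chosen guest residue, with proline excluded as the text describes:

```python
# Toy illustration: assembling a repetitive ELP sequence from the canonical
# VPGXG pentapeptide, where the guest residue X may be any amino acid
# except proline. This helper is hypothetical, not from the chapter.

GUEST_CANDIDATES = set("ACDEFGHIKLMNQRSTVWY")  # 20 standard residues minus P

def elp_sequence(guest: str, n_repeats: int) -> str:
    """Concatenate n_repeats copies of VPGXG with the chosen guest residue."""
    if guest not in GUEST_CANDIDATES:
        raise ValueError("guest residue must be a standard amino acid other than proline")
    return ("VPG" + guest + "G") * n_repeats

# e.g. a (VPGVG)4 polypeptide:
print(elp_sequence("V", 4))  # VPGVGVPGVGVPGVGVPGVG
```

Varying the guest residue in silico this way mirrors how recombinant designs explicitly control sequence and length.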
Fig. 1 (A) Summary of example naturally occurring elastomeric proteins and their defining domain structure consisting of repeating elastomeric motifs with other nonelastic domains. (B) Domain structure of corresponding elastomeric proteins in (A). (Modified from A.S. Tatham, P.R. Shewry, Comparative structures and properties of elastic proteins, Philos. Trans. R. Soc. Lond. Ser. B Biol. Sci. 357 (2002) 229–234, https://doi.org/10.1098/rstb.2001.1031, with permission.)
Fig. 2 Summary of multiscale XLP properties from sequence to applications. Examples of repetitive sequences are provided for the XLPs mentioned in this chapter.
2.2 Secondary and tertiary structure of elastomeric protein polymers
Secondary and tertiary structures of elastomeric proteins and polypeptides dictate nanomechanical and higher order assembly behaviors. Although elastomeric proteins often contain regularly repeated motifs, this does not always imply a regular, ordered structure [7]. By selecting different elastomeric motif combinations, it is possible to generate protein polymers with varied secondary and tertiary structures ranging from highly ordered to highly disordered [7,52,71,72]. For example, collagen-like polypeptides (CLPs) and natural collagen can form stable triple helical structures [73], while ELPs and natural elastin typically exhibit transient secondary structures [6,52,54,68]. The high content of proline and glycine in tropoelastin and resilin encodes flexibility by creating a structural imbalance [58,74]. Specifically, resilin and elastin adopt polyproline II (PPII), unordered, and β-turn conformations in aqueous solution [75]. Dihedral angle restrictions associated with proline residues promote PPII conformations in resilin, and the presence of glycine creates instability in turn conformations [74,75]. The presence of repetitive sequences, high glycine content, and a flexible equilibrium between β-turn and PPII secondary structures is analogous to the general features observed in other elastomeric proteins [75]. Therefore,
it has been proposed that the available β-turn and PPII structures may produce multifunctional equilibria between β-turn and PPII conformations, imparting flexible, dynamic chain movements in elastomeric structural proteins [68,75–79]. Often, a delicate balance at the amino acid scale dictates elastomeric properties: for example, a threshold ratio of proline to glycine content is necessary to maintain the intrinsic disorder required in elastin while impeding amyloid formation [58].
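Where such composition thresholds are discussed, the relevant quantities are simple to compute. The sketch below is our illustrative helper (not code from [58]): it reports the proline and glycine fractions of a candidate sequence and their ratio:

```python
# Hypothetical helper: fraction of proline and glycine in an elastomeric
# sequence, the ratio discussed above as a determinant of disorder versus
# amyloid propensity. Illustrative only, not taken from the cited study.

def pro_gly_content(seq: str):
    """Return (Pro fraction, Gly fraction, Pro/Gly ratio) of a sequence."""
    n = len(seq)
    p = seq.count("P") / n
    g = seq.count("G") / n
    return p, g, (p / g if g else float("inf"))

p, g, ratio = pro_gly_content("VPGVG" * 10)   # canonical ELP repeat
print(f"Pro {p:.2f}, Gly {g:.2f}, Pro/Gly {ratio:.2f}")
```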
3. Cross-linking for tuning elastomeric biopolymer properties
Cross-links are indispensable elements in naturally occurring biopolymers, including elastin and resilin. In elastin, enzymatic cross-linking is responsible for elastin’s insolubility and plays an essential role in its resistance to degradation, structural integrity, and biomechanics [80]. This process is mediated by lysyl oxidase: lysine residues from hydrophilic domains are first converted into allysines, which then react to form mature cross-links in elastin. Depending on the number of lysines involved, these mature cross-links can be bi-, tri-, tetra-, or pentafunctional [80]. In natural resilin, tyrosine residues are responsible for cross-linking through the formation of di-tyrosine and tri-tyrosine [81] via chemical (peroxidase [82]) or physical (ruthenium-mediated photo cross-linking) reactions. In both reactions, covalent bonds form between the ortho positions of two or three tyrosine residues, yielding di-tyrosine and tri-tyrosine, respectively [83]. The cross-links in resilin are formed from the phenolic side groups of tyrosine repeat units. The ratio between tri-tyrosine and di-tyrosine can serve as a parameter to determine the cross-linking degree of impure resilin [83]; a high ratio suggests advanced cross-linking [83]. Through the formation of cross-links, the mobility of chain segments is decreased and the protein stiffens, while still maintaining elasticity [83]. Cross-linking is therefore essential in engineering elastomeric materials inspired by elastin/resilin, both for accurate biomimicry and for mechanical tunability. Various cross-linking agents for elastin/resilin-based materials have been reported, including γ-irradiation [84], photochemistry [8,85], transglutaminase enzymes [86,87], and chemical cross-linking agents [69,88–91]. Chemical cross-linking agents that react with lysines, cysteines, and glutamines are the most commonly used agents in preparing elastin-based
materials. Nowatzki et al. have reported an elastic film composed of an alternating CS5 region, a segment of fibronectin capable of cell binding, and a representative ELP sequence, VPGIG [92]. Cross-linked with hexamethylene diisocyanate in dimethyl sulfoxide (DMSO) solution, this elastic film exhibits lower cytotoxicity, enhanced extensibility, and an elastic modulus comparable to that of native elastin, relative to films cross-linked with glutaraldehyde, another chemical cross-linking agent [92]. In the case of α-elastin, hexamethylene diisocyanate also outperforms glutaraldehyde owing to its increased cross-linking density and thereby improved mechanical properties [93]. Despite the advantages of hexamethylene diisocyanate over glutaraldehyde revealed in these studies, the need to remove the DMSO solution after the cross-linking reaction is a non-negligible drawback for applications in tissue engineering. In another study, glutaraldehyde was employed as a cross-linking agent for a composite recombinant tropoelastin/α-elastin hydrogel [94]. Under high-pressure CO2, cross-linking of the recombinant hydrogel is accelerated and proceeds to a higher degree, which in turn yields a higher compressive modulus and a lower swelling ratio than in hydrogels fabricated under atmospheric conditions, where the extent of cross-linking is lower [94]. Bis(sulfosuccinimidyl) suberate and disuccinimidyl suberate are two other commonly used chemical cross-linkers for constructing elastin-based materials. Di Zio et al. presented a study using an ELP-based film, consisting of the CS5 region and [(VPGIG)2(VPGKG)(VPGIG)2], cross-linked by bis(sulfosuccinimidyl) suberate and disuccinimidyl suberate, to show how the amount of cross-linker and the protein weight fraction affect its mechanical properties [95].
They concluded that a larger amount of cross-linking leads to an increase in both tensile modulus and shear modulus. Notably, the film cross-linked by disuccinimidyl suberate exhibits a tensile modulus ranging from 0.35 to 0.97 MPa, on the order of that of natural elastin, 0.3–0.6 MPa [95–98]. These studies highlight the potent role of cross-linking in fabricating elastin-inspired materials, particularly in providing mechanical strength. In resilin and resilin-like materials, cross-links have similarly been employed extensively to control mechanical strength [99]. As for elastin materials, chemical approaches are also frequently used for cross-linking. In recombinant resilin-based materials, lysine and tyrosine residues, which are highly reactive with various chemical cross-linkers, are often selected as the basis for cross-linking polypeptide chains [100]. For example, Bracalello et al.
used four lysine residues to cross-link resilin-elastin-collagen-like chimeric polypeptides (REC) [100]. Renner et al. employed tris(hydroxymethyl)phosphine (THP) as a cross-linker for the production of RZ-based hydrogels (RZ10-RGD, RZ30-BA); THP mediates a rapid cross-linking process [69,89–91]. Li et al. utilized the Mannich-type condensation reaction of THP with amines from tyrosine residues of RLPs to produce cross-linked resilin-based hydrogels [15,101,102]. Tris(hydroxymethyl)phosphine propionic acid (THPP) is another cross-linker, reactive with lysine residues, used to make RLP12-based hydrogels with outstanding mechanical properties such as high resilience, elasticity, and strain to break [16,99]. Using other cross-linkers such as THP to construct the hydrogel led to a decreased Young’s modulus and dynamic storage modulus, because THP has more chemically reactive ends that attach to the functional groups of the protein, producing a more stable product. RLP-PEG hydrogels, a common RLP-based material, are constructed via various cross-linking methods to create materials with specific structural and mechanical properties. For example, a Michael-type addition reaction between the thiols of cysteine residues of the RLP and the vinyl sulfones of an end-functionalized multiarm star PEG produces RLP12-PEG, RLP24-PEG, or RLP48-PEG hybrid hydrogels, all highly elastic [103,104]. In other approaches, chemical cross-linking was performed through a Mannich-type reaction with THP to produce phase-separated RLP-PEG solutions [105]. Additionally, chemical approaches can also be combined with physical approaches in cross-linking.
In physical cross-linking, electrostatic interactions between lysine residues of the RLP and the HA backbone form transparent and homogeneous cross-linked RLP/HA hydrogels, which are much more degradable than those constructed with chemical cross-linking approaches, owing to a highly efficient reaction with no by-products [106]. Studies have also revealed the use of photochemical cross-linking to cast resilin-like proteins into biomaterials and hydrogels [8]. This cross-linking method generally yields fewer by-products in comparison to other approaches [107]. In addition, products constructed via photochemical cross-linking are generally nontoxic and have outstanding mechanical properties. For the construction of cross-linked Rec1-resilin [8,68,108], An16 [68,109], Dros16 [68,109], Hi-resB [110], and Cf-resB [110], ruthenium (Ru(II))-mediated photo cross-linking was used to form the di-tyrosine necessary for cross-linking RLPs. Lyons et al. estimated
that the amounts of di-tyrosine formed in Rec1-resilin, An16, Dros16, Cf-resB, and Hi-resB were 18.8%, 14.3%, 46%, 19.2%, and 17.1%, respectively [68,109,110]. In other methods, a horseradish peroxidase (HRP)-mediated cross-linking reaction was utilized to form di-tyrosine and generate cross-linked biomaterials or hydrogels from RLP-ChBD, exon 1, exon 3, clone 1 (exon 1 + exon 2), and full-length resilin (exon 1 + exon 2 + exon 3) [9,111,112]. Another approach used for cross-linking resilin is photo-Fenton polymerization, in which cross-linking proceeds via the formation of tyrosyl radicals that yield di-tyrosine cross-linked covalent bridges [112,113]. Qin et al. tested this approach using citrate-modified photo-Fenton reactions with different ratios and concentrations of the Fenton components (Fe and H2O2) to introduce reactive oxygen species (ROS) for cross-linking, generating resilin-based proteins from exon 1. These cross-linked biomaterials had both excellent elasticity and strong adhesive features [112]. Another advantage of this approach is that it requires a lower protein concentration than the HRP cross-linking method for the production of cross-linked resilin [112]. GB1-resilin-based hydrogels were constructed by means of a [Ru(bpy)3]-mediated photochemical cross-linking method, which allows two tyrosine residues in close proximity to be cross-linked into di-tyrosine adducts [18]. This method was also employed for cross-linking FRF4RF4R into a three-dimensional network structure [114]. Both of these resilin-based materials showed outstanding elasticity and an increased Young’s modulus in comparison to resilin-based hydrogels constructed via the HRP cross-linking approach. In fact, RLP-based hydrogels generated by [Ru(bpy)3]-mediated photochemical cross-linking mimic the passive elastic properties of muscle tissue, offering a unique combination of strength, extensibility, and resilience.
Another interesting chemical cross-linking approach uses specific enzymes to produce RZ-based hydrogels. Transglutaminase is an enzyme used as a cross-linker in RZ-TGase gels [87], forming amide bonds between the γ-carboxamide group of glutamines and amines in the protein, resulting in a 17% increase in elasticity in comparison to gels constructed via traditional cross-linking approaches [87]. Redox-responsive cross-linkers are another good cross-linking option. For example, RZ10-RGD proteins were cast into hydrogels using a redox-responsive cross-linker, 3,3′-dithiobis(sulfosuccinimidyl propionate) (DTSSP) [115–117], which carries a sulfo-NHS ester on each end, to create DTSSP-cross-linked RZ10-RGD hydrogels. The DTSSP-cross-linked RZ10-RGD hydrogels degraded dramatically in reducing environments such as glutathione
solutions (GSH) [115]. The RZ-based material constructed via this cross-linking approach had a porous structure, a more stable storage modulus, higher viability, and a higher degradation rate in reducing environments than materials produced via HRP cross-linking. These studies reveal how using various cross-linking methods for building elastomeric biomaterials from very similar building-block units, e.g., ELPs or RLPs, can result in significant variability in mechanical and structural properties. Cross-linking can therefore be employed as a powerful knob for modulating the properties of elastomeric materials.
4. Intrinsic and extrinsic factors modulating elastomeric protein elasticity
4.1 Conformational entropic effects in elastin-based materials
Five decades ago, Hoeve and Flory proposed an elasticity theory from a configurational entropy perspective, consistent with the classical theory of rubber elasticity. In contrast to descriptions based on discrete globules (a two-phase model), elastic materials in this picture adopt configurations of random molecular chains with the highest entropy; when external forces are applied during deformation, the distribution of these configurations changes, yielding a reduction in entropy from which the elastic restoring force stems [118]. This theory derives from their thermodynamic studies on elastin molecules. Later, Urry and his co-workers exploited a repeating pentamer sequence, VPGVG, as a model for elastin, from which they deduced a mechanism of elasticity, the so-called librational entropy mechanism (Fig. 3). Urry proposed that, instead of the random configurations suggested by the classical theory of rubber elasticity, polypeptides with this sequence pattern form a dynamic β-spiral structure with a large number of accessible states; when the polypeptides are stretched, the number of accessible states, and with it the elastomer entropy, is reduced, generating the elastomeric restoring force [119]. To support this theory, Urry and Chang performed the first molecular dynamics simulation of an elastin-like polypeptide (ELP), using the same polypentapeptide, for 50 ps in vacuo, with van der Waals interactions neglected. Although such a short simulation failed to reveal the primary structural periodicity in the root mean square displacements of the torsion angles, the results did demonstrate the damping of the amplitude of librational motions when the polypentapeptide is elongated [120]. In addition to
Fig. 3 (A, i) Stereo drawing of one of a class of β-spiral conformations for the polypentapeptide. (ii) Schematic drawing showing the polypentapeptide β-spiral with β-turn spacers between turns of the spiral. (iii) Lambda plots showing the correlated motions of the ψ5- and φ1-torsion angles, which represent freedom for librational motions of the Gly5-Val1 peptide moiety, in relaxed state (top) and extended states (bottom), showing the number of states accessible with an energy cutoff of 2 kcal/mol residue. This demonstrates that the effect of stretching is one of markedly decreasing the entropy of the pentameric unit and that the motions can be described as librational. (B) Cartoon representation of tropoelastin domains, where the N-terminal coil region is annotated in green, the hinge region is annotated in blue, the bridge region is annotated in purple, and the foot region is annotated in orange. Secondary structures representation: helix in purple, β structure in yellow, turn in blue, and coil in ochre. (Figure (A) is reproduced with permission from Urry, et al., A librational entropy mechanism for elastomers with repeating peptide sequences in helical array, Int. J. Quantum Chem. Quantum Biol. Symp. 10 (1983) 81–93, and (B) is reproduced with permission from Yang, et al., Changes in elastin structure and extensibility induced by hypercalcemia and hyperglycemia, Acta Biomater. (2022).)
Urry’s fundamental studies, Wasserman’s work also suggested a contribution of librational entropy to elasticity. By running molecular dynamics simulations of the same polypentapeptide, with 18 repeats, in water, Wasserman et al. concluded that the reduction in the frequency and magnitude of conformational motions upon extension leads to decreased elastomeric entropy, which gives rise to the restoring force, especially at larger strains [121]. Experimentally, Urry’s measurements of mechanical resonances in the dielectric relaxation and acoustic absorption spectra of elastin and elastin-like polypeptides further supported the dynamic β-spiral model of elastin [122], and their classical molecular mechanics and dynamics calculations of entropy further substantiated this mechanism [122]. While the β-spiral model (Fig. 3Ai-ii) is no longer accepted as a representative structure of elastin, given the availability of a fully atomistic model of tropoelastin (Fig. 3B) [123] and subsequent studies showing that elastin does not form a stable β-spiral in water, some of the observations made on this basis may still hold. Additionally, using solid-state NMR, Yao and co-workers revealed that Poly(Lys-25), an ELP, exhibits large-amplitude, low-frequency motion in its relaxed state, also indicating a contribution of conformational entropy to elasticity [124]. Notably, Yao et al. considered hydration a requirement for maintaining protein dynamics [124]. Recently, Huang et al. reported a computational study of the ELPs [LGGVG]3 and [VPGVG]3 in which a reduced number of low-frequency modes and an increased number of high-frequency modes were identified upon elongation, pointing to a decrease in conformational entropy according to their quasiharmonic calculations [125].
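To illustrate how conformational entropy can be estimated from fluctuation data, the sketch below uses Schlitter's covariance-based upper-bound formula as a stand-in for the quasiharmonic analysis cited above. The trajectory here is synthetic random data, not an actual ELP simulation, and all parameter values are placeholders:

```python
# Sketch: entropy estimate from the covariance of atomic fluctuations
# (Schlitter's upper bound, used here as a stand-in for a quasiharmonic
# analysis). Synthetic data only; not a real ELP trajectory.
import numpy as np

KB = 1.380649e-23        # Boltzmann constant, J/K
HBAR = 1.054571817e-34   # reduced Planck constant, J*s

def schlitter_entropy(coords, masses, T=300.0):
    """coords: (n_frames, n_atoms, 3) in meters; masses: (n_atoms,) in kg."""
    n_frames, n_atoms, _ = coords.shape
    x = coords.reshape(n_frames, 3 * n_atoms)
    x = x - x.mean(axis=0)
    cov = (x.T @ x) / n_frames                 # (3N, 3N) positional covariance
    m = np.repeat(masses, 3)                   # mass for each Cartesian DOF
    mw = np.sqrt(np.outer(m, m)) * cov         # mass-weighted covariance
    arg = np.eye(3 * n_atoms) + (KB * T * np.e**2 / HBAR**2) * mw
    _, logdet = np.linalg.slogdet(arg)
    return 0.5 * KB * logdet                   # entropy upper bound, J/K

rng = np.random.default_rng(0)
traj = 1e-10 * rng.normal(size=(200, 5, 3))    # 5 atoms, ~1 Angstrom fluctuations
m = np.full(5, 12 * 1.66054e-27)               # carbon-like masses, kg
S_relaxed = schlitter_entropy(traj, m)
S_extended = schlitter_entropy(0.5 * traj, m)  # damped fluctuations upon "extension"
print(S_extended < S_relaxed)                  # smaller motions -> lower entropy
```

Damping the fluctuation amplitudes, as reported upon elongation, lowers the estimated entropy, which is the qualitative content of the librational entropy argument.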
Overall, the librational entropy mechanism suggests that limited torsional oscillations during the extension of elastin or ELPs play a role in generating the elastomeric restoring force. Yet, further experimental studies are needed to validate this theory.
4.2 Solvent and hydration effects in elastomeric proteins
4.2.1 Effects of solvent on elastin and ELP conformations
The polarity of the solvent is suggested to give rise to differences in the conformations of dissolved elastin. For example, the soluble monomer of elastin, tropoelastin (Fig. 4), is reported to exist mainly in (quasi-)extended conformations in polar solvents, as opposed to the compact conformations it adopts in nonpolar solvents such as TFE [126]. Furthermore,
Fig. 4 (A) Conformational behavior of elastin as probed by molecular dynamics simulations in water at 10 and 42°C. The simulations began from the idealized β-spiral structure, which was allowed to adapt to its environment for 6 ns of MD. Then, various cycles of pulling, holding, and releasing the molecule were performed. The final structure from each simulation is shown with the main chain in red and side-chain atoms in green. (B) The orientational entropy of water in shell 1 (within 4.5 Å of any elastin heavy atom) of elastin during the final nanosecond time period for hydrophobic side-chain hydration (solid bar) and main-chain hydration (cross-hatched bar). (Reproduced with permission from Li, et al., Hydrophobic hydration is an important source of elasticity in elastin-based biopolymers, J. Am. Chem. Soc. 123 (2001) 11991–11998.)
changes in solvent composition that alter viscosity may play a role in the elasticity of elastomeric materials. The work of Winlove et al. demonstrated that glucose, sucrose, and ethylene glycol alter the pattern of intrafibrillar water distribution within elastin fibers, which in turn affects the dynamic response of the fibers upon elongation [127]. The pH of the solvent also modulates elasticity: both increasing and decreasing the pH raises the elastic modulus of elastin fibers [127]. This can possibly be attributed to pH-induced conformational changes, which also shift the balance between hydrophobic and hydrophilic interactions [128].
4.2.2 Hydration level effect on XLPs
The degree of hydration can significantly alter the elasticity of elastin/ELPs. In the aforementioned study by Yao and coworkers, the importance of water in hydrating ELPs was revealed by a discernible increase in protein dynamics: root mean square fluctuation angles increase from 11–18 degrees when the polypeptide is in solid form to 16–21 degrees at 20% hydration [124]. Above 30% hydration, both the backbone and side chains of the polypeptide show large-amplitude molecular motions, as might be expected [124]. This agrees with Gosline's finding, using bovine elastin, that removing even a very small number of bound waters increases the glass transition temperature (Tg) significantly, which in turn enhances the rate of dynamic fatigue [129]. In another study by Gosline and co-workers, removal of 70% of the hydration water led to a reduction of resilience from 78% to 12%, suggesting a loss in the storage of elastic energy [130]. Recently, by treating elastin obtained from porcine thoracic aortas with polyethylene glycol (PEG), Wang et al.
showed that depletion of extrafibrillar water not only correlates with the shrinkage and stiffening of elastin but also engenders an increase in the stress relaxation of the tissue [131]. These findings suggest that hydration water is an essential factor regulating the elasticity of elastin/ELPs, and dehydration yields impaired mechanical properties and brittle elastin-based materials. The degree of hydration and water sorption also have profound effects on the molecular chain mobility, chemical stability, physical properties, and viscoelastic behavior of resilin-like proteins and hydrogels. For example, in rec1-resilin, Truong et al. [132] found that dehydrating rec1-resilin embrittles the material and also increases the Tg to 180°C from 60°C in the hydrated state, resulting in a decrease in elasticity. High natural hydration levels of rec1-
resilin are due to the existence of highly polar residues (45%) in rec1-resilin, lending it a high water-binding ability [132]. The increase of hydration levels in rec1-resilin is achieved by the condensation or clustering of noncrystallizable water around hydrophilic sites of rec1-resilin, which in turn yields a higher water content and evident crystallizable water [132]. Increasing hydration levels (even by

Ej+1 = Ej + (dE/dS)|Ej · L
(dE/dS)|Ej = −7.8 × 10³ (ρ/Ej) Σi=1..n [Ci Zi / (Ai Ji)] ln(1.116 Ej/Ji + Ki)  [kV/nm]    (11)

where E is the kinetic energy of the electron (the subscript j denotes the jth collision), L is the distance between two elastic collisions, Ai is the atomic weight of element i, and Ci, Zi, Ji, and Ki are the mass fraction, atomic number, mean ionization potential, and a constant of element i, respectively. The trajectories of electrons in TC4 under a 100 kV acceleration voltage are depicted in Fig. 5a. The maximum penetration depth of the electrons is
Multiscale modeling applied to additive manufacturing
Fig. 4 The scheme of electron-material interaction model. (a) The flowchart of the Monte-Carlo model to track electrons. (b) The trajectories of the electrons in the substrate. (Reprinted from W. Yan, J. Smith, W. Ge, F. Lin, W.K. Liu, Multiscale modeling of electron beam and substrate interaction: a new heat source model, Comput. Mech. 56 (2) (2015) 265–276, with permission from Springer Nature.)
Fig. 5 (a) The electron trajectories in the TC4 substrate under 100 kV acceleration voltage. (b) The fitting results of absorbed energy distribution in X-Z plane. (Reprinted from W. Yan, W. Ge, J. Smith, S. Lin, O.L. Kafka, F. Lin, W.K. Liu, Multi-scale modeling of electron beam melting of functionally graded materials, Acta Materialia 115 (2016) 403–412, with permission from Elsevier.)
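The trajectory tracking in Fig. 5a can be caricatured as a continuous-slowing-down loop: between elastic collisions the electron loses energy at a Bethe-type rate of the form of Eq. (11) until its energy drops below a cutoff. The sketch below follows that spirit only; the stopping-power form mirrors our reading of Eq. (11), and every number (material data, mean free path, cutoff) is an illustrative placeholder, not a TC4 parameter from the cited work:

```python
# Schematic Monte-Carlo-style energy degradation loop for one electron,
# in the spirit of Eq. (11): E_{j+1} = E_j + (dE/dS)|_{E_j} * L.
# All constants and material values are illustrative placeholders.
import math
import random

def stopping_power(E, rho, elements):
    """Bethe-type dE/dS (negative), schematic form; E in keV."""
    s = 0.0
    for C, Z, A, J, K in elements:   # mass fraction, atomic number, weight, J_i, K_i
        s += C * Z / (A * J) * math.log(1.116 * E / J + K)
    return -7.8e3 * rho / E * s      # energy loss per unit path length

def penetration_depth(E0, rho, elements, mean_free_path=1e-3, E_cut=0.5):
    """Track one electron; random step lengths mimic elastic collisions."""
    E, depth = E0, 0.0
    while E > E_cut:
        L = random.expovariate(1.0 / mean_free_path)  # path between collisions
        E = E + stopping_power(E, rho, elements) * L  # Eq. (11)-style update
        depth += L
    return depth

random.seed(1)
ti = [(1.0, 22, 47.9, 0.5, 0.8)]     # one-element "titanium-like" placeholder
d = penetration_depth(100.0, 4.5, ti)
print(f"penetration depth ~ {d:.4f} (arbitrary units)")
```

Averaging such single-electron walks over many samples is what produces the absorbed-energy distributions that are then fitted by the simplified heat-source function.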
Lu Wang et al.
about 35 μm. With the trajectories of the electrons in the material, the absorbed energy distribution can be estimated, as shown in Fig. 5b. By fitting this energy distribution, Yan et al. [14] proposed a simplified electron energy absorption function:

q(x, y, z) = αP f(z) g(x, y)    (12)
where f(z) is the energy distribution in the depth direction and g(x, y) is the radial in-plane energy distribution in the cross section of the beam:

f(z) = exp(−(z − z0)²/δd²) / [δd ∫ from −z0/δd to +∞ of exp(−h²) dh]
g(x, y) = (N/(π rb²)) exp(−N ((x − xs)² + (y − ys)²)/rb²)    (13)

where δd and z0 are the characteristic penetration depth of the electron beam and the location of the highest energy density, respectively, h is the integration variable, and xs and ys are the coordinates of the beam central axis. In Fig. 6a, a stationary electron beam penetrates a TC4 particle, with the energy of the electron beam uniformly distributed over its cross section. The temperature profile in Fig. 6b shows that the first melted
Fig. 6 Simulation of heat transfer during electron beam penetrating a TC4 particle. (a) The geometry model of the simulation. (b) The temperature distribution in the particle. (a) and (b) are cut from the X-Z plane. (Reprinted from W. Yan, Y. Qian, W. Ma, B. Zhou, Y. Shen, F. Lin, Modeling and experimental validation of the electron beam selective melting process, Engineering 3 (5) (2017) 701–707, with permission from Elsevier. Open Access.)
region of the particle is slightly below the particle surface, in an arc shape. In addition, the electrons penetrate the particle and heat the substrate directly, owing to the thinness of the particle near its boundary as seen from the top view. Although the energy of the electron beam is uniformly distributed, the temperature in the particle decreases as the horizontal distance to the beam central axis increases. This phenomenon indicates that the energy absorption decreases as the incidence angle of the electrons increases.
2.2.3 Laser absorption modeling
In laser absorption modeling, the heat flux (PL) over the cross section of the beam is initialized with a Gaussian distribution:

PL = (NP/(π rb²)) exp(−N r²/rb²)    (14)

To calculate the laser absorptivity on the surface of the molten pool, a ray-tracing model [17] is adopted, as shown in Fig. 7a. In the ray-tracing model, the laser is divided into a series of subrays, similar to the electron beam dividing process. As the ith subray PL,i reflects on the molten pool surface and the feedstock material surface, the energy of the subray PL,i is absorbed until it falls below a threshold or leaves the simulation domain. Generally, the reflection on the surface is assumed to be specular, and scattering and deflection are not included in the AM process simulation, as shown in Fig. 7b. Thus, the direction of the reflected subray is

vR = vI − 2(vI · nr) nr = vI − 2 cos θ · nr    (15)
where vI, vR, and nr are the incident subray direction, reflected subray direction, and the unit normal vector of the specular surface, respectively.
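The specular reflection step of Eq. (15) reduces to a few lines of vector arithmetic. This minimal sketch assumes a unit incident direction and defensively normalizes the supplied surface normal:

```python
# Specular reflection of a subray direction off a surface (Eq. 15):
# v_R = v_I - 2 (v_I . n_r) n_r, with n_r a unit normal.
import numpy as np

def reflect(v_i, n):
    """Return the reflected direction of incident direction v_i at normal n."""
    v_i = np.asarray(v_i, dtype=float)
    n = np.asarray(n, dtype=float)
    n = n / np.linalg.norm(n)                 # ensure a unit normal
    return v_i - 2.0 * np.dot(v_i, n) * n

# a ray travelling straight down reflects straight back up off a horizontal surface
print(reflect([0.0, 0.0, -1.0], [0.0, 0.0, 1.0]))   # -> [0. 0. 1.]
```

In a full ray tracer this update would be applied at every reflection spot while the subray's remaining power is attenuated by the local absorptivity.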
Fig. 7 Schematic of the laser absorption process. (a) The schematic representation of the ray-tracing model. (b) Laser reflection on a specular surface.
Lu Wang et al.
When the laser is incident onto a surface, it is reflected, absorbed, or transmitted. The surface is assumed opaque, with no laser transmission, so the laser energy ratios before and after reflection satisfy

1 = \frac{\text{Absorbed}}{\text{Incident}} + \frac{\text{Reflected}}{\text{Incident}} = \alpha + \rho_\lambda    (16)
where ρλ is the reflectivity of the laser. For naturally polarized radiation, the reflectivity is the average of the p-polarized wave reflectivity (ρp,λ) and the s-polarized wave reflectivity (ρs,λ):

\rho_\lambda = \frac{1}{2}\left(\rho_{p,\lambda} + \rho_{s,\lambda}\right)    (17)
where ρp,λ and ρs,λ are given by the Fresnel equations [18]:

\rho_{p,\lambda} = \frac{(n_I\cos\theta - 1)^2 + (k_I\cos\theta)^2}{(n_I\cos\theta + 1)^2 + (k_I\cos\theta)^2}, \qquad
\rho_{s,\lambda} = \frac{(n_I - \cos\theta)^2 + k_I^2}{(n_I + \cos\theta)^2 + k_I^2}    (18)
where nI and kI are the refractive index and extinction coefficient of the material. Thus, the absorptivity (α) of the ith subray in a reflection can be calculated through Eqs. (16)–(18), and the total energy absorbed on the molten pool surface or feedstock material can be calculated by tracking the reflections of the subrays. A verification case [19] of the ray-tracing model is shown in Fig. 8. The laser reflection between two parallel bare plates reproduces the laser energy absorption process accurately: a fraction of the laser power is absorbed into the bare plate at each reflection spot, so the remaining laser power and the temperature decrease from one reflection spot to the next. Although progress has been made on heat source modeling in metal AM simulation, predicting the energy absorption remains difficult. For laser reflection, the refractive index of a metal or alloy depends not only on the wavelength of the laser but also on the surface temperature. Furthermore, refractive index data for alloys are still scarce; values are available only for common pure metals, whereas the current trend in metal AM is to build high-strength, heat-resistant alloys and even high-entropy alloys. In addition, the plasma and metal vapor also influence the energy absorption process, deflecting electrons and photons. Finally, the reflection on the molten pool surface is, in reality, partially diffuse rather than purely specular.
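The subray loop described above (reflection via Eq. (15), absorption via Eqs. (16)–(18)) can be sketched as follows. This is an illustrative toy, not the authors' implementation: the geometry mimics a ray bouncing between parallel surfaces as in the verification case, and the optical constants n_I = 3, k_I = 4 are placeholder values.

```python
import numpy as np

def reflect(v_i, n):
    """Specular reflection, Eq. (15): v_R = v_I - 2 (v_I . n_r) n_r."""
    return v_i - 2.0 * np.dot(v_i, n) * n

def absorptivity(cos_theta, n_i, k_i):
    """Fresnel absorptivity of an opaque metal surface, Eqs. (16)-(18)."""
    rho_p = ((n_i * cos_theta - 1.0)**2 + (k_i * cos_theta)**2) / \
            ((n_i * cos_theta + 1.0)**2 + (k_i * cos_theta)**2)
    rho_s = ((n_i - cos_theta)**2 + k_i**2) / \
            ((n_i + cos_theta)**2 + k_i**2)
    return 1.0 - 0.5 * (rho_p + rho_s)   # alpha = 1 - rho_lambda

def trace_subray(power, v, normal, n_i, k_i, threshold=1e-3, max_bounces=50):
    """Follow one subray, depositing energy at each specular bounce
    until its power falls below the threshold (as for parallel plates,
    where the surface normals are antiparallel and the bounce repeats)."""
    deposited = 0.0
    for _ in range(max_bounces):
        if power < threshold:
            break
        cos_theta = abs(np.dot(v, normal))   # cosine of the incident angle
        a = absorptivity(cos_theta, n_i, k_i)
        deposited += a * power               # energy absorbed at this spot
        power *= (1.0 - a)                   # remaining reflected power
        v = reflect(v, normal)               # Eq. (15)
    return deposited, power
```

Energy is conserved by construction: the deposited energy plus the residual subray power always equals the initial subray power.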
Fig. 8 The verification simulation for the ray-tracing model with four reflections between two flat plates. (Reprinted from W. Ge, S. Han, S.J. Na, J.Y.H. Fuh, Numerical modelling of surface morphology in selective laser melting, Comput. Mater. Sci. 186 (2021) 110062, with permission from Elsevier.)
2.3 Metal evaporation modeling
As the feedstock material and substrate are intensely heated, the metal evaporates and can generate a strong recoil pressure. In the metal AM process, this strong recoil pressure leads to fluctuations of the molten pool surface and to pores in the scanning track, known as the keyhole phenomenon. A commonly used evaporation model is Anisimov's [20], which is based on pure metal evaporation without full consideration of the gas flow structure. To improve the accuracy of molten pool flow simulations, an evaporation model derived from thermodynamic principles for different ambient environments is briefly presented here; the full model is given in Ref. [6]. As shown in Fig. 9a, the gas flow structure above the molten pool is classified into the ambient gas, the vapor, the rarefaction fan, the Knudsen layer, and the transition layer. The transition layer, between the condensed matter and the Knudsen layer, is extremely thin and is simplified as a discontinuity condition. Above the transition layer lies the Knudsen layer, whose thickness is of the same order of magnitude as the mean free path of the gas molecules. The Boltzmann equation is adopted to describe the transport phenomena, since continuum theory is not applicable at such a scale. The evaporating molecules in the
Fig. 9 (a) The gas flow structure of the evaporation model. (b) Schematic representation of the Knudsen layer (region A in (a)). (c) The shock wave between layers II and III (region B in (a)), calculated with the Rankine-Hugoniot jump condition. The origin is on the top surface of the transition layer and the y-positive direction is the evaporation direction. For a common atmosphere, there is no rarefaction fan (region IV), which appears in the near-vacuum situation. (Reprinted from L. Wang, Y. Zhang, W. Yan, Evaporation model for Keyhole dynamics during additive manufacturing of metal, Phys. Rev. Appl. 14 (6) (2020) 064039, with permission from APS.)
Knudsen layer are divided into two parts, the emitting and the condensing molecules, as shown in Fig. 9b. The mass, momentum, and energy conservation equations across the Knudsen layer are

\rho_e \sqrt{\frac{R T_e}{2\pi}} - \beta F\, \rho_3 \sqrt{\frac{R T_3}{2\pi}} = \rho_3 u_3
\frac{1}{2}\rho_e R T_e + \frac{1}{2}\beta G\, \rho_3 R T_3 = \rho_3 u_3^2 + \rho_3 R T_3
2\rho_e R T_e \sqrt{\frac{R T_e}{2\pi}} - 2\beta H\, \rho_3 R T_3 \sqrt{\frac{R T_3}{2\pi}} = \frac{1}{2}\rho_3 u_3 \left(u_3^2 + 5 R T_3\right)    (19)

where R is the specific gas constant, β is the condensation ratio, and the subscripts "e" and "3" denote the saturated vapor and the vapor layer, respectively. The variables F, G, and H are

m \overset{\mathrm{def}}{=} \frac{u_3}{\sqrt{2 R T_3}} = \sqrt{\frac{\gamma}{2}}\, \mathrm{Ma}
F = \exp(-m^2) - \sqrt{\pi}\, m \left[1 - \operatorname{erf}(m)\right]
G = (2m^2 + 1)\left[1 - \operatorname{erf}(m)\right] - \frac{2}{\sqrt{\pi}}\, m \exp(-m^2)
H = \frac{1}{2}(m^2 + 2)\exp(-m^2) - \frac{\sqrt{\pi}}{2}\, m \left(m^2 + \frac{5}{2}\right)\left[1 - \operatorname{erf}(m)\right]    (20)
where Ma is the Mach number at y = y3 and erf(m) is the Gaussian error function. Essentially, the mass loss and recoil pressure are the mass flux and momentum flux on the liquid surface. Based on Eq. (19), the mass loss and recoil pressure are given as

\dot{m}_{loss} = \rho_3 u_3 = \frac{m}{\beta F + 2\sqrt{\pi}\, m}\sqrt{\frac{2}{R}}\,\frac{P_e}{\sqrt{T_e}}, \qquad
P_{rec} = \rho_3 u_3^2 + \rho_3 R T_3 = \frac{2m^2 + 1}{\beta F + 2\sqrt{\pi}\, m}\sqrt{\frac{T_3}{T_e}}\, P_e    (21)

where the saturated vapor pressure Pe is obtained from the Clausius-Clapeyron relation:

\ln \frac{P_{e,i}}{P_{r,i}} = -\frac{L_{v,i}}{R}\left(\frac{1}{T_{e,i}} - \frac{1}{T_{r,i}}\right)    (22)

where (Tr,i, Pr,i) is a reference point on the saturated vapor temperature-pressure curve for the ith component of the alloy, for example, the boiling temperature under standard atmospheric pressure, and Lv,i is the specific latent heat of evaporation of the ith component. The saturated pressure Pe, molar mass M, and specific gas constant R of the alloy vapor are

P_e = \sum_i n_i P_{e,i}, \qquad
M = \frac{\sum_i M_i n_i P_{e,i}}{\sum_i n_i P_{e,i}}, \qquad
R = \frac{R_{mol}}{M}    (23)

where Mi and ni are the molar mass and molar fraction of the ith component at the given temperature, and Rmol = 8.314 J/(mol K) is the universal gas constant.
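The flux factors of Eq. (20) can be evaluated directly; a useful sanity check is that F = G = H = 1 at m = 0, which recovers the equilibrium (no net evaporation) limit of Eq. (19). A minimal sketch, assuming the half-Maxwellian moment forms written above:

```python
import math

def flux_factors(m):
    """Backward flux factors F, G, H of Eq. (20), where
    m = u3 / sqrt(2*R*T3) = sqrt(gamma/2) * Ma.
    All three reduce to 1 at m = 0 (equilibrium limit)."""
    erfc_m = 1.0 - math.erf(m)
    F = math.exp(-m**2) - math.sqrt(math.pi) * m * erfc_m
    G = (2.0 * m**2 + 1.0) * erfc_m - 2.0 / math.sqrt(math.pi) * m * math.exp(-m**2)
    H = 0.5 * (m**2 + 2.0) * math.exp(-m**2) \
        - 0.5 * math.sqrt(math.pi) * m * (m**2 + 2.5) * erfc_m
    return F, G, H
```

As m grows (stronger net evaporation), all three factors decay toward zero, i.e., the condensing backward fluxes become negligible.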
2.4 Simulation of keyhole dynamics
The recoil pressure function most commonly derived from Anisimov's evaporation model is taken for comparison with the current evaporation model:

P_{rec} = \beta_v P_b \exp\left[\frac{L_v (T - T_b)}{R\, T\, T_b}\right]    (24)
where Pb is the saturated pressure at the boiling temperature Tb, and βv is the evaporation coefficient, equal to 0.54 under the assumption that the bulk vapor velocity at y = y3 (Fig. 9a) is the local sound speed [20]. For pure metals (a single component), there is little difference between Anisimov's and the current evaporation model (Fig. 10a and b) when the recoil pressure ratio is much larger than 1, for both common and near-vacuum ambient pressure. This means the metal evaporation is strong enough for the influence of the ambient pressure to be negligible. However, the multiple components of an alloy influence the recoil pressure differently under distinct ambient pressures. In Fig. 10c, for TC4 under a 1 atm and 298 K ambient environment, the recoil pressure ratio of Anisimov's model is smaller than that of the current model as the surface temperature ranges from
Fig. 10 The recoil pressure at different surface temperatures. (a) and (b) are the recoil pressure ratio of pure metal under distinct ambient pressure. (c) and (d) are the recoil pressure ratio of the alloy under distinct ambient pressure. The starting points of the curves are the boiling point of the materials. (Reprinted from L. Wang, Y. Zhang, W. Yan, Evaporation model for Keyhole dynamics during additive manufacturing of metal, Phys. Rev. Appl. 14 (6) (2020) 064039, with permission from APS.)
3315 to 3800 K, whereas the recoil pressure of Anisimov's model is higher when the temperature exceeds 3800 K. Under the near-vacuum ambient environment, the recoil pressure of the current evaporation model is much higher than that of Anisimov's model above the boiling point. The keyhole dynamics simulated with the two evaporation models are compared in Fig. 11, together with X-ray imaging results from Argonne National Laboratory [21]. The simulation results [6] show that the current evaporation model outperforms Anisimov's evaporation model for keyhole dynamics. In the common environment, the overprediction of the recoil pressure in Anisimov's model is the reason that the keyhole shape predicted by
Fig. 11 Comparison of the molten pool and keyhole shape at different time frames in the stationary laser melting under 1 atm and 298 K environment. The simulation results are compared with the X-ray imaging results in Ref. [21]. (a1)–(a6) show the keyhole shapes with the current evaporation model and (b1)–(b6) show the keyhole shapes with Anisimov’s evaporation model. (Reprinted from L. Wang, Y. Zhang, W. Yan, Evaporation model for Keyhole dynamics during additive manufacturing of metal, Phys. Rev. Appl. 14 (6) (2020) 064039 with permission from APS.)
Fig. 12 The keyhole geometry feature of simulation results under different laser scanning speeds. The simulation results are compared with the X-ray imaging results in Ref. [21]. The incline angle θk is the angle of the keyhole front wall with the z-direction shown in (a). (Reprinted from L. Wang, Y. Zhang, W. Yan, Evaporation model for Keyhole dynamics during additive manufacturing of metal, Phys. Rev. Appl. 14 (6) (2020) 064039, with permission from APS.)
the current evaporation model matches the experimental results more closely. In the stationary laser melting simulation, the higher recoil pressure of Anisimov's evaporation model suppresses the fluctuation of the keyhole depth; moreover, the keyhole depth penetration rate with Anisimov's evaporation model is far greater. The molten pool flow in the keyhole is obtained by applying this high-fidelity molten pool flow model under different laser scanning speeds, as shown in Fig. 12. The simulation results are consistent with the X-ray imaging results from Argonne National Laboratory [21]. The molten pool length and depth and the keyhole depth of the simulation match the experimental results well, and even the inclination of the keyhole front wall matches well. As the scanning speed increases, the keyhole depth and the temperature on the rear wall of the keyhole decrease.
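Anisimov's recoil pressure (Eq. 24) and the Clausius-Clapeyron saturated pressure (Eq. 22) are straightforward to evaluate; note that the exponent in Eq. (24) equals (Lv/R)(1/Tb − 1/T), so Anisimov's recoil is simply βv times the saturated pressure of Eq. (22) referenced at the boiling point. The numbers below are hypothetical order-of-magnitude inputs, not tabulated alloy data:

```python
import math

def saturated_pressure(T, T_ref, P_ref, L_v, R):
    """Clausius-Clapeyron relation, Eq. (22): saturated vapor pressure at T,
    integrated from a reference point (T_ref, P_ref), e.g. the boiling point
    at standard atmospheric pressure. L_v: specific latent heat of
    evaporation; R: specific gas constant."""
    return P_ref * math.exp(-(L_v / R) * (1.0 / T - 1.0 / T_ref))

def recoil_anisimov(T, T_b, P_b, L_v, R, beta_v=0.54):
    """Anisimov's recoil pressure, Eq. (24)."""
    return beta_v * P_b * math.exp(L_v * (T - T_b) / (R * T * T_b))

# Hypothetical, order-of-magnitude inputs (placeholders, not TC4 data):
T_b, P_b = 3315.0, 101325.0   # boiling point (K), standard pressure (Pa)
L_v, R = 8.9e6, 180.0         # latent heat (J/kg), specific gas constant (J/(kg K))
```

At T = Tb the recoil equals 0.54 Pb, and it grows monotonically with surface temperature, which is the behavior of the curves in Fig. 10.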
2.5 Single-track, multitrack, and multilayer simulation
After spreading the powder layer on the previous layer with the discrete element method (DEM) [22], the validated thermal-fluid flow model can be applied to single-track, multitrack, and multilayer simulations. The single track, as the building unit of the AM process, can be simulated with the AM processing model to predict the morphology and defects in the track as well as the molten pool flow. Moreover, the influence of the parameters for multiple tracks (e.g., scanning path and hatching distance) can be analyzed with this model. Taking the balling effect simulation [22] of EB-PBF as an example, the powder size ranges from 40 to 80 μm, the layer thickness is 0.1 mm, and the
Fig. 13 Single-track simulation results of balling effect from top view. (Reprinted from W. Yan, W. Ge, Y. Qian, S. Lin, B. Zhou, W.K. Liu, F. Lin, G.J. Wagner, Multi-physics modeling of single/multiple-track defect mechanisms in electron beam selective melting, Acta Materialia 134 (2017) 324–333, with permission from Elsevier.)
electron beam power and scanning speed are 30 W and 0.5 m/s, respectively. In the single-track simulation, discontinuous balls form as shown in Fig. 13. The particles are heated and melted, but the molten regions are discontinuous (see the black dashed boxes in Fig. 13a–d and the circles in Fig. 13e and f). These isolated regions form small clusters and solidify into balls. The discontinuous regions between the solidified balls do not correspond to gaps in the powder layer where particles were absent. Instead, the solidified balls form within the powder layer (the black dashed rhombus in Fig. 13a), and the melted substrate pulls the neighboring melted particles into that region (see Fig. 13d and e).
Fig. 14 The fused zones with various scan paths of (a) single track, (b) S-shaped scan path with hatching space of 0.2 mm, and (c–f) Z-shaped scan path with hatching distances of 0.20, 0.24, 0.26, and 0.32 mm, respectively, at various cross sections at a given x-position, denoted by (1) 0.35 mm, (2) 1.04 mm, and (3) 2.06 mm. (Reprinted from W. Yan, W. Ge, Y. Qian, S. Lin, B. Zhou, W.K. Liu, F. Lin, G.J. Wagner, Multi-physics modeling of single/multiple-track defect mechanisms in electron beam selective melting, Acta Materialia 134 (2017) 324–333, with permission from Elsevier.)
To further analyze the influence of the interaction between different tracks and of the scanning path on the morphology, multiple-track simulations in one layer are conducted. Cross sections from the S-shaped (opposite directions for neighboring tracks) and Z-shaped (same direction for neighboring tracks) scanning simulations are presented in Fig. 14. The red, blue, and transition regions in the cross sections correspond to the melted (later solidified), unmelted, and mushy zones, respectively. For the cases with the same hatching distance (the b and c series), the height of the molten pool differs between the start and end regions, which is a result of the Marangoni effect.
The influence of the hatching distance is analyzed under the Z-shaped scanning path (Fig. 14c–f). As the hatching distance increases, more and more voids form between the tracks in the remelting region, as shown in the Fig. 14d and e series. The intertrack voids form at the beginning of the track because the newly introduced heat on the cold substrate has a relatively long time for conduction, so smaller melted regions are created at the start of the track. If the hatching distance increases further, two neighboring tracks become isolated (Fig. 14f), which leads to a discontinuous structure in the as-built part. Thus, controlling the hatching distance is important to reduce the intertrack voids and improve structural continuity. With the thermal-fluid flow model, the influence of multiple layers can also be studied. Two common scanning strategies, the layer-wise rotating strategy and the interlaced scanning strategy (Fig. 15), were investigated by Yan et al. [5], who showed that the interlaced scanning strategy is more effective in reducing the porosity of the as-built part. Due to the lack of fusion between tracks and layers, voids can be observed in the regions midway between the track centerlines of adjacent layers and tracks, as indicated by the red regions in Fig. 15a. Due to the low energy intensity or a thick powder layer, these regions are less likely to be fully melted, and the size of such voids scales with the powder size distribution. This phenomenon is reproduced in the simulation shown in Fig. 16a (θ = 90 degrees), while no voids are observed in Fig. 16b under the same parameters: the scanning lines in Layer 2 lie in the middle of the scan lines in Layer 1, effectively eliminating the voids between scan centerlines.
Fig. 15 Schematic representation of layer-wise scanning strategy of (a) rotated by θ and (b) parallel and interlaced scanning strategy. Red dots indicate the most probable position of void forming. (Reprinted from W. Yan, Y. Qian, W. Ge, S. Lin, W.K. Liu, F. Lin, G.J. Wagner, Meso-scale modeling of multiple-layer fabrication process in selective electron beam melting: inter-layer/track voids formation, Mater. Des. 141 (2018) 210–219, with permission from Elsevier. Open Access.)
Fig. 16 Schematic representation of multilayer scanning strategy of (a) rotated by θ and (b) parallel and interlaced scans. Red dots indicate the most probable position of void formation. (Reprinted from W. Yan, Y. Qian, W. Ge, S. Lin, W.K. Liu, F. Lin, G.J. Wagner, Mesoscale modeling of multiple-layer fabrication process in selective electron beam melting: inter-layer/track voids formation, Mater. Des. 141 (2018) 210–219, with permission from Elsevier. Open Access.)
In this section, the physical models for AM process simulation were discussed and validated against experimental results, and the mechanisms and defects in the AM process were simulated and discussed. Metal AM process simulation is not limited to the topics above; it has also been applied to liquid spattering [23] and other phenomena to optimize AM processing, which are not discussed further here. Nevertheless, more progress is needed to improve the fidelity of the models. For example, the fully coupled gas flow, the interaction between moving particles and the vapor/molten pool, and particle oxidation are not yet incorporated into the molten pool flow model. The metal vapor and plasma above the molten pool surface influence electron penetration and laser reflection, but these effects are not considered. Accurate material parameters are also needed, and their scarcity leads to uncertainties in the simulations; for example, the latent heat of phase change is unknown for many alloys. Moreover, nanoparticles are being added in metal AM to improve the grain morphology of the as-built part, but their effect on the molten pool flow is not yet fully studied.
3. Simulating microstructure evolution
The topology, temperature gradient, and cooling rate have important influences on microstructure evolution, which further influences the
Fig. 17 The microstructure evolution simulations. (a) Dendrite growth simulation [24]. (b) Grain morphology simulation [25]. (c) Precipitates evolution simulation [26] (red part is the precipitation). (Reprinted from Y. Yu, Y. Li, F. Lin, W. Yan, A multi-grid cellular automaton model for simulating dendrite growth and its application in additive manufacturing, Addit. Manuf. (2021) 102284; Y. Yu, M.S. Kenevisi, W. Yan, F. Lin, Modeling precipitation process of Al-Cu alloy in electron beam selective melting with a 3D cellular automaton model, Addit. Manuf. 36 (2020) 101423, with permission from Elsevier. M. Yang, L. Wang, W. Yan, Phase-field modeling of grain evolutions in additive manufacturing from nucleation, growth, to coarsening, npj Comput. Mater. 7 (1) (2021) 1–12, Open Access.)
mechanical properties of the as-built part. Thus, microstructure evolution is simulated by inputting the temperature profile from the process simulation into the simulation domain. During metal solidification in metal AM, dendrites (Fig. 17a) grow from the solidification front or from nucleation and form the grain structure (Fig. 17b). Furthermore, in situ heat treatments induce precipitation between grains, as shown in Fig. 17c. To simulate microstructure evolution, several numerical models combining molten pool dynamics and grain evolution have been proposed: the coupled cellular automaton-finite element (CAFE) model [27], the coupled cellular automaton-finite volume method (CAFVM) [28], the coupled cellular automaton-lattice Boltzmann (CALB) model [29], the coupled phase field-thermal lattice Boltzmann (PF-TLB) model [30], and the coupled phase field-finite volume method (PF-FVM) [31]. As this list suggests, cellular automaton (CA) and phase field (PF) are two popular numerical methods for simulating microstructure evolution. In this section, the temperature profile from the AM process simulation is not discussed in detail; we focus on its application to the prediction of the 3D microstructure in AM with the CA and PF methods.
3.1 Simulation of dendrite growth
Dendrite growth during metal solidification is directly related to defect formation, such as cracks and precipitation of second phases, while quantitative simulation of the growth of dendrites with arbitrary orientation remains challenging. In this section, a multigrid CA model [24] developed to simulate dendrite growth in both the casting and AM processes is described. CA methods describe the evolution of a discrete system of variables by applying a set of deterministic or probabilistic transformation rules that depend on the values of the variables in a cell and in the nearby cells of a regular lattice. To accurately simulate the movement of the solid-liquid interface, the cells at the solid-liquid interfacial regions are identified and further divided into small cubic cells. The original large cells are called parent-grid cells, and the small cells are called child-grid cells, as shown in Fig. 18a. In each parent-grid cell, a set of state variables is assigned: Fs is the solid fraction, Cl is the solute concentration in the liquid, g is the index of the dendrite in the cell, and S is the type of the cell (S = 1 for a solidified cell, S = 0 for an interfacial cell, S = −1 for a liquid cell, and S = 2 for a potential interfacial cell, as shown in Fig. 18a). The state variables in the child grid are the solid fraction (fs) and the type of cell (s; s = 0 for a solid cell and s = 1 for a liquid cell). In the child grid, the decentered octahedron growth algorithm [32] (Fig. 18b) is implemented. The octahedron envelopes constitute the actual solid-liquid interface; their growth and capture of the neighboring child-grid cells simulate the movement of the solid-liquid interface. In addition, the directions of the six diagonals of the octahedron envelope represent the orientation of the dendrite. The two-way coupling process between the parent and child grid is shown in Fig.
18c. First, the solid-liquid interface curvature (κ) and solute concentration (Cl) are calculated in the parent grid to determine the movement of the solid-liquid interface, that is, the growth of the envelopes in the child grid. Next, the growth of the envelopes and their capture of cells are computed according to [32]. Finally, the solid fraction (fs) in the child grid is calculated and used to update the solid fraction in the parent grid. In this model, the driving force for the movement of the solid-liquid interface is the deviation of the solute concentration in the liquid phase (Cl) from the equilibrium concentration (Cl^eq). The solid fraction change (ΔFs) in one time step (Δt) in a parent-grid cell can be calculated as
Fig. 18 The schematic representation in the multigrid CA model for dendrite growth. (a) Schematic representation of the multigrid mesh in 2D. (b) Schematic representation of decentered octahedron growth algorithm. The purple squares in (A) are the octahedral envelopes. (c) The schematic representation of the “handshaking” between the parent and child grid. (Reprinted from Y. Yu, Y. Li, F. Lin, W. Yan, A multi-grid cellular automaton model for simulating dendrite growth and its application in additive manufacturing, Addit. Manuf. (2021) 102284, with permission from Elsevier.)
\Delta F_s = \frac{(1 - F_s)\left(C_l - C_l^{eq}\right)}{C_l^{eq}(1 - k)}, \qquad
C_l^{eq} = C_0 + \frac{T - T^{eq}}{m_l} + \frac{\Gamma \kappa}{m_l}    (25)
where k is the solute partition coefficient, ml is the liquidus slope, Γ is the Gibbs-Thomson coefficient, and Teq is the equilibrium temperature at the initial composition C0.
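The interface update of Eq. (25) can be sketched as follows; the normalization Cl^eq(1 − k) in the denominator is the standard CA supersaturation form assumed here, and the numerical values in the checks are illustrative (Fe-C-like placeholders), not calibrated model inputs:

```python
def equilibrium_concentration(C0, T, T_eq, m_l, Gamma, kappa):
    """Equilibrium liquid concentration of Eq. (25):
    C_l_eq = C0 + (T - T_eq)/m_l + Gamma*kappa/m_l.
    m_l is the liquidus slope (negative for Fe-C), kappa the local curvature."""
    return C0 + (T - T_eq) / m_l + Gamma * kappa / m_l

def solid_fraction_increment(F_s, C_l, C_l_eq, k):
    """Solid fraction change over one time step, Eq. (25):
    positive (solidification advances) when the liquid is
    supersaturated, i.e. C_l > C_l_eq."""
    return (1.0 - F_s) * (C_l - C_l_eq) / (C_l_eq * (1.0 - k))
```

At exactly the equilibrium concentration the increment vanishes, so the interface stops moving, which is the intended fixed point of the driving force.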
The solute diffusion is computed in the parent grid. Since the diffusion coefficient in the liquid phase is several orders of magnitude higher than that in the solid phase, only liquid diffusion is accounted for, based on Fick's law:

\frac{\partial C_l}{\partial t} = \nabla \cdot \left(D_e \nabla C_l\right)    (26)
where De is the effective solute diffusion coefficient, which depends on the status of the cell. Within one time step, the numerical scheme is as follows: (1) Loop over all interfacial parent-grid cells and calculate the interface movements. (2) Loop over all envelopes to complete the growth and capture processes, calculate the solid fraction in the child-grid cells, and then update the solid fraction in the parent-grid cells. (3) Loop over all parent-grid cells whose solid fraction changed in this time step and update their solute concentration. (4) Calculate the diffusion and update the temperature field. The algorithm is implemented as dendrite growth software in C++, with OpenMP used for parallel computation. To verify the ability of the model to simulate the growth of dendrites with arbitrary orientation, dendrite growth simulations in an Fe-0.6 wt%C alloy are conducted as a benchmark test. In the benchmark test, the length ratio between the parent grid and child grid is 3, the undercooling ΔT is 8 K, and 27 dendrite seeds with random orientations are randomly placed in the simulation domain. In Fig. 19, the dendrite morphology is presented as a time sequence. As the dendrites grow, secondary and tertiary arms evolve from the initial branchless needle shape. As shown in Fig. 19c and d, the dendrites interact with each other: some dendritic trunks stop lengthening, and the trunks and arms coarsen gradually. The temperature gradient (10^5–10^7 K/m) and cooling rate (10^3–10^5 K/s) in EB-PBF are much higher than those in the casting process. To validate the model's feasibility under AM conditions, dendrite growth simulations driven by the solidification conditions from the molten pool dynamics simulation introduced in Section 2 are carried out and compared with experiments conducted with an electron beam on an Inconel 718 substrate. Fig.
20 shows the simulated dendrite structures corresponding to the dendrite structures observed at three different locations in the center plane of the experimental track. The number of dendrites decreases and the primary dendrite arm spacing increases as the distance from
Fig. 19 The benchmark test of multidendrite growth at (a) t ¼ 0.008 s, (b) t ¼ 0.016 s, (c) t ¼ 0.020 s, and (d) t ¼ 0.028 s. (Reprinted from Y. Yu, Y. Li, F. Lin, W. Yan, A multi-grid cellular automaton model for simulating dendrite growth and its application in additive manufacturing, Addit. Manuf. (2021) 102284, with permission from Elsevier.)
Fig. 20 The top view of the simulated dendrite morphology in different regions of the track. The distance from the simulated region to the beam center for (a) to (c) increases. (Reprinted from Y. Yu, Y. Li, F. Lin, W. Yan, A multi-grid cellular automaton model for simulating dendrite growth and its application in additive manufacturing, Addit. Manuf. (2021) 102284, with permission from Elsevier.)
the molten pool bottom increases. These results are further compared with experimental results and show fairly good consistency.
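Step (4) of the scheme above, the solute diffusion of Eq. (26), can be illustrated with an explicit finite-difference update (a 1D sketch with constant De; the actual solver is 3D C++ with OpenMP, so this is only an illustration of the update rule):

```python
import numpy as np

def diffuse_explicit(C, D_e, dx, dt):
    """One explicit finite-difference step of Eq. (26),
    dC/dt = div(D_e grad C), here in 1D with constant D_e and
    zero-flux (Neumann) boundaries.
    Explicit stability requires dt <= dx**2 / (2 * D_e)."""
    lap = np.empty_like(C)
    lap[1:-1] = (C[2:] - 2.0 * C[1:-1] + C[:-2]) / dx**2
    lap[0] = (C[1] - C[0]) / dx**2       # mirrored boundary cell
    lap[-1] = (C[-2] - C[-1]) / dx**2
    return C + dt * D_e * lap
```

With zero-flux boundaries the discrete Laplacian sums to zero, so the total solute content is conserved at every step, a property worth checking in any diffusion solver.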
3.2 Simulation of grain evolution
The phase field (PF) method is another common approach to simulate microstructure evolution. In this section, a PF model [25] is implemented to simulate the grain growth process. Generally, a set of continuous field variables is adopted in a PF model to describe the compositional and/or structural domains. There are two types of field variables: conserved (e.g., temperature and element concentration) and nonconserved (e.g., grain orientation and phase state). The temporal evolution of the conserved field variables is governed by the Cahn-Hilliard [33, 34] nonlinear diffusion equation:

\frac{\partial \phi_i}{\partial t} = \nabla \cdot M_{ij} \nabla \frac{\delta F}{\delta \phi_j(\mathbf{x}, t)}    (27)
The nonconserved field variables evolve with the Allen-Cahn [35, 36] relaxation equation or time-dependent Ginzburg-Landau [37] equation:

\frac{\partial \psi_p}{\partial t} = -L_{pq} \frac{\delta F}{\delta \psi_q(\mathbf{x}, t)}    (28)
where x, t, and Mij are the position, time, and diffusivities of the species, ϕi are the conserved field variables, and Lpq is the mobility of the nonconserved field variables ψp. Subscripts are used to distinguish different chemical species and phases/domains. F is the total free energy of the system, which takes distinct forms in different PF models [38]. In the current PF model, some simplifications are made. For the conserved variables, the evolution of the element composition is ignored, and the temperature field is obtained from the CFD simulation (Section 2). Two nonconserved phase field variables are implemented: the phase state (ζ) and the grain orientation (ηi). ζ is 1 for the solid phase and 0 for the liquid phase; ηi is 1 for the ith grain orientation and 0 otherwise. The time-dependent Ginzburg-Landau equation is used to describe the grain growth process:

\frac{\partial \zeta(\mathbf{r}, t)}{\partial t} = -L_p \frac{\delta F(\zeta, \eta, T)}{\delta \zeta(\mathbf{r}, t)}    (29)
\frac{\partial \eta_i(\mathbf{r}, t)}{\partial t} = -L_g \frac{\delta F(\zeta, \eta, T)}{\delta \eta_i(\mathbf{r}, t)}    (30)
where Lp and Lg are the kinetic coefficients related to the interfacial mobility and the grain boundary mobility, respectively. The total free energy (F) is composed of the local free energy density (floc) and the gradient energy density (fgrad):

F = \int_V \left(f_{loc} + f_{grad}\right)\, \mathrm{d}V    (31)
The local free energy density floc is composed of the free energy density of the liquid and solid phases (fphase) and of the grain orientation differences (fgrain):

f_{loc} = f_{phase} + f_{grain}
f_{phase} = m_p\left\{(1 - \zeta)^2 \phi(\tau) + \zeta^2 \left[1 - \phi(\tau)\right]\right\}
f_{grain} = m_g\left[\sum_{i=1}^{n}\left(\frac{\eta_i^4}{4} - \frac{\eta_i^2}{2}\right) + \gamma \sum_{i=1}^{n}\sum_{j \neq i}^{n} \eta_i^2 \eta_j^2 + (1 - \zeta)^2 \sum_{i=1}^{n} \eta_i^2 + \frac{1}{4}\right]
\phi(\tau) = \frac{1}{2}\left\{1 - \tanh\left[\vartheta(\tau - 1)\right]\right\}    (32)

where mp and mg are precoefficients, τ is the ratio between T and the liquidus temperature Tl, ϑ is a constant in ϕ(τ), and γ is a model parameter determined by the grain boundary energy and width. The gradient energy density is

f_{grad} = \frac{\kappa_p}{2} (\nabla \zeta)^2 + \frac{\kappa_g}{2} (\nabla \eta_i)^2    (33)
where κp and κg are the gradient term coefficients for the solid-liquid interface and the grain boundary, respectively. The grain growth simulation process is illustrated in Fig. 21. The temperature field is calculated with the CFD model of Section 2 and input into the PF model; the grain growth in a single track is then simulated with the PF model. The temperature field for each subsequent track and layer is calculated and input into the PF model repeatedly until the simulation of the multilayer grain morphology is finished. Fig. 22a–e shows the temperature fields of the multilayer multitrack thermal-fluid flow simulation, where Fig. 22a and b shows the temperature fields during the first track of the first layer and the second track of the first layer, respectively. Since the laser scanning direction is rotated by 90 degrees between adjacent layers,
Fig. 21 The schematic representation of grain growth simulation. (a) The temperature field by thermal-fluid flow simulation. (b) The liquid and solid phase and grain morphology during metal AM. (c) The laser scanning path between layers. The black box in (a) indicates the simulation domain for PF simulation. (Reprinted from M. Yang, L. Wang, W. Yan, Phase-field modeling of grain evolutions in additive manufacturing from nucleation, growth, to coarsening, npj Comput. Mater. 7 (1) (2021) 1–12, with permission from Nature Publishing Group. Open Access.)
the scanning traces rotate as shown in Fig. 22d and e. In Fig. 22f, where the local temperature is higher than Tl, the powder particles and substrate are melted and the grain structure is dissolved. In Fig. 22g, some grains formed in the first track are remelted. The comparison of Fig. 22h–j shows that coarse grains form in the upper layers. The grain size analysis for the L-PBF process is given in Fig. 23. The aspect ratio of a grain (ϕg) is defined as

\phi_g = \frac{2a}{b_1 + b_2}    (34)
where a is the major axis length, and b1 and b2 are the other two axis lengths of the ellipsoid equivalent to the grain shape, as shown in Fig. 23a. The aspect ratios from the PF simulation match the experimental results well. The aspect ratio of 83.4% of the grains is larger than 1.5; these grains can be regarded as columnar. In Fig. 23b, both the average grain volume and the grain number increase and then become stable, but at different times: the grain number stabilizes after metal solidification, while the grain volume still increases slightly through grain coarsening during further cooling. The number of grains in each track decreases as laser scanning proceeds, while the average grain volume increases in each layer, as shown in Fig. 23c and d. This is because the temperature of the substrate and the as-built part increases with each newly scanned track, and the temperature gradient between liquid and solid decreases.
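The relaxation dynamics of Eq. (29), with the fphase of Eq. (32), can be sketched for a single 1D phase variable; Lp, mp, κp, and the steepness ϑ below are placeholder values, not the calibrated parameters of the model:

```python
import numpy as np

def phi(tau, theta=10.0):
    """Interpolation function of Eq. (32): ~1 below the liquidus (tau < 1),
    ~0 above it. theta is a hypothetical steepness constant."""
    return 0.5 * (1.0 - np.tanh(theta * (tau - 1.0)))

def ginzburg_landau_step(zeta, tau, L_p, m_p, kappa_p, dx, dt):
    """One explicit update of Eq. (29) for the phase state zeta in 1D:
    d(zeta)/dt = -L_p * (d f_phase / d zeta - kappa_p * laplacian(zeta))."""
    p = phi(tau)
    # derivative of f_phase = m_p[(1-zeta)^2 p + zeta^2 (1-p)] w.r.t. zeta
    df = m_p * (-2.0 * (1.0 - zeta) * p + 2.0 * zeta * (1.0 - p))
    lap = np.zeros_like(zeta)
    lap[1:-1] = (zeta[2:] - 2.0 * zeta[1:-1] + zeta[:-2]) / dx**2
    return zeta + dt * (-L_p) * (df - kappa_p * lap)
```

Below the liquidus (τ < 1) the update drives ζ toward the solid well (ζ = 1), and above it toward the liquid well (ζ = 0), which is the behavior the double-well fphase is designed to produce.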
Fig. 22 Evolution of temperature field and grains during the multilayer multitrack L-PBF process. (a–e) The temperature field evolution by thermal-fluid flow simulation. (f–j) The grain evolution by phase field simulation. The red regions in (a) and (b) are the liquid regions of the molten pool. The black arrow points to the partially melted region. (Reprinted from M. Yang, L. Wang, W. Yan, Phase-field modeling of grain evolutions in additive manufacturing from nucleation, growth, to coarsening, npj Comput. Mater. 7 (1) (2021) 1–12, with permission from Nature Publishing Group. Open Access.)
Fig. 23 Evolution of grain size during the L-PBF process. (a) The aspect ratio of grains in the multitrack multilayer simulations. (b) The average grain volume and number in the second layer second track by PF simulation. (c, d) Final grain number and average grain volume distribution by PF simulation, respectively. Symbol ij in (c) and (d) denotes ith layer jth track. (Reprinted from M. Yang, L. Wang, W. Yan, Phase-field modeling of grain evolutions in additive manufacturing from nucleation, growth, to coarsening, npj Comput. Mater. 7 (1) (2021) 1–12, with permission from Nature Publishing Group. Open Access.)
3.3 Precipitation process in the EB-PBF

In some AM processes, especially EB-PBF, the powder bed is preheated, which promotes the precipitation of new phases [39, 40]. The precipitates in precipitation-hardening alloys [41] are important for strengthening the as-built components. For instance, the θ and S phases are important precipitates in Al 2024 alloy, which is commonly applied in the aerospace and aviation industries. In this part, the precipitation process during EB-PBF is simulated with a CA model. The CA model for precipitation simulation [26] includes:
(1) Nucleation of the precipitates. The intergranular cells are identified, and some of them are chosen as nucleation sites for the precipitates according to classical nucleation theory and a Poisson seeding algorithm.
Multiscale modeling applied to additive manufacturing
(2) Evolution of the precipitates. The growth rate of the precipitates is calculated according to the thermodynamic equilibrium condition. If the growth rate is positive, the precipitates grow; if not, they decompose.
(3) Solute diffusion. Diffusion in the matrix is accounted for and simulated according to Fick's law.
The algorithm is implemented in a dendrite-growth software package written in C++, and OpenMP is utilized for parallel computation. In the EB-PBF validation experiment, the temperature of the powder bed gradually rises to 793 K in about 8000 s and is then maintained until the fabrication process completes. Due to the evaporation effect, the concentration of Mg drops from 1.68 wt% to 0.358 wt%; consequently, the concentration of Cu, about 6.5 wt%, is higher than the nominal value for Al 2024 powder. Therefore, in this validation work, Al 2024 is treated as an Al-Cu binary alloy and only the precipitation of the θ phase is simulated. The evolution of the θ phase is presented in Fig. 24. In the first 250 s, the θ precipitates are very small, and the nearby matrix can provide enough Cu atoms for their growth; thus, the growth rate in this period is very high, as shown in Fig. 24b. As the concentration of Cu in the matrix near the precipitates increases and gradually approaches the equilibrium value, the growth rate decreases. After that, the growth of the θ phase depends on the diffusion of Cu atoms from the grain interiors to the grain boundaries. Since the growth of the θ phase mainly occurs in this diffusion-determined stage, the precipitation process can be regarded as a diffusion-controlled phase transformation. In Fig. 25a and b, the experimentally observed precipitates show a characteristic distribution: they are sparse and thin near grain junctions (red circles) but dense and thick along the grain boundaries, as indicated by the arrows. This distribution is well reproduced in Fig. 25c1 and d1. In Fig. 25c2 and d2, the precipitates near grain junctions (J1–J4) are farther from the Cu-rich regions than those along the grain boundaries (b1 and b2), making it hard for them to obtain enough Cu atoms; thus, the precipitates near the grain junctions are thinner and sparser. In addition, the microstructure in Fig. 25c3 and d3 presents the overall Cu concentration distribution, which matches the experimental results in Fig. 25b well. Fig. 26e shows the evolution of the θ phase fraction during the whole simulation: it increases rapidly at first, then stays stable after a small decrease, and finally increases to 4.6%. The three stages correspond to the temperature increase, temperature holding, and temperature decrease of the as-built part, respectively, as shown by the red curve. In the temperature increase stage, the
Fig. 24 (a) Left: Cross section of the simulated microstructures at t = 100, 1000, and 3500 s; right: the Cu concentration distribution at the corresponding times. (b) The phase fraction curve and growth rate curve. (c) The comparison of the thermodynamic driving force and geometrical driving force under different Cu concentrations. (Reprinted from Y. Yu, M.S. Kenevisi, W. Yan, F. Lin, Modeling precipitation process of Al-Cu alloy in electron beam selective melting with a 3D cellular automaton model, Addit. Manuf. 36 (2020) 101423, with permission from Elsevier.)
precipitates grow rapidly (Fig. 26a1 and a2). When the temperature becomes too high, some precipitates start decomposing and even disappear during 3500 s < t < 5500 s, as shown by the green circles and yellow arrows in Fig. 26a1–b2. However, the precipitates in the green-arrow regions of Fig. 26a1 and b1 keep growing until Fig. 26d2 owing to the higher Cu concentration along the grain boundaries. In the temperature-holding stage, more precipitates decompose, as indicated by the yellow arrows in Fig. 26b2 and c2. This is similar to the precipitate distribution in the red circles of Fig. 26f, where there are nearly no precipitates at the grain junctions. When the temperature decreases, the growth of the precipitates in Fig. 26d2 resumes. The final distribution and size of the precipitates are similar to the experimental results in Fig. 26g.
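The diffusion-controlled character described above can be illustrated with a minimal 1D sketch: a precipitate at one end of a matrix segment absorbs the local Cu supersaturation, while the matrix evolves by an explicit finite-difference form of Fick's second law. All parameters are illustrative assumptions, not the values of the cited CA model:

```python
import numpy as np

# Toy 1D model of diffusion-controlled precipitate growth. Cell 0 borders
# the precipitate: any Cu above the equilibrium interface concentration is
# transferred into the precipitate; the matrix relaxes by Fickian diffusion.
D, dx, dt = 1.0e-2, 1.0, 1.0        # illustrative diffusivity, cell size, step
c_eq = 0.02                          # equilibrium Cu fraction at the interface
c = np.full(50, 0.065)               # initial matrix Cu (wt fraction)
precip = 0.0                         # Cu accumulated in the precipitate
growth = []                          # per-step uptake (growth-rate proxy)

for _ in range(2000):
    lap = np.zeros_like(c)           # discrete Laplacian, no-flux far end
    lap[1:-1] = c[2:] - 2.0 * c[1:-1] + c[:-2]
    lap[0] = c[1] - c[0]
    lap[-1] = c[-2] - c[-1]
    c = c + D * dt / dx**2 * lap     # Fick's second law, explicit update
    uptake = max(c[0] - c_eq, 0.0)   # supersaturation absorbed at interface
    precip += uptake
    c[0] -= uptake
    growth.append(uptake)
```

The per-step uptake is large at first, while the nearby matrix is still Cu-rich, and then decays once growth becomes limited by long-range diffusion, mirroring the two regimes seen in Fig. 24b.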
Fig. 25 (a) The microstructure of the as-built part. (b) The EDS results of Cu concentration of the as-built part in (a). Series c and d are the simulation results on the cross-sections z = 20 and 52 μm at t = 3500 s. (c1, d1) The simulated microstructure. (c2, d2) The Cu distribution in the matrix. (c3, d3) The overall Cu concentration distribution in the microstructure. Red clusters along the grain boundaries in (c1), (c2), (d1), and (d2) are precipitates. (Reprinted from Y. Yu, M.S. Kenevisi, W. Yan, F. Lin, Modeling precipitation process of Al-Cu alloy in electron beam selective melting with a 3D cellular automaton model, Addit. Manuf. 36 (2020) 101423, with permission from Elsevier.)
This section presented several models for simulating microstructure evolution in AM. These models are validated against experiments and show good consistency with the experimental results. Some critical microstructure evolution processes are successfully reproduced by the presented models, and the physical mechanisms behind them are revealed. However, some obstacles still need to be overcome before these simulation models can be applied broadly. The simulation domain
Fig. 26 The precipitate distribution in the cross section and 3D microstructure: (a1, a2) t = 3500 s, (b1, b2) t = 5500 s, (c1, c2) t = 11,500 s, and (d1, d2) t = 19,716 s. (e) The θ phase fraction curve. (f, g) The microstructure at the center and bottom of the sample, respectively. (Reprinted from Y. Yu, M.S. Kenevisi, W. Yan, F. Lin, Modeling precipitation process of Al-Cu alloy in electron beam selective melting with a 3D cellular automaton model, Addit. Manuf. 36 (2020) 101423, with permission from Elsevier.)
is very small compared with the actual as-built components because the computational cost is enormous. In addition, accurate physical parameters of the materials are still lacking. More and more alloys, and even alloy combinations, are being applied in AM, and their physical parameters have not been fully investigated. Furthermore, the calculations of nucleation and grain growth are based on classical theories or equilibrium assumptions, which might not be applicable to the AM process.
4. Mechanical properties simulation

During the AM process, the substrate is heated and cooled repeatedly, which leaves it deformed. The nonuniform thermal load leads to distortion, cracking, and even delamination of the as-built part. A quantitative understanding of the thermal stress evolution during AM is therefore valuable for ameliorating, or even eliminating, these issues. Currently, FEM [42–44] and the crystal plasticity finite element method (CPFEM) [45, 46] are used to investigate the thermal stress, residual stress, and deformation of as-built parts during AM at different scales, as shown in Fig. 27. To predict the grain-level residual stress with anisotropic properties at the mesoscale, the grain morphology is input into the CPFEM, as shown in Fig. 27b2. With grain morphology introduced from either experiments or simulations, the structure-property relationship can be revealed with CPFEM simulations. These models start from a constitutive relationship between the stress and strain of the material:

σ = Dε,  ε = εe + εth  (35)

where σ, D, and ε are the stress tensor, elastic tensor, and strain tensor, respectively, and εe and εth are the elastic strain and thermal strain. The temperature profile from Section 2 is input into the mechanical model to calculate the deformation and stress. In this section, several mechanical models applied to the AM process at different scales are presented.
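The constitutive split of Eq. (35) can be illustrated with a minimal isotropic sketch, using the usual convention that stress is generated by the elastic part of the strain. The material constants (roughly steel-like) and the fully constrained loading case are illustrative assumptions:

```python
import numpy as np

# Eq. (35) sketch: total strain = elastic + thermal, stress from the
# elastic part through an isotropic elasticity matrix D (Voigt notation).
E_mod, nu = 200e9, 0.3            # Young's modulus (Pa), Poisson ratio
alpha, dT = 1.2e-5, 500.0         # thermal expansion (1/K), temperature rise (K)

lam = E_mod * nu / ((1 + nu) * (1 - 2 * nu))   # Lame constants
mu = E_mod / (2 * (1 + nu))
D = np.zeros((6, 6))
D[:3, :3] = lam
D[np.arange(3), np.arange(3)] += 2 * mu        # normal components
D[np.arange(3, 6), np.arange(3, 6)] = mu       # engineering shear components

eps_th = np.array([alpha * dT] * 3 + [0.0, 0.0, 0.0])  # thermal strain
eps_total = np.zeros(6)                                # fully constrained element
eps_e = eps_total - eps_th                             # elastic strain
sigma = D @ eps_e                                      # stress (Pa)

print(sigma[:3] / 1e6)   # equal triaxial compression, in MPa
```

A fully constrained heated element develops a compressive stress of magnitude EαΔT/(1 − 2ν) on each normal component, which is the basic driver of the thermal stresses discussed below.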
4.1 Thermal stress simulation for multiple tracks and layers

It is difficult and expensive to track the thermal stress evolution during the AM process through in situ observation or postprocess characterization: the high local temperature and fast heating rate are hard to measure, and postprocess characterization cannot recover the thermal stress evolution or the energy absorption. Thus, researchers increasingly analyze the thermal stress during the AM process with FEM. As specified earlier, the load for the thermal stress comes from the temperature profile. In this section, the thermal stress simulations are conducted by combining the CFD model of Section 2 with an FEM model.
Fig. 27 Series (a) and (b) are the thermal stress prediction by FEM model [47] and grain residual stress prediction by CPFEM [48]. (a1) and (b1) are the temperature distribution; (a2) is the thermal stress; (b2) is the grain morphology; and (b3) is the residual stress. (Reprinted from F. Chen, W. Yan, High-fidelity modelling of thermal stress for additive manufacturing by linking thermal-fluid and mechanical models, Mater. Des. 196 (2020) 109185, with permission from Elsevier. Open Access; Reprinted from N. Grilli, D. Hu, D. Yushu, F. Chen, W. Yan, Crystal plasticity model of residual stress in additive manufacturing, arXiv:2105.13257 (2021), arXiv, with permission.)
Since the CFD and FEM models have different mesh sizes, the temperature profile from the CFD simulation cannot be used by the FEM model directly, and a shape function is adopted to map the CFD temperature field onto the FEM model [47]:

T = Σi Ni T′i  (36)
where T and T′ are the discrete temperatures in the FEM and CFD coordinates, Ni is the mapping coefficient between the two coordinate systems, and the subscript i runs over the CFD cells around a FEM node. Different from the inactive element method, the quiet element method is used here, in which null properties are assigned to the nondeposited elements. The material properties of each element in ABAQUS change with the corresponding phase, that is, vapor, liquid, or solid, during the laser scanning. Generally, a heat source is applied to the substrate directly to simulate the thermal deformation and stress; this is referred to as the thermomechanical simulation in this section. To compare the CFD-FEM model with the thermomechanical simulation model, the resulting temperature and von Mises stress distributions are given in Fig. 28. To ensure a fair cross-comparison, the geometric features and energy absorptivity in the thermomechanical simulation are extracted from the thermal-fluid simulation. In Fig. 28a, the molten pool in the thermomechanical simulation is smaller than that in the CFD-FEM simulation because the molten pool flow, specifically the Marangoni effect, is not incorporated into the thermomechanical simulation. Furthermore, some stress concentration regions are relaxed in the thermomechanical simulation compared with the CFD-FEM simulation (Fig. 28b). As shown in Fig. 29a, a crack perpendicular to the scanning path is observed; such a crack is often caused by the large stress along the scanning direction. The x component of stress in the thermomechanical simulation is smoothly distributed without stress concentration. However, in the CFD-FEM simulation the y component of stress is concentrated along the intertrack gap, as shown in Fig. 29b, which may cause cracks between the scanning tracks. The z component stress concentration in Fig. 29c indicates possible cracks between layers.
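The mapping of Eq. (36) can be sketched as follows. The actual implementation uses FEM shape functions as the coefficients Ni; this illustrative stand-in uses normalized inverse-distance weights (an assumption), which likewise form a partition of unity:

```python
import numpy as np

# Sketch of Eq. (36): T = sum_i N_i * T'_i, mapping CFD cell temperatures
# onto a FEM node. Normalized inverse-distance weights stand in for the
# FEM shape functions N_i; both sum to one over the contributing cells.

def map_temperature(fem_node, cfd_points, cfd_temps, eps=1e-12):
    d = np.linalg.norm(cfd_points - fem_node, axis=1)
    if np.any(d < eps):                 # node coincides with a CFD cell
        return cfd_temps[np.argmin(d)]
    w = 1.0 / d
    w /= w.sum()                        # weights N_i sum to one
    return float(w @ cfd_temps)

cfd_points = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0]], float)
cfd_temps = np.array([1900.0, 1700.0, 1700.0, 1500.0])
T = map_temperature(np.array([0.5, 0.5, 0.0]), cfd_points, cfd_temps)
print(T)   # 1700.0 by symmetry
```

Because the weights sum to one, the mapped temperature is always bounded by the surrounding CFD values, which is the property that makes such a transfer safe between non-matching meshes.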
4.2 Grain-level residual stress simulation

At the grain scale, the nonuniform residual stress generated by the cyclic heating-cooling process can lead to part distortion and even microcracks,
Fig. 28 The comparison of (a) the temperature and (b) stress distribution between the CFD-FEM and thermomechanical simulation. (Reprinted from F. Chen, W. Yan, High-fidelity modelling of thermal stress for additive manufacturing by linking thermal-fluid and mechanical models, Mater. Des. 196 (2020) 109185, with permission from Elsevier. Open Access.)
Fig. 29 Comparison with the crack observed in the L-PBF of Fe-based metallic glass by Aconity Mini (Aconity GmbH, Germany). (a) The crack on the single-track sample, and the X-stress distribution in the thermomechanical and CFD-FEM simulations at t″1 = 0.008 s and t′1 = 0.00855 s, respectively. (b) The Y-stress distribution at t′1 = 0.00835 s. (c) The Z-stress distribution at t′1 = 0.01595 s. (Reprinted from F. Chen, W. Yan, High-fidelity modelling of thermal stress for additive manufacturing by linking thermal-fluid and mechanical models, Mater. Des. 196 (2020) 109185, with permission from Elsevier. Open Access.)
which deteriorate the mechanical properties of the as-built part. Since it is difficult to measure the residual stress for a large number of grains in the bulk via experiments, an accurate CPFEM helps to investigate the formation and evolution mechanisms of residual stress at this length scale. In addition, the grain structure should not be ignored, especially at the mesoscale, because the grain morphology induces anisotropic mechanical properties. To simulate the residual stress at the grain scale, the temperature profiles from the CFD simulation in Section 2 and the grain structures from the PF simulation in Section 3.2 are incorporated into the CPFEM.

4.2.1 Crystal plasticity framework

Generally, the deformation gradient can be decomposed into two components:

F = Fe Fp  (37)

where Fe and Fp are the elastic and plastic deformation gradients. The spatial gradient of the total velocity is defined as

L = Ḟ F⁻¹ = Ḟe Fe⁻¹ + Fe Ḟp Fp⁻¹ Fe⁻¹ = Le + Fe Lp Fe⁻¹  (38)
where Le and Lp are the elastic and plastic velocity gradients. Lp can be formulated as the sum of the shear strain rates over all slip systems:

Lp = Σ_{α=1}^{Ns} γ̇α mα ⊗ nα  (39)

where γ̇α is the shear strain rate of slip system α; several expressions for the shear strain rate exist, depending on the effects taken into consideration [49, 50]. mα and nα are the unit vectors along the slip direction and normal to the slip plane of slip system α, and Ns is the number of slip systems. When the thermal response is considered, the deformation gradient F can be decomposed into elastic, thermal, and plastic parts [49, 51, 52]:

F = Fe Fth Fp  (40)
where Fth is the thermal part of the deformation gradient. Taking the thermomechanical response into consideration, the Green-Lagrange strain tensor is composed of the thermal and elastic deformation gradients:

Ee = (1/2)(Fthᵀ Feᵀ Fe Fth − I)  (41)
where I is the identity matrix. The second Piola-Kirchhoff stress can be calculated by

S = ℂ(Ee − α)  (42)

where ℂ and α are the fourth-order elasticity tensor of the material and the thermal eigenstrain, respectively. The above continuum formulations can be implemented in FEM computational frameworks or software; for example, taking the second Piola-Kirchhoff stress above into the Newton-Raphson approach, iterative calculations can be performed by FEM solvers [48, 53]. The temperature profile used in the CPFEM is linearly interpolated from the thermal-fluid flow simulation in both time, as shown in Fig. 30, and space [54]. Since the mechanical properties of the gas and liquid phases differ greatly from those of the solid, a residual stiffness method based on the temperature field is applied to account for the different material properties of the different phases and to ensure the convergence of the simulation. When the temperature of an element is outside the range from Tg to Tm, the element is considered to be in the liquid (T > Tm) or gas phase (T < Tg). A residual stiffness coefficient (qr) is introduced to describe the residual stiffness tensor of the melted material:

ℂij^residual(T) = qr ℂij^solid(T0)  (43)
Fig. 30 Schematic representation of the temperature mapping between CFD and CPFEM to implement the residual stiffness method. (Reprinted from N. Grilli, D. Hu, D. Yushu, F. Chen, W. Yan, Crystal plasticity model of residual stress in additive manufacturing, arXiv:2105.13257 (2021), arXiv, with permission.)
where ℂij^solid(T0) is the reference stiffness at the ambient environment temperature T0. Thus, the mechanical response of the different phases is modeled by changing the stiffness tensor ℂij. Apart from the residual stiffness method, an element elimination and reactivation method has recently been developed to simulate melting and solidification by reinitializing the state variables in elements, as further described in Ref. [48].

4.2.2 Simulation of grain-level residual stress

By introducing the grain structure and temperature profiles into the CPFE model, simulations can be carried out for a representative region of the AM process at the mesoscale. Taking the model in Ref. [48] as an example, a small region inside a larger sample, consisting of a section of a single powder layer and part of its underlying region, was simulated to avoid the remelting caused by subsequent layers. To simulate the constraint from the colder surrounding material and the substrate, the displacement is set to zero on the lateral and bottom surfaces, while the top surface is left free during the laser scanning. After the laser scanning is completed, the boundary conditions on an adjacent pair of side faces are released to simulate the relaxation of the internal residual stress during the cooling process. Finally, the residual stress after laser scanning and relaxation is obtained for further analysis. The plastic deformations in Fig. 31a–c indicate that tensile and compressive deformations of each component are induced by the laser scanning. For example, in Fig. 31a, the x component of the plastic deformation on the sides and at the bottom of the molten pool is compressive. By contrast, the plastic deformation along the y-direction is compressive at the bottom of the molten pool but tensile on the two sides (Fig. 31b). The plastic deformation along the z-axis is tensile and expands the bottom region, as required by the isochoric nature of plastic deformation, as shown in Fig. 31c.
The diagonal components of the residual stress tensor in Fig. 31d–f show that the residual stress components σxx and σyy are strongly affected by the grain orientation. The σzz component is compressive at the bottom of the simulation domain but not very large at the surface because of the free boundary condition. Based on this analysis of the plastic deformation and residual stress, the mechanism of residual stress formation is illustrated in Fig. 31g. The thermal expansion around the molten pool induces plastic compression in the x-y plane and tension in the out-of-plane z-direction, particularly at the bottom of the molten pool. After the laser scanning is over, tensile residual stress accumulates in the x- and y-directions to compensate for the plastic compression.
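The thermo-elastic stress update of Eqs. (41) and (42) can be sketched numerically. The isotropic stiffness, the Lamé constants, and the isotropic thermal eigenstrain αΔT·I used below are illustrative assumptions, not the parameters of the cited model:

```python
import numpy as np

alpha, dT = 1.0e-5, 300.0        # illustrative expansion coefficient and dT
lam, mu = 60e9, 80e9             # illustrative Lame constants (Pa)

def pk2_stress(F_e, F_th, eigenstrain):
    """Eq. (41) strain, then Eq. (42) stress with an isotropic stiffness."""
    I = np.eye(3)
    E_e = 0.5 * (F_th.T @ F_e.T @ F_e @ F_th - I)   # Green-Lagrange strain
    E_eff = E_e - eigenstrain                        # strain minus eigenstrain
    return lam * np.trace(E_eff) * I + 2.0 * mu * E_eff

F_th = (1.0 + alpha * dT) * np.eye(3)    # stress-free thermal expansion
eig = alpha * dT * np.eye(3)             # matching thermal eigenstrain

S_free = pk2_stress(np.eye(3), F_th, eig)           # unconstrained heating
S_con = pk2_stress(np.linalg.inv(F_th), F_th, eig)  # fully constrained heating
```

Unconstrained heating leaves only a quadratic remainder in αΔT (the stress is nearly zero), whereas suppressing the total deformation (Fe = Fth⁻¹) produces a large compressive stress, the same competition that builds the residual stresses around the molten pool.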
Fig. 31 (a–c) x, y, and z components of the plastic deformation gradient Fp at the bottom of the molten pool, (d–f) σxx, σyy, and σzz components of the residual stress, and (g) the formation mechanism of the residual stresses. (Reprinted from N. Grilli, D. Hu, D. Yushu, F. Chen, W. Yan, Crystal plasticity model of residual stress in additive manufacturing, arXiv:2105.13257 (2021), arXiv, with permission.)
4.3 Multiscale modeling of structure-property relationship

Due to the inhomogeneous microstructure of parts built by metal AM, their structure-property relationship is influenced by the grain structure. In other words, given an accurate grain structure, CPFEM can also simulate the structure-property relationship of the as-built part. In this section, a CPFEM for structure-property modeling is validated against experiments.

4.3.1 Grain structure reconstruction

The grain structure can be reconstructed from either the electron backscatter diffraction (EBSD) results of the as-built part or the numerical simulation results of Section 3.2. To validate the current model, a grain structure generation process based on the experimental results is created first, which includes five steps, as shown in Fig. 32.
(1) To construct the cross-section grain structure, a contour of the track is created based on the EBSD image, and the grains are generated from points on the contour.
Fig. 32 The synthetic grain structure generation, including (a) cross-section reconstruction, (b) scanning track reconstruction, (c) layer reconstruction, (d) plate reconstruction, and (e) fine-tuning. The different colored cells represent different grains in the as-built part. SD and BD represent the scanning direction and building direction. (Reprinted from H. Tang, H. Huang, C. Liu, Z. Liu, W. Yan, Multi-scale modelling of structure-property relationship in additively manufactured metallic materials, Int. J. Mech. Sci. 194 (2021) 106185, with permission from Elsevier.)
(2) The grain morphology of a single track is created based on the cross-section from step 1 and the tilt angle from the EBSD image.
(3) Taking the hatch distance (remelting effect) into consideration, the grain structure of a layer is built.
(4) Along the building direction, the layers are overlapped and the laser scanning direction is rotated between layers; both effects are considered to build the whole grain structure of the as-built part.
(5) In the fine-tuning process, specific substructures or phases generated by the multiple thermal cycles are added, and the vacant voxels at the edges of adjacent layers are assigned to new grains.
The out-of-plane and in-plane views of a 316L stainless steel part are shown in Fig. 33. Compared with the EBSD images of the as-built part by L-PBF, the grain structure is well reproduced by the grain morphology reconstruction model: the grain structures in the red and black ellipses are similar to each other.
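The columnar character produced by such a reconstruction can be illustrated with a toy 2D sketch: seeds placed on a track contour claim voxels under a distance metric stretched along the building direction. This is a stand-in for, not a reproduction of, the five-step procedure above:

```python
import numpy as np

# Toy columnar-grain reconstruction: seeds on the contour z = 0 grow
# upward along the building direction (BD) because BD distances are
# discounted by a stretch factor in the voxel-to-seed assignment.
rng = np.random.default_rng(0)
nx, nz = 40, 40                     # SD (x) by BD (z) voxel grid
n_seeds = 12
seeds = np.column_stack([rng.uniform(0, nx, n_seeds),
                         np.zeros(n_seeds)])   # seeds on the track contour
stretch = 4.0                        # grains elongated along BD

X, Z = np.meshgrid(np.arange(nx), np.arange(nz), indexing="ij")
d2 = ((X[..., None] - seeds[:, 0]) ** 2
      + ((Z[..., None] - seeds[:, 1]) / stretch) ** 2)  # anisotropic metric
grain_id = d2.argmin(axis=-1)        # voxel -> grain label

print(len(np.unique(grain_id[:, -1])))   # grains surviving to the top layer
```

With all seeds on the contour, each voxel column inherits a single grain from bottom to top, the idealized limit of the columnar morphologies reconstructed in Fig. 33.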
Fig. 33 (a) Out-of-plane view and (b) in-plane view of the reconstructed case [55] and the EBSD image [54] of 316 stainless steel by L-PBF. (Reprinted from H. Tang, H. Huang, C. Liu, Z. Liu, W. Yan, Multi-scale modelling of structure-property relationship in additively manufactured metallic materials, Int. J. Mech. Sci. 194 (2021) 106185, with permission from Elsevier. W. Chen, T. Voisin, Y. Zhang, J.-B. Florien, C.M. Spadaccini, D.L. McDowell, T. Zhu, Y.M. Wang, Microscale residual stresses in additively manufactured stainless steel, Nat. Commun. 10 (1) (2019) 1–12, Open Access.)
4.3.2 Polycrystal-scale plasticity model

To save computational cost, a polycrystal-scale plasticity (PCP) model is implemented, in which the grains are grouped into sets according to the distributions of grain size, grain structure, and crystallographic orientation in the grain structure reconstruction stage. The plastic velocity gradient of the polycrystal structure is assumed to be

Lp = Σ_{j=1}^{N2} Σ_{α=1}^{N1} γ̇α mα ⊗ nα fj  (44)
where N1, N2, and fj are the number of slip systems, the number of grain sets, and the volume fraction of grain set j, respectively. For a proportional or pseudoproportional load, the yield stress of the polycrystal structure can be given as

S = Σ_{j=1}^{N2} Sj fj  (45)
where Sj is the stress state at which grain set j yields. To validate this mechanical model, uniaxial tensile, compressive, and cyclic loading simulations are conducted and compared with experiments on a TC4 part manufactured through the L-PBF process. The size of the reconstructed model is 300 × 300 × 300 μm³. The simulation and experimental results under uniaxial loading are shown in Fig. 34, in which the single-crystal-scale plasticity (SCP) model refers to the model without grain set classification. A comparison of the three tests shows only small deviations among the PCP results, the SCP results, and the experiments, which means that the PCP model is not only validated against the experiment but also acceptable considering its reduced computational cost. With the AM processing and microstructure evolution simulation results, the mechanical properties of the as-built part are simulated through FEM and CPFEM, and the models are validated against experiments. As with the AM processing and microstructure evolution simulations, FEM and CPFEM require huge computational resources and are limited to the mesoscale or a few layers. Improving the efficiency and scale of multiscale simulation of metal AM will be an important topic in the future. Furthermore, the cracking, dislocation behavior, and fatigue of parts built by AM are still not fully understood.
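For the yield stress, the volume-fraction averaging of Eq. (45) reduces to a weighted sum over grain sets, which can be sketched directly; the stress values and fractions below are illustrative:

```python
import numpy as np

# Sketch of Eq. (45): S = sum_j S_j * f_j, the yield stress of the
# polycrystal as a volume-fraction-weighted sum over grain sets.

def polycrystal_yield(S_sets, fractions):
    S_sets = np.asarray(S_sets, float)
    f = np.asarray(fractions, float)
    assert abs(f.sum() - 1.0) < 1e-9, "volume fractions must sum to 1"
    return float(f @ S_sets)

# three grain sets (e.g., grouped by size and orientation), yield stress in MPa
S_sets = [950.0, 1020.0, 1100.0]
fractions = [0.5, 0.3, 0.2]
print(polycrystal_yield(S_sets, fractions))
```

Grouping grains into a handful of sets is what makes the PCP model cheaper than the SCP model: the sum runs over N2 sets rather than over every grain individually.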
Fig. 34 Experimental [54] and numerical [55] results of the uniaxial stress-strain curves: (a) tension, (b) compression, and (c) cyclic loading. "Tens," "Comp," and "Cyclic" refer to the experimental results under tensile, compressive, and cyclic loading, respectively. "SCP" refers to the simulation results obtained by combining the grain morphology reconstruction model with the single-crystal-scale plasticity model, and "PCP" refers to the results of the polycrystal-scale plasticity model. (Reprinted from H. Tang, H. Huang, C. Liu, Z. Liu, W. Yan, Multi-scale modelling of structure-property relationship in additively manufactured metallic materials, Int. J. Mech. Sci. 194 (2021) 106185, with permission from Elsevier. W. Chen, T. Voisin, Y. Zhang, J.-B. Florien, C.M. Spadaccini, D.L. McDowell, T. Zhu, Y.M. Wang, Microscale residual stresses in additively manufactured stainless steel, Nat. Commun. 10 (1) (2019) 1–12, Open Access.)
5. Summary

In this chapter, various models related to AM processing, microstructure evolution, and mechanical properties are presented, together constituting a multiscale simulation framework. With this high-fidelity simulation framework, the properties of the as-built part at each building stage can be predicted accurately. As simulation data accumulate, data mining and machine learning could efficiently predict simulation results at different scales and guide the manufacturing process in the future. Although some scanning strategies and experimental parameters (e.g., hatch spacing, laser power, scanning speed) have been optimized by machine learning models trained on experimental and simulation data, the related physical mechanisms are not fully investigated, restricted as they are by the limited experimental and simulation data.
To improve the applicability of the multiscale modeling framework, some problems still need to be solved. As mentioned in the previous sections, the lack of accurate material parameters for many alloys, for example, heat capacity, heat conductivity, and latent heat of phase change, impedes multiscale simulations. In addition, the computational cost of the whole framework is still large, so further optimization of the simulation models is required. Moreover, the influence of moving particles on the molten pool flow, and of the plume on the energy absorption, has not been fully analyzed. Besides, ultrasonic and magnetic fields are being applied in the AM process to tailor the molten pool flow, but simulations of these processes are lacking. The mechanisms of pore formation and hot cracking are also not yet simulated or fully explained together with experiments.
Nomenclature α αv β βv α D σ ε εe εth fb g mα nα nr v vI vR x ΔFs δd δs γ_α q_ e E ηi γ κI κf
energy absorptivity thermal expansion coefficient condensation ratio evaporation coefficient thermal eigenstrain deformation tensor stress tensor strain tensor mechanical strain tensor thermal strain tensor buoyancy force gravitational acceleration vector unit vector along the slip direction unit vector normal of slip plane unit normal vector of surface velocity vector of molten pool flow incident direction reflected direction position in the PF model the solid fraction change characteristic depth Stefan-Boltzmann constant shear rate of the slip system heat loss rate by evaporation emissivity factor ith grain orientation model parameter in fgrain solid-liquid interface curvature curvature of the free surface
Multiscale modeling applied to additive manufacturing
κg κp λ1 Ee F Fth Fe Fp I L Le Lp S Sj solid ij erf(m) μ ϕ ϕg ψ ρ ρ3 ρe ρλ ρp,λ ρs,λ σs σ Ts τ τt ϑ ζ a b1 b2 C Cl eq Cl Cp D De E F f(z) F fgrad
gradient term coefficient for grain boundary gradient term coefficient for solid-liquid interface characteristic length of the mush zone Green-Lagrange strain tensor deformation gradient thermal part of deformation gradient elastic deformation gradient plastic deformation gradient identity matrix spatial gradient of the total velocity elastic velocity gradient plastic velocity gradient yield stress tensor stress state in grain set j yields four elasticity tensor of the material stiffness tensor the Gaussian error function dynamic viscosity conserved field variable aspect ratio of grain nonconserved variable mass density vapor density saturated vapor density reflectivity of laser p-polarized wave reflectivity of laser s-polarized wave reflectivity of laser surface tension surface tension coefficient ratio between T and Tl tangent force on the free surface a constant in conserved field variable phase state major axis minor axis minor axis mass fraction solute concentration in the liquid phase equilibrium liquid concentration specific heat Darcy drag force coefficient effective solute diffusion coefficient kinetic energy of the electron total free energy of the system energy distribution in the transverse direction intermediate variable gradient energy density
383
384
Lu Wang et al.
Multiscale modeling applied to additive manufacturing

fgrain    grain orientation difference component in local free energy density
floc      local free energy density
fphase    phase difference component in local free energy density
fj        volume fraction of grain set j
Fs        solid fraction in the FVM
Fs        solid fraction in the parent grid
fs        solid fraction in the child grid
Fv        volume fraction
g         dendrite index in the related cell
g(x, y)   radial in-plane energy distribution of the cross-section of the beam
G         intermediate variable
h         penetration depth
H         intermediate variable
h0        estimated molten pool depth
hc        heat convection coefficient
I         specific internal energy
Ji        mean ionization potential of element i
K         constant
kc        thermal conductivity
kI        extinction coefficient
L         distance between two elastic collisions
Lg        kinetic coefficient related to the grain boundary
Lp        kinetic coefficient related to the interfacial mobility
Lm        specific latent heat of melting
Lpq       mobility of the nonconserved field variable
Lv,i      specific latent heat of evaporation for the ith component
m         intermediate variable
mg        precoefficient in fgrain
Mi        molar mass of ith component
mp        precoefficient in fphase
Mij       diffusivity of the species
mloss     mass loss
Ma        Mach number at y = y3
N         power concentration coefficient
N1        the number of the slip system
N2        the number of the grain set
nI        refractive index
ni        molar fraction of ith component
P         power of heat source
p         pressure
Pb        saturated pressure
Pd        absorbed energy
ps        pressure on the free surface
Pe,i      saturated pressure of ith component
Pr,i      reference pressure in Clausius-Clapeyron relation
Prec      recoil pressure by the metal evaporation
qr        residual stiffness coefficient
R         specific gas constant
r         distance to the heat source
rb        radius of the heat source
Rmol      ideal gas constant
S         cell type in the parent grid
s         cell type in the child grid
T         temperature
t         time
T3        vapor temperature
Te        saturated vapor temperature
Tl        liquidus temperature of material
T0        ambient temperature
Te,i      saturation temperature of ith component
Tr,i      reference temperature in Clausius-Clapeyron relation
Tref      reference temperature
u3        vapor velocity
xs        x-coordinate of beam central axis
ys        y-coordinate of beam central axis
z0        the location with the highest-energy density
CHAPTER TEN
Multiscale modeling of supramolecular assemblies of 2D materials

Yangchao Liao (a), Luis Alberto Ruiz Pestana (b), and Wenjie Xia (a,c)

(a) Department of Civil, Construction and Environmental Engineering, North Dakota State University, Fargo, ND, United States
(b) Department of Civil and Architectural Engineering, University of Miami, Coral Gables, FL, United States
(c) Materials and Nanotechnology, North Dakota State University, Fargo, ND, United States
Contents
1. Introduction
2. Coarse-graining modeling methods for 2D materials
   2.1 Overview of the coarse-graining technique
   2.2 Coarse-graining model of graphene
   2.3 Coarse-graining model of graphene oxide
   2.4 Mesoscale model of graphene
   2.5 Coarse-graining model of multilayer graphene
   2.6 Summary
3. Multiscale modeling of crumpled sheet and supramolecular assemblies
   3.1 Crumpled graphene
   3.2 Nanostructured supramolecular assemblies
   3.3 Multilayer graphene assemblies
   3.4 Multilayer graphene-reinforced nanocomposites
4. Conclusion and future outlook
References
Fundamentals of Multiscale Modeling of Structural Materials, https://doi.org/10.1016/B978-0-12-823021-3.00002-6. Copyright © 2023 Elsevier Inc. All rights reserved.

1. Introduction

A nanomaterial is one that displays a length scale of approximately 1 nm in at least one dimension. Thanks to the quantum size effect (i.e., the effect of particle size on the electron energy levels near the Fermi level) or the surface effect (i.e., the effect of particle size on the specific surface area), nanomaterials can display remarkable properties that are simply out of reach in the bulk, making them promising for a wide range of applications, such as super- or semiconductors, wave absorption,
and catalysis. According to the number of dimensions that are constrained, nanomaterials can be categorized as zero-dimensional (0D) (constrained in three dimensions, e.g., fullerenes), one-dimensional (1D) (constrained in two dimensions, e.g., carbon nanotubes), two-dimensional (2D) (constrained in one dimension, e.g., graphene), and three-dimensional (3D) (unconstrained, e.g., graphite) materials. Specifically, a 2D material refers to a material consisting of a single layer of atoms [1]. The advantages of 2D materials over nanomaterials of other dimensionalities are their superior in-plane chemical bond strength and transparency, which promise applications in fields such as wearable smart devices and flexible energy storage devices, and their tunable structure and composition, from which a diversity of properties can be derived [2,3]. In 2004, Novoselov et al. isolated graphene by mechanically peeling it from graphite with adhesive tape, a discovery that earned Konstantin Novoselov and Andre Geim the Nobel Prize in Physics in 2010. The unique physical, chemical, and electronic properties of graphene helped open the field of 2D materials [4]. Since then, a number of 2D materials have emerged, such as MXenes, molybdenum disulfide, tungsten disulfide, hexagonal boron nitride, and layered double hydroxides, which have contributed to maturing the field. Currently, 2D materials are an active area of research in condensed matter physics, materials science and engineering, chemistry, and nanotechnology [5]. As an example of the remarkable properties of 2D materials, graphene is the thinnest known material and combines excellent mechanical and electrical properties, and many researchers are using graphene to develop the next generation of electronic components that are thinner and conduct electricity faster.
Although the discovery of graphene paved the way for the development of 2D materials, the use of graphene still faces great challenges in practical applications due to its complex and expensive manufacturing process, as well as the lack of bandgap, which reduces the flow rate of electrons and thus the transmission speed when graphene is compounded with other materials [6]. Supramolecular assemblies of 2D nanomaterials, which are stabilized by covalent or noncovalent interactions, are fabricated using simple synthesis methods that allow adjustable structures and good compatibility with other materials, which broadens the applicability of 2D materials [7]. Notably, supramolecular assemblies of 2D materials have excellent prospects for applications in structural materials and smart manufacturing due to their unique physical properties, such as light weight, flexibility, high adjustability, and
wide adaptability (e.g., the network density and pore size change with environmental temperature [8]). Over the last decade, the optimal design, structural characterization, and study of the properties of supramolecular 2D materials have become increasingly popular in the field of supramolecular self-assembly [8,9]. By designing the structure of the building blocks, controlling their size and dimensions, and selecting the self-assembly approach, diverse supramolecular materials with superior properties and functionality can be achieved [10]. Although a large number of supramolecular assemblies of 2D materials have been constructed and fabricated via various kinds of synthetic methods, the complex behaviors of supramolecular assemblies that exhibit large-area, ultrathin, and freestanding properties still need to be further explored. For modeling such complex multiscale material systems, three aspects should be considered, namely, computational accuracy, efficiency, and the size of the simulated system. To enable a comprehensive understanding of the mechanical properties of graphene supramolecular assemblies while overcoming the problems associated with spatiotemporal scales, researchers have successively developed different multiscale coarse-graining (CG) models of graphene and 2D materials [11–14]. These models aggregate carbon atoms into beads that interact through an effective CG force field. CG models not only allow the simulation of mesoscale physical processes but also retain the molecular details of the system. Building upon prior studies, this chapter gives an overview of recent developments and applications of multiscale modeling of 2D materials and their supramolecular assemblies, ranging from crumpled graphene to multilayer assemblies and nanocomposites.
2. Coarse-graining modeling methods for 2D materials

As explained in detail in Chapter 3, multiscale CG models can be classified into two major categories: generic bead-spring CG models and chemistry-specific CG models [15,16]. Specifically, the generic bead-spring CG models can qualitatively represent more than one type or class of material, whose CG and all-atomistic (AA) systems are related by force field parameters of simplified interaction terms in reduced units. By contrast, chemistry-specific CG models retain most of the essential features of the AA model for materials and thus have higher predictive power. Chemistry-specific CG models are usually able to quantitatively predict mechanical, dynamic,
and thermal properties of specific materials, thus maintaining good agreement with AA models and experiments. In this section, we provide an overview of the coarse-graining techniques that have been applied to develop chemistry-specific multiscale models of graphene-based systems.
2.1 Overview of the coarse-graining technique

The development of chemistry-specific CG models can be broadly divided into the following three steps. The first step is to determine the mapping of the CG structure from the underlying atomistic one, which defines the pseudo-"bond" and topology of the CG model. For graphene, mapping schemes commonly used, which can be commensurate with the hexagonal symmetry, are 4-to-1 (i.e., four carbon atoms represented by one CG bead) [11,17,18], 16-to-1 [19], 64-to-1, and 256-to-1 [14]; the lattice structures of the CG models of graphene could vary depending on the mapping scheme as well, with hexagonal [11] and square [13] lattices being the most common. Generally, the higher the degree of coarse-graining, the larger the scale of the system that can be simulated but trading off model accuracy. After determining the basic structure of the CG model, parameterization of the CG force field is required. The parameters of the force field describe the interactions between CG beads, which in turn control the in-plane elasticity, out-of-plane bending stiffness, and adhesion of the CG model. AA simulations and/or experimental results are usually used to derive and optimize the parameters of the CG force field [11]. In particular, as discussed in Chapter 3, the "strain energy conservation" method is a systematic approach to calibrating the CG force field by enforcing the same strain energy between the AA model and its CG analog under (small) deformation. The elastic and interfacial properties of the system predicted by the CG model are therefore consistent with those obtained from its AA simulation.
Taking graphene as an example, four mechanical tests are usually performed to calibrate the force field parameters of the CG model: uniaxial tension, in-plane shear, out-of-plane bending, and bilayer graphene assembly tests, which determine the elastic modulus, shear modulus, bending stiffness, and interfacial adhesion energy of the CG model, respectively. The final step is to validate the physical performance of the CG model and refine the derived force field parameters. In the following, we introduce several CG models of graphene and relevant systems along with their structural features and the derivation of their force field parameters.
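To make the cost-accuracy trade-off of the N-to-1 mapping schemes above concrete, the bookkeeping can be sketched in a few lines of Python (an illustration on our part; the 1 μm × 1 μm atom count assumes graphene's areal density of roughly 38 atoms/nm²):

```python
# Bookkeeping for N-to-1 mapping schemes: each CG bead lumps N carbon atoms,
# so the bead mass grows with N while the particle count (and hence the
# force-evaluation cost) drops roughly N-fold.
M_CARBON = 12.011  # atomic mass of carbon (amu)

def bead_mass(n_to_1):
    """Mass (amu) of one CG bead representing n_to_1 carbon atoms."""
    return n_to_1 * M_CARBON

def bead_count(n_atoms, n_to_1):
    """Number of CG beads needed to represent n_atoms carbon atoms."""
    return n_atoms // n_to_1

# A 1 um x 1 um graphene sheet holds ~3.8e7 carbon atoms
# (areal density ~38 atoms/nm^2).
n_atoms = 38_000_000
for scheme in (4, 16, 64, 256):
    print(f"{scheme}-to-1: bead mass = {bead_mass(scheme):.1f} amu, "
          f"beads = {bead_count(n_atoms, scheme):,}")
```

The print loop walks through the four mapping schemes cited in the text, showing why a 256-to-1 model can reach micrometer-scale sheets that are out of reach for AA simulations.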
2.2 Coarse-graining model of graphene

In a previous study, we reported a CG model of graphene developed using the strain energy conservation approach [11], which was able to reproduce the mechanical properties of the AA model at a significantly reduced computational cost. The CG model of graphene consisted of a hexagonal lattice of CG beads similar to the AA system, as shown in Fig. 1a, where each CG bead represents four carbon atoms of the graphene atomic lattice. This preserved hexagonal lattice allows the CG model to capture the asymmetry in the interlayer shear response and nonlinear large deformation in both armchair and zigzag directions. Moreover, the CG force field of the model includes contributions of bond (Vb), angle (Va), dihedral (Vd), and nonbonded (Vnb) interactions to the total potential energy of the system (Fig. 1a), which can be written as follows:

$$
\begin{cases}
V_b(d) = D_0\left[1 - e^{-\alpha(d - d_0)}\right]^2 \\
V_a(\theta) = k_\theta\,(\theta - \theta_0)^2 \\
V_d(\phi) = k_\phi\left[1 - \cos(2\phi)\right] \\
V_{nb}(r) = 4\varepsilon_{LJ}\left[\left(\dfrac{\sigma_{LJ}}{r}\right)^{12} - \left(\dfrac{\sigma_{LJ}}{r}\right)^{6}\right], \quad \text{for } r < r_{cut}
\end{cases}
\tag{1}
$$
where D0, α, and d0 are the energy well depth, stiffness parameter, and equilibrium bond length of the bond interaction, respectively; kθ and θ0 are the spring constant and equilibrium angle of the angle interaction, respectively;
Fig. 1 (a) Schematic illustration of the CG-MD model of graphene. Each bead (yellow) in the CG model represents four carbon atoms (gray), i.e., a 4-to-1 mapping scheme; there are four contributions of the CG force field to the potential energy of the system, which are bonded (i.e., bond, angle, and dihedral) and nonbonded interactions. (b) Calibration of the CG force field, where the out-of-plane bending stiffness per unit width (M) is a function of the dihedral spring constant (kϕ), and the adhesion energy per surface area (Ua) is a function of the Lennard-Jones (LJ) potential well depth (εLJ).
kϕ is the dihedral spring constant; r is the distance between a pair of beads, smaller than the cutoff rcut = 12 Å; εLJ is the Lennard-Jones (LJ) potential well depth that controls the adhesive interaction strength; and σLJ is a length scale parameter related to the equilibrium distance of two nonbonded beads. Note that d0 = 2.8 Å, θ0 = 120 degrees, and σLJ = 3.46 Å of the eight independent parameters mentioned earlier are directly related to the geometric features of the CG model and can be predetermined. The use of the strain energy conservation method allows the elastic and fracture properties of the CG model, namely the in-plane Young's modulus (E), shear modulus (S), bond failure strain (εmax), out-of-plane bending stiffness per unit width (M), and adhesion energy per surface area (Ua), to be consistent with the target values obtained from AA simulations and experiments. The reported target values of the properties used to calibrate the CG force field parameters are E = 1 TPa, S = 450 GPa, M = 1.6 eV, εmax = 16%, and Ua = 260 mJ/m², respectively, and these target values are in agreement with the relevant computational and experimental findings [13,20–22]. With the aforementioned target values, we can derive the force field parameters kθ, α, and D0 that characterize the in-plane properties of the CG model using the following relations:

$$
\begin{cases}
k_b = \dfrac{\sqrt{3}\,\Delta z_{eq} E S}{4S - E} \\[2ex]
k_\theta = \dfrac{\sqrt{3}\,d_0^2\,\Delta z_{eq} E S}{6(3E - 4S)} \\[2ex]
\alpha = \dfrac{\ln 2}{\varepsilon_{max} d_0} \\[2ex]
D_0 = \dfrac{k_b}{\alpha^2}
\end{cases}
\tag{2}
$$
where kb and kθ are the force constants of the harmonic bonds and angles of the atomistic lattice, and Δzeq is the interlayer equilibrium spacing (Δzeq = 3.35 Å). Then, as shown in Fig. 1b, we simulated and tested the strain energy of the curved sheet to determine the parameter kϕ, which controls the out-of-plane response, and the adhesion energy between the bilayer sheet models to determine the parameter εLJ, which controls the adhesion properties of the sheet. The corresponding kϕ and εLJ were determined to match the target values of M = 1.6 eV and Ua = 260 mJ/m², respectively. The derived parameters of the CG force field are listed in Table 1.
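As a quick numerical check (our own, not part of the original study), Eq. (2) can be evaluated with the target values above; the GPa-and-angstrom-to-kcal/mol conversion is simple unit bookkeeping, and the results land close to the calibrated parameters reported in Table 1:

```python
import math

# Eq. (2) evaluated with the target properties quoted above (E = 1 TPa,
# S = 450 GPa, eps_max = 16%) and the predetermined geometry (d0 = 2.8 A,
# dz_eq = 3.35 A). Working in GPa and angstroms: 1 GPa*A^3 ~ 0.1439 kcal/mol.
GPA_A3_TO_KCALMOL = 1e-21 * 6.02214e23 / 4184.0  # J -> kcal/mol bookkeeping

E = 1000.0      # in-plane Young's modulus (GPa)
S = 450.0       # shear modulus (GPa)
eps_max = 0.16  # bond failure strain
d0 = 2.8        # equilibrium CG bond length (A)
dz_eq = 3.35    # interlayer equilibrium spacing (A)

# Eq. (2), term by term (k_b in kcal/mol/A^2, k_theta in kcal/mol):
k_b = math.sqrt(3) * dz_eq * E * S / (4 * S - E) * GPA_A3_TO_KCALMOL
k_theta = (math.sqrt(3) * d0**2 * dz_eq * E * S
           / (6 * (3 * E - 4 * S)) * GPA_A3_TO_KCALMOL)
alpha = math.log(2) / (eps_max * d0)  # 1/A
D0 = k_b / alpha**2                   # kcal/mol

print(f"alpha = {alpha:.2f} 1/A, k_theta = {k_theta:.1f} kcal/mol, "
      f"D0 = {D0:.1f} kcal/mol")
# Values land close to Table 1: alpha ~ 1.55 1/A, k_theta ~ 409 kcal/mol,
# D0 ~ 196 kcal/mol.
```

The agreement with Table 1 to within a fraction of a percent is a useful sanity check that the reconstructed relations and units are self-consistent.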
Table 1 Summary of parameters of the CG model of graphene.

Parameter   Value
d0          2.8 Å
θ0          120 degrees
D0          196.38 kcal/mol
α           1.55 Å⁻¹
kθ          409.4 kcal/mol
kϕ          4.15 kcal/mol
εLJ         0.82 kcal/mol
σLJ         3.46 Å
dcut        3.25 Å
rcut        12 Å
It has been demonstrated that this CG model of graphene can quantitatively capture the complex mechanical behaviors of graphene, including nonlinear elasticity, buckling under large shear deformation, interlayer shear response, and anisotropy along the zigzag and armchair directions [11,23,24]. The capability of this CG model to reach experimentally relevant spatial dimensions and temporal scales offers great prospects for exploring hierarchical systems, such as supramolecular 2D assemblies, and experimental observations in an efficient and inexpensive manner.
2.3 Coarse-graining model of graphene oxide

A CG model of graphene oxide can be developed using a similar coarse-graining approach as described earlier [12]. Fig. 2 shows a schematic representation of graphene oxide from an all-atomistic structure (Fig. 2a) to a CG structure (Fig. 2b). Specifically, the CG model includes three types of CG beads, namely, nonoxidized, hydroxy-oxidized, and epoxide-oxidized
Fig. 2 (a) Schematic of the all-atomistic (AA) structure of graphene oxide. (b) Schematic of the CG model of graphene oxide, mapping the all-atomistic structure to the CG structure, which includes three types of CG beads: nonoxidized (yellow), hydroxy-oxidized (green), and epoxide-oxidized (magenta) beads.
beads. By varying the percentage of each type of bead in the CG model, CG graphene oxide structures with different oxidation levels and compositions can be generated. The CG model retains a hexagonal honeycomb structure, which can efficiently capture interactions from different functionalized groups. The computational speed of the CG model is nearly 8000 times faster than density-functional-based tight-binding (DFTB) calculations of equivalent size. The force field of the CG graphene oxide model involves bond, angle, and nonbonded interactions. The strain energy conservation approach based on DFTB calculations was used to calibrate the force field parameters of the CG model, with the DFTB results in the armchair direction for the three extreme cases chosen as a reference. In brief, the uniaxial stretching results of pristine graphene were utilized to calibrate the parameters of the bond and angle interactions between the nonoxidized beads; the parameters of the bond and angle interactions between hydroxy-oxidized beads and of their interaction with nonoxidized beads were calibrated with the maximally oxidized hydroxyl-rich composition (i.e., 72% degree of oxidation); and the parameters of the remaining bond and angle interactions were calibrated with the maximally oxidized epoxide-rich case (i.e., 80% degree of oxidation). It should be noted that the dihedral term was not considered in this CG model, which greatly simplifies the CG force field and improves the computational efficiency. More details of the CG model development can be found in the earlier study [12]. Despite the simplicity of the force field, it has been shown that this CG model of graphene oxide can capture relatively well the interlayer adhesion energy, Young's modulus, and uniaxial tensile strength of graphene oxide for varying degrees of oxidation and epoxide-to-hydroxyl functionalization ratios [12].
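How a bead composition at a given oxidation level might be generated can be sketched as follows (purely illustrative on our part; the function and its parameters are hypothetical and not the published parameterization):

```python
import random

# Hypothetical bead-type generator for a CG graphene oxide sheet: given a
# target degree of oxidation and the epoxide share among oxidized sites,
# label each bead as nonoxidized, hydroxy-oxidized, or epoxide-oxidized.
def assign_bead_types(n_beads, oxidation, epoxide_fraction, seed=0):
    """oxidation: fraction of beads that are oxidized (e.g., 0.72);
    epoxide_fraction: share of oxidized beads that are epoxide-type."""
    rng = random.Random(seed)
    types = []
    for _ in range(n_beads):
        if rng.random() < oxidation:
            types.append("epoxide" if rng.random() < epoxide_fraction
                         else "hydroxyl")
        else:
            types.append("nonoxidized")
    return types

# Hydroxyl-rich extreme case from the calibration above: 72% oxidation,
# no epoxide beads.
types = assign_bead_types(10_000, oxidation=0.72, epoxide_fraction=0.0)
print(types.count("hydroxyl") / len(types))  # ~0.72
```

In a real workflow the resulting type list would be combined with the hexagonal lattice topology to write a simulation input; here it only illustrates how oxidation level and epoxide-to-hydroxyl ratio enter as two independent composition knobs.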
Furthermore, nanoindentation simulations of monolayer CG graphene oxide showed that systems with different oxidation levels and oxidation types exhibit fracture behavior consistent with that observed experimentally. Overall, the CG models of graphene and graphene oxide developed using the strain energy conservation approach preserve a honeycomb structure of graphene and effectively capture its complex mechanical behavior, providing a platform for studying supramolecular 2D assemblies.
2.4 Mesoscale model of graphene

Cranford et al. have reported a mesoscale graphene model with a larger degree of coarse-graining than the aforementioned CG model, and such
mesoscale model largely extends the accessible time and length scales, enabling the simulation of large-scale graphene structures [13]. The authors applied a so-called "fine-trains-coarse" framework to generate a mesoscale graphene model derived entirely from atomistic calculations. First, a complete series of mechanical tests was performed by AA-MD simulations to determine the simplified parameters of the mesoscale graphene model. These mechanical tests included uniaxial tensile loading, simple shear loading, out-of-plane bending, and stacking of bilayer sheets, which were used to determine the Young's modulus, shear modulus, bending stiffness per unit width, and adhesion energy per unit area of the graphene, respectively. The results of the AA simulations are listed in Table 2. The mesoscale model of graphene was then built on a "super" square lattice structure, as shown in Fig. 3a, with each bead representing a 25 Å × 25 Å planar section of AA graphene. The force field of the mesoscale model includes contributions of bond (VT), angle (Vφ and Vθ), and nonbonded (Vadhesion) interactions to the total potential energy of the system,

Table 2 Summary of results from full atomistic simulations.

Parameter                 Value
Young's modulus           968 GPa
Ultimate tensile strain   0.23
Shear modulus             224 GPa
Bending modulus           2.1 eV (1 layer), 130 eV (2 layers), 1199 eV (4 layers), 13,523 eV (8 layers)
Surface energy            260 mJ/m²
Equilibrium spacing       3.35 Å
Fig. 3 Schematic of the (a) mesoscale graphene model and (b) derived mechanical potentials, where the mesoscale model has a square lattice and each bead (yellow) represents a 25 Å × 25 Å planar section of AA graphene (gray).
which are employed to describe the axial stretching, shear deformation and out-of-plane bending, and adhesion properties of the model, respectively. In particular, VT, Vφ, Vθ, and Vadhesion can be written as follows:

$$
\begin{cases}
V_T(r) = \dfrac{k_T}{2}(r - r_0)^2 \\[1.5ex]
V_\varphi(\varphi) = \dfrac{k_\varphi}{2}(\varphi - \varphi_0)^2 \\[1.5ex]
V_\theta(\theta) = \dfrac{k_\theta}{2}(\theta - \theta_0)^2 \\[1.5ex]
V_{adhesion}(r) = 4\varepsilon_{LJ}\left[\left(\dfrac{\sigma_{LJ}}{r}\right)^{12} - \left(\dfrac{\sigma_{LJ}}{r}\right)^{6}\right]
\end{cases}
\tag{3}
$$

where r0, φ0, θ0, and σLJ are four parameters that can be determined from the equilibrium conditions of the model, and kT, kφ, kθ, and εLJ are four parameters that can be determined by using the strain energy conservation method. Geometric parameters for the tensile stretching (bond) potential, out-of-plane bending (one-way), and in-plane distortion (shear) are shown in Fig. 3b, and the parameters of the mesoscale model are listed in Table 3.
2.5 Coarse-graining model of multilayer graphene Recently, Liu et al. reported a CG model of multilayer graphene (CG), where one CG layer can effectively represent varying numbers of stacked Table 3 Summary of parameters of the mesoscale model of graphene. Parameter 1 layer 2 layer 4 layer 8 layer Units
r0 φ0 θ0 σ LJ kT kφ kθ εLJ
25 90 180 2.98 470 16,870 144.9 473
25 90 180 5.96 930 33,740 8970 473
25 90 180 11.92 1860 67,480 82,731 473
25 90 180 23.84 3720 134,960 933,087 473
˚ A degrees degrees ˚ A kcal/mol/rad2 kcal/mol/rad2 kcal/mol/rad2 kcal/mol
Supramolecular assemblies of 2D materials
Fig. 4 Illustration of the coarse-graining approach applied in the thickness direction of graphene multilayers, where a CG layer (yellow) represents N AA graphene layers (gray).
AA graphene layers [14]. As shown in Fig. 4, their CG model preserves the hexagonal lattice of graphene for in-plane coarse-graining to maintain the properties arising from the hexagonal symmetry (e.g., anisotropic mechanical behavior along the zigzag and armchair directions). The CG force field captures bond, angle, and dihedral interactions. Note that the potential functions describing the bond (Vb(d)), angle (Va(θ)), and dihedral (Vd(ϕ)) interactions are given in Eq. (1). In the direction perpendicular to the plane, one CG layer represents several AA graphene layers, and the adhesion between CG layers is captured by nonbonded interactions. Specifically, the Mie potential, a versatile form of the standard LJ potential, is used to govern the nonbonded interactions between CG beads [28,29]:

$$
V_{nb}(r) = C\varepsilon_{LJ}\left[\left(\frac{\sigma_{LJ}}{r}\right)^{R} - \left(\frac{\sigma_{LJ}}{r}\right)^{A}\right], \quad \text{for } r < r_{cut}
\tag{4}
$$
where R and A are the repulsion and attraction indices, respectively; the constant $C = \frac{R}{R-A}\left(\frac{R}{A}\right)^{A/(R-A)}$ ensures that the potential well depth equals εLJ. Eq. (4) becomes the conventional 12-6 LJ potential (see Eq. 1) for the special case of R = 12 and A = 6. Then, the adhesion energy per unit area between two CG layers can be captured by the Mie potential based on the following relationship:

$$
U_a(h) = 2\pi\rho^{2} C\varepsilon_{LJ}\left[\frac{\sigma_{LJ}^{R}}{(R-2)\,h^{R-2}} - \frac{\sigma_{LJ}^{A}}{(A-2)\,h^{A-2}}\right]
\tag{5}
$$
where h is the gap between two CG layers, and $\rho = 4/\left(3\sqrt{3}\,d_0^2\right)$ is the area density of CG beads. Table 4 summarizes the derived parameters of the CG model of MLG with different numbers of layers (N) ranging from N = 1 to 4.
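The Mie prefactor C of Eq. (4) is straightforward to verify numerically; the sketch below (our own helper names, with the N = 1 parameter column of Table 4) confirms that C reduces to the familiar LJ factor of 4 for R = 12, A = 6 and that the well depth equals εLJ:

```python
# Mie nonbonded potential of Eq. (4) and its well-depth-preserving prefactor.
def mie_prefactor(R, A):
    # C = [R/(R-A)] * (R/A)^(A/(R-A)) keeps the well depth equal to eps_LJ.
    return (R / (R - A)) * (R / A) ** (A / (R - A))

def V_nb(r, eps_LJ, sigma_LJ, R, A):
    return mie_prefactor(R, A) * eps_LJ * (
        (sigma_LJ / r) ** R - (sigma_LJ / r) ** A
    )

# Special case R = 12, A = 6: the prefactor is the standard LJ factor of 4.
C_12_6 = mie_prefactor(12, 6)

# Well-depth check with the N = 1 parameters of Table 4 (kcal/mol, Å).
eps, sigma, R, A = 0.26, 3.4, 12, 6
r_min = (R / A) ** (1 / (R - A)) * sigma  # location of the potential minimum
V_min = V_nb(r_min, eps, sigma, R, A)     # should equal -eps
```

The minimum location $(R/A)^{1/(R-A)}\sigma_{LJ}$ follows from setting $dV_{nb}/dr = 0$.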
Table 4 Summary of parameters of the CG model of MLG sheets.

| Parameterᵃ | Units | N = 1 | N = 2 | N = 3 | N = 4 |
| m | g/mol | 24.49 | 195.92 | 459.18 | 881.63 |
| d0 | Å | 2 | 4 | 5 | 6 |
| h0 | Å | 3.4 | 6.8 | 10.2 | 13.6 |
| θ0 | degrees | 120 | 120 | 120 | 120 |
| D0 | kcal/mol | 196.32 | 1570.58 | 3681.04 | 7067.60 |
| α | Å⁻¹ | 1.51 | 0.75 | 0.60 | 0.50 |
| kθ | kcal/mol/rad² | 222.21 | 1777.70 | 4166.48 | 7999.65 |
| kϕ | kcal/mol | 2.28 | 4.55 | 6.83 | 9.11 |
| εLJ | kcal/mol | 0.26 | 2.63 | 4.42 | 6.97 |
| σLJ | Å | 3.4 | 6.8 | 10.2 | 13.6 |
| R | – | 12 | 16 | 24 | 32 |
| A | – | 6 | 12 | 16 | 20 |
| dcut | Å | 3.4 | 6.8 | 10.2 | 13.6 |
| rcut | Å | 13.4 | 16.8 | 20.2 | 23.6 |

ᵃ Here, m is the mass of each CG bead; h0 is the equilibrium distance between two CG layers.
To validate the CG models, Liu et al. [14] constructed a large-scale self-assembled graphene structure and found that the CG model of MLG exhibits structural morphology and compression behavior consistent with those reported previously, while lowering the computational cost [30].
2.6 Summary
In summary, in this section, we have reviewed four CG (mesoscale) models for graphene and graphene oxide with hexagonal lattices, square lattices, and multilayer systems. Each model has its own advantages and disadvantages and thus a distinct domain of application. For example, the CG model with a hexagonal lattice and 4-to-1 mapping scheme has relatively higher computational accuracy than the others, whereas the ultra-coarse-graining model with a square lattice is less computationally expensive. The proper choice of CG model should be made based on the specific supramolecular material system of interest and consideration of the balance between modeling accuracy and efficiency.
3. Multiscale modeling of crumpled sheets and supramolecular assemblies
In this section, we provide an overview of applications of multiscale modeling to study the structural and mechanical behaviors of graphene and its supramolecular assemblies, ranging from crumpled graphene and
macromolecular sheets to foams and nanocomposites, which will be discussed next.
3.1 Crumpled graphene
When subjected to compression, a 2D sheet (e.g., a thin sheet of paper) will undergo random deformation and form a 3D crumpled structure. Studies on crumpled paper balls have revealed that they have a highly complex internal structure, consisting of a random network of ridges and facets with different densities [31–33]. Moreover, the crumpling of sheets is a complicated phenomenon, featuring different crumpled structures depending on material and surface properties and deformation modes. Despite this complexity, the crumpled structure typically consists of four fundamental building blocks: the bend, the fold, the developable cone, and the stretching ridge formed between two developable cones [34]. Lightweight crumpled matter has drawn considerable interest because of its remarkable compressive strength and impact resistance. Because of their atomically thin nature, graphene and other 2D materials often exhibit significant wrinkles and crumples in real situations due to their low out-of-plane bending stiffness [35,36]. In addition, edge stresses, defects, and reconstructed defects in graphene sheets, as well as their interaction with the substrate, can lead to the formation of wrinkles and crumples [37–39]. Interestingly, graphene in a crumpled form has excellent antiaggregation, energy storage, and compression resistance properties, making it an ideal candidate for many applications. In the following subsections, we present recent multiscale modeling studies on the crumpling behaviors of graphene and macromolecular sheets to better understand their structure-property relationships.

3.1.1 Size effects on the crumpling behaviors of graphene
Recently, we have employed CG-MD simulations to systematically investigate the structural behavior of square graphene sheets with varying edge lengths during the crumpling process [40]. As described in Section 2.2, the employed CG model allows for simulations of graphene sheets with edge sizes up to 200 nm.
The graphene model is initially placed at the center of the simulation box (whose dimensions are much larger than the size of the sheet model). The energy of the system is first minimized using an iterative conjugate gradient algorithm, and then the system is equilibrated in the NVT ensemble. The crumpling simulation of the sheet is carried out after the total potential energy of the system saturates to an almost constant value [40].
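The shrinking-sphere confinement used to drive crumpling (illustrated in Fig. 5a and described below) can be sketched in a few lines. The harmonic wall form, the linear shrink schedule, and all names here are our own illustrative assumptions, not the actual force field or scheduler of Ref. [40]:

```python
# Toy sketch of spherical confinement for a crumpling run.
# Assumptions (ours): a harmonic repulsive wall and a linear shrink schedule.
def wall_force(dist_from_center, R_c, k_wall=10.0, cutoff=2.0):
    """Inward radial force on a bead once it comes within `cutoff` of the wall."""
    gap = R_c - dist_from_center          # distance from bead to the sphere wall
    if gap >= cutoff:
        return 0.0                        # outside the wall's influence
    return -k_wall * (cutoff - gap)       # pushes the bead back toward the center

def radius_schedule(R_c0, rho_c_target, n_steps):
    """Linearly shrink the sphere from R_c0 down to rho_c_target * R_c0."""
    R_final = rho_c_target * R_c0
    return [R_c0 + (R_final - R_c0) * i / (n_steps - 1) for i in range(n_steps)]

# Shrink to a compaction ratio of 0.4 (the highly crumpled regime, see below).
radii = radius_schedule(R_c0=100.0, rho_c_target=0.4, n_steps=5)
```

In practice the shrink rate must be slow enough that the sheet stays near equilibrium at each step; the schedule above only conveys the bookkeeping.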
To mimic the crumpling process of graphene sheets in experiments (e.g., the aerosol evaporation method [41]), we applied a confining sphere with a time-varying volume that encompasses the sheet model, as illustrated in Fig. 5a. The boundary of the confining sphere imposes a repulsive force on the model within a certain cutoff distance, and the crumpling simulation is achieved by varying the volume of the confining sphere. Here, the compaction ratio ρc = Rc/Rc0 is used to characterize the degree of crumpling of the sheet, where Rc0 and Rc are the radii of the confining sphere at the initial state and during the crumpling process, respectively. The crumpled model has the same density as bulk graphite (i.e., 2.267 g/cm³) at the end of the crumpling simulation. Fig. 5b shows the total potential energy increment per area ΔPEtotal of the system during crumpling. It can be observed that, compared
Fig. 5 (a) Schematic of the MD simulation of the crumpling process. The square graphene sheet with side length L is confined by a spherical volume, and Rc0 and Rc are the radii of the confining sphere in the initial state and during the crumpling process, respectively. Evolution of the (b) total potential energy increment per area (ΔPEtotal) and (c) relative shape anisotropy (κ²) as a function of the compaction ratio (ρc) for different L. The colored shaded areas represent the standard deviation from five independent crumpling simulations of each specific graphene sheet. (d) Three different regimes that characterize the shape and geometry of crumpled graphene as a function of ρc and L. The solid and dashed lines mark the transitions from regime I to regime II and from regime II to regime III, respectively.
with the small-sized sheet (e.g., L = 20 nm), the ΔPEtotal of the large-sized (e.g., L = 199.8 nm) graphene sheet exhibits a deeper valley in the middle of the crumpling process while having a smaller value at the end of the crumpling simulation. These observations suggest that the self-adhering and self-folding effects are stronger for larger graphene sheets during the crumpling process, and that larger sheets are more easily compressed into spherical structures than smaller ones. In the four cases shown in Fig. 5b, the well of ΔPEtotal is not obvious for the smallest sheet size L = 20 nm, which can be attributed to the fact that elastic strain energy plays a dominant role in the crumpling process of smaller graphene sheets. Moreover, the fractal dimension of crumpled graphene calculated based on the conformation of the sheet in the quasiequilibrium state is 2.395, which is consistent with the fractal dimension obtained from AA-MD simulations [42]. Note that the quasiequilibrium state is the crumpled state at which ΔPEtotal reaches a local minimum, where the competing effects of elastic strain energy and adhesion energy are balanced. Interestingly, three different crumpling regimes of graphene sheets have been identified by analyzing the relative shape anisotropy κ² of the sheet during the crumpling process (Fig. 5c): the less crumpled state (regime I, ρc > 1/√2), the intermediate state (regime II, 0.4 ≲ ρc < 1/√2), and the highly crumpled state (regime III, ρc ≲ 0.4). Note that κ² = 0.25 for a flat sheet and κ² = 0 for a spherical structure.
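The two limiting values of κ² quoted above follow directly from the standard definition of the relative shape anisotropy in terms of the gyration-tensor eigenvalues; a minimal sketch (our own helper name):

```python
def shape_anisotropy(lam1, lam2, lam3):
    """Relative shape anisotropy κ² from the gyration-tensor eigenvalues:
    κ² = 1 − 3(λ1λ2 + λ2λ3 + λ3λ1) / (λ1 + λ2 + λ3)²."""
    tr = lam1 + lam2 + lam3
    return 1.0 - 3.0 * (lam1 * lam2 + lam2 * lam3 + lam3 * lam1) / tr ** 2

# Limiting cases quoted in the text:
kappa_flat = shape_anisotropy(1.0, 1.0, 0.0)    # flat square sheet (λ3 = 0)
kappa_sphere = shape_anisotropy(1.0, 1.0, 1.0)  # spherical ball (λ1 = λ2 = λ3)
```

With two equal in-plane eigenvalues and a vanishing out-of-plane one, the formula yields 0.25 for a flat square sheet; with three equal eigenvalues it yields 0 for a sphere, matching the two limits used to interpret Fig. 5c.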
In short, regime I is associated with edge bending of the sheet, and a larger κ² means that the sheet retains a larger planar configuration within this regime. The adhesion of graphene dominates the crumpling process within regime II, with κ² gradually decreasing; a small peak in κ² occurs at the quasiequilibrium state for larger sheets, implying that the sheet forms a relatively larger planar configuration. Thereafter, the sheet is further compressed into a spherical crumpled structure (regime III), and κ² decreases to zero. Fig. 5d further summarizes the three regimes for graphene sheets with different sheet sizes and compaction ratios. The transition from regime I to II occurs at ρc = 1/√2, while the transition from regime II to III depends on the sheet size, roughly occurring at the quasiequilibrium state where ΔPEtotal reaches its minimum value. One study based on AA-MD simulations revealed that the conformation of graphene sheets is largely governed by their geometry (e.g., size and aspect ratio) and environmental conditions (e.g., surface modification and substrate) [43]. In particular, that study reported three conformation phases of graphene sheets at equilibrium, i.e., membranes, ribbons, and scrolls, and
the critical aspect ratios for the transitions between these three phases. Our study then uncovers the influence of sheet size on the crumpling behaviors of graphene through detailed analyses and characterization of their energy, structural, and shape changes. The employed modeling framework can be further extended to investigate the influence of the initial geometry (e.g., ribbons and scrolls) of graphene on its crumpling behavior, facilitating a deeper understanding of the structure-property relationships needed for the design of crumpled structures.

3.1.2 Effects of defects on the crumpling behaviors of graphene
During sample preparation, a variety of defects appear in the lattice structure of graphene, such as single vacancy (SV) and multiple vacancy defects, Stone-Wales (SW) defects, line defects, and out-of-plane carbon adatoms [44]. MD simulations have shown that a graphene sheet in equilibrium forms a wrinkled and crumpled configuration in the presence of SW defects [45] or a certain degree of multiple vacancy defects [46]. Moreover, reconstructed SV defects also affect the configuration of graphene sheets in the equilibrium state [47]; here, reconstruction of an SV defect involves saturating two random dangling bonds with a reconstructed bond between the edge beads while retaining the remaining dangling bond (Fig. 6a). In this subsection, we present our recent investigation of the effect of reconstructed vacancy defects on the crumpling behavior of graphene sheets using CG-MD simulations [48]. As shown in Fig. 6a, we have built models of graphene with SV defects and reconstructed SV defects based on the original CG model described in Section 2.2. The equilibrium and crumpling simulations of the system are consistent with those described in the previous subsection.
In addition to the pristine graphene sheet as a reference, we mainly considered five defective graphene sheets (each with an edge size of 100.6 nm), all of which have the same SV defect ratio (psv = 0.05) but differ in the reconstructed SV defect ratio precons, which ranges from 0 to 1 (here, precons = 0 and precons = 1 indicate no reconstructed SV defects and all SV defects being reconstructed, respectively). Equilibrium simulations revealed that the pristine and unreconstructed defective (precons = 0) graphene sheets exhibit relatively flat configurations in equilibrium (Fig. 6b), while the defective sheets exhibit significant wrinkles and crumples after reconstruction (e.g., precons = 1);
Fig. 6 (a) Schematic of SV defects before (gray background) and after (cyan background) reconstruction in the CG graphene sheet. (b) Representative configurations of three CG graphene sheets (i.e., pristine sheet, defective sheet without reconstruction (precons = 0), and fully reconstructed defective sheet (precons = 1)) in initial equilibrium. Evolution of the (c) total potential energy increment per area (ΔPEtotal) and (d) relative shape anisotropy (κ²) as a function of the compaction ratio (ρc) for graphene sheets having different precons ranging from 0 to 1. The inset in panel (c) shows the total potential energy contributions for three CG graphene sheets in the final crumpled state.
the larger the precons, the more pronounced the wrinkling and crumpling [48]. These wrinkling and crumpling phenomena are mainly attributed to local lattice distortion and deformation caused by the reconstructed SV defects, resulting in overall wrinkling and crumpling of the sheet, which is consistent with previous findings [47,49,50]. By analyzing the total potential energy increment per area ΔPEtotal (Fig. 6c) and relative shape anisotropy κ² (Fig. 6d) of the sheet during the crumpling process, we found that the reconstructed SV defects reduce the self-adhesion of the crumpled system, allowing the sheet to form a crumpled structure without significant self-folding or self-adhering in regimes II and III. Although the adhesion (i.e., nonbonded potential energy) of the system decreases as precons increases, the angle potential energy of the system increases significantly with precons. The increase in angle potential energy makes the sheet with reconstructed SV defects more difficult to compact in the crumpling process, resulting in a higher bulk modulus and a stronger level of stress heterogeneity [51].
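A minimal sketch of how the two defect ratios in this study could be realized when building a model sheet; the sampling scheme and names below are our own illustration, not the construction code of Ref. [48]:

```python
import random

def assign_defects(n_sites, p_sv=0.05, p_recons=0.5, seed=0):
    """Pick SV defect sites, then mark a fraction p_recons of them as reconstructed."""
    rng = random.Random(seed)
    n_sv = round(p_sv * n_sites)
    sv_sites = rng.sample(range(n_sites), n_sv)         # single-vacancy defects
    n_recons = round(p_recons * n_sv)
    recons_sites = set(rng.sample(sv_sites, n_recons))  # reconstructed subset
    return sv_sites, recons_sites

# 10,000 lattice sites, psv = 0.05, 40% of SV defects reconstructed.
sv, recons = assign_defects(n_sites=10_000, p_sv=0.05, p_recons=0.4)
```

The two-stage sampling keeps psv fixed across all sheets while sweeping precons, mirroring the comparison made in the study.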
As it is often difficult to avoid defects in graphene and other 2D materials, it would be meaningful to better understand and utilize the unique properties of crumpled structures induced by defects. For example, the introduction of reconstructed SV defects, as discussed earlier, can increase the bulk modulus of crumpled graphene. To date, our understanding of different types of defects and their influence on the complex behaviors of sheet materials is still limited, and more comprehensive and in-depth studies in this direction are much needed.

3.1.3 Effects of self-adhesion on the crumpling behaviors of macromolecular sheets
From the previous two subsections, we know that both the sheet size and the defect reconstruction ratio affect the crumpling process mainly by changing the self-adhesive properties of the system [40,48]. In addition, the overall crumpling behavior of sheet materials from molecular to macroscopic scales is usually the result of competition between bending strain energy and interfacial adhesion energy [52]. Therefore, the role of sheet adhesion in the crumpling process should not be underestimated. In this subsection, we discuss the influence of the adhesion of macromolecular sheets on their crumpling behavior. Building upon the CG model of graphene presented in Section 2.2, we generalize this model as a macromolecular sheet with different adhesion energies ranging from 86.3 to 1665.5 mJ/m² by varying the LJ potential well depth (εLJ, ranging from 0.3 to 5 kcal/mol) of the nonbonded interaction [53]. The larger εLJ is, the more adhesive the model is. Fig. 7a shows the trend of the total potential energy increment per area (ΔPEtotal) of macromolecular sheets having various adhesions during the crumpling process.
It is observed that the macromolecular sheet with large adhesion (εLJ = 5 kcal/mol) has a deeper valley of ΔPEtotal (i.e., a larger decrease of ΔPEtotal) in the intermediate phase of crumpling, which is attributed to the self-folding and self-adhering effects of the sheet. Moreover, these self-folding and self-adhering effects can be observed in the configuration of the sheets at the quasiequilibrium state (Fig. 7a), where the sheets tend to fold to develop a larger planar structure. This is also confirmed by the analysis of the relative shape anisotropy (κ²), showing that the sheet forms a flatter configuration in the intermediate phase of crumpling (i.e., the transition between regimes II and III), marked by a larger peak in κ² (Fig. 7b). Furthermore, it is found that the fractal dimension of the crumpled sheet decreases from approximately 2.5 to 2 as the sheet adhesion increases,
Fig. 7 Results of the (a) total potential energy increment per area (ΔPEtotal) and (b) relative shape anisotropy (κ²) as a function of the compaction ratio (ρc) for macromolecular sheets having different nonbonded interactions (LJ potential well depth εLJ ranging from 0.3 to 5 kcal/mol). The insets in panel (a) show schematics of the nonbonded interaction and representative configurations of crumpled macromolecular sheets in the quasiequilibrium state.
indicating a lower packing efficiency of the sheet with higher adhesion during crumpling. Meanwhile, both the radius of gyration and the hydrodynamic radius of macromolecular sheets with different edge sizes can be quantitatively described by a power-law scaling relationship based on the adhesion, and the transition of the sheet from the initially ordered state to the disordered glassy state is also highly correlated with adhesion [53]. Sheet adhesion plays an important role in the internal structure of crumpled macromolecules. Based on our recent modeling study, a crumpled ball with low adhesion consists mostly of random crumpled blocks, while with higher adhesion the crumpled ball tends to form a folded internal structure. However, no work has yet correlated the internal configuration (e.g., curvature) of the crumpled structure with its mechanical and dynamic properties, which deserves further study.
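Power-law scaling relationships like those above (e.g., the radius of gyration versus adhesion strength) are typically extracted by linear regression in log-log space. A generic sketch with synthetic data; the exponent and prefactor below are arbitrary illustrative values, not fitted results from Ref. [53]:

```python
import math

def fit_power_law(x, y):
    """Least-squares fit of y = a * x**b in log-log space; returns (a, b)."""
    lx = [math.log(v) for v in x]
    ly = [math.log(v) for v in y]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    b = sum((u - mx) * (v - my) for u, v in zip(lx, ly)) / sum(
        (u - mx) ** 2 for u in lx
    )
    a = math.exp(my - b * mx)
    return a, b

# Synthetic data obeying y = 2.0 * x**(-0.5); the fit should recover both values.
xs = [0.3, 0.5, 1.0, 2.0, 5.0]
ys = [2.0 * x ** -0.5 for x in xs]
a, b = fit_power_law(xs, ys)
```

Because a power law is a straight line in log-log coordinates, the slope b is the scaling exponent and exp(intercept) is the prefactor a.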
3.2 Nanostructured supramolecular assemblies
Self-assembly is the organization or aggregation of multiple fundamental structural units (e.g., molecules, nanomaterials, and micron- or larger-scale matter) to form a stable 3D structure with a certain geometry. Such 3D self-assemblies allow the physicochemical properties of the fundamental structural units to be converted into macroscopic materials and structures useful for various engineering and technological applications. For instance, graphene-based 1D fibrous materials can be applied to wearable devices [54], 2D thin films can be used to develop flexible
electronics [55], and 3D bulk materials can be developed into compressible and porous materials [56]. In particular, 3D graphene foams and aerogels can achieve lightweight performance, superior mechanical stability, and high electrical conductivity, along with high porosity and surface area. In this section, we review recent developments in multiscale modeling to investigate the thermomechanical behaviors of nanosheet-based bulk assemblies, i.e., graphene foams and melts.

3.2.1 Mechanical behavior of graphene foam
Graphene foam is a novel porous bulk material made of graphene sheets with excellent physical and mechanical properties [57–59]. It has been reported that graphene foam has great potential in applications such as environmental purification [60], energy storage [61], flexible electronics [62], and advanced composite materials [63]. Numerous simulation and experimental studies have focused on understanding the structural and mechanical behaviors of graphene foams under uniaxial tension [64], compression [65], and shear [66,67]. Pan et al. employed CG-MD simulations and experiments to explore the mechanical response of graphene foams under uniaxial tension [64]. As shown in Fig. 8a, they built a 3D graphene foam with physical cross-linking by employing the graphene mesoscale model described in
Fig. 8 (a) Power-law scaling relationship between the tensile elastic modulus and the mass density for 3D graphene foam-related materials. The inset shows the relaxed configuration of the 3D graphene foam consisting of multiple 2D square mesoscopic graphene flakes. (b) Stress-strain curve of the 3D graphene foam during uniaxial supercompression and release simulations. The inset shows the relaxed configuration of the 3D graphene foam consisting of multiple 2D mesoscopic graphene hole-flakes. Each graphene flake and hole-flake is colored differently for visualization.
Section 2.4. Their study reported multipeaked stress-strain curves with distinct yield plateaus and ductile fracture near a plane at 45 degrees to the tensile direction. They observed a power-law scaling relationship between the tensile elastic modulus and the mass density of the foam materials (Fig. 8a). The multipeaked stress-strain response of graphene foams under uniaxial stretching is attributed to sheet alignment and intermittent bond breaking of graphene sheets and cross-links [64]. Their follow-up studies on the compression and unloading of graphene foams with hole-flakes further showed that graphene foams exhibit rubber-like responses under uniaxial compression, with three typical phases in the stress-strain curve (Fig. 8b): the initial linear elastic phase, the intermediate yielding phase, and the final densification phase [65]. Furthermore, CG-MD simulations indicate a nonlocal fracture response of the graphene foam under shear loading [66]. Such nonlocal fracture and the resultant geometry and stress rearrangement lead to a strain-hardening phase after the yield plateau. Also, the shear stiffness of the graphene foam tends to increase linearly with increasing cross-link density. The development of mesoscale CG models of graphene offers an efficient simulation approach to explore the complex behaviors of graphene foams and related assemblies. The multiscale modeling framework not only lends valuable insights into the deformation mechanisms of graphene foams but also provides a materials-by-design strategy for achieving tailored and improved performance of such supramolecular assemblies based on 2D materials.

3.2.2 Temperature effects on the mechanical and dynamic behaviors of graphene foam
Most recent MD simulations have focused on the mechanical properties of graphene-assembled systems (e.g., graphene foams [64] and aerogels [68]) at room temperature.
However, graphene and other nanosheet materials could also be applied at conditions well above room temperature due to their excellent thermal stability [69,70]. Several recent studies have explored the effect of temperature on the mechanical properties of graphene, showing that both the failure strain and Young’s modulus of graphene decrease with increasing temperature, while Poisson’s ratio tends to increase [71]. Moreover, it has been shown that bulk graphene materials exhibit rubber-like elasticity and polymer-like rheological response, as well as excellent reversible deformation and fatigue resistance even in solution [72].
Using the CG model of graphene (Section 2.2), we have systematically investigated the temperature-dependent thermomechanical properties of bulk graphene, the so-called graphene “melt” (i.e., 3D graphene foam in a melt state) [23]. As shown in Fig. 9a, the model system consists of randomly packed nanoribbons with a length and width of 48 nm and 8 nm, respectively, forming the 3D bulk melt. At elevated temperatures, we have systematically examined the dynamics of graphene melts, where the sheet behaves like a fluid. By studying the structural relaxation time τα of the graphene melt in the high-temperature regime, it is found that τα increases as temperature
Fig. 9 (a) Snapshots of a CG graphene nanoribbon and the graphene melt. Here, the graphene melt consists of 40 graphene nanoribbons that are disoriented and crumpled. Each nanoribbon is colored differently for visualization. (b) Structural relaxation time (τα) of the graphene melt versus the temperature (T) of the system; the data are fitted with the VFT relation. The inset shows the potential energy of the graphene melt versus T, and the intersection of the two linear fits is used to estimate Tg. (c) Snapshots of the shear simulation of the graphene foam at 0% and 50% shear strain, respectively. (d) Nonlinear scaling relationship between the shear modulus (G) and the inverse of the Debye-Waller factor (1/⟨u²⟩) in the glassy regime (below Tg); the data are fitted by a power-law function. The inset shows the relationship between G and T in the glassy regime.
T decreases (Fig. 9b), which can be well captured by the Vogel-Fulcher-Tammann (VFT) relation:

$$
\tau_\alpha(T) = \tau_\infty \exp\left(\frac{D T_0}{T - T_0}\right)
\tag{6}
$$
where τ∞, D, and T0 are fitting parameters associated with the glass-forming process. The relationship between τα and T for the graphene melt is analogous to that for linear polymer melts and other glass-forming liquids [23]. Furthermore, the inset in Fig. 9b shows the time-averaged potential energy versus T for the melt, which can be described by bilinear slopes in the high- and low-temperature regimes. The apparent glass-transition temperature Tg of the graphene melt can thus be estimated from the intersection of the two lines, yielding a value of 1610 K. The Tg of the graphene melt is significantly higher than that of most polymer materials, demonstrating the superior thermal stability of bulk graphene at elevated temperatures. At lower temperatures, we performed shear simulations of bulk graphene to test its mechanical response (Fig. 9c). As shown in Fig. 9d, there exists a power-law scaling relationship between the shear modulus G of the graphene foam at lower temperatures and the inverse of the Debye-Waller factor 1/⟨u²⟩ (i.e., a measure of molecular stiffness). This observation differs from that of metallic glass in the glassy state, where G is often found to scale linearly with 1/⟨u²⟩ [73]. The observed power-law relationship in our simulation can be attributed to the high porosity of the graphene foam, leading to differences in the local free volume and molecular caging compared with metallic glasses. The inset in Fig. 9d reveals a nonlinear relationship between G and T for the graphene foam, which is qualitatively different from the in-plane shear properties of individual graphene sheets [74]. In addition, the G of graphene foams at room temperature (approximately 425 MPa) is smaller than that of glassy polymers.
This observation can be explained by the fact that the density of graphene foams (0.7 g/cm³) is much lower than that of linear-chain glassy polymers (>1 g/cm³), and the shear response is dominated by intersheet sliding (rather than sheet stretching), leading to a lower G. The aforementioned study reveals, for the first time, that 3D graphene melts possess a glass-forming response, as marked by the high Tg predicted from our CG-MD simulations. It also demonstrates that graphene melts have excellent thermal stability and can be utilized to enhance the thermal conductivity and mechanical stability of fire-extinguishing additives,
functional structural materials, etc. Our modeling framework establishes an analogy between a graphene melt and glassy polymers from a thermomechanical standpoint, paving the way to explore the intriguing glass-forming behavior of such 3D supramolecular assemblies at elevated temperatures.
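The VFT relation of Eq. (6) is easy to evaluate once the three fitting parameters are known; in the sketch below the parameter values are arbitrary placeholders for illustration (the fitted values for the graphene melt are reported in Ref. [23]):

```python
import math

def tau_vft(T, tau_inf, D, T0):
    """Structural relaxation time from the VFT relation, Eq. (6)."""
    return tau_inf * math.exp(D * T0 / (T - T0))

# Placeholder parameters (ours, not the fitted values from Ref. [23]).
tau_inf, D, T0 = 1.0, 5.0, 1000.0

# tau_alpha grows steeply as T approaches T0 from above, i.e., on cooling:
taus = [tau_vft(T, tau_inf, D, T0) for T in (2000.0, 1500.0, 1200.0)]
```

The divergence of τα as T → T0 is what distinguishes VFT behavior from a simple Arrhenius law and underlies the glass-transition estimate described above.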
3.3 Multilayer graphene assemblies
Multilayer graphene assemblies (MLGs) have a wide range of potential applications due to their high toughness and failure strains compared with the constituent single graphene sheets. In particular, MLGs can be stacked in a staggered manner, where the tensile stress is transferred by shear at the component interfaces and the deformation is induced by relative sliding between different graphene layers [75]. Understanding the influence of architectural parameters (e.g., the overlap length between the sheets in different layers) on the mechanical behavior of the assembly is of critical importance for designing and predicting the mechanical performance of MLGs. To address this aspect, our previous study investigated the effects of the overlap length Lo of staggered MLGs on their strength, failure strain, and toughness under uniaxial tensile strain using CG-MD simulations [76]. The graphene model employed is a CG model with a 4-to-1 mapping scheme (see Section 2.2). As shown in Fig. 10a, we simulated staggered MLGs with different numbers of layers (ranging from 2 to 10) in a representative volume element (RVE) with one sheet per layer (width of 6.4 nm), with Lo ranging from 3 to 780 nm. The simulations indicated that the critical overlap length controlling the strength (Lsc) of MLGs is approximately 17 nm, at which the system reaches 90% of the maximum strength; the critical overlap length associated with the saturation of the plastic stress (Lpc) is observed to be approximately 50 nm. In particular, the analysis of the force-displacement relationship during axial deformation of the MLGs revealed that the process can be divided into two regimes, i.e., a constant-force regime (for displacements up to approximately Lo − Lpc) followed by a force-decay regime. When Lo exceeds Lsc and Lpc, the strength and plastic stress, respectively, become almost independent of Lo. Fig. 10a illustrates the deformation process of the MLGs in the simulation.
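As a small aside, the two critical overlap lengths reported above partition the design space for staggered MLGs; the helper below encodes that classification (the thresholds are the approximate values from the text, and the function and labels are our own):

```python
# Approximate critical overlap lengths from the CG-MD study (values from the text).
L_SC = 17.0  # nm, critical length controlling strength
L_PC = 50.0  # nm, critical length for saturation of the plastic stress

def overlap_regime(L_o):
    """Classify an MLG overlap length (nm) against the two critical lengths."""
    if L_o < L_SC:
        return "strength still rising with overlap"
    if L_o < L_PC:
        return "strength saturated, plastic stress still rising"
    return "strength and plastic stress both saturated"

labels = [overlap_regime(L) for L in (10.0, 30.0, 100.0)]
```

Such a lookup makes explicit why increasing Lo far beyond both critical lengths yields diminishing returns in strength and plastic stress.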
We found that the deformation of the MLG is initially uniformly distributed over all interfaces; after reaching a certain level of deformation, the strain of the system localizes in a region along the length of the MLG until the system collapses due to de-overlapping in the region where the strain localizes. Furthermore, the failure strains obtained from MD simulations are comparable with those predicted by the kinetic model
Fig. 10 (a) Schematic of staggered MLGs under uniaxial tension simulation, and snapshots of the MLG deformation process in the MD simulation. Here, Lo, H, and W are the overlap length, thickness, and width of the MLG, respectively; σ is the tensile stress. (b) Normalized toughness (UT/σp) as a function of Lo. The solid curve shows the prediction for the MLG system with one sheet per layer. (c) Predicted critical length (LTc) for the toughness as a function of sheet stiffness (E) and interlayer adhesion energy (γ).
when Lo > Lpc ; the failure strains from MD simulations are larger as Lo < Lpc , which is most likely due to the fact that some homogeneous deformations can still carry on before strain localization occurs for a finite rate and short Lo. The analysis of the toughness (UT) of the system showed that the toughness increases with increasing Lo (Fig. 10b), which can be described by the following equation: U T ¼ σp
Lc p 1 Lo
+
σpLc p 4ns L o
(7)
where σ_p and n_s are the plastic stress and the number of sheets per layer, respectively. We further found that the critical overlap length associated with the toughness of the system is L_c^T ≈ 400 nm; increasing L_o beyond this critical length does not significantly improve the toughness. Finally, the failure strain and toughness can be controlled by adjusting the stiffness (E) of the constituent sheets and the interfacial adhesion energy (γ) between the sheets, as shown in Fig. 10c. Based on our simulation results, a straightforward and feasible design route is to build multilayer graphene oxide assemblies (MLGOs) from graphene sheets with different functionalization levels. Because their interfacial adhesion energy is enhanced and their modulus is reduced compared with MLGs, MLGOs with different toughness can be achieved by controlling the aforementioned length-scale parameters.

Yangchao Liao et al.

The deformation and failure mechanisms of MLG sheets have also been investigated through nanoindentation CG-MD simulations [76]. In these simulations, two stacking configurations of the MLG were considered: commensurate stacking and noncommensurate stacking (Fig. 11). Note that commensurate stacking corresponds to the minimum-energy configuration for sheets without rotational stacking faults, whereas noncommensurate stacking represents a higher-energy state with a 90-degree offset angle between each pair of neighboring layers and has a lower interlayer shear stiffness [77]. The nanoindentation simulations reproduced the experimentally observed kink in the force vs indentation depth curve and the hysteresis during the loading/unloading cycle for commensurate stacked trilayer graphene (Fig. 11a). It was also observed that the kink activation force of the commensurate stacked trilayer graphene is higher than that of the noncommensurate stacked one. Interestingly, the unloading curve overlaps with the loading segment when the tip is retracted to its original position, indicating that the system recovers its original energy state when it is completely unloaded, despite the energy dissipation during
the deformation. In addition, the simulation results showed a linear relationship between the dissipated energy of the system and the film radius for both commensurate stacked and noncommensurate stacked bilayer graphene (Fig. 11b). Overall, the recoverable slippage between graphene layers observed in our previous studies is the fundamental mechanism behind the closed hysteresis loop in the loading/unloading cycle. This mechanism enables the MLG to dissipate energy repeatedly, and the dissipated energy increases with the perimeter of the film.

Fig. 11 (a) Evolution of the force as a function of the indentation depth for commensurate stacked trilayer graphene during loading and unloading simulations. (b) Dissipated energy from loading and unloading simulations for commensurate stacked and noncommensurate stacked bilayer graphene with different film radii. The insets show snapshots of the commensurate stacked and noncommensurate stacked bilayer graphene.
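Because the loop in Fig. 11a closes on full retraction, the energy dissipated per cycle is simply the area enclosed between the loading and unloading branches of the force-depth curve. The sketch below integrates that area with the trapezoid rule; the curve shapes and all numerical values are hypothetical, chosen only so that the two branches coincide at zero and at maximum depth, as in the recoverable-slippage picture described above:

```python
def trapezoid(y, x):
    """Composite trapezoid rule for samples y(x)."""
    return sum(0.5 * (y[i] + y[i + 1]) * (x[i + 1] - x[i])
               for i in range(len(x) - 1))

def dissipated_energy(depth, f_load, f_unload):
    """Energy dissipated in one load/unload cycle: the area enclosed
    between the loading and unloading force-depth branches."""
    return trapezoid([fl - fu for fl, fu in zip(f_load, f_unload)], depth)

# Synthetic (hypothetical) branches that meet at zero and at the
# maximum depth, mimicking a closed hysteresis loop.
d_max = 5.0                                        # maximum depth (nm)
depth = [d_max * i / 199 for i in range(200)]
f_load = [2.0 * x ** 1.5 for x in depth]           # loading force (nN)
f_unload = [2.0 * x ** 1.5 - 0.1 * x * (d_max - x) for x in depth]

print(round(dissipated_energy(depth, f_load, f_unload), 3))  # loop area, nN*nm
```

Repeating the integration for films of different radii would reproduce, for a suitable dissipation model, the linear scaling of dissipated energy with film perimeter noted above.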
3.4 Multilayer graphene-reinforced nanocomposites
Graphene has excellent thermal and electrical transport properties as well as outstanding stiffness and strength. However, its low fracture toughness must be overcome before graphene can be used in load-bearing structures [78]. Layered nanocomposite structures such as nacre are capable of dissipating enormous amounts of energy under large deformation and exhibit high fracture toughness [79]. Inspired by such nacre-like systems, our previous study explored the mechanical properties and failure mechanisms of MLG domains embedded in a poly(methyl methacrylate) (PMMA) matrix, forming a layer-by-layer assembly, using CG-MD simulations [24]. Specifically, we built an MLG-PMMA nanocomposite system using the CG graphene model described in Section 2.2 and a two-bead-per-monomer CG PMMA model, in which one bead represents the backbone group and the other represents the side-chain methyl group [80]. Fig. 12a shows the representative volume element (RVE) of the nanocomposite considered in our simulation. By performing pull-out test simulations, two different deformation and failure mechanisms of the system were identified, i.e., the pull-out and yielding failure modes (Fig. 12b). In the pull-out mode, the MLG is pulled out along the MLG-PMMA interface; in the yielding mode, the graphene-graphene interfaces within the MLG yield. Both failure modes significantly affect the toughness and energy dissipation of the system. Validated against the simulation results, a theoretical model was developed to predict the critical number of graphene layers (N_cr) that governs the failure mode:

N_cr = 2 F_∞ tanh(αL) / (h σ_m)    (8)
where F_∞ is the maximum pull-out force per unit width at infinitely large L; α is a length-scale parameter governing the shear stress transfer along the MLG-PMMA interface; σ_m and h are the tensile strength and thickness of the MLG, respectively; and the factor of 2 accounts for the upper and lower MLG-PMMA interfaces. Our simulations indicated that the yielding failure mode (significant energy dissipation) occurs when N ≤ N_cr, and the pull-out failure mode (higher toughness) occurs when N > N_cr. Furthermore, increasing the embedded length of the system and the strength of the interfacial interaction between the layers can enhance the energy dissipation of the MLG-PMMA nanocomposites, which is attributable to the nacre-like arrangement of hard and soft phases in a layer-by-layer manner [24].

Fig. 12 (a) Snapshot of the nacre-inspired layer-by-layer assembly of MLG-PMMA nanocomposites, and illustration of the microscopic picture of the nanocomposite system under loading. The representative volume element (RVE) studied includes one PMMA layer and one MLG layer (a staggered stack of discrete graphene sheet layers). Here, H, L, and N are the thickness of the PMMA layer (H ≈ 20 nm), the embedded length, and the number of layers of graphene sheets in the MLG phase, respectively. (b) Normalized work of fracture per layer (W/N) as a function of N. The dashed lines show the trend. The different color regions correspond to the different failure modes identified in the simulations, i.e., the yielding mode and the pull-out mode.

In addition to layered nanocomposites, the strength, toughness, and functionality of nanocomposites can be significantly improved by embedding a small weight fraction of nanosheets into a polymer matrix [81]. As shown in Fig. 13a, Wang et al. investigated the mechanical and viscoelastic properties of wrinkled MLG-reinforced PMMA nanocomposites using CG-MD simulations [82]. They constructed MLG-PMMA systems with wrinkled MLGs having sinusoidal wrinkle shapes and different wavelengths
and performed uniaxial tension, out-of-plane shear, and small-amplitude oscillatory shear simulations. Their results showed that adding MLGs to the PMMA matrix enhances the uniaxial tensile modulus, and that this enhancement grows with increasing graphene volume fraction and decreasing wrinkle wavelength. However, the enhancement under shear deformation depends strongly on the size and the number of layers of the MLGs. For example, for specific wrinkled structures of bilayer and trilayer MLG-PMMA systems, the MLGs exhibit interlayer sliding under shear deformation, resulting in an anomalous decrease in stress. In brief, interlayer sliding of the MLGs is more easily activated when they are moderately wrinkled, and the viscoelastic properties of the nanocomposites are distinctly altered by the interlayer sliding within the MLGs.

Fig. 13 (a) Schematics of the nanocomposite models consisting of bilayer graphene sheets embedded in the PMMA matrix, and the mechanical test simulations conducted (i.e., uniaxial tension, uniaxial shear, and oscillatory shear simulations). (b) Residual velocity versus impact velocity for different films under a sharp-nosed projectile and a blunt-nosed projectile. Here, the annotations "Gra" and "PMMA" denote the two models in which the thicker MLG phase or the PMMA phase, respectively, is impacted first; the number "1" after "Gra" or "PMMA" indicates the number of repetitions of the alternating MLG and PMMA phases.

A recent effort by Chiang et al. explored the impact behavior of nacre-inspired MLG-PMMA nanocomposite films using CG-MD simulations [83]. As shown in Fig. 13b, the studied nanocomposite consists of alternating MLG and PMMA phases with designed thicknesses. It was found that for the model with the PMMA phase on top of the thicker MLG phase, the viscoelastic behavior of the PMMA phase significantly drags the projectile during penetration, leading to small residual velocities even at large impact velocities. By contrast, for the model with the PMMA film below
the MLG phase, the MLG is directly impacted by the projectile, and a smaller impact velocity is sufficient to produce a comparable residual velocity, owing to the stress waves and the finite boundary conditions. Moreover, the blunt-nosed projectile leads to earlier spalling-like failure at the bottom surface of the single repetitive structure than the sharp-nosed projectile does. Further investigation of the deformation mechanism of the layered nanocomposite film under low-velocity impact revealed that the nanocomposite undergoes dramatic deformation within the polymer phase, which drastically dissipates the impact energy.
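Eq. (8) can be used as a quick screen for the expected failure mode of an MLG-PMMA design. The sketch below evaluates N_cr and classifies a candidate layer count; all parameter values (pull-out force, α, lengths, strength) are hypothetical placeholders for illustration, not values fitted in [24]:

```python
import math

def n_critical(f_inf, alpha, L, h, sigma_m):
    """Critical layer number from Eq. (8):
    N_cr = 2 F_inf tanh(alpha L) / (h sigma_m).

    f_inf   : maximum pull-out force per unit width at large L (N/m)
    alpha   : shear-transfer length-scale parameter (1/m)
    L       : embedded length of the MLG (m)
    h       : per-layer graphene thickness (m)
    sigma_m : tensile strength of the MLG (Pa)
    """
    return 2.0 * f_inf * math.tanh(alpha * L) / (h * sigma_m)

def failure_mode(n_layers, *params):
    """N <= N_cr -> yielding (strong energy dissipation);
    N > N_cr -> pull-out (higher toughness)."""
    return "yielding" if n_layers <= n_critical(*params) else "pull-out"

# Hypothetical inputs, for illustration only.
params = (40.0,      # f_inf: pull-out force per unit width (N/m)
          2.0e8,     # alpha (1/m)
          50e-9,     # L: embedded length (m)
          0.34e-9,   # h: per-layer thickness (m)
          1.0e10)    # sigma_m: MLG tensile strength (Pa)

print(round(n_critical(*params), 1))  # about 23.5 layers for these inputs
print(failure_mode(10, *params))      # yielding
print(failure_mode(30, *params))      # pull-out
```

Because tanh saturates, increasing L beyond a few multiples of 1/α no longer raises N_cr, consistent with the shear-lag character of the stress transfer along the MLG-PMMA interface.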
4. Conclusion and future outlook
In this chapter, we have shown that multiscale modeling offers an effective way to study the structural and mechanical behaviors of supramolecular assemblies of 2D materials. Using graphene as an example, we have reviewed several popular CG and mesoscale models parameterized with a strain energy conservation approach. CG models sacrifice degrees of freedom in exchange for higher computational efficiency, making them suitable for simulating mesoscale structures. Generally, as the mapping ratio, or degree of coarse-graining, of a CG model increases, its predictive accuracy decreases. We have demonstrated that the CG models covered in this chapter retain the microstructural features and mechanical properties of graphene while significantly improving the computational efficiency compared with all-atom (AA) models. This allows the thermomechanical and deformation behaviors of 2D supramolecular assemblies to be explored computationally at extended spatiotemporal scales that would otherwise be difficult to reach with AA-MD simulations. As discussed in this chapter, multiscale CG models have been successfully applied to predict and understand the deformation behaviors and failure mechanisms of crumpled sheets and nanostructured 2D assemblies, offering new strategies and guidelines for the design and fabrication of novel graphene-based supramolecular assemblies. The CG models can also incorporate additional structural features, such as defects (e.g., vacancy defects and Stone-Wales defects) or surface functionalization of the sheets (e.g., adhesion and polymer grafting). The coarse-graining methods outlined in this chapter could be straightforwardly extended to other 2D materials.
References
[1] J.N. Tiwari, R.N. Tiwari, K.S. Kim, Zero-dimensional, one-dimensional, two-dimensional and three-dimensional nanostructured materials for advanced electrochemical energy devices, Prog. Mater. Sci. 57 (4) (2012) 724–803, https://doi.org/10.1016/j.pmatsci.2011.08.003.
[2] T. Yang, D. Xie, Z. Li, H. Zhu, Recent advances in wearable tactile sensors: materials, sensing mechanisms, and device performance, Mater. Sci. Eng. R. Rep. 115 (2017) 1–37, https://doi.org/10.1016/j.mser.2017.02.001.
[3] R. Mas-Ballesté, C. Gómez-Navarro, J. Gómez-Herrero, F. Zamora, 2D materials: to graphene and beyond, Nanoscale 3 (1) (2011) 20–30, https://doi.org/10.1039/c0nr00323a.
[4] K.S. Novoselov, et al., Electric field effect in atomically thin carbon films, Science 306 (5696) (2004) 666–669, https://doi.org/10.1126/science.1102896.
[5] D. Akinwande, et al., A review on mechanics and mechanical properties of 2D materials—graphene and beyond, Extreme Mech. Lett. 13 (2017) 42–77, https://doi.org/10.1016/j.eml.2017.01.008.
[6] S.Z. Butler, et al., Progress, challenges, and opportunities in two-dimensional materials beyond graphene, ACS Nano 7 (4) (2013) 2898–2926, https://doi.org/10.1021/nn400280c.
[7] K.E. Huggins, S. Son, S.I. Stupp, Two-dimensional supramolecular assemblies of a polydiacetylene. 1. Synthesis, structure, and third-order nonlinear optical properties, Macromolecules 30 (18) (1997) 5305–5312, https://doi.org/10.1021/ma961715i.
[8] Q. Zhang, R.J. Xing, W.Z. Wang, Y.X. Deng, D.H. Qu, H. Tian, Dynamic adaptive two-dimensional supramolecular assemblies for on-demand filtration, iScience 19 (2019) 14–24, https://doi.org/10.1016/j.isci.2019.07.007.
[9] L. Yang, X. Tan, Z. Wang, X. Zhang, Supramolecular polymers: historical development, preparation, characterization, and functions, Chem. Rev. 115 (15) (2015) 7196–7239, https://doi.org/10.1021/cr500633b.
[10] Q. Zhang, C.Y. Shi, D.H. Qu, Y.T. Long, B.L. Feringa, H. Tian, Exploring a naturally tailored small molecule for stretchable, self-healing, and adhesive supramolecular polymers, Sci. Adv. 4 (7) (2018), https://doi.org/10.1126/sciadv.aat8192.
[11] L. Ruiz, W. Xia, Z. Meng, S. Keten, A coarse-grained model for the mechanical behavior of multi-layer graphene, Carbon 82 (2015) 103–115, https://doi.org/10.1016/j.carbon.2014.10.040.
[12] Z. Meng, et al., A coarse-grained model for the mechanical behavior of graphene oxide, Carbon 117 (2017) 476–487, https://doi.org/10.1016/j.carbon.2017.02.061.
[13] S. Cranford, M.J. Buehler, Twisted and coiled ultralong multilayer graphene ribbons, Model. Simul. Mater. Sci. Eng. 19 (5) (2011), https://doi.org/10.1088/0965-0393/19/5/054003.
[14] S. Liu, K. Duan, L. Li, X. Wang, Y. Hu, A multilayer coarse-grained molecular dynamics model for mechanical analysis of mesoscale graphene structures, Carbon 178 (2021) 528–539, https://doi.org/10.1016/j.carbon.2021.03.025.
[15] W.G. Noid, Perspective: coarse-grained models for biomolecular systems, J. Chem. Phys. 139 (9) (2013), https://doi.org/10.1063/1.4818908.
[16] G.S. Grest, K. Kremer, Molecular dynamics simulation for polymers in the presence of a heat bath, Phys. Rev. A 33 (5) (1986) 3628–3631, https://doi.org/10.1103/PhysRevA.33.3628.
[17] J.J. Shang, Q.S. Yang, X. Liu, New coarse-grained model and its implementation in simulations of graphene assemblies, J. Chem. Theory Comput. 13 (8) (2017), https://doi.org/10.1021/acs.jctc.7b00051.
[18] S.J. Marrink, H.J. Risselada, S. Yefimov, D.P. Tieleman, A.H. De Vries, The MARTINI force field: coarse grained model for biomolecular simulations, J. Phys. Chem. B 111 (27) (2007) 7812–7824, https://doi.org/10.1021/jp071097f.
[19] Q. Liu, J. Huang, B. Xu, Evaporation-driven crumpling and assembling of two-dimensional (2D) materials: a rotational spring–mechanical slider model, J. Mech. Phys. Solids 133 (2019), https://doi.org/10.1016/j.jmps.2019.103722.
[20] C. Lee, X. Wei, J.W. Kysar, J. Hone, Measurement of the elastic properties and intrinsic strength of monolayer graphene, Science 321 (5887) (2008) 385–388, https://doi.org/10.1126/science.1157996.
[21] G. Van Lier, C. Van Alsenoy, V. Van Doren, P. Geerlings, Ab initio study of the elastic properties of single-walled carbon nanotubes and graphene, Chem. Phys. Lett. 326 (1–2) (2000) 181–185, https://doi.org/10.1016/S0009-2614(00)00764-8.
[22] S. Cranford, D. Sen, M.J. Buehler, Meso-origami: folding multilayer graphene sheets, Appl. Phys. Lett. 95 (12) (2009) 123121, https://doi.org/10.1063/1.3223783.
[23] W. Xia, F. Vargas-Lara, S. Keten, J.F. Douglas, Structure and dynamics of a graphene melt, ACS Nano 12 (6) (2018) 5427–5435, https://doi.org/10.1021/acsnano.8b00524.
[24] W. Xia, J. Song, Z. Meng, C. Shao, S. Keten, Designing multi-layer graphene-based assemblies for enhanced toughness in nacre-inspired nanocomposites, Mol. Syst. Des. Eng. 1 (1) (2016) 40–47, https://doi.org/10.1039/c6me00022c.
[25] M. Arslan, M.C. Boyce, H.J. Qi, C. Ortiz, Constitutive modeling of the stress-stretch behavior of two-dimensional triangulated macromolecular networks containing folded domains, J. Appl. Mech. Trans. ASME 75 (1) (2008) 0110201–0110207, https://doi.org/10.1115/1.2745373.
[26] X. Dai, Y. Zhang, Y. Guan, S. Yang, J. Xu, Mechanical properties of polyelectrolyte multilayer self-assembled films, Thin Solid Films 474 (1–2) (2005) 159–164, https://doi.org/10.1016/j.tsf.2004.08.084.
[27] H. Zhang, C.T. Sun, A multiscale mechanics approach for modeling textured polycrystalline thin films with nanothickness, Int. J. Mech. Sci. 48 (8) (2006) 899–906, https://doi.org/10.1016/j.ijmecsci.2006.01.003.
[28] G. Mie, Zur kinetischen Theorie der einatomigen Körper, Ann. Phys. 316 (8) (1903) 657–697, https://doi.org/10.1002/andp.19033160802.
[29] J.E. Jones, On the determination of molecular fields.—II. From the equation of state of a gas, Proc. R. Soc. Lond. Ser. A Contain. Pap. Math. Phys. Character 106 (738) (1924) 463–477, https://doi.org/10.1098/rspa.1924.0082.
[30] Z. Sun, S. Fang, Y.H. Hu, 3D graphene materials: from understanding to design and synthesis control, Chem. Rev. 120 (18) (2020) 10336–10453, https://doi.org/10.1021/acs.chemrev.0c00083.
[31] D.L. Blair, A. Kudrolli, Geometry of crumpled paper, Phys. Rev. Lett. 94 (16) (2005), https://doi.org/10.1103/PhysRevLett.94.166107.
[32] E. Sultan, A. Boudaoud, Statistics of crumpled paper, Phys. Rev. Lett. 96 (13) (2006), https://doi.org/10.1103/PhysRevLett.96.136103.
[33] A.D. Cambou, N. Menon, Three-dimensional structure of a sheet crumpled into a ball, Proc. Natl. Acad. Sci. U. S. A. 108 (36) (2011) 14741–14745, https://doi.org/10.1073/pnas.1019192108.
[34] A.B. Croll, T. Twohig, T. Elder, The compressive strength of crumpled matter, Nat. Commun. 10 (1) (2019), https://doi.org/10.1038/s41467-019-09546-7.
[35] J.C. Meyer, A.K. Geim, M.I. Katsnelson, K.S. Novoselov, T.J. Booth, S. Roth, The structure of suspended graphene sheets, Nature 446 (7131) (2007) 60–63, https://doi.org/10.1038/nature05545.
[36] A. Fasolino, J.H. Los, M.I. Katsnelson, Intrinsic ripples in graphene, Nat. Mater. 6 (11) (2007) 858–861, https://doi.org/10.1038/nmat2011.
[37] A.N. Obraztsov, E.A. Obraztsova, A.V. Tyurnina, A.A. Zolotukhin, Chemical vapor deposition of thin graphite films of nanometer thickness, Carbon (2007), https://doi.org/10.1016/j.carbon.2007.05.028.
[38] H.S. Seung, D.R. Nelson, Defects in flexible membranes with crystalline order, Phys. Rev. A 38 (2) (1988) 1005–1018, https://doi.org/10.1103/PhysRevA.38.1005.
[39] V.B. Shenoy, C.D. Reddy, A. Ramasubramaniam, Y.W. Zhang, Edge-stress-induced warping of graphene sheets and nanoribbons, Phys. Rev. Lett. 101 (24) (2008), https://doi.org/10.1103/PhysRevLett.101.245501.
[40] Y. Liao, Z. Li, Fatima, W. Xia, Size-dependent structural behaviors of crumpled graphene sheets, Carbon 174 (2021) 148–157, https://doi.org/10.1016/j.carbon.2020.12.006.
[41] X. Ma, M.R. Zachariah, C.D. Zangmeister, Crumpled nanopaper from graphene oxide, Nano Lett. 12 (1) (2012) 486–489, https://doi.org/10.1021/nl203964z.
[42] S.W. Cranford, M.J. Buehler, Packing efficiency and accessible surface area of crumpled graphene, Phys. Rev. B Condens. Matter Mater. Phys. 84 (20) (2011) 205451, https://doi.org/10.1103/PhysRevB.84.205451.
[43] Z. Xu, M.J. Buehler, Geometry controls conformation of graphene sheets: membranes, ribbons, and scrolls, ACS Nano 4 (7) (2010) 3869–3876, https://doi.org/10.1021/nn100575k.
[44] F. Banhart, J. Kotakoski, A.V. Krasheninnikov, Structural defects in graphene, ACS Nano 5 (1) (2011) 26–41, https://doi.org/10.1021/nn102598m.
[45] F.L. Thiemann, P. Rowe, A. Zen, E.A. Müller, A. Michaelides, Defect-dependent corrugation in graphene, Nano Lett. 21 (19) (2021) 8143–8150, https://doi.org/10.1021/acs.nanolett.1c02585.
[46] I. Giordanelli, M. Mendoza, J.S. Andrade, M.A.F. Gomes, H.J. Herrmann, Crumpling damaged graphene, Sci. Rep. 6 (2016), https://doi.org/10.1038/srep25891.
[47] A. El-Barbary, H. Telling, P. Ewels, I. Heggie, R. Briddon, Structure and energetics of the vacancy in graphite, Phys. Rev. B Condens. Matter Mater. Phys. 68 (14) (2003), https://doi.org/10.1103/PhysRevB.68.144107.
[48] Y. Liao, Z. Li, W. Nie, W. Xia, Effect of reconstructed vacancy defects on the crumpling behavior of graphene sheets, Forces Mech. (2021) 100057, https://doi.org/10.1016/j.finmec.2021.100057.
[49] W. Zhang, et al., Defect interaction and deformation in graphene, J. Phys. Chem. C 124 (4) (2020) 2370–2378, https://doi.org/10.1021/acs.jpcc.9b10622.
[50] A.W. Robertson, et al., Structural reconstruction of the graphene monovacancy, ACS Nano 7 (5) (2013) 4495–4502, https://doi.org/10.1021/nn401113r.
[51] M. Becton, L. Zhang, X. Wang, On the crumpling of polycrystalline graphene by molecular dynamics simulation, Phys. Chem. Chem. Phys. 17 (9) (2015) 6297–6304, https://doi.org/10.1039/c4cp05813e.
[52] X. Meng, M. Li, Z. Kang, X. Zhang, J. Xiao, Mechanics of self-folding of single-layer graphene, J. Phys. D. Appl. Phys. 46 (5) (2013), https://doi.org/10.1088/0022-3727/46/5/055308.
[53] Y. Liao, Z. Li, S. Ghazanfari, Fatima, A.B. Croll, W. Xia, Understanding the role of self-adhesion in crumpling behaviors of sheet macromolecules, Langmuir 37 (28) (2021) 8627–8637, https://doi.org/10.1021/acs.langmuir.1c01545.
[54] L. Lim, et al., All-in-one graphene based composite fiber: toward wearable supercapacitor, ACS Appl. Mater. Interfaces 9 (45) (2017) 39576–39583, https://doi.org/10.1021/acsami.7b10182.
[55] D. Corzo, G. Tostado-Blázquez, D. Baran, Flexible electronics: status, challenges and opportunities, Front. Electron. 1 (2020), https://doi.org/10.3389/felec.2020.594003.
[56] L. Qiu, et al., Extremely low density and super-compressible graphene cellular materials, Adv. Mater. 29 (36) (2017), https://doi.org/10.1002/adma.201701553.
[57] A. Nieto, B. Boesl, A. Agarwal, Multi-scale intrinsic deformation mechanisms of 3D graphene foam, Carbon 85 (2015), https://doi.org/10.1016/j.carbon.2015.01.003.
[58] K.M. Yocham, et al., Mechanical properties of graphene foam and graphene foam—tissue composites, Adv. Eng. Mater. 20 (9) (2018), https://doi.org/10.1002/adem.201800166.
[59] A. Pedrielli, S. Taioli, G. Garberoglio, N.M. Pugno, Mechanical and thermal properties of graphene random nanofoams via molecular dynamics simulations, Carbon 132 (2018), https://doi.org/10.1016/j.carbon.2018.02.081.
[60] X. Xu, et al., Three dimensionally free-formable graphene foam with designed structures for energy and environmental applications, ACS Nano 14 (1) (2020) 937–947, https://doi.org/10.1021/acsnano.9b08191.
[61] Y. Ma, H. Chang, M. Zhang, Y. Chen, Graphene-based materials for lithium-ion hybrid supercapacitors, Adv. Mater. 27 (36) (2015) 5296–5308, https://doi.org/10.1002/adma.201501622.
[62] J. Li, et al., Highly stretchable and sensitive strain sensor based on facilely prepared three-dimensional graphene foam composite, ACS Appl. Mater. Interfaces 8 (29) (2016) 18954–18961, https://doi.org/10.1021/acsami.6b05088.
[63] A. Idowu, B. Boesl, A. Agarwal, 3D graphene foam-reinforced polymer composites—a review, Carbon 135 (2018) 52–71, https://doi.org/10.1016/j.carbon.2018.04.024.
[64] D. Pan, C. Wang, T.C. Wang, Y. Yao, Graphene foam: uniaxial tension behavior and fracture mode based on a mesoscopic model, ACS Nano 11 (9) (2017) 8988–8997, https://doi.org/10.1021/acsnano.7b03474.
[65] D. Pan, C. Wang, X. Wang, Graphene foam: hole-flake network for uniaxial supercompression and recovery behavior, ACS Nano 12 (11) (2018) 11491–11502, https://doi.org/10.1021/acsnano.8b06558.
[66] T. Yang, C. Wang, Z. Wu, Strain hardening in graphene foams under shear, ACS Omega 6 (35) (2021) 22780–22790, https://doi.org/10.1021/acsomega.1c03127.
[67] J.A. Baimova, L.K. Rysaeva, B. Liu, S.V. Dmitriev, K. Zhou, From flat graphene to bulk carbon nanostructures, Phys. Status Solidi Basic Res. 252 (7) (2015) 1502–1507, https://doi.org/10.1002/pssb.201451654.
[68] S.P. Patil, P. Shendye, B. Markert, Molecular investigation of mechanical properties and fracture behavior of graphene aerogel, J. Phys. Chem. B 124 (28) (2020) 6132–6139, https://doi.org/10.1021/acs.jpcb.0c03977.
[69] Y. Yin, Z. Cheng, L. Wang, K. Jin, W. Wang, Graphene, a material for high temperature devices—intrinsic carrier density, carrier drift velocity, and lattice energy, Sci. Rep. 4 (2014), https://doi.org/10.1038/srep05758.
[70] P. Huang, et al., Graphene film for thermal management: a review, Nano Mater. Sci. 3 (1) (2021) 1–16, https://doi.org/10.1016/j.nanoms.2020.09.001.
[71] T.H. Fang, W.J. Chang, J.C. Yang, Temperature effect on mechanical properties of graphene sheets under tensile loading, Dig. J. Nanomater. Biostructures 7 (4) (2012) 1811–1816.
[72] Y. Wu, et al., Three-dimensionally bonded spongy graphene material with super compressive elasticity and near-zero Poisson's ratio, Nat. Commun. 6 (2015), https://doi.org/10.1038/ncomms7141.
[73] J.F. Douglas, B.A.P. Betancourt, X. Tong, H. Zhang, Localization model description of diffusion and structural relaxation in glass-forming Cu-Zr alloys, J. Stat. Mech. Theory Exp. 2016 (5) (2016) 054048, https://doi.org/10.1088/1742-5468/2016/05/054048.
[74] K. Min, N.R. Aluru, Mechanical properties of graphene under shear deformation, Appl. Phys. Lett. 98 (1) (2011), https://doi.org/10.1063/1.3534787.
[75] B. Ji, H. Gao, Mechanical principles of biological nanocomposites, Annu. Rev. Mater. Res. 40 (2010) 77–100, https://doi.org/10.1146/annurev-matsci-070909-104424.
[76] W. Xia, L. Ruiz, N.M. Pugno, S. Keten, Critical length scales and strain localization govern the mechanical performance of multi-layer graphene assemblies, Nanoscale 8 (12) (2016) 6456–6462, https://doi.org/10.1039/c5nr08488a.
[77] G. Yang, L. Li, W.B. Lee, M.C. Ng, Structure of graphene and its disorders: a review, Sci. Technol. Adv. Mater. 19 (1) (2018) 613–648, https://doi.org/10.1080/14686996.2018.1494493.
[78] J.H. Lee, P.E. Loya, J. Lou, E.L. Thomas, Dynamic mechanical behavior of multilayer graphene via supersonic projectile penetration, Science 346 (6213) (2014) 1092–1096, https://doi.org/10.1126/science.1258544.
[79] P. Das, S. Schipmann, J.M. Malho, B. Zhu, U. Klemradt, A. Walther, Facile access to large-scale, self-assembled, nacre-inspired, high-performance materials with tunable nanoscale periodicities, ACS Appl. Mater. Interfaces 5 (9) (2013) 3738–3747, https://doi.org/10.1021/am400350q.
[80] D.D. Hsu, W. Xia, S.G. Arturo, S. Keten, Systematic method for thermomechanically consistent coarse-graining: a universal model for methacrylate-based polymers, J. Chem. Theory Comput. 10 (6) (2014) 2514–2527, https://doi.org/10.1021/ct500080h.
[81] S. Chandrasekaran, N. Sato, F. Tölle, R. Mülhaupt, B. Fiedler, K. Schulte, Fracture toughness and failure mechanism of graphene based epoxy composites, Compos. Sci. Technol. 97 (2014) 90–99, https://doi.org/10.1016/j.compscitech.2014.03.014.
[82] Y. Wang, Z. Meng, Mechanical and viscoelastic properties of wrinkled graphene reinforced polymer nanocomposites—effect of interlayer sliding within graphene sheets, Carbon 177 (2021) 128–137, https://doi.org/10.1016/j.carbon.2021.02.071.
[83] C.C. Chiang, J. Breslin, S. Weeks, Z. Meng, Dynamic mechanical behaviors of nacre-inspired graphene-polymer nanocomposites depending on internal nanostructures, Extreme Mech. Lett. 49 (2021), https://doi.org/10.1016/j.eml.2021.101451.
Index
Note: Page numbers followed by f indicate figures and t indicate tables.
A
  AB2C Heusler compounds, 225–226, 225f
  Ab initio molecular dynamics (AIMD), 42, 44
  Adaptive clustering-based reduced-order models (ACROMs), 158
  Adaptive self-consistent clustering analysis (ASCA), 158
  Additive manufacturing (AM)
    directed energy deposition (DED), 334
    mechanical properties simulation, 369–380, 370f
      grain residual stress simulation, 371–376, 373f
      multiple tracks and layers, thermal stress simulation, 369–371, 372f
      structure-property relationship, 377–380
    microstructure evolution, 354–368, 355f
      dendrite growth simulation, 356–360, 357f, 359f
      EB-PBF, precipitation process in, 364–368, 366–368f
      grain evolution simulation, 360–363, 362–364f
    powder bed fusion (PBF), 334
    selective laser melting (SLM), 334
    simulating additive manufacturing process, 336–354, 336f
      heat source modeling, 339–344
      metal evaporation modeling, 345–347
      molten pool dynamics, 337–339, 347–350, 348f
      multilayer simulation, 350–354
      multitrack simulation, 350–354
      single-track simulation, 350–354, 351f
  All-atomistic modeling techniques, 43–44
  AlphaFold network, 230–233
  Atomic multipole optimized energetics for biomolecular applications (AMOEBA), 42
  Atomistic molecular modeling methods, 75–77
    advantages, 40–41
    definition, 39–41
    force fields, modeling interatomic interactions, 45–53, 46t, 47f
      bonded interactions, 47–50, 48–49f
      challenges and limitations, 52–53
      nonbonded interactions, 50–51, 51f
      parameterization, 52
    history, 37–38
    molecular dynamics (MD), 53–67
    significance, 37–38

B
  Basis sets, 15–19
    localized basis sets, 15–17
    plane waves, 17–19, 17f
  Becke-Lee-Yang-Parr (BLYP), 11–12
  Biomaterials designing, machine learning in, 228–233
  Bonded interactions, 47–50, 48–49f
  Born-Oppenheimer MD (BOMD), 42
  Boundary value problems (BVPs), 117
C
  Carbon fiber-reinforced polymer (CFRP)
    interphase, nanoscale characterization of, 242–246, 243f, 247t
    material design, 240
    multiscale modeling framework, 241–242, 241f
    unidirectional (UD) CFRP composites
      boundary conditions, 249–250
      elastic-plastic-damage model, homogenized UD CFRP composites, 263–269
      failure analysis of, 250–252, 251f
      macroscale model of, 281–286, 284f, 285t
      microstructure components, UD RVE model and constitutive laws for, 246–249, 248f, 253–254t
      multiaxial stress state, 252–263
      proposed elastic-plastic-damage model, 263–269, 266f, 267t, 268f
      RVE size, 249–250
      uniaxial stress state, 250–252
    woven composites, macroscale model of, 281–286, 284f, 285t
    woven composites, mesoscale model development, 269–281
      constitutive and damage laws, 272–273, 273t, 273f
      damage initiation and propagation process, 276–281, 276f, 278–282f
      experimental and computational stress-strain curves, 273–276, 274f, 275–276t
      mesoscale RVE model generation, 270–272, 271–272f
      woven RVE model, 273–281
  Car-Parrinello MD (CPMD), 42
  Centroid molecular dynamics (CMD), 43–44
  Chemistry-specific coarse-graining methods, 89–106
    energy renormalization (ER), 96–100, 98–99f, 99t
    force matching (FM), 100–101
    inverse Monte Carlo (IMC), 95–96
    iterative Boltzmann inversion (IBI), 91–95, 92f
    Martini approach, 102–104, 103f
    relative entropy, 101–102
    strain energy conservation, 104–106, 106f, 107t
    united-atom (UA) approximation, 89–91, 90f
  Classic generic coarse-graining model, 85–87
Index
Classic molecular dynamics (MD), 81–82
Clustering-based domain decomposition, 131–134, 133f, 135f, 151–152
Clustering-based reduced-order models (CROMs), fast homogenization
  computational homogenization, 116–118
    boundary value problems (BVPs), 117
    direct numerical simulation (DNS), 118
    fast Fourier transform (FFT)-based homogenization, 117–118
    finite element method (FEM), 117–118
  definition, 119–120
  Integrated Computational Materials Engineering (ICME), 116
  micromechanics, 123–128
    auxiliary homogeneous problem, 125–127
    Green operator under linear elastic isotropy, 127
    Lippmann-Schwinger integral equation, 127–128
    problem formulation, 124
  numerical application, 150–157
    clustering-based domain decomposition, 151–152
    compute cluster interaction tensors, 152
    conduct DNS linear elastic analyses, 151
    heterogeneous material RVE, 150
    multiscale equilibrium problem, 152–157, 154–155f
    offline stage, 151–152
    online stage, 152–157
  offline stage
    clustering-based domain decomposition, 131–134, 133f, 135f
    compute cluster interaction tensors, 135–136, 136f
    conduct DNS linear elastic analyses, 129–131
  online stage
    continuous Lippmann-Schwinger integral equation, 137–139
    discretized Lippmann-Schwinger integral equation, 139–142
    homogenized consistent tangent modulus, 144–146
    reduced microscale equilibrium problem, numerical solution, 142–144
    reference homogeneous elastic material, 146
    self-consistent scheme, 146–150
  representative volume elements (RVEs), 116
  self-consistent clustering analysis (SCA), 116, 120–123, 121f
Cluster interaction tensors, 152
Cluster-reduced RVE (CRVE), 133–134, 136f
Coarse-graining (CG) modeling, 75–80, 76f, 79f
  graphene, 393–395, 393f, 395t
  graphene oxide, 395–396, 395f
  mesoscale model of graphene, 396–398, 397–398t
  multilayer graphene, 398–400, 399f, 400t
  overview of, 392
Common heat source, 339–340
Complex optimizations, 208–209
Computational homogenization, 116–118
  boundary value problems (BVPs), 117
  direct numerical simulation (DNS), 118
  fast Fourier transform (FFT)-based homogenization, 117–118
  finite element method (FEM), 117–118
Computational smart polymer design, 317
Compute cluster interaction tensors, 135–136, 136f
Conduct DNS linear elastic analyses, 129–131, 151
Conjugated polymers (CPs), 219–221
Continuous Lippmann-Schwinger integral equation, 137–139
Cross-linking for tuning, 299–303
Crumpled graphene, 401–407
  defects, 404–406, 405f
  self-adhesion effects, 406–407, 407f
  size effects, 401–404, 402f
Crystal plasticity framework, 374–376, 375f
D
Dam break with obstacle, 188–192, 188t, 189–191f, 191t, 193f
Data science, 204–205, 204f
Data selection, 209
David Taylor Model Basin (DTMB) 5415 ship model, 192–197, 195t, 195–197f
Decision tree (DT) algorithm, 210–211, 225–226
Deep learning (DL), 217–218, 218f
Dendrite growth simulation, 356–360, 357f, 359f
Density functional theory (DFT)
  basis sets, 15–19
    localized, 15–17
    plane waves, 17–19, 17f
  Becke-Lee-Yang-Parr (BLYP), 11–12
  exchange-correlation functionals, 10–13, 11f
  generalized gradient approximation (GGA), 11–12
  Hartree-Fock (HF) theory, 12–13
  jellium, 10–11
  Kohn-Sham, 6–9, 10f
  levels of theory, 10–13, 11f
  local density approximation (LDA), 10–11
  overview, 5–6
  Perdew-Burke-Ernzerhof (PBE), 11–12
  Perdew-Wang-1991 (PW91), 11–12
  pseudopotentials, 19–20
  solids, 20–31
    absorption spectra, 29–30, 29f
    adsorption energies, 26–28, 27f
    band structure, 28, 28f
    crystal structure, 20–23, 21–22f
    density of states (DOS), 29–30, 29f
    elastic constants, 23–25
    finding transition state, 30–31, 30f
    surface energy, 25–26, 26f
  uniform electron gas (UEG) model, 10–11
  Van der Waals interactions, 14–15
Directed energy deposition (DED), 334
Direct numerical simulation (DNS), 118
Dirichlet boundary conditions, weak enforcement, 178–179
Discretized Lippmann-Schwinger equilibrium equations, 159–161
Discretized Lippmann-Schwinger integral equation, 139–142
Dissipative particle dynamics (DPD), 82–85, 83f
E
Elastin, 294
Elastomeric biopolymers
  cross-linking for tuning, 299–303
  elastin, 294
  elastomeric protein polymers, 314–319
    computational smart polymer design, 317
    elasticity, structure, and relaxation, 318
    human tropoelastin, molecular model of, 319
    sodium chloride, 318–319
    spider silk’s N-terminal protein domain, 318–319
  elastomeric proteins, 293–294
    secondary and tertiary structure, 298–299
  elastomeric sequences and motifs, 295–297, 297–298f
  intrinsic and extrinsic factors, 303–314
    elastin-based materials, conformational entropic effects, 303–305, 304f
    elastomeric proteins, solvent and hydration effects, 305–308
    temperature, 308–314
  resilin, 294–295
  sequence, 295–299
  silkworm-like polypeptides (SWLPs), 295
  structure, 295–299
Elastomeric proteins, 293–294
  elastin-like polypeptides (ELPs), 305–307, 306f
  hydrophobic hydration and elasticity, 308
  polymers, 314–319
    computational smart polymer design, 317
    elasticity, structure, and relaxation, 318
    human tropoelastin, molecular model, 319
    sodium chloride, 318–319
    spider silk’s N-terminal protein domain, 318–319
  secondary and tertiary structure, 298–299
  solvent and hydration effects, 305–308
  XLPs, hydration level effect, 307–308
Elastomeric sequences and motifs, 295–297, 297–298f
Electron beam absorption modeling, 340–343, 341f
Electron beam-powder bed fusion (EB-PBF), precipitation process, 364–368, 366–368f
Electronic structure methods
  Born-Oppenheimer (BO) approximation, 4
  classical mechanics, 4
  density functional theory (DFT) (see Density functional theory (DFT))
  quantum mechanics (QM), 3–4
Embedded atom method (EAM), 42
Energy renormalization (ER), 96–100, 98–99f, 99t
Ergodic hypothesis, 57
Exchange-correlation functionals, 10–13, 11f
Extension to constitutive nonlinearity, 138
F
Fast Fourier transform (FFT)-based homogenization, 117–118, 130–131
Feature selection, 209
Finite cell method (FCM), 172
Finite element method (FEM), 117–118, 337
Finitely extensible nonlinear elastic (FENE) model, 85–87, 86f, 87t
Finite volume method (FVM), 337
Force fields, modeling interatomic interactions, 45–53, 46t, 47f
  bonded interactions, 47–50, 48–49f
  challenges and limitations, 52–53
  nonbonded interactions, 50–51, 51f
  parameterization, 52
Force matching (FM), 100–101
Free-surface flows, immersogeometric formulation
  finite cell method (FCM), 172
  governing equations, 172–173
  incompressible flows, Navier-Stokes equations, 173
  level set method, 172–173
  methods, 170–171
  numerical examples, 186–197
    dam break with obstacle, 188–192, 188t, 189–191f, 191t, 193f
    David Taylor Model Basin (DTMB) 5415 ship model, 192–197, 195t, 195–197f
    stationary platform, solitary wave impacting, 186–188, 187f
    wave height, 197f
  residual-based variational multiscale (RBVMS), 170–172
  semidiscrete formulation
    Dirichlet boundary conditions, weak enforcement, 178–179
    redistancing and mass conservation, 176–178
    residual-based variational multiscale method, 174–176
  tetrahedral finite cell method, 179–183
    ray-tracing method, inside-outside test by, 182–183, 183f
    recursive refinement of quadrature points, 179–182, 180–182f
  time integration, 183–186
    fully coupled linear solver, 185–186
    generalized-α method, 183–185
Fully coupled linear solver, 185–186
G
Generalized-α method, 183–185
  multicorrector stage, 184–185
  predictor stage, 184
Generalized generic coarse-graining models, 87–89, 89–90f
Generalized gradient approximation (GGA), 11–12
Generic coarse-graining methods, 85–89
  classic generic coarse-graining model, 85–87
  finitely extensible nonlinear elastic (FENE) model, 85–87, 86f, 87t
  generalized generic coarse-graining models, 87–89, 89–90f
Grain evolution simulation, 360–363, 362–364f
Grain residual stress simulation, 371–376, 373f
  crystal plasticity framework, 374–376, 375f
  simulation, 376, 377f
Graphene, 393–395, 393f, 395t
Graphene foam, mechanical behavior of, 408–409, 408f
Graphene oxide, 395–396, 395f
Green operator singularity, 138
H
Hartree-Fock (HF) theory, 12–13
Hashin-Shtrikman variational principle (HSFE), 120
Heat source modeling, 339–344
  common heat source, 339–340
  electron beam absorption modeling, 340–343, 341f
  laser absorption modeling, 343–344, 343f, 345f
Heterogeneous material RVE, 150
Homogeneous far-field strain, 137
Homogenized consistent tangent modulus, 144–146
Human tropoelastin, molecular model of, 319
I
Incompressible flows, Navier-Stokes equations, 173
Integrated Computational Materials Engineering (ICME), 116
Interphase, nanoscale characterization, 242–246, 243f, 247t
Intrinsic and extrinsic factors, 303–314
  elastin-based materials, conformational entropic effects, 303–305, 304f
  elastomeric proteins, solvent and hydration effects, 305–308
  temperature, 308–314
Inverse Monte Carlo (IMC), 95–96
Iterative Boltzmann inversion (IBI), 91–95, 92f
J
Jellium, 10–11
K
k-nearest neighbor (KNN), 212–213
Knudsen layer, 345–346
Kohn-Sham, 6–9, 10f
L
Langevin dynamics, 82
Laser absorption modeling, 343–344, 343f, 345f
Least absolute shrinkage and selection operator (LASSO) regression, 216
Level set method, 172–173
Levels of theory, 10–13, 11f
Linear algebra, 208
Linear discriminant analysis (LDA), 215
Lippmann-Schwinger equation, 122
Lippmann-Schwinger integral equilibrium equation, 146
Lippmann-Schwinger system, 159
Local density approximation (LDA), 10–11
Local elastic strain concentration tensors, 122
M
Machine learning (ML)
  algorithms, 209–218
  applications, 218–233
    AB2C Heusler compounds, 225–226, 225f
    AlphaFold network, 230–233
    AlphaFold prediction of protein structure, 230
    biomaterials designing, 228–233
    conjugated polymers (CPs), 219–221
    decision tree (DT), 225–226
    machine learning-informed coarse-grained modeling, 227–228, 229f
    material classification, 223–226
    materials properties prediction, 219–223
    molecular force fields, 227
    molecular model development, 227–228
    quantitative structure-property relationship (QSPR), 222–223, 223t, 224f
    quantum dots (QDs), 226, 226f
  classification, 209–213
  data science, 204–205, 204f
  data selection, 209
  decision tree algorithm, 210–211
  deep learning (DL), 217–218, 218f
  definition, 205
  feature reduction methods, 213–215
    linear discriminant analysis (LDA), 215
    principal component analysis (PCA), 213
    T-distributed stochastic neighbor embedding, 214–215
  feature selection, 209
  k-nearest neighbor (KNN), 212–213
  materials development, 203–204
  math preliminaries, 207–209, 208f
    algorithms, 208–209
    complex optimizations, 208–209
    linear algebra, 208
    multivariate calculus, 208
    probability theory and statistics, 208
  random forest (RF) algorithms, 211–212
  regression models, 215–216
    linear regression, 215–216
    polynomial regression, 216
    regularized linear regression, 216
  reinforcement learning (RL), 207
  semisupervised learning, 206–207
  supervised learning, 205–206
  types, 205–207, 206f
  unsupervised learning, 206
Machine learning-informed coarse-grained modeling, 227–228, 229f
Marangoni effect, 338
Martini approach, 102–104, 103f
Material classification, 223–226
Material design, 240
Materials properties prediction, 219–223
Mean-squared displacement (MSD), 67
Mechanical properties simulation, 369–380, 370f
  grain residual stress simulation, 371–376, 373f
  multiple tracks and layers, thermal stress simulation, 369–371, 372f
  structure-property relationship, multiscale modeling, 377–380
Mesoscale model of graphene, 396–398, 397–398t
Micromechanics, preliminaries on, 123–128
  auxiliary homogeneous problem, 125–127
  Green operator under linear elastic isotropy, 127
  Lippmann-Schwinger integral equation, 127–128
  problem formulation, 124
Microstructure evolution, 354–368, 355f
  dendrite growth simulation, 356–360, 357f, 359f
  electron beam-powder bed fusion (EB-PBF), precipitation process, 364–368, 366–368f
  grain evolution simulation, 360–363, 362–364f
Minimum image convention, 69–70
Molecular dynamics (MD), 42–43
  calculate properties from, 62–67
    dynamical properties, 66–67, 66f
    radial distribution functions (RDF), 64, 65f
    structural properties, 62–66, 64f
    thermodynamic properties, 62–66
  at constant temperature and/or pressure, 55–61, 58f, 60–61f
  dynamics of atoms, 53–55, 55f
Molecular force fields, 227
Molecular model development, 227–228
Monte Carlo method (MC), 42–43, 340
Multiaxial stress state, UD CFRP composites
  failure envelopes of σ11-τ12, 254–257, 255f
  failure envelopes of σ22-τ12 and σ22-τ23, 252–254, 254f
  proposed failure criteria, 257–259, 258t, 259f
  σ11-τ12, 262–263, 262f, 262t
  σ22-τ12, 259–262
  σ22-τ23, 263, 263f, 263t
  validation, 259–263
Multilayer graphene, 398–400, 399f, 400t
Multilayer graphene assemblies (MLGs), 412–415, 414f
Multilayer graphene-reinforced nanocomposites, 415–418, 416–417f
Multiple tracks and layers, thermal stress simulation, 369–371, 372f
Multiscale coarse-graining methods, 85–106
  chemistry-specific coarse-graining methods, 89–106
  generic coarse-graining methods, 85–89
Multiscale equilibrium problem, 152–157, 154–155f
Multiscale modeling framework, 241–242, 241f
Multivariate calculus, 208
N
Nanostructured supramolecular assemblies, 407–412
  mechanical behavior of graphene foam, 408–409, 408f
  multilayer graphene assemblies (MLGs), 412–415, 414f
  multilayer graphene-reinforced nanocomposites, 415–418, 416–417f
  temperature effects, 409–412, 410f
Navier-Stokes equations, incompressible flows, 173
Newton-Raphson method, 142
Nonbonded interactions, 50–51, 51f
Nuclear quantum effects (NQEs), 43–44
O
Ortho-terphenyl (OTP), 58–59, 65
P
Parameterization, 52
Particle-based mesoscale modeling
  atomistic molecular modeling methods, 75–77
  classic molecular dynamics (MD), 81–82
  coarse-graining (CG) modeling, 75–80, 76f, 79f
  dissipative particle dynamics (DPD), 82–85, 83f
  Langevin dynamics, 82
  multiscale coarse-graining methods, 85–106
    chemistry-specific coarse-graining methods, 89–106
    generic coarse-graining methods, 85–89
Particle-particle particle-mesh (PPPM), 70
Path integral molecular dynamics (PIMD), 43–44
Perdew-Burke-Ernzerhof (PBE), 11–12
Perdew-Wang-1991 (PW91), 11–12
Periodic boundary conditions (PBCs), 69
Powder bed fusion (PBF), 334
Principal component analysis (PCA), 213
Probability theory and statistics, 208
Pseudopotentials, 19–20
Q
Quantitative structure-property relationship (QSPR), 222–223, 223t, 224f
Quantum dots (QDs), 226, 226f
Quantum mechanical scattering theory, 128
Quantum mechanics (QM), 3–4
R
Radial distribution functions (RDF), 64, 65f
Random forest (RF) algorithms, 211–212
Ray-tracing method, inside-outside test, 182–183, 183f
Recursive refinement of quadrature points, 179–182, 180–182f
Redistancing and mass conservation, 176–178
Reduced microscale equilibrium problem, numerical solution, 142–144
Reference homogeneous elastic material, 146
Regression-based self-consistent scheme, 147, 161
Regression models, 215–216
  linear regression, 215–216
  polynomial regression, 216
  regularized linear regression, 216
Reinforcement learning (RL), 207
Relative entropy, 101–102
Representative volume elements (RVEs), 116–117, 158
Residual-based variational multiscale (RBVMS) method, 170–172, 174–176
Resilin, 294–295
Ridge regression, 216
Ring polymer molecular dynamics (RPMD), 43–44
S
Selective laser melting (SLM), 334
Self-consistent clustering analysis (SCA), 116, 120–123, 121f, 154–156, 155f
Self-consistent scheme, 146–150, 161–163
Semidiscrete formulation
  Dirichlet boundary conditions, weak enforcement, 178–179
  redistancing and mass conservation, 176–178
  residual-based variational multiscale method, 174–176
Semisupervised learning, 206–207
Silkworm-like polypeptides (SWLPs), 295
Simulating additive manufacturing process, 336–354, 336f
  heat source modeling, 339–344
  metal evaporation modeling, 345–347
  molten pool dynamics, 337–339, 347–350, 348f
  multilayer simulation, 350–354
  multitrack simulation, 350–354
  single-track simulation, 350–354, 351f
Sodium chloride, 318–319
Solids, 20–31
  absorption spectra, 29–30, 29f
  adsorption energies, 26–28, 27f
  band structure, 28, 28f
  crystal structure, 20–23, 21–22f
  density of states (DOS), 29–30, 29f
  elastic constants, 23–25
  finding transition state, 30–31, 30f
  surface energy, 25–26, 26f
Spider silk’s N-terminal protein domain, 318–319
Stationary platform, solitary wave impacting, 186–188, 187f
Strain energy conservation, 104–106, 106f, 107t
Structure-property relationship, multiscale modeling, 377–380
  grain structure reconstruction, 377–379, 378–379f
  polycrystal-scale plasticity model, 380, 381f
Supervised learning, 205–206
Supramolecular assemblies
  crumpled graphene, 401–407
    defects effects, 404–406, 405f
    self-adhesion effects, 406–407, 407f
    size effects, 401–404, 402f
  future outlook, 418
  nanostructured supramolecular assemblies, 407–412
    mechanical behavior of graphene foam, 408–409, 408f
    multilayer graphene assemblies (MLGs), 412–415, 414f
    multilayer graphene-reinforced nanocomposites, 415–418, 416–417f
    temperature effects, 409–412, 410f
T
T-distributed stochastic neighbor embedding, 214–215
Temperature, 308–314, 409–412, 410f
  LCST/UCST in resilin- and elastin-based materials, 310–314, 311f, 312t
  resilin- and elastin-based materials, 309–310
Tetrahedral finite cell method, 179–183
  ray-tracing method, inside-outside test, 182–183, 183f
  recursive refinement of quadrature points, 179–182, 180–182f
Time integration, 183–186
  fully coupled linear solver, 185–186
  generalized-α method, 183–185
Two-dimensional (2D) materials, coarse-graining (CG) modeling
  graphene, 393–395, 393f, 395t
  graphene oxide, 395–396, 395f
  mesoscale model of graphene, 396–398, 397–398t
  multilayer graphene, 398–400, 399f, 400t
  overview, 392
U
Unidirectional (UD) CFRP composites
  boundary conditions, 249–250
  elastic-plastic-damage model, homogenized UD CFRP composites, 263–269
  failure analysis, 250–252, 251f
  macroscale model, 281–286, 284f, 285t
  microstructure components, UD RVE model and constitutive laws, 246–249, 248f, 253–254t
  multiaxial stress state, 252–263
  proposed elastic-plastic-damage model, 263–269, 266f, 267t, 268f
  RVE size, 249–250
  uniaxial stress state, 250–252
Uniform electron gas (UEG) model, 10–11
United-atom (UA) approximation, 89–91, 90f
Unsupervised learning, 206
V
Van der Waals interactions, 14–15
Velocity Verlet algorithm, 54
Volume of fluid (VoF), 338
W
Wave height, 197f
Woven composites
  macroscale model, 281–286, 284f, 285t
  mesoscale model development, 269–281
    constitutive and damage laws, 272–273, 273t, 273f
    damage initiation and propagation process, 276–281, 276f, 278–282f
    experimental and computational stress-strain curves, 273–276, 274f, 275–276t
    mesoscale RVE model generation, 270–272, 271–272f
    woven RVE model, 273–281