Materials Kinetics 9780128239070


MATERIALS KINETICS Transport and Rate Phenomena

This book belongs to Alice Cartes ([email protected])

Copyright Elsevier 2023


MATERIALS KINETICS Transport and Rate Phenomena

JOHN C. MAURO The Pennsylvania State University University Park, Pennsylvania


Elsevier
Radarweg 29, PO Box 211, 1000 AE Amsterdam, Netherlands
The Boulevard, Langford Lane, Kidlington, Oxford OX5 1GB, United Kingdom
50 Hampshire Street, 5th Floor, Cambridge, MA 02139, United States

Copyright © 2021 Elsevier Inc. All rights reserved.

No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording, or any information storage and retrieval system, without permission in writing from the publisher. Details on how to seek permission, further information about the Publisher's permissions policies and our arrangements with organizations such as the Copyright Clearance Center and the Copyright Licensing Agency, can be found at our website: www.elsevier.com/permissions. This book and the individual contributions contained in it are protected under copyright by the Publisher (other than as may be noted herein).

Notices

Knowledge and best practice in this field are constantly changing. As new research and experience broaden our understanding, changes in research methods, professional practices, or medical treatment may become necessary.

Practitioners and researchers must always rely on their own experience and knowledge in evaluating and using any information, methods, compounds, or experiments described herein. In using such information or methods they should be mindful of their own safety and the safety of others, including parties for whom they have a professional responsibility.

To the fullest extent of the law, neither the Publisher nor the authors, contributors, or editors assume any liability for any injury and/or damage to persons or property as a matter of products liability, negligence or otherwise, or from any use or operation of any methods, products, instructions, or ideas contained in the material herein.
Library of Congress Cataloging-in-Publication Data
A catalog record for this book is available from the Library of Congress

British Library Cataloguing-in-Publication Data
A catalogue record for this book is available from the British Library

ISBN: 978-0-12-823907-0

For information on all Elsevier publications visit our website at https://www.elsevier.com/books-and-journals

About the Cover
Art: "Moments in Love: Mean, Variance, Skew, Kurtosis"
Jane Cook, 2018, Corning, New York
Acrylic and tissue paper on canvas, 61 cm x 45 cm

The title of this painted collage is a word-play on the title of the song "Moments in Love" by the 1980s new wave band Art of Noise. The canvas is painted red and overlaid with crinkled red tissue, over which four roughly rectangular forms are positioned essentially squared with each other and the canvas. The piece is the artist's nerdy commentary on the utility and futility of analysis in matters of the heart. One might find joy amongst the scattered, undulating hills and valleys of the underlying "function" of love without knowing more details of that function; or, one can deploy a knowledge of statistics to extract the moments of the distribution of highs and lows. Neither method of experience is superior; they each provide unique insights.

Publisher: Matthew Deans
Acquisitions Editor: Dennis McGonagle
Editorial Project Manager: Chiara Giglio
Production Project Manager: Nirmala Arumugam
Cover Designer: Miles Hitchen
Typeset by TNQ Technologies

Dedicated to my loving wife and daughter, Yihong and Sofia Mauro


Contents

Foreword
Preface
Acknowledgments

1. Thermodynamics vs. Kinetics
   1.1. What is Equilibrium?
   1.2. Thermodynamics vs. Kinetics
   1.3. Spontaneous and Non-Spontaneous Processes
   1.4. Microscopic Basis of Entropy
   1.5. First Law of Thermodynamics
   1.6. Second Law of Thermodynamics
   1.7. Third Law of Thermodynamics
   1.8. Zeroth Law of Thermodynamics
   1.9. Summary
   Exercises
   References

2. Irreversible Thermodynamics
   2.1. Reversible and Irreversible Processes
   2.2. Affinity
   2.3. Fluxes
   2.4. Entropy Production
   2.5. Purely Resistive Systems
   2.6. Linear Systems
   2.7. Onsager Reciprocity Theorem
   2.8. Thermophoresis
   2.9. Thermoelectric Materials
   2.10. Electromigration
   2.11. Piezoelectric Materials
   2.12. Summary
   Exercises
   References

3. Fick's Laws of Diffusion
   3.1. Fick's First Law
   3.2. Fick's Second Law
   3.3. Driving Forces for Diffusion
   3.4. Temperature Dependence of Diffusion
   3.5. Interdiffusion
   3.6. Measuring Concentration Profiles
   3.7. Tracer Diffusion
   3.8. Summary
   Exercises
   References

4. Analytical Solutions of the Diffusion Equation
   4.1. Fick's Second Law with Constant Diffusivity
   4.2. Plane Source in One Dimension
   4.3. Method of Reflection and Superposition
   4.4. Solution for an Extended Source
   4.5. Bounded Initial Distribution
   4.6. Method of Separation of Variables
   4.7. Method of Laplace Transforms
   4.8. Anisotropic Diffusion
   4.9. Concentration-Dependent Diffusivity
   4.10. Time-Dependent Diffusivity
   4.11. Diffusion in Other Coordinate Systems
   4.12. Summary
   Exercises
   References

5. Multicomponent Diffusion
   5.1. Introduction
   5.2. Matrix Formulation of Diffusion in a Ternary System
   5.3. Solution by Matrix Diagonalization
   5.4. Uphill Diffusion
   5.5. Summary
   Exercises
   References

6. Numerical Solutions of the Diffusion Equation
   6.1. Introduction
   6.2. Dimensionless Variables
   6.3. Physical Interpretation of the Finite Difference Method
   6.4. Finite Difference Solutions
   6.5. Considerations for Numerical Solutions
   6.6. Summary
   Exercises
   References

7. Atomic Models for Diffusion
   7.1. Introduction
   7.2. Thermally Activated Atomic Jumping
   7.3. Square Well Potential
   7.4. Parabolic Well Potential
   7.5. Particle Escape Probability
   7.6. Mean Squared Displacement of Particles
   7.7. Einstein Diffusion Equation
   7.8. Moments of a Function
   7.9. Diffusion and Random Walks
   7.10. Summary
   Exercises
   References

8. Diffusion in Crystals
   8.1. Atomic Mechanisms for Diffusion
   8.2. Diffusion in Metals
   8.3. Correlated Walks
   8.4. Defects in Ionic Crystals
   8.5. Schottky and Frenkel Defects
   8.6. Equilibrium Constants for Defect Reactions
   8.7. Diffusion in Ionic Crystals
   8.8. Summary
   Exercises
   References

9. Diffusion in Polycrystalline Materials
   9.1. Defects in Polycrystalline Materials
   9.2. Diffusion Mechanisms in Polycrystalline Materials
   9.3. Regimes of Grain Boundary Diffusion
   9.4. Diffusion Along Stationary vs. Moving Grain Boundaries
   9.5. Atomic Mechanisms of Fast Grain Boundary Diffusion
   9.6. Diffusion Along Dislocations
   9.7. Diffusion Along Free Surfaces
   9.8. Summary
   Exercises
   References

10. Motion of Dislocations and Interfaces
   10.1. Driving Forces for Dislocation Motion
   10.2. Dislocation Glide and Climb
   10.3. Driving Forces for Interfacial Motion
   10.4. Motion of Crystal-Vapor Interfaces
   10.5. Crystalline Interface Motion
   10.6. Summary
   Exercises
   References

11. Morphological Evolution in Polycrystalline Materials
   11.1. Driving Forces for Surface Morphological Evolution
   11.2. Morphological Evolution of Isotropic Surfaces
   11.3. Evolution of Anisotropic Surfaces
   11.4. Particle Coarsening
   11.5. Grain Growth
   11.6. Diffusional Creep
   11.7. Sintering
   11.8. Summary
   Exercises
   References

12. Diffusion in Polymers and Glasses
   12.1. Introduction
   12.2. Stokes-Einstein Relation
   12.3. Freely Jointed Chain Model of Polymers
   12.4. Reptation
   12.5. Chemically Strengthened Glass by Ion Exchange
   12.6. Ion-Exchanged Glass Waveguides
   12.7. Anti-Microbial Glass
   12.8. Proton Conducting Glasses
   12.9. Summary
   Exercises
   References

13. Kinetics of Phase Separation
   13.1. Thermodynamics of Mixing
   13.2. Immiscibility and Spinodal Domes
   13.3. Phase Separation Kinetics
   13.4. Cahn-Hilliard Equation
   13.5. Phase-Field Modeling
   13.6. Summary
   Exercises
   References

14. Nucleation and Crystallization
   14.1. Kinetics of Crystallization
   14.2. Classical Nucleation Theory
   14.3. Homogeneous Nucleation
   14.4. Heterogeneous Nucleation
   14.5. Nucleation Rate
   14.6. Crystal Growth Rate
   14.7. Johnson-Mehl-Avrami Equation
   14.8. Time-Temperature-Transformation Diagram
   14.9. Glass-Ceramics
   14.10. Summary
   Exercises
   References

15. Advanced Nucleation Theories
   15.1. Limitations of Classical Nucleation Theory
   15.2. Statistical Mechanics of Nucleation
   15.3. Diffuse Interface Theory
   15.4. Density Functional Theory
   15.5. Implicit Glass Model
   15.6. Summary
   Exercises
   References

16. Viscosity of Liquids
   16.1. Introduction
   16.2. Viscosity Reference Points
   16.3. Viscosity Measurement Techniques
   16.4. Liquid Fragility
   16.5. Vogel-Fulcher-Tammann (VFT) Equation for Viscosity
   16.6. Avramov-Milchev (AM) Equation for Viscosity
   16.7. Adam-Gibbs Entropy Model
   16.8. Mauro-Yue-Ellison-Gupta-Allan (MYEGA) Equation for Viscosity
   16.9. Infinite Temperature Limit of Viscosity
   16.10. Kauzmann Paradox
   16.11. Fragile-to-Strong Transition
   16.12. Non-Newtonian Viscosity
   16.13. Volume Viscosity
   16.14. Summary
   Exercises
   References

17. Nonequilibrium Viscosity and the Glass Transition
   17.1. Introduction
   17.2. The Glass Transition
   17.3. Thermal History Dependence of Viscosity
   17.4. Modeling of Nonequilibrium Viscosity
   17.5. Nonequilibrium Viscosity and Fragility
   17.6. Composition Dependence of Viscosity
   17.7. Viscosity of Medieval Cathedral Glass
   17.8. Summary
   Exercises
   References

18. Energy Landscapes
   18.1. Potential Energy Landscapes
   18.2. Enthalpy Landscapes
   18.3. Landscape Kinetics
   18.4. Disconnectivity Graphs
   18.5. Locating Inherent Structures and Transition Points
   18.6. ExplorerPy
   18.7. Summary
   Exercises
   References

19. Broken Ergodicity
   19.1. What is Ergodicity?
   19.2. Deborah Number
   19.3. Broken Ergodicity
   19.4. Continuously Broken Ergodicity
   19.5. Hierarchical Master Equation Approach
   19.6. Thermodynamic Implications of Broken Ergodicity
   19.7. Summary
   Exercises
   References

20. Master Equations
   20.1. Transition State Theory
   20.2. Master Equations
   20.3. Degenerate Microstates
   20.4. Metabasin Approach
   20.5. Partitioning of the Landscape
   20.6. Accessing Long Time Scales
   20.7. KineticPy
   20.8. Summary
   Exercises
   References

21. Relaxation of Glasses and Polymers
   21.1. Introduction
   21.2. Fictive Temperature
   21.3. Tool's Equation
   21.4. Ritland Crossover Effect
   21.5. Fictive Temperature Distributions
   21.6. Property Dependence of Fictive Temperature
   21.7. Kinetic Interpretation of Fictive Temperature
   21.8. Stretched Exponential Relaxation
   21.9. Prony Series Description
   21.10. Relaxation Kinetics
   21.11. RelaxPy
   21.12. Stress vs. Structural Relaxation
   21.13. Maxwell Relation
   21.14. Secondary Relaxation
   21.15. Summary
   Exercises
   References

22. Molecular Dynamics
   22.1. Multiscale Materials Modeling
   22.2. Quantum Mechanical Techniques
   22.3. Principles of Molecular Dynamics
   22.4. Interatomic Potentials
   22.5. Ensembles
   22.6. Integrating the Equations of Motion
   22.7. Performing Molecular Dynamics Simulations
   22.8. Thermostats
   22.9. Barostats
   22.10. Reactive Force Fields
   22.11. Tools of the Trade
   22.12. Summary
   Exercises
   References

23. Monte Carlo Techniques
   23.1. Introduction
   23.2. Monte Carlo Integration
   23.3. Monte Carlo in Statistical Mechanics
   23.4. Markov Processes
   23.5. The Metropolis Method
   23.6. Molecular Dynamics vs. Monte Carlo
   23.7. Sampling in Different Ensembles
   23.8. Kinetic Monte Carlo
   23.9. Inherent Structure Density of States
   23.10. Random Number Generators
   23.11. Summary
   Exercises
   References

24. Fluctuations in Condensed Matter
   24.1. What are Fluctuations?
   24.2. Statistical Mechanics of Fluctuations
   24.3. Fluctuations in Broken Ergodic Systems
   24.4. Time Correlation Functions
   24.5. Dynamical Heterogeneities
   24.6. Nonmonotonic Relaxation of Fluctuations
   24.7. Industrial Example: Fluctuations in High Performance Display Glass
   24.8. Summary
   Exercises
   References

25. Chemical Reaction Kinetics
   25.1. Rate of Reactions
   25.2. Order of Reactions
   25.3. Equilibrium Constants
   25.4. First-Order Reactions
   25.5. Higher Order Reactions
   25.6. Reactions in Series
   25.7. Temperature Dependence of Reaction Rates
   25.8. Heterogeneous Reactions
   25.9. Solid State Transformation Kinetics
   25.10. Summary
   Exercises
   References

26. Thermal and Electrical Conductivities
   26.1. Transport Equations
   26.2. Thermal Conductivity
   26.3. Electrical Conductivity
   26.4. Varistors and Thermistors
   26.5. Summary
   Exercises
   References

Index


Foreword

Tis true without lying, certain & most true. That which is below is like that which is above & that which is above is like that which is below to do the miracles of one only thing And as all things have been & arose from one by the mediation of one: so all things have their birth from this one thing by adaptation.
Sir Isaac Newton, excerpt from “The Emerald Tablet”

The original author of “The Emerald Tablet,” a seminal work of alchemical thinking, was a now-unknown 6th-8th Century Arabic mystic. A millennium later, Newton made his translation from a version in Latin, with the first key lines given above. Although the phrase “As above, so below” is now popularized in contemporary metaphysical lore, I think it likely that Newton could sense, if not know, that a deeper truth of Nature was at the core of this concept. But it would be 300 years before Albert Einstein, another genius who could see beyond the common dimensional and mathematical restrictions of his time, would unite the “above” and “below” in his diffusion equation, which connected the microscopic motion of individual atoms to the macroscopic flow of large systems. And that “one thing,” from which this scale-bridging understanding arises, is what floats now before your eyes: Materials Kinetics.

In the song “Heaven” by the Talking Heads, David Byrne croons:

Heaven
Heaven is a place
A place where nothing ever happens

This view of a heavenly state is a state of thermodynamic equilibrium. There are no reservoirs of potential energy to dissipate. No chemical reactions to go forward; no elevated weights on pulleys waiting to do work; no electrons waiting to discharge from capacitive plates. It would seem an engineering student’s Heaven.



But reality is the here-and-now. And stuff is happening! And if it’s not happening, it’s probably trying to. Things ARE kinetics! We live life in a universe in motion. We, and all around us, are verbs, as Buckminster Fuller pointed out: “…integral functions of Universe.” We are motivated fundamentally by the possibilities of change. Thermodynamics tells us what can be, what is favored, what is desired. But it is Kinetics that tells us how it can happen, what it’ll take, the ultimate, integrated “pain vs. gain.” It is the traffic report for the freeways of materials science.

I grew up in the 1970s in the suburbs north of Los Angeles, a city famous for its Car Culture. A sprawling megalopolis, LA offered 16-year-old Me uncountable worlds to explore. But along with mastering the critical skills of safe vehicle operation, I needed to know the freeways, highways, boulevards, side streets – and even the alleyways – to get to places safely, on time, and with low stress. I had a paper map in my bedroom that I would study before heading out, and another in the glovebox. But maps were never enough. I also needed the radio’s traffic reports – the real-time data – so I’d know, for example, that the southbound I-405 over the pass was backed up to Victory Boulevard due to a vehicle fire, and so it would, surprisingly, be faster to take I-5 through downtown to I-10 to get to the beach.

Thermodynamics told me I wanted to get to the beach, that there I’d be chill and at deep local equilibrium; my “heaven” was a spot of sand just north of Pepperdine University in Malibu. Kinetic pathways of maps and traffic reports provided the options for roads to take. It was the complex interplay of those two, moment to moment on that journey, that defined the actual lowest-energy pathway that my car should take: Reality is our thermodynamic motivations playing out on the diamond lanes, stoplights, fender-benders, and alleyway shortcuts of serial/parallel kinetic mechanisms.
This textbook unites the elegant potentialities of thermodynamics with the real universe of shifting chemical activity gradients, thermal transients, alternating electromagnetic fields, and deviatoric stresses. Materials Kinetics shows how “one thing by adaptation” from Newton’s alchemical picture manifests the Einsteinian “miracle of the one thing,” the astonishing myriad subtlety of the physical universe. Jane B. Cook The Pennsylvania State University, University Park, PA, United States of America



Notes:
http://webapp1.dlib.indiana.edu/newton/mss/norm/ALCH00017
“Heaven,” lyrics by David Byrne and Jerry Harrison, on Fear of Music, 1979, Sire Records.
I Seem to be a Verb by Buckminster Fuller, ISBN-13: 978-1127231539.


Preface

Kinetic processes are the agents of change in materials science and engineering. Despite the ubiquitous importance of kinetics in materials science, a comprehensive textbook on this topic has been sorely lacking. I hope the current volume will fill this gap, providing a textbook that covers the full breadth of materials kinetics, spanning all classes of materials and covering both the liquid and solid states from the macroscale down to the atomic level. In writing this book, I have attempted to strike a balance among fundamental theory, modeling and simulation techniques, experimental methods, and practical applications in materials design.

The book is written in a pedagogical fashion, aiming to provide a rigorous treatment of materials kinetics in a format that is accessible to both first-year graduate students and upper-level undergraduates in materials science and engineering. It should also be useful as a reference for professionals in the field. Emphasis has been placed on developing the fundamental concepts of materials kinetics and the importance of these concepts in the understanding and design of materials. Real-world examples are given throughout the text, and each chapter ends with a series of exercises that are meant to stimulate critical and creative thinking around relevant concepts.

This volume emerged from teaching MATSE 503, “Kinetics of Materials Processes,” my first-year graduate course at Penn State. The organization of the book follows exactly how I teach the course. I am happy to provide additional course content, including lecture slides, to any instructors who ask. Please feel free to email me at [email protected] to request a copy of this material.

The book begins with an overview of important thermodynamic concepts and the difference between thermodynamics and kinetics (Chapter 1). Thermodynamics is an elegant subject, but one that is often difficult to grasp for many students.
I hope the overview provided in Chapter 1 will help students develop an intuitive understanding of some key thermodynamic concepts which are vital to materials science and engineering. One of my goals with this book is to provide a seamless connection between the thermodynamics and kinetics of materials. With respect to transport and rate phenomena, thermodynamics can often be viewed as the “cause” while kinetics is the “effect.”



With this foundation in place, we introduce the subject of irreversible thermodynamics (Chapter 2), which provides the thermodynamic driving force for kinetic processes. Aside from the excellent chapter in Callen’s classic thermodynamics textbook, there is almost no good overview of irreversible thermodynamics in the literature that is both physically rigorous and clearly accessible to introductory readers. Such was my goal with Chapter 2, which took me more time to write than any other chapter in this book. I hope the reader will find this to be a lucid and useful introduction to Onsager’s formulation of irreversible thermodynamics, as well as its practical importance in materials science.

In Chapter 3, we introduce Fick’s laws of diffusion. Chapter 4 is devoted to analytical solutions of the diffusion equation. Here, I have tried to select an assortment of solutions that I have seen most commonly used in practical problems. Next, in Chapter 5 we consider multicomponent diffusion, a problem first rigorously addressed by my advisor (Arun Varshneya) during his Ph.D. studies at Case Western Reserve University. Of course, many diffusion problems are too difficult to be solved analytically. Chapter 6 is, therefore, devoted to numerical solutions of the diffusion equation using the finite difference method.

While Chapters 1–6 deal with macroscopic thermodynamics and kinetics, in Chapter 7 we dive into the microscopic description of diffusion in terms of atomic jumping. The connection between the microscopic and macroscopic descriptions of diffusion is made via the Einstein diffusion equation. Then in Chapter 8 we specifically deal with atomic jumping and diffusion in single crystals, starting first with perfect crystals and then moving to those having point defects. In Chapter 9 we cover diffusion in polycrystalline materials, accounting for the impact of grain boundaries and free surfaces. Chapter 10 then deals with the kinetics of dislocation and interfacial motion. Chapter 11 concludes our treatment of polycrystalline materials by studying various types of morphological evolution, including particle coarsening, grain growth, diffusional creep, and sintering.

In Chapter 12, we turn our attention to diffusion in polymers and glasses, including reptation and ion exchange processes. Next, in Chapter 13 we cover the thermodynamics and kinetics of phase separation, including droplet nucleation and spinodal decomposition. Chapters 14 and 15 are devoted to crystal nucleation and growth. Chapter 14 presents classical nucleation theory, while Chapter 15 covers several types of advanced nucleation theories. In Chapter 16 we turn our attention to liquid viscosity, including fundamental theory, experimental measurement techniques, and models


describing the viscosity-temperature relationship. This naturally leads to the topics of nonequilibrium viscosity and the glass transition in Chapter 17. The glass transition is a particularly interesting topic since it is intrinsically a kinetic transition, and one with profound thermodynamic consequences, i.e., kinetics is the “cause” and thermodynamics is the “effect”!

Chapter 18 introduces the notion of an energy landscape, which is one of the most powerful and versatile ways of describing both the thermodynamics and kinetics of materials. In Chapter 19, we cover the vitally important, but usually overlooked, topic of broken ergodicity, which is of critical importance for nonequilibrium systems displaying long relaxation times. The kinetics of broken ergodic systems can be rigorously calculated in terms of master equations, which are detailed in Chapter 20. Next, Chapter 21 applies this knowledge to the study of long-time relaxation in glasses and polymers. This chapter includes detailed coverage of fictive temperature and the stretched exponential relaxation function.

The next pair of chapters focuses on useful computer simulation techniques for modeling kinetic phenomena at the atomic level. Chapter 22 covers the fundamentals of molecular dynamics, and Chapter 23 is devoted to Monte Carlo techniques, including the kinetic Monte Carlo approach for accessing long time scales.

In Chapter 24, we discuss fluctuations in condensed matter. Atomic-scale fluctuations in time and space are critically important across all of materials science and engineering. However, this topic is rarely given much attention in standard materials science curricula. With Chapter 24, I hope to bring more attention to this important topic. Chapter 25 provides an introduction to chemical reaction kinetics, i.e., kinetics from the perspective of chemistry and chemical engineering.
Finally, we conclude the book with a brief chapter on thermal and electrical conductivities (Chapter 26), topics that are already part of the standard curriculum in solid state physics courses. More in-depth treatment is left for solid state physics textbooks (see either of the canonical texts by Kittel or by Ashcroft and Mermin).

Whether you are an experimentalist or theorist; a metallurgist, ceramist, glass scientist, or polymer scientist, I hope there is something of interest here for you, and I hope that you will enjoy reading this book as much as I have enjoyed writing it!

John C. Mauro
The Pennsylvania State University
University Park, Pennsylvania


Acknowledgments

I am blessed with an amazing group of students here at Penn State, whose passion for materials research and for making the world a better place is a daily inspiration to me. I would like to thank all my students for making my role as a professor such a fulfilling experience. With respect to the current volume, I would especially like to thank the following Penn State students who provided figures from their research to help elucidate various concepts throughout this book: Sun Hwi Bang, Nicholas Clark, Anthony DeCeanne, George Kotsonis, Rebecca Welch, Collin Wilkinson, and Yongjian Yang.

I would also like to thank all the students in my MATSE 503 “Kinetics of Materials Processes” class for all their great questions and enthusiasm for learning. I would especially like to thank my teaching assistant, Karan Doss, who is a wellspring of great ideas and insights. Discussions with Karan have improved both the course itself and the content of this textbook. I would also like to thank Matthew Mancini and Daniel Cassar for their helpful suggestions on this book.

I owe an ocean of thanks to Brittney Hauke, who designed 37 of the figures in this book. I really appreciate Brittney’s ability to clearly capture key concepts in her illustrations, which will be greatly helpful to students learning the course content for the first time. Brittney has also been amazingly generous with her time, providing careful word-by-word proofreading of most of this book. Thank you, Brittney!

Thanks also to my colleagues at Penn State for all of their support and encouragement. In particular, Venkat Gopalan has been a huge source of encouragement throughout the writing of this book. Sorry for clogging your email with so many chapter files! I’d also like to thank Susan Sinnott, Long-Qing Chen, Jon-Paul Maria, and Clive Randall for their ongoing support and for sharing some of their exciting new research for inclusion as part of this volume. I also owe a huge thanks to Carlo Pantano, Seong Kim, and John Hellmann for their amazing support of both me personally and my research group at Penn State.

I am deeply thankful to Jane Cook for all of her encouragement, inspiration, and friendship over so many years, first at Corning Incorporated and now here at Penn State. It is difficult to express how deeply meaningful it is to me that she would write the Foreword for this book and also offer her beautiful artwork, “Moments in Love,” both as the cover for this book and as one of the figures in Chapter 7. Jane’s work at the intersection of art and science is profoundly inspirational to me, as I know it is to our students.


I am also very fortunate to have a wonderful network of friends and collaborators from around the globe. I appreciate all the good times working together to address our latest research challenges. In the context of the current book, I would like to extend special thanks to Roger Loucks (Alfred University); Morten Smedskjaer (Aalborg University, Denmark); Edgar Zanotto (Federal University of São Carlos, Brazil); Doug Allan, Adam Ellison, Ozgur Gulbiten, and Matt McKenzie (Corning Incorporated); Ken Kelton (Washington University in St. Louis); and Prabhat Gupta (The Ohio State University). Prabhat, in particular, has been like a second advisor to me. We worked very closely together in developing many of the models and techniques described in this book, including the MYEGA equation for liquid viscosity, temperature-dependent constraint theory, the statistical mechanics of continuously broken ergodicity, and a general technique for solving large sets of master equations over disparate time scales. I would also like to express my deepest gratitude to my advisor, Arun Varshneya from Alfred University, who is both my “glass guru” and my academic father. I owe so much to him, for everything he has taught me directly and for all the doors of opportunity he has opened for me. Arun has taught me that being an academic advisor is a lifelong commitment. He is the role model whom I am striving to follow, in his engaging teaching style, his outstanding research at the interface of science and engineering, and (most importantly) in his deep commitment to his students. I would also like to thank my family for their unfailing love and support. From birth to the present day, my parents, Ron and Susie Mauro, have always been such wonderful role models for how to live my life. Finally, the biggest thanks of all goes to my wife, Yihong, and our daughter, Sofia, for their unbounded love and support, especially during the countless hours I’ve spent writing this book. 
I struggle to find the words to express the depth of my love and gratitude for you.

John C. Mauro
The Pennsylvania State University
University Park, Pennsylvania

This book belongs to Alice Cartes ([email protected])

Copyright Elsevier 2023

CHAPTER 1

Thermodynamics vs. Kinetics

1.1 What is Equilibrium?

The field of classical thermodynamics is primarily concerned with equilibrium states, i.e., the states to which systems will eventually evolve and become stable. Although the concept of equilibrium is seemingly intuitive, this simplicity can be deceptive. Perhaps the most insightful definition of equilibrium is given by Richard Feynman [1], who said that equilibrium is “when all the fast things have happened but the slow things have not.” “Surely you’re joking, Mr. Feynman!” we may be tempted to reply. While Feynman’s definition may seem somewhat flippant at first, it incisively captures the importance of time scale in determining what constitutes equilibrium.

Perhaps we should rephrase our response to Mr. Feynman. If equilibrium is “when all the fast things have happened but the slow things have not,” this begs the questions, “What is fast?” and “What is slow?” Indeed, “fast” and “slow” are intrinsically relative terms and depend on your perspective as an observer [2].

Maybe some caffeine will help accelerate our understanding of the relative nature of equilibrium. Let us consider the simple example of a mixture of coffee and cream in a mug, depicted in Figure 1.1. Initially the coffee and the cream are two separate phases. When the cream is poured into the coffee, the mixture homogenizes on a time scale of several seconds. Hence, the coffee-cream mixture achieves an equilibrium within the mug on a fairly short time scale. This would be an appropriate time to take a sip, because if we wait longer, the hot coffee-cream mixture will cool and reach thermal equilibrium with the room-temperature environment. This second equilibrium occurs on a longer time scale, e.g., on the order of tens of minutes. Now let us extend the time scale again, this time to several days.
On this much longer time scale, the thermally equilibrated coffee-cream mixture will reach a vapor equilibrium with the atmosphere, and our thirsty reader will be left with only some residue of what was formerly coffee in the bottom of the mug.


Figure 1.1 Example of coffee-cream equilibrium. Different stages of equilibrium can be observed on different time scales.

Of these three equilibria depicted in Figure 1.1, which one is the true equilibrium? The answer to this question depends on you, dear reader. Specifically, what is the objective of your experiment? If you intend to study vapor equilibrium, then the longest time scale is the most relevant, as your experiment must be conducted on a long enough time scale to reach the desired vapor equilibrium. However, if you are a disciple of Fourier and wish to study the approach to thermal equilibrium, then the intermediate time scale is the appropriate choice, since the experiment can conclude when thermal equilibrium has been achieved, i.e., when there is no further change in the temperature of the coffee-cream mixture. In this case, the subsequent vapor equilibrium is irrelevant. Or perhaps the reader simply wishes to enjoy a refreshing cup of coffee, in which case the relevant equilibrium is the homogenization of the coffee-cream mixture. The reader can enjoy the outcome of the experiment within several seconds.

It should now be apparent to our caffeinated reader that the question of thermodynamic equilibrium is really a question of time scale. We may think of an experiment as having two relevant time scales: an internal time scale on which the approach to the relevant equilibrium occurs, and an external time scale on which the system is being observed or measured. The internal time scale of equilibration is essentially a time scale for the relaxation of atomic structure (“structural relaxation”) over which a system loses any “memory” of its preceding states. The external time scale (or observation time scale) defines the time over which the system is being measured. This measurement can be made either directly by a human observer or using an instrument capable of accessing time scales not available to an unaided human, i.e., on either ultra-fast or ultra-long time scales. Following Feynman’s definition above, a system is considered to be in equilibrium if all of the relevant relaxation processes have taken place, while the remaining slower processes are essentially frozen on the external (observation) time scale of interest [3].

1.2 Thermodynamics vs. Kinetics

Hence, thermodynamics involves determining the relative stability of different possible states of a system under appropriate constraints related to chemical composition, temperature, pressure, etc. Thermodynamics tells us whether a given reaction is feasible and which product is in stable equilibrium. The approach to equilibrium is the domain of kinetics and the subject of this monograph. The word “kinetics” is derived from the ancient Greek “kinesis” (κίνησις), meaning “movement” or “motion.” The goal of kinetics is to determine the rates of processes and the evolution of materials systems under some set of conditions.

The kinetics of a given process fundamentally depend on two important factors:
1. The thermodynamic driving force for the process; and
2. The kinetic rate parameters.
The thermodynamic driving force depends on the degree of disequilibrium of the system. A greater degree of disequilibrium gives rise to a larger thermodynamic driving force for equilibration. The details of this driving force are the subject of irreversible thermodynamics, which will be introduced in Chapter 2. The kinetic rate parameters govern the time scale of the process. They determine how fast a reaction occurs and how much time is required to achieve equilibration. The kinetics of the system will also determine which intermediate states are visited along the path to equilibrium. Whereas some intermediate states are kinetically stable (i.e., on a short time scale), they may be thermodynamically unstable (i.e., in the limit of long time).

While the kinetics of materials processes is the raison d’être of this volume, it would be prudent to begin with a brief review of some key concepts in classical thermodynamics. Figure 1.2 shows a mnemonic table for remembering some of these key quantities. Let us begin in the upper-left quadrant with U, the internal energy of the system.
The internal energy is an extensive parameter including all relevant contributions to the energy of the system. If a system is at constant pressure rather than constant volume (i.e., if the system is isobaric rather than isochoric), then the relevant thermodynamic variable is the enthalpy, H = U + PV, where P is pressure and V is volume. Enthalpy is given in the upper-right quadrant of Figure 1.2. The corresponding free energies are obtained in the second row of the figure by subtracting the product of the absolute temperature, T, and the entropy, S, from either the internal energy or the enthalpy. For an isochoric (constant volume) system, the relevant quantity is the Helmholtz free energy, F = U − TS. For an isobaric (constant pressure) system, the relevant quantity is the Gibbs free energy, G = H − TS. In either case, moving from the upper row to the lower row in Figure 1.2 involves subtracting TS, and moving from the left-hand column (for isochoric systems) to the right-hand column (for isobaric systems) involves adding PV.

Figure 1.2 Mnemonic for remembering key thermodynamic quantities.

Most practical experiments tend to be performed under isobaric rather than isochoric conditions. Hence, the Gibbs free energy is more commonly used than the Helmholtz free energy. Under such conditions, the relative stability of the various states of a system is determined by the differences in Gibbs free energy among these states. The state having the lowest free energy is the thermodynamic equilibrium and the state to which the system will evolve in the limit of long time. Any number of metastable equilibrium states may also exist, which require overcoming an activation barrier to evolve to a more stable configuration having lower free energy. A system can remain in a state of metastable equilibrium for a very long time, enabling it to be treated as a
thermodynamic equilibrium state over that finite interval of time. The condition of metastability implies that small perturbations to the system will not be sufficient for it to change its state; relaxation to a more stable state can only occur if a large activation barrier is overcome.

The simplest possible system of any practical significance is the two-state model depicted in Figure 1.3. This figure shows a “double-well” free energy diagram with two different stable states of the system. The higher free energy well on the left-hand side of the figure represents a metastable equilibrium state. The lower free energy state on the right-hand side is the stable equilibrium. Suppose that a reaction occurs to transition the system from the initial left-hand state (the reactant) to the final right-hand state (the product). There are two relevant parameters for this reaction, as depicted in the figure:
1. The free energy of reaction, ΔG, which is the change in the free energy of the system due to the reaction, i.e., the difference in free energy between the initial and final states; and
2. The activation free energy, G*, which is the activation barrier that must be overcome for the reaction to proceed.
The free energy of reaction (ΔG) provides the thermodynamic driving force for the reaction, while the reaction rate is governed by the activation barrier (G*). For the reaction to proceed during a given observation time, there must be both a nonzero thermodynamic driving force and a reaction rate that allows the kinetics to occur within the time allotted for the experiment.

Figure 1.3 Gibbs free energy of an example two-state model, where G* is the activation free energy and ΔG is the free energy of the reaction.
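To make the distinct roles of ΔG and G* concrete, here is a minimal numerical sketch in Python. It assumes a simple Arrhenius-like rate law, rate ∝ exp(−G*/RT), which is not derived in this chapter, and the barrier and driving-force values below are purely hypothetical:

```python
import math

R = 8.314462  # gas constant, J/(mol K)

def boltzmann_rate(G_star, T, nu=1e13):
    """Arrhenius-like rate for hopping over a barrier G_star (J/mol).

    nu is a hypothetical attempt frequency (1/s); this specific rate
    law is an illustrative assumption, not a result from this chapter."""
    return nu * math.exp(-G_star / (R * T))

# Hypothetical two-state system in the spirit of Figure 1.3: the
# barrier sits G* = 80 kJ/mol above the reactant, and the product
# lies dG = -20 kJ/mol below the reactant.
G_star, dG, T = 80e3, -20e3, 300.0

k_forward = boltzmann_rate(G_star, T)
k_reverse = boltzmann_rate(G_star - dG, T)  # reverse barrier is larger by |dG|

# The ratio of forward to reverse rates depends only on the
# thermodynamic driving force dG, not on the barrier height:
K_eq = k_forward / k_reverse
assert math.isclose(K_eq, math.exp(-dG / (R * T)))
```

Note how each individual rate is set by its activation barrier, while the ratio of the two rates recovers the thermodynamic driving force; this is the separation between kinetics and thermodynamics described above.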


1.3 Spontaneous and Non-Spontaneous Processes

During a spontaneous process, the system moves to a more thermodynamically stable state having a lower free energy [4]. In other words, a spontaneous process has a negative free energy of reaction: ΔG = Gproducts − Greactants < 0. This occurs, for example, if the two-state system moves from the left state to the right state in Figure 1.3. Spontaneous processes occur without any external stimulus, i.e., without any application of work to the system. Examples of spontaneous processes in our everyday lives include:
• Ice melting at room temperature.
• Water flowing downhill.
• Sugar dissolving in water.
• Heat flowing from a hotter object to a colder object.
• A gas expanding to fill an empty vessel.
• Iron exposed to oxygen and water forming rust.

However, merely satisfying the condition of ΔG < 0 does not tell us anything about the kinetics of the spontaneous process. Consider the case of diamond and graphite depicted in Figure 1.4. Under ambient conditions, diamond is a metastable form of carbon, having a higher free energy than graphite. Hence, the transformation from diamond to graphite is a spontaneous process, lowering the free energy of the system. Should our readers be concerned that their diamond jewelry (and Vickers diamond indenters) will spontaneously transform into graphite? Fortunately, the answer is “no,” because the kinetics of this transformation are much too slow to observe at room temperature. Although the transformation of diamond to graphite is a thermodynamically spontaneous process, a diamond is kinetically confined in the diamond state, unable to overcome the activation barrier necessary to transition to graphite. Although a diamond is not “forever,” it should safely last for billions of years.

Figure 1.4 The conversion of carbon from metastable diamond to stable graphite is a spontaneous process. However, we do not observe this process under normal conditions since the kinetics of the transformation have time scales on the order of billions of years.

The opposite of a spontaneous process is a non-spontaneous process. During a non-spontaneous process, the system moves from a more thermodynamically stable state to a less thermodynamically stable state having a higher free energy, i.e., ΔG > 0. As the name implies, non-spontaneous processes cannot occur on their own: they require an external source of energy to drive the process. Examples of non-spontaneous processes in our everyday lives include:
• Filling a tire with air.
• Photosynthesis.
• Refrigeration.
• Skiing uphill.
• Water purification.
• Taking a Kinetics exam.
They are all non-spontaneous processes because they all require work!

Whether a process is spontaneous or non-spontaneous is determined entirely by the sign of ΔG, i.e., ΔG < 0 for a spontaneous process and ΔG > 0 for a non-spontaneous process. Given that ΔG = ΔH − TΔS, we must consider both the enthalpic (ΔH) and entropic (ΔS) contributions to the free energy of reaction. Four different scenarios are possible, as shown in Figure 1.5. If the reaction is both enthalpically and entropically favored,
i.e., if the enthalpy decreases (ΔH < 0) and the entropy increases (ΔS > 0) as a result of the reaction, then the process is always spontaneous at all temperatures. This is indicated in the upper-left quadrant of Figure 1.5. Conversely, if a process involves both an increase in enthalpy (ΔH > 0) and a decrease in entropy (ΔS < 0), then that process is always non-spontaneous, as indicated in the lower-right quadrant of the figure.

The more interesting cases are when there is a competition between the enthalpic and entropic terms. For example, the lower-left quadrant of the figure shows the case of a reaction that is enthalpically favorable (ΔH < 0) but entropically unfavorable (ΔS < 0). Since ΔG = ΔH − TΔS, the spontaneity of the reaction depends on the temperature of the system. At low temperatures, the entropic term (TΔS) contributes less because of the smaller value of T. Hence, at low temperatures the Gibbs free energy of the reaction is dominated by the enthalpy of the reaction, i.e., the heat of reaction. This means that ΔG < 0 at low temperatures, since the enthalpy is dominant and ΔH < 0. However, at high temperatures the entropic term becomes dominant, leading to ΔG > 0. Hence, such an enthalpy-driven reaction would be spontaneous at low temperatures but non-spontaneous at high temperatures.

The opposite is true in the upper-right quadrant of Figure 1.5, which shows a reaction that is favored by entropy (ΔS > 0) but not by enthalpy (ΔH > 0). Again, the entropic term is dominant at high temperatures and the enthalpic term is dominant at low temperatures. Therefore, in this case of an entropy-driven reaction, the process is spontaneous at high temperatures but non-spontaneous at low temperatures. One of the most common themes in thermodynamics is the competition between enthalpy and entropy.

Figure 1.5 Thermodynamic processes can be either spontaneous or non-spontaneous depending on their associated changes in entropy and enthalpy.
As we shall see throughout this volume, the competition between enthalpic and entropic effects has a profound impact on all of materials physics, including the associated kinetic processes of materials.
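The temperature dependence in the two mixed quadrants of Figure 1.5 is easy to check numerically. Here is a short Python sketch of the enthalpy-driven case from the lower-left quadrant; the particular values of ΔH and ΔS are hypothetical, chosen only for illustration:

```python
def gibbs_free_energy_change(dH, dS, T):
    """dG = dH - T*dS for a reaction with enthalpy change dH (J/mol)
    and entropy change dS (J/(mol K)) at absolute temperature T (K)."""
    return dH - T * dS

def is_spontaneous(dH, dS, T):
    """A process is spontaneous when dG < 0."""
    return gibbs_free_energy_change(dH, dS, T) < 0

# Lower-left quadrant of Figure 1.5: enthalpically favored (dH < 0)
# but entropically unfavorable (dS < 0). Values are hypothetical.
dH, dS = -40e3, -100.0

assert is_spontaneous(dH, dS, T=300.0)       # low T: enthalpy dominates
assert not is_spontaneous(dH, dS, T=500.0)   # high T: entropy dominates

# The crossover sits where dG = 0, i.e., at T* = dH/dS (here 400 K):
assert gibbs_free_energy_change(dH, dS, T=dH / dS) == 0.0
```

Swapping the signs of dH and dS reproduces the entropy-driven case of the upper-right quadrant, spontaneous only above the crossover temperature.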

1.4 Microscopic Basis of Entropy

Every good story needs a hero, and the hero of our story is Ludwig Boltzmann, an Austrian physicist born in Vienna in 1844. While the concept of atoms, i.e., the indivisible fundamental units of matter, dates back to the ancient Greek philosophers, most notably Democritus (ca. 460–370 B.C.), Boltzmann was the first to develop the physics connecting atomic theory and thermodynamics. Boltzmann became the father of a new field of physics known as statistical mechanics, which explains the macroscopic properties of materials in terms of their underlying atomic structure, or microstates. A microstate corresponds to a specific atomic configuration of the system. The macrostate of the system encompasses its set of measurable properties, e.g., volume, temperature, heat capacity, thermal expansion coefficient, etc.

The fundamental postulate of statistical mechanics is that the macroscopic properties of a system are a direct result of suitable averaging over the various microstates adopted by the system. Since the microstates are not necessarily known, they must be described in terms of their probabilities of occurrence. The macrostate of a system can then be determined based on this set of microstates and their respective probabilities.

Boltzmann’s theory of statistical mechanics faced stiff criticism and outright rejection from many prominent scientists in the late 19th century [5]. At that time, the field of statistics was considered an “immoral” branch of mathematics, suitable only for gambling and not for legitimate science. Moreover, the atomic theory of matter itself was highly controversial. For example, in a famous exchange at the conclusion of one of Boltzmann’s lectures, the eminent physicist Ernst Mach rose from his seat and boldly declared, “I do not believe that atoms exist,” flatly rejecting Boltzmann’s entire life’s work. Indeed, Boltzmann’s ideas proved to be a few decades ahead of their time. Unfortunately, he did not cope well with this rejection from his less enlightened contemporaries. Boltzmann took his own life on September 5, 1906, by hanging himself while on a vacation with his family near Trieste, in what is now northeastern Italy.

As shown in Figure 1.6, Boltzmann’s tombstone in Vienna is engraved with his famous equation for entropy, which today is most commonly written as:

S = k ln Ω.    (1.1)

Here, S is the entropy of the system, k is Boltzmann’s constant, and Ω is the number of microstates visited by the system to yield the particular macrostate. In other words, Ω is the number of unique atomic configurations explored by the system, where averaging over these microstates yields its particular macroscopic thermodynamic state. The constant of proportionality between ln Ω and S is Boltzmann’s constant, k. Oddly, Boltzmann himself never determined the value of his own constant; its value was instead determined by Max Planck as the microscopic partitioning of the gas constant, R. Whereas the gas constant defines the energy of a system per unit temperature per mole, Boltzmann’s constant defines the energy of a system per unit temperature per atom. In other words, Boltzmann’s constant is equal to the gas constant (R = 8.314462 J K⁻¹ mol⁻¹) divided by Avogadro’s number (NA = 6.02214076 × 10²³ mol⁻¹), or

k = R/NA = 1.380649 × 10⁻²³ J/K.    (1.2)

Figure 1.6 Tombstone of Ludwig Boltzmann (1844–1906), showing his famous equation for entropy, which provides the microscopic basis for thermodynamics and the foundation for the field of statistical mechanics. (From https://commons.wikimedia.org/wiki/File:Zentralfriedhof_Vienna_-_Boltzmann.JPG, Creative Commons License).
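Eq. (1.2) can be verified directly from the values of R and NA quoted above:

```python
R = 8.314462          # gas constant, J/(mol K)
N_A = 6.02214076e23   # Avogadro's number, 1/mol

# Boltzmann's constant as the per-atom partitioning of the gas
# constant, per Eq. (1.2):
k = R / N_A
assert abs(k - 1.380649e-23) < 1e-28  # J/K
```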

One important fundamental question about Eq. (1.1) is: “Why the logarithm?” To understand the physical rationale for the logarithm, let us consider the composite system in Figure 1.7. Here, the system consists of two subsystems, each having its own volume (V1 and V2), enthalpy (H1 and H2), and entropy (S1 and S2). Since volume, enthalpy, and entropy are all extensive quantities, the total volume of the system must be equal to the sum of the volumes of the individual subsystems, i.e., V = V1 + V2, and likewise for enthalpy (H = H1 + H2) and entropy (S = S1 + S2). Following Boltzmann’s formula for entropy in Eq. (1.1), we know that S1 = k ln Ω1 and S2 = k ln Ω2, where Ω1 and Ω2 are the numbers of microstates visited by the first and second subsystems, respectively. According to probability theory, the total number of possible microstates of the composite system is the product of those of the individual subsystems, i.e., Ω = Ω1Ω2. For example, suppose there are Ω1 = 10 microstates available in the first subsystem and Ω2 = 50 microstates available in the second subsystem; the total number of microstates of the composite system is equal to the number of possible combinations of microstates in the two subsystems, i.e., Ω = Ω1Ω2 = 10 × 50 = 500. The logarithm in Eq. (1.1) is therefore required to make the entropies additive between these two subsystems. In other words, the logarithm is required to ensure:

S = k ln(Ω1Ω2) = k ln Ω1 + k ln Ω2 = S1 + S2.    (1.3)

Figure 1.7 Composite system consisting of two subsystems. The volume (V), enthalpy (H), and entropy (S) of the composite system are equal to the sums of the respective quantities of the two subsystems.
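The additivity argument of Eq. (1.3) can be checked numerically with the microstate counts from the example above:

```python
import math

k = 1.380649e-23  # Boltzmann's constant, J/K

def entropy(omega):
    """Boltzmann entropy S = k ln(omega), per Eq. (1.1)."""
    return k * math.log(omega)

# Two subsystems as in Figure 1.7, with the microstate counts
# used in the text:
omega_1, omega_2 = 10, 50

# Microstate counts multiply, so the logarithm makes entropies add:
S_composite = entropy(omega_1 * omega_2)   # omega = 10 * 50 = 500
assert math.isclose(S_composite, entropy(omega_1) + entropy(omega_2))
```

Any function other than the logarithm would fail this check, which is the physical rationale behind the form of Eq. (1.1).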

The importance of Boltzmann’s contributions to science cannot be overstated, particularly as embodied by his famous Eq. (1.1). Boltzmann was the first scientist to establish a rigorous connection between the macroscopic properties of a material and its underlying atomic structure. In founding the field of statistical mechanics, Boltzmann was also the first physicist to understand the importance of probability theory and the role of uncertainty in the formulation of physics. Today, Boltzmann’s impact stretches far beyond statistical mechanics, as Eq. (1.1) is also the basis for the field of information theory, pioneered by Shannon [6] more than a half-century later.

1.5 First Law of Thermodynamics

Let us now briefly review the laws of thermodynamics. The first law of thermodynamics is the most straightforward: energy is conserved. In other words, the energy of the universe is a constant. Energy can be neither created nor destroyed, but it can be converted among different forms. For example, mechanical energy can be converted to thermal energy, electrical energy can be converted to chemical energy, etc. Energy can exist in many forms, but the total energy of the universe must remain constant. This law is fundamental to both thermodynamics and Newtonian mechanics.

1.6 Second Law of Thermodynamics

The second law of thermodynamics is more intriguing and has been stated in several forms. Some common statements of the second law include [7]:
• “Heat can never pass from a colder to a warmer body without some other change, connected therewith, occurring at the same time.” (Rudolf Julius Emanuel Clausius)
• “Every process occurring in nature proceeds in the sense in which the sum of the entropies of all bodies taking part in the process is increased.” (Max Karl Ernst Ludwig Planck)
• “The entropy of the universe tends to a maximum.” (Clausius)

The second law of thermodynamics has profound physical implications. Whereas Newtonian physics is time-reversible, entropy is the only physical quantity that requires a particular direction for time, viz., Boltzmann’s arrow of time. Time moves in one direction: always forward, never backward. As we move forward in time, the second law dictates that the entropy of the universe increases, but never decreases. Hence, the most general statement of the second law, and the one which we shall favor in this volume, is that the entropy of the universe increases.

An example of Boltzmann’s arrow of time is depicted in Figure 1.8. Moving forward in time, an egg can be broken, but once broken it cannot be put back together again (the “Humpty Dumpty effect”). This is because the number of possible configurations of a broken egg so greatly exceeds the number of possible configurations of an intact egg, i.e., Ωbroken ≫ Ωintact. In other words, the number of ways to break an egg completely overwhelms the number of ways to reassemble a broken egg. Following Eq. (1.1), the set of broken egg states has a higher entropy than the set of intact egg states. Therefore, the broken egg must exist in time after the intact egg, and not the other way around.

Figure 1.8 A consequence of the Second Law is Boltzmann’s arrow of time, which states that time flows only in one direction: toward increasing entropy of the universe.

Why is the second law valid? There is nothing in Newtonian mechanics that prohibits a broken egg from being reassembled into an intact form. Likewise, there is nothing in Newtonian mechanics that prohibits an ice cube from spontaneously appearing in a glass of liquid water at room temperature. Why have we never observed such events? The fundamental reason goes back to Boltzmann and his formulation of statistical mechanics. We never observe such events because they are overwhelmingly improbable. The universe (or any isolated system) will naturally evolve toward macrostates having a greater number of possible microstates, because it is overwhelmingly more probable to do so. Hence, following Eq. (1.1), the universe (or any isolated system) will evolve in the direction of increasing entropy. The second law cannot be explained in terms of classical Newtonian mechanics, where all details of the system are known and deterministic. It can only be explained in terms of statistical mechanics and the evolution of probabilities. In other words, entropy increases because it is overwhelmingly probable to do so.
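To see just how lopsided these probabilities are, consider a toy model (not from the text): N gas molecules in a box, each independently and equally likely to be found in either half. The probability of finding all of them in one half, the analog of a broken egg spontaneously reassembling, is (1/2)^N:

```python
import math

def prob_all_in_left_half(N):
    """Probability that all N molecules occupy the same chosen half
    of a box, assuming each side is equally likely and the molecules
    are independent. This toy model is illustrative only."""
    return 0.5 ** N

# Even for a tiny system of 100 molecules, the "ordered" fluctuation
# is absurdly improbable:
assert prob_all_in_left_half(100) < 1e-30

# Equivalently, confining the gas to half the volume would remove
# S = k N ln 2 of entropy, consistent with Eq. (1.1):
k = 1.380649e-23  # J/K
delta_S = k * 100 * math.log(2)
assert delta_S > 0
```

For a macroscopic number of molecules (~10²³), the exponent is so large that the fluctuation is never observed on any physical time scale, which is the statistical content of the second law.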

1.7 Third Law of Thermodynamics

As with the second law, there are several well-known statements of the third law of thermodynamics [7], including:
• “At absolute zero the entropy change in chemical reactions is zero.” (Walther Hermann Nernst)
• “At absolute zero the entropy of a substance is zero.” (Planck)
• “At absolute zero the entropy of a perfect crystalline solid is zero.” (Gilbert Newton Lewis)

The third law is also often quoted as the unattainability of absolute zero temperature, also due to Nernst: “The absolute zero of temperature can never be reached.” However, this is more a corollary to the third law than a statement of the third law itself.

There are subtle but well-marked differences among the three statements of the third law above. Whereas Nernst’s statement is framed in terms of a change in entropy vanishing at absolute zero temperature, Planck and Lewis both state the third law in terms of the absolute entropy of a system at absolute zero. Whereas Planck’s statement of the third law is general for any system, Lewis’s statement is more restrictive, considering only that the entropy of a perfect crystalline solid is zero at absolute zero temperature.

Lewis’s interpretation of the third law is indicative of a common fundamental misunderstanding of entropy, viz., that entropy is a measure of disorder. The origins of this misconception have been described in detail by Styer [8,9]. While it is often true that disordered systems have a greater number of microstates than ordered systems, e.g., gases tend to have more possible microstates than liquids, which, in turn, tend to have more possible microstates than crystalline solids, this is not always true. An example is the case of inverse melting, where a crystal can melt into the liquid state upon cooling rather than upon heating [10,11]. Moreover, there is no rigorous definition of what constitutes disorder and how disorder can be quantified. Hence, we must kindly ask the reader to leave behind this misconception. Entropy is not a measure of disorder. Rather, following Boltzmann’s definition in Eq. (1.1), entropy is a measure of the number of microstates that yield a particular macrostate.

In other words, the thermodynamic state of a system is entirely determined by a suitable average over all the microstates visited by the system. The entropy is a measure of the logarithm of this number of microstates. For a system with high entropy, the macrostate is determined by averaging over a large number of microstates, i.e., by visiting a large number of different microscopic configurations. On the other hand, a system has low entropy if its macrostate is determined by only a few microstates.

The third law fundamentally considers the lower bound of entropy in the limit of absolute zero temperature, where the system has no thermal energy to overcome any activation barrier. Barring quantum tunneling, in the limit of absolute zero temperature any classical system would be confined to one and only one microstate.
Using Eq. (1.1), the entropy of any system in the limit of absolute zero temperature must satisfy

lim(T→0) S(T) = lim(T→0) (k ln Ω) = k ln 1 = 0.    (1.4)

Hence, the most rigorous statement of the third law, and the one we shall adopt in this textbook, is due to Planck: at absolute zero the entropy of a substance is zero. Besides being the most physically rigorous statement of the third law, Planck’s statement also offers an important insight into the kinetics of a system in the limit of absolute zero temperature. In this limit, any system is frozen into a single microstate, i.e., the kinetics of any system cease since there is no thermal energy to facilitate motion. This is equally true for perfect crystals (the typical equilibrium state that one considers in the limit of zero temperature) as it is for nonequilibrium disordered states at zero temperature. Regardless of whether the system is ordered or disordered, in the limit of absolute zero the macrostate of the system is determined by one and only one microstate, since only one microstate is accessible. This is also consistent with the principle of causality, which states that the thermodynamic properties of a system are determined only by those microstates actually accessible by the system [12,13]. This is why the third law is universally true for any classical system, and not just for perfect crystals.

1.8 Zeroth Law of Thermodynamics

The zeroth law of thermodynamics was proposed in 1935 by Ralph H. Fowler, after the first three laws of thermodynamics had already been established. The zeroth law states the transitive nature of thermal equilibrium: “If a body A is in temperature equilibrium with two bodies B and C, then B and C themselves will be in temperature equilibrium with each other.” The zeroth law is important in that it establishes a mathematical equivalence relation for the concept of thermal equilibrium.

1.9 Summary

Whereas thermodynamics concerns the relative stability of the various states of a system, kinetics concerns the approach to equilibrium and the intermediate states visited along the way. Thermodynamic processes are spontaneous if they result in a decrease in the free energy of the system. The change in Gibbs free energy has both enthalpic and entropic contributions. However, the spontaneity of a process says nothing about the rate of its kinetics. The laws of thermodynamics may be summarized as:
• First Law: Energy is conserved.
• Second Law: The entropy of the universe increases with time.
• Third Law: At absolute zero the entropy of a substance is zero.
As proposed by Boltzmann, entropy is a measure of the number of microstates visited by a system to produce a specific macrostate. Justification of both the second and third laws of thermodynamics can be made in terms of the underlying statistical mechanics of the system and Boltzmann’s equation for entropy, Eq. (1.1).

This book belongs to Alice Cartes ([email protected])

Copyright Elsevier 2023


Exercises

(1.1) Using your thesis research or another research project that you are currently working on or have worked on in the past:
    (a) Provide a few sentences of background on your research, including motivation and objectives.
    (b) Explain an experimental or modeling setup used in your research. What is the most appropriate time scale for this experiment or model?
    (c) Discuss the importance of thermodynamics in your research project, giving at least one specific example.
    (d) Describe the importance of kinetics in your project, giving at least one specific example.
    (e) Give an example of a spontaneous process that is relevant for your research. Why is it spontaneous?
    (f) Give an example of a non-spontaneous process that is relevant for your research. Why is it non-spontaneous?
    (g) What role does entropy play in your research?
    (h) What do you hope to learn from the study of kinetics that will be most useful or relevant for your research?
(1.2) Does global warming violate the first law of thermodynamics? Why or why not? Justify your answer.
(1.3) Newtonian mechanics is time-reversible. Is this a violation of the second law of thermodynamics? Why or why not?
(1.4) Give an example of a process that is spontaneous at high temperatures but non-spontaneous at low temperatures. What governs the transition between spontaneous and non-spontaneous behavior in this example?
(1.5) Is it possible for a process to be spontaneous under isobaric conditions but non-spontaneous under isochoric conditions? Justify your answer.
(1.6) Can you cool a kitchen by keeping the freezer door open? Why or why not?
(1.7) The zeroth law of thermodynamics is an example of syllogistic reasoning. Construct a truth table to prove the zeroth law using Boolean logic.
(1.8) Maxwell's demon is a thought experiment proposed by James Clerk Maxwell, which considers a demon who operates a door separating two chambers of gas [14]. As an individual gas molecule approaches



the door, the demon will either open or close the door such that the fast and slow molecules are separated into different chambers. Since the fast molecules have a higher temperature, the demon causes one chamber to heat up and the other chamber to cool down, which decreases the total entropy of the system. Is this a violation of the second law of thermodynamics? Why or why not?
(1.9) Consider two systems having the same chemical composition and the same entropy. Do they necessarily have the same structure? Why or why not?
(1.10) Give an example of a system for which the ordered state has a higher entropy than the disordered state. What are the physical implications of this difference in entropy?

References

[1] R. P. Feynman, Statistical Mechanics, Westview, New York (1972).
[2] R. G. Palmer, "Broken Ergodicity," Adv. Phys. 31, 669 (1982).
[3] J. C. Mauro, P. K. Gupta, and R. J. Loucks, "Continuously Broken Ergodicity," J. Chem. Phys. 126, 184511 (2007).
[4] H. B. Callen, Thermodynamics and an Introduction to Thermostatistics, John Wiley & Sons, New York (1985).
[5] E. Johnson, Anxiety and the Equation: Understanding Boltzmann's Entropy, MIT Press, Cambridge (2018).
[6] R. M. Gray, Entropy and Information Theory, Springer Science & Business Media, Berlin (2011).
[7] B. Linder, Thermodynamics and Introductory Statistical Mechanics, John Wiley & Sons, New York (2004).
[8] D. Styer, "Insight into Entropy," Am. J. Phys. 68, 1090 (2000).
[9] D. Styer, "Entropy as Disorder: History of a Misconception," Phys. Teacher 57, 454 (2019).
[10] A. L. Greer, "The Thermodynamics of Inverse Melting," J. Less Common Metals 140, 327 (1988).
[11] F. H. Stillinger and P. G. Debenedetti, "Phase Transitions, Kauzmann Curves, and Inverse Melting," Biophys. Chem. 105, 211 (2003).
[12] D. Kivelson and H. Reiss, "Metastable Systems in Thermodynamics: Consequences, Role of Constraints," J. Phys. Chem. B 103, 8337 (1999).
[13] H. Reiss, "Apparent Entropy, Residual Entropy, Causality, Metastability, Constraints, and the Glass Transition," J. Non-Cryst. Solids 355, 617 (2009).
[14] K. Maruyama, F. Nori, and V. Vedral, "Colloquium: The Physics of Maxwell's Demon and Information," Rev. Mod. Phys. 81, 1 (2009).


CHAPTER 2

Irreversible Thermodynamics

2.1 Reversible and Irreversible Processes

Classical thermodynamics focuses principally on equilibrium states and reversible processes. A reversible process is any process where the system can be restored to its initial state from the final state without changing the thermodynamic properties of the universe. In thermodynamics, the universe consists of the system under study, as well as its surroundings, i.e., everything else. For a system to undergo a reversible process, it must never leave equilibrium and must not increase the entropy of the universe. A reversible process must therefore occur infinitely slowly, driven by an infinitesimally small driving force. Owing to the infinitely slow nature of a reversible process, all of the changes that occur in the system are in thermodynamic equilibrium with each other. If the process is also adiabatic, i.e., if no heat is exchanged between the system and its surroundings, then the reversible process is also isentropic, meaning that the entropy of the system itself remains constant. The phenomenon of undergoing a reversible process is called reversibility.

An example of a hypothetical reversible process is shown in Figure 2.1, where a gas is compressed by frictionless mechanical loading of sand, one grain at a time. The process is reversible because the perturbation of a single grain of sand is so small that the system can respond with an infinitesimally small change in its state, never leaving equilibrium. After the sand grains have compressed the gas, the process can be reversed by slowly removing each grain of sand, one at a time. The initial thermodynamic state of the universe is fully recovered after the grains of sand are removed.

Of course, reversible processes primarily exist in a hypothetical ideal universe. In reality, nearly all processes are irreversible, where the initial state of the universe cannot be restored from the final state.
During an irreversible process, the various intermediate states of the system are not in equilibrium with each other. As a result, the entropy of the universe increases during an irreversible process and cannot be restored to its initial value.

Materials Kinetics, ISBN 978-0-12-823907-0, https://doi.org/10.1016/B978-0-12-823907-0.00020-0



Figure 2.1 Example of a reversible process, where a gas is slowly compressed by dropping one grain of sand at a time. The original state of the universe is restored after the sand is slowly removed, one grain at a time.

Figure 2.2 Example of an irreversible process where a gas is suddenly compressed by dropping an anvil. Since the process is irreversible, the original state of the universe cannot be restored.

The phenomenon of a system undergoing an irreversible process is called irreversibility. An example of an irreversible process is depicted in Figure 2.2, where a gas is compressed suddenly by dropping an anvil. Here the perturbation is much larger than in Figure 2.1, and the system is not given sufficient time to remain in equilibrium at intermediate steps during the process. Reversible and irreversible processes follow different paths in phase space. Figure 2.3 depicts this difference for our example of a gas compressed along reversible and irreversible paths. For the reversible process, the system


Figure 2.3 Schematic showing the different paths followed by reversible vs. irreversible processes.

Table 2.1 Key differences between reversible and irreversible thermodynamic processes.

Reversible Processes:
• The process is carried out infinitely slowly.
• The process takes infinite time for completion.
• The system is always in equilibrium, at all stages of the process.
• The system can return to its initial state without changing the thermodynamic state of the universe.
• The entropy of the universe remains constant (ΔS = 0).
• Maximum work is obtained during a reversible process.
• Very few real processes are reversible.

Irreversible Processes:
• The process is carried out quickly.
• The process is completed over some finite time.
• The system departs from equilibrium at some stage during the process.
• The system cannot return to its initial state without changing the thermodynamic state of the universe.
• An increase in the entropy of the universe is required (ΔS > 0).
• Work obtained during an irreversible process is less than maximum.
• Most real processes are irreversible!

is given sufficient time to respond to each infinitesimal perturbation such that it never leaves equilibrium. On the other hand, for the case of a large perturbation, there is a sudden change and the system does not have enough time to find its new equilibrium. The key differences between reversible and irreversible thermodynamic processes are summarized in Table 2.1.


The thermodynamics of irreversible processes was developed by Lars Onsager (1903–1976), a Norwegian-American chemist/physicist. His career in the U.S. began as a faculty member at Johns Hopkins University in 1928, but unfortunately he was let go after only one semester of teaching due to "difficulties in communicating with weaker intellects," a problem that persisted throughout his career [1]. Onsager moved to Brown University, where he published his seminal work in irreversible thermodynamics during 1929–1931 [1–5]. Unfortunately, Onsager's groundbreaking research went unnoticed for about 20 years after its publication. Upon leaving Brown, Onsager accepted a post-doctoral fellowship at Yale University, where "the Chemistry Department was embarrassed to discover that he had no Ph.D." [1] Fortunately, Onsager was able to stay at Yale to earn his Ph.D., where he also spent the remainder of his highly productive career. Despite the difficulties in communicating his outstanding research, the importance of Onsager's scientific contributions was eventually recognized. In 1968, Onsager was awarded the Nobel Prize in Chemistry for his development of irreversible thermodynamics. Indeed, the entire field of irreversible thermodynamics owes its existence to the brilliant and dedicated research of Lars Onsager.

In the remainder of this chapter, we will cover the key elements of Onsager's theory, viz., the thermodynamic driving forces for irreversible processes, the resulting fluxes, and the general formulation of the kinetics of irreversible processes. We will also describe three key simplifying assumptions that make the problem of irreversible thermodynamics more tractable: purely resistive systems, linearity, and the Onsager reciprocity theorem [5]. Finally, we will discuss applications of irreversible thermodynamics to several phenomena of interest in materials science and engineering.

2.2 Affinity

The thermodynamic driving force for an irreversible process is called the affinity. The affinity, F, is defined as the increase in the entropy of the universe, S, with a change in an arbitrary thermodynamic coordinate, X, of the system:

\[ F \equiv \frac{\partial S}{\partial X}. \qquad (2.1) \]

If the affinity is nonzero, then some increase in entropy is possible by changing the thermodynamic coordinate, X. This ability to increase


entropy serves as the thermodynamic driving force for an irreversible process to occur. When the affinity vanishes, then no further increase in the entropy of the universe is attainable by changing X, so the system has reached equilibrium with respect to that thermodynamic parameter.

Let us consider a simple composite system comprised of two subsystems separated by a diathermal wall. As depicted in Figure 2.4, the two subsystems have extensive¹ thermodynamic parameters with values X1 and X2, respectively, where the total value for the system is a constant, X = X1 + X2. Since X is a constant, ∂X1 = −∂X2, and the affinity for this simple system is:

\[ F \equiv \left( \frac{\partial S}{\partial X_1} \right)_X = \left( \frac{\partial (S_1 + S_2)}{\partial X_1} \right)_X = \frac{\partial S_1}{\partial X_1} - \frac{\partial S_2}{\partial X_2}. \qquad (2.2) \]

The equilibrium values of X1 and X2 are determined by the vanishing of the affinity. If F = 0, then the system has achieved equilibrium. If F ≠ 0, then an irreversible process can occur, bringing the system toward the equilibrium state and increasing the entropy of the universe.

While Eq. (2.2) is written in terms of some arbitrary thermodynamic parameter, X, let us suppose that the parameter of interest is the internal energy, U, which is responsible for achieving temperature equilibrium. In this case, the affinity is

\[ F = \frac{\partial S_1}{\partial U_1} - \frac{\partial S_2}{\partial U_2} = \frac{1}{T_1} - \frac{1}{T_2}, \qquad (2.3) \]

where T1 and T2 are the absolute temperatures of the first and second subsystems, respectively, and U1 and U2 are their corresponding internal energies. If F = 0, then no heat flows across the diathermal wall. In other words, if T1 = T2, then the system has achieved thermal equilibrium. If F ≠ 0, then the affinity, i.e., the difference in reciprocal temperatures,

Figure 2.4 System consisting of two subsystems having extensive thermodynamic properties, X1 and X2. The sum X = X1 + X2 is a constant.

¹ Recall that an extensive variable is one that changes with the size of the system, e.g., volume, energy, entropy, etc. In contrast, an intensive variable is a property that does not vary with the size of the system, e.g., pressure, density, temperature, heat capacity, etc.


Figure 2.5 Affinities for the flow of heat, matter, and electrical charge along one continuous dimension, x.

acts as a driving force for the flow of heat between the two subsystems until equilibration is achieved.

Eq. (2.2) considers the affinity in the case of two discrete subsystems. As derived by Callen [5], this can be generalized for changes along any continuous dimension, x, by considering the change in local entropy density, s, accounting for localized changes in the relevant thermodynamic properties. This is shown explicitly in Figure 2.5, where ds is expressed in terms of changes in the local internal energy density (u), the chemical potential of each species (μi), the local number density of each species (ni), the electron chemical potential (μe), and the local number density of electrons (ne). Additional terms may be included for mechanical work, magnetic field effects, polarization energy, interfacial energy, etc., as appropriate for the system under study. Combining the definition of affinity in Eq. (2.1) with the equation for ds in Figure 2.5, the affinities are simply the gradients of the prefactors in front of each differential on the right-hand side of the equation. The affinities for the flow of heat, mass, and electrical charge are all provided in Figure 2.5 in terms of gradients along one continuous dimension, x. Hence, the affinities in Figure 2.5 represent the continuous form of the discrete case shown previously in Eq. (2.2).
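The discrete affinity of Eq. (2.3) lends itself to a quick numerical check. The following sketch is illustrative only (the function name and temperature values are ours):

```python
def thermal_affinity(t1, t2):
    """Affinity F = 1/T1 - 1/T2 for heat exchange between two subsystems (Eq. 2.3)."""
    return 1.0 / t1 - 1.0 / t2

# Subsystem 1 colder than subsystem 2: F > 0, so entropy increases by moving
# internal energy into subsystem 1, i.e., heat flows from hot to cold.
print(thermal_affinity(300.0, 400.0) > 0)   # True

# Equal temperatures: the affinity vanishes and the system is in thermal equilibrium.
print(thermal_affinity(350.0, 350.0))       # 0.0
```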

2.3 Fluxes

The response of the system to an affinity is characterized by the flux, J. The flux is defined as the time rate of change of a thermodynamic parameter, X. Stated mathematically, the flux is defined by

\[ J \equiv \frac{dX}{dt}, \qquad (2.4) \]


where t is time. In other words, the flux is the rate of change of the thermodynamic state of the system in response to an affinity, i.e., in response to the thermodynamic driving force for equilibration. Consequently, the flux vanishes as the affinity vanishes, and a nonzero affinity leads to a nonzero flux (provided that the kinetics of the process are not infinitely slow). One may think of the affinity as the "cause" of the irreversible process, and the resulting flux as the "effect."

Examples of common flux equations include Fick's first law, which governs the flux of mass, M, in response to a concentration gradient, ∇C,

\[ J_M = -D \nabla C, \qquad (2.5) \]

where D is the diffusion coefficient. Likewise, the flux of heat, Q, in the presence of a temperature gradient, ∇T, is given by Fourier's law,

\[ J_Q = -k \nabla T, \qquad (2.6) \]

where k is the thermal conductivity. The flux of electrical charge, E, in response to an electrical potential gradient, ∇φ, is given by Ohm's law,

\[ J_E = -\frac{1}{\rho} \nabla \varphi, \qquad (2.7) \]

where ρ is the electrical resistivity. All three flux equations have identical forms, i.e., the flux is equal to the product of a rate parameter (governing the kinetics of the irreversible process) with the gradient of the relevant thermodynamic property. The minus signs are included in Eqs. (2.5)–(2.7) because the fluxes act to eliminate gradients in these thermodynamic properties. Note that all three of these flux equations were proposed empirically in the 19th century based on experimental observations, long before Onsager's development of irreversible thermodynamics.

Although Eqs. (2.5)–(2.7) were not originally derived from irreversible thermodynamics, they can still be obtained from Onsager's formulation under appropriate assumptions. For example, an equivalent version of Fourier's law can be obtained based on the gradient of the reciprocal temperature, generalizing from Eqs. (2.3) and (2.4):

\[ J_Q = k T^2 \nabla\!\left(\frac{1}{T}\right), \qquad (2.8) \]

where the kinetic factor is kT² to maintain consistency with the definition of thermal conductivity, k, in Eq. (2.6).
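The common structure of Fick's, Fourier's, and Ohm's laws can be seen in a short numerical sketch, approximating the 1D gradients by finite differences. All parameter values and function names below are illustrative placeholders, not from the text:

```python
def linear_flux(rate_coefficient, gradient):
    """Generic flux law of Eqs. (2.5)-(2.7): J = -(rate coefficient) * gradient."""
    return -rate_coefficient * gradient

def gradient_1d(value_left, value_right, dx):
    """Finite-difference estimate of a 1D gradient between two points dx apart."""
    return (value_right - value_left) / dx

dx = 0.01  # sample spacing in meters (illustrative)

# Fick's first law (Eq. 2.5): mass flux down a concentration gradient.
j_mass = linear_flux(1e-9, gradient_1d(2.0, 1.0, dx))      # D ~ 1e-9 m^2/s (placeholder)
# Fourier's law (Eq. 2.6): heat flux down a temperature gradient.
j_heat = linear_flux(50.0, gradient_1d(400.0, 300.0, dx))  # k ~ 50 W/(m K) (placeholder)
# Ohm's law (Eq. 2.7): charge flux down an electrical potential gradient.
j_charge = linear_flux(1.0 / 1.7e-8, gradient_1d(0.0, 0.01, dx))  # 1/rho (placeholder)

# Each flux points opposite its driving gradient, acting to eliminate it:
print(j_mass > 0, j_heat > 0, j_charge < 0)   # True True True
```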


2.4 Entropy Production

The result of any irreversible process is an increase in the entropy of the universe. The rate of entropy production from an irreversible process can be obtained by taking the time derivative of entropy:

\[ \frac{dS}{dt} = \sum_i \frac{\partial S}{\partial X_i} \frac{\partial X_i}{\partial t} = \sum_i F_i J_i, \qquad (2.9) \]

where X_i represents a specific thermodynamic parameter of the system, and F_i and J_i are the corresponding affinity and flux for that parameter, respectively. Hence, the rate of entropy production is the sum of the products of each affinity with its associated flux. For a reversible process, the change in entropy is zero (dS = 0) since the affinity is zero. For any irreversible process, the change in entropy is positive (dS > 0). Hence, irreversible processes are agents of the second law, ensuring that the entropy of the universe increases with time.
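Eq. (2.9) reduces the entropy production rate to a sum of affinity-flux products, which is simple to evaluate numerically. The values below are arbitrary illustrative numbers (not from the text):

```python
def entropy_production_rate(affinities, fluxes):
    """Rate of entropy production dS/dt = sum_i F_i * J_i (Eq. 2.9)."""
    return sum(f_i * j_i for f_i, j_i in zip(affinities, fluxes))

# Each flux has the same sign as its conjugate affinity, so every term in the
# sum is positive and dS/dt > 0, as the second law demands.
affinities = [2.0e-4, -5.0e-3]
fluxes = [1.5, -0.2]
print(entropy_production_rate(affinities, fluxes) > 0)   # True

# For a reversible process the affinities vanish, so the entropy production is zero.
print(entropy_production_rate([0.0, 0.0], fluxes))       # 0.0
```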

2.5 Purely Resistive Systems

The study of irreversible thermodynamics is simplified by three approximations. The first approximation is that the system is purely resistive. A purely resistive system is one in which the fluxes at a given instant in time depend only on the values of the affinities at that same instant, and not on any previous states of the system. The name "purely resistive" comes from an analogous term in electrical engineering, since a circuit having only resistors (i.e., no capacitors and no inductors) has no memory.

With the assumption of a purely resistive system, the flux can be expanded in powers of the affinities using a Taylor series approximation. Since the flux vanishes as the affinities vanish, the constant term in the series is zero, and the Taylor series becomes a Maclaurin series:

\[ J_i = \sum_j L_{ij} F_j + \frac{1}{2!} \sum_j \sum_k L_{ijk} F_j F_k + \cdots, \qquad (2.10) \]

where the subscripts indicate the different thermodynamic parameters of the system. The L parameters are the kinetic coefficients, defined by

\[ L_{ij} = \left( \frac{\partial J_i}{\partial F_j} \right)_0 \qquad (2.11) \]


for the linear terms and

\[ L_{ijk} = \left( \frac{\partial^2 J_i}{\partial F_j \partial F_k} \right)_0 \qquad (2.12) \]

for the quadratic terms in Eq. (2.10). The rate of an irreversible process is governed by the magnitude of these kinetic coefficients.

2.6 Linear Systems

When the affinities are small, quadratic and higher-order terms in the flux equations can be neglected, and Eq. (2.10) can then be simplified in linear form as:

\[ J_i = \sum_j L_{ij} F_j. \qquad (2.13) \]

This linear approximation is suitable for systems that deviate only slightly from equilibrium. For systems further out of equilibrium, the higher-order terms in Eq. (2.10) may need to be included.

In general, each flux is expected to be a function of all the affinities acting in the system, i.e., each flux is a function of all the thermodynamic driving forces. As such, Eq. (2.13) can be expanded as:

\[ \begin{aligned} J_1 &= L_{11} F_1 + L_{12} F_2 + L_{13} F_3 + \cdots + L_{1\Omega} F_\Omega \\ J_2 &= L_{21} F_1 + L_{22} F_2 + L_{23} F_3 + \cdots + L_{2\Omega} F_\Omega \\ J_3 &= L_{31} F_1 + L_{32} F_2 + L_{33} F_3 + \cdots + L_{3\Omega} F_\Omega \\ &\;\;\vdots \\ J_\Omega &= L_{\Omega 1} F_1 + L_{\Omega 2} F_2 + L_{\Omega 3} F_3 + \cdots + L_{\Omega\Omega} F_\Omega \end{aligned} \qquad (2.14) \]

where Ω is the total number of relevant thermodynamic parameters. Hence, in a linear purely resistive system, the kinetic coefficients form a matrix:

\[ \mathbf{L} = \begin{pmatrix} L_{11} & L_{12} & L_{13} & \cdots & L_{1\Omega} \\ L_{21} & L_{22} & L_{23} & \cdots & L_{2\Omega} \\ L_{31} & L_{32} & L_{33} & \cdots & L_{3\Omega} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ L_{\Omega 1} & L_{\Omega 2} & L_{\Omega 3} & \cdots & L_{\Omega\Omega} \end{pmatrix}. \qquad (2.15) \]


The diagonal terms in the matrix (L11, L22, L33, …, LΩΩ) are called the direct kinetic coefficients, which link each flux to its own corresponding affinity. This corresponding affinity is known as the conjugate driving force for that particular flux. The off-diagonal terms of the kinetic coefficient matrix in Eq. (2.15) are called the coupling coefficients, which are responsible for coupling effects, also known as cross effects, between a flux and a different type of thermodynamic affinity. Coupling effects are a result of momentum or energy transfer among different types of species in the system (e.g., atoms, electrons, phonons). Specific examples of coupling effects include the Soret effect, the Seebeck effect, and electromigration, all of which will be discussed later in this chapter. If there is no coupling between two different modes, then the corresponding coupling coefficient in the kinetic coefficient matrix would be equal to zero.
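In this linear regime, evaluating the fluxes is just the matrix-vector product J = L F. A minimal sketch with a hypothetical 2×2 kinetic coefficient matrix (the numerical values are arbitrary):

```python
def fluxes_from_affinities(kinetic_matrix, affinities):
    """Evaluate J_i = sum_j L_ij * F_j, i.e., Eqs. (2.13)-(2.14)."""
    return [sum(l_ij * f_j for l_ij, f_j in zip(row, affinities))
            for row in kinetic_matrix]

# Hypothetical kinetic coefficient matrix: direct terms on the diagonal and
# equal off-diagonal coupling terms (symmetric, anticipating Onsager reciprocity).
L = [[4.0, 0.5],
     [0.5, 2.0]]

# With only the first affinity nonzero, the coupling coefficient L_21 still
# produces a nonzero second flux -- a cross effect.
print(fluxes_from_affinities(L, [1.0, 0.0]))   # [4.0, 0.5]
```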

2.7 Onsager Reciprocity Theorem

The third simplifying approximation in irreversible thermodynamics is the Onsager reciprocity theorem, which states that the linear kinetic coefficients in Eqs. (2.13)–(2.15) satisfy

\[ L_{jk} = L_{kj}. \qquad (2.16) \]

From Eq. (2.11), this implies

\[ \frac{\partial J_j}{\partial F_k} = \frac{\partial J_k}{\partial F_j}. \qquad (2.17) \]

The Onsager reciprocity theorem defines the fundamental symmetry of irreversible processes. Stating Eq. (2.17) in words, the change in the flux of some quantity j with respect to the conjugate driving force of a different quantity k is equal to the change in the flux of this second quantity k induced by changing the conjugate driving force of the first quantity j. The main result of the Onsager reciprocity theorem is that the kinetic coefficient matrix of Eq. (2.15) is symmetric:

\[ \mathbf{L} = \begin{pmatrix} L_{11} & L_{12} & L_{13} & \cdots & L_{1\Omega} \\ L_{21} & L_{22} & L_{23} & \cdots & L_{2\Omega} \\ L_{31} & L_{32} & L_{33} & \cdots & L_{3\Omega} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ L_{\Omega 1} & L_{\Omega 2} & L_{\Omega 3} & \cdots & L_{\Omega\Omega} \end{pmatrix} = \begin{pmatrix} L_{11} & L_{12} & L_{13} & \cdots & L_{1\Omega} \\ L_{12} & L_{22} & L_{23} & \cdots & L_{2\Omega} \\ L_{13} & L_{23} & L_{33} & \cdots & L_{3\Omega} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ L_{1\Omega} & L_{2\Omega} & L_{3\Omega} & \cdots & L_{\Omega\Omega} \end{pmatrix}. \qquad (2.18) \]


The Onsager reciprocity theorem is based on the assumption of microscopic reversibility for systems near equilibrium, i.e., the coupling between a driving force of type j and the fluctuations of quantity k is identical with respect to switching j and k. The physical implications of this symmetry will become apparent in the next few sections, where we discuss several types of systems where coupling effects are key.

2.8 Thermophoresis

While the above discussion of Onsager reciprocity may seem rather abstract, coupling effects are vital in a variety of cases of practical interest. One example is thermophoresis, where the presence of a thermal gradient in a system can induce the flow of matter, even in the absence of a concentration gradient [6–9]. In other words, in a homogeneous, multicomponent solution, the presence of a temperature gradient can cause relative movements of the constituent components (i.e., demixing). Thermophoresis was originally discovered by the Swiss scientist Charles Soret in 1879 and is also known as the Soret effect [9]. The Soret effect is observed in a variety of fluids and colloidal suspensions. While thermophoresis typically involves larger particles migrating toward the cold end of the thermal gradient, as depicted in Figure 2.6, some macromolecular solutions exhibit the opposite trend [6].

The first step for applying irreversible thermodynamics is to identify the relevant thermodynamic affinities. The relevant affinities for the Soret effect can be determined from the differential form of the local entropy density in Figure 2.5, accounting for localized changes in internal energy and number density of the relevant species i:

\[ ds = \frac{1}{T}\, du - \sum_i \frac{\mu_i}{T}\, dn_i, \qquad (2.19) \]

Figure 2.6 Example of thermophoresis, also known as the Soret effect, which leads to a demixing of particles in the presence of a thermal gradient. Particles at the hot end of the thermal gradient have a greater kinetic energy, which leads to a greater number of collisions with other particles. Since larger particles take up more space, it is more likely that high-energy particles will collide with larger rather than smaller particles. As a result of this increased number of collisions, the larger particles are pushed away from the hot end of the gradient. Hence, the Soret effect typically involves the larger particles exhibiting a net migration toward the cold end of the gradient.


where u is the local energy density, μi is the chemical potential of species i, and ni is its local number density. The summation in Eq. (2.19) is over all species that constitute the material. Combining the definition of affinity from Eq. (2.1) with Eq. (2.19), the conjugate driving forces are simply the gradients in the prefactors in front of each differential on the right-hand side of the equation, i.e., any change or gradient in 1/T for energy transfer (du) and in μi/T for mass transfer (dni).

With these expressions for the affinities established, the next step is to write the relevant flux equations, following the form of Eq. (2.14). In a thermophoretic system, there are two relevant flux equations, viz., for the heat flux (J_Q),

\[ J_Q = L_{QQ} \nabla\!\left(\frac{1}{T}\right) - L_{QM} \nabla\!\left(\frac{\mu}{T}\right), \qquad (2.20) \]

and for the mass flux (J_M),

\[ J_M = L_{MQ} \nabla\!\left(\frac{1}{T}\right) - L_{MM} \nabla\!\left(\frac{\mu}{T}\right), \qquad (2.21) \]

assuming a single diffusing species. The kinetic coefficients in Eqs. (2.20) and (2.21) include both the direct coefficients (L_QQ and L_MM) and the coupling coefficients (L_QM and L_MQ), which correspond to the diagonal and off-diagonal terms, respectively, of the kinetic coefficient matrix in Eq. (2.15). Eqs. (2.20) and (2.21) are the general flux equations for the Soret effect, assuming a linear purely resistive system.

Eqs. (2.20) and (2.21) can be used to study, for example, the steady state of demixing which occurs when there is no further flow of matter, i.e., when the mass flux is zero:

\[ J_M = L_{MQ} \nabla\!\left(\frac{1}{T}\right) - L_{MM} \nabla\!\left(\frac{\mu}{T}\right) = 0. \qquad (2.22) \]

From Eq. (2.22), we have

\[ L_{MQ} \nabla\!\left(\frac{1}{T}\right) = L_{MM} \nabla\!\left(\frac{\mu}{T}\right), \qquad (2.23) \]

which can be rewritten as

\[ \nabla\!\left(\frac{\mu}{T}\right) = \frac{L_{MQ}}{L_{MM}} \nabla\!\left(\frac{1}{T}\right). \qquad (2.24) \]


Inserting this steady state condition back into the heat flux equation of Eq. (2.20),

\[ J_Q = L_{QQ} \nabla\!\left(\frac{1}{T}\right) - L_{QM} \frac{L_{MQ}}{L_{MM}} \nabla\!\left(\frac{1}{T}\right) = \left( L_{QQ} - \frac{L_{QM} L_{MQ}}{L_{MM}} \right) \nabla\!\left(\frac{1}{T}\right). \qquad (2.25) \]

Assuming validity of the Onsager reciprocity theorem, L_QM = L_MQ, and the number of independent kinetic coefficients is reduced by one. Hence, Eq. (2.25) becomes

\[ J_Q = \left( L_{QQ} - \frac{L_{MQ}^2}{L_{MM}} \right) \nabla\!\left(\frac{1}{T}\right). \qquad (2.26) \]

The assumption of Onsager reciprocity implies an equivalent mechanism for inducing a mass flux from a heat gradient as for inducing a heat flux from a mass gradient. The result in Eq. (2.26) gives the steady-state heat flux that corresponds to a vanishing mass flux. An analogous procedure can be followed to calculate the nonzero mass flux that would correspond to a zero heat flux. Note that in the limit of a vanishing coupling coefficient, i.e., L_QM = L_MQ = 0, Eq. (2.26) reduces to exactly the refined version of Fourier's law presented in Eq. (2.8), where L_QQ = kT². The corresponding entropy production can be calculated using Eq. (2.9).

The physical origin of thermophoresis is an area of active research in a variety of systems, including polymer solutions and colloidal suspensions. Thermophoresis can be used, e.g., in the design of microfluidic particle separators [6].
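The effect of the coupling term at the zero-mass-flux steady state is to reduce the effective thermal conductivity relative to the uncoupled case. A numerical sketch with made-up kinetic coefficients (all values are illustrative, not measured):

```python
def steady_state_heat_flux(l_qq, l_mq, l_mm, grad_inv_t):
    """Heat flux at zero mass flux, Eq. (2.26), assuming Onsager
    reciprocity L_QM = L_MQ."""
    return (l_qq - l_mq ** 2 / l_mm) * grad_inv_t

grad_inv_t = -1.0e-6          # gradient of 1/T along x (illustrative)
l_qq, l_mm = 1.0e7, 2.0e3     # direct kinetic coefficients (illustrative)

j_coupled = steady_state_heat_flux(l_qq, 3.0e3, l_mm, grad_inv_t)
j_fourier = steady_state_heat_flux(l_qq, 0.0, l_mm, grad_inv_t)  # Eq. (2.8) limit

# Mass-heat coupling reduces the magnitude of the steady-state heat flux
# relative to the uncoupled (pure Fourier) case:
print(abs(j_coupled) < abs(j_fourier))   # True
```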

2.9 Thermoelectric Materials

Another example of coupling is the thermoelectric effect, also known as the Seebeck effect, where a thermal gradient can be converted into an electrical current [10,11]. Here, the appropriate flux equations incorporate both heat and electrical affinities. From Figure 2.5, the relevant affinities can be obtained from:

\[ ds = \frac{1}{T}\, du - \frac{\mu_e}{T}\, dn_e, \qquad (2.27) \]


where μe is the electrochemical potential and ne is the local number density of electrons. The affinity for the electrical term is therefore given by any change or gradient in μe/T, and the relevant linear flux equations are:

\[ J_Q = L_{QQ} \nabla\!\left(\frac{1}{T}\right) - L_{QE} \nabla\!\left(\frac{\mu_e}{T}\right), \qquad (2.28) \]

and

\[ J_E = L_{EQ} \nabla\!\left(\frac{1}{T}\right) - L_{EE} \nabla\!\left(\frac{\mu_e}{T}\right). \qquad (2.29) \]

Similar manipulations of the flux equations can be made as in the previous example of the Soret effect. In practice, the Seebeck effect leads to conversion of a thermal gradient into electricity at the junction of different types of materials. This effect is used, for example, in the design of thermocouples. A schematic diagram of such a system is shown in Figure 2.7. The strength of the coupling effect between the thermal gradient and electrical potential gradient is given by the Seebeck coefficient, ε, defined as the voltage produced (E_emf) per unit of temperature difference, ΔT:

\[ E_{\mathrm{emf}} = -\varepsilon\, \Delta T. \qquad (2.30) \]

The Seebeck coefficient is also known as the thermoelectric power of the material. In the design of new thermoelectric materials, the value of the Seebeck coefficient should be maximized to increase the coupling effect. The Peltier effect is a type of thermoelectric effect in which heat is emitted or absorbed when an electrical current passes across a junction between two materials. The Peltier effect is used to make thermoelectric

Figure 2.7 With the Seebeck effect, an electrical potential gradient (voltage, ΔΦ) can be obtained in the presence of a thermal gradient.


Figure 2.8 The Thomson effect.

heat pumps or cooling devices. When comparing the Seebeck and Peltier effects, the importance of the Onsager reciprocity theorem should be apparent. With the Seebeck effect, an electrical current is induced by a thermal gradient. With the Peltier effect, a thermal gradient is induced by an electrical current. If the Onsager reciprocity theorem is valid, then the strength of these two coupling effects should be equal, i.e., L_QE = L_EQ in Eqs. (2.28) and (2.29).

Another type of thermoelectric phenomenon is the Thomson effect, which involves the evolution of heat as an electrical current passes through a material with a temperature gradient along its length. This transfer of heat is superimposed on the production of heat associated with standard electrical resistance, i.e., the Thomson heat is superimposed on the more commonly known Joule heat (i.e., resistive heat). With the Thomson effect, the Seebeck coefficient is also a function of temperature and therefore a function of position along the length of the material. A schematic diagram of a system exhibiting the Thomson effect is shown in Figure 2.8.
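Eq. (2.30) makes thermocouple voltage estimates a one-line calculation. The Seebeck coefficient below (~41 µV/K, roughly that of a type-K thermocouple near room temperature) is an assumed representative value, not taken from the text:

```python
def seebeck_voltage(seebeck_coefficient, delta_t):
    """Thermoelectric voltage E_emf = -epsilon * delta_T (Eq. 2.30)."""
    return -seebeck_coefficient * delta_t

EPSILON = 41e-6   # V/K; assumed value, approximately type-K near room temperature

# A 100 K temperature difference yields only a few millivolts, which is why
# thermocouple readouts require sensitive voltage measurement.
e_emf = seebeck_voltage(EPSILON, 100.0)
print(abs(e_emf))   # about 4.1e-3 V
```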

2.10 Electromigration

Another example of coupling in an irreversible process is electromigration, which involves the transport of mass in an electrical potential gradient [12], as depicted in Figure 2.9. Electromigration is a result of momentum transfer between conducting electrons and diffusing atoms,

Figure 2.9 With electromigration, mass transfer is induced by the presence of an electrical potential gradient. Electromigration occurs due to momentum transfer from a moving electron to a neighboring ion, which knocks the ion from its original position. Over an extended period of time, a large number of ions can be moved from their original positions as a result of this momentum transfer.


especially in metals and semiconductors. Here the relevant flux equations involve both the mass and electrical affinities:

$$J_M = -L_{MM}\, \nabla\!\left(\frac{\mu_M}{T}\right) - L_{ME}\, \nabla\!\left(\frac{\mu_e}{T}\right), \qquad (2.31)$$

and

$$J_E = -L_{EM}\, \nabla\!\left(\frac{\mu_M}{T}\right) - L_{EE}\, \nabla\!\left(\frac{\mu_e}{T}\right). \qquad (2.32)$$

Electromigration is an important consideration in the design of integrated circuits for microelectronics, since the resulting mass transfer can lower circuit reliability.
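As a sketch of how these coupled linear flux equations behave, the snippet below evaluates Eqs. (2.31) and (2.32) as a matrix-vector product. The kinetic coefficients and affinities are made-up illustrative values, not material data; the Onsager symmetry $L_{ME} = L_{EM}$ is built into the matrix.

```python
import numpy as np

# Hypothetical kinetic coefficients for Eqs. (2.31)-(2.32); the off-diagonal
# entries are equal by Onsager reciprocity (L_ME = L_EM).
L = np.array([[2.0, 0.5],
              [0.5, 4.0]])
assert np.allclose(L, L.T)  # reciprocity: the coefficient matrix is symmetric

# Made-up affinities: gradients of mu_M/T and mu_e/T (arbitrary units).
grad = np.array([0.1, -0.3])

# Linear, purely resistive flux equations: each flux is a weighted sum of
# ALL affinities, which is what produces the coupling between mass and charge.
J_M, J_E = -L @ grad
print(J_M, J_E)
```

Note that the mass flux $J_M$ is nonzero even when the chemical-potential affinity alone would be small, because the electrical affinity contributes through the off-diagonal coefficient.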

2.11 Piezoelectric Materials

Piezoelectric materials exhibit a linear coupling between mechanical stress and electrical potential. The piezoelectric effect was first discovered by Jacques and Pierre Curie in 1880 and is defined as the production of an electrical current from an applied mechanical stress. As a result of Onsager reciprocity, piezoelectric materials also exhibit a reverse piezoelectric effect, in which a mechanical strain develops from application of an electric field. The piezoelectric effect is observed in many crystalline materials lacking inversion symmetry, including ceramics such as barium titanate and even some semi-crystalline polymers. With piezoelectric materials, the relevant flux equations involve mechanical strain, e.g.,

$$J_V = L_{VV}\, \nabla\!\left(\frac{P}{T}\right) - L_{VE}\, \nabla\!\left(\frac{\mu_e}{T}\right) \qquad (2.33)$$

for the volume flux, and

$$J_E = L_{EV}\, \nabla\!\left(\frac{P}{T}\right) - L_{EE}\, \nabla\!\left(\frac{\mu_e}{T}\right) \qquad (2.34)$$

for the electrical flux, where $P$ is pressure. In the case of Eqs. (2.33) and (2.34), the stress is assumed to be a hydrostatic pressure, and the material response is assumed to be isotropic. More sophisticated forms of these equations can be written to account for non-hydrostatic loading and anisotropic material responses [13]. Piezoelectric materials are used for a variety of practical applications, including sensors and actuators.


2.12 Summary

Nearly all real processes are irreversible, since they take place on a finite time scale and involve a departure from thermodynamic equilibrium. Irreversible thermodynamics is a phenomenological extension of classical reversible thermodynamics that seeks to capture the nonequilibrium nature of irreversible processes. The degree of disequilibrium of a system is characterized by its thermodynamic affinity, which provides a driving force for equilibration. A nonzero affinity produces a nonzero flux of the relevant thermodynamic quantity as the system is driven toward equilibrium. Irreversible processes generate positive entropy, where the rate of entropy production is equal to the product of the associated affinity and flux. Since the entropy of the universe increases as the result of an irreversible process, the original state of the universe can never be recovered, in accordance with the Second Law. Irreversible thermodynamics typically involves three simplifying assumptions:
• Purely Resistive System: The fluxes at a given instant depend only on the values of the affinities at that instant. In other words, the fluxes have no memory of previous states of the system.
• Linearity: When the affinities are small, i.e., when the departure from equilibrium is small, the quadratic and higher-order terms of the flux equations can be neglected. In other words, the Maclaurin series that describes the functional relationship between the flux and the affinities can be truncated after the linear term.
• Onsager Reciprocity: For a linear purely resistive system, Onsager reciprocity defines the inherent symmetry of the kinetic coefficient matrix. This assumption is based on the microscopic reversibility of the system near equilibrium and is applicable to a wide range of phenomena with coupling between different types of driving forces and fluxes.

Irreversible thermodynamics is broadly applicable to a variety of materials processes which cannot be described using standard reversible thermodynamics. Several examples include:
• Thermophoresis (i.e., the Soret effect), where a thermal gradient in a material gives rise to a chemical concentration gradient.
• Thermoelectricity (i.e., the Seebeck effect), where a thermal gradient gives rise to an electrical current.
• Electromigration, where an electrical current gives rise to a concentration gradient.
• Piezoelectricity, in which mechanical work on the system gives rise to an electrical current.


Exercises

(2.1) A quasi-static process occurs when the system is maintained in a state of internal equilibrium throughout that process. All reversible processes are also quasi-static processes. However, the definition of a quasi-static process does not require $\Delta S = 0$. In other words, a quasi-static process does not require that the thermodynamic state of the universe remain unchanged as a result of the process.
(a) What happens to the surroundings in a reversible process vs. a quasi-static process?
(b) Give an example of a reversible process. Why is it reversible?
(c) Give an example of a process that is quasi-static but also irreversible. Why is it quasi-static? Why is it irreversible?
(d) Give an example of a process that is irreversible and not quasi-static. Explain why it is irreversible and not quasi-static.

(2.2) A thermocouple is a device used to measure temperature differences based on the Seebeck effect. Figure 2.7 shows a schematic diagram of a thermocouple, which consists of wires made from two different materials. If the two junctions (BA and AB) are at different temperatures, then an electrical potential difference (i.e., a voltage) will be generated between points 1 and 2 in the figure. The measured voltage, ΔΦ, in the absence of current flow ($J_E = 0$), is related to the temperature difference, ΔT, between the two junctions. The Seebeck coefficient, $\varepsilon_{AB}$, is given by the equation in Figure 2.7 and depends on the choice of the two materials, A and B. Note that $\varepsilon_{AB}$ is a property of the combination of the two materials, A and B. However, in the literature, thermoelectric properties are often tabulated for individual materials. In such tabulations, the Seebeck coefficient is given for the combination of the individual material and a common standard, such as platinum or lead. The temperature dependence of the Seebeck coefficient, $\varepsilon_{AB}$, can often be expressed in the empirical form $a + bT$, where $T$ is the temperature and $a$ and $b$ are constants. For example, $\varepsilon_{\mathrm{Cu}} = 1.34 + 0.0094T$ for copper and $\varepsilon_{\mathrm{Fe}} = 17.15 - 0.0482T$ for iron [14], where $T$ is expressed in units of °C and the $\varepsilon$ values are in units of μV/°C. To obtain the Seebeck coefficient, $\varepsilon_{AB}$, for a combination of two different materials, simply subtract: $\varepsilon_{AB} = \varepsilon_A - \varepsilon_B$. Using the information above, what is the voltage produced in a copper-iron thermocouple when one junction is submerged in a bath of ice water at 0 °C and the other is submerged in boiling water at 100 °C?
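One way to sanity-check a hand calculation for this kind of problem is to integrate the couple's Seebeck coefficient over the temperature interval numerically. The sketch below uses the empirical coefficients quoted above, taken at face value, and a simple trapezoidal rule; the helper names are ours, not from the text.

```python
# Integrate a temperature-dependent Seebeck coefficient over [T1, T2].
# Empirical forms quoted above: eps in microvolts per deg C, T in deg C.
def eps_cu(T):
    return 1.34 + 0.0094 * T

def eps_fe(T):
    return 17.15 - 0.0482 * T

def eps_cufe(T):
    # Seebeck coefficient of the couple: eps_AB = eps_A - eps_B
    return eps_cu(T) - eps_fe(T)

def thermocouple_voltage(T1, T2, n=1000):
    """Trapezoidal integral of eps_cufe from T1 to T2, in microvolts."""
    dT = (T2 - T1) / n
    total = 0.0
    for i in range(n):
        a, b = T1 + i * dT, T1 + (i + 1) * dT
        total += 0.5 * (eps_cufe(a) + eps_cufe(b)) * dT
    return total

print(thermocouple_voltage(0.0, 100.0))  # microvolts
```

Because $\varepsilon_{AB}(T)$ is linear in $T$ here, the trapezoidal rule is exact, and the result agrees with the closed-form integral of $a + bT$.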


(2.3) Give an example of an irreversible thermodynamic process where the "purely resistive" approximation is invalid. What is the origin of the memory effect linking the flux to an affinity at some previous time? What governs the time scale of this memory effect?

(2.4) An isolated rod of a thermophoretic material exhibits rapid diffusion of a species. A constant thermal gradient is imposed along the length of the rod.
(a) Write the relevant linear flux equations for the heat flux ($J_Q$) and the mass flux ($J_M$) in terms of the appropriate kinetic coefficients and affinities.
(b) Write an equation for the thermal conductivity ($k$) of the rod when the system achieves steady state concentrations (i.e., when $J_M = 0$) but without assuming Onsager reciprocity.
(c) Repeat question (b) above, now assuming that Onsager reciprocity is valid.
(d) Write an equation for the rate of entropy production of the system under steady state conditions.
(e) Based on your answer from question (d) above, what does the second law of thermodynamics tell you about the algebraic sign of the thermal conductivity ($k$)? Explain your reasoning.

(2.5) Consider a material that exhibits a combination of thermophoretic, thermoelectric, and electromigration effects. Write the relevant linear flux equations for heat, mass, and electrical charge, assuming Onsager reciprocity and purely resistive behavior. Do not solve the flux equations; just set up the problem.

(2.6) Suppose that the system in Exercise (2.5) is far from equilibrium so that the linear approximation is invalid. Repeat the problem, now incorporating quadratic terms in the flux equations.

(2.7) Consider a material having an imposed thermal gradient along the x direction. If an external magnetic field is imposed along the y direction, a new thermal gradient can be induced along the mutually orthogonal direction, i.e., along the z direction. This is an example of the thermomagnetic effect. Using the irreversible thermodynamic formalism of Onsager, construct a set of equations to describe the coupling of thermal and magnetic effects. (Hint: See Ref. [10].)

(2.8) The galvanomagnetic effect is a phenomenon analogous to the thermomagnetic effect, except with coupling between electrical and magnetic fields [15]. For example, if an electrical current is imposed along the x direction while simultaneously applying a


magnetic field along the y direction, then an electrical potential difference (i.e., a voltage) arises along the direction orthogonal to both, i.e., along the z direction. This particular galvanomagnetic interaction was discovered by Edwin Hall in 1879 and is known as the Hall effect. Using the concepts of irreversible thermodynamics, construct a set of coupled flux equations that describe the Hall effect. What properties of the material would lead to an enhanced Hall effect, i.e., an enhanced coupling between electrical and magnetic fluxes?

References
[1] H. C. Longuet-Higgins and M. E. Fisher, "Lars Onsager: November 27, 1903 - October 5, 1976," J. Stat. Phys. 78, 605 (1995).
[2] S. R. de Groot and P. Mazur, Non-Equilibrium Thermodynamics, Dover (1984).
[3] L. Onsager, "Reciprocal Relations in Irreversible Processes. I," Phys. Rev. 37, 405 (1931).
[4] L. Onsager, "Reciprocal Relations in Irreversible Processes. II," Phys. Rev. 38, 2265 (1931).
[5] H. B. Callen, Thermodynamics and an Introduction to Thermostatistics, Wiley (1985).
[6] R. Piazza, "Thermophoresis: Moving Particles with Thermal Gradients," Soft Matter 4, 1740 (2008).
[7] G. S. McNab and A. Meisen, "Thermophoresis in Liquids," J. Colloid Interface Sci. 44, 339 (1973).
[8] R. Piazza and A. Parola, "Thermophoresis in Colloidal Suspensions," J. Phys.: Condens. Matter 20, 153102 (2008).
[9] J. K. Platten, "The Soret Effect: A Review of Recent Experimental Results," J. Appl. Mech. 73, 5 (2006).
[10] H. B. Callen, "The Application of Onsager's Reciprocal Relations to Thermoelectric, Thermomagnetic, and Galvanomagnetic Effects," Phys. Rev. 73, 1349 (1948).
[11] H. Littman and B. Davidson, "Theoretical Bound on the Thermoelectric Figure of Merit from Irreversible Thermodynamics," J. Appl. Phys. 32, 217 (1961).
[12] P. S. Ho and T. Kwok, "Electromigration in Metals," Rep. Prog. Phys. 52, 301 (1989).
[13] H. Jiang, "Thermodynamic Bounds and General Properties of Optimal Efficiency and Power in Linear Responses," Phys. Rev. E 90, 042126 (2014).
[14] D. V. Ragone, Thermodynamics of Materials, Volume II, Wiley (1995).
[15] S. W. Angrist, "Galvanomagnetic and Thermomagnetic Effects," Sci. Am. 205, 124 (1961).


CHAPTER 3

Fick’s Laws of Diffusion

3.1 Fick's First Law

The laws of diffusion were introduced by the German physician Adolf Eugen Fick (1829–1901), one of the original pioneers in the field of biophysics [1,2]. Fick's monograph, Medical Physics [3], was the first book of its kind at the intersection of physics and medicine, covering topics including the physics of muscle contractions, the hydrodynamics of blood flow and mixing of air in the lungs, and the thermodynamic requirements for maintaining a constant body temperature. Fick's theory of diffusion was published in 1855 [4], about 75 years before Onsager's development of irreversible thermodynamics [5,6]. Fick based his theory on an analogy to Fourier's law of heat conduction, which had been published in 1822 [7]. Fick assumed that the flux of diffusing matter is proportional to the concentration gradient of the diffusing species. He described the proportionality constant as a material-dependent property, which today we know as the diffusion coefficient, D. While Fick's first law was proposed empirically, it can be deduced from irreversible thermodynamics by considering the following differential of the local entropy (s),

$$ds = -\frac{\mu}{T}\, dn, \qquad (3.1)$$

where $\mu$ is the chemical potential of the diffusing species, $T$ is the absolute temperature, and $n$ is the local number density of diffusing particles. Following the formulation of irreversible thermodynamics in Chapter 2, the flux resulting from a gradient in $\mu/T$ can be expressed as

$$J = -L\, \nabla\!\left(\frac{\mu}{T}\right), \qquad (3.2)$$

Materials Kinetics, ISBN 978-0-12-823907-0, https://doi.org/10.1016/B978-0-12-823907-0.00026-1. © 2021 Elsevier Inc. All rights reserved.


where $L$ is the kinetic coefficient for this irreversible process. Assuming that the temperature is homogeneous, Eq. (3.2) can be simplified as

$$J = -\frac{L}{T}\, \nabla\mu. \qquad (3.3)$$

In order to obtain Fick's first law, we define the diffusion coefficient, $D$, as proportional to $L/T$ and assume that the gradient in the chemical potential is predominantly governed by the gradient in the concentration of the diffusing species, $\nabla C$. Eq. (3.3) can then be written as

$$J = -D\, \nabla C. \qquad (3.4)$$

Eq. (3.4) is Fick's first law of diffusion. Within the context of diffusion studies, Fick's first law is also called the flux equation, since it describes the flux of matter that results from a concentration gradient, $\nabla C$. The thermodynamic driving force for the flux is the concentration gradient itself, and the kinetics are controlled by the diffusion coefficient, $D$, also known as the diffusivity. The negative sign on the right-hand side of Eq. (3.4) is included because the flux of matter acts to reduce (rather than enhance) the concentration gradient, $\nabla C$. The flux, $J$, is nonzero as long as both the thermodynamic driving force, $\nabla C$, and the diffusion coefficient, $D$, governing the kinetics of the process, are nonzero. The diffusion coefficient has units of area per time, typically expressed as m²/s or cm²/s. Concentration, $C$, is usually expressed in terms of amount of material per unit volume, i.e., with units of mol/m³ or mol/cm³. Following Eq. (3.4), the resulting flux has units of amount of material per unit area per unit time, e.g., mol·m⁻²·s⁻¹ or mol·cm⁻²·s⁻¹. Physically, the flux is the amount of material passing through a plane of unit area during a given unit of time. In Cartesian coordinates in three dimensions (x, y, and z), the gradient of the concentration is a vector given by

$$\nabla C = \frac{\partial C}{\partial x}\hat{x} + \frac{\partial C}{\partial y}\hat{y} + \frac{\partial C}{\partial z}\hat{z}, \qquad (3.5)$$

where $\hat{x}$, $\hat{y}$, and $\hat{z}$ are unit vectors along each of the three mutually orthogonal axes. Combining Eq. (3.4) with Eq. (3.5), the flux equation can be expressed as

$$J = -D\frac{\partial C}{\partial x}\hat{x} - D\frac{\partial C}{\partial y}\hat{y} - D\frac{\partial C}{\partial z}\hat{z}. \qquad (3.6)$$

In one dimension, the flux can be expressed as a scalar quantity, and Fick's first law can be written in its simplest form as

$$J = -D\frac{dC}{dx}. \qquad (3.7)$$
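As a minimal numerical illustration of the one-dimensional flux law, consider a thin slab with a linear concentration profile between its two faces. All values below are made up for illustration; only the structure of the calculation matters.

```python
# Fick's first law in one dimension: J = -D dC/dx.
# Illustrative values: D typical of a small solute in a liquid.
D = 1.0e-9                    # diffusion coefficient, m^2/s
C_left, C_right = 2.0, 1.0    # face concentrations, mol/m^3
thickness = 1.0e-3            # slab thickness, m

dC_dx = (C_right - C_left) / thickness   # concentration gradient, mol/m^4
J = -D * dC_dx                           # flux, mol m^-2 s^-1

# The sign works out positive: matter flows from the high-concentration
# face toward the low-concentration face, reducing the gradient.
print(J)
```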

3.2 Fick's Second Law

Fick's second law can be derived from Fick's first law and the conservation of matter. Consider the volume element in Figure 3.1, centered at point (x, y, z). The volume element has dimensions 2dx × 2dy × 2dz. Let us first consider the flux of matter along the x direction. The rate of matter entering the volume element through the x − dx plane is

$$4\,dy\,dz\left(J_x - \frac{\partial J_x}{\partial x}dx\right), \qquad (3.8)$$

where $J_x$ is the flux in the x direction at the center of the volume element, $\partial J_x/\partial x$ is the change in $J_x$ with respect to the x coordinate, and 4 dy dz is the area of the face of the volume element at the x − dx plane. Likewise, the rate at which matter exits the volume element through the opposite x + dx plane is given by

$$4\,dy\,dz\left(J_x + \frac{\partial J_x}{\partial x}dx\right). \qquad (3.9)$$

Figure 3.1 Incoming and outgoing flux of a diffusing species in the x direction, where $J_x$ is the flux in the x direction at the center of the volume element, $\partial J_x/\partial x$ is the change in $J_x$ with respect to the x coordinate, and 4 dy dz is the area of the volume element at the yz plane orthogonal to the x direction. The net flux is equal to the flux entering through the left plane (at x − dx) minus the flux exiting through the right plane (at x + dx).


Subtracting Eq. (3.9) from Eq. (3.8) gives the net rate of material entering the volume element along the x direction, i.e., the rate of matter entering the volume element at x − dx minus the rate of matter exiting the volume element at x + dx:

$$4\,dy\,dz\left(J_x - \frac{\partial J_x}{\partial x}dx\right) - 4\,dy\,dz\left(J_x + \frac{\partial J_x}{\partial x}dx\right) = -8\,dx\,dy\,dz\,\frac{\partial J_x}{\partial x}. \qquad (3.10)$$

The same procedure can be applied along the y direction, as depicted in Figure 3.2. Here, the rate of matter exiting the volume element through the bottom y + dy plane is subtracted from the rate of matter entering the volume element through the top y − dy plane. The resulting equation is:

$$4\,dx\,dz\left(J_y - \frac{\partial J_y}{\partial y}dy\right) - 4\,dx\,dz\left(J_y + \frac{\partial J_y}{\partial y}dy\right) = -8\,dx\,dy\,dz\,\frac{\partial J_y}{\partial y}. \qquad (3.11)$$

Repeating the procedure in the z direction, as depicted in Figure 3.3, gives

$$4\,dx\,dy\left(J_z - \frac{\partial J_z}{\partial z}dz\right) - 4\,dx\,dy\left(J_z + \frac{\partial J_z}{\partial z}dz\right) = -8\,dx\,dy\,dz\,\frac{\partial J_z}{\partial z}. \qquad (3.12)$$

Figure 3.2 Incoming and outgoing flux of a diffusing species in the y direction, where $J_y$ is the flux in the y direction at the center of the volume element, $\partial J_y/\partial y$ is the change in $J_y$ with respect to the y coordinate, and 4 dx dz is the area of the volume element at the xz plane orthogonal to the y direction. The net flux is equal to the flux entering through the top plane (at y − dy) minus the flux exiting through the bottom plane (at y + dy). Note that the +y direction is downward in this figure.


Figure 3.3 Incoming and outgoing flux of a diffusing species in the z direction, where $J_z$ is the flux in the z direction at the center of the volume element, $\partial J_z/\partial z$ is the change in $J_z$ with respect to the z coordinate, and 4 dx dy is the area of the volume element at the xy plane orthogonal to the z direction. The net flux is equal to the flux entering through the front plane (at z − dz) minus the flux exiting through the back plane (at z + dz). Note that the +z direction is into the page.

The net rate of change in concentration is therefore the sum of Eqs. (3.10)–(3.12), which is given by

$$-8\,dx\,dy\,dz\left(\frac{\partial J_x}{\partial x} + \frac{\partial J_y}{\partial y} + \frac{\partial J_z}{\partial z}\right). \qquad (3.13)$$

However, in order to satisfy conservation of matter, Eq. (3.13) must also be equal to the time rate of change of concentration over the volume of the element. Since the volume of the element is equal to 8 dx dy dz, this rate of change is

$$8\,dx\,dy\,dz\,\frac{\partial C}{\partial t}. \qquad (3.14)$$

Equating Eq. (3.13) with Eq. (3.14), the volume factors cancel, giving

$$\frac{\partial C}{\partial t} = -\left(\frac{\partial J_x}{\partial x} + \frac{\partial J_y}{\partial y} + \frac{\partial J_z}{\partial z}\right). \qquad (3.15)$$

Using Fick's first law from Section 3.1, the fluxes in the three directions are

$$J_x = -D\frac{\partial C}{\partial x}, \qquad J_y = -D\frac{\partial C}{\partial y}, \qquad J_z = -D\frac{\partial C}{\partial z}. \qquad (3.16)$$


Substituting Eq. (3.16) into Eq. (3.15), we obtain

$$\frac{\partial C}{\partial t} = D\left(\frac{\partial^2 C}{\partial x^2} + \frac{\partial^2 C}{\partial y^2} + \frac{\partial^2 C}{\partial z^2}\right). \qquad (3.17)$$

This is Fick's second law of diffusion, also known simply as the diffusion equation. Expressed in one dimension, Fick's second law can be simplified as

$$\frac{\partial C}{\partial t} = D\frac{\partial^2 C}{\partial x^2}. \qquad (3.18)$$

Here we have assumed that the diffusivity is isotropic, i.e., that the diffusion coefficient is the same along all three directions. In the case of anisotropic diffusion, Eq. (3.17) can be generalized as

$$\frac{\partial C}{\partial t} = \frac{\partial}{\partial x}\left(D_x\frac{\partial C}{\partial x}\right) + \frac{\partial}{\partial y}\left(D_y\frac{\partial C}{\partial y}\right) + \frac{\partial}{\partial z}\left(D_z\frac{\partial C}{\partial z}\right), \qquad (3.19)$$

where $D_x$, $D_y$, and $D_z$ are the diffusivities along the x, y, and z directions, respectively. If the diffusion is isotropic, then $D = D_x = D_y = D_z$, and Eq. (3.17) is recovered. In its most general form, Fick's second law can be written as

$$\frac{\partial C}{\partial t} = \nabla\cdot(D\,\nabla C), \qquad (3.20)$$

where $\nabla\cdot$ is the divergence operator from vector calculus. In the case of isotropic diffusivity, the diffusion coefficient $D$ can be pulled in front of the divergence operator, giving

$$\frac{\partial C}{\partial t} = D\,\nabla^2 C. \qquad (3.21)$$

Here, $\nabla^2 = \nabla\cdot\nabla$ is the Laplacian operator, equal to the divergence of the gradient. In three-dimensional Cartesian coordinates, the Laplacian of the concentration is given by

$$\nabla^2 C = \nabla\cdot\nabla C = \frac{\partial^2 C}{\partial x^2} + \frac{\partial^2 C}{\partial y^2} + \frac{\partial^2 C}{\partial z^2}, \qquad (3.22)$$

and Eq. (3.17) is again immediately recovered. Please do not confuse the Laplacian operator in Eq. (3.22) with the Laplace transform, which is a mathematical transformation used for solving differential equations. The Laplace transform will be used in the next chapter (Section 4.7) as one of several approaches for solving the diffusion equation.


Fick’s second law is the most widely applicable equation in the study of diffusion, as it incorporates both the time and position dependence of concentration. The diffusion equation can be solved analytically for a wide variety of boundary and initial conditions, as will be considered in Chapter 4. It can also be applied to study multi-component diffusion, which will be covered in Chapter 5. For especially complicated conditions, the diffusion equation can also be solved numerically. Numerical solutions using the finite difference method will be addressed in Chapter 6. Additional solutions to the diffusion equation under various conditions are provided in the excellent book by Crank [8].
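Although numerical methods are deferred to Chapter 6, a minimal explicit finite-difference sketch of the one-dimensional diffusion equation, Eq. (3.18), is easy to state. The periodic boundary conditions and all parameter values below are illustrative assumptions, not a prescription from the text.

```python
import numpy as np

D = 1.0
dx = 0.1
dt = 0.4 * dx**2 / D   # explicit scheme is stable for D*dt/dx**2 <= 1/2

x = np.arange(0.0, 1.0, dx)
C = np.zeros_like(x)
C[len(x) // 2] = 1.0   # initial condition: solute concentrated at one point

for _ in range(100):
    # Central-difference approximation of d2C/dx2, periodic boundaries.
    d2C_dx2 = (np.roll(C, 1) - 2.0 * C + np.roll(C, -1)) / dx**2
    C = C + D * dt * d2C_dx2   # forward-Euler step of dC/dt = D d2C/dx2

# Diffusion conserves the total amount of solute while spreading the peak.
print(C.sum(), C.max())
```

The conserved sum reflects the conservation-of-matter argument behind Eqs. (3.13)–(3.15): the discrete Laplacian only moves material between neighboring cells.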

3.3 Driving Forces for Diffusion

Fick's laws of diffusion consider the concentration gradient of the diffusing species as the sole thermodynamic driving force for diffusion. However, the flux and diffusion equations can be extended to include other driving forces using the methods of irreversible thermodynamics. For example, the application of an electrical potential gradient can act as an additional driving force for diffusion of a charged particle. This case is described by the Nernst-Planck equation, which is an extension of Fick's first law that also incorporates electrostatic forces acting on a charged species. Under a static electric field, the steady-state Nernst-Planck equation is given by

$$J = -D\left(\nabla C + \frac{Ze}{kT}C\,\nabla\varphi\right), \qquad (3.23)$$

where $Z$ is the valence of the ionic species, $e$ is the elementary charge, and $\nabla\varphi$ is the electrical potential gradient. Eq. (3.23) is a direct extension of Fick's first law in Eq. (3.4), which now considers two driving forces for diffusion: the concentration gradient and the electrostatic force due to the interaction between the electrical potential gradient and the charged diffusing species. Interaction effects between different types of gradients can also contribute to diffusion. As discussed in Section 2.10, electromigration results from interaction between an electrical potential gradient and a concentration gradient. Such interaction effects can lead to a net flux that acts to amplify rather than reduce a concentration gradient, as depicted in Figure 2.9. Likewise, the interaction of thermal gradients and concentration gradients can lead to thermophoresis (Section 2.8), where "uphill" diffusion can occur in the presence of a thermal gradient, as shown in Figure 2.6.
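As an illustrative sketch (every number below is an assumption, not data), the Fickian and electromigration contributions to the Nernst-Planck flux can be evaluated separately for a monovalent cation, with the field term written so that a positive ion drifts down the potential gradient:

```python
k_B = 1.380649e-23      # Boltzmann constant, J/K
e = 1.602176634e-19     # elementary charge, C

D = 1.0e-9              # diffusivity, m^2/s (illustrative)
T = 300.0               # temperature, K
Z = 1                   # valence of the diffusing cation

C = 100.0               # local concentration, mol/m^3
dC_dx = -1.0e4          # concentration gradient, mol/m^4
dphi_dx = -100.0        # electrical potential gradient, V/m

# The two contributions to the one-dimensional Nernst-Planck flux.
J_fick = -D * dC_dx
J_field = -D * (Z * e / (k_B * T)) * C * dphi_dx
J = J_fick + J_field

print(J_fick, J_field, J)
```

With these illustrative values the field term dominates the Fickian term by more than an order of magnitude, showing how an applied potential gradient can overwhelm (or oppose) the concentration-gradient driving force.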


Some other possible driving forces for diffusion include the presence of magnetic fields or mechanical stresses. When considering a diffusion problem, it is therefore important to account for all of the relevant affinities, which may include more than just the concentration gradient itself.

3.4 Temperature Dependence of Diffusion

The magnitude of the diffusion coefficient, $D$, depends on several factors, including the identity of the diffusing species, the chemistry and structure of the material in which the diffusion occurs, and the diffusion direction (for anisotropic materials). One of the most important factors influencing the diffusivity is temperature, $T$. The dependence of the diffusion coefficient on temperature follows the Arrhenius equation,

$$D = D_0 \exp\left(-\frac{H^*}{kT}\right), \qquad (3.24)$$

where $H^*$ is the activation barrier for diffusion, $k$ is Boltzmann's constant, and $D_0$ is the pre-exponential factor. The Arrhenius equation was proposed empirically in 1889 by the Swedish scientist Svante Arrhenius (1859–1927). Taking the natural logarithm of both sides of Eq. (3.24), the temperature dependence of diffusivity can be expressed as

$$\ln D = \ln D_0 - \frac{H^*}{k}\frac{1}{T}. \qquad (3.25)$$

Writing the equation in logarithmic form, as in Eq. (3.25), linearizes it, so that it can be plotted as in Figure 3.4. The slope of the $\ln D$ versus $1/T$ line is equal to $-H^*/k$. Hence, the activation barrier for the diffusion process, $H^*$, can be determined by simply multiplying the slope of the Arrhenius plot by $-k$. Note that sometimes the temperature dependence of diffusivity is plotted using a base-10 logarithm of $D$ rather than a natural logarithm. In the case of a $\log D$ versus $1/T$ plot, the activation barrier is given by the slope of the line multiplied by $-k \ln 10$. The y-intercept of the $\ln D$ versus $1/T$ line gives the natural logarithm of the pre-exponential factor, i.e., $\ln D_0$. The physical meaning of the pre-exponential factor in Eq. (3.24) can be elucidated by bringing it into the argument of the exponential, such that

$$D = \exp\left(-\frac{H^*}{kT} + \ln D_0\right). \qquad (3.26)$$


Figure 3.4 Arrhenius dependence of the diffusion coefficient on temperature. The slope of the line is equal to $-H^*/k$, where $H^*$ is the activation barrier for diffusion and $k$ is Boltzmann's constant.

Rewriting the argument using a common denominator of $kT$, we have

$$D = \exp\left(-\frac{H^* - kT\ln D_0}{kT}\right). \qquad (3.27)$$

Recalling Boltzmann's equation for entropy in Eq. (1.1), we can define $S^* = k \ln D_0$ as the entropy associated with the activation process. Thus, the pre-exponential factor $D_0$ gives us information about the number of possible activation pathways for diffusion to occur, and Eq. (3.27) becomes

$$D = \exp\left(-\frac{H^* - TS^*}{kT}\right). \qquad (3.28)$$

Noting the definition of Gibbs free energy from Section 1.2, $G^* = H^* - TS^*$ is the free energy of activation, as depicted in Figure 1.3. Physically, the activation free energy incorporates both enthalpic and entropic contributions to the activation barrier, i.e., both the enthalpy barrier that must be overcome and the number of available pathways for undergoing activation. Thus, Eq. (3.28) can be rewritten as simply

$$D = \exp\left(-\frac{G^*}{kT}\right). \qquad (3.29)$$

The above discussion assumes that there is only one underlying mechanism for diffusion, which gives rise to a single activation barrier, $H^*$, and a straight-line plot of $\ln D$ vs. $1/T$, as in Figure 3.4. However, in many systems there can be more than one mechanism governing diffusivity. In this case, the temperature dependence of the diffusion coefficient can be described by a sum of multiple Arrhenius functions. A common example is a material having two mechanisms for diffusion, one due to the material itself (the intrinsic mechanism) and one resulting from the presence of a defect or dopant (the extrinsic mechanism). In this case, the diffusivity-temperature relationship is described as the sum of two Arrhenius functions,

$$D = D_{0,1} \exp\left(-\frac{H_1^*}{kT}\right) + D_{0,2} \exp\left(-\frac{H_2^*}{kT}\right), \qquad (3.30)$$

having two distinct activation barriers ($H_1^*$ and $H_2^*$) and two different pre-exponential factors ($D_{0,1}$ and $D_{0,2}$) indicating two different entropic contributions to the activation process. Given the relative tradeoff of the enthalpic and entropic terms in Eq. (3.30), the higher-entropy mechanism becomes dominant at high temperatures, and the lower-entropy mechanism becomes dominant at low temperatures. Typically, the intrinsic mechanism is dominant at higher temperatures, while the extrinsic mechanism is dominant at lower temperatures. This is shown schematically in Figure 3.5, where the $\ln D$ vs. $1/T$ relationship shows two distinct linear regimes at different temperatures. The slopes in these two regimes correspond to $-H_1^*/k$ and $-H_2^*/k$, thereby giving the activation barriers for each mechanism.

Figure 3.5 Temperature dependence of the diffusion coefficient for a system having two different mechanisms for diffusion with two different activation barriers. The extrinsic mechanism has a lower activation barrier and is dominant at lower temperatures.
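The graphical analysis behind Figures 3.4 and 3.5 can be mimicked numerically. The sketch below uses an assumed, made-up barrier and prefactor to generate Arrhenius diffusivities from Eq. (3.24), then recovers $H^*$ and $D_0$ from a linear fit of $\ln D$ versus $1/T$ as described by Eq. (3.25).

```python
import numpy as np

k_B = 1.380649e-23     # Boltzmann constant, J/K
H_star = 1.0e-19       # assumed activation barrier, J (about 0.62 eV)
D0 = 1.0e-6            # assumed pre-exponential factor, m^2/s

T = np.linspace(500.0, 1000.0, 20)       # temperatures, K
D = D0 * np.exp(-H_star / (k_B * T))     # Arrhenius diffusivities, Eq. (3.24)

# Linear fit of ln D vs 1/T: slope = -H*/k, intercept = ln D0 (Eq. 3.25).
slope, intercept = np.polyfit(1.0 / T, np.log(D), 1)
H_fit = -slope * k_B        # activation barrier recovered from the slope
D0_fit = np.exp(intercept)  # pre-exponential factor from the intercept

print(H_fit, D0_fit)
```

Because the synthetic data are exactly Arrhenius, the fit recovers the inputs; with experimental data, deviations from a single straight line (as in Figure 3.5) would signal multiple diffusion mechanisms.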


3.5 Interdiffusion

If two materials of different chemistry are coupled together, an interdiffusion of two different species can occur, leading to chemical mixing near the boundary. An example of an interdiffusion process is shown in Figure 3.6. The initial system involves two coupled materials consisting of different species, with a well-defined boundary. Over time, particles from each material diffuse into the other, and the boundary becomes diffuse. If we denote the two materials as A and B, the fluxes of the individual diffusing species are:

$$J_A = -D_A\frac{dC_A}{dx} \qquad (3.31)$$

and

$$J_B = -D_B\frac{dC_B}{dx}, \qquad (3.32)$$

where $D_A$ and $D_B$ are the diffusion coefficients corresponding to species A and B, respectively, and $C_A$ and $C_B$ are their concentrations. The sum of the fluxes in Eqs. (3.31) and (3.32) yields:

$$J_A + J_B = -D_A\frac{dC_A}{dx} - D_B\frac{dC_B}{dx}. \qquad (3.33)$$

An important consideration for interdiffusion processes is whether the net flux is zero. A net flux of zero occurs when an equivalent number of A and B particles move in opposite directions. This happens when the mechanism for interdiffusion is a simple swapping of positions of the A and B particles.

Figure 3.6 Interdiffusion of two species with a zero net flux.


For the case of a zero net flux, as depicted in Figure 3.6, the total concentration of A and B atoms is constant at all positions $x$ and for all times $t$. Stated mathematically, the zero net flux condition implies that

$$C_B(x, t) = 1 - C_A(x, t). \qquad (3.34)$$

Therefore, substituting Eq. (3.34) into Eq. (3.33), we have

$$J_A + J_B = -D_A\frac{dC_A}{dx} - D_B\frac{d(1 - C_A)}{dx} = 0. \qquad (3.35)$$

It follows directly that the zero net flux condition can only be satisfied when the two diffusion coefficients of species A and species B are equal:

$$D_A = D_B. \qquad (3.36)$$

As a result, interdiffusion of two species with zero net flux requires only a single diffusion coefficient, and the concentration of one species can be determined from the concentration of the other species using Eq. (3.34). On the other hand, if D_A ≠ D_B, then there is a nonzero net flux such that the total concentration of A and B is no longer constant. In this case, both diffusion coefficients are required, i.e., the problem cannot be reduced to a single diffusion coefficient.

The most famous example of interdiffusion with unbalanced fluxes is the Kirkendall effect [9]. Originally discovered in 1947, the Kirkendall effect occurs when zinc and copper samples are joined. Kirkendall found that zinc diffuses into copper about three times as fast as copper diffuses into zinc, producing voids on the zinc side of the system. The alloying of zinc and copper produces brass, and owing to the faster diffusion of zinc into the copper, the brass-copper interface moves more rapidly than the zinc-brass interface. The Kirkendall effect is shown in Figure 3.7.

Another famous example of interdiffusion is ion-exchanged glass, wherein a glass sample having a large concentration of a small alkali ion, such as Li⁺ or Na⁺, is immersed in a molten salt bath containing a larger alkali ion, such as K⁺. This ion exchange process is shown graphically in Figure 3.8. When the larger invading alkali ions replace the smaller ions in the glass, a compressive stress is generated at the surface of the glass. In order to fracture the resulting chemically strengthened glass, this compressive stress must be overcome in addition to the native strength of the base glass material itself. Ion exchange can also be used to create anti-microbial surfaces through an exchange of Ag⁺ into the surface of the glass.


Figure 3.7 The Kirkendall effect, where zinc diffuses into copper more rapidly than copper diffuses into zinc, giving a nonzero net flux of atoms. As a result, the brass-copper interface moves more rapidly than the zinc-brass interface.

Figure 3.8 Ion exchange process for chemical strengthening of glass, in which small alkali ions in the glass are exchanged with larger alkali ions from the molten salt bath.

Ion-exchanged glasses will be covered in Chapter 12 and are discussed in detail by Varshneya and Mauro [10]. The ion exchange process is an example of zero net flux interdiffusion, since charge neutrality must be obeyed. Hence, the counter-diffusion of two types of alkali ions can be described using a single diffusion coefficient for the interdiffusion process.
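The cancellation of fluxes under the zero-net-flux condition is easy to check numerically. The sketch below uses an illustrative concentration profile and diffusion coefficient (not values from the text) and applies Fick's first law to the complementary profiles of Eq. (3.34) with a single shared D:

```python
import numpy as np

# Illustrative interdiffusion profile (hypothetical): C_B = 1 - C_A everywhere
x = np.linspace(-1.0, 1.0, 1001)           # position (arbitrary units)
C_A = 0.5 * (1.0 + np.tanh(x / 0.2))       # smooth, diffuse boundary
C_B = 1.0 - C_A                            # zero-net-flux condition, Eq. (3.34)

D = 1.0e-9                                 # single shared diffusion coefficient, Eq. (3.36)

# Fick's first law for each species, Eqs. (3.31) and (3.32)
J_A = -D * np.gradient(C_A, x)
J_B = -D * np.gradient(C_B, x)

# With D_A = D_B, the fluxes cancel pointwise, consistent with Eq. (3.35)
residual = np.max(np.abs(J_A + J_B))
print(residual)  # ~0
```

The same script with unequal diffusivities gives a nonzero residual, i.e., a net flux, which is the Kirkendall situation.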


3.6 Measuring Concentration Profiles

The study of diffusion depends on the ability to measure the relevant concentration profiles (i.e., concentration versus position) in a material. Typically, these measurements must be conducted for systems treated at different temperatures and times. Several experimental techniques are available to measure chemical concentration profiles in materials. Here, we will briefly describe a few of the more commonly used techniques for chemical profiling.

In the secondary ion mass spectrometry (SIMS) technique, the surface of the material is subjected to a pulsed beam of high-energy primary ions. During impact, the primary ion energy is transferred to the sample surface, which both releases and ionizes surface atoms from the sample. These secondary ions are accelerated into a mass analyzer, where they are separated according to their mass-to-charge ratios. This separation allows for simultaneous measurement of different types of secondary ions. As the primary ion beam probes more deeply into the sample, the concentration versus depth of each element near the surface of the sample can be obtained.

With the electron probe micro-analysis (EPMA, or simply "microprobe") technique, the sample is bombarded with a high-energy electron beam. The sample absorbs the energy and emits x-rays at wavelengths characteristic of each element in the sample. Upon calibrating with a suitable standard, the x-ray spectrum can be used to determine the concentrations of the various elements. The microprobe technique is particularly useful for non-destructive analysis of the chemical composition of small volumes, and for analyzing concentration profiles as a function of depth into the sample.

Another commonly used technique for measuring chemical concentration profiles is x-ray photoelectron spectroscopy (XPS). XPS is based on the photoelectric effect: the sample is irradiated with high-energy x-rays, and the x-ray bombardment leads to ejection of photoelectrons from atoms near the sample surface. The emission spectra are then analyzed to determine the identity and concentrations of each element. With XPS, most of the signal originates from the outer 1-10 nm of the sample surface. Deeper analysis (up to a few microns) can be conducted using XPS in conjunction with ion milling.

Details on these and other characterization techniques for measuring concentration profiles can be found in the ASM Handbook on Materials Characterization [11].


3.7 Tracer Diffusion

The diffusion of a species within a matrix containing the same species is known as self-diffusion. Self-diffusion can be challenging to measure, given the difficulty in distinguishing different atoms of the same species. The main method of measurement is by infusing the sample with radioactive tracer isotopes having identical chemistry to the stable isotope phase. With such tracer diffusion experiments, the concentration of the tracer isotopes can be measured as a function of depth, and the self-diffusivity is inferred from the resulting tracer concentration profile.

Radioactive tracer isotopes can be easily detected when they decay, since high-energy radiation is emitted during the decay process. If the half-life of the tracer is relatively small, but still large enough to allow for an experimental measurement before the tracer has vanished, the tracer atoms can be detected by their decay products: α, β, or γ rays.

The general steps for conducting a tracer diffusion experiment are as follows:

1. Select an appropriate radioactive isotope having a suitable half-life. Some radioactive isotopes commonly used in tracer diffusion experiments include:
   a. ²²Na, which has a half-life of 2.6 years and emits β radiation at 0.54 MeV;
   b. ¹¹⁰Ag, which has a half-life of 253 days and emits β radiation at 0.118 MeV; and
   c. ⁸³Rb, which has a half-life of 83 days and emits β radiation at 0.521 MeV.
2. Deposit a thin layer of the radioactive isotopes onto the surface of the host material.
3. Hold the system at the desired temperature for a suitable period of time.
4. Remove thin layers from the surface (chemically or mechanically).
5. Measure the radioactivity of each removed layer.
6. With the known half-life of the tracer isotope and the time since the deposition of the layer, calculate the concentration of tracer isotopes in each layer.
7. Repeat the measurement at each layer to construct a concentration profile of the tracer isotopes into the depth of the sample, as shown in Figure 3.9.


Figure 3.9 Self-diffusion can be measured using radioactive tracer isotopes. The resulting concentration profile of the tracer isotopes can be used to calculate the self-diffusion coefficient.

8. Fit the concentration profile with an appropriate solution of Fick's second law to extract the diffusion coefficient for the temperature and time used. Several useful solutions of Fick's second law will be presented in Chapter 4.
9. Repeat the experiment for other temperatures, collecting data points at each new temperature.
10. Using the data from each temperature, extract the activation barrier for self-diffusivity from the slope of the Arrhenius curve.

Figure 3.10 shows results from sodium tracer diffusion experiments conducted in a family of sodium borosilicate glasses by the Dieckmann group at Cornell University [12]. Measurements were performed using a ²²Na tracer as a function of temperature and glass composition. From these

Figure 3.10 Temperature dependence of sodium tracer diffusion coefficients in a series of sodium borosilicate glasses with the composition (Na2O)0.17(B2O3)x(SiO2)0.83-x (mol fraction). (Reproduced with permission from Ref. [12]).


results, an activation enthalpy of 60-85 kJ/mol is calculated for the self-diffusion of sodium in these systems. The activation barrier is found to increase with increasing concentration of B2O3 in the glass, indicating the important role of the matrix composition and structure in governing the self-diffusion process.
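Steps 9 and 10 above, collecting D at several temperatures and extracting the activation barrier from the Arrhenius slope, can be sketched as follows. The numbers here are synthetic (Q = 75 kJ/mol and D₀ = 10⁻⁶ m²/s, chosen only to fall within the range quoted above); they are not the measured values of Ref. [12]:

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

# Synthetic Arrhenius data: D = D0 * exp(-Q / (R*T)), with assumed Q and D0
Q_true = 75.0e3          # activation enthalpy, J/mol (illustrative)
D0 = 1.0e-6              # pre-exponential factor, m^2/s (illustrative)
T = np.array([600.0, 650.0, 700.0, 750.0])   # temperatures, K
D = D0 * np.exp(-Q_true / (R * T))

# ln D = ln D0 - (Q/R)(1/T): a straight line in 1/T whose slope is -Q/R
slope, intercept = np.polyfit(1.0 / T, np.log(D), 1)
Q_fit = -slope * R       # recovered activation enthalpy, J/mol
print(Q_fit / 1e3)       # ~75 kJ/mol
```

With real tracer data, the same fit recovers both the activation enthalpy (from the slope) and the pre-exponential factor (from the intercept).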

3.8 Summary

Diffusion is governed by Fick's laws. Fick's first law, also known as the "flux equation," describes the flux of matter resulting from a given concentration gradient and diffusion coefficient. The concentration gradient is the thermodynamic driving force for diffusion, and the kinetics are governed by the diffusion coefficient, also known as the diffusivity. Fick's second law, also known simply as the "diffusion equation," is derived from Fick's first law and the conservation of matter. The diffusion equation describes the evolution of concentration profiles in space and time and will be solved in Chapter 4 under a variety of different conditions.

Besides concentration gradients, additional driving forces for diffusion can include electrical potential gradients, thermal gradients, magnetic fields, and applied stresses. Interaction effects can lead to "uphill diffusion," following the suitable equations from irreversible thermodynamics (Chapter 2).

The diffusion coefficient typically has an Arrhenius dependence on temperature, where the activation barrier is determined from the slope of the Arrhenius plot. The pre-exponential factor gives the entropic contribution to the activation process. If more than one diffusion mechanism is present in a system, the temperature dependence of the diffusivity can be described by a summation of Arrhenius terms, each with its own activation barrier.

Interdiffusion involves the simultaneous diffusion of two different species. If the net flux is zero (i.e., if the atoms diffuse by merely swapping positions), then the interdiffusion process can be described using a single diffusion coefficient. An example of interdiffusion with a zero net flux is the ion exchange process used to manufacture chemically strengthened glass. On the other hand, if the net flux is non-zero, then two separate diffusion coefficients must be used, one for each species. An example of interdiffusion with a non-zero net flux is the Kirkendall effect, where zinc diffuses more rapidly than copper in the production of brass.

Various experimental techniques exist to measure chemical concentration profiles and deduce the value of the diffusion coefficient. These techniques generally rely upon bombarding a sample with a high-energy source and measuring the spectrum of emitted particles or energies. In the case of self-diffusion, radioactive tracer isotopes can be employed.


Exercises

(3.1) Fick's first law assumes that the chemical potential gradient of the diffusing species can be represented by its concentration gradient. (a) Give an example of a system for which this is a good approximation. (b) Give an example of a case where this assumption is not valid. (c) For the example in (b), how should Fick's first law be modified to improve its accuracy?

(3.2) Give an example of a material exhibiting anisotropic diffusion. Why is the diffusion different along different directions? Write an appropriate flux equation (Fick's first law) accounting for anisotropic diffusion along different directions in this system. Write an appropriate diffusion equation (Fick's second law) describing diffusion in this system.

(3.3) Derive Fick's second law for an anisotropic two-dimensional system using Fick's first law and the conservation of mass. Consider the flux in and out of a square element of dimensions 2dx × 2dy, where the diffusion coefficient is different along the x and y directions. Show all steps of the derivation.

(3.4) The following data are measured for diffusivity of Mg²⁺ in MgO:

Temperature (°C)    D (cm²/s)
1450                2.4 × 10⁻¹¹
1550                1.2 × 10⁻¹⁰

(a) Based on these data, what is the activation barrier for diffusion of Mg²⁺ in MgO? (b) What is the diffusion coefficient of Mg²⁺ in MgO at 1500 °C? (c) What assumption is made in your calculation in part (b)? (d) What sources of error could be introduced by extrapolating the calculation to temperatures beyond the 1450-1550 °C range?

(3.5) Consider a material that has intrinsic and extrinsic mechanisms for diffusion. The activation barrier for the intrinsic mechanism is H₁* = 1.5 eV, and the activation barrier for the extrinsic mechanism is H₂* = 1.0 eV. The pre-exponential factors are D₀,₁ = 10⁻¹² m²/s for the intrinsic mechanism and D₀,₂ = 10⁻¹⁴ m²/s for the extrinsic mechanism.


(a) Write an equation for the diffusivity in this system, accounting for both the intrinsic and extrinsic mechanisms. (b) Make an appropriate plot showing the temperature dependence of diffusivity across both the intrinsic and extrinsic regimes. (c) Derive an expression for the temperature at which the crossover between intrinsic and extrinsic mechanisms occurs. (d) What is the value of the crossover temperature for this system? (e) What happens to the crossover temperature if the activation barriers for the intrinsic and extrinsic mechanisms are changed? (f) What happens to the crossover temperature if the pre-exponential factors of the intrinsic and extrinsic mechanisms are changed?

(3.6) Is it possible to observe a Kirkendall effect in an ionic material? Why or why not? How would the flux equations need to be modified in this case?

(3.7) Ion exchange of glass can involve more than two types of ions. For example, a glass initially containing Na⁺ can be simultaneously strengthened with K⁺ and made antimicrobial using Ag⁺. Write a set of flux equations describing the ion exchange process where a Na⁺-containing glass is submerged in a molten salt bath containing a mixture of K⁺ and Ag⁺ ions. Assume that there is a zero net flux to maintain an appropriate charge balance in the system. How many independent flux equations are necessary to describe this process?

(3.8) Search the published literature for a diffusion experiment in a material related to your thesis project or another research project that you are conducting. Provide the reference in your answer. (a) Why is diffusion important for this material or this application? (b) How was the diffusion coefficient measured in this experiment? Be specific in your answer. (c) Why was this method chosen for measuring diffusivity in this system? (d) What would be an alternative method for measuring diffusion in the same system?
What are the relative advantages and disadvantages of this technique compared to the method used in the paper? (e) What are the practical implications of these diffusion results?


(3.9) Design an experiment to measure the activation barrier for self-diffusion of Na⁺ in albite (NaAlSi3O8). Give a detailed description of each step in the experiment.

(3.10) Design an experiment to measure the activation barrier for diffusion bonding of copper with steel. Give a detailed description of each step in the experiment.

References

[1] J. Philibert, "One and a Half Century of Diffusion: Fick, Einstein, Before and Beyond," Diffusion Fund. 2, 1 (2005).
[2] E. Shapiro, "Adolf Fick - Forgotten Genius of Cardiology," Am. J. Cardio. 30, 662 (1972).
[3] A. Fick, Die Medizinische Physik, Braunschweig (1856).
[4] A. Fick, "Über Diffusion," Annalen der Physik 170, 59 (1855).
[5] L. Onsager, "Reciprocal Relations in Irreversible Processes. I," Phys. Rev. 37, 405 (1931).
[6] L. Onsager, "Reciprocal Relations in Irreversible Processes. II," Phys. Rev. 38, 2265 (1931).
[7] J.-B. J. Fourier, Théorie Analytique de la Chaleur, F. Didot (1822).
[8] J. Crank, The Mathematics of Diffusion, Oxford University Press, Oxford (1975).
[9] A. D. Smigelskas and E. O. Kirkendall, "Zinc Diffusion in Alpha Brass," Trans. AIME 171, 130 (1947).
[10] A. K. Varshneya and J. C. Mauro, Fundamentals of Inorganic Glasses, 3rd ed., Elsevier, Amsterdam (2019).
[11] ASM Handbook Committee, ASM Handbook, Volume 10: Materials Characterization, ASM International (2019).
[12] X. Wu, R. E. Youngman, and R. Dieckmann, "Sodium Tracer Diffusion and ¹¹B NMR Study of Glasses of the Type (Na2O)0.17(B2O3)x(SiO2)0.83-x," J. Non-Cryst. Solids 378, 168 (2013).


CHAPTER 4

Analytical Solutions of the Diffusion Equation

4.1 Fick's Second Law with Constant Diffusivity

Fick's second law is the most important and fundamental equation governing the physics of diffusion. Solutions of this equation provide the concentration of the diffusing species, C(x,t), as a function of both position (x) and time (t). In its most general form, Fick's second law (i.e., the "diffusion equation") can be expressed as:

$$\frac{\partial C}{\partial t} = \nabla \cdot (D \nabla C), \tag{4.1}$$

where C is the concentration of the diffusing species and D is the diffusion coefficient, also known as the diffusivity. For a constant isotropic diffusivity, D can be brought in front of the divergence operator, such that

$$\frac{\partial C}{\partial t} = D \nabla^2 C = D \left( \frac{\partial^2 C}{\partial x^2} + \frac{\partial^2 C}{\partial y^2} + \frac{\partial^2 C}{\partial z^2} \right). \tag{4.2}$$

Expressed in one dimension, Fick's second law reduces to:

$$\frac{\partial C}{\partial t} = D \frac{\partial^2 C}{\partial x^2}. \tag{4.3}$$

Mathematically, the diffusion equation is a second-order linear partial differential equation. Analytical closed-form solutions of the diffusion equation exist for a wide variety of initial and boundary conditions of practical interest. Many of these solutions are due to John Crank (1916-2006), a mathematical physicist at Brunel College in England who specialized in solving partial differential equations, especially for problems related to heat conduction and diffusion [1]. In this chapter, we will discuss several methods for solving the diffusion equation under various conditions. In some cases, a closed-form solution is not possible, so the diffusion equation must be solved numerically. Numerical solutions of the diffusion equation using the finite difference method will be covered in Chapter 6.

4.2 Plane Source in One Dimension

Let us begin with the simple case of a plane source in one dimension, i.e., an initial concentration of C(x,0) = Mδ(x), where δ(x) is the Dirac delta function and M is the total amount of the diffusing species. The solution of Eq. (4.3) by inspection is the Gaussian function,

$$C(x,t) = \frac{A}{\sqrt{t}} \exp\left(-\frac{x^2}{4Dt}\right), \tag{4.4}$$

where A is a constant. This solution by inspection can be double-checked by inserting Eq. (4.4) into both sides of Eq. (4.3). Starting first with the left-hand side of Eq. (4.3), the time derivative of Eq. (4.4) can be calculated using the product rule:

$$\frac{\partial C(x,t)}{\partial t} = At^{-1/2} \frac{\partial}{\partial t} \exp\left(-\frac{x^2}{4Dt}\right) + \exp\left(-\frac{x^2}{4Dt}\right) \frac{\partial}{\partial t}\left(At^{-1/2}\right). \tag{4.5}$$

Taking the partial derivatives,

$$\frac{\partial C(x,t)}{\partial t} = At^{-1/2} \frac{x^2}{4Dt^2} \exp\left(-\frac{x^2}{4Dt}\right) - \frac{A}{2} t^{-3/2} \exp\left(-\frac{x^2}{4Dt}\right), \tag{4.6}$$

and combining like terms, we have

$$\frac{\partial C(x,t)}{\partial t} = \frac{A}{t^{3/2}} \left(\frac{x^2}{4Dt} - \frac{1}{2}\right) \exp\left(-\frac{x^2}{4Dt}\right). \tag{4.7}$$

The next step is to insert the proposed solution of the diffusion equation into the right-hand side of Eq. (4.3). Taking the first partial derivative of Eq. (4.4) with respect to position,

$$\frac{\partial C(x,t)}{\partial x} = -At^{-1/2} \frac{x}{2Dt} \exp\left(-\frac{x^2}{4Dt}\right). \tag{4.8}$$

The second derivative is evaluated using the product rule:

$$\frac{\partial^2 C(x,t)}{\partial x^2} = At^{-1/2} \frac{x^2}{4D^2t^2} \exp\left(-\frac{x^2}{4Dt}\right) - \frac{At^{-1/2}}{2Dt} \exp\left(-\frac{x^2}{4Dt}\right). \tag{4.9}$$

Multiplying by D and combining like terms, we have

$$D \frac{\partial^2 C(x,t)}{\partial x^2} = \frac{A}{t^{3/2}} \left(\frac{x^2}{4Dt} - \frac{1}{2}\right) \exp\left(-\frac{x^2}{4Dt}\right) = \frac{\partial C(x,t)}{\partial t}. \tag{4.10}$$

Since Eq. (4.10) is equal to Eq. (4.7), both sides of the diffusion equation agree with each other, thereby confirming the Gaussian solution in Eq. (4.4).

The next step is to determine the value of the unknown constant, A. Here, we apply the condition that the total amount of diffusing matter, M, is conserved at all times, t. The value of M can be calculated by integrating the concentration over all space:

$$M = \int_{-\infty}^{\infty} C(x,t)\,dx = \int_{-\infty}^{\infty} \frac{A}{\sqrt{t}} \exp\left(-\frac{x^2}{4Dt}\right) dx. \tag{4.11}$$

Since the integrand has a Gaussian form, the integral can be solved by making the following substitution of variables:

$$\eta^2 = \frac{x^2}{4Dt}, \qquad dx = 2\sqrt{Dt}\,d\eta. \tag{4.12}$$

Substituting Eq. (4.12) back into Eq. (4.11), the transformed integral has a standard solution equal to $\sqrt{\pi}$, leading to:

$$M = 2A\sqrt{D} \int_{-\infty}^{\infty} \exp(-\eta^2)\,d\eta = 2A\sqrt{\pi D}. \tag{4.13}$$

Finally, we can solve for A,

$$A = \frac{M}{2\sqrt{\pi D}}, \tag{4.14}$$

and substitute this value into Eq. (4.4) to obtain:

$$C(x,t) = \frac{M}{2\sqrt{\pi Dt}} \exp\left(-\frac{x^2}{4Dt}\right). \tag{4.15}$$

Eq. (4.15) therefore describes the evolution of the concentration profile of a species diffusing out from an initial plane source at x = 0.


Figure 4.1 shows a plot of Eq. (4.15) for different diffusion times, Dt. In the limit of t → 0, the initial profile is a Dirac delta function, Mδ(x), with all the mass concentrated at the x = 0 plane. For a short diffusion time, e.g., Dt = 1/16 in the figure, this initial delta function spreads out into a Gaussian function with finite width. As time progresses (e.g., Dt = 1/2 and then Dt = 1), the concentration profile continues to have a Gaussian shape, but with a progressively larger variance. The integral of each Gaussian distribution is equal to the same constant, M, the total amount of the diffusing species, which is conserved at all times.
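The conservation of M by Eq. (4.15) is easy to verify numerically. A minimal sketch (with M = D = 1, matching the dimensionless Dt values used in Figure 4.1):

```python
import numpy as np

def plane_source(x, Dt, M=1.0):
    """Eq. (4.15), with the product D*t passed as a single argument."""
    return M / (2.0 * np.sqrt(np.pi * Dt)) * np.exp(-x**2 / (4.0 * Dt))

x = np.linspace(-20.0, 20.0, 200001)
dx = x[1] - x[0]

totals = []
for Dt in (1.0 / 16.0, 0.5, 1.0):
    C = plane_source(x, Dt)
    total = np.sum(C) * dx     # numerical integral over all space
    totals.append(total)
    print(Dt, total)           # ~1 (= M) at every time: mass is conserved
```

The profile broadens and its peak drops as Dt grows, but the area under every curve stays equal to M.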

4.3 Method of Reflection and Superposition

In Section 4.2, we considered diffusion from a plane source in one dimension, where diffusion can occur in both the −x and +x directions. For some systems, diffusion may only be allowed in the +x direction, e.g., if there is an impenetrable boundary for x < 0, or if the material only occupies the positive x space. The solution of Fick's second law for a plane source with diffusion in only the +x direction can be obtained using the method of reflection and superposition. Here, we take the solution of Fick's second law when diffusion is allowed in both x directions, i.e., the solution in Eq. (4.15), and then reflect the −x part of the curve,

Figure 4.1 Concentration profiles resulting from diffusion from a planar source at x = 0 for three different diffusion times (Dt = 1/16, 1/2, and 1).


superimposing it onto the +x direction. Given the even symmetry of Eq. (4.15), the final solution is simply Eq. (4.15) multiplied by 2:

$$C(x,t) = \frac{M}{\sqrt{\pi Dt}} \exp\left(-\frac{x^2}{4Dt}\right). \tag{4.16}$$

Figure 4.2 shows a plot of this solution for different diffusion times, viz., Dt = 1/16, 1/2, and 1. As with the solutions plotted in Figure 4.1, the integral of each curve in Figure 4.2 is equal to a constant M.
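The factor of 2 in Eq. (4.16) can be checked by integrating over the half-line only; a quick numerical sketch (again with M = D = 1):

```python
import math

def c_reflected(x, Dt, M=1.0):
    """Eq. (4.16): plane source at x = 0 with diffusion into x > 0 only."""
    return M / math.sqrt(math.pi * Dt) * math.exp(-x**2 / (4.0 * Dt))

# Midpoint-rule integral over x >= 0: the half-Gaussian carries the full mass M
n, xmax = 200000, 30.0
dx = xmax / n
total = sum(c_reflected((i + 0.5) * dx, 1.0) for i in range(n)) * dx
print(total)  # ~1 (= M)
```

Doubling the amplitude exactly compensates for restricting the mass to half of the domain.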

4.4 Solution for an Extended Source

While Sections 4.2 and 4.3 consider diffusion from an initial planar source, i.e., starting from a delta function distribution, δ(x), more often the initial distribution would occupy some extended width, as depicted in Figure 4.3. Here we consider the initial condition C(x < 0, t = 0) = C₀ for negative x and C(x ≥ 0, t = 0) = 0 for positive x. In order to solve the problem of diffusion from an extended source, we must integrate the planar source solution over the full width of the extended source. As shown in Figure 4.3, the extended source is divided into thin planar sections of infinitesimal width. Denoting the

Figure 4.2 Solution of the diffusion equation using the method of reflection and superposition. Initially, the mass is fully concentrated at the x = 0 plane. Over time, it diffuses in the +x direction only.


Figure 4.3 Initial and boundary conditions for diffusion from an extended source.

diffusion distance from this individual section as ξ, the planar source solution from Eq. (4.15) can be applied as:

$$C(x,t) = \frac{C_0\,d\xi}{2\sqrt{\pi Dt}} \exp\left(-\frac{\xi^2}{4Dt}\right), \tag{4.17}$$

where C₀dξ is the total amount of the diffusing species initially located in a region of width dξ. Eq. (4.17) represents the contribution from just a single thin section of the source. To calculate the full concentration profile, we must integrate over the full extent of the source,

$$C(x,t) = \frac{C_0}{2\sqrt{\pi Dt}} \int_x^{\infty} \exp\left(-\frac{\xi^2}{4Dt}\right) d\xi, \tag{4.18}$$

where the integral is over the minimum diffusion distance, x, to infinity, considering that the extended source depicted in Figure 4.3 is semi-infinite. Eq. (4.18) is the integral of a Gaussian function, which can be solved by substitution of variables,

$$\eta = \frac{\xi}{2\sqrt{Dt}}, \qquad d\eta = \frac{d\xi}{2\sqrt{Dt}}, \tag{4.19}$$

such that

$$C(x,t) = \frac{C_0}{\sqrt{\pi}} \int_{x/2\sqrt{Dt}}^{\infty} \exp(-\eta^2)\,d\eta. \tag{4.20}$$


Eq. (4.20) can be written in terms of the error function ("erf"), which is a standard mathematical function defined as:

$$\mathrm{erf}(z) \equiv \frac{2}{\sqrt{\pi}} \int_0^z \exp(-\eta^2)\,d\eta. \tag{4.21}$$

As can be seen in Figure 4.4, the error function has the properties:

$$\mathrm{erf}(-z) = -\mathrm{erf}(z), \qquad \mathrm{erf}(0) = 0, \qquad \mathrm{erf}(\infty) = 1. \tag{4.22}$$

The complementary error function ("erfc") is another standard mathematical function, defined as

$$\mathrm{erfc}(z) \equiv 1 - \mathrm{erf}(z). \tag{4.23}$$

Rewriting Eq. (4.23) in integral form using Eq. (4.21),

$$\frac{2}{\sqrt{\pi}} \int_z^{\infty} \exp(-\eta^2)\,d\eta = \frac{2}{\sqrt{\pi}} \int_0^{\infty} \exp(-\eta^2)\,d\eta - \frac{2}{\sqrt{\pi}} \int_0^z \exp(-\eta^2)\,d\eta, \tag{4.24}$$

it is apparent that

$$\mathrm{erfc}(z) = \frac{2}{\sqrt{\pi}} \int_z^{\infty} \exp(-\eta^2)\,d\eta. \tag{4.25}$$

A comparison of the shapes of the error function and complementary error function is shown in Figure 4.4. Combining Eq. (4.18) with Eq. (4.25), the solution of the diffusion problem for an extended initial distribution can be written in terms of the complementary error function as

$$C(x,t) = \frac{1}{2} C_0\,\mathrm{erfc}\left(\frac{x}{2\sqrt{Dt}}\right). \tag{4.26}$$

Since the complementary error function is a standard function found in most plotting software, it is easy to make a graph of Eq. (4.26). Figure 4.5 plots the complementary error function solution in Eq. (4.26) for three different diffusion times. In the limit of short time, there is a sharp boundary at x = 0. As time progresses, more and more material from the extended source moves into the x > 0 regime, and the boundary becomes increasingly diffuse.
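Eq. (4.26) can be evaluated directly with the standard-library erfc; a short sketch with C₀ = D = 1 that checks the limiting behavior:

```python
import math

def c_extended(x, Dt, C0=1.0):
    """Eq. (4.26): source initially occupying x < 0, diffusing into x > 0."""
    return 0.5 * C0 * math.erfc(x / (2.0 * math.sqrt(Dt)))

for Dt in (1.0 / 16.0, 0.5, 1.0):
    # Far left stays ~C0, far right ~0; the x = 0 plane holds C0/2 at all times
    print(c_extended(-10.0, Dt), c_extended(0.0, Dt), c_extended(10.0, Dt))
```

Note that C(0,t) = C₀/2 for every t > 0, a characteristic feature of this solution that is often used to locate the original interface experimentally.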


Figure 4.4 Plots of the error function, erf(x), and complementary error function, erfc(x).


Figure 4.5 Complementary error function solution of the extended source diffusion problem.

4.5 Bounded Initial Distribution

In Section 4.4 we considered a semi-infinite extended initial distribution. In this section, we consider a finite (bounded) initial distribution. As depicted in Figure 4.6, we consider the initial condition of C(−h < x < h, t = 0) = C₀ for all x between −h and h, and zero initial concentration at all other values of x. As in Section 4.4, the problem of diffusion from a bounded initial distribution can be solved by dividing the region into thin slabs of infinitesimal width.

Figure 4.6 Initial condition for the problem of diffusion from a bounded initial distribution.


The contribution from one of these thin slabs comes directly from the plane source solution, Eq. (4.15),

$$C(x,t) = \frac{C_0\,d\xi}{2\sqrt{\pi Dt}} \exp\left(-\frac{\xi^2}{4Dt}\right), \tag{4.27}$$

where C₀dξ is the initial concentration of the diffusing species within the slab. Integrating over the bounds of the finite source, we have

$$C(x,t) = \frac{C_0}{2\sqrt{\pi Dt}} \int_{x-h}^{x+h} \exp\left(-\frac{\xi^2}{4Dt}\right) d\xi. \tag{4.28}$$

The integral over the Gaussian function can again be solved by making the substitution of variables,

$$\eta = \frac{\xi}{2\sqrt{Dt}}, \qquad d\eta = \frac{d\xi}{2\sqrt{Dt}}, \tag{4.29}$$

such that

$$C(x,t) = \frac{C_0}{\sqrt{\pi}} \int_{(x-h)/2\sqrt{Dt}}^{(x+h)/2\sqrt{Dt}} \exp(-\eta^2)\,d\eta. \tag{4.30}$$

Considering that both limits of the integral are finite, Eq. (4.30) can be expressed as the summation of two error functions:

$$C(x,t) = \frac{1}{2} C_0 \left[\mathrm{erf}\left(\frac{h+x}{2\sqrt{Dt}}\right) + \mathrm{erf}\left(\frac{h-x}{2\sqrt{Dt}}\right)\right]. \tag{4.31}$$

This solution is plotted in Figure 4.7 for three different diffusion times.

4.6 Method of Separation of Variables

Separation of variables is a standard method for solving partial differential equations. With this method, the concentration function, C(x,t), is assumed to be separable into the product of two functions, X(x) and T(t), as

$$C(x,t) = X(x)T(t), \tag{4.32}$$


Figure 4.7 Solution of the diffusion equation for a bounded initial distribution.

where X(x) depends only on position, x, and T(t) depends only on time, t. Substituting Eq. (4.32) into Fick's second law, we have:

$$\frac{\partial C}{\partial t} = D\frac{\partial^2 C}{\partial x^2} \quad\Rightarrow\quad X(x)\frac{dT(t)}{dt} = D\,T(t)\frac{d^2X(x)}{dx^2}. \tag{4.33}$$

The next step is to rearrange the terms such that all the time dependence and position dependence are separated on the two sides of the equation:

$$\frac{1}{T(t)}\frac{dT(t)}{dt} = \frac{D}{X(x)}\frac{d^2X(x)}{dx^2}. \tag{4.34}$$

Since the left-hand side of Eq. (4.34) depends only on time and the right-hand side depends only on position, the two sides of the equation must be equal to the same constant, which is typically taken as $-\lambda^2 D$. Therefore, we have

$$\frac{1}{T(t)}\frac{dT(t)}{dt} = -\lambda^2 D \tag{4.35}$$

and

$$\frac{1}{X(x)}\frac{d^2X(x)}{dx^2} = -\lambda^2. \tag{4.36}$$


Eq. (4.35) has the solution,

$$T(t) = \exp(-\lambda^2 Dt), \tag{4.37}$$

for the time-dependent function, and Eq. (4.36) has the standard solution,

$$X(x) = A\sin(\lambda x) + B\cos(\lambda x), \tag{4.38}$$

for the position-dependent function. Taking the product of Eqs. (4.37) and (4.38), the combined equation for concentration is:

$$C(x,t) = \left[A\sin(\lambda x) + B\cos(\lambda x)\right]\exp(-\lambda^2 Dt). \tag{4.39}$$

This is just one solution for a particular value of λ. The most general solution is obtained by summing over all possible solutions:

$$C(x,t) = \sum_{m=1}^{\infty} \left[A_m\sin(\lambda_m x) + B_m\cos(\lambda_m x)\right]\exp(-\lambda_m^2 Dt). \tag{4.40}$$

The values of A_m, B_m, and λ_m are determined by the particular initial and boundary conditions. Let us consider diffusion out of a sheet of thickness l, through which the diffusing substance is initially distributed uniformly, and the surfaces are kept at zero concentration. This corresponds to the following initial and boundary conditions:

$$\begin{aligned} t = 0:&\quad C = C_0, \quad 0 < x < l \\ t > 0:&\quad C = 0, \quad x = 0 \text{ and } x = l. \end{aligned} \tag{4.41}$$

The boundary conditions at x = 0 and x = l require that

$$B_m = 0, \qquad \lambda_m = \frac{m\pi}{l}, \tag{4.42}$$

while the uniform initial condition requires

$$C_0 = \sum_{m=1}^{\infty} A_m \sin\left(\frac{m\pi x}{l}\right), \quad 0 < x < l, \tag{4.43}$$

whose coefficients,

$$A_m = \begin{cases} \dfrac{4C_0}{m\pi}, & m \text{ odd} \\ 0, & m \text{ even}, \end{cases} \tag{4.44}$$

are obtained

This book belongs to Alice Cartes ([email protected])

Copyright Elsevier 2023

Analytical Solutions of the Diffusion Equation

71

using standard Fourier series methods [2]. Substituting Eqs. (4.42) and (4.44) into Eq. (4.40) gives the final solution: "  2 # N h 4C0 X 1 xi ð2n þ 1Þp Cðx; tÞ ¼ Dt : sin ð2n þ 1Þp exp  l l p n¼0 2n þ 1 (4.45) The method of separation of variables can be applied to a wide range of initial and boundary conditions.
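The series solution in Eq. (4.45) is straightforward to evaluate numerically by truncating the sum. The following sketch (the function name, truncation length, and parameter values are illustrative choices, not from the original text) evaluates the concentration profile for the sheet problem:

```python
import math

def sheet_concentration(x, t, D, l, C0, n_terms=200):
    """Evaluate Eq. (4.45): diffusion out of a sheet of thickness l with a
    uniform initial concentration C0 and both surfaces held at C = 0."""
    total = 0.0
    for n in range(n_terms):
        k = 2 * n + 1  # only odd terms survive the Fourier analysis
        total += (1.0 / k) * math.sin(k * math.pi * x / l) \
                 * math.exp(-((k * math.pi / l) ** 2) * D * t)
    return 4.0 * C0 / math.pi * total

# The surfaces are held at zero concentration for all t > 0:
print(sheet_concentration(0.0, 0.1, D=1e-2, l=1.0, C0=1.0))  # → 0.0
# At very early times the center of the sheet is still near C0:
print(sheet_concentration(0.5, 1e-4, D=1e-2, l=1.0, C0=1.0))
```

Because the exponential factor decays rapidly with the term index, only a modest number of terms is needed except at very short times, where the truncated Fourier series converges slowly toward the discontinuous initial condition.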

4.7 Method of Laplace Transforms

The Laplace transform method is a powerful technique for solving many partial differential equations, originally developed by the French mathematician Pierre-Simon Laplace (1749-1827). The Laplace transform technique involves converting a function, f(x,t), into a new function, \(\hat f(x,p)\), via the following transformation:

\[ \mathcal{L}\{f(x,t)\} = \hat f(x,p) = \int_0^{\infty} e^{-pt} f(x,t)\,dt. \quad (4.46) \]

Here, \(\mathcal{L}\{\cdot\}\) denotes the Laplace transform, which involves transforming the time variable, t, into a new variable, p. The goal of the Laplace transform is to convert the differential equation into a new form that is more easily solved. Once the transformed differential equation is solved in the p domain, an inverse Laplace transform, \(\mathcal{L}^{-1}\{\cdot\}\), is used to convert the solution back to the original t domain.

Let us consider a simple example of taking the Laplace transform of f(t) = 1. Using the definition of the Laplace transform in Eq. (4.46), we have

\[ \mathcal{L}\{f(t) = 1\}:\quad \hat f(p) = \int_0^{\infty} e^{-pt}\,dt = \left.-\frac{1}{p}e^{-pt}\right|_{t=0}^{t=\infty} = \frac{1}{p}. \quad (4.47) \]

Therefore, 1/p is the Laplace transform of 1. Another simple example is the Laplace transform of an exponential function, \(f(t) = e^{at}\), where a is a constant. In this case, the Laplace transform is:

\[ \mathcal{L}\{f(t) = e^{at}\}:\quad \hat f(p) = \int_0^{\infty} e^{-pt}e^{at}\,dt = \int_0^{\infty} e^{-(p-a)t}\,dt = \frac{1}{p-a}. \quad (4.48) \]


Standard tables exist for many Laplace transform pairs and can be easily found online or in textbooks on advanced engineering mathematics [2]. Some of the more pertinent Laplace transform pairs for materials kinetics problems are provided in Table 4.1. A key feature of the Laplace transform is its operation on time derivatives:

\[ \mathcal{L}\left\{\frac{\partial f}{\partial t}\right\} = \int_0^{\infty} e^{-pt}\,\frac{\partial f(x,t)}{\partial t}\,dt. \quad (4.49) \]

Using integration by parts,

\[ \int u\,\frac{dv}{dt}\,dt = uv - \int v\,\frac{du}{dt}\,dt, \quad (4.50) \]

Eq. (4.49) can be written as

\[ \mathcal{L}\left\{\frac{\partial f}{\partial t}\right\} = \left.e^{-pt}f(x,t)\right|_{t=0}^{t=\infty} + p\int_0^{\infty} e^{-pt}f(x,t)\,dt, \quad (4.51) \]

Table 4.1 Table of Laplace transform pairs, \(\mathcal{L}\{C(x,t)\} = \hat C(x,p) = \int_0^{\infty} e^{-pt}C(x,t)\,dt\), where \(q \equiv \sqrt{p/D}\).

C(x,t)                                        \(\hat C(x,p)\)
\(1\)                                         \(1/p\)
\(e^{-at}\)                                   \(1/(p+a)\)
\(\sin(\omega t)\)                            \(\omega/(p^2+\omega^2)\)
\(\cos(\omega t)\)                            \(p/(p^2+\omega^2)\)
\(\mathrm{erfc}\!\left(x/\sqrt{4Dt}\right)\)  \(e^{-qx}/p\)
\(\sqrt{D/(\pi t)}\;e^{-x^2/(4Dt)}\)          \(e^{-qx}/q\)
\(\dfrac{x}{\sqrt{4\pi Dt^3}}\;e^{-x^2/(4Dt)}\)  \(e^{-qx}\)
\(t^n\)                                       \(n!/p^{n+1}\)
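The entries of Table 4.1 can be spot-checked by evaluating the defining integral in Eq. (4.46) numerically. The sketch below (the function name, truncation time, and step count are illustrative assumptions) verifies the \(\sin(\omega t)\) pair with a simple trapezoidal rule:

```python
import math

def laplace_numeric(f, p, T=60.0, n=50000):
    """Trapezoidal estimate of the Laplace integral in Eq. (4.46),
    truncated at t = T (valid when exp(-p*T)*f(T) is negligible)."""
    h = T / n
    total = 0.5 * (f(0.0) + math.exp(-p * T) * f(T))
    for k in range(1, n):
        t = k * h
        total += math.exp(-p * t) * f(t)
    return total * h

# Check the sin(wt) row of Table 4.1: L{sin(wt)} = w / (p**2 + w**2)
w, p = 2.0, 1.5
numeric = laplace_numeric(lambda t: math.sin(w * t), p)
exact = w / (p**2 + w**2)
print(abs(numeric - exact) < 1e-4)  # → True
```

The same check works for any row of the table whose time-domain function decays (or grows slower than \(e^{pt}\)), which is exactly the condition for the Laplace integral to converge.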

which reduces to simply

\[ \mathcal{L}\left\{\frac{\partial f}{\partial t}\right\} = p\,\mathcal{L}\{f\} - f(x, t=0). \quad (4.52) \]

Hence, taking the Laplace transform of a time derivative effectively eliminates that time derivative in the Laplace domain and automatically incorporates the initial condition at t = 0. The Laplace transform of a spatial derivative is straightforward:

\[ \mathcal{L}\left\{\frac{\partial^n f(x,t)}{\partial x^n}\right\} = \frac{\partial^n \hat f(x,p)}{\partial x^n}. \quad (4.53) \]

In order to solve the diffusion equation using the Laplace transform method, we must first take the Laplace transform of both sides of Fick's second law:

\[ \mathcal{L}\left\{\frac{\partial C(x,t)}{\partial t}\right\} = D\,\mathcal{L}\left\{\frac{\partial^2 C(x,t)}{\partial x^2}\right\}. \quad (4.54) \]

Applying Eq. (4.52) for the time derivative and Eq. (4.53) for the spatial derivative, Eq. (4.54) becomes

\[ p\hat C(x,p) - C(x, t=0) = D\,\frac{\partial^2 \hat C(x,p)}{\partial x^2}. \quad (4.55) \]

Hence, applying the Laplace transform removes the t-dependence and incorporates the initial condition of the diffusion problem. To give an example solution using the Laplace transform method, let us consider the initial and boundary conditions depicted in Figure 4.8 for the problem of diffusion in a semi-infinite medium, x > 0, when the boundary

Figure 4.8 Setup for the problem of diffusion from a constant source, where the surface concentration of the diffusing species is always held at a constant C0.


is kept at a constant concentration of C0. Rearranging Eq. (4.55) and applying the initial condition from Figure 4.8, C(x, t=0) = 0, we have:

\[ \frac{\partial^2 \hat C(x,p)}{\partial x^2} - \frac{p}{D}\hat C(x,p) = 0. \quad (4.56) \]

This is a standard differential equation which has the solution [2]

\[ \hat C(x,p) = a_1 e^{\sqrt{p/D}\,x} + a_2 e^{-\sqrt{p/D}\,x}, \quad (4.57) \]

where the unknown constants, a1 and a2, are determined via application of the boundary conditions:

\[ C = C_0 \;\text{ for }\; x = 0, \qquad \frac{\partial C}{\partial x}(x = \infty, t) = 0. \quad (4.58) \]

In order to apply the boundary conditions, they must also be transformed into the Laplace domain as

\[ \hat C(x = 0, p) = C_0\int_0^{\infty} e^{-pt}\,dt = \frac{C_0}{p}, \quad (4.59) \]

and

\[ \frac{\partial \hat C}{\partial x}(x = \infty, p) = 0. \quad (4.60) \]

From Eq. (4.60) we must have a1 = 0, and from Eq. (4.59) we have a2 = C0/p. Hence, the solution of the diffusion problem in the Laplace domain is:

\[ \hat C(x,p) = \frac{C_0}{p}\,e^{-\sqrt{p/D}\,x}. \quad (4.61) \]

The final step is to take the inverse Laplace transform of Eq. (4.61) to recover the time dimension:

\[ C(x,t) = \mathcal{L}^{-1}\left\{\hat C(x,p)\right\} = C_0\,\mathcal{L}^{-1}\left\{\frac{e^{-\sqrt{p/D}\,x}}{p}\right\}. \quad (4.62) \]

From Table 4.1, we see that the inverse Laplace transform of Eq. (4.61) is a complementary error function,

\[ C(x,t) = C_0\,\mathrm{erfc}\!\left(\frac{x}{\sqrt{4Dt}}\right). \quad (4.63) \]


Figure 4.9 Solution for the problem of diffusion from a constant source, obtained via the Laplace transform method.

Therefore, Eq. (4.63) is the final solution of the diffusion equation for the case of diffusion in a semi-infinite medium where the surface is held at a constant concentration, C0. Figure 4.9 plots this solution for various diffusion times. Note that the surface concentration is indeed held constant with this solution, as shown at the x = 0 position in the figure.
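Eq. (4.63) is simple to evaluate with the standard-library complementary error function. The sketch below (function name and parameter values are illustrative, not from the original text) reproduces the qualitative behavior in Figure 4.9:

```python
import math

def constant_source_profile(x, t, D, C0):
    """Eq. (4.63): semi-infinite medium with the surface held at C0."""
    return C0 * math.erfc(x / math.sqrt(4.0 * D * t))

# The surface concentration stays fixed at C0 for all t > 0:
print(constant_source_profile(0.0, 3600.0, D=1e-9, C0=1.0))  # → 1.0
# Far from the surface the medium remains at its initial concentration:
print(constant_source_profile(1.0, 3600.0, D=1e-9, C0=1.0))
```

Since erfc(0) = 1, the boundary condition at x = 0 is satisfied identically, while the profile decays monotonically toward zero deep in the medium.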

4.8 Anisotropic Diffusion

In many crystals and composite materials, the diffusivity can vary with direction. For example, the flux along three dimensions (J1, J2, J3) can be written using the following anisotropic form of Fick's first law:

\[ \begin{bmatrix} J_1 \\ J_2 \\ J_3 \end{bmatrix} = -\begin{bmatrix} D_{11} & D_{12} & D_{13} \\ D_{12} & D_{22} & D_{23} \\ D_{13} & D_{23} & D_{33} \end{bmatrix}\begin{bmatrix} \partial C/\partial x_1 \\ \partial C/\partial x_2 \\ \partial C/\partial x_3 \end{bmatrix}. \quad (4.64) \]

Eq. (4.64) can be written in alternative forms as:

\[ J_i = -\sum_j D_{ij}\frac{\partial C}{\partial x_j}, \qquad \vec J = -\mathbf{D}\,\nabla C. \quad (4.65) \]

Since the diffusivity tensor D is symmetric,

\[ \mathbf{D} = \begin{bmatrix} D_{11} & D_{12} & D_{13} \\ D_{12} & D_{22} & D_{23} \\ D_{13} & D_{23} & D_{33} \end{bmatrix}, \quad (4.66) \]

it is always possible to diagonalize the matrix. The resulting diagonalized matrix,

\[ \hat{\mathbf{D}} = \begin{bmatrix} \hat D_{11} & 0 & 0 \\ 0 & \hat D_{22} & 0 \\ 0 & 0 & \hat D_{33} \end{bmatrix}, \quad (4.67) \]

can be calculated by solving the eigenvalue-eigenvector problem,

\[ \begin{bmatrix} \text{diagonalized} \\ \text{matrix} \end{bmatrix} = \begin{bmatrix} \text{eigenvector} \\ \text{column matrix} \end{bmatrix}^{-1}\begin{bmatrix} \text{square} \\ \text{matrix} \end{bmatrix}\begin{bmatrix} \text{eigenvector} \\ \text{column matrix} \end{bmatrix}, \quad (4.68) \]

where the eigenvalues of D are the three roots \(\lambda_i\) of the cubic equation,

\[ \det\begin{bmatrix} D_{11}-\lambda & D_{12} & D_{13} \\ D_{12} & D_{22}-\lambda & D_{23} \\ D_{13} & D_{23} & D_{33}-\lambda \end{bmatrix} = 0, \quad (4.69) \]

where "det" denotes the determinant of the matrix. The resulting eigenvector column matrix from Eq. (4.68) defines the set of principal axes. The principal axes are a transformed set of mutually orthogonal coordinates, as depicted in Figure 4.10, where there are no

Figure 4.10 Transformation of the coordinate system to principal axes.


longer any interaction effects among the various axes. In other words, when the diffusivity matrix is diagonalized as in Eq. (4.67), the diffusion equation in the principal coordinate system can be written as

\[ \frac{\partial C}{\partial t} = \hat D_{11}\frac{\partial^2 C}{\partial \hat x_1^2} + \hat D_{22}\frac{\partial^2 C}{\partial \hat x_2^2} + \hat D_{33}\frac{\partial^2 C}{\partial \hat x_3^2}, \quad (4.70) \]

since the interaction terms have been eliminated. Thus, anisotropic diffusion problems can be readily solved by:
1. Diagonalizing the diffusivity tensor to convert the coordinate system to principal axes, eliminating interaction effects.
2. Applying standard solutions along each of the principal axes, since the diffusion along each of these directions can be considered independently, using the various methods described in this chapter.
3. Converting the resulting solutions in terms of the principal axes back to the original coordinate system using the appropriate eigenvectors.
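For the two-dimensional case, the diagonalization in step 1 can be written in closed form. The following sketch (the function name and the example tensor are illustrative assumptions) returns the principal diffusivities and the rotation angle of the principal axes:

```python
import math

def principal_axes_2d(D11, D22, D12):
    """Diagonalize a symmetric 2x2 diffusivity tensor. Returns the two
    principal diffusivities and the rotation angle (radians) of the
    principal axes relative to the original coordinate system."""
    mean = 0.5 * (D11 + D22)
    radius = math.sqrt((0.5 * (D11 - D22)) ** 2 + D12 ** 2)
    d_minus, d_plus = mean - radius, mean + radius  # eigenvalues
    theta = 0.5 * math.atan2(2.0 * D12, D11 - D22)  # principal-axis angle
    return d_minus, d_plus, theta

# An anisotropic tensor with off-diagonal coupling:
print(principal_axes_2d(2.0, 1.0, 0.5))
```

Rotating the coordinate system by the returned angle makes the off-diagonal term vanish, after which each principal direction can be treated with the one-dimensional solutions of this chapter. In three dimensions the same job is usually delegated to a numerical eigensolver for symmetric matrices.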

4.9 Concentration-Dependent Diffusivity

Solution of the diffusion equation becomes significantly more complicated when D is a function of concentration. In this case, Fick's second law from Eq. (4.1) is:

\[ \frac{\partial C}{\partial t} = \nabla\cdot\left[D(C)\nabla C\right]. \quad (4.71) \]

The diffusion coefficient cannot be pulled in front of the divergence operator since it varies with concentration. Rewriting Eq. (4.71) in three-dimensional Cartesian coordinates:

\[ \frac{\partial C}{\partial t} = \frac{\partial}{\partial x}\!\left(D\frac{\partial C}{\partial x}\right) + \frac{\partial}{\partial y}\!\left(D\frac{\partial C}{\partial y}\right) + \frac{\partial}{\partial z}\!\left(D\frac{\partial C}{\partial z}\right). \quad (4.72) \]

The outer partial derivatives with respect to the position coordinates require use of the product rule. In one dimension, x, we have:

\[ \frac{\partial C}{\partial t} = \frac{\partial}{\partial x}\!\left(D(C)\frac{\partial C}{\partial x}\right) = D(C)\frac{\partial^2 C}{\partial x^2} + \frac{dD(C)}{dC}\left(\frac{\partial C}{\partial x}\right)^2. \quad (4.73) \]

Such concentration-dependent diffusivity equations can only be solved analytically in certain special cases [1]. For identical initial conditions, the


difference between the solution of Eq. (4.73) and the standard solution for constant D is in the second nonlinear term on the right-hand side of Eq. (4.73). Numerical solutions via the finite difference method may be more tenable than closed-form analytical solutions in such cases. Such numerical solutions of the diffusion equation will be covered in Chapter 6.
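To illustrate the numerical route, a minimal explicit finite-difference sketch for \(\partial C/\partial t = \partial/\partial x\,[D(C)\,\partial C/\partial x]\) is given below. This is an illustrative scheme with an assumed interface-averaged diffusivity and zero-flux boundaries, not the specific method of Chapter 6:

```python
def step_nonlinear_diffusion(C, D_of_C, dx, dt):
    """One explicit finite-difference step of dC/dt = d/dx( D(C) dC/dx ),
    with diffusivities averaged at the cell interfaces and zero-flux
    (reflecting) boundaries. Stability requires dt <= dx**2 / (2*max(D))."""
    new = C[:]
    for i in range(len(C)):
        left = C[i - 1] if i > 0 else C[i]
        right = C[i + 1] if i < len(C) - 1 else C[i]
        D_left = 0.5 * (D_of_C(left) + D_of_C(C[i]))
        D_right = 0.5 * (D_of_C(right) + D_of_C(C[i]))
        new[i] = C[i] + dt / dx**2 * (D_right * (right - C[i])
                                      - D_left * (C[i] - left))
    return new

# A diffusivity that grows linearly with concentration (hypothetical form):
C = [1.0 if 20 <= i < 30 else 0.0 for i in range(50)]
for _ in range(200):
    C = step_nonlinear_diffusion(C, lambda c: 0.1 * (1.0 + c), dx=1.0, dt=1.0)
print(round(sum(C), 6))  # total solute is conserved by the zero-flux scheme
```

Because the interface fluxes telescope, the scheme conserves the total amount of diffusing species exactly, which is a useful sanity check for any nonlinear diffusion solver.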

4.10 Time-Dependent Diffusivity

If the temperature of a system evolves with time, the diffusion coefficient will also be a function of time. If diffusivity is a function of time but not position, i.e., D(t), then Fick's second law can be written as:

\[ \frac{\partial C}{\partial t} = \nabla\cdot\left[D(t)\nabla C\right] = D(t)\nabla^2 C. \quad (4.74) \]

Since the divergence operator acts on position but not time, the D(t) function can be pulled in front. Such time-dependent diffusion problems can be treated by making the change of variable:

\[ \tau_D \equiv \int_0^t D(t')\,dt', \quad (4.75) \]

where \(\tau_D\) is an average diffusion time. With the definition of \(\tau_D\) in Eq. (4.75), we can write

\[ \frac{\partial C}{\partial t} = \frac{\partial C}{\partial \tau_D}\frac{\partial \tau_D}{\partial t} = \frac{\partial C}{\partial \tau_D}D(t). \quad (4.76) \]

Fick's second law in Eq. (4.74) can then be rewritten as

\[ \frac{\partial C}{\partial \tau_D} = \nabla^2 C, \quad (4.77) \]

where the time dependence of the diffusivity has been built into the \(\tau_D\) parameter. With this substitution of variables, Eq. (4.77) can be solved using the same techniques as we have discussed for the case of constant D. For example, the Laplace transform solution from Eq. (4.63), considering the case of a semi-infinite medium with a constant surface concentration, C0, can be directly applied to give:

\[ C(x,\tau_D) = C_0\,\mathrm{erfc}\!\left(\frac{x}{\sqrt{4\tau_D}}\right). \quad (4.78) \]


Then, using the definition of \(\tau_D\) from Eq. (4.75), we have the final solution

\[ C(x,\tau_D) = C_0\,\mathrm{erfc}\!\left(\frac{x}{\sqrt{4\displaystyle\int_0^t D(t')\,dt'}}\right). \quad (4.79) \]

Hence, if the boundary conditions for a time-dependent diffusivity problem are invariant under this change of variable, solutions from constant diffusivity problems can be applied to the time-dependent D case. Just apply the change of variable, as in Eq. (4.75) above.
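In practice, the integral in Eq. (4.75) is often evaluated numerically when D(t) comes from a measured thermal history. The sketch below (function names and the decaying-diffusivity example are illustrative assumptions) applies the change of variable to the constant-source solution of Eq. (4.79):

```python
import math

def tau_D(D_of_t, t, n=1000):
    """Trapezoidal estimate of Eq. (4.75): tau_D = integral of D(t') dt'."""
    h = t / n
    total = 0.5 * (D_of_t(0.0) + D_of_t(t))
    for k in range(1, n):
        total += D_of_t(k * h)
    return total * h

def profile_time_dependent(x, t, D_of_t, C0):
    """Eq. (4.79): constant-source solution after the change of variable."""
    return C0 * math.erfc(x / math.sqrt(4.0 * tau_D(D_of_t, t)))

# Example: a diffusivity that decays as the sample cools (hypothetical form):
D = lambda tp: 1e-9 * math.exp(-tp / 1800.0)
print(profile_time_dependent(1e-4, 3600.0, D, C0=1.0))
```

For a constant diffusivity, \(\tau_D = Dt\) and the expression reduces exactly to Eq. (4.63), which provides a convenient consistency check.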

4.11 Diffusion in Other Coordinate Systems

Depending on the geometry of the system, it may be appropriate to write Fick's second law in a different coordinate system. For example, diffusion in systems with cylindrical geometry may make use of the cylindrical form of the Laplacian operator,

\[ \nabla^2 C = \frac{1}{r}\frac{\partial}{\partial r}\!\left(r\frac{\partial C}{\partial r}\right) + \frac{1}{r^2}\frac{\partial^2 C}{\partial \theta^2} + \frac{\partial^2 C}{\partial z^2}, \quad (4.80) \]

where r, \(\theta\), and z are defined in Figure 4.11. Inserting Eq. (4.80) into Fick's second law, we have

\[ \frac{\partial C}{\partial t} = D\left[\frac{1}{r}\frac{\partial}{\partial r}\!\left(r\frac{\partial C}{\partial r}\right) + \frac{1}{r^2}\frac{\partial^2 C}{\partial \theta^2} + \frac{\partial^2 C}{\partial z^2}\right]. \quad (4.81) \]

Likewise, diffusion in systems with spherical geometry can be described using the definition of the Laplacian operator in spherical coordinates,

\[ \nabla^2 C = \frac{1}{r^2}\frac{\partial}{\partial r}\!\left(r^2\frac{\partial C}{\partial r}\right) + \frac{1}{r^2\sin\theta}\frac{\partial}{\partial \theta}\!\left(\sin\theta\,\frac{\partial C}{\partial \theta}\right) + \frac{1}{r^2\sin^2\theta}\frac{\partial^2 C}{\partial \phi^2}, \quad (4.82) \]

where the coordinates are also defined in Figure 4.11. Specific solutions of the diffusion equation in cylindrical and spherical coordinate systems can be found in the excellent book by Crank [1].


Figure 4.11 (a) Cylindrical and (b) spherical coordinate systems. (Image by Brittney Hauke (Penn State)).

4.12 Summary

Several useful analytical techniques for solving the diffusion equation under various initial and boundary conditions have been presented. In particular:
• For a plane source, the solution by inspection is a Gaussian function.
• If diffusion occurs only in one direction, then the method of reflection and superposition can be used.
• Solutions for an extended initial distribution are obtained by integrating the Gaussian solution for a planar source. This leads to solutions in terms of error functions or complementary error functions.
• Separation of variables can be used to treat the position- and time-dependence of the concentration independently and then solve for a variety of conditions. These solutions typically involve the use of Fourier series.
• Laplace transforms are a powerful technique to solve the diffusion equation. Here we used Laplace transforms to obtain a complementary error function solution for diffusion where the surface is held at a constant concentration.
• With anisotropic diffusion, the diffusion equation can be solved by transforming into principal component space and applying standard solutions along each of the principal axes.
• Diffusivity can be an explicit function of concentration, in which case an additional nonlinear term appears relative to the constant-D solution.


• Time-dependent diffusivity can be solved via substitution of variables using solutions from the constant diffusivity case.
• The diffusion equation can also be written and solved in other coordinate systems (e.g., cylindrical, spherical).
• Many diffusion problems are too complicated to be solved analytically. In such cases, numerical solutions can be obtained using the finite difference method, which will be presented in Chapter 6.

Exercises

(4.1) A thin layer of radioactive 110Ag is deposited at the end of a silver bar, which was held at 800 °C for 48 hours. The bar is then mechanically sectioned, and the radioactivity of each section is measured. The measured data for count rate, in counts per minute (cpm), versus distance from the end of the bar, x, are as follows:

x (mm)    Count Rate (cpm)
2         4000
4         3900
6         3550
8         3230
10        2870
12        2460

Plot the data and determine the self-diffusion coefficient of silver, DAg, at 800 °C. Report the diffusion coefficient in units of cm^2/s.

(4.2) A sample of n-type silicon semiconductor contains a homogeneous distribution of phosphorus at a concentration of 1.2 × 10^18 cm^-3. The phosphorus is used to generate a p-n junction 4 µm below the surface by diffusing an acceptor dopant from the surface. The concentration of the acceptor dopant is held constant at 3 × 10^20 cm^-3 at the surface of the sample. The diffusivity of the dopant migrating into silicon is 4 × 10^-12 cm^2/s at the temperature of the experiment. How much time is required to generate the complete p-n junction? In other words, how much time is required for the concentration of the diffusing acceptor dopant from the surface to be equal to the concentration of phosphorus at 4 µm below the surface?

(4.3) Prove that the error function, erf(z), is an odd function.


(4.4) Consider an extended source with C(x < 0, t = 0) = C0, as in Figure 4.3, but where there is some nonzero initial concentration for positive x, i.e., C(x ≥ 0, t = 0) = C1. Derive an expression for C(x,t) and plot the solution for Dt = 1/16, 1/2, and 1.

(4.5) Consider diffusion from the finite source depicted in Figure 4.6, where C(-h < x < h, t = 0) = C0 for all x between -h and h, and having an initial condition of C = 0 for all other values of x. Now consider that the system undergoes a two-step heat treatment, where the values of the diffusion coefficient are D1 and D2 during the two stages of the heat treatment. The duration of the first heat treatment is t1, which is followed by the second heat treatment having a duration of t2. Derive an expression for C(x, t1 + t2), the concentration profile at the end of the second heat treatment.

(4.6) Consider diffusion in one dimension, x, for a system comprised of two different materials. Suppose that the region x ≥ 0 consists of one material having a diffusion coefficient of D1. The region x < 0 consists of another material, which has a diffusion coefficient of D2. Let us assume that the initial concentration of the diffusing species is equal to a constant C0 within the first material (x ≥ 0), and the initial concentration is zero at x < 0. Derive solutions for the evolution of the concentration of the diffusing material in both the first material, C1(x ≥ 0, t), and the second material, C2(x < 0, t). Plot the solution for C/C0 vs. x in the domain of x = [-4, 4] for Dt = 1/16, 1/2, and 1.

(4.7) Consider three-dimensional diffusion from a cube having a volume a^3 and centered at the origin (x = y = z = 0). The initial concentration inside the cube is uniform and equal to C0. The initial concentration is zero outside of the cube. The diffusion coefficient, D, is a constant.
(a) Derive the three-dimensional solution for C(x, y, z, t).
(b) Show that for a long diffusion distance, i.e., \(2\sqrt{Dt} \gg a\), the solution in (a) reduces to that for a standard point-source problem, where the cube effectively serves as the point source.

(4.8) Solve the one-dimensional diffusion problem in an infinite medium having a periodic "square wave" initial condition described by:

\[ C(x, t=0) = \begin{cases} C_0, & 0 \le x + nl \le \dfrac{l}{2} \\[4pt] 0, & \text{otherwise} \end{cases} \]


where n adopts all positive and negative integer values. The diffusion equation should be solved using separation of variables to obtain the solution in terms of an infinite sine series.

(4.9) Derive an expression for the Laplace transform of the nth order time derivative of a function, i.e., \(\mathcal{L}\{\partial^n f/\partial t^n\}\). Show your work.

(4.10) Prove that: \(\mathcal{L}^{-1}\left\{\dfrac{e^{-\sqrt{p/D}\,x}}{p}\right\} = \mathrm{erfc}\!\left(\dfrac{x}{\sqrt{4Dt}}\right)\).

(4.11) Use Laplace transforms to solve the diffusion equation on a semi-infinite domain of x ≥ 0 where the initial conditions are C(x, t = 0) = C0, 0 < x < a,

(b) Show that for a long diffusion distance, i.e., \(2\sqrt{Dt} \gg a\), the solution in (a) reduces to that for a standard point-source problem, where the sphere serves as the effective point source.


References

[1] J. Crank, The Mathematics of Diffusion, Oxford University Press, Oxford (1975).
[2] D. Zill, W. S. Wright, and M. R. Cullen, Advanced Engineering Mathematics, Jones & Bartlett Learning (2011).


CHAPTER 5

Multicomponent Diffusion

5.1 Introduction

In Chapter 4, we detailed several techniques for solving the diffusion equation considering a single diffusing species, where the concentration gradient of that species is the only thermodynamic affinity driving the irreversible process. This is consistent with Fick's first law, where the flux of the diffusing species arises in response to its conjugate driving force, i.e., the flux of a species acts to eliminate its own concentration gradient and thereby lessen the driving force for diffusion. However, many systems of interest have more than one type of diffusing species. While direct application of Fick's first law seems to suggest that the diffusion of each species could be treated independently, this would ignore important interaction effects among species. Interaction effects are essential for multicomponent diffusion problems and can lead to surprising behavior such as uphill diffusion, where diffusive fluxes act to enhance rather than reduce a concentration gradient.

The mathematical approach for solving multicomponent diffusion problems was initially proposed by Arun K. Varshneya in 1970 [1] and further developed by Prabhat K. Gupta and Alfred R. Cooper¹ [2] in 1971. These authors employed the principles of irreversible thermodynamics to account for interaction effects among the various diffusing species. If there are N different types of diffusing species, the flux equations from Eq. (2.14) are:

¹ Incidentally, your author has strong personal connections with all three of these esteemed scientists. Varshneya is my Ph.D. advisor from Alfred University (my "academic father"). I also studied extensively with Gupta (my "academic uncle"), who is an emeritus professor at The Ohio State University and is also Varshneya's first cousin, both born in the same small room in Agra, India. Cooper (my "academic grandfather") was the Ph.D. advisor of both Varshneya and Gupta at Case Western Reserve University in Cleveland, Ohio. For more details on this family history, I refer the interested reader to Ref. [3].

Materials Kinetics, ISBN 978-0-12-823907-0, https://doi.org/10.1016/B978-0-12-823907-0.00024-8
© 2021 Elsevier Inc. All rights reserved.

\[ \begin{aligned}
J_1 &= L_{11}F_1 + L_{12}F_2 + L_{13}F_3 + \cdots + L_{1N}F_N \\
J_2 &= L_{21}F_1 + L_{22}F_2 + L_{23}F_3 + \cdots + L_{2N}F_N \\
J_3 &= L_{31}F_1 + L_{32}F_2 + L_{33}F_3 + \cdots + L_{3N}F_N \\
&\;\;\vdots \\
J_N &= L_{N1}F_1 + L_{N2}F_2 + L_{N3}F_3 + \cdots + L_{NN}F_N.
\end{aligned} \quad (5.1) \]

Here we have assumed a linear, purely resistive system (see Sections 2.5 and 2.6), where J is flux, F is affinity, and the L parameters are the kinetic coefficients. The subscripts of each variable indicate the type of the diffusing species. From Eq. (3.1), the relevant affinity is the gradient in \(-(\mu_i/T)\), where T is absolute temperature and \(\mu_i\) is the chemical potential of each species, i. Hence, the set of flux equations can be written as

\[ \begin{aligned}
J_1 &= -\frac{L_{11}}{T}\frac{\partial \mu_1}{\partial x} - \frac{L_{12}}{T}\frac{\partial \mu_2}{\partial x} - \frac{L_{13}}{T}\frac{\partial \mu_3}{\partial x} - \cdots - \frac{L_{1N}}{T}\frac{\partial \mu_N}{\partial x} \\
J_2 &= -\frac{L_{21}}{T}\frac{\partial \mu_1}{\partial x} - \frac{L_{22}}{T}\frac{\partial \mu_2}{\partial x} - \frac{L_{23}}{T}\frac{\partial \mu_3}{\partial x} - \cdots - \frac{L_{2N}}{T}\frac{\partial \mu_N}{\partial x} \\
J_3 &= -\frac{L_{31}}{T}\frac{\partial \mu_1}{\partial x} - \frac{L_{32}}{T}\frac{\partial \mu_2}{\partial x} - \frac{L_{33}}{T}\frac{\partial \mu_3}{\partial x} - \cdots - \frac{L_{3N}}{T}\frac{\partial \mu_N}{\partial x} \\
&\;\;\vdots \\
J_N &= -\frac{L_{N1}}{T}\frac{\partial \mu_1}{\partial x} - \frac{L_{N2}}{T}\frac{\partial \mu_2}{\partial x} - \frac{L_{N3}}{T}\frac{\partial \mu_3}{\partial x} - \cdots - \frac{L_{NN}}{T}\frac{\partial \mu_N}{\partial x},
\end{aligned} \quad (5.2) \]

and the corresponding rate of entropy production is

\[ \frac{dS}{dt} = -\sum_{i=1}^{N}\frac{J_i}{T}\frac{\partial \mu_i}{\partial x}. \quad (5.3) \]

Here we have assumed diffusion along one spatial dimension, x. We have also assumed that T is homogeneous throughout the system. Note that Eq. (5.2) includes contributions to the flux of each species due to its own conjugate driving force (i.e., its own chemical potential gradient) as well as interaction terms where the chemical potential gradient of one species can induce a flux of another species.

In Section 3.5 we discussed the physics of interdiffusion, i.e., the counter-diffusion of two different species. The most important question for interdiffusion is whether the net flux of the two diffusing species is zero, as depicted in Figure 3.6. If the net flux is zero, as in Eq. (3.35), then the interdiffusion process proceeds by a simple swapping of positions of the two species. In this case, the diffusion coefficients of the two species must be


equal, as in Eq. (3.36). Hence, the interdiffusion process can be described using a single diffusion coefficient rather than two independent diffusivities.

This result from Section 3.5 can be extended to the general case of multicomponent diffusion involving three or more diffusing species. If the net flux of all N diffusing species is zero, then only N - 1 of the concentrations (C) and fluxes are independent. Hence, the flux equations in Eq. (5.2) can simplify as

\[ J_i = -\sum_{j=1}^{N-1}\tilde D_{ij}\frac{\partial C_j}{\partial x}, \quad (5.4) \]

where we have also assumed that the chemical potential gradient is proportional to the concentration gradient, and that the relevant kinetic factors are lumped together into a diffusion coefficient (see Section 3.1). More specifically, in Eq. (5.4), \(\tilde D_{ij}\) is an interdiffusion coefficient governing the diffusion kinetics associated with the swapping of species i and j.

5.2 Matrix Formulation of Diffusion in a Ternary System

Let us consider a system with simultaneous diffusion of three different species along one dimension, x. If we assume that the net flux of the three diffusing species is zero, then only two of the concentrations and fluxes are independent of each other. Hence, a ternary system with a zero net flux requires two diffusion equations,

\[ \begin{aligned}
\frac{\partial C_1}{\partial t} &= -\nabla\cdot\vec J_1 = \frac{\partial}{\partial x}\!\left(\tilde D_{11}\frac{\partial C_1}{\partial x}\right) + \frac{\partial}{\partial x}\!\left(\tilde D_{12}\frac{\partial C_2}{\partial x}\right) \\
\frac{\partial C_2}{\partial t} &= -\nabla\cdot\vec J_2 = \frac{\partial}{\partial x}\!\left(\tilde D_{21}\frac{\partial C_1}{\partial x}\right) + \frac{\partial}{\partial x}\!\left(\tilde D_{22}\frac{\partial C_2}{\partial x}\right),
\end{aligned} \quad (5.5) \]

where the third concentration, C3, is given simply by the total concentration of all diffusing species minus the sum of C1 and C2. Eq. (5.5) can be written equivalently in vector-matrix notation as

\[ \frac{\partial}{\partial t}\begin{bmatrix} C_1 \\ C_2 \end{bmatrix} = \frac{\partial}{\partial x}\!\left(\begin{bmatrix} \tilde D_{11} & \tilde D_{12} \\ \tilde D_{21} & \tilde D_{22} \end{bmatrix}\frac{\partial}{\partial x}\begin{bmatrix} C_1 \\ C_2 \end{bmatrix}\right), \quad (5.6) \]

or

\[ \frac{\partial \vec C}{\partial t} = \frac{\partial}{\partial x}\!\left(\tilde{\mathbf{D}}\,\frac{\partial \vec C}{\partial x}\right), \quad (5.7) \]


where \(\tilde{\mathbf{D}}\) is the interdiffusivity matrix. For a ternary system with zero net flux, \(\tilde{\mathbf{D}}\) is a 2 × 2 matrix,

\[ \tilde{\mathbf{D}} = \begin{bmatrix} \tilde D_{11} & \tilde D_{12} \\ \tilde D_{21} & \tilde D_{22} \end{bmatrix}. \quad (5.8) \]

Assuming that Onsager reciprocity is valid, then \(\tilde D_{12} = \tilde D_{21}\), such that \(\tilde{\mathbf{D}}\) is symmetric. Note that if the zero net flux condition is not satisfied, then the full 3 × 3 matrix must be considered.

5.3 Solution by Matrix Diagonalization

The multicomponent diffusion equations can be solved using standard linear algebra methods, analogous to our solution of the anisotropic diffusion problem (Section 4.8). The key step is to diagonalize the \(\tilde{\mathbf{D}}\) matrix, which transforms the problem to principal component space, thereby eliminating the interaction terms (i.e., the off-diagonal terms) from the matrix. This is accomplished by solving the eigenvector-eigenvalue problem,

\[ \begin{bmatrix} \text{diagonalized} \\ \text{matrix} \end{bmatrix} = \begin{bmatrix} \text{eigenvector} \\ \text{column matrix} \end{bmatrix}^{-1}\begin{bmatrix} \text{square} \\ \text{matrix} \end{bmatrix}\begin{bmatrix} \text{eigenvector} \\ \text{column matrix} \end{bmatrix}. \quad (5.9) \]

For the interdiffusivity matrix in Eq. (5.8), the eigenvalues (\(\lambda\)) and eigenvectors (\(\vec a\)) satisfy

\[ \tilde{\mathbf{D}}\,\vec a = \lambda\,\vec a. \quad (5.10) \]

Since \(\tilde{\mathbf{D}}\) is a 2 × 2 matrix, the two eigenvalues can be determined by solving the quadratic equation,

\[ \det\begin{bmatrix} \tilde D_{11}-\lambda & \tilde D_{12} \\ \tilde D_{21} & \tilde D_{22}-\lambda \end{bmatrix} = 0. \quad (5.11) \]

Expanding the determinant, the corresponding eigenvalues are

\[ \lambda_{\pm} = \frac{(\tilde D_{11} + \tilde D_{22}) \pm \Delta}{2}, \quad (5.12) \]

where

\[ \Delta \equiv \sqrt{(\tilde D_{11} - \tilde D_{22})^2 + 4\tilde D_{12}\tilde D_{21}}. \quad (5.13) \]


Since the eigenvalues of the interdiffusivity matrix must be real and positive, there are physical limitations on what values can be adopted by the interdiffusion coefficients. For the ternary case, the interdiffusion coefficients must satisfy the following conditions:

\[ \begin{aligned}
\tilde D_{11} + \tilde D_{22} &> 0 \\
(\tilde D_{11} + \tilde D_{22})^2 - 4(\tilde D_{11}\tilde D_{22} - \tilde D_{12}\tilde D_{21}) &\ge 0 \\
(\tilde D_{11}\tilde D_{22} - \tilde D_{12}\tilde D_{21}) &\ge 0.
\end{aligned} \quad (5.14) \]

The corresponding eigenvectors are

\[ \vec a_- = \begin{bmatrix} \dfrac{\tilde D_{11} - \tilde D_{22} - \Delta}{2\tilde D_{21}} \\[6pt] 1 \end{bmatrix} \quad (5.15) \]

for the slow diffusion mode, which corresponds to the lower eigenvalue, \(\lambda_-\), and

\[ \vec a_+ = \begin{bmatrix} \dfrac{\tilde D_{11} - \tilde D_{22} + \Delta}{2\tilde D_{21}} \\[6pt] 1 \end{bmatrix} \quad (5.16) \]

for the fast diffusion mode, which corresponds to the higher eigenvalue, \(\lambda_+\). Thus, the eigenvector column matrix is

\[ \mathbf{A} = \begin{bmatrix} \dfrac{\tilde D_{11} - \tilde D_{22} - \Delta}{2\tilde D_{21}} & \dfrac{\tilde D_{11} - \tilde D_{22} + \Delta}{2\tilde D_{21}} \\[6pt] 1 & 1 \end{bmatrix}, \quad (5.17) \]

and the inverse of the eigenvector column matrix is

\[ \mathbf{A}^{-1} = \begin{bmatrix} -\dfrac{\tilde D_{21}}{\Delta} & \dfrac{\tilde D_{11} - \tilde D_{22} + \Delta}{2\Delta} \\[6pt] \dfrac{\tilde D_{21}}{\Delta} & -\dfrac{\tilde D_{11} - \tilde D_{22} - \Delta}{2\Delta} \end{bmatrix}. \quad (5.18) \]

This book belongs to Alice Cartes ([email protected])

Copyright Elsevier 2023

90

Materials Kinetics



J1



J2

  C1 e ¼  DV : C2

Multiplying both sides of Eq. (5.20) by A1 , we have     C1 J1 1 1 e A ¼  A DV : J2 C2

(5.20)

(5.21)

We further multiply the right-hand side of the equation by the identity matrix (AA1 ) to obtain     J1 C1 1 1 e 1 A ¼  A DAA V : (5.22) J2 C2 Substituting the diagonalized matrix form of the interdiffusivity matrix from Eq. (5.19) into the right-hand side of Eq. (5.22),       J1 l 0 C1 1 1 A ¼ A V : (5.23) 0 lþ J2 C2 From Eq. (5.23), we can define the transformed flux and concentration vectors as:         Ja Ca J1 C1 ¼ A1 ; ¼ A1 : (5.24) Jb Cb J2 C2 Combining Eqs. (5.24) and (5.23), we have the diagonalized form of the flux equations,       Ja l 0 Ca ¼ V : (5.25) 0 lþ Jb Cb The coordinate transformations are obtained by combining Eqs. (5.18) and (5.24), yielding Ja ¼ 

e 22 þ D e 21 e 11  D D D J2 J1 þ 2D D

(5.26)

and Jb ¼

e 22  D e 21 e 11  D D D J1  J2 D 2D

(5.27)

for the fluxes, and

This book belongs to Alice Cartes ([email protected])

Copyright Elsevier 2023

Multicomponent Diffusion

Ca ¼ 

e 21 e 11  D e 22 þ D D D C2 : C1 þ 2D D

91

(5.28)

and Cb ¼

e 22  D e 21 e 11  D D D C2 C1  2D D

(5.29)

for the concentrations. With the diagonalized form of the flux equations in Eq. (5.25), it is now straightforward to solve the equations for multicomponent diffusion using the standard methods already discussed in Chapter 4. Once the solution to the diagonalized problem is found, it can be transformed back to the original coordinate system via     Ja J1 ¼A (5.30) Jb J2 and



C1 C2



 ¼A

Ca Cb

 :

(5.31)

Figure 5.1 shows a visual summary of the various coordinate transformations that we used to solve the multicomponent diffusion problem. Figure 5.1(a) shows the typical representation of concentration for a threecomponent system using a standard ternary phase diagram. With the zero net flux condition, only two out of the three concentrations are mutually independent within any region in space. Hence, the concentrations along the diffusion pathway can be plotted with a reduced set of axes (C1 and C2), where the remaining concentration is given by C3 ¼ 1  C1  C2 at all times, as indicated in Figure 5.1(b). Finally, Figure 5.1(c) shows the transformed coordinate system of Ca and Cb after diagonalizing the diffusivity matrix. The new coordinates are mutually orthogonal and eliminate the cross-effects in the transformed diffusivity matrix. An example diffusion pathway in this transformed coordinate system is plotted in Figure 5.2. Here, the two transformed axes correspond to fast (lþ ) and slow (l ) eigenmodes. Solving the problem in the transformed coordinate space eliminates the off-diagonal terms such that the two diffusion equations are decoupled and can be solved independently following the methods of Chapter 4. Once the diffusion pathway is solved in the transformed coordinate system, it can then be transformed back to the original coordinate system using Eqs. (5.30) and (5.31). This book belongs to Alice Cartes ([email protected])

Copyright Elsevier 2023

92

Materials Kinetics

Figure 5.1 Summary of the coordinate transformations used to solve the multicomponent diffusion problem. (a) In materials science, the usual representation of compositions in a three-component system employs a ternary phase diagram. (b) Taking advantage of the zero net flux condition, only two out of the three concentrations are mutually independent. Hence, only two axes are necessary, and the third component is entirely determined by the other two. (c) By diagonalizing the diffusivity matrix, the coordinate system is transformed to align with the slow and fast diffusion axes. With this transformation of coordinates, the off-diagonal terms are effectively removed from the problem, enabling the use of standard solutions of the diffusion equations. After the diffusion equations are solved, the solutions can be transformed back to the original coordinate system.

Figure 5.2 Example diffusion pathway from point P to point Q in the concentration phase space. When the coordinates are transformed by diagonalizing the diffusivity matrix, the diffusion pathway can be decomposed into independent contributions along the fast and slow eigenmodes.


5.4 Uphill Diffusion

One of the byproducts of multicomponent diffusion is that it is possible to observe uphill diffusion, where the flux acts to increase rather than decrease a concentration gradient. Uphill diffusion along a direction x is defined by

Ji (∂Ci/∂x) > 0,    (5.32)

i.e., where both the flux and the concentration gradient of a given species i have the same sign. Uphill diffusion is a frequently observed phenomenon in multicomponent systems. For example, Figure 5.3 shows a concentration profile that can occur through diffusion in the ternary Ni-Fe-Co alloy system. The uphill diffusion depicted in this figure originates from the interaction terms in the interdiffusivity matrix [4]. Such uphill diffusion is normally forbidden by Fick’s first law. However, it can be observed in multicomponent systems due to the interaction effects between species. This underscores the importance of including the off-diagonal terms in the interdiffusivity matrix, following Onsager’s formulation of irreversible thermodynamics. Indeed, interaction terms have been shown to be critically important for obtaining accurate solutions of multicomponent diffusion problems across all families of materials, including metals [5], ceramics [6], polymers [7], and glasses [8].
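To see how an interaction term can drive a flux up a gradient, consider a minimal numerical sketch of the criterion in Eq. (5.32). The diffusivities and gradients below are hypothetical, chosen so that the off-diagonal (interaction) contribution dominates the flux of species 1:

```python
# Flux of species 1 in a ternary system with zero net flux (2x2 [D] matrix):
#   J1 = -D11 * dC1/dx - D12 * dC2/dx
# Hypothetical values: a small gradient in C1, a large opposing gradient in C2.
D11, D12 = 1.0, 0.5          # diagonal and interaction diffusivities
dC1_dx, dC2_dx = 0.1, -1.0   # concentration gradients

J1 = -D11 * dC1_dx - D12 * dC2_dx   # = -0.1 + 0.5 = 0.4

# Uphill diffusion criterion of Eq. (5.32): flux and gradient share a sign
uphill = J1 * dC1_dx > 0
```

Here the interaction term −D12 ∂C2/∂x overwhelms the ordinary downhill term, so J1 and ∂C1/∂x are both positive and species 1 diffuses uphill.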

Figure 5.3 Example concentration profile for the Ni-Fe-Co system, which results from uphill diffusion owing to the off-diagonal (interaction) terms of the interdiffusivity matrix.


A particularly instructive example of multicomponent diffusion is the study of Pablo et al. [8] in the ternary sodium borosilicate (Na2O-B2O3-SiO2) glass system. Pablo et al. performed multicomponent diffusion experiments using three pairs of diffusion couples at temperatures ranging from 700 °C to 1100 °C. The initial compositions in the diffusion couples were designed around a central composition of 14.23Na2O·18.04B2O3·67.73SiO2 (mol%), which is a simplified version of a glass used for nuclear waste immobilization. The diffusion couples consisted of two glass disks having different compositions, where the concentrations of two oxides were varied by ±4 mol% in each couple. For example, the SiO2-Na2O diffusion couple consisted of a disk with an additional 4 mol% SiO2 in place of Na2O, and the other disk with an additional 4 mol% Na2O in place of SiO2. The other two diffusion couples had compositions swapping SiO2 with B2O3 and Na2O with B2O3. After the 700–1100 °C heat treatments were performed for different diffusion times, the concentration profiles of Na2O, B2O3, and SiO2 were measured using the electron probe micro-analysis (EPMA) technique, supplemented with secondary ion mass spectroscopy (SIMS). Both the EPMA and SIMS techniques for measuring chemical concentration profiles have been covered in Section 3.6. The measured concentration data are plotted in Figure 5.4 for the three sets of diffusion couples. The solid lines in the figure show fits of the measured concentration data to a multicomponent diffusion model constructed via the matrix formulation in Section 5.3. Interestingly, the off-diagonal terms in the interdiffusivity matrix are found to be of the same order of magnitude as the diagonal terms, indicating the very important role of interactions in governing the multicomponent diffusion process. Figure 5.4 demonstrates excellent agreement between the measured concentration data and the theoretically fitted concentration profiles.
Uphill diffusion is readily apparent in these results, especially in the Na2O-B2O3 diffusion couple. Although the initial concentration of SiO2 was homogeneous in the Na2O-B2O3 diffusion couple (since only Na2O and B2O3 were swapped), a nonzero flux of SiO2 results from the interaction terms of the interdiffusivity matrix, leading to the interesting nonmonotonic concentration profile of SiO2 shown in the figure. Figure 5.5 plots the resulting diffusion pathways from the experiment of Pablo et al. [8] in the sodium borosilicate glass system. The three paths in the figure correspond to each of the three diffusion couples (SiO2-Na2O, SiO2-B2O3, and Na2O-B2O3). The endpoints of each path indicate the two starting compositions of the disks within each diffusion couple. The two


Figure 5.4 Measured concentration profiles of Na2O, B2O3, and SiO2 in the three sets of diffusion couples (SiO2-Na2O, SiO2-B2O3, and Na2O-B2O3) for multicomponent diffusion experiments conducted in the ternary Na2O-B2O3-SiO2 glass system at temperatures ranging from 700 °C to 1100 °C. The solid lines show fits of the measured concentration data using a multicomponent diffusion model as described in Section 5.3. Uphill diffusion is apparent in each of the diffusion couples, which is especially pronounced in the SiO2 profile of the Na2O-B2O3 couple. (Reproduced with permission from Ref. [8]).


Figure 5.5 Multicomponent diffusion pathways plotted with the fast (“major”) and slow (“minor”) eigenmodes in the ternary Na2O-B2O3-SiO2 glass system. (Modified with permission from Ref. [8]).

eigenvectors corresponding to the fast (“major”) and slow (“minor”) principal components are also plotted. Near the initial compositions, the diffusion pathways follow the major eigenmode more closely, since a greater eigenvalue indicates faster diffusion along that mode. As the diffusion process continues, i.e., as the systems approach the average composition in the middle of the diagram, the slower (minor) eigenmode becomes dominant.

5.5 Summary

Multicomponent diffusion occurs in systems having three or more mobile species. The equations for multicomponent diffusion can be derived from irreversible thermodynamics, accounting for both conjugate driving forces and interaction terms among the various species. If the zero net flux condition is satisfied for an N-component diffusion problem, then only N − 1 of the species need to be considered, leading to an (N − 1) × (N − 1) matrix of interdiffusion coefficients. However, if the zero net flux condition is not satisfied, then the full N × N matrix of diffusivities must be included. As in the case of anisotropic diffusion, the multicomponent diffusion problem can be solved using the methods of linear algebra, viz., by diagonalizing the interdiffusivity matrix and solving the problem in principal component space. Transformation into principal component space removes the off-diagonal terms from the matrix, so that standard solutions of the diffusion equation can be employed. The final solution can then be obtained by transforming the solution from principal component space back to the original coordinate system.


Interaction effects in multicomponent diffusion problems result directly from the coupling terms in Onsager’s formulation of irreversible thermodynamics. The interaction terms are essential for accurately capturing the complicated diffusion pathways in multicomponent systems. In many cases, these interaction effects can lead to the surprising result of “uphill” diffusion, i.e., where the flux of a diffusing species acts to increase its own concentration gradient. This phenomenon would not be allowed in the original formulation of Fick’s first law, where the flux always acts to lower the concentration gradient.

Exercises

(5.1) Consider a diffusion couple consisting of two semi-infinite rods with an interface at x = 0. The first rod (at x < 0) has initially homogeneous concentrations of three species: C1 = 0.5, C2 = 0.5, and C3 = 0. The second rod (at x > 0) has initially homogeneous concentrations of: C1 = 0, C2 = 0.5, and C3 = 0.5. Solve the ternary diffusion problem accounting for interaction terms and assuming that the zero net flux condition is satisfied:
(a) Write the appropriate flux equations, accounting for the zero net flux condition.
(b) Construct the diffusion equations using a 2 × 2 interdiffusivity matrix.
(c) Diagonalize the 2 × 2 interdiffusivity matrix.
(d) Transform the diffusion equations into principal component space along the major and minor eigenmodes.
(e) Solve the two decoupled diffusion equations in principal component space using the error function method of Section 4.4.
(f) Transform the solutions back to the original coordinate system.
(g) Write the final equations for C1(x, t), C2(x, t), and C3(x, t).

(5.2) Formulate the ternary diffusion problem for the case in which the zero net flux condition is not satisfied:
(a) Write the flux equations for the three components.
(b) Construct the diffusion equations using a 3 × 3 diffusivity matrix.
(c) Diagonalize the 3 × 3 diffusivity matrix.
(d) Transform the diffusion equations into principal component space.


(5.3) Formulate the problem of four-component (quaternary) diffusion where the zero net flux condition is satisfied. How does your formulation of this problem compare to that in Exercise (5.2) above?

(5.4) Search the scientific literature for an example of uphill diffusion in a multicomponent system. Provide the reference and summarize the experiment. What are the important factors driving uphill diffusion in this system?

References

[1] A. K. Varshneya, Multicomponent Diffusion in Glasses – Theory and Application to Tektites, Ph.D. Thesis, Case Western Reserve University, Cleveland, Ohio (1970).
[2] P. K. Gupta and A. R. Cooper Jr., “The [D] Matrix for Multicomponent Diffusion,” Physica 54, 39 (1971).
[3] A. K. Varshneya, E. D. Zanotto, and J. C. Mauro, “Perspectives on the Scientific Career and Impact of Prabhat K. Gupta,” J. Non-Cryst. Solids X 1, 100011 (2019).
[4] A. Vignes and J. P. Sabatier, “Ternary Diffusion in Fe-Co-Ni Alloys,” Trans. TMS-AIME 245, 1795 (1969).
[5] M. A. Dayananda, “An Analysis of Concentration Profiles for Fluxes, Diffusion Depths, and Zero-Flux Planes in Multicomponent Diffusion,” Metall. Trans. A 14, 1851 (1983).
[6] A. C. Lasaga, “Multicomponent Exchange and Diffusion in Silicates,” Geochim. Cosmochim. Acta 43, 455 (1979).
[7] P. E. Price Jr. and I. H. Romdhane, “Multicomponent Diffusion Theory and its Applications to Polymer-Solvent Systems,” AIChE J. 49, 309 (2003).
[8] H. Pablo, S. Schuller, M. J. Toplis, E. Gouillart, S. Mostefaoui, T. Charpentier, and M. Roskosz, “Multicomponent Diffusion in Sodium Borosilicate Glasses,” J. Non-Cryst. Solids 478, 29 (2017).


CHAPTER 6

Numerical Solutions of the Diffusion Equation

6.1 Introduction

In Chapter 4 we considered several methods for deriving closed-form analytical solutions of the diffusion equation under various initial and boundary conditions. While Chapter 4 dealt with diffusion of a single component, in Chapter 5 we saw that linear algebra techniques can be used to extend these solutions to the case of multicomponent diffusion through diagonalization of the diffusivity matrix. The approaches in Chapters 4 and 5 are useful for solving a wide range of diffusion problems of practical interest. However, many diffusion problems are too complicated to solve via analytical means. Some difficult scenarios include:
• Complicated initial or boundary conditions, including complex sample geometries.
• The existence of many simultaneously diffusing components, making diagonalization of the diffusivity matrix too difficult to solve analytically.
• Anisotropic diffusion, often coupled with multiple diffusing components.
• Concentration-dependent diffusion coefficients.
• Inhomogeneous temperature distributions, which may also evolve with time.
• Application of external fields that influence the diffusion process.
• Complicated source-limited kinetics, e.g., where the concentration of the diffusing species varies as a function of time following some chemical reaction.
• Any combination of the above effects.
With any of the above scenarios, it may not be possible to derive a closed-form analytical solution of the diffusion equation. In such cases, it is necessary to use numerical methods to solve Fick’s second law. In this chapter, we discuss the finite difference method, which was originally developed by the Swiss mathematician Leonhard Euler


(1707–1783) around 1768 [1] and is the most common approach for obtaining numerical solutions of the diffusion equation. The finite difference method is implemented in a variety of commercial software programs, such as COMSOL [2]. Computationally inclined readers may also wish to implement their own version of the finite difference algorithm, e.g., using a freely available programming environment such as Python. In fact, an open source Python library for solving diffusion problems, called pydiffusion, has recently been released by J.-C. Zhao’s group at The Ohio State University [3]. This code implements the finite difference method discussed in this chapter and incorporates visualization tools for plotting the results.

6.2 Dimensionless Variables

While not strictly necessary, for numerical solutions of the diffusion equation it is often convenient to express the relevant variables in dimensionless form. For example, if we consider one-dimensional diffusion in a planar sheet of thickness l with a constant diffusion coefficient D, the following dimensionless variables can be defined. The dimensionless position, X,

X = x / l,    (6.1)

is simply the position coordinate x normalized by the thickness of the sample, l, such that X ranges from 0 to 1. The dimensionless time, T, can be obtained from the regular time, t, by

T = Dt / l².    (6.2)

Incorporation of the diffusion coefficient into the dimensionless time allows for easier scaling of the numerical solutions, i.e., considering the kinetics of the diffusion process relative to the length scale over which the diffusion occurs. The normalized concentration, c, is simply the ratio of the concentration, C, to the initial concentration, C0:

c = C / C0.    (6.3)

Having defined the normalized coordinates in Eqs. (6.1)–(6.3), the next step is to transform the partial derivatives in Fick’s second law:

∂C/∂t = D ∂²C/∂x².    (6.4)

Using the definition of the normalized position from Eq. (6.1), the first derivative with respect to the position coordinate can be written as:

∂C/∂x = (∂C/∂X)(dX/dx) = (1/l) ∂C/∂X.    (6.5)

Likewise, the second derivative is:

∂²C/∂x² = (∂/∂x)[(1/l) ∂C/∂X] = (1/l²) ∂²C/∂X².    (6.6)

The time derivative on the left-hand side of Eq. (6.4) can be expressed using the normalized time coordinate defined in Eq. (6.2):

∂C/∂t = (∂C/∂T)(dT/dt) = (D/l²) ∂C/∂T.    (6.7)

Substituting Eqs. (6.6) and (6.7) into Eq. (6.4), we have

∂C/∂T = ∂²C/∂X²,    (6.8)

where the factors of D/l² have canceled from both sides of the equation. Using the definition of the normalized concentration from Eq. (6.3), the factors of 1/C0 also cancel, giving our final expression for Fick’s second law in normalized coordinates:

∂c/∂T = ∂²c/∂X².    (6.9)
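The three normalizations can be collected into a small helper function. The numerical values in the usage example below are hypothetical, chosen only to illustrate the scaling:

```python
def to_dimensionless(x, t, C, l, D, C0):
    """Map (x, t, C) to the normalized (X, T, c) of Eqs. (6.1)-(6.3)."""
    X = x / l           # Eq. (6.1): position normalized by thickness l
    T = D * t / l**2    # Eq. (6.2): time scaled by the diffusion time l^2/D
    c = C / C0          # Eq. (6.3): concentration relative to C0
    return X, T, c

# Hypothetical example: a 1 mm sheet with D = 1e-9 m^2/s and C0 = 4.0
X, T, c = to_dimensionless(x=0.5e-3, t=100.0, C=2.0, l=1e-3, D=1e-9, C0=4.0)
# X = 0.5, T = 0.1, c = 0.5
```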

6.3 Physical Interpretation of the Finite Difference Method

The finite difference method is a numerical technique for solving differential equations based on discretizing the continuous dimensions and applying approximate numerical calculations of the derivatives. In other words, the finite difference method involves dividing the continuous spatial and temporal dimensions into discrete grid points. If the diffusion occurs along only one spatial dimension, then each of those grid points effectively represents a thin slab of material. Figure 6.1 shows an example discretization of a system along one spatial dimension. The blue curve in the lower part of the figure shows a continuous scaling of concentration, C, along the normalized spatial


Figure 6.1 Example discretization of a system, with a grid spacing of h along the normalized position axis, X. The physical basis of the finite difference method considers the transfer of matter through two planes, R and S, positioned halfway between the grid points.

coordinate, X, where the grid spacing is denoted as h. Three grid points are indicated along the X dimension, where the local concentrations are labeled as C0, C1, and C2. To consider the physical basis of the finite difference method, imagine two planes located halfway between the central grid points in Figure 6.1. Plane R is located halfway between the 0 and 1 positions, and plane S is located halfway between the 1 and 2 positions. The shaded region between planes R and S has a normalized thickness of h. Let us denote qR as the amount of diffusing matter that enters the shaded region through the surface R over some time dt. Approximating the concentration gradient at R as (C1 − C0)/h, integration of Fick’s first law yields:

qR = −D dt (C1 − C0) / h.    (6.10)

Likewise, the amount of matter flowing out of the shaded region by leaving through plane S can be approximated by:

qS = −D dt (C2 − C1) / h.    (6.11)


Hence, the net change in the amount of diffusing matter in the shaded region between planes R and S is the amount of material entering through plane R minus the amount of material exiting through plane S. Over the time step dt, this net change is:

qR − qS = −(D dt / h)(C1 − C0 − C2 + C1) = (D dt / h)(C0 − 2C1 + C2).    (6.12)

If C1 is the average concentration in the shaded region of Figure 6.1, then the net gain of matter in the shaded region is:

(C1′ − C1) h = (D dt / h)(C0 − 2C1 + C2),    (6.13)

where C1′ is the new concentration after the time step, dt. This equation can be simplified by normalizing the time step as

dt = h² / (2D),    (6.14)

such that Eq. (6.13) can be expressed as

C1′ = (C0 + C2) / 2.    (6.15)

This exercise demonstrates the physical foundation of the finite difference method in that (a) the continuous spatial dimension is discretized onto a grid and (b) the fluxes entering and exiting a given grid point are approximated using a discretized form of Fick’s first law, where the concentration gradients are calculated using the differences in concentrations between neighboring grid points.
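A quick numerical check of this result, with hypothetical slab concentrations: choosing the time step of Eq. (6.14) makes the updated middle concentration the simple average of its two neighbors, independent of its own starting value.

```python
# Hypothetical concentrations of three neighboring slabs
C0, C1, C2 = 1.0, 0.4, 0.2
D, h = 1e-9, 1e-6          # diffusivity and grid spacing (arbitrary units)

dt = h**2 / (2 * D)                                # Eq. (6.14)
C1_new = C1 + (D * dt / h**2) * (C0 - 2*C1 + C2)   # rearranged Eq. (6.13)

# C1_new equals (C0 + C2)/2 = 0.6, regardless of the initial C1
```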

6.4 Finite Difference Solutions

In order to apply the finite difference method to calculate a numerical solution of the diffusion equation, we start with the dimensionless form of Fick’s second law in Eq. (6.9). The spatial dimension of the system is divided into grid points. For one normalized dimension, X, the grid spacing covers the full range of X from 0 to 1 with intervals of dX. Likewise, the normalized time dimension is discretized with intervals of dT. The coordinates are expressed on a grid, as depicted in Figure 6.2. Here, the normalized time (T) and position (X) are plotted on the two axes, showing discretizations into intervals of dT and dX, respectively. Each point has a


Figure 6.2 Discretized grid representation of the normalized time (T ) and position (X ) coordinates, with grid spacings of dT and dX. Each point on the grid has a normalized concentration (ci,j) reflecting the local concentration at that particular position (i ) and time ( j ).

corresponding normalized concentration, denoted ci,j, where i indicates the position on the X axis, and j indicates the position on the T axis. Let us consider one particular point, ci,j, on the grid in Figure 6.2. Assuming the value of ci,j is already known at the current time, j, we wish to calculate the concentration at the next increment of time, j + 1. This concentration can be written as a Taylor series expansion in the T direction, keeping X constant:

ci,j+1 = ci,j + (∂c/∂T)i,j dT + ½ (∂²c/∂T²)i,j (dT)² + ⋯.    (6.16)

Truncating the Taylor series after the linear term, we have

ci,j+1 ≈ ci,j + (∂c/∂T)i,j dT.    (6.17)

Hence, the first derivative in Eq. (6.17) can be approximated by the finite difference:

(∂c/∂T)i,j ≈ (ci,j+1 − ci,j) / dT.    (6.18)


Likewise, the concentrations at the neighboring positions on the X axis (i.e., the i + 1 and i − 1 positions) can be written using Taylor series expansions along the X dimension, holding T constant:

ci+1,j = ci,j + (∂c/∂X)i,j dX + ½ (∂²c/∂X²)i,j (dX)² + ⋯,    (6.19)

for the nearest neighbor to the right and

ci−1,j = ci,j − (∂c/∂X)i,j dX + ½ (∂²c/∂X²)i,j (dX)² − ⋯,    (6.20)

for the nearest neighbor to the left. In order to solve the normalized form of Fick’s second law in Eq. (6.9), we must derive a finite difference approximation to the second spatial derivative. This is accomplished by truncating Eqs. (6.19) and (6.20) after the quadratic terms and summing the two equations:

ci+1,j + ci−1,j ≈ 2ci,j + (∂²c/∂X²)i,j (dX)².    (6.21)

Note that the first derivative terms cancel each other when summing Eqs. (6.19) and (6.20). Solving now for the second derivative, we have:

(∂²c/∂X²)i,j ≈ (ci+1,j − 2ci,j + ci−1,j) / (dX)².    (6.22)

Thus, Eqs. (6.18) and (6.22) provide finite difference approximations for both sides of the normalized version of Fick’s second law in Eq. (6.9). Substituting these finite difference approximations into Eq. (6.9), we obtain

(ci,j+1 − ci,j) / dT = (ci+1,j − 2ci,j + ci−1,j) / (dX)².    (6.23)

If the concentrations are already known at the current time step (j), we wish to solve for the concentration at the new time step (j + 1). From Eq. (6.23), we immediately obtain:

ci,j+1 = ci,j + (dT / (dX)²)(ci+1,j − 2ci,j + ci−1,j).    (6.24)

106

Materials Kinetics

The grid spacing can be chosen as appropriate for the problem under study. For example, a typical choice of grid spacing may satisfy

dT / (dX)² = 1/2,    (6.25)

such that

ci,j+1 = ci,j + ½ (ci+1,j − 2ci,j + ci−1,j).    (6.26)

Thus, the finite difference method can calculate the evolution of normalized concentration at each new grid point (i, j þ 1) using Eq. (6.26). In its most simple form derived here, this calculation depends on only three other grid points: (i, j), (i  1, j), and (i þ 1, j). To calculate the evolution of concentrations over the entire system, a double-loop algorithm is used, where the outer loop steps progressively over time, and at every time step an inner loop updates the concentrations at each of the position coordinates.
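The double-loop algorithm described above can be sketched as follows. This is a minimal explicit implementation of Eq. (6.24) with the two boundary values held fixed; the grid sizes, step counts, and initial profile are hypothetical:

```python
import numpy as np

def solve_diffusion(c0, dX, dT, n_steps):
    """Explicit finite difference update of Eq. (6.24):
    c[i, j+1] = c[i, j] + (dT/dX**2) * (c[i+1, j] - 2*c[i, j] + c[i-1, j]),
    with the boundary values c[0] and c[-1] held fixed."""
    r = dT / dX**2
    c = np.array(c0, dtype=float)
    for _ in range(n_steps):            # outer loop: step over time
        new = c.copy()
        for i in range(1, len(c) - 1):  # inner loop: update each position
            new[i] = c[i] + r * (c[i+1] - 2*c[i] + c[i-1])
        c = new
    return c

# Unit concentration held on the left boundary, zero on the right;
# dT/dX**2 = 0.5, the typical choice of Eq. (6.25)
profile = solve_diffusion([1.0, 0.0, 0.0, 0.0, 0.0],
                          dX=0.25, dT=0.03125, n_steps=200)
```

After many steps, the interior points relax toward the linear steady-state profile between the two fixed boundary values.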

6.5 Considerations for Numerical Solutions

While implementation of the finite difference method may seem straightforward, care must be taken to ensure the accuracy of the numerical solution. As detailed by Crank [4], there are three important considerations for ensuring the accuracy of the numerical solution:
• Compatibility: The derivation of the finite difference method in Section 6.4 considers Taylor series expansions truncated after the linear or quadratic terms (along the normalized time and position coordinates, respectively). Hence, the higher-order terms in the Taylor series have been neglected, which can lead to truncation error. The criterion of compatibility considers whether this truncation error vanishes in the limit of infinitesimally small grid spacing, i.e., in the limits of dT → 0 and dX → 0. If the truncation error tends to zero under these conditions, then the numerical solution is considered to be compatible with solving the differential equation. However, if the truncation error does not vanish in that limit, then the approach is incompatible with solving the differential equation, indicating that there is a fundamentally wrong assumption in the derivation of the technique.
• Convergence: Once compatibility is satisfied, the convergence criterion considers whether the exact solution of the diffusion problem


can be obtained in the limit of infinitesimally small grid spacing. In other words, if the finite difference solution tends toward the exact solution in the limits of dT → 0 and dX → 0, then convergence is achieved. The convergence criterion can be evaluated, e.g., by testing the algorithm against simpler problems for which exact analytical solutions are known.
• Stability: The condition of stability relates to the round-off error that inevitably occurs in any numerical solution, given the finite amount of memory that is available for storing variables in a computer. In other words, only a finite number of decimal places can be stored for each numerical value in the computer memory. A finite difference algorithm is stable if the net effect of these round-off errors is negligible. However, if the algorithm is unstable, small errors will compound with each other, leading to larger errors as the calculation progresses.
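The stability issue can be demonstrated numerically. In the sketch below (hypothetical grid size and step count), a tiny alternating-sign perturbation is evolved with the explicit update of Eq. (6.24): when dT/(dX)² = 1/2 the perturbation decays, but when the ratio exceeds 1/2 it grows without bound, exactly the compounding of small errors described above.

```python
import numpy as np

def ftcs_step(c, r):
    """One explicit time step of Eq. (6.24), boundaries held fixed;
    r = dT / dX**2."""
    new = c.copy()
    new[1:-1] = c[1:-1] + r * (c[2:] - 2.0 * c[1:-1] + c[:-2])
    return new

def max_amplitude(r, n_steps=50, n=21):
    """Evolve a tiny alternating-sign perturbation (the worst-case,
    shortest-wavelength mode) and report its final peak amplitude."""
    c = np.zeros(n)
    c[1:-1] = 1e-6 * (-1.0) ** np.arange(1, n - 1)
    for _ in range(n_steps):
        c = ftcs_step(c, r)
    return np.abs(c).max()

stable = max_amplitude(r=0.5)    # at the stability limit: perturbation decays
unstable = max_amplitude(r=0.7)  # above 1/2: perturbation grows explosively
```

Running this shows the perturbation shrinking for r = 0.5 while blowing up by many orders of magnitude for r = 0.7, which is why the grid spacings dT and dX cannot be chosen independently.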

6.6 Summary

Many diffusion problems are too difficult to solve analytically and therefore require numerical solutions. The most common technique for solving the diffusion equation numerically is the finite difference method, in which the system is discretized onto a grid. The time and spatial derivatives on both sides of Fick’s second law are approximated using Taylor series expansions in a normalized coordinate space. The finite difference technique is a powerful approach for solving diffusion problems having difficult initial or boundary conditions, concentration-dependent diffusivities, anisotropic or multicomponent diffusion, or other complicating factors. Both commercial and open source software tools exist implementing the finite difference approach for solving the diffusion equation. Regardless of which program is used, it is important to ensure that the method satisfies requirements related to algorithm compatibility, convergence, and stability.

Exercises

(6.1) Following the approach of Section 6.4, derive a finite difference solution for a system that diffuses anisotropically along two normalized spatial dimensions, X and Y. Be sure to account for the different diffusion coefficients along the X and Y directions. How would you select the grid spacing for such an anisotropic system?


(6.2) Derive a finite difference solution similar to that in Section 6.4, but which also incorporates the second term of the Taylor series expansion for the time domain in Eq. (6.16). What are the relative advantages and disadvantages of including this additional term in the Taylor series expansion?

(6.3) Eq. (4.81) provides the diffusion equation in the cylindrical coordinate system. Propose a finite difference method for solving this equation in cylindrical coordinates.

(6.4) How should the finite difference algorithm of Section 6.4 be modified to calculate the interdiffusion of two species with zero net flux? What if the net flux is nonzero?

(6.5) Download the open source pydiffusion [3] software from https://github.com/zhangqi-chen/pyDiffusion. Familiarize yourself with the code using the included example scripts. Then, use the program to construct and solve your own diffusion problem. State the initial and boundary conditions for the problem. Plot the concentration profile calculated from the program at different times.

References

[1] M. J. Gander and G. Wanner, “From Euler, Ritz, and Galerkin to Modern Computing,” SIAM Rev. 54, 627 (2012).
[2] E. J. Dickinson, H. Ekström, and E. Fontes, “COMSOL Multiphysics®: Finite Element Software for Electrochemical Analysis. A Mini-Review,” Electrochem. Comm. 40, 71 (2014).
[3] Z. Chen, Q. Zhang, and J.-C. Zhao, “pydiffusion: A Python Library for Diffusion Simulation and Data Analysis,” J. Open Research Software 7, 13 (2019).
[4] J. Crank, “The Mathematics of Diffusion,” Oxford University Press, Oxford (1975).


CHAPTER 7

Atomic Models for Diffusion

7.1 Introduction

In 1827, the Scottish botanist Robert Brown (1773–1858) observed pollen grains under a microscope and saw them exhibiting random movement while floating on water. This phenomenon, now known as Brownian motion, is not the result of convection, since it occurs even when the water is perfectly still, at least to the naked eye. The source of Brownian motion was a mystery for almost 80 years, until 1905 when a young Albert Einstein deduced that Brownian motion originated from the continuous random collisions of water molecules with the grains of pollen [1]. This idea was revolutionary at the time, since it provided the first proof that a substance (water) was composed of atoms. Unfortunately, the news of scientific discoveries traveled much more slowly in the early 20th century compared to now, and Ludwig Boltzmann was apparently unaware of Einstein’s discovery when he took his own life a year later (1906). One wonders if the tragedy of Boltzmann’s death could have been avoided if he had been aware of Einstein’s breakthrough, confirming this most fundamental aspect of statistical mechanics, i.e., that matter is composed of atoms, and that the macroscopic behavior we observe is the result of averaging over all of the various microstates of the system at the atomic scale. Indeed, despite the vehement opposition of Ernst Mach and other prominent deniers of the atomic theory of matter, Boltzmann’s theory of statistical mechanics has proved to be correct, providing the critical link between macroscopic properties and the underlying atomic structure of materials [2].

Thus far in this textbook, we have considered diffusion, and kinetic processes in general, in terms of macroscopic equations. Irreversible thermodynamics itself (Chapter 2) is a phenomenological extension to classical thermodynamics and knows nothing about the underlying atomic scale motions that facilitate the irreversible processes.
Likewise, Fick’s laws of diffusion (Chapter 3) are written in terms of the evolution of a continuous, macroscopic property, the concentration of the diffusing species, where


the kinetic coefficient is the diffusivity, another macroscopic property. All our solutions to the diffusion equation (Chapters 4 through 6) have likewise been in the framework of continuum-level descriptions.

In the current Chapter, we will explore diffusion at the atomic scale and show how it connects to the macroscopic description. Whereas the conventional macroscopic interpretation is based on continuum equations for the fluxes of particles and the evolution of their concentrations, in this Chapter diffusion will be described microscopically by individual particle displacements. At the atomic scale, diffusion is modeled as a series of discrete thermally activated displacements, known somewhat colloquially as jumps. The rate of diffusion depends on the vibrational frequency of the atoms, the required excitation energy for the thermally activated jumps, and the temperature of the system. While individual atomic motions are largely random, the net effect of the jumping of a large ensemble of atoms is the evolution of the macroscopic concentration profile, i.e., the irreversible process of diffusion.

Like diffusion, Brownian motion is governed by the random jumping of particles at the atomic level. The key difference between diffusion and Brownian motion is that the ensemble of jumping particles does not have a net direction of migration with Brownian motion. Instead, the particles move in all directions with equal probability. This is because Brownian motion does not have a thermodynamic driving force, i.e., Brownian motion is not governed by affinities such as chemical potential gradients. In contrast, with diffusion there is a net direction of particle motion, as governed by the response to its affinity. Despite these differences, in both scenarios (Brownian motion and diffusion) the individual particle movements are random.
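The statistics of such unbiased random jumps can be illustrated with a short Monte Carlo sketch (hypothetical ensemble size and jump count): the mean displacement of the ensemble stays near zero, reflecting the absence of any driving force, while the mean-square displacement grows in proportion to the number of jumps.

```python
import random

random.seed(0)  # fixed seed so the illustration is reproducible

def random_walk_msd(n_walkers, n_jumps, a=1.0):
    """Unbiased 1D random walk: each particle makes n_jumps jumps of
    length a, left or right with equal probability (no driving force).
    Returns the ensemble mean displacement and mean-square displacement."""
    total, total_sq = 0.0, 0.0
    for _ in range(n_walkers):
        x = 0.0
        for _ in range(n_jumps):
            x += a if random.random() < 0.5 else -a
        total += x
        total_sq += x * x
    return total / n_walkers, total_sq / n_walkers

mean_disp, msd = random_walk_msd(n_walkers=5000, n_jumps=100)
# mean_disp stays near 0 (no net migration); msd is close to n_jumps * a**2
```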

7.2 Thermally Activated Atomic Jumping

The most fundamental process governing diffusion at the atomic level is the thermally activated jumping of particles between neighboring positions of local energy minima in the structure. Each particle spends most of its time within a local energy minimum, which corresponds to a locally stable configuration of atoms. The jump event itself involves passing over a highly unstable transition barrier. As a result, the duration of the jump is typically very short compared to the time that a particle spends in the more stable local energy minima. The ability of the particle to make a successful jump to a neighboring minimum site depends on the probability that the particle will undergo a thermal fluctuation large enough to overcome the activation barrier for the transition.

Atomic Models for Diffusion
Figure 7.1 shows an example of the thermally activated jumping of an interstitial atom in a crystalline matrix. Each interstitial site corresponds to a local energy minimum, i.e., a locally stable position for the particle. This local energy minimum is also known as a well state. In the case of interstitial diffusion, the activation barrier corresponds to pushing aside the neighboring particles of the crystalline matrix to allow the interstitial particle to move into the adjacent interstitial site. The particle spends most of its time vibrating in the well states. Each of these vibrations can be considered as an attempt to overcome an activation barrier in order to jump to the neighboring well state, i.e., the next interstitial site. Most of these vibrations are unsuccessful in their attempt to overcome the activation barrier. However, for any positive temperature, there is always some nonzero probability of a particle having enough energy from a thermal fluctuation to overcome the activation barrier and jump into the neighboring site.

An example trajectory of a diffusing atom is shown in Figure 7.2. As can be seen from the trajectory, most of the time is spent vibrating in the well sites. After many vibrations, the particle can randomly overcome the activation barrier. The transition through the activated state happens rapidly since it is energetically unstable. Once the particle arrives in the neighboring well site, the process repeats itself, with a sequence of vibrations within a well followed by an eventual activation and transition to the neighboring site.

Figure 7.1 Interstitial diffusion of a small dopant atom in a crystalline matrix. The interstitial sites correspond to the local energy minima, i.e., the well states. In order to jump to the neighboring interstitial site, the diffusing particle must overcome the activation barrier involved with squeezing between the neighboring atoms of the crystalline matrix.

Figure 7.2 Trajectory of a diffusing atom. (Calculated by Rebecca Welch and Collin Wilkinson (Penn State)).

This process of thermally activated jumping is an example of a Markov process, named after the Russian mathematician Andrey Markov (1856–1922). A Markov process is a random process in which the next state of a system can be determined solely from its current state and not from any of the previously visited states. As such, a Markov process is indicative of a memoryless system. In our case, the next position to which a particle jumps depends only on its current position (i.e., its current well state) and not on any of the previous well states visited by the particle. The loss of memory of the previous states is due to the many vibrations experienced by the particle in each subsequent well state. The large number of vibrations effectively randomizes the trajectory of the next activation event, and this randomization causes loss of memory of the preceding well states. Hence, the atomic jumping process, which underpins both diffusion and Brownian motion, satisfies the conditions of a Markov process.
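The memoryless property can be checked with a toy simulation (a sketch with arbitrary, illustrative parameters; nothing here comes from the text): if each jump direction is drawn independently, the statistics of the next jump are the same whether or not we condition on the previous jump.

```python
import random

random.seed(42)

# Simulate a long sequence of jumps on a 1D lattice, choosing each jump
# direction at random. This is a Markov chain: the next step depends only
# on the current state, not on the path taken to reach it.
steps = [random.choice((-1, +1)) for _ in range(200_000)]

# Conditional frequency of a rightward jump given the previous jump was
# rightward; for a memoryless process this matches the unconditional frequency.
after_right = [b for a, b in zip(steps, steps[1:]) if a == +1]
p_uncond = steps.count(+1) / len(steps)
p_cond = after_right.count(+1) / len(after_right)
print(p_uncond, p_cond)  # both ~0.5
```

Within statistical noise, the conditional and unconditional jump probabilities agree, which is precisely the loss of memory described above.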

7.3 Square Well Potential

Let us consider the simplest model of particles jumping in a classical (i.e., not quantum mechanical) square well potential in one dimension, $x$. As depicted in Figure 7.3, the square well potential consists of two energy states: lower-energy well states of potential energy $E_{\mathrm{well}}$ and higher-energy activated states of energy $E_A$. The length of each well state is $L_{\mathrm{well}}$, and the length of the activated state is $L_A$. In this simple model, we consider that the system is composed of identical noninteracting particles. Hence, the rate at which a particle crosses an activation barrier can be calculated as a single-particle event.

Figure 7.3 Classical square well potential. The well states have a spatial dimension of Lwell and an energy equal to Ewell. Each pair of wells is connected by an activated state of length LA and energy EA.

The rate of a particle completing a successful jump ($\Gamma$) over the activation barrier and into the neighboring well state is equal to the inverse of the average time required to make the jump. There are two contributions to this jump time: (a) the time that it takes for a particle to become activated and (b) the migration time across the activated state. Let us begin with the latter, i.e., the average time that it takes for a particle to cross the activation barrier after it has already been excited into the activated state itself. This average crossing time, $\tau_{\mathrm{cross}}$, is simply the distance that the particle must travel to pass through the activated state ($L_A$) divided by the mean velocity of the particle, $\langle v \rangle$:

$$\tau_{\mathrm{cross}} = \frac{L_A}{\langle v \rangle} = \frac{L_A m}{\langle p \rangle}. \tag{7.1}$$

Here we have equivalently expressed the average crossing time in terms of the mass, $m$, and average momentum, $\langle p \rangle$, of the particle. If we consider the particle traveling across the activated state in one direction, e.g., the positive $x$ direction, then the average momentum in this direction is:

$$\langle p \rangle = \frac{\displaystyle\int_0^{\infty} p \exp\left(-\frac{p^2}{2mkT}\right) dp}{\displaystyle\int_{-\infty}^{\infty} \exp\left(-\frac{p^2}{2mkT}\right) dp} = \sqrt{\frac{mkT}{2\pi}}. \tag{7.2}$$
Here we have used Boltzmann probability factors with the kinetic energy ($p^2/2m$) of the particle, where $k$ is Boltzmann's constant and $T$ is absolute temperature. The integrals are solved using the standard substitution of variables for a Gaussian function, as in Chapter 4. Inserting the average particle momentum from Eq. (7.2) into Eq. (7.1), we obtain the following result for the average crossing time through the activated state:

$$\tau_{\mathrm{cross}} = L_A \sqrt{\frac{2\pi m}{kT}}. \tag{7.3}$$

Let us now consider a system of $N$ noninteracting particles in the square well potential of Figure 7.3. The total rate at which the particles cross the activation barrier, $\dot{N}_{\mathrm{cross}}$, is equal to the number of particles located in an activated state divided by the average time that it takes for a particle to cross the activation barrier:

$$\dot{N}_{\mathrm{cross}} = \frac{\text{number of particles in an activated state}}{\tau_{\mathrm{cross}}} = N\Gamma, \tag{7.4}$$

where $\Gamma$ is the crossing rate of a single particle, which is given by

$$\Gamma = \frac{1}{\tau_{\mathrm{cross}}} \cdot \frac{\tau_A}{\tau_{\mathrm{well}} + \tau_A}. \tag{7.5}$$

Here, $\tau_A/(\tau_{\mathrm{well}} + \tau_A)$ is the ratio of time that a particle spends in an activated state versus a well state, which is equal to the probability of finding a particle in the activated state. Since $\tau_A \ll \tau_{\mathrm{well}}$, this ratio can be approximated by $\tau_A/(\tau_{\mathrm{well}} + \tau_A) \approx \tau_A/\tau_{\mathrm{well}}$, which can be calculated from Boltzmann statistics as:

$$\frac{\tau_A}{\tau_{\mathrm{well}}} = \frac{\sum_{A} e^{-E_A/kT}}{\sum_{\mathrm{well}} e^{-E_{\mathrm{well}}/kT}}. \tag{7.6}$$

Here the numerator is a summation over all activated states and the denominator is a summation over all well states. Since the square well potential in Figure 7.3 represents a continuum of states, the discrete summations can be replaced by integrals:

$$\frac{\tau_A}{\tau_{\mathrm{well}}} = \frac{\displaystyle\int_{L_A} e^{-E(x)/kT}\,dx}{\displaystyle\int_{L_{\mathrm{well}}} e^{-E(x)/kT}\,dx}, \tag{7.7}$$

where $E(x) = E_A$ for positions within the activated state and $E(x) = E_{\mathrm{well}}$ for positions within the well state. After evaluating the integrals, we obtain:

$$\frac{\tau_A}{\tau_{\mathrm{well}}} = \frac{L_A}{L_{\mathrm{well}}} \exp\left(-\frac{E_A - E_{\mathrm{well}}}{kT}\right). \tag{7.8}$$

Hence, there are two contributions to $\tau_A/\tau_{\mathrm{well}}$, viz., the entropic contribution governed by the number of activated states versus the number of well states (i.e., $L_A/L_{\mathrm{well}}$) and the Boltzmann probability factor accounting for the energetic preference of the well state versus the activated state, $\exp(-(E_A - E_{\mathrm{well}})/kT)$. Inserting Eqs. (7.3) and (7.8) into Eq. (7.5), the single-particle jump rate is therefore:

$$\Gamma = \frac{1}{L_{\mathrm{well}}} \sqrt{\frac{kT}{2\pi m}} \exp\left(-\frac{E_A - E_{\mathrm{well}}}{kT}\right). \tag{7.9}$$

Since the jump rate, $\Gamma$, has units of inverse time, the pre-exponential factor in Eq. (7.9) also has units of inverse time. Physically, the pre-exponential factor corresponds to the vibrational frequency ($\nu$) of a particle within the square well, i.e.,

$$\nu = \frac{1}{L_{\mathrm{well}}} \sqrt{\frac{kT}{2\pi m}}. \tag{7.10}$$

Hence, within the context of the square well potential model, the vibrational frequency increases with temperature and decreases with both particle mass and the width of the potential energy well. Each vibration can be considered as an attempt of the particle to overcome an activation barrier. The vibrational frequency gives the number of attempts per unit time for the particle to escape its current energy well. The probability of a single attempt achieving a successful jump is given by the Boltzmann probability factor (i.e., the exponential term) in Eq. (7.9). Hence, the probability of achieving a successful jump increases exponentially at higher temperatures or with lower activation barriers, i.e., lower $E_A - E_{\mathrm{well}}$.
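Equations (7.9) and (7.10) are straightforward to evaluate numerically. The following sketch uses illustrative, hypothetical parameter values (a carbon-mass particle, a 0.1 nm well, a 1 eV barrier); none of these numbers come from the text:

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

def attempt_frequency(L_well, m, T):
    """Vibrational attempt frequency of the square well, Eq. (7.10)."""
    return math.sqrt(k_B * T / (2.0 * math.pi * m)) / L_well

def jump_rate(L_well, m, T, E_A, E_well):
    """Single-particle jump rate of the square well, Eq. (7.9)."""
    return attempt_frequency(L_well, m, T) * math.exp(-(E_A - E_well) / (k_B * T))

m = 12.0 * 1.66054e-27  # particle mass, kg (roughly a carbon atom)
L_well = 1.0e-10        # well width, m
E_barrier = 1.602e-19   # E_A - E_well = 1 eV, in J

nu = attempt_frequency(L_well, m, 1000.0)
gamma = jump_rate(L_well, m, 1000.0, E_barrier, 0.0)
print(f"nu ~ {nu:.2e} Hz, Gamma ~ {gamma:.2e} Hz at 1000 K")
```

With these assumed values the attempt frequency comes out in the THz range, while the Boltzmann factor suppresses the successful jump rate by several orders of magnitude.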

7.4 Parabolic Well Potential

We have already gained several important insights from the square well potential. However, a more physically realistic shape for the potential energy well would incorporate some degree of curvature. Let us now consider the case of a harmonic (i.e., parabolic) potential energy well, as depicted in Figure 7.4. Here, the well shape is approximated using:

$$E_{\mathrm{well}}(x) = E_{\min} + \frac{1}{2} b (x - x_{\min})^2, \tag{7.11}$$

where $E_{\min}$ is the minimum potential energy of the well, $x_{\min}$ is the position of the minimum (i.e., the center of the well), and the curvature of the well is governed by the positive $b$ parameter. As with the square well potential, the activated state is considered to have an energy of $E_A$ and a width of $L_A$. Hence, the time for a particle to cross the activated state is still given by Eq. (7.3). With the parabolic model, the expected fraction of particles in the activated state can be evaluated by:

$$\frac{\tau_A}{\tau_{\mathrm{well}} + \tau_A} \approx \frac{\tau_A}{\tau_{\mathrm{well}}} = \frac{L_A \exp\left(-\dfrac{E_A}{kT}\right)}{\displaystyle\int_{-L_{\mathrm{well}}/2}^{L_{\mathrm{well}}/2} \exp\left(-\frac{E_{\min} + b(x - x_{\min})^2/2}{kT}\right) dx}. \tag{7.12}$$

Given that the higher-energy states become exponentially less probable, Eq. (7.12) can be approximated by [3]

Figure 7.4 Schematic of the parabolic well potential.

$$\frac{\tau_A}{\tau_{\mathrm{well}}} \approx \frac{L_A \exp\left(-\dfrac{E_A}{kT}\right)}{\exp\left(-\dfrac{E_{\min}}{kT}\right) \displaystyle\int_{-\infty}^{\infty} \exp\left(-\frac{b(x - x_{\min})^2}{2kT}\right) dx}. \tag{7.13}$$

Combining the exponential terms and integrating the Gaussian function, we have:

$$\frac{\tau_A}{\tau_{\mathrm{well}}} = L_A \sqrt{\frac{b}{2\pi kT}} \exp\left(-\frac{E_A - E_{\min}}{kT}\right). \tag{7.14}$$

Therefore, the overall single-particle jump frequency is:

$$\Gamma = \frac{1}{\tau_{\mathrm{cross}}} \cdot \frac{\tau_A}{\tau_{\mathrm{well}}} = \frac{1}{2\pi} \sqrt{\frac{b}{m}} \exp\left(-\frac{E_A - E_{\min}}{kT}\right). \tag{7.15}$$

Note that the Boltzmann probability factor is identical between the parabolic well potential in Eq. (7.15) and our previous derivation for the square well potential in Eq. (7.9). The difference between the two models is in the pre-exponential factor, i.e., the vibrational attempt frequency. Owing to the harmonic shape of the potential energy well, the temperature dependence of the vibrational frequency has been eliminated in Eq. (7.15). Instead, the vibrational frequency depends only on the curvature of the potential energy well and the mass of the particle. A potential well with a greater curvature leads to a higher vibrational frequency. As with the case of the square well potential, the vibrational frequency decreases with the square root of the mass of the particle.
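A minimal numerical sketch of Eq. (7.15), with hypothetical parameter values, makes this explicit: the pre-exponential factor $(1/2\pi)\sqrt{b/m}$ is the classical harmonic-oscillator frequency for a mass $m$ on a spring of stiffness $b$, independent of temperature, while temperature enters only through the Boltzmann factor.

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

def nu_parabolic(b, m):
    """Pre-exponential factor in Eq. (7.15): (1/2pi) * sqrt(b/m)."""
    return math.sqrt(b / m) / (2.0 * math.pi)

def gamma_parabolic(b, m, T, dE):
    """Jump rate of Eq. (7.15), with dE = E_A - E_min."""
    return nu_parabolic(b, m) * math.exp(-dE / (k_B * T))

# Hypothetical parameters: curvature b chosen to give nu = 10 THz for a
# carbon-mass particle; 0.8 eV barrier.
m = 12.0 * 1.66054e-27                # kg
b = m * (2.0 * math.pi * 1.0e13)**2   # well curvature, J/m^2
dE = 0.8 * 1.602e-19                  # J

print(nu_parabolic(b, m))                 # ~1e13 Hz, at any temperature
print(gamma_parabolic(b, m, 500.0, dE))
print(gamma_parabolic(b, m, 1000.0, dE))  # only the exponential changes
```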

7.5 Particle Escape Probability

The results from the square well potential in Eq. (7.9) and the parabolic well potential in Eq. (7.15) both point to the same underlying form of the atomic jump frequency:

$$\Gamma = \nu \exp\left(-\frac{E^*}{kT}\right), \tag{7.16}$$

where $\nu$ is the vibrational attempt frequency and $E^*$ is the activation barrier that must be overcome to make a jump between neighboring wells. While the exact expression for $\nu$ depends on the detailed shape of the potential
energy well, Eq. (7.16) is the most general form for the jump frequency of a particle. Regardless of the shape of the energy well, $\nu$ gives the number of attempts per unit time, and the probability of success on a specific attempt is given by the Boltzmann factor, $\exp(-E^*/kT)$. Eq. (7.16) is, therefore, one of the most fundamental equations in the microscopic description of diffusion.

If the system is observed for a duration of time, $t$, then the particle will make a total of $\nu t$ vibrations during that time period. In order for the particle to escape the potential energy well, at least one of those $\nu t$ vibrations must lead to a successful jump. Since the probability of a single vibration leading to a successful jump over the activation barrier is given by $\exp(-E^*/kT)$, the probability of that vibration being unsuccessful is $1 - \exp(-E^*/kT)$. It follows that the probability of all $\nu t$ vibrations being unsuccessful is $\left[1 - \exp(-E^*/kT)\right]^{\nu t}$. Therefore, the probability of at least one of the $\nu t$ vibrations being successful is:

$$p_{\mathrm{escape}}(t) = 1 - \left[1 - \exp\left(-\frac{E^*}{kT}\right)\right]^{\nu t}. \tag{7.17}$$

This is equal to the probability of the particle escaping the potential energy well during time $t$. This escape probability satisfies:

$$\lim_{t \to 0} p_{\mathrm{escape}}(t) = 0 \tag{7.18}$$

at any temperature and

$$\lim_{t \to \infty} p_{\mathrm{escape}}(t) = 1 \tag{7.19}$$

for all positive temperatures.
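Equation (7.17) and the limits (7.18) and (7.19) are easy to probe numerically. The sketch below assumes illustrative values (1 THz attempt frequency, 0.5 eV barrier, 300 K) that are not taken from the text:

```python
import math

k_B = 8.617333e-5  # Boltzmann constant in eV/K

def p_escape(t, nu, E_star, T):
    """Eq. (7.17): probability of at least one successful jump in time t."""
    p_success = math.exp(-E_star / (k_B * T))  # per-attempt success probability
    return 1.0 - (1.0 - p_success) ** (nu * t)

nu, E_star, T = 1.0e12, 0.5, 300.0  # hypothetical: 1 THz attempts, 0.5 eV barrier
for t in (1e-9, 1e-6, 1e-3, 1.0):
    print(f"t = {t:.0e} s -> p_escape = {p_escape(t, nu, E_star, T):.6f}")
```

As required by Eqs. (7.18) and (7.19), the computed probability vanishes for short observation times and approaches unity for long ones.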

7.6 Mean Squared Displacement of Particles

Diffusion is the net result of a large ensemble of particles executing a sequence of thermally activated jumps. Each successful jump results in a change in the position of a particle, and the ensemble of these jump-induced displacements yields diffusion at the macroscale. Let $\vec{r}_i$ denote the displacement vector associated with the $i$th jump of an individual particle. If a particle jumps with average frequency $\Gamma$ over a total time $\tau$, then it will undergo a total of $N_\tau = \Gamma\tau$ jumps. The total displacement of the particle after all $N_\tau$ jumps, $\vec{R}(N_\tau)$, is the sum of the displacements from each of the individual jumps:

$$\vec{R}(N_\tau) = \sum_{i=1}^{N_\tau} \vec{r}_i. \tag{7.20}$$

A key quantity is the mean squared displacement of the particle, which is a scalar quantity that characterizes the spread or diffuseness of the particle distribution about its average. The mean squared displacement is given by the expectation value of the squared displacement:

$$\langle R^2(N_\tau) \rangle = \left\langle \vec{R}(N_\tau) \cdot \vec{R}(N_\tau) \right\rangle. \tag{7.21}$$

The squared displacement itself is the dot product of Eq. (7.20) with itself, i.e.,

$$R^2(N_\tau) = \vec{R}(N_\tau) \cdot \vec{R}(N_\tau) =
\begin{array}{l}
\vec{r}_1 \cdot \vec{r}_1 + \vec{r}_1 \cdot \vec{r}_2 + \vec{r}_1 \cdot \vec{r}_3 + \cdots + \vec{r}_1 \cdot \vec{r}_{N_\tau} \\
+\, \vec{r}_2 \cdot \vec{r}_1 + \vec{r}_2 \cdot \vec{r}_2 + \vec{r}_2 \cdot \vec{r}_3 + \cdots + \vec{r}_2 \cdot \vec{r}_{N_\tau} \\
\quad \vdots \\
+\, \vec{r}_{N_\tau} \cdot \vec{r}_1 + \vec{r}_{N_\tau} \cdot \vec{r}_2 + \vec{r}_{N_\tau} \cdot \vec{r}_3 + \cdots + \vec{r}_{N_\tau} \cdot \vec{r}_{N_\tau}.
\end{array} \tag{7.22}$$

The various terms in Eq. (7.22) can be grouped into diagonal and off-diagonal terms as

$$R^2(N_\tau) = \sum_{i=1}^{N_\tau} |\vec{r}_i|^2 + 2 \sum_{j=1}^{N_\tau - 1} \sum_{i=1}^{N_\tau - j} \vec{r}_i \cdot \vec{r}_{i+j}, \tag{7.23}$$

or, equivalently,

$$R^2(N_\tau) = \sum_{i=1}^{N_\tau} |\vec{r}_i|^2 + 2 \sum_{j=1}^{N_\tau - 1} \sum_{i=1}^{N_\tau - j} |\vec{r}_i| |\vec{r}_{i+j}| \cos\theta_{i,i+j}, \tag{7.24}$$

where $\theta_{i,i+j}$ is the angle between the $i$th and $(i+j)$th displacements. The diagonal terms can be used to define the mean squared jump distance as:

$$\langle r^2 \rangle = \frac{1}{N_\tau} \sum_{i=1}^{N_\tau} |\vec{r}_i|^2, \tag{7.25}$$

such that the mean squared displacement becomes

$$\langle R^2(N_\tau) \rangle = N_\tau \langle r^2 \rangle + 2 \left\langle \sum_{j=1}^{N_\tau - 1} \sum_{i=1}^{N_\tau - j} |\vec{r}_i| |\vec{r}_{i+j}| \cos\theta_{i,i+j} \right\rangle. \tag{7.26}$$

As a measure of the diffuseness of the particle distribution, the mean squared displacement is key in making the connection between the microscopic and macroscopic descriptions of diffusion. This connection is made explicit via the Einstein diffusion equation.
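For a random walk, the jump directions are uncorrelated, so the cosine cross terms in Eq. (7.26) average to zero and the mean squared displacement reduces to $N_\tau \langle r^2 \rangle$. A small ensemble simulation (a sketch; the ensemble size and unit jump length are arbitrary choices, not from the text) illustrates this:

```python
import math
import random

random.seed(1)

def random_unit_vector():
    """Uniformly random 3D direction (isotropic jump of unit length)."""
    z = random.uniform(-1.0, 1.0)
    phi = random.uniform(0.0, 2.0 * math.pi)
    s = math.sqrt(1.0 - z * z)
    return (s * math.cos(phi), s * math.sin(phi), z)

def mean_squared_displacement(n_jumps, n_particles):
    """Ensemble-averaged <R^2> for uncorrelated unit-length jumps."""
    total = 0.0
    for _ in range(n_particles):
        x = y = z = 0.0
        for _ in range(n_jumps):
            dx, dy, dz = random_unit_vector()
            x += dx; y += dy; z += dz
        total += x * x + y * y + z * z
    return total / n_particles

n_jumps = 100
msd = mean_squared_displacement(n_jumps, 4000)
print(msd)  # close to n_jumps * <r^2> = 100 for unit jumps
```

The simulated $\langle R^2 \rangle$ comes out close to $N_\tau$, confirming that the off-diagonal terms make no net contribution when the jumps are uncorrelated.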

7.7 Einstein Diffusion Equation

The macroscopic diffusion coefficient, $D$, can be related to the mean squared displacement of the individual particles using the solution of Fick's second law for a point source diffusing radially in three dimensions ($x$, $y$, $z$). The three-dimensional form of Fick's second law with constant diffusivity is:

$$\frac{\partial C}{\partial t} = D \left( \frac{\partial^2 C}{\partial x^2} + \frac{\partial^2 C}{\partial y^2} + \frac{\partial^2 C}{\partial z^2} \right). \tag{7.27}$$

The initial condition for the point source is:

$$C(x, y, z, t=0) = M\,\delta(x)\delta(y)\delta(z), \tag{7.28}$$

where $M$ is the total amount of the diffusing species and $\delta$ is the Dirac delta function. Following Section 4.2 and Crank [4], the solution by inspection is the Gaussian function:

$$C(x, y, z, t) = \frac{M}{(4\pi Dt)^{3/2}} \exp\left(-\frac{x^2 + y^2 + z^2}{4Dt}\right). \tag{7.29}$$

Defining

$$r^2 = x^2 + y^2 + z^2, \tag{7.30}$$

Eq. (7.29) simplifies to

$$C(r, t) = \frac{M}{(4\pi Dt)^{3/2}} \exp\left(-\frac{r^2}{4Dt}\right). \tag{7.31}$$

The mean squared displacement in radial coordinates is given as the normalized second moment of the concentration distribution:

$$\langle R^2(t) \rangle = \frac{\displaystyle\int_0^\infty r^2\, C(r,t)\, 4\pi r^2\, dr}{\displaystyle\int_0^\infty C(r,t)\, 4\pi r^2\, dr} = 6Dt, \tag{7.32}$$

where the solution of $6Dt$ was first derived by Einstein in 1905 [1]. The relation,

$$\langle R^2(t) \rangle = 6Dt, \tag{7.33}$$

is known as the Einstein diffusion equation in three dimensions. For diffusion from a point source in two dimensions, the solution becomes

$$\langle R^2(t) \rangle = 4Dt, \tag{7.34}$$

and in one dimension, the solution is

$$\langle R^2(t) \rangle = 2Dt. \tag{7.35}$$

Hence, the mean squared displacement of the particles is equal to the product of twice the dimensionality of the system with Dt. The Einstein equation was revolutionary in that it provided, for the first time, the connection between the macroscopic diffusivity, D, and the underlying atomic-scale origin of the diffusion process, viz., the mean squared displacement of the jumping particles. The key mathematical maneuver for making this connection is taking the second moment of the concentration distribution, C(r,t), from the radially diffusing point source.
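The $6Dt$ result in Eq. (7.32) can be checked by direct numerical integration of the point-source profile. The sketch below (with an arbitrary, hypothetical $D$ and $t$) evaluates the normalized second moment with a simple midpoint rule:

```python
import math

def C(r, D, t, M=1.0):
    """Point-source solution of Fick's second law, Eq. (7.31)."""
    return M / (4.0 * math.pi * D * t) ** 1.5 * math.exp(-r * r / (4.0 * D * t))

def msd_from_profile(D, t, n=50_000):
    """Eq. (7.32): normalized second moment of C(r, t) over 4*pi*r^2 dr."""
    r_max = 12.0 * math.sqrt(D * t)  # integrate far into the Gaussian tail
    dr = r_max / n
    num = den = 0.0
    for i in range(n):
        r = (i + 0.5) * dr           # midpoint rule
        w = C(r, D, t) * 4.0 * math.pi * r * r * dr
        num += r * r * w
        den += w
    return num / den

D, t = 1.0e-9, 100.0  # hypothetical diffusivity (m^2/s) and time (s)
print(msd_from_profile(D, t), 6.0 * D * t)  # the two agree closely
```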

7.8 Moments of a Function

Given the use of the second moment of the concentration function to connect the macroscopic and microscopic interpretations of diffusion, here we will make a brief but important aside to introduce the reader to the general concept of the moments of a function. The moments of any function tell us properties about its distribution. Moments are essential to the fields of mathematical and applied statistics and also give us a powerful tool for use in statistical physics. One can calculate any $n$th order moment of a function, where $n$ is a positive integer. There are two basic types of moments: raw moments and central moments. The $n$th order raw moment of a properly normalized probability density function, $f(x)$, is defined as:

$$\mu_n' = \int x^n f(x)\,dx. \tag{7.36}$$

For example, the first-order raw moment of a function is

$$\mu_1' = \langle f(x) \rangle = \int x f(x)\,dx, \tag{7.37}$$

which is equal to the mean or expectation value of the distribution. The higher-order moments are typically defined as central moments about the mean value. The $n$th order central moment of a function is defined as:

$$\mu_n = \int \left(x - \mu_1'\right)^n f(x)\,dx. \tag{7.38}$$

By definition, the first-order central moment of a function is always zero. Of greater interest is the second-order central moment, given by

$$\mu_2 = \sigma^2 = \left\langle \left(f(x) - \langle f(x) \rangle\right)^2 \right\rangle = \int \left(x - \mu_1'\right)^2 f(x)\,dx, \tag{7.39}$$

also known as the variance of the function. From the variance, we can also calculate the standard deviation of the function, which is simply the square root of its variance, i.e., the square root of the second-order central moment. Mathematically, the standard deviation of a function is given by

$$\sigma = \sqrt{\mu_2} = \sqrt{\left\langle \left(f(x) - \langle f(x) \rangle\right)^2 \right\rangle} = \sqrt{\int \left(x - \mu_1'\right)^2 f(x)\,dx}. \tag{7.40}$$

Table 7.1 lists these and several higher-order central moments.

Table 7.1 Table of moments.

Order of the Moment | Raw Moment | Central Moment
--------------------|------------|----------------
1                   | Mean       | 0
2                   | -          | Variance
3                   | -          | Skewness
4                   | -          | Kurtosis
5                   | -          | Hyperskewness
6                   | -          | Hypertailedness

Many important properties of a function can be determined from its set of moments. As already discussed, the first-order raw moment of a function gives its mean, i.e., the expectation value of the function. The second-order central moment gives the variance of the function, which is a measure of the spread of the function from its mean value. Continuing to the higher-order moments, the third-order central moment of a function is known as its skewness, which is a measure of the asymmetry of the function. The fourth-order central moment is the kurtosis, which is a measure of the tailedness of the function, i.e., how much of the function is distributed in its tails. A function with a high kurtosis is said to have a fat tail, meaning that a comparatively large amount of the distribution is located far away from its mean. Additional higher-order moments can also be calculated. For example, the fifth-order central moment is the hyperskewness of the function, and the sixth-order central moment is its hypertailedness. While such calculations can be performed for any arbitrary $n$ to provide additional information about the function, typically such calculations are cut off after the second, third, or fourth order.

In addition to the mathematical and physical significance of moments, the concept of moments has been the inspiration for beautiful artwork, as in the 2018 piece “Moments in Love: Mean, Variance, Skew, Kurtosis” (Figure 7.5) by artist/scientist Jane Cook.
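These definitions translate directly into quadrature. The following sketch computes the mean and the second through fourth central moments of a density with a simple midpoint rule, using the uniform density on [0, 1] (whose moments are known in closed form) as a check; the implementation details are illustrative, not from the text:

```python
def moments(f, a, b, n=20_000):
    """Mean (first raw moment) and 2nd-4th central moments of a density f
    on [a, b], computed with the midpoint rule per Eqs. (7.36)-(7.38)."""
    dx = (b - a) / n
    xs = [a + (i + 0.5) * dx for i in range(n)]
    norm = sum(f(x) for x in xs) * dx
    mean = sum(x * f(x) for x in xs) * dx / norm
    def central(p):
        return sum((x - mean) ** p * f(x) for x in xs) * dx / norm
    return mean, central(2), central(3), central(4)

# Uniform density on [0, 1]: mean 1/2, variance 1/12, zero skewness by
# symmetry, and fourth central moment 1/80.
mean, var, skew, kurt = moments(lambda x: 1.0, 0.0, 1.0)
print(mean, var, skew, kurt)
```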

7.9 Diffusion and Random Walks

Another method for connecting diffusion at the macroscopic and microscopic levels is by considering the statistics of a random walk of particles. A random walk occurs when a particle undergoes a sequence of jumps, where each jump is independent of the preceding states. In other words, a random walk is a Markov process, as described in Section 7.2.

Let us consider a one-dimensional random walk of a particle, as depicted in Figure 7.6. Suppose that the particle starts at position 0 and can jump either to the left or to the right. The probability of jumping to the left is $p_L$, and the probability of jumping to the right is $p_R$. Let $N_L$ denote the total number of displacements in the leftward (i.e., negative) direction, and let $N_R$ denote the total number of displacements in the rightward (i.e., positive) direction. If the particle undergoes a total of $N_\tau$ jumps, what is the probability of the particle occupying position $n$ at the end of this series of jumps? Clearly, $N_R$ and $N_L$ must satisfy:

$$N_R - N_L = n, \qquad N_R + N_L = N_\tau. \tag{7.41}$$


Figure 7.5 “Moments in Love: Mean, Variance, Skew, Kurtosis,” Jane Cook, 2018, Corning, New York, Acrylic and tissue paper on canvas, 61 cm  45 cm. The title of this painted collage is a wordplay on the title of the song “Moments in Love” by the 1980s new wave band Art of Noise. The canvas is painted red and overlaid with crinkled red tissue, over which four roughly rectangular forms are positioned essentially squared with each other and the canvas. The piece is the artist’s nerdy commentary on the utility and futility of analysis in matters of the heart. One might find joy amongst the scattered, undulating hills and valleys of the underlying “function” of love without knowing more details of that function; or, one can deploy a knowledge of statistics to extract the moments of the distribution of highs and lows. Neither method of experience is superior e they each provide unique insights.

Figure 7.6 One-dimensional random walk of a particle.


The total number of ways of occupying site $n$ after $N_\tau$ jumps follows a binomial distribution:

$$\Omega(n, N_\tau) = \frac{N_\tau!}{N_R!\,N_L!} = \frac{N_\tau!}{[(N_\tau + n)/2]!\,[(N_\tau - n)/2]!}. \tag{7.42}$$

Therefore, the probability of occupying site $n$ after $N_\tau$ jumps is:

$$p(n, N_\tau) = \Omega(n, N_\tau)\, p_R^{N_R} p_L^{N_L} = \frac{N_\tau!}{[(N_\tau + n)/2]!\,[(N_\tau - n)/2]!}\, p_R^{N_R} p_L^{N_L}. \tag{7.43}$$

If there is an equal probability of making jumps in the leftward and rightward directions, then $p_L = p_R = 1/2$, such that:

$$p(n, N_\tau) = \frac{N_\tau!}{[(N_\tau + n)/2]!\,[(N_\tau - n)/2]!} \left(\frac{1}{2}\right)^{N_\tau}. \tag{7.44}$$

Note that this problem is mathematically identical to that of a sequence of coin flips, where $N_\tau$ is the total number of coin flips, $N_L$ is the number of resulting tails, and $N_R$ is the number of resulting heads. For a fair coin, the probabilities of flipping tails and heads are identical and equal to $p_L = p_R = 1/2$. The resulting $n$ can be interpreted as the total number of heads in excess of the total number of tails.

In the limit of $n \ll N_\tau$, the distribution in Eq. (7.44) can be approximated as a Gaussian distribution [3]:

$$p(n, N_\tau) \propto \exp\left(-\frac{n^2}{2N_\tau}\right). \tag{7.45}$$

Eq. (7.45) has exactly the same functional form as the solution of Fick's second law for diffusion from a point source. Comparing Eq. (7.45) with Eq. (7.31), we see that $n^2$ is a measure of $r^2$, the square of the radial diffusion distance. Moreover, $N_\tau$, the total number of jumps, scales with the product of the diffusion coefficient, $D$, and the total diffusion time, $t$.
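The approach of Eq. (7.44) to its Gaussian limit, Eq. (7.45), can be verified with exact binomial probabilities. In the sketch below, the normalization prefactor $\sqrt{2/\pi N_\tau}$ is an assumption introduced here for the comparison (the text gives Eq. (7.45) only as a proportionality), chosen so that the Gaussian sums to unity over sites of spacing 2:

```python
import math

def p_exact(n, N):
    """Eq. (7.44): probability of net displacement n after N unbiased jumps
    (n must have the same parity as N)."""
    return math.comb(N, (N + n) // 2) * 0.5 ** N

def p_gauss(n, N):
    """Gaussian form of Eq. (7.45), normalized over reachable sites
    (spacing 2); the prefactor is an assumption made for this comparison."""
    return math.sqrt(2.0 / (math.pi * N)) * math.exp(-n * n / (2.0 * N))

N = 1000
for n in (0, 10, 40):
    print(n, p_exact(n, N), p_gauss(n, N))

# The exact distribution sums to unity over all reachable (even) sites.
total = sum(p_exact(n, N) for n in range(-N, N + 1, 2))
print(total)
```

For $n \ll N_\tau$ the exact and Gaussian values agree to better than one percent, illustrating why the continuum (Fickian) description emerges from discrete jump statistics.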

7.10 Summary

Macroscopic diffusion is the net result of the microscopic jumping of an ensemble of particles. This connection was first rigorously established by Einstein in 1905 by considering the mean squared displacement of hopping
particles, which is related to the continuum-level solution of Fick’s second law for diffusion from a point source. The Einstein diffusion equation therefore provides a direct means for calculating the mean squared displacement of particles from the product of the diffusion coefficient and total diffusion time, or vice versa. The connection between the macroscopic and microscopic descriptions of diffusion can also be established by considering the binomial statistics of a random walk of particles. For short jumping distances, the binomial distribution can be approximated as a Gaussian function, equivalent to the analytical solution of Fick’s second law for diffusion from a point source. The sequence of jumps at the atomic level is an example of a Markov process, where the system loses memory of its previous states after each jump. This loss of memory is the result of the large number of vibrations that the particles undergo upon reaching each new well state. The particle jump rate itself is the product of two factors: the vibrational attempt frequency and the Boltzmann probability factor, which gives the probability of success in overcoming the activation barrier.

Exercises

(7.1) A particle in a potential energy well has a vibrational frequency of 1 THz. The particle must overcome a barrier of 1 eV to transition out of the potential energy well and into the neighboring site.
(a) On average, how much time is required for the particle to escape the potential energy well at room temperature (T = 298 K)?
(b) At what temperature would the particle have a 50% chance of escape within one hour of time? Assume that the vibrational frequency is independent of temperature.

(7.2) Consider a uniform probability distribution function, $f(x) = 1/(b - a)$ for $a \le x \le b$, and $f(x) = 0$ for all other values of $x$.
(a) Calculate the mean of the distribution.
(b) Calculate the variance of the distribution.
(c) Calculate the standard deviation of the distribution.
(d) Calculate the skewness of the distribution.
(e) Calculate the kurtosis of the distribution.

(7.3) Sketch a probability distribution function that has:
(a) A high variance and a high skewness, but a low kurtosis.
(b) A high variance and a high kurtosis, but a low skewness.
(c) A low variance, a high skewness, and a low kurtosis.

(d) A high variance, a high skewness, and a high kurtosis.

(7.4) Consider a particle in a sinusoidal potential energy well centered at $x = 0$, $E_{\mathrm{well}}(x) = E_A \sin(\omega x - \pi/2)$, where $E_A$ is the energy of the activated state. As in Sections 7.3 and 7.4, consider that the activated state has a width of $L_A$ between the sinusoidal wells.
(a) What is the expected time that a particle will spend in the activated state?
(b) Derive an expression for the expected fraction of particles in the activated state.
(c) What is the single-particle jump frequency?
(d) What is the vibrational frequency of the particle in the sinusoidal well?
(e) What parameters affect the vibrational and jump frequencies?

(7.5) Discuss an example from music, literature, art, or aspects of everyday life that shows or reminds you of the moments of a function, i.e., mean, variance, skew, and kurtosis. Explain why. You may be as literal or as figurative as you would like, so long as you can explain your reasoning.

References

[1] J. Philibert, “One and a Half Century of Diffusion: Fick, Einstein, Before and Beyond,” Diffusion Fundamentals 2, 1 (2005).
[2] E. Johnson, Anxiety and the Equation: Understanding Boltzmann's Entropy, MIT Press, Cambridge (2018).
[3] R. W. Balluffi, S. M. Allen, and W. C. Carter, Kinetics of Materials, John Wiley & Sons (2005).
[4] J. Crank, The Mathematics of Diffusion, Oxford University Press, Oxford (1975).

CHAPTER 8

Diffusion in Crystals

8.1 Atomic Mechanisms for Diffusion

Our treatment of diffusion began at the macroscopic level in terms of irreversible thermodynamics and Fick's laws, which involved solving continuum equations for the evolution of the concentrations of the diffusing species. In Chapter 7, we investigated the atomic basis for diffusion by considering particle jumping at the microscale. The connection between the macroscopic and microscopic views of diffusion was established by Einstein, who calculated the diffusion coefficient, D, from the mean squared displacement of atoms, as in Eq. (7.33).

In Chapter 7, we treated diffusion in terms of a random walk, where the individual atomic displacements are uncorrelated. However, atomic jumping in real crystals is not entirely random, since atoms can occupy only certain allowable positions in the material. Moreover, there are restrictions on particle movement based on the details of the underlying atomic jumping mechanism. Diffusion in crystals occurs through a variety of mechanisms, depending on factors such as the crystal structure, type of bonding, properties of the diffusing species, etc. Some of the principal mechanisms for atomic diffusion in single crystals include:

• Ring mechanism, wherein an atom on the lattice exchanges places with a neighboring atom by a cooperative ring-like rotational movement, as shown in Figure 8.1. With the ring mechanism, atoms diffuse without creating defects such as vacancies or interstitials. Since the ring mechanism requires a cooperative motion of three or more atoms, the activation barrier for this mechanism tends to be higher than that for defect-mediated diffusion.

• Vacancy mechanism, wherein atoms diffuse by jumping into neighboring vacancy sites. As shown in Figure 8.2, the vacancy mechanism effectively involves swapping of an atom with the neighboring vacancy, i.e., the atom moves into the vacancy site and the vacancy moves into the site previously occupied by the atom.
Since the vacancy mechanism involves only a single-particle displacement rather than a cooperative motion of atoms, it tends to have a smaller activation barrier compared to the ring mechanism. As a result, the vacancy mechanism is one of the primary mechanisms of diffusion in metals and ionic crystals.
• Interstitial mechanism, wherein an interstitial atom diffuses by jumping into new interstitial sites, as in Figure 8.3. This is the primary mechanism for diffusion of dopant atoms that occupy low-energy interstitial sites. In order to jump to a neighboring interstitial site, the diffusing atom must squeeze between other atoms on the lattice. This occurs without the atoms on the lattice actually changing sites. The interstitial mechanism is the dominant mechanism of diffusion for many dopants dissolved in crystalline materials.
• Interstitialcy mechanism, wherein an atom from the lattice migrates to an interstitial site, enabling an atom already in the interstitial site to take its place on the lattice. As visualized in Figure 8.4, the interstitialcy mechanism involves atoms from the lattice taking turns bumping each other into interstitial sites. This mechanism is possible only when substitutional atoms from the lattice can occupy interstitial sites. In many cases, the interstitialcy mechanism contributes to self-diffusion in metals.

Figure 8.1 Diffusion via the ring mechanism.

Figure 8.2 Vacancy mechanism of diffusion.

Materials Kinetics, ISBN 978-0-12-823907-0, https://doi.org/10.1016/B978-0-12-823907-0.00015-7. © 2021 Elsevier Inc. All rights reserved.


Figure 8.3 Interstitial mechanism of diffusion.

Figure 8.4 Diffusion via the interstitialcy mechanism.

8.2 Diffusion in Metals

Let us analyze two specific examples of diffusion in metals using: (a) the interstitial mechanism and (b) the vacancy mechanism. In Figure 8.5, we consider an atomic model for diffusion of a dopant atom occupying interstitial sites in a body-centered cubic (BCC) crystal. Diffusion of the dopant atom therefore occurs via the interstitial mechanism. An example of this scenario is the diffusion of interstitial carbon in BCC iron.


Figure 8.5 Atomic model for diffusion of carbon via the interstitial mechanism in a BCC metal (such as iron).

From the Einstein diffusion relation in Eq. (7.33), a general expression connecting the jump rate ($\Gamma$) with the interstitial jump distance ($r$) in three dimensions is:

$$D_I = \frac{\Gamma r^2}{6}, \tag{8.1}$$

where $D_I$ denotes the diffusion coefficient of the dopant atom via the interstitial mechanism. In Figure 8.5, we consider diffusion in response to a concentration gradient along the y axis. This results in jumping of the interstitial atoms from the a plane to the b plane. As represented in the figure, there are three possible interstitial sites in the a plane, labeled 1, 2, and 3. Let us assume that these three interstitial sites are occupied at random and therefore with equal probability. If we let $c'$ denote the concentration of interstitial atoms in the a plane per unit area, then the concentration on each of the three interstitial sites is $c'/3$. Examining the BCC crystal structure in Figure 8.5, it is clear that jumps from the a plane to the b plane are allowed only from sites 1 and 3, but not from site 2. This is because new interstitial sites are available in the b plane from only the 1 and 3 positions, since an atom on the crystal lattice would block any jump from position 2. Hence, the flux from the a plane to the b plane ($J_{a \to b}$) has contributions from only the 1 and 3 positions:

$$J_{a \to b} = \frac{2\Gamma c'}{3}. \tag{8.2}$$

This example demonstrates the importance of considering the specific crystal structure when determining which atomic jumps are actually allowed.
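These jump-rate relations are easy to evaluate numerically. The sketch below implements Eqs. (8.1) and (8.2); the jump rate and jump distance are illustrative, order-of-magnitude assumptions, not measured data for carbon in iron:

```python
def interstitial_diffusivity(jump_rate, jump_distance):
    """Eq. (8.1): D_I = Gamma * r^2 / 6 for an uncorrelated 3D walk."""
    return jump_rate * jump_distance**2 / 6.0

def flux_a_to_b(jump_rate, c_plane):
    """Eq. (8.2): only 2 of the 3 interstitial sites in the a plane
    (sites 1 and 3) can feed the b plane, so J = 2 * Gamma * c' / 3."""
    return 2.0 * jump_rate * c_plane / 3.0

# Illustrative values (assumed, order-of-magnitude only):
gamma = 1.0e8   # jumps per second
r = 1.8e-10     # jump distance, m
print(interstitial_diffusivity(gamma, r))  # m^2/s
print(flux_a_to_b(gamma, 5.0e15))          # atoms m^-2 s^-1
```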


Now let us consider self-diffusion in a face-centered cubic (FCC) metal via the vacancy mechanism, as shown in Figure 8.6. In an FCC lattice, each site has twelve nearest neighbors. Assuming that the vacancies occupy the lattice sites randomly, the jump frequency of the vacancy ($\Gamma_V$) is

$$\Gamma_V = 12\Gamma_V', \tag{8.3}$$

where $\Gamma_V'$ is the jump frequency along one direction. From the Einstein diffusion equation, the diffusion coefficient of the vacancies ($D_V$) is:

$$D_V = \frac{\Gamma_V r^2}{6} = \frac{12\Gamma_V'}{6}\left(\frac{a}{\sqrt{2}}\right)^2 = \Gamma_V' a^2, \tag{8.4}$$

where $a$ is the lattice parameter of the FCC unit cell, and the elementary jump distance is $r = a/\sqrt{2}$.

Next, we must relate the vacancy diffusivity to the self-diffusivity. At equilibrium, the fraction of sites occupied by vacancies is:

$$X_V = \exp\left(-\frac{G_V}{kT}\right) = \exp\left(\frac{S_V}{k}\right)\exp\left(-\frac{H_V}{kT}\right). \tag{8.5}$$

The self-diffusivity is related to the vacancy diffusivity by:

$$D = f D_V X_V = f \Gamma_V' a^2 \exp\left(\frac{S_V}{k}\right)\exp\left(-\frac{H_V}{kT}\right), \tag{8.6}$$

where $f$ is a correlation factor, a geometry-dependent factor that captures the correlation between successive jumps in a crystal. The correlation factor is discussed in the next section.
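Eqs. (8.5) and (8.6) can be combined in a short script. All of the input values below are assumptions chosen for illustration, not fitted parameters for any particular metal:

```python
import math

k_B = 8.617e-5  # Boltzmann constant, eV/K

def vacancy_fraction(S_v, H_v, T):
    """Eq. (8.5): equilibrium vacancy site fraction
    X_V = exp(S_V/k) * exp(-H_V/kT)."""
    return math.exp(S_v / k_B) * math.exp(-H_v / (k_B * T))

def self_diffusivity_fcc(f, gamma1, a, S_v, H_v, T):
    """Eq. (8.6): D = f * Gamma'_V * a^2 * X_V."""
    return f * gamma1 * a**2 * vacancy_fraction(S_v, H_v, T)

# Illustrative (assumed) inputs:
f_fcc = 0.78146          # FCC correlation factor from Table 8.1
gamma1 = 1.0e13          # attempt frequency per direction, 1/s
a = 3.6e-10              # lattice parameter, m
S_v, H_v = 2 * k_B, 1.0  # formation entropy (eV/K) and enthalpy (eV)
print(self_diffusivity_fcc(f_fcc, gamma1, a, S_v, H_v, 1000.0))  # m^2/s
```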

Figure 8.6 Self-diffusion by the vacancy mechanism in an FCC metal.


8.3 Correlated Walks

Unlike the random walk diffusion model considered in Chapter 7, the atomic jumping in real crystals typically has some degree of correlation owing to the constraints of the crystal structure. Consequently, a correlation factor, $f$, must be included in the equation for the diffusion coefficient, as in Eq. (8.6).

When the jumping sequence of an atom is not random, the atom is said to undergo a correlated jump. For example, suppose that diffusion in a crystal occurs via the vacancy mechanism. If an atom jumps into a vacancy, for its next jump it could:
• Jump back into the site that it just left, which now contains the vacancy with which it has just exchanged.
• Jump into a different vacancy already present in a different nearest-neighbor site.
• Wait for a new vacancy to arrive in a nearest-neighbor site, and then jump into that site.
Of these choices, the first possibility is the most probable. Hence, there is correlation between successive jumps of the atom.

The correlation factor, $f$, is defined as the ratio of the actual diffusion coefficient (with correlated jumps) to the diffusion coefficient one would obtain via a true random walk, i.e., $f = D_{\mathrm{actual}}/D_{\mathrm{random}}$. From Eq. (7.26), $f$ is therefore equal to the ratio of the mean squared displacement of atoms, $\langle R^2(N_s)\rangle$, to the mean squared jumping distance, $N_s\langle r^2\rangle$:

$$f = \frac{D_{\mathrm{actual}}}{D_{\mathrm{random}}} = \frac{\langle R^2(N_s)\rangle}{N_s\langle r^2\rangle} = 1 + \frac{2}{N_s\langle r^2\rangle}\left\langle \sum_{j=1}^{N_s-1}\sum_{i=1}^{N_s-j} |\vec{r}_i||\vec{r}_{i+j}|\cos\theta_{i,i+j} \right\rangle, \tag{8.7}$$

where $N_s$ is the total number of jumps, $\vec{r}_i$ is the displacement vector of the $i$th jump, and $\theta_{i,i+j}$ is the angle subtended between the $i$th and $(i+j)$th displacements. For a truly random walk, $f = 1$, since the double summation in Eq. (8.7) averages to zero. Hence, $f = 1$ indicates that there is no correlation between successive jumps. On the other hand, if each individual jump is exactly the opposite of the previous jump, then $f = 0$.
In this case, there is no net diffusion ($D_{\mathrm{actual}} = 0$) because the atoms are just jumping back and forth between the same two sites. The correlation factors for real crystals can be calculated by considering the likelihood of all possible jump trajectories for a given diffusion mechanism. Table 8.1 lists correlation factors for a variety of common crystals and diffusion mechanisms.
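The definition $f = \langle R^2(N_s)\rangle/(N_s\langle r^2\rangle)$ can be checked with a small Monte Carlo sketch on a square lattice. The two limiting cases match the discussion above: an uncorrelated walk gives $f \approx 1$, while a walk in which every jump exactly reverses the previous one gives $f = 0$:

```python
import random

def correlation_factor(trajectories):
    """Estimate f = <R^2(N_s)> / (N_s <r^2>) from a set of jump sequences.
    Each trajectory is a list of (dx, dy) unit jumps."""
    num, den = 0.0, 0.0
    for jumps in trajectories:
        Rx = sum(dx for dx, dy in jumps)   # net displacement components
        Ry = sum(dy for dx, dy in jumps)
        num += Rx**2 + Ry**2               # accumulates <R^2>
        den += sum(dx**2 + dy**2 for dx, dy in jumps)  # accumulates N_s <r^2>
    return num / den

moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
random.seed(0)

# Uncorrelated random walk: f should approach 1.
rand_walks = [[random.choice(moves) for _ in range(200)] for _ in range(500)]

# Fully anti-correlated walk (every jump undoes the previous one): f = 0.
back_forth = [[(1, 0), (-1, 0)] * 100 for _ in range(500)]

print(correlation_factor(rand_walks))   # close to 1
print(correlation_factor(back_forth))   # exactly 0
```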


Table 8.1 Correlation factors for self-diffusion.

Vacancy mechanism, two-dimensional lattices:
• Honeycomb lattice: f = 1/3
• Square lattice: f = 0.46694
• Hexagonal lattice: f = 0.56006

Vacancy mechanism, three-dimensional crystal structures:
• Diamond: f = 1/2
• Simple cubic: f = 0.65311
• Body-centered cubic (BCC): f = 0.72722
• Face-centered cubic (FCC): f = 0.78146
• Hexagonal close-packed (with all jump frequencies equal): f = 0.78121 (normal to c-axis), 0.78146 (parallel to c-axis)

Interstitialcy mechanism ($\theta$ = angle between the displacement vectors of the two atoms participating in the jump):
• NaCl, collinear jumps ($\theta = 0$): f = 2/3
• NaCl, noncollinear jumps with $\cos\theta = 1/3$: f = 32/33 ≈ 0.9697
• NaCl, noncollinear jumps with $\cos\theta = -1/3$: f = 0.9643
• Ca in CaF₂, collinear jumps ($\theta = 0$): f = 4/5
• Ca in CaF₂, noncollinear jumps ($\theta = 90°$): f = 1

From Manning [1].

8.4 Defects in Ionic Crystals

As in metals, diffusion in ionic crystals is facilitated by the presence of defects. However, the nature of defects in ionically bonded solids is more complicated than in metals, since each point defect carries an electrical charge. The condition of local charge neutrality requires that point defects in ionic solids form as neutral complexes of multiple charged point defects. Hence, diffusion in ionic crystals necessarily involves more than just a single charged species.

Defects in ionic crystals are described using Kröger-Vink notation, a standard notation that describes defects relative to the perfect crystal. Kröger-Vink notation describes a point defect using three parts: the species occupying the site, the site location in the crystalline lattice, and the effective charge of the species occupying that site. These three parts of the Kröger-Vink notation are shown in Figure 8.7, which


Figure 8.7 Guide to Kröger-Vink notation, using a magnesium vacancy as an example.

considers a vacancy on a magnesium site, $V''_{\mathrm{Mg}}$. The “V” indicates the particular species (in this case, a vacancy) occupying the site.¹ The subscript in the notation indicates the normal site on the crystalline lattice. In this case, the vacancy occupies what would normally be a magnesium site on the lattice. The superscript gives the effective charge of the defect, which is calculated relative to the charge of the perfect crystal. For the case of the magnesium vacancy, the double prime indicates that the vacancy has an effective charge of negative two (i.e., −2 times the charge of a proton). In the perfect ionic crystal, the magnesium cation has a charge of positive two, i.e., Mg²⁺. The vacancy is the absence of an ion, and therefore always has an absolute charge of zero. However, this zero charge is relative to the normal site charge of +2, so the effective charge of the vacancy defect is 0 − 2 = −2. Each prime in the superscript represents a charge of −1. Hence, the double-prime superscript in $V''_{\mathrm{Mg}}$ indicates a net charge of −2.

If the site has an effective positive charge rather than a negative charge, then the dot superscript • is used. A single dot represents a charge of +1. Multiple dots are used to indicate a greater positive charge. Finally, if the site has a net neutral charge, then a superscript × is used. Neutrality occurs if the charge at the site corresponds to what is expected in the perfect crystal.

Let us consider a few more examples to demonstrate further use of Kröger-Vink notation. An interstitial aluminum defect is denoted by $\mathrm{Al}_i^{\bullet\bullet\bullet}$. Here, aluminum (Al) is the species occupying the interstitial site, which is denoted by the subscript “i”. The superscript ••• indicates that the interstitial aluminum defect has a net charge of +3 relative to that of the perfect crystal. This is calculated by considering that an aluminum ion, Al³⁺, has a

¹ A common question is how to distinguish between vacancies and the element vanadium in Kröger-Vink notation, since both of these species are denoted by the same letter “V”. When describing vacancies in vanadium-containing compounds with Kröger-Vink notation, it is common to denote vacancies using either a lowercase “v” or a script “V” in order to distinguish them from vanadium, “V”.


charge of +3, and in the perfect crystal there would not be any interstitial species. Since the lack of an interstitial species has a charge of zero, the net charge of the interstitial defect is +3 − 0 = +3, giving the superscript •••.

Next, let us consider a substitutional defect of an Fe²⁺ ion substituting for what is normally a Mg²⁺ ion in a perfect crystal. With Kröger-Vink notation, this substitutional defect is written as $\mathrm{Fe}_{\mathrm{Mg}}^{\times}$ because Fe occupies a site that is normally held by Mg. Since Fe²⁺ and Mg²⁺ have the same charge, the net charge of the substitutional defect is zero. The superscript × is therefore used to indicate a net neutral defect.

Finally, we consider a charged substitutional defect with a slight variation on the above scenario. If Fe³⁺ is substituted for Mg²⁺, then the resulting defect is given by $\mathrm{Fe}_{\mathrm{Mg}}^{\bullet}$. As before, an Fe ion occupies a position normally held by Mg. However, now the substitutional Fe ion has a +3 charge rather than a +2 charge. Given the difference in charge between Fe³⁺ and Mg²⁺, the substitutional defect has a net charge of +1, which is indicated by the • superscript.
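The bookkeeping of effective charges can be captured in a few lines. The sketch below uses ASCII stand-ins for the superscripts (a prime ' per −1, an asterisk * per +1 dot, and x for neutral) and reproduces the examples above:

```python
def effective_charge(species_charge, site_charge):
    """Effective charge of a defect relative to the perfect crystal:
    (charge of species on the site) - (normal charge of that site)."""
    return species_charge - site_charge

def kroger_vink_superscript(q):
    """Render an effective charge Kroger-Vink style:
    primes (') for each -1, dots (here '*') for each +1, 'x' for neutral."""
    if q == 0:
        return "x"
    return ("'" * -q) if q < 0 else ("*" * q)

# Vacancy (charge 0) on a Mg2+ site -> effective charge -2 -> double prime
print(kroger_vink_superscript(effective_charge(0, +2)))   # ''
# Al3+ on an interstitial site (normal charge 0) -> +3 -> three dots
print(kroger_vink_superscript(effective_charge(+3, 0)))   # ***
# Fe3+ substituting for Mg2+ -> +1 -> one dot
print(kroger_vink_superscript(effective_charge(+3, +2)))  # *
# Fe2+ substituting for Mg2+ -> neutral
print(kroger_vink_superscript(effective_charge(+2, +2)))  # x
```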

8.5 Schottky and Frenkel Defects

It is now clear that most point defects in ionic crystals have an effective charge. Every vacancy defect has an effective charge opposite to that of the species normally occupying that site. Every interstitial defect has a charge equal to the charge of the species occupying the interstitial site. Finally, a substitutional defect has an effective charge equal to the difference in charge between the substitutional ion and the ion normally occupying the lattice site. The only neutral point defect in an ionic crystal would be a substitutional defect where both ions have the same charge.

Since any vacancy or interstitial defect has a net charge, the condition of local charge neutrality demands that additional defects must be present to attain a net charge of zero in the local vicinity of the defect. Such combinations of point defects can be categorized as either Schottky or Frenkel defects. As shown in Figure 8.8, a Schottky defect consists of multiple vacancies of both cations and anions, in such a combination as to produce a net charge of zero. For example, a Schottky defect in a MgO crystal would involve creating vacancies on both a magnesium site ($V''_{\mathrm{Mg}}$) and an oxygen site ($V_{\mathrm{O}}^{\bullet\bullet}$). The net charge of this pair of vacancies is zero. Hence, the Schottky reaction in MgO can be expressed as:


Figure 8.8 Schottky and Frenkel defects in an ionic crystal.

$$\mathrm{null} \rightarrow V''_{\mathrm{Mg}} + V_{\mathrm{O}}^{\bullet\bullet}, \tag{8.8}$$

where “null” indicates a perfect crystal without any point defects.

Schottky defect formation must involve more than two vacancies if the charges of the ions have different magnitudes. For example, let us consider barium titanate, BaTiO₃, where the ions have the following charges: Ba²⁺, Ti⁴⁺, and O²⁻. A full Schottky reaction in BaTiO₃ involves creating vacancies on the complete chemical formula:

$$\mathrm{null} \rightarrow V''_{\mathrm{Ba}} + V''''_{\mathrm{Ti}} + 3V_{\mathrm{O}}^{\bullet\bullet}, \tag{8.9}$$

i.e., one vacancy each on barium and titanium sites, along with three oxygen vacancies to provide charge compensation. Note that a partial Schottky defect is also possible in this compound, with single vacancies created on barium and oxygen alone:

$$\mathrm{null} \rightarrow V''_{\mathrm{Ba}} + V_{\mathrm{O}}^{\bullet\bullet}, \tag{8.10}$$

since this also satisfies the charge neutrality condition. Likewise, a partial Schottky defect could be formed with one titanium vacancy balanced by two oxygen vacancies.

Frenkel defects are another type of defect, formed by moving an ion from its normal lattice position into an interstitial location. As shown in Figure 8.8, a Frenkel defect consists of a pair of interstitial and vacancy defects: when an ion moves from its lattice position into an interstitial site, it leaves behind a vacancy defect having the opposite charge. There are two types of Frenkel defects: cation and anion Frenkel defects. A cation Frenkel defect occurs when a cation moves from its lattice position into an interstitial site. For example, a cation Frenkel reaction in AgCl results in an


Ag⁺ ion moving into an interstitial position, leaving behind a negatively charged vacancy:

$$\mathrm{null} \rightarrow \mathrm{Ag}_i^{\bullet} + V'_{\mathrm{Ag}}. \tag{8.11}$$

Likewise, an anion Frenkel defect occurs when an anion moves from its normal lattice position into an interstitial site. Considering the same silver chloride crystal, an anion Frenkel reaction consists of:

$$\mathrm{null} \rightarrow \mathrm{Cl}'_i + V_{\mathrm{Cl}}^{\bullet}. \tag{8.12}$$

In both cases, the effective charges of the interstitial and vacancy defects cancel each other out, leading to a net neutral charge of the Frenkel defect.
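A quick way to check any proposed defect reaction is to sum the effective charges of its products, which must vanish by local charge neutrality. A minimal sketch, with each defect written as a (count, effective charge) pair:

```python
def net_effective_charge(defects):
    """Sum of (count * effective charge) over the products of a defect reaction.
    A charge-neutral reaction must return 0."""
    return sum(n * q for n, q in defects)

# Schottky in MgO, Eq. (8.8): V_Mg'' + V_O**  ->  (-2) + (+2) = 0
assert net_effective_charge([(1, -2), (1, +2)]) == 0
# Full Schottky in BaTiO3, Eq. (8.9): V_Ba'' + V_Ti'''' + 3 V_O**
assert net_effective_charge([(1, -2), (1, -4), (3, +2)]) == 0
# Cation Frenkel in AgCl, Eq. (8.11): Ag_i* + V_Ag'
assert net_effective_charge([(1, +1), (1, -1)]) == 0
print("all reactions are charge-neutral")
```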

8.6 Equilibrium Constants for Defect Reactions

The equilibrium concentration of defects from a given defect reaction can be calculated from the equilibrium constant of that reaction. For example, the defect reaction to create a Schottky pair in KCl is:

$$\mathrm{null} \rightarrow V'_{\mathrm{K}} + V_{\mathrm{Cl}}^{\bullet}. \tag{8.13}$$

The equilibrium constant, $K_{\mathrm{eq}}$, is related to the Gibbs free energy of Schottky defect formation, $G_S^f$, by

$$G_S^f = -kT \ln K_{\mathrm{eq}}. \tag{8.14}$$

Solving for the equilibrium constant,

$$K_{\mathrm{eq}} = \exp\left(-\frac{G_S^f}{kT}\right) = a_{\mathrm{AV}}\, a_{\mathrm{CV}}, \tag{8.15}$$

where $K_{\mathrm{eq}}$ is also equal to the product of the activities of the anion vacancy ($a_{\mathrm{AV}}$) and cation vacancy ($a_{\mathrm{CV}}$). Note that the activity of the perfect crystal (“null”) is equal to one by definition. Following Raoult’s Law for dilute solutions, which is valid when the defect concentrations are sufficiently small that the defects do not interact with each other, the activities are governed by the concentrations of the species:

$$[V'_{\mathrm{K}}][V_{\mathrm{Cl}}^{\bullet}] = K_{\mathrm{eq}} = \exp\left(-\frac{G_S^f}{kT}\right). \tag{8.16}$$


From the defect reaction in Eq. (8.13), we know that the concentration of potassium vacancies must be equal to the concentration of chloride vacancies to maintain charge neutrality. Given this equality, we can solve for the vacancy concentration:

$$[V'_{\mathrm{K}}] = [V_{\mathrm{Cl}}^{\bullet}] = \sqrt{K_{\mathrm{eq}}} = \exp\left(-\frac{G_S^f}{2kT}\right). \tag{8.17}$$

Hence, the equilibrium concentration of vacancies increases with temperature and with lower Gibbs free energy of Schottky defect formation.

The calculation becomes slightly more complicated when there are unequal numbers of products from the defect reaction. Consider the reaction for a Schottky defect in calcium fluoride, CaF₂:

$$\mathrm{null} \rightarrow V''_{\mathrm{Ca}} + 2V_{\mathrm{F}}^{\bullet}. \tag{8.18}$$

Again, assuming Raoult’s Law for dilute solutions, we have

$$[V''_{\mathrm{Ca}}][V_{\mathrm{F}}^{\bullet}]^2 = K_{\mathrm{eq}} = \exp\left(-\frac{G_S^f}{kT}\right). \tag{8.19}$$

Charge neutrality requires that:

$$[V''_{\mathrm{Ca}}] = \frac{1}{2}[V_{\mathrm{F}}^{\bullet}]. \tag{8.20}$$

Inserting Eq. (8.20) into Eq. (8.19), we have

$$\frac{1}{2}[V_{\mathrm{F}}^{\bullet}]^3 = K_{\mathrm{eq}} = \exp\left(-\frac{G_S^f}{kT}\right). \tag{8.21}$$

Finally, we can solve for the equilibrium concentration of fluoride vacancies:

$$[V_{\mathrm{F}}^{\bullet}] = \sqrt[3]{2K_{\mathrm{eq}}} = \sqrt[3]{2}\,\exp\left(-\frac{G_S^f}{3kT}\right). \tag{8.22}$$

The procedure for Frenkel defects is identical. For example, the reaction for creating a cation Frenkel defect in AgBr is:

$$\mathrm{null} \rightarrow \mathrm{Ag}_i^{\bullet} + V'_{\mathrm{Ag}}. \tag{8.23}$$

The equilibrium constant is:

$$[\mathrm{Ag}_i^{\bullet}][V'_{\mathrm{Ag}}] = K_{\mathrm{eq}} = \exp\left(-\frac{G_F^f}{kT}\right). \tag{8.24}$$

Applying the charge neutrality condition, we know that the concentration of silver interstitials must equal the concentration of silver vacancies:

$$[\mathrm{Ag}_i^{\bullet}] = [V'_{\mathrm{Ag}}] = \sqrt{K_{\mathrm{eq}}} = \exp\left(-\frac{G_F^f}{2kT}\right). \tag{8.25}$$
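The closed-form concentrations in Eqs. (8.17) and (8.22) can be evaluated directly. The formation energies used below are assumed values for illustration only:

```python
import math

k_B = 8.617e-5  # Boltzmann constant, eV/K

def schottky_vacancy_fraction_pair(G_f, T):
    """Eq. (8.17): [V'] = [V*] = sqrt(K_eq) = exp(-G_f / 2kT)
    for a 1:1 ionic crystal such as KCl."""
    return math.exp(-G_f / (2 * k_B * T))

def fluorite_anion_vacancy_fraction(G_f, T):
    """Eq. (8.22): [V_F*] = (2 K_eq)^(1/3) = 2^(1/3) exp(-G_f / 3kT)
    for the fluorite stoichiometry of CaF2."""
    return 2.0 ** (1 / 3) * math.exp(-G_f / (3 * k_B * T))

# Illustrative (assumed) Schottky formation energies, in eV:
print(schottky_vacancy_fraction_pair(2.5, 1000.0))
print(fluorite_anion_vacancy_fraction(2.7, 1000.0))
```

As a consistency check, cubing Eq. (8.22) and halving recovers $K_{\mathrm{eq}}$, i.e. $\tfrac{1}{2}[V_{\mathrm{F}}^{\bullet}]^3 = \exp(-G_S^f/kT)$.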

8.7 Diffusion in Ionic Crystals

Since diffusion in ionic crystals is largely governed by defects, it is important to know the concentrations of these defects. For example, diffusion in sodium chloride (NaCl) crystals proceeds largely through the vacancy mechanism. There is an intrinsic concentration of vacancies in NaCl crystals, primarily due to Schottky defects:

$$\mathrm{null} \rightarrow V'_{\mathrm{Na}} + V_{\mathrm{Cl}}^{\bullet}. \tag{8.26}$$

The concentration of vacancies from Schottky defects is:

$$[V'_{\mathrm{Na}}]_S = [V_{\mathrm{Cl}}^{\bullet}]_S = \sqrt{K_{\mathrm{eq}}} = \exp\left(-\frac{G_S^f}{2kT}\right), \tag{8.27}$$

where $G_S^f$ is the Gibbs free energy of Schottky defect formation. Additional sodium vacancies can be introduced through extrinsic doping with a divalent cation such as Cd²⁺ in cadmium chloride, CdCl₂. Vacancy formation through extrinsic doping proceeds through the following reaction:

$$\mathrm{CdCl}_2 \xrightarrow{\;\mathrm{NaCl}\;} \mathrm{Cd}_{\mathrm{Na}}^{\bullet} + 2\mathrm{Cl}_{\mathrm{Cl}}^{\times} + V'_{\mathrm{Na}}. \tag{8.28}$$

In other words, each Cd²⁺ cation substitutes for Na⁺ in the lattice, while simultaneously creating a sodium vacancy in order to maintain charge neutrality. Thus, the concentration of sodium vacancies from this extrinsic (substitutional) mechanism is equal to the concentration of the dopant:

$$[V'_{\mathrm{Na}}]_{\mathrm{ext}} = [\mathrm{Cd}_{\mathrm{Na}}^{\bullet}] = [\mathrm{CdCl}_2]. \tag{8.29}$$


Summing Eq. (8.27) with Eq. (8.29), the total concentration of sodium vacancies from the intrinsic (Schottky defect) and extrinsic (CdCl₂ doping) mechanisms is:

$$[V'_{\mathrm{Na}}] = [V'_{\mathrm{Na}}]_S + [V'_{\mathrm{Na}}]_{\mathrm{ext}} = \exp\left(-\frac{G_S^f}{2kT}\right) + [\mathrm{CdCl}_2]. \tag{8.30}$$

At high temperatures, there is a larger concentration of sodium vacancies from Schottky defects, whereas at low temperatures the majority of sodium vacancies are created through doping of CdCl₂. Hence, the extrinsic mechanism is dominant at lower temperatures, whereas the intrinsic mechanism is dominant at higher temperatures.

The sodium diffusion coefficient is proportional to the concentration of sodium vacancies and is given by [2]

$$D_{\mathrm{Na}} = [V'_{\mathrm{Na}}]\, f \lambda^2 \nu \exp\left(-\frac{\Delta G_{\mathrm{Na}}^*}{kT}\right), \tag{8.31}$$

where $f$ is the correlation factor, $\lambda$ is the jump distance, $\nu$ is the vibrational frequency, and $\Delta G_{\mathrm{Na}}^*$ is the free energy of migration for sodium vacancies. At low temperatures, i.e., in the extrinsic regime,

$$D_{\mathrm{Na,ext}} = [V'_{\mathrm{Na}}]_{\mathrm{ext}}\, f \lambda^2 \nu \exp\left(-\frac{\Delta G_{\mathrm{Na}}^*}{kT}\right), \tag{8.32}$$

and, from Eq. (8.29), the concentration of vacancies is a constant determined by the solute concentration:

$$D_{\mathrm{Na,ext}} = [\mathrm{CdCl}_2]\, f \lambda^2 \nu \exp\left(-\frac{\Delta G_{\mathrm{Na}}^*}{kT}\right). \tag{8.33}$$

Eq. (8.33) can be expressed equivalently as

$$D_{\mathrm{Na,ext}} = [\mathrm{CdCl}_2]\, f \lambda^2 \nu \exp\left(\frac{\Delta S_{\mathrm{Na}}^*}{k}\right)\exp\left(-\frac{\Delta H_{\mathrm{Na}}^*}{kT}\right), \tag{8.34}$$

where the free energy of migration, $\Delta G_{\mathrm{Na}}^* = \Delta H_{\mathrm{Na}}^* - T\Delta S_{\mathrm{Na}}^*$, has been separated into entropic ($\Delta S_{\mathrm{Na}}^*$) and enthalpic ($\Delta H_{\mathrm{Na}}^*$) terms. At high temperatures, i.e., in the intrinsic regime, the concentration of sodium vacancies is dominated by the Schottky defect reaction from Eq. (8.26):

$$D_{\mathrm{Na,int}} = [V'_{\mathrm{Na}}]_S\, f \lambda^2 \nu \exp\left(-\frac{\Delta G_{\mathrm{Na}}^*}{kT}\right). \tag{8.35}$$

Substituting Eq. (8.27) for the intrinsic concentration of sodium vacancies into Eq. (8.35), it is clear that the activation free energy for diffusion in the intrinsic regime includes both migration and Schottky formation contributions:

$$D_{\mathrm{Na,int}} = f \lambda^2 \nu \exp\left(-\frac{G_S^f}{2kT}\right)\exp\left(-\frac{\Delta G_{\mathrm{Na}}^*}{kT}\right). \tag{8.36}$$

Separating the free energy of the Schottky reaction, $G_S^f = H_S^f - TS_S^f$, into enthalpic ($H_S^f$) and entropic ($S_S^f$) terms, we have

$$D_{\mathrm{Na,int}} = f \lambda^2 \nu \exp\left(\frac{S_S^f}{2k}\right)\exp\left(\frac{\Delta S_{\mathrm{Na}}^*}{k}\right)\exp\left(-\frac{H_S^f}{2kT}\right)\exp\left(-\frac{\Delta H_{\mathrm{Na}}^*}{kT}\right). \tag{8.37}$$

From Eq. (8.31), the combined expression for the sodium diffusivity, accounting for both extrinsic and intrinsic contributions to vacancy formation, is given by

$$D_{\mathrm{Na}} = \left([V'_{\mathrm{Na}}]_{\mathrm{ext}} + [V'_{\mathrm{Na}}]_S\right) f \lambda^2 \nu \exp\left(-\frac{\Delta G_{\mathrm{Na}}^*}{kT}\right). \tag{8.38}$$

Substituting Eqs. (8.27) and (8.29) into Eq. (8.38), we have

$$D_{\mathrm{Na}} = \left([\mathrm{CdCl}_2] + \exp\left(-\frac{G_S^f}{2kT}\right)\right) f \lambda^2 \nu \exp\left(-\frac{\Delta G_{\mathrm{Na}}^*}{kT}\right). \tag{8.39}$$

Hence, the free energy of sodium vacancy migration is relevant regardless of whether the vacancies have been generated through intrinsic or extrinsic means. The extrinsic mechanism is dominant at low temperatures, where the concentration of vacancies from the Schottky defect reaction is less than that from CdCl₂ doping. At higher temperatures, the concentration of vacancies from Schottky defects exceeds that from substitutional doping. Figure 8.9 clearly shows these two regimes. In the extrinsic regime, the activation barrier is determined only by $\Delta H_{\mathrm{Na}}^*$. However, in the


intrinsic regime, the activation barrier is equal to $\Delta H_{\mathrm{Na}}^* + H_S^f/2$. The crossover between the extrinsic and intrinsic regimes occurs when $[V'_{\mathrm{Na}}]_{\mathrm{ext}} = [V'_{\mathrm{Na}}]_S$, i.e., when $[\mathrm{CdCl}_2] = \exp\left(-G_S^f/2kT\right)$.

Figure 8.9 Intrinsic and extrinsic regimes of diffusion. The crossover between the two regimes occurs where the number of defects is equal from the intrinsic and extrinsic defect mechanisms.
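The two regimes and their crossover can be sketched numerically from Eqs. (8.30) and (8.39). The Schottky energy, dopant fraction, and migration parameters below are assumptions chosen for illustration, not measured values for CdCl₂-doped NaCl:

```python
import math

k_B = 8.617e-5  # Boltzmann constant, eV/K

def sodium_vacancy_fraction(G_s, T, dopant):
    """Eq. (8.30): intrinsic (Schottky) plus extrinsic (dopant) vacancies."""
    return math.exp(-G_s / (2 * k_B * T)) + dopant

def D_Na(G_s, dG_mig, T, dopant, f=0.78146, lam=4.0e-10, nu=1.0e13):
    """Eq. (8.39). Defaults (f for the FCC cation sublattice, lam, nu)
    are illustrative assumptions, not fitted values."""
    return sodium_vacancy_fraction(G_s, T, dopant) * f * lam**2 * nu \
        * math.exp(-dG_mig / (k_B * T))

def crossover_temperature(G_s, dopant):
    """T* where [V']_S = [V']_ext, i.e. exp(-G_S/2kT*) = [CdCl2],
    so T* = -G_S / (2k ln[CdCl2])."""
    return -G_s / (2 * k_B * math.log(dopant))

G_s, dopant = 2.4, 1.0e-5   # assumed Schottky energy (eV) and dopant fraction
T_star = crossover_temperature(G_s, dopant)
print(T_star)                       # extrinsic below T*, intrinsic above
print(D_Na(G_s, 0.7, T_star, dopant))  # assumed 0.7 eV migration enthalpy
```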

8.8 Summary

Diffusion in crystals is governed by atomic jumping among well-defined allowable positions on the crystalline matrix or interstitial positions. The four primary atomic mechanisms for diffusion in crystals are the ring, vacancy, interstitial, and interstitialcy mechanisms. Of these, only the ring mechanism does not involve the presence of a point defect. Since the (defect-free) ring mechanism requires a cooperative motion of atoms, it is less efficient than the other, defect-mediated mechanisms. With the vacancy mechanism, atoms diffuse by single-particle jumping into a vacancy site. Hence, the atomic diffusion coefficient can be interpreted in terms of a diffusivity of the vacancies. The interstitial and interstitialcy mechanisms for diffusion involve atomic jumping into interstitial sites. The difference is that with the interstitialcy mechanism, the atoms jump into interstitials from their normal lattice positions. This is in contrast to the interstitial


mechanism, where dopant atoms jump directly between interstitial sites, squeezing between neighboring atoms on the lattice.

Unlike random walk diffusion, diffusion in real crystals typically has memory of the preceding state and thus exhibits a correlated walk. To account for these correlations between successive jumps, a geometry-dependent correlation factor is included in the atomic jumping models. The correlation factors for several common crystal structures are listed in Table 8.1.

Diffusion in ionic crystals is more complicated than in metals, since local charge neutrality of the species must be satisfied. Kröger-Vink notation is used to represent point defects relative to perfect ionic crystals. Intrinsic defects in ionic crystals are generated by Schottky and Frenkel defect reactions. A Schottky defect involves creating a combination of charge-balanced anion vacancy and cation vacancy point defects. A Frenkel defect is a vacancy-interstitial defect pair created by moving an ion from its normal lattice position into an interstitial site. The equilibrium constant for a defect reaction can be used to calculate the equilibrium concentrations of defects. Extrinsic defects can also be generated by incorporating substitutional dopants in a crystal. In the presence of both intrinsic and extrinsic defects, defect-mediated diffusion will display two distinct regimes, dominated by extrinsic defects at low temperatures and by intrinsic defects at high temperatures.

Exercises

(8.1) Consider the correlation factors in Table 8.1 for the vacancy mechanism in three-dimensional crystals.
(a) Why is the correlation factor for diamond equal to 1/2?
(b) Why is the correlation factor for a simple cubic structure less than that of a BCC crystal?
(c) Why is the correlation factor for a BCC crystal less than that of an FCC crystal?

(8.2) Use Kröger-Vink notation to write the following defect reactions and the corresponding equilibrium constants:
(a) Schottky defect formation in SrCl₂.
(b) Schottky defect formation in Nb₂O₅.
(c) Cation Frenkel defect formation in TiO₂.
(d) Anion Frenkel defect formation in B₂O₃.


(8.3) Consider the oxide AO₂, having the following defect formation energies:
Anion Frenkel: $\mathrm{null} \rightarrow V_{\mathrm{O}}^{\bullet\bullet} + \mathrm{O}''_i$, 3.0 eV
Cation Frenkel: $\mathrm{null} \rightarrow V''''_{\mathrm{A}} + \mathrm{A}_i^{\bullet\bullet\bullet\bullet}$, 9.5 eV
Schottky: $\mathrm{null} \rightarrow V''''_{\mathrm{A}} + 2V_{\mathrm{O}}^{\bullet\bullet}$, 6.5 eV
(a) What are the predominant intrinsic point defects in AO₂ at 1700 °C? Calculate their total concentrations, accounting for the Frenkel and Schottky defect reactions.
(b) Write an equation for the temperature dependence of the diffusivity of oxygen in AO₂ via the vacancy mechanism. Be sure to include a correlation factor ($f$) and an appropriate expression for the concentration of oxygen vacancies.

(8.4) Consider anion diffusion in a KCl crystal via the vacancy mechanism.
(a) What could you use as a dopant to promote anion vacancy formation?
(b) Write the defect reaction and equilibrium constant for this substitutional dopant.
(c) Write the defect reaction and equilibrium constant for the formation of intrinsic vacancies through the Schottky mechanism.
(d) Derive an expression for the temperature at which the concentrations of intrinsic and extrinsic vacancies are equal.
(e) Write an equation describing anion diffusion in both the intrinsic and extrinsic regimes.
(f) Which of the intrinsic or extrinsic regimes has a higher activation barrier? Why?

References
[1] J. R. Manning, Diffusion Kinetics for Atoms in Crystals, D. Van Nostrand Company (1968).
[2] Y.-M. Chiang, D. Birnie III, and W. D. Kingery, Physical Ceramics, John Wiley & Sons (1997).


CHAPTER 9

Diffusion in Polycrystalline Materials

9.1 Defects in Polycrystalline Materials

In Chapter 8, we discussed the atomic mechanisms for diffusion within a single crystal, including the ring, vacancy, interstitial, and interstitialcy processes. Of these, only the ring mechanism does not require the presence of a point defect within the crystal. The ring mechanism typically has a higher activation barrier, since it requires a cooperative rearrangement of atoms. On the other hand, defect-mediated diffusion by the vacancy and interstitial mechanisms involves a sequence of single-particle jumps. Hence, defect-mediated diffusion typically involves overcoming a lower activation barrier compared to diffusion in a defect-free crystal. Diffusivity within a crystal scales with the concentration of the relevant type of defect which dominates the diffusion process.

Defects in polycrystalline materials can be classified according to their dimensionality. In particular, polycrystalline materials exhibit:
• Zero-dimensional (point) defects, viz., vacancies, interstitials, and atomic substitutions.
• One-dimensional (linear) defects, viz., edge and screw dislocations.
• Two-dimensional (planar) defects, viz., grain boundaries, free surfaces, and other interfaces.
• Three-dimensional (volume) defects, viz., pores or inclusions.
Higher-dimensional defects tend to be especially effective at increasing diffusivity. In this chapter, we discuss the role of defects in promoting diffusion within polycrystalline materials, with particular emphasis on the impact of dislocations and interfaces.

Materials Kinetics, ISBN 978-0-12-823907-0, https://doi.org/10.1016/B978-0-12-823907-0.00019-4.

© 2021 Elsevier Inc. All rights reserved.


9.2 Diffusion Mechanisms in Polycrystalline Materials

The presence of defects enables significantly faster diffusion compared to that in defect-free crystals [1]. Extended defects such as dislocations, grain boundaries, and free surfaces provide short-circuit diffusion pathways with dramatically accelerated diffusion compared to that in crystals with only point defects. Figure 9.1 shows a typical Arrhenius plot for diffusion in a polycrystalline material along different pathways. Each line in the figure corresponds to diffusion by one type of mechanism; the notation used in the figure is explained in Table 9.1.

Diffusion through a single, dislocation-free crystal (D_XL) proceeds the most slowly, since only point defects are present to facilitate the atomic motion. While point defects provide a means for single-particle jumps, they do not necessarily provide a continuous pathway for longer-scale diffusion. Such pathways are provided by dislocations (D_D), which enable significantly faster diffusion along the line of the dislocation core.

While dislocation cores provide one-dimensional pathways for short-circuit diffusion, planar defects such as grain boundaries and free surfaces are even more effective due to their higher dimensionality [1,2]. As shown in Figure 9.1, diffusion along grain boundaries (D_B) and free surfaces (D_S) can be orders of magnitude faster than diffusion within dislocation-free crystals or along dislocation cores. A grain boundary effectively provides a plane of defects, enabling rapid diffusion throughout the plane, which can be many orders of magnitude faster than internal diffusion within the grains of the polycrystal. Free surfaces are even more effective at accelerating the diffusion process, since they provide the lowest activation barrier for diffusive hopping. Three-dimensional porous defects also effectively provide free surfaces that enable short-circuit diffusion.

Figure 9.1 Arrhenius plot showing diffusivity in a polycrystalline material by various mechanisms. The notation is specified in Table 9.1. Higher-dimensional defects enable significantly faster diffusivity compared to diffusion within single crystals having only point defects.


Diffusion in Polycrystalline Materials


Table 9.1 Notation for various diffusion modes.

Notation    Type of Diffusivity
DD          Along a dislocation core
DB          Along a grain boundary
DS          Along a free surface
DXL         In a bulk crystal free of line or planar defects
DL          Diffusivity in a liquid

Figure 9.1 also shows an example Arrhenius line for diffusivity in the liquid state (DL). Diffusion in liquids is governed by the viscosity of the liquid medium, which will be covered in detail later in this book. Note that the convergence of the Arrhenius curves for DL, DS, and DB at the melting temperature is arbitrary in Figure 9.1. Diffusivity within the liquid can also differ from that in the solid state enabled by grain boundaries and free surfaces. In such a case, there would be a discontinuity in the Arrhenius plots at the melting temperature.

Figure 9.2 shows an example of the dramatically faster diffusion enabled by short-circuit diffusion pathways. Here, data for the NiO system are plotted for diffusion of Ni²⁺ and O²⁻ within the dislocation-free crystal, along dislocation cores, and along grain boundaries [3]. Both the cation and anion diffusivities are orders of magnitude faster in the presence of defects. It is clear that the short-circuit pathways provided by dislocations and grain boundaries

Figure 9.2 Diffusivity in NiO along several mechanisms, using the notation in Table 9.1. The subscripts "O" and "Ni" indicate diffusion of the O²⁻ and Ni²⁺ ions, respectively. Diffusion along grain boundaries (DB) is orders of magnitude faster than diffusion along dislocations (DD), which, in turn, is orders of magnitude faster than diffusion through dislocation-free crystals (DXL). (Data are replotted from Atkinson [3]. Figure by Brittney Hauke (Penn State).)


enable diffusion that is orders of magnitude faster. Grain boundaries are particularly effective at enabling high diffusion rates, and polycrystalline materials with smaller grain sizes tend to have higher rates of diffusion since the concentration of grain boundaries is higher in such materials [4]. The impact of grain boundary diffusion is readily apparent when comparing self-diffusivity in single crystal versus polycrystalline versions of the same substance. For example, Figure 9.3 plots self-diffusivity in single crystal versus polycrystalline silver. The grain boundaries in the polycrystalline material offer pathways for short-circuit diffusion, yielding significantly higher diffusivity than in the single-crystal case. Given the lower activation barrier of grain boundary diffusion, this advantage is especially prominent at lower temperatures, as seen in Figure 9.3.
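The low-temperature prominence of the grain boundary pathway follows directly from the Arrhenius law: for equal prefactors, the ratio DB/DXL is exp(ΔEa/RT), which grows rapidly on cooling. A quick sketch with an assumed activation-energy difference (illustrative, not fitted to the silver data):

```python
import math

R = 8.314  # gas constant, J/(mol K)

def boundary_advantage(delta_Ea, T):
    """Ratio D_B / D_XL for equal prefactors and an activation-energy
    difference delta_Ea = Ea_XL - Ea_B (in J/mol)."""
    return math.exp(delta_Ea / (R * T))

delta_Ea = 100e3  # J/mol, assumed difference in activation energies
for T in (1200.0, 800.0, 500.0):
    print(f"T = {T:6.0f} K:  D_B/D_XL ~ {boundary_advantage(delta_Ea, T):.1e}")
```

The short-circuit advantage spans several more orders of magnitude at 500 K than at 1200 K, consistent with the widening gap between the single-crystal and polycrystalline lines in Figure 9.3 at lower temperatures.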

9.3 Regimes of Grain Boundary Diffusion

The treatment of polycrystalline diffusion depends on how much faster grain boundary diffusion is compared to diffusion through the grains. The most prominent classification of grain boundary diffusion in polycrystalline materials is Harrison's ABC model [6]. As shown in Figure 9.4, Harrison's model considers three regimes of diffusion:
• "A" Regime, where the diffusion length within the grains is longer than the grain size. The "A" stands for "all material," indicating that diffusion is appreciable both along grain boundaries and through the individual grains.

Figure 9.3 Self-diffusivity in single crystal and polycrystalline silver. The short-circuit diffusion pathways provided by grain boundaries enable significantly faster diffusion in polycrystalline silver, especially at lower temperatures. (Data are replotted from Turnbull [5]. Figure by Brittney Hauke (Penn State).)


Figure 9.4 Graphical depiction of Harrison’s ABC model for diffusion in polycrystalline materials [6]. The “A” regime corresponds to fast diffusion along both the grain boundaries and into the crystalline grains. In the “B” regime, diffusion along the grain boundaries is significantly faster than diffusion into the grains, but some of the diffusing matter is still allowed to leak into the grains. In the “C” regime, the diffusion into the grains is effectively negligible, such that the diffusion process is completely dominated by the grain boundaries. (Figure by Brittney Hauke (Penn State).)

• "B" Regime, where the diffusion length within the grains is significant but smaller than the grain size. Since the diffusion along the grain boundaries is dominant, the "B" stands for "boundary."
• "C" Regime, where the diffusion length within the grains is negligible, but significant diffusion still occurs along the grain boundaries. The "C" stands for "core only," meaning that the diffusion occurs only along the grain boundaries, and not into the individual grains.

Mathematically, the three regimes in Harrison's ABC model can be described by:

    A Regime: DXL·t > l²,   DB·t > l²    (9.1)

    B Regime: DXL·t ≈ l²,   DB·t > l²    (9.2)

    C Regime: DXL·t < l²,   DB·t > l²    (9.3)


where l is the interatomic spacing in the material, t is time, DXL is the diffusivity within a crystalline grain, and DB is the diffusivity along the grain boundaries. (The diffusion length along the grain boundaries is always taken as greater than the interatomic spacing.)

As shown in Figure 9.4 and described by Eq. (9.1), diffusion in the "A" regime is characterized by universally fast diffusion, both along the grain boundaries and through the grains themselves. Hence, diffusion in this regime can be modeled as equivalent to diffusion through a macroscopically homogeneous material with an effective average diffusivity equal to:

    D = DXL(1 − h) + DB·h,    (9.4)

where h is the fraction of atoms on grain boundary sites, and (1 − h) is the fraction of atoms within the grains. Hence, Eq. (9.4) represents an average diffusion coefficient, and diffusion in the "A" regime can be solved using the standard methods discussed in Chapter 4, considering Eq. (9.4) as the constant diffusion coefficient. Since diffusion in the "A" regime occurs across multiple grain boundaries, this is the so-called multiple-boundary regime.

Diffusion in the "B" regime is more complicated. As shown in Figure 9.4 and described by Eq. (9.2), in this regime the grain boundary diffusion is significantly faster than diffusion within the grains. However, there is still non-negligible diffusion into the crystalline grains. Hence, in the "B" regime, there are two diffusion processes operating on two different time scales, but where both processes are important to the overall problem under study. Analysis of diffusion in the "B" regime is significantly more complicated since it involves solving coupled diffusion equations along the grain boundaries and into the adjacent grains.

One simplifying assumption to model diffusion in the "B" regime is to consider that the grain boundaries themselves are stationary. Each diffusing atom is able to move both into the grains and along the network of grain boundaries. The fast diffusion along the grain boundaries provides a source of atoms that can subsequently diffuse into the grains on a longer time scale. Mathematically, we can assume that the stationary grain boundary is semi-infinite in the y and z directions, as shown in Figure 9.5. Diffusion occurs rapidly along the slab of the grain boundary, providing a source of diffusant which can subsequently migrate into the grains along the x direction. Hence, in this case, there must be two coupled diffusion equations, viz., one for diffusion along the grain boundary and one for diffusion into the grain.
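The regime criteria of Eqs. (9.1)–(9.3) and the effective diffusivity of Eq. (9.4) can be sketched in a few lines of code. The order-of-magnitude thresholds below are an assumption introduced here to make the "≈" comparison of Eq. (9.2) concrete:

```python
def harrison_regime(D_XL, t, l):
    """Classify the Harrison regime from the crystalline diffusion length
    relative to the spacing l (Eqs. 9.1-9.3); the grain-boundary diffusion
    length is assumed to always exceed l."""
    x2 = D_XL * t
    if x2 > 10.0 * l**2:      # penetration well beyond l -> "A"
        return "A"
    elif x2 > 0.1 * l**2:     # penetration comparable to l -> "B"
        return "B"
    return "C"                # negligible penetration into grains -> "C"

def effective_diffusivity(D_XL, D_B, h):
    """Effective average diffusivity in the "A" regime, Eq. (9.4)."""
    return D_XL * (1.0 - h) + D_B * h

# Illustrative numbers only:
l = 3e-10  # m, interatomic spacing
print(harrison_regime(1e-18, 1e4, l))            # deep penetration
print(harrison_regime(1e-26, 1e4, l))            # negligible penetration
print(effective_diffusivity(1e-18, 1e-12, 1e-3))
```

Note that even a small grain boundary fraction h dominates the effective diffusivity when DB exceeds DXL by many orders of magnitude.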


Figure 9.5 Diffusion in the “B” regime can be modeled by considering the grain boundary as stationary and semi-infinite in the yz plane. The grain boundary has a thickness of d. The fast diffusion along the grain boundary provides a source of material to diffuse more slowly into the grain (along the x direction).

In the “C” regime, diffusion occurs only along the grain boundaries, since diffusion into the grains is too slow to be observed on the experimental time scale. Hence, diffusion in the “C” regime can be treated with a single diffusion equation along the thin grain boundary slabs. Only the diffusion coefficient along the grain boundaries needs to be considered, as the diffusivity within the grains is effectively zero. Since the grain boundaries are so thin, it can be difficult to evaluate diffusion in the “C” regime experimentally. Instead of directly measuring the concentration profile, accumulation methods can be used to collect the diffusing atoms along one face of the polycrystalline material to evaluate the total concentration of diffusant migrating through the network of grain boundaries [7].

9.4 Diffusion Along Stationary vs. Moving Grain Boundaries

Harrison's "ABC" model considers different regimes of grain boundary versus single crystal diffusion, assuming stationary grain boundaries. However, the grain boundaries in a polycrystalline material may also be moving simultaneously with the diffusion processes. Figure 9.6 shows the different possible regimes of diffusion for systems with stationary versus moving grain boundaries. The left-hand side of the figure corresponds to the case of low grain boundary velocity, i.e., effectively stationary grain boundaries. Hence, the top-left corner of the figure corresponds to stationary grain boundaries


Figure 9.6 Regimes of diffusion in stationary vs. migrating grain boundaries, based on Cahn and Balluffi [8]. Here, v is the velocity of the grain boundary, DXL is the diffusivity within the crystal, and l is the interatomic spacing. Harrison's "ABC" model corresponds to stationary grain boundaries on the left-hand side of the figure. The right-hand side of the figure corresponds to migrating grain boundaries. The notation in the figure is as follows: "S" = stationary grain boundary; "M" = migrating grain boundary; "SOM" = stationary or migrating boundary; "I" = isolated boundary; "XL" = crystal diffusion; and "NXL" = no crystal diffusion. (Figure by Brittney Hauke (Penn State).)

with fast diffusion within the crystalline grains. This is equivalent to the “A” regime in the Harrison model. Here diffusion can occur across multiple crystalline interfaces, since the diffusivity is so high. The “B” regime is in the middle-left of the figure, where the crystal diffusion is slower and can be treated within an isolated grain rather than across multiple grain boundaries. The “C” regime is in the lower-left corner of the figure. Here, there is no diffusion into grains, since the diffusion just occurs along the stationary grain boundaries. The right-hand side of Figure 9.6 represents the case of faster grain boundary motion. In the lower-right corner of the figure, the diffusion of material from the moving grain boundary into the crystals is slow. However, in the upper-right corner of the figure, diffusion into the grain occurs ahead of the moving grain boundary. The setup for this diffusion problem is shown in Figure 9.7, where the grain boundary is treated as a semi-infinite slab in the yz plane. Diffusion occurs rapidly in the slab of the grain boundary, providing a source of diffusant which can then diffuse ahead of and behind the moving grain boundary. This diffusion into the adjacent


Figure 9.7 A grain boundary treated as a semi-infinite slab of thickness d extended in the yz plane. The grain boundary is moving in the x direction with a velocity of v. As the grain boundary moves, material is diffusing into the adjacent grains along the x direction.

grains occurs along the direction orthogonal to the grain boundary plane (i.e., the x direction). Diffusion occurs out through the front face of the grain boundary and into the forward grain. The grain boundary also deposits material in the backwards direction behind the moving wake of the boundary. Under simplified quasi-steady state conditions, the diffusion equation can be written as [7]:

    ∇·J = −(d/dx)[DXL(dC/dx) + vC] = 0,    (9.5)

where C is the concentration of the diffusing species and v is the velocity of the moving grain boundary. Under these steady state conditions, the flux J must be zero:

    DXL(dC/dx) + vC = 0.    (9.6)

This differential equation has the solution:

    C = C0 exp(−vx/DXL),    (9.7)

where C0 is the steady-state concentration in the grain boundary. This solution is plotted in Figure 9.8, which shows the deposition of the diffusing material into the neighboring grain for different ratios of DXL/v.
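Eq. (9.7) implies a characteristic penetration depth of DXL/v ahead of the boundary: the concentration falls to C0/e at x = DXL/v. A short sketch with illustrative values:

```python
import math

def profile(x, D_XL, v, C0=1.0):
    """Quasi-steady concentration ahead of a moving grain boundary, Eq. (9.7)."""
    return C0 * math.exp(-v * x / D_XL)

D_XL = 1e-16   # m^2/s, illustrative crystal diffusivity
v = 1e-9       # m/s, illustrative boundary velocity
L = D_XL / v   # characteristic penetration depth (here 0.1 micron)

for x in (0.0, L, 3.0 * L):
    print(f"x = {x:.1e} m  ->  C/C0 = {profile(x, D_XL, v):.3f}")
```

Slower boundaries or faster crystal diffusion (larger DXL/v) spread the diffusant further ahead of the boundary, as in the family of curves in Figure 9.8.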


Figure 9.8 Quasi-steady state solution of diffusion from a moving grain boundary source, as in Eq. (9.7). The solution is plotted for different ratios of DXL/v, where DXL is the diffusivity into the grain and v is the velocity of the grain boundary.

9.5 Atomic Mechanisms of Fast Grain Boundary Diffusion

Despite the complicated nature of grain boundary diffusion (potentially involving many different types of diffusion having varying activation energies), grain boundary diffusivities tend to follow the Arrhenius law quite accurately. This suggests that diffusion along grain boundaries is dominated by one type of atomistic mechanism, perhaps involving a single type of atomic jumping event or a set of hopping events with similar activation energies. Although grain boundary diffusion tends to be Arrhenius, Suzuki and Mishin [9] report that there is no unique mechanism governing grain boundary diffusion at the atomic level. The diffusion flux can be governed by either vacancy or interstitial mechanisms. The dominant mechanism depends on factors such as the grain boundary structure, temperature, and diffusion direction in the case of anisotropic grain boundaries.

Anisotropic effects can be significant during grain boundary diffusion, particularly at lower temperatures, where the differences in activation energies yield more pronounced changes in the diffusion rates. Depending on the particular structure of the grain boundary, anisotropy can lead to grain boundary diffusivities that vary by several orders of magnitude at the same temperature. Thus, the concept of an "average" grain boundary diffusivity in a polycrystalline material is not a well-defined physical quantity.


9.6 Diffusion Along Dislocations

As with grain boundaries, diffusion along dislocation cores can vary with the atomic structure of the dislocation. Dislocations can be categorized as either edge dislocations or screw dislocations, as depicted in Figure 9.9. While we cannot make any universal statements regarding the diffusivity along different types of dislocations, there is some evidence that edge dislocations lead to faster diffusion compared to screw dislocations in many metallic systems [7].

Dislocations in many close-packed metals tend to relax into two partial dislocations, which are connected by a stacking fault ribbon, as shown in Figure 9.10. Diffusion along such dissociated dislocation cores tends to be slower compared to non-dissociated dislocation cores [7]. While stacking fault ribbons provide fast diffusion pathways, they are not quite as effective as non-dissociated edge or screw dislocations.

9.7 Diffusion Along Free Surfaces

Diffusion along free surfaces is similar in spirit to diffusion along grain boundaries. Both grain boundaries and free surfaces are planar defects, providing fast diffusion pathways along two dimensions. While grain boundary diffusion tends to be very fast compared to diffusion within crystals, diffusion along free surfaces is even faster, as indicated in Figure 9.1. The reason for such fast diffusion along free surfaces is obvious, since the bonding at surfaces is weaker and there is significantly less steric hindrance to atomic motion. Hence, the migration of atoms at free surfaces can occur

Figure 9.9 Schematic of an edge dislocation and a screw dislocation. (Figure by Brittney Hauke (Penn State).)


Figure 9.10 Schematic of a stacking fault ribbon, where a dislocation has dissociated into two partial dislocations along the boundaries of the ribbon. Diffusion along partial dislocations tends to be less effective than diffusion along non-dissociated dislocation cores. (Figure by Brittney Hauke (Penn State).)

extremely rapidly. In addition to the diffusive species spreading along the free surface, the surface can act as a source of material diffusing into the crystal. Clearly, these diffusion mechanisms will occur on dramatically different time scales. Hence, diffusion from the surface can be treated using the standard analytical solutions covered in Chapter 4.

Internal pores can also provide free surfaces for ultra-fast diffusion. The only difference is that porosity provides free surfaces within the interior rather than at the exterior of the material. In this way, the three-dimensional volume defect (i.e., the pore) has a similar effect on diffusion as a two-dimensional defect (i.e., a free surface).

9.8 Summary

Polycrystalline materials have a spectrum of defects of varying dimensionality, including zero-dimensional point defects (vacancies and interstitials), one-dimensional line defects (dislocations), two-dimensional surface defects (grain boundaries and free surfaces), and three-dimensional


volume defects (pores). Higher-dimensional defects such as dislocations and surfaces lead to “short-circuit” pathways for dramatically accelerated diffusion. Hence, the diffusivity of a polycrystalline material depends strongly on the concentrations of such defects. The Harrison “ABC” model provides a description of three diffusion regimes in a polycrystalline material, based on the relative diffusivity through the grains versus along the grain boundaries. Depending on the regime, the diffusion in polycrystalline materials can be treated using either a single diffusion equation with an effective diffusivity or a coupled set of diffusion equations on different time scales. While defects act to enhance diffusivity, there is no universal atomistic mechanism enabling this fast diffusion. The details of the enhancement in diffusivity depend on the structure and microstructure of the material under study.

Exercises

(9.1) Discuss three strategies for increasing the diffusivity of Zn²⁺ in polycrystalline ZnO. What are the relative advantages and disadvantages of these three strategies? Be detailed in your explanations.

(9.2) Consider a semi-infinite slab of a polycrystalline material that has a thin layer of diffusant deposited on its free surface at x = 0. The diffusivity along the grain boundaries is DB, and the diffusivity within an individual crystalline grain is DXL. The fraction of atoms occupying grain boundary sites is h. Diffusion into the polycrystalline material occurs isothermally over a time, t.
(a) Write a solution of the diffusion equation assuming that the material is in the "A" regime of the Harrison "ABC" model.
(b) Write a solution of the diffusion equation assuming that the material is in the "C" regime of the Harrison "ABC" model.
(c) How can the concentration profile of the diffusant be measured in both of these cases?

(9.3) Consider the same system as in Exercise (9.2), but now the material is in the "B" regime of the Harrison "ABC" model. Assume that a grain boundary can be represented following the geometry in Figure 9.5.
(a) Write the diffusion equation for diffusion along a grain boundary.


(b) Write the diffusion equation for diffusion into a grain, where the source of the diffusing material is the time-dependent concentration of the diffusant at the grain boundary.
(c) How would you solve these equations? What are the important factors that need to be considered?
(d) How would the solution be modified if the grain boundary itself is moving?

(9.4) Search the literature for an investigation of diffusion in a specific polycrystalline material. Provide the reference for this paper.
(a) What are the important defects in this polycrystalline material?
(b) What is the impact of these defects on diffusivity? Which type of defect enables the fastest diffusion?
(c) How was the diffusivity measured in this study?
(d) How could this material be modified to lower the diffusivity?

References

[1] R. Dohmen and R. Milke, "Diffusion in Polycrystalline Materials: Grain Boundaries, Mathematical Models, and Experimental Data," Rev. Mineral. Geochem. 72, 921 (2010).
[2] D. Gryaznov, J. Fleig, and J. Maier, "Finite Element Simulation of Diffusion into Polycrystalline Materials," Solid State Sci. 10, 754 (2008).
[3] A. Atkinson, "Diffusion in Ceramics," in R.W. Cahn, P. Haasen, and E. Kramer, editors: Materials Science and Technology: A Comprehensive Treatment, Vol. 11, VCH Publishers (1994), pp. 295–337.
[4] V. Lacaille, C. Morel, E. Feulvarch, G. Kermouche, and J. M. Bergheau, "Finite Element Analysis of the Grain Size Effect on Diffusion in Polycrystalline Materials," Comp. Mater. Sci. 95, 187 (2014).
[5] D. Turnbull, "Grain Boundary and Surface Diffusion," in J.H. Holloman, editor: Atom Movements, American Society for Metals (1951).
[6] L. G. Harrison, "Influence of Dislocations on Diffusion Kinetics in Solids with Particular Reference to the Alkali Halides," Trans. Faraday Society 57, 1191 (1961).
[7] R. W. Balluffi, S. M. Allen, and W. C. Carter, Kinetics of Materials, John Wiley & Sons (2005).
[8] J. W. Cahn and R. W. Balluffi, "Diffusional Mass-Transport in Polycrystals Containing Stationary or Migrating Grain Boundaries," Scripta Metall. Mater. 13, 499 (1979).
[9] A. Suzuki and Y. Mishin, "Atomic Mechanisms of Grain Boundary Diffusion: Low Versus High Temperatures," J. Mater. Sci. 40, 3155 (2005).


CHAPTER 10

Motion of Dislocations and Interfaces

10.1 Driving Forces for Dislocation Motion

There are two primary types of dislocations in crystals: edge and screw dislocations. As discussed in Chapter 9, both types of dislocations provide pathways for accelerated diffusion of matter through the crystal. The dislocations themselves are also mobile and can evolve with time. A thermodynamic driving force acts on a dislocation if movement of the dislocation can decrease the free energy of the system. Driving forces for dislocation motion include:
• Mechanical Forces: Dislocations evolve in response to an applied mechanical stress.
• Osmotic Forces: Dislocations can create or annihilate vacancies to achieve a local equilibration of defect concentrations.
• Curvature Forces: Curved dislocations can straighten to reduce the free energy associated with the dislocation itself.
Before detailing each of these driving forces, we must introduce the concept of a Burgers vector, b, which represents the magnitude and direction of the lattice distortion resulting from a dislocation [1,2]. As shown in Figure 10.1, the Burgers vector can be determined by drawing a clockwise rectangular circuit around the dislocation. The difference between the circuit around the dislocation and the circuit around the perfect crystal (without the dislocation) defines the Burgers vector. In other words, the Burgers vector gives the magnitude and direction of the lattice distortion created by the dislocation. For an edge dislocation, the Burgers vector is perpendicular to the dislocation line. For a screw dislocation, the Burgers vector is parallel to the dislocation line.

Materials Kinetics ISBN 978-0-12-823907-0 https://doi.org/10.1016/B978-0-12-823907-0.00005-4

© 2021 Elsevier Inc. All rights reserved.


Figure 10.1 Edge and screw dislocations. For an edge dislocation, the Burgers vector is orthogonal to the line of the dislocation. For a screw dislocation, the Burgers vector is parallel to the dislocation line. (Image by Brittney Hauke (Penn State)).

In order to calculate the mechanical force on a dislocation, we must also define the stress tensor, σ, which consists of nine components that completely describe the stress state at a given point in the three-dimensional material:

        | T^(e1) |   | σ11  σ12  σ13 |
    σ = | T^(e2) | = | σ21  σ22  σ23 |    (10.1)
        | T^(e3) |   | σ31  σ32  σ33 |

The physical meaning of each element of the stress tensor is explained in Figure 10.2. Namely, σij is the stress component acting in the j direction on the i plane (i.e., the plane whose normal is the i direction). The corresponding stress vectors are denoted as T^(ei).

With these definitions of the Burgers vector, b, and the stress tensor, σ, the mechanical force exerted on a dislocation is given by the Peach-Koehler equation [3]:

    F_σ = (b^T · σ) × ẑ ≡ d × ẑ,    (10.2)

where b^T is the transpose of the Burgers vector and ẑ is a unit vector tangent to the dislocation. Here, the stress tensor is evaluated at the dislocation, and d ≡ b^T · σ.
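Eq. (10.2) is easy to evaluate numerically. The sketch below (plain Python, illustrative numbers in arbitrary but consistent units) computes the force on a straight edge dislocation along z with Burgers vector along x under a pure shear stress σxy = τ; the result is the expected glide force of magnitude τ|b| along x:

```python
def vec_mat(b, sigma):
    """d_j = sum_i b_i * sigma_ij, i.e., the row vector b^T . sigma."""
    return [sum(b[i] * sigma[i][j] for i in range(3)) for j in range(3)]

def cross(u, v):
    """Cross product of two 3-vectors."""
    return [u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0]]

def peach_koehler(b, sigma, z_hat):
    """Peach-Koehler force per unit length, Eq. (10.2):
    F_sigma = (b^T . sigma) x z_hat, with z_hat the unit line tangent."""
    return cross(vec_mat(b, sigma), z_hat)

tau = 2.0
b = [1.0, 0.0, 0.0]            # Burgers vector along x
z_hat = [0.0, 0.0, 1.0]        # dislocation line along z
sigma = [[0.0, tau, 0.0],      # pure shear: sigma_xy = sigma_yx = tau
         [tau, 0.0, 0.0],
         [0.0, 0.0, 0.0]]

print(peach_koehler(b, sigma, z_hat))   # [2.0, 0.0, 0.0]
```

The force lies in the slip plane along the Burgers vector, i.e., a pure glide force, consistent with the discussion of glide under shear stress in Section 10.2.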


Figure 10.2 Visual representation of a stress tensor, Eq. (10.1). (Image by Brittney Hauke (Penn State)).

The second force driving dislocation motion is the osmotic force, also known as the chemical force, which results from having a nonequilibrium concentration of vacancy defects near a dislocation [4]. The equilibrium vacancy concentration is governed by the equilibrium constants of the relevant defect equations, as discussed previously in Section 8.6. If there is a supersaturated concentration of vacancies, they can diffuse to a dislocation and be annihilated there by dislocation climb, as shown in Figure 10.3 and discussed in detail in the next section. This vacancy annihilation would bring the local point defect concentration closer to its equilibrium value. Hence, the osmotic force is governed by the degree of disequilibrium of the local vacancy concentration. If there is a supersaturated concentration of vacancies, then an annihilation of vacancies would lower the free energy. On the other hand, if the vacancy concentration is deficient, then the creation of new vacancies would lower the free energy.

The osmotic force, F_m, is quantified by [5]:

    F_m = ẑ × B,    (10.3)

where B is defined by

    B ≡ −b (kT/Ω) ln(X_V / X_V^eq).    (10.4)


Figure 10.3 Climb of an edge dislocation due to annihilation of excess vacancies. A: a vacancy is annihilated at a jog. B: a vacancy jumps into the dislocation core. C: an attached vacancy is annihilated at a jog. D: an attached vacancy diffuses along the dislocation core. (Image by Brittney Hauke (Penn State) based on Balluffi et al. [5]).

Here, k is Boltzmann's constant, T is absolute temperature, Ω is the atomic volume, X_V is the local concentration of vacancies, and X_V^eq is the equilibrium vacancy concentration. Clearly, the osmotic force vanishes when the equilibrium concentration of vacancies is achieved, i.e., when X_V = X_V^eq.

The third driving force for dislocation motion is the curvature force. If a dislocation is curved, then the free energy of the system can be reduced by straightening the dislocation. The free energy of a circular dislocation loop is [6]:

    W = (G|b|²R / (2(1 − ν))) [ln(4R/R0) − 1],    (10.5)

where R is the radius of curvature of the dislocation, R0 is a cutoff radius for where the curvature force is considered to be negligible, G is the shear modulus of the material, and ν is its Poisson's ratio. A resulting radial climb force, F_κ, acts to increase the radius of the curved dislocation [5]:

    |F_κ| = (1/(2πR))(∂W/∂R) = (G|b|² / (4π(1 − ν))) (1/R) ln(4R/R0) ≈ G|b|²/R.    (10.6)

This can be generalized to any segment of an arbitrarily curved dislocation,

    F_κ ≈ (G|b|²/R) n̂,    (10.7)

where Eq. (10.7) gives the curvature force on that segment of the dislocation, and n̂ is the principal normal vector, i.e., a unit normal vector directed toward the concave side of the curved dislocation.

Considering all three of these forces (mechanical, osmotic, and curvature), the total driving force on a dislocation is the sum of Eqs. (10.2), (10.3), and (10.7):

    F = F_σ + F_m + F_κ = d × ẑ + ẑ × B + (G|b|²/R) n̂,    (10.8)

which simplifies to

    F = ẑ × (B − d) + (G|b|²/R) n̂.    (10.9)
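The bookkeeping of Eqs. (10.3)–(10.9) can be sketched numerically (plain Python; the helper names and all numerical values are illustrative assumptions, not from the text):

```python
import math

def cross(u, v):
    """Cross product of two 3-vectors."""
    return [u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0]]

def osmotic_B(b, kT, omega, X_V, X_V_eq):
    """Vector B of Eq. (10.4): B = -b (kT/Omega) ln(X_V / X_V_eq)."""
    s = -(kT / omega) * math.log(X_V / X_V_eq)
    return [s * bi for bi in b]

def total_force(d, B, z_hat, G, b_mag, R, n_hat):
    """Net driving force of Eq. (10.9): F = z_hat x (B - d) + (G b^2 / R) n_hat."""
    shear_osmotic = cross(z_hat, [Bi - di for Bi, di in zip(B, d)])
    curvature = [(G * b_mag**2 / R) * ni for ni in n_hat]
    return [s + c for s, c in zip(shear_osmotic, curvature)]

# At the equilibrium vacancy concentration the osmotic term vanishes:
B_eq = osmotic_B([1.0, 0.0, 0.0], kT=4.1e-21, omega=1.2e-29,
                 X_V=1e-6, X_V_eq=1e-6)
print(B_eq)   # zero vector: no osmotic driving force
```

With a supersaturation X_V > X_V^eq, the component of B along b is negative, driving the climb that annihilates excess vacancies; when B = d, only the curvature term of Eq. (10.9) survives.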

10.2 Dislocation Glide and Climb

Dislocation motion occurs in response to the net driving force in Eq. (10.9). The motion of dislocations is critical for many important materials processes, including the plastic deformation of crystalline materials at relatively low temperatures. Dislocation motion can be decomposed into two basic mechanisms:
• Glide, i.e., movement of the dislocation parallel to the glide plane (also known as the slip plane).
• Climb, i.e., movement of the dislocation orthogonal to the glide plane.
These two mechanisms for dislocation motion are shown graphically in Figure 10.4.


Figure 10.4 Dislocation glide and climb. With dislocation glide, the dislocation motion is parallel to the glide plane. With dislocation climb, the motion is perpendicular to the glide plane. (Image by Brittney Hauke (Penn State)).

Dislocation glide, also known as slip, is a primary mechanism for plastic deformation in crystals. Glide is the result of shear stress acting on a dislocation, as shown in Figure 10.5. Dislocation glide is a volume-conservative process and does not involve the creation or annihilation of

Figure 10.5 Dislocation glide due to shear stress. The shear stress is indicated by the arrows in the figure. Dislocation glide proceeds in a manner similar to the crawling of a caterpillar, where the extra half plane of atoms from the dislocation is like the “hump” in the caterpillar as it crawls from left to right. (Image by Brittney Hauke (Penn State) based on Callister and Rethwisch [1]).


vacancies in the material. Dislocations that can undergo a gliding motion are said to be glissile. On the other hand, a dislocation is termed sessile if dislocation glide is not possible, e.g., if the dislocation is immobilized due to pinning.

In contrast, dislocation climb depends on either the creation or annihilation of vacancies and thus results in a change in the volume of the system. There are two types of dislocation climb:
• Positive dislocation climb, which is induced by compressive forces acting on the system. As shown in Figure 10.6, positive dislocation climb results in the annihilation of a vacancy. Hence, the crystal shrinks in the direction perpendicular to the extra half plane of atoms of the dislocation.
• Negative dislocation climb, which is the result of tensile forces acting on the dislocation. As shown in Figure 10.7, negative dislocation climb results in the creation of a new vacancy, thereby expanding the crystal in the direction perpendicular to the extra half plane of the dislocation. Hence, negative dislocation climb acts to lower the density of the crystal.
Compressive stress in the direction perpendicular to the half plane of the dislocation promotes positive climb, which shrinks the crystal by destroying vacancies, while tensile stress promotes negative climb, which expands the volume of the crystal by creating vacancies. Since glide is caused by shear stress and does not result in a change in vacancy concentration, this is one

Figure 10.6 Positive dislocation climb via vacancy annihilation (compression). The compressive stress is indicated by the arrows in the figure. (a) Positive dislocation climb occurs when a vacancy diffuses to the terminus of the dislocation line. (b) This annihilates the vacancy and causes shrinkage of the crystal.


Figure 10.7 Negative dislocation climb via vacancy generation (tension). The tensile stress is indicated by the arrows in the figure. (a) Negative dislocation climb occurs when an atom jumps to the terminus of the dislocation line. (b) This creates a new vacancy and causes expansion of the crystal.

main difference between dislocation glide and climb. The response to osmotic forces requires dislocation climb rather than glide to bring the local concentration of vacancies closer to equilibrium.

In many real cases, the stress field resulting from the combined driving force on a dislocation, Eq. (10.9), contains a combination of shear and dilatational components. In this situation, the dislocation experiences a more complicated response comprising both glide and climb kinetics. This mixed-mode response can be decomposed into elementary glide and climb components resulting from the shear and dilatational components of the stress tensor, respectively.

The concerted motion of many dislocations simultaneously is known as a dislocation avalanche [7,8]. An avalanche is a discrete event involving the cooperative motion of many dislocations, which occurs abruptly during plastic deformation of crystals. The frequency of dislocation avalanches exhibits a power-law distribution indicative of scale-free character and is observed to be independent of the temperature of the system [7]. The intermittency of dislocation avalanches means that the plastic deformation of single crystals is not a smooth process, but rather proceeds through bursts of avalanche events, which result in sudden jumps in the measured stress-strain curve [8].
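As noted above, the direction of climb driven by an osmotic force follows from comparing the local vacancy site fraction with its equilibrium value, X_eq = exp(−G_V/kT). A minimal sketch of this comparison (the numerical values are illustrative assumptions, not taken from the text):

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K


def equilibrium_vacancy_fraction(g_v_ev, temperature_k):
    """Equilibrium vacancy site fraction, X_eq = exp(-G_V / kT)."""
    return math.exp(-g_v_ev / (K_B * temperature_k))


def climb_mode(local_fraction, g_v_ev, temperature_k):
    """Classify the climb response to a vacancy disequilibrium.

    A vacancy excess favors climb that annihilates vacancies
    (positive climb); a deficit favors climb that creates them
    (negative climb).
    """
    x_eq = equilibrium_vacancy_fraction(g_v_ev, temperature_k)
    if local_fraction > x_eq:
        return "positive climb (vacancy annihilation, crystal shrinks)"
    if local_fraction < x_eq:
        return "negative climb (vacancy creation, crystal expands)"
    return "no osmotic driving force"


# Illustrative values: G_V = 0.9 eV, T = 1000 K
print(f"X_eq = {equilibrium_vacancy_fraction(0.9, 1000.0):.2e}")
print(climb_mode(1e-3, 0.9, 1000.0))  # local vacancy excess
print(climb_mode(1e-6, 0.9, 1000.0))  # local vacancy deficit
```

Exercise (10.1) at the end of this chapter asks for the same kind of comparison at a specified temperature.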

10.3 Driving Forces for Interfacial Motion

While most engineering materials are used in their solid state, processing of these materials often involves precipitation from a vapor or liquid phase.


The vapor-to-solid or vapor-to-liquid transformation, known as condensation, results from deposition of atoms or molecules from a supersaturated vapor. The liquid-to-solid transformation, known as solidification or crystallization, results from cooling a system below its liquidus temperature. These phenomena occur at distinct interfaces in the system, and the motion of these interfaces governs the rate of growth of the new phase. The process of crystallization from a liquid will be covered in detail in Chapters 14 and 15. Here, we shall focus on the motion of crystal-vapor interfaces.

As with dislocation motion, there is an effective driving force on a material interface if movement of the interface will decrease the free energy of the system. The driving forces for interfacial motion can arise from any mechanism affecting the free energy of the system. Typically, these forces arise from two basic sources:
• Volumetric free energy differences between the phases adjacent to the interface.
• Reduction of the interfacial free energy.
The interfacial free energy is given by the product of the surface tension (γ) and area (A) for each interface in the system:

G_surf = Σ_i γ_i A_i,  (10.10)

where the summation is over each type of interface i in the system. Thus, the interfacial energy can be reduced either by reducing the overall interfacial area or by redistributing the interfacial area to the types of interfaces having lower surface tension.
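Eq. (10.10) can be evaluated directly. The sketch below, with hypothetical surface tensions and areas, illustrates the second route: redistributing a fixed total area toward the lower-tension interface lowers G_surf.

```python
def interfacial_free_energy(interfaces):
    """Total interfacial free energy, G_surf = sum_i gamma_i * A_i (Eq. 10.10).

    `interfaces` is a list of (surface_tension, area) pairs,
    one pair per interface type in the system.
    """
    return sum(gamma * area for gamma, area in interfaces)


# Hypothetical system with two interface types: (gamma in J/m^2, A in m^2)
before = [(1.5, 2.0), (0.5, 1.0)]
# Same total area (3 m^2), redistributed toward the low-tension interface
after = [(1.5, 1.0), (0.5, 2.0)]

print(interfacial_free_energy(before))  # 3.5 J
print(interfacial_free_energy(after))   # 2.5 J
```

The total interfacial area is unchanged between the two cases; only its distribution among interface types differs, which is sufficient to lower the total interfacial energy.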

10.4 Motion of Crystal-Vapor Interfaces

One method for growing crystals is condensation of atoms from a supersaturated vapor. In this vapor deposition process, atoms condensing from the vapor phase are incorporated onto the surface of a growing crystal [9]. Hence, the crystal acts as a sink for the atoms deposited from the vapor, and the crystal-vapor interface moves as the crystal expands. The kinetics of the crystal growth process are quite complicated and involve a variety of discrete events at the atomic scale. As detailed by Balluffi et al. [5], these mechanisms include:
• Deposition of an atom from the vapor onto the free surface of the crystal.
• Deposition of an atom onto a ledge of atoms on the surface of the growing crystal.




• Direct incorporation of an atom from the vapor into a kink on the crystal surface.
• Surface diffusion of an adatom (i.e., an atom lying on the crystalline surface) into a kink and subsequent incorporation into the crystal.
• Surface diffusion of an adatom into a ledge site.
• Diffusion of an adatom from a ledge into a kink site.

The interface between the crystal and vapor phases can also evolve in response to disequilibrium of the vacancy concentration within the crystal. For example, vacancies are annihilated when they diffuse to the surface of the crystal. If there is an excess of vacancies in the crystal, then such annihilation would bring the vacancy concentration closer to its equilibrium value. Since the crystal shrinks as a result of vacancy annihilation, this process can also be considered as a form of crystal dissolution.

Surfaces at the crystal-vapor interface have rough and irregular structures, as shown in Figure 10.8. This structural disorder is a result of the random nature of the vapor deposition process. The roughness is especially pronounced at higher temperatures, since configurational entropy becomes a more dominant factor governing the surface structure at high temperatures. Figure 10.9 shows several of the elementary atomic processes that occur as a crystal is grown from a supersaturated vapor. The large set of elementary processes is a result of the many irregular structural features at the crystal-vapor interface.

An interesting example of crystal growth through vapor deposition is the research of Jon-Paul Maria and coworkers at Penn State University, who have developed physical vapor deposition methods for growing thin

Figure 10.8 Various surface features at a crystal-vapor interface. (Figure by Brittney Hauke (Penn State) based on Balluffi et al. [5]).


Figure 10.9 Elementary atomic processes for crystal growth from a supersaturated vapor. These processes include: (a) direct deposition of an atom from the vapor to the surface of the crystal; (b) direct deposition of an atom at a ledge of the growing crystal; (c) direct deposition of an atom at a kink in the growing crystal; (d) transport of a surface atom (i.e., an adatom) into a kink; (e) transport of an adatom to a ledge; and (f) transport of a ledge atom into a kink site. (Image by Brittney Hauke (Penn State) based on Balluffi et al. [5]).

films of entropy-stabilized oxides, i.e., many-component oxide systems that are thermodynamically stabilized through high configurational entropy [10]. Figure 10.10 shows atomic force microscopy (AFM) images of an entropy-stabilized oxide known as J30, which has the chemical formula MgxNixCoxCuxZnxScxO (x ≈ 0.167 mole fraction). The J30 thin film was grown on an MgO substrate in an oxygen atmosphere at different pressures. The AFM images in Figure 10.10 show that the roughness of the thin film increases with O2 pressure. This corresponds to a transition from a smooth, single-phase J30 film at low pressures to a polycrystalline microstructure exhibiting progressively greater surface roughness at higher pressures. This microstructural evolution with varying pressure is a byproduct of the changing kinetic energy of the adatoms on the growing thin film. Deposition at lower pressures yields adatoms with higher kinetic energy, which favors single-phase thin films since the atoms are able to migrate to lower energy positions in the growing crystal. On the other hand, deposition at higher pressures yields adatoms with lower kinetic energy, favoring phase separation and resulting in a fine-grained polycrystalline microstructure.


Figure 10.10 Atomic force microscopy (AFM) images of an entropy-stabilized oxide, MgxNixCoxCuxZnxScxO (x ≈ 0.167), grown on an MgO substrate in atmospheres with (a) 50 mTorr O2, (b) 150 mTorr O2, (c) 200 mTorr O2, and (d) 300 mTorr O2; (e)–(h) respective line traces. The white dashed lines in (a)–(d) indicate the locations of the line traces in (e)–(h). The same scale bar in (a) also applies to (b)–(d). (Figure courtesy Jon-Paul Maria (Penn State), adapted from Ref. [10]).


10.5 Crystalline Interface Motion

The interface between two crystals has additional configurational degrees of freedom compared to a crystal-vapor interface. For example, crystal-crystal interfaces can contain such features as interfacial dislocations and dislocation ledges. Owing to this structural complexity, crystal-crystal interfaces may evolve through a wide variety of atomic mechanisms. These mechanisms can be classified as either conservative or nonconservative. Conservative interfacial motion occurs in the absence of a net diffusion flux of any component in the system, i.e., in the absence of long-range diffusion to or from the moving interface. In contrast, nonconservative motion occurs when the motion of the interface is coupled to a long-range diffusional flux of one or more components in the system.

The kinetics of nonconservative interfacial motion can be limited by various factors. In the case of diffusion-limited motion, the rate of interfacial motion is governed by diffusion kinetics, i.e., the rate at which the relevant components of the system are transported to or from the interface. The interfacial motion can also be source-limited, when the kinetics are limited by the rate at which the required species become available. Finally, the interfacial motion can be sink-limited, i.e., governed by the kinetics of the incorporation of the species into the interface. Of these cases (diffusion-limited, source-limited, or sink-limited), it is always the slowest of the required processes that limits the net kinetics of the interfacial motion.

The presence of an embedded second phase can also limit the rate of interfacial motion. This phenomenon, shown in Figure 10.11, is known as pinning. Here, an interface between two matrix grains is in contact with an embedded spherical particle. Despite the presence of a driving force to move the interface past the particle, the interfacial energy associated with the second phase causes the interface of the two matrix crystals to become pinned at the embedded particle, hindering the evolution of the crystal-crystal interface.

Figure 10.11 Pinning of a grain boundary by a second-phase particle.


Dislocation motion can also be inhibited by pinning from other dislocations. For example, if there is a high concentration of dislocations in a material, the activation barrier for dislocation motion increases, since it is difficult for the dislocations to move past each other [11]. This dislocation hardening, also known as work hardening or strain hardening, is a common method for strengthening metals, albeit at the cost of reduced ductility. With dislocation hardening, new dislocations are purposely introduced into the metal through hammering or other metalworking processes. When a sufficiently high concentration of dislocations is incorporated into the crystal structure, the metal becomes stronger but less ductile.
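The strengthening effect of a high dislocation density is commonly quantified by the classical Taylor relation, τ = αGb√ρ, a standard result from the dislocation literature that is not derived in this chapter. A minimal sketch, with illustrative values for a generic metal (all numbers are assumptions, not from the text):

```python
import math


def taylor_flow_stress(alpha, shear_modulus_pa, burgers_m, density_m2):
    """Taylor hardening: flow stress tau = alpha * G * b * sqrt(rho)."""
    return alpha * shear_modulus_pa * burgers_m * math.sqrt(density_m2)


# Illustrative parameters for a generic metal
ALPHA = 0.3       # dimensionless constant of order unity
G = 80e9          # shear modulus, Pa
B = 2.5e-10       # Burgers vector magnitude, m

annealed = taylor_flow_stress(ALPHA, G, B, 1e10)  # low dislocation density
worked = taylor_flow_stress(ALPHA, G, B, 1e15)    # heavily cold-worked

# A 10^5 increase in density raises the flow stress by sqrt(10^5) ~ 316x
print(f"annealed: {annealed / 1e6:.1f} MPa, cold-worked: {worked / 1e6:.1f} MPa")
```

The square-root dependence captures the idea in the text: the more dislocations present, the harder it is for any one of them to move past the others.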

10.6 Summary

The driving forces for dislocation motion include mechanical, osmotic, and curvature forces. Dislocation motion itself occurs via two mechanisms: dislocation glide and dislocation climb. Glide is movement along the glide plane and occurs in response to a shear force. Climb is movement orthogonal to the glide plane and comes in two forms: positive dislocation climb results in the annihilation of vacancies due to a compressive stress, and negative dislocation climb results in the creation of vacancies due to tensile stress. As such, dislocations can act as either a source or sink for vacancies. Whereas dislocation glide conserves the volume of the crystal, dislocation climb results in either a contraction (positive climb) or expansion (negative climb) of the system.

Interfacial motion is the result of driving forces related to volumetric and interfacial contributions to free energy. Interfacial motion can be either conservative (decoupled from a long-range diffusion flux) or nonconservative (coupled to a long-range diffusion flux). Interfacial motion can occur through a variety of mechanisms, which can be diffusion-limited, source-limited, or sink-limited. Second-phase pinning can also act to slow the motion of crystal-crystal interfaces.

Exercises

(10.1) Consider an elemental metal with a free energy of vacancy formation given by G_V = 1.0 eV.
(a) What is the equilibrium vacancy concentration at 800 °C?
(b) Near a dislocation, the fraction of lattice positions occupied by vacancies is 10⁻⁵. Is there a driving force for motion of the dislocation at 800 °C?

(c) What type of dislocation motion is favored under these conditions? Why?
(d) Does the crystal shrink or grow as a result of this dislocation motion? Why?
(e) Given a local vacancy concentration of 10⁻⁵, at what temperature does the osmotic driving force for dislocation motion vanish?
(10.2) Discuss three strategies for reducing the plasticity of a single crystal. Explain the scientific reasoning behind these strategies.
(10.3) In your own words, explain how dislocation motion can reduce the free energy of a crystalline system in response to each of the following driving forces:
(a) Mechanical force.
(b) Osmotic force.
(c) Curvature force.
(10.4) Consider the elementary atomic processes for crystal growth from a supersaturated vapor, as depicted in Figure 10.9.
(a) Which of these elementary processes do you expect to be dominant at high temperatures? Why?
(b) Which of these elementary processes do you expect to be dominant at low temperatures? Why?
(10.5) Search the scientific literature for a study of dislocation avalanches in a crystalline system. Provide the reference.
(a) Why is plasticity important in this system?
(b) What is the stress-strain response in this system? Draw an example stress-strain curve showing the impact of dislocation avalanches on the stress-strain response.
(c) What are the factors governing the size and frequency of the dislocation avalanches in this system?
(10.6) Is it possible to strengthen a glass through dislocation hardening? Why or why not?

References
[1] W. D. Callister and D. G. Rethwisch, Materials Science and Engineering: An Introduction, 9th ed., John Wiley & Sons (2018).
[2] S. Trolier-McKinstry and R. E. Newnham, Materials Engineering: Bonding, Structure, and Structure-Property Relationships, Cambridge University Press (2018).
[3] M. Peach and J. S. Koehler, "The Forces Exerted on Dislocations and the Stress Fields Produced by Them," Phys. Rev. 80, 436 (1950).


[4] J. Lothe and J. P. Hirth, "Dislocation Climb Forces," J. Appl. Phys. 38, 845 (1967).
[5] R. W. Balluffi, S. M. Allen, and W. C. Carter, Kinetics of Materials, John Wiley & Sons (2005).
[6] P. M. Anderson, J. P. Hirth, and J. Lothe, Theory of Dislocations, 3rd ed., Cambridge University Press (2017).
[7] T. Richeton, J. Weiss, and F. Louchet, "Dislocation Avalanches: Role of Temperature, Grain Size and Strain Hardening," Acta Mater. 53, 4463 (2005).
[8] F. F. Csikor, C. Motz, D. Weygand, M. Zaiser, and S. Zapperi, "Dislocation Avalanches, Strain Bursts, and the Problem of Plastic Forming at the Micrometer Scale," Science 318, 251 (2007).
[9] A. Anders, "A Structure Zone Diagram Including Plasma-Based Deposition and Ion Etching," Thin Solid Films 518, 4087 (2010).
[10] G. N. Kotsonis, C. M. Rost, D. T. Harris, and J.-P. Maria, "Epitaxial Entropy-Stabilized Oxides: Growth of Chemically Diverse Phases via Kinetic Bombardment," MRS Commun. 8, 1371 (2018).
[11] D. Chen, L. L. Costello, C. B. Geller, T. Zhu, and D. L. McDowell, "Atomistic Modeling of Dislocation Cross-Slip in Nickel using Free-End Nudged Elastic Band Method," Acta Mater. 168, 436 (2019).


CHAPTER 11

Morphological Evolution in Polycrystalline Materials

11.1 Driving Forces for Surface Morphological Evolution

Every real material has at least one interface, viz., its free surface. The morphology of a material is a description of both its macroscopic shape and its internal microstructure. The morphology can evolve in response to driving forces that seek to lower the free energy of the system. Such morphological evolution encompasses changes to the free surface of the material and its internal interfaces, both of which contribute excess free energy to the system. These interfaces are subject to two types of forces that can yield morphological changes:
• Capillary forces, which lead to a decrease in the total free energy associated with the material interfaces, i.e., through a reduction of the product of the interfacial area and surface tension, as in Eq. (10.10).
• Applied forces, from externally applied sources that perform work on the system.

Following Eq. (10.10), the total interfacial contribution to the free energy of a material equals the summation of the free energies for each type of interface. For a given interface, the free energy is equal to the product of the surface tension of that interface, γ, with its interfacial area, A. The surface tension, γ, is a material property that depends on the nature of the bonding at an interface between two adjacent phases [1]. For anisotropic materials, the surface tension also depends on the crystallographic plane exposed at the interface. Hence, capillary forces can lead either to reduction in interfacial area or reduction in the surface tension due to changes in the geometrical or crystallographic properties of the interfaces. Both effects yield reduction in the total surface energy of a system and hence are important in determining the morphological evolution of a material.
Materials Kinetics, ISBN 978-0-12-823907-0, https://doi.org/10.1016/B978-0-12-823907-0.00018-2. © 2021 Elsevier Inc. All rights reserved.


11.2 Morphological Evolution of Isotropic Surfaces

Let us begin with the case of materials having isotropic surfaces, where the analysis is simplified since the surface tension, γ, of the free surface is independent of crystallographic direction. An isotropic surface with local changes in its curvature will evolve toward a geometry with a constant mean curvature (i.e., a sphere), which results from the desire of the system to minimize its surface energy. The kinetics of the surface evolution depend on the particular atomic transport mechanisms that achieve the necessary surface motion. These mechanisms can include vapor transport, surface diffusion, or diffusion through the bulk crystal.

As a simple example, let us consider an isotropic material having an undulating surface morphology given by the following function:

y − y0 = h(x)  or  F(x, y) = y − h(x) = constant.  (11.1)

This system will evolve toward elimination of the surface undulations, i.e., toward a geometry having a constant curvature. Let us assume that the migration of atoms is dominated by surface diffusion. The flux of the surface atoms, J_S, is proportional to the local gradient of the surface curvature [2]:

J_S = −(γ_S D_S / kT) ∇_surf κ,  (11.2)

where γ_S is the surface tension of the free surface, D_S is the diffusion coefficient of atoms migrating along the surface, k is Boltzmann's constant, T is absolute temperature, and ∇_surf κ is the two-dimensional gradient of the surface curvature. The thermodynamic driving force in Eq. (11.2) for elimination of the surface undulations is the product of the surface tension, γ_S, with the gradient of the surface curvature, ∇_surf κ. As shown in Figure 11.1,

Figure 11.1 A free surface will evolve toward a morphology with uniform curvature. Here, the morphology is considered to evolve via surface diffusion, as in Eq. (11.2). (Image by Brittney Hauke (Penn State) based on Balluffi et al. [2]).


the resulting flux acts to reduce and eventually eliminate surface undulations so that a constant curvature is achieved. Note that the flux of surface atoms vanishes when the gradient of the surface curvature, ∇_surf κ, also vanishes in Eq. (11.2).

Grooving of an interface can occur when a grain boundary intersects the free surface of a material [3]. If the grain boundary has an isotropic surface tension of γ_B and the surface tension of the free surface is γ_S, then a groove will form at the grain boundary to achieve a balance of the associated capillary forces. The equilibrium dihedral angle ψ is determined by the well-known Young's equation [1]:

cos(ψ/2) = γ_B / (2γ_S).  (11.3)

Figure 11.2 shows an example of grooving resulting from such balancing of the capillary forces.

Another example of morphological evolution in response to capillary forces is Plateau-Rayleigh instability, also simply called Rayleigh instability, which causes a phase with a cylindrical morphology to decompose into a series of spheres [4]. Let us consider an isotropic cylinder having a radius of R0. The cylinder can reduce its total surface free energy if it evolves into a row of spheres having radii greater than 3R0/2. Hence, phases with cylindrical features can be morphologically unstable. However, for the transition from cylindrical to spherical morphology to occur, Plateau-Rayleigh instability requires that the total surface energy must continuously decrease as the cylinder evolves into a sequence of isolated spheres. This condition of continuously decreasing surface energy is satisfied only if the Rayleigh instability condition is met:

λ > λ_crit = 2πR0,  (11.4)

Figure 11.2 Grooving at surface/grain boundary intersections, which results from a balancing of capillary forces. The resulting dihedral angle, ψ, follows Young's equation in Eq. (11.3). (Image by Brittney Hauke (Penn State)).
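Young's equation, Eq. (11.3), is straightforward to evaluate. A short sketch with hypothetical surface tensions, illustrating that a higher grain boundary tension produces a deeper groove (i.e., a smaller dihedral angle):

```python
import math


def dihedral_angle_deg(gamma_b, gamma_s):
    """Equilibrium groove dihedral angle from Young's equation:
    cos(psi / 2) = gamma_B / (2 * gamma_S)   (Eq. 11.3)."""
    return 2.0 * math.degrees(math.acos(gamma_b / (2.0 * gamma_s)))


# Hypothetical surface tensions (J/m^2) for grain boundary vs. free surface
print(dihedral_angle_deg(0.5, 1.0))  # shallow groove, psi ~ 151 degrees
print(dihedral_angle_deg(1.5, 1.0))  # deeper groove, psi ~ 83 degrees
```

In the limit γ_B → 0 the surface stays flat (ψ → 180°), while γ_B → 2γ_S drives ψ → 0, i.e., the boundary would be completely wetted by the groove.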


where λ is the wavelength of the perturbation on the cylinder, as visualized in Figure 11.3. Hence, λ_crit = 2πR0 is the critical wavelength above which the surface energy can decrease continuously as the cylinder transitions into a series of spheres. In other words, the cylinder is stable with respect to perturbations having a wavelength less than the circumference of the cylinder, but unstable with respect to perturbations having a wavelength greater than or equal to λ_crit. A common everyday example of Plateau-Rayleigh instability is a stream of water flowing from a faucet, which will break into droplets to reduce the total surface energy.

The path of morphological evolution of a perturbed cylinder is different in the case of diffusion by vapor transport versus surface diffusion. As depicted in Figure 11.4, surface diffusion exhibits a kinetic wavelength corresponding to a maximum rate of morphological evolution. Hence, for Plateau-Rayleigh instability via the surface diffusion mechanism, both thermodynamic (λ_crit) and kinetic (λ_max) wavelengths govern the morphological evolution process. This is a purely geometric effect resulting from atomic transport occurring only along the free surface of the material. The limitation does not apply to diffusion by vapor transport, where only the thermodynamic Rayleigh wavelength is important, since there is no additional kinetic wavelength limiting the rate of morphological evolution.

Figure 11.3 Plateau-Rayleigh instability of a cylinder, which can continuously decrease its surface energy by evolving to a series of spheres, provided that the perturbation causing this transition has a wavelength longer than the circumference of the initial cylinder. (Image by Brittney Hauke (Penn State)).
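As a sanity check on the energetics, one can compare surface areas before and after a volume-conserving break-up of a cylinder segment of length λ into a single sphere. Comparing only the end states, the area decreases once the sphere radius exceeds 3R0/2 (equivalently, λ > 4.5R0); the stricter criterion λ_crit = 2πR0 of Eq. (11.4) additionally requires that the energy decrease continuously along the entire path. A short sketch with R0 = 1:

```python
import math


def breakup_area_change(radius, wavelength):
    """Surface-area change when a cylinder segment of length `wavelength`
    and radius `radius` pinches off into one sphere of equal volume.

    Returns (area_change, sphere_radius); a negative area change means
    the end state has lower surface energy.
    """
    a_cyl = 2.0 * math.pi * radius * wavelength
    # Volume conservation: pi * R^2 * lambda = (4/3) * pi * r^3
    sphere_r = (3.0 * radius**2 * wavelength / 4.0) ** (1.0 / 3.0)
    a_sph = 4.0 * math.pi * sphere_r**2
    return a_sph - a_cyl, sphere_r


# Short-wavelength perturbation (lambda = 4.0 * R0): area would increase
print(breakup_area_change(1.0, 4.0))

# At lambda_crit = 2*pi*R0: area decreases and the sphere radius exceeds 3*R0/2
d_a, r = breakup_area_change(1.0, 2.0 * math.pi)
print(d_a < 0.0, r > 1.5)  # True True
```

The sketch confirms the statement in the text that the resulting spheres must have radii greater than 3R0/2 for the break-up to be energetically favorable.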


Figure 11.4 Thermodynamic and kinetic wavelengths for Plateau-Rayleigh instability of a cylinder. Following the surface diffusion mechanism, the maximum rate of morphological change occurs at the kinetic wavelength of λ_max = √2 λ_crit. With diffusion by vapor transport, there is no kinetic wavelength that limits the rate of morphological evolution. (Image by Brittney Hauke (Penn State)).
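For break-up via surface diffusion, the linearized growth rate of a perturbation with wavenumber k on a cylinder of radius R0 scales as ω(k) ∝ k²(1/R0² − k²), a standard result of the Nichols-Mullins analysis stated here without derivation. Maximizing this rate numerically recovers the kinetic wavelength λ_max = √2 λ_crit:

```python
import math


def growth_rate(k, r0=1.0):
    """Perturbation growth rate (up to a positive material constant) for
    Plateau-Rayleigh break-up via surface diffusion:
    omega(k) ~ k^2 * (1/r0^2 - k^2)."""
    return k**2 * (1.0 / r0**2 - k**2)


# Scan wavenumbers on a fine grid and locate the fastest-growing mode
ks = [i * 1e-4 for i in range(1, 20000)]
k_max = max(ks, key=growth_rate)

lam_max = 2.0 * math.pi / k_max
lam_crit = 2.0 * math.pi  # lambda_crit = 2*pi*R0 with R0 = 1
print(lam_max / lam_crit)  # ~ sqrt(2) ~ 1.414
```

Note also that ω(k) > 0 only for k < 1/R0, i.e., for λ > 2πR0, which is consistent with the thermodynamic condition of Eq. (11.4).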

11.3 Evolution of Anisotropic Surfaces

With anisotropic materials, the surface tension depends on which face of the crystal is exposed at each interface. Hence, there are additional degrees of freedom for anisotropic crystals compared to isotropic crystals. Since different faces of the crystal have different contributions to the surface free energy, some inclinations are less thermodynamically stable and will be replaced by other inclinations to lower the total free energy of the system. This process is called faceting and results in a greater total surface area of the crystal, which nevertheless lowers the free energy of the system since the lower surface tension inclinations are preferred [5]. The free energy of the faceted surface is G_fac = γ_1 A_1 + γ_2 A_2 + γ_3 A_3, where γ_1,2,3 are the surface tensions of the individual facets and A_1,2,3 are their corresponding areas. Two examples of free energy reduction by faceting a surface are shown in Figure 11.5.

Owing to their varying surface tension along different crystallographic planes, anisotropic crystals can exhibit morphological changes that depart from (rather than approach) a spherical shape. This contrasts with isotropic systems, where surface energy is minimized by forming a perfect sphere, i.e., by eliminating gradients in surface curvature. An example of anisotropic grain growth is shown in Figure 11.6, where an initially spherical grain grows preferentially along the direction that promotes lower surface energy. As shown in the figure, the fastest-growing inclinations of the crystal ultimately lead to corner formation, even when starting from an initially spherical grain [2].


Figure 11.5 Reduction of surface energy by faceting of an anisotropic surface for systems with (a) two stable facets (1 and 2) and (b) three stable facets. (Image by Brittney Hauke (Penn State)).

Figure 11.6 Growth of an initially spherical grain with anisotropic surface energy. As the morphology evolves over time, the spherical grain eventually forms corners due to anisotropic growth. (Image by Brittney Hauke (Penn State)).

The same considerations also apply to the morphological evolution during dissolution of an anisotropic crystal, i.e., the opposite of crystal growth. During the dissolution process, the crystal inclinations having the faster dissolution rates remain, while the slower dissolving inclinations disappear into the corners of the crystal [2]. This anisotropic nature of dissolution is shown in Figure 11.7.

11.4 Particle Coarsening

Particle coarsening refers to an increase in the spatial dimensions of grains embedded within a matrix phase [6]. The driving force for particle coarsening is a reduction in the total interfacial energy of the system. For example, when particles having a distribution of sizes are embedded in a matrix phase, there is a net flux of matter from the smaller particles to the larger ones, leading to an increase of the average particle size. This tendency for larger


Figure 11.7 Morphological evolution during anisotropic dissolution. The inclinations having faster dissolution rates remain, while the slower dissolving inclinations disappear into the corners. (Image by Brittney Hauke (Penn State)).

particles to grow at the expense of smaller ones acts to decrease the total interfacial free energy in the system. This process is also known as Ostwald ripening and involves a net diffusion of material from the smaller to the larger particles, where the diffusion takes place through the matrix phase [7].

Particle coarsening is typically a diffusion-limited process, since the interfaces act as effective sources and sinks for point defects in the material, and the concentration of diffusant remains relatively constant within the particles. Hence, the rate of particle coarsening is limited by the diffusion rate between the neighboring grains. Figure 11.8 shows a schematic of the diffusion-limited nature of grain coarsening. Alternatively, in the case of source-limited (or sink-limited) particle coarsening, the interfaces surrounding the particles are poor sources (or sinks) of the diffusant. In such cases, the coarsening rate is governed by the rate at which the relevant species can be made available at the interfaces or be incorporated into the growing phase.

Experimentally, particle coarsening is quantified by measuring the particle size distribution of the material as a function of time. This provides information on the evolution of both the mean particle size and the higher order moments of the distribution. The most widely used theoretical description of diffusion-limited particle coarsening is Lifshitz-Slyozov-Wagner (LSW) theory, which considers a dilute solution of particles distributed in a matrix phase [7]. In LSW theory, the volume fraction of the second-phase particles is assumed to be infinitesimally small. LSW theory predicts that the average particle radius, ⟨R⟩, increases with the cube root of time, i.e., ⟨R⟩ ∝ t^(1/3), following

⟨R⟩³ − ⟨R⟩0³ = (8γ c∞ V² D / 9k NA T) t,  (11.5)

Figure 11.8 Diffusion-limited particle coarsening, in which mass from smaller grains diffuses to larger grains through a matrix phase having a constant concentration of the diffusant, ⟨c⟩. As a result of particle coarsening, the larger grains grow at the expense of the smaller grains. Here, the smaller grains act as sources of the diffusant, and the larger grains act as sinks. (Image by Brittney Hauke (Penn State)).

where ⟨R⟩0 is the average initial particle radius, γ is the surface tension between the particle and matrix phases, c∞ is the solubility of the particle material in the matrix, V is the molar volume of the particle material, k is Boltzmann's constant, NA is Avogadro's number, T is absolute temperature, and D is the diffusion coefficient of the particle material through the matrix phase.

The most famous everyday example of Ostwald ripening is the coarsening of ice crystals in ice cream. Over time, larger ice crystals grow at the expense of smaller ones. This is what causes the texture of ice cream to become grittier over time.
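Eq. (11.5) gives the familiar t^(1/3) coarsening law. Lumping the material constants 8γc∞V²D/(9kNAT) into a single rate constant K (a hypothetical value here, since no specific material is considered), a short sketch:

```python
def mean_radius(t, r0=0.0, rate_k=1.0):
    """LSW coarsening: <R>^3 - <R>0^3 = K * t  (Eq. 11.5), with the
    material constants lumped into the single rate constant `rate_k`."""
    return (r0**3 + rate_k * t) ** (1.0 / 3.0)


# For r0 = 0, increasing the time by a factor of 8 doubles the mean radius,
# as expected for <R> ~ t^(1/3)
print(mean_radius(1.0), mean_radius(8.0))  # ~ 1.0 and ~ 2.0
```

The cube-root slowdown is why coarsening is rapid in fresh microstructures but becomes progressively more sluggish as the mean particle size grows.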

11.5 Grain Growth

Similar to particle coarsening, grain growth is a kinetic phenomenon where the average size of grains in a polycrystalline material increases with time, e.g., as shown in Figure 11.9. To differentiate this process from particle coarsening, the term "grain growth" is used when there is no distinct matrix phase in the material. In practice, grain growth is typically facilitated by subjecting the system to heat treatment, i.e., annealing, since growth kinetics can be exponentially accelerated with increasing temperature.


Morphological Evolution in Polycrystalline Materials


Figure 11.9 Example of grain growth in a polycrystalline system, where the average particle size increases with time. The morphological evolution proceeds from left to right in the figure. (Image by Brittney Hauke (Penn State)).

The study of grain growth is greatly simplified in two dimensions. Several examples of two-dimensional grain evolution are shown in Figure 11.10. In part (a) of the figure, two grains lose a side as two vertices converge, after which a new interface is created between the other two neighboring grains. In part (b), the central three-sided grain shrinks and eventually disappears, causing the neighboring grains to each lose one interface. In part (c), the central four-sided grain vanishes, causing two neighboring grains to each lose a side and the other two remaining grains to form a new interfacial boundary. Finally, in part (d) of the figure, the central five-sided grain shrinks and eventually disappears. A common theme is that two-dimensional grains with fewer than six sides tend to shrink and disappear, while grains having more than six sides tend to grow. This is the famous von Neumann-Mullins law for grain growth in two dimensions [8–11], expressed mathematically in Eq. (11.8). The key to understanding grain growth is that an isolated, isotropic grain having a uniform surface tension, γ, experiences a capillary force

Figure 11.10 Examples of two-dimensional grain evolution. (Image by Brittney Hauke (Penn State)).


directed toward its concave side. The resulting change in area, A, of the grain boundary segment is

dA/dt = −m_B γ ∫_GB dθ,    (11.6)

where m_B is the mobility of the grain boundary and θ is the local angle. Following Eq. (11.6), grains will grow in regions that have concave curvature and shrink in regions that have convex local curvature. This is shown graphically for a single grain in Figure 11.11.

Now let us consider that the grain is one of many grains in a two-dimensional polycrystalline material. If a grain has N sides, then the integral can be written as a sum of contributions from each grain boundary segment:

dA/dt = −m_B γ ( ∫_seg 1 dθ + ∫_seg 2 dθ + ⋯ + ∫_seg N dθ ) = −m_B γ (2π − NΔθ),    (11.7)

where Δθ is the angle made by a line normal to the grain boundary with the plane of the sample. With an equilibrium vertex angle of 2π/3 radians, the equilibrium normal angle is Δθ = π/3, such that Eq. (11.7) becomes

dA/dt = m_B γ (π/3)(N − 6).    (11.8)
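Because dA/dt in Eq. (11.8) is constant for a grain with a fixed number of sides N, the law is easy to integrate numerically. The sketch below (with assumed, dimensionless values for m_B and γ) reproduces the behavior the law predicts: grains with N < 6 shrink to zero area, N = 6 grains are static, and N > 6 grains grow.

```python
# Numerical sketch of the von Neumann-Mullins law, Eq. (11.8):
#   dA/dt = m_B * gamma * (pi/3) * (N - 6)
# The mobility and surface tension values are illustrative assumptions.

import math

def grain_area_history(n_sides, a0, m_b, gamma, dt, steps):
    """Integrate dA/dt for an N-sided 2D grain until it vanishes or steps run out."""
    rate = m_b * gamma * (math.pi / 3.0) * (n_sides - 6)
    areas = [a0]
    for _ in range(steps):
        a = max(areas[-1] + rate * dt, 0.0)  # a grain's area cannot go negative
        areas.append(a)
        if a == 0.0:
            break
    return areas

if __name__ == "__main__":
    for n in (4, 6, 11):
        hist = grain_area_history(n, a0=1.0, m_b=1.0, gamma=0.1, dt=0.1, steps=200)
        print(f"N = {n:2d}: A(0) = {hist[0]:.2f} -> A(end) = {hist[-1]:.3f}")
```

The linear-in-time area change also makes clear that a shrinking grain disappears in a finite time, A(0)/|dA/dt|.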

Figure 11.11 Growth or shrinkage of a single grain. The grain experiences capillary forces to grow in regions of concave curvature and shrink in regions of convex curvature. (Image by Brittney Hauke (Penn State)).


The equilibrium vertex angle of 2π/3 arises from the fact that in a two-dimensional crystal, each vertex is shared by three grains. The resulting Eq. (11.8) is the von Neumann-Mullins law equation for two-dimensional grain growth [8–11]. According to the von Neumann-Mullins law:
• Grains having more than six sides (i.e., N > 6) will grow (i.e., they have a positive dA/dt).
• Grains having fewer than six sides (i.e., N < 6) will shrink (i.e., negative dA/dt) and eventually disappear.
• Grains having exactly six sides (i.e., N = 6) are stable and will neither grow nor shrink.
The von Neumann-Mullins law for grain growth can be understood in terms of the angles formed at the vertices between grains. As shown in Figure 11.12, grains with fewer than six sides tend to have interior vertex angles less than the equilibrium value of 2π/3 radians (= 120°). As a result, the grain boundaries tend to be curved outward, i.e., with the concave side directed into the grain. Since the direction of the capillary force is toward the concave direction, grains with N < 6 will tend to shrink until they disappear. On the other hand, grains having more than six edges tend to have internal vertex angles greater than 120°. Therefore, grain boundaries tend to be curved inward, i.e., with the concave side facing outward from the grain, as shown for N = 11 in Figure 11.12. As a result,

Figure 11.12 Two-dimensional grains with varying number of edges, N. Grains with N < 6 tend to have interior angles less than 120°, causing the grain boundaries to be curved outward. Since the capillary forces act toward the direction of concavity, the grains will evolve toward straightening the grain boundaries. This causes grains with N < 6 to shrink and eventually disappear. On the other hand, grains with N > 6 tend to have interior angles greater than 120°, causing the grain boundaries to be curved inward. Since the capillary forces again act toward the direction of concavity, grains with N > 6 will grow in order to straighten the grain boundaries. Grains with N = 6 satisfy the equilibrium vertex angle of 120° with straight grain boundaries. In the absence of grain boundary curvature, there is no capillary force to either shrink or grow grains with N = 6, so these grains remain stable. (Image by Brittney Hauke (Penn State)).


grains with N > 6 will grow in response to the outward capillary force. A perfect balance is achieved with N = 6, since six-sided grains can achieve the ideal vertex angle of 120° while having straight grain boundaries. In the absence of grain boundary curvature, there is no driving force acting to shrink or grow six-sided grains. Hence, grains with N = 6 are stable.

The geometry associated with grain growth in three dimensions is much more difficult to visualize and analyze than in two dimensions. The problem of grain growth in three-dimensional polycrystalline materials remained unsolved until recently, when MacPherson and Srolovitz [12,13] derived a solution to the problem in arbitrary dimensions, d. Their key result is that the rate of change of the volume of a grain, V_d, in d dimensions is

dV_d/dt = −2π m_B γ [ H_(d−2)(D_d) − (1/6) H_(d−2)(D_(d−2)) ],    (11.9)

where H_(d−2) is the Hadwiger (d − 2)-measure from geometric probability [14], and D_d is the domain of the grain in d dimensions. The exact solution by MacPherson and Srolovitz involves projecting the higher-dimensional system into a lower-dimensionality space. For details, we refer the interested reader to Refs. [12,13]. Figure 11.13 shows an example of three-dimensional grain growth and its cross-section in two dimensions.

Figure 11.13 (a) A two-dimensional grain growth microstructure, (b) a cross-section from a three-dimensional grain growth microstructure, and (c) a three-dimensional grain growth microstructure. (Figure and caption reproduced with permission from Ref. [13], J. K. Mason, E. A. Lazar, R. D. MacPherson, and D. J. Srolovitz, “Geometric and Topological Properties of the Canonical Grain-Growth Microstructure,” Phys. Rev. E 92, 063308 (2015). Copyright 2015 by the American Physical Society).


11.6 Diffusional Creep

The combination of capillary and applied forces can introduce changes to the macroscopic shape of a material through plastic deformation. In a polycrystalline material, this process is known as diffusional creep and involves the diffusion of vacancies or other species to induce morphological changes. Diffusional creep and sintering (see next section) both result from the same underlying driving forces, viz., capillarity and applied stresses. The difference is that sintering is associated with permanent densification of a porous body, whereas diffusional creep does not necessarily involve densification. With both processes, a driving force for mass transfer exists as a result of capillary and applied forces. This results in a flux of the diffusing species, yielding a permanent (i.e., plastic) deformation. Diffusional creep and sintering are both assisted by elevated temperatures, which act to accelerate the diffusion processes.

There are two fundamental types of diffusional creep [15,16]. If the mass flux occurs primarily via the crystalline matrix, then the process is called Nabarro-Herring creep. On the other hand, if the mass flux acts along the grain boundaries themselves, then the process is called Coble creep. These diffusion processes are often accompanied by grain boundary sliding, i.e., a thermally activated sliding of grains across each other in response to a shear stress [17,18], which also leads to permanent deformation of the polycrystalline material.

The dominant deformation mechanism depends on the material chemistry and microstructure, as well as the temperature of the system and the nature of the applied stress. In 1972, Ashby introduced the concept of a deformation mechanism map to convey the primary deformation mechanisms in a polycrystalline material as a function of applied stress and temperature [19].
Figure 11.14 shows the deformation mechanism map for polycrystalline silver with a grain size of 32 μm and a constant strain rate of 10⁻⁸ s⁻¹. Each region in the figure indicates a regime of applied stress and temperature where a particular kinetic mechanism dominates the creep process. For low stresses at sufficiently low temperatures, the deformation is elastic, i.e., the strain is fully recovered after removal of the applied stress. Diffusional creep becomes activated at higher temperature, leading to plastic deformation. Nabarro-Herring creep occurs at temperatures closest to the melting point. Coble creep is dominant at temperatures just below the Nabarro-Herring regime, since grain boundary diffusion has a lower activation barrier


Figure 11.14 Deformation mechanism map for polycrystalline silver with a grain size of 32 μm. Here, σ is the applied stress, μ_shear is the shear modulus, and T_m is the melting temperature. The strain rate is 10⁻⁸ s⁻¹. (Image by Brittney Hauke (Penn State) based on data from Ashby [19]).

compared to diffusion through the crystalline grains. As the applied stress increases, the primary deformation mode shifts to dislocation motion rather than diffusional creep (see Chapter 10).
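The ordering of the Coble and Nabarro-Herring regimes with temperature can be illustrated with a simple Arrhenius comparison. The activation energies and prefactors below are illustrative assumptions (not data for silver); the point is only that a pathway with a lower activation barrier but smaller prefactor dominates at low temperature and is overtaken at high temperature.

```python
# Sketch: why Coble creep (grain boundary diffusion) dominates at lower
# temperatures than Nabarro-Herring creep (lattice diffusion). Both creep
# rates inherit the Arrhenius form of the underlying diffusivity,
#   D = D0 * exp(-Q / (k*T)),
# and the grain boundary pathway has the lower activation barrier Q.
# The Q and D0 values below are illustrative assumptions, not data for silver.

import math

K_EV = 8.617333262e-5  # Boltzmann constant in eV/K

def arrhenius(d0, q_ev, temperature):
    """Arrhenius diffusivity D = D0 * exp(-Q / kT)."""
    return d0 * math.exp(-q_ev / (K_EV * temperature))

if __name__ == "__main__":
    lattice_d0, lattice_q = 1.0, 2.0      # Nabarro-Herring pathway (assumed)
    boundary_d0, boundary_q = 1e-4, 1.2   # Coble pathway (assumed)
    for t in (600.0, 900.0, 1200.0):
        d_lat = arrhenius(lattice_d0, lattice_q, t)
        d_gb = arrhenius(boundary_d0, boundary_q, t)
        dominant = "Coble" if d_gb > d_lat else "Nabarro-Herring"
        print(f"T = {t:6.1f} K: D_gb / D_lattice = {d_gb / d_lat:.3e} -> {dominant}")
```

With these assumed numbers, the dominant pathway flips from Coble to Nabarro-Herring between 900 K and 1200 K, mirroring the ordering of the creep regimes in Figure 11.14.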

11.7 Sintering

Sintering is a kinetic process that converts a porous, granular body into one with a higher density (lower porosity) and greater structural integrity [20,21]. The initial "green" body typically consists of a compacted mass of particles or powders, often with high porosity. Traditional solid-state sintering typically occurs through a combination of thermal treatment and pressurization, but without melting the material. The improved structural integrity of the sintered body arises from densification and intergranular neck growth. The former is the result of mass transport that reduces porosity, and the latter is the result of mass transport that increases the neck size between grains.

The fundamental driving force for sintering is capillarity, i.e., a reduction in the total surface energy of the system. This is often supplemented by applied pressure to aid in the sintering process. The mass transport mechanisms that enable sintering are usually solid-state processes. For materials that are difficult to sinter via traditional solid-state routes, liquid phase sintering can be utilized [22]. With liquid phase sintering, a low-melting point phase is added to the system. The liquid phase fills the pores upon melting and is chosen such that the solid phase has at least partial solubility in the molten liquid. This enables


transport of solid material into pores by first dissolving into solution and then precipitating at the interfaces around the pore, thereby densifying the system. As shown in Figure 11.15, neck growth is one of the key mechanisms for sintering. Neck growth can occur via a variety of different mass transport pathways during the sintering process. As depicted in Figure 11.16, these mass transport mechanisms include grain boundary diffusion, surface diffusion, volume diffusion through the crystal, and viscous flow. To induce densification during sintering, porosity must shrink as a result of the mass transport processes. The dominant mass transport mechanisms for sintering depend on a variety of factors, including the material chemistry, microstructure, surface tension, temperature, pressure, and atmosphere. The primary mechanism may also change during the sintering process as the microstructure of the material evolves. A sintering mechanism map, such as in Figure 11.17, can be used to understand the dominant mechanisms for sintering under different conditions.

Figure 11.15 Neck growth during sintering. (Image by Brittney Hauke (Penn State)).

Figure 11.16 Mass transport mechanisms during sintering. The notation for each mechanism is explained in Table 11.1. (Image by Brittney Hauke (Penn State) based on Balluffi et al. [2]).


Figure 11.17 Sintering mechanism map for silver powder with a radius of 100 μm. The notation for each mechanism is explained in Table 11.1. The dashed line shows the approximate transition between initial Stage I sintering and the subsequent Stages II and III. (Image by Brittney Hauke (Penn State) based on Ashby [23]).

Table 11.1 Summary of sintering mechanisms.

Mechanism   Source        Sink      Transport Mechanism   Densifying or Nondensifying
SS·XL       Surface       Surface   Crystal Diffusion     Nondensifying
BS·XL       Boundary      Surface   Crystal Diffusion     Densifying
BS·B        Boundary      Surface   Boundary Diffusion    Densifying
DS·XL       Dislocation   Surface   Crystal Diffusion     Either
SS·S        Surface       Surface   Surface Diffusion     Nondensifying
SS·V        Surface       Surface   Vapor Transport       Nondensifying
VF          -             -         Viscous Flow          Either

Based on Balluffi et al. [2].

Table 11.1 summarizes key mass transport mechanisms for sintering. Each mechanism is defined by: (a) the source of the diffusing species, (b) the sink for the diffusion process, and (c) the transport pathway. The possible sources of the diffusing species include the surface of a grain, a grain boundary, or a dislocation. Since the material is always deposited onto a surface during neck growth, the sink for the diffusion process is the grain surface. Transport pathways include diffusion through the crystalline grain, grain boundary diffusion, and surface diffusion. The diffusing material may also be carried by vapor transport. Viscous flow can also be an important mechanism, viz., during liquid phase sintering.

As indicated in Table 11.1, the mechanisms can be either densifying or nondensifying, depending on whether the mass transport process leads to a reduction in porosity. When the source of the diffusant is the surface of a grain, then the process is typically nondensifying since the mass is simply being redistributed onto different regions of the surface. When the diffusant originates from within the grains, it can act to fill the open pores, thereby leading to densification. Viscous flow may be either densifying or nondensifying, depending on whether the flow acts to fill in the open porosity of the system.

As indicated in Figure 11.18, the sintering process consists of three main stages:
• Initial stage (Stage I), wherein neck growth occurs along the grain boundaries between adjacent particles, and there is a large interconnected network of porosity.
• Intermediate stage (Stage II), wherein the necks between particles are no longer small compared to the particle radii, and the porosity is mainly in the form of narrow yet still interconnected tubular pores.
• Final stage (Stage III), wherein the narrow, interconnected tubular porosity along the grains breaks up, leaving behind small isolated pores.
The final stage of sintering involves shrinkage/elimination of pores by mass transfer along the grain boundaries.

Recently, a new paradigm in sintering of ceramic bodies has enabled sintering at significantly lower temperatures compared to traditional sintering processes. This cold sintering approach, pioneered by Clive Randall and coworkers [24,25], leverages chemical reactions and pressurization to accelerate the sintering process.
With cold sintering, high-density ceramic bodies can be achieved with the same low porosity as when using traditional high-temperature sintering methods. Figure 11.19 shows

Figure 11.18 Stages of sintering. (Image by Brittney Hauke (Penn State)).


Figure 11.19 Anisothermal cold sintering of ZnO. (a) Comparison to ZnO solid-state sintering on relative density (ρ) and specific surface area (SSA) as functions of time and temperature. (b)–(e) Microstructural evolution and relative density increase during the first 9 minutes of the cold sintering process. (f) Comparison of the in-situ linear shrinkage between cold sintering (CS) and solid-state sintering (SS) processes at different heating rates, where the circle shows the maximum shrinkage rate. (g) Analysis of the heating rate dependent empirical exponent for the modified Woolfrey-Bannister method. (h) Linear shrinkage dependence on the concentration of the transient liquid. (Figure by Sun Hwi Bang (Penn State)).



cold sintering of a ZnO ceramic system, where the cold sintering process occurs at temperatures several hundred degrees below traditional solid-state sintering. Despite these significantly lower temperatures, the resulting densification is on par with that achieved through the standard solid-state sintering process.

11.8 Summary

Morphological changes in a material are a result of the evolution of its interfaces due to applied or capillary forces. Morphological changes occur by capillary forces if the motion acts to decrease the total free energy associated with the interfaces in the system. Interfacial free energy is the integral of the product of the surface tension and interfacial area over all interfaces in a system, including both free surfaces and internal interfaces. For isotropic systems with a single type of interface, the interfacial area always tends toward a minimum. On the other hand, for anisotropic surfaces, the interfacial area could increase during minimization of free energy via a process known as faceting.

The von Neumann-Mullins law for two-dimensional grain growth states that grains with fewer than six edges will shrink and eventually disappear, whereas grains with greater than six edges will grow. Grains with exactly six edges are dimensionally stable and will neither grow nor shrink. The von Neumann-Mullins approach has recently been generalized for systems of arbitrary dimensionality by MacPherson and Srolovitz.

Diffusional creep results in permanent deformation of the material as a result of capillary and applied forces. There are two types of diffusional creep: Nabarro-Herring creep happens when the mass flux occurs via the crystalline matrix, and Coble creep occurs when the mass flux is along the grain boundaries.

Sintering is the process by which a compressed powder or granular material undergoes neck growth and densification through heat treatment. This heat treatment is usually conducted under pressure to aid in the sintering process. The final sintered body has a lower porosity and greater mechanical integrity compared to the initial green body. Recently, cold sintering techniques have leveraged chemical reactions to promote effective sintering of ceramics at much lower temperatures compared to traditional processes.


Exercises

(11.1) Plot the equilibrium dihedral angle for grooving as a function of γ_B/γ_S. What conditions favor a larger dihedral angle? Explain in terms of the underlying surface physics.

(11.2) Plot the evolution of surface free energy for a cylinder undergoing Plateau-Rayleigh instability considering a perturbation wavelength of λ = 4πR₀, where R₀ is the radius of the initial cylinder. What are the factors governing the thermodynamics and kinetics of this morphological evolution?

(11.3) Prove that the critical perturbation wavelength for Plateau-Rayleigh instability is given by λ_crit = 2πR₀, where R₀ is the radius of the initial cylinder.

(11.4) Search the literature for a polycrystalline material system that undergoes faceting to minimize surface free energy. Provide the reference and explain why the material exhibits this particular type of faceting. Be detailed in your explanation.

(11.5) Give an example of a real-life particle coarsening process that is:
(a) Diffusion-limited
(b) Source-limited
(c) Sink-limited

(11.6) Using your own words and drawings, explain why two-dimensional grains with more than six sides tend to grow at the expense of grains with fewer than six sides. Be detailed in your explanation.

(11.7) Search the literature for a deformation mechanism map of a polycrystalline material that undergoes diffusional creep. Provide the reference and explain why the system undergoes the various types of deformation modes in each region of the deformation map.

(11.8) Suppose that a granular material can sinter by either the BS·XL or BS·B mechanisms. How will the sintering rate change for both of these mechanisms as:
(a) The particle size is increased?
(b) The temperature is increased?
(c) The applied pressure is increased?

(11.9) Give an example of a system where liquid phase sintering is often utilized. Explain the various sintering mechanisms in detail. Why is incorporation of a liquid phase beneficial for sintering of this system? Explain in detail.


(11.10) Give an example of a cold sintering process. What are the mechanisms that enable effective sintering at low temperatures? How does the efficacy of the cold sintering process compare with that of a traditional solid-state sintering process?

References

[1] J. N. Israelachvili, Intermolecular and Surface Forces, 3rd ed., Academic Press (2011).
[2] R. W. Balluffi, S. M. Allen, and W. C. Carter, Kinetics of Materials, John Wiley & Sons (2005).
[3] M. McLean and E. D. Hondros, "A Study of Grain-Boundary Grooving at the Platinum/Alumina Interface," J. Mater. Sci. 6, 19 (1971).
[4] R. Mead-Hunter, A. J. King, and B. J. Mullins, "Plateau Rayleigh Instability Simulation," Langmuir 28, 6731 (2012).
[5] E. D. Williams and N. C. Bartelt, "Surface Faceting and the Equilibrium Crystal Shape," Ultramicroscopy 31, 36 (1989).
[6] C. S. Jayanth and P. Nash, "Factors Affecting Particle-Coarsening Kinetics and Size Distribution," J. Mater. Sci. 24, 3041 (1989).
[7] P. W. Voorhees, "The Theory of Ostwald Ripening," J. Stat. Phys. 38, 231 (1985).
[8] J. von Neumann, in C. Herring (Ed.), Metal Interfaces, American Society for Metals, Cleveland (1952).
[9] W. W. Mullins, "Two-Dimensional Motion of Idealized Grain Boundaries," J. Appl. Phys. 27, 900 (1956).
[10] M. A. Palmer, V. E. Fradkov, M. E. Glicksman, and K. Rajan, "Experimental Assessment of the Mullins-Von Neumann Grain Growth Law," Script. Metal. Mater. 30, 633 (1994).
[11] G. Gottstein, A. D. Rollett, and L. S. Shvindlerman, "On the Validity of the von Neumann-Mullins Relation," Script. Mater. 51, 611 (2004).
[12] R. D. MacPherson and D. J. Srolovitz, "The Von Neumann Relation Generalized to Coarsening of Three-Dimensional Microstructures," Nature 446, 1053 (2007).
[13] J. K. Mason, E. A. Lazar, R. D. MacPherson, and D. J. Srolovitz, "Geometric and Topological Properties of the Canonical Grain-Growth Microstructure," Phys. Rev. E 92, 063308 (2015).
[14] D. A. Klain and G.-C. Rota, Introduction to Geometric Probability, Cambridge University Press (1997).
[15] F. R. N. Nabarro, "Steady-State Diffusional Creep," Philos. Mag. 16, 231 (1967).
[16] R. C. Gifkins, "Diffusional Creep Mechanisms," J. Am. Ceram. Soc. 51, 69 (1968).
[17] R. Raj and M. F. Ashby, "On Grain Boundary Sliding and Diffusional Creep," Metal. Trans. 2, 1113 (1971).
[18] E. H. Aigeltinger and R. C. Gifkins, "Grain-Boundary Sliding During Diffusional Creep," J. Mater. Sci. 10, 1889 (1975).
[19] M. F. Ashby, "A First Report on Deformation Mechanism Maps," Acta Metal. 20, 887 (1972).
[20] R. M. German, Sintering: From Empirical Observations to Scientific Principles, Butterworth-Heinemann (2014).
[21] M. N. Rahaman, Sintering of Ceramics, CRC Press (2008).
[22] R. M. German, Liquid Phase Sintering, Springer (2013).


[23] M. Ashby, "A First Report on Sintering Diagrams," Acta Metal. 22, 275 (1974).
[24] J. Guo, H. Guo, A. L. Baker, M. T. Lanagan, E. R. Kupp, G. L. Messing, and C. A. Randall, "Cold Sintering: A Paradigm Shift for Processing and Integration of Ceramics," Angew. Chemie Int. Ed. 55, 11457 (2016).
[25] J.-P. Maria, X. Kang, R. D. Floyd, E. C. Dickey, H. Guo, J. Guo, A. Baker, S. Funahashi, and C. A. Randall, "Cold Sintering: Current Status and Prospects," J. Mater. Res. 32, 3205 (2017).


CHAPTER 12

Diffusion in Polymers and Glasses

12.1 Introduction

In Chapters 8–11, we covered various aspects of diffusion and other kinetic processes in crystalline and polycrystalline materials. In this Chapter, we shall turn our attention to diffusion in non-crystalline materials, namely, polymers and glasses. First, we will review the basic physics of diffusion in polymeric systems. Our subsequent treatment of glass diffusion will focus on several practical applications of diffusive processes in the glassy state.

Polymers are composed of molecular chains of repeated subunits, arranged in various configurations. Naturally occurring polymers such as hemp, wool, and silk have been in use for many centuries. Synthetic polymers are also now ubiquitous in our everyday lives, including polyethylene, polystyrene, polypropylene, polyvinyl chloride, and many more.

Polymers are synthesized by combining many smaller molecules known as monomers. The polymerization process involves combining these monomers into a covalently bonded chain or network. There are two main types of polymerization processes, known as step-growth polymerization and chain-growth polymerization. With chain-growth polymerization, each monomer is added to the chain separately. With step-growth polymerization, chains consisting of multiple monomer units can be combined directly together. An example of chain-growth polymerization is the synthesis of polyethylene, whereas polyesters are generally synthesized via step-growth polymerization.

The topology of polymer networks can be linear, i.e., with little branching or cross-linking between chains, or the network can be highly cross-linked. Another important consideration is the polymer chain length, which quantifies the number of monomer units in a given chain. Owing to their complicated chain structure and network topology, most polymers are either completely non-crystalline or only partially crystalline.
Materials Kinetics ISBN 978-0-12-823907-0 https://doi.org/10.1016/B978-0-12-823907-0.00003-0

© 2021 Elsevier Inc. All rights reserved.



Diffusion of polymeric chains generally falls into two categories, depending on the microstructure of the polymer:
• For isolated chains in a solvent of comparatively small molecules, diffusion obeys the Stokes-Einstein relation (Section 12.2).
• With long, entangled polymer chains, diffusion occurs via reptation (Section 12.3).

12.2 Stokes-Einstein Relation

The Stokes-Einstein relation provides an equation for the diffusivity D of a spherical particle in a fluid medium [1]. The setup for this problem is shown in Figure 12.1. As the name implies, the Stokes-Einstein relation is derived by combining the Stokes law,

F_d = 6πηav,    (12.1)

with the Einstein relation,

D = vkT / F_d.    (12.2)

In Eq. (12.1), F_d is the frictional drag force acting on the interface between the fluid medium and the particle, η is the shear viscosity of the fluid medium (see Chapter 16), a is the radius of the spherical particle, and v is the velocity of the particle within the fluid. In Eq. (12.2), D is the diffusion coefficient of the spherical particle, k is Boltzmann's constant, and T is absolute temperature. Combining Eqs. (12.1) and (12.2), we obtain the Stokes-Einstein relation:

D = kT / (6πηa),    (12.3)

Figure 12.1 The Stokes-Einstein relation considers the diffusivity of a spherical particle of radius a in a fluid medium having a shear viscosity η. The particle is moving with a velocity of v and experiences a Stokes drag force, F_d, from the viscous medium.


which connects the diffusivity of the particle (D) with its radius (a) and the shear viscosity of the medium (η). A higher viscosity medium leads to a lower diffusivity, since a greater viscosity creates a higher frictional drag force on the diffusing particle. The main assumptions of the Stokes-Einstein relation are that the particle is spherical and that there are no additional interactions between the particle and the fluid medium beyond Stokes drag. In practice, the Stokes-Einstein relation provides a good description of the diffusivity of polymeric and glass-forming liquids at relatively high temperatures (i.e., low viscosities). The Stokes-Einstein relation breaks down as the system is cooled near the glass transition regime. For polymeric systems, the Stokes-Einstein relation is useful to describe diffusion of smaller chains in a fluid of comparatively small molecules. However, when applying the Stokes-Einstein relation, the polymer chain must be approximated as an effective sphere. One way of implementing this approximation is by using the freely jointed chain model, which is discussed in the following section.
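Eq. (12.3) can be applied directly. In the sketch below, the temperature, viscosity, and particle radius are illustrative assumptions chosen to resemble a nanometer-scale sphere in a water-like fluid.

```python
# Sketch of the Stokes-Einstein relation, Eq. (12.3): D = kT / (6*pi*eta*a).
# The viscosity and particle radius below are illustrative assumptions.

import math

K_BOLTZMANN = 1.380649e-23  # J/K

def stokes_einstein_diffusivity(temperature, viscosity, radius):
    """Diffusivity (m^2/s) of a sphere of given radius (m) in a fluid
    of given shear viscosity (Pa*s) at the given temperature (K)."""
    return K_BOLTZMANN * temperature / (6.0 * math.pi * viscosity * radius)

if __name__ == "__main__":
    # Example: a 1 nm sphere in a water-like fluid (~1 mPa*s) at 300 K.
    D = stokes_einstein_diffusivity(300.0, 1e-3, 1e-9)
    print(f"D = {D:.3e} m^2/s")  # on the order of 1e-10 m^2/s
```

Doubling the viscosity halves the diffusivity, reflecting the higher frictional drag on the diffusing particle.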

12.3 Freely Jointed Chain Model of Polymers

The freely jointed chain model is a simple approximation of a polymer chain structure, where the direction of each bond between monomer units is assumed to be random [2]. Obviously, this assumption ignores the fact that the covalent bonds within a polymer chain tend to favor certain angles. Since the angles between monomer segments are assumed to be random, the freely jointed chain model is equivalent to assuming a random walk:

⟨R²⟩ = Nb²,    (12.4)

where b is the length of an individual monomer segment, N is the number of monomer segments, and R is the end-to-end distance of the polymer, i.e., the distance between the two ends of the polymer chain. A schematic of the freely jointed chain approximation is shown in Figure 12.2. Hence, R can be treated as an effective radius of the polymer, and the diffusivity of a single, isolated polymer chain can be estimated using the Stokes-Einstein relation. Combining Eqs. (12.3) and (12.4), we have:

D = kT / (6πηR) = kT / (6πη√N b).    (12.5)

Copyright Elsevier 2023


Figure 12.2 Freely jointed chain model of a polymer. The angle between monomer segments is assumed to be random, such that R is an effective radius of the polymer.

From this equation, shorter polymer chains can be expected to diffuse more quickly compared to longer chains, as the diffusivity scales with $1/\sqrt{N}$.
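As a numerical sketch of Eqs. (12.4) and (12.5), the snippet below checks the random-walk result ⟨R²⟩ = Nb² by Monte Carlo and then evaluates the Stokes-Einstein diffusivity for the resulting effective radius. All parameter values (chain length, segment length, viscosity, temperature) are illustrative assumptions, not values from the text:

```python
import numpy as np

# Illustrative parameters: a 25-segment chain with b = 2 Angstroms
# diffusing in a fluid of viscosity 10 Pa*s at 300 K.
kB = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0           # temperature, K
N = 25              # number of monomer segments
b = 2.0e-10         # segment length, m
eta = 10.0          # shear viscosity, Pa*s

# Monte Carlo check of <R^2> = N b^2: build chains from N randomly
# oriented segments and average the squared end-to-end distance.
rng = np.random.default_rng(0)
n_chains = 20000
v = rng.normal(size=(n_chains, N, 3))              # random directions
v /= np.linalg.norm(v, axis=2, keepdims=True)      # unit segment vectors
R_vec = b * v.sum(axis=1)                          # end-to-end vectors
R2_mc = np.mean(np.sum(R_vec**2, axis=1))
print(R2_mc / (N * b**2))   # ratio close to 1

# Effective radius R = sqrt(N) b, then Stokes-Einstein (Eq. 12.5)
R = np.sqrt(N) * b
D = kB * T / (6 * np.pi * eta * R)
print(D)                    # diffusivity in m^2/s
```
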

12.4 Reptation

The freely jointed chain model is a rough approximation of an isolated polymer chain. This approach is not valid for polymer systems with longer chains or a more entangled network. A more sophisticated model of polymer diffusion, termed reptation, was proposed by the French physicist Pierre-Gilles de Gennes (1932–2007) [3]. The word "reptation" is derived from the word "reptile," since reptation considers the diffusion of polymer chains as occurring through a slithering, snake-like motion. Reptation theory assumes that each polymer chain occupies a tube of length L, and the movement of the polymer chain in the entangled network occurs along this tube in a direction tangential to itself, as depicted in Figure 12.3. The leading end of the polymer chain consists of a freely jointed "head" that can find regions of lower density into which it can advance. The remainder of the chain then follows via a sliding motion along the one-dimensional pathway occupied by the tubular chain. Note that the chain can also move equally well in the reverse direction, since the "head" and the "tail" of the polymer chain are interchangeable. We define the mobility of the tube (μtube) as simply the ratio of its velocity (v) to the force (F) acting on the tube:

$\mu_{\mathrm{tube}} = v/F.$    (12.6)


Figure 12.3 Reptation of a polymer chain in an entangled network. With reptation, the polymer chain is diffusing along a tube of its own one-dimensional axis, following a snakelike motion. (Image by Brittney Hauke (Penn State)).

From the Einstein relation of Eq. (12.2), the diffusivity of the tube (Dtube) is therefore:

$D_{\mathrm{tube}} = kT\,\mu_{\mathrm{tube}}.$    (12.7)

From Eq. (7.35), the mean square displacement along the one-dimensional tube is:

$\langle r^2 \rangle = 2 D_{\mathrm{tube}} t = 2kT\,\mu_{\mathrm{tube}} t.$    (12.8)

Using Eq. (12.8), we can calculate the time for the chain to fully displace the length (L) of its original tube:

$t_L = \frac{L^2}{2kT\,\mu_{\mathrm{tube}}}.$    (12.9)

This is known as the reptation time, which is a key material property related to the relaxation time of the polymer [4].
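Eq. (12.9) is simple to evaluate directly. The sketch below uses purely illustrative parameter values (the temperature and the assumed tube mobility are not taken from the text) to show that the reptation time grows quadratically with tube length:

```python
kB = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0           # temperature, K

def reptation_time(L, mu_tube):
    """Reptation time t_L = L^2 / (2 k T mu_tube), Eq. (12.9)."""
    return L ** 2 / (2 * kB * T * mu_tube)

mu_tube = 1.0e12    # assumed tube mobility, m/(N*s) -- illustrative only
for L in (50e-9, 100e-9, 200e-9):         # tube lengths, m
    print(L, reptation_time(L, mu_tube))  # t_L grows as L^2
```

Since t_L ∝ L², doubling the tube length quadruples the reptation time (the chain-length dependence of μtube itself adds a further slowdown for longer chains).
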

12.5 Chemically Strengthened Glass by Ion Exchange

Like polymers, glasses play a ubiquitous role in modern civilization. One of the traditional limitations of glassy materials is their brittle fracture, which is especially pronounced in the presence of surface flaws that act to concentrate stresses at localized regions in the material. Such surface flaws effectively compromise the strength of glass specimens to a small fraction of their ideal theoretical strength [5].


To overcome these limitations, glasses can be chemically strengthened via the ion exchange process. The ion exchange process for chemical strengthening was originally developed by Kistler in 1962 [6] and has recently become a widespread process applied to a variety of commercial glass products [5]. One of the most prevalent new applications of glass is to serve as protective covers for personal electronic devices such as smartphones, tablets, and touch-screen laptops. Ion exchange is also used to chemically strengthen aircraft windshields and glass packaging for pharmaceutical vials.

During the ion exchange process, an initial glass is formed which contains some concentration of a small alkali oxide, typically Li2O and/or Na2O. The concentration of the alkali oxides is usually in the range of 10–20 mol%. The as-made glass is then submerged in a molten salt bath consisting of larger alkali cations. The salt bath typically consists of NaNO3 for a Li2O-containing glass or KNO3 for a Na2O-containing glass. The salt bath treatment is performed at a temperature around 400 °C, which is high enough to enable fast kinetics of the ion exchange process but still below the strain point (see Chapter 16) of the glass and below the boiling point of the salt. During the salt bath treatment, smaller alkali ions from the glass diffuse into the salt bath and are replaced by larger cations from the salt, as shown in Figure 3.8. By "stuffing" these larger alkalis into sites previously occupied by a smaller species, a protective compressive stress layer develops near the surface of the glass. The compressive stress layer acts to inhibit the formation and propagation of cracks in the glass surface. Neglecting stress relaxation effects (see Chapter 21), the stress profile σ(x,t) obtained via the ion exchange process is given by

$\sigma(x,t) = -\frac{BE}{1-\nu}\left[C(x,t) - \langle C(t)\rangle\right],$    (12.10)

where C(x,t) is the concentration of the invading species (i.e., the concentration of the larger alkali from the salt bath) as a function of penetration depth (x) and ion exchange time (t). ⟨C(t)⟩ is the average concentration of the invading species at any given time:

$\langle C(t)\rangle = \frac{1}{L}\int_0^L C(x,t)\,dx,$    (12.11)

where L is the thickness of the glass along the x dimension. In Eq. (12.10), E is the Young's modulus of the glass, ν is its Poisson's ratio, and B is the


linear network dilation coefficient, also known as the Cooper coefficient, named in honor of the late Prof. Alfred R. Cooper (1924–1996) from Case Western Reserve University, who was one of the pioneers in developing the physics of chemically strengthened glass. The Cooper coefficient is a property of the glass which can be considered as analogous to the thermal expansion coefficient. Whereas the thermal expansion coefficient of a material defines its strain per unit change in temperature, the Cooper coefficient B specifies the linear strain of the glass per unit change in alkali concentration. In other words, B gives the linear strain per mole % of ions exchanged during the salt bath treatment. Hence, the strain profile, ε(x,t), induced in the glass is linearly related to its concentration profile by:

$\varepsilon(x,t) = B\left[C(x,t) - C_{\mathrm{base}}\right],$    (12.12)

where C_base is the initial concentration of the larger alkali species in the glass prior to ion exchange. In many cases, this initial concentration of the larger alkali species is zero, i.e., C_base = 0. Hence, the strain profile is directly proportional to the concentration profile that results from the interdiffusion process. Following Hooke's law, the strain profile in Eq. (12.12) is converted to the stress profile in Eq. (12.10) by multiplying by E/(1 − ν), assuming that the glass has a plate geometry. The subtraction of the average concentration in Eq. (12.10) is required for the stress profile to satisfy the force balance condition. The force balance condition demands that the integral of the stress profile across the full thickness of the glass must equal zero:

$\int_0^L \sigma(x,t)\,dx = 0.$    (12.13)

This means that the integrated compressive stresses near the glass surfaces must be compensated by an equivalent amount of integrated tensile stresses in the interior of the glass. The zero-crossing point of the stress is known as the depth of layer (DOL) or depth of compression (DOC) of the stress profile, defined by σ(x_DOL) = 0. Typical values of DOL are on the order of 40–120 μm. A larger depth of layer means that the glass has greater protection against failure. The glass is protected against fracture if the cracks are confined to the compressive stress regime. However, once a crack penetrates through the depth of layer and enters the tensile stress regime, then the stored tensile energy will be released from the interior of the glass, leading to failure.


The other key parameter characterizing the stress profile is the magnitude of the compressive stress at the surface of the glass, i.e., σ(0) = σ(L), which is usually simply called the compressive stress (CS) of the stress profile. A higher CS is desirable to help prevent the introduction of damage into the glass surface. Typical values of the CS are on the order of 600–1000 MPa. Another parameter commonly quoted to describe the stress profile is the central tension (CT). This corresponds to the magnitude of the tensile stress in the center of the glass, i.e., σ(L/2). Note that a greater compressive stress and a deeper depth of layer require a larger central tension in order to satisfy the force balance condition of Eq. (12.13).

The concentration profile can be calculated by considering the initial and boundary conditions for the ion exchange treatment. The salt bath itself can be considered as a semi-infinite bath, which provides a constant source of the larger alkali ions to the surface of the glass. Since ion exchange is an interdiffusion process with zero net flux (see Section 3.5), the diffusion equation can be solved using a single diffusion coefficient, D. Given the constant surface concentration, the solution of the diffusion equation can be obtained by the Laplace transform method derived in Section 4.7. Since diffusion occurs into both surfaces of the glass, the resulting concentration profile is:

$C(x,t) = (C_{M_2O} - C_{\mathrm{base}})\left[\mathrm{erfc}\!\left(\frac{x}{\sqrt{4Dt}}\right) + \mathrm{erfc}\!\left(\frac{L-x}{\sqrt{4Dt}}\right)\right] + C_{\mathrm{base}},$    (12.14)

where C_{M2O} is the total alkali concentration in the glass. Here we have assumed that the sample thickness is large compared to the diffusion distance into the glass. Assuming C_base = 0, Eq. (12.14) simplifies to:

$C(x,t) = C_{M_2O}\left[\mathrm{erfc}\!\left(\frac{x}{\sqrt{4Dt}}\right) + \mathrm{erfc}\!\left(\frac{L-x}{\sqrt{4Dt}}\right)\right].$    (12.15)

Figure 12.4 plots an example concentration profile of the invading alkali species, following Eq. (12.15).
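The double-erfc profile of Eq. (12.15) is straightforward to evaluate with the standard-library complementary error function. In the sketch below, the parameter values (D, t, thickness, and surface concentration) are illustrative assumptions, not values from the text:

```python
from math import erfc, sqrt

def concentration_profile(x, t, D, C_surface, L):
    """Eq. (12.15): double-sided ion-exchange profile with C_base = 0."""
    s = sqrt(4 * D * t)
    return C_surface * (erfc(x / s) + erfc((L - x) / s))

L_glass = 0.1      # glass thickness, cm (1.0 mm)
D = 1.0e-10        # interdiffusion coefficient, cm^2/s (illustrative)
t = 10 * 3600      # 10-hour ion exchange, s
C_surf = 15.0      # surface concentration of invading alkali, mol% (illustrative)

xs = [i * L_glass / 1000 for i in range(1001)]
profile = [concentration_profile(x, t, D, C_surf, L_glass) for x in xs]
print(profile[0], profile[500])  # surface vs. mid-plane concentration
```

Because erfc(0) = 1 and the second erfc term is negligible at the opposite face, the profile equals the surface concentration at each face, decays toward zero in the interior, and is symmetric about the mid-plane.
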
An example of a stress profile obtained through the ion exchange process is plotted in Figure 12.5. To summarize the key steps of the stress generation process:
• For a single salt bath treatment, the concentration profile of the invading alkali ions is given by Eq. (12.14).


Figure 12.4 Concentration of the invading alkali species in the glass of thickness L, resulting from ion exchange from both surfaces. The concentration profile follows Eq. (12.15).

Figure 12.5 Example stress profile showing the surface compressive stress (CS), central tension (CT), and depth of layer (DOL). The DOL corresponds to the zero crossing point of the stress curve. The stress profile obeys the force balance condition of Eq. (12.13), such that the integrated compressive stress matches the integrated tensile stress.

• This concentration profile is converted to a strain profile by multiplying by the Cooper coefficient, B, as in Eq. (12.12).
• For a plate geometry, the strain profile is converted to a stress profile by multiplying by E/(1 − ν) and imposing the force balance condition, as in Eq. (12.10).
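These steps can be sketched end-to-end in a few lines. In the snippet below (all parameter values are illustrative assumptions, not values from the text), the subtraction of the average concentration in Eq. (12.10) automatically enforces the force balance of Eq. (12.13), and the CS, CT, and DOL are then read off the resulting profile:

```python
from math import erfc, sqrt

# Illustrative parameters (assumptions, not values from the text)
L = 0.1           # thickness, cm (1.0 mm)
D = 1.0e-10       # interdiffusion coefficient, cm^2/s
t = 10 * 3600     # ion exchange time, s
C_surf = 15.0     # surface concentration of invading alkali, mol%
B = 0.001         # Cooper coefficient, linear strain per mol%
E = 75.0e3        # Young's modulus, MPa
nu = 0.20         # Poisson's ratio

n = 2001
xs = [i * L / (n - 1) for i in range(n)]
s = sqrt(4 * D * t)
C = [C_surf * (erfc(x / s) + erfc((L - x) / s)) for x in xs]  # Eq. (12.15)

C_avg = sum(C) / n                           # discrete form of Eq. (12.11)
prefac = B * E / (1 - nu)
sigma = [-prefac * (c - C_avg) for c in C]   # Eq. (12.10), in MPa

CS = -sigma[0]          # surface compressive stress (reported as positive)
CT = sigma[n // 2]      # central tension
# Depth of layer: first zero crossing of the stress profile
DOL = next(x for x, sg in zip(xs, sigma) if sg >= 0)
print(CS, CT, DOL * 1e4)  # CS and CT in MPa, DOL in micrometers
```

By construction, the stresses sum to zero over the thickness, so the integrated surface compression is balanced by the interior tension.
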


The kinetics of the ion exchange process are governed by the interdiffusion coefficient, D. A higher value of D is desirable to minimize the time required to achieve a target DOL in the stress profile of the glass. Although high values of D can be achieved by increasing the salt bath temperature, this can also lead to relaxation of the stresses being installed during the ion exchange process (see Chapter 21), which reduces the value of the final CS. The composition and temperature dependence of D can be evaluated by measuring the Arrhenius curves of the ln D vs. 1/T relationship for different glass compositions, as shown in Figure 12.6. The activation barrier for the interdiffusion process can be calculated from the slope of the Arrhenius plot, as indicated in Figure 3.4. Figure 12.7 plots the activation barrier for the interdiffusion process for a set of 28 different silicate glass compositions. Here the activation barrier for interdiffusion is plotted against the activation barrier for viscous flow of the same compositions. Figure 12.7 shows that there is no correlation between the activation barriers for diffusion and viscosity, indicating a complete breakdown of the Stokes-Einstein relation for the case of ion exchange of glasses. Instead, the activation barrier for interdiffusion is found to be constant with respect to glass composition. In other words, all of the composition dependence of D is contained in the pre-exponential factor of the Arrhenius equation, i.e., the variation of D with glass composition is

Figure 12.6 Arrhenius plots to obtain the activation barrier for the interdiffusion coefficient of the K⁺-for-Na⁺ ion exchange process. (Data are from Mauro et al. [7]).


Figure 12.7 Activation barrier for the interdiffusion process versus viscous flow for 28 different silicate glass compositions. There is no correlation between the activation barrier for diffusion and that for viscous flow, demonstrating that the Stokes-Einstein relation is not valid for describing the kinetics of the ion exchange process. The activation barrier for interdiffusion is found to be a constant with respect to glass composition. This indicates that the composition dependence of the interdiffusion coefficient is entropic in origin, i.e., attributable to variation in the pre-exponential factor of the Arrhenius equation. (Data are from Mauro et al. [7]).

fundamentally entropic in origin. The breakdown of the Stokes-Einstein relation in Figure 12.7 is due to a decoupling of the atomic scale mechanisms responsible for diffusion and viscosity. Whereas viscosity involves a cooperative flow of atoms (see Chapter 16), diffusion in these glasses involves a hopping of individual atoms within the rigid network of the glass.

The other key parameter governing the stress profile of the chemically strengthened glass is the compressive stress (CS) generated at the surface of the glass. Figure 12.8 plots the CS for the same 28 glass compositions as a function of their glass transition temperature, Tg (see Chapter 16). Figure 12.8 shows some evidence for correlation between the CS and Tg values of the glass. However, when the stress is converted to a strain description, as in Figure 12.9, all of the points collapse onto a single master curve. This indicates that the linear network dilation coefficient, B, may be governed by the same physics as the glass transition temperature, Tg. As will be shown in Chapter 16, the composition dependence of the glass transition temperature is determined by the topology of the glass network, specifically the number of rigid bond constraints per atom present in the network.
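Extracting an activation barrier from an Arrhenius plot amounts to a linear fit of ln D against 1/T, with slope −Ea/kB. A minimal sketch with made-up (D, T) data points (not data from Mauro et al. [7]) is:

```python
import math

kB = 8.617333e-5   # Boltzmann constant, eV/K

# Hypothetical (T, D) measurements for one glass composition
data = [(653.0, 4.0e-11), (673.0, 7.2e-11), (693.0, 1.26e-10)]  # K, cm^2/s

# Least-squares fit of ln D = ln D0 - Ea / (kB * T)
xs = [1.0 / T for T, _ in data]
ys = [math.log(D) for _, D in data]
n = len(data)
xbar, ybar = sum(xs) / n, sum(ys) / n
slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
Ea = -slope * kB                        # activation barrier, eV
D0 = math.exp(ybar - slope * xbar)      # pre-exponential factor, cm^2/s
print(Ea, D0)
```

Per the discussion above, a composition series with a constant Ea but varying D0 would show parallel lines on the Arrhenius plot, with all compositional variation carried by the pre-exponential (entropic) factor.
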


Figure 12.8 Compressive stress versus glass transition temperature for 28 different silicate glass compositions. There is some noisy correlation between the two properties, but better correlation is shown when the stress is converted to a strain description, as in Figure 12.9. (Data are from Mauro et al. [7]).

Figure 12.9 Linear correlation between ion exchange strain and glass transition temperature, indicating that the same physics that governs the compositional scaling of glass transition temperature also applies to the linear network dilation coefficient. (Data are from Mauro et al. [7]).


12.6 Ion-Exchanged Glass Waveguides

Besides chemical strengthening, ion exchange is also used to fabricate glass waveguides [8]. Waveguides operate on the principle of total internal reflection, where the light propagates along the pathway of high refractive index, reflecting off the boundaries of the high and low refractive index regions. Planar glass waveguides are created by ion exchanging pathways of high refractive index into a glass surface. This high refractive index can be obtained through ion exchange of Ag⁺, e.g., by using a silver nitrate (AgNO3) salt bath. As shown in Figure 12.10, a mask is used to restrict the regions of silver ion exchange, thereby defining the functionality of the glass waveguide.

12.7 Anti-Microbial Glass

Silver ion exchange is also an effective means for creating anti-microbial glass surfaces. The exchange of Ag⁺ into the glass surface is again accomplished using AgNO3. A mixed salt bath of AgNO3 and KNO3 can be used to provide simultaneous strengthening of the glass while imparting anti-microbial efficacy. When the glass surface is exposed to a humid

Figure 12.10 Ion exchange process for making a planar waveguide. The exchange of Ag⁺ ions into the glass increases the refractive index, enabling total internal reflection along the pathway of the waveguide. (Image by Brittney Hauke (Penn State)).


environment, some of the Ag⁺ ions diffuse to the hydrated surface. When the Ag⁺ ions are incorporated into the bacterial cell, they have a highly effective kill rate [9]. Figure 12.11 shows a cartoon of the operation of anti-microbial glasses. Such glasses can be especially helpful to reduce the spread of infectious diseases, for example, on surfaces used at hospitals or nursing homes, or on commonly shared touch screens such as in automated teller machines (ATMs).

12.8 Proton Conducting Glasses

One of the grand challenges facing the world today is the development of materials for next-generation energy storage. Energy storage media should provide a high density of storage capacity, rapid recharging, and high cyclability without loss of the original energy storage capacity. Safety is also of paramount concern. One recent breakthrough is the development of fast proton-conducting glasses by the collaboration of Junji Nishii (Hokkaido University) and Takahisa Omata (Tohoku University) [10,11]. Starting with sodium phosphate glasses, the sodium ions in the glass are exchanged for protons using an electrochemical ion exchange process known as alkali-proton substitution (APS). This yields glasses with a high concentration of charge carriers. The phosphate glass network provides for ultra-high diffusivity of the protons, which are only weakly bound to the glass network. These glasses provide a promising path forward for next-generation energy storage materials.

Figure 12.11 A glass surface can become anti-microbial through ion exchange of Ag⁺, which is a well-known anti-microbial agent. (Image by Brittney Hauke (Penn State)).


12.9 Summary

Many materials, including glasses and most polymers, are either non-crystalline or partially crystalline. In the low viscosity regime, diffusivity in non-crystalline systems can be modeled using the Stokes-Einstein relation, which connects diffusivity to the size of the diffusing species and the viscosity of the medium in which the diffusion occurs. When applying the Stokes-Einstein equation to polymer chains, the freely jointed chain model can be used to approximate the structure of an isolated polymer chain. For more complicated polymers with entangled networks, diffusion is more accurately described by reptation. Reptation refers to the snake-like motion of a polymer chain along its own axis.

Ion exchange is a very important interdiffusion process for modifying the surface properties of glasses. Ion exchange can lead to chemical strengthening of glass when smaller alkali ions from the glass are exchanged with larger alkali ions from a molten salt bath. Ion exchange also affects the refractive index of the glass and can be used to fabricate planar waveguides. Silver ion exchange can also create glass surfaces with anti-microbial properties to help prevent the spread of infectious diseases.

Exercises

(12.1) What is the diffusivity of a polymer chain consisting of 25 monomer units with a segment length of 2 Å? Assume that the polymer chain obeys the freely jointed chain model and diffuses in a fluid medium having a shear viscosity of 10 Pa·s.

(12.2) Explain in your own words the conditions under which the Stokes-Einstein relation is obeyed. Give two examples of systems where the Stokes-Einstein relation is likely to be valid. Give two examples of systems where the Stokes-Einstein relation is not applicable. Why does the Stokes-Einstein relation break down for these systems?

(12.3) Suppose that the reptation time for a polymer chain is t_L = 10⁴ s at a temperature of T = 300 K. When the temperature is increased to T = 360 K, the reptation time reduces to 100 s.
(a) What is the activation barrier for the tube diffusivity process over this range of temperatures?
(b) Suppose that the length of the polymer chain is doubled. What is the reptation time at T = 320 K?


(12.4) A chemically strengthened glass sheet has an initial concentration of 15 mol% Na2O and 0% K2O. It is ion exchanged in a molten salt bath of KNO3. The inter-diffusion coefficient for the K⁺/Na⁺ exchange is 10⁻¹⁰ cm²/s at 400 °C and has an activation barrier of 1 eV. The network dilation coefficient of the glass is 0.001/(mol% K2O). The Young's modulus is 75 GPa, and the Poisson's ratio is 0.20. The glass transition temperature is 630 °C. The glass sheet has a thickness of 1.0 mm.
(a) Make a plot showing the inter-diffusion coefficient as a function of inverse temperature. Label the axes appropriately. Assume that the inter-diffusion coefficient is independent of the local composition.
(b) What are the lowest and highest temperatures at which the ion exchange can be performed? Why?
(c) How much time is required to achieve 1 mol% of K2O at a penetration depth of 50 μm in the glass using a salt bath temperature of 400 °C? Assume that the surface concentration of K2O is equal to the initial concentration of Na2O in the glass.
(d) Make a plot showing the amount of time required to achieve 1 mol% of K2O at a penetration depth of 50 μm as a function of salt bath temperature. Label the axes appropriately.
(e) Make a plot of the concentration profile of K2O in the glass as a function of depth for a 10-hour ion exchange at 400 °C. Assume that the glass is 1.0 mm thick. Show the concentration profile through the entire thickness of the glass. Label the axes appropriately.
(f) Make a plot of the stress in the glass as a function of depth, again assuming a 10-hour ion exchange at 400 °C and a glass thickness of 1.0 mm. Show the stress profile through the entire thickness of the glass, remembering that force balance must be satisfied. Stress relaxation can be neglected for this calculation. Label the axes appropriately.
(g) How much compressive stress is generated at the surface of the 1.0-mm thick glass as a result of the ion exchange process?
(h) Now consider the same glass but with a thickness of 0.2 mm. How much compressive stress is generated at the surface of this thinner glass? Why is it either different or the same as the result for the 1.0-mm thick glass?


(i) Describe at least two ways that the surface compressive stress in the glass can be increased. What are the relative pros and cons of these approaches?
(j) Describe at least two ways that the ion exchange time to achieve a target depth of layer can be reduced. What are the relative pros and cons of these approaches?

References

[1] C. C. Miller, "The Stokes-Einstein Law for Diffusion in Solution," Proc. Royal Soc. London Series A 106, 724 (1924).
[2] M. Mazars, "Statistical Physics of the Freely Jointed Chain," Phys. Rev. E 53, 6297 (1996).
[3] P.-G. de Gennes, "Reptation of a Polymer Chain in the Presence of Fixed Obstacles," J. Chem. Phys. 55, 572 (1971).
[4] M. Rubinstein and R. H. Colby, Polymer Physics, Oxford University Press (2003).
[5] A. K. Varshneya and J. C. Mauro, Fundamentals of Inorganic Glasses, 3rd ed., Elsevier (2019).
[6] S. S. Kistler, "Stresses in Glass Produced by Nonuniform Exchange of Monovalent Ions," J. Am. Ceram. Soc. 45, 59 (1962).
[7] J. C. Mauro, A. Tandia, K. D. Vargheese, Y. Z. Mauro, and M. M. Smedskjaer, "Accelerating the Design of Functional Glasses through Modeling," Chem. Mater. 28, 4267 (2016).
[8] S. I. Najafi, Introduction to Glass Integrated Optics, Artech House (1992).
[9] N. F. Borrelli, W. Senaratne, Y. Wei, and O. Petzold, "Physics and Chemistry of Antimicrobial Behavior of Ion-Exchanged Silver in Glass," ACS Mater. Interfaces 7, 2195 (2015).
[10] T. Yamaguchi, S. Tsukuda, T. Ishiyama, J. Nishii, T. Yamashita, H. Kawazoe, and T. Omata, "Proton-Conducting Phosphate Glass and its Melt Exhibiting High Electrical Conductivity at Intermediate Temperatures," J. Mater. Chem. A 6, 23628 (2018).
[11] T. Omata, T. Yamaguchi, S. Tsukuda, T. Ishiyama, J. Nishii, T. Yamashita, and H. Kawazoe, "Proton Transport Properties of Proton-Conducting Phosphate Glasses at their Glass Transition Temperatures," Phys. Chem. Chem. Phys. 21, 10722 (2019).


CHAPTER 13

Kinetics of Phase Separation

13.1 Thermodynamics of Mixing

With Fick's first law of Eq. (3.4), the thermodynamic driving force for the flux of matter is the concentration gradient of the diffusing species. The flux acts in the direction opposite to the concentration gradient, and diffusion ceases when the concentration gradient becomes zero. However, some homogeneous solutions of two or more components are thermodynamically favored to demix, i.e., to separate into two phases of differing composition [1–4]. Such demixing necessarily involves uphill diffusion, i.e., diffusion against a concentration gradient [5,6]. In these systems, the fully mixed, homogeneous distribution of matter is no longer the equilibrium state. In this chapter, we will consider both the thermodynamics and kinetics of phase separation.

Let us begin with the thermodynamics of mixing. Consider a system consisting of two components, A and B, which are mixed at a temperature T and pressure P to yield a solution. The system has three options [1]:
• To stay unmixed, i.e., to remain no more than a physical mixture of A and B phases.
• To mix on an atomic scale and form a complete solution of A-B.
• To mix in preferential ratios, forming various compounds of A-B.
The degree of mixing depends on the Gibbs free energy of the mixed versus unmixed systems. If the free energy of mixing, ΔGm, is positive, then the phases will remain separated (e.g., oil and water). However, if ΔGm is negative over a certain range of compositions, then some level of mixing will take place within that range.

Figure 13.1 shows a graphical representation of the mixing problem. The initially unmixed system has a Gibbs free energy of $G_1 = (1-c)G_A^{\circ} + cG_B^{\circ}$, where $G_A^{\circ}$ is the free energy of pure component A, $G_B^{\circ}$ is the free energy of pure component B, 1 − c is the mole fraction of A, and c is the mole fraction of B. After A and B are mixed, the solution has a Gibbs free energy of $G_2 = G_1 + \Delta G_m$, where ΔGm is the free energy of mixing.
Materials Kinetics, ISBN 978-0-12-823907-0, https://doi.org/10.1016/B978-0-12-823907-0.00021-2
© 2021 Elsevier Inc. All rights reserved.


Figure 13.1 Mixing of two pure components, A and B, having mole fractions of 1 − c and c, respectively. The total Gibbs free energy before mixing is $G_1 = (1-c)G_A^{\circ} + cG_B^{\circ}$. After mixing, the Gibbs free energy is $G_2 = G_1 + \Delta G_m$. (Reproduced from Varshneya and Mauro [1]).

The free energy of mixing is given by:

$\Delta G_m = \Delta H_m - T\,\Delta S_m,$    (13.1)

where ΔHm is the enthalpy of mixing (i.e., the heat of mixing), ΔSm is the entropy of mixing, and T is the absolute temperature. For ideal and regular solutions, the entropy of mixing in the two-component system is given by [7]:

$\Delta S_m = -R\left[(1-c)\ln(1-c) + c\ln c\right],$    (13.2)

where R is the gas constant. Note that entropy always increases upon mixing, achieving a maximum at c = 1/2. Hence, entropy acts to lower the Gibbs free energy of mixing in Eq. (13.1). For ideal solutions, ΔHm = 0, so ΔGm is negative at all temperatures, and the resulting equilibrium will be a fully mixed solution with the molar formula A₁₋cBc. For regular solutions, ΔHm ≠ 0 and is given by:

$\Delta H_m = \alpha c(1-c),$    (13.3)

where α is the excess interaction energy, defined by:

$\alpha = N_A Z\left[E_{AB} - \frac{1}{2}(E_{AA} + E_{BB})\right].$    (13.4)


Here, N_A is Avogadro's number, Z is the coordination number, E_AB is the energy of the A-B bond, and E_AA and E_BB are the energies of the A-A and B-B bonds, respectively. Note that both Eqs. (13.2) and (13.3) are symmetric about c = 1/2 but adopt different shapes as a function of c.

From Eqs. (13.3) and (13.4), the enthalpy of mixing can be either positive or negative depending on whether bonding is preferred between like or unlike species. If E_AB is greater than the average of E_AA and E_BB, then ΔHm is positive and bonding between like species is preferred. On the other hand, ΔHm is negative if bonding between unlike species is preferred, i.e., when E_AB is less than the average of E_AA and E_BB. In this case, the system gives off heat upon mixing (i.e., the mixing is exothermic). Hence, for exothermic mixing the free energy of mixing, ΔGm, is negative at all temperatures, and mixing is always thermodynamically favored, as indicated in Figure 13.2.

If ΔHm is positive, i.e., when mixing is endothermic, then the shape of the ΔGm(c) curve depends on the temperature. At high temperatures, the entropic term, TΔSm, is dominant such that the Gibbs free energy of mixing is negative and has a single minimum at c = 0.5. Therefore, at sufficiently high temperatures, mixing is always favored, as shown in Figure 13.3.
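Eqs. (13.1)–(13.3) combine into a simple numerical sketch of the regular-solution model. The value of α and the two temperatures below are arbitrary assumptions chosen to illustrate the entropy-dominated and enthalpy-dominated regimes:

```python
from math import log

R = 8.314  # gas constant, J/(mol*K)

def delta_G_mix(c, T, alpha):
    """Regular-solution free energy of mixing, Eqs. (13.1)-(13.3), J/mol."""
    dH = alpha * c * (1 - c)                          # Eq. (13.3)
    dS = -R * ((1 - c) * log(1 - c) + c * log(c))     # Eq. (13.2)
    return dH - T * dS                                # Eq. (13.1)

alpha = 20000.0  # positive (endothermic) excess interaction energy, J/mol
cs = [i / 100 for i in range(1, 100)]

# High temperature: entropy dominates, single minimum at c = 0.5
G_hot = [delta_G_mix(c, 1500.0, alpha) for c in cs]
# Low temperature: enthalpy competes, and double minima appear
G_cold = [delta_G_mix(c, 600.0, alpha) for c in cs]

print(min(G_hot), G_cold[49], min(G_cold))
```

At the lower temperature, the curve is positive at c = 0.5 but negative near the edges, which is the saddle shape with two symmetric minima discussed below.
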

Figure 13.2 Gibbs free energy of mixing when the enthalpy of mixing is negative, i.e., when mixing is both enthalpically and entropically favored. (Reproduced from Varshneya and Mauro [1]).


Figure 13.3 Gibbs free energy of mixing when the enthalpy of mixing is positive, i.e., when mixing is favored entropically but not enthalpically. The figure shows the case of high temperature, when the entropic term is dominant. (Reproduced from Varshneya and Mauro [1]).

When the temperature is lowered, DGm(c) starts to look like a saddle as the single minimum at c ¼ 0.5 splits into two minima symmetric about the c ¼ 0.5 composition. The appearance of these double minima is due to the intrinsic difference in shape between DSm(c) in Eq. (13.2) and DHm(c) in Eq. (13.3). This situation is plotted in Figure 13.4(a), where the Gibbs free energy has minima at two preferred compounds, labeled “a” and “b”. The solutions having compositions that fall along the tie-line in Figure 13.4(a) will therefore separate into a mixture of the termini compositions “a” and “b” in order to minimize the free energy of the system. Compositions falling outside of this tie-line regime (i.e., at c < a or c > b) will still prefer to mix at the atomic level. The Gibbs free energy of mixing is used to plot the phase diagram for a binary regular solution, as in Figure 13.4(b). Here, the upper consolute temperature, Tc, indicates the lowest temperature where homogenous mixing is always thermodynamically favored, i.e., at temperatures T  Tc mixing is always preferred, such that the solution is single-phased for all compositions. The upper consolute temperature is also known as the critical temperature. This book belongs to Alice Cartes ([email protected])


Kinetics of Phase Separation


Figure 13.4 (a) Gibbs free energy of mixing when the enthalpy of mixing is positive and at a lower temperature, T1, compared to Figure 13.3. The competition between the enthalpic and entropic terms leads to a saddle-shaped curve for the Gibbs free energy of mixing. Compositions falling between points “a” and “b” will exhibit phase separation. (b) The corresponding immiscibility dome. Above the upper consolute temperature, Tc, mixing is thermodynamically favored. At temperatures below Tc, phase separation will occur. The width of the region where phase separation is favored increases with decreasing temperature of the system, i.e., as the enthalpy of mixing becomes more dominant compared to the entropic term. (Reproduced from Varshneya and Mauro [1]).

At temperatures below the upper consolute temperature, i.e., T < Tc, phase separation can occur. The region where phase separation is favored is known as the immiscibility dome. This dome is shown in Figure 13.4(b), where the bounds of the dome are given by the locations of the free energy minima at each temperature below Tc. When both "a" and "b" are liquid phases, the demixing process is called liquid-liquid immiscibility.


13.2 Immiscibility and Spinodal Domes

The immiscibility dome defines the region of immiscibility on the phase diagram, also known as the miscibility gap or the binodal curve. Within the immiscibility dome, there exists a smaller spinodal dome where the second derivative of the free energy with respect to concentration is negative. Inside this spinodal dome, the system is unstable with respect to small fluctuations, which leads to phase separation through the process of spinodal decomposition. Outside of the spinodal dome but still within the miscibility gap, the system is stable with respect to small fluctuations, leading to phase separation by droplet nucleation. Spinodal decomposition and droplet nucleation are the two fundamental types of liquid-liquid immiscibility. Figure 13.5 shows the thermodynamic origins of both types of phase separation. The immiscibility dome and spinodal dome are plotted in Figure 13.5(a), and the corresponding Gibbs free energy of mixing is plotted in Figure 13.5(b). Figure 13.5(c) plots the first derivative of the Gibbs free energy with respect to concentration, and Figure 13.5(d) plots the second derivative. The bounds of the immiscibility dome are given by the minima in the Gibbs free energy curve. At lower temperatures, the minima become further separated, indicating a greater range of compositions over which phase separation is favored. As the temperature is raised, the two minima come closer together and eventually merge at the upper consolute temperature, Tc. While the edges of the immiscibility dome are governed by the first derivative of the Gibbs free energy curve, the bounds of the spinodal dome are given by its inflection points, i.e., the zero crossing points of the second derivative. Within the spinodal dome, the second derivative of the Gibbs free energy is negative with respect to small compositional fluctuations, leading to local instability and a continuous evolution of the spinodal phase.
Outside of the spinodal dome, the second derivative is positive, and phase separation occurs by droplet nucleation. The instability of compositions in the spinodal dome can be seen by performing a Taylor series expansion of the Gibbs free energy of the system:

$$G(\langle c \rangle + \delta c) = G(\langle c \rangle) + \delta c \left.\frac{dG}{dc}\right|_{c=\langle c \rangle} + \frac{1}{2}(\delta c)^2 \left.\frac{d^2 G}{dc^2}\right|_{c=\langle c \rangle} + \cdots, \qquad (13.5)$$

where ⟨c⟩ is the average concentration and δc is a small fluctuation in the local concentration. Given that the concentration fluctuations are random,


Figure 13.5 (a) Immiscibility and spinodal dome and corresponding (b) Gibbs free energy, (c) first derivative, and (d) second derivative of the Gibbs free energy curve. The bounds of the immiscibility dome correspond to the minima points in the Gibbs free energy diagram, and the bounds of the spinodal dome correspond to its inflection points. (Reproduced from Varshneya and Mauro [1]).


⟨δc⟩ = 0, such that the first-order term of the Taylor series does not contribute in an average sense. Since (δc)² is positive regardless of the sign of δc, the Taylor series is dominated by the quadratic term, i.e.,

$$G(\langle c \rangle + \delta c) \approx G(\langle c \rangle) + \frac{1}{2}(\delta c)^2 \left.\frac{d^2 G}{dc^2}\right|_{c=\langle c \rangle}. \qquad (13.6)$$

Therefore, if the curvature is negative, d²G/dc² < 0, then small fluctuations will lead to a decrease in the free energy of the system. This instability with respect to fluctuations is the defining feature of compositions within the spinodal dome.
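The spinodal bounds follow directly from this curvature criterion. As a hedged sketch, assume the regular-solution curvature d²G/dc² = −2Ω + RT/[c(1−c)] per mole (Ω is the interaction energy of the model above); setting the curvature to zero gives the spinodal, and the curvature first vanishes at c = 0.5 when T = Ω/2R, the upper consolute temperature:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def spinodal_bounds(omega, T):
    """Zeros of d2G/dc2 = -2*omega + R*T/(c*(1-c)), i.e., c*(1-c) = R*T/(2*omega)."""
    p = R * T / (2.0 * omega)   # required value of c*(1-c) on the spinodal
    disc = 1.0 - 4.0 * p
    if disc < 0.0:
        return None             # T above the upper consolute temperature: no spinodal
    half = math.sqrt(disc) / 2.0
    return (0.5 - half, 0.5 + half)

omega = 100e3               # J/mol interaction energy, as in Exercise 13.1
Tc = omega / (2.0 * R)      # upper consolute temperature of this model
bounds = spinodal_bounds(omega, 0.5 * Tc)   # spinodal bounds at T = Tc/2
```

Sweeping the temperature from Tc/2 up to Tc traces out the spinodal dome, which closes at c = 0.5 as T approaches Tc.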

13.3 Phase Separation Kinetics

The kinetics of phase separation depend on whether immiscibility occurs via droplet nucleation or spinodal decomposition. The evolution from a fully mixed, homogeneous phase to a phase-separated system is shown in Figure 13.6. Figure 13.6(a) shows the case of droplet nucleation. Since droplet nucleation occurs outside of the spinodal dome, the starting composition is necessarily closer to one phase than the other. The majority phase will become the matrix phase, into which the second-phase droplets will form. As shown in Figure 13.6(a), at a constant temperature, droplet

Figure 13.6 (a) Droplet nucleation and (b) spinodal decomposition over time. (Reproduced from Varshneya and Mauro [1]).


nucleation is characterized by an invariance of second-phase composition over time. In other words, the composition of the droplet is always the same, from its initial nucleation and throughout the growth process. Also, there is always a sharp interface between the matrix and droplet phases at all stages of growth. Droplet nucleation tends to lead to a fairly random distribution of droplet positions in the matrix, with low connectivity among the droplets. An example of droplet nucleation for a calcium aluminosilicate liquid is shown in Figure 13.7. The phase evolution during the spinodal decomposition process is shown in Figure 13.6(b). With spinodal decomposition, the starting composition falls within the spinodal dome, i.e., where the two phases are closer in concentration to each other. Spinodal decomposition is characterized by a continuous variation of the concentration between phases. As such, the interface between the phases is initially very diffuse. The interface eventually sharpens over time, and a fully sharp interface between phases is present in the final microstructure. The spinodal microstructure consists of two highly non-spherical, worm-like phases having a fairly regular geometric spacing. Unlike droplet nucleation, with spinodal decomposition the phases form an interconnected network. Figure 13.8 shows an example of a spinodally phase separated system, viz., an alkaline earth boroaluminosilicate glass system, where one of the phases has been leached out through an acid treatment. The high surface area of the resulting structure makes spinodally phase separated systems especially attractive as catalytic substrates or filters [8,9].

Figure 13.7 Droplet nucleation in a calcium aluminosilicate system. (Image by Nicholas Clark (Penn State)).


Figure 13.8 Spinodal phase separation of an alkaline earth boroaluminosilicate glass, followed by acid leaching of one of the phases. (Image by the author).

Figure 13.9 shows an example of using spinodal decomposition to create a high surface area microstructure with interconnected porosity. The initial system is a block copolymer blend with organic and inorganic additives. Upon spinodal decomposition, the inorganic material separates into one phase, and the remaining polymer phase can be pyrolyzed to leave behind a highly porous inorganic network. In Figure 13.9(c), the remaining inorganic skeleton consists of TiNb2O7. The interconnected network of porosity provided by the spinodal microstructure provides an ideal morphology for catalysis, and the fast mass transfer provided by the open porous network is also suitable for use as electrode materials for rechargeable batteries [9].

13.4 Cahn-Hilliard Equation

The kinetics of phase separation are quantified using the Cahn-Hilliard equation [10], which can be derived by considering the local free energy over a volume, V:

$$F_V = \frac{1}{V} \int_V \left[ f_V(c) + \frac{\kappa}{2} (\nabla c)^2 \right] dV, \qquad (13.7)$$

where FV is the Helmholtz free energy (i.e., the constant-volume free energy), fV is the free energy of the homogeneous system, and the second term in the integral of Eq. (13.7) is the free energy contribution due to inhomogeneity. Here, κ is called the gradient energy coefficient, and the term (κ/2)(∇c)² is known as the gradient energy. Note that in going from Eqs. (13.6)


Figure 13.9 (a) Schematic phase diagram showing immiscibility and spinodal domes. (b) Fabrication of a hierarchically porous material via curing of an organic additive in the porous inorganic spinodal microstructure. (c) Scanning electron microscopy (SEM) image of the porous TiNb2O7 skeleton structure. (d) Isosurface imaging from 3D tomography of the porous material. (Reproduced with permission from Ref. [9]).

to (13.7), we have switched from Gibbs free energy (for isobaric systems) to Helmholtz free energy (for isochoric systems) in order to perform the integral over a constant V. The next step involves taking a variational derivative, also known as a functional derivative [11]. Given a functional,

$$Y = \int I\!\left(x, y(x), \frac{dy}{dx}\right) dx, \qquad (13.8)$$

where y(x) and dy/dx are independent variables, through calculus of variations we have

$$\frac{\delta Y}{\delta y(x)} = \frac{\partial I}{\partial y} - \frac{d}{dx}\,\frac{\partial I}{\partial (dy/dx)}. \qquad (13.9)$$


Applying the variational derivative to Eq. (13.7), we have

$$\frac{\delta F_V}{\delta c} = \frac{\partial f_V(c)}{\partial c} - \kappa \nabla^2 c, \qquad (13.10)$$

which in one dimension, x, becomes

$$\frac{\delta F}{\delta c} = \frac{\partial f(c)}{\partial c} - \kappa \frac{\partial^2 c}{\partial x^2}. \qquad (13.11)$$
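Eq. (13.11) can be sanity-checked numerically by comparing it against a brute-force derivative of a discretized free energy. The double-well model f(c) = (1/4)(c² − 1)², with f′(c) = c³ − c, is an illustrative choice not taken from the text:

```python
import numpy as np

def free_energy(c, kappa=0.5, dx=1.0):
    """Discretized F = sum_i [f(c_i) + (kappa/2)*((c_{i+1}-c_i)/dx)**2] * dx, periodic."""
    grad = (np.roll(c, -1) - c) / dx
    return np.sum(0.25 * (c**2 - 1.0) ** 2 + 0.5 * kappa * grad**2) * dx

def functional_derivative(c, kappa=0.5, dx=1.0):
    """Eq. (13.11): df/dc - kappa * d2c/dx2, with the model f'(c) = c**3 - c."""
    lap = (np.roll(c, 1) - 2.0 * c + np.roll(c, -1)) / dx**2
    return c**3 - c - kappa * lap

rng = np.random.default_rng(1)
c = rng.uniform(-0.5, 0.5, 32)
k, eps = 7, 1e-6
c_plus, c_minus = c.copy(), c.copy()
c_plus[k] += eps
c_minus[k] -= eps
# Central difference of F with respect to c[k]; with dx = 1 this should
# reproduce functional_derivative(c)[k].
fd = (free_energy(c_plus) - free_energy(c_minus)) / (2.0 * eps)
```

Agreement between the finite-difference value and the analytic expression confirms that the gradient-energy term contributes the −κ∂²c/∂x² piece of the functional derivative.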

Note that for the one-dimensional description, we have dropped the V subscript from the free energy. The partial derivative of the homogeneous free energy with respect to composition can be written in terms of the difference in the chemical potentials for the homogeneous system:

$$\frac{\partial f(c)}{\partial c} = \mu_B - \mu_A, \qquad (13.12)$$

where μA and μB are the chemical potentials of components A and B, respectively, assuming a homogeneous system. If we let μ′A and μ′B denote the corresponding chemical potentials for the inhomogeneous system, then from Eq. (13.11) we have:

$$\mu'_B - \mu'_A = \frac{\delta F}{\delta c} = \frac{\partial f(c)}{\partial c} - \kappa \frac{\partial^2 c}{\partial x^2}. \qquad (13.13)$$

Equilibrium of the inhomogeneous system requires that the chemical potentials are identical, i.e., μ′A,eq = μ′B,eq. Thus, μ′B − μ′A can be considered as the thermodynamic driving force for phase separation. The corresponding flux of matter is given by:

$$\vec{J}_B = -L \nabla \left(\mu'_B - \mu'_A\right) = -L \nabla \frac{\delta F_V}{\delta c}, \qquad (13.14)$$

where L is the kinetic coefficient. In one dimension, Eq. (13.14) becomes:

$$J_B = -L \frac{\partial}{\partial x}\left(\mu'_B - \mu'_A\right) = -L \frac{\partial}{\partial x}\frac{\delta F}{\delta c}. \qquad (13.15)$$


Following Fick's second law, we have

$$\frac{\partial c}{\partial t} = -\frac{\partial J_B}{\partial x} = \frac{\partial}{\partial x}\left(L \frac{\partial}{\partial x}\frac{\delta F}{\delta c}\right). \qquad (13.16)$$

Substituting Eq. (13.11) into Eq. (13.16), we have:

$$\frac{\partial c}{\partial t} = \frac{\partial}{\partial x}\left[L \frac{\partial}{\partial x}\left(\frac{\partial f(c)}{\partial c} - \kappa \frac{\partial^2 c}{\partial x^2}\right)\right]. \qquad (13.17)$$

This is the Cahn-Hilliard equation governing the kinetics of phase separation in a binary fluid. In three dimensions, the Cahn-Hilliard equation becomes:

$$\frac{\partial c}{\partial t} = \nabla \cdot \left[L \nabla \left(\frac{\partial f(c)}{\partial c} - \kappa \nabla^2 c\right)\right]. \qquad (13.18)$$

For a constant kinetic coefficient, Eq. (13.18) simplifies as:

$$\frac{\partial c}{\partial t} = L \nabla^2 \left(\frac{\partial f(c)}{\partial c} - \kappa \nabla^2 c\right). \qquad (13.19)$$

Unlike Fick's laws, the Cahn-Hilliard equation allows for uphill diffusion if demixing enables a lowering of the free energy of the system.
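A minimal explicit finite-difference sketch of Eq. (13.19) in one dimension illustrates this behavior. The double-well free energy with f′(c) = c³ − c is a common model choice, not from the text, and the grid and time step are hypothetical:

```python
import numpy as np

def cahn_hilliard_1d(c, L=1.0, kappa=1.0, dx=1.0, dt=0.01, steps=4000):
    """Explicit Euler update of dc/dt = L * d2/dx2 (f'(c) - kappa * d2c/dx2), periodic."""
    for _ in range(steps):
        lap_c = (np.roll(c, 1) - 2.0 * c + np.roll(c, -1)) / dx**2
        mu = c**3 - c - kappa * lap_c    # generalized potential, df/dc - kappa*lap(c)
        lap_mu = (np.roll(mu, 1) - 2.0 * mu + np.roll(mu, -1)) / dx**2
        c = c + dt * L * lap_mu
    return c

rng = np.random.default_rng(0)
c0 = 0.01 * rng.standard_normal(128)     # small random fluctuations about c = 0
c_final = cahn_hilliard_1d(c0.copy())
# Inside the spinodal (f''(0) < 0), the fluctuations grow into demixed domains,
# while the divergence form of the update conserves the mean composition.
```

The growth of initially tiny fluctuations, with the mean composition held fixed, is precisely the uphill diffusion that Fick's laws cannot capture.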

13.5 Phase-Field Modeling

The phase-field method is a powerful approach for numerical simulation of microstructural and morphological evolution in materials [12,13]. The phase-field method describes the microstructure of a material through a combination of conserved and non-conserved field variables. The field variables are continuously adjustable across interfacial regions, providing a mechanism for implementing diffuse interface theory, which will be discussed in Chapter 15. The phase-field method thereby replaces boundary conditions at the interface with an appropriate phase-field order parameter. This phase field takes a distinct value (e.g., 0 or 1) in each of the phases, with smooth changes at the interface.


The evolution of the field variables in time and space is governed by a combination of the Cahn-Hilliard equation (Eq. (13.18)) for nonlinear diffusion and suitable equations for other relevant kinetic phenomena, such as relaxation (Chapter 21), heat flow (Chapter 26), etc. Phase-field modeling has been applied to a wide range of problems in materials science, including dendrite formation, liquid-liquid immiscibility, grain growth, crack propagation, electromigration, domain evolution in thin films, and more. The interested reader is referred to the excellent work of Long-Qing Chen and colleagues at Penn State University for reviews of the latest developments in phase-field modeling [13]. A recent example demonstrating successful application of phase-field modeling is the design of novel ferroelectric crystals that simultaneously achieve both high optical transparency and ultra-high piezoelectricity. This is an especially difficult problem since most high-performance piezoelectric materials exhibit a high degree of light scattering at their domain walls. Guided by phase-field modeling, Chen and coworkers developed a new microstructure of Pb(Mg1/3Nb2/3)O3-PbTiO3 (PMN-PT) crystals that achieves near-perfect optical transparency while simultaneously attaining ultra-high piezoelectricity [14]. The key breakthrough was discovering how to tailor the microstructure via an applied ac electric field. Figure 13.10 shows a comparison of the phase-field modeled microstructures (top) with their experimental counterparts (bottom). Note the distinct differences in the microstructures of the samples made using dc versus ac electric fields. As shown in Figure 13.11, samples produced under a dc field are optically opaque, whereas samples produced under an ac field can be made almost perfectly transparent.

13.6 Summary

An initially homogeneous system can phase separate if demixing will lower the free energy of the system. While entropy always favors mixing, immiscibility can occur if the enthalpy of mixing is positive (i.e., if the mixing is endothermic). Liquid-liquid immiscibility can occur by either droplet nucleation or spinodal decomposition. Spinodal decomposition occurs when the system is locally unstable with respect to small compositional fluctuations, which occurs when the second derivative of Gibbs free energy with respect to concentration is negative. The kinetics of phase separation are described by the Cahn-Hilliard equation, a nonlinear diffusion equation that captures uphill diffusion in systems exhibiting immiscibility. The phase-field method is a numerical technique implementing the Cahn-Hilliard equation and other relevant kinetic equations to simulate the microstructural evolution of materials.


Figure 13.10 Development of a novel Pb(Mg1/3Nb2/3)O3-PbTiO3 (PMN-PT) microstructure to achieve simultaneous high piezoelectricity and optical transparency using an applied ac electric field. The top images show the predictions using the phase-field method. The bottom images show the corresponding experimental microstructures obtained from scanning electron microscopy (SEM). (Images courtesy Long-Qing Chen (Penn State)).

Figure 13.11 Novel PMN-PT material combining high optical transparency with high piezoelectricity, discovered using the phase-field method. (Image courtesy Long-Qing Chen (Penn State)).


Exercises

(13.1) Consider a binary regular solution of A and B with an excess interaction energy of a = 100 kJ/mol:
(a) Calculate the upper consolute temperature (Tc) of this system.
(b) Plot the Gibbs free energy of mixing as a function of concentration at a temperature of T = Tc/2. What is the miscibility gap at this temperature? What is the region where spinodal phase separation is favored?
(c) Calculate and plot the immiscibility dome for this system at temperatures between Tc/2 and Tc.
(d) Calculate the spinodal dome for this system. Plot the spinodal dome on the same diagram as the immiscibility dome.
(13.2) Discuss three potential applications for systems that phase separate by droplet nucleation.
(13.3) Discuss three potential applications for systems that phase separate by spinodal decomposition.
(13.4) Explain how two different types of phase separation could appear in the same system. For example, how could droplets be nucleated inside a spinodally phase-separated matrix? Use phase diagrams to explain your reasoning.
(13.5) Derive Fick's second law from the Cahn-Hilliard equation. What assumptions must be made to obtain Fick's diffusion equation from the Cahn-Hilliard formulation? Under what conditions are these assumptions valid?

References

[1] A. K. Varshneya and J. C. Mauro, Fundamentals of Inorganic Glasses, 3rd ed., Elsevier (2019).
[2] P. F. James, "Liquid-Phase Separation in Glass-Forming Systems," J. Mater. Sci. 10, 1802 (1975).
[3] K. S. McGuire, A. Laxminarayan, and D. R. Lloyd, "Kinetics of Droplet Growth in Liquid-Liquid Phase Separation of Polymer-Diluent Systems: Experimental Results," Polymer 36, 4951 (1995).
[4] R. N. Singh and F. Sommer, "Segregation and Immiscibility in Liquid Binary Alloys," Rep. Prog. Phys. 60, 57 (1997).
[5] R. Krishna, "Uphill Diffusion in Multicomponent Mixtures," Chem. Soc. Rev. 44, 2812 (2015).
[6] M. Colangeli, A. De Masi, and E. Presutti, "Microscopic Models for Uphill Diffusion," J. Phys. A 50, 435002 (2017).
[7] P. A. Rock, Chemical Thermodynamics, University Science Books (1983).


[8] K. Okuyama, "Formation of a Strong, High-Porosity Spinodal Silica and its Impregnation with a Perfluoro Ionomer to Obtain a High Performance Solid Acid Catalyst Composite," J. Mater. Chem. 10, 973 (2000).
[9] S. Kim and J. Lee, "Spinodal Decomposition: A New Approach to Hierarchically Porous Inorganic Materials for Energy Storage," National Sci. Rev., https://doi.org/10.1093/nsr/nwz217.
[10] J. W. Cahn and J. E. Hilliard, "Free Energy of a Nonuniform System. I. Interfacial Free Energy," J. Chem. Phys. 28, 258 (1958).
[11] I. M. Gelfand and R. A. Silverman, Calculus of Variations, Courier Corporation (2000).
[12] L.-Q. Chen, "Phase-Field Method of Phase Transitions/Domain Structures in Ferroelectric Thin Films: A Review," J. Am. Ceram. Soc. 91, 1835 (2008).
[13] L.-Q. Chen, "Phase-Field Models for Microstructure Evolution," Annu. Rev. Mater. Res. 32, 113 (2002).
[14] C. Qiu, B. Wang, N. Zhang, S. Zhang, J. Liu, D. Walker, Y. Wang, H. Tian, T. R. Shrout, Z. Xu, L.-Q. Chen, and F. Li, "Transparent Ferroelectric Crystals with Ultrahigh Piezoelectricity," Nature 577, 350 (2020).


CHAPTER 14

Nucleation and Crystallization

14.1 Kinetics of Crystallization

A liquid cooled below its normal freezing point is known as a supercooled liquid. The supercooled liquid is out of equilibrium compared to the stable crystal state. As such, there is a thermodynamic driving force for the supercooled liquid to crystallize [1,2]. Crystallization is a two-stage process. The first stage is nucleation, where a crystal nucleus is randomly formed in the liquid. A nucleus is a nanoscale periodic arrangement of atoms that is not yet large enough to have a recognizable growth plane. Once a stable nucleus is formed, the next stage is crystal growth. In order for the supercooled liquid to crystallize, both the nucleation and crystal growth steps must occur, and they must occur in that order. To suppress crystallization, it is sufficient to suppress either the nucleation or crystal growth step, or to switch the order of the processes. Following the kinetic theory of crystallization, the degree of crystallization can be determined by the following steps [1]:
• Calculate the nucleation rate as a function of temperature.
• Calculate the crystal growth rate as a function of temperature.
• Combine these two rates to determine the volume fraction of crystallization in the supercooled liquid.
These steps are combined in the Johnson-Mehl-Avrami equation, which will be presented in Section 14.7.

14.2 Classical Nucleation Theory

A nucleus is a periodic assemblage of atoms without recognizable crystal growth planes. A nucleus is a precursor to a crystal, i.e., it serves as a starting point for growth of the crystal. The crystal itself is an assemblage of atoms with a regular, periodic structure that has recognizable growth habit planes [1].

Materials Kinetics ISBN 978-0-12-823907-0 https://doi.org/10.1016/B978-0-12-823907-0.00013-3

© 2021 Elsevier Inc. All rights reserved.



Nucleation occurs because the atoms in the system are constantly vibrating as a result of thermal energy. Suppose that every atomic vibration would allow for an atom to join a nucleus. In this case, the nucleation rate, I, would be simply:

$$I = n\nu, \qquad (14.1)$$

where n is the number of atoms per unit volume and ν is the vibrational frequency. Note that in Eq. (14.1), the nucleation rate is given per unit volume. According to classical nucleation theory [3-5], there are two barriers to accomplishing nucleation: the kinetic barrier and the thermodynamic barrier. The kinetic barrier, ΔED, is the activation energy required for an atom to jump across the liquid-nucleus interface. This likely involves the atom breaking bonds with its neighbors, and it must also involve some repositioning of the atom into the more ordered structure of the nucleus. The thermodynamic barrier, W*, corresponds to the critical work required to form a thermodynamically stable nucleus. The existence of the thermodynamic and kinetic barriers to nucleation means that Eq. (14.1) must be multiplied by Boltzmann probability factors to obtain the actual nucleation rate [1]:

$$I = n\nu \exp\left(-\frac{W^*}{kT}\right)\exp\left(-\frac{\Delta E_D}{kT}\right), \qquad (14.2)$$

where k is Boltzmann's constant and T is absolute temperature. Nucleation is classified into two types of processes [6]:
• Homogeneous nucleation, which is the random assembly of atoms into a nucleus, leading to a lowering of the free energy of the system. The probability of homogeneous nucleation is equal throughout the volume of the material.
• Heterogeneous nucleation, which is the random assembly of atoms on a preexisting surface or interface. The existence of such interfaces lowers the thermodynamic barrier for nucleation, increasing its probability of occurrence.

14.3 Homogeneous Nucleation

Let us begin with homogeneous nucleation. According to classical nucleation theory, the change in free energy (ΔG) due to nucleus formation consists of two contributions: a lowering of the free energy of the volume


of the nucleus and an increase of the free energy due to formation of a liquid-nucleus interface [1]. Considering these two contributions, the change in free energy due to nucleus formation is:

$$\Delta G = \frac{4}{3}\pi r^3 \Delta G_x + 4\pi r^2 \gamma, \qquad (14.3)$$

where r is the radius of the nucleus, ΔGx is the change in free energy per unit volume as a result of nucleus formation, and γ is the surface tension of the liquid-nucleus interface. Note that for any temperature below the melting point, ΔGx is a negative quantity since the nucleus has a lower free energy than the liquid. In classical nucleation theory, the nucleus is assumed to be spherical, and a sharp interface between the liquid and the nucleus is also assumed. These assumptions will be relaxed in the next chapter, where we discuss advanced nucleation theories. Figure 14.1 plots the volumetric and interfacial contributions to the free energy of nucleus formation. When the radius of the nucleus is small, the surface energy term in Eq. (14.3) dominates. Hence, the nucleus is unstable

Figure 14.1 Volumetric and interfacial contributions to the free energy of nucleation. At temperatures below the melting point, the volumetric contribution is always negative, since the more ordered nucleus has a lower free energy than the supercooled liquid. This is offset by the positive free energy associated with creating a new interface between the nucleus and the liquid. At small values of the nucleus radius, r, the interfacial term is dominant, so the nucleus is unstable and prefers to shrink. Once the nucleus size exceeds the critical value, r*, the nucleus becomes stable since the free energy will be lowered by increasing rather than decreasing the size of the nucleus. The work associated with forming a stable nucleus is W*.


since the downward direction of the free energy is to reduce r, i.e., to shrink the nucleus until it vanishes in order to remove the unfavorable interface. For large values of r, the volumetric free energy term is dominant. In this regime, the nucleus is stable since the downward trajectory of the free energy is in the direction of increasing r, i.e., in the direction of growing the nucleus. The critical-sized nucleus, r = r*, occurs where the slope of the free energy curve with respect to nucleus size is zero. Hence, the critical-sized nucleus occurs when:

$$\frac{\partial \Delta G}{\partial r} = 4\pi r^2 \Delta G_x + 8\pi r \gamma = 0. \qquad (14.4)$$

Solving for r, the critical-sized nucleus, r*, is given by

$$r^* = -\frac{2\gamma}{\Delta G_x}. \qquad (14.5)$$

Since ΔGx is negative at temperatures lower than the melting point, r* is a positive value. At r < r*, the positive interfacial term is dominant, and the nucleus prefers to shrink since ∂ΔG/∂r > 0. At r > r*, the negative volumetric term is dominant, and the nucleus prefers to grow since ∂ΔG/∂r < 0. By inserting Eq. (14.5) into Eq. (14.3), we can calculate the work associated with creating a critical-sized nucleus, W*:

$$W^* = \frac{4}{3}\pi\left(-\frac{2\gamma}{\Delta G_x}\right)^3 \Delta G_x + 4\pi\left(-\frac{2\gamma}{\Delta G_x}\right)^2 \gamma = -\frac{32\pi\gamma^3}{3(\Delta G_x)^2} + \frac{16\pi\gamma^3}{(\Delta G_x)^2} = \frac{16\pi\gamma^3}{3(\Delta G_x)^2}. \qquad (14.6)$$

14.4 Heterogeneous Nucleation With heterogeneous nucleation, surfaces or interfaces can serve as preferential sites for nuclei to form. Figure 14.2 shows an example of a crystal


Figure 14.2 (a) Heterogeneous nucleation of a crystal nucleus at an interface with a solid substrate. The contact angle, θ, is given by Young's equation, γls = γxs + γlx cos θ, which balances the interfacial forces. (b) Schematic diagram of the "spherical cap" shape of a nucleating droplet, which is used in classical nucleation theory to model heterogeneous nucleation on a flat substrate. Only a section of the sphere of radius r needs to be formed by thermal fluctuations for a contact angle, θ.

nucleus growing at the interface between the liquid and a solid substrate. In this case, the total interfacial energy has three contributions:

$$\gamma_{lx} A_{lx} + \gamma_{xs} A_{xs} + \gamma_{ls} A_{ls}, \qquad (14.7)$$

where γ is surface tension, A is the interfacial area, and the subscripts l, x, and s refer to the liquid, nucleating crystal, and solid substrate phases, respectively. For example, γlx refers to the surface tension between the liquid and nucleating crystal phases, and Alx refers to its corresponding area. As shown in Figure 14.2(a), the wetting ability of the crystal nucleus on the solid substrate is described by its contact angle, θ, also known as the wetting angle. The contact angle is the equilibrium angle determined by a


balancing of the interfacial forces acting on the nucleus. The equilibrium contact angle is given by Young's equation:

$$\gamma_{ls} = \gamma_{xs} + \gamma_{lx}\cos\theta. \qquad (14.8)$$

A contact angle of θ = 0° indicates a perfectly wetting substrate, and a contact angle of θ = 180° indicates a perfectly non-wetting substrate. For any θ < 180°, the presence of the foreign substrate acts to lower the thermodynamic barrier for nucleation. The kinetic barrier is unaffected by the presence of the substrate. Assuming the "spherical cap" shape for the heterogeneously nucleating droplet, as depicted in Figure 14.2(b), the critical radius for heterogeneous nucleation is the same as for homogeneous nucleation [1], namely:

$$r^* = -\frac{2\gamma_{lx}}{\Delta G_x}. \qquad (14.9)$$

However, the work of nucleation is lowered. For a contact angle of θ, the work corresponding to formation of a critical-sized nucleus through heterogeneous nucleation is given by [1]:

$$W^*_{hetero} = \frac{16\pi\gamma_{lx}^3}{3(\Delta G_x)^2}\left[\frac{1}{4}(1 - \cos\theta)^2(2 + \cos\theta)\right]. \qquad (14.10)$$

The first factor on the right-hand side of Eq. (14.10) is identical to Eq. (14.6), the work associated with forming a critical-sized nucleus via homogeneous nucleation. Hence, the work of heterogeneous nucleation in Eq. (14.10) can be written in terms of the corresponding work of homogeneous nucleation:

$$W^*_{hetero} = W^*_{homo}\left[\frac{1}{4}(1 - \cos\theta)^2(2 + \cos\theta)\right]. \qquad (14.11)$$

It follows from Eq. (14.11) that a perfectly wetting substrate (θ = 0°) has no thermodynamic barrier to nucleation,

$$\theta = 0° \;\Rightarrow\; W^*_{hetero} = 0, \qquad (14.12)$$

i.e., nucleation is limited only by the kinetics of transporting atoms to the growing nucleus. For a perfectly non-wetting substrate,

$$\theta = 180° \;\Rightarrow\; W^*_{hetero} = W^*_{homo}, \qquad (14.13)$$


i.e., the presence of a perfectly non-wetting substrate has no impact on the nucleation process, which proceeds identically to homogeneous nucleation. One requirement for heterogeneous nucleation on a perfectly wetting substrate is matching of the lattice spacing of the exposed face of the substrate to the growth habit planes of the nucleating crystal. In this case, the growth of the new crystal would be oriented to match that of the substrate. This is an example of epitaxial growth, where crystals are grown with a well-defined orientation with respect to the substrate.
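The wetting factor in Eq. (14.11), together with its limits in Eqs. (14.12) and (14.13), can be sketched directly:

```python
import math

def wetting_factor(theta_deg):
    """Bracketed factor in Eq. (14.11): (1/4)*(1 - cos(theta))**2 * (2 + cos(theta))."""
    ct = math.cos(math.radians(theta_deg))
    return 0.25 * (1.0 - ct) ** 2 * (2.0 + ct)

f0 = wetting_factor(0.0)      # Eq. (14.12): perfectly wetting, the barrier vanishes
f90 = wetting_factor(90.0)    # partial wetting: half the homogeneous barrier
f180 = wetting_factor(180.0)  # Eq. (14.13): non-wetting, full homogeneous barrier
```

The factor increases monotonically from 0 to 1 as θ goes from 0° to 180°, so any wettable substrate reduces the thermodynamic barrier below its homogeneous value.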

14.5 Nucleation Rate

The Gibbs free energy of nucleation can be written in terms of enthalpic and entropic contributions as ΔG = ΔH − TΔS. Let us assume that ΔH and ΔS are independent of temperature, T, near the melting point, Tm. Hence,

$$\frac{d\Delta G}{dT} = -\Delta S. \qquad (14.14)$$

Integrating both sides from Tm to T, we have:

$$\Delta G_T - \Delta G_{T_m} = -\Delta S\,(T - T_m). \qquad (14.15)$$

Defining ΔT = T − Tm as the difference between the current temperature and the melting point,

$$\Delta G_x = \Delta T\, \Delta S_f, \qquad (14.16)$$

where ΔSf = −ΔS is the entropy of fusion and ΔGx = ΔG_T − ΔG_Tm is the free energy of crystallization. Since ΔGx = 0 at the melting temperature, Tm:

$$\Delta G_x = \frac{\Delta T\, \Delta H_f}{T_m}, \qquad (14.17)$$

where ΔHf is the enthalpy of fusion. Inserting Eqs. (14.6) and (14.17) into Eq. (14.2), the nucleation rate can be approximated as:

$$I = n\nu \exp\left[-\frac{16\pi\gamma^3}{3\Delta H_f^2 kT}\left(\frac{T_m}{\Delta T}\right)^2\right] \exp\left(-\frac{\Delta E_D}{kT}\right). \qquad (14.18)$$

From Eq. (14.18), when ΔT = 0, i.e., when T = T_m, the first exponential factor is zero, indicating that nucleation is thermodynamically controlled near the melting point. As T decreases, the second exponential term goes to zero, since nucleation is kinetically controlled at low temperatures. Figure 14.3 shows a plot of nucleation rate versus temperature, showing that there is a maximum rate of nucleation at some temperature below T_m. This maximum occurs where there is an optimum tradeoff between the thermodynamic and kinetic factors in Eq. (14.18). As the temperature is lowered below T_m, the thermodynamic driving force for nucleation increases. However, the kinetics of the nucleation process decrease at lower temperatures. Hence, nucleation is thermodynamically limited at temperatures near the melting point and kinetically limited at low temperatures.
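The tradeoff between the two exponential factors in Eq. (14.18) can be made concrete with a small numerical sketch. All parameter values below are arbitrary dimensionless choices (in units where k = 1), not data from the text; they are selected only so that the maximum in the nucleation rate falls at an interior temperature:

```python
import math

def nucleation_rate(T, Tm=1000.0, A=1.0, B=200.0, dE=1.5e4):
    """Dimensionless sketch of Eq. (14.18), with B lumping the factor
    16*pi*gamma^3 / (3 * dHf^2 * k) and dE standing in for Delta_E_D / k.
    All values are illustrative, not fitted to any material."""
    dT = T - Tm
    if dT >= 0.0:
        return 0.0  # no thermodynamic driving force at or above the melting point
    thermodynamic = math.exp(-B * (Tm / dT) ** 2 / T)
    kinetic = math.exp(-dE / T)
    return A * thermodynamic * kinetic

# The product of the two factors peaks at some temperature below Tm:
rates = {T: nucleation_rate(T) for T in range(300, 1000, 10)}
T_peak = max(rates, key=rates.get)
print(f"maximum nucleation rate at T = {T_peak} (below Tm = 1000)")
```

Near T_m the thermodynamic factor suppresses the rate, while at low temperatures the kinetic factor does, reproducing the shape of the curve in Figure 14.3.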

14.6 Crystal Growth Rate

Once a critical-sized nucleus has formed, crystal growth occurs by transporting additional atoms across the liquid-crystal interface and incorporating them into the crystal. Hence, the crystal growth rate depends on how rapidly atoms can diffuse from the liquid regions across the interface. Crystal

Figure 14.3 Nucleation rate versus temperature, as in Eq. (14.18). The nucleation rate is zero at temperatures above the melting point, since the liquid has a lower Gibbs free energy than the crystal. The nucleation rate increases as the liquid is supercooled below the melting point, due to the increased thermodynamic driving force for nucleation. At low temperatures, it becomes exponentially more difficult to overcome the kinetic barrier, so the nucleation rate decreases. (Reproduced from Varshneya and Mauro [1]).


growth can be described using a simple two-state model, as depicted in Figure 14.4. An activation barrier, ΔE′, must be overcome for an atom to transition from the liquid to the crystal site [7]. Once the atom has been incorporated into the growing crystal, the change in free energy is ΔG_x. The atomic jump distance to cross the liquid-crystal interface is denoted a. Following Figure 14.4, the frequency of successful jumps from the liquid to the crystal, Γ_{l→x}, is given by:

$$\Gamma_{l\to x} = \nu \exp\left(-\frac{\Delta E'}{kT}\right). \qquad (14.19)$$

Likewise, the frequency of successful jumps from the crystal to the liquid (Γ_{x→l}) is:

$$\Gamma_{x\to l} = \nu \exp\left(-\frac{\Delta E' - \Delta G_x}{kT}\right). \qquad (14.20)$$

Combining Eqs. (14.19) and (14.20), the net crystal growth rate, u, is

$$u = a\left(\Gamma_{l\to x} - \Gamma_{x\to l}\right), \qquad (14.21)$$

or

$$u = a\nu \exp\left(-\frac{\Delta E'}{kT}\right)\left[1 - \exp\left(\frac{\Delta G_x}{kT}\right)\right]. \qquad (14.22)$$

When T > T_m, ΔG_x is positive, so the crystal growth rate is negative. In other words, crystals will dissolve at temperatures above the melting point. When T = T_m, ΔG_x is zero, which yields a zero crystal growth rate, i.e., u = 0. At temperatures below the melting point, ΔG_x is negative, so there

Figure 14.4 Crystal growth involves atoms from the liquid overcoming an activation barrier to be incorporated into the lower free energy crystal state. (Reproduced from Varshneya and Mauro [1]).


is a thermodynamic driving force for crystal growth. As the temperature is lowered, the thermodynamic driving force for crystal growth becomes larger. However, this is offset by a slowing of the kinetics at low temperatures. Hence, as with nucleation, the crystal growth process is thermodynamically controlled at temperatures near the melting point and kinetically controlled at low temperatures. If the mechanism controlling the activation barrier for crystal growth is the same as that controlling atomic diffusion, the diffusion coefficient, D, of the atoms crossing the liquid-crystal interface can be written as:

$$D = a^2\nu \exp\left(-\frac{\Delta E'}{kT}\right). \qquad (14.23)$$

Combining Eqs. (14.22) and (14.23), the crystal growth rate can be written in terms of the diffusion coefficient as:

$$u = \frac{D}{a}\left[1 - \exp\left(\frac{\Delta G_x}{kT}\right)\right]. \qquad (14.24)$$

Eq. (14.24) is the Wilson-Frenkel theory of crystal growth [1]. If we assume that the Stokes-Einstein relation, D = kT/6πηa, is valid (see Section 12.2), where a is used as an approximation of the atomic radius, then the crystal growth rate can be written in terms of the shear viscosity of the liquid, η, as:

$$u = \frac{kT}{6\pi\eta a^2}\left[1 - \exp\left(\frac{\Delta G_x}{kT}\right)\right]. \qquad (14.25)$$

In other words, a lower viscosity of the liquid should lead to a higher crystal growth rate. This relationship is experimentally validated in Figure 14.5, which plots the growth rate of cristobalite from liquid silica. The solid line shows the model fit to the experimental data points using Eq. (14.25).
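The sign behavior described above can be checked numerically. The sketch below evaluates Eq. (14.22) with ΔG_x taken from Eq. (14.17); all parameter values are hypothetical (in units where k = 1 and the prefactor aν = 1) and serve only to illustrate growth below T_m, dissolution above it, and zero net rate at T_m:

```python
import math

def growth_rate(T, Tm=1000.0, dHf=1.0e4, dE=5.0e3):
    """Sketch of Eq. (14.22) with Delta_G_x from Eq. (14.17), in units where
    k = 1 and a*nu = 1. All parameter values are hypothetical."""
    dGx = (T - Tm) * dHf / Tm    # Eq. (14.17): negative below the melting point
    kinetic = math.exp(-dE / T)  # jump frequency factor
    return kinetic * (1.0 - math.exp(dGx / T))

print(f"u(T = Tm)   = {growth_rate(1000.0):+.3e}  (no net growth at the melting point)")
print(f"u(T = 900)  = {growth_rate(900.0):+.3e}  (growth below Tm)")
print(f"u(T = 1100) = {growth_rate(1100.0):+.3e}  (dissolution above Tm)")
```

The bracketed thermodynamic factor changes sign at T_m, while the kinetic prefactor only scales the magnitude, exactly as discussed above.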

14.7 Johnson-Mehl-Avrami Equation

The nucleation and crystal growth rates must be combined to quantify the overall kinetics of crystallization. Figure 14.6 shows overlaid curves of nucleation and crystal growth processes. Both processes are thermodynamically limited at temperatures near the melting point. Whereas the rate of nucleation becomes zero at temperatures above T_m, the crystal growth


Figure 14.5 Crystal growth rate of cristobalite from liquid silica. (Reproduced from Varshneya and Mauro [1]).

rate becomes negative, implying melting or dissolution of the crystals at temperatures where the liquid state is thermodynamically favored. Both the nucleation and crystal growth processes are kinetically limited at low temperatures, vanishing in the limit of T → 0 K, since kinetic processes are forbidden in the limit of absolute zero temperature. Between these two extremes, there is a temperature of peak nucleation rate and a temperature of peak crystal growth rate. Typically, the nucleation rate peaks at a temperature lower than the crystal growth peak, as in Figure 14.6. The shaded region in Figure 14.6 indicates the region where nucleation and crystal growth occur simultaneously. Supercooled liquids that crystallize easily, such as water, have a large overlap in their nucleation and crystal growth curves. Hence, crystallization occurs rapidly and often uncontrollably in such systems. Other supercooled liquids, such as silica, are more difficult to crystallize. Such systems have a smaller overlap between the nucleation and crystal growth curves. The volume fraction of crystallization, X = V_x/V, where V_x is the volume of crystals and V is the total volume of the system, is determined by combining the nucleation and crystal growth rates. Assuming that nucleation and crystal growth occur simultaneously at a common temperature,


Figure 14.6 Overlay of nucleation and crystal growth rates. Nucleation and crystal growth occur simultaneously in the shaded region. (Reproduced from Varshneya and Mauro [1]).

the volume fraction of crystals is given by the Johnson-Mehl-Avrami equation [8-12]:

$$X = \frac{V_x}{V} = 1 - \exp\left(-\frac{\pi}{3} I u^3 t^4\right) \approx \frac{\pi}{3} I u^3 t^4, \qquad (14.26)$$

where t is the heat treatment time. The equation becomes significantly more complicated for non-isothermal heat treatments [13-15], since the nucleation and crystal growth rates vary by orders of magnitude with temperature, and the nucleation step must precede the crystal growth step.
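As a numerical illustration of Eq. (14.26), the sketch below evaluates the crystallized fraction for assumed, order-of-magnitude values of I and u (illustrative only, not experimental data) and compares it against the small-X approximation:

```python
import math

def jma_fraction(I, u, t):
    """Eq. (14.26): X = 1 - exp(-(pi/3) * I * u^3 * t^4)."""
    return 1.0 - math.exp(-math.pi / 3.0 * I * u ** 3 * t ** 4)

# Assumed illustrative values: I in nuclei/(m^3 s), u in m/s, t in s.
I, u = 1.0e6, 1.0e-9
for t in (1.0e2, 1.0e3, 1.0e4):
    X = jma_fraction(I, u, t)
    approx = math.pi / 3.0 * I * u ** 3 * t ** 4  # small-X limit of Eq. (14.26)
    print(f"t = {t:8.0f} s: X = {X:.3e} (small-X approximation: {approx:.3e})")
```

At low crystal fractions the exponential may be linearized, which is why the approximation X ≈ (π/3)Iu³t⁴ in Eq. (14.26) is excellent far from full crystallization.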

14.8 Time-Temperature-Transformation Diagram

The Johnson-Mehl-Avrami equation in Eq. (14.26) can be used to construct a time-temperature-transformation (T-T-T) diagram [16], as in Figure 14.7. The T-T-T diagram plots time on the x axis and temperature on the y axis. The "transformation" part of the diagram is a curve


Figure 14.7 Example time-temperature-transformation (T-T-T) diagram, where the contour corresponds to a constant volume fraction of crystallization, X. (Reproduced from Varshneya and Mauro [1] based on Uhlmann [16]).

representing a constant volume fraction of crystals, i.e., constant X = V_x/V, as calculated from Eq. (14.26). For example, a value of X = 10⁻⁶ is often used as a typical instrumental limit for detection of crystals. The T-T-T diagram can be used to determine the cooling rate required to achieve a desired value of X. To calculate the cooling rate, draw a tangent line connecting the constant-X contour to the melting point at the initial time (t = 0), as in Figure 14.7. The slope of this line gives the cooling rate to obtain the specified volume fraction of crystallization, X. Note that the transformation curve would shift to the left in Figure 14.7 for a lower value of X, indicating that a faster cooling rate is required to achieve a lower volume fraction of crystallization. Conversely, the transformation curve would shift to the right for a higher value of X, indicating that a slower cooling rate is needed to achieve a higher volume fraction of crystallization. In other words, more time is required to achieve a greater degree of crystallization.
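A constant-X contour of this kind can be traced by inverting Eq. (14.26) for the time required to reach a given X at each temperature. The sketch below uses toy temperature dependences for I(T) and u(T) (arbitrary illustrative forms in units where k = 1, not fitted to any material) and estimates a critical cooling rate from the tangent between (t = 0, T = T_m) and the nose of the contour:

```python
import math

Tm = 1000.0  # melting point (arbitrary units, k = 1)

def I_rate(T):
    """Toy nucleation rate with the form of Eq. (14.18) (illustrative only)."""
    dT = T - Tm
    if dT >= 0.0:
        return 0.0
    return math.exp(-200.0 * (Tm / dT) ** 2 / T - 1.5e4 / T)

def u_rate(T):
    """Toy crystal growth rate with the form of Eq. (14.22) (illustrative only)."""
    dGx = (T - Tm) * 1.0e4 / Tm
    return math.exp(-5.0e3 / T) * (1.0 - math.exp(dGx / T))

def t_to_reach_X(X, I, u):
    """Invert Eq. (14.26) for the time at which the crystal fraction reaches X."""
    return (3.0 * math.log(1.0 / (1.0 - X)) / (math.pi * I * u ** 3)) ** 0.25

# Trace the X = 1e-6 contour of the T-T-T diagram and locate its "nose".
X = 1.0e-6
contour = {}
for T in range(400, 1000, 5):
    I, u = I_rate(T), u_rate(T)
    if I > 0.0 and u > 0.0:
        contour[T] = t_to_reach_X(X, I, u)
T_nose = min(contour, key=contour.get)
t_nose = contour[T_nose]

# The tangent from (t = 0, T = Tm) to the nose approximates the critical cooling rate.
print(f"nose of the T-T-T contour: T = {T_nose}, t = {t_nose:.3e}")
print(f"estimated critical cooling rate ~ {(Tm - T_nose) / t_nose:.3e} (temperature units per second)")
```

Cooling faster than this tangent slope avoids crossing the contour, so the detectable crystal fraction X is never reached, which is the graphical construction described above.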


14.9 Glass-Ceramics

Supercooled liquids that have a large overlap in their nucleation and crystal growth curves, i.e., a large overlap region in Figure 14.6, typically exhibit rapid and uncontrollable crystallization during cooling. On the other hand, in some supercooled liquids there is a greater separation between the nucleation and crystal growth curves, where the peak in the nucleation rate occurs significantly below the peak in the crystal growth curve. In such systems, it is possible to control the nucleation and crystal growth steps independently to achieve greater control over the resulting microstructure. The most famous example of such controlled crystallization is the class of materials known as glass-ceramics. Glass-ceramics consist of a glassy matrix with embedded crystals [17-21]. The population density of the crystals is governed by the nucleation step, and the size of the crystals is determined by the crystal growth step. Figure 14.8 shows an example heat treatment path to make a glass-ceramic sample, where there are separate

Figure 14.8 Controlled crystallization by two-step heat treatment. The first nucleation step determines the concentration of crystal nuclei, which ultimately controls the number of crystals per unit volume. Then, the second crystal growth step controls the size of the crystals. As shown in Figure 14.6, the peak nucleation rate typically occurs at a lower temperature compared to the peak crystal growth rate. (Reproduced from Varshneya and Mauro [1]).


stages for nucleation and the subsequent crystal growth. A comprehensive review of glass-ceramic materials is provided in the excellent textbook by Höland and Beall [21]. Glass-ceramics were originally discovered by S. Donald Stookey (1915-2014) at Corning Glass Works [18], when he heated a glass sample in a furnace that was accidentally set to 900 °C rather than the intended 600 °C. When Stookey opened the furnace, he expected to find a molten mess. However, what he discovered was a solid, opaque sample. When he pulled the sample out of the furnace, Stookey then accidentally dropped it onto the concrete floor. Instead of breaking, the sample simply bounced off the floor. Completely by accident, Stookey had just invented the first glass-ceramic! He then sought to understand the physics and chemistry of what he had accomplished and use that understanding to design new products for Corning. Through careful experimental understanding, Stookey invented several highly successful glass-ceramic products, including the famous CorningWare cookware. For his invention of glass-ceramics, in 1986 Stookey was awarded the National Medal of Technology and Innovation from President Ronald Reagan. Glass-ceramics are now ubiquitous in our everyday lives [20]. Glass-ceramics with ultra-low thermal expansion coefficients are used as stovetops and cookware. Glass-ceramics are also used for dental restorations owing to their high strength, toughness, and corrosion resistance, as well as their ability to match the color of natural teeth [22]. An example of a dental glass-ceramic microstructure is shown in Figure 14.9. Here, the crystals are grown using an initial nucleation step at 490 °C for 2 hours, followed by a crystal growth step at 725 °C for 1 hour. The interlocking crystals help to improve the mechanical performance of the glass-ceramic compared to the base glass sample prior to ceramming.
As shown in Figure 14.10, substantial variation in the optical properties of the glass-ceramic can be achieved by varying the heat treatment conditions. Figure 14.10 shows the same base composition as in Figure 14.9, but with different crystal growth treatments. The primary crystalline phase switches from lithium metasilicate to lithium disilicate at higher temperatures. This transition also corresponds to a transition from an off-white color of the lithium metasilicate samples to a bright white color of the lithium disilicate samples. The leftmost sample in Figure 14.10 shows the material after only the nucleation step, without any crystal growth. This sample is optically transparent because the crystals have not grown sufficiently large to scatter light in the visible spectrum [23].


Figure 14.9 Scanning electron micrograph of a lithium metasilicate glass-ceramic produced through a nucleation step at 490 °C for 2 hours, followed by a crystal growth step at 725 °C for 1 hour. (Image by Anthony DeCeanne (Penn State)).

Figure 14.10 Glass-ceramic samples of the same composition as in Figure 14.9, but using different nucleation and crystal growth conditions. The first temperature, which was held constant for this series, is the nucleation temperature (540 °C, 2 hours). The second temperature is the growth temperature, which was varied by 25 °C for each sample, with a 1-hour treatment at each temperature. The primary crystallite phase transitioned from lithium metasilicate to lithium disilicate at higher temperature. This transition is clearly seen by the transition from an off-white color to a bright white color. Note that the leftmost sample in the figure was only heat treated with a nucleation step, without any crystal growth. Hence, the crystals embedded in the glassy matrix are not large enough to scatter visible light. All of the glass-ceramic samples are produced from a common parent glass, shown at the bottom of the figure. (Image by Anthony DeCeanne (Penn State)).


14.10 Summary

A supercooled liquid is any liquid cooled below its normal freezing point. Crystallization from a supercooled liquid is a two-step process, consisting of nucleation and crystal growth. Nucleation occurs as a result of random fluctuations in the liquid. A nucleus is a precursor to a crystal, lacking recognizable growth habit planes. Within the framework of classical nucleation theory, the stability of a nucleus is the result of a competition between the volume free energy of nucleation and the free energy associated with creating a new liquid-nucleus interface. Classical nucleation theory considers two barriers to nucleation: a thermodynamic barrier and a kinetic barrier. A nucleus is stable if it overcomes these barriers and becomes larger than the critical nucleus size. Nucleation can be either homogeneous or heterogeneous. Homogeneous nucleation has the same probability throughout the supercooled liquid. Heterogeneous nucleation promotes accelerated nucleation in the presence of a free surface or other interface, where the thermodynamic barrier to nucleation is lowered. Once a stable nucleus is formed, crystal growth proceeds by transporting atoms from the supercooled liquid and incorporating them into the crystal. The overall rate of crystallization is described by the Johnson-Mehl-Avrami equation, which combines nucleation and crystal growth rates at a particular temperature. The microstructure of the material can be tailored by controlling the temperature and time of the nucleation and crystal growth steps, for example, in the development of glass-ceramic materials.

Exercises

(14.1) In glass-ceramics, the nucleation process is aided by use of a nucleating agent, which is a chemical species that helps control the nucleation process by reducing either the thermodynamic or kinetic barrier to nucleation. However, the nucleating agent itself does not become part of the desired crystalline phase. Search the literature for a paper that uses a nucleating agent to help in forming a glass-ceramic material. Provide the citation for the paper.
(a) What are the practical applications of this glass-ceramic material?
(b) What is/are the target crystalline phase(s) for growth in this system?
(c) How do the properties of the material change as a result of ceramming (i.e., how do the properties of the final glass-ceramic compare to those of the starting glass)?
(d) What is the nucleating agent for this system? Why was this particular nucleating agent chosen?
(e) Do you think the nucleating agent is acting to lower the thermodynamic barrier or the kinetic barrier to nucleation? Why? Justify your answer.
(14.2) Using the nucleation and crystal growth rate curves in Figure 14.6, sketch heat treatment schedules (i.e., temperature as a function of time, as in Figure 14.8), to achieve:
(a) A high population of large crystals.
(b) A low volume fraction of large crystals.
(c) A high volume fraction of small crystals.
(14.3) Using Figure 14.7 as a guide, sketch time-temperature-transformation (T-T-T) curves to obtain (a) higher and (b) lower volume fractions of crystallization. Using the plot, indicate the required cooling rates to achieve these different volume fractions of crystallization.
(14.4) Prove that the critical radius of curvature for a heterogeneously nucleating droplet, as in Eq. (14.9), is equal to the critical radius of a homogeneously nucleating droplet, as in Eq. (14.5). Show the steps of your derivation for the critical radius of heterogeneous nucleation, assuming the "spherical cap" shape for the heterogeneously nucleating droplet shown in Figure 14.2(b).
(14.5) Derive Eq. (14.10), the critical work of heterogeneous nucleation for a nucleating droplet of contact angle θ, assuming the "spherical cap" morphology for the heterogeneously nucleating droplet, as in Figure 14.2(b).
(14.6) Glass-ceramics were discovered by accident. Give an example of another accidental scientific discovery. What were the conditions that led to this discovery, and what were the new scientific insights obtained? What is the practical impact of this discovery?

References

[1] A. K. Varshneya and J. C. Mauro, Fundamentals of Inorganic Glasses, 3rd ed., Elsevier (2019).
[2] P. G. Debenedetti, Metastable Liquids: Concepts and Principles, Princeton University Press (1996).


[3] K. F. Kelton and A. L. Greer, Nucleation in Condensed Matter, Elsevier (2010).
[4] S. Karthika, T. K. Radhakrishnan, and P. Kalaichelvi, "A Review of Classical and Nonclassical Nucleation Theories," Crystal Growth & Design 16, 6663 (2016).
[5] D. Erdemir, A. Y. Lee, and A. S. Myerson, "Nucleation of Crystals from Solution: Classical and Two-Step Models," Acc. Chem. Res. 42, 621 (2009).
[6] X. Y. Liu, "Heterogeneous Nucleation or Homogeneous Nucleation?," J. Chem. Phys. 112, 9949 (2000).
[7] W. B. Hillig and D. Turnbull, "Theory of Crystal Growth in Undercooled Pure Liquids," J. Chem. Phys. 24, 914 (1956).
[8] W. A. Johnson and R. F. Mehl, "Reaction Kinetics in Processes of Nucleation and Growth," Trans. Am. Inst. Min. Metall. Eng. 135, 416 (1939).
[9] M. Avrami, "Kinetics of Phase Change: I. General Theory," J. Chem. Phys. 7, 1103 (1939).
[10] M. Avrami, "Kinetics of Phase Change: II. Transformation-Time Relations for Random Distribution of Nuclei," J. Chem. Phys. 8, 212 (1940).
[11] M. Avrami, "Kinetics of Phase Change: III. Granulation, Phase Change, and Microstructure," J. Chem. Phys. 9, 177 (1941).
[12] J. W. Christian, The Theory of Transformations in Metals and Alloys, 2nd ed., Pergamon Press (1965).
[13] D. W. Henderson, "Thermal Analysis of Non-Isothermal Crystallization Kinetics in Glass Forming Liquids," J. Non-Cryst. Solids 30, 301 (1979).
[14] K. Matusita, T. Komatsu, and R. Yokota, "Kinetics of Non-Isothermal Crystallization Process and Activation Energy for Crystal Growth in Amorphous Materials," J. Mater. Sci. 19, 291 (1984).
[15] J. Farjas and P. Roura, "Modification of the Kolmogorov-Johnson-Mehl-Avrami Rate Equation for Non-Isothermal Experiments and its Analytical Solution," Acta Mater. 54, 5573 (2006).
[16] D. R. Uhlmann, "Glass Formation, a Contemporary View," J. Am. Ceram. Soc. 66, 95 (1983).
[17] E. D. Zanotto, "Glass Crystallization Research: A 36-Year Retrospective. Part I, Fundamental Studies," Int. J. Appl. Glass Sci. 4, 105 (2013).
[18] G. H. Beall, "Dr. S. Donald (Don) Stookey (1915-2014): Pioneering Researcher and Adventurer," Front. Mater. 3, 37 (2016).
[19] G. H. Beall, "Design and Properties of Glass-Ceramics," Ann. Rev. Mater. Sci. 22, 91 (1992).
[20] E. D. Zanotto, "Bright Future for Glass-Ceramics," Am. Ceram. Soc. Bull. 89, 19 (2010).
[21] W. Höland and G. H. Beall, Glass-Ceramic Technology, 3rd ed., John Wiley & Sons (2019).
[22] M. Montazerian and E. D. Zanotto, "Bioactive and Inert Dental Glass-Ceramics," J. Biomed. Mater. Res. A 105, 619 (2017).
[23] G. H. Beall and L. R. Pinckney, "Nanophase Glass-Ceramics," J. Am. Ceram. Soc. 82, 5 (1999).


CHAPTER 15

Advanced Nucleation Theories

15.1 Limitations of Classical Nucleation Theory

The classical nucleation theory presented in Chapter 14 has a number of appealing features. For example, it successfully accounts for the important tradeoff between thermodynamic and kinetic effects. As a liquid is supercooled, the thermodynamic driving force for nucleation increases, while the kinetics of the nucleation process are exponentially slowing. Moreover, classical nucleation theory captures the reduced thermodynamic barrier during heterogeneous nucleation compared to homogeneous nucleation. When combined with the equation for crystal growth kinetics, classical nucleation theory is part of a holistic theory of crystallization, incorporating the initial nucleation step and the subsequent crystal growth. However, classical nucleation theory also makes several assumptions, which limit its accuracy for many systems:
• Classical nucleation theory assumes that the nucleus has a spherical shape. In fact, the nucleating clusters may be highly non-spherical, as shown in Figure 15.1.
• Classical nucleation theory assumes that there is a sharp interface between the nucleus and the liquid matrix. In reality, the boundary between the nucleus and the liquid may not be so well-defined.
• Calculation of the interfacial energy assumes that the nanometer-sized nucleus can be treated as a macroscale droplet.
• The critical work associated with forming a stable nucleus is based on a rather simplistic tradeoff between surface area and volume free energy terms.
• Other metastable phases may evolve in the system, serving as sites for time-dependent heterogeneous nucleation.
As a result of these approximations, researchers have found substantial discrepancies between experimental nucleation rates and those calculated from classical nucleation theory.
Figure 15.1 Representative snapshots of lithium metasilicate and lithium disilicate nucleating clusters of different sizes (Si = yellow, O = red, and Li = pink), calculated through grand canonical Monte Carlo with an implicit glass model as discussed in Section 15.5. Note the non-spherical shapes of the nucleating clusters. (Image by McKenzie and Mauro based on Refs. [1,2].)

Materials Kinetics, ISBN 978-0-12-823907-0, https://doi.org/10.1016/B978-0-12-823907-0.00011-X. © 2021 Elsevier Inc. All rights reserved.

For example, Neilson and Weinberg [3] tested classical nucleation theory for homogeneous nucleation of lithium disilicate crystals, finding that the predicted nucleation rates are several orders of magnitude lower than the experimentally observed values. Also, the classical nucleation model could not accurately capture the observed temperature dependence of the nucleation rate. Following this work, Gonzalez-Oliver and James [4] found a similarly large discrepancy between theory and experiment for nucleation in a soda lime silicate system. Large disagreement between theory and experiment was also reported by Zanotto and James [5,6] for the barium disilicate system. In this chapter, we will discuss several approaches for improving upon classical nucleation theory. First, we shall consider a statistical mechanical treatment of nucleation, which captures a more detailed view of the energetics and probabilities associated with particles joining a nucleating cluster. Next, we discuss diffuse interface theory, where the interface between the nucleus and liquid is treated in terms of a continuously varying free energy density. We then introduce density functional theory, in which the local free energy variation of the system is treated in terms of localized changes of density between the interface of the liquid and nucleating crystal. Finally, we consider a numerical approach for atomic simulation of


nucleation by treating the matrix liquid phase using an implicit solvent model. Interested readers who wish to learn more about advanced nucleation theories are referred to the excellent textbook on nucleation by Kelton and Greer [7].

15.2 Statistical Mechanics of Nucleation

One of the primary limitations of classical nucleation theory relates to the assumptions made when calculating the work of formation of a critical-sized nucleus, W*, as in Eq. (14.6). A more detailed calculation of the energetics associated with cluster formation during nucleation can be obtained through a statistical mechanical approach [7,8]. Let us begin with the grand canonical partition function, Ξ, of a gas of N molecules in volume V with chemical potential μ:

$$\Xi(\mu, V, T) = \sum_{N=0}^{\infty} Q(N, V, T)\left(e^{\mu/kT}\right)^N, \qquad (15.1)$$

where k is Boltzmann's constant and T is temperature. Since the grand canonical ensemble is applicable to an open system, the summation is over all possible values of N. For any specific value of N, Q(N, V, T) is the corresponding partition function in the canonical ensemble,

$$Q(N, V, T) = \frac{1}{h^{3N} N!} \int e^{-H(N)/kT}\, d\mathbf{r}_1 \cdots d\mathbf{r}_N\, d\mathbf{p}_1 \cdots d\mathbf{p}_N, \qquad (15.2)$$

where h is Planck's constant, H(N) is the Hamiltonian of the N-body system, V is volume, and r_i and p_i are the position and momentum vectors of particle i. Following Kelton and Greer [7], we can modify Eq. (15.2) to account for particle-particle interactions and group the particles into nucleating clusters of size n. The grand canonical partition function becomes:

$$\Xi(\mu, V, T) = \exp\left(-\frac{u(0)}{kT}\right) \exp\left[\sum_n \frac{V n^3}{\lambda^{3n} n!} \exp\left(\frac{\mu n}{kT}\right) \int \exp\left(-\frac{u(n)(\mathbf{r}_1, \ldots, \mathbf{r}_N; \mu)}{kT}\right) d\mathbf{r}_1 \cdots d\mathbf{r}_N\right]. \qquad (15.3)$$

Here we assume that the clusters themselves are sufficiently separated so that they do not interact with each other. The summation in Eq. (15.3) is over


all cluster sizes, n. The momentum contribution to the partition function is captured by the thermal de Broglie wavelength,

$$\lambda = \left(\frac{h^2}{2\pi m kT}\right)^{1/2}, \qquad (15.4)$$

where m is the particle mass. In Eq. (15.3), u(n) is the interaction energy within a cluster of n particles, and the constant u(0) is the potential energy in the absence of any cluster. Eq. (15.3) can be written in more compact form as

$$\Xi(\mu, V, T) = \exp\left(-\frac{u(0)}{kT}\right) \exp\left(\sum_n Z_n\, e^{\mu n/kT}\right), \qquad (15.5)$$

by defining

$$Z_n = \frac{V n^3}{\lambda^{3n} n!} \int \exp\left(-\frac{u(n)(\mathbf{r}_1, \ldots, \mathbf{r}_N; \mu)}{kT}\right) d\mathbf{r}_1 \cdots d\mathbf{r}_N \qquad (15.6)$$

as the partition function for a cluster of size n. The Helmholtz free energy, F(n), of a cluster of size n can be calculated from Eq. (15.6) as:

$$F(n) = -kT \ln Z_n. \qquad (15.7)$$

From here, we can determine the expectation value of the number of clusters of each size n:

$$\langle N(n)\rangle = \exp\left(-\frac{F(n) - n\mu}{kT}\right) = \exp\left(-\frac{\Delta F(n)}{kT}\right). \qquad (15.8)$$

This can be converted into a probability, p(n), as

$$p(n) = \frac{\langle N(n)\rangle}{\sum_{n=1}^{N} n\langle N(n)\rangle}, \qquad (15.9)$$

which then gives the work of cluster formation, W(n),

$$W(n) = -kT \ln p(n), \qquad (15.10)$$

for a cluster of size n. The formulation described above can be modified to describe nucleation in various systems of interest. The key is to have an accurate model for the interaction energies within a nucleating cluster and within the original matrix phase. Also, if the density of clusters is high, then cluster-cluster interactions would need to be included. While statistical mechanics


provides a powerful approach for modeling nucleation behavior, it may not necessarily be the most practical way to solve a problem, especially for complex multi-component systems. Diffuse interface theory (Section 15.3) and density functional theory (Section 15.4) provide intermediate-level descriptions of nucleation that can be more practical to implement for many real systems.
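As a toy illustration of how Eqs. (15.7)-(15.10) are used in practice, the sketch below assumes a hypothetical capillarity-like cluster free energy, ΔF(n) = a n^(2/3) − b n in units of kT (this model and its coefficients are assumptions for illustration, not the F(n) of Eq. (15.7)), and extracts the cluster population, the cluster probabilities, and the work of cluster formation:

```python
import math

# Hypothetical cluster free energy in units of kT: a surface-like term ~ n^(2/3)
# competing with a bulk driving force ~ n. The coefficients are arbitrary.
a, b = 3.0, 0.5

def dF(n):
    """Stand-in for Delta_F(n) = F(n) - n*mu appearing in Eq. (15.8)."""
    return a * n ** (2.0 / 3.0) - b * n

# Expectation number of clusters of each size, Eq. (15.8), with kT = 1:
N_of_n = {n: math.exp(-dF(n)) for n in range(1, 201)}

# Probability of a cluster of size n, Eq. (15.9):
norm = sum(n * N for n, N in N_of_n.items())
p = {n: N / norm for n, N in N_of_n.items()}

# Work of cluster formation, Eq. (15.10), with kT = 1:
W = {n: -math.log(p[n]) for n in p}

# The critical cluster size maximizes Delta_F(n):
n_star = max(range(1, 201), key=dF)
print(f"critical cluster size n* = {n_star}, barrier dF(n*) = {dF(n_star):.2f} kT, W(n*) = {W[n_star]:.2f} kT")
```

The surface-bulk competition produces a maximum in ΔF(n) at the critical cluster size, which is the statistical mechanical analog of the critical radius in classical nucleation theory.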

15.3 Diffuse Interface Theory

While classical nucleation theory assumes a sharp, well-defined interface between the matrix phase and the nucleating cluster, the actual interface is more diffuse. For small clusters, the interfacial width between the matrix phase and the nucleating cluster can represent a significant fraction of the total cluster size. The existence of such a diffuse interfacial region is in violation of the assumption made in classical nucleation theory of a perfectly sharp interface. As such, the presence of a diffuse interface significantly changes calculation of the work associated with nucleus formation. Diffuse interface theory is a phenomenological approach specifically designed to model broad interfaces [9-12]. Let us begin with the simple case of a spherical nucleating cluster in an isotropic liquid medium, where the radial dependence of the free energy density is denoted g(r). The work of forming the nucleating cluster in this case is:

$$W(r) = 4\pi \int_0^{\infty} \left(g(r) - g_l\right) r^2\, dr, \qquad (15.11)$$

where g_l is the free energy density of the matrix phase, viz., the liquid phase. As with classical nucleation theory, we can determine the critical nucleus size by finding the value of r where the change in Gibbs free energy is a maximum. This is accomplished by setting the derivative of the work of cluster formation equal to zero,

$$\left.\frac{dW(r)}{dr}\right|_{r=r^*} = 0, \qquad (15.12)$$

and solving for r, yielding the critical-sized nucleus, r = r*. As with classical nucleation theory (Section 14.3), any nucleus that is smaller than r* is unstable and favored to shrink and ultimately vanish. Any nucleus larger than r* is stable and thermodynamically favored to grow. The critical work for forming a stable nucleus is then simply given by W* = W(r*).
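The extremum condition of Eq. (15.12) can be applied numerically once a model profile g(r) is specified. The sketch below assumes a purely illustrative profile (a smooth step of depth Δg inside the cluster plus a Gaussian peak at the diffuse interface; this functional form and its coefficients are not from the text) and locates the critical size by scanning W(R):

```python
import math

def g_minus_gl(r, R, dg=-1.0, h=2.0, w=0.5):
    """Illustrative free energy density profile relative to the liquid, g(r) - g_l:
    a smooth step of depth dg inside a cluster of size R, plus a Gaussian
    peak of height h and width w representing the diffuse interface."""
    step = dg / (1.0 + math.exp((r - R) / w))  # ~dg inside, ~0 far outside
    peak = h * math.exp(-((r - R) ** 2) / (2.0 * w ** 2))
    return step + peak

def work(R, r_max=20.0, n=2000):
    """Eq. (15.11): W = 4*pi * integral of (g(r) - g_l) r^2 dr (trapezoidal rule)."""
    dr = r_max / n
    total = 0.0
    for i in range(n):
        r1, r2 = i * dr, (i + 1) * dr
        total += 0.5 * (g_minus_gl(r1, R) * r1 ** 2 + g_minus_gl(r2, R) * r2 ** 2) * dr
    return 4.0 * math.pi * total

# Scan cluster sizes; the maximum of W(R) locates the critical nucleus r*.
sizes = [0.5 * i for i in range(1, 25)]
R_star = max(sizes, key=work)
print(f"critical nucleus size r* ~ {R_star} with W* = {work(R_star):.1f}")
```

The interfacial peak plays the role of the surface energy term in classical nucleation theory, so W(R) first rises and then falls with cluster size, and its maximum identifies r*.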


Diffuse interface theory is phenomenological in nature. Successful application of this approach requires having an appropriate form of g(r), which is rather ad hoc in nature. An example scaling of g(r) is plotted in Figure 15.2 for a diffuse interface between a nucleating crystal (solid phase) and the matrix liquid. For any temperature below the melting point, the free energy density is lower in the nucleus compared to the liquid. The free energy density increases significantly and peaks in the interfacial region due to the mismatch in bonding between the solid and liquid phases. A greater peak would be indicative of a higher surface tension between the nucleus and the liquid. The width of the interfacial region in g(r) can also be adjusted to account for either narrower interfaces (with a sharper transition between phases) or broader interfaces (with a more diffuse transition). One key result from diffuse interface theory is the ability to reproduce the observed positive temperature dependence of interfacial free energy during nucleation [13]. A more physically rigorous approach is provided by density functional theory, which also accounts for the diffuse nature of interfaces.

15.4 Density Functional Theory

Density functional theory considers local density fluctuations as the appropriate order parameter for describing the phase transition from the liquid to the crystal. Before we begin, please note that the density functional theory discussed in this chapter should not be confused with the completely independent theory of the same name in the field of computational quantum mechanics. These two theories have the same name but are otherwise unrelated. Density functional theory assumes that the thermodynamics of nucleation can be described in terms of a spatially heterogeneous density in the system. If the liquid and the nucleating crystal phases can be described by

Figure 15.2 Free energy density, g(r), of a diffuse interfacial region between a nucleating cluster (solid, gs) and a matrix phase (liquid, gl). The free energy density peaks in the interfacial region.


two distinct values of density, then the local density variation crossing the liquid-nucleus interface can be used as an order parameter to account for the transition area between these phases. The local free energy is then expressed as a functional of the density. (A functional is simply a function of a function, i.e., a mathematical function that takes another function as its argument.) Let us express the free energy of the system as a functional of the position-dependent density, $G[\rho(\mathbf{r})]$, where $\mathbf{r}$ is the position coordinate and $\rho(\mathbf{r})$ is the local density. Following Kelton and Greer [7], the free energy can be calculated by integrating over the free energy density, $g(\rho(\mathbf{r}))$:

$$G[\rho] = \int_V g(\rho(\mathbf{r})) \, d\mathbf{r}, \tag{15.13}$$

where the integral is performed over the full volume of the system, V. The work required to nucleate a second-phase cluster of n particles is:

$$W = G[\rho] - n\mu, \tag{15.14}$$

where $\mu$ is the chemical potential in the original matrix phase (i.e., the liquid). Combining Eqs. (15.13) and (15.14), we have

$$W[\rho] = \int_V \left( g(\rho(\mathbf{r})) - \mu \rho(\mathbf{r}) \right) d\mathbf{r}. \tag{15.15}$$

In order to nucleate the new phase, there must be a critical fluctuation in the density, $\rho^*$. This critical fluctuation corresponds to a stationary state where the variation of the work with respect to the order parameter (i.e., the density) is zero:

$$\left. \frac{\delta W[\rho]}{\delta \rho} \right|_{\rho = \rho^*} = 0. \tag{15.16}$$

Eq. (15.16) is the variational condition for stable nucleation. As an example, let us consider a simple one-dimensional case, where $g = g(\rho, d\rho/dx)$. For $\rho = \rho^*$, the following condition must be satisfied:

$$\delta W = \int \left[ \frac{\partial (g - \mu\rho)}{\partial \rho} \, \delta\rho + \frac{\partial (g - \mu\rho)}{\partial (\partial\rho/\partial x)} \, \delta\!\left( \frac{\partial \rho}{\partial x} \right) \right] dx = 0. \tag{15.17}$$

This gives the Euler-Lagrange equation:

$$\left. \left[ \frac{\partial (g - \mu\rho)}{\partial \rho} - \frac{d}{dx} \frac{\partial (g - \mu\rho)}{\partial (\partial\rho/\partial x)} \right] \right|_{\rho = \rho^*} = 0. \tag{15.18}$$

Generalizing to three dimensions (writing $\rho'_i \equiv \partial\rho/\partial x_i$ for the components of $\nabla\rho$), we have:

$$\left. \left[ \frac{\partial}{\partial \rho} \left( g(\rho, \nabla\rho) - \mu\rho \right) - \sum_i \frac{\partial^2}{\partial x_i \, \partial \rho'_i} \left( g(\rho, \nabla\rho) - \mu\rho \right) \right] \right|_{\rho = \rho^*} = 0, \tag{15.19}$$

which simplifies to [7]:

$$\left. \left[ \frac{\partial g(\rho, \nabla\rho)}{\partial \rho} - \sum_i \frac{\partial^2 g(\rho, \nabla\rho)}{\partial x_i \, \partial \rho'_i} \right] \right|_{\rho = \rho^*} = \mu. \tag{15.20}$$

This equation can be solved subject to the appropriate boundary conditions to obtain the critical density for stable nucleation, $\rho^*$. This is equivalent to achieving chemical equilibrium between the original matrix phase and a critical-sized cluster of the nucleating phase, i.e., $\mu^* = \mu$. The free energy barrier to nucleation, i.e., the work of nucleation, can be determined by substituting the critical density, $\rho^*$, into Eq. (15.15):

$$W[\rho^*] = \int_V \left( g(\rho^*(\mathbf{r})) - \mu \rho^*(\mathbf{r}) \right) d\mathbf{r}. \tag{15.21}$$

Hence, it is apparent that the equations in density functional theory are analogous to those used in classical nucleation theory, but within the context of a more intricate formalism. The main assumption of density functional theory is that the local density is the most suitable order parameter to represent the phase transition. This assumption clearly breaks down when the liquid and crystal phases have either the same density or very similar density values. More sophisticated versions of density functional theory may be derived, e.g., where the order parameter is a function of both position and time. The accuracy of density functional theory depends on these details, as well as the accuracy of the related assumptions used in the model.

The results from density functional theory qualitatively agree with those from diffuse interface theory, viz., that the interface between the nucleating clusters of the new phase and the original matrix phase is broad rather than sharp. This leads to a modified work of cluster formation, which can give a more accurate critical nucleus size compared to that predicted from classical nucleation theory. Since density functional theory is based on an order parameter description, it can be extended to model nucleation in a wide range of


systems, including cases that involve a coupling of more than one order parameter. Diffuse interface theory and density functional theory have been successfully applied to metallic and organic liquids [14], as well as silicate glasses [15]. Both of these advanced nucleation theories give better agreement with experimental data than classical nucleation theory. The diffuse interface and density functional approaches both accurately capture the broad nature of the liquid-nucleus interface and predict that the critical size for nucleation and the critical work associated with forming a stable nucleus are significantly less than the corresponding values predicted from classical nucleation theory. Despite these successes, neither of these approaches has been fully explored for modeling the time and temperature dependence of nucleation across a significant number of systems.
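The variational condition of Eqs. (15.16) and (15.17) can also be illustrated numerically. The sketch below restricts the one-dimensional functional of Eq. (15.15) (at coexistence, μ = 0) to a one-parameter family of tanh trial profiles and finds the width that makes the work stationary; the double-well g(ρ) and gradient coefficient κ are hypothetical model choices, for which the exact stationary width is sqrt(2κ/a).

```python
import math

A_WELL, KAPPA = 1.0, 1.0   # hypothetical double-well height and gradient coefficient

def excess_work(w, x_max=10.0, n=2000):
    """Eq. (15.15) in 1D at coexistence (mu = 0) for the trial profile
    rho(x) = (1 + tanh(x/w))/2, with g = a*rho^2*(1-rho)^2 + (kappa/2)*(drho/dx)^2."""
    h = 2.0 * x_max / n
    total = 0.0
    for i in range(n + 1):
        x = -x_max + i * h
        rho = 0.5 * (1.0 + math.tanh(x / w))
        drho = 0.5 / (w * math.cosh(x / w) ** 2)    # exact derivative of the profile
        term = A_WELL * rho**2 * (1 - rho)**2 + 0.5 * KAPPA * drho**2
        total += term if 0 < i < n else 0.5 * term  # trapezoidal rule
    return h * total

# Stationarity with respect to the single free parameter (the width w)
# is the reduced analogue of delta W / delta rho = 0 in Eq. (15.16).
widths = [0.5 + 0.01 * k for k in range(351)]       # 0.5 to 4.0
w_star = min(widths, key=excess_work)
```

For this g, the tanh family contains the exact Euler-Lagrange solution of Eq. (15.18), so the restricted search recovers the analytic width sqrt(2κ/a) ≈ 1.414 to within the grid spacing.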

15.5 Implicit Glass Model

Another more recently developed approach for modeling nucleation is the implicit glass model of McKenzie and Mauro [1,2]. The implicit glass model is a hybrid Monte Carlo approach for calculating crystal nucleation, where nucleating clusters are treated explicitly at the atomistic level while the liquid or glassy matrix phase is treated as a continuous medium by applying a generalized Born solvation model [16]. When the implicit glass model is incorporated into a grand canonical Monte Carlo framework (see Chapter 23), the thermodynamic phase equilibria and structure of the nucleating clusters can be predicted for any system where the interatomic potentials have been established.

The implicit solvent approach, also known as continuum solvation, approximates the behavior of explicit solvent particles using an effective continuous medium. To create the implicit glass model, effective Born radii are used to calculate the solvation energy per atom in a cluster. The method begins by simulating a box of atoms in the liquid phase at the nucleation temperature. Separate simulations are performed containing one atom of each element to represent the gas phase. The change in potential energy of each type of atom is calculated to obtain the work of solvation. This calculation yields the effective Born radii, which give the atomic solvation energy. The contributions to the pairwise solvation potential are summed, screening the potential with a dielectric constant to represent the liquid or glassy matrix phase. The overall solvation free energy is:


$$\Delta G_{\mathrm{solv}} = -\frac{1}{8\pi\varepsilon_0} \left( 1 - \frac{1}{\varepsilon} \right) \sum_{i=1}^{N} \sum_{j \neq i} \frac{q_i q_j}{\sqrt{r_{ij}^2 + a_i a_j \exp\left( -\dfrac{r_{ij}^2}{4 a_i a_j} \right)}}, \tag{15.22}$$

where N is the total number of atoms, ε₀ is the permittivity of free space, ε is the dielectric constant of the solvent being modeled, aᵢ is the effective Born radius of atom i, qᵢ and qⱼ are the charges of atoms i and j, and r_ij is the separation distance between each pair of atoms, i and j.

Grand canonical Monte Carlo simulations are used to simulate nucleating clusters with different numbers of atoms. The clusters themselves are physically separated from, but thermodynamically coupled to, the solvent reservoir. Initially, small clusters are sampled. Then, larger clusters are iteratively sampled using random Monte Carlo trial moves to determine optimized structures of the nucleating clusters of varying sizes. Monte Carlo techniques will be covered in detail in Chapter 23.

The implicit glass model has been successfully applied to simulate nucleation of lithium metasilicate, lithium disilicate, barium silicate, and soda lime silicate systems [1,2]. Figure 15.3 shows examples of calculated lithium metasilicate clusters, which are compared to experimentally measured nuclei of the same system, imaged using scanning electron microscopy (SEM). The implicit glass model has several advantages compared to other approaches, including:
• Each step of the nucleation process can be directly simulated, providing key physical insights at the atomic scale.
• Significant computational savings are obtained by treating the liquid/glass matrix as an implicit solvent rather than simulating each atom explicitly.
• The implicit solvent approach enables grand canonical Monte Carlo methods to work efficiently, even in a condensed phase system.
• The detailed atomic structure of the nucleating clusters can be obtained, without any assumptions regarding their morphology.
• The approach is generally applicable to nucleation in any liquid or glassy system where accurate interatomic potentials have been established.
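The double sum of Eq. (15.22) translates directly into code. The following is a minimal sketch, not the production implementation of Refs. [1,2]; the two-ion usage example at the bottom uses made-up charges, positions, and Born radii in SI units.

```python
import math

EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m

def delta_g_solv(charges, positions, born_radii, eps_solvent):
    """Generalized Born solvation free energy, Eq. (15.22).
    charges in C, positions as (x, y, z) tuples in m, Born radii in m."""
    n = len(charges)
    total = 0.0
    for i in range(n):
        for j in range(n):
            if i == j:
                continue  # Eq. (15.22) sums over pairs with j != i
            r2 = sum((positions[i][k] - positions[j][k]) ** 2 for k in range(3))
            aij = born_radii[i] * born_radii[j]
            # Generalized Born effective interaction distance
            f_gb = math.sqrt(r2 + aij * math.exp(-r2 / (4.0 * aij)))
            total += charges[i] * charges[j] / f_gb
    return -(1.0 - 1.0 / eps_solvent) * total / (8.0 * math.pi * EPS0)

# Hypothetical example: two like elementary charges 3 Angstroms apart,
# screened by a matrix with dielectric constant 10.
q = 1.602176634e-19
dG = delta_g_solv([q, q],
                  [(0.0, 0.0, 0.0), (3.0e-10, 0.0, 0.0)],
                  [1.5e-10, 1.5e-10],
                  eps_solvent=10.0)
```

Note that screening a repulsive like-charge pair lowers the energy (negative ΔG_solv), while ε = 1 (no solvent contrast) gives zero by construction.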

15.6 Summary

Classical nucleation theory provides a useful framework for understanding key elements of the nucleation process. However, the assumptions made in


Figure 15.3 (a, b) Scanning electron microscopy (SEM) images and (c) simulated clusters of lithium metasilicate calculated using the implicit glass model. The scale bars are (a) 200 nm and (b) 100 nm. Coloring scheme in (c) is: Si (yellow), O (red), and Li (pink). (Image by McKenzie and Mauro based on Refs. [1,2].)

classical nucleation theory limit its ability to make quantitatively accurate predictions of nucleation rates for many real systems. Of particular concern is the calculation of an accurate work of cluster formation for generating a stable nucleus. Statistical mechanics provides a rigorous framework for constructing a detailed model of nucleation. Statistical mechanical models are constructed in the grand canonical ensemble to allow for nucleating clusters of different sizes. Successful application of statistical mechanical techniques requires an accurate description of the interaction energies between particles.


Diffuse interface theory represents a phenomenological correction to classical nucleation theory, considering a continuously varying free energy density across the liquid-nucleus interface. In this manner, a broad interface with continuously changing free energy density can be considered. However, diffuse interface theory itself does not provide a means for calculating the variation of the free energy density. Density functional theory is a more sophisticated approach, considering the free energy as a functional of the spatially inhomogeneous density. The variation in density is related to the change in free energy density, which is then used to calculate the work of nucleation. However, density functional theory cannot be used if the nucleating cluster and the matrix phase have the same values of density. The implicit glass model is a hybrid Monte Carlo-based approach for numerically simulating nucleation. The implicit glass model involves application of a generalized Born solvation model, which effectively replaces the liquid/glassy matrix with a continuous medium and enables the computations to focus on nucleating atomic clusters in the grand canonical ensemble.

Exercises

(15.1) Critically evaluate the assumptions made in classical nucleation theory. (a) Which of these assumptions do you think are most likely to lead to inaccurate predictions of the homogeneous nucleation rate? Explain your reasoning, giving references to support your argument. (b) What modifications could be made to classical nucleation theory to overcome these limitations?

(15.2) How could diffuse interface theory be modified to incorporate more than one type of nucleating crystal? Be specific in your recommendations.

(15.3) How could density functional theory be extended to model more than one type of nucleating crystal? Be specific in your recommendations.

(15.4) How could density functional theory be modified to model nucleation in systems where the nucleating crystal and matrix phases have the same density? What would be the advantages and disadvantages


of this modified approach compared to the formulation of density functional theory in Section 15.4?

(15.5) How can density functional theory be applied to the case of heterogeneous nucleation?

References
[1] M. E. McKenzie and J. C. Mauro, "Hybrid Monte Carlo Technique for Modeling of Crystal Nucleation and Application to Lithium Disilicate Glass-Ceramics," Comput. Mater. Sci. 149, 202 (2018).
[2] M. E. McKenzie, S. Goyal, T. Loeffler, L. Cai, I. Dutta, D. E. Baker, and J. C. Mauro, "Implicit Glass Model for Simulation of Crystal Nucleation for Glass-Ceramics," npj Comput. Mater. 4, 59 (2018).
[3] G. F. Neilson and M. C. Weinberg, "A Test of Classical Nucleation Theory: Crystal Nucleation of Lithium Disilicate Glass," J. Non-Cryst. Solids 34, 137 (1979).
[4] C. J. R. Gonzalez-Oliver and P. F. James, "Crystal Nucleation and Growth in a Na2O·2CaO·3SiO2 Glass," J. Non-Cryst. Solids 38, 699 (1980).
[5] E. D. Zanotto and P. F. James, "Experimental Tests of the Classical Nucleation Theory for Glasses," J. Non-Cryst. Solids 74, 373 (1985).
[6] E. D. Zanotto and P. F. James, "Experimental Test of the General Theory of Transformation Kinetics: Homogeneous Nucleation in a BaO·2SiO2 Glass," J. Non-Cryst. Solids 104, 70 (1988).
[7] K. F. Kelton and A. L. Greer, Nucleation in Condensed Matter, Elsevier (2010).
[8] R. C. Tolman, The Principles of Statistical Mechanics, Dover (2010).
[9] L. Gránásy, "Diffuse Interface Theory of Nucleation," J. Non-Cryst. Solids 162, 301 (1993).
[10] F. Spaepen, "Homogeneous Nucleation and the Temperature Dependence of the Crystal-Melt Interfacial Tension," in H. Ehrenreich and D. Turnbull, editors: Solid State Physics, vol. 47, Academic Press (1994), pp. 1–32.
[11] L. Gránásy, "Diffuse Interface Model of Crystal Nucleation," J. Non-Cryst. Solids 219, 49 (1997).
[12] L. Gránásy, G. I. Tóth, J. A. Warren, F. Podmaniczky, G. Tegze, L. Rátkai, and T. Pusztai, "Phase-Field Modeling of Crystal Nucleation in Undercooled Liquids – A Review," Prog. Mater. Sci. 106, 100569 (2019).
[13] K. F. Kelton, "Crystal Nucleation in Liquids and Glasses," in H. Ehrenreich and D. Turnbull, editors: Solid State Physics, vol. 45, Academic Press (1991), pp. 75–178.
[14] L. Gránásy and F. Iglói, "Comparison of Experiments and Modern Theories of Crystal Nucleation," J. Chem. Phys. 107, 3634 (1997).
[15] L. Gránásy and P. F. James, "Nucleation in Oxide Glasses: Comparison of Theory and Experiment," Proc. Royal Soc. London, Series A 454, 1745 (1998).
[16] E. Pellegrini and M. J. Field, "A Generalized Born Solvation Model for Macromolecular Hybrid-Potential Calculations," J. Phys. Chem. A 106, 1316 (2002).


CHAPTER 16

Viscosity of Liquids

16.1 Introduction

The kinetics of the liquid state are generally described in terms of its viscosity. Viscosity is an intrinsically liquid-state property in which the application of a shear force leads to a continuous displacement of atoms or molecules. Unlike the elastic response of a solid material, with viscous flow this displacement continues for as long as the force is applied. Viscosity is perhaps the most important property for liquid-state processing of materials and for understanding the relaxation behavior of liquids, glasses, and polymers [1,2].

Viscosity is the inverse of fluidity and is a measure of the resistance of the liquid to shear deformation with time. The coefficient of shear viscosity, η, is defined by Newton's law of viscosity:

$$\eta = \sigma_{xy} / \dot{\varepsilon}_{xy}, \tag{16.1}$$

where σ_xy is the shear stress and ε̇_xy is the shear strain rate, i.e., the time derivative of the shear strain, ε̇_xy = ∂ε_xy/∂t. The coefficient of shear viscosity, η, is also known as the shear viscosity or simply the viscosity. From Eq. (16.1), it is clear that the viscosity has dimensions of pressure multiplied by time. Hence, the standard unit for viscosity is Pa·s. In much of the older literature, viscosity is also reported in units of Poise, which is sometimes abbreviated as "P". The conversion between Pa·s and Poise is simple: 1 Pa·s = 10 Poise.

Like diffusivity, viscosity changes by orders of magnitude with changing temperature. This remarkable scaling with temperature is one of the most significant challenges for understanding viscosity, both experimentally and theoretically. In this chapter, we focus first on experimental understanding and measurement of viscosity. We then discuss the underlying physics of viscous flow through several models for the temperature dependence of viscosity. Special topics such as non-Newtonian viscosity and the fragile-to-strong transition will also be addressed.
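As a quick check of the definitions above, Eq. (16.1) and the Pa·s-to-Poise conversion can be written out directly (the helper names are illustrative, not from the text):

```python
def shear_viscosity(shear_stress_pa, shear_strain_rate_per_s):
    """Newton's law of viscosity, Eq. (16.1): eta = sigma_xy / strain rate (Pa*s)."""
    return shear_stress_pa / shear_strain_rate_per_s

def pa_s_to_poise(eta_pa_s):
    """Unit conversion: 1 Pa*s = 10 Poise."""
    return 10.0 * eta_pa_s

# Example: a shear stress of 50 Pa sustaining a strain rate of 0.5 1/s
# corresponds to a viscosity of 100 Pa*s, i.e., 1000 Poise.
eta = shear_viscosity(50.0, 0.5)
```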


16.2 Viscosity Reference Points

Viscosity is critically important for the manufacture of glasses and polymers, as well as in the food industry. Every step of liquid-state processing of a material is governed by its viscosity. In order to have a common set of terminology, several reference points are defined at certain specific values of viscosity [1,2]. Figure 16.1 plots these reference points on a typical viscosity-temperature curve. Each viscosity reference point is an example of an isokom temperature, i.e., a temperature of constant viscosity.

Starting at high temperatures, the melting point is defined as the temperature where the viscosity equals 10 Pa·s. A viscosity of around 10 Pa·s is appropriate to achieve sufficient convection to ensure homogeneity of the melt in industrial glass production. Note that this definition of the melting point is not thermodynamic in origin and is in no way related to the thermodynamic melting temperature. In other words, the melting point in Figure 16.1 is not related to the thermodynamic phase transition between the liquid and solid states; rather, it corresponds to a commonly used viscosity for a melting tank in industrial glass manufacturing.

Moving to lower temperatures, the working point is defined as the temperature at which the viscosity is 10^3 Pa·s. This corresponds to a typical viscosity at which a melt is delivered to a forming device, i.e., for forming into a particular shape. Next, the softening point is defined as 10^6.6 Pa·s, which is the minimum viscosity at which a melt can avoid deformation under its own weight on the time scale of a typical industrial forming process. The temperature range between the working point and the softening point is also called the working range.

Figure 16.1 Commonly defined viscosity reference points. Each reference point is defined as an isokom temperature, i.e., a temperature at which a particular value of viscosity is achieved.


After forming, the internal stresses which result from the forming process can be released by annealing. The annealing point is defined as the temperature where internal stresses are relaxed within a few minutes, corresponding to a viscosity of either 10^12 or 10^12.2 Pa·s. Note that the temperature at which the viscosity is 10^12 Pa·s is also known as the glass transition temperature, Tg, which represents the viscosity at which the liquid melt is effectively converted to the solid-like glassy state. Finally, the lowest temperature viscosity reference point is the strain point, which is where the viscosity is equal to 10^13.5 Pa·s. At the strain point, the relaxation of internal stresses occurs on a time scale of several hours. The strain point is also considered a typical maximum use temperature for a glass or polymer product.
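Because each reference point is an isokom temperature, it can be computed from any equilibrium viscosity model by root solving. The sketch below assumes the MYEGA form of Eq. (16.17), introduced later in this chapter, and a hypothetical composition with Tg = 800 K and fragility m = 35; the bisection helper is an illustrative choice.

```python
import math

LOG_ETA_INF = -3.0   # extrapolated infinite-temperature viscosity, log10(Pa*s)

def log_eta_myega(T, Tg, m):
    """MYEGA equilibrium viscosity model, Eq. (16.17): log10(eta) in Pa*s,
    constructed so that log10(eta) = 12 exactly at T = Tg."""
    K = 12.0 - LOG_ETA_INF
    x = Tg / T
    return LOG_ETA_INF + K * x * math.exp((m / K - 1.0) * (x - 1.0))

def isokom_temperature(target_log_eta, Tg, m):
    """Bisection for the temperature where log10(eta) equals the target;
    log10(eta) decreases monotonically as T increases."""
    lo, hi = 0.3 * Tg, 10.0 * Tg
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if log_eta_myega(mid, Tg, m) > target_log_eta:
            lo = mid          # still too viscous: the isokom lies hotter
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Reference points for the hypothetical melt (targets are log10(eta) in Pa*s):
Tg_hyp, m_hyp = 800.0, 35.0
melting_pt = isokom_temperature(1.0, Tg_hyp, m_hyp)     # 10 Pa*s
working_pt = isokom_temperature(3.0, Tg_hyp, m_hyp)     # 10^3 Pa*s
softening_pt = isokom_temperature(6.6, Tg_hyp, m_hyp)   # 10^6.6 Pa*s
strain_pt = isokom_temperature(13.5, Tg_hyp, m_hyp)     # 10^13.5 Pa*s
```

The ordering of the results reproduces Figure 16.1: melting point above working point, working point above softening point, all above Tg, with the strain point below Tg.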

16.3 Viscosity Measurement Techniques

As is evident from Figure 16.1, viscosity varies by more than twelve orders of magnitude as the liquid is cooled from the high-temperature melting regime. Because of this huge variation, there is no single experimental technique that can measure the entire viscosity range of a liquid. Hence, a combination of several different measurement techniques must be applied to obtain measurements over the full range of viscosities. Table 16.1 shows a summary of these various measurement methods and the corresponding viscosity ranges in which they can be applied.

Table 16.1 Summary of viscosity measurement techniques and the ranges of viscosity where they can be applied.


At low viscosities (

$$x = \begin{cases} \left( \dfrac{T}{T_f} \right)^p, & T_f \geq T \\[6pt] \left( \dfrac{T_f}{T} \right)^p, & T_f < T, \end{cases} \tag{17.5}$$

where 0 < x ≤ 1 and the exponent p governs the sharpness of the glass transition. As discussed in Section 17.5, a higher fragility leads to a sharper glass transition, which corresponds to a higher value of p.

The nonequilibrium contribution to viscosity, η_ne(T, Tf), in Eq. (17.4) is derived by combining the Adam-Gibbs theory of Eq. (16.13) with the MYEGA model of Eq. (16.17). Combining these two equations, the configurational entropy of the liquid evaluated at the fictive temperature of the glass, Sc(Tf), is given by

$$S_c(T_f) = \frac{S_\infty}{\ln 10} \exp\left[ -\frac{T_g}{T_f} \left( \frac{m}{12 - \log \eta_\infty} - 1 \right) \right], \tag{17.6}$$


where S∞ is a constant. As derived in Refs. [14–16], this leads to the following functional form for η_ne(T, Tf):

$$\log \eta_{ne}(T, T_f) = A + \frac{\Delta H}{k T \ln 10} - \frac{S_\infty}{k \ln 10} \exp\left[ -\frac{T_g}{T_f} \left( \frac{m}{12 - \log \eta_\infty} - 1 \right) \right], \tag{17.7}$$

where ΔH is the activation barrier of the glass viscosity in the isostructural regime and A is the intercept in the limits of T → ∞ and Tf → 0. The middle term on the right-hand side of Eq. (17.7) gives the Arrhenius form of the isostructural viscosity for a fixed value of Tf. The thermal history dependence of the nonequilibrium viscosity is provided by the rightmost term in Eq. (17.7), which is a function of Tf but not T. Physically, the thermal history dependence of nonequilibrium viscosity is governed by the configurational entropy of the corresponding liquid at Tf, which governs the number of possible transition pathways for the glass to undergo viscous flow. A glass with a higher Tf has a greater number of transition pathways and hence a lower nonequilibrium viscosity.

The nonequilibrium viscosity predicted by the MAP model has been validated with experimental measurements for a variety of industrial glass compositions across a wide range of temperatures and thermal histories [13–16]. For example, the nonequilibrium viscosities of Corning EAGLE XG® and Jade® glasses have been measured using a highly sensitive beam-bending viscometer [13]. As shown in Figures 17.3 and 17.4, the MAP model provides an accurate fit of the experimentally measured viscosity data

Figure 17.3 Nonequilibrium viscosity of as-formed (a) EAGLE XG® and (b) Jade® glass subjected to isothermal holds at three different temperatures. The solid black curves show the experimental data, and the dashed red lines show the MAP model predictions. The glass viscosity increases over time due to its relaxation towards the higher viscosity supercooled liquid state [14].


Figure 17.4 Nonequilibrium viscosity of Corning EAGLE XG® glass as a function of fictive temperature, Tf, for T ¼ 600, 650, and 675 C. The solid black curves show the experimental data, and the dashed red lines show the MAP model predictions. The blue curves show empirical fits to the experimental data [14].

across the full range of temperatures and fictive temperatures. In Figure 17.3, the viscosities of as-formed EAGLE XG® and Jade® glasses are both measured during isothermal holds at three different temperatures below their respective glass transition temperatures. In all cases, the nonequilibrium viscosity increases with time as the glasses relax toward their respective supercooled liquid states, i.e., as the Tf relaxes toward the isothermal hold temperature, T. In Figure 17.4, the viscosity data are plotted as a function of Tf to show the profound impact of Tf on the resulting viscosity of the glass.
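The MAP expressions can be evaluated directly. In the sketch below, the piecewise ratio of Eq. (17.5) is read as the smaller of T and Tf over the larger, raised to the power p (an assumed interpretation), and Eq. (17.7) is coded with hypothetical parameter values (A dimensionless, ΔH in eV, S∞ in eV/K); only the qualitative trends matter, e.g., that a higher Tf gives a lower nonequilibrium viscosity.

```python
import math

K_B = 8.617333262e-5     # Boltzmann constant, eV/K
LOG_ETA_INF = -3.0       # log10 of the infinite-temperature viscosity, Pa*s

def map_x(T, Tf, p):
    """Eq. (17.5): ratio of the smaller to the larger of T and Tf raised to
    the power p, so that 0 < x <= 1 (interpretation assumed here)."""
    return (min(T, Tf) / max(T, Tf)) ** p

def log_eta_ne(T, Tf, Tg, m, A, dH, S_inf):
    """MAP nonequilibrium viscosity, Eq. (17.7): intercept A, isostructural
    activation barrier dH (eV), configurational entropy prefactor S_inf (eV/K)."""
    ln10 = math.log(10.0)
    arrhenius = dH / (K_B * T * ln10)                   # depends on T only
    entropic = (S_inf / (K_B * ln10)) * math.exp(
        -(Tg / Tf) * (m / (12.0 - LOG_ETA_INF) - 1.0))  # depends on Tf only
    return A + arrhenius - entropic

# Hypothetical glass (Tg = 1000 K, m = 35) held at T = 900 K: as Tf relaxes
# downward toward T, the nonequilibrium viscosity rises, as in Figure 17.3.
params = dict(Tg=1000.0, m=35.0, A=-5.0, dH=1.5, S_inf=0.01)
fresh = log_eta_ne(900.0, 1050.0, **params)    # high fictive temperature
relaxed = log_eta_ne(900.0, 950.0, **params)   # lower fictive temperature
```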

17.5 Nonequilibrium Viscosity and Fragility

In the MAP model of Eq. (17.7), the nonequilibrium viscosity is a function of two parameters which also govern the viscosity of the equilibrium liquid, viz., the glass transition temperature (Tg) and the fragility index (m). This demonstrates the direct link between the equilibrium viscosity of the liquid and the nonequilibrium viscosity of the glassy state. The fragility of the liquid plays a particularly important role in governing the nonequilibrium viscosity of the glass [13–16].

The impact of fragility on glass transition range behavior can be elucidated with Figure 17.5. In Figure 17.5(a), volume-temperature curves are plotted for three systems having identical glass transition temperature but different values of fragility. All three systems are cooled from high


Figure 17.5 (a) Volume-temperature diagrams and (b) nonequilibrium viscosity curves for selenium and for two hypothetical systems having the same glass transition temperature as selenium but with higher or lower values of fragility index, m. A higher value of m leads to a sharper glass transition and higher values of the nonequilibrium glass viscosity [13].

temperature at a constant rate of 1 K/s. Figure 17.5(a) shows that increasing fragility leads to a sharper, more well-defined glass transition. With a higher fragility, the effective free energy barrier for viscosity increases more rapidly as the system is cooled through the glass transition range. Above the glass transition temperature, the viscosity is governed by entropic effects, i.e., the multitude of available transition states as dictated by the Adam-Gibbs model discussed in Section 16.7. Below the glass transition temperature, the system becomes frozen into the glassy state, and the slope of the isostructural viscosity curve is governed by a single enthalpic activation barrier. For a system with higher fragility, there is a greater number of possible cooperative motions leading to viscous flow, but each has a higher activation enthalpy. Hence, systems with higher fragility have a sharper crossover from the entropy-dominated regime above Tg to the enthalpy-dominated regime below Tg. This leads to a more sudden glass transition with higher values of fragility, as evident in Figure 17.5(a). The nonequilibrium viscosity curves for these same systems are plotted in Figure 17.5(b). The steeper nonequilibrium viscosity curves for the more fragile systems are a direct result of having a higher activation enthalpy.

Next, let us consider the impact of fragility on the cooling rate dependence of nonequilibrium viscosity. Figure 17.6 plots the scaling of fictive temperature with cooling rate for the same three systems of Figure 17.5 having different values of fragility. All three systems exhibit a linear dependence of fictive temperature with the logarithm of the cooling


Figure 17.6 (a) Calculated fictive temperature of glasses from Figure 17.5 resulting from cooling rates spanning 25 orders of magnitude. (b) Nonequilibrium viscosity versus fictive temperature for the same systems [13].

rate. They also share a common fictive temperature for the 1 K/s cooling rate. However, a higher fragility leads to a shallower slope of the Tf versus cooling rate curve. This is a direct result of the more highly fragile system having a sharper glass transition. At temperatures above the glass transition temperature, a more fragile system has a lower viscosity, as is evident in the Angell diagram of Figure 16.2. Following the Adam-Gibbs model of Section 16.7, this is because a more highly fragile system has a greater entropic contribution to the viscosity. Hence, for the same cooling rate the more fragile system can trace the equilibrium liquid line more closely as the system approaches the glass transition. At temperatures below Tg, the higher fragility system has a longer relaxation time owing to the higher activation enthalpy, which leads to a more sudden freezing of the fictive temperature, i.e., a more sudden freezing into a glass with a fixed configuration. The result is that systems with higher fragility have a narrower range of available fictive temperatures compared to systems of lower fragility. It is interesting to note that one can obtain a perfectly sharp glass transition only in the limit of infinite fragility. This hypothetical limit of infinite fragility would therefore yield an ideal glass transition, corresponding to an ideal second-order thermodynamic phase transition.

17.6 Composition Dependence of Viscosity

The connection between equilibrium and nonequilibrium viscosities, as demonstrated by the success of the MAP model in Eq. (17.7), offers the possibility to understand and predict the composition dependence of


nonequilibrium viscosity based on the composition dependence of the Tg and m values of the corresponding liquid state. This is especially powerful as a predictive tool, since the work of Guo et al. [16] found that all four of the nonequilibrium viscosity parameters of the MAP model (p, A, ΔH, and S∞) are also governed by the equilibrium viscosity parameters, viz., Tg and m. Given the constancy of the extrapolated infinite temperature viscosity, η∞ ≈ 10^−3 Pa·s, as discussed in Section 16.9, the problem of the composition dependence of both liquid- and glassy-state viscosities is reduced to finding expressions for the composition dependence of the glass transition temperature, Tg, and the fragility index, m.

These properties can be predicted using topological constraint theory [17], which considers the liquid and glassy structure in terms of a network of bond constraints. Topological constraint theory of glass was originally proposed by Phillips and Thorpe [18,19] and considers that each atom in three-dimensional space has three translational degrees of freedom. These atomic degrees of freedom can be removed in the presence of rigid bond constraints, including either rigid bond lengths or rigid bond angles. Depending on the average number of rigid bond constraints per atom (n), the network is classified as flexible (n < 3), isostatic (n = 3), or stressed rigid (n > 3). In the original work of Phillips and Thorpe, constraint counting was performed at zero temperature, where all bond constraints were assumed to be intact. Gupta and Mauro generalized topological constraint theory to include temperature dependence of the constraints [20,21]. At high temperatures, more thermal energy is available to overcome the activation barrier associated with a constraint, causing the value of n to decrease with temperature. By calculating n as a function of composition and temperature, the composition dependence of Tg and m can be determined [21,22].
These temperature-dependent constraint models also offer valuable insights into the underlying physics governing the compositional scaling of both equilibrium and nonequilibrium viscosities. Assuming a constant value of B in the Adam-Gibbs theory of Eq. (16.13), and recognizing that the glass transition temperature is defined at a constant value of liquid viscosity (10^12 Pa·s), the compositional dependence of Tg is governed by the scaling of the configurational entropy of the liquid, Sc. Specifically, Gupta and Mauro derived [20]:

\[ \frac{T_g}{T_{g,\mathrm{ref}}} = \frac{S_{c,\mathrm{ref}}(T_{g,\mathrm{ref}})}{S_c(T_g)}, \tag{17.8} \]

where the subscript "ref" denotes the parameter values for a reference composition. Note that in Eq. (17.8) the configurational entropy is always evaluated at the glass transition temperature of the liquid. Research by Naumis [23,24] has shown that the configurational entropy of a system is largely proportional to the number of atomic degrees of freedom, given by f = 3 − n (i.e., the number of degrees of freedom per atom is equal to the dimensionality of space minus the number of rigid constraints per atom). Considering the proportionality between Sc and f, Eq. (17.8) becomes

\[ \frac{T_g}{T_{g,\mathrm{ref}}} = \frac{f_{\mathrm{ref}}(T_{g,\mathrm{ref}})}{f(T_g)}. \tag{17.9} \]

Hence a system has a lower glass transition temperature if it has a greater number of atomic degrees of freedom. This is an intuitive result, since fewer bonds or otherwise weaker bonding in the network would naturally lead to a lower viscosity and hence a lower glass transition temperature. To predict the composition dependence of the fragility, the definition of fragility index in Eq. (16.2) can be combined with the Adam-Gibbs relation of Eq. (16.13) to yield [21,22]

\[ m = m_0 \left( 1 + \left. \frac{\partial \ln f(T)}{\partial \ln T} \right|_{T = T_g} \right), \tag{17.10} \]

where

\[ m_0 = 12 - \log \eta_\infty \approx 15 \tag{17.11} \]

is the fragility of a strong liquid and where we have made use of the proportionality between Sc and f. Hence, the fragility of the liquid is governed by the rate at which the topological degrees of freedom in the network are lost upon cooling through the glass transition. A network that becomes rigid more abruptly will have a faster rate of configurational entropy loss, and therefore a greater fragility. With these formulas for the composition dependence of glass transition temperature and fragility in Eqs. (17.9) and (17.10), respectively, the composition dependence of both the equilibrium [22] and nonequilibrium [16] viscosity curves can be deduced. Example calculations of glass transition temperature and fragility index from temperature-dependent constraint theory are plotted in Figure 17.7 for soda lime borate liquids.
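A minimal numerical sketch of Eqs. (17.10) and (17.11): given any model for the topological degrees of freedom f(T), the fragility follows from the logarithmic derivative at Tg. The power-law form of f(T) and the value of Tg below are hypothetical placeholders, not a fitted constraint model.

```python
import math

# Sketch of Eqs. (17.10)-(17.11): m = m0 * (1 + dln f / dln T) at T = Tg,
# with m0 = 12 - log10(eta_inf) ~ 15. The f(T) used here is a hypothetical
# power law chosen only to make the derivative easy to verify.

M0 = 15.0  # fragility of a strong liquid, Eq. (17.11)

def fragility(f, Tg, dT=1e-4):
    """Fragility index from the logarithmic derivative of f(T) at Tg (central difference)."""
    dlnf = math.log(f(Tg * (1.0 + dT))) - math.log(f(Tg * (1.0 - dT)))
    dlnT = math.log(1.0 + dT) - math.log(1.0 - dT)
    return M0 * (1.0 + dlnf / dlnT)

Tg = 700.0                            # K, assumed
f = lambda T: 0.1 * (T / Tg) ** 4.0   # hypothetical f(T); dln f / dln T = 4

print(f"fragility index m = {fragility(f, Tg):.1f}")   # 15 * (1 + 4) = 75.0
```

A constant f(T) (zero derivative) recovers m = m0 = 15, the strong-liquid limit; the faster f(T) vanishes on cooling, the larger the fragility, as described above.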


Figure 17.7 Model calculations of glass transition temperature (Tg) and fragility index (m) for soda lime borate liquids using temperature-dependent constraint theory, as in Eqs. (17.9) and (17.10), respectively. (Modified from Smedskjaer et al. [25]).

17.7 Viscosity of Medieval Cathedral Glass

As an example application of the above theory, let us consider the urban legend of cathedral glass flow at room temperature. According to this legend, the stained glass windows in medieval cathedrals are flowing perceptibly over hundreds of years, leading to the glass becoming thicker at the bottom compared to the top. The issue of cathedral glass flow was originally addressed by Zanotto [26] in terms of equilibrium liquid viscosity and then revisited by Zanotto and Gupta [27] accounting for the isostructural viscosity of a modern soda lime silicate glass. Both papers concluded that the cathedral glass viscosity is many orders of magnitude too high to observe any flow at room temperature. More recently, the problem was addressed by Gulbiten et al. [28] using the MAP model of nonequilibrium viscosity applied to an actual medieval glass composition used in the stained glass windows at Westminster Abbey in London. Figure 17.8 shows the predicted viscosity curve from the MAP model of Section 17.4, using experimentally measured values of glass transition temperature and fragility obtained via beam-bending viscometry. As shown in Figure 17.8, a subsequent direct measurement of the low-temperature viscosity showed excellent agreement with the theoretically predicted value. As shown in Figure 17.9, the room temperature viscosity of the medieval cathedral glass is found to be about 16 orders of magnitude less than the previous estimate of Zanotto and Gupta [27], which considered an Arrhenius extrapolation of the viscosity of modern soda lime silicate glass. This refined value is based on a combination of advances in characterization using a highly sensitive beam-bending viscometer, as well as theoretical advances represented by the MAP model of nonequilibrium viscosity.

Figure 17.8 Viscosity of cathedral glass from Westminster Abbey. The black curve shows the measured experimental viscosity of the liquid, fitted using the MYEGA model. The red curve is the predicted nonequilibrium viscosity curve of the glass based on the MAP equation using the measured values of glass transition temperature and fragility. The blue dot gives the subsequently measured data point for the nonequilibrium viscosity of the glass at 10^17.4 Pa·s, which is in good agreement with the predicted value from the MAP model [28].


Figure 17.9 Nonequilibrium viscosity of the cathedral glass calculated using the MAP equation, as well as a previous estimate by Zanotto and Gupta based on soda lime silicate window glass [27]. “This Study” corresponds to the work of Gulbiten et al. [28].

Despite the significantly lower value of the cathedral glass viscosity compared to the previous prediction, the viscosity is still much too high to explain the thickness variations observed in many medieval stained glass windows. Figure 17.10 plots the flow rate of the medieval glass as a function of its nonequilibrium viscosity, which decreases exponentially with fictive temperature. Since the medieval glass makers probably annealed their glass, the fictive temperature is likely to be near the glass transition temperature. Assuming Tf = Tg, the outer layer of the glass window would flow only about 1 nm over a billion years under the force of gravity. Even if the glass were quenched rather than annealed, its viscosity would still be much too high to observe any flow on even a geological timescale. For example, if the fictive temperature were 620 °C (27 °C higher than Tg), then the room temperature viscosity would be 10^22.5 Pa·s and the surface velocity would be only 10^-21.5 cm/s, i.e., still imperceptibly low. This result confirms that the longstanding myth regarding the flow of cathedral glass at room temperature is just that: a myth. The observed variation in thickness within cathedral glass panes is attributed to medieval glass-forming processes, which had much poorer thickness control compared to those techniques used by the modern glass industry.
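The order of magnitude of these flow estimates can be checked with a simple sketch. The scaling u ~ ρgh²/(2η) for a vertical film sagging under gravity, and the density, pane thickness, and viscosity values below, are assumptions for illustration; they are not the exact calculation behind Figure 17.10.

```python
# Order-of-magnitude sanity check of the cathedral glass flow estimates.
# A vertical film sagging under its own weight has a surface velocity scaling
# roughly as u ~ rho * g * h^2 / (2 * eta); all inputs are assumed values.

SECONDS_PER_BILLION_YEARS = 1e9 * 365.25 * 24.0 * 3600.0

rho = 2500.0   # kg/m^3, typical silicate glass (assumed)
g = 9.81       # m/s^2
h = 5e-3       # m, assumed pane thickness

def surface_velocity(eta):
    """Approximate gravity-driven surface flow velocity (m/s) at viscosity eta (Pa.s)."""
    return rho * g * h**2 / (2.0 * eta)

u_annealed = surface_velocity(10**24.5)   # Tf near Tg (assumed viscosity)
u_quenched = surface_velocity(10**22.5)   # higher fictive temperature

print(f"annealed: {u_annealed * SECONDS_PER_BILLION_YEARS * 1e9:.1f} nm per billion years")
print(f"quenched: {u_quenched * 100.0:.1e} cm/s")
```

With these assumed inputs the sketch recovers the orders of magnitude quoted above: a few nanometers per billion years for well-annealed glass, and roughly 10^-21 cm/s for the quenched case.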


Figure 17.10 Viscosity and calculated glass flow at room temperature as a function of fictive temperature for the medieval cathedral glass from Westminster Abbey [28].

17.8 Summary

The glass transition involves a continuous freezing of a supercooled liquid into the glassy state. The glass transition is not a thermodynamic phase transition; rather, it is a kinetic transition with thermodynamic consequences as the configurational degrees of freedom in the liquid become frozen. Since glass is a nonequilibrium material, its properties depend on thermal history. Traditionally, the thermal history dependence of glass properties is described using an order parameter known as the fictive temperature. The nonequilibrium viscosity of glass is a function of its composition, temperature, and fictive temperature. Under normal cooling conditions, the viscosity of glass is many orders of magnitude lower than that of the corresponding supercooled liquid. During an isothermal heat treatment, the nonequilibrium viscosity of glass relaxes upward until reaching the metastable supercooled liquid state. The temperature and fictive temperature dependence of nonequilibrium viscosity are described by the Mauro-Allan-Potuzak (MAP) equation. The scaling of glass viscosity is largely determined by the viscosity of its parent supercooled liquid, in particular the values of the glass transition temperature and fragility index. As fragility increases, the glass transition becomes sharper and the nonequilibrium viscosity curve becomes steeper. The composition dependence of the glass transition temperature and fragility index can be determined using temperature-dependent constraint theory. The glass transition temperature is governed by the number of rigid constraints per atom in the transition region. The fragility is governed by the rate at which topological degrees of freedom are lost upon cooling through the glass transition.

Exercises

(17.1) What is the difference between fictive temperature (Tf) and glass transition temperature (Tg)? Explain in your own words.
(17.2) Why do fragile liquids tend to exhibit a greater loss of heat capacity than strong liquids as they are cooled through the glass transition?
(17.3) Why do amorphous solids not exhibit a glass transition?
(17.4) Why does a glass flow faster than the corresponding supercooled liquid at the same temperature? Explain in your own words.
(17.5) Based on temperature-dependent constraint theory, how would you design a glass that has both a high glass transition temperature (Tg) and a high fragility index (m)? Explain in terms of configurational entropy and the topology of the glass network.

References

[1] A. K. Varshneya and J. C. Mauro, Fundamentals of Inorganic Glasses, 3rd ed., Elsevier (2019).
[2] E. D. Zanotto and J. C. Mauro, "The Glassy State of Matter: Its Definition and Ultimate Fate," J. Non-Cryst. Solids 471, 490 (2017).
[3] J. C. Mauro, R. J. Loucks, and P. K. Gupta, "Fictive Temperature and the Glassy State," J. Am. Ceram. Soc. 92, 75 (2009).
[4] A. Donev, F. H. Stillinger, and S. Torquato, "Configurational Entropy of Binary Hard-Disk Glasses: Nonexistence of an Ideal Glass Transition," J. Chem. Phys. 127, 124509 (2007).
[5] J. C. Mauro, "Through a Glass, Darkly: Dispelling Three Common Misconceptions in Glass Science," Int. J. Appl. Glass Sci. 2, 245 (2011).
[6] Q. Zheng, Y. Zhang, M. Montazerian, O. Gulbiten, J. C. Mauro, E. D. Zanotto, and Y. Yue, "Understanding Glass through Differential Scanning Calorimetry," Chem. Rev. 119, 7848 (2019).
[7] J. C. Mauro, P. K. Gupta, and R. J. Loucks, "Continuously Broken Ergodicity," J. Chem. Phys. 126, 184511 (2007).
[8] Y. Yue, "The Iso-structural Viscosity, Configurational Entropy and Fragility of Oxide Liquids," J. Non-Cryst. Solids 355, 737 (2009).
[9] P. K. Gupta and A. Heuer, "Physics of the Iso-structural Viscosity," J. Non-Cryst. Solids 358, 3551 (2012).
[10] O. S. Narayanaswamy, "A Model of Structural Relaxation in Glass," J. Am. Ceram. Soc. 54, 491 (1971).


[11] O. V. Mazurin, V. P. Kluyev, and S. V. Stolyar, "Temperature Dependences of Structural Relaxation Times at Constant Fictive Temperatures in Oxide Glasses," Glastech. Ber. 56K, 1148 (1983).
[12] I. Avramov, "Influence of Disorder on Viscosity of Undercooled Melts," J. Chem. Phys. 95, 4439 (1991).
[13] J. C. Mauro, D. C. Allan, and M. Potuzak, "Nonequilibrium Viscosity of Glass," Phys. Rev. B 80, 094204 (2009).
[14] X. Guo, M. M. Smedskjaer, and J. C. Mauro, "Linking Equilibrium and Nonequilibrium Dynamics in Glass-Forming Systems," J. Phys. Chem. B 120(12), 3226 (2016).
[15] Q. Zheng and J. C. Mauro, "Viscosity of Glass-Forming Systems," J. Am. Ceram. Soc. 100, 6 (2017).
[16] X. Guo, J. C. Mauro, D. C. Allan, and M. M. Smedskjaer, "Predictive Model for the Composition Dependence of Glassy Dynamics," J. Am. Ceram. Soc. 101, 1169 (2018).
[17] J. C. Mauro, "Topological Constraint Theory of Glass," Am. Ceram. Soc. Bull. 90(4), 31 (2011).
[18] J. C. Phillips, "Topology of Covalent Non-Crystalline Solids I: Short-Range Order in Chalcogenide Alloys," J. Non-Cryst. Solids 34, 153 (1979).
[19] J. C. Phillips and M. F. Thorpe, "Constraint Theory, Vector Percolation and Glass Formation," Solid State Commun. 53, 699 (1985).
[20] P. K. Gupta and J. C. Mauro, "Composition Dependence of Glass Transition Temperature and Fragility. I. A Topological Model Incorporating Temperature-Dependent Constraints," J. Chem. Phys. 130, 094503 (2009).
[21] J. C. Mauro, P. K. Gupta, and R. J. Loucks, "Composition Dependence of Glass Transition Temperature and Fragility. II. A Topological Model of Alkali Borate Liquids," J. Chem. Phys. 130, 234503 (2009).
[22] J. C. Mauro, A. J. Ellison, D. C. Allan, and M. M. Smedskjaer, "Topological Model for the Viscosity of Multicomponent Glass-Forming Liquids," Int. J. Appl. Glass Sci. 4, 408 (2013).
[23] G. G. Naumis, "Energy Landscape and Rigidity," Phys. Rev. E 71, 026114 (2005).
[24] G. G. Naumis, "Glass Transition Phenomenology and Flexibility: An Approach Using the Energy Landscape Formalism," J. Non-Cryst. Solids 352, 4865 (2006).
[25] M. M. Smedskjaer, J. C. Mauro, S. Sen, and Y. Yue, "Quantitative Design of Glassy Materials using Temperature-Dependent Constraint Theory," Chem. Mater. 22, 5358 (2010).
[26] E. D. Zanotto, "Do Cathedral Glasses Flow?" Am. J. Phys. 66, 392 (1998).
[27] E. D. Zanotto and P. K. Gupta, "Do Cathedral Glasses Flow? Additional Remarks," Am. J. Phys. 67, 260 (1999).
[28] O. Gulbiten, J. C. Mauro, X. Guo, and O. N. Boratav, "Viscous Flow of Medieval Cathedral Glass," J. Am. Ceram. Soc. 101, 5 (2018).

This book belongs to Alice Cartes ([email protected])

Copyright Elsevier 2023

CHAPTER 18

Energy Landscapes

18.1 Potential Energy Landscapes

In a groundbreaking 1969 paper, Goldstein [1] postulated that atomic motion in a condensed system consists of high-frequency vibrations in regions of deep potential energy minima, with less frequent configurational transitions to other such minima. The kinetic properties of the material are governed by its ability to flow among these various minima. Goldstein's concept of an energy landscape was formalized, extended, and implemented by Stillinger and coworkers [2-7] starting in the early 1980s. In this chapter, we introduce the concepts of potential energy landscapes and enthalpy landscapes. We discuss the underlying assumptions of the energy landscape approach, as well as methods for determining the transition points that govern the kinetics of the system.

Consider an arbitrary system of N interacting particles. The potential energy landscape of the system is a map of its potential energy, U, as a function of all the position coordinates of the particles:

\[ U = U(\mathbf{r}_1, \mathbf{r}_2, \ldots, \mathbf{r}_N) \geq -CN, \tag{18.1} \]

where r1, r2, ..., rN are the particle position vectors and C is a constant denoting the potential energy per particle for the lowest energy state, i.e., a perfect crystal at absolute zero temperature. While there is a lower bound on the potential energy of the system, there is no theoretical upper bound, since highly repulsive energy exists at small atomic separation distances as a result of Pauli's exclusion principle. The potential energy U(r1, r2, ..., rN) is a continuous function that is at least twice differentiable with respect to the configurational coordinates, r1, r2, ..., rN. The partition function of the potential energy landscape is given by the integral,

\[ Q = \int_0^L \int_0^L \cdots \int_0^L \exp\left( -\frac{U(\mathbf{r}_1, \mathbf{r}_2, \ldots, \mathbf{r}_N)}{kT} \right) d^3 r_1 \, d^3 r_2 \cdots d^3 r_N, \tag{18.2} \]


where k is Boltzmann's constant, T is absolute temperature, and L is the length of the system. Here the volume, V, of the system is assumed to be cubic, such that V = L^3. The function U(r1, r2, ..., rN) represents a continuous multidimensional potential energy landscape, which has many peaks and valleys corresponding to regions of high and low potential energy, respectively. For a system of N particles in three-dimensional space, the potential energy landscape itself is 3N-dimensional. Each point in the landscape corresponds to the potential energy of a unique configuration of particles. The landscape contains a multitude of local minima in potential energy, each of which corresponds to a locally stable configuration of particles. Each of these stable configurations is called an inherent structure. In the limit of large N, the number of inherent structures Ω is approximately given by:

\[ \ln \Omega \approx \ln(N! \, \sigma^N) + \alpha N, \tag{18.3} \]

where σ is a symmetry factor and α > 0 is a constant relating to the number density of particles, N/V. The first term on the right-hand side of Eq. (18.3) accounts for the symmetry of the potential energy landscape with respect to the particle position coordinates, r1, r2, ..., rN, and the second term accounts for the exponential increase in the number of inherent structures with increasing N [2-7]. Following the approach of Stillinger and coworkers [2-7], the continuous potential energy landscape is divided into a discrete set of basins, where each basin contains a single minimum in U, i.e., each basin contains a single inherent structure. A basin itself is defined to be the set of all coordinates in the 3N-dimensional configurational space that drain to a particular minimum via steepest descent. With this partitioning of the landscape into basins, the partition function in Eq. (18.2) can be written as a summation of integrals over each of the individual basins,

\[ Q = \sum_{i=1}^{\Omega} \int_{\{\mathbf{r} \,|\, \mathbf{r} \in R_i\}} \exp\left( -\frac{U(\mathbf{r})}{kT} \right) d^{3N} r, \tag{18.4} \]

where d^{3N}r = d^3r1 d^3r2 ... d^3rN and Ri denotes the set of all position vectors r within basin i. Since each basin i contains exactly one inherent structure, let us denote the configuration of that inherent structure as ri. The partition function can then be rewritten as

\[ Q = \sum_{i=1}^{\Omega} \int_{\{\mathbf{r} \,|\, \mathbf{r} \in R_i\}} \exp\left( -\frac{U(\mathbf{r}_i) + \Delta U_i(\mathbf{r})}{kT} \right) d^{3N} r, \tag{18.5} \]
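The steepest-descent mapping that defines a basin can be illustrated in one dimension. The double-well potential below is a toy stand-in for the 3N-dimensional landscape: every starting configuration drains to exactly one of the two inherent structures.

```python
# Toy illustration of partitioning a landscape into basins: each starting
# configuration is mapped to its inherent structure by steepest descent.
# A 1D double-well potential stands in for the 3N-dimensional landscape.

def U(x):
    return (x**2 - 1.0)**2          # minima (inherent structures) at x = -1, +1

def dU(x):
    return 4.0 * x * (x**2 - 1.0)

def inherent_structure(x0, step=1e-3, n_steps=20000):
    """Steepest descent from x0 to the minimum of its basin."""
    x = x0
    for _ in range(n_steps):
        x -= step * dU(x)
    return x

# Every x0 > 0 drains to +1, every x0 < 0 drains to -1:
basins = {x0: round(inherent_structure(x0), 3) for x0 in (-1.8, -0.2, 0.4, 1.5)}
print(basins)   # {-1.8: -1.0, -0.2: -1.0, 0.4: 1.0, 1.5: 1.0}
```

The set of all starting points that drain to a given minimum is exactly the basin R_i of that inherent structure; here the two basins are simply x < 0 and x > 0.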


where ΔUi(r) is the increase in potential energy at any point in basin i relative to that of the inherent structure, U(ri), as shown schematically in Figure 18.1. Since the potential energy of the inherent structure, U(ri), is a constant for any given basin, i, the partition function in Eq. (18.5) can be rewritten as [8]

\[ Q = \sum_{i=1}^{\Omega} \exp\left( -\frac{U(\mathbf{r}_i)}{kT} \right) \int_{\{\mathbf{r} \,|\, \mathbf{r} \in R_i\}} \exp\left( -\frac{\Delta U_i(\mathbf{r})}{kT} \right) d^{3N} r. \tag{18.6} \]

It is convenient to introduce a set of normalized coordinates, s1, s2, ..., sN, defined by

\[ \mathbf{s} = \frac{\mathbf{r}}{B_i^{1/3N}}, \tag{18.7} \]

where Bi is the volume of basin i in the 3N-dimensional hyperspace of the landscape, given by the integral

\[ B_i = \int_{\{\mathbf{r} \,|\, \mathbf{r} \in R_i\}} d^{3N} r. \tag{18.8} \]

Figure 18.1 Schematic diagram of a potential energy landscape with two basins. The inherent structure positions are denoted r1 and r2. The corresponding basins are represented by the regions R1 and R2.


Since Bi^{1/3N} has units of length, the normalized coordinates, s, are dimensionless. With the definition of s in Eq. (18.7), the partition function becomes

\[ Q = \sum_{i=1}^{\Omega} B_i C_i \exp\left( -\frac{U(\mathbf{r}_i)}{kT} \right), \tag{18.9} \]

where the integral,

\[ C_i = \int_{\{\mathbf{s} \,|\, \mathbf{s} \in S_i\}} \exp\left( -\frac{\Delta U_i(\mathbf{s})}{kT} \right) d^{3N} s, \tag{18.10} \]

depends only on the shape of basin i and not on its volume in 3N-dimensional space. In Eq. (18.10), the integration is performed over the scaled set of position vectors Si contained within basin i. The fundamental assumption of the energy landscape approach is that the partition function Q can be separated into independent configurational and vibrational contributions [9,10]:

\[ Q = Q_{\mathrm{conf}} Q_{\mathrm{vib}}. \tag{18.11} \]

Hence, the energy landscape approach implicitly assumes that the normalized basin shape is a constant (Ci = C). With this assumption, the configurational and vibrational contributions to the partition function can be written independently as

\[ Q_{\mathrm{conf}} = \sum_{i=1}^{\Omega} B_i \exp\left( -\frac{U(\mathbf{r}_i)}{kT} \right) \tag{18.12} \]

and

\[ Q_{\mathrm{vib}} = \int_{\{\mathbf{s} \,|\, \mathbf{s} \in S\}} \exp\left( -\frac{\Delta U(\mathbf{s})}{kT} \right) d^{3N} s, \tag{18.13} \]

respectively. It is important to note that while the basin volume Bi can span many orders of magnitude throughout the landscape [11], it is a reasonable assumption that the normalized basin shape can be considered as constant. With this assumption, the fast vibrations within a basin can be treated independently from the slower transitions between basins. Given the partition function written in the separable form of Eq. (18.11), it follows that all thermodynamic properties can be written in terms of independent configurational and vibrational contributions.


18.2 Enthalpy Landscapes

While potential energy landscapes are suitable for modeling systems under isochoric conditions, many real systems exist under constant pressure rather than constant volume conditions. Here we extend our discussion of Section 18.1 to the isothermal-isobaric ensemble, which allows for changes in both particle positions and the overall volume of the system. In the isothermal-isobaric ensemble, the system is described in terms of an enthalpy landscape rather than a potential energy landscape [12]. The enthalpy landscape at zero temperature, i.e., the enthalpy without any kinetic energy contributions, corresponds to an underlying surface that is sampled by a system at finite temperature under isobaric conditions. The zero-temperature enthalpy landscape of a system of N atoms can be expressed as

\[ H = U(\mathbf{r}_1, \mathbf{r}_2, \ldots, \mathbf{r}_N, V) + PV, \tag{18.14} \]

where V is the volume of the system and the pressure, P, is constant. Whereas the potential energy landscape is 3N-dimensional, the enthalpy landscape is (3N + 1)-dimensional since the volume is included as an additional coordinate. The isothermal-isobaric partition function can be written as

\[ Y = \int_0^\infty \int_0^{V^{1/3}} \int_0^{V^{1/3}} \cdots \int_0^{V^{1/3}} \exp\left( -\frac{H(\mathbf{r}_1, \mathbf{r}_2, \ldots, \mathbf{r}_N, V)}{kT} \right) d^3 r_1 \, d^3 r_2 \cdots d^3 r_N \, dV, \tag{18.15} \]

where the system is again considered to be cubic. It is helpful to split the partition function into two separate integrals for volumes below and above Vmax, which is defined as the volume at which the interaction potentials no longer contribute significantly to the total enthalpy. In other words,

\[ U(\mathbf{r}_1, \mathbf{r}_2, \ldots, \mathbf{r}_N, V_{\mathrm{max}}) \ll P V_{\mathrm{max}}, \tag{18.16} \]

such that U can be safely ignored for V ≥ Vmax. The partition function can then be written as a sum of interacting (YI) and non-interacting (YNI) contributions [12]:

\[ Y = Y_I + Y_{NI}, \tag{18.17} \]

where

\[ Y_I = \int_0^{V_{\mathrm{max}}} \int_0^{V^{1/3}} \exp\left( -\frac{H(\mathbf{r}_1, \mathbf{r}_2, \ldots, \mathbf{r}_N, V)}{kT} \right) d^{3N} r \, dV \tag{18.18} \]

and

\[ Y_{NI} = \int_{V_{\mathrm{max}}}^{\infty} \int_0^{V^{1/3}} \exp\left( -\frac{PV}{kT} \right) d^{3N} r \, dV. \tag{18.19} \]

The non-interacting integral in Eq. (18.19) reduces to a constant in the form of an incomplete gamma function:

\[ Y_{NI} = \int_{V_{\mathrm{max}}}^{\infty} V^N \exp\left( -\frac{PV}{kT} \right) dV = \left( \frac{kT}{P} \right)^{N+1} \Gamma\left( N + 1, \frac{P V_{\mathrm{max}}}{kT} \right). \tag{18.20} \]
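Equation (18.20) can be verified numerically for small integer N. The reduced units (kT = P = 1) and the value of Vmax below are arbitrary illustrative choices; for integer N, the upper incomplete gamma function has the closed form Γ(N+1, x) = N! e^(−x) Σ_{k=0}^{N} x^k/k!.

```python
import math

# Numerical check of Eq. (18.20) for small integer N, in reduced units with
# kT = P = 1 and an arbitrary V_max (illustrative values only).

N = 3
kT = 1.0
P = 1.0
V_max = 2.0

def upper_incomplete_gamma(n_plus_1, x):
    """Gamma(N+1, x) for integer N: N! * exp(-x) * sum_{k=0}^{N} x^k / k!."""
    n = n_plus_1 - 1
    return math.factorial(n) * math.exp(-x) * sum(x**k / math.factorial(k) for k in range(n + 1))

# Right-hand side of Eq. (18.20):
closed = (kT / P) ** (N + 1) * upper_incomplete_gamma(N + 1, P * V_max / kT)

# Left-hand side by direct trapezoidal quadrature (negligible tail truncated at V = 60):
dV = 1e-4
grid = [V_max + i * dV for i in range(int(60.0 / dV) + 1)]
vals = [V**N * math.exp(-P * V / kT) for V in grid]
numeric = dV * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

print(round(closed, 5), round(numeric, 5))   # both ≈ 5.14274
```

For N = 3 and V_max = 2 both routes give Γ(4, 2) = 38 e^(−2), confirming that the non-interacting contribution is indeed a constant independent of the particle coordinates.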

Since the enthalpy landscape in the non-interacting regime is a linear function of volume, there are no minima in the landscape and hence no inherent structures. As a result, we only need to consider the interacting portion of the partition function. Next, we rewrite the interacting partition function in terms of the length of the system, L = V^{1/3}, such that all coordinates have units of length:

\[ Y_I = \int_0^{L_{\mathrm{max}}} 3L^2 \int_0^{L} \exp\left( -\frac{H(\mathbf{r}, L)}{kT} \right) d^{3N} r \, dL. \tag{18.21} \]

Following the approach in Section 18.1, we partition the continuous enthalpy landscape into a discrete set of basins, where each basin contains a single minimum in H with respect to each of the 3N + 1 coordinates. Therefore, we can rewrite the interacting partition function as a summation of integrals over each individual basin i,

\[ Y_I = \sum_{i=1}^{\Omega} \int_{\{L \,|\, L \in L_i\}} 3L^2 \int_{\{\mathbf{r} \,|\, \mathbf{r} \in R_i(L)\}} \exp\left( -\frac{H(\mathbf{r}, L)}{kT} \right) d^{3N} r \, dL, \tag{18.22} \]

where the set of position vectors Ri(L) in basin i is a function of the length of the system, L, and Li is the set of length values within the basin i. If we denote the coordinates of inherent structure i as {ri, Li}, then the interacting partition function can be rewritten as

\[ Y_I = \sum_{i=1}^{\Omega} \exp\left( -\frac{H(\mathbf{r}_i, L_i)}{kT} \right) \int_{\{L \,|\, L \in L_i\}} 3L^2 \int_{\{\mathbf{r} \,|\, \mathbf{r} \in R_i(L)\}} \exp\left( -\frac{\Delta H_i(\mathbf{r}, L)}{kT} \right) d^{3N} r \, dL, \tag{18.23} \]

where ΔHi(r, L) gives the increase in enthalpy at any point in basin i relative to the inherent structure enthalpy, H(ri, Li). The volume of basin i in the (3N + 1)-dimensional space is given by

\[ B_i = \int_{\{L \,|\, L \in L_i\}} \int_{\{\mathbf{r} \,|\, \mathbf{r} \in R_i(L)\}} d^{3N} r \, dL. \tag{18.24} \]

Again, it is convenient to introduce normalized positions, \(\tilde{\mathbf{s}}_1, \tilde{\mathbf{s}}_2, \ldots, \tilde{\mathbf{s}}_N\), which are defined by

\[ \tilde{\mathbf{s}} = \frac{\mathbf{r}}{B_i^{1/(3N+1)}} \tag{18.25} \]

for positions within basin i. A normalized length, \(\tilde{s}_L\), is defined similarly as

\[ \tilde{s}_L = \frac{L}{B_i^{1/(3N+1)}}. \tag{18.26} \]

With this set of normalized coordinates, the partition function becomes

\[ Y_I = \sum_{i=1}^{\Omega} B_i^{1 + 2/(3N+1)} \tilde{C}_i \exp\left( -\frac{H(\mathbf{r}_i, L_i)}{kT} \right), \tag{18.27} \]

where the integral

\[ \tilde{C}_i = \int_{\{\tilde{s}_L \,|\, \tilde{s}_L \in \tilde{L}_i\}} 3\tilde{s}_L^2 \int_{\{\tilde{\mathbf{s}} \,|\, \tilde{\mathbf{s}} \in \tilde{S}_i(\tilde{s}_L)\}} \exp\left( -\frac{\Delta H_i(\tilde{\mathbf{s}}, \tilde{s}_L)}{kT} \right) d^{3N} \tilde{s} \, d\tilde{s}_L \tag{18.28} \]

depends only on the shape of basin i and not on its volume in (3N + 1)-dimensional space [12]. As with the potential energy landscape, we wish to write the partition function of the enthalpy landscape in terms of separate configurational and vibrational contributions,

\[ Y_I = Y_{\mathrm{conf}} Y_{\mathrm{vib}}. \tag{18.29} \]

This can be accomplished by assuming a constant normalized basin shape (\(\tilde{C}_i = \tilde{C}\)). With this assumption,

\[ Y_{\mathrm{conf}} = \sum_{i=1}^{\Omega} B_i^{1 + 2/(3N+1)} \exp\left( -\frac{H(\mathbf{r}_i, L_i)}{kT} \right) \tag{18.30} \]

and

\[ Y_{\mathrm{vib}} = \int_{\{\tilde{s}_L \,|\, \tilde{s}_L \in \tilde{L}\}} 3\tilde{s}_L^2 \int_{\{\tilde{\mathbf{s}} \,|\, \tilde{\mathbf{s}} \in \tilde{S}(\tilde{s}_L)\}} \exp\left( -\frac{\Delta H(\tilde{\mathbf{s}}, \tilde{s}_L)}{kT} \right) d^{3N} \tilde{s} \, d\tilde{s}_L. \tag{18.31} \]

18.3 Landscape Kinetics

Using either the canonical partition function of Eq. (18.11) or the isothermal-isobaric partition function of Eq. (18.30), the equilibrium thermodynamic properties of the system can be calculated. However, irreversible processes involve a departure from equilibrium. The kinetics of the system are governed by configurational transitions between pairs of basins. Making an inter-basin transition involves overcoming an activation barrier, i.e., passing through the lowest-energy transition state (first-order saddle point) between adjacent basins [13-16]. The underlying assumption is that while the potential energy (or enthalpy) landscape itself is independent of temperature, the way in which the system samples the landscape depends on its thermal energy, and thus on the temperature of the system. At high temperatures, there is ample thermal energy to transition freely among basins. Under such conditions, thermodynamic equilibrium can be achieved in a relatively short period of time. At lower temperatures, it becomes difficult to overcome some of the activation barriers owing to the loss of thermal energy. At even lower temperatures, the system can become trapped within a small region of the landscape where the energy barriers for a transition are too high to overcome on the time scale of interest. If we consider a total of Ω basins in the enthalpy landscape, an Ω × Ω matrix of enthalpy values can be constructed [14],

\[ \mathbf{H} = \begin{pmatrix} H_{11} & H_{12} & \cdots & H_{1\Omega} \\ H_{21} & H_{22} & \cdots & H_{2\Omega} \\ \vdots & \vdots & \ddots & \vdots \\ H_{\Omega 1} & H_{\Omega 2} & \cdots & H_{\Omega\Omega} \end{pmatrix}, \tag{18.32} \]

where the diagonal elements (Hii) are the inherent structure enthalpies and the off-diagonal elements (Hij, i ≠ j) are the enthalpies of the transition points between basin i and basin j. The H matrix is symmetric, since the transition point enthalpy between basin i and basin j is the same as that between j and i, i.e., Hij = Hji. Note that we have expressed Eq. (18.32) in terms of an enthalpy landscape assuming isobaric conditions. If the system is isochoric rather than isobaric, then a similar matrix would be constructed using potential energies instead of enthalpies. The properties of a system can be calculated in terms of the probabilities of occupying the various basins in the landscape. At equilibrium, the probabilities follow a Boltzmann distribution:

\[ p_{i,\mathrm{eq}} = \frac{g_i}{Y_{\mathrm{conf}}} \exp\left( -\frac{H_{ii}}{kT} \right), \tag{18.33} \]

where pi,eq is the equilibrium probability of occupying basin i, gi is the degeneracy of basin i (i.e., the number of equivalent inherent structures weighted by the volumes of their corresponding basins), Hii is the inherent structure enthalpy, and Yconf is the configurational partition function. In order to capture memory effects in nonequilibrium systems, we must calculate the evolution of the basin occupation probabilities over time, t. This can be accomplished by solving the set of coupled master equations [14,15]:

\[ \frac{d}{dt} p_i(t) = \sum_{j \neq i}^{\Omega} K_{ji}[T(t)] \, p_j(t) - \sum_{j \neq i}^{\Omega} K_{ij}[T(t)] \, p_i(t), \tag{18.34} \]

where pi(t) is the probability of the system occupying basin i at time t, Kij is the transition rate from basin i to basin j, and Kji is the rate of the reverse transition. Hence, the first term on the right-hand side of Eq. (18.34) gives the total rate of transitioning into basin i from all other basins j, and the second term on the right-hand side of the equation gives the total rate of transitioning out of basin i and into any other basin j. The net change in probability is the difference between the two terms. At all times t, the probabilities must satisfy the normalization condition:

\[ \sum_{i=1}^{\Omega} p_i(t) = 1. \tag{18.35} \]

The rates, Kij, form a matrix

\[ \mathbf{K} = \begin{pmatrix} 0 & K_{12} & K_{13} & \cdots & K_{1\Omega} \\ K_{21} & 0 & K_{23} & \cdots & K_{2\Omega} \\ K_{31} & K_{32} & 0 & \cdots & K_{3\Omega} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ K_{\Omega 1} & K_{\Omega 2} & K_{\Omega 3} & \cdots & 0 \end{pmatrix}. \tag{18.36} \]


Each element of the matrix is given by transition state theory,

\[ K_{ij}[T(t)] = \nu_{ij} \, g_j \exp\left( -\frac{H_{ij} - H_{ii}}{kT(t)} \right), \tag{18.37} \]

where νij is the vibrational frequency along the mode corresponding to the i → j transition. Note that unlike the enthalpy matrix of Eq. (18.32), the transition rate matrix of Eq. (18.36) is not symmetric. Transition state theory, as well as methods for solving the coupled master equations in Eq. (18.34), will be covered in detail in Chapter 20. Within the framework of the energy landscape approach, the evolution of any macroscopic property can be calculated by an appropriate weighted average using the basin occupation probabilities. For example, the evolution of volume, V(t), in an enthalpy landscape can be calculated by

\[ V(t) = \sum_{i=1}^{\Omega} V_{ii} \, p_i(t), \tag{18.38} \]

where Vii is the volume of inherent structure i in real space. Likewise, the enthalpy can be calculated at any time t by:

\[ H(t) = \sum_{i=1}^{\Omega} H_{ii} \, p_i(t), \tag{18.39} \]

where Hii is the enthalpy of inherent structure i.
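The machinery of Eqs. (18.32)-(18.39) can be exercised on a toy three-basin landscape. The enthalpy matrix, temperature, attempt frequencies, and degeneracies below are arbitrary illustrative values; the rates follow Eq. (18.37), the probabilities are evolved with a simple forward-Euler integration of Eq. (18.34), and the long-time limit is checked against the Boltzmann distribution of Eq. (18.33).

```python
import math

# Toy three-basin illustration of landscape kinetics (assumed values throughout).
kT = 1.0   # reduced units
nu = 1.0   # constant attempt frequency; degeneracies g_j = 1 assumed

# Symmetric enthalpy matrix of Eq. (18.32): diagonal elements are inherent
# structure enthalpies, off-diagonal elements are transition point enthalpies.
H = [[0.0, 2.0, 3.0],
     [2.0, 0.5, 2.5],
     [3.0, 2.5, 1.0]]
n = len(H)

def rate(i, j):
    """K_ij = nu * g_j * exp(-(H_ij - H_ii)/kT), Eq. (18.37)."""
    return nu * math.exp(-(H[i][j] - H[i][i]) / kT)

def step(p, dt):
    """One forward-Euler step of the coupled master equations, Eq. (18.34)."""
    dp = [sum(rate(j, i) * p[j] - rate(i, j) * p[i] for j in range(n) if j != i)
          for i in range(n)]
    return [p[i] + dt * dp[i] for i in range(n)]

p = [0.0, 0.0, 1.0]             # start fully trapped in basin 3
for _ in range(20000):
    p = step(p, dt=0.01)

# Long-time limit should recover the Boltzmann distribution of Eq. (18.33):
Z = sum(math.exp(-H[i][i] / kT) for i in range(n))
p_eq = [math.exp(-H[i][i] / kT) / Z for i in range(n)]

H_t = sum(H[i][i] * p[i] for i in range(n))   # property average, Eq. (18.39)
print("p(t)      =", [round(x, 3) for x in p])
print("Boltzmann =", [round(x, 3) for x in p_eq])
print("H(t)      =", round(H_t, 3))
```

Because the rates of Eq. (18.37) satisfy detailed balance with respect to the inherent structure enthalpies, the occupation probabilities relax to the equilibrium distribution of Eq. (18.33) regardless of the initial state; at lower kT the same rates become exponentially slower and the system remains trapped, which is the landscape picture of the glass transition.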

18.4 Disconnectivity Graphs

Given the high dimensionality of the 3N- or (3N + 1)-dimensional potential energy or enthalpy landscapes, respectively, it is very difficult, if not impossible, to visualize the landscapes directly. In order to address this problem of high dimensionality, disconnectivity graphs provide a convenient method for visualizing the inherent structures and connecting transition points in an energy landscape [16]. Software for plotting disconnectivity graphs is provided by the David Wales group at the University of Cambridge (http://www-wales.ch.cam.ac.uk/software.html). As an introduction to the concept of disconnectivity graphs, let us consider the potential energy landscape of a simple three-atom cluster of selenium, Se3 [17]. The left-hand side of Figure 18.2 shows the potential energy of Se3 as a continuous function of bond angle, assuming an equilibrium bond separation distance of ~2.17 Å between two pairs of atoms.


Figure 18.2 Potential energy landscape of a Se3 cluster. The left-hand side plots the potential energy as a continuous function of bond angle, and the right-hand side shows the corresponding disconnectivity graph.

With such a small cluster, it is possible to identify each of the inherent structures and transition states from a continuous plot of the potential energy function. In the case of Se3, there are three minima in the energy landscape, labeled A, C, and E, which correspond to the three inherent structures of the cluster. The global minimum occurs at point C and corresponds to a molecule having a bond angle of about 117°. The potential energy function itself is shallow near this minimum, allowing for a high degree of bond flexibility. A second minimum, of somewhat higher energy, occurs at point A around 65°, corresponding to a closed triangular configuration of atoms. Finally, the third minimum at point E has a significantly higher energy, corresponding to a linear configuration of the molecule. As shown in Figure 18.2, the Se3 cluster has two transition points: one connecting A and C and the other connecting C and E. The former, labeled B, occurs at a bond angle of 84°. The latter transition point, labeled D, occurs at about 169°.

The equivalent disconnectivity graph representation of the Se3 energy landscape is shown on the right-hand side of Figure 18.2. The vertical axis specifies the potential energy of the system, and the horizontal axis represents an arbitrary configurational space. The three potential energy minima of the Se3 cluster are represented by the terminal nodes, A, C, and


E, placed at their corresponding energy levels. The horizontal positioning of the nodes is arbitrary and chosen to show the connections among the various minima clearly. These connections are provided by the transition nodes, B and D. Transition node B connects minima A and C, and transition node D connects minimum E collectively with A and C. A disconnectivity graph allows for clear visualization of inherent structures and transition states without having to plot the potential energy landscape in the full 3N-dimensional hyperspace.

Disconnectivity graphs are also useful for determining which transitions are possible with different amounts of total energy. For example, if the Se3 cluster in Figure 18.2 has −5.5 eV of energy, it must occupy either of the basins corresponding to minima A or C. Since this energy is less than that of transition point B, a transition between basins A and C would not be allowed. On the other hand, if the Se3 cluster has a total energy of −4.3 eV (between points D and E), the system can occupy any of the three basins, with transitions freely allowed.

The topography of the energy landscape becomes significantly more complicated with the inclusion of additional atoms, since each additional atom increases the dimensionality of the landscape by three. The disconnectivity graph representation, therefore, becomes a more useful tool for these larger systems. Even the addition of a single atom can dramatically increase the complexity of the landscape. For example, as shown in Figure 18.3, the potential energy landscape of a four-atom Se4 cluster [17] has a much richer topography than that of Se3. The global minimum of Se4, labeled A, is a kinked chain of four selenium atoms with bond angles of about 117°. There are numerous points A in the disconnectivity graph, indicating a large number of degenerate structures with this configuration. Point B is the inherent structure with the second-lowest potential energy, corresponding to a closed parallelogram with interior angles of about 65° and 115°. The less favorable (i.e., higher energy) inherent structures include chains that incorporate less favorable bond angles. Inherent structure C, which has a high degree of degeneracy, is a four-atom chain with one bond angle of 117° and the other of 65°, with a dihedral angle of 108°. Point D is a variation of this structure, with a somewhat less favorable dihedral angle of 132°. Point E is another variation which incorporates a less favorable 180° bond angle. The degeneracies of D and E have not been plotted in this disconnectivity graph, and many of the less favorable higher energy inherent structures have also been omitted (including a straight-line configuration of all four atoms).


Figure 18.3 Disconnectivity graph for Se4, including plots of the identified minimum energy structures and transition states [17].

Transitions among the degenerate states C can be achieved by changing the dihedral angle to 180°, as indicated with point F in the disconnectivity graph of Figure 18.3. However, in order to switch the 65° and 117° bonds of inherent structure C, i.e., to go between the two sets of degenerate minima in the landscape, the system must go through transition point H, which involves bond breakage. (Apparently, the activation barrier is lower to break one of the bonds rather than change the bond angles while remaining fully bonded.) The C structure can transition into a parallelogram via transition point G, which also involves breaking a bond. All other transitions occur through point I, which entails breaking the single Se4 cluster into two Se2 clusters. It is interesting that even transitions among the degenerate A minima involve bond breakage.
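The energy-accessibility reasoning used above (a transition between two basins is allowed only when the total energy exceeds the lowest transition point connecting them) is easy to mechanize as a graph search. The energies below are hypothetical stand-ins loosely mirroring the three-minimum Se3 graph, not the actual computed values:

```python
# Hypothetical basin and transition-point energies in eV (illustrative only)
minima = {"A": -6.0, "C": -6.5, "E": -4.5}
transitions = [("A", "C", -5.0), ("C", "E", -4.0)]  # (basin, basin, barrier)

def accessible(start, total_energy):
    """Set of basins reachable from `start` without exceeding total_energy."""
    if minima[start] > total_energy:
        return set()          # the starting basin itself cannot be occupied
    reached, frontier = {start}, [start]
    while frontier:
        node = frontier.pop()
        for i, j, barrier in transitions:
            if barrier <= total_energy and node in (i, j):
                other = j if node == i else i
                if other not in reached:
                    reached.add(other)
                    frontier.append(other)
    return reached

print(sorted(accessible("C", -5.5)))   # ['C']: the -5.0 eV barrier blocks escape
print(sorted(accessible("C", -3.5)))   # ['A', 'C', 'E']: all transitions open
```

The same traversal applied to the full list of transition points of a mapped landscape reproduces the vertical structure of a disconnectivity graph: basins merge at the energy of the lowest transition point connecting them.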

18.5 Locating Inherent Structures and Transition Points

In order to implement the energy landscape approach discussed in this chapter, we must have suitable numerical techniques to [12]:
• Map the continuous energy landscape to a discrete set of inherent structures and basins.
• Identify the lowest energy transition points connecting each pair of adjacent basins.
• Calculate the inherent structure density of states, which provides the degeneracy factors for each basin.
• Solve the system of coupled master equations in Eq. (18.34) for an arbitrary temperature path, T(t).

In this section, we introduce the eigenvector-following technique for locating inherent structures and transition points [18]. Calculation of the inherent structure density of states can be performed through a self-consistent Monte Carlo method discussed in Chapter 23, and a method for solving the set of coupled master equations is covered in Chapter 20.

Let us begin with a 3N-dimensional potential energy landscape,

$$U = U(x_1, x_2, \ldots, x_{3N}), \tag{18.40}$$

which is a function of the position coordinates, here denoted as x_1, x_2, …, x_3N. If the potential energy of the system at an initial position x_i^0, where i = 1, 2, 3, …, 3N, is given by U(x_i^0), then we may approximate the potential energy at a new position, x_i = x_i^0 + h_i, using the Taylor series expansion,

$$U(x_i) \approx U\!\left(x_i^0\right) + \sum_{j=1}^{3N} \left.\frac{\partial U}{\partial x_j}\right|_{x_j = x_j^0} h_j + \frac{1}{2}\sum_{i=1}^{3N}\sum_{j=1}^{3N} \left.\frac{\partial^2 U}{\partial x_i \partial x_j}\right|_{x_{i,j} = x_{i,j}^0} h_i h_j. \tag{18.41}$$

Eq. (18.41) can be written in matrix notation as

$$U(\mathbf{x}) \approx U(\mathbf{x}^0) + \mathbf{g}^{\top}\mathbf{h} + \frac{1}{2}\mathbf{h}^{\top}\mathbf{H}\mathbf{h}, \tag{18.42}$$

where the position vectors are given by

$$\mathbf{x} = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_{3N} \end{pmatrix}, \qquad \mathbf{x}^0 = \begin{pmatrix} x_1^0 \\ x_2^0 \\ \vdots \\ x_{3N}^0 \end{pmatrix} \tag{18.43}$$

and the displacement vector, $\mathbf{h} = \mathbf{x} - \mathbf{x}^0$, is

$$\mathbf{h} = \begin{pmatrix} h_1 \\ h_2 \\ \vdots \\ h_{3N} \end{pmatrix}. \tag{18.44}$$

The gradient vector, g, and the 3N × 3N Hessian matrix H, evaluated at x = x⁰, are given by

$$\mathbf{g} = \begin{pmatrix} \dfrac{\partial U}{\partial x_1} \\[2mm] \dfrac{\partial U}{\partial x_2} \\ \vdots \\ \dfrac{\partial U}{\partial x_{3N}} \end{pmatrix}_{\mathbf{x}=\mathbf{x}^0} \tag{18.45}$$

and

$$\mathbf{H} = \begin{pmatrix} \dfrac{\partial^2 U}{\partial x_1^2} & \dfrac{\partial^2 U}{\partial x_1 \partial x_2} & \cdots & \dfrac{\partial^2 U}{\partial x_1 \partial x_{3N}} \\[2mm] \dfrac{\partial^2 U}{\partial x_2 \partial x_1} & \dfrac{\partial^2 U}{\partial x_2^2} & \cdots & \dfrac{\partial^2 U}{\partial x_2 \partial x_{3N}} \\ \vdots & \vdots & & \vdots \\ \dfrac{\partial^2 U}{\partial x_{3N} \partial x_1} & \dfrac{\partial^2 U}{\partial x_{3N} \partial x_2} & \cdots & \dfrac{\partial^2 U}{\partial x_{3N}^2} \end{pmatrix}_{\mathbf{x}=\mathbf{x}^0}, \tag{18.46}$$

respectively. Note that the Hessian matrix is symmetric by construction.

Mathematically, a transition point is a first-order saddle point, i.e., a critical point where exactly one eigenvalue of the Hessian matrix H is negative. In other words, a transition point corresponds to an energy maximum in one eigendirection and an energy minimum along all other eigendirections. In order to find the first-order saddle points in an efficient fashion, we define the Lagrange function [18]:

$$L = -U(\mathbf{x}) + \frac{1}{2}\sum_{i=1}^{3N} \lambda_i \left(h_i^2 - c_i^2\right), \tag{18.47}$$

where c_i are the desired step sizes in the various eigendirections and λ_i are Lagrange multipliers. Substituting Eq. (18.41) into Eq. (18.47), we have

$$L = -U\!\left(x_i^0\right) - \sum_{j=1}^{3N} \left.\frac{\partial U}{\partial x_j}\right|_{x_j = x_j^0} h_j - \frac{1}{2}\sum_{i=1}^{3N}\sum_{j=1}^{3N} \left.\frac{\partial^2 U}{\partial x_i \partial x_j}\right|_{x_{i,j} = x_{i,j}^0} h_i h_j + \frac{1}{2}\sum_{i=1}^{3N} \lambda_i \left(h_i^2 - c_i^2\right). \tag{18.48}$$
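In practice, the gradient of Eq. (18.45) and the Hessian of Eq. (18.46) are usually obtained analytically from the interatomic potential, but they can also be approximated by central finite differences. A minimal sketch on a two-dimensional quadratic test function (chosen so both quantities are known exactly; the function and step sizes are illustrative):

```python
import numpy as np

def gradient(U, x, eps=1e-5):
    """Central-difference approximation to the gradient of Eq. (18.45)."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x); e[i] = eps
        g[i] = (U(x + e) - U(x - e)) / (2 * eps)
    return g

def hessian(U, x, eps=1e-4):
    """Central-difference approximation to the Hessian of Eq. (18.46)."""
    n = len(x)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei = np.zeros(n); ei[i] = eps
            ej = np.zeros(n); ej[j] = eps
            H[i, j] = (U(x + ei + ej) - U(x + ei - ej)
                       - U(x - ei + ej) + U(x - ei - ej)) / (4 * eps**2)
    return H

# Quadratic test surface: U = x^2 + 3y^2 + xy, so H = [[2, 1], [1, 6]]
U = lambda r: r[0]**2 + 3*r[1]**2 + r[0]*r[1]
x0 = np.array([1.0, -1.0])
g0 = gradient(U, x0)
H0 = hessian(U, x0)
print(g0)   # ≈ [2x + y, x + 6y] = [1, -5]
print(H0)   # ≈ [[2, 1], [1, 6]]
```

As expected, the numerical Hessian comes back symmetric to within rounding, consistent with its construction in Eq. (18.46).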


Taking the derivative of Eq. (18.48) with respect to an arbitrary step h_k, we obtain

$$\frac{\partial L}{\partial h_k} = 0 = -\sum_{j=1}^{3N} \left.\frac{\partial U}{\partial x_j}\right|_{x_j = x_j^0} \delta_{jk} - \sum_{i=1}^{3N}\sum_{j=1}^{3N} \left.\frac{\partial^2 U}{\partial x_i \partial x_j}\right|_{x_{i,j} = x_{i,j}^0} \delta_{ik} h_j + \sum_{i=1}^{3N} \lambda_i h_i \delta_{ik}, \tag{18.49}$$

where δ_ij is the Kronecker delta function. This simplifies to

$$0 = -g_k - \sum_{j=1}^{3N} H_{kj} h_j + \lambda_k h_k, \tag{18.50}$$

which can be written equivalently in matrix notation as

$$0 = -\mathbf{g} - \mathbf{H}\mathbf{h} + \boldsymbol{\lambda}\mathbf{h}, \tag{18.51}$$

where λ is a diagonal matrix given by

$$\boldsymbol{\lambda} = \begin{pmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & \vdots & & \vdots \\ 0 & 0 & \cdots & \lambda_{3N} \end{pmatrix}. \tag{18.52}$$

Solving for h in Eq. (18.51), we have

$$\mathbf{h} = (\boldsymbol{\lambda} - \mathbf{H})^{-1}\mathbf{g}. \tag{18.53}$$

The eigenvectors V_i and eigenvalues b_i of the Hessian matrix are obtained by solving the eigenvector-eigenvalue problem:

$$\mathbf{H}\mathbf{V}_i = b_i \mathbf{V}_i. \tag{18.54}$$

Since the eigenvectors form a complete set, the gradient vector g can be expressed as

$$\mathbf{g} = \sum_{i=1}^{3N} F_i \mathbf{V}_i, \tag{18.55}$$

where F_i is the contribution to the gradient vector along eigendirection V_i. Substituting Eq. (18.55) into Eq. (18.53), we have

$$\mathbf{h} = (\boldsymbol{\lambda} - \mathbf{H})^{-1} \sum_{i=1}^{3N} F_i \mathbf{V}_i. \tag{18.56}$$


Thus, the step in the Lagrange function is given by

$$\mathbf{h} = \sum_{i=1}^{3N} \frac{F_i}{\lambda_i - b_i} \mathbf{V}_i. \tag{18.57}$$

From Eq. (18.42), the associated change in energy is

$$\Delta U = \mathbf{g}^{\top}\mathbf{h} + \frac{1}{2}\mathbf{h}^{\top}\mathbf{H}\mathbf{h} = \left[\sum_{i=1}^{3N} F_i \mathbf{V}_i^{\top}\right]\left[\sum_{j=1}^{3N} \frac{F_j}{\lambda_j - b_j}\mathbf{V}_j\right] + \frac{1}{2}\left[\sum_{i=1}^{3N} \frac{F_i}{\lambda_i - b_i}\mathbf{V}_i^{\top}\right]\mathbf{H}\left[\sum_{j=1}^{3N} \frac{F_j}{\lambda_j - b_j}\mathbf{V}_j\right], \tag{18.58}$$

which simplifies to

$$\Delta U = \sum_{i=1}^{3N} \frac{F_i^2}{\lambda_i - b_i} + \frac{1}{2}\sum_{i=1}^{3N} \frac{F_i^2\, b_i}{(\lambda_i - b_i)^2} = \sum_{i=1}^{3N} \frac{F_i^2\left(\lambda_i - \dfrac{b_i}{2}\right)}{(\lambda_i - b_i)^2}. \tag{18.59}$$

Hence, the sign of the energy change along a particular eigendirection V_i depends on both the eigenvalue b_i and the choice of Lagrange multiplier λ_i. In order to determine a suitable choice of Lagrange multipliers, it is useful to express Eq. (18.59) as a summation of energy change in each of the various eigendirections:

$$\Delta U = \sum_{i=1}^{3N} \Delta U_i, \tag{18.60}$$

where

$$\Delta U_i = \frac{F_i^2\left(\lambda_i - \dfrac{b_i}{2}\right)}{(\lambda_i - b_i)^2}. \tag{18.61}$$
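The sign structure of Eq. (18.61) is what drives the choice of multipliers: along a direction with positive curvature, any λ_i < b_i/2 yields a downhill step (λ_i = 0 recovers the Newton–Raphson step of Eq. (18.57)), while λ_i > b_i/2 forces an uphill step. A quick numerical check (the specific λ values below are illustrative, not the prescriptions of the figures that follow):

```python
def step(F, b, lam):
    """Step along one eigendirection, h_i = F_i / (lam_i - b_i), Eq. (18.57)."""
    return F / (lam - b)

def dU(F, b, lam):
    """Energy change along one eigendirection, Eq. (18.61)."""
    return F**2 * (lam - b / 2.0) / (lam - b) ** 2

F, b = 1.0, 4.0    # positive local curvature (a locally convex mode)
print(step(F, b, 0.0), dU(F, b, 0.0))  # lam = 0: Newton step h = -F/b = -0.25, dU = -0.125
print(step(F, b, 8.0), dU(F, b, 8.0))  # lam > b/2: h = +0.25, dU = +0.375 (uphill)
```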


Hence, the choice of Lagrange multipliers governs the stepping along each of the 3N eigendirections, with the resulting change in energy given by Eq. (18.61). From this equation, it is clear that the appropriate choice of Lagrange multiplier depends on both the local gradient (F_i) and curvature (b_i) along the eigenvector [12].

The choice of Lagrange multipliers for minimization of energy along a given eigendirection is shown in Figure 18.4. Here, a 3 × 3 matrix of choices is shown depending on whether the local gradient is negative (top row), close to zero (middle row), or positive (bottom row). The threshold for what constitutes an effectively zero slope is denoted F_th. Likewise, the columns in Figure 18.4 indicate whether the local curvature is negative (left column), close to zero (middle column), or positive (right column). The threshold for what constitutes an effectively zero curvature is denoted b_th. The content of each cell in Figure 18.4 shows the appropriate choice of Lagrange multiplier, λ_i, or the appropriate action for the next step considering the particular combination of gradient and curvature. The Lagrange multipliers are chosen separately for each eigendirection [18] in order to construct the complete step vector, h. Minimization of energy is finished when each of the eigendirections falls in the green highlighted cells in Figure 18.4, i.e., when the gradient is zero and the curvature is either zero or positive. If minimization is achieved along all 3N eigendirections, then an inherent structure has been found.

Now suppose that we wish to find a transition point (viz., a first-order saddle point) by maximizing energy along a particular V_i eigendirection while minimizing energy in all of the orthogonal V_j≠i eigendirections. In

Figure 18.4 Choice of Lagrange multipliers for minimization of energy along a given eigendirection.


this case, the choice of Lagrange multipliers should still follow the prescription in Figure 18.4 for all eigendirections except the one along which maximization of energy is desired. The appropriate choice of Lagrange multiplier for maximization of energy is shown in Figure 18.5. Maximization is successfully achieved when the energy along that eigendirection satisfies the condition highlighted in green, i.e., zero gradient and negative curvature. Implementation of the above eigenvector-following technique involves first minimizing the energy to find a starting inherent structure. Then the Hessian matrix, H, is calculated at this inherent structure and diagonalized in order to determine its eigenvalues and eigenvectors. One eigenvector Vi is selected along which to maximize the energy to find a transition point. Typically, one would start with the “softest mode,” i.e., the eigendirection corresponding to the smallest eigenvalue (lowest curvature). Higher-order modes can be chosen subsequently to locate other transition states in later iterations of the algorithm. These transition points are likely to be of higher energy than those obtained by following the softest mode. A step is then taken in the direction of the chosen eigenvector Vi of interest using a desired magnitude ci. This initial step h should have no components from the other Vjsi eigendirections. Note that when mapping a complete list of transition points, a second search should be initiated in the opposite Vi direction. At each new point, the gradient vector, g, and Hessian matrix, H, are calculated. The eigenvalues and eigenvectors of the new Hessian matrix are determined, and the gradients along each eigendirection, Fi, are

Figure 18.5 Choice of Lagrange multipliers for maximization of energy along a given eigendirection.


calculated using Eq. (18.55). The stepping process continues using the choice of Lagrange multipliers from Figures 18.4 and 18.5 to either minimize or maximize the energy along each individual eigendirection.

Once a transition point has been found, the energy and corresponding atomic configuration are recorded. The system is then pushed to the other side of the transition point, and an energy minimization is performed to find the new inherent structure in the adjacent basin. The process continues from each new inherent structure and along each eigendirection of interest until a satisfactory level of mapping has been achieved [12].

An example demonstration of the eigenvector-following technique for finding first-order saddle points is shown in Figure 18.6, which plots a simple two-dimensional landscape [18]. The system starts in the minimum energy configuration at the origin of the landscape. The stepwise eigenvector-following algorithm is applied along both the plus and minus directions of the softer vibrational mode. The eigenvector-following approach accurately converges at the two transition points within several steps in each direction.

Figure 18.6 Contour plot of a two-dimensional potential energy landscape. Starting from the inherent structure at (0, 0), the two transition points are found by stepping through the energy landscape using the eigenvector-following approach discussed in Section 18.5. The transition points occur at locations (2.4104, 0.4419) and (0.1985, 2.2793). The initial step size is chosen to be 2.0, and the initial step is taken in opposite directions along the softest mode of the starting basin [18].
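The procedure can be condensed into a short sketch on a model two-dimensional surface. The test landscape, the rational-function choice of shifts λ_i, the step clamp, and the initial kick size below are all illustrative assumptions rather than the exact prescriptions of Figures 18.4 and 18.5; the skeleton, however, follows the text: diagonalize H, resolve g into mode gradients F_i via Eq. (18.55), then step h_i = F_i/(λ_i − b_i) per Eq. (18.57), maximizing along the softest mode while minimizing along the rest.

```python
import numpy as np

# Model landscape (hypothetical): U = (x^2 - 1)^2 / 4 + 2 y^2,
# with minima at (+/-1, 0) and a first-order saddle at (0, 0).
def grad(r):
    x, y = r
    return np.array([x**3 - x, 4.0 * y])

def hess(r):
    x, y = r
    return np.array([[3.0 * x**2 - 1.0, 0.0],
                     [0.0, 4.0]])

def eigenvector_following(r0, kick=0.3, max_step=0.2, tol=1e-8, iters=100):
    """Walk uphill along the softest mode to a first-order saddle point."""
    b, Vecs = np.linalg.eigh(hess(r0))
    v = Vecs[:, 0] if Vecs[0, 0] >= 0 else -Vecs[:, 0]  # softest mode, fixed sign
    r = r0 - kick * v                 # initial displacement off the minimum
    for _ in range(iters):
        g = grad(r)
        if np.linalg.norm(g) < tol:
            break
        b, Vecs = np.linalg.eigh(hess(r))       # eigenvalues b_i, ascending
        F = Vecs.T @ g                          # mode gradients F_i, Eq. (18.55)
        h = np.zeros_like(r)
        for i in range(len(b)):
            root = np.sqrt(b[i] ** 2 / 4.0 + F[i] ** 2)
            # Rational-function shifts: maximize along mode 0, minimize others
            lam = b[i] / 2.0 + root if i == 0 else b[i] / 2.0 - root
            hi = F[i] / (lam - b[i]) if abs(lam - b[i]) > 1e-14 else 0.0
            h += np.clip(hi, -max_step, max_step) * Vecs[:, i]  # Eq. (18.57)
        r = r + h
    return r

saddle = eigenvector_following(np.array([1.0, 0.0]))
print(saddle)                            # converges near the saddle at (0, 0)
print(np.linalg.eigvalsh(hess(saddle)))  # exactly one negative eigenvalue
```

Here the kick direction is chosen by hand toward the known saddle; a production implementation would launch searches in both the +v and −v directions, as the text advises, and would take grad and hess from the interatomic potential of a 3N-dimensional system.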


While the above eigenvector-following approach can be applied to any arbitrary potential energy landscape, application to enthalpy landscapes requires special consideration because any change in the volume of the system requires a rescaling of the atomic position coordinates. Hence, the volumetric changes are coupled with changes of individual atomic positions. Due to this coupling, the resulting Hessian matrix incorporating the length dimension (L) is no longer symmetric. In order to overcome this challenge, Mauro et al. [19] have developed a split-step eigenvector-following technique specifically for finding inherent structures and transition points in enthalpy landscapes under isobaric conditions. The split-step technique involves alternating between changes in volume where the atomic positions are rescaled and changes in atomic positions under the constraint of fixed volume. Full details on the mapping of enthalpy landscapes can be found in Ref. [19].

Several other numerical approaches exist to locate transition points in potential energy landscapes. Of particular note is the activation-relaxation technique of Mousseau and Barkema [20,21]. The activation-relaxation technique involves three steps. First, the system is randomly perturbed from a starting potential energy minimum. Whereas the eigenvector-following approach follows a specific vibrational mode, the activation-relaxation technique involves an initial random displacement. The random motion continues until a direction of negative curvature has been identified. Since the direction of negative curvature corresponds to the eigenmode with the lowest eigenvalue, the energy is then maximized in that direction until converging on a saddle point. The final step is relaxation to a new minimum on the other side of the transition barrier. This is accomplished by nudging the system over the transition barrier and then performing an energy minimization algorithm. The activation-relaxation technique has been applied to a variety of different systems, including amorphous silicon [22] and various types of proteins [23]. The activation-relaxation technique is more computationally efficient than eigenvector-following, but it is less systematic in how it explores the landscape to find transition points since it relies on initially random displacements rather than deterministic steps along specific eigendirections.

Another important method for locating transition points is the nudged elastic band method of Henkelman and coworkers [24,25]. Whereas the eigenvector-following and activation-relaxation techniques find transition


points without prior knowledge of the inherent structure on the other side of the transition barrier, the nudged elastic band method requires prior knowledge of both inherent structures. Given this knowledge of both the “reactant” and the “product” states, the nudged elastic band method then calculates the lowest activation barrier connecting those two basins. The algorithm operates by minimizing the energy of a sequence of intermediate states along the reaction pathway. This is performed using a constrained optimization algorithm that involves adding spring forces along the band between the various states.
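A minimal nudged elastic band sketch on a model two-dimensional surface follows. The potential, spring constant, image count, and step size are illustrative assumptions, and production implementations add refinements such as the climbing image and better minimizers [24,25]:

```python
import numpy as np

# Model surface (hypothetical): U = (x^2 - 1)^2 + 5 (y - x^2)^2,
# with minima at (-1, 1) and (1, 1) and a saddle at (0, 0), so the
# minimum-energy path bends through the origin rather than a straight line.
def U(r):
    x, y = r
    return (x**2 - 1) ** 2 + 5.0 * (y - x**2) ** 2

def grad(r):
    x, y = r
    return np.array([4*x*(x**2 - 1) - 20*x*(y - x**2), 10*(y - x**2)])

def neb(a, b, n_images=11, k=5.0, step=0.01, iters=2000):
    """Minimal NEB: relax interior images using the perpendicular component
    of the true force plus a spring force along the local tangent."""
    path = np.linspace(a, b, n_images)       # straight-line initial guess
    for _ in range(iters):
        new = path.copy()
        for i in range(1, n_images - 1):
            tau = path[i + 1] - path[i - 1]
            tau = tau / np.linalg.norm(tau)  # local tangent estimate
            g = grad(path[i])
            f_perp = -(g - (g @ tau) * tau)  # true force, perpendicular part
            f_spring = k * (np.linalg.norm(path[i + 1] - path[i])
                            - np.linalg.norm(path[i] - path[i - 1])) * tau
            new[i] = path[i] + step * (f_perp + f_spring)
        path = new                            # synchronous update of the band
    return path

path = neb(np.array([-1.0, 1.0]), np.array([1.0, 1.0]))
top = max(path, key=U)     # highest-energy image approximates the saddle
print(top)                 # near (0, 0)
```

Note that, unlike eigenvector-following, this requires both endpoint minima as input; the band then relaxes onto the minimum-energy path, and the highest image estimates the activation barrier.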

18.6 ExplorerPy

ExplorerPy is an open source software package written in Python which implements the eigenvector-following technique in Section 18.5 for finding inherent structures and transition points in a potential energy or enthalpy landscape [26]. A flowchart showing the algorithm used by ExplorerPy for mapping of an energy landscape is provided in Figure 18.7. The output from ExplorerPy is a listing of the inherent structures and transition points, as well as their corresponding energies, volumes, and information about the curvature of the landscape so that the vibrational frequency can be calculated. ExplorerPy is designed to interface with the

Figure 18.7 Flowchart of the algorithm executed in ExplorerPy. The program starts in the top left corner and runs until the condition in the pink box is satisfied. Yellow diamonds represent decision points and blue boxes represent mathematical operations. (Image by Collin Wilkinson (Penn State)).


Figure 18.8 Enthalpy landscape of SiO2 calculated using ExplorerPy and plotted as a disconnectivity graph. The coloring of the inherent structures indicates their corresponding volume. (Calculations and figure by Collin Wilkinson (Penn State)).

LAMMPS software package for molecular simulations and uses the interatomic potentials from a LAMMPS input script. Hence, ExplorerPy can access the full range of potentials available in LAMMPS. ExplorerPy also incorporates additional techniques for finding transition points, including a combined approach based on molecular dynamics and the nudged elastic band method. The ExplorerPy program is available for download at https://github.com/Mauro-Glass-Group/.

Two examples of calculations using ExplorerPy are shown in Figures 18.8 and 18.9. Figure 18.8 plots the enthalpy landscape of silica using the disconnectivity graph approach described in Section 18.4. Here, the

Figure 18.9 Potential energy landscape of a barium disilicate glass-ceramic calculated using ExplorerPy. The colors show the percent crystallinity of the sample. The blue inherent structure is the initial starting configuration. The landscape shows the lower potential energy associated with partial crystallization of the sample. (Calculations and figure by Collin Wilkinson (Penn State)).


inherent structures in the disconnectivity graph are colored to indicate the different volumes associated with each inherent structure. Figure 18.9 plots the potential energy landscape of a barium disilicate glass-ceramic, where the inherent structures are colored to indicate the percent crystallization in each sample. The relationship between potential energy and degree of crystallization is readily apparent from the disconnectivity graph.

18.7 Summary

The thermodynamics and kinetics of condensed matter can be described in terms of their underlying energy landscape. Under constant volume conditions, the potential energy landscape maps the potential energy as a function of all 3N position coordinates of an N-atom system. Under constant pressure conditions, a (3N + 1)-dimensional enthalpy landscape can be used to capture changes in volume. The energy landscape of a system contains an exponentially large number of minima, known as inherent structures. Inherent structures are locally stable configurations of atoms. The collection of all points in the landscape that drain to a particular minimum is known as a basin. There is one basin for every inherent structure. While the energy landscape itself does not change with temperature, the way in which a system samples the landscape depends on the available thermal energy.

The primary assumption of the energy landscape approach is that the kinetics of the system can be separated into fast vibrations within a basin and less frequent inter-basin transitions. The inter-basin transitions are accomplished by overcoming a transition point, where the difference between the transition point energy and the inherent structure energy defines the activation barrier for the inter-basin transition.

Application of the energy landscape approach involves mapping the continuous energy landscape to a discrete set of basins and the connecting transition points. The transition points can be found using a variety of different approaches, including eigenvector-following with a Lagrange function, the activation-relaxation technique, and the nudged elastic band method. ExplorerPy is an open source software package for performing this mapping of any arbitrary potential energy landscape or enthalpy landscape. After the set of inherent structures and transition points have been mapped, the kinetics of the system can be calculated by solving a set of coupled master equations.
The evolution of the macroscopic properties of the system is calculated using suitable averages with the basin occupation probabilities.


Exercises

(18.1) Under what conditions would it not be possible to separate the partition function of a potential energy landscape into independent configurational and vibrational contributions? What would be the implications for studying the thermodynamics and kinetics within the energy landscape formalism?

(18.2) Write equations for the evolution of the following properties in terms of the time-dependent basin occupation probabilities, p_i(t), in an enthalpy landscape:
(a) Isobaric heat capacity
(b) Volumetric thermal expansion coefficient
(c) Average structural relaxation time

(18.3) Download the disconnectivity graph software from the David Wales group at the University of Cambridge (http://www-wales.ch.cam.ac.uk/software.html). Practice plotting example energy landscapes using the disconnectivity graph approach. Label the inherent structures and transition points in the graph.

(18.4) In your own words, summarize the key steps and discuss the relative advantages and disadvantages of the following techniques for locating transition points in an energy landscape:
(a) Eigenvector-following technique
(b) Activation-relaxation technique
(c) Nudged elastic band method

(18.5) Download the ExplorerPy software from https://github.com/Mauro-Glass-Group/ and run the example script for SiO2. Plot the resulting energy landscape as a disconnectivity graph. What can you learn from the topography of the landscape?

References
[1] M. Goldstein, "Viscous Liquids and the Glass Transition: A Potential Energy Barrier Picture," J. Chem. Phys. 51, 3728 (1969).
[2] F. H. Stillinger and T. A. Weber, "Hidden Structure in Liquids," Phys. Rev. A 25, 978 (1982).
[3] F. H. Stillinger and T. A. Weber, "Dynamics of Structural Transitions in Liquids," Phys. Rev. A 28, 2408 (1983).
[4] F. H. Stillinger, "Supercooled Liquids, Glass Transitions, and the Kauzmann Paradox," J. Chem. Phys. 88, 7818 (1988).
[5] P. G. Debenedetti, F. H. Stillinger, T. M. Truskett, and C. J. Roberts, "The Equation of State of an Energy Landscape," J. Phys. Chem. B 103, 7390 (1999).
[6] P. G. Debenedetti and F. H. Stillinger, "Supercooled Liquids and the Glass Transition," Nature 410, 259 (2001).


[7] F. H. Stillinger and P. G. Debenedetti, "Energy Landscape Diversity and Supercooled Liquid Properties," J. Chem. Phys. 116, 3353 (2002).
[8] J. C. Mauro, R. J. Loucks, J. Balakrishnan, and S. Raghavan, "Monte Carlo Method for Computing Density of States and Quench Probability of Potential Energy and Enthalpy Landscapes," J. Chem. Phys. 126, 194103 (2007).
[9] F. H. Stillinger and P. G. Debenedetti, "Distinguishing Vibrational and Structural Equilibration Contributions to Thermal Expansion," J. Phys. Chem. B 103, 4052 (1999).
[10] M. Potuzak, J. C. Mauro, T. J. Kiczenski, A. J. Ellison, and D. C. Allan, "Communication: Resolving the Vibrational and Configurational Contributions to Thermal Expansion in Isobaric Glass-Forming Systems," J. Chem. Phys. 133, 091102 (2010).
[11] C. P. Massen and J. P. K. Doye, "Power-Law Distributions for the Areas of the Basins of Attraction on a Potential Energy Landscape," Phys. Rev. E 75, 037101 (2007).
[12] J. C. Mauro, R. J. Loucks, A. K. Varshneya, and P. K. Gupta, "Enthalpy Landscapes and the Glass Transition," in Scientific Modeling and Simulations, Springer (2008), pp. 241–281.
[13] L. Angelani, G. Parisi, G. Ruocco, and G. Viliani, "Potential Energy Landscape and Long-Time Dynamics in a Simple Model Glass," Phys. Rev. E 61, 1681 (2000).
[14] J. C. Mauro and A. K. Varshneya, "A Nonequilibrium Statistical Mechanical Model of Structural Relaxation in Glass," J. Am. Ceram. Soc. 89, 1091 (2006).
[15] J. C. Mauro, R. J. Loucks, and P. K. Gupta, "Metabasin Approach for Computing the Master Equation Dynamics of Systems with Broken Ergodicity," J. Phys. Chem. A 111, 7957 (2007).
[16] D. J. Wales, Energy Landscapes, Cambridge University Press (2003).
[17] J. C. Mauro, R. J. Loucks, J. Balakrishnan, and A. K. Varshneya, "Mapping the Potential Energy Landscapes of Selenium Clusters," J. Non-Cryst. Solids 353, 1268 (2007).
[18] J. C. Mauro, R. J. Loucks, and J. Balakrishnan, "A Simplified Eigenvector-Following Technique for Locating Transition Points in an Energy Landscape," J. Phys. Chem. A 109, 9578 (2005).
[19] J. C. Mauro, R. J. Loucks, and J. Balakrishnan, "Split-Step Eigenvector-Following Technique for Exploring Enthalpy Landscapes at Absolute Zero," J. Phys. Chem. B 110, 5005 (2006).
[20] N. Mousseau and G. T. Barkema, "Traveling Through Potential Energy Landscapes of Disordered Materials: The Activation-Relaxation Technique," Phys. Rev. E 57, 2419 (1998).
[21] G. T. Barkema and N. Mousseau, "The Activation–Relaxation Technique: An Efficient Algorithm for Sampling Energy Landscapes," Comput. Mater. Sci. 20, 285 (2001).
[22] N. Mousseau and G. T. Barkema, "Activated Mechanisms in Amorphous Silicon: An Activation-Relaxation-Technique Study," Phys. Rev. B 61, 1898 (2000).
[23] N. Mousseau, P. Derreumaux, G. T. Barkema, and R. Malek, "Sampling Activated Mechanisms in Proteins with the Activation–Relaxation Technique," J. Mol. Graph. Model. 19, 78 (2001).
[24] G. Henkelman, B. P. Uberuaga, and H. Jónsson, "A Climbing Image Nudged Elastic Band Method for Finding Saddle Points and Minimum Energy Paths," J. Chem. Phys. 113, 9901 (2000).
[25] D. Sheppard, P. Xiao, W. Chemelewski, D. D. Johnson, and G. Henkelman, "A Generalized Solid-State Nudged Elastic Band Method," J. Chem. Phys. 136, 074103 (2012).
[26] C. J. Wilkinson and J. C. Mauro, "Explorer.py: Mapping the Energy Landscapes of Complex Materials," SoftwareX (2020) (in press).


CHAPTER 19

Broken Ergodicity

19.1 What is Ergodicity?

We began this book with a quote from Richard Feynman [1] on the nature of equilibrium: "Equilibrium is when all the fast things have happened but the slow things have not." Different stages of equilibrium occur on different time scales. The example of coffee-cream equilibrium discussed in Section 1.1 illustrates this point. Another illustrative example is that of ice-water equilibrium depicted in Figure 19.1. Consider ice cubes in a glass of water, where the ice melts in the liquid water on a time scale of τ1, giving the first stage of equilibrium. A second equilibrium occurs on a longer time scale of τ2, when the water reaches temperature equilibrium with its surroundings. Finally, a vapor equilibrium with the environment is achieved on an even longer time scale of τ3. Which of these is the true equilibrium? The answer depends on the question that you as the scientist are trying to answer. Hence, an important part of constructing the experiment is to choose an appropriate time scale for observing the processes relevant for the problem under study.

The critically important role of time scale is captured in the concept of ergodicity. An ergodic system is one in which the time average of a property equals an ensemble average of that same property over all the microstates in the system. For any property A, ergodicity implies that

$$\bar{A} = \langle A \rangle, \tag{19.1}$$

where Ā is a time average of A and ⟨A⟩ is an ensemble average. The time average is calculated by repeatedly measuring A of the same system over an observation time, t_obs. The ensemble average is calculated by measuring multiple identically prepared systems over an identical time. The ensemble average is equivalent to calculating a weighted average of the property by

$$\langle A \rangle = \sum_i p_i A_i, \tag{19.2}$$

Materials Kinetics, ISBN 978-0-12-823907-0, https://doi.org/10.1016/B978-0-12-823907-0.00012-1. © 2021 Elsevier Inc. All rights reserved.

Figure 19.1 (a) A composite ice-water system. (b) Melting of ice occurs on a time scale of τ₁. (c) The ice water is initially at a temperature below room temperature. (d) Over a longer time scale of τ₂, the water achieves temperature equilibrium with its environment. (e) Water molecules vaporize and (f) achieve vapor equilibrium with the environment on a time scale of τ₃.

where p_i is the probability of the system occupying microstate i and A_i is the value of the property A corresponding to that microstate. In terms of Stillinger's energy landscape description from Chapter 18, p_i is the probability of occupying a given basin i and A_i is the value of A corresponding to that basin. The word "ergodic" was originally coined by Boltzmann during his development of statistical mechanics and is derived from the Greek words ἔργον ("ergon," meaning "work") and ὁδός ("hodos," meaning "path" or "way"). Ergodicity is one of the most prevalent, and often unstated, assumptions in the field of statistical mechanics. A system is ergodic if it is given enough time to explore a sufficient number of microstates such that Eq. (19.1) is satisfied for the properties of interest.
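The weighted average of Eq. (19.2) is straightforward to evaluate numerically. The following sketch uses invented occupation probabilities and property values for a hypothetical three-microstate system:

```python
# Ensemble average <A> = sum_i p_i * A_i, Eq. (19.2).
# The probabilities and property values below are invented for illustration.
p = [0.5, 0.3, 0.2]   # microstate occupation probabilities (must sum to 1)
A = [1.0, 2.0, 4.0]   # value of property A in each microstate

assert abs(sum(p) - 1.0) < 1e-12  # probabilities are normalized

A_avg = sum(pi * Ai for pi, Ai in zip(p, A))
print(A_avg)  # 0.5*1.0 + 0.3*2.0 + 0.2*4.0 = 1.9
```

For an ergodic system, the same value would be obtained by time-averaging a single system over a sufficiently long observation window.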


Hence, the question of ergodicity, and that of thermodynamic equilibrium, is really a question of time scale. We may think of an experiment as having two relevant time scales: an internal time scale τ_int on which the kinetics of the system occur, and an external time scale τ_ext on which properties are measured or observed. The internal time scale is effectively a relaxation time over which the system loses memory of its previous states, while the external time scale defines the measurement window over which the system is observed. The measurement can be made either by direct human observation or using an instrument able to access time or length scales inaccessible to an unaided human. Following Feynman, a system has equilibrated if all the relevant relaxation processes have occurred, while the slower processes are essentially "frozen" on the time scale of the experiment.

Before proceeding, it is important to distinguish between the concept of ergodicity and the so-called "ergodic hypothesis," since the two are commonly confused. Ergodicity is the state of equivalence between the time average and ensemble average of the properties of a system. The ergodic hypothesis states that a system will become ergodic in the limit of long time. The ergodic hypothesis implies that all microstates are accessible such that, given enough time, each microstate will be visited by the system.

19.2 Deborah Number

In a classic 1964 paper, Reiner [2] introduced the concept of the Deborah number, which is a useful construct for understanding ergodicity. The Deborah number, D, is defined as the ratio of the internal to external time scales for a given experiment:

D = τ_int / τ_ext.  (19.3)

The Deborah number was named in honor of the prophetess Deborah, who in the Old Testament sings, "…the mountains flowed before the Lord…" (Judges 5:5). The rationale is that mountains, while essentially static on a human time scale, do indeed flow on a geological time scale inaccessible to direct observation. For observation on a human time scale, the phenomenon of continental drift is described by a large Deborah number (D ≫ 1, τ_int ≫ τ_ext), since the system visits only a small number of the available states during the external (observation) time. This represents an insufficient sampling of phase space for determining the long-time average of properties. Hence, the system is nonergodic.
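As a minimal numerical sketch, Eq. (19.3) can be evaluated for the two limiting cases discussed here; the time scales below are rough, invented orders of magnitude, not measured values:

```python
def deborah(tau_int, tau_ext):
    """Deborah number D = tau_int / tau_ext, Eq. (19.3)."""
    return tau_int / tau_ext

# Continental drift observed on a human time scale (illustrative values):
# an internal time scale of millions of years vs. an observation of ~1 year.
D_mountains = deborah(tau_int=1e14, tau_ext=3e7)   # both in seconds
print(D_mountains > 1)   # True: nonergodic on this observation time scale

# Molecular rearrangement in a liquid (~1 ps) observed for ~1 s:
D_liquid = deborah(tau_int=1e-12, tau_ext=1.0)
print(D_liquid < 1)      # True: ergodic on this observation time scale
```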


On the other hand, many phenomena, such as the kinetic processes in gases and liquids, occur on time scales that are too fast for direct human observation. These phenomena are characterized by a very small Deborah number (D ≪ 1, τ_int ≪ τ_ext), which is indicative of an ergodic system. Since the system can explore a greater portion of phase space (i.e., a greater number of microstates) during the external observation window, the time-averaged properties measured during τ_ext = t_obs are effectively equal to those measured in an ensemble-averaged sense.

The observation of ergodic behavior therefore depends on both the internal relaxation time of a system and the external observation time. A system is ergodic when D ≪ 1, since the kinetic processes that lead to equilibration are very fast. However, the system can be nonergodic when D ≫ 1, if equilibrium is unable to be achieved. The issue of ergodicity, while inherently relative, is of great physical significance, since the properties we observe are those we measure on our own finite time scale, not in the limit of infinite time. It is thus important for any practical experiment to account for the ergodicity, or potential lack thereof, of a system to attain accurate predictions and understanding of its properties.

A system that is initially ergodic can become nonergodic if its internal time scale for equilibration becomes long compared to the external observation time scale. A classic example of this breakdown of ergodicity is the glass transition, shown in Figure 19.2. At high temperatures, the internal relaxation time of the liquid is short compared to the observation time (D ≪ 1, τ_int ≪ τ_ext). Hence, the high-temperature liquid is ergodic. However, as the liquid is supercooled, its viscosity increases by many orders of magnitude. Upon sufficient cooling, the internal kinetics of the liquid reach a time scale that is of the same order as the external observation time scale (D ≈ 1, τ_int ≈ τ_ext). The regime where D ≈ 1 corresponds to the glass transition range, where the system initially departs from equilibrium. As the system continues to cool, the internal kinetics become exponentially slower, such that the glassy state is effectively trapped in a subset of the available microstates. Hence, the glass itself is nonergodic (D ≫ 1, τ_int ≫ τ_ext). Indeed, the only reason we observe a glass at all is that we are measuring its properties on a time scale shorter than the internal relaxation time of the system. In the limit of infinite time, nonequilibrium states such as glass do not exist.

The breakdown of ergodicity can take place either discontinuously or continuously. A discontinuous breakdown of ergodicity is accompanied by a sudden partitioning of the phase space into a set of mutually inaccessible regions. An example of discontinuous ergodicity breaking is glass formation


Figure 19.2 The glass transition is an example of a breakdown of ergodicity. The liquid at high temperatures is an initially ergodic system, having a Deborah number less than unity (D < 1). As the liquid is supercooled, the kinetics of the system become exponentially slower. When the internal relaxation time becomes comparable to the external observation time, a continuous breakdown of ergodicity occurs. This corresponds to the glass transition region, where the Deborah number is on the order of unity. The low-temperature glassy system is nonergodic, having a relaxation time scale much longer than the observation time scale, i.e., a Deborah number greater than unity.

by abrupt quenching from a melt, where the system experiences a sudden loss of thermal energy, trapping it in a subset of the overall phase space. This situation is discussed in Section 19.3. On the other hand, a glass is more commonly formed by rate-cooling from a melt. In this case, the glass-forming system experiences a continuous breakdown of ergodicity as it gradually becomes confined to a subset of the phase space. The physics of continuously broken ergodicity will be introduced in Section 19.4.

Note that the above example of ergodicity loss upon cooling considers an exponentially increasing τ_int while the observation time, τ_ext = t_obs, is effectively constant. From Eq. (19.3), another way to induce a loss of ergodicity is by reducing τ_ext for a fixed τ_int. This can be achieved by probing a system at very short observation time scales. For example, a liquid probed at very high frequencies exhibits the properties of a glass, since it is unable to explore a sufficiently large number of configurations on such a short time scale. This is an example of an isothermal glass transition, i.e., a glass transition that is induced by decreasing τ_ext rather than by increasing τ_int. Both cases result in a breakdown of ergodicity, since D ≫ 1 (τ_int ≫ τ_ext).


19.3 Broken Ergodicity

The term broken ergodicity was introduced by Bantilan and Palmer [3] in 1981 and thoroughly developed by Palmer [4] in 1982 as a statistical mechanical framework in which to treat the physics of nonergodic systems. In the Palmer approach, the loss of ergodicity is treated as a discontinuous process in which the phase space Γ of the nonergodic system is divided into a set of disjoint regions {Γ_α}, where

Γ = ∪_α Γ_α,  (19.4)

i.e., the complete phase space is the union of all regions. Each region Γ_α is called a component and consists of a group of microstates that meet two conditions: confinement and internal ergodicity. The condition of confinement states that transitions are not allowed between microstates in different components. In other words, if a system has its phase point in a particular component Γ_α at a time t, there is insignificant probability of the system leaving that component over the observation time t_obs, i.e., during the time interval [t, t + t_obs]. The condition of internal ergodicity states that ergodicity is valid within a given component, i.e., all microstates within a component are mutually accessible on a time scale much shorter than t_obs. These conditions of confinement and internal ergodicity are illustrated in Figure 19.3 for a simple two-component phase space.

Figure 19.3 The Palmer model of broken ergodicity is based on partitioning the system into components satisfying the conditions of internal ergodicity and confinement. The condition of internal ergodicity implies that local equilibration can be achieved among the microstates within a component. The condition of confinement implies that the transitions between components occur on a time scale much longer than the observation time, such that inter-component transitions are effectively forbidden.


The Palmer approach thus assumes a distinct separation of intra- and inter-component kinetic time scales: intra-component kinetics, i.e., transitions among the various microstates within a component, occur on a time scale τ_intra much shorter than the observation time (D ≪ 1, τ_intra ≪ t_obs), while inter-component transitions occur on a time scale τ_inter much longer than the observation time (D ≫ 1, τ_inter ≫ t_obs). The assumption that

τ_intra ≪ t_obs ≪ τ_inter  (19.5)

enables components to be in internal equilibrium while the overall system is not. Hence, ergodic statistical mechanics can be utilized within each component, and the macroscopic properties of the nonergodic system can be calculated using a suitable average over each of the components. The probability of a system being confined within a particular component Γ_α is equal to a restricted summation of the occupation probabilities of the individual microstates within Γ_α, i.e.,

P_α = Σ_{i∈α} p_i,  (19.6)

where p_i is the probability of occupying microstate i and P_α is the total probability of occupying component Γ_α. In the language of Stillinger's energy landscape formalism in Chapter 18, each microstate corresponds to a basin containing a unique inherent structure. As shown in Figure 19.3, the transition barriers among the basins within a component are small, such that they can be overcome on a short time scale. This satisfies the condition of internal ergodicity. In contrast, there is a large transition barrier between components, such that transitions between basins in two different components are not kinetically allowed. This is consistent with Palmer's condition of confinement. In the language of energy landscapes, a component is also known as a metabasin. Since each basin falls within exactly one metabasin (i.e., each microstate is contained within exactly one component), the sum of metabasin probabilities is unity:

Σ_α P_α = Σ_α Σ_{i∈α} p_i = 1.  (19.7)

As a direct result of the condition of confinement, the statistical mechanics of the system can be written in terms of independent component partition functions [4,5], Q_α:

Q_α = Σ_{i∈α} exp(−U_i / kT),  (19.8)


where k is Boltzmann's constant, T is temperature, and U_i is the potential energy of microstate i. The summation in Eq. (19.8) is performed over all microstates i contained within component Γ_α. Note that Eq. (19.8) is expressed as a canonical partition function, i.e., assuming isochoric conditions. Under isobaric conditions, an analogous expression can be written in terms of the isothermal-isobaric partition function using enthalpy, H_i, in place of potential energy, U_i. The Helmholtz free energy of the system, F, is given by the expectation value of the free energy of the individual components, F_α:

F = F̄ = Σ_α P_α F_α.  (19.9)

Note that with the isothermal-isobaric partition function, a similar expression can be written for the Gibbs free energy rather than the Helmholtz free energy. If the system is trapped within component Γ_α, its resulting free energy is

F_α = −kT ln Q_α.  (19.10)

Combining Eqs. (19.9) and (19.10), the expectation value of the free energy of the system is

F̄ = −kT Σ_α P_α ln Q_α.  (19.11)

This equation is expressed as a weighted average since, in general, it is not known in which component the system is trapped. We can only express the free energy and other properties in terms of the probability, P_α, of the system becoming trapped in each component. Using the definition of the Helmholtz free energy, F = U − TS, where S is entropy, Eq. (19.9) can be expressed as

F̄ = Σ_α P_α (U_α − T S_α) = Σ_α P_α U_α − T Σ_α P_α S_α.  (19.12)

Here, U_α is the potential energy of component Γ_α, given by

U_α = (Σ_{i∈α} p_i U_i) / (Σ_{i∈α} p_i) = (1/P_α) Σ_{i∈α} p_i U_i.  (19.13)
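The component-level quantities of Eqs. (19.6), (19.8), (19.10), and (19.11) can be checked with a small numerical sketch. The two-component landscape below uses invented microstate energies and equilibrium occupation probabilities (for which P_α = Q_α/Q), working in reduced units where kT = 1:

```python
import math

kT = 1.0  # reduced units (illustrative)

# Hypothetical microstate energies U_i, grouped into two components
# (metabasins); the grouping and energies are invented for this sketch.
components = {"alpha": [0.0, 0.1], "beta": [0.5, 0.7]}

# Full partition function over all microstates
Q = sum(math.exp(-U / kT) for Us in components.values() for U in Us)

F_bar = 0.0
for name, Us in components.items():
    Qa = sum(math.exp(-U / kT) for U in Us)  # component partition function, Eq. (19.8)
    Pa = Qa / Q                              # sum of equilibrium p_i, Eq. (19.6)
    Fa = -kT * math.log(Qa)                  # component free energy, Eq. (19.10)
    F_bar += Pa * Fa                         # expectation value, Eq. (19.11)
    print(name, round(Pa, 4), round(Fa, 4))
print("F_bar =", round(F_bar, 4))
```

Note that Q_α/Q equals P_α only when the p_i take their equilibrium values; in general, P_α must be computed from Eq. (19.6) using the actual occupation probabilities.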


The expectation value of the potential energy of the system is given by the weighted average of the potential energies of the individual components:

U = Ū = Σ_α P_α U_α.  (19.14)

Note that the potential energy of the system does not change as a result of broken ergodicity, i.e.,

Ū = Σ_α P_α U_α = Σ_α P_α (1/P_α) Σ_{i∈α} p_i U_i = Σ_i p_i U_i.  (19.15)

Likewise, the expectation value of the enthalpy of the system is

H = H̄ = Σ_α P_α H_α,  (19.16)

which is the same in both the broken ergodic and ergodic descriptions of the system:

H̄ = Σ_α P_α H_α = Σ_i p_i H_i.  (19.17)

Despite the equivalence of potential energy and enthalpy in the broken ergodic and ergodic systems, the free energy of Eq. (19.12) changes due to a loss of entropy in the broken ergodic state. Suppose the system is trapped in component Γ_α, where the properties of the system are determined entirely by averaging over the microstates within that component. In this case, the configurational entropy of the system is given by the Gibbs formula for entropy applied to the microstate occupation probabilities within that component, i.e.,

S_α = −k Σ_{i∈α} (p_i / P_α) ln(p_i / P_α).  (19.18)

Eq. (19.18) defines the entropy associated with component Γ_α. Note that the probabilities satisfy

Σ_{i∈α} (p_i / P_α) = 1.  (19.19)


The expectation value of the entropy is

S = S̄ = Σ_α P_α S_α.  (19.20)

The weighted average in Eq. (19.20) is used since we do not know beforehand into which component the system will become trapped. Suppose that a large number of systems are prepared under identical conditions. Some will be trapped in one particular component, while others could be trapped in other components. Each individual system can occupy only one position in phase space, i.e., one and only one component, and the entropy of that system is simply the entropy associated with that component. However, the various systems could be trapped in different components and hence have different values of entropy. The expectation value of the entropy is simply the weighted average of these entropies. Combining Eqs. (19.6), (19.18), and (19.20), we have

S̄ = −k Σ_α Σ_{i∈α} p_i ln(p_i / P_α).  (19.21)

Hence, Eq. (19.21) is the entropy of a broken ergodic system, S̄, which is less than or equal to that of the corresponding ergodic system, ⟨S⟩, i.e., S̄ ≤ ⟨S⟩:

−k Σ_α Σ_{i∈α} p_i ln(p_i / P_α) ≤ −k Σ_i p_i ln p_i.  (19.22)

The difference between the entropy of the ergodic and broken ergodic systems is called the complexity of the system, denoted I. The complexity is given by

I = ⟨S⟩ − S̄
  = −k Σ_i p_i ln p_i − [−k Σ_α Σ_{i∈α} p_i ln(p_i / P_α)]
  = −k Σ_α Σ_{i∈α} p_i [ln p_i − ln(p_i / P_α)]
  = −k Σ_α Σ_{i∈α} p_i (ln p_i − ln p_i + ln P_α)
  = −k Σ_α Σ_{i∈α} p_i ln P_α.  (19.23)


Combining the last line of Eq. (19.23) with Eq. (19.6), the complexity reduces to

I = −k Σ_α P_α ln P_α.  (19.24)
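The equality of the last line of Eq. (19.23) with Eq. (19.24), along with the inequality of Eq. (19.22), can be verified numerically. The microstate probabilities and the two-component partitioning below are invented for illustration (k = 1 in reduced units):

```python
import math

k = 1.0  # Boltzmann's constant in reduced units (illustrative)

# Hypothetical microstate probabilities, grouped into two components
components = [[0.4, 0.2], [0.3, 0.1]]

p = [pi for comp in components for pi in comp]
P = [sum(comp) for comp in components]                 # P_a, Eq. (19.6)

S_ergodic = -k * sum(pi * math.log(pi) for pi in p)    # <S>
S_broken = -k * sum(pi * math.log(pi / Pa)             # S-bar, Eq. (19.21)
                    for comp, Pa in zip(components, P) for pi in comp)

I_direct = S_ergodic - S_broken                        # complexity, Eq. (19.23)
I_reduced = -k * sum(Pa * math.log(Pa) for Pa in P)    # Eq. (19.24)

print(abs(I_direct - I_reduced) < 1e-12)  # True: the two forms agree
print(S_broken <= S_ergodic)              # True: Eq. (19.22)
```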

The complexity of a system is a measure of the amount of information needed to specify which component is actually occupied by the system, given knowledge of only the occupation probabilities, P_α. Complexity is also a measure of the nonergodicity of a system. For a completely confined system, i.e., a system trapped in a single microstate, each microstate is itself a component. In this case, the configurational entropy of the nonergodic system is zero,

S̄ = −k Σ_α Σ_{i∈α} ln 1 = 0,  (19.25)

and the complexity adopts its maximum value of I = ⟨S⟩. For an ergodic system, all microstates are part of the same component, and the complexity is zero. Hence, the process of ergodicity breaking necessarily results in a loss of entropy and an increase in complexity as the phase space divides into mutually inaccessible components. This loss of entropy is accompanied by an increase in the free energy of the system, as indicated by Eq. (19.12). While the breaking of ergodicity always results in a loss of entropy and a gain in free energy, the energy and enthalpy are unaffected, as indicated by Eqs. (19.15) and (19.17).

19.4 Continuously Broken Ergodicity

While the Palmer approach [4] can be applied to systems having a distinct separation of microstates into components, for many systems there is not a clear partitioning of the phase space into components satisfying the conditions of confinement and internal ergodicity. Moreover, the Palmer approach can only be applied to processes where the breakdown of ergodicity is discontinuous, i.e., it cannot be applied to study a gradual transition between ergodic and nonergodic behavior where the observation time scale


is comparable to that of the internal kinetic time scales of the system. This regime is pertinent for many real systems, where the breakdown of ergodicity is gradual. In this section, we present a general statistical mechanical framework for describing systems with continuously broken ergodicity [6]. This approach enables the direct calculation of properties accounting for the actual transition rates between microstates during a specified time interval. In contrast to the Palmer approach for discontinuously broken ergodicity, the framework of continuously broken ergodicity makes no assumptions about phase space partitioning, internal ergodicity, or confinement.

Consider a system with microstate occupation probabilities p_i(t) at time t. Suppose that an instantaneous measurement of the microstate of the system is made at time t. In this instant, the properties of the system would reflect only the single microstate occupied by the system. Since in this limit of zero observation time the macrostate of the system is determined by a single microstate, the entropy is necessarily zero. However, the entropy becomes positive for any finite observation time, t_obs, since transitions among microstates are not strictly forbidden except at absolute zero temperature (excluding quantum tunneling). What, then, is the entropy of the system measured over the observation window? This question can be answered by following the kinetics of the system starting from each possible microstate i at the beginning of the observation window. To account for the continuous breakdown of ergodicity, we consider two types of probabilities. The probability of occupying a given microstate i at the beginning of a measurement window is denoted p_i. Within the energy landscape description, this is equivalent to a basin occupation probability.
In addition, we define f_{i,j}(t) as the conditional probability of the system occupying microstate j after beginning in microstate i and then evolving for some time t, following the actual transition rates among microstates. The conditional probabilities satisfy

Σ_j f_{i,j}(t) = 1  (19.26)

for any initial state i and for all times t. Hence, f_{i,j}(t_obs) denotes the probability of transitioning to microstate j after an initial measurement in state i at t = 0 and evolving through an observation time window of duration t_obs.


These two types of probabilities are illustrated in Figure 19.4. In the limit of t_obs → 0, as shown in Figure 19.4(a), the conditional probability reduces to a Kronecker delta function,

lim_{t_obs→0} f_{i,j}(t_obs) = δ_ij = {1 if i = j; 0 if i ≠ j},  (19.27)

since in the limit of zero observation time the system does not have time to leave its current microstate i and transition to a different microstate. In the limit of t_obs → ∞, and assuming the validity of the ergodic hypothesis, as indicated in Figure 19.4(c), the conditional probabilities reduce to the equilibrium probabilities for occupying the final microstate j:

lim_{t_obs→∞} f_{i,j}(t_obs) = p_j^eq = (1/Q) exp(−U_j / kT),  (19.28)

where U_j is the energy of microstate j and Q is the equilibrium partition function. Figure 19.4(b) shows the case of a finite observation time, where f_{i,j}(t_obs) is the finite probability of ending in microstate j after starting in microstate i.

Suppose that the system begins in microstate i at t = 0 and is measured over an observation time, t_obs. Under these conditions, the configurational entropy, S_i(t_obs), is

S_i(t_obs) = −k Σ_j f_{i,j}(t_obs) ln f_{i,j}(t_obs).  (19.29)

This represents the configurational entropy of one possible realization of the system, viz., starting in microstate i. Eq. (19.29) is analogous to Palmer's component entropy of Eq. (19.18). However, in Eq. (19.29) we have not made any assumptions about confinement or internal ergodicity, i.e., the actual transition rates are used to calculate the conditional probability values. The expectation value of the configurational entropy at the end of the observation window is the weighted sum of the entropy values for all possible realizations of the system, i.e., averaging over all possible starting microstates i:

S(t_obs) = S̄(t_obs) = Σ_i p_i(0) S_i(t_obs).  (19.30)


Figure 19.4 The framework of continuously broken ergodicity is based on two sets of probabilities: the basin occupation probabilities (p_i) and the conditional probabilities (f_{i,j}). The conditional probability gives the probability of reaching basin j after the system starts in basin i and propagates for a given observation time. (a) In the limit of zero observation time, no transitions are allowed. (b) For a finite observation time, the conditional probabilities become nonzero for basins beyond the starting basin. (c) In the limit of long observation time, the conditional probabilities converge to the equilibrium basin occupation probabilities.


Substituting Eq. (19.29) into Eq. (19.30):

S(t_obs) = S̄(t_obs) = −k Σ_i p_i(0) Σ_j f_{i,j}(t_obs) ln f_{i,j}(t_obs).  (19.31)

In the limit of zero observation time, the configurational entropy vanishes:

lim_{t_obs→0} S̄(t_obs) = −k Σ_i p_i(0) Σ_j δ_ij ln δ_ij = 0.  (19.32)

Regardless of which microstate the system occupies, in the limit of zero observation time, the macrostate is determined by one and only one microstate [6-8]. This vanishing of entropy is also obtained in the limit of absolute zero temperature, in agreement with the third law of thermodynamics, since transitions between microstates are then forbidden. In the limit of infinite observation time, the equilibrium entropy is recovered:

lim_{t_obs→∞} S̄(t_obs) = −k Σ_i p_i(0) Σ_j p_j^eq ln p_j^eq = −k Σ_j p_j^eq ln p_j^eq.  (19.33)

With continuously broken ergodicity, there is no need to partition the phase space into components. By considering all possible starting configurations of the system and the actual transition rates between microstates, this approach can be applied to any arbitrary system with any degree of ergodicity or nonergodicity. Note that Eq. (19.31) reduces to the Palmer equation for entropy, Eq. (19.21), for systems that obey the assumptions made by Palmer.

19.5 Hierarchical Master Equation Approach

Implementation of the continuously broken ergodicity framework involves use of a hierarchical master equation algorithm to calculate both sets of probabilities: the microstate occupation probabilities, p_i, and the conditional probabilities, f_{i,j}. The hierarchical master equation approach proceeds by the following steps:
1. Choose an appropriate initial state for the system, such as an equilibrium distribution of microstate occupation probabilities at some starting temperature, T(0):

   p_i(0) = (1/Q) exp(−U_i / kT(0)).  (19.34)


2. Compute the kinetics of the microstate occupation probabilities according to the coupled set of master equations:

   dp_i(t)/dt = Σ_{j≠i}^Ω K_ji[T(t)] p_j(t) − Σ_{j≠i}^Ω K_ij[T(t)] p_i(t),  (19.35)

   where p_i(t) is the probability of the system occupying basin i at time t, K_ij is the transition rate from basin i to basin j, K_ji is the rate of the reverse transition, T(t) is the temperature as a function of time, and Ω is the total number of microstates. The full algorithm for solving Eq. (19.35) will be presented in Chapter 20.
3. For any time t′ where we wish to calculate entropy, select a specific microstate i. Construct a new set of master equations for the conditional probabilities f_{i,j}, using an initial condition of f_{i,i}(0) = 1 for the chosen microstate and f_{i,j≠i}(0) = 0 for all other basins, i.e., f_{i,j}(0) = δ_ij. Compute the kinetics of the system for the duration of the observation time, t_obs, using the master equations:

   df_{i,j}(t)/dt = Σ_{k≠j}^Ω K_kj[T(t′ + t)] f_{i,k}(t) − Σ_{k≠j}^Ω K_jk[T(t′ + t)] f_{i,j}(t).  (19.36)

The output from this step is f_{i,j}(t_obs), the conditional probability of the system reaching any basin j after starting in basin i and evolving for exactly the observation time.
4. Compute the conditional entropy, S_i, associated with starting in microstate i using Eq. (19.29) and the conditional probabilities f_{i,j}(t_obs) from Step 3 above.
5. Repeat Steps 3 and 4, starting from each of the individual basins i = {1, 2, …, Ω}.
6. Once all values of S_i are calculated, the expectation value of the entropy of the system is given by Eq. (19.30), where the values of p_i(t′) are taken from Step 2.
7. Repeat Steps 2 through 6 for all times of interest.

The above algorithm is a direct extension of the master equation approach introduced in Section 18.3 and fully detailed in Chapter 20. Steps 1 and 2 above are identical to that presented in Section 18.3, while Steps 3 through 6 account for continuously broken ergodicity. Whereas the p_i(t) values calculated in Step 2 give the occupation probabilities of the various microstates, Step 3 considers an instance of the system starting in one specific microstate. The solution to the set of master equations in Step 3 yields the


conditional probabilities of occupying each of the various microstates j after starting in microstate i and evolving for exactly the observation time, t_obs. No assumptions are made regarding accessible versus inaccessible states, since the kinetics are computed using the actual transition rates, K_ij and K_ji. In this manner, the above algorithm represents a generalization of the Palmer approach for broken ergodicity, in which microstates had been considered to be either completely accessible or inaccessible from each other, with no intermediate classification. Also, use of the master equations in Step 3 allows us to avoid partitioning the landscape into components; in this way, the above algorithm is generally applicable to any energy landscape and any temperature path, T(t). The output from Steps 3 through 5 above is a set of conditional entropies S_i corresponding to each possible starting configuration of the system, accounting for the actual observation time and transition rates. The expectation value of the entropy is the average of these S_i values weighted by their corresponding quench probabilities, p_i, as given in Eq. (19.30).
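The steps above can be illustrated with a minimal sketch for a hypothetical three-basin landscape at constant temperature. The basin energies, common barrier height, attempt frequency, and simple forward-Euler integrator below are all invented for illustration; they are not the full algorithm of Chapter 20:

```python
import math

k = 1.0                  # Boltzmann's constant (reduced units)
U = [0.0, 0.2, 0.5]      # hypothetical basin energies
B = 2.0                  # common barrier height (assumed)
nu = 1.0                 # attempt frequency (assumed)
T = 0.5                  # constant temperature for this sketch

def rate(i, j):
    # Arrhenius rate for the i -> j transition over the common barrier B;
    # this choice satisfies detailed balance with the Boltzmann distribution.
    return nu * math.exp(-(B - U[i]) / (k * T))

def evolve(f, dt, steps):
    # Forward-Euler integration of the master equations, Eqs. (19.35)/(19.36)
    n = len(f)
    for _ in range(steps):
        df = [sum(rate(m, j) * f[m] for m in range(n) if m != j)
              - sum(rate(j, m) for m in range(n) if m != j) * f[j]
              for j in range(n)]
        f = [fj + dt * dfj for fj, dfj in zip(f, df)]
    return f

def S_i(i, t_obs, dt=0.01):
    # Conditional entropy S_i(t_obs) of Eq. (19.29), starting from basin i
    f = [1.0 if j == i else 0.0 for j in range(len(U))]  # f_ij(0) = delta_ij
    f = evolve(f, dt, int(round(t_obs / dt)))
    return -k * sum(fj * math.log(fj) for fj in f if fj > 0.0)

# A short observation time leaves the system near its starting basin, so the
# conditional entropy is near zero; a long observation time recovers a
# positive, near-equilibrium entropy (the limits of Eqs. 19.27 and 19.28).
print(S_i(0, t_obs=0.1), S_i(0, t_obs=100.0))
```

For a full calculation, the same integration would be repeated from every starting basin and weighted by the occupation probabilities p_i(t′), per Eq. (19.30).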

19.6 Thermodynamic Implications of Broken Ergodicity

The transition from an ergodic to a nonergodic system is termed a partitioning process. A partitioning process involves a loss of certain degrees of freedom in the system through the imposition of a kinetic constraint. While this does not result in any change in the energy or enthalpy of the system, it must involve a loss of configurational entropy, since the observation time constraint confines the system to some subset of the total phase space. In other words, the macrostate of a nonergodic system is determined by fewer microstates compared to the ergodic system. Hence, the loss of ergodicity necessarily results in a loss of entropy. The opposite process, i.e., the restoration of ergodicity in the limit of long time, is called a unifying process. This process occurs spontaneously as the kinetic constraint is relaxed and must involve an increase in entropy, in accordance with the second law of thermodynamics [9-11].

The theory of continuously broken ergodicity is confirmed by a variety of experiments, including the specific heat spectroscopy experiments developed by Birge and Nagel [12], where the heat capacity of a liquid is measured over a wide range of frequencies. Birge and Nagel showed that liquids experience a continuous loss of their configurational degrees of freedom at higher measurement frequencies. Since observation time is inversely proportional to frequency, these experiments represent an example of an isothermal glass transition induced purely by changes in the

This book belongs to Alice Cartes ([email protected])

Copyright Elsevier 2023

358

Materials Kinetics

observation time. Without lowering the temperature of a liquid, the system undergoes a continuous glass transition, i.e., a continuous loss of ergodicity, just by lowering the observation time (i.e., increasing the measurement frequency). As with a traditional glass transition, i.e., by lowering temperature for a fixed observation time, the breakdown of ergodicity entails a loss of the configurational degrees of freedom, resulting in a loss of configurational contributions to thermodynamic properties such as heat capacity and thermal expansion coefficient. Subsequent experiments such as the shearmechanical spectroscopy work of Maggi et al. [13] have confirmed this isothermal glass transition behavior for mechanical properties such as shear modulus. In each case, the liquid system displays liquid-like response at low frequencies but solid-like glassy response at higher frequencies. Figure 19.5 plots the configurational entropy of a selenium glassforming system for a range of cooling rates covering twenty-five orders of magnitude [14], accounting for continuously broken ergodicity using the hierarchical master equation approach of Section 19.5. The dashed line shows the entropy of the supercooled liquid, i.e., that computed assuming ergodicity and using equilibrium values of the microstate occupation probabilities. At high temperatures (i.e., near the melting temperature of Tm ¼ 490 K), the system is in equilibrium and satisfies the condition of ergodicity. In this case, the entropy calculated with the conditional probability formalism of Sections 19.4 and 19.5 is equivalent to the equilibrium entropy. As the liquid is cooled at different rates, the system undergoes a glass transition. A faster cooling rate yields an earlier onset of the glass transition due to the shorter observation time. As shown in Figure 19.5, the earlier onset of the glass transition produces a greater loss of configurational entropy. 
Regardless of the cooling rate, the entropy of each glass approaches zero at absolute zero temperature. The entropy of the supercooled liquid is also zero in this limit, since only the lowest enthalpy microstate of the liquid is occupied. The fact that each of these systems has zero entropy in the limit of absolute zero temperature does not imply that they have the same configuration or an identical macrostate. Rather, it indicates that for any cooling rate, the macrostate of an individual system is determined by one and only one microstate at absolute zero. As shown in Figure 19.5(a), the macrostate of the system at low temperatures can vary dramatically depending on the cooling path, e.g., the molar volumes vary greatly depending on the particular microstate into which the glass becomes trapped.


Figure 19.5 (a) Volume-temperature diagrams for the glass transition of selenium liquid, with cooling rates covering twenty-five orders of magnitude. (b) Configurational entropy of the same systems. There is a continuous loss of configurational entropy corresponding to the gradual loss of ergodicity at the glass transition. The entropy is zero at absolute zero temperature for all of the glassy and supercooled liquid systems [14].


The vanishing of entropy at absolute zero is further elaborated by Reiss [8], who explains, “Besides the residual entropy at 0 K being an artifact resulting from apparent entropy measurements along at least partially irreversible paths, this specification is incompatible with a view of the Second Law that establishes entropy as a function of state. If it is a state function, it depends only on its measured state, not on the history of the system and certainly not upon its future. Since the system does not visit its alternative degenerate states during the time of measurement, it is unaware of these states, and the principle of causality forbids it to be affected by these states.” The requirement of zero entropy at absolute zero is also made explicit by Fermi [15], who wrote in his discussion of the third law of thermodynamics, “The entropy of every system at absolute zero can always be taken equal to zero.”

19.7 Summary

Ergodicity is a term introduced by Boltzmann indicating the equivalence of the time and ensemble averages of the properties of a system. A system that is initially ergodic can become nonergodic if its internal relaxation time scale becomes longer than the observation time over which properties are measured. The ratio of the internal relaxation time to the external observation time is called the Deborah number. A Deborah number less than unity is indicative of an ergodic system, while a Deborah number greater than unity indicates that there is insufficient time to equilibrate on the time scale of the experiment.

The physics of nonergodic systems was developed by Palmer, who coined the term "broken ergodicity." Palmer's treatment of broken ergodic systems groups microstates into components that satisfy the conditions of internal ergodicity and confinement. The condition of internal ergodicity states that the kinetics within a component are fast compared to the observation time, such that local equilibration can be achieved. The condition of confinement states that inter-component transitions are forbidden. Within Palmer's framework of broken ergodicity, the macroscopic properties of a system can be calculated by suitable averages over the properties of the individual components.

The division of an energy landscape into components upon loss of ergodicity is called a partitioning process. The partitioning process results in a loss of configurational entropy but no change in internal energy or enthalpy. The restoration of ergodicity in the limit of long time is known as a unifying process. This is a spontaneous process resulting in an increase in entropy, consistent with the second law of thermodynamics. Within Palmer's framework of broken ergodicity, the entropy of any system is zero at absolute zero temperature, consistent with the third law of thermodynamics.

Palmer's conditions of internal ergodicity and confinement are relaxed in the more general theory of continuously broken ergodicity. Continuously broken ergodicity accounts for the gradual loss of ergodicity in a system by considering two sets of probabilities: basin occupation probabilities and conditional probabilities. The conditional probabilities give the probability of the system transitioning to a new basin from a given starting basin after propagating for a specified observation time. Conditional probabilities can be calculated using a hierarchical master equation approach. Since the framework of continuously broken ergodicity does not involve defining components, it is completely general and applicable to any system and any associated energy landscape.

Exercises

(19.1) Give three examples of partitioning processes, which involve a loss of ergodicity due to the imposition of a kinetic constraint. What happens to the Deborah number during a partitioning process?

(19.2) Give three examples of unifying processes, which involve a restoration of ergodicity as a kinetic constraint is lifted. What happens to the Deborah number during a unifying process?

(19.3) Prove that the continuously broken ergodicity formulation of Section 19.4 reduces exactly to the Palmer formulation of broken ergodicity in Section 19.3 when the conditions of internal ergodicity and confinement are satisfied.

(19.4) Derive an expression for the complexity, I, of an arbitrary system within the framework of continuously broken ergodicity.

References

[1] R. P. Feynman, Statistical Mechanics, Westview, New York (1972).
[2] M. Reiner, "The Deborah Number," Phys. Today 17, 62 (1964).
[3] F. T. Bantilan Jr. and R. G. Palmer, "Magnetic Properties of a Model Spin Glass and the Failure of Linear Response Theory," J. Phys. F: Metal Phys. 11, 261 (1981).
[4] R. G. Palmer, "Broken Ergodicity," Adv. Phys. 31, 669 (1982).
[5] D. Dhar and J. L. Lebowitz, "Restricted Equilibrium Ensembles: Exact Equation of State of a Model Glass," Europhys. Lett. 92, 20008 (2010).
[6] J. C. Mauro, P. K. Gupta, and R. J. Loucks, "Continuously Broken Ergodicity," J. Chem. Phys. 126, 184511 (2007).
[7] D. Kivelson and H. Reiss, "Metastable Systems in Thermodynamics: Consequences, Role of Constraints," J. Phys. Chem. B 103, 8337 (1999).
[8] H. Reiss, "Apparent Entropy, Residual Entropy, Causality, Metastability, Constraints, and the Glass Transition," J. Non-Cryst. Solids 355, 617 (2009).
[9] S. Ishioka and N. Fuchikami, "Thermodynamics of Computing: Entropy of Nonergodic Systems," Chaos 11, 734 (2001).
[10] P. K. Gupta and J. C. Mauro, "The Configurational Entropy of Glass," J. Non-Cryst. Solids 355, 595 (2009).
[11] J. C. Mauro and M. M. Smedskjaer, "Statistical Mechanics of Glass," J. Non-Cryst. Solids 396, 41 (2014).
[12] N. O. Birge and S. R. Nagel, "Specific-Heat Spectroscopy of the Glass Transition," Phys. Rev. Lett. 54, 2674 (1985).
[13] C. Maggi, B. Jakobsen, T. Christensen, N. B. Olsen, and J. C. Dyre, "Supercooled Liquid Dynamics Studied via Shear-Mechanical Spectroscopy," J. Phys. Chem. B 112, 16320 (2008).
[14] J. C. Mauro, P. K. Gupta, R. J. Loucks, and A. K. Varshneya, "Non-Equilibrium Entropy of Glasses Formed by Continuous Cooling," J. Non-Cryst. Solids 355, 600 (2009).
[15] E. Fermi, Thermodynamics, Dover Publications (1956), p. 139.


CHAPTER 20

Master Equations

20.1 Transition State Theory

In Chapter 18 we introduced the concept of energy landscapes, which map the potential energy or enthalpy of a system as a function of all possible configurations of atoms. An energy landscape contains a multitude of minima known as inherent structures. Each inherent structure has an associated basin, which is the volume of configurational phase space that drains to that particular minimum via steepest descent. The fundamental assumption of the energy landscape approach is that the kinetics of the system can be divided into separate vibrational and configurational contributions, with high-frequency vibrations within a basin and lower-frequency transitions between adjacent basins. The rate of making a successful transition between basins is governed by the probability of overcoming the activation barrier between those basins. This rate is typically calculated using transition state theory.

Transition state theory was initially developed by Eugene Wigner in 1932 as a way to calculate the rates of chemical reactions [1,2]. For example, what is the rate of rearrangement of some molecule i into a different molecule j? As depicted in Figure 20.1, this molecular rearrangement or reaction rate, K_ij, involves overcoming an activation barrier, ΔU_ij. In transition state theory, the rate is the product of an attempt frequency (ν) and a Boltzmann probability of successfully overcoming the barrier:

$$K_{ij} = \nu \exp\left(-\frac{\Delta U_{ij}}{kT}\right), \tag{20.1}$$

where k is Boltzmann's constant and T is the absolute temperature. Eq. (20.1) was previously derived in Chapter 7 by considering a statistical mechanical treatment of a particle hopping from a potential energy well. In that case, the attempt frequency, ν, was the vibrational frequency of the particle within the potential energy well.

Figure 20.1 The reaction rate, K_ij, in transition state theory is the product of an attempt frequency (ν) with a Boltzmann probability of overcoming the activation barrier, ΔU_ij. The abscissa indicates the reaction coordinate from the initial state, i, to the final state, j.

However, in the more general interpretation of transition state theory, ν corresponds to any attempt frequency, which is often a cooperative vibration of many atoms in a molecule or in the larger system, e.g., along a normal mode of vibration. In either case, ΔU_ij gives the activation barrier that must be overcome for the system to change its configuration or, in the language of chemical engineering, for a reaction to occur. The application of transition state theory to chemical reaction kinetics will be covered in more detail in Chapter 25. In this chapter, we focus on using transition state theory to describe the transition rates within a system of master equations governing the kinetics of inter-basin transitions on an arbitrary energy landscape.

A particular challenge in solving a large system of master equations is the wide range of rate parameters, which often vary by many orders of magnitude. To overcome this problem of highly disparate time scales, we leverage the concept of broken ergodicity from Chapter 19 to partition the system into components, or metabasins, where the transition rates within a metabasin are fast compared to the observation time scale. In this manner, equilibrium statistical mechanics can be applied within each metabasin, and the transition rates between metabasins are calculated using a reduced set of master equations. The number of metabasins depends on the topography of the energy landscape, the temperature of the system, and the observation time scale. Using this metabasin approach, the integration time step for solving the master equations is governed by the observation time scale rather than by the fastest transition rate between basins. The technique is implemented in an open source code, KineticPy. Here, we will show an example application of this metabasin approach to access time scales covering twenty-five orders of magnitude.
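Eq. (20.1) is simple enough to evaluate directly. The short Python sketch below does so; the attempt frequency of 10^13 Hz and the barrier heights are assumed, illustrative values rather than numbers from the text.

```python
import math

K_B = 8.617333262e-5  # Boltzmann constant in eV/K

def tst_rate(delta_U, T, nu=1.0e13):
    """Transition state theory rate, Eq. (20.1): K_ij = nu * exp(-dU/kT).
    delta_U is the activation barrier in eV; nu ~ 1e13 Hz is a typical
    vibrational attempt frequency (an assumed value)."""
    return nu * math.exp(-delta_U / (K_B * T))

# Raising the barrier or lowering the temperature exponentially
# suppresses the transition rate:
print(tst_rate(0.5, 300.0) < tst_rate(0.5, 600.0) < tst_rate(0.25, 600.0))  # True
```

The exponential dependence on ΔU_ij/kT is what produces the enormously disparate time scales discussed below: a barrier change of a few tenths of an eV at room temperature shifts the rate by several orders of magnitude.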


20.2 Master Equations

While the equilibrium thermodynamic properties of a system can be calculated by applying ergodic statistical mechanics to an energy landscape, irreversible processes involve a departure from equilibrium. The kinetics of these irreversible processes are governed by configurational transitions among basins in the energy landscape. Following transition state theory in Eq. (20.1), an inter-basin transition involves overcoming an activation barrier. At high temperatures, there is plenty of thermal energy to overcome these barriers on a short time scale, such that equilibrium can be achieved. However, at lower temperatures, some of the transition rates become slow compared to the observation time scale.

Let us consider a potential energy landscape having Ω basins. As discussed in Chapter 18, an Ω × Ω matrix of potential energy values can be constructed,

$$\mathbf{U} = \begin{pmatrix} U_{11} & U_{12} & \cdots & U_{1\Omega} \\ U_{21} & U_{22} & \cdots & U_{2\Omega} \\ \vdots & \vdots & \ddots & \vdots \\ U_{\Omega 1} & U_{\Omega 2} & \cdots & U_{\Omega\Omega} \end{pmatrix}, \tag{20.2}$$

where the diagonal elements of the matrix (U_ii) are the potential energies of the inherent structures and the off-diagonal elements (U_ij, i ≠ j) are the potential energies of the lowest-energy transition points (i.e., first-order saddle points) between basin i and basin j. The U matrix is symmetric by construction, since the transition point energy between basin i and basin j is the same as that between j and i, i.e., U_ij = U_ji. While Eq. (20.2) is expressed in terms of a potential energy landscape, an analogous matrix could be constructed for an enthalpy landscape, as in Eq. (18.32).

The macroscopic properties of the system can be calculated from the probabilities of occupying the various basins in the energy landscape. At equilibrium, the basin occupation probabilities follow a Boltzmann distribution:

$$p_{i,\mathrm{eq}} = \frac{1}{Q} \exp\left(-\frac{U_i}{kT}\right), \tag{20.3}$$


where p_i,eq is the equilibrium probability of occupying basin i, U_i ≡ U_ii is the potential energy of the inherent structure in basin i, and Q is the partition function,

$$Q = \sum_{i=1}^{\Omega} \exp\left(-\frac{U_i}{kT}\right), \tag{20.4}$$

which normalizes the probabilities such that:

$$\sum_{i=1}^{\Omega} p_{i,\mathrm{eq}} = 1. \tag{20.5}$$

The evolution of the basin occupation probabilities over time, t, can be calculated using a set of coupled differential equations called master equations [2–7]:

$$\frac{d}{dt} p_i(t) = \sum_{j \neq i}^{\Omega} K_{ji}[T(t)]\, p_j(t) - \sum_{j \neq i}^{\Omega} K_{ij}[T(t)]\, p_i(t), \tag{20.6}$$

where p_i(t) is the probability of the system occupying basin i at time t, K_ij is the transition rate from basin i to basin j, and K_ji is the rate of the reverse transition from j to i. The first term on the right-hand side of Eq. (20.6) gives the combined rate of transitioning into basin i from all other basins j, and the second term gives the combined rate of transitioning out of basin i and into any other basin j. Hence, the net change in the probability of occupying basin i is given by the difference between the two terms. There is one master equation governing the evolution of each basin occupation probability, p_i(t), and at all times t the master equations satisfy

$$\sum_{i=1}^{\Omega} \frac{d}{dt} p_i(t) = 0, \tag{20.7}$$

such that the basin occupation probabilities always fulfill the normalization condition

$$\sum_{i=1}^{\Omega} p_i(t) = 1 \tag{20.8}$$

at all times t.
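To make Eqs. (20.6)–(20.8) concrete, the sketch below integrates the master equations for a hypothetical three-basin landscape using simple forward-Euler steps, in reduced units with invented energies (the KineticPy implementation discussed in this chapter is, of course, more sophisticated). Because the rates obey detailed balance, the long-time solution recovers the equilibrium Boltzmann distribution of Eq. (20.3), and the total probability obeys the normalization of Eq. (20.8) throughout.

```python
import numpy as np

k, T = 1.0, 1.0  # Boltzmann constant and temperature, reduced units (assumption)

# Hypothetical 3-basin landscape: inherent-structure energies U_i and a
# symmetric matrix of transition-point (saddle) energies U_ij, as in Eq. (20.2).
U = np.array([0.0, 0.5, 1.0])
U_saddle = np.array([[0.0, 2.0, 2.5],
                     [2.0, 0.0, 2.2],
                     [2.5, 2.2, 0.0]])
nu = 1.0  # attempt frequency (reduced units)

# Transition rate matrix, Eq. (20.10): K_ij = nu * exp(-(U_ij - U_i)/kT)
K = np.zeros((3, 3))
for i in range(3):
    for j in range(3):
        if i != j:
            K[i, j] = nu * np.exp(-(U_saddle[i, j] - U[i]) / (k * T))

# Forward-Euler integration of the master equations, Eq. (20.6)
p = np.array([1.0, 0.0, 0.0])  # start entirely in basin 1
dt = 0.05
for _ in range(20000):
    influx = K.T @ p              # sum_j K_ji p_j
    outflux = K.sum(axis=1) * p   # (sum_j K_ij) p_i
    p = p + dt * (influx - outflux)

# Normalization (Eq. 20.8) is preserved, and the long-time solution
# approaches the equilibrium Boltzmann distribution (Eq. 20.3):
p_eq = np.exp(-U / (k * T))
p_eq /= p_eq.sum()
print(np.allclose(p.sum(), 1.0), np.allclose(p, p_eq, atol=1e-5))  # True True
```

The fixed-step Euler scheme is exactly the weakness the metabasin approach of Section 20.4 addresses: the step size here is limited by the fastest rate in K, which is untenable when rates span many orders of magnitude.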


As we will see in Section 20.5 on partitioning of the landscape, it is convenient to express the transition rates, K_ij, from Eq. (20.6) as elements of an Ω × Ω matrix:

$$\mathbf{K} = \begin{pmatrix} 0 & K_{12} & K_{13} & \cdots & K_{1\Omega} \\ K_{21} & 0 & K_{23} & \cdots & K_{2\Omega} \\ K_{31} & K_{32} & 0 & \cdots & K_{3\Omega} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ K_{\Omega 1} & K_{\Omega 2} & K_{\Omega 3} & \cdots & 0 \end{pmatrix}. \tag{20.9}$$

Using transition state theory, each element of the transition rate matrix can be calculated as

$$K_{ij}[T(t)] = \nu_{ij} \exp\left(-\frac{U_{ij} - U_i}{kT(t)}\right), \tag{20.10}$$

where ν_ij is the vibrational attempt frequency for the i → j transition. Whereas the potential energy matrix U of Eq. (20.2) is symmetric, the transition rate matrix K of Eq. (20.9) is asymmetric, since K_ij ≠ K_ji for any U_i ≠ U_j.

Using the solution of the master equations in Eq. (20.6), the evolution of a macroscopic property, A, of the system can be calculated as a weighted average over the time-dependent basin occupation probabilities:

$$A(t) = \sum_{i=1}^{\Omega} A_i\, p_i(t), \tag{20.11}$$

where A_i is the value of property A when the system occupies basin i. For example, the evolution of the potential energy of the system at any time t can be calculated by

$$U(t) = \sum_{i=1}^{\Omega} U_i\, p_i(t), \tag{20.12}$$

where U_i is the potential energy of inherent structure i.

In the remainder of the chapter, we will derive the approach for solving the set of master equations in Eq. (20.6). In Section 20.3, we incorporate degeneracy factors into the master equations to account for the number of degenerate basins in the system. In Section 20.4, we group the sets of degenerate basins into metabasins and then derive a reduced set of master equations governing inter-metabasin kinetics. In Section 20.5, we discuss


the process for partitioning the energy landscape into metabasins which satisfy the condition of internal ergodicity. Finally, we introduce the KineticPy code which implements this algorithm and show example calculations for solving the kinetics of a system over arbitrary time scales.

20.3 Degenerate Microstates

Following Mauro et al. [7], the first step is to account for the degeneracy of microstates, i.e., the number of equivalent microstates, within the master equations. Let us consider the simple example of Figure 20.2, where there are five distinct microstates (basins) labeled 1–5. The complete set of master equations for the five microstates is:

$$\begin{aligned}
\frac{dp_1}{dt} &= K_{21} p_2 + K_{31} p_3 + K_{41} p_4 + K_{51} p_5 - (K_{12} + K_{13} + K_{14} + K_{15}) p_1 \\
\frac{dp_2}{dt} &= K_{12} p_1 + K_{32} p_3 + K_{42} p_4 + K_{52} p_5 - (K_{21} + K_{23} + K_{24} + K_{25}) p_2 \\
\frac{dp_3}{dt} &= K_{13} p_1 + K_{23} p_2 + K_{43} p_4 + K_{53} p_5 - (K_{31} + K_{32} + K_{34} + K_{35}) p_3 \\
\frac{dp_4}{dt} &= K_{14} p_1 + K_{24} p_2 + K_{34} p_3 + K_{54} p_5 - (K_{41} + K_{42} + K_{43} + K_{45}) p_4 \\
\frac{dp_5}{dt} &= K_{15} p_1 + K_{25} p_2 + K_{35} p_3 + K_{45} p_4 - (K_{51} + K_{52} + K_{53} + K_{54}) p_5.
\end{aligned} \tag{20.13}$$

Figure 20.2 Simple potential energy landscape with two sets of degenerate basins (g_A = 3 and g_B = 2).


The number of master equations can be reduced by accounting for the degeneracy of the microstates. From Figure 20.2, basins 1–3 are degenerate, as are basins 4 and 5. Let us denote the two sets of unique microstates using capital Roman letters, A and B, with the degeneracies given by g_A = 3 and g_B = 2, respectively. The number of unique transition rates in Eq. (20.13) can also be reduced. Owing to the degeneracy of the microstates, there are only four unique rate parameters:

$$\begin{aligned}
K_{12} &= K_{21} = K_{13} = K_{31} = K_{23} = K_{32} \\
K_{14} &= K_{24} = K_{34} = K_{15} = K_{25} = K_{35} \\
K_{41} &= K_{42} = K_{43} = K_{51} = K_{52} = K_{53} \\
K_{45} &= K_{54}.
\end{aligned} \tag{20.14}$$

Assuming the system starts in equilibrium, the probabilities of the degenerate microstates must always be equal to each other, i.e.,

$$p_1(t) = p_2(t) = p_3(t), \qquad p_4(t) = p_5(t), \tag{20.15}$$

at all times, t. Let us define the total probabilities of occupying the sets of degenerate basins as

$$P_A = \sum_{i \in A} p_i = g_A\, p_{i \in A}, \qquad P_B = \sum_{i \in B} p_i = g_B\, p_{i \in B}. \tag{20.16}$$

The corresponding master equations are given by:

$$\frac{dP_A}{dt} = \sum_{i \in A} \frac{dp_i}{dt} = \frac{dp_1}{dt} + \frac{dp_2}{dt} + \frac{dp_3}{dt}, \qquad \frac{dP_B}{dt} = \sum_{i \in B} \frac{dp_i}{dt} = \frac{dp_4}{dt} + \frac{dp_5}{dt}. \tag{20.17}$$

Substituting in the original master equations from Eq. (20.13):

$$\begin{aligned}
\frac{dP_A}{dt} &= (g_A - 1) K_{21} (p_1 + p_2 + p_3) + g_A K_{41} (p_4 + p_5) \\
&\quad - (g_A - 1) K_{12} (p_1 + p_2 + p_3) - g_B K_{14} (p_1 + p_2 + p_3) \\
\frac{dP_B}{dt} &= g_B K_{14} (p_1 + p_2 + p_3) + (g_B - 1) K_{54} (p_4 + p_5) \\
&\quad - g_A K_{41} (p_4 + p_5) - (g_B - 1) K_{45} (p_4 + p_5).
\end{aligned} \tag{20.18}$$

These equations can be simplified as:

$$\frac{dP_A}{dt} = g_A K_{41} P_B - g_B K_{14} P_A, \qquad \frac{dP_B}{dt} = g_B K_{14} P_A - g_A K_{41} P_B. \tag{20.19}$$

Let us now define effective transition rates between the sets of degenerate basins as:

$$\widetilde{K}_{AB} = g_B K_{14}, \qquad \widetilde{K}_{BA} = g_A K_{41}. \tag{20.20}$$

The master equations of Eq. (20.19) can then be simplified as

$$\frac{dP_A}{dt} = \widetilde{K}_{BA} P_B - \widetilde{K}_{AB} P_A, \qquad \frac{dP_B}{dt} = \widetilde{K}_{AB} P_A - \widetilde{K}_{BA} P_B, \tag{20.21}$$

where the degeneracy factors, g_A and g_B, have been incorporated into the probabilities and effective rate parameters. Based on Eq. (20.21), the generalized form of the master equations accounting for microstate degeneracy can be written as:

$$\frac{dP_A}{dt} = \sum_{B \neq A} \left( \widetilde{K}_{BA} P_B - \widetilde{K}_{AB} P_A \right). \tag{20.22}$$
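The reduction above can be checked numerically. The sketch below integrates the reduced two-state master equations of Eq. (20.21) for the landscape of Figure 20.2, using hypothetical values for the two independent microstate rates; at steady state the forward and reverse fluxes between the sets balance, K̃_AB P_A = K̃_BA P_B.

```python
# Hypothetical microstate rates for Figure 20.2 (g_A = 3, g_B = 2):
K14, K41 = 0.2, 0.5   # any A -> B rate, any B -> A rate
gA, gB = 3, 2

# Effective set-to-set rates, Eq. (20.20)
K_AB = gB * K14   # 0.4
K_BA = gA * K41   # 1.5

# Forward-Euler integration of the reduced master equations, Eq. (20.21)
PA, PB = 1.0, 0.0
dt = 0.001
for _ in range(20000):
    dPA = K_BA * PB - K_AB * PA
    PA, PB = PA + dt * dPA, PB - dt * dPA

# Fluxes balance at steady state: K_AB * PA == K_BA * PB
print(round(PA, 4), round(PB, 4))  # 0.7895 0.2105
```

Only two coupled equations and two rate parameters remain, in place of the five equations and many redundant rates of Eq. (20.13); this is exactly the payoff of folding the degeneracies into P_A, P_B and the effective rates.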

20.4 Metabasin Approach

The next step is to rewrite the master equations in terms of a reduced set of metabasins, where each metabasin contains a group of microstates that are mutually accessible at a given temperature and for a given observation time [7]. Hence, equilibrium statistical mechanics can be applied within each metabasin. The metabasins themselves are separated from each other by larger activation barriers, such that inter-metabasin transitions occur on a slower time scale. The kinetics of these inter-metabasin transitions are calculated using a reduced set of master equations, i.e., one master equation for each metabasin. This separation of time scales is shown schematically in Figure 20.3.


Figure 20.3 Energy landscape with two metabasins, α and β, where the condition of internal ergodicity is satisfied among the basins within each metabasin. The kinetics of transitions between metabasins are described using a reduced set of master equations.

Let us denote the metabasins using lowercase Greek letters (α, β, γ). The probability of occupying a given metabasin, f_α, is the sum of the probabilities of occupying the individual microstates within that metabasin, i.e.,

$$f_\alpha = \sum_{A \in \alpha} P_A = \sum_{A \in \alpha} \sum_{i \in A} p_i. \tag{20.23}$$

The probabilities must satisfy the normalization condition:

$$\sum_\alpha f_\alpha = \sum_\alpha \sum_{A \in \alpha} \sum_{i \in A} p_i = 1. \tag{20.24}$$

Since the metabasins are chosen to satisfy the condition of internal ergodicity, the probability distribution of microstates within a metabasin follows equilibrium statistical mechanics over the restricted ensemble of the metabasin. The partition function restricted to any metabasin α is given by:

$$Q_\alpha = \sum_{A \in \alpha} g_A \exp\left(-\frac{U_{i \in A}}{kT}\right). \tag{20.25}$$

Within a metabasin, the microstate probabilities follow equilibrium statistical mechanics over this restricted ensemble, such that

$$p_{i \in \alpha} = \frac{f_\alpha}{Q_\alpha} \exp\left(-\frac{U_{i \in \alpha}}{kT}\right), \tag{20.26}$$

where p_{i∈α} is the probability of occupying any individual microstate i contained within metabasin α. Accounting for degenerate microstates within a metabasin:

$$P_{A \in \alpha} = g_{A \in \alpha}\, p_{i \in A} = g_{A \in \alpha} \frac{f_\alpha}{Q_\alpha} \exp\left(-\frac{U_{i \in A}}{kT}\right), \tag{20.27}$$

where g_{A∈α} is the degeneracy of a group of microstates, labeled A as in Section 20.3. Eq. (20.27) can be rewritten as

$$P_{A \in \alpha} = \frac{f_\alpha}{Q_\alpha} \exp\left(-\frac{F_{A \in \alpha}}{kT}\right), \tag{20.28}$$

where F_{A∈α} is the free energy of the set of degenerate basins, A. The free energy is defined as

$$F_{A \in \alpha} = U_{i \in A} - T S_{A \in \alpha}, \tag{20.29}$$

where S_{A∈α} is the Boltzmann entropy of the degenerate microstates:

$$S_A = k \ln g_A. \tag{20.30}$$
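The restricted-ensemble statistics of Eqs. (20.25)–(20.27) amount to a Boltzmann distribution over the degenerate sets inside one metabasin, scaled by the metabasin's total occupation probability f_α. A minimal Python sketch, with invented energies, degeneracies, and reduced units (the function name is hypothetical):

```python
import numpy as np

k, T = 1.0, 1.0  # reduced units (assumption)

def set_probabilities(f_alpha, U, g):
    """P_{A in alpha} for the degenerate basin sets of one metabasin.
    Q_alpha follows Eq. (20.25); the probabilities follow Eq. (20.27)."""
    U, g = np.asarray(U, float), np.asarray(g, float)
    boltz = g * np.exp(-U / (k * T))
    Q_alpha = boltz.sum()            # restricted partition function
    return f_alpha * boltz / Q_alpha

# Metabasin with two degenerate sets (e.g., A and B of Figure 20.4):
P = set_probabilities(f_alpha=0.6, U=[0.0, 0.3], g=[3, 2])
print(P.sum())  # the set probabilities sum to f_alpha = 0.6
```

Note that the degeneracy enters exactly as the entropic term of Eqs. (20.28)–(20.30): multiplying by g_A is equivalent to lowering the free energy by TS_A = kT ln g_A.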

Let us consider the simple two-metabasin energy landscape depicted in Figure 20.4, which consists of five sets of degenerate basins.

Figure 20.4 Simple two-metabasin landscape. Metabasin α consists of two sets of degenerate basins: A and B. Metabasin β consists of three sets of degenerate basins: C, D, and E. Transitions are fast within each metabasin, such that the condition of internal ergodicity applies. The transitions between metabasins occur on a slower time scale.

Without considering the partitioning of the system into metabasins, the kinetics of the system can be described using a set of five master equations:

$$\begin{aligned}
\frac{dP_A}{dt} &= \widetilde{K}_{BA} P_B + \widetilde{K}_{CA} P_C + \widetilde{K}_{DA} P_D + \widetilde{K}_{EA} P_E - (\widetilde{K}_{AB} + \widetilde{K}_{AC} + \widetilde{K}_{AD} + \widetilde{K}_{AE}) P_A \\
\frac{dP_B}{dt} &= \widetilde{K}_{AB} P_A + \widetilde{K}_{CB} P_C + \widetilde{K}_{DB} P_D + \widetilde{K}_{EB} P_E - (\widetilde{K}_{BA} + \widetilde{K}_{BC} + \widetilde{K}_{BD} + \widetilde{K}_{BE}) P_B \\
\frac{dP_C}{dt} &= \widetilde{K}_{AC} P_A + \widetilde{K}_{BC} P_B + \widetilde{K}_{DC} P_D + \widetilde{K}_{EC} P_E - (\widetilde{K}_{CA} + \widetilde{K}_{CB} + \widetilde{K}_{CD} + \widetilde{K}_{CE}) P_C \\
\frac{dP_D}{dt} &= \widetilde{K}_{AD} P_A + \widetilde{K}_{BD} P_B + \widetilde{K}_{CD} P_C + \widetilde{K}_{ED} P_E - (\widetilde{K}_{DA} + \widetilde{K}_{DB} + \widetilde{K}_{DC} + \widetilde{K}_{DE}) P_D \\
\frac{dP_E}{dt} &= \widetilde{K}_{AE} P_A + \widetilde{K}_{BE} P_B + \widetilde{K}_{CE} P_C + \widetilde{K}_{DE} P_D - (\widetilde{K}_{EA} + \widetilde{K}_{EB} + \widetilde{K}_{EC} + \widetilde{K}_{ED}) P_E.
\end{aligned} \tag{20.31}$$

As shown in Figure 20.4, the groups of degenerate basins A and B fall within metabasin α, whereas C, D, and E are all contained within metabasin β. Hence, the total metabasin occupation probabilities, f_α and f_β, are:

$$f_\alpha = P_A + P_B, \qquad f_\beta = P_C + P_D + P_E. \tag{20.32}$$

The corresponding master equations for the metabasins are therefore given by:

$$\begin{aligned}
\frac{df_\alpha}{dt} &= (\widetilde{K}_{CA} + \widetilde{K}_{CB}) P_C + (\widetilde{K}_{DA} + \widetilde{K}_{DB}) P_D + (\widetilde{K}_{EA} + \widetilde{K}_{EB}) P_E \\
&\quad - (\widetilde{K}_{AC} + \widetilde{K}_{AD} + \widetilde{K}_{AE}) P_A - (\widetilde{K}_{BC} + \widetilde{K}_{BD} + \widetilde{K}_{BE}) P_B \\
\frac{df_\beta}{dt} &= (\widetilde{K}_{AC} + \widetilde{K}_{AD} + \widetilde{K}_{AE}) P_A + (\widetilde{K}_{BC} + \widetilde{K}_{BD} + \widetilde{K}_{BE}) P_B \\
&\quad - (\widetilde{K}_{CA} + \widetilde{K}_{CB}) P_C - (\widetilde{K}_{DA} + \widetilde{K}_{DB}) P_D - (\widetilde{K}_{EA} + \widetilde{K}_{EB}) P_E.
\end{aligned} \tag{20.33}$$

From Section 20.3, we note that

$$\widetilde{K}_{CA} = g_A K_{i \in C,\, j \in A}. \tag{20.34}$$


Let us introduce the following simplified notation, which will be used in the remainder of this section:

$$K_{CA} \equiv K_{i \in C,\, j \in A} = \frac{\widetilde{K}_{CA}}{g_A}. \tag{20.35}$$

For the landscape in Figure 20.4, there are only five unique transition rates:

$$\begin{aligned}
K_{CA} &= K_{CB} \\
K_{DA} &= K_{DB} \\
K_{EA} &= K_{EB} \\
K_{AC} &= K_{AD} = K_{AE} \\
K_{BC} &= K_{BD} = K_{BE}.
\end{aligned} \tag{20.36}$$

Hence, the metabasin master equations from Eq. (20.33) can be simplified as:

$$\begin{aligned}
\frac{df_\alpha}{dt} &= (g_A + g_B) K_{CA} P_C + (g_A + g_B) K_{DA} P_D + (g_A + g_B) K_{EA} P_E \\
&\quad - (g_C + g_D + g_E) K_{AC} P_A - (g_C + g_D + g_E) K_{BC} P_B \\
\frac{df_\beta}{dt} &= (g_C + g_D + g_E) K_{AC} P_A + (g_C + g_D + g_E) K_{BC} P_B \\
&\quad - (g_A + g_B) K_{CA} P_C - (g_A + g_B) K_{DA} P_D - (g_A + g_B) K_{EA} P_E.
\end{aligned} \tag{20.37}$$

Let us now define the total number of microstates within a metabasin, n_α, as the sum of the degeneracies within that metabasin:

$$n_\alpha = \sum_{A \in \alpha} g_A. \tag{20.38}$$

With this definition, Eq. (20.37) can be further simplified as:

$$\begin{aligned}
\frac{df_\alpha}{dt} &= n_\alpha K_{CA} P_C + n_\alpha K_{DA} P_D + n_\alpha K_{EA} P_E - n_\beta K_{AC} P_A - n_\beta K_{BC} P_B \\
\frac{df_\beta}{dt} &= n_\beta K_{AC} P_A + n_\beta K_{BC} P_B - n_\alpha K_{CA} P_C - n_\alpha K_{DA} P_D - n_\alpha K_{EA} P_E.
\end{aligned} \tag{20.39}$$


Next, we define the inter-metabasin transition rates as:

$$w_{\alpha\beta} = \frac{n_\beta}{f_\alpha} (K_{AC} P_A + K_{BC} P_B), \qquad w_{\beta\alpha} = \frac{n_\alpha}{f_\beta} (K_{CA} P_C + K_{DA} P_D + K_{EA} P_E), \tag{20.40}$$

where w_αβ is the total transition rate from microstates within metabasin α to microstates within metabasin β, and w_βα is the total transition rate from β to α. Using the inter-metabasin transition rates from Eq. (20.40), Eq. (20.39) becomes simply:

$$\frac{df_\alpha}{dt} = w_{\beta\alpha} f_\beta - w_{\alpha\beta} f_\alpha, \qquad \frac{df_\beta}{dt} = w_{\alpha\beta} f_\alpha - w_{\beta\alpha} f_\beta. \tag{20.41}$$

This enables solving for the kinetics of the system on the time scale of the inter-metabasin transitions rather than the time scale of the fastest inter-basin transition.

The generalized master equations describing the kinetics of inter-metabasin transitions are:

$$\frac{df_\alpha}{dt} = \sum_{\beta \neq \alpha} (w_{\beta\alpha} f_\beta - w_{\alpha\beta} f_\alpha), \tag{20.42}$$

where the inter-metabasin transition rates are defined by:

$$w_{\alpha\beta} = \frac{n_\beta}{f_\alpha} \sum_{A \in \alpha} K_{A,\, B \in \beta}\, P_A. \tag{20.43}$$

Since the metabasins are defined based on the observation time scale, the integration time step for solving Eq. (20.42) is determined by the natural time scale of the experiment.
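For the two-metabasin example of Figure 20.4, Eqs. (20.40) and (20.41) can be sketched as follows; all rate and probability values are hypothetical, chosen only to show that the metabasin master equations conserve total probability.

```python
# Hypothetical set-level rates and probabilities for Figure 20.4
KAC, KBC = 0.10, 0.05             # alpha -> beta microstate rates
KCA, KDA, KEA = 0.20, 0.15, 0.10  # beta -> alpha microstate rates
PA, PB = 0.4, 0.2                 # degenerate sets in metabasin alpha
PC, PD, PE = 0.2, 0.1, 0.1        # degenerate sets in metabasin beta
n_alpha, n_beta = 5, 7            # microstate counts, Eq. (20.38)

f_alpha = PA + PB                 # Eq. (20.32)
f_beta = PC + PD + PE

# Inter-metabasin transition rates, Eq. (20.40)
w_ab = (n_beta / f_alpha) * (KAC * PA + KBC * PB)
w_ba = (n_alpha / f_beta) * (KCA * PC + KDA * PD + KEA * PE)

# The metabasin master equations, Eq. (20.41), conserve f_alpha + f_beta:
df_alpha = w_ba * f_beta - w_ab * f_alpha
df_beta = w_ab * f_alpha - w_ba * f_beta
print(df_alpha + df_beta)  # 0.0
```

Only the two slow rates w_αβ and w_βα appear in the time stepping; the fast intra-metabasin rates have been absorbed into the equilibrium set probabilities P_A, ..., P_E, which is what permits the large integration time step.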

20.5 Partitioning of the Landscape

This leaves the important question of how to partition the system into metabasins that satisfy the condition of internal ergodicity. The partitioning depends on:
• Topography of the energy landscape;
• Temperature of the system;
• Observation time (typically related to dT/dt).


To understand the steps involved in the partitioning process, let us consider the simple seven-basin potential energy landscape in Figure 20.5. The corresponding disconnectivity graph representation of the landscape is given in Figure 20.6. The first step is to define the full transition rate matrix, K, for every transition between microstates in the system:

$$\mathbf{K} = \begin{pmatrix}
0 & K_{12} & K_{13} & K_{14} & K_{15} & K_{16} & K_{17} \\
K_{21} & 0 & K_{23} & K_{24} & K_{25} & K_{26} & K_{27} \\
K_{31} & K_{32} & 0 & K_{34} & K_{35} & K_{36} & K_{37} \\
K_{41} & K_{42} & K_{43} & 0 & K_{45} & K_{46} & K_{47} \\
K_{51} & K_{52} & K_{53} & K_{54} & 0 & K_{56} & K_{57} \\
K_{61} & K_{62} & K_{63} & K_{64} & K_{65} & 0 & K_{67} \\
K_{71} & K_{72} & K_{73} & K_{74} & K_{75} & K_{76} & 0
\end{pmatrix}. \tag{20.44}$$

Figure 20.5 Seven-basin potential energy landscape with varying inherent structure and transition point energies.

Figure 20.6 Disconnectivity graph representation of the seven-basin potential energy landscape from Figure 20.5.

The individual transition rates are given by transition state theory:

$$K_{ij} = \nu_{ij} \exp\left(-\frac{\Delta U_{ij}}{kT}\right). \tag{20.45}$$

Accounting for degeneracy, the transition rates become:

$$K_{ij} = \nu_{ij}\, g_j \exp\left(-\frac{\Delta U_{ij}}{kT}\right). \tag{20.46}$$

This can be written equivalently as

$$K_{ij} = \nu_{ij} \exp\left(-\frac{\Delta U_{ij}}{kT} + \ln g_j\right). \tag{20.47}$$

Since the degeneracy, g_j, can vary by orders of magnitude, the logarithmic representation of the degeneracy in Eq. (20.47) is usually more convenient.

Next, we must define a threshold for what constitutes a fast transition versus a slow transition. The threshold is used to determine which microstates can be grouped together into metabasins that guarantee internal ergodicity on the relevant time scale of the experiment. For example, a convenient threshold rate, K_th, may be proportional to the magnitude of the rate of change of the temperature of the system:

$$K_{\mathrm{th}} \propto \left| \frac{dT}{dt} \right|. \tag{20.48}$$

Transition rates satisfying K_ij > K_th are assumed to equilibrate fully during the time step, dt. Transition rates below the threshold are too slow to equilibrate during dt.
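The advantage of the logarithmic form in Eq. (20.47) is numerical: for a many-particle system the degeneracy g_j can be of order exp(N), far beyond floating-point range, while ln g_j remains a modest number. A sketch with assumed, illustrative values:

```python
import math

def rate_with_degeneracy(nu, dU_over_kT, ln_g):
    """Degeneracy-weighted rate, Eq. (20.47): K_ij = nu*exp(-dU/kT + ln g_j).
    Working with ln(g_j) directly keeps the exponent in range even when
    g_j itself would overflow a float."""
    return nu * math.exp(-dU_over_kT + ln_g)

# g_j = exp(1000) overflows as a float (math.exp(1000) raises OverflowError),
# but the rate itself is perfectly computable because only the difference
# of the two large exponents matters:
K = rate_with_degeneracy(nu=1.0e13, dU_over_kT=1010.0, ln_g=1000.0)
print(K == 1.0e13 * math.exp(-10.0))  # True
```

This is the same log-space trick used throughout statistical mechanics codes for partition functions and entropies of large systems.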


Hence, the threshold effectively divides the transition rates into two groups, viz., above and below Kth. This information is used to determine the metabasin partitioning. If none of the transition rates, Kij, fall below Kth, then the entire system is in equilibrium, and no metabasin partitioning is necessary. The initial partitioning is accomplished by dividing the system from a single metabasin into two metabasins. Additional metabasins are subsequently formed, if necessary, through further partitioning of one of the existing metabasins.

The partitioning is accomplished by first considering the slowest rates below Kth. Recall that the first index i in the rate Kij refers to the initial state for a given transition to some other state j. The rates falling below the threshold may involve one or more different initial states i. When performing the partitioning, one initial state i should be considered at a time, starting with the slowest rates. For example, if the set of rates {K21, K25, K34, K35} all fall below Kth, we can consider either the subset {K21, K25} = K2{j} for i = 2 or the subset {K34, K35} = K3{j} for i = 3. For a given value of i, we can denote the subset of rates below Kth as Ki{j}, where {j} denotes the set of final microstates, j. Hence, {j} gives the set of microstates that should be part of a separate metabasin from the initial state i. All of the remaining microstates connected by rates above the threshold, Kth, should be part of the same metabasin as i.

This can be accomplished by crossing out the jth rows and jth columns of the original K matrix (i.e., including all transitions involving {j}). The K matrix then divides into two smaller matrices, which denote two separate metabasins. One matrix is composed of all elements of the original K matrix that were not crossed out; the other matrix is composed of all elements that were crossed out twice, i.e., where both the row and the column fall into {j}. All other elements of the original K matrix, i.e., those crossed out only once, are discarded because they no longer contribute to the metabasin partitioning process. This process is repeated recursively for all other values of i where Ki{j} < Kth, proceeding from the lowest to the highest values of Ki{j}. This may cause the metabasin matrices to subdivide into new metabasins (i.e., new matrices). This entire algorithm is repeated at each new temperature, T.

Since this algorithm is a nonintuitive process, let us illustrate with the simple example of the energy landscape depicted in Figure 20.6. The full 7 × 7 transition rate matrix, K, is given in Eq. (20.44). This landscape has
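One step of this crossing-out procedure can be sketched in a few lines of Python. This is a simplified, non-recursive illustration (not the full KineticPy algorithm): it finds the initial state i carrying the slowest sub-threshold rate, collects its slow destinations {j}, and splits the rate matrix into the two metabasin blocks. The 4-state rate matrix at the bottom is purely illustrative.

```python
def split_once(K, states, K_th):
    """Perform one metabasin partitioning step on rate matrix K
    (a dict-of-dicts, K[a][b] = rate from a to b over `states`)."""
    slow = [(K[a][b], a, b) for a in states for b in states
            if a != b and K[a][b] < K_th]
    if not slow:
        return None  # fully ergodic: a single metabasin suffices
    _, i, _ = min(slow)                        # slowest rate fixes i
    js = {b for rate, a, b in slow if a == i}  # the set {j} for this i
    keep = [s for s in states if s not in js]  # same metabasin as i
    cut = [s for s in states if s in js]       # doubly-crossed-out block
    def block(group):
        return {a: {b: K[a][b] for b in group if b != a} for a in group}
    return (keep, block(keep)), (cut, block(cut))

states = [1, 2, 3, 4]
K = {a: {b: 10.0 for b in states if b != a} for a in states}
K[1][3], K[1][4] = 0.1, 0.2   # slow escapes from microstate 1
(alpha, K_alpha), (beta, K_beta) = split_once(K, states, K_th=1.0)
# alpha == [1, 2] stays with i = 1; beta == [3, 4] forms a new metabasin
```

Repeating `split_once` on each resulting block, from the slowest remaining sub-threshold rates upward, reproduces the recursive subdivision described in the text.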


Figure 20.7 Ranking of the activation barriers for each inter-basin transition occurring in the potential energy landscape of Figure 20.6.

7 × 6 = 42 possible transitions between pairs of basins, with 10 unique activation barriers, which are ranked in Figure 20.7. Assuming the same vibrational frequency along all transition pathways, the Kij rates can be ranked as indicated in Figure 20.8. At sufficiently high temperatures, the threshold rate, Kth, falls below all of the transition rates, Kij, such that the system consists of a single metabasin, which indicates a fully ergodic system. As the system is cooled, the first set of Kij values, {K51, K52, K53, K71, K72, K73}, falls below the threshold rate, Kth, as indicated by the dotted line in Figure 20.8.

The first partitioning of the landscape from a single metabasin into two metabasins is illustrated in Figure 20.9. From the set of transition rates, Kij, falling below the threshold, {K51, K52, K53, K71, K72, K73}, we pick a subset with a single value of i: K5{j} = {K51, K52, K53}. Thus, the j values to consider are {j} = {1, 2, 3}. This means that the first, second, and third rows and columns of the K matrix are crossed out, as depicted in Figure 20.9. The elements of the matrix that are doubly crossed out form the new transition rate matrix of the first metabasin, α:


Materials Kinetics

Figure 20.8 Ranking of the transition rates for each inter-basin transition in the potential energy landscape of Figure 20.6. A sliding threshold determines which rates constitute “fast” versus “slow.” The “fast” transitions occur on a time scale faster than the observation time.

Figure 20.9 First partitioning of the energy landscape of Figure 20.6 into two metabasins. Internal ergodicity is satisfied within each metabasin, and the kinetics of inter-metabasin transitions are governed by a set of two master equations.


$$
\mathbf{K}_\alpha =
\begin{pmatrix}
0 & K_{12} & K_{13} \\
K_{21} & 0 & K_{23} \\
K_{31} & K_{32} & 0
\end{pmatrix}. \tag{20.49}
$$

The elements of the matrix that are not crossed out form the transition matrix of the other metabasin, β:

$$
\mathbf{K}_\beta =
\begin{pmatrix}
0 & K_{45} & K_{46} & K_{47} \\
K_{54} & 0 & K_{56} & K_{57} \\
K_{64} & K_{65} & 0 & K_{67} \\
K_{74} & K_{75} & K_{76} & 0
\end{pmatrix}. \tag{20.50}
$$

The dividing point for the two metabasins is the transition point between basin 5 and basins 1, 2, and 3. All basins falling on the left side of the disconnectivity graph in Figure 20.9 are included in metabasin α. All basins falling on the right side of the disconnectivity graph are included in metabasin β. Transitions within each of the metabasins are faster than the threshold rate, Kth, such that internal ergodicity within the metabasins is maintained. Transitions between the two metabasins are slower and are computed using a set of two master equations, as described in Section 20.4.

As the system is cooled, the next set of transition rates from Eqs. (20.49) and (20.50) to drop below the threshold is {K56, K57, K74, K75}, as indicated in Figure 20.10. We arbitrarily select the subset K5{j} = {K56, K57}, giving {j} = {6, 7}. The second partitioning of the landscape is performed by crossing out the {j} = {6, 7} rows and columns of Kβ, as shown in Figure 20.10. This leads to the partitioning of Eq. (20.50) into a new β metabasin,

$$
\mathbf{K}_\beta =
\begin{pmatrix}
0 & K_{45} \\
K_{54} & 0
\end{pmatrix}, \tag{20.51}
$$

and a third metabasin, γ:

$$
\mathbf{K}_\gamma =
\begin{pmatrix}
0 & K_{67} \\
K_{76} & 0
\end{pmatrix}. \tag{20.52}
$$

Note that the same partitioning would result if K7{j} = {K74, K75} had been chosen instead of K5{j} = {K56, K57}. After this second partitioning,


Figure 20.10 Second partitioning of the energy landscape of Figure 20.6, which now has three metabasins, labeled α, β, and γ.

the kinetics of the system are solved using a set of three master equations for the α, β, and γ metabasins. The metabasin probabilities, fα, fβ, and fγ, are distributed to the individual microstates within each metabasin using an equilibrium Boltzmann distribution over the restricted ensemble of the metabasin, as in Eq. (20.26).

Putting it all together, the algorithm for calculating the kinetics of the system is [7]:
(1) Choose a time step dt over which the temperature can be assumed constant. For a constant cooling rate (dT/dt), a choice of (0.01 K)·|dT/dt|⁻¹ is appropriate.
(2) At the beginning of the time step, partition the system into metabasins according to the procedure above.
(3) Distribute the metabasin probabilities among the individual microstates within each metabasin according to the equilibrium distribution of Eq. (20.26).
(4) Use these probabilities and Eq. (20.43) to calculate the inter-metabasin transition rates, wαβ.
(5) Integrate the metabasin master equations over the time step dt.


(6) Redistribute the metabasin probabilities after each dt step using Eq. (20.26).
(7) Repeat the entire algorithm at each new step in T(t).
(8) Calculate the evolution of macroscopic properties according to Eq. (20.11).

Figure 20.11 shows the results of applying this algorithm to the seven-basin landscape of Figure 20.6. The thick black line in Figure 20.11 shows results computed using the metabasin approach, which are in excellent agreement with results from direct Euler integration of the full set of master equations, shown by the thin gray line. The equilibrium potential energy is shown by the dashed line in the figure. Figure 20.12 plots the number of metabasins used in the calculation as a function of temperature. Since the system is ergodic at high temperatures, it consists initially of a single metabasin. In the limit of low temperature, each microstate is located in its own separate metabasin. Hence, in the low-temperature limit, the number of metabasins equals the number of microstates.
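For a fixed partitioning, the integration step of the algorithm reduces to evolving a small set of coupled ordinary differential equations. The Python sketch below (with made-up inter-metabasin rates standing in for the rates of Eq. (20.43)) Euler-integrates a two-metabasin system and illustrates that total probability is conserved while the occupations relax toward their steady-state ratio.

```python
def evolve_metabasins(f, w, dt, steps):
    """Forward-Euler integration of the reduced master equations
    df_a/dt = sum_b [ w[b][a] * f[b] - w[a][b] * f[a] ]."""
    n = len(f)
    for _ in range(steps):
        flux = [sum(w[b][a] * f[b] - w[a][b] * f[a]
                    for b in range(n) if b != a) for a in range(n)]
        f = [f[a] + dt * flux[a] for a in range(n)]
    return f

# Two metabasins with illustrative rates (units of inverse time).
w = [[0.0, 0.3],   # alpha -> beta
     [0.1, 0.0]]   # beta -> alpha
f = evolve_metabasins([1.0, 0.0], w, dt=0.01, steps=2000)
assert abs(sum(f) - 1.0) < 1e-9   # total probability is conserved
# steady state satisfies f_alpha / f_beta -> w[1][0] / w[0][1] = 1/3
```

Because only the slow inter-metabasin rates appear here, the time step dt can be chosen on the experimental time scale rather than on the scale of the fastest microscopic transition.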

20.6 Accessing Long Time Scales

The preceding sections provide an approach for efficiently calculating the master equation kinetics of any system with highly disparate rate

Figure 20.11 Validation of the metabasin approach for calculating energy landscape kinetics versus direct Euler integration of the seven-basin landscape in Figure 20.6.


Figure 20.12 Number of metabasins as a function of temperature for calculating the master equation kinetics of the seven-basin landscape in Figure 20.6.

parameters. The method is based on partitioning of the phase space into metabasins, which are dynamically chosen to satisfy the following two criteria:
(1) The transition rates between microstates within a metabasin are fast compared to the observation time scale. Hence, the distribution of basin occupation probabilities within a metabasin follows equilibrium statistical mechanics within the restricted ensemble of the metabasin.
(2) The kinetics of inter-metabasin transitions are too slow to allow for equilibration on the observation time scale. The inter-metabasin kinetics are calculated using a reduced set of master equations, with one master equation for each metabasin.

The first criterion above corresponds to Palmer's condition of internal ergodicity for broken ergodic systems [8]. The second criterion is a generalization of Palmer's condition of confinement: whereas Palmer's formulation forbids transitions between components (i.e., between metabasins), here the transitions are allowed using a reduced set of master equations. The inter-metabasin transition rates are calculated at each time step based on the individual transition rates between microstates. In this manner, the approach discussed in this chapter accounts for a continuous breakdown of ergodicity as the temperature of the system evolves [9].

By decoupling the kinetics within metabasins from the inter-metabasin transitions, the reduced set of master equations can be solved on the natural time scale of the experiment, i.e., on the observation time scale. For a linear


cooling path, the integration time step is governed by the cooling rate rather than by the fastest microscopic transition rate. Owing to this decoupling, the kinetics of the system can be solved on any time scale of interest. For example, Figure 20.13 shows the calculated volume-temperature diagrams of a selenium liquid cooled through the glass transition using cooling rates that span twenty-five orders of magnitude. For fast cooling rates, a short time step is used. For slow cooling rates, a longer time step is chosen. In each case, the partitioning of the enthalpy landscape into metabasins depends on the cooling rate, which defines the threshold for what constitutes fast versus slow.

20.7 KineticPy

KineticPy is an open source software package written in Python, which implements the metabasin approach for calculating the master equation kinetics of an arbitrary potential energy landscape or enthalpy landscape [10]. The KineticPy program is available for download at https://github.com/Mauro-Glass-Group/

In addition to specifying the temperature path, T(t), KineticPy requires the following input files to give the landscape parameters:
• The *.min file contains two columns indicating the basin number and the energy or enthalpy of the inherent structure within that basin.

Figure 20.13 (a) Computed volume-temperature diagrams using the metabasin approach described in this chapter applied to the enthalpy landscape of selenium. The linear cooling rates cover twenty-five orders of magnitude. (b) Calculated room-temperature molar volume of selenium as a function of cooling rate.

• The *.prp file specifies a property value (such as molar volume) corresponding to each individual basin.
• The *.deg file contains the logarithm of the degeneracy of each basin.
• The *.trs file contains information on the transition points, including the transition point energies and each pair of basins connected by a transition point.
• The *.eig file specifies the eigenvalue of each transition, which governs the vibrational attempt frequency for that transition.
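As a sketch of how such an input might be consumed, a minimal reader for a *.min-style file could look like the following. The two-column whitespace-separated layout is assumed from the description above; consult the KineticPy repository for the authoritative file formats.

```python
import io

def read_min(stream):
    """Parse a *.min-style file: two whitespace-separated columns,
    basin number and inherent-structure energy or enthalpy.
    (Format assumed from the description in the text.)"""
    basins = {}
    for line in stream:
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comment lines
        basin, energy = line.split()
        basins[int(basin)] = float(energy)
    return basins

# Hypothetical three-basin example, supplied as an in-memory file.
sample = io.StringIO("""\
# basin  inherent-structure energy
1  -1.20
2  -0.85
3  -1.05
""")
assert read_min(sample) == {1: -1.20, 2: -0.85, 3: -1.05}
```

The other input files (*.prp, *.deg, *.trs, *.eig) follow the same basin-indexed pattern and could be parsed analogously.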

20.8 Summary

Transition state theory describes the kinetics of a system transitioning between microstates by overcoming an activation barrier. The transition rate is the product of a vibrational attempt frequency and a Boltzmann probability factor. The evolution of the microstate occupation probabilities can be described using a system of coupled master equations. While these master equations can be solved directly using standard techniques such as Euler integration, most real systems have transition rates that span many orders of magnitude, and with direct integration techniques the time step is limited by the fastest transition rate.

Long time scales can be accessed by partitioning the energy landscape into metabasins that satisfy the condition of internal ergodicity. In other words, the kinetics of microstate transitions within a metabasin are sufficiently fast to achieve local equilibration over the restricted ensemble of the metabasin. The kinetics of transitions among metabasins are governed by a reduced set of master equations. The partitioning of the system into metabasins depends on the topography of the energy landscape, the temperature of the system, and the observation time scale. By leveraging the broken ergodic nature of the system, the metabasin approach can be used to solve a system of master equations on any time scale of interest. KineticPy is an open source Python code that implements this metabasin approach for solving the kinetics of any arbitrary system.

Exercises

(20.1) Download the KineticPy program from https://github.com/Mauro-Glass-Group/ and run the example script for selenium.
    (a) Plot the resulting enthalpy as a function of temperature for the default cooling rate.


    (b) Plot the resulting molar volume as a function of temperature for the default cooling rate.
    (c) Adjust the cooling rate and plot enthalpy and molar volume results for both faster and slower cooling rates.
    (d) What happens to the number of metabasins required in the calculation as the cooling rate is lowered?
(20.2) How can configurational entropy be calculated using the master equation approach while accounting for continuously broken ergodicity? (Hint: See Section 19.5 and discuss in terms of the metabasin partitioning approach in Section 20.4.)

References

[1] E. Wigner, “On the Penetration of Potential Energy Barriers in Chemical Reactions,” Z. Phys. Chem. Abt. B 19, 203 (1932).
[2] R. Zwanzig, Nonequilibrium Statistical Mechanics, Oxford University Press (2001).
[3] L. Angelani, G. Parisi, G. Ruocco, and G. Viliani, “Connected Network of Minima as a Model Glass: Long Time Dynamics,” Phys. Rev. Lett. 81, 4648 (1998).
[4] L. Angelani, G. Parisi, G. Ruocco, and G. Viliani, “Potential Energy Landscape and Long-Time Dynamics in a Simple Model Glass,” Phys. Rev. E 61, 1681 (2000).
[5] M. A. Miller, J. P. Doye, and D. J. Wales, “Structural Relaxation in Atomic Clusters: Master Equation Dynamics,” Phys. Rev. E 60, 3701 (1999).
[6] J. C. Mauro and A. K. Varshneya, “A Nonequilibrium Statistical Mechanical Model of Structural Relaxation in Glass,” J. Am. Ceram. Soc. 89, 1091 (2006).
[7] J. C. Mauro, R. J. Loucks, and P. K. Gupta, “Metabasin Approach for Computing the Master Equation Dynamics of Systems with Broken Ergodicity,” J. Phys. Chem. A 111, 7957 (2007).
[8] R. G. Palmer, “Broken Ergodicity,” Adv. Phys. 31, 669 (1982).
[9] J. C. Mauro, P. K. Gupta, and R. J. Loucks, “Continuously Broken Ergodicity,” J. Chem. Phys. 126, 184511 (2007).
[10] Y. Z. Mauro, C. J. Wilkinson, and J. C. Mauro, “KineticPy: A Tool to Calculate Long-Time Kinetics in Energy Landscapes with Broken Ergodicity,” SoftwareX 11, 100393 (2020).


CHAPTER 21

Relaxation of Glasses and Polymers

21.1 Introduction

Glasses and glassy polymers are nonequilibrium materials that spontaneously relax toward the metastable supercooled liquid state [1]. This relaxation process is slow at low temperatures but accelerates dramatically during heat treatment. Glass relaxation is recognized as one of the most challenging problems in condensed matter physics and is also of vital importance to the glass and polymer industries. For example, the manufacture of flat panel displays is controlled by volume relaxation of the glass substrate. Volume relaxation is critically important for high-resolution displays, since smaller pixel sizes require tighter control of glass relaxation to avoid pixel misalignment in the final display [2]. Relaxation is also important for chemically strengthened glass, since the ion exchange temperature for inter-diffusion is limited by stress relaxation in the glass [3]. Relaxation also affects the kinetics of the inter-diffusion process and the magnitude of the compressive stress induced by ion exchange [4]. The mechanical properties of glasses and polymers are also a function of their thermal history. Relaxation is likewise important for controlling the attenuation of optical telecommunication fibers and for adjusting the thermal expansion coefficient of glasses and polymers [3].

In this chapter, we will discuss the detailed physics of glass and polymer relaxation. We begin with an in-depth discussion of the concept of fictive temperature, which plays such a critical role in describing the relaxation process. Then, we will discuss the stretched exponential relaxation function, its use in modern relaxation models, and its implementation in the RelaxPy software. Finally, we will examine different types of relaxation, viz., stress relaxation and structural relaxation, and how they relate to viscosity via the Maxwell relation.

Materials Kinetics ISBN 978-0-12-823907-0 https://doi.org/10.1016/B978-0-12-823907-0.00027-3

© 2021 Elsevier Inc. All rights reserved.



21.2 Fictive Temperature

The macroscopic state of a system is defined from the values of its observable properties. Equilibrium systems are completely described by a small set of parameters called the state variables. For example, the relevant state variables for a closed system in contact with heat and pressure reservoirs are the temperature (T) and pressure (P). For a given set of state variables, the observable properties of an equilibrium system are uniquely determined. However, for a nonequilibrium system, the macroscopic property values are path-dependent, i.e., they depend on the entire history of the system. The state of a nonequilibrium system, therefore, requires specification of additional parameters beyond the usual set of thermodynamic state variables [5].

In Chapter 17, we introduced the notion of fictive temperature, Tf, which is an additional order parameter used to specify the nonequilibrium state of a glass or polymer system. In that chapter, fictive temperature was defined by the intersection of the glassy and supercooled liquid lines in the volume-temperature diagram of Figure 17.1. A glass that is cooled more quickly through the glass transition has a higher fictive temperature, since it falls out of equilibrium at higher temperatures. The difference between the fictive temperature, Tf, and the real temperature, T, is a measure of the degree of thermodynamic disequilibrium of the system. Over time, the nonequilibrium glass spontaneously relaxes toward the metastable supercooled liquid state. During this relaxation process, Tf evolves toward T as the system approaches equilibrium.

21.3 Tool's Equation

One primary source of confusion for understanding glass and polymer relaxation is the precise definition of fictive temperature. Several different interpretations of fictive temperature are used by researchers in the field [5]. In order to approach this problem, it is useful to gain some historical perspective. The concept of fictive temperature was originally introduced by Tool and Eichlin in 1931 [6] and more thoroughly described in a 1946 paper by Tool [7]. Tool defined fictive temperature with the following quote: “The physiochemical condition or state of a glass is reasonably well known only when both the actual temperature and that other temperature at which the glass would be in equilibrium, if heated or cooled very rapidly to it, are known. This latter temperature has been termed the ‘equilibrium or fictive temperature’ of the glass.” [7]


Tool's work implies that the nonequilibrium glassy state can only be described through knowledge of both the fictive temperature, Tf, and the physical temperature, T. If we consider glass formation through instantaneous quenching from an equilibrium liquid, then the fictive temperature of the glass is equal to the temperature of the liquid immediately prior to the quench. During the quenching process, the temperature is suddenly lowered below the glass transition range, causing the structure of the liquid to freeze. After quenching, the thermal energy of the glass is governed by the physical temperature, T, but its structure is described by the fictive temperature, Tf. The fictive temperature of the glass depends on its full thermal history, T(t), and varies with time, t.

This evolution of fictive temperature is summarized well by Gardon and Narayanaswamy [8], who wrote, “It will be helpful to characterize the structural state of the glass by its fictive temperature. This may be thought of as that temperature from which the glass must be rapidly quenched to produce its particular structural state. For the present treatment, the fictive temperature of the material needs to be defined not only in its final glassy state but also at all times as it passes through the transformation range.”

The fictive temperature is equal to the physical temperature (Tf = T) at temperatures above the glass transition range, where the system is in equilibrium. As the temperature is lowered through the glass transition regime, the evolution of the fictive temperature lags that of the physical temperature (i.e., Tf > T), giving the initial departure from equilibrium. Finally, at low temperatures, the fictive temperature is frozen at some value well above the physical temperature.
Hence, the difference between the fictive temperature Tf and the physical temperature T is a measure of how far the system has departed from equilibrium, i.e., T − Tf is a measure of the thermodynamic driving force for relaxation of the glass or polymer. The first equation proposed to model the relaxation process is Tool's equation [7]:

$$\frac{\partial T_f}{\partial t} = \frac{T - T_f}{\tau(T, T_f)}, \tag{21.1}$$

where T and Tf are the instantaneous values of the temperature and fictive temperature, respectively, t is time, and τ is the internal relaxation time of the system, which depends on both T and Tf, following our treatment of nonequilibrium viscosity in Chapter 17. There are two key parts to Tool's


equation: the thermodynamic driving force for relaxation (equal to the numerator, T − Tf) and the kinetic coefficient (equal to 1/τ). Note that relaxation to the liquid state requires both a nonzero thermodynamic driving force (i.e., T ≠ Tf) and a finite relaxation time (τ < ∞).
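Tool's equation is a single ordinary differential equation and is straightforward to integrate numerically once a model for τ(T, Tf) is chosen. The Python sketch below uses forward Euler with a constant relaxation time purely for illustration (a physically realistic τ depends strongly on both T and Tf, as discussed in Chapter 17); it shows Tf decaying exponentially toward T during an isothermal hold.

```python
def tool_relaxation(T_of_t, tau, Tf0, dt, steps):
    """Forward-Euler integration of Tool's equation (21.1):
    dTf/dt = (T - Tf) / tau(T, Tf)."""
    Tf, history = Tf0, []
    for k in range(steps):
        T = T_of_t(k * dt)
        Tf += dt * (T - Tf) / tau(T, Tf)
        history.append(Tf)
    return history

# Isothermal hold at T = 600 K, starting from Tf = 800 K, with a
# constant (illustrative) relaxation time of 100 s.
hist = tool_relaxation(lambda t: 600.0, lambda T, Tf: 100.0,
                       Tf0=800.0, dt=1.0, steps=1000)
assert 600.0 < hist[-1] < 600.1   # Tf has relaxed nearly to T
```

Passing a time-dependent T_of_t (e.g., a linear cooling ramp) instead of the isothermal hold would trace out the lag of Tf behind T through the glass transition.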

21.4 Ritland Crossover Effect

According to Tool's equation in Eq. (21.1), the relaxation behavior of a glass can be described completely using a single order parameter, Tf, and relaxation would cease when Tf = T. However, the pioneering crossover experiment of Ritland in 1956 demonstrated that the nonequilibrium state of a glass cannot, in general, be described using a single fictive temperature [9]. Ritland prepared different glass samples having the same fictive temperature but obtained via different thermal histories, viz., a linear rate-cooled sample and a sample annealed for a long time at the desired fictive temperature and then rapidly quenched to freeze in that structure. Within the scope of Tool's single fictive temperature description, these glasses had the same value of Tf. However, during relaxation they exhibited markedly different behavior (i.e., different memory effects), indicating very different nonequilibrium states, despite having the same Tf.

While the annealed sample did not exhibit any relaxation during an isothermal hold at T = Tf, the measured refractive index of the rate-cooled glass relaxed downward, i.e., away from its equilibrium value, before turning around and relaxing upward toward equilibrium. This non-monotonic relaxation behavior is the famous Ritland crossover effect, which demonstrates that the nonequilibrium glassy state cannot be mapped to just a single equilibrium liquid state. Indeed, it is not possible to account for non-monotonic relaxation behavior using direct application of Tool's equation in Eq. (21.1). Seven years after Ritland published his groundbreaking results, an essentially identical experiment was conducted in the organic polymers community by Kovacs [10], reaching the same conclusion as in Ritland's work. Hence, in the polymers community, the Ritland crossover effect is more typically referred to as the Kovacs crossover effect.

21.5 Fictive Temperature Distributions

The obvious way to address the shortcomings of a single fictive temperature model is to utilize multiple fictive temperatures, an idea originally proposed in Ritland's 1956 paper [9]. In 1971, Narayanaswamy [11] developed the


idea more formally in his phenomenological model of glass relaxation. Narayanaswamy's approach made use of a memory integral, which “indicates that any nonequilibrium state is actually a mixture of several equilibrium states.” [11] While Narayanaswamy's approach using multiple fictive temperatures has led to much success from an engineering point of view, the assumption that a nonequilibrium state is a mixture of equilibrium states is valid only in the theoretical limit of an infinitely fast quench [5]. To understand why Narayanaswamy's assumption is incorrect, let us consider a probability distribution of fictive temperatures [5], h(Tf), satisfying

$$\int_0^\infty h(T_f)\, dT_f = 1. \tag{21.2}$$

For an infinitely fast quench, the structure of the glass corresponds to exactly the structure of the liquid at the quench temperature. In this limit, the fictive temperature distribution would be given by a Dirac delta function:

$$h(T_f) = \delta(T_f - T_{\mathrm{quench}}), \tag{21.3}$$

which would correctly describe the structure of the resulting glass. However, a real glass formed over finite time would, in principle, have an h(Tf) distribution with a nonzero width representing a mixing of different equilibrium states, according to Narayanaswamy [11]. A more slowly cooled glass may have a broader distribution of fictive temperatures, spanning the full glass transition range. If the concept of a fictive temperature distribution is physically meaningful to describe the microscopic state of a glass, then the basin occupation probabilities, pi, in the energy landscape description of a glass could be described using an appropriate mixing of equilibrium probabilities, pi^eq, following the fictive temperature distribution [5]. Mathematically, this can be written as:

$$p_i = \int_0^\infty h(T_f)\, p_i^{\mathrm{eq}}(T_f)\, dT_f. \tag{21.4}$$

The left-hand side of the equation is the nonequilibrium probability of occupying basin i in the energy landscape of a glass after evolving through the particular thermal history, T(t). The integral on the right-hand side of


the equation represents a weighted average of equilibrium basin occupation probabilities at the various fictive temperatures. In other words, the integral provides for an arbitrary mixing of equilibrium liquid states to represent the nonequilibrium state of the glass. The weighting function h(Tf) is the fictive temperature distribution, satisfying Eq. (21.2), which depends on the thermal history T(t) of the glass. If Eq. (21.4) is valid, then there would be a real physical significance to the notion of a fictive temperature distribution, since the nonequilibrium glassy state could be described in terms of a distribution of equilibrium states. This would allow for calculation of the thermodynamic properties of a glass using a combination of the physical temperature, T, and the distribution of fictive temperatures, h(Tf).

To test the validity of Eq. (21.4), let us consider the selenium system previously shown in Figure 20.13, where the glass is formed by linear cooling from the melting point to low temperature at a rate of 1 K/min [5]. As the system is cooled, the basin occupation probabilities pi(t) evolve in time according to the coupled set of master equations described in Chapter 20. To make use of Eq. (21.4), we must first compute the equilibrium occupation probabilities for the various basins as a function of temperature, as in Eq. (20.3). Figure 21.1(a) plots the equilibrium probabilities of eleven representative basins in the enthalpy landscape over the temperature range of interest. Each basin has a different molar volume, indicated in the legend. As the temperature is lowered, the basins having lower molar volume become more favorable. The probabilities of the individual basins display peaks owing to a tradeoff between entropy (i.e., degeneracy) and enthalpy effects. Figure 21.1(b) plots the basin occupation probabilities for the nonequilibrium glass formed by cooling the system at a rate of 1 K/min. In order to test Eq.
(21.4), the fictive temperature distribution, h(Tf ), is optimized at each new temperature, T, to provide a best fit of the nonequilibrium probabilities in Figure 21.1(b) using a mixture of the equilibrium probabilities in Fig. 21.1(a). At T ¼ 400 K, the system is in equilibrium, so h(Tf ) is a Dirac delta function, as in Eq. (21.3). As the temperature of the system is lowered, the fictive temperature distribution broadens and lags the physical temperature. At sufficiently low temperatures, the optimum distribution becomes frozen. Figure 21.1(c) shows the optimized fit of the nonequilibrium glassy probabilities using a mixture of equilibrium probabilities. Comparing parts (b) and (c) of the figure, the glass clearly exhibits a much narrower distribution of basin occupation probabilities, pi, compared to the best possible representation using a fictive

This book belongs to Alice Cartes ([email protected])

Copyright Elsevier 2023

Relaxation of Glasses and Polymers

395

Figure 21.1 (a) Equilibrium distribution of basin occupation probabilities in the enthalpy landscape of supercooled liquid selenium. (b) Distribution of basin occupation probabilities for selenium glass using a cooling rate of 1 K/min. (c) The best approximation of the above glass using continuous fictive temperature mapping, as in Eq. (21.4). For clarity, every tenth data point is plotted. (After Mauro et al. [5]).

This book belongs to Alice Cartes ([email protected])

Copyright Elsevier 2023

396

Materials Kinetics

temperature distribution. The conclusion is that it is not possible to construct a real glassy system based on a mixing of equilibrium states. Hence, Narayanaswamy’s concept that “any nonequilibrium state is actually a mixture of several equilibrium states” [11] is invalid.
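The mixture construction in Eq. (21.4) is easy to state as an algorithm: compute the equilibrium basin probabilities at each fictive temperature on a grid, then combine them with the weights h(Tf). The Python sketch below does this for a hypothetical three-basin landscape; the enthalpies, degeneracies, and fictive temperature grid are invented for illustration and are not the selenium landscape of Ref. [5].

```python
import math

def equilibrium_probs(H, g, T, kB=8.617e-5):
    """Boltzmann occupation probabilities of landscape basins (cf. Eq. 20.3).
    H: basin enthalpies in eV, g: degeneracies, T: temperature in K."""
    w = [gi * math.exp(-Hi / (kB * T)) for Hi, gi in zip(H, g)]
    Z = sum(w)
    return [x / Z for x in w]

def mixture_probs(H, g, Tf_grid, h):
    """Discrete analog of Eq. (21.4): p_i = sum over Tf of h(Tf) * p_i^eq(Tf)."""
    assert abs(sum(h) - 1.0) < 1e-12  # h must be normalized, per Eq. (21.2)
    p = [0.0] * len(H)
    for Tf, hf in zip(Tf_grid, h):
        peq = equilibrium_probs(H, g, Tf)
        for i in range(len(H)):
            p[i] += hf * peq[i]
    return p

# Hypothetical three-basin landscape (illustrative values only)
H = [0.00, 0.05, 0.12]   # basin enthalpies (eV)
g = [1.0, 4.0, 20.0]     # basin degeneracies
p = mixture_probs(H, g, [350.0, 400.0, 450.0], [0.2, 0.5, 0.3])
```

Any such convex mixture of equilibrium probability vectors remains a normalized probability distribution; the question tested in the text is whether the *particular* nonequilibrium distribution of a real glass can be reproduced this way, which Figure 21.1 shows it cannot.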

21.6 Property Dependence of Fictive Temperature

Another interpretation of fictive temperature is based on matching the configurational contributions to macroscopic properties between the liquid and glassy states. This approach was advocated by Moynihan [12], who did not consider fictive temperature as a mapping of the glass structure to an equivalent liquid structure. Instead, he defined fictive temperature as "simply the structural contribution to the value of the macroscopic property of interest expressed in temperature units" [12]. This interpretation of fictive temperature is simpler in that it circumvents the atomic-level structural details of the nonequilibrium system. However, the assumption of a single fictive temperature for a given property is inconsistent with the Ritland crossover effect discussed in Section 21.4. Moreover, a 1976 paper by Moynihan and coworkers [13] noted that the value of fictive temperature changes with the particular property under study: "Fictive temperatures for the same glass in the same nonequilibrium state calculated from different properties are not necessarily identical." Commenting on this result, Scherer [14] noted: "Saying that different properties have different fictive temperatures is simply an abstract way of acknowledging that structural changes affect different properties to different extents." Hence, the macroscopic interpretation of fictive temperature, while compelling for its simplicity, does not have any rigorous physical meaning.

21.7 Kinetic Interpretation of Fictive Temperature

A more compelling approach for defining fictive temperature is based on the distribution of time scales involved with the relaxation process. This involves defining a set of partial fictive temperatures, each corresponding to a relaxation mode at a particular time scale. As described by Rekhson [15]: "The model characterizes the glassy state by a broad distribution of partial fictive temperatures: low, produced by fast relaxation mechanisms, and high, produced by slow mechanisms." With this approach, there is a spectrum of partial fictive temperatures representing the various relaxation modes in the glass, and the evolution of property values is given by a weighted average of the partial fictive temperatures. In this kinetic interpretation of fictive temperature, there is no attempt to equate the nonequilibrium glassy state to any equilibrium state. The full details of the kinetic model of glass relaxation using partial fictive temperatures will be presented in Section 21.10. First, we must introduce the concepts of stretched exponential relaxation (Section 21.8) and the Prony series approximation to the stretched exponential function (Section 21.9), since each term in the Prony series will correspond to one partial fictive temperature.

21.8 Stretched Exponential Relaxation

The study of relaxation dates back to the pioneering work of Kohlrausch [16,17], who proposed the following empirical equation in 1854 to describe the relaxation of residual charge on a Leyden jar:

g(t) = exp[−(t/τ)^β],   (21.5)

where t is time and g(t) is the relaxation function, which, by definition, decays from an initial value of 1 to a final value of 0. The relaxation function, g(t), can be calculated from the relaxation of any property value, as long as the initial value of the property is normalized to g(0) = 1 and the final equilibrium value is normalized to g(∞) = 0. Eq. (21.5) is commonly known as the stretched exponential relaxation function and has two free parameters: the relaxation time, τ, and the stretching exponent, β. The stretching exponent is dimensionless and satisfies 0 < β ≤ 1, where the upper limit of β = 1 corresponds to simple exponential decay and lower values of β indicate a nonexponential relaxation process. Mathematically, the stretched exponential function is an example of a fat tail distribution, exhibiting a long decay time compared to simple exponential relaxation, as shown in Figure 21.2. While many types of distributions exhibit fat tails [18], the stretched exponential function has been shown to provide a universally accurate description of the relaxation behavior of a homogeneous glass, independent of its particular chemistry and of which property is measured to obtain the relaxation function [19].

Figure 21.2 Comparison of the stretched exponential relaxation function of Eq. (21.5) with simple exponential decay (β = 1). The stretched exponential function is an example of a "fat tail" distribution, exhibiting a long decay time.

For nearly 150 years after it was introduced by Kohlrausch [16], the physical origin of the stretched exponential function was regarded as one of the oldest unsolved problems in physics [19]. The first physical derivation of the stretched exponential functional form was achieved by Grassberger and Procaccia in 1982, based on the diffusion of excitations to randomly distributed traps which annihilate these excitations [20]. In their model, the resulting distribution of relaxation times naturally gives rise to the stretched exponential function. In 1994, this diffusion-trap model was extended by Phillips [21], who showed that the value of the stretching exponent can be described in terms of the effective dimensionality, d*, of the relaxation pathways in the configurational phase space. According to Phillips, the stretching exponent is related to d* by:

β = d*/(d* + 2).   (21.6)

Phillips expressed the dimensionality of the relaxation pathways as d* = fd, where d is the dimensionality of the system and f is the fraction of relaxation pathways activated for the given process under study. For a three-dimensional system with d = 3 and all relaxation pathways activated (f = 1), a stretching exponent of β = 3/5 is found. Likewise, relaxation of a two-dimensional film (d = 2) with all relaxation pathways active (f = 1) follows β = 1/2. A fractal dimensionality is attained if one assumes an equipartitioning of the relaxation channels into long- and short-range contributions. If the rate of entropy production is maximized, then there must be a balancing of the short- and long-range contributions. Assuming that all short-range relaxation processes have already occurred, so that only the long-range contributions remain, then f = 1/2 and a fractal dimensionality of d* = 3/2 is obtained. With d* = 3/2, Eq. (21.6) gives a stretching exponent of β = 3/7 for relaxation governed solely by long-range interactions. Validation of the diffusion-trap model for stretched exponential relaxation in microscopically homogeneous systems has been demonstrated by Phillips using over 50 examples from the literature [19].
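Both Eq. (21.5) and the Phillips relation of Eq. (21.6) are simple enough to verify numerically. The following minimal Python sketch encodes them and checks the three critical exponents discussed above:

```python
import math

def stretched_exponential(t, tau, beta):
    """Kohlrausch relaxation function, Eq. (21.5): g(t) = exp[-(t/tau)^beta]."""
    return math.exp(-((t / tau) ** beta))

def phillips_beta(d_star):
    """Phillips diffusion-trap relation, Eq. (21.6): beta = d*/(d* + 2)."""
    return d_star / (d_star + 2.0)

# Critical exponents discussed in the text
assert abs(phillips_beta(3.0) - 3.0 / 5.0) < 1e-12   # d* = 3   -> beta = 3/5
assert abs(phillips_beta(2.0) - 1.0 / 2.0) < 1e-12   # d* = 2   -> beta = 1/2
assert abs(phillips_beta(1.5) - 3.0 / 7.0) < 1e-12   # d* = 3/2 -> beta = 3/7

# The "fat tail" of Figure 21.2: at long times, a stretched exponential
# (beta < 1) decays more slowly than simple exponential decay (beta = 1)
assert stretched_exponential(10.0, 1.0, 0.6) > stretched_exponential(10.0, 1.0, 1.0)
```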

21.9 Prony Series Description

While stretched exponential relaxation is a universal feature of homogeneous glassy systems, in practical implementations the stretched exponential function is typically described in terms of a summation of simple exponentials, known as a Prony series. With a sufficient number of terms, the Prony series can accurately capture the shape of the stretched exponential function while improving the computational efficiency of the model implementation [22]. A Prony series is a discrete sum of simple exponential terms:

exp(−x^β) ≈ Σ_{i=1}^{N} w_i exp(−K_i x),   (21.6)

where x ≡ t/τ, and the weighting factors w_i satisfy

Σ_{i=1}^{N} w_i = 1.   (21.7)

Use of a Prony series greatly improves the computational efficiency of the relaxation model, since the simple exponential terms in the Prony series enable analytical integration of the relaxation equations. Optimized values of the Prony series coefficients, {w_i, K_i}, have been published by Mauro and Mauro [22] for different values of N (the number of terms in the Prony series) and β. These optimized parameter values are provided in Table 21.1 for N = 8 and β = {3/7, 1/2, 3/5}. In Table 21.2, identical values of the stretching exponent are considered while increasing the number of terms in the Prony series to N = 12.

Table 21.1 Parameters of the optimized Prony series fits with N = 8 for stretched exponential relaxation with β = 3/7, 1/2, and 3/5.

        β = 3/7                 β = 1/2                 β = 3/5
 i      w_i        K_i          w_i        K_i          w_i        K_i
 1   0.05014    0.04812      0.03747    0.07735      0.03417    0.14633
 2   0.14755    0.14694      0.15374    0.19035      0.18347    0.28920
 3   0.18607    0.39937      0.21791    0.46310      0.26687    0.61223
 4   0.17905    1.07652      0.20267    1.17279      0.21782    1.42012
 5   0.14976    3.08783      0.15447    3.21677      0.13987    3.65392
 6   0.11362    9.91235      0.10528    9.95212      0.08121   10.69645
 7   0.08050   38.09496      0.06676   37.10866      0.04448   37.81238
 8   0.09331  254.89688      0.06168  236.28861      0.03209  221.25717

Table 21.2 Parameters of the optimized Prony series fits with N = 12 for stretched exponential relaxation with β = 3/7, 1/2, and 3/5.

        β = 3/7                 β = 1/2                 β = 3/5
 i      w_i        K_i          w_i        K_i          w_i        K_i
 1   0.02792    0.03816      0.01694    0.06265      0.01043    0.12022
 2   0.09567    0.10117      0.08574    0.13381      0.08117    0.20610
 3   0.13049    0.22822      0.14468    0.26816      0.17168    0.35680
 4   0.13388    0.47142      0.15870    0.52050      0.19624    0.63293
 5   0.12456    0.94243      0.14514    1.00410      0.16742    1.15481
 6   0.10976    1.88828      0.12095    1.96395      0.12467    2.17404
 7   0.09256    3.86312      0.09512    3.94401      0.08711    4.23811
 8   0.07525    8.15604      0.07188    8.20241      0.05896    8.59467
 9   0.05938   17.92388      0.05275   17.81155      0.03913   18.25401
10   0.04587   41.47225      0.03791   40.85894      0.02559   41.07522
11   0.03588  104.13591      0.02749  102.07104      0.01688  100.99297
12   0.06877  402.71691      0.04270  383.52267      0.02071  363.84147

The optimum Prony series parameters for N = {8, 12} are plotted in Figure 21.3 for β = {3/5, 1/2, 3/7}. For each value of β, the Prony series terms are rather evenly distributed over log(K_i) space. The weighting factors, w_i, have a peak just below K_i = 1 and are asymmetrically distributed in log(K_i) space. There is greater weighting on the high-K_i side of the distribution to accurately capture the short-time scaling of the stretched exponential function, where the relaxation occurs most rapidly and shows the greatest difference compared to simple exponential decay, as can be seen in Figure 21.2. Figure 21.3 shows that as β is lowered, the stretched exponential decay function becomes increasingly nonexponential, leading to a broader distribution of the w_i coefficients in log(K_i) space.

Figure 21.3 Optimized parameters of the Prony series fits for stretched exponential relaxation with β = 3/7, 1/2, and 3/5 with (a) N = 8 and (b) N = 12. (After Mauro and Mauro [22]).

The convergence of the Prony series to the stretched exponential function with increasing N is evident in Figure 21.4(a), which plots the root mean squared error (RMSE) of the Prony series fits for β = {3/5, 1/2, 3/7} and N = [1, 12]. Over this range of β values, the RMSE increases with decreasing β. Following the Phillips diffusion-trap model, the lower values of β correspond to a reduced dimensionality of the activated relaxation pathways. Figure 21.4(a) shows that the RMSE decreases exponentially with increasing N. The slope of dlog(RMSE)/dN is independent of the particular value of β for this range of stretching exponents. The exponential scaling of the RMSE with N is useful for calculating the minimum number of terms required for the Prony series to attain a target level of accuracy. For example, if a 10^−6 level of accuracy is required, at least 10 terms must be included for β = 3/5, whereas at least 11 terms are required for the two lower values of β = {3/7, 1/2}. In Figure 21.4(b), the RMSE is plotted over the full range of potential β values from zero to one for both N = 4 (left axis) and N = 8 (right axis).
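As a check on the tabulated coefficients, the following Python sketch evaluates the N = 8, β = 3/5 Prony series of Table 21.1 against the exact stretched exponential over several decades of reduced time:

```python
import math

# Optimized Prony coefficients for beta = 3/5, N = 8 (Table 21.1)
w = [0.03417, 0.18347, 0.26687, 0.21782, 0.13987, 0.08121, 0.04448, 0.03209]
K = [0.14633, 0.28920, 0.61223, 1.42012, 3.65392, 10.69645, 37.81238, 221.25717]

def prony(x):
    """Prony series approximation to exp(-x^beta), Eq. (21.6)."""
    return sum(wi * math.exp(-Ki * x) for wi, Ki in zip(w, K))

def ser(x, beta=3.0 / 5.0):
    """Exact stretched exponential in reduced time x = t/tau."""
    return math.exp(-(x ** beta))

# Weights are normalized per Eq. (21.7), to within the rounding of the table
assert abs(sum(w) - 1.0) < 1e-3

# The 8-term series tracks the stretched exponential closely over several decades
for x in [0.01, 0.1, 1.0, 10.0]:
    assert abs(prony(x) - ser(x)) < 0.01
```

With these published coefficients the pointwise error is in fact on the order of 10^−4 or better over this range, consistent with the RMSE convergence shown in Figure 21.4(a).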


Figure 21.4 (a) Root-mean-square error (RMSE) of optimized Prony series fits to stretched exponential relaxation (SER) with β = 3/5, 1/2, and 3/7 as a function of the number of terms in the Prony series, N = [1, 12]. (b) RMSE of optimized Prony series fits to SER with N = {4, 8} and β = [0, 1]. (After Mauro and Mauro [22]).

The error vanishes in either limit of β → 0 and β → 1, since in both of these limits the stretched exponential function reduces to a simple exponential. The maximum in RMSE, and hence the maximum in nonexponentiality, occurs around β ≈ 0.3. Since each of the critical exponent values from the Phillips diffusion-trap model satisfies β > 0.3, for any realistic glass relaxation process the level of nonexponentiality increases with decreasing β. In other words, over a physically realistic range of β values, the quality of the Prony series fit degrades with decreasing β.

The first frequency domain analysis of relaxation using the stretched exponential function was published by Williams and Watts in their influential 1970 paper [23]. The Williams-Watts paper is largely responsible for popularizing the modern use of the stretched exponential function, which had originally been introduced by Kohlrausch more than a century prior to this work [16]. Their work was so impactful that the stretched exponential function is also commonly known as the Kohlrausch-Williams-Watts (KWW) function. While there is no exact analytical expression for the Fourier transform of the stretched exponential function, the Fourier transform of the Prony series is analytically soluble [22]:

G_Prony(ω) = ∫_{−∞}^{∞} g_Prony(x) exp(−jωx) dx,   (21.8)

where g_Prony(x) is the Prony series from Eq. (21.6), ω is angular frequency, and j ≡ √−1. Since the relaxation function is defined only for positive time, Eq. (21.8) becomes

G_Prony(ω) = Σ_{i=1}^{N} w_i ∫_0^∞ exp[−(K_i + jω)x] dx.   (21.9)

By substitution of variables, u = (K_i + jω)x, we obtain

G_Prony(ω) = Σ_{i=1}^{N} [w_i/(K_i + jω)] ∫_0^∞ e^(−u) du.   (21.10)

Solving the integral, the final expression for the Fourier transform of the Prony series is:

G_Prony(ω) = Σ_{i=1}^{N} w_i/(K_i + jω).   (21.11)

Figure 21.5(a) plots the power spectral density of the stretched exponential distribution, |G(ω)|², where the Fourier transform of the stretched exponential function is obtained numerically using a fast Fourier transform (FFT) algorithm. Figure 21.5(b) shows the convergence of the power spectral density of the Prony series with increasing N, using Eq. (21.11) for the Prony series spectrum. With just a small number of terms, the Prony series gives an accurate representation of the power spectral density of the actual stretched exponential function.
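The closed form of Eq. (21.11) can be verified against direct numerical integration of Eq. (21.9). The Python sketch below does so for an illustrative two-term Prony series; the weights and rate constants here are invented for the example and are not the optimized coefficients of Table 21.1.

```python
import cmath

def prony_spectrum(w, K, omega):
    """Analytical Fourier transform of a Prony series, Eq. (21.11)."""
    return sum(wi / (Ki + 1j * omega) for wi, Ki in zip(w, K))

def numerical_spectrum(w, K, omega, dx=1e-3, x_max=30.0):
    """Direct numerical evaluation of Eq. (21.9) by the trapezoidal rule."""
    n = int(x_max / dx)
    total = 0.0 + 0.0j
    for k in range(n):
        x0, x1 = k * dx, (k + 1) * dx
        f0 = sum(wi * cmath.exp(-(Ki + 1j * omega) * x0) for wi, Ki in zip(w, K))
        f1 = sum(wi * cmath.exp(-(Ki + 1j * omega) * x1) for wi, Ki in zip(w, K))
        total += 0.5 * (f0 + f1) * dx
    return total

w = [0.6, 0.4]    # illustrative two-term Prony series (weights sum to 1)
K = [0.5, 5.0]
omega = 2.0
G_analytic = prony_spectrum(w, K, omega)
G_numeric = numerical_spectrum(w, K, omega)
assert abs(G_analytic - G_numeric) < 1e-3

# Power spectral density, |G(omega)|^2, as plotted in Figure 21.5
psd = abs(G_analytic) ** 2
```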


Figure 21.5 (a) Power spectral density of the stretched exponential relaxation (SER) function with β = 3/7, 1/2, and 3/5. (b) Convergence of the power spectral density for Prony series fits of the SER function with β = 3/7. (From Mauro and Mauro [22]).

21.10 Relaxation Kinetics

With our understanding of the kinetic interpretation of fictive temperature from Section 21.7 and the Prony series representation of stretched exponential relaxation from Section 21.9, we are now ready to introduce the current state of the art in relaxation equations. Following the kinetic interpretation of fictive temperature in Section 21.7, the relaxation kinetics of the nonequilibrium glassy state are described in terms of a set of partial fictive temperatures, {T_fi}, each of which relaxes on a different time scale. Each partial fictive temperature, T_fi, corresponds to one term in the Prony series representation of the stretched exponential function in Eq. (21.6), where the coefficients of the Prony series, {w_i, K_i}, are optimized as described in Section 21.9. The w_i coefficients give the weighting of each term in the Prony series, i.e., the weighting of each partial fictive temperature, and the time scale of each relaxation mode is determined by K_i. At any time, t, the average fictive temperature of the system can be calculated by

T_f(t) = Σ_{i=1}^{N} w_i T_fi(t).   (21.12)

The relaxation behavior of the glass can then be calculated in terms of relaxation of the individual partial fictive temperatures using a set of N coupled first-order differential equations, where N is the number of terms in the Prony series. Generalizing Tool's equation in Eq. (21.1), we have:

dT_fi/dt = [T(t) − T_fi(t)] / τ_i[T(t), T_f(t)],   i = {1, …, N}.   (21.13)

Here, T(t) is the thermal history of the system, and the time constants {τ_i} are calculated from

τ_i(T, T_f) = τ(T, T_f)/K_i.   (21.14)

In other words, the kinetic coefficients of all N rate equations share a common factor, τ, which is a function of both the physical temperature, T, and the average fictive temperature, T_f, given by Eq. (21.12). The values of τ are scaled by the K_i coefficients from the Prony series approximation in Eq. (21.6). Several models for the relaxation time, τ(T, T_f), exist, following the nonequilibrium viscosity of the glass, as discussed in Chapter 17. The connection between viscosity and relaxation time is given by the Maxwell relation, which is discussed in Section 21.13. Models for τ(T, T_f) include the Narayanaswamy equation in Eq. (17.1) and the Mazurin expression of Eq. (17.2). More recently, the Mauro-Allan-Potuzak (MAP) model in Eq. (17.4) has been shown to provide a significant advantage for describing both the temperature and fictive temperature dependence of the relaxation time using a unified set of parameters [24-27]. The MAP model is also able to predict the composition dependence of τ(T, T_f) based on the underlying topology of the glass network, as discussed in Section 17.6. The detailed form of the MAP model is discussed in Section 17.4. Note that the general form of the relaxation model in Eq. (21.13) does not depend on the particular choice of model for τ(T, T_f).


21.11 RelaxPy

RelaxPy [28] is an open source Python program that implements the relaxation model of Section 21.10 using the MAP model for τ(T, T_f). RelaxPy is available for download at https://github.com/Mauro-GlassGroup/. The input to RelaxPy includes the parameters of the MAP expression (see Section 17.6), the desired temperature path T(t), the value of the stretching exponent β, and the number of terms in the Prony series, N. The RelaxPy program uses the optimized Prony series coefficients of Ref. [22] and solves the set of coupled rate equations in Eq. (21.13). The output from RelaxPy includes the evolution of each partial fictive temperature, T_fi, and the average fictive temperature, T_f. The program also outputs the relaxation time, τ(T, T_f), and the evolution of other macroscopic properties of interest.

21.12 Stress vs. Structural Relaxation

While the relaxation function obeys the stretched exponential form of Section 21.8, based on the Phillips diffusion-trap model the value of the stretching exponent can depend on the type of relaxation being measured. For example, a set of experiments by Potuzak et al. [29] showed that homogeneous industrial silicate glasses exhibit a bifurcation of the stretching exponent into β = {3/5, 3/7} for stress relaxation and structural relaxation processes, respectively. In other words, the same glass composition at the same temperature can exhibit different stretching exponents depending on the type of relaxation under study.

A stress relaxation experiment is conducted using a beam-bending configuration, similar to the beam-bending viscosity measurement discussed in Chapter 16. The difference is that the viscosity measurement is conducted by measuring strain rate under a constant stress, whereas the stress relaxation measurement is conducted by dynamically adjusting the applied stress to maintain a constant strain. At zero time, i.e., when the stress is first applied, the response of the glass beam is purely elastic. In other words, if the stress were to be released immediately after its initial application, the original dimensions of the beam would be recovered. However, as time progresses, the elastic strain gradually converts into plastic strain, such that less stress is required to maintain the same strain. This decrease in the required stress defines the stress relaxation function of the system. In the limit of long time, no stress is required to maintain the target strain, since the beam is permanently deformed as the strain becomes fully plastic (i.e., irrecoverable).


Figure 21.6 shows the results of the stress relaxation experiment for Corning EAGLE XG glass at a fixed temperature but using three different values of strain, corresponding to displacements of 0.075, 0.1, and 0.125 in [29]. The temperature of the experiment, 641 °C, is exactly 100 °C below the glass transition temperature, and the beams were thermally equilibrated at 641 °C before initiating the stress relaxation measurements. Figure 21.6(a) plots the applied stress as a function of time, σ(t), which is continuously adjusted to maintain each of the three predetermined displacement values. When this applied stress is normalized to the initial stress required to achieve the strain in the elastic regime, i.e., σ(t)/σ(0), the resulting curves all collapse onto a master stress relaxation function, as in Figure 21.6(b). The stress relaxation function follows a stretched exponential form with a relaxation time of τ = 1980 s and a stretching exponent of β = 3/5. This value of β agrees with the predicted value from the Phillips diffusion-trap model for relaxation in three dimensions with all short- and long-range relaxation pathways activated (d* = 3).

Figure 21.6 Stress relaxation of Corning EAGLE XG glass at 641 °C using three different strains corresponding to displacements of 0.075, 0.1, and 0.125 in. (a) Applied stress as a function of time to maintain a fixed strain. (b) The normalized stress relaxation function collapses onto a master curve for all three strains. The solid line shows a fit with a stress relaxation time of τ = 1980 s and a stretching exponent of β = 3/5. (After Ref. [29]).

Having shown that the stress relaxation function is independent of the strain in the beam-bending experiment, let us consider a different glass composition, viz., Corning Jade glass [29]. Figure 21.7(a) plots the stress relaxation function of Jade glass at three temperatures: 700, 735, and 758 °C, each below the glass transition temperature of 782 °C. As previously, the glass beams were equilibrated at the measurement temperature before conducting the stress relaxation experiments. The stress relaxation function at all three temperatures again satisfies a stretched exponential fit with a stretching exponent of β = 3/5, identical to the result for EAGLE XG glass and in agreement with the Phillips diffusion-trap theory. As shown in Figure 21.7(b), the stress relaxation time exhibits an Arrhenius dependence on temperature, consistent with the isostructural regime of nonequilibrium dynamics (see Chapter 17).

To compare stress relaxation and structural relaxation behaviors, relaxation of the Jade glass density is measured during separate isothermal holds at the same three temperatures [29].
This measurement corresponds to structural relaxation, which is measured under stress-free conditions and is purely a result of the initial thermodynamic disequilibrium of the rate-cooled glass. The results of the structural relaxation experiments are shown in Figure 21.8(a). In Figure 21.8(b), we show that the structural (density) relaxation functions are accurately fit using a stretched exponential function with a stretching exponent of β = 3/7. According to the diffusion-trap formalism, this exponent is associated with long-range diffusive pathways for relaxation (d* = 3/2). This result is especially interesting since the same glass at the same set of temperatures exhibits β = 3/5 for stress relaxation.
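The contrast between the two exponents can be made concrete. Using the EAGLE XG stress relaxation fit quoted above (τ = 1980 s, β = 3/5) as a reference, the following sketch evaluates Eq. (21.5) and shows that, for equal τ, the structural exponent β = 3/7 produces slower long-time decay than the stress exponent β = 3/5:

```python
import math

def relaxation(t, tau, beta):
    """Stretched exponential relaxation function, Eq. (21.5)."""
    return math.exp(-((t / tau) ** beta))

# EAGLE XG stress relaxation fit quoted in the text: tau = 1980 s, beta = 3/5
tau = 1980.0
assert abs(relaxation(tau, tau, 3.0 / 5.0) - math.exp(-1.0)) < 1e-12  # g(tau) = 1/e

# For equal tau, beta = 3/7 (structural) decays more slowly at long times
# than beta = 3/5 (stress)
t_long = 10.0 * tau
assert relaxation(t_long, tau, 3.0 / 7.0) > relaxation(t_long, tau, 3.0 / 5.0)
```

Note that g(τ) = 1/e for any β, so the stretching exponent controls only the shape of the decay, not its characteristic time.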


Figure 21.7 (a) Stress relaxation function of Corning Jade glass at 700, 735, and 758 °C. The symbols show the experimental data points, and the solid curves are fits using a stretched exponential relaxation function with β = 3/5. (b) Stress relaxation time as a function of normalized inverse temperature, where Tg = 782 °C. (After Ref. [29]).

This bifurcation of stretched exponential relaxation behavior is illustrated in Figure 21.9(a), which shows that all the stress relaxation measurements of the glass collapse onto a single master curve with β = 3/5, whereas the structural relaxation data of the glass collapse onto a master curve with β = 3/7. As shown in Figure 21.9(b), the structural relaxation time is at least an order of magnitude longer than the stress relaxation time at the same temperature. This is expected, since the application of stress leads to a greater thermodynamic driving force for relaxation, including activation of short-range relaxation pathways, which are absent in the case of structural relaxation alone.

Figure 21.8 (a) Relaxation of the density of Corning Jade glass during isothermal holds at 700, 735, and 758 °C and (b) the associated structural relaxation functions. The symbols show the experimental data points, and the solid curves represent fits using a stretched exponential function with β = 3/7 for activation of long-range relaxation pathways only. (After Ref. [29]).


Figure 21.9 (a) Stress and structural relaxation curves for Corning Jade glass, plotted on double-logarithmic axes, where the slope is equal to the stretching exponent, β. Regression analysis yields R² values of 0.996 and 0.986 for the stress and structural relaxation data fit with β = 3/5 and β = 3/7, respectively. (b) Stress and structural relaxation times at each temperature. Structural relaxation is more than an order of magnitude slower than stress relaxation. (Modified from Ref. [29]).

21.13 Maxwell Relation

The Maxwell relation describes the linear relationship between the shear viscosity, η, and the relaxation time, τ, of a liquid or glassy system:

η = G∞ τ,   (21.15)

where G∞ is the instantaneous shear modulus of the system [26]. Since G∞ varies significantly less with temperature than η and τ, Eq. (21.15) implies that the relaxation time scales linearly with viscosity. This linear relationship means that models developed for both equilibrium viscosity (Chapter 16) and nonequilibrium viscosity (Chapter 17) can be applied directly to model the relaxation time of the system.

However, there has been some confusion about whether τ in Eq. (21.15) refers to the stress relaxation time or the structural relaxation time of the system. Figure 21.9(b) shows that structural relaxation occurs on a much longer time scale than stress relaxation, even for systems having the same composition and measured at the same temperature. Recent theoretical and experimental work by Doss et al. [30] has shown that the appropriate relaxation time to use in the Maxwell relation of Eq. (21.15) is, in fact, the stress relaxation time and not the structural relaxation time. The experimental proof of this relation is shown in Figure 21.10, which shows that Eq. (21.15) gives an accurate prediction of the stress relaxation time using measured viscosity and shear modulus versus temperature curves. In contrast, structural relaxation times are always longer than the values of τ predicted by Eq. (21.15).

Figure 21.10 Experimentally measured stress and structural relaxation times for Corning Jade® glass. The solid lines are the temperature-dependent relaxation times calculated using viscosity models and measured temperature-dependent shear modulus data, assuming validity of the Maxwell relation in Eq. (21.15). (a) Since the samples in the stress relaxation experiment are thermally equilibrated, the MYEGA equation of Section 16.8 for equilibrium viscosity is used to calculate stress relaxation times. (b) Since the samples used for structural relaxation experiments are out of thermal equilibrium, the MAP equation of Section 17.4 for nonequilibrium viscosity is used. Comparison of the results shows that the relaxation time in the Maxwell relation refers to the stress relaxation time, not the structural relaxation time. (Modified from Ref. [30]).

21.14 Secondary Relaxation The Maxwell relation of Eq. (21.15) indicates the direct connection between the viscosity and stress relaxation time of a system. This implies that viscous flow and stress relaxation are governed by the same underlying mechanisms, viz., the cooperative flow of atoms. Relaxation that is governed by the same mechanism as viscosity is known as primary relaxation or a-relaxation [31]. Primary relaxation is the predominant relaxation mechanism in most systems. Hence, knowledge of the viscosity curve gives direct information about the kinetics of relaxation. However, another type of relaxation, known as secondary relaxation or b-relaxation, also exists in many systems [32,33]. Secondary relaxation occurs on a much faster time scale than primary relaxation and involves smaller, less cooperative motions of atoms. As such, secondary relaxation is decoupled from the viscous flow of the fluid. For example, in a polymer system, primary relaxation is associated with cooperative motion of the chains, while secondary relaxation is associated with smaller intra-chain motions with a lower degree of cooperativity. An example of secondary relaxation is the room temperature relaxation of Corning Gorilla Glass, depicted in Figure 21.11 [34]. The dimensions of the Gorilla Glass samples were measured over a period of 1.5 years while holding the samples in a lab at constant room temperature. Surprisingly, the glass exhibited nearly 10 ppm in linear shrinkage over this time, despite being stored at a temperature more than 600 C lower than its glass transition temperature. It is also remarkable that the relaxation curve in Figure 21.11 follows stretched exponential decay with a stretching exponent of b ¼ 3/7. 
The fact that the kinetics of room temperature relaxation in Corning Gorilla Glass are stretched exponential in nature indicates that it meets the criteria of having homogeneously distributed excitations and traps, consistent with the assumptions of the diffusion-trap model. The particular value of the exponent (β = 3/7) in Figure 21.11 provides a direct indication of the physical origin of this effect, viz., that the structural relaxation pathways


Figure 21.11 Room temperature relaxation of two 1050 × 1050 × 0.7 mm sheets of Corning Gorilla Glass after being quenched from an initial 30-min heat treatment at 250 °C. The symbols show the individual linear strain measurements over a total experimental time of 1.5 years. Throughout the course of the experiment, the glass sheets were stored in a controlled temperature and humidity environment. The solid blue line shows a fit of this strain data using a stretched exponential relaxation function with an exponent of β = 3/7, which is characteristic of structural relaxation dominated by long-range pathways. A separately fitted simple exponential decay curve (β = 1) is also shown for comparison. (After Ref. [34]).

are dominated by long-range interactions. Chemically, it is believed that this effect arises from the interaction of different mixed alkali species in the glass [34].

The observed relaxation at room temperature is unrelated to the viscous flow of the glass, which is depicted in Figure 21.12. The viscosity was measured using the beam-bending approach as a function of time to capture both the equilibrium and nonequilibrium behaviors. The measured viscosity data are fit using the MYEGA and MAP models in the equilibrium and nonequilibrium regimes, respectively, leading to an estimated room temperature viscosity of the glassy state equal to 10^22.25 Pa·s. When the viscosity is divided by the shear modulus of the glass (29.5 GPa) following the Maxwell relation, an estimated stress relaxation time on the order of 19,000 years is obtained. Hence, the relaxation time associated with viscous flow is much too long to explain the relaxation of Gorilla Glass on a time scale of only 1.5 years. The observed relaxation in Figure 21.11 is, therefore, an example of secondary relaxation, decoupled from viscous relaxation modes.
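The Maxwell estimate quoted above is a quick back-of-the-envelope calculation, reproduced in the sketch below from the extrapolated viscosity and shear modulus values given in the text:

```python
# Maxwell estimate of the stress relaxation time, tau = eta / G, using the
# extrapolated room-temperature viscosity (10^22.25 Pa·s) and shear modulus
# (29.5 GPa) of Corning Gorilla Glass quoted in the text.
eta = 10 ** 22.25            # viscosity, Pa·s
G = 29.5e9                   # shear modulus, Pa
tau = eta / G                # Maxwell stress relaxation time, s

seconds_per_year = 365.25 * 24 * 3600
tau_years = tau / seconds_per_year
print(f"tau = {tau_years:.3g} years")   # on the order of 19,000 years
```

This confirms why viscous (primary) relaxation cannot account for measurable shrinkage over a 1.5-year experiment.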


Figure 21.12 Viscosity of Corning Gorilla Glass in the equilibrium (liquid) and nonequilibrium (glassy) regimes, fit using the MYEGA and MAP models, respectively. Extrapolation of the nonequilibrium glassy viscosity curve to room temperature gives an estimated viscosity of 10^22.25 Pa·s, which corresponds to a relaxation time greater than 19,000 years. Thus, the room temperature relaxation shown in Figure 21.11 is not a product of viscous flow. Rather, it is an example of secondary relaxation, decoupled from viscosity. (After Ref. [34]).

21.15 Summary

The state of a system is defined based on its measurable properties. For nonequilibrium glass and polymer systems, the property values are history dependent. As such, traditional thermodynamic state variables are insufficient to describe the glassy state. In 1931, Tool and Eichlin proposed a description of glass in terms of an equivalent equilibrium liquid at a different temperature. The temperature of this equivalent liquid is known as the fictive temperature. Ideally, the concept of fictive temperature would allow for the nonequilibrium state of a glass to be described exactly. However, in 1956 the Ritland crossover experiment showed that two glasses having the same fictive temperature obtained via different thermal histories relax differently and therefore represent two different macroscopic states. Ritland's result implied that a single fictive temperature is inadequate to describe the nonequilibrium state of a glass. Recent modeling has shown that even a continuous distribution of fictive temperatures is insufficient for


mapping a nonequilibrium glass to any arbitrary mixture of equilibrium states. A more compelling interpretation of fictive temperature is based on assigning partial fictive temperatures to the various relaxation modes operating on different time scales.

Stretched exponential relaxation is a universal feature of homogeneous glassy systems, which can be derived using the diffusion-trap theory. Practical implementations of glass relaxation models typically rely on approximating the stretched exponential function as a Prony series of simple exponential decays, where each term in the Prony series has an associated partial fictive temperature. Each partial fictive temperature relaxes toward the instantaneous value of the physical temperature following a set of coupled first-order differential equations. RelaxPy is an open source program that implements the relaxation equations discussed in this chapter.

Following the Phillips diffusion-trap model, the value of the dimensionless stretching exponent relates to the underlying dimensionality of the relaxation pathways. Hence, the value of the stretching exponent can depend on the nature of the experiment being conducted. For example, the same glass composition at the same temperature can exhibit a bifurcation of the stretching exponent depending on whether stress relaxation or structural relaxation is being measured. The stress relaxation time relates to the viscosity of the glass via the Maxwell relation. Additional secondary relaxation modes can also exist, decoupled from the cooperative viscous flow behavior.
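The idea of partial fictive temperatures relaxing toward the physical temperature can be sketched with a minimal explicit Euler integration during a linear cool. This is an illustrative toy, not the RelaxPy implementation: the weights and relaxation times below are hypothetical, and each τᵢ is held constant even though in a real model it depends on T and Tf through the viscosity.

```python
import numpy as np

# Minimal sketch of partial fictive temperature relaxation during cooling.
# Each mode obeys dTf_i/dt = (T - Tf_i)/tau_i; the average fictive
# temperature is the weighted sum of the modes. Weights and relaxation
# times are hypothetical, and tau_i is held fixed for simplicity.
w = np.array([0.5, 0.3, 0.2])        # Prony weights (sum to 1), hypothetical
tau = np.array([1.0, 10.0, 100.0])   # relaxation times, s, hypothetical

T0, T1 = 900.0, 300.0                # cool from 900 K to 300 K
rate = 1.0                           # cooling rate, K/s
dt = 0.1                             # integration time step, s
Tf = np.full(3, T0)                  # start equilibrated at T0

T = T0
while T > T1:
    T = max(T - rate * dt, T1)
    Tf += (T - Tf) / tau * dt        # explicit Euler update of each mode

Tf_avg = float(w @ Tf)               # average fictive temperature
# Fast modes track T closely; slow modes freeze in at higher temperature,
# so the final glass has T1 < Tf_avg < T0.
```

Cooling faster increases the frozen-in lag of each mode, which is why fictive temperature scales with cooling rate.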

Exercises

(21.1) Download the RelaxPy program from https://github.com/MauroGlass-Group/ and run the example script for glass relaxation.
(a) Plot the resulting temperature, T, and fictive temperature, Tf, as a function of time for this system.
(b) Plot the individual fictive temperature components, Tf,i, as functions of time. What do these fictive temperature components tell us about the spread of relaxation times in the system?
(c) Adjust the cooling rate and plot fictive temperature results for both faster and slower cooling rates. Why does the fictive temperature scale in this fashion with cooling rate?
(d) Adjust the number of terms in the Prony series description of stretched exponential relaxation. How many terms in the Prony series are required to achieve convergence of the average fictive temperature, Tf, of the final glass within 0.01 K?

(21.2) Explain the physical basis for the Ritland crossover effect based on the kinetic interpretation of fictive temperature. Use the concept of partial fictive temperatures to explain your reasoning.

(21.3) Why is it infeasible to make a direct measurement of viscosity at 10^24 Pa·s?

(21.4) In the macroscopic interpretation of fictive temperature, why can different properties have different associated fictive temperatures?

(21.5) Calculate the root mean square error (RMSE) associated with the Prony series approximation of stretched exponential relaxation for β = 3/7 using: (a) N = 8 terms and the optimized Prony series coefficients in Table 21.1; and (b) N = 12 terms and the optimized Prony series coefficients in Table 21.2.

References

[1] G. W. Scherer, Relaxation in Glass and Composites, Krieger (1992).
[2] A. Ellison and I. A. Cornejo, "Glass Substrates for Liquid Crystal Displays," Int. J. Appl. Glass Sci. 1, 87 (2010).
[3] A. K. Varshneya and J. C. Mauro, Fundamentals of Inorganic Glasses, 3rd ed., Elsevier (2019).
[4] A. I. Fu and J. C. Mauro, "Mutual Diffusivity, Network Dilation, and Salt Bath Poisoning Effects in Ion-Exchanged Glass," J. Non-Cryst. Solids 363, 199 (2013).
[5] J. C. Mauro, R. J. Loucks, and P. K. Gupta, "Fictive Temperature and the Glassy State," J. Am. Ceram. Soc. 92, 75 (2009).
[6] A. Q. Tool and C. G. Eichlin, "Variations Caused in the Heating Curves of Glass by Heat Treatment," J. Am. Ceram. Soc. 14, 276 (1931).
[7] A. Q. Tool, "Relation Between Inelastic Deformability and Thermal Expansion of Glass in its Annealing Range," J. Am. Ceram. Soc. 29, 240 (1946).
[8] R. Gardon and O. S. Narayanaswamy, "Stress and Volume Relaxation in Annealing Flat Glass," J. Am. Ceram. Soc. 53, 380 (1970).
[9] H. N. Ritland, "Limitations of the Fictive Temperature Concept," J. Am. Ceram. Soc. 39, 403 (1956).
[10] A. J. Kovacs, "Glass Transition in Amorphous Polymers: A Phenomenological Study," Adv. Polym. Sci. 3, 394 (1963).
[11] O. S. Narayanaswamy, "A Model of Structural Relaxation in Glass," J. Am. Ceram. Soc. 54, 491 (1971).
[12] C. T. Moynihan, "Correlation Between the Width of the Glass Transition Region and the Temperature Dependence of the Viscosity of High-Tg Glasses," J. Am. Ceram. Soc. 76, 1081 (1993).
[13] M. A. DeBolt, A. J. Easteal, P. B. Macedo, and C. T. Moynihan, "Analysis of Structural Relaxation in Glass Using Rate Heating Data," J. Am. Ceram. Soc. 59, 16 (1976).
[14] G. W. Scherer, "Use of the Adam-Gibbs Equation in the Analysis of Structural Relaxation," J. Am. Ceram. Soc. 67, 504 (1984).
[15] S. M. Rekhson, "Memory Effects in Glass Transition," J. Non-Cryst. Solids 84, 68 (1986).
[16] R. Kohlrausch, "Theorie des Elektrischen Rückstandes in der Leidener Flasche," Pogg. Ann. Phys. Chem. 91, 179 (1854).
[17] M. Cardona, R. V. Chamberlin, and W. Marx, "The History of the Stretched Exponential Function," Ann. Phys. 16, 842 (2007).
[18] G. G. Naumis and G. Cocho, "The Tails of Rank-Size Distributions Due to Multiplicative Processes: From Power Laws to Stretched Exponentials and Beta-Like Functions," New J. Phys. 9, 286 (2007).
[19] J. C. Phillips, "Stretched Exponential Relaxation in Molecular and Electronic Glasses," Rep. Prog. Phys. 59, 1133 (1996).
[20] P. Grassberger and I. Procaccia, "The Long Time Properties of Diffusion in a Medium with Static Traps," J. Chem. Phys. 77, 6281 (1982).
[21] J. C. Phillips, "Kohlrausch Explained: The Solution to a Problem that is 150 Years Old," J. Stat. Phys. 77, 945 (1994).
[22] J. C. Mauro and Y. Z. Mauro, "On the Prony Series Representation of Stretched Exponential Relaxation," Phys. A 506, 75 (2018).
[23] G. Williams and D. C. Watts, "Non-Symmetrical Dielectric Relaxation Behaviour Arising from a Simple Empirical Decay Function," Trans. Faraday Soc. 66, 80 (1970).
[24] J. C. Mauro, D. C. Allan, and M. Potuzak, "Nonequilibrium Viscosity of Glass," Phys. Rev. B 80, 094204 (2009).
[25] X. Guo, M. M. Smedskjaer, and J. C. Mauro, "Linking Equilibrium and Nonequilibrium Dynamics in Glass-Forming Systems," J. Phys. Chem. B 120(12), 3226 (2016).
[26] Q. Zheng and J. C. Mauro, "Viscosity of Glass-Forming Systems," J. Am. Ceram. Soc. 100, 6 (2017).
[27] X. Guo, J. C. Mauro, D. C. Allan, and M. M. Smedskjaer, "Predictive Model for the Composition Dependence of Glassy Dynamics," J. Am. Ceram. Soc. 101, 1169 (2018).
[28] C. J. Wilkinson, Y. Z. Mauro, and J. C. Mauro, "RelaxPy: Python Code for Modeling of Glass Relaxation Behavior," SoftwareX 7, 255 (2018).
[29] M. Potuzak, R. C. Welch, and J. C. Mauro, "Topological Origin of Stretched Exponential Relaxation in Glass," J. Chem. Phys. 135, 214502 (2011).
[30] K. Doss, C. J. Wilkinson, Y. Yang, K.-H. Lee, L. Huang, and J. C. Mauro, "Maxwell Relaxation Time for Nonexponential α-Relaxation Phenomena in Glassy Systems," J. Am. Ceram. Soc. 103, 3590 (2020).
[31] R. Böhmer, R. V. Chamberlin, G. Diezemann, B. Geil, A. Heuer, G. Hinze, S. C. Kuebler, R. Richert, B. Schiener, H. Sillescu, and H. W. Spiess, "Nature of the Non-Exponential Primary Relaxation in Structural Glass-Formers Probed by Dynamically Selective Experiments," J. Non-Cryst. Solids 235, 1 (1998).
[32] K. L. Ngai and M. Paluch, "Classification of Secondary Relaxation in Glass-Formers Based on Dynamic Properties," J. Chem. Phys. 120, 857 (2004).
[33] J. C. Mauro and M. M. Smedskjaer, "Minimalist Landscape Model of Glass Relaxation," Phys. A 391, 3446 (2012).
[34] R. C. Welch, J. R. Smith, M. Potuzak, X. Guo, B. F. Bowden, T. J. Kiczenski, D. C. Allan, E. A. King, A. J. Ellison, and J. C. Mauro, "Dynamics of Glass Relaxation at Room Temperature," Phys. Rev. Lett. 110, 265901 (2013).


CHAPTER 22

Molecular Dynamics

22.1 Multiscale Materials Modeling

A comprehensive understanding of materials can only be obtained through a combination of experimental and theoretical studies. Modeling and simulation offer the ability to obtain detailed information on materials structure and kinetic processes. Modeling can be performed at a variety of length and time scales, from the electronic and atomic scales through the mesoscale and macroscale. Materials modeling includes any type of modeling or simulation technique that can predict structure or properties from the underlying chemistry and thermodynamic state of the material. Various models incorporate different levels of physics, ranging from quantum mechanical models that capture detailed electronic band structures to macroscale models that treat the material as a continuum. Figure 22.1 plots a variety of materials modeling techniques, from purely empirical models to quantum mechanical techniques that offer detailed modeling of electronic structure. While each modeling approach is useful for gaining insights into a material, a combination of techniques at different scales typically provides the most comprehensive understanding of a material system, especially when combined with experimental insights.

In this chapter, we focus on molecular dynamics simulations, which can be performed either at the quantum mechanical level or, more commonly, at the classical level. Classical molecular dynamics simulations are performed using empirical interatomic potentials that are optimized to reproduce either quantum mechanical energies or structures, or to reproduce experimentally known structural or property data. We first review quantum mechanical techniques and then focus primarily on classical molecular dynamics. We will address key considerations for performing molecular dynamics simulations, including the choice of ensemble, boundary conditions, integration method, and property calculations.
Finally, we will discuss molecular dynamics simulations using reactive force fields.

Materials Kinetics, ISBN 978-0-12-823907-0
https://doi.org/10.1016/B978-0-12-823907-0.00014-5

© 2021 Elsevier Inc. All rights reserved.



Figure 22.1 Multiscale modeling techniques incorporating different levels of physics from purely mathematical models (left) to approaches that incorporate detailed electronic level physics (right). To gain the most comprehensive understanding of a material, a combination of modeling techniques can be employed at different scales, together with experimental results. (Modified from Mauro [1]).

22.2 Quantum Mechanical Techniques

Molecular dynamics simulations performed at the electronic level are known as ab initio molecular dynamics (AIMD), where the interatomic forces are computed through quantum mechanical energy calculations, e.g., using density functional theory. Quantum mechanical simulations offer a highly detailed description of the electronic band structure of a material. However, they are also the most computationally expensive of all materials modeling techniques. Hence, the use of quantum mechanical approaches is typically reserved for addressing problems that require such a detailed description. Quantum mechanical simulations can also be used to generate data for fitting the interatomic potentials used in classical molecular dynamics simulations. Such data are also essential for optimizing reactive force fields.

Quantum mechanical modeling involves solving the Schrödinger equation for a large system of electrons and nuclei. Since the many-body Schrödinger equation is not generally solvable through analytical means, numerical solutions are required. Numerical solutions of the Schrödinger equation are based on the variational principle, which involves constructing a trial wavefunction with adjustable parameters. The energy of the system is then minimized with respect to changes in these parameters, which can provide an accurate estimate of the ground state energy of a system and its corresponding structure.
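The variational principle can be illustrated with a deliberately simple one-particle toy problem rather than a many-electron system. For a 1D harmonic oscillator (in units where ℏ = m = ω = 1) with a Gaussian trial wavefunction ψ(x) = exp(−αx²), the energy expectation value has the closed form E(α) = α/2 + 1/(8α), which is minimized at α = 1/2, recovering the exact ground state energy of 1/2 (the Gaussian happens to be the exact ground state here). A sketch of scanning the variational parameter:

```python
import numpy as np

# Variational principle, toy example: Gaussian trial wavefunction
# psi(x) = exp(-alpha x^2) for the 1D harmonic oscillator (hbar = m = omega = 1).
# The energy expectation value reduces to E(alpha) = alpha/2 + 1/(8 alpha).
def energy(alpha):
    return alpha / 2 + 1 / (8 * alpha)

alphas = np.linspace(0.05, 2.0, 2000)   # scan the variational parameter
E = energy(alphas)
best = alphas[np.argmin(E)]             # minimizing alpha

print(best, E.min())                    # both approach 0.5
```

Real electronic structure codes do the same thing in principle, but minimize over many parameters of a many-electron trial wavefunction.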


The most fundamental assumption used in quantum mechanical modeling is the Born-Oppenheimer approximation, which separates the Hamiltonian of the fast-moving electrons from that of the slower-moving nuclei. With the Born-Oppenheimer approximation, the electrons are considered to be interacting in a field of fixed nuclei. However, the Born-Oppenheimer approximation itself is insufficient for obtaining a general solution of the Schrödinger equation for many-body systems. Hartree-Fock theory further simplifies the problem by assuming that electrons interact with each other in terms of an average field, rather than considering every combination of individual electrons. This allows for the introduction of a single-electron Fock operator. When combined with the variational principle, this enables a more efficient numerical solution of the Schrödinger equation, especially for large systems [2-6].

Rayleigh-Schrödinger perturbation theory can be used to introduce corrections to improve the accuracy of the Hartree-Fock calculation, building on the Rayleigh-Schrödinger formalism to correct for the true electron-electron interactions in a system. Often, Møller-Plesset perturbation theory is used to calculate a corrected electronic energy of the many-body system [5]. Second- and fourth-order Møller-Plesset perturbation theories are typically used, denoted as MP2 and MP4, respectively. An example of quantum mechanical simulations using MP4 theory is shown in Figure 22.2, where the calculated energies are fit to the empirical Morse function of Eq. (22.3) to describe the two-body interactions for use in classical molecular dynamics simulations.

The high computational cost of quantum mechanical approaches limits the size of systems that can be considered. The number of electrons that are explicitly modeled in the system can be reduced by using pseudopotential theory, which treats the inner core electrons of larger elements in terms of an effective potential.
With pseudopotential theory, only the outer valence electrons are modeled explicitly. This reduces the total number of electrons in the system so that calculations can be performed more efficiently.

The most computationally efficient approach for quantum-level modeling of materials is density functional theory (DFT) [5], not to be confused with the classical theory of the same name for modeling of crystal nucleation, discussed in Chapter 15. Instead of solving for each electron explicitly, DFT solves for the electron density. This approach is based on a proof that the ground state energy of a many-body system is uniquely determined by its electron density. The electron density itself is a function of only three spatial coordinates. With DFT, the many-body


Figure 22.2 Two-body interaction potentials between pairs of atoms. The quantum mechanical energies are calculated using MP4 theory and shown by the discrete points. The MP4 results are fit with the empirical Morse function of Eq. (22.3) for use in classical molecular dynamics simulations. (Calculations by Mauro [7]).

problem of N electrons with 3N independent coordinates is simplified to just three dimensions. While not as accurate as other quantum mechanical techniques that consider each electron explicitly, DFT enables much larger systems to be simulated, including materials systems with several hundred atoms. Owing to its improved computational efficiency, DFT is currently the most widely used computational approach for performing electronic-level modeling of materials.

When quantum-level calculations of energy are used to compute forces and follow the kinetics of a system, the approach is known as ab initio molecular dynamics. The same procedures for integrating the equations of motion are used as described below for classical molecular dynamics. However, energy calculations are performed using quantum mechanical methods rather than classical potentials.

22.3 Principles of Molecular Dynamics

Classical atomistic models consider atoms rather than electrons as the most basic particle in the simulation. As a result, classical modeling techniques allow for simulation of much larger systems compared to what can be achieved through quantum mechanics. Classical atomistic simulations are


based on interatomic potentials, i.e., continuous functions that describe the interaction energy among atoms in the system. Interatomic potentials can be derived from quantum mechanical energy calculations or deduced empirically to reproduce known experimental data. Derivation of interatomic potentials from quantum mechanical calculations leads to greater transferability of potentials and allows for prediction of materials structure and properties without prior experimental data.

Molecular dynamics (MD) is the most commonly used atomistic simulation technique. Molecular dynamics simulations consider an ensemble of atoms contained within a simulation cell. Each atom is specified by its element type and by its position and momentum vectors. Required inputs include the choice of boundary conditions for the simulation cell and selection of an appropriate ensemble. Boundary conditions can be either periodic to simulate a bulk system or non-periodic to simulate a free surface. When using periodic boundary conditions, it is important for the simulation cell to be large enough to avoid atoms interacting with periodic images of themselves through the boundary.

Molecular dynamics simulations are performed by integrating Newton's second law of motion:

$$\mathbf{F}_i = m_i \mathbf{a}_i = m_i \frac{d^2 \mathbf{r}_i}{dt^2}, \tag{22.1}$$

where F_i is the force vector acting on atom i, m_i is its mass, r_i is its position vector, a_i is its acceleration vector, and t is time. Each atom also has an associated velocity vector, v_i, or momentum vector, p_i = m_i v_i. The evolution of the position and velocity/momentum vectors of the atoms is calculated over time.

Molecular dynamics simulations require a set of input parameters, including:
• Composition of the material, i.e., the number of atoms of each element.
• Starting configuration of atoms, i.e., the initial structure of the material.
• Choice of interatomic potentials.
• Choice of ensemble.
• Choice of boundary conditions.
• Time step and duration of simulation.

The output from molecular dynamics simulations includes:
• Atomic-level description of material structure, enabling the calculation of pair distribution functions, bond angle distributions, coordination numbers, etc., as well as visualization of the structure as a function of time.


• Property calculations, such as mechanical, thermal, and kinetic properties.
• Time-dependent property calculations.

In the subsequent sections, we will review each of these features of molecular dynamics simulations in greater detail.

22.4 Interatomic Potentials

The total potential energy of a system, U, can be written in terms of a cluster expansion:

$$U = \sum_{i=1}^{N} U_1 + \sum_{i=1}^{N} \sum_{\substack{j=1 \\ j \neq i}}^{N} U_2\!\left(r_{ij}\right) + \sum_{i=1}^{N} \sum_{\substack{j=1 \\ j \neq i}}^{N} \sum_{\substack{k=1 \\ k \neq i,\, k \neq j}}^{N} U_3\!\left(r_{ij}, r_{jk}, \theta_{ijk}\right) + \sum_{i=1}^{N} \sum_{\substack{j=1 \\ j \neq i}}^{N} \sum_{\substack{k=1 \\ k \neq i,\, k \neq j}}^{N} \sum_{\substack{l=1 \\ l \neq i,\, l \neq j,\, l \neq k}}^{N} U_4\!\left(r_{ij}, r_{jk}, r_{kl}, \theta_{ijk}, \theta_{jkl}, \phi_{ijkl}\right) + \cdots, \tag{22.2}$$

where U_n refers to the nth-order interaction potential, r_ij is the separation distance between atoms i and j, θ_ijk is the subtended bond angle, and φ_ijkl is the torsion angle. In principle, the series of interactions terminates only with the U_N term, where N is the total number of atoms in the system. However, since the magnitude of the interactions typically decreases with increasing n, and due to considerations of computational efficiency, it is common to truncate the series after the second- or third-order terms.

The single-atom U_1 term in Eq. (22.2) represents the energy of a single atom, i.e., due to interactions between the subatomic particles within the atom. This term contributes by far the most energy to the system, often several orders of magnitude larger than all other terms combined. For a simulation with a constant number of atoms, the contribution to the total system energy from U_1 remains constant. As a result, for closed systems we are concerned only with the interatomic potentials among atoms, i.e., U_n where n > 1. The U_1 term itself can be neglected in these simulations.

There are several standard functions for describing two-body interatomic potentials, including the Morse potential:

$$U_2(r_{ij}) = D_0 \left[ \left( 1 - e^{-a(r_{ij} - r_0)} \right)^2 - 1 \right], \tag{22.3}$$


where D_0 is the depth of the potential energy well, r_0 is the equilibrium separation distance, and a is a shape parameter. The Morse potential is used to describe covalent bonding between pairs of atoms. As shown in Figure 22.2, the Morse potential provides an excellent fit to covalent two-body interaction energies calculated from quantum mechanical simulations.

Another common model to describe the interactions between neutral atoms is the Lennard-Jones potential:

$$U_2(r_{ij}) = 4\varepsilon \left[ \left( \frac{\sigma}{r_{ij}} \right)^{12} - \left( \frac{\sigma}{r_{ij}} \right)^{6} \right], \tag{22.4}$$

where ε is the depth of the potential energy well and σ is the interatomic separation at which the potential energy becomes zero. The first term on the right-hand side of Eq. (22.4) gives the repulsive contribution at short separation distances, arising from Pauli repulsion. The second term gives the attraction due to dispersion forces.

For charged ions, an electrostatic potential must be included, which arises from the Coulomb forces between charged species. This electrostatic contribution to the two-body potential is:

$$U_2(r_{ij}) = \frac{q_i q_j}{4\pi\varepsilon_0 r_{ij}}, \tag{22.5}$$

where q_i and q_j are the charges on atoms i and j, respectively, and ε_0 is the permittivity of free space. The electrostatic potential is repulsive if q_i and q_j have the same sign and attractive if the signs are different. Note that the electrostatic potential is much longer range compared to the potentials in Eqs. (22.3) and (22.4), owing to its 1/r_ij dependence. Another repulsive term must be added to the potential in Eq. (22.5) to account for Pauli repulsion at short interatomic distances.

Eqs. (22.3) to (22.5) are just a few examples of commonly used interatomic potentials. Many more functional forms exist to describe the pairwise interactions between atoms, which can be tailored to systems of specific interest.
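The analytic properties of these pair potentials are easy to verify numerically. The sketch below, using arbitrary illustrative parameters, checks that the Morse potential of Eq. (22.3) has a minimum of depth −D₀ at r₀, and that the Lennard-Jones potential of Eq. (22.4) has a minimum of depth −ε at r = 2^(1/6) σ:

```python
import numpy as np

# Sanity checks on the two-body potentials of Eqs. (22.3) and (22.4),
# with arbitrary illustrative parameters.
def morse(r, D0=1.0, a=2.0, r0=1.5):
    return D0 * ((1 - np.exp(-a * (r - r0))) ** 2 - 1)

def lennard_jones(r, eps=1.0, sigma=1.0):
    return 4 * eps * ((sigma / r) ** 12 - (sigma / r) ** 6)

# Morse: minimum of depth -D0 at the equilibrium separation r0.
assert abs(morse(1.5) - (-1.0)) < 1e-12

# Lennard-Jones: minimum of depth -eps at r = 2^(1/6) sigma.
r_min = 2 ** (1 / 6)
assert abs(lennard_jones(r_min) - (-1.0)) < 1e-12

# Both potentials are strongly repulsive at short range and vanish at
# large separation.
assert lennard_jones(0.8) > 0
```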


22.5 Ensembles

Another important input for atomistic simulations is the choice of ensemble, which specifies which thermodynamic variables are being held constant in the simulation. In its native form, molecular dynamics assumes a microcanonical ensemble having a constant number of atoms (N), constant volume (V), and constant internal energy (E). The microcanonical ensemble is also known as the NVE ensemble, indicating the three thermodynamic parameters that are held constant during the simulation.

The canonical ensemble is more commonly utilized in molecular dynamics simulations. Here, the temperature (T) is held constant and the kinetic energy of the system is rescaled using a thermostat to maintain the required temperature of the system. The canonical ensemble is also known as the NVT ensemble, since N, V, and T are all held constant. Another commonly used ensemble is the isothermal-isobaric (or NPT) ensemble, where both pressure (P) and temperature are held fixed. Simulations in the NPT ensemble require use of both a thermostat to maintain constant temperature (by rescaling atomic velocities) and a barostat to maintain constant pressure (by adjusting the volume of the simulation cell).

The appropriate choice of ensemble depends on the nature of the experiment and the properties that need to be calculated. For example, the volumetric thermal expansion coefficient, α, is defined under constant N and P:

$$\alpha = \frac{1}{V} \left( \frac{\partial V}{\partial T} \right)_{N,P}. \tag{22.6}$$

Hence, the NPT ensemble would be the natural choice for calculating α, i.e., by determining the equilibrium V under constant P for simulations at different temperatures. Note that thermodynamic properties such as α can also be calculated through fluctuation theory, which is provided in Eq. (22.25) and will be derived in detail in Chapter 25. The advantage of fluctuation theory is that properties such as α can be calculated from a single simulation at constant temperature.
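In practice, Eq. (22.6) can be approximated by a finite difference between the equilibrium volumes of two NPT simulations at nearby temperatures. The sketch below uses made-up volume values purely for illustration:

```python
# Finite-difference estimate of the volumetric thermal expansion
# coefficient from Eq. (22.6), using hypothetical average cell volumes
# from two NPT simulations at nearby temperatures.
V_300, V_320 = 12500.0, 12530.0    # average cell volumes (Å^3), hypothetical
T_lo, T_hi = 300.0, 320.0          # temperatures, K

V_mid = 0.5 * (V_300 + V_320)      # evaluate (1/V) at the midpoint
alpha = (V_320 - V_300) / (T_hi - T_lo) / V_mid   # 1/K

print(f"alpha = {alpha:.2e} 1/K")  # ~1e-4 1/K for this made-up data
```

The temperature spacing should be small enough that V(T) is approximately linear over the interval, but large enough that the volume difference exceeds the statistical noise of the NPT averages.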

22.6 Integrating the Equations of Motion

Molecular dynamics simulations entail calculating the evolution of the atomic structure of a system by integrating Newton's second law of motion, Eq. (22.1). The result of this integration is a trajectory indicating how the


positions and velocities of each particle in the system vary over time. At each time step in the simulation, the force vector (F) acting on a particle is calculated from the gradient of the potential energy:

$$\mathbf{F} = -\nabla U. \tag{22.7}$$

From Eq. (22.1), the acceleration of the particle is then calculated by dividing the force vector by its mass:

$$\mathbf{a} = \frac{\mathbf{F}}{m}. \tag{22.8}$$

Integration of the equations of motion is performed through a finite difference algorithm based on Taylor series approximations for the position (r), velocity (v), and/or acceleration (a) vectors:

$$\mathbf{r}(t + \delta t) = \mathbf{r}(t) + \delta t\, \mathbf{v}(t) + \frac{1}{2} \delta t^2\, \mathbf{a}(t) + \frac{1}{6} \delta t^3\, \mathbf{b}(t) + \cdots, \tag{22.9}$$

$$\mathbf{v}(t + \delta t) = \mathbf{v}(t) + \delta t\, \mathbf{a}(t) + \frac{1}{2} \delta t^2\, \mathbf{b}(t) + \frac{1}{6} \delta t^3\, \mathbf{c}(t) + \cdots, \tag{22.10}$$

$$\mathbf{a}(t + \delta t) = \mathbf{a}(t) + \delta t\, \mathbf{b}(t) + \frac{1}{2} \delta t^2\, \mathbf{c}(t) + \cdots, \tag{22.11}$$

where t is the current time and t + δt is the next step forward in time. The vector b is the derivative of the acceleration with respect to time (called the "jerk") and c is the second derivative of the acceleration with respect to time (the "snap"). For the breakfast cereal fans among our readers, we note that the third and fourth derivatives of the acceleration are known as the "crackle" and "pop," respectively.

The Verlet algorithm is a widely used method for integrating the equations of motion based on the Taylor series expansion of the position in Eq. (22.9). Truncating Eq. (22.9) after the second-order term, the position at the next time step is:

$$\mathbf{r}(t + \delta t) = \mathbf{r}(t) + \delta t\, \mathbf{v}(t) + \frac{1}{2} \delta t^2\, \mathbf{a}(t). \tag{22.12}$$

Writing a Taylor series expansion for the position at the previous time step, r(t − δt), we have:

$$\mathbf{r}(t - \delta t) = \mathbf{r}(t) - \delta t\, \mathbf{v}(t) + \frac{1}{2} \delta t^2\, \mathbf{a}(t). \tag{22.13}$$


The velocity terms can be eliminated from the equations by adding Eq. (22.12) and Eq. (22.13):

$$\mathbf{r}(t + \delta t) = 2\mathbf{r}(t) - \mathbf{r}(t - \delta t) + \delta t^2\, \mathbf{a}(t). \tag{22.14}$$

Eq. (22.14) thus gives the position at the new time step, r(t + δt), in terms of the current position, r(t), the position at the previous time step, r(t − δt), and the acceleration vector calculated from Eq. (22.8). With the Verlet algorithm, there is no explicit calculation of the particle velocity vectors.

In molecular dynamics simulations conducted under isothermal conditions, it is necessary to calculate the particle velocities, since the velocities are periodically rescaled for the system to maintain the target temperature (see Section 22.8). To address this problem, another approach for integrating the equations of motion is the velocity Verlet algorithm, which computes positions, velocities, and accelerations. The velocity Verlet algorithm is based on the Taylor series approximation for r(t + δt) truncated after the second-order term:

$$\mathbf{r}(t + \delta t) = \mathbf{r}(t) + \delta t\, \mathbf{v}(t) + \frac{1}{2} \delta t^2\, \mathbf{a}(t). \tag{22.15}$$

The velocity at the next time step, v(t + δt), is calculated from the velocity at the current time step, v(t), and the average acceleration at the current and next time steps:

\[\mathbf{v}(t + \delta t) = \mathbf{v}(t) + \frac{1}{2}\delta t\left[\mathbf{a}(t) + \mathbf{a}(t + \delta t)\right].\]

(22.16)

Hence, the velocity Verlet algorithm requires calculation of the acceleration using Eq. (22.8) at both the current time step and the subsequent time step. Implementation of the velocity Verlet algorithm proceeds by the following steps:

1. Calculate the new position vector, r(t + δt), at the new time step according to Eq. (22.15).
2. Calculate the velocity vector at the half-time step following:

\[\mathbf{v}\left(t + \frac{1}{2}\delta t\right) = \mathbf{v}(t) + \frac{1}{2}\delta t\,\mathbf{a}(t).\]

(22.17)

3. Calculate the acceleration vector at the new time step, a(t + δt), using the force vector at the new position calculated in step #1.


4. Calculate the final velocity vector by:

\[\mathbf{v}(t + \delta t) = \mathbf{v}\left(t + \frac{1}{2}\delta t\right) + \frac{1}{2}\delta t\,\mathbf{a}(t + \delta t).\]

(22.18)
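The four-step procedure above can be sketched in code. The following is a minimal illustration (not from the book): it advances a one-dimensional harmonic oscillator with the velocity Verlet algorithm, where the force law and all parameter values are chosen purely for demonstration.

```python
import numpy as np

def velocity_verlet_step(r, v, a, accel_func, dt):
    """Advance positions and velocities by one velocity Verlet step.

    accel_func(r) returns the acceleration at configuration r.
    """
    # Step 1: new positions, Eq. (22.15)
    r_new = r + dt * v + 0.5 * dt**2 * a
    # Step 2: velocities at the half time step, Eq. (22.17)
    v_half = v + 0.5 * dt * a
    # Step 3: accelerations at the new positions
    a_new = accel_func(r_new)
    # Step 4: final velocities, Eq. (22.18)
    v_new = v_half + 0.5 * dt * a_new
    return r_new, v_new, a_new

# Hypothetical test system: harmonic oscillator with a(r) = -omega^2 * r
omega = 1.0
accel = lambda r: -omega**2 * r

r, v = np.array([1.0]), np.array([0.0])
a = accel(r)
dt = 0.01
energy0 = 0.5 * v[0]**2 + 0.5 * omega**2 * r[0]**2
for _ in range(1000):
    r, v, a = velocity_verlet_step(r, v, a, accel, dt)
energy = 0.5 * v[0]**2 + 0.5 * omega**2 * r[0]**2
```

For this system the total energy after 1000 steps remains very close to its initial value, illustrating the good energy conservation and time-reversibility of the algorithm.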

Thus, the velocity Verlet algorithm calculates the evolution of both particle positions and velocities without compromising the precision of either.

Integration algorithms used to compute the time evolution of the system should satisfy the following criteria:
• The integration algorithm should be fast to reduce the computational time and enable longer time scales to be achieved in the simulation. Many modern algorithms also make use of parallel processing to reduce the time required to complete a simulation.
• The integration algorithm must also be accurate to ensure that the results are trustworthy. Any inaccuracies in the algorithm will compound at each time step, leading to greater error.
• The algorithm should require minimal memory to make efficient use of the available computer memory.
• The integration algorithm must conserve energy and momentum, consistent with Newtonian mechanics.
• The algorithm must be time-reversible, also to maintain consistency with Newton's laws of motion.
• The integration algorithm should permit a long time step to be used, to assist in achieving longer time scales in the molecular dynamics simulation.

If the time step of the simulation, δt, is too small, then computational time will be wasted by calculating an unnecessarily large number of time steps, and the simulation will be unable to achieve long time scales. In contrast, if the time step is too large, then the integration algorithm will become unstable, leading to violations of energy and momentum conservation. It is therefore important to choose a time step that is small enough to ensure convergence of the results, but no smaller than necessary. A typical time step is on the order of 1 fs (\(= 10^{-15}\) s).

One strategy for improving the efficiency of molecular dynamics simulations is to define a neighbor list for each atom, i.e., a list of atoms that are within a certain cutoff radius.
The cutoff radius for the neighbor list is determined by the distance at which the interatomic potentials become effectively zero. Potentials that decay to zero at lower values of r_ij can enable a shorter cutoff for the neighbor list. When calculating the force


acting on an atom, only the interactions with atoms in its neighbor list are included. This significantly reduces the number of calculations at each time step of the simulation, greatly improving the efficiency of the integration algorithm.
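As a sketch of the idea, a brute-force neighbor-list construction might look as follows. All names are hypothetical, and the O(N²) all-pairs build is for illustration only; production codes accelerate this step with cell lists.

```python
import numpy as np

def build_neighbor_list(positions, cutoff):
    """Return, for each atom, the indices of the atoms within the cutoff radius.

    positions: (N, 3) array of atomic coordinates.
    cutoff: neighbor-list cutoff radius (same units as positions).
    """
    n = len(positions)
    # Pairwise separation vectors and distances (O(N^2), for illustration)
    deltas = positions[:, None, :] - positions[None, :, :]
    dists = np.linalg.norm(deltas, axis=-1)
    neighbors = []
    for i in range(n):
        mask = dists[i] < cutoff
        mask[i] = False  # an atom is not its own neighbor
        neighbors.append(np.flatnonzero(mask))
    return neighbors

# Three atoms on a line: atoms 0 and 1 are close; atom 2 is beyond the cutoff
pos = np.array([[0.0, 0.0, 0.0],
                [1.0, 0.0, 0.0],
                [5.0, 0.0, 0.0]])
nlist = build_neighbor_list(pos, cutoff=2.5)
```

Only the pairs appearing in the list then need to be visited in the force loop at each time step.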

22.7 Performing Molecular Dynamics Simulations

With this background in place, we can now describe the specific steps for performing a molecular dynamics simulation. First, the simulation must be initialized by defining the necessary parameters, such as the choice of interatomic potentials, ensemble, size of the simulation cell, and boundary conditions. An initial configuration of particles within the simulation cell must also be defined. The initial configuration of atoms can be specified based on experimental structural data, from a known theoretical model, or even randomly. The initial velocities are obtained by drawing randomly from a Maxwell-Boltzmann distribution at the target temperature of interest:

\[p(v_{i,x}) = \left(\frac{m_i}{2\pi k T}\right)^{1/2} \exp\left(-\frac{1}{2}\frac{m_i v_{i,x}^2}{k T}\right), \qquad (22.19)\]

where p(v_{i,x}) is the probability density function of the velocity of particle i along dimension x, m_i is the mass of the particle, k is Boltzmann's constant, and T is the initial temperature of the simulation. The components of the velocity vectors are independently drawn from the Maxwell-Boltzmann distribution in Eq. (22.19) along each of the x, y, and z directions for each atom in the simulation cell.

Another important input for the molecular dynamics simulation is the choice of boundary conditions along the planes of the simulation cell, i.e., along each of the xy, xz, and yz planes. For example, the boundaries can provide a rigid wall to confine the atoms. In materials science, it is more common to use periodic boundary conditions, where an image of the simulation cell is repeated across the boundary. Use of periodic boundary conditions enables simulation of a bulk material using a finite simulation cell. With periodic boundary conditions, the atoms can interact with each other through the periodic boundary.
When calculating the separation distance between atoms, the smallest value of r_ij is used, accounting for the distance between atoms both within the simulation cell and considering all periodic images of the atoms through the boundaries. When integrating the equations of motion, if the trajectory of an atom passes through a periodic boundary, it exits the simulation cell through that boundary and then


re-enters through the opposite plane, as depicted in Figure 22.3. Periodic boundary conditions are a convenient way to simulate a bulk system while minimizing the number of atoms explicitly considered in the simulation. However, care must be taken to ensure that the periodic boundaries do not introduce artifacts into the simulation. For example, if a non-crystalline system is being simulated, the repetition of the simulation cell creates an artificial periodicity that does not actually occur in the physical system. In order to avoid such artifacts, the simulation cell must be sufficiently large that the atoms do not interact with periodic images of themselves and that the periodicity does not adversely affect relevant property calculations.

Another commonly used boundary condition in computational materials science is the free boundary, i.e., no boundary at all. With a free boundary, there is no constraint on the atomic motion along that plane. Free boundaries are particularly useful for simulating the free surfaces of a material. Note that a combination of boundary conditions can be used along different directions, as appropriate. For example, a thin slab of a material could be simulated using periodic boundary conditions along two directions, with a free boundary in the third direction.
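The two bookkeeping operations implied by periodic boundaries, re-entering atoms through the opposite face and taking the nearest periodic image when computing separations, can be sketched for a cubic cell of edge length L (all names and values are illustrative):

```python
import numpy as np

L = 10.0  # edge length of a cubic simulation cell (illustrative value)

def wrap(position, box=L):
    """Map a position back into the cell: an atom leaving through one face
    re-enters through the opposite face."""
    return position % box

def minimum_image(r_i, r_j, box=L):
    """Separation vector from atom j to atom i using the smallest r_ij over
    all periodic images (the minimum image convention)."""
    d = r_i - r_j
    return d - box * np.round(d / box)

# An atom drifting past a boundary re-enters on the opposite side
p = wrap(np.array([10.5, -0.2, 3.0]))
# Two atoms near opposite faces are actually close through the boundary
d = minimum_image(np.array([0.5, 0.0, 0.0]), np.array([9.5, 0.0, 0.0]))
```

Here the wrapped position becomes (0.5, 9.8, 3.0), and the minimum-image separation of the two atoms is one length unit rather than nine.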

Figure 22.3 Schematic diagram of a simulation cell with periodic boundary conditions. (Image by Brittney Hauke (Penn State)).


After initialization, integration of the equations of motion proceeds as described in Section 22.6. The evolution of the properties of the system can be calculated along the trajectory of the simulation. Some properties, such as the internal energy or the volume, are straightforward parameters of the system. Other properties can be calculated through statistical averaging over the time and space of the simulation. For example, the diffusivity, D, of a given species in the simulation can be calculated from the Einstein diffusion equation introduced in Section 7.7. For a three-dimensional simulation cell, the diffusivity is calculated from the mean squared displacement of the atoms by:

\[3D = \lim_{t \to \infty} \frac{\left\langle \left|\mathbf{r}(t) - \mathbf{r}(0)\right|^2 \right\rangle}{2t}. \qquad (22.20)\]

As will be shown in Chapter 25, an equivalent calculation can be made from the autocorrelation function of the particle velocities using the Green-Kubo formula:

\[3D = \lim_{t \to \infty} \int_0^t \left\langle \mathbf{v}(s) \cdot \mathbf{v}(0) \right\rangle \left(1 - \frac{s}{t}\right) ds.\]

(22.21)
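As a sketch of how Eq. (22.20) is applied in practice, the estimator below extracts D from the mean squared displacement of a stored trajectory. It is exercised on a synthetic Brownian trajectory whose diffusivity is known in advance; all names and parameter values are illustrative.

```python
import numpy as np

def diffusivity_from_msd(trajectory, dt):
    """Estimate D from Eq. (22.20): 3D = lim <|r(t) - r(0)|^2> / (2t).

    trajectory: (n_steps, n_atoms, 3) array of unwrapped positions.
    The long-time limit is approximated by the final time of the trajectory.
    """
    disp = trajectory[-1] - trajectory[0]
    msd = np.mean(np.sum(disp**2, axis=-1))  # average over atoms
    t = (len(trajectory) - 1) * dt
    return msd / (6.0 * t)  # D = MSD / (6t) in three dimensions

# Synthetic Brownian motion with known D: step variance = 2*D*dt per axis
rng = np.random.default_rng(1)
D_true, dt, n_steps, n_atoms = 0.5, 0.01, 2000, 500
steps = rng.normal(0.0, np.sqrt(2 * D_true * dt), size=(n_steps, n_atoms, 3))
traj = np.cumsum(steps, axis=0)
D_est = diffusivity_from_msd(traj, dt)
```

For this synthetic data, the estimate recovers the input diffusivity to within a few percent; in a real simulation one would instead fit the linear regime of the MSD versus time.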

The subject of autocorrelation functions and fluctuation theory will be covered in detail in Chapter 24.

The temperature of the system can be calculated at any instant in time from the kinetic energy, K, of the particles, viz.,

\[K = \sum_{i=1}^{N} \frac{|\mathbf{p}_i|^2}{2 m_i} = \frac{kT}{2}\left(3N - N_c\right),\]

(22.22)

where 3N is the number of translational degrees of freedom of the particles and N_c is the number of constraints corresponding to translation or rotation of the entire system. For a three-dimensional system, typically N_c = 6, corresponding to three translational plus three rotational directions for moving the entire system. Other properties that can be calculated include the isochoric heat capacity, C_V, which can be calculated from the variance of the internal energy fluctuations, ⟨δE²⟩_NVT, in the NVT ensemble by:

\[\left\langle \delta E^2 \right\rangle_{NVT} = k T^2 C_V.\]


(22.23)


Similarly, the isobaric heat capacity, C_P, can be calculated from the variance of the enthalpy fluctuations, ⟨δH²⟩_NPT, in the NPT ensemble by:

\[\left\langle \delta H^2 \right\rangle_{NPT} = k T^2 C_P.\]

(22.24)

The volumetric thermal expansion coefficient, α, can be calculated from Eq. (22.6) or, alternatively, using the cross-correlation of the enthalpy and volume fluctuations in the NPT ensemble:

\[\left\langle \delta V\, \delta H \right\rangle_{NPT} = k T^2 V \alpha.\]

(22.25)

The derivation of these expressions from fluctuation theory will be provided in Chapter 24. Perhaps the most useful aspect of molecular dynamics simulations is the ability to visualize the entire atomic scale structure of the material. Molecular dynamics simulations provide the precise locations of every atom in the system at every time step of the simulation. This allows for a more detailed visualization of the structure than can be gained through experiments alone (see Section 22.11 for a description of visualization tools). Also, a variety of structural properties can be calculated, including radial distribution functions, bond angle distributions, coordination number distributions, ring statistics, and more.

22.8 Thermostats

Since energy is conserved in Newtonian mechanics, the native ensemble for performing molecular dynamics simulations is the microcanonical (NVE) ensemble. However, most experiments are conducted under isothermal rather than adiabatic conditions. Hence, it is more common to perform molecular dynamics simulations in the canonical (NVT) or isothermal-isobaric (NPT) ensembles, both of which are isothermal. When performing a simulation in an isothermal ensemble, a thermostat is used to maintain a constant temperature. The simplest way to maintain a constant temperature is to scale the velocities of the particles at prescribed time intervals throughout the simulation. The particle velocities are rescaled by the factor, λ, given by:

\[\lambda = \sqrt{\frac{T_{\mathrm{req}}}{T(t)}}, \qquad (22.26)\]

where T_req is the required temperature set by the thermostat, and T(t) is the instantaneous temperature at time t calculated using Eq. (22.22).
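This velocity-rescaling thermostat, together with the instantaneous temperature from Eq. (22.22), can be sketched as follows. Reduced units with k = 1 are assumed; the function names, the choice N_c = 6, and all other values are illustrative.

```python
import numpy as np

def instantaneous_temperature(v, mass, k=1.0, n_constraints=6):
    """Instantaneous temperature from the kinetic energy, Eq. (22.22)."""
    n_atoms = len(v)
    kinetic = 0.5 * mass * np.sum(v**2)
    dof = 3 * n_atoms - n_constraints  # 3N - N_c degrees of freedom
    return 2.0 * kinetic / (dof * k)

def rescale_velocities(v, mass, T_req, k=1.0):
    """Apply the velocity-rescaling thermostat of Eq. (22.26)."""
    T_now = instantaneous_temperature(v, mass, k)
    lam = np.sqrt(T_req / T_now)
    return lam * v

rng = np.random.default_rng(2)
v = rng.normal(0.0, 1.0, size=(1000, 3))  # velocities at some arbitrary temperature
v = rescale_velocities(v, mass=1.0, T_req=0.8)
T_after = instantaneous_temperature(v, mass=1.0)
```

By construction, a single rescaling brings the instantaneous temperature exactly to the target value; in a simulation this step is simply repeated at the prescribed intervals.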


22.9 Barostats

The isothermal-isobaric (NPT) ensemble also requires use of a barostat to maintain constant pressure. The pressure, P, of the system can be calculated at any instant in time using the virial theorem:

\[W = -3PV - \sum_{i=1}^{N} \sum_{j=i+1}^{N} r_{ij} \frac{dU(r_{ij})}{dr_{ij}} = -3NkT,\]

(22.27)

where W is the virial, which is defined as the expectation value of the sum of the products of the coordinates of the particles and the forces acting on them. As shown in Eq. (22.27), the virial consists of both an ideal gas contribution and a contribution arising from interactions between particles.

Physically, a system maintains constant pressure by changing its volume. A molecular dynamics simulation in the NPT ensemble likewise maintains constant pressure by adjusting the volume of the simulation cell. This volume change can be achieved by adjusting the volume in all directions simultaneously or along just one or two directions. The procedure for maintaining constant pressure with a barostat is analogous to that for maintaining constant temperature using a thermostat, viz., calculate the instantaneous pressure, P, using Eq. (22.27), and rescale the volume, V, of the system to maintain the set value of the pressure. When the volume of the system is adjusted, the positions of all particles must be rescaled to the new volume.

22.10 Reactive Force Fields

While the quantum mechanical approaches discussed in Section 22.2 can provide highly accurate calculations of the atomic and electronic structures of a system, they are too computationally intensive to be applied to large systems. Classical molecular dynamics simulations using empirical interatomic potentials are much more efficient. However, they may not be suitable for simulation of systems that involve significant chemical changes, such as chemical reactions. Reactive force fields are a highly successful intermediate level of calculation that provides improved accuracy and flexibility compared to standard empirical potentials, while still offering significant advantages in computational efficiency compared to quantum-level simulations [8,9].

Two widely used reactive force field approaches are the ReaxFF approach developed by Adri van Duin and coworkers [10] and the COMB


(charge-optimized many-body) potentials developed by Susan Sinnott and coworkers [11]. Both ReaxFF and COMB allow atoms to adopt variable charges in response to their local environment, enabling them to adapt to conditions far from equilibrium or to chemical changes, such as the creation or dissociation of chemical bonds. The ReaxFF and COMB approaches have much in common but also a few important differences. ReaxFF is tuned to capture accurate transition states and is hence suitable for describing the energetics along the entire reaction pathway. COMB is especially suited for capturing systems containing a variety of bonding types, e.g., systems that simultaneously involve covalent, ionic, and metallic bonding.

Figure 22.4 shows an example application of the COMB potentials to study the spreading of a water droplet on the surface of copper metal [12]. The atoms are colored by their charge, which varies depending on the local environment, as can be seen clearly in the figure. The spreading of the droplet proceeds initially with a monolayer of water, labeled as the precursor film. The kinetics of the wetting process were also calculated using the COMB potentials [13]. The results, plotted in Figure 22.5, reveal two different regimes for the wetting kinetics, furthering the fundamental understanding of the wetting process in this system. For complete details, we refer the reader to Refs. [12,13].

Figure 22.4 Spreading of a water droplet on the surface of copper, calculated through molecular dynamics with the reactive COMB potentials. (Image courtesy Susan Sinnott (Penn State)).


Figure 22.5 Kinetics of wetting of a free copper surface, calculated through molecular dynamics with reactive COMB potentials. (Image courtesy Susan Sinnott (Penn State)).

Another example of the use of reactive potentials is the study of titanium carbide derived-carbon (TiC-CDC) systems [14] shown in Figure 22.6. The porous TiC-CDC network provides a high surface area suitable for gas storage, catalysis, and other applications. In the reactive molecular dynamics simulations, the TiC-CDC network is loaded with Ti atoms or metallic clusters of Ti throughout the structure. The systems are evolved under various conditions using COMB potentials. Figure 22.7 shows the calculated adsorption of carbon dioxide in the TiC-CDC structures at varying levels of metal (Ti) loading. The calculated adsorption curves are in excellent agreement with the measured experimental data points across the full range of metal loadings and pressurizations.

Finally, let us consider the application of ReaxFF potentials for modeling an entirely new class of materials, viz., metal-organic framework (MOF) glasses, which are made of a hybrid inorganic-organic network [15]. Figure 22.8 shows results from a molecular dynamics simulation of glass formation in a specific MOF composition known as ZIF-4 (ZIF = zeolitic imidazolate framework). As indicated in Figure 22.8(a), the simulation begins with a known crystal structure of ZIF-4, which is heated above its melting point to obtain a liquid. The liquid is then quenched sufficiently fast to avoid recrystallization, thereby entering the glassy state. The corresponding structures of the ZIF-4 crystal, liquid, and glass are shown in Figure 22.8(b). All of the zinc atoms in the ZIF-4 crystal are fourfold coordinated. When the ZIF-4 crystal is melted, the Zn coordination ranges from 1 to 4 in the resulting ZIF-4 liquid. The Zn is both threefold and fourfold coordinated in the final glass. Yang et al. [15] have demonstrated that ReaxFF is a powerful tool for discovering and understanding new MOF materials.


Figure 22.6 Titanium carbide derived-carbon (TiC-CDC) models with added (a) Ti atoms and (b) Ti13 clusters, computed through molecular dynamics simulations. (Image courtesy Susan Sinnott (Penn State)).

Figure 22.7 CO2 adsorption in titanium carbide derived-carbon (TiC-CDC) modeled using COMB potentials and a combination of molecular dynamics and grand canonical Monte Carlo simulations. The solid curves are the simulation results, and the individual data points are experimental measurements. Each curve corresponds to a different percentage of residual metal (Ti) loading. (Image courtesy Susan Sinnott (Penn State)).


Figure 22.8 Molecular dynamics simulation of a ZIF-4 metal-organic framework system using ReaxFF potentials. (a) The initial crystal is heated to the molten state, and the resulting liquid is subsequently quenched to form a glass. (b) Corresponding structures of the ZIF-4 crystal, liquid, and glass. (Image by Yongjian Yang (Penn State)).

22.11 Tools of the Trade Several excellent and freely available tools exist for performing molecular dynamics simulations. Perhaps the two most popular tools are LAMMPS (Large-scale Atomic/Molecular Massively Parallel Simulator) and GROMACS (GROningen MAchine for Chemical Simulations). Both LAMMPS and GROMACS are free, open source software packages. LAMMPS was originally developed by Plimpton at Sandia National Laboratories [16] and is optimized for parallel processing using the Message Passing Interface (MPI) standard. LAMMPS can be downloaded from https://lammps.sandia.gov and can be easily extended to include new features. GROMACS was originally developed by Berendsen et al. [17] at the University of Groningen. GROMACS is known as one of the fastest software packages for performing molecular dynamics simulations and is optimized for running on both standard central processing units (CPUs) and graphics processing units (GPUs). GROMACS can be downloaded from http://www.gromacs.org.


A specialized set of software tools has also been developed for postprocessing and visualization of the results from molecular dynamics simulations. Some of the popular tools include:
• VMD (Visual Molecular Dynamics), http://www.ks.uiuc.edu/Research/vmd, a free and open source tool that specializes in visualization of biomolecular systems [18].
• XCrySDen (X-window CRYstalline Structures and DENsities), http://www.xcrysden.org, a free tool focusing on visualization of crystalline systems [19].
• AtomEye, http://li.mit.edu/Archive/Graphics/A/, a Linux-based tool with a wide variety of options for structural visualization [20].
• OVITO (Open Visualization Tool), https://www.ovito.org, a free and open source multi-platform tool written in Python [21]. OVITO is perhaps the most powerful and versatile of the currently available molecular visualization software packages.

22.12 Summary

Molecular dynamics simulations solve Newton's equations of motion to calculate the evolution of the atomic structure of a material. The forces acting on each atom can be calculated using empirical interatomic potentials or, in the case of ab initio molecular dynamics, using quantum mechanical methods such as density functional theory. An intermediate approach uses classical reactive force fields that can adapt depending on the local environment around each atom.

The equations of motion are integrated using finite difference methods, such as the Verlet algorithm or the velocity Verlet algorithm. An appropriate integration time step must be chosen to ensure accurate results and the ability to achieve the desired time scale of the simulation.

Inputs for molecular dynamics simulations include the choice of ensemble and the boundary conditions, as well as the initial configuration of atoms. The initial velocities are drawn randomly from a Maxwell-Boltzmann distribution at the temperature of the simulation. In their native form, molecular dynamics simulations are conducted in the microcanonical ensemble. Isothermal simulations can be performed through use of a thermostat, which rescales the atomic velocities to maintain a target temperature. Isobaric simulations make use of a barostat, which rescales the volume of the simulation cell to maintain a constant pressure.


Molecular dynamics simulations enable a detailed calculation of atomic structure, including radial distribution functions, bond angle distributions, coordination number distributions, and other structural features. Macroscopic properties can be calculated either directly through the simulation parameters or through fluctuation theory. Several software programs are available to conduct molecular dynamics simulations, perform property calculations, and visualize the results.

Exercises

(22.1) Which materials modeling technique(s) would you use to address the following problems? Justify your answers in terms of the required length scale, time scale, and level of physics.
(a) Microstructural evolution of a polycrystalline metallic alloy.
(b) Stress-strain response of a touch screen device involving a stack of several different types of materials.
(c) Long-time relaxation kinetics of polyethylene.
(d) Electronic band structure of amorphous silicon.
(e) Dissolution of sugar in an aqueous medium.
(f) Atomic structure of single-grain lithium niobate.
(g) Nanoindentation of steel.
(h) Diffusion of alkali ions in a solid-state electrolyte.

(22.2) Using both the Morse and Lennard-Jones interatomic potentials:
(a) Plot the potential energy versus atomic separation distance for both potentials (on the same graph).
(b) Assume that the cutoff for a neighbor list occurs at the interatomic separation distance where the magnitude of the potential energy falls to 1% of the maximum well depth. Calculate the neighbor list cutoff distance for both potentials.
(c) Derive analytical equations for force vs. atomic separation distance for both potentials.
(d) Plot the force vs. atomic separation distance for the Morse and Lennard-Jones potentials on the same graph.
(e) Discuss the differences in shape between the Morse and Lennard-Jones potentials and the resulting forces.

(22.3) Give at least one example of a practical problem that would be solved using each of the following ensembles. Justify your answers.
(a) Microcanonical ensemble.
(b) Canonical ensemble.


(c) Isothermal-isobaric ensemble.
(d) Grand canonical ensemble (look ahead to Chapter 23).

(22.4) Download one of the free, open source software packages for performing molecular dynamics simulations, e.g., LAMMPS or GROMACS, as discussed in Section 22.11. Try running some of the example scripts and familiarize yourself with the input and output parameters. How would you modify the input script to model a system of interest to your own research?

(22.5) Download one of the free software tools for visualizing the structures generated from molecular dynamics simulations, e.g., VMD, AtomEye, or OVITO, as described in Section 22.11. Try plotting some of the example systems and navigating through different parts of the atomic structure. What are some of the important structural features that you note through this visualization process? What other properties can be calculated using the tool that you have selected?

References

[1] J. C. Mauro, "Decoding the Glass Genome," Curr. Opin. Solid State Mater. Sci. 22, 58 (2018).
[2] N. H. March, W. H. Young, and S. Sampanthar, The Many-Body Problem in Quantum Mechanics, Dover (1995).
[3] A. L. Fetter and J. D. Walecka, Quantum Theory of Many-Particle Systems, Dover (2003).
[4] A. Szabo and N. S. Ostlund, Modern Quantum Chemistry, Dover (1996).
[5] C. J. Cramer, Essentials of Computational Chemistry, John Wiley & Sons (2002).
[6] J. M. Thijssen, Computational Physics, Cambridge University Press (1999).
[7] J. C. Mauro, "Multiscale Modeling of Chalcogenides," Ph.D. Thesis, Alfred University, Alfred, New York (2006).
[8] T. Liang, Y. K. Shin, Y.-T. Cheng, D. E. Yilmaz, K. G. Vishnu, O. Verners, C. Zou, S. R. Phillpot, S. B. Sinnott, and A. C. T. van Duin, "Reactive Potentials for Advanced Atomistic Simulations," Annu. Rev. Mater. Res. 43, 109 (2013).
[9] J. A. Harrison, J. D. Schall, S. Maskey, P. T. Mikulski, M. T. Knippenberg, and B. H. Morrow, "Review of Force Fields and Intermolecular Potentials used in Atomistic Computational Materials Research," Appl. Phys. Rev. 5, 031104 (2018).
[10] T. P. Senftle, S. Hong, M. M. Islam, S. B. Kylasa, Y. Zheng, Y. K. Shin, C. Junkermeier, R. Engel-Herbert, M. J. Janik, H. M. Aktulga, T. Verstraelen, A. Grama, and A. C. T. van Duin, "The ReaxFF Reactive Force-Field: Development, Applications and Future Directions," npj Comput. Mater. 2, 15011 (2016).
[11] T. Liang, B. Devine, S. R. Phillpot, and S. B. Sinnott, "Variable Charge Reactive Potential for Hydrocarbons to Simulate Organic-Copper Interactions," J. Phys. Chem. A 116, 7976 (2012).
[12] A. C. Antony, T. Liang, S. A. Akhade, M. J. Janik, S. R. Phillpot, and S. B. Sinnott, "Effect of Surface Chemistry on Water Interaction with Cu(111)," Langmuir 32, 8061 (2016).


[13] A. C. Antony, T. Liang, and S. B. Sinnott, "Nanoscale Structure and Dynamics of Water on Pt and Cu Surfaces from MD Simulations," Langmuir 34, 11905 (2018).
[14] D. Zhang, M. R. Dutzer, T. Liang, A. F. Fonseca, Y. Wu, K. S. Walton, D. S. Sholl, A. H. Farmahini, S. K. Bhatia, and S. B. Sinnott, "Computational Investigation on CO2 Adsorption in Titanium Carbide-Derived Carbons with Residual Titanium," Carbon 111, 741 (2017).
[15] Y. Yang, Y. K. Shin, S. Li, T. D. Bennett, A. C. T. van Duin, and J. C. Mauro, "Enabling Computational Design of ZIFs using ReaxFF," J. Phys. Chem. B 122, 9616 (2018).
[16] S. Plimpton, "Fast Parallel Algorithms for Short-Range Molecular Dynamics," J. Comp. Phys. 117, 1 (1995).
[17] H. J. C. Berendsen, D. van der Spoel, and R. van Drunen, "GROMACS: A Message-Passing Parallel Molecular Dynamics Implementation," Comp. Phys. Comm. 91, 43 (1995).
[18] W. Humphrey, A. Dalke, and K. Schulten, "VMD: Visual Molecular Dynamics," J. Mol. Graph. 14, 33 (1996).
[19] A. Kokalj, "XCrySDen – A New Program for Displaying Crystalline Structures and Electron Densities," J. Mol. Graph. Model. 17, 176 (1999).
[20] J. Li, "AtomEye: An Efficient Atomistic Configuration Viewer," Model. Simul. Mater. Sci. Eng. 11, 173 (2003).
[21] A. Stukowski, "Visualization and Analysis of Atomistic Simulation Data with OVITO – The Open Visualization Tool," Model. Simul. Mater. Sci. Eng. 18, 015012 (2009).


CHAPTER 23

Monte Carlo Techniques

23.1 Introduction

While molecular dynamics provides a deterministic algorithm for calculating atomic trajectories over time, it is not necessarily the most effective technique for every atomistic simulation. The Monte Carlo approach is an alternative method for atomic-scale simulations based on a stochastic sampling of the phase space. Unlike molecular dynamics, the Monte Carlo approach avoids computation of forces and integration of the equations of motion. Rather, it relies on generating random configurations of atoms in phase space and using special criteria to determine whether to accept or reject each new configuration. Monte Carlo is also an effective technique for performing simulations of open systems, i.e., systems in which the number of atoms in the simulation is no longer fixed. Furthermore, the kinetic Monte Carlo approach allows us to access much longer time scales than are available through standard molecular dynamics. Monte Carlo also provides an effective approach for calculating the density of states of an energy landscape, including the inherent structure density of states, which is necessary for obtaining the degeneracy factors that are used as input when solving the master equation kinetics of the landscape (see Chapter 20).

In this chapter, we introduce the basic concepts of the Monte Carlo approach, including Monte Carlo integration, importance sampling, Markov processes, and the Metropolis algorithm for performing Monte Carlo simulations. We discuss implementation of the Metropolis algorithm in the canonical and isothermal-isobaric ensembles, as well as the grand canonical and Gibbs ensembles. We then turn our attention to kinetic Monte Carlo simulations for accessing long time scales and finally discuss a Monte Carlo method for calculating the inherent structure density of states.
For additional information on the theory and implementation of Monte Carlo techniques beyond what is discussed in this chapter, we refer the reader to the excellent textbooks by Newman and Barkema [1] and by Landau and Binder [2].


23.2 Monte Carlo Integration

The Monte Carlo method was originally developed by von Neumann, Ulam, and Metropolis at Los Alamos National Laboratory at the end of World War II to study neutron diffusion in fissionable materials [3-5]. The name Monte Carlo is a reference to the capital of the Principality of Monaco, famous for its Casino de Monte-Carlo, and was chosen by Metropolis because of the extensive use of random numbers in his stochastic sampling method.

The Monte Carlo method is much more general than its specific application to atomic-scale modeling of materials. Broadly, the Monte Carlo method encompasses any stochastic technique that uses random numbers to solve a numerical problem, which might in principle be deterministic in nature. Monte Carlo is one of the most powerful mathematical tools for solving problems that are either too difficult or too time-consuming to solve analytically.

A good introduction to this approach is Monte Carlo integration. While the calculus of integration is deterministic, integration can also be performed numerically through random sampling of the integrand. Consider the example shown in Figure 23.1, which shows a circle of unit radius contained within a 2 × 2 square. Each point in the figure corresponds to one "shot," i.e., one random sampling of the space contained within the square. Each point falling within the circle represents a "hit," while each point falling outside the circle is a "miss." The number of shots (s_shot) is equal to the number of hits (s_hit) plus the number of misses (s_miss). This random sampling of phase space is at the core of the Monte Carlo method. In the case of Figure 23.1, the ratio of the number of hits to the total number of shots approximates the ratio of the area of the circle to the total area of the phase space (equal to the area of the square). Mathematically, we write

\[\frac{\pi}{4} = \frac{\text{Area of the circle}}{\text{Area of the square}} \approx \frac{s_{\mathrm{hit}}}{s_{\mathrm{shot}}}.\]

(23.1)

Since the area of the circle divided by the area of the square is equal to p/4, the value of p can be approximated by p z 4shit =sshot . This approximation becomes increasingly accurate as the number of shots, sshot, increases. In the limit of an infinite number of shots, an exact value of p should be recovered.
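The hit-or-miss estimate of Eq. (23.1) takes only a few lines of code. The sketch below is illustrative (the function name, seed, and shot count are arbitrary choices, not from the text):

```python
import random

def estimate_pi(n_shots, seed=12345):
    """Hit-or-miss Monte Carlo estimate of pi via Eq. (23.1)."""
    rng = random.Random(seed)  # fixed seed so the run is reproducible
    hits = 0
    for _ in range(n_shots):
        # One "shot": a uniform random point in the 2 x 2 square.
        x = rng.uniform(-1.0, 1.0)
        y = rng.uniform(-1.0, 1.0)
        if x * x + y * y <= 1.0:  # a "hit" lands inside the unit circle
            hits += 1
    return 4.0 * hits / n_shots  # pi is approximately 4*s_hit/s_shot

print(estimate_pi(100_000))
```

Doubling the number of shots roughly halves nothing in particular: as with all Monte Carlo estimators, the statistical error shrinks only as the square root of the number of trials.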


Figure 23.1 Example of Monte Carlo integration, in which the value of π can be estimated using Eq. (23.1).

Monte Carlo integration is conceptually straightforward and can provide an accurate estimate of most integrals. Let us consider the more general case of integrating a function, f(x), from x1 to x2. The integral, F, can be expressed as

F = \int_{x_1}^{x_2} f(x)\,dx = \int_{x_1}^{x_2} \frac{f(x)}{\rho(x)}\, \rho(x)\,dx,    (23.2)

where ρ(x) is an arbitrary probability density function from which the random numbers are drawn. Following the Monte Carlo method, a number of trials, s, are performed. Each trial involves drawing a random number, ζ_s, from the distribution given by ρ(x). With Monte Carlo integration, the integral, F, can then be approximated by:

F = \left\langle \frac{f(\zeta_s)}{\rho(\zeta_s)} \right\rangle_\text{trials},    (23.3)


i.e., the integral is equal to the expectation value of f(ζ_s)/ρ(ζ_s) averaged over the full set of random trials. For example, suppose that we sample from a uniform probability distribution over the domain of [x1, x2]. Then, the probability density function is:

\rho(x) = \frac{1}{x_2 - x_1}, \qquad x_1 \le x \le x_2.    (23.4)

The resulting integral F can then be estimated by:

F \approx \frac{x_2 - x_1}{s_\text{max}} \sum_{s=1}^{s_\text{max}} f(\zeta_s),    (23.5)

where s_max is the total number of random trials conducted. Given the large quantity of random numbers used by the Monte Carlo approach, the accuracy of the method depends on having good random number generators, a topic that will be discussed in Section 23.10.
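The uniform-sampling estimator of Eqs. (23.4) and (23.5) can be sketched as follows; the test integrand (x² over [0, 2], with exact value 8/3) and the seed are arbitrary illustrative choices:

```python
import random

def mc_integrate(f, x1, x2, n_trials, seed=42):
    """Estimate the integral of f over [x1, x2] via Eq. (23.5),
    drawing each trial point from the uniform density of Eq. (23.4)."""
    rng = random.Random(seed)
    total = sum(f(rng.uniform(x1, x2)) for _ in range(n_trials))
    # Multiply the mean of f by (x2 - x1), i.e., by 1/rho for a uniform rho.
    return (x2 - x1) * total / n_trials

# Integral of x^2 from 0 to 2; the exact value is 8/3 = 2.6667.
print(mc_integrate(lambda x: x * x, 0.0, 2.0, 200_000))
```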

23.3 Monte Carlo in Statistical Mechanics

Within the field of statistical mechanics, the Monte Carlo method provides an effective means for calculating partition functions and the ensemble averages of property values. For example, the expectation value of an arbitrary property, A, in the canonical (NVT) ensemble can be calculated by:

\langle A \rangle_{NVT} = \frac{\int A \exp(-U/kT)\,d\mathbf{r}}{\int \exp(-U/kT)\,d\mathbf{r}} \approx \frac{\sum_{s=1}^{s_\text{max}} A(s) \exp(-U(s)/kT)}{\sum_{s=1}^{s_\text{max}} \exp(-U(s)/kT)},    (23.6)

where U is potential energy, k is Boltzmann's constant, T is absolute temperature, and the integral is over the configurational phase space, r. With the Monte Carlo approach, the integral over the continuous phase space, r, is replaced by a discrete average over a random sampling of the phase space, where the potential energy, U(s), and property value, A(s), are evaluated at each random trial, s.

The main problem with application of Eq. (23.6) is that a uniform sampling of the phase space would lead to significant sampling in the regions of r having high values of the potential energy, U. In other words, the vast majority of the potential energy landscape consists of unfavorable,


high-energy configurations of atoms, which contribute little to the partition function owing to their low occupation probabilities. Hence, with a uniform sampling of r, most of the computational time would be wasted sampling regions of the phase space that have negligible impact on the properties of the system.

The solution to this problem is importance sampling, which is a method for biasing the sampling of phase space toward those configurations that contribute most appreciably to the integral [6,7]. With importance sampling, random numbers are chosen from a distribution that allows the function evaluation to be concentrated in the regions of space that make the most significant contributions to the integral. In the general case of Monte Carlo integration in Eq. (23.2), importance sampling can be performed by selecting a probability density function, ρ(x), that is biased toward the region of phase space that contributes most to the integral, F. For application of Monte Carlo integration to statistical mechanics, the random sampling should be biased toward regions of phase space having lower values of energy, since these regions contribute more to the partition function at finite temperature. The appropriate choice of probability density function, ρ(r), is the Boltzmann distribution, i.e., the same distribution that is used in defining the partition function itself.
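The idea behind importance sampling can be illustrated directly with the estimator of Eq. (23.3). In this sketch (all choices are illustrative, not from the text), the integrand f(x) = x e^(−x) on [0, ∞), whose exact integral is 1, is estimated by drawing trials from ρ(x) = e^(−x), which concentrates the sampling where the integrand is largest:

```python
import math
import random

def importance_sample(f, draw, rho, n_trials, seed=7):
    """Estimate F via Eq. (23.3): the mean of f(z)/rho(z) over draws z ~ rho."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_trials):
        z = draw(rng)            # random number drawn from the density rho
        total += f(z) / rho(z)   # each trial contributes f(z)/rho(z)
    return total / n_trials

# Integrand concentrated near x = 0; exact integral over [0, inf) is 1.
f = lambda x: x * math.exp(-x)
rho = lambda x: math.exp(-x)            # sampling density, normalized on [0, inf)
draw = lambda rng: rng.expovariate(1.0)  # draws from rho exactly

print(importance_sample(f, draw, rho, 100_000))
```

A uniform density truncated at some large cutoff would also converge to 1, but with far higher variance, since most trials would land where f is negligible.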

23.4 Markov Processes

Implementation of Monte Carlo integration for the problem of statistical mechanics, as originally proposed by Metropolis, involves construction of a Markov process. A Markov process is a method for generating a new state j based only on the current state i and not on any of the previous states [8]. The transition probability from state i to state j is denoted P(i→j) and satisfies the condition

\sum_j P(i \to j) = 1,    (23.7)

where the summation is over all possible new states, j. Note that the probability P(i→i), which is the probability of remaining in the original state i, may be nonzero.

With the Metropolis Monte Carlo approach, a Markov process is repeatedly used to generate a Markov chain of states. In other words, starting from an initial state i, the Markov process is used to generate a new state j. This state is then used in the Markov process to generate the next


new state. The process continues until a sequence of states has been generated that sufficiently samples the relevant probability distribution, i.e., the Boltzmann distribution. When this occurs, the Monte Carlo process is said to have "reached equilibrium," and the estimator from Eq. (23.6) can be used to calculate the properties of interest. The necessary condition for equilibration is achieving detailed balance, i.e.,

p_i P(i \to j) = p_j P(j \to i),    (23.8)

where p_i and p_j are the probabilities of occupying states i and j, respectively. The detailed balance condition of Eq. (23.8) requires that the probability of transitioning from state i to state j is equal to the probability of making the reverse transition. Since the equilibrium distribution must satisfy the Boltzmann probability distribution, the detailed balance equation should also satisfy

\frac{P(i \to j)}{P(j \to i)} = \frac{p_j}{p_i} = \exp\left(-\frac{U_j - U_i}{kT}\right).    (23.9)

If Eqs. (23.8) and (23.9) are satisfied, then an equilibrium chain of states has been generated that follows the Boltzmann probability distribution required by statistical mechanics.
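These conditions are easy to verify numerically for a small discrete system. The sketch below (an illustrative construction, not from the text) builds a transition matrix whose rows satisfy Eq. (23.7), using uniform selection among the other states and the Boltzmann acceptance factor min(1, exp(−(U_j − U_i)/kT)), and then checks detailed balance, Eq. (23.8), against the Boltzmann occupation probabilities:

```python
import math

def metropolis_matrix(U, kT):
    """Transition matrix P(i -> j) for a Markov chain over discrete states:
    uniform selection among the other n-1 states, Boltzmann acceptance.
    Each row sums to 1, satisfying Eq. (23.7)."""
    n = len(U)
    P = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if j != i:
                a = min(1.0, math.exp(-(U[j] - U[i]) / kT))  # acceptance
                P[i][j] = a / (n - 1)                        # selection * acceptance
        P[i][i] = 1.0 - sum(P[i])  # P(i -> i): probability of staying put
    return P

U = [0.0, 1.0, 2.0]                # hypothetical state energies (units of kT)
P = metropolis_matrix(U, kT=1.0)
Z = sum(math.exp(-u) for u in U)
p = [math.exp(-u) / Z for u in U]  # Boltzmann occupation probabilities

# Detailed balance, Eq. (23.8): p_i P(i->j) equals p_j P(j->i) for every pair.
print(max(abs(p[i] * P[i][j] - p[j] * P[j][i])
          for i in range(3) for j in range(3)))
```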

23.5 The Metropolis Method

The procedure for achieving an appropriate Markov chain of states is to separate the transition probability into two parts,

P(i \to j) = g(i \to j)\, a(i \to j),    (23.10)

where g(i→j) is the selection probability, i.e., the probability that the Monte Carlo algorithm will generate a new target state j after starting in state i, and a(i→j) is the acceptance probability, also known as the acceptance ratio. The acceptance ratio, a(i→j), gives the probability of accepting a new trial state j after starting in state i. If the new state is rejected, then the system remains in its original state i. With this definition of the acceptance ratio, Eq. (23.9) becomes:

\frac{P(i \to j)}{P(j \to i)} = \frac{g(i \to j)\, a(i \to j)}{g(j \to i)\, a(j \to i)} = \exp\left(-\frac{U_j - U_i}{kT}\right).    (23.11)


With the Metropolis Monte Carlo method, new states are selected with a uniform probability distribution, i.e., g(i→j) = g(j→i), such that Eq. (23.11) becomes:

\frac{P(i \to j)}{P(j \to i)} = \frac{a(i \to j)}{a(j \to i)} = \exp\left(-\frac{U_j - U_i}{kT}\right).    (23.12)

Hence, the task of achieving the desired Boltzmann distribution is left to making an appropriate choice of acceptance ratio. In Metropolis Monte Carlo, the acceptance ratio is chosen to be:

a(i \to j) = \begin{cases} \exp\left(-\dfrac{U_j - U_i}{kT}\right), & \text{if } U_j - U_i > 0 \\ 1, & \text{otherwise.} \end{cases}    (23.13)

In other words, any trial move that lowers the potential energy of the system is always accepted, and any trial move that increases the potential energy of the system is accepted according to a Boltzmann probability factor. This acceptance criterion in Eq. (23.13) is visualized in Figure 23.2. Note that this choice of a(i→j) satisfies Eq. (23.12).

Putting it all together, the Metropolis algorithm implementing the above Monte Carlo scheme consists of the following steps:
1. Calculate the potential energy of the current state, U_i.

Figure 23.2 Schematic diagram summarizing whether a random trial move is accepted or rejected following the Metropolis method. If the potential energy is lowered as a result of the trial move, then the new configuration is always accepted. If the change in potential energy is positive, then the trial move is randomly accepted or rejected following a Boltzmann distribution.


2. Generate a new trial state following the Markov process. In the canonical ensemble, this is accomplished by choosing a random atom in the system and assigning it a random displacement. (Sampling in different ensembles will be covered in Section 23.7.)
3. Calculate the potential energy of the new trial state, U_j.
4. If U_j ≤ U_i, then accept the new state. If U_j > U_i, then draw a random value from a uniform distribution between 0 and 1. If this random value is less than exp[−(U_j − U_i)/kT], then the new state is accepted. Otherwise, the trial state is rejected, and the system is returned to the original state, i.
5. Repeat steps 1–4 until the potential energy of the system has converged to an equilibrium value.
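Steps 1–5 can be illustrated on a deliberately simple system: a single particle in a one-dimensional harmonic well. This is an illustrative sketch (the potential, reduced units, maximum step size, and seed are all arbitrary choices, not from the text); equipartition provides the exact check ⟨x²⟩ = kT/spring:

```python
import math
import random

def metropolis_harmonic(n_steps, kT=1.0, spring=1.0, max_disp=1.0, seed=3):
    """Metropolis sampling of a single particle in a 1D harmonic well,
    U(x) = 0.5*spring*x**2, following steps 1-5 above; returns <x^2>."""
    rng = random.Random(seed)
    x, U = 0.0, 0.0                                    # step 1: current energy
    acc = 0.0
    for _ in range(n_steps):
        x_new = x + rng.uniform(-max_disp, max_disp)   # step 2: trial displacement
        U_new = 0.5 * spring * x_new * x_new           # step 3: trial energy
        dU = U_new - U
        # Step 4: downhill moves are always accepted; uphill moves are
        # accepted with Boltzmann probability exp(-dU/kT), else rejected.
        if dU <= 0.0 or rng.random() < math.exp(-dU / kT):
            x, U = x_new, U_new
        acc += x * x                                   # accumulate the property
    return acc / n_steps

# Equipartition check: <x^2> should approach kT/spring = 1.
print(metropolis_harmonic(200_000))
```

Note that rejected trials still contribute the current state to the average, exactly as required by the Markov chain: P(i→i) is nonzero.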

23.6 Molecular Dynamics vs. Monte Carlo

Molecular dynamics and Metropolis Monte Carlo are both methods for modeling materials at the atomic scale. Both methods are based on having accurate interatomic potentials to calculate the potential energy as a function of particle coordinates, and they both make use of neighbor lists to improve the computational efficiency of the simulations. However, there are several important differences between these techniques. With molecular dynamics:
• Atoms explore the configurational phase space by following trajectories according to Newton's second law of motion. This involves calculation of a force vector according to Eq. (22.1).
• In molecular dynamics simulations, particles have positions and velocities (momenta). Hence, for N particles in a three-dimensional space, there are 6N dimensions.
• The native ensemble for molecular dynamics is the microcanonical (NVE) ensemble, since Newtonian mechanics conserves the total energy of the system.
• Molecular dynamics simulations collect information along the time trajectory of the simulation, enabling them to calculate certain kinetic properties directly, such as diffusivity.
• With molecular dynamics, the simulation is not guaranteed to reach thermodynamic equilibrium, regardless of the number of time steps. In other words, the system is not guaranteed to converge to the global free energy minimum.

On the other hand, with the Metropolis Monte Carlo technique:
• Atoms explore phase space via random moves following a Markov process. The moves are accepted or rejected according to the difference in potential energy between the current state and the new trial state. Hence, there is no calculation of a force vector.
• Particles have positions, but not momenta. For an N-atom system in a three-dimensional space, the phase space of the system has 3N dimensions.
• The native ensemble of the Metropolis Monte Carlo method is the canonical (NVT) ensemble, since the acceptance criterion of Eq. (23.13) considers a Boltzmann distribution at constant temperature.
• The Monte Carlo algorithm collects information over random ensembles, using importance sampling to reach the states of lowest free energy. Properties are calculated by an ensemble average over the configurations visited by the system. Given an infinite number of trial moves, the Metropolis Monte Carlo algorithm is guaranteed to find the global free energy minimum.

23.7 Sampling in Different Ensembles

The Metropolis Monte Carlo approach is readily extended for simulations in other ensembles beyond the canonical ensemble. To perform Monte Carlo simulations in the isothermal-isobaric (NPT) ensemble, two types of random trial moves are considered:
• Displacement of particles within the simulation cell.
• Change of the volume of the simulation cell.
As discussed in detail by Allen and Tildesley [3], the acceptance criterion in Eq. (23.13) is updated based on the change in enthalpy as a result of each trial move.

The grand canonical (μVT) ensemble is used for open systems where the chemical potential is fixed and the number of particles fluctuates. There are two types of random moves in grand canonical Monte Carlo:
• Particle displacements, as in the canonical ensemble.
• Insertion of a new particle or deletion of an existing particle.
The particle insertion/deletion trials are accepted or rejected according to the probability from the grand canonical partition function. The particle insertion and deletion operations are especially challenging for condensed systems, since the probability of accepting a random insertion or deletion is very small. To address this problem, biased particle insertion/deletion


methods have been developed to select propitious locations for inserting a new particle or deleting an existing particle [3]. Grand canonical Monte Carlo has been very successful, for example, in the development and implementation of the implicit glass model for simulation of crystal nucleation, which has already been discussed in Section 15.5.

Another ensemble of interest is the Gibbs (μPT) ensemble, which is similar to the grand canonical ensemble but considering isobaric rather than isochoric conditions. As originally developed by Athanassios Panagiotopoulos [9,10], Gibbs ensemble Monte Carlo is especially useful for calculating thermodynamic phase equilibria. With this technique, two simulation cells are constructed for the two different phases of interest. Gibbs ensemble Monte Carlo considers three types of random trial moves:
• Particle displacements within a simulation cell of a given phase.
• Volume changes of a simulation cell, where the particle coordinates are scaled to the new volume.
• Transfer of a particle between phases, i.e., between the two simulation cells.
These three types of random trial moves are visualized in Figure 23.3. For a sufficiently high number of random trials, equilibration can be achieved between the two phases.

23.8 Kinetic Monte Carlo

The astute kinetics student may note that there has not been any mention of time in our discussion of Metropolis Monte Carlo. The Monte Carlo technique is clearly effective for simulating material structure and phase

Figure 23.3 Summary of the three types of random moves executed during Monte Carlo simulations in the Gibbs ensemble to determine thermodynamic phase equilibria.


equilibria, but how can it be extended to incorporate kinetics? The answer is the kinetic Monte Carlo approach, which introduces a time dimension into the Monte Carlo framework.

Kinetic Monte Carlo is an especially powerful technique that can access much longer time scales compared to standard molecular dynamics simulations [11-13]. Molecular dynamics simulations are fundamentally limited by the duration of the integration time step, which is on the order of 1 fs (= 10^-15 s). As a result of this short time step, molecular dynamics simulations are typically limited to time scales on the order of nanoseconds or microseconds. However, many kinetic processes in materials occur on much longer time scales, such as minutes, hours, or days. Beyond that, corrosion and other properties that affect material lifetime may occur on a time scale of years [14]. Kinetic Monte Carlo is an especially effective technique for achieving the time scales necessary for modeling these long-time kinetic processes.

As with standard Metropolis Monte Carlo, the kinetic Monte Carlo technique is based on generating a Markov chain of states. Instead of using trial moves based on individual particle displacements or volume changes, kinetic Monte Carlo considers the set of transition points available from the current location in the potential energy landscape (or enthalpy landscape in the case of an isobaric simulation). This set of transition points can be determined using techniques such as eigenvector-following or the activation-relaxation technique, which have been covered previously in Section 18.5. Since the set of transition points must satisfy the Markov chain assumption, each transition point must be independent of the others. Also, the system should not exhibit any memory effect, meaning that the transitions between successive states depend only on the current state and not on any of the previous states visited by the system.

With these assumptions, it is possible to construct a Markov process such that the system evolves dynamically with a correct correlation to time, assuming that we know the exact reaction rate or transition time for each of the possible transitions [11-14]. A kinetic Monte Carlo simulation, therefore, evolves the system through a sequence of infrequent events, each overcoming a transition barrier in the underlying energy landscape. Like the master equation approach discussed in Chapter 20, this allows the simulation to access long time scales.

Kinetic Monte Carlo shares some similarities with the master equation approach. Both techniques are based on mapping transition points in an energy landscape and modeling the long-time kinetics of the system


overcoming these activation barriers. However, there are some key differences, viz.:
• With the master equation approach, there must be an a priori mapping of the energy landscape to a discrete set of inherent structures and transition points. In contrast, with kinetic Monte Carlo, the locations of the transition points can be determined "on-the-fly" during the simulation.
• The master equation approach considers an evolution of the basin occupation probabilities. As such, it has built-in averaging over all possible realizations of the system. On the other hand, kinetic Monte Carlo follows a single trajectory through the energy landscape, i.e., one particular realization of the system. Hence, multiple kinetic Monte Carlo simulations would be needed, using different random seeds (see Section 23.10), to yield a similar type of averaging as in the master equation approach.

Several algorithms have been proposed for conducting kinetic Monte Carlo simulations. The most commonly used algorithm is the direct method, where two random numbers are drawn from a uniform distribution between 0 and 1. The first random number is used to select which transition point to cross, and the second random number is used to advance the time clock. The specific steps in a kinetic Monte Carlo simulation are as follows:
1. Initialize the system time clock to t = 0. Perform a local minimization of the potential energy (or enthalpy) of the system to begin at the inherent structure configuration within the local basin.
2. Find all the possible transition points from the current configuration. As discussed in Chapter 18, a transition point is defined as a first-order saddle point in the energy landscape, which gives the minimum possible energy for making an inter-basin transition. Transition points can be calculated through methods such as eigenvector-following (see Section 18.5). A schematic diagram of this step is shown in Figure 23.4.

Figure 23.4 The kinetic Monte Carlo method involves randomly selecting a transition pathway from an initial starting configuration. The time coordinate is then updated using Eq. (23.16).


3. Calculate the total reaction rate (K_tot) at the current state, i:

K_\text{tot} = \sum_j K_{ij},    (23.14)

where K_ij are the individual transition rates from the current state, i, to each of the possible next states, j. The transition rates are calculated from transition state theory, as discussed in Section 20.1.
4. Generate two random numbers, r1 and r2, from a uniform distribution between 0 and 1.
5. Select the particular transition pathway, j_k, which satisfies:

\sum_{j=1}^{j_k - 1} K_{ij} < r_1 K_\text{tot} \le \sum_{j=1}^{j_k} K_{ij}.    (23.15)

In other words, Eq. (23.15) selects a random transition pathway, where the selection probability is proportional to the reaction rate. (Faster transitions are more likely to be chosen compared to slower transitions.)
6. Execute the j_k transition and move the system to the inherent structure at the other side of that activation barrier.
7. Advance the system time clock by the time step, dt, given by:

dt = -\frac{\ln r_2}{K_\text{tot}}.    (23.16)

8. Repeat steps 2–7 until the desired total simulation time has been reached.

Kinetic Monte Carlo can access much longer time scales compared to molecular dynamics since the time step, dt, in kinetic Monte Carlo is calculated from the overall expected transition rate between adjacent basins, as in Eq. (23.16). Hence, kinetic Monte Carlo focuses on the kinetics of inter-basin configurational transitions. This contrasts with molecular dynamics, where the time step is constrained to be on the order of 1 fs, since the details of the atomic and molecular vibrations must be explicitly calculated.

An example of a highly successful application of kinetic Monte Carlo is the work of Rodgers et al. [15], who studied the microstructural evolution of additively manufactured metals through both experiments and simulation. Snapshots of their results are shown in Figure 23.5, where the excellent


Figure 23.5 Comparison of experimental additively manufactured metal microstructures with those simulated using the kinetic Monte Carlo approach. Colors in the experimental microstructures indicate different grain orientations. Colors in the simulated microstructures are simply used to differentiate among grains. (Reproduced with permission from Ref. [15]).

agreement between experimental and simulated microstructures is evident along all three orthogonal planes. Indeed, good quantitative agreement was obtained between the experimental and theoretical grain size distributions and grain orientation distributions for a variety of processing conditions.

Another interesting application of kinetic Monte Carlo is in the modeling of the plasma-enhanced chemical vapor deposition (PECVD) process for growing thin films of amorphous silicon. Crose et al. [16] developed a multiscale modeling approach, where atomic-scale kinetic Monte Carlo simulations were embedded in a continuum-level computational fluid dynamics (CFD) model. The thin film growth is captured through the kinetic Monte Carlo algorithm, with a dynamic boundary that is updated at each time step and fed back into the CFD model. As shown in Figure 23.6, the kinetic Monte Carlo simulation considers four types of elementary kinetic events that occur during the film growth process. These include physisorption and chemisorption of (SiH3)+ radicals on the surface of the thin film, migration of the radical particles, and the hydrogen abstraction process by which the (SiH3)+ radicals or hydrogenated surface sites release H2 gas and Si is incorporated into the film.


Figure 23.6 Four elementary kinetic events considered in the kinetic Monte Carlo simulation of the plasma-enhanced chemical vapor deposition (PECVD) of amorphous silicon thin films. From left to right, the kinetic events are migration, physisorption, chemisorption, and hydrogen abstraction. (Reproduced with permission from Ref. [16]).
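The direct-method pathway selection and clock advance, Eqs. (23.15) and (23.16), take only a few lines to implement. The sketch below is illustrative; the three transition rates are hypothetical values chosen solely to make the fast/slow pathway statistics visible:

```python
import math
import random

def kmc_step(rates, rng):
    """One direct-method kinetic Monte Carlo step: select a transition with
    probability proportional to its rate (Eq. 23.15) and draw the time
    increment dt = -ln(r2)/K_tot (Eq. 23.16)."""
    K_tot = sum(rates)
    r1 = rng.random()
    r2 = 1.0 - rng.random()  # in (0, 1], avoids log(0)
    target = r1 * K_tot
    cumulative = 0.0
    for j, K in enumerate(rates):  # find j_k via the cumulative rate sum
        cumulative += K
        if target <= cumulative:
            break
    return j, -math.log(r2) / K_tot

# Hypothetical rates (arbitrary units) for three escape pathways from a basin.
rates = [10.0, 1.0, 0.1]
rng = random.Random(11)
t, counts = 0.0, [0, 0, 0]
for _ in range(50_000):
    j, dt = kmc_step(rates, rng)
    counts[j] += 1
    t += dt
print(counts, t)  # the fast pathway dominates; mean dt is 1/K_tot
```

Note how the elapsed time per step depends only on the total escape rate, not on which pathway was taken: slow-rate landscapes automatically yield long simulated times.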

23.9 Inherent Structure Density of States

In Chapter 18 we introduced the notion of potential energy landscapes and enthalpy landscapes. The long-time kinetics of the system can be calculated from its set of inherent structures and transition points, either through the master equation approach discussed in Chapter 20 or using the kinetic Monte Carlo technique described in the previous section. In Chapters 18 and 20, we emphasized the importance of considering the degeneracy of the inherent structures, i.e., the number of equivalent inherent structures in the energy landscape. Degeneracy factors are important since they provide the entropic contribution to the basin occupation probabilities and inter-basin transition rates. In particular, Eq. (18.12) shows that the contribution of each inherent structure to the energy landscape partition function is proportional to the volume of its corresponding basin, i.e., basins with larger volumes in the 3N-dimensional phase space have higher degeneracy factors. From Eq. (18.12), it is clear that a method is required to calculate the density of states of the inherent structures, weighted by their basin volumes in the


3N-dimensional hyperspace. In this section, we briefly outline the Monte Carlo method developed by Mauro et al. [17] for addressing exactly this problem, which is an extension of previous work by Wang and Landau [18-20] for determining the density of states of a continuous potential energy landscape. Compared to the Wang-Landau technique, the approach of Mauro et al. [17] has several key differences. First, the method considers isobaric conditions to allow for calculating the inherent structure density of states in an enthalpy landscape, accounting for the coupling of volume variations with atomic displacements. Also, an energy/enthalpy minimization is included after each trial displacement to compute the inherent structure density of states rather than that of the continuous landscape, as in the Wang-Landau technique. The approach of Mauro et al. also includes importance sampling with a Boltzmann weighting factor, which is used to preferentially sample the lower energy/enthalpy states, i.e., the Monte Carlo sampling is biased toward microstates that are more likely to be sampled by the system. With this approach, first an inherent structure quench probability is computed, i.e., the probability of occupying a given inherent structure upon instantaneous quenching to absolute zero. The density of states is subsequently calculated from this quench probability, accounting for the bias introduced by the Boltzmann-weighted sampling. Finally, the probability of sampling a particular inherent structure is weighted by the volume of the corresponding basin in either the 3N-dimensional phase space of the potential energy landscape or the (3N + 1)-dimensional space of the enthalpy landscape, where N is the number of particles in the system, consistent with the requirements of Eq. (18.12). Since the derivation and implementation of this technique is nontrivial, we refer the interested reader to Ref. [17] for full details.

Here we just outline the key steps of the algorithm, specifically for an enthalpy landscape under isobaric conditions:
1. Initialize the quench probability array, f_i, to unity. The array elements i cover the full range of volumes accessible to the system.
2. Choose a random displacement of atoms from a uniform distribution over the configurational space. With this uniform distribution, the probability of sampling a basin is directly proportional to the volume of that basin. Any change in volume, V, should be accompanied by a corresponding rescaling of the particle positions, r, according to

\frac{\partial r}{\partial V^{1/3}} = \frac{r}{V^{1/3}}.    (23.17)


3. Each random displacement is followed by an enthalpy minimization to the inherent structure configuration.
4. The trial displacement is accepted if

\text{rand}(0,1) \le \frac{f_i}{f_j} \exp\left(-\frac{H_j - H_i}{kT}\right),    (23.18)

where H_i is the enthalpy of the initial inherent structure i, H_j is the enthalpy of the new inherent structure j, T is the temperature of the simulation, and rand(0,1) is a random number drawn from a uniform distribution between 0 and 1.
5. If the new state j is accepted, the quench probability f_j is increased by some factor A > 1:

f_j^\text{new} = A f_j^\text{old}.    (23.19)

Following Wang and Landau [18], a good choice is A = e.
6. If the new state j is rejected, the system returns to the previous state i, and the quench probability of the previous state, f_i, is increased by the same factor:

f_i^\text{new} = A f_i^\text{old}.    (23.20)

7. Repeat steps 2–6. The update factor A decreases toward 1 throughout the simulation according to the procedure of Wang and Landau [18], and convergence is achieved as A approaches unity.
8. The inherent structure density of states, w_IS, normalized by the associated basin volumes, can then be computed from the quench probabilities using

w_\text{IS}[V(H)] = \frac{f[V(H),T]\, \exp(H/kT)}{\int f[V(H),T]\, \exp(H/kT)\, dH}.    (23.21)

The final result of the calculation is the inherent structure density of states weighted by the (3N + 1)-dimensional volume of their respective basins in the enthalpy landscape. This provides essential input for calculating the transition kinetics in any system, since the density of states provides the entropic contribution to the kinetics through the degeneracy factors discussed in Section 20.3.
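As a heavily simplified, hypothetical illustration of steps 1–8, the sketch below applies the scheme to a discrete "landscape" of three basins with assumed enthalpies and relative basin volumes. Picking a trial basin with probability proportional to its volume stands in for the uniform configurational displacement plus minimization of steps 2–3, and the quench-probability array is stored in log space to avoid numerical overflow. For this toy case, the volume-weighted density of states recovered via Eq. (23.21) should simply be proportional to the assumed basin volumes:

```python
import math
import random

def is_density_of_states(H, volume, kT=1.0, n_sweeps=60, seed=5):
    """Toy, Wang-Landau-style estimate of the volume-weighted inherent
    structure density of states for a discrete set of basins."""
    rng = random.Random(seed)
    n = len(H)
    lnf = [0.0] * n    # step 1: log of the quench-probability array f
    state = 0
    lnA = 1.0          # update factor A = e initially
    for _ in range(n_sweeps):
        for _ in range(2000):
            # steps 2-3: a volume-weighted pick mimics a uniform random
            # displacement followed by minimization to an inherent structure
            trial = rng.choices(range(n), weights=volume)[0]
            # step 4, Eq. (23.18), evaluated in log space
            ln_ratio = lnf[state] - lnf[trial] - (H[trial] - H[state]) / kT
            if ln_ratio >= 0.0 or rng.random() <= math.exp(ln_ratio):
                state = trial
            lnf[state] += lnA   # steps 5-6, Eqs. (23.19)-(23.20)
        lnA *= 0.8              # step 7: A decays toward unity
    # step 8, Eq. (23.21): w_IS proportional to f * exp(+H/kT), normalized
    lnw = [lf + h / kT for lf, h in zip(lnf, H)]
    shift = max(lnw)
    w = [math.exp(x - shift) for x in lnw]
    total = sum(w)
    return [x / total for x in w]

H = [0.0, 1.0, 2.0]        # hypothetical basin enthalpies (units of kT)
volume = [1.0, 2.0, 3.0]   # hypothetical relative basin volumes
print(is_density_of_states(H, volume))
```

The geometric decay of A here is a crude stand-in for the full Wang-Landau flatness-based schedule; it is sufficient for this three-basin toy but not for a realistic landscape.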


It is helpful to consider a simple example to better understand calculation of the inherent structure density of states. Let us consider the simple one-dimensional potential energy landscape in Figure 23.7. The Monte Carlo method discussed in this chapter can be applied at different temperatures, as shown in Figure 23.8. These temperatures correspond to the temperatures used in Eq. (23.18). The resulting basin occupation probabilities calculated from Monte Carlo are in excellent agreement with the exact solutions from equilibrium statistical mechanics. Regardless of the chosen temperature for the simulation, the same inherent structure density of states can be recovered using Eq. (23.21). Again, there is excellent agreement between the Monte Carlo results and the exact solution for this simple one-dimensional landscape, as demonstrated in Figure 23.9. The inherent structure density of states is weighted by the corresponding basin volumes, since the Monte Carlo trial displacements follow a uniform probability distribution.

23.10 Random Number Generators

It should be clear from this chapter that any Monte Carlo method involves generating a very large quantity of random numbers. Hence, a reliable random number generator is critically important to conduct a Monte

Figure 23.7 Simple one-dimensional potential energy landscape used for testing the Monte Carlo method of Section 23.9 for calculating inherent structure density of states. (After Mauro et al. [17]).


Figure 23.8 Basin occupation probabilities of the potential energy landscape in Figure 23.7 calculated at different temperatures: (a) 500 K, (b) 1000 K, (c) 2500 K, and (d) 5000 K. The results using the Monte Carlo approach of Section 23.9 converge to the exact solutions for this landscape. (After Mauro et al. [17]).

This book belongs to Alice Cartes ([email protected])

Copyright Elsevier 2023

462

Materials Kinetics

Figure 23.9 Inherent structure density of states weighted by corresponding basin volumes for the one-dimensional potential energy landscape in Figure 23.7. Regardless of which temperature is used in the Monte Carlo simulation in Figure 23.8, application of Eq. (23.21) yields the same result for the density of states. The Monte Carlo results are in excellent agreement with the exact basin-weighted inherent structure density of states for this landscape. (After Mauro et al. [17]).

Carlo simulation. In the case of atomistic simulations using the Metropolis Monte Carlo method, random numbers are used: • To randomly select particles for displacement; • To generate new configurations of the system; and • To decide whether a move should be accepted or rejected. Computers are deterministic devices, which faithfully run exactly the instructions that are written for them to execute. There is no intrinsic randomness in a computer. As a result, the sequence of numbers produced by a random number generator is not truly random. In the field of computer science, they are referred to as pseudo-random numbers. Random number generators are typically designed to produce a different sequence of numbers when a different random seed is provided. To increase the randomization of the process, usually the current internal clock time of the computer is used as the initial seed. Hence, each time the program is executed, a different random seed is used. The topic of random number generation for Monte Carlo simulations is an active area of research, with entire volumes devoted to the subject [21]. One simple method for generating pseudo-random numbers is the linear


congruential method. With this approach, each number in the sequence is generated by multiplying the previous number by a constant, adding a second constant, and taking the remainder when dividing by a third constant [3]. If we let x[i] denote the ith pseudo-random number in the sequence, then the linear congruential method can be summarized as:

x[1] = seed
x[i] = (x[i-1]·b1 + b2) % b3,    (23.22)

where b1, b2, and b3 are integer constants. The percent sign, %, represents the modulo operator, which returns the remainder after performing an integer division operation. In the case of Eq. (23.22), (x[i-1]·b1 + b2) is the dividend and b3 is the divisor. The remainder of the division operation (called the modulus) is assigned to x[i]. The modulo operator is used because the results of this operation are seemingly more "randomized" compared to the output of other basic arithmetic operations.
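As a concrete illustration, a generator following Eq. (23.22) can be written in a few lines. The constants below (b1 = 1664525, b2 = 1013904223, b3 = 2^32) are a widely used choice from the numerical literature, not values prescribed by the text:

```python
def lcg(seed, b1=1664525, b2=1013904223, b3=2**32):
    """Linear congruential generator: x[i] = (x[i-1]*b1 + b2) % b3.

    Yields an endless stream of pseudo-random integers in [0, b3).
    The same seed always reproduces the same sequence.
    """
    x = seed
    while True:
        x = (x * b1 + b2) % b3
        yield x

def uniform_stream(seed):
    """Map the integer stream onto pseudo-random floats in [0, 1)."""
    for x in lcg(seed):
        yield x / 2**32

# Reproducibility: identical seeds give identical sequences.
gen_a = lcg(42)
gen_b = lcg(42)
sample = [next(gen_a) for _ in range(5)]
assert sample == [next(gen_b) for _ in range(5)]
```

Such a simple generator is adequate for illustration, but production Monte Carlo codes typically rely on higher-quality generators (for example, the Mersenne Twister used by Python's built-in random module).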

23.11 Summary

Classical atomistic simulations are based on calculation of interatomic potentials, i.e., continuous functions that describe the potential energy associated with interactions between atoms. While molecular dynamics provides a deterministic method for tracking atomic trajectories in time, the Metropolis Monte Carlo technique relies on generating random configurations of atoms in phase space and using specific criteria to determine whether or not to accept each new configuration. The name "Monte Carlo" was coined by Metropolis owing to the technique's extensive use of random numbers. By relying on potential energy calculations and a stochastic sampling process, Metropolis Monte Carlo can provide an equilibrated material structure with less computational time compared to traditional molecular dynamics. Metropolis Monte Carlo can also be easily extended to different ensembles, incorporating volume changes in isobaric ensembles and random particle insertions and deletions in open ensembles. While the Metropolis Monte Carlo technique itself does not incorporate a time coordinate, the notion of time can be reintroduced with the kinetic Monte Carlo approach. The kinetic Monte Carlo algorithm randomly selects one transition from a set of possible transitions. The time coordinate is then propagated forward by the expectation time for overcoming that barrier. Since it focuses on configurational changes rather than vibrations, kinetic Monte Carlo can access much longer time scales than traditional molecular dynamics.
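The kinetic Monte Carlo step summarized above can be sketched in a few lines. This is a minimal illustration assuming a precomputed list of transition rates; following the standard rejection-free (n-fold way) formulation, the waiting time is drawn from an exponential distribution whose mean is the expectation time 1/R, where R is the total rate:

```python
import math
import random

def kmc_step(rates, rng=random.random):
    """One rejection-free kinetic Monte Carlo (n-fold way) step.

    Given the rates of all currently possible transitions, choose one
    transition with probability proportional to its rate and return
    (index of chosen transition, time increment).  The waiting time is
    exponentially distributed with mean 1/R, where R is the total rate.
    """
    total = sum(rates)
    # Select a transition: first index where the cumulative rate
    # exceeds a uniform random fraction of the total rate.
    target = rng() * total
    cumulative = 0.0
    for k, r in enumerate(rates):
        cumulative += r
        if cumulative >= target:
            break
    # Advance the simulation clock.
    dt = -math.log(rng()) / total
    return k, dt

# Example: two competing processes; the faster one is chosen ~3x as often.
index, dt = kmc_step([1.0, 3.0])
```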


Monte Carlo also provides an efficient way to compute the inherent structure density of states of a potential energy landscape or enthalpy landscape. An initially uniform density of states array is assumed. The elements of the array are updated by a multiplicative factor as new states are visited throughout the course of the simulation. After the array converges, the final inherent structure density of states is obtained, which is weighted by the volumes of the corresponding basins. This provides crucial information for specifying the degeneracy factors used in calculating the kinetics and statistical mechanics of the energy landscape. Monte Carlo simulations rely on extensive random number generation. For better or worse, there is no such thing as a true random number generated by a computer. Hence, algorithms have been developed to generate a sequence of pseudo-random numbers. A different sequence of pseudo-random numbers can be generated by changing the random seed, which is used as input to the random number generator. Typically, the internal clock time of the computer is used as the random seed, which ensures that a different sequence of pseudo-random numbers is generated each time the Monte Carlo program is executed.
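The multiplicative density-of-states update summarized above is closely related to Wang-Landau sampling [18-20]. The sketch below applies such a scheme to a deliberately simple toy system of independent two-state units, whose exact density of states is binomial; the epoch length, flatness criterion, and final modification factor are illustrative assumptions, not values from the text:

```python
import math
import random

def wang_landau_dos(n_bits=8, steps_per_epoch=8000, f_final=1e-4,
                    flatness=0.8, seed=1):
    """Estimate ln g(E) for a toy system of n_bits independent two-state
    units, where the 'energy' E is the number of units in the up state.

    An initially flat ln g array is updated by a multiplicative factor
    (an additive increment ln_f in log space) every time a state is
    visited.  Whenever the visit histogram is sufficiently flat, ln_f is
    halved and the histogram is reset, in the spirit of Wang-Landau
    sampling."""
    rng = random.Random(seed)
    n_levels = n_bits + 1
    ln_g = [0.0] * n_levels      # ln g(E), initially uniform
    hist = [0] * n_levels
    ln_f = 1.0                   # log of the modification factor
    state = [rng.randint(0, 1) for _ in range(n_bits)]
    energy = sum(state)
    while ln_f > f_final:
        for _ in range(steps_per_epoch):
            i = rng.randrange(n_bits)               # trial: flip one unit
            new_energy = energy + (1 - 2 * state[i])
            # Accept with probability min(1, g(E_old)/g(E_new)).
            if math.log(rng.random()) < ln_g[energy] - ln_g[new_energy]:
                state[i] ^= 1
                energy = new_energy
            ln_g[energy] += ln_f                    # multiplicative update
            hist[energy] += 1
        if min(hist) > flatness * sum(hist) / n_levels:
            hist = [0] * n_levels                   # histogram is flat:
            ln_f *= 0.5                             # refine the factor
    return ln_g

# For this toy system the exact density of states is binomial: g(E) = C(8, E).
ln_g = wang_landau_dos()
```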

Exercises

(23.1) Comparing molecular dynamics and Monte Carlo techniques:
(a) Discuss three advantages of molecular dynamics over Monte Carlo.
(b) Discuss three advantages of Monte Carlo over molecular dynamics.

(23.2) Using a program such as Excel or MATLAB, implement Monte Carlo integration of a circle as in Figure 23.1.
(a) Plot the estimated value of π as a function of the number of shots.
(b) How many shots are necessary to achieve convergence within 0.01% of the actual value of π?

(23.3) Metropolis Monte Carlo simulations typically adjust the trial particle displacements to achieve a target acceptance ratio of around α(i → j) ≈ 40%.
(a) What is the problem with running Monte Carlo simulations if the acceptance ratio is too low?
(b) What is the problem with running Monte Carlo simulations if the acceptance ratio is too high?


(c) If the acceptance ratio is too low, how should the particle displacement magnitude be adjusted? Why?

(23.4) Derive the acceptance criterion for particle insertion/deletion trials in the grand canonical ensemble.

(23.5) Search the literature for an application of the kinetic Monte Carlo approach in materials modeling. Provide the reference.
(a) What is the system under study, and what is the importance of this research?
(b) How was the kinetic Monte Carlo approach implemented for this system? Provide details of the simulations.
(c) What are the relevant time scales accessed through the kinetic Monte Carlo simulations? Could alternative methods be used to access the same time scales? Why or why not?

References
[1] M. E. J. Newman and G. T. Barkema, Monte Carlo Methods in Statistical Physics, Oxford University Press (1999).
[2] D. P. Landau and K. Binder, A Guide to Monte Carlo Methods in Statistical Physics, Cambridge University Press (2000).
[3] M. P. Allen and D. J. Tildesley, Computer Simulation of Liquids, 2nd ed., Oxford University Press (2017).
[4] N. Metropolis, A. W. Rosenbluth, M. N. Rosenbluth, A. H. Teller, and E. Teller, "Equation of State Calculations by Fast Computing Machines," J. Chem. Phys. 21, 1087 (1953).
[5] N. Metropolis and S. Ulam, "The Monte Carlo Method," J. Am. Stat. Assoc. 44, 335 (1949).
[6] P. W. Glynn and D. L. Iglehart, "Importance Sampling for Stochastic Simulations," Manage. Sci. 35, 1367 (1989).
[7] M. S. Oh and J. O. Berger, "Adaptive Importance Sampling in Monte Carlo Integration," J. Stat. Comp. Simul. 41, 143 (1992).
[8] C. J. Geyer, "Practical Markov Chain Monte Carlo," Stat. Sci. 7, 473 (1992).
[9] A. Z. Panagiotopoulos, "Adsorption and Capillary Condensation of Fluids in Cylindrical Pores by Monte Carlo Simulation in the Gibbs Ensemble," Mol. Phys. 62, 701 (1987).
[10] A. Z. Panagiotopoulos, "Direct Determination of Fluid Phase Equilibria by Simulation in the Gibbs Ensemble: A Review," Mol. Simul. 9, 1 (1992).
[11] K. A. Fichthorn and W. H. Weinberg, "Theoretical Foundations of Dynamical Monte Carlo Simulations," J. Chem. Phys. 95, 1090 (1991).
[12] C. C. Battaile, "The Kinetic Monte Carlo Method: Foundation, Implementation, and Application," Comp. Meth. Appl. Mech. Eng. 197, 3386 (2008).
[13] T. P. Schulze, "Efficient Kinetic Monte Carlo Simulation," J. Comp. Phys. 227, 2455 (2008).
[14] J. C. Mauro and J. Du, "Achieving Long Time Scale Simulations of Glass-Forming Systems," Comput. Theo. Chem. 987, 122 (2012).


[15] T. M. Rodgers, J. D. Madison, and V. Tikare, "Simulation of Metal Additive Manufacturing Microstructures using Kinetic Monte Carlo," Comput. Mater. Sci. 135, 78 (2017).
[16] M. Crose, W. Zhang, A. Tran, and P. D. Christofides, "Multiscale Three-Dimensional CFD Modeling for PECVD of Amorphous Silicon Thin Films," Comput. Chem. Eng. 113, 184 (2018).
[17] J. C. Mauro, R. J. Loucks, J. Balakrishnan, and S. Raghavan, "Monte Carlo Method for Computing Density of States and Quench Probability of Potential Energy and Enthalpy Landscapes," J. Chem. Phys. 126, 194103 (2007).
[18] F. Wang and D. P. Landau, "Efficient, Multiple-Range Random Walk Algorithm to Calculate the Density of States," Phys. Rev. Lett. 86, 2050 (2001).
[19] F. Wang and D. P. Landau, "Determining the Density of States for Classical Statistical Models: A Random Walk Algorithm to Produce a Flat Histogram," Phys. Rev. E 64, 056101 (2001).
[20] D. P. Landau, S. H. Tsai, and M. Exler, "A New Approach to Monte Carlo Simulations in Statistical Physics: Wang-Landau Sampling," Am. J. Phys. 72, 1294 (2004).
[21] J. E. Gentle, Random Number Generation and Monte Carlo Methods, Springer Science & Business Media (2006).


CHAPTER 24

Fluctuations in Condensed Matter

Materials Kinetics, ISBN 978-0-12-823907-0, https://doi.org/10.1016/B978-0-12-823907-0.00001-7, © 2021 Elsevier Inc. All rights reserved.

24.1 What are Fluctuations?

As materials scientists and engineers, we tend to think about properties in terms of their average values. However, even at equilibrium, the properties of a system exhibit fluctuations over both time and space. Microscopic fluctuations govern many material properties, and they also play a critical role in controlling material performance for a variety of practical engineering applications.

A fluctuation is defined as the deviation of the instantaneous or local value of a property from its mean value. If we consider the mean calculated in an ensemble average sense, then a fluctuation in some property, A, is defined as:

\delta A = A - \langle A \rangle,    (24.1)

where δA is the fluctuation, A is the instantaneous or localized value of the property, and ⟨A⟩ is the average value of the property. The magnitude of the property fluctuations is typically represented in terms of a standard deviation or variance of the fluctuations. The variance of the fluctuations in property A, σ²_A, is defined as:

\sigma_A^2 = \langle (\delta A)^2 \rangle = \langle (A - \langle A \rangle)^2 \rangle.    (24.2)

Eq. (24.2) can be equivalently expressed as

\sigma_A^2 = \langle A^2 \rangle - \langle A \rangle^2,    (24.3)

or

\sigma_A^2 = \sum_j p_j A_j^2 - \langle A \rangle^2,    (24.4)


where p_j is the probability of occupying a given state j, and A_j is the value of the property associated with that state.

Fluctuations are critically important in governing material properties [1,2]. For example, Rayleigh scattering is a direct result of localized fluctuations in refractive index [3]. Nucleation and phase separation are governed by localized fluctuations in structure and bonding [4]. Mechanical properties and fracture behavior are also influenced by localized structural fluctuations [5,6]. Thermal properties such as heat capacity and thermal expansion coefficient are a direct result of fluctuations in energy and volume [7]. Moreover, glass and polymer relaxation is composed of different relaxation modes operating over a range of time scales, each with a different underlying structural origin [8]. In this chapter, we will discuss the basic theory of fluctuations and its connection to thermodynamic properties. We will also introduce autocorrelation functions and the Green-Kubo formula. After covering this basic theory, we will discuss the importance of fluctuations in several practical applications, including dynamical heterogeneities in liquids and the distribution of relaxation modes in industrial glass for high performance displays.

24.2 Statistical Mechanics of Fluctuations

Thermodynamic properties are not constant, even at equilibrium, as microscopic fluctuations lead to fluctuations in macroscopic properties [1,2]. For example, the variance of energy fluctuations in time, σ²_E, is:

\sigma_E^2 = \overline{(E - \bar{E})^2} = \overline{E^2} - \bar{E}^2 = \sum_j p_j E_j^2 - \bar{E}^2,    (24.5)

where E is the instantaneous value of internal energy and the overbars indicate averaging over time. In the canonical (NVT) ensemble, the equilibrium probability of occupying microstate j is given by

p_j = \frac{1}{Q} e^{-\beta E_j},    (24.6)

where Q is the partition function and

\beta \equiv \frac{1}{kT}.    (24.7)

As usual, k is Boltzmann's constant and T is absolute temperature. The first term on the rightmost side of Eq. (24.5) can be written as:

\sum_j p_j E_j^2 = \frac{1}{Q} \sum_j E_j^2 e^{-\beta E_j} = -\frac{1}{Q} \frac{\partial}{\partial \beta} \sum_j E_j e^{-\beta E_j}.    (24.8)

Applying the product rule to the derivative, we have

\sum_j p_j E_j^2 = -\frac{1}{Q} \frac{\partial}{\partial \beta} \left( \bar{E} Q \right) = -\frac{\partial \bar{E}}{\partial \beta} - \bar{E} \frac{\partial \ln Q}{\partial \beta}.    (24.9)

Eq. (24.9) then simplifies as:

\sum_j p_j E_j^2 = kT^2 \frac{\partial \bar{E}}{\partial T} + \bar{E}^2.    (24.10)

Substituting Eq. (24.10) into Eq. (24.5), the variance of the internal energy fluctuations is given by:

\sigma_E^2 = \sum_j p_j E_j^2 - \bar{E}^2 = kT^2 \frac{\partial \bar{E}}{\partial T}.    (24.11)

Note that the isochoric heat capacity is given by

C_V = \left( \frac{\partial \bar{E}}{\partial T} \right)_{N,V}.    (24.12)

Combining Eqs. (24.11) and (24.12), the heat capacity is directly proportional to the variance of the energy fluctuations by

\sigma_E^2 = kT^2 \frac{\partial \bar{E}}{\partial T} = kT^2 C_V.    (24.13)

Hence, the isochoric heat capacity can be calculated directly from the variance of the energy fluctuations in the canonical ensemble:

C_V = \frac{1}{kT^2} \left\langle (\delta E)^2 \right\rangle_{NVT}.    (24.14)

Conversely, Eq. (24.14) can be used to calculate the variance of the energy fluctuations from a measured value of heat capacity.
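The identity in Eq. (24.13) can be checked numerically for any discrete level spectrum. The sketch below uses an illustrative three-level spectrum (not from the text) and compares σ²_E, computed directly from the canonical probabilities, with kT²·C_V, where C_V is obtained by numerically differentiating the mean energy (with k = 1 for simplicity):

```python
import math

def canonical_stats(levels, kT):
    """Mean energy and energy variance for discrete energy levels in the
    canonical ensemble, with p_j = exp(-E_j/kT)/Q (Eq. 24.6), k = 1."""
    weights = [math.exp(-E / kT) for E in levels]
    Q = sum(weights)
    p = [wj / Q for wj in weights]
    E_mean = sum(pj * E for pj, E in zip(p, levels))
    E2_mean = sum(pj * E * E for pj, E in zip(p, levels))
    return E_mean, E2_mean - E_mean**2

levels = [0.0, 1.0, 2.5]   # illustrative three-level spectrum
kT = 0.8

# Left side of Eq. (24.13): variance of the energy fluctuations.
_, var_E = canonical_stats(levels, kT)

# Right side of Eq. (24.13): kT^2 * C_V, with C_V = dE/dT evaluated by a
# central finite difference (with k = 1, T and kT coincide numerically).
h = 1e-5
E_plus, _ = canonical_stats(levels, kT + h)
E_minus, _ = canonical_stats(levels, kT - h)
assert abs(var_E - kT**2 * (E_plus - E_minus) / (2 * h)) < 1e-6
```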


Likewise, a variety of other thermodynamic properties can be calculated from underlying microscopic fluctuations of the system. For example, the isobaric heat capacity, C_P, is related to the variance of the enthalpy fluctuations, ⟨(δH)²⟩_NPT, in the isothermal-isobaric ensemble by

\sigma_H^2 = \left\langle (\delta H)^2 \right\rangle_{NPT} = kT^2 C_P.    (24.15)

The volumetric thermal expansion coefficient (α) is given by the cross-correlation of volume fluctuations (δV) and enthalpy fluctuations (δH) in the isothermal-isobaric ensemble [1,2]:

\left\langle \delta V\, \delta H \right\rangle_{NPT} = kT^2 V \alpha.    (24.16)

Eq. (24.16) implies that a higher degree of positive correlation between the volume fluctuations and enthalpy fluctuations leads to a greater thermal expansion coefficient. A material with a zero thermal expansion coefficient could be designed by eliminating any correlation between these two types of fluctuations. Likewise, a material with a negative thermal expansion coefficient would have a negative correlation between volume fluctuations and enthalpy fluctuations.

24.3 Fluctuations in Broken Ergodic Systems

Eqs. (24.13)-(24.16) demonstrate the direct connection between the microscopic fluctuations in a system and its macroscopic thermodynamic properties. In an ergodic system, the variance of the fluctuations of a property, A, in the time domain would be equal to the variance of the same property in an ensemble sense, i.e., \overline{(\delta A)^2} = \langle (\delta A)^2 \rangle. However, many systems exhibit broken ergodicity (see Chapter 19), where \overline{(\delta A)^2} \neq \langle (\delta A)^2 \rangle. In a broken ergodic system with a Deborah number greater than unity, the internal relaxation time scale of the system is longer than the experimental observation time. Hence, the time-domain fluctuations of a broken ergodic system are generally less than those of an equivalent ergodic system [7].

Let us consider the example of enthalpy fluctuations in a broken ergodic system. For any measurement or observation of the system made over time, the relevant fluctuations are in the time domain rather than the ensemble domain. The time-domain variance of the enthalpy is defined as:

\sigma_H^2 = \overline{(H - \bar{H})^2} = \overline{H^2} - \bar{H}^2.    (24.17)


Considering Palmer's treatment of broken ergodicity [9], the individual microstates are grouped together into components satisfying the conditions of internal ergodicity and confinement (see Section 19.3). The partition function, Q_α, of a given component, α, is:

Q_\alpha = \sum_{j \in \alpha} e^{-\beta H_j},    (24.18)

where the summation is over all microstates, j, contained within component α. Within that component:

\overline{H_\alpha^2} = \frac{1}{Q_\alpha} \sum_{j \in \alpha} H_j^2 e^{-\beta H_j} = -\frac{1}{Q_\alpha} \frac{\partial}{\partial \beta} \sum_{j \in \alpha} H_j e^{-\beta H_j}.    (24.19)

Following our procedure from the previous section:

\overline{H_\alpha^2} = -\frac{1}{Q_\alpha} \frac{\partial}{\partial \beta} \left( \bar{H}_\alpha Q_\alpha \right) = -\frac{\partial \bar{H}_\alpha}{\partial \beta} - \bar{H}_\alpha \frac{\partial \ln Q_\alpha}{\partial \beta}.    (24.20)

After simplification, we have

\overline{H_\alpha^2} = kT^2 \frac{\partial \bar{H}_\alpha}{\partial T} + \bar{H}_\alpha^2.    (24.21)

Combining Eq. (24.21) with Eq. (24.17), the variance of enthalpy fluctuations within component α is:

\left( \sigma_H^2 \right)_\alpha = kT^2 \frac{\partial \bar{H}_\alpha}{\partial T}.    (24.22)

Hence, the contribution to heat capacity from a given component α is:

C_{p,\alpha} = \frac{1}{kT^2} \left( \sigma_H^2 \right)_\alpha.    (24.23)

If we define P_α as the probability of the system being confined in component α, then the expectation value of heat capacity can be calculated through an ensemble average over all components:

\langle C_p \rangle = \frac{1}{kT^2} \left\langle \sigma_H^2 \right\rangle = \frac{1}{kT^2} \sum_\alpha P_\alpha \left( \sigma_H^2 \right)_\alpha.    (24.24)

Hence, Eq. (24.24) gives the measured heat capacity of a broken ergodic system. Note that both the enthalpy fluctuations and heat capacity vanish in the limit of zero observation time,


\lim_{t_{obs} \to 0} \sigma_H^2 = 0,    (24.25)

and in the limit of absolute zero temperature,

\lim_{T \to 0} \sigma_H^2 = 0.    (24.26)

In either of these limits, each component consists of just a single microstate. With only one accessible microstate, there are no fluctuations and hence a zero heat capacity.

A more generalized equation can be presented within the framework of continuously broken ergodicity [7], which relaxes Palmer's assumptions of internal ergodicity and confinement (see Section 19.4 or Ref. [10]). If we let f_{i,j} denote the conditional probability of the system reaching microstate j after starting in microstate i, then the variance of enthalpy fluctuations starting from microstate i is:

\left( \sigma_H^2 \right)_i = \sum_{j=1}^{\Omega} f_{i,j} H_j^2 - \left( \sum_{j=1}^{\Omega} f_{i,j} H_j \right)^2.    (24.27)

The expectation value of the enthalpy fluctuations is calculated through a weighted average over all possible starting microstates:

\left\langle \left( \sigma_H^2 \right)_i \right\rangle = \sum_{i=1}^{\Omega} p_i \left[ \sum_{j=1}^{\Omega} f_{i,j} H_j^2 - \left( \sum_{j=1}^{\Omega} f_{i,j} H_j \right)^2 \right],    (24.28)

where p_i is the probability of starting in microstate i. The heat capacity can then be calculated from the enthalpy fluctuations by [7]:

\langle C_p \rangle = \frac{1}{kT^2} \sum_{i=1}^{\Omega} p_i \left[ \sum_{j=1}^{\Omega} f_{i,j} H_j^2 - \left( \sum_{j=1}^{\Omega} f_{i,j} H_j \right)^2 \right].    (24.29)

For a fully confined system (i.e., in the limit of either zero time or zero temperature), the conditional probabilities reduce to a Kronecker delta function, f_{i,j} = \delta_{ij}, such that:

\lim_{t_{obs} \to 0} \langle C_p \rangle = \frac{1}{kT^2} \sum_{i=1}^{\Omega} p_i \left[ \sum_{j=1}^{\Omega} \delta_{ij} H_j^2 - \left( \sum_{j=1}^{\Omega} \delta_{ij} H_j \right)^2 \right].    (24.30)


Given the definition of the Kronecker delta function, the terms in the inner summations are nonzero only for i = j. Hence, Eq. (24.30) simplifies as:

\lim_{t_{obs} \to 0} \langle C_p \rangle = \frac{1}{kT^2} \sum_{i=1}^{\Omega} p_i \left( H_i^2 - H_i^2 \right) = 0,    (24.31)

which is necessarily equal to zero. Hence, the heat capacity of a system must vanish in either the limit of zero observation time or zero temperature. Experimental proof of this result is shown in Figure 24.1, where the measured heat capacity of a barium boroaluminosilicate glass is plotted at low temperatures. The heat capacity clearly vanishes in the approach to absolute zero temperature. This is further proof of the universal applicability of the Third Law of Thermodynamics, even for nonequilibrium systems, since the entropy of the system must also vanish in the absence of any fluctuations.

If we consider the opposite limit, i.e., the limit of infinite observation time, then the system becomes ergodic. For an ergodic system, the starting microstate no longer matters, and the conditional probabilities are given by f_{i,j} = p_j^{eq}, where p_j^{eq} is the equilibrium probability of occupying microstate j. Substituting into Eq. (24.28), we have:

\lim_{t_{obs} \to \infty} \langle C_p \rangle = \frac{1}{kT^2} \sum_{i=1}^{\Omega} p_i \left[ \sum_{j=1}^{\Omega} p_j^{eq} H_j^2 - \left( \sum_{j=1}^{\Omega} p_j^{eq} H_j \right)^2 \right].    (24.32)

Figure 24.1 Low temperature isobaric heat capacity, C_P, of a barium boroaluminosilicate glass (Corning Code 7059) for two different thermal histories having fictive temperatures of T_f = 868 K and 953 K. Regardless of thermal history, the heat capacity vanishes in the limit of absolute zero temperature. This is due to a vanishing of time-domain fluctuations in the zero temperature limit. (Data are from Ref. [7].)


Since the inner summation does not depend on i, we can rearrange Eq. (24.32) as:

\lim_{t_{obs} \to \infty} \langle C_p \rangle = \frac{1}{kT^2} \left[ \sum_{j=1}^{\Omega} p_j^{eq} H_j^2 - \left( \sum_{j=1}^{\Omega} p_j^{eq} H_j \right)^2 \right] \sum_{i=1}^{\Omega} p_i.    (24.33)

Given that the p_i values sum to unity, Eq. (24.33) simplifies to:

\lim_{t_{obs} \to \infty} \langle C_p \rangle = \frac{1}{kT^2} \left[ \sum_{j=1}^{\Omega} p_j^{eq} H_j^2 - \left( \sum_{j=1}^{\Omega} p_j^{eq} H_j \right)^2 \right] > 0,    (24.34)

which gives a positive heat capacity in the ergodic limit. The difference between Eqs. (24.31) and (24.34) underscores the importance of accounting for broken ergodicity when calculating fluctuations in the properties of a system, as well as their impact on the macroscopic behavior of the system.
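The two limiting behaviors can be checked numerically for a small illustrative set of microstates (the enthalpies below are arbitrary, not from the text). The sketch implements Eq. (24.29) directly and evaluates it in the fully confined limit (f_ij = δ_ij) and the ergodic limit (f_ij = p_j^eq):

```python
import math

def heat_capacity(p_start, f, H, kT):
    """<Cp> from Eq. (24.29): the p_i-weighted average of the enthalpy
    variance reachable from each starting microstate i, divided by kT^2."""
    n = len(H)
    total = 0.0
    for i, pi in enumerate(p_start):
        m1 = sum(f[i][j] * H[j] for j in range(n))
        m2 = sum(f[i][j] * H[j] ** 2 for j in range(n))
        total += pi * (m2 - m1 ** 2)
    return total / kT**2

H = [0.0, 1.0, 3.0]     # illustrative microstate enthalpies (k = 1 units)
kT = 1.0
w = [math.exp(-Hj / kT) for Hj in H]
p_eq = [wi / sum(w) for wi in w]

# Fully confined limit, Eq. (24.31): f_ij = delta_ij  ->  <Cp> = 0.
delta = [[1.0 if i == j else 0.0 for j in range(3)] for i in range(3)]

# Ergodic limit, Eq. (24.34): f_ij = p_j^eq  ->  <Cp> > 0.
ergodic = [p_eq[:] for _ in range(3)]

assert heat_capacity(p_eq, delta, H, kT) == 0.0
assert heat_capacity(p_eq, ergodic, H, kT) > 0.0
```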

24.4 Time Correlation Functions

The correlation between properties in time allows us to calculate transport properties. The time correlation function between fluctuations in two properties, A and B, is defined as:

C_{AB}(t) = \langle \delta A(t)\, \delta B(0) \rangle.    (24.35)

Properties that exhibit highly correlated fluctuations will have a high value of C_{AB}(t). For properties that are completely uncorrelated, Eq. (24.35) becomes zero. It is also common to define a normalized time correlation function, given by

c_{AB}(t) = \frac{\langle \delta A(t)\, \delta B(0) \rangle}{\sigma_A \sigma_B},    (24.36)

where σ_A and σ_B are the standard deviations of A and B, respectively. If A and B are the same property, i.e., A = B, then Eq. (24.35) is called the autocorrelation function:

C_{AA}(t) = \langle \delta A(t)\, \delta A(0) \rangle.    (24.37)

The autocorrelation function gives the correlation of a property with itself over time, i.e., a slowly varying property has a higher autocorrelation function compared to a property that varies more quickly.
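A normalized autocorrelation function of this form can be estimated from a single long time series. The sketch below uses an illustrative first-order autoregressive signal (a stand-in for a slowly varying property), for which the autocorrelation decays roughly as a^lag; all parameters are illustrative:

```python
import random

def autocorrelation(x, max_lag):
    """Normalized autocorrelation c(lag) = <dx(t) dx(t+lag)> / sigma^2,
    estimated from a single time series (cf. Eqs. 24.36 and 24.37)."""
    n = len(x)
    mean = sum(x) / n
    dx = [xi - mean for xi in x]
    var = sum(d * d for d in dx) / n
    c = []
    for lag in range(max_lag + 1):
        cov = sum(dx[i] * dx[i + lag] for i in range(n - lag)) / (n - lag)
        c.append(cov / var)
    return c

# Synthetic slowly varying signal: a first-order autoregressive process
# x[t] = a*x[t-1] + noise, whose autocorrelation decays roughly as a**lag.
rng = random.Random(0)
a = 0.9
x = [0.0]
for _ in range(20000):
    x.append(a * x[-1] + rng.gauss(0.0, 1.0))

c = autocorrelation(x, 20)
assert abs(c[0] - 1.0) < 1e-9   # zero-lag autocorrelation is unity
```

As expected for a slowly varying property, the estimated autocorrelation is near unity at short lags and decays toward zero at longer lags.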


Time correlation functions are useful because they provide information regarding the kinetic properties of a system. The integrals of the correlation functions over time can often be directly related to macroscopic transport coefficients. Moreover, the Fourier transforms of time correlation functions can be related to experimentally measured spectra [1]. Such relationships can be derived using linear response theory, where applications of time-dependent perturbations to a system are assumed to induce a time-dependent response that is linearly related to the perturbation.

Phenomenological coefficients that describe transport processes and other time-dependent functions can, in general, be expressed as integrals of autocorrelation functions. To this end, the Green-Kubo relation for an arbitrary transport coefficient, γ, can be written as an infinite time integral over an autocorrelation function at equilibrium:

\gamma = \int_0^\infty \left\langle \dot{A}(t)\, \dot{A}(0) \right\rangle dt,    (24.38)

where \dot{A} denotes the time derivative of A. For each Green-Kubo relation, as in Eq. (24.38), there is a corresponding Einstein relation [1]:

2\gamma t = \left\langle \left( A(t) - A(0) \right)^2 \right\rangle,    (24.39)

which can be used to calculate the transport coefficient, γ. For example, let us consider diffusivity in three dimensions. The Green-Kubo relation for the diffusion coefficient, D, is:

D = \frac{1}{3} \int_0^\infty \left\langle \mathbf{v}(t) \cdot \mathbf{v}(0) \right\rangle dt,    (24.40)

where v is the particle velocity vector, i.e., the time derivative of its position, r. The corresponding Einstein relation for the diffusion coefficient is:

6Dt = \left\langle \left| \mathbf{r}(t) - \mathbf{r}(0) \right|^2 \right\rangle.    (24.41)

This is the equivalent result as from our previous derivation of the Einstein diffusion equation in terms of the mean squared displacement of particles in Section 7.7. Other transport properties such as shear viscosity, volume viscosity, thermal conductivity, etc., can be calculated in a similar manner [1,2].
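The Einstein relation of Eq. (24.41) can be illustrated with a simple lattice random walk, for which the diffusion coefficient is known exactly: if a walker takes one unit step along a randomly chosen axis per unit time, the mean squared displacement grows as t, so D = 1/6 in lattice units. The walker and step counts below are illustrative:

```python
import random

def msd_random_walk(n_walkers, n_steps, rng):
    """Mean squared displacement <|r(t) - r(0)|^2> for 3D lattice walkers
    taking one unit step along a randomly chosen axis per unit time."""
    total = 0.0
    for _ in range(n_walkers):
        pos = [0, 0, 0]
        for _ in range(n_steps):
            axis = rng.randrange(3)
            pos[axis] += rng.choice((-1, 1))
        total += sum(c * c for c in pos)
    return total / n_walkers

rng = random.Random(42)
t = 100
msd = msd_random_walk(4000, t, rng)

# Einstein relation, Eq. (24.41): 6*D*t = <|r(t) - r(0)|^2>.
D = msd / (6 * t)
```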


24.5 Dynamical Heterogeneities

Chemical or structural fluctuations in condensed systems can lead to regions of faster or slower kinetics, which are known as dynamical heterogeneities. Let us consider the example in Figure 24.2, which shows results from molecular dynamics simulations of a calcium aluminosilicate liquid [11]. The molecular dynamics simulations are conducted using the isoconfigurational ensemble [12]. With the isoconfigurational ensemble, a sequence of molecular dynamics simulations is performed, each starting with the same configuration of the system. While the starting positions of all the atoms are the same, the initial velocities are randomly drawn from the Maxwell-Boltzmann distribution of Eq. (22.19) at the beginning of each simulation. The simulations are all performed over the same total time, and the squared displacement of each atom is recorded. The mean squared displacement of each atom is then calculated by averaging the results over each of the molecular dynamics simulations in the isoconfigurational ensemble, i.e., the squared displacements are averaged over the ensemble of simulations performed starting from the same configuration but with different randomized starting velocities. This mean squared displacement averaged over the isoconfigurational ensemble defines the dynamic propensity of the atom. A normalized dynamic propensity can then be calculated by dividing the dynamic propensity of a single atom by the average dynamic propensity of all atoms of that element. The results of the simulations thereby reveal regions of the material where the atoms tend to


Figure 24.2 Dynamical heterogeneities in a calcium aluminosilicate liquid. (a) The atomic structure of the liquid calculated from molecular dynamics simulations, where the atoms are colored by element type. (b) The same structure, but where the atoms are colored by their normalized dynamic propensity. The dynamical heterogeneities are found to exist due to localized compositional variations in the liquid [11].


exhibit greater mobility compared to other regions, which is characteristic of dynamical heterogeneities. In Figure 24.2(a), the initial structure of the calcium aluminosilicate liquid is plotted by coloring each atom by its element type. In Figure 24.2(b), the same structure is plotted, but where the atoms are colored by their normalized dynamic propensity. This latter plot clearly reveals the regions of higher and lower dynamic propensity. Atoms in regions of high dynamic propensity have greater mobility compared to atoms in regions of low dynamic propensity. In Ref. [11], it is shown that these dynamical heterogeneities are a direct result of chemical fluctuations in the liquid, viz., regions of faster kinetics have a greater localized concentration of calcium compared to regions of slower kinetics. Dynamical heterogeneities are a natural result of such localized fluctuations in composition or structure.
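The isoconfigurational averaging procedure can be caricatured without any molecular dynamics at all. In the toy sketch below, scalar "particles" with site-dependent mobilities stand in for atoms in slow and fast regions; every replica starts from the same configuration but evolves with independent noise, and the propensity is the replica-averaged squared displacement. All names and parameters are illustrative assumptions, not values from Ref. [11]:

```python
import random

def dynamic_propensity(mobilities, n_replicas, n_steps, seed=0):
    """Toy isoconfigurational ensemble: every replica starts from the same
    configuration (all particles at the origin) but evolves with its own
    independent 'thermal' noise.  Particle i takes Gaussian steps scaled
    by its local mobility; its propensity is the squared displacement
    averaged over all replicas."""
    n = len(mobilities)
    sq_disp = [0.0] * n
    for r in range(n_replicas):
        rng = random.Random(seed + r)     # fresh randomized velocities
        for i, mob in enumerate(mobilities):
            x = 0.0
            for _ in range(n_steps):
                x += mob * rng.gauss(0.0, 1.0)
            sq_disp[i] += x * x
    return [s / n_replicas for s in sq_disp]

# Particles 0-4 sit in a 'slow' region; particles 5-9 in a 'fast' region.
mobilities = [0.5] * 5 + [2.0] * 5
prop = dynamic_propensity(mobilities, n_replicas=200, n_steps=50)

slow = sum(prop[:5]) / 5
fast = sum(prop[5:]) / 5
assert fast > slow    # the fast region shows higher dynamic propensity
```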

24.6 Nonmonotonic Relaxation of Fluctuations

While most relaxation studies consider the evolution of average property values in their approach to equilibrium, the relaxation behavior of fluctuations (i.e., the relaxation of the variance or standard deviation of property values) exhibits interesting nonmonotonic behavior. For example, consider relaxation of the three selenium glasses shown in Figure 24.3 [13]. Each of the three glasses is initially equilibrated at a starting fictive temperature and then instantaneously quenched to room temperature. Glass I and Glass II both have an initial fictive temperature higher than room temperature, and

Figure 24.3 Isothermal relaxation of selenium glass at room temperature after prior equilibration at three different fictive temperatures. (a) Relaxation of the average density. (b) Nonmonotonic relaxation of density fluctuations as the three glasses relax toward equilibrium. (Data are from Mauro et al. [13].)


Glass III has an initial fictive temperature below room temperature. The subsequent relaxation then proceeds isothermally at room temperature. As expected, the relaxation of the average density in Figure 24.3(a) is monotonic. However, the relaxation of the standard deviation of density fluctuations exhibits a nonmonotonic behavior, as shown in Figure 24.3(b). The two glasses that start at a higher fictive temperature exhibit a minimum in density fluctuations during the relaxation process, and the glass that starts from a lower fictive temperature exhibits a maximum in density fluctuations during relaxation. The results in Figure 24.3 are the output of enthalpy landscape calculations using the master equation approach described in Chapter 20. This nonmonotonic relaxation behavior of density fluctuations has also been experimentally confirmed using Corning Code 7059 glass, as shown in Figure 24.4. As explained in Ref. [13], the experiment was conducted on flat sheets of glass initially equilibrated at 955 K, followed by rapid quenching in flowing air to 953 K. Small-angle x-ray scattering (SAXS) measurements were conducted to measure the normalized evolution of


Figure 24.4 Isothermal relaxation of the density fluctuations in Corning Code 7059 glass at 868 K after prior equilibration at 953 K. The red triangles and blue circles represent results from two separate runs at the Advanced Light Source synchrotron radiation facility at Lawrence Berkeley Laboratory. (Data are from Mauro et al. [13].)


density fluctuations. The experimental results are consistent with those predicted from enthalpy landscape modeling, viz., after down-quenching, the density fluctuations exhibit a distinct minimum during isothermal relaxation. The explanation of this intrinsically nonmonotonic relaxation of density fluctuations is shown in Figure 24.5. The initial distribution of localized molar volumes is plotted by the solid black curve. During relaxation, this black curve relaxes toward the new equilibrium distribution, which is shown by the dashed gray line. However, the relaxation does not proceed by simultaneously shifting the entire black curve. Rather, localized regions having higher molar volume, i.e., the high molar volume tail of the distribution, relax significantly faster compared to regions of low molar volume. Hence, the high molar volume side of the distribution relaxes much faster than the low molar volume side of the distribution. This leads to a compression of the molar volume distribution and a minimum in density fluctuations, indicated by the solid blue curve in Figure 24.5(b). The same underlying physics also explain the maximum in density fluctuations exhibited by the up-quenched glass (Glass III), shown in Figure 24.6. Again, the high molar volume side of the distribution relaxes significantly faster than the low molar volume regions. However, in this case, the entire distribution needs to shift to higher molar volumes rather than lower molar volumes. Hence, for the case of an up-quenched glass, the

Figure 24.5 (a) Relaxation of density fluctuations for Glass I from Figure 24.3, which is initially equilibrated at a fictive temperature higher than the temperature of the subsequent isothermal hold. (b) Molar volume distribution of this glass. The initial distribution is shown by the solid black curve. The equilibrated distribution at the end of the isothermal hold is indicated by the dashed gray line. The solid blue curve shows the molar volume distribution corresponding to the minimum in density fluctuations in part (a). (Data are from Mauro et al. [13].)


Materials Kinetics

Figure 24.6 (a) Relaxation of density fluctuations for Glass III from Figure 24.3, which is initially equilibrated at a fictive temperature lower than the temperature of the subsequent isothermal hold. (b) Molar volume distribution of this glass. The initial distribution is shown by the solid black curve. The equilibrated distribution at the end of the isothermal hold is indicated by the dashed gray line. The solid red curve shows the molar volume distribution corresponding to the maximum in density fluctuations in part (a). (Data are from Mauro et al. [13].)

fast relaxation of the high molar volume tail leads to a maximum rather than a minimum in density fluctuations. Interestingly, this nonmonotonic decay of density fluctuations means that nonequilibrium systems can be tailored to achieve properties that are far outside the range of properties available to equilibrium systems. For example, the deep minimum in density fluctuations shown in Figure 24.5 could be used to achieve a much lower magnitude of Rayleigh scattering in optical fibers compared to what can be achieved using equilibrium materials.

24.7 Industrial Example: Fluctuations in High Performance Display Glass

Finally, let us consider a practical industrial example demonstrating the importance of controlling fluctuations. High resolution flat panel displays require consistent relaxation performance of the glass substrates. Variability in the glass relaxation may result either from thermal history fluctuations in the manufacturing of the thin sheets of glass or from temperature variations during the deposition of the thin film semiconductors used for building the individual pixels on the display. These variations can lead to different localized magnitudes of relaxation, i.e., certain localized regions of a glass substrate may relax with greater or lesser magnitudes. These localized fluctuations in relaxation magnitude can lead to warping or other geometric


distortions of the glass. They can also lead to misalignment of the pixels in the flat panel display. Fluctuations are especially problematic in high resolution displays, given the small pixel size and need for highly precise alignment. Figure 24.7 shows that the fragility of a supercooled liquid has a significant impact on the relaxation behavior of the corresponding glass. In particular, a higher liquid fragility leads to a higher nonequilibrium viscosity of the glass (see Chapter 17), which translates into a longer relaxation time and suppression of the magnitude of relaxation effects. The impact of thermal history fluctuations can be calculated by introducing fluctuations in the temperature path during either the glass-forming process or the post-forming heat treatment where the thin film electronics are deposited to build the display [14]. These two types of fluctuations are shown in Figure 24.8, which plots the temperature path used for the study of relaxation variability. Figure 24.9 shows that higher fragility systems exhibit more consistent relaxation behavior with respect to fluctuations in thermal history during the initial glass-forming process. Likewise, Figure 24.10 shows that this advantage of higher fragility systems also translates to improved consistency

Figure 24.7 (a) Computed enthalpy-temperature curves for three glass-forming systems with different values of fragility. The cooling rate for all systems is 1 K/min. Increasing the fragility of the supercooled liquid results in a sharper glass transition and lower magnitude of enthalpy change due to relaxation. (b) Zoomed-in version of the figure to see the detailed relaxation behavior [14].

This book belongs to Alice Cartes ([email protected])

Copyright Elsevier 2023

482

Materials Kinetics

Figure 24.8 Temperature path for the relaxation variability study, using selenium as the base glass composition. The glass is formed initially at a rate of 1 K/min. The initially cooled glass is then heated to the heat treatment temperature and subsequently recooled. The reheating and recooling rates are equal. Nominal reheating/recooling rates of 1 K/s (faster than the initial cooling rate), 1 K/min (equal to the initial cooling rate), and 0.01 K/s (slower than the initial cooling rate) are considered. Relaxation variability due to the initial glass forming process is simulated by introducing small fluctuations in the initial cooling rate. Relaxation variability due to the subsequent post-forming heat treatment cycle is calculated using small fluctuations in the reheating/recooling rate [14].

with respect to fluctuations in the post-forming thermal cycle, e.g., during deposition of the thin film electronics. In both cases, increasing the fragility of the system leads to orders of magnitude decrease in the standard deviation of the relaxation magnitudes, i.e., significantly improved consistency in the relaxation behavior of the glass. This dramatically improved consistency in the relaxation behavior immediately translates into improved yields of the display manufacturing process and improved performance of the final displays. Hence, fluctuations are important both for understanding the physical origins of macroscopic properties and also for the practical use of many engineered materials.

24.8 Summary

All condensed matter systems at finite temperature exhibit fluctuations over time and space. Thermodynamic properties such as heat capacity and


[Figure 24.9 panels (a)–(c): contour maps of log10[St. Dev. Relaxation] (color scale, −12 to −4) as a function of Fragility (40–80) and Heat Treatment Temperature (260–320 K).]

Figure 24.9 Variability in the relaxation magnitude due to thermal fluctuations in the initial glass forming process. The fluctuations are simulated by introducing slight variations in cooling rate, leading to changes in the thermal history of the glass. Results are plotted for post-forming heat treatment cycles with reheating/recooling rates of (a) 1 K/s (faster than the initial cooling rate), (b) 1 K/min (approximately equal to the initial cooling rate), and (c) 0.01 K/s (slower than the initial cooling rate). Relaxation variability is plotted as the logarithm of the standard deviation of the degree of relaxation considering a range of thermal fluctuations in the initial forming process. Simulations are performed using selenium as the base glass composition. For any post-forming heat treatment, relaxation variability can be minimized by increasing the fragility of the system [14].

thermal expansion coefficient can be directly calculated from the variance of these fluctuations. In addition, transport properties such as diffusivity and viscosity can be calculated from time correlation functions using the Green-Kubo relation and linear response theory. In broken ergodic systems, the variance of fluctuations is different in the time and ensemble domains.


[Figure 24.10 panels (a)–(c): contour maps of log10[St. Dev. Relaxation] (color scale, −9 to −3) as a function of Fragility (40–80) and Heat Treatment Temperature (260–320 K).]

Figure 24.10 Variability in the relaxation magnitude due to thermal fluctuations in the post-forming heat treatment cycle of a glass. Results are shown for mean reheating/recooling rates of (a) 1 K/s (faster than the initial cooling rate during the glass forming process), (b) 1 K/min (equal to the initial cooling rate during initial glass formation), and (c) 0.01 K/s (slower than the initial cooling rate during initial glass formation). Relaxation variability is plotted as the logarithm of the standard deviation of the degree of relaxation considering a range of thermal fluctuations in the post-forming heat treatment cycle. Again, variability in relaxation can be minimized by increasing the fragility of the glass-forming system [14].

Condensed systems exhibit spatial variations in relaxation time scales known as dynamical heterogeneities. Dynamical heterogeneities can be calculated using molecular dynamics simulations in the isoconfigurational ensemble. Owing to such localized differences in kinetics, the relaxation of density fluctuations exhibits a nonmonotonic decay behavior. Control of fluctuations is critical for many industrial materials, such as high


performance display glass substrates, where the relaxation magnitude must be controlled to be as consistent as possible. One strategy to minimize fluctuations in the relaxation behavior of a glass is to use compositions having higher fragility of the corresponding supercooled liquid state.

Exercises

(24.1) Derive Eq. (24.3) from Eq. (24.2), showing all steps of the derivation.
(24.2) Give an example of where fluctuations are important in your own research. What types of fluctuations are important? What is the practical impact of fluctuations on your system under study?
(24.3) Derive Eq. (24.16) relating thermal expansion coefficient to the cross-correlation of volume and enthalpy fluctuations.
(24.4) Derive an equation for isothermal compressibility in terms of the underlying volume fluctuations in a system.
(24.5) Why is the thermal expansion coefficient of a glass lower than that of its corresponding supercooled liquid? Explain in terms of fluctuation theory.

References

[1] M. P. Allen and D. J. Tildesley, Computer Simulation of Liquids, 2nd ed., Oxford (2017).
[2] D. A. McQuarrie, Statistical Mechanics, University Science Books (2000).
[3] Y. Yang, O. Homma, S. Urata, M. Ono, and J. C. Mauro, "Topological Pruning Enables Ultra-Low Rayleigh Scattering in Pressure-Quenched Silica Glass," npj Comp. Mater. 6, 139 (2020).
[4] K. F. Kelton and A. L. Greer, Nucleation in Condensed Matter, Elsevier (2010).
[5] B. Wang, Y. Yu, M. Wang, J. C. Mauro, and M. Bauchy, "Nano-Ductility in Silicate Glasses is Driven by Topological Heterogeneity," Phys. Rev. B 93, 064202 (2016).
[6] L. Tang, N. M. A. Krishnan, J. Berjikian, J. Rivera, M. M. Smedskjaer, J. C. Mauro, W. Zhou, and M. Bauchy, "Effect of Nanoscale Phase Separation on the Fracture Behavior of Glasses: Toward Tough, Yet Transparent Glasses," Phys. Rev. Mater. 2, 113602 (2018).
[7] J. C. Mauro, R. J. Loucks, and S. Sen, "Heat Capacity, Enthalpy Fluctuations, and Configurational Entropy in Broken Ergodic Systems," J. Chem. Phys. 133, 164503 (2010).
[8] E. Donth, The Glass Transition: Relaxation Dynamics in Liquids and Disordered Materials, Springer (2013).
[9] R. G. Palmer, "Broken Ergodicity," Adv. Phys. 31, 669 (1982).
[10] J. C. Mauro, P. K. Gupta, and R. J. Loucks, "Continuously Broken Ergodicity," J. Chem. Phys. 126, 184511 (2007).
[11] K. D. Vargheese, A. Tandia, and J. C. Mauro, "Origin of Dynamical Heterogeneities in Calcium Aluminosilicate Liquids," J. Chem. Phys. 132, 194501 (2010).


[12] A. Widmer-Cooper and P. Harrowell, "On the Study of Collective Dynamics in Supercooled Liquids through the Statistics of the Isoconfigurational Ensemble," J. Chem. Phys. 126, 154503 (2007).
[13] J. C. Mauro, S. Soyer Uzun, W. Bras, and S. Sen, "Nonmonotonic Evolution of Density Fluctuations during Glass Relaxation," Phys. Rev. Lett. 102, 155506 (2009).
[14] Q. Zheng and J. C. Mauro, "Variability in the Relaxation Behavior of Glass: Impact of Thermal History Fluctuations and Fragility," J. Chem. Phys. 146, 074504 (2017).


CHAPTER 25

Chemical Reaction Kinetics

25.1 Rate of Reactions

As discussed in Section 1.2, the word "kinetics" comes from the ancient Greek word for "movement" or "motion." In materials science and engineering, the study of kinetics is associated with the evolution of material systems under some set of conditions. In the fields of chemistry and chemical engineering, the word "kinetics" usually implies chemical reaction kinetics, i.e., the study of the rate at which chemical reactions occur. Entire textbooks in chemical engineering are devoted to the topic of chemical reaction kinetics [1-4]. Since the current monograph focuses on kinetics from a materials science perspective, inspired by the approach of Ragone's classic textbook on thermodynamics of materials [5], here we devote one chapter to an overview of reaction kinetics from a chemical engineering point of view. We will consider three types of reactions:
• Homogeneous reactions in fluids, where the reactants and products exist in the same phase.
• Heterogeneous reactions, where the reactants and products exist in different phases, or the reaction occurs at the interface between phases.
• Solid state reactions, where the reactants and products are in the solid state.

Let us begin with homogeneous reactions. Consider the following reaction,

$$a\mathrm{A} + b\mathrm{B} \rightarrow c\mathrm{C} + d\mathrm{D}, \tag{25.1}$$

in which A and B are the reactants and C and D are the products. A key parameter in the study of chemical reaction kinetics is the extent of reaction, which we denote ξ. For example, the number of moles of C at any time during the reaction, $n_C$, can be written as:

$$n_C = n_C^\circ + c\xi, \tag{25.2}$$

Materials Kinetics, ISBN 978-0-12-823907-0, https://doi.org/10.1016/B978-0-12-823907-0.00017-0. © 2021 Elsevier Inc. All rights reserved.


where $n_C^\circ$ is the initial number of moles of C before the reaction has started. Hence, the product of ξ and $c$ gives the increase in the number of moles of C as a result of the chemical reaction. A general form for the change in the amount of each species can be written as:

$$0 = \sum_i \nu_i A_i, \tag{25.3}$$

where $\nu_i$ are the stoichiometric coefficients and $A_i$ are the reactant or product species. The stoichiometric coefficients are negative for reactants and positive for products, which reflects the loss of reactants and gain of products as a result of the chemical reaction. In the reaction shown in Eq. (25.1), the stoichiometric coefficients are $-a$ and $-b$ for the two reactants and $c$ and $d$ for the two products of the reaction. Hence, the change in the number of moles of species $i$ as a result of the chemical reaction can be written in general form as

$$n_i = n_i^\circ + \nu_i \xi. \tag{25.4}$$

Let us consider the example in Figure 25.1. In this reaction, two moles of A react with one mole of B to produce one mole of C and three moles of D. Suppose that the initial concentrations of A and B are one mole each and that the initial concentrations of the products are zero. The stoichiometric coefficients are $-2$, $-1$, 1, and 3 for the A, B, C, and D species, respectively. From Eq. (25.4), the lower row of Figure 25.1 gives the amount of each species as a function of the extent of the reaction, ξ. As the reaction proceeds, the amounts of C and D increase at the expense of A and B. The rate of reaction, φ, is defined as the change of ξ with respect to time, $t$:

$$\phi = \frac{d\xi}{dt}. \tag{25.5}$$

Figure 25.1 Example chemical reaction where two moles of A and one mole of B react to produce one mole of C and three moles of D. Let us consider that the initial concentrations of A and B are one mole each and initial product concentrations are zero. The lower row shows the amount of each species as a function of the extent of the reaction, ξ.


Taking the time derivative of Eq. (25.4) and combining with Eq. (25.5), the rate of change of the number of moles of species $i$ can be expressed as:

$$\frac{dn_i}{dt} = \nu_i \frac{d\xi}{dt} = \nu_i \phi. \tag{25.6}$$

If the reaction takes place under isochoric conditions, then we can express the reaction in terms of concentrations rather than numbers of moles. Let us define the concentration of species $i$ as

$$C_i = \frac{n_i}{V}, \tag{25.7}$$

where $V$ is the volume of the system. Combining Eqs. (25.6) and (25.7), we obtain

$$V\frac{dC_i}{dt} = \nu_i \frac{d\xi}{dt}, \tag{25.8}$$

which is often expressed as:

$$\frac{1}{V}\frac{d\xi}{dt} = \frac{1}{\nu_i}\frac{dC_i}{dt}. \tag{25.9}$$
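The bookkeeping of Eqs. (25.2)-(25.4) is easy to verify numerically. The short sketch below uses the reaction of Figure 25.1; the function name and dictionary layout are our own illustrative choices, not from the text:

```python
# Sketch: amounts n_i = n_i0 + nu_i * xi for the reaction 2A + B -> C + 3D.
# Stoichiometric coefficients are negative for reactants, positive for products.

def species_amounts(n0, nu, xi):
    """Moles of each species at extent of reaction xi (Eq. 25.4)."""
    return {s: n0[s] + nu[s] * xi for s in n0}

n0 = {"A": 1.0, "B": 1.0, "C": 0.0, "D": 0.0}   # initial moles
nu = {"A": -2, "B": -1, "C": 1, "D": 3}          # stoichiometric coefficients

n = species_amounts(n0, nu, xi=0.25)
# At xi = 0.25: A = 0.5, B = 0.75, C = 0.25, D = 0.75
```

Note that the amounts of C and D grow at the expense of A and B, exactly as in the lower row of Figure 25.1.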

25.2 Order of Reactions

The equations describing chemical reaction kinetics are phenomenological in nature, with reaction rates obtained experimentally by measuring the concentrations of reactants and products over time. Empirically, it is found that reaction rates typically follow [1]:

$$\frac{1}{V}\frac{d\xi}{dt} = k C_A^a C_B^b, \tag{25.10}$$

which is equivalently expressed as

$$\frac{1}{V}\frac{d\xi}{dt} = k[\mathrm{A}]^a[\mathrm{B}]^b, \tag{25.11}$$

where $C_A = [\mathrm{A}]$ is the concentration of A, $C_B = [\mathrm{B}]$ is the concentration of B, and $k$ is a rate parameter. The exponents in Eq. (25.11), $a$ and $b$, are called the orders of the reaction. This reaction is said to be "of order $a$" with respect to A and "of order $b$" with respect to B. The orders of the reaction are determined empirically using measured concentration data over time.
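For a rate law of the form of Eq. (25.10), the order with respect to one reactant can be extracted from a pair of initial-rate measurements, since $a = \ln(\text{rate}_2/\text{rate}_1)/\ln([\mathrm{A}]_2/[\mathrm{A}]_1)$. A minimal sketch, using synthetic data generated from assumed values of $k$ and $a$ (not data from the text):

```python
import math

# Synthetic initial-rate data from rate = k*[A]^a with assumed k = 0.7, a = 2.
k_true, a_true = 0.7, 2.0
c1, c2 = 0.5, 2.0                      # two initial concentrations of A
rate1 = k_true * c1 ** a_true
rate2 = k_true * c2 ** a_true

# Ratio method: the reaction order with respect to A
order = math.log(rate2 / rate1) / math.log(c2 / c1)   # = 2.0 for this data
```

With noisy experimental data one would instead fit the slope of log(rate) versus log([A]) over many measurements.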


25.3 Equilibrium Constants

Another important concept in chemical reaction kinetics is the equilibrium constant of a reaction, $K_{eq}$. Consider the following reaction of a molecule, MX, dissociating to form two ions (M⁺ and X⁻):

$$\mathrm{MX} \rightarrow \mathrm{M^+} + \mathrm{X^-}. \tag{25.12}$$

The equilibrium constant, $K_{eq}$, for this reaction is:

$$K_{eq} = \frac{[\mathrm{M^+}]_{eq}[\mathrm{X^-}]_{eq}}{[\mathrm{MX}]_{eq}}. \tag{25.13}$$

The equilibrium constant is obtained by considering the reaction in a bidirectional sense, i.e.,

$$\mathrm{MX} \underset{\phi_-}{\overset{\phi_+}{\rightleftharpoons}} \mathrm{M^+} + \mathrm{X^-}, \tag{25.14}$$

where $\phi_+$ is the rate of the forward reaction and $\phi_-$ is the rate of the reverse reaction. If we assume that the reaction obeys first-order kinetics, then the change in concentration of MX can be written as:

$$\frac{d[\mathrm{MX}]}{dt} = -\phi_+[\mathrm{MX}] + \phi_-[\mathrm{M^+}][\mathrm{X^-}]. \tag{25.15}$$

In other words, the forward reaction results in a loss of [MX] while the reverse reaction results in a gain of [MX]. At equilibrium, the net rate of change must be equal to zero, i.e.,

$$\frac{d[\mathrm{MX}]_{eq}}{dt} = -\phi_+[\mathrm{MX}]_{eq} + \phi_-[\mathrm{M^+}]_{eq}[\mathrm{X^-}]_{eq} = 0. \tag{25.16}$$

This is the detailed balance condition, which reflects a perfect balance of the forward and reverse reaction rates such that there is no net change in the concentration of any species. When the detailed balance condition is satisfied, the system has achieved equilibrium with respect to the reaction under study. Rearranging the terms in Eq. (25.16), we have

$$\phi_+[\mathrm{MX}]_{eq} = \phi_-[\mathrm{M^+}]_{eq}[\mathrm{X^-}]_{eq}. \tag{25.17}$$

This immediately leads to

$$\frac{\phi_+}{\phi_-} = \frac{[\mathrm{M^+}]_{eq}[\mathrm{X^-}]_{eq}}{[\mathrm{MX}]_{eq}} = K_{eq}, \tag{25.18}$$

i.e., the equilibrium constant of Eq. (25.13) is simply the ratio of forward to reverse reaction rates required to achieve detailed balance.
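The detailed balance condition can be checked by direct numerical integration of Eq. (25.15). The sketch below uses a simple forward Euler scheme with arbitrary illustrative rate values (not values from the text):

```python
# Sketch: forward Euler integration of Eq. (25.15) for MX <-> M+ + X-,
# checking that the long-time concentrations reproduce the equilibrium
# constant of Eq. (25.18). phi_f and phi_r are arbitrary illustrative rates.
phi_f, phi_r = 2.0, 1.0      # forward and reverse rates
mx, m, x = 1.0, 0.0, 0.0     # concentrations of MX, M+, X-
dt = 1e-3
for _ in range(200_000):     # integrate to t = 200, far past equilibration
    d_mx = (-phi_f * mx + phi_r * m * x) * dt
    mx += d_mx
    m -= d_mx                # each dissociated MX yields one M+ and one X-
    x -= d_mx

K_kinetic = m * x / mx       # approaches phi_f / phi_r = 2
```

At long times the net rate of change vanishes and the concentration ratio of Eq. (25.13) equals the rate ratio $\phi_+/\phi_-$, as required by Eq. (25.18).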

25.4 First-Order Reactions

Let us consider the simplest possible example of a chemical reaction, viz., a first-order reaction in which some material A decomposes into a product:

$$\mathrm{A} \rightarrow \text{product}. \tag{25.19}$$

We first consider the case where only the forward reaction rate is important. In this case, the rate equation is:

$$\frac{1}{V}\frac{d\xi}{dt} = \frac{1}{\nu_A}\frac{dC_A}{dt}, \tag{25.20}$$

where the stoichiometric coefficient is $\nu_A = -1$. Hence,

$$\frac{1}{V}\frac{d\xi}{dt} = -\frac{d[\mathrm{A}]}{dt}. \tag{25.21}$$

If the reaction is first-order, then

$$\frac{1}{V}\frac{d\xi}{dt} = k[\mathrm{A}]. \tag{25.22}$$

Combining Eqs. (25.21) and (25.22), we obtain

$$-\frac{d[\mathrm{A}]}{dt} = k[\mathrm{A}]. \tag{25.23}$$

Integrating Eq. (25.23) over time, we have

$$\int_{[\mathrm{A_0}]}^{[\mathrm{A}]} \frac{d[\mathrm{A}]}{[\mathrm{A}]} = -\int_0^t k\, dt, \tag{25.24}$$

where $[\mathrm{A_0}]$ is the initial concentration of A. The solution to the integral is:

$$\ln\frac{[\mathrm{A}]}{[\mathrm{A_0}]} = -kt. \tag{25.25}$$

Solving for [A], we obtain an exponential decay function:

$$[\mathrm{A}] = [\mathrm{A_0}]\exp(-kt). \tag{25.26}$$


An archetypal example of this form of reaction is the decay of a radioactive species,

$$N = N_0 \exp(-\lambda t), \tag{25.27}$$

where $N$ is the concentration of the radioactive species at time $t$, $N_0$ is the initial concentration, and λ is the decay constant. Taking the natural logarithm of both sides of Eq. (25.27), we have

$$\ln N = \ln N_0 - \lambda t. \tag{25.28}$$

The radioactive decay constant, λ, is usually expressed in terms of the half-life of the species, denoted $\tau_{1/2}$. The half-life is the time required for the concentration of the species to decrease by half. Mathematically, we write:

$$\frac{N}{N_0} = \frac{1}{2} = \exp(-\lambda \tau_{1/2}). \tag{25.29}$$

Thus, the relationship between the decay constant and the half-life is simply:

$$\lambda = \frac{\ln 2}{\tau_{1/2}}. \tag{25.30}$$

Rewriting Eq. (25.27) in terms of the half-life, $\tau_{1/2}$, we obtain:

$$N = N_0 \left(\frac{1}{2}\right)^{t/\tau_{1/2}}. \tag{25.31}$$
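The equivalence of the exponential form, Eq. (25.27), and the half-life form, Eq. (25.31), is easy to demonstrate numerically. A minimal sketch, where $N_0$ and the half-life are made-up illustrative values:

```python
import math

# Sketch: Eq. (25.27) vs. Eq. (25.31) for first-order radioactive decay.
# N0 and tau_half are illustrative values, not from the text.
N0 = 1000.0
tau_half = 5.0                          # half-life, arbitrary time units
lam = math.log(2) / tau_half            # decay constant, Eq. (25.30)

def n_exponential(t):
    return N0 * math.exp(-lam * t)      # Eq. (25.27)

def n_half_life(t):
    return N0 * 0.5 ** (t / tau_half)   # Eq. (25.31)

remaining = n_exponential(tau_half)     # after one half-life, 500.0 remain
```

The two expressions agree at all times because $\exp(-\lambda t) = 2^{-t/\tau_{1/2}}$ when λ is given by Eq. (25.30).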

Next, let us consider a first-order reaction in which both the forward and reverse reactions are important:

$$\mathrm{A} \underset{\phi_-}{\overset{\phi_+}{\rightleftharpoons}} \mathrm{B}. \tag{25.32}$$

Assuming the reaction is first-order with respect to both A and B, we have:

$$-\frac{d[\mathrm{A}]}{dt} = \phi_+[\mathrm{A}] - \phi_-[\mathrm{B}]. \tag{25.33}$$

At equilibrium, the detailed balance condition is satisfied:

$$\phi_+[\mathrm{A}]_{eq} = \phi_-[\mathrm{B}]_{eq}, \tag{25.34}$$

and the equilibrium constant is

$$K_{eq} = \frac{\phi_+}{\phi_-} = \frac{[\mathrm{B}]_{eq}}{[\mathrm{A}]_{eq}} = \exp\left(-\frac{\Delta G^\circ}{RT}\right) = \exp\left(-\frac{\mu_B^\circ - \mu_A^\circ}{RT}\right). \tag{25.35}$$

Here, $\Delta G^\circ$ is the free energy of the reaction, $R$ is the gas constant, $T$ is absolute temperature, and $\mu_B^\circ - \mu_A^\circ$ is the difference in chemical potential between the two species at equilibrium. Solving Eq. (25.35) for $\phi_-$ and substituting into Eq. (25.33), the kinetic equation becomes:

$$-\frac{d[\mathrm{A}]}{dt} = \phi_+\left([\mathrm{A}] - \frac{[\mathrm{A}]_{eq}}{[\mathrm{B}]_{eq}}[\mathrm{B}]\right) = \phi_+\left(1 - \frac{[\mathrm{A}]_{eq}}{[\mathrm{B}]_{eq}}\frac{[\mathrm{B}]}{[\mathrm{A}]}\right)[\mathrm{A}]. \tag{25.36}$$
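Since [A] + [B] is conserved, Eq. (25.33) has a standard closed-form solution: [A] relaxes exponentially toward its equilibrium value with total rate $\phi_+ + \phi_-$. A sketch using this standard result (the rate values are illustrative choices, not from the text):

```python
import math

# Sketch: closed-form relaxation of A <-> B (Eq. 25.33) toward the
# equilibrium set by detailed balance, Eq. (25.34). Illustrative rates.
phi_f, phi_r = 3.0, 1.0
A0, B0 = 1.0, 0.0
total = A0 + B0

A_eq = total * phi_r / (phi_f + phi_r)   # from phi_f*A_eq = phi_r*B_eq

def A_of_t(t):
    # Standard solution of d[A]/dt = -phi_f*[A] + phi_r*(total - [A])
    return A_eq + (A0 - A_eq) * math.exp(-(phi_f + phi_r) * t)

K_check = (total - A_of_t(50.0)) / A_of_t(50.0)   # [B]/[A] -> phi_f/phi_r = 3
```

At long times the ratio [B]/[A] approaches $K_{eq} = \phi_+/\phi_-$, consistent with Eq. (25.35).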

25.5 Higher Order Reactions

The extension of the above formalism to higher order reactions is mathematically straightforward. For example, consider a second-order reaction that obeys

$$-\frac{d[\mathrm{A}]}{dt} = k[\mathrm{A}]^2. \tag{25.37}$$

The solution of the differential equation is simply:

$$\frac{1}{[\mathrm{A}]} - \frac{1}{[\mathrm{A_0}]} = kt. \tag{25.38}$$

Suppose that we have a second-order reaction with two reactants,

$$\mathrm{A} + \mathrm{B} \rightarrow \text{products}, \tag{25.39}$$

where the reaction rate equation satisfies:

$$\frac{d[\mathrm{A}]}{dt} = -k[\mathrm{A}][\mathrm{B}]. \tag{25.40}$$

The solution of this differential equation is:

$$\ln\left(\frac{[\mathrm{B}]}{[\mathrm{B_0}]}\right) - \ln\left(\frac{[\mathrm{A}]}{[\mathrm{A_0}]}\right) = k\left([\mathrm{B_0}] - [\mathrm{A_0}]\right)t. \tag{25.41}$$

Now let us consider a third-order reaction,

$$3\mathrm{A} \rightarrow \text{products}, \tag{25.42}$$

where the kinetic equation is:

$$-\frac{d[\mathrm{A}]}{dt} = k[\mathrm{A}]^3. \tag{25.43}$$

In this case, the solution to the rate equation is:

$$\frac{1}{[\mathrm{A}]^2} - \frac{1}{[\mathrm{A_0}]^2} = 2kt. \tag{25.44}$$
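The integrated second-order law of Eq. (25.38) can be verified against a direct numerical integration of Eq. (25.37). A minimal sketch (the values of $k$, $[\mathrm{A_0}]$, and the end time are arbitrary illustrative choices):

```python
# Sketch: numerical check of the integrated second-order rate law,
# Eq. (25.38). A small-step forward Euler integration of
# -d[A]/dt = k[A]^2 is compared against the closed form
# [A] = 1/(1/[A0] + k*t). Parameter values are illustrative.
k = 0.5
A0 = 2.0
A = A0
dt = 1e-4
t_end = 4.0
for _ in range(int(t_end / dt)):
    A -= k * A * A * dt

A_closed = 1.0 / (1.0 / A0 + k * t_end)   # = 0.4 for these parameters
```

The Euler result converges to the closed form as the time step is reduced; the same check applies to the third-order solution of Eq. (25.44).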

25.6 Reactions in Series

Next let us consider the case of reactions in series, i.e., a reaction that has one or more intermediate steps. For example, consider the case of material A decomposing into material B with a rate $\phi_1$, which in turn forms material C with a rate $\phi_2$. The overall rate of reaction, φ, of A to C depends on both $\phi_1$ and $\phi_2$. The sequence of the reactions is visually depicted in Figure 25.2. Let us consider an initial concentration of A equal to $[\mathrm{A_0}]$ and zero initial concentrations of B and C. Describing the kinetics of reactions in series involves solving a set of coupled differential equations for each step of the reaction. In the case of Figure 25.2, the first step involves the decomposition of [A], which follows an exponential decay:

$$[\mathrm{A}] = [\mathrm{A_0}]\exp(-\phi_1 t). \tag{25.45}$$

The integrated solution for the concentration of B involves two terms: an increase of [B] due to the decomposition of [A] and a decrease in [B] due to its transformation into [C]. This yields

$$[\mathrm{B}] = \frac{\phi_1[\mathrm{A_0}]}{\phi_2 - \phi_1}\exp(-\phi_2 t)\left[\exp\left((\phi_2 - \phi_1)t\right) - 1\right]. \tag{25.46}$$

Figure 25.2 Example of a reaction in series, where a reactant, A, forms an intermediate compound, B, before achieving the final product, C. The initial concentration of A is [A0], and the initial concentrations of B and C are zero.

Finally, the concentration of C is given by:

$$[\mathrm{C}] = [\mathrm{A_0}]\left(1 - \frac{\phi_2}{\phi_2 - \phi_1}\exp(-\phi_1 t) + \frac{\phi_1}{\phi_2 - \phi_1}\exp(-\phi_2 t)\right). \tag{25.47}$$

The overall reaction from A to C is controlled by the slower of the two rates in the series. This is the so-called bottleneck principle, i.e., the rate of every reaction in series is controlled by whichever has the slowest rate in the set of intermediate reactions needed to generate the final products. Hence, one goal of chemical reaction engineering is to identify the bottleneck reaction and determine ways of improving its efficiency.
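Eqs. (25.45)-(25.47) can be evaluated directly, and a useful sanity check is that mass is conserved at every instant, [A] + [B] + [C] = [A₀]. A sketch with illustrative rate values (chosen so that the second step is the bottleneck):

```python
import math

# Sketch: Eqs. (25.45)-(25.47) for the series reaction A -> B -> C.
# phi1 and phi2 are arbitrary illustrative rates; phi2 < phi1 makes the
# second step the bottleneck, so B accumulates before converting to C.
phi1, phi2 = 1.0, 0.2
A0 = 1.0

def concentrations(t):
    A = A0 * math.exp(-phi1 * t)                                   # Eq. (25.45)
    B = phi1 * A0 / (phi2 - phi1) * math.exp(-phi2 * t) * (
        math.exp((phi2 - phi1) * t) - 1.0)                         # Eq. (25.46)
    C = A0 * (1.0 - phi2 / (phi2 - phi1) * math.exp(-phi1 * t)
              + phi1 / (phi2 - phi1) * math.exp(-phi2 * t))        # Eq. (25.47)
    return A, B, C

A, B, C = concentrations(3.0)   # A + B + C = A0 at any time
```

Note that Eqs. (25.46) and (25.47) assume $\phi_1 \neq \phi_2$; the degenerate case $\phi_1 = \phi_2$ requires a separate limiting form.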

25.7 Temperature Dependence of Reaction Rates

The study of the temperature dependence of reaction rates was pioneered by the Swedish scientist Svante Arrhenius (1859-1927), for which he won the 1903 Nobel Prize in Chemistry. Arrhenius empirically found that the equilibrium constant ($K_{eq}$) of a reaction changes with temperature ($T$) according to:

$$\frac{d\ln K_{eq}}{dT} \propto \frac{1}{T^2}. \tag{25.48}$$

It follows that

$$\frac{d\ln \phi_+}{dT} = \frac{E^*}{RT^2}, \tag{25.49}$$

where $R$ is the gas constant and $E^*$ is the activation barrier for the reaction. Rearranging Eq. (25.49):

$$d\ln \phi_+ = -\frac{E^*}{R}\, d\!\left(\frac{1}{T}\right). \tag{25.50}$$

Integration yields the Arrhenius law:

$$\phi_+ = A\exp\left(-\frac{E^*}{RT}\right), \tag{25.51}$$

which states that reaction rates have an exponential dependence on temperature. Eq. (25.51) can be linearized by taking the logarithm of both sides:

$$\ln \phi_+ = \ln A - \frac{E^*}{RT}. \tag{25.52}$$

Eq. (25.52) is useful for making an Arrhenius plot of $\ln \phi_+$ versus $1/T$, where the slope of the line gives $-E^*/R$ and the intercept, $\ln A$, contains information about the entropy associated with the reaction, as already discussed in Section 3.4.
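The Arrhenius analysis of Eq. (25.52) amounts to a linear fit of $\ln \phi_+$ against $1/T$. The sketch below recovers the activation barrier from synthetic rate data; the pre-exponential factor and $E^*$ are assumed illustrative values, not values from the text:

```python
import math

# Sketch: recovering E* from an Arrhenius plot, Eq. (25.52).
# Synthetic rate data generated from assumed A_pre and E_act values.
R = 8.314            # gas constant, J/(mol K)
A_pre = 1e13         # assumed pre-exponential factor (1/s)
E_act = 80_000.0     # assumed activation barrier (J/mol)

T = [300.0, 350.0, 400.0, 450.0, 500.0]
rate = [A_pre * math.exp(-E_act / (R * t)) for t in T]

# Least-squares slope of ln(rate) vs 1/T; the slope equals -E*/R
x = [1.0 / t for t in T]
y = [math.log(r) for r in rate]
n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n
slope = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / \
        sum((xi - xbar) ** 2 for xi in x)
E_recovered = -slope * R   # recovers E_act for this noiseless data
```

With real (noisy) measurements the same fit yields $E^*$ with an uncertainty set by the scatter of the data about the Arrhenius line.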

25.8 Heterogeneous Reactions

Whereas homogeneous reactions occur within a single fluid phase, many reactions of interest to materials scientists are heterogeneous. Heterogeneous reactions occur when the reactants and products occur in different phases or when the reaction occurs at a phase boundary. Heterogeneous reactions are significantly more complicated than homogeneous reactions and typically take place by a series of consecutive steps. An example of a heterogeneous reaction is a gas dissolving in a metal. For example, consider the dissolution of hydrogen gas in aluminum, which involves the following steps [5]:
1. The hydrogen gas molecules are transported within the gas phase to the gas-metal interface.
2. Hydrogen gas molecules are adsorbed onto the surface of the aluminum metal.
3. The adsorbed hydrogen molecules are decomposed into adsorbed hydrogen atoms.
4. The hydrogen atoms are dissolved into the aluminum metal at the gas-metal phase boundary.
5. The dissolved hydrogen atoms diffuse away from the phase boundary and into the metal.

One can think about the steps of a reaction as acting like resistors in series in an electrical circuit. As with other reactions in series, heterogeneous reactions typically have a rate-limiting step which governs the overall kinetics of the reaction. However, since each of these consecutive reactions can have its own independent activation barrier, the rate-limiting step may be different in different temperature regimes.

25.9 Solid State Transformation Kinetics

Many reactions of interest to materials scientists and engineers involve transformations in the solid state. Examples include [5]:
• Recrystallization of a cold-worked metal.
• Precipitation of a crystalline polymer from a non-crystalline phase.
• Growth of an equilibrium solid phase from a nonequilibrium solid.

Let us consider the phase reaction of an α phase transitioning into a β phase:

$$\alpha \rightarrow \beta. \tag{25.53}$$

Let $V_\alpha$ and $V_\beta$ denote the volumes of the α and β phases, respectively. The total volume of the system, $V$, is simply the sum of the volumes of the individual phases:

$$V = V_\alpha + V_\beta. \tag{25.54}$$

The fraction of the system, $X$, that has been transformed into the β phase is:

$$X = \frac{V_\beta}{V}. \tag{25.55}$$

Let us assume that the phase transformation process involves two steps: an initial nucleation step followed by a growth step. Growth of the β phase can occur only after a critically sized nucleus has formed, as indicated in Figure 25.3. Let us define the nucleation and growth rates as:

$$\dot{N} = \text{nucleation rate per unit volume}, \qquad \dot{G} = \text{growth rate in one direction} = dr/dt, \tag{25.56}$$

where $r$ is the radius of a growing particle of the new phase. The number of nuclei that are formed during time step $d\tau$ is equal to:

$$\#\,\text{nuclei} = \dot{N} V_\alpha\, d\tau. \tag{25.57}$$

If we assume that the nucleated particles of the new phase grow in a spherical geometry, then the radius at time $t$ of a particle nucleated at time τ can be obtained by solving:

$$\int_0^r dr = \int_\tau^t \dot{G}\, d\tau'. \tag{25.58}$$

Figure 25.3 Schematic of the two-step nucleation and growth reaction processes. The growth kinetics of the new phase start after a critically sized nucleus has formed.


The solution is simply:

$$r = \dot{G}(t - \tau). \tag{25.59}$$

It follows that the volume of the nucleated particles formed during a given time step is:

$$dV_\beta = \frac{4}{3}\pi \dot{G}^3 (t - \tau)^3 \dot{N} V_\alpha\, d\tau. \tag{25.60}$$

Combining Eq. (25.60) with Eq. (25.54), we have:

$$dV_\beta = \frac{4}{3}\pi \dot{G}^3 \dot{N} (V - V_\beta)(t - \tau)^3\, d\tau. \tag{25.61}$$

Early in the phase transformation process, the volume of the new phase is very small compared to the total volume of the system, i.e., $V - V_\beta \approx V$. Applying this approximation and integrating both sides of Eq. (25.61), we have:

$$\int_0^{V_\beta} dV_\beta = \frac{4}{3}\pi \dot{G}^3 \dot{N} V \int_0^t (t - \tau)^3\, d\tau. \tag{25.62}$$

Solving the integral, we obtain:

$$V_\beta = \frac{\pi}{3} V \dot{G}^3 \dot{N} t^4. \tag{25.63}$$

Combining Eq. (25.63) with Eq. (25.55), the final result is:

$$X = \frac{V_\beta}{V} = \frac{\pi}{3} \dot{G}^3 \dot{N} t^4. \tag{25.64}$$

Eq. (25.64) is exactly the Johnson-Mehl-Avrami equation presented previously in Section 14.7 in the context of crystal nucleation and growth from a supercooled liquid phase. The same equation can be applied to any type of phase transformation that involves distinct nucleation and growth stages.
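Since Eq. (25.64) was derived under the early-time approximation $V - V_\beta \approx V$, it agrees with the full Avrami expression of Section 14.7, $X = 1 - \exp(-\frac{\pi}{3}\dot{G}^3\dot{N}t^4)$, only while the transformed fraction is small. A sketch of this comparison, with arbitrary illustrative rate values:

```python
import math

# Sketch: early-time JMA kinetics, Eq. (25.64), versus the impingement-
# corrected Avrami form X = 1 - exp(-(pi/3) G^3 N t^4) from Section 14.7.
# G_dot and N_dot are arbitrary illustrative values, not data from the text.
G_dot = 1e-2        # growth rate (length / time)
N_dot = 1e3         # nucleation rate (1 / (volume * time))

def X_jma(t):
    return math.pi / 3.0 * G_dot**3 * N_dot * t**4   # Eq. (25.64)

def X_avrami(t):
    return 1.0 - math.exp(-X_jma(t))                 # impingement-corrected

t = 1.0
x_small = X_jma(t)              # small transformed fraction at early time
diff = X_avrami(t) - x_small    # the two forms nearly coincide here
```

At later times, when $X$ approaches unity, the uncorrected Eq. (25.64) grows without bound while the Avrami form correctly saturates.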

25.10 Summary

Within the fields of chemistry and chemical engineering, the term "kinetics" usually implies chemical reaction kinetics. Homogeneous chemical reactions occur within a single-phase fluid. In contrast, heterogeneous reactions encompass reactions that involve more than one phase. Finally, solid state reactions occur when both the reactants and products are in the solid state. For example, the Johnson-Mehl-Avrami equation for crystal growth can be derived from solid state reaction kinetics assuming separate nucleation and growth processes. The evolution of each species is determined by the extent of reaction and its stoichiometric coefficient, and the rate of reaction is defined by the time derivative of the extent of reaction. Reactions can be first-order or higher order. The order of a reaction is typically determined empirically through experimental measurements of species concentrations at different stages of a reaction. Chemical reactions often involve more than one step. For reactions in series, there is typically a rate-limiting intermediate step, which is the bottleneck of the reaction. One goal of chemical reaction engineering is to identify this rate-limiting step and determine ways to optimize the process. As with many other kinetic processes, the temperature dependence of the reaction rate is described by the Arrhenius equation. The equilibrium among the reactants and products is governed by their difference in free energy and described by the equilibrium constant. Equilibrium is achieved for a given reaction when the detailed balance condition is satisfied, i.e., when the rates of the forward and reverse reactions are perfectly balanced.

Exercises

(25.1) Give two examples of each of the following types of chemical reaction:
(a) Homogeneous reaction
(b) Heterogeneous reaction
(c) Solid state reaction

(25.2) Write the equilibrium constants for each of the following reactions:
(a) H2 + Cl2 → 2HCl
(b) Na2CO3 + SiO2 → Na2O·SiO2 + CO2
(c) Fe3+ + 3I− → FeI3

(25.3) How many atoms have decayed in 1 gram of radioactive 235U over a time of 1 year? The half-life of 235U is 7.04 × 10⁸ years.

(25.4) A tracer diffusion experiment is conducted using radioactive 51Cr, which has a half-life of 27.7 days. The radioactivity of the final layer of the sample is analyzed 6 hours after the surface layer. What correction factor needs to be applied to this last measurement to make it comparable to the surface layer measurement?


(25.5) Carbon dating is based on the radioactive decay of 14C, which has a half-life of 5700 years. The ratio of 14C to the stable isotope of carbon, 12C, is relatively stable in living organisms. This constant ratio is maintained until an organism dies, at which point the concentration of 14C decays exponentially. Suppose that a fossil exhibits a 14C to 12C ratio that is 0.1% of that found in living organisms. How old is the fossil?

(25.6) A common rule of thumb in biomolecular engineering is that reaction rates double with every 10 °C increase in temperature near room temperature (25 °C). What is the activation energy implied by this rule of thumb?

(25.7) Wine spoils by an oxidation reaction. Suppose that an open bottle of red wine spoils after 7 days of storage at 5 °C. The same red wine would spoil after only 8 hours of storage at room temperature (25 °C). How long would the wine last when stored at 15 °C, assuming that oxidation of wine is a first-order reaction?

(25.8) A decay chain is a series of radioactive decay reactions in which a radioactive parent isotope decays into a daughter isotope, which may also be radioactive. If the daughter isotope is also radioactive, it can decay to a granddaughter isotope, and so on, until a final stable isotope product is achieved. Suppose that the half-life of a parent isotope decaying to a daughter isotope is 7 days, and the half-life of the daughter isotope decaying to a stable granddaughter isotope is 21 days.
(a) Plot the concentration of the parent, daughter, and granddaughter isotopes as a function of time up to 60 days. Assume that initially only the parent isotope is present in the system.
(b) After 30 days of the decay chain reaction, how much of the parent isotope is remaining? What are the concentrations of the daughter and granddaughter isotopes after 30 days?
(c) How much time is required for the concentration of the stable granddaughter isotope to reach 99%?

References
[1] P. L. Houston, Chemical Kinetics and Reaction Dynamics, Dover (2012).
[2] H. S. Fogler, Essentials of Chemical Reaction Engineering, Pearson (2010).
[3] J. Ancheyta, Chemical Reaction Kinetics, Wiley (2017).
[4] G. B. Marin, G. S. Yablonsky, and D. Constales, Kinetics of Chemical Reactions: Decoding Complexity, Wiley (2019).
[5] D. V. Ragone, Thermodynamics of Materials, Volume II, Wiley (1995).


CHAPTER 26

Thermal and Electrical Conductivities

26.1 Transport Equations

The field of materials kinetics encompasses a variety of transport properties. Transport properties are those which depend on the transport of some species (e.g., atoms, electrons, or phonons) through the material in response to a driving force (e.g., gradients in concentration, electrical potential, or temperature). We have already devoted significant portions of this book to mass transport by diffusion, viscous flow, and microstructural evolution. Two additional important transport properties of materials are the thermal and electrical conductivities [1].

Transport processes in materials are described by their corresponding flux equations. As we have already discussed in Chapters 2 and 3, the flux of mass, $J_M$, is governed by Fick's first law, which in one dimension, $x$, is written as:

$$J_M = -D \frac{dC}{dx}. \qquad (26.1)$$

Here, $D$ is the diffusion coefficient and $C$ is the concentration of the species under study. The flux of heat, $J_Q$, is described by Fourier's law,

$$J_Q = -k \frac{dT}{dx}, \qquad (26.2)$$

where $k$ is the thermal conductivity and $T$ is temperature. The flux of electrical charge, $J_E$, is governed by Ohm's law,

$$J_E = -\frac{1}{\rho} \frac{d\phi}{dx}, \qquad (26.3)$$

where $\rho$ is the electrical resistivity and $\phi$ is the electrical potential. The electrical resistivity, $\rho$, is inversely related to the electrical conductivity, $\sigma$, by

$$\rho = \frac{1}{\sigma}. \qquad (26.4)$$

In this chapter, we will briefly discuss the fundamentals of thermal and electrical conduction in materials.

Materials Kinetics, ISBN 978-0-12-823907-0, https://doi.org/10.1016/B978-0-12-823907-0.00010-8. © 2021 Elsevier Inc. All rights reserved.
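All three flux laws above share the same linear, gradient-driven form, so a single helper suffices to evaluate any of them. The numerical values below are illustrative only (the copper conductivity is taken from Table 26.1; the diffusion coefficient and resistivity are hypothetical):

```python
def flux(coefficient, gradient):
    """Generic linear flux law: J = -(kinetic coefficient) * (gradient)."""
    return -coefficient * gradient

# Fick's first law: hypothetical D = 1e-9 m^2/s, dC/dx = 100 mol/m^4
J_mass = flux(1e-9, 100.0)

# Fourier's law: k = 386 W/(m K) for copper (Table 26.1), dT/dx = 2000 K/m
J_heat = flux(386.0, 2000.0)

# Ohm's law: J_E = -(1/rho) * dphi/dx, hypothetical rho = 1.7e-8 ohm m
J_charge = flux(1.0 / 1.7e-8, 0.5)
```

The negative sign in each case expresses that the flux runs down the gradient, from high concentration, temperature, or potential toward low.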

26.2 Thermal Conductivity

Each of the three flux equations in Eqs. (26.1)-(26.3) has the same functional form, viz., the flux is equal to the product of a kinetic coefficient and the relevant gradient, which acts as a driving force for the irreversible process. Hence, Fourier's law is just a thermal analogue to Fick's first law of diffusion. If thermal conduction involves just a flow of heat, i.e., without any internal generation or dissipation of heat, then conservation of energy immediately leads to the heat equation,

$$\frac{\partial T}{\partial t} = k \frac{\partial^2 T}{\partial x^2}. \qquad (26.5)$$

This is derived following the same arguments as in Section 3.2 for the derivation of Fick's second law. Indeed, the heat equation of Eq. (26.5) has exactly the same mathematical form as Fick's second law in Eq. (3.18), but where concentration (C) is replaced by temperature (T) and the diffusion coefficient (D) is replaced by the thermal conductivity (k). The identical functional forms of Eq. (3.18) and Eq. (26.5) allow us to easily apply solutions of the mass diffusion equation to solve equivalent heat transfer problems, using k in place of D and T in place of C. Indeed, the heat equation is solved using the same methods that were previously applied to the diffusion equation in Chapter 4; moreover, the same set of solutions applies to both problems. Identical numerical techniques also exist to find solutions of the heat equation, such as the finite difference method discussed in Chapter 6. Given the identical methods and solutions for both mass and heat transport problems, there is no need to devote additional space here to solving these problems. A thorough review of solutions to the heat equation can be found in the excellent book by Carslaw and Jaeger [2].

Measurement of the thermal conductivity, k, requires establishing a temperature gradient across a sample of known thickness. The thermal conductivity is then determined by measuring the rate of temperature change over time [3]. The thermal conductivity of a material can also be calculated through molecular dynamics simulations, either by directly mimicking the experimental measurement process described above, or by integrating the autocorrelation function of the heat current. This latter approach is described in detail in Ref. [4].

In non-metallic materials, the primary mechanism of thermal transport is via atomic vibrations, i.e., phonons. The thermal conductivity of non-crystalline materials is typically much lower than that of crystalline materials, since the structural ordering of crystals allows for a more consistent path for phonon motion. The difference in thermal conductivity between crystalline and non-crystalline materials can span orders of magnitude. For example, Figure 26.1 compares the low-temperature thermal conductivity of crystalline and non-crystalline forms of silica. The α-quartz crystal has significantly higher thermal conductivity than silica glass over the full temperature regime.

Figure 26.1 Thermal conductivity of crystalline vs. non-crystalline silica. The thermal conductivity of α-quartz (I) is significantly higher than that of fused silica (II). (Modified from Varshneya and Mauro [3]).

For this same reason, defects such as dislocations and grain boundaries will lower the thermal conductivity of a crystalline material, since such defects lead to scattering of phonons. Hence, a single crystal has a much higher thermal conductivity than a polycrystalline form of the same substance. At high temperatures, the thermal conductivity of crystalline materials can be lowered either by the formation of additional defects or by phonon-phonon scattering.

Besides phonons, other mechanisms contributing to thermal conductivity may include convection and radiation. Convection is important in fluids, especially those having a low viscosity. Radiation can become an important thermal transport mechanism at high temperatures, since the contribution from radiative heat transfer increases with the third power of temperature [3].

In metals, thermal conductivity is dominated by free electrons. As such, the thermal conductivity of a metal is directly proportional to its electrical conductivity. Since heat transfer in metals occurs via electronic conduction, the thermal conductivity of metals tends to be quite high. A summary of room temperature thermal conductivity values for various metals and non-metallic materials is provided in Table 26.1. Diamond is the natural material having the highest thermal conductivity, due to its single-crystal structure with minimal defects and strong covalent bonding, leading to high atomic vibrational frequencies.
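Because Eq. (26.5) shares its form with the diffusion equation, the finite difference scheme of Chapter 6 carries over directly. The sketch below is a minimal explicit (FTCS) update for a 1D rod with fixed-temperature ends; the grid, coefficient, and initial condition are made-up values for illustration (the book writes the coefficient as k, which plays the role of the diffusivity D):

```python
def heat_step(T, k, dx, dt):
    """One explicit finite-difference (FTCS) step of Eq. (26.5):
    T_i(t+dt) = T_i + k*dt/dx**2 * (T_{i+1} - 2*T_i + T_{i-1}),
    holding the two boundary nodes fixed (Dirichlet conditions)."""
    r = k * dt / dx**2  # must satisfy r <= 1/2 for stability
    T_new = T[:]
    for i in range(1, len(T) - 1):
        T_new[i] = T[i] + r * (T[i + 1] - 2.0 * T[i] + T[i - 1])
    return T_new

# Hypothetical rod: 11 nodes, ends held at 0, initial hot spot in the middle.
T = [0.0] * 5 + [100.0] + [0.0] * 5
k, dx = 1.0, 1.0
dt = 0.25 * dx**2 / k  # r = 0.25, safely inside the stability limit
for _ in range(100):
    T = heat_step(T, k, dx, dt)
```

After many steps the hot spot spreads symmetrically and decays toward the fixed boundary values, exactly as the analogous diffusion problem would.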

Table 26.1 Room temperature thermal conductivities for several materials.

Material           Thermal Conductivity (W m⁻¹ K⁻¹)
Copper             386
Aluminum           237
Steel              50
Diamond            2300
Quartz             10
Silicate Glass     0.8
Polystyrene        0.03

Data are from Hofmann [1].

26.3 Electrical Conductivity

In most metals, electronic conduction is facilitated by free electrons. Whereas the core electrons in metals are tightly bound to their respective nuclei, valence electrons are weakly bound and can move with very little resistance. In free electron theory, the interaction between the core and valence electrons is considered negligible. In the presence of an applied electric field, the free electrons move in a direction opposite to the field, producing an electrical current. At low temperatures, the electrical resistivity in metals is attributed primarily to scattering of the electron waves with defects and impurities in the crystalline lattice. As shown in Figure 26.2, the electrical resistivity of metals increases at higher temperature due to interactions between electrons and thermal vibrations (phonons) [3].


Figure 26.2 Typical temperature dependence of electrical resistivity in metals.

Free electron theory is not applicable to ionic crystals, which exhibit strong interactions of the valence electrons with the periodic potential of the ionic lattice. Instead, the electronic conductivity of ionic crystals can be explained through the electron band theory of solids, which yields a set of stationary states that are available for occupation by individual electrons, as illustrated in Figure 26.3. The electrons are distributed among the available states following the Fermi-Dirac distribution:

$$p(E) = \frac{1}{1 + \exp\left(\frac{E - \mu}{kT}\right)}, \qquad (26.6)$$

where $p(E)$ is the equilibrium probability of occupying a state of energy $E$, $\mu$ is the chemical potential of the electrons, $k$ is Boltzmann's constant, and $T$ is absolute temperature. At absolute zero temperature (T = 0 K), $\mu$ is equal to the Fermi energy, i.e., the energy of the topmost filled electron band level.

In ionic materials, electrical conductivity can also be a result of the diffusion of ionic species, i.e., ionic conductivity. Indeed, many ceramic materials used as solid electrolytes in battery or fuel cell applications achieve their desired properties via a high ionic conductivity. As shown in Figure 26.4, the electrical conductivity of ceramics can vary by over 25 orders of magnitude, covering a wide range from insulators through superconductors [5,6]. Superconductors are materials that can achieve electrical conduction with zero resistance, particularly at low temperatures. Superconductivity is typically explained in terms of Bardeen-Cooper-Schrieffer (BCS) theory, which attributes the phenomenon to a condensation of Cooper pairs of electrons. Several applications of various types of electrical materials are presented in Table 26.2.
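Eq. (26.6) is straightforward to evaluate numerically, e.g., as a starting point for Exercises (26.8) and (26.9). The sketch below works in eV (the Boltzmann constant in eV/K is a standard value); the T = 0 branch implements the step-function limit discussed in the text:

```python
import math

K_B = 8.617333e-5  # Boltzmann's constant in eV/K

def fermi_dirac(E, mu, T):
    """Fermi-Dirac occupation probability p(E) of Eq. (26.6).

    Energies in eV. At T = 0 the distribution reduces to a step
    function at the Fermi energy (mu)."""
    if T == 0:
        return 1.0 if E < mu else (0.5 if E == mu else 0.0)
    return 1.0 / (1.0 + math.exp((E - mu) / (K_B * T)))

# At E = mu the occupation probability is exactly 1/2 for any T > 0.
p_half = fermi_dirac(5.0, 5.0, 298.0)
```

At room temperature the distribution is only slightly smeared: a tenth of an eV below the chemical potential the states are nearly full, and a tenth of an eV above they are nearly empty.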


Figure 26.3 Electron band gap formation in an ionic crystal. (Modified from Varshneya and Mauro [3]).

26.4 Varistors and Thermistors

Varistors and thermistors are two widely used types of electronic ceramics. Both are polycrystalline materials in which the electrical properties are highly nonlinear as a result of electrical barriers at grain boundaries [6].

Figure 26.4 Electrical conductivity of various types of materials. (Image by Brittney Hauke based on Ref. [5]).

Table 26.2 Existing and emerging applications for electronic materials with various levels of conductivity [5].

Material Category      Applications
Superconductors        Electronics, power transmission, energy storage, large field magnets, detectors
Metallic Conductors    Optical coatings and devices, conductive films, electrodes, catalysts
Semiconductors         Sensors, electrodes, heating elements, thermistors, varistors, switches, solar cells, thermoelectric devices, catalysts
Ionic Conductors       Sensors, solid oxide fuel cells, solid-state batteries, ion-selective membranes, electrolysis, other electrochemical devices
Insulators             Electronic packaging, substrates

Varistors (i.e., variable resistors) exhibit highly nonlinear current-voltage relationships, as depicted in Figure 26.5. Varistors behave as solid-state circuit breakers that do not require a reset. They are also useful in electrical switching applications. Varistors are noted for their very low resistivity in the breakdown regime. A typical varistor is a doped ZnO, using a dopant such as bismuth or iron. The dopants segregate to the grain boundaries, where they form a barrier to conduction across the boundaries.

Figure 26.5 Typical voltage vs. current plot for a ceramic varistor.

As shown in Figure 26.5, the current-voltage relationship of a varistor shows three distinct regimes. At low voltages, the relationship is governed by the resistivity of the grain boundaries. The resistivity abruptly drops when the applied voltage is high enough to overcome this barrier, causing breakdown. At still higher voltages, the resistivity increases again, controlled by the resistivity of the grains themselves.

Thermistors (i.e., thermal resistors) are materials that show a large variation in resistivity with temperature. Thermistors are useful as self-regulating heating elements or in thermal sensors. Figure 26.6 shows an example resistivity vs. temperature plot for a thermistor, which may contain regimes of negative temperature coefficient (NTC) or positive temperature coefficient (PTC) behavior. The archetypal thermistor material is doped barium titanate, which shows a dramatic increase in resistivity (up to a factor of 10⁷) as a function of temperature [6]. The atomistic mechanism for this effect is illustrated in Figure 26.7. As explained by Chiang and Takagi [7], oxidation of the donor-rich grain boundaries leaves uncompensated acceptor defects, which primarily consist of barium vacancies. These defects occupy a fairly narrow spatial region compared to the electron depletion layer and present a barrier for electronic conduction.

Figure 26.6 Typical resistivity vs. temperature plot for a ceramic thermistor, showing regions of negative temperature coefficient (NTC) and positive temperature coefficient (PTC) behavior.
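The sharply nonlinear varistor behavior described above is commonly summarized in the breakdown regime by the empirical power law I = k·V^α, where a large nonlinearity coefficient α captures the abrupt drop in resistivity. The k and α values below are illustrative placeholders, not measured values for any particular device:

```python
def varistor_current(V, k=1e-6, alpha=25.0):
    """Empirical varistor relation in the breakdown regime: I = k * V**alpha.

    k and alpha are illustrative placeholders; commercial ZnO varistors
    can have substantially larger nonlinearity coefficients."""
    return k * V ** alpha

# With alpha = 25, a 20% increase in voltage multiplies the current
# by roughly two orders of magnitude.
ratio = varistor_current(1.2) / varistor_current(1.0)
```

This steepness is what lets a varistor sit almost non-conducting at its working voltage yet shunt a surge the moment the grain-boundary barrier is overcome.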

26.5 Summary

Thermal conductivity and electrical conductivity are transport properties analogous to mass diffusion. The heat equation has the same mathematical form as the diffusion equation. As such, the solutions to the heat equation are identical to solutions of the diffusion equation, replacing concentration with temperature and replacing the diffusion coefficient with the thermal conductivity. In metals, thermal conductivity is dominated by free electrons and is therefore proportional to electrical conductivity. In non-metallic materials, thermal conductivity is governed by phonons. Single crystals have the highest thermal conductivities, since defects such as dislocations and grain boundaries lead to phonon scattering, which reduces thermal conductivity.

Figure 26.7 Proposed mechanism for BaTiO3 thermistor behavior. (Image by Brittney Hauke based on Ref. [7]).


Similarly, the thermal conductivity of non-crystalline materials is significantly less than that of their crystalline counterparts. Other mechanisms for thermal conduction include convection in fluid phases and radiation at high temperatures. Electrical conductivity varies by many orders of magnitude across materials. Metals have high electrical conductivities due to the high mobility of their valence electrons. Ceramics can exhibit highly nonlinear variations in resistivity, as in varistors and thermistors. A varistor is a material where the resistivity varies significantly with the applied voltage. In thermistor materials, the resistivity is a strong function of temperature. Thermistor materials may have different regimes exhibiting positive or negative temperature coefficients.

Exercises

(26.1) A 10-mm thick steel sheet has a temperature difference of 20 K across the thickness of the sheet. If the sheet measures 1 m × 1 m, what is the heat flux through the sheet? The thermal conductivity of steel is provided in Table 26.1.

(26.2) Using Chapter 6 as a guide, derive a finite difference method for solving the heat equation.

(26.3) How can the heat equation be solved in the case of a temperature-dependent thermal conductivity? (Hint: This problem is analogous to the concentration-dependent diffusion problem in Section 4.9.)

(26.4) Derive an expression for thermal conductivity in terms of the autocorrelation of the heat current. (Hint: See Ref. [4].)

(26.5) Describe a situation in which the heat current acts in the uphill direction, i.e., to amplify rather than reduce a temperature gradient. Discuss the underlying physics that govern this effect.

(26.6) What strategy should be used for solving the heat equation when the thermal conductivity is a function of time?

(26.7) In architectural engineering, an insulated glazing unit (i.e., a double-paned window) consists of two panes of glass separated by an inert gas.
(a) What factors govern the transport of heat across the insulated glazing unit? Consider thermal conduction, convection, and radiation effects.
(b) How could the design of the insulated glazing unit be improved to suppress heat transfer by each of these mechanisms?

(26.8) Use the Fermi-Dirac distribution of Eq. (26.6) to plot the probability of finding an electron in an ionic crystal as a function of energy relative to that of the Fermi energy. In other words, plot p(E) vs. E − μ. Overlay the plots for T = 0 K and T = 298 K. What do you notice about the evolution of the p(E) vs. E − μ curve as the temperature of the system is increased?

(26.9) Assuming a Fermi-Dirac distribution with a Fermi energy of 5 eV, determine the values of energy that correspond to p(E) = 0.9 and p(E) = 0.1 at a temperature of 298 K.

(26.10) Discuss two strategies for increasing the breakdown voltage of a ZnO varistor.

(26.11) What are the most important aspects of materials kinetics that you have learned from this course? How can these be applied to your own research? Be specific in your answer.

References [1] P. Hofmann, Solid State Physics: An Introduction, Wiley (2015). [2] H. S. Carslaw and J. C. Jaeger, Conduction of Heat in Solids, 2nd ed., Clarendon Press (1959). [3] A. K. Varshneya and J. C. Mauro, Fundamentals of Inorganic Glasses, 3rd ed., Elsevier (2019). [4] A. Kinaci, J. B. Haskins, and T. Çagın, “On Calculation of Thermal Conductivity from Einstein Relation in Equilibrium Molecular Dynamics,” J. Chem. Phys. 137, 014106 (2012). [5] W. J. Weber, H. L. Tuller, T. O. Mason, and A. N. Cormack, “Research Needs and Opportunities in Highly Conducting Electroceramics,” Mater. Sci. Eng. B 18, 52 (1993). [6] Y.-M. Chiang, D. Birnie III, and W.D. Kingery, Physical Ceramics, Wiley (1997). [7] Y.-M. Chiang and T. Takagi, “Grain-Boundary Chemistry of Barium Titanate and Strontium Titanate: II, Origin of Electrical Barriers in Positive-Temperature-Coefficient Thermistors,” J. Am. Ceram. Soc. 73, 3286 (1990).


Index

Note: Page numbers followed by "f" indicate figures and "t" indicate tables.

A Acceptance ratio, 448e449 Activation barrier, 46 Activation relaxation technique, 335 Adam-Gibbs entropy model, 276e278 Adiabatic process, 19 Advanced nucleation theories classical nucleation theory, limitations, 255e257 density functional theory, 260e263, 260f diffuse interface theory, 259e260, 260f implicit glass model, 263e264, 265f lithium disilicate, 256f lithium metasilicate, 256f statistical mechanics, 257e259 Affinity, 22e24, 35, 85e86 Angell diagram, 272e273, 273f, 284, 294, 305e306 Anion Frenkel defects, 138e139 Anisotropic diffusion, 75e77, 76f Anisotropic surfaces, 181e182 Annealing point, 271 Anti-microbial glass, 211e212, 211f Applied forces, 177 A regime, 150e154, 151f, 159 Arrhenius equation, 46 Arrhenius law, 495 AtomEye, 439 Atomic force microscopy (AFM), 170e171 Atomic mechanisms, 129e130 Atomic models, diffusion Brownian motion, 109e110 crystalline matrix, 111f diffusing atom, 112f diffusion, 123e125 Einstein diffusion equation, 120e121

jumps, 110 Markov process, 112 mean squared displacement of particles, 118e120 moments of function, 121e123 parabolic well potential, 116e117, 116f particle escape probability, 117e118 random walks, 123e125, 124f square well potential, 112e115, 113f thermally activated atomic jumping, 110e112, 111f Autocorrelation function, 474 Avalanche, 168, 175 Avogadro’s number, 9e10, 183e184, 277e278 Avramov-Milchev (AM) equation, 275

B Bardeen-Cooper-Schrieffer (BCS) theory, 505 Barostat, 434 Basins, 316e317 Beam-bending configuration, 406 Beam-bending viscometry, 272 Bingham plastic, 289e290 Binodal curve, 222 Binomial distribution, 125e126 Boltzmann distribution, 447 Boltzmann’s arrow of time, 12, 12f Boltzmann’s constant, 9e10, 46, 47f, 113e114, 164, 178e179, 183e184, 200e201, 236, 257e258, 276e277, 301, 315e316, 347e348, 363e364, 430, 446, 469, 505 Boltzmann’s theory, 9 Born-Oppenheimer approximation, 421 Born solvation model, 263, 266 Bottleneck principle, 494e495





Boundary conditions, 59, 64f, 73e74, 99, 104, 107e108, 206, 229, 261e262, 419, 423, 430e431, 431f, 439 Bounded initial distribution, 67e68, 67f Brass, 50, 55 B regime, 151 Broken ergodicity, 346e351, 346f Broken ergodic systems, 470 enthalpy fluctuations, 470 heat capacity, 471 Kronecker delta function, 472e473 low temperature isobaric heat capacity, 473, 473f time domain variance, 470 Brownian motion, 109e110 Bulk viscosity, 290 Burgers vector, 161

C Cahn-Hilliard equation, 226e229, 227f Canonical ensemble, 257e258, 265e266, 426, 451, 469 Capillary forces, 177 Cathedral glass, 309e311, 311fe312f Cation Frenkel defect, 138e140, 145 Central processing units (CPUs), 438 Central tension (CT), 206 Chain-growth polymerization, 199 Chemical force, 163 Chemically strengthened glass, 50e51 ion exchange, 203e209 Chemical reaction kinetics equilibrium constants, 490e491 first-order reactions, 491e493 heterogeneous reactions, 496 higher order reactions, 493e494 order of reactions, 489 rate of reactions, 487e489, 488f reactions in series, 494e495, 494f solid state transformation kinetics, 496e498, 497f temperature dependence, 495e496 Classical nucleation theory, 235e236 limitations, 255e257 Climb, 165, 174 Cluster expansion, 424

This book belongs to Alice Cartes ([email protected])

Coble creep, 189e190, 195 Coffee-cream mixture, 1, 2f Cold sintering approach, 193e195, 194f COMB potentials, 435e436, 435fe437f Compatibility, 106 Complementary error function, 63e65 Complexity, 173, 326, 350e351 Composite ice-water system, 342f Composite system, 10e11, 11f Composition dependence, 306e308 Compressive stress (CS), 206 Computational fluid dynamics (CFD), 456 Concentration-dependence diffusivity, 77e78 Condensation, 168e169 Conditional probability, 352 Conjugate driving force, 28 Conservative, 173 Contact angle, 239e240 Continuously broken ergodicity, 351e355 Convergence, 106e107 Cooperatively rearranging regions, 276 Cooper coefficient, 204e205 Corning EAGLE XG glass, 407e408 Correlated walks, 134 Correlation factor, 133e134, 145 Coulomb forces, 425 Coupling coefficients, 28 C regime, 151 Critical temperature, 220 Crystal growth rate, 242e244, 242fe243f Crystalline matrix, 111f Crystallization, 168e169, 295e296 classical nucleation theory, 235e236 contact angle, 239e240 crystal growth rate, 242e244, 242fe243f epitaxial growth, 241 glass-ceramics, 248e249, 248f, 250f heterogeneous nucleation, 236, 238e241, 239f homogeneous nucleation, 236e238, 237f



Johnson-Mehl-Avrami equation, 235, 244e246, 246f kinetic barrier, 236 nucleation, 235 rate, 241e242 supercooled liquid, 235 thermodynamic barrier, 236 time-temperature-transformation diagram, 246e247, 247f wetting angle, 239e240 Wilson-Frenkel theory, 244 Crystals, diffusion in atomic mechanisms, 129e130 correlated walks, 134 defect reactions equilibrium constants, 139e141 Frenkel defects, 137e139, 138f interstitialcy mechanism, 130, 131f interstitial mechanism, 130, 131f ionic crystals, 141e144, 144f ionic crystals defects, 135e137, 136f metals, 131e133, 132f ring mechanism, 129, 130f Schottky defects, 137e139, 138f self-diffusion, 133f, 135t vacancy mechanism, 129e130, 130f Curvature forces, 161, 164e165 Cylindrical coordinates, 108

D Deborah number, 343e345 Defect reactions equilibrium constants, 139e141 Deformation mechanism map, 189e190, 190f, 196 Degeneracy, 457e458 Degenerate microstates, 368e370, 368f Density functional theory, 260e263, 260f Density of states, 327e328, 443, 457e460, 464 Depth of compression (DOC), 205 Depth of layer (DOL), 205 Detailed balance condition, 448 Diamond, 504 Diffuse interface theory, 229, 259e260 Diffusing atom, 112f

This book belongs to Alice Cartes ([email protected])

515

Diffusional creep, 189e190, 190f Diffusion along dislocations, 157, 157fe158f Diffusion along free surfaces, 157e158 Diffusion along stationary, 153e155 Diffusion equation analytical solutions anisotropic diffusion, 75e77, 76f bounded initial distribution, 67e68, 67f complementary error function, 63e65 concentration-dependence diffusivity, 77e78 diffusion in other coordinate systems, 79, 80f error function, 63e65 extended source solution, 63e65, 64f Fick’s Second Law, 59e60 Laplace transforms method, 71e75, 72t one dimension, plane source in, 60e62 reflection method, 62e63 separation of variables method, 68e71 superposition method, 62e63 time-dependent diffusivity, 78e79 numerical solutions compatibility, 106 convergence, 106e107 dimensionless variables, 100e101 finite difference method, 99e103, 102f finite difference solutions, 103e106, 104f pydiffusion, 99e100 stability, 107 Diffusion-limited, 173, 183, 184f Diffusion mechanisms, 148e150, 148f Diffusion pathways, 94e96 Diffusion-trap model, 397e399 Dilatancy, 290 Dimensionless variables, 100e101 Direct kinetic coefficients, 28 Disconnectivity graphs, 324e327, 325f, 327f

Copyright Elsevier 2023

516

Index

Dislocation avalanche, 168 Dislocation hardening, 174 Dislocation motion chemical force, 163 crystalline interface motion, 173e174, 173f crystal-vapor interfaces, 169e171 curvature force, 164e165 dislocation climb, 165e168 dislocation glide, 165e168 driving forces, 161e165 burgers vector, 161 curvature forces, 161 mechanical forces, 161 osmotic forces, 161 entropy-stabilized oxides, 170e171, 172f interfacial motion, driving forces, 168e169 negative dislocation climb, 167, 167f osmotic force, 163 Peach-Koehler equation, 162 positive dislocation climb, 167, 167f stress tensor, 163f Droplet nucleation, 222 Dynamical heterogeneities, 476e477, 476f Dynamic propensity, 476e477

E Eigenvector-eigenvalue problem, 88, 329e332 Einstein diffusion equation, 120e121 Einstein relation, 475 Electrical conductivity, 504e505, 507f, 510 Electromigration, 33e35, 33f, 45e46 Electron band theory, 505 Electron probe micro-analysis (EPMA), 52, 94 Electrostatic potential, 425 Energy landscapes activation relaxation technique, 335 disconnectivity graphs, 324e327, 325f, 327f enthalpy landscapes, 319e322 ExplorerPy, 336e338, 336f


inherent structures, 316, 327e336 Lagrange multipliers, 332fe333f LAMMPS, 336e337 landscape kinetics, 322e324 nudged elastic band method, 335e336 potential energy landscapes, 315e318, 317f transition points, 327e336 zero-temperature enthalpy landscape, 319e320 Enthalpy, 3e4 landscapes, 319e322 Entropy production, 26 Entropy-stabilized oxides, 170e171, 172f Epitaxial growth, 241 Equations of motion, 426e430 Equilibrium Boltzmann’s arrow of time, 12, 12f Boltzmann’s theory, 9 coffee-cream mixture, 1, 2f composite system, 10e11, 11f constants, 490e491 definition, 1 enthalpy, 3e4 first law of thermodynamics, 11e12 Gibbs free energy, 3e4, 5f Helmholtz free energy, 3e4 internal energy, 3e4 irreversible thermodynamics, 3 kinetic rate parameters, 3 kinetics, 3e5 macrostate, 8e9 memory, 2e3 microscopic basis of entropy, 8e11 microstate, 8e9 non-spontaneous processes, 6e8, 7f principle of causality, 14e15 second law of thermodynamics, 12e13 spontaneous process, 5f, 6e8 statistical mechanics, 8e9 structural relaxation, 2e3 systems, 390 thermodynamic driving force, 3 thermodynamics, 3e5 third law of thermodynamics, 13e15 two state model, 5

Copyright Elsevier 2023

Index

vapor equilibrium, 2 zeroth law of thermodynamics, 15 Ergodicity broken ergodicity, 346e351, 346f composite ice-water system, 342f conditional probability, 352 continuously broken ergodicity, 351e355 Deborah number, 343e345 definition, 341e343 hierarchical master equation approach, 355e357 metabasin, 347 Palmer approach, 347 thermodynamic implications, 357e360 unifying process, 357e358 Error function, 63e65 Escape probability, 117e118 Excess interaction energy, 218, 220 ExplorerPy, 336e338, 336f Extended source solution, 63e65, 64f Extrinsic mechanism, 47e48

F
Faceting, 181, 182f
Fast Fourier transform (FFT) algorithm, 403
Fast grain boundary diffusion, 156
Fermi-Dirac distribution, 505, 511
Fiber elongation, 272
Fick’s first law, 39–41, 93
Fick’s Laws of Diffusion
  activation barrier, 46
  Arrhenius equation, 46
  chemically strengthened glass, 50–51
  driving forces for diffusion, 45–46
  electron probe micro-analysis (EPMA), 52
  extrinsic mechanism, 47–48
  Fick’s first law, 39–41
  Fick’s second law, 41–45
  interdiffusion process, 49–51
  intrinsic mechanism, 47–48
  Kirkendall effect, 50
  Laplacian operator, 44
  Nernst-Planck equation, 45
  radioactive tracer isotopes, 53
  secondary ion mass spectroscopy (SIMS) technique, 52
  self-diffusion, 53, 54f
  temperature dependence of diffusion, 46–48
  tracer diffusion, 53–55
  x-ray photoelectron spectroscopy (XPS), 52
Fick’s second law, 41–45, 59–60
Fictive temperature, 390, 395f
  distributions, 392–396, 395f
  kinetic interpretation, 396–397
  partial fictive temperatures, 396–397
  property dependence, 396
Finite difference method, 99–103, 102f
First law of thermodynamics, 11–12
First-order reactions, 491–493
Fluctuations, 482–485
  broken ergodic systems, 470
  enthalpy fluctuations, 470
  heat capacity, 471
  Kronecker delta function, 472–473
  low temperature isobaric heat capacity, 473, 473f
  time domain variance, 470
  defined, 467
  dynamical heterogeneities, 476–477, 476f
  high performance display glass, 480–482, 481f–484f
  nonmonotonic relaxation of, 477–480
  statistical mechanics of, 468–470
  time correlation function, 474–475
  variance of, 467
Fluidity, 269
Flux, 24–25, 501–502
Fock operator, 421
Fourier’s law, 25
Fragile liquids, 273f, 274, 278, 291
Fragile-to-strong transition, 287–288
Fragility, 304–306, 305f–306f
  index, 273
Free boundary, 431
Free electron theory, 504
Freely jointed chain model of polymers, 201–202, 202f
Frenkel defects, 137–139, 138f
Functional derivative, 227–228


G
Galvanomagnetic effect, 37–38
Gas constant, 9–10, 218, 495
Gibbs ensemble, 443, 452
Gibbs free energy, 3–4, 5f
Glass-ceramics, 248–249, 248f, 250f
Glass relaxation
  diffusion-trap model, 397–399
  fast Fourier transform (FFT) algorithm, 403
  fat tail distribution, 397
  fictive temperature, 390, 395f
    distributions, 392–396, 395f
    kinetic interpretation, 396–397
    partial fictive temperatures, 396–397
    property dependence, 396
  inter-diffusion process, 389
  Kohlrausch-Williams-Watts (KWW) function, 402–403
  Kovacs crossover effect, 392
  Maxwell relation, 411–413, 412f
  Phillips diffusion-trap model, 416
  Prony Series description, 399–403, 400t, 401f–402f
  relaxation kinetics, 404–405
  RelaxPy, 389, 406
  Ritland crossover effect, 392
  secondary relaxation, 413–414, 414f–415f
  stress, 406–410, 407f
  stretched exponential relaxation, 397–399, 398f, 409–410
  structural relaxation, 406–410
  Tool’s equation, 390–392
Glass transition, 295–299, 296f
Glass transition temperature, 271
Glide, 165
Glide plane, 165
Glissile, 166–167
Grain boundaries, 153–155
Grain boundary diffusion regimes, 150–153, 150f
Grain growth, 184–188
Grand canonical ensemble, 257–258, 265, 441, 452
Graphics processing units (GPUs), 438
Green-Kubo relation, 475
GROningen MAchine for Chemical Simulations (GROMACS), 438

H
Hardening, 174
Harrison’s ABC model, 150–151
Hartree-Fock theory, 421
Heat equation, 502
Heat of reaction, 8
Helmholtz free energy, 3–4
Hessian matrix, 328–329, 333–334
Heterogeneous nucleation, 236, 238–241, 239f
Heterogeneous reactions, 487, 496
Hierarchical master equation approach, 355–357
Higher order reactions, 493–494
Homogeneous nucleation, 236–238, 237f
Homogeneous reactions, 487
Hyperskewness, 123
Hypertailedness, 123

I
Ideal glass transition, 298
Immiscibility, 222–224
Immiscibility dome, 221
Implicit glass model, 263–264, 265f
Infinite temperature limit, 284–286, 285f
Inherent structures, 316, 327–336
  eigenvector-following technique, 328
  Hessian matrix, 329–332
  Lagrange multiplier, 329–332
Insulated glazing unit, 510
Interatomic potentials, 424–425
Interdiffusion, 49–51
  coefficient, 87
  process, 49, 389
Interfacial motion, driving forces, 168–169
Internal energy, 3–4
Interstitialcy mechanism, 130, 131f
Interstitial mechanism, 130, 131f
Intrinsic mechanism, 47–48
Ion exchange, 50–51, 51f, 57, 203–209, 389


Ion-exchanged glass waveguides, 211
Ionic crystals, 141–144, 144f
  defects, 135–137, 136f
Irreversible thermodynamics, 3
  affinity, 22–24, 24f
  electromigration, 33–35, 33f
  entropy production, 26
  Fick’s first law, 25
  fluxes, 24–25
  Fourier’s law, 25
  galvanomagnetic effect, 37–38
  Hall effect, 37–38
  linear systems, 27–28, 35
  local entropy density, 24
  Ohm’s law, 25
  Onsager reciprocity theorem, 28–29, 35
  piezoelectric materials, 34–35
  purely resistive systems, 26–27, 35
  quasi-static process, 36
  reversible process, 19–22, 20f–21f, 21t
  thermocouple, 36
  thermoelectric materials, 31–33, 32f–33f, 35
  thermomagnetic effect, 37
  thermophoresis, 29–31, 29f, 35
Isentropic process, 19
Isoconfigurational ensemble, 476–477
Isokom temperature, 270
Isostructural viscosity, 299–300
Isothermal-isobaric ensemble, 319, 348, 426, 433–434, 443, 470
Isotropic surfaces, 178–180

J
Johnson-Mehl-Avrami equation, 235, 244–246, 246f, 497–498
Jumps, 110

K
Kauzmann paradox, 286–287
Kauzmann temperature, 286
Kinetic barrier, 236
Kinetic Monte Carlo, 452–453, 457f
  application of, 456
  master equation approach, 454
  metal microstructures, 455–456, 456f
  random numbers, 454–455, 454f
  transition points, 453
KineticPy, 385–386, 385f
Kinetic rate parameters, 3
Kirkendall effect, 50
Kohlrausch-Williams-Watts (KWW) function, 402–403
Kovacs crossover effect, 392
Kröger-Vink notation, 135–137, 136f, 145
Kronecker delta function, 353, 472–473
Kurtosis, 122–123

L
Lagrange multipliers, 332f–333f
Landscape kinetics, 322–324
Landscape partitioning, 375–383, 376f
Laplace transforms method, 71–75, 72t
Laplacian operator, 44
Lennard-Jones potential, 425, 440
Lifshitz-Slyozov-Wagner (LSW) theory, 183–184
Linear approximation, 27
Linear network dilation coefficient, 204–205
Linear purely resistive system, 27
Linear response theory, 475
Liquid fragility, 272–274, 273f
Liquid-liquid immiscibility, 221
Lithium disilicate, 256f
Local entropy density, 24
Long time scales, 383–385, 384f

M
Maclaurin series, 26–27, 35
Macrostate, 8–9
MAP model. See Mauro-Allan-Potuzak (MAP) model
Markov chain of states, 447–448, 453
Markov process, 112, 447–448
Master equations
  definition, 365–368
  degenerate microstates, 368–370, 368f
  KineticPy, 385–386, 385f
  landscape partitioning, 375–383, 376f
  long time scales, 383–385, 384f


Master equations (Continued)
  metabasin approach, 370–375, 371f
  transition state theory, 363–364, 364f
Matrix diagonalization, 88–91, 92f
  anisotropic diffusion problem, 88
  coordinate transformations, 92f
  diffusion pathway, 92f
  flux equations, 91
  interdiffusivity matrix, 89–90
Matrix formulation, 87–88
Mauro-Allan-Potuzak (MAP) model, 405, 414
Mauro-Yue-Ellison-Gupta-Allan (MYEGA) equation, 279–284, 280f, 414
Maxwell relation, 411–413
Mean squared displacement of particles, 118–120
Medieval cathedral glass, 309–311, 310f
Melting point, 270
Message Passing Interface (MPI), 438
Metabasin approach, 347, 370–375, 371f
Metals, 131–133, 132f
Metropolis algorithm, 443, 449–450
Metropolis method
  acceptance ratio, 448–449
  Boltzmann distribution, 448–449
  random trial move, 448–449, 449f
Microcanonical ensemble, 426
Microscopic basis of entropy, 8–11
Microscopic reversibility, 29
Microstate, 8–9
Miscibility gap, 222
Mixing, 39, 217–221, 393–396
Modulus, 462–463
Molecular dynamics, 443
  AtomEye, 439
  barostats, 434
  ensembles, 426
  equations of motion, 426–430
  interatomic potentials, 424–425
  vs. Monte Carlo method, 450–451
  multiscale materials modeling, 419, 420f
  Open Visualization Tool (OVITO), 439
  principles of, 422–424
  quantum mechanical techniques, 420–422, 422f
  reactive force fields, 434–436, 435f–437f
  simulations, 430–433, 431f
  thermostats, 433
  trade tools, 438–439
  Visual Molecular Dynamics (VMD), 439
  X-window CRYstalline Structures and DENsities (XCrySDen), 439
Møller-Plesset perturbation theory, 421
Moments of function, 121–123
Monomers, 199
Monte Carlo integration, 445f
  application of, 447
  integrals, 445–446
  probability density function, 445–446
Monte Carlo method, 266, 443, 463–464
  inherent structure density of states
    basin occupation probabilities, 460, 461f
    degeneracy, 457–458
    one-dimensional potential energy landscape, 460, 460f, 462f
    Wang-Landau technique, 458
  integration. See Monte Carlo integration
  kinetic Monte Carlo, 452–456
  Markov processes, 447–448
  Metropolis method, 448–450
  vs. molecular dynamics, 450–451
  random number generators, 460–463
  sampling, 451–452, 452f
  in statistical mechanics, 446–447
Morphology, 177, 179, 226, 252
Morse potential, 424–425
Multicomponent diffusion
  interdiffusion coefficient, 87
  matrix diagonalization, 88–91, 92f
  matrix formulation, 87–88
  ternary system, 87–88
  uphill diffusion, 85, 93–96, 93f, 95f–96f
Multiple boundary regime, 152
Multiscale materials modeling, 419, 420f


MYEGA equation. See Mauro-Yue-Ellison-Gupta-Allan (MYEGA) equation

N
Nabarro-Herring creep, 189–190, 195
Negative dislocation climb, 167, 167f
Neighbor list, 429–430
Nernst-Planck equation, 45
Newton’s law, 269
Newton’s second law of motion, 450
Ni-Fe-Co alloy system, 93, 93f
Nonconservative motion, 173
Nonequilibrium system, 390
Nonequilibrium viscosity
  composition dependence, 306–308
  crystallization, 295–296
  fictive temperature, 298
  fragility, 304–306, 305f–306f
  glass, 297
    transition, 295–299, 296f
  ideal glass transition, 298
  medieval cathedral glass, 309–311, 310f
  modeling, 301–304, 303f
  supercooled liquid, 296–297
  temperature-dependent constraint models, 307
  thermal history dependence, 299–300, 300f
  topological constraint theory, 306–307
  volume temperature diagram, 295–296
Nonmonotonic relaxation
  density fluctuations, 478–479, 478f–480f
  selenium glass, 477–478, 477f
Non-Newtonian viscosity, 288–290, 289f
Non-spontaneous processes, 6–8, 7f
Nucleation, 235
  rate, 241–242
Nudged elastic band method, 335–336

O
Ohm’s law, 25
One-dimensional (linear) defects, 147
Onsager reciprocity theorem, 28
Open Visualization Tool (OVITO), 439
Order of reactions, 489
Osmotic forces, 161, 163
Ostwald ripening, 182–183

P
Palmer approach, 347
Parabolic well potential, 116–117, 116f
Parallel plate viscometry, 272
Partial Schottky defect, 138
Particle coarsening, 182–184
Particle escape probability, 117–118
Partition function, 257–258, 276–277, 315–320, 347–348, 371–372, 446–447, 471
Partitioning process, 357, 376
Peach-Koehler equation, 162
PECVD. See Plasma-enhanced chemical vapor deposition (PECVD)
Peltier effect, 32–33
Periodic boundary conditions, 430–431
Phase-field modeling, 229–230, 231f
Phase separation
  Cahn-Hilliard equation, 226–229, 227f
  critical temperature, 220
  droplet nucleation, 222
  excess interaction energy, 218, 220
  immiscibility, 222–224
  immiscibility dome, 221
  kinetics, 224–226, 224f–225f
  liquid-liquid immiscibility, 221
  phase-field modeling, 229–230, 231f
  spinodal decomposition, 222
  spinodal domes, 222–224
  thermodynamics of mixing, 217–221
  uphill diffusion, 217
  upper consolute temperature, 220
Phillips diffusion-trap model, 416
Piezoelectric materials, 34
Pinning, 173, 173f
Plasma-enhanced chemical vapor deposition (PECVD), 456
Plateau-Rayleigh instability, 179, 180f–181f
Poisson’s ratio, 164–165, 204–205
Polycrystalline materials
  diffusion


Polycrystalline materials (Continued)
    A regime, 150
    B regime, 151
    C regime, 151
    defects in, 147
    dislocations, 157, 157f–158f
    fast grain boundary diffusion, 156
    free surfaces, 157–158
    grain boundaries, 153–155
    grain boundary diffusion regimes, 150–153, 150f
    Harrison’s ABC model, 150–151, 153–154, 159
    mechanisms, 148–150, 148f
    multiple boundary regime, 152
    one-dimensional (linear) defects, 147
    short-circuit diffusion pathways, 148
    stationary, 153–155
    three-dimensional (volume) defects, 147
    two-dimensional (planar) defects, 147
    zero-dimensional (point) defects, 147
  morphological evolution
    anisotropic surfaces, 181–182
    applied forces, 177
    capillary forces, 177
    cold sintering approach, 193–195, 194f
    diffusional creep, 189–190, 190f
    final stage, 193
    grain growth, 184–188, 185f
    initial stage, 193
    intermediate stage, 193
    isotropic surfaces, 178–180
    Lifshitz-Slyozov-Wagner (LSW) theory, 183–184
    morphology, 177
    Ostwald ripening, 182–183
    particle coarsening, 182–184
    Plateau-Rayleigh instability, 179, 180f–181f
    sintering, 190–195, 191f, 192t, 193f
    two-dimensional grain evolution, 185f
    von Neumann-Mullins law, 185
    ZnO ceramic system, 193–195
Polymeric chains, 200
Polymers and glasses
  anti-microbial glass, 211–212, 211f
  central tension (CT), 206
  chain-growth polymerization, 199
  chemically strengthened glass, ion exchange, 203–209
  compressive stress (CS), 206
  Cooper coefficient, 204–205
  depth of compression (DOC), 205
  depth of layer (DOL), 205
  freely jointed chain model of polymers, 201–202, 202f
  ion-exchanged glass waveguides, 211
  linear network dilation coefficient, 204–205
  monomers, 199
  polymeric chains, 200
  proton conducting glasses, 212, 212f
  reptation, 202–203, 203f
  step-growth polymerization, 199
  Stokes-Einstein relation, 200–201, 200f
Positive dislocation climb, 167, 167f
Potential energy landscapes, 315–318, 317f
Primary relaxation, 413
Principle of causality, 14–15
Probability density function, 445–447
Prony Series description, 399–403, 400t, 401f–402f
Proton conducting glasses, 212, 212f
Pseudoplastic, 290
Pseudopotential theory, 421
Pseudo-random numbers, 462
Purely resistive systems, 26–27
Pydiffusion, 99–100

Q
Quantum mechanical techniques, 420–422, 422f
Quartz, 503, 503f
Quasi-static process, 36

R
Radioactive tracer isotopes, 53
Random number generators, 460–462
  modulus, 462–463
  random seed, 462


Random sampling, 447
Random seed, 462
Random walks, 123–125, 124f
Rate of reactions, 487–489, 488f
Rayleigh instability, 179–180
Rayleigh scattering, 468
Rayleigh-Schrödinger perturbation theory, 421
Reached equilibrium, 447–448
Reactant, 5, 335–336, 487–488
Reactive force fields, 434–436, 435f–437f
ReaxFF, 434–436
Reflection method, 62–63
Relaxation kinetics, 404–405
RelaxPy, 406
Reptation, 202–203, 203f
Ring mechanism, 129, 130f
Ritland crossover effect, 392
Room temperature relaxation, 413, 414f–415f
Root mean squared error (RMSE), 400–401
Rotational viscometry, 272

S
Salt, 51f, 204–205, 208
Schottky defects, 137–139, 138f
Secondary ion mass spectroscopy (SIMS), 52, 94
Secondary relaxation, 413–414, 414f–415f
Second law of thermodynamics, 12–13
Seebeck coefficient, 32
Seebeck effect, 28, 31–33, 32f, 36
Self-diffusion, 53, 54f, 133f, 135t
Separation of variables method, 68–71
Sessile, 166–167
Shear modulus, 190f, 357–358, 411–413
Shear thickening, 290
Shear thinning, 290
Shear viscosity, 269
Short-circuit diffusion pathways, 148
Sink-limited, 173–174, 183
Sintering, 190–195, 191f, 192t, 193f
Sintering mechanism map, 191, 192f
Skewness, 122–123
Slip plane, 165
Small-angle x-ray scattering (SAXS), 478–479
Softening point, 270
Solidification, 168–169
Solid state reactions, 487
Solid state transformation kinetics, 496–498, 497f
Soret effect, 29
Source-limited, 173
Spherical coordinates, 79
Spinodal decomposition, 222
Spinodal domes, 222–224
Spontaneous process, 5f, 6–8
Square well potential, 112–115, 113f
Stability, 107
Standard deviation, 122, 467, 477–478, 483f
Statistical mechanics, 8–9
  fluctuations
    Boltzmann’s constant, 469
    internal energy fluctuations, 469
    isochoric heat capacity, 469
    isothermal-isobaric ensemble, 470
  Monte Carlo method, 446–447
Step-growth polymerization, 199
Stochastic sampling, 444
Stoichiometric coefficients, 488
Stokes-Einstein relation, 200–201, 200f
Stokes law, 200, 272
Strain hardening, 174
Strain point, 271
Stress, 406–410, 407f
  tensor, 163f
Stretched exponential relaxation (SER), 397–399, 398f, 404f, 409–410
Strong liquids, 278
Structural relaxation, 2–3, 406–410
Substitutional defect, 137
Substitution of variables, 61, 78–79, 113–114, 403
Supercooled liquid, 235, 296–297
Superposition method, 62–63
Surface/grain boundary intersections, 202f
Surroundings, 19, 341


T
Taylor series, 230
Temperature dependence, 46–48
Temperature-dependent constraint models, 307
Ternary system, 87–88
Thermal/electrical conductivities
  Bardeen-Cooper-Schrieffer (BCS) theory, 505
  electrical conductivity, 504–505, 505f
  electron band theory, 505
  Fermi energy, 505
  negative temperature coefficient (NTC), 508–509, 508f
  positive temperature coefficient (PTC), 508–509, 508f
  superconductors, 505
  thermal conductivity, 502–504, 503f
  thermistors, 506–509, 508f–509f
  transport equations, 501–502
  varistors, 506–509, 508f
Thermal expansion, 8–9, 204–205, 249, 298, 357–358, 468, 470
Thermal history dependence, 299–300, 300f
Thermally activated atomic jumping, 110–112, 111f
Thermistors, 506–509
Thermocouple, 36
Thermodynamic barrier, 236
Thermodynamic driving force, 3
Thermodynamic implications, 357–360
Thermoelectric power, 32
Thermomagnetic effect, 37
Thermophoresis, 29
Thermostats, 433
Third law of thermodynamics, 13–15
Thixotropy, 290
Thomson effect, 33, 33f
Three-dimensional (volume) defects, 147
Time correlation function, 474–475
Time-dependent diffusivity, 78–79
Time-temperature-transformation diagram, 246–247, 247f
Titanium carbide derived-carbon (TiC-CDC), 436, 437f
Tool’s equation, 390–392
Topological constraint theory, 306–307
Tracer diffusion, 53–55
Transition points, 327–336
  activation relaxation technique, 335
  eigenvector-following technique, 334
  nudged elastic band, 335–336
Transition state theory, 363–364, 364f
Transport properties, 501
Two-dimensional (planar) defects, 147
Two-dimensional grain evolution, 185f
Two state model, 5

U
Unifying process, 357–358
Uphill diffusion, 217
  diffusion pathways, 94–96
  electron probe micro-analysis (EPMA), 94
  Fick’s first law, 93
  Ni-Fe-Co alloy system, 93, 93f
  secondary ion mass spectroscopy (SIMS), 94
Upper consolute temperature, 220

V
Vacancy mechanism, 129–130, 130f
Vapor deposition, 169–170
Vapor equilibrium, 2
Vapor transport, 178, 180, 181f, 192–193
Variational derivative, 227–228
Varistors, 506–509
Velocity Verlet algorithm, 428
Verlet algorithm, 427–428
Virial theorem, 434
Viscosity
  Adam-Gibbs entropy model, 276–278
  annealing point, 271
  Avramov-Milchev (AM) equation, 275
  beam-bending viscometry, 272
  bulk viscosity, 290
  definition, 269
  dilatancy, 290
  fiber elongation, 272
  fragile-to-strong transition, 287–288
  fragility index, 273
  glass transition temperature, 271
  infinite temperature limit, 284–286, 285f
  isokom temperature, 270
  Kauzmann paradox, 286–287
  liquid fragility, 272–274, 273f
  Mauro-Yue-Ellison-Gupta-Allan (MYEGA) equation, 279–284, 280f
  measurement techniques, 270f, 271–272, 271t
  melting point, 270
  Newton’s law, 269
  non-Newtonian viscosity, 288–290, 289f
  parallel plate viscometry, 272
  reference points, 270–271, 270f
  rotational viscometry, 272
  shear thickening, 290
  shear thinning, 290
  shear viscosity, 269
  softening point, 270
  strain point, 271
  thixotropy, 290
  Vogel-Fulcher-Tammann (VFT) equation, 274–275, 280f
  volume viscosity, 290
  Williams-Landel-Ferry (WLF) equation, 274
  working point, 270
  working range, 270
Viscosity reference points, 270–271, 270f
Visual Molecular Dynamics (VMD), 439


Vogel-Fulcher-Tammann (VFT) equation, 274–275
Volume relaxation, 396
Volume-temperature diagram, 295–296, 296f
Volume viscosity, 290
von Neumann-Mullins law, 185

W
Wang-Landau technique, 458
Wetting angle, 239–240
Williams-Landel-Ferry (WLF) equation, 274
Wilson-Frenkel theory, 244
Work hardening, 174
Working point, 270
Working range, 270

X
X-ray photoelectron spectroscopy (XPS), 52
X-window CRYstalline Structures and DENsities (XCrySDen), 439

Y
Young’s equation, 179, 239–240, 239f
Young’s modulus, 204–205, 214–215

Z
Zero-dimensional (point) defects, 147
Zero-temperature enthalpy landscape, 319–320
Zeroth law of thermodynamics, 15
ZnO ceramic system, 193–195
