Soft and Biological Matter Series Editors David Andelman, School of Physics and Astronomy, Tel Aviv University, Tel Aviv, Israel Wenbing Hu, School of Chemistry and Chemical Engineering, Department of Polymer Science and Engineering, Nanjing University, Nanjing, China Shigeyuki Komura, Department of Chemistry, Graduate School of Science and Engineering, Tokyo Metropolitan University, Tokyo, Japan Roland Netz, Department of Physics, Free University of Berlin, Berlin, Berlin, Germany Roberto Piazza, Department of Chemistry, Materials Science, and Chemical Engineering “G. Natta”, Polytechnic University of Milan, Milan, Italy Peter Schall, Van der Waals-Zeeman Institute, University of Amsterdam, Amsterdam, Noord-Holland, The Netherlands Gerard Wong, Department of Bioengineering, California NanoSystems Institute, UCLA, Los Angeles, CA, USA
“Soft and Biological Matter” is a series of authoritative books covering established and emergent areas in the realm of soft matter science, including biological systems spanning all relevant length scales from the molecular to the mesoscale. It aims to serve a broad interdisciplinary community of students and researchers in physics, chemistry, biophysics and materials science. Pure research monographs in the series, as well as those of more pedagogical nature, will emphasize topics in fundamental physics, synthesis and design, characterization and new prospective applications of soft and biological matter systems. The series will encompass experimental, theoretical and computational approaches. Topics in the scope of this series include but are not limited to: polymers, biopolymers, polyelectrolytes, liquids, glasses, water, solutions, emulsions, foams, gels, ionic liquids, liquid crystals, colloids, granular matter, complex fluids, microfluidics, nanofluidics, membranes and interfaces, active matter, cell mechanics and biophysics. Both authored and edited volumes will be considered.
More information about this series at http://www.springer.com/series/10783
Paola Gallo • Mauro Rovere
Physics of Liquid Matter
Paola Gallo Department of Mathematics and Physics Roma Tre University Roma, Italy
Mauro Rovere Department of Mathematics and Physics Roma Tre University Roma, Italy
ISSN 2213-1736 ISSN 2213-1744 (electronic) Soft and Biological Matter ISBN 978-3-030-68348-1 ISBN 978-3-030-68349-8 (eBook) https://doi.org/10.1007/978-3-030-68349-8 © Springer Nature Switzerland AG 2021 This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This Springer imprint is published by the registered company Springer Nature Switzerland AG The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Preface
This book introduces the reader to the fascinating field of liquid state physics. The liquid state occupies a restricted zone of the phase diagram, in coexistence with the solid and the gas phases. Apart from the special case of helium, the conditions of density and temperature are such that liquids can, under most circumstances, be considered as classical melts. The determination of liquid properties starting from microscopic models is a challenge for statistical mechanics, due to the disordered arrangement of the atoms and the high density. Impressive progress in the understanding of the liquid state has been achieved in recent years, mainly through the improvement of experimental techniques and the widespread use of computer simulations, made possible by the continuous development of computational methods. Numerical simulation is in fact no longer a mere support to theoretical studies; it can be considered a third relevant pillar besides approximate theoretical methods and experiments. Computer simulation has become a predictive methodology that stimulates experimental work. An important example is the study of regions of the phase space where metastable states are present. The experimental, theoretical and computational methods developed for studying the liquid state have recently been extended to research on materials of large technological relevance such as colloids, biomolecules and amorphous solids. In our opinion, we are at a turning point in the research in this field. At variance with the past, when it was possible to treat only very simplified microscopic models, in the future more sophisticated models will be introduced and studied with more refined techniques. For this reason we believe that liquid state physics will become of even greater interest for a large community of researchers in physics, chemistry and related disciplines. Our intention with this book is to provide an introduction to liquid state physics for graduate students and researchers and to fill the gap between books that cover basic skills in statistical mechanics and thermodynamics and very specialized books.
In the first chapter, we give an introduction to liquid state physics with some examples of phase diagrams. We discuss the conditions for the classical limit, describe the features of some types of fluids and present the methods used in experimental and theoretical studies in the field. Basic thermodynamic concepts are recalled in the second chapter, where the van der Waals theory is described and the statistical mechanics ensembles are introduced. In the third chapter, we introduce the distribution functions of liquids and the experimental methods used for determining the structure of liquids. Examples of simple and molecular liquids are presented. The fourth chapter is devoted to the theoretical approaches based on classical density functional theory. In Chap. 5, we present the computer simulation methods, molecular dynamics and Monte Carlo. We discuss in particular how the phase diagram can be studied in simulations with the use of different methods derived from the so-called umbrella sampling. The problem of studying critical phenomena in finite-size simulations is also introduced. Starting from Chap. 6, we consider the dynamics of liquids. In particular, in Chap. 6 the dynamical correlation functions are defined, and the linear response theory and the fluctuation-dissipation theorem are presented. The density correlation functions are considered, together with the experimental methods used to study them. In Chap. 7, we present the thermal motion in liquids. We consider the theory of Brownian motion and the dynamics of liquids in the hydrodynamic limit and in the visco-elastic regime. The memory functions are also introduced. In Chap. 8, we consider the important area of supercooled liquids, the glass transition and the relaxation dynamics upon supercooling. The complex behaviour of supercooled liquids approaching the glassy state is still a challenge in condensed matter research. In this respect, the mode coupling theory introduced in this chapter provides the only detailed theoretical interpretation of the dynamics of supercooled liquids. In the last chapter, as a particularly important subject in the research on supercooled liquids, we consider the behaviour of supercooled water, a topic to which we have given our own contribution. The anomalies of supercooled water are still an open field of research for experiments and statistical mechanics. We wish to thank the staff of the Springer editorial board for their precious help during the publication of this book.

Roma, Italy    Paola Gallo
Roma, Italy    Mauro Rovere
November 2020
Contents

1  An Introduction to the Liquid State of Matter
   1.1  Liquid State of Matter
        1.1.1  Examples of Phase Diagram of Pure Substances: CO2 and Water
        1.1.2  Phase Diagram of Binary Mixtures
   1.2  Structure and Dynamics of Liquids: Experiments and Correlation Functions
   1.3  Microscopic Models for Liquids
        1.3.1  Classical Approximation
        1.3.2  Different Models
   1.4  Potential Energy Landscape
   1.5  Approximate Theories and Computer Simulation
   1.6  Water and Hydrogen Bond
   1.7  Metastable States and Disordered Solid Matter
   1.8  Soft Matter
        1.8.1  Colloids
        1.8.2  Biomolecules
   References

2  Thermodynamics and Statistical Mechanics of Fluid States
   2.1  Extensive and Intensive Functions
   2.2  Energy and Entropy
   2.3  Gibbs-Duhem Relation
   2.4  Equilibrium Conditions
   2.5  Equilibrium Conditions and Intensive Quantities
   2.6  Macroscopic Response Functions and Stability Conditions
   2.7  Legendre Transforms and Thermodynamic Potentials
        2.7.1  Helmholtz Free Energy
        2.7.2  Gibbs Free Energy
        2.7.3  Enthalpy
        2.7.4  Grand Canonical Potential
        2.7.5  Tabulated Thermodynamic Potentials
   2.8  Stability Conditions for Thermodynamic Potentials
   2.9  Coexistence and Phase Transitions
   2.10 Phase Transitions and Their Classifications
   2.11 Van der Waals Equation
   2.12 General Form of the Van der Waals Equation and Corresponding States
   2.13 Critical Behaviour of the Van der Waals Equation
   2.14 Ensembles in Statistical Mechanics
        2.14.1  Microcanonical Ensemble
        2.14.2  Canonical Ensemble
        2.14.3  Grand Canonical Ensemble
        2.14.4  Isobaric-Isothermal Ensemble
   2.15 Fluctuations and Thermodynamics
   References

3  Microscopic Forces and Structure of Liquids
   3.1  Force Field for Atoms in Liquids
   3.2  Local Structure of a Liquid
   3.3  Distribution Functions in the Canonical Ensemble
   3.4  Relation of the RDF with Thermodynamics
        3.4.1  Energy
        3.4.2  Pressure from the Virial
   3.5  Distribution Functions in the Grand Canonical Ensemble
   3.6  Hierarchical Equations
   3.7  Qualitative Behaviour of the Radial Distribution Function
   3.8  Experimental Determination of the Structure of Liquids
   3.9  Neutron Scattering on Liquids
   3.10 Static Limit and the Structure of Liquid
   3.11 The Static Structure Factor
   3.12 The Structure Factor and the RDF of Liquid Argon
   3.13 The Structure Factor Close to a Critical Point
   3.14 Structure of Multicomponent Liquids
        3.14.1  Partial Structure Factor of Multicomponent Liquids
        3.14.2  Isotopic Substitution
        3.14.3  An Example: Molten Salts
   3.15 Structure of Molecular Liquids
        3.15.1  Structure of Liquid Water
   References

4  Theoretical Studies of the Structure of Liquids
   4.1  Virial Expansion in the Canonical Ensemble
        4.1.1  From Hard Spheres to the Van der Waals Equation
   4.2  The Mean Force Potential
   4.3  Kirkwood Approximation
   4.4  Radial Distribution Function from the Excess Free Energy
   4.5  Density Distributions from the Grand Partition Function
   4.6  Grand Potential as Generating Functional
   4.7  Classical Density Functional Theory
        4.7.1  Equilibrium Conditions
        4.7.2  The Ornstein-Zernike Equation
        4.7.3  The Ornstein-Zernike Equation in k-Space
        4.7.4  Free Energy Calculation
        4.7.5  Expansion from the Homogeneous System
   4.8  Closure Relations from the Density Functional Theory
   4.9  An Exact Equation for the g(r)
   4.10 HNC and Percus-Yevick Approximations
        4.10.1  RPA and MSA
   4.11 Properties of the Hard Sphere Fluid
   4.12 Equation of State and Liquid-Solid Transition of Hard Spheres
   4.13 Percus-Yevick for the Hard Sphere Fluid
   4.14 Equation of State and Thermodynamic Inconsistency
   4.15 Routes to Consistency: Modified HNC and Reference HNC
   4.16 Perturbation Theories: Optimized RPA
   4.17 Models for Colloids
   References

5  Methods of Computer Simulation
   5.1  Molecular Dynamics Methods
        5.1.1  Molecular Dynamics and Statistical Mechanics
        5.1.2  Algorithms for the Time Evolution
        5.1.3  Predictor/Corrector
        5.1.4  Verlet Algorithms
        5.1.5  Calculation of the Forces
        5.1.6  Initial Configuration
        5.1.7  Temperature in the Microcanonical Ensemble
        5.1.8  Equilibration Procedure
        5.1.9  Thermodynamic and Structure
        5.1.10 Long-Range Corrections
        5.1.11 Ewald Method
   5.2  Monte Carlo Simulation
        5.2.1  Monte Carlo Integration and Importance Sampling
        5.2.2  Integrals in Statistical Mechanics
        5.2.3  Importance Sampling in Statistical Mechanics
        5.2.4  Markov Processes
        5.2.5  Ergodicity and Detailed Balance
        5.2.6  Metropolis Method
        5.2.7  Averaging on Monte Carlo Steps
        5.2.8  MC Sampling in Other Ensembles
        5.2.9  MC in the Gibbs Ensemble
   5.3  MD in Different Ensembles
        5.3.1  Controlling the Temperature: MD in the Canonical Ensemble
        5.3.2  Pressure Control
   5.4  Molecular Liquids
   5.5  Microscopic Models for Water
   5.6  Some More Hints
   5.7  Direct Calculations of the Equation of State
   5.8  Free Energy Calculation from Thermodynamic Integration
   5.9  An Example: Liquid-Solid Transition
   5.10 Calculation of the Chemical Potential: The Widom Method
   5.11 Sampling in a Complex Energy Landscape
   5.12 Umbrella Sampling
   5.13 Histogram Methods
   5.14 Free Energy Along a Reaction Coordinate
        5.14.1  Umbrella Sampling for Reaction Coordinates
        5.14.2  Metadynamics
   5.15 Simulation of Critical Phenomena
   References

6  Dynamical Correlation Functions and Linear Response Theory for Fluids
   6.1  Dynamical Observables
   6.2  Correlation Functions
        6.2.1  Further Properties of the Correlation Functions
   6.3  Linear Response Theory
   6.4  Dynamical Response Functions
   6.5  Fluctuation-Dissipation Theorem
   6.6  Response Functions and Dissipation
   6.7  Density Correlation Functions and Van Hove Functions
   6.8  Neutron Scattering to Determine the Liquid Dynamics
   6.9  Dynamic Structure Factor
        6.9.1  Static Limit
        6.9.2  Incoherent Scattering
   6.10 Density Fluctuations and Dissipation
        6.10.1  Detailed Balance
   6.11 Static Limit of the Density Fluctuations
   6.12 Static Response Function and the Verlet Criterion
   References

7  Dynamics of Liquids
   7.1  Thermal Motion in Liquids
   7.2  Brownian Motion and Langevin Equation
   7.3  Diffusion and Self Van Hove Function
   7.4  Limit of the Dilute Gas
   7.5  Short Time Expansion of the Self-Intermediate Scattering Function
   7.6  Correlation Functions of the Currents
   7.7  The Hydrodynamic Limit
   7.8  Diffusion in the Hydrodynamic Limit
   7.9  Velocity Correlation Function
   7.10 Liquid Dynamics in the Hydrodynamic Limit
        7.10.1  Transverse Current
        7.10.2  Equations Under Isotherm Conditions, Longitudinal Current and Sound Waves
        7.10.3  Longitudinal Current in Presence of Thermal Diffusion and Brillouin Scattering
   7.11 Different Regimes for the Liquid Dynamics: The De Gennes Narrowing
   7.12 Introduction of Memory Effects
        7.12.1  The Langevin Equation and Memory Effects
        7.12.2  Viscoelasticity: The Maxwell Model
        7.12.3  Generalized Viscosity and Memory Effects
   7.13 Definition of Memory Functions
   7.14 Memory Function for the Velocity Correlation Function
   References

8  Supercooled Liquids: Glass Transition and Mode Coupling Theory
   8.1  Phase Transitions and Metastability of Liquids
   8.2  Liquids Upon Supercooling: From the Liquid to the Glass
   8.3  Angell Plot
   8.4  Kauzmann Temperature
   8.5  Adam and Gibbs Theory
        8.5.1  Cooperative Rearranging Regions
        8.5.2  Calculation of the Configurational Entropy
   8.6  Energy Landscape and Configurational Entropy
   8.7  Dynamics of the Supercooled Liquid and Mode Coupling Theory
        8.7.1  Dynamics Upon Supercooling
        8.7.2  Mode Coupling Theory and Cage Effect
        8.7.3  Formulation of the Theory
        8.7.4  Glass Transition as Ergodic to Non-ergodic Crossover
        8.7.5  The β-Relaxation
        8.7.6  α-Relaxation
   References

9  Supercooled Water
   9.1  Supercooled and Glassy Water
   9.2  The Hypothesis of a Liquid-Liquid Critical Point
   9.3  The Widom Line at the Liquid-Liquid Transition
   9.4  Water as a Two-Component Liquid
   9.5  Dynamical Properties of Water Upon Supercooling
   9.6  Widom Line and the Fragile to Strong Crossover
   References

Index
Acronyms

AG       Adam and Gibbs
BBGKY    Bogoliubov-Born-Green-Kirkwood-Yvon
BO       Born-Oppenheimer
CP       Critical point
CRR      Cooperative rearranging region
CS       Carnahan-Starling
CV       Collective variable
DWF      Debye-Waller factor
ENIAC    Electronic Numerical Integrator and Computer
EOS      Equation of state
FERMIAC  Fermi Analog Computer
FSC      Fragile to strong crossover
FSSC     Finite size scaling
GCMC     Grand canonical Monte Carlo
HB       Hydrogen bond
HNC      Hypernetted chain
HS       Hard spheres
KWW      Kohlrausch-Williams-Watts
LJ       Lennard-Jones
LT       Laplace Transform
LJBM     Lennard-Jones binary mixture
MANIAC   Mathematical Analyzer, Numerical Integrator and Computer
MC       Monte Carlo
MCT      Mode coupling theory
MD       Molecular dynamics
MHNC     Modified hypernetted chain
MSA      Mean spherical approximation
NPT      Number pressure temperature
NVT      Number volume temperature
OCT      Optimized cluster theory
ORPA     Optimized random phase approximation
OZ       Ornstein-Zernike
PBC      Periodic boundary conditions
PEL      Potential energy landscape
PME      Particle mesh Ewald
PMF      Potential mean force
PY       Percus-Yevick
RDF      Radial distribution function
RHNC     Reference hypernetted chain
RPA      Random phase approximation
SISF     Self-intermediate scattering function
SPC      Simple point charge
SPC/E    Simple point charge extended
ST2      Symmetrical tetrad 2
TB       Thermal bath
TIP4P    Transferable intermolecular potential 4 points
TP       Triple point
WCA      Weeks-Chandler-Andersen
VCF      Velocity correlation function
VdW      Van der Waals
WHAM     Weighted histogram analysis method
Chapter 1
An Introduction to the Liquid State of Matter
The physics of the liquid state of matter is an important field of application of statistical mechanics. It consists of the study of the connection between the macroscopic properties of liquid matter and its behaviour at the atomic level. In this chapter, we give a short introduction to the main features of liquid matter with some examples, and we discuss how statistical mechanics methods can be applied to study these systems with the use of appropriate models.
1.1 Liquid State of Matter

The phase diagram of a monocomponent system is represented in the three-dimensional plot of Fig. 1.1 in the pressure, volume and temperature space. The different phases and the regions of coexistence are visible. The liquid phase is an equilibrium state of matter in a limited zone of the thermodynamic space, in coexistence with the solid phase in the lower volume region and with the gas phase on the higher volume side. According to the Gibbs phase rule for monocomponent systems, there exists a line of triple points (TP) where solid, liquid and gas have the same pressure and temperature with different volumes [1, 2]. The coexistence curves of the liquid with the solid and with the gas start from the TP line. The bell-shaped curve represents the liquid-gas coexistence. The three phases of matter are characterized by different macroscopic properties that are manifestations of the differences in the microscopic behaviour. The gas phase at coexistence with the liquid or the solid phase is sometimes called vapour. By vapour one usually means a gas with the presence of liquid droplets or nuclei of the solid phase. This is the case of water vapour, in particular in clouds, where small ice particles are also present. Usually nucleation processes are induced by impurities. In the following we will not make the distinction between gas and vapour, since gas is one of the equilibrium states of matter and a more general concept than vapour.
Fig. 1.1 Pictorial representation of the phase diagram of a generic monoatomic substance in the three-dimensional space: pressure, volume and temperature
Fig. 1.2 Phase diagram in the pressure-volume plane with the coexisting solid, liquid and gas phases. The grey regions represent the zone of coexistence, where the system is not in equilibrium. The change of volume across the different transitions is evident
In Fig. 1.2 the coexistence regions are projected onto the pressure-volume plane. In Fig. 1.1 the coloured zones are non-equilibrium regions of coexistence between two phases: liquid-gas (grey), liquid-solid (cyan) and solid-gas (light green). Looking at the liquid-solid coexistence, it is observed that the solid-liquid (melting) or the liquid-solid (freezing) transition takes place at any pressure with a change of volume. Also the solid-gas transition, sublimation, is always accompanied by a change of volume. Along the bell-shaped curve, liquid and gas coexist at the same pressure with different volumes. The difference in volume decreases at increasing compression, and it disappears at the critical point (CP), so that above it the system is in a single fluid phase.
Fig. 1.3 Example of isotherms close to the liquid-gas transition. The temperature increases from below. The black points indicate the liquid-gas coexistence. The position of the critical point is also indicated
We can look in more detail at the isotherms, as reported in Fig. 1.3 for increasing temperatures from below. Along an isotherm in the gas phase, the volume decreases at increasing pressure. The curve reaches a point where the gas cannot be compressed further, and it is transformed into liquid across the plateau, with a change of volume and a latent heat. On the liquid side, the volume decreases more rapidly upon compression. The initial and final points of each plateau represent the coexistence points of the gas and the liquid phases, respectively. The two points finally converge into the single critical point, above which the distinction between the two phases disappears. The isotherms above CP describe the behaviour of the supercritical fluid state. As we will explain in other parts of the book, the region of approach to the critical point is characterized by large fluctuations of the density. The phenomenon of critical opalescence takes place when the length scale of the fluctuations becomes comparable to the wavelength of light. Due to the scattering of the light, the fluid is no longer transparent and becomes cloudy [2].
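The qualitative behaviour of the isotherms in Fig. 1.3 can be visualized with the van der Waals equation of state, which is discussed in Chap. 2. The following Python sketch is only an illustration, not taken from the book, and uses the standard reduced form of the equation.

```python
import numpy as np

def vdw_reduced_pressure(v_r, t_r):
    """Van der Waals pressure in reduced units:
    p_r = 8*t_r / (3*v_r - 1) - 3 / v_r**2, valid for v_r > 1/3."""
    return 8.0 * t_r / (3.0 * v_r - 1.0) - 3.0 / v_r**2

# Reduced volumes and a few reduced temperatures around the critical one (t_r = 1).
v_r = np.linspace(0.5, 5.0, 400)
for t_r in (0.85, 0.95, 1.0, 1.1):
    p_r = vdw_reduced_pressure(v_r, t_r)
    # Below t_r = 1 the isotherm develops a loop, signalling liquid-gas coexistence;
    # at t_r = 1 it has an inflection at the critical point (v_r = 1, p_r = 1).
    print(f"t_r = {t_r}: p_r at the critical volume is {vdw_reduced_pressure(1.0, t_r):.3f}")
```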
1.1.1 Examples of Phase Diagram of Pure Substances: CO2 and Water

As an example of the equilibrium coexistence curves between the three equilibrium phases, we show in Fig. 1.4 the case of carbon dioxide in the pressure-temperature plane. It is well known that CO2 is important in the process of photosynthesis of plants. It is produced in different chemical processes in nature and in human activities; see https://pubchem.ncbi.nlm.nih.gov/compound/.
Fig. 1.4 Phase diagram of carbon dioxide, from https://commons.wikimedia.org/wiki/File:Carbondioxidepressure-temperaturephasediagram.svg
In recent times there has been an increasing interest in the study of CO2, since it is considered an important greenhouse gas with effects on global warming. Looking at the phase diagram, we note that to obtain the solid phase at ambient pressure one needs to go to a very low temperature: the sublimation point at 1 bar is around −78 °C. The liquid phase is obtained at sufficiently high pressure. Liquid or solid CO2 is used as a refrigerant. The supercritical region is of interest for recent applications as a solvent and in the pharmaceutical industry. Among molecular fluids, water is the most important for our life [3]. Water is also a special case due to its very peculiar phenomenology. Its phase diagram is shown in Fig. 1.5. Water shows numerous anomalies with respect to other liquids. The most well known is that ice floats on liquid water, because at melting it has a lower density than the liquid phase. At ambient pressure (100 kPa), water crystallizes in the well-known hexagonal ice phase Ih. Depending on pressure and temperature, water crystallizes in different forms of ice, as shown in the phase diagram. For a recent review of the research on the different forms of ice, see C. G. Salzmann [4]. The properties of water are closely connected to its molecular structure, as we will discuss below in Sect. 1.6, where we will show the differences between the liquid and the ice microscopic structures. We will give more details about the structure of liquid water in Chap. 3, in particular in Sect. 3.15.1. Besides the equilibrium phases, metastable states of water are also of great interest, in particular the supercooled states, where water remains liquid below the melting line [5]. We will consider supercooled and glassy states of water in Chap. 9.
Fig. 1.5 Phase diagram of water, from https://commons.wikimedia.org/wiki/File:Phasediagramofwater.svg. The Roman numerals indicate different crystalline states

Table 1.1 Thermodynamic data of water [8]

                            Temperature   Pressure
Critical point              647 K         22.064 MPa
Triple point                273.16 K      611.66 Pa
Freezing point at 1 atm     273.15 K      101.325 kPa
Boiling point at 1 atm      373.15 K      101.325 kPa
As in the case of CO2, the supercritical region of water is also of great relevance for applications in the extraction of coal, in waste disposal and in many geochemical processes [6, 7]. Some thermodynamic data of water are collected in Table 1.1.
1.1.2 Phase Diagram of Binary Mixtures

Mixtures of fluids show a more complex phase diagram, and different phase transitions can take place. We consider here, as an example, the case of a binary mixture in the limit of ideal behaviour.
Fig. 1.6 Example of a solid-liquid phase coexistence of a binary mixture. The liquidus curves are the limit of the pure liquid phase. Below the solidus line the whole system is solid. The black point, the eutectic, represents the liquid-solid coexistence
If we indicate with A and B the two species of the mixture, the total number of particles is N = N_A + N_B. We can define the concentration of the species B as

x = N_B / (N_A + N_B) ,    (1.1)

and x is a further parameter in the thermodynamic space. In the case of the liquid-solid coexistence, a typical phase diagram in the T-x plane is represented in Fig. 1.6. Above the melting temperatures of the two components, T_m,A and T_m,B, the mixture is in the liquid phase for any concentration. By crossing the so-called liquidus curve on the left, the solid phase of species A starts to form in coexistence with the liquid. Across the liquidus on the right, the solid phase of species B appears in the liquid. Below the solidus line the whole mixture becomes solid. The black point, called the eutectic, is the point of coexistence of the mixture in the pure liquid phase with the pure solid phase. An example of the liquid-gas coexistence is given in Fig. 1.7. The mixture of benzene (component A) and toluene (component B) is almost ideal; see Chapter 13 in ref. [9]. The phase diagram is reported as T as a function of the concentration of toluene at p = 1 bar. T_A and T_B represent the boiling points of components A and B, respectively, at the given pressure. Of course the orientation of the curves depends on the values of T_A and T_B; in this example benzene is the more volatile component. The behaviour is in agreement with Raoult's law, valid for an ideal solution. The space delimited by the two curves is the region of coexistence between liquid and gas. The lower curve is called the bubble point curve; it collects the points where the liquid starts to boil at different concentrations. The other is the dew point curve, where the saturated vapour starts to condense. Starting from the liquid at a given concentration and increasing the temperature up to the point a, the system in the liquid phase coexists with a vapour with a lower concentration of toluene (the B species), the point b. Distillation processes are based on this type of phenomenon.
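Since the benzene-toluene mixture is described here as ideal, the compositions of the coexisting liquid and vapour follow directly from Raoult's law. The short Python sketch below is not taken from the book; the pure-component vapour pressures it uses are assumed illustrative values at some fixed temperature.

```python
def raoult_equilibrium(x_B, p_star_A, p_star_B):
    """Ideal binary mixture at fixed temperature (Raoult's law).

    x_B       : mole fraction of component B in the liquid
    p_star_A  : vapour pressure of pure A at this temperature
    p_star_B  : vapour pressure of pure B at this temperature
    Returns the total pressure and the mole fraction of B in the vapour.
    """
    x_A = 1.0 - x_B
    p_total = x_A * p_star_A + x_B * p_star_B     # Raoult's law
    y_B = x_B * p_star_B / p_total                # vapour-phase mole fraction (Dalton)
    return p_total, y_B

# Illustrative numbers only (not data from the text): A is the more volatile
# component, so the vapour is depleted in B with respect to the liquid.
p_tot, y_B = raoult_equilibrium(x_B=0.6, p_star_A=1.8, p_star_B=0.7)  # pressures in bar
print(f"total pressure = {p_tot:.2f} bar, vapour mole fraction of B = {y_B:.2f}")
```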
Fig. 1.7 Vapour-liquid coexistence at p = 1 bar for the mixture of benzene and toluene. The two curves practically reproduce the limit of an ideal binary mixture. The broken line represents the process described in the text, as an example: by heating the liquid at the concentration x = 0.6 of toluene, the system reaches the coexistence curve; across the gap, the gas phase is obtained with a lower concentration of toluene

Fig. 1.8 Example of a liquid-liquid coexistence curve for a binary mixture of methyl acetate with carbon disulphide at 1 bar. The red point is the critical point of the transition. The points a and b are coexisting phases with different concentrations
An important and peculiar phenomenon that can occur in binary mixtures is the presence of a miscibility gap. For some binary mixtures there is a range of concentrations in which the homogeneous liquid mixture is not stable, and a miscibility gap enclosed by a coexistence curve appears. The curve separates two coexisting liquid phases with different concentrations, and it terminates in a critical point, where critical opalescence is observed. This is called a liquid-liquid transition. It is represented in Fig. 1.8 for the case of a mixture of methyl acetate with carbon disulphide at a pressure of 1 bar; see Chapter 13 in ref. [9].
Above the critical point, also called the consolute point, the liquid mixture is stable at any concentration, but below it we enter a two-phase region. Two phases of the liquid coexist at different concentrations, like the points a and b along the curve.
1.2 Structure and Dynamics of Liquids: Experiments and Correlation Functions

As evident from the phase diagram of a one-component substance, at high pressure and temperature the system is in a fluid state, and below CP the fluid state splits into the gas and the liquid phases. Both phases coexist at the same pressure and temperature with different densities. The distinction between liquid and gas is essentially due to the density. The difference in density is the reason why a liquid under the effect of gravity settles, while a gas is almost insensitive to the gravitational attraction. While it is easy to distinguish gases and liquids macroscopically, the two fluid phases can be very similar when they are studied at the microscopic level in some regions of the phase space. All the fluid phases are characterized by a homogeneous and isotropic density. The atoms that are the constituents of matter are arranged without any order in the fluid phases, while solid crystals are characterized by the translational order of the atomic positions in the lattice. So the transition from a fluid to a crystal phase implies a change of symmetry, from a homogeneous to a spatially ordered structure. The main issues to consider in the study of the liquid state [10–12] are:

(a) thermodynamic and structural properties
(b) dynamical and transport properties

The structure of liquids is studied with the same methodologies used for crystals, mainly X-ray and neutron scattering. X-ray diffraction has been used for a long time to study the structure of liquids [13]. X-rays have wavelengths of the order of the interatomic distances, and their energies allow elastic diffraction. The great difference of liquids with respect to crystals is the absence of order. The Bragg diffraction pattern of a crystal consists of a series of regular spots that are related to the crystal symmetry. The diffraction pattern of liquids, instead, is formed by broad rings that reveal a short-range order without any regular spot. The crystalline periodic structure is replaced by a short-range order extending over a range of approximately 0.1–0.2 nm. Neutron spectroscopy is also an important technique for determining structural and dynamical properties of liquids [14]. Neutrons are accelerated and then moderated in order to reach the right energy range. The neutrons interact with the nuclei of the atoms in the liquid; they undergo inelastic scattering and measure the density fluctuations of the system.
Neutron scattering can thus be used to study the dynamics of liquids, in analogy with the study of phonons in crystals. For liquids it is more difficult to extract the dispersion relation, the relation between the frequency of the modes and the wave vector exchanged in the scattering. Neutron spectroscopy is particularly important for water and for substances containing hydrogen. The intensity of X-ray diffraction from hydrogen is very small compared with that from other atoms. Instead, the cross section related to the single-particle dynamics (incoherent cross section) of a hydrogen nucleus is an order of magnitude higher than the cross sections of the other nuclei. Therefore neutrons can detect the motion of the hydrogen atoms, while the hydrogens are practically invisible to X-ray scattering. In Chap. 3 we will describe in more detail how the information about the structure can be extracted from X-ray and from neutron scattering in the elastic limit. We will discuss the application of neutron scattering to the dynamical properties of liquids in Chap. 6. To study the density fluctuations in the very long wavelength limit, light scattering techniques can also be used. From a more general point of view, the experimental techniques introduced above make it possible to study the correlation functions. These functions play a fundamental role in the study of fluid systems. They represent how the fluctuations of a certain quantity at a given point in space, at a given time, are connected to the fluctuations of another quantity (or of the same one) at another point in space, at a different time. It is particularly relevant to study the correlation functions of the density fluctuations. In the static limit, they contain information on the structural arrangement, while the time-dependent density correlation functions are connected to transport properties like diffusion, viscosity and thermal diffusion. We will introduce the correlation functions in Chap. 6 and their application to the dynamics of liquids in Chap. 7.
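As an illustration of what a time correlation function measures, the following Python sketch (not part of the book) estimates the normalized autocorrelation of the fluctuations of a sampled quantity A(t), i.e. ⟨δA(0)δA(t)⟩/⟨δA²⟩, which is the discrete analogue of the functions defined in Chap. 6; the signal used here is a toy exponentially correlated noise.

```python
import numpy as np

def autocorrelation(a, max_lag):
    """Normalized autocorrelation of the fluctuations of a sampled signal a(t):
    C(t) = <dA(0) dA(t)> / <dA^2>, with dA = a - <a>."""
    da = a - a.mean()
    var = np.mean(da * da)
    c = np.empty(max_lag)
    for lag in range(max_lag):
        c[lag] = np.mean(da[: len(da) - lag] * da[lag:]) / var
    return c

# Toy signal with a built-in correlation time of about tau = 20 steps,
# so C(t) should decay roughly as exp(-t/tau).
rng = np.random.default_rng(0)
tau, n = 20.0, 20000
a = np.empty(n)
a[0] = 0.0
for i in range(1, n):
    a[i] = a[i - 1] * np.exp(-1.0 / tau) + rng.normal()

c = autocorrelation(a, max_lag=100)
print(c[:5])  # starts at 1 and decays with the correlation time of the signal
```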
1.3 Microscopic Models for Liquids

Structural and dynamical properties of liquids are now available from experiments in a large portion of the phase space and for a large number of systems. The interpretation of the data in terms of the behaviour at the atomic level is still a major challenge for statistical mechanics methods [11, 12, 15]. At the typical thermodynamic conditions of the liquid phase, the systems are far both from a low-density gas and from a harmonic crystal, two well-established models of statistical mechanics. From the theoretical point of view, the interpretation of the phenomenology of the different systems is based on a reductionist approach. The first step of this approach is to identify the elementary constituents; they could be single atoms or aggregates of atoms. Then the force field between the constituents must be deduced. The structural and dynamical properties are determined by the interaction between the nuclei and the electrons that make up the system.
However, the full quantum problem cannot be solved exactly. In the Born-Oppenheimer approximation, the system is usually studied with a virtual model where the constituent particles interact through an effective potential. Apart from the important exception of helium, which remains liquid even near absolute zero, for the other systems, in the range of temperature and density where the fluid phases are stable, the classical approximation can be applied with satisfactory results.
1.3.1 Classical Approximation

To discriminate between quantum and classical systems, we have to consider the uncertainty principle for the spatial position of a particle,

Δx · Δp_x ∼ ħ ,    (1.2)

where the momentum p_x is related to the energy. If the mean energy associated with the particle is of the order of k_B T, with k_B as Boltzmann's constant, then Δp_x ≈ √(m k_B T). So we can estimate Δx as Δx ≈ ħ/√(m k_B T). A system can be considered classical if the Δx associated with the particles is much smaller than the characteristic physical lengths of the problem, represented by the average distance r̄ [16]. If r̄ ≫ Δx, we can neglect the quantum interference effects, and the position of the particle is well defined. To be more precise, we can define the thermal de Broglie wavelength Λ = h/p̄ as

Λ = √( h² / (2π m k_B T) ) .    (1.3)

The average distance to compare with Λ can be estimated from the density ρ by assuming that a sphere of radius r̄ contains approximately one particle,

r̄ = (3/4π)^{1/3} ρ^{−1/3} .    (1.4)
If Λ ≥ r̄, the classical approximation is not valid. It is clear that it is the combination of mass, temperature and density that determines the range of validity of the approximation. In Table 1.2 we show some examples of simple and molecular liquids. We report in the table the thermodynamic parameters of a state point such that the systems are in the liquid phase, not far from the triple point. The last two lines show that He, as is well known, is a quantum fluid and that liquid H2, at least in this region of the phase diagram, cannot be considered a classical fluid.
Table 1.2 Thermal de Broglie wavelength and mean distance in different liquids, calculated at the indicated pressure and temperature. He is in the normal liquid state. Data taken from the NIST Chemistry WebBook [8]

       p (bar)   T (K)    Λ (nm)   r̄ (nm)
Ar     0.8       84.0     0.03     0.2
Ne     1.0       25.0     0.078    0.18
N2     40.0      65.0     0.041    0.23
H2O    1.0       300.0    0.024    0.19
CO2    10.0      220      0.018    0.25
CH4    1.0       92.0     0.045    0.24
H2     0.5       14.0     0.33     0.22
He     0.3       2.8      0.52     0.22
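A quick way to reproduce the comparison made in Table 1.2 is to evaluate Eqs. (1.3) and (1.4) numerically. The Python sketch below is only an illustration; the number density used for liquid argon is an assumed value taken as typical of the liquid near its triple point, not a number quoted in the text.

```python
import math

# Physical constants (SI units)
H = 6.62607015e-34       # Planck constant, J s
KB = 1.380649e-23        # Boltzmann constant, J/K
AMU = 1.66053906660e-27  # atomic mass unit, kg

def thermal_wavelength(mass_kg, temperature_K):
    """Thermal de Broglie wavelength, Eq. (1.3), in metres."""
    return math.sqrt(H**2 / (2.0 * math.pi * mass_kg * KB * temperature_K))

def mean_distance(number_density_m3):
    """Average interparticle distance, Eq. (1.4), in metres."""
    return (3.0 / (4.0 * math.pi))**(1.0 / 3.0) * number_density_m3**(-1.0 / 3.0)

# Liquid argon near its triple point (T = 84 K); the number density
# (about 2.1e28 m^-3) is an assumed illustrative value.
m_ar = 39.948 * AMU
lam = thermal_wavelength(m_ar, 84.0)
rbar = mean_distance(2.1e28)
print(f"Ar: Lambda = {lam*1e9:.3f} nm, rbar = {rbar*1e9:.2f} nm, classical: {lam < rbar}")
```

Running the sketch gives Λ ≈ 0.030 nm and r̄ ≈ 0.22 nm, close to the values listed for argon in Table 1.2, so the classical criterion Λ ≪ r̄ is well satisfied.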
1.3.2 Different Models

Generally, the systems can be classified according to the type of effective forces acting between the atoms. Which microscopic model is to be used for representing the various systems is the first challenging problem. We will discuss this point in more detail in Chap. 3. It is usual to consider a system as simple if it is possible to find a two-body effective potential that can be used to reach a good agreement between theory and experiment. Generally speaking, closed-shell atoms, the noble gases, are easier to treat. The electronic charge distribution is usually spherical, and the potential can be represented as a combination of a van der Waals attraction and a short-range repulsion due to the Pauli principle. For this kind of system, the potential depends on a few parameters, and it is independent of the region of the phase space. Metallic systems in the solid phase remain metallic also in the liquid phase. Liquid metals are an important class of materials, and they have been studied for a long time [17–19] due to their relevance in different technological applications. In spite of the presence of the conduction electrons, at least for monatomic alkali metals it is possible to use two-body potentials in which the presence of the electrons is indirectly taken into account. However, at melting some of the electronic properties change; Fig. 1.9 shows how the resistivity of sodium increases with a jump at the melting temperature [20]. On approaching the melting, the modification of the order in the crystal decreases the mobility of the electrons, with a jump to a higher resistivity across the melting. This effect cannot be reproduced with simple effective potentials. For systems with covalent bonds, like semiconductors, the bond directionality plays an important role, and a two-body approximation for the potential could be inappropriate. Three-body potentials can give better approximations. Semiconductors, like silicon and germanium, become poor conductors in the liquid phase; in these systems the electronic structure shows a strong change across the melting. Ionic liquids or molten salts are ionic conductors in the liquid phase. In spite of this change, simple ionic systems, like NaCl, can be represented with two-body potentials derived from the solid state. A special category is constituted by molecular liquids. In the fluid phases the atoms usually preserve their bonds, so that the constituent units are molecules.
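For the closed-shell atoms mentioned above, the combination of a van der Waals attraction and a short-range repulsion is most often modelled with the Lennard-Jones form (the LJ potential listed among the acronyms). The Python sketch below is only an illustration; the parameter values are typical literature values for argon and are not taken from the text.

```python
import numpy as np

def lennard_jones(r_nm, epsilon_kJmol=0.996, sigma_nm=0.340):
    """Lennard-Jones pair potential u(r) = 4*eps*[(sigma/r)^12 - (sigma/r)^6].

    The r^-12 term mimics the short-range Pauli repulsion, the r^-6 term the
    van der Waals attraction. Default parameters are typical literature
    values for argon (illustrative only)."""
    sr6 = (sigma_nm / r_nm) ** 6
    return 4.0 * epsilon_kJmol * (sr6 * sr6 - sr6)

r = np.linspace(0.32, 1.0, 200)    # interatomic distances in nm
u = lennard_jones(r)
r_min = r[np.argmin(u)]            # minimum at r = 2^(1/6)*sigma, about 0.38 nm
print(f"well depth ≈ {u.min():.3f} kJ/mol at r ≈ {r_min:.3f} nm")
```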
Fig. 1.9 Resistivity of solid (red curve) and liquid (black curve) sodium as a function of temperature, calculated with the fitting formula of ref. [20]
For molecular liquids, in order to build an appropriate force field it is necessary to know the electronic charge shared by the bound atoms and distributed around them. In the case of linear molecules or molecules with an almost spherical distribution of the electronic charge, like CO2, CH4 or N2, it is possible to assume potentials with a relatively simple form.
1.4 Potential Energy Landscape

The main reasons for the difficulties encountered in the study of liquids or dense fluids derive from the fact that in these systems, because of the high density compared to dilute gases, there are frequent collisional processes and a strong correlation between the particles. On the other hand, the interactions are not strong enough to stabilize the system in a phase with configurational order. A consequence of the disordered structural arrangement of the atoms in fluid matter is the complexity of the so-called potential energy landscape. The potential energy landscape (PEL) is defined as the surface generated by the potential energy of the system in the multidimensional space of the particle coordinates [21]. It is independent of the temperature. In Fig. 1.10, a portion of a possible PEL is presented. We observe the presence of minima separated by barriers. At different temperatures, the system explores different portions of the PEL. Since the whole statistical mechanics approach for equilibrium systems is based on ergodicity, it is assumed that the particles are not trapped in any portion of the phase space. The presence of a complex PEL with different basins and barriers could affect the ergodicity. In research fields that study biomolecules, chemical reactions and glassy states of matter, the determination of the PEL has become a central problem for performing predictive calculations and for the interpretation of the experimental results.
Fig. 1.10 Schematic portion of a potential energy landscape (potential energy versus configurational coordinates). The potential energy surface is cut along configurational coordinates
1.5 Approximate Theories and Computer Simulation

The calculation of observable quantities with analytical methods requires solving equations that connect the microscopic potential to correlation functions. To solve those equations, it is necessary to make approximations. The partition function or the free energy is usually expanded in density-dependent terms. The low density limit is valid for dilute gases, but at increasing density the contributions of the neglected terms cannot be estimated. Different approaches have been proposed since the pioneering work of Mayer and Montroll [15, 22].

Exact methods of numerical calculation were implemented just after the Second World War: the methods of computer simulation. Starting from the second half of the twentieth century, these methods have been refined following the development of computer technology. Computer simulation now plays a major role in studies of fluid matter. Virtual systems of atoms interacting with effective potentials can be simulated with enough accuracy to make the computer calculations similar to an experiment performed on the assumed model. It is now possible to build a phenomenology of the model that can be very predictive [23, 24].

For systems for which it is very difficult to find a good simple effective potential, computer simulation methods called ab initio have been developed in recent years [25]. In these methods the effective interaction between the atoms is obtained in the course of the simulation, where the electrons are also included; classical and quantum degrees of freedom are treated on the same footing. We will introduce the theoretical methods used to treat liquid matter in Chap. 4. The computer simulation methods will be discussed in Chap. 5.
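As a foretaste of the simulation methods of Chap. 5, the following Python fragment (an illustrative sketch, not the book's code) advances a set of particles by one time step of the velocity Verlet algorithm, the integrator most commonly used in classical molecular dynamics; the force routine is left generic and all names are placeholders.

```python
import numpy as np

def velocity_verlet_step(positions, velocities, forces, masses, dt, force_func):
    """Advance one molecular-dynamics time step with the velocity Verlet scheme.

    positions, velocities, forces : arrays of shape (N, 3)
    masses : array of shape (N, 1); dt : time step
    force_func : callable returning the forces for a given set of positions
    """
    # Half-step velocity update, then full position update
    velocities = velocities + 0.5 * dt * forces / masses
    positions = positions + dt * velocities
    # Recompute forces at the new positions and finish the velocity update
    forces = force_func(positions)
    velocities = velocities + 0.5 * dt * forces / masses
    return positions, velocities, forces
```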
1.6 Water and Hydrogen Bond

We have shown before the complex phase diagram of water. Due to its relevance in our life, we discuss in more detail this very peculiar molecular fluid. The properties of water are mainly due to the presence of the hydrogen bond (HB) between the molecules.

To explain how the hydrogen bond connects two molecules, we have to consider how the molecule is formed. The oxygen atom has six electrons in its external shell. In order to bond with the hydrogens, the oxygen orbitals undergo an sp3 hybridization; the oxygen therefore has four equivalent orbitals, two of which carry two electrons each (the lone-pair orbitals) while the other two carry only one electron each. These two single electrons are shared with the electrons of the hydrogens to form the covalent bonds. In order to minimize the energy, the orbitals of the molecule are arranged with tetrahedral symmetry. Figure 1.11 shows how the covalent bonds of the H2O molecule are formed. The O–H bond length is 0.0975 nm and the HOH angle is 104.5°.

Since oxygen is much more electronegative than hydrogen, the shared electron of the hydrogen spends most of its time close to the oxygen, so the oxygen can be considered to carry a negative charge while the bonded hydrogen carries a positive charge. As a consequence the water molecule is a polar molecule with a permanent dipole moment. Since the hydrogen has only one electron when it forms the covalent bond with the oxygen, the positive charge of the proton is more exposed, or, as it is commonly said, the hydrogen is protonated. Therefore, in each water molecule, each protonated hydrogen can attract an oxygen lone pair of another water molecule to form the so-called hydrogen bond. At the same time, each lone pair is attracted by the hydrogen of another molecule, and two more HBs are formed. In this way each water molecule can form four hydrogen bonds arranged in a tetrahedral geometry. The HB energy is around 20 kJ/mol, much less than the covalent bond strength of 460 kJ/mol. How the electronic charges are distributed is very important. As stated by Pauling [26]:

Fig. 1.11 Chemical bond of the water molecule. The oxygen atom and the two hydrogens share pairs of electrons. Each lone pair of the oxygen contains two electrons not shared with the hydrogens. Note the tetrahedral arrangement of the molecule
The tetrahedral arrangement of the shared and unshared electron pairs causes these four bonds to extend in the four tetrahedral directions in space, and leads to the characteristic crystal structure of ice.

The tetrahedral arrangement of water molecules is shown in Fig. 1.12. The central molecule forms hydrogen bonds with four other molecules. The geometry of this pentamer is at the basis of the ice structure shown in the top panel of Fig. 1.13. At melting the tetrahedral structure collapses, as evidenced in the bottom panel of Fig. 1.13. As a consequence, the liquid has a higher density than the crystal, the best-known anomaly of water. The liquid however maintains a short-range order that preserves the tetrahedral hydrogen bond structure, as demonstrated by experiments.

Fig. 1.12 A central water molecule is bonded to four others to form a tetrahedral structure. Note that the angle is now approximately 109°; the value observed in the liquid phase is slightly different from the 105° of the isolated molecule
Fig. 1.13 On the top: a snapshot of the structure of ice Ih, the normal hexagonal lattice formed by water below the melting temperature at ambient pressure. At the bottom: a snapshot of the structure of liquid water
We will show the structure of liquid water in Sect. 3.15.1. In Sect. 5.5 we will consider the models used in computer simulations. Chapter 9 is devoted to the properties of supercooled water. The hydrogen bond is important also in other systems, for instance in the geometry of proteins, as we will discuss later.
1.7 Metastable States and Disordered Solid Matter

The traditional classification of the states of matter is nowadays not sufficient to take into account all the ways in which materials can be naturally or artificially arranged. Not all solid materials are crystals. There are experimental methods able to drive liquids out of an equilibrium phase and keep them in metastable states below the melting temperature. Under appropriate conditions, a supercooled liquid can solidify into a metastable disordered configuration, the amorphous or glassy state. Amorphous solids are disordered, and the amorphous state is in principle metastable. Amorphous solids are also of relevant interest for technological applications, and the study of the transition from the liquid phase to the glass has become a lively field of research [27].

Metastable states can be studied with the methods used for equilibrium properties if their lifetime is long enough to make it possible to perform experiments. In this case we can talk of a glass transition phenomenon and of quasi-equilibrium properties. From the point of view of statistical mechanics, the characterization of glasses is still an open field of research. We will consider the slow dynamics of supercooled liquids and the glass transition in Chap. 8.
1.8 Soft Matter

In everyday life we frequently deal with very viscous fluid matter, like glues, emulsions and pastes. Their elementary constituents are particles with diameters of the order of 1–10³ nm. Those substances are generically called colloids, and they present very diverse properties. In biology, macromolecules are constituted by a large number of atoms, and usually they are in solution with water, so they are fluid matter with very peculiar properties. Materials like colloids and biological systems are now part of a large family named soft matter [28]. Experimental and theoretical methods developed over the decades to investigate liquids have been used in recent years to study soft matter [29]. This is also one of the reasons for which the methodology used for studying liquid matter is of great relevance for understanding many phenomena in chemistry, biology and geochemistry.
We will not address the topic of soft matter in this book; appropriate general references are indicated below. However, the general theoretical and computer simulation methods considered, respectively, in Chaps. 4 and 5 are the indispensable basic knowledge needed to tackle these issues.
1.8.1 Colloids

The generic term colloid is used for systems where large poly-molecular particles are dispersed in a medium. The size of colloidal particles ranges from 1 to 1000 nm. In a normal solution, the solute and the solvent have sizes of the same order of magnitude, and they constitute a single phase. A colloid instead is composed of two distinct components, the colloidal particles (the dispersed phase) and the dispersing medium. The two components are in equilibrium due to the effect of the thermal energy. The dispersed phase can be in the form of bubbles (gas), droplets (liquid) or grains (solid). Depending on the dispersing phase, we can have, for instance, fogs, when droplets are dispersed in a gas, or emulsions, when droplets are dispersed in a liquid. Table 1.3 lists some of the possible combinations.

The sizes of colloids are such that their positions and motions can be observed by means of light scattering and other optical methods. Though these systems are not made of atomic particles, numerical methods elaborated to study liquid matter can be applied also to these materials, which are widespread in our daily life. The basic idea is that these systems are obtained by self-assembly of building blocks [30, 31]. For instance, different types of colloids are composed of Janus particles [32, 33]. The Roman god Janus has two faces, one directed toward the past and the other directed toward the future. Janus microparticles are spheres divided into two parts with different chemical properties. In Fig. 1.14 there is a possible representation in terms of spheres with two distinct hydrophilic and hydrophobic surfaces. An example of Janus microparticles fabricated with plasma polymerization is shown in Fig. 1.15. The authors prepared Janus particles composed of organic and inorganic surfaces by utilizing a very sophisticated technique: they use plasma polymerization to deposit different monomers onto the exposed surfaces of the partially
Table 1.3 Examples of colloids

Name            Dispersed phase   Dispersion medium   Examples
Sol             Solid             Liquid              Inks, paints
Aerosol         Liquid            Gas                 Fogs, clouds
Emulsion        Liquid            Liquid              Milk, mayonnaise
Gel             Liquid            Solid               Gels, jellies
Foam            Gas               Liquid              Foams, whipped cream
Solid emulsion  Liquid            Solid               Butter, cheese
Fig. 1.14 Model of Janus particles with two surfaces that have different properties. For instance, the blue part could be hydrophilic and the red part hydrophobic
Fig. 1.15 Janus particles with controlled coating coverage (from the left c:3/4, d:1/2, e:1/4) of acrylonitrile on silica microspheres. Image obtained with scanning electron microscopy. Adapted with permission from ref. [34]. Copyright 2010 American Chemical Society
embedded microspheres of titania and silica. In the figure, there is an example of acrylonitrile, an organic compound, deposited on silica microspheres at different coverage. The synthesis of Janus particles has become an important research topic in the field of new material design [35, 36]. A number of microscopic models used to study colloids are based on Janus particles [37, 38] similar to the ones shown in Fig. 1.14. In particular Janus particles are of great interest in the study of the self-assembly phenomenon in biology, chemistry and the design of new materials.
1.8.2 Biomolecules

Molecules that are present in living organisms are usually composed of a large number of atoms. They are classified according to their properties and functions. As for colloids, they are sometimes dispersed in some medium like water, and they are studied both for their intrinsic properties and for their interactions with other macromolecules or with some environment. The modern research on the microscopic behaviour of biomolecules is strongly supported by the use of computer simulation methods originally elaborated for the study of liquid matter [39]. In the living processes, a fundamental role is played by proteins. Proteins are formed by chains of amino
Fig. 1.16 Haemoglobin as an example of a protein. (a) The primary structure. (b) The secondary structure with the two possible forms: an alpha-helix (on the left) or a beta-pleated sheet (on the right). (c) The tertiary structure. (d) The quaternary structure. See text for explanations. By OpenStax, from the textbook OpenStax Anatomy and Physiology, published May 18, 2016, CC BY 3.0, https://commons.wikimedia.org/w/index.php?curid=64286551
acid residues, like the example in Fig. 1.16, haemoglobin. The final form of a protein is the result of the successive formation of four structures, starting from a polypeptide chain obtained as a sequence of amino acids. The four structures are schematically illustrated in Fig. 1.16 and indicated with the letters (a)–(d):

(a) The primary structure is the sequence of amino acids that form the polypeptide chain.
(b) The secondary structure, which can take the form of an alpha-helix or a beta-pleated sheet, is maintained by hydrogen bonds between amino acids in different regions of the original polypeptide strand.
(c) The tertiary structure occurs as a result of further folding and bonding of the secondary structure.
(d) The quaternary structure occurs as a result of interactions between two or more tertiary subunits.

Different experimental techniques have made available a large amount of data about the geometry of proteins. The main techniques are nuclear magnetic resonance (NMR), X-ray diffraction and also electron microscopy. X-ray diffraction in particular provides the electronic distributions around the atoms in the molecules. By
fitting appropriate models of the electronic configuration to the measured electron density, it is possible to derive the protein structure. To predict the protein geometry and the various properties starting from a model is the important task of computer simulation in this field. Taking into account the complexity of the charge distributions in the macromolecule, it is clear that it is particularly difficult to implement realistic models. For this reason, it is usual to use simplified representations of the protein in terms of sites carrying appropriate charges and interacting through effective potentials. A number of review papers and books have been published in this field; see, for instance, [40, 41].
References

1. Landau, L.D., Lifshitz, E.M.: Statistical Physics. Elsevier, London (2013)
2. Stanley, H.E.: Phase Transition and Critical Phenomena. Oxford University Press, New York (1971)
3. Franks, F.: Water: A Matrix of Life, 2nd edn. Royal Society of Chemistry, Cambridge (2000)
4. Salzmann, C.A.: J. Chem. Phys. 150, 060901 (2019)
5. Gallo, P., Amann-Winkel, K., Angell, C.A., Anisimov, M.A., Caupin, F., Chakravarty, C., Lascaris, E., Loerting, T., Panagiotopoulos, A.Z., Russo, J., Sellberg, J.A., Stanley, H.E., Tanaka, H., Vega, C., Xu, L., Pettersson, L.G.M.: Chem. Rev. 116, 7463 (2016)
6. Akiya, N., Savage, P.E.: Chem. Rev. 102, 2725 (2002)
7. Huelsman, C.M., Savage, P.E.: J. Supercrit. Fluids 81, 200 (2013)
8. Linstrom, P.J., Mallard, W.G. (eds.): The NIST Chemistry WebBook: A Chemical Data Resource on the Internet. J. Chem. Eng. Data 46 (2001)
9. Devoe, H.: Thermodynamics and Chemistry, 2nd edn. Department of Chemistry and Biochemistry, University of Maryland, College Park, Maryland (2019)
10. Egelstaff, P.A.: An Introduction to the Liquid State. Academic, London (1967)
11. March, N.H., Tosi, M.P.: Introduction to Liquid State Physics. World Scientific, Singapore (2002)
12. Hansen, J.P., McDonald, I.R.: Theory of Simple Liquids. Academic, Oxford (2013)
13. Stuart, G.W.: Phys. Rev. 32, 558 (1928)
14. Lovesey, S.W.: Theory of Neutron Scattering from Condensed Matter. Clarendon Press, Oxford (1984)
15. Barker, J.A., Henderson, D.: Rev. Mod. Phys. 48, 587 (1976)
16. Di Castro, C., Raimondi, R.: Statistical Mechanics and Applications in Condensed Matter. Cambridge University Press, Cambridge (2015)
17. Mott, N.F.: Proc. R. Soc. Lond. A Math. Phys. Eng. Sci. 146, 465 (1934). http://rspa.royalsocietypublishing.org/content/146/857/465
18. March, N.H.: Liquid Metals: Concepts and Theory. Cambridge University Press, Cambridge (1990)
19. Scopigno, T., Ruocco, G., Sette, F.: Rev. Mod. Phys. 77, 881 (2005)
20. Addison, C.C., Creffield, G.K., Hubberstey, P., Pulham, R.J.: J. Chem. Soc. A 77, 1482 (1969). https://doi.org/10.1039/J19690001482
21. Stillinger, F.H.: Science 267, 1935 (1995)
22. Mayer, J.E., Montroll, E.: J. Chem. Phys. 9, 2 (1941)
23. Allen, M.P., Tildesley, D.J.: Computer Simulation of Liquids. Oxford University Press, Oxford (2017)
24. Frenkel, D., Smit, B.: Understanding Molecular Simulation. Academic, London (2002)
25. Car, R., Parrinello, M.: Phys. Rev. Lett. 55, 2471 (1985)
26. Pauling, L.: General Chemistry. Dover, New York (1988)
27. Debenedetti, P.G.: Metastable Liquids. Princeton University Press, Princeton (1996)
28. De Gennes, P.G.: Soft Matter (Nobel Lecture). Angew. Chem. Int. Ed. Engl. 31(7), 842–845 (1992)
29. Piazza, R.: Soft Matter: The Stuff that Dreams are Made of. Springer Science and Business Media, Berlin (2011)
30. Glotzer, S.C.: Science 306, 419 (2004)
31. Glotzer, S.C., Solomon, M.J.: Nat. Mat. 6, 557 (2007)
32. Jiang, S., Chen, Q., Tripathy, M., Luijten, E., Schweizer, K.S., Granick, S.: Adv. Mater. 22, 1060 (2010)
33. Walther, A., Müller, A.H.: Chem. Rev. 113, 5194 (2013)
34. Anderson, K.D., Luo, M., Jakubiak, R., Naik, R.R., Bunning, T.J., Tsukruk, V.V.: Chem. Mater. 22, 3259 (2010)
35. Yang, Q.: Janus Particles: Fabrication, Design and Distribution in Block Copolymers. University of Groningen, Groningen (2016)
36. Deng, R., Liang, F., Zu, J., Yang, Z.: Mater. Chem. Front. 1, 431 (2017)
37. Kern, N., Frenkel, D.: J. Chem. Phys. 118, 9882 (2003)
38. Sciortino, F., Giacometti, A., Pastore, G.: Phys. Rev. Lett. 103, 237801 (2009)
39. Dror, R.O., Dirks, R.M., Grossman, J.P., Xu, H., Shaw, D.E.: Ann. Rev. Biophys. 41, 429 (2012)
40. Zuckerman, D.M.: Statistical Physics of Biomolecules: An Introduction. CRC Press, Taylor and Francis, Boca Raton (2010)
41. Liwo, J.A. (ed.): Computational Methods to Study the Structure and Dynamics of Biomolecules and Biomolecular Processes. Springer, Berlin (2014)
Chapter 2
Thermodynamics and Statistical Mechanics of Fluid States
In the first part of this chapter, we examine the thermodynamic properties of fluid matter. We will introduce the general conditions of equilibrium and stability for a system. Then we will consider the coexistence and the transitions between different equilibrium phases. As an important application of the general formulas, we will discuss in detail the van der Waals equation, a prototype equation of state able to describe the liquid-gas transition. In the second part, we recall the theory of the classical ensembles, since we will consider only classical fluids. In particular we will derive relations between thermodynamics and microscopic fluctuations of energy and density.
2.1 Extensive and Intensive Functions

In thermodynamics we can distinguish between extensive variables, which depend on the system size, and intensive variables, which are independent of the system size. We recall that a function is homogeneous of order n if

$$f(\lambda x_1, \lambda x_2, \ldots) = \lambda^n f(x_1, x_2, \ldots) . \qquad (2.1)$$

An extensive function is a homogeneous function of first order of the extensive variables. An intensive function is a homogeneous function of order 0 of the extensive variables. So an intensive function p will be characterized by the property

$$p(x_1, x_2, \ldots) = p(\lambda x_1, \lambda x_2, \ldots) . \qquad (2.2)$$
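As a quick illustration of these definitions (not part of the book, and assuming the Python library sympy), one can check symbolically that the ideal-gas pressure p = N k_B T / V is homogeneous of order 0 in the extensive variables (V, N), while the corresponding internal energy is homogeneous of order 1:

```python
import sympy as sp

T, V, N, kB, lam = sp.symbols('T V N k_B lambda', positive=True)

p = N * kB * T / V                    # intensive: ideal-gas pressure
E = sp.Rational(3, 2) * N * kB * T    # extensive: ideal-gas internal energy

# Scale the extensive variables by lambda and compare
print(sp.simplify(p.subs({V: lam*V, N: lam*N}) - p))   # 0 -> homogeneous of order 0
print(sp.simplify(E.subs({N: lam*N}) - lam*E))         # 0 -> homogeneous of order 1
```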
2.2 Energy and Entropy

A system is in equilibrium if the variables that define it completely, its internal energy E, its volume V and the numbers of particles (or moles) of its n different components N_1, N_2, ..., N_n, do not change in time [1]. The internal energy is the sum of the energies of all the particles in the system, and it is an extensive function of the volume and of the number of particles of each component. According to the first principle of thermodynamics, if an infinitesimal heat quantity δQ is transferred to the system and an infinitesimal amount of work δW is done on the system, the infinitesimal change of energy is given by

$$dE = \delta Q + \delta W , \qquad (2.3)$$

where δQ and δW are not exact differentials, while dE is an exact differential. In a transformation from an initial equilibrium state A to a final equilibrium state B, ΔE = E_B − E_A is independent of the path, while the total transferred heat Q and performed work W are different along different paths.

The transformation of a thermodynamic system can be performed by activating or removing different constraints. A constraint is
– adiabatic if it prevents any heat exchange;
– rigid if it prevents any change of volume;
– impermeable if it prevents any exchange of particles.
The constraints can be internal or imposed by the contact with a thermal bath (reservoir). An internal constraint can be a wall dividing the isolated system into two parts as in Fig. 2.1. A transformation taking place along a series of equilibrium states is defined as quasi-static; this is a necessary condition for a transformation to be reversible.

The first principle of thermodynamics does not predict how the system evolves during a transformation when internal constraints are removed. To determine the final equilibrium state, we need to introduce the function called entropy.

Fig. 2.1 Isolated system with an internal constraint represented as a wall
The properties of the entropy are determined by three postulates: (1) the entropy S is an extensive function of the internal energy, of the volume and of the numbers of particles or moles of the components, S = S(E, V, {N}), where {N} = N_1, ..., N_n; (2) the entropy is a continuous, differentiable and monotonically increasing function of the internal energy; (3) if an equilibrium state B is reached adiabatically from an equilibrium state A by removing internal constraints, then S_B ≥ S_A, with the equal sign holding for reversible transformations. These postulates are equivalent to the second principle of thermodynamics [1].

We can invert the relation between entropy and energy, so energy is also an extensive function of entropy, E = E(S, V, {N}). For the differentials, we have

$$dS = \left(\frac{\partial S}{\partial E}\right)_{V,\{N\}} dE + \left(\frac{\partial S}{\partial V}\right)_{E,\{N\}} dV + \sum_{i=1}^{n}\left(\frac{\partial S}{\partial N_i}\right)_{E,V,N_j} dN_i \qquad (2.4)$$

and

$$dE = \left(\frac{\partial E}{\partial S}\right)_{V,\{N\}} dS + \left(\frac{\partial E}{\partial V}\right)_{S,\{N\}} dV + \sum_{i=1}^{n}\left(\frac{\partial E}{\partial N_i}\right)_{S,V,N_j} dN_i . \qquad (2.5)$$

Along a reversible path the second term in Eq. (2.5) is the infinitesimal mechanical work done to change the volume, and we can define the pressure as

$$p = -\left(\frac{\partial E}{\partial V}\right)_{S,\{N\}} , \qquad (2.6)$$

so the mechanical work is given by

$$\delta W_m = -p\, dV . \qquad (2.7)$$

Analogously, the third term in Eq. (2.5) is the infinitesimal work done to change the number of particles. We can define the chemical potential as

$$\mu_i = \left(\frac{\partial E}{\partial N_i}\right)_{S,V,N_j} , \qquad (2.8)$$

where the partial derivative with respect to N_i is performed keeping constant the other N_j with j ≠ i. The chemical work is

$$\delta W_c = \sum_{i=1}^{n} \mu_i\, dN_i . \qquad (2.9)$$
Therefore

$$dE = \delta Q - p\, dV + \sum_{i=1}^{n} \mu_i\, dN_i . \qquad (2.10)$$

Along a reversible path, the first term in Eq. (2.10) is the infinitesimal quantity of heat involved in the transformation,

$$(\delta Q)_{rev} = \left(\frac{\partial E}{\partial S}\right)_{V,\{N\}} dS , \qquad (2.11)$$

and we can define the temperature as

$$T = \left(\frac{\partial E}{\partial S}\right)_{V,\{N\}} = \left[\left(\frac{\partial S}{\partial E}\right)_{V,\{N\}}\right]^{-1} , \qquad (2.12)$$

and finally we have

$$dE = T\, dS - p\, dV + \sum_{i=1}^{n} \mu_i\, dN_i . \qquad (2.13)$$

By comparing dS in Eqs. (2.13) and (2.4), we can also write

$$p = T\left(\frac{\partial S}{\partial V}\right)_{E,\{N\}} \qquad (2.14)$$

and for the chemical potential

$$\mu_i = -T\left(\frac{\partial S}{\partial N_i}\right)_{E,V,N_j} . \qquad (2.15)$$
2.3 Gibbs-Duhem Relation

If we consider Euler's theorem for homogeneous functions of first order,

$$f(x_1, \ldots, x_n) = \sum_{i=1}^{n}\left(\frac{\partial f}{\partial x_i}\right)_{x_{j\neq i}} x_i , \qquad (2.16)$$

for the internal energy we obtain

$$E = \left(\frac{\partial E}{\partial S}\right)_{V,\{N\}} S + \left(\frac{\partial E}{\partial V}\right)_{S,\{N\}} V + \sum_i \left(\frac{\partial E}{\partial N_i}\right)_{S,V,N_j} N_i . \qquad (2.17)$$

From this the differential of the energy is

$$dE = T\, dS + S\, dT - p\, dV - V\, dp + \sum_i \left(\mu_i\, dN_i + N_i\, d\mu_i\right) . \qquad (2.18)$$

Now by comparing with Eq. (2.13) we find the Gibbs-Duhem relation

$$S\, dT - V\, dp + \sum_i N_i\, d\mu_i = 0 . \qquad (2.19)$$

This relation is important since it connects the three intensive variables, which are therefore not independent. For a single-component system, we have

$$d\mu = -s\, dT + v\, dp , \qquad (2.20)$$

with s = S/N and v = V/N.
2.4 Equilibrium Conditions

An isolated system is in equilibrium if for any change from its state the entropy does not increase,

$$(\Delta S)_{E,V,\{N\}} \le 0 ; \qquad (2.21)$$

since the entropy must be at a maximum, the equilibrium condition becomes

$$dS(E, V, \{N\}) = 0 \qquad (2.22)$$

and for the stability of the equilibrium state it must be

$$d^2 S(E, V, \{N\}) \le 0 . \qquad (2.23)$$

The condition of maximum entropy (2.21) is equivalent to the condition of minimum energy

$$(\Delta E)_{S,V,\{N\}} \ge 0 ; \qquad (2.24)$$
in this case the equilibrium condition is given by

$$dE(S, V, \{N\}) = 0 \qquad (2.25)$$

and for the stability of the equilibrium state it must be

$$d^2 E(S, V, \{N\}) \ge 0 . \qquad (2.26)$$

In the following we will use the conditions for the energy, Eqs. (2.25) and (2.26); the same results would be obtained by using the conditions for the entropy, Eqs. (2.22) and (2.23).

Assuming that the system is divided into two parts by an internal wall, as in Fig. 2.1, and that the wall is impermeable and adiabatic, the total energy will be

$$E = E_1 + E_2 . \qquad (2.27)$$

Now if we remove the adiabatic constraint, there will be an exchange of heat between the two parts, but the total energy must be constant since the system is isolated, so we have

$$dE_1 = -dE_2 \qquad (2.28)$$

and from the condition (2.22)

$$dS = \left(\frac{\partial S_1}{\partial E_1}\right)_{V_1} dE_1 + \left(\frac{\partial S_2}{\partial E_2}\right)_{V_2} dE_2 \qquad (2.29)$$

we get

$$\left(\frac{1}{T_1} - \frac{1}{T_2}\right) dE_1 = 0 \qquad (2.30)$$

for any dE_1, so the equilibrium condition is T_1 = T_2.

If we remove all the internal constraints, by following a similar derivation where we assume that the total energy, the total volume and the total number of particles must be constant, we get the condition

$$\left(\frac{1}{T_1} - \frac{1}{T_2}\right) dE_1 + \left(\frac{p_1}{T_1} - \frac{p_2}{T_2}\right) dV_1 + \sum_i \left(\frac{\mu_i^{(1)}}{T_1} - \frac{\mu_i^{(2)}}{T_2}\right) dN_i^{(1)} = 0 , \qquad (2.31)$$

therefore at equilibrium

$$T_1 = T_2 , \qquad p_1 = p_2 , \qquad \mu_i^{(1)} = \mu_i^{(2)} . \qquad (2.32)$$
2.5 Equilibrium Conditions and Intensive Quantities

We recall that the intensive quantities temperature, pressure and chemical potential are derived from the energy as

$$T = \left(\frac{\partial E}{\partial S}\right)_{V,\{N\}} \qquad (2.33)$$

$$p = -\left(\frac{\partial E}{\partial V}\right)_{S,\{N\}} \qquad (2.34)$$

$$\mu_i = \left(\frac{\partial E}{\partial N_i}\right)_{S,V,N_j} . \qquad (2.35)$$
From the previous derivations we found that in a system at equilibrium, each one of these intensive variables must have the same value in all the portions of the system.
2.6 Macroscopic Response Functions and Stability Conditions

Equation (2.26) involves the second derivatives of E (or S), and it leads to conditions on the macroscopic response functions that we introduce now. We assume that the N_i are kept constant. When we modify the external parameters, like temperature or pressure, the properties of the system can change. The thermodynamic response functions describe the response of the system to the variation of the external parameters.

One of these functions is the thermal expansion coefficient

$$\alpha_p = \frac{1}{V}\left(\frac{\partial V}{\partial T}\right)_p , \qquad (2.36)$$

which shows how the volume changes under the effect of a change in temperature. The isothermal compressibility is defined as

$$K_T = -\frac{1}{V}\left(\frac{\partial V}{\partial p}\right)_T , \qquad (2.37)$$

while the adiabatic compressibility is

$$K_S = -\frac{1}{V}\left(\frac{\partial V}{\partial p}\right)_S . \qquad (2.38)$$
These functions show how the volume or the density of the system can change under the effect of the applied pressure. Also the heat capacities (or the specific heats) are response functions: at constant volume

$$C_V = \left(\frac{\partial E}{\partial T}\right)_V = T\left(\frac{\partial S}{\partial T}\right)_V , \qquad (2.39)$$

and at constant pressure

$$C_p = T\left(\frac{\partial S}{\partial T}\right)_p . \qquad (2.40)$$

It is possible to derive the following relations [2]:

$$C_p - C_V = T V \frac{\alpha_p^2}{K_T} , \qquad (2.41)$$

and also

$$K_T - K_S = \frac{T V \alpha_p^2}{C_p} . \qquad (2.42)$$
∂ 2E ∂S 2
∂ 2E ∂V 2
V ,{N }
=
∂T ∂S
V ,{N }
∂p =− ∂V
∂ 2E ∂ 2E − ∂S 2 ∂V 2
(2.43)
S,{N }
≥0
S,{N }
≥0
(2.44)
2
∂ 2E ∂S∂V
≥0
(2.45)
From Eqs. (2.43)–(2.45), we get four important inequalities. When the pressure applied on a system grows, the volume decreases. KT = −
1 V
∂V ∂p
≥0
(2.46)
T
and also KS ≥ 0
(2.47)
2.7 Legendre Transforms and Thermodynamic Potentials
31
These conditions are called the mechanical stability conditions. The stability conditions concern also the specific heat at constant volume and pressure, with s = S/N the entropy per particle: cV =
∂s ∂T
>0
(2.48)
> 0.
(2.49)
V
and since cp > cV this implies cp =
∂s ∂T
p
These criteria are called the thermal stability conditions; they simply tell us that if heat is added to a stable system, its temperature increases. The conditions on the second derivatives of the energy imply that E is a convex function of S and V . For the entropy one finds that it is a concave function of both energy and volume.
2.7 Legendre Transforms and Thermodynamic Potentials If a n-variable function f = f (x1 , . . . , xn ) has an exact differential df =
n
ui dxi
(2.50)
i=1
the quantities ui =
∂f ∂xi
(2.51) xj
are defined as the conjugate variables to the xi . Let us suppose that we want to substitute some of the independent variables xi with the corresponding ui ; for simplicity now we shift the xi to be changed to the final positions so that x1 , . . . , xm , xm+1 , . . . , xn → x1 , . . . , xm , um+1 , . . . , un The Legendre transform of the function f is defined as g=f −
n i=m+1
ui xi
(2.52)
32
2 Thermodynamics and Statistical Mechanics of Fluid States
where g = g(x1 , . . . , xm , um+1 , . . . , un ). The differential of g will be given by dg =
m
ui dxi +
i=1
n
(−xi )dui
(2.53)
i=m+1
With the Legendre transforms, it is possible to introduce different thermodynamic potentials depending on the independent variables that we want to use starting from the internal energy E = E(S, V , N1 , . . . , Nn ), with dE = T dS − pdV + μ1 dN1 + · · · + μn dNn
(2.54)
2.7.1 Helmholtz Free Energy From the set of independent variables S, V , N1 , . . . , Nn , we substitute the entropy with its conjugate variable, the temperature T =
∂E ∂S
(2.55) V ,{N }
We perform a Legendre transform, and we get a new function, called Helmholtz free energy A(T , V , N1 , . . . , Nn ) = E − T S
(2.56)
with a differential given by dA = −SdT − pdV +
n
μi dNi .
(2.57)
i=1
The Helmholtz free energy is the relevant thermodynamic potential in the transformation at constant T and V . Consider that our system Σ is now in contact with a reservoir R. The reservoir can exchange heat at constant temperature with our system, see Fig. 2.2. The total system Σ composed by Σ and R is isolated, so we can require the equilibrium conditions for its energy E = E + ER and entropy S = S + SR . For the energy we have d (E + ER ) = 0
(2.58)
Now the energy change of the reservoir is given by dER = TR dSR , the heat exchanged with the system Σ. Since dS = dS +dSR = 0, we have dER = −TR dS so
2.7 Legendre Transforms and Thermodynamic Potentials
33
Fig. 2.2 System Σ at contact with the reservoir R with heat exchange. The total system Σ is isolated
d (E − TR S) = 0
(2.59)
dA = d (E − T S) = 0
(2.60)
Now TR = T so finally
The equilibrium conditions for the system in contact with a reservoir at constant V and T are the conditions for a minimum of the Helmholtz free energy.
2.7.2 Gibbs Free Energy The Gibbs free energy is obtained by substituting (S, V ) with (T , −p) G(T , p, N1 , . . . , Nn ) = E − T S + pV
(2.61)
with dG = −SdT + V dp +
n
μi dNi
(2.62)
i=1
The Gibbs free energy is usually important for the connections with experiments that are frequently performed with the use of p and T as external parameters to vary. We can derive the equilibrium conditions when the reservoir exchanges heat and mechanical work with the system, Fig. 2.3, dER = TR dSR − pR dVR ,
(2.63)
34
2 Thermodynamics and Statistical Mechanics of Fluid States
Fig. 2.3 As Fig. 2.2 but with exchange of heat and mechanical work
for the equilibrium condition of the total system dSR = −dS and dVR = −dV , and since TR = T and pR = p we have d (E + ER ) = d (E − T S + pV ) = 0
(2.64)
Therefore the equilibrium conditions for the system in contact with a reservoir at constant p and T are the conditions for a minimum of the Gibbs free energy. The equilibrium condition is dG = 0 .
(2.65)
By applying the Euler theorem to the Gibbs free energy, we can write G=
n ∂G ∂Ni
i=1
Ni .
(2.66)
Nj
On the other hand from Eq. (2.62), we have G=
(2.67)
μi Ni
i
Therefore it comes out μi =
∂G ∂Ni
.
(2.68)
Nj
For a system with a single component G = μN
(2.69)
2.7 Legendre Transforms and Thermodynamic Potentials
35
2.7.3 Enthalpy By substituting the volume with the pressure
∂E p=− ∂V
S,{N }
from the internal energy, we get the enthalpy H (S, p, N1 , . . . , Nn ) = E + pV
(2.70)
with dH = T dS + V dp +
n
μi dNi
(2.71)
i=1
In this case the system exchanges only mechanical work with the reservoir, Fig. 2.4, and we have dER = −pR dVR ,
(2.72)
and since p = pR , dV = −dVR d(E + pV ) = 0
(2.73)
dH = 0 .
(2.74)
so the equilibrium condition is
Fig. 2.4 As Fig. 2.2 but with exchange of mechanical work
36
2 Thermodynamics and Statistical Mechanics of Fluid States
2.7.4 Grand Canonical Potential The so-called grand canonical potential can be derived from E with the substitution (S, Ni ) → (T , μi ) Ω(T , V , μ1 , . . . , μn ) = E − T S −
μi Ni
(2.75)
i
with dΩ = −SdT − pdV −
Ni dμi .
(2.76)
i
In this case the system exchanges heat and particles with the reservoir. The heat (R) is exchanged at constant T = TR and the particle of specie i at constant μi = μi , (R) Fig. 2.5. Now we have dS = −dSR and dNi = −dNi d E −TS −
μi Ni
=0
(2.77)
i
Therefore the equilibrium condition is now dΩ = 0 .
(2.78)
By using Eq. (2.67) in Eq. (2.75), we get Ω = −pV Fig. 2.5 As Fig. 2.2 but with exchange of heat and particles
(2.79)
2.7.5 Tabulated Thermodynamic Potentials

See Table 2.1 [1].

Table 2.1 Thermodynamic potentials

Thermodynamic potential     Formula                         External parameters
Internal energy             E = TS − pV + Σ_i μ_i N_i       S, V, {N}
Helmholtz free energy       A = E − TS                      T, V, {N}
Gibbs free energy           G = E − TS + pV                 T, p, {N}
Enthalpy                    H = E + pV                      S, p, {N}
Grand canonical potential   Ω = E − TS − Σ_i μ_i N_i        T, V, μ
2.8 Stability Conditions for Thermodynamic Potentials

In analogy with the internal energy, the stability conditions for the other thermodynamic potentials require properties of convexity or concavity as a function of the different parameters [4]. For the Helmholtz free energy, we have

$$\left(\frac{\partial^2 A}{\partial V^2}\right)_{T,\{N\}} = -\left(\frac{\partial p}{\partial V}\right)_{T,\{N\}} \ge 0 , \qquad (2.80)$$

which is positive as a consequence of Eq. (2.46), while due to Eq. (2.48) we have

$$\left(\frac{\partial^2 A}{\partial T^2}\right)_{V,\{N\}} = -\left(\frac{\partial S}{\partial T}\right)_{V,\{N\}} = -\frac{C_V}{T} \le 0 , \qquad (2.81)$$

so A is a concave function of T and a convex function of V. From the stability conditions, the Gibbs free energy turns out to be a concave function of both T and p, since

$$\left(\frac{\partial^2 G}{\partial T^2}\right)_{p,\{N\}} = -\frac{1}{T}\,C_p \le 0 \qquad (2.82)$$

$$\left(\frac{\partial^2 G}{\partial p^2}\right)_{T,\{N\}} = -V K_T \le 0 . \qquad (2.83)$$

These conditions are violated in instability regions of the system.
2.9 Coexistence and Phase Transitions

Consider now a single-component system. The coexistence curve between two phases, indicated as a and b, is represented in Fig. 2.6 in the p−T plane. According to the general conditions for the equilibrium of a system, two coexisting phases are in equilibrium when the following conditions are satisfied for their intensive quantities temperature, pressure and chemical potential:

$$T^{(a)} = T^{(b)} , \qquad p^{(a)} = p^{(b)} , \qquad (2.84)$$

$$\mu^{(a)}(T, p) = \mu^{(b)}(T, p) . \qquad (2.85)$$

If T and p are chosen as external variables, the condition (2.85) defines the coexistence curve

$$p_{coex} = p(T)_{coex} , \qquad (2.86)$$

the curve represented in Fig. 2.6. In a single-component system, if three phases a, b and c coexist, together with the condition (2.85) the following equation must be satisfied:

$$\mu^{(b)}(T, p) = \mu^{(c)}(T, p) , \qquad (2.87)$$

and then with two equations we get a unique solution where p^a = p^b = p^c and T^a = T^b = T^c; this solution represents the triple point.

Fig. 2.6 Coexistence curve between the a and b phases in the P−T plane
2.10 Phase Transitions and Their Classifications

If T and p are assumed as the external variables, the stable state of a system will correspond to the lowest Gibbs free energy. The equilibrium phase of the system can be changed by varying one or both of T and p. Close to a phase transition, the behaviour of μ = G/N for fixed p as a function of T will be that represented in Fig. 2.7. It is evident that
• for T < T_0, (a) is the stable phase;
• for T > T_0, (b) is the stable phase.
At T = T_0 a phase transition takes place. The equilibrium is along the bold curve in Fig. 2.8. By changing the pressure, we can get a coexistence curve between the a and b phases in the (T, p) plane as in Fig. 2.6.

Fig. 2.7 Chemical potentials of the two generic a and b phases as a function of temperature: μ_a continuous curve, μ_b dot-dashed curve

Fig. 2.8 The continuous line is the equilibrium chemical potential across the transition from the a phase to the b phase. The dashed red line represents the unstable curve
From Fig. 2.8 it is evident that μ is continuous at T_0, but there is a discontinuity in its derivative with respect to T. An analogous behaviour is found for μ as a function of p at fixed T. At the phase transition we therefore have discontinuities in the derivatives

$$s = -\left(\frac{\partial \mu}{\partial T}\right)_p , \qquad v = \left(\frac{\partial \mu}{\partial p}\right)_T , \qquad (2.88)$$

where s = S/N and v = V/N, so there is a discontinuity in the specific entropy and a change of volume:

$$\Delta s = s^{(b)} - s^{(a)} \qquad (2.89)$$

$$\Delta v = v^{(b)} - v^{(a)} . \qquad (2.90)$$

This type of phase transition is called first order. Phase transitions are always characterized by non-analyticities of the thermodynamic potential. Phase transitions where the first derivatives of the thermodynamic potential are continuous and the discontinuities are shifted to the second derivatives are called second order.

In a phase transition of the first order, the volume is discontinuous, so along an isotherm the following conditions are satisfied:

$$p^{(a)}\!\left(T, v^{(a)}\right) = p^{(b)}\!\left(T, v^{(b)}\right) \qquad (2.91)$$

$$\mu^{(a)}\!\left(T, p^{(a)}\right) = \mu^{(b)}\!\left(T, p^{(b)}\right) , \qquad (2.92)$$

and the values of v^{(a)} and v^{(b)} can be obtained. Along the coexistence curve, from (2.85) the differential dμ satisfies the equation

$$\left(d\mu^{(a)}\right)_{coex} = \left(d\mu^{(b)}\right)_{coex} ; \qquad (2.93)$$

since

$$d\mu = -s\, dT + v\, dp , \qquad (2.94)$$

we get

$$\left(-s^{(a)}\, dT + v^{(a)}\, dp\right)_{coex} = \left(-s^{(b)}\, dT + v^{(b)}\, dp\right)_{coex} , \qquad (2.95)$$
from which the Clausius-Clapeyron equation can be derived:

$$\left(\frac{dp}{dT}\right)_{coex} = \frac{q_\lambda}{T\,\Delta v} , \qquad (2.96)$$

where q_λ is the latent heat of the transition,

$$q_\lambda = T\,\Delta s . \qquad (2.97)$$
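As a rough numerical illustration of Eq. (2.96) (not taken from the book), the sketch below estimates dp/dT for the liquid–vapour transition of water at its normal boiling point, using the commonly quoted latent heat of about 40.7 kJ/mol and approximating the vapour as an ideal gas; the result, a few kPa per kelvin, has the familiar order of magnitude.

```python
# Estimate of the Clausius-Clapeyron slope dp/dT for water at 373 K, 1 atm.
# The values below (latent heat, liquid molar volume, ideal-gas vapour) are
# standard textbook approximations, not data from this book.
R = 8.314          # J / (mol K)
T = 373.15         # K
p = 101325.0       # Pa
q_lambda = 40.7e3  # J / mol, latent heat of vaporization

v_gas = R * T / p          # molar volume of the vapour (ideal gas), ~0.031 m^3/mol
v_liq = 18.0e-6            # molar volume of the liquid, ~18 cm^3/mol
dp_dT = q_lambda / (T * (v_gas - v_liq))
print(f"dp/dT ~ {dp_dT/1000:.1f} kPa/K")   # roughly 3.6 kPa/K
```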
2.11 Van der Waals Equation

Van der Waals in 1873 invented an equation of state with the idea of connecting the macroscopic thermodynamic behaviour with the interactions between the molecules of a fluid [5, 6]. His equation can describe the main features of the liquid-gas transition; it was the first example of an empirical equation of state. The approach of van der Waals is also at the very basis of the later statistical mechanics theory of liquids. He won the Nobel Prize in 1910.

The starting point is a modification of the ideal gas equation

$$pV = N k_B T \qquad (2.98)$$

by introducing the effects of the molecular interactions. The excluded volume effect is due to the particle repulsion at short distance. Two molecules cannot overlap, and they can be approximately considered as two hard spheres (see Fig. 2.9). For each sphere there is a portion of excluded volume, represented by an empirical parameter b. The total volume that can be occupied is then V − Nb, and Eq. (2.98) becomes

$$p\,(V - Nb) = N k_B T . \qquad (2.99)$$

For finite N, when p → ∞ the volume does not go to zero, as for the ideal gas, but V → Nb, so there is a maximum possible packing for the molecules. Equation (2.99) describes the passage from a dilute gas to a denser system as a continuous process, without any evidence of a gas-liquid transition. Van der Waals realized that in order to describe a gas-liquid transition with a critical point, an attractive effect between the molecules must be introduced. It was shown later that a hard sphere system cannot have a gas-liquid transition, while it is possible to freeze it into a crystalline ordered state [7].

Fig. 2.9 Excluded volume effect of two spheres of diameter σ

In the ideal gas, the pressure is due to the collisions with the container wall; now the attractive forces will induce part of the molecules to stay away from the wall, with a consequent reduction of the collisions and of the pressure. To take into account this effect, van der Waals introduced a second empirical parameter to correct the pressure term. The pressure in the equation is the pressure of the molecules on the wall of the container, p_wall. If p is the external applied pressure, at equilibrium it must be p_wall − Δp = p, with Δp due to the attractive forces between pairs of molecules. Since pairs of molecules are involved, the correction is assumed to be proportional to (N/V)² through a second empirical parameter a, so that Δp = a N²/V². The equation becomes

$$\left(p + a\frac{N^2}{V^2}\right)(V - Nb) = N k_B T . \qquad (2.100)$$

It can be convenient in some cases to use the equation as a function of the specific volume v = V/N:

$$v^3 - \left(b + \frac{k_B T}{p}\right)v^2 + \frac{a}{p}\,v - \frac{ab}{p} = 0 . \qquad (2.101)$$

It is evident that for large values of T and p, Eq. (2.101) becomes

$$v^3 - \frac{k_B T}{p}\,v^2 = 0 \qquad (2.102)$$

and the ideal gas equation (2.98) is recovered. From Eq. (2.100), we can write the isotherms p = p(v) as

$$p = \frac{k_B T}{(v - b)} - \frac{a}{v^2} . \qquad (2.103)$$

The curves appear as in Fig. 2.10, for decreasing temperatures from the top, for the case of carbon dioxide. The parameters are a = 0.366 Pa·m⁶ and b = 42.9·10⁻⁶ m³ (https://www.brighthubengineering.com/hvac/35632-van-der-waalsequation-and-carbon-dioxide/). At very high T, the curves are very similar to the hyperbolic functions of the ideal gas. At low enough temperature, the curves start to show a very peculiar behaviour, with a change of sign of

$$\left(\frac{\partial p}{\partial v}\right)_T , \qquad (2.104)$$
Fig. 2.10 Isotherms of the van der Waals equation (P in MPa versus V in units of 10⁻³ m³) with the parameters adapted to the case of CO2. Apart from the two high temperatures, 80 and 60 °C, the isotherms are at decreasing temperatures from T = 50 °C to T = 10 °C with step of −0.5 °C
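For readers who want to reproduce curves like those of Fig. 2.10, the short Python sketch below (an illustration, not the book's code) evaluates the van der Waals isotherm of Eq. (2.103); it uses the molar form with the gas constant R and interprets the CO2 parameters quoted above as molar quantities (a in Pa·m⁶/mol², b in m³/mol), which is an assumption of this sketch.

```python
import numpy as np

# Van der Waals parameters for CO2, interpreted as molar quantities (assumption):
a = 0.366      # Pa m^6 / mol^2
b = 42.9e-6    # m^3 / mol
R = 8.314      # J / (mol K)

def vdw_pressure(v, T):
    """Van der Waals isotherm p(v) of Eq. (2.103), with v the molar volume."""
    return R * T / (v - b) - a / v**2

# A few isotherms in the range of Fig. 2.10 (temperatures in kelvin)
v = np.linspace(6.0e-5, 3.0e-4, 400)          # molar volumes in m^3/mol
for T_celsius in (10, 30, 50, 80):
    p = vdw_pressure(v, T_celsius + 273.15)   # pressures in Pa
    print(f"T = {T_celsius:3d} °C : p ranges from "
          f"{p.min()/1e6:6.2f} to {p.max()/1e6:6.2f} MPa")
```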
so that the isothermal compressibility K_T turns out to be negative for a range of volumes, violating the stability condition (2.46). Moreover, two points are found, corresponding to the minimum and the maximum of p vs. v, where K_T diverges since

$$\left(\frac{\partial p}{\partial v}\right)_T = 0 . \qquad (2.105)$$

This behaviour is indicated as a van der Waals (VdW) loop. The presence of a VdW loop is the signature of a region where a phase transition takes place. The phase transition is from the low volume (high pressure and density) liquid phase to the high volume (low pressure and density) gas phase.

As stated above, the two phases coexist at equilibrium if the conditions (2.84)–(2.85) are satisfied. Along an isotherm the chemical potential changes as

$$(d\mu)_{isot} = (v\,dp)_{isot} . \qquad (2.106)$$

By integrating along the VdW loop, we can impose the equality of the chemical potentials of the two phases,

$$\mu^{(L)} - \mu^{(G)} = \int_{(G)}^{(L)} v\,dp = 0 ; \qquad (2.107)$$

then we integrate by parts and we get from (2.107)

Fig. 2.11 Maxwell construction for our example (P in MPa versus V in units of 10⁻³ m³, isotherm at T = 10 °C). The pressure is fixed by the equality of area A and area B
$$p_G v_G - p_L v_L - \int_{(G)}^{(L)} p\,dv = 0 . \qquad (2.108)$$

The two pressures must be the same, p_G = p_L, so we can introduce a third volume v_0, such that v_L < v_0 < v_G as in Fig. 2.11, and the condition (2.108) becomes

$$p_L\,(v_0 - v_L) - \int_{(L)}^{(0)} p\,dv = \int_{(0)}^{(G)} p\,dv - p_G\,(v_G - v_0) . \qquad (2.109)$$
This is called the Maxwell construction, and it is equivalent to imposing the condition that the two areas, indicated as A and B in Fig. 2.11, are equal. From the Maxwell construction it is possible to determine the volumes of the two coexisting phases and the corresponding pressure. The system follows the equilibrium state with a jump in the volume. In the (v, p) plane, for each temperature we have two points representing the coexistence. By repeating the procedure for each isotherm, we get the coexistence curve represented in Fig. 2.12. At increasing T, the liquid volume v_L and the gas volume v_G become closer, and finally there is a temperature for which v_L = v_G. This is the critical temperature T_c. At this temperature the isotherm has an inflection point. Therefore the critical point is determined by the conditions
$$\left(\frac{\partial p}{\partial v}\right)_{T=T_c} = 0 , \qquad \left(\frac{\partial^2 p}{\partial v^2}\right)_{T=T_c} = 0 . \qquad (2.110)$$

Fig. 2.12 Coexistence curve in the (V, P) plane obtained from the Maxwell construction (blue curve). The broken line represents the spinodal (see text). The critical point (C.P.) is at T ≈ 30.8 °C
The isotherm at Tc is the border between the isotherms with a monotonous decreasing slope, representing a single gas phase, and the isotherms with the VdW loop. With the Maxwell construction, the regions with KT < 0 are excluded but also the portion of the isotherm where the mechanical stability condition KT > 0 is not violated. Along each isotherm for T < Tc , as said above, the minimum and the maximum of p vs. v are found, where KT diverges; these points define a curve called spinodal, represented in Fig. 2.12. Between the coexistence curve and the spinodal curve, the condition KT > 0 is satisfied, so the spinodal defines the limit of mechanical stability. In the region between the coexistence curve and the spinodal, where the mechanical stability is non-violated, the system is considered to be in a metastable state. We will consider more in detail the metastability of liquids in Chap. 8. The critical point can approximately be located also by plotting the isochores in the p − T diagram. As shown in Fig. 2.13 the isochores close to the critical one cross at the critical point.
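The equal-area condition of Eq. (2.109) is easy to implement numerically. The following Python sketch (an illustration based on the reduced equation (2.113) introduced in the next section, not the book's code) finds, for one subcritical reduced temperature, the pressure at which the two areas A and B of Fig. 2.11 are equal; the bracketing values passed to the root finder are chosen by hand for this particular temperature.

```python
import numpy as np
from scipy.optimize import brentq

def p_reduced(v, t):
    """Reduced van der Waals isotherm, Eq. (2.113): p~ = 8t/(3v~-1) - 3/v~^2."""
    return 8.0 * t / (3.0 * v - 1.0) - 3.0 / v**2

def equal_area_residual(p0, t):
    """Signed area between the isotherm and the line p~ = p0, integrated
    between the outermost intersections (liquid and gas reduced volumes).
    The Maxwell construction requires this area to vanish."""
    # p_reduced(v) = p0 is a cubic in v: 3 p0 v^3 - (p0 + 8t) v^2 + 9 v - 3 = 0
    roots = np.roots([3.0 * p0, -(p0 + 8.0 * t), 9.0, -3.0])
    real = np.sort(roots[np.abs(roots.imag) < 1e-8].real)
    v_l, v_g = real[0], real[-1]
    vs = np.linspace(v_l, v_g, 2000)
    f = p_reduced(vs, t) - p0
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(vs))   # trapezoidal rule

t = 0.9   # reduced temperature T/Tc
# The bracket must lie between the minimum and the maximum of the VdW loop at this t
p_coex = brentq(equal_area_residual, 0.45, 0.70, args=(t,))
print(f"T/Tc = {t}: reduced coexistence pressure p/pc ~ {p_coex:.3f}")  # ~0.65
```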
Fig. 2.13 Isochores close to the critical isochore (v/vc from 0.8 to 1.4) plotted in the plane p/pc versus T/Tc
2.12 General Form of the Van der Waals Equation and Corresponding States

From the van der Waals equation with the conditions (2.110), it is possible to determine the critical values

$$v_c = 3b , \qquad p_c = \frac{a}{27 b^2} , \qquad T_c = \frac{8}{27}\,\frac{a}{k_B b} . \qquad (2.111)$$

We can define the reduced variables as

$$\tilde{T} = \frac{T}{T_c} , \qquad \tilde{v} = \frac{v}{v_c} , \qquad \tilde{p} = \frac{p}{p_c} , \qquad (2.112)$$

where the critical parameters are given by (2.111). The VdW equation (2.100) can be rewritten as

$$\tilde{p} = \frac{8\tilde{T}}{(3\tilde{v} - 1)} - \frac{3}{\tilde{v}^2} . \qquad (2.113)$$

We note that now there is a lower limit ṽ > 1/3, which corresponds to the previous one, v > b. The equation of state (2.113) does not depend on the substance, so we can derive general isotherms for given values of T/Tc. The isotherms are the same for all systems, and in particular the coexistence curve is the same. This prediction of the van der Waals theory is called the law of corresponding states [8]. This law is approximately verified for many substances.
Fig. 2.14 Collapse of the coexistence curves of different systems plotted as T/T_C versus ρ/ρ_C (Reproduced with permission from [8], copyright AIP Publishing)
The gas-liquid coexistence curves plotted in a T /Tc vs. ρ/ρc plane collapse on the same curve, Fig. 2.14. Moreover this universal curve is well approximated by the van der Waals equation at least far from the critical point.
2.13 Critical Behaviour of the Van der Waals Equation

To look in more detail at the behaviour of the VdW equation on approaching the critical point, it is useful to introduce the quantities

$$\epsilon = \frac{T - T_c}{T_c} \qquad (2.114)$$

and

$$\phi = \frac{v - v_c}{v_c} . \qquad (2.115)$$

They measure the distance of the temperature and the volume from their critical values. By using these new variables, the VdW equation becomes
$$\tilde{p} = \frac{8(1+\epsilon)}{2 + 3\phi} - \frac{3}{(1+\phi)^2} . \qquad (2.116)$$

To make a connection with the more general theory of phase transitions, like the Landau theory, it is assumed that the quantity φ is the order parameter of the transition. It plays the role that the magnetization plays in the ferromagnetic transition. Since we are interested in the limit where ε → 0⁻ and φ → 0, we can expand Eq. (2.116):

$$\frac{8(1+\epsilon)}{2 + 3\phi} = 4(1+\epsilon)\left(1 - \frac{3}{2}\phi + \frac{9}{4}\phi^2 - \frac{27}{8}\phi^3 + \cdots\right) \approx 4 + 4\epsilon - 6\phi - 6\epsilon\phi + 9\phi^2 - \frac{27}{2}\phi^3 \qquad (2.117)$$

and

$$\frac{3}{(1+\phi)^2} \approx 3\left(1 - 2\phi + 3\phi^2 - 4\phi^3\right) . \qquad (2.118)$$

Finally, the equation of state close to the critical point for T < T_c can be written as

$$\tilde{p} = 1 + 4\epsilon - 6\epsilon\phi - \frac{3}{2}\phi^3 . \qquad (2.119)$$
We can now impose the Maxwell construction on the loop,

$$\int_{\phi_1}^{\phi_2} \phi\, d\tilde{p} = 0 , \qquad (2.120)$$

with φ₁ < 0 and φ₂ > 0. From Eq. (2.119) we have

$$d\tilde{p} = -6\epsilon\, d\phi - \frac{9}{2}\phi^2\, d\phi ,$$

and substituting in Eq. (2.120),

$$\int_{\phi_1}^{\phi_2} \phi\, d\tilde{p} = -3\epsilon\left(\phi_2^2 - \phi_1^2\right) - \frac{9}{8}\left(\phi_2^4 - \phi_1^4\right) = 0 ; \qquad (2.121)$$
the only possible solution is φ1 = −φ2 . This result implies a symmetry in the behaviour of the order parameter. Now from Eq. (2.119) with φ1 = −φ2 and imposing p˜ 1 = p˜ 2 , we have
$$-6\epsilon\phi_1 - \frac{3}{2}\phi_1^3 = 6\epsilon\phi_1 + \frac{3}{2}\phi_1^3 , \qquad (2.122)$$

so close to the critical point, for the order parameter φ,

$$3\phi^3 + 12\epsilon\phi = 0 , \qquad (2.123)$$

and as a consequence

$$|\phi| = 2\left(\frac{T_c - T}{T_c}\right)^{1/2} . \qquad (2.124)$$

In the scaling theory of second order phase transitions, the order parameter is expected to behave like

$$\phi \sim \epsilon^{\beta} , \qquad (2.125)$$

where β is one of the exponents characterizing the transition.

Another critical exponent can be found by considering the derivative dp̃/dφ of Eq. (2.119). By neglecting the higher order term φ³ for φ → 0, we get

$$\frac{d\tilde{p}}{d\phi} = -6\epsilon . \qquad (2.126)$$

Now

$$\frac{d\tilde{p}}{d\phi} = \frac{v_c}{p_c}\,\frac{dp}{dv} , \qquad (2.127)$$

so close to the critical point

$$\left(-v K_T\right)^{-1} = \frac{dp}{dv} = -6\epsilon\,\frac{p_c}{v_c} . \qquad (2.128)$$

It is expected that at the critical point the isothermal compressibility diverges with a power law,

$$K_T \sim \epsilon^{-\gamma} . \qquad (2.129)$$

The VdW theory predicts the critical indices β = 1/2 and γ = 1, in agreement with the general results of the mean field theory of phase transitions. In this respect it is equivalent to the Landau theory of phase transitions [2]. The mean field theories give a good description of the phase transition, but they do not predict the correct critical exponents. For the liquid-gas transition, in fact, as for the ferromagnetic transition, the correct exponents are β ≈ 0.32 and γ ≈ 1.23.
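The small-ε, small-φ expansion leading to Eq. (2.119) is easy to check symbolically. The sketch below (an illustration, assuming the Python library sympy) expands the reduced equation (2.116) in φ and ε and keeps only the terms that survive when ε is counted as being of order φ², recovering exactly 1 + 4ε − 6εφ − (3/2)φ³.

```python
import sympy as sp

eps, phi = sp.symbols('epsilon phi')

p_tilde = 8*(1 + eps)/(2 + 3*phi) - 3/(1 + phi)**2

# Expand in phi up to third order, then keep only the terms whose combined
# order (counting eps as phi^2, since eps ~ phi^2 near coexistence) is <= 3.
series_phi = sp.expand(sp.series(p_tilde, phi, 0, 4).removeO())
truncated = sum(term for term in series_phi.as_ordered_terms()
                if sp.degree(term, eps) <= 1
                and sp.degree(term, phi) + 2*sp.degree(term, eps) <= 3)
print(sp.simplify(truncated))   # -> the terms of Eq. (2.119)
```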
2.14 Ensembles in Statistical Mechanics

As said in the introduction, the theoretical study of the physics of liquids is based on statistical mechanics. Since we will consider only classical fluids, in the following we recall the theory of the classical ensembles [2, 4, 9]. In the last section, we will derive important relations between thermodynamics and fluctuations.

Let us consider a classical system of N particles in d dimensions. Each particle has a well-defined position r_i and momentum p_i, so the system is characterized by d×N coordinates r^N = (r_1, ..., r_N) and d×N momenta p^N = (p_1, ..., p_N) that evolve with time. In the phase space a point represents an instantaneous microscopic state. The evolution of the system is determined by Hamilton's equations, and it is represented as a trajectory in the phase space, starting from an initial state point.

A system is in a macroscopic state consistent with the external conditions. An enormous number of microstates are compatible with the given macrostate. The ensemble is defined as the collection of all the possible virtual copies of the system, each one in a different microstate, corresponding to the macroscopic state. The ensemble is represented by a number of points in the phase space distributed with a density f^{(N)}(r^N, p^N, t); this function is normalized,

$$\int dr^N\, dp^N\, f^{(N)}\!\left(r^N, p^N, t\right) = 1 . \qquad (2.130)$$

It represents the probability density of finding the particles of the system with the values r_1 ... r_N and p_1 ... p_N, and it describes the way in which at each time the members of the ensemble are distributed. The time evolution of f^{(N)}(r^N, p^N, t) is determined by Liouville's equation,

$$\frac{\partial}{\partial t} f^{(N)}\!\left(r^N, p^N, t\right) = -i\hat{L}\, f^{(N)}\!\left(r^N, p^N, t\right) , \qquad (2.131)$$

where L̂ is the Liouville operator, which represents the application to a function f of the Poisson brackets,

$$\hat{L} f = \{f, H\} = \sum_i \left(\dot{r}_i \cdot \frac{\partial f}{\partial r_i} + \dot{p}_i \cdot \frac{\partial f}{\partial p_i}\right) . \qquad (2.132)$$

We are interested in the equilibrium probability density, such that

$$\frac{\partial f^{(N)}}{\partial t} = 0 . \qquad (2.133)$$

This implies that f^{(N)}(r^N, p^N) must be a functional of the Hamiltonian: f^{(N)} = f^{(N)}[H]. The equilibrium distribution defines an ensemble which is composed of all the state points in the phase space corresponding to the same macroscopic equilibrium state.
By means of the equilibrium distribution, we can calculate the equilibrium average of an observable X. If α = (r^N, p^N) denotes the points in the phase space, the average value is given by

$$\langle X \rangle = \sum_{\alpha} f_{ens}[H(\alpha)]\, X(\alpha) . \qquad (2.134)$$

In experiments the measure of an observable is a time average performed over a time much longer than the characteristic times of the microscopic phenomena of interest. The experimental measure of X(t) will give

$$\overline{X} = \lim_{t\to\infty} \frac{1}{t} \int_0^{t} dt'\, X(t') . \qquad (2.135)$$

In statistical mechanics the temporal averages are substituted by averages performed on the ensemble. The equivalence of these two averages is called the ergodic hypothesis:

$$\overline{X} = \langle X \rangle . \qquad (2.136)$$
This condition is satisfied if the system visits all the possible states in the phase space. Different types of ensemble can be defined according to the different thermodynamic conditions.
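A minimal numerical illustration of Eq. (2.136) (not from the book): for a single particle in a one-dimensional harmonic well in contact with a thermal bath, the ensemble (Boltzmann) average of x² is k_BT/k, and a long time average over an overdamped Langevin trajectory should converge to the same value; all parameters below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

kBT, k, gamma, dt = 1.0, 2.0, 1.0, 1e-3   # arbitrary units
n_steps = 1_000_000

# Overdamped Langevin dynamics in the harmonic potential U(x) = k x^2 / 2:
#   dx = -(k/gamma) x dt + sqrt(2 kBT dt / gamma) * (Gaussian random number)
x = 0.0
acc = 0.0
noise = np.sqrt(2.0 * kBT * dt / gamma)
for _ in range(n_steps):
    x += -(k / gamma) * x * dt + noise * rng.standard_normal()
    acc += x * x

time_avg = acc / n_steps
ensemble_avg = kBT / k            # equipartition: <x^2> = kBT / k
print(f"time average of x^2     : {time_avg:.3f}")
print(f"ensemble average of x^2 : {ensemble_avg:.3f}")   # both ~0.5
```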
2.14.1 Microcanonical Ensemble

In an isolated system with fixed N, V, the system evolves at constant energy E. We can apply the fundamental postulate of equal a priori probabilities, so all the states α that evolve at constant energy E will have the same probability. To satisfy this postulate, we define the microcanonical ensemble by assuming an equilibrium distribution

$$f_{mic}(H) = \frac{w_{mic}\!\left(r^N, p^N\right)}{Z_{mic}(N, V, E)} , \qquad (2.137)$$

where the statistical weight w_mic(H) is defined as

$$w_{mic}(N, V, E) = \delta\!\left(E - H(\alpha)\right) , \qquad (2.138)$$

in order to satisfy the postulate of equal probabilities. The partition function Z_mic is introduced to normalize the distribution function, and it is given by

$$Z_{mic}(N, V, E) = \sum_{\alpha} \delta\!\left(E - H(\alpha)\right) . \qquad (2.139)$$

The partition function counts the number of microstates at constant N, V, E. The most important quantity that allows us to link the microcanonical ensemble to thermodynamics is the entropy. Boltzmann was the first to make a logarithmic connection between the entropy and the probability in his kinetic theory of gases, coming up with the famous formula S = k_B ln(W), where W is the probability of the macrostate corresponding to the entropy S. W can be calculated by counting all the possible microstates of the system that correspond to the macrostate. Entropy can be introduced in the ensemble theory in a form equivalent to the Boltzmann equation:

$$S(E, V, N) = k_B \ln Z_{mic}(N, V, E) . \qquad (2.140)$$

We will see that for all the ensembles, the logarithm of the partition function is associated with a thermodynamic potential.
2.14.2 Canonical Ensemble

In the canonical ensemble, the system is at constant N and V and in contact with a thermal bath that ensures a constant temperature. In the canonical ensemble, the statistical weight is

$$w_{can}\!\left(r^N, p^N\right) = \frac{1}{N!\, h^{3N}} \exp\!\left[-\beta H\!\left(p^N, r^N\right)\right] , \qquad (2.141)$$

and the distribution function is given by

$$f_{can}\!\left(r^N, p^N\right) = \frac{w_{can}\!\left(r^N, p^N\right)}{Q_N(V, T)} , \qquad (2.142)$$

where Q_N(V, T) is the canonical partition function

$$Q_N(V, T) = \frac{1}{N!\, h^{3N}} \int dr^N\, dp^N \exp\!\left[-\beta H\!\left(p^N, r^N\right)\right] . \qquad (2.143)$$

It counts the number of microstates at constant N, V and T. The thermodynamic potential associated with the canonical ensemble can be obtained from ln Z_mic with a Legendre transform. In Eq. (2.140), we perform a Legendre transform of the entropy S(E, V, N) by substituting E with T; we recall that

$$\left(\frac{\partial S}{\partial E}\right)_{V,N} = \frac{1}{T} , \qquad (2.144)$$
2.14 Ensembles in Statistical Mechanics
53
so we have kB ln QN (T , V ) = S −
1 E, T
(2.145)
and we find that the thermodynamic potential associated to this ensemble is the Helmholtz free energy A = E − T S βA(T , V , N) = − ln QN (V , T ) ,
(2.146)
with β = 1/kB T . In a classical system with a Hamiltonian N pi2 + U (r 1 , . . . , r N ) H = 2m
(2.147)
i=1
we can exactly integrate on the momenta Eq. (2.143), and we have QN (V , T ) =
1 ZN (V , T ) , N! Λ3N
(2.148)
where Λ is the thermal De Broglie wavelength h2 , 2π mkB T
Λ=
(2.149)
and we defined the configurational integral ZN (V , T )
ZN (V , T ) =
dr N e−βU (r 1 ,...,r N ) .
(2.150)
We note that for an ideal gas ZN (V , T )=V N and then from Eq. (2.148) Qid N (V , T ) =
VN . N! Λ3N
(2.151)
The partition function (2.148) can be also written as exc QN (V , T ) = Qid N (V , T )QN (V , T ) ,
(2.152)
where the excess partition function is defined as Qexc N (V , T ) =
ZN (V , T ) . VN
(2.153)
In this way the free energy A can be divided into an ideal contribution and an excess contribution:
\[ A = A^{\mathrm{id}} + A^{\mathrm{exc}} . \tag{2.154} \]
The ideal term is
\[ \beta A^{\mathrm{id}} = - \ln \frac{V^N}{N!\, \Lambda^{3N}} \approx N \left[ \ln \rho + 3 \ln \Lambda - 1 \right] , \tag{2.155} \]
with ρ = N/V the density, while the excess part is
\[ \beta A^{\mathrm{exc}} = - \ln \frac{Z_N(V, T)}{V^N} , \tag{2.156} \]
and it is clear that Eq. (2.156) contains the contribution of the particle interactions. The distribution function (2.142) can be factorized as
\[ \rho_{\mathrm{can}}\left(\mathbf{r}^N, \mathbf{p}^N\right) = \rho^{\mathrm{MB}}\left(\mathbf{p}^N\right) \rho_N\left(\mathbf{r}^N\right) , \tag{2.157} \]
where the Maxwell-Boltzmann term is
\[ \rho^{\mathrm{MB}}\left(\mathbf{p}^N\right) = \left( \frac{1}{2\pi m k_B T} \right)^{3N/2} \exp\left[ -\beta \sum_i \frac{p_i^2}{2m} \right] \tag{2.158} \]
(written for three dimensions), while the configurational distribution is
\[ \rho_N\left(\mathbf{r}^N\right) = \frac{\exp\left[ -\beta U\left(\mathbf{r}_1, \ldots, \mathbf{r}_N\right) \right]}{Z_N(V, T)} . \tag{2.159} \]
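As a quick numerical check of the Stirling approximation used in Eq. (2.155), the following minimal Python sketch (an illustration, not part of the original text) compares the exact ideal-gas free energy \(-\ln\left[V^N/(N!\Lambda^{3N})\right]\), evaluated with the exact \(\ln N!\), to the approximate form \(N[\ln\rho + 3\ln\Lambda - 1]\); the numerical values of N, V and Λ are arbitrary choices.

```python
import math

# Arbitrary illustrative values (reduced units): N particles in a volume V,
# with thermal de Broglie wavelength Lambda.
N, V, Lam = 1000, 5000.0, 0.3
rho = N / V

# Exact ideal free energy: beta*A_id = -ln[ V^N / (N! Lambda^{3N}) ]
beta_A_exact = -(N * math.log(V) - math.lgamma(N + 1) - 3 * N * math.log(Lam))
# Stirling form of Eq. (2.155): N [ ln(rho) + 3 ln(Lambda) - 1 ]
beta_A_stirling = N * (math.log(rho) + 3 * math.log(Lam) - 1)

print(beta_A_exact, beta_A_stirling)  # the two values differ only by O(ln N)
```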
2.14.3 Grand Canonical Ensemble

We consider the system in contact with a reservoir that keeps T and μ fixed, with exchange of particles. The distribution function of the grand canonical ensemble is
\[ \rho_{GC}\left(\mathbf{r}^N, \mathbf{p}^N ; N\right) = \frac{\exp\left[\beta\mu N\right] \exp\left[ -\beta H\left(\mathbf{p}^N, \mathbf{r}^N\right) \right]}{Z_{GC}(\mu, V, T)} . \tag{2.160} \]
The partition function is
\[ Z_{GC}(\mu, V, T) = \sum_{N=0}^{\infty} e^{\beta\mu N} Q_N(V, T) , \tag{2.161} \]
with \(Q_0 = 1\). To find the associated thermodynamic potential, we can proceed with a Legendre transform of the microcanonical partition function. Starting from the entropy S(E, V, N), we substitute E with T and N with μ, and recalling Eq. (2.15), we have
\[ k_B \ln Z_{GC}(\mu, V, T) = S + \frac{\mu N}{T} - \frac{E}{T} , \tag{2.162} \]
then
\[ k_B T \ln Z_{GC}(\mu, V, T) = TS + \mu N - E , \tag{2.163} \]
and finally the thermodynamic potential is the grand canonical potential
\[ \beta\Omega = - \ln Z_{GC}(\mu, V, T) = \beta A - \beta\mu N = -\beta p V . \tag{2.164} \]
For a classical system with Hamiltonian (2.147), the partition function can be written as
\[ Z_{GC}(\mu, V, T) = \sum_{N=0}^{\infty} \frac{z^N}{N!} Z_N(V, T) , \tag{2.165} \]
where we have defined the activity as
\[ z = \frac{e^{\beta\mu}}{\Lambda^3} , \tag{2.166} \]
and \(Z_N(V, T)\) is the configurational integral defined above (2.150). We can introduce the configurational distribution function
\[ \rho_{GC}\left(\mathbf{r}^N\right) = \frac{1}{Z_{GC}} \frac{z^N}{N!} \exp\left[ -\beta U\left(\mathbf{r}^N\right) \right] . \tag{2.167} \]
The average value of an observable \(X\left(\mathbf{r}^N\right)\) that depends only on the coordinates is then given by
\[ \langle X \rangle = \frac{1}{Z_{GC}} \sum_{N=0}^{\infty} \frac{z^N}{N!} \int d\mathbf{r}^N X\left(\mathbf{r}^N\right) \exp\left[ -\beta U\left(\mathbf{r}^N\right) \right] . \tag{2.168} \]
Since the number of particles fluctuates, it is important to calculate the average value ⟨N⟩. N does not depend on the coordinates \(\mathbf{r}^N\), and from (2.168) we have
\[ \langle N \rangle = \frac{1}{Z_{GC}} \sum_{N=0}^{\infty} \frac{z^N}{N!} N Z_N(V, T) , \tag{2.169} \]
from which
\[ \langle N \rangle = \frac{z}{Z_{GC}} \sum_{N=0}^{\infty} \frac{z^{N-1}}{N!} N Z_N(V, T) , \]
and then
\[ \langle N \rangle = z \frac{\partial \ln Z_{GC}}{\partial z} . \tag{2.170} \]
Now if we consider Eq. (2.164),
\[ -\beta p V = - \ln Z_{GC} , \tag{2.171} \]
by combining it with Eq. (2.170) we can eliminate z and obtain the equation of state that connects pressure, volume, temperature and number of particles. In the simple case of an ideal gas, we have
\[ Z_{GC}^{\mathrm{id}}(\mu, V, T) = \sum_{N=0}^{\infty} \frac{z^N V^N}{N!} = \exp\left( z V \right) , \tag{2.172} \]
then
\[ -\beta p V = - \ln Z_{GC}^{\mathrm{id}} = -zV . \tag{2.173} \]
From Eq. (2.170)
\[ \langle N \rangle = z \frac{\partial (zV)}{\partial z} = zV , \tag{2.174} \]
and combining Eq. (2.173) with Eq. (2.174), we obtain the equation of state of the ideal gas.
2.14.4 Isobaric-Isothermal Ensemble

To make contact with experiments, it is sometimes useful to keep a system at constant pressure and temperature. In the isobaric-isothermal ensemble, the system can change its volume by exchanging work with a reservoir at the same pressure and temperature. In the NPT ensemble, the distribution function includes the volume as an additional variable:
\[ \rho_{NPT}\left(\mathbf{r}^N, \mathbf{p}^N, V\right) = \frac{1}{Z_{NPT}} \frac{1}{N!\, h^{3N}} \exp\left[ -\beta\left( H\left(\mathbf{p}^N, \mathbf{r}^N\right) + pV \right) \right] . \tag{2.175} \]
The partition function is
\[ Z_{NPT} = \int dV\, e^{-\beta p V} Q_N(V, T) . \tag{2.176} \]
With the Legendre transform in which E is substituted with T and V with −p, it is found that the associated thermodynamic potential is the Gibbs free energy:
\[ \beta G = - \ln Z_{NPT} . \tag{2.177} \]
In the NPT ensemble, a quantity can be averaged as
\[ \langle X \rangle = \frac{1}{Z_{NPT}} \int dV\, e^{-\beta p V} \int d\mathbf{r}^N X\left(\mathbf{r}^N\right) e^{-\beta U\left(\mathbf{r}^N\right)} , \tag{2.178} \]
where the partition function is given by
\[ Z_{NPT} = \int dV\, e^{-\beta p V} \int d\mathbf{r}^N e^{-\beta U\left(\mathbf{r}^N\right)} . \tag{2.179} \]
Since the volume is not constant, it is convenient to rescale the coordinates [10]
\[ \mathbf{r}_i = (x_i, y_i, z_i) \;\rightarrow\; \mathbf{s}_i = (x_i/L,\; y_i/L,\; z_i/L) , \tag{2.180} \]
so that \(d\mathbf{r}^N = V^N d\mathbf{s}^N\) and the average (2.178) becomes
\[ \langle X \rangle = \frac{1}{Z_{NPT}} \int dV\, e^{-\beta p V} V^N \int d\mathbf{s}^N X\left(\mathbf{s}^N\right) e^{-\beta U\left(\mathbf{s}^N, V\right)} . \tag{2.181} \]
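To illustrate how the rescaled form (2.181) is used in practice, here is a minimal sketch (not from the original text) for the ideal-gas limit, where U = 0 and the integral over \(\mathbf{s}^N\) drops out: the volume is then distributed with weight \(V^N e^{-\beta p V}\), so that \(\langle V \rangle = (N+1)/(\beta p)\). The script checks this by direct numerical quadrature; the parameter values are arbitrary.

```python
import numpy as np

# Arbitrary illustrative parameters (reduced units)
N, beta, p = 50, 1.0, 2.0

# For U = 0 the weight of a volume V in Eq. (2.181) is V^N exp(-beta p V).
V = np.linspace(1e-6, 200.0, 400001)
# Work with the logarithm of the weight to avoid overflow of V**N.
logw = N * np.log(V) - beta * p * V
w = np.exp(logw - logw.max())

V_avg = np.trapz(V * w, V) / np.trapz(w, V)
print(V_avg, (N + 1) / (beta * p))  # both close to 25.5
```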
2.15 Fluctuations and Thermodynamics

As stated above, there are many microstates associated with the macroscopic state of a system. This implies that, even when the system is in equilibrium, microscopic quantities fluctuate, because the particles can occupy different portions of the phase space at the given equilibrium conditions. Two important relations between microscopic fluctuations and thermodynamics can easily be found [2].
The first one is related to the fluctuations of the energy in the canonical ensemble. The specific heat at constant volume is
\[ c_v = \left( \frac{\partial E}{\partial T} \right)_V = - \frac{1}{k_B T^2} \left( \frac{\partial E}{\partial \beta} \right)_V . \tag{2.182} \]
The average energy \(E = \langle H \rangle\) is given by
\[ \langle H \rangle = \frac{1}{Q_N} \int d\mathbf{r}^N d\mathbf{p}^N H\left(\mathbf{r}^N, \mathbf{p}^N\right) w_{\mathrm{can}}\left(\mathbf{r}^N, \mathbf{p}^N\right) , \tag{2.183} \]
which can also be written as
\[ \langle H \rangle = - \frac{1}{Q_N} \int d\mathbf{r}^N d\mathbf{p}^N \frac{\partial w_{\mathrm{can}}}{\partial \beta} . \tag{2.184} \]
On the other hand, it is easy to show that
\[ \frac{\partial Q_N}{\partial \beta} = - \langle H \rangle Q_N . \tag{2.185} \]
Now from Eq. (2.184) we have
\[ \frac{\partial \langle H \rangle}{\partial \beta} = \frac{1}{Q_N^2} \frac{\partial Q_N}{\partial \beta} \int d\mathbf{r}^N d\mathbf{p}^N \frac{\partial w_{\mathrm{can}}}{\partial \beta} - \frac{1}{Q_N} \int d\mathbf{r}^N d\mathbf{p}^N \frac{\partial^2 w_{\mathrm{can}}}{\partial \beta^2} \tag{2.186} \]
and the intermediate equation
\[ \frac{\partial \langle H \rangle}{\partial \beta} = \frac{1}{Q_N^2} \left[ Q_N \langle H \rangle\, Q_N \langle H \rangle - Q_N^2 \langle H^2 \rangle \right] . \tag{2.187} \]
Finally we get
\[ - \frac{\partial \langle H \rangle}{\partial \beta} = \langle H^2 \rangle - \langle H \rangle^2 . \tag{2.188} \]
Equation (2.188) gives the important result
\[ c_v = \frac{1}{k_B T^2} \left[ \langle H^2 \rangle - \langle H \rangle^2 \right] ; \tag{2.189} \]
therefore the macroscopic thermodynamic specific heat \(c_v\) is connected to the fluctuations of the energy. Another important relation of this type can be found in the grand canonical ensemble, and it has to do with the fluctuations of the particle number.
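The fluctuation relation (2.189) is also how \(c_v\) is commonly estimated from the energies sampled in a canonical simulation. The following short Python sketch (an illustration with synthetic Gaussian "energy samples", not data from the text) shows the estimator; in a real NVT run the array E would contain the instantaneous total energies.

```python
import numpy as np

kB = 1.0          # Boltzmann constant in reduced units
T = 2.0           # temperature of the (hypothetical) NVT run
rng = np.random.default_rng(0)

# Synthetic stand-in for instantaneous energies sampled along an NVT trajectory.
E = rng.normal(loc=-500.0, scale=12.0, size=200_000)

# Eq. (2.189): c_v = (<H^2> - <H>^2) / (kB T^2)
cv = E.var() / (kB * T**2)
print(cv)   # here simply scale^2 / (kB T^2) = 144/4 = 36 by construction
```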
We start from the calculation of \(\langle \Delta N^2 \rangle\):
\[ \langle \Delta N^2 \rangle = \langle N^2 \rangle - \langle N \rangle^2 . \tag{2.190} \]
From Eq. (2.169) we see that
\[ \langle \Delta N^2 \rangle = \frac{1}{Z_{GC}} \sum_{N=0}^{\infty} \frac{z^N}{N!} N^2 Z_N(V, T) - \left[ \frac{1}{Z_{GC}} \sum_{N=0}^{\infty} \frac{z^N}{N!} N Z_N(V, T) \right]^2 . \tag{2.191} \]
It is easy to derive
\[ \langle \Delta N^2 \rangle = z \left( \frac{\partial \langle N \rangle}{\partial z} \right)_{T,V} . \tag{2.192} \]
We recall now the definition (2.166) of the activity z, and since
\[ \frac{\partial z}{\partial(\beta\mu)} = z , \]
Eq. (2.192) becomes
\[ \langle \Delta N^2 \rangle = z \left( \frac{\partial \langle N \rangle}{\partial z} \right)_{T,V} = \frac{\partial z}{\partial(\beta\mu)} \left( \frac{\partial \langle N \rangle}{\partial z} \right)_{T,V} = k_B T \left( \frac{\partial \langle N \rangle}{\partial \mu} \right)_{T,V} . \tag{2.193} \]
The thermodynamic derivative that appears on the right-hand side of Eq. (2.193) can be related to the isothermal compressibility:
\[ \left( \frac{\partial \langle N \rangle}{\partial \mu} \right)_{T,V} = V \left( \frac{\partial \rho}{\partial \mu} \right)_T = V \left( \frac{\partial \rho}{\partial p} \right)_T \left( \frac{\partial p}{\partial \mu} \right)_T = V \rho \left( \frac{\partial \rho}{\partial p} \right)_T , \]
where we used
\[ \left( \frac{\partial p}{\partial \mu} \right)_T = \rho . \tag{2.194} \]
We have already seen that the isothermal compressibility is defined as
\[ K_T = - \frac{1}{V} \left( \frac{\partial V}{\partial p} \right)_T = \frac{1}{\rho} \left( \frac{\partial \rho}{\partial p} \right)_T , \tag{2.195} \]
so finally we obtain the relation
\[ \langle \Delta N^2 \rangle = \frac{\langle N \rangle^2}{V} k_B T K_T . \tag{2.196} \]
This result is very relevant. We have already seen that \(K_T\) is strictly related to the stability of the system. From Eq. (2.196) it is clear that the stability of a system is connected to the fluctuations of the number of particles, which are equivalent to the density fluctuations. We have
\[ \underbrace{\frac{\langle \Delta N^2 \rangle}{\langle N \rangle^2}}_{\text{statistics}} = \underbrace{\frac{k_B T K_T}{V}}_{\text{thermodynamics}} . \tag{2.197} \]
In approaching the critical point, we have already seen that the isothermal compressibility diverges. We understand now that this is due to the strong increase of the density fluctuations. A consequence is that these fluctuations appear on a macroscopic scale in the critical opalescence observed in experiments on liquids at the critical point [4]. We will discuss this point further in Sect. 3.13.
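In an open (grand canonical) simulation, Eq. (2.196) is used in the opposite direction: one accumulates the particle-number fluctuations and extracts \(K_T\). A minimal sketch of this estimator is given below; synthetic Poisson-distributed particle numbers are used as a stand-in, which by construction reproduce the ideal-gas value \(K_T = 1/\rho k_B T\). The numbers are illustrative only.

```python
import numpy as np

kB, T, V = 1.0, 1.5, 1000.0
rng = np.random.default_rng(1)

# Stand-in for particle numbers sampled in an open (grand canonical) system.
# For an ideal gas N is Poisson distributed, so <dN^2> = <N>.
N_samples = rng.poisson(lam=800.0, size=100_000)

N_avg = N_samples.mean()
dN2 = N_samples.var()
rho = N_avg / V

# Invert Eq. (2.196): K_T = V <dN^2> / (<N>^2 kB T)
KT = V * dN2 / (N_avg**2 * kB * T)
print(KT, 1.0 / (rho * kB * T))  # for the ideal gas the two coincide
```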
References

1. Callen, H.B.: Thermodynamics and Introduction to Thermostatistics. Wiley, Hoboken (1985)
2. Landau, L.D., Lifshitz, E.M.: Statistical Physics, 3rd edn. Elsevier, London (2013)
3. Debenedetti, P.G.: Metastable Liquids. Princeton University Press, Princeton (1996)
4. Stanley, H.E.: Phase Transition and Critical Phenomena. Oxford University Press, New York (1971)
5. Van der Waals, J.D.: De Continuiteit van den Gas-en Vloeistoftoestand. A. W. Sijthoff, Amsterdam (1873)
6. Domb, C.: A Historical Introduction to the Modern Theory of Critical Phenomena. Taylor and Francis, London (1996)
7. Barker, J.A., Henderson, D.: Rev. Mod. Phys. 48, 587 (1976)
8. Guggenheim, E.A.: J. Chem. Phys. 13, 253 (1945)
9. Di Castro, C., Raimondi, R.: Statistical Mechanics and Applications in Condensed Matter. Cambridge University Press, Cambridge (2015)
10. Allen, M.P., Tildesley, D.J.: Computer Simulation of Liquids. Oxford University Press, Oxford (2017)
Chapter 3
Microscopic Forces and Structure of Liquids
In this chapter we consider the structure of fluid matter. All the structural and thermodynamic properties of fluids are determined by the microscopic forces between the atoms. We will introduce the effective interaction potentials between the atoms, which are assumed to move on a Born-Oppenheimer surface, and we will show examples of model potentials. Then we will define, in the appropriate ensembles, the distribution functions that describe the arrangement of the atoms in the disordered configurations. These distribution functions can be determined experimentally, mainly by X-ray diffraction or neutron scattering, which we introduce in this chapter. We will show some examples of experimental results, and we will discuss how the structure of liquids is determined by the short-range order of the atoms. As an example of molecular liquids, we will discuss in particular the important case of the structure of water.
3.1 Force Field for Atoms in Liquids

We are interested now in the microscopic structure of liquid systems. The structural and thermodynamic properties are determined by the atomic forces, which in principle, as in any other condensed matter system, derive from the complex interaction of nuclei and electrons. As usual in the study of atomic aggregates, the Born-Oppenheimer (BO) approximation must be applied in order to derive an effective atomic potential. In the BO approximation, the adiabatic decoupling of the slow dynamics of the nuclei from the fast dynamics of the electrons is assumed [1]. The electronic problem is solved with fixed positions of the nuclei. Then, to model the microscopic interaction between the atoms, it is assumed that the electrons are in their ground state, with the ions moving on the BO surface without perturbing the electronic state. These atoms can be treated as classical particles, as we have already discussed.
The BO potential, however, would be too difficult to use for practical purposes, so it is usually approximated by an empirical potential. In principle the many-body potential of a system of N atoms can be expanded in n-body terms:
\[ U^{BO}\left(\mathbf{r}_1, \cdots, \mathbf{r}_N\right) = \sum_i \sum_{j>i} u^{(2)}\left(\mathbf{r}_i, \mathbf{r}_j\right) + \sum_i \sum_j \sum_k u^{(3)}\left(\mathbf{r}_i, \mathbf{r}_j, \mathbf{r}_k\right) + \ldots \tag{3.1} \]
The effective potential is assumed to have a simple enough form, with a number of empirical parameters, to capture the main features of the real microscopic interaction between the atoms. In a homogeneous and isotropic system like a fluid, an effective two-body potential is frequently assumed:
\[ U\left(\mathbf{r}_1, \cdots, \mathbf{r}_N\right) = \sum_i \sum_{j>i} u\left(r_{ij}\right) , \tag{3.2} \]
with \(r_{ij} = |\mathbf{r}_i - \mathbf{r}_j|\). The comparison with experiments provides evidence of whether the model potential is sufficiently accurate. In the case of closed-shell systems, like noble gases, the superposition of the electronic shells is responsible for the short-range repulsion between the atoms. This quantum effect can be approximately represented by a term
\[ \frac{A}{r^{12}} . \tag{3.3} \]
At larger distances the main effect is the distortion of the electronic configuration that an atom induces on another one. This can be treated as a dipole-dipole interaction calculated in quantum mechanical perturbation theory, the van der Waals term. The result is an attractive contribution
\[ - \frac{B}{r^{6}} . \tag{3.4} \]
By combining Eqs. (3.3) and (3.4), we obtain the well-known Lennard-Jones (LJ) potential [2], which can be written as
\[ u(r) = 4\epsilon \left[ \left( \frac{\sigma}{r} \right)^{12} - \left( \frac{\sigma}{r} \right)^{6} \right] , \tag{3.5} \]
where σ measures the repulsive diameter, while ε is the depth of the potential minimum. In spite of the simplicity of this potential, with appropriate σ and ε parameters the empirical properties of noble gases in fluid phases can be reproduced remarkably well. Figure 3.1 shows the form of the potential, and Table 3.1 reports typical values used in the literature.

Fig. 3.1 On the left panel: Lennard-Jones potential. On the right: the hard sphere potential

Table 3.1 Examples of parameters of the Lennard-Jones potential. Carbon dioxide is sometimes represented with a single Lennard-Jones centre

        ε/kB (K)    σ (nm)
Ne        35.6      0.275
Ar       120.0      0.341
Kr       164.0      0.383
Xe       229.0      0.406
CO2      189.0      0.449

From Fig. 3.1 there is evidence of a sharp repulsion at short distances. This implies a zone of excluded volume, which was taken into account by van der Waals in his equation. He assumed a hard sphere model for the repulsion. At high temperatures, when a regime of frequent collisions with kinetic effects prevailing over the attractive forces is expected, the potential can be approximated by the hard sphere term
\[ u(r) = \begin{cases} \infty & r < \sigma \\ 0 & r > \sigma \end{cases} \tag{3.6} \]
We will consider the hard sphere liquid in the next chapter. In the case of closed-shell fluids where charged ions are present, the Coulomb interaction must be taken into account. For ionic liquids (molten salts) like NaCl, for example, a widely used potential is the Tosi-Fumi potential [3]
\[ u\left(r_{ij}\right) = A_{ij}\, e^{-r_{ij}/a_{ij}} + \frac{q_i q_j}{r_{ij}} , \tag{3.7} \]
where \(r_{ij}\) are the distances between the ions of charges \(q_i\) and \(q_j\), and the \(A_{ij}\) and \(a_{ij}\) are parameters. The term containing the exponential comes from approximations of the electronic contribution to the repulsion, and it is called the Born-Mayer term. In some calculations a van der Waals attractive term is added to this potential. The examples shown above refer to closed-shell systems. It is more difficult to establish a good model potential for other systems. We can mention liquid metals, where free electrons play an important role in the interaction, and systems like silicon, where the bond directionality makes it difficult to reduce the effective interaction to a two-body potential. Fluids composed of aggregates of atoms, like molecular liquids or colloids, require more elaborate models. In this respect great progress is due to simulation methods; by means of computer simulation it is possible to test and compare directly the predictions of different models. We will consider computer simulation methods in Chap. 5.
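As a small illustration of Eq. (3.5), the sketch below (not part of the original text) evaluates the Lennard-Jones potential with the argon parameters of Table 3.1 and verifies two basic features: u(σ) = 0 and a minimum of depth −ε at \(r = 2^{1/6}\sigma\).

```python
import numpy as np

kB = 1.380649e-23          # J/K
eps = 120.0 * kB           # argon epsilon from Table 3.1, in joules
sigma = 0.341e-9           # argon sigma from Table 3.1, in metres

def u_lj(r, eps=eps, sigma=sigma):
    """Lennard-Jones potential, Eq. (3.5)."""
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6**2 - sr6)

r_min = 2.0 ** (1.0 / 6.0) * sigma
print(u_lj(sigma))          # ~0: the potential vanishes at r = sigma
print(u_lj(r_min) / eps)    # ~-1: the depth of the minimum is -epsilon
```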
3.2 Local Structure of a Liquid

A system in a fluid phase, either liquid or gas, is homogeneous and isotropic. In an ideal gas the N particles are distributed randomly in the volume V, and for a given atom at the origin the quantity
\[ dN_{\mathrm{id}} = 4\pi\rho r^2 dr \tag{3.8} \]
is the number of atoms in a spherical shell at distance r; see Fig. 3.2. In the case of interacting atoms, the distribution is modified and the number of particles in the spherical shells is given by
\[ dN = 4\pi\rho\, g(r)\, r^2 dr , \tag{3.9} \]

Fig. 3.2 Representation of a random distribution of atoms around one located in the origin

where the radial distribution function (RDF) g(r) has been introduced. This function describes how the distribution of the particles locally deviates from the uniform distribution of the ideal gas, where g(r) = 1. The RDF is also called the pair distribution function; it is related to the probability of finding an atom at distance r from another one at the origin. For large distances r the particle interaction decays to zero, and it is expected that g(r) → 1. In the following we will show that the g(r) can be obtained from experiments. First, however, we derive this function from the statistical mechanics ensembles.
3.3 Distribution Functions in the Canonical Ensemble

The RDF can be introduced starting from the statistical distribution function of N classical particles in the canonical ensemble; see Eq. (2.159):
\[ \rho_N\left(\mathbf{r}^N\right) = \frac{\exp\left[ -\beta U\left(\mathbf{r}_1, \ldots, \mathbf{r}_N\right) \right]}{Z_N(V, T)} . \tag{3.10} \]
We now define the distribution function of n particles out of the N as
\[ \rho_N^{(n)}\left(\mathbf{r}_1, \ldots, \mathbf{r}_n\right) = \frac{N!}{(N-n)!} \int d\mathbf{r}_{n+1} \ldots d\mathbf{r}_N \frac{\exp\left[ -\beta U\left(\mathbf{r}_1, \ldots, \mathbf{r}_N\right) \right]}{Z_N(V, T)} ; \tag{3.11} \]
this gives us the probability of finding n particles in the positions \(\mathbf{r}_1, \ldots, \mathbf{r}_n\) independently of the positions of the other N − n. The factorial term takes into account that we can select any n particles among the N. With this definition we have
\[ \int d\mathbf{r}_1 \ldots d\mathbf{r}_n\, \rho_N^{(n)}\left(\mathbf{r}_1, \ldots, \mathbf{r}_n\right) = \frac{N!}{(N-n)!} . \tag{3.12} \]
In particular for n = 1 we get the distribution function of a single particle \(\rho_N^{(1)}(\mathbf{r})\), and from (3.12)
\[ \int d\mathbf{r}\, \rho_N^{(1)}(\mathbf{r}) = N . \tag{3.13} \]
For a homogeneous and isotropic system \(\rho_N^{(1)}(\mathbf{r})\) does not depend on \(\mathbf{r}\); therefore
\[ \rho_N^{(1)} = \frac{N}{V} = \rho , \tag{3.14} \]
and so it coincides with the system density. For n = 2, \(\rho_N^{(2)}\left(\mathbf{r}_1, \mathbf{r}_2\right)\) is the two-particle density. For large distance between the two particles
\[ \lim_{|\mathbf{r}_1 - \mathbf{r}_2| \to \infty} \rho_N^{(2)}\left(\mathbf{r}_1, \mathbf{r}_2\right) = \rho_N^{(1)}\left(\mathbf{r}_1\right) \rho_N^{(1)}\left(\mathbf{r}_2\right) + O(1/N) . \tag{3.15} \]
For a homogeneous and isotropic system
\[ \rho_N^{(2)}\left(\mathbf{r}_1, \mathbf{r}_2\right) = \rho_N^{(2)}\left(|\mathbf{r}_1 - \mathbf{r}_2|\right) , \tag{3.16} \]
and we can define a normalized distribution function
\[ g(r) = \frac{\rho_N^{(2)}(r)}{\rho^2} . \tag{3.17} \]
In the limit of r → ∞
\[ \lim_{r \to \infty} g(r) = 1 + O(1/N) . \tag{3.18} \]
The function g(r) defined above coincides with the RDF introduced before (3.9). More generally we can define the normalized distribution functions of n particles \(g^{(n)}\) as
\[ g_N^{(n)}\left(\mathbf{r}_1, \ldots, \mathbf{r}_n\right) = \frac{\rho_N^{(n)}\left(\mathbf{r}_1, \ldots, \mathbf{r}_n\right)}{\prod_{k=1}^{n} \rho_N^{(1)}\left(\mathbf{r}_k\right)} . \tag{3.19} \]
3.4 Relation of the RDF with Thermodynamics

3.4.1 Energy

With the assumption of an effective potential given by a sum of two-body terms (3.2), in the canonical ensemble the average potential energy is given by
\[ \langle U \rangle = \frac{1}{Z_N(V, T)} \int d\mathbf{r}_1 \cdots d\mathbf{r}_N\, e^{-\beta U\left(\mathbf{r}^N\right)}\, \frac{1}{2} \sum_i \sum_{j \neq i} u\left(r_{ij}\right) . \tag{3.20} \]
By exchanging the integrals with the sums, we have N(N − 1)/2 equal terms, so Eq. (3.20) can be rewritten as
\[ \langle U \rangle = \frac{1}{2} N(N-1) \int d\mathbf{r}_1 d\mathbf{r}_2\, u(r_{12}) \int d\mathbf{r}_3 \cdots d\mathbf{r}_N \frac{e^{-\beta U\left(\mathbf{r}^N\right)}}{Z_N} . \tag{3.21} \]
By introducing the two-body distribution function, Eq. (3.21) becomes
\[ \langle U \rangle = \frac{1}{2} \int d\mathbf{r}_1 d\mathbf{r}_2\, u(r_{12})\, \rho_N^{(2)}\left(\mathbf{r}_1, \mathbf{r}_2\right) . \tag{3.22} \]
For a homogeneous and isotropic system
\[ \frac{\langle U \rangle}{N} = 2\pi\rho \int_0^{\infty} dr\, r^2 u(r) g(r) . \tag{3.23} \]
3.4.2 Pressure from the Virial

To calculate the pressure, we can start from the virial of the forces [4]. It is defined as
\[ W\left(\mathbf{r}^N\right) = \sum_{i=1}^{N} \mathbf{r}_i \cdot \mathbf{F}_i . \tag{3.24} \]
By the ergodic theorem, the temporal average is equal to the ensemble average:
\[ \langle W \rangle = \lim_{t \to \infty} \frac{1}{t} \int_0^t d\tau \sum_{i=1}^{N} \mathbf{r}_i(\tau) \cdot \mathbf{F}_i(\tau) = \lim_{t \to \infty} \frac{1}{t} \int_0^t d\tau \sum_{i=1}^{N} \mathbf{r}_i(\tau) \cdot m \ddot{\mathbf{r}}_i(\tau) = - \lim_{t \to \infty} \frac{1}{t}\, m \int_0^t d\tau \sum_{i=1}^{N} |\dot{\mathbf{r}}_i(\tau)|^2 = - \lim_{t \to \infty} \frac{1}{t} \int_0^t d\tau\, 2K(\tau) = -2\langle K \rangle , \tag{3.25} \]
where the third equality follows from an integration by parts in time, the boundary term vanishing for a bounded system, and K is the kinetic energy, which is related to the temperature through the equipartition of energy,
\[ \langle K \rangle = \frac{3}{2} N k_B T , \tag{3.26} \]
so that
\[ \langle W \rangle = -3 N k_B T . \tag{3.27} \]
Now it is possible to divide the virial into the two contributions from the external forces and from the internal forces:
\[ W = W^{\mathrm{int}} + W^{\mathrm{ext}} . \tag{3.28} \]
The virial of the internal forces is given by
\[ W^{\mathrm{int}} = - \sum_{i=1}^{N} \mathbf{r}_i \cdot \nabla_i U\left(\mathbf{r}^N\right) . \tag{3.29} \]
If external forces do not act on the system, the term \(W^{\mathrm{ext}}\) is due to the pressure of the walls on the particles. It is easy to calculate [5]
\[ \langle W^{\mathrm{ext}} \rangle = -3pV . \tag{3.30} \]
By combining Eq. (3.27) with Eqs. (3.28) and (3.30), we get
\[ pV = N k_B T + \frac{1}{3} \langle W^{\mathrm{int}} \rangle . \tag{3.31} \]
For the internal virial, when the potential can be written as a sum of pair potentials, we have
\[ W^{\mathrm{int}} = - \sum_i \mathbf{r}_i \cdot \nabla_i \sum_{j \neq i} u(r_{ij}) = - \frac{1}{2} \sum_i \sum_{j \neq i} r_{ij} \frac{du(r_{ij})}{dr_{ij}} . \tag{3.32} \]
The average value \(\langle W^{\mathrm{int}} \rangle\) can be calculated in the canonical ensemble and rewritten in terms of the RDF for our homogeneous and isotropic system; therefore the formula for the pressure is
\[ \frac{p}{\rho k_B T} = 1 - \frac{2}{3} \pi \frac{\rho}{k_B T} \int_0^{\infty} dr\, r^3 g(r) \frac{du(r)}{dr} . \tag{3.33} \]
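Equations (3.23) and (3.33) are how the energy and the pressure are obtained in practice once g(r) is known on a grid, for instance from a simulation or from an integral equation closure. The following Python sketch is illustrative only: it uses the Lennard-Jones potential of Eq. (3.5) together with the crude low-density approximation g(r) ≈ exp[−βu(r)] of Eq. (3.59), just to have a smooth test function; in a real application the array g would be replaced by tabulated data.

```python
import numpy as np

# Reduced LJ units: epsilon = sigma = kB = 1
T, rho = 2.0, 0.05
beta = 1.0 / T

r = np.linspace(0.5, 12.0, 20000)
u = 4.0 * (r**-12 - r**-6)                    # Eq. (3.5)
du_dr = 4.0 * (-12.0 * r**-13 + 6.0 * r**-7)

# Illustrative stand-in for g(r): the dilute-gas form of Eq. (3.59).
g = np.exp(-beta * u)

# Excess internal energy per particle, Eq. (3.23)
u_exc = 2.0 * np.pi * rho * np.trapz(r**2 * u * g, r)
# Virial pressure, Eq. (3.33): p/(rho kB T) = 1 - (2 pi rho beta / 3) * integral
p_over_rhokT = 1.0 - (2.0 * np.pi * rho * beta / 3.0) * np.trapz(r**3 * g * du_dr, r)

print(u_exc, p_over_rhokT)
```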
3.5 Distribution Functions in the Grand Canonical Ensemble

In the grand canonical ensemble, the distribution function of n particles can be defined as
\[ \rho^{(n)}\left(\mathbf{r}_1, \ldots, \mathbf{r}_n\right) = \frac{1}{Z_{GC}} \sum_{N \geq n} \frac{z^N}{(N-n)!} \int d\mathbf{r}_{n+1} \ldots d\mathbf{r}_N \exp\left[ -\beta U\left(\mathbf{r}_1, \ldots, \mathbf{r}_N\right) \right] . \tag{3.34} \]
By integrating, the sum rule is derived:
\[ \int d\mathbf{r}_1 \ldots d\mathbf{r}_n\, \rho^{(n)}\left(\mathbf{r}_1, \ldots, \mathbf{r}_n\right) = \left\langle \frac{N!}{(N-n)!} \right\rangle . \tag{3.35} \]
The normalized distribution functions are now defined as
\[ g^{(n)}\left(\mathbf{r}_1, \ldots, \mathbf{r}_n\right) = \frac{\rho^{(n)}\left(\mathbf{r}_1, \ldots, \mathbf{r}_n\right)}{\prod_{k=1}^{n} \rho^{(1)}\left(\mathbf{r}_k\right)} . \tag{3.36} \]
In particular the two-body distribution function is
\[ \rho^{(2)}\left(\mathbf{r}_1, \mathbf{r}_2\right) = \frac{1}{Z_{GC}} \sum_{N \geq 2} \frac{z^N}{(N-2)!} \int d\mathbf{r}_3 \ldots d\mathbf{r}_N \exp\left[ -\beta U\left(\mathbf{r}_1, \ldots, \mathbf{r}_N\right) \right] . \tag{3.37} \]
The distribution functions can also be obtained through a different route. Let us define the density operator
\[ \hat{\rho}(\mathbf{r}) = \sum_i \delta\left(\mathbf{r} - \mathbf{r}_i\right) ; \tag{3.38} \]
it counts the number of particles at the point \(\mathbf{r}\). It is easy to see that
\[ \rho^{(1)}(\mathbf{r}) = \langle \hat{\rho}(\mathbf{r}) \rangle \tag{3.39} \]
and
\[ \rho^{(2)}\left(\mathbf{r}, \mathbf{r}'\right) = \left\langle \sum_i \sum_{j \neq i} \delta\left(\mathbf{r} - \mathbf{r}_i\right) \delta\left(\mathbf{r}' - \mathbf{r}_j\right) \right\rangle . \tag{3.40} \]
We can also define the static density correlation function. If
\[ \delta\hat{\rho}(\mathbf{r}) = \hat{\rho}(\mathbf{r}) - \langle \hat{\rho}(\mathbf{r}) \rangle , \tag{3.41} \]
the static density correlation function is defined as
\[ G^{(2)}\left(\mathbf{r}, \mathbf{r}'\right) = \left\langle \delta\hat{\rho}(\mathbf{r})\, \delta\hat{\rho}\left(\mathbf{r}'\right) \right\rangle = \rho^{(2)}\left(\mathbf{r}, \mathbf{r}'\right) - \rho^{(1)}(\mathbf{r})\, \rho^{(1)}\left(\mathbf{r}'\right) + \rho^{(1)}(\mathbf{r})\, \delta\left(\mathbf{r} - \mathbf{r}'\right) . \tag{3.42} \]
It is also useful to introduce the total correlation function \(h^{(2)}\left(\mathbf{r}, \mathbf{r}'\right)\):
\[ \rho^{(1)}(\mathbf{r})\, \rho^{(1)}\left(\mathbf{r}'\right) h^{(2)}\left(\mathbf{r}, \mathbf{r}'\right) = \rho^{(2)}\left(\mathbf{r}, \mathbf{r}'\right) - \rho^{(1)}(\mathbf{r})\, \rho^{(1)}\left(\mathbf{r}'\right) , \tag{3.43} \]
and \(G^{(2)}\) (3.42) can be written as
\[ G^{(2)}\left(\mathbf{r}, \mathbf{r}'\right) = \rho^{(1)}(\mathbf{r})\, \rho^{(1)}\left(\mathbf{r}'\right) h^{(2)}\left(\mathbf{r}, \mathbf{r}'\right) + \rho^{(1)}(\mathbf{r})\, \delta\left(\mathbf{r} - \mathbf{r}'\right) . \tag{3.44} \]
For a homogeneous and isotropic system
\[ \rho^{(1)}(\mathbf{r}) = \rho , \qquad \rho^{(2)}\left(\mathbf{r}, \mathbf{r}'\right) = \rho^{(2)}\left(|\mathbf{r} - \mathbf{r}'|\right) . \tag{3.45} \]
The RDF can be written as
\[ g(r) = \frac{\rho^{(2)}(r)}{\rho^2} . \tag{3.46} \]
The density correlation function (3.42) becomes
\[ G^{(2)}(r) = \rho^2 \left[ g(r) - 1 \right] + \rho\, \delta(\mathbf{r}) , \tag{3.47} \]
and the total correlation function
\[ h(r) = g(r) - 1 . \tag{3.48} \]
From Eq. (3.35) we can get an important sum rule for the g(r). We can start from
\[ \int d\mathbf{r}_1 d\mathbf{r}_2 \left[ \rho^{(2)}\left(\mathbf{r}_1, \mathbf{r}_2\right) - \rho^{(1)}\left(\mathbf{r}_1\right) \rho^{(1)}\left(\mathbf{r}_2\right) \right] = \langle N(N-1) \rangle - \langle N \rangle^2 , \tag{3.49} \]
and we get
\[ \rho^2 V \int d\mathbf{r} \left[ g(r) - 1 \right] = \langle \Delta N^2 \rangle - \langle N \rangle . \tag{3.50} \]
By recalling Eq. (2.196), we obtain the important relation between the integral of the g(r) and the isothermal compressibility \(K_T\):
\[ 1 + \rho \int d\mathbf{r}\, h(r) = \rho k_B T K_T . \tag{3.51} \]
On the right-hand side, \((\rho k_B T)^{-1}\) is the isothermal compressibility of the ideal gas. We will discuss further implications of Eq. (3.51) later.
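The sum rule (3.51) is often used as a consistency check on a measured or computed g(r): integrating h(r) = g(r) − 1 over all space should reproduce the compressibility known from the equation of state. A minimal sketch of this check is given below, again using the dilute-gas g(r) of Eq. (3.59) with the Lennard-Jones potential as a stand-in for real data; at this low density the result stays close to the ideal-gas value 1, the deviation being of first order in ρ.

```python
import numpy as np

# Reduced LJ units; low density so that g(r) ~ exp(-beta u(r)) is a sensible stand-in.
T, rho = 2.0, 0.02
beta = 1.0 / T

r = np.linspace(1e-3, 15.0, 30000)
u = 4.0 * (r**-12 - r**-6)
g = np.exp(-beta * u)          # Eq. (3.59), illustrative stand-in for a measured g(r)
h = g - 1.0                    # total correlation function, Eq. (3.48)

# Left-hand side of Eq. (3.51): 1 + rho * integral of h(r) over all space
lhs = 1.0 + rho * 4.0 * np.pi * np.trapz(r**2 * h, r)
print(lhs)   # equals rho*kB*T*K_T; close to 1 (ideal gas) at this low density
```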
3.6 Hierarchical Equations

We consider the definition of the RDF for a fluid in the grand canonical ensemble:
\[ \rho^2 g(r_{12}) = \frac{1}{Z_{GC}} \sum_{N=2}^{\infty} \frac{z^N}{(N-2)!} \int d\mathbf{r}_3 \ldots d\mathbf{r}_N \exp\left[ -\beta U\left(\mathbf{r}_1, \ldots, \mathbf{r}_N\right) \right] . \tag{3.52} \]
We assume that the interaction potential can be written as a sum of pair potentials. We now apply the operator \(\nabla_1\) to both sides of Eq. (3.52) and obtain
\[ \rho^2 \nabla_1 g(r_{12}) = - \beta \frac{1}{Z_{GC}} \sum_{N=2}^{\infty} \frac{z^N}{(N-2)!} \int d\mathbf{r}_3 \ldots d\mathbf{r}_N\, e^{-\beta U} \left[ \nabla_1 u(r_{12}) + \sum_{i=3}^{N} \nabla_1 u(r_{1i}) \right] ; \tag{3.53} \]
the term \(\nabla_1 u(r_{12})\) can be taken out of the integral, while for the second term we have
\[ \int d\mathbf{r}_3 \ldots d\mathbf{r}_N\, e^{-\beta U} \sum_{i=3}^{N} \nabla_1 u(r_{1i}) = \sum_{i=3}^{N} \int d\mathbf{r}_3 \ldots d\mathbf{r}_N\, e^{-\beta U}\, \nabla_1 u(r_{1i}) = (N-2) \int d\mathbf{r}_3 \ldots d\mathbf{r}_N\, e^{-\beta U}\, \nabla_1 u(r_{13}) . \tag{3.54} \]
If we insert this in Eq. (3.53), we get
\[ \rho^2 \nabla_1 g(r_{12}) = - \beta \nabla_1 u(r_{12}) \frac{1}{Z_{GC}} \sum_{N=2}^{\infty} \frac{z^N}{(N-2)!} \int d\mathbf{r}_3 \ldots d\mathbf{r}_N\, e^{-\beta U} - \beta \frac{1}{Z_{GC}} \sum_{N=2}^{\infty} \frac{z^N (N-2)}{(N-2)!} \int d\mathbf{r}_3\, \nabla_1 u(r_{13}) \int d\mathbf{r}_4 \ldots d\mathbf{r}_N\, e^{-\beta U} . \tag{3.55} \]
Now we recall the definition (3.34):
\[ \frac{1}{Z_{GC}} \sum_{N=3}^{\infty} \frac{z^N}{(N-3)!} \int d\mathbf{r}_4 \ldots d\mathbf{r}_N\, e^{-\beta U} = \rho^3 g^{(3)}\left(\mathbf{r}_1, \mathbf{r}_2, \mathbf{r}_3\right) ; \tag{3.56} \]
therefore we can write the following equation [6]:
\[ - \frac{1}{\beta} \nabla_1 g(r_{12}) = g(r_{12}) \nabla_1 u(r_{12}) + \rho \int d\mathbf{r}_3\, g^{(3)}\left(\mathbf{r}_1, \mathbf{r}_2, \mathbf{r}_3\right) \nabla_1 u(r_{13}) . \tag{3.57} \]
This equation is called hierarchical, since the two-body function g(r) now depends on the three-body function \(g^{(3)}\). If we pursue this route, we find a series of equations, called hierarchical equations, that connect a function \(g^{(n)}\) to \(g^{(n+1)}\). Hierarchical equations of this type are found in the statistical mechanics of interacting particles in a more general framework. To include all the authors of the different versions, the more extended name Bogoliubov-Born-Green-Kirkwood-Yvon (BBGKY) hierarchy [6] is often used.
3.7 Qualitative Behaviour of the Radial Distribution Function

We note that in the limit ρ → 0, Eq. (3.57) can be simplified as
\[ \nabla_1 \ln g(r_{12}) = - \beta \nabla_1 u(r_{12}) ; \tag{3.58} \]
therefore for a very dilute gas
\[ g(r) = e^{-\beta u(r)} , \tag{3.59} \]
and the behaviour is like that reported in Fig. 3.3 for a typical interaction potential, also shown in the figure.

Fig. 3.3 In the top panel, the radial distribution function of a real gas whose pair potential is represented in the bottom panel

The RDF is zero up to the distance of closest approach, corresponding to the range of the repulsive part of the potential. Then g(r) increases and shows a broad peak at the minimum of the potential, where the atoms are attracted toward the one at the origin. The RDF goes rapidly to 1 because of the low density. At increasing density, and in the range of temperatures of the liquid state, the g(r) shows several well-defined peaks and approaches the large-distance limit with damped oscillations around 1. This is the signature of the short-range order typical of liquids. In Fig. 3.4 we show the g(r) of a LJ liquid at a liquid-state density and two different temperatures, in units of the LJ parameters. In these units T is measured as \(k_B T/\epsilon\). For both temperatures the first peak is much higher than the peak of the gas phase, Fig. 3.3. The first peak corresponds to the nearest-neighbour shell; after it there is a minimum, then a second peak corresponding to the second shell of neighbours, and so on. At large distance the g(r) decays to 1 with damped oscillations. The position and the height of the peaks are determined by the features of the potential and by the thermodynamic conditions. The oscillations at large r can persist in a liquid up to distances of the order of 10 Å. A decrease of the temperature enhances the height of the peaks, while their positions usually remain constant. A similar effect is obtained at constant temperature by increasing the density. In analogy with crystals, we can define a coordination number also for atoms in liquids. Considering the first shell, we integrate the area below the first peak with a cut-off at the position of the first minimum. The coordination number can be calculated as
\[ n_1 = 4\pi\rho \int_0^{r_1} dr\, r^2 g(r) , \tag{3.60} \]
where \(r_1\) is the position of the first minimum. In the same manner we can calculate the coordination number of the second shell, and this procedure can be repeated for the other peaks, at least if they are well defined.
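In practice Eq. (3.60) is evaluated numerically from a tabulated g(r). The sketch below (illustrative, with a hypothetical data file) locates the first minimum after the main peak and integrates \(4\pi\rho r^2 g(r)\) up to it with the trapezoidal rule.

```python
import numpy as np

def coordination_number(r, g, rho):
    """First coordination number n1 of Eq. (3.60) from tabulated r, g(r)."""
    i = np.argmax(g)                          # index of the main peak
    while i + 1 < len(g) and g[i + 1] <= g[i]:
        i += 1                                # walk downhill to the first local minimum
    n1 = 4.0 * np.pi * rho * np.trapz(r[: i + 1] ** 2 * g[: i + 1], r[: i + 1])
    return n1, r[i]

# Example with a hypothetical two-column file "gr.dat" containing r and g(r),
# and a density rho given in the same units as r**-3:
# r, g = np.loadtxt("gr.dat", unpack=True)
# n1, r_min = coordination_number(r, g, rho=0.021)
# print(n1, r_min)
```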
3.8 Experimental Determination of the Structure of Liquids

Fig. 3.4 Radial distribution function of a Lennard-Jones liquid for two temperatures. The first two peaks give the locations of the first and the second shells that surround the atom in the origin, as indicated in the cartoon below

The function g(r), which plays an important role in the study of liquid matter, can be obtained from experiments. The experimental techniques able to give us information on the structure of liquids are the same used for crystals: mainly X-ray diffraction and neutron scattering. X-ray diffraction is realized by sending on the sample an electromagnetic radiation with wavelength λ in the range of the interatomic distances. From the dispersion relation for X-rays,
\[ \epsilon_{\lambda} = \frac{12.4 \cdot 10^{3}}{\lambda(\text{\AA})}\ \mathrm{eV} , \tag{3.61} \]
with λ = 1 Å the energy is in the range of \(\epsilon_{\lambda} \approx 10\) keV, so the scattering is elastic. At variance with X-rays, neutrons interact with the nuclei of the atoms, and they are able to reveal the density fluctuations. The neutron scattering technique has become increasingly important in the study of liquids. Neutron scattering techniques can be used to study the dynamics of the liquid, as we will discuss in Sect. 6.9. In the appropriate elastic limit, neutron scattering provides information on the structural properties. In particular, neutron scattering is more sensitive to systems containing hydrogens. The X-rays, in fact, interact with the electron cloud, and the diffracted radiation from the sample depends on the electronic form factor, as we will discuss later.
3.9 Neutron Scattering on Liquids
75
Other techniques are able to probe structural and dynamical properties of liquids such as nuclear magnetic resonance (NMR) and Raman spectroscopy. All these techniques are frequently used to get complementary informations. We will introduce now the theory of the neutron scattering on atomic systems [7, 8], and we will consider the limit of the elastic scattering. We will consider the case of a beam of neutrons selected to have a single wavelength. After the scattering with the sample, the neutrons are collected at different angles. This type of experiments are realized with the use of a reactor and a monochromator. Neutron scattering experiments can be also performed with pulsed neutron sources; the beam is composed of neutrons with different energy; they are scattered at different angles but also at different times; and the time of flight of the neutrons must be measured [9].
3.9 Neutron Scattering on Liquids Neutrons have a dispersion relation εn =
h¯ 2 k 2 2mn
(3.62)
with mn = 1.67510−24 g. In order to describe the neutron diffusion, we consider a typical experimental set-up. Neutrons are produced by a reactor. The initial high energy is of the order of several MeV, and it must be reduced with the use of a moderator material. After the passage in the moderator, the neutrons are filtered in a monochromator to select a single wavelength. Then the beam is directed at the liquid sample. The neutrons are diffracted by collisions with the nuclei of the sample, and they are collected by detectors at different angles. A single neutron arrives on the sample with a wavevector k in and energy εin = 2 /2m ; it will be diffused after the scattering with a new wavevector k h¯ 2 kin n out and a 2 /2m . In the diffraction event, there is a transfer corresponding energy εout = h¯ 2 kout n of a wavevector as represented in Fig. 3.5 k = k in − k out
(3.63)
Fig. 3.5 Anelastic scattering: on the left k1 < k0 , neutron loses energy; on the right k1 > k0 , neutron gains energy
76
3 Microscopic Forces and Structure of Liquids
and an energy hω ¯ = εin − εout between the neutron and the system. With this definition of h¯ ω, we will have h¯ ω > 0 if the neutron gives energy to the system, instead if the system gives energy to the neutron hω ¯ < 0. If the system has an initial energy En , it undergoes a transition to a state with energy Em such that hω ¯ = Em − En .
(3.64)
The interaction between neutrons and nuclei can be described in terms of the pseudo-potential introduced ad hoc by Fermi. This effective potential takes into account that the nuclear interaction is very strong but localized in space in a range of the order of 10−5 Å. The Fermi potential can be written as VS (r) =
2π h¯ 2 bi δ (r − r i ) . mn
(3.65)
i
where ri are the positions of the nuclei. The bi are phenomenological parameters defined as scattering lengths. They depend on the isotope number and the spin of the nucleus. The great idea of Fermi was that the neutron-nucleus interaction can be treated with the Born theory because of the very restricted range in spite of the strong intensity. Let us define a flux of incident neutrons J0 as the number of neutrons per unit time and area. Let us assume that the flux is collimated; it means that in the beam, there are only neutrons travelling parallel to a given direction. Interference effects are avoided and in first approximation the multiple scattering is neglected. In the experiment, after the scattering process, it is possible to measure the fraction of neutrons diffused in a solid angle element dΩ with an energy between εout and εout + dεout ; this quantity is called the partial differential cross section, indicated as d 2σ , dΩdεout
(3.66)
it has the dimensions of area/energy. In the initial state the sample and a neutron are two independent systems, so the wavefunction of the system composed of the neutron and the liquid can be written as |Ψin >= ψin (r)|φn >
(3.67)
where ψin is the incident neutron wavefunction, while φn is the wavefunction of the liquid. After the scattering process, the neutron goes away from the sample, and the final state will be |Ψf in >= ψout (r)|φm > .
(3.68)
3.9 Neutron Scattering on Liquids
77
We apply now the Born theory. The total cross section, obtained from the integration of Eq. (3.66) on the total solid angle and energy, is σ =
N0 WF J0
(3.69)
where N0 is the number of incoming neutrons and WF is the transition probability given by the Fermi golden rule WF =
2π pn dk out D(kout ) |< kout , m|VS |kin , n >|2 δ (h¯ ω − h¯ ωmn ) h¯ n m (3.70)
where |kin , n > and |kout , m > are the initial (3.67) and the final (3.68) states, respectively, with hω ¯ mn = Em − En . In Eq. (3.70) the sum is on the initial states of the liquid system weighted with the statistical probability pn and on the final states of the neutron weighted with the density of state of the free particles D(k). The final states of the liquid are the ones reachable according to the energy conservation; they are selected through the δ function. With the above assumptions and by considering that v = hk ¯ in /mn is the velocity of the incoming neutrons, the flux can be written as J0 = N0 v|ψin |2 =
N0 hk ¯ in . V mn
(3.71)
The integration in Eq. (3.70) on dk out can be transformed as D(kout )dk out =
V V mn k 2 dkout dΩ = kout dεout dΩ , (2π )3 out (2π )3 h¯ 2
(3.72)
where the density of states of the free particles has been used. Now by substituting (3.71) in (3.72), we can write Eq. (3.70) as WF =
mn V (2π )3 h¯ 2
n
pn
dεout dΩ
m
kout |< kout , m|VS |k0 , n >|2 δ (h¯ ω − h¯ ωmn )
(3.73)
Equations (3.73) and (3.71) can be used to rewrite the total cross section (3.69) σ =
mn V 2π h2
n,m
¯
2 dεout dΩ
kout kin
pn |< kout , m|VS |kin , n >|2 δ (h¯ ω − hω ¯ mn ) .
(3.74)
78
3 Microscopic Forces and Structure of Liquids
From it we can derive the partial cross section (3.66) d 2σ kout = dΩdεout kin
mn V
2 I (ω) ,
2π h¯ 2
(3.75)
where I (ω) =
pn
n
< kin , n|VS |kout , m >< kout , m|VS |kin , n > δ (hω ¯ − h¯ ωmn ) .
m
(3.76) We consider now the latter equation (3.76). Into the matrix element, we can insert the neutron wavefunctions (plane waves), and taking into account that the Fourier transform is defined as V˜S (k) =
dre−ik·r
VS (r) =
2π h¯ 2 −ik·r i bi e , mn
(3.77)
1 < n|V˜S (k)|m > V
(3.78)
1 < m|V˜S (−k)|n > V
(3.79)
i
we get < kin , n|VS |kout , m >= < kout , m|VS |kin , n >= where the definition (3.63) has been used. The δ function can be represented as δ (hω ¯ − h¯ ωmn ) =
1 h¯
+∞ −∞
ei(ω−ωmn )t dt ,
(3.80)
by inserting it in Eq. (3.76), I (ω) can be written as I (ω)=
+∞ 1 p dteiωt < n|eiωn t V˜S (k)e−iωm t |m >< m|V˜S (−k)|n > . n hV ¯ 2 n m −∞ (3.81)
By taking into account that e−iH t |n >= e−iωn t |n > with H the Hamiltonian of the sample, we have I (ω) =
1 hV ¯ 2
+∞ −∞
dt
eiωt
pn < n|V˜S (k, t)V˜S (−k, 0)|n > .
(3.82)
n
Now the sum on the weighted initial states is the average on the statistical ensemble
3.10 Static Limit and the Structure of Liquid
79
pn < n|V˜S (k, t)V˜S (−k, 0)|n >= V˜S (k, t)V˜S (−k, 0) ,
(3.83)
n
and by using the Fermi pseudo-potential (3.77), we get 2 ! " 2π h¯ 2 −ik·[r i (t)−r j (0)] ˜ ˜ VS (k, t)VS (−k, 0) = bi bj e mn
(3.84)
ij
this ensemble average can be inserted in Eq. (3.82). Then Eq. (3.82) so modified can be substituted in Eq. (3.75) to get ! "
kout +∞ d 2σ iωt −ik·[r i (t)−r j (0)] = dt e bi bj e (3.85) dωdΩ kin −∞ ij
3.10 Static Limit and the Structure of Liquid The neutron scattering is anelastic but in order to understand how the information about the structure of a liquid can be extracted, we consider the static limit. To get the static limit, we must integrate on all the frequencies in order to obtain the quantity dσ/dΩ that contains the information on the structure of the liquid
dσ d 2 σ dω = dΩ dωdΩ 2π ! "
+∞ kout 1 = dω dteiωt bi bj e−ik·[r i (t)−r j (0)] . (3.86) kin 2π −∞ ij
We assume that the scattering is elastic; this implies that hω ¯ = 0 and the wavevector k exchanged in the scattering is simply determined by the geometrical relation (3.63) with |k out | = |k in |. In this limit we can exchange the integrals and write "
+∞ ! dσ −ik·[r i (t)−r j (0)] = dt bi bj e δ(t) dΩ −∞ =
!
ij
bi bj e
ik·[r i (0)−r j (0)]
" .
(3.87)
ij
In the cross section Eq. (3.87) the statistical average can be separated in two contributions dσ = bi bj spin e−ik·[r i −r j ] . ensemble dΩ ij
(3.88)
80
3 Microscopic Forces and Structure of Liquids
The average on the scattering lengths must be done on the nuclear spin and isotope states. We can assume that these states are randomly distributed so that bi ≈< b > +δbi con < δbi >= 0. In this way < bi bj >≈< b >2 + < δbi δbj > . We do not expect that states on different nuclei are correlated, so ⎧ i = j ⎨0 δbi δbj = ⎩ < b2 > − < b >2 i=j By substituting this result in Eq. (3.88), we get
dσ = N < b2 > − < b >2 + < b >2 e−ik·[r i −r j ] . dω i
(3.89)
(3.90)
(3.91)
j
Now it is usual to define the coherent and incoherent scattering length as 2 binc = < b2 > − < b >2 2 bcoh = < b >2 .
(3.92)
In Eq. (3.91), the part with binc is called incoherent cross section; it must be subtracted since it is not relevant in the study of the structure. The structure of the liquid can be extracted from the coherent cross section
dσ dΩ
= coh
e−ik·[r i −r j ] .
i
(3.93)
j
We must stress that the formula (3.93) has been derived by assuming the limit of an elastic scattering for neutrons and by integrating on all the frequency; see Eq. (3.86). This coherent cross section can be obtained in experiments by using some approximation since the scattering is not elastic and the differential cross section is not available for all the frequencies. Moreover for light nuclei, also recoil effects must be taken into account. To extract the structural properties, experimentalists apply inelastic corrections also called Placzek corrections [10]. In the case of X-ray diffraction, as said above, the scattering is elastic. The incident radiation with a wavelength λ and a wavevector k 0 with modulus k0 = 2π/λ is deviated from the system at an angle θ , see Fig. 3.6. It comes out with a wavevector k 1 such that |k1 | = |k0 |. The exchanged vector in the diffraction is k = k 1 − k 0 with modulus k=
4π sin (θ/2) . λ
(3.94)
3.11 The Static Structure Factor
81
Fig. 3.6 Exchanged wavevector in an elastic scattering
The intensity of the diffracted radiation will be given by
e−ik·[r i −r j ]
I (θ ) = |f (k)|2
i
(3.95)
j
where f (k) is the form factor, determined from the electronic distribution of the atoms. At variance with crystals, we cannot have the Bragg diffraction. The resulting scattered spectra consist of broad rings that reveal a short-range order without any regular spot. In an ideal gas, all the scattering processes would be independent. The short-range order produces interference effects and the diffraction intensity is proportional to the correlation of the atomic positions as found in Eq. (3.93).
3.11 The Static Structure Factor To connect the coherent cross section to the radial distribution function, we consider the Fourier transform of the function (3.47) and we can write
dre−ik·r G(2) (r) = ρS (k) , (3.96) where the function
S (k) = 1 + ρ
dre−ik·r [g (r) − 1]
(3.97)
is defined as the static structure factor. In order to understand how this function is related to experiments, we consider the meaning of the function S(k). We start from Eq. (3.93) and consider the righthand side " ! e−ik·[r i −r j ] = dre−ik·r (3.98) δ r − ri + rj i
j
i
j
where it was taken into account that the Fourier transform of the microscopic density operator (3.38) is ρk =
i
e−ik·r i .
(3.99)
82
3 Microscopic Forces and Structure of Liquids
Since we are interested to the fluctuations of the density, we can rewrite the righthand side as ⎡! ⎤ " dr e−ik·r ⎣ δ r − r i + r j − ρ2 + ρ2⎦ i
=
j
dr
e−ik·r G(2) (r) + ρ 2
(3.100)
By combining Eq. (3.100) and the definition of the static structure factor (3.97) with the expression for the scattering section (3.93), we get the important relation
dσ dΩ
= ρS (k) + ρ 2 δ (k)
(3.101)
coh
in the last formula, the term ρ 2 δ (k) corresponds to the k = 0 limit where the experimental cross section does not have meaning since this is the limit of not diffused neutrons. The structure factor S(k) can be written also as S(k) =
1
δρk δρ−k . N
(3.102)
From Eq. (3.102), we note that the limit k → 0 corresponds to the correlation of the density fluctuations in the macroscopic limit. The latter is related to the isothermal compressibility KT (see (2.196)). In fact, by considering the sum rule (3.51) for the g(r), we get the important limit
S (k → 0) = 1 + ρ
dr [g(r) − 1] = ρkB T KT .
(3.103)
We recall that the S(k) can be also related to the total correlation function (3.48) ˜ . S(k) = 1 + ρ h(k)
(3.104)
3.12 The Structure Factor and the RDF of Liquid Argon As an example of the statics structure factor, we show in Fig. 3.7 the S(k) of liquid argon determined in a neutron scattering experiment at 85 K, close to the triple point [11]. From this function, the RDF has been derived and it is shown in Fig. 3.8. The position of the first peak of the S(k) is determined by the quasi-periodicity of the oscillations of the g(r), while the height of the peak is determined by
3.12 The Structure Factor and the RDF of Liquid Argon
83
Liquid Argon T=85 K
2.5
S(k)
2 1.5 1 0.5 0 0
2
4
6
8
10
k (Å-1)
Fig. 3.7 Structure factor of liquid argon at 85 K from neutron scattering experiment [11]. Reproduced with permission from data reported in [11] 4 Liquid Argon T=85 K
g(r)
3
2
1
0
0
5
10
15
20
r (Å)
Fig. 3.8 Radial distribution function of liquid argon at 85 K [11]. Reproduced with permission from data reported in [11]
the persistence of those oscillations. The limit k → 0 is not measured but it is extrapolated to the value obtained from the isothermal compressibility. The g(r) shows a high very well-defined first peak and at least other three welldefined peaks. This indicates a good short-range order in the liquid. The calculation of the first coordination number, as define in Eq. (3.60), gives n1 ≈ 12, the same number found in the crystal. Many liquids show a short-range order that preserves the memory of the corresponding crystal.
84
3 Microscopic Forces and Structure of Liquids
3.13 The Structure Factor Close to a Critical Point In the k = 0 limit the structure factor cannot be measured in diffraction experiments but Eq. (3.103) has a great importance in the vicinity of the liquid-gas critical point. In fact the structure factor enormously increases at small k, and this makes impossible the determination of the fluid structure in approaching the critical point. From the phenomenology the structure factor at small k can be approximated as S(k) ≈
A k 2 + k02
(3.105)
while the coefficient A remains constant, k0 strongly depends on the temperature, and it tends to decrease for T → TC , the critical temperature. For a better understanding, it is more convenient to look at the function h(r). The limit k → 0 of S(k) corresponds to the asymptotic limit r → ∞ for the h(r). From Eq. (3.105) we have h(r) →
A −r/ξ e r
(3.106)
where ξ = 1/k0 is a characteristic length of the fluid. ξ measures the decay length of the correlation between the density fluctuations represented by h(r). ξ is called the correlation length. Equation (3.105) can be rewritten as S(k) ≈
A (kξ )2 k 2 (kξ )2 + 1
(3.107)
In approaching the critical point, the correlation length ξ diverges with a power law ξ ∼ |T − TC |−ν
(3.108)
with the critical exponent ν ≈ 0.63. This implies that for k → 0 the structure factor diverges as lim lim S(k) ∼ k −2
k→0 T →TC
(3.109)
and lim lim S(k) ∼
T →TC k→0
1 (kξ )2 = ξ 2 k2
(3.110)
This behaviour must be corrected very close to the critical point by introducing another critical exponent, called η S(k) ∼ k 2−η or S(k) ∼ ξ 2−η
(3.111)
3.14 Structure of Multicomponent Liquids
85
η in three dimensions has a small value (η ≈ 0.05). We know that in the limit k → 0 S(k) is proportional to the isothermal compressibility, according to Eq. (3.103). We know moreover from the theory of the critical phenomena that KT diverges at the critical point as KT ∼ |T − TC |−γ with γ ≈ 1.23. In this way we get a relation between the two exponents γ = ν(2 − η)
(3.112)
We wrote all the formulas for the three-dimensional case. In the theory of critical phenomena, beyond the mean field theory, it is found that the critical exponents depend on the dimensionality d, so, for instance, Eq. (3.110) must be rewritten as S(k) ∼ ξ d−1−η
3.14 Structure of Multicomponent Liquids We can extend the results obtained in the previous sections to treat liquids composed of different atoms or molecules. We discuss later the case of molecular liquids. Now we consider a liquid mixture with m components of different species such that N=
m
Nα
(3.113)
α=1
with a total density ρ = N/V . Each component will be present with a fraction xα = Nα /N and a partial density ρα = xα ρ. In analogy with the previous derivation for a monoatomic liquid, we introduce the partial radial distribution function gαβ (r) such that 4πρβ gαβ (r)r 2 dr
(3.114)
gives the number of atoms of specie β that are in a shell at distance r from an atom of specie α in the origin. The definition implies a symmetry gαβ (r) = gβα (r). The functions gαβ (r) show a behaviour similar to the RDF of a mono atomic liquid with gαβ (r → ∞) = 1. If there is a shell of β atoms around an α atom in the origin, we can calculate the coordination number as
r1 dr r 2 gαβ (r) (3.115) nβ,α = 4πρβ 0
where r1 is the position of the first minimum of gαβ (r).
86
3 Microscopic Forces and Structure of Liquids
The formula for the energy (3.23) can be extended to multicomponent systems as
√ xα xβ
= 2πρ N
α
∞
drr 2 uαβ (r)gαβ (r)
(3.116)
0
β
We can define the total correlation functions as hαβ (r) = gαβ (r) − 1 and the important sum rule (3.51) becomes now 1+ρ
√ xα xβ drhαβ (r) = ρkB T KT α
(3.117)
β
3.14.1 Partial Structure Factor of Multicomponent Liquids By starting from Eq. (3.114), we can define the matrices
Hαβ (k) = ρ
dr
e−ik·r gαβ (r) − 1
(3.118)
and from them the partial structure factors Sαβ (k) = δαβ +
√
xα xβ Hαβ (k)
(3.119)
The partial structure factors can be obtained with measures of neutron diffraction. The theory can be developed with a procedure similar to that used for the monoatomic case. In the approximation of elastic scattering, it comes out that the information on the structural properties can be extracted from the distinct cross section, given by
dσ dΩ
=N dis
α
xα xβ < bα >< bβ > Hαβ (k) ,
(3.120)
β
where bα are now the coherent scattering lengths of the atomic specie α.
3.14.2 Isotopic Substitution From the cross section (3.120), the partial Hαβ (k) can be obtained with a technique called isotopic substitution [12–15]. We consider the most simple example of a binary mixture. Let us call A and B the two components. We define F(k) = (dσ /dΩ)dis /N, and we can write
3.14 Structure of Multicomponent Liquids
87
F(k) = xA2 < bA >2 HAA + xB2 < bB >2 HBB
(3.121)
+xA xB < bA >< bB > HAB Since the scattering lengths depend on the isotope if it is possible to perform 3 distinct experiments by substituting the isotopes at least in one of the atomic species, we get 3 different F(k). So we have finally three equations like (3.121) with three variables HAA (k), HBB (k).e and HAB (k). Since the structural properties do not depend on the isotope, by solving the three equations we can obtain the partial structure factors.
3.14.3 An Example: Molten Salts As an example we show in Fig. 3.9 the partial structure factors of the liquid sodium chloride at T = 875 ◦ C obtained in the experiments [13]; see also the review paper [16] and references therein. This is an ionic liquid, composed of Na+ and Cl− ions, as the corresponding crystal. It is important to notice that the main peak of each partial is located at the same position. The reason is understood by considering the RDF obtained from the partials. They are reported in Fig. 3.10. A short-range order in the location of the charged ions inside the liquid is evident. In fact, gN aCl (r) corresponds to a g+− (r) as gClCl (r) corresponds to a g−− (r) and gN aN a (r) to a g++ (r). So if an ion Na+ is in the origin, we have a first very intense shell of Cl− , and then in correspondence of the first minimum of the gN aCl (r), we find the first Fig. 3.9 Partial structure factors of liquid sodium chloride at T = 875 ◦ C. Neutron diffraction data reproduced from experimental data [13]. For clarity the SN aN a (k) is shifted by 2 on the y-axis
4
Na-Na
3
Sαβ(k)
2 Cl-Cl 1
0 Na-Cl -1
-2
0
2
4 k (Å-1)
6
8
88
3 Microscopic Forces and Structure of Liquids
4
gαβ(r)
3
gClCl(r)
gNaCl(r)
2
gNaNa(r)
1
0
2
4
6
8
r (Å)
Fig. 3.10 Radial distribution functions of liquid sodium chloride at T = 875 ◦ C as obtained from neutron diffraction data [13]. For clarity the gClCl (r) is shifted by 2 on the y-axis
shell of Na+ , followed by the second shell of Cl− , and then the second shell of Na + and so on. If we put in the origin a Cl− ion, we have the sequence of a first shell of Na+ , then a first shell of Cll − and so on. We find in the short range an alternation of charges very similar to the one found in crystal. In the liquid this ordering collapses after 10 Å. The three RDF show the same periodicity; for this reason, the first peak of the corresponding partial Sαβ (k) is located at the same position.
3.15 Structure of Molecular Liquids The study of liquids composed of molecules is generally more complex than atomic liquids. Molecules have internal degrees of freedom due to the vibrations of the bonded atoms. In principles, the geometry of the molecules could be distorted by the intermolecular interactions. The internal vibrations could give contributions to the correlation of the molecules in the liquid. Moreover since the molecules rotate, it is expected that rotational correlations are present besides the correlations between centres of mass. For relatively simple molecules, it is possible to assume a model of rigid molecules [17]. The pair distribution function in a homogeneous isotropic fluid will depend on the relative distance between the two centres of mass of two molecules and the relative orientation of the axes of the two molecules, see Fig. 3.11. For axis of the molecule, the principal axis is usually assumed. The orientation of the axis with respect to a fixed frame work is indicated with ωα . For plane molecules ωα are the angles θ, φ, while in general ωα are composed of the Euler angles. The
3.15 Structure of Molecular Liquids
89
Fig. 3.11 Example of two linear molecules located in the space. Molecule 1 has the centre of mass located at the origin. The centre of mass of molecule 2 is at distance r12 . The axes are oriented with angles ωi with respect to the laboratory frame
Fig. 3.12 Intermolecular and intramolecular site-site models
pair distribution function will be proportional to the probability to find molecule 2 at distance r12 and an orientation ω2 from molecule 1 in the origin with an orientation ω1 , and it can be written as g (r12 , ω1 ω2 ). This function depends on the relative orientation between ω1 and ω2 . For the interpretation of neutron scattering data and also in theoretical approaches, it is usual to use the site-site distribution functions. As shown in Fig. 3.12, it is possible to choose two sites α and β on two different molecules (or on the same). If R i are the positions of the centres of mass with respect to the laboratory reference system, the position of the site α is given by r α = R i + d iα .
(3.122)
90
3 Microscopic Forces and Structure of Liquids
Then the distance r αβ will be given by r αβ = R 12 + d 2β − d 1α
(3.123)
where R12 = R 2 − R 1 . Now we can define a pair distribution function or RDF gαβ rαβ that is proportional to the probability of finding a site β in a molecule at distance rαβ from a site α of another (or the same) molecule. In terms of ensemble average, the gαβ rαβ can be considered as [17] gαβ rαβ = g R 12 + d 2β − d 1α , ω1 ω2 ω
1 ω2
(3.124)
the average is on the angles ω1 , ω2 for fixed rαβ . We note that due to the averaging on the angles, Eq. (3.124), gαβ rαβ contains less information than the g (r12 , ω1 ω2 ) [17]. So scattering experiments or calculations performed with sitesite models that we will consider in Chapter 5 cannot give a complete description of the properties of molecular fluids. The cross section of an experiment will be separated in an intramolecular part and an intermolecular part [8]. The final expression will be like Eq. (3.119) but now (intra)
Hαβ (k) = Hαβ
(inter)
(k) + Hαβ
(k)
(3.125)
(inter) The important term is the intermolecular Hαβ (k). From it, it is possible to obtain the intermolecular RDF gαβ (r) (inter)
Hαβ
(k) = ρ
dr
e−ik·r gαβ (r) − 1
(3.126)
As said above the intermolecular gαβ (r) refers to the distribution of β atoms in a molecule with respect to an α atom in a different molecule. The intramolecular structure is usually not relevant by assuming that there are not significant changes with respect to the structure of the free molecule.
3.15.1 Structure of Liquid Water Due to the relevant role of liquid water in many phenomena numerous X-ray and neutron scattering experiments have been performed to determine the structure of this molecular fluid after the first pioneering experiments [18]. Due to the presence of hydrogens, neutron scattering is usually preferred in spite that hydrogens make the calculation of the inelastic corrections not straightforward [19]. The structure of water is determined by the presence of the hydrogen bond discussed in Sect.1.6. At melting the ice tetrahedral structure of ice collapses, but the liquid maintains a short-range tetrahedral order.
3.15 Structure of Molecular Liquids
91
In Fig. 3.13, we show the partial radial distribution functions obtained from neutron scattering experiments with a procedure for correcting the neutron data [19]. We show separately the intermolecular radial distribution functions. The locations of the peaks in the gOO (r), gOH (r) and gH H (r) confirm the tetrahedral short-range order. Let us consider the oxygen-oxygen distance. In Fig. 3.14 the local arrangement of four water molecules in the tetrahedral geometry is presented. The OO nearest neighbour distance is ≈ 0.28 nm; with the angle of 109◦ , the second oxygen is at a distance of ≈ 0.45 nm. Now if we consider the gOO (r), shown in
3.5
O-O H-H O-H
3
gαβ(r)
2.5 2 1.5 1 0.5 0
0
0.2
0.4
0.6
0.8
r (nm)
Fig. 3.13 Radial distribution functions of water at ambient conditions. Note: the high peak in the gOH (r) at r ≈ 0.096 nm represents the intramolecular structure. Drawn from data of A. K. Soper (https://www.isis.stfc.ac.uk/Pages/Water-Data.aspx) Fig. 3.14 Local tetrahedral order in liquid water: the O-O distances
92
3 Microscopic Forces and Structure of Liquids
Fig. 3.15 Oxygen-oxygen RDF at ambient conditions. Drawn from A. K. Soper (https://www.isis. stfc.ac.uk/Pages/Water-Data.aspx)
Fig. 3.16 Local tetrahedral order in liquid water: the O-H distances. The H bond length is approximately 0.18 nm
Fig. 3.15, we note that the first peak of the RDF is located at around 0.28 nm, while the second peak is approximately at 0.45 nm (Fig. 3.16). The gOH (r) in Fig. 3.17 shows a first peak at around 0.18 nm, the signature of the hydrogen bonds in the liquid (Fig. 3.18). The tetrahedral arrangement is confirmed also by the structure of the gH H (r) in Fig. 3.19.
3.15 Structure of Molecular Liquids
93
Fig. 3.17 Oxygen-hydrogen RDF at ambient conditions. Drawn from A. K. Soper (https://www. isis.stfc.ac.uk/Pages/Water-Data.aspx)
Fig. 3.18 Local tetrahedral order in liquid water: the H-H distances
In spite of progress in the techniques, there are still uncertainty in the determination of the radial distribution functions at low distances. We refer to the recent paper by L. B. Skynner et al. [20] for a deeper discussion of the problem.
94
3 Microscopic Forces and Structure of Liquids
Fig. 3.19 Hydrogen-hydrogen RDF at ambient conditions. Drawn from A. K. Soper (https://www. isis.stfc.ac.uk/Pages/Water-Data.aspx)
References 1. Tully, J.C.: Modern Methods for Multidimensional Dynamics Computations in Chemistry. In: Thompson, D.L. (ed.). World Scientific, Singapore (1998) 2. Lennard-Jones, J.E.: Proc. R. Soc. Lond. 106, 463 (1924) 3. Tosi, M.P.: Solid State Phys. 16, 1 (1964) 4. Landau, L.D., Lifshitz, E.M.: Mechanics, 3rd edn. Elsevier, London (1982) 5. Pathria, R.K.: Statistical Mechanics. Pergamon Press, Oxford (1988) 6. Barker, A., Henderson, D.: Rev. Mod. Phys. 48, 587 (1976) 7. Lovesey, S.W.: Theory of Neutron Scattering from Condensed Matter. Clarendon Press, Oxford, UK (1984) 8. Fischer, H.E., Barnes, A.C., Salmon, P.S.: Rep. Prog. Phys. 69, 233 (2006) 9. Pynn, R.: Neutron Scattering: A Primer. Los Alamos Science, Summer Los Alamos, NM (1990) 10. Placzek, G.: Phys. Rev. 86, 377 (1952) 11. Yarnell, J.L., Katz, M.J., Wenzel, R.G., Koenig, S.H.: Phys. Rev. A 7, 2130 (1973) 12. Enderby, J.E., North, D.M., Egelstaff, P.A.: Phil. Mag. 14, 961 (1966) 13. Biggin, S., Enderby, J.E.: J. Phys. C Solid State Phys. 15, L305 (1982) 14. McGreevy, R.L., Mitchell, E.W.J.: J. Phys. C Solid State Phys. 15, 5537 (1982) 15. Price, D.L., Pasquarello, A.: Phys. Rev. B 59, 5 (1999) 16. Rovere, M., Tosi, M.P.: Rep. Prog. Phys. 49, 1001 17. Gray, C.G., Gubbins, K.E.: Theory of Molecular Fluids. Volume 1: Fundamentals. Clarendon Press, Oxford, UK (1984) 18. Narten, A.H., Levy, H.A.: J. Chem. Phys. 55, 2263 (1971) 19. Soper, A.K.: Chem. Phys. 258, 121 (2000) 20. Skinner, L.B., Huang, C., Schlesinger, D., Pettersson, L.G.M., Nilsson, A., Benmore, C.J.: J. Chem. Phys. 138, 074506 (2013)
Chapter 4
Theoretical Studies of the Structure of Liquids
We consider now the appropriate theoretical methods elaborated to calculate the structural and thermodynamic properties of liquids starting from assumed microscopic models. The early approaches were based on expansions in terms of the density, but they are not adequate for the thermodynamic conditions of liquids. The most successful method to calculate the distribution functions introduced in the previous chapter is the integral equation method developed on the basis of the classical density functional theory.
4.1 Virial Expansion in the Canonical Ensemble As seen before, the canonical partition function (2.148) is given by QN (V , T ) =
1 ZN (V , T ) . N! Λ3N
(4.1)
By assuming that the potential is a sum of two-body terms, see Eq. (3.2), the configurational integral is given by
ZN (V , T ) =
dr N e−βU (r 1 ,...,r N )
(4.2)
and it can be written as
ZN (V , T ) =
dr 1 . . . dr N
'
exp −βu(rij ) .
(4.3)
ii
where fij = f rij .
(4.6)
The product in the integral (4.3) can be expanded as '' 1 + fij = 1 + fij + fij fkl + . . . . i j >i
i
j >i
i
j
k
(4.7)
l
At low density and high temperatures, we can expect that there is few overlapping between the Mayer functions that appear in the integral (4.5) at least for shortrange potentials that decay faster than r −3 the function (4.4) goes to zero quickly. The function ZN can be approximated by taking into account only the first order terms [1]
ZN (V , T ) ≈
⎡ dr 1 . . . dr N ⎣1 +
⎤ f rij ⎦ .
(4.8)
i 2 are more complex integral of the Mayer functions; see, for instance, ref. [2].
4.1.1 From Hard Spheres to the Van der Waals Equation At high temperatures when for a Lennard-Jones like potential kB T , the most important effect is the repulsion between the atoms. In this regime the density plays the most important role in determining the thermodynamic properties, and the system of hard spheres (HS) can be assumed as a prototype. For the HS potential (3.6), the Mayer function is simply f (r) = −1 for r < σ and f (r) = 0 for r > σ . The second virial coefficient (4.15) is B2 =
2π 3 σ = 4Vσ , 3
(4.16)
where Vσ is the volume of the sphere, so B2 is equivalent to the excluded volume.
98
4 Theoretical Studies of the Structure of Liquids
We add now an attractive code to the hard sphere potential [3] ⎧ ⎨∞ u(r) = −δ ⎩ 0
r r0
(4.17)
and so the Mayer function becomes ⎧ ⎪ −1 ⎪ ⎪ ⎪ ⎪ ⎨ f (r) = eβδ − 1 ≈ βδ ⎪ ⎪ ⎪ ⎪ ⎪ ⎩0
r r0
From the virial term B2 (T ) obtained from the integral (4.15), we get a pressure with two terms
2π σ 3 2π δ kB T 1+ − 2 r03 − σ 3 , (4.19) p= v 3 v 3v where we have introduced the volume per particle v = V /N. It is easy to see that the first term is repulsive while the second one is attractive, and the formula is similar to the van der Waals equation. The repulsive term in fact can be considered as derived from a series expansion, and going back to the original function, we can write kB T v−
2π σ 3 3
kB T ≈ v
2π σ 3 1+ . 3 v
(4.20)
Therefore Eq. (4.19) can be rewritten as
2π δ r03 − σ 3 2π σ 3 p+ v− = kB T , 3 3v 2
(4.21)
by comparing with the van der Waals equation that we recall here
p+
a
(v − b) = kB T , v2
(4.22)
we can identify the phenomenological parameters a and b in terms of microscopic parameters: a is given by a=
2π δ 3 r0 − σ 3 , 3
(4.23)
4.2 The Mean Force Potential
99
while b is the virial term B2 of the hard spheres (4.16) b=
2π 3 σ . 3
(4.24)
4.2 The Mean Force Potential With the assumption that the interaction potential can be written as a sum of pair potentials U (r 1 , . . . , r 1 ) =
i
u(rij ) ,
(4.25)
j >i
we derived in the previous chapter (see Sect. 3.6) the hierarchical equation (3.57) that we rewrite here [2]
1 − ∇1 g(r12 ) = g(r12 )∇1 u(r12 ) + ρ dr 3 g (3) (r 1 , r 2 , r 3 ) ∇1 u(r13 ) . (4.26) β We can explore it more in detail. We found already that in the limit ρ → 0 Eq. (4.26) can be simplified ∇1 ln (g(r12 )) = −β∇1 u(r12 ) ,
(4.27)
and in the limit of low density we get the form of the g(r), Eq. (3.59) g(r) = e−βu(r) .
(4.28)
We can generalize Eq. (4.28) by introducing a new effective potential U (r) g(r) = e−β U (r) .
(4.29)
− ∇1 U (r12 ) = kB T ∇1 ln (g(r12 )) .
(4.30)
We can also define a force as
We substitute now Eq. (4.30) in Eq. (4.26) to obtain ∇1 U (r12 ) · g(r12 ) = g(r12 )∇1 u(r12 ) + ρ
dr 3 g (3) (r 1 , r 2 , r 3 ) ∇1 u(r13 ) .
(4.31)
100
4 Theoretical Studies of the Structure of Liquids
By dividing for g(r12 ) and changing sign, we have an interesting equation
− ∇1 U (r12 ) = −∇1 u(r12 ) − ρ
dr 3
g (3) (r 1 , r 2 , r 3 ) ∇1 u(r13 ) , g(r12 )
(4.32)
the left side term represents the total force between two particles; the equation tells us that this force is determined by a direct force between the two particles and a term representing an indirect force. In the integral the 1–2 interaction is averaged on a generic particle 3 and integrated on all its possible positions. The U (r) is called the potential of mean force (PFM). The separation between a direct contribution and an indirect one to the particle interaction will be used also in different formulations. By considering Eq. (4.29), g(r) can be formally written also as g(r) = e−βu(r)+w(r) ,
(4.33)
w(r) = β [u(r) − U (r)]
(4.34)
where the function
contains the many body contribution.
4.3 Kirkwood Approximation Equation (4.32) can be rewritten as −kB T ∇1 ln (g(r12 )) = ∇1 u(r12 )
g (3) (r1 , r2 , r3 ) − g(r13 ) ∇1 u(r12 ) , + ρ dr3 g(r12 ) (4.35) since
dr3
g(r13 )∇1 u(r13 ) = 0 .
(4.36)
Kirkwood proposed the superposition approximation [4] g (3) (r1 , r2 , r3 ) ≈ g(r12 )g(r13 )g(r23 ) , with this assumption, we get the Born-Green equation. −kB T ∇1 [ln g(r12 ) + βu(r12 )]
(4.37)
4.5 Density Distributions from the Grand Partition Function
101
=ρ
dr3 ∇1 u(r13 ) [g(r13 ) (g(r23 ) − 1)] .
(4.38)
In principle this equation could allow to calculate the RDF for a given potential. The Born-Green equation however is based on the approximation (4.37) that is valid only for fluids at very low concentration. The modern theories of liquids assume a different route as we will see in the following sections.
4.4 Radial Distribution Function from the Excess Free Energy We recall the definition of the excess free energy in the canonical ensemble (2.156) βAexc = − ln
ZN (V , T ) . VN
(4.39)
We perform a functional derivative with respect to the two-body potential u(r) δβAexc δ δ 1 =− ln ZN = − δu(r) δu(r) ZN δu(r)
dr 1 . . . dr N e−βU (r 1 ,...,r N ) .
(4.40)
By taking into account the definition of the radial distribution function (3.17), we get [5, 6] δβAexc /N 1 = βρg (r) . δu(r) 2
(4.41)
Therefore in the canonical ensemble, the excess free energy is a generating functional for the RDF.
4.5 Density Distributions from the Grand Partition Function We start from the early method formulated by Percus [7]. In order to simplify the notation from now on, we will use sometimes an index i to indicate r i . We assume that an external one-body potential is applied to the liquid. Let us indicate the external potential as v(i). We restart from the grand canonical partition function (2.165) ZGC
∞ zN = dr 1 . . . dr N N! N =0
e−βU (r 1 ...r N ) ,
(4.42)
102
4 Theoretical Studies of the Structure of Liquids
where z = eβμ /Λ3 is the activity (2.166). We assume the potential U as sum of pair potential u(rij ). When we apply the external potential the system is no longerhomogeneous, and the ZGC becomes a functional of the applied field V= N i=1 v(i). We define φ (r) = μ − v (r) ,
(4.43)
and the partition function can be rewritten as ∞
ZGC [v] =
1
N =0
dr 1 . . . dr N
Λ3N N!
N '
eβφ(i)
'
e−βu(ij ) .
(4.44)
j >i
i=1
We define now ζ (i) =
eβφ(i) Λ3
η(i, j ) = e−βu(i,j ) , and Eq. (4.44) becomes
N ∞ ' ' 1 ζ (i) η(i, j ) , ZGC [v] = dr 1 . . . dr N N! N =0
(4.45)
j >i
i=1
by switching off the external potential lim ZGC [v] = ZGC .
(4.46)
v→0
From the ZGC [v] it is possible to generate the distribution functions. By considering their definitions, we can write the distribution functions in this way: ρ (n) (1, . . . , n; v) =
1 ZGC
∞ N =n
1 (N − n)!
dr n+1 . . . dr N
N '
ζ (i)
i=1
'
η(i, j ) .
(4.47)
j >i
With a variational derivative of ZGC with respect to ζ (1), we get
∞ N δZGC [v] = dr 2 . . . dr N δζ (1) N! N =1
η(1, 2)
N ' ' i=2 j >i
we now multiply both sides by ζ (1)/ZGC , and we get
ζ (i)η(i, j ) ,
(4.48)
4.6 Grand Potential as Generating Functional
103
ζ (1) δZGC [v] ZGC [v] δζ (1) =
∞
1 ZGC
N =1
1 (N − 1)!
dr 2 . . . dr N
N '
ζ (i)
'
η(i, j ) .
(4.49)
j >i
i=1
So we have generated the single particle distribution function ρ (1) (1; v) =
δ ln ZGC [v] ζ (1) δZGC [v] = . ZGC [v] δζ (1) δ ln ζ (1)
(4.50)
Analogously we can verify that ρ (2) (1, 2; v) =
1 ZGC [v]
ζ (1)ζ (2)
δ 2 ZGC [v] , δζ (1)δζ (2)
(4.51)
and we can get all the distribution functions ρ (n) (1, . . . , n; v) =
1 ZGC [v]
ζ (1) . . . ζ (n)
δ n ZGC [v] . δζ (1) . . . δζ (n)
(4.52)
We note that lim ρ (1) (1; v) = ρ (1) (1) = ρ
(4.53)
lim ρ (2) (1, 2; v) = ρ (2) (1, 2) = ρ 2 g(r) .
(4.54)
v→0 v→0
4.6 Grand Potential as Generating Functional Instead of using the partition function, it is possible to reformulate the previous approach in terms of the grand potential. We recall the definition of the grand potential βΩ [v] = − ln ZGC [v] .
(4.55)
We note that Eq. (4.50) can be rewritten as ρ (1) (1; v) =
1 δΩ [v] ζ (1)δ ln ZGC [v] =− . δζ (1) β δφ(1)
(4.56)
In alternative to the partition function, also the grand potential can be used to generate the distribution functions. Let us consider the two-body case. If we take now the functional derivative, we obtain
104
4 Theoretical Studies of the Structure of Liquids
δρ (1) (r1 ) = δφ(r2 )
dr 3
δρ (1) (r1 ) δv(r 3 ) δv(r 3 ) δφ(r2 ) δρ (1) (r1 ) [δ (r 3 − r2 )] δv(r 3 )
=−
dr 3
=−
δρ (1) (r1 ) . δv(r2 )
(4.57)
The last functional derivative can be calculated as δρ (1) (1; v) δ ζ (1) δZGC = −βζ (2) δv(2) δζ (2) ZGC δζ (1) 1 δZGC δζ (1) ζ (1) δZGC δZGC = −βζ (2) − 2 ZGC δζ (1) δζ (2) ZGC δζ (1) δζ (2) ζ (1) δ 2 ZGC + ZGC δζ (1)δζ (2) = −β ρ (1) (r1 )δ (r1 − r2 ) − ρ (1) (r1 )ρ (1) (r2 ) + ρ (2) (r1 , r2 ) . (4.58) So we get that 1 δ 2 Ω [v] δρ (1) (r1 ) =− δφ(r2 ) β δφ(r1 )δφ(r1 ) = ρ (2) (r1 , r2 ) − ρ (1) (r1 )ρ (1) (r2 ) + ρ (1) (r1 )δ (r1 − r2 ) = G(2) (r 1 , r 2 ) ,
(4.59)
the last function is the density correlation function as defined in Eq. (3.42). With further n-differentiation of the grand potential, we can generate the n-body density correlation functions.
4.7 Classical Density Functional Theory The modern theory of liquids can be formulated in terms of a more fruitful method based on the density functional theory (DFT). The DFT for quantum fluids has been successfully applied in solid state physics [8–10]. We will consider here the classical version [11, 12].
4.7 Classical Density Functional Theory
105
4.7.1 Equilibrium Conditions As before we assume that an external potential V = i v (r i ) is applied to the liquid. We indicate with ρ(r) = ρ (1) (r) the single particle density that is not homogeneous in presence of the external potential. The grand canonical potential is a unique functional of ρ (r) [11]
Ωv [ρ] = Av [ρ] − μ
drρ(r) .
(4.60)
We can separate the Helmholtz free energy in an intrinsic part F and the explicit contribution of the external potential
Av [ρ] = F [ρ] +
drv(r)ρ(r) .
(4.61)
drφ(r)ρ(r) ,
(4.62)
We can write
Ωv [ρ] = F [ρ] −
where, as before, φ = μ − v(r). The equilibrium density is determined by the condition
δΩv [ρ] δρ(r)
= 0,
(4.63)
δF [ρ] . δρ(r)
(4.64)
ρeq
this implies μ = v(r) +
The intrinsic free energy can be separated as F [ρ] = F id [ρ] + F exc [ρ]
(4.65)
where the ideal part is given by
βF id [ρ] =
dr ρ(r) ln Λ3 ρ(r) − 1 .
(4.66)
Equation (4.64) becomes βμ = βv(r) + ln Λ3 ρ(r) +
δβF exc [ρ] . δρ(r)
(4.67)
106
4 Theoretical Studies of the Structure of Liquids
Now we can define the single particle direct correlation function c(1) (r) = −
δβF exc [ρ] , δρ(r)
(4.68)
and Eq. (4.67) can be written as βμ = βv(r) + ln Λ3 ρ(r) − c(1) (r) .
(4.69)
We note that for the ideal gas c(1) = 0 and
β(μid − v) = ln Λ3 ρ .
(4.70)
4.7.2 The Ornstein-Zernike Equation We consider Eq. (4.67) that can be rewritten as βφ = ln Λ3 ρ +
δβF exc [ρ] , δρ(r)
(4.71)
by taking the functional derivative, we get δβφ(r 1 ) δ (r 1 − r 2 ) δ 2 βF exc [ρ] = + . δρ(r 2 ) ρ(r 1 ) δρ(r 1 )δρ(r 2 )
(4.72)
We define the two-body direct correlation function c(2) (r 1 , r 2 ) = −
δ (r 1 − r 2 ) δβφ(r 1 ) δ 2 βF exc [ρ] = − . δρ(r 1 )δρ(r 2 ) ρ(r 1 ) δρ(r 2 )
(4.73)
We recall now Eq. (4.59), and we have −1 δβφ(r 1 ) (2) = G (r 1 , r 2 ) , δρ(r 2 )
(4.74)
by definition
−1 dr 3 G(2) (r 1 , r 3 ) G(2) (r 3 , r 2 ) = δ (r 1 − r 2 ) ,
and by using Eq. (4.73), we obtain
(4.75)
4.7 Classical Density Functional Theory
dr 3
107
δ (r 1 − r 3 ) (2) − c (r 1 , r 3 ) G(2) (r 3 , r 2 ) = δ (r 1 − r 2 ) , ρ(r 1 )
(4.76)
then
dr 3
=
1 G(2) (r 3 , r 2 ) δ (r 1 − r 3 ) ρ(r 1 ) dr 3 c(2) (r 1 , r 3 ) G(2) (r 3 , r 2 ) + δ (r 1 − r 2 ) .
(4.77)
We substitute in the left-hand side of this equation the function G(2) with the total correlation function h(2) by using Eq. (3.44), and we obtain 1 ρ(r 1 )ρ(r 2 )h(2) (r 1 , r 2 ) + ρ(r 1 )δ (r 1 − r 2 ) ρ(r 1 )
= dr 3 c(2) (r 1 , r 3 ) G(2) (r 3 , r 2 ) + δ (r 1 − r 2 ) .
(4.78)
The terms with δ (r 1 − r 2 ) can be eliminated on both sides, and we are left with ρ(r 2 )h(2) (r 1 , r 2 )
= dr 3 c(2) (r 1 , r 3 ) G(2) (r 3 , r 2 ) , by substituting G(2) with h(2) in the right-hand side, we get ρ(r 2 )h(2) (r 1 , r 2 )
= dr 3 c(2) (r 1 , r 3 ) ρ(r 2 )ρ(r 3 )h(2) (r 3 , r 2 ) + ρ(r 3 )δ (r 3 − r 2 ) . Finally by integrating on the δ-function, we obtain
h(2) (r 1 , r 2 ) = c(2) (r 1 , r 2 ) +
dr 3 ρ(r 3 )c(2) (r 1 , r 3 ) h(2) (r 3 , r 2 ) ,
(4.79)
the last integral equation is the Ornstein-Zernike (OZ) equation. In the limit of a homogeneous fluid, v(r) → 0, it becomes
h (r) = c(r) + ρ
dr c | r − r | h r ,
(4.80)
where c(r) = c(2) (r) is the direct correlation function for a homogeneous liquid. The OZ equation is the basic equation of the modern theory of liquids [13]. In analogy with Eq. (4.32) also in the OZ equation, there is a separation between a
108
4 Theoretical Studies of the Structure of Liquids
direct term and an indirect contribution to the total correlation. The direct correlation function plays an important role in determining the structure of the liquid since it appears as the kernel in the integral equation for the RDF. In order to solve the problem, however it is necessary to find a further relation between c(r) and g(r). For this reason we consider later an approximation to the grand potential.
4.7.3 The Ornstein-Zernike Equation in k-Space By performing the Fourier transform of Eq. (4.80), we get ˜ ˜ , h(k) = c(k) ˜ + ρ c(k) ˜ h(k)
(4.81)
that can be written also as ˜ h(k) =
c(k) ˜ . 1 − ρ c(k) ˜
(4.82)
1 . 1 − ρ c(k) ˜
(4.83)
˜ Since h(k) = S(k) − 1, we have S(k) =
Equations (4.81)–(4.83) are all equivalent form of the OZ equation in k space. Equation (4.83) was really formulated for the first time by Ornstein and Zernike in a study of the gas-liquid transition. Since S(k → 0) diverges for T → Tc , they introduced the function c(k) ˜ with the idea that the direct correlation function remains finite in approaching Tc where it is expected that ρ c(k) ˜ → 1. The introduction of the function c(r) was a very relevant by-product of the OZ mean field theory. We note that c(k → 0) is related to the isothermal compressibility, since 1 − ρ c(k) ˜ =
1 S(k)
(4.84)
we have 1 ρkB T 1 − ρ c(0) ˜ = . KT The inverse of the KT is the bulk modulus ∂P BT = ρ ∂ρ T
(4.85)
(4.86)
4.7 Classical Density Functional Theory
109
Now we can infer that 1/S(0) is related to the bulk modulus that measures the mechanical stiffness of the liquid against a deformation. In an ideal gas BT = ρkB T , so from Eq. (4.85) we find that the quantity −ρ 2 kB T c(0) ˜ can be interpreted as the excess mechanical stiffness; it contains the contribution to the stiffness of the interatomic forces. The generalization for finite k indicates that 1/S(k) is related to the stiffness of the fluid against a perturbation that modulates the density at the wavelength 2π/k. The excess contribution to this stiffness is given by −ρ 2 kB T c(k). ˜ In Sect. 6.12 we will further discuss this interpretation of the static structure factor.
4.7.4 Free Energy Calculation We consider a system with initial state at density ρ0 (r) and a change of state in the grand canonical ensemble at constant T , V and μ to a final state connected with the initial one by a density path ρλ (r) = ρ0 (r) + λΔρ(r)
0≤λ≤1
(4.87)
with Δρ(r) = ρ(r) − ρ0 (r). By taking into account the definition of the single particle direct correlation function, we can write
1
βF exc [ρ] = βF exc [ρ0 ] −
dλ 0
dr Δρ(r)cλ(1) (r) ,
(4.88)
(2)
(4.89)
then the c(1) can be expanded as (1) cλ (r)
=
(1) c0 (r) +
λ
dλ
0
dr 1 Δρ(r 1 )cλ (r, r 1 )
and so Eq. (4.88) becomes
βF exc [ρ] = βF exc [ρ0 ] −
1
−
(1)
drΔρ(r)c0 (r)
dλ
dr 1 Δρ(r 1 )
0
λ
dλ
0
dr 2 Δρ(r 2 )cλ(2) (r 1 , r 2 ) , (4.90)
that can be written as
βF exc [ρ] = βF exc [ρ0 ] −
(1)
drΔρ(r)c0 (r)
110
4 Theoretical Studies of the Structure of Liquids
1
+
λ
dλ 0
dλ
(2)
dr 2 Δρ(r 1 )Δρ(r 2 )cλ (r 1 , r 2 ) .
dr 1
0
(4.91)
4.7.5 Expansion from the Homogeneous System We assume now that λ = 0 corresponds to v = 0 and ρ0 (r) = ρ0 , a homogeneous density. Now for v = 0 (1)
ln Λ3 ρ0 − c0 = βμ0 ,
(4.92)
since βμid = ln Λ3 ρ, we have (1)
c0 = −βμexc 0 ,
(4.93)
and in the following we indicate with the subscript 0 the quantities in the limit of the homogeneous liquid. In Eq. (4.91) on the right-hand side, the second term becomes
dr Δρ(r)c0(1) (r) = −βμexc 0
dr Δρ(r) .
(4.94)
Now we make an approximation in Eq. (4.91), and we assume that (2)
(2)
cλ (r 1 , r 2 ) ≈ c0 (r 1 , r 2 ) .
(4.95)
In the third term the integrand does not depend more on λ, and the expansion of βF exc becomes
exc exc exc βF [ρ] ≈ βF0 [ρ0 ] + βμ0 drΔρ(r) −
1 2
dr 1
(2)
dr 2 Δρ(r 1 )Δρ(r 2 )c0 (r 1 , r 2 ) .
(4.96)
The expansion for the intrinsic free energy in this approximation will be βF [ρ] = βF id [ρ] + βF exc [ρ] ≈ βF id [ρ] + βF0exc [ρ0 ]
+ βμexc drΔρ(r) 0 1 − 2
dr 1
(2)
dr 2 Δρ(r 1 )Δρ(r 2 )c0 (r 1 , r 2 ) .
(4.97)
4.7 Classical Density Functional Theory
111
Now we add and subtract βF0id on the right-hand side, so we substitute the term βF0exc with
βF0exc
[ρ0 ] = βF0 [ρ0 ] −
dr ρ0 ln Λ3 ρ0 − 1 .
(4.98)
3 id Moreover to the term with βμexc 0 , we add and subtract βμ0 = ln Λ ρ0 to get
3 βμexc 0 = βμ − ln Λ ρ0
(4.99)
we recall that μ is kept constant. In this way Eq. (4.97) becomes
drΔρ(r) βF [ρ] ≈ βF0 [ρ0 ] + βμ0 − ln λ3 ρ0
+ 1 − 2
dr ρ ln λ3 ρ − 1 − dr ρ0 ln λ3 ρ0 − 1
(2)
dr 2 Δρ(r 1 )Δρ(r 2 )c0 (r 1 , r 2 ) .
dr 1
(4.100)
It is easy to see that
dr
* ) ρ ln λ3 ρ − 1 − ρ0 ln λ3 ρ0 − 1
drρ ln
ρ + ln λ3 ρ0 ρ0
drΔρ(r −
(4.101)
drΔρ(r) .
So finally we have
βF [ρ] = βF0 [ρ0 ] + (βμ − 1) −
1 2
dr 1
drΔρ(r) +
drρ(r) ln
dr 2 Δρ(r 1 )Δρ(r 2 )c0(2) (r 1 , r 2 ) .
ρ(r) ρ0 (4.102)
Now we want to calculate the grand potential functional
βΩv = βF +
drρ(r)v(r) − μ
drρ(r) ,
(4.103)
and we finally get
βΩv [ρ] = βΩ0 + β
drρ(r)v(r) +
ρ(r) dr ρ(r) ln − Δρ(r) ρ0
112
4 Theoretical Studies of the Structure of Liquids
−
1 2
dr 1
(2)
dr 2 Δρ(r 1 )Δρ(r 2 )c0 (r 1 , r 2 ) .
(4.104)
We recall the equilibrium condition (4.63)
δΩv [ρ] δρ(r)
=0
(4.105)
ρeq
by performing the functional derivative ρ(r) δΩv [ρ] = βv(r) + ln − δρ(r) ρ0
dr Δρ(r)c0(2) r, r = 0
(4.106)
the equilibrium single particle density satisfies the approximate equation
(2) ρ(r) = ρ0 exp −βv(r) + dr Δρ(r )c0 r, r .
(4.107)
4.8 Closure Relations from the Density Functional Theory In order to find a useful equation for the radial distribution functions, we can start from the OZ equation (4.80) but we need to find another independent relation between g(r) and c(r) in order to be able to solve the problem. We consider the result of the previous section where we get the final Eq. (4.107). Let us assume that if we introduce in the homogeneous system a further particle, the potential energy will be modified as N N u |r i − r j | + u (|r i − r N +1 |) i=1 j >i
(4.108)
i=1
and we can use v(ri ) = u (|r i − r N +1 |) as an external potential. We recall the partition function (4.44) ⎧ ⎡ ⎤⎫
∞ N N ⎨ ⎬ N 1 z ZGC [v] = u(ij ) + v(i)⎦ dr 1 . . . dr N exp −β ⎣ ⎩ ⎭ N! 2 N =0
i=1 j =i
i=1
⎤⎫
∞ N +1 ⎬ 1 zN = u(ij )⎦ . dr 1 . . . dr N exp −β ⎣ ⎩ ⎭ N! 2 N =0
It can be rewritten as
⎧ ⎨
⎡
i=1 j =i
(4.109)
4.8 Closure Relations from the Density Functional Theory
ZGC [v] =
113
∞ ZGC [0] zN +1 zZGC [0] N! N =0 ⎧ ⎡ ⎤⎫
N +1 ⎨ ⎬ 1 dr 1 . . . dr N exp −β ⎣ u(ij )⎦ ⎩ ⎭ 2
(4.110)
i=1 j =i
and by considering M = N + 1 ZGC [v] =
∞ 1 zM ZGC [0] z ZGC [0] (M − 1)! M=1 ⎧ ⎡ ⎤⎫
M ⎨ ⎬ 1 u(ij )⎦ , dr 2 . . . dr M exp −β ⎣ ⎩ ⎭ 2
(4.111)
i=1 j =i
where r i have been shifted to r i+1 . But in the right-hand side, we recognize the single particle distribution function ρ (1) (v = 0), so Eq. (4.111) can be written as ZGC [v] =
ZGC [0] (1) ρ (v = 0) . z
(4.112)
Starting from Eq. (4.109), we can write ρ
(1)
(r, v) =
∞
zN ZGC [v] (N − 1)! N =1 ⎧ ⎡ ⎤⎫
N N ⎨ ⎬ 1 dr 2 . . . dr N exp −β ⎣ u(ij ) + v(i)⎦ .(4.113) ⎩ ⎭ 2 1
i=1 j =i
i=1
In the same way as before ρ (1) (r, v) =
∞ zM 1 ZGC [0] zZGC [0] ZGC [v] (M − 2)! M=2 ⎧ ⎡ ⎤⎫
M ⎨ ⎬ 1 u(ij )⎦ dr 3 . . . dr M exp −β ⎣ ⎩ ⎭ 2
(4.114)
i=1 j =i
so we have ρ (1) (r, v) =
ZGC [0] (2) ρ (v = 0) . zZGC [v]
(4.115)
114
4 Theoretical Studies of the Structure of Liquids
Finally with the use of Eq. (4.112) ρ (1) (r, v) =
1 2 ρ g (r) . ρ
(4.116)
Equation (4.107) for the single particle density for the homogeneous liquid becomes an equation for the RDF
g (r) = exp −βu(r) + dr Δρ(r )c r, r ,
(4.117)
in this equation Δρ(r) = ρ (1) (r) − ρ = ρ (ρg(r) − 1) .
(4.118)
So now we have a second equation connecting g(r) with c(r). By combining Eq. (4.80) with the last Eq. (4.117), we obtain the approximation called, for historical reasons, hypernetted chain (HNC) [2] g (r) = exp [−βu(r)] exp [h (r) − c (r)] .
(4.119)
This is the first example of a closure relation, since we now have two unknown functions c(r) and g(r) connected from the exact OZ equation and the approximate equation called HNC.
4.9 An Exact Equation for the g(r) If we recall the definition of the function w(r), Eq. (4.34), we see that HNC corresponds to w (r) ≈ h (r) − c (r) .
(4.120)
From the classical many body theory [13] it can be shown that Eq. (4.120) is an approximation to the exact formula for the function w(r) w (r) = h (r) − c (r) + d (r) ,
(4.121)
where the term d(r) is called bridge function. The HNC approximation corresponds to neglect this bridge function. From Eq. (4.121) an exact formal expression for the g(r) is given by g(r) = e−βu(r)+h(r)−c(r)+d(r) .
(4.122)
4.10 HNC and Percus-Yevick Approximations
115
Since the bridge function d(r) cannot be exactly calculated, it becomes necessary to rely on approximations. We can also define a function y(r) = eβu(r) g(r) .
(4.123)
At variance of g(r), the y(r) is a continuous function since it can be expressed as a power expansion of the density y (r) = 1 +
∞
yn (r) ρ n .
(4.124)
n=1
The analytic properties of y(r) can be useful in some theoretical formulation though it is impossible to calculate it exactly.
4.10 HNC and Percus-Yevick Approximations By neglecting the contribution of the bridge function with d(r) = 0 in Eq. (4.122), as said above, we have the HNC approximation. We can also linearize the HNC equation to obtain eh(r)−c(r) ≈ 1 + h(r) − c(r) = g(r) − c(r) ,
(4.125)
from it g(r) ≈ e−βu(r) [g(r) − c(r)] ,
(4.126)
c(r) = g(r) − y(r) .
(4.127)
that is also equivalent to
This approximation is called Percus-Yevick (PY) from the names of the authors who proposed the equation for the first time, following a different route [14]. An equivalent form of the PY is c(r) = f (r)y(r) where f (r) is the Mayer function.
(4.128)
116
4 Theoretical Studies of the Structure of Liquids
4.10.1 RPA and MSA The random phase approximation (RPA) was introduced as the classical version of the quantum RPA formulated in the study of the electron gas. For all the approximations introduced, see ref. [2, 13]. It is assumed that c(r) = −βu(r) .
(4.129)
This approximation was used in particular for the case of a the classical jellium model. This model is also called the one component plasma (OCP), and it represents a system of charged positive particles in a neutralizing background. The model was proposed by Debye and Hückel in the study of electrolyte solution. The main result is the screening effect that reduces the Coulomb interaction to a short-range Yukawa-like potential. In our formalism, we can easily derive this effect. Let us recall the OZ equation in k space ˜ h(k) =
c(k) ˜ . 1 − ρ c(k) ˜
(4.130)
With the assumption (4.129) and u(r) = q 2 /r β4π q 2 , k2
(4.131)
β4π q 2 /k 2 . 1 + ρβ4π q 2 /k 2
(4.132)
c(k) ˜ =− by substituting in Eq. (4.130) we get ˜ h(k) =−
We define the Debye and Hückel wavevector 2 kD = 4πρβq 2
(4.133)
and we have ˜ ρ h(k) =−
2 kD 2 k 2 + kD
.
(4.134)
By performing the Fourier transform, we obtain g(r) = 1 −
q 2 e−kD r . kB T r
The RDF decay at short-range due to the screening effect.
(4.135)
4.11 Properties of the Hard Sphere Fluid
117
The RPA is not valid at short distances where the g(r) could become negative. In this respect the mean spherical approximation (MSA) improves the RPA by introducing excluded volume effects by assuming a hard sphere approximation at short distances. So in the MSA the assumptions are g(r) = 0 r < σ c(r) = −βu(r)
(4.136) (4.137)
r>σ.
The MSA can be solved analytically, and it has been applied particularly to fluids with charged atoms [15].
4.11 Properties of the Hard Sphere Fluid A said above a basic system in the study of fluids is the hard sphere system. We recall the potential
u(r) =
⎧ ⎨∞ ⎩
rσ
The thermodynamics of this system does not depend on the temperature. The excess partition function in the canonical ensemble is Qexc N (V , T ) =
1 VN
dr 1 . . . r N e−β
i
j >i
u(rij )
,
(4.139)
since for the hard sphere potential (4.138) e−βu(r) = θ (r − σ )
(4.140)
it becomes Qexc N (V , T ) =
1 VN
dr 1 . . . r N
''
θ (rij − σ )
(4.141)
i j >i
exc and it only depends on the volume, Qexc N (V , T ) = QN (V ). As a consequence, (1) the internal energy of the hard spheres has only the kinetic contribution, and (2) the thermodynamics depends on the density with the excess free energy determined from the entropy.
118
4 Theoretical Studies of the Structure of Liquids
It is usual to rescale the density as ρσ 3 and to adopt as parameter the packing fraction π 3 ρσ , 6
η=
(4.142)
η measures the fraction of total volume occupied by the particles. It must be η < 1. Its maximum values in fact are determined √ by the maximum density of spheres in a FCC crystalline structure, ρ0 σ 3 = 2. The corresponding packing fraction is η0 ≈ 0.75. Computer simulations found a phase transition from the liquid to the crystal phase for a value ρσ 3 = 0.939 with a volume change of 10% circa. Concerning the calculation of the pressure of the hard sphere system, we recall the formula of the virial pressure
2 βP = 1 − πβρ ρ 3
∞
0
du g(r)r 3 dr . dr
(4.143)
First of all in a hard sphere system with diameter σ , it must be g(r) = 0
r σ , the g(r) increases since, at high density, it is expected that the first shell of hard spheres around the one in the origin must be of the order of the coordination number 12, typical of the arrangement of hard spheres in a crystal. This discontinuity of g(r) may give a problem in the calculation of the pressure. It is more convenient to use the continuous function y(r) (4.123)
∞ βP 2 du −βu(r) = 1 − πβρ e y(r)r 3 dr ρ 3 dr 0
∞ d −βu(r) 2 dr . e = 1 + πρ y(r)r 3 3 dr 0
(4.146)
We must derive the function e−βu = θ (r − σ ). The derivative of θ (r) is the δ(r) function, so Eq. (4.146) becomes 2 βP = 1 + πρ ρ 3
∞
y(r)r 3 δ(r − σ )dr
0
2 = 1 + πρ lim r 3 y(r) . r→σ 3
(4.147)
4.12 Equation of State and Liquid-Solid Transition of Hard Spheres
119
Here the resulting expression is well defined since y(r) is a continuous function. And we obtain y(σ + ) = y(σ − ) with y(σ + ) = g(σ ). The hard sphere pressure results to be βP 2 = 1 + πρσ 3 g(σ ) . ρ 3
(4.148)
4.12 Equation of State and Liquid-Solid Transition of Hard Spheres The hard sphere fluid was one of the first systems studied by computer simulation. In particular there was lot of work on the liquid-solid transition in hard sphere systems. The first molecular dynamics simulation by Alder and Wainwright [16] in 1957 was devoted to study the transition in a three-dimensional hard sphere system (see Chap. 5). A fluid-solid coexistence was also found for the hard disk system by the same authors in 1962 [17]. Since then a number of computer simulations were done to study structure and thermodynamics of hard spheres and hard disks [2]. As we discuss in the next section, the PY equation of state (EOS) was not in agreement with simulation. Carnahan and Starling (CS) were able to derive an EOS for the hard sphere fluid modifying the virial expansion. The CS-EOS is in perfect agreement with the simulation results; it is given by a rather simple expression βpCS 1 + η + η2 − η3 = . ρ (1 − η)3
(4.149)
We note that the CS-EOS has been also tested in experiments on a colloidal suspension where the particles imitate a hard sphere system [18]. In Fig. 4.1, the agreement in the region of the liquid is shown. After the pioneering work of Alder and Wainwright, Hoover and Ree [19] studied the freezing transition of hard spheres and hard disks by calculating with computer simulation the free energy of liquid and solid phases. They found that the transition is first order in both three and two dimensions. More recent work confirmed the existence of a liquid-solid transition in hard spheres with more refined calculation of the density and pressure at melting [20, 21]. We report the phase diagram obtained with Monte Carlo simulation (see Chap. 5) by Zykova-Timan et al. [21] (Fig. 4.2). We have to mention that the results on the freezing of hard disks is part of the very controversial issue on the two-dimensional melting; see, for instance, K. J. Strandburg [23]. In a recent paper [24], an experimental study is reported on quasitwo-dimensional colloidal spheres with evidence of a first order transition from a liquid to an hexatic phase and a continuous transition from the hexatic to the crystal phase. In a 2D crystal the long range order does not exist at variance with the 3D case. The melting of a 2D crystal is characterized by the appearance of topological defects, the resulting phase, called hexatic, shows a quasi long range orientational order similar to a nematic phase of liquid crystal [25, 35].
120
4 Theoretical Studies of the Structure of Liquids
Fig. 4.1 CS equation of state of the hard sphere fluid (red line) compared with the results of an experiment on a colloidal suspension (points), adapted with permission from ref. [18]. Copyright 1993 American Physical Society
Experiment HS-CS equation
12
PV/NkBT
10
8
6
4
2
0
0
0.1
0.2
0.3
0.4
0.5
η
20
15 p (kBT/σ3)
Fig. 4.2 Phase diagram of the hard sphere system. The filled circles are simulation results of Zykova-Timan et al. [21]. The solid line of the liquid branch is the CS-EOS, while the solid line of the crystal branch is the EOS of Hall [22]. The coexistence pressure is pco = 11.576. The freezing point is located at ηf = 0.492, while the melting point is at ηm = 0.545. Redrawn with permission from [21]. Adapted with permission from ref. [21]. Copyright 2010 American Chemical Society
Crystal (fcc)
pco 10
Fluid
5
freezing
melting
0 0.2
0.3
0.4
0.5 η
0.6
0.7
4.13 Percus-Yevick for the Hard Sphere Fluid
121
4.13 Percus-Yevick for the Hard Sphere Fluid The PY approximation gives good results for the hard sphere fluid [26]. Equation (4.128) in the case of hard spheres gives
f (r) =
⎧ ⎨ −1 ⎩
rσ
and for the direct correlation function ⎧ ⎨ −y(r) c(r) = ⎩ 0
rσ
by taking into account the condition (4.144), we can write the PY in terms of the y(r) function as y(r) = −c(r)
rσ
(4.150)
The OZ equation can be exactly solved and the c(r) has a simple form, with x = r/σ c(r) =
⎧ ⎨ a0 + a1 x + a3 x 3 ⎩
x1
where a0 = −
(1 + 2η)2 (1 − η)4
a1 = −6η
a3 =
1 + 12 η
2
(1 − η)4 1 ηa0 2
By taking the Fourier transform of the c(r), it is possible to derive the S(k) with Eq. (4.83) and then the g(r). The RDF of hard spheres obtained with the PY approximations are shown in Fig. 4.3. At increasing packing fraction, both the
122
4 Theoretical Studies of the Structure of Liquids
4
Percus-Yevick ln y(r)
3
g(r)
101
η=0.20 η=0.30 η=0.40 η=0.45
η=0.45
100 10-1
2
1
0
2 r/σ
3
4
1
0
1
2
3
4
r/σ
Fig. 4.3 Radial distribution function of a hard sphere liquid at increasing packing fraction in the PY approximation. In the inset the function y(r) for η = 0.45. The calculations have been performed with the use of the software OZ, Version 4.2.2, P. Linse, Lund University, Lund, Sweden 2015
increase of the height of the first peak and the onset of oscillations are evident, indicating the formation of well-defined shells of spheres. The hard spheres cannot penetrate in the excluded volume around the particle in the origin, but at increasing density, they accumulate in the first shell at r = σ as a consequence of the repulsion of the other spheres. The first minimum and the other shells become more well defined at increasing η. Starting from the PY theory, Verlet and Weis [27] were able to parametrize the g(r) obtained by computer simulations with the formula:
g(r, σ ) = gP Y (r, σ ) +
g(r, σ ) = 0r < σ
(4.151)
A exp [μ(r − σ )] cos [μ(r − σ )] r > σ r
(4.152)
where
σ σ
3 =1−
η 16
(4.153)
The main point is the presence of parameters A and μ chosen to get the peak of the RDF that reproduces the correct pressure. The Verlet-Weis formula reproduces exactly the hard sphere structure. In Fig. 4.4, we compare the PY and the VerletWeis results. As can be seen more clearly in the inset, the main difference resides in the height of the main peak, and the discrepancy increases at increasing η.
4.14 Equation of State and Thermodynamic Inconsistency
η=0.3 VW η=0.3 PY η=0.45 VW η=0.45 PY
4 g(r)
4
3
123
g(r)
2 1
2
1.1 r/σ
1
0
1
2
3
4
r/σ
Fig. 4.4 Comparison of the RDF for hard spheres calculated with the PY approximation and the exact values reproduced with the Verlet-Weis formula (4.152). The calculations have been performed with the use of the software OZ, Version 4.2.2, P. Linse, Lund University, Lund, Sweden 2015
4.14 Equation of State and Thermodynamic Inconsistency By using g(σ ) = − lim c(r) r→σ −
in Eq. (4.148), we can derive the HS pressure in the PY approximation βpv 1 + 2η + 3η2 = ρ (1 − η)2
(4.154)
The pressure can be calculated also in another way. We can start from the isothermal compressibility KT obtained from S(k = 0) or c(k = 0). Then by integrating KT , we get another expression for the pressure, the compressibility pressure βp c 1 + η + η2 = ρ (1 − η)3
(4.155)
The pressure functions pv and pc are different. This problem is called the thermodynamic inconsistency. It is a consequence of the approximations involved
124
4 Theoretical Studies of the Structure of Liquids
HNC-V
Hard Spheres Equation of State
PY-C
pV/NkBT
10
Carnahan-Starling PY-V 5 HNC-C
0
0
0.1
0.2
0.3
0.4
0.5
η
Fig. 4.5 Equation of state of the hard sphere fluid. Curves calculated with the different approximations. The points represent the CS equation. The calculations have been performed with the use of the software OZ, Version 4.2.2, P. Linse, Lund University, Lund, Sweden 2015. See also ref. [13].
in the derivation of closure relations for the OZ equation. Different quantities are calculated with different levels of approximation. In the figure different equations of state (EOS) are reported and compared with the (CS) equation of state. As said above, the CS-EOS is in perfect agreement with the simulation results of the hard sphere model, and it can be also obtained from a linear combination βpCS 2 βpc 1 βpv = + ρ 3 ρ 3 ρ
(4.156)
Both PY-EOS and HNC-EOS show the thermodynamic inconsistency, since also HNC has different results depending on the route, virial or compressibility, to calculate the pressure. In both the cases the c-EOS and the v-EOS sorround the exact result, see also ref. [13]. The PY works better than HNC. In particular PY(c)-EOS is the closest to the CS (Fig. 4.5).
4.15 Routes to Consistency: Modified HNC and Reference HNC Other methods that have been proposed and tested to solve the problem of the thermodynamic inconsistency are based on the HNC approximation. Let us start with the modified HNC. We recall the exact formula for the RDF
4.16 Perturbation Theories: Optimized RPA
g(r) = e−βu(r)+h(r)−c(r)+d(r)
125
(4.157)
Rosenfeld e Ashcroft [28] proposed to calculate the bridge function d(r) by using a reference system like the HS fluid. The HS system is very well known by means of the numerical simulation. It is possible to compare the HNC g(r) of the HS fluid with the true result of the simulation. In this way the behaviour of d(r) can be obtained as function of η dH S (r) = ln [yH S (r)] − [hH S (r) − cH S (r)]
(4.158)
Therefore it is assumed d(r) ≈ dH S (r; η)
(4.159)
in Eq. (4.157). The packing fraction η can be used as a free parameter to impose the consistency. Rosenfeld and Ashcroft found that a large body of computer simulation data compiled for a variety of quite disparate interparticle potentials can be fitted rather accurately by a single one-parameter family of bridge functions. A more refined theory was introduced later, the reference HNC (RHNC) [29] where the determination of the bridge function is coupled with a minimization procedure for the free energy.
4.16 Perturbation Theories: Optimized RPA As said in the introduction, one of the difficulties in the study of the physics of liquid matter is the lack of a simple system to use as starting point for a perturbation theory where it is possible to perform an expansion in terms of some parameter. The virial expansion works only in the limit of very low density. Perturbation theories however have been formulated based on the idea to use a reference liquid system that is well known in all its properties, for instance, the HS liquid. The starting point is to separate the potential into two parts u(r) = u0 (r) + u1 (r)
(4.160)
where u0 (r) is the short-range repulsive part of the potential and u1 (r) is the remaining usually attractive part. The approach consists in assuming the system (0) composed of particles interacting with u0 (r) as the reference system and applying the potential u1 (r) as a perturbation. The most successful method was introduced by Weeks, Chandler and Andersen (WCA) [30, 31], and it is called the optimized random phase approximation (ORPA) [2]. The ORPA was recently reformulated in terms of a variational problem to take into account continuous potential with soft repulsion [6, 32].
126
4 Theoretical Studies of the Structure of Liquids
The ORPA closure can be defined by splitting the correlation functions as h(r) = h0 (r) + h1 (r)
(4.161)
c(r) = c0 (r) + c1 (r)
(4.162)
where (0) refers to the reference system, and h1 (r) and c1 (r) are the modifications induced by the perturbation. It is assumed that the correlation functions and the thermodynamic properties of the reference system are known. h0 (r) and c0 (r) satisfy the OZ equation as h(r) and c(r). By taking into account the OZ in k space, a relation between h1 (r) and c1 (r) can be derived h˜ 1 (k) =
c˜1 (k)S02 (k) 1 − ρ c˜1 (k)S0 (k)
(4.163)
where S0 (k) = 1 + ρ h˜ 0 (k). The ORPA is defined by the following assumptions: c1 (r) = −βu1 (r) for r > σ
(4.164)
h1 (r) = 0 for r < σ
(4.165)
where σ is in general a crossover distance between the short- and the longrange behaviour of the correlation functions. Equation (4.164) is equivalent to the RPA already introduced; now the condition (4.165) prevents from modifying the correlation function of the reference system. In particular in the case that the reference system is the HS, it implies that g(r) = 0 for r < σ is preserved. Equation (4.163) is equivalent to the OZ equation, and to close the relation between h1 (r) and c1 (r), it is possible to implement a procedure to obtain c1 (r). In practice taking into account the constraint given by Eq. (4.164), c1 (r) inside the core can be obtained by minimizing the functional
F [c1 ] = −
/ k . ρS0 (k)c˜1 (k) + ln 1 − ρS0 (k)c˜1 (k) . (2π )3
(4.166)
4.17 Models for Colloids At variance with the case of systems composed of atoms, where the forces are established by the electronic structures, in systems like colloids the forces between the colloidal particles are determined by the dissolving medium. A good consequence is that sometimes experimentalist could play to tune the potential in simple forms, as the example shown before concerning the hard sphere model. In recent time the possibility of studying colloidal systems with simple models had
4.17 Models for Colloids
127
recovered a number of exotic potentials that have demonstrated to be very useful in the comprehension of the properties of different types of colloids. A relative simple potential has been the basis for different applications in the study of colloids. It is the square well (SW) potential defined with a repulsive hard sphere part, and the attractive or repulsive interaction ⎧ ⎪ ⎪ −∞ ⎪ ⎪ ⎪ ⎨ uSW (r) = − ⎪ ⎪ ⎪ ⎪ ⎪ ⎩ 0
r λσ
where σ is the hard core diameter and and λσ are, respectively, the depth and the range of the well. If > 0, the SW is attractive and it is the case we consider. In Figs. 4.6 and 4.7, the RDF obtained by the authors with different methods are shown and compared with the simulation [33]. The reported data are for λ = 1.5σ , ρ = 0.8σ 3 and kB T / = 1.0. It is evident that ORPA and the optimized cluster theory (OCT), the latter an improvement of the ORPA, work well; they are in much better agreement with
Fig. 4.6 Radial distribution function for a fluid of particles interacting with the SW potential (4.167) with the parameters λ = 1.5σ , ρ = 0.8σ 3 and kB T / = 1.0. The figure shows the results of the computer simulation compared with the calculations obtained with ORPA and OCT. Reproduced with permission from [33]. © IOP Publishing Ltd
128
4 Theoretical Studies of the Structure of Liquids
Fig. 4.7 The same as in Fig. 4.6 with the calculations obtained with the use of PY and HNC approximations. Reproduced with permission from [33]. © IOP Publishing Ltd
simulation in comparison with HNC and PY. OCT shows a better agreement in the peak region. We note the discontinuity at the distance λ due to the form of the SW potential. Integral equations have been used to study some properties of the Janus particles and a generalization of the Janus particles, the Kern-Frenkel potential for patchy colloids [34]. It is a system of hard sphere particles interacting with a SW potential modulated by a factor dependent on the orientation of the attractive patches on the spheres. Patchy colloids are even more suitable for describing a large number of selfassembled structure in different technological applications. The RHNC is found to work rather well for these systems.
References 1. Egelstaff, P.A.: An Introduction to the Liquid State. Academic, London (1967) 2. Barker, A., Henderson, D.: Rev. Mod. Phys. 48, 587 (1976) 3. Stanley, H.E.: Phase Transition and Critical Phenomena. Oxford University Press, New York (1971) 4. Kirkwood, J.G.: J. Chem. Phys. 3, 300 (1935) 5. Caillol, J.M.: J. Phys. A 35, 4189 (2002) 6. Pastore, G., Akinlade, O., Matthews, F., Badirkhan, Z.: Phys. Rev. E 57, 460 (1998)
References
129
7. Percus, J.K.: Phys. Rev. Lett. 8, 462 (1962) 8. Hohenberg, P., Kohn, W.: Phys. Rev. 136, B864 (1964) 9. Kohn, W., Sham, L.J.: Phys. Rev. 140, A1133 (1965) 10. Dreizler, R.M., Gross, E.K.U.: Density Functional Theory. Springer, Berlin (1990) 11. Mermin, N.D.: Phys. Rev. 137, A1441 (1965) 12. Wu, J.: Wu, J. (ed.) Variational Methods in Molecular Modeling, p. 65. Springer, Singapore (2017) 13. Hansen, J.P., McDonald, J.R.: Theory of Simple Liquids. Academic, Oxford (2013) 14. Percus, J.K., Yevick, G.J.: Phys. Rev. 110, 1 (1958) 15. Rovere, M., Tosi, M.P.: Rep. Prog. Phys. 49, 1001 (1986) 16. Alder, B.J., Wainwright, T.E.: J. Chem. Phys. 27, 1208 (1957) 17. Alder, B.J., Wainwright, T.E.: Phys. Rev. 127, 359 (1962) 18. Piazza, R., Bellini, T., De Giorgio, V.: Phys. Rev. Lett. 71, 4267 (1993) 19. Hoover, W.G., Ree, F.H.: J. Chem. Phys. 47, 3609 (1968) 20. Noya, E.G., Vega, C., de Miguel, E.: J. Chem. Phys. 128, 154507 (2008) 21. Zykova Timan, T., Horbach, J., Binder, K.: J. Chem. Phys. 133, 014705 (2010) 22. Hall, K.R.: J. Chem. Phys. 57, 2252 (1972) 23. Strandsburg, K.J.: Rev. Mod. Phys. 60, 60 (1988) 24. Thorneywork, A.L., Abbott, J.L., Aarts, D.G.A.L., Dullens, R.P.A.: Phys. Rev. Lett. 118, 158001 (2017) 25. Kosterlitz, J.M., Thouless, D.J.: J. Phys. C Solid State Phys. 6, 1181 (1973) 26. Henderson, D.: Condens. Matter Phys. 12, 127 (2009) 27. Verlet, L., Weis, J.J.: Phys. Rev. A 5, 93 (1972) 28. Rosenfeld, Y., Ashcroft, N.W.: Phys. Rev. A 20, 1208 (1979) 29. Lado, F., Foiles, S.M., Ashcroft, N.W.: Phys. Rev. A 28, 2374 (1983) 30. Weeks, J., Chandler, D., Andersen, H.: J. Chem. Phys. 54, 5237 (1971) 31. Andersen, H., Chandler, D., Weeks, J.: J. Chem. Phys. 56, 3812 (1972) 32. Kahl, G., Hafner, J.: Phys. Rev. A 29, 3310 (1984) 33. Lang, A., Kahl, G., Likos, C.N., Löwen, H., Watzlawek, M.: J. Phys.: Condens. Matt. 11, 10143 (1999) 34. Giacometti, A., Gögelein, C., Lado, F., Sciortino, F., Ferrari, S., Pastore, G.: J. Chem. Phys. 140, 094104 (2014) 35. Halperin, B.I. and Nelson, David R.: Phys. Rev. Lett. 41(2), 121–124 (1978)
Chapter 5
Methods of Computer Simulation
In statistical mechanics, few models can be exactly solved. For many years, this has been a strong limitation for the theoretical studies of condensed matter. With the rapid progress in the computational technology, the computer simulation of many particle interacting systems, developed in the second part of the twentieth century, becomes an essential method in statistical mechanics and in condensed matter physics [1, 2]. With computer simulation, the calculation of the properties of microscopic models is done exactly apart for statistical errors that are usually estimable like error bars in experiment. So numerical simulations can be used to perform virtual experiments on different fluid models making possible to test their predictive abilities. In recent time, computer simulation has become from a simple support to theoretical approaches and experiments, a third methodology able to set up a complete phenomenology of the systems under study. Simulation methods in particular were essential for the progress in the study of the physics of liquids. In recent years, they achieved many successes also in applications to complex systems, like amorphous solid, colloids and biological matter. The first technique of simulation, the Monte Carlo (MC) method, was developed during the Second World War at Los Alamos. Monte Carlo simulations are based on stochastic sampling with the use of random numbers. Sampling with random numbers were implemented long time ago in the eighteenth century by the naturalist Buffon; another similar method was introduced by the Italian mathematician Lazzarini in 1901 [1]. In the twentieth century, Fermi used random numbers to make predictions on nuclear fission processes. On the line of Fermi, von Neumann and Ulam suggested to use computational methods based on random sampling to study problems involved in the nuclear fission. It seems that the name Monte Carlo for the method was proposed by an uncle of Ulam, who was an experienced player of casino games. The first calculations on neutron transport were performed with FERMIAC, an analog computer invented by Fermi (https://www.lanl.gov/museum/ discover/docs/fermiac.pdf). After the war, the electronic computer called ENIAC (Electronic Numerical Integrator and Computer) was used [3]. © Springer Nature Switzerland AG 2021 P. Gallo, M. Rovere, Physics of Liquid Matter, Soft and Biological Matter, https://doi.org/10.1007/978-3-030-68349-8_5
131
132
5 Methods of Computer Simulation
Metropolis, an American physicist of Greek origin, collaborated with von Neumann, Ulam and Fermi to improve the use of random sampling for applications in statistical mechanics. Metropolis after the war pursuit calculations on many body particles with a computer called MANIAC (Mathematical Analyzer, Numerical Integrator, and Computer) completely devoted to perform Monte Carlo simulations. The first simulation on a hard disk fluid has been published in 1953 [4]. In 1957 Alder and Wainwright invented a new method, called molecular dynamics, based on the numerical solution of the Newton equations for a large number of interacting particles. Molecular dynamics is a deterministic method able to study the dynamical behaviour of systems composed of many atoms. The first simulation of Alder and Wainwright was done on a hard disk system [5]. In the chapter, we consider in sequence the basic principles and methodologies of molecular dynamics and Monte Carlo method. In the second part, we will discuss how the computational methods can be applied to the study of phase coexistence and phase transitions.
5.1 Molecular Dynamics Methods 5.1.1 Molecular Dynamics and Statistical Mechanics We consider a system composed of N atoms governed by classical mechanics. The positions and the momenta of the system evolve in time with the equations r˙i =
∂H ∂pi
p˙ i = −
∂H , ∂ri
(5.1)
where the Hamiltonian is given by H (pN , r N ) = K(pN ) + U (r N ) .
(5.2)
From the time evolution of (p N , r N ), it is possible to derive the time average value of an observable X(p N , r N ) 1 X¯ = lim τ →∞ τ
τ
dtX pN , r N .
(5.3)
0
The limit τ → ∞ makes the average independent from the initial conditions p N (t = 0), r N (t = 0). Statistical mechanics is based on the ergodic hypothesis that the ensemble average, indicated with X, is equivalent to the time average (5.3)
X = X¯ .
(5.4)
5.1 Molecular Dynamics Methods
133
In molecular dynamics (MD), Eq. (5.1) is solved numerically. In the numerical procedure the time variable is discretized by introducing a finite time step Δt; in this way t → tk = kΔt and the average is performed on a finite time τ = nΔt n 1 N X p (tk ), r N (tk ) X¯ MD n
(5.5)
k=1
The evolution of the system starts from a given initial state, where positions and velocities are assigned to the particles. In the presence of conservative forces, the Hamiltonian is a conserved quantity, so the total energy E = K + U is constant at each time step. The averaged values obtained in MD approximate the true microcanonical averages X¯
MD
Xmicrocan .
(5.6)
The MD averages are approximated since they are obtained in a finite time, so with a finite number of states. Due to the finite time, the initial conditions could bias the averages. To take into account this problem, it is required to check, to compare and in case to average on results obtained with different initial conditions. Moreover, while in real systems the trajectories of the particles are exactly determined by the Newton equations at each time, in the numerical procedure at each discrete time tk , the set of positions r 1 . . . r N and velocities v 1 . . . v N are calculated with some numerical incertitude. Therefore, the calculated trajectory in the phase space is different from the real one. It is important in this respect that such virtual trajectory follows approximately the true one. The true trajectory is unknown and only indirect tests can be performed. To obtain a good approximation of the averages in the microcanonical ensemble, it is necessary that the simulated system moves along a constant energy surface in the phase space. For this reason the Newton equations must be solved with appropriate algorithms. In any case it is found that the time interval cannot be too large to avoid that the solution becomes unstable. Later we will discuss more about the order of magnitude of the time step. It must be taken into account also that the system is confined in a finite volume. The finite volume problems are only partially alleviated by the use of periodic boundary conditions. The simulated system is never in the thermodynamic limit, and finite volume could become relevant in particular in the study of phase transitions. We will discuss also this issue later.
134
5 Methods of Computer Simulation
5.1.2 Algorithms for the Time Evolution As said above, in MD the time evolution of the system takes place along discrete time steps. To simplify the notation, we will indicate in the following the vectors with the notation r, v, etc. instead of r, v, etc. The first idea could be to use in the single time step a Taylor expansion r (t + Δt) = r (t) +
dr dt
1 Δt + 2 t
d 2r dt 2
Δt 2 + · · ·
(5.7)
t
where the derivatives are determined by the forces at time t calculated from the positions r(t). If the expansion includes only the second order term, it is equivalent to assume a motion with constant acceleration during the time step. This approximation shows serious problems; with the use of a simple algorithm truncated at the second order, the system is driven toward wrong directions in the phase space. This shows that one must be very careful in implementing the algorithms to solve the Newton equations on computers. A good algorithm must satisfy at least the following conditions: 1. 2. 3. 4.
It must be simple and fast enough It must give stable trajectories with enough long time step The temporal evolution must be reversible It must conserve energy (and momenta).
We discuss now the main algorithms used in the MD simulations. We have to take into account that the time step is an important parameter. With a too small value, we explore a too limited portion of the trajectory. A too long time step could produce numerical instability. A good estimation of the time step Δt to use would be to consider the time scale τ of the phenomenon under study and consider a time step Δt order of magnitude less than τ . To study atomistic systems, this implies Δt ≈ 10−15 s, so order of femtosecond or 10−3 ps.
5.1.3 Predictor/Corrector The expansion (5.7) can approximately predict the positions, velocities and accelerations at time t + Δt starting from given values at time t. The predicted values however must be corrected with a method elaborated for the solutions of differential equations. The best performing procedure is the one elaborated by Gear [1]. It consists in three steps. The first step corresponds to the expansion r(t + Δt) = r(t) +
M m=1
bm (t)
Δt m m!
5.1 Molecular Dynamics Methods
135
bk (t + Δt) = bk (t) +
M
bm (t)
m=k+1
bm (t) =
d mr dt m
Δt m−k (m − k)!
m > 0.
(5.8)
In the expansion, obviously b_1(t) = v(t) and b_2(t) = a(t). Once we get the positions at time t + Δt, we can calculate the forces and obtain the new accelerations a^c(t + Δt), which are different from the predicted a(t + Δt). It is now assumed that a^c(t + Δt) are the correct accelerations, so a correction term can be defined as

\Delta a(t + \Delta t) = a^{c}(t + \Delta t) - a(t + \Delta t) .    (5.9)
According to Gear, the predictions of Eq. (5.8) can be corrected by performing the following steps:

r^{c}(t + \Delta t) = r(t + \Delta t) + c_0\,\Delta a(t + \Delta t)
v^{c}(t + \Delta t) = v(t + \Delta t) + c_1\,\Delta a(t + \Delta t)
a^{c}(t + \Delta t) = a(t + \Delta t) + c_2\,\Delta a(t + \Delta t)
b_3^{c}(t + \Delta t) = b_3(t + \Delta t) + c_3\,\Delta a(t + \Delta t)
...
b_M^{c}(t + \Delta t) = b_M(t + \Delta t) + c_M\,\Delta a(t + \Delta t) .    (5.10)

The coefficients c_i have been estimated by Gear to give the most stable and accurate trajectories. For M = 3, for instance, they are c_0 = 1/6, c_1 = 5/6, c_2 = 1 and c_3 = 1/3. This algorithm works well, and it was used in the early implementations of MD simulations. Later another class of algorithms was developed; they are now more commonly used since they are easier to implement and remain stable with longer time steps. They are the Verlet algorithms.
5.1.4 Verlet Algorithms

Verlet in 1967 [6] proposed his algorithm based on a simple idea. We can start from positions, velocities and accelerations at time t, r(t), v(t) and a(t), and write approximate equations for a step forward and a step backward as

r(t + \Delta t) = r(t) + v(t)\Delta t + \frac{1}{2} a(t)\Delta t^{2}

r(t - \Delta t) = r(t) - v(t)\Delta t + \frac{1}{2} a(t)\Delta t^{2} ;

by summing both sides, we get

r(t + \Delta t) = -r(t - \Delta t) + 2r(t) + a(t)\Delta t^{2} .    (5.11)
At t + Δt the acceleration is obtained from the potential as

a(t + \Delta t) = -\frac{1}{m}\nabla U\!\left(r^{N}(t + \Delta t)\right)    (5.12)

and the algorithm (5.11) can be restarted. The velocity is not required, but at each time step it can be calculated as

v(t) = \frac{r(t + \Delta t) - r(t - \Delta t)}{2\Delta t} .    (5.13)
It can be shown that this simple algorithm satisfies all the conditions written above with a time step of the order of the femtosecond. The Verlet formula has been very successful, since it requires a simple routine and a small amount of memory. The only weak point is that the velocities are not calculated directly. Sometimes more precise values for the velocities are required. With the development of computers, it became possible to use more memory to store instantaneous data, and modifications of the pure Verlet algorithm were proposed.
Velocity Verlet

The velocity Verlet algorithm is derived from the previous one, but it gives a better estimation of the velocity. The starting point is at t, where we know r(t), v(t) and a(t), and we compute

r(t + \Delta t) = r(t) + v(t)\Delta t + \frac{1}{2} a(t)\Delta t^{2} .    (5.14)

Then the velocity is calculated at half of the time step,

v\!\left(t + \frac{\Delta t}{2}\right) = v(t) + a(t)\frac{\Delta t}{2} .    (5.15)

From the positions we calculate the forces at t + Δt and then the accelerations a(t + Δt), and the velocity at t + Δt is given by

v(t + \Delta t) = v\!\left(t + \frac{\Delta t}{2}\right) + a(t + \Delta t)\frac{\Delta t}{2} .    (5.16)
Fig. 5.1 Scheme of the velocity Verlet algorithm
Fig. 5.2 Scheme of the leapfrog Verlet algorithm
By combining Eqs. (5.15) and (5.16), we can write the final step as

v(t + \Delta t) = v(t) + \frac{1}{2}\left[a(t) + a(t + \Delta t)\right]\Delta t .    (5.17)

So the algorithm is reduced to the two steps (5.14) and (5.17) with the calculation of a(t + Δt) in the middle. A scheme of the algorithm is shown in Fig. 5.1.
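As an illustration of this two-step structure, the following minimal Python sketch (our own example, not from the text) advances one velocity Verlet step; the function force_func, the mass m and the arrays r, v, a are assumed to be supplied by the user.

```python
import numpy as np

def velocity_verlet_step(r, v, a, dt, force_func, m):
    """One velocity Verlet step, Eqs. (5.14) and (5.17)."""
    r_new = r + v * dt + 0.5 * a * dt**2        # Eq. (5.14): positions at t + dt
    a_new = force_func(r_new) / m               # forces, hence accelerations, at t + dt
    v_new = v + 0.5 * (a + a_new) * dt          # Eq. (5.17): velocities at t + dt
    return r_new, v_new, a_new
```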
Leapfrog

The so-called leapfrog algorithm is also largely used. In the first step, given the positions r(t), the accelerations a(t) and the velocities at t − Δt/2, the velocities at t + Δt/2 are calculated as

v\!\left(t + \frac{\Delta t}{2}\right) = v\!\left(t - \frac{\Delta t}{2}\right) + a(t)\Delta t ,    (5.18)

then

r(t + \Delta t) = r(t) + v\!\left(t + \frac{\Delta t}{2}\right)\Delta t .    (5.19)
The name is due to the procedure, shown schematically in Fig. 5.2. In every step the velocities leap over the positions r(t), going from t − Δt/2 to t + Δt/2. Then it is the turn of the positions to jump from t to t + Δt over the velocities v(t + Δt/2).
The velocities at time t can be obtained as

v(t) = \frac{1}{2}\left[v\!\left(t + \frac{\Delta t}{2}\right) + v\!\left(t - \frac{\Delta t}{2}\right)\right] .    (5.20)
5.1.5 Calculation of the Forces

The computer simulation is usually implemented by assuming a system of N particles in a cubic box of length L and volume V = L³. Since N is typically of the order of 100 ÷ 10,000, very far from the thermodynamic limit, periodic boundary conditions (PBC) are applied in the three directions to avoid surface effects. In particular cases, the PBC can be relaxed in some of the directions to simulate interfacial phenomena. In practice the cell is repeated in space along the x–y–z directions. In principle each particle would interact with all the particles in the box and with their images in the periodic repetition of the box. Since the calculation of the forces is particularly time-consuming, the potential is truncated at a cut-off radius rc ≤ L/2. Then the so-called minimum image convention is assumed: each particle is at the centre of a sphere of radius rc, and it interacts only with the other particles and with the images that are inside that sphere. In this way the maximum range of the potential is L/2. In Fig. 5.3, there is a two-dimensional representation of the central cell with its images around it.

Fig. 5.3 Two-dimensional representation of the simulation cell with the cell images built by the periodic boundary conditions. Each particle in the cell interacts with the particles inside the circle (sphere in three dimensions) with radius rc

The discontinuity of the truncated potential can produce a spurious contribution to the forces, so it is common to use a truncated and shifted potential:
u_s(r) = \begin{cases} u(r) - u(r_c) & r \le r_c \\ 0 & r > r_c \end{cases}    (5.21)
One must take into account in comparing different simulation studies that the results could depend on the choice of rc . Long-range corrections can be applied to thermodynamic quantities to take into account the truncation of the potential, as we will see below.
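The following minimal Python sketch (our own illustration, in reduced units) shows how Eq. (5.21) and the minimum image convention can be implemented; the double loop over pairs is simple but not optimized, whereas production codes use neighbour lists.

```python
import numpy as np

def lj_energy_forces(r, L, rc, eps=1.0, sigma=1.0):
    """Truncated and shifted LJ, Eq. (5.21), with the minimum image convention.
    r: (N, 3) positions in a cubic box of side L, with rc <= L/2."""
    N = len(r)
    U = 0.0
    F = np.zeros_like(r)
    u_rc = 4 * eps * ((sigma / rc)**12 - (sigma / rc)**6)   # shift u(rc)
    for i in range(N - 1):
        for j in range(i + 1, N):
            d = r[i] - r[j]
            d -= L * np.rint(d / L)            # minimum image convention
            r2 = np.dot(d, d)
            if r2 < rc * rc:
                inv6 = (sigma * sigma / r2)**3
                U += 4 * eps * (inv6 * inv6 - inv6) - u_rc
                # pair force; the constant shift does not change the force
                fscal = 24 * eps * (2 * inv6 * inv6 - inv6) / r2
                F[i] += fscal * d
                F[j] -= fscal * d
    return U, F
```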
5.1.6 Initial Configuration

If the numerical procedure and the algorithms are correct, the final results of a computer simulation must be independent of the starting conditions. It is true, however, that at the beginning we have to place the particles in the simulation box, and the configuration that we can produce is off equilibrium. By running the simulation, the system will finally reach an equilibrated point of the phase space, provided we are not far from ergodic conditions. So in practice it is necessary to avoid initial states very far from equilibrium. For example, if in the initial configuration two particles are very close, there would be a very large repulsive contribution to the energy, and to dissipate such a large energy contribution one would need a very long equilibration run. For this reason it is convenient to start with the particles located at distances larger than the repulsive zone of the potential. In this respect it is useful to place the particles of the starting configuration in a crystalline phase, since they are then sufficiently far apart. The first part of the equilibration procedure will consist in the melting of the crystal. This can be achieved by starting with a high enough initial temperature. We discuss now how the temperature can be implemented and estimated in the simulation.
5.1.7 Temperature in the Microcanonical Ensemble

In the microcanonical ensemble, in the absence of external forces, the total energy and momentum are conserved, but the temperature fluctuates. The instantaneous temperature is related to the kinetic energy on the basis of the equipartition principle (in three dimensions),

\frac{3}{2} N k_B T(t) = \frac{1}{2}\sum_i m_i v_i^{2} .    (5.22)
The initial temperature of the simulation is usually established by assigning the initial velocities to the particles with a Maxwell-Boltzmann distribution.
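As a sketch of this initialization step (our own illustration, in reduced units with an assumed simple cubic lattice), the velocity components can be drawn from a Gaussian of variance k_BT/m, which realizes the Maxwell-Boltzmann distribution; the centre-of-mass velocity is then removed.

```python
import numpy as np

def initial_configuration(n_side, rho, T, m=1.0, kB=1.0):
    """n_side**3 particles on a simple cubic lattice at number density rho,
    with Maxwell-Boltzmann velocities at temperature T (reduced units)."""
    N = n_side**3
    L = (N / rho)**(1.0 / 3.0)
    a = L / n_side
    idx = np.array([(i, j, k) for i in range(n_side)
                    for j in range(n_side) for k in range(n_side)])
    r = (idx + 0.5) * a                          # lattice positions inside the box
    v = np.random.normal(0.0, np.sqrt(kB * T / m), size=(N, 3))
    v -= v.mean(axis=0)                          # remove centre-of-mass drift
    return r, v, L
```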
During the simulation, the instantaneous temperature can be calculated from the kinetic energy through Eq. (5.22). The average temperature is instead obtained from the average value of the kinetic energy,

\frac{3}{2} N k_B \langle T \rangle = \left\langle \frac{1}{2}\sum_i m_i v_i^{2} \right\rangle .    (5.23)
5.1.8 Equilibration Procedure

As said above, to study a system in the liquid phase, it is convenient to start with the particles placed on a lattice and to prepare the system at high temperature. After a short run, of the order of 1–10 ps, the crystal melts and we obtain a non-equilibrated liquid. In the initial simulation run, we can check the behaviour of various quantities. In Fig. 5.4, as an example, an initial run for a system with a LJ potential is shown. The LJ reduced units are used, where we recall that the temperature is kBT/ε and the density is ρσ³ in terms of the LJ parameters. We start with the particles on a lattice with velocities distributed such that Tin = 2.0. The system evolves, and we note that the total energy remains constant, while both the kinetic and the potential energies relax in a few time steps and oscillate around an average value. The final temperature is just below Tin. It is possible to check that the crystal has melted.

Fig. 5.4 MD: first equilibration; the velocities of the particles are distributed so that the initial temperature is Tin = 2.0 (LJ units). The system evolves and spontaneously reaches an average T < Tin

After this initial run, if we want to reach a lower temperature, the simplest method is the velocity rescaling. During the equilibration, all the velocities are rescaled by a factor such that
v_i' = \chi\, v_i    (5.24)

with

\chi = \left(\frac{K_0}{K(t)}\right)^{1/2} ,    (5.25)

where K_0 is the kinetic energy corresponding to the target temperature and K(t) is the instantaneous kinetic energy. The rescaling is done every fixed number of time steps, so the system goes through intermediate temperatures, see Fig. 5.5, from Tin = 2.0 to T0 = 0.72. In Fig. 5.5, the behaviour of the energies is shown as the temperature is rescaled. At every rescaling the energy is not conserved. Finally, we reach a state where the average temperature is T0. At this point the velocity rescaling is switched off. The system evolves toward equilibrium, and we finally observe that quantities like the potential energy and the kinetic energy (the temperature) oscillate around a constant average value, while the total energy is constant, Fig. 5.6. It must be taken into account that this velocity rescaling can be a rather drastic method, particularly in some regions of the phase space. Other, less drastic, velocity rescaling methods have also been devised. One of the most used is the Berendsen algorithm [7], where the χ appearing in Eq. (5.24) is taken as a function of time,
Fig. 5.5 MD: equilibration with temperature rescaling. Note in the left panel the jump in the total energy at each velocity rescaling. On the right, the evolution of the instantaneous temperature as derived from the kinetic energy is reported
Fig. 5.6 MD: evolution in the equilibrium state
\chi = \left[1 + \frac{\delta t}{\tau_K}\left(\frac{K_0}{K(t)} - 1\right)\right]^{1/2} ,    (5.26)

so that the system relaxes toward the state with kinetic energy K_0 with a relaxation time constant τ_K. This time constant can be adjusted in order to get a fast enough equilibration without perturbing the system too much. Equation (5.26) implies that the kinetic energy change is given by

\Delta K = \left(K_0 - K(t)\right)\frac{\delta t}{\tau_K} .    (5.27)

Also in this case, as soon as equilibration is reached, the procedure is switched off and the system evolves in the microcanonical ensemble. If a simulation in the canonical ensemble is required, it has to be taken into account that with the Berendsen thermostat it is the kinetic energy that is constrained, while the canonical ensemble is isothermal and not isokinetic; for a discussion of this point, see [2]. The Berendsen thermostat can be improved with an algorithm where the rescaling of the velocities is done by adding a stochastic term to the Berendsen equation (5.27) [8]. At each step the kinetic energy is evolved for a single time step with a stochastic dynamics, and then a rescaling is applied to reach the target value. The authors show that this thermostat samples the canonical ensemble. This type of velocity rescaling is now largely used and implemented in packages like Gromacs [9].
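A minimal sketch of the Berendsen rescaling of Eq. (5.26) is given below; it assumes equal masses and reduced units, and it is meant only to illustrate the formula, not to replace the implementations available in packages like Gromacs.

```python
import numpy as np

def berendsen_rescale(v, T0, dt, tau_K, m=1.0, kB=1.0):
    """Rescale velocities toward the target temperature T0 using Eq. (5.26)."""
    N = len(v)
    K = 0.5 * m * np.sum(v**2)          # instantaneous kinetic energy
    K0 = 1.5 * N * kB * T0              # target kinetic energy, from Eq. (5.22)
    chi = np.sqrt(1.0 + (dt / tau_K) * (K0 / K - 1.0))
    return chi * v
```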
5.1.9 Thermodynamics and Structure

When the equilibration point is reached, we can proceed to calculate average values in the microcanonical ensemble. There are different quantities of interest, like the potential energy and the pressure. The pressure can be obtained from the virial theorem, already derived in Sect. 3.4.2,

P V = N k_B T - \frac{1}{3}\left\langle \sum_{i=1}^{N} r_i \cdot \nabla_i U \right\rangle .    (5.28)

Also the radial distribution function g(r) can be easily obtained from the configurations. If n^{(2)}(r) is the average number of atom pairs in the range (r, r + δr) and n^{(id)}(r) is the equivalent quantity in an ideal gas,

n^{(id)}(r) = \frac{4\pi\rho}{3}\left[(r + \delta r)^{3} - r^{3}\right] ,    (5.29)

then

g(r) = n^{(2)}(r)/n^{(id)}(r) .    (5.30)
More refined calculations of thermodynamic and structural quantities can be performed with methods that we will discuss later in this chapter.
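The histogram construction behind Eqs. (5.29)-(5.30) can be sketched as follows (an illustrative implementation of ours, assuming a cubic box and configurations stored as (N, 3) arrays).

```python
import numpy as np

def radial_distribution(configs, L, nbins=100):
    """g(r) from Eqs. (5.29)-(5.30): pair histogram normalized by the ideal gas."""
    rc = L / 2.0
    dr = rc / nbins
    hist = np.zeros(nbins)
    N = configs[0].shape[0]
    for r in configs:                          # loop over stored configurations
        for i in range(N - 1):
            d = r[i + 1:] - r[i]
            d -= L * np.rint(d / L)            # minimum image convention
            dist = np.sqrt((d**2).sum(axis=1))
            hist += np.histogram(dist, bins=nbins, range=(0.0, rc))[0]
    rho = N / L**3
    edges = np.arange(nbins + 1) * dr
    shell = 4.0 * np.pi * rho * (edges[1:]**3 - edges[:-1]**3) / 3.0   # Eq. (5.29)
    # factor 2: each pair counted once; average over particles and configurations
    g = 2.0 * hist / (len(configs) * N * shell)
    return 0.5 * (edges[1:] + edges[:-1]), g
```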
5.1.10 Long-Range Corrections

As said above, it is usual in computer simulation to truncate the potential to avoid very long computations. We can rewrite the potential as

U = U_C + U_{LR} ,    (5.31)

where

U_C = \sum_i \sum_{j>i} u\!\left(r_{ij} < r_c\right)    (5.32)

and U_{LR} is the long-range term. Now we know that the potential energy can be obtained from the pair correlation function g(r), as shown in Sect. 3.4,

U = \int_0^{\infty} 2\pi\rho\, r^{2} u(r) g(r)\, dr .    (5.33)
If we assume that g(r) = 1 for r > r_c, we can write a simple formula for the long-range correction in Eq. (5.31),

U_{LR} \approx \int_{r_c}^{\infty} 2\pi\rho\, r^{2} u(r)\, dr .    (5.34)

The long-range corrections can be calculated once in the simulation program. For a simple form of the potential, like LJ, the integral in Eq. (5.34) can be evaluated analytically.
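For instance, inserting the Lennard-Jones potential u(r) = 4ε[(σ/r)^{12} − (σ/r)^{6}] into Eq. (5.34) and carrying out the integral gives (with the same normalization convention as Eq. (5.34); conventions carrying an extra factor N for the total energy of the system are also common)

U_{LR} \approx \int_{r_c}^{\infty} 2\pi\rho\, r^{2}\, 4\epsilon\!\left[\left(\frac{\sigma}{r}\right)^{12} - \left(\frac{\sigma}{r}\right)^{6}\right] dr = \frac{8\pi\rho\epsilon\sigma^{3}}{3}\left[\frac{1}{3}\left(\frac{\sigma}{r_c}\right)^{9} - \left(\frac{\sigma}{r_c}\right)^{3}\right] .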
5.1.11 Ewald Method

In the case of potentials that do not decay over a short range, like the Coulomb potential, the simple formula (5.34) does not give the right estimation of the long-range contributions. It is more accurate to take into account the interaction of each particle with all the particles in the cell and with the particles in the periodic images. The simulation cell is treated like a unit cell in a crystal, and it is repeated along the three directions in space; see the two-dimensional representation in Fig. 5.7. In the case of a Coulomb potential, we have

U_{coul} = \frac{1}{2}\sum_{i=1}^{N} Z_i\, \Phi(r_i)    (5.35)

where

\Phi(r_i) = \sum_{j=1}^{N}\sum_{n} \frac{Z_j}{\left|r_{ij} + nL\right|} ;    (5.36)

the sum over n = (n_x, n_y, n_z) runs over all the periodic images, and for n = 0 the terms i = j must be excluded. The problem becomes similar to the calculation of the potential energy of an ionic crystal, and it can be treated with the method developed by Ewald [10, 11]. Now in the cell we have point-like charges with a background density ρ_back to assure charge neutrality,

\rho(r) = \sum_{R}\sum_{j} Z_j\, \delta\!\left(r - r_j - R\right) - \rho_{back} ,    (5.37)
where R = (nx L, ny L, nz L). We can screen each charge with an opposite distributed charge ρs (r) as shown schematically in Fig. 5.8. Then we subtract all the extra charge to keep the original distribution,
Fig. 5.7 Two-dimensional representation of a simulation cell with charged particles. The cell must be repeated in the space like a unitary cell of a crystal. Each particle in the cell interacts with all the other particles in the crystal
Fig. 5.8 Point-like charges screened by Gaussian distributions of opposite charge
\rho(r) = \rho(r) + \rho_s(r) - \rho_s(r) .    (5.38)

The ρ_s can be assumed to be a sum of Gaussian distributed charges,

\rho_s(r) = \sum_{R}\sum_{j} Z_j \left(\frac{\alpha}{\pi}\right)^{3/2} \exp\!\left[-\alpha\left(r - r_j - R\right)^{2}\right] ,    (5.39)

where α is a free parameter that we will consider below. In this way ρ_s(r) introduces a short-range term in the potential,

U_{coul} = U_{coul} - U_s + U_s = U_{SR} + U_s .    (5.40)

The short-range screening potential U_s can be written as

U_s = \frac{1}{2}\sum_{i=1}^{N} Z_i\, \Phi_s(r_i) ,    (5.41)

where Φ_s(r) is obtained from the Poisson equation

-\nabla^{2}\Phi_s(r) = 4\pi\rho_s(r) .    (5.42)
Since our system is composed of many cells repeated with a given periodicity, we can define a reciprocal lattice with vectors

K = \frac{2\pi}{L}\left(n_x, n_y, n_z\right)    (5.43)

and we can define the Fourier transform of ρ_s as

\rho_s(K) = \frac{1}{V_c}\int_{cell} dr\, \rho_s(r)\, e^{-iK\cdot r} .    (5.44)

Equation (5.42) can be written in k space as

K^{2}\Phi_s(K) = 4\pi\rho_s(K) .    (5.45)

From the charge distribution (5.39), we get

\rho_s(K) = \frac{1}{V}\sum_{j} Z_j\, e^{-iK\cdot r_j} \exp\!\left(-\frac{K^{2}}{4\alpha}\right) ;    (5.46)

from this, by using Eq. (5.45), we get
\Phi_s(K) = \frac{4\pi}{K^{2}}\sum_{j} Z_j\, e^{-iK\cdot r_j} \exp\!\left(-\frac{K^{2}}{4\alpha}\right) ,    (5.47)

then

\Phi_s(r) = \sum_{K\neq 0}\sum_{j} \frac{4\pi}{K^{2}}\, Z_j\, e^{-iK\cdot\left(r - r_j\right)} \exp\!\left(-\frac{K^{2}}{4\alpha}\right) ,    (5.48)

where the term K = 0 is cancelled by the background term. Now, by substituting (5.48) in (5.41), we have

U_s = \frac{1}{2}\sum_{i}\sum_{j}\sum_{K\neq 0} \frac{4\pi Z_i Z_j}{K^{2}}\, e^{-iK\cdot\left(r_i - r_j\right)} \exp\!\left(-\frac{K^{2}}{4\alpha}\right) ,    (5.49)

from which the self-term i = j must be subtracted. The self-term is easily evaluated as

U_{self} = \left(\frac{\alpha}{\pi}\right)^{1/2} \sum_{i} Z_i^{2} .    (5.50)
Let us look now at the short-range part. We can write in r space

\Phi_s(r) = \sum_{j} Z_j \sum_{R} \phi_1\!\left(\left|r - r_j - R\right|\right) ,    (5.51)

where φ_1 satisfies

-\nabla^{2}\phi_1(r) = 4\pi\rho_1(r)    (5.52)

with

\rho_1(r) = \left(\frac{\alpha}{\pi}\right)^{3/2} e^{-\alpha r^{2}} .    (5.53)

Equation (5.52) becomes

-\frac{1}{r}\frac{\partial^{2}}{\partial r^{2}}\left[r\phi_1(r)\right] = 4\pi\left(\frac{\alpha}{\pi}\right)^{3/2} e^{-\alpha r^{2}} ;    (5.54)

we write below how to arrive at the solution:

\frac{\partial^{2}}{\partial r^{2}}\left[r\phi_1(r)\right] = -4\pi\left(\frac{\alpha}{\pi}\right)^{3/2} r\, e^{-\alpha r^{2}}

\frac{\partial}{\partial r}\left[r\phi_1(r)\right] = 4\pi\left(\frac{\alpha}{\pi}\right)^{3/2}\int_{r}^{\infty} ds\, s\, e^{-\alpha s^{2}}
\frac{\partial}{\partial r}\left[r\phi_1(r)\right] = 4\pi\left(\frac{\alpha}{\pi}\right)^{3/2}\frac{1}{2\alpha}\, e^{-\alpha r^{2}} = 2\left(\frac{\alpha}{\pi}\right)^{1/2} e^{-\alpha r^{2}} .    (5.55)

By integrating from 0 to r, we get

r\phi_1(r) = \mathrm{erf}\!\left(\sqrt{\alpha}\, r\right) ,    (5.56)

where the error function erf(x) is defined as

\mathrm{erf}(x) = \frac{2}{\sqrt{\pi}}\int_{0}^{x} dt\, e^{-t^{2}} ;    (5.57)
now (5.51) becomes

\Phi_s(r) = \sum_{j} Z_j \sum_{R} \frac{1}{\left|r - r_j - R\right|}\, \mathrm{erf}\!\left(\sqrt{\alpha}\left|r - r_j - R\right|\right) .    (5.58)

The short-range potential U_{SR} = U_{coul} - U_s, see Eq. (5.40), can be written as

U_{SR} = \frac{1}{2}\sum_{i}\sum_{j}\sum_{R} \frac{Z_i Z_j}{\left|r_i - r_j - R\right|}\left[1 - \mathrm{erf}\!\left(\sqrt{\alpha}\left|r_i - r_j - R\right|\right)\right] = \frac{1}{2}\sum_{i}\sum_{j}\sum_{R} \frac{Z_i Z_j}{\left|r_i - r_j - R\right|}\, \mathrm{erfc}\!\left(\sqrt{\alpha}\left|r_i - r_j - R\right|\right) ,    (5.59)
where the complementary error function erfc(x) is used,

\mathrm{erfc}(x) = 1 - \mathrm{erf}(x) = \frac{2}{\sqrt{\pi}}\int_{x}^{\infty} dt\, e^{-t^{2}} .    (5.60)

Therefore, subtracting the self-term, we obtain the Coulomb energy as

U_{coul} = U_{SR} + U_s - U_{self} .    (5.61)

The Coulomb energy written in this form is calculated with a combination of the sum in k space (5.49) and the sum in r space (5.59). The parameter α must be carefully chosen: with a large α, the U_{SR} of Eq. (5.59) converges rapidly, but the convergence of the sum in k space (5.49) becomes very slow; the opposite happens for a too small α. The computation of U_{coul} is in any case computationally expensive, and for this reason an improved method called particle mesh Ewald (PME) is now largely used [9, 12].
5.2 Monte Carlo Simulation

5.2.1 Monte Carlo Integration and Importance Sampling

The Monte Carlo (MC) simulation method is based on the idea of calculating the averages of statistical mechanics with the use of numerical integration. The calculation of areas or volumes by using random numbers is an old idea, but nowadays the modern name, introduced by Metropolis, is used for a number of numerical methods based on random numbers that are applied in different fields. We consider now how MC can be used for simple integration. Suppose we have to integrate a function f(x) between a and b,

F = \int_a^b f(x)\, dx ;    (5.62)

there are different methods to perform the numerical integration, and one of them, not the most convenient in one dimension, is based on the use of the random numbers that a computer can produce. These random numbers are uniformly distributed between 0 and 1. We can extract a series of random numbers 0 < R_i < 1 with i = 1, ..., n and calculate (5.62) as

F \approx (b - a)\frac{1}{n}\sum_{i=1}^{n} f(x_i), \qquad x_i = a + (b - a) R_i .    (5.63)
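A minimal Python sketch of the estimator (5.63) (our own example, not from the text) is:

```python
import numpy as np

def mc_integrate(f, a, b, n=100000):
    """Plain Monte Carlo estimate of the integral of f on [a, b], Eq. (5.63)."""
    x = a + (b - a) * np.random.rand(n)    # uniform points in [a, b]
    return (b - a) * np.mean(f(x))

# example: integral of sin(x) on [0, pi], exact value 2
print(mc_integrate(np.sin, 0.0, np.pi))
```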
Like any other numerical integration method, the simple MC can be more or less successful depending on the behaviour of the function f(x). Suppose that the function is like f(x) in Fig. 5.9; the important region to sample is the one where the peak is present. In this case the use of uniformly distributed random numbers could give bad results. It would be useful to introduce a weighted distribution to maximize the sampling in the important region. This idea, introduced by von Neumann, is called importance sampling. We can introduce in the integral a distribution p(x) appropriate for our problem, for instance, the Gaussian in Fig. 5.9; the integral to calculate now becomes

F = \int_a^b \frac{f(x)}{p(x)}\left[p(x)\, dx\right] ,    (5.64)

where p(x) is normalized,

\int_a^b p(x)\, dx = 1 ,    (5.65)
Fig. 5.9 Example of a difficult function to integrate with the uniform random sampling (bold line). The Gaussian distribution function introduced to implement an importance sampling is shown as a broken line
and the integral can be calculated as

F \approx \frac{1}{n}\sum_{i=1}^{n} \frac{f(x_i)}{p(x_i)} ,    (5.66)

where the x_i are distributed according to p(x). At first sight, it seems that the method can work only if one knows very well the behaviour of the function f(x). This could be a difficult task when we have to deal with functions in multidimensional spaces. Von Neumann proposed a method to guess the appropriate distribution function in more general cases. It is from these ideas of von Neumann that Metropolis invented his algorithm.
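As a sketch of the estimator (5.66) (our own illustration, under the assumption that p(x) is normalized on the integration interval and can be sampled directly):

```python
import numpy as np

def mc_importance(f, sample_p, pdf_p, n=100000):
    """Importance-sampling estimate, Eq. (5.66): average of f(x)/p(x) with x ~ p."""
    x = sample_p(n)                   # n points distributed according to p(x)
    return np.mean(f(x) / pdf_p(x))
```

The closer p(x) follows the shape of f(x), the smaller the variance of the estimate.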
5.2.2 Integrals in Statistical Mechanics

Consider now the Boltzmann distribution in the canonical ensemble: if σ = (r_1, ..., r_N; p_1, ..., p_N) is a point in the phase space of our system, its probability is given by

\rho\left[H(\sigma)\right] = \frac{e^{-H(\sigma)/k_B T}}{Q_N(T)}    (5.67)

where H(σ) = K(p_1, ..., p_N) + U(r_1, ..., r_N) and the partition function Q_N is

Q_N = \sum_{\sigma} e^{-H(\sigma)/k_B T} .    (5.68)

In classical statistical mechanics, if we want to average quantities which do not depend on the momenta, we can directly integrate (5.67) over the momenta. We then reduce to the phase space, Ω, of the configurations α = (r_1, ..., r_N), and we can use the probability

\rho\left[U(\alpha)\right] = \frac{e^{-U(\alpha)/k_B T}}{Z(T)} ,    (5.69)

where

Z = \int d\Omega\, e^{-U(\alpha)/k_B T} .    (5.70)

The average value of an observable X(α) is obtained from

\langle X \rangle = \int d\Omega\, X(\alpha)\, \rho\left[U(\alpha)\right] .    (5.71)
Now we search for a method to calculate numerically the integral (5.71).
5.2.3 Importance Sampling in Statistical Mechanics

To perform the integral (5.71) with the importance sampling MC technique, we have to introduce a distribution function p(α),

\langle X \rangle = \int d\Omega\, X(\alpha)\,\rho\left[U(\alpha)\right] = \int d\Omega\, \frac{X(\alpha)\,\rho\left[U(\alpha)\right]}{p(\alpha)}\, p(\alpha) ,    (5.72)

which can be integrated numerically as

\langle X \rangle \approx \frac{1}{n}\sum_{k=1}^{n} X(\alpha_k)\,\rho\left[U(\alpha_k)\right]\frac{1}{p(\alpha_k)} .    (5.73)

Though the integral (5.71) is multidimensional and the function cannot be plotted, we can guess on the basis of statistical mechanics that the most important region of integration has to be close to the average value ⟨X⟩. This is also the region most sampled by the Boltzmann distribution. So it is reasonable to assume that

p(\alpha) = \rho\left[U(\alpha)\right] .    (5.74)

With this choice, Eq. (5.73) becomes a very simple formula,

\langle X \rangle \approx \frac{1}{n}\sum_{k=1}^{n} X(\alpha_k) .    (5.75)

We can calculate the average values in the form (5.75) only if the sequence of configurations {α_k} is generated according to the Boltzmann distribution (5.74). So the next problem is to generate the right configuration sequence.

5.2.4 Markov Processes

Our system at time t can stay in one of the states {α_1, ..., α_k, ...}. We define p_k(t) as the probability that the system is in the state α_k at time t; p_k(t) satisfies at each time

p_k \ge 0, \qquad \sum_k p_k = 1 .    (5.76)
The time evolution will be determined by a conditional probability that a system is in a state αn at the time tn if it was at tn−1 in αn−1 , at tn−2 in αn−2 . . . at t1 in α1 W (αn | αn−1 , . . . , α1 ) .
(5.77)
In principle the evolution is determined by what happened at all previous times, but we can restrict ourselves to Markov processes, where

W\!\left(\alpha_n \mid \alpha_{n-1}, \ldots, \alpha_1\right) = W\!\left(\alpha_n \mid \alpha_{n-1}\right)\cdot W\!\left(\alpha_{n-1} \mid \alpha_{n-2}\right)\cdots W\!\left(\alpha_2 \mid \alpha_1\right) .    (5.78)

In a Markov process, the evolution is determined only by what happened at the preceding time step, so at each time step the system loses memory of its previous evolution. We can define the stochastic matrix whose elements are

W_{ij} = W\!\left(\alpha_j \mid \alpha_i\right) .    (5.79)

The elements satisfy the properties

W_{ij} \ge 0, \qquad \sum_j W_{ij} = 1 .    (5.80)

The last condition is due to the fact that, starting from a state α_i, at least one of the states α_j must be reached.
Fig. 5.10 Schematic representation of Eq. (5.81). The probability that the system is in the state k at t + Δt is determined by two processes: (1) it was already in k at t, (2) in the time interval Δt, it arrives in k starting from another state j , and then we must subtract the transition from the state k to a state j in the same time interval
With the use of the stochastic matrix, we can determine how the probability p_k defined above evolves in time. It is easy to see that the probability at time t + Δt can be derived from the one at time t with the equation

p_k(t + \Delta t) = p_k(t) - \Delta t \sum_j p_k W_{kj} + \Delta t \sum_j p_j W_{jk} ,    (5.81)

as schematically represented in Fig. 5.10. In the continuous limit, this becomes the so-called master equation

\frac{dp_k}{dt} = -\sum_j p_k W_{kj} + \sum_j p_j W_{jk} .    (5.82)

An equilibrium distribution, such as the Boltzmann distribution in statistical mechanics, must satisfy

\frac{dp_k}{dt} = 0 .    (5.83)
5.2.5 Ergodicity and Detailed Balance

With the stochastic matrix W̃, we can generate a series of probability distributions from an initial one. The probability distribution is given by the row vector p = (p_1, ..., p_k, ...). If we start from p^{(0)}, we can generate

p^{(0)} \cdot \tilde{W} = p^{(1)} .    (5.84)

This is equivalent to the equation

\sum_j p_j^{(0)} W_{ji} = p_i^{(1)} ;    (5.85)
then by applying W̃ again, we have

p^{(1)} \cdot \tilde{W} = p^{(0)} \cdot \tilde{W}^{2} = p^{(2)} ,    (5.86)

and more generally

p^{(0)} \cdot \tilde{W}^{n} = p^{(n)} .    (5.87)

Now we want to find a distribution p such that

p \cdot \tilde{W} = p .    (5.88)

A distribution which satisfies (5.88) is an equilibrium distribution, since it is invariant under the transformation. According to the Perron-Frobenius theorem, the property of Eq. (5.88) can be obtained only if the stochastic matrix W̃ has one non-degenerate eigenvector with unitary eigenvalue. This condition is satisfied if ∀i, j there exists a finite m such that (W^m)_{ij} > 0; in such a case the matrix is called irreducible. The condition means that all the phase space is reachable: from each point in the phase space, the system can move to any other point without limitations. This is an alternative formulation of the ergodic condition. It can be shown that such a stochastic matrix has one unitary eigenvalue and that the corresponding eigenvector is the limiting distribution of the Markov chain. In this way p is independent of the starting point. A sufficient, but not necessary, condition for p to be such an invariant distribution is that

p_k W_{kj} = p_j W_{jk} .    (5.89)

From the mathematical point of view, it is easy to see that from Eq. (5.89), by considering Eq. (5.80), we can write

\sum_k p_k W_{kj} = \sum_k p_j W_{jk} = p_j .    (5.90)
The condition (5.89) is called the microscopic reversibility or detailed balance condition. It is also easy to see that if we look at the master equation (5.82), we get an equilibrium probability distribution, since Eq. (5.83) is satisfied.
5.2.6 Metropolis Method In our problems the state of a system is determined by the configuration, and the probability distribution is given by the Boltzmann formula pk = e−βU (αk ) /Z
(5.91)
where β = 1/k_B T. To satisfy the detailed balance, Eq. (5.89), it must hold that

\frac{p_k}{p_j} = \frac{W_{jk}}{W_{kj}} = e^{-\beta\left[U(\alpha_k) - U(\alpha_j)\right]} .    (5.92)

Consider that the system goes from the state α_j to the state α_k; if we define

\Delta U_{jk} = U(\alpha_k) - U(\alpha_j) ,    (5.93)

from Eq. (5.92) we get the condition

\frac{W_{jk}}{W_{kj}} = e^{-\beta \Delta U_{jk}} .    (5.94)

Now the Metropolis method is based on the following assumption:

W_{jk} = \begin{cases} e^{-\beta \Delta U_{jk}} & \text{if}\ \Delta U_{jk} > 0 \\ W_0 & \text{if}\ \Delta U_{jk} < 0 \end{cases}    (5.95)

In particular, W_0 = 1 is chosen. It is easy to see that the transition probability W_{jk} from j to k given by Eq. (5.95) satisfies Eq. (5.94):

• if ΔU_{jk} > 0, then ΔU_{kj} < 0 ⇒ W_{jk} = e^{-βΔU_{jk}} and W_{kj} = 1
• if ΔU_{jk} < 0, then ΔU_{kj} > 0 ⇒ W_{jk} = 1 and W_{kj} = e^{βΔU_{jk}}

In both cases Eq. (5.94) is fulfilled. Equation (5.95) for the transition probability can be implemented on the computer with the Metropolis algorithm. The system we consider is a fluid composed of N atoms in a volume V at temperature T.

Metropolis Algorithm
• calculate the initial potential energy U_1
• choose randomly an atom
• move the atom to a new random position
• calculate the new potential energy U_2 and ΔU = U_2 − U_1
• calculate F = exp(−βΔU)
• extract a random number 0 < R < 1
• compare F with R: if F > R, then accept the new configuration 2; else the system remains in the old configuration 1
• start again the procedure with a new atom
Fig. 5.11 Function for the accepting rule in MC
The Metropolis transition probability for the translational moves can be expressed with the formula

W\!\left(\left\{r^N\right\}_1 \to \left\{r^N\right\}_2\right) = \min\left[1, \exp(-\beta \Delta U)\right] .    (5.96)

There is a simple physical interpretation of this algorithm, which can be inferred with the help of Fig. 5.11, where F = exp(−βΔU). If U_2 < U_1, then F > 1 and the move is always accepted, as expected, since the system goes to a state of lower energy. If U_2 > U_1, the move is not necessarily rejected, since the temperature is finite and there are fluctuations of the energy. In this second case, there are two possible situations: if ΔU ≫ k_B T, the move has a high probability of being rejected, since F becomes very small (likely F < R); on the contrary, if ΔU ≤ k_B T, there is a high probability that F > R.

5.2.7 Averaging on Monte Carlo Steps

To study a fluid of many atoms with MC simulation, we can use some of the procedures used for MD. As in MD, we can start from a lattice at high temperature and, after the melting, equilibrate the system to a lower T. Now the simulation is performed in the canonical ensemble. We move the atoms according to the Metropolis algorithm. If the number of atoms is large, as is usual with modern computers, we can try to move each atom in order, from number 1 to number N, in the
so-called typewriter order. In detail, we can apply the algorithm in this way:

• FOR i = 1, N
• extract R_x, R_y, R_z
• shift the atom position:
x = x + \Delta(2R_x - 1)
y = y + \Delta(2R_y - 1)
z = z + \Delta(2R_z - 1)    (5.97)
• apply the Metropolis criterion (5.96) to accept or reject the move

The parameter Δ in (5.97) is the maximum allowed shift. It is easy to see that if Δ is very small, almost all the moves will be accepted, while if Δ is too large, almost all the moves will be rejected. Δ is usually adjusted to give an acceptance ratio of about 50%. The cycle over the N atoms is called a Monte Carlo step (MCS), and it is assumed as a conventional unit for the time evolution of the system. It has to be noted that when the system does not go to the new configuration, it is counted as making a transition to the old state. The thermodynamic and structural quantities can be calculated by averaging over the MCS. The formulas for the pressure and for the radial distribution function are the same used in MD simulations.
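The whole procedure of Sects. 5.2.6-5.2.7 can be summarized in a short Python sketch (our own illustration; for simplicity the total potential energy is recomputed at every trial move through a user-supplied function potential_energy(r, L), whereas real codes evaluate only the energy change of the displaced atom).

```python
import numpy as np

def mc_sweep(r, L, delta, beta, potential_energy):
    """One Monte Carlo step (MCS): try to move every atom in typewriter order,
    using the displacement (5.97) and the Metropolis rule (5.96)."""
    N = len(r)
    U = potential_energy(r, L)
    accepted = 0
    for i in range(N):
        r_old = r[i].copy()
        r[i] = r[i] + delta * (2.0 * np.random.rand(3) - 1.0)   # trial displacement
        r[i] -= L * np.floor(r[i] / L)                          # put back in the box
        U_new = potential_energy(r, L)
        dU = U_new - U
        if dU < 0.0 or np.random.rand() < np.exp(-beta * dU):   # Metropolis criterion
            U = U_new
            accepted += 1
        else:
            r[i] = r_old                                        # reject: keep old state
    return r, U, accepted / N
```

The returned acceptance ratio can be used to tune delta toward an acceptance of roughly 50%, as discussed above.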
5.2.8 MC Sampling in Other Ensembles

To study phase equilibria, it is useful to implement Monte Carlo methods in different ensembles; see, for instance, [1, 2, 13].

Isobaric-Isothermal MC

In MC simulation, we can perform averages in ensembles different from the canonical one. If we want to study the system in the isobaric-isothermal (NPT) ensemble, we can add to the algorithm another type of move. It consists in changing the volume at constant pressure,

V' = V + \Delta V (2R - 1) .    (5.98)

In the NPT ensemble, as seen in Chap. 2, a quantity can be averaged according to Eq. (2.181),

\langle X \rangle = \frac{1}{Z_{NPT}} \int dV\, e^{-\beta p V}\, V^N \int ds^N\, X\!\left(s^N\right) e^{-\beta U\left(s^N, V\right)} ,    (5.99)

where the coordinates have been rescaled with the box length according to Eq. (2.180). The probability density to find the system with a volume V in a configuration {s^N} is

p\!\left(V, s^N\right) = V^N e^{-\beta P V} e^{-\beta U\left(s^N, V\right)} / Z_{NPT} .    (5.100)

The MC calculation is implemented with two types of moves:

– try to shift the atom position:
x = x + \Delta(2R_x - 1)
y = y + \Delta(2R_y - 1)
z = z + \Delta(2R_z - 1)    (5.101)
– try to change the volume: V' = V + \Delta V \cdot (2R - 1)

The Metropolis criterion can be applied by considering not the potential energy but the difference of the enthalpy after and before the change of volume, corrected with a term that takes into account the rescaling of the coordinates,

\beta \Delta H = \beta U\!\left(V'\right) - \beta U(V) + \beta p_0\left(V' - V\right) - N \ln\frac{V'}{V} ,    (5.102)

where p_0 is the fixed pressure applied to the system. To the Metropolis algorithm (5.96) for the translations, in this case the transition probability for the change of volume must be added:

W\!\left(\{V\} \to \left\{V'\right\}\right) = \min\left[1, \exp(-\beta \Delta H)\right] .    (5.103)
Grand Canonical MC

The system can also be studied in the grand canonical ensemble with the grand canonical Monte Carlo (GCMC). In this case V and T are kept fixed, and we have to fix also a chemical potential μ. Then we need moves able to add or remove particles from the system. This can be done by implementing the following procedure:

• choose randomly whether to move, remove or add a particle;
• for moving: choose randomly a particle and displace it as in the usual MC;
• for removing: choose randomly a particle and try to remove it;
• for adding: choose randomly a position in the box and try to insert the new particle.

In the acceptance criterion, a displacement move is accepted or rejected using the normal Metropolis algorithm (5.96). For deleting/adding a particle, one has to consider the probability density of the grand canonical ensemble. Now we rescale the coordinates s_i = r_i/L, so

\mathrm{GC}\!\left(s^N, V\right) = \frac{1}{Z_{GC}}\frac{z^N}{N!}\, V^N \exp\!\left[-\beta U\!\left(s^N\right)\right] ,    (5.104)

where z is the activity already defined in Eq. (2.166) as

z = \frac{e^{\beta\mu}}{\Lambda^{3}}    (5.105)

with Λ the thermal De Broglie wavelength. In the algorithm, for removing a particle we must consider

\frac{\mathrm{GC}(N-1)}{\mathrm{GC}(N)} = \exp\!\left(-\beta\Delta U + \ln\frac{N}{zV}\right) ,    (5.106)

so we have

W\!\left(\{N\} \to \{N-1\}\right) = \min\left[1, \frac{N}{zV}\exp(-\beta\Delta U)\right] ,    (5.107)

where ΔU = U(N−1) − U(N). For adding a particle, instead,

\frac{\mathrm{GC}(N+1)}{\mathrm{GC}(N)} = \exp\!\left(-\beta\Delta U + \ln\frac{zV}{N+1}\right)    (5.108)

and

W\!\left(\{N\} \to \{N+1\}\right) = \min\left[1, \frac{zV}{N+1}\exp(-\beta\Delta U)\right] ,    (5.109)

where ΔU = U(N+1) − U(N).
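A minimal sketch of the acceptance probabilities (5.107) and (5.109) (our own illustration; z, V and β are assumed to be given in consistent units):

```python
import numpy as np

def gcmc_accept_remove(dU, N, z, V, beta):
    """Acceptance probability for removing a particle, Eq. (5.107)."""
    return min(1.0, N / (z * V) * np.exp(-beta * dU))

def gcmc_accept_insert(dU, N, z, V, beta):
    """Acceptance probability for adding a particle, Eq. (5.109)."""
    return min(1.0, z * V / (N + 1) * np.exp(-beta * dU))
```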
5.2.9 MC in the Gibbs Ensemble

For the simulation of the liquid-gas phase transition, a method called by its author Gibbs ensemble Monte Carlo (GEMC) has been introduced by A. Z. Panagiotopoulos [13–15]. The GEMC avoids the problems connected with the interface between the two coexisting phases. The simulation is organized with two boxes: one of the boxes is designed to represent the low-density fluid, the gas, while the other box represents the coexisting liquid. At a given temperature, a sequence of three types of MC moves is carried out to reach the equilibrium between the two phases:

(a) displacement of particles according to the Metropolis rule
(b) change of the volumes of the boxes at constant total volume
(c) exchange of particles between the two boxes

Move (b) is realized with the rule of the NPT MC. Move (c) consists in choosing randomly one of the two boxes, randomly selecting a particle in that box and trying to transfer it to the other box according to the rules of the grand canonical MC. At equilibrium the pressure and the chemical potential are the same in the two boxes, and gas and liquid evolve at coexistence. Deep inside the coexistence region, the two boxes evolve at equilibrium with very distinct densities. Close to the critical point, it becomes difficult to observe the coexistence: the two boxes frequently exchange their roles due to the large increase of the density fluctuations [2]. The analysis of the density distribution inside the boxes improves the accuracy of the calculations. This method has been applied also to problems of fluid-solid equilibrium [16].
5.3 MD in Different Ensembles

The microcanonical ensemble may not be the most convenient for comparison with experiments. The study of dynamical properties, like diffusion or relaxation phenomena, must be performed in this ensemble, since the Hamiltonian dynamical behaviour of the system must be reproduced. It is a different story for the thermodynamic properties. Experiments are frequently performed at constant pressure and temperature, in the isobaric-isothermal ensemble. On the other hand, it may sometimes be more convenient to keep the temperature constant and obtain quantities averaged in the canonical ensemble. In order to perform MD in an ensemble different from the microcanonical, we need to use a generalized dynamics, which is no longer determined by a real Hamiltonian and does not satisfy the conservation rules of the Newtonian dynamics. In practice the system is considered in contact with a reservoir. To keep the temperature of a system of particles constant, it is possible to put the box of particles in contact with a reservoir: the system composed of particles and reservoir is isolated, while the system of particles exchanges energy with the reservoir. Moreover, in case we need to keep the pressure constant, we can relax the assumption of a fixed simulation box. The volume of the box becomes
a variable which is determined by the coupling with the reservoir. In all the cases when the system evolves with a non-Hamiltonian dynamics, the trajectories of the particles are not realistic. There are still conservation laws, but they concern the total system composed of the liquid and the reservoir.
5.3.1 Controlling the Temperature: MD in the Canonical Ensemble

The Nosé Method

A successful method to control the temperature was introduced by Nosé [2, 17]. At variance with Eqs. (5.24)–(5.25), the rescaling of the velocities is done by means of a coupling with a thermal bath (TB) in which the system is embedded. The TB is characterized by a degree of freedom s that performs a rescaling of the temporal scale. If ṙ is the instantaneous velocity and we want to rescale it to get the value v, we can write v = s ṙ, where s is the rescaling parameter. Since

\frac{dr}{dt} = \frac{d\tau}{dt}\frac{dr}{d\tau} = s\,\frac{dr}{d\tau} ,    (5.110)

the rescaling is equivalent to a change of the temporal scale with s = dτ/dt, and this is realized by the coupling with the TB. We assume that the TB is characterized by a degree of freedom s and its conjugate momentum p_s. We indicate the variables in the new temporal scale with x̃. With the assumption that r̃ = r, its conjugate momentum is

\tilde{p} = m\tilde{v} = m\frac{dr}{dt} = m s \dot{r} = s p .    (5.111)

As said above, the parameter s is assumed to be a coordinate of the TB with an associated momentum p_s. The kinetic energy of the TB is

p_s^{2}/2Q ,    (5.112)

where Q is the thermal inertia of the TB. To the kinetic energy, a fictitious potential energy is added, given by

U_s = g k_B T \ln(s) .    (5.113)

The parameter g is related to the degrees of freedom of the total system. The Nosé Hamiltonian can be written as
H = \sum_i \frac{\tilde{p}_i^{2}}{2 m s^{2}} + U\!\left(r^{N}\right) + \frac{p_s^{2}}{2Q} + g k_B T \ln(s) .    (5.114)

The equations of motion obtained from this Hamiltonian with the general rules are

\frac{dr_i}{d\tau} = \frac{\tilde{p}_i}{m s^{2}}    (5.115)

\frac{d\tilde{p}_i}{d\tau} = -\frac{\partial U}{\partial r_i}    (5.116)

\frac{ds}{d\tau} = \frac{p_s}{Q}    (5.117)

\frac{dp_s}{d\tau} = \sum_i \frac{\tilde{p}_i^{2}}{m s^{3}} - \frac{g k_B T}{s} .    (5.118)

The system of Eqs. (5.115)–(5.118) describes an evolution where the effective Hamiltonian (5.114) is conserved, but not the energy of the system. It is relevant, however, that the average values obtained with this method are equivalent to the averages in a canonical ensemble. We note that the Nosé Hamiltonian can be rewritten as

H = H\!\left(p^N, r^N\right) + \frac{p_s^{2}}{2Q} + g k_B T \ln(s) ,    (5.119)

where H(p^N, r^N) is the Hamiltonian of the system. Since the total system is in a microcanonical ensemble, the partition function is

\tilde{Z} = \frac{1}{N!} \int dp_s\, ds \int d\tilde{p}^{N} dr^{N}\, \delta\!\left(H - E\right)    (5.120)

with

d\tilde{p}^{N} = s^{3N} dp^{N} ,    (5.121)

so the partition function can be rewritten as

\tilde{Z} = \frac{1}{N!} \int dp_s\, ds \int dp^{N} dr^{N}\, s^{3N}\, \delta\!\left[H\!\left(p^N, r^N\right) + \frac{p_s^{2}}{2Q} + g k_B T \ln(s) - E\right] .    (5.122)

We recall that
\delta\!\left(f(x)\right) = \frac{\delta(x - x_0)}{\left|f'(x_0)\right|}    (5.123)

with x_0 a point where f(x_0) = 0. In (5.123) we consider

f(s) = H\!\left(p^N, r^N\right) + \frac{p_s^{2}}{2Q} + g k_B T \ln(s) - E    (5.124)

and we assume that this function has only one zero, at s = s_0; therefore it must be

\ln s_0 = -\left[H\!\left(p^N, r^N\right) + \frac{p_s^{2}}{2Q} - E\right]/g k_B T ;    (5.125)

then we have

f'(s_0) = \frac{g k_B T}{s_0} ,    (5.126)

so Z̃ now becomes

\tilde{Z} = \frac{1}{N!} \int dp_s\, dp^{N} dr^{N} \int ds\, s^{3N}\, \frac{s_0}{g k_B T}\, \delta(s - s_0) = \frac{1}{N!}\frac{1}{g k_B T} \int dp_s\, dp^{N} dr^{N} \int ds\, s^{3N} s_0\, \delta(s - s_0) .    (5.127)

By integrating over s and substituting s_0 with (5.125), we get

\tilde{Z} = \frac{1}{N!}\frac{1}{g k_B T} \int dp_s\, dp^{N} dr^{N} \left\{\exp\!\left[-\frac{H\!\left(p^N, r^N\right) + p_s^{2}/2Q - E}{g k_B T}\right]\right\}^{3N+1} .    (5.128)

We can assume

g = 3N + 1 ;    (5.129)

this seems to be reasonable since the system of particles has 3N degrees of freedom and there is one more degree of freedom for the TB. The partition function becomes

\tilde{Z} = \frac{1}{N!}\frac{1}{g k_B T}\, e^{E/k_B T} \int dp_s \exp\!\left(-\frac{p_s^{2}}{2 Q k_B T}\right) \cdot \int dp^{N} dr^{N} \exp\!\left[-\frac{H\!\left(p^N, r^N\right)}{k_B T}\right] .    (5.130)
In this way we obtained that the statistical distribution of the particles described by the Hamiltonian H (pN , r N ) coincides with the canonical ensemble distribution.
Hoover Equations

The Nosé equations can be written in a different form following Hoover [18]. We recall that p = p̃/s; then it is easy to transform the Nosé equations (5.115)–(5.118). For Eq. (5.115),

\frac{dr_i}{d\tau} = \frac{\tilde{p}_i}{m s^{2}}, \qquad \frac{dr_i}{d\tau} = \frac{dr_i}{dt}\frac{dt}{d\tau} = \frac{1}{s}\frac{dr_i}{dt} ,    (5.131)

and we are back to

\frac{dr_i}{dt} = \frac{p_i}{m} .    (5.132)

Then, with a somewhat longer derivation, we get from Eq. (5.116)

\frac{d\tilde{p}_i}{d\tau} = s\frac{dp_i}{d\tau} + p_i\frac{ds}{d\tau} = s\frac{dp_i}{dt}\frac{dt}{d\tau} + p_i\frac{ds}{d\tau} = \frac{dp_i}{dt} + p_i\frac{ds}{d\tau}    (5.133)

and

\frac{dp_i}{dt} = -\frac{\partial U}{\partial r_i} - p_i\frac{ds}{d\tau} = -\frac{\partial U}{\partial r_i} - p_i\frac{ds}{dt}\frac{dt}{d\tau} = -\frac{\partial U}{\partial r_i} - p_i\frac{1}{s}\frac{ds}{dt} .    (5.134)

Now we define

\eta = \ln s    (5.135)

and Eq. (5.134) becomes

\frac{dp_i}{dt} = -\frac{\partial U}{\partial r_i} - p_i\frac{d\eta}{dt} .    (5.136)

Then Eq. (5.117) can be transformed into an equation for η,

\frac{d\eta}{dt} = \frac{p_s}{Q} .    (5.137)

Finally, since
\frac{dp_s}{d\tau} = \frac{dp_s}{dt}\frac{dt}{d\tau} = \frac{1}{s}\frac{dp_s}{dt} ,    (5.138)

Eq. (5.118) can be rewritten as

\frac{1}{s}\frac{dp_s}{dt} = \sum_i \frac{(s p_i)^{2}}{m s^{3}} - \frac{g k_B T}{s}\ \rightarrow\ \frac{dp_s}{dt} = \sum_i \frac{p_i^{2}}{m} - g k_B T .    (5.139)

The variable s no longer appears explicitly in the equations, and the degree of freedom of the TB is now η. Calling the corresponding momentum p_η = p_s, the Hoover equations are

\frac{dr_i}{dt} = \frac{p_i}{m}, \qquad \frac{dp_i}{dt} = -\frac{\partial U}{\partial r_i} - p_i\frac{p_\eta}{Q} ,    (5.140)

\frac{d\eta}{dt} = \frac{p_\eta}{Q}, \qquad \frac{dp_\eta}{dt} = \sum_i \frac{p_i^{2}}{m} - g k_B T .    (5.141)

At this point it is also possible to define a variable ζ = p_η/Q. In this way the final equations can be written as

\frac{dr_i}{dt} = \frac{p_i}{m}    (5.142)

\frac{dp_i}{dt} = F_i - \zeta p_i    (5.143)

\frac{d\zeta}{dt} = \frac{1}{Q}\left(\sum_i \frac{p_i^{2}}{2m} - \frac{1}{2} g k_B T\right) ,    (5.144)

where F_i = −∂U/∂r_i is the force on particle i. It is interesting to give an interpretation of the last equation (5.144). Since

\Delta E_{cin} = \sum_i \frac{p_i^{2}}{2m} - \frac{1}{2} g k_B T    (5.145)

represents the fluctuation of the kinetic energy, according to Eq. (5.144) this fluctuation is given by the time derivative of the TB parameter ζ multiplied by its inertia Q. It has been shown later that the Nosé-Hoover dynamics is not ergodic; the method was improved with the introduction of the so-called Nosé-Hoover chains [2, 19].
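To make the structure of Eqs. (5.142)-(5.144) explicit, the following crude explicit-Euler update is a sketch for illustration only (our own example; production codes use time-reversible integrators and, as noted above, Nosé-Hoover chains).

```python
import numpy as np

def nose_hoover_euler_step(r, p, zeta, dt, force_func, m, Q, g, kB, T):
    """One explicit-Euler update of the Nosé-Hoover equations (5.142)-(5.144)."""
    F = force_func(r)
    K2 = np.sum(p**2) / m                                  # twice the kinetic energy
    zeta_new = zeta + dt / Q * 0.5 * (K2 - g * kB * T)     # Eq. (5.144)
    p_new = p + dt * (F - zeta * p)                        # Eq. (5.143)
    r_new = r + dt * p / m                                 # Eq. (5.142)
    return r_new, p_new, zeta_new
```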
5.3.2 Pressure Control

To study the phase diagram of a system, it can be useful to perform MD at constant pressure, and possibly in the NPT ensemble. To keep the pressure under control, it is possible to use a Berendsen barostat [7]. The volume is rescaled by a factor χ_p,

V' = \chi_p V ,    (5.146)

and χ_p evolves with the equation

\chi_p = 1 - \frac{\delta t}{\tau_p}\left(P_0 - P\right) ,    (5.147)

where P is the instantaneous pressure, while P_0 is the fixed external pressure. τ_p is the time constant that determines the relaxation toward the given pressure. By combining this algorithm with the previous Berendsen method for the temperature, it is possible to run MD where temperature and pressure are kept under control.

For performing MD at constant pressure, Andersen introduced a method [2, 20] that later inspired the work of Nosé. The system is considered in a pressure bath realized by a virtual piston. The piston has a mass W and a kinetic energy

K_V = \frac{1}{2} W \dot{V}^{2} ,    (5.148)

where V is the volume, which changes with time. To the piston, a potential energy is associated,

U_V = P_0 V .    (5.149)

It is possible to develop equations of motion from a fictitious Lagrangian. Parrinello and Rahman modified the Andersen method by introducing the possibility of changing the shape of the simulation cell [21]. If the cell is defined by three basis vectors (a, b, c), the volume is given by

V = a \cdot b \times c = |H| ,    (5.150)

where |H| is the determinant of the matrix H whose columns contain the components a_x, a_y, a_z, etc. The equations can be implemented by introducing a kinetic energy

K_V = \frac{1}{2} W \sum_{\alpha}\sum_{\beta} \dot{H}_{\alpha\beta}^{2} .    (5.151)
All the problems of performing MD of a system in contact with a thermal bath can be reformulated in the more general framework of the study of so-called non-Hamiltonian dynamics. Precise theoretical formulations were derived by Tuckerman et al. [22, 23]. A discussion of these methods can be found in [1, 2]. Some of the algorithms are implemented in packages like Gromacs [9].
5.4 Molecular Liquids

To simulate fluids composed of molecules, instead of single atoms, it is necessary to introduce appropriate algorithms. It must be taken into account that molecular systems consist of bonded atoms. The bonds are due to intramolecular forces, while the atoms of a molecule interact with the atoms of another molecule through intermolecular forces. In practice, since the intramolecular forces are much stronger than the intermolecular ones and the vibrational motions are very fast compared with the usual simulation time step, it is possible in some cases to assume that the system is composed of rigid molecules with fixed geometry. The motion then becomes a combination of translational and rotational dynamics. In MD one possible method is to move the centre of mass according to the algorithms previously described, and then to realize the rotations of the atoms of the molecule around the centre of mass by considering the equations of motion for the Euler angles between a system of axes fixed in space and one attached to the rotating body. This method turns out to be less convenient than the use of dynamical constraints. Each atom of the molecule moves according to the algorithms of the translational motion, but constraints are added to the Hamiltonian or the Lagrangian in order to preserve the geometry of the molecule. For example, for a biatomic molecule formed by atoms A and B, the two atoms move independently, but after the step a correction is introduced to keep the rigid A–B bond distance d_AB. The dynamics with constraints is a problem treated in mechanics with the method of Lagrange multipliers. Some of the coordinates, r_1, ..., r_n, are connected by equations of the form

\sigma_\alpha\!\left(r_1, \ldots, r_n, t\right) = 0 .    (5.152)

These are holonomic constraints, since the function depends only on the r_i and t, and its differential at each time is

\delta\sigma_\alpha = \sum_{i=1}^{n} \frac{\partial\sigma_\alpha}{\partial r_i}\,\delta r_i = 0 .    (5.153)

The constraints can be inserted in the Lagrangian equations with the use of the Lagrange multipliers λ_α,

\frac{d}{dt}\left(\frac{\partial L}{\partial \dot{r}_i}\right) - \frac{\partial L}{\partial r_i} = \sum_{\alpha=1}^{\nu} \lambda_\alpha(t)\,\frac{\partial\sigma_\alpha}{\partial r_i} ;    (5.154)

the term on the right-hand side is equivalent to adding forces determined by the ν constraints. In the case of molecular liquids, the constraints can be introduced with functions like

\sigma_\alpha = \left(r_{ia} - r_{jb}\right)^{2} - d_\alpha^{2}    (5.155)

where d_α is the bond length between the a and b atoms in the molecules. From the Lagrangian (5.154), we see that the constraint forces are

g_i = \sum_{\alpha=1}^{\nu} \lambda_\alpha(t)\,\frac{\partial\sigma_\alpha}{\partial r_i} .    (5.156)

In this way the equations of motion for the n particles involved in the constraints, derived from the Lagrangian (5.154), turn out to be

m_i \frac{\partial^{2} r_i}{\partial t^{2}} = f_i + g_i .    (5.157)
Since the constraints are holonomic, the constraint forces preserve energy conservation. Different methods have been proposed to satisfy the constraints during the simulation. The most efficient is a method, called SHAKE, in which the constraints are treated not simultaneously but in succession [24]. In some simulations of molecular systems, however, the internal degrees of freedom of the molecules also have to be taken into account. In this case there are a number of so-called bonded potentials that can be implemented in the simulation. They could represent, for example, bond stretching between two bonded atoms. The potential could be harmonic or could include anharmonic corrections. It is also possible to represent angle vibrations and to combine the different internal dynamics of molecules. These bonded forces can be important for the simulation of biological macromolecules [9].
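As an illustration of the idea behind the constraint correction discussed above, the following sketch (our own simplified example for a single bond, not the full SHAKE algorithm of Ref. [24]) iteratively adjusts the positions of a bonded pair so that the bond length returns to its fixed value d after an unconstrained step.

```python
import numpy as np

def constrain_bond(r_new, r_old, m, d, tol=1e-8, max_iter=50):
    """Iterative correction for one rigid bond between atoms 0 and 1.
    r_new: positions after the unconstrained step, r_old: positions at the
    previous step, m: the two masses, d: fixed bond length."""
    ra, rb = r_new[0].copy(), r_new[1].copy()
    s_old = r_old[0] - r_old[1]                 # bond vector before the step
    for _ in range(max_iter):
        s = ra - rb
        diff = np.dot(s, s) - d * d             # current constraint violation
        if abs(diff) < tol:
            break
        # linearized Lagrange multiplier for this single constraint
        g = diff / (2.0 * (1.0 / m[0] + 1.0 / m[1]) * np.dot(s, s_old))
        ra -= g * s_old / m[0]                  # correction along the old bond direction
        rb += g * s_old / m[1]
    return ra, rb
```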
5.5 Microscopic Models for Water

An important example of molecular liquid is water. In Sect. 1.6, we already discussed the structure of this molecular liquid, and we have shown how relevant the hydrogen bond is in determining the short-range tetrahedral order in the liquid. One of the unshared orbitals of an oxygen can attract a hydrogen of another molecule, and each hydrogen of a molecule can form a bond with the unshared orbital of the oxygen of another molecule. Each water molecule can thus form four hydrogen bonds arranged in the tetrahedral symmetry typical of the ice structure. In the liquid, experiments show that the tetrahedral geometry is preserved in the short-range arrangement [25]. Due to its relevance, water was one of the first systems studied by computer simulation. The difficult task is to implement a simple enough potential able to reproduce the hydrogen bond effects. Stillinger and Rahman in 1974 introduced the model called ST2, able to reproduce many properties of water with reasonable accuracy [26]. Since then, a number of rigid models have been developed based on the same idea. The molecule is represented with a number of charged and neutral sites. In the ST2 model, the charged sites are four, as shown in Fig. 5.12; two positively charged sites represent the hydrogens, while the other two negative sites represent the two unshared electron pairs. Effective charges are attributed to the positive and negative sites. In this way each site of a molecule has a Coulomb interaction with each site of another molecule. Given two molecules α and β, we have a Coulomb interaction between a site i in α and a site j in β,

u_{ij}\!\left(r_i, r_j\right) = \frac{q_i q_j}{r_{ij}} ,    (5.158)

where r_ij is the distance between the positions of the sites. The oxygen sites of different molecules also interact with a Lennard-Jones potential,

u_{LJ}(r_{OO}) = 4\epsilon_{OO}\left[\left(\frac{\sigma_{OO}}{r_{OO}}\right)^{12} - \left(\frac{\sigma_{OO}}{r_{OO}}\right)^{6}\right] ,    (5.159)

where r_OO is the distance between the positions of the oxygen sites.

Fig. 5.12 Examples of water site models used in simulations
Table 5.1 Parameters of the different water models. ST2: symmetrical tetrad 2. SPC: simple point charge. SPC/E: simple point charge extended. TIP4P: transferable intermolecular potential 4 points

                   ST2       SPC       SPC/E     TIP4P     TIP4P/2005
H-O (Å)            1.0       1.0       1.0       0.9572    0.9572
φ (deg)            109.47    109.47    109.47    104.52    104.52
σOO (Å)            3.10      3.166     3.166     3.154     3.1589
εOO (kJ/mol)       0.3169    0.650     0.650     0.648     0.7749
qH (e)             0.2357    0.410     0.4238    0.520     0.5564
qO (e)             0.0       −0.82     −0.8476   0.0       0.0
qM (e)             −0.2357   0.0       0.0       −1.04     −1.1128
O-M (Å)            0.8       0.0       0.0       0.150     0.1546
In Fig. 5.12, we show the geometry of ST2 and of other site models that have been introduced later. In the SPC [27] and SPC/E [28] models, the oxygen site carries a charge q_O = −2q_H. In TIP4P [29] and TIP4P/2005 [30], the oxygen is neutral, but an extra virtual site with a negative charge −2q_H is introduced. In particular, this negative site is placed along the bisector of the HOH angle, coplanar with the oxygen and the hydrogens, at a small distance from the oxygen (0.150 Å in TIP4P and 0.1546 Å in TIP4P/2005). The parameters of the models are collected in Table 5.1. These and other rigid site models are largely used. A coarse-grained model has also been introduced in which the molecule is reduced to a single site with a three-body term added to the pair interaction, similar to the potentials used to model liquid silicon [31]. All these models do not require long computation times, and some of them are implemented in free simulation packages. TIP4P/2005 is presently considered the rigid model that best reproduces the phase diagram of water, in particular in the region of melting. Moreover, it gives a good prediction for the high-pressure phases of ice. In Fig. 5.13 we compare the g_OO(r) obtained from the models TIP4P and TIP4P/2005 with the experimental results, already shown in Fig. 3.15; it is evident that both models reproduce the experiment well enough, but TIP4P/2005 shows a better agreement after the first peak. Concerning the thermodynamic properties, it is interesting to consider how the behaviour of the liquid density close to the temperature of maximum density is reproduced in simulations; see Fig. 5.14. TIP4P/2005 is in excellent agreement with experiment, while the temperature of maximum density of TIP4P turns out to be shifted by −30 K, as shown in the figure. Finally, in Fig. 5.15, a part of the water phase diagram predicted by the TIP4P/2005 model is shown and compared with experiments. The stable liquid and crystal phases are shown.
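As an example of how the site-site interactions (5.158)-(5.159) are combined, the sketch below evaluates the interaction energy of two rigid SPC/E molecules with the parameters of Table 5.1; the Coulomb conversion factor (≈1389 kJ Å mol⁻¹ e⁻²) is our assumption for working in kJ/mol with distances in Å, and the site coordinates must be supplied by the user.

```python
import numpy as np

# SPC/E parameters from Table 5.1 (distances in angstrom, energies in kJ/mol)
SIGMA_OO, EPS_OO = 3.166, 0.650
Q = {"O": -0.8476, "H": 0.4238}
KE = 1389.35   # assumed Coulomb constant in kJ mol^-1 angstrom e^-2

def spce_pair_energy(mol1, mol2):
    """Interaction energy of two rigid SPC/E molecules, Eqs. (5.158)-(5.159).
    Each molecule is a dict mapping site names 'O', 'H1', 'H2' to positions (angstrom)."""
    u = 0.0
    for si, ri in mol1.items():
        for sj, rj in mol2.items():
            rij = np.linalg.norm(ri - rj)
            u += KE * Q[si[0]] * Q[sj[0]] / rij          # Coulomb term, Eq. (5.158)
    roo = np.linalg.norm(mol1["O"] - mol2["O"])
    sr6 = (SIGMA_OO / roo)**6
    u += 4.0 * EPS_OO * (sr6 * sr6 - sr6)                # oxygen-oxygen LJ, Eq. (5.159)
    return u
```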
Fig. 5.13 Oxygen-oxygen RDF at ambient conditions calculated with different models and compared with experiment. Bold line experiment from A. K. Soper (https://www.isis.stfc.ac. uk/Pages/Water-Data.aspx), black points TIP4P/2005, red triangles TIP4P. Simulated results are obtained with the use of Gromacs package [9]
Fig. 5.14 Density of water as a function of temperature for the TIP4P/2005 [30] and TIP4P models compared with the experimental results. The experimental points shifted by −30 K are also shown

The TIP4P/2005 model improves on the predictions of the other models since:

1. It correctly predicts the stable phases of the different types of ice, apart from ice Ih; these polymorphs, indicated with Roman numerals II, III, etc., are characterized by different symmetries and are found at high enough pressures.
2. The coexistence lines between the different phases are in agreement with experiment after shifting pressure and temperature by about 100 MPa and 20 ÷ 30 K, respectively.
Fig. 5.15 Liquid and crystalline stable phases of water as predicted by the TIP4P/2005 model (lines) compared with the experimental results (stars). Reprinted from ref. [30] with permission of AIP Publishing
A comparison of the performance of several water models is presented by Vega et al. [32]. More recently, a test of different models has also been carried out in the region of supercritical water [33, 34]. Flexible water models have also been introduced. In a recent study, a flexible TIP4P/2005 potential [35] has been tested, with results comparable to those of the rigid TIP4P/2005. The methodology implemented by Stillinger and Rahman has been relevant not only for water: since then, rigid or flexible site models have been implemented for many other systems, in particular for large biological molecules. Free packages are available containing the main force fields introduced during the last 20 years, together with databases of structural information on a large number of systems [9, 36]. Polarizable water models have also been introduced, particularly in the study of biomolecules [37]; the water molecule is modelled as a soft sphere with a dipole moment to mimic the polarizability of the molecule.

5.6 Some More Hints

Like in experiments, also in MC and MD the evaluated quantities are affected by errors. They are mainly due to the fact that averaging in statistical mechanics is expected to be done over independent configurations. This is not rigorously true even in MC, because we realize an approximate Markov chain. A source of errors in MC was related in the past to the random number generators, but the modern
routines usually are largely improved and more reliable also for very long runs. Moreover the averages are performed on a finite time. Estimating error bars for the simulation data could be more or less difficult on the type of problem and the region of the thermodynamic space we consider. However accurate methods have been developed, and they are now implemented in normal computer simulations. MC and MD methods can be complementary. MD is the only method to obtain a complete description of the dynamical properties. With enough computer effort, it is possible to calculate the time correlation functions and the relaxation to equilibrium also in metastable states like undercooled liquid. MD can be used to study problems of non-equilibrium dynamics when the system is under the effect of a perturbation, and transport coefficients can be calculated. MC is sometimes preferred in the study of phase transitions since simulations in ensemble like grand canonical are more easy to implement where interface effects at coexistence can be avoided. It has to take into account however that the study of phase transitions, in particular of critical phenomena, would require a rigorous estimation of finite size effects, because criticality is determined by the divergence of the correlation length. In finite size systems, the correlation length cannot overcome the value of the box length. In such cases the use of PBC is not enough to recover the lack of thermodynamic limit. We will discuss this point in the last section of this chapter.
5.7 Direct Calculations of the Equation of State

In computer simulation, it is possible to compute some thermodynamic quantities directly. For instance, from simulations in the canonical ensemble performed at different temperatures and densities, the isotherms can be derived. As an example, we report in Fig. 5.16 the isotherms of a Lennard-Jones system obtained from computer simulation. The isotherms show a behaviour similar to that of the van der Waals mean field theory, so it is possible to perform a Maxwell construction and derive the critical parameters. In the inset of Fig. 5.16, the crossing of the isochores close to the critical point is shown, as found also in the van der Waals theory; see Fig. 2.13. The direct calculation of isotherms is the easiest way to search for a gas-liquid transition. There are, however, a number of issues indicating that this procedure can be considered only a preliminary approach in the numerical simulation of phase transitions. The van der Waals loop appears typically in the study of phase transitions within mean field theories. The van der Waals-like behaviour of the isotherms in computer simulations is determined essentially by finite size effects. The interpretation of the loop is discussed in several papers; see, for instance, refs. [38, 39]. In practice, the direct calculation of isotherms or isochores is useful to locate a phase transition, but a more refined treatment is necessary in order to establish more precisely the coexistence curve and the other properties of the fluid at a phase
Fig. 5.16 Isotherms of a Lennard-Jones fluid. The quantities are in reduced units of the Lennard-Jones parameters: T* = k_B T/ε, V* = V/σ³, p* = pσ³/ε. In the inset the isochores close to the critical point: T_c* = 1.10 ± 0.01, p_c* = 0.11 ± 0.01, V_c* = 3.30 ± 0.03
transition. Important methodologies have been developed to derive the free energy or the equivalent thermodynamic potentials. In the study of phase transitions, it has to be taken into account in any case that phase transitions, and especially critical phenomena, are related to singularities of thermodynamic observables. The singular behaviour appears only in the thermodynamic limit, as established by the Yang and Lee theorem. Since computer simulations are performed in a finite volume with a finite number of particles, we expect that the singularities are smeared out and that the determination of the phase transitions is affected by the so-called finite size effects [40, 41]. We will discuss this point further in the last section.
5.8 Free Energy Calculation from Thermodynamic Integration

A possible route to study the coexistence between two phases and to search for phase transitions is to calculate the thermodynamic potential appropriate for the ensemble used in the computer simulation. For instance, in MD or MC simulations at constant volume and temperature, the thermodynamic potential will be the Helmholtz free energy. We recall that the Helmholtz free energy is given by

\[ \beta A(T,V,N) = -\ln Q_N(V,T) , \]   (5.160)

where Q_N(V,T) is the canonical partition function

\[ Q_N(V,T) = Q_N^{id}(V,T)\, Q_N^{exc}(V,T) , \]   (5.161)

where the ideal part is

\[ Q_N^{id}(V,T) = \frac{V^N}{\Lambda^{3N} N!} , \]   (5.162)

while the excess term is

\[ Q_N^{exc}(V,T) = \frac{Z_N(V,T)}{V^N} = \frac{1}{V^N}\int dr^N\, e^{-\beta U(r^N,V)} . \]   (5.163)
A(T,V,N) cannot be calculated directly, since this would require knowledge of the partition function of the system according to Eq. (5.160), but it can be obtained by thermodynamic integration of different quantities. Let us suppose that we know the free energy at a point of the thermodynamic space at density ρ1 and temperature T; we can then perform a series of simulations at constant temperature to obtain the free energy at density ρ2 by considering that
\[ \left(\frac{\partial A}{\partial V}\right)_T = -p . \]   (5.164)
By integrating we have

\[ A(V_2,T) - A(V_1,T) = -\int_{V_1}^{V_2} p\, dV , \]   (5.165)
and in terms of density

\[ \frac{\beta A(\rho_2,T)}{N} = \frac{\beta A(\rho_1,T)}{N} + \beta \int_{\rho_1}^{\rho_2} \frac{p}{\rho^2}\, d\rho . \]   (5.166)
From Eq. (5.161), we know that the free energy is composed of an ideal term and an excess term determined by the particle interactions, A = A^{id} + A^{exc}. Also for the pressure we can write p = ρ/β + p^{exc}; therefore we can write

\[ \frac{\beta A^{exc}(\rho_2,T)}{N} - \frac{\beta A^{exc}(\rho_1,T)}{N} = \beta \int_{\rho_1}^{\rho_2} \frac{p^{exc}}{\rho^2}\, d\rho . \]   (5.167)
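The integration in Eq. (5.167) is usually carried out numerically over the excess pressures measured at a discrete set of densities. The following minimal sketch, with invented tabulated values and reduced units, illustrates the quadrature; it is not a prescription from the references.

```python
# Minimal sketch of the thermodynamic integration of Eq. (5.167).
import numpy as np

beta = 1.0                                  # 1/(k_B T) in reduced units (assumption)
rho = np.linspace(0.1, 0.6, 11)             # densities of the simulated state points
p_exc = 0.5 * rho**2 - 0.2 * rho**3         # hypothetical excess pressures from simulation

# beta*A_exc(rho_2)/N - beta*A_exc(rho_1)/N = beta * integral of p_exc/rho^2 d rho
integrand = p_exc / rho**2
delta_beta_a_exc = beta * np.trapz(integrand, rho)
print("beta * [A_exc(rho_2) - A_exc(rho_1)] / N =", delta_beta_a_exc)
```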
If we want to perform the integration along a path at constant density and varying T, we can use the thermodynamic derivative

\[ \frac{\partial}{\partial T}\left(\frac{A}{T}\right) = -\frac{A}{T^2} + \frac{1}{T}\frac{\partial A}{\partial T} = -\frac{E}{T^2} . \]   (5.168)
So we have

\[ \frac{A(T_2)}{T_2} - \frac{A(T_1)}{T_1} = -\int_{T_1}^{T_2} \frac{E}{T^2}\, dT , \]   (5.169)
then by taking into account that the total energy is

\[ E = \frac{3}{2} N k_B T + \langle U \rangle , \]   (5.170)
where U is the potential energy, we get

\[ \frac{A^{exc}(T_2)}{N T_2} - \frac{A^{exc}(T_1)}{N T_1} = -\int_{T_1}^{T_2} \frac{\langle U \rangle / N}{T^2}\, dT . \]   (5.171)
5.9 An Example: Liquid-Solid Transition

In principle, melting/freezing processes could be directly observed in computer simulations. However, the real phenomena are usually difficult to reproduce. In real experiments, melting/freezing is determined by the presence of nucleation processes induced by impurities. In computer simulations, for instance, a solid phase can persist even above the melting temperature [42]. For these reasons, to locate the liquid-solid coexistence it is better to perform calculations of the thermodynamic potentials involved; in particular, it is possible to use a perturbation method [42]. Let us suppose that the potential in the crystal state can be written in a harmonic approximation as
\[ U_C(r^N) = U_0(r_0^N) + \Delta U(r^N) , \]   (5.172)
where U_0 is the potential of the atoms in their equilibrium positions in the lattice and ΔU is the potential describing the harmonic motion of the atoms around the equilibrium positions. For instance, in a simple Einstein model

\[ \Delta U(r^N) = \frac{1}{2}\, k_0 \sum_{i=1}^{N} (r_i - r_{0i})^2 . \]   (5.173)
The potential in the liquid state is U_L(r^N). We can introduce a parameter 0 ≤ λ ≤ 1 and define

\[ U(\lambda) = U_0(r_0^N) + (1-\lambda)\left[ U_L(r^N) - U_0(r_0^N) \right] + \lambda\, \Delta U(r^N) ; \]   (5.174)

in this way
\[ U(\lambda=0) = U_L(r^N) \qquad U(\lambda=1) = U_C(r^N) . \]   (5.175)

Now since
\[ Z(\lambda) = \int dr^N\, e^{-\beta U(\lambda)} , \]   (5.176)
we can derive the excess free energy starting from

\[ \frac{\partial A^{exc}}{\partial \lambda} = -k_B T\, \frac{1}{Z(\lambda)}\, \frac{\partial}{\partial \lambda} Z(\lambda) , \]   (5.177)
so we get

\[ \frac{\partial A^{exc}}{\partial \lambda} = -k_B T\, \frac{1}{-k_B T}\, \frac{1}{Z(\lambda)} \int dr^N\, \frac{\partial U}{\partial \lambda}\, e^{-\beta U(\lambda)} = \left\langle \frac{\partial U}{\partial \lambda} \right\rangle . \]   (5.178)
In this way

\[ A^{exc}(\lambda=1) - A^{exc}(\lambda=0) = \int_0^1 d\lambda \left\langle \frac{\partial U}{\partial \lambda} \right\rangle . \]   (5.179)
In order to evaluate the average inside the integral, a number of simulations must be performed at different values of λ. The model for the vibrations in the solid phase can be improved with different approaches going beyond the Einstein model [42]. This perturbation method can be applied to other problems; however, it works only if the free energy of the reference system is well known or can be reasonably approximated.
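A minimal sketch of the λ integration of Eq. (5.179) is given below. The values of ⟨∂U/∂λ⟩ are placeholders standing for the averages measured in separate simulations at fixed λ; the use of Gauss–Legendre quadrature is a common choice, but it is an assumption made here and not the specific scheme of ref. [42].

```python
# Minimal sketch of the lambda integration of Eq. (5.179).
import numpy as np

# Gauss-Legendre nodes and weights mapped from (-1, 1) to (0, 1).
nodes, weights = np.polynomial.legendre.leggauss(5)
lambdas = 0.5 * (nodes + 1.0)
w = 0.5 * weights

# Hypothetical values of <dU/dlambda> measured at each lambda point.
dU_dlambda = np.array([12.3, 8.7, 5.1, 2.4, 0.9])

delta_A_exc = np.sum(w * dU_dlambda)   # A_exc(lambda=1) - A_exc(lambda=0)
for lam, du in zip(lambdas, dU_dlambda):
    print(f"lambda = {lam:.3f}   <dU/dlambda> = {du:.2f}")
print("Free energy difference:", delta_A_exc)
```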
5.10 Calculation of the Chemical Potential: The Widom Method

We consider a MC simulation in the NVT ensemble of a single-component system. The chemical potential is given by

\[ \mu = \left(\frac{\partial A}{\partial N}\right)_{V,T} . \]   (5.180)

The excess term can be written as

\[ Q_N^{exc}(V,T) = \frac{1}{V^N}\, Z_N(V,T) = \int ds^N\, e^{-\beta U(s^N,V)} , \]   (5.181)
where we introduced the coordinates normalized to the box length, for each particle

\[ r_i = (x_i, y_i, z_i) \rightarrow s_i = (x_i/L,\, y_i/L,\, z_i/L) , \]   (5.182)
where L = V^{1/3}. Now if the number of particles is large, we can write

\[ \left(\frac{\partial A}{\partial N}\right)_{V,T} \approx \frac{A(N+\Delta N) - A(N)}{\Delta N} , \]   (5.183)
so if a single particle is inserted in the simulation box, we have

\[ \mu \approx -k_B T \ln \frac{Q(N+1,V,T)}{Q(N,V,T)} . \]   (5.184)
For the ideal part, it is easy to find

\[ \mu^{id} = -k_B T \ln \frac{V}{\Lambda^3 (N+1)} . \]   (5.185)
The excess chemical potential is given by

\[ \mu^{exc} = -k_B T \ln \frac{\int ds^{N+1}\, e^{-\beta U(s^{N+1})}}{\int ds^N\, e^{-\beta U(s^N)}} . \]   (5.186)
Now we can perform a simulation with N particles and insert a virtual particle at a random position; we calculate the difference

\[ \Delta U = U(s^{N+1}) - U(s^N) . \]   (5.187)
The integrals in Eq. (5.186) can be transformed to

\[ \frac{\int ds^{N+1}\, e^{-\beta U(s^{N+1})}}{\int ds^N\, e^{-\beta U(s^N)}} = \int ds_0\, \frac{\int ds^N\, e^{-\beta U(s^{N+1})}\, e^{\beta U(s^N)}\, e^{-\beta U(s^N)}}{\int ds^N\, e^{-\beta U(s^N)}} , \]   (5.188)
then the excess chemical potential can be written as

\[ \frac{\mu^{exc}}{k_B T} = -\ln \int ds_0\, \frac{\int ds^N\, e^{-\beta \Delta U}\, e^{-\beta U(s^N)}}{\int ds^N\, e^{-\beta U(s^N)}} = -\ln \left\langle \int ds_0\, e^{-\beta \Delta U} \right\rangle_N . \]   (5.189)
The sampling over different insertions of the virtual particle makes it possible to evaluate the average in Eq. (5.189).
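As an illustration, the following sketch implements the Widom insertion of Eq. (5.189) for a Lennard-Jones system. The configuration used is a random placeholder (so the number printed is not physically meaningful), and the function names and parameters are assumptions made only to keep the example self-contained.

```python
# Minimal sketch of the Widom test-particle method, Eq. (5.189).
import numpy as np

def lj_energy_of_insertion(pos, trial, box, eps=1.0, sigma=1.0, rcut=2.5):
    """Energy change Delta U for inserting a test particle at `trial`."""
    d = pos - trial
    d -= box * np.round(d / box)              # minimum image convention
    r2 = np.sum(d * d, axis=1)
    r2 = r2[r2 < rcut * rcut]
    sr6 = (sigma * sigma / r2) ** 3
    return np.sum(4.0 * eps * (sr6 * sr6 - sr6))

def widom_mu_excess(configurations, box, beta, n_insert=1000, rng=None):
    rng = rng or np.random.default_rng()
    boltz = []
    for pos in configurations:
        for _ in range(n_insert):
            trial = rng.uniform(0.0, box, size=3)
            dU = lj_energy_of_insertion(pos, trial, box)
            boltz.append(np.exp(-beta * dU))
    return -np.log(np.mean(boltz)) / beta      # mu_exc = -kT ln <exp(-beta dU)>

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    box, beta = 10.0, 1.0 / 1.5
    config = rng.uniform(0.0, box, size=(200, 3))   # placeholder configuration
    print("mu_exc (illustrative):", widom_mu_excess([config], box, beta, rng=rng))
```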
5.11 Sampling in a Complex Energy Landscape

Computer simulations follow the moves of the single particles under the effect of the forces. From the sampling along the trajectories, the different thermodynamic quantities can be calculated. In order to get reliable results, it is necessary that the sampling covers a large enough portion of the phase space. But the so-called potential energy landscape in which the system of particles moves can be very complex, even for simple two-body potentials. We defined the potential energy landscape (PEL) in Sect. 1.4 as the surface generated in the multidimensional space of the coordinates by the potential energy of the particles. We will consider the PEL in more detail later, in relation with supercooled liquids and the glass transition; see Chap. 8. It is evident that the presence of different basins separated by barriers affects the sampling of the trajectories in the phase space. The determination of the free energy can become very difficult when the energy landscape consists of minima separated by high barriers. The system can remain trapped for a very long time in a single minimum, making it impossible to sample the phase space with normal algorithms. The sampling can be improved by introducing a bias in the simulation that makes it possible to cross the barriers, avoiding that the system remains trapped in a restricted region of the phase space. The numerical procedure must then appropriately correct for the bias.
5.12 Umbrella Sampling

Umbrella sampling is a class of computer simulation methods developed with the aim of determining the free energy and the possible transitions between different equilibrium states. The idea, introduced originally by Torrie and Valleau [43], is to use a potential built ad hoc to cross the barrier; this fictitious potential acts as a sort of umbrella between the initial and the final state. Let us suppose that we want to study the properties of a system with a potential U(r^N) that can be related to a reference system with a potential U_0(r^N) for which the free energy A_0 is known. In principle we can write

\[ A(N,V,T) - A_0(N,V,T) = -k_B T \ln \frac{Q_N(V,T)}{Q_N^0(V,T)} . \]   (5.190)

The explicit formula for Q_N/Q_N^0 is
\[ \frac{Q_N(V,T)}{Q_N^0(V,T)} = \frac{\int dr^N\, e^{-\beta\left[U(r^N) - U_0(r^N)\right]}\, e^{-\beta U_0(r^N)}}{\int dr^N\, e^{-\beta U_0(r^N)}} . \]   (5.191)
By defining ΔU = U − U_0, we can write Eq. (5.191) as

\[ \frac{Q_N(V,T)}{Q_N^0(V,T)} = \left\langle e^{-\beta \Delta U} \right\rangle_0 \]   (5.192)
and we have

\[ A(N,V,T) - A_0(N,V,T) = -k_B T \ln \left\langle e^{-\beta \Delta U} \right\rangle_0 , \]   (5.193)
where the average is done with the distribution function (or probability density) of the reference system

\[ p_0(r^N) = \frac{e^{-\beta U_0(r^N)}}{Q_N^0} . \]   (5.194)
The problem is that the averages on the right side of Eqs. (5.192) and (5.193) are very difficult to perform. During the simulation, we can compute for a configuration r^N the potential energy U_0(r^N) and the corresponding ΔU, but we have to sample over the distribution of the configurations determined by p_0 [1, 44]. To better understand this point, we make a change of variable in Eq. (5.192), and we integrate over ΔU
\[ \left\langle e^{-\beta \Delta U} \right\rangle_0 = \int d(\Delta U)\, e^{-\beta \Delta U}\, p_0(\Delta U) . \]   (5.195)
The integral is not zero in a region of phase space {r^N} where e^{−βΔU} has a significant overlap with the distribution function p_0. In Fig. 5.17, a typical situation is presented [1]. The region of phase space that would give the largest contribution to the integral is not accurately sampled, since the probability distribution function p_0 has a maximum on the right-hand side and is almost negligible for negative ΔU. The region below the curve with the blue circles is not well calculated, in particular on the left side. The way to improve the sampling is to introduce an ad hoc distribution function defined as

\[ \pi(r^N) = \frac{w(r^N)\, \exp\left[-\beta U_0(r^N)\right]}{\int dr^N\, w(r^N)\, \exp\left[-\beta U_0(r^N)\right]} , \]   (5.196)

where the weight w(r^N) is chosen to enhance the sampling of the relevant part of ΔU. So the distribution π(r^N) defined by Eq. (5.196) improves the overlap
Fig. 5.17 Plot of p_0(ΔU) and the function e^{−βΔU}. The curve with the blue circles is the product of the two functions
between the configuration space of the full system (with potential U) and that of the reference system (with potential U_0). This is the reason for the name umbrella sampling. The weight in the distribution function is somewhat arbitrary and introduces a bias in the average performed with the non-Boltzmann distribution π; this bias must be removed to obtain the correct ensemble average. The average of an observable X(r^N), as we know, is obtained from

\[ \langle X \rangle_0 = \frac{\int dr^N\, X(r^N)\, \exp\left[-\beta U_0(r^N)\right]}{\int dr^N\, \exp\left[-\beta U_0(r^N)\right]} ; \]   (5.197)
we now insert the weight w(r^N) in both numerator and denominator of Eq. (5.197), and by recalling the definition of π in Eq. (5.196), we can write

\[ \langle X \rangle_0 = \frac{\int dr^N\, X(r^N)\, \pi(r^N)\, \left[1/w(r^N)\right]}{\int dr^N\, \pi(r^N)\, \left[1/w(r^N)\right]} ; \]   (5.198)
it can be written as

\[ \langle X \rangle_0 = \frac{\left\langle X(r^N)/w(r^N) \right\rangle_\pi}{\left\langle 1/w(r^N) \right\rangle_\pi} . \]   (5.199)
The average in Eq. (5.193) can be calculated according to Eq. (5.199). We note that Eq. (5.196) also has a different interpretation: the distribution function π can be obtained by modifying the original potential as

\[ U(r^N) = U_0(r^N) - \frac{1}{\beta} \ln w(r^N) , \]   (5.200)
so the introduction of the bias in the sampling is equivalent to using a modified potential.
If we consider Eq. (5.190), it is possible to insert intermediate states between the initial one with potential U_0 and the final one with potential U. By inserting, for example, a single state with potential U_1, we can write

\[ A(N,V,T) - A_0(N,V,T) = -k_B T \ln \left[ \frac{Q_N(V,T)}{Q_N^1(V,T)}\, \frac{Q_N^1(V,T)}{Q_N^0(V,T)} \right] . \]   (5.201)
The umbrella sampling technique can be extended to consider more intermediate states [2].
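A minimal sketch of the reweighting step of Eq. (5.199) is given below: an observable sampled with the biased distribution π is unbiased by dividing out the weight w. The arrays standing for the biased trajectory and the weights are purely illustrative.

```python
# Minimal sketch of recovering an unbiased average, Eq. (5.199),
# from samples generated with the non-Boltzmann distribution pi.
import numpy as np

rng = np.random.default_rng(2)
x_samples = rng.normal(loc=2.0, scale=0.5, size=5000)    # X(r^N) along the biased run
w_samples = np.exp(-0.5 * (x_samples - 2.0) ** 2)         # hypothetical weights w(r^N)

# <X>_0 = <X/w>_pi / <1/w>_pi
unbiased_average = np.mean(x_samples / w_samples) / np.mean(1.0 / w_samples)
print("Unbiased estimate of <X>_0:", unbiased_average)
```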
5.13 Histogram Methods

Related to umbrella sampling, a technique has been developed, sometimes called the multicanonical ensemble, based on MC simulations in which the probability of visiting different configurations is calculated and modified so as to make it possible to overcome barriers between coexisting phases [45]. The simplest implementation is a technique called the single histogram method [46]. During a simulation at constant temperature, we can calculate the average of an observable X with the usual formula

\[ \langle X \rangle = \frac{\sum_\alpha X(\alpha)\, e^{-\beta E(\alpha)}}{Q} , \]   (5.202)
where the sum is over the configurations α and the partition function is

\[ Q = \sum_\alpha e^{-\beta E(\alpha)} . \]   (5.203)
If we consider that each configuration α is associated with an energy E, we can write Eq. (5.202) as

\[ \langle X \rangle = \frac{\sum_E X\, D(E)\, e^{-\beta E}}{Q} , \]   (5.204)
where now D(E) is the density of states and

\[ Q = \sum_E D(E)\, e^{-\beta E} . \]   (5.205)

Therefore the probability density is
\[ p_E(\beta) = \frac{D(E)\, e^{-\beta E}}{Q} . \]   (5.206)
During a simulation at temperature T_0 (β_0), we can build a histogram H(E) by collecting the number of times the energy E is found. The probability density of the energy will be

\[ p_E(\beta_0) = \frac{H(E)}{N} , \]   (5.207)
where N is the number of MC steps. The density of states will be

\[ D(E) = Q_0\, \frac{H(E)}{N}\, e^{\beta_0 E} = Q_0\, e^{\beta_0 E}\, p_E(\beta_0) . \]   (5.208)
At a temperature T_1, taking into account that the density of states does not change with temperature, we have

\[ p_E(\beta_1) = \frac{D(E)\, e^{-\beta_1 E}}{Q_1} = \frac{Q_0}{Q_1}\, e^{-(\beta_1 - \beta_0) E}\, p_E(\beta_0) . \]   (5.209)
The term Q_0/Q_1 is a constant included in the normalization, and finally we have

\[ p_E(\beta_1) = \frac{H(E)\, e^{-(\beta_1 - \beta_0) E}}{\sum_E H(E)\, e^{-(\beta_1 - \beta_0) E}} . \]   (5.210)
In principle, from a single histogram at a given temperature it is possible to extrapolate the probability density to any different temperature. The procedure can be extended to GCMC simulations, where the histogram at fixed volume is built on the quantity H = U − μN. The success of this method depends on the thermodynamic conditions; in particular, problems arise when the barriers between the different states are too high.
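The bookkeeping of Eq. (5.210) is illustrated by the following sketch, which reweights a synthetic energy histogram collected at β0 to a nearby β1; the Gaussian histogram and the temperatures are assumptions made for the example.

```python
# Minimal sketch of single-histogram reweighting, Eq. (5.210).
import numpy as np

beta0, beta1 = 1.00, 0.95
E = np.linspace(-600.0, -400.0, 201)                  # energy bins
H = np.exp(-0.5 * ((E + 500.0) / 15.0) ** 2)           # hypothetical histogram at beta0

# p_E(beta1) is proportional to H(E) * exp(-(beta1 - beta0) E); shift the
# exponent before exponentiating for numerical stability.
logw = np.log(np.where(H > 0, H, 1e-300)) - (beta1 - beta0) * E
logw -= logw.max()
p1 = np.exp(logw)
p1 /= p1.sum()

print("Mean energy at beta0:", np.sum(E * H / H.sum()))
print("Mean energy at beta1:", np.sum(E * p1))
```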
5.14 Free Energy Along a Reaction Coordinate

The umbrella sampling method introduced before can be generalized [47]. The transformation between two states can be driven by a reaction coordinate (or order parameter). This is, for example, the case of two thermodynamic states corresponding to two minima of the free energy separated by a barrier; see Fig. 5.18. The transition could be a very rare event, and it could become impossible to study it with normal sampling due to the finite time of the simulation. Let us suppose that we are interested in the calculation of the free energy of a system along a reaction coordinate x(r^N); the probability of finding the system at a value ξ is given by
Fig. 5.18 Free energy profile as a function of a reaction coordinate, with two minima A and B separated by a barrier
\[ p(\xi) = \frac{\int dr^N\, \delta\left[x(r^N) - \xi\right]\, \exp\left[-\beta U(r^N)\right]}{\int dr^N\, \exp\left[-\beta U(r^N)\right]} ; \]   (5.211)
the delta function selects the degree of freedom connected to the reaction coordinate of interest. The free energy is

\[ A(\xi) = -k_B T \ln p(\xi) . \]   (5.212)
This expression is also called the potential of mean force (PMF). The meaning is different with respect to the definition given in Chap. 4 in Eq. (4.32). In the present formulation, the definition of the PMF comes from considering

\[ -\beta \frac{dA(\xi)}{d\xi} = \frac{\frac{d}{d\xi} \int dr^N\, \delta\left[x(r^N)-\xi\right] \exp\left[-\beta U(r^N)\right]}{\int dr^N\, \delta\left[x(r^N)-\xi\right] \exp\left[-\beta U(r^N)\right]} = \frac{\int dr^N\, \frac{d}{d\xi}\delta\left[x(r^N)-\xi\right]\, \exp\left[-\beta U(r^N)\right]}{\int dr^N\, \delta\left[x(r^N)-\xi\right] \exp\left[-\beta U(r^N)\right]} , \]   (5.213)
then

\[ -\frac{dA(\xi)}{d\xi} = \frac{\int dr^N\, \left(-\frac{dU}{d\xi}\right)\, \delta\left[x(r^N)-\xi\right]\, \exp\left[-\beta U(r^N)\right]}{\int dr^N\, \delta\left[x(r^N)-\xi\right]\, \exp\left[-\beta U(r^N)\right]} ; \]   (5.214)
the right-hand side can be considered as a mean force along the direction determined by ξ, so that

\[ -\frac{dA(\xi)}{d\xi} = \left\langle -\frac{dU}{d\xi} \right\rangle . \]   (5.215)
The free energy difference from a state a to a state b will be given by

\[ \Delta A(\xi_a \rightarrow \xi_b) = -k_B T \ln \frac{p(\xi_b)}{p(\xi_a)} . \]   (5.216)
The problem is that in computer simulation we need enough sampling of both states to generate the appropriate probability densities. This can be very difficult if the states are separated by a potential barrier, as in Fig. 5.18.
5.14.1 Umbrella Sampling for Reaction Coordinates

The path of the reaction coordinate can be divided into windows; in each window, according to Eq. (5.200), we can insert a bias potential V_i,

\[ U_i'(r^N) = U_i(r^N) + V_i\left(x(r^N)\right) . \]   (5.217)
The added potential V_i must constrain the variation of the reaction coordinate around a value ξ_i in each window, to make it possible to optimize the sampling in that limited range. It is usual to assume a bias potential of harmonic type for each window,

\[ V_i = \frac{1}{2}\, k_i\, (\xi - \xi_i)^2 . \]   (5.218)

The biased distribution will be

\[ \pi_i(\xi) = \frac{\int dr^N\, \delta\left[x(r^N)-\xi\right] \exp\left\{-\beta\left[U_i(r^N) + V_i(x)\right]\right\}}{\int dr^N\, \exp\left\{-\beta\left[U_i(r^N) + V_i(x)\right]\right\}} . \]   (5.219)
Now, in this equation V_i depends explicitly only on x, so we can extract it from the numerator and write

\[ \pi_i(\xi) = \exp\left[-\beta V_i(\xi)\right]\, \frac{\int dr^N\, \delta\left[x(r^N)-\xi\right] \exp\left[-\beta U_i(r^N)\right]}{\int dr^N\, \exp\left\{-\beta\left[U_i(r^N) + V_i(x)\right]\right\}} , \]   (5.220)

then
\[ \pi_i(\xi)\, \exp\left[\beta V_i(\xi)\right] = \frac{\int dr^N\, \delta\left[x(r^N)-\xi\right] \exp\left[-\beta U_i(r^N)\right]}{\int dr^N\, \exp\left\{-\beta\left[U_i(r^N) + V_i(x)\right]\right\}} \cdot \frac{\int dr^N\, \exp\left[-\beta U_i(r^N)\right]}{\int dr^N\, \exp\left[-\beta U_i(r^N)\right]} . \]   (5.221)

Finally we get an expression for the distribution p_i(ξ),

\[ p_i(\xi) = \pi_i(\xi)\, \exp\left[\beta V_i(\xi)\right] \cdot \frac{\int dr^N\, \exp\left[-\beta U_i(r^N)\right]\, \exp\left[-\beta V_i(x)\right]}{\int dr^N\, \exp\left[-\beta U_i(r^N)\right]} ; \]   (5.222)
it can also be written as

\[ p_i(\xi) = \exp\left[\beta V_i(\xi)\right]\, \pi_i(\xi)\, \left\langle \exp\left[-\beta V_i(\xi)\right] \right\rangle . \]   (5.223)
The free energy, or PMF, in each window will be given by

\[ A_i = -\frac{1}{\beta} \ln p_i = -\frac{1}{\beta} \ln \pi_i - V_i + F_i , \]   (5.224)
where the function F_i is defined as

\[ \beta F_i = \ln \left\langle \exp\left[-\beta V_i(\xi)\right] \right\rangle . \]   (5.225)
The function F_i in Eq. (5.224) must be calculated in order to combine the free energies of the different windows into the final global free energy. For this last issue, we will consider in particular the weighted histogram analysis method (WHAM), which is now widely used. In the procedure, it is usual to assume a bias potential of harmonic type, like Eq. (5.218); then for each window the biased distribution π_i is obtained from a histogram of n_i samples. By using Eq. (5.223) in combination with Eq. (5.225), the unbiased distribution p_i can be derived. In the WHAM, the different p_i are combined to get the complete unbiased distribution

\[ p(\xi) = \sum_{i=1}^{N} a_i(\xi)\, p_i(\xi) , \]   (5.226)
where N is the total number of windows and the coefficients a_i are obtained by minimizing the statistical error. By imposing that \( \sum_{i=1}^{N} a_i = 1 \), the optimized coefficients are given by

\[ a_i = \frac{n_i\, e^{-\beta\left[V_i - F_i\right]}}{\sum_{j=1}^{N} n_j\, e^{-\beta\left[V_j - F_j\right]}} . \]   (5.227)
Now Eq. (5.226) with the coefficients (5.227) must be combined with Eq. (5.225) to obtain F_i, given by

\[ e^{-\beta F_i} = \int d\xi\, p(\xi)\, e^{-\beta V_i(\xi)} . \]   (5.228)
The system of Eqs. (5.226) and (5.228) must be solved self-consistently by iteration.
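The self-consistent solution of Eqs. (5.226)–(5.228) can be written in a few lines on a discrete grid of the reaction coordinate. The following sketch uses synthetic window histograms generated from a model PMF; the spring constant, window centres and sample numbers are assumptions, and the code only illustrates the structure of the WHAM iteration.

```python
# Minimal sketch of the WHAM self-consistency loop, Eqs. (5.226)-(5.228).
import numpy as np

beta = 1.0
xi = np.linspace(0.0, 1.0, 201)                      # reaction coordinate grid
centers = np.linspace(0.1, 0.9, 9)                   # window centres xi_i (assumption)
k_spring = 200.0                                     # harmonic bias constant (assumption)
V = 0.5 * k_spring * (xi[None, :] - centers[:, None]) ** 2   # V_i(xi) for each window

# Synthetic biased histograms pi_i(xi): in a real application these come from
# the umbrella-sampling windows; here they are generated from a model PMF.
A_model = 12.0 * (xi - 0.5) ** 2
pi_hist = np.exp(-beta * (A_model[None, :] + V))
pi_hist /= pi_hist.sum(axis=1, keepdims=True)
n_i = np.full(len(centers), 10_000)                  # samples per window (assumption)

F = np.zeros(len(centers))                           # constants F_i of Eq. (5.225)
for _ in range(1000):
    # Eqs. (5.226)-(5.227): combine the windows into the unbiased distribution.
    denom = np.sum(n_i[:, None] * np.exp(-beta * (V - F[:, None])), axis=0)
    p = np.sum(n_i[:, None] * pi_hist, axis=0) / denom
    p /= np.trapz(p, xi)
    # Eq. (5.228): update the F_i from the current estimate of p(xi).
    F_new = -np.log(np.trapz(p[None, :] * np.exp(-beta * V), xi, axis=1)) / beta
    if np.max(np.abs(F_new - F)) < 1e-10:
        F = F_new
        break
    F = F_new

pmf = -np.log(p) / beta                              # PMF up to an additive constant
print("PMF minimum at xi =", xi[np.argmin(pmf)])
```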
5.14.2 Metadynamics

In the umbrella sampling methods, the idea is to realize the crossing of the barrier between two minima of the free energy with the use of overlapping biased distribution functions. An alternative method, developed more recently, is called metadynamics [48–50]. It is usual to give an intuitive explanation of this method through the analogy with a boat at the bottom of a water basin. A way to let the boat reach a nearby basin separated by a mountain is to fill the two basins with water until the boat can navigate over the mountain; see Fig. 5.19. The bias potentials are built to fill the deep wells and to drive the system above the barrier, making it possible to move from one local minimum to another.
Fig. 5.19 Crossing the barrier in the metadynamics method
The region of the minimum in the phase space is filled with Gaussian functions. First of all, a number of collective variables (CV) are selected. The number d of the CVs is expected to be not too large. Each CV S_α (α = 1, …, d) depends on a set of coordinates, S_α(r_1(t), …, r_n(t)). The time evolution of each CV is followed along the trajectory, and at certain time intervals t_k = k · Δt a Gaussian is placed, centred at the values S_α(t_k) determined by the positions at the time t_k. In this way, an external potential is obtained,

\[ V(S) = a_0 \sum_{k=1}^{n} \exp\left[ -\sum_{\alpha=1}^{d} \frac{\left(S_\alpha - S_\alpha(t_k)\right)^2}{2\sigma_\alpha^2} \right] \]   (5.229)
where:
• a_0 is the Gaussian amplitude
• σ_α is the width of the Gaussian along S_α

In this way, a bias is introduced against revisiting the same positions. The Gaussian functions fill the free energy well. The choice of the Gaussian parameters is a key point. With broad Gaussian functions, it is possible to explore the free energy profile and fill the well more quickly, but the results could be affected by large errors. On the contrary, the use of too narrow Gaussians would require very long simulations. As time evolves, the sum in Eq. (5.229) progressively flattens the free energy landscape, and for long enough times

\[ A(\{S_\alpha\}) + V(\{S_\alpha\}) = \mathrm{const} , \]   (5.230)
so the free energy can be obtained apart from a constant. A good explanation of the method is given in ref. [51]. The authors consider a one-dimensional model potential with three minima for the CV S; the minima are at A, B and C; see Fig. 5.20. The system is initially located in the local minimum B. The conditions are such that in a normal simulation the system could not escape from the minimum. At the top of the figure, S oscillates at the initial times around the point B, but with the accumulation of the Gaussians, indicated at the bottom of the figure, the system can reach the other regions, and finally S is no longer trapped in a single portion of the phase space. The method can be improved by assuming a time dependence for the coefficient a_0 in Eq. (5.229); this variant is called well-tempered metadynamics [50].
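To make the idea concrete, the following sketch runs a toy metadynamics on a one-dimensional double-well potential in the spirit of Eq. (5.229): Gaussians are deposited along an overdamped Langevin trajectory and the accumulated bias is used to estimate the barrier. The potential, Gaussian parameters and time step are illustrative assumptions and do not reproduce the setups of refs. [48–51].

```python
# Toy one-dimensional metadynamics sketch (double-well potential).
import numpy as np

rng = np.random.default_rng(4)
a0, sigma = 0.1, 0.2                         # Gaussian height and width (assumptions)
dt, kT, stride = 1e-3, 0.1, 250              # deposit a Gaussian every `stride` steps

def force(s, hills):
    """Force from the model potential (s^2 - 1)^2 plus the deposited bias."""
    dV = 4.0 * s * (s**2 - 1.0)
    if hills:
        d = s - np.array(hills)
        dV += np.sum(-a0 * d / sigma**2 * np.exp(-d**2 / (2 * sigma**2)))
    return -dV

s, hills = -1.0, []
for step in range(50_000):
    # Overdamped Langevin step in the biased potential.
    s += force(s, hills) * dt + np.sqrt(2 * kT * dt) * rng.normal()
    if step % stride == 0:
        hills.append(s)                      # Gaussian centred at S(t_k)

def bias(x):
    return sum(a0 * np.exp(-((x - s_k) ** 2) / (2 * sigma**2)) for s_k in hills)

# At long times A(S) + V_bias(S) is roughly constant, so differences of the
# bias estimate free energy differences.
barrier_estimate = bias(-1.0) - bias(0.0)
print("Estimated barrier height:", barrier_estimate, "(model value: 1.0)")
```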
Fig. 5.20 Metadynamics in a one-dimensional model with three minima. At the top: time evolution of the collective variable S. At the bottom: the bold line is the potential with the minima, the thin lines represent the accumulation of the Gaussians along the trajectory. Reproduced with permission from [51]
5.15 Simulation of Critical Phenomena

Computer simulations are performed on finite size systems. An important achievement of the theory of critical phenomena is that the singularities in thermodynamic functions appear only in the thermodynamic limit. It is well known that in finite size systems, the singularities are rounded to maxima located at shifted positions with respect to the real critical point. The theory of critical phenomena establishes that the critical behaviour is connected to the divergence of the correlation length ξ,

\[ \xi \approx |1 - T/T_c|^{-\nu} \]   (5.231)
with the critical exponent ν ≈ 0.63. This power law divergence is rounded off in finite size systems, since ξ cannot exceed the maximum length L of the box. As shown already in pioneering analytic calculations for the two-dimensional Ising model [52] on a finite square lattice L², at increasing L the specific heat does not diverge as in the thermodynamic limit, but its maximum increases with increasing size. The position of the maximum as a function of temperature can be taken as an apparent critical temperature T_c^a(L). It shifts and approaches T_c as the size of the system increases. Scaling theory applied to finite size systems gives for the apparent critical temperature [41, 53]
\[ T_{max}(L) - T_c \sim 1/L^{\theta/\nu} , \]   (5.232)
where ν is the critical exponent of Eq. (5.231) and θ ≈ 0.54 for the three-dimensional Ising model. Finite size scaling (FSS) methods have become of frequent use in MC calculations of lattice models. They consist in performing simulations on lattices of increasing size and extrapolating to the thermodynamic limit by means of appropriate scaling laws. The extension of FSS to liquid systems is not straightforward, and different methodologies have been applied. We recall here the approach based on the distribution functions of the order parameter, which can be calculated during the simulation. This method was introduced by Binder for lattice models [40] and later applied to critical phenomena in fluids [54, 55]. We restrict ourselves here to the application to the three-dimensional Lennard-Jones (LJ) fluid as discussed by N. B. Wilding [56, 57]. In a GCMC simulation, a direct measurement of the density fluctuations is possible. These fluctuations are the essential feature of fluid criticality. The LJ fluid is investigated close to the estimated critical temperature T_c with simulation boxes of different sizes. All the quantities are in the LJ units already introduced. Above T_c the distribution function of the density P_L(ρ) has a single peak at the average density. By carefully changing the chemical potential μ and the temperature, it is possible to obtain below T_c a bimodal distribution function. The barrier between the two phases is not very high, so the system can fluctuate between the liquid and the gas phase. In order to produce P_L(ρ), long simulations are needed, so as to sample enough independent configurations. In Fig. 5.21, on the right, the time evolution of the density along a GCMC run for two temperatures below T_c is presented. For T = 0.985 T_c, there are very frequent oscillations; the corresponding distribution function is on the left side. For T = 0.965 T_c the barrier increases, and it becomes more difficult for the system to cross it. The two peaks of the distribution function are higher and located farther apart. Because of the presence of the interface between the two coexisting phases, it is essential to use the multicanonical method introduced before. Apart from technical details, the simulations are performed by dividing the box into m³ subcells of length a. So in practice the fluctuations of density and energy are measured at increasing L = m · a. For technical reasons, the length a is chosen equal to the cut-off of the potential r_c. In Fig. 5.22, the distribution functions for a fixed m = 4 at decreasing temperatures in the subcritical region are reported. They are obtained by applying the equal area criterion: by changing μ at a given T, it is possible to make the two peaks of the gas and liquid phases have the same area. The maxima of the two peaks identify the coexisting densities. In Fig. 5.23, P_L(ρ) is shown for a temperature close to T_c obtained with different sizes L = m · a. It is evident that with increasing m, the peaks become higher and sharper. The differences are connected to the finite size effects. For each size there exists an apparent critical temperature determined by the maximum of the correlation length and of related quantities like the isothermal compressibility. At increasing size, the system approaches the real critical behaviour. The whole problem can be treated
Fig. 5.21 Density fluctuations on the right panel and corresponding density distributions on the left panel. The examples are for subcritical conditions with T = 0.985 T_c and T = 0.965 T_c. Reproduced from N. B. Wilding, Am. J. Phys. 69, 1147 (2001) with the permission of the American Association of Physics Teachers

Fig. 5.22 Density distribution functions for fixed size (m = 4) at decreasing temperatures in the subcritical region. Reprinted with permission from N. B. Wilding, Phys. Rev. E 52, 602 (1995). Copyright (1995) by the American Physical Society
with accurate and rigorous methods, known as finite size scaling. In Fig. 5.24, the apparent T_c^a(L) is plotted versus L^{-(θ+1)/ν}, where θ and ν are the scaling exponents already introduced in Eq. (5.232). The exponent θ must be corrected to θ + 1 because of the difference in symmetry of the phase diagram of a fluid with respect to lattice models like the Ising model. The extrapolation of T_c^a(L) for L → ∞ gives an accurate estimate of the critical temperature, T_c* = 1.1876 ± 0.0003.
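The extrapolation shown in Fig. 5.24 amounts to a linear fit of the apparent critical temperatures against L^{-(θ+1)/ν}. The following sketch performs such a fit on invented values of T_c^a(L); the box sizes and temperatures are placeholders, not the data of refs. [56, 57].

```python
# Minimal sketch of the finite size scaling extrapolation of Fig. 5.24.
import numpy as np

theta, nu = 0.54, 0.63
L = np.array([4.0, 5.0, 6.0, 7.0]) * 2.5                   # L = m * a (a = r_cut, assumption)
tc_apparent = np.array([1.1832, 1.1848, 1.1858, 1.1864])    # hypothetical apparent Tc*(L)

x = L ** (-(theta + 1.0) / nu)
slope, intercept = np.polyfit(x, tc_apparent, 1)
print("Extrapolated Tc* (L -> infinity):", intercept)
```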
Fig. 5.23 Density distribution functions at fixed temperature just below T_c at increasing size (m = 4, 5, 6, 7). Reprinted with permission from N. B. Wilding, Phys. Rev. E 52, 602 (1995). Copyright (1995) by the American Physical Society

Fig. 5.24 Apparent critical temperature for different sizes L reported as a function of L^{-(θ+1)/ν} according to FSS (see text). Reprinted with permission from N. B. Wilding, Phys. Rev. E 52, 602 (1995). Copyright (1995) by the American Physical Society
References

1. Allen, M.P., Tildesley, D.J.: Computer Simulation of Liquids. Oxford University Press, Oxford (1989)
2. Frenkel, D., Smit, B.: Understanding Molecular Simulation. Academic, London (2002)
3. Haigh, T., Priestley, M., Rope, C.: ENIAC in Action. Making and Remaking the Modern Computer. The MIT Press, Cambridge (2016)
4. Metropolis, N., Rosenbluth, A.W., Rosenbluth, M.N., Teller, A.H., Teller, E.: J. Chem. Phys. 21, 1087 (1953)
5. Alder, B.J., Wainwright, T.E.: J. Chem. Phys. 27, 1208 (1957)
6. Verlet, L.: Phys. Rev. 159, 98 (1967)
7. Berendsen, H.J.C., Postma, J.P.M., DiNola, A., Haak, J.R.: J. Chem. Phys. 81, 3684 (1984)
8. Bussi, G., Donadio, D., Parrinello, M.: J. Chem. Phys. 126, 014101 (2007)
9. Abraham, M.J., van der Spoel, D., Lindahl, E., Hess, B.: The GROMACS Development Team, GROMACS User Manual Version 2018 (2018). www.gromacs.org
10. Ewald, P.: Ann. Phys. 64, 253 (1921)
11. Barker, J.A., Henderson, D.: Rev. Mod. Phys. 48, 587 (1976)
12. Essmann, U., Perera, L., Berkowitz, M.L., Darden, T., Lee, H., Pedersen, L.G.: J. Chem. Phys. 103, 8577 (1995)
13. Panagiotopoulos, A.Z.: J. Phys.: Condens. Matter 12, R25 (2000)
14. Panagiotopoulos, A.Z.: Mol. Phys. 61, 813 (1987)
15. Panagiotopoulos, A.Z., Quirke, N., Stapleton, M., Tildesley, D.J.: Mol. Phys. 63, 527 (1988)
16. Sweatman, M.B., Quirke, N.: Mol. Simul. 30, 23 (2004)
17. Nosé, S.: Mol. Phys. 52, 255 (1984)
18. Hoover, W.G.: Phys. Rev. A 31, 1695 (1985)
19. Martyna, G.J., Klein, M.L.: J. Chem. Phys. 97, 2635 (1992)
20. Andersen, H.C.: J. Chem. Phys. 72, 2384 (1980)
21. Parrinello, M., Rahman, A.: J. Appl. Phys. 52, 7182 (1981)
22. Tuckerman, M.E., Mundy, C.J., Martyna, G.J.: Europhys. Lett. 45, 149 (1999)
23. Tuckerman, M.E., Liu, Y., Ciccotti, G., Martyna, G.J.: J. Chem. Phys. 115, 1678 (2001)
24. Ryckaert, J.P., Ciccotti, G., Berendsen, H.J.C.: J. Comput. Phys. 23, 327 (1977)
25. Soper, A.K.: Chem. Phys. 258, 121 (2000)
26. Stillinger, F.H., Rahman, A.: J. Chem. Phys. 60, 1545 (1974)
27. Berendsen, H.J.C., Postma, J.P.M., van Gunsteren, W.F., Hermans, J.: In: Pullman, B. (ed.) Intermolecular Forces, p. 331. Reidel, Dordrecht (1981)
28. Berendsen, H.J.C., Grigera, J.R., Straatsma, T.P.: J. Phys. Chem. 91, 6269 (1987)
29. Jorgensen, W.L., Chandrasekhar, J., Madura, J.D., Impey, R.W., Klein, M.L.: J. Chem. Phys. 79, 926 (1983)
30. Abascal, J.L.F., Vega, C.: J. Chem. Phys. 123, 234505 (2005)
31. Molinero, V., Moore, E.B.: J. Phys. Chem. B 113, 4008 (2009)
32. Vega, C., Abascal, J.L.F., Conde, M.M., Aragones, J.L.: Faraday Discuss. 141, 251 (2009)
33. Gallo, P., Corradini, D., Rovere, M.: Nat. Commun. 5, 5806 (2014)
34. Corradini, D., Rovere, M., Gallo, P.: J. Chem. Phys. 143, 114502 (2015)
35. González, M.A., Abascal, J.L.F.: J. Chem. Phys. 135, 224516 (2011)
36. Marrink, S.J., Risselada, H.J., Yefimov, S., Tieleman, D.P., de Vries, A.H.: J. Phys. Chem. B 111, 7812 (2007)
37. Yesylevskyy, S.O., Schäfer, L.V., Sengupta, D., Marrink, S.J.: PLoS Comput. Biol. 6, 1 (2010)
38. Binder, K.: Rep. Prog. Phys. 50, 783 (1987)
39. Binder, K., Block, B.J., Virnau, P., Tröster, A.: Am. J. Phys. 80, 1099 (2012)
40. Binder, K.: Z. Phys. B: Condens. Matter 43, 119 (1981)
41. Binder, K.: Mol. Phys. 108, 1797 (2010)
42. Vega, C., Sanz, E., Abascal, J.L.F., Noya, E.G.: J. Phys.: Condens. Matter 20, 153101 (2008)
43. Torrie, G.M., Valleau, J.P.: J. Comput. Phys. 23, 187 (1977)
44. Pohorille, A., Jarzynski, C., Chipot, C.: J. Phys. Chem. B 114, 10235 (2010)
45. Berg, B.A., Neuhaus, T.: Phys. Rev. Lett. 68 (1992)
46. Ferrenberg, A.M., Swendsen, R.H.: Comput. Phys. 3, 101 (1989)
47. Kästner, J.: Wiley Interdiscip. Rev. Comput. Mol. Sci. 1, 932 (2011)
48. Laio, A., Parrinello, M.: Proc. Natl. Acad. Sci. U.S.A. 99, 12562 (2002)
49. Bussi, G., Laio, A., Parrinello, M.: Phys. Rev. Lett. 96, 090601 (2006)
50. Laio, A., Gervasio, F.L.: Rep. Prog. Phys. 71, 126601 (2008)
51. Barducci, A., Bonomi, M., Parrinello, M.: WIREs Comput. Mol. Sci. 1, 826 (2011)
52. Ferdinand, A.E., Fisher, M.E.: Phys. Rev. 185, 832 (1969)
53. Chen, J.H., Fisher, M.E., Nickel, B.G.: Phys. Rev. Lett. 48, 630 (1982)
54. Rovere, M., Heermann, D.W., Binder, K.: J. Phys.: Condens. Matter 2, 7009 (1990)
55. Wilding, N.B., Bruce, A.D.: J. Phys.: Condens. Matter 4, 3087 (1992)
56. Wilding, N.B.: Phys. Rev. E 52, 602 (1995)
57. Wilding, N.B.: Am. J. Phys. 69, 1147 (2001)
Chapter 6
Dynamical Correlation Functions and Linear Response Theory for Fluids
We consider now the correlation functions of dynamical observables. These functions are a natural extension of the static correlators introduced in Chap. 3. They take into account how the fluctuations of quantities like the density are correlated in time and space. The time-dependent correlation functions are related to the response functions, under the assumption of linear response theory, through the fluctuation-dissipation theorem [1–4]. We will be interested in particular in the density correlation functions and the density response functions. The central quantity will be the dynamical structure factor, which can be experimentally determined in a large range of frequencies and wavelengths. In the next chapter, we will show how the correlation functions introduced here can be used to study the liquid dynamics.
6.1 Dynamical Observables

The fluids that we consider are classical systems composed of N particles. The evolution of the positions r_i and momenta p_i is governed by the Hamilton equations

\[ \dot{r}_i = \frac{\partial H}{\partial p_i} \qquad \dot{p}_i = -\frac{\partial H}{\partial r_i} . \]   (6.1)
The different observables are represented by microscopic operators that evolve in time. Let us consider the density as an example. We can generalize the density operator defined in Eq. (3.38). From now on, to avoid excessive notation, we indicate the microscopic operators without specific symbols, so we define the time-dependent local density operator as
\[ \rho(r,t) = \sum_i \delta\left[r - r_i(t)\right] . \]   (6.2)
This operator would measure the local value of the density at a position r at a time t. The time dependence derives from the time evolution of the single particles according to the Hamilton equations (6.1). Of course the average in the equilibrium ensemble will be constant in time, and it will be

\[ \langle \rho(r,t) \rangle = \langle \rho(r,0) \rangle = \rho , \]   (6.3)
since the fluid is homogeneous. Other examples are the current density

\[ j(r,t) = \sum_i v_i\, \delta\left[r - r_i(t)\right] , \]   (6.4)
where v_i is the velocity of the single particle, and the energy density

\[ \epsilon(r,t) = \sum_i \epsilon_i\, \delta\left[r - r_i(t)\right] , \]   (6.5)

where ε_i is the energy of the single particle. There are different problems where we need to study the local fluctuations of an observable with respect to its equilibrium value at a given time t. The density fluctuation, for instance, is given by δρ(r,t) = ρ(r,t) − ρ, and we will moreover be interested in determining how the fluctuations are correlated. For this reason we introduce the correlation function as the equilibrium average

\[ \left\langle \left[\rho(r',t') - \rho\right]\left[\rho(r,t) - \rho\right] \right\rangle = \left\langle \rho(r',t')\, \rho(r,t) \right\rangle - \rho^2 . \]   (6.6)
We now define the correlation functions and their properties in more general terms.
6.2 Correlation Functions

We consider a dynamical variable

\[ X(r,t) = X\left(r, r^N(t), p^N(t)\right) \]   (6.7)

that describes the behaviour of a corresponding observable at position r at time t. Its temporal evolution will be determined by the equation

\[ \frac{\partial X(r,t)}{\partial t} = i \hat{L}\, X(r,t) , \]   (6.8)
where L̂ is the Liouville operator already defined in Eq. (2.132), which we recall here:

\[ \hat{L} f = \{f, H\} = \sum_i \left( \dot{r}_i \cdot \frac{\partial f}{\partial r_i} + \dot{p}_i \cdot \frac{\partial f}{\partial p_i} \right) . \]   (6.9)
The formal solution of Eq. (6.8) is

\[ X(r,t) = e^{i \hat{L} t}\, X(r,0) . \]   (6.10)
The average value at equilibrium of the observable X(r,t) does not depend on time, and we have

\[ \langle X(r,t) \rangle = \langle X(r,0) \rangle = \langle X \rangle . \]   (6.11)
Let us now define the correlation function between two dynamical variables. We consider that the evolution of a dynamical variable X(r,t) can be at least partially determined by the correlation with another dynamical variable Y(r,t). The correlation function can be determined by evaluating the equilibrium time average

\[ \lim_{\tau \to \infty} \frac{1}{\tau} \int_0^\tau dt'\, X(r, t+t')\, Y(r', t') = \left\langle X(r, t+t')\, Y(r', t') \right\rangle ; \]   (6.12)
the time average is equivalent to an average over the appropriate ensemble. In our system, this correlation is independent of the origin of time; therefore, we can set t' = 0. We can now define the correlation function

\[ C_{XY}(r, r', t) = \left\langle X(r,t)\, Y(r',0) \right\rangle . \]   (6.13)

By shifting the origin of time, we have

\[ C_{XY}(t) = \left\langle X(r,t)\, Y(r',0) \right\rangle = \left\langle X(r,0)\, Y(r',-t) \right\rangle = C_{YX}(-t) . \]   (6.14)
In a homogeneous and isotropic system, we also have

\[ C_{XY}(r, r', t) = C_{XY}(|r - r'|, t) , \]   (6.15)

so we can rewrite Eq. (6.13) as

\[ C_{XY}(r,t) = \left\langle X(r,t)\, Y(0,0) \right\rangle . \]   (6.16)
In the asymptotic limit

\[ \lim_{t \to \infty} C_{XY}(r,t) = \langle X(t) \rangle\, \langle Y(0) \rangle = \langle X \rangle \langle Y \rangle , \]   (6.17)
since we do not expect any correlation between X(t → ∞) and Y(t = 0). By taking into account Eq. (6.17), we can also define the correlation functions of the fluctuations

\[ \delta X(r,t) = X(r,t) - \langle X \rangle \]   (6.18)

as

\[ C_{XY}(r,t) = \left\langle \delta X(r,t)\, \delta Y(0,0) \right\rangle . \]   (6.19)
This form is in fact sometimes more convenient in calculations, since

\[ \lim_{t \to \infty} C_{XY}(r,t) = 0 . \]   (6.20)
We consider now the Fourier expansion of the dynamical variables,

\[ X(r,t) = \sum_k e^{i k \cdot r}\, X(k,t) , \]   (6.21)
and we define the correlation functions in (k, t) space,

\[ \tilde{C}_{XY}(k,t) = \left\langle X(k,t)\, Y(-k,0) \right\rangle . \]   (6.22)

This is also the Fourier transform of Eq. (6.19),

\[ \tilde{C}_{XY}(k,t) = \int dr\, e^{-i k \cdot r}\, C_{XY}(r,t) . \]   (6.23)
The Fourier transform from the time to the frequency domain gives

\[ \tilde{C}_{XY}(k,\omega) = \int dt\, e^{i \omega t}\, \tilde{C}_{XY}(k,t) . \]   (6.24)
The Laplace transform can be defined in the complex frequency space as follows:

\[ \tilde{C}_{XY}(k,z) = \int_0^\infty dt\, e^{i z t}\, \tilde{C}_{XY}(k,t) , \qquad \mathrm{Im}(z) > 0 . \]   (6.25)
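In practice, correlation functions like Eq. (6.13) are estimated from trajectories sampled at discrete times. The following sketch computes the autocorrelation of a synthetic stationary signal using the FFT; the signal and its correlation time are invented for the example.

```python
# Minimal sketch of estimating a time autocorrelation function from a trajectory.
import numpy as np

def autocorrelation(x):
    """Return C(t) = <dX(t) dX(0)> for a stationary series x of length n."""
    dx = x - x.mean()
    n = len(dx)
    # Zero-padded FFT gives the linear (non-circular) correlation sums.
    f = np.fft.rfft(dx, n=2 * n)
    acf = np.fft.irfft(f * np.conjugate(f))[:n].real
    return acf / np.arange(n, 0, -1)          # normalize by the number of pairs

rng = np.random.default_rng(5)
# Synthetic stationary signal with a finite correlation time.
x = np.convolve(rng.normal(size=50_000), np.exp(-np.arange(200) / 30.0), mode="same")
c = autocorrelation(x)
print("C(0) (variance):", c[0], "  C(t) at lag 100:", c[100])
```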
6.2.1 Further Properties of the Correlation Functions

We can start from the left-hand side of Eq. (6.12) and indicate the time derivative as Ẋ = dX/dt. To simplify the notation, we indicate only the dependence on time of the dynamical variables. We can write

\[ \lim_{\tau \to \infty} \frac{1}{\tau} \int_0^\tau dt'\, \dot{X}(t+t')\, Y(t') = \lim_{\tau \to \infty} \frac{1}{\tau} \Big[ X(t+t')\, Y(t') \Big]_0^\tau - \lim_{\tau \to \infty} \frac{1}{\tau} \int_0^\tau dt'\, X(t+t')\, \dot{Y}(t') = -\lim_{\tau \to \infty} \frac{1}{\tau} \int_0^\tau dt'\, X(t+t')\, \dot{Y}(t') , \]   (6.26)

where we used an integration by parts. So finally

\[ \left\langle \dot{X}(t)\, Y(0) \right\rangle = -\left\langle X(t)\, \dot{Y}(0) \right\rangle , \]   (6.27)
and as a consequence

\[ \left\langle \dot{X}(t)\, X(0) \right\rangle = 0 . \]   (6.28)
The procedure of Eq. (6.26) can be iterated,

\[ \lim_{\tau \to \infty} \frac{1}{\tau} \int_0^\tau dt'\, \ddot{X}(t+t')\, Y(t') = -\lim_{\tau \to \infty} \frac{1}{\tau} \int_0^\tau dt'\, \dot{X}(t+t')\, \dot{Y}(t') , \]   (6.29)
so

\[ \left\langle \ddot{X}(t)\, Y(0) \right\rangle = -\left\langle \dot{X}(t)\, \dot{Y}(0) \right\rangle . \]   (6.30)

In particular

\[ \left\langle \ddot{X}(t)\, X(0) \right\rangle = -\left\langle \dot{X}(t)\, \dot{X}(0) \right\rangle . \]   (6.31)
d nX dt n
with further iterations of Eq. (6.29), we find
(6.32)
200
6 Dynamical Correlation Functions and Linear Response Theory for Fluids
∂t2n X(t)X(0) = (−1)n ∂tn X(t)∂tn X(0) .
(6.33)
In the case of an autocorrelation function, the Taylor expansion around t = 0, taking into account that it is an even function in the time, can be written in the (k, t) space as follows C˜ XX (k, t) =
∞ n=0
1 (2n)!
d 2n C˜ XX dt 2n
t 2n .
(6.34)
t=0
The coefficient of the expansion can be obtained by considering the inverse Fourier transform
˜ (6.35) CXX (k, t) = dω e−iωt C˜ XX (k, ω) . The expansion in Eq. (6.34) can be written as C˜ XX (k, t) =
∞ n=0
(2n)
(−1)n
ω˜ XX 2n t , (2n)!
(6.36)
where the 2n-th moment of the frequency is given by the integral
ω2n C˜ XX (k, ω) .
(6.37)
= ∂t2n X(0)X(0)
(6.38)
(2n) ω˜ XX = ∂tn X(k, 0)∂tn X(−k, 0) .
(6.39)
(2n)
ω˜ XX =
dω
Moreover since
d 2n CXX dt 2n
t=0
we have also
6.3 Linear Response Theory We study now how a system responds to an external time-dependent perturbation. We consider the coupling of an external field to a dynamics variable in such way to induce a deviation from equilibrium. We suppose that we can treat the problem in linear response theory, neglecting nonlinear terms in the perturbation [1, 2]. In this limit, according to the principle of Onsager’s regression [5, 6], the regression to equilibrium of the microscopic fluctuations would take place with the same laws
6.3 Linear Response Theory
201
as the relaxation from non-equilibrium induced by an external perturbation. As a consequence it is expected that the response functions contain information about the intrinsic properties of the system. The Hamiltonian of the perturbed system is given by H(t) = H + H ext (t)
(6.40)
where the perturbation H ext (t) is switched on from t → −∞ and switched off at t =0 ⎧ ⎪ 0 t → −∞ ⎪ ⎪ ⎪ ⎪ ⎨ (6.41) H ext (t) = = 0 t 0 To be more general, we treat the problem in quantum statistical mechanics. The dynamical variable X(t) is driven out of equilibrium, and for t > 0, it relaxes to equilibrium, as shown in Fig. 6.1. We are not interested to the range of time −∞ < t < 0, but we explore how X(t) goes to its equilibrium value X for t > 0. In general we can define the non-equilibrium average as
X(t)N E = T r [(t)X] ,
(6.42)
where we introduce the general time dependent statistical distribution (t) that takes into account the presence of an external perturbation. The time evolution of this operator is determined by the commutator with the Hamiltonian that contains the external perturbation
Fig. 6.1 Relaxation to equilibrium of the observable X(t) after switching off the perturbation at t =0
202
6 Dynamical Correlation Functions and Linear Response Theory for Fluids
i h¯
∂ (t) = [H(t), (t)] . ∂t
(6.43)
For t → ±∞ (t) → 0 , where 0 is the equilibrium distribution independent from the time that obeys [H, 0 ] = 0 .
(6.44)
In linear response theory, we assume that the deviation of (t) from equilibrium δ(t) = (t) − 0
(6.45)
depends linearly on the perturbation. Now we calculate the deviation from equilibrium of X(t) δ X(t)N E = X(t)N E − X = T r [(t)X] − T r [0 X] = T r [δ(t)X] ,
(6.46)
where we used Eq. (6.45). We consider the equation of motion (6.43) (from now we assume h¯ = 1) i
∂ [0 + δ(t)] = H + H ext (t), 0 + δ(t) , ∂t
(6.47)
and we apply the linear response theory to get i
∂ δ(t) ≈ [H, δ(t)] + H ext (t), 0 . ∂t
(6.48)
We move the first commutator on the left-hand side ∂ δ(t) − [H, δ(t)] = H ext (t), 0 . ∂t
(6.49)
d iH t e δ(t)e−iH t = eiH t H ext (t), 0 e−iH t . dt
(6.50)
i then i
We can integrate the last equation to obtain
δ(t) = −i
t −∞
dt
eiH (t −t ) H ext (t ), 0 e−iH (t −t ) .
(6.51)
We assume now that the perturbation term is due to the coupling of a scalar field φ ext to a variable Y
6.4 Dynamical Response Functions
203
dr
H ext (t) = −
Y r φ ext r , t ,
(6.52)
the commutator in Eq. (6.51) becomes ext H (t), 0 = −
dr
Y r , 0 φ ext r , t .
(6.53)
We can now calculate the trace (6.46) by using the cyclic properties of the trace T r[ABC] = T r[CAB] = T r[BCA]; inside the integral, we have T r [δ(t)X]
t
dt =i dr −∞
(6.54) ) * T r eiH t Y r , 0 e−iH t X(r, t) φ ext r , t .
then / . , T r {. . .} = T r 0 X (r, t) , Y r , t
(6.55)
this is the statistical equilibrium average of the commutator, and finally, we can write
t
i
δX(r, t)N E = X (r, t) , Y r , t φ ext r , t , (6.56) dt dr h¯ −∞ where we inserted h. ¯ We can define the response function i X (r, t) , Y r , t , ΦXY r − r , t − t = h¯
(6.57)
in an uniform system, it depends on r − r and t − t .
6.4 Dynamical Response Functions We can define the after effect function [7] RXY (k, t) = θ (t) ΦXY (k, t)
(6.58)
where ΦXY (k, t) =
i
[X (k, t) , Y (−k, 0)] h¯
(6.59)
204
6 Dynamical Correlation Functions and Linear Response Theory for Fluids
and the Heaviside function θ (t) has been introduced ⎧ ⎨0
θ (t) =
⎩
t 0
As stated by Berne and Harp [7]: The after-effect function is an intrinsic dynamical property of the system, which is independent of the precise magnitude and form of the applied force, and which succinctly summarizes the way in which the constituent particles in a many-body system cooperate to give the observed response of the system to the external perturbation.
In (k, t) space, Eq. (6.56), introducing τ = t − t , can be transformed to
δX(k, t)N E =
+∞ −∞
dτ RXY (k, τ ) φ ext (k, t − τ ) .
(6.61)
Equation (6.61) is a convolution in the (k, ω) space
δX(k, ω)N E = χXY (k, ω) φ ext (k, ω) ,
(6.62)
in this convolution, the susceptibility or complex response function is defined by
χXY (k, ω) =
+∞
dteiωt ΦXY (k, t) .
(6.63)
0
The function χXY (k, ω) is complex, and we can define its real and imaginary part: χXY (k, ω) = χXY (k, ω) + iχXY (k, ω) .
(6.64)
where, by considering only the ω dependence, χXY (ω) =
χXY (ω) =
∞
dtΦXY (t) cos (ωt)
(6.65)
dtΦXY (t) sin (ωt) .
(6.66)
0
∞
0
As a consequence χXY (ω) = χXY (−ω)
(6.67)
χXY (ω) = −χXY (−ω)
(6.68)
6.4 Dynamical Response Functions
205
The function χ (k, ω) can be also defined as a Laplace transform
χXY (k, z) =
+∞
dteizt ΦXY (k, t)
(6.69)
0
and it is an analytic function in the part of the complex plane with I m(z) > 0. As a consequence, the Cauchy integral will give 1 dz
C
χxy z z − z
=0
(6.70)
when performed on any closed curve in the upper complex plane. By performing the integral along a semicircle to avoid the pole at z = ω, as shown in Fig. 6.2, we have
+∞ χXY ω P = iπ χXY (ω) . dω (6.71) ω −ω −∞ By substituting in this equation the definition (6.64), we find that the real and the imaginary components are connected through the Kramers–Kronig relations
χXY
χXY
1 (ω) = P π
1 (ω) = − P π
+∞
−∞
dω
ω χXY
+∞ −∞
dω
ω − ω
ω χXY ω − ω
(6.72)
,
.
(6.73)
The function χ (k, ω) is the generalization of the analogous function defined in electromagnetism as the response of a charged system to an applied timeFig. 6.2 Contour of integration in the complex z plane
206
6 Dynamical Correlation Functions and Linear Response Theory for Fluids
dependent external potential. From this point of view, the quantity χ (k, ω) describes analogously the adsorption process, while χ (k, ω) is related to dispersion phenomena. As shown, the two functions are not independent.
6.5 Fluctuation-Dissipation Theorem The fluctuation-dissipation theorem was formulated in 1928 by Nyquist as a relation between noise and dissipation in electrical circuits. In statistical mechanics, it represents an important relation between the correlation functions and the corresponding response functions [8–11]. We rewrite the susceptibility from Eq. (6.69)
χXY (k, z) =
+∞
dteizt
0
i
[X (k, t) , Y (−k, 0)] , h¯
(6.74)
where we used the definition (6.59). In quantum statistical mechanics, the average CXY (k, t) = X (k, t) Y (−k, 0)
(6.75)
is related to the fluctuations of the observable, but at variance with the classical case, the two operators in general do not commute so the correlation functions are defined with the symmetrized expression SXY (k, t) =
1
[X (k, t) Y (−k, 0) + Y (−k, 0) X (k, t)] . 2
(6.76)
To simplify the notation, from now we consider only the dependence on time of the operators. Consider Eq. (6.75)
X(t)Y (0) =
1 T r e−βH X(t)Y (0) Z
(6.77)
where Z = T r e−βH ; with the use of the properties of the trace, we obtain (with h¯ = 1) T r e−βH X(t)Y (0) = T r Y (0)e−βH X(t) = T r Y (0)e−βH eiH t X(0)e−iH t
6.5 Fluctuation-Dissipation Theorem
207
= T r e−iH t Y (0)e−βH eiH t X(0) = T r e−βH eβH e−iH t Y (0)e−βH eiH t X(0) = T r e−βH e−i(t+iβ)H Y (0)ei(t+iβ)H X(0)
(6.78)
and we get
X(t)Y (0) = Y (−t − iβ)X(0) .
(6.79)
By taking the Fourier transform, we have
¯ dteiωt X(t)Y (0) = eβ hω
dte−iωt Y (t)X(0) ,
(6.80)
that can be written as ¯ C ˜ XY (k, ω) = eβ hω ˜ Y X (−k, −ω) . C
(6.81)
This equation is called the principle of detailed balance [1, 10, 12]. We will see later more about the meaning of this equation. From Eq. (6.79) by shifting the origin of the time, we have also
X(t)Y (0) = Y (0)X(t + i hβ) , ¯
(6.82)
so we get
¯ dteiωt X(t)Y (0) = eβ hω
dteiωt Y (0)X(t) .
(6.83)
As a consequence for the correlation function, we have
SXY (ω) =
dteiωt SXY (t) =
¯ +1 eβ hω 2
dteiωt Y (0)X(t) .
(6.84)
We can define now the following integral [10, 11]
ϕxy (t) = 0
β
dβ Y (0)X(t + i hβ ¯ ) ,
if we take the time derivative, we have d ϕxy (t) = dt
β 0
dβ
d Y (0)X(t + i hβ ¯ ) dt
(6.85)
208
6 Dynamical Correlation Functions and Linear Response Theory for Fluids
= since
d dt
=
1 d i h¯ dβ
1 i h¯
β
dβ
0
d Y (0)X(t + i hβ ¯ ) dβ
(6.86)
, and it comes out
i d 1
[Y (0)X(t + i hβ) ϕxy (t) = ¯ − Y (0)X(t)] = − [X(t), Y (0)] . h¯ dt i h¯ (6.87) We can write
+∞ χXY (z) = − dteizt ϕ˙xy (t) . (6.88) 0
Now
ϕxy (t) =
+∞
−∞
dω −iωt e ϕxy (ω) 2π
(6.89)
then
ϕ˙xy (t) = −
+∞ −∞
dω iωe−iωt ϕxy (ω) . 2π
(6.90)
By substituting in Eq. (6.88) and exchanging the integration on t and ω, we have
χXY (z) =
+∞
−∞
dω ω ϕxy ω , 2π ω − z
(6.91)
with z = ω + i, we can use the formula 1 1 =P ∓ iπ δ (x − x0 ) x − x0 ± i x − x0
(6.92)
and finally Eq. (6.91) becomes χXY
1 P (ω) = 2π
+∞ −∞
dω
ω
ϕ xy ω ω − ω
i + ωϕxy (ω) . 2
(6.93)
From it we have χXY (ω) =
Then
1 ωϕxy (ω) . 2
(6.94)
6.6 Response Functions and Dissipation
ϕxy (ω) =
+∞ −∞
209
β
dteiωt 0
dβ Y (0)X(t + i hβ ¯ )
(6.95)
we can exchange the integration order
β
ϕxy (ω) =
dβ
0
β
=
+∞
−∞
dβ eh¯ β ω
dteiωt Y (0)X(t + i hβ ¯ )
0
+∞
−∞
dteiωt Y (0)X(t)
(6.96)
and finally
χXY
+∞ 1 h¯ βω e −1 dteiωt Y (0)X(t) . (ω) = 2h¯ −∞
(6.97)
On the right-hand side of the last equation, we can use Eq. (6.84), and we have
χXY
¯ −1 1 eβ hω SXY (ω) , (ω) = β hω h¯ e ¯ + 1
(6.98)
so finally we get the fluctuation-dissipation theorem [1, 9] SXY (k, ω) = h¯ coth
β hω ¯ χXY (k, ω) . 2
(6.99)
In the classical limit β hω ¯ → 0 obtained either for high temperature or for h¯ → 0, we can expand coth(x) ≈ 1/x, and we have the classical fluctuation-dissipation theorem χXY (k, ω) =
ω ˜ CXY (k, ω) . 2kB T
(6.100)
6.6 Response Functions and Dissipation The fluctuation-dissipation relation connects the correlation function of the fluctuations of an observable to the imaginary part of the susceptibility. The latter function is related to dissipation phenomena [1]. To show this, we consider again the system under the effect of perturbation coupled to an observable X (r, t)
H ext (t) = −
dr
X r φ ext r , t
(6.101)
210
6 Dynamical Correlation Functions and Linear Response Theory for Fluids
where now φ ext is periodic. The average energy of a system in presence of an external field is given by W(t) = T r [(t)H(t)]
(6.102)
where H(t) is defined in Eq. (6.40) and the average is a non-equilibrium average, as already defined in Eq. (6.42). From now we do not indicate it as · · · N E . The dissipated power will be d dH dW = Tr H(t) + (t) dt dt dt
(6.103)
but
d Tr H(t) ∝ T r {[(t), H(t)] H(t)} dt = T r {(t) [H(t), H(t)]} = 0 then dH dW = T r (t) . dt dt
(6.104)
Since in the Hamiltonian (6.40) the part that depends on time is due to the external field, we have dW =− dt
T r [(t)X(r)]
dr
T r [(0 + δ(t)) X(r)]
dr
[ X + δX(r, t)]
=−
∂φ ext ∂t
dr
=−
∂φ ext ∂t
∂φ ext , ∂t
(6.105)
where δX(r, t) is the non-equilibrium average. This is equivalent to the power that is dissipated by the external field. If the external field φ ext is periodic and it is an even function of time, to calculate the work done in a period, we can integrate the dissipated power (6.105) to obtain
ΔW =
+T −T
dt
dW dt
(6.106)
this will be positive ΔW ≥ 0. By substituting Eq. (6.105) in Eq. (6.106) and taking into account that φ ext is even in t
6.7 Density Correlation Functions and Van Hove Functions
ΔW = −
+T
dt
−T
δX(r, t)
dr
211
∂φ ext , ∂t
(6.107)
δX(r, t)
(6.108)
this can be written as
ΔW = −
∂ ∂t
dω 2π
+T −T
dt
dr
dk −iω t ik ·r ext k ,ω . e e φ (2π )3
By introducing also the Fourier representation of δX(r, t), we have
dω dk −iω (6.109) 3 2π (2π )
+T
δX(k, ω) φ ext k , ω dt dr e−i(ω+ω )t ei(k+k )·r dω 2π
ΔW = −
dk (2π )3
−T
from which we get
ΔW =
dω 2π
dk (−iω) δX(k, ω) φ ext (−k, −ω) , (2π )3
(6.110)
then we can introduce the susceptibility χ (k, ω) = χXX (k, ω) and we have
ΔW =
dω 2π
02 0 dk (−iω) χ (k, ω) + iχ (k, ω) 0φ ext (k, ω)0 .(6.111) 3 (2π )
Now χ is even in ω while χ is odd, then finally
ΔW =
dω 2π
dk (2π )3
0 02 ωχ (k, ω) 0φ ext (k, ω)0 ≥ 0 .
(6.112)
This result shows that χ (k, ω) determines the dissipation effects and moreover ωχ (k, ω) ≥ 0 .
(6.113)
6.7 Density Correlation Functions and Van Hove Functions We will focus now on the correlation function of the density fluctuations defined as 2 Cρρ (r, t) = ρ(r, t)ρ(0, 0) − ρˆ
(6.114)
212
6 Dynamical Correlation Functions and Linear Response Theory for Fluids
where ρ(r, t) =
δ [r − r i (t)]
(6.115)
i
is the microscopic density. Cρρ is the important trait d’union between experiments, computer simulation and theoretical approaches in the investigation of the dynamical behaviour of liquids. To make contact with experiments, it is usual to introduce a function related to Cρρ ; this is the Van Hove function [13] defined as
1 G(r, t) = ˆ + r , t)ρ(r ˆ , 0) (6.116) dr ρ(r N " !
1 dr δ r + r − r i (t) δ r − r j (0) . = N i
j
The description in terms of Eq. (6.114) or Eq. (6.116) is completely equivalent. It is easy to see that Cρρ (r, t) = ρ [G(r, t) − ρ]
(6.117)
For a classical system, we have not to take into account operator commutation so Eq. (6.116) can be rewritten as " ! 1 δ r + r j (0) − r i (t) . G(r, t) = N i
(6.118)
j
In an isotropic and homogeneous fluid, the function G(r, t) depends on the modulus of r. The function G(r, t) can be separated in a self part and a distinct part G(r, t) = Gs (r, t) + Gd (r, t)
(6.119)
with " ! 1 Gs (r, t) = δ (r + r i (0) − r i (t)) , N
(6.120)
i
" 1 δ r + r j (0) − r i (t) . Gd (r, t) = N !
i
In the limit t = 0
j =i
(6.121)
6.7 Density Correlation Functions and Van Hove Functions
213
Gs (r, 0) = δ(r) ,
(6.122)
Gd (r, 0) = ρg(r) .
(6.123)
The following sum rules are satisfied
Gs (r, t) = 1 ,
(6.124)
Gd (r, t) = N − 1 .
(6.125)
dr
dr
Moreover also the following limits are verified lim Gs (r, t) = lim Gs (r, t) = 0 ,
(6.126)
lim Gd (r, t) = lim Gd (r, t) = ρ .
(6.127)
r→∞
r→∞
t→∞
t→∞
The typical behaviour of the Van Hove functions at increasing distance and different time scales are represented in Fig. 6.3, where the relaxation time τ is of the order 10−12 s. The self-part at short time is a Gaussian, as we will show later. At increasing time, the self-correlation decays, and finally the behaviour of the particle is not more related to its initial state. As said before, for t → 0, Gd is equivalent to the radial distribution function, the static density correlator, the static density correlator. For large t the density correlation decays and it goes to the ideal gas limit.
t τ t >> τ r
r
Fig. 6.3 Van Hove function separated in self part (left panel) and distinct part (right panel). In both cases the curve are for t > τ (dashed line). τ is the characteristic relaxation time of the system
214
6 Dynamical Correlation Functions and Linear Response Theory for Fluids
6.8 Neutron Scattering to Determine the Liquid Dynamics We recall the theory of the neutron scattering on liquids reported in Chap. 3. In the scattering process, the collimated neutron with a wavevector k0 is diffracted at different angles with a wavevector k1 , so a wavevector k = k0 − k1 is transferred to the system. We consider here the diffraction as anelastic, so there is also a transfer of energy hω ¯ = Ef in − Ein where Ein and Ef in are respectively the initial and the final energy of the system. We recall that if hω ¯ > 0, the neutron loses energy to excite the liquid, while if hω ¯ < 0, the neutron gains energy from the liquid. The differential cross section is given by k1 d 2σ = dωdΩ k0
+∞ −∞
dt
e
iωt
!
" bi bj e
−ik·[r i (t)−r j (0)]
(6.128)
.
ij
Now we separate the average on the positions of the atoms from the average on the spin and isotopic states of the nuclei, as done in Chap. 3. We get both a coherent and an incoherent cross section ! " 2
+∞ d σ k 1 2 = bcoh dt eiωt e−ik·[r i (t)−r j (0)] , (6.129) dωdΩ coh k0 −∞ ij
d 2σ dωdΩ
= inc
2 k1 binc k0
+∞ −∞
dt
e
iωt
!
" e
−ik·[r i (t)−r i (0)]
.
(6.130)
i
The incoherent term contains only the contribution from i = j .
6.9 Dynamic Structure Factor In the coherent cross section (6.129), the statistical average corresponds to the correlation function of the density in the (k, t)-space, from Eq. (6.115) ρk (t) =
e−ik·r i (t) ,
(6.131)
i
we define the intermediate scattering function F (k, t) =
1
δρk (t) δρ−k (0) . N
(6.132)
6.9 Dynamic Structure Factor
215
From this definition, it is easy to see that the F (k, t) is related to the Van Hove function
F (k, t) = dre−ik·r [G(r, t)) − ρ] . (6.133) We note that F (k, t) depends on the modulus of k. The intermediate scattering function is related to the density correlator C˜ ρρ (k, t) = ρF (k, t)
(6.134)
that also depends on k. Equation (6.129) becomes
d 2σ dωdΩ
2 = bcoh coh
k1 k0
+∞ −∞
dt
eiωt NF (k, t) .
(6.135)
In the (k, ω) space, we define the dynamical structure factor
S(k, ω) =
+∞ −∞
dt
eiωt F (k, t) .
(6.136)
The coherent scattering cross section can be written as
d 2σ dωdΩ
2 = bcoh coh
k1 NS(k, ω) . k0
(6.137)
Neutron scattering experiments give direct access to the density correlation function in the (k, ω) space since C˜ ρρ (k, ω) = ρS(k, ω) .
(6.138)
6.9.1 Static Limit We consider now the intermediate scattering function (6.133) in the static limit
F (k, t = 0) =
dr [G(r, t = 0) − ρ] e−ik·r
= 1+ρ
dr [g(r) − 1] e−ik·r = S(k)
(6.139)
where we used Eqs. (6.122) and (6.123) and the definition of the structure factor S(k). We get also the important sum rule for the S(k, ω)
216
6 Dynamical Correlation Functions and Linear Response Theory for Fluids
dω S(k, ω) = S(k) 2π
(6.140)
In principle, if it were possible to measure the dynamical structure factor in all the range of the frequencies, the structure factor could be obtained from the inelastic neutron scattering. This is not realizable, and it is necessary to introduce corrections in order to extract the static structure factor from the inelastic neutron scattering. Different procedures have been introduced; they depend on the details of the neutron scattering technique used and on the type of system under investigation. The procedures are commonly indicated as Placzek corrections.
6.9.2 Incoherent Scattering The incoherent cross section (6.130) can be written as
d 2σ dωdΩ
2 = binc inc
k1 N k0
+∞ −∞
dt
eiωt Fs (k, t)
(6.141)
where it was introduced the self-intermediate scattering function 1
ρs (k, t)ρs (−k, 0) N
Fs (k, t) =
(6.142)
in Eq. (6.142) ρs indicates that we follow the evolution of a single particle and the correlation with itself. Fs is related to the self Van Hove function
Fs (k, t) = dr Gs (r, t)e−ik·r (6.143) we note that Fs (k, t = 0) = 1. We can define the self-dynamical structure factor
Ss (k, ω) =
+∞ −∞
dt
eiωt Fs (k, t) .
(6.144)
It can be obtained from an incoherent scattering measure
d 2σ dωdΩ
2 = binc inc
k1 NSs (k, ω) k0
(6.145)
6.10 Density Fluctuations and Dissipation
217
6.10 Density Fluctuations and Dissipation From Eq. (6.100), the fluctuation-dissipation theorem for the density correlator in the classical limit becomes χρρ (k, ω) =
βω ˜ Cρρ (k, ω) 2
(6.146)
By considering Eq. (6.138), we also have S(k, ω) =
(k, ω) 2kB T χρρ ρ ω
(6.147)
6.10.1 Detailed Balance For the density correlator, the detailed balance Eq. (6.81) is now ¯ C ˜ ρρ (k, −ω) C˜ ρρ (k, ω) = eβ hω
(6.148)
As a consequence, we have also ¯ S(k, ω) S(k, −ω) = e−β hω
(6.149)
For a homogeneous and isotropic fluid in the classical limit, S(k, ω) = S(k, −ω)
(6.150)
A simple physical interpretation of Eq. (6.149) is possible. In the inelastic scattering, the neutron exchanges energy with the system. The system undergoes a transition, and we can have two possible processes; if E0 e E1 are the energies of two eigenstates with E0 < E1 , the transition probability W will be W (E0 → E1 ) e−βE1 = −βE = e−β(E1 −E0 ) 0 W (E1 → E0 ) e
(6.151)
with hω ¯ = E1 − E0 > 0, we can have ¯ W (E → E ) W (E0 → E1 ) = e−β hω 1 0
(6.152)
on the left-hand side, the energy is given to the neutron, while on the right-hand side, the system gets energy from the neutron. This asymmetry is due to the fact that the discrete levels are occupied with different probability.
218
6 Dynamical Correlation Functions and Linear Response Theory for Fluids
In the high-temperature corresponding to the classical limit, β hω ¯ → 0, we get Eq. (6.150).
6.11 Static Limit of the Density Fluctuations From Eq. (6.147) to go in the static limit, we will integrate on all the frequencies
2kB T S(k, ω) = ρ
dω 2π
dω 2π
(k, ω) χρρ
ω
(6.153)
so we find the relation S(k) =
kB T χ(k) ˜ ρ
(6.154)
where the static response function χ(k) ˜ is given by
χ˜ (k) = χρρ (k) =
dω π
(k, ω) χρρ
ω
(6.155)
We get a sum rule for the density correlation functions
dω ˜ Cρρ (k, ω) = kB T χρρ (k) 2π
(6.156)
These sum rules are particularly relevant in the limit of the long wavelength (k → 0).
6.12 Static Response Function and the Verlet Criterion From Eq. (3.103), we know that S(k = 0) = ρkB T KT so Eq. (6.154) implies that χ˜ (k = 0) = ρ 2 KT ; the isothermal compressibility is the response function in the long wavelength limit. For finite k, the functions χ˜ (k) and S(k) are related to the response of the system to a perturbation of the density with wavelength 2π/k. We recall the discussion in Sect. 4.7.3 where we found that 1/S(k) is related to the mechanical stiffness of the liquid. The inverse S(k) measures the softness of the system to the perturbation. An important consequence of this property of S(k) is the so-called Verlet criterion, associated to the freezing of the liquid [14]. Suppose an external periodic potential is applied to the liquid. In a linear response regime, if the periodicity corresponds to the vector k0 of the peak of S(k), as the temperature decreases,
References
219
the peak at k0 increases, and as a consequence, it increases also the softness of the liquid to be modulated by the periodic potential. We recall that a peak at k0 corresponds to some short range oscillation with periodicity 2π/k0 . At enough low temperature, the system freezes spontaneously in a crystal structure since the density wave corresponding to k0 is locked in the system. It was found that a sort of universal criterion is valid for this process and the freezing takes place at S (k0 ) ≈ 2.7.
(6.157)
This is called Verlet criterion [14], and it is theoretically explained by the density theory of freezing originally formulated by Ramakrishnan and Yussouff [15]. We note also that in the opposite limit of high temperature, the divergence of χ˜ (k → 0) and S(k → 0) at the critical point, the so-called critical opalescence, implies that the fluid loses its rigidity under critical conditions.
References 1. Forster, D.: Hydrodynamic Fluctuations, Broken Symmetry and Correlation Functions. Addison Wesley, Reading (1994) 2. Boon, J.P., Yip, S.: Molecular Hydrodynamics. Dover, New York (1991) 3. Berne, B., Pecora, R.: Dynamic Light Scattering, 2nd edn. Wiley, New York (1976) 4. Balucani, U., Zoppi, M.: Dynamics of the Liquid State. Oxford University Press, New York (1994) 5. Onsager, L.: Phys. Rev. 37, 405 (1931) 6. Onsager, L.: Phys. Rev. 38, 2265 (1931) 7. Berne, B.J., Harp, G.D.: Adv. Chem. Phys. 17, 64 (1970) 8. Pathria, R.K.: Statistical Mechanics, 2nd edn. Elsevier, Oxford (1976) 9. Callen, H.B., Welton, T.A.: Phys. Rev. 83, 34 (1951) 10. Kubo, R.: Rep. Prog. Phys. 29, 255 (1966) 11. Ford, G.W.: Contemp. Phys. 58, 244 (2017) 12. Di Castro, C., Raimondi, R.: Statistical Mechanics and Applications in Condensed Matter. Cambridge University Press, Cambridge (2015) 13. van Hove, L.: Phys. Rev. 95, 249 (1954) 14. Hansen, J.P., McDonald, J.R.: Theory of Simple Liquids. Academic, Oxford (2013) 15. Ramakrishnan, T.V., Yussouff, M.: Phys. Rev. B 19, 2775 (1979)
Chapter 7
Dynamics of Liquids
We consider in this chapter the dynamics of the atoms in the liquids. The correlation functions defined in the previous chapter are now applied to the study of the density fluctuations and the diffusion of atoms in a fluid. In particular we consider two limits. With respect to the microscopic time scale, the short time limit corresponds to a situation where particles still do not interact; this is the ballistic regime similar to a dilute gas limit. In this regime, we have to deal mainly with single particle dynamics. In the long time limit and considering length scales greater than the mean free path, we enter in the hydrodynamics limit where the collective properties appear. From an experimental point of view, the hydrodynamic limit corresponds to phenomena observed in the long wavelength limit with neutron or light inelastic scattering. In the hydrodynamic limit, frequent collisions drive the system in a Brownian regime, where memory effects are negligible. There are however phenomena taking place in intermediate regimes where memory effects must be taken into account. The last part of the chapter introduces the formalism to treat with these effects.
7.1 Thermal Motion in Liquids At the microscopic level, even if the system is in equilibrium, the thermal motion of the particles induces continuous movements. While in a diluted gas there are few collisions between the particles, in dense phases, such as a liquid, the atoms can remain confined in a sort of cage constituted by first neighbours. They can vibrate in the cage with frequencies of 1012 ÷1013 Hz with some similarity with the vibrations in solids. The cage of neighbours however is not rigid, since it consists of atoms that in turn move. So after a stay in the cage, the atom can diffuse in the system. The motion of the atoms in the liquid on sufficiently long times consists by oscillations and jumps distributed randomly. It is possible to distinguish single particle motion
© Springer Nature Switzerland AG 2021 P. Gallo, M. Rovere, Physics of Liquid Matter, Soft and Biological Matter, https://doi.org/10.1007/978-3-030-68349-8_7
221
222
7 Dynamics of Liquids
and collective effects. Transmission of sound in liquids and thermal diffusion are related to density correlation effects [1]. In crystals there are well-defined normal modes characterized by precise relations between the oscillation frequency and the wave vector. Scattering experiments determine well-defined peaks with linewidth due to some anharmonic contribution. In the long wavelength limit from the acoustic mode, the sound velocity can be determined. In liquids, dispersion relations are more difficult to extract since there are different combined effects, as mentioned above, they are activated processes, single particle diffusion with frequent collisions and collective fluctuations. It is possible however to approach the problem by starting from the opposite limits of short time and free particle motion and on the other side of long time and collective dynamics. The last is particularly relevant since in the long wavelength limit the collective dynamics shows similarity with the crystal. Well-defined peaks are found in scattering experiments. Well-defined dispersion relation can be also derived with the possibility of determining the sound velocity. Sometimes experiments reveal the residual of the crystalline behaviour in liquid spectrum.
7.2 Brownian Motion and Langevin Equation Motions due to random collisions were observed by the botanist Brown in 1827 in his study of pollen suspended in water. For this reason, we speak of Brownian motion, when studying particles moving subjected to frequent collisions [2]. This phenomenon was studied extensively by Einstein in one of his famous works of 1905 [3]. Einstein considered a model where a colloidal particle is suspended in a medium composed by small molecules. He connected the Brownian motion with the diffusive processes of the colloid. We follow here the approach of Langevin [4]. If a large particle (colloid) moves in a medium under the effect of frequent collisions, the motion can be separated in the three equivalent directions in space. If the particle is large enough, the effect of the medium will be equivalent to a friction force and the equation of motion can be written as x¨ = −ξ x˙
(7.1)
vx (t) = e−ξ t vx (0) .
(7.2)
with solution
Langevin introduces in Eq. (7.1) a random force representing the fast collisions during the motion. The Langevin equation is x¨ + ξ x˙ =
1 Rx (t) . m
(7.3)
7.2 Brownian Motion and Langevin Equation
223
We expect that the motion is random so x 2 = 0 ,
x = 0
(7.4)
where the averages are time averages. For the random forces, we have analogously
xRx = 0 .
(7.5)
By multiplying Eq. (7.3) with x 1 Rx x m
(7.6)
d (xx) ˙ − x˙ 2 , dt
(7.7)
xx ¨ + ξ xx ˙ = we note that xx ¨ = then we can write
d 1 (xx) ˙ − x˙ 2 + ξ xx ˙ = Rx x . dt m
(7.8)
We can average on the time, and by taking into account Eq. (7.5) and the equipartition of the energy k T B x˙ 2 = , m
(7.9)
kB T d
xx ˙ + ξ xx ˙ = . dt m
(7.10)
we obtain
The solution of this equation is
xx ˙ =
kB T + Ae−ξ t . mξ
(7.11)
By imposing the initial condition xx ˙ = 0 at t = 0, we get
xx ˙ =
kB T 1 − e−ξ t . mξ
The first member can be substituted to get
(7.12)
224
7 Dynamics of Liquids
d 2 2kB T x = 1 − e−ξ t dt mξ
(7.13)
2k T 1 1 B x2 = t + e−ξ t − . mξ ξ ξ
(7.14)
2k T B lim x 2 = t. t→∞ mξ
(7.15)
and finally
It is easy to see that
This result is valid in all the directions; indeed, in three dimensions, the mean square displacement (MSD) r 2 (t) for long time is
6k T B t. r 2 (t) = mξ
(7.16)
It is usual to introduce the diffusion coefficient D as defined by Einstein D=
kB T mξ
(7.17)
so finally we have the Einstein relation
r 2 (t) = 6Dt .
(7.18)
In the other limit of t → 0, we get from Eq. (7.14)
3kB T 2 t , r 2 (t → 0) → m
(7.19)
this limit is called ballistic. At short time, the friction coefficient does not appear and the collisions still do not affect the motion; therefore, the result is equivalent to the Newtonian motion of a free particle. For long time, the Brownian limit, due to the large number of collisions, the motion is completely random, and the MSD is proportional to the time.
7.4 Limit of the Dilute Gas
225
7.3 Diffusion and Self Van Hove Function We consider the self Van Hove function and calculate its second moment " !
1 dr r 2 Gs (r, t) = dr r 2 δ (r + r i (0) − r i (t)) N i " !
1 2 = dr r δ (r + r i (0) − r i (t)) N i " ! 1 = [r i (0) − r i (t)]2 , N
(7.20)
i
the last term is the MSD, and we have
Δr 2 (t) = dr
r 2 Gs (r, t) .
(7.21)
7.4 Limit of the Dilute Gas We calculate now the self-correlation function of a system of non-interacting particles. We consider the self-intermediate scattering function as defined in Eq. (6.142) Fs (k, t) = e−ik·r(t) eik·r(0) .
(7.22)
In classical mechanics, if a particle is not under the action of a force r(t) = r(0)+vt so − k · [r(t) − r(0)] = −k · vt
(7.23)
Fs (k, t) = e−ik·vt .
(7.24)
and we have
The average must be performed with the Maxwell-Boltzmann distribution function ρMB (v) =
m 2π kB T
3/2
e−βmv
2 /2
,
(7.25)
226
7 Dynamics of Liquids
then
Fs (k, t) =
e−ik·vt
dv
m 2π kB T
3/2
e−βmv
2 /2
.
(7.26)
By taking into account that 2
dvx
e−ikx vx t e−βmvx /2 = 2
2kB T √ −kx2 t 2 /2βm πe , m
(7.27)
we get Fs (k, t) = e−k
2 t 2 /2βm
(7.28)
.
From it we obtain Ss (k, ω) =
2π m kB T k 2
1/2
e−mω
2 /2k T k 2 B
(7.29)
.
This function is Gaussian with a half width determined by k 2 (kB T /m). Also the self Van Hove function is a Gaussian Gs (r, t) =
m 2π kB T t 2
3/2
e−mr
2 /2k T t 2 B
.
(7.30)
At increasing time, the Gaussian function broadens. From Eq. (7.21), we have Δr 2 (t) = dr
r 2 Gs (r, t) = 4π
∞
drr 4 Gs (r, t) ,
(7.31)
0
the integral can be performed with the use of 1 √ 2π σ
∞
−∞
dx
x 4 e−x
2 /2σ 2
= 3σ 4
(7.32)
and we obtain
3k T B Δr 2 (t) = t2 , m
(7.33)
which is the ballistic limit (7.19). For t → 0, we expect that all the systems will be in the ballistic regime since at short times the particles do not collide.
7.5 Short Time Expansion of the Self-Intermediate Scattering Function
227
7.5 Short Time Expansion of the Self-Intermediate Scattering Function With the use of Eq. (6.36), we can expand the self-intermediate scattering function in the Taylor series at short time as 1 1 Fs (k, t) = ω˜ s(0) − ω˜ s(2) t 2 + ω˜ s(4) t 4 + · · · 2 4!
(7.34)
where the momenta of order 2n can be derived from Eq. (6.37) or Eq. (6.39). It is (0) easy to see that ω˜ s = 1. For the second moment ω˜ s(2) =
d −ik·r(t) d ik·r(t)
, e e 0 dt 0 dt
(7.35)
this gives ω˜ s(2) = (−ik · r˙ (0)) (+ik · r˙ (0))
(7.36)
then we assume k along the direction z, and we have ω˜ s(2) = k 2 (vz (0))2
(7.37)
so we get ω˜ s(2) = k 2
kB T . m
(7.38)
The zero-th and the second momenta can be obtained also by expanding the expression (7.28) for short time. By combining the result (7.38) with Eq. (6.144), we have the second moment of the Ss (k, ω)
dω 2 kB T ω Ss (k, ω) = k 2 . (7.39) 2π m A longer calculations is required for the fourth moment ω˜ s(4) = ρ¨s (−k, 0) ρ¨s (k, 0) ;
(7.40)
by taking the derivative of the two members of Eq. (7.35), we have ω˜ s(4) = (k · r˙ (0))4 + (k · r¨ (0))2 ,
(7.41)
228
7 Dynamics of Liquids
and now the first term is more easy to evaluate as k 4 v 4 (0) = 3(kB T /m)2 k 4 .
(7.42)
The second term contains v˙ that it is related to the forces between the atoms F = −∇U with U the interatomic potential. As before by taking k along the direction z, we have
1 (k · v˙ )2 = k 2 2 (∇U )z (∇U )z . m
(7.43)
Now the average of the product of the force gradients is 1 1 m2 Z
dze−βU
∂U ∂U ; ∂z ∂z
(7.44)
this can be rewritten as 1 1 1 − β m2 Z
∂U dz ∂z
∂ −βU e , ∂z
(7.45)
and finally with a partial integration, we get 1 1 βm2 Z
2 ∂ U 1 ∂ 2 U −βU . = dz 2 e ∂z βm2 ∂z2
(7.46)
We recall that in the liquid, we assume U as a sum of pair potential u(r), and with a procedure similar to the one used to derive Eq. (3.23) in Sect. 3.4, we have 2
1 ∂ U ρkB T = dr∇ 2 u(r)g(r) ; βm2 ∂z2 3m2
(7.47)
this term represents the contribution of the interaction, and it includes the radial distribution function and the pair potential [5, 6]. We can define the frequency Ω02 =
ρ 3m
dr g (r) ∇ 2 u (r) ;
(7.48)
this is called Einstein frequency. By considering an atom in the cage of the nearest neighbours, this would be the frequency of oscillations around the minimum of the potential, in analogy with the solid state. (4) The fourth moment ω˜ s is given by ω˜ s(4)
kB T 2 kB T 2 2 k 3 k + Ω0 . = m m
(7.49)
7.6 Correlation Functions of the Currents
229
Also for the dynamical structure factor S (k, ω), only the even moments ω(2n) are different from zero. The zero-th moment is
dω S (k, ω) = S (k) . (7.50) ω˜ (0) = 2π The second moment is
ω˜ (2) =
2 dω 2 d F (k, t) . ω S (k, ω) = − 2π dt 2 t=0
(7.51)
To find it, we have to calculate 1 1
δ ρ¨k (t)δρ−k (0) = − δ ρ˙k (t)δ ρ˙−k (0) . N N
(7.52)
At t = 0, this gives 2 d F (k, t) − = (−ik · r˙ (0)) (+ik · r˙ (0)) dt 2 t=0 ! " −ik·(r (0)−r (0)) i j −ik · r˙ i (0) − r˙ j (0) e + ; i=j
(7.53) the average of the term with i=j goes to zero so the second moment has the same value obtained for the self-function
kB T dω 2 ω S (k, ω) = k 2 . (7.54) ω˜ (2) = 2π m The fourth moment has a more complex expression, and it is related to the interatomic forces as in the case of the self-function.
7.6 Correlation Functions of the Currents The density fluctuations imply that currents of particles are activated in the system. The microscopic current density is defined as j (r, t) =
N i=1
v i (t)δ (r − r i (t)) .
(7.55)
230
7 Dynamics of Liquids
The particle density and the current are connected by the microscopic continuity equation ∂ρ(r, t) = −∇ · j (r, t) . ∂t
(7.56)
In k space, this equation becomes ∂ρ(k, t) = −k · j (k, t) . ∂t
(7.57)
For a homogeneous and isotropic system, the current can be separated in a longitudinal and a transverse component j (r, t) = j l (r, t) + j t (r, t) ;
(7.58)
∇ · j t (r, t) = 0 ,
(7.59)
∇ × j l (r, t) = 0 .
(7.60)
for them
Indeed Eq. (7.57) can be written as ∂ρ (k, t) = −k · j l (k, t) . ∂t
(7.61)
The correlation function with α, β = x, y, z Cjα jβ |r − r |, t = j α (r, t)j β (r , 0)
(7.62)
can be separated in a longitudinal and a transverse parts. In k, t space Cjα jβ (k, t) =
kα kβ kα kβ Ct (k, t) , C (k, t) + δ − l αβ k2 k2
(7.63)
with Cl (k, t) = j l (k, t)j l (−k, 0)
(7.64)
Ct (k, t) = j t (k, t)j t (−k, 0) .
(7.65)
and
From the continuity equation, it is found that
7.7 The Hydrodynamic Limit
231
Cl (k, ω) =
ω2 S(k, ω) . k2
(7.66)
7.7 The Hydrodynamic Limit The thermodynamic equilibrium in dense fluids is determined by the frequent collisions between the particles. The collision time is generally of the order of τ ≈ 10−13 ÷ 10−10 s. The mean free path λf between collisions can be approximately estimated in the framework of the kinetic theory spherical particles √ by assuming
2π σ 2 ρ . with diameter σ and a density ρ as λf = 1/ Now let us consider a perturbation of frequency ω travelling with a wave vector k. Under the conditions ωτ λf . The relaxation to the equilibrium takes place with diffusion from regions (+) to regions (−) to establish the equilibrium density. In the hydrodynamic limit, the observables like the density are assumed to be averaged in a volume and a fraction of time so small to preserve the local properties but large enough to make the average meaningful. In this way the coarse-grained quantities substitute the microscopic associated variables; for instance, the density is
1 1 τ0 ρ(r, ¯ t) = dV dt ρ(r, t ) (7.68) ΔV ΔV τ0 0 where τ0 >> τ .
232
7 Dynamics of Liquids
Fig. 7.1 Fluctuation of the density around the average value ρ0 . In the hydrodynamic limit, λ is much greater than the mean free path
From now on we substitute the density and other microscopic variables with the corresponding coarse-grained quantities.
7.8 Diffusion in the Hydrodynamic Limit Now let us consider specifically how we can treat the diffusion in the hydrodynamic limit, determined by the conditions (7.67). The problem can be treated on the basis of two types of equations. The first types are exact equations at the microscopic level, and they typically describe conservation principles. In this case, we consider the equation of continuity (7.56) where now the quantities are the hydrodynamic variables. The second type of equations, necessary to close the problem, are phenomenological; they are called constitutive equations. On the basis of the the principle of Onsager’s regression [8, 9], already mentioned in the previous chapter, we can assume that in the hydrodynamic limit, the quantities relax to equilibrium (regression) as the corresponding macroscopic quantities. This hypothesis is valid in the linear response regime. In the case of diffusion, it is used the Fick’s law (1865), which describes the macroscopic diffusion in a fluid. The empirical law connects the current of the particles to the gradient of the density through a linear equation j (r, t) = −D∇ρ(r, t) ,
(7.69)
where D is the diffusion coefficient, here introduced as a phenomenological parameter. The diffusion in a fluid is related to its viscosity. The viscosity can be introduced in a simple way. The motion of a fluid along a wall will be affected by a friction
7.8 Diffusion in the Hydrodynamic Limit
233
Fig. 7.2 Velocity of a fluid moving along a wall
force that induces a gradient in the velocity field as represented in Fig. 7.2. The tail viscosity η can be defined with the formula Fxy ∂vx =η ; A ∂y
(7.70)
on the left, the force is divided for the area so the viscosity is measured in Pa·s or in Poise (P), 1 P = 0.1 Pa·s. In a liquid, η is of the order of 10−4 ÷ 10−3 Pa·s. In the framework of the Brownian motion, it was formulated a relation between the the diffusion coefficient and the viscosity of the fluid, the Stokes–Einstein law [3]. The theory is based on a model where spherical molecules of radius a diffuse in a solvent. The force acting on a particle moving in a fluid was determined by Stokes [10] with the assumption of a sphere of radius a much larger than the interatomic distances in the fluid and a laminar flow. The viscous force is given by F = 6π ηav. On the other hand, the friction coefficient ξ that appears in the Brownian motion, Eq. (7.1), and in the Langevin equation (7.3) is related to the force F by mξ v = F . The resulting mξ = 6π η can be substituted in Eq. (7.17) to get the Stokes–Einstein relation D=
kB T . 6π aη
(7.71)
The continuity equation ∂ ρs (r, t) + ∇j (r, t) = 0 ∂t must be combined with the constitutive equation (7.69), and we have
(7.72)
234
7 Dynamics of Liquids
∂ ρs (r, t) = D∇ 2 ρs (r, t) , ∂t
(7.73)
where we added the suffix s to ρ to make clear that we treat now the self-correlation of the particles. In k space ∂ ρs (k, t) = −Dk 2 ρs (k, t) , ∂t
(7.74)
ρs (k, t) = ρs (k, 0) e−Dk t .
(7.75)
the solution is 2
For the self-intermediate scattering function from Eq. (7.74), we have ∂ Fs (k, t) = −Dk 2 Fs (k, t) , ∂t
(7.76)
with the solution Fs (k, t) = e−Dk
2t
(7.77)
so Fs (k, t) decays exponentially with a relaxation time τ = 1/Dk 2 ; this is called a behaviour à la Debye. We observe that τ is inversely proportional to the diffusion coefficient and it becomes very large in the long wavelength limit of k → 0. The Laplace transform of Eq. (7.77) is F˜s (k, z) =
1 , −iz + Dk 2
(7.78)
and it is characterized by the presence of the imaginary pole z = −iDk 2 ; this type of singularity is called diffusive pole. From Eq. (7.76), the self Van Hove function is a Gaussian function Gs (r, t) =
1 4π Dt
3/2
e−r
2 /4Dt
.
(7.79)
From the previous formulas, the self-dynamic structure factor can be derived as Ss (k, ω) =
2Dk 2 2 ; ω2 + Dk 2
(7.80)
this is a Lorentzian function of width 2Dk 2 , as represented in Fig. 7.3. Information on the diffusion can be extracted from incoherent neutron scattering experiments. The measures must be carried out at a low angle, k → 0, in the limit
7.8 Diffusion in the Hydrodynamic Limit Fig. 7.3 Self-dynamic scattering function in the hydrodynamic limit as function of ω for given k
235
Ss (k, ω)
for given k
2Dk2
ω
0
of the validity of hydrodynamics where the peak is well defined. From Eq. (7.80), we get the formal result D=
Ss (k, ω) 1 lim ω2 lim 2 ω→0 k→0 k2
(7.81)
where the two limits must be performed in the given order. Equation (7.81) is the first Green–Kubo relation that we encounter. The Green–Kubo relations [11, 12] connect a transport coefficient to a function directly determined by the microscopic behaviour, typically a correlation function. In this respect the Green–Kubo relations are a manifestation of the fluctuation-dissipation theorem. From Eq. (7.79), we can calculate the MSD
Δr 2 (t) = 4π
∞
dr 0
= 2π
1 4π DT t
r 2 Gs (r, t) 3/2
(7.82)
+∞
−∞
dr
r 4 e−r
dr
r 4 e−r
2 /4Dt
then
Δr 2 (t) =
1 1 √ √ 2Dt 2π 2Dt 1 = 3 (2Dt)2 2Dt
+∞ −∞
2 /4Dt
(7.83)
where the Eq. (7.32) has been used, so finally we get the Einstein formula
Δr 2 (t) = 6Dt.
(7.84)
236
7 Dynamics of Liquids
Brownian regime
Ballistic regime t
Fig. 7.4 Mean square displacement (MSD) as function of time. We note the change of regime from ballistic at short time to Brownian at long time
This is the same result obtained for the Brownian motion in the long time limit. The diffusion coefficient can be derived as 1 2 Δr (t) . t→∞ 6t
D = lim
(7.85)
This formula is useful to extract D from molecular dynamics simulation. The typical qualitative behaviour of Δr 2 (t) calculated in simulation for a liquid in normal conditions is represented in Fig. 7.4. After the initial ballistic regime, the particles diffuse 2 with a random walk and enter in the Brownian regime. From the slope of Δr (t) at long time, it is possible to extract the value of D.
7.9 Velocity Correlation Function The velocity correlation function (VCF) is defined as Z(t) = v(t) · v(0) .
(7.86)
This function has the property Z(t) = Z(−t). The VCF is connected to the MSD. We can write
t
t dt1 dt2 v (t1 ) · v (t2 ) (7.87) (r(t) − r(0))2 = 0
then
0
7.9 Velocity Correlation Function
Δr (t) = 2
237
t 0
=
t
dt1 dt2
v(t1 ) · v(t2 )
dt1 dt2
v(t1 − t2 ) · v(0) .
(7.88)
0
t 0
t
0
In the integral, we define τ = t2 − t1
t
t
dt1 0
t
dt2 Z (t2 − t1 ) =
dt1
0
t−t1 −t1
0
dτ Z (τ ) ;
(7.89)
in the integral on the right side, we can exchange the integration on τ and t1
t
dt1 0
= =
−t1 0
−t
t−t1
0
−t
dτ
(7.90)
dτ Z (τ )
t
−τ
t
dt1 Z (τ ) +
t−τ
dτ
0
dτ Z (τ ) (t + τ ) +
t
dt1 Z (τ ) 0
dτ Z (τ ) (t − τ ) ,
0
then we change τ with −τ in the first integral, and we have
t Δr 2 (t) = 2 dτ Z (τ ) (t − τ ) 0
0
τ
. dτ Z (τ ) t − t
∞
t
= 2t
(7.91)
By recalling Eq. (7.85), we get 1 D= 3
dtZ(t) ;
(7.92)
0
this is another Green–Kubo formula connecting a macroscopic transport parameter to a correlation function. We can define the frequency spectrum of the VCF Z˜ (ω) =
dteiωt Z (t) ;
(7.93)
this is related to the Ss (k, ω) through Ss (k, ω) Z˜ (ω) = 3ω2 lim . k→0 k2
(7.94)
238
7 Dynamics of Liquids
In the limit t → 0 Z(0) = v(0)2 , so Z(0) =
3kB T . m
(7.95)
The velocity correlation function can be expanded as 1 (2) 2 Z (t) = v(0)2 − ω˜ vv t ; 2
(7.96)
∇U ∇U , m2
(7.97)
from Eq. (6.36), we have (2) ω˜ vv = ˙v v˙ =
and following the same calculation done before, see Eq. (7.48), we get Z (t) =
3kB T m
1 − Ω02
t2 2
(7.98)
where Ω0 is the Einstein frequency defined in (7.48). In a very diluted fluid, we expect that the particles move with a Brownian dynamics and from Eq. (7.2) Z(t) =
3kB T −ξ t e m
(7.99)
where ξ is the friction coefficient in Eq. (7.1). In Fig. 7.5, it is shown the VCF calculated for a Lennard-Jones system adapted to represent liquid nitrogen [13]. At high temperatures in the low density regime, Z(t) follows approximately Eq. (7.99). This indicates that the molecules proceed with few collisions. At lower temperature and high density instead, the system is in a regime with an high number of collisions. The Z(t) becomes negative since the frequent collisions change the direction of the particles with a counter flow. This behaviour evidences the presence of a cage effect in the liquid. For a time interval, the diffusion is restricted by the cage of the nearest neighbours. This sort of solidstate effect is more evident in liquid metals than in Lennard-Jones fluids due to the differences in the repulsion part of the effective potential [14]. The solid-state effects disappear for longer times after that the atom has undergone a significant number of collisions. Therefore the particles enter at long time in the Brownian regime, as already seen in the MSD approaching the linear time dependence. The long-time decay of Z(t), however, is not exponential at enough high density. This was discovered for the first time by Alder and Wainwright [15] in their simulation of hard disks and hard spheres. The VCF decays slowly with a power law t −d/2 where d is the dimensionality of the space. We will discuss later this point.
7.10 Liquid Dynamics in the Hydrodynamic Limit
239
1.0 ρ=0.62 g/cm3 T=80 K
0.8
ρ=0.85 g/cm3 T=66 K
Z(t)/Z(0)
0.6 0.4 0.2 0.0 -0.2 0.0
0.2
0.4
0.6 t
0.8
1.0
*
Fig. 7.5 Velocity correlation function for liquid nitrogen simulated with a Lennard-Jones potential. Z(t) is normalized with Z(0) = 3kB T /m. Redrawn from [13]
7.10 Liquid Dynamics in the Hydrodynamic Limit As said before in the hydrodynamic limit, we can combine two different types of equation: • Conservation laws that are valid also for the microscopic variables • Constitutive equations that are usually empirical and derived for macroscopic fluidodynamics The constitutive equations connect currents (or momenta) to fields (or forces); they can contain reactive and dissipative terms. The latter are responsible for irreversible processes. In a monoatomic liquid in absence of external forces, we expect that the conserved quantities are as follows: (a) The density of particles: ρ (r, t) (b) The energy density: (r, t) (c) The quantity of motion: π (r, t) = mρ (r, t) v (r, t) All these quantities are defined in terms of the hydrodynamic limit, and they fluctuate around an average value. For each of them, it is defined a current, as we have done in Eq. (7.73). The equation for the conservation of the density is ∂ ρ (r, t) + ∇ · j (r, t) = 0 ∂t where j (r, t) = π (r, t) /m.
(7.100)
240
7 Dynamics of Liquids
Fig. 7.6 Pressure on an element of fluid along the x-axis
For the conservation of the energy, it is introduced the energy current ∂ (r, t) + ∇ · j (r, t) = 0 , ∂t
(7.101)
where j (r, t) is the flux of energy and heat per unit area and unit time. The conservation law for the quantity of motion π (r, t) is equivalent to a Newton equation, where the intrinsic forces acting on a portion of fluid are divided in conservative forces due to the pressure and viscous forces. The term due to the pressure is easily derived. We take a small cubic element of the fluid, and we look along the x-direction, see Fig. 7.6, • the force on the face ΔyΔz at x is pΔyΔz, ∂p • on the face in x + Δx the force is − p + ∂x Δx ΔyΔz ∂p so the net force on the cube in the x axis will be Fx = − ∂x ΔxΔyΔz. By repeating the derivation on the other directions, we get that the force per unit volume on the cube is f = −∇p. The Newton law can be written as
π˙ (r, t) = −∇p (r, t) + f viscous
(7.102)
where f viscous is the density of the viscous force. In our system, we approximate the left side of Eq. (7.102) by linearizing the product ρ (r, t) v (r, t), π (r, t) = ρ (r, t) v (r, t)) = [ρ + δρ (r, t)] v (r, t) ≈ ρv (r, t)
(7.103)
where it is assumed a small deviation from equilibrium with the system at rest with density ρ. In this way the derivative on the left side of Eq. (7.102) becomes π˙ ≈
d ∂v + ρ (v · ∇) v. (ρv) = ρ dt ∂t
(7.104)
7.10 Liquid Dynamics in the Hydrodynamic Limit
241
In the Newton equation (7.102), we must consider now the viscous forces acting on an element of the fluid [16, 17]. A slab of the fluid that moves along the rest of the system experiences a stress (force per unit area) applied on the surfaces. The stress could be normal or tangential to the surfaces since the forces acting on a portion of fluid can be compression forces or shear forces, as represented in Fig. 7.7. By assuming a cubic element of the fluid, as we have done in deriving Eq. (7.102), the stress acting on each surface can be indicated as σij . The first index specifies the orientation of the surface upon which the stress is acting; this orientation is indicated by the vector perpendicular to the surface. The second index specifies the direction of the stress. The σij acting on each surface. are illustrated in Fig. 7.8. The stress tensor σ¯¯ is symmetric σij = σj i . The diagonal terms σii represent the normal stress, and the off diagonal represents the shear stress. The first constitutive equation is obtained by considering the contribution that comes from the shear stress, and it is a generalization of the simple argument that we used to introduce the shear viscosity. The Eq. (7.70) can be generalized, and for an incompressible fluid, the offdiagonal shear stress is given by
Fig. 7.7 Compression force (on the left) and shear force (on the right) on element of fluid
Fig. 7.8 Stress on the different surfaces of an element of fluid
242
7 Dynamics of Liquids
∂vj ∂vi =η + ∂xj ∂xi
σij
.
(7.105)
A second term of the stress tensor comes from an effect of compression by introducing the second coefficient of viscosity η . This contribution enters in the diagonal terms σii , and it is given by η ∇ · v .
(7.106)
The contribution of the pressure can be included in the diagonal term so that σij = −pδij + σij + η ∇ · vδij .
(7.107)
It is usual to introduce the bulk viscosity [6, 18] defined as ζ =
2 η + η . 3
(7.108)
Finally the components of the stress tensor can be written as [16, 18] σij = −pδij + η
∂vj ∂vi 2 + − ∇ · vδij ∂xj ∂xi 3
+ ζ ∇ · vδij .
(7.109)
The conservation law for the quantity of motion now becomes mρ
∂v + v · ∇v = ∇ · σ¯¯ . ∂t
(7.110)
By collecting all the terms, we can write the final equation as
∂v 1 mρ + v · ∇v = −∇p + η + ζ ∇ (∇ · v) + η∇ 2 v . ∂t 3
(7.111)
Now the current is j (r, t) ≈ ρv (r, t) where we use the linear approximation defined in Eq. (7.103). We neglect the terms with v 2 , and we obtain the linearized Navier-Stokes equation ∇p 1 ∂j =− + ∂t m mρ
1 η 2 η + ζ ∇ (∇ · j ) + ∇ j. 3 mρ
(7.112)
In order to consider the thermal effects, we start from the equation of the conservation of the energy, Eq. (7.101). In the hydrodynamic limit, we assume to linearize the equation as done before so the energy current contains a flux of energy plus an heat current
7.10 Liquid Dynamics in the Hydrodynamic Limit
243
j (r, t) = ( + p)v (r, t) + j Q (r, t)
(7.113)
where and p are the value of the energy density and of the pressure at equilibrium and j Q is the heat current due to gradient of temperature. For j Q it is valid the Fourier constitutive equation j Q (r, t) = −λT ∇T (r, t)
(7.114)
where λT is the thermal conductivity. In an homogeneous and isotropic material, λT is a scalar quantity. So the macroscopic constitutive equation for j becomes j (r, t) = ( + p)v (r, t) − λT ∇T (r, t) .
(7.115)
The Eq. (7.115) must be combined with the Navier-Stokes equation.
7.10.1 Transverse Current The current can be separated in longitudinal and transverse components as shown in Eq. (7.58). For the transverse component in particular ∇ · j t = 0, so for it, the Navier-Stokes equation becomes ∂j t = ν∇ 2 j t ∂t
(7.116)
where we define the kinematic viscosity ν = η/mρ. In k space ∂ j t (k, t) + νj t (k, t) = 0, ∂t
(7.117)
and we can derive the equation for the correlation function of the transverse currents (7.65) ∂ Ct (k, t) + νk 2 Ct (k, t) = 0 . ∂t
(7.118)
Ct (k, t) = Ct (k, 0) e−νk t ;
(7.119)
The solution is 2
in the long time limit, the fluctuations of the transverse current decay exponentially. In the limit k → 0, there is not more a distinction between longitudinal and transverse mode, so in the hydrodynamics regime, Ct (k, 0) = Cl (k, 0). Then with the use of Eq. (7.66), we get
244
7 Dynamics of Liquids
Cl (k, 0) =
=
dω Cl (k, ω) 2π
(7.120)
dω ω2 1 kB T S (k, ω) = 2 ω2 = . 2 2π k m k
Therefore the solution (7.119) is kB T −νk 2 t e . m
Ct (k, t) =
(7.121)
In Laplace transform, Eq. (7.119) can be written as C˜ t (k, z) =
kB T /m −iz + νk 2
(7.122)
with the presence of a diffusive pole. In (k, ω) space, the real part of the spectrum is Re [Ct ] (k, ω) =
νk 2 2kB T . m ω2 + (νk 2 )2
(7.123)
The viscosity can be formally related to Ct (k, ω) as η m ω2 = lim lim 2 Re [Ct ] (k, ω) . mρ 2kB T ω→0 k→0 k
(7.124)
This is another Green–Kubo formula.
7.10.2 Equations Under Isotherm Conditions, Longitudinal Current and Sound Waves We consider now the linearized Navier-Stokes equation (7.112) for the longitudinal current under isothermal conditions [6]. This implies that we neglect the heat diffusion. We combine Eq. (7.112) with the continuity equation that can be written as ∂δρ(r, t) = −∇ · j l (r, t) ∂t
(7.125)
where δρ(r, t) is the fluctuation of the density with respect to the average density ρ. We apply the divergence operator to both sides of Eq. (7.112)
7.10 Liquid Dynamics in the Hydrodynamic Limit
∇ 2p 1 ∂j l =− + ∇· ∂t m mρ
245
1 η η + ζ ∇ · ∇(∇ · j l ) + ∇ · ∇ 2j l ; 3 mρ
with the use of the continuity equation, we have ∂ 2 δρ ∇ 2p 1 − 2 =− − m mρ ∂t
1 ∂δρ η 2 ∂δρ η + ζ ∇2 − ∇ 3 ∂t mρ ∂t
then −
1 ∇ 2p ∂ 2 δρ − = − m mρ ∂t 2
4 ∂δρ η + ζ ∇2 3 ∂t
In the hydrodynamic limit, the thermodynamic is valid, and we can use the relation δp(r, t) =
∂p ∂ρ
δρ(r, t). T
and we can substitute ∇ 2 p = ∇ 2 δp and obtain an equation for δρ ∂2 ∂ 2 1 4 1 ∂p 2 − 2+ η+ζ ∇ + ∇ δρ(r, t) = 0 mρ 3 ∂t m ∂ρ T ∂t
(7.126)
In (k, ω) space, this equation is 4 1 1 ∂p ω2 + i η + ζ ωk 2 − k 2 δρ(k, ω) = 0 ρm 3 m ∂ρ T
(7.127)
The eigenvalues satisfy ω2 + iΓη k 2 ω − cT2 k 2 = 0
(7.128)
where cT =
1 m
∂p ∂ρ
(7.129) T
is the isothermal sound velocity, and we have defined Γη =
1 ρm
4 η+ζ . 3
(7.130)
In the long wavelength limit, we get two damped acoustic modes with frequencies
246
7 Dynamics of Liquids
1 ω = ±cT k − i Γη k 2 . 2
(7.131)
The acoustic waves propagate in the liquid, and they are damped from viscosity effects that disappear in the long wavelength limit. We have four modes, two transverse and two longitudinal (acoustic) modes. This number corresponds to the four variables, the number of particles and the three components of the momentum.
7.10.3 Longitudinal Current in Presence of Thermal Diffusion and Brillouin Scattering We consider now the effects of the heat transport. We have to recall the equation of the conservation of the energy, Eq. (7.101), and the macroscopic constitutive equation for j , Eq. (7.115). In order to calculate the longitudinal current, the previous equation must be combined with the linearize Navier-Stokes equation (7.112). The coupling of the density fluctuations with the thermal dissipation modifies the results obtained under isothermal conditions [1, 7, 19]. Instead of the isothermal sound velocity, now the adiabatic sound velocity enters in the solutions cs =
γ m
∂p ∂ρ
(7.132) S
where γ is the ratio between the specific heats γ = cp /cv . Other quantities to define are the coefficient of thermal damping Γλ =
λT cv
(7.133)
λT . ρcp
(7.134)
and the coefficient of thermal diffusion DT =
Two acoustic modes are found with complex frequencies 1 ω = ±cs k − i Γ k 2 2
(7.135)
where Γ is the sound attenuation coefficient Γ = Γη + DT (γ − 1) .
(7.136)
7.10 Liquid Dynamics in the Hydrodynamic Limit
247
Due to the thermal diffusion, it appears now also a third mode at zero real frequency ω = −iDT k 2
(7.137)
that represents a damped diffusive thermal mode. From the correlation function of the longitudinal currents, it is possible to derive the dynamical structure factor in the hydrodynamic limit k → 0. It is composed by three terms S(k, ω) γ −1 2DT k 2 = (7.138) S(k) 2π γ ω2 + DT k 2 2 Γ k2 1 Γ k2 + + 2 . 2π γ (ω + cs k)2 + Γ k 2 2 (ω − cs k)2 + Γ k 2 The function is characterized by three peaks. The determination and resolution of the peaks require an experimental technique able to explore the range at very long wavelength. This technique is called Brillouin scattering. It is realized with light scattering or low-angle neutron scattering [20]. In Fig. 7.9, it is reported the Brillouin scattering obtained in experiments on liquid argon along the vapour pressure equilibrium curve [21]. The system was excited with a laser light of 5145 Å. The central peak is due to the heat diffusion, and it is called Rayleigh peak. Its width is given by 2DT k 2 . The other two symmetrical peaks, Stokes and anti-Stokes,
1
Liquid Argon T=84.97 K
Intensity
0.8
0.6
2.883 GHz
2.883 GHz
0.4
0.2
0
–3
–2
–1
0
1
2
3
Frequency shift (GHz)
Fig. 7.9 Brillouin scattering on liquid argon at ≈85 K, obtained with laser light. Figure adapted from [21] with permission. Copyright by American Physical Society
248
7 Dynamics of Liquids
are Lorentzian functions centred at ω = ±cs k; they are called Brillouin peaks. Their width is given by 2Γ k 2 . In this case, the shift is of ±2.883 GHz for the anti-Stokes and Stokes scattering, respectively. Various sum rules can be derived. The integral of the Rayleigh peak has the value 1 S(k). Then an important IR = γ γ−1 S(k). For the Brillouin peaks, we have IB = 2γ sum rule is IR + 2IB = S(k),
(7.139)
while the ratio of the integrals, called Landau–Placzek ratio, is IR =γ −1. 2IB
(7.140)
For liquid Ar at T = 85 K, it is γ = 2.19, and in the experiment, the Landau– Placzek ratio results to be ≈1.2 in good agreement with the prediction (7.140). The sound velocity is found cs = 849.6 m/s.
7.11 Different Regimes for the Liquid Dynamics: The De Gennes Narrowing In the dynamical behaviour of liquids, we distinguished the two regimes: 1. For kλf >> 1 and short time, we are in the limit of free particles; a particle does not interact with the others. The diffusion is in the ballistic regime. The Ss (k, ω) is a Gaussian function with a width determined by (kB T /m)1/2 k. 2. In the hydrodynamic regime, the diffusion is determined by the Fick’s law. It is possible to observe collective phenomena that produce acoustic modes damped by viscous effects and heat dispersion. The heat diffusion gives rise to a mode at ω = 0. These modes determine a dynamical structure factor with three peaks. The intermediate case when 1/k approaches the interatomic distance corresponds to more difficult situations to study. Starting from the long wavelength limit, at increasing k, it is observed a damping of the side peaks of the dynamical structure factor corresponding to the acoustic modes with an increase of the central peak. It is observed, however, that if k0 is the position of the peak of the structure factor, for k → k0 , the central peak narrows. This effect is called De Gennes narrowing [5]. The value of k0 is related to the periodicity of the radial distribution function, as explained in Chap. 3. The narrowing is interpreted as a major effect of the short range order. On this length scale, the atoms feel the nearest neighbour shell, and they are trapped inside the nearest neighbour cage with a reduction of the diffusion. The cage effect will play a major role in supercooled liquid, as we will discuss in the next chapter.
7.12 Introduction of Memory Effects
249
Generally the intermediate regimes are more difficult to characterize. If we consider the memory effects, they are not present in the ballistic limit. In the hydrodynamic limit, the particles diffuse and undergo a large number of collisions so their motion is of Brownian type. In the intermediate regime or under particular thermodynamic conditions, we can expect that memory effects must be taken into account. For this reason now, we consider as these effects can be introduced in the theory.
7.12 Introduction of Memory Effects 7.12.1 The Langevin Equation and Memory Effects Let us consider again the Langevin equation. We are interested only to the time evolution so we simplify the notation indicating only the dependence on the time dv R(t) = −ξ v(t) + dt m
(7.141)
R(t) = 0,
(7.142)
For the random forces,
and we define
R(t)R(t ) = f (t − t ) m2
(7.143)
where f (t) is an even function of time. Moreover
v(t) = 0
(7.144)
v(t)R(t) = 0.
(7.145)
We want to find the solution of Eq. (7.141). We write v(t) as v(t) = u(t)w(t)
(7.146)
so the equation becomes w
du dw R +u + ξ uw = dt dt m
(7.147)
250
7 Dynamics of Liquids
then du R dw + ξw + w = . u dt dt m
(7.148)
dw + ξw = 0 ; dt
(7.149)
w = Ae−ξ t .
(7.150)
1 R du = eξ t ; dt A m
(7.151)
Now we can impose
the solution is easy
In this way Eq. (7.148) becomes
this equation can be integrated as u(t) =
1 A
t
dτ
eξ τ
0
R(τ ) + B. m
(7.152)
By substituting in Eq. (7.146), we get v(t) = v 0 e−ξ t + e−ξ t
t
dτ
eξ τ
0
R(τ ) m
(7.153)
where we assume v(t = 0) = v 0 . The correlation of the velocities by taking into account the conditions on R(t) is given by
v(t)v(t ) = v02 e−2ξ t
t
+ dτ1
t
dτ2
e−ξ (t+t −τ1 −τ2 ) f (τ1 − τ2 ).
(7.154)
0
0
From this, v 2 (t) results to be
t
t v 2 (t) = v02 e−2ξ t + dτ1 dτ2 0
e−ξ (2t−τ1 −τ2 ) f (τ1 − τ2 ).
(7.155)
0
By changing the variables τ = τ1 − τ2 and s = (τ1 + τ2 )/2, the double integral in Eq. (7.155) becomes
7.12 Introduction of Memory Effects
251
+t
dτ
−t
=
f (τ )
1 2ξ
e2ξ s
ds
|τ |/2
(7.156)
f (τ ) −eξ |τ | + e2ξ t
+t
−t
1 =− ξ
t
dτ
t
dτ
f (τ )e
ξτ
0
1 + e2ξ t ξ
t
dτf (τ ).
(7.157)
0
So Eq. (7.155) is now
1 t v 2 (t) = v02 e−2ξ t + dτ f (τ ) ξ 0
t 1 dτ f (τ )eξ τ ; − e−2ξ t ξ 0
(7.158)
in the limit t → ∞, we have
1 v (t) = ξ 2
∞
dτ
(7.159)
f (τ ).
0
On the other hand, we expect in this limit 1 2 3 m v = kB T , 2 2
(7.160)
so from Eq. (7.159), we get m ξ= 3kB T
∞
dt 0
R(t)R(0) m = 2 3kB T m
∞
dtf (t) .
(7.161)
0
We can generalize Eq. (7.141) as m dv =− dt 3kB T
∞
0
R(t) f t − t v(t ) + m
(7.162)
where the kernel f t − t accounts for memory effects. If we assume that in the long time limit R(t) has a white spectrum f (t − t ) =
3kB T ξ δ(t − t ), m
(7.163)
we get Eq. (7.141). The assumption of Eq. (7.163) is equivalent to consider only Markov processes. Memory effects will be introduced later in a more general framework.
252
7 Dynamics of Liquids
Fig. 7.10 Scheme of the Maxwell model
7.12.2 Viscoelasticity: The Maxwell Model The opposite of the hydrodynamic limit is the limit of high-frequency ωτ >> 1. Under high-frequency perturbation, the liquid cannot flow, and its local response would be of elastic type similar to the case of a solid. To introduce the problem of the coupling between elasticity and viscosity, it is useful to consider the Maxwell model. An elastic spring is connected to a viscous damper as in Fig. 7.10. In the spring, the elastic stress σe is given by σe = ke Δxe ,
(7.164)
according to the Hook’s law where Δxe is the elastic strain. In the viscous damper, the stress will be σd = η
dΔxd , dt
(7.165)
where Δxd is the strain in the damper and η is the viscosity. The two elements are connected in series, and the stress is uniformly distributed σe = σd = σ .
(7.166)
Δx = Δxe + Δxd .
(7.167)
The total strain will be
By taking into account Eqs. (7.164) and (7.165), we can write the equation of motion for the strain Δx dΔx 1 dσ 1 = + σ. dt ke dt η
(7.168)
With a constant strain, Eq. (7.168) becomes ke dσ =− σ, dt η
(7.169)
σ = σ0 e−t/τR ,
(7.170)
and finally we have the solution
7.12 Introduction of Memory Effects
253
where σ0 is the stress at t = 0 and τR = η/ke . For time long enough, the stresses relax to zero. The viscosity was introduced with Eq. (7.70), and it was related to the stress tensor in Eq. (7.105). The Eq. (7.70) by taking into account also Fig. 7.2 can be rewritten as Fxy ∂ ∂rx =η , A ∂t ∂y
(7.171)
where rx is the displacement of the portion of liquid along the x direction. By generalizing for the other directions, we can rewrite also Eq. (7.105) as ∂rj ∂ ∂ri . =η + σij d ∂t ∂xj ∂xi
(7.172)
This equation is analogous to Eq. (7.165) in the Maxwell model. In the limit ωτ >> 1 under an instantaneous force without viscosity, the stress will be proportional to the shear (or elastic) modulus. The shear modulus measures the response of the system to a deformation induced by an external elastic force. In general the external force can be periodic, and the shear modulus depends on frequency. This effect is typical of elastic media, and it takes place in solids. In liquids in the elastic limit, it has to be considered the high-frequency shear modulus G∞ . The stress will be given by
σij
e
= G∞
∂rj ∂ri + ∂xj ∂xi
.
(7.173)
At increasing frequency, we expect a coupling between elasticity and viscosity. Viscoelastic materials are a combination of the two behaviours as approximately represented by the Maxwell model. We can consider that the elastic constant ke is equivalent to G∞ and we can generalize Eq. (7.168). We define the strain as Δij =
∂rj ∂ri + ∂xj ∂xi
,
(7.174)
and we can assume that it comes out from a combination of a viscous dumping (d) and an elastic component (e) Δij = Δij d + Δij e .
(7.175)
As done in the Maxwell model, we consider that the stress is uniformly distributed σij = σij = σij . e
d
(7.176)
By combining Eq. (7.172) and the time derivative of Eq. (7.173), we can write
254
7 Dynamics of Liquids
∂ Δij = ∂t
1 1 ∂ + σij . G∞ ∂t η
(7.177)
If we assume that the total strain remains constant, we have an equation similar to Eq. (7.169), and the relaxation time can be defined as τR =
η ; G∞
(7.178)
this is called the Maxwell relaxation time, and it is analogous to the relaxation time that appears in Eq. (7.170) where G∞ replaces ke . If we recall the Stokes–Einstein relation, Eq. (7.71), by combining with the previous equation, we find a relation between the relaxation time and the diffusion coefficient DτR = cost × T ;
(7.179)
this is another form of the Stokes–Einstein relation. The coefficient of σij in Eq. (7.177) can be considered as the inverse of a frequency-dependent viscoelastic viscosity ηve . By taking the Laplace transform, we have 1 1 1 = −iz + ηve (z) G∞ η
(7.180)
from which with z = ω + i we get ηve (ω) =
G∞ . −iω + 1/τR
(7.181)
This function interpolates between the two limits: 1. Viscous limit ωτR > 1
7.12.3 Generalized Viscosity and Memory Effects In the viscoelastic theoretical formulation, to interpolate between the two limit, the generalized viscosity ηve = η (k, z) can be introduced. Then the generalized
7.13 Definition of Memory Functions
255
kinematic viscosity can be defined as ν (k, z) = η (k, z) /mρ and inserted in Eq. (7.122) for the correlation function of the transverse currents. The equation becomes C˜ t (k, z) =
kB T /m . −iz + ν (k, z) k 2
(7.184)
Now if for the generalized viscosity we assume the interpolation function (7.181), in the elastic limit (7.183) ν (ω) = iG∞ / (ρmω) in the ω space, we have C˜ t (k, ω) ≈
iωkB T /m . ω2 − (G∞ /ρm) k 2
(7.185)
2 = (G /ρm) k 2 propagating In this limit we find√a transverse elastic mode with ωel ∞ with a speed cel = G∞ /ρm. In (k, t)-space, Eq. (7.184) can be written as
∂ Ct (k, t) = −k 2 ∂t
t
dt ν k, t − t Ct k, t .
(7.186)
0
It is clear that ν(k, t) plays the role of a memory function. The Markov approximation will correspond to ν = η/ρm, the viscous limit. In general we have to account for memory effect with a functional form for ν (k, t).
7.13 Definition of Memory Functions In order to introduce more in general the memory functions, we follow the MoriZwanzig formalism [14, 22–26]. We will use a formalism similar to that adopted in quantum mechanical problems [26]. We consider a set of dynamical fluctuating variables Xν as elements of an Hilbert space. They evolve in time with the equation dXν (t) ˆ ν (t) = i LX dt
(7.187)
where Lˆ is the Hermitian Liouville operator defined in Eq. (2.132). The formal solution of this equation is
ˆ Xν (0). Xν (t) = exp i Lt
(7.188)
256
7 Dynamics of Liquids
In the Dirac formalism, the inner product is defined as
Xν |Xλ (t) = Xλ (t)Xν∗
(7.189)
where we indicate Xν = Xν (0) and Xν∗ are the Hermitian conjugate. We define the correlation function as ˆ
Cλν (t) = Xν |Xλ (t) = Xν |ei Lt |Xλ .
(7.190)
The formalism can be developed by considering the variables Xν with ν = 1, . . . , n as components of a vector X(t). The components could be, for instance, the density and the current. Each component evolves in time with Eq. (7.188). Now the correlation function is a matrix defined as C (t) = X|X(t) .
(7.191)
We introduce a projection operator Pˆ = |X X|X−1 X|
(7.192)
where it appears the inverse of the matrix X|X. The explicit expression of Pˆ is Pˆ =
|Xλ X|X−1
λ,ν
λν
Xν | .
(7.193)
The operator Pˆ satisfies to all the properties of a projection operator Pˆ 2 = Pˆ
Pˆ † = Pˆ .
(7.194)
The time evolution of X(t) corresponds to a rotation in the phase space: ˆ
|X(t) = ei Lt |X ;
(7.195)
if we apply the operator Pˆ , we have ˆ Pˆ |X(t) = Pˆ ei Lt |X;
(7.196)
it gives us the part of X(t) that remains projected along the initial values X. Now we define ˆ = 1 − Pˆ ; Q
(7.197)
ˆ Pˆ = 0. Q ˆ projects the part of X(t) this is also a projector operator such that Q orthogonal to X, since
7.13 Definition of Memory Functions
257
ˆ
X|QX(t) = 0.
(7.198)
ˆ The meaning of Eq. (7.198) is that QX(t) is not correlated to X. The time evolution is not necessarily determined only by what is correlated with the dynamical variable in the initial state. For instance, at short time, a particle is in a ballistic regime, and then it feels the forces of the other particles. Now we want to separate the properties that are related to the initial value of the vector from the rest. We can write d|X(t) ˆ ˆ ˆ ˆ i L|X ˆ = iei Lt L|X = ei Lt Pˆ + Q dt
(7.199)
and so d|X(t) ˆ ˆ ˆ ˆ ˆ = iei Lt Pˆ L|X + iei Lt Q L|X . dt
(7.200)
We treat separately the two terms on the right side. For the first ˆ ˆ ˆ iei Lt Pˆ L|X = iei Lt Ω|X = iΩ|X(t)
(7.201)
where we defined iΩ =
ˆ
X|L|X .
X|X
(7.202)
ˆ we need to introduce an operator Sˆ such that For the second term containing Q, ˆ ˆ ˆ ˆˆ ei Lt = ei Lt S(t) + ei QLt
(7.203)
ˆ = 0) = 0. In Eq. (7.203), we take the derivative of both members with with S(t respect to time ˆ ˆ ˆ ˆ d Sˆ ˆ ˆ Le ˆ i Qˆ Lt ˆ i Lt ˆ i Lt + iQ = i Le . S(t) + ei Lt i Le dt
(7.204)
For the left-hand side, we use Eq. (7.203) ˆ ˆ ˆˆ ˆ ˆ ˆ d Sˆ ˆ ˆ Le ˆ i Qˆ Lt ˆ i Lt + iQ , + ei QLt = i Le S(t) + ei Lt i Lˆ ei Lt S(t) dt
(7.205)
and we get the equation ˆ
ei Lt
d Sˆ ˆ ˆ i Qˆ Lt = i Pˆ Le . dt
(7.206)
258
7 Dynamics of Liquids
We integrate it ˆ =i S(t)
t
dτ
ˆ
ˆˆ
ˆ i QLτ e−i Lτ Pˆ Le
(7.207)
0
by using this result; the second terms of the right side of Eq. (7.200) become ˆ ˆ ˆ ˆ ˆ ˆˆ ˆ L|X ˆ L|X = i ei Lt S(t) + ei QLt Q iei Lt Q
t ˆ ˆ ˆ ˆ ˆˆ ˆ ˆ ˆ i Qˆ Lτ =i dτ ei L(t−τ ) Pˆ Le QL|X + iei QLt Q L|X .
(7.208)
0
Now we define the random force ˆˆ
ˆ L|X ˆ , |R(t) = iei QLt Q
(7.209)
X|R(t) = 0.
(7.210)
since
It is evident that R(t) is connected to the time evolution of a term orthogonal to |X. We reconsider in Eq. (7.208) the integral in the right-hand side term, and we write it as
t ˆ ˆ ˆ ˆ ˆ i Qˆ Lτ dτ ei L(t−τ ) |X X|X−1 X|Le QL|X i 0
=i
t
dτ |X(t − τ )M(τ )
(7.211)
0
where we have defined the memory function M(t) =
ˆ ˆ ˆ ˆ i Qˆ Lt QL|X
X|Le .
X|X
(7.212)
ˆ ˆ ˙ ˙ i Qˆ Lτ
X|e Q|X .
X|X
(7.213)
It can also be written as M(t) =
Now by recalling Eq. (7.212), we get ˆˆ
−1 ˆ i QLτ Q ˆ L|X X|X ˆ M(t) = X|Le ˆˆ
ˆ Le ˆ i QLτ |X X|X−1 = X|Lˆ Q
7.13 Definition of Memory Functions
259
ˆ ˆ 2 Le ˆ i Qˆ Lτ = X|Lˆ Q |X X|X−1 ˆ ˆ ˆ ˆ i Qˆ Lτ = X|Lˆ Qe QL|X X|X−1
= R(0)|R(t) X|X−1
(7.214)
where we used Eq. (7.209). So finally we rewrite the memory function as M(t) =
R(0)|R(t)
X|X
(7.215)
so the memory function can be interpreted as the correlation function of the random forces. The Eq. (7.208) becomes ˆ
ˆ L|X ˆ iei Lt Q =i
t
dτ |X(t − τ )M(τ ) + |R(t).
(7.216)
0
By combining Eq. (7.201), the definition (7.209) and Eq. (7.216), we can rewrite Eq. (7.200) as dX(t) = iΩ · X(t) − dt
t
dt M t − t · X t + R(t) ,
(7.217)
0
where the memory function is the kernel of the integral equation. We observe that Eq. (7.217) can be easily transformed in an equation for the correlation function by using Eq. (7.210) dC(t) = iΩ · C(t) − dt
t
dt M t − t · C t .
(7.218)
0
The Laplace transform is given by ˜ C(z) =
C(t = 0) ˜ −iz − iΩ + M(z)
.
(7.219)
In principle the calculation of a correlation function can be performed with Eq. (7.218) or Eq. (7.219) for a given memory function. These equations have a role similar to the Ornstein-Zernike equation in the case of the study of the structure of liquids. To calculate the dynamical correlation functions, we need a form for the memory function. In the theoretical approaches, it is necessary to find adequate approximations for the memory functions. We note that the diagonal terms of Ω are zero since the numerator of the definition (7.202) ˆ α = X α |X ˙ α = 0 iΩαα ∼ X α |L|X
(7.220)
260
7 Dynamics of Liquids
according to Eq. (6.28). If we take into account only the diagonal terms, Eq. (7.217) becomes
t dXα (t) =− dt Mαα t − t Xα t + R(t) . (7.221) dt 0 For a self-correlation function from Eq. (7.218), we have dCαα (t) =− dt
t
dt Mαα t − t Cαα t .
(7.222)
0
7.14 Memory Function for the Velocity Correlation Function Equation (7.221) can be written for the velocity as dv =− dt
t 0
R(t) , dt Mvv t − t v t + m
(7.223)
where the kernel is the diagonal part of the memory function
R(0) · R(t) /m2 . v 2 (0)
Mvv (t) =
(7.224)
By recalling Eq. (7.143) and the equipartition of energy (7.160), we see that Eq. (7.223) is equivalent to the Langevin equation (7.162), discussed in Sect. 7.12.1. We can write also Eq. (7.222) for the velocity correlation function, defined in Eq. (7.86). We use the normalized correlation function Ψ (t) = where we recall that Z(0) =
3kB T m
dΨ (t) =− dt
Z(t) Z(0)
(7.225)
.
t
dt Mvv t − t Ψ t .
(7.226)
0
By assuming Markov processes, the memory function can be taken as Mvv (t) = ξ δ t − t ;
(7.227)
in this way Eq. (7.223) reduces to Eq. (7.141), and the solution of Eq. (7.226) is Ψ (t) = e−ξ t .
(7.228)
7.14 Memory Function for the Velocity Correlation Function
261
In the Markov approximation, it is assumed that after a collision, the atom loses memory, so the collisions are not correlated. This could be valid only for very dilute fluids. A possible form for the memory function is based on the assumption of a single relaxation time with an exponential decay [14, 27] Mvv (t) = M0 e−t/τ .
(7.229)
M0 =
R(0) · R(0) ; 3kB T /m
(7.230)
M0 =
Rz (0)Rz (0) . kB T /m
(7.231)
We note that
this is also
The random forces in the liquid are determined by the interparticle potential, and we can recall the result of Eq. (7.47) and the definition of the Einstein frequency (7.48) to obtain M0 =
(kB T /m) Ω02 = Ω02 . kB T /m
(7.232)
With the exponential memory function, Eq. (7.226) becomes Ψ˙ (t) = −M0
t
dt e−(t−t )/τ Ψ t .
(7.233)
0
It is convenient to define Φ (t) = et/τ Ψ (t) ;
(7.234)
the equation for Φ is e−t/τ Φ˙ −
1 −t/τ e Φ = −M0 e−t/τ τ
t
dt φ t ;
(7.235)
0
then we take the derivative of this equation to get Φ¨ − the initial conditions are:
1 Φ˙ + M0 Φ = 0; τ
(7.236)
262
7 Dynamics of Liquids
˙ Ψ˙ = 0 → Φ(0) =
Ψ (0) = 1 → Φ(0) = 1
1 . τ
(7.237)
The associated polynomial equation to (7.236) is x2 −
1 x + M0 = 0 , τ
(7.238)
3 1 1 ± α2 2τ
(7.239)
and the roots of the polynomial are x1,2 = where α 2 = 1 − 4M0 τ 2 = 1 − 4Ω02 τ 2 .
(7.240)
It could be positive or negative. The general solution will be Φ(t) = C1 ex1 t + C2 ex2 t
(7.241)
where C1,2 are coefficients determined by the initial conditions. We obtain for the function Ψ 3 √ 2 3 √ 2 e−t/2τ 1 + α 2 e α t/2τ − 1 − α 2 e− α t/2τ . Ψ (t) = √ 2 α2
(7.242)
If α 2 > 0, the roots of the equation are real and αt αt e−t/2τ sinh + α cosh . Ψ (t) = α 2τ 2τ
(7.243)
If α 2 < 0, the roots are complex, and Ψ (t) has an oscillatory behaviour Ψ (t) =
|α|t |α|t e−t/2τ sin + |α| cos . |α| 2τ 2τ
(7.244)
In considering the two types of behaviour, we take into account that Ω0 is approximately the characteristic frequency of oscillation of an atom around its equilibrium position in the cage of the neighbours. After a certain time the cage relaxes and the oscillations decay, the relaxation time τ is related also to the collisions. In the case α 2 > 0, if τ is short enough, the function Ψ (t) decay rapidly due to the collisions. For very low density, we expect an exponential decay. From 2 Eq. (7.243) in the limit of a τ 0, is considered unphysical. Generally speaking, in the metastable regions, it is possible to observe phenomena like nucleation of gas in the liquid and vice versa. Fig. 8.1 Van der Waals loop along an isotherm at a temperature below the critical temperature of the liquid-gas transition. In the inset the chemical potentials as function of pressure of the gas (bold line) and the liquid (dashed line)
d
μ
P
c
d
P
b
a
a b
c V
8.1 Phase Transitions and Metastability of Liquids
267
e
C.P.
1
m
Lin
ido
W
p/pc
LIQUID 0.9
en
ist
0.8
ex Co
ce
l
oda
pin
sS
Ga
od
e lin
id
qu
Li
0.96
in Sp
al
GAS
0.98
1
1.02
T/Tc
Fig. 8.2 Liquid-gas transition from the van der Waals equation. In the p/pc vs. T /Tc are reported the coexistence curve (solid black line), the gas spinodal (dash-dot red line), the liquid spinodal (dashed red line). In the supercritical region, the blue line is the Widom line
In Fig. 8.2 we report in the p, ˜ T˜ plane the coexistence curve and the spinodal curves for the gas and the liquid phases derived from the van der Waals equation written in terms of the reduced variables T˜ = T /Tc , v˜ = v/vc , p˜ = p/pc (see Sect. 2.11) p˜ =
8T˜ 3 − 2. (3v˜ − 1) v˜
(8.1)
In approaching the critical point, it is observed a strong increase of the thermodynamic response functions, like the isothermal compressibility, KT , and the specific heat, cp . We have already seen that the structure factor S(k → 0) increases dramatically in approaching the critical temperature. This is connected to the divergence of the correlation length ξ that dominates the critical behaviour. The thermodynamic response functions diverge since they become proportional to powers of ξ . At increasing pressure and temperature, beyond the critical point, we know that we enter in a generic fluid phase without liquid-gas distinction. Now moving from the supercritical single fluid phase region to the two-phase region, it is found the occurrence of lines of maxima in the specific heat, in the isothermal compressibility and in the thermal expansion. These lines approaching the critical point collapse on a single curve, defined as the Widom line reported in the figure, the locus of the maxima of the correlation length that extends into the single fluid phase [2, 36]. It has to be remarked that the spinodal curve is an outcome of the approximations involved in the mean field theory of phase transitions, as the van der Waals equation is. In real systems the transition from the metastable region to the region of instability is not marked by a well-defined line [3]. As previously discussed, also
268
8 Supercooled Liquids: Glass Transition and Mode Coupling Theory
in computer simulation, it is possible to observe van der Waals like loop due to finite size effects. In principle thermodynamic quantities can be measured only in systems at equilibrium, but metastable states may exist for enough time to make possible their determination [1]. The free energy of a metastable state is higher than that of the stable state of equilibrium; however, the presence of a potential barrier could keep the system in the metastable state for a lifetime τl such that τl > τobs where τobs is the typical observation time. As in the case of systems at equilibrium, even for the system in metastable states, there will be a relaxation dynamics in quasi-equilibrium with a relaxation time τR . If τobs >> τR , we can study the system with methods of thermodynamics and apply the theory of linear response [1]. In the following we will concentrate on the metastable states of supercooled liquids approaching the glassy state, an important application of the concepts that we introduced in this section.
8.2 Liquids Upon Supercooling: From the Liquid to the Glass Upon cooling when the melting temperature Tm is reached, the crystalline phase starts to nucleate. The crystal growth is determined by two processes, the nucleation of a certain number of atoms in a pattern and the extension of the pattern to the surrounding atoms. The time needed to form a fraction of crystal τcryst decreases at decreasing temperature below Tm , but the diffusion process of the nucleus slows down due to the increase of the viscosity and the related relaxation time τrelax . After reaching a minimum, the time needed to grow the crystal phase starts to increase; see Fig. 8.3. The crystal can be formed with a slow process that allows the system to reach the equilibrium crystalline phase. Instead a fast process of cooling could make possible to freeze the dynamics of the crystalline growth and drive the liquid into a glassy state. The cooling or quenching rate necessary to bypass crystallization depends on the chemical composition of the material. It is low, of the order of 10−2 K/s, for good glass formers like SiO2 or GeO2 . The quenching rate must be faster for other materials, up 109 K/s for some metallic glasses. As said above supercooled liquid and glass are metastable states in conditions such that we can study the thermodynamics with experiments. It is found that the glass transition is characterized by changes in thermodynamics quantities. Upon cooling, the enthalpy H of the liquid decreases. At the melting, it is found a change ΔHm corresponding to the heat of fusion with a change of the slope. At the transition from the supercooled liquid to the glass, it is observed a change of slope of the enthalpy without any heat release. The change of slope defines the temperature of the glass transition Tg ; see Fig. 8.4. A similar behaviour is found for the volume. In the glassy state, the volume decreases with a slope similar to the crystal [1].
8.2 Liquids Upon Supercooling: From the Liquid to the Glass
τcryst
Time
Fig. 8.3 Qualitative behaviour of the crystallization process upon cooling from the melting temperature Tm . τcryst is the time necessary to form a portion of crystal from the liquid. τrelax is the structural relaxation time. Tg is the conventional temperature of the glass transition; for a definition, see text
269
τrelax
Tg
T
Fig. 8.4 Enthalpy change across the melting and across the transition from the supercooled liquid to the glass
Tm
H (enthalpy)
Liquid
Glass
ΔH melting
Crystal
Tg
Tm
T
As a consequence of the change in the slope of the enthalpy, also the specific heat changes across the glass transition, as shown in Fig. 8.5. The location of the temperature Tg however depends on the the experimental procedure. In particular it is found that lower cooling rate yields a lower glass transition temperature, as the example shown in Fig. 8.6 for the volume at the liquidglass transition. This is a consequence of the fact that the liquid-glass transition is not a normal thermodynamics transition between two equilibrium phases.
8 Supercooled Liquids: Glass Transition and Mode Coupling Theory
Fig. 8.5 Specific heat across the melting and the transition from supercooled liquid to the glass
Liquid Specific heat
270
Glass Crystal
Tm
Tg
Fig. 8.6 Volume across the melting and the transition from supercooled liquid to the glass. The glass B is obtained with a slower rate of cooling with respect to glass A
T
Volume
Liquid
Glass A
ΔV melting
Glass B Crystal
TgB TgA
Tm
T
8.3 Angell Plot Materials in the glassy state show usually different types of behaviour. Systems like silica or metallic glasses present a large shear modulus; they are called hard glasses. The specific heat of hard glasses shows a small change across the transition to the glassy state at Tg . Other systems in the glassy state show a lower shear modulus, and across the glass transition, it is found a larger jump in the specific heat with respect to the the harder glasses. A more precise classification of the glass former materials has been done by Angell by considering the behaviour of the viscosity in the supercooled region of glass formers [4, 5]. In Fig. 8.7 it is reported a version of the famous Angell plot [6]. The viscosity η changes of many order of magnitude from 10−4 to 1013 poise in approaching the glass transition. Angell defines Tg as the temperature at which the liquid viscosity η reaches the value 1013 poise. According to the Maxwell model, presented in Sect. 7.12.2, the relaxation time is proportional to the viscosity, Eq. (7.178) τ = η/G∞ . With
8.3 Angell Plot
271
SiO2 (1473)
13
GeO2 (810) NaAlSi3O8 (1095) Na2O.2SiO8
log[Viscosity (poise)]
11 9
B2O3 (556) As2Se3 (455)K AS2Se3 (455)S As2Se3 (424)T
7
Propanol (98) CaAl2SiO8 (1134) Glycerol (191)
5
[log η]1/2
Se 1 (307) Se 2 (307) ZnCl2 (380)
3
3-bromopentane (108) Phenolphthalein (363) Triphenylphosphite (205) Ca(NO3)2.4H2O (217)
1
H2SO4.3H2O (159) Toluene (117) Propylene carbonate (158)
–1 –3
o-terphenyl (247) Salol (225)
–5 0
0.2
0.4
0.6
0.8
1.0
Tg/T
Fig. 8.7 Angell plot. The viscosities for many systems are collected as function of Tg /T . The yaxis is in a log scale. The linear behaviour corresponds to the Arrhenius formula (strong liquids). Fragile liquids behaviour can be fitted with a VFT equation. Reprinted with permission from [6]. Copyright (2001) Springer Publishing Company
the typical value of the elastic modulus G∞ , the temperature Tg corresponds to a relaxation time of 100 s circa. The temperature Tg defined in this way is usually just below the one obtained by calorimetric measures. In the Angell plot, we note that for certain systems, ln(η) increases linearly. This behaviour is called à la Arrhenius, and it is typical of activated processes with η (T ) = η0 eEA /kB T
(8.2)
where EA is the activation energy and is weakly dependent on the temperature. The coefficient η0 is the limit at high temperature. In an Arrhenius process, it is considered that the viscosity is produced by the sliding of a liquid layer on another layer. The atoms can move from the equilibrium positions only by overcoming a potential energy barrier according to the Eyring model [7, 8], as shown in Fig. 8.8. Systems like SiO2 and GeO2 show an Arrhenius behaviour. This is typical of glass formers with a tetrahedral local order and a small change in the specific heat across Tg . They are classified by Angell as strong liquids.
272
8 Supercooled Liquids: Glass Transition and Mode Coupling Theory
Fig. 8.8 An atom in the liquid layer can move from position a to position b by overcoming an energy barrier, the activation energy
On the contrary, there are systems, like glycerol or ethanol, that show a behaviour called super Arrhenius. They are called fragile liquids. The system undergoes a complete change of its local structure across the glass transition; as a consequence, the change of cp is larger with respect to the strong glass former. The super Arrhenius behaviour of η or τ can be approximated with the Vogel-FulcherTammann (VFT) formula
BT0 η = η0 exp T − T0
(8.3)
where T0 is a hypothetical temperature below Tg where η would diverge. The parameter B is related to the degree of fragility, since smaller values of B correspond to higher fragility. According to Angell it is possible also to define a fragility parameter as mf (T ) =
∂η ∂ Tg /T
(8.4) Tg
that measures how fast η increases in approaching Tg .
8.4 Kauzmann Temperature In 1948 Kauzmann proposed a paradox for the entropy of a supercooled liquid [9]. As we know, the specific heat at constant pressure is related to the entropy per particle s = S/N by the formula cp = T
∂s ∂T
.
(8.5)
p
By considering the behaviour of the specific heat in supercooled liquid and in crystal, Kauzmann deduced that the entropy of the supercooled liquid decreases
8.4 Kauzmann Temperature
273
more rapidly than the entropy of the crystal. Then he explored the possibility that it exists a temperature where the curve of the entropy of the supercooled liquid could cross the curve of the crystal entropy. We can calculate the entropy of both liquid and crystal below Tm
T
sl (T ) = sl (Tm ) +
Tm
scr (T ) = scr (Tm ) +
T
Tm
cp,l dT , T
(8.6)
cp,cr dT . T
(8.7)
By subtracting the two equations, we can impose that it exists a temperature TK , called now Kauzmann temperature, such that Δs(TK ) = sl (TK ) − scr (TK ) = 0 .
(8.8)
So TK will be defined by
Δs(Tm ) =
Tm
Tk
Δcp dT , T
(8.9)
and since Δcp (T ) = cp,l (T ) − cp,cr (T ) > 0, it will be Δs(Tm ) = sl (Tm ) − scr (Tm ) > 0. TK results to be a finite temperature. For T < TK , the liquid (or glass) entropy would be less that the crystal entropy sl (T ) < scr (T )
for
T < TK ,
(8.10)
and for T → 0, the glass entropy would become negative since the entropy of the crystal goes to zero in this limit. This is the entropy crisis of the Kauzmann’s paradox. For Kauzmann the only way to solve the paradox would be to suppose that the supercooled liquid transforms spontaneously in the crystal at a temperature T > TK . In recent times, a different interpretation has been introduced [10]. We have already seen that slowing down the cooling rate the temperature Tg decreases, so it has been hypothesized that the Kauzmann temperature would be the temperature of the glass transition if it would be possible to supercool the liquid with a very slow (going to zero) cooling rate. If it would be possible to realize such ideal process, the liquid-glass transition would be a thermodynamic transformation. This interpretation of TK as the temperature of an ideal glass transition has been matter of long discussions since there are systems where the entropy of the solid results larger than the entropy of the liquid along the coexistence [11] and not entropy crisis is found in approaching T → 0. In any case the temperature TK can be considered an important parameter in the phenomenology of the glass formers of the glass formers. The meaning of
274
8 Supercooled Liquids: Glass Transition and Mode Coupling Theory
the Kauzmann temperature has been also connected with the theory developed by Adam and Gibss in 1965 [12] by computation performed in analogy with spin glass theory [13, 14].
8.5 Adam and Gibbs Theory 8.5.1 Cooperative Rearranging Regions G. Adam and J. H. Gibbs (AG) proposed in 1965 a theory that connects the relaxation time to the configurational entropy of glasses [12]; they developed further the previous approach by Gibbs and Di Marzio [15]. For the comprehension of the formation of the glass from the liquid, AG assumes that a fundamental role is played by units called cooperative rearranging region (CRR). A CRR is constituted by a group of molecules that can rearrange themselves in different configurations independently from the behaviour of the others molecules in the system. The rearrangement takes place under fluctuations of energy (or enthalpy). The basic idea of AG is that when the liquid is supercooled, the CRR can rearrange and grow with a consequent increase of the structural relaxation time. In this way, it is expected a connection between the increasing of the relaxation time with the decreasing of the configurational entropy. In an NPT ensemble, let us assume that in the system, there are Ns subsystems of molecules that interact weakly with the rest of the system. A subsystem is composed by z molecules. Let us assume that the fluctuations around equilibrium can induce cooperative rearrangement of n between the Ns subsystems. For a subsystem, we can write the partition function as ζ (z, p, T ) =
w (z, E, V ) e−βE e−βpV ,
(8.11)
E,V
where w is the degeneration of the state, the number of possible configurations for given E and V . The associated Gibbs free energy is G = zμ = −kB T ln ζ . Between the Ns , we suppose that n can make a transition to a new state. In practice we suppose that there are CRR of molecules in metastable states that can rearrange in deeper minima in energy for the given p and T . The rearrangement would be possible only for certain values of E and V . By summing only on those values of E and V , we get the partition function ζ of the CRR. The corresponding Gibbs free energy would be G = zμ = −kB T ln ζ . Therefore the fraction n/Ns is given by
n Δ = exp −β G − G . = Ns Δ
(8.12)
8.5 Adam and Gibbs Theory
275
Now it is possible to define a transition probability for the cooperative rearranging states W (z, T ) ∝ n/Ns
W (z, T ) = A exp −β G − G ,
(8.13)
the exponent is G − G = z μ − μ = zΔμ so W (z, T ) = A exp (−βzΔμ) ,
(8.14)
where Δμ > 0 represents the energy barrier for the rearrangement of the subsystem. This barrier can be considered approximately independent from T and z. The transition probability can be averaged by summing over all the values of z. AG assume that there is a lower limit to the sizes of the CRR indicated as z∗ . The average transition probability is given by W¯ (T ) = A
∞ z exp (−βΔμ) .
(8.15)
z=z∗
with the assumption that A does not depend on z. The geometrical progression can be summed by considering n
xk =
k=m
x m − x n+1 . 1−x
(8.16)
We get exp (−βz∗ Δμ) . W¯ (T ) = A 1 − exp (−βΔμ)
(8.17)
Now we can define A¯ (T ) =
A 1 − exp (−βΔμ)
(8.18)
and assume that A¯ (T ) is approximately independent of temperature, so Eq. (8.17) can be written as W¯ (T ) = A¯ exp −βz∗ Δμ .
(8.19)
The relaxation time can be related to the inverse of the transition probability τ (T ) ∝
1 . ¯ W (T )
(8.20)
276
8 Supercooled Liquids: Glass Transition and Mode Coupling Theory
We will show below how AG connects W¯ (and τ ) to the configuration entropy of the system.
8.5.2 Calculation of the Configurational Entropy The configurational entropy Sc can be derived from a configurational partition function of the subsystems ζc (z, p, T ) =
wc (z, U, V ) e−βU e−βpV ;
(8.21)
U,V
at variance with Eq. (8.11) now the sum is on the potential energy, and wc is the number of possible configurations of a subsystem with potential energy U and a volume V . From (8.21) we can get the entropy of the subsystem sc . On the other hand, the configurational entropy of all the system will be given by Sc = kB ln Wc
(8.22)
where Wc is the number of configurations that maximizes the partition function. Under the assumption of independent Ns subsystems, we can write Sc = Ns sc ,
(8.23)
and from it we can derive sc =
Sc kB 1/N = ln Wc = kB ln Wc s . Ns Ns
(8.24)
The number of configurations available depends on the size z of the subsystems; if N is the total number of molecules Ns = N/z, then
z/N . sc = kB ln Wc
(8.25)
We said above that there is a lower limit to the size z∗ of a CRR; AG assumes that z∗ corresponds to the limit of two configurations, the minimum to make possible a rearrangement. The number of configurations related to z∗ will be such that ∗
z /N ; sc∗ = kB ln Wc this formula give us a way to define z∗ as
(8.26)
8.5 Adam and Gibbs Theory
277
z∗ =
Nsc∗ . Sc (T )
(8.27)
Now substituting (8.27) in (8.19), we have W¯ (T ) = A¯ exp −
a , T Sc (T )
(8.28)
where a = N sc Δμ/kB . By considering the relaxation time in Eq. (8.20), we obtain a . τ ∝ exp T Sc (T )
(8.29)
In this way the relaxation time is related to the configurational entropy. The relaxation time is growing up as Sc decreases, as consequence of the decrease of the possible configurations for the system. The explicit dependence of τ from T can be determined only with a calculation of Sc . There is not a definitive theoretical method to calculate Sc . A possible phenomenological derivation is related to the observation that, in approaching the glass transition, the behaviour of the configurational heat capacity can be approximated with a simple formula Cp,c ≈
c , T
(8.30)
so by integrating it, the configurational entropy can be calculated as
T
Sc =
T0
c , T2
(8.31)
where it is assumed that the temperature T0 corresponds to Sc (T → T0 ) = 0. By substituting the result of the integration in Eq. (8.29), we obtain τ = τ0 exp
BT0 . (T − T0 )
(8.32)
In this way we derived in a theoretical framework the VFT equation (8.3) that describes phenomenologically the behaviour of the relaxation time or the viscosity in fragile supercooled liquids. In recent interpretations, T0 is considered to correspond to the Kauzmann temperature; see, for instance, [16]. To better understand this point, it is necessary to consider the potential energy landscape (PEL), a concept that we introduced in Sect. 1.4.
278
8 Supercooled Liquids: Glass Transition and Mode Coupling Theory
8.6 Energy Landscape and Configurational Entropy The complex phenomenology of supercooled liquids can conveniently be interpreted in terms of the PEL. This idea was early introduced by Goldstein [17] and later developed by Stillinger and Weber [18]. The central quantity is the potential energy of a system of N particles in a region of volume V , indicated as Φ (r 1 , . . . , r N ). This function is determined by all the interactions between the molecules. Though the molecules interact mainly with their nearest neighbours, the topology of the PEL is very complicated. The function Φ is a multidimensional surface characterized by local minima corresponding to regions of the landscape where the system can be quenched and trapped. The local neighbouring minima are separated by saddle points. The system visits different regions of the PEL depending on the temperature. At high temperatures, the system has enough kinetic energy to sample all the PEL, and it is not trapped in deep minima, but at decreasing temperature, the system samples a smaller number of deeper minima. As we know from the Boltzmann equation, the entropy of a system is determined by the number of microstates that correspond to the macrostate for the given thermodynamic conditions. Upon supercooling the system could visit a number of metastable states so there would be contributions to the entropy by configurations where the system is not in equilibrium. In normal conditions, these contributions will decrease at long time, since the system will reach equilibrium regions. In approaching the glass transition, however, below a certain temperature, the part of entropy related to metastable states, called configurational entropy, could be still finite for time longer with respect to the experimental observation time. In this framework the dynamics of the molecules at low enough temperature can be thought as composed by vibrations around the minima and jumps between the different minima, as represented schematically in Fig. 8.9. The two processes contribute to the free energy. By assuming a decoupling between vibrations and transitions between the minima, the partition function of
Fig. 8.9 Schematic representation of the possible motions of atoms in the PEL at low enough temperature. The atoms can explore the minima of the PEL; they can be trapped and oscillate around the minima, or for thermal fluctuations, they can jump to other minima
8.6 Energy Landscape and Configurational Entropy
279
the system can be factorized as a vibrational term and a configurational part. As a consequence the excess entropy Sexc = S − Sid will be the sum of the configurational Sc and the vibrational Sv contributions Sexc ≈ Sc + Svib .
(8.33)
In the topographic view of the PEL, the minima are marked by different depths, and for a given energy, there is a certain number of minima. The counting of these numbers as function of energy is an important goal in the development of theoretical approaches [10, 13, 14, 16, 19–24]. By quenching the system to extremely low temperature, it is possible to determine the inherent structures, the collection of basins classified according to their depths. The inherent structures are the solutions of the equation ∇Φ (r 1 , . . . , r N ) = 0 .
(8.34)
Every configuration can be quenched to a minimum of Φ, its inherent structure. Each minimum would be a sort of basin of attraction for a number of configurations in the quenching process. With the hypothesis that TK is the temperature at which Sc = 0, in the limit of T → TK , the system will be stuck in a single stable state, a deep minimum in the PEL as represented in Fig. 8.10, since it will not have enough energy to reach the absolute minimum that corresponds to the ordered, crystalline, phase.
Fig. 8.10 Schematic one-dimensional representation of a multidimensional PEL of a fragile glass former with a single deep minimum corresponding to the ideal glassy state, reached at TK , where the system is trapped
280
8 Supercooled Liquids: Glass Transition and Mode Coupling Theory
Fig. 8.11 Schematic one-dimensional representation of the PEL of a strong glass former
We observed already that by lowering the cooling rate, the Tg decreases; therefore, it could be hypothesized that with an infinitely slow process of cooling, similar to a thermodynamic process, an ideal thermodynamic glass transition would take place. In this respect, the Kauzmann temperature could be interpreted as the temperature of this ideal glass transition. In the fragile case, the relaxation process is slower due to the intermediate sampling of equivalent basins, and this originates the non-Arrhenius behaviour. For strong glass formers, it is expected that the PEL is more uniform (see Fig. 8.11) and the relaxation takes place with an activation energy almost constant with temperature.
8.7 Dynamics of the Supercooled Liquid and Mode Coupling Theory 8.7.1 Dynamics Upon Supercooling All the experimental techniques, such as light scattering, neutron scattering, dielectric spectroscopy and optical Kerr effect, used to study the time-dependent correlation functions can give direct information about the behaviour of the liquid upon supercooling. To study the dynamics of supercooled liquids approaching the glass transition, the central observable is the dynamical structure factor or the related intermediate scattering function and van Hove correlation function. Since long time, experiments have measured a stretching of the excitation spectra of the liquid as it approaches the glass transition. In normal conditions, the microscopic time scales of the relaxations are of the order of the ps corresponding to frequency of the order
8.7 Dynamics of the Supercooled Liquid and Mode Coupling Theory
281
of the THz. In molecular liquids, the intermolecular modes show frequencies in the range Δν ≈ 0.1 ÷ 1 THz much smaller than the intramolecular frequencies in the range Δν ≈ 6 ÷ 7 THz. Upon supercooling, it is possible to observe excitations at frequencies below 1 THz. It is usually needed to use the resolution of quasi-elastic scattering techniques since the excitation peaks are very close to the elastic peak. As we will explain later, the experiments evidence that the relaxation to equilibrium of the fluid becomes slower and slower as the temperature decreases below the melting point. The relaxation laws stretch several orders of magnitudes like very few other processes in physics. The system explores regions of the PEL with minima and barriers. At high temperature, the particles have enough kinetic energy to visit all the PEL, but at reduced temperature, the system is constrained to sample deeper minima. As said in the discussion of the Kauzmann temperature, the system at a low temperature is finally trapped in a single very deep minimum and the dynamics is arrested. There is however an intermediate range of temperatures where it is found a change in the relaxation of the density fluctuations. From a pure exponential decay at high T , the relaxation law turns to a stretched exponential behaviour F (k, t) ≈ e−(t/τ )
β
(8.35)
where the exponent β < 1 and τ are a function of T and k. The decay of this form is called Kohlrausch-Williams-Watts (KWW) function. The relaxation time τ increases of orders of magnitude with decreasing temperature. Computer simulation is very relevant in the investigation of supercooled liquids. It is usually not difficult to quench the system in the liquid state avoiding crystallization because of the fast cooling rates that can be performed. Long-time computations are however necessary to avoid large statistical errors. A prototype of glass forming liquid was proposed years ago by Kob and Andersen [25–27]. This is a binary Lennard-Jones mixture (LJBM) composed by two types of atoms, A and B, with the same mass and interacting with a LJ potential, Eq. (3.5). The mixture is composed by 80% of A particles and 20% of B particles. The parameters of the potential are σBB = 0.88σAA
BB = 0.5AA
σAB = 0.80σAA
AB = 1.5AA .
(8.36)
The lengths are in units of σAA , and the temperature is in units of AA /kB . We report in the figures the results of a computer simulation performed at decreasing temperature with a box length 9.4, following the original procedure of Kob and Andersen. In Fig. 8.12 it is shown the MSD of particles A. At high temperature, it is found the usual behaviour in liquids. Initially the ballistic motion takes place in the time interval of the free motion of the particles. Then the system in the long time limit enters in the Brownian regime characterized by a random motion induced by collisions so that the Einstein relation is observed < r 2 >= 6Dt. But at decreasing
282
8 Supercooled Liquids: Glass Transition and Mode Coupling Theory
Fig. 8.12 Mean square displacement of the A particles in the Kob-Andersen LJBM at decreasing temperature. LJ units are used; in particular, the time is in units of (m/48AA )1/2 σAA
Fig. 8.13 Intermediate scattering function of the A particles of the Kob-Andersen LJBM at decreasing temperature. The function is calculated at k0 , the position of the maximum of S(k). LJ units are used; in particular, the time is in units of (m/48AA )1/2 σAA
temperature, the diffusion of the particles is slowed down. Between the ballistic and the Brownian regimes, it appears a flat region, a plateau. The plateau corresponds to a period of time where the particles are trapped in the transient nearest neighbour cage, and they cannot diffuse. In Fig. 8.13, it is shown the self-intermediate scattering function (SISF), calculated at the wave vector k0 corresponding to the maximum of the static structure factor. At high temperature after the ballistic interval of time, it is observed the
8.7 Dynamics of the Supercooled Liquid and Mode Coupling Theory
283
Fig. 8.14 SISF of oxygens in water, simulated with the TIP4P/2005 model at density 1.0 g/cm3 . The function is calculated at the position k0 of the maximum of SOO (k). See [28]
exponential decay. At decreasing T , a plateau develops after the initial ballistic regime, and its length increases as T goes down. The region of the plateau corresponds to the so-called β-relaxation. The long time decay, called α-relaxation, takes place with the KWW stretched exponential (8.35). Another important example of the behaviour of supercooled liquids is shown in Fig. 8.14. It is reported the (SISF) of the oxygens in liquid water, simulated with the TIP4P/2005 model [28]. The results shown are for the SISF calculated at k0 = 0.225 nm, the position of the main peak of the SOO (k), from T = 300 K down to T = 190 K. The double relaxation regime is found also in this case. We observe in the region of the β-decay the presence of a bump. This is attributed to the socalled Boson peak, an effect observed in computer simulations and experiments in the glass transition region. The origin of the Boson peak seems to be related to anomalies in the density of the vibrational states in glasses, but several competing explanations have been proposed for this phenomenon. More will be said about the glassy behaviour of supercooled water in the next chapter. The interpretation of the relaxation phenomena described before must take into account the properties of the PEL. Upon supercooling, the dynamics will be dominated by the transition between the metastable states. The number of accessible states decreases upon approaching the glass transition, and it is expected that a structural arrest would take place when the height of the barriers between the metastable states becomes infinite. This idea is at the basis of the mean field theoretical approach, called mode coupling theory (MCT) of glassy dynamics [29], the most important theory able to interpret and make predictions about the dynamics of supercooled liquids.
284
8 Supercooled Liquids: Glass Transition and Mode Coupling Theory
8.7.2 Mode Coupling Theory and Cage Effect To investigate the dynamics of supercooled liquids, it is necessary to introduce memory effects taking explicitly into account those more relevant in determining the transition from the liquid to the glassy state. The memory effects can be approximately separated in binary contributions related to rapid collisional short time effects and contributions due to slow relaxation processes. These long time slow effects are the more relevant for the glass transition. We have already seen that collective modes are present in liquids in the hydrodynamics limit. Density fluctuations around the equilibrium are at the origin of modes similar to phonons in crystals. Now in approaching the glass transition, the slow relaxation can be attributed mainly to the coupling of those liquid modes, a type of processes similar to the mode coupling of phonons in crystals. The mode-mode coupling approach, in fact, was developed to study liquid dynamics in analogy with the theory of anharmonic effects in crystals. The origin of the name is related to the assumption of processes where a phonon is assumed to decay in a single couple of phonons with different frequencies. Kawasaki applied this theory to the study of the critical slowing down near a critical point [30, 31]. Götze had the idea of a similar theory for the dynamics of a liquid approaching the glass transition. He called his approach mode coupling theory for the evolution of glassy dynamics [29]. The MCT is able to predict the behaviour of the intermediate scattering function in supercooled liquid along the scheme shown in Fig. 8.15. After the ballistic region, two relaxation regimes are found. The first one is the β-relaxation; it is characterized by a plateau that connects the inflection points indicated with a and b. Then after crossing b, the system enters in the α-relaxation regime, where the KWW decay is found. The MCT explains this behaviour in terms of the cage effect, described in the figure. In the ballistic regime, the particle diffuses
Fig. 8.15 Schematic representation of the behaviour of the intermediate scattering function of a liquid upon supercooling
8.7 Dynamics of the Supercooled Liquid and Mode Coupling Theory
285
without interaction with other atoms. The β-relaxation is the period of the rattling of the particle in the transient cage of the nearest neighbours. This effect is visible also in the plateau of the MSD, when the particle is almost arrested. The caging period grows upon cooling, and the length of the plateau increases. After a time long enough, the cage is broken and the atom can diffuse; this corresponds to the α-relaxation region.
8.7.3 Formulation of the Theory The formulation of the MCT is rather complex, and here we follow the derivation given by Reichman and Charbonneau in their lecture notes [32]; see also [33]. We recall Eq. (7.218) and the definitions given in Sect. 7.13 dC(t) = iΩ · C(t) − dt
t
dt M t − t · C t
(8.37)
0
where C(t) is the correlation function of the components of the vector X C (t) = X|X(t) , iΩ =
ˆ ˙
X|X
X|L|X = ,
X|X
X|X
(8.38) (8.39)
and the memory function is given by M(t) =
R(0)|R(t) ,
X|X
(8.40)
where R(t) are the random forces. We work in the (k, t) space, but we will specify the dependence on the wave vector only when necessary. We recall that ˆˆ ˆ ˙ |R(t) = iei QLt Q| X
(8.41)
ˆ = 1 − Pˆ . We see where Pˆ is the projection operator defined in Eq. (7.192) and Q that
˙ . |R(0) = i 1 − Pˆ |X (8.42) In order to derive the MCT equations, we consider a vector X with two components
286
8 Supercooled Liquids: Glass Transition and Mode Coupling Theory
δρ (k, t) X (t) = jl (k, t)
(8.43)
,
where δρ(k, t) is the density fluctuation and jl (k, t) is the longitudinal component of the current, as defined in Eq. (7.58) ∂t ρ(k, t) = ikjl (k, t) ;
(8.44)
note that from now with ∂t , we indicate the derivative ∂/∂t. The correlation function is ⎡ C(t) = X|X(t) = ⎣
δρ(−k)δρ(k, t) δρ(−k)jl (k, t)
⎤ ⎦;
(8.45)
{ jl (−k)δρ(k, t)} jl (−k)jl (k, t) in this matrix the term on the bottom left delimited with {. . .} is the most relevant for our calculations. We calculate now the different terms of Eq. (8.37); for the iΩ we need ⎡
X|∂t X = ⎣
δρ(−k)∂t δρ(k) δρ(−k)∂t jl (k)
⎤ ⎦;
(8.46)
jl (−k)∂t δρ(k) jl (−k)∂t jl (k) we have to recall the calculation of the moments of the intermediate scattering function F (k, t) developed in Sect. 7.5 i i NkB T ;
δρ(−k)∂t jl (k) = δρ(−k) ∂t ∂t ρ(k) = k 2 k k m
(8.47)
we have the same result for the other off-diagonal term, while it is easy to see that the diagonal terms are zero, since δρ∂t δρ = 0, etc. Then the product X|X can be obtained from (8.43) ⎡
X|X = C(0) = ⎣
⎤
NS(k)
0
0
NkB T /m
⎦.
(8.48)
Now iΩ is given by ⎡ iΩ = ⎣
0
ikNkB T /m
ikNkB T /m
0
⎤ ⎡ 1 N S(k) ⎦·⎢ ⎣ 0
0 1 N kB T /m
⎤ ⎥ ⎦
(8.49)
8.7 Dynamics of the Supercooled Liquid and Mode Coupling Theory
287
and from it ⎡ ⎢ iΩ = ⎣
0
ik
ikkB T mS(k)
0
⎤ ⎥ ⎦
(8.50)
and also ⎡ ⎢ iΩ · C = ⎣
0
⎤ ⎡ ⎤
δρ(−k)δρ(k, t) δρ(−k)jl (k, t) ⎥ ⎣ ⎦. ⎦· 0 { jl (−k)δρ(k, t)} jl (−k)jl (k, t)
ik
ikkB T mS(k)
(8.51)
In order to calculate the memory function, we can start from Eq. (8.42) and Pˆ |∂t X =
1
X|∂t X|X
X|X
(8.52)
then ⎡ ⎢ Pˆ |∂t X = ⎣
0 ikkB T mS(k)
ik
⎤⎡ ⎥⎣ ⎦
0
δρ(k)
⎤ ⎦.
(8.53)
jl (k)
Since R(0) = |∂t X − Pˆ |∂t X,
(8.54)
we have ⎡ R(0) = ⎣
δ∂t ρ(k) ∂t jl (k)
⎤
⎡
⎦−⎢ ⎣
ikjl (k) k T ik B δρ(k) mS(k)
⎤
⎡
0
⎥ ⎣ ⎦=
⎤ ⎦
(8.55)
a(k)
where a (k) = ∂t jl (k) − ik
kB T δρ(k) . mS(k)
(8.56)
The R(t) will be given by ˆˆ
|R(t) = ei QLt |R(0)
(8.57)
288
8 Supercooled Liquids: Glass Transition and Mode Coupling Theory
so ⎡ ⎢ ⎢
R(0)|R(t) = ⎢ ⎣
0
⎤
0
⎥ ⎥ ⎥. ⎦
ˆˆ
(8.58)
0 a(−k)ei QLt a(k) Now we can write the memory function ⎡ ⎢ ⎢ M(t) = ⎢ ⎣
0
⎤
0
ˆˆ
0 a(−k)ei QLt a(k)
⎡ 1 ⎥ ⎥ ⎢ N S(k) ⎥·⎣ ⎦ 0
0 1 N kB T /m
⎤
⎡
⎥ ⎢ ⎦=⎣
0
0
0
m N kB T
⎤ ⎥ ⎦ ζ (t) (8.59)
where ˆˆ
ζ (t) = a(−k)ei QLt a(k) .
(8.60)
We have all the components for writing the equation for the correlation function C(t). The kernel of Eq. (8.37) is given by ⎡
⎤ ⎡ ⎤
δρ(−k)δρ(k, t ) δρ(−k)jl (k, t ) ⎢ ⎥ ⎣ ⎦. M ·C =⎣ ⎦· m 0 N kB T ζ (t − t ) { jl (−k)δρ(k, t )} jl (−k)jl (k, t ) (8.61) As said above we are interested in the matrix C(t) at the term delimited with {. . .} reported here below 0
0
jl (−k)δρ(k, t) ;
(8.62)
when we perform the derivative of C(t), this term gives N d2 F (k, t) ik dt
(8.63)
where we used the properties F (k, t) = F (−k, −t). By looking at the product of matrices in Eq. (8.61), the corresponding left bottom term is m 1 d ζ (t − t ) N F (k, t) . NkB T ik dt
(8.64)
By taking the corresponding term in the product iΩ · C from Eq. (8.51), we obtain
8.7 Dynamics of the Supercooled Liquid and Mode Coupling Theory
m 1 N d 2 F (k, t) kN kB T F (k, t) − =− 2 ik imS(k) kB T ik dt
t 0
289
dF (k, t ) dt ζ k, t − t dt (8.65)
and finally d 2 F (k, t) m k 2 kB T F (k, t) − = − 2 mS (k) NkB T dt
t 0
dF (k, t ) dt ζ k, t − t . dt
(8.66)
We define the function Φ(k, t) Φ (k, t) =
F (k, t) , S (k)
(8.67)
if we consider the self-scattering function Φ(k, t) coincides with Fs (k, t), in both cases Φ(k, t = 0) = 1. We rewrite Eq. (8.66) as d2 Φ (k, t) + Ωk2 Φ (k, t) + dt 2
d Φ k, t = 0 dt M k, t − t dt
(8.68)
where Ωk2 =
ω˜ (2) kB T k 2 . = (0) m S (k) ω˜
(8.69)
We have defined the kernel memory function as M (k, t) =
m ζ (k, t) . NkB T
(8.70)
The Eq. (8.68) is an exact equation, and it is similar to the equation of a harmonic oscillator with a dumping that depends on time. The dumping is determined by the memory function. In principle, the memory function (8.70) could be derived from the function ζ (t) given in Eq. (8.60), but the long calculation gives unsolvable equations. Some approximation is necessary in order to proceed. In analogy with the approach of Kawasaki [30, 31] for the critical slowing down, in the application of MCT to supercooled liquid, the contributions to the memory function are separated in a part called regular that includes the short time effects typical of a liquid in normal situation and in a part that contains long time effects connected to the correlation of the forces acting between the atoms [34, 35]. So the memory function is divided in two terms M (k, t) = Mreg (k, t) + Ωk2 m (k, t) .
(8.71)
Mreg determines the behaviour of the F (k, t) at relatively short time. For supercooled liquid, it does not give an important contribution.
290
8 Supercooled Liquids: Glass Transition and Mode Coupling Theory
If we consider only the regular part and we assume the Markov approximation Mreg (k, t) = γk δ(t),
(8.72)
we get from Eq. (8.68) the equation Φ¨ (k, t) + Ωk2 Φ (k, t) + γk Φ˙ (k, t) = 0;
(8.73)
now in the long wavelength limit, we have Ωk2 =
kB T 2 −1 k S (k → 0) , m
(8.74)
and we recall that 1 KT = ρ
S (k → 0) = ρkB T KT
∂ρ ∂p
,
(8.75)
T
so Ωk2 =
1 m
∂p ∂ρ
k2 ;
(8.76)
T
the isothermal sound velocity already defined in Eq. (7.129) is cT2 =
1 m
∂p ∂ρ
,
(8.77)
T
and we have Ωk2 = cT k 2 .
(8.78)
The Eq. (8.73) can be written in ω space as ω2 + iγk ω − cT2 k 2 = 0,
(8.79)
and by assuming γk = Γη k 2 , where Γη is defined in (7.130), we get the frequencies of the dumped acoustic modes (7.131) 1 ω = ±cT k − i Γη k 2 . 2
(8.80)
The long time behaviour is determined by the m (k, t) memory function. In MCT, it is assumed the factorization approximation
8.7 Dynamics of the Supercooled Liquid and Mode Coupling Theory
m (k, t) =
1 2
dq 1 dq 2 (2π )6
291
V (k, q1 , q2 ) Φ (q1 , t) Φ (q2 , t) δ (q 1 + q 2 + k) ;
(8.81) V (k, q1 , q2 ) are the vertices that determine the mode-mode coupling, in analogy with the applications in solid-state physics where a phonon of wave vector k decays in two phonons of wave vectors q1 and q2 . The key point of the theory is that the vertices are written in terms of the structure factors of the liquid V (k, q1 , q2 ) =
. /2 ρ S (k) S (q1 ) S (q2 ) k · q1 c˜ (q1 ) + q2 c˜ (q2 ) q4
(8.82)
where c˜ (k) is the direct correlation function defined by Eq. (4.73) and related to S (k) by Eq. (4.83): 1 − ρ c(k) ˜ = 1/S(k). In MCT, the equations for the density correlation functions depend on a kernel containing the static structure of the liquid. This implies that the theory is self-consistent, since at variance with critical phenomena now the input S(k) functions are regular in approaching the glass transition. In spite of this, a singularity appears at temperature low enough. In particular the solution of the MCT equation shows a bifurcation at a temperature TC for Φ(k, t → ∞).
8.7.4 Glass Transition as Ergodic to Non-ergodic Crossover As mentioned above, the bifurcation concerns the asymptotic limit of Φ(k, t). At high temperatures, the system can visit all the PEL, and this assures its ergodicity. This implies that Φ(k, t → ∞) = 0. In the MCT equation at decreasing temperature, the coupling terms increase since, as we know, the intensity of the peaks of the structure factor grows. We have already seen in Sect. 6.12 the Verlet criterion that shows as the structure factor peak plays a role in the solidification. The mode coupling approximation enhances the role of the structure factor through the nonlinear terms in the vertices. As a consequence there is a temperature TC , where we have a crossover to a situation where the density fluctuations are frozen, and it is found that Φ(k, t → ∞) = fk with fk a finite number less than 1. So Φ(k, t) starts from 1 at t = 0 and decays as Φ (k, t → ∞) =
⎧ ⎨ ⎩
0
T > TC .
fk > 0
(8.83)
T < TC
So at TC it is found the transition from an ergodic regime to a non-ergodic state. Above TC the system is able to equilibrate (liquid state); instead below TC the dynamics of the system is arrested; this corresponds to a glassy state. It has to be considered however that this is an ideal liquid-glass transition, since in considering
292
8 Supercooled Liquids: Glass Transition and Mode Coupling Theory 1 Φ(k,t) 0.8 Tc 0.6 T>Tc
0.4
0.2
0
10-2
100
t/t0
102
104
Fig. 8.16 Function Φ (k, t) for decreasing temperature approaching the ideal glass transition. t0 is a characteristic time of the system
the cage effect some physical processes are neglected, in particular the possibility at finite temperatures of hopping between minima in the PEL. In Fig. 8.16, it is represented the Φ (k, t) for decreasing temperature. For T > TC , we observe a plateau that grows with an increase of the β-relaxation region. In the ideal transition, the plateau extends to infinity at T = TC , so that the system is frozen in a non-ergodic state for T < TC . The asymptotic limit fk is called the non-ergodicity parameter, and it is equivalent to the Debye-Waller factor (DWF) in crystals, since in an ideal crystal S (k, ω) = fk δ (ω). If Φ(k, t) = Fs (k, t), the self-intermediate scattering function, the non-ergodicity parameter is the Lamb-Mössbauer factor, the equivalent of the Debye-Waller factor in the case of incoherent scattering. We recall that the DWF in case of an harmonic potential can be derived as fk = e−k
2 /3
;
(8.84)
in a lattice < u2 > is the mean square displacement, corresponding to the oscillation around the equilibrium position. In the MCT interpretation, < u2 > is determined by the oscillation in the cage. Therefore it is assumed that it corresponds to the mean radius of the cage rc2 =< u2 > and we can write fk = e−k
2 r 2 /3 c
.
(8.85)
Apart for few cases, the calculations of the vertices starting from the interaction potential of liquid matter are rather complicated. It is however of relevant interest the comparison between experiment and computer simulation with the scaling relations
8.7 Dynamics of the Supercooled Liquid and Mode Coupling Theory
293
and the asymptotic predictions of MCT. We can now consider the MCT equation in some limit where the theory appears in a simplified version. Let us consider Eq. (8.68) and rewrite it by operating a Laplace transform (LT). We use the properties of the LT of a generic function LT [F (t)] LT F˙ (t) (z) = −izF (z) − F (t = 0)
(8.86)
and LT F¨ (t) (z) = −z2 F (z) + izF (t = 0) − F˙ (t = 0) .
(8.87)
By taking into account the initial conditions, Φ (t = 0) = 1 and Φ˙ (t = 0) = 0, the Eq. (8.68) can be written as
−z2 + Ωk2 Φ + iz = M [1 + izΦ]
(8.88)
and finally Ωk2 Φ (k, z) = −iz + M (k, z) . 1 + izΦ (k, z)
(8.89)
We consider now the long time limit, and we use the property of the Laplace transform lim (−iz) F (z) = lim F (t) .
z→0
t→∞
(8.90)
We can neglect the contribution of the regular part of the memory function, so M (k, z) ≈ Ωk2 m (k, z) ,
(8.91)
Φ (k, z) = m (k, z) . 1 + izΦ (k, z)
(8.92)
and Eq. (8.89) becomes
To perform approximate calculations of the MCT equations, the term m (k, t) can be considered as a functional of the correlation function Φ m (k, t) = F [Φ] ,
(8.93)
then the functional can be expanded in power of Φ. A simple approximation is to assume m (k, t) = u Φ 2 (k, t) .
(8.94)
294
8 Supercooled Liquids: Glass Transition and Mode Coupling Theory
To take the long time limit, we multiply by −iz both sides of Eq. (8.92), since lim (−iz) Φ (k, z) → fk
lim (−iz) m (k, z) → u fk2
z →0
z→0
(8.95)
we have fk = ufk2 . 1 − fk
(8.96)
This approximation is called sometimes schematic MCT, and it can be derived also by assuming in the vertices S (k) ≈ 1 + Aδ (k − k0 ) ,
(8.97)
where k0 is the position of the first peak of the structure factor and A is the area below the first peak. In this way it is found also an expression for the coefficient u. From Eq. (8.96) if we exclude the solution fk = 0, we have to solve fk2 − fk +
1 = 0; u
(8.98)
the solutions are 1 fk = ± 2
2
1 1 − ; 4 u
(8.99)
we require u > 4 and we have to choose the solution with the sign + to get fk → 1 in the limit of very large u. The real solutions of Eq. (8.96) are
fk =
⎧ ⎨
0
√ ⎩1 2 1 + 1 − 4/u
u4
so we recover the bifurcation (8.83) by considering that u = 4 at T = TC . The condition (8.100) implies that u must be large enough to obtain the bifurcation. By assuming for S (k) the form (8.97), it is possible to find that the value of u is given by u=
k0 S (k0 ) A2 8π 2 ρ
(8.101)
so the enhancement of the peak of structure factor upon supercooling plays an important role.
8.7 Dynamics of the Supercooled Liquid and Mode Coupling Theory
295
8.7.5 The β-Relaxation In approaching TC , an important feature is the presence of the β-relaxation determined by the rattling of the atom in the cage, as represented in Fig. 8.15. As stated above, Eq. (8.83), for T < TC in the non-ergodic zone, the correlation function of the density fluctuations decays from the value 1 at t = 0 to the nonergodicity parameter 0 < fk (T ) < 1 in the limit t → ∞. Below TC , MCT finds that close to TC , fk (T ) converges to the critical nonergodicity parameter fkc √ fk (T ) = fkc + hk σ
(8.102)
where it is introduced the so-called separation parameter σ = σ0 with = (TC − T ) /TC and hk is the critical amplitude. Above TC , as seen in Fig. 8.16, at decreasing temperature, a plateau develops before the onset of the α-relaxation. MCT shows that the plateau converges toward fkc in approaching TC from above. For || 0
(T < TC )
(8.112)
where the master functions g± depend on the scaled time t˜ = t/tσ . For σ = 0 G (t) = (t0 /t)a . For small values of t˜ for both the master functions is verified a power law behaviour g± t˜ ≈ t˜−a . In particular this scaling concerns the onset of the βrelaxation. For the supercooled liquid, this corresponds to the point indicated with (a) in Fig 8.17. For the two master functions. For the glass t˜ >> 1, there is a distinction between g+ t˜ → (1 − λ)−1/2 , instead g− t˜ ≈ −B t˜b . This last behaviour is called the von Schweidler scaling law. According to Eq. (8.109), the two exponent a and b are connected by the relation λ=
Γ 2 (1 + b) Γ 2 (1 − a) = ; Γ (1 − 2a) Γ (1 + 2b)
(8.113)
as a consequence of the resolution of the MCT equations, the values of λ are restricted to the range 1/2 < λ < 1, while 0 < a < 0.395 and 0 < b < 1.
8.7 Dynamics of the Supercooled Liquid and Mode Coupling Theory Fig. 8.17 β-relaxation and α-relaxation of a supercooled liquid. The zone indicated with (a) corresponds to t > tσ and the von Schweidler scaling
Φ(k,t)
297
ballistic regime
1 β-relaxation
0.8
(a) 0.6
(b)
α-relaxation
0.4 0.2 0 10-2
100
102
t
8.7.6 α-Relaxation

After the β-relaxation region, the supercooled liquid (σ < 0) enters the regime of the α-relaxation, where the correlation function decays to zero. In crossing from the β- to the α-relaxation, from the von Schweidler scaling we have

Φ(k, t) − f_k^c = −h_k \sqrt{|σ|}\, B\, \tilde t^{\,b} ;   (8.114)
now, by recalling the definition of t_σ in (8.110), t_σ = t_0 |σ|^{−1/(2a)}, we can write

\sqrt{|σ|}\left(\frac{t}{t_σ}\right)^{b} = \left(|σ|^{1/(2b)}\, |σ|^{1/(2a)}\, t/t_0\right)^{b} ,   (8.115)

and we can define an exponent γ,

γ = \frac{1}{2a} + \frac{1}{2b} ;   (8.116)

by taking the largest values of a and b, it results that γ > 1.766. The coefficient of h_k in Eq. (8.114) becomes

\sqrt{|σ|}\, B\, (t/t_σ)^{b} = B\left(|σ|^{γ}\, t/t_0\right)^{b} = \left(B^{1/b}\, |σ|^{γ}\, t/t_0\right)^{b} = (t/τ)^{b} ,   (8.117)

where

τ = B^{−1/b}\, t_0\, |σ|^{−γ} ;   (8.118)

we are above T_C, so |σ| ∝ (T − T_C)/T_C, and as a consequence
τ = C (T − T_C)^{−γ} .   (8.119)
The von Schweidler scaling form can be rewritten as

Φ(k, t/τ) = f_k^c − h_k \left(\frac{t}{τ}\right)^{b} ;   (8.120)
now τ becomes the relevant time for the evolution of the density fluctuations. In the α-relaxation region, a master function describes the decay of the density correlation with the stretched exponential formula

Φ(k, t) = f_k^c\, e^{−(t/τ)^{β}} ;   (8.121)
therefore, the KWW decay is recovered, in agreement with experimental and computer simulation evidence. Moreover, according to the MCT prediction, the relaxation time obtained from the KWW decay diverges as a power law in approaching T_C. In Fig. 8.18, the verification of the asymptotic behaviour of τ is reported for the Kob-Andersen LJBM. The best fit of τ at the lowest temperatures investigated shows the linear behaviour of 1/τ as a function of T − T_C in the log-log plot, in agreement with the MCT prediction of Eq. (8.119). Also the von Schweidler scaling is satisfied, as shown in Fig. 8.19.
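A fit of the type shown in Fig. 8.18 can be sketched in a few lines of Python with scipy.optimize.curve_fit; the (T, τ) values below are synthetic numbers, generated only to illustrate the procedure, and the initial guess and bounds are arbitrary choices.

```python
import numpy as np
from scipy.optimize import curve_fit

def mct_tau(T, C, Tc, gamma):
    """MCT power-law divergence of the relaxation time, Eq. (8.119)."""
    return C * (T - Tc) ** (-gamma)

# synthetic (T, tau) values, roughly following Eq. (8.119) with Tc ~ 0.43
T_data = np.array([0.6, 0.55, 0.5, 0.475, 0.466])
tau_data = np.array([84.0, 200.0, 770.0, 2330.0, 4060.0])

popt, _ = curve_fit(mct_tau, T_data, tau_data, p0=(1.0, 0.42, 2.0),
                    bounds=([0.0, 0.0, 0.5], [np.inf, 0.46, 6.0]))
C, Tc, gamma = popt
print(f"C = {C:.3g}   T_C = {Tc:.3f}   gamma = {gamma:.2f}")
```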
Fig. 8.18 Relaxation time of the A particles of the Kob-Andersen LJBM obtained from the fit of the α-relaxation of the SISF in Fig. 8.13. Left panel: inverse of the relaxation time τ as a function of T. Right panel: log-log plot of 1/τ as a function of ε = (T − T_C)/T_C. The plot shows the MCT prediction, Eq. (8.119), with γ = 2.5 and T_C = 0.432
Fig. 8.19 Test of the von Schweidler scaling for the A particles of the Kob-Andersen LJBM. The SISF are reported as a function of the scaled time t/τ(T) for temperatures from T = 5.0 down to T = 0.466. The broken red curve is the master curve fitted with Eq. (8.120) with an exponent b ≈ 0.51
The behaviour of the MSD upon supercooling, shown in Fig. 8.12, is interpreted by MCT in analogy with the theoretical approach for the density correlation function. The β-relaxation region corresponds to the region of the plateau, and from the Brownian regime observed in the long time limit it is possible to obtain the diffusion coefficient D from the limit ⟨r²⟩ → 6Dt. According to the Stokes-Einstein relation in the form (7.179), we expect D = const · T/τ; therefore, in approaching T_C, MCT predicts

D(T) ∼ (T − T_C)^{γ} .   (8.122)
The asymptotic limit D → 0 corresponds to a plateau of infinite length, when the Brownian regime becomes unreachable. In the Kob-Andersen LJBM, Eq. (8.122) is verified, but with T_C = 0.435, slightly different from the value found for the SISF. We mention that for this binary mixture the A and B particles show a very similar behaviour. The Kob-Andersen mixture, as said above, can be considered a prototype of a glass-forming fluid where MCT works very well. It has to be considered that the results shown until now are obtained with the ideal formulation of the theory. In the next chapter, we will consider the structural relaxation of supercooled water, and we will find that MCT works in a large range of temperatures; however, in approaching the deeply supercooled regime, MCT must be corrected by introducing the hopping effects neglected in the ideal formulation of the theory.
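In practice D is obtained from a linear fit of the long-time, Brownian part of the MSD, ⟨r²(t)⟩ ≈ 6Dt. A minimal Python sketch follows; the msd array is synthetic and stands in for data extracted from a simulation.

```python
import numpy as np

def diffusion_coefficient(t, msd, fit_fraction=0.5):
    """Estimate D from the long-time part of the MSD via <r^2> ~ 6 D t.
    Only the last `fit_fraction` of the trajectory is used for the fit."""
    n0 = int(len(t) * (1.0 - fit_fraction))
    slope, _ = np.polyfit(t[n0:], msd[n0:], 1)
    return slope / 6.0

# synthetic example: a diffusive MSD with D = 0.05 (arbitrary units)
t = np.linspace(1.0, 100.0, 200)
msd = 6.0 * 0.05 * t + 0.1 * np.random.default_rng(0).normal(size=t.size)
print(f"estimated D = {diffusion_coefficient(t, msd):.4f}")
```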
Chapter 9
Supercooled Water
Water is the most important fluid in nature. Its properties are of interest for different fields of research in physics, chemistry, biology and medicine. A large amount of experimental and theoretical work exists on pure water, water in solution and water in contact with different substrates. The water molecule has a very simple formula, with covalent bonds and a permanent dipole moment. Molecules of water in a fluid phase or in a crystal are connected by hydrogen bonds. The hydrogen bond network determines the crystalline structure of ice, and at melting the network is still present in the short-range order of the liquid phase. The hydrogen bond is at the origin of all the properties of water. From a more fundamental point of view, water is interesting since it shows a great number of anomalies in different portions of its phase diagram. The most well-known anomaly of water is that ice has a lower density than the coexisting liquid: the crystalline order makes it possible to arrange the molecules in a larger volume with respect to the liquid phase. A precursor effect of the approach to freezing is found in the anomalous behaviour of the density of liquid water. The liquid density increases with decreasing temperature, but around 4 °C at ambient pressure it reaches a maximum and then starts to decrease. By varying the pressure, it is possible to trace a curve of the temperatures of maximum density. The most interesting and intriguing properties of water are observed under extreme conditions of the phase diagram. Anomalies are found particularly at high temperatures in the supercritical state and, on the other side, when water is kept in the liquid state below the melting temperature. The supercooled region that can be explored in experiments, however, is limited by the difficulty of avoiding crystallization. In this respect, thanks to the possibility of very fast cooling rates, computer simulation has played a very important role in these studies. The behaviour of supercooled liquid water is the topic of a large amount of research, both experimental and theoretical. Here we will focus particularly on recent studies of supercooled liquid water in approaching the glassy state. To explain the anomalous behaviour in that region of the phase space, the hypothesis was formulated of a coexistence between two water
forms in the liquid, with a counterpart in the glassy states of amorphous ice. This idea opened a new possible interpretation of the phenomenology of water over the whole thermodynamic space.
9.1 Supercooled and Glassy Water

In a seminal paper, C. A. Angell reviewed the studies done in the previous decade about the phenomena taking place in supercooled liquid water [1]. It is particularly difficult to keep water liquid below its freezing temperature since water usually contains a number of impurities that are sufficient to induce crystallization. In spite of the experimental difficulties, these studies marked the beginning of an intense activity, both experimental and theoretical, on the supercooled state of water and its possible transition to a glassy state. In the bottom panel of Fig. 9.1, the anomalous behaviour of the density, with a maximum at 4 °C (277 K), is reported. The presence of a temperature of maximum density, T_M, implies that the coefficient of thermal expansion α_P(T) changes from positive to negative at the crossing of T_M. The behaviour of α_P(T) at ambient pressure is reported in the top panel of Fig. 9.1. By varying the pressure, it is possible to obtain a line of temperatures of maximum density T_M(p) in the phase space, coincident with the line where α_P changes sign. The change of sign of α_P(T) below T_M has important consequences. From statistical mechanics [2], it is found that the thermal expansion is related to the correlation of the fluctuations of volume and entropy

α_p = \frac{1}{V}\left(\frac{∂V}{∂T}\right)_p = \frac{⟨ΔS ΔV⟩}{k_B T V} .   (9.1)
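Since the entropy fluctuations entering Eq. (9.1) are not directly measurable, in practice α_p is usually obtained from the densities along an isobar as α_p = −(1/ρ)(∂ρ/∂T)_p. The following Python sketch applies finite differences to a few illustrative density values (not the experimental data of Fig. 9.1) and shows the change of sign near the temperature of maximum density.

```python
import numpy as np

def thermal_expansion(T, rho):
    """alpha_p = -(1/rho) d(rho)/dT along an isobar, by finite differences."""
    drho_dT = np.gradient(rho, T)
    return -drho_dT / rho

# illustrative densities (g/cm^3) around the density maximum near 277 K
T = np.array([250.0, 260.0, 270.0, 277.0, 285.0, 300.0, 320.0])
rho = np.array([0.989, 0.997, 0.9996, 1.000, 0.9994, 0.9965, 0.989])
for Ti, ai in zip(T, thermal_expansion(T, rho)):
    print(f"T = {Ti:5.1f} K   alpha_p = {ai: .2e} 1/K")  # sign change near T_M
```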
In most fluids an increase of volume is accompanied by an increase of entropy, while in water below T_M the fluctuations of volume and entropy are anticorrelated. An intuitive explanation lies in the fact that, in approaching the freezing temperature, the local hydrogen bond structure of water becomes more open and less dense but also more ordered, so entropy decreases with increasing volume. The isothermal compressibility K_T and the specific heat C_p also show an anomalous behaviour. While in other liquids K_T and C_p decrease below the freezing temperature, the experimental results for water, reported in Fig. 9.2, show that both these functions strongly increase upon supercooling. A first interpretation of the anomalous increase of the thermodynamic response functions was formulated by Speedy in 1982 [3]. The scenario involves the liquid portion of the gas-liquid spinodal, similar to the curve shown in Figs. 2.12 and 8.2 for the van der Waals transition. Following [4], we can start from the differential of the pressure
Fig. 9.1 Bottom panel: density of water as a function of temperature; top panel: coefficient of thermal expansion. Data from experiments at ambient pressure, from https://www1.lsbu.ac.uk/water/
Fig. 9.2 Experimental isothermal compressibility (left panel) and specific heat (right panel) as a function of temperature at ambient pressure. Data from https://www1.lsbu.ac.uk/water/
dp = \left(\frac{∂p}{∂T}\right)_{v} dT + \left(\frac{∂p}{∂v}\right)_{T} dv   (9.2)

where v is the specific volume. Along the spinodal, by definition, we have the Skripov relation

\left(\frac{∂p}{∂T}\right)_{sp} = \left(\frac{∂p}{∂T}\right)_{v} ;   (9.3)

by considering that

\left(\frac{∂p}{∂T}\right)_{v} = -\left(\frac{∂p}{∂v}\right)_{T}\left(\frac{∂v}{∂T}\right)_{p} ,   (9.4)

it can be easily understood that the signs of (∂p/∂T)_v and α_p coincide in approaching
the spinodal from the stability region, where (∂p/∂v)_T < 0 [4]. As a consequence, if the spinodal crosses the TMD line, it starts to increase at decreasing temperature. The scenario is represented in Fig. 9.3. We note that the spinodal starts from the critical point and decreases at decreasing T; the curve reaches the region of negative pressure. Negative pressures are present in some natural phenomena and can be obtained in experiments. A negative pressure is called a tension. It is easier to think of a solid under tension, but liquids can also be under tensile stress. The mechanism responsible for forcing sap to ascend in plants is connected to the presence of a negative pressure in the vascular tissue called xylem [5].
Fig. 9.3 Hypothetical scenario for a retracing spinodal in the p-T plane: the TMD line, the critical point (CP), the liquid spinodal with its extrapolated curve and the region of mechanical instability are indicated. TMD data from https://www1.lsbu.ac.uk/water/
Negative pressures are related to the phenomenon of cavitation in liquids [4, 6, 7]. Cavitation is characterized by the formation of vapour cavities (bubbles) in the liquid; at increasing negative pressure, the liquid could become unstable. The loss of tensile strength takes place when the mechanical stability condition is violated and

\left(\frac{∂p}{∂v}\right)_{T} = 0 .   (9.5)

This represents the limiting value for the ability of the fluid submitted to stress to resist without giving rise to cavitation [4]. The instability point coincides with the change of sign of α_p. Classical nucleation theory estimates for water a limiting tension of −190 MPa; experimental results are controversial, and accurate measurements found a limit of −120 MPa [7]. The retracing of the spinodal would have consequences on the thermodynamic behaviour of the response functions. According to the Speedy hypothesis, the spinodal, after crossing the TMD line, would increase at decreasing temperature, and it could induce an anomalous increase of K_T and of the specific heat at positive pressures and low enough temperatures. Subtle arguments showed possible inconsistencies of this scenario of a retracing spinodal [8], including the fact that at positive pressures the spinodal would encounter the solid phase. Nonetheless, this scenario has played a very important historical role, having focused the attention of the community working on water on the search for possible explanations of the water anomalies. Following this first scenario, several others were proposed over the years [9–11]. The most fascinating, and the one with the most indirect experimental evidence so far, is the scenario connected to the existence of a second, liquid-liquid critical point in the supercooled water region [12]. We will talk in detail about this scenario in the next section. The singularity-free scenario [13], on the other hand, shows that a large increase of the response functions in the presence of a negatively sloped TMD could occur without a low-temperature critical point. As explained in the previous chapter, the experimental study of supercooled liquids can be performed by rapid quenching below the melting temperature to avoid the crystallization process. By further decreasing the temperature, a transition to a glassy state can be induced. In water the quenching procedure is more difficult: the liquid must contain a very small amount of impurities, and the formation of small ice droplets must be avoided. With particularly refined techniques, water can be undercooled to −42 °C at ambient pressure [5]. By changing the pressure, it is possible to determine a line of homogeneous nucleation; this has been considered for many years the limit for performing experiments on supercooled water. It is not a rigorous thermodynamic limit, but it is determined by limitations in the experimental techniques, and new methods have succeeded in supercooling below the previously established nucleation line [14, 15]. On the other hand, amorphous ice states can be formed by rapid quenching from the vapour phase or with the use of very special techniques. The value of the glass transition temperature,
T_g, as defined in Sect. 8.3, has been determined as 136 K; this value, however, is controversial, and the value of 165 K has also been proposed [11]. Glassy water is found in two main forms, called low-density amorphous ice (LDA) [16, 17] and high-density amorphous ice (HDA) [18]. For this reason, it is usual to consider that water presents polyamorphism. LDA was the first amorphous state obtained, by quenching hot water at ambient pressure, while HDA was found by compressing Ih ice. In Fig. 9.4, a portion of the phase diagram of water well below the melting point is reported. The LDA and HDA states are separated by a coexistence line, and the transition between the two metastable states is a first-order transition. This transition was observed by Mishima et al. [19]; with a very refined technique, they were able to measure the transformation between the two amorphous states at constant ambient pressure. Successive experiments confirmed the first-order phase transition at different pressures [11, 20]. It was also found that LDA and HDA have slightly different T_g, as indicated in the figure. At the crossing of T_g, in both cases, the amorphous ice becomes a highly viscous liquid. Then, along the line T_X, a crystalline phase is formed. The figure also reports the line of homogeneous crystallization T_H. In the region between the lines T_H and T_X, it is extremely difficult to maintain water in its liquid state; for this reason, that region is indicated as no man's land. Experiments are also pushing the limits of this region up from the low-temperature side [21].
Fig. 9.4 Schematic portion of the phase diagram of water in the region below the melting point. The two glassy states, low-density amorphous (LDA) and high-density amorphous (HDA), are shown; they are separated by a first-order transition line. At the crossing of T_g1 for LDA and T_g2 for HDA, the glassy solids transform into viscous liquids. Along the line T_X, the viscous liquids crystallize into cubic ice. The line T_H is the limit of homogeneous crystallization. Since between T_H and T_X usually only crystalline water is found, this region has been called no man's land. The graph is based on data reported in the literature
Fig. 9.5 Radial distribution functions of amorphous ice in the two forms, LDA and HDA, as obtained with the neutron scattering technique at 80 K. Reprinted with permission from [22]. Copyright (2002) by the American Physical Society. https://doi.org/10.1103/PhysRevLett.88.225503
LDA and HDA are characterized by different local structures. In Fig. 9.5, it is evident that the difference is mainly in the g_OO(r). The two g_OO(r) differ particularly in the arrangement of the second shell, with a well-defined minimum in LDA, while in HDA the second peak is broader [22].
9.2 The Hypothesis of a Liquid-Liquid Critical Point

To explain the anomalies of supercooled water, a scenario was introduced by Poole, Sciortino, Essmann and Stanley in a paper published in 1992 [12]. The simulation was performed with the ST2 model. As shown in Fig. 9.6, the authors found that the TMD does not intersect the spinodal and bends, showing a nose-shaped behaviour. From an accurate analysis of the LDA-HDA coexistence, they extrapolated a coexistence curve above the region of the amorphous states. The metastable extension of a two-state coexistence in the no man's land implies that
Fig. 9.6 Isochores, TMD and liquid spinodal as calculated in simulation of water with the ST2 model. The broken curve is the TMD. The bottom bold curve is the spinodal. The isochores are from the top in g cm−3 : 0.60, 0.85, 0.90, 0.95, 1.00, 1.05 and 1.10. Reprinted with permission from [23]. Copyright (1993) by the American Physical Society. https://doi.org/10.1103/PhysRevE.48.3799
there are two coexisting liquid states derived from the LDA and HDA glasses. They can be indicated as low-density liquid (LDL) and high-density liquid (HDL). The main point is that in the supercooled liquid region the LDL-HDL coexistence terminates in a second-order liquid-liquid critical point (LLCP). The scenario is schematically represented in Fig. 9.7. The singular behaviour of the response functions upon supercooling is supposed to be determined by the approach to the LLCP. Figure 9.8 shows a schematic representation of the region of the liquid-liquid phase transition (LLPT) in supercooled water. We note the presence of the HDL and LDL spinodals. We have already discussed the meaning of the spinodal for the liquid-gas transition in the van der Waals theory; in this case, the spinodal is the limit of mechanical stability for the HDL (LDL) state inside the LDL (HDL) region. We note also the presence of the Widom line, already defined in Sect. 8.1 in connection with the liquid-gas critical point. It is the locus of the maxima of the correlation length extended in the single phase region [24]. All the maxima of the thermodynamic response functions, like the specific heat or the isothermal compressibility, collapse onto this line in approaching the critical point from the single phase region.
Fig. 9.7 Phase diagram of supercooled water in the region above the amorphous ice states (LDA and HDA). The hypothesis is that the LDA-HDA coexistence line can be extrapolated into a liquid-liquid coexistence line terminating in a liquid-liquid critical point (LLCP)
Fig. 9.8 Schematic scenario of the region of the liquid-liquid coexistence. The bold black line with negative slope is the coexistence curve between the LDL and HDL states. The red broken curves are the HDL spinodal and the LDL spinodal. The blue dot-dashed line is the Widom line
In spite of the difficulty of reaching the LLCP in the no man's land, it is expected that peculiar effects can be observed at the crossing of the coexistence line in the two-state region or of the Widom line in the single phase region [25]. The scenario of the LLCP has been investigated by experiments and computer simulations both on supercooled water and on glassy water. Particularly relevant were the experiments performed by Mishima on the melting of the glassy ices. On the basis of a deep analysis of a number of experiments and computer simulations, Mishima and Stanley [26, 27] deduced that at decreasing temperatures the LDL and HDL structures of water separate and that, under further supercooling, LDL and HDL transform respectively into the LDA and HDA states. The authors estimated the values T_c ≈ 220 K and p_c ≈ 100 MPa for the LLCP.
Fig. 9.9 Hypothesized form of an effective two-well potential according to [27]
Mishima and Stanley also proposed a simple physical interpretation of the possible liquid-liquid transition [27]. The idea is that the effective pair potential between two water molecules could have the form indicated in Fig. 9.9. At decreasing temperature and low enough pressure, the molecules feel the minimum at the larger distance r_2, and they condense into a low-density liquid state. At increasing pressure, the well at the smaller distance r_1 induces a change to the high-density liquid structure. Depending on the conditions of temperature and pressure, fluctuations between the two states are expected, with a coexistence line that is an extension of the first-order transition line of the LDA-HDA glassy states. The Jagla model, the most representative short-range monoatomic model mimicking the presence of two length scales in a potential, shows a phase diagram similar to that of water and a liquid-liquid critical point [28, 29]. It has been rigorously proved, through successive umbrella sampling and finite size scaling, that the Jagla liquid-liquid critical point is of second order and belongs to the same universality class as the Ising model [30]. The configurations of LDL and HDL water are locally different: the tetrahedral arrangement at low density could be an open structure with lower entropy, while HDL is expected to be characterized by a higher entropy and less empty space in the tetrahedral local order. New experiments found evidence of a first-order transition between the LDL and HDL states [31, 65].
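The two-length-scale idea of Fig. 9.9 can be visualized with a simple analytic form. The sketch below is only a cartoon, not the actual effective potential of [27] nor the original Jagla ramp: it combines a soft repulsive core with two wells placed at hypothetical distances r1 and r2, the outer (LDL-like) well being the deeper one, and all parameters are invented for illustration.

```python
import numpy as np

def two_well_potential(r, r1=0.28, r2=0.34, e1=0.5, e2=1.0,
                       sigma=0.25, width=0.02):
    """Cartoon of an effective two-scale pair potential (arbitrary units):
    a soft repulsive core plus two attractive wells at r1 (HDL-like)
    and r2 (LDL-like, deeper). Distances are illustrative, in nm."""
    core = (sigma / r) ** 12
    well1 = -e1 * np.exp(-((r - r1) / width) ** 2)
    well2 = -e2 * np.exp(-((r - r2) / width) ** 2)
    return core + well1 + well2

r = np.linspace(0.24, 0.45, 8)
for ri, ui in zip(r, two_well_potential(r)):
    print(f"r = {ri:.3f} nm   U = {ui: .3f}")
```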
9.3 The Widom Line at the Liquid-Liquid Transition

The LLPT was found in a number of computer simulations of supercooled water performed with different models. The LLCP was generally located through direct calculations of the isochores and isotherms, and the maxima of the specific heat or of the isothermal compressibility were determined in the single phase region to locate the Widom line. Figure 9.10 shows the phase diagram obtained in the simulation of supercooled water with the TIP4P/2005 potential [32]. The Widom line is located by looking at the maxima of the isothermal compressibility obtained from the volume fluctuations in the simulation at constant pressure.
Fig. 9.10 Phase diagram of the TIP4P/2005 water model in the region of supercooled water. The Widom line terminates in the LLCP (full blue dot) [32]. The computed TMD line is also reported and compared with the experimental one. The full black dot is located at the LLCP estimated by Mishima and Stanley [26, 27]. The black broken line is a virtual LDL/HDL coexistence drawn as a hypothesized extension of the Widom line
Fig. 9.11 Phase diagram of the TIP4P water model in the region of supercooled water, in the density versus temperature plane [33]. The TMD curve and the liquid spinodal are reported to show the absence of any crossing between them. The Widom line is determined from the maxima of the specific heat
The LLCP is determined by an accurate analysis of the isotherms [32]. In Fig. 9.11, the results obtained in a simulation with the TIP4P model are reported in the density versus temperature plane [33].
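The procedure described above can be sketched as follows: for each temperature of an isobar, K_T is computed from the volume fluctuations of a constant-pressure run as K_T = ⟨ΔV²⟩/(k_B T ⟨V⟩), and the temperature of the K_T maximum marks the crossing of the Widom line. The volume samples below are random placeholders standing in for the output of NpT simulations.

```python
import numpy as np

KB = 1.380649e-23  # J/K

def isothermal_compressibility(volumes_m3, T):
    """K_T = <dV^2> / (k_B T <V>) from NpT volume samples (SI units)."""
    v = np.asarray(volumes_m3)
    return v.var() / (KB * T * v.mean())

def widom_temperature(temperatures, volume_samples):
    """Return the temperature of the K_T maximum along the isobar."""
    kt = [isothermal_compressibility(v, T)
          for T, v in zip(temperatures, volume_samples)]
    return temperatures[int(np.argmax(kt))], kt

# placeholder volume trajectories (m^3) for a few temperatures of one isobar
rng = np.random.default_rng(1)
temperatures = np.array([200.0, 210.0, 220.0, 230.0, 240.0])
spread = np.array([1.0, 1.5, 2.5, 1.8, 1.2]) * 1e-28   # widest fluctuations at 220 K
volume_samples = [rng.normal(3.0e-26, s, size=5000) for s in spread]

T_widom, kt = widom_temperature(temperatures, volume_samples)
print("K_T maximum (Widom line crossing) at T =", T_widom, "K")
```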
Fig. 9.12 Computer simulation of ST2 water. Contour plots of the free energy for T = 240 K at three different pressures: (a) p = 195 MPa, (b) p = 204.5 MPa and (c) p = 230 MPa. Reprinted with permission from [34]. Copyright (2013) by the American Institute of Physics
The results obtained with different computer simulations raised a very large debate about the ability of water models to reproduce the real water behaviour [11]. For this reason, a great effort was made to study the LLPT with the use of the more refined methods of calculation of the free energy discussed in Sect. 5.8. The presence of the LLPT in ST2 water was found in a Monte Carlo study with the use of the histogram reweighting technique [66]. More recently, the free energy was studied as a function of two order parameters: the usual density and a bond-orientational order parameter Q6 that indicates the degree of crystalline order. The studies of the free energy performed on the ST2 model confirmed the existence of the LLCP [34–37, 67]. As an example, Fig. 9.12 shows the free energies calculated by Poole et al. [34]: the minima of the free energy at T = 240 K are evidenced for increasing pressure. At the lowest pressure LDL is the stable state, at p = 204.5 MPa there is HDL/LDL coexistence, and finally at p = 230 MPa HDL becomes the stable state. A review of the state of the art concerning the liquid-liquid transition and the LLCP can be found in [37].
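The bond-orientational order parameter Q6 mentioned above is, in the standard Steinhardt definition, Q_l = [4π/(2l+1) Σ_m |⟨Y_lm⟩|²]^{1/2}, with the average taken over bond directions. The sketch below is a generic implementation of this definition, not the specific global order parameter used in the ST2 free-energy studies; the random bond vectors are placeholders for bonds extracted from a configuration.

```python
import numpy as np
from scipy.special import sph_harm

def steinhardt_q(bonds, l=6):
    """Steinhardt order parameter Q_l from an (N, 3) array of bond vectors."""
    b = np.asarray(bonds, dtype=float)
    b /= np.linalg.norm(b, axis=1, keepdims=True)
    theta = np.arctan2(b[:, 1], b[:, 0]) % (2 * np.pi)   # azimuthal angle
    phi = np.arccos(np.clip(b[:, 2], -1.0, 1.0))         # polar angle
    q2 = 0.0
    for m in range(-l, l + 1):
        # scipy convention: sph_harm(m, l, azimuthal, polar)
        ylm = sph_harm(m, l, theta, phi)
        q2 += abs(ylm.mean()) ** 2
    return np.sqrt(4 * np.pi / (2 * l + 1) * q2)

# placeholder: random bond directions give a small Q6 (disordered liquid)
rng = np.random.default_rng(2)
bonds = rng.normal(size=(400, 3))
print(f"Q6 = {steinhardt_q(bonds):.3f}")
```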
9.4 Water as a Two-Component Liquid

The simple model for the two states of water can be considered the prototype for interpreting the anomalies of water. As said above, the LDL cluster is related to a low-entropy and low-energy configuration, while the HDL-like structure corresponds to a higher-entropy and higher-energy configuration. LDL corresponds to a local
tetrahedral structure similar to the one observed in ice. HDL is characterized by a collapse of the tetrahedral structure, allowing for a higher density with some distortions of the H-bonding in the first coordination shell. The LDL structure is mostly driven by the strong directional H-bonding, while the HDL structure is more due to the isotropic part of the interaction. The next step is to assume that water is a mixture of the two possible realizations of the tetrahedral local order and that its thermodynamics is the result, at given external conditions, of an equilibrium between the two components. This two-state model has been formulated by different authors; see, for example, [38–40, 68]. One approach is based on the idea that the system is a mixture of two competing states, with LDL or HDL prevailing depending on the values of pressure and temperature. Another formulation assumes an analogy with liquid binary mixtures: the behaviour of the non-ideal mixture is dominated by the excess free energy, with an interplay between energy and entropy, and an entropy-driven unmixing determines a phase transition with a critical point. The main results of the analysis performed by Holten and Anisimov [40] are reported in Fig. 9.13. A particularly important parameter to determine the degree of short-range order in water was introduced by Shiratani and Sasai [41, 42]; it is called the local structure index (LSI). The LSI quantifies the distribution of the distances between a reference molecule and the molecules in the surrounding shells. High values of the LSI imply a higher tetrahedral local order, so they are the signature of an LDL component; low values of the LSI instead are connected to a more disordered, HDL-like local structure. Calculations of the LSI were done by quenching the structure of SPC/E water to obtain the inherent structure [43]. A similar calculation has been performed on TIP4P/2005 water [44]. A review of computer simulation and experimental results is presented in [45]. The signatures of the LDL/HDL states in water have been evidenced in a number of experiments [45–48]. However, notwithstanding the large experimental effort devoted to verifying the predictions of theoretical approaches and computer simulations, there is no definitive experimental evidence yet [49], and controversial interpretations of some of the results are still present in the literature; see, for instance, [50].
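In the definition of Shiratani and Sasai, the LSI of a molecule is the variance of the gaps between its consecutive ordered oxygen-oxygen distances r_1 < r_2 < … up to the first neighbour beyond a 3.7 Å cutoff. The following sketch implements this definition for a single molecule; the two lists of distances are invented and only meant to contrast an open, LDL-like environment with a more disordered, HDL-like one.

```python
import numpy as np

def local_structure_index(distances, cutoff=3.7):
    """LSI of one molecule: variance of the gaps between consecutive
    ordered O-O distances r_1 < ... < r_n < cutoff < r_{n+1} (angstrom)."""
    r = np.sort(np.asarray(distances, dtype=float))
    n = np.searchsorted(r, cutoff)   # number of neighbours within the cutoff
    if n < 1 or n >= len(r):
        return 0.0                   # need at least one distance beyond the cutoff
    gaps = np.diff(r[:n + 1])        # includes the gap across the cutoff
    return float(np.mean((gaps - gaps.mean()) ** 2))

# invented O-O distances (angstrom) around one molecule
d_ldl_like = [2.75, 2.78, 2.80, 2.82, 4.40, 4.45]        # open second shell -> high LSI
d_hdl_like = [2.75, 2.78, 2.80, 2.82, 3.30, 3.55, 4.10]  # interstitial molecules -> low LSI
print(local_structure_index(d_ldl_like), local_structure_index(d_hdl_like))
```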
9.5 Dynamical Properties of Water Upon Supercooling

In pioneering experimental investigations on water below the melting temperature, an anomalous increase of thermodynamic response functions, like the isothermal compressibility, was found. Speedy and Angell [51] hypothesized a power law behaviour

K_T ∼ \left(\frac{T - T_s}{T_s}\right)^{-x}   (9.6)
with a singular temperature T_s ≈ −45 °C and 0.024 < x < 0.35. Transport properties also show anomalies: the diffusion coefficient D goes to zero and the viscosity diverges, in both cases with a power law in approaching T_s.
Fig. 9.13 Isobars of supercooled water in the density-temperature plane as reported in the original paper by Holten and Anisimov [40]; the symbols reproduce experimental data. The blue line is the liquid-liquid coexistence line with the LLCP indicated as C. The other main predictions of the model are (i) the isobars (black curves), (ii) the melting temperature indicated as TM (dark red line), (iii) the temperature TH (broken red line) and (iv) the line of the maximum density (red line). The green line is the line of constant fraction (around 0.12) of the LDL component. Reprinted with permission from [40]. Copyright (2012) by the Springer Publishing Company
The analogies of these singular behaviours with the predictions of MCT stimulated theoretical studies of the dynamics of supercooled water. The first complete study of the dynamics of supercooled water and its interpretation in terms of MCT was performed by Gallo, Sciortino, Tartaglia and Chen with the use of the SPC/E model [52, 53]. Figure 9.14 shows the SISF of the oxygens for the wave vector corresponding to the maximum of the O–O structure factor. The results are for the isobar p = −80 MPa, corresponding in SPC/E water to a density around 1 g/cm³. The double relaxation regime, with the plateau of the β-region and the α-decay at long times, is evident, and a good agreement with the MCT predictions is found. The authors introduced a very successful fitting formula
Fig. 9.14 SISF of oxygens in water, simulated with the SPC/E model. The temperatures are 284.5 K, 258.5 K, 238.2 K, 224.0 K, 213.6 K, 209.3 K and 206.3 K. The SISF is calculated at the wave vector corresponding to the maximum of the oxygen structure factor. The fitting is performed with the function (9.7). In the inset, the authors report the exponent β as a function of T. Reprinted with permission from [52]. Copyright (2013) by the American Physical Society. https://doi.org/10.1103/PhysRevLett.76.2730
able to describe the behaviour of the SISF in the whole range from the ballistic regime through the β-relaxation zone to the α-relaxation:

F_s(k, t) = \left[1 - A(k)\right] e^{-(t/τ_s)^2} + A(k)\, e^{-(t/τ)^{β}} .   (9.7)
The last term represents the KWW decay, and the first term instead describes the initial decay determined by the short relaxation time τ_s. The coefficient A(k) is the Lamb-Mössbauer factor; from Eq. (8.85),

A(k) = e^{-k^2 r_c^2/3}   (9.8)
where r_c is the mean radius of the cage. The results of the fitting procedure are shown in the same figure. The α-relaxation time τ was found to follow the MCT prediction

τ = C (T − T_C)^{−γ} ,   (9.9)

and the asymptotic values of T_C turn out to be in a range close to the singular temperature T_s of the experiments, pointing to a close connection between dynamics and thermodynamics. During the simulations the diffusion was also studied. The diffusion coefficient D extracted from the MSD was found to follow the theoretical prediction D ∼ (T − T_C)^γ. As in the case of the LJBM discussed in the previous chapter, the extracted values of T_C and γ are slightly different from the values obtained for the relaxation time.
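The fitting procedure with Eq. (9.7) can be sketched with scipy.optimize.curve_fit as follows; the SISF data are synthetic, generated from the model itself with a small amount of noise, and the initial guess and bounds are arbitrary choices.

```python
import numpy as np
from scipy.optimize import curve_fit

def fs_model(t, A, tau_s, tau, beta):
    """Eq. (9.7): short-time Gaussian decay plus KWW alpha-relaxation."""
    return (1.0 - A) * np.exp(-(t / tau_s) ** 2) + A * np.exp(-(t / tau) ** beta)

# synthetic SISF with known parameters, standing in for simulation data
t = np.logspace(-2, 4, 120)
fs = fs_model(t, 0.8, 0.3, 800.0, 0.6)
fs = fs + 0.003 * np.random.default_rng(3).normal(size=t.size)

p0 = (0.7, 0.5, 500.0, 0.7)
bounds = ([0.0, 1e-3, 1.0, 0.2], [1.0, 10.0, 1e6, 1.0])
(A, tau_s, tau, beta), _ = curve_fit(fs_model, t, fs, p0=p0, bounds=bounds)
print(f"A = {A:.2f}  tau_s = {tau_s:.2f}  tau = {tau:.0f}  beta = {beta:.2f}")
```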
This preliminary study showed that, in interpreting the anomalies of supercooled water, the description in terms of the cage effect and the singularity predicted by MCT must be taken into account. The MCT scenario for the structural relaxation of supercooled water has been confirmed by light scattering experiments [54]. Further simulation studies of the glassy dynamics have been carried out with different water models, in connection with the characterization of the phase diagram of water in the range from the melting region to the glassy states, as discussed in the previous sections [55, 56]. As an example, we consider in more detail the case of the TIP4P/2005 model. We have already shown the SISF in Fig. 8.14 in the previous chapter. Figure 9.15 reports the results of the fitting procedure done with the use of Eq. (9.7) [56]. From the fit, the relaxation time can be derived; it is shown in Fig. 9.16 as a red curve. In the supercooled region down to 210 K, the MCT predictions are verified, in particular the von Schweidler scaling, as shown in Fig. 9.17. As shown in Fig. 9.16, the simulation results for density ρ = 1.00 g/cm³ deviate from the MCT behaviour at around 210 K. In the inset of Fig. 9.15, it is evident that at this temperature also the behaviour of the exponent β deviates from a monotonic decrease. Below 210 K, the relaxation time can be fitted with the Arrhenius formula (8.2) introduced for the viscosity,

τ = τ_0\, e^{E_A/k_B T} .   (9.10)
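The activation energy E_A of the strong, Arrhenius regime can be extracted from a linear fit of ln τ versus 1/T, as in the following sketch; the low-temperature relaxation times are invented numbers used only to show the procedure.

```python
import numpy as np

def arrhenius_fit(T, tau, kB=0.0083145):
    """Fit ln(tau) = ln(tau0) + E_A/(kB T); kB in kJ/(mol K) by default."""
    slope, intercept = np.polyfit(1.0 / np.asarray(T), np.log(tau), 1)
    return np.exp(intercept), slope * kB   # tau0, E_A (kJ/mol)

# invented low-temperature points (K, ps) below the crossover
T = np.array([210.0, 205.0, 200.0, 195.0])
tau = np.array([2.0e3, 4.5e3, 1.1e4, 3.0e4])
tau0, EA = arrhenius_fit(T, tau)
print(f"tau0 = {tau0:.3g} ps   E_A = {EA:.1f} kJ/mol")
```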
Fig. 9.15 SISF of oxygens in water, simulated with the TIP4P/2005 model, already shown in Fig. 8.14. The function is calculated at the position k0 of the maximum of S_OO(k) at temperatures 300 K, 280 K, 260 K, 250 K, 240 K, 230 K, 220 K, 210 K, 200 K and 195 K. The red broken curves are the fits obtained with Eq. (9.7). In the inset, the exponent β is shown as a function of T. See [56]
Fig. 9.16 Relaxation time of supercooled water simulated with the TIP4P/2005 model. The continuous red line is the MCT fit with formula (9.9). In the main panel, at density ρ = 1.00 g/cm³ (TC = 190.8 K), the points at the lowest temperatures are fitted with the Arrhenius formula (9.10). In the inset, at density ρ = 1.03 g/cm³ (TC = 179.7 K), the MCT fit is valid over the whole range of temperatures
Fig. 9.17 Test of the von Schweidler scaling for the SISF of TIP4P/2005 water at ρ = 1.00 g/cm³ reported in Fig. 9.15. The scaling works in the range of temperatures from T = 300 K down to T = 210 K
The change of behaviour is determined by the presence of hopping effects, as evidenced by analysing the self Van Hove correlation functions [57]. From the point of view of MCT, as said before, the basic formulation of the theory neglects hopping effects; extensions of the theory have been proposed to include such effects, and we refer to the literature [58]. In the inset of Fig. 9.16, it is evident that for the density ρ = 1.03 g/cm³ the relaxation time can be fitted with MCT down to the lowest temperatures explored. This density is above the density of the LLCP of TIP4P/2005, ρ = 1.012 g/cm³ [32]; we will come back to this in the next section.
9.6 Widom Line and the Fragile to Strong Crossover

The change from the MCT power law behaviour to the Arrhenius exponential is the signature that, on approaching the glass transition, water shows a crossover from a fragile to a strong behaviour [11, 55] according to the Angell classification. The possibility of a fragile to strong crossover (FSC) in water was explored by Angell in 1993 [59], and later the FSC was hypothesized to take place around 228 K [60]. The FSC was found both in computer simulations [61] and in experiments on bulk water [62]. This crossover has also been found in experimental and simulation studies on confined water [69–71]. More recently, the FSC has been connected to the presence of the Widom line [11, 55]. In this interpretation, the FSC occurs at the crossing of the Widom line emanating from the LLCP.
Fig. 9.18 The FSC points calculated from the dynamics of TIP4P/2005 water follow the Widom line emanating from the LLCP in the pressure versus temperature plane [56]. In the inset, the case of the TIP4P potential is represented in the density versus temperature plane [33]
Fig. 9.19 On the left, a schematic scenario of a fragile to strong crossover (FSC) in the region of the liquid-liquid coexistence. The bold black line with negative slope is the coexistence curve between the LDL and HDL states. The red broken curves are the HDL spinodal and the LDL spinodal. The blue dot-dashed line is the Widom line. The arrows a and b indicate two possible paths at constant pressure upon cooling. While the crossing of the Widom line along path a implies the FSC, along path b the system remains in the fragile state. The insets on the right show the corresponding behaviour of the relaxation time, plotted as a function of T − TC: the lower inset refers to path a (ρ = 1.00 g/cm³, TC = 190.8 K) and the upper inset to path b (ρ = 1.03 g/cm³, TC = 179.7 K)
Figure 9.18 reports the results for water simulated with the TIP4P/2005 and with the TIP4P potentials. Also for these potentials, it is confirmed that the FSC takes place along the Widom line. To further explain the role of the Widom line, we show in Fig. 9.19 a schematic representation of the phase diagram close to the LLCP. In the single phase region not far from the critical point, changes in the type of fluctuations can be observed, as in the case of path a in Fig. 9.19. Even in the one-phase region, close enough to the critical point, the Widom line separates the thermodynamic space, so that along path a the system goes from an HDL-like region to an LDL-like region. The relaxation time shows the behaviour reported in the lower right inset, corresponding to the main panel of Fig. 9.16. Along path b, there is no evidence of an FSC down to the lowest temperature at which it was possible to supercool with molecular dynamics; the final state is inside an HDL region delimited by a spinodal. We mention that a similar phenomenon, with a change in the dynamical properties of water, is found at the crossing of the Widom line of the liquid-gas transition [63, 64].
References 1. Angell, C.A.: Ann. Rev. Phys. Chem. 34, 593 (1983) 2. Landau, L.D., Lifshitz, E.M.: Statistical Physics. Elsevier, London (2013) 3. Speedy, R.J.: J. Phys. Chem. 86, 3002 (1982) 4. Debenedetti, P.G., D’Antonio, M.C.: J. Chem. Phys. 84, 3339 (1986) 5. Debenedetti, P.G.: Metastable Liquids. Princeton University Press, Princeton, NJ, USA (1996) 6. Caupin, F.: Phys. Rev. E 71, 051605 (2005) 7. Azouzi, M.E.M., Ramboz, C., Lenain, J.F., Caupin, F.: Nat. Phys. 9, 38 (2013) 8. Debenedetti, P.G.: J. Phys. Condens. Matter 15, R1669 (2003) 9. Handle, P., Loerting, T., Sciortino, F.: Proc. Natl. Acad. Sci. USA 114, 1336 (2017) 10. Hestand, N., Skinner, J.: J. Chem. Phys. 149, 140901 (2018) 11. Gallo, P., Amann-Winkel, K., Angell, C.A., Anisimov, M.A., Caupin, F., Chakravarty, C., Lascaris, E., Loerting, T., Panagiotopoulos, A.Z., Russo, J., Sellberg, J.A., Stanley, H.E., Tanaka, H., Vega, C., Xu, L., Pettersson, L.G.M.: Chemical Reviews 116, 7463 (2016) 12. Poole, P.H., Sciortino, F., Essmann, U., Stanley, H.E.: Nature 360, 324 (1992) 13. Sastry, S., Debenedetti, P.G., Sciortino, F., Stanley, H.E.: Phys. Rev. E 53, 6144 (1996) 14. Kim, K.H., Späh, A., Pathak, H., Perakis, F., Mariedahl, D., Amann-Winkel, K., Sellberg, J.A., Lee, J.H., Kim, S., Park, J., Nam, K.H., Katayama, T., Nilsson, A.: Science 358, 1589 (2017) 15. Gallo, P., Stanley, H.E.: Science 358, 6370 (2017) 16. Burton, E.F., Oliver, W.F.: Nature 135, 505 (1935) 17. Mayer, E., Brüggeller, P.: Nature 298, 715 (1982) 18. Mishima, O., Calvert, L.D., Whalley, E.: Nature 310, 393 (1984) 19. Mishima, O., Calvert, L.D., Whalley, E.: Nature 314, 76 (1985) 20. Winkel, K., Mayer, E., Loerting, T.: J. Phys. Chem B 115, 14141 (2011) 21. Stern, J.N., Seidl-Nigsch, M., Loerting, T.: Proc. Natl. Acad. Sci. USA 116, 9191 (2019) 22. Finney, J.L., Hallbrucker, A., Kohl, I., Soper, A.K., Bowron, D.T.: Phys. Rev. Lett. 88, 225503 (2002) 23. Poole, P.H., Sciortino, F., Essmann, U., Stanley, H.E.: Phys. Rev. E 48, 3799 (1993) 24. McMillan, P.F., Stanley, H.E.: Nature Phys. 6, 479 (2010) 25. Franzese, G., Stanley, H.E.: J. Phys. Condens. Matter 19, 205126 (2007) 26. Mishima, O., Stanley, H.E.: Nature 392, 164 (1998) 27. Mishima, O., Stanley, H.E.: Nature 396, 329 (1998) 28. Xu, L., Buldyrev, S.V., Angell, C.A., Stanley, H.E.: Phys. Rev. E 74, 031108 (2006) 29. Xu, L., Giovambattista, N., Buldyrev, S.V., Denedetti, P.G., Stanley, H.E.: J. Chem. Phys. 134, 064507 (2011) 30. Gallo, P., Sciortino, F.: Phys. Rev. Lett. 109, 177801 (2012) 31. Woutersen, S., Ensing, B., Hilbers, M., Zhao, Z., Angell, C.A.: Science 359, 1127 (2018) 32. Abascal, J.L.F., Vega, C.: J. Chem. Phys. 133, 234502 (2010) 33. Gallo, P., Rovere, M.: J. Chem. Phys. 137, 164503 (2012) 34. Poole, P.H., Bowles, R.K., Saika-Voivod, I., Sciortino, F.: J. Chem. Phys. 138, 034505 (2013) 35. Sciortino, F., Saika-Voivod, I., Poole, P.H.: Phys. Chem. Chem. Phys. 13, 19759 (2011) 36. Palmer, J.C., Martelli, F., Liu, Y., Car, R., Panagiotopoulos, A.Z., Debenedetti, P.G.: Nature 510, 385 (2014) 37. Palmer, J.C., Poole, P.H., Sciortino, F., Debenedetti, P.G.: Chemical Review 118, 9129 (2018) 38. Tanaka, H.: J. Chem. Phys. 112, 799 (2000) 39. Russo, J., Tanaka, H.: Nat. Commun. 5, 3556 (2014) 40. Holten, V., Anisimov, M.A.: Sci. Rep. 2, 713 (2012) 41. Shiratani, E., Sasai, M.: J. Chem. Phys. 108, 3264 (1998) 42. Shiratani, E., Sasai, M.: J. Chem. Phys. 104, 7671 (1996) 43. Appignanesi, G.A., Rodriguez, F.J.A., Sciortino, F.: Eur. Phys. J. E 29, 305 (2009) 44. 
Wikfeldt, K.T., Nilsson, A., Pettersson, L.G.M.: Phys. Chem. Chem. Phys. 13, 19918 (2011) 45. Nilsson, A., Pettersson, L.: Nat. Commun. 6, 8998 (2015)
46. Nilsson, A., Pettersson, L.: Chem. Phys. 389, 1 (2011) 47. Huang, C., Wikfeldt, K.T., Tokushima, T., Nordlund, D., Harada, Y., Bergmann, U., Niebuhr, M., Weiss, T.M., Horikawa, Y., Leetmaa, M., Ljungberg, M.P., Takahashi, O., Lenz, A., Ojamae, L., Lyubartsev, A.P., Shin, S., Nilsson, L.G.M.P.A.: Proc. Natl. Acad. Sci. USA 106, 15214 (2009) 48. Mallamace, F., Branca, C., Broccio, M., Corsaro, C., Mou, C.Y., Chen, S.H.: Proc. Natl. Acad. Sci. USA 104, 18387 (2007) 49. Debenedetti, P.G., Sciortino, F., Zerze, G.H.: Science 369, 289 (2020) 50. Sedlmeier, F., Horinek, D., Netz, R.R.: J. Am. Chem. Soc. 133, 1391 (2011) 51. Speedy, R.J., Angell, C.A.: J. Chem. Phys. 65, 851 (1976) 52. Gallo, P., Sciortino, F., Tartaglia, P., Chen, S.H.: Phys. Rev. Lett. 76, 2730 (1996) 53. Sciortino, F., Gallo, P., Tartaglia, P., Chen, S.H.: Phys. Rev. E 54, 6331 (1996) 54. Torre, R., Bartolini, P., Righini, R.: Nature 428, 296 (2004) 55. Xu, L., Kumar, P., Buldyrev, S.V., Chen, S.H., Poole, P.H., Sciortino, F., Stanley, H.E.: Proc. Natl. Acad. Sci. USA 102, 16558 (2005) 56. De Marzio, M., Camisasca, G., Rovere, M., Gallo, P.: J. Chem. Phys. 144, 074503 (2016) 57. De Marzio, M., Camisasca, G., Rovere, M., Gallo, P.: J. Chem. Phys. 146, 084502 (2017) 58. Götze, W.: Complex Dynamics of Glass-Forming Liquids: A Mode-Coupling Theory. Oxford University Press, Oxford (2009) 59. Angell, C.A.: J. Phys. Chem. 97, 6339 (1993) 60. Kaori, I., Moynihan, C.T., Angell, C.A.: Nature 398, 492 (1999) 61. Starr, F.W., Sciortino, F., Stanley, H.E.: Phys. Rev. E 60, 6757 (1999) 62. Xu, Y., Petrik, N.G., Smith, R.S., Kay, B.D., Kimmel, G.A.: Proc. Natl. Acad. Sci. USA 113, 52 (2016) 63. Gallo, P., Corradini, D., Rovere, M.: Nat. Commun. 5, 5806 (2014) 64. Corradini, D., Rovere, M., Gallo, P.: J. Chem. Phys. 143, 114502 (2015) 65. Kim, K.H., Amann-Winkel, K., Giovambattista, N., Späh, A., Perakis, F., Pathak, H., Parada, M.L., Yang, C., Mariedahl, D., Eklund, T., Lane, T.J., You, S., Jeong, S., Weston, M., Lee, J.H., Eom, I., Kim, M., Park, J., Chun, S.H., Poole, P.H., Nilsson, A.: Experimental observation of the liquid-liquid transition in bulk supercooled water under pressure. Science. 370(6519), 978– 982 (2020) 66. Liu, Y., Panagiotopoulos, A.Z., Debenedetti, P.G.: Low-temperature fluid-phase behavior of ST2 water. J. Chem. Phys. 131(10), 104508 (2009) 67. Liu, Y., Palmer, J.C., Panagiotopoulos, A.Z., Debenedetti, P.G.: Liquid-liquid transition in ST2 water. J. Chem. Phys. 137(21), 214505 (2012) 68. Holten, V., Palmer, J.C., Poole, P.H., Debenedetti, P.G., Anisimov, M.A.: Two-state thermodynamics of the ST2 model for supercooled water. J. Chem. Phys. 140(10), 104502 (2014) 69. Faraone, A., Liu, L., Mou, C.-Y., Yen, C.-W., Chen, S.-H.: Fragile-to-strong liquid transition in deeply supercooled confined water. J. Chem. Phys. 121(22), 10843–10846 (2004) 70. Liu, L., Chen, S.-H., Faraone, A., Yen, C.-W., Mou, C.-Y.: Pressure dependence of fragile-tostrong transition and a possible second critical point in Supercooled confined water. Phys. Rev. Lett. American Physical Society. 95(11), 117802 (2005) 71. Gallo, P., Rovere, M., Chen, S.-H.: Dynamic crossover in Supercooled confined water: understanding bulk properties through confinement. J. Phys. Chem. Lett. 1(4), 729–733 (2010)
Index
A Ab initio methods, 13 Adam and Gibbs (AG) theory configurational entropy calculation, 276–277 cooperative rearranging regions, 274–276 Adiabatic compressibility, 29 AG theory, see Adam and Gibbs (AG) theory Andersen method, 166 Angell plot, 270–272
B Berendsen algorithm, 141 Berendsen thermostat, 142 Binary mixtures phase diagram of, 5–8 Biological systems, 16 Biomolecules, 18–20 Born theory, 76, 77 Brownian motion, 222–224, 233, 236
C Cage effect, 238, 248, 284–285, 292, 316 Canonical ensemble, 52–54 distribution functions in, 65–66 virial expansion in, 95–99 Carbon dioxide phase diagram of, 4 Carnahan and Starling-PY equation of state (CS-EOS), 119, 120 Chemical potential, calculation of, 177–179 Classical density functional theory, 104–112 Clausius-Clapeyron equation, 41
Coexistence and phase transitions, 38 Colloids, 16–18 models for, 126–128 Complex energy landscape sampling in, 179 Computer simulation methods, 13 of critical phenomena, 189–192 equation of state, direct calculations of, 173–174 free energy calculation, thermodynamic integration, 174–176 microscopic models for water, 168–172 molecular dynamics (see Molecular dynamics (MD) methods) molecular liquids, 167–168 Monte Carlo simulation method (see Monte Carlo (MC) simulation method) Configurational entropy calculation, 276–277 energy landscape, 278–280 relaxation time, 274 Correlation functions, 8–9 consequence, 207 currents, 229–231 density, 104 equilibrium average, 196 MCT equations, 293 memory function, 260–263 non-interacting particles, 225 potential energy, 143 splitting, 126 velocity, 236–239 Coulomb energy, 148 Coupling theory α-relaxation, 297–299
© Springer Nature Switzerland AG 2021 P. Gallo, M. Rovere, Physics of Liquid Matter, Soft and Biological Matter, https://doi.org/10.1007/978-3-030-68349-8
323
324 Coupling theory (cont.) β-relaxation, 295–297 and cage effect, 284–285 dynamics upon supercooling, 280–283 formulation, 285–291 glass transition, 291–294 MCT, 283 Critical opalescence phenomenon, 3 Critical point (CP), 2–3
D Density distributions from grand partition function, 101–103 Density fluctuations and dissipation detailed balance, 217–218 static limit, 218 response function, 218–219 Density functional theory (DFT) classical many body theory, 114 closure relations from, 112–114 equilibrium conditions, 105–106 free energy calculation, 109–110 Helmholtz free energy, 105 homogeneous system, expansion from, 110–112 intrinsic free energy, 105 for quantum fluids, 104–112 single particle direct correlation function, 106 DFT, see Density functional theory (DFT) Diffusion density correlation effects, 222 hydrodynamic limit, 232–236 longitudinal current, 246–248 neutron, 75 and self Van Hove Function, 225 Dilute gas, 225–226 Disordered solid matter, 16 Dynamical observables, 195–196 Dynamics of liquids Brownian motion, 222–224 correlation functions of the currents, 229–231 diffusion, 225 hydrodynamic limit, 232–236 dilute gas, 225–226 hydrodynamic limit, 231–232 Langevin equation, 222–224 liquid dynamics, 239–249 memory effects, 249–255 functions, 255–263
Index self-intermediate scattering function, 227–229 self Van Hove Function, 225 thermal motion, 221–222 VCF, 236–239 Dynamic structure factor formulas, 234 incoherent scattering, 216 static limit, 215–216 Dynamics upon supercooling, 280–283
E Electronic charge distribution, 11 Electronic Numerical Integrator and Computer (ENIAC), 131 Energy and entropy, 24–26 Energy landscape and configurational entropy, 278–280 PEL, 12, 13 sampling, 179 ENIAC, see Electronic Numerical Integrator and Computer (ENIAC) Ensembles in statistical mechanics, 50–57 Enthalpy, 35 Equation of state direct calculations of, 173–174 and thermodynamic inconsistency, 123–124 Equilibration procedure, molecular dynamics, 140–142 Equilibrium conditions and intensive quantities, 29 maximum entropy, 27 stability of, 27, 28 Equilibrium probability density, 50 Ergodicity and detailed balance Monte Carlo (MC) simulation, 153–154 Euler’s theorem for homogeneous functions of first order, 26–27 Ewald method, 144–148 Extensive function, 23
F FERMIAC, 131 Fermi pseudo-potential, 79 Finite size scaling (FSSC) methods, 190 Fluctuation-dissipation theorem, 206–209, 217, 235 Force field for atoms in liquids, 61–64 Fragile to strong crossover (FSC), 190, 318–319 Free energy calculation from thermodynamic integration, 174–176
Index Free energy, reaction coordinate, 183–189 FSC, see Fragile to strong crossover (FSC) FSSC methods, see Finite size scaling (FSSC) methods
G Gibbs-Duhem relation, 26–27 Gibbs ensemble, 160 Gibbs ensemble Monte Carlo (GEMC), 160 Gibbs free energy, 33–34, 37 Glass transition Arrhenius exponential, 318 ergodic to non-ergodic crossover, 291–294 Kauzmann temperature, 273 structural and thermodynamic properties, 265 thermodynamic, 280 quantities, 268 Glassywater, 306, 309 ambient pressure experiment, 302, 303 experimental isothermal compressibility, 302, 303 forms, 306 hypothetical scenario, 304 isothermal compressibility, 302 LDA and HDA, 306, 307 phase diagram of water, 306 radial distribution functions, 307 supercooled liquids, 305 thermodynamic behaviour, 305 TMD line, 304 g(r), exact equation, 114–115 Grand canonical ensemble, 54–56 distribution functions in, 68–70 Grand canonical Monte Carlo (GCMC), 158–159 Grand canonical potential, 36 Grand partition function, density distributions, 101–103 Grand potential density correlation function, 104 as generating functional, 103–104
H Haemoglobin, 19 Hard sphere fluid contact value, 118 equation of state and liquid-solid transition, 119, 120 freezing transition of, 119 Percus-Yevick for, 121–123
325 phase diagram of, 119, 120 properties of, 117–119 HDA, see High-density amorphous (HDA) Helmholtz free energy, 32–33, 37 Hierarchical equations, 71–72 High-density amorphous (HDA), 306–310 Histogram methods, 182–183 HNC approximation, 114 and Percus-Yevick approximation, 115–117 Hoover equations, 164–165 HS system, 125 Hydrogen bond (HB), water molecule, 14–16 Hydrogen-hydrogen RDF, 92, 94 Hydrodynamic limit, 231–232 diffusion in, 232–236 liquid dynamics, 239–248
I Integral equation theory, 107, 108, 128 Intensive function, 23 Ionic liquids/molten salts, 11 Isobaric-isothermal ensemble, 56–57 Isobaric-isothermal MC, 157–158 Isolated system with internal constraint, 24 Isothermal compressibility, 29 Isotherms, 3 of Lennard-Jones fluid, 173, 174
J Janus particles with controlled coating coverage, 17, 18 with two surfaces, 17, 18
K Kauzmann temperature, 272–274, 280, 281 Kern-Frenkel potential for patchy colloids, 128 Kirkwood approximation, 100–101
L Lagrange multipliers, 167 Langevin equation, 222–224 LDA, see Low-density amorphous (LDA) Leapfrog algorithm, 137–138 Legendre transforms and thermodynamic potentials, 31–37 Lennard-Jones (LJ) fluid, 190 isotherms of, 173, 174 Linear response correlation functions further properties, 199–200
326 Linear response (cont.) density correlation functions, 211–213 fluctuations and dissipation, 218 Van Hove functions, 211–213 dynamical observables, 195–196 response functions, 203–206 dynamic structure factor incoherent scattering, 216 static limit, 215–216 fluctuation-dissipation theorem, 206–209 neutron scattering, 214 response functions and dissipation, 209–211 theory, 200–203 Liquid dynamics compression force, 241 isotherm conditions, 244–246 longitudinal current, 244–248 pressure, element of fluid, 240 regimes, 248–249 sound waves, 244–246 stress, 241 tensor, 242 transverse current, 243–244 types of equation, 239 Liquid-gas coexistence, 1, 3 Liquid-liquid critical point hypothesis, 307–310 transition, 310–312 Liquids classical approximation, 10, 11 microscopic models for, 9–12 structure and dynamics of, 8–9 Liquid-solid transition, 176–177 Liquid water hydrogen-hydrogen RDF, 92, 94 local tetrahedral order, 91–93 oxygen-oxygen RDF, 91–93 radial distribution functions, 91 structure of, 90–94 Low-density amorphous (LDA), 306–310
M
Macromolecules, 16
Macroscopic response functions and stability conditions, 29–31
Markov chain, 154, 172, 173
Markov processes, 152–153
Mathematical Analyzer, Numerical Integrator, and Computer (MANIAC), 132
Maxwell construction, 44, 173
    coexistence curve, 45
    critical point, 45
Mayer function, 96, 98
MC simulation method, see Monte Carlo (MC) simulation method
MCT, see Mode coupling theory (MCT)
MD methods, see Molecular dynamics (MD) methods
Mean force potential, 99–100
Mean spherical approximation (MSA), 117
Mechanical stability conditions, 31
Memory effects
    generalized viscosity, 254–255
    Langevin equation, 249–252
    viscoelasticity, 252–254
Memory functions, 255–260
    velocity correlation function, 260–263
Metadynamics, 187–189
Metallic systems in the solid phase, 11
Metastable states of matter, 16
Metropolis, post-war pursuit of calculations, 132
Metropolis method, 154–156, 159
Microcanonical ensemble, 51–52
    temperature in, 139–140
Microscopic models
    for liquids, 9–12
    for water, 168–172
Mode coupling theory (MCT)
    and cage effect, 284–285
    equations, 296
    glassy dynamics, 283
    power law behaviour, 318
    prediction, 298, 314, 315
Modified HNC, 124–125
Molecular dynamics (MD) methods
    in canonical ensemble, 161–165
    equilibration procedure, 140–142
    equilibration with temperature rescaling, 141
    evolution
        in equilibrium state, 141, 142
        of system, 133
    Ewald method, 144–148
    forces calculation, 138–139
    initial configuration, 139
    long-range corrections, 143–144
    microcanonical ensemble, temperature, 139–140
    predictor/corrector, 134–135
    pressure control, 166–167
    and statistical mechanics, 132–133
    thermodynamic and structure, 143
    time evolution algorithms, 134
    Verlet algorithms, 135–138
Molecular liquids, 167–168
    example of, 61
    intermolecular and intramolecular site-site models, 89
    internal vibrations, 88
    laboratory reference system, 89
    neutron scattering data, 89
    pair distribution function, homogeneous isotropic fluid, 88
    structure of, 88–94
Molten salts, 87–88
Monocomponent system
    Gibbs phase rule, 1
    phase diagram of, 1, 2
Monte Carlo (MC) simulation method, 131
    ergodicity and detailed balance, 153–154
    in Gibbs ensemble, 160
    integration and importance sampling, 149–150
    Markov processes, 152–153
    Metropolis method, 154–156
    sampling in other ensembles, 157–159
    statistical mechanics
        importance sampling in, 151–152
        integrals in, 150–151
    steps, averaging, 156–157
MSA, see Mean spherical approximation (MSA)
Multicanonical ensemble, 182–183
Multicomponent liquids
    isotopic substitution, 86–87
    partial structure factor, 86
    structure of, 85–88

N
Neutron scattering techniques on liquids, 73, 74
    anelastic scattering, 75
    on atomic system, 75
    dispersion relation, 75
    on liquids, 75–79
    neutron wavefunctions (plane waves), 78
    pseudo-potential, 76
Newton equations, 132, 133
NMR, see Nuclear magnetic resonance (NMR)
Nosé method, 161–164
Nuclear magnetic resonance (NMR), 75
Numerical calculations, 13

O
OCT, see Optimized cluster theory (OCT)
Optimized cluster theory (OCT), 127
Optimized random phase approximation (ORPA), 125–127
Optimized RPA, 125–126
Ornstein-Zernike equation, 106–108
    in k-space, 108–109
ORPA, see Optimized random phase approximation (ORPA)
Oxygen-oxygen RDF, 91, 92

P
Particle mesh Ewald (PME), 148
Patchy colloids
    Kern-Frenkel potential for, 128
PBC, see Periodic boundary conditions (PBC)
PEL, see Potential energy landscape (PEL)
Percus-Yevick (PY) theory for hard sphere fluid, 120–123, 128
Periodic boundary conditions (PBC), 138
Perron-Frobenius theorem, 154
Perturbation theories, 125–126
Phase transitions, MC, 173
    and classification, 39–41
Photosynthesis of plants, 3
PME, see Particle mesh Ewald (PME)
Polarizable water models, 172
Potential energy landscape (PEL), 12, 13, 179, 277–281, 283, 291, 292
Pressure-volume plane
    phase diagram of, 1, 2
Protein geometry, 20
Pure substances
    phase diagram of, 3–5
R
Radial distribution functions (RDF), 88, 91, 92, 112, 307
    canonical ensemble, 65–66
    from excess free energy, 101
    of Lennard-Jones liquid for two temperatures, 74
    of liquid argon, 82–83
    normalized distribution functions, 66
    pair distribution function, 88–90
    potential energy, 66–67
    pressure, virial of forces, 67–68
    qualitative behaviour of, 72–73
    structure factor, 82–83
    with thermodynamics, 66–68
Raman spectroscopy, 75
Random phase approximation (RPA), 116–117
RDF, see Radial distribution function (RDF)
Reference HNC, 124–125
Relations, fluctuations/thermodynamics, 57–60
Response function
    correlation, 206
    and dissipation, 209–211
    dynamical, 203–206
    singular behaviour, 308
    thermodynamic, 313
    Verlet criterion, 218–219
RPA, see Random phase approximation (RPA)
S
Sampling in complex energy landscape, 179
Self-intermediate scattering function, 227–229
Simple and molecular fluids, 10
Soft matter, 16–20
Solid-gas transition, 2
Stability conditions for thermodynamic potentials, 37
State and thermodynamic inconsistency equation, 123–124
Static structure factor, 81–82
    liquid-gas critical point, 84–85
    and RDF of liquid argon, 82–83
Statistical mechanics of fluid states
    ensembles in, 50–57
    molecular dynamics, 132–133
    Monte Carlo simulation method
        importance sampling in, 151–152
        integrals in, 150–151
Structure of fluid matter, 64–65
    exchanged wavevector, elastic scattering, 80, 81
    experimental determination of, 73–75
    static limit and, 79–81
Supercooled liquids
    AG theory, 274–277
    Angell plot, 270–272
    configurational entropy, 278–280
    energy landscape, 278–280
    Kauzmann temperature, 272–274
    liquid to the glass, 268–270
    metastability, 265–268
    phase transitions, 265–268
Supercooled water
    dynamical properties of water, 313–318
    FSC, 318–319
    and glassy water, 302–307
    liquid density, 301
    liquid-liquid
        critical point hypothesis, 307–310
        transition, 310–312
    molecules, 301
    two-component liquid, 312–313
T
Tabulated thermodynamic potentials, 37
TB, see Thermal bath (TB)
Thermal bath (TB), 161
Thermal motion in liquids, 221–222
Thermal stability conditions, 31
Thermodynamic inconsistency equation, 123–124
Thermodynamic integration
    free energy calculation, 174–176
Thermodynamic properties of fluid matter
    coexistence and phase transitions, 38
    energy and entropy, 24–26
    equilibrium conditions, 27–28
        and intensive quantities, 29
    extensive and intensive functions, 23
    fluctuations and, 57–60
    Gibbs-Duhem relation, 26–27
    Legendre transforms and thermodynamic potentials, 31–37
    macroscopic response functions and stability conditions, 29–31
    phase transitions and classifications, 39–41
    stability conditions for thermodynamic potentials, 37
    van der Waals equation, 41–49
Three-body potentials, 11
Time evolution algorithms, 134
TMD line, 304, 305, 307, 308, 311
Two-component liquid water, 312–313
U
Umbrella sampling, 179–182
    for reaction coordinates, 185–187
V
van der Waals (VdW) equation
    and corresponding states, 46–47
    critical behaviour of, 47–49
    gas-liquid coexistence curves, 47
    gas-liquid transition with critical point, 41, 42
    hard spheres, 97–99
    hyperbolic functions, gas, 42
    isothermal compressibility, 49
    isotherms of, 42, 43
    Maxwell construction, 44
        loop, 43, 45, 173
    van der Waals-like behaviour of the isotherms, 173
Van Hove functions, 211–213, 216, 225, 226, 234
Vapour, 1
VCF, see Velocity correlation function (VCF)
Velocity correlation function (VCF), 236–239, 263
Velocity Verlet algorithm, 136–137
Verlet algorithms, 135–138
Virial coefficients, 97
Virtual systems of atoms, 13
Viscoelasticity, 252–255

W
Water
    coarse-grained model, 170
    flexible water models, 172
    liquid and crystalline stable phases, 170, 172
    microscopic models for, 168–172
    oxygen-oxygen RDF, 170, 171
    phase diagram of, 4, 5
    site models in simulations, 169
    TIP4P/2005 model, 170, 171
    as two-component liquid, 312–313
Weeks, Chandler and Andersen (WCA) method, 125
Weighted histogram analysis method (WHAM), 186
WHAM, see Weighted histogram analysis method (WHAM)
Widom
    line, 267, 308–312, 318–319
    method, 177–179

X
X-ray diffraction, 73, 74