Understanding molecular simulation: from algorithms to applications [3 ed.] 9780323902922

Understanding Molecular Simulation explains molecular simulation from a chemical-physics and statistical-mechanics perspective.


Language: English · Pages: 679 [868] · Year: 2023


Table of contents:
Front Cover
Understanding Molecular Simulation
Copyright
Contents
Preface to the third edition
Preface to the second edition
Preface to first edition
1 Introduction
I Basics
2 Thermodynamics and statistical mechanics
2.1 Classical thermodynamics
2.1.1 Auxiliary functions
2.1.2 Chemical potential and equilibrium
2.1.3 Energy, pressure, and chemical potential
2.2 Statistical thermodynamics
2.2.1 Basic assumption
2.2.2 Systems at constant temperature
2.2.3 Towards classical statistical mechanics
2.3 Ensembles
2.3.1 Micro-canonical (constant-NVE) ensemble
2.3.2 Canonical (constant-NVT) ensemble
2.3.3 Isobaric-isothermal (constant-NPT) ensemble
2.3.4 Grand-canonical (constant-μVT) ensemble
2.4 Ergodicity
2.5 Linear response theory
2.5.1 Static response
2.5.2 Dynamic response
2.6 Questions and exercises
3 Monte Carlo simulations
3.1 Preamble: molecular simulations
3.2 The Monte Carlo method
3.2.1 Metropolis method
3.2.2 Parsimonious Metropolis algorithm
3.3 A basic Monte Carlo algorithm
3.3.1 The algorithm
3.3.2 Technical details
3.3.2.1 Boundary conditions
3.3.2.2 Truncation of interactions
Simple truncation
Truncated and shifted
Truncated and force-shifted
3.3.2.3 Whenever possible, use potentials that need no truncation
3.3.2.4 Initialization
3.3.2.5 Reduced units
3.3.3 Detailed balance versus balance
3.4 Trial moves
3.4.1 Translational moves
3.4.2 Orientational moves
3.4.2.1 Rigid, linear molecules
3.4.2.2 Rigid, nonlinear molecules
3.4.2.3 Non-rigid molecules
3.5 Questions and exercises
4 Molecular Dynamics simulations
4.1 Molecular Dynamics: the idea
4.2 Molecular Dynamics: a program
4.2.1 Initialization
4.2.2 The force calculation
4.2.3 Integrating the equations of motion
4.3 Equations of motion
4.3.1 Accuracy of trajectories and the Lyapunov instability
4.3.2 Other desirable features of an algorithm
4.3.3 Other versions of the Verlet algorithm
4.3.4 Liouville formulation of time-reversible algorithms
4.3.5 One more way to look at the Verlet algorithm…
4.4 Questions and exercises
5 Computer experiments
5.1 Static properties
5.1.1 Temperature
5.1.2 Internal energy
5.1.3 Partial molar quantities
5.1.4 Heat capacity
5.1.5 Pressure
5.1.5.1 Pressure by thermodynamic integration
5.1.5.2 Local pressure and method of planes
5.1.5.3 Virtual volume changes
5.1.5.4 Compressibility
5.1.6 Surface tension
5.1.7 Structural properties
5.1.7.1 Structure factor
5.1.7.2 Radial distribution function
5.2 Dynamical properties
5.2.1 Diffusion
5.2.2 Order-n algorithm to measure correlations
5.2.3 Comments on the Green-Kubo relations
5.3 Statistical errors
5.3.1 Static properties: system size
5.3.2 Correlation functions
5.3.3 Block averages
5.4 Questions and exercises
II Ensembles
6 Monte Carlo simulations in various ensembles
6.1 General approach
6.2 Canonical ensemble
6.2.1 Monte Carlo simulations
6.2.2 Justification of the algorithm
6.3 Isobaric-isothermal ensemble
6.3.1 Statistical mechanical basis
6.3.2 Monte Carlo simulations
6.3.3 Applications
6.4 Isotension-isothermal ensemble
6.5 Grand-canonical ensemble
6.5.1 Statistical mechanical basis
6.5.2 Monte Carlo simulations
6.5.3 Molecular case
6.5.4 Semigrand ensemble
6.5.4.1 Phase coexistence in the semigrand ensemble
6.5.4.2 Chemical equilibria
Comment
Even more ensembles
6.6 Phase coexistence without boundaries
6.6.1 The Gibbs-ensemble technique
6.6.2 The partition function
6.6.3 Monte Carlo simulations
6.6.4 Applications
6.7 Questions and exercises
7 Molecular Dynamics in various ensembles
7.1 Molecular Dynamics at constant temperature
7.1.1 Stochastic thermostats
7.1.1.1 Andersen thermostat
7.1.1.2 Local, momentum-conserving stochastic thermostat
7.1.1.3 Langevin dynamics
Brownian dynamics
7.1.2 Global kinetic-energy rescaling
7.1.2.1 Extended Lagrangian approach
Advantages and drawbacks of the Nosé thermostat
7.1.2.2 Application
7.1.3 Stochastic global energy rescaling
7.1.4 Choose your thermostat carefully
7.2 Molecular Dynamics at constant pressure
7.3 Questions and exercises
III Free-energy calculations
8 Free-energy calculations
8.1 Introduction
8.1.1 Importance sampling may miss important states
8.1.2 Why is free energy special?
8.2 General note on free energies
8.3 Free energies and first-order phase transitions
8.3.1 Cases where free-energy calculations are not needed
8.3.1.1 Direct coexistence calculations
8.3.1.2 Coexistence without interfaces
8.3.1.3 Tracing coexistence curves
8.4 Methods to compute free energies
8.4.1 Thermodynamic integration
8.4.2 Hamiltonian thermodynamic integration
8.5 Chemical potentials
8.5.1 The particle insertion method
8.5.2 Particle-insertion method: other ensembles
8.5.3 Chemical potential differences
8.6 Histogram methods
8.6.1 Overlapping-distribution method
8.6.2 Perturbation expression
8.6.3 Acceptance-ratio method
8.6.4 Order parameters and Landau free energies
8.6.5 Biased sampling of free-energy profiles
8.6.6 Umbrella sampling
8.6.7 Density-of-states sampling
8.6.8 Wang-Landau sampling
8.6.9 Metadynamics
8.6.10 Piecing free-energy profiles together: general aspects
8.6.11 Piecing free-energy profiles together: MBAR
8.7 Non-equilibrium free energy methods
8.8 Questions and exercises
9 Free energies of solids
9.1 Thermodynamic integration
9.2 Computation of free energies of solids
9.2.1 Atomic solids with continuous potentials
9.2.2 Atomic solids with discontinuous potentials
9.2.3 Molecular and multi-component crystals
9.2.4 Einstein-crystal implementation issues
9.2.5 Constraints and finite-size effects
9.3 Vacancies and interstitials
9.3.1 Defect free energies
9.3.1.1 Vacancies
9.3.1.2 Interstitials
10 Free energy of chain molecules
10.1 Chemical potential as reversible work
10.2 Rosenbluth sampling
10.2.1 Macromolecules with discrete conformations
10.2.2 Extension to continuously deformable molecules
10.2.3 Overlapping-distribution Rosenbluth method
10.2.4 Recursive sampling
10.2.5 Pruned-enriched Rosenbluth method
IV Advanced techniques
11 Long-ranged interactions
11.1 Introduction
11.2 Ewald method
11.2.1 Dipolar particles
11.2.2 Boundary conditions
11.2.3 Accuracy and computational complexity
11.3 Particle-mesh approaches
11.4 Damped truncation
11.5 Fast-multipole methods
11.6 Methods that are suited for Monte Carlo simulations
11.6.1 Maxwell equations on a lattice
11.6.2 Event-driven Monte Carlo approach
11.7 Hyper-sphere approach
12 Configurational-bias Monte Carlo
12.1 Biased sampling techniques
12.1.1 Beyond Metropolis
12.1.2 Orientational bias
12.2 Chain molecules
12.2.1 Configurational-bias Monte Carlo
12.2.2 Lattice models
12.2.3 Off-lattice case
12.3 Generation of trial orientations
12.3.1 Strong intramolecular interactions
12.4 Fixed endpoints
12.4.1 Lattice models
12.4.2 Fully flexible chain
12.4.3 Strong intramolecular interactions
12.5 Beyond polymers
12.6 Other ensembles
12.6.1 Grand-canonical ensemble
12.7 Recoil growth
12.7.1 Algorithm
12.8 Questions and exercises
13 Accelerating Monte Carlo sampling
13.1 Sampling intensive variables
13.1.1 Parallel tempering
13.1.2 Expanded ensembles
13.2 Noise on noise
13.3 Rejection-free Monte Carlo
13.3.1 Hybrid Monte Carlo
13.3.2 Kinetic Monte Carlo
13.3.3 Sampling rejected moves
13.4 Enhanced sampling by mapping
13.4.1 Machine learning and the rebirth of static Monte Carlo sampling
13.4.2 Cluster moves
13.4.2.1 Cluster moves on lattices
Swendsen-Wang algorithm for Ising model
Wolff algorithm
13.4.2.2 Off-lattice cluster moves
13.4.3 Early rejection method
13.4.4 Beyond detailed-balance
14 Time-scale-separation problems in MD
14.1 Constraints
14.1.1 Constrained and unconstrained averages
14.1.2 Beyond bond constraints
14.2 On-the-fly optimization
14.3 Multiple time-step approach
15 Rare events
15.1 Theoretical background
15.2 Bennett-Chandler approach
15.2.1 Dealing with holonomic constraints (Blue-Moon ensemble)
15.3 Diffusive barrier crossing
15.4 Path-sampling techniques
15.4.1 Transition-path sampling
15.4.1.1 Path ensemble
15.4.1.2 Computing rates
15.4.2 Path sampling Monte Carlo
15.4.3 Beyond transition-path sampling
15.4.4 Transition-interface sampling
15.5 Forward-flux sampling
15.5.1 Jumpy forward-flux sampling
15.5.2 Transition-path theory
15.5.3 Mean first-passage times
15.6 Searching for the saddle point
15.7 Epilogue
16 Mesoscopic fluid models
16.1 Dissipative-particle dynamics
16.1.1 DPD implementation
16.1.2 Smoothed dissipative-particle dynamics
16.2 Multi-particle collision dynamics
16.3 Lattice-Boltzmann method
V Appendices
A Lagrangian and Hamiltonian equations of motion
A.1 Action
A.2 Lagrangian
A.3 Hamiltonian
A.4 Hamilton dynamics and statistical mechanics
A.4.1 Canonical transformation
A.4.2 Symplectic condition
A.4.3 Statistical mechanics
B Non-Hamiltonian dynamics
C Kirkwood-Buff relations
C.1 Structure factor for mixtures
C.2 Kirkwood-Buff in simulations
D Non-equilibrium thermodynamics
D.1 Entropy production
D.1.1 Enthalpy fluxes
D.2 Fluctuations
D.3 Onsager reciprocal relations
E Non-equilibrium work and detailed balance
F Linear response: examples
F.1 Dissipation
F.2 Electrical conductivity
F.3 Viscosity
F.4 Elastic constants
G Committor for 1d diffusive barrier crossing
G.1 1d diffusive barrier crossing
G.2 Computing the committor
H Smoothed dissipative particle dynamics
H.1 Navier-Stokes equation and Fourier’s law
H.2 Discretized SDPD equations
I Saving CPU time
I.1 Verlet list
I.2 Cell lists
I.3 Combining the Verlet and cell lists
I.4 Efficiency
J Some general purpose algorithms
J.1 Gaussian distribution
J.2 Selection of trial orientations
J.3 Generate random vector on a sphere
J.4 Generate bond length
J.5 Generate bond angle
J.6 Generate bond and torsion angle
VI Repository
K Errata
L Miscellaneous methods
L.1 Higher-order integration schemes
L.2 Surface tension via the pressure tensor
L.3 Micro-canonical Monte Carlo
L.4 Details of the Gibbs “ensemble”
L.4.1 Free energy of the Gibbs ensemble
L.4.1.1 Basic definitions and results for the canonical ensemble
L.4.1.2 The free energy density in the Gibbs ensemble
L.4.2 Graphical analysis of simulation results
L.4.3 Chemical potential in the Gibbs ensemble
L.4.4 Algorithms of the Gibbs ensemble
L.5 Multi-canonical ensemble method
L.6 Nosé-Hoover dynamics
L.6.1 Nosé-Hoover dynamics equations of motion
L.6.1.1 The Nosé-Hoover algorithm
Implementation
L.6.1.2 Nosé-Hoover chains
L.6.1.3 The NPT ensemble
L.6.2 Nosé-Hoover algorithms
L.6.2.1 Canonical ensemble
L.6.2.2 The isothermal-isobaric ensemble
L.7 Ewald summation in a slab geometry
L.8 Special configurational-bias Monte Carlo cases
L.8.1 Generation of branched molecules
L.8.2 Rebridging Monte Carlo
L.8.3 Gibbs-ensemble simulations
L.9 Recoil growth: justification of the method
L.10 Overlapping distribution for polymers
L.11 Hybrid Monte Carlo
L.12 General cluster moves
L.13 Boltzmann-sampling with dissipative particle dynamics
L.14 Reference states
L.14.1 Grand-canonical ensemble simulation
L.14.1.1 Preliminaries
L.14.1.2 Ideal gas
L.14.1.3 Grand-canonical simulations
M Miscellaneous examples
M.1 Gibbs ensemble for dense liquids
M.2 Free energy of a nitrogen crystal
M.3 Zeolite structure solution
N Supporting information for case studies
N.1 Equation of state of the Lennard-Jones fluid-I
N.2 Importance of detailed balance
N.3 Why count the old configuration again?
N.4 Static properties of the Lennard-Jones fluid
N.5 Dynamic properties of the Lennard-Jones fluid
N.6 Algorithms to calculate the mean-squared displacement
N.7 Equation of state of the Lennard-Jones fluid
N.8 Phase equilibria from constant-pressure simulations
N.9 Equation of state of the Lennard-Jones fluid - II
N.10 Phase equilibria of the Lennard-Jones fluid
N.11 Use of Andersen thermostat
N.12 Use of Nosé-Hoover thermostat
N.13 Harmonic oscillator (I)
N.14 Nosé-Hoover chain for harmonic oscillator
N.15 Chemical potential: particle-insertion method
N.16 Chemical potential: overlapping distributions
N.17 Solid-liquid equilibrium of hard spheres
N.18 Equation of state of Lennard-Jones chains
N.19 Generation of trial configurations of ideal chains
N.20 Recoil growth simulation of Lennard-Jones chains
N.21 Multiple time step versus constraints
N.22 Ideal gas particle over a barrier
N.23 Single particle in a two-dimensional potential well
N.24 Dissipative particle dynamics
N.25 Comparison of schemes for the Lennard-Jones fluid
O Small research projects
O.1 Adsorption in porous media
O.2 Transport properties of liquids
O.3 Diffusion in a porous medium
O.4 Multiple-time-step integrators
O.5 Thermodynamic integration
P Hints for programming
Bibliography
Acronyms
Glossary
Index
Author index
Back Cover


Understanding Molecular Simulation
From Algorithms to Applications
Third Edition

Daan Frenkel
Yusuf Hamied Department of Chemistry
University of Cambridge
Cambridge, United Kingdom

Berend Smit
Laboratory of Molecular Simulation (LSMO)
Institut des Sciences et Ingénierie Chimiques
École Polytechnique Fédérale de Lausanne (EPFL)
Sion, Switzerland
Department of Chemical and Biomolecular Engineering
Department of Chemistry
University of California at Berkeley
Berkeley, CA, United States

Academic Press is an imprint of Elsevier
125 London Wall, London EC2Y 5AS, United Kingdom
525 B Street, Suite 1650, San Diego, CA 92101, United States
50 Hampshire Street, 5th Floor, Cambridge, MA 02139, United States
The Boulevard, Langford Lane, Kidlington, Oxford OX5 1GB, United Kingdom

Copyright © 2023 Elsevier Inc. All rights reserved.

No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording, or any information storage and retrieval system, without permission in writing from the publisher. Details on how to seek permission, further information about the Publisher’s permissions policies and our arrangements with organizations such as the Copyright Clearance Center and the Copyright Licensing Agency, can be found at our website: www.elsevier.com/permissions.

This book and the individual contributions contained in it are protected under copyright by the Publisher (other than as may be noted herein).

Notices
Knowledge and best practice in this field are constantly changing. As new research and experience broaden our understanding, changes in research methods, professional practices, or medical treatment may become necessary. Practitioners and researchers must always rely on their own experience and knowledge in evaluating and using any information, methods, compounds, or experiments described herein. In using such information or methods they should be mindful of their own safety and the safety of others, including parties for whom they have a professional responsibility.

To the fullest extent of the law, neither the Publisher nor the authors, contributors, or editors, assume any liability for any injury and/or damage to persons or property as a matter of products liability, negligence or otherwise, or from any use or operation of any methods, products, instructions, or ideas contained in the material herein.

ISBN: 978-0-323-90292-2

For information on all Academic Press publications visit our website at https://www.elsevier.com/books-and-journals

Publisher: Candice Janco
Acquisitions Editor: Charles Bath
Editorial Project Manager: Aera Gariguez
Production Project Manager: Bharatwaj Varatharajan
Cover Designer: Victoria Pearson
Typeset by VTeX


Preface to the third edition

The third edition of Understanding Molecular Simulation is rather different from the previous one. The main reason why we opted for a different approach is that there have been massive changes in the way in which simulations are used. However, before discussing these changes, we first emphasize what has remained the same: this book is still about understanding molecular simulation. As we wrote in the preface to the first edition, “This book is not a molecular-simulation cookbook”, and it still isn’t.

It has been more than 20 years since the second edition was published, and it was only due to the lock-downs resulting from the Covid-19 pandemic that we found time to undertake the rather massive task of updating the book. Twenty years is a long time and it is reasonable to ask: what, in the book, has changed?

First and foremost, the community of people using molecular simulations has grown tremendously —and for many of them simulations are not the central focus of their research. Whereas at the time of the first edition of our book, many simulators were writing their own code, this group —whilst still very active— has become a minority. An appreciable fraction of all recent molecular simulations are reported in papers that report also, or even primarily, experimental work. This influx of new users went hand-in-hand with the increasing prevalence of a number of powerful simulation packages. It is hard to overstate the importance of this development, because it has removed an important barrier to the widespread use of molecular simulations. At first sight, one might think that the use of simulation packages has decreased the need for “understanding” molecular simulation. However, we argue that the opposite is true: there is now a much larger need for information about the techniques that are used inside these packages, and about the choices that have to be made between different options in such packages. In this sense, understanding molecular simulations has become uncoupled from writing your own code.

But there is another factor that required a major rethink of our book, and it is directly related to the growth in computing power, or more specifically, to the staggering growth in the size of solid-state memories. In the early days of simulations, the available memory of computers could be measured in kilobytes. As a consequence, simulations were performed and the output was written on tapes, or later on disks, which were subsequently read by separate codes for analyzing the data. Then computer memories grew, and much of the analysis could be (and was) carried out on the fly, because storing the data on an external device and analyzing them afterwards, would slow down the process. However, with the subsequent explosion in the capacity of cheap solid-state memories, it is now, once again, attractive to separate simulations from analysis. Hence, even if the simulations are carried out using a standard simulation package, there is a good chance that the users will write their own analysis code —and even if not, there is a good reason for separating the process of running the simulations from the data analysis. In the current edition of our book, we have reorganized the material to reflect this change in the way simulations are used.

Finally, there is an obvious reason for changing the structure of this third edition: there has been an explosion in the number of new algorithms. This is an important development, but it also created a challenge: this is a book to help understanding molecular simulation, not to review the many hundreds of algorithms that have been published. So, we don’t. Rather, we try to explain the approach of certain classes of algorithms by choosing as an example an algorithm that is among the simplest in its category: but the simplest algorithm is rarely the fastest, nor the one that is most widely used. We stress that our choice of examples does not imply a “choice” against the more popular algorithms —just a way to be succinct. Succinctness is important, because we do not want this book to grow without bounds. For this reason, we have moved a fair amount of older material that is of less general interest to a website1 containing Supplementary Information (SI), where it can still be consulted. The same website will also be used to maintain a list of the inevitable errata that keep on emerging more or less from the very moment the manuscript has been submitted.

1 https://www.elsevier.com/books-and-journals/book-companion/9780323902922.

The carbon footprint of computing

There are wildly varying estimates of the energy consumption associated with the global computing infrastructure. However, one thing is clear: the energy spent on computing is a substantial fraction of the total. At present, most electricity is generated using fossil fuels. What does that mean? According to Wikipedia, the energy consumption of a typical supercomputer in 2022 is in the Mega-Watt range, which would require the equivalent of several metric tonnes of fossil fuel per day. Moreover, the total amount of energy spent on computing is rising. Clearly, computing has to be made sustainable. On the “supply side”, this means that computers should be powered with electricity that is generated sustainably. But users can also contribute, by computing more efficiently. This is where algorithms can make a large difference, provided that the increased efficiency is not used to run ever larger simulations. Also, in computing, it is often true that “small is beautiful”.
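As a rough back-of-the-envelope check of the order of magnitude quoted above, the small calculation below converts a megawatt of electrical power into tonnes of fuel per day. The fuel energy content and power-plant efficiency used here are our own assumed round numbers, not figures from the text.

```python
# Back-of-the-envelope check (assumed round numbers, not from the text):
# how much fossil fuel does a 1 MW computer correspond to per day?
power_mw = 1.0                      # supercomputer power draw, in MW
electricity_mwh = power_mw * 24.0   # 24 MWh of electricity per day
fuel_mwh_per_tonne = 11.0           # ~40 MJ/kg of fuel, i.e. ~11 MWh per tonne (assumed)
plant_efficiency = 0.40             # assumed thermal power-plant efficiency
fuel_tonnes_per_day = electricity_mwh / (fuel_mwh_per_tonne * plant_efficiency)
print(f"~{fuel_tonnes_per_day:.1f} tonnes of fossil fuel per day")  # ~5.5 tonnes/day
```

With these assumptions a single megawatt-scale machine indeed corresponds to several tonnes of fuel per day, which is the scale quoted above.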


Acknowledgments

Special thanks are due to Rosalind Allen, Dick Bedeaux, Peter Bolhuis, Giovanni Bussi, Bingqing Cheng, Samuel Coles, Stephen Cox, Alex Cumberworth, John Chodera, Giovanni Ciccotti, Christoph Dellago, Oded Farago, Susana Garcia, Bjørn Hafskjold, Kevin Jablonka, Signe Kjelstrup, Werner Krauth, Alessandro Laio, Ben Leimkuhler, Andrea Liu, Erik Luijten, Tony Maggs, Sauradeep Majumdar, Elias Moubarak, Beatriz Mouriño, Frank Noe, Miriam Pougin, Benjamin Rotenberg, David Shalloway, Michiel Sprik, Eric VandenEijnden, Joren Van Herck, Fred Verhoeckx, Patrick Warren, Peter Wirnsberger, and Xiaoqi Zhang for their suggestions for improvements in the text.

Special thanks go to Jacobus van Meel, who provided the raw material for the image on the cover of the book, which shows the nucleation of a crystallite in a cavity in a solid surface.

Some of our online exercises are based on Python code written by Manav Kumar and David Coker. We gratefully acknowledge them. However, they bear no responsibility for the mistakes that we introduced by changing their code.

In addition, we thank everyone who pointed out the numerous mistakes and typos in the previous edition, in particular Giovanni Ciccotti, Clemens Foerst, Viktor Ivanov, Brian Laird, Ting Li, Erik Luijten, Mat Mansell, Bortolo Mognetti, Nicy Nicy, Gerardo Odriozola, Arno Proeme, Mikhail Stukan, Petr Sulc, Krzysztof Szalewicz, David Toneian, Patrick Varilly, Patrick Warren, and Martijn Wehrens. We have tried to resolve the issues that were raised. We stress that all remaining errors and misconceptions in the text are ours, and ours alone.

Daan Frenkel & Berend Smit, January 2023


Preface to the second edition

Why did we write a second edition? A minor revision of the first edition would have been adequate to correct the (admittedly many) typographical mistakes. However, many of the nice comments that we received from students and colleagues alike, ended with a remark of the type: “unfortunately, you don’t discuss topic x”. And indeed, we feel that, after only five years, the simulation world has changed so much that the title of the book was no longer covered by the contents.

The first edition was written in 1995 and since then several new techniques have appeared or matured. Most (but not all) of the major changes in the second edition deal with these new developments. In particular, we have included a section on:

• Transition path sampling and diffusive barrier crossing to simulate rare events
• Dissipative particle dynamics as a coarse-grained simulation technique
• Novel schemes to compute the long-ranged forces
• Discussion on Hamiltonian and non-Hamiltonian dynamics in the context of constant-temperature and constant-pressure Molecular Dynamics simulations
• Multiple-time-step algorithms as an alternative for constraints
• Defects in solids
• The pruned-enriched Rosenbluth sampling, recoil growth, and concerted rotations for complex molecules
• Parallel tempering for glassy Hamiltonians

We have updated some of the examples to include also recent work. Several new Examples have been added to illustrate recent applications.

We have taught several courses on Molecular Simulation, based on the first edition of this book. As part of these courses, Dr. Thijs Vlugt prepared many Questions, Exercises, and Case Studies, most of which have been included in the present edition. Some additional exercises can be found on the Web. We are very grateful to Thijs Vlugt for the permission to reproduce this material.

Many of the advanced Molecular Dynamics techniques described in this book are derived using the Lagrangian or Hamilton formulations of classical mechanics. However, many chemistry and chemical engineering students are not familiar with these formalisms. While a full description of classical mechanics is clearly beyond the scope of the present book, we have added an Appendix that summarizes the necessary essentials of Lagrangian and Hamiltonian mechanics.

Special thanks are due to Giovanni Ciccotti, Rob Groot, Gavin Crooks, Thijs Vlugt, and Peter Bolhuis for their comments on parts of the text. In addition, we thank everyone who pointed out mistakes and typos, in particular Drs. J.B. Freund, R. Akkermans, and D. Moroni.

Daan Frenkel & Berend Smit, 2001

Preface to first edition

This book is not a computer simulation cookbook. Our aim is to explain the physics that is behind the “recipes” of molecular simulation. Of course, we also give the recipes themselves, because otherwise the book would be too abstract to be of much practical use. The scope of this book is necessarily limited: we do not aim to discuss all aspects of computer simulation. Rather, we intend to give a unified presentation of those computational tools that are currently used to study the equilibrium properties and, in particular, the phase behavior of molecular and supramolecular substances. Moreover, we intentionally restrict the discussion to simulations of classical many-body systems, even though some of the techniques mentioned can be applied to quantum systems as well. And, within the context of classical many-body systems, we restrict our discussion to the modeling of systems at, or near, equilibrium.

The book is aimed at readers who are active in computer simulation or are planning to become so. Computer simulators are continuously confronted with questions concerning the choice of technique, because a bewildering variety of computational tools is available. We believe that, to make a rational choice, a good understanding of the physics behind each technique is essential. Our aim is to provide the reader with this background. We should state at the outset that we consider some techniques to be more useful than others, and therefore our presentation is biased. In fact, we believe that the reader is well served by the fact that we do not present all techniques as equivalent. However, whenever we express our personal preference, we try to back it up with arguments based in physics, applied mathematics, or simply experience. In fact, we mix our presentation with practical examples that serve a twofold purpose: first, to show how a given technique works in practice, and second, to give the reader a flavor of the kind of phenomena that can be studied by numerical simulation.

The reader will also notice that two topics are discussed in great detail, namely simulation techniques to study first-order phase transitions, and various aspects of the configurational-bias Monte Carlo method. The reason why we devote so much space to these topics is not that we consider them to be more important than other subjects that get less coverage, but rather because we feel that, at present, the discussion of both topics in the literature is rather fragmented.


The present introduction is written for the nonexpert. We have done so on purpose. The community of people who perform computer simulations is rapidly expanding as computer experiments become a general research tool. Many of the new simulators will use computer simulation as a tool and will not be primarily interested in techniques. Yet, we hope to convince those readers who consider a computer simulation program a black box, that the inside of the black box is interesting and, more importantly, that a better understanding of the working of a simulation program may greatly improve the efficiency with which the black box is used.

In addition to the theoretical framework, we discuss some of the practical tricks and rules of thumb that have become “common” knowledge in the simulation community and are routinely used in a simulation. Often, it is difficult to trace back the original motivation behind these rules. As a result, some “tricks” can be very useful in one case yet result in inefficient programs in others. In this book, we discuss the rationale behind the various tricks, in order to place them in a proper context.

In the main text of the book we describe the theoretical framework of the various techniques. To illustrate how these ideas are used in practice we provide Algorithms, Case Studies and Examples.

Algorithms

The description of an algorithm forms an essential part of this book. Such a description, however, does not provide much information on how to implement the algorithm efficiently. Of course, details about the implementation of an algorithm can be obtained from a listing of the complete program. However, even in a well-structured program, the code contains many lines that, although necessary to obtain a working program, tend to obscure the essentials of the algorithm that they express. As a compromise solution, we provide a pseudo-code for each algorithm. These pseudo-codes contain only those aspects of the implementation directly related to the particular algorithm under discussion. This implies that some aspects that are essential for using this pseudo-code in an actual program have to be added. For example, the pseudo-codes consider only the x direction; similar lines have to be added for the y and z direction if the code is going to be used in a simulation. Furthermore, we have omitted the initialization of most variables.

Case Studies

In the Case Studies, the algorithms discussed in the main text are combined in a complete program. These programs are used to illustrate some elementary aspects of simulations. Some Case Studies focus on the problems that can occur in a simulation or on the errors that are sometimes made. The complete listing of the FORTRAN codes that we have used for the Case Studies is accessible to the reader through the Internet.1

1 The original FORTRAN code of the Case Studies can be found at: https://doi.org/10.5281/zenodo.7503798. For any updates, we refer to our GitHub site: https://github.com/UnderstandingMolecularSimulation.
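As a concrete illustration of the convention described under Algorithms above, the fragment below is a hypothetical example of our own (not one of the book's pseudo-codes): it applies periodic boundary conditions in a cubic box, where a pseudo-code in the book would print only the x line and the reader is expected to add the analogous y and z lines.

```python
# Hypothetical illustration (not one of the book's pseudo-codes) of the
# "x direction only" convention: the book would print only the x line;
# the y and z lines must be added for a real three-dimensional simulation.
def wrap_into_box(x, y, z, box):
    """Put a particle back into a cubic periodic box of side `box`."""
    x -= box * round(x / box)   # the line a pseudo-code would show
    y -= box * round(y / box)   # ...and the lines the reader has to add
    z -= box * round(z / box)
    return x, y, z

print(wrap_into_box(12.3, -0.7, 5.0, box=10.0))  # approximately (2.3, -0.7, 5.0)
```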


Examples

In the Examples, we demonstrate how the techniques discussed in the main text are used in an application. We have tried to refer as much as possible to research topics of current interest. In this way, the reader may get some feeling for the type of systems that can be studied with simulations. In addition, we have tried to illustrate in these examples how simulations can contribute to the solution of “real” experimental or theoretical problems. Many of the topics that we discuss in this book have appeared previously in the open literature. However, the Examples and Case Studies were prepared specifically for this book. In writing this material, we could not resist including a few computational tricks that, to our knowledge, have not been reported in the literature.

In computer science it is generally assumed that any source code over 200 lines contains at least one error. The source codes of the Case Studies contain over 25,000 lines of code. Assuming we are no worse than the average programmer this implies that we have made at least 125 errors in the source code. If you spot these errors and send them to us, we will try to correct them (we can not promise this!). It also implies that, before you use part of the code yourself, you should convince yourself that the code is doing what you expect it to do.

In the light of the previous paragraph, we must add the following disclaimer:

We make no warranties, express or implied, that the programs contained in this work are free of error, or that they will meet your requirements for any particular application. They should not be relied on for solving problems whose incorrect solution could result in injury, damage, or loss of property. The authors and publishers disclaim all liability for direct or consequential damages resulting from your use of the programs.

Although this book and the included programs are copyrighted, we authorize the readers of this book to use parts of the programs for their own use, provided that proper acknowledgment is made.

Finally, we gratefully acknowledge the help and collaboration of many of our colleagues. In fact, many dozens of our colleagues collaborated with us on topics described in the text. Rather than listing them all here, we mention their names at the appropriate place in the text. Yet, we do wish to express our gratitude for their input. Moreover, Daan Frenkel should like to acknowledge numerous stimulating discussions with colleagues at the FOM Institute for Atomic and Molecular Physics in Amsterdam and at the van ’t Hoff Laboratory of Utrecht University, while Berend Smit gratefully acknowledges discussions with colleagues at the University of Amsterdam and Shell. In addition, several colleagues helped us directly with the preparation of the manuscript, by reading the text or part thereof. They are Giovanni Ciccotti, Mike Deem, Simon de Leeuw, Toine Schlijper, Stefano Ruffo, Maria-Jose Ruiz, Guy Verbist and Thijs Vlugt. In addition, we thank Klaas Esselink and Sami Karaborni for the cover figure. We thank them all for their efforts. In addition we thank the many readers who have drawn our attention to errors and omissions in the first print. But we stress that the responsibility for the remainder of errors in the text is ours alone.

Daan Frenkel & Berend Smit, 1996

Chapter 1

Introduction

(Pre)history of computer simulation

It usually takes decades rather than years before a fundamentally new invention finds widespread application. For computer simulation, the story is rather different. Computer simulation started as a tool to exploit the electronic computing machines that had been developed during and after the Second World War. These machines had been built to perform the very heavy computation involved in the development of nuclear weapons and code-breaking. In the early 1950s, electronic computers became partly available for non-military use and this was the beginning of the discipline of computer simulation. W.W. Wood [1] recalls: “When the Los Alamos MANIAC became operational in March 1952, Metropolis was interested in having as broad a spectrum of problems as possible tried on the machine, in order to evaluate its logical structure and demonstrate the capabilities of the machine.”

The strange thing about computer simulation is that it is also a discovery, albeit a delayed discovery that grew slowly after the introduction of the technique. In fact, discovery is probably not the right word, because it does not refer to a new insight into the working of the natural world but into our description of nature. Working with computers has provided us with a new metaphor for the laws of nature: they carry as much (and as little) information as algorithms. For any nontrivial algorithm (i.e., loosely speaking, one that cannot be solved analytically), you cannot predict the outcome of a computation simply by looking at the program, although it often is possible to make precise statements about the general nature (e.g., the symmetry) of the result of the computation. Similarly, the basic laws of nature, as we know them, have the unpleasant feature that they are expressed in terms of equations that we cannot solve exactly, except in a few very special cases. If we wish to study the motion of more than two interacting bodies, even the relatively simple laws of Newtonian mechanics become essentially unsolvable. That is to say, they cannot be solved analytically, using only a pencil and the back of the proverbial envelope. However, using a computer, we can get the answer to any desired accuracy.

Most of materials science deals with the properties of systems of many atoms or molecules. Many almost always means more than two; usually, very much more. So if we wish to compute the properties of a liquid (to take a particularly nasty example), there is no hope of finding the answer exactly using only pencil and paper. Before computer simulation appeared on the scene, there was only one way to predict the properties of a molecular substance, namely by making use of a theory that provided an approximate description of that material. Such approximations are inevitable precisely because there are very few systems for which the equilibrium properties can be computed exactly (examples are the ideal gas, the harmonic crystal, and a number of lattice models, such as the two-dimensional Ising model for ferromagnets). As a result, most properties of real materials were predicted on the basis of approximate theories (examples are the van der Waals equation for dense gases, the Debye-Hückel theory for electrolytes, and the Boltzmann equation to describe the transport properties of dilute gases). Given sufficient information about the intermolecular interactions, these theories will provide us with an estimate of the properties of interest. Unfortunately, our knowledge of the intermolecular interactions of all but the simplest molecules is also quite limited. This leads to a problem if we wish to test the validity of a particular theory by comparing directly to experiment. If we find that theory and experiment disagree, it may mean that our theory is wrong, or that we have an incorrect estimate of the intermolecular interactions, or both.

Clearly, it would be nice if we could obtain essentially exact results for a given model system without having to rely on approximate theories. Computer simulations allow us to do precisely that. On the one hand, we can now compare the calculated properties of a model system with those of an experimental system: if the two disagree, our model is inadequate; that is, we have to improve on our estimate of the intermolecular interactions. On the other hand, we can compare the result of a simulation of a given model system with the predictions of an approximate analytical theory applied to the same model. If we now find that theory and simulation disagree, we know that the theory is flawed. So, in this case, the computer simulation plays the role of the experiment designed to test the theory. This method of screening theories before we apply them to the real world is called a computer experiment. This application of computer simulation is of tremendous importance. It has led to the revision of some very respectable theories, some of them dating back to Boltzmann. And it has changed the way in which we construct new theories. Nowadays it is becoming increasingly rare that a theory is applied to the real world before being tested by computer simulation. The simulation then serves a twofold purpose: it gives the theoretician a feeling for the physics of the problem, and it generates some “exact” results that can be used to test the quality of the theory to be constructed. Computer experiments have become standard practice, to the extent that they now provide the first (and often the last) test of a new theoretical result.

But note that the computer as such offers us no understanding, only numbers. And, as in a real experiment, these numbers have statistical errors. So what we get out of a simulation is never directly a theoretical relation. As in a real experiment, we still have to extract the useful information.

To take a not very realistic example, suppose we were to use the computer to measure the pressure of an ideal gas as a function of density. This example is unrealistic because the volume dependence of the ideal-gas pressure has, in fact, been well-known since the work of Boyle and Gay-Lussac. The Boyle-Gay-Lussac law states that the product of volume and pressure of an ideal gas is constant. Now suppose we were to measure this product by computer simulation. We might, for instance, find the set of experimental results in Table 1.1.

TABLE 1.1 Simulated equation of state of an ideal gas.

  ρkB T     P
  1         1.03 ± 0.04
  2         1.99 ± 0.03
  3         2.98 ± 0.05
  4         4.04 ± 0.03
  5         5.01 ± 0.04

The data suggest that P equals ρkB T, but no more than that. It is left to us to infer the conclusions.
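A minimal sketch of how such a “measurement” could be set up is given below. This is our own illustration, not code from the book: it estimates the pressure of an ideal gas from the instantaneous kinetic energy of N non-interacting particles (in reduced units with k_B = 1 and m = 1), so that a finite number of samples yields an average with a statistical error bar of the kind shown in Table 1.1.

```python
# Minimal sketch (not from the book) of a "computer experiment" on an ideal gas:
# estimate the pressure from P = 2*<KE>/(3*V) using a finite number of samples,
# so the answer carries a statistical error. Reduced units: k_B = 1, m = 1.
import numpy as np

def measure_pressure(rho_kT, n_part=100, n_samples=200, volume=100.0, seed=0):
    rng = np.random.default_rng(seed)
    temperature = rho_kT * volume / n_part   # choose T so that rho*T has the desired value
    estimates = []
    for _ in range(n_samples):
        # draw Maxwell-Boltzmann velocities for one uncorrelated "configuration"
        v = rng.normal(0.0, np.sqrt(temperature), size=(n_part, 3))
        kinetic = 0.5 * np.sum(v * v)
        estimates.append(2.0 * kinetic / (3.0 * volume))
    estimates = np.asarray(estimates)
    return estimates.mean(), estimates.std(ddof=1) / np.sqrt(n_samples)

for rho_kT in (1, 2, 3, 4, 5):
    p, err = measure_pressure(rho_kT, seed=rho_kT)
    print(f"rho*k_B*T = {rho_kT}:  P = {p:.2f} +/- {err:.2f}")
```

Exactly as the text argues, the program returns only numbers with error bars; recognizing that they are consistent with P = ρkB T is left to the user.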

The early history of computer simulation (see ref. [2]) illustrates this role of computer simulation. Some areas of physics appeared to have little need for simulation because very good analytical theories were available, e.g., to predict the properties of dilute gases or of nearly harmonic crystalline solids. However, in other areas, few if any exact theoretical results were known, and progress was much hindered by the lack of unambiguous tests to assess the quality of approximate theories. A case in point was the theory of dense liquids. Before the advent of computer simulations, the only way to model liquids was by mechanical simulation [3–5] of large assemblies of macroscopic spheres (e.g., ball bearings). Then the main problem becomes how to arrange these balls in the same way as atoms in a liquid. Much work on this topic was done by the famous British scientist J.D. Bernal, who built and analyzed such mechanical models for liquids. Actually, it would be fair to say that the really tedious work of analyzing the resulting three-dimensional structures was done by his research students, such as the unfortunate Miss Wilkinson whose research assignment was to identify all distinct local packing geometries of plastic foam spheres: she found that there were at least 197. It is instructive to see how Bernal built some of his models. The following quote from the 1962 Bakerian lecture describes Bernal’s attempt to build a ball-and-spoke model of a liquid [5]:

. . . I took a number of rubber balls and stuck them together with rods of a selection of different lengths ranging from 2.75 to 4 inch. I tried to do this in the first place as casually as possible, working in my own office, being interrupted every five minutes or so and not remembering what I had done before the interruption. However,. . . .

Subsequent models were made, for instance, by pouring thousands of steel balls from ball bearings into a balloon. It should be stressed that these mechanical models for liquids were, in some respects, quite realistic. However, the analysis


of the structures generated by mechanical simulation was very laborious and, in the end, had to be performed by computer anyway. In view of the preceding, it is hardly surprising that, when electronic computers were, for the first time, made available for unclassified research, numerical simulation of dense liquids was one of the first problems to be tackled. In fact, the first simulation of a liquid was carried out by Metropolis, Rosenbluth, Rosenbluth, Teller, and Teller on the MANIAC computer at Los Alamos [6], using (or, more properly, introducing) the Metropolis Monte Carlo (MC) method. The name Monte Carlo simulation had been coined earlier by Metropolis and Ulam (see Ref. [7]), because the method makes heavy use of computer-generated random numbers. Almost at the same time, Fermi, Pasta, Ulam and Tsingou [8,9] performed their famous numerical study of the dynamics of an anharmonic, one-dimensional crystal. The first proper Molecular Dynamics (MD) simulations were reported in 1956 by Alder and Wainwright [10] at Livermore, who studied the dynamics of an assembly of hard spheres. The first MD simulation of a model for a “real” material was reported in 1959 (and published in 1960) by the group led by Vineyard at Brookhaven [11], who simulated radiation damage in crystalline Cu (for a historical account, see [12]). The first MD simulation of a real liquid (argon) was reported in 1964 by Rahman at Argonne [13]. After that, computers were increasingly becoming available to scientists outside the US government labs, and the practice of simulation started spreading to other continents [14–17]. Much of the methodology of computer simulations has been developed since then, although it is fair to say that the basic algorithms for MC and MD have hardly changed since the 1950s. The most common application of computer simulations is to predict the properties of materials. The need for such simulations may not be immediately obvious. After all, it is much easier to measure the freezing point of water than to extract it from a computer simulation. The point is, of course, that it is easy to measure the freezing point of water at 1 atmosphere but often very difficult and, therefore, expensive to measure the properties of real materials at very high pressures or temperatures. The computer does not care: it does not go up in smoke when you ask it to simulate a system at 10,000 K. In addition, we can use computer simulation to predict the properties of materials that have not yet been made. And finally, computer simulations are increasingly used in data analysis. For instance, a very efficient technique for obtaining structural information about macromolecules from 2d-NMR is to feed the experimental data into a Molecular Dynamics simulation and let the computer find the structures that are both energetically favorable and compatible with the available NMR data. Initially, such simulations were received with a certain amount of skepticism, and understandably so. Simulation did not fit into the existing idea that whatever was not experiment had to be theory. In fact, many scientists much preferred to keep things the way they were: theory for the theoreticians and experiments for the experimentalists and no computers to confuse the issue. However, this


position became untenable, as is demonstrated by the following autobiographical quote of George Vineyard [12], who was the first to study the dynamics of radiation damage by numerical simulation: . . . In the summer of 1957 at the Gordon Conference on Chemistry and Physics of Metals, I gave a talk on radiation damage in metals . . . . After the talk, there was a lively discussion . . . . Somewhere the idea came up that a computer might be used to follow in more detail what actually goes on in radiation damage cascades. We got into quite an argument, some maintaining that it wasn’t possible to do this on a computer, others that it wasn’t necessary. John Fisher insisted that the job could be done well enough by hand, and was then goaded into promising to demonstrate. He went off to his room to work. Next morning he asked for a little more time, promising to send me the results soon after he got home. After about two weeks, not having heard from him, I called and he admitted that he had given up. This stimulated me to think further about how to get a high-speed computer into the game in place of John Fisher. . . .

Finally, computer simulation can be used as a purely exploratory tool. This sounds strange. One would be inclined to say that one cannot "discover" anything by simulation because you can never get out what you have not put in. Computer discoveries, in this respect, are not unlike mathematical discoveries. In fact, before computers were actually available this kind of numerical charting of unknown territory was never considered. The best way to explain it is to give an explicit example. In the mid-1950s, one of the burning questions in statistical mechanics was this: can crystals form in a system of spherical particles that have a harsh short-range repulsion, but no mutual attraction whatsoever? In a very famous computer simulation, Alder and Wainwright [18] and Wood and Jacobson [19] showed that such a system does indeed have a first-order freezing transition. This is now accepted wisdom, but at the time it was greeted with skepticism. For instance, at a meeting in New Jersey in 1957, a group of some 15 very distinguished scientists (among whom were 2 Nobel laureates) discussed the issue. When a vote was taken as to whether hard spheres can form a stable crystal, it appeared that half the audience simply could not believe this result. However, the work of the past 30 years has shown that harsh repulsive forces really determine the structural properties of a simple liquid and that attractive forces are in a sense of secondary importance. More about the early history of molecular simulations can be found in the book by Battimelli and Ciccotti [2].

Machine Learning

The power of computers has grown by more than 15 orders of magnitude since the early 1950s. This unprecedented growth has driven fundamental, qualitative changes in the way in which computers are used. One of these changes is the unstoppable growth of Machine Learning (ML) in almost every field, including Molecular Simulations, and we are only at the beginning. In this book we had


to make a choice: we stress the growing importance of ML, but the focus of the book cannot be on the technical aspects of ML: others are much better qualified to describe this than we are. Moreover, many of the current applications of ML in simulations are in the construction of cheaper force fields. This is a topic of great practical importance, but the focus of this book is on algorithms, not force fields. However, in some cases (and there will be many more), ML is used to construct novel algorithms or used to facilitate the analysis of high-dimensional data sets. In section 13.4.1, we will touch upon algorithm design using ML. Yet, in view of the very rapid developments in the field, the reader should be aware that the examples that we discuss are just snapshots: suggestive, but patchy. We only hint at the use of ML in the context of data analysis (see section 15.7), not because it is too small, but because it is too big for this book. Yet, a few words are in place.

The essence of constructing models is data reduction: as humans, we are continually confronted with sensory overload, and our brains have developed efficient analysis tools to reduce the myriad of data into a highly simplified (and occasionally oversimplified) representation of reality. In Science, a "model" can be viewed as such a low-dimensional representation of the system that we try to understand. For instance, the ideal gas law collapses a large number of measurements of the pressure as a function of density and temperature onto a plane (P = ρkB T). That is dimensionality reduction. In a simulation, as in many other fields, we are increasingly confronted with high-dimensional data sets that we would like to represent with a low-dimensional function. Finding such a representation (if it exists) is often too hard for our brains. This is where ML, in particular the version that makes use of autoencoders, comes in. It allows us to identify what combination of variables, e.g., specific structures, correlate with a particular observation. ML does not replace model-building, but it is a tool that helps us construct new models, just as the observed linear relation between P and ρT suggested the ideal gas law. Clearly, in this context, there is a huge variety of ML techniques that can be used, and the list is growing rapidly. For more details on this fast-moving field, we refer the reader to the relevant literature: a snapshot that was almost up-to-date in 2022 can be found in ref. [20].

Suggested reading

As stated at the outset, the present book does not cover all aspects of computer simulation. Readers who are interested in aspects of computer simulation not covered in this book are referred to one of the many books on the subject, some old, some more recent. We list only a few:

• Allen and Tildesley, Computer Simulation of Liquids [21]
• Haile, Molecular Dynamics Simulations: Elementary Methods [22]
• Leimkuhler and Matthews, Molecular Dynamics [23]
• Tuckerman, Statistical mechanics: theory and molecular simulation [24]


• Landau and Binder, A Guide to Monte Carlo Simulations in Statistical Physics [25]
• Rapaport, The Art of Molecular Dynamics Simulation [26]
• Newman and Barkema, Monte Carlo Methods in Statistical Physics [27]

Also of interest in this context are the books by Hockney and Eastwood [28], Hoover [29,30], Vesely [31], and Heermann [32] and the book by Evans and Morriss [33] for the theory and simulation of transport phenomena. The latter book is out of print and has been made available in electronic form.1 A book by Peters [34] deals specifically with reaction rates and rare events. A general discussion of Monte Carlo sampling can be found in Koonin's Computational Physics [35]. As the title indicates, this is a textbook on computational physics in general, as is the book by Gould and Tobochnik [36]. In contrast, the book by Kalos and Whitlock [37] focuses specifically on the Monte Carlo method. A good discussion of (quasi) random-number generators can be found in Numerical Recipes [38], while Ref. [37] gives a detailed discussion of tests for random-number generators. An early discussion of Monte Carlo simulations with emphasis on techniques relevant to atomic and molecular systems may be found in two articles by Valleau and Whittington in Modern Theoretical Chemistry [39,40]. The books by Binder [41,42] and Mouritsen [43] emphasize the application of MC simulations to discrete systems, phase transitions and critical phenomena. In addition, there exist several very useful proceedings of summer schools [44–49] on computer simulation.

How to use this book

The main part of this book is a description of the theoretical framework of the various simulation techniques. To illustrate how these ideas are used in practice we provide Algorithms, Illustrations, Examples, and Case Studies.

Algorithms

Throughout the text, we use pseudo-codes to explain algorithms. In earlier editions of this book, the pseudo-codes looked very much like FORTRAN, and, although FORTRAN is a powerful language to write code that has to run fast, many readers will not be familiar with it. We considered switching to a widely used language, such as Python. In the end, we chose not to, because one of the reasons why Python is popular is that it is very compact: compactness facilitates writing Python, but not reading it. In our pseudo-codes, we want to spell out what many languages hide: we sacrifice compactness to show the inner workings of a piece of coding. Our pseudo-codes will make skilled programmers cringe. However, for the sake of clarity, we follow Boltzmann's advice that "Elegance should be left to shoemakers and tailors". Even so, in our pseudo-codes, we need some shorthand notation. As much as possible, we have used standard notation: For comparisons, we use >, q1 . Hence there would be a net heat flow from the cold reservoir into the hot reservoir. But this contradicts the Second Law of thermodynamics. Therefore we must conclude that the efficiency of all reversible heat engines operating between the same reservoirs is identical. The efficiency only depends on the


temperatures t1 and t2 of the reservoirs (the temperatures t could be measured in any scale, e.g., in Fahrenheit or Réaumur, as long as it is single-valued). As η(t1, t2) depends only on the temperature in the reservoirs, then so does the ratio q2/q1 = 1 − η. Let us call this ratio R(t2, t1). Now suppose that we have a reversible engine that consists of two stages: one working between reservoir 1 and 2, and the other between 2 and 3. In addition, we have another reversible engine that works directly between 1 and 3. As both engines must be equally efficient, it follows that

R(t3, t1) = R(t3, t2) R(t2, t1).    (2.1.5)

This can only be true in general if R(t2, t1) is of the form

R(t2, t1) = f(t2)/f(t1),    (2.1.6)

where f(t) is an, as yet unknown, function of our measured temperature. What we do now is to introduce an "absolute" or thermodynamic temperature T given by

T ≡ f(t).    (2.1.7)

Then, it immediately follows that

q2/q1 = R(t2, t1) = T2/T1.    (2.1.8)

Note that the thermodynamic temperature could just as well have been defined as c × f(t). In practice, c has been fixed such that, around room temperature, 1 degree in the absolute (Kelvin) scale is equal to 1 degree Celsius. But that choice is, of course, purely historical and —as it will turn out later— a bit unfortunate.

Why do we need all this? We need it to introduce entropy, the most mysterious of all thermodynamic quantities. To do so, note that Eq. (2.1.8) can be written as

q1/T1 = q2/T2,    (2.1.9)

where q1 is the heat that flows in reversibly at the high temperature T1, and q2 is the heat that flows out reversibly at the low temperature T2. We see therefore that, during a complete cycle, the difference between q1/T1 and q2/T2 is zero. Recall that, at the end of a cycle, the internal energy of the system has not changed. Now Eq. (2.1.9) tells us that there is also another quantity, denoted by S, which is unchanged when we restore the system to its original state. Following Clausius, we use the name "entropy" for S. In Thermodynamics, quantities such as S that are unchanged if we return a system to its original state, are called state functions. We do not know what S is,


but we do know how to compute its change. In the above example, the change in S was given by ΔS = (q1/T1) − (q2/T2) = 0. In general, the change in entropy of a system due to the reversible addition of an infinitesimal amount of heat δqrev from a reservoir at temperature T is

dS = δqrev/T.    (2.1.10)

We also note that S is extensive. That means that the total entropy of two non-interacting systems is equal to the sum of the entropies of the individual systems. Consider a system with a fixed number of particles2 N and a fixed volume V. If we transfer an infinitesimal amount of heat δq to this system, then the change in the internal energy of the system, dE, is equal to δq. Hence,

(∂S/∂E)_{V,N} = 1/T.    (2.1.11)

The most famous, though not the most intuitively obvious, statement of the Second Law of Thermodynamics is that: A spontaneous change in a closed system, i.e., a system that exchanges neither energy nor particles with its environment, can never lead to a decrease of the entropy. Hence, in equilibrium, the entropy of a closed system is at a maximum. The argument behind this sweeping statement is simple: consider a system with energy E, volume V and number of particles N that is in equilibrium. We denote the entropy of this system by S0(E, V, N). In equilibrium, all spontaneous changes that can happen, have happened. Now suppose that we want to change something in this system, for instance, we increase the density in one half of the system and decrease it in the other half. As the system was in equilibrium, this change does not occur spontaneously. Hence, in order to realize this change, we must perform a certain amount of work, w (for instance, by placing a piston in the system and moving it). We assume that this work is performed reversibly in such a way that E, the total energy of the system, stays constant (and also V and N). The First Law tells us that we can only keep E constant if, while doing the work, we allow an amount of heat q to flow out of the system, such that q = w. Eq. (2.1.10) tells us that when an amount of heat q flows reversibly out of the system, the entropy S of the system must decrease. Let us denote the entropy of this constrained state by S1(E, V, N) < S0(E, V, N). Having completed the change in the system, we insulate the system thermally from the rest of the world, and we remove the constraint that kept the system in its special state (taking the example of the piston: we make an opening in the piston). Now the system goes back spontaneously (and irreversibly) to equilibrium. However, no work is done, and no heat is transferred. Hence the final energy E is equal to the original energy (and V and N are also constant). This means that the system is now back in its original equilibrium state and its entropy is once more equal to S0(E, V, N). The entropy change during this spontaneous change is equal to ΔS = S0 − S1. But, as S1 < S0, it follows that ΔS > 0. As this argument is quite general, we have indeed shown that any spontaneous change in a closed system leads to an increase in the entropy. Hence, in equilibrium, the entropy of a closed system is at a maximum.

2 Thermodynamics does not assume anything about atoms or molecules. In thermodynamics, we would specify the total mass (a macroscopic quantity) of a given species in the system, rather than the number of particles. But when we talk about Statistical Mechanics or simulation techniques, we will always specify the amount of matter in a system by the number of molecules. By doing the same while discussing thermodynamics, we can keep the same notation throughout.

We can now combine the First Law and the Second Law to arrive at an expression for the energy change of a thermodynamic system. We consider an infinitesimal reversible change in the system due to heat transfer and work. The First Law states: dE = q + w. For a reversible change, we can write q = T dS. In the case of w, there are many ways in which work can be performed on a system, e.g., compression, electrical polarization, magnetic polarization, elastic deformation, etc. Here we will focus on one such form of work, namely work due to a volume change against an external pressure P. In that case, the work performed on the system during an infinitesimal volume change dV is w = −P dV, and the First Law can be written as dE = T dS − P dV. However, there is another, important way in which we can change the energy of a system, namely by transferring matter into or out of the system. For convenience, we consider a system containing only one species. As before, the number of molecules of this species is denoted by N. As we (reversibly) change the number of molecules in the system at constant V and S, the energy of the system changes: dE = μ dN. This expression defines the constant of proportionality, μ, the "chemical potential":

μ ≡ (∂E/∂N)_{S,V}.    (2.1.12)

We note that, for consistency with the rest of the book, we have defined the chemical potential in terms of the number of molecules. In classical thermodynamics, the chemical potential is defined as

μ^thermo ≡ (∂E/∂n)_{S,V},    (2.1.13)

where n denotes the number of moles. The relation between the thermodynamic μ^thermo and the molecular μ is simple: μ^thermo = NA μ,


where NA denotes Avogadro's number. The generalization of Eq. (2.1.12) to multi-component systems is straightforward and will be encountered later. We can now write down the most frequently used form of the First Law of Thermodynamics:

dE = T dS − P dV + μ dN.    (2.1.14)

Often we use Eq. (2.1.14) in the form

dS = (1/T) dE + (P/T) dV − (μ/T) dN,    (2.1.15)

which implies that

(∂S/∂V)_{E,N} = P/T

and

(∂S/∂N)_{E,V} = −μ/T.

We already knew that

(∂S/∂E)_{V,N} = 1/T.

It is important to make a distinction between thermodynamic properties that are extensive and those that are intensive. Intensive properties do not depend on the size of the system under consideration. Examples are temperature, pressure, and chemical potential. If we combine two identical systems in the same thermodynamic state, then the temperature, pressure, and chemical potential of the resulting system are the same as that of the constituent systems. In contrast, energy, entropy, volume, and number of particles are extensive. This means that they scale with the system size. Now assume that we construct a thermodynamic system by combining a large number of infinitesimal systems. Then the extensivity of E, S, V, and N implies that, for the resulting system, we have

E = T S − P V + μN.    (2.1.16)

Now consider a small variation in E:

dE = d(T S) − d(P V) + d(μN) = T dS + S dT − P dV − V dP + μ dN + N dμ.

If we combine this with the First Law of thermodynamics, we find:

0 = S dT − V dP + N dμ.    (2.1.17)

This is an important relation because it shows that T , P , and μ are dependent variables. Two of them suffice to specify the thermodynamic state of the system.


However, in addition, we always need (at least) one extensive thermodynamic variable to specify the size of the system: T , P , and μ are intensive and therefore they do not contain that information. From this point on, we can derive all of thermodynamics, except one law: the so-called Third Law of Thermodynamics. The Third Law can be formulated in a number of ways. The shortest version states that the entropy of the equilibrium state of a pure substance at T = 0 is equal to zero. However, the fact that the value of the entropy at T = 0 must be zero follows from considerations outside thermodynamics. The Third Law is not as “basic” as the First and the Second, and, anyway, we shall soon get a more direct interpretation of its meaning.

2.1.1 Auxiliary functions

Eq. (2.1.14) expresses the First Law of thermodynamics as a relation between the variations in E with those in S, V, and N. Sometimes, it is more convenient to use other independent variables, e.g., the temperature instead of the entropy, the pressure instead of the volume, or the chemical potential instead of the number of particles. "Convenient" variables are those that can be controlled in a given experiment. There is a simple procedure to recast the First Law in terms of these other variables.

Enthalpy
For instance, if we use S, P, and N as the independent variables we can carry out a so-called Legendre transform, which allows us to replace the energy with a new state function that is a function of S, P, and N. This function is called the Enthalpy (H), defined as H ≡ E + P V. Clearly

dH = dE + d(P V) = T dS − P dV + μ dN + P dV + V dP = T dS + V dP + μ dN,    (2.1.18)

showing that the independent variables controlling the enthalpy are S, P, and N.

Helmholtz free energy
Similarly, we can introduce a function F, called the Helmholtz free energy, defined as F ≡ E − T S. As in the case of the enthalpy, it is easy to show that

dF = −S dT − P dV + μ dN.    (2.1.19)

Gibbs free energy
The Gibbs free energy G is defined as F + P V, and it satisfies

dG = −S dT + V dP + μ dN.    (2.1.20)


FIGURE 2.1 An isolated system consisting of two boxes 1 and 2 that each have a fixed volume. The two subsystems can exchange heat, but subsystem 2 is much larger than subsystem 1 and therefore acts as a heat bath.

Grand Potential
Finally, we can introduce the Grand Potential, Ω, which is Ω ≡ F − μN, satisfying

dΩ = −S dT − P dV − N dμ.    (2.1.21)

However, for homogeneous systems, we rarely use the symbol Ω for the Grand Potential, because, if the pressure of a system is well-defined, F − μN = −P V, and we can replace Ω by −P V.3

3 The pressure is not a convenient variable for describing the state of confined or porous systems. For such systems, it is best to use Ω ≡ F − μN.

Auxiliary functions and the Second Law
For a closed N-particle system with a given internal energy E and volume V, equilibrium is reached when the entropy S is at a maximum. To describe experiments at conditions other than constant N, V, E, we must reformulate the Second Law of thermodynamics, because, in general, S will not be at a maximum in equilibrium if we hold P or T constant. Fortunately, we can use the original Second Law of thermodynamics to derive the condition for thermodynamic equilibrium under conditions other than constant E, V, and N. Let us consider the system shown in Fig. 2.1. The total system is isolated and has a fixed volume. In the system, we have a subsystem 1, which is much smaller than subsystem 2 (we will refer to the larger system as the "heat bath"). If we allow subsystems 1 and 2 to exchange energy, the total energy of the combined system is still conserved. Hence, we can apply the Second Law of thermodynamics to the combined system, i.e., the total entropy of the combined system must be at a maximum in equilibrium. As long as there is a net energy flux between systems 1 and 2, the total system is not yet in equilibrium and we


must have

ΔStot = ΔS1 + ΔS2 ≥ 0.    (2.1.22)

As the heat bath (subsystem 2) is much larger than subsystem 1, its temperature T will not change when a small amount of energy is exchanged with subsystem 1. Using dS = δqrev/T (Eq. (2.1.10)), we can then write:

ΔS2 = ΔE2/T.    (2.1.23)

As the total energy is conserved (i.e., ΔE2 = −ΔE1), we can then write:

ΔS1 + ΔS2 = (1/T)(T ΔS1 − ΔE1) ≥ 0.    (2.1.24)

This equation expresses the condition for equilibrium in terms of the properties of subsystem 1. The only effect of the heat bath is that it imposes the temperature T. Note that the quantity that is varied in Eq. (2.1.24) is nothing else than the Helmholtz free energy (see Eq. (2.1.19)) F1(N, V, T) ≡ E1 − T S1. Then the Second Law, in the form given by Eq. (2.1.24), implies that, for a system in contact with a heat bath,

−(1/T) ΔF1 ≥ 0.    (2.1.25)

In other words: when an N-particle system in a volume V is in contact with a heat bath at a (positive) temperature T , then a spontaneous change can never increase its Helmholtz free energy: dF ≤ 0.

(2.1.26)

Similarly, we can define two systems that can not only exchange energy, but also change their volumes in such a way that the total volume remains constant (see Fig. 2.2). As before, the combined system is isolated and its total volume is fixed. Then we can again apply the Second Law of thermodynamics to the combined system. As system 2 is again assumed to be much larger than system 1, its temperature and pressure do not change when the energy and volume of system 1 change. The entropy change of system 2 is therefore given by

ΔS2 = ΔE2/T + P ΔV2/T.    (2.1.27)

As the total energy is conserved (ΔE2 = −ΔE1) and the total volume remains constant (ΔV2 = −ΔV1), we can write for the total entropy change:

ΔS1 + ΔS2 = (1/T)(T ΔS1 − ΔE1 − P ΔV1) ≥ 0,    (2.1.28)


FIGURE 2.2 An isolated system consisting of two boxes 1 and 2 that can exchange heat and change their volume in such a way that the total volume and energy remain constant. System 2 is much larger than system 1, so it can act as a heat bath that exerts a constant pressure on system 1.

or (dG)/T ≤ 0, where dG is the variation in the Gibbs free energy (Eq. (2.1.20)). In this equation, we have again expressed the inequality in terms of the properties of system 1 only. A spontaneous change in a system at constant temperature and pressure can never increase its Gibbs free energy.

2.1.2 Chemical potential and equilibrium

Thus far, we have considered systems in contact with a reservoir at constant temperature and constant pressure. Let us now consider what happens if we bring a system in contact with a "particle reservoir," i.e., a system at constant chemical potential. Clearly, as T, P, and μ are linearly dependent, we cannot consider a system in contact with a reservoir at constant T, P, μ, because these variables are not enough to fix the size of the system. Hence, when considering a system in contact with a particle reservoir, we should fix at least one extensive variable. The most convenient choice is to fix the volume V. So, we will consider a system (1) of volume V in contact with a reservoir (system 2) at constant T and μ. As before, we can use the Second Law of thermodynamics as applied to the combined system, to derive the equilibrium condition for system 1 under conditions where this system can exchange heat and particles with the reservoir (system 2):

ΔStot = ΔS1 + ΔS2 = ΔS1 − ΔE1/T + μ ΔN1/T ≥ 0    (2.1.29)

or

ΔStot = (1/T)(T ΔS1 − ΔE1 + μ ΔN1) ≥ 0,    (2.1.30)

or, using Eq. (2.1.21), ΔΩ ≤ 0. Hence, at constant T, V, and μ, Ω is at a minimum in equilibrium.


We have now obtained the conditions for equilibrium for some of the conditions of the greatest practical importance, namely:

Equilibrium at constant N, V, E:   S is maximal
Equilibrium at constant N, V, T:   F is minimal
Equilibrium at constant N, P, T:   G is minimal
Equilibrium at constant μ, V, T:   Ω is minimal    (2.1.31)

Using the definitions of F, G, and Ω, we can write down expressions for an infinitesimal variation in each of these quantities. For instance: dF = dE − d(T S). Using the First Law, we can rewrite this as dF = T dS − P dV + μ dN − T dS − S dT = −S dT − P dV + μ dN. Similarly, we can write:

dS = (1/T) dE + (P/T) dV − (μ/T) dN
dF = −S dT − P dV + μ dN
dG = −S dT + V dP + μ dN
dΩ = −S dT − P dV − N dμ.    (2.1.32)

In what follows, we will replace Ω by −P V, unless explicitly stated otherwise. Eq. (2.1.32), in combination with the equilibrium conditions (2.1.31), are extremely important because they specify the conditions under which a system is in thermodynamic (chemical or phase) equilibrium. Later, we shall make extensive use of the conditions for phase equilibrium. Next, consider a closed system containing two subsystems. The total volume of the system V = V1 + V2 is fixed. Similarly, N1 + N2 and E1 + E2 are fixed. These conditions imply that dV1 = −dV2, dN1 = −dN2 and dE1 = −dE2. The Second Law tells us that, in equilibrium, the total entropy of the system Stot = S1 + S2 must be an extremum (note that Stot is not fixed). Hence, the derivatives of Stot with respect to E1, N1 and V1 must vanish, i.e.:

∂(S1 + S2)/∂E1 = ∂S1/∂E1 − ∂S2/∂E2 = 0
∂(S1 + S2)/∂V1 = ∂S1/∂V1 − ∂S2/∂V2 = 0
∂(S1 + S2)/∂N1 = ∂S1/∂N1 − ∂S2/∂N2 = 0.    (2.1.33)


If we combine Eq. (2.1.33) with the expression for dS, Eq. (2.1.32), we obtain

1/T1 = 1/T2
P1/T1 = P2/T2
μ1/T1 = μ2/T2.    (2.1.34)

The first condition implies thermal equilibrium between the two systems, i.e., T1 = T2 ≡ T. Then the second condition simply implies P1 = P2, and the third is μ1 = μ2.4 Eq. (2.1.34) is the starting point for all free-energy-based calculations to locate the point where two systems (two phases) are in equilibrium (see Chapter 8).

4 For a multicomponent mixture, the chemical potential μα of each component α in the mixture must be equal in the two subsystems: μα1 = μα2.

2.1.3 Energy, pressure, and chemical potential

One of the aims of molecular simulations is to compute the thermodynamic properties of a system based on our knowledge of the interactions between constituent molecules. In subsequent chapters, we discuss how the relevant expressions can be derived from Statistical Mechanics. Here we focus on one general feature: in all cases, we will start from the thermodynamic definitions of the quantities to be computed. For instance, the starting point for computing the pressure, P, is a thermodynamic relation of the type:

P = −(∂F/∂V)_{N,T}.    (2.1.35)

This is the expression that we would use if we consider a system at fixed N, V, and T. However, if we would consider a system at constant N, V and E, we would use:

P/T = (∂S/∂V)_{N,E}.

Analogous expressions can be written down for other thermodynamic variables and other conditions. For instance, for a system at constant N, V and T, the energy is given by the following thermodynamic relation:

E = F + T S = F − T (∂F/∂T)_{V,N} = (∂(F/T)/∂(1/T))_{V,N}.    (2.1.36)


We can also obtain the chemical potential, μ, from F using:

μ = (∂F/∂N)_{T,V}.    (2.1.37)

As most Monte Carlo simulations are carried out at constant N, V, and T, we will use these relations extensively.
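As a quick consistency check of Eqs. (2.1.35)-(2.1.37), one can differentiate a model free energy symbolically. The short Python sketch below is our own illustration, not part of the original text: it assumes the classical ideal-gas form F = -N kB T [ln(V/(N Λ³)) + 1] purely as a convenient test function, and all names in it are choices made here.

# Minimal sympy check of Eqs. (2.1.35)-(2.1.37), assuming the classical ideal-gas
# free energy F = -(N/beta)[ln(V/(N*Lambda^3)) + 1] with kB = 1 and
# Lambda^3 = (beta*h^2/(2*pi*m))^(3/2). Illustrative model only.
import sympy as sp

N, V, beta, h, m = sp.symbols("N V beta h m", positive=True)

Lambda3 = (beta * h**2 / (2 * sp.pi * m)) ** sp.Rational(3, 2)   # thermal wavelength cubed
F = -(N / beta) * (sp.log(V / (N * Lambda3)) + 1)                # model free energy

P = sp.simplify(-sp.diff(F, V))           # Eq. (2.1.35): gives N/(V*beta), i.e. P V = N kB T
E = sp.simplify(sp.diff(beta * F, beta))  # Eq. (2.1.36): gives 3*N/(2*beta), i.e. E = (3/2) N kB T
mu = sp.simplify(sp.diff(F, N))           # Eq. (2.1.37): equals kB T ln(N*Lambda^3/V)

print(P, E, mu)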

Relations between partial molar derivatives
All extensive thermodynamic variables X of a multi-component system can be written as:

X = Σ_i Ni xi,

where xi is the partial derivative of the quantity X with respect to Ni, the number of particles of species i, at constant P, T, and Nj:

xi ≡ (∂X/∂Ni)_{P,T,{Nj}}.

For instance, in the case that X = S, we have

S = Σ_i Ni si,

where we have, for the molar entropy of component i,

si ≡ (∂S/∂Ni)_{P,T,{Nj}}.

It is important to note that it makes a difference what thermodynamic variables we keep constant. Clearly, si is not equal to (∂S/∂Ni)_{E,V,{Nj}} = −μi/T. But there is no inconsistency: when we add a particle of species i at constant P and T, the internal energy changes by an amount ei and the volume by an amount vi. To reduce the system to constant energy and volume, we should therefore compute

(∂S/∂Ni)_{E,V,{Nj}} = (∂S/∂Ni)_{P,T,{Nj}} − vi (∂S/∂V)_{E,{N}} − ei (∂S/∂E)_{V,{N}}
                    = si − P vi/T − ei/T
                    = (T si − P vi − ei)/T = −μi/T = −(1/T) (∂G/∂Ni)_{P,T,{Nj}}.

2.2 Statistical thermodynamics

In the previous section, we introduced the framework of thermodynamics. Thermodynamics is a phenomenological theory: it provides relations between experimentally observable quantities. However, it does not predict these quantities on the basis of a microscopic model. Statistical Mechanics provides the link between the microscopic description of a system of interacting atoms or molecules and the prediction of thermodynamical observables such as pressure or chemical potential. For all but the simplest systems, the statistical mechanical expressions for the thermodynamical observables are too complex to be evaluated analytically. However, in many cases, numerical simulations will allow us to obtain accurate estimates of the quantities of interest.

2.2.1 Basic assumption

Most of the computer simulations that we discuss are based on the assumption that classical mechanics can be used to describe the motions of atoms and molecules. This assumption leads to a great simplification in almost all calculations, and it is therefore most fortunate that it is justified in many cases of practical interest. Surprisingly, it turns out to be easier to derive the basic laws of statistical mechanics using the language of quantum mechanics. We will follow this route of least resistance. In fact, for our derivation, we need only little quantum mechanics. Specifically, we need the fact that a quantum mechanical system can be found in different states. For the time being, we limit ourselves to quantum states that are eigenvectors of the Hamiltonian H of the system (i.e., energy eigenstates). For any such state |i⟩, we have that H|i⟩ = Ei|i⟩, where Ei is the energy of state |i⟩. Most examples discussed in quantum mechanics textbooks concern systems with only a few degrees of freedom (e.g., the one-dimensional harmonic oscillator or a particle in a box). For such systems, the degeneracy of energy levels will be small. However, for the systems that are of interest to statistical mechanics (i.e., systems with O(10^23) particles), the degeneracy of energy levels is super-astronomically large. In what follows, we denote by Ω the number of eigenstates with energy E of a system of N particles in a volume V, Ω = Ω(E, V, N). We now express the basic assumption of statistical mechanics as follows: a system with fixed N, V, and E is equally likely to be found in any of its Ω(E) eigenstates. Much of statistical mechanics follows from this simple (but nontrivial) assumption. To see this, let us again consider a system with total energy E that consists of two weakly interacting subsystems. In this context, weakly interacting means that the subsystems can exchange energy but that we can write the total energy of the system as the sum of the energies E1 and E2 of the subsystems. There are


many ways in which we can distribute the total energy over the two subsystems such that E1 + E2 = E. For a given choice of E1, the total number of degenerate states of the system is Ω1(E1) × Ω2(E2). Note that the total number of states is the product of the number of states in the individual systems. In what follows, it is convenient to have a measure of the degeneracy of the subsystems that is additive. A logical choice is to take the (natural) logarithm of the degeneracy. Hence:

ln Ω(E1, E − E1) = ln Ω1(E1) + ln Ω2(E − E1).    (2.2.1)

We assume that subsystems 1 and 2 can exchange energy. What is the most likely distribution of the energy? We know that every energy state of the total system is equally likely. But the number of eigenstates corresponding to a given distribution of the energy over the subsystems depends very strongly on the value of E1. We wish to know the most likely value of E1, that is, the one that maximizes ln Ω(E1, E − E1). The condition for this maximum is that

(∂ ln Ω(E1, E − E1)/∂E1)_{N,V,E} = 0    (2.2.2)

or, in other words,

(∂ ln Ω1(E1)/∂E1)_{N1,V1} = (∂ ln Ω2(E2)/∂E2)_{N2,V2}.    (2.2.3)

We introduce the shorthand notation

β(E, V, N) ≡ (∂ ln Ω(E, V, N)/∂E)_{N,V}.    (2.2.4)

With this definition, we can write Eq. (2.2.3) as

β(E1, V1, N1) = β(E2, V2, N2).    (2.2.5)

Clearly, if initially, we put all energy in system 1 (say), there will be energy transfer from system 1 to system 2 until Eq. (2.2.3) is satisfied. From that moment on, no net energy flows from one subsystem to the other, and we say that the two subsystems are in (thermal) equilibrium. When this equilibrium is reached, ln Ω of the total system is at a maximum. This suggests that ln Ω is somehow related to the thermodynamic entropy S of the system. As we have seen in the previous section, the Second Law of thermodynamics states that the entropy of a system N, V, and E is at its maximum when the system has reached thermal equilibrium. To establish the relation between ln Ω and the entropy we could simply start by assuming that the entropy is equal to ln Ω and then check whether predictions based on this assumption agree with experiment. If we do so, we find that the answer is "not quite": for (unfortunate) historical reasons (entropy already had


units before statistical mechanics had been created), entropy is not simply equal to ln Ω; rather we have

S(N, V, E) ≡ kB ln Ω(N, V, E),    (2.2.6)

where kB is Boltzmann's constant, which in S.I. units has the value 1.380649 × 10⁻²³ J/K. With this identification, we see that our assumption that all degenerate eigenstates of a quantum system are equally likely immediately implies that, in thermal equilibrium, the entropy of a composite system is at a maximum. It would be a bit premature to refer to this statement as the Second Law of thermodynamics, as we have not yet demonstrated that the present definition of entropy is, indeed, equivalent to the thermodynamic definition. We simply take an advance on this result. The next thing to note is that thermal equilibrium between subsystems 1 and 2 implies that β1 = β2. In everyday life, we have another way to express the same thing: we say that two bodies brought into thermal contact are in equilibrium if their temperatures are the same. This suggests that β must be related to the absolute temperature. The thermodynamic definition of temperature is given by Eq. (2.1.11), or

1/T = (∂S/∂E)_{V,N}.    (2.2.7)

If we use the same definition here, we find that

β = 1/(kB T).    (2.2.8)
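The argument behind Eqs. (2.2.2)-(2.2.5) can be made concrete with a small numerical sketch. The example below is our own illustration (not from the original text) and assumes, purely as a model, that each subsystem has a degeneracy growing as Ω_i(E_i) = E_i^{a_i}; locating the maximum of ln Ω(E1, E − E1) then shows that β1 = β2 at the most probable energy split.

# Two model subsystems with ln Omega_i(E_i) = a_i * ln(E_i) (illustrative assumption).
# At the most probable split of the total energy, beta_1 = a1/E1 equals beta_2 = a2/E2.
import numpy as np

a1, a2 = 3.0e4, 7.0e4          # model "sizes" of the two subsystems
E_total = 1.0e5

E1 = np.linspace(1.0, E_total - 1.0, 200_000)
ln_omega = a1 * np.log(E1) + a2 * np.log(E_total - E1)   # ln Omega(E1, E - E1)

E1_star = E1[np.argmax(ln_omega)]      # most probable value of E1 (about a1*E/(a1+a2))
beta1 = a1 / E1_star                   # d ln Omega_1 / dE_1 at the maximum
beta2 = a2 / (E_total - E1_star)       # d ln Omega_2 / dE_2 at the maximum
print(E1_star, beta1, beta2)           # beta1 and beta2 agree at the maximum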

2.2.2 Systems at constant temperature

Now that we have a statistical mechanical definition of temperature, we can consider what happens if, as in section 2.1.1, we have a small system (denoted by system 1) in thermal equilibrium with a large heat bath (system 2) (see Fig. 2.1). The total system is closed; that is, the total energy E = E1 + E2 is fixed. Suppose that system 1 is prepared in one specific quantum state i with energy Ei. The bath then has an energy E2 = E − Ei and the degeneracy of the bath is given by Ω2(E − Ei). Clearly, the degeneracy of the bath determines the probability Pi to find system 1 in state i:

Pi = Ω2(E − Ei) / Σ_j Ω2(E − Ej).    (2.2.9)

To compute Ω2(E − Ei), we assume that our bath, system 2, is much larger than system 1, which allows us to expand ln Ω2(E − Ei) around Ei = 0:

ln Ω2(E − Ei) = ln Ω2(E) − Ei ∂ ln Ω2(E)/∂E + O(1/E),    (2.2.10)


and, using Eqs. (2.2.6) and (2.2.7),

ln Ω2(E − Ei) = ln Ω2(E) − Ei/kBT + O(1/E).    (2.2.11)

If we insert this result in Eq. (2.2.9), and take the limit E → ∞, we get

Pi = exp(−Ei/kBT) / Σ_j exp(−Ej/kBT).    (2.2.12)

This is the well-known Boltzmann distribution for a system at temperature T. Knowledge of the energy distribution allows us to compute the average energy ⟨E⟩ of the system at the given temperature T:

⟨E⟩ = Σ_i Ei Pi
    = Σ_i Ei exp(−Ei/kBT) / Σ_j exp(−Ej/kBT)
    = −∂ ln[ Σ_i exp(−Ei/kBT) ] / ∂(1/kBT)
    = −∂ ln Q / ∂(1/kBT),    (2.2.13)

where, in the last line, we have defined the partition function, Q ≡ Q(N, V, T). If we compare Eq. (2.2.13) with the thermodynamic relation Eq. (2.1.36),

E = ∂(F/T)/∂(1/T),

where F is the Helmholtz free energy, we see that F is related to the partition function Q:

F = −kBT ln Q = −kBT ln[ Σ_i exp(−Ei/kBT) ].    (2.2.14)

Strictly speaking, F is fixed only up to a constant. Or, what amounts to the same thing, the reference point of the energy can be chosen arbitrarily. In what follows, we can use Eq. (2.2.14) without loss of generality. The relation between the Helmholtz free energy and the partition function is often more convenient to use than the relation between ln Ω and the entropy. As a consequence, Eq. (2.2.14) is the workhorse of equilibrium statistical mechanics.
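For a hypothetical system with only a handful of energy levels, Eqs. (2.2.12)-(2.2.14) can be evaluated directly. The following short sketch is our own illustration (the level energies are arbitrary choices); it also verifies Eq. (2.2.13) by numerically differentiating ln Q with respect to β = 1/kBT.

# Boltzmann distribution, partition function and average energy for a
# hypothetical few-level system (arbitrary level energies, reduced units).
import numpy as np

kB, T = 1.0, 2.0
levels = np.array([0.0, 1.0, 1.0, 3.0])    # energies E_i of the model levels

boltz = np.exp(-levels / (kB * T))
Q = boltz.sum()                            # partition function, Eq. (2.2.14)
P = boltz / Q                              # Boltzmann distribution, Eq. (2.2.12)
E_avg = (levels * P).sum()                 # <E> = sum_i E_i P_i
F = -kB * T * np.log(Q)                    # Helmholtz free energy, Eq. (2.2.14)

# Check <E> = -d ln Q / d beta with beta = 1/(kB T), via a finite difference.
beta, dbeta = 1.0 / (kB * T), 1.0e-6
lnQ = lambda b: np.log(np.exp(-b * levels).sum())
E_deriv = -(lnQ(beta + dbeta) - lnQ(beta - dbeta)) / (2 * dbeta)

print(P, E_avg, F, E_deriv)                # E_avg and E_deriv agree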

2.2.3 Towards classical statistical mechanics

Thus far, we have formulated statistical mechanics in purely quantum mechanical terms. The entropy is related to the density-of-states of a system with energy


E, volume V, and number of particles N. Similarly, the Helmholtz free energy is related to the partition function Q, a sum over all quantum states i of the Boltzmann factor exp(−Ei/kBT). To be specific, let us consider the average value of some observable A. We know the probability that a system at temperature T will be found in an energy eigenstate with energy Ei and we can therefore compute the thermal average of A as

⟨A⟩ = Σ_i exp(−Ei/kBT) ⟨i|A|i⟩ / Σ_j exp(−Ej/kBT),    (2.2.15)

where ⟨i|A|i⟩ denotes the expectation value of the operator A in quantum state i. This equation suggests how we should go about computing thermal averages: first we solve the Schrödinger equation for the (many-body) system of interest, and next we compute the expectation value of the operator A for all those quantum states that have a non-negligible statistical weight. Unfortunately, this approach is doomed for all but the simplest systems. First of all, we cannot hope to solve the Schrödinger equation for an arbitrary many-body system. And second, even if we could, the number of quantum states that contribute to the average in Eq. (2.2.15) would be so huge (O(10^(10^25))) that a numerical evaluation of all expectation values would be inconceivable. Fortunately, Eq. (2.2.15) can be simplified to a more workable expression in the classical limit. To this end, we first rewrite Eq. (2.2.15) in a form that is independent of the specific basis set. We note that exp(−Ei/kBT) = ⟨i| exp(−H/kBT)|i⟩, where H is the Hamiltonian of the system. Using this relation, we can write

⟨A⟩ = Σ_i ⟨i| exp(−H/kBT) A|i⟩ / Σ_j ⟨j| exp(−H/kBT)|j⟩ = Tr exp(−H/kBT) A / Tr exp(−H/kBT),    (2.2.16)

where Tr denotes the trace of the operator. As the value of the trace of an operator does not depend on the choice of the basis set, we can compute thermal averages using any basis set we like. Preferably, we use simple basis sets, such as the set of eigenfunctions of the position or the momentum operator. Next, we use the fact that the Hamiltonian H is the sum of a kinetic part K and a potential part U. The kinetic energy operator is a quadratic function of the momenta of all particles. As a consequence, momentum eigenstates are also eigenfunctions of the kinetic energy operator. Similarly, the potential energy operator is a function of the particle coordinates. Matrix elements of U, therefore, are most conveniently computed in a basis set of position eigenfunctions. However, H = K + U itself is not diagonal in either basis set nor is exp[−β(K + U)]. However, if we could replace exp(−βH) by exp(−βK) exp(−βU), then we could simplify Eq. (2.2.16) considerably. In general, we cannot make this replacement because exp(−βK) exp(−βU) = exp{−β[K + U + O([K, U])]},


where [K, U] is the commutator of the kinetic and potential energy operators: O([K, U]) stands for all terms containing commutators and higher-order commutators of K and U. It is easy to verify that the commutator [K, U] is of order ℏ (ℏ ≡ h/(2π), where h is Planck's constant). Hence, in the limit ℏ → 0, we may ignore the terms of order O([K, U]). In that case, we can write

Tr exp(−βH) ≈ Tr exp(−βU) exp(−βK).    (2.2.17)

If we use the notation |r⟩ for eigenvectors of the position operator and |k⟩ for eigenvectors of the momentum operator, we can express Eq. (2.2.17) as

Tr exp(−βH) = Σ_{r,k} ⟨r|e^{−βU}|r⟩ ⟨r|k⟩ ⟨k|e^{−βK}|k⟩ ⟨k|r⟩.    (2.2.18)

All matrix elements can be evaluated directly:

⟨r| exp(−βU)|r⟩ = exp[−βU(r^N)],

where U(r^N) on the right-hand side is no longer an operator but a function of the coordinates of all N particles. Here, and in what follows, we denote this set of coordinates by r^N. Similarly,

⟨k| exp(−βK)|k⟩ = exp[−β Σ_{i=1}^{N} p_i²/(2m_i)],

where p_i = ℏk_i, and

⟨r|k⟩⟨k|r⟩ = 1/V^N,

where V is the volume of the system and N the number of particles. Finally, we can replace the sum over states by an integration over all coordinates and momenta. The final result is

Tr exp(−βH) ≈ (1/(h^{dN} N!)) ∫ dp^N dr^N exp{−β[Σ_i p_i²/(2m_i) + U(r^N)]} ≡ Q_classical,    (2.2.19)

where d is the dimensionality of the system and the last line defines the classical partition function. The factor 1/N ! corrects for the fact that any permutation of indistinguishable particles corresponds to the same macroscopic state.5 5 The factor 1/N ! is often justified by invoking the quantum-mechanical indistinguishability of

identical particles. However, even in systems of non-identical particles that are so similar that they cannot be separated, the same factor is necessary to ensure extensivity of the Helmholtz free energy [50–52].

Thermodynamics and statistical mechanics Chapter | 2

31

Similarly, we can derive the classical limit for Tr exp(−βH)A, and finally, we can write the classical expression for the thermal average of the observable A as 

A =

  2  N   N N  dpN drN exp −β A p ,r i pi /(2mi ) + U r   . (2.2.20)     2 N dpN drN exp −β j pj /(2mj ) + U r

Eqs. (2.2.19) and (2.2.20) constitute the starting points for a wide range of simulations of classical many-body systems. Eqs. (2.2.19) and (2.2.20) are expressed in terms of high-dimensional integrals over the dN momenta and dN coordinates of all N particles, where d denotes the dimensionality of the system. The 2dN-dimensional space spanned by all momenta and coordinates is called phase space.

2.3 Ensembles In statistical mechanics as in thermodynamics, the state of the system is determined by a number of control parameters, some of them extensive (e.g., N , the number of particles), some of them intensive (e.g., the pressure P or the temperature T ). For historical reasons we denote the collection of all realizations of a system, which are compatible with a set of control parameters by the name “ensemble”. There are different ensembles for different sets of control parameters. The historical names of these ensembles (“micro-canonical,” “canonical,” “grand-canonical,” etc.) are not particularly illuminating. Below, we will list these names when we describe the most frequently used ensembles. However, in what follows, we will often denote ensembles by the control variables that are kept constant, e.g., the “constant-N V E ensemble” or the “constant-μV T ensemble”. In the following sections, we assume for convenience that the system consists of particles with no internal degrees of freedom (i.e., no rotations, vibrations, or electronic excitations). That assumption simplifies the notation, but for molecular systems, we will of course have to take internal degrees of freedom into account.

2.3.1 Micro-canonical (constant-NVE) ensemble In the micro-canonical ensemble the energy, volume, and the number of particles of every species are kept constant.6 In a classical system, the total energy is given by the Hamiltonian, H, which is the sum of the kinetic and potential 6 In simulations we can fix different thermodynamic parameters X, Y, Z, · · · : N , V , E is just one

example. For the sake of compactness, we will often refer to an ensemble with X, Y , Z constant as the XY Z-ensemble. We refer to the corresponding simulations as XY Z-MC or XY Z-MD.

32 PART | I Basics

energy: H=

N    p2i + U rN , 2m

(2.3.1)

i=1

in which we have assumed that the potential energy does not depend on the momenta p. The classical partition function in the micro-canonical ensemble is the phase-space integral over a hyper-surface where the value of the Hamiltonian is equal to the imposed value of the energy E.   The constraint that the system has to be on the hyper-surface H pN , rN = E can be imposed via a δ-function, and hence for a three-dimensional system (d = 3):     1 E,V ,N ≡ 3N (2.3.2) dpN drN δ H pN , rN − E . h N!

2.3.2 Canonical (constant-NVT) ensemble The ensemble of states at constant N , V and T is called the “canonical ensemble.” As described in the previous section, the classical partition function, Q, for a system of atoms at constant N , V and T is given by:    1 QN,V ,T ≡ 3N (2.3.3) dpN drN exp −βH pN , rN . h N! As the potential energy does not depend on the momenta of the system, the integration over the momenta can be done analytically7

 3N    N  p2i 2πm 3N/2 p2 = . dp exp −β = dp exp −β 2m 2m β i=1 (2.3.4) If we define the thermal Broglie wavelength as

N

 =

h2 , 2πmkB T

(2.3.5)

we can write the canonical partition function as:    1 1 Q (N, V , T ) = 3N drN exp −βU rN ≡ 3N Z (N, V , T ) , N! N! (2.3.6) 7 For molecular systems, in particular for systems of flexible molecules where bond lengths are

fixed by holonomic constraints (see section 14.1), the integration over momenta may result in a Jacobian that depends on the coordinates of the nuclei in the molecule.

Thermodynamics and statistical mechanics Chapter | 2

which defines the configurational integral, Z ≡ Z(N, V , T ):    Z (N, V , T ) = drN exp −βU rN .

33

(2.3.7)

Unlike the integral over momenta, the configurational integral can almost never be computed analytically. The canonical partition function Q (N, V , T ) is related to the Helmholtz free energy, F , through βF = − ln Q (N, V , T ) .

(2.3.8)

Having defined the configurational  integral Z (N, V , T ), we can write the ensemble average of a quantity A rN that depends on the coordinates only as      1 drN A rN exp −βU rN . (2.3.9)

A = Z (N, V , T ) The probability (N ) of finding our system in a particular configuration rN , is given by        1 N N N dr δ rN − r exp −βU r N rN = Z (N, V , T )    ∝ exp −βU rN . (2.3.10)

2.3.3 Isobaric-isothermal (constant-NPT) ensemble The canonical ensemble describes a system at constant temperature and volume. In experiments, it is more common to fix the pressure P than the volume V . As with the constant-N V T ensemble, we can derive the probability distribution function for the constant-N P T ensemble by considering a closed system that consists of the system of interest (system 1), that is in contact with a reservoir (system 2), which acts as both a thermostat and barostat (see Fig. 2.2). The two sub-systems can exchange energy and change their volume in such a way that the total volume is constant. For simplicity, we start with the quantum expression for the total entropy of the system, Eq. (2.2.6): S = S1 + S2 = kB ln 1 (E1 , V1 , N1 ) + kB ln 2 (E2 , V2 , N2 ) .

(2.3.11)

As system 2 is much larger than system 1, we can make an expansion of  around V and E:   ∂ ln  (E, V , N2 ) ln  (E2 , V2 , N2 ) = ln  (E, V , N2 ) + (E − E1 ) ∂E N,V   ∂ ln  (E, V , N2 ) + (V − V1 ) + · · · ∂V N,E

34 PART | I Basics

= ln  (E, V , N2 ) +

E − E1 P (V − V1 ) + + ··· , kB T kB T (2.3.12)

where we have used Eq. (2.1.15), which relates the derivative of the entropy with respect to energy and volume to 1/T and P /T , respectively. We can then write the probability to find system 1 with an energy E1 and volume V1 as:  (E − E1 , V − V1 , N2 )   P (E1 , V1 , N1 ) =     j dV  E − Ej , V − V , N2   E1 P V1 ∝ exp − − . kB T kB T

(2.3.13)

Taking the classical limit, we obtain an expression for the N , P , T -partition function, Q ≡ Q (N, P , T ), which is an integral over particle coordinates and over the volume V :    βP dV exp (−βP V ) drN exp −βU rN , Q (N, P , T ) ≡ 3N N! (2.3.14) where the factor βP has been included to make Q (N, P , T ) dimensionless. From Eq. (2.3.14) we obtain the probability to find our system in a particular configuration rN and volume V :      (2.3.15) N rN ∝ exp −βP V − βU rN . Q (N, P , T ) is related to the Gibbs free energy, G, via βG = − ln Q (N, P , T ) .

(2.3.16)

The above relation follows from the fact that, in the thermodynamic limit, the integral in Eq. (2.3.14) is completely dominated by the maximum value of the integrand ∼ exp {−β[P V ∗ + F (N, V ∗ , T )]}, where V ∗ is the volume for which the integrand is maximal. This maximum term method is used to establish the relation between thermodynamic variables and statistical mechanical partition functions for other ensembles.8

2.3.4 Grand-canonical (constant-μVT) ensemble Thus far, we have considered ensembles in which the total number of particles remains constant. It is often convenient to consider open systems, in which the 8 The maximum-term or saddle-point approximation relies on the observation that we can approximate the (one-dimensional) integral I of a function eR(x) , which is sharply peaked at x ∗ , by replacing R(x) close to x ∗ by R(x ∗ ) − (c/2)(x − x ∗ )2 , where c equals (minus) the second derivative

 ∗ ∗ 2 ∗ √ of R(x) at x ∗ . The resulting Gaussian integral dx eR(x )−(c/2)(x−x ) yields I ≈ eR(x ) 2π/c. In statistical√mechanics, ln I is related to the appropriate thermodynamic potential, and the contribution of ln 2π/c is negligible in the thermodynamic limit.

Thermodynamics and statistical mechanics Chapter | 2

35

FIGURE 2.3 An isolated system consisting of two boxes 1 and 2 that can exchange heat and exchange particles in such a way that the total energy and number of particles remain constant. System 2 is much larger compared to system 1, so it can act as a heat bath and buffer on system 1.

number of particles is allowed to change. We consider again a system containing two subsystems (Fig. 2.3). The volume of system 1 is fixed, but it is allowed to exchange energy and particles with the reservoir 2. As before, the entire system is closed, and the entropy of the entire system is given by Eq. (2.3.11). As system 2 is much larger than system 1, we can expand ln  around E and N:  ∂ ln  (E, V , N2 ) ln 2 (E2 , V2 , N2 ) = ln  (E, V2 , N) + (E − E1 ) ∂E N,V   ∂ ln  (E, V2 , N) + (N − N1 ) + · · · ∂N E,V E − E1 μ (N − N1 ) − + ··· = ln  (E, V2 , N) + kB T kB T (2.3.17) 

where we have used Eq. (2.1.15) to relate the derivative of the entropy with respect to the number of particles to the chemical potential. It then follows that the probability to find system 1 with an energy E1 and number of particles N1 is:

P(E1, N1) = Ω(E − E1, V2, N − N1) / [ Σ_{M=0}^{N} Σj Ω(E − Ej, V2, N − M) ] ∝ exp(−E1/(kB T) + μN1/(kB T)).   (2.3.18)

The classical partition function now involves a summation over the number of particles in system 1. As the reservoir is much larger than system 1, we can replace the upper limit of this summation by ∞:


Ξ(μ, V, T) ≡ Σ_{N=0}^{∞} [exp(βμN) / (Λ^{3N} N!)] ∫dr^N exp[−βU(r^N)]
           = Σ_{N=0}^{∞} exp(βμN) e^{−βF(N,V,T)},   (2.3.19)

where we have defined the grand-canonical partition function, Ξ ≡ Ξ(μ, V, T). From Eq. (2.3.19) we obtain the probability to find N particles in system 1, in configuration r^N:

N(N; r^N) ∝ exp[βμN − βU(r^N)].   (2.3.20)

From Eqs. (2.3.19) and (2.3.8) it follows, using the maximum-term method, that

−kB T ln Ξ = F − Nμ = Ω,   (2.3.21)

where Ω is the grand potential defined in section 2.1.2. For homogeneous systems, we can replace Ω by −PV.
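As an illustration of Eqs. (2.3.19) and (2.3.20), the probability of finding N particles in the system can be evaluated explicitly for an ideal gas, for which exp[−βF(N, V, T)] = V^N/(Λ^{3N} N!). The sketch below (Python; reduced units with Λ = 1, and arbitrary values of βμ and V) shows that the resulting distribution has mean ⟨N⟩ = e^{βμ} V/Λ³:

```python
import math
import numpy as np

beta_mu, V = 0.5, 100.0   # arbitrary reduced chemical potential and volume
N = np.arange(0, 800)

# ln P(N) = beta*mu*N - beta*F(N,V,T) + const = beta*mu*N + N ln V - ln N! + const
logP = np.array([beta_mu * n + n * math.log(V) - math.lgamma(n + 1) for n in N])
P = np.exp(logP - logP.max())
P /= P.sum()

print("<N> from P(N)       :", (N * P).sum())
print("ideal-gas prediction:", math.exp(beta_mu) * V)   # exp(beta*mu) V / Lambda^3
```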

2.4 Ergodicity

Thus far, we have discussed the average behavior of many-body systems in a purely static sense: we introduced only the assumption that every quantum state of a many-body system with energy E is equally likely to be occupied. Such an average over all possible quantum states of a system is called an ensemble average. However, this is not how we usually think about the average behavior of a system. In most experiments, we perform a series of measurements during a certain time interval and then determine the average of these measurements. In fact, the idea behind Molecular Dynamics simulations is precisely that we can study the average behavior of a many-particle system simply by computing the natural time evolution of that system numerically and then averaging the quantity of interest over a sufficiently long time.

To take a specific example, let us consider a fluid consisting of atoms. Suppose that we wish to compute the average density of the fluid at a distance r from a given atom i, ρi(r). Clearly, the instantaneous density depends on the coordinates rj of all particles j in the system. As time progresses, the atomic coordinates will change (according to Newton's equations of motion), and hence the density around atom i will change. Provided that we have specified the initial coordinates and momenta of all atoms (r^N(0), p^N(0)), we know, at least in principle, the time evolution of ρi(r; r^N(0), p^N(0), t). In a standard Molecular Dynamics simulation, we measure the time-averaged density ρ̄i(r) of a system of N atoms, in a volume V, at a constant total energy E:

ρ̄i(r) = lim_{t→∞} (1/t) ∫0^t dt′ ρi(r; t′).   (2.4.1)


Note that, in writing down this equation, we have implicitly assumed that, for t sufficiently long, the time average does not depend on the initial conditions. This is, in fact, a subtle assumption that is not true in general (see e.g., [53]). However, we shall disregard subtleties and simply assume that, once we have specified N, V, and E, time averages do not depend on the initial coordinates and momenta. If that is so, then we would not change our result for ρ̄i(r) if we average over many different initial conditions; that is, we consider the hypothetical situation where we run a large number of Molecular Dynamics simulations at the same values for N, V, and E, but with different initial coordinates and momenta,

ρ̄i(r) = [ Σ_{initial conditions} lim_{t→∞} (1/t) ∫0^t dt′ ρi(r; r^N(0), p^N(0), t′) ] / (number of initial conditions).   (2.4.2)

We now consider the limiting case where we average over all initial conditions compatible with the imposed values of N, V, and E. In that case, we can replace the sum over initial conditions with an integral:

[ Σ_{initial conditions} f(r^N(0), p^N(0)) ] / (number of initial conditions) → [ ∫E dr^N dp^N f(r^N(0), p^N(0)) ] / Ω(N, V, E),   (2.4.3)

where f denotes an arbitrary function of the initial conditions r^N(0), p^N(0), while Ω(N, V, E) = ∫E dr^N dp^N, where we have ignored a constant factor. Note that the right-hand side of Eq. (2.4.3) is nothing else than the micro-canonical (constant-NVE) average of f. In what follows, we denote an ensemble average by ⟨···⟩ to distinguish it from a time average, denoted by a bar. If we switch the order of the time averaging and the averaging over initial conditions, we find

ρ̄i(r) = lim_{t→∞} (1/t) ∫0^t dt′ ⟨ρi(r; r^N(0), p^N(0), t′)⟩NVE.   (2.4.4)

However, the ensemble average in this equation does not depend on the time t′. This is so because there is a one-to-one correspondence between the initial phase-space coordinates of a system and those that specify the state of the system at a later time t′ (see e.g., [53,54]). Hence, averaging over all initial phase-space coordinates is equivalent to averaging over the time-evolved phase-space coordinates. For this reason, we can leave out the time averaging in Eq. (2.4.4), and we find

ρ̄i(r) = ⟨ρi(r)⟩NVE.   (2.4.5)

This equation states that, if we wish to compute the average of a function of the coordinates and momenta of a many-particle system, we can either compute that quantity by time averaging (the “MD” approach) or by ensemble averaging (the


“MC” approach). It should be stressed that the preceding paragraphs are meant only to make Eq. (2.4.5) plausible, not as a proof. In fact, that would have been quite impossible because Eq. (2.4.5) is not true in general. However, in what follows, we shall simply assume that the “ergodic hypothesis”, as Eq. (2.4.5) is usually referred to, applies to the systems that we study in computer simulations. The reader should be aware that there are many examples of systems that are not ergodic in practice, such as glasses and metastable phases, or even in principle, such as nearly harmonic solids.
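The content of Eq. (2.4.5) can be made concrete with a deliberately simple example: a single one-dimensional harmonic oscillator, which (unlike the many-oscillator harmonic solids mentioned above) does cover its entire constant-energy shell. The sketch below (Python; all parameters arbitrary) compares the time average of x² along one trajectory with the micro-canonical ensemble average of x² over phase-space points on the same energy shell:

```python
import numpy as np

rng = np.random.default_rng(1)
omega, E = 1.0, 1.0                 # oscillator frequency and total energy (m = 1)
A = np.sqrt(2.0 * E) / omega        # amplitude of the motion on this energy shell

# "MD" route: time average of x(t)^2 along one trajectory x(t) = A sin(omega t + phi)
t = np.linspace(0.0, 1000.0, 200001)
x_t = A * np.sin(omega * t + 0.3)
time_avg = np.mean(x_t**2)

# "MC" route: ensemble average over phase points distributed uniformly on the
# constant-energy shell (uniform in the phase angle).
phi = rng.uniform(0.0, 2.0 * np.pi, 100000)
ens_avg = np.mean((A * np.sin(phi))**2)

print(time_avg, ens_avg, A**2 / 2.0)   # all three agree: <x^2> = A^2/2
```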

2.5 Linear response theory Until now, we focused on ensemble averages (Eq. (2.2.20)) or time averages (Eq. (2.4.1)) of quantities that are constant in time (once fluctuations have been averaged out). One of the great advantages of the Molecular Dynamics method is that it also allows us to predict the time-dependent response of a system to an external perturbation. Examples of such responses are the heat current due to a temperature gradient, or the electrical current induced by an electric field (we shall see more examples in Chapter 5). It would seem that computing such currents would require a non-equilibrium simulation, where we impose the external perturbation of interest. This is indeed possible, but the disadvantage is that for every different perturbation, we would have to carry out a separate simulation. Fortunately, for systems that are only weakly perturbed (such that the response is linear in the applied perturbation), we can predict the response to an applied perturbation by studying the decay of fluctuations in the corresponding current in equilibrium. Onsager [55,56] was the first to suggest that a response (e.g., a current) induced in a system by a weak external perturbation, decays in the same way as a spontaneous fluctuation of that current in equilibrium. Onsager formulated his “regression” hypothesis in the language of non-equilibrium thermodynamics; in fact, Onsager’s 1931 papers [55,56], although partly inspired by earlier work, created the field (see [57]). The theory of non-equilibrium thermodynamics provides a phenomenological description of the relation between applied perturbations and resulting fluxes/currents. In particular, it defines the linear transport coefficients that relate a small perturbation to the resulting fluxes. However, non-equilibrium thermodynamics is not an atomistic theory and hence it is not always straightforward to make a link between the macroscopic forces and fluxes and the atomistic description of the same quantities. Moreover, for simulations, we need expressions for the transport coefficients that are computable in terms of molecular coordinates and momenta. Such expressions are provided by linear response theory. Below we give a simple introduction to classical linear response theory, to illustrate the mechanical basis of Onsager’s regression hypothesis. For a more detailed discussion, the reader is referred to textbooks on advanced statistical mechanics, such as [53]. A simple introduction (similar to the one presented


here) is given in the book by Chandler [58], while an extensive discussion of linear response theory in the context of the theory of liquids is given in [59].

2.5.1 Static response

Before discussing transport, we consider the static response of a system to a weak applied field. The field could be an electric field, for instance, and the response might be the electric current or, for a nonconducting material, the electric polarization. Suppose that we are interested in the response of a property that can be expressed as the ensemble average of a dynamical variable A. In the presence of an external perturbation, the average of A changes from its equilibrium value ⟨A⟩0 to ⟨A⟩0 + Δ⟨A⟩.

Next, we must specify the perturbation. We assume that the perturbation also can be written as an explicit function of the coordinates (and, possibly, momenta) of the particles in the system. The effect of the perturbation is to change the Hamiltonian H0 of the system to H0 − λB(p^N, r^N). For instance, in the case of an electric field along the x direction, the change in H would be ΔH = −Ex Mx(r^N), where Mx is the x component of the total dipole moment of the system. The electric field Ex corresponds to the parameter λ. We can immediately write down the general expression for ⟨A⟩:

⟨A⟩0 + Δ⟨A⟩ = ∫dΓ A exp[−β(H0 − λB)] / ∫dΓ exp[−β(H0 − λB)],   (2.5.1)

where we have used the symbol Γ to denote (p^N, r^N), the phase-space coordinates of the system. Let us now compute the part of Δ⟨A⟩ that varies linearly with λ. To this end, we compute

(∂⟨A⟩/∂λ)_{λ=0}.   (2.5.2)

Straightforward differentiation shows that

(∂⟨A⟩/∂λ)_{λ=0} = β { ⟨AB⟩0 − ⟨A⟩0 ⟨B⟩0 }.   (2.5.3)

Taking again the example of the electric polarization, we can compute the change in dipole moment of a system due to an applied field Ex:

Δ⟨Mx⟩ = Ex (∂⟨Mx⟩/∂Ex)_{Ex=0} = βEx (⟨Mx²⟩ − ⟨Mx⟩²).   (2.5.4)

Suppose that we wish to compute the electric susceptibility of an ideal gas of nonpolarizable dipolar molecules with dipole moment μ. In that case,

⟨Mx²⟩ − ⟨Mx⟩² = Σ_{i,j=1}^{N} ⟨μx^i μx^j⟩ = N ⟨(μx^i)²⟩ = N μ²/3,

and hence,

⟨Px⟩ ≡ ⟨Mx⟩/V = (μ² ρ / (3 kB T)) Ex.   (2.5.5)
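Eq. (2.5.5) is easily checked numerically. A minimal sketch (Python; the number of dipoles, the number of sampled configurations, and the seed are arbitrary) samples independent, randomly oriented dipoles and compares the measured fluctuation ⟨Mx²⟩ − ⟨Mx⟩² with Nμ²/3:

```python
import numpy as np

rng = np.random.default_rng(7)
n_dip, mu, n_conf = 100, 1.0, 20000   # dipoles per configuration, dipole moment, samples

# For orientations uniform on the unit sphere, cos(theta) is uniform in [-1, 1];
# the x component of each dipole is mu*cos(theta) if theta is measured from x.
cos_theta = rng.uniform(-1.0, 1.0, size=(n_conf, n_dip))
Mx = mu * cos_theta.sum(axis=1)       # total x-dipole of each configuration

print("<Mx^2> - <Mx>^2 =", Mx.var(), "   N mu^2 / 3 =", n_dip * mu**2 / 3.0)
```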

Of course, this example is special because it can be evaluated exactly. But, in general, we can compute the expression (2.5.3) for the susceptibility only numerically. It should also be noted that, actually, the computation of the dielectric susceptibility is quite a bit more subtle than suggested in the preceding example. The subtleties are discussed in the book by Allen and Tildesley [21] and in the contribution of McDonald in [44].

Hamiltonian thermodynamic integration

The above discussion of static linear response is but a special case of the effect of a change in the Hamiltonian of a system on its free energy (see section 8.4.2), an approach pioneered by Kirkwood [60], partly together with Monroe-Boggs [61].9 The essence of Hamiltonian integration can be expressed in a few lines. Consider a Hamiltonian H(λ), such that H(λ = 0) = H0 and H(λ = 1) = H1. Typically, H0 corresponds to a reference state for which we know the free energy. We need not assume that H(λ) is a linear function of λ. In what follows, we use the notation

H′(λ) ≡ (∂H(λ)/∂λ).

For the case discussed above, H(λ) = H0 − λB, and hence H′(λ) = −B. In general, we have

(∂F(λ)/∂λ) = ⟨H′(λ)⟩_λ,

and hence

F(λ = 1) = F(λ = 0) + ∫0^1 dλ ⟨H′(λ)⟩_λ.   (2.5.6)

In Eq. (2.5.6), the subscript in ⟨···⟩_λ means that the Boltzmann average is computed using a Boltzmann weight corresponding to the Hamiltonian H(λ). Eq. (2.5.6) is the starting point for most "Hamiltonian" integration schemes in simulations that will be discussed in Section 8.4.2. Such schemes aim to compute the unknown free energy of a system at λ = 1 from knowledge of the free energy of a reference state (λ = 0) using the average values of H′(λ) evaluated during the simulations.

9 The same Elisabeth Monroe who, together with Kirkwood, laid the foundations for the theory of entropic freezing.
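In practice, Eq. (2.5.6) is evaluated by running simulations at a discrete set of λ values and integrating the measured averages numerically. The following schematic sketch shows only the quadrature step; the function mean_dH_dlambda is a hypothetical placeholder standing in for the ensemble average ⟨H′(λ)⟩_λ that a simulation would provide:

```python
import numpy as np

def mean_dH_dlambda(lam):
    # Placeholder: in a real calculation this number is the ensemble average
    # <H'(lambda)> measured in a simulation run with the Hamiltonian H(lambda).
    return -2.0 + 1.5 * lam           # hypothetical smooth lambda-dependence

lambdas = np.linspace(0.0, 1.0, 11)   # e.g., 11 state points between lambda = 0 and 1
averages = np.array([mean_dH_dlambda(l) for l in lambdas])

delta_F = np.trapz(averages, lambdas) # F(lambda=1) - F(lambda=0), cf. Eq. (2.5.6)
print("free-energy difference:", delta_F)
```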

2.5.2 Dynamic response

Next, we consider a simple time-dependent perturbation. We begin by preparing the system in the presence of a very weak, constant perturbation (λB). The static response of A to this perturbation is given by Eq. (2.5.3). At time t = 0, we switch off the external perturbation. The response Δ⟨A⟩ will now decay to 0. We can write an expression for the average of A at time t:

⟨A(t)⟩ = ∫dΓ exp[−β(H0 − λB)] A(t) / ∫dΓ exp[−β(H0 − λB)],   (2.5.7)

where A(t) is the value of A at time t if the system started at point Γ in phase space and then evolved according to the natural time evolution of the unperturbed system. For convenience, we have assumed that the average of A in the unperturbed system vanishes. In the limit λ → 0, we can write

⟨A(t)⟩ = βλ ∫dΓ exp[−βH0] B A(t) / ∫dΓ exp[−βH0] = βλ ⟨B(0)A(t)⟩.   (2.5.8)

The quantity ⟨B(0)A(t)⟩ is called a time correlation function (if B = A, it is called an autocorrelation function). The time correlation function ⟨B(0)A(t)⟩ is the time average of the product of the value of B at time τ and A at time t + τ:

⟨B(0)A(t)⟩ ≡ lim_{t0→∞} (1/t0) ∫0^{t0} dτ B(r^N(τ), p^N(τ)) A(r^N(t + τ), p^N(t + τ)),   (2.5.9)

where {r^N(x), p^N(x)} denotes the phase-space coordinates at time x. Note that the time evolution of {r^N(x), p^N(x)} is determined by the unperturbed Hamiltonian H0. To give a specific example, consider again a gas of dipolar molecules in the presence of a weak electric field Ex. The perturbation is equal to −Ex Mx. At time t = 0, we switch off the electric field. When the field was still on, the system had a net dipole moment. When the field is switched off, this dipole moment decays:

⟨Mx(t)⟩ = Ex β ⟨Mx(0)Mx(t)⟩.   (2.5.10)

In words, the decay of the macroscopic dipole moment of the system is determined by the dipole autocorrelation function, which describes the decay of spontaneous fluctuations of the dipole moment in equilibrium. This relation between the decay of the response to an external perturbation and the decay of fluctuations in equilibrium is an example of Onsager’s regression hypothesis.


It might seem that the preceding example of a constant perturbation that is suddenly switched off is of little practical use, because we are interested in the effect of an arbitrary time-dependent perturbation. Fortunately, in the linear regime that we are considering, the relation given by Eq. (2.5.8) is enough to derive the general response. To see this, let us consider a time-dependent external field f (t) that couples to a mechanical property B; that is, H(t) = H0 − f (t)B.

(2.5.11)

To linear order in f(t), the most general form of the response of a mechanical property A to this perturbation is

ΔA(t) = ∫_{−∞}^{∞} dt′ χAB(t, t′) f(t′),   (2.5.12)

where χAB, the "after-effect function", describes the linear response. We know several things about the response of the system that allow us to simplify Eq. (2.5.12). First of all, the response must be causal; that is, there can be no response before the perturbation is applied. As a consequence,

χAB(t, t′) = 0   for t < t′.   (2.5.13)

Secondly, the response at time t to a perturbation at time t′ depends only on the time difference t − t′. Hence,

ΔA(t) = ∫_{−∞}^{t} dt′ χAB(t − t′) f(t′).   (2.5.14)
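Numerically, Eq. (2.5.14) is a causal convolution, which can be evaluated on a time grid. In the sketch below both the after-effect function and the perturbing field are arbitrary stand-ins, chosen only to illustrate the bookkeeping:

```python
import numpy as np

dt = 0.01
t = np.arange(0.0, 20.0, dt)

chi = np.exp(-t)                      # hypothetical after-effect function; chi(t<0) = 0
f = np.where(t < 5.0, 1.0, 0.0)       # hypothetical field: constant, then switched off

# Delta A(t) = integral_{-inf}^{t} dt' chi(t - t') f(t')  (causal convolution)
dA = np.convolve(chi, f)[: len(t)] * dt

print(dA[int(4.0 / dt)])              # response while the field is still on
print(dA[int(10.0 / dt)])             # decaying response after the switch-off
```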

Once we know χ, we can compute the linear response of the system to an arbitrary time-dependent perturbing field f(t). To find an expression for χAB, let us consider the situation described in Eq. (2.5.8), namely, an external perturbation that has a constant value λ until t = 0 and 0 from then on. From Eq. (2.5.14), it follows that the response to such a perturbation is

ΔA(t) = λ ∫_{−∞}^{0} dt′ χAB(t − t′) = λ ∫_{t}^{∞} dτ χAB(τ).   (2.5.15)

If we compare this expression with the result of Eq. (2.5.8), we see immediately that

λ ∫_{t}^{∞} dτ χAB(τ) = βλ ⟨B(0)A(t)⟩   (2.5.16)


or

χAB(t) = −β ⟨B(0)Ȧ(t)⟩   for t > 0,
       = 0                 for t ≤ 0.   (2.5.17)

To give a specific example, consider the mobility of a molecule in an external field Fx. The Hamiltonian in the presence of this field is

H = H0 − Fx x.   (2.5.18)

The phenomenological expression for the steady-state velocity of a molecule in an external field is

⟨vx(t)⟩ = m Fx,   (2.5.19)

where m is the mobility of the molecule under consideration. We can now derive a microscopic expression for the mobility in terms of a time-correlation function. From Eqs. (2.5.14) through (2.5.17), we have

⟨vx(t)⟩ = Fx ∫_{−∞}^{t} dt′ χ_{vx,x}(t − t′)
        = Fx ∫_{0}^{∞} dτ χ_{vx,x}(τ)
        = −βFx ∫_{0}^{∞} dτ ⟨x(0) v̇x(τ)⟩
        = +βFx ∫_{0}^{∞} dτ ⟨vx(0) vx(τ)⟩.   (2.5.20)

In the last line of Eq. (2.5.20), we used the stationarity property of time-correlation functions:

(d/dt′) ⟨A(t)B(t + t′)⟩ = 0.   (2.5.21)

Carrying out the differentiation, we find that

⟨A(t)Ḃ(t + t′)⟩ = −⟨Ȧ(t)B(t + t′)⟩.   (2.5.22)

Combining Eqs. (2.5.19) and (2.5.20), we find that

m = β ∫_{0}^{∞} dt ⟨vx(0)vx(t)⟩.   (2.5.23)
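A minimal sketch of how a Green-Kubo integral such as Eq. (2.5.23) is evaluated in practice: estimate the velocity autocorrelation function from a sampled time series and integrate it over time. Here the "velocities" are a synthetic series with an exponentially decaying autocorrelation, used purely as a stand-in for MD data:

```python
import numpy as np

def autocorrelation(v, n_max):
    # <v(0) v(tau)> estimated by averaging over time origins, for n_max lags
    return np.array([np.mean(v[: len(v) - k] * v[k:]) for k in range(n_max)])

rng = np.random.default_rng(0)
dt, n_steps, tau_c = 0.01, 100000, 0.5
a = np.exp(-dt / tau_c)
vx = np.empty(n_steps)
vx[0] = rng.normal()
for i in range(1, n_steps):            # simple AR(1) ("Langevin-like") velocity series
    vx[i] = a * vx[i - 1] + np.sqrt(1.0 - a * a) * rng.normal()

beta = 1.0
vacf = autocorrelation(vx, n_max=int(10 * tau_c / dt))
mobility = beta * np.trapz(vacf, dx=dt)   # m = beta * int_0^inf <vx(0)vx(t)> dt
print("mobility estimate:", mobility)
print("expected         :", beta * vx.var() * tau_c)
```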

0

Eq. (2.5.23), relating a transport coefficient to an integral of a time correlation function, is an example of a so-called Green-Kubo relation [62]. In Chapter 5 we shall see that the mobility m is related to the self-diffusion coefficient D through the Einstein relation m = βD. In Section F.1 of the Appendix, we discuss how,


from the knowledge of certain time-correlation functions, we can compute the rate of dissipation due to an applied periodic perturbation. Such information is very useful when modelling, for instance, the absorption of radiation, be it that the expressions derived in the Appendix only apply to the classical case where ℏω ≪ kB T.

Power spectra

Time-correlation functions are often computed (and also measured in spectroscopic experiments) by Fourier transforming from the frequency domain using the Wiener-Khinchin (WK) theorem (see e.g., [53]). To derive the WK theorem, we first define the Fourier transform of the observable of interest over a time interval T:

â(ω) ≡ ∫0^T dt A(t) e^{iωt}.   (2.5.24)

Note that we define the Fourier transform over a finite time interval {0 − T}, because the simulation has a finite length. We now define GA(ω), the power spectrum of A, as

GA(ω) ≡ lim_{T→∞} (1/(2πT)) ⟨|â(ω)|²⟩   (2.5.25)
      = lim_{T→∞} (1/(2πT)) ∫0^T dt ∫0^T dt′ ⟨A(t)A(t′)⟩ e^{iωt} e^{−iωt′}
      = lim_{T→∞} (1/(2πT)) ∫0^T dt′ ∫_{−t′}^{T−t′} d(t − t′) ⟨A(0)A(t − t′)⟩ e^{iω(t−t′)},

where we have used the fact that an equilibrium time correlation function only depends on the time difference t − t′. When T is much longer than the time it takes the correlation function to decay, we can now write (in the limit that T → ∞):

GA(ω) = (1/2π) ∫_{−∞}^{∞} dτ ⟨A(0)A(τ)⟩ e^{iωτ},   (2.5.26)

where we have defined τ ≡ t − t′. Equation (2.5.26) shows that GA(ω) is the Fourier transform of ⟨A(0)A(τ)⟩, and conversely that

⟨A(0)A(τ)⟩ = ∫_{−∞}^{∞} dω GA(ω) e^{−iωτ}.   (2.5.27)

Equation (2.5.27) is often used to obtain correlation functions from spectroscopic data (see e.g., section F.1). For the calculation of correlation functions, it is important to note that directly correlating n points requires n(n − 1)/2 multiplications, whereas a (fast) Fourier transform [38] only requires of order n ln n operations.
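The n ln n route can be sketched as follows: zero-pad the signal, Fourier transform it, form |â(ω)|² (the power spectrum up to a normalization), and transform back, as suggested by Eqs. (2.5.25)–(2.5.27). The normalization by the number of time origins per lag used below is one common convention; other choices exist:

```python
import numpy as np

def autocorr_fft(a, n_max):
    # <A(0)A(tau)> via the Wiener-Khinchin theorem, averaged over time origins
    n = len(a)
    f = np.fft.rfft(a, n=2 * n)                    # zero-padding avoids circular wrap-around
    acf = np.fft.irfft(f * np.conj(f))[:n_max]     # inverse transform of the power spectrum
    return acf / np.arange(n, n - n_max, -1)       # number of origins contributing to each lag

rng = np.random.default_rng(3)
signal = rng.normal(size=100000)
acf = autocorr_fft(signal, n_max=50)
print(acf[0], acf[1])        # ~1 at zero lag and ~0 otherwise for white noise
```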


Power spectra can often be determined by measuring the dissipation in a system subject to a periodic perturbation (see Appendix F.1).

2.6 Questions and exercises

Question 1 (Number of configurations).
1. Consider a system A consisting of subsystems A1 and A2, for which Ω1 = 10^20 and Ω2 = 10^22. What is the number of configurations available to the combined system? Also, compute the entropies S, S1, and S2.
2. By what factor does the number of available configurations increase when 10 m³ of air at 1.0 atm and 300 K is allowed to expand by 0.001% at constant temperature? Here and in what follows, we assume that air behaves like an ideal gas.
3. By what factor does the number of available configurations increase when 150 kJ is added to a system containing 2.0 mol of particles at constant volume and T = 300 K?
4. A sample consisting of five molecules has a total energy 5ε. Each molecule is able to occupy states of energy jε, with j = 0, 1, 2, ···, ∞. Draw up a table with columns headed by the energy of the states and write beneath them all configurations that are consistent with the total energy. Identify the type of configuration that is most probable.

Question 2 (Thermodynamic variables in the canonical ensemble). Starting with an expression for the Helmholtz free energy (F) as a function of N, V, T,

F = −(1/β) ln[Q(N, V, T)],

one can derive all thermodynamic properties. Show this by deriving equations for U, p, and S.

Question 3 (Ideal gas (Part 1)). The canonical partition function of an ideal gas consisting of monoatomic particles is equal to

Q(N, V, T) = (1/(h^{3N} N!)) ∫dΓ exp[−βH] = V^N / (λ^{3N} N!),

in which λ = h/√(2πm/β) and dΓ = dq1 ··· dqN dp1 ··· dpN. Derive expressions for the following thermodynamic properties:
• F(N, V, T) (hint: ln(N!) ≈ N ln(N) − N)
• p(N, V, T) (which leads to the ideal gas law)
• μ(N, V, T) (which leads to μ = μ0 + RT ln ρ)
• U(N, V, T) and S(N, V, T)
• Cv (heat capacity at constant volume)
• Cp (heat capacity at constant pressure)


Question 4 (Van der Waals equation of state). The van der Waals equation of state describes the behavior of non-ideal gases:

P = RT/(v − b) − a/v²,

where R is the gas constant. Show that the constants a and b can be related to the critical point:

a = (27/64) R² Tc² / Pc,
b = (1/8) R Tc / Pc.

In this book, we will use the Lennard-Jones fluid very often. The critical point of the Lennard-Jones fluid is Tc = 1.32, ρc = 0.32 (molecules per unit volume), and Pc = 0.131 [63]. These constants are expressed in reduced units (see section 3.3.2.5). Plot the equation of state (pressure as a function of the molar volume) for T = 2.00 and T = 1.00.

Question 5 (Fugacity). In most Chemical Engineering textbooks, the chemical potential is replaced by the fugacity, f:

βμ − βμ0 = ln(f/f 0),

where f 0 is the fugacity of the reference state.
• Compute the fugacity of an ideal gas.
• How would you choose the reference state f 0 for a real gas?

The fugacity coefficient φ is introduced to quantify deviations from ideal-gas behavior:

φ = f/P,

where P is the pressure.10 Compute φ as a function of the molar volume for T = 2.0 and T = 1.0 for the van der Waals equation of state (see Question 4).

Question 6 (Reversible work). The First Law of thermodynamics expresses the conservation of the energy E of a system,

dE = q + w,

10 Note that for an ideal gas, P = ρ kB T. We can therefore also write  = f/(ρid kB T). In the rest of the book, we use this relation to define the fugacity through  = f′/ρid. Note that f′ = f/kB T. In what follows, we drop the prime.


where q and w denote the (infinitesimal) energy changes of the system due to heat flow and work, respectively. In the case of reversible changes, we can use the First and Second Laws together, to express how the different state functions (E, S, F , G, · · · ) change in a reversible transformation. The best-known expression is dE = T dS − P dV + μdN , or, in terms of the Helmholtz free energy F : dF = −SdT − pdV + μdN. This expression for dF applies to the case where the reversible work is due to a change of the volume against an external pressure P . However, we can also consider other forms of work, for instance, due to changing the polarization of a system at a constant applied electric field, or to changing the surface area A at constant surface tension γ . Here, we consider the latter case: dF = −SdT + γ dA − P dV + μdN. 1. We will assume that both V and A are linear in N . Making use of the fact that the free energy of a system is extensive, show that F = μN − P V + γ A We can interpret this expression as a definition of the surface free energy Fs ≡ γ A. 2. Derive an expression for (∂Fs /∂A)N,V ,T . This expression may look strange, because we change A at constant N . 3. Under what conditions is (∂Fs /∂A)N,V ,T = γ ? Would this condition be satisfied for: • a liquid-vapor interface? • a solid-liquid interface? Question 7 (Ising model). Consider a system of N spins arranged on a simple lattice (1d: a linear chain, 2d: a square lattice, 3d: a simple cubic lattice, etc.). In the presence of a magnetic field, H , the energy of the system is U =−

Σ_{i=1}^{N} H si − J Σ_{i>j} si sj,

where J is called the coupling constant (J > 0) and si = ±1. The second summation is a summation over all pairs (d × N for a periodic system, d is the dimensionality of the system). This system is called the Ising model.


1. Show that for positive J, and H = 0, the lowest energy of the Ising model is equal to U0 = −dNJ.
2. Show that the free energy per spin of a 1d Ising model with zero field is equal to

F(β, N)/N = −ln(2 cosh(βJ))/β

when N → ∞. The function cosh(x) is defined as

cosh(x) = (exp[−x] + exp[x])/2.   (2.6.1)

3. Derive equations for the energy and heat capacity of this system.

Question 8 (The photon gas). An electromagnetic field in thermal equilibrium can be described as a photon gas. From the quantum theory of the electromagnetic field, it is found that the total energy of the system (U) can be written as the sum of photon energies:

U = Σ_{j=1}^{N} nj ℏωj = Σ_{j=1}^{N} nj εj,

in which εj is the characteristic energy of a photon with frequency ωj, nj = 0, 1, 2, ···, ∞ is the so-called occupancy number of mode j, and N is the number of field modes (here we take N to be finite).

1. Show that the canonical partition function of the system can be written as

Q = Π_{j=1}^{N} 1/(1 − exp[−βεj]).   (2.6.2)

Hint: you will have to use the following identity for |x| < 1:

Σ_{i=0}^{∞} x^i = 1/(1 − x).   (2.6.3)

For the product of partition functions of two independent systems A and B, we can write

QA × QB = QAB   (2.6.4)

when A ∩ B = ∅ and A ∪ B = AB.

2. Show that the average occupancy number of state j, ⟨nj⟩, is equal to

⟨nj⟩ = ∂ln Q/∂(−βεj) = 1/(exp[βεj] − 1).   (2.6.5)

3. Describe the behavior of ⟨nj⟩ when T → ∞ and when T → 0.


Question 9 (Ideal gas (Part 2)). An ideal gas is placed in a constant gravitational field. The potential energy of N gas molecules at height z is Mgz, where M = mN is the total mass of N molecules, and g the gravitational acceleration. The temperature in the system is uniform, and the system is infinitely large. We assume that the system is locally in equilibrium, so we are allowed to use a local partition function.

1. Show that the grand-canonical partition function of a system in volume V at height z is equal to

Q(μ, V, T, z) = Σ_{N=0}^{∞} (exp[βμN]/(h^{3N} N!)) ∫dΓ exp[−β(H0 + Mgz)],   (2.6.6)

in which H0 is the Hamiltonian of the system at z = 0.
2. Explain that a change in z is equivalent to a change in chemical potential, μ. Use this to show that the pressure of the gas at height z is equal to

p(z) = p(z = 0) × exp[−βmgz].   (2.6.7)

(Hint: you will need the formula for the chemical potential of an ideal gas.)

Exercise 1 (Distribution of particles). Consider an ideal gas of N particles in a volume V at constant energy E. Let us divide the volume into p identical compartments. Every compartment contains ni molecules such that

N = Σ_{i=1}^{p} ni.   (2.6.8)

An interesting quantity is the distribution of molecules over the p compartments. Because the energy is constant, every possible eigenstate of the system will be equally likely. This means that in principle it is possible that one of the compartments is empty. 1. On the book’s website you can find a program that calculates the distribution of molecules among the p compartments. Run the program for different numbers of compartments (p) and total number of gas molecules (N ). The output of the program is the probability of finding x particles in a particular compartment as a function of x. 2. What is the probability that one of the compartments is empty? 3. Consider the case p = 2 and N even. The probability of finding N/2 + n1 molecules in compartment 1 and N/2 − n1 molecules in compartment 2 is given by P (n1 ) =

N! / [(N/2 − n1)! (N/2 + n1)! 2^N].   (2.6.9)

Compare your numerical results with the analytical expression for different values of N . Show that this distribution is a Gaussian for small n1 /N . Hint: For


x > 10, it might be useful to use Stirling's approximation:

x! ≈ (2π)^{1/2} x^{x+1/2} exp[−x].   (2.6.10)

Exercise 2 (Boltzmann distribution). Consider a system of N energy levels with energies 0, ε, 2ε, ···, (N − 1)ε and ε > 0.
1. Calculate, using the given program, the occupancy of each level for different values of the temperature. What happens at high temperatures?
2. Change the program in such a way that the degeneracy of energy level i equals i + 1. What do you see?
3. Modify the program in such a way that the occupation of the energy levels, as well as the partition function (q), is calculated for a heteronuclear linear rotor with moment of inertia I. Compare your result with the approximate result

q = 2I/(ℏ²β)   (2.6.11)

for different temperatures. Note that the energy levels of a linear rotor are

U = J(J + 1) ℏ²/(2I)   (2.6.12)

with J = 0, 1, 2, ···, ∞. The degeneracy of level J equals 2J + 1.

Exercise 3 (Coupled harmonic oscillators). Consider a system of N harmonic oscillators with a total energy U. A single harmonic oscillator has energy levels 0, ε, 2ε, ···, ∞ (ε > 0). All harmonic oscillators in the system can exchange energy.
1. Invent a computational scheme to update the system at constant total energy (U). Compare your scheme with the scheme that is incorporated in the computer code that you can find on the book's website.
2. Make a plot of the energy distribution of the first oscillator as a function of the number of oscillators for a constant value of U/N. Which distribution is recovered when N becomes large? What is the function of the other N − 1 harmonic oscillators? Explain.
3. Compare this distribution with the canonical distribution of a single oscillator at the same average energy (use the option NVT).
4. How does this exercise relate to the derivation of the Boltzmann distribution for a system at temperature T?

Exercise 4 (Random walk on a 1d lattice). Consider the random walk of a single particle on a line. The particle performs jumps of fixed length. Assuming that the probability for forward or backward jumps is equal, the mean-squared displacement of a particle after N jumps is equal to N. The probability that, after N jumps, the net distance covered by the particle equals n is given by

ln[P(n, N)] ≈ (1/2) ln[2/(πN)] − n²/(2N).

1. Derive this equation using Stirling’s approximation for ln x!.


2. Compare your numerical result for the root mean-squared displacement with the theoretical prediction (the computed function P (n, N ). What is the diffusivity of this system? 3. Modify the program in such a way that the probability to jump in the forward direction equals 0.8. What happens? Exercise 5 (Random walk on a 2d lattice). Consider the random walk of N particles on a M × M lattice. Two particles cannot occupy the same lattice site. On this lattice, periodic boundaries are used. This means that when a particle leaves the lattices it returns on the opposite side of the lattice; i.e., the coordinates are given modulo M. 1. What is the fraction of occupied sites (θ) of the lattice as a function of M and N? 2. Make a plot of the diffusivity D as a function of θ for M = 32. For low values of θ , the diffusivity can be approximated by D ≈ D0 (1 − θ) . Why is this equation reasonable at low densities? Why does it break down at higher densities? 3. Modify the program in such a way that the probability of jumping in one direction is larger than the probability of jumping in the other direction. Explain the results. 4. Modify the program in such a way that periodic boundary conditions are used in one direction and reflecting boundary conditions in the other. What happens?


Chapter 3

Monte Carlo simulations 3.1 Preamble: molecular simulations In the previous chapter, we introduced the basics of (classical) statistical mechanics. We found that many observables may either be expressed as time averages or as ensemble averages. In the remainder of this book, we discuss how these observables may be computed. We stress that once we start using numerical techniques, we give up on exact results. This is so because, first of all, simulations are subject to statistical noise. This problem can be alleviated, but not eliminated, by sampling longer. Secondly, simulations contain systematic errors related to the fact that the systems we simulate are far from macroscopic: finite-size effects may be substantial. Again, this problem may be alleviated by performing a systematic finite-size scaling analysis, but the problem should be recognized. Thirdly, we discretize Newton’s equations of motion. The resulting trajectories are not the true trajectories of the system. This problem becomes less severe for shorter time steps, but it cannot be eliminated completely. Finally, if we aim to model real materials, we have to use an approximate description of the potential energy function U (rN ). The approximation may be good, —but it is never perfect. Hence, in simulations, we always predict the properties of a model system. When, in the remainder of this book, we mention simulations of real materials, the reader should always bear these approximations in mind, not to mention the fact that we will be using a classical rather than a quantum description. Quantum simulation techniques are beyond the scope of this book. But the general comments about the trade-off between cost and rigor still apply, and even more so than for classical simulations. There are two distinct numerical approaches to compute the observable properties of classical many-body systems: one method is called Molecular Dynamics (MD). As the name suggests, a Molecular-Dynamics simulation generates an approximate solution of Newton’s equations of motion, yielding a trajectory rN (t), pN (t) of the system in phase space.1 Using the MD approach, we can compute time averages of observable properties, both static (e.g., thermodynamic observables) and dynamic (e.g., transport coefficients). Whilst the practical implementation of MD is not always simple, the idea behind it is blindingly obvious. 1 We denote the x, y, z coordinates of the N particles in our system by rN , and pN denotes the

corresponding momenta.



In contrast, the idea behind the Monte Carlo (MC) method (or, more precisely, the (Markov-Chain Monte Carlo (MCMC)) Method) is highly nontrivial. In fact, the development of the MCMC method is arguably one of the greatest discoveries in computational science of the 20th century.2 The present chapter deals with the basis of the MCMC method, as applied to classical many-body systems. The MCMC method is not limited to computing the properties of many-body systems: whenever we have a system that can be found in a (very) large number of states, and we know how to compute the relative probability with which these states are occupied (e.g., the Boltzmann weight), the MCMC method can be used. Unlike MD, MCMC provides no information about the time evolution of a many-body system. Hence, MCMC cannot be used to compute transport properties. This might suggest that, all things being equal, MD is always to be preferred over MC. However, all things are not always equal. In particular, as we shall see in subsequent chapters, the very strength of MD simulation is its weakness: it must follow the natural dynamics of the system. If the natural dynamics is slow, this limits the ability of an MD simulation to explore all accessible states of a system. MC suffers much less from this drawback: as we shall see, ingenious Monte Carlo schemes exist that can sample efficiently even where the natural dynamics is slow. Moreover, there are many interesting model systems, e.g., lattice models that have no natural dynamics. Such models can only be studied using Monte Carlo simulations.

3.2 The Monte Carlo method

Below, we describe the basic principles of the Markov-Chain Monte Carlo method, which from now on we will often refer to as the Monte Carlo (MC) method, unless explicitly stated otherwise. Initially, we focus on the basic MC method, which can be used to simulate systems with a fixed number of particles (N) in a given volume (V) at an imposed temperature (T). To keep the notation simple, we focus initially on particles with no internal degrees of freedom. To introduce the Monte Carlo method, we start from the classical expression for the partition function Q, Eq. (2.2.19):

Q = c ∫dp^N dr^N exp[−H(r^N, p^N)/kB T],   (3.2.1)

where the Hamiltonian, H ≡ H(r^N, p^N), is defined as in Eq. (2.3.1). The Hamiltonian of a system expresses the total energy as a function of the coordinates and momenta of the constituent particles: H = K + U, where K is the kinetic energy and U is the potential energy. The equilibrium average of an observable,

2 Interesting accounts of the early history of the Metropolis method may be found in refs. [1,2,64,65].


A ≡ A(p^N, r^N), of a classical system at constant NVT is given by Eq. (2.2.20):

⟨A⟩ = ∫dp^N dr^N A(p^N, r^N) exp[−βH(p^N, r^N)] / ∫dp^N dr^N exp[−βH(p^N, r^N)].   (3.2.2)

K is a quadratic function of the momenta, and, as we argued below Eq. (2.3.3), the integration over momenta can be carried out analytically, provided that A is a simple function of the momenta for which we can carry out the Gaussian integration analytically. Hence, averages of functions that depend on momenta only are usually easy to evaluate.3 The difficult part of the problem is the computation of averages of functions A(r^N). Only in a few exceptional cases can the multidimensional integral over particle coordinates be computed analytically; in all other cases, numerical techniques must be used. In other words, we need a numerical technique that allows us to compute averages of functions that depend on a large number of coordinates (typically O(10³–10⁶)). It might appear that the most straightforward approach would be to evaluate ⟨A⟩ in Eq. (3.2.2) by numerical quadrature, for instance, by using a high-dimensional version of Simpson's rule. However, it is easy to see that such an approach is completely useless even if the number of independent coordinates dN (d is the dimensionality of the system) is still quite small. As an example, consider the computational effort required to compute an integral of the type Eq. (3.2.2) by quadrature for a system of N = 100 particles. Such a calculation would involve carrying out the quadrature by evaluating the integrand on a mesh of points in the dN-dimensional configuration space. Let us assume that we take m equidistant points along each coordinate axis. The total number of points at which the integrand must be evaluated is then equal to m^{dN}. For all but the smallest systems, this number becomes astronomically large, even for small values of m. For instance, if we take 100 particles in three dimensions, and m = 5, then we would have to evaluate the integrand at 10^{210} points! Computations of such magnitude cannot be performed during the lifetime of the known universe. And this is fortunate because the answer that would be obtained would be subject to a large statistical error. After all, numerical quadratures work best on functions that are smooth over distances corresponding to the mesh size. But for most intermolecular potentials, the Boltzmann factor in Eq. (3.2.2) is a rapidly varying function of the particle coordinates. Hence an accurate quadrature would require a small mesh spacing (i.e., a large value of m). Moreover, when evaluating the integrand for a dense liquid (say), we would find that for the overwhelming majority of points, this Boltzmann factor is vanishingly small. For instance, for a fluid of 100 hard spheres at the freezing point, the Boltzmann factor would be non-zero for 1 out of every 10^{260} configurations! This problem is not related to the fact that the quadrature points are chosen on a mesh: estimating the integral as an average over 10^{260} randomly chosen configurations would be just as ineffective.

3 However, special care is needed in systems subject to constraints; see section 10.2.1.


Clearly, other numerical techniques are needed to compute thermal averages. One such technique is the Monte Carlo method or, more precisely, the Monte Carlo importance-sampling algorithm introduced in 1953 by Metropolis et al. [6]. The application of this method to the numerical simulation of atomic and molecular systems is the subject of the present chapter.

3.2.1 Metropolis method

As argued above, it is, in general, not feasible to evaluate an integral such as ∫dr^N exp[−βU(r^N)] by quadrature. However, in many cases, we are not interested in the configurational part of the partition function itself, but in averages of the type (see Eq. (2.3.9)):

⟨A⟩ = ∫dr^N exp[−βU(r^N)] A(r^N) / ∫dr^N exp[−βU(r^N)].   (3.2.3)

Hence, we wish to know the ratio of two integrals. What Metropolis et al. [6] showed is that it is possible to devise an efficient Monte Carlo scheme to sample such a ratio. Note that the ratio exp(−βU)/Z in Eq. (3.2.3) is the probability density N(r^N) of finding the system in a configuration around r^N (see Eqs. (2.3.10) and (2.3.7)):

N(r^N) ≡ exp[−βU(r^N)] / Z.

For what follows, it is important that N (rN ) is non-negative. Suppose that we would somehow be able to generate L points in configuration space that are distributed according to the probability distribution N (rN ). On average, the number of points nj generated in a small configuration-space (hyper)volume4 dN around a point rN would then be equal to L N (rN )dN . We assume that there are M sub-volumes dN , which completely tile the configuration space V N . Then M=

VN . dN

In the limit of large L and large M (small sub-volumes),

⟨A⟩ ≈ (1/L) Σ_{j=1}^{M} nj A(r^N_j).   (3.2.4)

4 We use the notation  dN to designate a small hyper-volume in the dN dimensional configuration space of the system.


FIGURE 3.1 Measuring the depth of the Nile: a comparison of conventional quadrature (left), with the Metropolis scheme (right).

Rather than writing this as a sum over M cells, we can write this as a sum over the L points that have been sampled:

⟨A⟩ ≈ (1/L) Σ_{j=1}^{L} A(r^N_j).   (3.2.5)

The crucial point to note is that, in order to evaluate Eq. (3.2.5), we only need to know that nj is proportional to the Boltzmann factor exp[−βU(rN )]: the unknown  normalizing factor (Z) is not needed, because nj is normalized by L = j nj . Hence, if we have a scheme that allows us to generate points with a frequency proportional to their Boltzmann weight, then we can estimate averages of the type given in Eq. (3.2.3). Now, we have to find out how to generate points with a frequency proportional to their Boltzmann weight. Such a procedure is called importance sampling, and the Metropolis Monte Carlo method [6] gives a simple recipe to realize such importance sampling. The above introduction to the concept of importance sampling may sound rather abstract: let us, therefore, try to clarify the difference between quadrature and importance sampling with the help of a simple example (see Fig. 3.1). In this figure, we compare two ways of measuring the depth of the river Nile, by conventional quadrature (left) and by Metropolis sampling; that is, the construction of an importance-weighted random walk (right). In the conventional quadrature scheme, the value of the integrand is measured at a predetermined set of points. As the choice of these points does not depend on the value of the integrand, many points may be located in regions where the integrand vanishes (in the case of the Nile: one cannot perform a measurement of the depth of the Nile outside the Nile —yet most of our quadrature points are far away from the river). In contrast, in the Metropolis scheme, a random walk is constructed


through that region of space where the integrand is non-negligible (i.e., through the Nile itself). In this random walk, a trial move is rejected if it takes you out of the water, and it is accepted otherwise. After every trial move (accepted or not), the depth of the water is measured. The (unweighted) average of all these measurements yields an estimate of the average depth of the Nile. This, then, is the essence of the Metropolis method. In principle, the conventional quadrature scheme would yield the same answer, but it would require a very fine grid, and most sampled points would be irrelevant. The advantage of quadrature, if it were feasible, would be that it also allows us to compute the total area of the Nile. In the importance sampling scheme, information on the total area cannot be obtained directly, since this quantity is similar to the configurational integral Z. In later chapters, we discuss numerical schemes to compute Z. Thus far, we have explained what is needed to compute averages by importance sampling: we must somehow be able to generate points in configuration space with a frequency proportional to their Boltzmann weight. The question is: how? Ideally, we would use a scheme that generates a large number of independent points in configuration space with the desired Boltzmann weights. Such an approach is called static Monte Carlo sampling. Static MC has the advantage that every configuration is statistically independent of the previous ones. Traditionally, such an approach has been used in very simple cases, e.g., the generation of points that are normally distributed, in which case we can analytically map random numbers uniformly distributed between zero and one onto a Gaussian distribution [66]. However, for any nontrivial many-body system, such a rigorous analytical approach is not feasible. This is why the overwhelming majority of MC simulations are dynamic: the simulation generates a (Markov) chain of configurations, where a new configuration is generated with an easily computable probability from the previous one. The Metropolis Monte Carlo method is the prime example of such a dynamic scheme. Using dynamic MC, it is possible to generate configurations with the correct Boltzmann weight. However, the disadvantage of the scheme is that successive configurations tend to be correlated. Hence, the number of statistically independent configurations is less than the total number of configurations. Before continuing with a description of (dynamic) MC methods, we note that static (or hybrid static/dynamic) MC methods are undergoing a revival due to the application of transformations that are obtained by Machine Learning (see section 13.4.1). However, in the present chapter, and most of those that follow, we focus on dynamic Monte Carlo schemes. To rationalize the Metropolis algorithm, let us first consider the consequence of the fact that the desired algorithm must sample points with a frequency proportional to their Boltzmann weight. This implies that if we have L points that are already Boltzmann distributed, then applying one step of our algorithm on every single point should not destroy this Boltzmann distribution. Note that this is an “ensemble” argument: we consider a very large number (L) of copies of


the same model system. It is useful to consider the limit where L ≫ M, where M denotes the number of points in configuration space. Of course, in reality, the number of points in configuration space is not countable, but in a numerical representation it is, as floating-point numbers have a limited number of digits. We now assume that the number of copies of the system in a given state (say i) is proportional to the Boltzmann factor of that state. Next, we carry out one step of our MC algorithm on all L systems. As a consequence of this step, individual systems may end up in another state, but, on average, the number of systems in every state should not change, as these numbers are given by the Boltzmann distribution. Let us consider one such state, which we denote by o (for "old"). State o has an unnormalized Boltzmann weight, exp[−βU(o)]. We can compute this Boltzmann weight because we can compute the potential energy. A Monte Carlo algorithm is not deterministic: a single step can have different outcomes with different probabilities (hence the name "Monte Carlo"). Let us consider the probability of moving during one MC step from state o to a new state n with Boltzmann weight exp[−βU(n)]. We denote this transition probability by π(o → n). Let us denote the number of copies of our system that are originally in state o by m(o). As we start with a Boltzmann distribution, m(o) is proportional to exp[−βU(o)]. To maintain the Boltzmann distribution, m(n), the number of copies in state n, should be proportional to exp[−βU(n)]. The transition probability π(o → n) should be constructed such that it does not destroy the Boltzmann distribution. This means that, in equilibrium, the average number of accepted trial moves that result in the system leaving state o must be exactly balanced by the number of accepted trial moves from all other states n to state o. We note that m(o), the average number of systems in state o, is proportional to N(o) (and similarly for n). The transition probability π(o → n) should then satisfy the following balance equation:

N(o) Σn π(o → n) = Σn N(n) π(n → o).   (3.2.6)

Eq. (3.2.6) expresses the balance between the flow into and the flow out of state o. Note that π(o → n) can be interpreted as a matrix (the “transition matrix”) that maps the original states of the system onto the new states. In this matrix interpretation, we require that the Boltzmann distribution is an eigenvector of the transition matrix, with eigenvalue 1. We also note that the probability of making a transition from a given state o to a new state n depends only on the current state of the system. Hence, the process described by the matrix π(o → n) is a Markov process. The original Metropolis algorithm was based on a stronger condition than Eq. (3.2.6), namely that in equilibrium, the average number of accepted moves from o to any given state n is exactly balanced by the number of reverse moves.


This condition is known under the name detailed balance5 : N (o)π(o → n) = N (n)π(n → o).

(3.2.7)

Many possible forms of the transition matrix π(o → n) satisfy Eq. (3.2.7). To progress towards the Metropolis algorithm, we make use of the fact that we can decompose a Monte Carlo move into two stages. First, we carry out a trial move from state o to state n. We denote the transition matrix that determines the probability of performing a trial move from o to n by α(o → n), where α is usually referred to as the underlying matrix of the Markov chain [67]. The next stage involves the decision to either accept or reject this trial move. Let us denote the probability of accepting a trial move from o to n by acc(o → n). Clearly, π(o → n) = α(o → n) × acc(o → n).

(3.2.8)

In the original Metropolis scheme, α was chosen to be a symmetric matrix (α(o → n) = α(n → o)). However, in later sections, we shall see several examples where α is chosen not to be symmetric. If α is symmetric, we can rewrite Eq. (3.2.6) in terms of the acc(o → n): N (o) × acc(o → n) = N (n) × acc(n → o).

(3.2.9)

From Eq. (3.2.9) it follows that6

acc(o → n)/acc(n → o) = N(n)/N(o) = exp{−β[U(n) − U(o)]}.   (3.2.10)

Again, many choices for acc(o → n) satisfy this condition, subject, of course, to the constraint that the probability acc(o → n) cannot exceed one. The choice of Metropolis et al. was

acc(o → n) = N(n)/N(o)   if N(n) < N(o)
           = 1            if N(n) ≥ N(o).   (3.2.11)

Other choices for acc(o → n) are possible (for a discussion, see for instance [21]), but under most conditions, the original choice of Metropolis et al. appears to result in a more efficient sampling of configuration space than the other strategies that have been proposed. 5 In Chapter 13 we discuss powerful MC algorithms that do not satisfy detailed balance. They do,

however, satisfy balance, and therefore maintain the Boltzmann distribution. 6 Clearly, Eq. (3.2.10) only makes sense if N is non-negative.


In summary: in the Metropolis scheme, the transition probability for going from state o to state n is given by

π(o → n) = α(o → n)                    if N(n) ≥ N(o)
         = α(o → n) [N(n)/N(o)]        if N(n) < N(o)
π(o → o) = 1 − Σ_{n≠o} π(o → n).   (3.2.12)

Note that we have not yet specified the matrix α, except for the fact that it must be symmetric. Indeed, we have considerable freedom in choosing the symmetric matrix α. This freedom will be exploited in subsequent sections. One thing that we have not yet explained is how to decide whether a trial move is to be accepted or rejected. The usual procedure is as follows. Suppose that we have generated a trial move from state o to state n, with U(n) > U(o). According to Eq. (3.2.10) this trial move should be accepted with a probability acc(o → n) = exp{−β[U(n) − U(o)]} < 1. In order to decide whether to accept or reject the trial move, we generate a (quasi) random number, here denoted by R from a uniform distribution in the interval [0, 1]. Clearly, the probability that R is less than acc(o → n) is equal to acc(o → n). We now accept the trial move if R < acc(o → n) and reject it otherwise. This rule guarantees that the probability of accepting a trial move from o to n is indeed equal to acc(o → n). Obviously, it is important that our quasi-random-number generator does generate numbers uniformly in the interval [0, 1]. Otherwise, the Monte Carlo sampling will be biased. The quality of random-number generators should never be taken for granted. A good discussion of random-number generators can be found in Numerical Recipes [38] and in Monte Carlo Methods by Kalos and Whitlock [37]. Thus far, we have not mentioned another condition that the matrix π(o → n) should satisfy, namely that it must be irreducible (i.e., every accessible point in configuration space can be reached in a finite number of Monte Carlo steps from any other point). Irreducibility plays the same role in MC as ergodicity in MD, and to facilitate the comparison, we will use the terms interchangeably. Although some simple MC schemes are guaranteed to be irreducible, these are often not the most efficient schemes. Conversely, many efficient Monte Carlo schemes have either not been proven to be irreducible or, worse, been proven to violate irreducibility. The solution is usually to mix the efficient, non-ergodic scheme with an occasional trial move of the less-efficient but ergodic scheme. The method as a whole will then be ergodic (at least, in principle). The one criterion that MC moves do not have to satisfy is generating physically plausible trajectories. In this respect, MC differs fundamentally from MD, and what sounds like a disadvantage is often a strength. As we shall see later, “unphysical” moves MC can greatly speed up the exploration of configuration space and ensure irreducibility. However, in numerous applications, MC algorithms are designed to generate physically plausible trajectories, in particular


when simulating particles that undergo diffusive Brownian motion. We will mention a few examples later (see section 13.3.2). At this stage, we only note that all these schemes must also be valid MC schemes. Hence the present discussion applies.

3.2.2 Parsimonious Metropolis algorithm

In the standard Metropolis MC method, we first compute the energy change due to a trial move, and then draw a random number to decide on its acceptance. In the so-called Parsimonious Metropolis Algorithm (PMA) [68], the first step is drawing a random number R uniformly distributed between 0 and 1. Obviously, the Metropolis rule implies that trial moves with βΔE < −ln R will be accepted, and so it would seem that nothing is gained by computing R first. However, the PMA exploits the fact that it is often possible to place bounds ΔEmax and ΔEmin on ΔE. Clearly, trial moves with βΔEmax < −ln R will be accepted, and those with βΔEmin > −ln R will be rejected. The idea behind the PMA is to use bounds that can be (pre)computed cheaply. How this is achieved depends on the specific implementation: for more details, see ref. [68]. We will see related examples where R is computed first in section 13.4.4 (Eq. (13.4.30)) and, in a slightly different form, in the kinetic Monte Carlo algorithm described in Section 13.3.2 (see Eq. (13.3.5)).
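The logic of the PMA can be summarized in a few lines of code. The sketch below is schematic (it is not the implementation of ref. [68]); dE_bounds and exact_dE are hypothetical placeholders for whatever cheap bounds and full energy evaluation a particular model admits:

```python
import numpy as np

rng = np.random.default_rng()

def pma_accept(beta, dE_bounds, exact_dE):
    """One parsimonious Metropolis decision.

    dE_bounds: function returning cheap bounds (dE_min, dE_max) on the energy change.
    exact_dE:  function returning the full (expensive) energy change.
    Both are placeholders; the useful bounds depend on the model."""
    threshold = -np.log(rng.random())          # accept iff beta*dE < -ln R
    dE_min, dE_max = dE_bounds()
    if beta * dE_max < threshold:              # even the worst case is acceptable
        return True
    if beta * dE_min > threshold:              # even the best case is rejected
        return False
    return beta * exact_dE() < threshold       # bounds inconclusive: pay the full price
```

The saving comes from the fact that the expensive call exact_dE() is made only when the cheap bounds cannot decide the move.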

3.3 A basic Monte Carlo algorithm

It is not very helpful to discuss Monte Carlo or Molecular Dynamics algorithms in abstract terms. The best way to explain how they work is to write them down. This will be done in the present section. The core of most Monte Carlo or Molecular Dynamics programs is simple: typically only a few hundred lines long. These MC or MD algorithms may be embedded in standard program packages or programs that have been tailor-made for specific applications. All these programs and packages tend to be different, and there is, therefore, no such thing as a standard Monte Carlo or Molecular Dynamics program. However, the cores of most MD/MC programs are, if not identical, at least very similar —if you understand one, you understand them all. Below, we shall construct the “core” of an MC program. It will be rudimentary, and efficiency has been traded for clarity. But it should demonstrate how the Monte Carlo method works. In the next chapter, we will do the same for a Molecular Dynamics program.

3.3.1 The algorithm

As explained in the previous section, the Metropolis (Markov-Chain) Monte Carlo method is constructed such that the probability of visiting a particular point rN in configuration space is proportional to the Boltzmann factor exp[−βU(rN)]. In the standard implementation of the approach introduced by Metropolis et al. [6], we use the following scheme:
1. Select a particle at random,⁷ and calculate its contribution U(rN) to the energy of the system.⁸
2. Give the particle a random displacement, r′ = r + Δ, and calculate its new potential energy U(r′N).
3. Accept the move from rN to r′N with probability

acc(o → n) = min(1, exp{−β[U(r′N) − U(rN)]}).    (3.3.1)

An implementation of this basic Metropolis scheme is shown in Algorithms 1 and 2.

Algorithm 1 (Core of Metropolis MC code)

program MC                              basic Metropolis algorithm
  for 1 ≤ itrial ≤ ntrial do            perform ntrial MC trial moves
    mcmove                              trial move procedure
    if (itrial % nsamp) == 0 then       % denotes the Modulo operation
      sample                            sample procedure
    endif
  enddo
end program

Specific Comments (for general comments, see p. 7)
1. Function mcmove attempts to displace a randomly selected particle (see Algorithm 2).
2. Function sample samples observables every nsamp-th trial move.

⁷ Selecting particles at random is standard practice in MC because other choices yield algorithms that do not satisfy detailed balance. Interestingly, the original Metropolis algorithm [6] moved particles sequentially, and hence violated detailed balance. It did, however, satisfy the balance condition. In section 13.4.4, we will discuss how breaking detailed balance may speed up the convergence of MC simulations. An explicit description of trial moves involving a particle selected at random can be found in the 1957 paper by Wood and Parker [69].

⁸ This is straightforward for pairwise additive interactions. However, for systems with many-body interactions, it would be that part of the potential energy of the system that depends on the coordinates of the selected particle. In the worst case, this may simply be the total potential energy of the system.


Algorithm 2 (Monte Carlo trial displacement)

function mcmove                        Metropolis MC trial move
  i = int(R*npart) + 1                 select random particle with 1 ≤ i ≤ npart
  eno = ener(x(i),i)                   eno: energy of particle i at “old” position x(i)
  xn = x(i) + (R-0.5)*delx             trial position xn for particle i
  enn = ener(xn,i)                     enn: energy of i at xn
  if R < exp[-β*(enn-eno)] then        Metropolis criterion Eq. (3.3.1)
    x(i) = xn                          if accepted, x(i) becomes xn
  endif
end function

Specific Comments (for general comments, see p. 7)
1. npart: number of particles; x(npart): position array; T = 1/β; maximum step size = 0.5*delx.
2. The function ener(x,i) computes the interaction energy of a particle at position x, using the approach shown in Algorithm 5.
3. R generates a random number uniformly distributed between 0 and 1.
4. int(z) returns the integer part of z.
5. Note that, if a configuration is rejected, the old configuration is retained.
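Algorithms 1 and 2 translate almost line for line into most programming languages. The following Python sketch is one possible transcription for particles described by a single coordinate each (as in Algorithm 2), assuming a user-supplied function ener(x, i) that returns the energy of particle i in configuration x. It is meant only to make the bookkeeping explicit (in particular, that a rejected move leaves the old configuration in place), not to serve as reference code.

```python
import numpy as np

rng = np.random.default_rng()

def mcmove(x, beta, delx, ener):
    """One Metropolis trial displacement (cf. Algorithm 2)."""
    i = rng.integers(len(x))                    # select a random particle
    eno = ener(x, i)                            # energy of i at its old position
    xold = x[i]
    x[i] = xold + (rng.random() - 0.5) * delx   # trial position
    enn = ener(x, i)                            # energy of i at the trial position
    # Metropolis criterion, Eq. (3.3.1); a rejected move restores the old position
    if enn > eno and rng.random() >= np.exp(-beta * (enn - eno)):
        x[i] = xold

def mc(x, ntrial, nsamp, beta, delx, ener, sample):
    """Core of a Metropolis MC program (cf. Algorithm 1)."""
    for itrial in range(1, ntrial + 1):
        mcmove(x, beta, delx, ener)
        if itrial % nsamp == 0:
            sample(x)                           # accumulate observables
```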

3.3.2 Technical details

Computing averages in Monte Carlo simulations can be accelerated using a number of tricks. Ideally, schemes to save computer time should not affect the results of a simulation in a systematic way, but that is often not completely true. The main reason is that simulations of macroscopic systems are not feasible: we, therefore, carry out simulations on microscopic (hundreds to millions of particles) systems. The properties of such finite systems are often slightly, but sometimes very different from those of macroscopic systems. Similarly, even for moderately-sized systems, it is often advantageous to avoid explicit calculation of very weak intermolecular interactions between particles that are far apart. In both cases, the effect of these approximations can be mitigated, as we discuss below. But, whatever the situation, the key point is that we should be aware of possible systematic errors introduced by time-saving tricks and correct them wherever possible. Many of the time-saving devices that we discuss below are similar for MC and MD simulations. Rather than repeat the present section in the chapter on MD, we shall refer in our discussion below to both types of simulations whenever this is relevant. However, we will not yet assume that the reader is familiar with the technical details of MD simulations. The only feature of MD that, at this stage, is relevant for our discussion is that in MD simulations, we must compute the forces acting on all particles.


FIGURE 3.2 Schematic representation of periodic boundary conditions.

3.3.2.1 Boundary conditions

Monte Carlo and Molecular Dynamics simulations of atomic or molecular systems aim to provide information about the properties of a macroscopic sample. Yet, the number of degrees of freedom that are usually studied in molecular simulations ranges from hundreds to millions.⁹ Clearly, such particle numbers are still far removed from the thermodynamic limit. In particular, for small systems it cannot be safely assumed that the choice of the boundary conditions (e.g., free or hard or periodic) has a negligible effect on the properties of the system. In fact, in a three-dimensional N-particle system with free boundaries, the fraction of all molecules that is at the surface is proportional to N^−1/3. For instance, in a simple cubic crystal of 1000 atoms, some 49% of all atoms are at the surface, and for 10⁶ atoms, this fraction has only decreased to 6%. As a consequence, we should expect that the properties of such small systems will be strongly influenced by surface effects. To simulate bulk phases, it is essential to choose boundary conditions that mimic the presence of an infinite bulk surrounding our N-particle model system. This is usually achieved by employing periodic boundary conditions. The volume containing the N particles is treated as the primitive cell of an infinite periodic lattice of identical cells (see Fig. 3.2). A given particle (i, say) now interacts with all other particles in this infinite periodic system, that is, all other particles in the same periodic cell and all particles (including its own periodic image) in all other cells. For instance, if we assume that all intermolecular interactions are pairwise additive, then the total potential energy of the N particles in any one periodic box is

Utot = (1/2) Σ′_{i,j,n} u(|rij + nL|),

where L is the diameter of the periodic box (assumed cubic, for convenience) and n is an arbitrary vector of three integer numbers, while the prime over the sum indicates that the term with i = j is to be excluded when n = 0. In this general form, periodic boundary conditions are not particularly useful, because to simulate bulk behavior, we had to rewrite the potential energy as an infinite sum rather than a finite one.¹⁰ As discussed in the next section, we are in practice often dealing with short-range interactions. In that case, it is usually permissible to truncate all intermolecular interactions beyond a certain cutoff distance rc, and account approximately for all longer-ranged interactions. Although the use of periodic boundary conditions is a surprisingly effective method for emulating homogeneous bulk systems, one should always be aware of the fact that such boundary conditions still cause spurious correlations that are not present in a macroscopic bulk system. In particular, one consequence of the periodicity of the model system is that only fluctuations with a wavelength compatible with the periodicity of the boundary conditions are allowed: the longest wavelength that still fits in the periodic box is the one for which λ = L. In cases where long-wavelength fluctuations are expected to be important (as, for instance, near a critical point), one should expect problems with the use of periodic boundary conditions. Periodic boundary conditions also break the rotational symmetry of the system. One consequence is that the average density profile around a particle in an isotropic fluid is not spherically symmetric, but reflects the cubic (or lower) symmetry of the periodic boundary conditions [70]. Such effects become less important with increasing system size.

Periodic Boundaries: a misnomer?
The term Periodic Boundary Conditions sometimes creates confusion, due to the use of the word boundary. The origin of the periodic lattice of primitive cells may be chosen anywhere in the model system under study, and this choice will not affect any observable property. Hence, the “boundary” has no physical significance (see, however, section 11.2.2). In contrast, the shape of the periodic cell and its orientation may not always be chosen at will. In particular, for crystals or other systems with positional ordering, the shape and orientation of the simulation box must be compatible with the intrinsic periodicity of the physical system.

⁹ Larger systems containing billions of particles can be simulated, but in view of the high computational cost, such simulations will only be performed if the essential physics cannot be captured by smaller systems.

¹⁰ In fact, in the first MC simulation of three-dimensional Lennard-Jones particles, Wood and Parker [69] discuss the use of such infinite sums in relation to the now conventional approach discussed here.
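In a cubic box of diameter L, applying periodic boundary conditions and finding the nearest periodic image of a particle (anticipating the nearest-image convention discussed in the next subsection) takes only a few lines. The following sketch assumes NumPy arrays of Cartesian coordinates; it is an illustration, not a prescription.

```python
import numpy as np

def wrap(r, L):
    """Fold coordinates back into the central periodic box [0, L)."""
    return r % L

def nearest_image_vector(ri, rj, L):
    """Vector from particle j to particle i, using the nearest periodic image of j."""
    d = ri - rj
    return d - L * np.round(d / L)   # each component folded into [-L/2, L/2]
```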


3.3.2.2 Truncation of interactions

Let us consider the specific case of a simulation of a system with short-ranged, pairwise-additive interactions. In this context, short-ranged means that the total potential energy of a given particle i is dominated by interactions with neighboring particles that are closer than some cutoff distance rc. The error that results from ignoring interactions with particles at larger distances can be made arbitrarily small by choosing rc sufficiently large. When using periodic boundary conditions, the case where rc is less than L/2 (half the diameter of the periodic box) is of special interest because, in that case, we need to consider the interaction of a given particle i only with the nearest periodic image (sometimes called the Minimum Image) of any other particle j (see the dashed box in Fig. 3.2). Strictly speaking, one could also choose to limit the computation of the pair interaction to the nearest image, without imposing a spherical cut-off. However, in that case, the pair interaction at distances larger than L/2 would depend on the direction of the line joining the centers of the particles. Obviously, such a spurious angle-dependence could cause artifacts and is usually to be avoided (but not always: see Chapter 11). If the intermolecular potential is not rigorously zero for r ≥ rc, truncation of the intermolecular interactions at rc will result in a systematic error in Utot. As we discuss below, it is often possible to correct approximately for the truncation of the pair potential. Ideally, the computed values of observable properties should be insensitive to the precise choice of the cut-off distance of the pair potential, provided we correct for the effect of the cut-off procedure. However, for the most common choices of the cut-off distance, different cut-off procedures can yield different answers, and the consequences have been serious. For instance, the literature is full of studies of models with Lennard-Jones-like pair potentials. However, different authors often use different cut-off procedures and, as a result, it is difficult to compare different simulations that claim to study the same model. Several factors make truncation of the potential a tricky business. First of all, it should be realized that although the absolute value of the potential energy function decreases with inter-particle separation r, for sufficiently large r, the number of neighbors is a rapidly increasing function of r: the number of particles at a distance r from a given atom increases asymptotically as r^(d−1), where d is the dimensionality of the system. As an example, let us compute the effect of truncating the pair potential for a simple model system. In a homogeneous phase, the average potential energy (in three dimensions) of a given atom i is given by

ui = (1/2) ∫₀^∞ dr 4πr² ρ(r)u(r),

where ρ(r) denotes the average number density at a distance r from atom i. The factor (1/2) has been included to correct for double counting of intermolecular interactions. If we truncate the potential at a distance rc, we ignore the tail contribution utail:

utail ≡ (1/2) ∫_rc^∞ dr 4πr² ρ(r)u(r),    (3.3.2)

where we have dropped the subscript i because in a bulk fluid, all atoms in the system experience an equivalent environment. To simplify the calculation of utail, it is commonly assumed that for r ≥ rc, the density ρ(r) is equal to the average number density ρ in the bulk of the fluid. If the intermolecular interactions decay sufficiently rapidly, one may correct for the systematic error that results from truncating the potential at rc by adding a tail contribution to Utot:

Utot = Σ_{i<j} utrunc(rij) + (Nρ/2) ∫_rc^∞ dr u(r) 4πr²,    (3.3.3)

where the assumption ρ(r) ≈ ρ for r > rc becomes worse for smaller choices for rc. From Eq. (3.3.3), it can be seen that the tail correction to the potential energy diverges unless the potential energy function u(r) decays more rapidly than r^−3 (in three dimensions). This condition is satisfied if the long-range interaction between molecules is dominated by dispersion forces. However, for the important case of Coulomb or dipolar interactions, the tail correction diverges, and hence the nearest-image convention cannot be used for such systems. In such cases, the interactions with all periodic images should be taken into account explicitly. Chapter 11 discusses how this problem can be addressed.

To give explicit examples of truncation procedures that have been used in the literature, let us consider the Lennard-Jones (LJ) [71,72] potential that is very popular in molecular simulations.¹¹ The Lennard-Jones 12−6 potential is of the following form:

uLJ(r) = 4ε[(σ/r)¹² − (σ/r)⁶].    (3.3.4)

In what follows, we shall refer to this potential as the Lennard-Jones potential. For a cut-off distance rc, the tail correction utail for the LJ pair potential in three dimensions becomes

utail = (1/2) 4πρ ∫_rc^∞ dr r² u(r)
      = (1/2) 16πρε ∫_rc^∞ dr r² [(σ/r)¹² − (σ/r)⁶]
      = (8/3) πρεσ³ [(1/3)(σ/rc)⁹ − (σ/rc)³].    (3.3.5)

¹¹ Lennard-Jones-type potentials of the form r^−n − r^−m were first introduced by J.E. Lennard-Jones when he was still called J.E. Jones [71], but the choice for the 12−6 form was only made a few years later [72]. The popularity of the LJ 12-6 potential (as it is commonly called) in simulations was initially due to the fact that it was cheap to evaluate and that it worked well (be it for the wrong reasons [73]) for liquid argon. By now, its popularity is mainly due to the fact that ... it is so widely used.

For a cutoff distance rc = 2.5σ the potential has decayed to a value that is about 1/60th of the well depth. This seems to be a small value, but in fact, the tail correction is usually non-negligible. For instance, at a typical liquid density, ρσ³ = 1, we find utail = −0.535ε. This number is certainly not negligible compared to the total potential energy per atom (almost 10% at a typical liquid density); hence although we can truncate the potential at 2.5σ, we cannot ignore the effect of this truncation. Note that the tail correction described above assumes that we consider a homogeneous system. Near an interface, the simple expressions for the tail correction lose their validity, and, in fact, computed surface tensions are depressingly sensitive to the cut-off procedure used [74]. For adsorption in solids, Jablonka et al. [75] showed that adding tail corrections helps to get more consistent results.

There are several ways to truncate interactions in a simulation —some truncate the potential, some the force, and some make the force vanish continuously at rc. Although all these methods are designed to yield similar results, it should be realized that they yield results that may differ significantly, in particular in the vicinity of critical points [76–78] (see Fig. 3.3). Frequently used methods to truncate the potential are
1. Simple truncation
2. Truncation and shift
3. Truncation and force-shift (i.e., both u(r) and the force are made to vanish at rc).

Simple truncation
The simplest method to truncate potentials is to ignore all interactions beyond rc. In that case, we would be simulating a model with a potential of the form:

utrunc(r) = uLJ(r)    r ≤ rc
          = 0         r > rc.    (3.3.6)

As mentioned, such a truncation may result in an appreciable error in our estimate of the potential energy of the true Lennard-Jones model (3.3.4). Moreover, as the truncated pair potential changes discontinuously at rc, such a potential is not suited for standard Molecular Dynamics simulations.


FIGURE 3.3 Vapor-liquid coexistence curves for three-dimensional Lennard-Jones models with different truncations. The figure shows the effect of the truncation of the potential on the estimated location of the critical point (large black dots). The upper curve gives the phase envelope for the Lennard-Jones potential truncated at rc = 2.5σ and with a tail correction added to account for interactions at distances larger than rc . The lower curve shows the estimated vapor-liquid coexistence curve for a potential that is used in Molecular Dynamics simulations, namely the truncated and shifted potential (also with rc = 2.5σ ) [78].

Truncated potentials could be used in Monte Carlo simulations, and sometimes they are (e.g., in simulations of simple model systems with square-well or square-shoulder potentials). However, in such cases, one should be aware of the fact that there is an “impulsive” contribution to the pressure due to the discontinuous change of the potential at rc. For otherwise continuous potentials, it is not recommended to use truncation pure and simple (although, if necessary, the impulsive contribution to the pressure could be computed). The usual method to compensate (approximately) for the truncation of the pair potential is to add tail corrections to the structural properties that we compute. The tail correction for the potential energy was discussed above, Eq. (3.3.5). We will discuss the computation of the pressure in Chapter 5, Eq. (5.1.21). For the sake of completeness, we here give the expression for the tail correction to the pressure in three dimensions:

Ptail = (4πρ²/6) ∫_rc^∞ dr r² r·f(r)
      = (16/3) πρ²εσ³ [(2/3)(σ/rc)⁹ − (σ/rc)³],    (3.3.7)

where, in the second line, we consider the specific example of a Lennard-Jones potential. Typically, the tail correction to the pressure is even larger than that to the potential energy. For the LJ model at ρ = 1 with rc = 2.5, Ptail ≈ −1.03, which is clearly very substantial.
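The tail corrections of Eqs. (3.3.5) and (3.3.7) are simple closed-form expressions and are easy to evaluate. The short sketch below, in reduced units (σ = ε = 1), reproduces the energy tail quoted above; the function names are ours.

```python
import math

def u_tail(rho, rc):
    """LJ energy tail correction per particle, Eq. (3.3.5), reduced units."""
    return (8.0 / 3.0) * math.pi * rho * ((1.0 / 3.0) * rc**-9 - rc**-3)

def p_tail(rho, rc):
    """LJ pressure tail correction, Eq. (3.3.7), reduced units."""
    return (16.0 / 3.0) * math.pi * rho**2 * ((2.0 / 3.0) * rc**-9 - rc**-3)

print(u_tail(1.0, 2.5))   # about -0.535
print(p_tail(1.0, 2.5))   # about -1.07, as given by Eq. (3.3.7) at this state point
```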


Truncated and shifted
In Molecular Dynamics simulations, it is important that forces are not singular and it is desirable to make even higher derivatives of the pair potential continuous. The simplest procedure just shifts the pair potential so that it vanishes continuously at rc. Then the force is not singular, but its first derivative is. The truncated-and-shifted potential is

utr−sh(r) = u(r) − u(rc)    r ≤ rc
          = 0               r > rc.    (3.3.8)

Of course, the potential energy and pressure of a system with a truncated and shifted potential differ from the values obtained with untruncated pair potentials. As before, we can approximately correct for the effect of the modification of the intermolecular potential on both the potential energy and the pressure. For the pressure, the tail correction is the same as in Eq. (3.3.7). For the potential energy, we must add, in addition to the long-range correction (3.3.5), a contribution equal to the average number of particles that are within a distance rc from a given particle, multiplied by half the value of the (untruncated) pair potential at rc. The factor of one-half is included to correct for overcounting of the intermolecular interactions. One should be careful when applying truncated and shifted potentials in models with anisotropic interactions. In that case, the truncation should not be carried out at a fixed value of the distance between the molecular centers of mass but at a point where the pair potential has a fixed value, because otherwise the potential cannot be shifted to 0 for all points on the cut-off surface. For Monte Carlo simulations, this creates no serious problems, but for Molecular Dynamics simulations, this would be quite disastrous, as the system would no longer conserve energy, unless the impulsive forces due to the truncating and shifting are taken into account explicitly.

Truncated and force-shifted
To make the forces also vanish continuously at rc, one can use a truncated, shifted, and force-shifted potential. There are many choices for potentials that have this property. The simplest is:

utr−fs(r) = u(r) − u(rc) + f(rc)(r − rc)    r ≤ rc
          = 0                               r > rc.    (3.3.9)

Again, the corresponding tail corrections can be worked out. An example of such a truncated and force-shifted Lennard-Jones potential is the one proposed by Kincaid and Hafskjold [79,80].
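The three truncation procedures differ only in what is subtracted from the pair potential inside the cutoff. A compact sketch for the Lennard-Jones potential in reduced units (the function names are ours, not the book's):

```python
import numpy as np

def u_lj(r):
    return 4.0 * (r**-12 - r**-6)           # LJ potential, reduced units, Eq. (3.3.4)

def f_lj(r):
    return 24.0 * (2.0 * r**-13 - r**-7)    # radial force f(r) = -du/dr

def u_truncated(r, rc):                     # Eq. (3.3.6): simple truncation
    return np.where(r <= rc, u_lj(r), 0.0)

def u_trunc_shifted(r, rc):                 # Eq. (3.3.8): truncated and shifted
    return np.where(r <= rc, u_lj(r) - u_lj(rc), 0.0)

def u_trunc_force_shifted(r, rc):           # Eq. (3.3.9): force also vanishes at rc
    return np.where(r <= rc, u_lj(r) - u_lj(rc) + f_lj(rc) * (r - rc), 0.0)
```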


3.3.2.3 Whenever possible, use potentials that need no truncation

As should be clear from the above discussion, the use of different cut-off procedures for the pair potential creates many problems. Of course, if we need to use models where the use of truncation and tail correction is inevitable, we have no choice. The Lennard-Jones potential is very often simply used as a “typical” short-ranged inter-atomic potential, where the precise behavior at larger distances is irrelevant. Under those conditions, one would be much better off using a cheap potential that vanishes quadratically at rc. One such potential is [81]:

uWF(r) ≡ ε [(σ/r)² − 1] [(rc/r)² − 1]²    for r ≤ rc
       = 0                                for r > rc.    (3.3.10)

The Wang-Ramirez-Dobnikar-Frenkel (WF) potential is Lennard-Jones-like for rc = 2, whereas, for rc = 1.2, it can describe the typical short-ranged interaction between colloids (see [81]). In this book, we use this potential in many exercises, because it eliminates the need for tail corrections. As shown in the preceding paragraphs, the cost of computing pairwiseadditive interactions can be reduced by only considering explicitly the interactions between particles that are less than a cut-off distance rc apart. However, even if we use a cut-off procedure, we still have to decide whether a given pair is within a distance rc . As there are N (N − 1)/2 (nearest image) distances in an N -particle system, it would seem that we would have to compute O(N 2 ) distances, before we can compute the relevant interactions. In the most naive simulations (which are, in fact, quite adequate for systems that are less than some 3-4 rc in diameter), this is exactly what we would do. However, for larger systems (say 106 particles), N (N − 1)/2 corresponds to some 1012 distances, and that makes the simulation unnecessarily expensive. Fortunately, there exist more advanced schemes that allow us to pre-sort the particles in such a way that evaluating all truncated interactions only requires O(N ) operations. Some of these tricks are discussed in Appendix I. However, in order to understand a basic MC or MD algorithm, these time-saving devices are not essential. Hence, in the examples that we discuss in section 4.2.2, we will describe a Molecular Dynamics program that uses a simple O(N 2 ) algorithm to compute all interactions (see Algorithm 5).
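As an illustration of the naive O(N²) strategy mentioned above, the following sketch evaluates the total potential energy of a periodic cubic box using the nearest-image convention and, for definiteness, the WF potential of Eq. (3.3.10) with rc = 2 in reduced units. It is a sketch of the approach, not a transcription of Algorithm 5.

```python
import numpy as np

def u_wf(r, rc=2.0):
    """WF pair potential, Eq. (3.3.10), in reduced units (sigma = epsilon = 1)."""
    return np.where(r <= rc, ((1.0 / r)**2 - 1.0) * ((rc / r)**2 - 1.0)**2, 0.0)

def total_energy(pos, L, rc=2.0):
    """Naive O(N^2) sum over all pairs with the nearest-image convention."""
    utot = 0.0
    n = len(pos)
    for i in range(n - 1):
        d = pos[i + 1:] - pos[i]          # vectors to all particles j > i
        d -= L * np.round(d / L)          # nearest periodic image
        r = np.sqrt((d * d).sum(axis=1))
        utot += u_wf(r, rc).sum()
    return utot
```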

3.3.2.4 Initialization

To start a simulation, we should assign initial positions to all particles in the system. As the equilibrium properties of the system do not (or, at least, should not) depend on the choice of initial conditions, all reasonable initial conditions are in principle acceptable. If we wish to simulate the solid state of a particular model


system, it is logical to prepare the system in the crystal structure of interest. In contrast, if we are interested in the fluid phase, there are several options: one that is increasingly popular for systems with continuous interactions, is to prepare the system in a random configuration (which will typically have very high energy) and then relax the system (by energy minimization) to a less pathological configuration. Such an approach is less suited for dense systems with hard-core interactions, as minimization does not work for such systems. For such systems, but in fact also for all others, one could simply prepare the system in any convenient crystal structure at the desired target density and temperature. If we are lucky, this crystal will melt spontaneously during the early stages of a simulation (i.e., during the equilibration run) because at the temperature and the density of a typical liquid-state point, the solid-state may not be mechanically stable. However, one should be careful here, because the crystal structure may be metastable, even if it is not thermodynamically stable. It is, therefore, unwise to use a crystal structure as the starting configuration for a simulation of a liquid close to the freezing curve. In such cases, it is better to start the simulation from the final (liquid) configuration of a system at a higher temperature or a lower density, where the solid is not even metastable. In any event, to speed up equilibration, it is often advisable to use the final (well-equilibrated) configuration of an earlier simulation at a nearby state point (if such a configuration is available) as the starting configuration for a new run and adjust the temperature and density to the desired values. The equilibrium properties of a system should not depend on the initial conditions. If such a dependence nevertheless is observed in a simulation, there are two possibilities. The first is that our results reflect the fact that the system that we simulate really behaves non-ergodically. This is the case, for instance, in glassy materials or low-temperature, substitutionally disordered alloys. The second, more common, explanation is that the system that we simulate is ergodic but our sampling of configuration space is inadequate; in other words, we have not yet reached equilibrium.

3.3.2.5 Reduced units

One way to save time in simulations is avoiding duplication, i.e., avoiding doing the same simulation twice for the same state point. At first sight, one might think that such duplications could only happen by mistake, but that is not quite true. Suppose that one group has studied the thermodynamic properties of the Lennard-Jones model for argon at 60 K and a density of 840 kg/m³. Another group wishes to study the properties of the Lennard-Jones model for xenon at 112 K and a density of 1617 kg/m³. One might expect that these simulations would yield quite different results. However, if, instead of using SI units, we use as our units the natural units of the model (i.e., the LJ diameter σ, the LJ well-depth ε, and the molecular mass m), then in these “reduced” units the two state-points are identical. The Ar results can simply be rescaled to obtain the Xe results.


For the above example of Ar and Xe, both simulations would be characterized by a reduced density (Nσ³/V) ≡ ρ* = 0.5 and a reduced temperature kB T/ε ≡ T* = 0.5. The use of reduced units to characterize the state of a system by dimensionless numbers is very common in science and engineering. For instance, dimensionless groups, such as the Reynolds number (vL/η) or the Peclet number (vL/D), are widely used in fluid dynamics to compare the properties of systems that, at first sight, might seem quite different: think of a model airplane being tested in a wind tunnel to predict the behavior of a real airplane under the same (or at least similar) conditions. Reduced units are popular with simulators, but not so popular with experimentalists, who like to see things expressed in SI units (although even that is not quite true, as is clear from the popularity of non-SI units, such as kcal's and Ångstroms). To construct reduced units in simulations, we need a natural choice for the unit of length in the model, and for the units of energy and mass. These choices are not unique, but for pair potentials of the form u(r) = ε f(r/σ) (see Eq. (3.3.4)), it is logical to choose σ as the unit of length and ε as the unit of energy. The well-known Lennard-Jones potential is of this form, and hence a natural (though not unique) choice for our basic units is the following:
• Unit of length, σ
• Unit of energy, ε
• Unit of mass, m (the mass of the atoms in the system)
and from these basic units, all other units follow. For instance, our unit of time is σ√(m/ε) and the unit of temperature is ε/kB. In terms of these reduced units, denoted with superscript *, the reduced pair potential u* ≡ u/ε is a dimensionless function of the reduced distance r* ≡ r/σ. For instance, the reduced form for the Lennard-Jones potential is

u*LJ(r*) = 4[(1/r*)¹² − (1/r*)⁶].    (3.3.11)

With these conventions, we can define the following reduced units: the potential energy U* = U ε⁻¹, the pressure P* = P σ³ε⁻¹, the density ρ* = ρσ³, and the temperature T* = kB T ε⁻¹. For the simple potential given in Eq. (3.3.10) [81], the reduced form would be

u*WF(r*) = [(1/r*)² − 1][(2/r*)² − 1]².


TABLE 3.1 Translation of reduced units to real units for Lennard-Jones argon (ε/kB = 119.8 K, σ = 3.405 × 10⁻¹⁰ m, M = 0.03994 kg/mol).

Quantity        Reduced units    Real units
temperature     T* = 1           T = 119.8 K
density         ρ* = 1.0         ρ = 1680 kg/m³
time            t* = 0.005       t = 1.09 × 10⁻¹⁴ s
pressure        P* = 1           P = 41.9 MPa
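The entries in Table 3.1 follow directly from the basic units σ, ε, and m. A short sketch for Lennard-Jones argon (parameter values as in the table caption):

```python
import math

# LJ parameters for argon (Table 3.1)
kB = 1.380649e-23             # J/K
eps = 119.8 * kB              # J
sigma = 3.405e-10             # m
m = 0.03994 / 6.02214076e23   # kg per atom

unit_T = 119.8                        # K, from eps/kB
unit_rho = m / sigma**3               # kg/m^3
unit_t = sigma * math.sqrt(m / eps)   # s
unit_P = eps / sigma**3               # Pa

print(1.0 * unit_T)      # T*   = 1      ->  119.8 K
print(1.0 * unit_rho)    # rho* = 1.0    ->  ~1680 kg/m^3
print(0.005 * unit_t)    # t*   = 0.005  ->  ~1.09e-14 s
print(1.0 * unit_P)      # P*   = 1      ->  ~4.19e7 Pa = 41.9 MPa
```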

Note that the choice of units is not unique: for the LJ potential, we could equally well have chosen as our unit of length the position of the minimum rm = σ × 2^1/6. In that case, the reduced LJ potential would be of the form:

u∗∗LJ(r∗∗) = (1/r∗∗)¹² − 2(1/r∗∗)⁶,

where we have used the subscript ∗∗ to indicate that this scaled potential was obtained by scaling to rm, rather than to σ. Of course, for the results of the simulations, the choice of the length scale is unimportant. Another practical reason for using reduced units is the following: when we work with real (SI) units, we find that the absolute numerical values of the quantities that we are computing (e.g., the average energy of a particle or its acceleration) are either much less or much larger than one. If we multiply several such quantities using standard floating-point multiplication, we face the risk that, at some stage, we will obtain a result that creates an overflow or underflow. Conversely, in reduced units, almost all quantities of interest are of order 1 (say, between 10⁻³ and 10³). Hence, if we suddenly find a very large (or very small) number in our simulations (say, 10⁴²), then there is a good chance that we have either found something interesting, or we have made a mistake somewhere (usually the second). In other words, reduced units make it easier to spot errors. Simulation results that are obtained in reduced units can always be translated back into real units. For instance, if we wish to compare the results of a simulation on a Lennard-Jones model at T* = 1 and P* = 1 with experimental data for argon (ε/kB = 119.8 K, σ = 3.405 × 10⁻¹⁰ m, M = 0.03994 kg/mol), then we can use the translation given in Table 3.1 to convert our simulation parameters into SI units.¹²

¹² In what follows we will use reduced units unless explicitly indicated otherwise. For this reason, we will henceforth omit the superscript * to denote reduced units.

Example 1 (Equation of state of the Lennard-Jones fluid-I). One important application of molecular simulation is computing the phase diagram


of a given model system. In several chapters, we discuss some of the techniques that have been developed to study phase transitions, including direct-coexistence simulations. However, direct-coexistence simulations may suffer from hysteresis, which may make them less suited for locating phase transitions. In the present Example, we illustrate some of the problems that occur when we use standard Monte Carlo simulation to determine a phase diagram. As an example, we focus on the vapor-liquid curve of the Lennard-Jones fluid. Of course, as was already mentioned in section 3.3.2.2, the phase behavior is quite sensitive to the detailed form of the intermolecular potential that is used. In this Example, we approximate the full Lennard-Jones potential as follows:

u(r) = uLJ(r)    r ≤ rc
     = 0         r > rc,

where the cutoff radius rc is set to half the box length. The contribution of the particles beyond this cutoff is estimated with the usual tail corrections; that is, for the energy

utail = (8/3) πρ [(1/3)(1/rc)⁹ − (1/rc)³]

and for the pressure

Ptail = (16/3) πρ² [(2/3)(1/rc)⁹ − (1/rc)³].

The equation of state of the Lennard-Jones fluid has been investigated by many groups using Molecular Dynamics or Monte Carlo simulations, starting with the work of Wood and Parker [69]. A first systematic study of the equation of state of the Lennard-Jones fluid was reported by Verlet [14]. Subsequently, many more studies have been published. In 1979, the data available at that time were compiled by Nicolas et al. [82] into an accurate equation of state. This equation has been refitted by Johnson et al. [83] using the best data then available. Since the work of Johnson et al., many additional studies on the equation of state of the Lennard-Jones fluid have appeared. This resulted in further improvements in the equation of state. There are now so many equations of state for the Lennard-Jones fluid that we have to refer to the review of Stephan et al. [84] for a discussion of which of these equations to choose for which property. In the present study, we compare our numerical results with the equation of state by Johnson et al., which is sufficiently accurate for this purpose. We performed several simulations using Algorithms 1 and 2. During the simulations, we determined the energy per particle and the pressure. The pressure was calculated using the virial W,

P = ρ/β + W/V,    (3.3.12)

where the virial is defined by

W = (1/3) Σ_i Σ_{j>i} f(rij) · rij,    (3.3.13)

where f(rij ) is the intermolecular force. Fig. 3.4 (left) compares the pressure as obtained from a simulation above the critical temperature with the equation of state of Johnson et al. [83]. The agreement is excellent (as is to be expected).
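For a pairwise-additive potential, Eqs. (3.3.12) and (3.3.13) can be evaluated in a few lines. The sketch below does so for the truncated Lennard-Jones potential in reduced units, using the naive O(N²) loop and the nearest-image convention; tail corrections would still have to be added separately.

```python
import numpy as np

def pressure(pos, L, beta, rc=2.5):
    """Virial pressure, Eqs. (3.3.12)-(3.3.13), for the truncated LJ potential
    in reduced units (no tail correction included)."""
    n = len(pos)
    rho = n / L**3
    w = 0.0
    for i in range(n - 1):
        d = pos[i + 1:] - pos[i]
        d -= L * np.round(d / L)                          # nearest image
        r2 = (d * d).sum(axis=1)
        r2 = r2[r2 <= rc * rc]                            # pairs inside the cutoff
        inv6 = 1.0 / r2**3
        w += (24.0 * (2.0 * inv6 * inv6 - inv6)).sum()    # f(r_ij) . r_ij for LJ
    w /= 3.0                                              # Eq. (3.3.13)
    return rho / beta + w / L**3                          # Eq. (3.3.12)
```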

FIGURE 3.4 Equation of state of the Lennard-Jones fluid. (Left) Isotherm at T = 2.0. (Right) Isotherm below the critical temperature (T = 0.9); the horizontal line is the saturated vapor pressure, and the filled circles indicate the densities of the coexisting vapor and liquid phases. The solid curve represents the equation of state of Johnson et al. [83], and the circles are the results of the simulations (N = 500). The horizontal line in the right-hand figure corresponds to the coexistence pressure obtained by applying the Maxwell construction to the Johnson equation of state. The Maxwell construction exploits the fact that the equality of the chemical potentials of the coexisting phases implies that Pcoex = ∫_{Vliq}^{Vvap} dV P(V)/(Vvap − Vliq). The statistical errors in the numerical data are smaller than the symbol sizes.

Fig. 3.4 (right) shows a typical isotherm below the critical temperature. If we cool the system below the critical temperature, we should expect to observe vapor-liquid coexistence. However, standard Monte Carlo or Molecular Dynamics simulations of small model systems are not suited to study the coexistence between two phases. Using the Johnson equation of state, we predict how the pressure of a macroscopic Lennard-Jones system would behave in the two-phase region (see Fig. 3.4). For densities inside the coexistence region, the pressure is expected to be constant and equal to the saturated vapor pressure. If we now perform a Monte Carlo simulation of a finite system (500 LJ particles), we find that the computed pressure is not at all constant in the coexistence region (see Fig. 3.4). In fact we observe that, over a wide density range, the simulated system is metastable and may even have a negative pressure. The reason is that, in a finite system, a relatively important free-energy cost is associated with the creation of a liquid-vapor interface. So much so that, for sufficiently small systems, it is favorable for the system not to phase separate at all [85]. Clearly, these problems will be most severe for


small systems and in cases where the interfacial free energy is large. For this reason, standard N V T -simulations are not recommended to determine the vapor-liquid coexistence curve or, for that matter, any strong first-order phase transition in small systems. To determine the liquid-vapor coexistence curve, we should determine the equation of state for a large number of state points outside the coexistence region. These data can then be fitted to an analytical equation of state. With this equation of state, we can determine the vapor-liquid curve (this is exactly the procedure used by Nicolas et al. [82] and Johnson et al. [83]). Of course, if we simulate a system consisting of a sufficiently large number of particles, it is possible to simulate a liquid phase in coexistence with its vapor. However, such simulations are time-consuming because it takes a long time to equilibrate a two-phase system. The equation of state in Fig. 3.4 is for the full Lennard-Jones potential (as approximated with tail corrections). However, differences in truncating the potential result in different equations of state for the Lennard-Jones fluid. For example, Thon et al. have developed an equation of state for the truncated and shifted Lennard-Jones fluid [86], which is typically used in a molecular dynamics simulation. For more details, see SI (Case Study 1).

3.3.3 Detailed balance versus balance

There is no obvious bound to the number of valid Markov-Chain Monte Carlo algorithms, and in this book, we shall encounter a tiny (but hopefully important) fraction of all the algorithms that have been proposed. Clearly, it is important to be able to show that a proposed algorithm will lead to a correct sampling of the equilibrium properties of a system. There are two main factors that may contribute to incorrect sampling: the first is that the algorithm is not ergodic, i.e., that it does not connect all equilibrium configurations that have non-vanishing Boltzmann weight.¹³ Non-ergodicity is arguably a larger problem for MD than for MC. It is certainly much harder to prove ergodic-like properties for an MD algorithm than for a properly designed MC algorithm. The second factor that leads to incorrect sampling arises when the states that are visited are not sampled with the correct weight. This is not a problem for MD algorithms that properly emulate Newton’s equations of motion (we will state this more clearly in Chapter 4). With Monte Carlo algorithms, it is crucial to demonstrate that they generate the correct Boltzmann distribution of the states sampled. The most common way to ensure that MC algorithms achieve Boltzmann sampling is to impose the condition of detailed balance (see Eq. (3.2.7)). However, as mentioned, the detailed-balance condition is sufficient, but not necessary.

¹³ Of course, MC algorithms are not limited to applications in Molecular Simulation. Hence, the term “Boltzmann weight” stands for any non-negative weight function that determines the distribution over states.


Manousiouthakis and Deem [87] have shown that the weaker “balance condition” is a necessary and sufficient requirement to ensure Boltzmann sampling of the accessible states. Dropping the detailed-balance constraint gives us the freedom to use a much wider range of Monte Carlo strategies. But this freedom comes at a cost: proving the validity of non-detailed-balance MC algorithms is not always easy. This is why non-detailed-balance MC algorithms were often avoided in the past. However, in recent years Krauth and collaborators have developed a class of powerful non-detailed-balance MC algorithms [88,89]. We will discuss some of these algorithms in Chapter 13. Here, we just give a simple example of an MC algorithm that is valid yet does not satisfy detailed balance: In the simple Monte Carlo scheme shown in Algorithm 2 we selected a particle at random and gave it a random trial displacement. The a priori probability to select the same particle in the next trial move and attempt to move it back to its original position is the same as the probability of carrying out the forward move. Combining this procedure to generate trial moves with the Metropolis acceptance rule results in an algorithm that satisfies detailed balance. An alternative scheme would be to attempt moving all particles sequentially, i.e., an attempt to move particle 1 is followed by an attempt to move particle 2, etc. In this sequential scheme, the probability that a single-particle move is followed by its reverse is zero. Hence, this scheme clearly violates detailed balance. However, Manousiouthakis and Deem showed (under rather weak conditions —see Ref. [87]) that such a sequential updating scheme does obey balance and does therefore result in correct MC sampling. There is every reason to distrust detailed-balance-violating algorithms if it has not been demonstrated that they satisfy balance. This is true in particular for “composite” algorithms that combine different trial moves. The detailed-balance condition is, therefore, an important guiding principle in developing novel Monte Carlo schemes: when writing a new code, it is advisable to start with a version that imposes detailed balance and use that to obtain some reference data. If, subsequently, it turns out that the performance of a working program can be improved considerably by using a “balance-only” algorithm, then it is worth implementing it, —but it is then crucial to prove that balance is satisfied. Example 2 (Importance of detailed balance). Monte Carlo simulations aim to sample points in configuration space according to their Boltzmann weights. If the system is ergodic, imposing detailed-balance is sufficient, but not necessary to ensure Boltzmann sampling. In other words: some sampling schemes that violate detailed balance may nevertheless lead to correct Boltzmann sampling (see Section 13.4.4). However, one should avoid non-detailed-balance schemes, unless one can prove that they satisfy the more general balance condition. In the present Example, we show that systematic errors result when using a scheme that does not satisfy the balance condition.


Consider an ordinary N,V,T move; a new trial position is generated by giving a randomly selected particle, say i, a random displacement:

xn(i) = xo(i) + Δx (R − 0.5),

where Δx is twice the maximum displacement. We now make a seemingly small change in the algorithm and generate a new position using

xn(i) = xo(i) + Δx (R − 0.0)        wrong!

i.e., we give the particles only a positive displacement. With such a move detailed balance is violated, since the reverse move —putting the particle back at xo— is not possible.ᵃ
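In code, the difference between the correct and the incorrect scheme is a single constant: without the 0.5 offset, a particle can only be displaced in one direction, so the reverse move has zero probability. A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng()
delx = 0.1   # twice the maximum displacement

def trial_position(xo):
    return xo + delx * (rng.random() - 0.5)   # symmetric: obeys detailed balance

def trial_position_wrong(xo):
    return xo + delx * (rng.random() - 0.0)   # positive only: reverse move impossible
```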

FIGURE 3.5 Equation of state of the Lennard-Jones fluid (T = 2.0); comparison of a displacement scheme that obeys detailed balance (circles) and one that does not (squares). Both simulations have been performed with 500 particles. The solid curve is the equation of state of Johnson et al. [83]. The figure at the left corresponds to the low-pressure regime. The high-pressure regime is shown in the right-hand figure.

For the Lennard-Jones fluid, we can use the program of Case Study 1 to compare the two sampling schemes. The results of these simulations are shown in Fig. 3.5. At first sight, the results of the incorrect scheme look reasonable; in fact, at low densities, the results of the two schemes do not show significant differences. But at high densities, the incorrect scheme overestimates the pressure. This overestimate of the pressure is a systematic error: it does not disappear when we perform longer simulations. The above example illustrates something important: one cannot decide if an algorithm is correct on the basis of the fact that the results look reasonable. Simulation results must always be tested on cases where we know the correct answer: either a known (and trusted) numerical result or, preferably, an exact result that may be known in some limiting cases (dilute vapor, dense solid, etc.). Usually, one does not know a priori the size of the optimal maximum displacement in a Monte Carlo simulation. By doing short test runs, we can explore what maximum displacement achieves the best sampling for a given


amount of simulation time. However, the maximum step-size should not be changed during a production run, because then one would violate detailed balance [90]: if from one move to the next, the maximum displacement is decreased, then the a priori probability for a particle to return to its previous position could be zero, which violates microscopic reversibility. For more details, see SI (Case Study 2).

ᵃ In Event-Chain MC (Section 13.4.4) we will see examples of forward-only algorithms that have been designed to satisfy balance.

3.4 Trial moves

Having specified the general structure of the Metropolis MC algorithm, we now consider its implementation. To carry out a simulation, we need a model for the system of interest and an algorithm to sample its properties. At this stage, we assume that we have a model (i.e., a prescription to compute all intermolecular interactions, given the particle coordinates) and that we have prepared our model system in a suitable (not too pathological) starting configuration. We must now decide how we are going to generate trial moves. That is: we must specify the matrix α introduced in Eq. (3.2.8). We must distinguish between trial moves that involve only the molecular centers of mass and those that change the orientation or even the conformation of a molecule. Later, we shall see that a great advantage of the Monte Carlo method is that we can carry out “unphysical” trial moves that have no counterpart in the natural dynamics of a system. Often such unphysical trial moves can enhance the sampling efficiency of an MC algorithm dramatically. In the present chapter, we only briefly refer to such unphysical moves (see Example 4). Still, in later chapters, we shall encounter numerous illustrations of the power of unphysical MC moves.

3.4.1 Translational moves

We first consider trial moves of the molecular centers of mass. A perfectly acceptable method for creating a trial displacement is to add random displacements between −Δ/2 and +Δ/2 to the x, y, and z coordinates of the molecular center of mass:

xi → xi + Δ (R − 0.5)
yi → yi + Δ (R − 0.5)
zi → zi + Δ (R − 0.5),    (3.4.1)

where Δ/2 is the maximum size of the displacement and, as before, R denotes a random number uniformly distributed between 0 and 1. Clearly, the reverse trial


move is equally probable (hence, α is symmetric).14 We are now faced with two questions: how large should we choose , and should we attempt to move all particles simultaneously or one at a time? In the latter case, we should pick the molecule that is to be moved at random to ensure that the underlying Markov chain remains symmetric. All other things being equal, we should choose the most efficient sampling procedure. But, to this end, we must first define what we mean by efficient sampling. In rather vague terms, sampling is efficient if it gives you good value for money. Good value in a simulation corresponds to high statistical accuracy, and “money” is simply money: the money that buys your computer time and even your own time. For the sake of argument, we assume that the average scientific programmer is poorly paid. In that case, we only have to worry about your computer budget and the carbon footprint of the simulation.15 Then we could use the following definition of an optimal sampling scheme: a Monte Carlo sampling scheme can be considered optimal if it yields the lowest statistical error in the quantity to be computed for a given expenditure of computing budget. Usually, computing budget is translated into CPU time. From this definition, it is clear that, in principle, a sampling scheme may be optimal for one quantity but not for another. Actually, the preceding definition is all but useless in practice (as are most definitions). For instance, it is just not worth the effort to measure the error estimate in the pressure for a number of different Monte Carlo sampling schemes in a series of runs of fixed length. However, it is reasonable to assume that the mean-square error in the observables is inversely proportional to the number of uncorrelated configurations visited in a given amount of CPU time. And the number of independent configurations visited is a measure of the distance covered in configuration space. This suggests a more manageable, albeit rather ad hoc criterion for estimating the efficiency of a Monte Carlo sampling scheme: the sum of the squares of all accepted trial displacements divided by computing time. This quantity should be distinguished from the mean-squared displacement per unit of computing time, because the latter quantity goes to 0 in the absence of diffusion (e.g., in a solid or glass), whereas the former does not. Using this approximate measure of efficiency, we can offer a tentative explanation why, in simulations of condensed phases, it is usually better to perform random displacements of one particle at a time (as we shall see later, the situation is different for correlated displacements). To see why random single-particle moves are preferred, consider a system of N spherical particles, 14 Although almost all published MC simulations on atomic and molecular systems generate trial

displacements in a cube centered around the original center of mass position, this is by no means the only possibility. Sometimes, it is more convenient to generate trial moves in a spherical volume, and it is not even necessary that the distribution of trial moves in such a volume be uniform, as long as it has inversion symmetry. For an example of a case where another sampling scheme is preferable, see ref. [91]. 15 Still, we should stress that it is not worthwhile to spend a lot of time developing a fancy computational scheme that will be only marginally better than existing, simpler schemes, unless your program will run very often and a few percent gains in speed is important.


interacting through a potential energy function U(rN). Typically, we expect that a trial move will be rejected if the potential energy of the system changes by much more than kB T. Yet we try to make the Monte Carlo trial steps as large as possible without having a very low acceptance. A displacement that would, on average, give rise to an increase of the potential energy by kB T would still have a reasonable acceptance. In the case of a single-particle trial move, we then have

⟨ΔU⟩ = ⟨ (∂U/∂riα) Δriα + (1/2) (∂²U/∂riα ∂riβ) Δriα Δriβ + ··· ⟩
     = 0 + f(U) Δri² + O(Δ⁴),    (3.4.2)

where the angle brackets denote averaging over the ensemble and the horizontal bar denotes averaging over random trial moves. The second derivative of U has been absorbed into the function f(U), the precise form of which does not concern us here. If we now equate ΔU on the left-hand side of Eq. (3.4.2) to kB T, we find the following expression for Δri²:

Δri² ≈ kB T / f(U).    (3.4.3)

If we attempt to move N particles, one at a time, most of the computation involved is spent on the evaluation of the change in potential energy. Assuming that we use a neighbor list or a similar time-saving device (see Appendix I), the total time spent on evaluating the potential energy change is proportional to nN, where n is the average number of interaction partners per molecule. The sum of the mean-squared displacements will be proportional to N Δr² ∼ N kB T/f(U). Hence, the mean-squared displacement per unit of CPU time will be proportional to kB T/(n f(U)). Now suppose that we try to move all particles at once. The cost in CPU time will still be proportional to nN. But, using the same reasoning as in Eqs. (3.4.2) and (3.4.3), we estimate that the sum of the mean-squared displacements is smaller by a factor 1/N. Hence the total efficiency will be down by this same factor. This simple argument explains why most simulators use single-particle, rather than collective trial moves. It is important to note that we have assumed that a collective MC trial move consists of N independent trial displacements of the particles. As will be discussed in section 13.3.1, efficient collective MC moves can be constructed if the trial displacements of the individual particles are not chosen independently.

Optimal acceptance of trial moves
Next, consider the choice of the parameter Δ which determines the size of the trial move. How large should Δ be? If it is very large, it is likely that the resulting configuration will have a high energy and the trial move will probably be rejected. If it is very small, the change in potential energy is probably small


FIGURE 3.6 (Left) Typical dependence of the mean-squared displacement of a particle on the average size Δ of the trial move. (Right) Typical dependence of the computational cost of a trial move on the step-size Δ. For continuous potentials, the cost is constant, while for hard-core potentials it decreases rapidly with the size of the trial move.

and most moves will be accepted. In the literature, one often finds the mysterious statement that an acceptance of approximately 50% should be optimal. This statement has no justification. The optimum acceptance ratio is the one that leads to the most efficient sampling of configuration space. In fact, Roberts et al. [92,93] performed an analysis of Metropolis sampling in high-dimensional spaces for a special form of the n-dimensional distribution (not the Boltzmann form) and find that sampling is most efficient when the acceptance is 23.4%. In ref. [93] it is argued that this finding will probably generalize to more complex “non-pathological” distributions. Roberts et al., therefore, formulated the following rule of thumb: Tune the proposal variance so that the average acceptance rate is roughly 1/4. In our language, the “proposal variance” is determined by the size of the trial steps. So, all else being equal, it is probably preferable to aim for an acceptance of 25%, rather than 50%. But in certain MC simulations of many-body systems, the optimal acceptance may even be lower. This happens when the computational cost of a rejected move is less than that of an accepted move. To see why this might be so, let us consider the “diffusion coefficient” (i.e., the mean-squared displacement per unit CPU time) of the MC random walk in the (high-dimensional) configuration space. We will use this diffusion coefficient to compare the sampling efficiencies of different MC algorithms. It is then easy to see that different Monte Carlo codes will have different optimal acceptance ratios. The reason is that it makes a crucial difference whether the amount of computing required to test whether a trial move is accepted depends on the magnitude of the move (see Fig. 3.6). In the conventional Metropolis scheme, all continuous interactions have to be computed before a move can be accepted or rejected. Hence, for continuous potentials, the amount of computation does not depend on the size of a trial move. In contrast, for simulations of particles with hard repulsive cores, a move can be rejected as soon as an overlap with any neighbor is detected. In that case, a rejected move is cheaper than an ac-


As a result, the optimal acceptance ratio for hard-core systems is appreciably lower than for systems with continuous interactions. Exactly how much depends on the nature of the program, on how the information about neighbor lists is stored, and even on the computational "cost" of random numbers and exponentiation. The consensus seems to be that for hard-core systems the optimum acceptance ratio is around, or even below, 20%. However, this is just another rule of thumb that should be checked.16

A distinct disadvantage of the efficiency criterion discussed previously is that it does not allow us to detect whether the sampling of configuration space is ergodic. To take a specific example, suppose that our system consists of a number of particles that are trapped in different potential energy minima. Clearly, we can sample the vicinity of these minima quite well and still have totally inadequate sampling of the whole of the configuration space. A criterion that would detect such non-ergodicity has been proposed by Mountain and Thirumalai [94]. These authors consider the variance, over all particles, of the time-averaged single-particle (potential) energy. Let us denote the time average of the energy of particle j in the time interval t by $\bar{e}_j(t)$:

\bar{e}_j(t) = \frac{1}{t}\int_0^t dt'\, e_j(t') .

The average of all single-particle energies for this time interval is

\bar{e}(t) \equiv \frac{1}{N}\sum_{j=1}^{N} \bar{e}_j(t) .

The variance of interest is

\sigma_E^2(t) \equiv \frac{1}{N}\sum_{j=1}^{N}\left[\bar{e}_j(t) - \bar{e}(t)\right]^2 .

If all particles sample the whole configuration space, $\sigma_E^2(t)$ will approach zero as t → ∞: $\sigma_E^2(t)/\sigma_E^2(0) \to \tau_E/t$, where $\tau_E$ is a measure for the characteristic time to obtain uncorrelated samples. However, if the system is non-ergodic, as in a (spin) glass, $\sigma_E$ will not decay to 0. The work of Mountain and Thirumalai suggests that a good method for optimizing the efficiency of a Monte Carlo scheme is to minimize the product of $\tau_E$ and the computer time per trial move.

16 In section 13.4.3, Eq. (13.4.21), we show how, even in the case of continuous potentials, it is possible to reject trial moves before all interactions have been evaluated. With such a sampling scheme, the distinction between the sampling of hard-core and continuous potentials all but disappears. In recent years, this idea has gained more traction: see section 13.4.3.


Using this scheme, Mountain and Thirumalai already concluded that, even for the Lennard-Jones system, a trial move acceptance of 50% is far from optimal. They found that an acceptance probability of 20% was twice as efficient, which is in agreement with the observations made earlier in this section. Of course, a scheme based on the energy fluctuations of a particle is not useful to monitor the rate of convergence of simulations of hard-core systems, or for that matter, of any system for which we cannot define the potential energy of a given particle (e.g., a system that cannot be described by a pair potential). But the essence of the method is not that one must necessarily probe single-particle energies: any quantity that is sensitive to the local environment of a particle should do. For instance, a robust criterion would look at the convergence of the time-averaged Voronoi signature of a particle. Different environments yield different signatures. Only if every particle samples all environments, will the average Voronoi signatures of all particles decay to the mean. Of course, in some situations, an efficiency criterion based on ergodicity is not useful. By construction, it cannot be used to optimize simulations of glasses. But also when studying interfaces (e.g., solid-liquid or liquid-vapor), the ergodicity criterion would suggest that every particle should have ample time to explore both coexisting phases. This is clearly unnecessary: ice can be in equilibrium with water, even though the time of equilibration is far too short to allow the complete exchange of the molecules in the two phases.

Example 3 (Why count the old configuration again?). A somewhat counterintuitive feature of the Metropolis sampling scheme is that, if a trial move is rejected, we should once again count the contributions of the old configuration to the average that we are computing (see acceptance rule (3.2.12)). The aim of this Example is to show that this recounting is really essential. In the Metropolis scheme, the acceptance rule for a move from o to n is

acc(o \to n) = \begin{cases} \exp\{-\beta[U(n) - U(o)]\} & U(n) \ge U(o) \\ 1 & U(n) < U(o). \end{cases}

These acceptance rules lead to a transition probability

\pi(o \to n) = \begin{cases} \exp\{-\beta[U(n) - U(o)]\} & U(n) \ge U(o) \\ 1 & U(n) < U(o). \end{cases}

Note that this transition probability must be normalized:

\sum_n \pi(o \to n) = 1 .

From this normalization, it follows that the probability that we accept the old configuration again is by definition


\pi(o \to o) = 1 - \sum_{n \neq o} \pi(o \to n) .

This last equation implies that we should count the contribution of the old configuration again.

FIGURE 3.7 Equation of state of the Lennard-Jones fluid (T = 2.0); comparison of a scheme in which particles are displaced until a move is accepted (squares) with the conventional scheme (circles). Both simulations have been performed with 108 particles. The solid curve is the equation of state of Johnson et al. [83]. The left figure is at low pressure and the right one at high pressure.

It is instructive to use the Lennard-Jones program from Case Study 1 to investigate numerically the error that is made when we only include accepted configurations in our averaging. In essence, this means that in Algorithm 2 we continue attempting to displace the selected particle until a trial move has been accepted.a In Fig. 3.7, we compare the results of the correct scheme with those obtained by the scheme in which we continue to displace a particle until a move is accepted. Again the results look reasonable, but the figure shows that large, systematic errors are being made. For more details, see SI (Case Study 3).

a It is easy to see that this approach leads to the wrong answer if we try to compute the average energy of a two-level system with energy levels E_0 and E_1. If we include only accepted trial moves in our averaging, we would find that ⟨E⟩ = (E_0 + E_1)/2, independent of temperature.
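The effect described in this footnote is easily checked numerically. The following small Python script (our own illustration, not part of the book's code; the number of steps and the value of β are arbitrary choices) estimates ⟨E⟩ for a two-level system with E_0 = 0 and E_1 = 1, once with the correct Metropolis bookkeeping and once counting only accepted configurations.

    import math, random

    def two_level_average(beta, nsteps=200000, recount_rejected=True):
        """Metropolis estimate of <E> for a two-level system with E0 = 0, E1 = 1."""
        E = (0.0, 1.0)
        state = 0
        acc_sum, count = 0.0, 0
        for _ in range(nsteps):
            trial = 1 - state                    # propose the other level
            dE = E[trial] - E[state]
            accepted = dE <= 0.0 or random.random() < math.exp(-beta * dE)
            if accepted:
                state = trial
            if accepted or recount_rejected:
                acc_sum += E[state]              # rejected move: old state counted again
                count += 1
        return acc_sum / count

    beta = 2.0
    exact = math.exp(-beta) / (1.0 + math.exp(-beta))   # exact <E> for E0 = 0, E1 = 1
    print("correct :", two_level_average(beta, recount_rejected=True))
    print("faulty  :", two_level_average(beta, recount_rejected=False))
    print("exact   :", exact)

With the correct scheme the estimate approaches the exact Boltzmann average; the accepted-only average stays close to 1/2, whatever the temperature.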

3.4.2 Orientational moves

When simulating molecules rather than atoms,17 we must use a repertoire of trial moves that change both the position and the orientation of the molecules.

17 Here, and in what follows, we use the term "molecule" to denote any type of particle, large or small, whose orientation can affect the potential energy of the system. Similarly, much of our discussion of "atomic" particles applies to any particle that is spherically symmetric: in this respect, a spherical colloid can be considered to be an "atom".


Rotational moves are more subtle than translational moves: it almost requires an effort to generate translational trial moves with a distribution that does not satisfy the symmetry requirement of the underlying Markov chain. For rotational moves, the situation is different: it is only too easy to introduce a systematic bias in the orientational distribution function of the molecules by using a non-symmetrical orientational sampling scheme. Several different strategies to generate rotational displacements are discussed in [21]. Here we only mention one approach, for the sake of illustration.

3.4.2.1 Rigid, linear molecules

Consider a system consisting of N linear molecules. We specify the orientation of the i-th molecule by a unit vector û_i. One possible procedure to change û_i by a small, random amount is the following. First, we generate a unit vector v̂ with a random orientation. This is quite easy to achieve (see Algorithm 38). Next, we multiply this random unit vector v̂ by a scale factor γ. The magnitude of γ determines the magnitude of the trial rotation. We now add γ v̂ to û_i. Let us denote the resulting sum vector by t: t = γ v̂ + û_i. Note that t is not a unit vector. Finally, we normalize t, and the result is our trial orientation vector û_i'. We still have to fix γ, which determines the acceptance probability for the orientational trial move. The optimum value of γ is determined by essentially the same criteria as for translational moves. We have not yet indicated whether the translational and orientational trial moves should be performed simultaneously. Both procedures are acceptable. However, if rotation and translation correspond to separate moves, then the selection of the type of move should be probabilistic rather than deterministic if we wish to satisfy detailed balance.
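A minimal sketch of this orientational trial move, in Python with NumPy (the helper functions and the step parameter gamma are our own assumptions, not the book's algorithms; the random unit vector is generated here from normalized Gaussian components, which is one standard method), could look as follows.

    import numpy as np

    rng = np.random.default_rng()

    def random_unit_vector():
        """Random vector uniformly distributed on the 3d unit sphere."""
        v = rng.normal(size=3)
        return v / np.linalg.norm(v)

    def trial_orientation(u, gamma=0.1):
        """Small random rotation of the unit vector u, as described in the text."""
        t = u + gamma * random_unit_vector()     # t = gamma*v + u
        return t / np.linalg.norm(t)             # renormalize to obtain the trial vector

    u = np.array([0.0, 0.0, 1.0])
    u_new = trial_orientation(u)
    print(u_new, np.linalg.norm(u_new))          # new orientation, norm equal to 1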

3.4.2.2 Rigid, nonlinear molecules

The case of nonlinear, rigid molecules is slightly more complex than that of linear molecules. In molecular physics, it is conventional to describe the orientation of nonlinear molecules in terms of the Eulerian angles (φ, θ, ψ). However, for most simulations, the use of these angles is less convenient because all rotation operations should then be expressed in terms of trigonometric functions, and these are computationally expensive. For Molecular Dynamics simulations, the situation is even worse: the equations of motion in terms of Euler angles have a singularity at θ = 0. It is usually better to express the orientation of rigid, nonlinear molecules in terms of quaternion parameters (for a discussion of quaternions in the context of computer simulation, see [21]). The rotation of a rigid body can be specified by a quaternion of unit norm Q. Such a quaternion may be thought of as a unit vector in four-dimensional space:

Q \equiv (q_0, q_1, q_2, q_3) \quad\text{with}\quad q_0^2 + q_1^2 + q_2^2 + q_3^2 = 1 .    (3.4.4)

There is a one-to-one correspondence between the quaternion components q_α and the Eulerian angles,

q_0 = \cos\frac{\theta}{2}\cos\frac{\phi+\psi}{2}
q_1 = \sin\frac{\theta}{2}\cos\frac{\phi-\psi}{2}
q_2 = \sin\frac{\theta}{2}\sin\frac{\phi-\psi}{2}
q_3 = \cos\frac{\theta}{2}\sin\frac{\phi+\psi}{2} ,    (3.4.5)

and the rotation matrix R, which describes the rotation of the molecule-fixed vector in the laboratory frame, is given by (see, e.g., [95])

R = \begin{pmatrix}
q_0^2+q_1^2-q_2^2-q_3^2 & 2(q_1 q_2 - q_0 q_3) & 2(q_1 q_3 + q_0 q_2) \\
2(q_1 q_2 + q_0 q_3) & q_0^2-q_1^2+q_2^2-q_3^2 & 2(q_2 q_3 - q_0 q_1) \\
2(q_1 q_3 - q_0 q_2) & 2(q_2 q_3 + q_0 q_1) & q_0^2-q_1^2-q_2^2+q_3^2
\end{pmatrix} .    (3.4.6)

To generate trial rotations of nonlinear, rigid bodies, we must rotate the vector (q_0, q_1, q_2, q_3) on the four-dimensional (4d) unit sphere. The procedure just described for the rotation of a 3d unit vector is easily generalized to 4d. An efficient method for generating random vectors uniformly on the 4d unit sphere has been suggested by Vesely [95].
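As an illustration, the sketch below (Python/NumPy, with our own function names; it is a hedged example rather than the book's implementation) builds the rotation matrix of Eq. (3.4.6) from a unit quaternion and generates a small trial rotation on the 4d unit sphere in direct analogy with the 3d move of the previous section. The random 4d unit vector is obtained here from normalized Gaussian components; the book cites Vesely [95] for an efficient alternative.

    import numpy as np

    rng = np.random.default_rng()

    def rotation_matrix(q):
        """Rotation matrix of Eq. (3.4.6) for a unit quaternion q = (q0, q1, q2, q3)."""
        q0, q1, q2, q3 = q
        return np.array([
            [q0*q0 + q1*q1 - q2*q2 - q3*q3, 2*(q1*q2 - q0*q3),             2*(q1*q3 + q0*q2)],
            [2*(q1*q2 + q0*q3),             q0*q0 - q1*q1 + q2*q2 - q3*q3, 2*(q2*q3 - q0*q1)],
            [2*(q1*q3 - q0*q2),             2*(q2*q3 + q0*q1),             q0*q0 - q1*q1 - q2*q2 + q3*q3]])

    def trial_quaternion(q, gamma=0.1):
        """Small random displacement of q on the 4d unit sphere (4d analog of the 3d move)."""
        v = rng.normal(size=4)
        v /= np.linalg.norm(v)                 # random unit vector on the 4d sphere
        t = q + gamma * v
        return t / np.linalg.norm(t)

    q = np.array([1.0, 0.0, 0.0, 0.0])         # identity orientation
    R = rotation_matrix(trial_quaternion(q))
    print(np.allclose(R @ R.T, np.eye(3)))     # True: R is an orthogonal matrix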

3.4.2.3 Non-rigid molecules

If the molecules under consideration are not rigid, then we must also consider Monte Carlo trial moves that change the internal degrees of freedom of a molecule. In the present section, we discuss molecules that are not completely rigid but that have a single dominant conformation (or a small number of dominant conformations). Methods to perform Monte Carlo simulations on molecules with large numbers of distinct conformations are discussed in Chapter 12. If a molecule is not completely rigid, the situation is relatively simple: we can carry out normal trial moves on the Cartesian coordinates of the individual atoms in the molecule (in addition to center-of-mass moves). If some of the atoms are strongly bound, it is advisable to carry out small trial moves on those particles (no rule forbids the use of trial moves of different sizes for different atoms, as long as the moves for one particular atom type are always sampled from the same distribution). But such "atomic" trial moves should be combined with trial moves that attempt to translate or rotate the molecule as a whole. At first sight, then, sampling intra-molecular degrees of freedom would seem straightforward, in particular, if we choose suitable coordinates


(e.g., bond lengths or normal modes) to describe the structure of the molecules. However, if we use generalized coordinates (e.g., bond angles), we should bear in mind that the transformation from Cartesian coordinates to generalized coordinates results in the appearance of Jacobians or, more precisely, the absolute value of the determinant of the Jacobian matrix, in the partition function (a simple case is when we transform from {x, y, z} to {r, θ, φ}: the volume element dx dy dz transforms into r² sin θ dr dθ dφ. The Jacobian r² sin θ reflects the fact that uniform sampling in x, y, z does not correspond to uniform sampling in r, θ, φ). As we explain below, the appearance of Jacobians in the Monte Carlo weights may be a complication, but not an insurmountable one. The problem gets more tricky when we consider situations where certain coordinates are fixed (e.g. a bond length). It turns out that, in this case, it makes a difference if the constraint on the coordinates is imposed by considering the limit of a very stiff spring constant, or by eliminating the coordinate and its conjugate velocity altogether, by imposing the appropriate constraints on the Lagrangian (see Appendix A). The method of Lagrangian constraints is extensively used in MD simulations (see Chapter 14 and ref. [96]), but the results of such simulations can only be compared with MC simulations if the constraints have been imposed in exactly the same way (see [97]). Different ways of imposing the same constraints can lead to observable differences. A well-known example is a freely jointed trimer with fixed bond lengths. As shown by Van Kampen, the limit of constraining the bond lengths with stiff springs yields a different distribution of the internal angle than when the same constraint would have been imposed at the level of the Lagrangian [98] (see section 14.1.1). To understand why Monte Carlo simulations of flexible molecules with a number of stiff (or even rigid) bonds (or bond angles) can become complicated, let us return to (2.2.20) for a thermal average of a function A(r^N):

\langle A \rangle = \frac{\int dp^N dr^N\, A(r^N) \exp[-\beta \mathcal{H}(p^N, r^N)]}{\int dp^N dr^N \exp[-\beta \mathcal{H}(p^N, r^N)]} .

We will first consider the case that the constraints are just stiff. After that, we consider what happens if certain coordinates are rigorously constrained. To perform Monte Carlo sampling on the generalized coordinates q^N, we must express the Hamiltonian in Eq. (2.2.20) in terms of these generalized coordinates and their conjugate momenta. This is done most conveniently by first considering the Lagrangian, L ≡ K − U, where K is the kinetic energy of the system (K = \sum_{i=1}^N \frac{1}{2} m_i \dot{r}_i^2) and U the potential energy. When we transform from Cartesian coordinates r to generalized coordinates q, L changes to

\mathcal{L} = \sum_{i=1}^{N} \frac{1}{2} m_i \frac{\partial \mathbf{r}_i}{\partial q_\alpha}\cdot\frac{\partial \mathbf{r}_i}{\partial q_\beta}\, \dot{q}_\alpha \dot{q}_\beta - \mathcal{U}(q^N) \equiv \frac{1}{2}\, \dot{\mathbf{q}} \cdot \mathbf{G} \cdot \dot{\mathbf{q}} - \mathcal{U}(q^N) .    (3.4.7)


In the second line of Eq. (3.4.7), we have defined the matrix G. The momenta conjugate to q^N are easily derived using

p_\alpha \equiv \frac{\partial \mathcal{L}}{\partial \dot{q}_\alpha} .

This yields $p_\alpha = G_{\alpha\beta}\dot{q}_\beta$. We can now write down the Hamiltonian H in terms of the generalized coordinates and conjugate momenta:

\mathcal{H}(p, q) = \frac{1}{2}\, \mathbf{p} \cdot \mathbf{G}^{-1} \cdot \mathbf{p} + \mathcal{U}(q^N) .    (3.4.8)

If we now insert this form of the Hamiltonian into Eq. (3.2.2), and carry out the (Gaussian) integration over the momenta, we find that

\langle A \rangle = \frac{\int dq^N \exp[-\beta\, \mathcal{U}(q^N)]\, A(q^N) \int dp^N \exp(-\beta\, \mathbf{p}\cdot\mathbf{G}^{-1}\cdot\mathbf{p}/2)}{\int dq^N dp^N \exp(-\beta \mathcal{H})}
= \frac{\int dq^N \exp[-\beta\, \mathcal{U}(q^N)]\, A(q^N)\, |\mathbf{G}|^{\frac{1}{2}}}{\int dq^N dp^N \exp(-\beta \mathcal{H})} .    (3.4.9)

The problem with Eq. (3.4.9) is the term $|\mathbf{G}|^{1/2}$. Although the determinant |G| can be computed fairly easily for small flexible molecules, its evaluation can become an unpleasant task in the case of larger molecules. Note that the factor $|\mathbf{G}|^{1/2}$ could be written as $\exp(\ln |\mathbf{G}|^{1/2})$: it, therefore, appears as a correction to the Boltzmann factor. Next, let us consider the case that a subset {σ} of the generalized coordinates is constrained to have a fixed value, which necessarily also implies that σ̇ = 0. These hard constraints must be imposed at the level of the Lagrangian and lead to a reduction in the number of degrees of freedom of the system. They also lead to a different form for the Hamiltonian in Eq. (3.4.8) and to another determinant in Eq. (3.4.9). Again, all this can be taken into account in the Monte Carlo sampling (see [97]). An example of such a Monte Carlo scheme is the concerted rotation algorithm that has been developed by Theodorou and coworkers [99] to simulate polymer melts and glasses (see SI L.8.2). The idea of this algorithm is to select a set of adjacent skeletal bonds in a chain (up to seven bonds). These bonds are given a collective rotation while the rest of the chain is unaffected. By comparison, Molecular Dynamics simulations of flexible molecules with hard constraints have the advantage that these constraints enter directly into the equations of motion (see [96]). In Chapter 12, we shall discuss other Monte Carlo sampling schemes that are particularly suited for flexible molecules. Still, these schemes do not eliminate the problem associated with the introduction of hard constraints.
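The role of the Jacobian factor can be made concrete with the {x, y, z} → {r, θ, φ} example mentioned above. In the toy Metropolis program below (our own construction, not one of the book's algorithms; the potential energy is set to zero, so the exact distribution inside a unit sphere is uniform and ⟨r⟩ = 3/4), the weight r² sin θ plays the role of |G|^{1/2} in Eq. (3.4.9): with it, the correct average is recovered; without it, the sampling is biased.

    import math, random

    def sample_r_mean(nsteps=200000, include_jacobian=True):
        """Metropolis sampling of (r, theta) inside a unit sphere with U = 0."""
        r, theta = 0.5, 0.5 * math.pi
        acc_r = 0.0
        for _ in range(nsteps):
            rn = r + 0.1 * (random.random() - 0.5)
            tn = theta + 0.1 * (random.random() - 0.5)
            if 0.0 < rn < 1.0 and 0.0 < tn < math.pi:
                w_new = rn * rn * math.sin(tn)        # Jacobian weight of the trial point
                w_old = r * r * math.sin(theta)
                ratio = w_new / w_old if include_jacobian else 1.0
                if random.random() < min(1.0, ratio):
                    r, theta = rn, tn
            acc_r += r                                # old configuration recounted if rejected
        return acc_r / nsteps

    print("with Jacobian   :", sample_r_mean(include_jacobian=True))    # ~ 0.75
    print("without Jacobian:", sample_r_mean(include_jacobian=False))   # ~ 0.5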


Example 4 (Mixture of hard disks). One of the important disadvantages of the Monte Carlo scheme is that it does not reproduce the natural dynamics of the particles in the system. However, sometimes this limitation of the method can be made to work to our advantage. Below we show how the equilibration of a Monte Carlo simulation can be sped up by many orders of magnitude through the use of unphysical trial moves.

FIGURE 3.8 A mixture of hard disks, where the identities of two particles are swapped.

In a Molecular Dynamics simulation of, for instance, a binary (A-B) mixture of hard disks (see Fig. 3.8), the efficiency with which configuration space is sampled is greatly reduced by the fact that concentration fluctuations decay very slowly: typically the relaxation time τ ∼ λ²/D_AB, where D_AB is the mutual diffusion coefficient, and λ is the wavelength of the concentration fluctuation. As a consequence, very long runs would be needed to ensure equilibration of the local composition of the mixture. In solids, equilibration may not take place at all on simulation, or even experimental, time scales. In contrast, in a Monte Carlo simulation, it is permissible to carry out trial moves that swap the identities of two particles of species A and B. Such moves, even if they have only a moderate rate of acceptance (a few percent will do), greatly speed up the sampling of concentration fluctuations in crystalline solids [100,101] and polydisperse glasses [102].
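A sketch of such an identity-swap trial move for hard disks is given below (Python/NumPy; the array layout and function name are our own assumptions). For hard particles, the swap is accepted if, and only if, exchanging the two diameters creates no overlap; for continuous potentials, the overlap test would be replaced by the usual Metropolis acceptance on the energy change.

    import numpy as np

    rng = np.random.default_rng()

    def swap_move(pos, sigma, box):
        """Attempt to swap the diameters of two randomly chosen hard disks.

        pos   : (N, 2) positions, sigma : (N,) diameters, box : edge of the periodic box.
        Returns True if the swap was accepted (sigma is then modified in place).
        """
        i, j = rng.choice(len(sigma), size=2, replace=False)
        if sigma[i] == sigma[j]:
            return True                          # swapping identical particles is trivial
        new_sigma = sigma.copy()
        new_sigma[i], new_sigma[j] = sigma[j], sigma[i]
        for k in (i, j):                         # only particles i and j change size
            dr = pos - pos[k]
            dr -= box * np.rint(dr / box)        # nearest-image convention
            d = np.sqrt((dr ** 2).sum(axis=1))
            d[k] = np.inf                        # ignore the self-distance
            if np.any(d < 0.5 * (new_sigma + new_sigma[k])):
                return False                     # overlap: reject, keep old identities
        sigma[:] = new_sigma                     # accept: exchange the identities
        return True

    # tiny demonstration on a non-overlapping square arrangement of 25 disks
    xy = np.array([(i, j) for i in range(5) for j in range(5)], dtype=float) * 2.0
    diam = np.where(rng.random(25) < 0.5, 0.9, 1.1)
    print(swap_move(xy, diam, box=10.0))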

3.5 Questions and exercises

Question 10 (Reduced units). Typical parameters for the Lennard-Jones potentials of argon and krypton, truncated and shifted at r_c = 2.5σ, are σ_Ar = 3.405 Å, ε_Ar/k_B = 119.8 K and σ_Kr = 3.63 Å, ε_Kr/k_B = 163.1 K [103].
1. At the reduced temperature T* = 2.0, what is the temperature in kelvin of argon and krypton?
2. A typical time step for MD is Δt* = 0.001. What is this in SI units for argon and krypton?


3. If we simulate argon at T = 278 K and density ρ = 2000 kg m⁻³ with a Lennard-Jones potential, for which conditions of krypton can we use the same data? If we assume ideal gas behavior, compute the pressure in reduced and normal units.
4. List the main reasons to use reduced units.

The WF potential [81] is a simple, Lennard-Jones-like potential that goes quadratically to zero at the cut-off radius r_c. Its functional form is particularly simple for a cut-off distance r_c = 2σ (see Eq. (3.3.10)):

\phi(r) \equiv \epsilon\left[\left(\frac{\sigma}{r}\right)^2 - 1\right]\left[\left(\frac{r_c}{r}\right)^2 - 1\right]^2 \quad\text{for } r \le r_c
         = 0 \quad\text{for } r > r_c .

In reduced units, the critical temperature k_B T_c/ε ≈ 1.04 and the reduced critical density ρ_c σ³ ≈ 0.32.
1. Given that the critical temperature of Argon is T_c = 83.1 K, give ε^(WF) in SI units.
2. Given that the critical density of Argon is 536 kg m⁻³, estimate σ^(WF) for Argon (in SI units).

Question 11 (An alien potential). On the planet Krypton, the pair potential between two Gaia atoms is given by the Lennard-Jones 10-5 potential

U(r) = 5\epsilon\left[\left(\frac{\sigma}{r}\right)^{10} - \left(\frac{\sigma}{r}\right)^{5}\right] .

Kryptonians are notoriously lazy and it is therefore up to you to derive the tail corrections for the energy, pressure, and chemical potential. If we use this potential in an MD simulation in the truncated and shifted form we still have a discontinuity in the force. Why? If you compare this potential with the Lennard-Jones potential, will there be any difference in efficiency of the simulation? (Hint: there are two effects!)

Exercise 6 (Calculation of π). Consider a circle of diameter d surrounded by a square of length l (l ≥ d). Random coordinates are generated uniformly within the square. The value of π can be calculated from the fraction of points that fall within the circle.
1. How can π be calculated from the fraction of points that fall in the circle?
2. Complete the small Monte Carlo program to calculate π using this method. Compare your result with the "exact" value of π: in some languages, π is pre-computed, in others it can easily be computed e.g., by using π = 4.0 × arctan(1.0).
3. How does the accuracy of the result depend on the ratio l/d and the number of generated coordinates? Derive a formula to calculate the relative standard deviation of the estimate of π.


4. Why is this not an efficient method for computing π accurately?

Exercise 7 (The photon gas). The average occupancy number of state j of the photon gas, ⟨n_j⟩, can be calculated analytically; see Eq. (2.6.5). It is possible to estimate this quantity using a Monte Carlo scheme. In this exercise, we will use the following procedure to calculate ⟨n_j⟩:
(i) Start with an arbitrary n_j.
(ii) Decide at random to perform a trial move to increase or decrease n_j by 1.
(iii) Accept the trial move with probability

acc(o \to n) = \min\left(1, \exp\left[-\beta\left(U(n) - U(o)\right)\right]\right) .

Of course, n_j cannot become negative!
1. Does this scheme obey detailed balance when n_j = 0?
2. Is the algorithm still correct when trial moves are performed that change n_j with a random integer from the interval [−5, 5]? What happens when only trial moves are performed that change n_j with either −3 or +3?
3. Assume that N = 1 and ε_j = ε. Write a small Monte Carlo program to calculate ⟨n_j⟩ as a function of β. Compare your result with the analytical solution.
4. Modify the program in such a way that the averages are not updated when a trial move is rejected. Why does this lead to erroneous results? At which values of β does this error become more pronounced?
5. Modify the program in such a way that the distribution of n_j is calculated as well. Compare this distribution with the analytical expression.

Exercise 8 (Monte Carlo simulation of a Lennard-Jones system). In this exercise, we study a 3d Lennard-Jones system (see also online Case Study 1). Chapter 5 provides the details on how the different observables are computed in a simulation.
1. In the code that you can find on the book's website, the pressure of the system is not calculated. Modify the code in such a way that the average pressure can be calculated. You will only have to make some changes in the function that computes the energy.
2. Perform a simulation at T = 2.0 and at various densities. Up to what density is the ideal gas law

\beta p = \rho    (3.5.1)

a good approximation?
3. The program produces a sequence of snapshots of the state of the system. Try to visualize these snapshots using, for example, the program Visual Molecular Dynamics (VMD) [104].
4. For the heat capacity at constant volume, one can derive

C_v = \frac{\left\langle U^2 \right\rangle - \left\langle U \right\rangle^2}{k_B T^2} ,

in which U is the total energy of the system. Derive a formula for the dimensionless heat capacity. Modify the program that can be found on the book's website in such a way that C_v is calculated.


5. Instead of performing trial moves in which one particle at a time is displaced, one can make trial moves in which all particles are displaced. Compare the maximum displacements of these moves when 50% of all displacements are accepted.
6. Instead of using a uniformly distributed displacement, one can also use a Gaussian displacement. Does this increase the efficiency of the simulation?

Exercise 9 (Scaling as a Monte Carlo move). Consider a system in which the energy is a function of one variable (x) only,

\exp[-\beta U(x)] = \theta(x)\,\theta(1 - x) ,

in which θ(x) is the Heaviside step function: θ(x < 0) = 0 and θ(x > 0) = 1. We wish to calculate the distribution of x in the canonical ensemble. We will consider two possible algorithms (we will use δ > 0):
(i) Generate a random change in x between [−δ, δ]. Accept or reject the new x according to its energy.
(ii) Generate a random number φ between [1, 1 + δ]. With a probability of 0.5, invert the value of φ thus obtained. The new value of x is obtained by multiplying x with φ.
1. Derive the correct acceptance/rejection rules for both schemes.
2. Complete the computer code to calculate the probability density of x.
3. What happens when the acceptance rule of method (i) is used in the algorithm of method (ii)?


Chapter 4

Molecular Dynamics simulations

Molecular Dynamics (MD) simulation is a technique for computing the equilibrium and transport properties of a classical many-body system. In this context, the word classical means that the nuclear motion of the constituent particles obeys the laws of classical mechanics, which is an excellent approximation for the translational and rotational motion of a wide range of molecules. However, quantum effects cannot be ignored when considering the translational or rotational motion of light atoms or molecules (He, H2, D2) or vibrational motions with a frequency ν such that hν > kB T. Molecular Dynamics simulations were pioneered by Alder and Wainwright in the mid-1950s [18], and have since become a standard tool in many areas of the physical, biological, and engineering sciences. Literally, dozens of books have been written about the method. In addition, many books describe applications of MD simulations to specific subject areas, or the use of MD simulations in the context of specific program packages. In what follows, we do not aim to cover all aspects of MD simulations. Rather, as in the rest of this book, we focus on the basics of the method: what it is, what it can do, and, importantly, what it cannot do —and, at the simplest level, to give the reader an idea of how to do it. We urge the readers who are interested in other aspects of MD to consult some of the many excellent books on the topic [21–24,26,105]. In addition to books, there are many conference proceedings that provide snapshots of the development of the field [44–46,49].

4.1 Molecular Dynamics: the idea

Molecular Dynamics simulations are in many respects similar to real experiments. When performing a real experiment, we proceed as follows: We prepare a sample of the material that we wish to study. We connect this sample to a measuring instrument (e.g., a thermometer, pressure gauge, or viscometer), and we record the property of interest over a certain time interval. If our measurements are subject to statistical noise (as most measurements are), then the longer we average, the more accurate our measurement becomes. In a Molecular Dynamics simulation, we follow the same approach. First, we prepare a sample: we select a model system consisting of N particles and we solve Newton's equations of motion for this system until the properties of the system no longer change with time (i.e., we equilibrate the system). After equilibration, we perform the actual measurement.


In fact, some of the most common mistakes that can be made when performing a computer experiment are similar to the ones that can be made in real experiments: e.g., the sample has not been prepared correctly, the duration of the measurement has been too short, the system undergoes an irreversible change during the simulation, or we are not measuring what we think we are.

To measure an observable quantity in a Molecular Dynamics simulation, we must first of all be able to express this observable as a function of the positions and momenta of the particles in the system. For instance, a convenient definition of the temperature in a (classical) many-body system makes use of the equipartition of energy over all degrees of freedom that enter quadratically in the Hamiltonian of the system. In the thermodynamic limit, we have

\left\langle \frac{1}{2} m v_\alpha^2 \right\rangle = \frac{1}{2} k_B T .    (4.1.1)

However, this relation is not quite correct for a finite system. In particular, for a hypothetical atomic system with fixed total kinetic energy, we can define the kinetic temperature using the microcanonical relation

\frac{1}{k_B T} = \frac{\partial \ln \Omega(E_{\rm kin})}{\partial E_{\rm kin}} .    (4.1.2)

We then find that, for a d-dimensional system of N atoms with fixed total momentum, the instantaneous temperature k_B T is equal to 2E_kin/(d(N − 1) − 1). Hence the number of degrees of freedom that enter into the equipartition relation is N_f = dN − (d + 1). In an interacting system, the total kinetic energy fluctuates, and so does the instantaneous temperature:

T(t) = \sum_{i=1}^{N} \frac{m_i v_i^2(t)}{k_B N_f} .    (4.1.3)

As the variance in the total kinetic energy of an interacting system scales as N ≈ N_f [106] (see section 5.1.8), the relative fluctuations in the temperature will be of order $1/\sqrt{N_f}$. As N_f for many simulations is in the range between 10³ and 10⁶, the statistical fluctuations in the temperature are typically of the order of O(1%)−O(0.1%). To get more accurate estimates of the temperature, one should average over many fluctuations. Note that for systems that contain very few particles, the correct counting of the number of degrees of freedom becomes important [107].
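Eq. (4.1.3) is straightforward to evaluate in practice. The helper below (Python/NumPy, our own illustration, assuming equal masses and reduced units with k_B = m = 1) uses N_f = dN − (d + 1) as discussed above.

    import numpy as np

    def instantaneous_temperature(vel, mass=1.0, kB=1.0):
        """Instantaneous kinetic temperature, Eq. (4.1.3), for velocities of shape (N, d)."""
        N, d = vel.shape
        nf = d * N - (d + 1)                     # momentum and energy conservation removed
        return mass * np.sum(vel ** 2) / (kB * nf)

    rng = np.random.default_rng(0)
    v = rng.normal(scale=np.sqrt(2.0), size=(1000, 3))   # target T* = 2 in reduced units
    v -= v.mean(axis=0)                                   # remove center-of-mass motion
    print(instantaneous_temperature(v))                   # close to 2, fluctuating by ~O(1%)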

4.2 Molecular Dynamics: a program

The best introduction to Molecular Dynamics simulations is to consider a simple program. The program that we consider is kept as simple as possible to illustrate a number of important features of Molecular Dynamics simulations. The program is constructed as follows:


1. We specify the parameters that set the conditions of the run (e.g., initial temperature, number of particles, density, time step, etc.).
2. We initialize the system (i.e., we select initial positions and velocities).
3. We compute the forces on all particles.
4. We integrate Newton's equations of motion. This step and the previous one make up the core of an MD simulation. These two steps are repeated until we have computed the time evolution of the system for the desired length of time.
5. After completion of the central loop, we compute and store the averages of measured quantities, and stop.

Algorithm 3 is a pseudo-algorithm for a Molecular Dynamics simulation of an atomic system. Below, we discuss the different steps in the program in more detail.

4.2.1 Initialization

To start the simulation, we should assign initial positions and velocities to all particles in the system. In particular for simulations of crystalline systems, the particle positions should be chosen compatible with the crystal structure that we are aiming to simulate. This requirement puts constraints on the number of particles in the periodic box: e.g., for a cubic box, the number of particles in an FCC crystal should be N = 4n³, where n is an integer, and for BCC, N = 2n³. In any event, the particles should not be positioned at positions that result in an appreciable overlap of the atomic or molecular cores. In Algorithm 4 we do not specify the initial crystal structure explicitly. For the sake of the argument, let us assume that it is a simple cubic lattice, which is mechanically unstable. To simulate a liquid, we typically choose the density and the initial temperature such that the simple cubic lattice melts rapidly. First, we put each particle on its lattice site and, in Algorithm 4, we attribute to each velocity component α = {x, y, z} of every particle a value that is drawn from a Gaussian distribution, using the Box-Muller method [66]: $v_{i,\alpha} \sim \sqrt{-\ln(R_1)}\cos(2\pi R_2)$, where R_1, R_2 are random numbers uniformly distributed between 0 and 1. Subsequently, we shift all velocities, such that the total momentum is zero and we scale the resulting velocities to adjust the mean kinetic energy to the desired value. We know that, in thermal equilibrium, the following relation should hold for N ≫ 1:

\left\langle v_\alpha^2 \right\rangle = k_B T / m ,    (4.2.1)

where v_α is the α component of the velocity of a given particle. We can use this relation to define an instantaneous temperature at time t, T(t):

k_B T(t) \equiv \sum_{i=1}^{N} \frac{m\, v_{\alpha,i}^2(t)}{N_f} .    (4.2.2)


Algorithm 3 (Core of Molecular Dynamics program)

  program MD                              basic MD code
    [...]
    setlat                                function to initialize positions x
    initv(temp)                           function to initialize velocities vx
    t=0
    while (t < tmax) do                   main MD loop
      FandE                               function to compute forces and total energy
      Integrate-V                         function to integrate equations of motion
      t=t+delt                            update time
      sample                              function to sample averages
    enddo
  end program

Specific Comments (for general comments, see p. 7)
1. The [...] at the beginning refers to the initialization of variables and parameters used in the program. We assume that the maximum run time tmax and the time step delt are global variables. The initial temperature temp is explicitly needed in the function initv, and is therefore shown as an argument.
2. The function setlat creates a crystal lattice of npart particles in a given volume (see Algorithm 20). The number of particles npart is usually chosen compatible with the number of unit cells (nx,ny,nz), and with nc, the number of particles per unit cell: npart=nx*ny*nz*nc.
3. To simulate disordered systems, many simulation packages generate an initial structure that is already disordered. Such a function may speed up equilibration, but requires additional steps.
4. The function initv (see Algorithm 4) initializes the velocities vx such that the initial temperature is temp. From these velocities, the positions one time step earlier, xm, are constructed.
5. The main loop consists of three steps:
   a. FandE (see Algorithm 5) computes the current energy and forces.
   b. Sample samples the desired observables at time t - not necessarily every time step. See Algorithms 8 and 9 for some examples of sampling algorithms.
   c. Integrate Newton's equation of motion, using the Verlet algorithm (Integrate-V; see Algorithm 6), and update the time t.

Clearly, we can adjust the instantaneous temperature T(t) to match the desired temperature T by scaling all velocities with a factor (T/T(t))^{1/2}. This initial setting of the temperature is not particularly critical, as the temperature will change anyway during equilibration.


Algorithm 4 (Initialization of a Molecular Dynamics program)

  function initv(temp)                    initializes velocities for MD program
    sumv=0
    sumv2=0
    for 1 ≤ i ≤ npart do
      x(i) = lattice_pos(i)               place the particle on a lattice
      vx(i) = √(−ln(R)) cos(2πR)          generate 1D normal distribution
      sumv=sumv+vx(i)                     center of mass momentum (m = 1)
    enddo
    sumv=sumv/npart                       center of mass velocity
    for 1 ≤ i ≤ npart do                  set desired kinetic energy and
      vx(i)=vx(i)-sumv                    set center of mass velocity to zero
      sumv2=sumv2+vx(i)**2                kinetic energy
    enddo
    fs=√(temp/(sumv2/nf))                 temp = desired initial temperature
    for 1 ≤ i ≤ npart do
      vx(i)=vx(i)*fs                      set initial kinetic temperature
      xm(i)=x(i)-vx(i)*dt                 position previous time step
    enddo
  end function

Specific Comments (for general comments, see p. 7)
1. Every call of the random-number routine yields a different random number R, uniformly distributed between 0 and 1.
2. Strictly speaking, we need not generate a Maxwell-Boltzmann distribution for the initial velocities: upon equilibration, the distribution will become a Maxwellian.
3. nf is the number of degrees of freedom. In d dimensions, energy and momentum conservation imply that nf=d*(npart-1)-1.
4. During equilibration, the kinetic energy will change. Therefore, the temperature of the equilibrated system will differ from temp.

As will appear later, we do not really use the velocities themselves in our algorithm to solve Newton's equations of motion. Rather, we use the positions of all particles at the present (x) and previous (xm) time steps, combined with our knowledge of the force (f) acting on the particles, to predict the positions at the next time step. When we start the simulation, we must bootstrap this procedure by generating approximate previous positions. Without much consideration for any law of mechanics other than the conservation of linear momentum, we approximate the previous position of a particle in a given direction by xm(i) = x(i) - vx(i)*dt. Of course, we could make a better estimate of the true previous position of each particle. But as we are only bootstrapping the simulation, we do not worry about such subtleties.
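For readers who prefer a runnable version, the following NumPy transcription follows the same steps as Algorithm 4 (lattice positions are supplied by the caller, all masses are one, and the variable names mirror the pseudocode); it is our own sketch, not code distributed with the book.

    import numpy as np

    rng = np.random.default_rng()

    def initv(x, temp, dt, nf=None):
        """Initialize velocities for an MD run (cf. Algorithm 4).

        x: (N, d) lattice positions; temp: desired initial temperature (reduced units);
        dt: time step. Returns (v, xm), with xm the positions one time step earlier.
        """
        N, d = x.shape
        if nf is None:
            nf = d * (N - 1) - 1                           # degrees of freedom
        R1, R2 = rng.random((N, d)), rng.random((N, d))
        v = np.sqrt(-np.log(R1)) * np.cos(2.0 * np.pi * R2)  # Gaussian-like; width is rescaled below
        v -= v.mean(axis=0)                                # zero total momentum
        fs = np.sqrt(temp / (np.sum(v ** 2) / nf))         # scale to the desired temperature
        v *= fs
        xm = x - v * dt                                    # bootstrap the previous positions
        return v, xm

    # example: 4x4x4 simple-cubic lattice at number density 0.8 and T = 2.0
    n = 4
    a = (1.0 / 0.8) ** (1.0 / 3.0)
    x = a * np.array([(i, j, k) for i in range(n) for j in range(n) for k in range(n)], float)
    v, xm = initv(x, temp=2.0, dt=0.001)
    print(v.mean(axis=0), (v ** 2).sum() / (3 * (len(x) - 1) - 1))   # ~0 and exactly 2.0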


4.2.2 The force calculation

What comes next is the most time-consuming part of most Molecular Dynamics simulations: the calculation of the forces acting on the particles. If we consider a model system with pairwise-additive interactions, as we do in the present case, we have to consider the contribution to the force on particle i due to all particles within its range of interaction.1 If we only consider the interaction between a particle and the nearest image of another particle, this implies that, for a system of N particles, we must evaluate N × (N − 1)/2 pair distances. If we would use no time-saving devices, the time needed for the evaluation of the forces would scale as N². There exist efficient techniques to speed up the evaluation of both short-range and long-range forces in such a way that the computing time scales as N (for long-ranged forces N ln N), rather than N². In Appendix I, we describe some of the more common techniques to speed up the calculation of forces between particles. Although the examples in that Appendix apply to Monte Carlo simulations, similar techniques can be used in a Molecular Dynamics simulation. However, in the present, simple example (see Algorithm 5) we will not attempt to make the program particularly efficient and we shall, in fact, consider all possible pairs of particles explicitly.

We first compute the current (vectorial) distances in the x, y, and z directions between each pair of particles i and j. These distances are denoted by xr. As in the Monte Carlo case, we use periodic boundary conditions (see section 3.3.2.1). In the present example, we use a cutoff at a distance r_c in the explicit calculation of intermolecular interactions, where r_c should be chosen to be less than half the diameter of the periodic box to ensure that a given particle i interacts only with the nearest periodic image of any other particle j. In the present case, the diameter of the periodic box is denoted by box. If we use simple cubic periodic boundary conditions, the distance in any direction between i and the nearest image of j should always be less (in absolute value) than box/2. A compact way to compute the distance between i and the nearest periodic image of j uses the nearest integer function. We now first evaluate xr, the difference between the current x-coordinates of i and any j. Note that these coordinates need not be inside the same periodic box. To obtain the x-distance between the nearest images of i and j, we transform xr to xr=xr-box*nint(xr/box). Once we have computed all Cartesian components of r_ij, the vector distance between i and the nearest image of j, we compute r_ij² (denoted by r2 in the program). Next we test if r_ij² is less than r_c², the square of the cutoff radius. If r_ij² > r_c², j does not interact with i and we skip to the next value of j. Note that we do not compute |r_ij| itself, because that would involve the evaluation of a square root, which (at least at this stage) would be unnecessary.

1 Increasingly, MD simulations use non-pairwise additive interactions. Even if the functional form of the potential energy is known in principle, computing the forces may become painful. In such cases, Automatic Differentiation (see e.g., [108]) may be an attractive option.
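The nearest-image prescription described above is compactly expressed with the nearest-integer function. The following NumPy fragment (our own, not the book's program) computes all minimum-image pair vectors for a cubic box and flags the pairs within the cutoff, without taking any square roots; like the simple program discussed here, it loops over all O(N²) pairs.

    import numpy as np

    def pairs_within_cutoff(x, box, rc):
        """Minimum-image pair separations for positions x of shape (N, d) in a cubic box."""
        xr = x[:, None, :] - x[None, :, :]        # all pair vectors x_i - x_j
        xr -= box * np.rint(xr / box)             # nearest-image convention
        r2 = np.sum(xr ** 2, axis=-1)             # squared distances; no sqrt needed
        i, j = np.triu_indices(len(x), k=1)       # each pair counted once
        mask = r2[i, j] < rc * rc                 # test against the cutoff
        return i[mask], j[mask], xr[i[mask], j[mask]], r2[i, j][mask]

    rng = np.random.default_rng(1)
    box, rc = 10.0, 2.5
    x = rng.uniform(0.0, box, size=(200, 3))
    i, j, rij, r2 = pairs_within_cutoff(x, box, rc)
    print(len(i), "interacting pairs; max distance:", np.sqrt(r2.max()))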


Algorithm 5 (Calculation of pair forces and energy)

  function FandE                          determine forces and energy
    rc2=rc**2                             rc=2 is the default cut-off
    en=0                                  set energy to zero
    for 1 ≤ i ≤ npart do
      fx(i)=0                             set forces to zero
    enddo
    for 1 ≤ i ≤ npart-1 do                loop over all pairs
      for i+1 ≤ j ≤ npart do
        xr=x(i)-x(j)
        xr=xr-box*round(xr/box)           nearest image distance
        r2=xr**2
        if r2 < rc2 then                  test cutoff

5.1.5.1 Pressure by thermodynamic integration There are cases where Eq. (5.1.14) cannot be used, for instance, for lattice models, where the volume is not a continuous variable. In such cases, we can compute the pressure of a fluid using the thermodynamic relation d(P V )V T = N dμ .

(5.1.22)

Methods to perform simulations under conditions where μ, V , and T are the control variables, are discussed in section 6.5.

5.1.5.2 Local pressure and method of planes Eq. (5.1.14) yields a global expression for the pressure. Even though Eq. (5.1.21) suggests that, for pairwise additive interactions, the pressure could be decomposed into contributions due to individual particles, it would be wrong to interpret these contributions as local pressures. The mechanical definition of the pressure does have a local meaning as the force acting per unit area on a plane (say, at position x) in the system. We can have different choices for x, and hence these could yield different pressures. However, for a system in mechanical equilibrium, the average pressure should not depend on x, otherwise, there would be a net force acting on a volume element bounded by planes at x + x and at x. 2 However, the strong fluctuations of the intra-molecular forces have little effect on the accuracy of

Green-Kubo integrals (see sections 2.5.2 and 5.3.2).


If we would take the local virial pressure, for instance, near a hard wall at x = 0, we would find that this measure of the pressure is not constant: hence its gradient is not related to a mechanical force. But we can compute the mechanical pressure directly. Let us consider a fictitious plane at x. We can then compute the force on that plane as the average momentum transfer through that plane due to all particles on the left (say) of that plane. This force has two contributions: 1) momentum transfer due to particles that carry their own momentum, exerting a net force ρ(x)kB T and 2) the force due to the fact that the particles on the left of the dividing plane interact with particles on the right-hand side (note: the choice of “left” or “right” is immaterial). We can compute this force for any plane (and for any potential, even a many-body potential). However, for pairwise additive potentials, the expression simplifies because we can write the force acting through a plane as the sum of all pair forces fx (rij ) for which xi < x and xj > x. This method of computing the pressure is commonly referred to as the “method of planes” [132]. By construction, the mechanical force thus obtained is independent of x for a system in mechanical equilibrium.
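For a pairwise-additive system, the method of planes amounts to the following recipe: the normal pressure at a plane x is the kinetic term ρ(x) k_B T plus the sum, per unit area, of the x-components of the pair forces that act across the plane. The sketch below (plain Python; the input layout, the function name, and the sign convention of the supplied pair forces are our own assumptions) illustrates this bookkeeping on a trivial example.

    def pressure_method_of_planes(x_plane, xi, fij_x, pairs, rho_x, area, kBT):
        """Normal pressure at a plane x = x_plane for a pairwise-additive system.

        xi     : list of particle x-coordinates
        pairs  : list of index pairs (i, j), each pair counted once
        fij_x  : dict mapping (i, j) to the x-component of the force on i due to j
        rho_x  : local number density at the plane; area : cross-sectional area
        """
        across = 0.0
        for (i, j) in pairs:
            if (xi[i] - x_plane) * (xi[j] - x_plane) < 0.0:   # plane separates i and j
                # add the force exerted by the left-hand particle on the right-hand one
                sign = 1.0 if xi[i] > x_plane else -1.0
                across += sign * fij_x[(i, j)]
        return rho_x * kBT + across / area

    # toy usage: three mutually repelling particles on a line, plane at x = 0.5
    xi = [0.0, 1.0, 2.0]
    pairs = [(0, 1), (0, 2), (1, 2)]
    fij_x = {(0, 1): -1.0, (0, 2): -0.2, (1, 2): -1.0}   # repulsion: force on i points away from j
    print(pressure_method_of_planes(0.5, xi, fij_x, pairs, rho_x=1.0, area=1.0, kBT=1.0))  # 2.2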

5.1.5.3 Virtual volume changes

For non-pairwise-additive interactions we cannot use the standard virial route to compute the pressure. For such systems —but also for systems of non-spherical hard-core particles, for which the virial approach becomes rather cumbersome —it may be attractive to compute the pressure by using a finite-difference version of Eq. (2.1.35):

P \approx -\left(\Delta F / \Delta V\right)_{N,T} .    (5.1.23)

To this end, we must compute the free-energy difference between a system contained in volume V and the same system contained in a volume V' = V + ΔV, where ΔV must be chosen sufficiently small that ΔF is linear in ΔV. As ΔV is small, we can use a perturbation expression for the free energy (see Eq. (8.6.10)) to compute ΔF:

-\frac{\Delta F}{\Delta V} = \frac{k_B T}{\Delta V}\ln\frac{Q(N, V', T)}{Q(N, V, T)}
 = \frac{k_B T}{\Delta V}\ln\frac{V'^N \int ds^N \exp\left[-\beta U\!\left(s^N; V'\right)\right]}{V^N \int ds^N \exp\left[-\beta U\!\left(s^N; V\right)\right]} ,    (5.1.24)

or:

P = P_{\rm id} - \lim_{\Delta V \to 0}\frac{k_B T}{\Delta V}\ln\left\langle \exp\left[-\beta\, \Delta U\!\left(s^N\right)\right]\right\rangle ,    (5.1.25)

where $\Delta U \equiv U(s^N; V) - U(s^N; V')$ and P_id is the ideal gas pressure. For systems with continuous potential-energy functions, ΔV can be chosen both positive and negative.


For hard-core systems, the situation may be a bit more tricky because (for spherical particles) ΔU is always zero upon expansion. In such cases, one should use ΔV < 0. However, for sufficiently non-spherical particles even volume expansion may occasionally lead to overlaps. In that case, the results for simulations with positive and negative ΔV should be combined, as will be explained in section 8.6.3. In practice, the virtual-volume-move approach can be made much more efficient by decomposing the free-energy changes into contributions due to individual particles. Such an approach becomes rigorous in the limit ΔV → 0; see [133].
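Eq. (5.1.25) translates into a simple post-processing recipe: scale all coordinates to the slightly different volume, recompute the potential energy, and average the Boltzmann factor of the energy change. The sketch below (our own; it assumes a user-supplied function total_energy(x, box) and works in reduced units with k_B = 1) estimates the pressure from a set of stored configurations, and reduces to the ideal-gas result when the potential energy is zero.

    import numpy as np

    def virtual_volume_pressure(configs, box, total_energy, dV, beta):
        """Pressure estimate from virtual volume changes, cf. Eq. (5.1.25).

        configs      : list of (N, 3) coordinate arrays sampled at volume V = box**3
        total_energy : callable total_energy(x, box) returning U for a cubic box
        dV           : small (positive or negative) virtual volume change
        """
        V = box ** 3
        scale = ((V + dV) / V) ** (1.0 / 3.0)          # isotropic coordinate scaling
        N = len(configs[0])
        boltz = []
        for x in configs:
            dU = total_energy(x, box) - total_energy(x * scale, box * scale)  # U(V) - U(V')
            boltz.append(np.exp(-beta * dU))
        p_id = N / (beta * V)                           # ideal-gas contribution
        return p_id - np.log(np.mean(boltz)) / (beta * dV)

    # trivial check with an ideal gas (U = 0): the estimate reduces to N/(beta*V) = 0.8
    rng = np.random.default_rng(2)
    configs = [rng.uniform(0.0, 5.0, size=(100, 3)) for _ in range(10)]
    print(virtual_volume_pressure(configs, box=5.0, total_energy=lambda x, b: 0.0,
                                  dV=-0.05, beta=1.0))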

5.1.5.4 Compressibility

Once we have computed the pressure of a system as a function of density, we can obtain the isothermal compressibility β_T from

\beta_T \equiv -\frac{1}{V}\left(\frac{\partial V}{\partial P}\right)_{N,T} = \frac{1}{\rho}\left(\frac{\partial \rho}{\partial P}\right)_{N,T} .    (5.1.26)

However, as in the case of the heat capacity, we can use a fluctuation expression to estimate the compressibility from a simulation at constant pressure at a single state point. We use the fact that

\left\langle V \right\rangle_{N,P,T} = -k_B T\, \frac{\partial \ln Q(N, P, T)}{\partial P} .    (5.1.27)

It then follows that

\beta_T = -\frac{1}{\langle V \rangle}\left(\frac{\partial \langle V \rangle}{\partial P}\right)_{N,T} = \frac{\left\langle V^2 \right\rangle - \left\langle V \right\rangle^2}{\langle V \rangle\, k_B T} .    (5.1.28)

Similar expressions exist for the elastic constants of solids (see section F.4).

5.1.6 Surface tension Up to this point, we have been discussing how simulations can be used to estimate the bulk properties of materials, where it is useful to employ periodic boundary conditions as these minimize the finite-size effects associated with the presence of surfaces. However, the properties of surfaces are interesting in their own right. Here we discuss the calculation of one key surface property, namely the surface tension γ , which measures the free-energy cost associated with a change, at constant N , V , and T , of the area of a flat, unstructured surface or interface. We focus initially on unstructured interfaces because, as we shall see later, computing the free energy of structured interfaces (e.g., a crystal-liquid interface) requires different methods. We start with the expression for the variation of the Helmholtz free energy of a one-component system with N , V , T , and with the surface area A: dF = −SdT − P dV + μdN + γ dA ,

(5.1.29)


FIGURE 5.1 The figure with the solid frame shows a (periodically repeated) box with height H and width W , containing two phases separated by flat interfaces (dashed lines). The original box is then deformed such that the volume remains constant. This is achieved by scaling W by a factor λ, and H by a factor λ−1 . The boundary of the deformed box is indicated by dotted lines. Due to this transformation, the surface area separating the two phases changes from 2S to 2λS (there are two surfaces).

and hence,

\gamma \equiv \left(\frac{\partial F}{\partial A}\right)_{N,V,T} .    (5.1.30)

We consider a periodically repeated system containing two phases in parallel slabs (see Fig. 5.1). We assume that the surface is perpendicular to the z-direction, and consider the effect of stretching the surface in the x-direction by a factor λ: the new surface area A' is then related to the original surface area by A' = λA. Note, however, that the system contains two interfaces, hence the total surface area A = 2S, where S is the area per interface. The height of the box in the z-direction is scaled by a factor 1/λ, such that the volume of the box remains unchanged. Due to this transformation, all x-coordinates in the system are scaled by a factor λ, and all z-coordinates are scaled by a factor λ⁻¹. We can then use the statistical mechanical expression for the Helmholtz free energy to arrive at an expression for the surface tension. In analogy with Eq. (5.1.15) we write3

\gamma = \left(\frac{\partial F}{\partial A}\right)_{N,V,T}
 = -k_B T\left(\frac{\partial \ln \int_0^1 ds^N \exp\left[-\beta U\!\left(s^N; \lambda W, H/\lambda\right)\right]}{\partial A}\right)_{N,V,T}
 = \frac{1}{2S}\left\langle\frac{\partial U\!\left(s^N; \lambda W, H/\lambda\right)}{\partial \lambda}\right\rangle_{N,V,T} .    (5.1.31)

3 The coordinate transformation due to scaling will also change the momenta (see Appendix F.4); however, for a volume-conserving transformation, rescaling of the momenta does not change Eq. (5.1.31).


We now focus on continuous pairwise-additive potentials.4 For continuous potentials, we can write

\left(\frac{\partial U\!\left(s^N; \lambda W, H/\lambda\right)}{\partial \lambda}\right)_{\lambda=1}
 = \sum_{i=1}^{N}\left[\frac{\partial U(r^N)}{\partial x_i}\, x_i - \frac{\partial U(r^N)}{\partial z_i}\, z_i\right]
 = -\sum_{i=1}^{N}\left(f_{i;x}\, x_i - f_{i;z}\, z_i\right) ,    (5.1.32)

where f_{i;α} denotes the force on particle i in the α direction. For pairwise additive potentials, we can write $f_{i;\alpha} = \sum_{j \neq i} f_{ij;\alpha}$, where f_{ij;α} is the pair-force between particles i and j in the α-direction. As in Eq. (5.1.21), we now use the fact that i and j are dummy indices that can be permuted:

\left(\frac{\partial U\!\left(s^N; \lambda W, H/\lambda\right)}{\partial \lambda}\right)_{\lambda=1}
 = \frac{1}{2}\sum_{i=1}^{N}\sum_{j \neq i}\left(f_{ij;z}\, z_{ij} - f_{ij;x}\, x_{ij}\right) ,    (5.1.33)

and hence

\gamma = \frac{1}{4S}\left\langle \sum_{i=1}^{N}\sum_{j \neq i}\left(f_{ij;z}\, z_{ij} - f_{ij;x}\, x_{ij}\right)\right\rangle .    (5.1.34)

It would seem that there is something wrong with Eq. (5.1.34) because the number of particles in the numerator scales with the volume V, whereas the denominator scales with the surface area S. In fact, there is no problem, because the environment of a particle (say i) far away from the surface is isotropic, so that

\left\langle \sum_{j \neq i} f_{ij;z}\, z_{ij}\right\rangle = \left\langle \sum_{j \neq i} f_{ij;x}\, x_{ij}\right\rangle .    (5.1.35)

The net result is that pairs ij that are in the bulk of the liquid do not contribute to the surface tension. In a simulation, it is advisable not to include such pairs in the sum in Eq. (5.1.34), because they would contribute to the statistical noise, but not to the average. The derivation above is just one route to computing the surface tension. Other approaches are described in refs. [134,135] —see also SI L.2. However, as was shown by Schofield and Henderson [136], the most commonly used expressions are equivalent.

Surface tension from virtual moves

In complete analogy with the direct approach to measure the pressure by performing virtual volume moves (Section 5.1.5.3), we can also compute the surface tension by considering (say) a vertical slab of liquid in the system sketched in Fig. 5.1.

4 The expression for particles with discontinuous interactions can be recovered from the continuous case by replacing the force acting on a particle with the average momentum transfer per unit of time.


Just as in Eq. (5.1.25), we can compute the free-energy change due to a change in the surface area at constant total volume. The finite-difference form of Eq. (5.1.31) is usually referred to as the "test-area method". This method remains valid when estimating the surface tension of a system with arbitrary non-pairwise-additive interactions [134,137]. For flat fluid-fluid interfaces, the test-area method remains correct for finite virtual-area changes, because the surface tension is independent of the area. In practice, large test-area changes are not advisable if the energy changes in forward and backward test-area moves do not overlap (see section 8.6.1). An illustration of the problems caused by non-overlapping distributions in the test-area method can be found in ref. [138].

Surface free-energy density and surface stress

In the previous section, we considered the surface tension of a flat liquid interface, or for that matter, the surface tension of a liquid at a perfectly flat solid wall. The expression for γ derived above makes use of the fact that we can change the surface area of a liquid by an infinitesimal amount, without changing the bulk properties. Such an approach will not work if any of the two phases is a solid, because as we stretch the surface of a solid, we change its interfacial free energy. For solids, we can still write the contribution of the surface to the free energy as F_s = γA, where γ is now called the surface free-energy density. But now we cannot use Eq. (5.1.31) to compute γ, because

\left(\frac{\partial F_s}{\partial A}\right) = \gamma + A\left(\frac{\partial \gamma}{\partial A}\right) \equiv t_s ,    (5.1.36)

where we have introduced the surface stress t_s.5 For liquids, γ does not depend on A, and hence γ = t_s, but this equality does not hold for solids6: special free-energy techniques (as discussed in section 8.4.2) are needed to compute γ for solid interfaces [141]; however, to compute the free-energy change upon bringing a solid and a liquid into contact, one can use the relatively straightforward thermodynamic-integration technique proposed by Leroy and Müller-Plathe [142].

Free energy of curved surfaces

In general, the surface tension of an interface will depend on its curvature. Curvature effects become important when at least one of the radii of curvature of the surface is not much larger than a typical molecular diameter. In contrast to the case of a flat interface, the value of the surface tension of a curved surface depends on our choice of the location of the surface. These and other features of curved surfaces imply that computing the free energy of curved surfaces is subtle and full of pitfalls. We will not discuss this topic, but refer the reader to ref. [143] for further background information.

5 Of course, deforming the surface of a solid may also change its bulk elastic free energy, but that effect can be computed separately.
6 The distinction has observable consequences. For instance, the Laplace pressure inside a small crystal is not determined by γ but by t_s, which can be negative [139,140].

5.1.7 Structural properties

Thus far we discussed the measurement of thermodynamic observables. However, many experiments provide information about the microscopic structure of a system. Although some experiments, such as confocal microscopy, can provide an instantaneous snapshot of the configuration of a system, most experiments yield information about some averaged descriptors of the local structure in a system. Scattering experiments (X-ray, neutron) yield information about the mean-squared value of the Fourier transform of the scattering density, whereas real-space experiments such as confocal microscopy can be used to obtain information about the averaged local density profile around a selected particle. As we discuss below, the two quantities are related.

5.1.7.1 Structure factor

Static scattering experiments usually probe the angle-dependence of the intensity of radiation scattered by the sample. The scattered intensity is proportional to the mean-squared value of the scattering amplitude A(q), where q denotes the scattering wave-vector; for instance, for monochromatic X-rays with wavelength λ₀: q = (4π/λ₀) sin(θ/2). The instantaneous scattering amplitude depends on the configuration of the system and is typically of the form

A(\mathbf{q}) \sim \sum_{i=1}^{N} b_i(q)\, e^{i\mathbf{q}\cdot\mathbf{r}_i} ,    (5.1.37)

where b_i(q) is the scattering amplitude of particle i. The b_i(q) depends on the internal structure of the particles. We note that if b(q) is a known function of q, simulations can be used to predict the scattering intensity. Often the data of scattering experiments are analyzed to yield information about the so-called structure factor S(q), which is equal to 1/N times the mean-squared fluctuation in the amplitude of ρ(q), the Fourier transform of the single-particle density. ρ(q) is equal to

\rho(\mathbf{q}) = \sum_{i=1}^{N} e^{i\mathbf{q}\cdot\mathbf{r}_i} = \int_V d\mathbf{r}\, \rho(\mathbf{r})\, e^{i\mathbf{q}\cdot\mathbf{r}} ,    (5.1.38)

where the real-space single-particle density ρ(r) is defined by

\rho(\mathbf{r}) \equiv \sum_{i=1}^{N} \delta(\mathbf{r} - \mathbf{r}_i) .    (5.1.39)


With this definition, we can write

S(\mathbf{q}) = \frac{1}{N}\left[\left\langle|\rho(\mathbf{q})|^2\right\rangle - \left|\left\langle\rho(\mathbf{q})\right\rangle\right|^2\right]
 = \frac{1}{N}\int_V d\mathbf{r}\int_V d\mathbf{r}'\left[\left\langle\rho(\mathbf{r})\rho(\mathbf{r}')\right\rangle - \langle\rho\rangle^2\right] e^{i\mathbf{q}\cdot(\mathbf{r}-\mathbf{r}')} .    (5.1.40)

The quantity ⟨ρ(r)ρ(r′)⟩ is the density correlation function. It is often written as

\left\langle\rho(\mathbf{r})\rho(\mathbf{r}')\right\rangle = \left\langle\rho(\mathbf{r})\right\rangle\left\langle\rho(\mathbf{r}')\right\rangle g(\mathbf{r}, \mathbf{r}') .    (5.1.41)

Eq. (5.1.41) defines the pair distribution function g(r, r ). In an isotropic, homo-

geneous liquid ρ(r) is constant and equal to the average density ρ, and g(r, r ) only depends on the scalar distance r ≡ |r − r |. g(r) is called the radial distribution function: it probes how the local density around a particle in a classical fluid is decreased/enhanced due to the intermolecular interactions. g(r) plays a key role in liquid-state theory. In the next section, we discuss how g(r) can be measured in a simulation. As S(q) is related to g(r), g(r) can be obtained from S(q) by inverse Fourier transform. This might seem to be a needlessly complicated route for obtaining g(r). However, a naive calculation of g(r) requires O(N 2 ) operations, whereas the computational effort required to compute S(q) by Fast-Fourier transform, scales as N ln N. It might seem straightforward to obtain S(q) of a liquid from g(r) using

dr [g(r) − 1]eiq·r . (5.1.42) S(q) = ρ V

However, in simulations, this procedure is tricky. The reason is that g(r) is usually computed up to a spherical cut-off distance rmax = L/2, where L is the diameter of the simulation box. But often r 2 (g(r) − 1) has not yet decayed to zero at rmax . In that case, spherical truncation of the integral can give rise to unphysical behavior of the apparent S(q) —for instance, it may show oscillations and even negative values at small q values. For this reason, it is safer to compute S(q) using Eq. (5.1.40). Computationally, this is not a big problem because Fast Fourier Transforms are ... fast [38].
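To make Eq. (5.1.40) concrete, the following sketch (not part of the original text; a minimal NumPy example with hypothetical function and variable names) evaluates S(q) directly from a single configuration for the wavevectors commensurate with a cubic periodic box, q = (2π/L)n. For a homogeneous fluid the term ⟨ρ(q)⟩ vanishes for q ≠ 0 and is therefore omitted.

    import numpy as np

    def structure_factor(positions, box_length, n_max=8):
        """Estimate S(q) = |rho(q)|^2 / N for one configuration,
        using wavevectors q = (2*pi/L)*n commensurate with a cubic box.
        positions: (N, 3) array; box_length: L; n_max: largest integer component of n."""
        N = len(positions)
        grid = np.arange(-n_max, n_max + 1)
        # integer wavevector indices, excluding n = (0, 0, 0)
        n_vectors = np.array([(i, j, k) for i in grid for j in grid for k in grid
                              if (i, j, k) != (0, 0, 0)])
        q_vectors = 2.0 * np.pi / box_length * n_vectors
        # rho(q) = sum_i exp(i q . r_i) for every wavevector
        phases = q_vectors @ positions.T                  # shape (n_q, N)
        rho_q = np.exp(1j * phases).sum(axis=1)
        s_q = (np.abs(rho_q) ** 2) / N
        # average wavevectors of (nearly) equal magnitude together
        q_mag = np.linalg.norm(q_vectors, axis=1)
        q_unique, inverse = np.unique(np.round(q_mag, 6), return_inverse=True)
        s_avg = np.bincount(inverse, weights=s_q) / np.bincount(inverse)
        return q_unique, s_avg

    # usage with a random (ideal-gas-like) configuration, for which S(q) is close to 1
    rng = np.random.default_rng(42)
    pos = rng.random((500, 3)) * 10.0
    q, s = structure_factor(pos, box_length=10.0, n_max=4)

In a production code one would, of course, average s_q over many configurations before binning in |q|.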

5.1.7.2 Radial distribution function

Computing the radial distribution function is probably one of the very first measurements that a novice in simulations will perform, because it is such a simple calculation. For a given instantaneous configuration, we can easily compute all N(N − 1)/2 pair distances between the particles in the system. We can then make a histogram of the number of pairs with a distance between r and r + Δr. Choosing the bin-width Δr is a compromise between resolution (favoring a small value of Δr) and statistical accuracy (the relative error in g(r) scales as 1/√N_p).

FIGURE 5.2 The figure shows three different calculations of the radial distribution function of an 864-particle Lennard-Jones fluid [144]. The noisy curve (dots) was obtained using the conventional histogram method for a single liquid configuration (histogram bin-width equal to 0.005σ). The other two, almost indistinguishable curves are the results using the histogram method for a run of 10,000 simulation steps (triangles) and the result obtained for a single configuration, using the method of [144] (gray curve). Figure: courtesy of Samuel Coles.

Suppose that the number of pairs in the interval {r, r + Δr} is N_p(r); we then divide this number by the average number of pairs that would be found in the same range in an ideal (non-interacting) system. In three dimensions, that number is N_p^id(r) = (1/2) N ρ (4π/3)[(r + Δr)³ − r³], where the factor 1/2 is due to the fact that we count every pair only once. Our estimate for g(r) is then

\[ g(r) = \frac{N_p(r)}{N_p^{\rm id}(r)} . \tag{5.1.43} \]
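As an illustration (not from the book; a minimal sketch with hypothetical names, assuming a cubic box with minimum-image periodic boundaries), the conventional histogram estimate of Eq. (5.1.43) can be coded as follows.

    import numpy as np

    def radial_distribution(positions, box_length, dr=0.02):
        """Conventional histogram estimate of g(r), Eq. (5.1.43),
        for one configuration in a cubic periodic box."""
        N = len(positions)
        rho = N / box_length**3
        r_max = box_length / 2.0                     # largest meaningful distance
        nbins = int(r_max / dr)
        hist = np.zeros(nbins)
        for i in range(N - 1):
            # minimum-image distances from particle i to all j > i (each pair counted once)
            d = positions[i + 1:] - positions[i]
            d -= box_length * np.round(d / box_length)
            r = np.linalg.norm(d, axis=1)
            r = r[r < r_max]
            hist += np.histogram(r, bins=nbins, range=(0.0, r_max))[0]
        # ideal-gas pair count per bin: (N/2) * rho * (4*pi/3) * [(r+dr)^3 - r^3]
        edges = np.linspace(0.0, r_max, nbins + 1)
        shell = 4.0 * np.pi / 3.0 * (edges[1:]**3 - edges[:-1]**3)
        n_ideal = 0.5 * N * rho * shell
        r_centers = 0.5 * (edges[1:] + edges[:-1])
        return r_centers, hist / n_ideal

For a production run, the histogram would be accumulated over many configurations and normalized by the number of configurations.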

This calculation is so simple that it seems hard to imagine that one can do better, and indeed, during the first six decades of molecular simulation, the above approach was overwhelmingly used to compute g(r). However, in 2013 Borgis et al. [144,145] (see also [146]) proposed an alternative method to compute g(r) that has two advantages: 1) it yields a smaller statistical error and 2) it does not require binning. In deriving the result of ref. [144] we follow a slightly different approach than that paper. The value of the radial distribution function at a distance r from a reference particle is equal to the angular average of ρ(r)/ρ:

\[ g(r) = \frac{1}{\rho} \int d\hat r\, \left\langle \rho(r) \right\rangle_{N-1} = \frac{1}{\rho} \int d\hat r\, \Big\langle \sum_{j \neq i} \delta(r - r_j) \Big\rangle_{N-1} , \tag{5.1.44} \]

where N is the total number of particles in the system, ρ denotes the average number density (ρ ≡ N/V), r_j is the position of particle j relative to the origin, which is located at particle i, and r̂ is the unit vector in the direction of r. For simplicity,


we have written down the expression for g(r) for a given particle i, and hence the sum over j ≠ i is carried out keeping i fixed; in practice, the expression is averaged over all equivalent particles i. The angular brackets denote the thermal average

\[ \left\langle \cdots \right\rangle_{N-1} \equiv \frac{\int dr^{N-1}\, e^{-\beta U(r^N)}\, (\cdots)}{\int dr^{N-1}\, e^{-\beta U(r^N)}} , \tag{5.1.45} \]

where we integrate over N − 1 coordinates, because particle i is held fixed. We can now write

\[ \frac{\partial g(r)}{\partial r} = \frac{1}{\rho} \int d\hat r\, \Big\langle \frac{\partial}{\partial r} \sum_{j \neq i} \delta(r - r_j) \Big\rangle . \tag{5.1.46} \]

The only term that depends on r (the length of r) is the δ-function. We can therefore write

\[ \frac{\partial g(r)}{\partial r} = \frac{1}{\rho} \int d\hat r\, \Big\langle \sum_{j \neq i} \hat r \cdot \nabla_r\, \delta(r - r_j) \Big\rangle . \tag{5.1.47} \]

As the argument of the δ-function is r − r_j, we can replace r̂·∇_r by −r̂_j·∇_{r_j} and perform a partial integration:

\[
\frac{\partial g(r)}{\partial r}
= \frac{-1}{\rho} \int d\hat r\, \frac{\int dr^{N-1}\, e^{-\beta U(r^N)} \sum_{j \neq i} \hat r \cdot \nabla_{r_j} \delta(r - r_j)}{\int dr^{N-1}\, e^{-\beta U(r^N)}}
= \frac{-\beta}{\rho} \int d\hat r\, \frac{\int dr^{N-1}\, e^{-\beta U(r^N)} \sum_{j \neq i} \delta(r - r_j)\, \hat r_j \cdot \nabla_{r_j} U(r^N)}{\int dr^{N-1}\, e^{-\beta U(r^N)}}
= \frac{\beta}{\rho} \int d\hat r\, \Big\langle \sum_{j \neq i} \delta(r - r_j)\, \hat r_j \cdot F_j(r^N) \Big\rangle_{N-1} , \tag{5.1.48}
\]

where r̂_j · F_j ≡ F_j^{(r)} denotes the force on particle j in the radial direction. We can now integrate with respect to r:

\[ g(r) = g(r = 0) + \frac{\beta}{\rho} \int d\hat r \int_0^{r} dr'\, \Big\langle \sum_{j \neq i} \delta(r' - r_j)\, F_j^{(r)}(r') \Big\rangle_{N-1} .
\]

For concreteness, consider an ideal gas in the external potential

\[ u(z) = \begin{cases} z & z > 0 \\ \infty & z \le 0 . \end{cases} \]

For this system, the probability of finding an ideal gas molecule at a position z is given by the barometric distribution: p_0(z) = C exp[−βu(z)]. The Landau free energy as a function of the coordinate z is, in this case, simply equal to the potential energy: F(z) = −k_B T ln[p_0(z)] = u(z) = z, where we have chosen our reference point at z = 0. A direct simulation of the barometric height distribution yields poor statistics if βu(z) ≫ 1. This is


why we use the self-consistent histogram method. For window i we use the following window potential:

\[ W_i(z) = \begin{cases} \infty & z < z_i^{\min} \\ 0 & z_i^{\min} < z < z_i^{\max} \\ \infty & z > z_i^{\max} . \end{cases} \]

We allow only neighboring windows to overlap:

\[ z_{i-2}^{\max} < z_i^{\min} < z_{i-1}^{\max} , \qquad z_{i+2}^{\min} > z_i^{\max} > z_{i+1}^{\min} . \]

FIGURE 8.6 The probability of finding an ideal gas particle at position z. The figure on the left shows the results for the various windows, and the right figure shows the reconstructed distribution function as obtained from the self-consistent histogram method.

For each window, we perform M samples to estimate the probability pi (z) to find an ideal gas particle at a position z. The results of such a simulation are shown in Fig. 8.6(left). A self-consistent histogram method, such as MBAR (Eq. (8.6.59)), can be used to reconstruct the desired distribution p0 (z). The result of this calculation is shown in Fig. 8.6(right).
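The following sketch (not from the book) illustrates the idea on this ideal-gas example. It samples p_i(z) within each window by simple rejection sampling and then reconstructs ln p_0(z) by matching each window to the already-reconstructed distribution in the overlap region; this sequential matching is a simplified stand-in for a genuinely self-consistent solution such as MBAR, and all names and window parameters are hypothetical.

    import numpy as np

    beta = 1.0
    rng = np.random.default_rng(1)

    def sample_window(z_min, z_max, n_samples):
        """Rejection sampling of p(z) ~ exp(-beta*z) restricted to one window."""
        samples = []
        while len(samples) < n_samples:
            z = rng.uniform(z_min, z_max)
            if rng.random() < np.exp(-beta * (z - z_min)):   # acceptance against exp(-beta*z)
                samples.append(z)
        return np.array(samples)

    # overlapping windows covering 0 < z < 10 (each overlaps its neighbor by 0.5)
    edges = np.linspace(0.0, 10.0, 11)
    windows = [(edges[i], edges[i + 1] + 0.5) for i in range(10)]
    bins = np.linspace(0.0, 10.5, 106)
    centers = 0.5 * (bins[1:] + bins[:-1])

    log_p = np.full(len(centers), np.nan)
    for k, (lo, hi) in enumerate(windows):
        z = sample_window(lo, hi, 20000)
        h, _ = np.histogram(z, bins=bins, density=True)
        with np.errstate(divide="ignore"):
            log_h = np.log(h)
        if k == 0:
            log_p = log_h
        else:
            # shift this window so that it matches the reconstruction in the overlap region
            overlap = np.isfinite(log_p) & np.isfinite(log_h)
            shift = np.mean(log_p[overlap] - log_h[overlap])
            log_p = np.where(np.isfinite(log_p), log_p, log_h + shift)

    # log_p now approximates -beta*F(z) = -beta*z, up to an additive constant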

A rather different method to reconstruct free energy surfaces is to make use of the fact that, in equilibrium, there is detailed balance between states characterized by different order parameters. If we carry out a simulation (MC or MD) and measure the forward and reverse rates of transitions between states with order parameters Q and Q', denoted by R(Q → Q') and R(Q' → Q), respectively, then we must have

\[ P(Q)\, R(Q \to Q') = P(Q')\, R(Q' \to Q) \]

or

\[ \frac{P(Q)}{P(Q')} = \frac{R(Q' \to Q)}{R(Q \to Q')} . \]


Hence, by measuring the transition rates between narrow windows in the order-parameter distribution, we can reconstruct the shape of P(Q) [383–386].

8.7 Non-equilibrium free energy methods

Above, we discussed a number of techniques for computing free-energy differences. All these techniques assume either that the system under study is in thermodynamic equilibrium or, as in metadynamics, that the system is changing slowly in time. This choice for systems in or close to equilibrium seems logical, as the free-energy difference between two states is equal to the reversible work needed to transform one state into the other. It is therefore surprising that the free-energy difference between two systems can also be obtained by computing the non-equilibrium work that is needed to transform one system into the other. In fact, the relation that we discuss is even valid for arbitrarily short “switching” times t_s. In what follows, we briefly describe the non-equilibrium free-energy expression due to Jarzynski [387,388] and some of the generalizations proposed by Crooks [389–391].

As before, we consider two N-particle systems: one with a Hamiltonian H_0(Γ) and the other with a Hamiltonian H_1(Γ), where Γ ≡ {p^N, r^N} represents the phase-space coordinates of the system. We assume that we can switch the Hamiltonian of the N-particle system from H_0 to H_1: that is, we introduce a Hamiltonian H_λ that is a function of a time-dependent switching parameter λ(t), such that for λ = 0, H_{λ=0} = H_0, while for λ = 1, H_{λ=1} = H_1. Clearly, we can then write

\[ H_1[\Gamma(t_s)] = H_0[\Gamma(0)] + \int_0^{t_s} dt\, \dot\lambda\, \frac{\partial H_\lambda[\Gamma(t)]}{\partial \lambda} . \tag{8.7.1} \]

Note that the work W performed on the system due to the switching of the Hamiltonian is equal to \(\int_0^{t_s} dt\, \dot\lambda\, \partial H_\lambda[\Gamma(t)]/\partial\lambda\). If the switching takes place very slowly, the system remains in equilibrium during the transformation, and W reduces to the reversible work needed to transform system 0 into system 1. Under those conditions, W(t_s → ∞) = F_1 − F_0 ≡ ΔF. However, for a finite switching time, the average amount of work that must be expended to transform the system from state 0 to state 1 is larger than the free-energy difference ΔF: ⟨W(t_s)⟩ ≥ ΔF. The work W(t_s) depends on the path through phase space and, for a Hamiltonian system, this path itself depends on the initial phase-space coordinate Γ(0); later we shall consider more general situations where many paths connect Γ(0) with Γ(t_s). Let us next consider the average of exp[−βW(t_s)]. The work W(t_s) is a function of the initial phase-space position Γ(0). We assume that at time t = 0, the system is in thermal equilibrium, in which case the probability of finding the


system 0 with phase-space position Γ(0) is given by the canonical distribution

\[ P_0[\Gamma(0)] = \frac{\exp\{-\beta H_0[\Gamma(0)]\}}{Q_0} , \]

where Q_0 is the canonical partition function of system 0. The average of exp[−βW(t_s)] is then given by

\[
\left\langle \exp[-\beta W(t_s)] \right\rangle
= \int d\Gamma(0)\, P_0[\Gamma(0)]\, \exp\{-\beta W[t_s, \Gamma(0)]\}
= \int d\Gamma(0)\, \frac{\exp\{-\beta H_0[\Gamma(0)]\}}{Q_0}\, \exp\{-\beta [H_1(\Gamma(t_s)) - H_0(\Gamma(0))]\}
= \int d\Gamma(0)\, \frac{\exp\{-\beta H_1[\Gamma(t_s)]\}}{Q_0} , \tag{8.7.2}
\]

where we have used the fact that W(t_s) = H_1[Γ(t_s)] − H_0[Γ(0)]. Finally, we use the fact that the Hamiltonian equations of motion are area-preserving. This implies that dΓ(t_s) = dΓ(0). We then obtain Jarzynski's central result

\[ \left\langle \exp[-\beta W(t_s)] \right\rangle = \int d\Gamma(t_s)\, \frac{\exp\{-\beta H_1[\Gamma(t_s)]\}}{Q_0} = \frac{Q_1}{Q_0} = \exp(-\beta \Delta F) . \tag{8.7.3} \]

This is a surprising result because it tells us that we can obtain information about equilibrium free-energy differences from a non-equilibrium simulation. But actually, we already know two limiting cases of this result. First of all, in the limit of infinitely slow switching, we recover the relation between ΔF and the reversible work W_s, written in the form exp(−βΔF) = exp(−βW_s). The other limit is instantaneous switching. In that case, W is simply equal to H_1[Γ(0)] − H_0[Γ(0)] and we get

\[ \exp(-\beta \Delta F) = \left\langle \exp(-\beta \Delta H) \right\rangle , \]

which is Eq. (8.6.10). Crooks [389] has given a more general derivation of Eq. (8.7.3) that is not limited to Hamiltonian systems. In particular, Crooks showed that Eq. (8.7.3) remains valid, provided that the dynamics of the system is Markovian and microscopically reversible. Crooks' result implies that Eq. (8.7.3) is also valid if the “time-evolution” of the system is determined by a Metropolis Monte Carlo scheme. For more details, see Appendix E.
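In practice, Eq. (8.7.3) is applied to a finite set of work values obtained from repeated switching simulations. The sketch below (not from the book; all names are hypothetical) shows a numerically stable evaluation of the Jarzynski average, tested on a synthetic Gaussian work distribution for which the exact answer is known.

    import numpy as np

    def jarzynski_free_energy(work, beta=1.0):
        """beta*DeltaF = -ln< exp(-beta*W) >, Eq. (8.7.3),
        evaluated with a numerically stable log-sum-exp over the work samples."""
        w = -beta * np.asarray(work)
        w_max = w.max()
        log_avg = w_max + np.log(np.mean(np.exp(w - w_max)))
        return -log_avg / beta

    # synthetic check: Gaussian work with mean <W> = DeltaF + beta*sigma^2/2,
    # for which the Jarzynski average recovers DeltaF (here DeltaF = 2.0)
    rng = np.random.default_rng(0)
    sigma, dF = 1.0, 2.0
    work_samples = rng.normal(loc=dF + 0.5 * sigma**2, scale=sigma, size=200000)
    print(jarzynski_free_energy(work_samples))   # close to 2.0

Note that for strongly dissipative processes the exponential average is dominated by rare small-work trajectories, which is precisely the sampling problem discussed next.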


Eq. (8.7.3) and Crooks' generalization are both surprising and elegant and are of great conceptual importance. However, there is only little evidence that the non-equilibrium method to compute free-energy differences outperforms existing equilibrium methods [392]. The exception seems to be the calculation of free-energy differences in glassy systems [393], where thermodynamic integration and related methods simply fail. However, when other free-energy methods can also be used, they are at least as good as the Jarzynski method. The underlying reason limiting the practical use of the Jarzynski method is that for transformations far from equilibrium, the statistical accuracy of Eq. (8.7.3) may be quite poor. We already know this for the limit of instantaneous switching. Like in the particle-removal version of the Widom method (Eq. (8.6.1)), the dominant contribution to the average that we wish to compute comes from initial configurations that are rarely sampled. This was the reason why, for the measurement of chemical potentials, the “particle-removal” method was not a viable alternative to the “particle-insertion” scheme.

To illustrate the problem in the context of non-equilibrium free-energy calculations, we consider a change in the Hamiltonian of the system that does not change the free energy of the system. An example would be a Monte Carlo move that displaces one molecule in a liquid over a distance +ΔX. If ΔX is not small compared to the typical molecular dimensions, then the displacement of the particle will most likely require positive work to be performed. The same holds for the reverse situation where we move the particle back over a distance −ΔX from its new position to its starting point. However, the free energies of the initial and final situations are the same, and hence ΔF should be zero. This implies that the very small fraction of all configurations for which the work is negative makes an equal and opposite contribution to the average of exp(−βW). In fact, as in the particle-insertion/particle-removal case, the resolution of the problem lies in a combination of the forward and reverse schemes. We illustrate this by considering the Hamiltonian system; however, the result is general.

We now consider two non-equilibrium processes: one transforms the Hamiltonian from H_0 to H_1 in a time interval t_s, and the other process does the reverse. For both processes, we can make a histogram of the work that is expended during the transformation. For the forward process, we can write

\[ p_0(W) = \int d\Gamma(0)\, \frac{\exp\{-\beta H_0[\Gamma(0)]\}}{Q_0}\, \delta[W - W(t_s)] . \tag{8.7.4} \]

If we multiply both sides of this equation by exp(−βW) and use the fact that W(t_s) = H_1[Γ(t_s)] − H_0[Γ(0)], we get

\[
\exp(-\beta W)\, p_0(W)
= \int d\Gamma(0)\, \frac{\exp\{-\beta H_0[\Gamma(0)]\}}{Q_0}\, \exp(-\beta W)\, \delta[W - W(t_s)]
= \int d\Gamma(0)\, \frac{\exp\{-\beta H_0[\Gamma(0)]\}}{Q_0}\, \exp\{-\beta (H_1[\Gamma(t_s)] - H_0[\Gamma(0)])\}\, \delta[W - W(t_s)]
= \int d\Gamma(0)\, \frac{\exp\{-\beta H_1[\Gamma(t_s)]\}}{Q_0}\, \delta[W - W(t_s)]
= \frac{Q_1}{Q_0} \int d\Gamma(t_s)\, \frac{\exp\{-\beta H_1[\Gamma(t_s)]\}}{Q_1}\, \delta[W - W(t_s)]
= \exp(-\beta \Delta F)\, p_1(-W) . \tag{8.7.5}
\]

In the last line of Eq. (8.7.5), we have used the fact that the work that is performed on going from 1 to 0 is equal to H_0[Γ(0)] − H_1[Γ(t_s)] = −W(t_s). Hence, just as in the overlapping distribution method (8.6.1), we can obtain ΔF reliably if the histograms of the forward and reverse work show some overlap. The above result provides a powerful diagnostic tool for testing when the approach of Jarzynski and Crooks can be used safely in numerical simulations.

The above result would seem to limit the applicability of the Jarzynski result to the situation where the distributions of forward and reverse work have a non-negligible overlap, which is typically the case close to equilibrium. However, as was shown by Hartmann [394], the method can also be made to work for extreme non-equilibrium situations, by using a biased path-sampling of the non-equilibrium trajectories. In addition, Nilmeier et al. [395,396] showed that the approach of [389] can be used to construct composite MC trial moves with a high acceptance under conditions where the acceptance of normal MC moves would be low.
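The overlap diagnostic suggested by Eq. (8.7.5) can be sketched as follows (not from the book; hypothetical names, synthetic Gaussian work distributions): in the region where p_0(W) and p_1(−W) overlap, ln[p_0(W)/p_1(−W)] = β(W − ΔF), so ΔF follows from a linear fit.

    import numpy as np

    def crooks_overlap_estimate(work_forward, work_reverse, beta=1.0, nbins=60):
        """Estimate DeltaF from the overlap of the forward work histogram p0(W)
        and the reverse histogram p1(-W), using ln[p0(W)/p1(-W)] = beta*(W - DeltaF)."""
        w_f = np.asarray(work_forward)
        w_r = -np.asarray(work_reverse)          # histogram p1 as a function of -W
        lo, hi = max(w_f.min(), w_r.min()), min(w_f.max(), w_r.max())
        if lo >= hi:
            raise ValueError("forward and reverse work histograms do not overlap")
        bins = np.linspace(lo, hi, nbins + 1)
        p0, _ = np.histogram(w_f, bins=bins, density=True)
        p1, _ = np.histogram(w_r, bins=bins, density=True)
        centers = 0.5 * (bins[1:] + bins[:-1])
        mask = (p0 > 0) & (p1 > 0)
        y = np.log(p0[mask] / p1[mask])          # should equal beta*(W - DeltaF)
        slope, intercept = np.polyfit(centers[mask], y, 1)
        return -intercept / slope

    # synthetic Gaussian work distributions consistent with the Crooks relation (DeltaF = 2.0)
    rng = np.random.default_rng(3)
    sigma, dF, beta = 1.0, 2.0, 1.0
    wf = rng.normal(dF + 0.5 * beta * sigma**2, sigma, 100000)    # forward work
    wr = rng.normal(-dF + 0.5 * beta * sigma**2, sigma, 100000)   # reverse work
    print(crooks_overlap_estimate(wf, wr, beta))                  # close to 2.0

If the fitted slope deviates strongly from β, or the overlap region is empty, the estimate should not be trusted.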

8.8 Questions and exercises

Question 23 (Free energy).
1. Why does Eq. (8.6.1) fail for hard spheres?
2. Derive an expression for the error in the estimate of the chemical potential obtained by Widom's test particle method for a system of hard spheres. Assume that the probability of generating a trial position with at least one overlap is equal to p.
3. An alternative method for calculating the free-energy difference between state A and state B is to use an expression involving the difference between the two Hamiltonians:

\[ F_A - F_B = \frac{-\ln \left\langle \exp\left[-\beta (H_A - H_B)\right] \right\rangle_{N,V,T,B}}{\beta} . \tag{8.8.1} \]

Derive this equation. What limits the practical applicability of the above expression? Show that Widom's test particle method is a special case of this equation.

Question 24 (Virtual volume change). As discussed in section 5.1.5.3, the virial equation is not particularly convenient for computing the pressure of a hard-sphere fluid.


• Why not?
It is more convenient to perform a constant-pressure simulation and compute the density. An alternative way to compute the pressure of a hard-sphere fluid is to use a trial volume change. In this method, a virtual displacement of the volume is performed, and the probability that such a (virtual) move would be accepted has to be computed. In general, one can consider both trial expansions and trial compressions.
• Discuss under what conditions the pressure can be calculated using only trial compressions. (Hint: consider the analogy with Widom's test particle method.)
Next, consider a system of hard-core chain molecules.
• Explain how the acceptance ratio method can be used to compute the pressure. Hint: it is useful to start from Eq. (8.6.13), with w = 1.


Chapter 9

Free energies of solids

In three dimensions, the fluid and the crystalline states are separated by a first-order phase transition. Unlike the liquid-vapor transition, which ends in a critical point, there is no solid-fluid critical point [345,397,398].1 As first-order phase transitions proceed irreversibly via a nucleation and growth process, there is no continuous, hysteresis-free path joining the bulk fluid and solid. The hysteresis problem is even worse for first-order solid-solid transitions. The most robust way to locate the coexistence curve involving one or more solid phases is, therefore, to compute the chemical potentials of all relevant phases as a function of temperature, pressure, and possible additional variables.2

In the absence of a natural reversible path between the crystal phase and the dilute gas, we cannot directly relate the chemical potential of a solid to that of a dilute-gas reference state. It is for this reason that special techniques are needed to compute the chemical potential of solids. In practice, such calculations usually involve computing the Helmholtz free energy F of the solid, and then using the relation G = Nμ = F + PV to deduce the chemical potential of the solid. Note that the above relation applies to pure substances. In what follows, we will mainly discuss pure substances, but we will occasionally refer to the extra challenges posed by mixtures.

At the outset, we note that locating phase equilibria involving solids cannot be based on techniques such as the Gibbs-ensemble method of section 6.6, which rely on the possibility of exchanging particles between the coexisting phases. Such particle exchanges become extremely unlikely for a dense phase, in particular for solids: the successful trial insertion of a particle into the solid phase typically requires that there is a non-negligible concentration of vacancies in the solid. Such defects do occur in real solids, but their concentration is so low (for example, in the case of a hard-sphere crystal near melting, there is, on average, one vacancy per 8000 particles) that one would need a rather large crystal (or a biased simulation) to observe a reasonable number of holes in a simulation. Hence, the Gibbs ensemble technique, although still valid in principle,

1 However, the transition between an isotropic fluid and a (liquid) crystal may end in a critical point

in the presence of a strong enough symmetry-breaking field [399]. This approach has been used in simulations in ref. [303].
2 In Chapter 8, we discussed direct-coexistence simulations, which are simple in principle but, for transitions involving solids, tricky in practice. Moreover, locating solid-solid coexistence by direct-coexistence simulations is not feasible.


would not be very practical for studying solid-liquid or solid-solid coexistence.3 In cases where the number of particles in a crystal can differ significantly from the number of lattice sites [224,225], particle insertions/removals may be easier, but applying the Gibbs-ensemble method is even more problematic. The reason is that to find the minimal free-energy state of such systems, the number of lattice sites should be free to vary independently of the volume and the number of particles [224]. The standard Gibbs ensemble method tends to constrain the number of lattice sites and will yield nonsensical answers. Below, we briefly describe some of the salient features of free-energy calculations of crystalline solids. Our discussion is not meant to be comprehensive: for complementary information, we refer the reader to the reviews of Vega et al. [402] and of Monson and Kofke [403].

9.1 Thermodynamic integration

Thermodynamic integration is the method most commonly used in the study of the solid-liquid transition. For the liquid phase, this calculation is straightforward and was already discussed in section 8.4.1: the Helmholtz free energy F of the liquid is determined by integrating the equation of state, starting at low densities where the fluid behaves effectively as an ideal gas:

\[ \frac{F(\rho)}{N k_B T} = \frac{F^{\rm id}(\rho)}{N k_B T} + \frac{1}{k_B T} \int_0^{\rho} d\rho'\, \frac{P(\rho') - \rho' k_B T}{\rho'^2} , \tag{9.1.1} \]

where the equation of state as a function of the density ρ is denoted by P(ρ), and F^id(ρ) is the free energy of an ideal gas at density ρ. An important condition is that the integration path in Eq. (9.1.1) should be reversible. If the integration path crosses a strong first-order phase transition, hysteresis may occur, and Eq. (9.1.1) can no longer be used. For a liquid phase, this problem can be avoided by performing the integration in two steps. Start the simulation at a temperature well above the critical temperature and determine the equation of state for compression along an isotherm to the desired density. In the second step, the system is cooled at constant density to the desired temperature. The free-energy change in this step is given by

\[ \frac{F(T = T_{II})}{k_B T_{II}} - \frac{F(T = T_{I})}{k_B T_{I}} = \int_{T_I}^{T_{II}} d(1/T)\, \frac{U(T, N, V)}{k_B} . \tag{9.1.2} \]

The solid-liquid coexistence curve itself does not end at a critical point, and hence there exists no “natural” reversible path from the solid to the ideal gas that does not cross a first-order phase transition. It is usually possible, however, to construct a reversible path to other states of known free energy. The construction of such paths is the main topic of the present chapter.

3 In some special cases, the method of [400] (see section 6.6) may make a direct Gibbs ensemble simulation feasible; a related scheme has been proposed for solid mixtures [401].
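The integration in Eq. (9.1.1) is easily carried out numerically once an equation of state is available. The sketch below (not from the book; hypothetical names) uses the trapezoidal rule, with the Carnahan-Starling expression for hard spheres standing in for a generic P(ρ); the integrand [P(ρ') − ρ'k_BT]/ρ'² = k_BT[Z(ρ') − 1]/ρ' remains finite as ρ' → 0.

    import numpy as np

    def excess_free_energy(z_of_rho, rho, n_points=2000):
        """beta*F_ex/N = int_0^rho drho' [Z(rho') - 1]/rho'  (the excess part of Eq. (9.1.1)),
        evaluated with the trapezoidal rule; z_of_rho returns Z = P/(rho*kB*T)."""
        rho_grid = np.linspace(1.0e-10, rho, n_points)
        integrand = (z_of_rho(rho_grid) - 1.0) / rho_grid
        return np.trapz(integrand, rho_grid)

    def z_carnahan_starling(rho, sigma=1.0):
        """Example equation of state: Carnahan-Starling for hard spheres."""
        eta = np.pi * rho * sigma**3 / 6.0
        return (1.0 + eta + eta**2 - eta**3) / (1.0 - eta)**3

    rho = 0.8
    beta_f_ex = excess_free_energy(z_carnahan_starling, rho)
    # reduced chemical potential: beta*mu = (ln rho - 1) + beta*f_ex + Z
    beta_mu = (np.log(rho) - 1.0) + beta_f_ex + z_carnahan_starling(rho)

Any other fitted equation of state (for instance those used in Example 19 below) can be substituted for the stand-in function.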


Various routes arrive at a state of known free energy. In the mid-1960s, Hoover and Ree introduced the so-called single-occupancy cell method [276, 307]. In the single-occupancy cell method, the solid is modeled as a lattice gas; each particle is assigned to a single lattice point and is allowed to move only in its “cell” around this lattice point. The lattice sites coincide with the average positions of the atoms of the unconstrained solid. If the density is sufficiently high —such that the walls of the cells have a negligible influence on the properties of the system—the free energy of this lattice model is identical to that of the original solid. The single-occupancy cell model can be expanded uniformly without melting (or, more precisely, without losing its translational order). In this way, we obtain a (presumably reversible) integration path to a dilute lattice gas, the free energy of which can be calculated analytically. The earliest application of the single-occupancy cell method was the calculation by Hoover and Ree of the free energy of the hard-disk [276] and hard-sphere solid [307]. An alternative for the single-occupancy cell method was also developed by Hoover and co-workers [309,311]. In this approach, the solid is cooled to a temperature sufficiently low for it to behave as a harmonic crystal. The Helmholtz free energy of a harmonic crystal can be calculated analytically, using lattice dynamics. The free energy of the solid at higher temperatures then follows from integration of Eq. (9.1.2).4 In practice, both the single-occupancy cell method and the method using the harmonic solid have some limitations that make a more general scheme desirable. For example, there is some evidence that the isothermal expansion of the single-occupancy cell model may not be completely free of hysteresis [313]: at the density where the solid would become mechanically unstable in the absence of the artificial cell walls, the equation of state of the single-occupancy cell model appears to develop a cusp or possibly even a weak first-order phase transition. This makes the accurate numerical integration of Eq. (9.1.1) difficult. The harmonic-solid method can work only if the solid phase under consideration can be cooled reversibly all the way down to the low temperatures where the solid becomes effectively harmonic. However, many molecular solids undergo one or more first-order phase transitions on cooling. Even more problematic are model systems for which the particles interact via a discontinuous (e.g., hard-core) potential. The crystalline phase of such model systems can never be made to behave like a harmonic solid. For complex molecular systems, the problem is of a different nature. Even if these materials can be cooled to become a harmonic crystal, computing the Helmholtz free energy in that limit may be nontrivial. In the present chapter, we discuss methods that do not suffer from these limitations and can be applied to arbitrary solids [404,405]. Although the method is generally applicable, it is advantageous to make small modifications depending 4 If we use Eq. (9.1.2) directly, the integration will diverge in the limit T → 0. This divergence can

be avoided if we determine the difference in free energy of the solid of interest and the corresponding harmonic crystal.


on whether we study an atomic solid with a discontinuous potential [314], or with a continuous potential [406], or a molecular solid [405,407].

9.2 Computation of free energies of solids

The method discussed in this section is a Hamiltonian thermodynamic integration technique (see Section 2.5.1) for computing the Helmholtz free energy of an atomic solid. The basic idea is to transform the solid under consideration reversibly into an Einstein crystal. To this end, the atoms are coupled harmonically to their lattice sites. If the coupling is sufficiently strong, the solid behaves as an Einstein crystal, the free energy of which can be calculated exactly. The method was first used for continuous potentials by Broughton and Gilmer [408], while Frenkel and Ladd [314] used a slightly different approach to compute the free energy of the hard-sphere solid. Extensions to atomic and molecular substances can be found in [406,407].

9.2.1 Atomic solids with continuous potentials

Let us first consider a system that interacts with a continuous potential, U(r^N). We shall use thermodynamic integration (Eq. (8.4.8)) to relate the free energy of this system to that of a solid of known free energy. For our reference solid, we choose an Einstein crystal, i.e., a solid of noninteracting particles that are all coupled to their respective lattice sites by harmonic springs. During the thermodynamic integration, we switch on these spring constants and switch off the intermolecular interactions. To this end, we consider a potential energy function

\[ U(r^N; \lambda) = U(r_0^N) + (1 - \lambda)\left[ U(r^N) - U(r_0^N) \right] + \lambda \sum_{i=1}^{N} \alpha_i (r_i - r_{0,i})^2 , \tag{9.2.1} \]

where r_{0,i} is the lattice position of atom i and U(r_0^N) is the static contribution to the potential energy (i.e., the potential energy of a crystal with all atoms at their lattice positions), λ is the switching parameter, and α_i is the Einstein-crystal spring constant coupling atom i to its lattice site. Note that for λ = 0 we recover the original interactions; for λ = 1, we have switched off the intermolecular interactions completely (except for the constant static term) and the system behaves like an ideal (noninteracting) Einstein crystal. The free-energy difference is calculated using Eq. (8.4.8):

\[
F = F_{\rm Ein} + \int_{\lambda=1}^{\lambda=0} d\lambda\, \left\langle \frac{\partial U(\lambda)}{\partial \lambda} \right\rangle_{\lambda}
= F_{\rm Ein} + \int_{\lambda=0}^{\lambda=1} d\lambda\, \left\langle \left[ U(r^N) - U(r_0^N) \right] - \sum_{i=1}^{N} \alpha_i (r_i - r_{0,i})^2 \right\rangle_{\lambda} . \tag{9.2.2}
\]


The configurational free energy of the noninteracting Einstein crystal is given by

\[ F_{\rm Ein} = U(r_0^N) - \frac{d}{2\beta} \sum_{i=1}^{N} \ln\!\left( \frac{\pi}{\alpha_i \beta} \right) . \tag{9.2.3} \]

As we shall see later, it is computationally more convenient to consider a crystal with fixed center of mass. This will result in a slight modification of Eq. (9.2.3) (see section 9.2.5). The “spring constants” α_i can be adjusted to optimize the accuracy of the numerical integration of Eq. (9.2.2). It is reasonable to assume that the integration is optimal if the fluctuations of the quantity Σ_{i=1}^N α_i (r_i − r_{0,i})² − U(r^N) are minimal, which implies that the interactions in the pure Einstein crystal should differ as little as possible from those in the original system. This suggests that α_i should be chosen such that the mean-squared displacements for λ = 1 and λ = 0 are equal:

\[ \left\langle \sum_{i=1}^{N} (r_i - r_{0,i})^2 \right\rangle_{\lambda=0} \approx \left\langle \sum_{i=1}^{N} (r_i - r_{0,i})^2 \right\rangle_{\lambda=1} . \]

Using the expression for the mean-squared displacement in an Einstein crystal (9.2.10), we find the following condition for α:

\[ \frac{3}{2\beta\alpha_i} = \left\langle (r_i - r_{0,i})^2 \right\rangle_{\lambda=0} . \tag{9.2.4} \]
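As a small illustration (not from the book; hypothetical names and numbers), the following sketch takes mean-squared displacements measured in the fully interacting crystal (λ = 0), chooses the spring constants according to Eq. (9.2.4) (generalized to d dimensions), and evaluates the Einstein-crystal free energy of Eq. (9.2.3).

    import numpy as np

    def einstein_reference(msd_lambda0, u_static, beta, dim=3):
        """alpha_i = dim / (2*beta*<(r_i - r_0,i)^2>_{lambda=0})   (cf. Eq. (9.2.4))
        F_Ein = U(r0^N) - (dim/(2*beta)) * sum_i ln(pi/(alpha_i*beta))   (Eq. (9.2.3))"""
        msd = np.asarray(msd_lambda0)
        alpha = dim / (2.0 * beta * msd)
        f_einstein = u_static - dim / (2.0 * beta) * np.sum(np.log(np.pi / (alpha * beta)))
        return alpha, f_einstein

    # hypothetical input: 108 identical particles with <dr^2> = 0.014 (reduced units), beta = 1
    alpha, f_ein = einstein_reference(np.full(108, 0.014), u_static=-7.5 * 108, beta=1.0)

In a real calculation the measured msd values and the static lattice energy would, of course, come from a short simulation of the interacting crystal.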

For systems with diverging short-range repulsive interactions, such as, for instance, the Lennard-Jones potential, the integrand in Eq. (9.2.2) will exhibit a weak, integrable divergence. This divergence is due to the fact that the potential energy function of the Einstein crystal does not completely exclude configurations where two particles have the same center-of-mass coordinates. The amplitude of the diverging contribution can be strongly suppressed by increasing the value of the α's. However, in order to increase the accuracy of the calculation, it is better to perform the thermodynamic integration from λ = 0 to λ = 1 − Δλ, and compute the free-energy difference between the states at λ = 1 and at λ = 1 − Δλ using the perturbation expression ΔF = −k_B T ln⟨exp(−βΔU)⟩, where ΔU ≡ U(r^N; λ = 1) − U(r^N; λ = 1 − Δλ). The precise value of Δλ is unimportant, but if it is chosen too large, the perturbation expression becomes inaccurate, and if it is chosen too small, the numerical quadrature used in the Hamiltonian integration becomes less accurate. We get back to this point in section 9.2.2.

Other approaches

Of course, the Einstein crystal is not the only possible reference state. We can also use other reference states for which the free energy is


known analytically. The most obvious choice is the (classical) harmonic crystal, i.e., a model crystal for which the potential energy is expanded only to quadratic order in the displacements of the particles with respect to their lattice positions [335,409]. From our knowledge of the Hessian at the potential minimum of the crystal, we can obtain all non-zero eigenfrequencies ω_i (i = 1, ..., d(N − 1)) of the harmonic phonon modes. The free energy F_h(N, V, T) of the (fixed-center-of-mass) harmonic crystal is given by

\[ \beta F_h(N, V, T) = \beta U_0 + \sum_{i=1}^{d(N-1)} \ln(\beta \hbar \omega_i) , \tag{9.2.5} \]

where U_0 is the value of the potential energy at its minimum.5 The average excess potential energy of the interacting crystal equals U^exc(N, V, T) = U(N, V, T) − U(N, V, T = 0). For the harmonic crystal, U_h^exc(N, V, T) = d(N − 1)k_B T/2. At a low enough temperature T_L, the crystal with the full intermolecular interactions will (usually) become increasingly harmonic, and hence very similar to the harmonic crystal. Cheng and Ceriotti [409] proposed that when ΔU ≡ U^exc(N, V, T_L) − U_h^exc(N, V, T_L) = O(k_B T_L), we can use the thermodynamic perturbation expression Eq. (8.6.10) to obtain the free energy of the interacting crystal:

\[ \beta \Delta F = -\ln \left\langle \exp(-\beta \Delta U) \right\rangle_h . \]

Note that, as before with the Einstein crystal, the use of a perturbation expression near the harmonic limit eliminates the divergence that would have been present if Hamiltonian thermodynamic integration (TI) had been used. Note, however, that ΔU is an extensive quantity; hence, the larger the system, the lower T_L will have to be chosen. For this reason, it may be necessary to perform Hamiltonian TI at a higher temperature, as in the Einstein-crystal limit. We stress that using MD for sampling the potential energy of almost harmonic solids requires the use of thermostats (see Chapter 7) that are guaranteed to be ergodic. Hence, the simple Nosé-Hoover thermostat should definitely be avoided. Note, though, that it is never necessary to simulate the harmonic (or Einstein) limit: as we know the Gaussian distribution of the particle or phonon displacements, we can generate uncorrelated configurations in the harmonic/Einstein limit using the Box-Muller method [66]. Not all solid structures are mechanically stable at very low temperatures, meaning that the Hessian corresponding to the high-temperature lattice structure need not be positive-definite. In that case, the Einstein-crystal method is more robust.

5 Expression (9.2.5) for the harmonic free energy includes the contribution due to the momenta and ℏ explicitly. However, ℏ is irrelevant: in all free-energy differences between different phases, all factors ℏ cancel. So, still: classical phase equilibria do not depend on ℏ.
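A minimal sketch of Eq. (9.2.5) (not from the book; it assumes unit masses and ℏ = 1 in reduced units, and a hypothetical Hessian supplied by the caller) is given below: diagonalize the Hessian, drop the d translational zero modes, and sum the logarithms of the remaining frequencies.

    import numpy as np

    def harmonic_free_energy(hessian, beta, u0, dim=3, hbar=1.0):
        """beta*F_h = beta*U0 + sum_i ln(beta*hbar*omega_i)   (cf. Eq. (9.2.5)),
        with omega_i the non-zero eigenfrequencies of the (unit-mass) dN x dN Hessian.
        The dim lowest eigenvalues (translational zero modes) are discarded."""
        eigvals = np.sort(np.linalg.eigvalsh(hessian))[dim:]
        if np.any(eigvals <= 0.0):
            raise ValueError("Hessian not positive definite: structure is mechanically unstable")
        omega = np.sqrt(eigvals)          # unit masses: omega^2 equals the Hessian eigenvalue
        return beta * u0 + np.sum(np.log(beta * hbar * omega))

For particles with unequal masses, the Hessian would first have to be mass-weighted; the exception raised above corresponds precisely to the instability discussed in the preceding paragraph.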


Integrating around the critical point

Although the solid-liquid transition in three dimensions is always first-order, it is possible to avoid this phase transition altogether by applying an artificial field that stabilizes the structure of the crystal. When applied to the liquid phase, this field breaks the isotropic symmetry. Hence, there is no symmetry difference between the liquid and the crystal in the presence of this field. For weak fields, there is still a first-order phase transition between a solid-like and a liquid-like phase, but for strong enough fields the phase transition ends at a critical point, and we can go continuously from liquid to crystal, in the same way that we can link liquid and vapor by a continuous path around the critical point [399]. The same approach can be used to construct a reversible path around a first-order phase transition involving liquid-crystalline phases, such as the isotropic-nematic transition and the nematic-smectic transition [303].

Alternative reference states

There are many ways to prepare a reference solid with known free energy. The Einstein crystal and the harmonic solid are but two examples. Sometimes it is convenient to use a reference solid that does allow multiple particles to share the same lattice site: such behavior is relevant in the study of cluster solids [410]. An Einstein-crystal-like method has even been used to compute the free energy of disordered phases [411].

Free-energy differences between different solid phases

Often one is interested in knowing the relative stability of two crystalline phases. In that case, it suffices to compute the free-energy difference between those phases. It is then not always necessary to compute the free energy of the individual phases: if it is possible to transform one solid reversibly into the other, we can perform a thermodynamic integration along that path. A rather different method that can be used to measure free-energy differences between different solid phases is the Lattice Switch MC method of Bruce et al. [412]. This method is discussed in Example 12.

9.2.2 Atomic solids with discontinuous potentials

In the preceding examples, we have considered reference states where the intermolecular interactions were switched off gradually, or where a perturbation expression was used to compute the free-energy difference between a harmonic crystal and a crystal with the full interaction potential. However, such an approach is not possible if the model under study has hard-core interactions: to be more precise, an infinitely strong repulsion cannot be switched off using a linear coupling parameter. Moreover, for a hard-core crystal, the harmonic reference state does not exist.


One way to treat such systems with a hard-core interaction U_0 is simply not to switch it off but to make it harmless. One possible solution would be to use a method where the Einstein lattice is expanded to very low densities, where the hard-core overlaps become extremely unlikely (the same method has been used for molecular crystals [404,407]). An alternative is to consider a system where we can switch on the spring constants, while leaving the hard-core interactions between the particles unaffected:

\[ U(\lambda) = U_0 + \lambda \Delta U = U_0 + \lambda \sum_{i=1}^{N} (r_i - r_{0,i})^2 , \tag{9.2.6} \]

where N denotes the total number of particles and r_{0,i} the position of the lattice site to which particle i is assigned. The free-energy difference between the system with coupling λ_max and the pure hard-sphere system is then

\[ F_{\rm HS} = F(\lambda_{\max}) - \int_0^{\lambda_{\max}} d\lambda\, \left\langle \Delta U(r^N) \right\rangle_{\lambda} . \tag{9.2.7} \]

At sufficiently high values of λmax , the hard particles do not “feel” each other and the free energy reduces to that of a noninteracting Einstein crystal. Clearly, the value of the spring constant λ should be sufficiently large to ensure that the harmonically bound crystal is indeed behaving as an Einstein crystal. At the same time, λ should not be too large, because this would make the numerical integration of Eq. (9.2.7) less accurate. In general, the choice of the optimal value for λ depends on the details of the model. In Case Study 17, we show how to choose λ for a particular model system and we discuss other practical issues that are specific to hard-core interactions.

9.2.3 Molecular and multi-component crystals

Molecular crystals

In contrast to atoms, the interactions between molecules depend on their relative orientation. Molecules come in a wide range of shapes, and typically these shapes can be packed into a periodic structure in many different ways. But it is not just packing that matters: the energetic interactions of molecules depend on their local charge distributions and sometimes also on their ability to form hydrogen bonds. Which molecular-crystal polymorph forms upon solidification is therefore the result of an interplay between many different factors. Even a simple molecule such as nitrogen has at least seven different solid phases [413,414]. It is in this context that free-energy calculations of molecular solids become important: it is clearly of interest to know which polymorph has the lowest Gibbs free energy at a given temperature and pressure. In this section, we discuss how free-energy calculations of molecular crystals can differ from those of atomic crystals.


FIGURE 9.1 Sketch of the use of the Einstein Crystal method for molecular crystals. The simulation uses a sequence of Hamiltonian thermodynamic integrations to compute the free-energy difference between a molecular crystal (panel A) and an “Einstein” crystal where both the orientation of the molecules and their center of mass position are constrained by harmonic springs (panel C). Panel A shows the original crystal; some of the intermolecular interactions are shown as double arrows. In the transformation from Panel A to Panel B, the Einstein harmonic forces are switched on, while the intermolecular forces are still on. Finally, in the transformation from B to C, the intermolecular forces (but not the intramolecular forces) are switched off. Computing the free energy of the molecular Einstein crystal is a bit more complex than for an atomic Einstein crystal, as the normal-mode frequencies of the harmonically bound crystal must be determined.

However, a word of caution is in order: even if we know which polymorph is the most stable, there is no guarantee that this is the form that will appear in an experiment. As was already noted by Ostwald in 1897 [415], the polymorph that crystallizes is usually not the one that is thermodynamically most stable.

Conceptually, the calculation of the free energy of a molecular crystal is no different from that of an atomic crystal: we need to select a suitable reference state, and we need to find a reversible Hamiltonian thermodynamic integration path. However, in practice, more steps are involved, depending on the nature of the reference state. In particular, we need to impose both the translational and (possibly) the orientational order of the molecular crystal. One way to create an orientationally ordered reference state is to couple the molecular orientation to an external aligning field [416]. However, it is also possible to connect several atoms in the molecules with harmonic springs to their average lattice positions [405] (see Fig. 9.1). An example of an alternative approach to compute the free energy of a nitrogen crystal is given in Example 33.

Multi-component crystals

Free-energy calculations for ordered crystals that contain more than one species per unit cell are not fundamentally different from those for one-component crystals [417–419]. The situation is more subtle in the case of mixed crystals, such as alloys with substitutional disorder. In such cases, we need to sample over different re-


alizations of the substitutional disorder. If the sizes of the various components in the mixed solid are not too different, we can use Monte Carlo “swap moves” to sample over configurations [101]. Note that we cannot simply average over random permutations of the different components, because even in substitutionally disordered solids there exist local correlations between the positions of the different components. The easiest way to compute the free energy of a crystal with substitutional disorder is to compute first the free-energy change associated with the transformation of a mixed crystal into a pure crystal. There are several ways of doing this (see e.g. [101,210] and Example 12), involving the transformation of all particles to the same type and computing the free-energy change in the process. Once we have a pure crystal, we can use the standard methods discussed above to compute the free energy. However, it may be that no reversible path exists from the substitutional alloy to a one-component crystal. In that case, it is better to use a free-energy calculation method that is compatible with swap moves. One can, for instance, in the spirit of ref. [410], switch on a periodic potential that is compatible with the lattice structure, such that, for a sufficiently strong field, all particles are forced to be close to a lattice position, although, unlike in ref. [410], double occupancy of lattice sites will typically be excluded. This field should be “color-blind”: it does not matter which species is at which lattice site. Once the field is strong enough, we can switch off the intermolecular forces, and we have reduced the system to a state for which we can compute the free energy analytically (or almost analytically); in this limit, the swap moves will sample a randomly substituted alloy.

9.2.4 Einstein-crystal implementation issues

If all particles are coupled to the Einstein lattice, the crystal as a whole does not move. However, in the limit λ → 0, there is no penalty for moving the particles away from their “Einstein” lattice positions. As a consequence, the crystal as a whole may start to drift, and the mean-squared particle displacement ⟨r²⟩ becomes of order L². If this happens, the integrand in Eq. (9.2.7) becomes sharply peaked around λ = 0. This would seem to imply that the numerical integration of Eq. (9.2.7) requires many simulations at low values of λ. This problem can be avoided if we perform the simulation under the constraint that the center of mass of the solid remains fixed. In this case, ⟨r²⟩ tends to ⟨r²⟩₀, the mean-squared displacement of a particle from its lattice site in the normal (i.e., interacting) crystal.6

6 The divergence of the mean-squared displacement at low values of λ can also be avoided by fixing

the position of one of the particles in the system. The net result of imposing such an “Einstein-Molecule” constraint [420] is similar, though not quite identical, to fixing the center of mass of the system as a whole.


To perform a Monte Carlo simulation under the constraint of a fixed center of mass, we have to ensure that, if a particle is given a random displacement, all particles are subsequently shifted in the opposite direction such that the center of mass remains fixed. In practice, it is not very convenient to keep the center of mass in place by moving all particles every time a single-particle trial move is carried out. Rather, we update the center-of-mass position every time a single-particle trial move is accepted. We need to correct for the shift of the center of mass only when computing the potential energy of the harmonic springs connecting the particles to their lattice sites. In contrast, the calculation of the intermolecular potential can be carried out without knowledge of the position of the center of mass, as a shift of the center of mass does not change the distances between particles. It is convenient to distinguish between the “absolute” coordinates r of a particle (i.e., those that have been corrected for center-of-mass motion) and the uncorrected coordinates r^(U). When computing the potential energy of the harmonic springs, we need to know Σ_{i=1}^N (r_i − r_{0,i})². To compute the distance of a particle i to its lattice site, r_i − r_{0,i}, we must keep track of the shift of the center of mass:

\[ \Delta r_i \equiv r_i - r_{0,i} = r_i^{(U)} - r_{0,i} - \Delta R_{CM} , \]

where ΔR_CM denotes the accumulated shift of the center of mass of the system. Every time a particle is moved from r^(U) → r^(U) + Δ, ΔR_CM changes to ΔR_CM + Δ/N. The computation of the change in energy of the harmonic interaction between all particles and their lattice sites is quite trivial. Suppose that we attempt to move particle i, which is at a distance Δr_i from its lattice site r_{0,i}, by an amount Δ_i. This causes a shift Δ_i/N in the center of mass. The change in the harmonic potential energy is

\[
\Delta U_{\rm Harm}(\lambda) = \lambda \sum_{j \neq i} \left[ \left( \Delta r_j - \Delta_i/N \right)^2 - \Delta r_j^2 \right] + \lambda \left[ \left( \Delta r_i + (1 - 1/N)\Delta_i \right)^2 - \Delta r_i^2 \right]
= \lambda \left[ 2\, \Delta r_i \cdot \Delta_i + \frac{N-1}{N}\, \Delta_i^2 \right] , \tag{9.2.8}
\]

where, in the last line, we used the fact that Σ_{i=1}^N Δr_i = 0. One more caveat should be considered: it is common (though not advisable) practice to put a particle that moves out of the original simulation box back in at the other side. However, when simulating a system with a fixed center of mass, moving a particle back into the original simulation box creates a discontinuous change in the position of the center of mass and hence a sudden change in the


Algorithm 19 (Fixed-CoM MC of a crystal bound to a reference lattice).

    function mcmove                        ! attempts to move a particle keeping the center of mass fixed
      o = int(R*npart) + 1                 ! select a particle at random
      dis = (R-0.5)*delx                   ! give the particle a random displacement
      xn = x(o) + dis
      dx = x(o) - x0(o) - dxcm             ! Delta r_i with respect to the reference lattice
      del = lambda*(2*dx*dis + dis*dis*(npart-1)/npart)
                                           ! energy difference with the lattice, Eq. (9.2.8)
      arg1 = -beta*del
      if R < exp(arg1) then
        eno = ener(x(o))                   ! energy of the old configuration
        enn = ener(xn)                     ! energy of the new configuration
        arg2 = -beta*(enn-eno)
        if R < exp(arg2) then
          dxcm = dxcm + (xn-x(o))/npart    ! new shift of the center of mass
          x(o) = xn                        ! accepted: replace x(o) by xn
        endif
      endif
    end function

Specific Comments (for general comments, see p. 7)
1. The function setlat has been used to set up the fixed reference lattice x0 (Algorithm 20). At the beginning of the simulation x = x0. ener calculates the intermolecular interaction energy.
2. If a move is accepted, the shift of the center of mass (CoM) of the system (dxcm) is updated, and the same shift is applied to the reference lattice.
3. The term λ (lambda) is the coupling constant as defined in Eq. (9.2.6), and dxcm = ΔR_CM is the accumulated shift of the center of mass.
4. For hard-core systems, it is important to compute first the Boltzmann factor associated with the potential-energy change of the harmonic springs and apply the Metropolis rule to see if the move should be rejected. Only if this test is passed should we attempt to perform the more expensive test for overlaps.

energy of the Einstein lattice. Therefore, in a simulation with a fixed center of mass, particles that move out of the original simulation box should definitely not be put back in. Algorithms 19 and 20 sketch how the Einstein-crystal method is implemented in a Monte Carlo simulation.


Algorithm 20 (Generate fcc crystal)

    function setlat(nx,ny,nz)              ! generates a 3d fcc crystal of nx*ny*nz unit cells,
                                           ! each containing 4 particles
      a0 = (vol/(nx*ny*nz))**(1/3)         ! a0: unit-cell diameter
      i = 0
      xcm0 = 0
      for 1 <= iz <= 2*nz do
        for 1 <= iy <= 2*ny do
          for 1 <= ix <= 2*nx do
            if (ix+iy+iz)%2 == 0 then
              i = i+1
              x0(i) = a0*ix + 0.5*a0*(iy+iz)%2   ! x-coordinate of particle i
              y0(i) = a0*iy + 0.5*a0*(ix+iz)%2   ! y-coordinate of particle i
              z0(i) = a0*iz + 0.5*a0*(ix+iy)%2   ! z-coordinate of particle i
              xcm0 = xcm0 + x0(i)                ! y and z similar
            endif
          enddo
        enddo
      enddo
      xcm0 = xcm0/npart                    ! x-component of the center of mass; y and z similar
    end function

Specific Comments (for general comments, see p. 7)
1. This algorithm generates a face-centered cubic (fcc) lattice and calculates the position of its center of mass (here only the x-component is shown).
2. Note that, in a periodic system, the center of mass is ill-defined: here we take the center of mass of the particles in the original simulation box.
3. When following the displacement of the center of mass (see Algorithm 19), we compute the average (possibly mass-weighted) displacement of all particles; note that, in that case, we should not force particles to be inside the original simulation box.

Example 19 (Solid-liquid equilibrium of hard spheres). In this Example, we locate the solid-liquid coexistence densities of the hard-sphere model. We determine these densities by equating the chemical potential and the pressure of the two phases. For the liquid phase, we use the equation of state of Speedy [421], which is based on a Padé approximation to simulation data on both the equation of


state and the virial coefficients of hard spheres:

\[ z_{\rm liquid} = \frac{P\beta}{\rho} = 1 + \frac{x + 0.076014\,x^2 + 0.019480\,x^3}{1 - 0.548986\,x + 0.075647\,x^2} . \]

For the solid phase of the hard-sphere model, Speedy proposed the following equation of state [320]:

\[ z_{\rm solid} = \frac{3}{1 - \rho^*} - 0.5921\, \frac{\rho^* - 0.7072}{\rho^* - 0.601} , \tag{9.2.9} \]

where ρ* = σ³ρ/√2. In Fig. 9.2, we compare the predictions of this equation of state for the liquid and solid phases with the results from the computer simulations of Alder and Wainwright [422] and Adams [171]. As can be seen, the empirical equations of state reproduce the simulation data quite well. To calculate the chemical potential of the liquid phase, we integrate the equation of state (see (9.1.1)) starting from the dilute-gas limit. This yields the Helmholtz free energy as a function of the density. The chemical potential then follows from

\[ \beta\mu(\rho) = \frac{\beta G}{N} = \frac{\beta F}{N} + \frac{P}{\rho k_B T} . \]

The free energy per particle of the ideal gas is given by

\[ \beta f^{\rm id}(\rho) = \frac{F^{\rm id}(\rho)}{N k_B T} = \ln(\rho \Lambda^3) - 1 , \]

where Λ is the de Broglie thermal wavelength. In what follows, we shall write βf^id(ρ) = ln ρ − 1.

FIGURE 9.2 Pressure P (left) and chemical potential μ (right) as a function of the density ρ. The solid curves, showing the pressure and chemical potential of the liquid phase, are obtained from the equation of state of Speedy [421]. The dashed curve gives the pressure of the solid phase as calculated from the equation of state of ref. [320]. The open and filled symbols are the results of computer simulations for the liquid [171,422,423] and solid phases [422], respectively. The coexistence densities are indicated with horizontal lines.


That is, we shall work with the usual reduced densities and ignore the additive constant 3 ln(Λ/σ), as it plays no role in the location of phase equilibria for classical systems. Fig. 9.2 compares the chemical potential that follows from the Speedy equation of state with some of the available simulation data (namely, grand-canonical ensemble simulations of [171] and direct calculations of the chemical potential using the Widom test-particle method [423] (see Chapter 8)). These results show that we have an accurate equation of state for the liquid phase and the solid phase. Since we know the absolute free energy of the ideal-gas phase, we can calculate the free energy and hence the chemical potential of the liquid phase. For the solid phase, we can use the equation of state to calculate only free-energy differences; to calculate the absolute free energy, we have to determine the free energy at a particular density. To perform this calculation, we use the lattice-coupling method. We must now select the upper limit of the coupling parameter λ (λ_max) and the values of λ for which we perform the simulation. For sufficiently large values of λ we can calculate ⟨Σ_{i=1}^N (r_i − r_{0,i})²⟩ analytically, using

\[ \left\langle r^2 \right\rangle_{\lambda} = \frac{1}{N} \frac{\partial F(\lambda)}{\partial \lambda} . \]

For the noninteracting Einstein crystal, the mean-squared displacement is given by

\[ \left\langle r^2 \right\rangle_{\lambda} = \frac{3}{2\beta\lambda} . \tag{9.2.10} \]

For a noninteracting Einstein crystal with fixed center of mass, the free energy is given by Eq. (9.2.23), which gives

\[ \left\langle r^2 \right\rangle_{{\rm Ein},\lambda} = \frac{3}{2\beta}\, \frac{N-1}{N}\, \frac{1}{\lambda} . \tag{9.2.11} \]

In [314] an analytical expression is derived for the case of an interacting Einstein crystal, which reads

\[
\left\langle r^2 \right\rangle_{\lambda} = \left\langle r^2 \right\rangle_{{\rm Ein},\lambda} - \frac{1}{1 - \left\langle P^{nn}_{\rm overlap} \right\rangle_{\lambda}}\, \frac{\beta n}{2}\, \frac{1}{2a(2\pi\beta\lambda)^{1/2}} \left\{ \left[ \sigma a - \sigma^2 - 1/(\beta\lambda) \right] e^{-\beta\lambda(a - \sigma)^2/2} + \left[ \sigma a + \sigma^2 - 1/(\beta\lambda) \right] e^{-\beta\lambda(a + \sigma)^2/2} \right\} , \tag{9.2.12}
\]

where a is the separation of two nearest neighbors i and j, a = |r_{0,i} − r_{0,j}|, σ is the hard-core diameter, and n is the number of nearest neighbors (for example, n = 12 for fcc (face-centered cubic) and hcp (hexagonal close-packed) solids, or n = 8 for bcc (body-centered cubic)); ⟨P^{nn}_{overlap}⟩_λ is the probability that two nearest neighbors overlap. This probability is given by

\[
\left\langle P^{nn}_{\rm overlap} \right\rangle_{\lambda} = \frac{\operatorname{erf}\!\left[ (\beta\lambda/2)^{1/2}(\sigma + a) \right] + \operatorname{erf}\!\left[ (\beta\lambda/2)^{1/2}(\sigma - a) \right]}{2} - \frac{e^{-\beta\lambda(\sigma - a)^2/2} - e^{-\beta\lambda(\sigma + a)^2/2}}{(2\pi\beta\lambda)^{1/2}\, a} . \tag{9.2.13}
\]

This equation can also be used to correct the free energy of a noninteracting Einstein crystal (9.2.23):

\[ \frac{\beta F_{\rm Ein}(\lambda)}{N} = \frac{\beta F_{\rm Ein}}{N} + \frac{n}{2} \ln\!\left( 1 - \left\langle P^{nn}_{\rm overlap} \right\rangle_{\lambda} \right) . \tag{9.2.14} \]

We choose λ_max such that, for values of λ larger than this maximum value, ⟨r²⟩_λ obeys the analytical expression. Typically, this means that the probability of overlap of two harmonically bound particles should be considerably less than 1%. The results of these simulations are presented in Fig. 9.3. This figure shows that if we rely only on the analytical results of the noninteracting Einstein crystal, we have to take a value of λ_max ≈ 1000–2000. If we use Eq. (9.2.12) for ⟨r²⟩_λ, λ_max = 500–1000 is sufficient.

FIGURE 9.3 The mean-squared displacement ⟨r²⟩_λ as a function of the coupling parameter λ for a hard-sphere (fcc) solid of 54 particles (6 layers of 3 × 3 close-packed atoms at a density ρ = 1.04). The figure on the left shows the simulation results for low values of λ, the figure on the right for high values. The solid line takes into account nearest-neighbor interactions (9.2.12); the dashed line assumes a noninteracting Einstein crystal (9.2.11). The open symbols are the simulation results.

We should now integrate  λmax

 F = dλ r 2 . λ N 0 In practice, this integration is carried out by numerical quadrature. We, 

therefore, must specify the values of λ for which we are going to compute r 2 . To λ

improve the accuracy of the numerical quadrature, it is convenient to transform to another integration variable:

Free energies of solids Chapter | 9

339

 λmax  G−1 (λmax ) 

 

 F dλ = g(λ) r 2 = d G−1 (λ) g(λ) r 2 , λ λ N g(λ) 0 G−1 (0) where g(λ) is an as-yet arbitrary function of λ and G−1 (λ) is the primitive of the function 1/g(λ). If we can find a function g(λ) such that the integrand,

 g(λ) r 2 , is a slowly varying function, we need fewer function evaluations λ

to arrive at an accurate estimate. To do this, we need to have an idea about

 the behavior of r 2 .

λ  For λ → 0, r 2 → r 2 , which is the mean-squared displacement of an 0

λ

atom around its lattice site in the normal hard-sphere crystal. At high

values  of λ, where the system behaves like an Einstein crystal, we have r 2 → λ

3kB T /(2λ). This leads to the following guess for the functional form of g(λ):

\[ g(\lambda) \approx k_BT/\langle r^2\rangle_\lambda \approx c + \lambda, \]
where $c = k_BT/\langle r^2\rangle_0$. Here, $\langle r^2\rangle_0$ can be estimated from Fig. 9.3. The value of $c$ clearly depends on density (and temperature). For ρ = 1.04, extrapolation to $\lambda\to 0$ gives $\langle r^2\rangle_0 \approx 0.014$, which gives $c = 70$. If we use this function $g(\lambda)$,

the free energy difference is calculated from
\[ \frac{\Delta F}{N} = \int_{\ln c}^{\ln(\lambda_{\max}+c)} d\!\left[\ln(\lambda+c)\right]\,(\lambda+c)\,\langle r^2\rangle_\lambda. \]
For the numerical integration, we use an n-point Gauss-Legendre quadrature [424]. As the integrand is a smooth function, a 10-point quadrature is usually adequate. As discussed in section 9.2.5, the resulting free energy still depends (slightly) on the system size. An example of the system-size dependence of the excess free energy of a hard-sphere crystal is shown in Fig. 9.4 [425]. From this figure, we can estimate the excess free energy of the infinite system to be $\beta f^{\mathrm{ex}} = 5.91889(4)$. This is in good agreement with the estimate of Frenkel and Ladd, $\beta f^{\mathrm{ex}} = 5.9222$ [314]. Once we have one value of the absolute free energy of the solid phase at a given density, we can compute the chemical potential of the solid phase at any other density, using the equation of state of Speedy (see Fig. 9.2). The coexistence densities follow from the condition that the chemical potentials and pressures in the coexisting phases should be equal. Using the value of 5.91889(4) from [425] for the solid at ρ = 1.04086, we arrive at a freezing density $\rho_l = 0.9391$ and a melting density $\rho_s = 1.0376$. At coexistence, the pressure is $P_{\mathrm{coex}} = 11.567$ and the chemical potential is $\mu_{\mathrm{coex}} = 17.071$. In fact, as we shall argue below, the presence of vacancies in the equilibrium crystal lowers the coexistence pressure slightly: $P_{\mathrm{coex}} = 11.564$. These results are in surprisingly good agreement with the original data of Hoover and Ree [307], who obtained an estimate for the solid-liquid coexistence densities $\rho_s = 1.041\pm0.004$ and $\rho_l = 0.943\pm0.004$ at a pressure $11.70\pm0.18$.
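The quadrature in the transformed variable is simple to implement. The sketch below is a schematic illustration, not the code used for the results above: msd_of_lambda stands for a user-supplied routine returning ⟨r²⟩_λ from a simulation at the given λ, and the final lines check the implementation against a case with a known analytic answer.

import numpy as np

def delta_free_energy(msd_of_lambda, lam_max, c, npoints=10):
    """Estimate (F(lam_max) - F(0))/N by Gauss-Legendre quadrature in x = ln(lambda + c)."""
    x_lo, x_hi = np.log(c), np.log(lam_max + c)
    # nodes and weights on [-1, 1], mapped onto [x_lo, x_hi]
    nodes, weights = np.polynomial.legendre.leggauss(npoints)
    x = 0.5 * (x_hi - x_lo) * nodes + 0.5 * (x_hi + x_lo)
    lam = np.exp(x) - c
    integrand = (lam + c) * np.array([msd_of_lambda(l) for l in lam])
    return 0.5 * (x_hi - x_lo) * np.dot(weights, integrand)

# Quick check: for <r^2>_lambda = 1/(lambda + c) the integrand is constant,
# so the result must equal ln((lam_max + c)/c).
approx = delta_free_energy(lambda l: 1.0 / (l + 70.0), lam_max=1000.0, c=70.0)
print(approx, np.log((1000.0 + 70.0) / 70.0))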

FIGURE 9.4 $\beta F^{\mathrm{ex}}/N + \ln(N)/N$ versus $1/N$ for an fcc crystal of hard spheres at a density $\rho\sigma^3 = 1.0409$. The solid line is a linear fit to the data. The coefficient of the $1/N$ term is −6.0(2), and the intercept (i.e., the infinite-system limit of $\beta F^{\mathrm{ex}}/N$) is equal to 5.91889(4).
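The extrapolation shown in Fig. 9.4 amounts to a linear least-squares fit in 1/N. The following sketch illustrates it with invented data points (chosen only to resemble the fit quoted in the caption); in an actual study the array y would contain the measured values of βF^ex/N + ln(N)/N.

import numpy as np

# Hypothetical data: system sizes and the corresponding values of
# beta*F_ex/N + ln(N)/N from fixed-center-of-mass simulations.
N = np.array([256, 500, 1372, 4000])
y = np.array([5.8955, 5.9069, 5.9145, 5.9174])   # illustrative numbers only

slope, intercept = np.polyfit(1.0 / N, y, 1)
print("intercept (N -> infinity limit of beta*F_ex/N):", intercept)
print("coefficient of the 1/N term:", slope)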

The free-energy difference between the FCC and HCP structures for large hard-sphere crystals at melting is very close to zero, but the FCC structure appears to be the more stable phase [303,412,426,427]. For more details, see SI (Case Study 17).

9.2.5 Constraints and finite-size effects

The constraint that the center of mass of the system is fixed eliminates a number of degrees of freedom from the system, and this has an effect on the free energy. Strictly speaking, the change in free energy due to any hard constraint is infinite. However, as we shall always consider differences in free energy, the infinities drop out. The remaining change in the free energy becomes negligible in the thermodynamic limit. However, as simulations are necessarily performed on finite systems, it is important to have an estimate of the magnitude of the finite-size effects. Below, we describe in some detail how the free energy of an unconstrained crystal is computed using simulations of a system with a fixed center of mass. To keep the discussion general, we will consider a d-dimensional crystal system of $N_{\mathrm{mol}}$ molecules composed of a total of $N$ atoms. The partition function for the unconstrained solid is given by
\[ Q = c_N \int d\mathbf{r}^{dN}\,d\mathbf{p}^{dN}\,\exp[-\beta\mathcal{H}(\mathbf{r}_i,\mathbf{p}_i)], \]   (9.2.15)
where $c_N = (h^{dN_{\mathrm{mol}}}\,N_1!N_2!\cdots N_m!)^{-1}$, where $N_1$ denotes the number of indistinguishable particles of type 1, $N_2$ the number of particles of type 2, etc., and $N_1+N_2+\cdots+N_m = N_{\mathrm{mol}}$. In all calculations of phase equilibria between systems that obey classical statistical mechanics, Planck's constant $h$ drops out


of the result. Hence, in what follows, we omit all factors $h$. As discussed in ref. [428], one can write the partition function $Q_{\mathrm{con}}$ of a constrained system as
\[ Q_{\mathrm{con}} = c_N \int d\mathbf{r}^{dN}\,d\mathbf{p}^{dN}\,\exp[-\beta\mathcal{H}(\mathbf{r}_i,\mathbf{p}_i)]\;\delta[\boldsymbol{\sigma}(\mathbf{r})]\;\delta(\mathbf{G}^{-1}\cdot\dot{\boldsymbol{\sigma}}), \]   (9.2.16)
where $\boldsymbol{\sigma}(\mathbf{r})$ and $\dot{\boldsymbol{\sigma}}$ are the constraints and the time derivatives of the constraints, respectively, and
\[ G_{kl} = \sum_{i=1}^{N} \frac{1}{m_i}\,\nabla_{\mathbf{r}_i}\sigma_k\cdot\nabla_{\mathbf{r}_i}\sigma_l. \]   (9.2.17)

In order to constrain the center of mass (CM), we take $\boldsymbol{\sigma}(\mathbf{r}) = \sum_{i=1}^{N}\mu_i\mathbf{r}_i$ and, thus, $\dot{\boldsymbol{\sigma}} = \sum_{i=1}^{N}(\mu_i/m_i)\mathbf{p}_i$, where $\mu_i \equiv m_i/\sum_i m_i$. To simplify matters, we have assumed that there are no additional internal molecular constraints, such as fixed bond lengths or bond angles. We first consider the case of an Einstein crystal, which has a potential energy function given by
\[ U_{\mathrm{Ein}} = \frac{1}{2}\sum_{i=1}^{N}\alpha_i\left(\mathbf{r}_i-\mathbf{r}_{0,i}\right)^2, \]

where $\mathbf{r}_{0,i}$ are the equilibrium lattice positions. Note that the particles in a crystal are associated with specific lattice points and therefore behave as if they are distinguishable; thus, $c_N = 1$ (as we omit the factor $1/h^{d(N-1)}$). It is easy to show that
\[ Q^{\mathrm{CM}}_{\mathrm{Ein}} = Z^{\mathrm{CM}}_{\mathrm{Ein}}\,P^{\mathrm{CM}}_{\mathrm{Ein}}, \]   (9.2.18)

with
\[ Z^{\mathrm{CM}}_{\mathrm{Ein}} = \int d\mathbf{r}^{dN}\,\prod_{i=1}^{N}\exp\!\left[-(\beta\alpha_i/2)\,r_i^2\right]\;\delta\!\left(\sum_{i=1}^{N}\mu_i\mathbf{r}_i\right) \]   (9.2.19)

and
\[ P^{\mathrm{CM}}_{\mathrm{Ein}} = \int d\mathbf{p}^{dN}\,\prod_{i=1}^{N}\exp\!\left[-(\beta/2m_i)\,p_i^2\right]\;\delta\!\left(\sum_{i=1}^{N}\mathbf{p}_i\right) = \left(\frac{\beta}{2\pi M}\right)^{d/2}\prod_{i=1}^{N}\left(\frac{2\pi m_i}{\beta}\right)^{d/2} = \left(\frac{\beta}{2\pi M}\right)^{d/2} P_{\mathrm{Ein}}, \]   (9.2.20)


where $M = \sum_i m_i$ and $Z_{\mathrm{Ein}}$ and $P_{\mathrm{Ein}}$ are the configurational and kinetic contributions to $Q_{\mathrm{Ein}}$, the partition function of the unconstrained Einstein crystal. It then follows that
\[ Q^{\mathrm{CM}}_{\mathrm{Ein}} = \left(\frac{\beta^2\sum_i m_i}{4\pi^2\sum_i m_i^2/\alpha_i}\right)^{d/2} Q_{\mathrm{Ein}}. \]   (9.2.21)
In fact, this expression can be further simplified if we make the specific choice $\alpha_i = \alpha m_i$. In that case,
\[ Q^{\mathrm{CM}}_{\mathrm{Ein}} = \left(\frac{\beta^2\alpha}{4\pi^2}\right)^{d/2} Q_{\mathrm{Ein}}. \]   (9.2.22)
There is a good reason for making this choice for $\alpha_i$: in this case, the net force on the center of mass of the crystal due to the harmonic springs is always zero, provided that it is zero when all particles are on their lattice sites. This makes it easier to perform MD simulations on Einstein crystals with a fixed center of mass. The free-energy difference between the constrained and the unconstrained Einstein crystals is then
\[ F^{\mathrm{CM}}_{\mathrm{Ein}} = F_{\mathrm{Ein}} - k_BT\,\ln\!\left(\frac{\beta^2\alpha}{4\pi^2}\right)^{d/2}. \]   (9.2.23)

For an arbitrary crystalline system in the absence of external forces, the partition function subject to the CM constraint is given by
\[ Q^{\mathrm{CM}} = Z^{\mathrm{CM}}\left(\frac{\beta}{2\pi M}\right)^{d/2}\prod_{i=1}^{N}\left(\frac{2\pi m_i}{\beta}\right)^{d/2}, \]   (9.2.24)

with
\[ Z^{\mathrm{CM}} = \int d\mathbf{r}^{dN}\,\exp[-\beta U(\mathbf{r}_i)]\;\delta\!\left(\sum_{i=1}^{N}\mu_i\mathbf{r}_i\right), \]   (9.2.25)

while the partition function of the unconstrained crystal is given by
\[ Q = Z\prod_{i=1}^{N}\left(\frac{2\pi m_i}{\beta}\right)^{d/2}, \]   (9.2.26)

with
\[ Z = \int d\mathbf{r}^{dN}\,\exp[-\beta U(\mathbf{r}_i)]. \]   (9.2.27)

Note that, as far as the kinetic part of the partition function is concerned, the effect of the fixed center-of-mass constraint is the same for an Einstein crystal as for an arbitrary “realistic” crystal. Using Eqs. (9.2.24) and (9.2.26), the


Helmholtz free energy difference between the constrained and unconstrained crystal is given by
\[ F^{\mathrm{CM}} = F - k_BT\,\ln(Z^{\mathrm{CM}}/Z) - k_BT\,\ln\!\left(\frac{\beta}{2\pi M}\right)^{d/2}. \]   (9.2.28)
We note that
\[ \frac{Z^{\mathrm{CM}}}{Z} = \frac{\int d\mathbf{r}^{dN}\,\exp[-\beta U(\mathbf{r}_i)]\,\delta\!\left(\sum_i\mu_i\mathbf{r}_i\right)}{\int d\mathbf{r}^{dN}\,\exp[-\beta U(\mathbf{r}_i)]} = \left\langle\delta\!\left(\sum_i\mu_i\mathbf{r}_i\right)\right\rangle = \mathcal{P}(\mathbf{r}_{\mathrm{CM}}=0), \]   (9.2.29)

where $\mathbf{r}_{\mathrm{CM}} \equiv \sum_i\mu_i\mathbf{r}_i$, and $\mathcal{P}(\mathbf{r}_{\mathrm{CM}})$ is the probability distribution function of the center of mass, $\mathbf{r}_{\mathrm{CM}}$. To calculate $\mathcal{P}(\mathbf{r}_{\mathrm{CM}})$ we exploit the fact that the center of mass of the lattice is distributed evenly over a volume equal to that of the Wigner-Seitz cell⁷ of the lattice. The reason the integration over the center-of-mass coordinates is limited to a single Wigner-Seitz cell is that, if the center of mass were to move to another Wigner-Seitz cell, we would have created a copy of the crystal that simply corresponds to another permutation of the particles. Such configurations are not to be counted as independent. It then follows that $\mathcal{P}(\mathbf{r}_{\mathrm{CM}}) = 1/V_{\mathrm{WS}} = N_{\mathrm{WS}}/V$, where $V_{\mathrm{WS}}$ is the volume of a Wigner-Seitz cell, and $N_{\mathrm{WS}}$ is the number of such cells in the system. Thus, $Z^{\mathrm{CM}}/Z = \mathcal{P}(\mathbf{r}_{\mathrm{CM}}=0) = N_{\mathrm{WS}}/V$. In the case of one molecule per cell, this implies $Z^{\mathrm{CM}}/Z = N_{\mathrm{mol}}/V$, where $N_{\mathrm{mol}}$ is the number of molecules in the system. In numerical free energy calculations, the actual simulation involves computing the free energy difference between the Einstein crystal and the normal crystal, both with constrained centers of mass. We denote this free energy difference by $\Delta F^{\mathrm{CM}} \equiv F^{\mathrm{CM}} - F^{\mathrm{CM}}_{\mathrm{Ein}}$.

The free energy per particle of the unconstrained crystal (in units of $k_BT$) is then
\[ \frac{\beta F}{N} = \frac{\beta\Delta F^{\mathrm{CM}}}{N} + \frac{\beta F_{\mathrm{Ein}}}{N} + \frac{\ln(N_{\mathrm{mol}}/V)}{N} - \frac{d}{2N}\,\ln\!\left(\frac{\beta\alpha M}{2\pi}\right). \]   (9.2.30)

If we consider the special case of a system of identical atomic particles ($m_i = m$ and $N = N_{\mathrm{mol}}$), we obtain the following:
\[ \frac{\beta F}{N} = \frac{\beta\Delta F^{\mathrm{CM}}}{N} + \frac{\beta F_{\mathrm{Ein}}}{N} + \frac{\ln\rho}{N} - \frac{d}{2N}\,\ln N - \frac{d}{2N}\,\ln\!\left(\frac{\beta\alpha m}{2\pi}\right). \]   (9.2.31)

⁷ A Wigner-Seitz cell is constructed by drawing lines that connect a given lattice point to all nearby lattice points. At the midpoints of these lines, surfaces normal to these lines are constructed. The smallest enclosed volume defines the Wigner-Seitz cell.
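For bookkeeping purposes, Eq. (9.2.31) can be wrapped in a small helper that assembles the free energy per particle from the two simulation inputs. The function below is only a schematic sketch; the numerical arguments in the example call are hypothetical.

import math

def beta_f_per_particle(beta_dF_CM_per_N, beta_F_Ein_per_N,
                        rho, N, alpha, m=1.0, beta=1.0, d=3):
    # Eq. (9.2.31): free energy per particle of the unconstrained crystal,
    # assembled from the fixed-center-of-mass result (beta_dF_CM_per_N)
    # and the Einstein-crystal reference free energy (beta_F_Ein_per_N).
    return (beta_dF_CM_per_N
            + beta_F_Ein_per_N
            + math.log(rho) / N
            - d / (2.0 * N) * math.log(N)
            - d / (2.0 * N) * math.log(beta * alpha * m / (2.0 * math.pi)))

# Hypothetical input values, for illustration only:
print(beta_f_per_particle(beta_dF_CM_per_N=-1.23, beta_F_Ein_per_N=6.78,
                          rho=1.04086, N=54, alpha=2000.0))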


In practice, we usually calculate the excess free energy, $F^{\mathrm{ex}} \equiv F - F^{\mathrm{id}}$, where $F^{\mathrm{id}}$ is the ideal gas free energy. Let us, therefore, compute the finite-size corrections to the latter quantity. Given that
\[ \beta F^{\mathrm{id}}/N = -\ln\!\left[V^N(2\pi m/\beta)^{dN/2}/N!\right]/N, \]
we find that
\[ \frac{\beta F^{\mathrm{ex}}}{N} = \frac{\beta\Delta F^{\mathrm{CM}}}{N} + \frac{\beta F_{\mathrm{Ein}}}{N} + \frac{\ln\rho}{N} - \frac{d}{2N}\,\ln\!\left(\frac{\beta\alpha m}{2\pi}\right) - \frac{d+1}{2}\,\frac{\ln N}{N} - \ln\rho + 1 - \frac{\ln 2\pi}{2N}, \]   (9.2.32)

where we have used the Stirling approximation: ln N ! ≈ N ln N − N + (ln 2πN)/2. Hoover has analyzed the system-size dependence of the entropy of a classical harmonic crystal with periodic boundaries [429]. In this study, it was established that the leading finite-size correction to the free energy per particle of a harmonic crystal is equal to kB T ln N/N . Assuming that this result can be generalized to arbitrary crystals, we should expect that βF ex /N + (d − 1) ln N/(2N ) will scale as N −1 , plus correction terms of order O(1/N 2 ). Fig. 9.4 shows the N-dependence of βF ex /N + (d − 1) ln N/(2N ) for three-dimensional hard spheres. The figure clearly suggests that the remaining system-size dependence scales as 1/N. This is a useful result because it provides us with a procedure to extrapolate free energy calculations for a finite system to the limit N → ∞. For more details, see ref. [425]. Illustration 12 (FCC or HCP?). Hard-sphere crystals can occur in different crystal phases. The best known among these are the Face Centered Cubic (FCC) and Hexagonal Close Packed (HCP) structures. It is not easy to determine which phase is thermodynamically most stable. The reason is that the free energy differences between the various structures are on the order of 10−3 kB T per particle, or less. As a consequence, the earliest numerical studies aimed at computing this free energy difference [314] were not conclusive. Subsequent studies [303,412] showed conclusively that the fcc structure is the most stable. While one of the latter simulations used the Einstein-crystal method of ref. [314], the others were based on a different approach. Here, we briefly discuss the so-called lattice-switch Monte Carlo method of Bruce et al. [412]. A close-packed crystal consists of hexagonally close-packed two-dimensional planes that are stacked up in the vertical direction. Assume that we construct the crystal by stacking planes. For every new plane, there are two distinct possibilities of stacking it on the previous plane in such a way that


all the atoms fit in the triangular holes between the atoms of the previous plane. Let us denote these two positions of the new plane by B and C, and the position of the original plane by A. With this notation, the FCC stacking obeys the following sequence · · · ABCABCABC· · · , while the HCP structure is characterized by · · · ABABABA· · · . In addition, many hybrid close-packed structures are possible, as long as we never stack two identical planes on top of one another (i.e., BAAB is forbidden). At any given instant, the atoms in a layer are not exactly on a lattice point. We can therefore write ri = Ri (α) + ui , where Ri (α) is the ideal reference lattice position of particle i in structure α, where α labels the crystal structure (e.g., FCC or HCP). We can now perform a Monte Carlo simulation where, in addition to the usual particle displacement moves, we also attempt moves that do not affect the displacement vectors, ui , but that switch the reference lattice, Ri (α), from FCC to HCP. In principle, the free energy difference between these two structures would follow directly from the relative probabilities of finding the two structures in such a Monte Carlo simulation:   P (fcc) . Fhcp − Ffcc = kB T ln P (hcp) However, in practice, such a lattice switch has a very low acceptance probability. The usual solution for such a problem is to decompose the large trial move into many small steps, each of which has a reasonable acceptance probability. The lattice-switch method of Bruce et al. employs the multi-canonical method of Berg and Neuhaus [350]. This method is a version of the umbrellasampling scheme described in section 8.6.6. The first step in this procedure is to define a convenient “order parameter” that connects the two states. To this end, Bruce et al. defined an overlap order parameter M: M(uN ) = M(uN , fcc) − M(uN , hcp), where M(uN , α) is the number of pairs of hard spheres that overlap for configuration uN if the α lattice is used as a reference. For example, M(uN , hcp) is zero for a set of displacement vectors, uN , that do not yield a single overlap if we choose an HCP reference lattice. Of particular interest are those configurations for which M(uN ) = 0, since for these configurations lattice switches are always accepted. Let us define the biased distribution    P (uN , α|{η}) ∝ P (uN , α) exp η M(uN ) , where P (uN , α) is the unweighted distribution and η[M(uN )] are the weights that have to be set. These weights should be chosen such that all relevant values of M are sampled. From a given simulation one can make an estimate


of these weights and these are then subsequently used and updated in the next (longer) simulation until the desired accuracy has been achieved. Bruce et al. [412] used this method to compute the free energy difference between the HCP and the FCC structures with a statistical error of 10−5 kB T . These calculations of Bruce et al. gave further support for the observation that the FCC structure is more stable than the HCP structure. Mau and Huse [430] showed that all hybrids of FCC and HCP stacking have a free energy higher than that of a pure FCC structure.
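If the multicanonical weights η[M] are removed by reweighting, the free-energy difference follows from the relative unbiased probabilities of the two reference lattices, F_hcp − F_fcc = k_BT ln[P(fcc)/P(hcp)]. The sketch below shows this post-processing step only; the data format and variable names are ours, not those of ref. [412].

import math
from collections import defaultdict

def delta_f_lattice_switch(samples, eta, kT=1.0):
    """samples: iterable of (lattice_label, M) pairs collected in the biased run,
    with lattice_label either 'fcc' or 'hcp'; eta: dict mapping M to the weight
    eta[M] used in the biased distribution P_biased ~ P * exp(eta[M])."""
    unbiased = defaultdict(float)
    for lattice, M in samples:
        # remove the multicanonical bias: each sample carries weight exp(-eta[M])
        unbiased[lattice] += math.exp(-eta[M])
    return kT * math.log(unbiased['fcc'] / unbiased['hcp'])

# Tiny illustrative call (invented numbers):
samples = [('fcc', 0), ('fcc', 0), ('hcp', 0), ('fcc', 2), ('hcp', -1)]
eta = {-1: 0.5, 0: 0.0, 2: 1.0}
print(delta_f_lattice_switch(samples, eta))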

9.3 Vacancies and interstitials

Thus far, we have described crystals as if they were free of imperfections. However, any real crystal will contain point defects, such as vacancies and interstitials. In addition, one may find extended defects such as dislocations and grain boundaries. In equilibrium, point defects are the most common. Clearly, to have a realistic description of a crystal, it is important to have an expression for the equilibrium concentration of vacancies and interstitials, and their contribution to the free energy. This is not completely trivial, as the concept of a point defect is inextricably linked to that of a lattice site. And lattice sites lose their meaning in a disordered state. So, we should first address the question: when is it permissible to count states with a different number of lattice sites as distinct? The answer is, of course, that this is only true if these different states can be assigned to distinct volumes in phase space. This is possible if we impose that every particle in a crystal is confined to its Wigner-Seitz cell. In three-dimensional crystals, this constraint on the positions of all particles has little effect on the free energy (in contrast, in a liquid it is not at all permissible). Below, we derive an expression for the vacancy concentration in a crystal, following the approach first given by Bennett and Alder [431].

9.3.1 Defect free energies

The equilibrium concentration of vacancies in a crystal is usually very low. We shall therefore make the approximation that vacancies do not interact. This assumption is not as reasonable as it seems, as the interaction of vacancies through their stress fields is quite long range. The assumption that vacancies are ideal implies that $F(n)$, the Helmholtz free energy of a crystal with $n$ vacancies at specified positions, can be written as
\[ F(n) = F(0) - nf_1 = Mf_0 - nf_1, \]   (9.3.1)

where M is the number of lattice sites of the crystal, f0 is the free energy per particle in the defect-free crystal, and −f1 is the change in free energy of a crystal due to the creation of a single vacancy at a specific lattice point. Let us now consider the effect of vacancies on the Gibbs free energy of a system


of N particles at constant pressure and temperature. First, we define $g^{\mathrm{vac}}$ as the variation in the Gibbs free energy of a crystal of $M$ particles due to the introduction of a single vacancy at a specific lattice position:
\[ g^{\mathrm{vac}} \equiv G_{M+1,1}(N,P,T) - G_{M,0}(N,P,T) = F_{M+1,1}(V_{M+1,1}) - F_{M,0}(V_{M,0}) + P\,(V_{M+1,1}-V_{M,0}). \]   (9.3.2)

In the above equation, the first subscript refers to the number of lattice sites in the system, and the second subscript to the number of vacancies. Clearly, the number of particles $N$ is equal to the difference between the first and second subscripts. The next step is to write
\[ F_{M+1,1}(V_{M+1,1}) - F_{M,0}(V_{M,0}) = \left[F_{M+1,1}(V_{M+1,1}) - F_{M+1,1}(V_{M+1,0})\right] + \left[F_{M+1,1}(V_{M+1,0}) - F_{M+1,0}(V_{M+1,0})\right] + \left[F_{M+1,0}(V_{M+1,0}) - F_{M,0}(V_{M,0})\right]. \]   (9.3.3)
The first line on the right-hand side of this equation is equal to $-P\Delta v$, where $\Delta v \equiv v^{\mathrm{vac}} - v^{\mathrm{part}}$ is the difference in the volume of the crystal as one particle is replaced by a vacancy, at constant pressure and constant number of lattice sites. The second line on the right-hand side is simply equal to $-f_1$, defined in Eq. (9.3.1):
\[ -f_1 \equiv F_{M+1,1}(V_{M+1,0}) - F_{M+1,0}(V_{M+1,0}). \]
To rewrite the third line on the right-hand side of Eq. (9.3.3), we note that the Helmholtz free energy is extensive. We express this by introducing $f_0$, the Helmholtz free energy per particle of a defect-free crystal, and writing $F_{M,0}(V_{M,0}) = Mf_0$. Obviously, $F_{M+1,0}(V_{M+1,0}) - F_{M,0}(V_{M,0}) = f_0$. Combining these three terms, we find that
\[ F_{M+1,1}(V_{M+1,1}) - F_{M,0}(V_{M,0}) = -P\Delta v - f_1 + f_0. \]   (9.3.4)

The volume is also an extensive quantity; hence
\[ V_{M,0} = \frac{M}{M+1}\,V_{M+1,0}. \]
It then follows that
\[ P\,(V_{M+1,1}-V_{M,0}) = P\,(V_{M+1,1}-V_{M+1,0}+V_{M+1,0}-V_{M,0}) = P\,(\Delta v + V/N). \]
Hence, the Gibbs free energy difference associated with the formation of a vacancy at a specific lattice site, Eq. (9.3.2), is then


\[ g^{\mathrm{vac}} = -P\Delta v - f_1 + f_0 + P\,(\Delta v + V/N) = P\,V/N - f_1 + f_0 = (P/\rho + f_0) - f_1 = \mu_0 - f_1, \]
where we have defined $\mu_0 \equiv P/\rho + f_0$. Now we have to include the entropic contribution due to the distribution of $n$ vacancies over $M$ lattice sites. The total Gibbs free energy then becomes
\[ G = G_0(N) + n\,g^{\mathrm{vac}} + Mk_BT\left[\frac{n}{M}\ln\frac{n}{M} + \left(1-\frac{n}{M}\right)\ln\left(1-\frac{n}{M}\right)\right] \approx G_0(N) + n\,g^{\mathrm{vac}} + nk_BT\,\ln\frac{n}{M} - nk_BT. \]
If we minimize the Gibbs free energy with respect to $n$, we find that
\[ \langle n\rangle \approx M\exp\!\left(-\beta g^{\mathrm{vac}}\right), \]
where we have ignored a small correction due to the variation of $\ln M$ with $n$. If we insert this value in the expression for the total Gibbs free energy, we find that
\[ G = G_0(N) + \langle n\rangle g^{\mathrm{vac}} - \langle n\rangle g^{\mathrm{vac}} - \langle n\rangle k_BT = G_0 - \langle n\rangle k_BT. \]
The total number of particles is $M - n$. Hence the Gibbs free energy per particle is
\[ \mu = \frac{G_0 - \langle n\rangle k_BT}{N} = \mu_0 - \frac{\langle n\rangle k_BT}{N} \approx \mu_0 - x_v k_BT, \]   (9.3.5)

where we have defined $x_v \equiv \langle n\rangle/N$. Hence the change in the chemical potential of the solid due to the presence of vacancies is
\[ \Delta\mu = -x_v k_BT, \]   (9.3.6)
from which it follows that the change in pressure of the solid at fixed chemical potential is equal to
\[ \Delta P = x_v\rho_s k_BT. \]   (9.3.7)

9.3.1.1 Vacancies

Numerically, it is straightforward to compute the equilibrium vacancy concentration. The central quantity that needs to be computed is $-f_1$, the change in free energy of a crystal due to the creation of a single vacancy at a specific lattice point. In fact, it is more convenient to consider $+f_1$, the change in free energy due to the removal of a vacancy at a specific lattice point. This quantity can be computed in several ways. For instance, we could use a particle-insertion


method. We start with a crystal containing one single vacancy and attempt a trial insertion in the Wigner-Seitz cell surrounding that vacancy. Then $f_1$ is given by
\[ f_1 = -k_BT\,\ln\!\left(\frac{V_{\mathrm{WS}}\,\langle\exp(-\beta\Delta U)\rangle}{\Lambda^d}\right), \]   (9.3.8)
where $V_{\mathrm{WS}}$ is the volume of the Wigner-Seitz cell, and $\Delta U$ is the change in potential energy associated with the insertion of a trial particle. For hard particles,
\[ f_1 = -k_BT\,\ln\!\left(\frac{V_{\mathrm{WS}}\,P_{\mathrm{acc}}(V_{\mathrm{WS}})}{\Lambda^d}\right), \]
where $P_{\mathrm{acc}}(V_{\mathrm{WS}})$ is the probability that the trial insertion in the Wigner-Seitz cell will be accepted. As most of the Wigner-Seitz cell is not accessible, it is more efficient to attempt insertion in a subvolume (typically on the order of the cell volume in a lattice-gas model of the solid). However, then we should also consider the reverse move: the removal of a particle from a subvolume $v$ of the Wigner-Seitz cell, in a crystal without vacancies. The only thing we need to compute in this case is $P_{\mathrm{rem}}(v)$, the probability that a particle happens to be inside this volume. The expression for $f_1$ is then
\[ f_1 = -k_BT\,\ln\!\left(\frac{v\,P_{\mathrm{acc}}(v)}{P_{\mathrm{rem}}(v)\,\Lambda^d}\right). \]
Of course, in the final expression for the vacancy concentration, the factor $\Lambda^d$ drops out (as it should), because it is cancelled by the same term in the ideal part of the chemical potential. A direct calculation of the vacancy concentration [431,432] suggests that this concentration in a hard-sphere solid near coexistence is approximately $2.6\times10^{-4}$. Let us assume that the defect-free crystal is in equilibrium with the liquid at a pressure $P$ and chemical potential $\mu$. Then it is easy to verify that the shift in the coexistence pressure due to the presence of vacancies is
\[ \delta P_{\mathrm{coex}} = \frac{-x(0)\,k_BT}{v_l - v_s}, \]
where $v_l$ ($v_s$) is the molar volume of the liquid (solid). The corresponding shift in the chemical potential at coexistence is
\[ \delta\mu_{\mathrm{coex}} = \frac{\delta P_{\mathrm{coex}}}{\rho_l}. \]
Inserting the numerical estimate $x(0)\approx 2.6\times10^{-4}$, the decrease in the coexistence pressure due to vacancies is $\delta P_{\mathrm{coex}}\approx -2.57\times10^{-3}$. The corresponding shift in the chemical potential at coexistence is $\delta\mu_{\mathrm{coex}} = -2.74\times10^{-3}$. Note that these shifts are noticeable when compared to the accuracy of absolute free-energy calculations of the crystalline solid.
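These numbers follow directly from the two formulas above. The short check below reproduces them from the vacancy concentration and the coexistence densities quoted earlier (reduced units with k_BT = 1).

x_vac = 2.6e-4                    # equilibrium vacancy concentration near coexistence
rho_l, rho_s = 0.9391, 1.0376     # coexistence densities of liquid and solid
v_l, v_s = 1.0 / rho_l, 1.0 / rho_s

dP_coex = -x_vac / (v_l - v_s)    # shift of the coexistence pressure
dmu_coex = dP_coex / rho_l        # corresponding shift of the chemical potential
print(dP_coex, dmu_coex)          # approximately -2.57e-3 and -2.74e-3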


9.3.1.2 Interstitials

Thus far, we have ignored interstitials. However, it is not a priori obvious that these can be ignored. The only new ingredient in the calculation of the interstitial concentration is the determination of $f_I$. This is best done by thermodynamic integration. To this end, we first simulate a crystal with one interstitial. We then determine the excursions of the interstitial from its average position. Next, we define a volume $v_0$ such that the interstitial is (with overwhelming probability) inside this volume. The probability that a point particle inserted at random in a Wigner-Seitz cell will be inside this volume is
\[ P_{\mathrm{acc}} = \frac{v_0}{V_{\mathrm{WS}}}. \]   (9.3.9)

Next, we “grow” the particle to the size of the remaining spheres. This will require a reversible work $w$. The latter quantity can easily be calculated, because the simulation yields the pressure exerted on the surface of this sphere. The total free energy change associated with the addition of an interstitial in a given octahedral hole is then
\[ f_I = -k_BT\,\ln\!\left(\frac{P_{\mathrm{acc}}\,V_{\mathrm{WS}}}{\Lambda^3}\right) + w \]   (9.3.10)
and
\[ x_I = \exp\left\{-\beta\left[w - k_BT\,\ln\!\left(\frac{P_{\mathrm{acc}}\,V_{\mathrm{WS}}}{\Lambda^3}\right) - \mu\right]\right\}. \]   (9.3.11)
As before, the $\Lambda^3$ term drops out of the final result (as it should). For more details, see refs. [433–437].

Chapter 10

Free energy of chain molecules

In Chapter 8, we introduced the test-particle insertion scheme as a powerful method for determining chemical potentials. However, this method fails when the Boltzmann factor associated with the trial insertion becomes very small. One consequence is that the simple particle insertion method is ill-suited for computing the chemical potential of larger molecules, except at very low densities. The reason is that, to a first approximation, the probability of successful insertion of a molecule that excludes a volume $v_{\mathrm{excl}}$ to the solvent decays exponentially with the solvent density $\rho_S$ and the excluded volume: $P_{\mathrm{acc}}\sim\exp(-v_{\mathrm{excl}}\rho_S)$. Fortunately, it is possible to overcome this problem, at least partially, by performing non-random sampling. Here, we discuss several of the techniques that have been proposed to compute the chemical potential of chain molecules, for which the problem of random insertion is particularly severe. The methods described can, however, be applied to any composite object that can be inserted in stages, provided that, at every stage, there is some choice about where to insert the next unit. Many approaches have been proposed to improve the efficiency of the original Widom scheme, of which we describe three that are representative of a wider class of algorithms. The most straightforward of these techniques is a thermodynamic integration scheme. Next, we discuss a method based on (generalizations of) the Rosenbluth algorithm for generating polymer conformations [438]. And, finally, we mention a recursive algorithm.

10.1 Chemical potential as reversible work

The excess chemical potential of a (chain) molecule is simply the reversible work needed to add such a molecule to a liquid in which N other (possibly identical) molecules are already present. If we choose to break up the insertion of the molecule into a number of steps, then clearly, the reversible work needed to insert the whole molecule is equal to the sum of the contributions of the substeps. At this stage, we are still free to choose the elementary steps, just as we are free to choose whatever reversible path we wish when performing thermodynamic integration. One obvious possibility is to start with an ideal (noninteracting) chain molecule and then slowly switch on the interaction of this molecule with the surrounding particles (and, if necessary, also the nonbonded intramolecular interactions). This could be done in the way described in section 8.4.1. In fact, this approach was followed by Müller and Paul [439], who performed a simulation


in which the polymer interaction is switched on gradually. Although this simulation could have been performed with straightforward thermodynamic integration, a multiple-histogram method (see section 8.6.10) was used instead, but this does not change the overall nature of the calculation. As stated before, the advantage of thermodynamic integration (and related techniques) is that it is robust. The disadvantage is that it is no longer possible to measure the excess chemical potential in a single simulation. A closely related method for measuring the chemical potential of a chain molecule was proposed by Kumar et al. [440,441]. In this scheme, the chain molecule is built up monomer by monomer. The method of Kumar et al. resembles the gradual insertion scheme for measuring excess chemical potentials that had been proposed by Mon and Griffiths [442]. The reversible work involved in the intermediate steps is measured using the Widom method; that is, the difference in excess free energy of a chain of length $\ell$ and $\ell+1$ is measured by computing $\Delta U(\ell\to\ell+1)$, the change in potential energy associated with the addition of the $(\ell+1)$th monomer. The change in free energy is then given by
\[ \Delta F_{\mathrm{ex}}(\ell\to\ell+1) \equiv \mu_{\mathrm{ex}}^{\mathrm{incr}}(\ell\to\ell+1) = -k_BT\,\ln\left\langle\exp[-\beta\Delta U(\ell\to\ell+1)]\right\rangle. \]   (10.1.1)

This equation defines the incremental excess chemical potential $\mu_{\mathrm{ex}}^{\mathrm{incr}}(\ell\to\ell+1)$. The excess chemical potential of the complete chain molecule is simply the sum of the individual incremental excess chemical potentials. As the latter contributions are measured using the Widom method, the scheme of Kumar et al. is referred to as the modified Widom method. This method is subject to the same limitations as the original Widom method (i.e., the insertion probability of the individual monomers should be appreciable). In this respect, it is less general than thermodynamic integration. As with the multiple-histogram method used by Müller and Paul [439], the computation of the excess chemical potential may require many individual simulations [441,443].

10.2 Rosenbluth sampling

Several proposals for measuring the chemical potential of a chain molecule in a single simulation have been made. Harris and Rice [444] and Siepmann [445] showed how to compute the chemical potential of chain molecules with discrete conformations using an algorithm to generate polymer conformations due to Rosenbluth and Rosenbluth [438]. A generalization to continuously deformable molecules was proposed by Frenkel et al. [446,447] and by de Pablo et al. [448]. As the extension of the sampling scheme from molecules with discrete conformations to continuously deformable molecules is nontrivial, we shall discuss the two cases separately. The approach followed here is closely related to the configurational-bias Monte Carlo scheme described in section 12.2.1. However, we have attempted to make the presentation self-contained.


10.2.1 Macromolecules with discrete conformations

It is instructive to recall how we compute $\mu_{\mathrm{ex}}$ of a chain molecule with the Widom technique. To this end, we introduce the following notation: the position of the first segment of the chain molecule is denoted by $\mathbf{q}$ and the conformation of the molecule as a whole is described by $\Gamma$. The configurational part of the partition function of a system of chain molecules can be written as¹
\[ Q_{\mathrm{chain}}(N,V,T) = \frac{1}{N!}\int d\mathbf{q}^N \sum_{\Gamma_1,\cdots,\Gamma_N}\exp[-\beta U(\mathbf{q}^N,\Gamma^N)]. \]   (10.2.1)

The excess chemical potential of a chain molecule is obtained by considering the ratio Q(N + 1, V , T ) , Q(N, V , T )Qnon−interacting (1, V , T ) where the numerator is the (configurational part of) the partition function of a system of N + 1 interacting chain molecules while the denominator is the partition function for a system consisting of N interacting chains and one chain that does not interact with the others. The latter chain plays the role of the ideal gas molecule (see section 8.5.1). Note, however, that although this molecule does not interact with any of the other molecules, it does interact with itself through both bonded and nonbonded interactions. As explained in section 6.5.1, the fact that we do not know the configurational part of the partition function of an isolated self-avoiding chain a priori is unimportant if we work in terms of the fugacity. However, if for some reason we wish to determine the absolute chemical potential of the chain molecule, it is better to use another reference state, namely that of the isolated non-self-avoiding chain (i.e., a molecule in which all nonbonded interactions have been switched off), because for such molecules we can compute the intramolecular part of the partition function analytically. But we stress that, if used consistently, the choice of the reference system makes no difference in the computation of any observable property. Here, we start with the description of the case of a non-self-avoiding reference state, simply because it is easier to explain. We subsequently consider the case of intramolecular interactions. Let us consider a lattice polymer that consists of  segments. Starting from segment 1, we can add segment 2 in k2 equivalent directions, and so on. For instance, for a polymer on a simple cubic lattice, there are six possible directions of the first segment and, if we choose to exclude conformations where bonds can reverse, five for all subsequentones. Clearly, the total number of non-selfavoiding conformations is id = i=2 ki . For convenience, we have assumed that, for a given i, all ki directions are equally likely (i.e., we ignore gauchetrans potential energy differences). Moreover, we assume for convenience that 1 We assume that there are no hard constraints on the intramolecular degrees of freedom.


all $k_i$ are the same, which means that we even allow the ideal chain to retrace its steps. These limitations are not essential, but they simplify the notation (though not the computational efficiency). Hence, for the simple model that we consider, $\Omega_{\mathrm{id}} = k^{\ell-1}$. Using this ideal chain as our reference system, the expression for the excess chemical potential becomes
\[ \beta\mu_{\mathrm{ex}} = -\ln\left[\frac{Q_{\mathrm{chain}}(N+1,V,T)}{Q(N,V,T)\,Q_{\mathrm{ideal}}(1,V,T)}\right] = -\ln\left\langle\exp[-\beta U(\mathbf{q}^N,\Gamma^N;\mathbf{q}_{N+1},\Gamma_{N+1})]\right\rangle, \]   (10.2.2)
where $U$ denotes the interaction of the test chain with the $N$ chains already present in the system and with itself, while $\langle\cdots\rangle$ indicates averaging over all starting positions and all ideal chain conformations of a randomly inserted chain. The problem with the Widom approach to Eq. (10.2.2) is that almost all randomly inserted ideal chain conformations will overlap either with particles already present in the system or internally. The most important contributions to $\mu_{\mathrm{ex}}$ will come from the extremely rare cases where the trial chain happens to be in just the right conformation to fit into the available space in the fluid. Clearly, it would be desirable if we could restrict our sampling to those conformations that satisfy this condition. If we do that, we introduce a bias in our computation of the insertion probability, and we must somehow correct for that bias. The Rosenbluth approach used in [444,445] consists of two steps: in the first step a chain conformation is generated with a bias that ensures that “acceptable” conformations are created with a high probability. The next step corrects for this bias by multiplying it by a weight factor. A scheme that generates acceptable chain conformations with a high probability was developed by Rosenbluth and Rosenbluth in the early 1950s [438]. In the Rosenbluth scheme, a conformation of a chain molecule is constructed segment by segment. For every segment, we have a choice of k possible directions. In the Rosenbluth scheme, this choice is not random but favors the direction with the largest Boltzmann factor. To be more specific, the following scheme is used to generate a conformation of one polymer with $\ell$ monomers:

1. The first monomer is inserted at a random position and its energy is denoted by $u^{(1)}(n)$. We define the Rosenbluth weight of this monomer as $w_1 = k\exp[-\beta u^{(1)}(n)]$.²
2. For all subsequent segments $i = 2,3,\cdots,\ell$, we consider all k trial positions adjacent to segment $i-1$ (see Fig. 10.1). The energy of the $j$th trial position is denoted by $u^{(i)}(j)$. From the k possibilities, we select one, say, $n$, with a probability
\[ p^{(i)}(n) = \frac{\exp[-\beta u^{(i)}(n)]}{w_i}, \]   (10.2.3)
where $w_i$ is defined as
\[ w_i = \sum_{j=1}^{k}\exp[-\beta u^{(i)}(j)]. \]   (10.2.4)
The energy $u^{(i)}(j)$ excludes the interactions with the subsequent segments $i+1$ to $\ell$. Hence, the total energy of the chain is given by $U(n) = \sum_{i=1}^{\ell}u^{(i)}(n)$.
3. Step 2 is repeated until the entire chain is grown, and we can compute the normalized Rosenbluth factor of conformation $n$ (a minimal illustration of this bookkeeping is sketched after Eq. (10.2.5)):
\[ W(n) = \prod_{i=1}^{\ell}\frac{w_i}{k}. \]   (10.2.5)

FIGURE 10.1 Rosenbluth scheme to insert a polymer segment by segment. The arrows indicate the trial positions for the next segment.

² The factor k is included in the definition of $w_1$ only to keep the notation consistent with that of section 12.2.1.
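The sketch below illustrates the bookkeeping of this growth scheme for the simplest possible case: a self-avoiding chain on a simple cubic lattice with hard-core (0 or ∞) interactions, so that every Boltzmann factor is either 1 or 0. Averaging W over many trial chains then gives the excess chemical potential through Eq. (10.2.14). The code is a minimal illustration under these assumptions, not the program used for the results discussed in this chapter; occupied is assumed to hold the lattice sites already taken by the solvent or by other chains.

import math, random

NEIGHBORS = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
k = len(NEIGHBORS)

def grow_chain(length, occupied, start):
    """Grow one chain with the Rosenbluth scheme; return (W, sites), with W = 0 for a dead walk."""
    sites = [start]
    taken = set(occupied) | {start}
    W = 1.0 if start not in occupied else 0.0   # w_1/k for hard-core interactions
    for _ in range(length - 1):
        x, y, z = sites[-1]
        trials = [(x + dx, y + dy, z + dz) for dx, dy, dz in NEIGHBORS]
        open_trials = [t for t in trials if t not in taken]   # Boltzmann factor 1
        w_i = len(open_trials)                                # Eq. (10.2.4)
        if w_i == 0:
            return 0.0, sites          # trapped: zero Rosenbluth weight
        W *= w_i / k                   # running product of w_i/k, Eq. (10.2.5)
        new = random.choice(open_trials)   # Eq. (10.2.3): all open sites equally likely here
        sites.append(new)
        taken.add(new)
    return W, sites

# Excess chemical potential from the average Rosenbluth weight, Eq. (10.2.14):
samples = [grow_chain(10, occupied=set(), start=(0, 0, 0))[0] for _ in range(10000)]
beta_mu_ex = -math.log(sum(samples) / len(samples))

With occupied left empty, as in this example, the average weight measures only the nonbonded intramolecular (self-avoidance) contribution; filling occupied with the solvent sites gives the full excess chemical potential relative to the ideal, non-self-avoiding reference chain.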

We use this scheme to generate a large number of conformations, and ensemble-averaged properties of these chains are calculated as follows:
\[ \langle A\rangle_R = \frac{\sum_{n=1}^{M}W(n)A(n)}{\sum_{n=1}^{M}W(n)}, \]   (10.2.6)

where · · · R indicates that the conformations have been generated by the Rosenbluth scheme. This label is important, because the Rosenbluth algorithm does not generate chains with the correct Boltzmann weight. We refer to the


distribution generated with the Rosenbluth procedure as the Rosenbluth distribution. In the Rosenbluth distribution, the probability of generating a particular conformation $n$ is given by
\[ P(n) = \prod_{i=1}^{\ell}\frac{\exp[-\beta u^{(i)}(n)]}{w_i} = \frac{\exp[-\beta U(n)]}{k^{\ell}\,W(n)}. \]   (10.2.7)

An important property of this probability is that it is normalized; that is,
\[ \sum_n P(n) = 1, \]

where the sum runs over all possible conformations of the polymer. We can recover canonical averages from the Rosenbluth distribution by attributing different weights to different chain conformations. And this is precisely what is done in Eq. (10.2.6):
\[ \langle A\rangle_R = \frac{\sum_n W(n)A(n)P(n)}{\sum_n W(n)P(n)}. \]   (10.2.8)
Substitution of Eqs. (10.2.5) and (10.2.7) gives
\[ \langle A\rangle_R = \frac{\sum_n W(n)\,A(n)\exp[-\beta U(n)]/[k^{\ell}W(n)]}{\sum_n W(n)\exp[-\beta U(n)]/[k^{\ell}W(n)]} = \frac{\sum_n A(n)\exp[-\beta U(n)]}{\sum_n\exp[-\beta U(n)]} = \langle A\rangle, \]

(10.2.9)

which shows that Eq. (10.2.6) indeed yields the correct ensemble average. Here, we introduced the Rosenbluth factor as a correction for the bias in the sampling scheme. The Rosenbluth factor itself is also of interest, since it can be related to the excess chemical potential. To see this, let us assume that we use the Rosenbluth scheme to generate a large number of chain conformations while keeping the coordinates of all other particles in the system fixed. For this set of conformations, we compute the average of the Rosenbluth weight factor W, W. Subsequently, we also perform an ensemble average over all coordinates and conformations of the N particles in the system, and we obtain 

 N N N N W = P (q ,  )W q ,  , (10.2.10) 

where the angular brackets denote the ensemble average over all configurations of the system {qN ,  N } of the solvent. Note that the test polymer does not form part of the N -particle system. Therefore the probability of finding the remaining


particles in a configuration qN does not depend on the conformation  of the polymer. To simplify the expression for the average in Eq. (10.2.10), we first consider the average of the Rosenbluth factor for a given configuration {qN ,  N } of the solvent:  W({qN ,  N }) = P (qN ,  N )W ({qN ,  N }). (10.2.11) 

Substitution of Eqs. (10.2.3) and (10.2.5) yields W=







     

wi exp −βu(i) ( i ) k exp −βu ( 1 ) wi k 

(1)

i=2

i=1

  

  1 exp −βu(i) ( i ) k exp −βu(1) ( 1 ) = k  i=2  1 = exp [−βU ] , k −1



(10.2.12)



where we have dropped all explicit reference to the solvent coordinates {qN ,  N }. Note that Eq. (10.2.12) can be interpreted as an average over all ideal chain conformations of the Boltzmann factor exp [−βU ]. If we now substitute Eq. (10.2.12) in Eq. (10.2.11), we obtain  exp[−βU(qN ,  N ; qN +1 ,  N +1 )] .

 W =



(10.2.13)



If we compare Eq. (10.2.13) with Eq. (10.2.2), we see that the ensemble average of the Rosenbluth factor is directly related to the excess chemical potential of the chain molecule: βμex = − ln W .

(10.2.14)

This completes our demonstration that a measurement of the average Rosenbluth factor of a trial chain can indeed be used to estimate the excess chemical potential of a polymer in a dense fluid. We should stress that the preceding method for measuring the chemical potential is in no way limited to chain molecules in a lattice. What is essential is that the number of possible directions for each segment (k) relative to the previous one is finite.

10.2.2 Extension to continuously deformable molecules The numerical computation of the excess chemical potential of a flexible chain with or without terms in the intramolecular potential that depend on bending and


torsion angles, is rather different from the corresponding calculation for a chain molecule that has a large but fixed number of undeformable conformations. Here, we consider the case of a flexible molecule with intramolecular potential energy. Fully flexible chains, of course, are included as a special case. Consider a semi-flexible chain of  linear segments. The potential energy of the molecule is divided into two contributions: the “internal” potential energy Ubond , which includes the bonded intramolecular interactions, and the “external” potential energy Uext , which accounts for the remainder of the interactions - including nonbonded intramolecular interactions. A chain in the absence of external interactions is defined as an ideal chain. The conformational partition function of the ideal chain is equal to  Qid = c

 ···

d 1 · · · d 



exp[−βubond (θi )],

(10.2.15)

i=1

where c is a numerical constant. We assume that Qid is known. Our aim is to compute the effect of the external interactions on the conformational partition function. Hence, we wish to evaluate Q/Qid , where Q denotes the partition function of the interacting chain. The excess chemical potential of the interacting chain is given by μex = −kB T ln(Q/Qid ). Before considering the “smart” approach to computing μex , let us briefly review two not-so-smart methods. The most naive way to compute the excess chemical potential of the interacting chain is to generate a very large number of completely random conformations of the freely jointed chain. For every conformation we compute both exp(−βUbond ) and exp[−β(Ubond + Uext )]. The average of the former quantity is proportional to Qid , while the average of the latter Boltzmann factor is proportional to Q. The ratio of these two averages therefore should yield Q/Qid . The problem with this approach is that the overwhelming majority of randomly generated conformations correspond to semi-flexible chains with a very high internal energy (and therefore very small Boltzmann weights). Hence, the statistical accuracy of this sampling scheme will be very poor. The second scheme is designed to alleviate this problem. Rather than generating conformations of a freely jointed chain, we now sample the internal angles in the chain in such a way that the probability of finding a given angle θi is given by the Boltzmann weight P (θi ) = 

exp[−βu(θi )] . d i exp[−βu(θi )]

Such sampling can be performed quite easily using a rejection method (see, e.g., [21]). In what follows, we use the symbol  i to denote the unit vector that


specifies the orientation of the ith segment of the chain molecule. For every conformation thus generated, we compute the Boltzmann factor exp(−βUext ). The average of this Boltzmann weight is then equal to exp(−βUext ) =

 d exp[−β(Ubond + Uext )]  d exp(−βUbond )

= Q/Qid .

(10.2.16)

This approach is obviously superior to the first scheme. However, in many practical situations it will still yield poor statistics, because most ideal chain conformations will not correspond to energetically favorable situations for the interacting chain. Hence the Boltzmann weights, again, will be small for most conformations, and the statistical accuracy will not be very good. The problem with both these schemes is that neither allows us to focus on those conformations that should contribute most to Q, namely, those for which the sum of the internal and external potential energies is not much larger than a few kB T per degree of freedom. It would clearly be desirable to bias the sampling toward such favorable conformations. It turns out that we can use a procedure similar to that used in section 10.2.1 to compute the excess chemical potential of a chain molecule with many fixed conformations. To compute μex , we apply the following recipe for constructing a conformation of a chain of  segments. The construction of chain conformations proceeds segment by segment. Let us consider the addition of one such segment. To be specific, let us assume that we have already grown i segments and we are trying to add segment i + 1. This is done as follows: 1. Generate a fixed number of (say, k) trial segments with orientations distributed according to the Boltzmann weight associated with the internal potential energy u(θ ). We denote the different trial segments by indices 1, 2, · · · , k. Importantly, our final result for the excess chemical potential is valid for any choice of k ≥ 1, but the accuracy of the result depends strongly on our choice for k. 2. For all k trial segments, we compute the external Boltzmann factor exp[−βu(i) ext (j )]. 3. Select one of the trial segments, say, n, with a probability (i)

p (i) (n) =

exp[−βuext (n)] , wiext

where we have defined wiext ≡

k  j =1

(i)

exp[−βuext (j )].

(10.2.17)


4. Add this segment as segment i + 1 to the chain and repeat this procedure until the entire chain is completed. The normalized Rosenbluth factor W of the entire chain is given by W ext (n) =



w ext i

i=1

k

,

  (1) where, for the first segment, w1ext = k exp −βuext (1) . The desired ratio Q/Qid is then equal to the average value (over many trial chains) of the product of the partial Rosenbluth weights:   Q/Qid = W ext . (10.2.18) To show that Eq. (10.2.18) is correct, let us consider the probability with which we generate a given chain conformation. This probability is the product of a number of factors. Let us first consider these factors for one segment and then later extend the result to the complete chain. The probability of generating a given set of k trial segments with orientations  1 through  k is Pid ( 1 )Pid ( 2 ) · · · Pid ( k )d 1 · · · d k .

(10.2.19)

The probability of selecting any one of these trial segments follows from Eq. (10.2.17): (i)

p (i) (j ) =

exp[−βuext ( j )] , wiext ( 1 , · · · ,  k )

(10.2.20)

for j = 2, 3, · · · , . We wish to compute the average of wiext over all possible sets of trial segments and all possible choices of the segment. To this end, we must  sum over all j and integrate over all orientations kj =1 d j (i.e., we average over the normalized probability distribution for the orientation of segment i +1): 

 

k k  wiext exp[−βuext (j  )] wiext ( 1 , · · · ,  k ) = d j Pid ( j ) k k wiext ( 1 , · · · ,  k )  j =1

=



k

j =1

j =1

d j Pid ( j )

k  exp[−βuext (j  )] . k 

(10.2.21)

j =1

But the labeling of the trial segments is arbitrary. Hence, all k terms in the sum in this equation yield the same contribution, and this equation simplifies to 

  wiext = dPid () exp[−βuext ()] k

(10.2.22)


 = =

d exp {−β[ubond () + uext ()]}  d exp[−βubond ()]

Q(i) (i)

,

(10.2.23) (10.2.24)

Qid

which is indeed the desired result but for the fact that the expression in Eq. (10.2.24) refers to segment i (as indicated by the superscript in Q(i) ). The extension to a chain of  segments is straightforward, although the intermediate expressions become a little unwieldy. The final result is a relation between the normalized Rosenbluth factor and the excess chemical potential:  ext  W ex βμ = − ln  ext  , (10.2.25) WID ext is the normalized Rosenbluth factor of an isolated chain with nonwhere WID bonded intramolecular interactions. This Rosenbluth factor has to be determined from a separate simulation using exactly the same approach: the only difference being that we now have to compute W for an isolated molecule with nonbonded intra-molecular interactions. In principle, the results of the Rosenbluth sampling scheme are exact in the sense that, in the limit of an infinitely long simulation, the results are identical to those of a Boltzmann sampling. In practice, however, there are important limitations. In contrast to the configurational-bias Monte Carlo scheme (see Chapter 12), the Rosenbluth scheme generates an unrepresentative sample of all polymer conformations as the probability of generating a given conformation is not proportional to its Boltzmann weight. Accurate values can be calculated only if these distributions have a sufficient overlap. If the overlap is small, then the tail of the Rosenbluth distribution makes the largest contribution to the ensemble average (10.2.6); conformations that have a very low probability of being generated in the Rosenbluth scheme may have Rosenbluth factors so large that they tend to dominate the ensemble average. Precisely because such conformations are generated very infrequently, the statistical accuracy may be poor. If the relevant conformations are never generated during a simulation, the results will even deviate systematically from the true ensemble average. This drawback of the Rosenbluth sampling scheme is well known, in fact (see, the article of Batoulis and Kremer [449,450] and Illustration 14).

Illustration 13 (Henry coefficients in porous media). For many practical applications of porous media, we need to know the “adsorption isotherm”, which describes the dependence of the number of adsorbed molecules of a given species at a given temperature on its external pressure or, more generally, on


its fugacity. Examples 4 and 18, show how a complete adsorption isotherm can be computed using Grand Canonical Monte Carlo simulations. However, if the external pressure is sufficiently low, a good estimate of the adsorption isotherm can be obtained from the Henry coefficient KH . Under these conditions, the number of adsorbed molecules per unit volume (ρa ) is proportional to the Henry coefficient and external pressure P : ρa = KH P . The Henry coefficient is directly related to the excess chemical potential of the adsorbed molecules. To see this, consider the ensemble average of the average density in a porous medium. In the grand-canonical ensemble, this ensemble average is given by (see section 6.5, Eq. (6.5.10))    ∞ 1  (f V )N N = dsN exp[−βU (sN )]N/V V  N! =

f 

N =0 ∞ 



(f V )N /N  !

N  =0

     dst exp [−βU (st )] × dsN exp −βU (sN )   = f exp(−βU + ) , where N  = N − 1, st denotes the scaled position where we insert a test particle and U + is defined as the change in the potential energy of the system due to the insertion of the test particle. In the limit P → 0, the reservoir can be considered to be an ideal gas, in which case its fugacity becomes f → βP , and hence



   N = βP exp(−βU + ) V

This gives, for the Henry coefficient, KH = β exp(−βμex ). Maginn et al. [451] and Smit and Siepmann [452,453] used the approach described in this section to compute the Henry coefficients of linear alkanes adsorbed in the zeolite silicalite. The potential describing the alkane interactions is divided into an external potential and an internal potential. The internal potential includes bond bending and torsion: uint = ubend + utors . The alkane model uses a fixed bond length. The external interactions include the remainder of the intramolecular interactions and the interactions with the zeolite:


uext = uintra + uzeo . Since the Henry coefficient is calculated at infinite dilution, there is no need to consider the intermolecular alkane-alkane interactions. Smit and Siepmann used the internal interactions to generate the trial conformations (see section 12.3) and determine the normalized Rosenbluth factor using the external interactions only; this Rosenbluth factor is related to the excess chemical potential according to  ext  W ex βμ = − ln  ext  , WIG  ext  where WIG is the Rosenbluth factor of a molecule in the ideal gas phase (no interactions with the zeolite) [454]. For an arbitrary alkane, the calculation of the Henry coefficient requires two simulations: one in the zeolite and one in the ideal gas phase. However, for butane and the shorter alkanes, all isolated (ideal gas) molecules are ideal chains, as there are no nonbonded interactions. For such chains, the Rosenbluth factors in the ideal gas phase are by definition equal to 1.
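Once the two averaged Rosenbluth factors are available, the Henry coefficient follows from K_H = β exp(−βμ_ex) together with the expression above. The helper below is only a schematic summary of that bookkeeping; the argument names are ours, not from the cited work.

import math

def henry_coefficient(avg_W_zeolite, avg_W_ideal_gas, beta):
    """K_H = beta * exp(-beta*mu_ex), with beta*mu_ex = -ln(<W_ext>_zeolite / <W_ext>_ideal_gas).
    For butane and shorter alkanes the ideal-gas Rosenbluth factor is 1 by definition."""
    beta_mu_ex = -math.log(avg_W_zeolite / avg_W_ideal_gas)
    return beta * math.exp(-beta_mu_ex)

# Equivalent shortcut: K_H = beta * <W_ext>_zeolite / <W_ext>_ideal_gas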

FIGURE 10.2 Henry coefficients KH of n-alkanes in the zeolite silicalite as a function of the number of carbon atoms Nc as calculated by Maginn et al. [451] and Smit and Siepmann [453].

In Fig. 10.2 the Henry coefficients of the n-alkanes in silicalite as calculated by Smit and Siepmann are compared with those of Maginn et al. If we take into account that the models considered by Maginn et al. and Smit and Siepmann are slightly different, the results of these two independent studies are in good agreement.

10.2.3 Overlapping-distribution Rosenbluth method Although the Rosenbluth particle-insertion scheme described in section 10.2 is correct in principle, it may run into practical problems when the excess chemical potential becomes large. Fortunately, it is possible to combine the Rosenbluth


scheme with the overlapping distribution method to obtain a technique with built-in diagnostics. This scheme is explained in the SI (section L.10). As with the original overlapping distribution method (see section 8.6.1), the scheme described in SI (section L.10) constructs two histograms, but now as a function of the logarithm of the Rosenbluth weight rather than the potential energy difference. If the sampled distributions do not overlap, then one should expect the estimate of the excess chemical potential of chain molecules to become unreliable and the Rosenbluth method should not be used. As shown in ref. [455], there is indeed a tendency for the two distributions to move apart when long chains are inserted into a moderately dense fluid. Yet, at least in the case studied in ref. [455], the statistical errors in μex become important before the systematic errors due to inadequate sampling show up. Illustration 14 (Rosenbluth sampling for polymers). Batoulis and Kremer [450] made a detailed analysis of the Rosenbluth algorithm for self-avoiding walks on a lattice. The Rosenbluth scheme was used to generate one walk on a lattice. Batoulis and Kremer found that, with a random insertion scheme, the probability of generating a walk of 100 steps without overlap is on the order of 0.022% (FCC-lattice). If, on the other hand, we use the Rosenbluth scheme, this probability becomes close to 100%. In Fig. 10.3, the distribution of the radius of gyration of the polymer as calculated with the corrected ensemble average (10.2.6) is compared with the uncorrected average (i.e., using the Rosenbluth scheme to generate the schemes and using A = (1/M) M n=1 A(n) instead of Eq. (10.2.6) to calculate the ensemble averages). The figure shows that the Rosenbluth scheme generates chains that are more compact. Batoulis and Kremer showed that, for longer chain lengths, this difference increases exponentially. One therefore should be careful when using such a non-Boltzmann sampling scheme.

FIGURE 10.3 Probability distribution of the radius of gyration RG . The circles show the Boltzmann distribution and the squares the Rosenbluth distribution. The results are of an FCC-lattice for a walk of 120 steps (data taken from ref. [450]).


10.2.4 Recursive sampling In view of the preceding discussion, it would seem attractive to have unbiased sampling schemes to measure the chemical potential. Of course, thermodynamic integration methods are unbiased and the modified Widom scheme, although biased at the level of the insertion of a single monomer (like the original Widom scheme), is less biased than the Rosenbluth method. Yet, these methods cannot be used to measure μex in a single simulation (see section 10.1). It turns out that nevertheless, it is possible to perform unbiased sampling of μex in a single simulation. Here, we briefly sketch the basic idea behind this method. In our description, we follow the approach proposed by Grassberger and Hegger [456,457]. Their technique is quite similar to a Monte Carlo scheme developed a few years earlier by Garel and Orland [458]. Like the Rosenbluth and modified Widom schemes, the recursive sampling approach is based on a segment-by-segment growth of the polymer. But that is about where the similarity ends. In recursive sampling, the aim is to generate a population of trial conformations. The excess chemical potential of a chain molecule is directly related to the average number of molecules that have survived the growth process. The first step of the procedure is to attempt a trial insertion of a monomer in the system. Suppose that the Boltzmann factor associated with this trial insertion is b0 ≡ exp[−βu0 (rN )]. We now allow the monomer to make multiple copies of itself, such that the average number of copies, n0 , is equal to n0  = π0 b0 , where π0 is a constant multiplicative factor that remains to be specified. A convenient rule for determining how many copies should be made is the following. Denote the fractional part of π0 b0 by f0 and the integer part by i0 . Our rule is then to generate i0 (i0 + 1) copies of the inserted particle with a probability 1 − f0 (f0 ). Clearly if i0 = 0, there is a probability 1 − f0 that the monomer will “die.” Assume that we have generated at least one copy of the monomer. Every copy from now on proceeds independently to generate offspring. For instance, to generate a dimer population, we add a segment to every surviving monomer. We denote the Boltzmann weight associated with these trial additions by b1 (i), where the index i indicates that every surviving monomer will give rise to a different dimer. As before, we have to decide how many copies of the dimers should survive. This is done in exactly the same way as for the monomer; that is, the average number of dimers that descends from monomer i is given by n1 (i) = π1 b1 (i), where π1 , just like π0 before, is a constant to be specified later. The number of dimers generated may either be larger or smaller than the original number of monomers. We now proceed with the same recipe for the next generation

366 PART | III Free-energy calculations

(trimers) and so on. In fact, as with the semi-flexible molecules discussed in section 10.2.1, it is convenient to include the intramolecular bond-bending, bond-stretching, and torsional energies in the probability distribution that determines with what orientation new segments should be added. The average number of surviving molecules at the end of the th step is      

N  = πi exp −βU (rN ) , i=0

where U (rN ) is the total interaction of the chain molecule with the N solvent molecules (and the nonbonded intramolecular interactions). The angular brackets denote a canonical average over the coordinates and over the intramolecular Boltzmann factors of the ideal (nonself-avoiding) chain. In other words,   

N  = πi exp[−βμex ()]. i=0

Hence, the excess chemical potential is given by   N  μex () = −k B T ln  . i=0 πi

(10.2.26)

The constants πi should be chosen such that there is neither a population explosion nor mass extinction. If we have a good guess for μex () then we can use this to estimate πi . In general, however, πi must be determined by trial and error. This recursive algorithm has several nice features. First of all, it is computationally quite efficient (in some cases, more than an order of magnitude faster than the Rosenbluth scheme, for the same statistical accuracy). In fact, in actual calculations, the algorithm searches in depth first, rather than in breadth. That is to say, we try to grow a polymer until it has been completed (or has died). We then continue from the last branch of the tree from where we are allowed to grow another trial conformation. In this way, we work our way back to the root of the tree. The advantage of this scheme is that the memory requirements are minimal. Moreover, the structure of the program is very simple indeed if we make use of recursive function calls. Last but not least, the recursive scheme generates an unbiased (i.e., Boltzmann) population of chain conformations [459].

10.2.5 Pruned-enriched Rosenbluth method An important extension of the Rosenbluth scheme has been proposed by Grassberger [460]. It is called the pruned-enriched Rosenbluth method (PERM). One of the reasons why the conventional Rosenbluth method fails for long chains or at high densities is that the distribution of Rosenbluth weights becomes very

Free energy of chain molecules Chapter | 10 367

broad. As a consequence, it can happen that a few conformations with a high Rosenbluth weight completely dominate the average. If this is the case, we should expect to see large statistical fluctuations in the average. It would, of course, be desirable to focus the simulations on those classes of conformations that contribute most to the average, and spend little time on conformations that have a very low Rosenbluth weight. The PERM algorithm is a generalization of the recursive-sampling scheme discussed above. It also generates a population of chains with different conformations. And it shares the advantage that, due to the recursive nature of the algorithm, we need not keep more than one conformation (plus a set of pointers) in memory. The “birth” and “death” rules of this algorithm are such that it generates many copies of conformations with a high Rosenbluth weight, while low-weight structures have a high probability of “dying.” The Rosenbluth weight of the remaining conformations is adjusted in such a way that our birth-death rules do not affect the desired average. As the PERM algorithm is recursive, it uses little memory. To summarize the algorithm in a few words: conformations with a high Rosenbluth weight are multiplied by a factor k and their weight is reduced by the same factor. Conformations with a low weight are “pruned”—half the low-weight conformations are discarded, while the weight of the remainder is doubled. Once all chains that have started from a common “ancestor” have been grown to completion (or have been discarded), we simply add the (rescaled) Rosenbluth weights of all surviving chains. Below, we briefly sketch how the algorithm is implemented. Let us introduce an upper and a lower threshold of the Rosenbluth weight of a chain with length l, Wimax and Wimin , respectively. If the partial Rosenbluth weight of a particular chain conformation of length i, Wi , exceeds the threshold, Wi > Wimax , then the single conformation is replaced by k copies. The partial Rosenbluth weight of every copy is set equal to Wi /k. If, on the other hand, the partial Rosenbluth weight of a particular conformation, Wi , is below the lower threshold, Wi < Wimin , then we “prune.” With a probability of 50% we delete the conformation. But if the conformation survives, we double its Rosenbluth weight. There is considerable freedom in the choice of Wimax , Wimin , and k. In fact, all of them can be chosen “on the fly” (as long as this choice does not depend on properties of conformations that have been grown from the same ancestor). A detailed discussion of the algorithm can be found in refs. [460,461]. The limitation of the recursive growth algorithm is that it is intrinsically a static Monte Carlo technique; every new configuration is generated from scratch. This is in contrast to dynamic (Markov-chain) MC schemes in which the basic trial move is an attempt to modify an existing configuration. Dynamic MC schemes are better suited for the simulations of many-particle systems than their static counterparts. The reason is simple: it is easy to modify a manyparticle configuration to make other “acceptable” configurations (for instance, by displacing one particle over a small distance). In contrast, it is very difficult to generate such configurations from scratch. On the other hand, once a new

368 PART | III Free-energy calculations

configuration is successfully generated in a static scheme, it is completely independent of all earlier configurations. In contrast, successive configurations in dynamic MC are strongly correlated. CBMC is, in a sense, a hybrid scheme: it is a dynamic (Markov-chain) MC method. But the chain-regrowing step is more similar to a static MC scheme. However, in this step, it is less “smart” than the recursive algorithms discussed above, because it is rather “myopic.” The scheme looks only one step ahead. It may happen that we spend a lot of time growing a chain almost to completion, only to discover that there is simply no space left for the last few monomers. This problem can be alleviated by using a scanning method of the type introduced by Meirovitch [462]. This is basically a static, Rosenbluth-like method for generating polymer configurations. But, in contrast to the Rosenbluth scheme, the scanning method looks several steps ahead. If this approach is transferred naively to a configurational-bias Monte Carlo program, it would yield an enhanced generation of acceptable trial conformations, but the computational cost would rise steeply (exponentially) with the depth of the scan. This second drawback can be avoided by incorporating a recursive scanning method that cheaply eliminates doomed trial configurations, within a dynamic Monte Carlo scheme. In section 12.7 we discuss a dynamic MC algorithm (recoil growth), that is based on this approach.

Part IV

Advanced techniques

This page intentionally left blank

Chapter 11

Long-ranged interactions 11.1

Introduction

The most time-consuming step in a simulation is the calculation of the potential energy (MC), or of the forces acting on all particles (MD). In the case of pairwise-additive interactions, the total time for such a calculation scales as the number of interacting pairs. Clearly, as the range of the interactions becomes longer, the number of pairs approaches N (N − 1)/2, where N is the number of particles in the system. The usual way to avoid this problem (see section 3.3.2.2) is to truncate the potential at some finite range rc , and approximate the remaining interaction by a “tail correction” of the form  Nρ ∞ U tail ∼ dr r d−1 u(r). 2 rc However, for interactions that decay as 1/r d or slower (e.g., Coulomb or dipolar interactions), the tail corrections diverge. Moreover, for such interactions, it is also not permissible to limit the calculation of the interactions between the nearest periodic image. Simply ignoring the long-ranged part of the potential has serious consequences (for a discussion, see ref. [463]). So, clearly, there is a problem. The more so as Coulomb and dipolar interactions between molecules are very common. Fortunately, efficient methods have been developed that can treat the long-ranged nature of the Coulomb interaction correctly. When selecting an algorithm for a specific application, key factors to consider are accuracy and speed, not whether they are easy to explain. Modern algorithms to compute electrostatic interactions typically scale very favorably with the N , the number of particles in the system: either as N ln N or even as N . But some of the most powerful algorithms only outperform the simpler ones for fairly large system sizes. These are all considerations to keep in mind. A systematic comparison of scalable algorithms to compute long-ranged interactions can be found in a paper by Arnold et al. [464]. That paper describes many of the widely used variants of the algorithms that we discuss, but some of the more recent algorithms are not included in that review. Before proceeding, we must discuss the fact that we will write the laws of electrostatics in a form that looks as if we are using non-SI units, but that is not the explanation. When, in section 3.3.2.5, we introduced reduced units, we Understanding Molecular Simulation. https://doi.org/10.1016/B978-0-32-390292-2.00022-2 Copyright © 2023 Elsevier Inc. All rights reserved.

371

372 PART | IV Advanced techniques

argued that it was convenient to express all observables in units of a characteristic energy , a characteristic length σ , and a characteristic mass m. Now we must introduce yet another characteristic unit, namely the unit of charge. For microscopic simulations, neither the Coulomb nor the, by now largely forgotten, e.s.u. are convenient units, but the absolute value e of the electron charge is. If we have two particles with charges Q1 , Q2 at separation r in a medium with a relative dielectric constant r , we can write their Coulomb interaction in SI units as Q1 Q2 , (r) = 4π0 r r where 0 denotes the dielectric permittivity of vacuum. But in a simulation, we wish to use dimensionless quantities so that we would write φ  (r) ≡

e2 (Q1 /e)(Q2 /e) (r) 1 (λB /σ ) q1 q2 , = =   4π0 r σ r/σ (/kB T ) r ∗

where r ∗ is the distance in units of σ , q ≡ Q/e, and λB is the Bjerrum length (see e.g., [59]). In terms of the reduced Bjerrum length λ∗ ≡ λ/σ and the reduced temperature T ∗ ≡ kB T /, we get φ  (r) = T ∗ λ∗

q1 q2 . r∗

In what follows, we drop the asterisk on r ∗ . To make the notation in the remainder of this chapter compact, we go one step further and define φ(r) ≡ φ  (r)/(T ∗ λ∗ ), so that Coulomb’s law becomes φ(r) =

q1 q2 , r

(11.1.1)

which looks like Coulomb’s law in Gaussian units, but it is not. It is just a convenient way of expressing electrostatics in a dimensionless form with a minimum number of conversion factors. Note that once we have computed φ(r), we must still multiply the result by T ∗ λ∗ to obtain all energies in the same units. Below we discuss a number of widely used methods to compute Coulomb interactions. It is easy to get buried under formalism when discussing algorithms to compute Coulomb interactions. Therefore, following the well-known rule Tell’em what you’re going tell’em, we briefly outline what classes of algorithms we will discuss and why. Very briefly: we will only discuss different algorithms separately if they make use of a sufficiently different approach. In this spirit, we will discuss 1. The Ewald summation (section 11.2), which was the first to treat the complete Coulomb interaction as the sum of a short-range part and a long-ranged part that is evaluated by Fourier transformation. 1 f (r) 1 − f (r) = + , r r r

(11.1.2)

Long-ranged interactions Chapter | 11 373

with f (r) some short-ranged function (in the case of the Ewald approach, f (r) is a complementary error function). In the context of the Ewald sum, we also discuss: • The corresponding expression for dipolar particles (section 11.2.1), and the calculation of the dielectric constant (section 11.2.1). • The fact that the nature of the boundary conditions at infinity can never be ignored (section 11.2.2).1 • Methods to optimize the choice of f (r) in Eq. (11.1.2) (section 11.2.3). 2. Then we discuss Particle-Mesh methods that, although similar in spirit to the Ewald approach, are much faster because they use the Fast Fourier Transform of a discretized charge distribution to compute the long-range interactions (section 11.3). 3. Next, we consider how, by smart truncation of the short-range part of the interaction, we can completely ignore the long-ranged part of the electrostatic potential and yet obtain a very good approximation to the complete expression (section 11.4). 4. The so-called Fast-Multipole Method (FMM) is a very different approach, which we explain mainly in words. The FMM scales as N , which becomes important for large systems (section 11.5). Importantly, the FMM method was originally designed for two-dimensional systems, where most other methods do not perform well. 5. Finally, we discuss two local algorithms that, although in principle rigorous, never compute any long-ranged interaction. We also mention one rather different algorithm that replaces infinite-ranged interactions in flat space with finite-ranged interactions on a hyper-sphere. (section 11.7). The choice of the method to compute Coulomb interactions depends on a number of factors, such as system size and the desired accuracy, but also on which method is the fastest for a given application. Of course, the speed of a given implementation depends not just on the software but also on the hardware.

11.2

Ewald method

We start with a discussion of the Ewald-summation method, which has a long history, as it goes back to ... Ewald (1921) [466]. Ewald’s method was initially used to estimate the electrostatic part of the cohesive energy of ionic crystals solids [467]. The method was first used in computer simulations by Brush, Sahlin, and Teller [468], but the current methodology started with a series of papers by De Leeuw, Perram, and Smith [469–471], to which we refer the reader for more details. The computational effort required for the Ewald summation does not scale as N 2, but it still increases faster than linear with the number of particles: at best as O N 3/2 —see section 11.2.3. As a consequence, the Ewald method, although 1 A clear treatment of the boundary conditions at infinity can be found in ref. [465].

374 PART | IV Advanced techniques

  adequate for system sizes of O 103 – 104 particles, becomes less attractive for larger systems. The methods that we will discuss later have a more favorable scaling (O(N log N ) or even O(N )). But these methods tend to have a larger overhead making them less attractive for smaller systems, whilst the methods that also work for small systems are only approximate. Below we present a simplified discussion of the Ewald method [466] for computing long-ranged contributions to the potential energy in a system with periodic boundary conditions. A more rigorous derivation is given in the articles by De Leeuw et al. [469–471] mentioned above, and in a paper by Hansen [472]. In what follows, we limit our discussion to the three-dimensional case. For a discussion of two-dimensional Ewald sums, we refer to the reader ref. [21]. However, one aspect of dimensionality should be mentioned: Coulomb’s law (Eq. (11.1.1)) is a three-dimensional law. Of course, we can embed 3d charges in a (quasi)-2d geometry, but the potential due to such charges does not satisfy the 2d Laplace equation. Rather, the potential that satisfies the 2d Laplace equation depends logarithmically on distance. For real charged particles, the true 2d equivalent of Coulomb’s law is irrelevant. However, there are other interactions for which the 2d laws are relevant: for instance, the interactions between dislocations in a 2d crystal: in such cases, a true 2d equivalent of the Ewald method can sometimes be used. Let us consider a system consisting of positively and negatively charged particles (q). These particles are assumed to be located in a cubic box with diameter L (and volume V = L3 ). We assume periodic boundary conditions. The total number of particles in the fundamental simulation box (the unit  cell) is N . We assume that the system as a whole is electrically neutral; that is, i qi = 0. Moreover, we rely on the fact that a non-Coulombic short-range repulsion (Pauli exclusion principle) will prevent unlike point charges from getting arbitrarily close. We wish to compute the Coulomb contribution to the potential energy of this N -particle system, 1 qi φ(ri ), 2 N

UCoul =

(11.2.1)

i=1

where φ(ri ) is the electrostatic potential at the position of ion i:  qj ,  φ(ri ) = rij + nL

(11.2.2)

j,n

where the prime on the summation indicates that the sum is over all periodic images n and over all particles j , except j = i if n = 0. In other words: particle i interacts with all its periodic images, but not with itself. Eq. (11.2.2) cannot be used to compute the electrostatic energy in a simulation, because it contains a poorly converging sum; in fact, the sum is only conditionally convergent. To improve the convergence of the expression for the electrostatic potential energy, we rewrite the expression for the charge density.

Long-ranged interactions Chapter | 11 375

FIGURE 11.1 In the Ewald scheme to compute the electrostatic energy, we evaluate the electrostatic potential at the positions of all particles, due to the same set of point charges, represented as a set of screened charges minus the smoothly varying screening background. In the figure, particle (i) experiences the electrostatic potential due to a set of charge clouds (e.g., Gaussians) (A), plus the contribution due to a set of point charges, each embedded in a neutralizing counter-charge cloud (B). Finally, as contribution (B) over-counts the interaction of particle i with its own neutralizing charge cloud, we must subtract this contribution (C).

In Eq. (11.2.2) we have represented the charge density as a sum of point charges. The contribution to the electrostatic potential due to these point-charges decays as 1/r. Now consider what happens if we assume that every particle i with charge qi is surrounded by a diffuse charge distribution of the opposite sign, such that the total charge of this cloud exactly cancels qi . In that case, the electrostatic potential of such a composite particle is due exclusively to the fraction of the point charge that is not screened. At large distances, this fraction goes to zero. How rapidly depends on the functional form of the screening charge distribution. In what follows, we shall assume a Gaussian distribution for the screening charge cloud.2 The contribution to the electrostatic potential at a point ri due to a set of screened charges can be easily computed by direct summation, because the electrostatic potential due to a screened charge is a short-ranged function of r. However, our aim is to evaluate the potential due to point charges, not screened charges. Hence, we must correct for the fact that we have added a screening charge cloud to every particle. The different contributions to the charge density are shown schematically in Fig. 11.1. This compensating charge density varies smoothly in space. We wish to compute the electrostatic potential at the site of ion i. Of course, we should exclude the electrostatic interaction of the ion with itself. We have three contributions to the electrostatic potential: first of all, the 2 The choice of Gaussian compensating charge clouds is simple and convenient, but other choices

are possible —see [473]).

376 PART | IV Advanced techniques

one due to the point charge qi . Secondly, the one due to the (Gaussian) screening charge cloud with charge −qi , and finally, the one due to the compensating charge cloud with charge qi . In order to exclude Coulomb self-interactions, we should not include any of these three contributions to the electrostatic potential at the position of ion i. However, it turns out that it is convenient to retain the contribution due to the compensating charge distribution and correct for the resulting spurious interaction afterwards. The reason why we retain the compensating charge cloud for ion i is that, if we do so, the compensating charge distribution is not only a smoothly varying function, but it is also the same for all particles, and it is periodic. Such a function can be represented by a (rapidly converging) Fourier series, and this will turn out to be essential for the numerical implementation. Of course, in the end, we should correct for the inclusion of a spurious “self” interaction between ion i and the compensating charge cloud. One point needs stressing: in the Ewald method, we compute the electrostatic potential by expressing the charge distribution as described above. However, when we compute the electrostatic energy, we use Eq. (11.2.1) to compute how this electrostatic potential interacts with a set of point charges. Let us next consider the individual terms that contribute to the electrostatic energy. We assume that the compensating charge distribution surrounding an √ ion i is a Gaussian with width 2/α:  3 ρGauss (r) = −qi (α/π) 2 exp −αr 2 . The choice of α will be determined later by considerations of computational efficiency. We shall first evaluate the contribution to the Coulomb energy due to the continuous background charge, then the spurious “self” term, and finally the real-space contribution due to the screened charges. Fourier transformation This chapter relies heavily on the use of Fourier transforms. Here we summarize the basics of Fourier transformation in the context of electrostatics. We already know Coulomb’s law in the form φ(r) =

z . |r|

(11.2.3)

The electrostatic potential at a point r due to a collection of charges is then φ(r) =

N  i=1

qi . |r − ri |

However, this expression is of little use in simulations where the system is repeated periodically In simulations, our aim is to compute the electrostatic energy of a system from knowledge of the charge distribution ρP (r) made up of point

Long-ranged interactions Chapter | 11 377

charges: ρP (r) =

N 

qi δ(r − ri ),

(11.2.4)

i=1

where ri and qi denote the position and the charge of particle i. In order to relate the electrostatic potential to the charge density, we now use Poisson’s equation for the electrostatic potential. For our choice of units, the Poisson equation reads: − ∇ 2 φ(r) = 4πρP (r),

(11.2.5)

where φ(r) is the electrostatic potential at point r. For what follows, it is convenient to consider the Fourier transform of Poisson’s equation. For convenience, we assume a system in a periodically repeated cubic box with diameter L and volume V − Ld . Any well-behaved function f (r) with period L in all directions can be represented by a Fourier series: ∞ 1  ˜ f (r) = f (k)eik·r , V

(11.2.6)

l=−∞

where k = (2π/L)l with l = (lx , ly , lz ) are the lattice vectors in Fourier space. The Fourier coefficients f˜(k) are calculated using f˜(k) =



dr f (r)e−ik·r .

(11.2.7)

V

In Fourier space Poisson’s equation (11.2.5) reduces to:

−∇ φ(r) = −∇ 2

2

1  ir·k ˜ φ(k)e V



k

1  2 ir·k ˜ = k φ(k)e V k 4π  ir·k = ρ(k)e ˜ , V

(11.2.8)

k

where ρ(k) ˜ in the last equality, denotes the Fourier transform of the charge density. The equality in Eq. (11.2.8) must hold for every Fourier component. Hence: ˜ k 2 φ(k) = 4π ρ(k). ˜

(11.2.9)

378 PART | IV Advanced techniques

To find the solution of Poisson’s equation for a point charge of strength z at the origin, we have to perform the Fourier transform of a delta function:  ρ(k) ˜ = dr zδ(r)e−ik·r V

=z. This yields as solution for the Poisson equation ˜ φ(k) =

4πz . k2

The solution for a unit charge is the so-called Green’s function: g(k) ˜ =

4π . k2

(11.2.10)

For a collection of point charges, the charge density is given by Eq. (11.2.4), and we can write for the Fourier coefficients of the potential ˜ φ(k) = g(k) ˜ ρ(k) ˜ with  ρ(k) ˜ =

dr V

=

N 

N 

qi δ(r − ri )e−ik·r

i=1

qi e−ik·ri .

(11.2.11)

i=1

These equations show that in Fourier space the solution of Poisson’s equation is simply obtained by multiplying ρ(k) ˜ and g(k) ˜ for all k vectors. In what follows we will also use another property of the Fourier transform. If we have a function f1 (x), which is the convolution ( ) of two other functions f2 (x) and f3 (x):  f1 (x) ≡ f2 (x) f3 (x) ≡ dx  f2 (x  )f3 (x − x  ), then the Fourier coefficients of these functions are related by a simple multiplication: f˜1 (k) = f˜2 (k)f˜3 (k). For example, if we have a charge distribution that is “smeared out” around a sum of δ functions by replacing each δ-function centered at ri with a normalized

Long-ranged interactions Chapter | 11 379

distribution function γ (r − ri ), then we can write:   qi γ (r − ri ) = dr γ (r )ρp (r − r ), ρ(r) =

(11.2.12)

i

and the Poisson equation in Fourier space takes the form ˜ φ(k) = g(k) ˜ γ˜ (k)ρ(k). ˜ This result is convenient because it shows that in Fourier space the effect of smearing out a charge distribution amount to a simple multiplication. Fourier part of Ewald sum We now apply the properties of the Poisson equation in Fourier form to compute the electrostatic potential at a point ri due to a charge distribution ρS (r) that consists of a periodic sum of Gaussians (interaction (A) in Fig. 11.1): ρS (r) =

N  

 2 3 qj (α/π) 2 exp −α r − (rj + nL) ,

j =1 n

where the subscript S in ρS denotes the smoothed charge distribution (as opposed to the real (point)charge distribution ρ(r)). To compute the electrostatic potential φS (r) due to this charge distribution, we use the Fourier form of the Poisson equation: k 2 φ˜ S (k) = 4π ρ˜S (k). Fourier transforming the charge density ρS yields  ρ˜S (k) = dr exp(−ik · r)ρS (r) V

 =

dr exp(−ik · r) V



dr exp(−ik · r) all space N 

 2 3 qj (α/π) 2 exp −α r − (rj + nL)

j =1 n

= =

N  

N 

 2 3 qj (α/π) 2 exp −α r − rj 

j =1

 qj exp(−ik · rj ) exp −k 2 /4α .

(11.2.13)

j =1

If we now insert this expression in Poisson’s equation, we obtain φ˜ S (k) =

N  4π  2 q exp(−ik · r ) exp −k /4α . j j k2 j =1

(11.2.14)

380 PART | IV Advanced techniques

For a neutral system, the term φ˜ S (k = 0) is ill-defined, as both numerator and denominator vanish. Such behavior is a direct consequence of the conditional convergence of the Ewald sum. In section 11.2.2 we will see that if the system has a net polarization, and if this polarization results in a depolarizing field, then the electrostatic energy contains a term for k = 0. However, for the time being, we shall assume that the term with k = 0 is equal to 0. As we shall see in section 11.2.2, this assumption is consistent with a situation where the periodic system is embedded in a medium with infinite dielectric constant. We now compute the contribution to the potential energy due to φS , using Eq. (11.2.1). To this end, we first compute φS (r): 1  φ˜ S (k) exp(ik · r) V

φS (r) =

(11.2.15)

k =0

=

N  4πqj k =0 j =1

k2

 exp[ik · (r − rj )] exp −k 2 /4α ,

and hence, US ≡

1 qi φS (ri ) 2 i

N  1   4πqi qj 2 = exp[ik · (r − r )] exp −k /4α i j 2 V k2 k =0 i,j =1  1  4π 2 2 | ρ(k)| ˜ = exp −k /4α , (11.2.16) 2V k2 k =0

where we have used the definition ρ(k) ˜ ≡

N 

qi exp(ik · ri ).

(11.2.17)

i=1

Correction for self-interaction The expression for the electrostatic potential energy given in Eq. (11.2.16) includes a term (1/2)qi φself (ri ) that is due to the interaction of a point charge qi with its own compensating charge cloud (see (A) in Fig. 11.1). As particles do not interact with themselves, this interaction is spurious and should be subtracted. We do this by evaluating the interaction of point charge i with its surrounding charge cloud (interaction (B) in Fig. 11.1). To evaluate the correction to the spurious self-interaction, we must compute the electrostatic potential at the center of a Gaussian charge cloud. The charge

Long-ranged interactions Chapter | 11 381

distribution that we have overcounted is  3 ρGauss (r) = −qi (α/π) 2 exp −αr 2 . We can compute the electrostatic potential due to this charge distribution using Poisson’s equation. Using the spherical symmetry of the Gaussian charge cloud, we can write Poisson’s equation as −

1 ∂ 2 rφGauss (r) = 4πρGauss (r) r ∂r 2



∂ 2 rφGauss (r) = 4πrρGauss (r). ∂r 2

or

Partial integration yields ∂rφGauss (r) − = ∂r



r

dr 4πrρGauss (r)  ∞  3 dr 2 exp −αr 2 = 2πqi (α/π) 2 r  1 = 2qi (α/π) 2 exp −αr 2 . ∞

(11.2.18)

A second partial integration gives 

 r dr exp −αr 2 rφGauss (r) = −2qi (α/π) √  0 = −qi erf αr , 1 2

(11.2.19)

where, in the√lastline, we have employed the definition of the error function: x erf(x) ≡ (2/ π) 0 exp(−u2 ) du. Hence, φGauss (r) = −

qi √  erf αr . r

(11.2.20)

To compute the spurious self term to the potential energy, we must compute φGauss (r) at r = 0. It is easy to verify that 1

φGauss (r = 0) = −2qi (α/π) 2 . Hence, the correction for the spurious contribution to the potential energy is 1 Uself = − qi φself (ri ) 2 N

i=1

382 PART | IV Advanced techniques

1

= −(α/π) 2

N 

qi2 .

(11.2.21)

i=1

The correction term Uself should be subtracted from the sum of the real-space and Fourier contributions to the Coulomb energy. Note that Eq. (11.2.21) does not depend on the particle positions. Hence, during a simulation, this term is constant, provided that the number of particles in the system is fixed and the values of all (partial) charges remain unchanged. Real-space sum Finally, we must compute the electrostatic energy due to all pair interactions between the individual point charges and the screened point charges of all other particles (interaction (C) in Fig. 11.1). Using the results of section 11.2, in particular Eq. (11.2.20), we can immediately write the (short-range) electrostatic potential due to a point charge qi surrounded by a Gaussian with net charge −qi : qi qi √  − erf αr r r √  qi = erfc αr , r

φshort−range (r) =

(11.2.22)

where the last line defines the complementary error function erfc(x) ≡ 1 − erf(x). The total contribution of the screened Coulomb interactions to the potential energy is then given by √  1 qi qj erfc αrij /rij . 2 N

Ushort−range =

(11.2.23)

i =j

The total electrostatic contribution to the potential energy now becomes the sum of Eqs. (11.2.16), (11.2.21), and (11.2.23): UCoul =

 1  4π 2 2 | ρ(k)| ˜ exp −k /4α 2V k2 k =0

− (α/π)

1 2

N  i=1

qi2

 √ N 1  qi qj erfc αrij + . 2 rij

(11.2.24)

i =j

11.2.1 Dipolar particles Once we have the expression for the electrostatic potential energy of a periodic system of point charges, the corresponding expressions for the potential energy of a system containing dipolar molecules can be derived by differentiation. The only modification is that we must everywhere replace qi by −μi · ∇ i , where μ

Long-ranged interactions Chapter | 11 383

is the dipole moment. For example, the electrostatic energy of a dipolar system becomes Udipolar =

2  1  4π  ˜  2 M(k) exp −k /4α   2V k2 k =0



N 3  2π (α/π) 2 μ2i 3 i=1

+

1 2

N 

  (μi · μj )B(rij ) − (μi · rij )(μj · rij )C(rij ) , (11.2.25)

i =j

where   exp −αr 2 + 2(α/π) , B(r) ≡ r 3 r2 √   exp −αr 2  erfc αr 1 2 + 2 (α/π) 2 2α + 3/r , C(r) ≡ 3 r5 r2 erfc

√

αr



1 2

and ˜ M(k) ≡

N 

iμi · k exp(ik · ri ).

i=1

Again, this expression applies to a situation where the periodic system is embedded in a material with infinite dielectric constant. Dielectric constant To derive an expression for the dielectric constant of a polar fluid, we consider the system shown in Fig. 11.2: a large spherical dielectric with radius a and dielectric constant  (region I) surrounded by a much larger sphere with radius b and dielectric constant   (region II). The entire system is placed in vacuum (region III), and an external electric field E0 is applied. The potential at a given point in this system follows from the solution of the Poisson equation with the appropriate boundary conditions (continuity of the normal component of the displacement D and tangential component of the electric field E) at the two boundaries between regions I and II, and II and III. Solving the set of linear equations that impose the continuity of D⊥ and E , and taking the limit a → ∞, b → ∞, a/b → 0, we obtain the following expression for the electric field in region I, EI =

9  E, (  + 2)(2  + )

(11.2.26)

384 PART | IV Advanced techniques

FIGURE 11.2

Spherical dielectric surrounded by a sphere.

which gives, for the polarization P, P≡

9  ( − 1) −1 EI = E0 . 4π 4π(  + 2)(2  + )

(11.2.27)

In order to make contact with linear response theory, we should compute the polarization of the system as a function of the applied field inside region I , i.e., the electric field that would be present in this region in the absence of the particles. Using Eq. (11.2.26), it is easy to derive that the electrostatic field EI that would be present in region I if it were empty is given by Eq. (11.2.26) with  = 1: EI =

9  E0 . (  + 2)(2  + 1)

The field EI is uniform throughout region I . If we assume that the system is isotropic, we can write for the polarization 

  N N   1 N 

P = μi exp −β H0 − μi · EI dr VQ i=1 i=1 β  2  2  = M − M EI , (11.2.28) 3V where, in the second line, we have assumed that the response is linear. Note that Eq. (11.2.28) describes the relation between the polarization P and the electric field acting on the medium: the external field E0 has disappeared from this expression. This is important because the relation between P and E0 depends on the shape of the dielectric medium, whereas the relation between

P and EI is shape-independent.

Long-ranged interactions Chapter | 11 385

Comparison of Eqs. (11.2.28) and (11.2.27) yields 1

P = βρgk μ2 EI , 3

(11.2.29)

where the gk is the Kirkwood factor, which is defined as gk ≡

1  2  M − M2 , 2 Nμ

where M is the total dipole moment M=

N 

μi .

i=1

Combining Eqs. (11.2.27) and (11.2.29) gives ( − 1)(2  + 1) 4 = πβρgk μ2 . (2  + ) 3 For a simulation with conducting boundary conditions (  → ∞), the expression for the dielectric constant becomes 4  = 1 + πρβgk μ2 . 3

(11.2.30)

This result shows that the fluctuations of the dipole moment depend on the dielectric constant of the surrounding medium. This, in turn, implies that, for a polar system, the Hamiltonian itself depends on the dielectric constant   of the surrounding medium, but not on the dielectric constant  of the medium itself.

11.2.2 Boundary conditions It may appear strange that the form of the potential energy of an infinite periodic system of ions or dipoles should depend on the nature of the boundary conditions at infinity. However, for systems of charges or dipoles, this is a very real effect, which has a simple physical interpretation. To see this, consider the system shown in Fig. 11.2. The fluctuating dipole moment of the unit cell M gives rise to a surface charge at the boundary of the sphere, which, in turn, is responsible for a homogeneous depolarizing field: E=−

4πP , 2  + 1

where P ≡ M/V . Now let us consider the reversible work per unit volume that must be performed against this depolarizing field to create the net polarization

386 PART | IV Advanced techniques

P. Using 4π PdP, 2  + 1 we find that the total work needed to polarize a system of volume V equals dw = −EdP =

Upol =

2π 2π P 2V =  M 2 /V 2  + 1 2 + 1

or, using the explicit expression for the total dipole moment of the periodic box, 2π Upol =  (2 + 1)V

N 2     ri qi  ,    i=1

in the Coulomb case, and 2π Upol =  (2 + 1)V

 N  2   μi  ,    i=1

in the dipolar case. This contribution to the potential energy corresponds to the k = 0 term that we have neglected thus far. It is permissible to ignore this term if the depolarizing field vanishes. This is the case if our periodic system is embedded in a medium with infinite dielectric constant (a conductor,   → ∞), which is what we have assumed throughout. Ballenegger [465] gives a unified discussion of the effect of differently shaped boundaries at infinity. For simulations of ionic systems, it is essential to use such “conducting” (sometime called “tin-foil”) boundary conditions; for polar systems, it is merely advantageous. For a discussion of these subtle points, see [474]. A paper by Sprik explains how simulations can be carried out at constant D or E [475].

11.2.3 Accuracy and computational complexity In the Ewald summation, the calculation of the energy is performed in two parts: the real-space part (11.2.22) and the part in Fourier space (11.2.16). For a given implementation, we have to choose the parameter α that characterizes the width of the Gaussian charge distributions, rc the real-space cutoff distance, and kc the cutoff in Fourier space. In fact, it is common to write kc as 2π/Lnc , where nc is a positive integer. The total number of Fourier components within this cutoff value is equal to (4π/3)n3c . The values of these parameters depend on the desired accuracy , that is, the root-mean-squared difference between the exact Coulombic energy and the results from the Ewald summation. Expressions for the cutoff errors in the Ewald summation method3 have been derived in 3 The accuracy is dependent on whether we focus on the energy (for Monte Carlo) or on the forces

(for Molecular Dynamics).

Long-ranged interactions Chapter | 11 387

[476,477]. For the energy, the standard deviation of the real-space cutoff error of the total energy is δER ≈ Q

 r 1 1  2 c 2 exp −αr c 2L3 αrc2

(11.2.31)

and for the Fourier part of the total energy δEF ≈ Q

√ 1/2

α nc 2 exp − /L) /α , (πn c L2 (πnc /L)2

where Q=



(11.2.32)

qi2 .

i

Note that for both the real-space part and the Fourier part, the strongest dependence of the estimated  on the parameters α, rc , and nc is through a function  error of the form x −2 exp −x 2 . We now impose that these two functions have the   same value . The value of x for which x −2 exp −x 2 =  we denote by s.   Hence  = s −2 exp −s 2 . It then follows from Eq. (11.2.31) that s rc = √ α

(11.2.33)

√ sL α . nc = π

(11.2.34)

and from Eq. (11.2.32) we obtain

If we insert these expressions for rc and nc back into the expressions (11.2.31) and (11.2.32), we find that both errors have the same functional form:   1/2  exp −s 2 s δER ≈ Q √ 3 s2 αL and



s δEF ≈ Q √ 3 2 αL

1/2

  exp −s 2 . s2

Hence, changing s affects both errors in the same way. We now estimate the computational effort involved in evaluating the Ewald sum. To this end, we write the total computational time as the sum of the total time in real space and the total time in Fourier space τ = τR NR + τF NF ,

(11.2.35)

388 PART | IV Advanced techniques

where τR is the time needed to evaluate the real part of the potential of a pair of particles and τF is the time needed to evaluate the Fourier part of the potential per particle and per k vector. NR and NF denote the number of times these terms need to be evaluated to determine the total energy or the force on the particles. If we assume a uniform distribution of particles, these two numbers follow from the estimates of rc and nc : 4 s3N 2 NR = π 3/2 3 3 α L 4 s 3 α 3/2 L3 N NF = π . 3 π3 The value of α follows from minimization of Eq. (11.2.35)  α=

τR π 3 N τ F L6

 13 ,

which yields for the time τ=

√  8 τr τf N 3/2 s 3 = O N 3/2 . √ 3 π

(11.2.36)

Note that, with the above expression for α, the parameters rc and nc follow from Eqs. (11.2.33) and (11.2.34), respectively, once we have specified the desired accuracy. To optimize the Ewald summation one has to make an estimate of τR /τF . This ratio depends on the details of the particular implementation of the Ewald summation and can be obtained from a short simulation.4 We conclude this section with a few comments concerning the implementation. First of all, when using Eq. (11.2.33) to relate rc to α, one should make sure that rc ≤ L/2; otherwise the real part of the energy cannot be restricted to the particles in the box n = 0. A second practical point is the following: in most simulations, there are short-range interactions between the particles, in addition to the Coulomb interaction. Usually, these short-range interactions also have a cutoff radius. Clearly, it is convenient if the same cutoff radius can be used for the short-range interactions and for the real-space part of the Ewald summation. However, if this is done, the parameters of the Ewald summation need not have their optimum values.

11.3 Particle-mesh approaches As discussed in section 11.2.3, the CPU time required for a fully optimized Ewald summation scales with the number of particles as O(N 3/2 ). In many 4 A typical value of this ratio is τ /τ = 3.6 [478]. R F

Long-ranged interactions Chapter | 11 389

applications, we not only have the long-ranged interactions but short-range interactions as well. For such systems, it may be convenient to use the same cutoff radius for the real-space sum in the Ewald summation as for the short-range interactions. For a fixed cutoff, however, the calculation of the Fourier part of the Ewald summation scales as O(N 2 ), which makes the Ewald summation inefficient for large systems. Note that it is only the reciprocal-space part of the Ewald sum that suffers from this drawback. Clearly, it would be advantageous to have an approach that handles the Fourier part more efficiently. Several schemes for solving this problem have been proposed. They all exploit the fact that the Poisson equation can be solved more efficiently if the charges are distributed on a mesh with fixed spacing. One reason why “meshing” increases the computational efficiency is that we can then use the Fast Fourier Transform (FFT) [38] to compute the Fourier components of the charge density: the computational cost of an M-point FFT scales as M ln M, and as M typically scales as N , the routine leads to N ln N scaling. The efficiency and accuracy of such mesh-based algorithms depend strongly on the way in which the charges are attributed to mesh points. Below, we briefly discuss the basics of the particle-mesh approach. The earliest particle-mesh scheme for molecular simulations was developed by Hockney and Eastwood [28]. The charges in the systems were interpolated on a grid to arrive at a discretized Poisson equation. In its simplest implementation, the particle-mesh method is fast, but not very accurate. The technique was subsequently improved by splitting the calculation into a short-range and a long-range contribution. In the spirit of the Ewald method, the short-range part is then calculated directly from the particle-particle interactions while the particle-mesh technique is used for the long-range contribution. Below, we briefly discuss the particle-mesh methods and their relation to the Ewald-sum approach. As before, we do not discuss the most sophisticated particle-mesh approach, but a version that allows us to explain the physical idea behind the approach. Excellent and detailed reviews of particle-mesh methods and their performance exist, e.g., the reviews by Deserno and Holm [479] and by Arnold et al. [464]. A description of a “typical” particle-mesh method is reasonable because many variants of the particle-mesh methods, such as the Particle Mesh Ewald (PME) [480] or the Smooth Particle Mesh Ewald (SPME) [481], are similar in spirit, as they are all inspired by the original Particle-Particle/Particle-Mesh (PPPM) technique of Hockney and Eastwood [28]. In practice, the choice of the method depends on the application. For example, Monte Carlo simulations require an accurate estimate of the energy, while in Molecular Dynamics simulations, we need to compute the forces accurately. Some particle-mesh schemes are better suited to do one, and some to do the other. Like the Ewald method, the idea behind the PPPM method is based on the splitting of the Coulomb potential a short-ranged and a long-ranged part

390 PART | IV Advanced techniques

(Eq. (11.1.2)): 1 f (r) 1 − f (r) = + . r r r The idea of using a switching function is similar to the splitting of the Ewald summation into a short-range and a long-range part. Pollock and Glosli [482] found that different choices for f (r) yield comparable results, although the efficiency of the method does depend strongly on a careful choice of this function. Darden et al. [480] showed that, if one uses the same Gaussian screening function as in the Ewald summation, the PPPM technique becomes very similar to the Ewald method. It is instructive to recall the Fourier-space contribution of the energy: US =

1  4π 2 |ρ(k)| ˜ exp(−k 2 /4α). 2V k2 k =0

Following Deserno and Holm [479], we write the Fourier-space contribution as ⎛ ⎞ N   1 1 ik·ri ⎠ qi ⎝ g(k) ˜ γ˜ (k)ρ(k)e ˜ US = 2 V k =0

i=1

1 qi φ k (ri ), 2 N

=

(11.3.1)

i=1

where φ k (ri ) can be interpreted as the electrostatic potential due to the second term in Eq. (11.1.2): φ k (ri ) =

1  g(k) ˜ γ˜ (k = 0)ρ(k) ˜ exp(ik · ri ). V k

As a product in Fourier space corresponds to a convolution in real space, we see that the potential φ k (ri ) is due to the original charge distribution ρ(x), convoluted by a smearing function γ (r). The Ewald summation is recovered if we choose a Gaussian smearing function, in which case f (r) is given by an error function. To evaluate the above expression for the Fourier part of the electrostatic energy using a discrete fast Fourier transform, we have to perform the following steps [479,483]: 1. Charge assignment: Up to this point, the charges in the system are not localized on lattice points. We now need a prescription to assign the charges to the grid points. 2. Solving Poisson’s equation for our discrete charge distribution using a FFT technique (the Poisson equation on a lattice can also be solved efficiently, using a diffusion algorithm [484]).

Long-ranged interactions Chapter | 11 391

3. Force assignment (in the case of MD): Once the electrostatic energy has been obtained from the solution of the Poisson equation, the forces have to be calculated and assigned back to the particles in our system. At every stage, there are several options to choose from. Ref. [479] assesses the relative merits of the various options and their combinations. Below we give a brief summary of the main conclusions of [479]. To assign the charges of the system to a grid, a charge assignment function, W (r), is introduced. For example, in a one-dimensional system, the fraction of a unit charge at position x assigned to a grid point at positionxp is given by W (xp − x). Hence, if we have a charge distribution ρ(x) = i qi δ(x − xi ), then the charges at a grid point xp are given by  1 L dx W (xp − x)ρ(x), (11.3.2) ρM (xp ) = h 0 where L is the box diameter and h is the mesh spacing. The number of mesh points in one dimension, M, is equal to L/ h. The factor 1/ h ensures that ρM is a density. Many choices for the function W (x) are possible, but some choices are better than others [479]. Obviously, W (x) should be an even function and should be normalized such that the sum of the fractional charges equals the total charge of the system. Moreover, since the computational cost is proportional to the number of mesh points over which a single charge is distributed, a function with a small support decreases the computational cost. Of course, one would like to reduce the computational errors due to the discretization as much as possible. As a particle moves through the system, the function W (x) should not yield abrupt changes in the fractional charges as it passes from one grid point to another. A nice way to approach the charge assignment problem was described by Essmann et al. [481], who argue that the problem of discretizing the Fourier transform can be viewed as an interpolation procedure. Consider a single term in the (off-lattice) Fourier sum qi e−ik·ri . This term cannot be used in a discrete Fourier transform, because r rarely coincides with a mesh point. However, we can interpolate e−ik·ri in terms of values of the complex exponential at mesh points. For convenience, consider a one-dimensional system. Moreover, let us assume that x varies between 0 and L and that there are M equidistant mesh points in this interval. Clearly, the particle coordinate xi is located between mesh points [Mxi /L] and [Mxi /L] + 1, where [· · · ] denotes the integer part of a real number. Let us denote the real number Mxi /L by ui . We can then write an order-2p interpolation of the exponential as e−ikx xi ≈

∞ 

W2p (ui − j ) e−ikx Lj/M ,

j =−∞

where the W2p ’s denote the interpolation coefficients. Strictly speaking, the sum over j contains only M terms. However, to account for the periodic boundary

392 PART | IV Advanced techniques

conditions, we have written it as if −∞ < j < ∞. For an interpolation of order 2p, only the 2p mesh point nearest to xi contributes to the sum. For all other points, the weights W2p vanish. We can now approximate the Fourier transform of the complete charge density as ρk ≈

N 

qi

i=1

∞ 

W2p (ui − j )e−ikx Lj/M .

j =−∞

This can be rewritten as ρk ≈

 j

e−ikx Lj/M

N 

qi W2p (ui − j ).

i=1

We can interpret the above expression as a discrete Fourier transform of a  “meshed” charge density ρ(j ) = N q i=1 i W2p (ui − j ). This shows that the coefficients W2p that were introduced to give a good interpolation of e−ikx xi end up as the charge-assignment coefficients that attribute off-lattice charges to a set of lattice points. While the role of the coefficients W is now clear, there are still several choices possible. The most straightforward one is to use the conventional Lagrange interpolation method to approximate the exponential (see Darden et al. [480] and Petersen [477]). The Lagrange interpolation scheme is useful for Monte Carlo simulations, but less so for the Molecular Dynamics method. The reason is that although the Lagrangian coefficients are everywhere continuous, their derivative is not. This is problematic when we need to compute the forces acting on charged particles (the solution is that a separate interpolation must be used to compute the forces). To overcome this drawback of the Lagrangian interpolation scheme, Essmann et al. suggested the so-called SPME method [481]. The SPME scheme uses exponential Euler splines to interpolate complex exponentials. This approach results in weight functions W2p that are 2p − 2 times continuously differentiable. It should be stressed that we cannot automatically use the continuum version of Poisson’s equation in all interpolation schemes. In fact Eq. (11.2.10) is only consistent with the Lagrangian interpolation schemes. To minimize discretization errors, other schemes, such as the SPME method, require other forms of the Green’s function g(k) ˜ (see ref. [479]). In section 11.2.3, we discussed how the parameter α in the conventional Ewald sum method can be chosen such that it minimizes the numerical error in the energy (or in the forces). Petersen [477] has derived similar expressions for the PME method. Expressions that apply to the PPPM method [28] and the SPME scheme are discussed by Deserno and Holm [485]. In the case of the force computation, matters are complicated by the fact that, in a particle-mesh scheme, there are several inequivalent ways to compute the electrostatic forces acting on the particles. Some such schemes do not conserve momentum, and

Long-ranged interactions Chapter | 11 393

others do —but at a cost. The choice of what is the “best” method, depends largely on the application [479]. This concludes our discussion of particle-mesh schemes. While we have tried to convey the spirit of these algorithms, we realize that this description is not sufficiently detailed to be of any help in the actual implementation of such an algorithm. We refer readers who are considering implementing one of the particle-mesh schemes to the articles of Essmann et al. [481], Deserno and Holm [479], Arnold et al. [464], and of course Hockney and Eastwood [28].

11.4

Damped truncation

The Coulomb potential is long-ranged and naive attempts to truncate it lead to very inaccurate results. However, as stressed by Wolf [486], the mathematical manipulations of the Ewald method do not clarify the physical origins of this problem. The failure of naive truncation may seem puzzling, as it had already been noted earlier [487] that the effective interactions in Coulomb fluids are short-ranged except, of course, for the fields that are due to boundary conditions at infinity. As shown in ref. [486], the reason why naive truncations of the Coulomb potential fail is that the volume within the cutoff radius is rarely charged neutral. This problem is not specific to the Ewald method: the same problem occurs with any representation of the Coulomb potential in the form given by Eq. (11.1.2) 1 f (r) 1 − f (r) = + . r r r Wolf [486] showed that good estimates for the energy of a Coulomb system could be obtained, even when ignoring the long-ranged part and truncation f (r) at some finite cutoff radius Rc , provided that the volume within the cut-off radius is made charge neutral. Charge-neutrality is achieved by placing a compensating charge on the surface of the sphere with radius Rc . Wolf made the Ewald choice for f (r), but of course, other choices can be, and have been, made. Here, we give Wolf’s expression for the electrostatic energy for a system with a “damped” Coulomb potential, truncated at Rc in the general form: E electrostatic (Rc ) =

N  i O(105 )) systems, be it that the precise threshold depends on the desired accuracy. The algorithm has a history dating back to Appel [490] and Barnes and Hut [491]. The fast-multipole approach as we now know it was developed by Greengard and Rokhlin (GR), initially for a non-periodic, two-dimensional case [492], and later extended to 3d [493]. Schmidt and Lee extended the method to systems with periodic boundary conditions [494], which adds a bit to the initial overhead, but does not otherwise slow down the algorithm. Yoshii et al. [495] have shown that the FMM method can also be generalized to 3d problems with a slab geometry (periodic in the x and y directions, but not in the z direction). Ewald-like schemes typically become cumbersome for slab geometries (see SI section L.7). Other helpful descriptions of the FMM approach can be found in refs. [496,497]. In the FMM, the system (the “root”) is divided into cubic cells (“children”) that are then subdivided into smaller cubic cells (“grandchildren”) with half the

Long-ranged interactions Chapter | 11 395

FIGURE 11.3 One-dimensional representation of the steps in the fast multipole method (FMM). (A) In the M2M step Multipoles in smaller cells are transformed and added to generate the Multipoles of the parent cells, which are then used to generate the multipoles of the grandparents, etc. (B) L2L-step: The Local potential in parent cells is transformed to generate the Local potential in the off-spring cells (black arrows). In addition, the Local potential in the off-spring also contains a Multipole-to-Local contribution (see (C)) due to multipoles in cells that are beyond the next-nearest neighbor, but not yet included at the parent level): dashed arrows. (C) M2L step. The Local potential due to Multipoles surrounding goldilocks cells (not too far, not too close). Multipoles that are further removed have been accounted for at the parent level. Those that are too close will be included in subsequent generations.

linear dimensions (see Fig. 11.3). This subdivision goes on until there are O(1) charges in the smallest cells. This procedure creates what is called an octal tree of cells. As the name suggests, the FMM is based on the fact that the electrostatic potential due to a group of charges inside a volume V can be written as the sum of the contributions due to all the multipoles of the charge distribution inside that volume. The crucial point is that an electrostatic potential in a vacuum satisfies the Laplace equation and that the solution of this equation at a point with spherical coordinates (r, θ, φ), where r is the distance from the chosen origin, is of the form [498]⁵

$$\phi(r, \theta, \varphi) = \sum_{\ell=0}^{\infty} \sum_{m=-\ell}^{\ell} \frac{4\pi}{2\ell+1} \left[ A_{\ell m} r^{\ell} + B_{\ell m} r^{-(\ell+1)} \right] Y_{\ell m}(\theta, \varphi),$$

⁵ This multipole expansion is for the 3d Coulomb potential. A similar expansion exists for the 2d Laplace equation. Note, however, that 1/r is not a solution to the 2d Laplace equation.

where the Y_{ℓm} denote the spherical harmonics (unfortunately, different authors use different notation conventions). The coefficients A_{ℓm} and B_{ℓm} characterize the spatial distribution of the charges responsible for the potential. For instance, for a potential at distances r larger than the radius of the sphere that encloses all charges i = 1, ..., k causing the potential, the B_{ℓm} are the multipole moments in spherical tensor notation:

$$M_{\ell m} = \sum_{i=1}^{k} q_i\, r_i^{\ell}\, Y^{*}_{\ell m}(\theta_i, \varphi_i), \tag{11.5.1}$$

but there is an equivalent expression relating the A_{ℓm} to the charges at a distance larger than r:

$$M'_{\ell m} = \sum_{i=1}^{k} q_i\, r_i^{-(\ell+1)}\, Y^{*}_{\ell m}(\theta_i, \varphi_i). \tag{11.5.2}$$
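As a concrete illustration of Eq. (11.5.1) (our own sketch, not part of the original text): the moments of a cluster of point charges can be accumulated directly with SciPy's spherical harmonics, and the outer expansion then reproduces the direct Coulomb sum at a well-separated point. We use the legacy routine scipy.special.sph_harm(m, l, azimuth, polar); newer SciPy versions provide sph_harm_y with the angles in the opposite order.

import numpy as np
from scipy.special import sph_harm

def multipole_moments(q, xyz, l_max):
    """Multipole moments M_{lm} (Eq. 11.5.1) of point charges about the origin."""
    x, y, z = xyz.T
    r = np.sqrt(x**2 + y**2 + z**2)
    polar = np.arccos(np.clip(z / r, -1.0, 1.0))
    azim = np.arctan2(y, x)
    M = []
    for l in range(l_max + 1):
        row = np.zeros(2 * l + 1, dtype=complex)
        for m in range(-l, l + 1):
            row[l + m] = np.sum(q * r**l * np.conj(sph_harm(m, l, azim, polar)))
        M.append(row)
    return M

def potential_from_moments(M, R):
    """Far-field potential at point R from the outer multipole expansion."""
    Rr = np.linalg.norm(R)
    polar = np.arccos(R[2] / Rr)
    azim = np.arctan2(R[1], R[0])
    phi = 0.0
    for l, row in enumerate(M):
        for m in range(-l, l + 1):
            phi += (4*np.pi/(2*l+1)) * row[l+m] * sph_harm(m, l, azim, polar) / Rr**(l+1)
    return phi.real

# quick check against the direct Coulomb sum for a well-separated point
rng = np.random.default_rng(0)
xyz = rng.uniform(-0.5, 0.5, size=(20, 3))
q = rng.choice([-1.0, 1.0], size=20)
R = np.array([6.0, -2.0, 4.0])
exact = np.sum(q / np.linalg.norm(R - xyz, axis=1))
approx = potential_from_moments(multipole_moments(q, xyz, l_max=6), R)
print(exact, approx)    # the two values should agree to several digits here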

In practice, the multipole expansion is truncated at some finite value of ℓ: the larger ℓ, the lower the speed, but the higher the accuracy of the algorithm. The GR approach uses a polynomial expansion of the electrostatic potential inside a charge distribution similar to Eq. (11.5.2), but due to multipoles, rather than due to charges, at distances larger than r. We will not write down this expression (see ref. [496]). This local polynomial expansion of the electrostatic potential can be used to compute the contribution to the potential inside a given cell due to the multipoles in cells that are far enough away to guarantee the convergence of the expansion. The simplest way to ensure convergence of the local expansion [496,499] is to include only the contribution to the local potential due to cells that are further away than the next-nearest neighbor of the central cell (next nearest because no charges responsible for this part of the local potential can be inside the cell for which we compute the local potential). Cells that are closer will be dealt with at the next level of refinement, where we split every cell into eight sub-cells. The part of the local field that is due to multipoles that are well-removed from the local cell is slowly varying and should be well-represented by a small number of terms, corresponding to low powers of r. The crucial point is that the contribution to the coefficients of r^ℓ from the multipoles centered in the different surrounding cells ("not too close, not too far") is given by a simple,⁶ fixed linear combination of these cell multipoles. We do not have to compute the contribution to the local potential due to cells that are treated in the next level up (i.e., at the level of parent cells), where

⁶ The relation is simple only if you are into the construction of rotational invariants from products of spherical harmonics.


the same calculation is performed for the local potential due to multipoles in beyond-next-nearest-neighbor parent cells. But we have not yet explained how we compute the multipole moments of cells, their parents, their grandparents, etc. Let us first focus on the highest resolution (when there are few, if any, charges in a cell). We use the terminology of Ref. [496] and call these cells "leaf" cells, as opposed to "root" or "branch" cells. We simply compute the multipole moments of a leaf cell using Eq. (11.5.1). From there on, we use the GR recursive relations described below to compute everything else. The key point is that, at this level, we are done with computing multipoles. What follows are only linear transformations with fixed coefficients. The key steps underlying the FMM are:

1. There exists a "simple" linear transformation that allows us to move a set of multipoles centered on the middle of box V to the center of its "parent" box, which (in 3d) contains 8 smaller cells. Note that the new multipole moments are not the same as the old ones: they are linear combinations (and they mix different orders of ℓ). We repeat the same Multipole-to-Multipole operation for all 8 "children" of a given parent box. We can then simply add the multipoles thus transformed and transported, and we have thereby obtained the multipole moments of the parent box.
2. We now repeat this procedure by combining 8 parent boxes into one "grandparent" box, and so on. Note that we never have to recompute multipoles from charges: we just apply a fixed, recursive, linear operation.
3. Once we have all the multipole moments of all cells at all the different levels up to the root (the periodic unit cell), it is time to increase the resolution again to compute the multipole contributions to the local potentials.⁷ Once we have the local potential at level L, starting with L = 0 (the whole simulation box), we can carry out GR's second set of linear transformations that transform and move the local potential at the center of the parent box to the centers of its 8 off-spring boxes. Note that this Local-to-Local transformation takes care of all multipole fields at the scale of the parent box or larger, all the way up to L = 0.
4. However, now we still need to add the contribution due to the charge distribution surrounding parent cells that were closer than next-nearest neighbors. As we are now one level down from the parent cells, we have a set of nearby (but not too nearby) offspring cells for which we know the multipoles. We can compute the Multipole-to-Local contribution of all these cell multipoles to the local potential in our target cell. We still miss all cells of the same size that are (next-)nearest neighbors. But, no problem, these are dealt with at the next refinement level.
5. Finally, we arrive at the level of the "leaf" cells. There we compute the contribution to the local potential due to the beyond-next-nearest-neighbor leaf cells that are not already included in the recursive calculation. Now the only thing that remains is computing the contribution to the local potential due to the charges that were not yet included in the multipole expansion. This contribution includes the electrostatic interactions between all charges inside the same leaf cell and the interaction with all charges in its immediate neighborhood. These contributions are computed explicitly using Coulomb's law:

$$u_{\text{close}} = \sum_{\text{close}} \frac{q_i q_j}{r_{ij}}.$$

⁷ Here, the situation with periodic boundary conditions is different from the case of a non-periodic charge distribution: in the non-periodic case, we can only start using the multipole expansion of local potentials if we have cells that are beyond next-nearest neighbors. However, for periodic boundary conditions, the potential due to the multipole moments of beyond-next-nearest-neighbor images of the periodic box must be computed. This stage requires an Ewald summation that is carried out only once at the beginning of the simulation. The boundary conditions at infinity enter at this stage.

As we have already computed the expansion coefficients of the local electrostatic potential φ_L(r) in every "leaf" cell, computing the total electrostatic energy is now just a matter of adding $(1/2)\sum_i q_i\, \phi_L(\mathbf r_i)$. The Fast Multipole Method has the unique feature that the computational effort scales as N. However, a potential drawback of the FMM is that the changeover from the direct calculation of the Coulomb interactions with charges in the immediate vicinity, to a truncated multipole expansion for cells beyond the next-nearest neighbor, may lead to discontinuities in the potential, which cause energy drift in MD. Special regularization techniques are required to address this problem [464,500,501]. Moreover, the FMM comes with some overhead, and may not be the best solution for molecular simulations of homogeneous systems. Furthermore, there are now many algorithms that scale as N(a + b ln N), but with b substantially smaller than a. A comparison of a number of algorithms with good (N or N ln N) scaling [464] (see Illustration 15) shows that the FMM approach is certainly very good, in particular for larger systems, but whether it is the best depends on many other factors (including the architecture of the computer used). For large, non-periodic systems many of the competitors to FMM lose their advantage and the choice becomes easier.
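The M2M translation of step 1 is easiest to see for the two lowest moments. The toy below is our own sketch (monopole and dipole only, so closer in spirit to Barnes-Hut than to the full FMM with its local expansions); it assembles the moments of a parent box from its eight children without revisiting the individual charges.

import numpy as np

def cell_moments(q, xyz, center):
    """Monopole (total charge) and dipole of a cell about a given center."""
    Q = q.sum()
    p = ((xyz - center) * q[:, None]).sum(axis=0)
    return Q, p

def m2m(children, parent_center):
    """Shift child (Q, p, center) moments to the parent center and add them.

    The translation is exact for these two moments:
    p_parent = sum_c [ p_c + Q_c * (c_center - parent_center) ]."""
    Q = sum(c[0] for c in children)
    p = sum(c[1] + c[0] * (c[2] - parent_center) for c in children)
    return Q, p, parent_center

def far_field(Q, p, center, R):
    """Potential at R due to the monopole + dipole of a distant cell."""
    d = R - center
    r = np.linalg.norm(d)
    return Q / r + np.dot(p, d) / r**3

rng = np.random.default_rng(1)
xyz = rng.uniform(0.0, 1.0, size=(64, 3))
q = rng.normal(size=64)

# children: the 8 octants of the unit box, each with moments about its own center
children = []
for octant in range(8):
    lo = np.array([(octant >> k) & 1 for k in range(3)]) * 0.5
    mask = np.all((xyz >= lo) & (xyz < lo + 0.5), axis=1)
    center = lo + 0.25
    Qc, pc = cell_moments(q[mask], xyz[mask], center)
    children.append((Qc, pc, center))

# M2M: parent (whole box) moments from the children, without a new sweep over charges
Qp, pp, cp = m2m(children, parent_center=np.array([0.5, 0.5, 0.5]))

R = np.array([8.0, 3.0, -5.0])
print(far_field(Qp, pp, cp, R))                      # low-order multipole estimate
print(np.sum(q / np.linalg.norm(R - xyz, axis=1)))   # direct sum, for comparison

In the real FMM the same idea is applied to all moments up to the chosen ℓ, and the analogous L2L and M2L translations propagate the local expansions back down the tree.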

11.6 Methods that are suited for Monte Carlo simulations

The majority of the techniques described above are less suited for MC simulations that employ single-particle moves. The reason is that computing the long-ranged potential after moving a single charge requires computing the change in the Fourier components of the charge densities. This is inefficient in the standard implementation of the Ewald and PME methods, and also for computing the propagated multipoles and the local potentials due to a single charge in FMM. The approximate Wolf method does not suffer from this problem and is therefore also suited for Monte Carlo simulations.


Below, we describe two refreshingly different techniques that work either exclusively for MC (section 11.6.2), or are conceptually much simpler for MC than for MD (section 11.6.1).

11.6.1 Maxwell equations on a lattice

The Maxwell Equations Method (MEM) proposed by Maggs and coworkers [502–504] is rather different from the schemes mentioned previously as, in its simplest form, it does not compute the electrostatic potential at all, but minimizes the energy associated with the electrostatic field. The energy associated with an electric field E can be written as

$$U_{\text{el}} = \frac{1}{8\pi} \int d\mathbf r\, E^2(\mathbf r), \tag{11.6.1}$$

where, as before, the use of the factor 1/(8π) does not imply that we use Gaussian units, but that we use reduced units, as was explained below Eq. (11.2.1). We also note that the electrostatic energy in Eq. (11.6.1) contains the self-energy, which diverges for the case of a point particle. However, as the MEM method is implemented on a lattice, the self-energy remains finite. It still creates some problems, but these we will discuss later. We first focus on the simplest possible case of the MEM approach, namely one where particles have fixed charges and are constrained to lattice sites. Our starting point is the relation between the charge density ρ(r) and the divergence of the E-field:

$$\nabla \cdot \mathbf E(\mathbf r) = 4\pi \rho(\mathbf r). \tag{11.6.2}$$

We can write the E-field as the sum of two independent terms⁸, E_tr and E_φ: the first, the transverse field, is the rotation of some other vector field Z(r) and is therefore by construction divergence-free; the other is the gradient of a potential φ(r):

$$\mathbf E(\mathbf r) = -\nabla\phi + \nabla \times \mathbf Z(\mathbf r).$$

The total field energy is then

$$U_{\text{el}} = \frac{1}{8\pi} \int d\mathbf r\, \left[ (\nabla\phi)^2 + (\nabla \times \mathbf Z)^2 \right], \tag{11.6.3}$$

where we note that the integral over the cross terms vanishes. In addition, we impose that the divergence of the E-field satisfies Gauss's law, i.e., Eq. (11.6.2). The charges are located on the lattice points at the centers of these cells, and the

⁸ In general, E may also contain a constant term E₀, due to the boundary conditions (section 11.2.2). For the MEM algorithm, this term makes no difference, and we leave it out here.


FIGURE 11.4 Simple example of two field distributions that both satisfy Gauss’s law for a simple, square lattice of positive and negative charges. Fig. (a) shows an initial field distribution that has been constructed such that the divergence of E satisfies Gauss’s law. However, the E fields are not equal to (minus) the gradient of the electrostatic potential. (b) shows a snapshot of the same lattice, but with an E-field that is equal to (minus) the gradient of the electrostatic potential. Note that situation (b) can be created from situation (a) by adding E fields that circulate around the plaquettes in a checkerboard manner, as sketched in Fig. 11.5(A). The net flux of the E-fields through the cell boundaries is the same in both figures (a) and (b), yet the field patterns are clearly quite different. This difference is due to the difference in the transverse fields.

electric fields live on the links between lattice points. Gauss's law then has the form

$$\sum_{j \in \text{nn}(i)} E_{ij} = 4\pi Q_i,$$

where the sum runs over the nearest neighbors of lattice point i. At this stage, we do not yet know the fields. The procedure is now as follows: having placed the charges on some reasonable initial positions, we first generate an initial E-field that satisfies Eq. (11.6.2).⁹ Note that this E-field is not equal to −∇φ. In fact, this original field is not rotation-free (see Fig. 11.4). Once the field has been initialized, we have two types of MC trial moves:

1. We attempt updates of the transverse part of the E-field (see Fig. 11.5A). That part of the field does not "see" the charges and hence does not depend on the particle positions. The role of these MC moves is only to sample all field configurations that are compatible with the given charge distribution. There is a finite thermal energy associated with the transverse field fluctuations, but as this energy does not depend on the charges, it does not affect the Monte Carlo sampling of the positions of the charged particles. As the total field is the sum of −∇φ and ∇ × Z(r), the part of the field that is not

⁹ One way of initializing the field is as follows [505]: we first apply the lattice version of Gauss's law to planes (say, perpendicular to x), then to lines inside these planes, and finally to points inside these lines.


FIGURE 11.5 In its simplest form, the Maxwell Equations Method MC algorithm has two types of MC trial moves: (A) moves that change the rotation of the E-field, but not the divergence. In such a move, a random amount Δ is added in a clockwise (or anti-clockwise) fashion to all links around a plaquette. The charges live on the vertices of the plaquettes. Hence, this move does not change ∇·E. (B) Particle moves: in a (charged) particle trial move, a charged particle is moved from a site to a neighboring site. The E-field (gray arrow) between the old and the new site is then changed such that the Gauss relation holds for the new charge geometry.

fluctuating is equal to E_φ ≡ −∇φ. This part of the field does depend on the positions of the charges, and we change it by moving charges.

2. In a particle move, we attempt to displace a charge from its lattice site to a neighboring lattice site, and at the same time change the field on the link between the old and the new sites, such that Gauss's law is satisfied for the new position of the charge (see Fig. 11.5B).

The above discussion of the MEM MC algorithm is over-simplified. However, it shows that the algorithm is completely local: no long-ranged interactions are computed. Moreover, the computational effort associated with a single-particle MC move is O(1). However, in particular at low temperatures, the acceptance of the trial moves can become rather low. For a discussion of this problem, and of ways to mitigate it, see ref. [504]. Another obvious limitation of the original MEM algorithm is that it deals with discrete charges that live on lattice sites. Obviously, it would be attractive to apply the method to off-lattice models, in which case one would like to distribute the charge of a particle over neighboring lattice sites, using one of the particle-mesh schemes. However, now the self-energy of the electric field of a single charge creates problems: for a single charge moving in the space between lattice sites, the discretized electric-field energy depends on the position of the particle and favors positions in the middle of the lattice cells. This problem is not specific to MEM. The MEM approach has been extended to Molecular Dynamics calculations (see refs. [464,505]). These MD algorithms emulate a discretized version of the full Maxwell equations, but they are different, as the speed of light must be chosen much lower than the real speed of light. We will not discuss these MD algorithms (see, however, ref. [21]). The advantage of the Maggs method in MD is not as clear as it is in MC, as there are many, very efficient MD algorithms to deal with long-ranged forces.
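To make the bookkeeping concrete, here is a minimal two-dimensional toy version of the two MEM trial moves on a periodic L × L lattice. This is our own sketch, not Maggs' production algorithm: unit lattice spacing, charge hops in the +x direction only, and no tuning of the acceptance.

import numpy as np

L, beta, eps_max = 8, 1.0, 1.0
rng = np.random.default_rng(2)

# site charges; the E-field lives on the links in the +x and +y directions
Q = np.zeros((L, L)); Q[0, 0], Q[4, 4] = 1.0, -1.0
Ex = np.zeros((L, L)); Ey = np.zeros((L, L))

def divergence():
    """Lattice divergence of E at every site; Gauss's law demands this equals 4*pi*Q."""
    return (Ex - np.roll(Ex, 1, axis=0)) + (Ey - np.roll(Ey, 1, axis=1))

def init_field():
    """Initial E-field obeying Gauss's law: balance whole columns first (x-links),
    then the sites within each column (y-links), cf. the footnote above."""
    lam = Q.sum(axis=1)                                   # net charge per column
    Ex[:, :] = (4.0 * np.pi * np.cumsum(lam) / L)[:, None]
    resid = 4.0 * np.pi * Q - (Ex - np.roll(Ex, 1, axis=0))
    Ey[:, :] = np.cumsum(resid, axis=1)

def plaquette_move():
    """Add a random circulation around one plaquette: changes curl E, not div E."""
    i, j = rng.integers(L, size=2)
    eps = rng.uniform(-eps_max, eps_max)
    links = [(Ex, i, j, 1), (Ey, (i + 1) % L, j, 1), (Ex, i, (j + 1) % L, -1), (Ey, i, j, -1)]
    dU = sum((F[a, b] + s * eps) ** 2 - F[a, b] ** 2 for F, a, b, s in links) / (8 * np.pi)
    if rng.random() < np.exp(-beta * dU):
        for F, a, b, s in links:
            F[a, b] += s * eps

def particle_move():
    """Move the charge at a random site to its +x neighbour, updating the connecting
    link so that Gauss's law holds for the new charge positions."""
    i, j = rng.integers(L, size=2)
    q = Q[i, j]
    if q == 0.0:
        return
    new_link = Ex[i, j] - 4.0 * np.pi * q
    dU = (new_link ** 2 - Ex[i, j] ** 2) / (8 * np.pi)
    if rng.random() < np.exp(-beta * dU):
        Ex[i, j] = new_link
        Q[i, j] -= q
        Q[(i + 1) % L, j] += q

init_field()
for sweep in range(10000):
    plaquette_move()
    particle_move()
assert np.allclose(divergence(), 4.0 * np.pi * Q)   # the Gauss constraint is conserved

Both moves touch only a handful of links, so the cost per trial move is O(1), as stated above; the final assertion checks that the Gauss constraint is preserved throughout the run.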


11.6.2 Event-driven Monte Carlo approach

Another O(N) algorithm is based on the Event Chain Monte Carlo (ECMC) method described in Chapter 13, section 13.4.4. We will not repeat the discussion of the ECMC method here, but just mention that by embedding each charge in a neutralizing background (either a neutralizing line charge or a neutralizing volume charge), the interactions between the charge that is subject to a trial move and the other charges are sufficiently short-ranged to make use of the cell-veto method discussed in section 13.4.4. In fact, as discussed by Faulkner et al. [89], several choices are possible for treating Coulomb interactions in the context of Event-Chain MC. Although the ECMC approach does not eliminate the long-range nature of Coulomb forces, the expensive calculations need to be done only once, at the beginning of a simulation. For more details, we refer the reader to Ref. [89].

11.7 Hyper-sphere approach

Finally, we mention an early method to treat Coulomb interactions, due to Caillol and Levesque [506]. The approach of ref. [506] is based on the idea of embedding the system as a (hyper)sphere in a higher-dimensional space. As a consequence, the range of the Coulomb interaction is limited by the diameter of the hyper-sphere. Of course, curving space does change the properties of a system, in particular for crystalline solids, where the curvature of space is incompatible with a defect-free crystal. However, as shown in ref. [506], the method yields reasonable results for liquids. We will not discuss the Caillol-Levesque method, but refer the reader to the original paper for more details.

Illustration 15 (Algorithms to calculate long-range interactions). As the speed of the computation of long-ranged forces is often a bottleneck in large-scale simulations, much effort has been invested in optimizing this part of the simulation code. In fact, there is a large number of algorithms (and acronyms), some of which we have discussed in this chapter. Clearly, it is useful to know which method is best, and the answer is: "that depends". First of all, as existing algorithms are improved all the time, and new algorithms keep being introduced, any comparison can only be a snapshot. But, more importantly, different conditions and requirements may change the relative ranking of algorithms. For instance, some simulations focus on homogeneous systems, and others may deal with large inhomogeneities, sharp boundaries, or spatial variations of the dielectric constant. In addition, what is best for MD may be less useful for (single-particle) MC. And then, there are factors such as the desired accuracy, the number of processors, and other machine or compiler-related issues.


Still, it is useful to give some impression of the performance of some of the better-known algorithms, as long as the reader is aware of the fact that other comparisons might give a somewhat different ranking. In what follows, we briefly discuss the findings of a careful comparison by Arnold et al. [464] of half a dozen of the most competitive (or promising) algorithms. The comparison was published in 2013, and applies to MD simulations of homogeneous systems. A more focused review of Poisson solvers [134] was published in 2016. Still, at the time of the writing of the current edition of this book, we were not aware of comprehensive reviews that also cover more recent algorithms, such as the one mentioned in section 11.6.2.

FIGURE 11.6 (a) Comparison of the wall-clock time per charge for a number of highquality algorithms to compute Coulomb interactions in Molecular Dynamics simulations. The acronyms in the legend are briefly discussed in the text. They refer to: ParticleParticle/Particle-Mesh (PPPM), Fast-Multipole Method (FMM), MEMD refers to the molecular dynamics implementation of the Maxwell equation method. The most striking feature of this figure is that all these algorithms scale effectively as O(N ), even the one that uses fast Fourier transforms, which scale as O(N ln N ). (b) Comparison of the wall-clock time per charge as a function of the RMS error in the potential energy. Figure based on the data of ref. [464].

With these caveats, let us look at some of the findings of ref. [464]. First of all, we would like to get an impression of how the speeds of the different algorithms compare. Such a comparison is shown in Fig. 11.6 (a). The MD simulation data shown in this figure refer to systems of 12,960 up to 424,673,280 BKS-silicon particles [507]. Details about the simulation parameters and the hardware used can be found in ref. [464]. The figure compares three algorithms, some of which we discussed in this chapter, namely: PPPM, a version of the "Particle-Particle-Particle-Mesh" method of ref. [28]; FMM, a version of the "Fast Multipole Method" of refs. [492,493]; and MEM, which is the MD version [505] of the Maxwell Equations Method discussed in section 11.6.1. The most striking feature of Fig. 11.6 (a) is that all algorithms scale effectively linearly with system size. To be more precise: the ones that include an


FFT step, should eventually scale as N ln N. However, as the FFT part of the algorithm takes only a small fraction of the time, even for these rather large systems, the logarithmic corrections are barely visible. Next, we look at the relative performance of the algorithms. For the accuracy shown in Fig. 11.6 (a), the MEM method seems somewhat slower than the other two. However, the conclusion about relative performance may depend on the nature of the system under study: systems that are very inhomogeneous (e.g., containing large regions of low density) tend to be better suited for fast-multipole methods, whereas the MEM algorithm is suited for MC simulations because the effect of single-particle moves is local. Apart from these considerations, there is also the question of the desired accuracy. Arnold et al. made a comparison for the same set of algorithms, but now for a fixed system size (N = 102,900), and considered the cost of decreasing the relative root-mean-square error in the potential energy (see Fig. 11.6 (b)). For most simulations, a relative error of O(10⁻⁴) is acceptable. Hence, it is usually not of much interest to know how algorithms behave when the relative error is 10⁻⁷ or less (see ref. [464]).

Chapter 12

Configurational-bias Monte Carlo

Up to this point, we have barely addressed the fairly obvious question: what is the point of using the Monte Carlo technique in simulations? After all, Molecular Dynamics simulations can be used to study the static properties of many-body systems and, in addition, MD provides information about their dynamical behavior. Moreover, a standard MD simulation is computationally no more expensive than the corresponding MC simulation. Hence, it would seem tempting to conclude that the MC method is an elegant but outdated scheme. As the reader may have guessed, we believe that there are good reasons to use MC rather than MD in certain cases. But we stress the phrase in certain cases. All other things being equal, MD is clearly the method of choice. Hence, if we use the Monte Carlo technique, we should always be prepared to justify our choice. Of course, the reasons may differ from case to case. Sometimes it is simply a matter of ease of programming: in MC simulations there is no need to compute forces. This is irrelevant if we work with pair potentials, but for many-body potentials, the evaluation of the forces may be nontrivial. Another possible reason is that we are dealing with a system that has no natural dynamics. For instance, this is the case in models with discrete degrees of freedom (e.g., Ising spins). And, indeed, for simulations of lattice models, MC is almost always the technique of choice. But even in off-lattice models with continuous degrees of freedom, it is sometimes better, or even essential, to use Monte Carlo sampling. Usually, the reason to choose the MC technique is that it allows us to perform unphysical trial moves, that is, moves that cannot occur in nature (and, therefore, have no counterpart in Molecular Dynamics) but are essential for the equilibration of the system. This introduction is meant to place our discussion of Monte Carlo techniques for simulating complex fluids in a proper perspective: in most published simulations of complex (often macromolecular) fluids, Molecular Dynamics is used, and rightly so. The Monte Carlo techniques that we discuss here have been developed for situations where either MD cannot be used at all or the natural dynamics of the system is too slow to allow the system to equilibrate on the time scale of a simulation. Examples of such simulations are Gibbs-ensemble and grand-canonical Monte Carlo simulations. Both techniques require the exchange of particles, either between a reservoir and the simulation box or between the two boxes. Such



particle exchanges are not related to any real dynamics and therefore require the use of Monte Carlo techniques. But, in the case of complex fluids, in particular, fluids consisting of chain molecules, the conventional Monte Carlo techniques for grand-canonical or Gibbs-ensemble simulations also fail. The reason is that, in the case of large molecules, the probability of acceptance of a random trial insertion in the simulation box is extremely small and hence the number of insertion attempts has to be made prohibitively large. For this reason, the early grand-canonical and Gibbs-ensemble simulations were limited to the study of adsorption and liquid-vapor phase equilibria of small molecules.

12.1 Biased sampling techniques In this chapter, we discuss extensions of the standard Monte Carlo algorithm that allow us to overcome some of these limitations.1 The main feature of these more sophisticated Monte Carlo trial moves is that they are no longer completely random: the moves are biased in such a way that the molecule to be inserted has an enhanced probability to “fit” into the existing configuration. In contrast, no information about the present configuration of the system is used in the generation of normal (unbiased) MC trial moves: that information is used only to accept or reject the move (see Chapters 3 and 6). Biasing a Monte Carlo trial move means that we are no longer working with a symmetric a priori transition matrix. To satisfy detailed balance, we therefore also should change the acceptance rules. Clearly, the price we pay for using configurationally biased MC trial moves is a greater complexity of our program. However, the reward is that, with the help of these techniques, we can sometimes speed up a calculation by many orders of magnitude. To illustrate this, we shall discuss examples of simulations that were made possible only through the use of biased sampling.

12.1.1 Beyond Metropolis

The general idea of biased sampling is best explained by considering a simple example. Let us assume that we have developed a Monte Carlo scheme that allows us to generate trial configurations with a probability that depends on the potential energy of that configuration: α(o → n) = f[U(n)]. For the reverse move, we have α(n → o) = f[U(o)]. Suppose we want to sample the N,V,T ensemble, which implies that we have to generate configurations with a Boltzmann distribution (6.2.1). Imposing detailed balance (see section 6.1) yields, as a condition for the acceptance rule,

$$\frac{\text{acc}(o \to n)}{\text{acc}(n \to o)} = \frac{f[\mathcal U(o)]}{f[\mathcal U(n)]} \exp\{-\beta[\mathcal U(n) - \mathcal U(o)]\}.$$

A possible acceptance rule that obeys this condition is

$$\text{acc}(o \to n) = \min\left(1, \frac{f[\mathcal U(o)]}{f[\mathcal U(n)]} \exp\{-\beta[\mathcal U(n) - \mathcal U(o)]\}\right). \tag{12.1.1}$$

¹ Readers who are not familiar with the Rosenbluth scheme are advised to read section 10.2 first.

This derivation shows that we can introduce an arbitrary biasing function f (U) in the sampling scheme and generate a Boltzmann distribution of configurations, provided that the acceptance rule is modified in such a way that the bias is removed from the sampling scheme. Ideally, by biasing the probability to generate a trial conformation in the right way, we could make the term on the right-hand side of Eq. (12.1.1) always equal to unity. In that case, every trial move will be accepted. In section 13.4.2, we show that it is sometimes possible to achieve this ideal situation. However, in general, biased generation of trial moves is simply a technique for enhancing the acceptance of such moves without violating detailed balance. We now give some examples of the use of non-Metropolis sampling techniques to demonstrate how they can be used to enhance the efficiency of a simulation.
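As a toy illustration of Eq. (12.1.1) (our own sketch, with arbitrary parameter choices): a single coordinate on a grid in a harmonic potential, with trial states generated from a Boltzmann distribution at a "generation" temperature 1/beta_gen, and the bias removed again in the acceptance step.

import numpy as np

rng = np.random.default_rng(3)
beta, beta_gen = 2.0, 1.0                  # target and generation temperatures (assumed values)
x = np.linspace(-3, 3, 601)                # discrete 1d configuration space
U = 0.5 * x**2                             # harmonic external potential

# biased generation: trial states drawn with probability f[U] ~ exp(-beta_gen*U)
p_gen = np.exp(-beta_gen * U)
p_gen /= p_gen.sum()

state = 300
hist = np.zeros_like(x)
for step in range(100000):
    trial = rng.choice(len(x), p=p_gen)
    # Eq. (12.1.1): the generation bias f[U(o)]/f[U(n)] multiplies the Boltzmann ratio
    acc = min(1.0, np.exp(beta_gen * (U[trial] - U[state]) - beta * (U[trial] - U[state])))
    if rng.random() < acc:
        state = trial
    hist[state] += 1

# crude check: the visited-state histogram should follow exp(-beta*U), not exp(-beta_gen*U)
print(np.corrcoef(hist, np.exp(-beta * U))[0, 1])

Note that for beta_gen equal to beta the acceptance probability is identically one, which is precisely the ideal situation referred to above.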

12.1.2 Orientational bias To perform a Monte Carlo simulation of molecules with an intermolecular potential that depends strongly on the relative molecular orientation (e.g., polar molecules, hydrogen-bond formers, liquid-crystal forming molecules), it is important to find a position that not only does not overlap with the other molecule but also has an acceptable orientation. If the probability of finding a suitable orientation by chance is very low, we can use biased trial moves to enhance the acceptance. Algorithm Let us consider a Monte Carlo trial move in which a randomly selected particle has to be moved and reoriented. We denote the old configuration by o and the trial configuration by n. We use standard random displacement for the translational parts of the move, but we bias the generation of trial orientations, as follows: 1. Move the center of mass of the molecule over a (small) random distance and determine all those interactions that do not depend on the orientations. These interactions are denoted by upos (n). In practice, there may be several ways to separate the potential into orientation-dependent and orientationindependent parts.


2. Generate k trial orientations {b₁, b₂, ⋯, b_k}, and for each of these trial orientations, calculate the energy u^or(b_i).
3. We define the Rosenbluth² factor

$$W(n) = \sum_{j=1}^{k} \exp[-\beta u^{\text{or}}(\mathbf b_j)]. \tag{12.1.2}$$

Out of these k orientations, we select one, say, n, with a probability

$$p(\mathbf b_n) = \frac{\exp[-\beta u^{\text{or}}(\mathbf b_n)]}{\sum_{j=1}^{k} \exp[-\beta u^{\text{or}}(\mathbf b_j)]}. \tag{12.1.3}$$

4. For the old configuration, o, the part of the energy that does not depend on the orientation of the molecules is denoted by u^pos(o). The orientation of the molecule in the old position is denoted by b_o, and we generate k − 1 trial orientations denoted by b₂, ⋯, b_k. Using these k orientations, we determine

$$W(o) = \exp[-\beta u^{\text{or}}(\mathbf b_o)] + \sum_{j=2}^{k} \exp[-\beta u^{\text{or}}(\mathbf b_j)]. \tag{12.1.4}$$

5. The move is accepted with a probability

$$\text{acc}(o \to n) = \min\left(1, \frac{W(n)}{W(o)} \exp\{-\beta[u^{\text{pos}}(n) - u^{\text{pos}}(o)]\}\right). \tag{12.1.5}$$

It is clear that Eq. (12.1.3) ensures that energetically favorable configurations are more likely to be generated. An example implementation of this scheme is shown in Algorithm 21. Next, we should demonstrate that the sampling scheme is correct.

Justification of algorithm
To show that the orientational-bias Monte Carlo scheme just described is correct, that is, generates configurations according to the desired distribution, it is convenient to consider lattice models and continuum models separately. For both cases, we assume that we work in the canonical ensemble, for which the distribution of configurations is given by Eq. (6.2.1),

$$\mathcal N(\mathbf q^N) \propto \exp[-\beta\, \mathcal U(\mathbf q^N)],$$

where U(q^N) is the sum of the orientational and non-orientational parts of the energy: U = u^or + u^pos.

² Since this algorithm for biasing the orientation of the molecules is very similar to an algorithm developed by Rosenbluth and Rosenbluth in 1955 [438] for sampling configurations of polymers (see section 10.2), we refer to the factor W as the Rosenbluth factor.
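Algorithm 21 below calls a routine select(w, sumw) that picks one of the k trial orientations with the probability of Eq. (12.1.3). A minimal version (our own sketch; the names select, w, and sumw follow the pseudocode) uses "tower sampling":

import numpy as np

def select(w, sumw, rng=np.random.default_rng()):
    """Pick index n with probability w[n]/sumw (Eq. 12.1.3): draw a uniform number
    on (0, sumw) and return the first index whose partial sum exceeds it."""
    ws = rng.random() * sumw
    cumw = 0.0
    for n, wn in enumerate(w):
        cumw += wn
        if cumw >= ws:
            return n
    return len(w) - 1   # guard against round-off in the last partial sum

The same routine can be reused whenever a trial segment has to be picked in proportion to its Boltzmann weight, as in the chain-growth moves discussed later in this chapter.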


Algorithm 21 (Orientational bias)

function orien_bias                     configurational-bias MC trial move to change the orientation of molecule o
  o = int(R*npart)+1                    select a particle at random
  sumw = 0
  for 1 ≤ j ≤ k do                      k, the number of trial directions, is arbitrary but fixed
    b(j) = ranor                        generate a random trial direction
    eno = enero(x(o),b(j))              calculate energy of trial orientation
    w(j) = exp(-beta*eno)               calculate Rosenbluth factor (12.1.2)
    sumw = sumw + w(j)
  enddo
  n = select(w,sumw)                    select one of the orientations
  bn = b(n)                             bn is the selected (new) orientation
  wn = sumw                             Rosenbluth factor of the new orientation
  sumw = 0                              next, consider the old orientation
  for 1 ≤ j ≤ k do                      consider k trial orientations
    if j == 1 then
      b(j) = bu(o)                      use the actual orientation of particle o
    else
      b(j) = ranor                      generate a random orientation
    endif
    eno = enero(x(o),b(j))              calculate energy of trial orientation j
    sumw = sumw + exp(-beta*eno)        calculate Rosenbluth factor (12.1.4)
  enddo
  wo = sumw                             Rosenbluth factor of the old orientation
  if R < wn/wo then                     acceptance test, Eq. (12.1.5) with u^pos unchanged
    bu(o) = bn                          accepted: store the new orientation of particle o
  endif
end function

As before, we wish to modify the configurational-bias Monte Carlo sampling of conformations of a fully flexible chain in such a way that the chain is forced to terminate at r₂. There are two ways to do this. In one approach, we include the bias in the probability with which we generate trial directions; in the second, the bias is in the acceptance probability. In either case, our approach does not depend on the specific form of p₁(r), but only on the existence of the recurrence relation (12.4.7). In the first approach, we use the following scheme of generating the ith segment out of ℓ segments to be regrown. We generate k trial segments, all starting at the current trial position r, such that the a priori probability of generating a


given trial direction (say, Δ_j) is proportional to the probability of having an ideal chain conformation of length ℓ − i between this trial segment and the final position r₂. Let us denote this a priori probability by p_bond(Δ_j). By construction, p_bond(Δ_j) is normalized. Using Eq. (12.4.7) we can easily derive an explicit expression for p_bond:

$$p_{\text{bond}}(\boldsymbol\Delta) = \frac{p_1(\boldsymbol\Delta)\, P(\mathbf r + \boldsymbol\Delta - \mathbf r_2;\, \ell - i)}{\int d\boldsymbol\Delta'\, p_1(\boldsymbol\Delta')\, P(\mathbf r + \boldsymbol\Delta' - \mathbf r_2;\, \ell - i)} = \frac{p_1(\boldsymbol\Delta)\, P(\mathbf r + \boldsymbol\Delta - \mathbf r_2;\, \ell - i)}{P(\mathbf r - \mathbf r_2;\, \ell - i + 1)}. \tag{12.4.9}$$

From here on, we treat the problem just like the sampling of a continuously deformable chain, described in section 12.2.3. That is, we select one of the k trial directions with a probability

$$P_{\text{sel}}(j) = \frac{\exp[-\beta u^{\text{ext}}(\boldsymbol\Delta_j)]}{\sum_{j'=1}^{k} \exp[-\beta u^{\text{ext}}(\boldsymbol\Delta_{j'})]}.$$

The contribution to the total Rosenbluth weight of the set of k trial directions generated in step i is

$$w_i \equiv \frac{\sum_{j'=1}^{k} \exp[-\beta u^{\text{ext}}(\boldsymbol\Delta_{j'})]}{k}.$$

The overall probability of moving from the old conformation Γ_old to a new conformation Γ_new is proportional to the product of the probability of generating the new conformation and the ratio of the new to the old Rosenbluth weights. The condition of (super-)detailed balance requires that the product of the probability of generating the new conformation times the Rosenbluth weight of that conformation is (but for a factor that is the same for the old and new conformations) equal to the product of the Boltzmann weight of that conformation and the properly normalized probability of generating the corresponding ideal (i.e., noninteracting) conformation. If we write the expression for this product, we find that

$$\prod_{i=1}^{\ell} P_{\text{gen}}[\boldsymbol\Delta_j(i)]\, w_i
= \prod_{i=1}^{\ell} \left\{ \frac{p_1(\mathbf r_i - \mathbf r_{i-1})\, P(\mathbf r_i - \mathbf r_2;\, \ell - i)}{P(\mathbf r_{i-1} - \mathbf r_2;\, \ell - i + 1)}
\times \frac{\exp\{-\beta u^{\text{ext}}[\boldsymbol\Delta_j(i)]\}}{\sum_{j'=1}^{k} \exp\{-\beta u^{\text{ext}}[\boldsymbol\Delta_{j'}(i)]\}}
\times \frac{\sum_{j'=1}^{k} \exp\{-\beta u^{\text{ext}}[\boldsymbol\Delta_{j'}(i)]\}}{k} \right\}
= \frac{\exp[-\beta\, \mathcal U^{\text{ext}}(\Gamma_{\text{total}})]\, \prod_{i=1}^{\ell} p_1(\mathbf r_i - \mathbf r_{i-1})}{k^{\ell}\, P(\mathbf r_{12};\, \ell)}. \tag{12.4.10}$$


As the last line of this equation shows, the conformations are indeed generated with the correct statistical weight. In ref. [521] this scheme has been applied to simulate model homopolymers, random heteropolymers, and random copolymers consisting of up to 1000 Lennard-Jones beads. For molecules with strong intramolecular interactions, the present scheme will not work and other approaches are needed.
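For the special case of a fully flexible (Gaussian) chain, the biased generation step of Eq. (12.4.9) can be sampled in closed form, because the product p₁(Δ)P(r + Δ − r₂; ℓ − i) is again a Gaussian. The sketch below is our own (b is the bond length; the selection among k trial vectors with the external energy, and the Rosenbluth bookkeeping, are omitted); it grows such a "Brownian bridge" that terminates exactly at r₂.

import numpy as np

def grow_bridge(r1, r2, ell, b, rng):
    """Grow an ideal (Gaussian) chain of ell bonds from r1 that is forced to end at r2.

    Each bond is drawn from p1(Delta)*P(r+Delta-r2; ell-i), which for Gaussian bonds
    is a Gaussian with a shifted mean and a reduced variance."""
    r = np.array(r1, dtype=float)
    r2 = np.asarray(r2, dtype=float)
    positions = [r.copy()]
    for i in range(1, ell + 1):
        n_left = ell - i                              # ideal segments still to go after this bond
        mean = (r2 - r) / (n_left + 1)
        var = (b**2 / 3.0) * n_left / (n_left + 1)    # per-component variance
        delta = mean + rng.normal(scale=np.sqrt(var), size=3)
        r = r + delta
        positions.append(r.copy())
    return np.array(positions)

rng = np.random.default_rng(4)
conf = grow_bridge(r1=[0.0, 0.0, 0.0], r2=[2.0, 1.0, 0.0], ell=10, b=1.0, rng=rng)
print(conf[-1])    # ends exactly at r2 by construction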

12.4.3 Strong intramolecular interactions

In the previous section, we have shown that we can use the configurational-bias Monte Carlo scheme to grow a chain of length n between two fixed endpoints r₁ and r₂ if we know the probability density of conformations of length n between these points. For the special case of a fully flexible chain, this probability distribution is known analytically. For chains with strong intramolecular interactions, such an analytical distribution is not known. Wick and Siepmann [522] and Chen and Escobedo [523] have shown that one can use an approximate distribution. Chen and Escobedo [523] estimate this distribution using a simulation of an isolated chain with bonded interactions only. Wick and Siepmann [522] proposed a scheme in which this estimated probability distribution is further refined during the simulation.

12.5 Beyond polymers

Thus far, the CBMC scheme has been presented exclusively as a method of generating polymer conformations. The method is more general than that. It can be used as a scheme to perform collective rearrangements of any set of labeled coordinates. In fact, the scheme can be used to carry out Monte Carlo moves to swap n small particles within a volume V with one large particle that occupies the same (excluded) volume. This application of the CBMC scheme has been exploited by Biben et al. [524,525] to study mixtures of large and small hard spheres. Gibbs ensemble simulations of mixtures of spherical colloids and rodlike polymers were performed by Bolhuis and Frenkel [526] (see Illustration 17), using CBMC-style particle swaps, and a closely related approach was employed by Dijkstra and co-workers to study phase separation [518,519] in mixtures of large and small hard-core particles on a lattice. An application of CBMC for improving the sampling of ionic solutions has been proposed by Shelley and Patey [527]. A different application of the CBMC ideas is used by Esselink et al. [528] to develop an algorithm to perform Monte Carlo moves in parallel. Parallel Monte Carlo appears to be a contradiction in terms, since the Monte Carlo procedure is an intrinsically sequential process. One has to know whether the current move is accepted or rejected before one can continue with the next move. The conventional way of introducing parallelism is to distribute the energy calculation over various processors or to farm out the calculation by performing separate


simulations over various processors. Although the last algorithm is extremely efficient and requires minimum skills to use a parallel computer, it is not a truly parallel algorithm. For example, farming out a calculation is not very efficient if the equilibration of the system takes a significant amount of CPU time. In the algorithm of Esselink et al., several trial positions are generated in parallel, and out of these trial positions, the one with the highest probability of being accepted is selected. This selection step introduces a bias that is removed by adjusting the acceptance rules. The generation of each trial move, which includes the calculation of the energy (or Rosenbluth factor in the case of chain molecules), is distributed over the various processors. Loyens et al. have used this approach to perform phase equilibrium calculations in parallel using the Gibbs ensemble technique [529]. An interesting application of this parallel scheme is the multiple-first-bead algorithm. In a conventional CBMC simulation one would have to grow an entire chain before one can reject a configuration that is "doomed" from the start because the very first bead has an unfavorable energy. If the chains are long, growing them to the end before deciding on acceptance can be inefficient, and it becomes advantageous to use a multiple-first-bead scheme [528].⁷ Instead of generating a single trial position for the first bead, k trial positions are generated. The energy of these beads, u₁(j) with j = 1, ..., k, is calculated, and one of these beads, say j, is selected using the Rosenbluth criterion:

$$P_{1\text{st}}(j) = \frac{\exp[-\beta u_1(j)]}{w_1(n)}, \qquad\text{where}\qquad w_1(n) = \sum_{i=1}^{k} \exp[-\beta u_1(i)].$$

Also for the old configuration, one should use a similar scheme to compute w₁(o). For some moves, the same set of first beads used for the new configuration can be used to compute the Rosenbluth factor for the old configuration [530]. To ensure detailed balance, the Rosenbluth factors associated with the multiple first beads should be taken into account in the acceptance rule:

$$\text{acc}(o \to n) = \min\left(1, \frac{w_1(n)\, W(n)}{w_1(o)\, W(o)}\right),$$

where W(n) and W(o) are the (conventional) Rosenbluth factors of the new and the old configurations of the chain, respectively, excluding the contribution of the first segment. Vlugt et al. [531] have shown that a multiple-first-bead move can increase the efficiency of simulations of n-alkanes up to a factor of 3.

⁷ Note that the same problem is also addressed by the early-rejection method discussed in section 13.4.3.
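A minimal sketch of the multiple-first-bead step (our own; external_energy is a placeholder for the model's actual energy routine): k candidate first beads are generated, one is picked with probability exp(−βu₁)/w₁, and w₁ is returned so that it can be included in the acceptance rule above.

import numpy as np

def pick_first_bead(trial_positions, external_energy, beta, rng):
    """Select one of k candidate first beads with the Rosenbluth criterion and
    return it together with its energy and the weight w1 = sum_i exp(-beta*u1(i))."""
    u1 = np.array([external_energy(r) for r in trial_positions])
    w = np.exp(-beta * u1)
    w1 = w.sum()
    j = rng.choice(len(w), p=w / w1)
    return trial_positions[j], u1[j], w1

rng = np.random.default_rng(5)
u_ext = lambda r: np.sum(r**2)                     # toy external potential (assumption)
candidates = rng.uniform(-1.0, 1.0, size=(10, 3))  # k = 10 random first-bead positions
bead, u1, w1_new = pick_first_bead(candidates, u_ext, beta=1.0, rng=rng)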


Another extension of the CBMC approach is the use of a dual-cutoff radius [531]. The idea is that usually a particular trial conformation is accepted not because it is energetically very favorable, but because its competitors are so unfavorable. This suggests that one can use a much cheaper potential to perform a prescreening of acceptable trial configurations in a CBMC move. Let us split the potential into a contribution that is cheap to compute and the expensive remainder:

$$U(\mathbf r) = U^{\text{cheap}}(\mathbf r) + \Delta U(\mathbf r).$$

This can be done, for example, by splitting the potential into a long-range and a short-range part. We can now use the cheap part in our CBMC scheme to generate trial configurations. The probability of generating a given configuration is then

$$P^{\text{cheap}}(n) = \frac{\exp[-\beta U^{\text{cheap}}(n)]}{W^{\text{cheap}}(n)},$$

and the move is accepted using

$$\text{acc}(o \to n) = \min\left(1, \frac{W^{\text{cheap}}(n)}{W^{\text{cheap}}(o)} \exp\{-\beta[\Delta U(n) - \Delta U(o)]\}\right).$$

In ref. [531] it is shown that this scheme obeys detailed balance. The advantage of this algorithm is that the expensive part of the energy calculation has to be performed only once and not for every trial segment. A typical application would be to include the Fourier part of an Ewald summation in ΔU. Many variations on this theme exist: one example is Hybrid MC (see section 13.3.1).

Illustration 17 (Mixtures of colloids and polymers). We have presented CBMC as a scheme for sampling conformations of chain molecules. However, the method is more general than that. It can be used to perform collective rearrangements of any set of labeled coordinates. For instance, the scheme can be used to carry out Monte Carlo moves to swap n small particles within a volume V with one large particle that occupies the same (excluded) volume. This application of the CBMC scheme has been exploited by Biben [524] to study mixtures of large and small hard spheres. Gibbs ensemble simulations of mixtures of spherical colloids and rodlike polymers were performed in ref. [526] using CBMC-style particle swaps, and a closely related approach was employed by Dijkstra et al. [518,519] to study phase separation of mixtures of large and small hard-core particles on a lattice. Below, we briefly discuss an example of such a CBMC scheme, related to the phase behavior of colloidal suspensions [526]. Examples of colloidal solutions are milk, paint, and mayonnaise. Since a single colloidal particle may contain more than 10⁹ atoms, it is not practical to model such a particle as a collection of atoms. It is better to describe colloidal solutions using


coarse-grained models. For example, a suspension of sterically stabilized silica spheres in a nonpolar solvent can be described surprisingly accurately with a hard-sphere potential. Similar to the hard-sphere fluid, such a colloidal suspension has a "fluid-solid" transition but not a "liquid-gas" transition. To be more precise, the colloidal particles undergo a transition from a liquid-like arrangement to a crystalline structure. But in either case, the solvent remains liquid. In what follows, the terms "crystal," "liquid," and "gas" refer to the state of the colloidal particles in suspension. Experimentally, it is observed that a liquid-gas transition can be induced in a suspension of hard-sphere colloids by adding nonadsorbing polymers. The addition of polymers induces an effective attraction between the colloidal particles. This attraction is not related to any change in the internal energy of the system but to an increase in entropy. It is not difficult to understand the origin of such entropic attractions. Let us assume that the polymers in solution do not interact with each other. This is never rigorously true, but for dilute solutions of long, thin molecules, it is a good first approximation. The translational entropy of N polymers in a volume V is then equal to that of N ideal-gas molecules occupying the same volume:

$$S^{(0)}_{\text{trans}} = \text{constant} + N k_B \ln V,$$

where the constant accounts for all those contributions that do not depend on the volume V. In the absence of colloids, the volume accessible to the polymers is equal to V₀, the volume of the container. Now suppose that we add one hard colloidal particle with radius R_c. As the polymers cannot penetrate the colloidal particle, such a colloid excludes the polymers from a spherical volume with radius R_excl ≡ R_c + R_p, where R_p is the effective radius of the polymer (for flexible polymers, R_p is on the order of the radius of gyration, and for rigid polymers, R_p is of order O(L), where L is the length of the polymer). Let us denote the volume excluded by one colloid by v^c_excl. Clearly, the entropy of N polymers in the system that contains one colloid is

$$S^{(1)}_{\text{trans}} = \text{constant} + N k_B \ln(V_0 - v^c_{\text{excl}}).$$

Now consider what happens if we have two colloidal spheres in the solution. Naively, one might think that the entropy of the polymer solution is now equal to

$$S^{(2)}_{\text{trans}} = \text{constant} + N k_B \ln(V_0 - 2 v^c_{\text{excl}}).$$

However, this is only true if the two colloids are far apart. If they are touching, their exclusion zones overlap, and the total excluded volume v^pair_excl is less than 2v^c_excl. This implies that the entropy of the polymers is larger when the colloids are touching than when they are far apart. Therefore, we can lower the free energy of the polymer solution by bringing the colloids close together. And this is the origin of entropic attraction. The strength of the attraction can be tuned by changing the polymer concentration, and for sufficiently high polymer concentrations, the colloidal suspensions may undergo a "liquid-vapor" phase separation. In the present example, we consider the phase behavior of a mixture of colloidal hard spheres and thin hard rods [526]. In principle, we can use Gibbs ensemble simulations to study the "vapor-liquid" coexistence in this mixture. However, a conventional Gibbs ensemble simulation is likely to fail, as the transfer of a colloidal sphere from one simulation box to the other will, almost certainly, result in an overlap of the sphere with some of the rodlike


polymers. We can now use the CBMC approach to perform such a trial move with a higher chance of success. In this scheme, we perform the following steps: 1. Randomly select a sphere in one of the boxes and insert this sphere at a random position in the other box. 2. Remove all the rods that overlap with this sphere. These rods are inserted in the other box. The positions and orientations of the rods are chosen such that they intersect with the volume vacated by the colloid —but apart from that, they are random. Even though we have thus ensured that the rods are in, or near, the “cavity” left by the colloidal sphere, they are very likely to overlap with one or more of the remaining spheres. However, if one tries several orientations and positions of the rods and selects an acceptable configuration using the configurational-bias Monte Carlo scheme, one can strongly enhance the acceptance probability of such particle swaps.

FIGURE 12.7 Coexistence curves for a mixture of hard spheres and thin rods [526]. The horizontal axis measures the density, and the vertical axis the fugacity (= exp(βμ)). L/σ is the ratio of the length of the rods to the diameter of the hard spheres.

The results of these Gibbs ensemble simulations are presented in Fig. 12.7. This figure shows that if one increases the fugacity (and thereby the concentration) of the rods, a demixing into a phase with a low density of spheres and a phase with a high density of spheres occurs. The longer the rods, the lower the concentration at which this demixing occurs. We stress once again that, in this system, only hard-core interactions between the particles exist. Therefore this demixing is driven by entropy alone.

12.6 Other ensembles 12.6.1 Grand-canonical ensemble In Chapter 6, we introduced the grand-canonical ensemble in the context of simulations of systems in open contact with a reservoir. An essential ingredient of Monte Carlo simulations in this ensemble is the random insertion or removal of particles. Clearly, such simulations will be efficient only if there is


a reasonable acceptance probability of particle-insertion moves. In particular, for polyatomic molecules, this is usually a problem. Let us consider the system mentioned in Illustration 4, a grand-canonical ensemble simulation of the adsorption of molecules in the pores of a microporous material such as a zeolite. For single atoms, the probability that we find an arbitrary position that does not overlap with one of the atoms of the zeolite lattice is on the order of 1 in 10³. For dimers, we have to find two positions that do not overlap, and if we assume that these positions are independent, the probability of success will be 1 in 10⁶. Clearly, for long-chain molecules, the probability of a successful insertion is so low that, to obtain a reasonable number of accepted insertions, the number of attempts needs to be prohibitively large. In the present section, we demonstrate how the configurational-bias Monte Carlo technique can be used in the grand-canonical ensemble to make the exchange step of chain molecules more probable.

Algorithm
As in the general scheme of the configurational-bias Monte Carlo technique for off-lattice systems, we divide the potential energy of a given conformation into a bonded potential energy (U^bond), which includes the local intramolecular interactions, and an external potential energy (U^ext), which includes the intermolecular interactions and the nonbonded intramolecular interactions (see section 12.2.3). A chain that has only bonded interactions is defined as an ideal chain. Let us now consider the Monte Carlo trial moves for the insertion and removal of particles.

Particle insertion
To insert a particle into the system, we use the following steps:
1. For the first monomer, a random position is selected, and the energy of this monomer is calculated. This energy is denoted by $u_1^{\text{ext}}(n)$, and we define $w_1^{\text{ext}}(n) = k \exp[-\beta u_1^{\text{ext}}(n)]$ (as before, the factor k is introduced only to simplify the subsequent notation).
2. For the following monomers, a set of k trial positions is generated. We denote these positions by {b}_k = (b₁, b₂, ⋯, b_k). This set of trial orientations is generated using the bonded part of the potential, which results in the following distribution for the ith monomer:

$$p_i^{\text{bond}}(\mathbf b)\, d\mathbf b = C \exp[-\beta u_i^{\text{bond}}(\mathbf b)]\, d\mathbf b, \tag{12.6.1}$$

with

$$C^{-1} \equiv \int d\mathbf b\, \exp[-\beta u_i^{\text{bond}}(\mathbf b)]. \tag{12.6.2}$$

Note that the way the trial orientations are generated depends on the type of monomer being added (see section 12.3). For each of these trial positions the


external energy, $u_i^{\text{ext}}(\mathbf b_j)$, is calculated, and one of these positions is selected with a probability

$$p_i^{\text{ext}}(\mathbf b_n) = \frac{\exp[-\beta u_i^{\text{ext}}(\mathbf b_n)]}{w_i^{\text{ext}}(n)}, \tag{12.6.3}$$

in which

$$w_i^{\text{ext}}(n) = \sum_{j=1}^{k} \exp[-\beta u_i^{\text{ext}}(\mathbf b_j)].$$

3. Step 2 is repeated until the entire alkane of length ℓ has been grown, and the normalized Rosenbluth factor can be calculated:

$$\mathcal W^{\text{ext}}(n) \equiv \frac{W^{\text{ext}}(n)}{k^{\ell}} = \prod_{i=1}^{\ell} \frac{w_i^{\text{ext}}(n)}{k}. \tag{12.6.4}$$

4. The new molecule is accepted with a probability

$$\text{acc}(N \to N+1) = \min\left(1, \frac{q(T)\, \exp(\beta\mu_B)\, V}{N+1}\, \mathcal W^{\text{ext}}(n)\right), \tag{12.6.5}$$

where μ_B is the chemical potential of a reservoir consisting of ideal chain molecules and q(T) is the kinetic contribution to the molecular partition function (for atoms, q(T) = 1/Λ³).

Particle removal
To remove a particle from the system, we use the following algorithm:
1. A particle, say, o, is selected at random, the energy of the first monomer is calculated and is denoted by $u_1^{\text{ext}}(o)$, and we determine $w_1^{\text{ext}}(o) = k \exp[-\beta u_1^{\text{ext}}(o)]$.
2. For the following segments of the chain, the external energy $u_i^{\text{ext}}(o)$ is calculated and a set of k − 1 trial orientations is generated with a probability given by Eq. (12.6.1). Using this set of orientations and the actual position, we calculate for monomer i:

$$w_i^{\text{ext}}(o) = \exp[-\beta u_i^{\text{ext}}(o)] + \sum_{j=2}^{k} \exp[-\beta u_i^{\text{ext}}(\mathbf b_j)].$$

3. After step 2 has been repeated for all ℓ monomers, we compute for the entire molecule:

$$\mathcal W^{\text{ext}}(o) \equiv \frac{W^{\text{ext}}(o)}{k^{\ell}} = \prod_{i=1}^{\ell} \frac{w_i^{\text{ext}}(o)}{k}. \tag{12.6.6}$$


4. The selected molecule is removed with a probability

$$\text{acc}(N \to N-1) = \min\left(1, \frac{N}{q(T)\, V \exp(\beta\mu_B)}\, \frac{1}{\mathcal W^{\text{ext}}(o)}\right). \tag{12.6.7}$$
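Collecting Eqs. (12.6.1)-(12.6.5), a CBMC insertion can be sketched as follows (our own outline, not the book's code; sample_bonded and u_ext stand for the bonded-sampling and external-energy routines of the actual model, and a cubic box of volume V is assumed):

import numpy as np

def cbmc_insert(ell, k, sample_bonded, u_ext, beta, beta_mu, V, N, rng, qT=1.0):
    """Grow an ell-segment chain with CBMC and accept the insertion with Eq. (12.6.5).

    sample_bonded(r_prev, i) must return one trial position drawn from the bonded
    distribution of Eq. (12.6.1); u_ext(r) is the external energy of a segment."""
    r1 = rng.uniform(0.0, V ** (1.0 / 3.0), size=3)       # first monomer at a random position
    W = np.exp(-beta * u_ext(r1))                          # w_1^ext / k
    chain = [r1]
    for i in range(2, ell + 1):
        trials = [sample_bonded(chain[-1], i) for _ in range(k)]
        weights = np.array([np.exp(-beta * u_ext(r)) for r in trials])
        wi = weights.sum()
        if wi == 0.0:
            return None                                    # dead end (e.g., hard overlaps everywhere)
        chain.append(trials[rng.choice(k, p=weights / wi)])   # selection step, Eq. (12.6.3)
        W *= wi / k                                        # accumulate Eq. (12.6.4)
    acc = min(1.0, qT * np.exp(beta_mu) * V / (N + 1) * W)    # Eq. (12.6.5)
    return chain if rng.random() < acc else None

The removal move works analogously: the existing chain is "retraced" with k − 1 freshly generated trial positions per segment, and the resulting Rosenbluth factor enters Eq. (12.6.7).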

We have defined μ_B as the chemical potential of a reservoir consisting of ideal chains. It is often convenient to use as a reference state the ideal gas of nonideal chains (i.e., chains that have both bonded and nonbonded intramolecular interactions). This results in a simple, temperature-dependent shift of the chemical potential:

$$\beta\mu_B \equiv \beta\mu_{\text{id.chain}} = \beta\mu_{\text{nonid.chain}} + \ln\left\langle W^{\text{nonbonded}}\right\rangle, \tag{12.6.8}$$

where ⟨W^nonbonded⟩ is the average Rosenbluth factor due to the nonbonded intramolecular interactions. This Rosenbluth factor has to be determined in a separate simulation of a single-chain molecule. For more details about reference states, see SI section L.14. In the same appendix, we also discuss the relation between the chemical potential and the imposed pressure (the latter quantity is needed when comparing with real experimental data). To show that the preceding algorithm does indeed yield the correct distribution, we have to demonstrate, as before, that detailed balance is satisfied. As the proof is very similar to those shown before, we will not reproduce it here. For more details, the reader is referred to [454].

Illustration 18 (Adsorption of alkanes in zeolites). In Illustration 4, grand-canonical simulations were used to determine the adsorption of methane in the zeolite silicalite. Using the scheme described in the present section, Smit and Maesen computed adsorption isotherms of the longer alkanes [532]. Adsorption isotherms are of interest since they may signal phase transitions, such as capillary condensation or wetting, of the fluid inside the pores [533]. Capillary condensation usually shows up as a step or rapid variation in the adsorption isotherm. It is often accompanied by hysteresis, but not always; for instance, experiments on flat substrates [534] found evidence for steps in the adsorption isotherm without noticeable hysteresis. Since the pores of most zeolites are of molecular dimensions, adsorbed alkane molecules behave like a one-dimensional fluid. In a true one-dimensional system, phase transitions are not expected to occur. To the extent that zeolites behave as a one-dimensional medium, one, therefore, might expect that the adsorption isotherms of alkanes in zeolites exhibit no steps. If steps occur, they are usually attributed to capillary condensation in the exterior secondary pore system formed by the space between different crystals. For silicalite, adsorption isotherms have been determined for various n-alkanes, and, indeed, for the short-chain alkanes (methane–pentane), the isotherms


exhibit no steps. The same holds for decane. For hexane and heptane, however, steplike features are observed (for experimental details, see [532]). In the simulations of Smit and Maesen [532], the alkane molecules are modeled with a united atom model; that is, CH3 and CH2 groups are considered as single interaction centers [535]. The zeolite is modeled as a rigid crystal and the zeolite-alkane interactions are assumed to be dominated by the interaction with the oxygen atoms and are described by a Lennard-Jones potential. Fig. 12.8 compares the simulated adsorption isotherms of various alkanes in silicalite with experimental data. For butane, a smooth isotherm is observed, and the agreement between experiments and simulation is good. For hexane and heptane, the agreement is good at high pressures, but at low pressures, deviations indicate that the zeolite-alkane model may need to be refined. It is interesting to note that, for heptane, both the experiments and the simulations show a step at approximately half the loading. Since the simulations are performed on a perfect single crystal, this behavior must be due to a transition of the fluid inside the pores and cannot be attributed to the secondary pore system.

FIGURE 12.8 Adsorption isotherms of butane (left) and heptane (right); the closed symbols are experimental data, and the open symbols the results from simulations at T = 298 K.

Silicalite has two types of channels, straight and zigzag, which are connected via intersections. It so happens that the length of a hexane molecule is on the order of the length of the period of the zigzag channel. The simulations show that, at low chemical potential, the hexane molecules move freely in these channels, and the molecules will spend part of their time at the intersections. If a fraction of the intersections is occupied, other molecules cannot reside in the straight channels at the same time. At high pressures, almost all hexane molecules fit exactly into the zigzag channel. They no longer move freely and keep their noses and tails out of the intersection. In such a configuration, the entire straight channel can now be tightly packed with hexane molecules. This may explain the plateau in the adsorption isotherm; to fill the entire zeolite structure neatly, the hexane molecules located in zigzag
channels first have to be “frozen” in these channels. This “freezing” of the positions of the hexane molecules implies a loss of entropy and, therefore, will occur only if the pressure (or chemical potential) is sufficiently high to compensate for this loss. This also makes it clear why we do not observe a step for molecules shorter or longer than hexane or heptane. If the molecules are longer, they will always be partly in the intersection, and nothing can be gained by collective freezing in the zigzag channels. If the molecules are shorter than one period of the zigzag channel, a single molecule will not occupy an entire period, and a second molecule will enter, which results in a different type of packing. The interesting aspect is that after the simulations were published, this observation was confirmed by experiments [536]. Also, the adsorption behavior of mixtures of hydrocarbons has many surprising effects [537,538].

In SI section L.8.3 the combination of CBMC and the Gibbs ensemble is discussed.

12.7 Recoil growth

To find numerical schemes that are more efficient than CBMC, we should first understand why CBMC works better than a scheme that employs random trial moves. Suppose that we have a system with hard-core interactions and the probability of successfully inserting a monomer is a. If we assume that the insertion of an m-mer is equivalent to inserting m independent monomers, then the probability of a successful random insertion of an m-mer is

p_m^random ≈ a^m.

For a dense system, a ≪ 1, and therefore random insertion only works for very short chains. With the CBMC scheme we generate k trial orientations and our growing scheme fails if all of the k trial orientations result in an overlap. The probability that we grow a chain successfully is therefore

p_m^CBMC ≈ a [1 − (1 − a)^k]^{m−1} ≡ a b^{m−1}.

This crude estimate suggests that by increasing k, the number of trial orientations, we can make b arbitrarily close to 1 and hence obtain a reasonable insertion probability for any chain length and at any density. In practice, simply increasing k will not solve the problem. First of all, there is a practical limitation: increasing k increases the computational cost. More importantly, the assumption that the probability of a successful insertion of a monomer is equal and independent for each trial position is not correct. For instance, if we have grown into a “dead alley” where there is simply no space for an additional monomer (see Fig. 12.9), then no matter how often we try, the insertion will not be accepted. At high densities, such dead alleys are the main reason the CBMC method becomes inefficient.
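To put these crude estimates in perspective, consider some purely illustrative numbers: for a dense system with a = 0.1 and a chain of m = 10 monomers, random insertion succeeds with probability a^m = 10^{−10}, whereas with k = 10 trial orientations per segment one has b = 1 − 0.9^{10} ≈ 0.65, so p_m^CBMC ≈ 0.1 × 0.65^9 ≈ 2 × 10^{−3}, an improvement of roughly seven orders of magnitude.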


FIGURE 12.9 The configurational-bias Monte Carlo scheme fails if the molecule is trapped in a dead alley (left); irrespective of the number of trial orientations the CBMC scheme will never generate an acceptable conformation. In the recoil growth scheme (right) the algorithm “recoils” back to a previous monomer and attempts to regrow from there.

This suggests that we need a computational scheme that allows us to escape from these dead alleys. The Recoil Growth (RG) scheme is a dynamic Monte Carlo algorithm that was developed with the dead-alley problem in mind [539,540]. The algorithm is related to earlier static MC schemes due to Meirovitch [541] and Alexandrowicz and Wilding [542]. The basic strategy of the method is that it allows us to escape from a trap by “recoiling back” a few monomers and retrying the growth process using another trial orientation. In contrast, the CBMC scheme looks only one step ahead. Once a trial orientation has been selected, we cannot “deselect” it, even if it turns out to lead into a dead alley. The recoil growth scheme looks several monomers ahead to see whether traps are to be expected before a monomer is irrevocably added to the trial conformation (see Fig. 12.9). In this way we can alleviate (but not remove) the dead-alley problem. In principle, one could also do something similar with CBMC by adding a sequence of l monomers per step. However, as there are k possible directions for every monomer, this would involve computing k^l energies per group. Even though many of these trial monomers do not lead to acceptable conformations, we would still have to compute all interaction energies.

12.7.1 Algorithm

To explain the practical implementation of the RG algorithm, let us first consider a totally impractical, but conceptually simple scheme that will turn out to have the same net effect. Consider a chain of l monomers. We place the first monomer at a random position. Next, we generate k trial positions for the second monomer.
From each of these trial positions, we generate k trial positions for the third monomer. At this stage, we have generated k² “trimer” chains. We continue in the same manner until we have grown k^{l−1} chains of length l. Obviously, most of the conformations thus generated have a vanishing Boltzmann factor and are, therefore, irrelevant. However, some may have a reasonable Boltzmann weight and it is these conformations that we should like to find. To simplify this search, we introduce a concept that plays an important role in the RG algorithm: we shall distinguish between trial directions that are “open” and those that are “closed.” To decide whether a given trial direction, say b, for monomer j is open, we compute its energy u_j(b). The probability that trial position b is open is given by

p_j^open(b) = min(1, exp[−βu_j(b)]).   (12.7.1)

(This probability can be chosen in many alternative ways and may be used to optimize a simulation. However, the particular choice discussed here appears to work well for Lennard-Jones and hard-core potentials.)

For hard-core interactions, the decision of whether a trial direction is open or closed is unambiguous, as p_j^open(b) is either zero or one. For continuous interactions we compare p_j^open(b) with a random number between 0 and 1. If the random number is less than p_j^open(b), the direction is open; otherwise, it is closed. We now have a tree with k^{l−1} branches, but many of these branches are “dead,” in the sense that they emerge from a “closed” monomer. Clearly, there is little point in exploring the remainder of a branch if it does not correspond to an “open” direction. This is where the RG algorithm comes in. Rather than generating a host of useless conformations, it generates them “on the fly.” In addition, the algorithm uses a cheap test to check whether a given branch will “die” within a specified number of steps (this number is denoted by l_max). The algorithm then randomly chooses among the available open branches. As we have only looked a distance l_max ahead, it may still happen that we have picked a branch that is doomed. But the probability of ending up in such a dead alley is much lower than that in the CBMC scheme.

In practice, the recoil growth algorithm consists of two steps. The first step is to grow a new chain conformation using only “open” directions. The next step is to compute the weights of the new and the old conformations. The following steps are involved in the generation of a new conformation:

1. The first monomer of a chain is placed at a random position. The energy of this monomer is calculated (u_1). The probability that this position is “open” is given by Eq. (12.7.1). If the position is closed we cannot continue growing the chain and we reject the trial conformation. If the first position is open, we continue with the next step.

2. A trial position b_{i+1} for monomer i + 1 is generated starting from monomer i. We compute the energy of this trial monomer u_{i+1}(b) and, using Eq. (12.7.1), we decide whether this position is open or closed. If this direction is closed, we try another trial position, up to a maximum of k trial orientations (the maximum number of trial orientations should be chosen in advance, and may depend on the index i, but is otherwise arbitrary). As soon as we find an open position we continue with step 3. If not a single open trial position is found, we make a recoil step. The chain retracts one step to monomer i − 1 (if this monomer exists), and the unused directions (if any) from step 2, for i − 1, are explored. If all directions at level i − 1 are exhausted, we attempt to recoil to i − 2. The chain is allowed to recoil a total of l_max steps, i.e., down to length i − l_max + 1. If, at the maximum recoil length, all trial directions are closed, the trial conformation is discarded.

3. We have now found an “open” trial position for monomer i + 1. At this point monomer i − l_max is permanently added to the new conformation; i.e., a recoil step will not reach this monomer anymore.

4. Steps 2 and 3 are repeated until the entire chain has been grown.

In the naive version of the algorithm sketched above, we can consider the above steps as a procedure for searching for an open branch on the existing tree. However, the RG procedure does this by generating the absolute minimum of trial directions compatible with the chosen recoil distance l_max. Once we have successfully generated a trial conformation, we have to decide on its acceptance. To this end, we have to compute the weights, W(n) and W(o), of the new and the old conformations, respectively. This part of the algorithm is more expensive. However, we only carry it out once we know for sure that we have successfully generated a trial conformation. In contrast, in CBMC it may happen that we spend much of our time computing the weight factor for a conformation that terminates in a dead alley. In the RG scheme, the following algorithm is used to compute the weight of the new conformation:

1. Consider that we are at monomer position i (initially, of course, i = 1). In the previous stage of the algorithm, we have already found that at least one trial direction is available (namely, the one that is included in our new conformation). In addition, we may have found that a certain number of directions (say k_c) are closed: these are the ones that we tried but that died within l_max steps. We still have to test the remaining k_rest ≡ k − 1 − k_c directions. We randomly generate k_rest trial positions for monomer i + 1 and use the recoil growth algorithm to test whether at least one “feeler” of length l_max can be grown in this direction (unless i + l_max > l; in that case, we only continue until we have reached the end of the chain). Note that, again, we do not explore all possible branches. We only check if there is at least one open branch of length l_max in each of the k_rest directions. If this is the case, we call that direction “available.” We denote the total number of available directions (including the one that corresponds to the direction that we had found in the first stage of the algorithm) by m_i. In the next section, we shall derive that monomer i contributes a factor w_i(n) to the weight of the chain, where w_i(n) is given by

w_i(n) = m_i(n) / p_i^open(n),

and p_i^open(n) is given by Eq. (12.7.1).

2. Repeat the previous step for all i from 1 to l − 1. The expression for the partial weight of the final monomer seems ambiguous, as m_l(n) is not defined. An easy (and correct) solution is to choose m_l(n) = 1.

3. Next compute the weight for the entire chain:

W(n) = ∏_{i=1}^{l} w_i(n) = ∏_{i=1}^{l} [m_i(n) / p_i^open(n)].   (12.7.2)

For the calculation of the weight of the old conformation, we use almost the same procedure. The difference is that, for the old conformation, we have to generate k − 1 additional directions for every monomer i. The weight is again related to the total number of directions that start from monomer i and that are “available,” i.e., that contain at least one open feeler of length l_max:

W(o) = ∏_{i=1}^{l} w_i(o) = ∏_{i=1}^{l} [m_i(o) / p_i^open(o)].

Finally, the new conformation is accepted with a probability:

acc(o → n) = min(1, exp[−βU(n)] W(n) / {exp[−βU(o)] W(o)}),   (12.7.3)

where U(n) and U(o) are the energies of the new and old conformations, respectively. In the next section, we demonstrate that this scheme generates a Boltzmann distribution of conformations. The justification of the recoil-growth algorithm can be found in the SI section L.9.

Example 22 (Recoil growth simulation of Lennard-Jones chains). To illustrate the recoil growth (RG) method, we make a comparison between this method and configurational-bias Monte Carlo (CBMC). Consider 20 Lennard-Jones chains of length 15. The monomer density is ρ = 0.3 at temperature T = 6.0. Two bonded monomers have a constant bond length of 1.0, while three successive particles have a constant bond angle of 2.0 radians. In Fig. 12.10 the distribution of the end-to-end vector, R_E, of the chain is plotted. In this figure we compare the results from a CBMC and an RG simulation. Since both methods generate a Boltzmann distribution of conformations, the results are identical (as they should be).


FIGURE 12.10 Comparison of configurational-bias Monte Carlo (CBMC) with recoil growth for the simulation of Lennard-Jones chains of length 15. The left figure gives the distribution of the end-to-end distance (R_E). The right figure shows the efficiency (η) as a function of the number of trial directions (k) for different recoil lengths (l_max), as well as for CBMC.

For this specific example, we have compared the efficiency, η, of the two methods. The efficiency is defined as the number of accepted trial moves per amount of CPU time. For CBMC we see that the efficiency increases as we increase k, the number of trial orientations, from 1 to 4. From 4 to 8 the efficiency is more or less constant, and above 8 a decrease in the efficiency is observed. In the RG scheme we have two parameters to optimize: the number of trial orientations k and the recoil length l_max. If we use only one trial orientation, recoiling is impossible, since there are no other trial orientations. If we use a recoil length of 1, the optimum number of trial orientations is 4, and for larger recoil lengths the optimum is reached with fewer trial orientations. Interestingly, the global optimum is 2 trial orientations and a recoil length of 3–5. In this regime, the increase in CPU time associated with a larger recoil length is compensated by a higher acceptance. In the present study, optimal RG was a factor of 8 more efficient than optimal CBMC. The Fortran code to generate this Example can be found in the online-SI, Case Study 20.
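To make the open/closed test of Eq. (12.7.1) and the final acceptance step of Eq. (12.7.3) concrete, here is a minimal Python sketch (not the Fortran code of the online-SI; the function names and the convention that a hard-core overlap is encoded as an infinite energy are our own):

```python
import math
import random

def is_open(u, beta, rng=random):
    # Eq. (12.7.1): a trial position with energy u is "open" with
    # probability min(1, exp(-beta*u)); u = float("inf") encodes a hard overlap.
    p_open = 1.0 if u <= 0.0 else math.exp(-beta * u)
    return rng.random() < p_open

def accept_recoil_growth(u_new, w_new, u_old, w_old, beta, rng=random):
    # Eq. (12.7.3): acc(o -> n) = min(1, exp(-beta*U(n)) W(n) / [exp(-beta*U(o)) W(o)]).
    log_ratio = -beta * (u_new - u_old) + math.log(w_new) - math.log(w_old)
    return rng.random() < math.exp(min(0.0, log_ratio))
```

The expensive part of the scheme, growing the feelers that determine the m_i and hence W(n) and W(o), is deliberately left out of this sketch.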

12.8 Questions and exercises

Question 25 (Biased CBMC). In a configurational-bias Monte Carlo simulation, trial positions are selected with a probability that is proportional to the Boltzmann factor of each trial segment. However, in principle, one can use another probability function [531] to select a trial segment. Suppose that the probability of selecting a trial segment i is proportional to

p_i ∝ exp[−β′ u_i],

in which β′ ≠ β.


1. Derive the correct acceptance/rejection rule for this situation.
2. Derive an expression for the excess chemical potential when this modified CBMC method is used to generate configurations of test particles.
3. What happens if β′ → ∞ and if β′ → 0?

Exercise 16 (CBMC of a single chain). In this exercise, we will look at the properties of a single-chain molecule. We will compare various sampling schemes. Suppose that we have a chain molecule of length n in which there are the following interactions between beads:

• Two successive beads have a fixed bond length l. We will use l = 1.
• Three successive beads have a bond-bending interaction

U = (1/2) k_t (θ − θ_0)²,

in which θ is the bond angle, θ_0 is the equilibrium bond angle, and k_t is a constant. We will use θ_0 = 2.0 rad (≈ 114.6°) and k_t = 2.0.
• Every pair of beads that is separated by more than two bonds has a soft repulsive interaction

U(r) = A (r − r_cut)² / r_cut²   for r ≤ r_cut,
U(r) = 0                         for r > r_cut,

in which r_cut is the cutoff radius (we will use r_cut = 1.0 and A > 0).

An interesting property of a chain molecule is the distribution of the end-to-end distance, which is the distance between the first and the last segments of the chain. There are several possible schemes for studying this property:

Dynamic schemes

In a dynamic scheme, a Markov chain of states is generated. The average of a property B is the average of B over the elements of the Markov chain:

⟨B⟩ ≈ Σ_{i=1}^{N} B_i / N.

In the limit N → ∞ this expression becomes exact. Every new configuration is accepted or rejected using an acceptance criterion:

• When unbiased chains are generated:

acc(o → n) = min(1, exp{−β[U(n) − U(o)]}),

in which U is the total energy (soft repulsion and bond bending) of a chain.
• When configurational-bias Monte Carlo is used:

acc(o → n) = min(1, W(n)/W(o)),

in which

W = (1/k^{n−1}) ∏_{i=2}^{n} Σ_{j=1}^{k} exp[−βU(i, j)].

In this equation, k is the number of trial positions and U (i, j ) is the energy of the j th trial position of the ith chain segment. The term U (i, j ) does not contain the bond-bending potential, because that potential has already been used to generate the trial positions.

Static schemes

In a static scheme, all configurations are generated independently. To obtain a canonical average, every configuration is weighted with a factor R:

⟨B⟩ = Σ_{i=1}^{N} B_i R_i / Σ_{i=1}^{N} R_i.

For R_i we can write:

• When random chains are generated:

R_i = exp[−βU_i].

Here, U_i is the total energy of the chain.
• When CBMC is used:

R_i = W.   (12.8.1)
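As a minimal illustration of this static reweighting (a sketch with invented names, not the program from the book's website), the weighted average can be accumulated as follows; generate_chain() is assumed to return the property B_i and the weight R_i of one independently grown chain:

```python
def static_average(generate_chain, n_samples):
    # <B> = (sum_i B_i * R_i) / (sum_i R_i), with (B_i, R_i) from independent chains:
    # R_i = exp(-beta * U_i) for randomly grown chains, or R_i = W (Eq. (12.8.1)) for CBMC.
    num = den = 0.0
    for _ in range(n_samples):
        b_i, r_i = generate_chain()
        num += b_i * r_i
        den += r_i
    return num / den
```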

1. On the book’s website you can find a program for calculating chain properties using these four methods. However, some additional programming has to be done in the file grow.f, which contains a routine for growing a new chain using either CBMC or random insertion.
2. Compare the end-to-end distance distributions of the four methods. Which method has the best performance? Investigate how the efficiency of CBMC depends on the number of trial directions (k).
3. Investigate the influence of chain length on the end-to-end distance distribution. For which chain lengths do the four methods start to fail?
4. For high temperatures (and for low k_t and A), the end-to-end distance distribution looks like the distribution of a nonself-avoiding random walk. This means that the chain segments are randomly oriented and the segments are allowed to overlap. For the mean square end-to-end distance, we can write

⟨r²⟩/l² = ⟨(Σ_{i=1}^{n} x_i)² + (Σ_{i=1}^{n} y_i)² + (Σ_{i=1}^{n} z_i)²⟩,

in which (x_i, y_i, z_i) are the projections of each segment on the (x, y, z) axes:

x_i = sin(θ_i) cos(φ_i)
y_i = sin(θ_i) sin(φ_i)
z_i = cos(θ_i).

This set of equations can be reduced to

⟨r²⟩/l² = n.   (12.8.2)

• Derive Eq. (12.8.2). Hint: the following equations will be very useful:

cos²(θ_i) + sin²(θ_i) = 1
cos(θ_i − θ_j) = cos(θ_i) cos(θ_j) + sin(θ_i) sin(θ_j)
⟨cos(θ_i − θ_j)⟩ = 0.

The last equation holds because θ_i − θ_j is uniformly distributed.
• Modify the program in such a way that ⟨r²⟩ is calculated for a nonself-avoiding random walk. Compare your results with the analytical solution.
• Does ⟨r²⟩ ∝ n hold for a chain with the potential energy function described in this exercise? Investigate the influence of A on the end-to-end distance distribution.


Chapter 13

Accelerating Monte Carlo sampling

The key advantage of Molecular Dynamics simulations is that they generate physically realistic trajectories of a classical many-body system. The key advantage of Monte Carlo simulations is that they do not. That is: MC simulations can explore configuration space in a way that is incompatible with Newton’s equations of motion, or even with Brownian dynamics. Were it not for this feature, MC simulations of off-lattice systems would probably have died out long ago. In this chapter, we discuss a variety of Monte Carlo algorithms that can greatly enhance the speed with which configuration space is explored, compared with the simple Metropolis-style MC simulations. However, partly because of the availability of powerful simulation packages, MD simulations are much more widely used than MC simulations, and many of the ideas that were first explored in the context of MC have subsequently been ported to MD, where they became more widely known than their MC parents. For a discussion of the ideas behind the algorithms, it is, however, more convenient to use the MC language, although we will mention some of the MD offspring.

13.1 Sampling intensive variables

The Monte Carlo simulations that we considered thus far were designed to study the behavior of systems at constant NVT, NPT, or μVT. In such simulations, the relevant intensive thermodynamic variables (T, P, or μ) are kept fixed, while the microscopic configurations of the system are explored by performing Metropolis MC moves that change the particle coordinates. In addition, for NPT-MC, we carry out trial moves that change the volume V, and for μVT-MC simulations, we attempt moves that change the number of particles. In the present chapter, we consider a different class of MC simulations where we still carry out particle moves, but in addition, we perform trial moves that change an intensive variable such as T, P, or μ (see Fig. 13.1). In fact, as we shall see below, this approach can be extended to cover other intensive parameters in the Hamiltonian of the system, e.g., parameters that determine the strength or the range of interactions between particles. This general class of simulations can be decomposed into two sub-classes: one in which we let a single system explore different values of the intensive variable (say, the temperature).



FIGURE 13.1 This figure compares three different Monte Carlo schemes to couple simulations carried out at different values of a control parameter (in this example: the temperature T ). Panel (A) shows how, in a simulated annealing simulation (which is not really an equilibrium Monte Carlo scheme), the temperature is gradually lowered, allowing the system to find low-energy states that would be missed in an instantaneous quench. Panel (B) shows how, in an expanded-ensemble simulation, a random change of the temperature is a valid Monte-Carlo trial move. To ensure uniform sampling, such simulations require a judiciously chosen bias. Panel (C) shows an example of a parallel tempering simulation. Here we prepare n different systems (in the figure, only 4) at different initial temperatures. In addition to the regular Monte Carlo moves, we now also allow trial moves that swap the temperatures attributed to the different systems (identified by different patterns). Hence, at every stage in the simulations, we always have n systems at n distinct temperatures.

Such simulations are known under the name Expanded-ensemble simulations, sometimes referred to as “extended-ensemble simulations” [543], for reasons that shall become clear shortly. The second class of algorithms that allow us to sample intensive variables considers the parallel evolution of n systems that are initially prepared at n different temperatures (say). In a trial move, we attempt to swap the temperatures of two adjacent systems. Algorithms of this type also have various names, among which parallel tempering and replica exchange are the most common. In what follows, we will mostly use the term Parallel Tempering. Before discussing these algorithms, we should address one obvious question: Why? Or, more precisely: is there an advantage associated with expanding the move repertoire to include changes in intensive variables? The short answer is: it depends. For most MC simulations, performing trial moves in intensive variables only complicates the program, and does not add to the quality of the simulations. However, in some cases, expanded-ensemble simulations greatly
enhance the ability of a simulation to explore the accessible configuration space, typically if different parts of the accessible configuration space are separated by barriers that are effectively insurmountable at the temperature (or pressure, or chemical potential) of interest. To give an explicit example: suppose that we wish to sample an amorphous solid at low temperatures. We know that the particles in an amorphous solid can be arranged in many different ways. But at the low temperatures where the amorphous phase is mechanically stable, normal MC sampling will not allow the system to escape from the vicinity of its original arrangement. Clearly, if we were to heat the system, and then cool it down again, it would be easy to move from one amorphous structure to the next, but in general, such a heating-cooling protocol does not result in sampling states with their correct Boltzmann weight. Expanded-ensemble simulations allow us to implement heating-cooling protocols in such a way that all the states of the system that we sample (including at all intermediate temperatures) are bona-fide equilibrium states. Hence, all configurations that we sample can be used to compute observable properties. In contrast, if we would simply heat and cool the system, most of our simulation points would not correspond to equilibrium states of the system.

There is a nomenclature problem with expanded-ensemble simulations and related methods such as parallel tempering: these methods have been (re-)discovered many times in slightly different forms (see ref. [543]), all correct, but also all very similar in spirit. We will not try to provide a taxonomy of the many algorithms that achieve correct sampling of intensive variables, but just distinguish the main classes. For the sake of simplicity, we first describe the case where we include sampling of the temperature of a system. After that, we quickly generalize to the case where we allow variation in some other intensive parameter that characterizes the system.

13.1.1 Parallel tempering

The Parallel Tempering (PT) method for sampling intensive variables is closest in spirit to the original Markov-Chain MC method. This method has been invented and re-invented in various forms and with various names [544–548], and was also introduced in Molecular Dynamics simulations under the name replica-exchange MD [549]. Here, we introduce the PT algorithm in the context of simulations that allow transitions between different temperatures. After that, we discuss the general case. The simplest way of viewing the PT algorithm is by considering n simultaneous simulations of an N-particle system. At this stage, we will assume that both V and N are constant, although sometimes it is more convenient to keep P or μ (but not both) fixed. The n parallel simulations differ in the values T_1, T_2, · · ·, T_n that we have chosen for the imposed temperatures. At this stage, the choice of these temperatures is still arbitrary. Later, we shall see that the spacing between
adjacent temperatures should be chosen such that the energy fluctuations in adjacent systems still have some overlap. Up to this point, we can consider the combined system simply as n noninteracting systems, each at its own temperature T_i. The probability density to find system 1 in configuration (r^N)_1, system 2 in configuration (r^N)_2, etc. follows directly from Eq. (2.3.10):

P({r^N}_1, {r^N}_2, · · ·, {r^N}_n) = ∏_{i=1}^{n} exp[−β_i U({r^N}_i)] / ∏_{i=1}^{n} Z(N, V, T_i).   (13.1.1)

We could study this combined system with a standard MC simulation where only moves between two configurations at the same temperature are allowed, but clearly, such a simulation would not have any advantage over n separate simulations. Now consider what would happen if we allowed a massive particle swap, where we would permute the configurations of systems at temperatures T_i and T_j. Usually, we choose i at random, and j adjacent to i. To guarantee microscopic reversibility, trial swaps for which j < 1 or j > n should be rejected. Such a particle swap, which we denote by (i, β_i), (j, β_j) → (j, β_i), (i, β_j), is a perfectly legitimate MC trial move. For such a trial move, the Boltzmann weight of the system as a whole would change. Denoting the configuration of system i by i ≡ {r^N}_i, the condition for detailed balance reads

N(i, β_i) N(j, β_j) × α[(i, β_i), (j, β_j) → (j, β_i), (i, β_j)] × acc[(i, β_i), (j, β_j) → (j, β_i), (i, β_j)]
= N(i, β_j) N(j, β_i) × α[(i, β_j), (j, β_i) → (i, β_i), (j, β_j)] × acc[(i, β_j), (j, β_i) → (i, β_i), (j, β_j)].

If we perform the simulations in such a way that the a priori probability, α, of performing a particular swap move is equal for all conditions, we obtain as acceptance rule

acc[(i, β_i), (j, β_j) → (j, β_i), (i, β_j)] / acc[(i, β_j), (j, β_i) → (i, β_i), (j, β_j)]
= exp[−β_i U(j) − β_j U(i)] / exp[−β_i U(i) − β_j U(j)]
= exp{(β_i − β_j)[U(i) − U(j)]},   (13.1.2)

where we have defined U(i) ≡ U({r^N}_i). It is important to note that, as we know the total energy of a configuration anyway, these swap moves are very inexpensive since they do not involve additional calculations. With this kind of massive swap move, we can exchange configurations that were generated at different temperatures, while maintaining the equilibrium at
all temperatures T_i. In particular, we can swap configurations at lower temperatures where equilibration is slow, with those at higher temperatures, where normal MC sampling can adequately sample the configuration space. Of course, in reality, we do not move particles from configuration i to j and vice versa. Rather, what we do is swap the (inverse) temperatures β_i and β_j. One could therefore interpret a parallel tempering trial move as an attempt to change (permute) the intensive variable T. However, when computing averages for a given temperature T_i, it is more convenient to think in terms of moves that change the configuration of a system at a constant temperature. With PT, interpreting trial moves in terms of changes of the intensive variable is only a matter of words. However, below we shall discuss extended ensembles, where no such ambiguity exists.

Example 23 (Parallel tempering of a single particle). As an illustration of the power of parallel tempering, we consider a single particle moving in an external potential as shown in Fig. 13.2 (left):

U(x) = ∞                        for x < −2,
     = 1 × (1 + sin(2πx))       for −2 ≤ x ≤ −1.25,
     = 2 × (1 + sin(2πx))       for −1.25 ≤ x ≤ −0.25,
     = 3 × (1 + sin(2πx))       for −0.25 ≤ x ≤ 0.75,
     = 4 × (1 + sin(2πx))       for 0.75 ≤ x ≤ 1.75,
     = 5 × (1 + sin(2πx))       for 1.75 ≤ x ≤ 2,
     = ∞                        for x > 2.   (13.1.3)

We place the particle initially in the left-most potential energy well, and then we first use normal Metropolis MC at three different temperatures (T = 0.05, 0.3, and 2.0). At the lowest temperature (T = 0.05) the particle is effectively trapped in its initial potential-energy well during the entire simulation, whereas for the highest temperature (T = 2.0) it can explore all wells. Next we apply parallel tempering, that is: we allow for temperature swaps between the three systems (see Fig. 13.2). Due to the temperature-swap moves, the systems now equilibrate rapidly at all three temperatures. The difference is particularly strong for the probability distribution at the lowest temperature. In the present parallel-tempering simulation, we consider two types of trial moves:

1. Particle displacement: we randomly select one of the three temperatures and carry out a trial displacement Δ of a randomly selected particle at that temperature, choosing the trial displacement Δ from a uniform distribution between −0.1 and 0.1. The acceptance of this trial displacement is determined by the conventional Metropolis MC rule

acc(o → n) = min{1, exp[−β(U(n) − U(o))]}.   (13.1.4)


FIGURE 13.2 (Top left) Potential energy (U (x)) as a function of the position x. (Top right) Probability (P (x)) of finding a particle at position x for various temperatures (T ) as obtained from ordinary Monte Carlo simulations and (bottom left) using parallel tempering. In the ordinary MC simulations, the lower-temperature systems are not (or barely) able to cross the energy barriers separating the wells. (bottom right) Position (x) as a function of the number of Monte Carlo trial moves (n) for T = 0.05.

2. Temperature swapping: the trial move consists of attempting to swap two randomly selected neighboring temperatures (T_i and T_j). Such a trial move is accepted with a probability given by Eq. (13.1.2):

acc(o → n) = min{1, exp[(β_i − β_j) × (U_j − U_i)]}.   (13.1.5)

We are free to choose the relative rate of displacement and swap moves. In the present example, we used 10% swap moves and 90% particle displacements, as suggested in ref. [550]. Note, however, that other choices are possible. (There are even algorithms that use an infinite swap rate [551], including all possible permutations of the temperatures; however, such algorithms (see also [552]) do not scale well with the number of distinct temperatures.) As can be seen in Fig. 13.2, parallel tempering results in a dramatic improvement in the sampling of configuration space at the lowest temperature. For more details, see SI (Case Study 21).
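A minimal sketch of such a temperature-swap trial move might look as follows (Python, with invented names; energies[i] is assumed to be the current energy of the configuration simulated at betas[i], following the convention of Eq. (13.1.2)):

```python
import math
import random

def attempt_temperature_swap(configs, energies, betas, rng=random):
    # Parallel-tempering swap between a random pair of neighboring temperatures.
    i = rng.randrange(len(betas) - 1)
    j = i + 1
    # The swap reuses energies that are already known, so it costs almost nothing.
    log_acc = (betas[i] - betas[j]) * (energies[i] - energies[j])
    if rng.random() < math.exp(min(0.0, log_acc)):
        # Equivalent to exchanging the configurations between the two temperatures.
        configs[i], configs[j] = configs[j], configs[i]
        energies[i], energies[j] = energies[j], energies[i]
        return True
    return False
```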


One obvious question is: how many parallel simulations are needed in a PT run, and how should we choose the temperature spacing? On the one hand, we want to use as few simulations as possible, which means choosing the temperature spacing as large as possible. But if the temperature spacing becomes too large, the acceptance of the swap moves plummets. A good (but probably not optimal) rule of thumb is to choose the separation of neighboring temperatures such that the spacing in the average energy of adjacent systems is comparable to the root-mean-square fluctuations in the energy of these systems. This rule allows us to make a quick estimate of the temperature spacing. Note that the difference in the average energy of two systems separated by a temperature interval ΔT is equal to ΔE ≈ C_V ΔT, where C_V(T) is the heat capacity of the system at temperature T and volume V. But the variance in the energy of the system is also related to the heat capacity (Eq. (5.1.7)):

k_B T² C_V = ⟨E²⟩ − ⟨E⟩².

Hence, our rule to estimate the optimal value of ΔT is

√(k_B T² C_V) ≈ C_V ΔT   (13.1.6)

or

ΔT/T ≈ √(k_B/C_V).   (13.1.7)

As the heat capacity is an extensive variable (it scales with the number of particles N), Eq. (13.1.7) shows that ΔT/T scales as 1/√N. Hence, parallel tempering becomes less efficient as N grows.

The parallel tempering approach is not limited to coupling simulations at different temperatures. We can use parallel tempering for any intensive control parameter, or any combination thereof. Obvious examples are the pressure P or the chemical potential μ (but not both); see e.g., ref. [553]. But we can also consider PT simulations that couple systems with different potential-energy functions. To capture all these cases in a single notation, we can generalize the simple Boltzmann weight w_B(i; β_i) = exp[−β_i U(i)] to w(i; λ_i), where λ_i stands for any set of intensive variables that we intend to vary. In this notation, the generalized expression for the acceptance probability of a trial move that swaps λ_i and λ_j is:

acc[(i, λ_i), (j, λ_j) → (j, λ_i), (i, λ_j)] / acc[(i, λ_j), (j, λ_i) → (i, λ_i), (j, λ_j)] = w(i; λ_j) w(j; λ_i) / [w(i; λ_i) w(j; λ_j)].   (13.1.8)
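As a rough illustration of this rule of thumb, the following sketch (our own function name; it assumes that C_V, expressed in units of k_B, is roughly constant over the temperature range) builds a geometric temperature ladder in which neighboring temperatures satisfy Eq. (13.1.7):

```python
import math

def temperature_ladder(t_min, t_max, cv_over_kb):
    # Successive temperatures differ by a factor 1 + sqrt(kB/CV),
    # so that Delta T / T ~ sqrt(kB/CV) for every neighboring pair (Eq. (13.1.7)).
    ratio = 1.0 + math.sqrt(1.0 / cv_over_kb)
    temps = [t_min]
    while temps[-1] * ratio < t_max:
        temps.append(temps[-1] * ratio)
    temps.append(t_max)
    return temps

# Illustrative numbers: N = 100 particles with CV ~ 3*N*kB between T = 0.7 and
# T = 1.3 gives a ladder of the order of 10 replicas.
```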

Examples of parallel tempering simulations that sample a parameter in the Hamiltonian of a system are, for instance, the simulations of Yan and de Pablo
[554] of a polymer mixture: the intensive variables in these simulations were the temperature and the length of a tagged polymer chain (see Illustration 19).

Illustration 19 (Parallel tempering and phase equilibria). Another example that illustrates the power of parallel tempering simulations is the study of liquid-vapor coexistence. Typically, if a fluid is heated above the critical temperature, the density is a monotonically increasing function of the chemical potential. However, below the critical temperature, the density of a macroscopic fluid sample will jump from a vapor-like value to a liquid-like value as the chemical potential crosses its value at coexistence (μ_coex). In a constant-μVT simulation of a system containing only a few particles, coexistence will show up in a different way: for μ well below μ_coex, the density of a fluid will fluctuate around a vapor-like value, and for μ well above μ_coex, the fluid density will fluctuate around a liquid-like value. However, close to coexistence, the system should be able to exist in a vapor-like and a liquid-like state, and precisely at coexistence, the areas under the vapor and liquid peaks in the density distribution should be equal, provided that the system can equilibrate. However, apart from simulations close to the critical point, where large density fluctuations carry a low free-energy cost [176,212], it is rare to observe transitions between liquid-like and vapor-like densities: at temperatures well below T_c, the free energy barrier separating the two states makes equilibration simply too slow. This is where parallel tempering can make a difference. Yan and de Pablo [553] used simulations at a number of different points (in their case 18) in the μ−T plane, linking the low-temperature vapor and liquid branches of the fluid via a path that passes close to T_c (where large density fluctuations can occur). As the method of ref. [553] applies parallel-tempering swaps to both T and μ, Yan and de Pablo called this method hyper-parallel tempering. Deviating a bit from the original presentation of ref. [553], we will use the fugacity f to characterize the chemical potential of a system. We recall that the fugacity of a fluid can be interpreted as the density of a hypothetical ideal gas of the same molecules at the same temperature that has the same chemical potential as the fluid:

βμ(β, ρ) = ln(ρ) + βμ_ex(ρ, β) ≡ ln f(ρ, β),

where we have dropped all “uninteresting” constants. Note that at low densities, f → ρ, as it should. In a parallel-tempering move, we exchange the fugacities and the temperatures of the two systems without changing the configuration of either system. The new configuration of system i is then characterized by (U_i, N_i, β_j, ln f_j), while for system j we have (U_j, N_j, β_i, ln f_i). Such a trial move is accepted with probability

acc(o → n) = min{1, exp[(ln f_j − ln f_i)(N_i − N_j) − (β_j − β_i)(U_i − U_j)]}.


Because these parallel tempering simulations connect equilibrium state points on the liquid and vapor side of the critical point, equilibration is no longer a problem, and a density distribution is obtained, which, close to coexistence, contains both the liquid and vapor peaks. Of course, none of the state points studied in this parallel tempering calculation will be located exactly on the liquid-vapor coexistence curve. However, as we know the density histograms as a function of β and f, we can use the histogram-reweighting technique discussed in section 8.6.11. The histogram reweighting procedure allows us to estimate the density distribution for arbitrary points in the f,β-plane close to the points that were sampled. We can then locate those points in the f,β-plane for which the probability of finding the system at a liquid density is equal to the probability of finding it at a vapor density (i.e., the areas under the two peaks in the reweighted histogram are equal). As shown in ref. [553], the combined parallel tempering and histogram reweighting technique is very efficient in determining the coexistence curve of the Lennard-Jones fluid, provided that one has a well-chosen set of temperatures and chemical potentials. Using the approach of Wilding and Bruce [176,212], Yan and de Pablo also used the histogram-reweighting techniques to estimate the location of the liquid-vapor critical point.
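A minimal sketch of the corresponding swap test (our own function name; u_i and n_i denote the current energy and particle number of system i) could read:

```python
import math
import random

def accept_hyper_pt_swap(u_i, n_i, u_j, n_j, beta_i, beta_j, ln_f_i, ln_f_j, rng=random):
    # Hyper-parallel-tempering swap of (beta, ln f) between two grand-canonical
    # systems, following the acceptance rule quoted above from ref. [553].
    log_acc = (ln_f_j - ln_f_i) * (n_i - n_j) - (beta_j - beta_i) * (u_i - u_j)
    return rng.random() < math.exp(min(0.0, log_acc))
```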

A closely related example is the work of Bunker and Dünweg [555], who used the excluded volume of polymer chains as the intensive variable to be sampled: typically, the relaxation of polymer conformations in dense melts is very slow, but becomes much faster as the polymers interact less (see Illustration 20). Of course, if the scheme is only meant to equilibrate the melt of interacting polymers, the overhead involved in running additional “unphysical” simulations makes the scheme more expensive. If at all possible, PT should be used in situations where the information from all intermediate values of the intensive variable is of interest, as is for instance the case when PT is used in the context of computing free-energy landscapes (e.g., nucleation barriers [556]), and even when performing thermodynamic integration (section 8.4.1); see ref. [155]. As such calculations require several simulations anyway, the sampling efficiency can be increased at no extra cost by adding parallel tempering moves.

Illustration 20 (Parallel tempering and polymers). In a Parallel Tempering (PT) simulation, one can also perform swaps between systems that have slightly different Hamiltonians. In some cases, this can be very useful. Suppose that system i has a temperature T and energy U_i = Σ_{k>l}^{N} u^{(i)}(r_{kl}), and system j a temperature T and energy U_j = Σ_{k>l}^{N} u^{(j)}(r_{kl}), where u^{(i)} and u^{(j)} are different potentials. As a parallel tempering move, we can swap systems i and j. We take the positions of the particles in system i and recompute the energy using the intermolecular potential of system j; this energy is denoted by U_i^{(j)}.
In a similar way, we compute for system j the energy using the intermolecular potential of system i, U_j^{(i)}. The acceptance rule for such a move reads

acc(o → n) = min{1, exp(−β[(U_i^{(j)} − U_i) + (U_j^{(i)} − U_j)])}.

This type of parallel tempering move can be combined with others involving the temperature, pressure, or chemical potential. Bunker and Dünweg used “Hamiltonian” parallel tempering to simulate a long-chain polymer. The polymer was modeled using a bead-spring model with a purely repulsive Lennard-Jones interaction between the beads:

U^pol(r) = A − Br²                      for r ≤ r_PT,
         = 4ε[(σ/r)^{12} − (σ/r)^{6}]   for r_PT < r ≤ 2^{1/6}σ,
         = 0                            for r > 2^{1/6}σ,

where A and B were chosen such that, for a given value of r_PT, the potential and its first derivative were continuous. We are interested in the properties of the model system with r_PT = 0. The other systems are simply added to facilitate equilibration. For instance, for r_PT = 2^{1/6}σ, the core repulsion vanishes, and polymer chains can pass through each other. In their parallel tempering scheme, Bunker and Dünweg simulated a number of systems with different values of r_PT. A drawback of this “Hamiltonian” parallel tempering scheme is that most of the simulation time is spent on systems that are not of physical interest. After all, we are only interested in the thermodynamic behavior of the system with r_PT = 0. However, as is discussed in ref. [555], the gain in sampling efficiency due to the use of the parallel tempering scheme is still sufficient to make the scheme competitive.

13.1.2 Expanded ensembles

The parallel tempering approach that we discussed in the previous section could still be viewed as a normal Monte Carlo scheme, if we chose to interpret the moves as particle swaps, rather than as swaps of the intensive variables. Now we consider simulation methods where the elementary trial move involves changing an intensive variable; in the following, we will take the temperature as an example. Similar to parallel tempering, the idea behind expanded-ensemble simulations is to speed up the equilibration of a system by allowing excursions to other thermodynamic conditions where configuration space can be explored more efficiently, whilst maintaining thermodynamic equilibrium. Expanded-ensemble simulations and their predecessors have many inventors (see [543]), but the term “Expanded-ensemble method” seems to have been coined by Lyubartsev et al. [546].


There is a conceptual problem with such simulations: for normal Markov Chain MC moves, we can decide on the acceptance probability of a trial move by considering the ratio of the Boltzmann weight of the new and the old states. However, if we change (say) the temperature of a system, there is no natural way to compare probabilities. We cannot say that the probability of a given configuration will be higher or lower when we change the temperature. The underlying reason for this indeterminacy is that intensive variables are not properties of the system itself but of an external “reservoir”. Whilst this situation may seem to present a problem, it is in fact an advantage, because it offers us more freedom to tune the properties of expanded ensembles. As before, we start from the expression for the probability distribution of a system at a given inverse temperature β: 

P(r^N) = exp[−βU(r^N)] / Z(N, V, T).   (13.1.9)

n 

eηi Z(N, V , Ti ) .

(13.1.10)

i=1

The probability to find the system in volume V in configuration rN and temperature Ti is then given by: 



N   eηi e−βi U r N . P r ; Ti = n ηj j =1 e Z(N, V , Tj )

(13.1.11)

Note that at this stage the choice of the weighting factors eηi is left open. Having defined a probability distribution for a system that can be found at n different temperatures, we can construct the rules for trial moves that change the temperature. We follow the usual recipe where we require microscopic reversibility and impose detailed balance. Then the acceptance probability for a trial move that changes the inverse temperature from βi to βj is given by the condition for detailed balance:      acc(i → j ) = exp ηj − ηi exp −(βj − βi )U rN (13.1.12) acc(j → i) Note that if we compute some observable for the state with temperature Ti , we will obtain the normal Boltzmann average for that temperature. Hence, for every temperature, we perform equilibrium sampling. Of course, as in the case of parallel tempering, we could sample other intensive variables using another form of the weight function (see Eq. (13.1.8)). We

466 PART | IV Advanced techniques
(13.1.13)

Hence, if we were to choose the ηi such that ηi = F (N, V , Ti ) + constant, then all Ti s would be equally likely. We do not have to make this choice, but clearly, we do not wish to use weights that would make certain temperatures orders of magnitude less likely than others, because then these temperatures would be under-sampled. Of course, we do not know the free energy F (N, V , Ti ) a priori. However, we can make a good initial estimate (up to an unimportant constant) by using the fact that   ∂βF (N, V , T ) = E(N, V , T ) , (13.1.14) ∂β N,V and we can compute E(N, V , T ) in a normal MC simulation. But the expandedensemble simulations provide us with an estimate of the free-energy difference between two states i and j . If we sample the equilibrium probabilities P (Ti ) and P (Tj ) that states i and j are populated, then we can estimate F (N, V , Ti ) − F (N, V , Tj ) from P (Ti ) = e(ηi −ηj ) e−βi F (N,V ,Ti )+βj F (N,V ,Tj ) . P (Tj )

(13.1.15)

However, populating all states in the expanded ensemble equally may not be an optimal choice. Often, we wish to maximize the rate at which the system “diffuses” between T1 and Tn . More importantly, in the case where the intermediate states correspond to unphysical values of some other intensive parameter (such situations occur, for instance, if we consider the gradual insertion of a molecule in the system), we may only be interested in computing accurate averages for those parameter values that have physical meaning. Under those conditions, a more sensible optimization criterion might be to choose the set {ηi } such that it maximizes the rate of “diffusion” between the different physically meaningful states. In such cases, it is better to choose the ηi such that the population of √ state i is proportional to 1/ Di , where Di is the local “diffusion coefficient”

Accelerating Monte Carlo sampling Chapter | 13 467
13.2

Noise on noise

Thus far, we have assumed that once we know the coordinates of all particles in a system, we can directly evaluate the value of the potential energy. However, sometimes computing the potential energy itself requires a sampling process, and hence the resulting value of U is subject to statistical noise. Of course, the noise can be decreased by sampling longer, but surprisingly, that is not always necessary. Ceperley-Dewing [562] showed that under certain conditions, it is possible to perform Boltzmann sampling of configuration space, even if our estimate of the potential energy is subject to random statistical errors. For convenience, we focus on the case of equal a priori probabilities for forward and reverse moves. In the absence of noise, the acceptance probability of a trial move from a state o to a new state n, which changes the potential energy by an amount U = U (n) − U (o) is, as usual, given by acc (o → n) = min (1, exp [−βU ]) .

(13.2.1)

Now assume that there are statistical errors in the potential energy difference, i.e., we must decide the acceptance of a trial move from o to n based on our knowledge of δ = U + x, where x describes the noise in our estimate of U . The new stochastic variable δ follows some distribution P (δ), such that δ = U . The acceptance probability of trial moves will now depend on δ: acc(δ; o → n). The average acceptance probability for trial moves between o and n is  (13.2.2) acc (o → n) = dδ P (δ)acc(δ; o → n) . To obtain Boltzmann sampling, we require that acc (o → n) = exp [−βU ] , acc (n → o)

(13.2.3)

which implies that   dδ P (δ)acc(δ; o → n) = exp [−βU ] dδ P (δ)acc(−δ; n → o) . (13.2.4) In general, we do not know the true U , nor do we know P (δ), and as a consequence, we cannot determine acc(δ; n → o). However, Ceperley and Dewing

468 PART | IV Advanced techniques
13.3 Rejection-free Monte Carlo The conventional wisdom is that if something sounds too good to be true, it probably is not true. However, the rejection-free Monte Carlo schemes that we discuss below are both real and powerful, be it that they usually involve some computational overhead.

13.3.1 Hybrid Monte Carlo To illustrate that there is nothing strange about rejection-free MC, we first consider the case of Hybrid MC. In the context of molecular simulations, the Hybrid MC method is, as the name suggests, a hybrid between MC and MD [564]. That is: the trial moves are not random displacements of particles, but rather a collective displacement of particles generated by carrying out an MD simulation of length t. The underlying thought is that a “good” (time-reversible, symplectic) MD algorithm (e.g., Verlet) generates a perfectly valid MC trial move because 1) it is reversible and 2) it conserves volume in phase space, meaning that a volume element d ≡ drN dpN is not changed during the time evolution of the system (see Chapter 4 and ref. [117]). We can now start a trial MD run at point rN (0). To define a trajectory, we must generate a set of initial momenta pN (0) according to some (as yet unspecified) probability distribution P (pN (0)). Having specified (0), we run the MD algorithm for n steps (the choice of n can be made later). After n steps, the system will be in a state characterized by phase-space coordinates rN (t), pN (t).

Accelerating Monte Carlo sampling Chapter | 13 469

If we wish to use MD as an MC trial move, it is essential that we impose microscopic reversibility, meaning that the reverse move must be possible. That implies that we must ensure that P (pN (t)) = 0. In addition, as the reverse move involves running the trajectory backward after flipping all momenta at time t, we impose that P (pN ) is invariant under the reversal of all momenta. In thermal equilibrium, the probability of finding the system with coordinates rN must be   −βU rN proportional to e . Denoting the configuration of the system at the initial (final) time (t = 0) by {i} ≡ rN (0), and {f} ≡ rN (t), the condition for detailed balance (Eq. (3.2.7)) reads     e−βU ({i}) P pN (0) acc(i → f ) = e−βU ({f}) P −pN (t) acc(f → i). (13.3.1) From this, it follows that  N  acc(i → f ) −β[U ({f})−U ({i})] P  p (t)  =e . (13.3.2) acc(f → i) P pN (0) First, consider the case that the MD trial move is perfectly energy-conserving. If we denote the total energy U + K by E, then the detailed balance condition implies   acc(i → f ) e+βK({f}) P pN (t)   = (13.3.3) acc(f → i) e+βK({i}) P pN (0) Clearly, if we would have chosen P (pN ) to be the Maxwell-Boltzmann distriN bution at inverse temperature β (P (pN ) ∼ e−βK(p ) ), then the acceptance ratio for forward and backward trial moves would be 1, which means that we can accept 100% of the trial moves. In reality, energy is not perfectly conserved in an MD simulation. If E(t) − E(0) = δ, then the acceptance ratio of our trial moves would be equal to exp(−βδ). Hence, in practice, we do not reach 100% acceptance of trial moves, but we can get close. Of course, we could choose another form for P (pN ), but typically such a choice would only result in a lower acceptance of hybrid-MC trial moves. For more details on the Hybrid MC method and related techniques, see refs. [21, 565–567].

13.3.2 Kinetic Monte Carlo

In Chapter 3 we presented the Markov-Chain Monte Carlo method as a technique to compute the equilibrium properties of many-body systems. However, the original Monte Carlo method of Ulam (and Metropolis) found its first applications in the simulation of sequences of events (e.g., the interaction of neutrons with condensed matter). The philosophy behind the Kinetic Monte Carlo method builds on the original Ulam paper: an efficient and rejection-free tool to model sequences of stochastic events. The KMC method was invented


many times over in various flavors, for instance, in the 1960s in the context of electron transport in semiconductors (see the 1983 review by Jacoboni and Reggiani [568]), in the context of chemical kinetics [569] (the so-called Gillespie algorithm), and in the formulation by Bortz, Kalos, and Lebowitz [570] as a method to perform Boltzmann sampling of systems with discrete degrees of freedom. Rather than discussing all these flavors separately, we give a simplified presentation of the essentials.

The key ingredient in KMC is that we consider a situation where different stochastic events, labeled with an index α, may happen with different rates rα. What is essential is that we should be able to evaluate all rα a priori, if we know the current state of the system. Let us denote the total rate at which something happens by R = Σ_α rα. Then the probability that the system evolves for a time t without any of these events taking place is P0(t) = exp(−Rt). The probability density for the first event happening at time t is then

p1(t) = R exp(−Rt).     (13.3.4)

Our aim is to generate the time t1, the time of the next event, according to the distribution given by Eq. (13.3.4). This we can do by expressing t1 as

t1 = −(1/R) ln x,     (13.3.5)

where x is a random number, uniformly distributed between zero and 1. Once we have generated t1, we still have to decide which event takes place. But, again, this is straightforward because the probability of observing event α is equal to rα/R, and we know all rα. If the number Nr of distinct processes is large, selecting one particular α has to be done efficiently, for instance, using Walker's alias method [571]; a nice, non-mathematical explanation can be found in [572]. The next step is to carry out event α (e.g., a spin flip, or a scattering event) and to recompute all those rα that may have changed. We then compute the time to the next event, and repeat the same steps.

In the context of Boltzmann sampling, the events are (for instance) spin flips for spins with different arrangements of neighbors. To obtain the correct equilibrium sampling, we should choose the rα proportional to the acceptance probability of a standard (e.g., Metropolis) MC trial move. For spin systems with a small number of states of the neighboring spins, we can easily classify each spin according to its environment. When computing averages, the time intervals between successive events matter. For instance, if we denote the energy of the system during interval i by Ei, then the energy estimate for a run of total length τ = t1 + t2 + · · · + tn is

⟨E⟩ = τ⁻¹ Σ_i Ei ti.     (13.3.6)

Note that, as this average depends on a ratio of times, the unit of time is unimportant.
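As an illustration (not taken from the book), a single rejection-free KMC step can be coded as follows; the array of rates for the current state is assumed to be available:

```python
import numpy as np

rng = np.random.default_rng()

def kmc_step(rates):
    """One rejection-free KMC step.

    rates: array of the current rates r_alpha of all possible events.
    Returns the time increment to the next event and the index of that event.
    """
    R = rates.sum()
    dt = -np.log(rng.random()) / R                 # Eq. (13.3.5)
    alpha = rng.choice(len(rates), p=rates / R)    # event alpha chosen with prob r_alpha / R
    return dt, alpha
```

Time averages are then accumulated as in Eq. (13.3.6), weighting the value of an observable in each visited state by the residence time dt. For a very large number of distinct events, the call to rng.choice would be replaced by an O(1) lookup such as Walker's alias method mentioned above.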


As the algorithm of ref. [570] requires the (re)evaluation of all rα at every event, the computational overhead is high. On the whole, the method is most useful to study the kinetics of systems where the time evolution is dominated by rare, unfavorable spin flips. In standard MC, most high-energy trial moves would be rejected. In contrast, with the method of ref. [570], all moves are accepted, but the system simply spends much less time in “high energy” than in “low-energy” configurations (see e.g., ref. [573]). The KMC algorithm proposed by Gillespie [569] is widely used for modeling stochastic (bio)chemical reactions or similar processes. However, its applications fall mostly outside the scope of this book.

13.3.3 Sampling rejected moves

In standard Markov-Chain MC, a large fraction of all trial moves is rejected. This seems wasteful, because before rejecting a trial move, we have probed its final state. Yet, if we reject the move, we discard all the information that we have acquired, even if the final state had a fair chance of being accepted. Surprisingly, it is possible to perform MC sampling in such a way that even the information contained in rejected trial configurations is taken into account when computing thermal averages. Sampling of rejected moves was introduced by Ceperley and Kalos [574] in the context of quantum Monte Carlo simulations and rediscovered in the context of classical MC simulations [575,576]. To derive the method to include the sampling of rejected states, we once again consider the probability of moves from state m to a trial state n (see Chapter 3). The Boltzmann weight of state m is denoted by N(m). The probability that a system will undergo a transition from state m to state n is π(m → n), which we shorten to πmn. A valid choice for πmn should maintain the equilibrium distribution [87]:

Σ_m N(m) πmn = N(n).     (13.3.7)

Typically, we aim to compute averages of the form:

⟨A⟩ = Σ_n N(n) an / Σ_n N(n).     (13.3.8)

We can now use Eq. (13.3.7) to write Eq. (13.3.8) as

⟨A⟩ = Σ_n Σ_m N(m) πmn an / Σ_n Σ_m N(m) πmn = Σ_m N(m) [Σ_n πmn an] / Σ_m N(m),     (13.3.9)

where, in the final equality, we have changed the order of summation and used the fact that Σ_n πmn = 1. Hence, ⟨A⟩ can be rewritten as a Boltzmann average over all states m of the quantity Σ_n πmn an. To estimate ⟨A⟩ we can use normal Markov-chain sampling over the M visited states:


⟨A⟩ = lim_{M→∞} (1/M) Σ_{n=1}^{M} [Σ_m πnm am],     (13.3.10)

where the sum now runs over the visited states n. We now use the fact that in a Monte Carlo algorithm the transition probability πnm is the product of two factors: the a priori probability αnm to attempt a trial move to m, given that the system is initially in state n, and the probability acc(n → m) to accept this trial move:

πnm = αnm × acc(n → m).     (13.3.11)

With this definition, our estimate for ⟨A⟩ becomes

⟨A⟩ = lim_{M→∞} (1/M) Σ_{n=1}^{M} [Σ_m αnm acc(n → m) am].     (13.3.12)

This expression is still not very useful, because it would require us to compute am for all states m that can be reached in a single trial move from n. In what follows, we use the following shorthand notation:

⟨a⟩_n ≡ Σ_m αnm acc(n → m) am.     (13.3.13)

In simulations, we do not compute ⟨a⟩_n explicitly, but use the fact that αnm is a normalized probability. We can therefore estimate ⟨a⟩_n by drawing, with a probability αnm, trial moves from the total set of all moves starting at n:

⟨a⟩_n = ⟨acc(n → m) am⟩_{αnm}.     (13.3.14)

Our scheme to estimate ⟨A⟩ then reduces to the following sampling

⟨A⟩ = lim_{M→∞} (1/M) Σ_{n=1}^{M} ⟨acc(n → m′) am′⟩_{m′},     (13.3.15)

where m′ is the set of states generated with the probability αnm′. If we use an MC algorithm for which a trial move generates only a single candidate state m′, then the expression for ⟨A⟩ becomes

⟨A⟩ = Σ_n [(1 − acc(n → m′)) an + acc(n → m′) am′] / Σ_n 1.     (13.3.16)

This average combines information about both the “accepted” and the rejected state of a trial move. Note that the Monte Carlo algorithm used to generate the


random walk among the states n need not be the same as the one corresponding to πnm. For instance, we could use standard Metropolis to generate the random walk, but use the symmetric rule [577]

πmn = αmn N(n) / [N(n) + N(m)]     (13.3.17)

to sample the am's, in which case it can be shown [578,579] that the statistical error is necessarily lower than when using standard Metropolis sampling. The advantage of sampling rejected moves becomes larger when trial moves can cheaply generate many possible final states [580].
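A minimal sketch (ours, not the book's) of the estimator in Eq. (13.3.16) for a single-candidate trial move looks as follows; `propose`, `acceptance`, and `observable` are assumed user-supplied functions:

```python
import numpy as np

rng = np.random.default_rng()

def waste_recycling_average(n_steps, state, propose, acceptance, observable):
    """Estimate <A> from Eq. (13.3.16): every trial move contributes both the
    rejected part (current state n) and the accepted part (trial state m'),
    weighted by 1 - acc and acc, respectively."""
    total = 0.0
    for _ in range(n_steps):
        trial = propose(state)              # candidate m', drawn with probability alpha_nm'
        acc = acceptance(state, trial)      # acc(n -> m'), e.g., the Metropolis rule
        total += (1.0 - acc) * observable(state) + acc * observable(trial)
        if rng.random() < acc:              # ordinary Markov-chain update of the walker
            state = trial
    return total / n_steps
```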

13.4 Enhanced sampling by mapping

In section 6.3, we discussed volume-changing moves in the constant-NPT MC method. In these moves, we kept the scaled center-of-mass coordinates of all particles fixed, and hence the real coordinates of all particles would change when the volume was changed. This is not the only way we could carry out constant-pressure MC. We could, for instance, have a system terminated by hard walls and change the volume by an amount ΔV by carrying out trial displacements of a wall acting as a piston. In such moves, we could leave all real coordinates unchanged, but of course, if the piston would overlap with a particle, then the trial move would be rejected. Such rejections would occur even if we would simulate an ideal gas at density ρ. The probability that a trial move yields a non-overlapping configuration would decay as min[1, exp(+ρΔV)] (expansion never yields an overlap with the walls). But if no such overlap would be detected, then the acceptance probability would simply be min[1, exp(−βPΔV)], which accounts for the free-energy change of the reservoir. If the ideal-gas pressure is equal to the reservoir pressure, then the acceptance of expansion and compression trial moves is symmetric in ΔV, and the average volume change after a move is zero, as it should be. In contrast, if, as in section 6.3, we use scaled coordinates, then all trial moves would be guaranteed to generate configurations without overlap with the walls. But then there is a factor [(V + ΔV)/V]^N in the acceptance probability (Eq. (6.3.12)). This factor is the ratio of the Jacobians of the transformation from unscaled to scaled coordinates.¹ It is a purely entropic term. We show this example because it shows that a) trial moves can sometimes be made more efficient by using a coordinate scaling and b) that if we use trial moves that change the mapping from a set of reference coordinates (sN) to the real coordinates, then the ratio of the old and new Jacobians ends up in the acceptance rule. Jarzynski [581] explored the more general problem of how a suitably chosen coordinate transformation can improve sampling, in the context of estimating

¹ As before, we write "Jacobian" as a shorthand for "the absolute value of the determinant of the Jacobian matrix."


free-energy differences. To explain the gist of the method, consider that we have a well-characterized and easy-to-sample N-particle reference system (A) with dN coordinates denoted by q, a potential energy UA(q), and a Boltzmann distribution ρA(q), and we wish to sample a system B with a harder-to-sample potential energy function UB(q′) and Boltzmann distribution ρB(q′). The holy grail would be to construct a coordinate transformation T from q to q′, such that our sampling of A automatically yields all points in B with the correct Boltzmann weight. The probability density generated by sampling A and transforming from q to q′ is

ρ(q′) ∼ exp(−βUA) / |J|_T,     (13.4.1)

where |J|_T ≡ |∂q′/∂q| is the Jacobian of the transformation from q to q′. Clearly, if we could construct a transformation T such that |J|_T = ρA(q)/ρB(q′), then sampling A would immediately allow us to sample B with the correct Boltzmann weight. In general, constructing such a perfect transformation is not feasible, and hence much of the effort focuses on generating reasonably good transformations (see, however, section 13.4.1 below). But for illustrative purposes, it helps to consider a simple case where we can do the exact transformation. Consider a single particle in a 1d box of length L = 1: this is our system A. If the potential energy in the box is flat, uniform random sampling of q between 0 and 1 yields the Boltzmann distribution of A. Next consider system B: a particle that can take all coordinates 0 < q′ < ∞, and that is subject to an external potential UB(q′) = κq′, with a corresponding Boltzmann distribution ρB(q′) ∼ exp(−βκq′). To achieve the correct sampling of B via a coordinate transformation we need

∂q′/∂q = C exp(βκq′),     (13.4.2)

where C is a constant that will be fixed by the condition ⟨q′⟩ = (βκ)⁻¹. It is easy to verify that Eq. (13.4.2) implies that [66] q′ = −(βκ)⁻¹ ln q. In this case, Metropolis sampling of the exponential distribution of system B would be slower than sampling the uniform distribution of system A. Jarzynski [581] showed that, for the general case where A and B can be complex many-body systems, the free-energy difference ΔF between A and B is given by

ΔF = −kBT ln ⟨exp{−β[UB(q′) − UA(q) − kBT ln J(q)]}⟩_A,

and hence a transformation with kBT ln J(q) ≈ UB(q′) − UA(q) would speed up the estimation of free-energy differences, provided that A is easy to sample (no barriers, no traps).
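The 1d example above is easy to verify numerically. The short sketch below (illustrative only) samples system A uniformly, applies the transformation q′ = −(βκ)⁻¹ ln q, and checks that the mapped points have the mean (βκ)⁻¹ expected for the Boltzmann distribution of system B:

```python
import numpy as np

rng = np.random.default_rng()
beta, kappa = 1.0, 2.0

q = rng.random(100_000)                  # system A: uniform samples on (0, 1)
q_prime = -np.log(q) / (beta * kappa)    # exact map to system B

# rho_B(q') ~ exp(-beta*kappa*q') has mean 1/(beta*kappa):
print(q_prime.mean(), 1.0 / (beta * kappa))
```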


The mapping method was reformulated by Kofke and collaborators (see e.g., [582]), for a wide range of sampling problems. But also in the Kofke approach, it is necessary to make physically plausible approximations to construct the mapping.

13.4.1 Machine learning and the rebirth of static Monte Carlo sampling

More recently, a number of groups (see Noé et al. [583], Wirnsberger et al. [584] and Gabrié et al. [585]) showed that there is considerable promise in using machine learning to create "good" invertible mappings between the coordinates q and q′ defined in section 13.4, in situations where our intuition may not be good enough. As explained in section 3.2.1, an ideal static MC code would generate independent points in configuration space with a probability proportional to their Boltzmann weights. A system of N particles in d dimensions has dN degrees of freedom; we can view the generation of a configuration as a complex, nonlinear mapping of dN random numbers RdN = (R1, ..., RdN) to dN coordinates rN = (r1, ..., rdN). A trivial example would be the generation of an ideal-gas configuration in a cubic box with diameter L: a typical configuration can be generated by multiplying dN random scaled coordinates sN inside a unit hypercube, i.e., 0 ≤ si < 1, with the box length L. For interacting systems, the above mapping would be useless: almost all configurations would have a vanishing Boltzmann weight. The question is whether we can define some complex, nonlinear vector function F, with rN = F(R1, · · · , RdN), such that the resulting configurations appear with a frequency proportional to their Boltzmann weights. The answer to this question used to be "no", but that situation is changing. Before proceeding, we note that the mapping from RdN to rN should be bijective, that is: every set of dN random numbers should correspond to one and only one valid point in configuration space, and conversely. In section 13.4 we described MCMC trial moves that use the idea of mapping to generate trial configurations with a higher Boltzmann weight than those generated by random trial moves. Typically, such mappings are constructed "by hand", using physical insight. But our physical insight is insufficient to design reasonable mappings that generate, in a static MC move, configurations with a probability close to their Boltzmann weight. This is where Machine Learning (ML) comes in: the idea is that we can train a neural net to perform such bijective transformations. Note that most machine-learned "mappings" are not at all bijective: typically, many inputs generate the same output, e.g., many pictures of the same person should identify the same individual. A popular class of methods for training a neural net to carry out a complex, bijective transformation is called "normalizing flows" [586]. We will not go into the ML aspect of the method. Just a remark on the terminology: the term "flow" is used because in practice the mapping is im-


plemented as a series of transformations. The original, normalized probability distribution of the random numbers (more commonly normal, rather than uniform) is mapped onto a (normalizable) probability distribution in the space of transformed coordinates. To explain what happens next, it is convenient to assume that we map random numbers that are uniform in 0 ≤ R < 1 to the coordinates of the system. The original, simple distribution is often referred to as the "latent space" distribution. In practical cases, the latent-space distribution is not uniform but normal. We denote the generated probability distribution in r^dN by P(r^dN). Then

P(R^dN) dR^dN = P(r^dN) dr^dN = P(r^dN) J(R^dN, r^dN) dR^dN,     (13.4.3)

where P(R^dN) = 1, and J(R^dN, r^dN) denotes the Jacobian for the transformation from R^dN to r^dN. We aim to find a mapping such that P(r^dN) is proportional to the Boltzmann distribution exp[−βU(r^dN)]:

exp[−βU(r^dN)] J(R^dN, r^dN) = constant.     (13.4.4)

Without loss of generality, we take the constant to be equal to one. Then

βU(r^dN) = ln J(R^dN, r^dN).     (13.4.5)

In words: the trained transformation should map more densely onto regions with low potential energy (small Jacobian ↔ large Boltzmann factor) and less densely elsewhere. The ML aspect is then to train a neural net to find a transformation that approximately satisfies Eq. (13.4.5): we can then correct for the remaining discrepancy by re-weighting the sampled points with a factor exp[−βU(r^dN)] J(R^dN, r^dN).² If the training has been successful, the re-weighting factor should be close to one. To train the neural network, we need to be able to generate a number of configurations of the system with a reasonable Boltzmann weight. These configurations may be generated in advance [583], or on the fly [585]. Also, the training works both ways: we train the neural net such that probable states of the latent distribution map onto states with a high Boltzmann weight. But conversely, we also train the network such that the inverse mapping from real space to the latent distribution maps states with a high Boltzmann weight onto "likely" reference states. In practice, it is often convenient to train the network such that it minimizes the average of the backward and forward Kullback-Leibler divergences.

² Training of a neural network requires finding an extremum, rather than an equality. In this case, the quantity to optimize is the (forward or backward) Kullback-Leibler divergence [587] between the generated distribution and the target distribution. This divergence is at a minimum if the two distributions are equal. Normalization of the distributions is unimportant because the log of the normalizing factor is a constant.


There are two points to note. First, computing Jacobians by hand, in particular for a sequence of complex transformations, is no fun. However, this problem has been made much less daunting by the use of automatic differentiation (see [108]). Second: a Jacobian is the absolute value of a determinant. In general, computing an M × M determinant requires O(M³) operations, which quickly becomes prohibitive. However, in some cases, such as when the matrix is triangular, computing the determinant is an O(M) operation. The following trick (called "coupling layers/flows") can be used to create a mapping for which the Jacobian is guaranteed to be triangular: at every step in the sequence of transformations, part of the M coordinates, i.e., those with index 1 ≤ i ≤ k (k can be chosen), are just mapped onto themselves and are therefore independent of all other coordinates. As a result, the top k rows of the Jacobian matrix are only non-zero on the diagonal. The remaining coordinates (k + 1 ≤ i ≤ M) are transformed such that every new coordinate with index i is a function of the old coordinate with the same index, and of all the coordinates in the first set (1 ≤ i ≤ k). The function that carries out the transformation can be linear or nonlinear and should be different for every coordinate; importantly, it depends on a set of parameters that can be "learned". The net effect of the coupling-layers trick is that the resulting Jacobian matrix is lower triangular, and computing its determinant is an O(M) operation. The splitting of the coordinates into sets that will/will not be transformed should be different at every step in the sequence of transformations. The total Jacobian is then simply the product of the partial Jacobians, or in practice: the log of the total Jacobian is the sum of the logs of the partial Jacobians. A code sketch of a single coupling layer is given at the end of this section.

Illustration 21 (Normalizing flows and static Monte Carlo sampling). Noé et al. [583] illustrated the potential of normalizing flows in the context of Molecular Simulations. Here we focus on two features of the normalizing-flows method that are highlighted in ref. [583]:
1. the ability to discover states with a high Boltzmann weight that are different from the training set, and
2. the ability to compute free-energy differences between two or more different structures of a given model, without the need to construct a transition between the two.
To illustrate the "discovery" of new structures, ref. [583] considered a simple example of a system with a low-dimensional Mueller-Brown potential-energy landscape [588] that contains two low-energy basins separated by a high-energy barrier. The first stage of the simulations is to start by sampling a small number of states in one of the potential-energy basins, and training the network such that states with large Boltzmann weights correspond to points close to the center of the normal distribution in the latent space, and conversely that points that are generated with high probability in the latent space correspond to configurations with high Boltzmann weights in the real energy landscape. As the simulation is trained on one basin only, the simulation will initially simply find more configurations in that basin. However, after a number of


iterations, the network starts to find states in the other basin, which was not a part of the initial training set. After a sufficiently large number of iterations, the simulation recovers the complete Boltzmann distribution of this energy landscape (see Fig. 13.3). Note, however, that this is a simple, low-dimensional example. Gabrié et al. [585] argue that the discovery of "unknown" basins does not work in real-life problems with high-dimensional potential energy landscapes.

FIGURE 13.3 Example of the "discovery" of a basin in an energy landscape that was not included in the original training set of the mapping [583]. The potential energy "landscape" is given by a version of the Mueller potential (left figure: see [583]). In the legend, the number of iterations is denoted by I. Initially (right figure: top left) only the original (single) state in one basin is found. The original basin is explored during the next O(100) iterations. However, after that, the other basins are found as well. After some 1500 iterations, the complete Boltzmann distribution of the original system is recovered. Note that this is achieved without having to cross the high energy barrier between the two basins along the x2 direction.

Next, we consider the normalizing-flows approach to computing free-energy differences. The starting point is the following relation [581], expressing the free-energy difference ΔF ≡ FB − FA of two systems A and B related via a bijective mapping, as:

exp(−βΔF) = ZB/ZA = ⟨exp{−β[UB − UA − kBT ln J]}⟩_A,     (13.4.6)

where J is the Jacobian of the mapping from A to B. In our case, system A would be described by a dN-dimensional normal distribution, and system B would be the system of interest. If we have a good mapping, we can obtain good estimates of the average in Eq. (13.4.6), and thereby obtain the free energy of system B. But this result is not limited to a single target system B: we could have several target systems (e.g., different crystal structures of


the same model [584]). In that case, Eq. (13.4.6) allows us to compute the free-energy difference between different structures. Note, however, that the free-energy calculations in refs. [583,589] do not use Eq. (13.4.6) itself, but variants based on the more sophisticated methods described in Chapter 8. But we can also bias the mapping in such a way that we explore the (Landau) free energy of system B as a function of a given order parameter. Noé et al. [583] showed that the latter procedure yields free-energy profiles that are in good agreement with results obtained with conventional (e.g., umbrella sampling) methods (see Fig. 13.4).

FIGURE 13.4 Free-energy profile along the diagonal x1 − x2 direction in the potential energy landscape of the Mueller potential (see Fig. 13.3). The figure compares the reference free-energy profile obtained using normal (biased) simulations with the results obtained using unbiased normalizing flows, and with normalizing flows plus biasing. Note that, with the biasing, it is possible to recover the free-energy profile in the range that would be poorly sampled in the original normalizing-flows calculation. Figure based on the data of ref. [583].

Note, however, that the normalizing flow approach to Monte Carlo sampling is very much “work in progress”. For higher-dimensional problems (e.g., liquids), the mapping starting from a Gaussian distribution [583] tends to run into trouble: approaches that create mappings from a reference state that is more similar to the target state work better [584,585,590].
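To make the coupling-layer construction described earlier in this section concrete, here is a minimal NumPy sketch of a single affine coupling transformation and its inverse. It is purely illustrative: in a real normalizing flow the scale and shift functions would be (trained) neural networks, whereas here they are assumed to be arbitrary user-supplied functions of the untransformed coordinates.

```python
import numpy as np

def coupling_layer(x, k, scale_fn, shift_fn):
    """One affine coupling layer acting on an M-dimensional vector x.

    The first k coordinates pass through unchanged; the remaining M - k are
    scaled and shifted by functions of the first k only, so the Jacobian is
    triangular and its log-determinant costs O(M) to evaluate."""
    x1, x2 = x[:k], x[k:]
    s = scale_fn(x1)                       # log-scales, shape (M - k,)
    t = shift_fn(x1)                       # shifts, shape (M - k,)
    y = np.concatenate([x1, x2 * np.exp(s) + t])
    log_jacobian = np.sum(s)               # ln |det J| = sum of log-scales
    return y, log_jacobian

def coupling_layer_inverse(y, k, scale_fn, shift_fn):
    """Exact inverse of coupling_layer (the mapping must be bijective)."""
    y1, y2 = y[:k], y[k:]
    s, t = scale_fn(y1), shift_fn(y1)
    x = np.concatenate([y1, (y2 - t) * np.exp(-s)])
    return x, -np.sum(s)
```

Stacking several such layers, with a different split of the coordinates at every step, gives a flexible bijection whose total log-Jacobian is simply the sum of the per-layer values.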

13.4.2 Cluster moves

The most spectacular examples of Monte Carlo schemes that reach 100% acceptance are those based on cluster moves. In this area, there have been important developments since the previous edition of this book. Below, we describe some of the key concepts behind these developments, without pretending to be anywhere near complete.


For didactical reasons, it is best to start with the earliest example of a cluster-move scheme that reaches 100% acceptance, namely the so-called Swendsen-Wang method [591].

13.4.2.1 Cluster moves on lattices

The central idea behind the Swendsen-Wang (SW) scheme and subsequent extensions and modifications is to generate trial configurations with a probability that is proportional to the Boltzmann weight of that configuration. As a result, the subsequent trial moves can be accepted with 100% probability. We use a somewhat simplified derivation of the SW scheme for cluster moves based, again, on the condition for detailed balance. Consider an "old" configuration (labeled by a superscript o) and a "new" configuration (denoted by a superscript n). Detailed balance is satisfied if the following equality holds:

N(o) P^o_Gen({cluster}) P^{cl}(o → n) acc(o → n) = N(n) P^n_Gen({cluster}) P^{cl}(n → o) acc(n → o),     (13.4.7)

where N(o) is the Boltzmann weight of the old configuration and P^o_Gen({cluster}) denotes the probability of generating a specific cluster, starting from the old configuration of the system. The term P^{cl}(o → n) is the probability of transforming the generated cluster from the old to the new situation. Finally, acc(o → n) is the acceptance probability of a given trial move. We can simplify Eq. (13.4.7) in two ways. First of all, we require that the a priori probability P^{cl}(o → n) be the same for the forward and reverse moves. Moreover, we wish to impose Pacc = 1 for both forward and reverse moves. This may not always be feasible. However, for the simple case that we discuss next, this is indeed possible. The detailed balance equation then becomes

N(o) P^o_Gen({cluster}) = N(n) P^n_Gen({cluster}),     (13.4.8)

or

P^o_Gen({cluster}) / P^n_Gen({cluster}) = N(n)/N(o) = exp(−βΔU),     (13.4.9)

where ΔU is the difference in energy between the new and the old configurations. The challenge is to find a recipe for cluster generation that will satisfy Eq. (13.4.9). To illustrate how this works, we consider the Ising model. The extension to many other lattice models is straightforward.

Swendsen-Wang algorithm for Ising model

Consider a given configuration of the spin system (the dimensionality is unimportant), with Np spin pairs parallel and Na spin pairs anti-parallel. The total energy of that configuration is

U = (Na − Np)J,     (13.4.10)


where J denotes the strength of the nearest-neighbor interaction. The Boltzmann weight of that configuration is

N(o) = exp[−βJ(Na − Np)]/Z,     (13.4.11)

where Z is the partition function of the system. In general, Z is unknown, but that is unimportant. The only thing that matters is that Z is constant. Next, we construct clusters by creating bonds between spin pairs according to the following recipe:
• If nearest neighbors are antiparallel, they are not connected.
• If nearest neighbors are parallel, they are connected with probability p and disconnected with probability (1 − p).
Here, it is assumed that J is positive. If J is negative (antiferromagnetic interaction), parallel spins are not connected, while antiparallel spins are connected with a probability p. In the case that we consider, there are Np parallel spin pairs. The probability that nc of these are connected and nb = Np − nc are "broken" is

P^o_Gen({cluster}) = p^{nc} (1 − p)^{nb}.     (13.4.12)

Note that this is the probability to connect (or break) a specified subset of all links between parallel spins. Once the connected bonds have been selected, we can define the clusters in the system. A cluster is a set of spins that is at least singly connected by bonds. Let us denote the number of such clusters by M. We now flip every one of the M clusters with 50% probability. After the cluster flipping, the number of parallel and antiparallel spin pairs will have changed, for example,

Np(n) = Np(o) + Δ     (13.4.13)

and (hence)

Na(n) = Na(o) − Δ.     (13.4.14)

Therefore, the total energy of the system will have changed by an amount −2JΔ:

U(n) = U(o) − 2JΔ.     (13.4.15)

Let us now consider the probability of making the reverse move. To do this, we should generate the same cluster structure, but now starting from a situation where there are Np + Δ parallel spin pairs and Na − Δ antiparallel pairs. As before, the bonds between antiparallel pairs are assumed to be broken (this is compatible with the same cluster structure). We also know that the new number of connected bonds, n′c, must be equal to nc, because the same number of connected bonds is required to generate the same cluster structure. The difference


appears when we consider how many of the bonds between parallel spins in the new configuration should be broken (n′b). Using

Np(n) = n′c + n′b = nc + n′b = Np(o) + Δ = nc + nb + Δ,     (13.4.16)

we see that

n′b = nb + Δ.     (13.4.17)

If we insert this in Eq. (13.4.9), we obtain

P^o_Gen({cluster}) / P^n_Gen({cluster}) = p^{nc}(1 − p)^{nb} / [p^{nc}(1 − p)^{nb+Δ}] = (1 − p)^{−Δ} = N(n)/N(o) = exp(2βJΔ).     (13.4.18)

To satisfy this equation, we must have

p = 1 − exp(−2βJ),     (13.4.19)

which is the Swendsen-Wang rule.

Wolff algorithm

Most of the computational overhead of the Swendsen-Wang algorithm goes into decomposing the system into M clusters that can be flipped independently. In a sense, this is overkill, because the algorithm prepares M clusters that can be flipped in 2^M different ways, yet we only select one of these possible choices as our trial move. A more efficient approach to the cluster-move problem was proposed by Wolff [592]: rather than constructing all clusters that could be flipped, the Wolff algorithm selects one spin at random, and then uses the above rules for defining bonds to construct a single cluster connected to the selected spin. There are bonds connecting all spins inside the cluster, yet none of the spins in the cluster is connected to any of the other spins. This single cluster is then flipped (with a 100% acceptance probability). Wolff clusters may be small, in particular well above the critical point, in which case the algorithm will not speed up the decay of fluctuations much compared with single-spin moves, but this is not serious, as constructing small clusters is cheap. As the system is cooled towards the critical temperature, Wolff clusters will typically grow, and hence become more expensive to generate, but flipping them then contributes significantly to the decay of fluctuations.
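As an illustration (ours, not the book's), a single Wolff cluster move for a periodic L × L Ising lattice of ±1 spins can be coded as follows, using the bond probability p = 1 − exp(−2βJ) of Eq. (13.4.19) for parallel nearest neighbors:

```python
import numpy as np

rng = np.random.default_rng()

def wolff_move(spins, beta, J=1.0):
    """Grow one Wolff cluster from a random seed spin and flip it."""
    L = spins.shape[0]
    p_add = 1.0 - np.exp(-2.0 * beta * J)          # Eq. (13.4.19)
    i, j = rng.integers(L, size=2)                 # random seed spin
    seed_spin = spins[i, j]
    cluster = {(i, j)}
    stack = [(i, j)]
    while stack:                                   # breadth of the cluster growth
        x, y = stack.pop()
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nx, ny = nx % L, ny % L                # periodic boundary conditions
            if (nx, ny) not in cluster and spins[nx, ny] == seed_spin \
                    and rng.random() < p_add:      # bond formed with probability p
                cluster.add((nx, ny))
                stack.append((nx, ny))
    for x, y in cluster:                           # flip the whole cluster (100% acceptance)
        spins[x, y] *= -1
```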

13.4.2.2 Off-lattice cluster moves

An obvious question is whether it is possible to formulate algorithms that generate rejection-free cluster moves of the Swendsen-Wang/Wolff type for off-lattice systems. As we explain below, the answer is: yes, but....


Let us start with the Yes. Dress and Krauth [593] proposed a rejection-free algorithm to carry out large-scale cluster moves for hard-core systems. Such a move is constructed by considering the effect on the particle coordinates of a symmetry operation that is compatible with the periodic boundary conditions. We will not describe the original (Swendsen-Wang-like) approach of ref. [593], but the Wolff-like approach of ref. [594]. The approach of ref. [593] does not depend on dimensionality, and therefore we illustrate the method by considering a one-dimensional example: a fluid of N hard particles in a periodically repeated segment of length L. We now consider a permissible symmetry operation: in 1d, this would be inversion around a randomly chosen point, or a translation over a randomly chosen distance. However, for Wolff clusters in higher dimensions, we can also consider other point symmetries, e.g., rotations around an axis (this cannot be done when the cluster move should map the whole periodic box onto itself). Whatever the move, the Wolff cluster should be sufficiently small such that, after the move, it cannot interact with any of the periodic images of the original cluster. We first apply this symmetry operation (say, inversion) to a randomly chosen "seed" particle, say i: the result is that xi → x′i, where x′i is the position where particle i would end up after the inversion. Of course, there is now a finite chance that particle i at x′i will overlap with one or more particles j. If that is the case, we apply the same symmetry operation to the coordinate(s) of particle(s) j. As we started from a configuration with no overlaps, particles i and j will not overlap if they have both undergone the symmetry operation. However, particle j may now overlap with a particle k, which is then also inverted. The sequence of moves might be described by the rule "if you cannot avoid them, join them". If the density of the fluid is not too high, there will come a point when no new overlaps are generated, say after we have inverted the coordinate of the nth particle in our cluster. At that point, we can stop: we have moved a cluster of particles i, j, k, · · · , n without generating any overlaps. Therefore, we can accept this trial move. The limitation of the method is that for a dense fluid, the cluster will percolate through the periodic boundaries and will contain most or all particles. In that case, the move would not contribute to the relaxation of structural fluctuations. However, the same problem is already present in the Swendsen-Wang/Wolff algorithm: at low enough temperatures, the cluster that is flipped contains (almost) all spins, in which case: yes, it is rejection-free, but no, it does not speed up the simulation. Having formulated a rejection-free cluster algorithm for hard particles, it is logical to ask if we can do the same for particles with continuous interactions. The answer is, again, yes. But to explain that approach, it is convenient to first discuss a seemingly unrelated method, namely the early rejection method.³

³ In earlier editions of this book, the early-rejection method was described in a different context. However, the approach has gained importance in the context of the algorithms described in this chapter.


13.4.3 Early rejection method

In section 3.4.1 we argued that the optimal acceptance probability of MC trial moves in hard-core systems should be lower than that of trial moves in systems with continuous interactions. The reason is that in simulations of hard-core systems, we can reject a trial move as soon as a single overlap is detected. For continuous potentials, all interactions must be computed before a standard MC trial move can be accepted or rejected. As a consequence, it is on average cheaper to perform a trial move of a hard-core particle that results in rejection than in acceptance. This leads to the strange situation that it would be cheaper to perform an MC simulation of a hard-core model than of a corresponding model that has very steep but continuous repulsive interactions. Yet one would expect that, also for continuous potentials, it should be possible to reject trial moves that are almost certainly "doomed" if they incur a large potential-energy penalty from a near overlap. Fortunately, we can use the idea behind the Swendsen-Wang scheme to formulate a criterion that allows us to reject "doomed" trial moves for particles with a continuous intermolecular potential at an early stage [595]. To see how this approach works, consider a trial displacement of particle i to a new position. We now construct "bonds" between this particle and a neighboring particle j with a probability

pbond(i, j) = max[0, 1 − exp(−βΔui,j)],     (13.4.20)

where Δui,j = un(i, j) − uo(i, j) is the change in the interaction energy of particles i and j caused by the trial displacement of particle i. If j is not connected to i, it means that j will not block the move of i, and we proceed with the next neighbor k, and so on. But as soon as a bond is found between i and any of its neighbors, the single-particle move of i is blocked and we reject the trial move. Only if particle i is not bonded to any of its neighbors do we accept the trial move. It is easy to show that this scheme satisfies detailed balance. The probability that we accept a move from the old to the new position is given by

acc(o → n) = Π_{j≠i} [1 − pbond(i, j)] = exp[−β Σ_{j≠i} Δu⁺i,j(o → n)],     (13.4.21)

where the summation is over particles j for which Δu(i, j) is positive. For the reverse move n → o, we have

acc(n → o) = Π_{j≠i} [1 − pbond(i, j)] = exp[−β Σ_{j≠i} Δu⁺i,j(n → o)].     (13.4.22)


The summation is over all particles j for which the reverse move causes an increase in energy. In addition, we can write

Δui,j(n → o) = −Δui,j(o → n),     (13.4.23)

which gives for the probability of accepting the reverse move n → o

acc(n → o) = exp[+β Σ_{j≠i} Δu⁻i,j(o → n)],     (13.4.24)

where the summation is over all particles j for which the energy Δui,j(o → n) decreases. Detailed balance now implies that

acc(o → n)/acc(n → o) = exp[−β Σ_{j≠i} Δu⁺i,j(o → n)] / exp[+β Σ_{j≠i} Δu⁻i,j(o → n)] = exp[−β Σ_{j≠i} Δui,j(o → n)] = N(n)/N(o),     (13.4.25)

which demonstrates that detailed balance is indeed obeyed. It is interesting to compare this scheme with the original Metropolis algorithm. In the bond-formation scheme, it is possible that a move is rejected even when the total energy decreases. Hence, although this scheme yields a valid Monte Carlo algorithm, it is not equivalent to the Metropolis method. Eq. (13.4.20) ensures that, if a trial displacement puts particle i in a very unfavorable position where Δuij ≫ 0, then it is very likely that a bond will form between these particles and hence the trial move can be rejected. The early-rejection scheme is not limited to single-particle moves. In fact, it is most useful when applied to complex many-particle moves, such as the cluster moves discussed below, or the Configurational-Bias Monte Carlo scheme discussed in Chapter 12. In the standard configurational-bias scheme, one has to "grow" an entire chain molecule and calculate its Rosenbluth weight, before a trial move can be accepted or rejected. However, if one of the first segments during this growth was placed at an unfavorable position, such that the new configuration is "doomed", then the early-rejection scheme could be used to avoid having to complete the growth of a new polymer configuration.

Early rejection and cluster moves

We can use the early-rejection method described above to recast the method of ref. [593] in a form suitable for systems with continuous interactions (see Liu and Luijten [594]).


Consider a cluster move of the type considered above. We can now again define "bonds". If particle i moves from xi to x′i, its interaction with another particle j at xj would change by an amount Δui,j ≡ ui′,j − ui,j, where i′ denotes the situation where particle i is located at x′i. Clearly, if βΔui,j ≫ 1, the move would be unfavorable, unless we also include particle j in the same cluster as i. We can now use Eq. (13.4.20) to describe the probability that particles i and j are "bound" which, in this case, means that they undergo the same cluster move. As before, we now continue adding particles j, k, etc. to the cluster until no more bonds are formed. We can then carry out the cluster move. As we have shown in Eq. (13.4.25), this cluster move satisfies detailed balance and would be rejection-free. An illustrative code sketch of the bond-formation step is given at the end of this subsection. The method of ref. [594] works well for particles with short-ranged interactions. There is, however, a problem if we try to carry out cluster moves for particles with longer-ranged interactions, because then we would have to compute pbond(i, j) for a large number of particle pairs. In the case of longer-ranged interactions, we could, in principle, attempt hybrid cluster moves: we use the approach described above to construct a cluster of particles interacting with the pair potential u(r) truncated at some distance rc. When attempting the cluster move, we then compute ΔULR, the change in the total potential energy due to all other (presumably weak) long-ranged interactions (r > rc). We then accept or reject our cluster move with the Metropolis rule acc(o → n) = min[1, exp(−βΔULR)], but of course, then the moves are no longer rejection-free.

Virtual move Monte Carlo

The Virtual-Move Monte Carlo (VMMC) method of Whitelam et al. [596,597] is a cluster-move scheme that aims to make the motion of clusters resemble the diffusive motion of clusters in a solvent. The basic moves are cluster translation/rotation, but to make the dynamics more "realistic," the clusters are given a diffusion constant that scales as their inverse mass, which is not the Stokes-Einstein result for approximately spherical particles, nor for most other cluster shapes. The algorithm is distinctly more complex than the basic cluster algorithm of ref. [594].
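The following minimal sketch (ours, not from the book) illustrates the bond-formation rule of Eq. (13.4.20) in its simplest use, the early rejection of a single-particle trial move; the same rule, applied recursively to the blocking particles, is what grows the clusters of ref. [594]. The functions `neighbors_of` and `pair_energy` are assumed to be supplied by the surrounding simulation code.

```python
import numpy as np

rng = np.random.default_rng()

def early_reject_move(i, r_trial, positions, neighbors_of, pair_energy, beta):
    """Trial displacement of particle i to r_trial with early rejection.

    A 'bond' forms between i and a neighbor j with probability
    max(0, 1 - exp(-beta * du)) (Eq. (13.4.20)); the first bond that forms
    blocks the move, so we can stop without visiting the remaining neighbors."""
    for j in neighbors_of(i):
        du = (pair_energy(r_trial, positions[j])
              - pair_energy(positions[i], positions[j]))
        p_bond = max(0.0, 1.0 - np.exp(-beta * du))
        if rng.random() < p_bond:
            return False                 # j blocks the move: reject immediately
    positions[i] = r_trial               # no bonds formed: accept the move
    return True
```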

13.4.4 Beyond detailed-balance

Thus far, we have considered Monte Carlo algorithms that were constructed by imposing irreducibility and detailed balance (Eq. (3.2.7)). However, as argued in Chapter 3, valid MC algorithms must obey balance (Eq. (3.2.6)): detailed balance is overkill. Most currently used MC algorithms satisfy detailed balance, rather than balance, because proving that an algorithm satisfies balance is often subtle. However, a new class of MC algorithms has emerged, which do not satisfy detailed balance and, importantly, explore configuration space much more


efficiently than the Metropolis algorithm. Below we discuss the so-called Event Chain Monte Carlo (ECMC) simulations, in the form introduced by Krauth and co-authors [89,598–601], and a related approach proposed by Peters and de With [602].

Event-chain Monte Carlo

To explain the concept of balance in the context of the Event Chain Monte Carlo (ECMC) method, we first consider the example of a 1d hard-core system discussed in ref. [603]. We consider N identical hard particles with diameter σ on a line segment of length L, subject to periodic boundary conditions. The coordinates of the particles are denoted by x1, x2, · · · xi · · · , xN. Because of the hard-core exclusion, two particles cannot come closer than a distance σ. We also assume that particles cannot cross. Now consider the situation in which we attempt to move a randomly chosen particle by a random distance δx drawn from a probability distribution P(δx), with P(δx) = 0 for δx < 0. A necessary condition for an algorithm to maintain the Boltzmann distribution is that it satisfies the balance condition (Eq. (3.2.6)):

Σ_{o′} N(o′) π(o′ → n) = N(n) Σ_{o′} π(n → o′),     (13.4.26)

which is a weaker condition than the detailed-balance condition, given by equation (3.2.7). Note that in Eq. (13.4.26), the set of states feeding into a given state n need not be the same as the states that can be reached from state n. In the specific case of hard-core interactions, the Boltzmann weights of all permissible (non-overlapping) configurations are the same, hence in that case balance implies that

Σ_{o′} π(o′ → n) = 1,     (13.4.27)

where we have used the fact that the probabilities to carry out any of the permitted moves (including rejected moves) must add to one. In the case of N hard particles, we can for instance select any one of the particles with a probability 1/N and attempt to move it in the positive x-direction (because we have chosen δx > 0). Such an algorithm clearly does not satisfy detailed balance because the reverse move (δx < 0) is excluded. A given trial move, say of particle i from position x′i ≡ xi − δx to the (permissible) position xi, will be accepted if x′i was at least a distance σ to the right of particle i − 1. But if (and only if) this condition is not satisfied, then a trial move of the particle at xi−1 over a distance δx would be rejected because it would create an overlap with the particle at xi (we count particles modulo N). In that case, the move is rejected and all particles stay at the original configuration. However, in Monte Carlo terms, a rejected move is also a move (namely from o to o). Together, the probability of a move into the


configuration x1, x2, · · · xi · · · , xN is then

(1/N) Σ_i [acc(x′i → xi) + (1 − acc(xi−1 → xi))] = 1,     (13.4.28)

because every term in the sum is equal to 1/N. Note, however, that the different terms in the sum correspond to different old configurations: for the trial move (x′i → xi), the old configuration is x1, x2, · · · xi − δx · · · , xN, whereas for the trial move (xi−1 → xi), the old configuration is equal to the new configuration x1, x2, · · · xi · · · , xN. In the example discussed above, we considered only one size δx for the trial move. However, in order to ensure ergodicity, we need a distribution p(δx) of step sizes, with lim_{δx→0} p(δx) = 0. As discussed in ref. [603], it can be shown in a number of cases that the irreversible MC algorithm samples configuration space more efficiently than the Metropolis algorithm. However, the irreversible MC algorithm can be made even more powerful by combining it with another trick called "lifting". To explain lifting, we will introduce it using somewhat less mathematical language than ref. [603]. Up to this point, we have assumed that the particle to be moved in an MC algorithm is selected at random (see section 3.3.1). However, this is not necessary to satisfy the balance condition. We could, for instance, attribute an additional label ω to the particles in the simulation, where ω can have the values 0 and 1. Trial moves will only be attempted for a particle i if ωi = 1. At any time only one particle has ω = 1. We now add a rule to our MC algorithm: if the trial move of particle i is rejected because the move would lead to an overlap with particle j, then we set ωj = 1 and ωi = 0. Hence, in the next move, we will attempt to move particle j. Does this rule satisfy balance? In fact, it follows directly from the discussion above of the irreversible MC algorithm for a one-dimensional hard-core system. Consider that the "new" state is one where particle i at xi is mobile (ωi = 1). As before, we simplify things by considering only trial moves of length +δx. How could the system have ended up in the state xi, ωi = 1? There are now only two possibilities: particle i came from a position xi − δx and had ωi = 1, or a trial move of particle i − 1 with ωi−1 = 1 was rejected, and the label ω = 1 was moved to i. But we already know that the probability that the first move is accepted plus the probability that the second move is rejected must add up to one. Hence, this simple lifting algorithm satisfies balance. We can now carry out the next trial move for the currently active particle, and so on. To ensure that the algorithm is ergodic and satisfies global balance, we should randomly select a new active particle after a given number of steps, say M. Of particular importance is the case where δx → 0 and M → ∞ such that Mδx ≡ λ remains finite. In this case, the moves in the ECMC algorithm are like a relay race: a particle moves continuously until it hits another particle: it then stops and passes the "baton" (i.e., the label ω = 1) to the particle that stopped it. This process continues until the total distance covered by the successive active


particles equals λ, after which a new active particle is chosen at random. The optimal choice of the distribution of λ-values is discussed in ref. [604]. Intuitively, it is easy to understand why this algorithm may lead to more rapid equilibration than the Metropolis algorithm: in the Metropolis algorithm, density heterogeneities decay by diffusion, which is slow in regions of high density. In contrast, in the ECMC algorithm, a particle that hits a high-density region can lead to a sound-like propagation through that region, so that another particle comes off at the other end of the high-density obstacle. Clearly, if we would also allow ECMC moves in the reverse direction, the motion would become more diffusive (although not as bad as Metropolis) and would lead to slower decay of density fluctuations. In higher dimensions, most of the above arguments remain unchanged, except that in that case, it is important to have unidirectional moves in all spatial dimensions.

The next advance in ECMC algorithms was the extension of the method of [598] to systems with continuous interactions (early attempts are described in [600,602]). In the case of continuous interactions, the rejection of a trial move of a particle i cannot be attributed uniquely to the interaction with a single other particle. The way to resolve this problem is described in some detail in refs. [599,603]: here we give a very condensed description. The crucial point to note is that if we decompose the move of a particle i into infinitesimal steps, then for every step the probability to reject the move is non-zero as long as the interaction with another particle j increases (if the interaction energy with j decreases, then j cannot stop the motion of i). Of course, the energy increase due to an infinitesimal step is itself infinitesimal, and so is the rejection probability. However, we can now use Eq. (13.4.21) to compute the probability that the interaction with particle j could lead to the rejection of a move after a displacement s* of particle i in the chosen direction ê. It is

Preject(i; j) = 1 − exp{−β ∫₀^{s*_ij} ds [∂u(rj − ri − s ê)/∂s]⁺},     (13.4.29)

where rj − ri is the vectorial distance between i and j at the beginning of the move, and the superscript "+" indicates that we only consider those contributions for which the pair interaction is increasing with increasing s. Using a slight generalization of Eq. (13.3.5) we can then generate values of s*_ij according to a distribution that gives the correct rejection probability, by drawing a random number 0 < R ≤ 1 and evaluating the s*_ij for which

β ∫₀^{s*_ij} ds [∂u(rj − ri − s ê)/∂s]⁺ = − ln R.     (13.4.30)

To evaluate the distance s*_ij, the use of the parsimonious Metropolis algorithm of section 3.2.2 is advantageous.
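When Eq. (13.4.30) cannot be inverted analytically, the blocking distance can also be generated numerically. The sketch below is illustrative only; `u_of_s` is an assumed user-supplied function giving the pair energy as a function of the displacement s of the active particle along ê. It simply accumulates the positive part of the energy change until it exhausts the budget −(ln R)/β:

```python
import numpy as np

rng = np.random.default_rng()

def blocking_distance(u_of_s, beta, s_max, ds=1.0e-3):
    """Draw s*_ij from Eq. (13.4.30) for one pair (i, j) by numerical integration."""
    budget = -np.log(rng.random()) / beta   # -(ln R)/beta
    consumed = 0.0
    s, u_prev = 0.0, u_of_s(0.0)
    while s < s_max:
        s += ds
        u = u_of_s(s)
        consumed += max(0.0, u - u_prev)    # only increases of u (the "+" part) count
        u_prev = u
        if consumed >= budget:
            return s                        # particle j would stop particle i here
    return np.inf                           # j cannot block the move within s_max
```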


For many simple forms of the intermolecular potential, Eq. (13.4.30) can be solved analytically (or very nearly so). Of course, it is possible that this equation has no solution (namely when the potential energy decreases, or when the integral is less than −ln R even for s*_ij → ∞). In such cases, particle j cannot block the move of i. The calculation of s*_ij must be carried out for all potential interaction partners j of particle i. As explained below, in practice, the calculation usually can be done without explicitly considering particles j that are not very close to i and therefore unlikely to block the move of i.

Cell-veto method

An elegant way to avoid having to consider all pairs ij explicitly is the cell-veto method of ref. [601], explained in more detail in ref. [89]. In this approach, the system is divided up into cells that are chosen small enough that they will typically contain at most one particle.⁴ We consider the case that we have chosen to move particle i. We now explicitly consider all other particles j that are in the cells surrounding the cell of i. However, for all other, more distant, cells we do something else: we compute the maximum possible value of the "rejection rate"

q(r_{Si}, r_{Sj}) ≡ [∂u(r_{Sj} − r_{Si} − s ê)/∂s]⁺

for a particle on the surface (Si) of Ci, the cell of i, interacting with another particle on the surface (Sj) of Cj, the cell of j. Note that the real rejection rate for particle i anywhere in Ci must be lower. To compute this maximum rejection rate, qmax(Ci, Cj), we must scan the surfaces of cells Ci and Cj to find the pair of points that maximizes the rejection rate, but we do this only once, at the beginning of the simulation. For a displacement of particle i by an amount s, the rejection probability is then bounded from above by

preject(s; Ci, Cj) = 1 − exp[−s qmax(Ci, Cj)].     (13.4.31)

For every cell Cj, we can now compute qmax(Ci, Cj), the upper limit to the rate at which a particle in cell Cj can block the move of particle i. Next, we can sum the maximum rejection rates of all these cells, to obtain

Qmax ≡ Σ_{Cj} qmax(Ci, Cj),

the upper bound to the rejection rate of a move of particle i anywhere in cell Ci (note that Qmax is the same for all cells: it only depends on the form of the interaction potential and the lattice geometry). The upper bound to the cell rejection probability for a displacement over a distance s is then 1 − exp[−sQmax].

⁴ Ref. [601] explains what to do if, nevertheless, a cell contains more than one particle.


We can now proceed as before: we use Eq. (13.4.30) to compute s*_ij for all (nearby) particles j that are considered explicitly, and s*(cells) for the distance at which the move might be blocked by any of the other particles. If the smallest s* corresponds to any of the particles j considered explicitly, we carry out the displacement and make particle j active. However, if the smallest s* is due to Qmax, then we have to select one of the cells (say k) with a probability

P(k) = qmax(Ci, Ck) / Σ_{Cj} qmax(Ci, Cj).

This cell-selection step can be carried out in O(1) time, using Walker's algorithm [571]. If there is a particle (say k) in the cell, we first use Eq. (13.4.29) to compute the real probability that this particle would block a move of length s* of particle i. Particle i is then blocked by particle k with probability

Preject(i; k) / qmax(Ci, Ck).

If the further movement of i is rejected by k, then i is moved over a distance s* and k becomes the active particle. However, if the move is not blocked by particle k, then particle i is moved over a distance s*, but remains the active one. One special case is when s* would move particle i out of its cell: in that case, i is only moved to (or better, just across) the cell boundary, but it remains the active particle. One point should be stressed: for the cell-veto method all periodic images of a particle k should be "folded" into the image of k nearest to i. For details, see [89,601]. Of particular importance is the fact that the cell-veto method can be used to simulate systems with very long-ranged (e.g., Coulomb) interactions, where at every step, the interaction need only be computed for one particle pair (see section 11.6.2). For more details, we refer the reader to [89].
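Walker's alias method, referred to above both for selecting KMC events and for the O(1) cell-selection step, is straightforward to implement. The following sketch (ours, not the book's) sets up the alias table once and then draws indices in constant time:

```python
import numpy as np

rng = np.random.default_rng()

def build_alias_table(weights):
    """Preprocess a set of non-negative weights into Walker's alias table."""
    w = np.asarray(weights, dtype=float)
    n = len(w)
    prob = w * n / w.sum()                 # scaled so that the average entry is 1
    alias = np.zeros(n, dtype=int)
    small = [i for i, p in enumerate(prob) if p < 1.0]
    large = [i for i, p in enumerate(prob) if p >= 1.0]
    while small and large:
        s, l = small.pop(), large.pop()
        alias[s] = l                       # the excess of column l tops up column s
        prob[l] -= 1.0 - prob[s]
        (small if prob[l] < 1.0 else large).append(l)
    return prob, alias

def alias_draw(prob, alias):
    """Draw one index with probability proportional to its weight, in O(1) time."""
    i = rng.integers(len(prob))
    return i if rng.random() < prob[i] else alias[i]
```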


Chapter 14

Time-scale-separation problems in MD

Molecules are made up of atoms. Hence, one might expect that Molecular Dynamics simulations of molecules can be performed using the algorithms used to simulate nonbonded atomic systems, as long as it is legitimate to ignore the quantum nature of intramolecular motions. In practice, it is usually not advisable to integrate the equations of motion of internal modes in molecules employing the same algorithms used for simulating unbonded atoms. The reason is that the characteristic time scales associated with intramolecular motions are typically 10–50 times shorter than the typical decorrelation time of the translational velocity of the same molecule in a liquid. In a Molecular Dynamics simulation, time steps should be chosen such that they are appreciably shorter than the shortest relevant time scale in the simulation. If we simulate the intramolecular dynamics of molecules explicitly, this implies that our time step should be shorter than the period of the highest-frequency intramolecular vibration. This condition would make the simulation of molecular substances very time-consuming. Several techniques for tackling this problem have been developed. Here, we will discuss three approaches: constraints, extended Lagrangians, and multiple-time-step simulations. Multiple-time-scale Molecular Dynamics [117] is based on the observation that forces associated with a high-frequency intramolecular vibration can be integrated efficiently with a time step that is different from the one used to integrate the intermolecular vibrations. An alternative is to treat the bonds (and, sometimes, bond angles) in molecules as rigid. The Molecular Dynamics equations of motion are then solved under the constraint that the rigid bonds and bond angles do not change during our simulation. This procedure should eliminate the highest-frequency modes in the dynamics: the motion associated with the remaining degrees of freedom is presumably slower, and hence we can again use a long time step in our simulations. Below, we briefly explain how such constraints are implemented in a Molecular Dynamics simulation. In addition, we illustrate how extended Lagrangians can be used in "on-the-fly" optimization problems. The foremost example of a technique that uses an extended Lagrangian for this purpose is the original Car-Parrinello "ab-initio" MD method [605]. We shall not discuss this technique because quantum simulations fall outside the scope of this book. Rather, we shall illustrate the

493

494 PART | IV Advanced techniques

application of the extended Lagrangian approach in the Car-Parrinello method to a purely classical optimization problem.

14.1 Constraints Constraints on the classical equations of motion are best expressed in the language of Lagrangian dynamics (see [54] and Appendix A). To get a feel for the way in which constrained dynamics works, let us consider a simple example, namely a single particle that is constrained to move on the surface of a 3d sphere with radius d. The constraint is then of the form: f (x, y, z) ≡ x 2 + y 2 + z2 − d 2 = 0.1 The Lagrangian equations of motion for the unconstrained particle are (see Appendix A) ∂ ∂L ∂L = . ∂t ∂ q˙ ∂q

(14.1.1)

As the Lagrangian, L, is equal to Kkin − Upot , the equation of motion for the unconstrained particle is ∂U . mq¨ = − ∂q Now, suppose that we start with a particle located on the surface f (x, y, z) = 0 and moving initially tangentially to the constraint surface, that is: f˙ = q˙ · ∇f = 0. Without any constraints, the particle would move away from the surface of the sphere, and its velocity would no longer be tangential to the constraint surface. To keep the particle on the constraint surface, we now apply a fictitious force (the constraint force) in such a way that the new velocity is again perpendicular to ∇f . In a many-body simulation, the dynamics often must satisfy many constraints simultaneously (e.g., many bond lengths). Let us denote the functions describing these constraints by σ1 , σ2 , · · · . For instance, σ1 may be a function that is equal to 0 when atoms i and j are at a fixed distance dij : σ1 (ri , rj ) = rij2 − dij2 . We now introduce a new Lagrangian L that contains all the constraints:  L = L − λα σα (rN ), α

where α denotes the set of constraints and λα denotes a set of (as yet undetermined) Lagrange multipliers. The equations of motion corresponding to this new Lagrangian are 1 A constraint that can be written as a relation between the particle coordinates of the form f (rN ) =

0, is called a holonomic constraint.

Time-scale-separation problems in MD Chapter | 14 495

∂ ∂L ∂L = ∂t ∂ q˙ ∂q

(14.1.2)

or ∂U  ∂σα − λα ∂qi ∂qi α  ≡ Fi + Gi (α).

mi q¨i = −

(14.1.3)

α

The last line of this equation defines the constraint force Gα . To solve for the set λα , we require that the second derivatives of all σα vanish (our initial conditions were chosen such that the first derivatives vanished): ∂ σ˙ α ∂ q∇σ ˙ α = ∂t ∂t = q∇σ ¨ α + q˙ q˙ : ∇∇σα = 0.

(14.1.4)

Using Eq. (14.1.3), we can rewrite this equation as ⎡ ⎤   ∂ σ˙ α  1 ⎣ Fi + Gi (β)⎦ ∇i σα + q˙i q˙j ∇i ∇j σα = ∂t mi i

β

i,j

 1  1   = F i ∇i σ α − λ β ∇i σ β ∇i σ α + q˙i q˙j ∇i ∇j σα mi mi i

i

β

≡ Fα − β Mαβ + Tα = 0.

i,j

(14.1.5)

In the last line of Eq. (14.1.5), we have written the equation on the previous line in matrix notation. The formal solution of this equation is  = M−1 (F + T ).

(14.1.6)

This formal solution of the Lagrangian equations of motion in the presence of constraints is, unfortunately, of little practical use. The reason is that, in a simulation, we do not solve differential equations but difference equations. Hence there is little point in going through the (time-consuming) matrix inversion needed for the exact solution of the differential equation, because this procedure does not guarantee that the constraints also will be accurately satisfied in the solution of the difference equation. Before proceeding, let us again consider the simple example of a particle moving on the surface of a sphere with radius d. In that case, we can write our constraint function σ as  1 2 σ= r − d2 . 2

496 PART | IV Advanced techniques

The factor 1/2 has been included to make the following equations simpler. The constraint force G is equal to G = −λ∇σ = −λr. To solve for λ, we impose σ¨ = 0: ∂t σ˙ = ∂t (˙r · r) = (¨r · r) + r˙ 2 = 0.

(14.1.7)

The Lagrangian equation of motion is 1 (F + G) m 1 = (F − λr). m

r¨ =

(14.1.8)

For convenience, we assume that no external forces are acting on the particle (F = 0). Combining Eqs. (14.1.7) and (14.1.8), we obtain −

λ 2 r + r˙ 2 = 0. m

(14.1.9)

Hence λ=

m˙r 2 , r2

and the constraint force G is equal to G = −λr = −

m˙r 2 r. r2

Recall that, on the surface of a sphere, the velocity r˙ is simply equal to ωr. Hence we can also write the constraint force as G = −mω2 r, which is the well-known expression for the centripetal force. This simple example helps us to understand what goes wrong if we insert the preceding expression for the constraint force into an MD algorithm, e.g., in the Verlet scheme. In the absence of external forces, we would get the following algorithm for a particle on the surface of a sphere: r(t + t) = 2r(t) − r(t − t) − ω2 t 2 r(t). How well is the constraint r 2 = d 2 satisfied? To get an impression, we work out the expression for r 2 after one time-step. Assuming that the constraint was

Time-scale-separation problems in MD Chapter | 14 497

satisfied at t = 0 and at t = − t, we find that, at t = t,

r 2 (t + t) = d 2 5 + (ω t)4 − 4(ω t)2 + cos(ω t)[2(ω t)2 − 4] (ω t)4 2 6 + O( t ) . ≈d 1− 6 At first sight, this looks reasonable, and the constraint violation is of order ( t)4 , as is to be expected for the Verlet scheme. However, whereas for centerof-mass motion, we do not worry too much about errors of this order in the trajectories, we should worry in the case of constraints. In the case of translational motion, we argued that two trajectories that are initially close but subsequently diverge exponentially may still both be representative of the true trajectories of the particles in the system. However, if we find that, due to small errors in the integration of the equations of motion, the numerical trajectories diverge exponentially from the constraint surface, then we are in deep trouble. The conclusion is that we should not rely on our algorithm to satisfy the constraints (although, in fact, for the particle on a sphere, the Verlet algorithm performs remarkably well). We should construct our algorithm such that the constraints are rigorously obeyed. The most straightforward solution to this problem is not to fix the Lagrange multiplier λ by the condition that the second derivative of the constraint vanishes but by the condition that the constraint is exactly satisfied after one time-step. In the case of the particle on a sphere, this approach would work as follows. The equation for the position at time t + t in the presence of the constraint force is given by r(t + t) = 2r(t) − r(t − t) − = ru (t + t) −

λ r(t) m

λ r(t), m

where ru (t + t) denotes the new position of the particle in the absence of the constraint force. We now impose that the constraint r 2 = d 2 is satisfied at t + t: 2 λ d 2 = ru (t + t) − r(t) m = ru2 (t

2 2λ λ r(t) . + t) − r(t) · ru (t + t) + m m

This expression is a quadratic equation in λ,

λ d m

2 −

2λ r(t) · ru (t + t) + r2u (t + t) − d 2 = 0, m

498 PART | IV Advanced techniques

and the solution is λ=

r(t) · ru (t + t) −



[r(t) · ru (t + t)]2 − d 2 [ru2 (t + t) − d 2 ] . d 2 /m

For the trivial case of a particle on a spherical surface, this approach clearly will work. However, for a large number of constraints, it will become difficult, or even impossible, to solve the quadratic constraint equations analytically. Why this is so can be seen by considering the form of the Verlet algorithm in the presence of constraints: rconstrained (t i

+ t) = runconstrained (t) − i

t 2  λk ∇ i σk (t). mi

(14.1.10)

k=1

If we satisfy the constraints at time t + t, then σkc (t + t) = 0. But if the system would move along the unconstrained trajectory, the constraints would not be satisfied at t + t. We assume that we can perform a Taylor expansion of the constraints:  N    ∂σk c u σk (t + t) = σk (t + t) + · rci (t + t) − rui (t + t) u ∂ri r (t+ t) i=1

i

+ O( t 4 ).

(14.1.11)

If we insert Eq. (14.1.10) for riu − ric in Eq. (14.1.11), we get σku (t + t) =

N  t 2  i=1

mi

∇ i σk (t + t)∇ i σk  (t)λk  .

(14.1.12)

k  =1

We note that Eq. (14.1.12) has the structure of a matrix equation: σ u (t + t) = t 2 M.

(14.1.13)

By inverting the matrix, we can solve for the vector . However, as we had truncated the Taylor expansion in Eq. (14.1.11), we should then compute the σ ’s at the corrected positions, and iterate the preceding equations until convergence is reached. Although the approach sketched here will work, it is not computationally cheap because it requires a matrix inversion at every iteration. In practice, therefore, one often uses a simpler iterative scheme to satisfy the constraints. In this scheme, called SHAKE [606], the iterative scheme just sketched is not applied to all constraints simultaneously but to each constraint in succession. To be more precise, we use the Taylor expansion of Eq. (14.1.11) for σk , but then we approximate ric − riu as rci (t + t) − rui (t) ≈ −

t 2 λk ∇ i σk (t). mi

(14.1.14)

Time-scale-separation problems in MD Chapter | 14 499

If we insert Eq. (14.1.14) in Eq. (14.1.11), we get σku (t + t) = t 2 λk

N  1 ∇ i σk (t + t)∇ i σk (t), mi

(14.1.15)

i=1

and hence our estimate for λk is λk t 2 = N

σku (t + t)

1 i=1 mi ∇ i σk (t

+ t)∇ i σk (t)

.

(14.1.16)

In a simulation, we treat all constraints in succession during one cycle of the iteration and then repeat the process until all constraints have converged to within the desired accuracy. The above implementation of constrained dynamics is based on the normal (position) Verlet algorithm (Eq. (4.2.3)). Andersen [607] has shown how to impose constraints in MD simulations that use the Velocity Verlet algorithm (Eqs. (4.3.4) and (4.3.5)), and De Leeuw et al. [608] showed how the problem of constrained dynamics can be cast in Hamiltonian form. Computing the derivatives of a constraint is not always enjoyable, in particular when the quantity that is being constrained is a complicated function of the coordinates of many particles, as can happen when constraining an order parameter (see section 15.2.1). In such cases, using Automatic Differentiation (see e.g., [108]) may become advantageous.

14.1.1 Constrained and unconstrained averages Thus far, we have presented constrained dynamics as a convenient scheme for modeling the motion of molecules with stiff internal bonds. The advantage of using constrained dynamics was that we could use a longer time step in our Molecular Dynamics algorithm when the high-frequency vibrations associated with the stiff degrees of freedom were eliminated. However, somewhat surprisingly, the results of a constrained simulation depend on how the constraints are imposed: as was pointed out by Fixman [97], a simulation that uses hard constraints that are introduced in the Lagrangian equations of motion, does not yield the same averages as a simulation, where the constraints are represented by arbitrarily stiff but non-rigid bonds. Below we reproduce an argument by Van Kampen [98], showing that different expressions for the bond-angle distribution of a fully flexible trimer (see Fig. 14.1) are obtained in the case of hard and soft constraints. We wish to fix the bond lengths r12 and r23 . We can do this in two ways. One 2 = d 2 and r 2 = d 2 in the Lagrangian equations is to impose the constraints r12 23 of motion of the trimer. The other is to link the atoms in the trimer by harmonic springs, such that  α (r12 − d)2 + (r23 − d)2 . UHarmonic = 2

500 PART | IV Advanced techniques

FIGURE 14.1 Symmetric trimer with bond length d and internal bond angle ψ. (Left) Bonds are represented by an infinitely stiff spring; (right) bonds are represented by hard constraints in the Lagrangian equations of motion of the trimer.

Intuitively, one might expect that the limit α → ∞ would be equivalent to dynamics with hard constraints, but this is not so. In fact, if we look at P (ψ), the distribution of the internal angle ψ, we find that P (ψ) = c sin ψ

 P (ψ) = c sin ψ 1 − (cos ψ)2 /4

(Harmonic forces) (14.1.17) (Hard constraints).

Next, we sketch the origins of this difference in the behavior of “hard” and “soft” constraints. To this end, we start with the Lagrangian of the system, L = K − U. Thus far, we had expressed the kinetic (K) and potential (U) energy of the system in terms of the Cartesian velocities and coordinates of the atoms. However, when we talk about bonds and bond angles or, for that matter, any other function of the coordinates that has to be kept constant, it is more convenient to use generalized coordinates, denoted by q. We choose our generalized coordinates such that every quantity we wish to constrain corresponds to a single generalized coordinate. We denote by qH the set of generalized coordinates that describes the quantities that are effectively, or rigorously, fixed. The remaining soft coordinates are denoted by qS . The potential energy function U is a function of both qH and qS : U(q) = U(qH , qS ). If we rigorously fix the hard coordinates such that qH = σ , then the potential energy is a function of qS , while it depends parametrically on σ : Uhard (qS ) = Usoft (σ , qS ). Let us now express the Lagrangian in terms of these generalized coordinates: 1 mi r˙i2 − U 2 N

L=

i=1

1 ∂ri ∂ri mi q˙α · q˙β − U 2 ∂qα ∂qβ N

=

i=1

1 ≡ q˙ · G · q˙ − U, 2

(14.1.18)

Time-scale-separation problems in MD Chapter | 14 501

where the last line of Eq. (14.1.18) defines the mass-weighted metric tensor G. We can now write the expression for the generalized momentum: pα ≡

∂L = Gαβ q˙β , ∂ q˙α

(14.1.19)

where summation of the repeated index β is implied. Next, we can write the Hamiltonian, H as a function of generalized coordinates and momenta: 1 H = p · G−1 · p + U(q). 2 Once we have the Hamiltonian, we can write an expression for the equilibrium phase space density that determines all thermal averages. Although one could write expressions for all averages in the microcanonical ensemble (constant N , V , E), this is, in fact, not very convenient. Hence, we shall consider canonical averages (constant N , V , T ). It is straightforward to write the expression for the canonical distribution function in terms of the generalized coordinates and momenta: ρ(p, q) = with

exp[−βH(p, q)] QN V T

(14.1.20)

 QN V T =

dpdq exp[−βH(p, q)].

(14.1.21)

The reason we can write Eq. (14.1.20) in this simple form is that the Jacobian of the transformation from Cartesian coordinates to generalized coordinates is 1. Let us now look at the canonical probability distribution function as a function of q only: 

dp exp{−β[p · G−1 · p/2 + U(q)]}  = c exp[−βU(q)] |G|,

ρ(q) = c

(14.1.22)

where |G| denotes the (absolute value of the) determinant of G and c and c are normalizing constants. Thus far, we have not mentioned constraints. We have simply transformed the canonical distribution function from one set of phase space coordinates to another. Clearly, the answers will not depend on our choice of these coordinates. But now we introduce constraints. That is, in our Lagrangian (14.1.18) we remove the contribution to the kinetic energy due to the dynamics of the hard coordinates; that is, we set q˙ H = 0, and in the potential energy function, we replace the coordinates qH by the parameters σ . The Lagrangian for the system

502 PART | IV Advanced techniques

with constraints is 1 mi r˙i2 − U 2 N

LH =

(14.1.23)

i=1

1 ∂ri ∂ri mi q˙αS S · S q˙βS − U(qS , σ ) 2 ∂qα ∂qβ i=1 N

=

1 ≡ q˙ S · GS · q˙ S − U(qS , σ ). 2 Note that the number of variables has decreased from 3N to 3N − , where is the number of constraints. The Hamiltonian of the constrained system is 1 S S HH = pS · G−1 S · p + U(q , σ ), 2 where pαS ≡

∂L . ∂ q˙αS

(14.1.24)

As before, we can write the phase space density. In this case, it is most convenient to write this density directly as a function of the generalized coordinates and momenta: ρ(pS , qS ) =

exp[−βH(pS , qS )] QSN V T

.

Let us now write the probability density in coordinate space:  ρ(qS ) = a dpS exp{−β[pS · GS · pS /2 + U(qS , σ )]}  = a  exp[−βU(qS , σ )] |GS |,

(14.1.25)

(14.1.26)

where a and a  are normalizing constants. Now compare this expression with the result that we would have obtained if we had applied very stiff springs to impose the constraints. In that case, we would have to use Eq. (14.1.22). For qH = σ , Eq. (14.1.22) predicts  (14.1.27) ρ(qS ) = c exp[−βU(qS , σ )] |G|, and this is not the same result as given by Eq. (14.1.26). Ignoring constant factors, the ratio of the probabilities in the constrained and unconstrained system is given by  ρ(qS ) |GS | = . S ρ(q , qH = σ ) |G|

Time-scale-separation problems in MD Chapter | 14 503

This implies that, if we do a simulation in a system with hard constraints, and we wish to predict average properties for the system with “stiff-spring”  constraints, then we must compute a weighted average with a weight factor |G|/|GS | to compensate for the bias in the distribution function of the constrained system. Fortunately, it is usually easier to compute the ratio |G|/|GS | than to compute |G| and |GS | individually. To see this, consider the inverse of G G−1 αβ =

N 

m−1 i

i=1

∂qα ∂qβ · . ∂ri ∂ri

It is easy to verify that this is indeed the inverse of G: Gαβ G−1 βγ

=

N 

mi

i,j =1

=

∂ri ∂ri ∂qβ ∂qγ −1 m ∂qα ∂qβ ∂rj ∂rj j

N  ∂ri ∂qγ ∂qα ∂ri i=1

= δαγ .

(14.1.28)

Now, let us write both the matrices G and G−1 in block form   GS ASH G= AH S AH H and

 G−1 =

BSS BH S

BSH H

(14.1.29)

 ,

(14.1.30)

where the subscripts S and H denote soft and hard coordinates, respectively. The submatrix H is simply that part of G−1 that is quadratic in the derivatives of the constraints: Hαβ =

N  i=1

m−1 i

∂σα ∂σβ . ∂ri ∂ri

Now we construct a matrix X as follows. We take the first 3N − columns of G and we complete it with the last columns of the unit matrix:   GS 0 X= . (14.1.31) AH S I From the block structure of X, it is obvious that the determinant of X is equal to the determinant of GS . Next, we multiply X with GG−1 , that is, with the unit

504 PART | IV Advanced techniques

matrix. Straightforward matrix multiplication shows that   I BSH −1 GG X = G . 0 H

(14.1.32)

Hence, |X| = |GS | = |GG−1 X| = |G||H|.

(14.1.33)

The final result is that |G| = |H|. |GS |

(14.1.34)

We therefore can write the following relation between the coordinate space densities of the constrained and unconstrained systems: 1

ρflex (q) = |H|− 2 ρhard (q).

(14.1.35)

The advantage of this expression is that we have expressed the ratio of the determinants of a 3N × 3N matrix and a 3N − × 3N − matrix, by the determinant of an × matrix. In many cases, this simplifies the calculation of the weight factor considerably. As a practical example, let us consider the case of the flexible trimer, discussed at the beginning of this section. We have two constraints: 2 − d2 = 0 σ1 = r12 2 σ2 = r23 − d 2 = 0.

If all three atoms have the same mass m, we can write |H| as 1 |H| = m



∂σ1 i ∂ri  ∂σ2 i ∂ri

∂σ1 ∂ri ∂σ1 ∂ri

 

∂σ1 i ∂ri ∂σ2 i ∂ri

∂σ2 ∂ri ∂σ2 ∂ri

.

Inserting the expressions for σ1 and σ2 , we find that |H| =

2 m

2 2r12

−r12 · r23

−r12 · r23

2 2r23

2 = r 2 = d 2 , we get Using the fact that r12 23 

1 8 2 2 2 r r − (r12 · r23 ) |H| = m 12 23 4

.

Time-scale-separation problems in MD Chapter | 14 505

=

 8d 4 cos2 ψ 1− . m 4

(14.1.36)

Finally, we recover Eq. (14.1.17) for the ratio of the probability densities for the constrained and unconstrained systems:  1 cos2 ψ ρflex . (14.1.37) = |H| 2 = c 1 − ρhard 4 This ratio varies between 1 and 0.866, that is, at most some 15%. It should be noted that, in general, the ratio depends on the masses of the particles that participate in the constraints. For instance, if the middle atom of our trimer is much √ lighter than the two end atoms, then |H| becomes 1 − cos2 ψ = | sin ψ| and the correction due to the presence of hard constraints is not small. However, to put things in perspective, we should add that, at least for bond-length constraints of the type most often used in Molecular Dynamics simulations, the effect of the hard constraints on the distribution functions appears to be relatively small. One obvious question is: which description is correct? Somewhat depressingly, for intra-molecular bonds, the answer is “neither”. The reason is that very stiff bonds tend to have a high vibration frequency and cannot be described by classical mechanics. In other cases (for instance, constraints on order parameters), the answer is: both methods can be used, as long as they are not mixed.

14.1.2 Beyond bond constraints The above discussion of constrained dynamics was fairly general, but it mostly focused on the situation where the constraints had a simple geometric interpretation in terms of bond lengths. There are many examples of constraints that do not have a simple geometric interpretation. A common example is when simulations are carried out under conditions where an order-parameter of reaction coordinate is kept fixed. This application is important in the context of studying the crossing of a free-energy barrier by Molecular Dynamics [609] (see Chapter 15). It is also useful when computing free-energy differences: as we change an order parameter that characterizes a system (say the total dipole moment) from QA to QB , the change in free-energy is equal to the reversible work that must be performed to change the order parameter against the conjugate constraint force f (Q):  QB dQ f (q) . F = wrev = − QA

In the previous section, we saw that there were two not completely equivalent methods to impose constraints: 1) by including holonomic constraints of the type σ = 0 in the Lagrangian equation of motion or 2) by approximating the constraint by stiff, usually harmonic, term (a “restraint”) in the Hamiltonian:

506 PART | IV Advanced techniques

H = Hunconstrained + (1/2)κσ 2 . In the second case, small oscillations along the restraint direction are still possible, and they have an associated kinetic energy. In fact, equipartition fixes the average kinetic energy at 1/2kB T per restrained degree of freedom. And for harmonic restraints, the average potential energy associated with the restraint is also determined by equipartition. As such stiff harmonic restraints thermalize very slowly, it is often necessary to couple the constrained degrees of freedom to a separate thermostat. Moreover, it is often convenient to set the temperature of this thermostat low, to minimize the fluctuations in the constraints. A prototypical example of constraints that have no simple geometric interpretation appeared in the original Car-Parrinello scheme for ab-initio Molecular Dynamics [605]. In standard ab-initio MD, the Density Functional Theory (DFT) estimate of the electronic energy should be at a minimum. In DFT, the electronic energy depends parametrically on the coefficients (e.g., plane-wave amplitudes) that characterize the Kohn-Sham orbitals. Clearly, these amplitudes have no simple geometrical interpretation, but they are fixed by the condition that the Kohn-Sham orbitals are orthonormal, that the energy is at a minimum, and that the integrated electronic density is constant. In the early Car-Parrinello approach, part of these constraints —namely those that keep the Kohn-Sham orbitals orthonormal —were implemented by including them as holonomic constraints in the Lagrangian equations of motion. However, restraining the energy to be near a minimum was imposed by treating the plane-wave amplitudes as coordinates, with a (fictitious) mass and an associated momentum. As a result, the system is never exactly in its DFT ground state, but close. In this respect, Car and Parrinello’s method was similar to the methods used by Andersen [607] and Nosé [248]: it uses an extended Lagrangian rather than a holonomic constraint on the Lagrangian equations of motion. However, we stress that it is but one choice: the alternative is to use holonomic constraints [610]. Below, we briefly discuss the Car-Parrinello-style approach to approximate complex constraints “on the fly”, as this approach is widely used for classical applications. For more details on the Car-Parrinello method for electronic structure calculations, we refer the reader to the book by Marx and Hutter [611] and the many early reviews that have been written on the subject (see e.g., [612–614]).

14.2 On-the-fly optimization In the Car-Parrinello method, the electronic density fluctuates around its optimal (adiabatic) value. Even though at every step the system is not exactly in its electronic ground state, the electrons do not exert a systematic drag force on the nuclei, hence the slower nuclear dynamics is still correct. A close classical analog of “ab initio” Molecular Dynamics is the method developed by Löwen et al. [615,616] to simulate counterion screening in colloidal suspensions of polyelectrolytes. In the approach of [615], the counterions

Time-scale-separation problems in MD Chapter | 14 507

are described by classical density-functional theory and an extended Lagrangian method is used to keep the free energy of the counterions close to its minimum. Here we consider a somewhat simpler application of the Car-Parrinello approach to a classical system. As before, the aim of the method is to replace the iterative optimization procedure with an extended dynamical scheme. As a specific example, we consider a fluid of point-polarizable molecules. The molecules have a static charge distribution that we leave unspecified (for instance, we could be dealing with ions, dipoles, or quadrupoles). We denote the polarizability of the molecules by α. The total energy of this system is given by U = U0 + Upol , where U0 is the part of the potential energy that does not involve polarization. The induction energy, Upol , is given by [617] Upol = −



Ei · μi +

i

1  (μi )2 , 2α i

where Ei is the local electric field acting on particle i and μi is the dipole induced on particle i by this electric field. Of course, the local field depends on the values of all other charges in the system. For instance, in the case of dipolar molecules, Ei = Tij · μtot j , where Tij is the dipole-dipole tensor and μtot j is the total (i.e., permanent plus induced) dipole moment of molecule j . We assume that the induced dipoles follow the nuclear motion adiabatically and that Upol is always at its minimum. Minimizing Upol with respect to the μi yields μi = αEi .

(14.2.1)

Hence, to properly account for the molecular polarizability of an N -particle system, we would have to solve a set of 3N linear equations at every time step. If we solve this set of equations iteratively, we must make sure that the solution has fully converged, because otherwise, the local field will exert a systematic drag force on the induced dipoles and the system will fail to conserve energy. Now let us consider the Car-Parrinello approach to this optimization problem. The application of this extended Lagrangian method to polarizable molecules has been proposed by Rahman and co-workers [618] and by Sprik and Klein [619]. A closely related approach was subsequently advocated by Wilson and Madden [620]. The basic idea is to treat the magnitude of the induced dipoles as additional dynamical variables in the Lagrangian: 1 2 1 ˙ 2i − U, m˙ri + Mμ L(r , μ ) = 2 2 N

N

N

i=1

i=1

N

(14.2.2)

508 PART | IV Advanced techniques

where M is the mass associated with the motion of the dipoles. This Lagrangian yields the following equations of motion for the dipole moments: ¨i ≡ Mμ

∂L μ = − i + Ei . ∂μi α

The right-hand side of this equation can be considered as a generalized force that acts on the dipoles. In the limit that this force is exactly zero the iterative scheme is recovered. If the temperature associated with the kinetic energy of the dipoles is sufficiently low, the dipoles will fluctuate around their lowest-energy configuration. More importantly, there will be no systematic drag force on the dipoles, and hence the energy of the system will not drift. To make sure that the induced dipoles are indeed close to their ground-state configuration, we should keep the temperature of the induced-dipole degrees of freedom low. Yet, at the same time, the dipoles should be able to adapt rapidly (adiabatically) to changes in the nuclear coordinates to ensure that the condition of minimum energy is maintained during the simulation. This implies that the masses associated with the induced dipoles should be small. In summary, we require that T μ Tr M m, where the temperature of the induced dipoles is defined as 1 ˙ 2i , Mμ 2 N

Tμ =

i=1

while the translational temperature is related in the usual way to the kinetic energy 1 2 m˙ri . 2 N

Tr =

i=1

The condition that the temperature of the induced dipoles should be much lower than the translational temperature seems to create a problem because, in an ordinary simulation, the coupling between induced-dipole moments and translational motion leads to heat exchange. This heat exchange will continue until the temperature of the induced dipoles equals the translational temperature. Hence, it would seem that we cannot fix the temperature of the induced dipoles independent of the translational temperature. However, here we can again make use of thermostats. Sprik and Klein [619] showed that one can use two separate Nosé-Hoover thermostats to impose the temperature of the positions and to impose the (low) temperature of the polarization [621]. The mass M associated with the induced dipoles should be chosen such that the relaxation time of the

Time-scale-separation problems in MD Chapter | 14 509

polarization is on the same order of magnitude as the fastest relaxation in the liquid. As mentioned above, the extended Lagrangian approach is but one way to address the problem of complex “non-geometric” constraints. An alternative approach has been proposed by Coretti et al. [622] and Bonella et al. [610]. Although the latter approach starts from the extended-Lagrangian picture, it is different in that it considers the limit in which the mass associated with the dynamics of the restrained variables goes to zero. In this limit, the restraint becomes a constraint, and the usual constraint techniques (e.g., SHAKE) are used to maintain the constraint. An obvious advantage of this approach that there is no need for thermostatting the dynamics of the unphysical coordinates: they are rigorously contrained by the physical coordinates. It would seem that many of the applications that now use extended Lagrangians could be recast in the form that uses restraint dynamics with zero mass.

14.3

Multiple time-step approach

An alternative scheme for dealing with the high-frequency vibrational modes of polyatomic molecules is based on the Trotter expansion Liouville representation of the classical equations of motion (Eq. (4.3.18)). The idea here is not just to separate the propagation of the coordinates and the momenta, but also to decompose the propagation of the high-frequency modes into many shorter time steps, whilst maintaining a longer time step for the lower-frequency modes. To achieve this separation, we separate the force on a particle into two parts: F = Fshort + Flong . This division is arbitrary, but for our diatomic molecule we could divide the potential into the short-range interactions that are responsible for the bond vibration and the long-range attractive forces between the atoms. The idea is that on the time scale of the vibrations of the atoms, the long-range part of the potential hardly changes and therefore this “expensive potential” does not need to be updated as often as the “cheap” short-range part of the potential. This suggests using multiple time steps, a short time step for the vibration and a much longer one for the remainder of the interactions. Martyna et al. [126] used the Liouville formalism to solve the equations of motion using multiple time steps. In our discussion of the approach, we consider the N V E ensemble. For details on how to use multiple time steps in other ensembles, we refer to [126]. Let us start with the simple case and derive the equations of motion for a single particle with force F . The Liouville operator (iL) for this system is Eq. (4.3.12): iL = iLr + iLp ∂ F ∂ =v + . ∂r m ∂v

510 PART | IV Advanced techniques

The equations of motion follow from applying the Trotter formula (4.3.18) with time step t: eiL t ≈ eiLp t/2 eiLr t eiLp t/2 . The position and the velocity at time t follow from applying the Liouville operator under the initial condition (r(0), v(0)). As shown in section (4.3.4), iLr t corresponds to a shift in coordinates and iLp t to a shift in momenta. If we perform these operations in three steps, we obtain eiL t f [˙r(0), r(0)] = eiLp t/2 eiLr t eiLp t/2 f [˙r(0), r(0)] = eiLp t/2 eiLr t f [˙r(0) + F(0) t/2m, r(0)] = eiLp t/2 f [˙r(0) + F(0) t/2m, r(0) + r˙ ( t/2) t] = f [˙r(0) + F(0) t/2m + F( t) t/2m, r(0) + r˙ ( /2) t] . The equations of motion that follow are t [F(0) + F( t)] 2m r( t) = r(0) + r˙ ( /2) t, r˙ ( t) = r˙ (0) +

which the reader will recognize as the velocity Verlet equations (see section 4.3.4). Example 24 (Multiple time step versus constraints). In this Example, we consider a system of diatomic Lennard-Jones molecules. We compare two models: the first model uses a fixed bond length l0 between the two atoms of a molecule. In the second model, we use a bond-stretching potential given by 1 Ubond (l) = kb (l − l0 )2 , 2 where l is the distance between the two atoms in a molecule. In the simulations we used kb = 50000 and l0 = 1. In addition to the bond-stretching potential, all nonbonded atoms interact via a Lennard-Jones potential. The total number of diatomics was 125 and the box length 7.0 (in the usual reduced units). The Lennard-Jones potential was truncated at rc = 3.0, while T = 3.0. The equations of motion are solved using bond constraints for the first model, while multiple time steps were used for the second model. All simulations were performed in the N V E ensemble. It is interesting to compare the maximum time steps that can be used to solve the equations of motion for these two methods. As a measure of the accuracy with which the equations of motion are solved, we compute the average deviation of the initial energy, which is defined by Martyna et al.

Time-scale-separation problems in MD Chapter | 14 511

[623] as E=

1 Nstep

  E (i t) − E (0)  ,    E (0)

Nstep  i=1

in which E (i) is the total energy at time i. For the bond constraints we use the SHAKE algorithm [606] (see also section 14.1). In the SHAKE algorithm, the bond lengths are exactly fixed at l0 using an iterative scheme. In Fig. 14.2 the energy fluctuations are shown as a function of the time step. Normally one tolerates a noise level in E of O(10−5 ), which would correspond to a time step of 2 × 10−4 for the first model. This should be compared with a single-time-step Molecular Dynamics simulation using the second model. A similar energy noise level can be obtained with a time step of 9 × 10−5 , which is a factor 2 smaller.

FIGURE 14.2 Comparison of the energy fluctuations as a function of the time step for a normal MD simulation with a harmonic bond potential and a constrained MD simulation with the SHAKE algorithm.

To apply the multiple-time-step algorithm, we have to separate the intermolecular force into a short-range and a long-range part. In the short-range part we include the bond-stretching potential and the short-range part of the Lennard-Jones potential. To make a split in the Lennard-Jones potential, we use a simple switching function S (r): ULJ (r) = U short (r) + U long (r) U short (r) = S (r) × ULJ (r) U long (r) = [1 − S (r)] ULJ (r) , where

⎧ ⎪ ⎨ 1 S (r) = 1 + γ 2 (2γ − 3) ⎪ ⎩ 0

0 < r < rc − λ r c − λ < r < rm rm < r < rc

512 PART | IV Advanced techniques

and γ=

r − rm + λ . λ

(14.3.1)

In fact, there are other ways to split the total potential function [624,625]. We have chosen λ = 0.3 and rm = 1.7. To save CPU time a list is made of all the atoms that are close to each other (see Appendix I for details); therefore the calculation of the short-range forces can be done very efficiently. For a noise level of 10−5 , one is able to use δt = 10−4 and n = 10, giving t = 10−3 . To compare the different algorithms in a consistent way, we compare in Fig. 14.3 the efficiency of the various techniques. The efficiency η is defined as the length of the simulation (time step times the number of integration steps) divided by the amount of CPU time that was used. In the figure, we have plotted η for all simulations from Fig. 14.2. For an energy noise level of 10−5 , the SHAKE algorithm is twice as efficient than normal MD (n = 1). This means that hardly any CPU time is spent in the SHAKE routine. However, the MTS algorithm is still two times faster (n = 10, δt = 10−4 ) at the same efficiency.

FIGURE 14.3 Comparison of the efficiency η for bond constraints (SHAKE) with normal molecular dynamics (left), and multiple times steps (right). The left figure gives the efficiency as a function of the time step and the right figure as a function of the number of small time steps n, t = nδt, where the value of δt is given in the symbol legend.

For more details, see SI (Case Study 22).

Let us now separate the Liouville operator iLp into two parts: Fshort ∂ m ∂v Flong ∂ F − Fshort ∂ = . iLlong = m ∂v m ∂v

iLshort =

We use a Trotter expansion with two time-steps: a long time step, t, and a short one, δt = t/n. The total Liouville operator then reads eiL t = ei(Lshort +Llong +Lr ) t

Time-scale-separation problems in MD Chapter | 14 513

Algorithm 28 (Multiple-time-step MD) function multi(fl,fs)

input: long-range part of the force fs: short-range part of the force velocity Verlet with time step t/2 loop for the short time steps velocity Verlet with short timestep t/n fl:

vx=vx+0.5*delt*fl

for 1 ≤ it ≤ n do vx=vx+0.5*(delt/n)*fs x=x+(delt/n)2*vx fs = force_short vx=vx+0.5*(delt/n)*fs

enddo fl = force_long vx=vx+0.5*delt*fl

short-range forces

all long-ranged forces velocity Verlet with time step t/2

end function

Specific Comments (for general comments, see p. 7) 1. In the argument list of function call we have added fl, fs to indicate that in the velocity Verlet algorithm the force is remembered from the previous time step. 2. Function force_short determines the short-range forces. Since this involves a small number of particles, the calculation of these forces is much faster than force_long in which all interacting particles must be considered. ≈ eiLlong t/2 ei(Lshort +Lr ) t eiLlong t/2 . We can again apply a Trotter expansion for the terms iLlong and iLr :  n eiL t = eiLlong t/2 eiLshort δt/2n eiLr δt/n eiLshort δt/2n eiLlong t/2 . We apply this Liouville operator to the initial position and velocity. We first make a step using the expensive Flong   eiLlong t/2 f [˙r(0), r(0)] = f r˙ (0) + Flong (0) t/2m, r(0) , followed by n small steps using the cheap Fshort with the smaller time step, δt, or n    eiLshort δt/2n eiLr δt/n eiLshort δt/2n f r˙ (0) + Flong (0) t/2m, r(0) , and finally one more time-step of length t/2 with the expensive Flong . The result corresponds to solving the equations of motion using the velocity Verlet scheme using the force Fshort with time step δt and initial conditions r˙ (0) + Flong (0) t/2m, r(0). By construction, this algorithm is time reversible.

514 PART | IV Advanced techniques

In Algorithm 28 we illustrate how this Multiple-Time-Step (MTS) can be implemented. Two applications of this algorithm are particularly important. One is the use of MTS algorithms to simulate the dynamics of molecules with stiff internal bonds. In Example 24 it is shown that this application of the MTS method is attractive, because it is competitive with constrained dynamics (see section 14.1), at least for the case that we considered. The second important area of application is as a time-saving device in the simulation of systems with computationally “expensive” potential-energy functions. Here the MTS method offers the possibility of carrying out many time steps with a “cheap” potential energy (e.g., an effective pair potential) and then performing the expensive correction every nth step. Procacci and Marchi have used this approach to reduce the computational costs associated with the long-range interactions of Coulombic systems [624,625], using MTS MD in combination with the Ewald summation (see Chapter 11) to reduce the CPU time for the calculation of long-range interactions.

Chapter 15

Rare events Molecular Dynamics simulations can be used to probe the natural time evolution of classical many-body systems, typically on a time scale of 10−14 to 10−7 s: the upper limit depends on the computing power at our disposal and can even be several orders of magnitude higher, but then the computational cost becomes very high. The typical time window of 10−14 to 10−7 is adequate for studying many structural and dynamical properties, provided that the relevant fluctuations decay on a time scale that is appreciably shorter than 10−7 s. This is true for most equilibrium properties of simple liquids. It is also usually true for the dynamics associated with non-hydrodynamic modes. For hydrodynamic modes (typically, the modes that describe the diffusion or propagation of quantities that satisfy a conservation law, such as mass, momentum, or energy), the time scales can be much longer. But we still can use MD simulations to compute the transport coefficients that govern the hydrodynamic behavior by making use of the appropriate Green-Kubo relation. As explained in section 2.5.2, Green-Kubo relations allow us to express the hydrodynamic transport coefficients in terms of a time integral of a correlation function of a dynamical quantity that fluctuates on a microscopic time scale: for instance, the self-diffusion coefficient is equal to the integral of the velocity autocorrelation function. Nevertheless, there are many dynamical phenomena that cannot be studied in this way. In this Chapter, we discuss one particularly important example, namely activated processes. Conventional MD simulations cannot be used to study activated processes. The reason is not that the relevant dynamics is slow, but rather that rare events happen infrequently, but when a reare event does take place, it usually happens quite quickly, i.e., on a time scale that can be followed by MD simulation. An example is the trans-to-gauche transition in an alkane: this process is infrequent if the barrier separating the two conformations is large compared to kB T . Yet, once an unlikely fluctuation has driven the system to the top of the barrier, the actual barrier crossing is quick. It turns out that in many cases, MD simulations can be used to compute the rate of such activated processes. Such calculations were first performed by Bennett in the context of diffusion in solids [626]. Subsequently, Chandler extended and generalized the approach to the calculation of reaction rates [58,627]. The basic idea behind the “Bennett-Chandler”-style MD simulations of rare events is that the rate at which a barrier crossing proceeds is determined by the product of a static term, namely the probability of finding the system at the top of the Understanding Molecular Simulation. https://doi.org/10.1016/B978-0-32-390292-2.00026-X Copyright © 2023 Elsevier Inc. All rights reserved.

515

516 PART | IV Advanced techniques

barrier, and a dynamic term that describes the rate at which systems at the top of the barrier move to the other valley. Since the mid-90s, techniques for computing the rate of rare events have undergone an explosive development: the number of papers on rare-event simulation techniques has increased by two orders of magnitude since the 1990s, and entire books have been written on the subject [34]. The aim of this chapter on Rare Events is to highlight the ideas behind the main developments, rather than to list them all. In this spirit, we pay attention to the basic physics of rare events, and we give simple examples of some of the rare-event techniques, at least those that have been successfully applied to complex problems. We add this qualification because some rare-event techniques that have been proposed in the literature have only been tested on barrier crossings in low-dimensional energy landscapes. Our focus on a set of representative techniques comes at a price: even less than in the rest of this book can we aim for completeness.

15.1 Theoretical background By Rare Events we mean processes that happen on a timescale that is much longer than the natural timescale of the underlying dynamics. For instance, rare ∗ events √ in a Lennard-Jones system happen on timescales much longer than t ≡ σ m/. Unlike the decay of hydrodynamic modes, rare events are not simply slow: they are just infrequent, but when they happen they proceed rapidly. It is this separation of timescales that makes the numerical simulation of rare events challenging. As a prototypical example of a physical phenomenon proceeding through rare events, we consider a unimolecular reaction A  B, in which species A is transformed into species B. If the rate-limiting step of this reaction is a (classical) barrier crossing, then Molecular Dynamics simulations can be used to compute the rate constant of such a reaction: Chandler’s 1978 paper [627] explains under what conditions the use of rare-event techniques is justified. If the rate-limiting step is a tunneling event or the hopping from one potentialenergy surface to another, the classical approach breaks down, and we should turn to quantum dynamical schemes that fall outside the scope of this book (see [48,628]). Let us first look at the phenomenological description of unimolecular reactions. We denote the number density of species A and B by cA and cB , respectively. The phenomenological rate equations are dcA (t) = −kA→B cA (t) + kB→A cB (t) dt dcB (t) = +kA→B cA (t) − kB→A cB (t). dt

(15.1.1) (15.1.2)

Rare events Chapter | 15 517

Clearly, as the number of molecules is constant in this conversion reaction, the total number density is conserved: d [cA (t) + cB (t)] = 0. dt

(15.1.3)

In equilibrium, all concentrations are time-independent, i.e., c˙A = c˙B = 0. This implies that K≡

cA  kB→A , = cB  kA→B

(15.1.4)

where K is the equilibrium constant of the reaction. Let us now consider what happens if we take a system at equilibrium, and apply a small perturbation, cA , to the concentration of species A (and thereby of species B). We can write the rate equation that determines the decay of this perturbation as dcA (t) = −kA→B cA (t) − kB→A cA (t), dt where we have used Eqs. (15.1.3) and (15.1.4). The solution to this equation is cA (t) = cA (0) exp[−(kA→B + kB→A )t] ≡ cA (0) exp(−t/τR ), (15.1.5) where we have defined the reaction time constant   cA  −1 cB −1 −1 τR = (kB→A + kA→B ) = kA→B 1 + = , cB  kA→B

(15.1.6)

where we have assumed that the total concentration cA + cB = 1. With this normalization, cA is simply the probability that a given molecule is in state A. Thus far, we have discussed the reaction from a macroscopic, phenomenological point of view. Let us now look at the microscopics. We do this in the framework of linear response theory. First of all, we must have a microscopic description of the reaction. This means that we need a recipe that allows us to measure how far the reaction has progressed. In the case of diffusion over a barrier from one free energy minimum to another, we could use the fraction of the distance traveled as a reaction coordinate. In general, reaction coordinates may be complicated, nonlinear functions of the coordinates of all particles. It is convenient to think of the reaction coordinate q simply as a generalized coordinate of the type discussed in Chapter 14. In Fig. 15.1, we show a schematic drawing of the free energy surface of the system as a function of the reaction coordinate q. If we wish to change the equilibrium concentration of species A, we should apply an external perturbation that favors all states with q < q ∗ relative to those with q > q ∗ . By analogy to the discussion in section 2.5, we consider an external perturbation that changes the relative probabilities of finding species A and B. To

518 PART | IV Advanced techniques

FIGURE 15.1 Schematic drawing of the free energy surface of a many-body system, as a function of the reaction coordinate q. For q < q ∗ , we have the reactant species A, for q > q ∗ , we have the product B. As will be discussed below, the choice of q ∗ is, to some extent, arbitrary. However, it is convenient to identify the value of the reaction coordinate at the top of the barrier with q ∗ .

achieve this, we add to the Hamiltonian a term that lowers the potential energy for q < q ∗ : H = H0 − gA (q − q ∗ ),

(15.1.7)

where  is a parameter that measures the strength of the perturbation. As we are interested in the linear response, we shall consider the limit  → 0. The function gA (q − q ∗ ) is chosen such that it is equal to 1 if the reaction coordinate q is in the range that corresponds to an equilibrium configuration of the “reactant,” while gA (q − q ∗ ) should be equal to 0 for a typical “product” configuration. The traditional choice for gA is a Heaviside θ -function: gA (q − q ∗ ) = 1 − θ (q − q ∗ ) = θ (q ∗ − q), where θ (x) = 1 for x > 0 and θ (x) = 0 otherwise. In what follows, we shall consider the more general case that gA is equal to the θ -function in the reactant and product domains. However, unlike θ , gA varies smoothly from 1 to 0 in the region of the free energy barrier. For the sake of simplicity, we refer to the states A and B as “reactants” and “products” in the chemical sense of the word. However, in general, A and B can designate any pair of initial and final states that can interconvert by a barrier-crossing process. Let us first consider the effect of a static perturbation of this type on the probability of finding the system in state A. We note that cA = cA  − cA 0 = gA  − gA 0 . Here we have used the fact that gA is equal to 1 in the reactant basin. Hence, the average value of gA is simply equal to the probability of finding the system in

Rare events Chapter | 15 519

state A. From Eq. (2.5.3) of section 2.5.1, we find immediately that    ∂cA 2 = β gA − gA 20 . 0 ∂ This equation can be simplified by noting that, outside the barrier region, gA is 2 (x) = g (x). In the barrier region, this equality need either 1 or 0, and hence, gA A not hold—but those configurations hardly contribute to the equilibrium average. Hence,

 ∂cA (15.1.8) = β gA 0 1 − gA 0 = β cA  cB  . ∂ For what follows, it is convenient to define the function gB = 1 − gA . Clearly, gB 0 = (1 − gA )0 = cB 0 . Next, consider what happens if we suddenly switch off the perturbation at time t = 0. The concentration of A will relax to its equilibrium value as described in Eq. (2.5.8) and we find that, to first order in , d exp(−βH0 ) (gA (0) − gA ) exp(iL0 t) (gA (0) − gA ) cA (t) = β d exp(−βH0 ) = β gA (0)gA (t) .

(15.1.9)

Finally, we can use Eq. (15.1.8) to eliminate  from the above equation, and we find the following expression for the relaxation of an initial perturbation in the concentration of species A: cA (t) = cA (0)

gA (0)gA (t) . cA  cB 

(15.1.10)

If we compare this with the phenomenological expression, Eq. (15.1.5), we see that gA (0)gA (t) exp(−t/τR ) = . (15.1.11) cA  cB  Actually, we should be cautious with this identification. For very short times (i.e., times comparable to the average time that the system spends in the region of the barrier), we should not expect the autocorrelation function of the concentration fluctuations to decay exponentially. Only at times that are long compared to the typical barrier-crossing time should we expect Eq. (15.1.11) to hold. Let us assume that we are in this regime. Then we can obtain an expression for τR by differentiating Eq. (15.1.11): − τR−1 exp(−t/τR ) =

gA (0)g˙ A (t) g˙ A (0)gA (t) =− , cA  cB  cA  cB 

(15.1.12)

520 PART | IV Advanced techniques

where we have dropped the ’s, because the time derivative of the equilibrium concentration vanishes. Hence, for times that are long compared to molecular times, but still very much shorter than τR , we can write τR−1 =

g˙ A (0)gA (t) cA  cB 

(15.1.13)

or, if we recall Eq. (15.1.6) for the relation between kA→B and τR , we find kA→B (t) =

g˙ A (0)gA (t) . cA 

(15.1.14)

In this equation, the time dependence of kA→B (t) is indicated explicitly. However, we recall that it is only the long-time plateau value of kB→A (t) that enters into the phenomenological rate equation. Finally, we can re-express the correlation function in Eq. (15.1.14) by noting that g˙ A (q − q ∗ ) = q˙

∂gB (q − q ∗ ) ∂gA (q − q ∗ ) = −q˙ = −q(∂ ˙ q gB ), ∂q ∂q

where we use the notation ∂q ≡

∂ ∂q .

Then:



q(∂ ˙ q gB )(0)gB (t) kA→B (t) = , cA 

(15.1.15)

where we have used the fact that the equilibrium average q ˙ is equal to zero. A particularly convenient form of Eq. (15.1.15) that we shall use in section 15.3 is   ∞ q(0)(∂ ˙ ˙ q )gB (0)q(t)(∂ q gB )(t) kA→B (t) = dt . (15.1.16) cA  0 But first, we establish contact with the conventional “Bennett-Chandler” expression for the rate constant.

15.2 Bennett-Chandler approach If we choose gA = θ (q ∗ − q), and hence gB = θ (q − q ∗ ), then we can rewrite Eq. (15.1.15) in the following way: qδ(q(0) ˙ − q ∗ )θ (q(t) − q ∗ ) cA  ∗ − q(0))θ (q(t) − q ∗ ) qδ(q ˙ = . θ (q ∗ − q)

kA→B (t) =

(15.2.1)

In this way, we have expressed the rate constant kA→B exclusively in microscopic quantities that can be measured in a simulation. Next, we shall see

Rare events Chapter | 15 521

how this can be done. First, however, we establish the connection between Eq. (15.2.1) and the expression for the rate constant that follows from Eyring’s Transition State Theory (TST). To this end, consider kA→B (t) in the limit t → 0+: qθ ˙ (q(0+) − q ∗ )δ(q ∗ − q(0)) θ (q ∗ − q) ∗ − q(0)) qθ ˙ (q)δ(q ˙ = , θ (q ∗ − q)

lim kA→B (t) =

t→0+

(15.2.2)

where we have used the fact that θ (q(0+) − q ∗ ) = 1, if q˙ > 0, and 0 otherwise. In other words θ (q(0+) − q ∗ ) = θ (q). ˙ The expression on the last line of Eq. (15.2.2) is the classical transition state theory prediction for the rate conT ST . stant, kA→B It is useful to rewrite Eq. (15.2.1) as kA→B =

∗ − q(0))θ (q(t) − q ∗ ) q(0)δ(q δ(q ∗ − q) ˙ × . δ(q ∗ − q(0)) θ (q ∗ − q)

(15.2.3)

The first part on the right-hand side of Eq. (15.2.3) is a conditional average, namely the average of the product q(0)θ ˙ (q(t) − q ∗ ), given that q(0) = q ∗ . The second part expresses the probability density of finding the system at the top of the barrier, divided by the probability that the system is on the reactant side of the barrier. We denote this density with P (q ∗ ), where P (q) is defined as dX exp(−βU)δ(q − q(X )) δ(q − q(X )) P (q) ≡ . (15.2.4) = θ (q ∗ − q(X )) dX exp(−βU)θ (q ∗ − q(X )) It is inadvisable to use brute-force simulations to compute either term in Eq. (15.2.3). The reason is that if we study events that are rare, normal simulations will barely sample the top of the barrier, and would yield poor statistics on kA→B . The solution to this problem is either to use biased simulations of the type discussed in Chapter 8, or to use holonomic constraints as discussed in Chapter 14. As the two approaches are rather different, we discuss them separately. Conceptually the easiest approach is to use umbrella sampling to compute first the probability distribution at the top of the barrier. In such a simulation, we would construct P (q) over the entire range of q-values q < q ∗ , using multiple umbrella-sampling simulations (section 8.6.6) and a histogram reconstruction method, such as MBAR (see section 8.6.11). To probe P (q) near the top of the barrier, we could use a narrow biasing potential well. This part of the simulation can be done with both MC and MD. To compute the first term in Eq. (15.2.3), which is a conditional average, we would use configurations obtained from the umbrella sampling near the top of the barrier, to initiate MD simulations. In that case, we would generate the

522 PART | IV Advanced techniques

velocities of the particles in the system according to a Maxwell distribution. With these initial velocities and positions, we run MD simulations to sample q(0)θ ˙ (q(t) − q ∗ ). Of course, the umbrella sampling will generate points very close to, but not exactly at q ∗ . In practice, this is rarely a problem.

15.2.1 Dealing with holonomic constraints (Blue-Moon ensemble)

The alternative to biased sampling is to compute the conditional average of q̇(0)θ(q(t) − q∗) using a holonomic constraint q(0) − q∗ = 0 for the initial conditions. With this constraint, we would then equilibrate the system using MD, and then use the points on this MD trajectory to start a large number of runs in which the system is allowed to evolve without any constraints. However, when using holonomic constraints at q = q∗, we must be careful because in Chapter 14 we argued that holonomic constraints can change the equilibrium distribution:

$$\rho_{\mathrm{unconstrained}}(q) = |\mathbf{H}|^{-\frac{1}{2}}\,\rho_{\mathrm{constrained}}(q)
\qquad (15.2.5)$$

with

$$H_{\alpha\beta} = \sum_{i=1}^{N} m_{i}^{-1}\,
\frac{\partial \sigma_{\alpha}}{\partial \mathbf{r}_{i}}\cdot
\frac{\partial \sigma_{\beta}}{\partial \mathbf{r}_{i}}.$$

In the present case, we have one constraint.¹ Our constraint is σ ≡ q∗ − q = 0. If q is a linear function of the Cartesian coordinates, there is no need to worry about the effect of the constraints on the distribution function, because |H| is a constant. However, in general, q is a nonlinear function of all other coordinates, and we should consider the effect of |H| on ρ(q). The hard constraint(s) will bias the initial distribution function:

$$\frac{\left\langle \dot{q}(0)\,\delta\!\left(q^{*}-q(0)\right)\theta\!\left(q(t)-q^{*}\right)\right\rangle}{\left\langle \delta\!\left(q^{*}-q(0)\right)\right\rangle}
= \frac{\left\langle |\mathbf{H}|^{-\frac{1}{2}}\,\dot{q}(0)\,\theta\!\left(q(t)-q^{*}\right)\right\rangle_{c}}{\left\langle |\mathbf{H}|^{-\frac{1}{2}}\right\rangle_{c}},
\qquad (15.2.6)$$

where the subscript c indicates a constrained average over initial configurations. Below, we derive an explicit expression for the variation of P(q) with q, using constrained MD to reconstruct the complete P(q) between the bottom and the top of the barrier; in this way we can compute P(q∗).

¹ We assume, for the moment, that there are no other constraints in the system. For a discussion of the latter case, see e.g., ref. [609].


In fact, rather than computing the derivative of P(q) with respect to q, we differentiate ln P(q):

$$\frac{\partial \ln P(q)}{\partial q}
= \frac{\int d\mathbf{X}\, e^{-\beta U}\,\partial \delta\!\left(q-q(\mathbf{X})\right)/\partial q}{\int d\mathbf{X}\, e^{-\beta U}\,\delta\!\left(q-q(\mathbf{X})\right)}.
\qquad (15.2.7)$$

We can re-express the integral in the numerator by partial integration. To do this, we should first transform from the Cartesian coordinates X to a set of generalized coordinates {Q, q} that includes the reaction coordinate q. We denote the Jacobian of the transformation from X to {Q, q} by |J|. Now we carry out the partial integration:

$$\begin{aligned}
\frac{\partial \ln P(q)}{\partial q}
&= \frac{\int d\mathbf{Q}\,dq\, |J|\, e^{-\beta U}\,\partial \delta\!\left(q-q(\mathbf{X})\right)/\partial q}{\int d\mathbf{X}\, e^{-\beta U}\,\delta\!\left(q-q(\mathbf{X})\right)} \\
&= \frac{\int d\mathbf{Q}\,dq\, \left[\partial\!\left(|J|\, e^{-\beta U}\right)/\partial q(\mathbf{X})\right]\delta\!\left(q-q(\mathbf{X})\right)}{\int d\mathbf{X}\, e^{-\beta U}\,\delta\!\left(q-q(\mathbf{X})\right)} \\
&= \frac{\int d\mathbf{X}\, \left[\partial\!\left(\ln |J| - \beta U\right)/\partial q(\mathbf{X})\right] e^{-\beta U}\,\delta\!\left(q-q(\mathbf{X})\right)}{\int d\mathbf{X}\, e^{-\beta U}\,\delta\!\left(q-q(\mathbf{X})\right)} \\
&= \frac{\left\langle \left[\partial\!\left(\ln |J| - \beta U\right)/\partial q(\mathbf{X})\right]\delta\!\left(q-q(\mathbf{X})\right)\right\rangle}{\left\langle \delta\!\left(q-q(\mathbf{X})\right)\right\rangle},
\end{aligned}
\qquad (15.2.8)$$

where, in the third line, we have transformed back to the original Cartesian coordinates. It should be noted that the computation of the Jacobian |J| can be greatly simplified [629]. As the averages in both the numerator and the denominator contain δ(q − q(X)), it is natural to express Eq. (15.2.8) in terms of constrained averages that can be computed conveniently in a constrained Molecular Dynamics simulation. Just as in Eq. (15.2.6), we must correct for the bias introduced by the hard constraint:

$$\frac{\partial \ln P(q)}{\partial q}
= \frac{\left\langle |\mathbf{H}|^{-\frac{1}{2}}\,\partial\!\left(\ln |J| - \beta U\right)/\partial q\right\rangle_{c}}{\left\langle |\mathbf{H}|^{-\frac{1}{2}}\right\rangle_{c}},
\qquad (15.2.9)$$

where the subscript c denotes averaging in an ensemble where q(X) is constrained to be equal to q. If we integrate Eq. (15.2.9) from the bottom to the top of the barrier, we get

$$\ln\!\left[\frac{P(q^{*})}{P(q=q_{A})}\right]
= \int_{q_{A}}^{q^{*}} dq\,
\frac{\left\langle |\mathbf{H}|^{-\frac{1}{2}}\,\partial\!\left(\ln |J| - \beta U\right)/\partial q\right\rangle_{c}}{\left\langle |\mathbf{H}|^{-\frac{1}{2}}\right\rangle_{c}}.
\qquad (15.2.10)$$

In practice, this integration has to be carried out numerically. By combining Eqs. (15.2.6), (15.2.4), and (15.2.10), we finally have an expression for the rate constant kA→B that can be computed numerically.
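As a concrete, purely illustrative sketch of that numerical step, the snippet below integrates Eq. (15.2.10) on a grid of constraint values with the trapezoidal rule. The two input arrays, holding the constrained averages in the numerator and the denominator, are assumed to have been produced by the Blue-Moon simulations described above; their names are placeholders.

```python
import numpy as np

def log_prob_ratio(q_grid, num_c, den_c):
    """Numerically integrate Eq. (15.2.10) by the trapezoidal rule.

    q_grid : constraint values from q_A (first entry) to q* (last entry)
    num_c  : constrained averages <|H|^(-1/2) d(ln|J| - beta*U)/dq>_c at each q
    den_c  : constrained averages <|H|^(-1/2)>_c                     at each q
    Returns ln[ P(q*) / P(q_A) ].
    """
    integrand = np.asarray(num_c) / np.asarray(den_c)
    return np.trapz(integrand, q_grid)
```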


It should be noted that, in the above expression, we have assumed that the reaction coordinate is the only quantity that is constrained in the simulation. If there are more constraints, e.g., if we simulate a reaction in a polyatomic fluid, then the expression for kA→B becomes a bit more complicated (see the article by Ciccotti [609], and references therein). The Blue-Moon technique can be painful to implement when the constraints are complicated many-body functions. In such cases, Automatic Differentiation (see e.g., [108]) may be an attractive option.

Bennett-Chandler epilogue

It is clear that the Bennett-Chandler expressions for systems with holonomic constraints are not particularly simple. The question is then: which technique to use when? When writing a program from scratch, using holonomic constraints would be heroic, but the method is not better than a much simpler simulation that uses umbrella sampling or a similar technique. However, few programs are written from scratch. Moreover, there are many more MD codes available than MC codes. Hence, chances are that using the Blue-Moon ensemble in an existing MD code is just a matter of setting a parameter in the input file. In that case, the previous section should help the user understand what the Blue-Moon option does.

Irrespective of the method used, the expressions derived above for a unimolecular rate constant are not limited to chemical reactions: the same approach can be used to study any activated classical process, such as diffusion in solids, crystal nucleation, or transport through membranes.

Example 25 (Ideal gas particle over a barrier). To illustrate the "Bennett-Chandler" approach for calculating crossing rates, we consider an ideal gas particle moving in an external field. This particle is constrained to move on the one-dimensional potential surface shown in Fig. 15.2. This example is rather unphysical because the moving particle cannot dissipate its energy. As a consequence, the motion of the particle is purely ballistic. We assume that, far away on either side of the barrier, the particle can exchange energy with a thermal reservoir. Transition state theory predicts a crossing rate given by Eq. (15.2.2):

$$k_{A\to B}^{TST}
= \frac{1}{2}\left\langle |\dot{q}|\right\rangle_{q^{*}}
\frac{e^{-\beta u(q^{*})}}{\int_{-\infty}^{q^{*}} dq\, e^{-\beta u(q)}}
= \sqrt{\frac{k_{B}T}{2\pi m}}\,
\frac{e^{-\beta u(q^{*})}}{\int_{-\infty}^{q^{*}} dq\, e^{-\beta u(q)}}.
\qquad (15.2.11)$$

If we choose the dividing surface q1 (see Fig. 15.2) at the top of the barrier (q1 = q∗), none of the particles that start off with a positive velocity will return to the reactant state. Hence, there is no recrossing of the barrier, and transition state theory is exact for this system.
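For a one-dimensional model of this kind, the TST rate of Eq. (15.2.11) can be evaluated directly by numerical quadrature. The sketch below does this for an arbitrary user-supplied potential u(q); the quartic double well in the comment is only an illustrative stand-in, not the potential of Fig. 15.2, and reduced units with kB = 1 are assumed.

```python
import numpy as np

def tst_rate(u, q_star, beta, mass, q_min=-3.0, n=20001):
    """Evaluate Eq. (15.2.11) by quadrature for a one-dimensional potential u(q).

    k_TST = sqrt(kBT / (2 pi m)) * exp(-beta*u(q*)) / int_{-inf}^{q*} dq exp(-beta*u(q))
    The lower limit q_min stands in for -infinity and must lie deep inside the
    reactant basin (the integral must have converged there).
    """
    q = np.linspace(q_min, q_star, n)
    z_reactant = np.trapz(np.exp(-beta * u(q)), q)       # configurational integral over A
    prefactor = np.sqrt(1.0 / (2.0 * np.pi * beta * mass))  # = 0.5 <|qdot|> = sqrt(kBT/(2 pi m))
    return prefactor * np.exp(-beta * u(q_star)) / z_reactant

# Illustration: a quartic double well with a barrier of 5 kBT at q* = 0
# u = lambda q: 5.0 * (q**2 - 1.0)**2
# k = tst_rate(u, 0.0, beta=1.0, mass=1.0)
```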


FIGURE 15.2 Potential-energy barrier for an ideal gas particle; if the particle has a position to the left of the dividing surface q1, it is in state A (reactant). The region to the right of the barrier is designated as product B. The top of the barrier is denoted by q∗ (q∗ = 0).

Note that transition state theory (Eq. (15.2.11)) predicts a rate constant that depends on the location of the dividing surface. In contrast, the Bennett-Chandler expression for the crossing rate is independent of the location of the dividing surface (as it should be). To see this, consider the situation where the dividing surface is chosen to the left of the top of the barrier (i.e., at q1 < q∗). The calculation of the crossing rate according to Eq. (15.2.3) proceeds in two steps. First we calculate the relative probability of finding a particle at the dividing surface. And then we need to compute the probability that a particle that starts with an initial velocity q̇ from this dividing surface will, in fact, cross the barrier. The advantage of the present example is that this probability can be computed explicitly. According to Eq. (15.2.4), the relative probability of finding a particle at q1 is given by

$$\frac{\left\langle \delta(q-q_{1})\right\rangle}{\left\langle \theta(q_{1}-q)\right\rangle}
= \frac{e^{-\beta u(q_{1})}}{\int_{-\infty}^{q_{1}} dq\, e^{-\beta u(q)}}.$$

If the dividing surface is not at the top of the barrier, then the probability of finding a particle will be higher at q1 than at q∗, but the fraction of the particles that actually cross the barrier will be less than predicted by transition state theory. It is convenient to introduce the time-dependent transmission coefficient κ(t), defined as the ratio

$$\kappa(t) \equiv \frac{k_{A\to B}(t)}{k_{A\to B}^{TST}}
= \frac{\left\langle \dot{q}(0)\,\delta\!\left(q(0)-q_{1}\right)\theta\!\left(q(t)-q_{1}\right)\right\rangle}{0.5\left\langle |\dot{q}(0)|\right\rangle}.$$

The behavior of κ(t) is shown in Fig. 15.3 for various choices of q1. The figure shows that for t → 0, κ(t) = 1, and that for different values of q1 we get different plateau values. The reason κ(t) decays from its initial value is that particles that start off with too little kinetic energy cannot cross the barrier and therefore recross the dividing surface (q1). The plateau value of κ(t) provides us with the correction that has to be applied to the crossing rate predicted by


transition state theory. Hence, we see that as we change q1, the probability of finding a particle at q1 goes up, and the transmission coefficient goes down. But, as can be seen from Fig. 15.3, the actual crossing rate (which is proportional to the product of these two terms) is independent of q1, as it should be. Now consider the case that q1 > q∗. In that case, all particles starting with positive q̇ will continue to the product side. But now there is also a fraction of the particles with negative q̇ that will proceed to the product side. These events will give a negative contribution to κ. And the net result is that the transmission coefficient will again be less than predicted by transition state theory. Hence, the important thing is not whether a trajectory ends up on the product side, but whether it starts on the reactant side and proceeds to the product side. In a simulation, it is therefore convenient always to compute trajectories in pairs: for every trajectory starting from a given initial configuration with a velocity q̇, we also compute the time-reversed trajectory, i.e., the one starting from the same configuration with a velocity −q̇. If both trajectories end up on the same side of the barrier, then their total contribution to the transmission coefficient is clearly zero. Only if the forward and time-reversed trajectories end up on different sides of the barrier do we get a contribution to κ. In the present (ballistic) case, this contribution is always positive. But in general, this contribution can also be negative (namely, if the initial velocity at the top of the barrier is not in the direction where the particle ends up).

FIGURE 15.3 Barrier recrossing: the left figure gives the transmission coefficient as a function of time for different values of q1 . The right-hand figure shows, in a single plot, the probability density of finding the system at q = q1 (solid squares), the transmission coefficient κ (open squares), and the overall crossing rate (open circles), all plotted as a function of the location of the dividing surface. Note that the overall crossing rate is independent of the choice of the dividing surface.

We chose this simple ballistic barrier-crossing problem because we can show explicitly that the transmission rate is independentᵃ of the location of q1. We start with the observation that the sum of the kinetic and potential energies of a particle that crosses the dividing surface q1 is constant. Only those particles that have sufficient kinetic energy can cross the barrier. We can easily compute the long-time limit of ⟨q̇(0)θ(q(t) − q1)⟩:


$$\left\langle \dot{q}(0)\,\theta\!\left(q(\infty)-q_{1}\right)\right\rangle
= \sqrt{\frac{m\beta}{2\pi}} \int_{v_{\min}}^{\infty} dv\, v\, e^{-\beta m v^{2}/2}
= \sqrt{\frac{1}{2\pi m\beta}}\, e^{-\frac{1}{2}\beta m v_{\min}^{2}},$$

where v_min is the minimum velocity needed to achieve a successful crossing; it is given by

$$\tfrac{1}{2} m v_{\min}^{2} + u(q_{1}) = u(q^{*}).$$

It then follows that

$$\left\langle \dot{q}(0)\,\theta\!\left(q(\infty)-q_{1}\right)\right\rangle
= \sqrt{\frac{1}{2\pi m\beta}}\, \exp\!\left\{-\beta\left[u(q^{*})-u(q_{1})\right]\right\}.$$

This term exactly compensates the Boltzmann factor, exp(−βu(q1)), associated with the probability of finding a particle at q1. Hence, we have shown that the overall crossing rate is given by Eq. (15.2.11), independent of the choice of q1. The reader may wonder why it is important to have an expression for the rate constant that is independent of the precise location of the dividing surface. The reason is that, although it is straightforward to find the top of the barrier in a one-dimensional system, the precise location of the saddle point in a reaction pathway of a many-dimensional system is usually difficult to determine. With the Bennett-Chandler approach it is not necessary to know the exact location of the saddle point. Still, it is worth trying to get a reasonable estimate, as the statistical accuracy of the results is best if the dividing surface is chosen close to the true saddle point. The nice feature of the Bennett-Chandler expression for barrier-crossing rates is that it allows us to compute rate constants under conditions where barrier recrossings are important, for instance, if the motion over the top of the barrier is more diffusive than ballistic. Examples of such systems are the cyclohexane interconversion in a solvent [631] and the diffusion of nitrogen in an argon crystal [632]. For more details, see SI (Case Study 23).

ᵃ The general proof that the long-time limit of the crossing rate is independent of the location of the dividing surface was given by Miller [630].
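A brute-force numerical counterpart of this example is straightforward: launch many trajectories from the dividing surface q1 with Maxwell-Boltzmann velocities, integrate Newton's equation of motion, and accumulate κ(t) as defined above. The sketch below assumes a user-supplied force routine force(q) = −du/dq and reduced units with kB = 1; it is an illustration only, not the code used to produce Fig. 15.3.

```python
import numpy as np

def kappa_ballistic(force, q1, beta, mass, t_max, dt=1e-3, n_traj=20000, seed=0):
    """Estimate kappa(t) = <qdot(0) theta(q(t)-q1)>_{q(0)=q1} / (0.5 <|qdot|>)
    for a single particle on a 1D potential; all trajectories start at q1 with
    Maxwell-Boltzmann velocities and evolve ballistically (velocity Verlet)."""
    rng = np.random.default_rng(seed)
    n_steps = int(t_max / dt)
    v0 = rng.normal(0.0, np.sqrt(1.0 / (beta * mass)), n_traj)
    q = np.full(n_traj, float(q1))
    v = v0.copy()
    kappa = np.empty(n_steps)
    f = force(q)
    for step in range(n_steps):
        v += 0.5 * dt * f / mass          # velocity Verlet, half kick
        q += dt * v                       # drift
        f = force(q)
        v += 0.5 * dt * f / mass          # second half kick
        kappa[step] = np.mean(v0 * (q > q1)) / (0.5 * np.mean(np.abs(v0)))
    return kappa
```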

15.3 Diffusive barrier crossing

In the previous section, we described the Bennett-Chandler expression for the rate of activated processes. This expression is widely used in numerical simulation. However, even though the expression is correct for arbitrary barrier crossings, provided that the barrier is much larger than kBT, it is not always computationally efficient. To see this, consider the expression for the transmission coefficient, κ:


FIGURE 15.4 Simple model for a diffusive barrier crossing: a square barrier of height U and width ω that separates two macroscopic states, A and B.

$$\kappa(t) \equiv \frac{k_{A\to B}(t)}{k_{A\to B}^{TST}}
= \frac{\left\langle \dot{q}(0)\,\delta\!\left(q(0)-q^{*}\right)\theta\!\left(q(t)-q^{*}\right)\right\rangle}{0.5\left\langle |\dot{q}(0)|\right\rangle}.
\qquad (15.3.1)$$

Clearly, if κ → 1, we can use Transition State Theory (TST) to compute the crossing rate, once we know the barrier height. Hence, the only regime where Eq. (15.3.1) is of interest is when there are appreciable corrections to TST, i.e., when κ ≪ 1. However, precisely in this regime, the numerical calculation of κ, using Eq. (15.3.1), is plagued by slow transient behavior and large statistical errors. To illustrate this point, let us consider a simple example: a square barrier of height U and width ω that separates two (meta)stable states, A and B (see Fig. 15.4). For simplicity, we assume that, in equilibrium, the two states have the same probability, Peq ≈ 0.5 (the population of the barrier region is negligible). Moreover, we assume that the motion in the barrier region is diffusive. For this simple geometry, it is easy to write down the diffusion equation. This equation follows from the continuity equation

$$\frac{\partial \rho(q,t)}{\partial t} = -\frac{\partial J(q,t)}{\partial q},
\qquad (15.3.2)$$

which relates the local density ρ(q, t) at point q and time t to the flux density J(q, t). In addition, we have the constitutive equation for the diffusional flux in an external field

$$J(q,t) = -D\left[\beta\,\frac{\partial U(q)}{\partial q}\,\rho(q,t) + \frac{\partial \rho(q,t)}{\partial q}\right],
\qquad (15.3.3)$$

where U(q) is the external potential. Combining this with the continuity equation, we obtain the Smoluchowski equation [633],

$$\frac{\partial \rho(q,t)}{\partial t}
= \frac{\partial}{\partial q}\, D\left[\beta\,\frac{\partial U}{\partial q}\,\rho(q,t) + \frac{\partial \rho(q,t)}{\partial q}\right]
= \frac{\partial}{\partial q}\, D\, e^{-\beta U}\,\frac{\partial}{\partial q}\, e^{+\beta U}\,\rho(q,t),
\qquad (15.3.4)$$

where D is the diffusion constant of the system. In steady state ρ̇ = 0, and the diffusional flux J is constant. We now use the fact that we have assumed a flat barrier, implying that ∂U/∂q = 0 for −ω/2 < q < ω/2. We will also assume that D is independent of q. It then follows from Eq. (15.3.3) that the probability distribution at the top of the barrier must be a linear function of the reaction coordinate, q:

$$\rho^{st}(q) = aq + b \qquad \text{for } -\omega/2 < q < \omega/2.
\qquad (15.3.5)$$

The constants a and b are determined by the boundary conditions. In equilibrium, a = 0 and b = ρeq exp(−βU), where ρeq is the density in states A and B. Suppose we increase the probability density in state A from its equilibrium value by an amount ρeq δ/2, and decrease the probability density of state B by the same amount; then the system is no longer in equilibrium. If the barrier is high enough, the resulting flux will be very small and the probabilities of states A and B will barely change with time. In this case, the stationary probability distribution at the top of the barrier is

$$\rho^{st}(q) = e^{-\beta U}\rho_{eq}\left(1 - \frac{q\,\delta}{\omega}\right),
\qquad (15.3.6)$$

and the flux

$$J^{st} = D\,\frac{\rho_{eq}\,\delta}{\omega}\, e^{-\beta U}.
\qquad (15.3.7)$$

As expected, the flux decreases exponentially with increasing barrier height. The probability density at the top of the barrier is given by Eq. (15.3.6) if, and only if, the flux has reached its stationary value. Now consider the expression for the rate. We rewrite Eq. (15.3.1) as

$$\kappa(t) \equiv \frac{\left\langle \theta\!\left(q^{*}-q(0)\right)\dot{q}(t)\,\delta\!\left(q(t)-q^{*}\right)\right\rangle}{0.5\left\langle |\dot{q}(0)|\right\rangle}.
\qquad (15.3.8)$$

Apart from a constant factor, κ(t) is the flux through the transition state, q∗ (= 0), due to a step-function probability profile at t = 0. As this step function differs from the linear steady-state profile, the resulting flux will depend on time. We are interested in the plateau value of κ(t) after the initial transient regime. The usual assumption is that this transient regime extends over typical "molecular" time scales. However, in the present case it is easy to show that the approach of κ(t) to its plateau value can be quite slow. For times t ≪ ω²/D, we can combine Eqs. (15.3.4) and (15.3.2) to yield

$$\frac{\partial J(q,t)}{\partial t} \approx D\,\frac{\partial^{2} J(q,t)}{\partial q^{2}},
\qquad (15.3.9)$$


with the solution

$$J(q,t) \approx D\, e^{-\beta U}\rho_{eq}\,\frac{1}{\sqrt{2\pi D t}}\,
\exp\!\left[-\frac{(q-q^{*})^{2}}{2Dt}\right].
\qquad (15.3.10)$$

We then find that J(q∗, t) decays as 1/√t for times t ≪ ω²/D. This means that the approach to the stationary state is very slow. But, more importantly, in the case of diffusive barrier crossings, the transmission coefficient κ is typically quite small. Below, we will give an estimate for κ, but at this stage we just note that small values of κ cannot be determined accurately using Eq. (15.3.8). To see this, consider the expression for the transmission coefficient:

$$\kappa = \frac{2\left\langle \dot{q}(0)\,\theta\!\left(q(t)-q^{*}\right)\right\rangle_{q(0)=q^{*}}}{\left\langle |\dot{q}|\right\rangle_{eq}}.
\qquad (15.3.11)$$

In a computer simulation, we put the system initially at q∗ and let it evolve. We then compute θ(q(t) − q∗) for times that are long enough for Eq. (15.3.8) to have reached a plateau value. We repeat this procedure for n independent trajectories, and then estimate κ as

$$\kappa_{est} = \frac{2}{n\left\langle |\dot{q}|\right\rangle}
\sum_{i=1}^{n}\left[\dot{q}(0)\,\theta\!\left(q(t)-q^{*}\right)\right]_{i}.
\qquad (15.3.12)$$

The statistical error in κest is given by

$$\sigma_{\kappa}^{2} = \left\langle (\kappa_{est}-\kappa)^{2}\right\rangle.
\qquad (15.3.13)$$

Taking into account that the trajectories are uncorrelated and assuming that the average in Eq. (15.3.13) can be factorized as if q̇ and θ(q(t) − q∗) were Gaussian variables, we get

$$\sigma_{\kappa}^{2} = \frac{4}{n\left\langle |\dot{q}|\right\rangle^{2}}\,
\left\langle \dot{q}^{2}\right\rangle\left\langle \theta^{2}\right\rangle + \frac{\kappa^{2}}{n}.
\qquad (15.3.14)$$

To estimate the variance we make use of the fact that ⟨θ²⟩ ≈ 0.5 and

$$\frac{4\left\langle \dot{q}^{2}\right\rangle}{\left\langle |\dot{q}|\right\rangle^{2}} \sim \mathcal{O}(1).
\qquad (15.3.15)$$

As we consider the case that the transmission coefficient is much less than one, the second contribution in Eq. (15.3.14) can be ignored. We then obtain

$$\sigma_{\kappa}^{2} \sim \frac{1}{n},
\qquad (15.3.16)$$

and the relative error is

$$\frac{\sigma_{\kappa}}{\kappa} \sim \frac{1}{\kappa\sqrt{n}}.
\qquad (15.3.17)$$


This shows that, even for a transmission coefficient as large as 0.1, we would need to follow about 10⁴ trajectories in order to get an accuracy of only 10%. The reason why the statistical error is so large is that we use the θ-function to detect transitions from A to B. In a diffusive barrier-crossing process, where recrossings of the transition state are frequent, the time evolution of this θ-function resembles a random telegraph signal.

The above analysis suggests that the Bennett-Chandler approach becomes inefficient for systems with low transmission coefficients because: 1) the BC scheme prepares the system in a state that is not close to the steady-state situation, and 2) the BC scheme employs the "noisy" θ-function to detect whether the system is in state B. The obvious question is whether we can do better. Below, we show that this is indeed possible. First of all, we shall go back to Eq. (15.1.7) and try to devise a perturbation that prepares the system immediately close to the steady state. Secondly, we shall construct a more continuous "detector" function for measuring the concentration of state B. Below, we shall not discuss the general case, but explain the basic ideas in the context of our simple square-barrier model. We refer the reader to the literature [629] for a more general discussion.

As discussed above, the steady-state probability profile at the top of the barrier is a linear function of the reaction coordinate. Hence, if we set up a perturbation that has this shape, rather than a step function, we would eliminate the problem of the slow, diffusive approach to the steady-state crossing rate. Let us therefore replace the θ-function perturbation by a function g(q) chosen such that g(q) = θ(q∗ − q) outside the barrier region, while inside the barrier region² g(q) = 1/2 − q/ω. The change in the equilibrium concentration profile due to this perturbation is

$$\Delta\rho(q) = -e^{-\beta U}\rho_{eq}\,\frac{q\beta}{\omega}.
\qquad (15.3.18)$$

But, with the identification δ = β, this is precisely the (linear) concentration profile that corresponds to the steady state. Hence, with this perturbation, we have suppressed the initial transient. However, if we still use a θ-function to detect whether the system is in state B, the numerical results will still be noisy. So the second step is to replace the "detector" function for state B by 1 − g(q). Note that outside the barrier region g(q) = θ(q∗ − q). Hence, replacing θ with g makes a negligible difference in our estimate of the concentration of B.

Let us next consider the effect of this choice of the perturbation g on the statistical accuracy for the transmission coefficient κ. We start from Eq. (15.1.16) for the crossing rate

$$k_{A\to B} = \int_{0}^{\infty} dt\,
\frac{\left\langle \dot{q}(0)\left(\partial_{q} g_{B}(0)\right)\dot{q}(t)\left(\partial_{q} g_{B}(t)\right)\right\rangle}{\left\langle c_{A}\right\rangle}.$$

² Note that a perturbation that is everywhere constant does not change the equilibrium distribution. Hence, to compute the change in the concentration profile, we can focus on g(q) = 1/2 − q/ω.


Now, gB = 1 − gA = 1/2 + q/ω inside the barrier, and zero elsewhere. Inside the barrier region, we have at all times

$$\partial_{q} g_{B} = \frac{1}{\omega},$$

and hence

$$k_{A\to B} = \frac{1}{\omega^{2}} \int_{0}^{\infty} dt\,
\frac{\left\langle \dot{q}(0)\,\dot{q}(t)\right\rangle^{*}}{\left\langle c_{A}\right\rangle},$$

where the asterisk indicates the condition that both q(0) and q(t) should be within the barrier region. If the velocity correlations decay on a time scale that is much shorter than the time it takes to diffuse across the barrier, then we can write

$$\left\langle \dot{q}(0)\,\dot{q}(t)\right\rangle^{*}
\approx \left\langle \dot{q}(0)\,\dot{q}(t)\right\rangle\,\omega\, e^{-\beta U}\rho_{eq}.$$

The transition state theory expression for kA→B is

$$k_{A\to B}^{TST} = 0.5\left\langle |\dot{q}|\right\rangle\,
\frac{e^{-\beta U}\rho_{eq}}{\left\langle c_{A}\right\rangle}.$$

We then obtain the following expression for the transmission coefficient κ:

$$\kappa = \frac{2}{\omega\left\langle |\dot{q}|\right\rangle}
\int_{0}^{\infty} dt\, \left\langle \dot{q}(0)\,\dot{q}(t)\right\rangle.$$

Making use of the Green-Kubo relation

$$D = \int_{0}^{\infty} dt\, \left\langle \dot{q}(0)\,\dot{q}(t)\right\rangle,
\qquad (15.3.19)$$

we obtain

$$\kappa = \frac{2D}{\omega\left\langle |\dot{q}|\right\rangle}.$$

As D is of order O(⟨|q̇|⟩λ), where λ is the mean-free path, we immediately see that

$$\kappa \sim \frac{\lambda}{\omega};$$

i.e., the transmission coefficient is approximately equal to the ratio of the mean-free path to the barrier width. Next, we consider the statistical accuracy of our new estimate for κ,

$$\kappa_{est} = \frac{2}{\omega\left\langle |\dot{q}|\right\rangle n}
\sum_{i=1}^{n} \int_{0}^{t} dt'\, \left[\dot{q}(0)\,\dot{q}(t')\right]_{i},
\qquad (15.3.20)$$

where we must remember that in all of the n trajectories considered the system is initially at the top of the barrier. Following essentially the same reasoning that led to Eq. (15.3.14), we now get

$$\left\langle (\Delta\kappa_{est})^{2}\right\rangle
= \frac{4}{\omega^{2}\left\langle |\dot{q}|\right\rangle^{2} n}
\left[\int_{0}^{t} dt' \int_{0}^{t} dt''\,
\left\langle \dot{q}(0)\,\dot{q}(t')\,\dot{q}(0)\,\dot{q}(t'')\right\rangle
- \left(\int_{0}^{t} dt'\, \left\langle \dot{q}(0)\,\dot{q}(t')\right\rangle\right)^{2}\right].$$

If we assume, as before, that q̇ behaves as a Gaussian variable, then

$$\left\langle (\Delta\kappa_{est})^{2}\right\rangle
= \frac{4}{\omega^{2}\left\langle |\dot{q}|\right\rangle^{2} n}
\left[\left\langle \dot{q}^{2}\right\rangle t \int_{0}^{t} dt'\, \left\langle \dot{q}(0)\,\dot{q}(t')\right\rangle + D^{2}\right].
\qquad (15.3.21)$$

We consider the limit t → ∞. In that limit D ≪ ⟨q̇²⟩t, and hence

$$\left\langle (\Delta\kappa_{est})^{2}\right\rangle
\sim \frac{4}{\omega^{2}\left\langle |\dot{q}|\right\rangle^{2} n}\,
\left\langle \dot{q}^{2}\right\rangle D\, t.
\qquad (15.3.22)$$

The relative error in the computation of the transmission coefficient is now

$$\frac{\left\langle (\Delta\kappa_{est})^{2}\right\rangle^{1/2}}{\kappa}
\sim \sqrt{\frac{\left\langle \dot{q}^{2}\right\rangle t}{D\, n}}.
\qquad (15.3.23)$$

From the Green-Kubo relation Eq. (15.3.19) we see that the diffusion constant D is equal to ⟨q̇²⟩τc, where τc is the decay time for velocity fluctuations. Hence,

$$\frac{\left\langle (\Delta\kappa_{est})^{2}\right\rangle^{1/2}}{\kappa}
\sim \sqrt{\frac{t}{n\,\tau_{c}}}.
\qquad (15.3.24)$$

Typically, there is not much point in computing the correlation function ⟨q̇(0)q̇(t)⟩ for times much larger than τc. Hence, the relative error in κ is simply 1/√n. If we compare this expression for the statistical accuracy in κ with that obtained in the Bennett-Chandler scheme, Eq. (15.3.17),

$$\left[\frac{\left\langle (\Delta\kappa_{est})^{2}\right\rangle^{1/2}}{\kappa}\right]_{\text{Bennett-Chandler}}
= \frac{1}{\kappa\sqrt{n}},$$

we conclude that, by a judicious choice of the scheme to compute κ, we have decreased the statistical error, for a given number of trajectories, by a factor κ. This implies that the present scheme is also applicable in the diffusive regime where κ ≪ 1. Moreover, by suppressing the transient behavior, we have substantially reduced the time to compute a single barrier-crossing trajectory. The


additional gain due to the suppression of transients is of order

$$\frac{\tau_{\mathrm{diff}}}{\tau_{c}} = \frac{\omega^{2}}{D\,\tau_{c}}
= \left(\frac{\omega}{\lambda}\right)^{2} \approx \frac{1}{\kappa^{2}}.$$

Hence, the overall gain in speed is of order 1/κ⁴. Of course, the present analysis is based on a highly simplified example. The application of the present method to more realistic diffusive barrier-crossing problems is discussed in detail in ref. [629].
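The practical consequence of these estimates is easily illustrated with a few lines of arithmetic. The numbers below are purely illustrative: they simply evaluate Eqs. (15.3.17) and (15.3.24) (taking t ≈ τc) for an assumed transmission coefficient, and combine the two gain factors discussed above.

```python
# Illustrative comparison for an assumed kappa and number of trajectories n.
kappa, n = 0.01, 10_000

err_bennett_chandler = 1.0 / (kappa * n**0.5)   # Eq. (15.3.17): ~1, i.e., 100% error
err_linear_profile   = 1.0 / n**0.5             # Eq. (15.3.24) with t ~ tau_c: ~0.01

# Fewer trajectories are needed for the same accuracy (factor 1/kappa^2), and each
# trajectory is shorter by ~tau_diff/tau_c = 1/kappa^2, giving ~1/kappa^4 overall.
gain_trajectories = (err_bennett_chandler / err_linear_profile) ** 2
gain_transients = 1.0 / kappa**2
print(err_bennett_chandler, err_linear_profile, gain_trajectories * gain_transients)
```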

15.4 Path-sampling techniques

In the previous sections, we introduced the concept of reaction coordinates: functions of the coordinates of the system that measure the progress of a system from a "reactant" state A to a "product" state B. We use quotation marks to indicate that the transition between states A and B need not be a chemical reaction. Even though it may be easy to distinguish states A and B—for instance, one might be a liquid, the other a crystal—there is no such thing as the reaction coordinate: many functions can measure the progress from A to B and, as we have seen in Example 25, the result of a simulation should be independent of this choice of order parameter. But, from a computational point of view, some reaction coordinates are better than others: making a good choice often decides whether a given rate calculation is feasible or not. In less extreme cases, the Bennett-Chandler approach and related techniques may still work, but the transmission coefficient for the barrier crossing may become very small, making the computational estimate of the crossing rate expensive. For example, if a chemical reaction in a liquid involves the complex reorganization of the solvent, the bottleneck of the reaction may be very different from the bottleneck for the same reaction in the gas phase. In such cases, it is dangerous to rely on our intuition about the choice of the reaction coordinate. Clearly, it would be attractive to have computational techniques that make no a priori assumption about the reaction coordinate. This is where path-sampling techniques enter the game. Below, we discuss the ideas behind some of the methods. We stress that our discussion is necessarily incomplete. More comprehensive reviews can be found in refs. [34,634,635].

Before discussing path-sampling techniques to study the pathway and rate of rare processes, it is useful to ask the question: what do we want to know? It is not the individual pathways from reactant to product, nor is it necessarily the location of the transition state or its higher-dimensional analogs. Hummer [636], and in more detail, Vanden-Eijnden and co-workers (see e.g., [635,637]) have argued that the focus of our attention should be on the bundles of reactive paths. Together, these bundles carry the largest reactive flux, hopefully localized in one, or a few, "tubes" in configuration space, or in the space spanned by some suitably guessed generalized coordinates. Identifying the dominant microscopic


pathway of a rare event is interesting in its own right, even if we are not primarily interested in rate calculations. Some of the techniques that we discuss below can be viewed as special cases of Transition Path Theory (TPT) as formulated in refs. [635,637]. However, TPT is phrased assuming that the dynamics can be described by stochastic differential equations, such as the Langevin equation or its overdamped limit. Hence, Newtonian dynamics falls, strictly speaking, outside the scope of TPT. Also, in contrast to Forward-Flux Sampling (see section 15.5), the original implementation of TPT assumed that the stationary distribution of the system (Boltzmann or other) is known, which is usually not the case for driven systems (see, however, ref. [638]).

15.4.1 Transition-path sampling

Transition-Path Sampling (TPS) is a technique that was introduced by Chandler and co-workers [639–643], with the explicit aim of computing the rates of rare events without making a priori assumptions about the reaction coordinate. The development of the TPS approach was inspired by earlier work of Pratt [644], who introduced the concept of path sampling. Below we give a brief description of TPS. A more in-depth review can be found in refs. [645,646].

An important feature of the original version of TPS is that it was developed to compute the rate of rare events in systems evolving according to deterministic (e.g., Newtonian) dynamics. Although this may seem to be the simplest situation to consider, it is in many respects the hardest. As we shall see later, a bit of stochasticity makes path sampling easier.

To quantify the progress of the transition from state A to state B in an N-particle system, TPS defines a time correlation function C(t) that measures the conditional probability that a trajectory starting in A at time t = 0 has arrived in B at time t. We call such a trajectory a reactive path. The probability that a path is reactive is given by

$$C(t) = \frac{\left\langle h_{A}(x_{0})\, h_{B}(x_{t})\right\rangle}{\left\langle h_{A}\right\rangle}
\approx \left\langle h_{B}\right\rangle\left(1 - e^{-t/\tau_{R}}\right),
\qquad (15.4.1)$$

where xt denotes the phase-space coordinates {pN, rN} of the N particles at time t, and hA and hB are "oracle" functions that return a value 1 if the system is in state A (B) and return 0 otherwise:

$$h_{A,B}(x) = \begin{cases} 1 & \text{if } x \in A, B \\ 0 & \text{otherwise.} \end{cases}
\qquad (15.4.2)$$

Note that the functions hA,B are not reaction coordinates: they just allow us to specify if the system is in the initial (final) state of the transition. As before, we assume that transitions from A to B are rare events, meaning that τR ≫ τmicro, where τmicro is the typical duration of a successful crossing


from A to B. As in Eq. (15.1.6), the rate constant follows from the time derivative Ċ(t):

$$k(t) = \dot{C}(t),
\qquad (15.4.3)$$

which reaches a plateau value for times τmicro ≪ t ≪ τR. The correlation function C(t) defined in Eq. (15.4.1) differs slightly from the quantity introduced in Eq. (15.2.1), but both expressions lead to the same value of k(t).

15.4.1.1 Path ensemble

The crucial step in TPS and related techniques is the introduction of a path ensemble, which is an important extension of the usual ensemble of points in phase space. The path ensemble allows us to define averages over trajectories: just as we can relate a free energy to the partition function of a domain in phase space, so we will be able to associate a "free energy" with the ensemble of all paths that connect two regions in phase space in a given time t. In introducing the path ensemble, we make no assumptions about the nature of the dynamics (Newtonian, Langevin, Brownian), although TPS was designed for deterministic, Newtonian dynamics.

To stay close to the notation used in much of the literature on path sampling, we will use the symbol x(T) to denote a trajectory of the system in a time interval T. In the case of Newtonian dynamics, x is the phase-space coordinate {pN, rN}, but for Brownian dynamics, it would just be the configuration-space coordinate {rN}. In simulations, time is discretized with a time step Δt. However, in theoretical analyses, we can consider the limit Δt → 0. By discretizing time, we have created discrete time slices in x-space. A trajectory is then a sequence of x-values for successive time slices. The dynamics of the system determines the probability of a given trajectory. For instance, if we start at x0 at time t0, then the probability to follow a particular trajectory {x0, xΔt, x2Δt, · · ·, xT}, which is determined by the (Boltzmann) probability N(x0) of finding the system at x0 at time t = 0 multiplied by the probabilities of the successive steps from x0 to xT via xΔt, x2Δt, x3Δt, and so on, is given by

$$\mathcal{P}\left[x(T)\right] = \mathcal{N}(x_{0})
\prod_{i=0}^{(T/\Delta t)-1} \pi\!\left(x_{i\Delta t} \to x_{(i+1)\Delta t}\right),
\qquad (15.4.4)$$

where π(xiΔt → x(i+1)Δt) denotes the probability that a system that is at xiΔt will arrive after one time step at x(i+1)Δt. For deterministic (Newtonian) dynamics, this notation is a bit of an overkill, as the path is fixed once we have specified x0. However, for stochastic dynamics, π(x → x′) is a (normalized) distribution. With this notation, the ensemble averages in Eq. (15.4.1) can be written as an integration over the initial conditions weighted with the equilibrium distribution


N(x0). We first note that we can write Eq. (15.4.1) as

$$C(t) = \frac{\int dx_{0}\, \mathcal{N}(x_{0})\, h_{A}(x_{0})\, h_{B}(x_{t})}{\int dx_{0}\, \mathcal{N}(x_{0})\, h_{A}(x_{0})}.
\qquad (15.4.5)$$

Using Eq. (15.4.4), we can write Eq. (15.4.5) as an average over the path probability P[x(t)]:

$$C(t) = \frac{\int \mathcal{D}x(t)\, h_{A}(x_{0})\, \mathcal{P}[x(t)]\, h_{B}(x_{t})}{\int \mathcal{D}x(t)\, h_{A}(x_{0})\, \mathcal{P}[x(t)]},
\qquad (15.4.6)$$

where ∫Dx(t) denotes a path integral, i.e., an integral over all paths between x0 and xt [647], and ∫Dx(t) ≡ ∫∏i dxiΔt. Note that the form of Eq. (15.4.6) resembles the ratio of two partition functions:

$$C(t) = \left\langle h_{B}(t)\right\rangle_{A,t} = \frac{Z_{AB}(t)}{Z_{A}(t)},
\qquad (15.4.7)$$

where

$$Z_{A} = \int \mathcal{D}x(t)\, h_{A}(x_{0})\, \mathcal{P}[x(t)]$$

is the partition function of all paths of length t that start in A at t = 0, while

$$Z_{AB} = \int \mathcal{D}x(t)\, h_{A}(x_{0})\, \mathcal{P}[x(t)]\, h_{B}(x_{t})$$

is the partition function of all paths of length t that start in A at t = 0 and end up in B at time t. The fact that C(t) can be expressed as the ratio of path-ensemble partition functions is exploited in TPS, because it allows us to evaluate Eq. (15.4.6) using the free-energy techniques discussed in Chapter 8. Below, we sketch how the analogy between free-energy calculations and the evaluation of partition functions in the path ensemble can be used to compute rates of rare events. Only after that will we discuss how Monte Carlo sampling can be used to generate trial reactive paths, starting from existing reactive paths between A and B. The only thing we need to know for now is that path-sampling algorithms attempt trial moves that generate trajectories that are at least locally close to an existing reactive trajectory. A trial trajectory is only accepted if it is also reactive (i.e., linking A and B in time t), and has a high-enough Boltzmann weight. Of course, simulating an entire trajectory to see if it ends up in B and originates in A makes path sampling expensive: the larger t, the more expensive the test. This preliminary information is useful to appreciate the trick developed in ref. [642] to reduce the cost.


15.4.1.2 Computing rates

To compute reaction rates with TPS, we start from the expression for the rate k(t):

$$k(t) = \dot{C}(t) = \frac{\left\langle h_{A}(x_{0})\, \dot{h}_{B}(x_{t})\right\rangle}{\left\langle h_{A}\right\rangle}.
\qquad (15.4.8)$$

if

B λB min < λ(x) < λmax ,

(15.4.9)

and similarly for A. An example may help: we could consider a particle crossing a one-dimensional potential-energy barrier. In that case, λ could be simply the coordinate that measures the progress of the particles across the barrier region. The above example is a bit too simple, because the coordinate of the particle not only distinguishes A from B, but it is also a good reaction coordinate. In general, however, order parameters that are good enough to distinguish between A and B, may not be useful reaction coordinates. But a good reaction coordinate is always a good order parameter. The next step is to link the computation of C(t) and umbrella sampling (section 8.6.6). In umbrella sampling, we can force a system to explore an orderparameter range where the free energy is large (probability is low), by sampling the probability distribution in a sequence of partially overlapping windows, where the windows are chosen narrow enough that the probability distribution can be sampled over the entire window, without further biasing. By matching

Rare events Chapter | 15 539

the unnormalized probability distributions within these windows, we can reconstruct the overall probability distribution along the path between the states on either side of the barrier (see Example 11 in section 8.6.11).

In path sampling, we can do something similar to compute the probability that a path that starts in A at t = 0 ends up in a window with value λ at time t. Note that here we do not yet assume that λ is in the range corresponding to B. Let us define P(λ, t) as the probability density of finding the system with order parameter λ = λ(xt) at time t, starting from A at t = 0:

$$P(\lambda, t) \equiv \frac{\int dx_{0}\, \mathcal{N}(x_{0})\, h_{A}(x_{0})\, \delta\!\left[\lambda - \lambda(x_{t})\right]}{\int dx_{0}\, \mathcal{N}(x_{0})\, h_{A}(x_{0})}.
\qquad (15.4.10)$$

Unless t is very large, most paths of length t go from A to A. However, a fraction will end up with a λ-value adjacent to, but outside, the range of A: we will label this window as window 1. Sampling many paths that start from equilibrium configurations in A, we can populate window 1. From this simulation, we can compute the fraction of all paths that start in A and end up in window 1. However, as we have made successive windows overlap, some of these paths also end in the next window (window 2). Now we use the path-sampling techniques of section 15.4.2 to use the paths that end up in window 2 in time t to generate many more paths that end up in that window: this condition means that we reject all trial paths that end up outside window 2. But, of course, window 2 partially overlaps with window 3. And so, little by little, we construct paths that go in time t from A all the way to the range of λ-values that defines B. We note that this procedure generates reactive paths from A to B. However, they need not be representative: for instance, if the time t is not in the "plateau" regime.

In principle, we can use the window approach to compute the correlation function C(t) for various values of t, check if it is linear in t, and compute Ċ(t) in the linear regime. However, this procedure would be rather time-consuming. Dellago et al. [642] proposed a more efficient method to compute C(t). To derive the approach of ref. [642], we note that C(t) can be rewritten as (see Eq. (15.4.7)):

$$\begin{aligned}
C(t) = \left\langle h_{B}(t)\right\rangle_{A,t}
&= \frac{\left\langle h_{A}(x_{0})\, h_{B}(x_{t})\right\rangle}{\left\langle h_{A}(x_{0})\right\rangle} \\
&= \frac{\left\langle h_{A}(x_{0})\, h_{B}(x_{t})\right\rangle}{\left\langle h_{A}(x_{0})\, h_{B}(x_{t'})\right\rangle}
\times \frac{\left\langle h_{A}(x_{0})\, h_{B}(x_{t'})\right\rangle}{\left\langle h_{A}(x_{0})\right\rangle} \\
&= \frac{\left\langle h_{B}(t)\right\rangle_{A,t}}{\left\langle h_{B}(t')\right\rangle_{A,t}}
\times C(t').
\end{aligned}
\qquad (15.4.11)$$

In words: C(t) can be expressed as the product of the probability of finding a particle in B at time t′, multiplied by a correction factor equal to the ratio of the number of paths that end in B at times t and t′. The calculation of this correction would seem to be as complex as the calculation of C(t). However, we can cast this term in a more convenient form. To this end, we define a slightly different


path ensemble, namely the ensemble of paths that have visited B at least once in the time interval t ∈ [0; T]. An ensemble average in this ensemble can be written as

$$\left\langle h_{B}(x_{t})\right\rangle_{A, H_{B}(T)}
= \frac{\int dx_{0}\, \mathcal{N}(x_{0})\, h_{A}(x_{0})\, H_{B}(x_{T})\, h_{B}(x_{t})}{\int dx_{0}\, \mathcal{N}(x_{0})\, h_{A}(x_{0})\, H_{B}(x_{T})},
\qquad (15.4.12)$$

where HB(T) is a characteristic function that has the value 1 if, somewhere in the interval [0, T], the path has visited B; otherwise this function is 0. The difference with the previous ensemble is that a path does not have to end in B. We can use the fact that for all trajectories with length t ∈ [0; T] we have hB(xt) = hB(xt)HB(x0, xT), because if hB(xt) = 1, then HB(x0, xT) must be unity too, and if hB(xt) = 0, then both sides of the equation are zero. If we substitute this relation into Eq. (15.4.11), we obtain

$$\begin{aligned}
C(t) &= \frac{\left\langle h_{A}(x_{0})\, h_{B}(x_{t})\, H_{B}(x_{T})\right\rangle}{\left\langle h_{A}(x_{0})\, H_{B}(x_{T})\right\rangle}
\times \frac{\left\langle h_{A}(x_{0})\, H_{B}(x_{T})\right\rangle}{\left\langle h_{A}(x_{0})\, h_{B}(x_{t'})\, H_{B}(x_{T})\right\rangle}
\times C(t') \\
&= \frac{\left\langle h_{B}(x_{t})\right\rangle_{A, H_{B}(T)}}{\left\langle h_{B}(x_{t'})\right\rangle_{A, H_{B}(T)}}
\times C(t').
\end{aligned}
\qquad (15.4.13)$$

In this equation, we have rewritten the correction factor in terms of two ensemble averages. The nice feature of this ensemble average is that both averages can be obtained from a single path-sampling simulation of paths with length T. For the rate constant k(t) we then have

$$k(t) = \frac{dC(t)}{dt}
= C(t')\,
\frac{\left\langle \dot{h}_{B}(x_{t})\right\rangle_{A, H_{B}(T)}}{\left\langle h_{B}(x_{t'})\right\rangle_{A, H_{B}(T)}}
\equiv \eta(t, t')\, C(t').
\qquad (15.4.14)$$

Rare events Chapter | 15 541

who tried acting but in the end, did not quite get there. The brute force approach would be to look at a large number of life histories, and determine the rate at which people win Oscars at the age of fifty (say) —this is our time t. Of course, the numbers are extremely small and the paths are long, making the umbrella sampling expensive. Hence, this approach would not be efficient. Now suppose that, instead, we consider the biographies of people who have won an Oscar within a period of, say, ninety years since their birth (this is our time T ). Some of them may still be alive after ninety years and populate state B, others may have moved on. We can then find how many of them received the Oscar at age thirty (time t ). That number tells us little about the probability to have an Oscar at age thirty, if we do not know how many have failed to get there. So next, we compute the probability C(30) that an average person is in possession of an Oscar at age thirty by using the equivalent of the umbrella sampling on life histories (path sampling) of people who try acting (in front of the mirror (=window 1), at a birthday party (window 2), at school (3), etc., all the way up to playing an Oscar-winning role. Finally, from our collection of biographies of Oscar winners, we compute the ratio R between the number of people who have received the Oscar by age fifty and those who already have it by age thirty. We then multiply R with C(30) to obtain C(50). To compute the rate, we must differentiate C(t). To this end, we check our collection of biographies to find out how many people have won the Oscar between the ages of 50 and 51. So, why this complicated route? The main reason is that we save much time by restricting ourselves to biographies of Oscar winners, except in the case of people who win an Oscar at the age of thirty: there we use umbrella sampling, but we use only relatively short life histories. It is not important that at age thirty we are not yet in the steady-state regime (if such a thing exists for Oscars): Eq. (15.4.13) is valid for any initial time. An illustration of the above procedure to compute crossing rates is discussed in Example 26. Example 26 (Single particle in a two-dimensional potential well). To illustrate the path sampling method, consider a system containing a single particle in the following simple two-dimensional potential [641]:   2  2  2 V (x, y) = 4 1 − x 2 − y 2 + 2 x 2 − 2 + (x + y)2 − 1   2 + (x − y)2 − 1 − 2 /6. (15.4.15) Note that V (x, y) = V (−x, y) = V (x, −y). Fig. 15.5 shows that this potential consists of two stable regions around the points (−1, 0), which we call A, and (1, 0), which we call region B. To be more specific, all points within a distance of 0.7 from (−1, 0) or (1, 0) are defined to be in region A or B, respectively. At a temperature of T = 0.1 transitions from A to B are rare events.

542 PART | IV Advanced techniques

FIGURE 15.5 Contour plot of the function V(x, y) defined by Eq. (15.4.15). The two minima are at (−1, 0), A, and (1, 0), B. These minima are separated by a potential-energy barrier.

To compute the rate of transitions from A to B we used path-ensemble simulations. The initial distribution N(x0) was chosen to be canonical, i.e., N(x0) ∝ exp[−βH(x0)]. A trajectory was generated using standard Molecular Dynamics simulations (see Chapter 4). The equations of motion were integrated using the velocity-Verlet algorithm with a time step of 0.002.
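A minimal sketch of the ingredients of this example is given below: the potential of Eq. (15.4.15), a numerical force, a velocity-Verlet integrator with the time step quoted above, and the oracle function hB. The Case Study code in the SI is the authoritative implementation; here the force is obtained by finite differences purely for brevity, and all bookkeeping for the path ensemble is omitted.

```python
import numpy as np

def potential(x, y):
    """Two-dimensional potential of Eq. (15.4.15)."""
    return (4.0 * (1.0 - x**2 - y**2)**2 + 2.0 * (x**2 - 2.0)**2
            + ((x + y)**2 - 1.0)**2 + ((x - y)**2 - 1.0)**2 - 2.0) / 6.0

def force(x, y, h=1e-6):
    """Central-difference gradient; an analytical force would be used in production."""
    fx = -(potential(x + h, y) - potential(x - h, y)) / (2.0 * h)
    fy = -(potential(x, y + h) - potential(x, y - h)) / (2.0 * h)
    return fx, fy

def velocity_verlet(x, y, vx, vy, n_steps, dt=0.002, mass=1.0):
    """Generate a trajectory of n_steps*dt with the velocity-Verlet algorithm."""
    traj = np.empty((n_steps + 1, 2))
    traj[0] = (x, y)
    fx, fy = force(x, y)
    for i in range(n_steps):
        vx += 0.5 * dt * fx / mass; vy += 0.5 * dt * fy / mass
        x += dt * vx;               y += dt * vy
        fx, fy = force(x, y)
        vx += 0.5 * dt * fx / mass; vy += 0.5 * dt * fy / mass
        traj[i + 1] = (x, y)
    return traj, (vx, vy)

def in_B(x, y, r=0.7):
    """h_B: 1 if the particle is within a distance 0.7 of (1, 0), else 0."""
    return (x - 1.0)**2 + y**2 < r**2
```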

FIGURE 15.6 hB (t) (left) and η (t) (right) as a function of time for various values of the total path length T .

The first step was the calculation of the coefficient η(t, t′). This involves the computation of the path-ensemble averages ⟨hB(xt)⟩A,HB(T) for various times t. The result of such a simulation is shown in Fig. 15.6 for T = 4.0 and T = 3.6. An important question is whether the time T is long enough. Since we are interested in the plateau of k(t), the function ⟨hB(xt)⟩A,HB(T) must have become a straight line for large values of t. If this function does not show a straight line, the value of T was probably too short, the process is not a rare event, or the process cannot be described by a single hopping rate. The consistency of the simulations can be tested by comparing the results with a


simulation using a shorter (or longer, but this is more expensive) T. Fig. 15.6 shows that the results of the two simulations are consistent.

The next step is the calculation of the correlation function C(t). For the calculation of P(λ, t), we have defined the order parameter λ as the distance from point B:

$$\lambda = 1 - \frac{|\mathbf{r} - \mathbf{r}_{B}|}{|\mathbf{r}_{A} - \mathbf{r}_{B}|},
\qquad (15.4.16)$$

in which rB = (1, 0). In this way, the region B is defined by 0.65 < λ ≤ 1 and the whole phase space is represented by (−∞, 1]. In Fig. 15.7 (left), we have plotted P(λ, i, t = 3.0) as a function of λ for the different slices i. Recombining the slices leads to Fig. 15.7 (right). The value of C(t = 3.0) can be obtained by integrating over region B:

$$C(t) = \int_{B} d\lambda\, P(\lambda, t).
\qquad (15.4.17)$$

Combining the results gives for the total crossing rate

$$k = \eta(t)\, C(t).
\qquad (15.4.18)$$

Using t = 3.0 leads to η(3.0) = 1.94, C(3.0) = 4.0 × 10⁻⁶, and k = 8.0 × 10⁻⁶.

FIGURE 15.7 (Left) P(λ, i, t = 3.0) for all slices i. (Right) P(λ, t = 3.0) when all slices i are combined. The units on the y axis are such that $\int_{-\infty}^{1} d\lambda\, P(\lambda, t) = 1$.

For more details, see SI (Case Study 24).

15.4.2 Path sampling Monte Carlo

In principle, Transition-Path Sampling (TPS) only requires information on the reactant and product state: the reaction pathway is a result of the simulation, not an input. Analyzing the ensemble of reactive paths may therefore help us identify a suitable reaction coordinate. Of course, the approach assumes that path sampling really probes the equilibrium path ensemble, which should be independent of how we created the first reactive paths.


To start a path-sampling simulation, we must first identify some path or paths that connect the reactant and product. Creating the first barrier-crossing paths in path sampling is like creating an initial configuration in a normal simulation, which may be valid but highly atypical: the first paths should connect A and B in time t, but they may be atypical. Typical paths are then created by path sampling, during the subsequent equilibration stage. The fact that we can start with atypical paths makes it attractive to generate initial paths using a procedure that is cheap, rather than realistic. One way is to generate a barrier-crossing path at an unrealistically high temperature. However, there is no general "best" recipe to generate the first paths. Of course, it is also possible to use umbrella sampling to generate the first paths that go from A to B. This procedure is expensive, but it is more likely to generate typical reactive paths.

Up to this point, we have not described the algorithm to sample reactive paths that start in A and, after a specified time, end up in B or, in the case of umbrella sampling, end up in the i-th window along the coordinate λ that is used to distinguish A and B. To emphasize the relation with umbrella sampling, we define "window potentials" Wi(xt) that allow us to obtain good statistics on paths that start in A and end at time t in window i between λmin[i] and λmax[i]:

$$W_{i}(x_{t}) = \begin{cases}
\infty & \lambda(x_{t}) < \lambda^{\min}[i] \\
0 & \lambda^{\min}[i] < \lambda(x_{t}) < \lambda^{\max}[i] \\
\infty & \lambda(x_{t}) > \lambda^{\max}[i].
\end{cases}
\qquad (15.4.19)$$

It is necessary that neighboring windows overlap.

To carry out path sampling, we use Monte Carlo trial moves that change an entire trajectory, rather than just a point in phase space. As with earlier MC algorithms, we use the criterion of detailed balance to construct valid path-sampling schemes. In the present section, we focus on systems that evolve in time according to deterministic, Newtonian dynamics. However, transition path sampling can also be used in cases where the time evolution is more stochastic, e.g., if it is described by Brownian or by Langevin dynamics. Different dynamics require different path-sampling trial moves. We refer the reader to refs. [635,645] for a description of path-sampling moves in cases where the dynamics is stochastic.

Let us consider a path-sampling trial move. The old path is denoted by o and the new path by n. We distinguish two path ensembles: one ensemble comprises the collection of paths that start in A and are of length t. This ensemble is used to compute the correlation function C(t). The window potential (15.4.19) imposes the constraint that at time t the order parameter λ(xt) should lie within a pre-specified window. The path probability distribution for this ensemble is

$$\mathcal{N}(A, W) \propto \mathcal{N}(x_{0})\, h_{A}(x_{0})\, \exp\!\left[-W(x_{t})\right].
\qquad (15.4.20)$$


The second path ensemble, defined by Eq. (15.4.12), is the collection of paths of length T that start in A and that visit B at some time in the interval t ∈ [0; T]. This ensemble is used to sample the correction factor η(t, t′). The probability distribution for this ensemble is

$$\mathcal{N}(A, H_{B}) \propto \mathcal{N}(x_{0})\, h_{A}(x_{0})\, H_{B}(x_{T}).
\qquad (15.4.21)$$

The ratio of the acceptance probabilities for the forward and reverse path-sampling trial moves follows from the condition of detailed balance (see section 6.1):

$$\frac{\mathrm{acc}(o \to n)}{\mathrm{acc}(n \to o)}
= \frac{\mathcal{N}(n)\,\alpha(n \to o)}{\mathcal{N}(o)\,\alpha(o \to n)},
\qquad (15.4.22)$$

where α(o → n) is the a priori probability of generating path n from path o, and N(n) the desired probability distribution, i.e., Eq. (15.4.20) or (15.4.21). In the Monte Carlo moves that are discussed below, we consider trial moves for which the a priori probability of generating path n from o is equal to the probability of generating o from n. When we impose this equal a priori probability for forward and reverse path-sampling moves, the acceptance rules reduce to

$$\frac{\mathrm{acc}(o \to n)}{\mathrm{acc}(n \to o)} = \frac{\mathcal{N}(n)}{\mathcal{N}(o)}.
\qquad (15.4.23)$$

Let us now consider the most important path-sampling moves. At first sight, it might seem simplest to generate a new path by making a small change in the initial condition x0, then perform a Molecular Dynamics simulation of length T or t, and use the acceptance rule to accept or reject this new path. However, because of the extreme sensitivity of MD trajectories to small changes in the initial conditions, this method will almost certainly fail for all but the shortest paths, or more precisely, for the windows nearest to A. A small change in the initial conditions of a path that visits B will almost certainly result in a non-reactive path.

Shooting moves

A better strategy to generate a new path is to make a change in x at some time t′, somewhere in the middle of the trajectory, and integrate forward in time to compute xt and backward in time to compute x0. Since both A and B are stable states, these points will "attract" paths, and therefore such shooting moves in the middle of a path are more likely to result in a path n that is also reactive. Changing x(t′) will change the total energy of the trajectory, which, in Newtonian dynamics, is conserved. Hence, we can split the acceptance of a path-sampling move into a first "cheap" check on the change of the Boltzmann factor and, only if that step is accepted, carry out the forward and reverse integrations to verify that the new path still has a non-zero weight (Eq. (15.4.20) or Eq. (15.4.21)). Usually, we consider path "shooting" moves where the Jacobian


for the change from xo to xn equals one. For cases where this Jacobian is not equal to one, it should be included in the acceptance criterion (see [645]).

Shifting moves

We have seen that it is unlikely to generate a reactive path by making a small change in the initial positions or velocities. However, we can also change a trajectory by shifting the initial and final times t0 and t by the same amount Δt, where Δt can be positive or negative. For such a shifting move, we do not have to recompute the complete path, but only the increment, i.e., we should perform a Molecular Dynamics simulation from t to t + Δt if Δt is positive, and from t0 to t0 + Δt if Δt is negative. Note, however, that shifting moves are, in practice, not ergodic. To improve the sampling, shifting moves should therefore be combined with shooting moves. Still, shifting moves are useful to improve the statistics.

As the above discussion of shooting and shifting moves makes clear, such trial moves tend to generate paths that can be reached from the original path without crossing a high free-energy barrier in path space. Yet, it is in general not guaranteed that a sequence of small "diffusive" steps in path space will be able to sample all relevant reactive trajectories, in particular when the most important reactive trajectories follow a path that is qualitatively different from the path that was used to initialize the path sampling.
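To make the shooting move concrete, here is a minimal sketch for Newtonian dynamics. The propagator run_md, the energy function, and the oracle functions in_A and in_B are hypothetical placeholders; the velocity perturbation of width dv and the cheap Boltzmann-factor pre-acceptance follow the logic described above, and the trial path is kept only if it still connects A to B.

```python
import numpy as np

def shooting_move(path, energy, run_md, beta, dv, in_A, in_B, rng):
    """One TPS shooting move for Newtonian dynamics (minimal sketch).

    path             : list of phase-space points (x, v), one per stored time slice
    energy(x, v)     : total energy of a phase-space point
    run_md(x, v, n)  : propagates (x, v) for n time slices and returns the visited points
    in_A(x), in_B(x) : position-based oracle functions h_A and h_B
    """
    n = len(path)
    i = rng.integers(1, n - 1)                     # pick an interior time slice
    x, v = path[i]
    v_new = v + dv * rng.normal(size=np.shape(v))  # perturb the velocities

    # cheap pre-acceptance on the change of the Boltzmann factor
    if rng.random() > np.exp(-beta * (energy(x, v_new) - energy(x, v))):
        return path, False                         # reject: keep the old path

    forward = run_md(x, v_new, n - 1 - i)          # shoot forward to the final slice
    backward = run_md(x, -v_new, i)                # shoot backward (reversed velocities)

    # the trial path is reactive only if it still starts in A and ends in B
    x_first, _ = backward[-1]
    x_last, _ = forward[-1]
    if in_A(x_first) and in_B(x_last):
        # velocities on the backward segment are negated to express forward time
        new_path = [(xb, -vb) for xb, vb in backward[::-1]] + [(x, v_new)] + list(forward)
        return new_path, True
    return path, False
```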

15.4.3 Beyond transition-path sampling

Among the techniques to compute the rate of rare events, TPS occupies a special position, as it avoids some of the assumptions that earlier techniques made. For instance, unlike the Bennett-Chandler method, it does not assume that all coordinates orthogonal to the reaction coordinate equilibrate quickly. Moreover, TPS does not assume any knowledge of a reaction coordinate characterizing the progress of the transition: rather, TPS relies on the presumed "ergodicity" of path sampling to find the most important reactive trajectories. Another important feature of TPS is that it provides an unbiased estimate of the dominant reaction pathway: often, it is more important to know how a reaction proceeds than how fast. However, in its original form, TPS is computationally expensive. The high cost is partly due to the fact that paths must be followed during a time t that is long enough to ensure that C(t), the probability that the system has crossed from A to B, scales linearly with t. Inevitably, this means that the transition paths spend a considerable fraction of the time in the basins of A and B, which is not wrong but uninformative.

15.4.4 Transition-interface sampling

To reduce the high cost of TPS, van Erp et al. introduced the so-called Transition Interface Sampling (TIS) method [648], which, like TPS, is a path-sampling method, but based on a rather different philosophy. First of all, unlike TPS,


TIS introduces an order parameter Q that measures the progress of the system from A to B. The set of points that correspond to a constant value of Q defines a hyper-surface in configuration space.³ Jumping ahead a little, we note that different terms have been introduced to denote these constant-Q hyper-surfaces. Faradjian and Elber [649], and also Haji-Akbari [650], use the term "milestones", whereas Van Erp, Moroni, and Bolhuis [648], and also Allen et al. [651,652], use the term "interfaces". In what follows, we shall use the term "interfaces" as it is more reminiscent of the hyper-surfaces in configuration space, noting that these interfaces are, of course, not physical interfaces that separate two phases.

The key innovation of TIS is that it separates the calculation of a transition rate into two factors: one is the rate at which trajectories arrive at an interface (a Q-hyper-surface) close to A, and the second measures the probability that a trajectory that has crossed this first interface will continue all the way to B rather than return to A. In practice, this second probability is decomposed into the product of probabilities that trajectories that have crossed interface i will proceed to interface i + 1, rather than return to A. Note that this factorization of the transition rate is very different from the approach used in TPS. Computing the rate at which trajectories cross the first transition interface is straightforward: provided that this interface is not too far removed from A, the rate at which trajectories arrive at this interface from A can be computed in a normal MD simulation. In the second step, the probability that a trajectory that has reached interface i will go on to i + 1 is computed using path-sampling techniques similar to TPS. An important advantage of TIS is that it focuses on trajectories that are in the process of crossing from A to B: no time is wasted computing the time evolution of a trajectory before it has left A or after it has arrived in B. In addition, TIS uses several other tricks that enhance its efficiency. For these, and other details, we refer the reader to ref. [648]. One important feature that TIS shares with TPS is that, during the path sampling, the acceptance of a trial move in trajectory space depends on the stationary distribution (e.g., the Boltzmann distribution) with which paths are generated.

³ In general, Q can also depend on the momenta, in which case a constant value of Q defines a hyper-surface in phase space.

15.5 Forward-flux sampling

In the spirit of Transition Interface Sampling (TIS), Forward-Flux Sampling (FFS) [651–654] also aims to speed up the calculation of the transition rate from A to B by decomposing the barrier-crossing process into a sequence of crossings of intermediate interfaces defined by monotonically increasing Q-values. In FFS, a simulation is first run in the “reactant” state A, and configurations are collected that correspond to crossings of the first interface in the direction of increasing Q. This collection of configurations is used to initiate new simulations that are continued either until the next interface is reached, or until the trajectory returns to A. This results in a new collection of configurations at the next interface that can be used to initiate simulations to the subsequent interface (or back to A), etc. The crossing rate is then computed (as in TIS) as the product of the rate at which trajectories first arrive at the interface nearest to A, and the probability that such a trajectory will then continue to reach B without first returning to A. As in TIS, this probability is computed as the product of probabilities that a trajectory will proceed from interface i to i + 1 without returning to A, for all interfaces between A and B. Like TIS [648,655] and the “milestoning” approach of ref. [649], FFS does require the concept of a reaction coordinate, although the coordinate Q need not reproduce an actual pathway.4 FFS differs from other path-sampling techniques discussed before (and from TPT) in that it does not require prior knowledge of the phase-space density. Hence, FFS dispenses with the need to know the stationary distribution from which trajectories are generated. In fact, the earliest application of FFS considers rare switching events in driven biochemical networks: Boltzmann sampling would not even be possible in such systems. An advantage of FFS is that, unlike TIS (or TPS), it does not require propagating paths backwards in time to check whether they did indeed originate in A. The advantages of FFS mentioned above come at a cost: FFS will only work if the time evolution of the system has a stochastic component – it will not work for purely Newtonian dynamics. The reason why FFS needs stochasticity is simple: it makes it possible to generate new trajectories that are identical to a given parent trajectory from state A up to a given interface i, and different beyond that point. As a consequence, there is no need to back-propagate trajectories in time in order to check if they originated in A: they always do. This feature makes FFS particularly simple to implement or add to an existing code, and that is, in itself, a great advantage (see [34], p. 533).

4 We note that the milestoning approach of ref. [649] also considers the propagation of trajectories from one interface to the next, but it resembles the Bennett-Chandler method in that it assumes that the distribution of phase space points at the interfaces is equal to the stationary distribution of states at any given interface. For more details, see ref. [649].
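To make the staged structure of FFS concrete, here is a minimal Python sketch of the interface-to-interface stage. It is an illustration only, not code from this book: `sample_crossing_configs` is assumed to have produced `configs_at_first`, and `propagate_until` is a hypothetical placeholder for the stochastic dynamics, returning the configuration at the next interface or `None` if the trajectory falls back into A.

```python
import random

def ffs_rate(flux_A1, interfaces, configs_at_first, propagate_until, n_trials=100):
    """Minimal Forward-Flux Sampling sketch.

    flux_A1         : rate at which trajectories from A cross the first interface
    interfaces      : list of order-parameter values Q_1 < Q_2 < ... < Q_B
    configs_at_first: configurations collected at the first interface
    propagate_until : runs stochastic dynamics from a configuration until it
                      either reaches the given interface (returns the new
                      configuration) or returns to A (returns None)
    """
    rate = flux_A1
    configs = list(configs_at_first)
    # Work outwards, interface by interface, towards B.
    for q_next in interfaces[1:]:
        successes = []
        for _ in range(n_trials):
            start = random.choice(configs)          # pick a stored crossing point
            end = propagate_until(start, q_next)    # reach q_next, or fall back to A
            if end is not None:
                successes.append(end)
        p_forward = len(successes) / n_trials       # estimate of P(i+1 | i)
        rate *= p_forward
        if p_forward == 0.0:
            return 0.0                              # no trajectory made it further
        configs = successes                         # seeds for the next stage
    return rate
```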

15.5.1 Jumpy forward-flux sampling

The original Forward-Flux Sampling approach was based on the assumption that if a system makes a transition characterized by the change in an order parameter Q from QA to QB, then all intermediate values of the order parameter must also be visited on the way. It is this assumption that made it possible to decompose a transition into sub-steps: first from A to the first interface on the way from A to B, characterized by the order-parameter value Q1, then to Q2, and so on, until QB. However, this assumption is not always justified: sometimes, an order parameter may skip over one or more interfaces between A and B. An example is nucleation and growth by cluster aggregation: if our reaction coordinate is the number of particles in a nucleus, then the attachment of a multi-particle cluster to a growing nucleus will result in a discontinuous jump in the number of particles in that nucleus. Such events are not captured by the original FFS scheme. To address this issue, Haji-Akbari [650,656] has introduced a “jumpy” FFS scheme that allows for the skipping of interfaces. In this algorithm, one should keep track of all distinct sequences of visited interfaces on the way from A to B. If we have chosen M distinct interfaces, then there are 2^M − 1 possible sequences of visited interfaces between A and B, because every interface may, or may not, be visited: however, if the trajectory does not visit a single interface, then it has failed. In practice, the number of sequences sampled in a simulation is much less than 2^M − 1. However, apart from that extension—and an improved method for choosing the location of the interfaces—the scheme of ref. [650] has the same structure as FFS. As jumpy-FFS includes trajectories that would be missed by standard FFS, it yields higher (sometimes much higher [657]) estimates of the transition rate. Unlike the original FFS, the approach of ref. [650] can deal with the situation where the barrier crossing evolves via states in between interfaces. For instance, if, in a simulation of nucleation rates, cluster sizes of 10 and 20 define interfaces, then we might find that the system jumps from cluster size 5 to 15. Nevertheless, these intermediate points can be used as starting points for launching new trajectories, as they have the correct statistical weight.

15.5.2 Transition-path theory

As mentioned in the introduction to path-sampling techniques, deterministic dynamics is arguably the most difficult case to tackle. As soon as the dynamics generating the trajectories has some stochasticity, path sampling becomes easier, and the forward-flux sampling discussed in section 15.5 is a case in point. Transition Path Theory (TPT) [635] provides a general framework for modeling rare events in systems that evolve in time according to a stochastic differential equation, such as the Langevin equation, or its overdamped limit (Brownian dynamics). The key observation of TPT is that, associated with the stochastic differential equation for the dynamics, there is a partial differential equation (Fokker-Planck or, more precisely, Smoluchowski) that describes the evolution of the probability density of reactive paths.5 Associated with the Fokker-Planck equation, there is another equation (the backward Kolmogorov equation), the solution of which is the so-called committor C(x). The value of C(x) equals the probability that a trajectory starting in phase-space coordinate x, located in the transition region between A and B, will end up in B before it ends up in A. The transition state (or actually, surface) is then the collection of all points x from where the system is equally likely to proceed to B as to return to A: in other words, at the transition surface C(x) = 0.5. In the theory of rare events, the committor is a crucial quantity. In fact, as discussed in Illustration 22, PB, a quantity closely related to C(x), is often used in path-sampling simulations to test the quality of a postulated reaction coordinate. PB is computed by launching many trajectories from points that have the same value of the reaction coordinate: some go to A, some to B. Note that different points in configuration space may have the same value of the reaction coordinate. For a good reaction coordinate, all points at the transition surface will have the same committor: 0.5. However, if we had chosen a poor reaction coordinate, the committor distribution for some points could be peaked close to one, and for other points, it may peak close to zero: this means that these points are not at all transition states. Some are inside the basin of attraction of A, others in the basin of attraction of B. Hence, computing the distribution of PB is a way to test the quality of our reaction coordinate. Of course, the above description of the committor immediately implies that the committor itself is the ideal reaction coordinate ... if only we could compute it easily. TPT tells us, in principle, how to compute the committor. However, in practice, we can only compute the committor analytically in very simple cases (see Appendix G). In practical implementations of TPT, the calculation focuses on finding the minimum (free) energy path from A to B and, with this information, on computing how probability flows from A to B. In this picture, individual trajectories are no longer important: what TPT aims to achieve is to construct the path (string/tube) of the dominant probability flux from A to B, and from knowledge of this flux, to compute the reaction rate. A full description of the practical implementation of TPT is beyond the scope of this chapter, where we focus on trajectory-based simulations.

5 There is some confusion about nomenclature here. We use the following convention: we use the name Fokker-Planck (or “forward-Kolmogorov”) equation to describe the equation that results from the lowest order expansion of the Chapman-Kolmogorov master equation for processes with diffusion and drift. If, in addition, we impose detailed balance, we obtain the Smoluchowski equation.

Illustration 22 (Ion pair dissociation). The dissociation of a Na+ Cl− pair in water is an example of an activated process. It is of particular interest to understand the effect of the water molecules on the dynamics of this process. As a first guess, one can use the ionic separation as a reaction coordinate: $r_{\mathrm{ion}} \equiv |\mathbf{r}_{\mathrm{Na}^+} - \mathbf{r}_{\mathrm{Cl}^-}|$. The free energy as a function of this reaction coordinate is shown schematically in Fig. 15.8. Once we have computed the free energy barrier, we could, in principle, use the Bennett-Chandler approach to compute the reaction rate (see section 15.2). However, for this system, one would observe a very small transmission coefficient, which suggests that the chosen reaction coordinate does not provide an adequate description of the dynamics of this reaction.


FIGURE 15.8 Free energy as a function of the ionic separation r_ion.

Fig. 15.9 explains how an unfortunate choice of the reaction coordinate may result in a low transmission coefficient in the Bennett-Chandler expression for the rate constant. But even if the reaction coordinate is well chosen, we may still get a low transmission coefficient. If the free energy landscape of the dissociation reaction looks like Fig. 15.9(a), the progress of the reaction would correlate directly with the ionic separation. However, it could still be that the system exhibits diffusive behavior near the transition state. If this is the case, one would obtain better statistics using the diffusive barrier crossing method described in section 15.3.

FIGURE 15.9 Two possible scenarios for the ion dissociation; the two figures show the two-dimensional free energy landscape in a contour plot. A is the stable associated state and B the dissociated state; the dotted line corresponds to the dividing surface as defined by the maximum of the free energy profile (see Fig. 15.8); r_ion is the reaction coordinate, while q represents all other degrees of freedom. In (a) one sees that the dividing surface nicely separates the two stable basins, while in (b) a point at r∗ “belongs” either to the A basin or to the B basin.


Another possible scenario is shown in Fig. 15.9(b). Here, we have a situation in which the ionic separation is not a good reaction coordinate. Unlike the situation in Fig. 15.9(a), the dividing surface does not discriminate between the two stable states. Apparently, there is another relevant coordinate (denoted by q) that is an (as yet unknown) function of the positions of the solvent particles. Since we do not know what this additional order parameter is, this is an ideal case to use Transition-Path Sampling (TPS), as in TPS we do not have to make an a priori choice of reaction coordinate. Geissler et al. [658] used transition path sampling to generate some $10^3$ reaction paths for this process. These paths were subsequently analyzed to obtain the transition state ensemble. This is the ensemble of configurations on the reaction paths that have the following “transition-state” property: half of the trajectories that are initiated at configurations that belong to this ensemble end up on the product side, and the other half on the reactant side. Although all trajectories originate from the same configuration, they have different initial velocities (drawn from the appropriate Maxwellian distribution). In general, $P_B$, the probability that a trajectory starting from an arbitrary configuration will end up in state B, is different from 0.5. Geissler et al. showed that $P_B \approx 0$ for most configurations that contained fivefold-coordinated sodium ions. Conversely, $P_B \approx 1$ for configurations with sixfold-coordinated sodium ions. For the Cl− ion, no such effect was found. This indicates that, in order to reach the transition state from the associated state, water molecules have to enter into the first solvation shell of the sodium ions. The water coordination of the Na+ ion was the order parameter that was missing in the simple analysis. This example illustrates how TPS can be used to elucidate unknown “reaction” mechanisms.
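The $P_B$ analysis described above is simple to express in code. The sketch below is a minimal illustration, not the procedure of ref. [658]; `draw_maxwell_velocities` and `run_trajectory` are hypothetical placeholders for the velocity initialization and for propagating the dynamics until the trajectory commits to basin A or B.

```python
def estimate_committor(config, draw_maxwell_velocities, run_trajectory,
                       n_shots=50):
    """Estimate P_B for one configuration by shooting off n_shots trajectories.

    Each trajectory starts from the same positions but with fresh Maxwell-
    Boltzmann velocities; run_trajectory must return "A" or "B" depending on
    which basin is reached first.
    """
    n_to_B = 0
    for _ in range(n_shots):
        velocities = draw_maxwell_velocities(config)
        basin = run_trajectory(config, velocities)
        if basin == "B":
            n_to_B += 1
    return n_to_B / n_shots

# A configuration belongs to the transition-state ensemble if its estimated
# committor is close to 0.5 (within the statistical error set by n_shots).
```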

15.5.3 Mean first-passage times

Our discussion of (non-jumpy) Forward-Flux Sampling provides a natural introduction to another method to study rare events, based on the distribution of Mean First-Passage Times (MFPT), even though the MFPT formalism in the theory of stochastic processes predates FFS by many years (see e.g., [67,659]). The first step in FFS and TIS involves computing the rate at which a system that was originally in the “reactant” state A crosses the interface nearest to A (interface “1”). Suppose that we have started L independent trajectories in state A: for every trajectory i, we can measure the time interval $t_i^{(A\to 1)}$ until it first arrives at interface 1. The rate at which trajectories arrive at interface 1 is

\[ k_{A\to 1} = \frac{L}{\sum_{i=1}^{L} t_i^{(A\to 1)}} \equiv \frac{1}{\tau_{\mathrm{MFPT}}^{(A\to 1)}} , \qquad (15.5.1) \]

where

\[ \tau_{\mathrm{MFPT}}^{(A\to 1)} = \frac{\sum_{i=1}^{L} t_i^{(A\to 1)}}{L} \qquad (15.5.2) \]

is the average time it takes a trajectory to arrive at 1: this is the mean first-passage time for trajectories that arrive at 1 from A. The times $t_i$ in Eq. (15.5.2) account for the total time between one arrival at interface 1 and the next: the system spends most of that time in A. Note that $k_{A\to 1}$ in Eq. (15.5.1) is a peculiar rate. It is the net forward rate: the reverse rate from 1 to A is not included. So $k_{A\to 1}$ is the rate that one would obtain if there were an absorbing boundary at 1. Similarly, we can compute $\tau_{\mathrm{MFPT}}^{(A\to 2)}$, the mean first-passage time from A to interface 2, even though this is not what we compute in FFS. To compute $\tau_{\mathrm{MFPT}}^{(A\to 2)}$, we could collect all trajectories that first arrive at 1 and then proceed to 2 without first returning to A. But note that there may be many trajectories that arrive at 1, but then return to A rather than proceed to 2. We have to include the duration of all these failed trajectories in the time to travel from A to 2. Hence, for rare events, the MFPT to travel from A to 2 can be much longer than the MFPT to travel to 1, when 2 is “uphill” from 1. The forward rate of going from A to 2 is

\[ k_{A\to 2} = \frac{1}{\tau_{\mathrm{MFPT}}^{(A\to 2)}} . \qquad (15.5.3) \]

Clearly, we can repeat this procedure for every subsequent interface, up to B. The forward rate of going from A to B is then

\[ k_{A\to B} = \frac{1}{\tau_{\mathrm{MFPT}}^{(A\to B)}} . \qquad (15.5.4) \]

Eq. (15.5.4) shows that the forward rate of a process is directly related to the mean first-passage time. Of course, the backward rate can be obtained by computing the mean first-passage time from B to A. For the relation between overall reaction rates and forward/backward rates, see Eq. (15.1.6). In practice, MFPTs are often used to compute rates in cases where we can observe barrier crossings in brute-force simulations, but it should be clear that we can also compute the MFPT with FFS, although there is then little to be gained. The added advantage of the MFPT method is that, under certain conditions, it allows us to compute both the rate of a barrier crossing and the shape of the free energy barrier [660]. This approach to reconstruct the free energy barrier from MFPT data can be used when the barrier crossing is diffusive, and the time evolution of the probability density is governed by the Smoluchowski equation, or a discretized version thereof, as in the Becker-Döring theory of nucleation. The expression for the MFPT in the case of a crossing of a one-dimensional free-energy barrier F(q) between A and B is:

\[ \tau_{\mathrm{MFPT}}^{(A\to B)} = \int_{q_A}^{q_B} dq\, \frac{e^{\beta F(q)}}{D(q)} \int_{-\infty}^{q} dq'\, e^{-\beta F(q')} , \qquad (15.5.5) \]

where D is the diffusion coefficient, which may depend on q. For a derivation of this expression, and for a clear general discussion of the mean first-passage time approach, we refer the reader to ref. [34]. Additional details can be found in refs. [661,662]. To understand the physics behind Eq. (15.5.5), we give a simple derivation of an approximate expression for the MFPT. We start with Eq. (15.5.4), which relates the forward flux to the MFPT. We assume that the barrier crossing is diffusive and governed by the Smoluchowski equation [633] for the evolution of the probability density ρ(q, t), written as6

\[ \frac{\partial \rho(q,t)}{\partial t} = \nabla D(q)\,\rho_{\mathrm{eq}}(q)\, \nabla \left[ \rho(q,t)/\rho_{\mathrm{eq}}(q) \right] \equiv -\nabla J , \qquad (15.5.6) \]

where $\rho_{\mathrm{eq}}(q)$ is the normalized “equilibrium” probability density of the system in reactant state A, i.e., the distribution that would have resulted if the escape from A were blocked, and J denotes the probability flux. $\rho_{\mathrm{eq}}$ is normalized and assumed to be small (but non-zero) outside A, which implies that

\[ \rho_{\mathrm{eq}}(q) \approx \frac{e^{-\beta F(q)}}{\int_{\mathrm{basin}\,A} dq\, e^{-\beta F(q)}} . \qquad (15.5.7) \]

We consider the case of slow barrier crossings, in which case we reach a steady state long before basin A has been depleted. In a steady state, the flux J is constant,

\[ J = -D(q)\,\rho_{\mathrm{eq}} \nabla \left[ \rho(q,t)/\rho_{\mathrm{eq}} \right] , \qquad (15.5.8) \]

or

\[ \int_{A}^{B} dq\, \frac{J}{D(q)\,\rho_{\mathrm{eq}}} = -\int_{A}^{B} dq\, \nabla\!\left[ \rho(q,t)/\rho_{\mathrm{eq}} \right] = 1 , \qquad (15.5.9) \]

where the last equality follows from the fact that in A, ρ(q, t) ≈ ρ_eq and in B, ρ(q, t) ≈ 0. Using Eq. (15.5.7), we can then rewrite Eq. (15.5.9) as

\[ \int_{\mathrm{basin}\,A} dq\, e^{-\beta F(q)} \int_{A}^{B} dq\, \frac{e^{\beta F(q)}}{D(q)} = J^{-1} = \tau_{\mathrm{MFPT}}^{(A\to B)} , \qquad (15.5.10) \]

which is basically Kramers’ result for the rate of a diffusive escape over a barrier [663].

6 The Smoluchowski equation can be viewed as a special case of the Fokker-Planck equation, with the added constraint of detailed balance—see e.g., [34].
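If F(q) and D(q) are available on a grid (for instance from an umbrella-sampling calculation), Eq. (15.5.5) can be evaluated by direct numerical quadrature. The sketch below is a minimal illustration, not the barrier-reconstruction procedure of ref. [660]; the arrays `q`, `F`, and `D` and the inverse temperature `beta` are assumed inputs, and the inner integral is truncated at the lower edge of the grid rather than at −∞.

```python
import numpy as np

def mfpt_from_profile(q, F, D, beta):
    """Numerically evaluate Eq. (15.5.5) for tau_MFPT^(A->B) on a grid.

    q    : increasing grid of the reaction coordinate from q_A to q_B
    F    : free energy F(q) on that grid
    D    : diffusion coefficient D(q) on that grid (may be constant)
    beta : 1/(k_B T)
    """
    inner = np.array([
        np.trapz(np.exp(-beta * F[: i + 1]), q[: i + 1])   # int dq' e^{-beta F(q')}
        for i in range(len(q))
    ])
    integrand = np.exp(beta * F) / D * inner                # e^{beta F(q)}/D(q) * inner(q)
    return np.trapz(integrand, q)                           # outer integral over q
```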


Eq. (15.5.10) shows that, for barriers much larger than $k_B T$, the value of $\tau_{\mathrm{MFPT}}^{(A\to B)}$ is dominated by the value of D(q) in the q-range close to the top of the barrier. The more general equation (15.5.5) can be used to compute MFPTs for interfaces at values of q between A and B, and the variation of $\tau_{\mathrm{MFPT}}^{(A\to q)}$ with q can be used to reconstruct the free energy barrier [660]. On the whole, the MFPT approach is most useful for studying barrier crossings that are sufficiently frequent to allow good statistics to be collected in brute-force simulations. It is not a good technique to study very rare events.

Illustration 23 (Transition-path sampling with parallel tempering). Transition-Path Sampling (TPS) is a technique that allows us to compute the rate of a barrier-crossing process without a priori knowledge of the reaction coordinate or the transition state. However, when there are many distinct pathways that lead from one stable state to another, it can be difficult to sample all possible pathways within the time scale of a single simulation. Vlugt and Smit [664] have shown that parallel tempering (see section 13.1.1) can be used to speed up the sampling of transition pathways that are separated by high free energy barriers. The objective of TPS is to sample all relevant transition paths within a single simulation. This becomes difficult when different transition paths lead to distinct saddle points in the free energy surface. To be more precise, problems arise when the (free) energy barrier between two saddle points is much higher than $k_B T$ (see Fig. 15.10). Of course, the sampling problem would be much less serious if one could work at much higher temperatures where the transition path can cross the barriers separating the saddle points.

FIGURE 15.10 Schematic representation of two different transition paths from state A to state B. A and B are two stable states separated by a free energy barrier. There are two dynamical pathways for the system to go from A to B, one path crosses the barrier via saddle point S1 while the other path crosses via saddle point S2 . If the (free) energy barrier E between the two paths is much larger than kB T , it is unlikely that path 1 will evolve to path 2 in a single transition path simulation. Note that the energy barrier between two paths (E) is not the same as the energy barrier along the path.

Parallel tempering exploits the possibility of generating “transitions” between different saddle points at high temperatures, to improve the sampling efficiency at low temperatures. As was already discussed in section 13.1.1, parallel tempering can be used to switch between systems at various temperatures. As an illustration of this combined parallel tempering and transition path sampling approach, we consider the example discussed in ref. [664]: a two-dimensional system containing a linear chain of 15 repulsive Lennard-Jones particles. The nonbonded interactions are given by

\[ u_{\mathrm{rep}}(r) = \begin{cases} 1 + 4\left( r^{-12} - r^{-6} \right) & r \le r_{\mathrm{rep}} \\ 0 & r > r_{\mathrm{rep}} \end{cases} , \qquad (15.5.11) \]

in which r is the distance between two particles and $r_{\mathrm{rep}} \equiv 2^{1/6}$ (we have used the Lennard-Jones σ as our unit of length). Neighboring particles i and j in the chain interact through a double-well potential:

\[ u_{\mathrm{dw}}\!\left(r_{ij}\right) = h \left[ 1 - \frac{\left(r_{ij} - w - r_{\mathrm{rep}}\right)^2}{w^2} \right]^2 , \qquad (15.5.12) \]

where rij denotes the distance between the neighboring particles i and j . This potential has two equivalent minima, one for rij = rrep and the other for rij = rrep + 2w (w > 0). The chain can have a compact state, A, if all bonds are in the first minimum, or an extended state, B, if all bonds are in the second minimum. We wish to express the transition rate from A to B as the sum of the rates of the contributions due to all distinct transition paths. Clearly, there are many (in this case 14!) distinct pathways that lead from the compact state to the fully extended state (e.g., first stretch bond 2 − 3, then 1 − 2, then 6 − 7, etc.). Without Parallel Tempering (PT) it would take a prohibitively long time to obtain a representative sampling of all transition paths at low temperatures. In the present case, the number of distinct reaction paths is too large to be sampled even with parallel tempering. However, for many problems, the number of relevant paths is small, and the present approach can be used to compute the rate constant. For more details, the reader is referred to ref. [664].
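For readers who want to reproduce this model, the two potentials of Eqs. (15.5.11) and (15.5.12) are straightforward to code. The sketch below is a minimal rendering in reduced Lennard-Jones units (it is not the code of ref. [664]), with h and w left as free parameters.

```python
R_REP = 2.0 ** (1.0 / 6.0)  # r_rep in units of the Lennard-Jones sigma

def u_rep(r):
    """Purely repulsive (WCA-like) nonbonded potential, Eq. (15.5.11)."""
    if r <= R_REP:
        return 1.0 + 4.0 * (r ** -12 - r ** -6)
    return 0.0

def u_dw(r, h, w):
    """Double-well bond potential, Eq. (15.5.12); minima at r_rep and r_rep + 2w."""
    x = (r - w - R_REP) ** 2 / w ** 2
    return h * (1.0 - x) ** 2
```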

15.6 Searching for the saddle point

In section 15.4.1, we described a general procedure for finding barrier-crossing rates. In principle, this procedure should work even if we have no knowledge about the reaction coordinate. However, in practice, the calculations may become very time-consuming. For this reason, a variety of techniques that aim to identify the relevant reaction coordinate have been proposed. Below, we briefly describe some of these methods. More details (and more references) can be found in ref. [48]. The reaction-path-searching methods that we discuss have been designed for situations where the free energy barrier separating “reactant” and “product” is energy-dominated. This is often the case, but certainly not always (see, e.g., [556]).


One such energy-based scheme is the so-called Nudged Elastic Band (NEB) method proposed by Jonsson and collaborators [665]. This method aims to find the lowest-energy path to the saddle point that separates the reactant basin from the product basin. The NEB method assumes that both the reactant and product states are known. A number of replicas of the original system are now prepared. These replicas are initially located equidistantly along the “linear” path from reactant to product. The position of this originally linear path is now relaxed to find the reaction path. This is achieved as follows: the replicas are connected by harmonic springs that tend to keep them equally spaced—this is the “elastic band.” In addition, all replicas experience the gradient of the intermolecular potential that tends to drive them to a minimum of the potential energy. However, these gradient forces are only allowed to act perpendicular to the local tangent. Conversely, the elastic band forces are only allowed to act along the local tangent. As a consequence, the intermolecular forces move the elastic band laterally until the transverse forces vanish (i.e., when it is a minimum energy path) while the longitudinal forces prevent all replicas from collapsing into the reactant or product state. For more details and further refinements, see [666–668]. A technique that is similar in spirit, but very different in execution, to the NEB method is the activation-relaxation technique developed by Barkema and Mousseau [669]. This method also aims to find the lowest energy path to the saddle point. Unlike the NEB method, this scheme does not make use of any a priori knowledge of the product basin. To find the saddle point, the system is forced to move “uphill” against the potential energy gradient. However, if we would simply let the system move in a direction opposite to the force that acts on it, we would reach a potential energy maximum, rather than a saddle point. Hence, in the method of [669], the force that acts on the system is only inverted (and then only fractionally) along the vector in configuration space that connects the position of the initial energy minimum (i.e., the lowest energy initial state) with the present position of the system. In all other directions, the original forces keep acting on the system. The aim of this procedure is to force the system to stay close to the lowest energy trajectory towards the saddle point. Often there is more than one saddle point. In that case, the initial displacement of the system from the bottom of the reactant basin will determine which saddle point will be reached. Note that we cannot tell a priori whether the saddle point that is found will indeed be the relevant one. The true transition state can only be found by attempting many different initial displacements, and by computing the energy and (in the case of a quasi-harmonic energy landscape) the entropy of the saddle point. The only reason special techniques are needed to simulate activated processes is simply that rare events are . . . rare. If, somehow, one could artificially increase the frequency of rare events in a controlled way, this would allow us to use standard simulation techniques to study activated processes. Voter and collaborators [670,671] have explored this route. The idea behind the approach of Voter is that the rate of activated processes can be increased either by artificially

lowering the energy difference between the top of the barrier and the reactant basin (“hyperdynamics” [670]) or by increasing the temperature (“temperature-accelerated dynamics” [671]). The trick is to apply these modifications in such a way that it is possible to correct for the effect that they have on the crossing rate. In both schemes, the essence of the correction is that the rate $k_i^B$, at which the system crosses a point i in the biased system, is higher than the corresponding rate $k_i^U$ in the unbiased system. To recover the unbiased rate, the biased rate should be multiplied by a factor

\[ \frac{P_{\mathrm{Boltzmann}}^{U}(i)}{P_{\mathrm{Boltzmann}}^{B}(i)} , \]

where $P_{\mathrm{Boltzmann}}^{U}$ ($P_{\mathrm{Boltzmann}}^{B}$) is the unbiased (biased) Boltzmann weight of configuration i. An additional “linear” speed-up of the rate calculations can be achieved by performing n barrier-crossing calculations in parallel [672]. Although this approach does not reduce the total amount of CPU time required, it does reduce the wall-clock time of the simulation. For more details, the reader is referred to refs. [670–672].
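To make the Nudged Elastic Band construction described earlier in this section more tangible, the sketch below evaluates the force felt by one interior replica of the band. It is a minimal illustration, not the formulation of ref. [665]: the tangent estimate is the simplest possible one, and `grad_potential` is a hypothetical placeholder for the gradient of the potential energy.

```python
import numpy as np

def neb_force(r_prev, r_i, r_next, grad_potential, k_spring):
    """Force on replica i of a nudged elastic band.

    True-potential forces act only perpendicular to the local tangent;
    spring forces act only along it, keeping the replicas spread out.
    """
    tangent = r_next - r_prev
    tangent /= np.linalg.norm(tangent)          # unit tangent along the band

    f_true = -grad_potential(r_i)               # force from the potential energy
    f_perp = f_true - np.dot(f_true, tangent) * tangent

    spring = k_spring * (np.linalg.norm(r_next - r_i)
                         - np.linalg.norm(r_i - r_prev))
    f_parallel = spring * tangent               # elastic-band restoring force

    return f_perp + f_parallel
```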

15.7 Epilogue

After this long, but far from exhaustive, review of techniques to predict the pathway and rate of rare events, it is fair to ask if any of these techniques is guaranteed to find the dominant path by which a rare transition proceeds in reality. It may come as something of a let-down to the reader that the answer is: “Past performance is no guarantee for future results”. Just as free-energy calculations typically compare the relative stabilities of structures that we have identified, so rare-event techniques can only compute the rates for pathways that we have guessed or found. Yet, intuition and guessing are becoming less important in this context as Machine Learning (ML), particularly the use of auto-encoders, is transforming this field. ML can help to identify the number and approximate functional form of the collective variables that best describe, e.g., reaction pathways [673] or transition states [674]. Undoubtedly, much more is to come. Hence, we cannot do much more than signal these developments and refer the reader to the emerging literature.

Chapter 16

Mesoscopic fluid models

This book describes classical molecular-simulation techniques. The focus on classical simulations means that we may use the results of quantum chemical calculations of the properties and interactions of atoms and molecules, but we do not discuss the quantum calculations as such. Neither do we use molecular simulations to study macroscopic phenomena where the particulate nature of atoms and molecules is irrelevant, such as in large-scale flow phenomena, or in the macroscopic description of the mechanical properties of solids. Molecular simulations can be used to predict the properties of such materials, e.g., the viscosity, thermal conductivity, or, in the case of solids, the elastic constants (Appendix F.4). But that is where the role of molecular simulations stops: we will not consider computational techniques to model a waterfall or a buckling beam. However, in practice, the separation between the micro and the macro world is not always clean. There are many examples of physical phenomena that require a combination of a macroscopic description with a model that accounts for the particulate nature of matter. Examples are two-phase flows near interfaces or the propagation of a fracture tip in a solid. Another important example where micro and macro meet is the study of colloidal suspensions. Colloids are mesoscopic particles with typical sizes of 10 nm–1 µm. These particles consist of millions, or even billions, of atoms, and these bodies are, therefore, best described by continuum mechanics. However, colloids undergo thermal motion, and a collection of colloids behaves in many ways like a collection of giant atoms: they are subject to the laws of Statistical Physics, and their properties can be studied by molecular simulations. Yet, such simulations will not focus on the atoms in the colloids, but rather treat the colloids themselves as the basic building blocks of a suspension. The situation becomes more complex when we consider the solvent in which these colloids move: clearly, it would be prohibitively expensive to model the millions or billions of molecules of the solvent. Yet, we cannot ignore the solvent because, without a solvent, the colloids would move ballistically, whereas, in reality, the viscous drag of the solvent quickly kills off any ballistic motion of the colloids. The solvent also drives the thermal motion of the colloids. In addition, the solvent is responsible for hydrodynamic interactions between the colloids, i.e., the phenomenon that the motion of one colloid creates a (transient) flow field that affects the motion of other colloids.


It is distinctly unattractive to use the continuum (Navier-)Stokes equation to compute all hydrodynamic interactions in a dense suspension of colloids, particularly in confined geometries. This is why it is common to use a simplified, particle-based description of the solvent: not to reproduce the molecular properties of the real solvent, but to account, in the simplest way possible, both for the thermal motion of the colloids and for their hydrodynamic interactions in arbitrary geometries. We stress that the above considerations for the introduction of a coarse-grained model of the solvent apply only to the dynamics of colloids. For the static properties, hydrodynamics plays no role. Hence, for structural properties, implicit solvent models can be fine, provided that they adequately account for the effect of the solvent on the effective intermolecular interactions between solute particles. The latter point is well illustrated by the use of the hard-sphere model to describe colloidal suspensions. Although the hard-sphere model was introduced as the simplest possible model of an atomic liquid [18], it became clear in the 1980s that it provided a rather realistic description of the static properties of suspensions of hard, spherical colloids (see, e.g., [209]). However, the dynamics of Alder and Wainwright’s hard-sphere model was Newtonian, which means that the motion of the spheres in between collisions is purely ballistic. Such a model cannot account for the hydrodynamic effects of the solvent. Yet, although we cannot ignore the existence of the solvent, the Brownian motion of a hard spherical colloid depends on the molecular properties of the solvent almost exclusively through the solvent viscosity, even for dense suspensions. In addition, Brownian diffusion depends on temperature, a quantity that is completely insensitive to molecular details. Hence, if we are interested in the dynamics of suspensions, any solvent model that can reproduce the thermal motion and the viscosity of the solvent will do.


16.1 Dissipative-particle dynamics

The Dissipative Particle Dynamics (DPD) method was introduced by Hoogerbrugge and Koelman [679,680] as a cheap, particle-based model for a solvent. The key feature of the DPD method is that, in addition to conservative forces between the particles, it includes momentum-conserving friction forces and random forces between pairs of particles:

\[ \mathbf{F}_i = \sum_{j \neq i} \left[ \mathbf{f}^C(r_{ij}) + \mathbf{f}^D(r_{ij}, \mathbf{v}_{ij}) + \mathbf{f}^R(r_{ij}) \right] . \qquad (16.1.1) \]

In most applications, the conservative forces acting on a given particle i can be written as a sum of pair forces, $\mathbf{f}_i^C = \sum_{j \neq i} \mathbf{f}_{ij}^C$, although versions of DPD with many-body forces do exist [681]. The dissipative force $\mathbf{f}_{ij}^D$ is purely frictional: it depends both on the positions of the particles i and j, and on their relative velocities:

\[ \mathbf{f}^D(r_{ij}, \mathbf{v}_{ij}) = -\gamma\, \omega^D(r_{ij}) \left( \mathbf{v}_{ij} \cdot \hat{\mathbf{r}}_{ij} \right) \hat{\mathbf{r}}_{ij} , \qquad (16.1.2) \]

where $\mathbf{r}_{ij} = \mathbf{r}_i - \mathbf{r}_j$ and $\mathbf{v}_{ij} = \mathbf{v}_i - \mathbf{v}_j$, and $\hat{\mathbf{r}}_{ij}$ is the unit vector in the direction of $\mathbf{r}_{ij}$. γ is a coefficient controlling the strength of the frictional force between the DPD particles. $\omega^D(r_{ij})$ describes the variation of the friction coefficient with distance. The random force $\mathbf{f}^R(r_{ij})$ is of the form:

\[ \mathbf{f}^R(r_{ij}) = \sigma\, \omega^R(r_{ij})\, \xi_{ij}\, \hat{\mathbf{r}}_{ij} . \qquad (16.1.3) \]

σ determines the magnitude of the random pair force between the DPD particles. $\xi_{ij}$ is a random variable with Gaussian distribution1 and unit variance, and $\xi_{ij} = \xi_{ji}$, while $\omega^R(r_{ij})$ describes the variation of the random force with distance. The functions $\omega^R(r)$ and $\omega^D(r)$ cannot be chosen independently. Español and Warren [683] showed that, in order to ensure Boltzmann sampling of the configurations and velocities of the system, the following relation must be satisfied [683]:

\[ \omega^D(r_{ij}) = \left[ \omega^R(r_{ij}) \right]^2 , \qquad (16.1.4) \]

which shows that $\omega^D(r_{ij})$ cannot be negative. In the spirit of the Langevin equation, γ and σ are related to the temperature according to

\[ \sigma^2 = 2 k_B T \gamma . \qquad (16.1.5) \]

1 Groot and Warren [682] found that a uniform distribution with unit variance gave similar results.
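As an illustration of Eqs. (16.1.1)–(16.1.5), the sketch below evaluates the three DPD force contributions for one pair of particles. It is a minimal sketch, not code from this book: the soft conservative force and the weight function anticipate the choices made in Example 27 below, and the 1/√Δt factor by which the random force is conventionally rescaled inside a discrete-time integrator is left to the integrator.

```python
import numpy as np

rng = np.random.default_rng()

def dpd_pair_forces(ri, rj, vi, vj, a_ij, gamma, kBT, rc=1.0):
    """Conservative, dissipative, and random DPD forces on particle i due to j.

    Follows Eqs. (16.1.2)-(16.1.5): w^D(r) = [w^R(r)]^2 and sigma^2 = 2 kBT gamma.
    """
    rij_vec = ri - rj
    rij = np.linalg.norm(rij_vec)
    if rij >= rc:
        return np.zeros(3), np.zeros(3), np.zeros(3)

    e_ij = rij_vec / rij                      # unit vector r^_ij
    w_R = 1.0 - rij / rc                      # a common choice; w^D = w_R**2
    sigma = np.sqrt(2.0 * kBT * gamma)

    f_C = a_ij * (1.0 - rij / rc) * e_ij      # soft repulsion (cf. Eq. (16.1.8))
    f_D = -gamma * w_R**2 * np.dot(vi - vj, e_ij) * e_ij
    f_R = sigma * w_R * rng.normal() * e_ij   # xi_ij: unit-variance Gaussian
    return f_C, f_D, f_R
```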

We derive Eqs. (16.1.4) and (16.1.5) in SI section L.13. However, the simple derivation of the relation between friction and random momentum transfer (Eqs. (7.1.6) through (7.1.8)) applies just as well to the present case, because the relation should hold for any value of $\omega^D(r)$.

However, as the time step in a DPD simulation is of finite length, we should be careful about where in the time step we compute the friction. This issue is briefly discussed in section 16.1.1. Possible choices for the functional form of the r-dependence of $\omega^R(r)$ will be discussed in Example 27. At first sight, the DPD method bears a strong resemblance to Langevin dynamics (section 7.1.1.3), as both schemes employ a combination of random and dissipative forces. However, in Langevin dynamics, the frictional and random forces do not conserve momentum. In DPD, both the frictional and random forces do not change the center-of-mass velocity of a pair of particles, and hence the algorithm conserves momentum. Momentum conservation is necessary for recovering the correct “hydrodynamic” (Navier-Stokes) behavior on sufficiently large length and time scales. An alternative approach for imposing both momentum conservation and proper Boltzmann sampling has been proposed by Lowe [258] (see section 7.1.1.2). We still have to show that, on sufficiently large length and time scales, the DPD fluid obeys the Navier-Stokes equation of hydrodynamics. At present, there exists no rigorous demonstration that this is true for an arbitrary DPD fluid.2 However, all existing numerical studies suggest that, in the limit where the integration time step δt → 0, the large-scale behavior of the DPD fluid is described by the Navier-Stokes equation. The kinetic theory for the transport properties of DPD fluids [684–687] supports this conclusion. One interesting limit of the DPD model is the “dissipative ideal gas”, i.e., a DPD fluid without the conservative forces. The static properties of this fluid are those of an ideal gas, yet its transport behavior is that of a viscous fluid. The advantage of DPD over conventional (atomistic) MD is that it uses a coarse-grained model with a soft intermolecular potential (see SI, Case Study 25). Such models are computationally cheap because they allow rather large time steps. The applicability of soft potentials in coarse-grained models is not limited to simple fluids: we can use the same approach to construct coarse-grained models of the building blocks of complex liquids. However, if we are only interested in the static properties of a complex liquid, we could have used standard MC or MD on a model with the same conservative forces, but without dissipation. The real advantage of DPD shows up when we try to model the dynamics of complex liquids.

2 In MD simulations, many seemingly obvious properties cannot be proven rigorously. One would expect the Navier-Stokes picture to hold if momentum and mass are conserved, and if all fluctuations of non-conserved quantities decay on microscopic length and timescales.

16.1.1 DPD implementation

A DPD simulation can easily be implemented in any working Molecular Dynamics program. The only subtlety is in the integration of the equations of motion. As the forces between the particles depend on their relative velocities, the standard velocity-Verlet scheme cannot be used. In their original publication, Hoogerbrugge and Koelman [679] used an Euler-type algorithm to integrate the equations of motion. However, Marsh and Yeomans found that, with such an algorithm, the effective equilibrium temperature depends on the time step that is used in a DPD simulation [688]. Only in the limit of the time step approaching zero was the correct equilibrium temperature recovered. A similar result was obtained by Groot and Warren [682] using a modified velocity-Verlet algorithm. There is, however, an important feature missing in these algorithms. If we compare the DPD integration schemes with those used in a Molecular Dynamics simulation, all “good” MD schemes are intrinsically time-reversible, while the above DPD schemes are not. Pagonabarraga et al. [689] argued that time reversibility is also important in a DPD simulation, since only with a time-reversible integration scheme can detailed balance be obeyed. In the Leap-Frog scheme (see section 4.3.3) the velocities are updated using

\[ \mathbf{v}(t + \Delta t/2) = \mathbf{v}(t - \Delta t/2) + \Delta t\, \frac{\mathbf{f}(t)}{m} , \qquad (16.1.6) \]

and the positions using

\[ \mathbf{r}(t + \Delta t) = \mathbf{r}(t) + \Delta t\, \mathbf{v}(t + \Delta t/2) . \qquad (16.1.7) \]

In DPD, the force at time t depends on the velocities at time t. The velocity at time t is approximated by

\[ \mathbf{v}(t) = \frac{\mathbf{v}(t + \Delta t/2) + \mathbf{v}(t - \Delta t/2)}{2} . \]

This implies that the term $\mathbf{v}(t + \Delta t/2)$ appears on both sides of Eq. (16.1.6). In the scheme of Pagonabarraga et al., these equations were solved self-consistently; i.e., the value for $\mathbf{v}(t + \Delta t/2)$ calculated from Eq. (16.1.6) has to be the same as the value for $\mathbf{v}(t + \Delta t/2)$ used to calculate the force at time t. This implies that we have to perform several iterations before the equations of motion can be solved. This self-consistent scheme implies that the equations are solved in such a way that time reversibility is preserved. To see how time reversibility is related to detailed balance, we consider a single DPD step as a step in a Monte Carlo simulation. If we have only conservative forces, DPD is identical to standard Monte Carlo. In fact, we can use the hybrid Monte Carlo scheme (see section 13.3.1) for the DPD particles. For hybrid Monte Carlo, a time-reversible algorithm must be used to integrate the equations of motion. In the case of hybrid Monte Carlo, detailed balance implies that if we reverse the velocities, the particles should return to their original positions. If this is not the case, detailed balance is not obeyed. If we use a non-iterative scheme to solve the equations of motion in our DPD scheme, the velocity that we calculate at time t is not consistent with the velocity that is used to compute the force at this time. Hence, if we reverse the velocities, the particles do not return to their original positions, and detailed balance is not obeyed.


In Example 27, the DPD method is illustrated with a few examples. In the DPD approach, the forces due to individual solvent molecules are lumped together to yield effective friction and a fluctuating force between moving fluid elements. While this approach does not provide a correct atomistic description of molecular motion, it has the advantage that it does reproduce the correct hydrodynamic behavior on long length and time scales. Although the idea behind DPD is simple, the integration of the equations of motion, which contain velocity-dependent forces and position-dependent friction, requires care. The problem of position-dependent friction is discussed in refs. [250,682]. Different DPD algorithms are compared in ref. [690].

Example 27 (Dissipative particle dynamics). To illustrate the DPD technique, we have simulated a system of two components (1 and 2). The conservative force is a soft repulsive force given by

\[ \mathbf{f}_{ij}^C = \begin{cases} a_{ij} \left( 1 - r_{ij} \right) \hat{\mathbf{r}}_{ij} & r_{ij} < r_c \\ 0 & r_{ij} \ge r_c \end{cases} , \qquad (16.1.8) \]

in which $r_{ij} = |\mathbf{r}_{ij}|$ and $r_c$ is the cutoff radius of the potential. The random forces are given by Eq. (16.1.3) and the dissipative forces by Eq. (16.1.2). The total force on a particle equals the sum of the individual forces:

\[ \mathbf{f}_i = \sum_{i \neq j} \left( \mathbf{f}_{ij}^C + \mathbf{f}_{ij}^S + \mathbf{f}_{ij}^R + \mathbf{f}_{ij}^D \right) . \qquad (16.1.9) \]

To obtain a canonical distribution, we use

\[ \sigma^2 = 2 \gamma k_B T \qquad\qquad w^D\!\left(r_{ij}\right) = \left[ w^R\!\left(r_{ij}\right) \right]^2 . \]

A simple but useful choice is [682]

\[ w^D\!\left(r_{ij}\right) = \begin{cases} \left( 1 - r_{ij}/r_c \right)^2 & r_{ij} < r_c \\ 0 & r_{ij} \ge r_c \end{cases} \]

with $r_c = 1$. The simulations were performed with ρ = 3.0 and σ = 1.5. We have chosen $a_{ii} = a_{jj} = 25$ and $a_{ij,\,i \neq j} = 30$. This system will separate into two phases. In the example shown in Fig. 16.1, we have chosen the z-direction perpendicular to the interface. The left part of Fig. 16.1 shows typical density profiles of the two components. In Fig. 16.1 (right), we have plotted the concentration of one of the components in the coexisting phases.


FIGURE 16.1 (Left) Density profile for $k_B T = 0.45$. (Right) Phase diagram as calculated using DPD and Gibbs ensemble simulations. Both techniques result in the same phase diagram, but the Gibbs ensemble technique needs fewer particles due to the absence of a surface. In the DPD simulations, we have used a box of 10 × 10 × 20 (in units of $r_c^3$). The time step of the integration was Δt = 0.03.

Since we can write down a Hamiltonian for a DPD system, we can also perform standard Monte Carlo simulations [691]. For example, we can also use a Gibbs ensemble simulation (see section 6.6) to compute the phase diagram. As expected, Fig. 16.1 shows that both techniques give identical results. Of course, due to the presence of an interface, one needs many more particles in such a DPD simulation. Thermodynamic quantities are calculated using only the conservative force. The pressure of the system is calculated using

\[ p = \rho k_B T + \frac{1}{3V} \left\langle \sum_{i>j} \mathbf{r}_{ij} \cdot \mathbf{f}_{ij}^C \right\rangle . \]

FIGURE 16.2 Surface tension as a function of temperature.

When the DPD fluid undergoes phase separation, two slabs containing the different fluid phases form, and we can compute the associated interfacial tension, using the techniques described in Section 5.1.6. Fig. 16.2 shows this interfacial tension γ as a function of temperature. Clearly, γ decreases with increasing temperature and should vanish at the critical point. As the critical point is approached, the driving force for the formation of a surface (surface tension) becomes very low, and it becomes difficult, therefore, to form a stable interface in a small system. For more details, see SI (Case Study 25).

16.1.2 Smoothed dissipative-particle dynamics

The name dissipative particle dynamics implies that during a simulation energy is dissipated. The fact that energy is not conserved in a DPD simulation can be a disadvantage for some applications. For example, if we have a system in which there is a temperature gradient, such a temperature gradient can only be sustained artificially in a DPD simulation [685]. An early solution to this problem was to introduce an additional variable characterizing the internal energy in a DPD simulation [692,693], which corresponds to every DPD particle carrying its own internal-energy “reservoir.” This reservoir absorbs or releases the energy that would normally go into the internal degrees of freedom of the group of molecules that are represented by a single DPD particle. During a collision, the energy of this reservoir can increase or decrease. Subsequently, Español and Revenga [694] proposed a generic “top-down” approach to treat transport processes in mesoscopic fluid models, based on the ideas of Smoothed Particle Hydrodynamics [695–697]. In this context, top-down means that the model takes the concept of DPD particles as fluid elements seriously, and attributes to them a (constant) mass m, entropy S, and volume V. The energy of these fluid elements is related to the basic thermodynamic parameters through an equation of state of the form $E_i = E(m, S_i, V_i)$, which can be specified by the user to match the properties of the system being modeled. The fact that E depends on time only through the time dependence of $m_i$, $S_i$, and $V_i$ implies that local thermodynamic equilibrium is assumed. In the Smoothed Dissipative Particle Dynamics (SDPD) approach of ref. [694], the volume of the fluid element is time-dependent, as it depends on the local density $\rho_i$ around a particle i:

\[ V_i \equiv 1/\rho_i , \qquad \rho_i = \int d\mathbf{r}\, \rho(\mathbf{r})\, W(\mathbf{r} - \mathbf{r}_i) , \]

where $\rho(\mathbf{r}) \equiv \sum_j \delta(\mathbf{r} - \mathbf{r}_j)$ and $W(\mathbf{r} - \mathbf{r}_i)$ is a normalized weight function:

\[ \int d\mathbf{r}\, W(\mathbf{r} - \mathbf{r}_i) = 1 . \]

dr W (r − ri ) = 1 .

Note that the density around particle i includes i itself: hence ρi > 0. A convenient choice for W [694] is given in Appendix H.

Mesoscopic fluid models Chapter | 16 567

In addition, the entropy production in a volume element is given by the standard (macroscopic) equations that follow from irreversible thermodynamics [57], which describe the entropy production due to the (bulk and shear) viscous dissipation and heat flux. Again, the parameters determining these terms can be chosen from the macroscopic transport equations (subject to some constraints). Note that in this approach the entropy flux appears naturally as the advective term that describes how the entropy Si of the particles is transported. The important feature of SDPD is that it yields a closed set of equations that describes flow and dissipation and, importantly, like DPD also the effect of thermal noise—so that SDPD describes the dynamics of a hydrodynamic fluid subject to thermal fluctuations—and capable of driving the thermal motion of solutes. The great advantage of SDPD is that it can account for a much wider range of transport phenomena (including thermal) than DPD, and that the parameters can be chosen such that the system can be designed to mimic a liquid with known transport properties and a known equation of state, at least in the limit where discretization errors become unimportant. However, when the aim of the simulation is to obtain a reasonable, coarsegrained description of a complex, molecular fluid, then the SDPD approach is less suited because all parameters in the model are essentially macroscopic and do not reflect any details of the microscopic intermolecular interactions. For some practical details, we refer the reader to a brief description in Appendix H. For a more extensive discussion, we refer to the paper by Español and Revenga [694] and to the comparative review by Español and Warren [690].

16.2 Multi-particle collision dynamics

The term Multi-particle Collision Dynamics (MPC) refers to a particularly simple class of mesoscopic fluid models. In MPC models, particles are propagated ballistically during a fixed time step, and then undergo a momentum- and energy-conserving collision with other particles in their neighborhood, defined by cells in a periodic lattice. A predecessor of the MPC scheme is the so-called Direct Simulation Monte Carlo (DSMC) method of Bird [698]. This method was developed to simulate the flow of dilute gases. In the DSMC method, particles move ballistically between collisions. Collisions are then carried out stochastically, that is: the number of collisions per time step is fixed by the known collision frequency at the specified density and temperature. To select collision partners, space is divided into cells. Collision partners are then randomly selected pairs within one cell. The precise collision dynamics depends on the molecular model that is used. However, in all cases, collisions conserve linear momentum and energy. For more details on the DSMC method, see refs. [699,700], and [701]. Malevanets and Kapral [676] built on the DSMC idea to make a simple model of the solvent in a complex liquid. The method of ref. [676] mixes a DSMC-style dynamics and normal MD, as the forces between the macromolecules, and between solute and solvent, are taken into account explicitly:


FIGURE 16.3 (a) In an SRD simulation, velocities of all particles in a cell, relative to the cell center-of-mass velocity are computed (black arrows in the central cell) and then uniformly rotated over an arbitrary angle α. The new relative velocities are indicated in gray. The uniform rotation of the relative velocities changes neither the total momentum nor the total kinetic energy of the particles in a given cell. (b) To ensure Galilean invariance, the mesh that defines the cell locations is shifted by a random vector (gray arrow) before every multi-particle collision. SRD collisions are then carried out between particles in the same new cells (dashed borders).

Molecular Dynamics is used to update the solvent and solute positions and momenta. The solvent-solvent interactions are represented by ballistic propagation interrupted by stochastic collisions: space is divided into cells, and solvent-solvent collisions take place inside these cells. However, unlike DSMC, ref. [676] uses many-body collisions between the solvent particles. During such collisions, the motion of the center of mass of the particles in a cell is left unchanged, but a uniform rotation over an angle α around a randomly chosen axis, is applied to the relative velocities of all particles within a given cell (see Fig. 16.3(a)). The rotations carried out in different cells are uncorrelated. As the rotation does not change the magnitude of the relative velocities, the total energy is conserved, while the conservation of the center-of-mass velocity takes care of the momentum conservation. These conservation laws are satisfied irrespective of the length of the SRD times steps. However, the transport properties of the SRD fluid do depend on the time step and on the average number of particles per cell —typically between 3 and 20 [677]. The SRD algorithm does not conserve angular momentum. Usually, this is not a problem, but in some cases, such as two-phase flow in a Couette cell [677], it can lead to unphysical results. If the mesh of the collision cells is kept fixed, the simulations will break Galilean invariance, in particular, if the mean-free path is less than the cell size. To restore Galilean invariance, Ihle and Kroll [702] proposed to apply a random shift to the mesh of collision cells before every collision (see Fig. 16.3(b)). There exist analytical expressions for the transport properties of the SRD gas, which are quite accurate for both high and low densities (see [677]). The

Mesoscopic fluid models Chapter | 16 569

values of the transport coefficients depend on density, temperature, time step, cell size, and on the value of the rotation angle α (see [677]). As the SRD gas is ideal, its equation of state is simply the ideal gas law. However, modifications of the SRD model that mimic non-ideal gases have been proposed (see e.g., [703]). A simple way to couple the dynamics of embedded objects, such as polymers, to the solvent is to include the update of the velocity of the monomers in the multi-particle collision step [676]. It is advisable to choose the monomeric mass equal to the average mass of solvent particles in a cell. After the SRD collision step, the internal dynamics of the polymer is updated using normal MD. The above approach is less suited for larger objects, such as colloids. In that case, it is better to describe the large particle as a hard object with walls that impose non-slip boundary conditions on the solvent. To model fluid flow in complex geometries, we must be able to implement boundary conditions at the wall: usually non-slip. This involves two modifications of the algorithm: first of all, SRD particles that hit a wall during their ballistic propagation are “bounced back” elastically in the direction where they came from. But if the boundaries run through cells of the mesh, the multiparticle collision step is also changed: for every collision, virtual SRD particles with a Maxwell-Boltzmann velocity distribution with zero average momentum are placed inside the part of a cell that is inside the wall. The number of these virtual particles is chosen such that the total number of particles in that cell equals the average [704]. For implementation details and a description of many variants of the MPC approach, see ref. [677].
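The multi-particle collision step described above is compact enough to show in full. The sketch below performs one SRD collision for the particles of a single cell; it is a minimal illustration (grid shifting, cell bookkeeping, and the coupling to embedded solutes are omitted), with the rotation angle `alpha` and the N × 3 array of cell velocities as inputs.

```python
import numpy as np

rng = np.random.default_rng()

def srd_collide_cell(velocities, alpha):
    """Rotate the relative velocities of all particles in one cell by alpha
    about a random axis; conserves the momentum and kinetic energy of the cell."""
    v_cm = velocities.mean(axis=0)                 # center-of-mass velocity
    dv = velocities - v_cm                         # relative velocities

    # Random unit vector for the rotation axis.
    axis = rng.normal(size=3)
    axis /= np.linalg.norm(axis)

    # Rodrigues' rotation formula applied to every relative velocity.
    c, s = np.cos(alpha), np.sin(alpha)
    dv_rot = (c * dv
              + s * np.cross(axis, dv)
              + (1.0 - c) * np.outer(dv @ axis, axis))
    return v_cm + dv_rot
```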

16.3 Lattice-Boltzmann method

The Lattice-Boltzmann (LB) method [705–707] is not a particle-based method, although it can be viewed as a pre-averaged version of a lattice-gas cellular automaton model of a fluid [708], which is a (discrete) particle-based model. The reason for including the LB method in this brief description of fluid models for mesoscopic, particle-based simulations is that if thermal fluctuations are added to the LB model of a fluid, it can account for both hydrodynamic interactions and the Brownian motion of dissolved particles. Although LB simulations have now moved away from the underlying Cellular-Automaton (CA) picture, it is still helpful to refer to that picture to explain the physical meaning of the LB method. In a CA model, particles live on links connecting different lattice sites. Every lattice point is connected by z links to neighboring lattice points, and possibly also to itself. In two dimensions, z is usually equal to 6 (on a triangular lattice), and in 3d, a commonly used model has z = 19 (an FCC lattice with 12 nearest neighbors, 6 next-nearest neighbors, and one self-connection). In a single time step, particles move along the link from their current lattice site to the corresponding link on the neighboring lattice site. "Rest particles" stay where they are. The propagation step is carried out for all particles simultaneously. The next step is the collision step. During collisions, the total number of particles and the total momentum (and, in certain models, the total energy) on a given lattice site are conserved, but apart from this constraint, particles can change their velocity, which is equal to the length of the link divided by the time step. There is considerable freedom in selecting the collision rules, as long as they maintain the full symmetry of the lattice. Moreover, the links must be chosen such that the viscosity of the lattice fluid, which is a fourth-rank tensor, can be made isotropic.

Cellular Automata models can mimic hydrodynamic behavior. However, CAs are "noisy" and suffer from a number of other practical drawbacks. Moreover, by construction, the lattice model lacks Galilean invariance. The Lattice-Boltzmann (LB) method was devised to overcome some (but not all) of the problems of lattice gas cellular automata. In its most naive version, one can view the LB model as a pre-averaged version of a lattice gas cellular automaton [705]. In this pre-averaging, the number of particles on a given link is replaced by the particle density on that link. Note that the particle number is either zero or one, but the density is a real number. In addition, the resulting equations are greatly simplified if the collision operator (i.e., the function that describes how the post-collision state of a lattice point depends on the pre-collision state) is linearized in the deviation of the particle densities from their (local) equilibrium value. Finally, there is no need to restrict the LB collision operators to forms that can be derived from an underlying cellular automaton [706,707]. It is, however, essential that the collision operator satisfies the conservation laws and the symmetries of the original model.

For the simulation of complex flows, the LB method is much more efficient than the original cellular automaton model. However, one aspect is missing in the LB approach, namely the intrinsic fluctuations that result from the discreteness in the number of particles. This problem was resolved by Adhikari et al. [709], who showed how fluctuations could be included in a consistent way in the LB models, such that all conservation laws are obeyed, and the fluctuations have all the characteristics of bona-fide thermal fluctuations. The fluctuating LB model of ref. [709] has all the features (hydrodynamic interactions, thermal fluctuations) needed to act as a cheap model of a solvent in a mesoscopic model of a (macro-molecular) solution [710]. Of course, the correct implementation of the boundary conditions between solvent and suspended particles requires special care, but for these aspects we refer the reader to a review by Ladd [711]. For other aspects of the LB method, the standard work on the subject is the book by Succi [678]. A minimal sketch of the basic streaming-and-collision update is given below.
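As an illustration of the linearized collision-and-streaming idea, the following Python sketch implements a minimal two-dimensional (D2Q9) lattice-Boltzmann model with the simplest (BGK, single-relaxation-time) collision operator. It is not the book's code: the relaxation time tau, the grid size, and the initial state are arbitrary choices, and no fluctuations, forcing, or boundary conditions are included.

```python
import numpy as np

# Standard D2Q9 lattice velocities and weights
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

def equilibrium(rho, u):
    """Second-order (in u) local equilibrium distribution for D2Q9."""
    cu = np.einsum('qd,xyd->xyq', c, u)                    # c_q . u
    usq = np.einsum('xyd,xyd->xy', u, u)[..., None]
    return rho[..., None] * w * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

def lb_step(f, tau):
    """One BGK collision + streaming step (periodic boundaries)."""
    rho = f.sum(axis=-1)                                   # local density
    u = np.einsum('xyq,qd->xyd', f, c) / rho[..., None]    # local velocity
    f += (equilibrium(rho, u) - f) / tau                   # relax towards equilibrium
    for q in range(9):                                     # stream along the links
        f[..., q] = np.roll(f[..., q], shift=tuple(c[q]), axis=(0, 1))
    return f

# Usage sketch: a quiescent, uniform fluid remains quiescent
nx = ny = 32
f = equilibrium(np.ones((nx, ny)), np.zeros((nx, ny, 2)))
for _ in range(100):
    f = lb_step(f, tau=0.8)
```

The viscosity of this model is controlled by the relaxation time tau; fluctuating and multi-relaxation-time variants replace the simple BGK collision term above.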

Part V

Appendices


Appendix A

Lagrangian and Hamiltonian equations of motion

Knowledge of Newton's equations of motion is sufficient to understand the basis of the Molecular Dynamics method. However, many of the more advanced simulation techniques make use of the Lagrangian or Hamiltonian formulations of classical mechanics. Here, we briefly sketch the relation between these different approaches (see also [712]). For a more detailed and more rigorous description of classical mechanics, the reader is referred to the book by Goldstein [54].

A.1 Action

The Lagrangian formulation of classical mechanics is based on a variational principle. The actual trajectory followed by a classical system in a time interval $\{t_b, t_e\}$, between an initial position $x_b$ and a final position $x_e$, is the one for which the action, $S$, is an extremum (usually, a minimum). The classical action $S$ for an arbitrary trajectory is defined as the time integral of the difference between the kinetic energy $K$ and the potential energy $U$ of the system, computed along that trajectory:
$$ S = \int_{t_b}^{t_e} dt\, [K - U]. \qquad (A.1.1) $$

Before considering the general Lagrangian equations of motion that follow from this extremum principle, let us first consider a few simple examples. The first case is that of a single particle that moves in the absence of an external potential, i.e., $U = 0$. As the particle has to move from $x_b$ to $x_e$ in a time interval $t_e - t_b$, we already know its average velocity: $v_{av}$. If the particle would always move with this average velocity, it would follow a straight trajectory that we denote by $\bar{x}(t)$. Let us denote the true trajectory of the particle by $x(t) = \bar{x}(t) + \eta(t)$, where $\eta(t)$ is, as yet, unknown. Then the velocity of the particle is the sum of the average velocity $v_{av}$ and the deviation from it, $\dot{\eta}(t)$: $v(t) = v_{av} + \dot{\eta}(t)$. By construction,
$$ \int dt\, \dot{\eta}(t) = 0. $$


In the present example, the potential energy is always zero, and hence the action $S$ is determined by the time integral of the kinetic energy:
$$ S = \frac{1}{2} m \int dt\, \left[ v_{av} + \dot{\eta}(t) \right]^2 = S_{av} + \frac{1}{2} m \int dt\, \dot{\eta}^2(t). $$
Since the last term can never be less than zero, the action has its minimum if $\dot{\eta}(t) = 0$ for all $t$. In other words, we recover the well-known result that, in the absence of external forces, the particle moves with constant velocity. This is Newton's first law.

Next, consider a particle moving in a one-dimensional potential $U(x)$. In that case, the action is
$$ S = \int_{t_b}^{t_e} dt \left[ \frac{1}{2} m \left( \frac{dx(t)}{dt} \right)^2 - U(x) \right]. $$
An arbitrary path, $x(t)$, can be written as the sum of the actual path that a classical particle will follow, $\bar{x}(t)$, plus the deviation from this path $\eta(t)$: $x(t) = \bar{x}(t) + \eta(t)$. As before, we impose the initial and final positions of the particle and hence $\eta(t_b) = \eta(t_e) = 0$. For paths $x(t)$ that are close to the actual path, we can expand the action in powers of the (small) quantity $\eta(t)$. Actually, as $\eta(t)$ is itself a function of $t$, such an expansion is called a functional expansion. The action is extremal if the leading (linear) terms in this functional expansion vanish. Let us now consider the functional expansion of the action around the action of the true path to linear order in $\eta(t)$:
$$ S = \int_{t_b}^{t_e} dt \left\{ \frac{1}{2} m \left[ \frac{d\bar{x}(t)}{dt} + \frac{d\eta(t)}{dt} \right]^2 - U[\bar{x}(t) + \eta(t)] \right\} $$
$$ \approx \int_{t_b}^{t_e} dt \left\{ \frac{1}{2} m \left[ \left( \frac{d\bar{x}(t)}{dt} \right)^2 + 2 \frac{d\bar{x}(t)}{dt} \frac{d\eta(t)}{dt} \right] - U(\bar{x}(t)) - \frac{\partial U(\bar{x})}{\partial x} \eta(t) \right\} $$
$$ = \bar{S} + \int_{t_b}^{t_e} dt \left[ m \frac{d\bar{x}(t)}{dt} \frac{d\eta(t)}{dt} - \frac{\partial U(\bar{x})}{\partial x} \eta(t) \right] $$
$$ = \bar{S} + \left[ m \frac{d\bar{x}(t)}{dt} \eta(t) \right]_{t_b}^{t_e} - \int_{t_b}^{t_e} dt \left[ m \frac{d^2\bar{x}(t)}{dt^2} + \frac{\partial U(\bar{x})}{\partial x} \right] \eta(t), $$
where the last step has been obtained via partial integration. Since, by definition, $\eta(t) = 0$ at the boundaries, the second term on the right-hand side vanishes. The action has its extremum if the integrand in the last line of the above equation vanishes for arbitrary $\eta(t)$. This condition can be satisfied if and only if
$$ m \frac{d^2 \bar{x}(t)}{dt^2} = -\frac{\partial U(\bar{x})}{\partial x}, \qquad (A.1.2) $$


which is Newton’s second law. In other words, Newton’s equations of motion can be derived from the statement that a particle follows a path for which the action is an extremum.
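As an illustrative numerical check (not part of the original text), one can discretize the action for a particle in a harmonic potential and verify that any perturbation of the classical path that vanishes at the endpoints increases $S$. The mass, spring constant, time interval, and perturbation below are arbitrary choices.

```python
import numpy as np

# Discretized action S = sum_i dt [ (m/2)((x_{i+1}-x_i)/dt)^2 - U(x_i) ]
# for a harmonic potential U(x) = 0.5*k*x^2 (illustrative sketch).
m, k, dt = 1.0, 1.0, 1e-3
t = np.arange(0.0, 1.0 + dt, dt)

def action(x):
    kinetic = 0.5 * m * np.sum(((x[1:] - x[:-1]) / dt) ** 2) * dt
    potential = np.sum(0.5 * k * x[:-1] ** 2) * dt
    return kinetic - potential

x_true = np.sin(t)                           # classical path between x(0)=0 and x(1)=sin(1)
eta = 0.05 * np.sin(np.pi * t / t[-1])       # perturbation vanishing at both endpoints

print(action(x_true))                        # extremal value of the discretized action
print(action(x_true + eta))                  # larger: the perturbed path has a higher action
```

For this short time interval the extremum is in fact a minimum, so the perturbed action is strictly larger than the classical one.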

A.2 Lagrangian

There would be little point in introducing this alternative expression of the laws of classical mechanics if it did not allow us to do more than simply rederive $F = ma$. In fact, the Lagrangian formulation of classical mechanics turns out to be very powerful. For one thing, the Lagrangian approach makes it easy to derive equations of motion in non-Cartesian coordinate frames. Suppose that we wish to use some generalized coordinate $q$ instead of the Cartesian coordinate $x$. For example, consider a pendulum of length $l$ in a uniform gravitational field. The angle that the pendulum makes with the vertical (i.e., with the direction of the gravitational field) can be used to specify its orientation. Since the path that the pendulum follows is clearly independent of the coordinates that we happen to use to specify its state, the action, $S$, should be the same:
$$ S = \int dt\, L(x, \dot{x}) = \int dt\, L(q, \dot{q}), \qquad (A.2.1) $$
where the quantity $L$ is called the Lagrangian. The Lagrangian is defined as the kinetic energy minus the potential energy¹:
$$ L \equiv K(\dot{q}) - U(q). \qquad (A.2.2) $$
We again introduce our actual path $\bar{q}(t)$ and the deviation $\eta(t)$ from it:
$$ q(t) = \bar{q}(t) + \eta(t), \qquad \dot{q}(t) = \dot{\bar{q}}(t) + \dot{\eta}(t). $$
We can write for the Lagrangian, $L$,
$$ L(q, \dot{q}) = L(\bar{q}, \dot{\bar{q}}) + \frac{\partial L(\bar{q}, \dot{\bar{q}})}{\partial \dot{q}} \dot{\eta}(t) + \frac{\partial L(\bar{q}, \dot{\bar{q}})}{\partial q} \eta(t). $$
As in the previous section, we use the functional expansion of $S$ in powers of $\eta(t)$ to derive an expression for the classical path. To this end, we substitute the Lagrangian in the expression for the action (A.2.1). Next, we write a possible path of the particle as the sum of the actual path and a correction $\eta(t)$. As before, we use partial integration and use the fact that $\eta(t)$ vanishes at the boundaries of the integration. It then follows that the action has an extremum if
$$ \int dt \left[ -\frac{d}{dt} \frac{\partial L(\bar{q}, \dot{\bar{q}})}{\partial \dot{q}} + \frac{\partial L(\bar{q}, \dot{\bar{q}})}{\partial q} \right] \eta(t) = 0, \qquad (A.2.3) $$

¹ The correct definition is more restrictive; see [54] for more details.


which is satisfied for arbitrary $\eta(t)$ if and only if
$$ -\frac{d}{dt} \frac{\partial L(\bar{q}, \dot{\bar{q}})}{\partial \dot{q}} + \frac{\partial L(\bar{q}, \dot{\bar{q}})}{\partial q} = 0. \qquad (A.2.4) $$

This is the Lagrangian equation of motion. To cast this equation of motion in a more familiar form, we introduce the generalized momentum $p$ associated with the generalized coordinate $q$:
$$ p \equiv \frac{\partial L(q, \dot{q})}{\partial \dot{q}}. \qquad (A.2.5) $$
Substitution of this expression into Eq. (A.2.4) yields
$$ \dot{p} = \frac{\partial L(q, \dot{q})}{\partial q}. \qquad (A.2.6) $$

As the above formulation is valid for any coordinate system, it should certainly hold for Cartesian coordinates. In these coordinates the Lagrangian reads
$$ L(x, \dot{x}) = \frac{1}{2} m\dot{x}^2 - U(x). $$
The momentum associated with $x$ is
$$ p_x = \frac{\partial L(x, \dot{x})}{\partial \dot{x}} = m\dot{x} $$
and the equation of motion is
$$ m\ddot{x} = -\frac{\partial U(x)}{\partial x}, $$

which is indeed the result we would obtain from Newton's equation of motion.

Illustration 24 (A pendulum in a gravitational field). Consider a simple pendulum of length $l$ with mass $m$ (see Fig. A.1). A uniform gravitational field is acting on the pendulum and the potential energy is a simple function of the angle $\theta$ that the pendulum makes with the vertical:
$$ U(\theta) = mgl\,[1 - \cos(\theta)]. $$
We wish to express the equations of motion in terms of the generalized coordinate $\theta$. The Lagrangian, $L$, is
$$ L = K - U = \frac{1}{2} m \left[ \dot{x}^2(t) + \dot{y}^2(t) \right] - U(\theta) = \frac{ml^2}{2} \dot{\theta}^2 - U(\theta). $$


FIGURE A.1 A simple pendulum of length $l$ with mass $m$.

The generalized momentum is defined as
$$ p_\theta = \frac{\partial L}{\partial \dot{\theta}} = ml^2 \dot{\theta} $$
and the equation of motion follows from Eq. (A.2.6):
$$ \dot{p}_\theta = -\frac{\partial U(\theta)}{\partial \theta} $$
or
$$ \ddot{\theta} = -\frac{1}{ml^2} \frac{\partial U(\theta)}{\partial \theta}. $$

A.3 Hamiltonian

Using the Lagrangian, we obtain equations of motion in terms of $q$ and $\dot{q}$. Often, it is convenient to express the equations of motion in terms of $q$ and its conjugate momentum $p$. To do this, we can perform a Legendre transformation²:
$$ \mathcal{H}(q, p) \equiv p\dot{q} - L(q, \dot{q}, t). \qquad (A.3.1) $$

² In thermodynamics, Legendre transforms are used to derive various thermodynamic potentials. For example, the energy $E$ is a natural function of the entropy $S$ and volume $V$: $E = E(S, V)$, i.e., in these variables, $E$ is a thermodynamic potential. In most practical applications, it is more convenient to have the temperature $T$ rather than the entropy $S$ as the independent variable. Since the temperature is the variable conjugate to the entropy ($\partial E/\partial S = T$), we can perform a Legendre transform to remove the $S$ dependence: $A \equiv E - TS$, yielding $dA = dE - d(TS) = -S\,dT - p\,dV$. For historical reasons, the Legendre transform linking the Lagrangian to the Hamiltonian has the opposite sign.


This equation defines the Hamiltonian $\mathcal{H}$ of the system. As $\mathcal{H}$ is a function of $q$, $p$, and, in general, also of $t$, it is clear that we can write an infinitesimal variation of $\mathcal{H}$ as
$$ d\mathcal{H}(q, p) = \frac{\partial \mathcal{H}}{\partial p} dp + \frac{\partial \mathcal{H}}{\partial q} dq + \frac{\partial \mathcal{H}}{\partial t} dt. \qquad (A.3.2) $$
But, using the definition of $\mathcal{H}$, we can also write
$$ d\mathcal{H}(q, p) = d(p\dot{q}) - dL(q, \dot{q}) = p\,d\dot{q} + \dot{q}\,dp - \left( \frac{\partial L}{\partial q} dq + \frac{\partial L}{\partial \dot{q}} d\dot{q} + \frac{\partial L}{\partial t} dt \right) $$
$$ = p\,d\dot{q} + \dot{q}\,dp - \dot{p}\,dq - p\,d\dot{q} - \frac{\partial L}{\partial t} dt = \dot{q}\,dp - \dot{p}\,dq - \frac{\partial L}{\partial t} dt, $$
where we have used the definitions of $p$ and $\dot{p}$, Eqs. (A.2.5) and (A.2.6), respectively. It then follows directly that
$$ \frac{\partial \mathcal{H}}{\partial p} = \dot{q} \qquad (A.3.3) $$
$$ \frac{\partial \mathcal{H}}{\partial q} = -\dot{p}. \qquad (A.3.4) $$

These are the desired equations of motion in terms of $q, p$. For most systems that we consider in this book, the Lagrangian does not explicitly depend on time. Under those circumstances, the Hamiltonian is conserved. This follows directly from the equations of motion:
$$ \frac{d\mathcal{H}(q, p)}{dt} = \frac{\partial \mathcal{H}}{\partial p} \dot{p} + \frac{\partial \mathcal{H}}{\partial q} \dot{q} = -\frac{\partial \mathcal{H}}{\partial p} \frac{\partial \mathcal{H}}{\partial q} + \frac{\partial \mathcal{H}}{\partial q} \frac{\partial \mathcal{H}}{\partial p} = 0. $$
This conservation law expresses the fact that, in a closed system, the total energy is conserved. In Cartesian coordinates, the Hamiltonian can be written as
$$ \mathcal{H}(x, p_x) = \dot{x} p_x - L(x, \dot{x}) = m\dot{x}^2 - \frac{1}{2} m\dot{x}^2 + U(x) = \frac{1}{2m} p_x^2 + U(x), $$


and the Hamiltonian equations of motion reduce to Newton's equations
$$ \dot{x} = \frac{\partial \mathcal{H}}{\partial p_x} = \frac{p_x}{m}, \qquad \dot{p}_x = -\frac{\partial \mathcal{H}}{\partial x} = -\frac{\partial U(x)}{\partial x}. $$

The Hamiltonian equations of motion are two first-order differential equations, one for $p$ and one for $q$. In contrast, the Lagrangian formalism yields a single second-order equation. However, both formalisms yield identical results. The choice between the two is dictated by considerations of mathematical convenience.

Example 28 (A pendulum in a gravitational field: Part II). We consider again the simple pendulum in a uniform gravitational field, introduced in Illustration 24:
$$ U(\theta) = mgl\,[1 - \cos(\theta)], $$
where $\theta$ is the angle that the pendulum makes with the vertical and $g$ the gravitational acceleration. In Illustration 24 we derived the equations of motion from the Lagrangian in terms of a second-order differential equation in $\theta$. Now we will use Hamilton's formulation. The Lagrangian is
$$ L(\theta, \dot{\theta}) = K - U = \frac{ml^2}{2} \dot{\theta}^2 - U(\theta). $$
The Lagrangian depends on the variables $\theta$ and $\dot{\theta}$, and in the Hamiltonian language we want to express the equations of motion in terms of $\theta$ and its conjugate momentum $p_\theta$. This conjugate momentum is defined by Eq. (A.2.5):
$$ p_\theta \equiv \frac{\partial L(\theta, \dot{\theta})}{\partial \dot{\theta}} = ml^2 \dot{\theta}. $$
The Hamiltonian follows from the Legendre transformation (A.3.1):
$$ \mathcal{H} = p_\theta \dot{\theta} - L(\theta, \dot{\theta}) = \frac{p_\theta^2}{2ml^2} + U(\theta) = \frac{1}{2} ml^2 \dot{\theta}^2 + U(\theta), $$
which is, of course, equal to the total energy of the pendulum. The equations of motion follow from Eqs. (A.3.3) and (A.3.4):
$$ \dot{\theta} = \frac{\partial \mathcal{H}}{\partial p_\theta} = \frac{p_\theta}{ml^2} $$


$$ \dot{p}_\theta = -\frac{\partial \mathcal{H}}{\partial \theta} = -\frac{dU(\theta)}{d\theta}, $$

which are the desired equations of motion in terms of two first-order differential equations.
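As an illustrative sketch (not from the book), the two first-order equations can be integrated numerically, for instance with the symplectic velocity-Verlet scheme, and the conserved Hamiltonian can be monitored. The values of $m$, $l$, $g$, the time step, and the initial conditions are arbitrary choices.

```python
import numpy as np

# Integrate d(theta)/dt = p/(m*l**2), dp/dt = -m*g*l*sin(theta) with velocity Verlet.
m, l, g, dt = 1.0, 1.0, 9.81, 1e-3

def force(theta):                       # -dU/dtheta
    return -m * g * l * np.sin(theta)

def hamiltonian(theta, p):
    return p**2 / (2 * m * l**2) + m * g * l * (1 - np.cos(theta))

theta, p = 0.5, 0.0                     # initial conditions
E0 = hamiltonian(theta, p)
for step in range(10000):
    p += 0.5 * dt * force(theta)        # half kick
    theta += dt * p / (m * l**2)        # drift
    p += 0.5 * dt * force(theta)        # half kick

print(abs(hamiltonian(theta, p) - E0))  # energy drift remains small
```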

A.4 Hamilton dynamics and statistical mechanics

The choice between the Hamiltonian and Lagrangian formulations of classical mechanics is determined by considerations of convenience. One example of a case where the Lagrangian formalism is more convenient is in the derivation of the equations of motion of a system with constraints (see section 14.1). On the other hand, the Hamiltonian expressions are to be used when establishing the connection with statistical mechanics (see Chapter 2).

A.4.1 Canonical transformation

In the Hamiltonian formulation, the generalized coordinates and momenta are independent variables. One can therefore introduce a transformation of both variables simultaneously. For example, the transformation of the coordinates $q, p$ to $Q, P$ is denoted by
$$ Q = Q(q, p), \qquad P = P(q, p) \qquad (A.4.1) $$
and the inverse transformation, $Q, P$ into $q, p$, by
$$ q = q(Q, P), \qquad p = p(Q, P). \qquad (A.4.2) $$
Obviously, the value of any function of the phase-space coordinates is unaffected by the coordinate transformation. In the case of the Hamiltonian, this implies that
$$ \mathcal{H}(q, p) \equiv \mathcal{H}[Q(q, p), P(q, p)] \equiv \mathcal{H}'(Q, P). \qquad (A.4.3) $$
In general, the equations of motion in the new coordinates are not of the canonical form, unless the coordinate transformation is canonical.³ If the coordinate transformation is canonical, the equations of motion for the new phase-space coordinates $Q, P$ are
$$ \dot{Q} = \left( \frac{\partial \mathcal{H}'(Q, P)}{\partial P} \right) \qquad (A.4.4) $$
$$ \dot{P} = -\left( \frac{\partial \mathcal{H}'(Q, P)}{\partial Q} \right). \qquad (A.4.5) $$

³ As we assume that time does not appear explicitly in these equations, we are defining a so-called restricted canonical transformation.

From Eq. (A.4.1) and the Hamilton equations of motion for the coordinates $q, p$, it follows that
$$ \dot{Q} = \left( \frac{\partial Q(q, p)}{\partial q} \right) \dot{q} + \left( \frac{\partial Q(q, p)}{\partial p} \right) \dot{p} = \left( \frac{\partial Q(q, p)}{\partial q} \right) \left( \frac{\partial \mathcal{H}(q, p)}{\partial p} \right) - \left( \frac{\partial Q(q, p)}{\partial p} \right) \left( \frac{\partial \mathcal{H}(q, p)}{\partial q} \right). $$
Using Eq. (A.4.3), we can write
$$ \left( \frac{\partial \mathcal{H}'(Q, P)}{\partial P} \right) = \left( \frac{\partial \mathcal{H}(q, p)}{\partial p} \right) \left( \frac{\partial p(Q, P)}{\partial P} \right) + \left( \frac{\partial \mathcal{H}(q, p)}{\partial q} \right) \left( \frac{\partial q(Q, P)}{\partial P} \right). $$
This equation can only be equal to expression (A.4.4) for $\dot{Q}$ if
$$ \left( \frac{\partial Q(q, p)}{\partial q} \right) = \left( \frac{\partial p(Q, P)}{\partial P} \right) \quad \text{and} \quad \left( \frac{\partial Q(q, p)}{\partial p} \right) = -\left( \frac{\partial q(Q, P)}{\partial P} \right). \qquad (A.4.6) $$
Similarly, we can start with $\dot{P}$, and derive two other conditions:
$$ \left( \frac{\partial P(q, p)}{\partial q} \right) = -\left( \frac{\partial p(Q, P)}{\partial Q} \right) \quad \text{and} \quad \left( \frac{\partial P(q, p)}{\partial p} \right) = \left( \frac{\partial q(Q, P)}{\partial Q} \right). \qquad (A.4.7) $$
These two equations define the condition for a canonical transformation.

A.4.2 Symplectic condition

We can express the above conditions for a canonical transformation in a single equation, by using a matrix notation. Let $\Gamma$ be a $2dN$-dimensional vector containing the generalized coordinates $q_i$ and momenta $p_i$ of the $N$ particles in $d$ dimensions (see Section 2.5.1). Hamilton's equations of motion (A.3.3) and (A.3.4) can be written as
$$ \dot{\Gamma} = \omega \frac{\partial \mathcal{H}}{\partial \Gamma}, $$
where $\omega$ is an antisymmetric matrix defined as
$$ \omega = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}. \qquad (A.4.8) $$
In a similar way, we can define $\xi$ to be the $2dN$-dimensional vector containing the generalized coordinates $Q_i$ and $P_i$. Using the matrix notation, the transformation (A.4.1) from $q, p$ to $Q, P$ is written as $\xi = \xi(\Gamma)$. For the time derivatives of $\xi$, we can write
$$ \dot{\xi} = M \dot{\Gamma}, $$
where $M$ is the Jacobian matrix of the transformation. The elements of this matrix are
$$ M_{ij} = \frac{\partial \xi_i}{\partial \Gamma_j}. \qquad (A.4.9) $$
We can write, using Eq. (A.4.8), for the time derivatives of $\xi$
$$ \dot{\xi} = M \omega \frac{\partial \mathcal{H}}{\partial \Gamma}. \qquad (A.4.10) $$

In a similar way, we can define the inverse transformation (A.4.2), $\Gamma = \Gamma(\xi)$. Since $\mathcal{H}(p, q) = \mathcal{H}'(P, Q)$, we can write
$$ \frac{\partial \mathcal{H}(\Gamma)}{\partial \Gamma_i} = \sum_j \frac{\partial \mathcal{H}'(\xi)}{\partial \xi_j} \frac{\partial \xi_j}{\partial \Gamma_i}. \qquad (A.4.11) $$
If we define the transposed matrix⁴ of $M$ as defined in Eq. (A.4.9),
$$ \tilde{M}_{ij} = \frac{\partial \xi_j}{\partial \Gamma_i}, $$
this allows us to rewrite Eq. (A.4.11) in matrix notation as
$$ \frac{\partial \mathcal{H}(\Gamma)}{\partial \Gamma} = \tilde{M} \frac{\partial \mathcal{H}'(\xi)}{\partial \xi}. \qquad (A.4.12) $$
If we combine Eqs. (A.4.10) and (A.4.12), we have
$$ \dot{\xi} = M \omega \tilde{M} \frac{\partial \mathcal{H}'}{\partial \xi}. $$
This expression for the equations of motion is valid for any set of variables $\xi$ that are being transformed (independently of time) from the set $\Gamma$. Such a transformation is canonical if the equations of motion in the new coordinates have the canonical form:
$$ \dot{\xi} = \omega \frac{\partial \mathcal{H}'}{\partial \xi}. $$
This can only be the case if $M$ satisfies the condition
$$ M \omega \tilde{M} = \omega. \qquad (A.4.13) $$
This condition is often called the symplectic condition. A matrix $M$ that satisfies this condition is called a symplectic matrix.⁵

⁴ One can obtain the transposed matrix of a given matrix $A$ by interchanging rows and columns, i.e., $\tilde{a}_{ij} = a_{ji}$.
⁵ To see that this condition is identical to Eqs. (A.4.6) and (A.4.7), we have to multiply this equation from the right with the inverse matrix of $\tilde{M}$: $M\omega = \omega\tilde{M}^{-1}$.

A.4.3 Statistical mechanics

Using the symplectic notation for a canonical transformation, we consider the implications for statistical mechanics. In the microcanonical ensemble, the classical partition function $\Omega$ of a three-dimensional atomic system is defined as
$$ \Omega_{N,V,E} = \frac{1}{h^{3N} N!} \int d\mathbf{p}^N d\mathbf{q}^N\, \delta\left( \mathcal{H}(p, q) - E \right), \qquad (A.4.14) $$
where $h$ is Planck's constant and the delta-function restricts the integration to the hypersurface in phase space defined by $\mathcal{H}(p, q) = E$. We can re-express this integral in terms of other phase-space coordinates, but then we have to take into account that a volume element in the two coordinate sets need not be the same. The volume element associated with $\Gamma$ is
$$ d\Gamma = dq_1 \ldots dq_N\, dp_1 \ldots dp_N $$
and that associated with $\xi$ is
$$ d\xi = dQ_1 \ldots dQ_N\, dP_1 \ldots dP_N. $$
These two volume elements are related via the Jacobian matrix of the transformation:
$$ d\Gamma = |\mathrm{Det}(M)|\, d\xi. \qquad (A.4.15) $$
This equation shows that, in general, a coordinate transformation will result in the appearance of a Jacobian in the partition function:
$$ \Omega_{N,V,E} = \frac{1}{h^{3N} N!} \int d\mathbf{P}^N d\mathbf{Q}^N\, |\mathrm{Det}(M)|\, \delta\left( \mathcal{H}'(P, Q) - E \right). \qquad (A.4.16) $$


When computing ensemble averages in coordinate systems other than the original Cartesian one, the Jacobian of the transformation, $M$, may be different from one, and should be taken into account. In what follows, we denote the Jacobian $|\mathrm{Det}(M)|$ by the symbol $\omega$. For a canonical transformation, i.e., one that obeys condition (A.4.13), the absolute value of the Jacobian is one. To derive this result, we take the determinant of both sides of the symplectic condition (A.4.13):
$$ \mathrm{Det}(M\omega\tilde{M}) = \mathrm{Det}(\omega) $$
$$ \mathrm{Det}^2(M)\,\mathrm{Det}(\omega) = \mathrm{Det}(\omega). $$
This equation can only be true if the determinant of $M$ is $\pm 1$, which implies that for a canonical transformation the absolute value of the Jacobian associated with this transformation must be one.

The natural time evolution in phase space of a classical system may be considered as a coordinate transformation: $\Gamma(t_0) \rightarrow \Gamma(t)$. One important property of a Hamiltonian system is that the natural time evolution corresponds to a symplectic coordinate transformation. We can consider the transformation from $\Gamma(t_0)$ to $\Gamma(t)$ as a sequence of infinitesimal transformations with time step $\delta t$. Suppose that we define the evolution of the coordinates during the time interval $\delta t$ as a transformation of coordinates from $\Gamma$ to $\xi$:
$$ \xi = \xi(\Gamma) = \Gamma(t + \delta t) = \Gamma(t) + \dot{\Gamma}(t)\,\delta t. $$
The Jacobian of this transformation is
$$ M \equiv \frac{\partial \xi}{\partial \Gamma} = 1 + \delta t\, \frac{\partial}{\partial \Gamma}\left( \omega \frac{\partial \mathcal{H}}{\partial \Gamma} \right) = 1 + \delta t\, \omega \frac{\partial^2 \mathcal{H}}{\partial \Gamma\, \partial \Gamma}, $$
where
$$ \left( \frac{\partial^2 \mathcal{H}}{\partial \Gamma\, \partial \Gamma} \right)_{ij} = \frac{\partial^2 \mathcal{H}}{\partial \Gamma_i\, \partial \Gamma_j}. $$


Taking into account that $\omega$ is an antisymmetric matrix, we can write for the transpose of the matrix $M$:
$$ \tilde{M} = 1 - \delta t\, \frac{\partial^2 \mathcal{H}}{\partial \Gamma\, \partial \Gamma}\, \omega. $$
Substitution of this expression for the Jacobian into the symplectic condition (A.4.13) yields (to first order in $\delta t$)
$$ M\omega\tilde{M} = \left( 1 + \delta t\, \omega \frac{\partial^2 \mathcal{H}}{\partial \Gamma\, \partial \Gamma} \right) \omega \left( 1 - \delta t\, \frac{\partial^2 \mathcal{H}}{\partial \Gamma\, \partial \Gamma}\, \omega \right) \approx \omega + \delta t\, \omega \frac{\partial^2 \mathcal{H}}{\partial \Gamma\, \partial \Gamma}\, \omega - \omega\, \delta t\, \frac{\partial^2 \mathcal{H}}{\partial \Gamma\, \partial \Gamma}\, \omega = \omega. $$
Hence the symplectic condition holds for the evolution of $\Gamma$ during an infinitesimal time interval. As we can consider the evolution of $\Gamma$ during a finite interval as a sequence of canonical transformations of infinitesimal steps, the total time evolution also satisfies the symplectic condition. One may view the Hamiltonian as the generator of a canonical transformation acting on all points in phase space.

As the Jacobian of a canonical transformation is equal to 1, the size of a volume element in phase space does not change during the natural time evolution of a Hamiltonian system. Moreover, the density $f(q(t), p(t))$ around any point in phase space also remains constant during the time evolution. To see this, consider a volume $V$ in phase space bounded by a surface $S$. During time evolution, the surface moves, and so do all points inside the surface. However, a point cannot cross the surface. The reason is simple: if two trajectories in phase space crossed, it would imply that two trajectories start from the same phase-space point. But this is impossible, as it would mean that a trajectory starting from this point is not uniquely specified by its initial conditions. Hence, the number of phase-space points inside any volume does not change in time. As the volume itself is also constant, this implies that the phase-space density (i.e., the number of points per unit volume) is constant. In other words: the phase-space density of a Hamiltonian system behaves like an incompressible fluid:
$$ \frac{df}{dt} = 0. \qquad (A.4.17) $$

While the exact solution of Hamilton's equations of motion will satisfy the incompressibility condition, discrete numerical schemes will, in general, violate it. As before, we can consider any numerical MD algorithm (e.g., Verlet, velocity Verlet, ...) as a transformation from $(q(t), p(t))$ to $(q(t + \Delta t), p(t + \Delta t))$. We can then compute the Jacobian of this transformation, and check whether it is equal to 1 (see sections 4.3 and 4.3.4). For all "good" algorithms to solve Newton's equations of motion, the Jacobian of the transformation from $(q(t), p(t))$ to $(q(t + \Delta t), p(t + \Delta t))$ is equal to 1; such algorithms are said to be "area-preserving." It should be noted that the symplectic condition implies more than just the area-preserving properties. Unfortunately, these other consequences do not have such a simple intuitive interpretation. When we say that it is desirable that an algorithm be symplectic, we mean more than that it should be area-preserving: it should really satisfy the symplectic condition. Fortunately, in many cases, the symplectic nature of an algorithm is easy to demonstrate by making use of the fact that any set of classical Hamiltonian equations of motion satisfies the symplectic condition. An algorithm that can be written as a sequence of exact time evolutions generated by simple Hamiltonians is, therefore, necessarily symplectic. An example is the Verlet algorithm. As discussed in section 4.3.4, this algorithm can be viewed as a sequence of exact propagations using either the kinetic part of the Hamiltonian or the potential part. Either propagation satisfies the symplectic condition. Hence the Verlet algorithm as a whole is symplectic. For an accessible discussion of symplectic dynamics in general, see ref. [713]. A discussion of symplectic integrators for Molecular Dynamics simulations can be found in ref. [714].
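The area-preserving property is easy to check numerically. The sketch below (not from the book; the oscillator parameters are arbitrary choices) evaluates the Jacobian of a single velocity-Verlet step for a 1D harmonic oscillator and compares it with a forward-Euler step, which is not area-preserving.

```python
import numpy as np

m, k, dt = 1.0, 1.0, 0.1   # harmonic oscillator H = p^2/(2m) + k*x^2/2

def verlet_step(x, p):
    p = p - 0.5 * dt * k * x          # half kick
    x = x + dt * p / m                # drift
    p = p - 0.5 * dt * k * x          # half kick
    return x, p

def euler_step(x, p):
    return x + dt * p / m, p - dt * k * x

def jacobian(step, x, p, eps=1e-6):
    """Numerical Jacobian d(x', p')/d(x, p) of a single map step."""
    J = np.zeros((2, 2))
    for j, d in enumerate([(eps, 0.0), (0.0, eps)]):
        plus = np.array(step(x + d[0], p + d[1]))
        minus = np.array(step(x - d[0], p - d[1]))
        J[:, j] = (plus - minus) / (2 * eps)
    return J

print(np.linalg.det(jacobian(verlet_step, 0.3, -0.2)))  # 1.0: area-preserving
print(np.linalg.det(jacobian(euler_step, 0.3, -0.2)))   # 1 + dt^2*k/m: not equal to 1
```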

Appendix B

Non-Hamiltonian dynamics

A systematic procedure for extending the techniques of classical statistical mechanics to non-Hamiltonian systems was proposed by Tuckerman et al. [267,715,716]. In the present appendix, we sketch the general approach for analyzing extended Lagrangian systems. We will, however, skip most of the derivations. For a more complete and more rigorous derivation, using the mathematical techniques of differential geometry, the reader is referred to the original references.

In general, the dynamics that results from solving non-Hamiltonian equations of motion is not area-preserving. As we have seen in Appendix A.4.3, solving the equations of motion can be considered as a coordinate transformation from the phase-space coordinates at time $t_0$ to those at time $t$. If the system is Hamiltonian, the time evolution of the system will change the shape of an infinitesimal volume element in phase space, but not its volume $d\Gamma$. In contrast, for a non-Hamiltonian system, we have to take into account the Jacobian of the transformation associated with the evolution $d\Gamma(t_0) \rightarrow d\Gamma(t)$:
$$ d\Gamma_t = J(\Gamma_t; \Gamma_0)\, d\Gamma_0, $$
where the subscript 0 indicates the phase-space volume at $t = t_0$ and $J$ is the determinant of the Jacobian matrix $M$ of the transformation. For convenience, we choose $t_0 = 0$. The motion in phase space of a Hamiltonian system is similar to that of an incompressible liquid: in time the volume of this "liquid" does not change. In contrast, a non-Hamiltonian system is compressible. This compressibility must be taken into account when considering the generalization of the Liouville theorem to non-Hamiltonian systems. The compressibility can be derived from the time dependence of the Jacobian
$$ \frac{dJ(\Gamma_t; \Gamma_0)}{dt} = \kappa(\Gamma_t, t)\, J(\Gamma_t; \Gamma_0), \qquad (B.0.1) $$
in which $\kappa(\Gamma_t, t)$, the phase-space compressibility of the dynamical system, is defined as
$$ \kappa(\Gamma_t, t) \equiv \nabla_\Gamma \cdot \dot{\Gamma}. \qquad (B.0.2) $$


Eq. (B.0.1) has as formal solution
$$ J(\Gamma_t; \Gamma_0) = \exp\left[ \int_0^t \kappa(\Gamma_s, s)\, ds \right]. $$
If we define $w(\Gamma_t, t)$ as the primitive function of $\kappa(\Gamma_t, t)$, then we can write
$$ J(\Gamma_t; \Gamma_0) = \exp\left[ w(\Gamma_t, t) - w(\Gamma_0, 0) \right] \equiv \frac{\sqrt{g(\Gamma_0, 0)}}{\sqrt{g(\Gamma_t, t)}}, $$
where the last line defines the quantity $\sqrt{g}$. Recall that
$$ d\Gamma_t = J(\Gamma_t; \Gamma_0)\, d\Gamma_0 = \frac{\sqrt{g(\Gamma_0, 0)}}{\sqrt{g(\Gamma_t, t)}}\, d\Gamma_0. $$
Hence,
$$ \sqrt{g(\Gamma_t, t)}\, d\Gamma_t = \sqrt{g(\Gamma_0, 0)}\, d\Gamma_0, $$
which defines an invariant measure in phase space. This result can be used to derive a new form of the Liouville equation for non-Hamiltonian systems. The important point here is that the phase-space distribution, $f(\Gamma)$, the function in which we are interested, which gives the probability density in phase space, should be kept separate from the phase-space metric, $\sqrt{g}$, which ensures that the volume of phase space of a non-Hamiltonian system is invariant under time evolution:
$$ \frac{\partial \left( f\sqrt{g} \right)}{\partial t} + \nabla_\Gamma \cdot \left( f\sqrt{g}\, \dot{\Gamma} \right) = 0. \qquad (B.0.3) $$
The expression corresponding to an ensemble average is
$$ \langle A \rangle = \frac{\int d\Gamma\, \sqrt{g(\Gamma)}\, A(\Gamma)\, f(\Gamma)}{\int d\Gamma\, \sqrt{g(\Gamma)}\, f(\Gamma)}. \qquad (B.0.4) $$
Assuming that there are $n_c$ conservation laws, $\Lambda_k(\Gamma) = C_k$ for $k = 1, \ldots, n_c$, the partition function of the non-Hamiltonian system is given by
$$ \Omega(C_1, \ldots, C_{n_c}) = \int d\Gamma\, \sqrt{g(\Gamma)}\, \prod_{k=1}^{n_c} \delta\left( \Lambda_k(\Gamma) - C_k \right). \qquad (B.0.5) $$

In many applications, one obtains the correct (N V T or N P T ) partition function from the above “microcanonical” partition function, by carrying out the integration over the unphysical variables that have been introduced to represent the effect of a thermostat or barostat. In order to do this properly, it is essential


to identify all conservation laws. Moreover, it is useful to eliminate from the analysis all those coordinates that are linearly dependent on other variables, as well as variables that are "driven." Variables are called "driven" when they do not influence the time evolution of the physical variables of interest in the system (and are not coupled to them through a conservation law), even though their own time evolution may depend on these physical variables.


Appendix C

Kirkwood-Buff relations

C.1 Structure factor for mixtures

In section 5.1.7.1 we discussed the relation between the structure factor $S(q)$, Eq. (5.1.40), of a one-component system and the mean-squared value of the Fourier transform of the particle density (Eq. (5.1.38)):
$$ \rho(\mathbf{q}) = \sum_{i=1}^{N} e^{i\mathbf{q}\cdot\mathbf{r}_i} = \int_V d\mathbf{r}\, \rho(\mathbf{r})\, e^{i\mathbf{q}\cdot\mathbf{r}}. $$
For an $n$-component system, we can generalize this relation to yield expressions for the partial structure factors $S_{ab}(\mathbf{q})$ that measure the cross-correlations between fluctuations in the Fourier transforms of the densities of species $a$ and $b$:
$$ S_{ab}(\mathbf{q}) = \frac{1}{\sqrt{\langle N_a \rangle \langle N_b \rangle}} \left\langle \delta\rho_a(\mathbf{q})\, \delta\rho_b(-\mathbf{q}) \right\rangle = \frac{1}{\sqrt{\langle N_a \rangle \langle N_b \rangle}} \int_V d\mathbf{r} \int_V d\mathbf{r}' \left\langle \delta\rho_a(\mathbf{r})\, \delta\rho_b(\mathbf{r}') \right\rangle e^{i\mathbf{q}\cdot(\mathbf{r} - \mathbf{r}')}, \qquad (C.1.1) $$
where $\delta\rho(\mathbf{q})$ denotes the fluctuation of $\rho(\mathbf{q})$ around its average value. If we take the limit $q \to 0$ in the second half of Eq. (C.1.1), we get:
$$ \lim_{q \to 0} S_{ab}(\mathbf{q}) = \frac{\left\langle \Delta N_a \Delta N_b \right\rangle}{\sqrt{\langle N_a \rangle \langle N_b \rangle}}, \qquad (C.1.2) $$
where $\Delta N_a$ denotes the fluctuation in the total number of particles of species $a$ in the system. Following Kirkwood and Buff [717], we show that Eq. (C.1.2) has a direct thermodynamic interpretation, which provides a powerful route to determine the composition-dependence of chemical potentials in solution. For a multi-component system, we can generalize the expression for the Grand-Canonical partition function, $\Xi$, Eq. (2.3.19), to
$$ \Xi(\{\mu\}, V, T) \equiv \sum_{N_1, N_2, \cdots, N_n = 0}^{\infty} \prod_{a=1}^{n} \exp\left( \beta\mu_a N_a \right)\, e^{-\beta F(\{N\}, V, T)}, \qquad (C.1.3) $$
where $\{\mu\}$ denotes $\mu_1, \mu_2, \cdots, \mu_n$ and $\{N\}$ stands for $N_1, N_2, \cdots, N_n$. The Grand Potential, $\Omega = \Omega(\{\mu\}, V, T)$, is given by
$$ \Omega = -k_B T \ln \Xi(\{\mu\}, V, T). $$


From Eq. (C.1.3), it then follows that
$$ \left( \frac{\partial \Omega}{\partial \mu_a} \right) = -\langle N_a \rangle \qquad (C.1.4) $$
and
$$ \left( \frac{\partial^2 \Omega}{\partial \mu_a\, \partial \mu_b} \right) = -\left( \frac{\partial \langle N_a \rangle}{\partial \mu_b} \right) = -\beta \left\langle \Delta N_a \Delta N_b \right\rangle_{\{\mu\}, V, T}. \qquad (C.1.5) $$
Comparing Eq. (C.1.5) with Eq. (C.1.2) indicates that there is a close relation between the behavior of the structure factors $S_{ab}(\mathbf{q})$ in the limit $q \to 0$, and the thermodynamic derivatives of the Grand Potential. Kirkwood and Buff were the first to propose these relations [717]. However, they also expressed their results in terms of integrals over the radial distribution functions $g_{ab}(r)$, and many simulation studies use the $g(r)$-approach to compute the composition-dependence of chemical potentials. The $g(r)$-based relations are correct in principle, but as explained below Eq. (5.1.42), they are very dangerous to use in simulations on small (and not even very small) systems. It is, therefore, better to stick with Eq. (C.1.2) [718].¹

We still have to explain why the relations Eq. (C.1.5) are important. The key reason is that they allow us to compute the variation with the composition of the chemical potentials of the various species in a multi-component mixture under conditions where particle-insertion methods (see section 8.5.1) fail. We first note that at constant $T$ and $V$:
$$ d\Omega = -\sum_{r=1}^{n} \langle N_r \rangle\, d\mu_r $$
and, hence,
$$ \left( \frac{\partial \Omega}{\partial N_a} \right)_{T,V,N'} = -\sum_{r=1}^{n} \langle N_r \rangle \left( \frac{\partial \mu_r}{\partial N_a} \right)_{T,V,N'}, $$
where $N'$ and $\mu'$ denote the set of all $\{N\}$ and $\{\mu\}$ that are not being varied in that particular differentiation (i.e., we use the same symbol for different sets). It then follows that
$$ \left( \frac{\partial^2 \Omega}{\partial N_a\, \partial N_b} \right)_{T,V,N'} = \sum_{r,s=1}^{n} \left( \frac{\partial^2 \Omega}{\partial \mu_r\, \partial \mu_s} \right)_{T,V,\mu'} \left( \frac{\partial \mu_r}{\partial N_a} \right)_{T,V,N'} \left( \frac{\partial \mu_s}{\partial N_b} \right)_{T,V,N'} - \sum_{r=1}^{n} \langle N_r \rangle \left( \frac{\partial^2 \mu_r}{\partial N_a\, \partial N_b} \right)_{T,V,N'}. \qquad (C.1.6) $$

¹ Ref. [719] describes an alternative approach to exploit the Kirkwood-Buff relations in systems where the total numbers of molecules of each species are fixed.


From the Gibbs-Duhem equation ($d\Omega_{V,T} = -\sum_{r=1}^{n} N_r\, d\mu_r$) it follows that
$$ \left( \frac{\partial^2 \Omega}{\partial N_a\, \partial N_b} \right)_{T,V,N'} = -\left( \frac{\partial \mu_a}{\partial N_b} \right)_{T,V,N'} - \sum_{r=1}^{n} N_r \left( \frac{\partial^2 \mu_r}{\partial N_a\, \partial N_b} \right)_{T,V,N'}. \qquad (C.1.7) $$

Combining Eqs. (C.1.6) and (C.1.7), we get
$$ \left( \frac{\partial \mu_a}{\partial N_b} \right)_{T,V,N'} = -\sum_{r,s=1}^{n} \left( \frac{\partial^2 \Omega}{\partial \mu_r\, \partial \mu_s} \right)_{T,V,\mu'} \left( \frac{\partial \mu_r}{\partial N_a} \right)_{T,V,N'} \left( \frac{\partial \mu_s}{\partial N_b} \right)_{T,V,N'}. \qquad (C.1.8) $$
Note that Eq. (C.1.8) has the form of a matrix equation $A_{ab} = A_{ar} B_{rs} A_{sb}$, where
$$ A_{ab} = A_{ba} = \left( \frac{\partial \mu_a}{\partial N_b} \right)_{T,V,N'}, $$
and $B_{rs} = B_{sr} = \beta \left\langle \Delta N_r \Delta N_s \right\rangle_{\{\mu\},V,T}$. But then, $A = B^{-1}$ or, in compact notation,
$$ \beta \sum_b \left\langle \Delta N_b \Delta N_s \right\rangle_{T,V,\mu} \left( \frac{\partial \mu_r}{\partial N_b} \right)_{T,V,N'} = \delta_{rs}, \qquad (C.1.9) $$

where δrs is the Kronecker δ-function. In other words, once we have computed the matrix of cross-correlations in the number fluctuations, then we know the variation with the composition of the chemical potentials of all species. Note that the A-matrices refer to an ensemble where T , V , and N  are fixed, whereas B expresses the fluctuations in an ensemble at constant μ, V , T .

C.2 Kirkwood-Buff in simulations

To compute the phase behavior of mixtures, we need to know the variation of the chemical potentials with composition. For dense liquids, we cannot compute $\mu_i$ using the particle-insertion method (8.5.1), or using grand-canonical simulations (6.5), because the probability of successful insertions becomes very small. It is under such conditions that Eq. (C.1.2) becomes useful: if the number of particles of the different species is fixed, $S_{ab}(q = 0)$ vanishes identically, but we can compute $S_{ab}(q)$ for small $q$ and extrapolate to $q \to 0$; this limit is well defined and, apart from possible finite-size effects, equal to the desired value. A minimal sketch of this procedure is given below.
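The following Python sketch (not from the book; function names, the choice of wavevectors, and the input arrays are our own assumptions) estimates $S_{ab}(\mathbf{q})$ for a binary mixture from the Fourier components of the species densities, Eq. (C.1.1), for the smallest wavevectors compatible with a cubic periodic box of side L.

```python
import numpy as np

def partial_structure_factor(pos_a, pos_b, L, n_max=5):
    """S_ab(q) for q = 2*pi*n/L along x, for a single configuration (sketch)."""
    qs, s_ab = [], []
    for n in range(1, n_max + 1):
        q = 2.0 * np.pi * n / L * np.array([1.0, 0.0, 0.0])
        rho_a = np.sum(np.exp(1j * pos_a @ q))     # rho_a(q)
        rho_b = np.sum(np.exp(1j * pos_b @ q))     # rho_b(q)
        # For q != 0 the average of rho(q) vanishes, so delta rho = rho
        s = (rho_a * np.conj(rho_b)).real / np.sqrt(len(pos_a) * len(pos_b))
        qs.append(np.linalg.norm(q))
        s_ab.append(s)
    return np.array(qs), np.array(s_ab)
```

In practice, one averages $S_{ab}(\mathbf{q})$ over many configurations and over all wavevectors of equal magnitude before extrapolating the small-q values to $q = 0$, and then inverts the resulting fluctuation matrix as in Eq. (C.1.9).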


Appendix D

Non-equilibrium thermodynamics

D.1 Entropy production

When a system relaxes from a non-equilibrium situation, entropy is produced. Irreversible thermodynamics, as formulated by Onsager [55,56], establishes relations between the different contributions to the entropy production. As a starting point, we, therefore, need an expression for the entropy production. The "canonical" derivation for the expression of the entropy production is given in the book on Non-equilibrium Thermodynamics by De Groot and Mazur [57]. However, the derivation of the expression for entropy production in ref. [57], whilst complete, is a bit daunting. Here, we have opted for a different route: it is less complete, but it is quick, and it makes it easier to understand what is going on.

Let us first briefly look at the entropy generation that takes place if we bring two sub-systems (I and II) of a closed system into contact. The subsystems can exchange energy and particles. We ignore the fact that the medium may flow, and that is the price we have to pay for keeping things simple. In reality, viscous flow is an important non-equilibrium phenomenon, and the way to include it in the expression for the entropy production is described in ref. [57]. The change of the entropy $S^I$ of system I due to an infinitesimal exchange would be:
$$ dS^I = \frac{1}{T^I} dU^I - \sum_i \frac{\mu_i^I}{T^I} dN_i^I. \qquad (D.1.1) $$
As the total system is closed, we have $dU^I = -dU^{II}$ and $dN_i^I = -dN_i^{II}$. Hence, the total change in entropy is
$$ dS^{total} = dS^I + dS^{II} = \left( \frac{1}{T^I} - \frac{1}{T^{II}} \right) dU^I - \sum_i \left( \frac{\mu_i^I}{T^I} - \frac{\mu_i^{II}}{T^{II}} \right) dN_i^I. \qquad (D.1.2) $$
This equation shows that entropy is produced if there is an energy flux in a system with a gradient in $1/T$ or if there is a particle flux in a system with a gradient in $\mu_i/T$. Now consider the case that we have a thin slab of material with width $dx$ and cross-section $A$ and that heat flows into the system at temperature


$T(x)$ and out of it at temperature $T(x + dx)$. We denote the amount of heat transported per unit of time by $\dot{Q}$. The heat flux is then $j_q = \dot{Q}/A$. Similarly, the number of particles of type $i$ flowing through the system is $\dot{N}_i$ and the particle flux of species $i$ is $j_i = \dot{N}_i/A$. Note that we assume that the system as a whole is at rest and, therefore, the sum of all particle fluxes must vanish.¹ We can then write the rate of entropy production $\dot{S}$ as
$$ \dot{S} = \left( \frac{\partial\, 1/T(x)}{\partial x} \right) dx\, A\, j_q - \sum_i \left( \frac{\partial\, \mu_i(x)/T(x)}{\partial x} \right) dx\, A\, j_i, \qquad (D.1.3) $$
or, in terms of $\sigma$, the entropy production per unit volume:
$$ \sigma = \left( \frac{\partial\, 1/T(x)}{\partial x} \right) j_q - \sum_i \left( \frac{\partial\, \mu_i(x)/T(x)}{\partial x} \right) j_i. \qquad (D.1.4) $$
We can generalize this slightly by dropping the assumption that the transport is one-dimensional, and we can combine the gradient of the chemical potential with other forces $\mathbf{F}_i$ acting on species $i$, due to gradients in external potentials. We then obtain
$$ \sigma = \mathbf{J}_q \cdot \nabla \frac{1}{T} - \sum_i \mathbf{J}_i \cdot \frac{1}{T} \left( T \nabla \frac{\mu_i}{T} - \mathbf{F}_i \right). \qquad (D.1.5) $$

For reasons to be discussed below, it is convenient to separate the gradient of $\mu_i/T$ into a part that depends on the temperature gradient and a part that does not. If the chemical potential depends on temperature and pressure, we can use
$$ \left( \frac{\partial \beta\mu_i}{\partial \beta} \right)_{P, \{N_j\}} = h_i, \qquad (D.1.6) $$
where $\beta \equiv 1/k_B T$, and
$$ \sigma = \left( \mathbf{J}_q - \sum_i \mathbf{J}_i h_i \right) \cdot \nabla \frac{1}{T} - \sum_i \mathbf{J}_i \cdot \frac{1}{T} \left[ (\nabla \mu_i)_T - \mathbf{F}_i \right], \qquad (D.1.7) $$
and this allows us to define the 'non-diffusive', irreversible heat flow $\mathbf{J}'_q$:
$$ \mathbf{J}'_q \equiv \mathbf{J}_q - \sum_i \mathbf{J}_i h_i. \qquad (D.1.8) $$

¹ The most elegant choice for the particle fluxes would be the mass flux, in which case the condition that the sum of fluxes must vanish means that the center of mass is stationary. However, in practice, it makes little difference which fluxes we use. We will use number densities. The thing to bear in mind is that the definition of the chemical potential (e.g., per particle, per unit mass, or per mole) has to be consistent with the choice of the fluxes.


In what follows, we shall use the symbol $\mathbf{J}_h$ to denote the enthalpy flux:
$$ \mathbf{J}_h \equiv \sum_i \mathbf{J}_i h_i. \qquad (D.1.9) $$

D.1.1 Enthalpy fluxes

Why is it important to subtract the enthalpy fluxes? First of all: as de Groot and Mazur state, heat fluxes in mixtures are not uniquely defined. The same holds for enthalpy fluxes. The value of the enthalpy flux depends on where we choose the zero of the energy of the particles. We could, for instance, include the $m_i c^2$ associated with the rest-mass of particle $i$. This is not as silly as it seems because if we are, for instance, pumping UF$_6$ or a similar nuclear fuel, the energy flux does take the rest mass into account. The key point is that the "internal energy" choice makes a huge difference in what we mean by "transferring a particle at constant energy." If we move particle $i$ from reservoir 1 to reservoir 2, without moving energy, then removing particle $i$ from reservoir 1 will lead to a large increase in the entropy (because the energy that used to be in the rest mass of particle $i$ is now taken up by the bath of other particles), and conversely, the entropy of system 2 will decrease very substantially to compensate for the energy gain associated with the introduction of particle $i$. It is convenient to view this process as if we allow particle $i$ to retain its enthalpy but then add an enthalpy flux from 2 to 1 that exactly compensates for this. Schematically, the problem of transferring a particle at constant energy is sketched in Fig. D.1. In particular, we could consider the case where, in addition, there is a heat flux $\sum_i \mathbf{J}_i h_i$ from 1 to 2. This would correspond to the case that we do not force the particle to extract energy when it leaves 1 and to add energy when it arrives in 2. In this particular case, the contribution of diffusive enthalpy transport to the net 'irreversible' heat flow $\mathbf{J}'_q$ is zero. Importantly, if we thus allow particles to travel with their associated enthalpy, the entire problem of the reference state disappears. Of course, in reality, there still will be a heat flux, but this is the heat associated with thermal motion and intermolecular interactions.

To summarize: the rate of entropy production is given by
$$ \sigma = \mathbf{J}'_q \cdot \nabla \frac{1}{T} - \sum_i \mathbf{J}_i \cdot \frac{1}{T} \left[ (\nabla \mu_i)_T - \mathbf{F}_i \right], \qquad (D.1.10) $$
where the irreversible heat flow $\mathbf{J}'_q$ does not depend on the enthalpy carried by the particle fluxes $\mathbf{J}_i$.

FIGURE D.1 Transferring particles from system 1 at temperature $T_1$ to system 2 at temperature $T_2$, without changing the energy of either system, requires that every particle leaves all energy (even kinetic) behind in 1 and acquires its new energy at the expense of system 2.

D.2 Fluctuations

In the previous sections, we argued that the equilibrium state of a system corresponds to the state with the largest number of microscopic realizations. Moreover, we argued that we could link this probabilistic picture to the experimental observation that the entropy of a closed system is at a maximum if we identify the entropy $S$ with $k_B \ln \Omega$:
$$ S = k_B \ln \Omega. \qquad (D.2.1) $$
We now assume that the entropy is a unique function of $n$ linearly independent extensive variables, denoted by $\{A_1, A_2, \cdots, A_n\}$. In equilibrium, the entropy of an isolated system must be at a maximum. If $A_i$ denotes a non-conserved quantity (e.g., the degree of crystallinity), then the second law implies that, at equilibrium,
$$ \frac{\partial S_t}{\partial A_i} = 0, \qquad (D.2.2) $$

where $S_t$ denotes the entropy of the entire system. However, if $A_i$ denotes a conserved quantity, then it cannot change in a closed system. In that case, we can consider the entropy variation as an amount $dA_i$ is transferred from subsystem 1 to subsystem 2. Then
$$ \frac{\partial S_t}{\partial A_i^{(2)}} = \frac{\partial S_2}{\partial A_i^{(2)}} - \frac{\partial S_1}{\partial A_i^{(1)}} = 0, \qquad (D.2.3) $$

because $dA_i^{(1)} = -dA_i^{(2)}$. More generally, if we consider $m$ subsystems, there are $m - 1$ independent variables $A_i^{(n)}$ if $A$ is conserved, and $m$ if $A$ is not conserved. For the conserved quantities, Eq. (D.2.3) simply expresses the equality of $T$, $P/T$, or $\mu_i/T$ in the two subsystems. So, $S$ does not vary to linear order in $A_i$. However, to quadratic order, $S$ does vary with $A_i$.² Hence, to quadratic order, we can write
$$ S = S_0 + \frac{1}{2} \sum_{i,j} \frac{\partial^2 S_t}{\partial A_i\, \partial A_j} \alpha_i \alpha_j, \qquad (D.2.4) $$
where $\alpha_i \equiv A_i - A_i^0$. To keep our notation aligned with De Groot and Mazur [57], we write
$$ g_{ij} \equiv -\frac{\partial^2 S_t}{\partial A_i\, \partial A_j}. \qquad (D.2.5) $$

If the $\alpha_i$ are linearly independent, $g_{ij}$ is a symmetric, positive definite matrix. The probability to find a system in a state away from (but close to) the most probable state and characterized by the variables $\{\alpha_1, \alpha_2, \cdots, \alpha_k\}$ is
$$ P(\{\alpha_1, \alpha_2, \cdots, \alpha_k\}) \sim \exp\left( -\frac{1}{2k_B} \sum_{i,j} g_{ij} \alpha_i \alpha_j \right). \qquad (D.2.6) $$
As the second law states that a system will evolve from a less probable to a more probable state, we can define the "driving force" that causes the system to return to its most probable state. With every variable $\alpha_i$, there is an associated driving force $X_i$ given by
$$ X_i = -\frac{1}{2} \frac{\partial \sum_{i,j} g_{ij} \alpha_i \alpha_j}{\partial \alpha_i} = -\sum_j g_{ij} \alpha_j. \qquad (D.2.7) $$
To define the driving forces $X_i$, we made use of the assumption of local thermodynamic equilibrium, meaning that locally the relations between the entropy $S$ and the basic extensive thermodynamic quantities $U$, $V$, and $M_i$ (the mass of component $i$) are the same as during a quasi-static process. We do not know a priori how quickly a system returns to equilibrium. A separate set of constitutive equations is needed to describe the relation between the driving forces $X_j$ and the "fluxes" $J_i \equiv \dot{\alpha}_i$. We assume that, to lowest order, these relations are of the form:
$$ J_i = \sum_j L_{ij} X_j. \qquad (D.2.8) $$
At this stage, we know nothing about the transport coefficients $L_{ij}$. Onsager [55] assumed that the law describing the rate of decay of a variable $\alpha_i$ to its equilibrium value (zero) is valid for arbitrarily small $\alpha_i$ and therefore also describes the rate at which spontaneous fluctuations around equilibrium decay to their average value. This "Onsager Regression Hypothesis" provides a link between the macroscopic transport coefficient and the microscopic dynamics of a system in equilibrium. The regression hypothesis can be viewed as a generalization of Einstein's assumption that diffusive transport can be treated as a macroscopic manifestation of Brownian motion. From Eq. (D.2.4) it is easy to derive an expression for the entropy production $\dot{S}$:
$$ \dot{S} = -\sum_{i,j} g_{ij} \dot{\alpha}_i \alpha_j = \sum_i X_i \cdot \dot{\alpha}_i = \sum_{ij} L_{ij} X_i X_j. \qquad (D.2.9) $$

² We assume that $S$ is an analytical function of the $A_i$. This may seem reasonable, but it need not always be true.

D.3 Onsager reciprocal relations

To make contact with the decay of fluctuations in equilibrium, we now show that the equilibrium fluctuations $\alpha_i$ are only correlated to the fluctuations in the conjugate force $X_i$:
$$ \left\langle \alpha_i X_j \right\rangle = -k_B \delta_{ij}, \qquad (D.3.1) $$
where $k_B$ is Boltzmann's constant and $\delta_{ij}$ is the Kronecker delta. Equation (D.3.1) follows from the fact that
$$ \left\langle \alpha_i X_j \right\rangle = k_B \int d\{\alpha\}\, \alpha_i \frac{\partial P(\{\alpha\})}{\partial \alpha_j} = -k_B \int d\{\alpha\}\, \frac{\partial \alpha_i}{\partial \alpha_j}\, P(\{\alpha\}) = -k_B \delta_{ij}, \qquad (D.3.2) $$
where $\{\alpha\}$ denotes the set $\{\alpha_1, \alpha_2, \cdots, \alpha_k\}$. Using $J_i = \sum_j L_{ij} X_j$, it then follows that
$$ \left\langle \alpha_j(t) J_i(t) \right\rangle = -k_B L_{ij}. \qquad (D.3.3) $$

Equation (D.3.3) allows us to derive the Onsager reciprocal relations. But before we do so, note that microscopically, Eq. (D.3.3) is a bit strange, because a fluctuation at time $t$ will only result in a flux for $t > 0$. In fact, as $\alpha_j$ and $J_j$ have different time-reversal symmetries, the equal-time product actually vanishes. The quantity that is non-zero is $\left\langle \alpha_j(t) J_i(t + \epsilon) \right\rangle$. We will come back to this point later. For the time being, we continue with Eq. (D.3.3) and rewrite it as:
$$ \int_0^\infty dt\, \left\langle \alpha_j(0) \dot{J}_i(t) \right\rangle = -k_B L_{ij}, \qquad (D.3.4) $$
where we have made use of the fact that $\left\langle \alpha_j(0) \dot{J}_i(0) \right\rangle$ vanishes. Next, using time invariance,
$$ \int_0^\infty dt\, \left\langle \dot{\alpha}_j(0) J_i(t) \right\rangle = +k_B L_{ij}, \qquad (D.3.5) $$
and, using the fact that $J_j = \dot{\alpha}_j$:
$$ \int_0^\infty dt\, \left\langle J_j(0) J_i(t) \right\rangle = k_B L_{ij}. \qquad (D.3.6) $$

The important thing is that (classically):
$$ \left\langle J_j(0) J_i(t) \right\rangle = \left\langle J_i(t) J_j(0) \right\rangle = \left\langle J_i(0) J_j(-t) \right\rangle. $$
Here, and in what follows, we will limit the discussion to fluxes that all have the same time-reversal symmetry, in which case $\left\langle J_j(0) J_i(t) \right\rangle = \left\langle J_i(0) J_j(t) \right\rangle$, from which it then follows that
$$ L_{ij} = L_{ji}. \qquad (D.3.7) $$
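Eq. (D.3.6) is the Green-Kubo form used in practice: a transport coefficient is obtained as the time integral of an equilibrium flux autocorrelation function. The Python sketch below (not from the text; the synthetic flux, its correlation time, and all parameters are arbitrary choices) illustrates the numerical procedure on a signal whose autocorrelation integral is known.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, n, tau = 0.01, 100000, 0.5
# Ornstein-Uhlenbeck-like flux: <J(0)J(t)> ~ exp(-t/tau), so the time integral is tau
J = np.zeros(n)
for i in range(1, n):
    J[i] = J[i-1] * (1 - dt / tau) + np.sqrt(2 * dt / tau) * rng.normal()

def autocorrelation(x, n_max):
    x = x - x.mean()
    return np.array([np.mean(x[:len(x)-k] * x[k:]) for k in range(n_max)])

acf = autocorrelation(J, n_max=int(5 * tau / dt))
integral = np.sum(acf) * dt          # estimate of int_0^inf <J(0)J(t)> dt, close to tau
print(integral)
```

In a simulation, the same integral of the measured flux autocorrelation (divided by $k_B$, per Eq. (D.3.6)) yields the Onsager coefficient $L_{ij}$.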


Appendix E

Non-equilibrium work and detailed balance

The relation between free-energy differences and (non-equilibrium) work described in section 8.7 holds for all protocols or equations of motion that are Markovian and satisfy detailed balance for every step. To see what that means, consider a process where work is performed by changing a control parameter $\lambda$. For instance, $\lambda$ might be the system's volume, or it might be a parameter in the Hamiltonian of the system. We denote the original/final value of $\lambda$ as $\lambda_0/\lambda_K$ (we use the index $K$ to keep the subsequent notation consistent). The protocol to change $\lambda$ from $\lambda_0$ to $\lambda_K$ can be decomposed in two types of elementary steps: during the first, all phase-space coordinates of the system ($\Gamma$) are kept fixed, and $\lambda$ is changed by an amount $\Delta\lambda_i$, where $i$ labels the step; if there are $K$ such steps, then $i = \{1, 2, \cdots, K\}$. Note that steps that only change $\lambda$ are deterministic. However, there is also a second type of step in the protocol, namely one where $\lambda$ is kept fixed, but the system is allowed to evolve by its natural dynamics, exchanging energy with a thermostat. Examples of the evolution at constant $\lambda$ are sequences of one or more Monte Carlo moves, or of one or more time steps in a constant-temperature MD simulation.¹ In the language of Crooks [389], we denote the energy exchanged with the "reservoir" (i.e., the thermostat) by $Q$: it can be interpreted as the heat delivered to the system by the reservoir. Because of detailed balance, the ratio of the probability of the system to evolve at constant $\lambda$ (say $\lambda_i$) from phase-space coordinate $\Gamma$ to $\Gamma'$ to the probability of the reverse move, is given by
$$ \frac{P(\Gamma \to \Gamma'; \lambda_i)}{P(\Gamma' \to \Gamma; \lambda_i)} = e^{-\beta \Delta E(\lambda_i)}, \qquad (E.1.1) $$
where $\Delta E(\lambda_i) \equiv E(\Gamma'; \lambda_i) - E(\Gamma; \lambda_i)$. When we change $\lambda$ at constant $\Gamma$, we perform work on the system. We denote the work associated with a change of $\lambda$ from $\lambda_{i-1}$ to $\lambda_i$ at fixed $\Gamma_i$ by $w_i$. The total work $W$ done on the system as $\lambda$ is increased from $\lambda_0$ to $\lambda_K$ is then equal to $W = \sum_{i=1}^{K} w_i$.

¹ The argument can be generalized to the case where we also exchange volume or particles with a reservoir, but here we consider the simplest case.


If the time evolution of the system is Markovian, we can write the probability to evolve from $\Gamma_0$ at $\lambda_0$ to $\Gamma_K$ at $\lambda_K$ as
$$ \prod_{i=0}^{K-1} \left[ P(\Gamma_i \to \Gamma_{i+1}; \lambda_i) \times 1 \right]. $$
We included a factor one in every step to indicate that changing $\lambda$ at constant $\Gamma$ is deterministic. In what follows, we leave out this trivial factor. We can also write down the probability to evolve from $\Gamma_K$ at $\lambda_K$ to $\Gamma_0$ at $\lambda_0$ along the same path, as
$$ \prod_{i=K-1}^{0} P(\Gamma_{i+1} \to \Gamma_i; \lambda_i). $$
Because of Eq. (E.1.1), we have
$$ \prod_{i=0}^{K-1} P(\Gamma_i \to \Gamma_{i+1}; \lambda_i) = \prod_{i=K-1}^{0} \left[ P(\Gamma_{i+1} \to \Gamma_i; \lambda_i)\, e^{-\beta \Delta E(\lambda_i)} \right]. \qquad (E.1.2) $$
Note that $\prod_{i=K-1}^{0} e^{-\beta \Delta E(\lambda_i)}$ is equal to $e^{-\beta Q(\{\Gamma\})}$, where $Q$ is the total energy transferred from the reservoir for a sequence of states $\{\Gamma\} \equiv \Gamma_0, \Gamma_1, \cdots, \Gamma_K$. Note that $Q$ is not equal to $E(\Gamma_K; \lambda_K) - E(\Gamma_0; \lambda_0)$ because the energy of the system is also changed by performing work:
$$ E(\Gamma_K; \lambda_K) - E(\Gamma_0; \lambda_0) = W(\{\Gamma\}) + Q(\{\Gamma\}), \qquad (E.1.3) $$
which can be viewed as a microscopic version of the First Law of Thermodynamics. Note, in particular, that $W$ and $Q$ are path-dependent, but their sum is not. If we sample initial conditions from a Boltzmann distribution, then we can express the average of $e^{-\beta W}$ as
$$ \left\langle e^{-\beta W} \right\rangle = \sum_{\Gamma_0, \cdots, \Gamma_K} P_B(\Gamma_0, \lambda_0) \prod_{i=0}^{K-1} P(\Gamma_i \to \Gamma_{i+1}; \lambda_i)\, e^{-\beta W(\{\Gamma\})}, \qquad (E.1.4) $$
where $P_B(\Gamma_0, \lambda_0) = \exp\left( -\beta [E(\Gamma_0; \lambda_0) - F(\lambda_0)] \right)$, and the Helmholtz free energy $F$ is, as usual, given by
$$ \beta F(\lambda) = -\ln \sum_{\Gamma} e^{-\beta E(\Gamma; \lambda)}. \qquad (E.1.5) $$

Using Eq. (E.1.2), we can write
$$ \frac{\prod_{i=0}^{K-1} P(\Gamma_i \to \Gamma_{i+1}; \lambda_i)}{\prod_{i=K-1}^{0} P(\Gamma_{i+1} \to \Gamma_i; \lambda_i)} = e^{-\beta Q(\{\Gamma\})} \qquad (E.1.6) $$


or, using the definition of the Boltzmann weights:
$$ \frac{P_B(\Gamma_0, \lambda_0) \prod_{i=0}^{K-1} P(\Gamma_i \to \Gamma_{i+1}; \lambda_i)}{P_B(\Gamma_K, \lambda_K) \prod_{i=K-1}^{0} P(\Gamma_{i+1} \to \Gamma_i; \lambda_i)} = e^{-\beta Q(\{\Gamma\})}\, e^{+\beta[\Delta E - \Delta F]}, \qquad (E.1.7) $$
where $\Delta E \equiv E(\Gamma_K; \lambda_K) - E(\Gamma_0; \lambda_0)$, and $\Delta F \equiv F(\lambda_K) - F(\lambda_0)$. We can then write:
$$ P_B(\Gamma_0, \lambda_0) \prod_{i=0}^{K-1} P(\Gamma_i \to \Gamma_{i+1}; \lambda_i)\, e^{-\beta W(\{\Gamma\})} = P_B(\Gamma_K, \lambda_K) \prod_{i=K-1}^{0} P(\Gamma_{i+1} \to \Gamma_i; \lambda_i)\, e^{-\beta[Q(\{\Gamma\}) + W(\{\Gamma\}) - \Delta E]}\, e^{-\beta \Delta F} $$
$$ = P_B(\Gamma_K, \lambda_K) \prod_{i=K-1}^{0} P(\Gamma_{i+1} \to \Gamma_i; \lambda_i)\, e^{-\beta \Delta F}, \qquad (E.1.8) $$
where we have used Eq. (E.1.3). It then follows that Eq. (E.1.4) can be written as
$$ \left\langle e^{-\beta W} \right\rangle = e^{-\beta \Delta F} \sum_{\Gamma_0, \cdots, \Gamma_K} P_B(\Gamma_K, \lambda_K) \prod_{i=K-1}^{0} P(\Gamma_{i+1} \to \Gamma_i; \lambda_i) = e^{-\beta \Delta F}, \qquad (E.1.9) $$

where the last equality follows from the fact that all transition probabilities are normalized.
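As an illustrative check of this identity (not from the book; the harmonic potential, switching protocol, and all parameters are arbitrary choices), the sketch below performs Metropolis moves at fixed $\lambda$ (the heat steps) and shifts the center of a harmonic well in $K$ work steps. For a translated well $\Delta F = 0$, so $\langle e^{-\beta W} \rangle$ should be close to 1 even though $\langle W \rangle > 0$.

```python
import numpy as np

rng = np.random.default_rng(1)
beta, k_spring, K, n_moves, n_traj = 1.0, 5.0, 50, 10, 20000

def energy(x, lam):
    return 0.5 * k_spring * (x - lam) ** 2

works = np.empty(n_traj)
for t in range(n_traj):
    lam, W = 0.0, 0.0
    x = rng.normal(lam, 1.0 / np.sqrt(beta * k_spring))    # Boltzmann-distributed start
    for i in range(1, K + 1):
        new_lam = i / K
        W += energy(x, new_lam) - energy(x, lam)            # work: change lambda at fixed x
        lam = new_lam
        for _ in range(n_moves):                             # heat: Metropolis moves at fixed lambda
            x_new = x + rng.uniform(-0.5, 0.5)
            if rng.random() < np.exp(-beta * (energy(x_new, lam) - energy(x, lam))):
                x = x_new
    works[t] = W

print(np.mean(works))                  # > 0: dissipated work
print(np.mean(np.exp(-beta * works)))  # close to exp(-beta*dF) = 1
```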


Appendix F

Linear response: examples

F.1 Dissipation

Many experimental techniques probe the dynamics of a many-body system by measuring the absorption of some externally applied field (e.g., visible light, infrared radiation, microwave radiation). Linear response theory allows us to establish a simple relation between the absorption spectrum and the Fourier transform of a time-correlation function. To see this, let us again consider an external field that is coupled to a dynamical variable $A(\mathbf{p}^N, \mathbf{q}^N)$. The time-dependent Hamiltonian, $\mathcal{H}$, of the system is
$$ \mathcal{H}(t) = \mathcal{H}_0 - f(t)\, A(\mathbf{p}^N, \mathbf{q}^N). $$
Note that the only quantity explicitly time-dependent is $f(t)$. As the Hamiltonian depends on time, the total energy $E$ of the system also changes with time: $E(t) = \langle \mathcal{H}(t) \rangle$. Let us compute the average rate of change of the energy of the system. This is the amount of energy absorbed (or emitted) by the system, which per unit of time reads:
$$ \frac{\partial E}{\partial t} = \left\langle \frac{d\mathcal{H}}{dt} \right\rangle = \left\langle \sum_i \left( \frac{\partial \mathcal{H}}{\partial q_i} \dot{q}_i + \frac{\partial \mathcal{H}}{\partial p_i} \dot{p}_i \right) + \frac{\partial \mathcal{H}}{\partial t} \right\rangle. \qquad (F.1.1) $$
But, from Hamilton's equations of motion, we have
$$ \dot{q}_i = \frac{\partial \mathcal{H}}{\partial p_i} \quad \text{and} \quad \dot{p}_i = -\frac{\partial \mathcal{H}}{\partial q_i}. $$
As a consequence, Eq. (F.1.1) simplifies to
$$ \frac{\partial E}{\partial t} = \left\langle \frac{\partial \mathcal{H}}{\partial t} \right\rangle $$




$$ = \left\langle -\dot{f}(t)\, A(\mathbf{p}^N, \mathbf{q}^N) \right\rangle = -\dot{f}(t)\, \langle A(t) \rangle. \qquad (F.1.2) $$
Note, however, that $\langle A(t) \rangle$ itself is the response to the applied field $f$:
$$ \langle A(t) \rangle = \int_{-\infty}^{\infty} dt'\, \chi_{AA}(t - t')\, f(t'). $$

Let us now consider the situation where $f(t)$ is a periodic field with frequency $\omega$ (e.g., monochromatic light). In that case, we can write $f(t)$ as
$$ f(t) = \mathrm{Re}\, f_\omega e^{i\omega t} $$
and
$$ \dot{f}(t) = \frac{i\omega}{2} \left( f_\omega e^{i\omega t} - f_\omega^* e^{-i\omega t} \right). \qquad (F.1.3) $$
The average rate of energy dissipation is
$$ \frac{\partial E}{\partial t} = -\dot{f}(t)\, \langle A(t) \rangle = -\dot{f}(t) \int_{-\infty}^{\infty} dt'\, \chi_{AA}(t - t')\, f(t'). $$

For a periodic field, we have
$$ \int_{-\infty}^{\infty} dt'\, \chi_{AA}(t - t')\, f(t') = \frac{f_\omega e^{i\omega t}}{2} \int_{-\infty}^{\infty} dt'\, \chi_{AA}(t - t')\, e^{i\omega(t' - t)} + \frac{f_\omega^* e^{-i\omega t}}{2} \int_{-\infty}^{\infty} dt'\, \chi_{AA}(t - t')\, e^{-i\omega(t' - t)} $$
$$ = \pi \left[ f_\omega e^{i\omega t} \chi_{AA}(\omega) + f_\omega^* e^{-i\omega t} \chi_{AA}(-\omega) \right], \qquad (F.1.4) $$
where
$$ \chi_{AA}(\omega) \equiv \frac{1}{2\pi} \int_0^{\infty} dt\, \chi_{AA}(t)\, e^{-i\omega t}. $$

To compute $\dot{E}$, the rate of change of the energy, we must average $\partial \mathcal{H}/\partial t$ over one period, $\mathcal{T} (= 2\pi/\omega)$, of the field:
$$ \dot{E} = \frac{-\pi}{2\mathcal{T}} \int_0^{\mathcal{T}} dt\, i\omega \left( f_\omega e^{i\omega t} - f_\omega^* e^{-i\omega t} \right) \times \left[ f_\omega e^{i\omega t} \chi_{AA}(\omega) + f_\omega^* e^{-i\omega t} \chi_{AA}(-\omega) \right] $$
$$ = -\pi\omega\, |f_\omega|^2\, \frac{\chi_{AA}(\omega) - \chi_{AA}(-\omega)}{2i} = -\pi\omega\, |f_\omega|^2\, \mathrm{Im}\left[ \chi_{AA}(\omega) \right]. \qquad (F.1.5) $$

We use the relation between $\chi_{AA}(t)$ and the autocorrelation function (2.5.17) of $A$:
$$ \chi_{AA}(\omega) = \frac{1}{2\pi} \int_0^{\infty} dt\, e^{-i\omega t} \left( -\beta \left\langle \dot{A}(0) A(t) \right\rangle \right). $$
The imaginary part of $\chi_{AA}(\omega)$ is given by
$$ \mathrm{Im}\left[ \chi_{AA}(\omega) \right] = \frac{\beta}{2\pi} \int_0^{\infty} dt\, \sin(\omega t) \left\langle \dot{A}(0) A(t) \right\rangle = -\frac{\beta}{4\pi} \int_{-\infty}^{\infty} dt\, \omega \cos(\omega t) \left\langle A(0) A(t) \right\rangle. \qquad (F.1.6) $$

Finally, we obtain
$$ \dot{E} = \frac{\beta\omega^2}{4}\, |f_\omega|^2 \int_{-\infty}^{\infty} dt\, \cos(\omega t) \left\langle A(0) A(t) \right\rangle. \qquad (F.1.7) $$

Hence, from knowledge of the autocorrelation function of the quantity that couples with the applied field, we can compute the shape of the absorption spectrum. This relation was derived assuming classical dynamics and therefore is valid only if $\hbar\omega \ll k_B T$. However, it is also possible to derive a quantum-mechanical version of linear response theory that is valid for arbitrary frequencies (see, e.g., [53]).

To give a specific example, let us compute the shape of the absorption spectrum of a dilute gas of polar molecules. In that case, the relevant correlation function is the dipole ($\boldsymbol{\mu}$) autocorrelation function:
$$ \left\langle M_x(0) M_x(t) \right\rangle = \frac{N \left\langle \boldsymbol{\mu}(0) \cdot \boldsymbol{\mu}(t) \right\rangle}{3}. $$
For molecules that rotate almost freely (almost, otherwise there would be no dissipation), $\left\langle \boldsymbol{\mu}(0) \cdot \boldsymbol{\mu}(t) \right\rangle$ depends on time, because each molecule rotates. For a molecule with a rotation frequency $\omega$, we have $\boldsymbol{\mu}(0) \cdot \boldsymbol{\mu}(t) = \mu^2 \cos(\omega t)$, and for an assembly of molecules with a thermal distribution of rotational velocities $P(\omega)$, we have
$$ \left\langle \boldsymbol{\mu}(0) \cdot \boldsymbol{\mu}(t) \right\rangle = \mu^2 \int d\omega\, P(\omega) \cos(\omega t). $$
The rate of absorption of radiation is then given by
$$ \dot{E} = \frac{\pi\beta\omega^2 N \mu^2 P(\omega)}{12}\, |f_\omega|^2. \qquad (F.1.8) $$


For more details about the relation between spectroscopic properties and time correlation functions, the reader is referred to the article by Madden in [44].
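In practice, Eq. (F.1.7) is applied by taking a cosine transform of the sampled autocorrelation function. The Python sketch below (not from the book; the synthetic damped-rotor correlation function and all parameters are arbitrary choices, and constant prefactors are dropped) shows the procedure and recovers a peak near the underlying rotation frequency.

```python
import numpy as np

dt, n = 0.01, 4096
t = np.arange(n) * dt
C = np.exp(-t / 2.0) * np.cos(3.0 * t)   # synthetic <A(0)A(t)>: damped rotor, frequency ~ 3

omegas = np.linspace(0.0, 10.0, 200)
# E_dot(omega) ~ omega^2 * integral cos(omega*t) C(t) dt; factor 2 uses C(-t) = C(t)
spectrum = np.array([w**2 * 2.0 * np.sum(np.cos(w * t) * C) * dt for w in omegas])
print(omegas[np.argmax(spectrum)])       # absorption peak near the rotation frequency
```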

F.2 Electrical conductivity

In the derivation of linear response theory in Chapter 2, we assumed that we prepare the system in an equilibrium state with the perturbation on and then allow the system to relax to a new equilibrium state with the perturbation off. However, this will not always work. Consider, for instance, electrical conductivity. In that case, the perturbation is an electrical field that will cause a current to flow in the system. Hence, the state in which we prepared the system with the field on is not an equilibrium state but a steady non-equilibrium state. The same holds, for instance, for a system under steady shear. It would seem that, in such circumstances, one cannot use the framework of linear response theory in its simplest form to derive transport coefficients such as the electrical conductivity $\sigma_e$ or the viscosity $\eta$. Fortunately, things are not quite as bad as that. Consider, for example, electrical conductivity. Indeed, if we put a conducting system in an external field, we will generate a non-equilibrium steady state. However, what we can do is perturb the system by switching on a weak, uniform vector potential $\mathbf{A}$. The Hamiltonian of the system with the vector potential switched on is
$$ \mathcal{H}' = \sum_{i=1}^{N} \frac{1}{2m_i} \left( \mathbf{p}_i - \frac{e_i}{c} \mathbf{A} \right)^2 + U_{pot}. \qquad (F.2.1) $$

The system described by this Hamiltonian satisfies the same equations of motion as the unperturbed system (A is a gauge field), and the system will be in an equilibrium state at t = 0. We then abruptly switch off the vector potential. From electrodynamics, we know that a time-dependent vector potential generates an electric field:

$$
\mathbf E = -\frac{1}{c}\dot{\mathbf A}.
\qquad\text{(F.2.2)}
$$

In the present case, the electrical field will be an infinitesimal δ spike at t = 0:

$$
\mathbf E(t) = \frac{1}{c}\mathbf A\,\delta(t).
\qquad\text{(F.2.3)}
$$

We can compute the current that results in the standard way. We note that we can write $H$ in Eq. (F.2.1) as

$$
H = H_0 - \sum_{i=1}^{N}\frac{e_i}{cm_i}\,\mathbf p_i\cdot\mathbf A + \mathcal O(A^2)
= H_0 - \frac{\mathbf A}{c}\cdot\int\mathrm{d}\mathbf r\,\sum_{i=1}^{N}\frac{e_i}{m_i}\,\mathbf p_i\,\delta(\mathbf r_i-\mathbf r)
= H_0 - \frac{\mathbf A}{c}\cdot\int\mathrm{d}\mathbf r\;\mathbf j(\mathbf r),
\qquad\text{(F.2.4)}
$$

where j(r) denotes the current density at point r. The average current density at time t due to the perturbation is given by

$$
\langle \mathbf j(t)\rangle = \frac{\mathbf A}{cVk_BT}\int\!\!\int \mathrm d\mathbf r\,\mathrm d\mathbf r'\,\langle \mathbf j(\mathbf r,0)\,\mathbf j(\mathbf r',t)\rangle.
\qquad\text{(F.2.5)}
$$

The phenomenological expression for the current response to an applied δ-function electric field spike is (see Eq. (2.5.14))

$$
\langle \mathbf j(t)\rangle = \int_{-\infty}^{t}\mathrm dt'\,\sigma(t-t')\,\mathbf E(t') = \sigma(t)\,\frac{\mathbf A}{c}.
\qquad\text{(F.2.6)}
$$

From this, it immediately follows that

$$
\sigma(t) = \frac{1}{Vk_BT}\int\!\!\int \mathrm d\mathbf r\,\mathrm d\mathbf r'\,\langle \mathbf j(\mathbf r,0)\,\mathbf j(\mathbf r',t)\rangle.
\qquad\text{(F.2.7)}
$$

The dc conductivity is then given by

$$
\sigma(\omega=0) = \frac{1}{Vk_BT}\int_{0}^{\infty}\mathrm dt\int\!\!\int \mathrm d\mathbf r\,\mathrm d\mathbf r'\,\langle \mathbf j(\mathbf r,0)\,\mathbf j(\mathbf r',t)\rangle.
\qquad\text{(F.2.8)}
$$

F.3 Viscosity

The corresponding linear response expression for the viscosity seems more subtle because shear usually is not interpreted in terms of an external field acting on all molecules. Still, by analogy with the electrical-conductivity case, we can use a canonical transformation whose time derivative corresponds to uniform shear. To achieve this, we consider a system of N particles with coordinates $\mathbf r^N$ and Hamiltonian

$$
H_0 = \sum_{i=1}^{N} p_i^2/(2m_i) + U(\mathbf r^N).
\qquad\text{(F.3.1)}
$$

Now consider another system described by a set of coordinates $\mathbf r'^N$ related to $\mathbf r^N$ by a linear transformation:

$$
\mathbf r'_i = \mathbf h\,\mathbf r_i.
\qquad\text{(F.3.2)}
$$

The Hamiltonian for the new system can be written as

$$
H_1 = \sum_{i=1}^{N}\frac{1}{2m_i}\,\mathbf p'_i\cdot\mathbf G^{-1}\cdot\mathbf p'_i + U(\mathbf r'^N),
\qquad\text{(F.3.3)}
$$

where G, the metric tensor, is defined as

$$
\mathbf G \equiv \mathbf h^T\cdot\mathbf h.
\qquad\text{(F.3.4)}
$$

We assume that h differs infinitesimally from the unit matrix I:

$$
\mathbf h = \mathbf I + \boldsymbol\epsilon.
\qquad\text{(F.3.5)}
$$

In the case that we are interested in the effect of uniform shear, for instance, we could choose $\epsilon_{xy} = \epsilon$, while all other elements of $\epsilon_{\alpha\beta}$ are 0. Now consider the case that we equilibrate the system with Hamiltonian $H_1$, and at time $t = 0$, we switch off the infinitesimal deformation $\boldsymbol\epsilon$. This means that, at $t = 0$, the system experiences a δ-function spike in the shear rate

$$
\frac{\partial v_x}{\partial y} = -\epsilon\,\delta(t).
\qquad\text{(F.3.6)}
$$

We can compute the time-dependent response of the shear stress, $\sigma_{xy}(t)$, to the sudden change from $H_1$ to $H_0$:

$$
\langle\sigma_{xy}(t)\rangle = -\epsilon\,\frac{1}{Vk_BT}\left\langle\sigma_{xy}(0)\,\sigma_{xy}(t)\right\rangle.
\qquad\text{(F.3.7)}
$$

By combining Eqs. (F.3.6) and (F.3.7) with Eq. (2.5.14), we immediately see that the steady-state stress, $\sigma_{xy}$, that results from a steady shear is given by

$$
\sigma_{xy} = \frac{\partial v_x}{\partial y}\times\frac{1}{Vk_BT}\int_{0}^{\infty}\mathrm dt\,\left\langle\sigma_{xy}(0)\,\sigma_{xy}(t)\right\rangle,
\qquad\text{(F.3.8)}
$$

and the resulting expression for the shear viscosity η is

$$
\eta = \frac{1}{Vk_BT}\int_{0}^{\infty}\mathrm dt\,\left\langle\sigma_{xy}(0)\,\sigma_{xy}(t)\right\rangle.
\qquad\text{(F.3.9)}
$$
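In a simulation, Eq. (F.3.9) is evaluated by accumulating the stress autocorrelation function and integrating it until the running integral reaches a plateau. The following minimal numpy sketch assumes a time series sxy of the instantaneous stress σxy sampled every dt; the names and the brute-force correlation loop are ours and are meant only to illustrate the bookkeeping.

import numpy as np

def green_kubo_viscosity(sxy, dt, volume, kBT, n_corr):
    """Shear viscosity from Eq. (F.3.9):
    eta = (1/(V kB T)) * integral_0^infty dt <sigma_xy(0) sigma_xy(t)>.
    'sxy'   : time series of the instantaneous stress sigma_xy
    'n_corr': number of time lags kept in the correlation function."""
    sxy = np.asarray(sxy)
    nmax = len(sxy) - n_corr
    acf = np.zeros(n_corr)
    for lag in range(n_corr):                 # brute-force autocorrelation
        acf[lag] = np.mean(sxy[:nmax] * sxy[lag:lag + nmax])
    running = np.cumsum(acf) * dt / (volume * kBT)   # running Green-Kubo integral
    return acf, running                        # eta is the plateau value of 'running'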

F.4 Elastic constants

A liquid flows under the influence of shear forces. A solid does not. Rather, any small deformation of a solid induces an elastic response (stress) that counteracts it. This elastic stress is proportional to the applied deformation (strain). The constants of proportionality between stress and strain (to be defined more precisely below) are called the elastic constants. Below we discuss how to measure these constants by computer simulation. For the sake of simplicity, we limit the discussion to crystals under isotropic (hydrostatic) pressure. When considering the effect of the strain on the free energy of a solid, it is essential to introduce the so-called Lagrangian strain tensor (see, e.g., [182]).¹

¹ Confronted with the finiteness of the Greek alphabet, we use the symbol η to denote the Lagrangian strain tensor. This symbol can easily be confused with the scalar viscosity η.


The reason is that, on a local scale, all changes in free energy are due to changes in the distances between the particles that make up the solid. And the quantity that measures this change is precisely the Lagrangian strain. We start with the relation between new and old coordinates due to an elastic deformation:

$$
\mathbf r' = (\mathbf 1 + \boldsymbol\epsilon)\,\mathbf r,
\qquad\text{(F.4.1)}
$$

where

$$
\epsilon_{\alpha\beta} \equiv \frac{\partial u_\alpha}{\partial x_\beta}
\qquad\text{(F.4.2)}
$$

is the (conventional) strain tensor. It measures the variation of the displacement field u with the original coordinate r. Due to the strain, the distance $r_{ij}$ separating two points i and j in the solid is changed. The new squared distance is then related to the old distance by

$$
r'^{\,2}_{ij} = \mathbf r_{ij}\cdot\left(\mathbf 1+\boldsymbol\epsilon^T\right)\left(\mathbf 1+\boldsymbol\epsilon\right)\cdot\mathbf r_{ij}
= \mathbf r_{ij}\cdot\left(\mathbf 1+\boldsymbol\epsilon^T+\boldsymbol\epsilon+\boldsymbol\epsilon^T\boldsymbol\epsilon\right)\cdot\mathbf r_{ij}
\equiv \mathbf r_{ij}\cdot\left(\mathbf 1+2\boldsymbol\eta\right)\cdot\mathbf r_{ij}.
$$

This defines the Lagrangian strain tensor η. The new volume V′ of the system is related to the original volume $V_0$ by

$$
V' = V_0\,\det(\mathbf 1+\boldsymbol\epsilon)
\qquad\text{(F.4.3)}
$$

or

$$
V' = V_0\,\sqrt{\det(\mathbf 1+2\boldsymbol\eta)}.
\qquad\text{(F.4.4)}
$$
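These definitions are easy to verify numerically: for any small deformation ε one can build η and check the volume relations (F.4.3) and (F.4.4). A minimal numpy sketch, with an arbitrary shear value chosen purely for illustration:

import numpy as np

eps = np.zeros((3, 3))
eps[0, 1] = 0.01                              # small shear strain epsilon_xy (arbitrary)

h = np.eye(3) + eps                           # deformation matrix, h = 1 + eps
eta = 0.5 * (eps + eps.T + eps.T @ eps)       # Lagrangian strain: 2*eta = eps^T + eps + eps^T eps

V_ratio_a = np.linalg.det(np.eye(3) + eps)                # V'/V0 from Eq. (F.4.3)
V_ratio_b = np.sqrt(np.linalg.det(np.eye(3) + 2 * eta))   # V'/V0 from Eq. (F.4.4)
assert np.isclose(V_ratio_a, V_ratio_b)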

We now expand the Helmholtz free energy (F) per unit of (undeformed) volume (V) in powers of the Lagrangian strain parameters η:

$$
F(\boldsymbol\eta)/V = V^{-1}\left[F(0) + \frac{\partial F}{\partial\eta_{\alpha\beta}}\,\eta_{\alpha\beta}
+ \frac12\,\frac{\partial^2 F}{\partial\eta_{\alpha\beta}\,\partial\eta_{\gamma\delta}}\,\eta_{\alpha\beta}\eta_{\gamma\delta} + \cdots\right]
= V^{-1}F(0) + C^{(1)}_{\alpha\beta}\,\eta_{\alpha\beta} + \frac12\,C^{(2)}_{\alpha\beta\gamma\delta}\,\eta_{\alpha\beta}\eta_{\gamma\delta} + \cdots.
\qquad\text{(F.4.5)}
$$

This equation defines the (second-order) elastic constants $C^{(2)}_{\alpha\beta\gamma\delta}$. To compute the elastic constants numerically, we need a microscopic expression for the η-dependence of F. To derive such a relation, we must consider in detail what a deformation of the system does to the partition function. Let us first consider the deformed system. The partition function of this system (ignoring constants, such as $h^{-3N}$) is equal to

$$
Q(\boldsymbol\eta) = \int\mathrm d\mathbf p^N\,\mathrm d\mathbf r^N\exp\left[-\beta H\!\left(\mathbf p^N,\mathbf r^N\right)\right].
\qquad\text{(F.4.6)}
$$


This partition function depends on the deformation through the boundary conditions of the integral over the coordinates. This is not very convenient when computing derivatives with respect to the strain. Therefore, we first express the partition function of the deformed system in terms of coordinates and momenta of the original, undeformed system. We can express the coordinates ($\mathbf r_i$) and velocities ($\dot{\mathbf r}_i$) in this system in terms of the strain tensor $\mathbf h \equiv (\mathbf 1 + \boldsymbol\epsilon)$, and the original coordinates ($\mathbf r_{0,i}$) and velocities ($\dot{\mathbf r}_{0,i}$):

$$
\mathbf r_i = \mathbf h\,\mathbf r_{0,i},\qquad \dot{\mathbf r}_i = \mathbf h\,\dot{\mathbf r}_{0,i}.
\qquad\text{(F.4.7)}
$$

The kinetic energy, $K = \sum_i \tfrac12 m_i\dot{\mathbf r}_i^2$, can be written as

$$
K = \sum_i \tfrac12 m_i\,\dot{\mathbf r}_i^2
= \sum_i \tfrac12 m_i\,\dot{\mathbf r}_{0,i}\cdot(\mathbf h^T\mathbf h)\cdot\dot{\mathbf r}_{0,i}
\equiv \sum_i \tfrac12 m_i\,\dot{\mathbf r}_{0,i}\cdot\mathbf G\cdot\dot{\mathbf r}_{0,i},
\qquad\text{(F.4.8)}
$$

where $\mathbf h^T = (\mathbf 1 + \boldsymbol\epsilon^T)$ is the transpose of h and $\mathbf G = \mathbf h^T\mathbf h$ is the metric tensor. From the definition of h it follows that $\mathbf G = (\mathbf 1 + 2\boldsymbol\eta)$. We can now write down the generalized momentum $\mathbf p_{0,i}$ conjugate to the coordinate $\mathbf r_{0,i}$ (see Appendix A):

$$
p^\alpha_{0,i} = \frac{\partial K}{\partial\dot r^\alpha_{0,i}} = m_i\,G_{\alpha\beta}\,\dot r^\beta_{0,i},
\qquad\text{(F.4.9)}
$$

and hence

$$
K = \sum_i \tfrac12 m_i\,\dot{\mathbf r}_{0,i}\cdot\mathbf G\cdot\dot{\mathbf r}_{0,i}
= \sum_i \frac{1}{2m_i}\,\mathbf p_{0,i}\cdot\mathbf G^{-1}\cdot\mathbf p_{0,i}
= \sum_i \frac{1}{2m_i}\,\mathbf p_{0,i}\cdot(\mathbf 1+2\boldsymbol\eta)^{-1}\cdot\mathbf p_{0,i}.
\qquad\text{(F.4.10)}
$$

As

$$
\mathbf p_i = m_i\dot{\mathbf r}_i = m_i\mathbf h\,\dot{\mathbf r}_{0,i} = (\mathbf h^T)^{-1}\mathbf p_{0,i},\qquad
\mathbf r_i = \mathbf h\,\mathbf r_{0,i},
\qquad\text{(F.4.11)}
$$

the Jacobian of the transformation between $\{\mathbf p^N, \mathbf r^N\}$ and $\{\mathbf p_0^N, \mathbf r_0^N\}$ is equal to 1. Hence, we can write

$$
Q(\boldsymbol\eta) = \int\mathrm d\mathbf p^N\mathrm d\mathbf r^N\exp\left[-\beta H\!\left(\mathbf p^N,\mathbf r^N\right)\right]
= \int\mathrm d\mathbf p_0^N\mathrm d\mathbf r_0^N\exp\left\{-\beta\left[\sum_i\frac{1}{2m_i}\,\mathbf p_{0,i}\cdot(\mathbf 1+2\boldsymbol\eta)^{-1}\cdot\mathbf p_{0,i} + U\!\left(\mathbf r_0^N;\boldsymbol\eta\right)\right]\right\}.
\qquad\text{(F.4.12)}
$$

Now the dependence of Q(η) on η is only contained in the Hamiltonian. We can now explicitly carry out the differentiation with respect to η, using

$$
\frac{\partial U}{\partial\eta_{\alpha\beta}}
= \sum_{i<j}\frac{\partial U}{\partial r^2_{ij}}\,\frac{\partial r^2_{ij}}{\partial\eta_{\alpha\beta}}
= \sum_{i<j}\frac{r^\alpha_{0,ij}\,r^\beta_{0,ij}}{r_{ij}}\,\frac{\partial U}{\partial r_{ij}}.
$$

1. If particle j is within a distance rv (> rc) from i, then the 2d array element list(i,icount)=j. The total number of particles within distance rv from i is given by nlist(i). The array xv(i) contains the position of particle i at the moment that the list is made.
2. The example above is for a Verlet list for MC: it stores j as a neighbor of i and i as a neighbor of j. For MD, every particle pair enters only once.
3. We must update the Verlet list if any particle has moved more than a distance (rv-rc)/2 from the position where it was when the list was last made.
4. Clearly, there is a trade-off: the smaller rv, the cheaper it is to make the list, but the more often it has to be updated.

FIGURE I.2 Verlet lists: (left) conventional approach in which each particle has a Verlet list; (right) the approach of Bekker et al. in which each periodic image of a particle has its own Verlet list that contains only those particles in the central box.

1.5. In addition, Bekker et al. have shown that a similar trick can be used to take the calculation of the virial (pressure) out of the inner loop.


Algorithm 31 (Calculating the energy using a Verlet list)

function en_vlist(i,xi)                      calculates interaction energy of particle i with the Verlet list
  en=0
  for 1 ≤ jj ≤ nlist(i) do                   loop over the particles in the list
    j=list(i,jj)                             next particle in the list
    xij=xi-x(j)
    xij=|xij-box*round(xij/box)|             nearest image distance
    en=en+enij(xij)                          enij is pair potential of pair ij
  enddo
end function

Specific Comment (for general comments, see p. 7) 1. Array list(i,itel) and nlist are made in Algorithm 30 and enij (not specified) returns the pair potential energy of particles i and j at xi and x(j).
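For readers who prefer an executable illustration, the following Python sketch mirrors the logic of Algorithms 30 and 31: build the list once, then reuse it for energy calculations until a particle has drifted too far. The function and array names are ours and not the book's program; a cubic box with the minimum-image convention is assumed, and pair_pot stands for any pair potential.

import numpy as np

def make_verlet_list(x, box, rv):
    """Build, for every particle, the list of neighbors within the Verlet radius rv
    (cf. Algorithm 30); also store the positions at the moment the list is made."""
    n = len(x)
    nlist = [[] for _ in range(n)]
    for i in range(n - 1):
        d = x[i + 1:] - x[i]
        d -= box * np.round(d / box)              # nearest image
        r = np.sqrt(np.sum(d * d, axis=1))
        for j in np.nonzero(r < rv)[0] + i + 1:
            nlist[i].append(j)                    # store the pair in both lists (MC usage)
            nlist[j].append(i)
    return nlist, x.copy()                        # xv = positions when the list was made

def energy_verlet(i, xi, x, box, nlist, pair_pot):
    """Interaction energy of particle i at position xi using its Verlet list
    (cf. Algorithm 31)."""
    en = 0.0
    for j in nlist[i]:
        d = xi - x[j]
        d -= box * np.round(d / box)
        en += pair_pot(np.sqrt(np.sum(d * d)))
    return en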

FIGURE I.3 The cell list: the simulation cell is divided into cells of size rc × rc ; a particle i interacts with those particles in the same cell or neighboring cells (in 2d there are 9 cells; and in 3d, 27 cells).

I.2 Cell lists An algorithm that scales with N is the cell list or linked-list method [28]. The idea of the cell list is illustrated in Fig. I.3. The simulation box is divided into cells with a size equal to or slightly larger than the cutoff radius rc ; each particle in a given cell interacts with only those particles in the same or neighboring cells. Since the allocation of a particle to a cell is an operation that scales with N and the total number of cells that needs to be considered for the calculation of the interaction is independent of the system size, the cell list method scales as N . Algorithm 33 shows how a cell list can be used in a Monte Carlo simulation.


Algorithm 32 (Making a cell list)

function new_nlist(rc)                       make linked cell list for pair interactions with cut-off distance rc
  rn=box/int(box/rc)                         determine diameter of cells: rn ≥ rc; box is the simulation box diameter
  for 0 ≤ icel ≤ ncel-1 do
    hoc(icel)=0                              set head of chain to 0 for each cell
  enddo
  for 1 ≤ i ≤ npart do                       loop over the particles
    icel=int(x(i)/rn)                        determine cell number
    ll(i)=hoc(icel)                          link particle i to the old head of chain of cell icel
    hoc(icel)=i                              make particle i the head of chain
  enddo
end function

Specific Comment (for general comments, see p. 7) 1. This algorithm sets up a 1D linked-list. All particles are attributed to “their” cell. The latest particle (say i) added to a cell is referred to as the head of chain and stored in the array hoc(icel). Particle i replaces the previous head of chain, but is linked to it via the link-list array ll(i). Every particle points to (at most) one other particle. If ll(i) = 0, then there are no particles in the cell, beyond i. (chain). 2. The desired (optimum) cell size is rc, and rn ( > rc) is the closest size that fits in the box.
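The head-of-chain bookkeeping of Algorithm 32 may be easier to digest in executable form. The sketch below is our own illustrative Python version (one cell index per coordinate as in the 1d presentation above, a cubic box with at least three cells, and -1 instead of 0 as the end-of-chain marker).

import numpy as np

def build_cell_list(x, box, rc):
    """Linked cell list in 1d (cf. Algorithm 32).  Returns the cell size rn,
    the head-of-chain array hoc and the link array ll; -1 marks 'no particle'."""
    ncel = int(box / rc)                 # number of cells per box length
    rn = box / ncel                      # actual cell size, rn >= rc
    hoc = -np.ones(ncel, dtype=int)      # head of chain of every cell
    ll = -np.ones(len(x), dtype=int)     # ll[i] = next particle in the same cell
    for i, xi in enumerate(x):
        icel = int(xi / rn) % ncel
        ll[i] = hoc[icel]                # particle i points to the previous head
        hoc[icel] = i                    # and becomes the new head of chain
    return rn, hoc, ll

def particles_near(xi, rn, hoc, ll):
    """Yield all particles in the cell of xi and in the two neighboring cells
    (in 3d this would become a triple loop over 27 cells)."""
    ncel = len(hoc)
    icel = int(xi / rn) % ncel
    for jn in (-1, 0, 1):
        j = hoc[(icel + jn) % ncel]
        while j != -1:
            yield j
            j = ll[j]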

I.3 Combining the Verlet and cell lists

It is instructive to compare the efficiency of the Verlet list and cell list in more detail. In three dimensions, the number of particles for which the distance needs to be calculated in the Verlet list is given by

$$
n_v = \frac{4}{3}\pi\rho r_v^3;
$$

for the cell list, the corresponding number is $n_l = 27\rho r_c^3$. If we use typical values for the parameters in these equations (Lennard-Jones potential with $r_c = 2.5\sigma$ and $r_v = 2.7\sigma$), we find that nl is five times larger than nv. As a consequence, in the Verlet scheme, the number of pair distances that needs to be calculated is 16 times less than in the cell list. The observation that the Verlet scheme is more efficient in evaluating the interactions motivated Auerbach et al. [726] to use a combination of the two


Algorithm 33 (Calculating the energy using a cell list)

function ennlist(i,xi,en)                    calculates energy using a linked-cell list
  en=0
  icel=int(xi/rn)                            determine the cell number
  for -1 ≤ jn ≤ 1 do                         loop over neighbor cells (1d); triple loop in 3d
    jcel=(icel+jn)%ncel                      0 ≤ jcel ≤ ncel-1
    j=hoc(jcel)                              head of chain of cell jcel
    while j ≠ 0 do
      if i ≠ j then
        xij=xi-x(j)
        rij=|xij-box*round(xij/box)|         nearest image distance
        en=en+enij(rij)                      enij is pair potential of pair ij
      endif
      j=ll(j)                                next particle in the list
    enddo
  enddo
end function

Specific Comment (for general comments, see p. 7) 1. Array ll(i) and hoc(icel) are constructed in Algorithm 32; enij is a function that returns the pair energy of particles i and j at distance rij. 2. jn = -1,0,+1 designates the cell to the left of icel, icel and the cell to the right.

lists: use a cell list to construct a Verlet list. The use of the cell list removes the main disadvantage of the Verlet list for a large number of particles—scales as N 2 —but keeps the advantage of an efficient energy calculation. An implementation of this method in a Monte Carlo simulation is shown in Algorithm 34.

I.4 Efficiency The first question that arises is when to use which method. This depends very strongly on the details of the systems. In any event, we always start with a scheme as simple as possible, hence no tricks at all. Although the algorithm scales as N 2 , it is straightforward to implement, and therefore the probability of programming errors is relatively small. In addition, we should take into account how often the program will be used. The use of the Verlet list becomes advantageous if the number of particles in the list is significantly less than the total number of particles; in three dimen-


Algorithm 34 (Combination of Verlet and cell lists)

function mcmove_clist                        displace a particle using a combined list
  o=int(R*npart)+1                           select a particle at random
  if |x(o)-xv(o)| > rv-rc then               need to make a new list?
    new_clist
  endif
  eno=en_vlist(o,x(o))                       energy old configuration
  xn=x(o)+(R-0.5)*delx                       random displacement
  if |xn-xv(o)| > rv-rc then                 need to make a new list?
    new_clist
  endif
  enn=en_vlist(o,xn)                         energy new configuration
  arg=exp(-beta*(enn-eno))
  if R < arg then
    x(o)=xn                                  accepted: replace x(o) by xn
  endif
end function

Specific Comments (for general comments, see p. 7) 1. The algorithm is based on Algorithm 29. 2. Function new_clist creates a Verlet list using a cell list (see Algorithm 35) and function en_vlist calculates the energy of a particle at the given position using the Verlet list (see Algorithm 31).

sions, this means

$$
n_v = \frac{4}{3}\pi r_v^3\rho \ll N.
$$

If we substitute some typical values for a Lennard-Jones potential ($r_v = 2.7\sigma$ and $\rho = 0.8\sigma^{-3}$), we find $n_v \approx 66$, which means that only if the number of particles in the box is more than 100 does it make sense to use a Verlet list. To see when to use one of the other techniques, we have to analyze the algorithms in somewhat more detail. If we use no tricks, the amount of CPU time to calculate the total energy is given by $\tau = cN(N-1)/2$. The constant c gives the required CPU time for an energy calculation between a pair of particles. If we use the Verlet list, the CPU time is

$$
\tau_v = c\,n_v N + \frac{c_v}{n_u}N^2,
$$


Algorithm 35 (Making a Verlet list using a cell list)

function new_clist                           makes a new Verlet list using a cell list
  new_nlist(rv)                              first make the cell list
  for 1 ≤ i ≤ npart do                       initialize the Verlet list
    nlist(i)=0
    xv(i)=x(i)                               store particle positions
  enddo
  for 1 ≤ i ≤ npart do
    icel=int(x(i)/rn)                        determine cell number
    for -1 ≤ jn ≤ 1 do                       loop over neighbor cells (1d)
      jcel=(icel+jn)%ncel                    0 ≤ jcel ≤ ncel-1
      j=hoc(jcel)                            head of chain of cell jcel
      while j ≠ 0 do
        if i ≠ j then
          xr=x(i)-x(j)
          xr=xr-box*round(xr/box)            nearest image distance
          if |xr| < rv then                  add to the Verlet lists
            nlist(i)=nlist(i)+1
            nlist(j)=nlist(j)+1
            list(i,nlist(i))=j
            list(j,nlist(j))=i
          endif
        endif
        j=ll(j)                              next particle in the cell list
      enddo
    enddo
  enddo
end function

Specific Comments (for general comments, see p. 7) 1. Array list(i,itel) is the Verlet list of particle i. The number of particles in the Verlet list of particle i is given by nlist(i). The array xv(i) contains the position of the particles at the moment the list is made, and is used in Algorithm 29 to test if new list should be made). 2. Function new_nlist makes a cell list (Algorithm 32). The cell size rn should be no less than the range of the Verlet list (rv), which in turn should be no less than the interaction range rc.

where the first term arises from the calculation of the interactions and the second term from the update of the Verlet list, which is done every $n_u$-th cycle. The cell list scales with N, and the CPU time can be split into two contributions: one that accounts for the calculation of the energy and the other for the making of the list,

$$
\tau_l = c\,n_l N + c_l N.
$$

If we use a combination of the two lists, the total CPU time becomes

$$
\tau_c = c\,n_v N + \frac{c_l}{n_u}N.
$$

The way to proceed is to perform some test simulations to estimate the various constants, and from the equations, it will become clear which technique is preferred. In Example 29, we have made such an estimate for a simulation of the Lennard-Jones fluid.

Example 29 (Comparison of schemes for the Lennard-Jones fluid). It is instructive to make a detailed comparison of the various schemes to save CPU time for the Lennard-Jones fluid. We compare the following schemes:
1. Verlet list
2. Cell list
3. Combination of Verlet and cell lists
4. Simple N² algorithm
We have used the program of Case Study 1 as a starting point. At this point it is important to note that we have not tried to optimize the parameters (such as the Verlet radius) for the various methods; we have simply taken some reasonable values. For the Verlet list (and for the combination of Verlet and cell lists) it is important that the maximum displacement be smaller than half the difference between the Verlet radius and the cutoff radius. For the cutoff radius we have used rc = 2.5σ, and for the Verlet radius rv = 3.0σ. This limits the maximum displacement to Δx = 0.25σ and implies for the Lennard-Jones fluid that, if we want to use an optimum acceptance of 50%, we can use the Verlet method only for densities larger than ρ > 0.6σ⁻³. For smaller densities, the optimum displacement is larger than 0.25σ. Note that this density dependence does not exist in a Molecular Dynamics simulation. In a Molecular Dynamics simulation, the maximum displacement is determined by the integration scheme and therefore is independent of density. This makes the Verlet method much more appropriate for a Molecular Dynamics simulation than for a Monte Carlo simulation. Only at high densities does it make sense to use the Verlet list. The cell list method is advantageous only if the number of cells is larger than 3 in at least one direction. For the Lennard-Jones fluid this means that, if the number of particles is 400, the density should be lower than ρ < 0.5σ⁻³. An important advantage of the cell list over the Verlet list is that this list can also be used for moves in which a particle is given a random position. From these arguments it is clear that, if the number of particles is smaller than 200–500, the simple N² algorithm is the best choice. If the number of particles is significantly larger and the density is low, the cell list method is probably more efficient. At high density, all methods can be efficient and we have to make a detailed comparison.


FIGURE I.4 Comparison of various schemes to calculate the energy: τ is in arbitrary units and N is the number of particles. As a test case the Lennard-Jones fluid is used. The temperature was T ∗ = 2 and per cycle the number of attempts to displace a particle was set to 100 for all systems. The lines serve to guide the eye.

To test these conclusions about the N dependence of the CPU time of the various methods, we have performed several simulations with a fixed number of Monte Carlo cycles. For the simple N² algorithm the CPU time per attempt is τN² = cN, where c is the CPU time required to calculate one interaction. This implies that the total amount of CPU time is independent of the density. For a calculation of the total energy, we have to do this calculation N times, which gives the scaling of N². Fig. I.4 shows that indeed for the Lennard-Jones fluid, τN² increases linearly with the number of particles. If we use the cell list, the CPU time will be τl = cVlρ + cl pl N, where Vl is the total volume of the cells that contribute to the interaction (in three dimensions, Vl = 27rc³), cl is the amount of CPU time required to make a cell list, and pl is the probability that a new list has to be made. Fig. I.4 shows that the use of a cell list reduces the CPU time for 10,000 particles by a factor of 18. Interestingly, the CPU time does not increase with increasing density. We would expect an increase, since the number of particles that contribute to the interaction of a particle i increases with density. However, the second contribution to τl contains pl, the probability that a new list has to be made; this probability depends on the maximum displacement, which decreases when the density increases. Therefore, this last term will contribute less at higher densities. For the Verlet scheme the CPU time is τv = cVvρ + cv pv N², where Vv is the volume of the Verlet sphere (in three dimensions, Vv = 4πrv³/3), cv is the amount of CPU time required to make the Verlet list, and pv


is the probability that a new list has to be made. Fig. I.4 shows that this scheme is not very efficient. The N 2 operation dominates the calculation. Note that we use a program in which a new list for all particles has to be made as soon as one of the particles has moved more than (rv − rc )/2; with some more bookkeeping it is possible to make a much more efficient program, in which a new list is made for only the particle that has moved out of the list. The combination of the cell and Verlet lists removes the N 2 dependence of the simple Verlet algorithm. The CPU time is given by τc = cVv ρ + cv pv cl N. Fig. I.4 shows that indeed the N 2 dependence is removed, but the resulting scheme is not more efficient than the cell list alone. This case study demonstrates that it is not simple to give a general recipe for which method to use. Depending on the conditions and number of particles, different algorithms are optimal. It is important to note that for a Molecular Dynamics simulation the conclusions may be different. For more details, see SI (Case Study 26).

Appendix J

Some general purpose algorithms

This appendix describes a few algorithms that are used in the text.

J.1 Gaussian distribution

Algorithm 36 (Gaussian distribution)

If R1 and R2 are two random numbers distributed uniformly over the interval (0, 1), then Xg, given by

$$
X_g = X_{\mathrm{avg}} + \sigma\sqrt{-2\ln(R_1)}\,\cos(2\pi R_2),
$$

has a Gaussian distribution with average $X_{\mathrm{avg}}$ and variance $\sigma^2$.

Comment The above algorithm is but one example. It is simple, but not necessarily the fastest.
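A direct Python transcription (illustrative only; in practice one would simply call a library normal-variate generator):

import numpy as np

rng = np.random.default_rng()

def gauss(x_avg, sigma):
    """Box-Muller: turn two uniform random numbers into one Gaussian variate
    with mean x_avg and variance sigma**2."""
    r1 = 1.0 - rng.random()       # in (0, 1], so the logarithm is finite
    r2 = rng.random()
    return x_avg + sigma * np.sqrt(-2.0 * np.log(r1)) * np.cos(2.0 * np.pi * r2)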



J.2 Selection of trial orientations

Algorithm 37 (Selection of trial orientations)

In the configurational bias MC method, we often need to select the next bond direction from a set of k trial directions. Below, we assume that the (Boltzmann) weights w(n) of the individual trial directions are known.

function select(w,sumw)                      selects a trial orientation n with probability p(n) = w(n)/Σj w(j)
  ws=R*sumw
  cumw=w(1)
  n=1
  while cumw < ws do
    n=n+1
    cumw=cumw+w(n)
  enddo
end function                                 the function returns n, the index of the selected trial position

Specific Comments (for general comments, see p. 7) 1. For large values of k bisection [38] can be more efficient.
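In Python, the same cumulative-weight search can be written as follows (names are ours; as noted in the comment above, bisection is preferable for large k):

import random

def select(w):
    """Pick index n with probability w[n] / sum(w) by walking through the
    cumulative weights until they exceed a uniform random number (cf. Algorithm 37)."""
    ws = random.random() * sum(w)
    cumw = 0.0
    for n, wn in enumerate(w):
        cumw += wn
        if cumw >= ws:
            return n
    return len(w) - 1      # guard against round-off in the last bin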

J.3 Generate random vector on a sphere

Algorithm 38 (Random vector on a unit sphere)

function ranor                               generates a 3d random unit vector with components bx,by,bz
  ransq=2.
  do while (ransq.ge.1.0)                    continue until the vector is inside the unit sphere
    ran1=1.-2.*R
    ran2=1.-2.*R
    ran3=1.-2.*R
    ransq=ran1*ran1+ran2*ran2+ran3*ran3
  enddo
  or=1.0/sqrt(ransq)
  bx=ran1*or
  by=ran2*or
  bz=ran3*or
end function

Specific Comment (for general comments, see p. 7) 1. The above algorithm is but one example. It is simple, but not necessarily the fastest.
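An equivalent Python version of this rejection scheme (illustrative only):

import numpy as np

rng = np.random.default_rng()

def random_unit_vector():
    """Rejection method: draw points uniformly in the cube [-1,1]^3 until one
    falls inside the unit sphere, then project it onto the sphere surface."""
    while True:
        v = 1.0 - 2.0 * rng.random(3)
        ransq = np.dot(v, v)
        if 0.0 < ransq < 1.0:
            return v / np.sqrt(ransq)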


J.4 Generate bond length

Algorithm 39 (Generate bond length with harmonic springs (3d))

function bondl                               returns bond length ℓ; harmonic spring with constant kv; ℓ0: bond length at T = 0
  α = kv/(kB T)
  ℓM = (ℓ0/2)*(1 + sqrt(1 + 8/(α ℓ0²)))      position of the maximum of p(ℓ) at temperature T
  ready=.false.
  while ready == .false. do
    ℓ = gauss(α, ℓM)                         generate ℓ with a Gaussian distribution
    aux = 2*[-(ℓ/ℓM - 1) + ln(ℓ/ℓM)]         auxiliary quantity
    if R ≤ exp(aux) then                     accepted? (rejection step)
      ready=.true.
    endif
  enddo
end function

Specific Comments (for general comments, see p. 7)
1. The bond length ℓ has the following distribution: p(ℓ) dℓ ∝ exp[−β 0.5 kv (ℓ − ℓ0)²] ℓ² dℓ ∝ ℓ² exp[−β 0.5 kv (ℓ − ℓ0)²] dℓ.
2. We make use of the fact that x − 1 ≥ ln x.
3. gauss(α, ℓM) is a 1d normal distribution, see section J.1.

The distribution of bond lengths of linear, harmonic molecules in 3d is close to a Gaussian distribution and can be generated starting from the Box-Muller algorithm [66]. Consider a linear molecule that can vibrate along its axis. The force constant for the vibration is denoted by κ. For a fixed orientation, the equilibrium bond length is denoted by ℓ0. We denote the inverse temperature by β. We ignore coupling between rotation and vibration. Under these circumstances, the length distribution of the molecule is

$$
P(\ell) \sim \ell^2\exp\left[-0.5\beta\kappa(\ell-\ell_0)^2\right]
= \exp\left[-0.5\beta\kappa(\ell-\ell_0)^2 + 2\ln\ell\right].
\qquad\text{(J.4.1)}
$$

For what follows, the normalization is unimportant. We cannot sample this distribution directly. However, we can use the rejection method. First, we determine the location $\ell_M$ of the maximum of $\ln P(\ell)$. We then get:

$$
\beta\kappa(\ell_M-\ell_0) - 2/\ell_M = 0.
\qquad\text{(J.4.2)}
$$

Denoting $\beta\kappa$ by $\alpha$, we get

$$
\ell_M^2 - \ell_M\ell_0 - 2/\alpha = 0,
\qquad\text{(J.4.3)}
$$

or

$$
\ell_M = 0.5\left(\ell_0 + \sqrt{\ell_0^2 + 8/\alpha}\right).
\qquad\text{(J.4.4)}
$$

We can approximate Eq. (J.4.1) by a Gaussian around $\ell_M$:

$$
P'(\ell) \sim \exp\left[-0.5\alpha(\ell-\ell_M)^2\right].
\qquad\text{(J.4.5)}
$$

$P'(\ell)$ is related to $P(\ell)$ through

$$
P(\ell)/P'(\ell) = \exp\left[2\ln(\ell/\ell_M) - \frac{2(\ell-\ell_M)}{\ell_M}\right] \le 1.
$$

We can therefore draw values of $\ell$ from $P'(\ell)$ and then reject values of $\ell$ if

$$
R > \exp\left[2\ln(\ell/\ell_M) - \frac{2(\ell-\ell_M)}{\ell_M}\right].
$$
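Putting the pieces together, a minimal Python transcription of Algorithm 39 reads as follows (our notation; kv, l0 and beta = 1/kBT are assumed inputs):

import numpy as np

rng = np.random.default_rng()

def bond_length(kv, l0, beta):
    """Draw a bond length l with probability p(l) ~ l^2 exp[-0.5*beta*kv*(l-l0)^2]
    by sampling a Gaussian proposal centered at l_M and rejecting (cf. Algorithm 39)."""
    alpha = beta * kv
    lM = 0.5 * (l0 + np.sqrt(l0**2 + 8.0 / alpha))     # maximum of p(l), Eq. (J.4.4)
    while True:
        l = rng.normal(lM, 1.0 / np.sqrt(alpha))       # proposal P'(l), Eq. (J.4.5)
        if l <= 0.0:
            continue                                   # the physical bond length is positive
        aux = 2.0 * (np.log(l / lM) - (l / lM - 1.0))  # log of the acceptance probability
        if rng.random() <= np.exp(aux):
            return l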

J.5 Generate bond angle

Algorithm 40 (Generate bond angle)

function bonda(xn,i)                         generate bond orientation vector b with Boltzmann probability given by the bond-bending potential
  ready=.false.
  while ready == .false. do
    b = ranor                                unit vector on a sphere
    dx1x2=xn(i-1)-xn(i-2)                    bond vector r(i-1) − r(i-2)
    u12=dx1x2/|dx1x2|                        normalize vector
    theta=acos(b*u12)                        bending angle θ = arccos(û12 · b̂)
    bu=ubb(theta)                            bond-bending energy
    if R < exp(-beta*bu) then                rejection test
      ready=.true.
    endif
  enddo
end function

Specific Comment (for general comments, see p. 7) 1. This algorithm uses a naive rejection scheme to generate a Boltzmann disˆ Function ranor generates a random vector on a tribution of orientations b. unit sphere (Algorithm 38). The function ubb (not specified) gives the bondbending energy for the given angle.


J.6 Generate bond and torsion angle

Algorithm 41 (Generate bond and torsion angle)

function tors_bonda(xn,i)                    generate a unit vector with orientational Boltzmann distribution determined by torsion and bond-bending potentials
  ready=.false.
  while ready == .false. do
    b = ranor                                generate random unit vector b̂
    dx1x2=xn(i-1)-xn(i-2)                    bond vector r(i-1) − r(i-2)
    dx1x2=dx1x2/|dx1x2|                      normalize: û12
    dx2x3=xn(i-2)-xn(i-3)                    bond vector r(i-2) − r(i-3)
    dx2x3=dx2x3/|dx2x3|                      normalize: û23
    theta=acos(b*dx1x2)                      bending angle θ = arccos(û12 · b̂)
    ubb=ubb(theta)                           bond-bending energy
    xx1=b × dx1x2                            cross product: xx1 = b̂ × û12
    xx2=dx1x2 × dx2x3                        cross product: xx2 = û12 × û23
    [ ...normalize xx1 and xx2 ]
    phi=acos(xx1*xx2)                        torsion angle φ = arccos(x̂x1 · x̂x2)
    utors=utors(phi)                         torsion energy
    usum=ubb+utors
    if R < exp(-beta*usum) then              rejection test
      ready=.true.
    endif
  enddo
end function

Specific Comments (for general comments, see p. 7) 1. This algorithm uses a naive rejection scheme to generate a Boltzmann disˆ tribution of orientations b. 2. In the literature, various definitions of the torsion angle are used. 3. The function ranor generates a vector uniformly on a unit sphere (Algorithm 38), and the function ubb (not specified) gives the bond-bending energy for the given angle θ . The function utors (also not specified) gives the torsion energy for the dihedral angle φ.


Part VI

Repository


Appendix K

Errata This page is optimistically left empty, but will be updated if it turns out that our optimism was unwarranted. Please send your suggestions/corrections by email to [email protected] and [email protected].



Appendix L

Miscellaneous methods

L.1 Higher-order integration schemes

The basic idea behind the predictor-corrector algorithms is to use information about the position and its first n derivatives at time t to arrive at a prediction for the position and its first n derivatives at time t + Δt. We then compute the forces (and thereby the accelerations) at the predicted positions. And then we find that these accelerations are not equal to the values that we had predicted. So we adjust our predictions for the accelerations to match the facts. But we do more than that. On the basis of the observed discrepancy between the predicted and observed accelerations, we also try to improve our estimate of the positions and the remaining n − 1 derivatives. This is the “corrector” part of the predictor-corrector algorithm. The precise “recipe” used in applying this correction is a compromise between accuracy and stability. Here, we shall simply show a specific example of a predictor-corrector algorithm, without attempting to justify the form of the corrector part. Consider the Taylor expansion of the coordinate of a given particle at time t + Δt:

$$
r(t+\Delta t) = r(t) + \Delta t\,\frac{\partial r}{\partial t}
+ \frac{\Delta t^2}{2!}\frac{\partial^2 r}{\partial t^2}
+ \frac{\Delta t^3}{3!}\frac{\partial^3 r}{\partial t^3} + \cdots.
$$

Using the notation

$$
x_0(t) \equiv r(t),\qquad
x_1(t) \equiv \Delta t\,\frac{\partial r}{\partial t},\qquad
x_2(t) \equiv \frac{\Delta t^2}{2!}\frac{\partial^2 r}{\partial t^2},\qquad
x_3(t) \equiv \frac{\Delta t^3}{3!}\frac{\partial^3 r}{\partial t^3},
$$

we can write the following predictions for $x_0(t+\Delta t)$ through $x_3(t+\Delta t)$:

$$
\begin{aligned}
x_0(t+\Delta t) &= x_0(t) + x_1(t) + x_2(t) + x_3(t)\\
x_1(t+\Delta t) &= x_1(t) + 2x_2(t) + 3x_3(t)\\
x_2(t+\Delta t) &= x_2(t) + 3x_3(t)\\
x_3(t+\Delta t) &= x_3(t).
\end{aligned}
$$


Now that we have $x_0(t+\Delta t)$, we can compute the forces at the predicted position, and thus compute the corrected value for $x_2(t+\Delta t)$. We denote the difference between $x_2^{\mathrm{corrected}}$ and $x_2^{\mathrm{predicted}}$ by $\Delta x_2$:

$$
\Delta x_2 \equiv x_2^{\mathrm{corrected}} - x_2^{\mathrm{predicted}}.
$$

We now estimate “corrected” values for $x_0$ through $x_3$, as follows:

$$
x_n^{\mathrm{corrected}} = x_n^{\mathrm{predicted}} + C_n\,\Delta x_2,
\qquad\text{(L.1.1)}
$$

where the $C_n$ are constants fixed for a given order algorithm. As indicated, the values for $C_n$ are such that they yield an optimal compromise between the accuracy and the stability of the algorithm. For instance, for a fifth-order predictor-corrector algorithm (i.e., one that uses $x_0$ through $x_4$), the values for $C_n$ are

$$
C_0 = \frac{19}{120},\qquad
C_1 = \frac{3}{4},\qquad
C_2 = 1\ \text{(of course)},\qquad
C_3 = \frac{1}{2},\qquad
C_4 = \frac{1}{12}.
$$

One may iterate the predictor and corrector steps to self-consistency. However, there is little point in doing so, because (1) every iteration requires a force calculation, so one would be better off spending the same computer time on a run with a shorter time step and only one iteration, and (2) even if we iterate the predictor-corrector algorithm to convergence, we still do not get the exact trajectory: the error is still of order $\Delta t^n$ for an nth-order algorithm. This is why we gain more accuracy by going to a shorter time step than by iterating to convergence at a fixed value of $\Delta t$.
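A bare-bones Python sketch of one such step with the coefficients listed above (illustrative; accel stands for the user's force routine divided by the mass, and the xs arrays follow the scaled-derivative notation of the text):

import numpy as np

C = np.array([19.0 / 120.0, 3.0 / 4.0, 1.0, 1.0 / 2.0, 1.0 / 12.0])

def predictor_corrector_step(xs, accel, dt):
    """One step of the fifth-order predictor-corrector sketched in the text.
    xs[n] holds the scaled derivative (dt**n / n!) * d^n r / dt^n for n = 0..4;
    accel(r) returns the acceleration at position r (an assumed user routine)."""
    x0, x1, x2, x3, x4 = xs
    # predictor: Taylor expansion of every scaled derivative
    x0p = x0 + x1 + x2 + x3 + x4
    x1p = x1 + 2 * x2 + 3 * x3 + 4 * x4
    x2p = x2 + 3 * x3 + 6 * x4
    x3p = x3 + 4 * x4
    x4p = x4
    # corrector: compare the predicted with the actual acceleration
    dx2 = 0.5 * dt * dt * accel(x0p) - x2p
    pred = [x0p, x1p, x2p, x3p, x4p]
    return [p + c * dx2 for p, c in zip(pred, C)]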

L.2 Surface tension via the pressure tensor

If we have an interface in our system we can compute the interfacial tension, γ, from the pressure tensor. In a homogeneous system at equilibrium, the thermodynamic pressure is constant and equal in all directions. For an inhomogeneous system, mechanical equilibrium requires that the component of the pressure tensor normal to the interface is constant throughout the system. The components tangential to the interface can vary in the interfacial region, but must be equal to the normal component in the bulk liquids.


For an inhomogeneous fluid there is no unambiguous way to compute the normal $p_n$ and tangential $p_t$ components of the pressure tensor [727–729]. Here, we have used the Kirkwood-Buff convention [730] for expressing the stress tensor in a system of particles with pairwise-additive interactions. The system is divided into $N_{sl}$ equal slabs parallel to the x, y plane. The local normal ($p_n(k)$) and tangential ($p_t(k)$) components of the pressure tensor are given by [143]

$$
p_n(k) = k_BT\,\rho(k) - \frac{1}{V_{sl}}\left\langle\sum_{(i,j)}^{(k)}\frac{z_{ij}^2}{r_{ij}}\,\frac{\mathrm{d}U(r_{ij})}{\mathrm{d}r}\right\rangle,
\qquad\text{(L.2.1)}
$$

and

$$
p_t(k) = k_BT\,\rho(k) - \frac{1}{2V_{sl}}\left\langle\sum_{(i,j)}^{(k)}\frac{x_{ij}^2+y_{ij}^2}{r_{ij}}\,\frac{\mathrm{d}U(r_{ij})}{\mathrm{d}r}\right\rangle,
\qquad\text{(L.2.2)}
$$

where $\rho(k)$ is the average density in slab k, $V_{sl} = L_xL_yL_z/N_{sl}$ is the volume of a slab, and $U(r)$ is the intermolecular potential from which the conservative forces can be derived. $\sum^{(k)}_{(i,j)}$ means that the summation runs over all pairs of particles i, j for which slab k (partially) contains the line that connects particles i and j. Slab k gets a contribution $1/N_o$ from a given pair (i, j), where $N_o$ is the total number of slabs that intersect this line. It can be shown that, even though the stress tensor is not unique, the definition of the interfacial tension γ is free from ambiguities [729]. The interfacial tension can be calculated by integrating the difference between the normal and tangential components of the pressure tensor across the interface. In the case of our system with two interfaces, γ reads

$$
\gamma = \frac{1}{2}\int_{0}^{L_z}\mathrm{d}z\,\left[p_n(z) - p_t(z)\right].
\qquad\text{(L.2.3)}
$$

The factor $\tfrac12$ corrects for the fact that, in a system with periodic boundary conditions, interfaces necessarily come in pairs.
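Once the slab-resolved profiles pn(z) and pt(z) have been accumulated, Eq. (L.2.3) reduces to a simple quadrature. A minimal sketch (the profiles and the box height Lz are assumed inputs):

import numpy as np

def surface_tension(pn, pt, Lz):
    """Interfacial tension from Eq. (L.2.3):
    gamma = 0.5 * integral_0^Lz dz [p_n(z) - p_t(z)],
    where the factor 0.5 accounts for the two interfaces in a periodic box.
    pn, pt : normal and tangential pressure per slab (equally spaced in z)."""
    dz = Lz / len(pn)
    return 0.5 * np.sum(np.asarray(pn) - np.asarray(pt)) * dz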

L.3 Micro-canonical Monte Carlo

Most experimental observations are performed at constant N, P, T; sometimes at constant μ, V, T; and occasionally at constant N, V, T. Experiments at constant N, V, E exist (e.g., in adiabatic calorimetry) but are rare, and it is fair to say that microcanonical MC simulations of dense liquids or solids are equally rare. In fact, the first microcanonical Monte Carlo method was suggested by Creutz [172], in the context of lattice gauge theory MC simulations. The microcanonical Monte Carlo method does not use random numbers to determine the acceptance of a move, but it does use random numbers to generate trial moves.


The constant NVE MC algorithm uses the following procedure. We start with the system in a configuration $q^N$. We denote the potential energy of this state by $U(q^N)$. We now fix the total energy of the system at a value $E > U$. The excess energy $E_D$ (the D stands for “demon”), which is equal to $E - U$, is carried by an additional degree of freedom. $E_D$ must be non-negative. Now we start our Monte Carlo run.

1. After each trial move from an “old” configuration (o) to a trial configuration (n), we compute the change in potential energy of the system, $\Delta U = U(q^N(n)) - U(q^N(o))$.
2. If $\Delta U < 0$, we accept the move and increase the energy carried by the demon by $|\Delta U|$. If $\Delta U > 0$, we test whether the demon carries enough energy ($E_D \ge \Delta U$) to make up the difference; if it does, we accept the move and decrease $E_D$ by $\Delta U$. Otherwise, we reject the trial move. Note that no random numbers were used in this decision.

Once the system has equilibrated, the probability density of finding the demon with an energy $E_D$ is given by the Boltzmann distribution:

$$
\mathcal N(E_D) = (k_BT)^{-1}\exp(-E_D/k_BT),
$$

where T is the temperature that the system reaches after equilibration. Hence, the demon acts as a thermometer. In that sense, the demon energy plays the same role as the kinetic energy in a micro-canonical MD simulation. Micro-canonical Monte Carlo is rarely used to simulate molecular systems, but it finds many other applications in cases where canonical MC simulations would fail, for instance in the study of systems interacting through gravity, as their potential energy is not bounded from below.
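A minimal Python sketch of one demon sweep (illustrative only; trial_move and potential_energy stand for user-supplied routines and are not part of the text):

import numpy as np

rng = np.random.default_rng()

def demon_sweep(coords, e_demon, potential_energy, trial_move):
    """One sweep of Creutz-style micro-canonical MC.  The demon energy e_demon
    absorbs or supplies the potential-energy change; no random number is used
    in the acceptance decision itself."""
    u_old = potential_energy(coords)
    for _ in range(len(coords)):
        new_coords = trial_move(coords, rng)       # random trial move
        du = potential_energy(new_coords) - u_old
        if du <= 0.0 or e_demon >= du:             # the demon can pay the difference
            coords, u_old = new_coords, u_old + du
            e_demon -= du                          # the demon gains |du| if du < 0
        # else: reject; configuration and demon energy stay unchanged
    return coords, e_demon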

L.4 Details of the Gibbs “ensemble” The introduction of a new ensemble brings up the question of whether it is a “proper ensemble”; that is, does it yield the same results as the conventional ensembles? To prove it does, we use the partition function (6.6.1) as derived in section 6.6.2 to define a free energy. This free energy is used to show that, in the thermodynamic limit, the Gibbs ensemble and the canonical ensemble are equivalent. This proof gives considerable insight into why the method works. Before we proceed, we first list a few basic results for the free energy in the canonical ensemble.

L.4.1 Free energy of the Gibbs ensemble L.4.1.1 Basic definitions and results for the canonical ensemble Consider a system of N particles in a volume V and temperature T (canonical ensemble). The partition function is defined as (see Ruelle [731])  1 Q(N, V , T ) ≡ 3N drN exp [−βU(N )] . (L.4.1)  N! V


The free energy density is defined in the thermodynamic limit by f (ρ) ≡ lim fV (ρ) ≡ lim − V →∞

V →∞ N/V =ρ

1 ln QN,V ,T , βV

where ρ = N/V is the density of the system. For a finite number of particles we can write Q(N, V , T ) = exp {−βV [f (ρ) + o(V )]} ,

(L.4.2)

where g(V ) = o(V ) means g(V )/V approaches 0 as V → ∞. With this free energy, we can derive some interesting properties of a canonical system in the thermodynamic limit. For example, it can be shown that this free energy is a convex function of the density ρ [731]: f (xρ1 + (1 − x)ρ2 ) ≤ xf (ρ1 ) + (1 − x)f (ρ2 ),

(L.4.3)

for every ρ1 , ρ2 , and x where 0 ≤ x ≤ 1. The equality holds in the case of a first-order transition, if ρg ≤ ρ1 ≤ ρ2 ≤ ρl , where ρg , ρl denote the density of coexisting gas and liquid phases, respectively. Another interesting result, which plays a central role on the following pages, is the well-known saddle point theorem [732] (also called the steepest descent method). This theorem is based on the observation that, for a macroscopic system (N very large) in equilibrium, the probability that the free energy density deviates from its minimum value is extremely small. Therefore, when we calculate for such a system an ensemble average, we have to take into account only those contributions where the free energy has its minimum value. Assume that Q(N, V , T ) can be written as  Q(N, V , T ) ≡ da1 , · · · , dam exp [−βV (fm (a1 , · · · , am ) + o(V ))] , where a1 , · · · , am are variables that characterize the thermodynamic state of the system. Furthermore, define f (ρ) ≡ min fm (a1 , · · · , am ) a1 ,··· ,am

and assume that fm (a1 , · · · , am ) and the term o(V ) satisfy a few technical conditions [732], which hold for most statistical mechanics systems. The saddle point theorem states that, in the thermodynamic limit, the free energy of the system is equal to this minimum value f (ρ) or lim −

V →∞ N/V =ρ

1 ln Q(N, V , T ) = f (ρ). βV

(L.4.4)

e8 Miscellaneous methods

Moreover, this saddle point theorem can also be used to calculate the ensemble average of a quantity A:  1 A(a1 , · · · , am )V ≡ da1 , · · · , dam Q(N, V , T ) (L.4.5) × A(a1 , · · · , am ) exp {−βV [fm (a1 , · · · , am ) + o(V )]} . In the thermodynamic limit, this ensemble average again has contributions only from those configurations where fm (a1 , · · · , am ) has its minimum value. Let us define S as the collection of these minima:     S = y1 , · · · , ym fm (y1 , · · · , ym ) = min fm (a1 , · · · , am ) . a1 ,··· ,am

We now can state the saddle point theorem in a convenient form by introducing a function G(a1 , · · · , am ) ≥ 0 with support on the surface S and normalization  da1 , · · · , dam G(a1 , · · · , am ) = 1, S

such that, for an arbitrary function A, A(a1 , · · · , am ) ≡ lim A(a1 , · · · , am )V V →∞  = da1 , · · · , dam G(a1 , · · · , am )A(a1 , · · · , am ).

(L.4.6)

S

L.4.1.2 The free energy density in the Gibbs ensemble The Gibbs ensemble is introduced in section 6.6.2 as an N , V , T ensemble to which an additional degree of freedom is added: the system is divided into two subsystems that have no interaction with each other. We can rewrite the partition function of the canonical ensemble (L.4.1): Q(N, V , T ) =

1 3N N!

N   N n1 =0

n1

0



V

dV1

drn1 1



−n1 exp {−β [U(n1 ) drN 2

+ U(N − n1 ) + interactions between the two volumes]} . (L.4.7) The difference between this equation and the partition function of the Gibbs ensemble (6.6.1) is that, in Eq. (L.4.7), we have interactions between the subsystems. In the case of short-range interactions, the last term in the exponent of Eq. (L.4.7) is proportional to a surface term. This already suggests that both ensembles should behave similarly in many respects. We work out these ideas more rigorously in the following pages.

Miscellaneous methods e9

In the usual way, we define, as a free energy in the Gibbs ensemble, 1 ln Q¯ N,V ,T . f¯(ρ) ≡ lim − βV V →∞

(L.4.8)

N/V =ρ

In the partition function of the Gibbs ensemble (6.6.1), we can substitute Eq. (L.4.1): ¯ Q(N, V,T ) =

N  

V

dV1 Q(n1 , V1 , T )Q(N − n1 , V − V1 , T ).

n1 =0 0

Introducing x = N1 /N and y = V1 /V , and assuming that the number of particles is very large, we can then write ¯ Q(N, V, T ) = NV





1

dx 0

1

¯ N (x, y), dy Q

0

where ¯ N (x, y) = QxN,yV ,T Q(1−x)N,(1−y)V ,T Q



 1−x x ρ + (1 − y)f ρ + o(V ) . = exp −βV yf y 1−y Note that, in this equation, f (ρ) is the free energy of a canonical system. So, we can apply the saddle point theorem of the previous section (L.4.4) to calculate the free energy density of the Gibbs ensemble f¯(ρ)

x 1−x f¯(ρ) = min yf ρ + (1 − y)f ρ ≡ min f¯(x, y). y 1−y 0≤x≤1 0≤x≤1 0≤y≤1

0≤y≤1

We now have to find the surface S on which the function f¯(x, y) reaches its minimum. For this, we can use that f (ρ) is a convex function of the density (L.4.3). This gives, for f¯(x, y),

x 1−x ¯ f (x, y) ≥ f y ρ + (1 − y) ρ = f (ρ). (L.4.9) y 1−y We first consider the case where there is only one phase. For this case any combination of x and y that results in densities ρ1 and ρ2 in the subsystems different from ρ will give a higher free energy. So, the equality in Eq. (L.4.9) holds only if x 1−x ρ= ρ, or x = y. y 1−y

e10 Miscellaneous methods

Thus, when there is only one phase, the free energy of the Gibbs ensemble has its minimum value (in the thermodynamic limit) when both boxes have a density equal to the equilibrium density of the canonical ensemble. Therefore, the surface S is given by S = {(x, y) |x = y } . Second, we consider the case of a first-order phase transition. Let ρ be such that ρl ≤ ρ ≤ ρg , and let us choose x and y such that ρg ≤

x 1−x ρ ≡ ρ3 ≤ ρl and ρg ≤ ρ ≡ ρ4 ≤ ρl . y 1−y

(L.4.10)

For this case the equality in Eq. (L.4.3) holds, and we can write, for f¯(x, y), f¯(x, y) = yf (ρ3 ) + (1 − y)f (ρ4 ) = f (yρ3 + (1 − y)ρ4 ).

(L.4.11)

(yρ3 + (1 − y)ρ4 ) = ρ,

(L.4.12)

f¯(x, y) = f (ρ).

(L.4.13)

Note that

which gives

It can be shown that, if x, y do not satisfy Eq. (L.4.10), f¯(x, y) > f (ρ). Therefore, the surface S in the case of a first-order phase transition is given by    x 1−x S = (x, y) ρg ≤ ρ ≤ ρl , ρg ≤ ρ ≤ ρl . (L.4.14) y 1−y This result shows that, in the case of a first-order transition, the (bulk) free energy of the Gibbs ensemble has its minimum value (in the thermodynamic limit) for all values of x, y where there is vapor-liquid coexistence in both boxes. Eqs. (L.4.9) and (L.4.13) show that, in the thermodynamic limit, the free energy of the Gibbs ensemble is equal to the free energy of the canonical ensemble. To calculate an ensemble average, it remains to determine the function G(x, y) using Eq. (L.4.6). In the case of a pure phase G(x, y) needs to be of the form G(x, y) = g(x) δ(x − y).

(L.4.15)

It is shown in the Appendix of [220] that, for an ideal gas, g(x) = 1. We expect that the same holds for an interacting gas. Fig. L.1 shows a probability plot in

Miscellaneous methods e11

FIGURE L.1 Probability plot in the x, y plane (x = n1 /N , y = V1 /V and x = (N − n1 )/N , y = (V − V1 )/V ) for a Lennard-Jones fluid at various temperatures: (left) high temperature (T = 10), (middle) well below the critical temperature (T = 1.15), and (right) slightly below the critical temperature (T = 1.30).

the x, y plane for a simulation of a finite system at high temperature. This figure shows that x ≈ y. In the case of two phases, we will show that the system will split up into a liquid phase, with density, ρl , in one box, and a vapor phase, with density, ρg , in the other box. Until now we have ignored surface effects, which arise from the presence of a liquid-vapor interface in the boxes. When the density in one box is between the vapor and liquid density the system will form droplets of gas or liquid. The interfacial free energy associated with these droplets has (in the thermodynamic limit) a negligible contribution to the bulk free energy of the Gibbs ensemble. Nevertheless, this surface-free energy is the driving force that causes the system to separate into a homogeneous liquid in one box and a homogeneous vapor phase in the other. These surface effects are taken into account in the next significant term in the expression for the free energy (L.4.2), which is the term due to the surface tension. This gives, for the partition function, Q(N, V , T ) = exp {−β [Vf (ρ) + γ A + o(A)]} ,

(L.4.16)

where A denotes the area of the interface and γ denotes the interfacial tension. For three-dimensional systems, in general this area will be proportional to V 2/3 . Using this form of the partition function for the Gibbs ensemble, Eq. (L.4.5) can be written as A(x, y)V (L.4.17)     2/3 2/3 dxdy A(x, y) exp −β Vf (x, y) + γ V a(x, y) + o(V ) , = Q(N, V , T ) where a(x, y) is a function of the order of unity. We know from the saddle point theorem that the most important contribution to the integrals comes from the region S, defined by Eq. (L.4.14). Thus,

e12 Miscellaneous methods

A(x, y)V     dxdy A(x, y) exp −β Vf (x, y) + γ V 2/3 a(x, y) + o(V 2/3 ) S     ≈ 2/3 a(x, y) + o(V 2/3 ) S dxdy exp −β Vf (x, y) + γ V     dxdy A(x, y) exp −β γ V 2/3 a(x, y) + o(V 2/3 ) S     = (L.4.18) 2/3 a(x, y) + o(V 2/3 ) S dxdy exp −β γ V and applying the saddle point theorem again    2/3 a(x, y) + o(V 2/3 ) SA dxdy A(x, y) exp −βγ V    A(x, y)V ≈ (L.4.19) 2/3 a(x, y) + o(V 2/3 ) SA dxdy exp −βγ V and

 lim A(x, y)V =

dxdy G(x, y) A(x, y),

(L.4.20)

where the surface SA is now given by     SA = (x, y) a(x, y) = min a(x, ¯ y) ¯ .

(L.4.21)

V →∞

SA

x, ¯ y¯

In the infinite system it is easily seen that the area of the interface is 0, if box 1 contains only gas (liquid) and box 2 only liquid (gas). Therefore, the surface SA contains only two points, which correspond to the vapor and liquid densities:   x 1−x x 1−x  SA = (x, y)  = ρl and = ρg or = ρg and = ρl . (L.4.22) y 1−y y 1−y It is straightforward to show that this surface gives, for G(x, y),



ρ − ρg ρg ρ − ρg 1 G(x, y) = δ x − δ y− 2 ρ ρl − ρg ρl − ρg



1 ρl − ρ ρl ρl − ρ + δ x− δ y− . 2 ρ ρl − ρg ρl − ρg

(L.4.23)

We have shown more formally that the free energy density for the Gibbs ensemble, as defined by Eq. (L.4.8), becomes identical to the free energy density of the canonical ensemble. Furthermore, it is shown that, at high temperatures, x = y; that is, the densities in the two subsystems of the Gibbs ensemble are equal and equal to the density in the canonical ensemble (see Fig. L.1). In the case of a first-order phase transition, if surface terms would be unimportant, then x and y are restricted to the area defined by Eq. (L.4.14): ρg ≤

x 1−x ρ ≡ ρ3 ≤ ρl and ρg ≤ ρ ≡ ρ4 ≤ ρl . y 1−y

(L.4.24)

Miscellaneous methods e13

FIGURE L.2 Probability plot in the x − y plane of a successful simulation of a Lennard-Jones fluid well below the critical temperature (T = 1.15 and N = 500).

If we take surface effects into account, this surface (Eq. (L.4.24)) reduces to two points in the x, y plane. The densities of these points correspond to the density of the gas or liquid phase in the canonical ensemble. It is interesting to compare this with the results of an actual simulation of a finite system. In Fig. L.1, the results are shown for a simulation at a temperature well below the critical point. Under such conditions, the surface reduces to two points. This should be compared to the results of a simulation close to the critical point (Fig. L.1). Under such conditions the interfacial tension is very small and we see that the simulation samples the entire surface S. Note that due to the finite size of this system, fluctuations are also possible in which the density of a subsystem becomes greater or smaller than the density of the liquid or gas phase.

L.4.2 Graphical analysis of simulation results In Appendix L.4, we describe a graphical technique for analyzing the results of a Gibbs ensemble simulation. In this scheme, the fraction of all particles (ni /N ) in box i is plotted versus the fraction of the total volume (vi /V ) taken up by this box. In the x-y plane, where x = ni /N and y = Vi /V , every dot represents a point sampled in the simulation. In the thermodynamic limit, only two points in the x-y plane are sampled; namely, those that correspond to the coexisting liquid and gas density (see Appendix L.4). For a finite system, we expect to observe fluctuations around these points. Fig. L.2 shows an x-y plot for a simulation of two-phase coexistence well below the critical temperature. The fact that the simulation results cluster around the two points that correspond to the coexisting liquid and vapor indicates that the system was well equilibrated. If a simulation in the Gibbs ensemble is performed far below the critical temperature, it is in general no problem to analyze the results. After the equilibration, it becomes clear which of the boxes contains the vapor phase and which the liquid phase. The densities of the coexisting phases can simply be obtained by sampling the densities at regular intervals.

e14 Miscellaneous methods

FIGURE L.3 Density in the two boxes in a Gibbs ensemble simulation close to the critical temperature. The left figure shows the evolution of the density of the two boxes during a simulation. The right figure gives the corresponding probability density. The simulations were performed on a Lennard-Jones fluid with N = 256 at T = 1.30.

When estimating the accuracy of the simulation one should be careful since the “measured” densities are not sampled independently: in estimating the standard deviations of the results one should take this into account (this aspect is discussed in more detail in Appendix A of [733]). Close to the critical point, however, it is possible that the boxes continuously change “identity” during a simulation. In Fig. L.3 the evolution of the density in such a simulation close to the critical point is shown. In such a system, the average density in any one of the two boxes will tend to the overall density (N/V ). In those circumstances, it is more convenient to construct a histogram of the probability density P (ρ) to observe a density ρ in either box. Even when the boxes change identity during the simulation, the maxima of P (ρ) are still well defined. And, as shown in Appendix L.4, in the thermodynamic limit, the two maxima of P (ρ) correspond to coexisting vapor and liquid densities, except precisely at the critical point. (For a discussion of the critical behavior of P (ρ), see the article by Allen and Tildesley [46].) Because P (ρ) is obtained by sampling the density in both boxes, the results are not influenced when the boxes change identity. In Fig. L.3 an example of such a density distribution is shown. In this particular example, the simulation was carried out rather close to the critical point. Under those conditions, the interpretation of the density histogram is complicated because interfaces may form in both boxes. As a consequence, three peaks are observed; the two outside peaks correspond to the coexisting liquid and gas phase. A simple model that accounts for the existence of the middle peak is discussed in [220]. Example 30 (Finite-size effects in the Gibbs ensemble). Most Gibbs-ensemble simulations are performed on relatively small systems (64 ≤ N ≤ 500). One

Miscellaneous methods e15

therefore would expect to see significant finite-size effects, in particular, close to the critical point. Indeed, in simulations of a system of 100 Ising spins on a lattice,a phase coexistence is observed at temperatures as much as 25% above the critical temperature of the infinite system. In contrast to what is found in lattice gases, the first Gibbs-ensemble studies of the phase diagram of the Lennard-Jones fluid (in two and three dimensions) [77,211,213,220] did not show significant finite-size effects. This striking difference with the lattice models motivated Mon and Binder [734] to investigate the finite-size effects in the Gibbs ensemble for the two-dimensional Ising model in detail. For the two-dimensional Ising model the critical exponents and critical temperature are known exactly. Mon and Binder determined for various system sizes L the order parameter ML (T ) (see Eq. (6.6.13)): ML (T ) =

ρl (T ) − ρc = (1 − T /Tc )β , ρc

where ρl (T ) is the density of the liquid phase, ρc and Tc are the critical density and temperature, respectively, and β is the critical exponent.

FIGURE L.4 Finite-size effects in a Gibbs-ensemble simulation of the two-dimensional 1/β Ising model. Order parameter ML (T ) for L = 10 (i.e., L × L = 100 spins) versus T /Tc , where Tc is the exact critical temperature for the infinite system. The lines are fitted through the points. The simulation data are taken from [734].

The results of the simulations of Mon and Binder are shown in Fig. L.4, in 1/β which the order parameter ML (T ) is plotted as ML (T ) versus T /Tc . Such a plot of the order parameter allows us to determine the effective critical exponent of the system. If the system behaves classically, the critical exponent has the mean field value β = 1/2 and we would expect a linear behavior of 2 (T ). On the other hand, if the system shows nonclassical behavior, with ML 8 (T ). Fig. L.4 shows exponent β = 1/8, we would expect a straight line for ML that, away from the critical point, the temperature dependence of the order parameter is best described with an exponent β = 1/8. Closer to the critical point, the mean field exponent β = 1/2 fits the data better. This behavior is as expected. Away from the critical point the system can accommodate all rele-

e16 Miscellaneous methods

vant fluctuations and exhibits nonclassical behavior. But close to the critical point the system is too small to accommodate all fluctuations and, as a consequence, mean field behavior is observed. In addition, Fig. L.4 shows that we still can observe vapor-liquid coexistence at temperatures 20% above the critical temperature of the infinite system, which implies significant finite-size effects. The study of Mon and Binder shows that, in a lattice model of a fluid, finite-size effects on the liquid-vapor coexistence curve are very pronounced. It is important to note that, in this lattice version of the Gibbs ensemble, we do not change the volume and therefore fewer fluctuations are possible than in the off-lattice version.

FIGURE L.5 Finite-size effects in the liquid-vapor coexistence curve of the twodimensional Lennard-Jones fluid (truncated potential rc = 5.0σ ) studied by Gibbs-ensemble simulation. The order parameter M corresponds to the density difference between the coex1/β isting liquid and vapor phases. The figure shows ML (T ) versus T /Tc for various system sizes L. Tc is the estimated critical temperature for the infinite system (Tc = 0.497 ± 0.003). The simulation data are taken from [222].

The striking differences between the findings of Mon and Binder and the results of the early simulations of the Lennard-Jones fluid motivated Panagiotopoulos to reinvestigate in some detail the finite-size effects of Gibbsensemble simulations of the two- and three-dimensional Lennard-Jones fluid [222]. The results of the simulations of Panagiotopoulos are shown in Fig. L.5. For the Lennard-Jones fluid, the order parameter is defined as ML (T ) = ρl − ρg . The results for the two-dimensional Lennard-Jones fluid are qualitatively similar to the results of Mon and Binder. At low temperatures, Ising-like behavior is observed and close to the critical point mean-field-like behavior. An important difference is the magnitude of the finite-size effects. Fig. L.5 shows that, for the two system sizes, the results are very similar; the finite size effects are at most 5%. In addition, Fig. L.5 also indicates why the initial Gibbs-ensemble studies on the Lennard-Jones fluids did not show significant finite-size effects. All these studies used Eqs. (6.6.12) and (6.6.13) to determine the critical point.


If we use these equations, we implicitly assume nonclassical behavior up to the critical point. In Fig. L.5, this corresponds to extrapolating the lines, fitted to the data points, for β = 1/8. Extrapolation of these lines to $M_L^8(T) = 0$ gives a critical point that is not only independent of the system size but also very close to the true critical point of the infinite system. For the three-dimensional Lennard-Jones fluid, Panagiotopoulos did not observe a crossover from Ising-like to mean-field behavior in the temperature regime that could be studied conveniently in the Gibbs ensemble (T < 0.98Tc). Also for liquid-liquid equilibria in the square-well fluid, Recht and Panagiotopoulos [735] and de Miguel et al. [246] did not observe such a crossover. Moreover, for the three-dimensional Lennard-Jones fluid, the finite-size effects were negligible away from Tc and very small close to Tc.

a. The Ising model is equivalent to a lattice-gas model of a fluid. The latter model is the simplest that exhibits a liquid-vapor transition.
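To make this extrapolation procedure concrete, the sketch below (Python, with purely hypothetical data arrays) fits $M_L^{1/\beta}(T)$ linearly in T for an assumed exponent and extrapolates to zero to estimate the apparent critical temperature; comparing the estimates obtained with β = 1/8 and β = 1/2 mimics the analysis of Figs. L.4 and L.5.

    import numpy as np

    # Hypothetical coexistence data from a Gibbs-ensemble run:
    # temperatures and order parameter M(T) = rho_liquid - rho_vapor.
    T = np.array([0.35, 0.38, 0.41, 0.44, 0.47])
    M = np.array([0.62, 0.55, 0.47, 0.37, 0.24])

    def apparent_tc(T, M, beta):
        """Fit M**(1/beta) linearly in T and extrapolate to M**(1/beta) = 0."""
        y = M ** (1.0 / beta)
        slope, intercept = np.polyfit(T, y, 1)
        return -intercept / slope   # temperature where the fitted line crosses zero

    print("Tc estimate (Ising, beta=1/8):     ", apparent_tc(T, M, 1.0 / 8.0))
    print("Tc estimate (mean field, beta=1/2):", apparent_tc(T, M, 1.0 / 2.0))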

L.4.3 Chemical potential in the Gibbs ensemble

One of the steps in the Gibbs ensemble involves the insertion of a particle in one of the boxes. During this step, the energy of this particle has to be calculated (see section 6.6.3). Since this energy corresponds to the energy of a test particle, we can use the Widom insertion method [736] to calculate the chemical potential without additional costs [213]. At this point it is important to note that the Gibbs method requires no computation of the chemical potentials. However, to test whether the system under consideration has reached equilibrium, or for comparison with other results, it is important to calculate the chemical potential of the individual phases correctly. The original Widom expression is valid only in the N,V,T ensemble and can be modified for applications in other ensembles (see section 8.5.1). Here we derive an expression for the chemical potential for the Gibbs ensemble. We restrict ourselves to temperatures sufficiently far below the critical temperature that the two boxes, after equilibration, do not change identity. For the more general case, we refer to [217]. If we rescale the coordinates of the particles with the box length, the partition function for the Gibbs ensemble (6.6.1) becomes

\bar{Q}_{N,V,T} \equiv \sum_{n_1=0}^{N}\binom{N}{n_1}\frac{1}{V\Lambda^{3N}N!}\int_0^V dV_1\,V_1^{n_1}(V-V_1)^{N-n_1}\int ds_1^{n_1}\exp[-\beta U_1(n_1)]\int ds_2^{N-n_1}\exp[-\beta U_2(N-n_1)]
= \sum_{n_1=0}^{N}\int_0^V dV_1\,V_1^{n_1}(V-V_1)^{N-n_1}\,Q_1(n_1,V_1)\,Q_2(N-n_1,V-V_1),          (L.4.25)

where s = r/L is the scaled coordinate of a particle, L is the box length of the subsystem in which the particle is located, and $Q_i(n_i, V_i)$ is the partition function of the canonical ensemble (see also section L.4.1.2). The chemical potential of box 1 can be defined as

\mu_1 \equiv -k_B T\,\ln \sum_{n_1=0}^{N}\int_0^V dV_1\,V_1^{n_1}(V-V_1)^{N-n_1}\,\frac{Q_1(n_1+1,V_1)}{Q_1(n_1,V_1)}\,Q_2(N-n_1,V-V_1).          (L.4.26)

For the ratio of the partition functions of box 1, we can write

\frac{Q_1(n_1+1,V_1)}{Q_1(n_1,V_1)} = \frac{V_1}{(n_1+1)\Lambda^3}\,\frac{\int ds_1^{n_1+1}\exp[-\beta U_1(n_1+1)]}{\int ds_1^{n_1}\exp[-\beta U_1(n_1)]}
= \frac{V_1}{(n_1+1)\Lambda^3}\,\frac{\int ds_1\exp[-\beta U_1^+]\exp[-\beta U_1(n_1)]}{\int ds_1^{n_1}\exp[-\beta U_1(n_1)]},          (L.4.27)

in which we have used the notation $U_1(n_1+1) = U_1^+ + U_1(n_1)$, where $U_1^+$ is the test-particle energy of a (ghost) particle in box 1. We can write Eq. (L.4.26) as an ensemble average restricted to box 1:

\mu_1 = -k_B T\,\ln\left\langle \frac{V_1}{\Lambda^3(n_1+1)}\exp\left[-\beta U_1^+\right]\right\rangle_{\text{Gibbs, box 1}},          (L.4.28)

where ⟨· · ·⟩_{Gibbs, box i} denotes an ensemble average in the Gibbs ensemble restricted to box i (note that this ensemble average is well defined if the boxes do not change identity during a simulation [217]).
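The bookkeeping implied by Eq. (L.4.28) costs essentially nothing, because the test-particle energy is already computed during every trial insertion of the swap move (this is the purpose of the variable w in Algorithm 43). A minimal Python sketch of such an accumulator is given below; β, the box volume, the particle count, and the test-particle energy are assumed to be supplied by the surrounding simulation code, and the de Broglie factor Λ³ is omitted for brevity.

    import math

    class GibbsChemicalPotential:
        """Accumulate the estimator of Eq. (L.4.28) during Gibbs-ensemble swap trials."""

        def __init__(self, beta):
            self.beta = beta
            self.sum_w = [0.0, 0.0]   # running sums for box 0 and box 1
            self.count = [0, 0]

        def sample_insertion(self, box, volume, n_in_box, u_test):
            # contribution V_box/(n_box+1) * exp(-beta*u_test); Lambda^3 omitted
            # (it only shifts mu by the ideal-gas de Broglie term).
            w = volume / (n_in_box + 1) * math.exp(-self.beta * u_test)
            self.sum_w[box] += w
            self.count[box] += 1

        def mu_excess(self, box):
            avg = self.sum_w[box] / self.count[box]
            return -math.log(avg) / self.beta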

L.4.4 Algorithms of the Gibbs ensemble

The two Monte Carlo moves that are specific to the Gibbs ensemble, the exchange of volume between the boxes and the swap of a particle from one box to the other, are summarized in Algorithms 42 and 43.

L.5 Multi-canonical ensemble method

In the multi-canonical ensemble method, we extend our ensemble by using the energy space [350]. In the previous sections, we used different tricks to obtain information on parts of phase space that are rarely sampled in conventional simulations. The idea of a multi-canonical ensemble is to ensure that all energies are sampled uniformly. The probability of finding a system with a particular energy is given by

P(U) = N(U)\,w(U),


Algorithm 42 (Attempt to change the volume in the Gibbs ensemble)

  function mcvol                                trial volume change of box 1/2 at constant total volume
    vo1=box1**3                                 volume box 1 (box length box1)
    vo2=v-vo1                                   ...and 2
    box1o=box1                                  old box lengths
    box2o=vo2**(1/3)
    eno1=toterg(vo1,1)                          energy old conf. box 1
    eno2=toterg(vo2,2)                          ...and 2
    lnvn=log(vo1/vo2)+(R-0.5)*vmax              random walk in ln V1/V2
    v1n=v*exp(lnvn)/(1+exp(lnvn))               new volume box 1
    v2n=v-v1n                                   ...and 2
    box1n=v1n**(1/3)                            new box length box 1
    box2n=v2n**(1/3)                            new box length box 2
    for 1 ≤ i ≤ npart do
      if ibox(i) == 1 then                      determine which box
        fact=box1n/box1o
      else
        fact=box2n/box2o
      endif
      x(i)=x(i)*fact                            rescale positions
    enddo
    en1n=toterg(v1n,1)                          total energy box 1
    en2n=toterg(v2n,2)                          total energy box 2
    arg1=-beta*(en1n-eno1)+(npbox(1)+1)*log(v1n/vo1)     appropriate weight function
    arg2=-beta*(en2n-eno2)+(npbox(2)+1)*log(v2n/vo2)     acceptance rule (6.6.10)
    if R > exp(arg1+arg2) then                  REJECTED
      for 1 ≤ i ≤ npart do
        if ibox(i) == 1 then                    determine which box
          fact=box1o/box1n
        else
          fact=box2o/box2n
        endif
        x(i)=x(i)*fact                          restore old configuration
      enddo
    endif
  end function

Specific Comments (for general comments, see p. 7)
1. The term ibox(i) == 1 indicates that particle i is in box 1; npart = npbox(1) + npbox(2), where npbox(i) gives the number of particles in box i.
2. In this algorithm we perform a random walk in ln(V1/V2) and we use acceptance rule (6.6.10).
3. The function toterg is not shown explicitly: it is similar to Algorithm 5 and calculates the total energy of one of the two boxes. It requires information about the identity of the box (1 or 2) and the volume of that box (here assumed to be cubic). In most cases the energy of the old configuration is known, and therefore it is not necessary to determine this energy at the beginning of the volume step.

where N(U) is the density of states and w(U) the particular weight in the ensemble. If we use w(U) = C/N(U), we have a flat distribution of energies. In practice, we do not know the density of states, so we need to find it iteratively. For example, if we start with an ordinary N,V,T simulation and make a histogram of the occurrence of a particular value of the energy, $H_j(U)$, we can use this histogram to improve our bias potential [737]:

U^{bias}_{j+1}(U) = U^{bias}_{j}(U) + \frac{1}{\beta}\left[\ln H_j(U) - \ln \bar{H}_j\right],

where $\bar{H}_j$ is the average value of the histogram. Those energies that occur more often than the average get an unfavorable bias, a higher energy, and those that occur less often than the average a favorable bias. If the histogram is flat, the bias potential does not change in the next iteration. To see how this can be used to compute a free-energy (difference), note that in the canonical ensemble we have

P(U) = \frac{\int dr^N\,\delta\!\left(U(r^N)-U\right)\exp\!\left[-\beta U(r^N)\right]}{\int dr^N\,\exp\!\left[-\beta U(r^N)\right]}
     = \exp(-\beta U)\,\frac{\int dr^N\,\delta\!\left(U(r^N)-U\right)}{C_0},

and in the multi-canonical ensemble, we have

P^{MulCan}(U) = \frac{\int dr^N\,\delta\!\left(U(r^N)-U\right)\exp\!\left\{-\beta\left[U(r^N)+U^{bias}\!\left(U(r^N)\right)\right]\right\}}{\int dr^N\,\exp\!\left\{-\beta\left[U(r^N)+U^{bias}\!\left(U(r^N)\right)\right]\right\}}
             = \exp\!\left\{-\beta\left[U+U^{bias}(U)\right]\right\}\,\frac{\int dr^N\,\delta\!\left(U(r^N)-U\right)}{C_{MulCan}}.

Or, for the ratio of these two probabilities:

\frac{P(U)}{P^{MulCan}(U)} = C\,\frac{\exp(-\beta U)}{\exp\!\left\{-\beta\left[U+U^{bias}(U)\right]\right\}} = C\exp\!\left[\beta U^{bias}(U)\right].


Algorithm 43 (Attempt to swap a particle between the two boxes)

  function mcswap                               attempts to swap a particle between the two boxes
    if R < 0.5 then                             which box to add or remove
      in=1
      out=2
    else
      in=2
      out=1
    endif
    xn=R*box(in)                                new particle at a random position
    enn=ener(xn,in)                             energy new particle in box in
    w(in)=w(in)+vol(in)*exp(-beta*enn)/(npbox(in)+1)       update chemical potential (L.4.28)
    if (npbox(out) == 0) return                 if box empty return
    ido=0                                       find a particle to be removed
    while ido ≠ out do
      o=int(npart*R)+1
      ido=ibox(o)
    end while
    eno=ener(x(o),out)                          energy particle o in box out
    arg=exp(-beta*(enn-eno+log(vol(out)*(npbox(in)+1)/(vol(in)*npbox(out)))/beta))      acceptance rule (6.6.11)
    if R < arg then
      x(o)=xn                                   add new particle to box in
      ibox(o)=in
      npbox(out)=npbox(out)-1
      npbox(in)=npbox(in)+1
    endif
  end function

Specific Comments (for general comments, see p. 7)
1. ener(x, ib) calculates the energy of a particle at position x in box ib. It carries a box label and is slightly different from the function called in Algorithm 2.
2. We specify an additional argument for the function ener: the index of the box where we attempt to insert (in) or remove (out) a particle.
3. The acceptance rule (6.6.11) is used in this algorithm.
4. We also compute the Boltzmann factor associated with the random insertion of a particle, at virtually no added cost. At the end of the simulation, the excess chemical potential can be calculated from w(box) using μ_box = −ln⟨w_box⟩/β, where ⟨w_box⟩ is the average of w_box accumulated during the run.


If we achieve an exactly flat distribution, $P^{MulCan}(U) = C$, we have for the free energy:

-\beta F(U) = \ln P(U) + C = \beta U^{bias}(U).

This illustrates that, in order to get a perfectly flat distribution, we need to apply a biasing potential that is the inverse of the free energy. In practice, we will never get a perfectly flat distribution, and the biasing potential becomes equivalent to the umbrella sampling technique, which we discuss in section 8.6.6.
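The iterative flattening described above can be sketched as follows in Python. This is a minimal illustration, not the book's code: the energy histogram of each run updates the bias potential on a grid of energy bins, and run_biased_simulation stands for whatever sampling routine (e.g., a Metropolis Monte Carlo loop using U + U_bias) the reader already has.

    import numpy as np

    def update_bias(bias, histogram, beta):
        """One iteration: U_bias_{j+1}(U) = U_bias_j(U) + (1/beta)[ln H_j(U) - ln H_mean]."""
        visited = histogram > 0
        h_mean = histogram[visited].mean()
        bias = bias.copy()
        bias[visited] += (np.log(histogram[visited]) - np.log(h_mean)) / beta
        return bias

    def multicanonical(run_biased_simulation, energy_bins, beta, n_iterations=20):
        """Iterate until the sampled energy histogram is (roughly) flat.

        run_biased_simulation(bias, energy_bins) is assumed to return the
        histogram of visited energies collected during one simulation run.
        """
        bias = np.zeros(len(energy_bins) - 1)
        for _ in range(n_iterations):
            histogram = run_biased_simulation(bias, energy_bins)
            bias = update_bias(bias, histogram, beta)
        # at convergence F(U) = -U_bias(U) + constant, cf. -beta*F(U) = beta*U_bias(U)
        return bias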

L.6 Nosé-Hoover dynamics

L.6.1 Nosé-Hoover dynamics equations of motion

We now apply the methods of non-Hamiltonian dynamics to analyze the Nosé-Hoover algorithm and the Nosé-Hoover chains that are discussed in section 7.1.2.1 and section L.6.1.2. Our discussion of these algorithms is only intended as an illustration that there exist systematic techniques for predicting the phase-space density generated by a particular non-Hamiltonian dynamics scheme. While we show a few simple examples, we refer the reader to ref. [267] for a more comprehensive discussion.

L.6.1.1 The Nosé-Hoover algorithm

In section 7.1.2.1 we showed that the Nosé-Hoover algorithm generates non-Hamiltonian dynamics. The Nosé-Hoover equations can be written as

\dot{r}_i = p_i/m_i
\dot{p}_i = F_i - \frac{p_\xi}{Q}\,p_i
\dot{\xi} = p_\xi/Q
\dot{p}_\xi = \sum_i p_i^2/m_i - k_B T\,L,

where L is a parameter that has to be determined to generate the canonical distribution.

Implementation

In section 7.1.2.1, we showed how the introduction of an additional dynamical variable (s) in the Lagrangian could be used to perform MD simulations subject to a thermodynamic constraint (in this case, constant temperature). We stress that the importance of such extended-Lagrangian techniques transcends the specific application. In addition, the problems encountered in the numerical implementation of the Nosé scheme are representative of a wider class of algorithms (namely, those where forces depend explicitly on velocities). It is for this reason that we discuss the numerical implementation of the Nosé thermostat in some detail (see also SI L.6.2).

The Nosé equations of motion can be written in terms of virtual variables or real variables. In a simulation based on the virtual variables, the real-time step is fluctuating, which complicates the sampling of averages. For this reason, it is better to use the real-variable formulation. It is now common to use the Nosé scheme in the formulation of Hoover [257,738], who showed that the equations derived by Nosé could be written in a form that resembles Newtonian dynamics. In Eqs. (7.1.27), (7.1.28), and (7.1.29), the variables s', p'_s, and Q occur only as s'p'_s/Q. To simplify these equations, we can introduce the thermodynamic friction coefficient ξ = s'p'_s/Q. The equations of motion then become (dropping the primes on p and using dots to denote time derivatives)

\dot{r}_i = p_i/m_i          (L.6.1)
\dot{p}_i = -\frac{\partial U(r^N)}{\partial r_i} - \xi p_i          (L.6.2)
\dot{\xi} = \left(\sum_i p_i^2/m_i - \frac{L}{\beta}\right)/Q          (L.6.3)
\dot{s}/s = \frac{d\ln s}{dt} = \xi.          (L.6.4)

Note that the last equation is redundant, since Eqs. (L.6.1)–(L.6.3) form a closed set. However, if we solve the equations of motion for s as well, we can use Eq. (7.1.30) as a diagnostic tool, since H must be conserved during the simulation. In terms of the variables used in Eqs. (L.6.1)–(L.6.4), H reads

H_{Nose} = \sum_{i=1}^{N}\frac{p_i^2}{2m_i} + U(r^N) + \frac{\xi^2 Q}{2} + L\frac{\ln s}{\beta}.          (L.6.5)

As we use the real-variable formulation in this set of equations, we have to take L = dN. An important implication of the Nosé equations is that in the Lagrangian (7.1.10), a logarithmic term (ln s) is needed to have the correct scaling of time. Any scheme that does not rescale time will fail to recover the canonical ensemble. Hoover [738] demonstrated that the equations of motion (L.6.1)–(L.6.3) are unique, in the sense that other equations of the same form cannot lead to a canonical distribution. In SI L.6.2, we discuss an efficient way of implementing the Nosé-Hoover scheme.

The equations of motion of the Nosé-Hoover scheme cannot be derived from a Hamiltonian. This implies that one cannot use the standard methods (see Appendix A) to make the time averages obtained with this dynamics map onto ensemble averages of the standard form. In Appendix B, we discuss how one can analyze such non-Hamiltonian dynamics. The result of this analysis is that the conventional Nosé-Hoover algorithm only generates the correct distribution if there is a single constant of motion. Normally, the total energy defined by H_{Nose}, see Eq. (L.6.5), is conserved. But the existence of other conserved quantities creates a problem. This implies that one should not have any other conserved quantity, such as the total momentum. Momentum conservation is not an issue in simulations of systems confined by walls, or subject to another external potential that breaks momentum conservation. However, if we simulate a system without external forces, momentum is conserved. Under those conditions, the Nosé-Hoover scheme can still be correct, provided that the center of mass remains fixed. This condition can be fulfilled if we set the initial velocity of the center of mass to zero. However, if we simulate systems with momentum conservation and a non-zero velocity of the center of mass, or if we have additional conserved quantities, then we must go beyond the simple NH algorithm and use chains of Nosé-Hoover thermostats [267,269] to obtain the correct canonical distribution.
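Setting the initial center-of-mass velocity to zero, the condition noted above for the plain Nosé-Hoover thermostat in a system without external forces, is a one-line correction at initialization. A minimal Python sketch (assuming NumPy arrays of velocities and masses):

    import numpy as np

    def remove_com_velocity(velocities, masses):
        """Subtract the center-of-mass velocity so that the total momentum is zero.

        velocities: array of shape (N, d); masses: array of shape (N,).
        """
        total_momentum = (masses[:, None] * velocities).sum(axis=0)
        v_com = total_momentum / masses.sum()
        return velocities - v_com

    # Example: random initial velocities for 100 particles in 3D
    rng = np.random.default_rng(42)
    v = rng.normal(size=(100, 3))
    m = np.ones(100)
    v = remove_com_velocity(v, m)
    print((m[:, None] * v).sum(axis=0))   # ~ [0, 0, 0]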

L.6.1.2 Nosé-Hoover chains

The equations of motion for a system of N particles coupled with M Nosé-Hoover chains are given (in real variables, hence L = dN) by

\dot{r}_i = \frac{p_i}{m_i}          (L.6.6)
\dot{p}_i = F_i - \frac{p_{\xi_1}}{Q_1}\,p_i          (L.6.7)
\dot{\xi}_k = \frac{p_{\xi_k}}{Q_k}\qquad k = 1,\ldots,M          (L.6.8)
\dot{p}_{\xi_1} = \left[\sum_i \frac{p_i^2}{m_i} - L k_B T\right] - \frac{p_{\xi_2}}{Q_2}\,p_{\xi_1}          (L.6.9)
\dot{p}_{\xi_k} = \left[\frac{p_{\xi_{k-1}}^2}{Q_{k-1}} - k_B T\right] - \frac{p_{\xi_{k+1}}}{Q_{k+1}}\,p_{\xi_k}          (L.6.10)
\dot{p}_{\xi_M} = \left[\frac{p_{\xi_{M-1}}^2}{Q_{M-1}} - k_B T\right].          (L.6.11)

For these equations of motion, the conserved energy is

H_{NHC} = H(r,p) + \sum_{k=1}^{M}\frac{p_{\xi_k}^2}{2Q_k} + L k_B T\,\xi_1 + \sum_{k=2}^{M} k_B T\,\xi_k.          (L.6.12)

We can use this conserved quantity to check the integration scheme. It is important to note that the additional M − 1 equations of motion form a simple one-dimensional chain and therefore are relatively simple to implement. In SI L.6.2, we describe an algorithm for a system with a Nosé-Hoover chain thermostat.

Example 31 (Nosé-Hoover chain for harmonic oscillator). The harmonic oscillator is the obvious model system on which we test the Nosé-Hoover chain thermostat. If we use a chain of two coupling parameters, the equations of motion are

\dot{r} = v
\dot{v} = -r - \xi_1 v
\dot{\xi}_1 = \frac{v^2 - T}{Q_1} - \xi_1\xi_2
\dot{\xi}_2 = \frac{Q_1\xi_1^2 - T}{Q_2}.

A typical trajectory generated with the Nosé-Hoover chains is shown in Fig. L.6. The distributions of the velocity and position of the oscillator are also shown in Fig. L.6. Comparison with the results obtained using the Andersen thermostat (see Case Study 13) shows that the Nosé-Hoover chains do generate a canonical distribution, even for the harmonic oscillator.

FIGURE L.6 Test of the phase space trajectory of a harmonic oscillator, coupled to a NoséHoover chain thermostat. The left-hand side of the figure shows part of a trajectory: the dots correspond to consecutive points separated by 10,000 time steps. The right-hand side shows the distributions of velocity and position. Due to our choice of units, both distributions should be Gaussians of equal width.

The Fortran code to generate this Example can be found in the online-SI, Case Study 14.
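For readers who want to reproduce a trajectory like Fig. L.6 without the Fortran code, the following Python sketch integrates the four equations of Example 31 with a simple fourth-order Runge-Kutta step; the values of Q1, Q2, T, and the time step are illustrative choices, not those used in the book.

    import numpy as np

    def nhc_oscillator_rhs(state, T=1.0, Q1=1.0, Q2=1.0):
        """Right-hand side of the Nose-Hoover-chain harmonic oscillator (Example 31)."""
        r, v, xi1, xi2 = state
        return np.array([
            v,
            -r - xi1 * v,
            (v * v - T) / Q1 - xi1 * xi2,
            (Q1 * xi1 * xi1 - T) / Q2,
        ])

    def rk4_step(state, dt, **params):
        k1 = nhc_oscillator_rhs(state, **params)
        k2 = nhc_oscillator_rhs(state + 0.5 * dt * k1, **params)
        k3 = nhc_oscillator_rhs(state + 0.5 * dt * k2, **params)
        k4 = nhc_oscillator_rhs(state + dt * k3, **params)
        return state + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

    state = np.array([1.0, 0.0, 0.0, 0.0])
    trajectory = []
    for step in range(200000):
        state = rk4_step(state, 0.005)
        trajectory.append(state[:2])
    trajectory = np.array(trajectory)
    # histograms of trajectory[:, 0] (position) and trajectory[:, 1] (velocity)
    # should both approach Gaussians of width sqrt(T), cf. Fig. L.6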

To analyze the dynamics of this system, we have to determine the conservation laws and the non-driven variables. Let us consider first the case in which we assume that we only have conservation of energy, viz. Eq. (7.1.13):

H_{Nose} = \sum_{i=1}^{N}\frac{p_i^2}{2m_i} + U(r^N) + \frac{p_\xi^2}{2Q} + L k_B T\,\xi
         = H(r,p) + \frac{p_\xi^2}{2Q} + L k_B T\,\xi = C_1,

where H(r, p) is the physical Hamiltonian. If we use \Gamma = (r^N, p^N, \xi, p_\xi), the phase-space compressibility of this system can be written as

\kappa(\Gamma) = \nabla_\Gamma\cdot\dot{\Gamma}
= \sum_i\left[\nabla_{r_i}\cdot\dot{r}_i + \nabla_{p_i}\cdot\dot{p}_i\right] + \nabla_\xi\cdot\dot{\xi} + \nabla_{p_\xi}\cdot\dot{p}_\xi
= \sum_i \nabla_{p_i}\cdot\dot{p}_i
= -dN\,p_\xi/Q = -dN\,\dot{\xi}.

Hence, it follows that the metric \sqrt{g} is given by

\sqrt{g} = \exp\left(-\int\kappa\,dt\right) = \exp(dN\xi).

Substitution of this metric in the expression for the partition function gives:

\Omega_T(N,V,C_1) = \int d^N p\,d^N r\,dp_\xi\,d\xi\;\exp(dN\xi)\,\delta\!\left(H(r,p) + \frac{p_\xi^2}{2Q} + L k_B T\,\xi - C_1\right).

In this expression, the integration over ξ and p_ξ can be performed analytically. Because of the δ-function, integration over ξ gives as condition

\xi = \frac{1}{L k_B T}\left(C_1 - H(r,p) - \frac{p_\xi^2}{2Q}\right).

Substitution of this condition in the partition function gives

\Omega_T(N,V,C_1) = \frac{1}{L k_B T}\int d^N p\,d^N r\,dp_\xi\;\exp\!\left[\frac{dN}{L k_B T}\left(C_1 - H(r,p) - \frac{p_\xi^2}{2Q}\right)\right]
= \frac{\exp(dN C_1/L k_B T)}{L k_B T}\int dp_\xi\,\exp\!\left[-dN p_\xi^2/(2QL k_B T)\right]\int d^N p\,d^N r\;\exp\!\left[-\beta\,dN\,H(r,p)/L\right]
\propto Q(N,V,T),

where the last equality only holds provided that we choose L equal to dN. The integration over p_ξ yields a constant prefactor that has no physical importance. In section 7.1.2.1, we derived a similar result for the Nosé equations in terms of its real variables. The present demonstration that the Nosé-Hoover equations lead to a canonical distribution is completely different from Nosé's original argument, yet the end result is the same. Note, however, that we have assumed that there is only a single conservation law, viz. conservation of H_{Nose}. In general, there will be more conserved quantities. For instance, if we consider a system in the absence of external forces, then the total linear momentum is also conserved. This will affect the phase-space distribution. In the Nosé-Hoover dynamics, conservation of total momentum reads

\frac{d(\mathbf{P}e^{\xi})}{dt} = e^{\xi}\left(\dot{\mathbf{P}} + \mathbf{P}\dot{\xi}\right) = e^{\xi}\left(\dot{\mathbf{P}} + \mathbf{P}\frac{p_\xi}{Q}\right)
= e^{\xi}\left(\sum_i\left[F_i - \frac{p_\xi}{Q}p_i\right] + \mathbf{P}\frac{p_\xi}{Q}\right) = 0,

and hence

\mathbf{P}e^{\xi} = \mathbf{K},          (L.6.13)

where \mathbf{P} = \sum_i p_i is the center-of-mass momentum of the system and \mathbf{K} is an arbitrary constant vector. To continue the analysis, we should eliminate the driven variables from our system. The center-of-mass position, R, and momentum, P, have no influence on the other variables. We can eliminate these by considering the positions and momenta relative to the center of mass of the system, r' and p', respectively. Note, however, that the magnitude of the center-of-mass momentum is coupled to the other variables through a conservation law and cannot be eliminated from the analysis. The components of the center-of-mass momentum P are linearly dependent.¹ Therefore, of the d components, only one component can be chosen independently, or we can take as independent variable P = \left(\sum_\alpha P_\alpha^2\right)^{1/2}.

1. To see this, consider the components of Eq. (L.6.13):

\frac{P_x}{K_x} = \frac{P_y}{K_y} = \frac{P_z}{K_z} = e^{-\xi},

which shows that only one of the components is independent.


We now have to perform a transformation of our system to the variables {p', P, r', R}; the resulting equations of motion are

\dot{r}'_i = p'_i/m'_i
\dot{p}'_i = F'_i - \frac{p_\xi}{Q}\,p'_i
\dot{P} = -\frac{p_\xi}{Q}\,P
\dot{\xi} = \frac{p_\xi}{Q}
\dot{p}_\xi = \sum_i^{N-1}\frac{p'^2_i}{m'_i} + \frac{P^2}{M} - k_B T\,L.

The equations of motion have two² conservation laws:

H(p',r',P) + \frac{p_\xi^2}{2Q} + L k_B T\,\xi = C_1
P\exp(\xi) = C_2.

In the first conservation law, we have used

H(r',p',P) = \sum_{i=1}^{N-1}\frac{p'^2_i}{2m'_i} + \frac{P^2}{2M} + U\!\left(r'^N\right) = H(r,p).

To compute the partition function, we have to determine the metric from the compressibility:

\kappa = \sum_i^{N-1}\nabla_{r'_i}\cdot\dot{r}'_i + \sum_i^{N-1}\nabla_{p'_i}\cdot\dot{p}'_i + \nabla_P\cdot\dot{P} + \nabla_\xi\cdot\dot{\xi} + \nabla_{p_\xi}\cdot\dot{p}_\xi
       = -\left[d(N-1)+1\right]\dot{\xi}.

From this the metric follows directly:

\sqrt{g} = \exp\left\{\left[d(N-1)+1\right]\xi\right\}.

The partition function contains two δ-functions that express the two conservation laws:

\Omega_T(N,V,C_1,C_2) = \int d^{N-1}p'\,d^{N-1}r'\,dP\,dp_\xi\,d\xi\;\exp\left\{\left[d(N-1)+1\right]\xi\right\}
\times\delta\!\left(H(r',p',P) + \frac{p_\xi^2}{2Q} + L k_B T\,\xi - C_1\right)\delta\!\left(e^{\xi}P - C_2\right).

2. Because we have replaced the d center-of-mass momentum components by a single variable P, only one conservation law for the momenta is left.

The second δ-function imposes that ξ = ln(C_2/P). Hence, integrating over ξ yields

\Omega_T(N,V,C_1,C_2) = \frac{1}{C_2}\int d^{N-1}p'\,d^{N-1}r'\,dP\,dp_\xi\;\left(\frac{C_2}{P}\right)^{d(N-1)+1}\delta\!\left(H(r',p',P) + \frac{p_\xi^2}{2Q} + L k_B T\ln(C_2/P) - C_1\right).

The remaining δ-function fixes p_ξ:

p_\xi = \left\{2Q\left[C_1 - H(r',p',P) - L k_B T\ln(C_2/P)\right]\right\}^{1/2}.

Integration over p_ξ then results in the following expression for Ω:

\Omega_T(N,V,C_1,C_2) = \frac{\sqrt{2Q}}{C_2}\int d^{N-1}p'\,d^{N-1}r'\int dP\;\left(\frac{C_2}{P}\right)^{d(N-1)+1}\left[C_1 - H(r',p',P) - L k_B T\ln(C_2/P)\right]^{-1/2}.

Note that this is not the partition function for a canonical ensemble. This problem was first pointed out by Cho et al. [739]. Only in the case that C_2 = 0 can the conventional Nosé-Hoover scheme generate a canonical distribution. If C_2 = 0, the partition function reads

\Omega_T(N,V,C_1,0) = \int d^{N-1}p'\,d^{N-1}r'\,dP\,dp_\xi\,d\xi\;\exp\left\{\left[d(N-1)+1\right]\xi\right\}
\times\delta\!\left(H(r',p',P) + \frac{p_\xi^2}{2Q} + L k_B T\,\xi - C_1\right)\delta\!\left(e^{\xi}P - 0\right).

The δ-function imposes that P = 0. Integration over P then yields

\Omega_T(N,V,C_1,0) = \int d^{N-1}p'\,d^{N-1}r'\,dp_\xi\,d\xi\;\exp\left\{\left[d(N-1)+1\right]\xi\right\}\exp(-\xi)\,\delta\!\left(H(r',p') + \frac{p_\xi^2}{2Q} + L k_B T\,\xi - C_1\right).

The other δ-function fixes ξ:

\xi = -\frac{\beta}{L}\left[H(r',p') + \frac{p_\xi^2}{2Q} - C_1\right],

and we finally obtain

\Omega_T(N,V,C_1,0) = \frac{\exp\left[\beta d(N-1)C_1/L\right]}{L k_B T}\int dp_\xi\,\exp\left[-\beta d(N-1)p_\xi^2/(2QL)\right]
\times\int d^{N-1}p'\,d^{N-1}r'\;\exp\left[-\beta\frac{d(N-1)}{L}H(r',p')\right]
\propto\int d^{N-1}p'\,d^{N-1}r'\;\exp\left[-\beta\frac{d(N-1)}{L}H(r',p')\right].

Clearly, if we choose L = d(N−1), then the correct canonical partition function is recovered. In practice, most conventional Nosé-Hoover simulations are performed with a fixed center of mass and therefore obey the condition P = 0.

L.6.1.3 The NPT ensemble

For the N,P,T ensemble the equations of motion are (see also section 7.2)

\dot{r}_i = \frac{p_i}{m_i} + \frac{p_\epsilon}{W}\,r_i
\dot{p}_i = F_i - \left(1+\frac{1}{N}\right)\frac{p_\epsilon}{W}\,p_i - \frac{p_{\xi_1}}{Q_1}\,p_i
\dot{V} = \frac{dV p_\epsilon}{W}
\dot{p}_\epsilon = dV\left(P_{int} - P_{ext}\right) + \frac{1}{N}\sum_{i=1}^{N}\frac{p_i^2}{m_i} - \frac{p_{\xi_1}}{Q_1}\,p_\epsilon
\dot{\xi}_k = \frac{p_{\xi_k}}{Q_k}\qquad\text{for } k = 1,\ldots,M
\dot{p}_{\xi_1} = \sum_{i=1}^{N}\frac{p_i^2}{m_i} + \frac{p_\epsilon^2}{W} - (dN+1)k_B T - \frac{p_{\xi_2}}{Q_2}\,p_{\xi_1}
\dot{p}_{\xi_k} = \frac{p_{\xi_{k-1}}^2}{Q_{k-1}} - k_B T - \frac{p_{\xi_{k+1}}}{Q_{k+1}}\,p_{\xi_k}\qquad\text{for } k = 2,\ldots,M-1
\dot{p}_{\xi_M} = \frac{p_{\xi_{M-1}}^2}{Q_{M-1}} - k_B T.

To analyze the dynamics of this system, we have to consider two cases. The case in which the sum of the forces is zero, \sum_i F_i = 0, implies that we have additional conservation laws. The second case, \sum_i F_i \neq 0, has only one conserved quantity, conservation of "energy":

H' - C_1 = H(p,r) + \frac{p_\epsilon^2}{2W} + \sum_{k=1}^{M}\frac{p_{\xi_k}^2}{2Q_k} + (dN+1)k_B T\,\xi_1 + k_B T\,\xi_c + P_{ext}V - C_1 = 0,

where ξ_c is defined as the center of the thermostat

\xi_c = \sum_{k=2}^{M}\xi_k.

We first consider the case \sum_i F_i \neq 0. To analyze its dynamics we have to compute the compressibility. The independent variables are³ \Gamma = (p^N, r^N, \xi_1, \xi_c, p_{\xi_1}, p_{\xi_c}, V, p_\epsilon):

\kappa = \nabla_\Gamma\cdot\dot{\Gamma} = -(dN+1)\frac{p_{\xi_1}}{Q_1} - \frac{p_{\xi_c}}{Q_c},

which gives a phase-space metric

\sqrt{g} = \exp\left[(dN+1)\xi_1 + \xi_c\right].

We can now write for the partition function

\Omega_{T,P_{ext}}(N,C_1) = \int dV\int d^N p\,d^N r\int dp_{\xi_1}\,d\xi_1\int dp_{\xi_c}\,d\xi_c\int dp_\epsilon\;\exp\left[(dN+1)\xi_1 + \xi_c\right]\delta\!\left(H' - C_1\right).

The delta function gives a condition for ξ_1:

\xi_1 = \frac{1}{(dN+1)k_B T}\left(C_1 - H(p,r) - \frac{p_\epsilon^2}{2W} - \sum_{k=1}^{M}\frac{p_{\xi_k}^2}{2Q_k} - k_B T\,\xi_c - P_{ext}V\right).

Substitution of this expression into the partition function gives

\Omega_{T,P_{ext}}(N,C_1) = \frac{\exp(\beta C_1)}{(dN+1)k_B T}\int dV\int d^N p\,d^N r\int dp_{\xi_1}\int dp_{\xi_c}\,d\xi_c\int dp_\epsilon
\times\exp\left[-\beta\left(H(p,r) + \frac{p_\epsilon^2}{2W} + \sum_{k=1}^{M}\frac{p_{\xi_k}^2}{2Q_k} + P_{ext}V\right)\right]
\propto\int dV\,\exp\left(-\beta P_{ext}V\right)\int d^N p\,d^N r\;\exp\left[-\beta H(p,r)\right].

The integration over ξ_c gives a constant, which can be infinite but has no physical importance. This demonstrates that the desired distribution is generated. At this point, we would like to emphasize that the original Nosé-Hoover algorithm does not generate this distribution. The reason is that the metric for this algorithm generates an additional 1/V term in the partition function. With the algorithm of Martyna et al., this term is removed. This point is explained in detail in ref. [267].

For the case \sum_i F_i = 0, we have additional conservation laws for the total momentum P:

\mathbf{P}\exp\left[(1+1/N)\epsilon + \xi_1\right] = \mathbf{K}.

Similar to the N,V,T ensemble, the components of P are linearly dependent, and the center-of-mass coordinates have to be eliminated from the analysis. This results in a set of equations of motion in coordinates relative to the center of mass. The details of this proof can be found in ref. [267]. Similar to the N,V,T ensemble, if we use K = 0, we generate an (N−1)PT ensemble.

3. A Nosé-Hoover chain of length M has two independent variables; we use ξ_1 and ξ_c.

L.6.2 Nosé-Hoover algorithms

As discussed in section L.6.1.1, it is advantageous to implement the Nosé thermostat using the formulation of Hoover, Eqs. (L.6.1)–(L.6.4). Since the velocity also appears on the right-hand side of Eq. (L.6.2), this scheme cannot be implemented directly into the velocity Verlet algorithm (see also section 4.3). To see this, consider a standard constant-N,V,E simulation, for which the velocity Verlet algorithm is of the form

r(t+\Delta t) = r(t) + v(t)\Delta t + f(t)\Delta t^2/(2m)
v(t+\Delta t) = v(t) + \frac{f(t+\Delta t) + f(t)}{2m}\Delta t.

When we use this scheme for the Nosé-Hoover equations of motion, we obtain for the positions and velocities

r_i(t+\Delta t) = r_i(t) + v_i(t)\Delta t + \left[f_i(t)/m_i - \xi(t)v_i(t)\right]\Delta t^2/2          (L.6.14)
v_i(t+\Delta t) = v_i(t) + \left[f_i(t+\Delta t)/m_i - \xi(t+\Delta t)v_i(t+\Delta t) + f_i(t)/m_i - \xi(t)v_i(t)\right]\Delta t/2.          (L.6.15)

The first step of the velocity Verlet algorithm can be carried out without difficulty. In the second step, we first update the velocity, using the old "forces", to the intermediate value v(t+Δt/2) ≡ v'. We then must use the new "forces" to update v':

v_i(t+\Delta t) = v'_i + \left[f_i(t+\Delta t)/m_i - \xi(t+\Delta t)v_i(t+\Delta t)\right]\Delta t/2.          (L.6.16)

In these equations, v_i(t+Δt) appears on both the right- and left-hand sides; therefore, these equations cannot be integrated exactly.⁴ For this reason the Nosé-Hoover method is usually implemented using a predictor-corrector scheme or solved iteratively [623]. This has the disadvantage that the solution is no longer time reversible. Martyna et al. [126] have developed a set of explicit reversible integrators using the Liouville approach (see section 4.3.4) for this type of extended system.

4. For the harmonic oscillator, it is possible to find an analytic solution (see Case Study 13).
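To illustrate the iterative route mentioned above, the following Python sketch solves the implicit velocity update by fixed-point iteration; it is a sketch under simple assumptions (a single Nosé-Hoover thermostat, the friction half-step written explicitly), not the book's reference implementation, and the array shapes are placeholders for the reader's own code.

    import numpy as np

    def nose_hoover_velocity_update(v_half, f_new, xi_half, m, dt, Q, L, kT,
                                    n_iter=50, tol=1e-12):
        """Iteratively solve the implicit velocity update of Eqs. (L.6.15)-(L.6.16).

        v_half: velocities after the first (explicit) half step, shape (N, d);
        f_new: forces at t+dt; xi_half: friction coefficient after its half step.
        Returns v(t+dt) and xi(t+dt).
        """
        v_new = v_half.copy()
        xi_new = xi_half
        for _ in range(n_iter):
            # friction at t+dt from its own (velocity-dependent) half-step update
            xi_new = xi_half + (np.sum(m[:, None] * v_new**2) - L * kT) * dt / (2.0 * Q)
            # implicit velocity equation solved for v_new at fixed xi_new
            v_next = (v_half + 0.5 * dt * f_new / m[:, None]) / (1.0 + 0.5 * dt * xi_new)
            if np.max(np.abs(v_next - v_new)) < tol:
                v_new = v_next
                break
            v_new = v_next
        return v_new, xi_new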

L.6.2.1 Canonical ensemble

For M chains, the Nosé-Hoover equations of motion are given by (see also section L.6.1.2)

\dot{r}_i = p_i/m_i
\dot{p}_i = F_i - \frac{p_{\xi_1}}{Q_1}\,p_i
\dot{\xi}_k = \frac{p_{\xi_k}}{Q_k}\qquad k = 1,\ldots,M
\dot{p}_{\xi_1} = \left[\sum_i p_i^2/m_i - L k_B T\right] - \frac{p_{\xi_2}}{Q_2}\,p_{\xi_1}
\dot{p}_{\xi_k} = \left[\frac{p_{\xi_{k-1}}^2}{Q_{k-1}} - k_B T\right] - \frac{p_{\xi_{k+1}}}{Q_{k+1}}\,p_{\xi_k}
\dot{p}_{\xi_M} = \left[\frac{p_{\xi_{M-1}}^2}{Q_{M-1}} - k_B T\right].

The Liouville operator, iL, for these equations of motion is defined as (see section 4.3.4)

iL \equiv \dot{\eta}\,\frac{\partial}{\partial\eta},

with η = (r^N, p^N, ξ^M, p_ξ^M). Using the equations of motion, p_i = m_i v_i, and p_{ξ_k} = Q_k v_{ξ_k}, we obtain as Liouville operator for the Nosé-Hoover chains

iL_{NHC} = \sum_{i=1}^{N} v_i\cdot\nabla_{r_i} + \sum_{i=1}^{N}\frac{F_i(r_i)}{m_i}\cdot\nabla_{v_i} - \sum_{i=1}^{N} v_{\xi_1}v_i\cdot\nabla_{v_i} + \sum_{k=1}^{M} v_{\xi_k}\frac{\partial}{\partial\xi_k}
+ \sum_{k=1}^{M-1}\left(G_k - v_{\xi_k}v_{\xi_{k+1}}\right)\frac{\partial}{\partial v_{\xi_k}} + G_M\frac{\partial}{\partial v_{\xi_M}},

with

G_1 = \frac{1}{Q_1}\left(\sum_{i=1}^{N} m_i v_i^2 - L k_B T\right)
G_k = \frac{1}{Q_k}\left(Q_{k-1}v_{\xi_{k-1}}^2 - k_B T\right).

As explained in section 4.3.4, the Liouville equation combined with the Trotter formula is a powerful technique for deriving a time-reversible algorithm for solving the equations of motion numerically. Here we will use this technique to derive such a scheme for the Nosé-Hoover thermostats. We use a simplified version; a more complete description can be found in ref. [126]. We have to make an intelligent separation of the Liouville operator. The first step is to separate the parts of the Liouville operator that only involve the positions (iL_r) and the velocities (iL_v) from the parts that involve the Nosé-Hoover thermostats (iL_C):

iL_{NHC} = iL_r + iL_v + iL_C,

with

iL_r = \sum_{i=1}^{N} v_i\cdot\nabla_{r_i}
iL_v = \sum_{i=1}^{N}\frac{F_i(r_i)}{m_i}\cdot\nabla_{v_i}
iL_C = \sum_{k=1}^{M} v_{\xi_k}\frac{\partial}{\partial\xi_k} - \sum_{i=1}^{N} v_{\xi_1}v_i\cdot\nabla_{v_i} + \sum_{k=1}^{M-1}\left(G_k - v_{\xi_k}v_{\xi_{k+1}}\right)\frac{\partial}{\partial v_{\xi_k}} + G_M\frac{\partial}{\partial v_{\xi_M}}.

There are several ways to factorize iL_{NHC} using the Trotter formula; we follow the one used by Martyna et al. [126]:

e^{iL\Delta t} = e^{iL_C\Delta t/2}\,e^{iL_v\Delta t/2}\,e^{iL_r\Delta t}\,e^{iL_v\Delta t/2}\,e^{iL_C\Delta t/2} + \mathcal{O}\!\left(\Delta t^3\right).          (L.6.17)

The Nosé-Hoover chain part iL_C has to be further factorized. Here, we will do this for a chain of length M = 2; the more general case is discussed in ref. [126].


The Nosé-Hoover part of the Liouville operator for this chain length can be separated into five terms:

iL_C = iL_\xi + iL_{Cv} + iL_{G_1} + iL_{v_{\xi_1}} + iL_{G_2},

where the terms are defined as

iL_\xi \equiv \sum_{k=1}^{2} v_{\xi_k}\frac{\partial}{\partial\xi_k}
iL_{Cv} \equiv -\sum_{i=1}^{N} v_{\xi_1}v_i\cdot\nabla_{v_i}
iL_{G_1} \equiv G_1\frac{\partial}{\partial v_{\xi_1}}
iL_{v_{\xi_1}} \equiv -v_{\xi_1}v_{\xi_2}\frac{\partial}{\partial v_{\xi_1}}
iL_{G_2} \equiv G_2\frac{\partial}{\partial v_{\xi_2}}.

The factorization for the Trotter equation that we use is⁵

e^{iL_C\Delta t/2} = e^{iL_{G_2}\Delta t/4}\,e^{\left(iL_{v_{\xi_1}}\Delta t/4 + iL_{G_1}\Delta t/4\right)}\,e^{iL_\xi\Delta t/2}\,e^{iL_{Cv}\Delta t/2}\,e^{\left(iL_{G_1}\Delta t/4 + iL_{v_{\xi_1}}\Delta t/4\right)}\,e^{iL_{G_2}\Delta t/4}
= e^{iL_{G_2}\Delta t/4}\left[e^{iL_{v_{\xi_1}}\Delta t/8}\,e^{iL_{G_1}\Delta t/4}\,e^{iL_{v_{\xi_1}}\Delta t/8}\right]e^{iL_\xi\Delta t/2}\,e^{iL_{Cv}\Delta t/2}
\times\left[e^{iL_{v_{\xi_1}}\Delta t/8}\,e^{iL_{G_1}\Delta t/4}\,e^{iL_{v_{\xi_1}}\Delta t/8}\right]e^{iL_{G_2}\Delta t/4}.          (L.6.18)

Our numerical algorithm is now fully defined by Eqs. (L.6.17) and (L.6.18). This seemingly complicated set of equations is actually relatively easy to implement in a simulation. To see how the implementation works, we need to know how each operator acts on our coordinates η = (r^N, v^N, ξ_1, v_{ξ_1}, ξ_2, v_{ξ_2}). If we start at t = 0 with initial condition η, the state at time t = Δt follows from e^{iL_{NHC}\Delta t} f[r^N, v^N, ξ_1, v_{ξ_1}, ξ_2, v_{ξ_2}]. Because of the Trotter expansion, we can apply each term in iL_{NHC} sequentially. For example, if we let the first term of the Liouville operator, iL_{G_2}, act on the initial state η,

\exp\!\left(\frac{\Delta t}{4}G_2\frac{\partial}{\partial v_{\xi_2}}\right)f\!\left[r^N, v^N, \xi_1, v_{\xi_1}, \xi_2, v_{\xi_2}\right]
= \sum_{n=0}^{\infty}\frac{(G_2\Delta t/4)^n}{n!}\frac{\partial^n}{\partial v_{\xi_2}^n}f\!\left[r^N, v^N, \xi_1, v_{\xi_1}, \xi_2, v_{\xi_2}\right]
= f\!\left[r^N, v^N, \xi_1, v_{\xi_1}, \xi_2, v_{\xi_2} + G_2\Delta t/4\right].

This shows that the effect of iL_{G_2} is to shift v_{ξ_2} without affecting the other coordinates. This gives as transformation rule for this operator:

e^{iL_{G_2}\Delta t/4}:\quad v_{\xi_2} \rightarrow v_{\xi_2} + G_2\Delta t/4.          (L.6.19)

5. The second factorization, indicated by [· · ·], is used to avoid a hyperbolic sine function, which has a possible singularity. See ref. [126] for details.

The operators iL_{v_{\xi_1}} and iL_{Cv} are of the form exp(a x ∂/∂x); such operators give a scaling of the x coordinate:

\exp\!\left(a x\frac{\partial}{\partial x}\right)f(x) = \exp\!\left(a\frac{\partial}{\partial\ln x}\right)f\{\exp[\ln(x)]\} = f\{\exp[\ln(x)+a]\} = f[x\exp(a)].

If we apply this result⁶ to iL_{v_{\xi_1}}, we obtain for this operator

\exp\!\left(-\frac{\Delta t}{8}v_{\xi_2}v_{\xi_1}\frac{\partial}{\partial v_{\xi_1}}\right)f\!\left[r^N, v^N, \xi_1, v_{\xi_1}, \xi_2, v_{\xi_2}\right]
= f\!\left[r^N, v^N, \xi_1, \exp\!\left(-\frac{\Delta t}{8}v_{\xi_2}\right)v_{\xi_1}, \xi_2, v_{\xi_2}\right],

giving the transformation rule

e^{iL_{v_{\xi_1}}\Delta t/8}:\quad v_{\xi_1} \rightarrow \exp\!\left(-v_{\xi_2}\Delta t/8\right)v_{\xi_1}.          (L.6.20)

In a similar way, we can derive for the other terms

e^{iL_{G_1}\Delta t/4}:\quad v_{\xi_1} \rightarrow v_{\xi_1} + G_1\Delta t/4          (L.6.21)
e^{iL_\xi\Delta t/2}:\quad \xi_1 \rightarrow \xi_1 + v_{\xi_1}\Delta t/2          (L.6.22)
\quad\quad\quad\quad\quad\;\; \xi_2 \rightarrow \xi_2 + v_{\xi_2}\Delta t/2          (L.6.23)
e^{iL_{Cv}\Delta t/2}:\quad v_i \rightarrow \exp\!\left(-v_{\xi_1}\Delta t/2\right)v_i.          (L.6.24)

6. This can be generalized, giving the identity

\exp\!\left(a\frac{\partial}{\partial g(x)}\right)f(x) = \exp\!\left(a\frac{\partial}{\partial g(x)}\right)f\!\left(g^{-1}[g(x)]\right) = \exp\!\left(a\frac{\partial}{\partial y}\right)f\!\left(g^{-1}(y)\right) = f\!\left(g^{-1}[y+a]\right) = f\!\left(g^{-1}[g(x)+a]\right).

Finally, the transformation rules that are associated with iL_v and iL_r are similar to the velocity Verlet algorithm, i.e.,

e^{iL_v\Delta t/2}:\quad v_i \rightarrow v_i + F_i\Delta t/(2m)          (L.6.25)
e^{iL_r\Delta t}:\quad r_i \rightarrow r_i + v_i\Delta t.          (L.6.26)

With these transformation rules (L.6.19)–(L.6.26) we can write down our numerical algorithm by subsequently applying the transformation rules in the order defined by Eqs. (L.6.17) and (L.6.18). If we start with the initial coordinates η(0) = (r^N, v^N, ξ_1, v_{ξ_1}, ξ_2, v_{ξ_2}), we first have to apply e^{iL_C\Delta t/2} to η. Since this operator is further factorized according to Eq. (L.6.18), the first step in our algorithm is to apply e^{iL_{G_2}\Delta t/4}. According to transformation rule (L.6.19), applying this operator to η gives as new state

v_{\xi_2}(\Delta t/4) = v_{\xi_2} + G_2\Delta t/4.

The output of this rule is the new state on which we apply the next operator in Eq. (L.6.18), iL_{v_{\xi_1}}, with transformation rule (L.6.20):

v_{\xi_1}(\Delta t/8) = \exp\!\left[-v_{\xi_2}(\Delta t/4)\,\Delta t/8\right]v_{\xi_1}.

The next step is to apply iL_{G_1}, followed again by iL_{v_{\xi_1}}, etc. In this way, we continue to apply all operators to the output of the previous step. Applying the Nosé-Hoover part of the Liouville operator changes ξ_k, v_{ξ_k}, and v_i. The other two Liouville operators change v_i and r_i. This makes it convenient to separate the algorithm into two parts, in which the positions and velocities of the particles and the Nosé-Hoover chains are considered separately. An example of a possible implementation is shown in Algorithm 44. The updates of the velocities and positions, i.e., the inner terms in Eq. (L.6.17), are done with the velocity-Verlet algorithm (Algorithm 7).
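As an illustration of how these transformation rules translate into code, here is a minimal Python sketch of one time step for a chain of length M = 2, following the ordering of Eqs. (L.6.17) and (L.6.18); unit masses are assumed for brevity, the force routine and the thermostat parameters (Q1, Q2, L, kT) are placeholders for whatever the surrounding MD code provides, and the chain half-step mirrors the structure of Algorithm 45.

    import numpy as np

    def nhc_half_step(v, vxi, xi, dt, Q1, Q2, L, kT):
        """Apply exp(iL_C dt/2) for an M=2 Nose-Hoover chain, Eq. (L.6.18)."""
        K2 = np.sum(v * v)                       # sum of m*v^2 (unit masses)
        G2 = (Q1 * vxi[0] ** 2 - kT) / Q2
        vxi[1] += G2 * dt / 4                    # (L.6.19)
        vxi[0] *= np.exp(-vxi[1] * dt / 8)       # (L.6.20)
        G1 = (K2 - L * kT) / Q1
        vxi[0] += G1 * dt / 4                    # (L.6.21)
        vxi[0] *= np.exp(-vxi[1] * dt / 8)       # (L.6.20)
        xi[0] += vxi[0] * dt / 2                 # (L.6.22)
        xi[1] += vxi[1] * dt / 2                 # (L.6.23)
        s = np.exp(-vxi[0] * dt / 2)
        v *= s                                   # (L.6.24)
        K2 *= s * s
        vxi[0] *= np.exp(-vxi[1] * dt / 8)
        G1 = (K2 - L * kT) / Q1
        vxi[0] += G1 * dt / 4
        vxi[0] *= np.exp(-vxi[1] * dt / 8)
        G2 = (Q1 * vxi[0] ** 2 - kT) / Q2
        vxi[1] += G2 * dt / 4
        return v, vxi, xi

    def nhc_velocity_verlet_step(r, v, force, dt, vxi, xi, Q1, Q2, L, kT):
        """One full step: chain half-step, velocity Verlet, chain half-step (Eq. (L.6.17))."""
        v, vxi, xi = nhc_half_step(v, vxi, xi, dt, Q1, Q2, L, kT)
        v += 0.5 * dt * force(r)                 # (L.6.25), unit masses
        r += dt * v                              # (L.6.26)
        v += 0.5 * dt * force(r)                 # (L.6.25)
        v, vxi, xi = nhc_half_step(v, vxi, xi, dt, Q1, Q2, L, kT)
        return r, v, vxi, xi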

L.6.2.2 The isothermal-isobaric ensemble

Similar to the canonical ensemble, we can derive a time-reversible integration scheme for simulations in the NPT ensemble. The equations of motion are given by expressions (L.6.27)–(L.6.34):

\dot{r}_i = \frac{p_i}{m_i} + \frac{p_\epsilon}{W}\,r_i          (L.6.27)
\dot{p}_i = F_i - \left(1+\frac{d}{dN}\right)\frac{p_\epsilon}{W}\,p_i - \frac{p_{\xi_1}}{Q_1}\,p_i          (L.6.28)
\dot{V} = \frac{dV p_\epsilon}{W}          (L.6.29)


Algorithm 44 (Equations of motion: Nosé-Hoover)

  function integrate               integrate equations of motion for Nosé-Hoover thermostat
    chain(K)
    vel_verlet(K)
    chain(K)
  end function

Specific Comments (for general comments, see p. 7)
1. This function solves the equations of motion for a single time step Δt using the Trotter expansion (Eqs. (L.6.17) and (L.6.18)).
2. In function chain we apply e^{iL_C Δt/2} to the current state (see Algorithm 45).
3. In function vel_verlet we apply e^{(iL_r + iL_v)Δt} to the current state (see Algorithm 7).
4. K is the total kinetic energy: it is modified to impose a canonical distribution on K.

\dot{p}_\epsilon = dV\left(P_{int} - P_{ext}\right) + \frac{1}{N}\sum_{i=1}^{N}\frac{p_i^2}{m_i} - \frac{p_{\xi_1}}{Q_1}\,p_\epsilon          (L.6.30)
\dot{\xi}_k = \frac{p_{\xi_k}}{Q_k}\qquad\text{for } k = 1,\ldots,M          (L.6.31)
\dot{p}_{\xi_1} = \sum_{i=1}^{N}\frac{p_i^2}{m_i} + \frac{p_\epsilon^2}{W} - (dN+1)k_B T - \frac{p_{\xi_2}}{Q_2}\,p_{\xi_1}          (L.6.32)
\dot{p}_{\xi_k} = \frac{p_{\xi_{k-1}}^2}{Q_{k-1}} - k_B T - \frac{p_{\xi_{k+1}}}{Q_{k+1}}\,p_{\xi_k}\qquad\text{for } k = 2,\ldots,M-1          (L.6.33)
\dot{p}_{\xi_M} = \frac{p_{\xi_{M-1}}^2}{Q_{M-1}} - k_B T.          (L.6.34)

To derive a time-reversible numerical integration scheme to solve the equations of motion, we again use the Liouville approach. A state is characterized by the variables η = (r^N, p^N, ε, p_ε, ξ^M, p_ξ^M). The Liouville operator is defined by

iL_{NPT} \equiv \dot{\eta}\,\frac{\partial}{\partial\eta}.

For a chain of length M = 2, using p_i = m_i v_i (= m_i\dot{r}_i), p_{ξ_k} = Q_k v_{ξ_k}, ε = (ln V)/d, and p_ε = W v_ε, the Liouville operator for these equations of motion can be written as

iL_{NPT} = iL_r + iL_v + iL_{CP},


Algorithm 45 (Equations of motion of NH chain)

  function chain(K)                            apply Eq. (L.6.18) to the current state
    G2=(Q1*vxi1*vxi1-temp)/Q2
    vxi2=vxi2+G2*delt4                         update vξ2 using Eq. (L.6.19)
    vxi1=vxi1*exp(-vxi2*delt8)                 update vξ1 using Eq. (L.6.20)
    G1=(2*K-L*temp)/Q1
    vxi1=vxi1+G1*delt4                         update vξ1 using Eq. (L.6.21)
    vxi1=vxi1*exp(-vxi2*delt8)                 update vξ1 using Eq. (L.6.20)
    xi1=xi1+vxi1*delt2                         update ξ1 using Eq. (L.6.22)
    xi2=xi2+vxi2*delt2                         update ξ2 using Eq. (L.6.23)
    s=exp(-vxi1*delt2)                         scale factor in Eq. (L.6.24)
    for 1 ≤ i ≤ npart do
      v(i)=s*v(i)                              rescale vi using Eq. (L.6.24)
    enddo
    K=K*s*s                                    rescale kinetic energy
    vxi1=vxi1*exp(-vxi2*delt8)                 update vξ1 using Eq. (L.6.20)
    G1=(2*K-L*temp)/Q1
    vxi1=vxi1+G1*delt4                         update vξ1 using Eq. (L.6.21)
    vxi1=vxi1*exp(-vxi2*delt8)                 update vξ1 using Eq. (L.6.20)
    G2=(Q1*vxi1*vxi1-temp)/Q2
    vxi2=vxi2+G2*delt4                         update vξ2 using Eq. (L.6.19)
  end function

Specific Comments (for general comments, see p. 7)
1. In this function temp is the imposed temperature, delt = Δt, delt2 = Δt/2, delt4 = Δt/4, and delt8 = Δt/8.
2. K is the total kinetic energy.

Specific Comments (for general comments, see p. 7) 1. In this function temp is the imposed temperature, delt = t, delt2 = t/2, delt4 = t/4, and delt8 = t/8. 2. K is the total kinetic energy.

in which we define the operators

iLr =

N 

(vi + v ri ) · ∇ ri + v

i=1

iLv =

N  Fi (r) i=1

iLCP = −

N  i=1

mi

∂ ∂

· ∇ vi

vξ1 vi · ∇ vi +

M  k=1

M−1   ∂ ∂ Gk − vξk vξk+1 vξk + ∂ξk ∂vξk k=1

e40 Miscellaneous methods

N

 ∂  ∂ 1  + GM − 1+ v vi · ∇ vi + G − v vξ1 ∂vξM N ∂v i=1

with  N 1  2 2 G1 = mi vi + W v − (Nf + 1)kB T Q i=1  1  Gk = Qk−1 vξ2k−1 − kB T Qk  

N N  1 ∂U (r, V ) 1  2 − dPext V . mi vi + ri · Fi (r)mi − dV G = 1+ W N ∂V i=1

i=1

An appropriate Trotter equation for the equations of motion is [126]

e^{iL_{NPT}\Delta t} = e^{iL_{CP}\Delta t/2}\,e^{iL_v\Delta t/2}\,e^{iL_r\Delta t}\,e^{iL_v\Delta t/2}\,e^{iL_{CP}\Delta t/2} + \mathcal{O}\!\left(\Delta t^3\right).          (L.6.35)

The operator iL_{CP} has to be further factorized:

iL_{CP} = iL_\xi + iL_{Cv} + iL_{G_\epsilon} + iL_{v_\epsilon} + iL_{G_1} + iL_{v_{\xi_1}} + iL_{G_2},

where the terms are defined as

iL_\xi \equiv \sum_{k=1}^{2} v_{\xi_k}\frac{\partial}{\partial\xi_k}
iL_{Cv} \equiv -\sum_{i=1}^{N}\left[v_{\xi_1} + \left(1+\frac{d}{dN}\right)v_\epsilon\right]v_i\cdot\nabla_{v_i}
iL_{G_\epsilon} \equiv G_\epsilon\frac{\partial}{\partial v_\epsilon}
iL_{v_\epsilon} \equiv -v_{\xi_1}v_\epsilon\frac{\partial}{\partial v_\epsilon}
iL_{G_1} \equiv G_1\frac{\partial}{\partial v_{\xi_1}}
iL_{v_{\xi_1}} \equiv -v_{\xi_1}v_{\xi_2}\frac{\partial}{\partial v_{\xi_1}}
iL_{G_2} \equiv G_2\frac{\partial}{\partial v_{\xi_2}}.

The Trotter expansion of the term iL_{CP} is

e^{iL_{CP}\Delta t/2} = e^{\left(iL_{G_2}\Delta t/4 + iL_{v_{\xi_1}}\Delta t/4\right)}\,e^{iL_{G_1}\Delta t/4}\,e^{\left(iL_{G_\epsilon}\Delta t/4 + iL_{v_\epsilon}\Delta t/4\right)}
\times e^{iL_\xi\Delta t/2}\,e^{iL_{Cv}\Delta t/2}\,e^{\left(iL_{v_\epsilon}\Delta t/4 + iL_{G_\epsilon}\Delta t/4\right)}\,e^{\left(iL_{G_1}\Delta t/4 + iL_{v_{\xi_1}}\Delta t/4\right)}\,e^{iL_{G_2}\Delta t/4}
= e^{iL_{G_2}\Delta t/4}\left[e^{iL_{v_{\xi_1}}\Delta t/8}\,e^{iL_{G_1}\Delta t/4}\,e^{iL_{v_{\xi_1}}\Delta t/8}\right]\left[e^{iL_{v_\epsilon}\Delta t/8}\,e^{iL_{G_\epsilon}\Delta t/4}\,e^{iL_{v_\epsilon}\Delta t/8}\right]
\times e^{iL_\xi\Delta t/2}\,e^{iL_{Cv}\Delta t/2}\left[e^{iL_{v_\epsilon}\Delta t/8}\,e^{iL_{G_\epsilon}\Delta t/4}\,e^{iL_{v_\epsilon}\Delta t/8}\right]
\times\left[e^{iL_{v_{\xi_1}}\Delta t/8}\,e^{iL_{G_1}\Delta t/4}\,e^{iL_{v_{\xi_1}}\Delta t/8}\right]e^{iL_{G_2}\Delta t/4}.          (L.6.36)

Similar to the N V T version, the transformation rules of the various operators can be derived and translated into an algorithm. Such an algorithm is presented in ref. [126].

L.7 Ewald summation in a slab geometry

In Chapter 11, we discussed the treatment of long-range interactions in three-dimensional systems. For some applications, one is interested in a system that is finite in one dimension and infinite in the other two dimensions. Examples of such systems are fluids adsorbed in slit-like pores or monolayers of surfactants. Special techniques are required to compute long-range interactions in such inhomogeneous systems. The most straightforward solution would be to use the same approach as for the three-dimensional Ewald summation, but restrict the reciprocal-space sum to vectors in the x, y directions [740,741]. The energy we wish to calculate is

U_{Coul} = \frac{1}{2}\sum_{\mathbf{n}}{}'\sum_{i,j=1}^{N}\frac{q_i q_j}{|\mathbf{r}_{ij}+\mathbf{n}|},

where the summation over n = (L_x n_x, L_y n_y, 0) indicates that periodicity is only imposed in the x, y directions. As in the ordinary Ewald summation, the prime indicates that for cell (0,0,0) the terms i = j should be omitted. We have a two-dimensional periodicity in the x, y directions for which we can use the Fourier representation. The resulting expression for the energy is [742]

U_{Coul} = \frac{1}{2}\sum_{i,j=1}^{N} q_i q_j\left[\sum_{\mathbf{n}}\frac{\mathrm{erfc}(\alpha|\mathbf{r}_{ij}+\mathbf{n}|)}{|\mathbf{r}_{ij}+\mathbf{n}|} + \frac{\pi}{L^2}\sum_{\mathbf{h}>0}\cos(\mathbf{h}\cdot\mathbf{r}_{ij})F(h,z_{ij},\alpha) - g(z_{ij},\alpha)\right] - \frac{\alpha}{\sqrt{\pi}}\sum_{i=1}^{N} q_i^2,          (L.7.1)

where h ≡ (2πm_x/L_x, 2πm_y/L_y, 0) denotes a reciprocal lattice vector, z_ij is the distance between two particles in the z direction, and α is the screening parameter. The function

F(h,z,\alpha) = \frac{\exp(hz)\,\mathrm{erfc}\!\left[\alpha z + h/(2\alpha)\right] + \exp(-hz)\,\mathrm{erfc}\!\left[-\alpha z + h/(2\alpha)\right]}{2h}          (L.7.2)

corrects for the inhomogeneity in the non-periodic direction. If the system is truly two-dimensional, this term takes a simpler form. The function

g(z,\alpha) = z\,\mathrm{erf}(\alpha z) + \exp\!\left[-(z\alpha)^2\right]/(\alpha\sqrt{\pi})          (L.7.3)

is an additional self-term of charge interactions in the central cell that must be subtracted from the reciprocal-space sum. In a neutral system with all particles in the plane z = 0, this term disappears. The last term in Eq. (L.7.1) is the same self-term that appears in the normal Ewald summation (11.2.21). The details of the derivation can be found in refs. [740–744].

From a computational point of view, Eq. (L.7.1) is inconvenient. Unlike the three-dimensional case, the double sum over the particles in the Fourier part of Eq. (L.7.1) can, in general, not be expressed in terms of the square of a single sum. This makes the calculation much more expensive than its three-dimensional counterpart. Several methods have been developed to increase the efficiency of the evaluation of the Ewald sum for slab geometries. Spohr [745] showed that the calculation can be made more efficient by the use of a look-up table combined with an interpolation scheme and the long-distance limit given by Eq. (L.7.6). Hautman and Klein [746] considered the case in which the deviation of the charge distribution from a purely two-dimensional system is small. For such a system one can introduce a Taylor expansion in z, to separate the in-plane contributions x, y in $1/\sqrt{x^2+y^2+z^2}$ from the out-of-plane contributions. Using this approach, Hautman and Klein derived an expression in which the Fourier contribution can again be expressed in terms of sums over single particles. However, unless the ratio $z/\sqrt{x^2+y^2} \ll 1$, the Taylor expansion converges very poorly. Therefore the applicability of this method is limited to systems in which all charges are close to a single plane. An example of such a system would be a self-assembled monolayer in which only the head groups carry a charge [746].

An obvious idea would be to use the three-dimensional Ewald summation by placing a slab of vacuum in between the periodic images (see Fig. L.8). Spohr has shown [745], however, that even with a slab that is four times the distance between the charges one does not obtain the correct limiting behavior (see Illustration 25). The reason is that a periodically repeated slab behaves like a stack of parallel-plate capacitors. If the slab has a net dipole moment, then there will be spurious electric fields between the periodic images of the slab. More importantly, the usual assumption that the system is embedded in a conducting sphere does not correctly account for the depolarizing field that prevails in a system with a (periodic) slab geometry. Yeh and Berkowitz [747] have shown that one can add a correction term to obtain the correct limiting behavior in the limit of an infinitely thin slab. In the limit of an infinitely thin slab in the z direction, the force on a charge q_i due to the depolarizing field is given by [748]

F_z = -\frac{4\pi q_i M_z}{V},          (L.7.4)

and the corresponding electrostatic energy correction is

U_c = \frac{2\pi}{V}M_z^2,          (L.7.5)

where M_z is the net dipole moment of the simulation cell in the z direction,

M_z = \sum_{i=1}^{N} q_i z_i.

If the slab is not infinitely thin compared to the box dimensions, higher-order correction terms have to be added. However, Yeh and Berkowitz [747] have shown that the lowest-order correction is sufficient if the spacing between the periodically repeated slabs is three to five times larger than the thickness of the slab (see also Crozier et al. [749]).

Illustration 25 (Ewald in slab). To illustrate the difficulties that arise when using the Ewald approach for computing long-range forces in a slab geometry, Spohr and co-workers [745,749] considered a simple example of two point charges: a charge +q at (0, 0, z) and a charge −q at (0, 0, 0). The system is finite in the z direction and periodic in the x, y directions with box sizes L_x and L_y (see Fig. L.7). Because of the periodic boundary conditions, the system forms two "sheets" of opposite charge.

FIGURE L.7 A system containing two point charges at positions (0, 0, 0) and (0, 0, z); because of the periodic boundary conditions in the x and y directions, two oppositely charged "sheets" are formed. There are no periodic boundary conditions in the z direction.

In the limit z → ∞, the distance between the periodic images of the charge is small compared to the distance between the sheets. We can therefore assume a uniform charge density q/(Lx Ly ) on each sheet. In this limit, the force acting between the two particles is given by


F_z = \frac{2\pi q^2}{L_x L_y}.          (L.7.6)

It is instructive to compare the various methods to compute the long-range interactions in this geometry. The true forces are given by the two-dimensional Ewald summation, and we can compare the following methods:
• Two-dimensional Ewald summation: the "exact" solution to this problem.
• Bare Coulomb potential: we simply assume that the periodic images do not exist (or give a zero contribution). The resulting forces follow Coulomb's law.
• Truncated and shifted Coulomb potential: in this method, it is assumed that beyond r_c = 9 the potential is zero. To remove the discontinuity at r = r_c, the potential is shifted as well (see section 3.3.2.2).
• Three-dimensional Ewald summation: in this approximation a layer of vacuum is added. The total system (vacuum plus slab) is treated as a normal periodic three-dimensional system (see Fig. L.8), for which the three-dimensional Ewald summation (see Eq. (11.2.24)) is used. To study the effect of the thickness of the slab of vacuum, two systems are considered, one with L_z = 3L_x and a larger one with L_z = 5L_x.

FIGURE L.8 The system of Fig. L.7 artificially made periodic in the z direction by adding a slab of vacuum.

• Three-dimensional Ewald summation with correction term: this method is similar to the previous one; i.e., the normal three-dimensional Ewald summation is used with an additional slab of vacuum, except that now we correct for the spurious dipolar interactions.

In Fig. L.9 we compare the various approximations with the true two-dimensional solution. The bare Coulomb potential and the shifted and truncated Coulomb potential both give a zero force in the limit z → ∞ and therefore do not lead to the correct limiting behavior. Although the three-dimensional Ewald summation gives a better approximation of the correct solution, it still has the incorrect limiting behavior for both a small and a large added slab of vacuum. The corrected three-dimensional Ewald summation, however, does reproduce the correct solution, for both a slab of vacuum of 3L_x and that of 5L_x.

FIGURE L.9 Comparison of various methods for approximating the long-range interaction for two charges of the slab geometry shown in Fig. L.7.
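The dipole correction of Eqs. (L.7.4)–(L.7.5), which produces the correct curve in Fig. L.9, is simple to add to an existing 3D Ewald implementation. Below is a minimal Python sketch, assuming NumPy arrays of charges and z-coordinates and the Gaussian-unit convention used in the text; the 3D Ewald energy itself is assumed to come from the reader's own routine.

    import numpy as np

    def slab_dipole_correction(q, z, volume):
        """Slab-geometry dipole correction, Eqs. (L.7.4)-(L.7.5).

        q: charges, shape (N,); z: z-coordinates, shape (N,); volume: box volume
        (including the vacuum slab). Returns (energy_correction, force_z_corrections).
        """
        m_z = np.sum(q * z)                       # net dipole moment along z
        u_corr = 2.0 * np.pi * m_z**2 / volume    # energy term added to the 3D Ewald sum
        f_corr = -4.0 * np.pi * q * m_z / volume  # force on each charge, Eq. (L.7.4)
        return u_corr, f_corr

    # usage: u_total = u_ewald_3d + u_corr;  f_total[:, 2] += f_corr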

L.8 Special configurational-bias Monte Carlo cases

L.8.1 Generation of branched molecules

The generation of trial configurations for branched alkanes requires some care. Naively, one might think that it is easiest to grow a branched alkane atom by atom. However, at the branchpoint we have to be careful. Suppose we have grown the backbone shown in Fig. L.10 and we now have to add the branches b_A and b_B. The total bond-bending potential has three contributions, given by

u^{bend} = u^{bend}(c_1, c_2, b_A) + u^{bend}(c_1, c_2, b_B) + u^{bend}(b_A, c_2, b_B).

Vlugt [750] pointed out that, because of the term u^{bend}(b_A, c_2, b_B), it is better not to generate the positions of b_A and b_B independently. Suppose that we would try to do this anyway. We would then generate the first trial position, b_A, according to

P(b_A) \propto \exp\left[-\beta u^{bend}(c_1, c_2, b_A)\right];

FIGURE L.10 Growth of a branched alkane.


next, we would generate the second trial position, b_B, using

P(b_B|b_A) \propto \exp\left\{-\beta\left[u^{bend}(c_1, c_2, b_B) + u^{bend}(b_A, c_2, b_B)\right]\right\},

where P(b_B|b_A) denotes the probability of generating b_B for a given position of segment b_A. However, if we would generate both positions at the same time, then the probability is given by

P(b_A, b_B) \propto \exp\left\{-\beta\left[u^{bend}(c_1, c_2, b_A) + u^{bend}(c_1, c_2, b_B) + u^{bend}(b_A, c_2, b_B)\right]\right\}.

The two schemes are only equivalent if

P(b_A, b_B) = P(b_B|b_A)\,P(b_A).

In general, this equality does not hold. To see this, compare the probability of generating configuration b_A in the two schemes. This probability is obtained by integrating over all orientations b_B. If both branches are inserted at the same time, we find that

P(b_A) = \int db_B\,P(b_A, b_B) \propto \exp\left[-\beta u^{bend}(c_1, c_2, b_A)\right]\int db_B\,\exp\left\{-\beta\left[u^{bend}(c_1, c_2, b_B) + u^{bend}(b_A, c_2, b_B)\right]\right\}.

For the sequential scheme, we would have obtained

P(b_A) = \int db_B\,P(b_B|b_A)\,P(b_A) = P(b_A) \propto \exp\left[-\beta u^{bend}(c_1, c_2, b_A)\right],

as, in this scheme, segment b_A is inserted before segment b_B. Therefore the probability P(b_A) cannot depend on b_B. We can now easily see that, if we use a model in which the two branches are equivalent, for example, isobutane, the sequential scheme does not generate equivalent a priori distributions for the two branches. Of course, the generation of trial segments is but one step in the CBMC scheme. Any bias introduced at this stage can be removed by incorporating the ratio of the true and the biased distributions in the acceptance criterion. However, the resulting algorithm may be inefficient. Vlugt et al. [538] have shown that simply ignoring the bias introduced by the "sequential" scheme will result in small, but noticeable, errors in the distribution of the bond angles.


The insertion of two segments at the same time is less efficient than sequential insertion; several strategies have been proposed to increase the efficiency of the simultaneous generation of branches. For molecules in which the bond-bending potential has three contributions (as in the example above), the simplest scheme is to generate two random vectors on a sphere and use the conventional rejection scheme to generate configurations with a probability proportional to their Boltzmann weight [751]. One can also use this approach for more complex potentials that include torsion. If the random generation of trial directions becomes inefficient, it may be replaced by a simple Monte Carlo scheme [538]. For some intramolecular potentials, it may even be necessary to add more than two atoms at the same time to ensure a proper a priori distribution of added segments. In fact, for some molecules that have multiple torsional angles, such as 2,3-dimethylbutane, this approach would imply that all atoms have to be added at the same time. To avoid such many-particle insertions, Martin and Siepmann [752] developed a scheme similar to the multiple-first-bead algorithm (see section 12.5). The idea is to use a random insertion to generate several trial positions and to use a CBMC scheme to select acceptable candidates using the internal energies only. These configurations, which are distributed according to the correct intramolecular Boltzmann weight, are subsequently used in another CBMC scheme that involves the more expensive external energy calculations. To see how this approach works, assume that we have a model with internal interactions given by u^{int}. A single segment is added using the following steps (a code sketch of step 1 follows after this list):

1. First generate a set of n_int random trial positions, for each position compute the internal energy, U^{int}(j), and calculate the Rosenbluth factor W associated with this internal energy:

W^{int}(n) = \sum_{j=1}^{n_{int}}\exp\left[-\beta U^{int}(j)\right].

A possible orientation is then selected using

P^{int}(j) = \frac{\exp\left[-\beta U^{int}(j)\right]}{W^{int}(n)}.

2. Step 1 is repeated to generate k trial positions, which are then fed into the conventional CBMC scheme to compute the Rosenbluth factor using the external potential, W^{ext}(n).
3. A similar method is used for the old configuration, giving W^{int}(o) and W^{ext}(o).
4. A move is accepted using

acc(o \rightarrow n) = \min\left[1, \frac{W^{int}(n)\,W^{ext}(n)}{W^{int}(o)\,W^{ext}(o)}\right].
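As a concrete illustration of step 1 above, the following Python sketch generates n_int random trial positions and selects one with probability P^int(j); the candidate generator and the internal-energy function are user-supplied placeholders, not part of the book's code.

    import math
    import random

    def select_internal_trial(candidate_position, internal_energy, beta, n_int):
        """Generate n_int random trial positions, compute
        W_int = sum_j exp(-beta*U_int(j)), and select one position with
        probability exp(-beta*U_int(j))/W_int."""
        trials = [candidate_position() for _ in range(n_int)]
        weights = [math.exp(-beta * internal_energy(x)) for x in trials]
        w_int = sum(weights)
        r = random.random() * w_int
        acc = 0.0
        for x, w in zip(trials, weights):
            acc += w
            if r <= acc:
                return x, w_int
        return trials[-1], w_int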


Depending on the details of the potential, further refinements are possible. One can, for instance, separate the bond-bending potential and the torsion potential. This would imply three nested CBMC steps giving three different Rosenbluth factors. For more details see [752].

L.8.2 Rebridging Monte Carlo

If we model a realistic polymer or peptide, we have to include bond-bending and torsional potentials. Suppose that we rotate, in the interior of a polymer, a randomly selected torsional angle by an amount Δφ. If we would keep all other torsional angles of the remainder of the chain fixed, a tiny change of this torsional angle would lead to a large displacement of the last atom of the chain. If, on the other hand, one would only displace the neighboring atoms, the intramolecular interactions of the chain would increase significantly, again limiting the maximum rotation. We would like to ensure that the rotation affects only a small part of the interior of the chain and that it results in a redistribution of atoms that does not lead to a large increase in the intramolecular energy. Concerted rotation, rebridging, and end-bridging Monte Carlo are schemes that have been developed by Theodorou and co-workers [99,517,753] to perform such Monte Carlo moves. In Fig. L.11 the rebridging problem is sketched schematically. Suppose we give atoms 1 and 5 a new position by a random rotation of the driver angles φ0 and φ7. Assume that all bond lengths and bond angles have a prescribed value, for example, their equilibrium value or any other specified value. The rebridging problem is to find all possible conformations of the trimer consisting of atoms 2, 3, and 4 that rebridge the new positions of atoms 1 and 5, given the constraints of the prescribed bond lengths and angles. Wu and Deem [754] have shown that for the rebridging problem an analytical solution exists and that the maximum number of solutions is strictly limited to 16. Alternatively, in refs. [517,753] it is shown how to numerically locate all solutions of the rebridging problem.

Suppose that we have all solutions to the rebridging problem, either by the analytical solution of Wu and Deem or by the numerical scheme of Theodorou and co-workers. The next step is to use this in a Monte Carlo scheme. The scheme that we discuss here is only valid for the interior segments of a polymer. For the ends of the chains, a slightly different scheme has to be used [99]:

1. The present conformation of the polymer is denoted by o. We generate the new configuration of the polymer, n, by selecting an atom and a direction (forward or backward) at random. This defines the pair of atoms 1 and 5. These atoms are given new positions 1′ and 5′ by performing a random rotation around the bonds −1, 0 and 6, 7, respectively (see Fig. L.11).
2. Solve the rebridging problem to locate all possible conformations of the trimer that bridge the new positions of atoms 1′ and 5′. The total number of conformations is denoted by N_n, and out of these we select one conformation, say n, at random.⁷ If no such conformation is found, the move is rejected.


FIGURE L.11 Schematic drawing of the rebridging Monte Carlo scheme. Suppose we give the atoms 1 and 5 a new position, for example, by rotating around the −1, 0 and 6, 7 bonds by angles φ0 and φ7, respectively. If we would not change the positions of the trimer, consisting of the gray atoms 2, 3, and 4, the intramolecular energy would increase significantly. The rebridging problem is to find a new conformation of the trimer with the same bond lengths and bond angles as the old conformation that bridges the new positions 1′ and 5′.

3. For the old conformation, we also locate all possible conformations; i.e., we solve the rebridging problem to locate the conformations of the trimer that bridge the old positions of atoms 1 and 5. This number of conformations is denoted by N_o.
4. In the rebridging scheme, we use a dihedral angle φ to generate a new configuration. This implies a temporary change of coordinate system; a Jacobian is associated with this change. In general, this Jacobian is not equal to 1 and has to be taken into account in the acceptance rules [99]. The equations for this Jacobian can be found in refs. [99,753,754]. Here, we assume that these determinants for the old and new conformations have been calculated and are denoted by J(o) and J(n), respectively.
5. The energies of the new and old conformations are calculated, U(n) and U(o), respectively.
6. The new conformation is accepted with a probability

acc(o \rightarrow n) = \min\left[1, \frac{\exp[-\beta U(n)]\,J(n)/N_n}{\exp[-\beta U(o)]\,J(o)/N_o}\right].

In refs. [99,753] the proof is given that this rebridging scheme obeys detailed balance and samples a Boltzmann distribution. The reason it is important to find all solutions to the rebridging problem is to ensure detailed balance. Suppose that the determinants of the Jacobians are one and the energies are zero, then without the terms 1/Nn and 1/No the acceptance probability would be one for all possible new conformations. Suppose that we have a single solution for the new conformation, Nn = 1, and for the old conformation No = 2. Without the correction, we would violate detailed balance since 7 One could use the configurational-bias Monte Carlo scheme as an alternative for the random



Pant and Theodorou [753] have developed an alternative scheme in which one has to find only a single rebridging conformation, namely the first solution to which their numerical scheme converges. To ensure detailed balance, one then has to check that the old conformation is also the first solution to which the numerical scheme converges.

One can also use the rebridging scheme to connect atoms of different chains. The idea of end-bridging Monte Carlo is to alter the connectivity of the chains by bridging atoms of different chains. The simplest form is to rebridge a chain end to an interior segment of another chain with a trimer. Such an end-bridging Monte Carlo move induces a very large jump in configuration space. An important aspect of such an end-bridging move is, however, that it alters the chain lengths of the two chains involved. Therefore, such a move cannot be used if it is important to keep the chain lengths fixed. In most practical applications of polymers, however, one does not have a single chain length but a distribution of chain lengths. Pant and Theodorou [753] have shown that the resulting chain-length distribution resembles a truncated Gaussian distribution. One can view a chain-length distribution as a mixture of a very large number of components, each component characterized by its chain length l. Imposing the chain-length distribution is equivalent to imposing the chemical potentials of the various components. This suggests that we can combine end-bridging moves with the semigrand-ensemble simulation technique (see section 6.5.4) to determine whether a change of the polymer length should be accepted. In principle, one can also use two rebridging moves on the interior segments of two chains; this would allow moves in which the total chain length remains constant. Whether such a scheme works in practice depends on the probability that two segments of different chains, with the appropriate number of segments attached to them, are sufficiently close to each other.

Tests show that the rebridging method is very efficient for polymer melts with chain lengths up to C30. For chains up to C70, rebridging Monte Carlo still samples the local structure efficiently, but it fails to sample chain characteristics at larger length scales, such as the end-to-end vector. End-bridging Monte Carlo effectively relaxes chains up to C500 [517]. Another important application of rebridging Monte Carlo is the simulation of cyclic molecules; this application is illustrated by Wu and Deem in their study of cis/trans isomerization of proline-containing cyclic peptides [754,755].

L.8.3 Gibbs-ensemble simulations In section 6.6, the Gibbs-ensemble technique was introduced as an efficient tool for simulating vapor-liquid phase equilibria. One of the Monte Carlo steps in the Gibbs-ensemble technique is the transfer of molecules between the liquid phase and gas phase. For long-chain molecules, this step, if carried out completely randomly, results in a prohibitively low acceptance of particle exchanges. Therefore, the Gibbs-ensemble technique used to be limited to systems containing


atoms or small molecules. However, by combining the Gibbs-ensemble method with configurational-bias Monte Carlo, the method can be made to work for much longer chain molecules.

Algorithm

Let us consider a continuum system with strong intramolecular interactions. In section 12.2.3 it is shown that for such a system it is convenient to separate the potential energy into two contributions: the bonded intramolecular energy (U^bond) and the "external" energy (U^ext), which contains the intermolecular interactions and the nonbonded intramolecular interactions. As in the original implementation of the Gibbs ensemble, we attempt to exchange a molecule between the two boxes. However, whereas in section 6.6.1 the molecules were inserted at random, we now use the following procedure to grow a molecule atom by atom in a randomly selected box. Let us assume this is box 1, with volume V1; the number of particles in this box is denoted by n1.

1. The first atom is inserted at a random position, and its (external) energy u_1^{ext}(n) is calculated, together with

w_1^{ext}(n) = k \exp[-\beta u_1^{ext}(n)].   (L.8.1)

2. To insert the next atom i, k trial orientations are generated. The set of k trial orientations is denoted by {b}_k = b_1, b_2, ..., b_k. These orientations are not generated at random but with a probability that is a function of the bonded part of the intramolecular energy:

p_i^{bond}(b_n) = C \exp[-\beta u_i^{bond}(b_n)].   (L.8.2)

For each of these trial orientations the external energy u_i^{ext}(b_j) is calculated, together with the factor

w_i^{ext}(n) = \sum_{j=1}^{k} \exp[-\beta u_i^{ext}(b_j)].   (L.8.3)

Out of these k trial positions, we select one with probability

p_i^{ext}(b_n) = \frac{\exp[-\beta u_i^{ext}(b_n)]}{w_i^{ext}(n)}.   (L.8.4)

3. Step 2 is repeated ℓ − 1 times until the entire molecule is grown, and the Rosenbluth factor, W, of the molecule can be calculated:

W^{ext}(n) = \prod_{i=1}^{\ell} w_i^{ext}(n).   (L.8.5)

For the other box, we select a molecule at random and determine its Rosenbluth factor using the following procedure:

1. A particle is selected at random.

2. The (external) energy of its first atom is determined, u_1^{ext}(o), together with

w_1^{ext}(o) = k \exp[-\beta u_1^{ext}(o)].   (L.8.6)

3. For the next atom i, k − 1 trial orientations are generated with a probability given by Eq. (L.8.2). These trial orientations, together with the actual position of atom i (b_o), form the set {b'}_k, for which we determine the factor

w_i^{ext}(o) = \exp[-\beta u_i^{ext}(o)] + \sum_{j=2}^{k} \exp[-\beta u_i^{ext}(b'_j)].   (L.8.7)

4. Step 3 is repeated ℓ − 1 times until we have retraced the entire chain, and its Rosenbluth factor can be calculated:

W^{ext}(o) = \prod_{l=1}^{\ell} w_l^{ext}(o).   (L.8.8)

We then accept this exchange move with probability

acc(o → n) = \min\left[1, \frac{V_1 (N - n_1)}{(V - V_1)(n_1 + 1)} \frac{W^{ext}(n)}{W^{ext}(o)}\right].   (L.8.9)
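The bookkeeping of this exchange step can be summarized in a short sketch. The Python fragment below grows a trial chain with bonded-biased trial orientations and applies the acceptance rule of Eq. (L.8.9); the functions u_ext and trial_orientations are hypothetical placeholders for the model-specific external-energy calculation and the generation of bonded-biased trial directions, so this is a schematic outline rather than a reference implementation.

```python
import math
import random

def grow_chain(first_pos, ell, k, beta, u_ext, trial_orientations):
    """Grow an ell-atom chain with k trial orientations per atom and return
    the grown positions together with the Rosenbluth factor W^ext
    (Eqs. (L.8.1)-(L.8.5))."""
    chain = [first_pos]
    W = k * math.exp(-beta * u_ext(first_pos, []))           # w_1^ext, Eq. (L.8.1)
    for _ in range(ell - 1):
        trials = trial_orientations(chain[-1], k)             # bonded-biased, Eq. (L.8.2)
        weights = [math.exp(-beta * u_ext(b, chain)) for b in trials]
        w_i = sum(weights)                                     # Eq. (L.8.3)
        if w_i == 0.0:
            return None, 0.0                                   # dead end: exchange will be rejected
        chain.append(random.choices(trials, weights=weights)[0])  # select with Eq. (L.8.4)
        W *= w_i
    return chain, W

def accept_exchange(W_new, W_old, V1, V, n1, N):
    """Acceptance test of Eq. (L.8.9) for transferring a chain into box 1."""
    if W_old == 0.0 or W_new == 0.0:
        return False
    ratio = (V1 * (N - n1)) / ((V - V1) * (n1 + 1)) * (W_new / W_old)
    return random.random() < min(1.0, ratio)
```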

The proof of the validity of this algorithm is, again, very similar to those shown earlier in this chapter; we therefore refer the interested reader to [512,756,757]. The combination of the Gibbs-ensemble technique with the configurational-bias Monte Carlo method has been used to determine the vapor-liquid coexistence curve of chains of Lennard-Jones beads [512,756] and alkanes [535,757–759]. In Illustration 26, an application of this method is described.

Illustration 26 (Critical properties of alkanes). Alkanes are thermally unstable above approximately 650 K, which makes an experimental determination of the critical point of alkanes longer than decane (C10) extremely difficult. The longer alkanes, however, are present in mixtures of practical importance for the petrochemical industry. In these mixtures, the number of components can be so large that it is not practical to determine all phase diagrams experimentally. One therefore has to rely on predictions made by equations of state. The parameters of these equations of state are directly related to the critical properties of the pure components. Therefore, the critical properties of the long-chain alkanes are used in the design of petrochemical processes, even though these alkanes are unstable close to the critical point [760]. Unfortunately, experimental data are scarce and contradictory, and one has to rely on semi-empirical methods to estimate the critical properties [760]. Siepmann et al. [535,757] have used the combination of the Gibbs-ensemble technique and Configurational-Bias Monte Carlo (CBMC) to simulate vapor-liquid equilibria of the n-alkanes under conditions where experiments are not (yet) feasible. Phase diagrams are very sensitive to the choice of interaction potentials. Most available models for alkanes have been obtained by fitting simulation data to experimental properties of the liquid under standard conditions. In Fig. L.12, the vapor-liquid curve of octane, as predicted by some of these models, is compared with experimental data. This figure shows that the models of [761,762], which give nearly identical liquid properties, yield estimates of the critical temperature of octane that differ by 100 K. Siepmann et al. [535,757] used these vapor-liquid equilibrium data to improve the existing models.

FIGURE L.12 The vapor-liquid curve of octane: comparison of Gibbs ensemble simulations using the so-called OPLS model of Jorgensen and co-workers [761], the model of Toxvaerd [762], and the model of Siepmann et al. [535,757].

In Fig. L.13, the critical temperatures and densities as predicted by the model of Siepmann et al. are plotted versus the carbon number. The simulations reproduce the experimental critical points very well. There is considerable disagreement, however, between the various experimental estimates of the critical densities. Much of our current knowledge of the critical properties of the higher alkanes is based on extrapolations of fits of the experimental data up to C8 . The most commonly used extrapolations assume that the critical density is a monotonically increasing function of the carbon number, approaching a limiting value for the very long alkanes [760,763]. In contrast to these expectations, the experimental data of Anselme et al. [764] indicate that the critical density has a maximum for C8 and then decreases monotonically. The data of Steele (as reported in [763]), however, do not give any evidence for such a maximum (see Fig. L.13). The simulations indicate the same trend as that observed by Anselme et al. In this context, it is interesting to note that Mooij et al. [512], Sheng et al. [765], and Escobedo and de Pablo


[766] used Monte Carlo simulations to study the vapor-liquid curve of a polymeric bead-spring model for various chain lengths. These studies also show a decrease of the critical density as a function of chain length. Such a decrease of the critical density with chain length is a general feature of long-chain molecules, as was already pointed out by Flory.

FIGURE L.13 Critical temperature Tc (left) and density ρc (right) as a function of carbon number Nc . The open symbols are the simulation data, and the closed symbols are experimental data.

The Gibbs ensemble technique makes it possible to compute efficiently the liquid-vapor coexistence curve of realistic models for molecular fluids. This makes it possible to optimize the parameters of the model to yield an accurate description of the entire coexistence curve rather than a single state point. It is likely, but not inevitable, that a model that describes the phase behavior correctly will also yield reasonable estimates of other properties, such as viscosity or diffusivity. Mondello and Grest have shown that this is indeed true for the diffusion coefficient of linear hydrocarbons [767,768], while Cochran, Cummings, and co-workers [769,770] found the same for the viscosity. The hydrocarbon model that was used in these studies had been optimized to reproduce experimental vapor-liquid coexistence data [535,757]. Improved force fields have since been proposed for linear alkanes [514,771,772], branched alkanes [752], alkenes [773,774], alkylbenzenes [773], and alcohols [775,776].

L.9 Recoil growth: justification of the method

The best way to arrive at the acceptance rule for the recoil growth scheme is to pretend that we actually carry out the naive brute-force calculation where we first generate the tree of all k^{ℓ-1} trial conformations. We denote this tree by Tn and the a priori probability for generating this tree by PT(Tn). Next we test which links are "open" or "closed." The decision whether a monomer direction is "open" or "closed" is made on the basis of the probabilities given by Eq. (12.7.1), and we denote the probability that we have a particular set On


of "open" monomers (and all others "closed") by PO(On|Tn). Let us denote the number of "open" monomers in this set by N(On) and the number of "closed" monomers by N(Cn). It is easy to see that the probability of generating this particular set is given by

P_O(O_n|T_n) = \prod_{j=1}^{N(O_n)} p_j^{open}(b) \prod_{k=1}^{N(C_n)} \left[1 - p_k^{open}(b)\right].

Finally we try to select one completely open conformation by randomly selecting, at every step, one of the "available" trial directions, i.e., a direction that is connected to (at least) one feeler that does not "die" within l_max steps. At every step, there are m_i(n) such directions. Hence the probability of selecting a given direction is simply 1/m_i(n), and the total probability that a specific conformation will be selected on the previously generated tree of all possible conformations is

P_S(n|O_n) = \prod_{i=1}^{\ell-1} \frac{1}{m_i(n)}

if all m_i are non-zero, and P_S(n|O_n) = 0 otherwise. The fact that the algorithm leaves out many redundant steps (viz. generating the "doomed" branches or checking if there is more than one open feeler in a given direction) is irrelevant for the acceptance rule. The overall probability that we generate a trial conformation n on the set O_n in a tree T_n is

P_T(T_n) \times P_O(O_n|T_n) \times P_S(n|O_n).   (L.9.1)

In order to compute the acceptance probability of a trial move, we should consider the reverse move where the old conformation is generated. By analogy to the forward case, this probability is given by PT (To ) × PO (Oo |To ) × PS (o|Oo ).

(L.9.2)

We wish our MC scheme to obey detailed balance. However, just as in the CBMC case, it is easier to impose the stronger condition of super-detailed balance. This implies that, in the forward move, we also should consider the probability of generating a complete tree of possible conformations around the "old" conformation and the probability that a subset of all monomers on this tree is "open." We denote the probability of generating this tree by P'_T(T_o), where the prime indicates that this is the probability of generating all branches of the old tree, except the already existing old conformation. Clearly

P_T(T_o) = P'_T(T_o) \times P_{gen}(o),   (L.9.3)


where P_gen(o) denotes the probability of generating the old conformation. As in the CBMC scheme, we can include strong intramolecular interactions in the generation of these trial monomers (see section 12.3). P_gen(o) will then be of the form (see section 12.3)

P_{gen}(o) = \prod_{i=1}^{\ell} p_i^{bond}(b_o).   (L.9.4)

Similarly, we have to consider the probability P'_O(O_o|T_o) that a set O_o on this tree is "open." Again, the prime indicates that we should not include the old conformation itself. Again, it is easy to see that

P_O(O_o|T_o) = \left[\prod_{i=1}^{\ell} p_i^{open}(b_o)\right] P'_O(O_o|T_o).   (L.9.5)

The a priori probability of generating a trial move from o to n is then given by

\alpha(o \to n|T_n, O_n, T_o, O_o) = P_T(T_n) \times P_O(O_n|T_n) \times P_S(n|O_n) \times P'_T(T_o) \times P'_O(O_o|T_o).   (L.9.6)

For the reverse move n → o, we can derive a similar expression:

\alpha(n \to o|T_n, O_n, T_o, O_o) = P_T(T_o) \times P_O(O_o|T_o) \times P_S(o|O_o) \times P'_T(T_n) \times P'_O(O_n|T_n).   (L.9.7)

In these equations, we have used the notation (o → n|Tn , On , To , Oo ) to indicate that we consider a transition from o to n (or vice versa) for a given set of “embedding” conformations. Clearly, there are many different trees and sets of open orientations that include the same conformations n and o. Our super-detailed balance condition now becomes N (o) × α(o → n|Tn , On , To , Oo )acc(o → n|Tn , On , To , Oo ) = N (n) × α(n → o|Tn , On , To , Oo )acc(n → o|Tn , On , To , Oo ).

(L.9.8)

All terms in this equation are known, except the acceptance probabilities. We now derive an expression for the ratio acc(o → n|T_n, O_n, T_o, O_o)/acc(n → o|T_n, O_n, T_o, O_o). To this end, we insert Eqs. (L.9.3) and (L.9.5) (and the corresponding expressions for P'_T(T_n) and P'_O(O_n|T_n)) into the super-detailed-balance condition, Eq. (L.9.8). This leads to a huge simplification, as there is a complete cancellation of all probabilities for generating "open" or "closed" monomers that do not belong to the new (or the old) conformation. What remains is

N(o) \times P_{gen}(n) \left[\prod_{i=1}^{\ell} \frac{p_i^{open}(b_n)}{m_i(n)}\right] acc(o \to n|T_n, O_n, T_o, O_o)
 = N(n) \times P_{gen}(o) \left[\prod_{i=1}^{\ell} \frac{p_i^{open}(b_o)}{m_i(o)}\right] acc(n \to o|T_n, O_n, T_o, O_o).   (L.9.9)

In order to simplify the notation, we shall assume that the trial directions are uniformly distributed, i.e., see Eq. (L.9.4), p^{bond} = constant. From Eq. (L.9.4) it then follows that P_gen(n) and P_gen(o) are identical constants. Our expression for the ratio of the acceptance probabilities then becomes

\frac{acc(o \to n)}{acc(n \to o)} = \frac{N(n) \prod_{i=1}^{\ell} p_i^{open}(o)/m_i(o)}{N(o) \prod_{i=1}^{\ell} p_i^{open}(n)/m_i(n)},   (L.9.10)

where we have dropped the indices T_n, O_n, .... Using the definitions of W(n) and W(o) (Eq. (12.7.2) and below),

\frac{acc(o \to n)}{acc(n \to o)} = \frac{N(n)\, W(n)}{N(o)\, W(o)}.   (L.9.11)

This is precisely the acceptance rule given by Eq. (12.7.3). This concludes our "derivation" of the recoil growth scheme. The obvious question is: how well does it perform? A comparison between CBMC and the RG algorithm was made by Consta et al. [540], who studied the behavior of Lennard-Jones chains in solution. The simulations showed that for relatively short chains (ℓ = 10) at a density of ρ = 0.2, the recoil growth scheme was a factor of 1.5 faster than CBMC. For higher densities (ρ = 0.4) and longer chains (ℓ = 40), the gain could be as large as a factor of 25. This illustrates the fact that the recoil growth scheme remains efficient under conditions where CBMC is likely to fail. For still higher densities or still longer chains, the relative advantage of RG would be even larger. However, the bad news is that, under those conditions, both schemes become very inefficient. While the recoil growth scheme is a powerful alternative to CBMC, the RG strategy is not very useful for computing chemical potentials (see [540]). More efficient schemes for computing the chemical potential are the recursive sampling scheme and the Pruning-Enriched Rosenbluth Method (PERM) (see Chapter 10).

L.10 Overlapping distribution for polymers

Let us first consider how the basic idea behind the overlapping distribution method can be applied to the Rosenbluth insertion scheme. The simplest approach would be to consider the histogram of the potential energy change on addition or removal of a chain molecule (see section 8.6.1). However, for chain molecules, this approach differs from the original Shing-Gubbins approach in that it has little, if any, diagnostic value. For instance, if we consider the chemical potential of hard-core chain molecules, the distributions of ΔU will always


overlap (namely, at ΔU = 0), even in the regime where the method cannot be trusted. Here, we shall describe an overlapping distribution method based on histograms of Rosenbluth weights [455]. This method will prove to be a useful diagnostic tool. Consider again a model with internal potential energy u^int and external potential energy u^ext. In what follows, we shall compare two systems. The first, denoted by 0, contains N chain molecules (N ≥ 0). The second system, denoted by 1, contains N + 1 chain molecules. In addition, both systems may contain a fixed number of other (solvent) molecules. Let us first consider system 1. Around every segment j of a particular chain molecule (say, i), we can generate k − 1 trial directions according to an internal probability distribution given by Eq. (10.2.19). Note that the set does not include the actual orientation of segment j. We denote this set of trial orientations by

\{\gamma^{rest}(j)\} \equiv \prod_{j'=1}^{k-1} \{\gamma\}_{j'},

where the subscript rest indicates that this set excludes the actual segment j. The probability of generating this set of trial directions is given by P_rest(j), given by Eq. (10.2.19). Having thus constructed an umbrella of trial directions around every segment 1 ≤ j ≤ ℓ, we can compute the Rosenbluth weight W_i of molecule i. Clearly, W_i depends on all coordinates of the remaining N molecules (for convenience, we assume that we are dealing with a neat liquid), on the position r_i and conformation Γ_i of molecule i, and on the ℓ sets of k − 1 trial directions:

\{\Gamma^{rest}\} \equiv \prod_{j=1}^{\ell} \{\gamma^{rest}(j)\}.

We now define a quantity x through x ≡ ln W_i(Q^{N+1}, {Γ^{rest}}), where we use Q to denote the translational coordinates r and conformational coordinates Γ of a molecule. Next, consider the expression for the probability density of x, p_1(x):

p_1(x) = \frac{\int dQ^{N+1}\, d\{\Gamma^{rest}\}\, \exp\left[-\beta U(Q^{N+1})\right] \prod_{j=1}^{\ell} P_{rest}(j)\, \delta(x - \ln W_i)}{Z_{N+1}},

where

Z_{N+1} = \int dQ^{N+1}\, d\{\Gamma^{rest}\}\, \exp\left[-\beta U(Q^{N+1})\right] \prod_{j=1}^{\ell} P_{rest}(j)
       = \int dQ^{N+1}\, \exp\left[-\beta U(Q^{N+1})\right].

The second line of this equation follows from the fact that all P_rest(j) are normalized. We shall now try to relate p_1(x) to an average in system 0 (i.e., the system containing only N chain molecules). To this end, we write U(Q^{N+1}) as

U(Q^{N+1}) = U(Q^N) + u^{ex}(Q^N, Q_i) + u^{int}(Q_i).

Second, we use the fact that

\exp\left[-\beta u^{int}(i)\right] = Z_{id} \times \prod_{j=1}^{\ell} P_{int}(j),

where

Z_{id} \equiv \int d\Gamma_i \prod_{j=1}^{\ell} \exp\left[-\beta u^{int}(j)\right].

Our expression for p_1(x) now becomes

p_1(x) = \frac{Z_{id}}{Z_{N+1}} \int dQ^N\, dr_i\, d\{\Gamma^{trial}\}\, \exp\left[-\beta U(Q^N)\right] \prod_{j=1}^{\ell} P_{trial}(j)\, \exp\left[-\beta u^{ex}(j)\right] \delta(x - \ln W_i).

We use the symbol {Γ^{trial}} to denote the set of all trial segments, that is, the "umbrella" of trial directions around all segments of the chain molecule, plus the segments themselves. Next, every term exp[-βu^{ex}(j)] is multiplied and divided by Z_j, defined as

Z_j \equiv \sum_{j'=1}^{k} \exp\left[-\beta u^{ext}(j')\right].

This allows us to write, for p_1(x),

p_1(x) = \frac{V Z_{id}}{Z_{N+1}} \sum_{trials} \int dQ^N\, ds_i\, d\{\Gamma^{trial}\}\, \exp\left[-\beta U(Q^N)\right] P_{sel}(Q_i)\, W_i\, \delta(x - \ln W_i),

where we have transformed from real coordinates r_i to scaled coordinates s_i by factoring out V, the volume of the system. Here, P_sel(Q_i) denotes the probability of selecting the actual conformation of the molecule from the given set of trial segments according to the rule given in Eq. (10.2.20). Finally, we multiply and divide by Z_N and employ the fact that the δ function ensures that W_i = exp(x):

p_1(x) = e^x\, \frac{V Z_{id} Z_N}{Z_{N+1}} \times \frac{\sum_{trials} \int dQ^N\, ds_i\, d\{\Gamma^{trial}\}\, \exp\left[-\beta U(Q^N)\right] P_{sel}(Q_i)\, \delta(x - \ln W_i)}{Z_N}.

Finally, we obtain

p_1(x) = e^x\, \frac{V Z_{id} Z_N}{Z_{N+1}}\, p_0(x),

or

\ln p_1(x) = x + \beta\mu^{ex} + \ln p_0(x).

Hence, by constructing a histogram of ln W both in system 0 (with N chains) and in system 1 (with N + 1 chains), we can derive the excess chemical potential of the chain molecules by studying ln p_1(x) − ln p_0(x). As in the original Bennett/Shing-Gubbins scheme [341–343], the method works only if there is a range of x values where we have good statistics on both p_1(x) and p_0(x). The advantage of this overlapping distribution scheme over the simple Rosenbluth particle-insertion method is that, with the present method, sampling problems for long chains manifest themselves as a breakdown of the overlap of p_0 and p_1. Fig. L.14 shows an example of an application of this overlapping distribution method to hard-sphere polymers.

FIGURE L.14 The functions f(ln W) ≡ ln p_0(ln W) + ½ ln W and g(ln W) ≡ ln p_1(ln W) − ½ ln W for fully flexible chains of hard spheres of length (left) ℓ = 8 and (right) ℓ = 14 in a hard-sphere fluid at density ρσ³ = 0.4. Note that the overlap between the distributions decreases as the chains become longer. The difference g(ln W) − f(ln W) is the overlapping-distribution estimate of βμ^ex. For the sake of comparison, we also show the value of βμ^ex obtained using the Rosenbluth test-particle insertion method (dashed lines).
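As a sketch of how such histograms are combined in practice, the following NumPy fragment takes samples of ln W collected in the two systems (how these samples are generated is model-specific and not shown here) and returns the local estimates βμ^ex(x) = ln p_1(x) − ln p_0(x) − x; a plateau over the overlap region indicates a reliable result, whereas the absence of overlap signals the breakdown discussed above. This is an illustrative post-processing step, not part of the original text.

```python
import numpy as np

def mu_ex_overlap(lnW_system0, lnW_system1, bins=60):
    """Overlapping-distribution estimate of beta*mu_ex from histograms of ln W.

    lnW_system0 : ln W samples for a virtual (test) chain grown in the N-chain system
    lnW_system1 : ln W samples for a chain in the (N+1)-chain system
    Returns the bin centers in the overlap region and the local estimates
    beta*mu_ex(x) = ln p1(x) - ln p0(x) - x.
    """
    lo = min(np.min(lnW_system0), np.min(lnW_system1))
    hi = max(np.max(lnW_system0), np.max(lnW_system1))
    edges = np.linspace(lo, hi, bins + 1)
    x = 0.5 * (edges[:-1] + edges[1:])
    p0, _ = np.histogram(lnW_system0, bins=edges, density=True)
    p1, _ = np.histogram(lnW_system1, bins=edges, density=True)
    overlap = (p0 > 0) & (p1 > 0)            # only where both histograms have counts
    return x[overlap], np.log(p1[overlap]) - np.log(p0[overlap]) - x[overlap]
```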


L.11 Hybrid Monte Carlo

In Molecular Dynamics simulations, all particle coordinates are updated simultaneously. In conventional MC simulations, only a few coordinates are changed in a trial move. As a consequence, collective molecular motions are not well represented by Monte Carlo, and this may adversely affect the rate of equilibration. The advantage of Monte Carlo is that, unlike MD, we can carry out unphysical moves. Moreover, in MC the system is not constrained to move on a hypersurface where some Hamiltonian is conserved. The time step in Molecular Dynamics is limited by the need to conserve energy. Clearly, no such constraint applies to Monte Carlo. For this reason, many authors have attempted to combine the natural dynamics of MD with the large jumps in configuration space possible in MC. The book by Allen and Tildesley [21] describes a number of such techniques (force-bias MC, Langevin Dynamics, smart Monte Carlo) that basically work by including some or all information about the intermolecular forces in the construction of a collective MC trial move. One scheme that uses MD to generate Monte Carlo trial moves is the so-called hybrid Monte Carlo scheme [564]. At first sight, the advantage of mixing MC and MD is not obvious. However, the criteria for what constitutes a good Monte Carlo trial move are more tolerant than the specifications of a good Molecular Dynamics time step. In particular, one can take a time step that is too long for MD. Energy will not be conserved in such a trial move. However, as long as one uses an algorithm that is time-reversible and area-preserving (i.e., that conserves volume in phase space), such collective moves can be used as Monte Carlo trial moves. Fortunately, a systematic way now exists to construct time-reversible, area-preserving MD algorithms, using the multiple-time-step MD scheme of Tuckerman et al. [117]. The usual Metropolis algorithm can then be used to decide on the acceptance or rejection of the move (see, e.g., [565,566]). For every trial move, the particle velocities are chosen at random from a Maxwell distribution. In fact, it is often advantageous to construct a trial move that consists of a sequence of MD steps. The reason is that, due to the randomization of the velocities, the diffusion constant of the system becomes quite low if the velocities are randomized well before the natural decay of the velocity autocorrelation function. Yet one cannot make the time step for a single hybrid MC move too long, because then the acceptance would become very small. As a consequence, the performance of hybrid MC is not dramatically better than that of the corresponding Molecular Dynamics. Moreover, the acceptance probability of hybrid MC moves of constant length decreases with the system size, because the root-mean-square error in the energy increases as N^{1/2}. MD does not suffer from a similar problem: the noise in the total energy increases with N, but the stability of the MD algorithm does not deteriorate. Hence, for very large systems, MD will always win. For more normal system sizes, hybrid MC may be advantageous.
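A minimal sketch of a hybrid Monte Carlo trial move, assuming generic functions potential(x) and grad(x) that return the potential energy and its gradient (these are placeholders, not part of the text), is given below. A short velocity-Verlet trajectory with freshly drawn Maxwell-Boltzmann momenta serves as the collective trial move, and the change of the total energy decides acceptance.

```python
import numpy as np

def hybrid_mc_move(x, potential, grad, beta, mass=1.0, dt=0.05, n_steps=20, rng=np.random):
    """One hybrid Monte Carlo trial move (schematic, reduced units with k_B = 1)."""
    p = rng.normal(scale=np.sqrt(mass / beta), size=x.shape)   # Maxwell momenta
    H_old = potential(x) + 0.5 * np.sum(p**2) / mass

    x_new, p_new = x.copy(), p.copy()
    f = -grad(x_new)
    for _ in range(n_steps):                   # velocity Verlet: time-reversible, area-preserving
        p_new += 0.5 * dt * f
        x_new += dt * p_new / mass
        f = -grad(x_new)
        p_new += 0.5 * dt * f

    H_new = potential(x_new) + 0.5 * np.sum(p_new**2) / mass
    if rng.random() < min(1.0, np.exp(-beta * (H_new - H_old))):
        return x_new, True                     # accept the collective move
    return x, False                            # reject: keep the old configuration
```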


It is also interesting to use hybrid MC on models that have an expensive (many-body) potential energy function that may, to a first approximation, be modeled using a cheap (pair) potential. We could then perform a sequence of MD steps, using the cheap potential. At the end of this collective (MD) trial move, we would accept or reject the resulting configuration by applying the Metropolis criterion with the true potential energy function. Many variations of this scheme exist. In any event, the hybrid Monte Carlo method is a scheme that requires fine tuning [566]. Forrest and Suter [567] have devised a hybrid MC scheme that samples polymer conformations, using fictitious dynamics of the generalized coordinates. This scheme leads to an improved sampling of the polymer conformations compared to normal MD. The interesting feature of the dynamics used in ref. [567] is that it uses a Hamiltonian that has the same potential energy function as the original polymer model. In contrast, the kinetic part of the Hamiltonian is adjusted to speed up conformational changes.

L.12 General cluster moves

In general, it is not possible to design clusters such that trial moves are always accepted. However, it is often convenient to perform clustering to enhance the acceptance of trial moves. For instance, in molecular systems with very strong short-range attractions, trial moves that pull apart two neighboring particles are very likely to be rejected. It is therefore preferable to include trial moves that attempt to displace the tightly bound particles as a single cluster. To do this, we have to specify a rule for generating clusters. Let us assume that we have such a rule that tells us that particles i and j belong to a single cluster with probability p(i, j) and are disconnected with a probability 1 − p(i, j). Here, p(i, j) depends on the state (relative distance, orientation, spin, etc.) of particles i and j. Moreover, we require that p(i, j) be unchanged in a cluster move if both i and j belong to the cluster, and also if neither particle belongs to the cluster. For instance, p(i, j) could depend only on the current distance between i and j. If we denote the potential energy of the old (new) configuration by U_o (U_n), the detailed balance condition requires that

\exp(-\beta U_o) \prod_{kl} \left[1 - p^f(k, l)\right] acc(o \to n) = \exp(-\beta U_n) \prod_{kl} \left[1 - p^r(k, l)\right] acc(n \to o),   (L.12.1)

where k denotes a particle in the cluster and l a particle outside it. The superscripts f and r denote the forward and reverse moves. In writing Eq. (L.12.1), we have assumed that the probability of forming bonds completely within, or completely outside, the cluster is the same for forward and reverse moves. From Eq. (L.12.1), we derive an expression for the ratio of the acceptance probabilities:

\frac{acc(o \to n)}{acc(n \to o)} = \exp\{-\beta[U(n) - U(o)]\} \prod_{kl} \frac{1 - p^r(k, l)}{1 - p^f(k, l)}.   (L.12.2)
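For the simple deterministic choice p(i, j) = 1 for r_ij < r_c and 0 otherwise (discussed below), the product in Eq. (L.12.2) reduces to an overlap check: since no bonds existed between the cluster and its surroundings in the old configuration, the forward factors are all 1, and the move must be rejected as soon as the displaced cluster comes within r_c of any particle outside it. The Python sketch below illustrates this special case; the functions dist and delta_U are hypothetical placeholders for the distance calculation (with periodic boundaries, if any) and the energy change of the proposed move.

```python
import math, random

def try_cluster_move(positions, cluster, max_displ, r_c, beta, delta_U, dist, rng=random):
    """Rigid displacement of one cluster, accepted according to Eq. (L.12.2)
    specialized to the 0/1 distance criterion p(i, j) = 1 for r_ij < r_c."""
    dim = len(positions[0])
    d = [(rng.random() - 0.5) * max_displ for _ in range(dim)]
    new_pos = {i: [positions[i][k] + d[k] for k in range(dim)] for i in cluster}

    # Reverse-move factor: any new pair (k in cluster, l outside) closer than r_c
    # gives (1 - p^r) = 0 in Eq. (L.12.2), so the move is rejected outright.
    outside = [j for j in range(len(positions)) if j not in cluster]
    for k in cluster:
        for l in outside:
            if dist(new_pos[k], positions[l]) < r_c:
                return False

    # Forward factors are all 1 (no cluster-outside bonds existed before the move),
    # so only the Boltzmann factor remains.
    if rng.random() < min(1.0, math.exp(-beta * delta_U(new_pos))):
        for i, r in new_pos.items():
            positions[i] = r
        return True
    return False
```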

Clearly, many choices for p(k, l) are possible. A particularly simple form was chosen by Wu et al. [777], who assumed that p(i, j) = 1 for r_ij less than a critical distance r_c and p(i, j) = 0 beyond that distance (see Illustration 27). Note that the acceptance rule in Eq. (L.12.2) guarantees that two particles that did not belong to the same cluster in the old configuration will not end up at a distance less than r_c.

Illustration 27 (Micelle formation). Surfactants are amphiphilic molecules that consist of two chemically distinct parts. If these two parts were not connected, one part would preferentially dissolve in a different solvent than the other part. The most common case is that one part of the molecule is water soluble and the other oil soluble. But, as the two parts of the molecule are connected, the dissolution of surfactant molecules in a pure solvent (say, water) causes frustration, as the oil-soluble part of the molecule is dragged along into the water phase. Beyond a certain critical concentration, the molecules resolve this frustration by self-assembling into micelles. Micelles are (often spherical) aggregates of surfactant molecules in which the surfactants are organized in such a way that the hydrophilic heads point toward the water phase and the hydrophobic tails toward the interior of the micelle. It is of considerable interest to study the equilibrium properties of such a micellar solution. However, Molecular Dynamics simulations of model micelles have shown that the micelles move on a time scale that is long compared to the time it takes individual surfactant molecules to move [778]. Using conventional MC, rather than MD, does not improve the situation: it is relatively easy to move a single surfactant, but it takes many displacements of single surfactants to achieve an appreciable rearrangement of the micelles in the system. Yet, to sample the equilibrium properties of micellar solutions, the micelles must be able to move over distances that are long compared to their own diameter, they must be able to exchange surfactant molecules, and they must even be able to merge or break up. As a consequence, standard simulations of micellar self-assembly are very slow. Wu et al. [777] have used cluster moves to speed up the simulation of micellar solutions. They use a cluster MC scheme that makes it possible to displace entire micelles with respect to each other. The specific model for a surfactant solution that Wu et al. studied was based on a lattice model proposed by Stillinger [779]. In this model, the description of surfactants is highly simplified: the hydrophilic and hydrophobic groups are considered independent (unbonded) particles. The constraint that the head and tail of a surfactant be physically linked is translated into an electrostatic attraction between these groups. The magnitude of these effective charges depends on density and temperature. A typical configuration is shown in Fig. L.15. In such a system it is natural to cluster the molecules in a micelle and, subsequently, move entire micelles. The cluster criterion used by Wu et al. is

p(i, j) = 1 if r_ij < r_c, and p(i, j) = 0 if r_ij > r_c.

Wu et al. used r_c = 1, which implies that two particles belong to the same cluster if they are on neighboring lattice sites.

FIGURE L.15 Snapshot of a typical configuration of the surfactant model of Wu et al. [777], in which the hydrophilic and hydrophobic parts of surfactant molecules are represented by charges (black head and white tail). Under appropriate conditions, the surfactants self-assemble into micelles.

In the first step of the algorithm, the clusters are constructed using the preceding criterion. Subsequently, a cluster is selected at random and given a random displacement. It is instructive to consider the case in which we would use the ordinary acceptance rule to move the cluster, that is,

acc(o \to n) = \min[1, \exp(-\beta \Delta U)].

Fig. L.16 shows such a cluster move. The first step (top) is the construction of the clusters, followed by a displacement of one of the clusters (middle). If we accepted this move with the probability given by the preceding equation, we would violate microscopic reversibility. Since we have moved the clusters in such a way that they touch each other, in the next step these two clusters would be considered a single cluster. It would then be impossible to separate them and retrieve the initial configuration. If we use the correct acceptance rule, Eq. (L.12.2),

acc(o \to n) = \min\left[1, \exp(-\beta \Delta U) \prod_{kl} \frac{1 - p^{new}(k, l)}{1 - p^{old}(k, l)}\right],


then this move would be rejected because pnew (k, l) = 1. Since these cluster moves do not change the configuration of the particles in a cluster and do not change the total number of particles in a cluster, it is important to combine these cluster moves with single particle moves, or use a cluster criterion p(i, j ) that allows the number of particles in a cluster to change.

FIGURE L.16 Violation of detailed balance in a cluster move: the top figure shows the four clusters in the system; in the middle figure one of the clusters is given a random displacement, which brings it into contact with another cluster; the bottom figure shows that the new configuration would have only three clusters if the move had been accepted.

Orkoulas and Panagiotopoulos have used such cluster moves to simulate the vapor-liquid coexistence curve of the restricted primitive model of an ionic fluid [780,781].

L.13 Boltzmann-sampling with dissipative particle dynamics

The original version of DPD [257] did not generate a Boltzmann distribution. Español and Warren [683] corrected this problem. In the analysis of ref. [683], the time evolution generated by a DPD algorithm is written in the form of a Fokker-Planck equation (see e.g., [67]): ∂t N (r, p; t) = LC N (r, p; t) + LD N (r, p; t),

(L.13.1)


where L_C is the usual Liouville operator of a Hamiltonian system interacting with conservative forces F^C,

L_C \equiv -\left[\sum_i \frac{p_i}{m} \cdot \frac{\partial}{\partial r_i} + \sum_{i,j} f^C_{ij} \cdot \frac{\partial}{\partial p_i}\right],   (L.13.2)

and the operator L_D takes into account the effects of the dissipative and random forces:

L_D \equiv \sum_{i,j} \hat{r}_{ij} \cdot \frac{\partial}{\partial p_i} \left[\gamma\, \omega_D(r_{ij})\, (\hat{r}_{ij} \cdot v_{ij}) + \frac{\sigma^2}{2}\, \omega_R^2(r_{ij})\, \hat{r}_{ij} \cdot \left(\frac{\partial}{\partial p_i} - \frac{\partial}{\partial p_j}\right)\right].   (L.13.3)

The derivation of these equations uses techniques developed for stochastic differential equations. The advantage of casting the DPD equations in a Fokker-Planck form is that the steady-state solution of Eq. (L.13.1) then corresponds to ∂_t N_eq(r, p; t) = 0. To make the connection with statistical mechanics, the steady-state solution should correspond to the canonical distribution:

N_{eq}(r, p; t) = \frac{1}{Q_{NVT}} \exp[-\beta H(r, p)] = \frac{1}{Q_{NVT}} \exp\left[-\beta\left(\sum_i p_i^2/2m_i + V(r)\right)\right],

where V(r) is the potential that gives rise to the conservative forces, H(r, p) is the Hamiltonian, and Q_{NVT} is the partition function of the NVT ensemble. By construction, this Boltzmann distribution satisfies L_C N_eq(r, p; t) = 0. We therefore need to ensure that L_D N_eq(r, p; t) = 0. This is achieved by imposing that

\omega_R^2(r) = \omega_D(r)   and   \sigma^2 = 2 k_B T \gamma,

which are Eqs. (16.1.4) and (16.1.5), i.e., the choice made by Español and Warren [683].
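To illustrate how these conditions enter a DPD force routine, the sketch below uses the common choice of a linear weight function w(r) = 1 − r/r_c with ω_D = w² and ω_R = w (so that ω_R² = ω_D) and sets σ² = 2γk_BT. The soft conservative force a·w(r) and the 1/√Δt scaling of the random force are standard conventions assumed here for the illustration, not prescriptions of this appendix.

```python
import numpy as np

def dpd_pair_force(r_ij, v_ij, r_c, a, gamma, kT, dt, rng=np.random):
    """Conservative + dissipative + random DPD force on particle i from j (sketch)."""
    r = np.linalg.norm(r_ij)
    if r >= r_c or r == 0.0:
        return np.zeros_like(r_ij)
    e = r_ij / r                                 # unit vector from j to i
    w = 1.0 - r / r_c                            # omega_R = w, omega_D = w**2
    sigma = np.sqrt(2.0 * gamma * kT)            # fluctuation-dissipation: sigma^2 = 2*gamma*kT
    theta = rng.normal()                         # zero-mean, unit-variance noise
    f_c = a * w * e                              # soft conservative force
    f_d = -gamma * (w**2) * np.dot(e, v_ij) * e  # dissipative force
    f_r = sigma * w * theta * e / np.sqrt(dt)    # random force
    return f_c + f_d + f_r
```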


L.14 Reference states

L.14.1 Grand-canonical ensemble simulation

In a grand-canonical ensemble simulation, we impose the temperature and the chemical potential. Experimentally, however, it is usually the pressure, rather than the chemical potential, of the reservoir that is fixed. To compare experimental data with the simulation results, it is therefore necessary to determine the pressure that corresponds to a given value of the chemical potential and temperature of our reservoir.

L.14.1.1 Preliminaries

The partition function of a system with N atoms in the N, V, T ensemble is given by

Q(N, V, T) = \frac{V^N}{\Lambda^{3N} N!} \int ds^N \exp[-\beta U(s^N)],   (L.14.1)

where s^N are the scaled coordinates of the N particles. The free energy is related to the partition function via

F = -\frac{1}{\beta} \ln Q(N, V, T),

which gives us, for the chemical potential,

\mu \equiv \frac{\partial F}{\partial N} = -\frac{1}{\beta} \ln[Q(N + 1, V, T)/Q(N, V, T)].   (L.14.2)

For a system consisting of N molecules, each molecule having M atoms, the partition function is

Q(N, M, V, T) = \frac{q(T)^N V^N}{N!} \int \prod_{i=1}^{M} ds_i^N\, \exp[-\beta U(s^N)],   (L.14.3)

where q(T) is the part of the partition function of a molecule that contains the integration over momenta (for an atom, q(T) is simply Λ^{-3}) and the s^N are the Cartesian coordinates of the atoms in the molecule. It should be stressed that, in writing Eq. (L.14.3), we are making the assumption that there are no "hard" constraints on these intramolecular coordinates. In the presence of hard constraints, the integral in Eq. (L.14.3) would contain a Jacobian (see section 14.1).


L.14.1.2 Ideal gas

In the limit of zero density, any system will behave as an ideal gas. In this limit, only the intramolecular interactions contribute to the total potential energy:

U \approx \sum_{i=1}^{N} U^{intra}(i).

For a system consisting of noninteracting atoms, the partition function (L.14.1) reduces to

Q^{IG}(N, V, T) = \frac{V^N}{\Lambda^{3N} N!}.   (L.14.4)

We can write, for the chemical potential of such an ideal gas of atoms,

\mu_{id.gas} = \mu^0_{id.gas} + k_B T \ln \rho,   (L.14.5)

with the chemical potential of the reference state defined by

\mu^0_{id.gas} \equiv k_B T \ln \Lambda^3.   (L.14.6)

In the case of a gas of noninteracting molecules, the partition function (L.14.3) reduces to

Q_{id.gas}(N, M, V, T) = \frac{q(T)^N V^N}{N!} \left\{\prod_{i=1}^{M} \int ds_i \exp[-\beta U^{intra}(s_i)]\right\}^N.   (L.14.7)

Substitution into Eq. (L.14.2) yields, for the chemical potential,

\mu_{id.gas} = \mu^0_{id.gas} + k_B T \ln \rho,   (L.14.8)

where the reference chemical potential is defined as

\beta\mu^0_{id.gas} \equiv -\ln q(T) + \beta\mu^0_{intra} = -\ln q(T) - \ln\left[\prod_{i=1}^{M} \int ds_i \exp[-\beta U^{intra}(s_i)]\right].   (L.14.9)

Note that μ0id. gas depends only on temperature. At any given temperature, it simply acts as a constant shift of the chemical potential that has no effect on the observable thermodynamic properties of the system.

L.14.1.3 Grand-canonical simulations

In a grand-canonical simulation, we use the following acceptance rules (see section 6.5.2). For the addition of a particle, we have

acc(N \to N + 1) = \min\left[1, \frac{V q(T) \exp(-\beta\mu^0_{intra})}{N + 1} \exp\{\beta[\mu^B - U(N + 1) + U(N)]\}\right].

For the removal of a particle, we have

acc(N \to N - 1) = \min\left[1, \frac{N}{q(T) \exp(-\beta\mu^0_{intra})\, V} \exp\{-\beta[\mu^B + U(N - 1) - U(N)]\}\right].

These equations are based on the idea that particles are exchanged with a reservoir containing the same molecules at the same chemical potential, the only difference being that, in the reservoir, the molecules do not interact. In practical cases (e.g., adsorption), this means that we have a dense phase in equilibrium with a dilute vapor. And, whereas the absolute chemical potential of the vapor is of little interest, the absolute pressure is clearly an important quantity. The pressure in the reservoir is related to the chemical potential through βμB ≡ βμ0id.gas + ln(ρ) = βμ0id.gas + ln(βPid.gas ).

(L.14.10)

Substitution of this expression in the acceptance rules yields

acc(N \to N + 1) = \min\left[1, \frac{V \beta P_{id.gas}}{N + 1} \exp\{-\beta[U(N + 1) - U(N)]\}\right]   (L.14.11)

for the addition of a particle, and a similar expression for particle removal. In other words, if the experimental conditions are such that the system of interest is in equilibrium with a reservoir that behaves like an ideal gas, then only the pressure of this effectively ideal gas enters into the acceptance rules for trial moves. All information about the reference state drops out (as expected). If the pressure in the reservoir is too high for the ideal gas law to hold, we have to use an equation of state to relate the chemical potential of the reservoir to its pressure:

\beta\mu^B = \beta\mu^0_{id.gas} + \ln(\beta P \phi),   (L.14.12)

where φ is the fugacity coefficient of the fluid in the reservoir. The fugacity coefficient can be computed directly from the equation of state of the vapor in the reservoir. It is important to note that this fugacity coefficient is a function of the temperature and pressure. In summary, for a non-ideal gas, we should replace Pid.gas in the acceptance rule (L.14.11) by P φ.
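As a small illustration of how the reservoir pressure and fugacity coefficient enter the insertion step, the following sketch evaluates the acceptance probability of Eq. (L.14.11) with P_id.gas replaced by Pφ; the fugacity coefficient would be obtained separately from an equation of state for the reservoir, and the energy change dU from the usual energy routine of the simulation.

```python
import math

def acc_gcmc_insertion(P_res, phi, V, N, dU, beta):
    """Acceptance probability for a GCMC particle insertion, Eq. (L.14.11),
    with the reservoir at pressure P_res and fugacity coefficient phi
    (phi = 1 for an ideal-gas reservoir).  dU = U(N+1) - U(N)."""
    fugacity = P_res * phi                       # P*phi replaces P_id.gas
    return min(1.0, beta * fugacity * V / (N + 1) * math.exp(-beta * dU))
```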


Appendix M

Miscellaneous examples

M.1 Gibbs ensemble for dense liquids

Example 32 (Dense liquids). At high densities, the number of exchange steps can become very large, and the simulation requires a significant amount of CPU time. This problem also occurs in conventional grand-canonical Monte Carlo simulations. Various methods that are used to extend simulations in the grand-canonical ensemble to higher densities can also be used in the Gibbs ensemble. An example of such a technique is the so-called excluded-volume map sampling. This technique, based on the ideas of Deitrick et al. [782] and Mezei [188], has been applied to the Gibbs ensemble by Stapleton and Panagiotopoulos [783]. Before calculating the energy of the particle that has to be inserted, a map is made of the receiving subsystem by dividing this subsystem into small boxes that can contain at most one particle. Each box carries a label that indicates whether it is empty or contains a particle. This map can then be used as a lookup table to check whether there is "space" for the particle to be inserted. If such a space is not available, the trial configuration can be rejected immediately. When using the excluded-volume map, some additional bookkeeping is needed to guarantee detailed balance (see [188] for further details).
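A minimal Python sketch of such an occupancy map is given below; the cell size, the handling of neighboring cells, and the extra bookkeeping needed to preserve detailed balance are deliberately omitted, so this only illustrates the lookup-table idea.

```python
def build_occupancy_map(box_length, cell_size, positions):
    """Excluded-volume map: a coarse grid in which each cell can hold at most
    one particle, used as a lookup table before attempting an insertion."""
    n = int(box_length / cell_size)
    occupied = [[[False] * n for _ in range(n)] for _ in range(n)]
    for x, y, z in positions:
        i, j, k = int(x / cell_size) % n, int(y / cell_size) % n, int(z / cell_size) % n
        occupied[i][j][k] = True
    return occupied, n

def insertion_possible(occupied, n, cell_size, trial_position):
    """Quick pre-check of a trial insertion: reject immediately if the target
    cell already contains a particle."""
    i, j, k = (int(c / cell_size) % n for c in trial_position)
    return not occupied[i][j][k]
```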

M.2 Free energy of a nitrogen crystal

Example 33 (Free energy of a nitrogen crystal). For an orientationally disordered solid, we can use an approach similar to that used for atomic crystals. Let us consider an orientationally disordered molecular solid. We transform this solid into a state of known free energy in two stages [404,407]. First, we couple the molecules in the solid with harmonic springs to their lattice sites. In contrast to the method described earlier, however, we leave the original intramolecular interactions unaffected. Subsequently, we expand this "interacting Einstein crystal" to zero density (see Fig. M.1). Due to the coupling to the lattice, the crystal cannot melt on expansion but keeps its original structure. In the low-density limit, all intermolecular interactions vanish and the system behaves as an ideal Einstein crystal. This scheme for calculating an absolute free energy is referred to as the lattice-coupling-expansion method [404,407].


FIGURE M.1 Schematic drawing of the lattice-coupling-expansion method for calculating the free energy of a molecular solid: the first step is the coupling to an Einstein crystal (denoted by the black dots) and the second step is the expansion to zero density.

During the first stage of the thermodynamic integration, the potential energy function Ũ_I contains both the original intermolecular potential and the harmonic coupling to the lattice:

\tilde{U}_I(r^N, \Omega^N; \lambda) = U(r^N, \Omega^N) + \lambda \sum_{i=1}^{N} \alpha (r_i - r_{0,i})^2,   (M.2.1)

where Ω_i denotes the orientation of particle i, r_i its center-of-mass position, and r_{0,i} the lattice site of particle i. For convenience, we have assumed that all lattice sites are equivalent. We therefore use the same value of the coupling constant α for all sites. In most molecular solids, several nonequivalent molecules may be in a unit cell. In that case, different coupling constants may be chosen for the distinct lattice sites. The change in free energy associated with switching on the harmonic springs is given by Eq. (8.4.8):

F_I = F(\lambda = 1) - F_{mol\,sol} = \int_0^1 d\lambda \left\langle \sum_{i=1}^{N} \alpha (r_i - r_{0,i})^2 \right\rangle_\lambda.   (M.2.2)

It is reasonable to expect that the integrand in Eq. (M.2.2) is a smooth function of λ, as the mean-squared displacement decreases monotonically with increasing λ. During the second stage of the thermodynamic integration, all molecules remain harmonically coupled to their (Einstein) lattice sites, but this reference lattice is expanded uniformly to zero density. In what follows, we assume for convenience that the intermolecular potential is pairwise additive:

U(r^N, \Omega^N) = \sum_{i<j} u_{ij}.

appreciable: for T > 1, the zero-pressure simulations predict liquid densities that are too low. Moreover, as the critical temperature Tc is approached, the surface tension tends to zero and, as a consequence, bubble nucleation becomes more likely. Under those conditions, the metastable liquid at P = 0 is increasingly likely to evaporate during a simulation. In short: do not use P = 0 simulations close to Tc . On a more positive note: not too close to the critical temperature, a reasonable estimate of the liquid density can be obtained by carrying out N P T simulations at P = 0.

N.9 Equation of state of the Lennard-Jones fluid - II

Case Study 9 (Equation of state of the Lennard-Jones fluid - II). In Case Studies 1 and 7, we computed the equation of state of a Lennard-Jones fluid, using NVT and NPT simulations, respectively. A third way to determine the equation of state is to perform a grand-canonical simulation, imposing the temperature T and the chemical potential μ at constant V, and to sample the resulting density and pressure. An example of such a calculation is shown in Fig. N.13. Grand-canonical simulations are not particularly useful for computing the equation of state of a homogeneous fluid, because there will be statistical errors in both the pressure and the density. However, for systems where the pressure itself is ill-defined (e.g., for nano-porous materials), grand-canonical simulations are the method of choice.

N.10 Phase equilibria of the Lennard-Jones fluid

Case Study 10 (Phase equilibria of the Lennard-Jones fluid). Here we use the Gibbs-ensemble technique to determine the vapor-liquid coexistence curve of the Lennard-Jones fluid. In Case Studies 1, 7, and 9, we had already determined parts of the equation of state of this fluid, and in Case Study 8 we had made an estimate of the liquid coexistence density from a zero-pressure simulation.


FIGURE N.13 Equation of state of the Lennard-Jones fluid; isotherm at T = 2.0. The solid curve represents the equation of state of Johnson et al. [83]; the squares are the results from grand-canonical simulations (with volume V = 250.047). The dotted curve is the excess chemical potential as calculated from the equation of state of ref. [83], and the circles are the simulation results. Note that the excess chemical potential is related to the fugacity f through βμ^ex = ln(f/ρ).

FIGURE N.14 Particle number density in the two boxes of the Gibbs ensemble as a function of the number of Monte Carlo cycles for a system of Lennard-Jones particles; the number of particles was N = 256 and temperature T = 1.2.

In Fig. N.14, the density of the fluid in the two boxes is plotted as a function of the number of Monte Carlo cycles (as defined in Algorithm 15). The simulation started with equal densities in both boxes. During the first 1000 Monte Carlo cycles, the system has not yet "decided" which box will evolve to the liquid density and which to the vapor. After 5000 Monte Carlo cycles, the system appears to have reached equilibrium, and the coexistence properties can be determined. In Fig. N.15, the phase diagram of the Lennard-Jones fluid, as obtained from Gibbs-ensemble simulations, is compared with the phase diagram obtained from the equation of state of Johnson et al. [83]. We point out that comparison of the GE phase diagram with the literature data on the critical point (e.g., [212]) is complicated by the fact that different authors use different truncations of the Lennard-Jones potential (see e.g., [80,219]). Such truncations have a large effect on the location of the critical temperature.


FIGURE N.15 Phase diagram of the Lennard-Jones fluid, using tail correction beyond the cut-off of 2.5σ to mimic the full Lennard-Jones potential, as calculated with the Gibbs-ensemble technique (squares) and equation of state of Johnson et al. (solid curves). The solid circle indicates the estimated critical point.

N.11 Use of Andersen thermostat

Case Study 11 (Use of Andersen thermostat). The Andersen thermostat [180] is arguably the simplest MD thermostat that has been proven to yield a canonical distribution. This implies that both the kinetic energy and the potential energy of the system under consideration are Boltzmann-distributed. However, the Andersen algorithm conserves neither momentum nor energy, as it attributes new velocities, drawn from a Maxwell-Boltzmann distribution, to randomly selected particles. We refer to these updates as stochastic "collisions." The effect of the stochastic collisions is that the dynamical properties of a system with an Andersen thermostat may differ substantially, or even catastrophically, from those of a system without a thermostat or with a "global" thermostat [256,257]. In the case of diffusion, it is easy to understand intuitively that the Andersen thermostat will decrease the self-diffusion coefficient D_s: a Green-Kubo relation relates D_s to the integral of the velocity autocorrelation function (VACF). The longer velocities persist, the larger D_s. Conversely, any effect that destroys the persistence of the velocity will decrease D_s. And destroying the persistence of v is precisely what the Andersen thermostat does: the higher ν, the frequency of stochastic collisions, the lower D_s. This effect is illustrated in Fig. N.16. In practical cases, ν is usually chosen such that the timescale τ_E for the decay of energy fluctuations in a simulation of a system with linear dimension L is comparable to that of thermal fluctuations with a wavelength equal to L in an unbounded medium: τ_E ≈ L²(N/V)C_P/λ, where C_P is the heat capacity per molecule (at constant pressure) and λ is the thermal conductivity. For reasonably large systems, such values of τ_E can only be achieved with rather low collision rates per particle, in which case the effect of collisions on the dynamics may be small [180], but at the same time the thermostat becomes rather inefficient. One should not use the Andersen thermostat when computing quantities such as the thermal conductivity or the viscosity. The reason is that these quantities provide a measure for the rate at which energy or momentum diffuse, but such a description only makes sense if energy and momentum are conserved. With an Andersen thermostat, a local perturbation in the energy or momentum does not diffuse away; rather, it is screened exponentially. This effect is not described by the (normal) diffusion equation. Summarizing: do not use the Andersen method when computing transport properties. For static properties, it is fine, and very easy to implement. Fig. N.16 shows that the frequency of stochastic collisions has a strong effect on the time dependence of the mean-squared displacement. The mean-squared displacement becomes independent of ν only in the limit of very low stochastic collision rates. Yet all static properties, such as the pressure or the potential energy, are rigorously independent of the stochastic collision frequency.

FIGURE N.16 Mean-squared displacement as a function of time for various values of the collision frequency ν of the Andersen thermostat. The system studied is a Lennard-Jones fluid (T = 2.0 and N = 108).
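A sketch of the thermostat step itself, in reduced units with k_B = 1 and assuming NumPy arrays for the velocities and masses (this array layout is an illustration, not the book's algorithm listing), is:

```python
import numpy as np

def andersen_thermostat(velocities, masses, T, nu, dt, rng=np.random):
    """Andersen thermostat step: each particle suffers a stochastic 'collision'
    with probability nu*dt per time step, after which its velocity is redrawn
    from the Maxwell-Boltzmann distribution at temperature T."""
    n, d = velocities.shape
    hit = rng.random(n) < nu * dt                       # particles that collide with the bath
    sigma = np.sqrt(T / masses[hit])                    # per-particle Maxwell-Boltzmann width
    velocities[hit] = rng.normal(size=(hit.sum(), d)) * sigma[:, None]
    return velocities
```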

N.12 Use of Nosé-Hoover thermostat

Case Study 12 (Use of Nosé-Hoover thermostat). As in Case Study 11, we start by showing that the Nosé-Hoover method reproduces the behavior of a system at constant NVT. In Fig. N.17 we compare the velocity distribution generated by the Nosé-Hoover thermostat with the correct Maxwell-Boltzmann distribution for the same temperature (Eq. (7.1.1)). The figure illustrates that the velocity distribution is indeed independent of the value chosen for the coupling constant Q. It is instructive to see how the system reacts to a sudden increase in the imposed temperature. Fig. N.18 shows the evolution of the kinetic temperature of the system. After 12,000 time steps, the imposed temperature is suddenly increased from T = 1 to T = 1.5. The figure illustrates the role of the coupling constant Q. A small value of Q corresponds to a low inertia of the heat bath and leads to rapid temperature fluctuations. A large value of Q leads to a slow, ringing response to the temperature jump. Next, we consider the effect of the Nosé-Hoover coupling constant Q on the diffusion coefficient. As can be seen in Fig. N.19, the effect is much smaller than in Andersen's method. However, it would be wrong to conclude that the diffusion coefficient is independent of Q. The Nosé-Hoover method simply provides a way to keep the temperature constant more gently than Andersen's method, where particles suddenly get new, random velocities. For the calculation of transport properties, we prefer simple N, V, E simulations.

FIGURE N.17 Velocity distribution in a Lennard-Jones fluid (T = 1.0, ρ = 0.75, and N = 256). The solid line is the Maxwell-Boltzmann distribution (7.1.1); the symbols were obtained in a simulation using the Nosé-Hoover thermostat [256].

FIGURE N.18 Response of the system to a sudden increase in the imposed temperature. The various lines show the actual temperature of the system (a Lennard-Jones fluid, ρ = 0.75 and N = 256) as a function of the number of time steps for various values of the Nosé-Hoover coupling constant Q.

FIGURE N.19 Effect of the coupling constant Q on the mean-squared displacement for a Lennard-Jones fluid (T = 1.0, ρ = 0.75, and N = 256).

FIGURE N.20 Trajectories of the harmonic oscillator: (from left to right) in the microcanonical ensemble, using the Andersen method, and using the Nosé-Hoover method. The y axis is the velocity and the x axis is the position.

N.13 Harmonic oscillator (I)

Case Study 13 (Harmonic oscillator (I)). As the equations of motion of the harmonic oscillator can be solved analytically, this model system is often used to test algorithms. However, the harmonic oscillator is also a rather atypical dynamical system, as will become clear when we apply the NH algorithm to this simple model system. The potential energy function of the harmonic oscillator is

u(r) = \frac{1}{2} r^2.

The Newtonian equations of motion are

\dot{r} = v,
\dot{v} = -r.

If we solve the equations of motion of the harmonic oscillator for a given set of initial conditions, we can trace the trajectory of the system in phase space. Fig. N.20 shows a typical phase-space trajectory of the harmonic oscillator: a closed loop, which is characteristic of periodic motion. It is straightforward to simulate a harmonic oscillator at constant temperature using the Andersen thermostat (see section 7.1.1.1). A trajectory is shown in Fig. N.20. In this case, the trajectory consists of points that are not connected by lines. This is due to the stochastic collisions with the bath. In this example, we allowed the oscillator to interact with the heat bath at each time step. As a result, the phase-space density is a collection of discrete points. The resulting velocity distribution is Gaussian by construction; for the positions, we also find a Gaussian distribution. We can also perform a constant-temperature Nosé-Hoover simulation using the algorithm described in SI L.6.2. A typical trajectory of the harmonic oscillator generated with the Nosé-Hoover scheme is shown in Fig. N.20. The most striking feature of Fig. N.20 is that, unlike the Andersen scheme, the Nosé-Hoover method does not yield a canonical distribution in phase space. Even for very long simulations, the entire trajectory would lie in the same ribbon shown in Fig. N.20. Moreover, this band of trajectories depends on the initial configuration.
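A compact script that reproduces this kind of sampling (velocity-Verlet integration of the oscillator plus Andersen-style velocity resampling) could look as follows; the collision frequency and the sampling interval are arbitrary illustrative choices, not the parameters used for the figures.

```python
import numpy as np

def harmonic_andersen(n_steps=200000, dt=0.01, T=1.0, nu=1.0, rng=np.random):
    """Harmonic oscillator (m = 1, u(r) = r^2/2) with an Andersen thermostat;
    returns sampled phase-space points (r, v), cf. Fig. N.20 (middle panel)."""
    r, v = 1.0, 0.0
    f = -r
    samples = []
    for step in range(n_steps):
        v += 0.5 * dt * f                       # velocity Verlet
        r += dt * v
        f = -r
        v += 0.5 * dt * f
        if rng.random() < nu * dt:              # stochastic collision with the bath
            v = rng.normal() * np.sqrt(T)       # Maxwell-Boltzmann velocity (k_B = 1)
        if step % 100 == 0:
            samples.append((r, v))
    return np.array(samples)
```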


FIGURE N.21 Test of the phase space trajectory of a harmonic oscillator, coupled to a Nosé-Hoover chain thermostat. The left-hand side of the figure shows part of a trajectory: the dots correspond to consecutive points separated by 10,000 time steps. The right-hand side shows the distributions of velocity and position. Due to our choice of units, both distributions should be Gaussians of equal width.

N.14 Nosé-Hoover chain for harmonic oscillator Case Study 14 (Nosé-Hoover chain for harmonic oscillator). The harmonic oscillator is the obvious model system on which we test the Nosé-Hoover chain thermostat. If we use a chain of two coupling parameters, the equations of motion are

ṙ = v
v̇ = −r − ξ₁v
ξ̇₁ = (v² − T)/Q₁ − ξ₁ξ₂
ξ̇₂ = (Q₁ξ₁² − T)/Q₂.

A typical trajectory generated with the Nosé-Hoover chains is shown in Fig. N.21. The distributions of the velocity and position of the oscillator are also shown in Fig. N.21. Comparison with the results obtained using the Andersen thermostat (see Case Study 13) shows that the Nosé-Hoover chains do generate a canonical distribution, even for the harmonic oscillator.
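The equations of motion given above can be integrated directly. The following minimal Python sketch is our own illustration (a simple explicit integrator is used purely for clarity; production code would use a time-reversible factorization instead):

    import numpy as np

    def nhc_oscillator(n_steps, dt=0.001, T=1.0, Q1=1.0, Q2=1.0):
        """Harmonic oscillator coupled to a two-thermostat Nose-Hoover chain.

        Integrates r' = v, v' = -r - xi1*v, xi1' = (v^2 - T)/Q1 - xi1*xi2,
        xi2' = (Q1*xi1^2 - T)/Q2 with a simple explicit Euler scheme (sketch).
        """
        r, v, xi1, xi2 = 1.0, 0.0, 0.0, 0.0
        traj = np.empty((n_steps, 2))
        for i in range(n_steps):
            rdot = v
            vdot = -r - xi1 * v
            xi1dot = (v * v - T) / Q1 - xi1 * xi2
            xi2dot = (Q1 * xi1 * xi1 - T) / Q2
            r, v = r + dt * rdot, v + dt * vdot
            xi1, xi2 = xi1 + dt * xi1dot, xi2 + dt * xi2dot
            traj[i] = (r, v)
        return traj

Histogramming the stored positions and velocities should reproduce the Gaussian distributions of Fig. N.21.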

N.15 Chemical potential: particle-insertion method Case Study 15 (Chemical potential: particle-insertion method). In this Case Study, we use the Widom test-particle method to determine the excess chemical potential of a Lennard-Jones fluid. The algorithm that we use is a combination of the basic algorithm for performing Monte Carlo simulations at constant N , V , and T (Algorithms 1 and 2) and determining the excess chemical potential (Algorithm 18). We stress that the tail correction for the chemical potential is similar, but not identical, to that for the potential energy. In the Widom test-particle method, we


FIGURE N.22 The excess chemical potential of the Lennard-Jones fluid (T = 2.0) as calculated from the equation of state, grand-canonical Monte Carlo, and the test particle insertion method.

determine the energy difference ΔU = U(s^{N+1}) − U(s^N). The tail correction is

μ_tail = U_tail(s^{N+1}) − U_tail(s^N)
       = (N + 1) u_tail((N + 1)/V) − N u_tail(N/V)
       = [(N + 1)²/V − N²/V] (1/2) ∫_{rc}^{∞} dr 4π r² u(r)
       ≈ (2N/V) (1/2) ∫_{rc}^{∞} dr 4π r² u(r)
       = 2 u_tail(ρ).    (N.15.1)

In Case Study 9, we performed a grand-canonical Monte Carlo simulation to determine the equation of state of the Lennard-Jones fluid. In the grand-canonical ensemble the volume, chemical potential, and temperature are imposed; the density is determined during the simulation. Of course, we can also calculate the chemical potential during the simulation, using the Widom method. Fig. N.22 shows a comparison of the imposed and measured chemical potentials.
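The Widom measurement itself can be sketched in a few lines of Python. This is our own hedged illustration, not the book's Algorithm 18; the helper u_pair and the array layout are assumptions, and the tail correction of Eq. (N.15.1) would be added separately.

    import numpy as np

    def widom_mu_ex(positions, box, beta, u_pair, n_insert=1000, rng=None):
        """Excess chemical potential from Widom test-particle insertions (sketch).

        positions: (N, 3) array in a cubic box of length `box`;
        u_pair(r2): pair potential as a function of the squared distance
        (assumed to include the truncation used in the simulation).
        """
        rng = rng or np.random.default_rng()
        boltz = np.empty(n_insert)
        for k in range(n_insert):
            s = rng.random(3) * box                  # random trial position
            dr = positions - s
            dr -= box * np.rint(dr / box)            # minimum-image convention
            r2 = np.sum(dr * dr, axis=1)
            du = np.sum(u_pair(r2))                  # energy of the test particle
            boltz[k] = np.exp(-beta * du)
        # beta*mu_ex = -ln <exp(-beta*dU)>
        return -np.log(boltz.mean()) / beta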

N.16 Chemical potential: overlapping distributions Case Study 16 (Chemical potential: overlapping distributions). In Case Study 15, we used the Widom test-particle method to determine the chemical potential of the Lennard-Jones fluid. This method fails at high densities, where it becomes extremely unlikely to insert a particle at a position where exp(−βΔU) in Eq. (8.5.5) is non-negligible. Yet it is those unlikely insertions that dominate the average ⟨exp(−βΔU)⟩. Because favorable insertions are so rare, the number of such events is subject to relatively large statistical fluctuations, and hence our estimate for ⟨exp(−βΔU)⟩ is noisy. The overlapping-distribution method does not remove this


FIGURE N.23 Comparison of the overlapping distribution function method and the Widom particle insertion scheme for measuring the chemical potential of the Lennard-Jones fluid at T = 1.2. The solid curve is the particle-insertion result. The dashed curve was obtained using the overlapping distribution method (βμex = f1 − f0). The units for βμex are the same as for f(ΔU). The figure on the left corresponds to a moderately dense liquid (ρ = 0.7). In this case, the distributions overlap and the two methods yield identical results. The right-hand figure corresponds to a dense liquid (ρ = 1.00). In this case, the insertion probability is very low. The distributions f0 and f1 hardly overlap, and the two different estimates of βμex do not coincide.

problem, but it provides a good diagnostic tool for detecting such sampling problems. To implement the overlapping-distribution method, we have to perform two simulations: one simulation using a system of N + 1 particles (system 1) and a second system with N particles and one ideal gas particle (system 0). For each of these systems, we determine the distribution of energy differences, Eqs. (8.6.3) and (8.6.4). For system 1, this energy difference ΔU is defined as the change in the total energy of the system that would result if one particle, chosen at random, were transformed into an ideal gas particle. We now make a histogram of the observed values of ΔU in this system. This calculation can easily be appended to a standard MC move (Algorithm 2), because in a trial move we randomly select a particle and compute its interaction energy before attempting to move it. But that interaction energy is precisely the ΔU that we wish to compute. We thus obtain a probability distribution of ΔU: p1(ΔU). For system 0, we have to determine the energy difference ΔU, which is the difference in total energy when the ideal gas particle (which could be anywhere in the system) would be turned into an interacting particle. This energy difference equals the energy of a test particle in the Widom method (section 8.5.1). When we determine p0(ΔU), we can at the same time obtain an estimate of the excess chemical potential from the Widom particle insertion method. As explained in the text, it is convenient not to use p0(ΔU) and p1(ΔU), but the closely related functions f0(ΔU) and f1(ΔU), defined in Eqs. (8.6.6) and (8.6.7). In Fig. N.23 we show how βμex can be obtained (using Eq. (8.6.8)) from a plot of f0(ΔU) and f1(ΔU), as a function of ΔU. The results shown in the left part of Fig. N.23 apply to a Lennard-Jones fluid at ρ = 0.7; the results on the right are for ρ = 1.00.


For the sake of comparison, we have also plotted the results obtained using the Widom particle insertion method. The figure shows that at ρ = 0.7 there is a sufficiently large range of energy differences for which the two functions overlap (−10 < ΔU < −5), in the sense that the noise in both functions is relatively small in this energy range. The result of the overlapping distribution function, therefore, is in good agreement with the results from the Widom method. However, at ρ = 1.00, the range of overlap is limited to the wings of the histograms p0 and p1, where the statistical accuracy is poor. As a consequence, our estimate for βμex is not constant (as it should be) but appears to depend on ΔU. Moreover, the results from the overlapping distribution method are not consistent with the result of the Widom particle insertion method. Note that two separate simulations are needed to determine the excess chemical potential from the overlapping distribution method. One might think that particle addition and particle removal histograms could be measured in a single simulation of an N-particle system. Such an approach would indeed be correct if there were no difference between the histograms for particle removal from N and N + 1 particle systems, but for dense systems containing only a few hundred particles, the system-size dependence of μex can be appreciable. Of course, sometimes simulations on small systems are performed as a preliminary step to larger-scale simulations. Then it is advisable to compute both p0(ΔU) and p1(ΔU) for the small system in a single simulation, as this allows us to check whether the overlap between the two distributions is likely to be sufficient to justify a larger-scale simulation.
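The construction of f0 and f1 from the two sets of sampled energy differences can be sketched as follows. This is our own hedged Python illustration, assuming the definitions f0 = ln p0 − βΔU/2 and f1 = ln p1 + βΔU/2 of Eqs. (8.6.6) and (8.6.7); in the overlap region f1 − f0 should be flat and equal to βμex.

    import numpy as np

    def overlap_mu_ex(du0, du1, beta, bins=100):
        """Overlapping-distribution estimate of beta*mu_ex (sketch).

        du0: Delta-U samples from system 0 (test-particle insertions);
        du1: Delta-U samples from system 1 (removals in the N+1 system).
        Returns the Delta-U grid and f1 - f0 in the region of overlap.
        """
        lo, hi = min(du0.min(), du1.min()), max(du0.max(), du1.max())
        edges = np.linspace(lo, hi, bins + 1)
        mid = 0.5 * (edges[1:] + edges[:-1])
        p0, _ = np.histogram(du0, edges, density=True)
        p1, _ = np.histogram(du1, edges, density=True)
        mask = (p0 > 0) & (p1 > 0)                   # keep only the overlap region
        f0 = np.log(p0[mask]) - 0.5 * beta * mid[mask]
        f1 = np.log(p1[mask]) + 0.5 * beta * mid[mask]
        return mid[mask], f1 - f0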

N.17 Solid-liquid equilibrium of hard spheres Case Study 17 (Solid-liquid equilibrium of hard spheres). In this case study, we locate the solid-liquid coexistence densities of the hard-sphere model. We determine these densities by equating the chemical potential and the pressure of the two phases. For the liquid phase, we use the equation of state of Speedy [421], which is based on a Padé approximation to simulation data on both the equation of state and the virial coefficients of hard spheres:

z_liquid = βP/ρ = 1 + (x + 0.076014x² + 0.019480x³) / (1 − 0.548986x + 0.075647x²).

For the solid phase of the hard-sphere model, Speedy proposed the following equation of state [320]:

z_solid = 3/(1 − ρ*) − 0.5921 (ρ* − 0.7072)/(ρ* − 0.601),    (N.17.1)

where ρ* = σ³ρ/√2. In Fig. N.24, we compare the predictions of this equation of state for the liquid and solid phases with the results from computer simulations of Alder and Wainwright [422] and Adams [171]. As can be seen, the empirical equations of state reproduce the simulation data quite well. To calculate the chemical potential of the liquid phase, we integrate the equation of state (see (9.1.1)) starting


FIGURE N.24 Pressure P (left) and chemical potential μ (right) as a function of the density ρ. The solid curves, showing the pressure and chemical potential of the liquid phase, are obtained from the equation of state of Speedy [421]. The dashed curve gives the pressure of the solid phase as calculated from the equation of state of ref. [320]. The open and filled symbols are the results of computer simulations for the liquid [171,422,423] and solid phases [422], respectively. The coexistence densities are indicated with horizontal lines.

from the dilute gas limit. This yields the Helmholtz free energy as a function of the density. The chemical potential then follows from

βμ(ρ) = βG/N = βF/N + P/(ρkBT).

The free energy per particle of the ideal gas is given by

βf_id(ρ) = F_id(ρ)/(NkBT) = ln(ρΛ³) − 1,

where Λ is the de Broglie thermal wavelength. In what follows, we shall write βf_id(ρ) = ln ρ − 1. That is, we shall work with the usual reduced densities and ignore the additive constant 3 ln(Λ/σ), as it plays no role in the location of phase equilibria for classical systems. Fig. N.24 compares the chemical potential that follows from the Speedy equation of state with some of the available simulation data (namely, grand-canonical ensemble simulations of [171] and direct calculations of the chemical potential using the Widom test-particle method [423]; see Chapter 8). These results show that we have an accurate equation of state for the liquid phase and the solid phase. Since we know the absolute free energy of the ideal gas phase, we can calculate the free energy and hence the chemical potential of the liquid phase. For the solid phase we can use the equation of state to calculate only free energy differences; to calculate the absolute free energy we have to determine the free energy at a particular density. To perform this calculation, we use the lattice coupling method.
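The route from an equation of state to the chemical potential of the liquid can be sketched as follows (our own Python illustration, assuming SciPy is available; z_of_rho stands for any compressibility-factor correlation, such as Speedy's Padé form, and is an assumed callable):

    import numpy as np
    from scipy.integrate import quad

    def beta_mu_liquid(rho, z_of_rho):
        """beta*mu of the fluid from an equation of state z(rho) = beta*P/rho (sketch).

        Integrates (z - 1)/rho' from the dilute-gas limit to rho to get the excess
        free energy per particle, then adds the ideal-gas part (ln rho - 1, reduced
        units) and beta*P/rho = z, as in beta*mu = beta*F/N + beta*P/rho.
        """
        beta_f_ex, _ = quad(lambda r: (z_of_rho(r) - 1.0) / r, 1e-8, rho)
        beta_f = np.log(rho) - 1.0 + beta_f_ex
        return beta_f + z_of_rho(rho)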


We must now select the upper limit of the coupling parameter λ (λmax) and the values of λ for which we perform the simulation. For sufficiently large values of λ we can calculate ⟨Σ_{i=1}^{N} (r_i − r_{0,i})²⟩ analytically, using

⟨r²⟩_λ = (1/N) ∂F(λ)/∂λ.

For the noninteracting Einstein crystal, the mean-squared displacement is given by

⟨r²⟩_λ = 3/(2βλ).    (N.17.2)

For a noninteracting Einstein crystal with fixed center of mass, the free energy is given by Eq. (9.2.23), which gives

⟨r²⟩_Ein,λ = [3/(2β)] [(N − 1)/N] (1/λ).    (N.17.3)

In [314] an analytical expression is derived for the case of an interacting Einstein crystal, which reads

⟨r²⟩_λ = ⟨r²⟩_Ein,λ − (n/2) { [σa − σ² − 1/(βλ)] exp[−βλ(a − σ)²/2] + [σa + σ² − 1/(βλ)] exp[−βλ(a + σ)²/2] } / { 2a (2πβλ)^{1/2} [1 − ⟨P^nn_overlap⟩_λ] },    (N.17.4)

where a is the separation of two nearest neighbors i and j, a = |r_{0,i} − r_{0,j}|, σ is the hard-core diameter, and n is the number of nearest neighbors (for example, n = 12 for Face Centered Cubic (FCC) and hcp (hexagonal close-packed) solids, or n = 8 for bcc (body-centered cubic)); ⟨P^nn_overlap⟩_λ is the probability that two nearest neighbors overlap. This probability is given by

⟨P^nn_overlap⟩_λ = { erf[(βλ/2)^{1/2}(σ + a)] + erf[(βλ/2)^{1/2}(σ − a)] } / 2
                 − { exp[−βλ(σ − a)²/2] − exp[−βλ(σ + a)²/2] } / [ (2πβλ)^{1/2} a ].    (N.17.5)

This equation can also be used to correct the free energy of a noninteracting Einstein crystal (9.2.23):

βF_Ein(λ)/N = βF_Ein/N + (n/2) ln[1 − ⟨P^nn_overlap⟩_λ].    (N.17.6)

We choose λmax such that, for values of λ larger than this maximum value, ⟨r²⟩_λ obeys the analytical expression. Typically, this means that the probability of overlap of two harmonically bound particles should be considerably less than 1%. The results of these simulations are presented in Fig. N.25. This figure shows that if we


FIGURE N.25 The mean-squared displacement ⟨r²⟩_λ as a function of the coupling parameter λ for a hard-sphere (FCC) solid of 54 particles (6 layers of 3 × 3 close-packed atoms at a density ρ = 1.04). The figure on the left shows the simulation results for low values of λ, the figure on the right for high values. The solid line takes into account nearest-neighbor interactions (N.17.4); the dashed line assumes a noninteracting Einstein crystal (N.17.3). The open symbols are the simulation results.

rely only on the analytical results of the noninteracting Einstein crystal, we have to take a value λmax ≈ 1000–2000. If we use Eq. (N.17.4) for ⟨r²⟩_λ, λmax = 500–1000 is sufficient. We should now integrate

ΔF/N = ∫_0^{λmax} dλ ⟨r²⟩_λ.

In practice, this integration is carried out by numerical quadrature. We therefore must specify the values of λ for which we are going to compute ⟨r²⟩_λ. To improve the accuracy of the numerical quadrature, it is convenient to transform to another integration variable:

ΔF/N = ∫_0^{λmax} [dλ/g(λ)] g(λ) ⟨r²⟩_λ = ∫_{G⁻¹(0)}^{G⁻¹(λmax)} d[G⁻¹(λ)] g(λ) ⟨r²⟩_λ,

where g(λ) is an as-yet arbitrary function of λ and G⁻¹(λ) is the primitive of the function 1/g(λ). If we can find a function g(λ) such that the integrand, g(λ)⟨r²⟩_λ, is a slowly varying function, we need fewer function evaluations to arrive at an accurate estimate. To do this, we need to have an idea about the behavior of ⟨r²⟩_λ. For λ → 0, ⟨r²⟩_λ → ⟨r²⟩_0, which is the mean-squared displacement of an atom around its lattice site in the normal hard-sphere crystal. At high values of λ, where the system behaves like an Einstein crystal, we have ⟨r²⟩_λ → 3kBT/(2λ). This leads to the following guess for the functional form of g(λ): g(λ) ≈ kBT/⟨r²⟩_λ ≈ c + λ,


FIGURE N.26 βF ex /N + ln(N )/N versus 1/N for an FCC crystal of hard spheres at a density ρσ 3 = 1.0409. The solid line is a linear fit to the data. The coefficient of the 1/N term is −6.0(2), and the intercept (i.e., the infinite system limit of βF ex /N ) is equal to 5.91889(4).

where c = kBT/⟨r²⟩_0. Here, ⟨r²⟩_0 can be estimated from Fig. N.25. The value of c clearly depends on density (and temperature). For ρ = 1.04, extrapolation to λ → 0 gives ⟨r²⟩_0 ≈ 0.014, which gives c = 70. If we use this function g(λ), the free energy difference is calculated from

ΔF/N = ∫_{ln c}^{ln(λmax + c)} d[ln(λ + c)] (λ + c) ⟨r²⟩_λ.

For the numerical integration, we use an n-point Gauss-Legendre quadrature [424]. As the integrand is a smooth function, a 10-point quadrature is usually adequate. As discussed in section 9.2.5, the resulting free energy still depends (slightly) on the system size. An example of the system-size dependence of the excess free energy of a hard-sphere crystal is shown in Fig. N.26 [425]. From this figure, we can estimate the excess free energy of the infinite system to be βf_ex = 5.91889(4). This is in good agreement with the estimate of Frenkel and Ladd, βf_ex = 5.9222 [314]. Once we have one value of the absolute free energy of the solid phase at a given density, we can compute the chemical potential of the solid phase at any other density, using the equation of state of Speedy (see Fig. N.24). The coexistence densities follow from the condition that the chemical potentials and pressures in the coexisting phases should be equal. Using the value of 5.91889(4) from [425] for the solid at ρ = 1.04086, we arrive at a freezing density ρl = 0.9391 and a melting density ρs = 1.0376. At coexistence, the pressure is Pcoex = 11.567 and the chemical potential is μcoex = 17.071. In fact, as we shall argue below, the presence of vacancies in the equilibrium crystal lowers the coexistence pressure slightly: Pcoex = 11.564. These results are in surprisingly good agreement with the original data of Hoover and Ree [307], who obtained an estimate for the solid-liquid coexistence densities ρs = 1.041 ± 0.004 and ρl = 0.943 ± 0.004 at a pressure of 11.70 ± 0.18. The free energy difference between the FCC and HCP structures for large hard-sphere crystals at melting is very close to 0, but the FCC structure appears to be the more stable phase [303,412,426,427].
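The transformed λ-integration described above can be sketched as follows (a hedged Python illustration; msd_of_lambda stands for the simulation estimate of ⟨r²⟩_λ and is an assumed callable):

    import numpy as np

    def einstein_crystal_dF(msd_of_lambda, lam_max, c=70.0, n_points=10):
        """Delta F / N from the lattice-coupling integration (sketch).

        Evaluates int d[ln(lambda + c)] (lambda + c) <r^2>_lambda with an
        n-point Gauss-Legendre quadrature, following the change of variable
        used in the text.
        """
        a, b = np.log(c), np.log(lam_max + c)
        x, w = np.polynomial.legendre.leggauss(n_points)   # nodes/weights on [-1, 1]
        u = 0.5 * (b - a) * x + 0.5 * (b + a)              # map to [ln c, ln(lam_max + c)]
        lam = np.exp(u) - c
        integrand = (lam + c) * np.array([msd_of_lambda(l) for l in lam])
        return 0.5 * (b - a) * np.sum(w * integrand)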


FIGURE N.27 Equation of state of an eight-bead Lennard-Jones chain as obtained from N , V , T and N , P , T simulations using the configurational-bias Monte Carlo scheme. The simulations are performed with 50 chains at a temperature T = 1.9.

N.18 Equation of state of Lennard-Jones chains Case Study 18 (Equation of state of Lennard-Jones chains). To illustrate the configurational-bias Monte Carlo technique described in this section, we determine the equation of state of a system consisting of eight-bead chains of Lennard-Jones particles. The nonbonded interactions are described by a truncated and shifted Lennard-Jones potential, truncated at Rc = 2.5σ. The bonded interactions are described with a harmonic spring

u_vib(l) = 0.5 k_vib (l − 1)²  for 0.5 ≤ l ≤ 1.5, and ∞ otherwise,

where l is the bond length, the equilibrium bond length has been set to 1, and k_vib = 400. The simulations are performed in cycles. In each cycle, we perform on average Ndis attempts to displace a particle, Ncbmc attempts to (partly) regrow a chain, and Nvol attempts to change the volume (only in the case of N, P, T simulations). If we regrow a chain, the configurational-bias Monte Carlo scheme is used. In this move, we select at random the monomer from which we start to regrow. If this happens to be the first monomer, the entire molecule is regrown at a random position. For all the simulations, we used eight trial orientations. The lengths of trial bonds are generated with a probability prescribed by the bond-stretching potential (see Case Study 19). In Fig. N.27 the equation of state as obtained from N, V, T simulations is compared with the one obtained from N, P, T simulations. This isotherm is well above the critical temperature of the corresponding monomeric fluid (Tc = 1.085, see Fig. 3.3), but the critical temperature of the chain molecules is appreciably higher [512].

N.19 Generation of trial configurations of ideal chains Case Study 19 (Generation of trial configurations of ideal chains). In section 12.2.3, we emphasized the importance of efficiently generating trial segments for molecules


with strong intramolecular interactions in a CBMC simulation. In this Case Study, we quantify this. We consider the following bead-spring model of a polymer. The nonbonded interactions are described with a Lennard-Jones potential, and the bonded interactions with a harmonic spring:

u_vib(l) = 0.5 k_vib (l − 1)²  for 0.5 ≤ l ≤ 1.5, and ∞ otherwise,

where l is the bond length, the equilibrium bond length has been set to 1, and k_vib = 400. The bonded interaction is only the bond stretching. The external (nonbonded) interactions are the Lennard-Jones interactions. We consider the following two schemes for generating a set of trial positions:
1. Generate a random orientation with a bond length uniformly distributed in a spherical shell between limits chosen such that they bracket all acceptable bond lengths. For instance, we could consider limits that correspond to 50% stretching or compression of the bond. In that case, the probability of generating bond length l is given by

p1(l) dl ∝ l² dl  for 0.5 ≤ l ≤ 1.5, and 0 otherwise.

2. Generate a random orientation and a bond length prescribed by the bond-stretching potential (as described in Algorithm 25). The probability of generating bond length l with this scheme is

p2(l) dl ∝ exp[−βu_vib(l)] l² dl  for 0.5 ≤ l ≤ 1.5, and 0 otherwise.

Let us consider a case in which the system consists of ideal chains. Ideal chains are defined (see section 12.2.3) as chains having only bonded interactions. Suppose we use method 1 to generate the set of k trial orientations with bond lengths l1, ..., lk; then the Rosenbluth factor, W, for atom i is given by

w_i(n) = Σ_{j=1}^{k} exp[−βu_vib(l_j)].

The Rosenbluth factor of the entire chain is

W(n) = Π_{i=1}^{ℓ} w_i(n).

For the old conformation, a similar procedure is used to calculate its Rosenbluth factor:

W(o) = Π_{i=1}^{ℓ} w_i(o).


In the absence of external interactions, the Rosenbluth factor of the first atom is defined to be w1 = k. In the second scheme, we generate the set of k trial orientations with a bond-length distribution p2(l). If we use this scheme, we have to consider only the external interactions. Since, for an ideal chain, the external interactions are, by definition, 0, the Rosenbluth factor for each atom is given by

w_i^ext(n) = Σ_{j=1}^{k} exp[−βu_ext(l_j)] = k,

and similarly, for the old conformation, w_i^ext(o) = k. Hence, the Rosenbluth weight is the same for the new and the old conformations:

W^ext(n) = Π_{i=1}^{ℓ} w_i^ext(n) = k^ℓ

and

W^ext(o) = Π_{i=1}^{ℓ} w_i^ext(o) = k^ℓ.

The acceptance rule for the first scheme is acc(o → n) = min[1, W (n)/W (o)] and for the second scheme is acc(o → n) = min[1, W ext (n)/W ext (o)] = 1. Inspection of these acceptance rules shows that, in the second scheme, all configurations generated are accepted, whereas in the first scheme, this probability depends on the bond-stretching energy and, therefore, will be less than 1. Hence, it is clearly useful to employ the second scheme. To show that the results of schemes 1 and 2 are indeed equivalent, we compare the distribution of the bond length of the chain and the distribution of the radius of gyration in Fig. N.28. The figure shows that the results for the two methods are indeed indistinguishable. The efficiency of the two methods, however, is very different. In Table N.1, the difference in acceptance probability is given for some values of the bond-stretching force constant and various chain lengths. The table shows that if we use method 1 and generate a uniformly distributed bond length, we need to use at least 10 trial orientations to have a reasonable acceptance for chains longer than 20 monomers. Note that the corresponding table for the second method has a 100% acceptance for all values of k independent of the chain length. Most of the simulations, however, do not involve ideal chains but chains with external interactions. For chains with external interactions, the first method performs even worse. First of all, we generate the chains the same way as in the case


FIGURE N.28 Comparison of methods 1 and 2 for the distribution of bond lengths l (left) and the distribution of the radius of gyration Rg (right). The solid lines represent the results for method 1, and the dots for method 2 (ℓ = 5 and k = 5).

TABLE N.1 Probability of acceptance (%) for ideal chains using uniformly distributed bond lengths (method 1), where ℓ is the chain length and k is the number of trial orientations. The value of the spring constant is kvib = 400 (see [440]). For method 2, the acceptance would have been 100% for all values of k and ℓ.

k      ℓ = 5   ℓ = 10  ℓ = 20  ℓ = 40  ℓ = 80  ℓ = 160
1      0.6     0.01    0.01    0.01    0.01    0.01
5      50      50      10      0.01    0.01    0.01
10     64      58      53      42      0.01    0.01
20     72      66      60      56      44      0.01
40     80      72      67      62      57      40
80     83      78      72      68      62      60

of the ideal chains. The bonded interactions are the same, and we need to generate at least the same number of trial directions to get a reasonable acceptance. In addition, if there are external interactions, we have to calculate the nonbonded interactions for all of these trial positions. The calculation of the nonbonded interactions takes most of the CPU time, yet, in the first method, most of the trial orientations are doomed to be rejected solely on the basis of the bonded energy. These two reasons make the second scheme much more attractive than the first.
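Generating trial bond lengths with the distribution p2(l) (method 2) can be sketched as follows. This is our own Python illustration using simple rejection sampling; the function names and the envelope used in the acceptance test are assumptions.

    import numpy as np

    def trial_bond_length(beta, k_vib=400.0, l0=1.0, rng=None):
        """Draw a bond length with probability p2(l) ~ exp(-beta*u_vib(l)) * l^2 (sketch).

        Rejection sampling on the allowed interval [0.5, 1.5] of the harmonic bond.
        """
        rng = rng or np.random.default_rng()
        p_max = 1.5 ** 2                        # safe upper bound for exp(-beta*u)*l^2
        while True:
            l = 0.5 + rng.random()              # uniform on [0.5, 1.5]
            p = np.exp(-beta * 0.5 * k_vib * (l - l0) ** 2) * l * l
            if rng.random() * p_max < p:
                return l

    def trial_orientation(rng=None):
        """Random unit vector for the direction of the new bond."""
        rng = rng or np.random.default_rng()
        v = rng.normal(size=3)
        return v / np.linalg.norm(v)

Because the bond-stretching Boltzmann factor is built into the generation step, the external (Lennard-Jones) energy is the only quantity that enters the Rosenbluth weights, which is exactly what makes method 2 more efficient.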

N.20 Recoil growth simulation of Lennard-Jones chains Case Study 20 (Recoil growth simulation of Lennard-Jones chains). To illustrate the Recoil Growth (RG) method, we make a comparison between this method and Configurational-Bias Monte Carlo (CBMC). Consider 20 Lennard-Jones chains of length 15. The monomer density is ρ = 0.3 at temperature T = 6.0. Two bonded monomers have a constant bond length of 1.0, while three successive particles have a constant bond angle of 2.0 radians.


FIGURE N.29 Comparison of configurational-bias Monte Carlo (CBMC) with recoil growth for the simulation of Lennard-Jones chains of length 15. The left figure gives the distribution of the end-to-end distance (RE). In the right figure, the efficiency (η) is shown as a function of the number of trial directions (k) for different recoil lengths (lmax) as well as for CBMC.

In Fig. N.29, the distribution of the end-to-end vector, RE, of the chain is plotted. In this figure, we compare the results from a CBMC and an RG simulation. Since both methods generate a Boltzmann distribution of conformations, the results are identical (as they should be). For this specific Case Study, we have compared the efficiency, η, of the two methods. The efficiency is defined as the number of accepted trial moves per amount of CPU time. For CBMC we see that the efficiency increases as we increase k, the number of trial orientations, from 1 to 4. From 4 to 8, the efficiency is more or less constant, and above 8, a decrease in the efficiency is observed. In the RG scheme, we have two parameters to optimize: the number of trial orientations k and the recoil length lmax. If we use only one trial orientation, recoiling is impossible since there are no other trial orientations. If we use a recoil length of 1, the optimum number of trial orientations is 4, and for larger recoil lengths, the optimum is reached with fewer trial orientations. Interestingly, the global optimum is 2 trial orientations and a recoil length of 3–5. In this regime, the increase in CPU time associated with a larger recoil length is compensated by a higher acceptance. In the present study, optimal RG was a factor of 8 more efficient than optimal CBMC. Case Study 21 (Parallel tempering of a single particle). As a trivial illustration of the power of parallel tempering, we consider a single particle moving in an external potential as shown in Fig. N.30 (left):

U(x) = ∞                      x < −2
       1 × (1 + sin(2πx))     −2 ≤ x ≤ −1.25
       2 × (1 + sin(2πx))     −1.25 ≤ x ≤ −0.25
       3 × (1 + sin(2πx))     −0.25 ≤ x ≤ 0.75     (N.20.1)
       4 × (1 + sin(2πx))     0.75 ≤ x ≤ 1.75
       5 × (1 + sin(2πx))     1.75 ≤ x ≤ 2
       ∞                      x > 2

We place the particle initially in the left-most potential energy well.


FIGURE N.30 (Left) Potential energy (U (x)) as a function of the position x. (Right) Probability (P (x)) of finding a particle at position x for various temperatures (T ) as obtained from ordinary Monte Carlo simulations. The lower-temperature systems are not (or barely) able to cross the energy barriers separating the wells.

FIGURE N.31 (Left) Probability (P (x)) of finding a particle at position x for various temperatures (T ) using parallel tempering. (Right) Position (x) as a function of the number of Monte Carlo trial moves (n) for T = 0.05. To avoid cluttering this figure with too many jumps, we have only used 0.05% swap moves here.

We first use normal Metropolis MC at three different temperatures (T = 0.05, 0.3, and 2.0). At the lowest temperature (T = 0.05), the particle is basically trapped in its initial potential energy well during the entire simulation, whereas at the highest temperature (T = 2.0), it can explore all wells. Next, we apply parallel tempering, that is: we allow for temperature swaps between the three systems (see Fig. N.31). Due to the temperature-swap moves, the systems now equilibrate rapidly at all three temperatures. The difference is particularly strong for the probability distribution at the lowest temperature. In the present parallel-tempering simulation, we consider two types of trial moves: 1. Particle displacement: we randomly select one of the three temperatures and carry out a trial displacement Δ of a randomly selected particle at that temperature, choosing the trial displacement Δ from a random distribution, uniform between −0.1 and 0.1. The acceptance of this trial displacement is determined


by the conventional Metropolis MC rule

acc(o → n) = min{1, exp[−β(U(n) − U(o))]}.    (N.20.2)

2. Temperature swapping. The trial move consists of attempting to swap two randomly selected neighboring temperatures (Ti and Tj). Such a trial move is accepted with a probability given by Eq. (13.1.2):

acc(o → n) = min{1, exp[(βj − βi)(Uj − Ui)]}.    (N.20.3)

We are free to choose the relative rate of displacement and swap moves. In the present example, we used 10% swap moves and 90% particle displacements, as suggested in ref. [550]. Note, however, that other choices are possible.4 As can be seen in Fig. N.31, parallel tempering results in a dramatic improvement in the sampling of configuration space at the lowest temperature.
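A temperature-swap trial move of the kind used here can be sketched in Python as follows (our own illustration; the replica bookkeeping is an assumption):

    import numpy as np

    rng = np.random.default_rng()

    def attempt_swap(configs, energies, betas):
        """Parallel-tempering swap between two neighboring temperatures (sketch).

        configs, energies, betas are lists indexed by replica; the move is
        accepted with probability min{1, exp[(beta_j - beta_i)(U_j - U_i)]},
        as in Eq. (N.20.3).
        """
        i = rng.integers(len(betas) - 1)
        j = i + 1
        arg = (betas[j] - betas[i]) * (energies[j] - energies[i])
        if rng.random() < np.exp(min(0.0, arg)):   # min with 0 avoids overflow
            configs[i], configs[j] = configs[j], configs[i]
            energies[i], energies[j] = energies[j], energies[i]
            return True
        return False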

N.21 Multiple time step versus constraints Case Study 22 (Multiple time step versus constraints). In this Case Study, we consider a system of diatomic Lennard-Jones molecules. We compare two models: the first model uses a fixed bond length l0 between the two atoms of a molecule. In the second model, we use a bond-stretching potential given by

U_bond(l) = (1/2) k_b (l − l0)²,

where l is the distance between the two atoms in a molecule. In the simulations, we used k_b = 50000 and l0 = 1. In addition to the bond-stretching potential, all nonbonded atoms interact via a Lennard-Jones potential. The total number of diatomics was 125, and the box length 7.0 (in the usual reduced units). The Lennard-Jones potential was truncated at rc = 3.0, while T = 3.0. The equations of motion are solved using bond constraints for the first model, while multiple time steps were used for the second model. All simulations were performed in the N V E ensemble. It is interesting to compare the maximum time steps that can be used to solve the equations of motion for these two methods. As a measure of the accuracy with which the equations of motion are solved, we compute the average deviation of the initial energy, which is defined by Martyna et al. [623] as

ΔE = (1/N_step) Σ_{i=1}^{N_step} | [E(iΔt) − E(0)] / E(0) |,

in which E(iΔt) is the total energy at time iΔt.

4 There are even algorithms that use an infinite swap rate [551], including all possible permutations of the temperatures. However, such algorithms (see also [552]) do not scale well with the number of distinct temperatures.


FIGURE N.32 Comparison of the energy fluctuations as a function of the time step for a normal MD simulation with a harmonic bond potential and a constrained MD simulation with the SHAKE algorithm.

For the bond constraints, we use the SHAKE algorithm [606] (see also section 14.1). In the SHAKE algorithm, the bond lengths are fixed exactly at l0 using an iterative scheme. In Fig. N.32 (left), the energy fluctuations are shown as a function of the time step. Normally one tolerates a noise level in ΔE of O(10−5), which would correspond to a time step of 2 × 10−4 for the first model. This should be compared with a single-time-step Molecular Dynamics simulation using the second model. A similar energy noise level can be obtained with a time step of 9 × 10−5, which is a factor of 2 smaller. To apply the multiple-time-step algorithm, we have to separate the intermolecular force into a short-range and a long-range part. In the short-range part, we include the bond-stretching potential and the short-range part of the Lennard-Jones potential. To split the Lennard-Jones potential, we use a simple switching function S(r):

U_LJ(r) = U_short(r) + U_long(r)
U_short(r) = S(r) U_LJ(r)
U_long(r) = [1 − S(r)] U_LJ(r),

where

S(r) = 1                   for 0 < r < r_m − λ
       1 + γ²(2γ − 3)      for r_m − λ < r < r_m
       0                   for r_m < r < r_c

and

γ = (r − r_m + λ)/λ.    (N.21.1)

In fact, there are other ways to split the total potential function [624,625]. We have chosen λ = 0.3 and rm = 1.7. To save CPU time, a list is made of all the atoms that are close to each other (see Appendix I for details); therefore, the calculation of the


FIGURE N.33 Comparison of the efficiency η for bond constraints (SHAKE) with normal molecular dynamics (left) and multiple time steps (right). The left figure gives the efficiency as a function of the time step and the right figure as a function of the number of small time steps n, Δt = nδt, where the value of δt is given in the symbol legend.

short-range forces can be done very efficiently. For a noise level of 10−5, one is able to use δt = 10−4 and n = 10, giving Δt = 10−3. To compare the different algorithms in a consistent way, we compare in Fig. N.33 the efficiency of the various techniques. The efficiency η is defined as the length of the simulation (time step times the number of integration steps) divided by the amount of CPU time that was used. In the figure, we have plotted η for all simulations from Fig. N.32. For an energy noise level of 10−5, the SHAKE algorithm is twice as efficient as normal MD (n = 1). This means that hardly any CPU time is spent in the SHAKE routine. However, the MTS algorithm is still two times faster (n = 10, δt = 10−4) at the same energy noise level.
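The potential split with the switching function of Eq. (N.21.1) can be sketched as follows (a hedged Python illustration; the parameter defaults correspond to the values λ = 0.3 and rm = 1.7 used above):

    import numpy as np

    def switch(r, rm=1.7, lam=0.3):
        """Switching function S(r) of Eq. (N.21.1): 1 below rm - lam, 0 above rm."""
        gamma = (r - rm + lam) / lam
        return np.where(r < rm - lam, 1.0,
               np.where(r < rm, 1.0 + gamma**2 * (2.0 * gamma - 3.0), 0.0))

    def split_lj(r, eps=1.0, sig=1.0):
        """Split the Lennard-Jones potential into short- and long-range parts."""
        u = 4.0 * eps * ((sig / r) ** 12 - (sig / r) ** 6)
        s = switch(r)
        return s * u, (1.0 - s) * u     # (U_short, U_long); their sum is the full LJ

S(r) goes smoothly from 1 to 0 between r_m − λ and r_m, so the short-range part (evaluated every small time step) contains the steep core, while the slowly varying long-range part can be evaluated every n steps.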

N.22 Ideal gas particle over a barrier Case Study 23 (Ideal gas particle over a barrier). To illustrate the "Bennett-Chandler" approach for calculating crossing rates, we consider an ideal gas particle moving in an external field. This particle is constrained to move on the one-dimensional potential surface shown in Fig. N.34. This example is rather unphysical because the moving particle cannot dissipate its energy. As a consequence, the motion of the particle is purely ballistic. We assume that, far away on either side of the barrier, the particle can exchange energy with a thermal reservoir. Transition state theory predicts a crossing rate given by Eq. (15.2.2):

k_{A→B}^{TST} = (1/2) ⟨|q̇|⟩ exp[−βu(q*)] / ∫_{−∞}^{q*} dq exp[−βu(q)]
             = [kBT/(2πm)]^{1/2} exp[−βu(q*)] / ∫_{−∞}^{q*} dq exp[−βu(q)].    (a)

If we choose the dividing surface q1 (see Fig. N.34) at the top of the barrier (q1 = q ∗ ) none of the particles that start off with a positive velocity will return to the reactant state. Hence, there is no recrossing of the barrier and transition state theory is exact for this system.


FIGURE N.34 Potential-energy barrier for an ideal gas particle; if the particle has a position to the left of the dividing surface q1 the particle is in state A (reactant). The region to the right of the barrier is designated as product B. The top of the barrier is denoted by q ∗ (q ∗ = 0).

Note that transition state theory (Eq. (a)) predicts a rate constant that depends on the location of the dividing surface. In contrast, the Bennett-Chandler expression for the crossing rate is independent of the location of the dividing surface (as it should be). To see this, consider the situation that the dividing surface is chosen to the left of the top of the barrier (i.e., at q1 < q*). The calculation of the crossing rate according to Eq. (15.2.3) proceeds in two steps. First we calculate the relative probability of finding a particle at the dividing surface. And then we need to compute the probability that a particle that starts with an initial velocity q̇ from this dividing surface will, in fact, cross the barrier. The advantage of the present example is that this probability can be computed explicitly. According to Eq. (15.2.4), the relative probability of finding a particle at q1 is given by

⟨δ(q − q1)⟩ / ⟨θ(q1 − q)⟩ = exp[−βu(q1)] / ∫_{−∞}^{q1} dq exp[−βu(q)].    (b)

If the dividing surface is not at the top of the barrier, then the probability of finding a particle will be higher at q1 than at q*, but the fraction of particles that actually cross the barrier will be less than predicted by transition state theory. It is convenient to introduce the time-dependent transmission coefficient κ(t), defined as the ratio

κ(t) ≡ k_{A→B}(t) / k_{A→B}^{TST} = ⟨q̇(0) δ(q(0) − q1) θ(q(t) − q1)⟩ / ⟨0.5 |q̇(0)|⟩.    (c)

The behavior of κ(t) is shown in Fig. N.35 for various choices of q1 . The figure shows that for t → 0 κ(t) = 1, and that for different values of q1 we get different plateau values. The reason κ(t) decays from its initial value is that particles that start off with too little kinetic energy cannot cross the barrier and recross the dividing surface (q1 ). The plateau value of κ(t) provides us with the correction that has to be applied to the crossing rate predicted by transition state theory. Hence, we see that as we change q1 , the probability of finding a particle at q1 goes up, and the transmission coefficient goes down. But, as can be seen from Fig. N.35, the actual


FIGURE N.35 Barrier recrossing: the left figure gives the transmission coefficient as a function of time for different values of q1 . The right-hand figure shows, in a single plot, the probability density of finding the system at q = q1 (solid squares), the transmission coefficient κ (open squares), and the overall crossing rate (open circles), all plotted as a function of the location of the dividing surface. Note that the overall crossing rate is independent of the choice of the dividing surface.

crossing rate (which is proportional to the product of these two terms) is independent of q1, as it should be. Now consider the case that q1 > q*. In that case, all particles starting with positive q̇ will continue to the product side. But now there is also a fraction of the particles with negative q̇ that will proceed to the product side. These events will give a negative contribution to κ. And the net result is that the transmission coefficient will again be less than predicted by transition state theory. Hence, the important thing is not if a trajectory ends up on the product side, but if it starts on the reactant side and proceeds to the product side. In a simulation, it is therefore convenient always to compute trajectories in pairs: for every trajectory starting from a given initial configuration with a velocity q̇, we also compute the time-reversed trajectory, i.e., the one starting from the same configuration with a velocity −q̇. If both trajectories end up on the same side of the barrier then their total contribution to the transmission coefficient is clearly zero. Only if the forward and time-reversed trajectories end up on different sides of the barrier do we get a contribution to κ. In the present (ballistic) case, this contribution is always positive. But in general, this contribution can also be negative (namely, if the initial velocity at the top of the barrier is not in the direction where the particle ends up). We chose this simple ballistic barrier-crossing problem because we can easily show explicitly that the transmission rate is independent5 of the location of q1. We start with the observation that the sum of the kinetic and potential energies of a particle that crosses the dividing surface q1 is constant. Only those particles that have sufficient kinetic energy can cross the barrier. We can easily compute the long-time limit of ⟨q̇(0)θ(q(t) − q1)⟩:

⟨q̇(0)θ(q(∞) − q1)⟩ = [mβ/(2π)]^{1/2} ∫_{v_min}^{∞} dv v exp(−βmv²/2)
                    = [1/(2πmβ)]^{1/2} exp(−βm v_min²/2),

where v_min is the minimum velocity needed to achieve a successful crossing; v_min is given by

(1/2) m v_min² + u(q1) = u(q*).

It then follows that

⟨q̇(0)θ(q(∞) − q1)⟩ = [1/(2πmβ)]^{1/2} exp{−β[u(q*) − u(q1)]}.

This term exactly compensates the Boltzmann factor, exp(−βu(q1)), associated with the probability of finding a particle at q1. Hence, we have shown that the overall crossing rate is given by Eq. (a), independent of the choice of q1. The reader may wonder why it is important to have an expression for the rate constant that is independent of the precise location of the dividing surface. The reason is that, although it is straightforward to find the top of the barrier in a one-dimensional system, the precise location of the saddle point in a reaction pathway of a many-dimensional system is usually difficult to determine. With the Bennett-Chandler approach it is not necessary to know the exact location of the saddle point. Still, it is worth trying to get a reasonable estimate, as the statistical accuracy of the results is best if the dividing surface is chosen close to the true saddle point. The nice feature of the Bennett-Chandler expression for barrier-crossing rates is that it allows us to compute rate constants under conditions where barrier recrossings are important, for instance, if the motion over the top of the barrier is more diffusive than ballistic. Examples of such systems are the cyclohexane interconversion in a solvent [631] and the diffusion of nitrogen in an argon crystal [632].

5 The general proof that the long-time limit of the crossing rate is independent of the location of the dividing surface was given by Miller [630].
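Computing the transmission coefficient from pairs of forward and time-reversed trajectories, as advocated above, can be sketched as follows (our own Python illustration; propagate stands for an assumed user-supplied integrator that returns the final position):

    import numpy as np

    def kappa_from_pairs(q1, v0, propagate, t_plateau):
        """Transmission coefficient from trajectories started at q(0) = q1 (sketch).

        For each initial speed in v0 we run the forward and the time-reversed
        trajectory; only pairs ending on different sides of the barrier contribute.
        """
        num, den = 0.0, 0.0
        for v in v0:
            qf_fwd = propagate(q1, +v, t_plateau)
            qf_bwd = propagate(q1, -v, t_plateau)
            # contribution v*[theta(q_fwd - q1) - theta(q_bwd - q1)] to the numerator
            num += v * (float(qf_fwd > q1) - float(qf_bwd > q1))
            den += abs(v)                 # 0.5*|v| from each member of the pair
        return num / den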

N.23 Single particle in a two-dimensional potential well Case Study 24 (Single particle in a two-dimensional potential well). To illustrate the path sampling method, consider a system containing a single particle in the following simple two-dimensional potential [641]:

V(x, y) = { 4(1 − x² − y²)² + 2(x² − 2)² + [(x + y)² − 1]² + [(x − y)² − 1]² − 2 } / 6.    (N.23.1)

Note that V(x, y) = V(−x, y) = V(x, −y). Fig. N.36 shows that this potential consists of two stable regions around the points (−1, 0), which we call A, and (1, 0), which we call region B. To be more specific, all points within a distance of 0.7 from (−1, 0) or (1, 0) are defined to be in region A or B, respectively. At a temperature of T = 0.1, transitions from A to B are rare events. To compute the rate of transitions from A to B, we used path ensemble simulations. The initial distribution N(x0) was chosen to be canonical, i.e.,


FIGURE N.36 Contour plot of the function V(x, y) defined by Eq. (N.23.1). The two minima are at (−1, 0), A, and (1, 0), B. These minima are separated by a potential energy barrier.

FIGURE N.37 ⟨hB(t)⟩ (left) and η(t) (right) as a function of time for various values of the total path length T.

N(x0) ∝ exp[−βH(x0)].

A trajectory was generated using standard Molecular Dynamics simulations (see Chapter 4). The equations of motion were integrated using the velocity-Verlet algorithm with a time step of 0.002. The first step was the calculation of the coefficient η(t, t′). This involves the computation of the path ensemble averages ⟨hB(x_t)⟩_{A,HB(T)} for various times t. The result of such a simulation is shown in Fig. N.37 for T = 4.0 and T = 3.6. An important question is whether the time T is long enough. Since we are interested in the plateau of k(t), the function ⟨hB(x_t)⟩_{A,HB(T)} must have become a straight line for large values of t. If this function does not become a straight line, the value of T was probably too short, the process is not a rare event, or the process cannot be described by a single hopping rate. The consistency of the simulations can be tested by comparing the results with a simulation using a shorter (or longer, but this is more expensive) T. Fig. N.37 shows that the results of the two simulations are consistent. The next step is the calculation of the correlation function C(t). For the calculation of P(λ, t), we have defined the order parameter λ as the distance from


FIGURE N.38 (Left) P(λ, i, t = 3.0) for all slices i. (Right) P(λ, t = 3.0) when all slices i are combined. The units on the y axis are such that ∫_{−∞}^{1} dλ P(λ, t) = 1.

point B:

λ = 1 − |r − rB| / |rA − rB|,    (N.23.2)

in which rB = (1, 0). In this way, the region B is defined by 0.65 < λ ≤ 1, and the whole phase space is represented by (−∞, 1]. In Fig. N.38 (left), we have plotted P(λ, i, t = 3.0) as a function of λ for different slices i. Recombining the slices leads to Fig. N.38 (right). The value of C(t = 3.0) can be obtained by integrating over region B:

C(t) = ∫_B dλ P(λ, t).    (N.23.3)

Combining the results gives for the total crossing rate

k = η(t) C(t).    (N.23.4)

Using t = 3.0 leads to η(2.0) = 1.94, C(3.0) = 4.0 × 10−6, and k = 8.0 × 10−6.
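The final combination of the path-ensemble results can be sketched as follows (a hedged Python illustration; the array names are assumptions):

    import numpy as np

    def crossing_rate(lam_grid, p_lam, eta_t, lam_B=0.65):
        """Total crossing rate k = eta(t) * C(t) from path-sampling results (sketch).

        C(t) is obtained by integrating the normalized order-parameter
        distribution P(lambda, t) over region B (0.65 < lambda <= 1).
        """
        in_B = lam_grid > lam_B
        C_t = np.trapz(p_lam[in_B], lam_grid[in_B])
        return eta_t * C_t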

N.24 Dissipative particle dynamics Case Study 25 (Dissipative particle dynamics). To illustrate the Dissipative Particle Dynamics (DPD) technique, we have simulated a system of two components (1 and 2). The conservative force is a soft repulsive force, given by

f^C_ij = a_ij (1 − r_ij) r̂_ij   for r_ij < r_c, and 0 for r_ij ≥ r_c,    (N.24.1)

in which r_ij = |r_ij| and r_c is the cutoff radius of the potential. The random forces are given by Eq. (16.1.3) and the dissipative forces by Eq. (16.1.2). The total force on a particle equals the sum of the individual forces:

f_i = Σ_{j≠i} ( f^C_ij + f^S_ij + f^R_ij + f^D_ij ).    (N.24.2)


FIGURE N.39 (Left) Density profile for kB T = 0.45. (Right) Phase diagram as calculated using DPD and Gibbs ensemble simulations. Both techniques result in the same phase diagram, but the Gibbs ensemble technique needs fewer particles due to the absence of a surface. In the DPD simulations, we have used a box of 10 × 10 × 20 (in units of rc3 ). The time step of the integration was t = 0.03.

To obtain a canonical distribution, we use

σ² = 2γ kBT,    w^D(r_ij) = [w^R(r_ij)]².

A simple but useful choice is [682]

w^D(r_ij) = (1 − r_ij/r_c)²   for r_ij < r_c, and 0 for r_ij ≥ r_c,

with r_c = 1. The simulations were performed with ρ = 3.0 and σ = 1.5. We have chosen a_ii = a_jj = 25 and a_ij,i≠j = 30. This system will separate into two phases. In the example shown in Fig. N.39, we have chosen the z-direction perpendicular to the interface. The left part of Fig. N.39 shows typical density profiles of the two components. In Fig. N.39 (right), we have plotted the concentration of one of the components in the coexisting phases. Since we can write down a Hamiltonian for a DPD system, we can also perform standard Monte Carlo simulations [691]. For example, we can also use a Gibbs ensemble simulation (see section 6.6) to compute the phase diagram. As expected, Fig. N.39 shows that both techniques give identical results. Of course, due to the presence of an interface, one needs many more particles in such a DPD simulation. Thermodynamic quantities are calculated using only the conservative force. The pressure of the system is calculated using

p = ρ kBT + (1/(3V)) ⟨ Σ_{i>j} r_ij · f^C_ij ⟩.

If the DPD fluid undergoes phase separation into two slabs containing the different fluid phases, we can compute the associated interfacial tension using the


FIGURE N.40

Surface tension as a function of temperature.

techniques described in Section 5.1.6. Fig. N.40 shows this interfacial tension γ as a function of temperature. Clearly, γ decreases with increasing temperature and should vanish at the critical point. As the critical point is approached, the driving force for the formation of a surface (surface tension) becomes very low, and it becomes difficult, therefore, to form a stable interface in a small system.
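The pair forces used in such a DPD simulation can be sketched as follows (our own Python illustration of Eqs. (N.24.1)-(N.24.2) with the weight functions given above; the 1/sqrt(Δt) factor in the random force follows the usual DPD integration convention and is an assumption about the integrator):

    import numpy as np

    def dpd_pair_force(rij, vij, aij, gamma, sigma, dt, rc=1.0, rng=None):
        """Conservative, dissipative, and random DPD forces for one pair (sketch).

        Uses w_D(r) = (1 - r/rc)^2, w_R = sqrt(w_D); gamma and sigma should
        satisfy sigma^2 = 2*gamma*kB*T for a canonical distribution.
        """
        rng = rng or np.random.default_rng()
        r = np.linalg.norm(rij)
        if r >= rc:
            return np.zeros(3)
        e = rij / r
        wD = (1.0 - r / rc) ** 2
        wR = np.sqrt(wD)
        fC = aij * (1.0 - r / rc) * e                        # soft repulsion
        fD = -gamma * wD * np.dot(e, vij) * e                # dissipative force
        fR = sigma * wR * rng.normal() * e / np.sqrt(dt)     # random force
        return fC + fD + fR

Only fC enters the virial when the pressure or the interfacial tension is computed, as noted above.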

N.25 Comparison of schemes for the Lennard-Jones fluid Case Study 26 (Comparison of schemes for the Lennard-Jones fluid). It is instructive to make a detailed comparison of the various schemes to save CPU time for the Lennard-Jones fluid. We compare the following schemes:
1. Verlet list
2. Cell list
3. Combination of Verlet and cell lists
4. Simple N² algorithm

We have used the program of Case Study 1 as a starting point. At this point it is important to note that we have not tried to optimize the parameters (such as the Verlet radius) for the various methods; we have simply taken some reasonable values. For the Verlet list (and for the combination of Verlet and cell lists) it is important that the maximum displacement be smaller than half the difference between the Verlet radius and the cutoff radius. For the cutoff radius we have used rc = 2.5σ, and for the Verlet radius rv = 3.0σ. This limits the maximum displacement to Δx = 0.25σ and implies for the Lennard-Jones fluid that, if we want to use an optimum acceptance of 50%, we can use the Verlet method only for densities larger than ρ > 0.6σ−3. For smaller densities, the optimum displacement is larger than 0.25. Note that this density dependence does not exist in a Molecular Dynamics simulation. In a Molecular Dynamics simulation, the maximum displacement is determined by the integration scheme and therefore is independent of density. This makes the Verlet method much more appropriate for a Molecular Dynamics simulation than for a Monte Carlo simulation. Only at high densities does it make sense to use the Verlet list.


FIGURE N.41 Comparison of various schemes to calculate the energy: τ is in arbitrary units and N is the number of particles. As a test case, the Lennard-Jones fluid is used. The temperature was T ∗ = 2 and per cycle, the number of attempts to displace a particle was set to 100 for all systems. The lines serve to guide the eye.

The cell list method is advantageous only if the number of cells is larger than 3 in at least one direction. For the Lennard-Jones fluid this means that if the number of particles is 400, the density should be lower than ρ < 0.5σ−3. An important advantage of the cell list over the Verlet list is that this list can also be used for moves in which a particle is given a random position. From these arguments, it is clear that if the number of particles is smaller than 200–500, the simple N² algorithm is the best choice. If the number of particles is significantly larger and the density is low, the cell list method is probably more efficient. At high density, all methods can be efficient, and we have to make a detailed comparison. To test these conclusions about the N dependence of the CPU time of the various methods, we have performed several simulations with a fixed number of Monte Carlo cycles. For the simple N² algorithm, the CPU time per attempt is

τ_N² = cN,

where c is the CPU time required to calculate one interaction. This implies that the total amount of CPU time is independent of the density. For a calculation of the total energy, we have to do this calculation N times, which gives the scaling of N². Fig. N.41 shows that, indeed, for the Lennard-Jones fluid τ_N² increases linearly with the number of particles. If we use the cell list, the CPU time will be

τ_l = cV_l ρ + c_l p_l N,

where V_l is the total volume of the cells that contribute to the interaction (in three dimensions, V_l = 27 r_c³), c_l is the amount of CPU time required to make a cell list, and p_l is the probability that a new list has to be made. Fig. N.41 shows that the use of a cell list reduces the CPU time for 10,000 particles by a factor of 18. Interestingly, the CPU time does not increase with increasing density. We would expect an increase, since the number of particles that contribute to the interaction of a particle i increases with density. However, the second contribution to τ_l


(proportional to p_l, the probability that a new list has to be made) depends on the maximum displacement, which decreases when the density increases. Therefore, this last term will contribute less at higher densities. For the Verlet scheme, the CPU time is

τ_v = cV_v ρ + c_v p_v N²,

where V_v is the volume of the Verlet sphere (in three dimensions, V_v = 4π r_v³/3), c_v is the amount of CPU time required to make the Verlet list, and p_v is the probability that a new list has to be made. Fig. N.41 shows that this scheme is not very efficient. The N² operation dominates the calculation. Note that we use a program in which a new list for all particles has to be made as soon as one of the particles has moved more than (r_v − r_c)/2; with some more bookkeeping, it is possible to make a much more efficient program, in which a new list is made only for the particle that has moved out of its list. The combination of the cell and Verlet lists removes the N² dependence of the simple Verlet algorithm. The CPU time is given by

τ_c = cV_v ρ + c_v p_v c_l N.

Fig. N.41 shows that indeed the N² dependence is removed, but the resulting scheme is not more efficient than the cell list alone. This case study demonstrates that it is not simple to give a general recipe for which method to use. Depending on the conditions and number of particles, different algorithms are optimal. It is important to note that for a Molecular Dynamics simulation, the conclusions may be different.
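A cell-list construction of the kind compared above can be sketched as follows (our own Python illustration; the dictionary-based bookkeeping is an assumption, chosen for brevity rather than speed):

    import numpy as np

    def build_cell_list(positions, box, rc):
        """Assign particles to cells of side >= rc (sketch).

        Returns the number of cells per side and a dict mapping a cell index
        triple to the list of particle indices it contains; the method is only
        useful if there are at least 3 cells per side.
        """
        n_cell = max(int(box // rc), 1)
        size = box / n_cell
        cells = {}
        for i, r in enumerate(positions):
            key = tuple(np.floor(r / size).astype(int) % n_cell)
            cells.setdefault(key, []).append(i)
        return n_cell, cells

    def neighbor_candidates(key, n_cell, cells):
        """Particles in cell `key` and its 26 periodic neighbor cells."""
        out = []
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for dz in (-1, 0, 1):
                    k = ((key[0] + dx) % n_cell,
                         (key[1] + dy) % n_cell,
                         (key[2] + dz) % n_cell)
                    out.extend(cells.get(k, []))
        return out

Energy or force routines then loop only over the candidates returned for a particle's own cell, rather than over all N particles.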

Appendix O

Small research projects In this Appendix, we list a few small research projects. These projects involve the development of your own program. A possible strategy is to use the source code of one of the Case Studies as the starting point. It is our experience that the completion of such a research project takes about two weeks, depending on your experience.

O.1 Adsorption in porous media In this project, we will investigate the adsorption behavior in porous media. As a model we use a slit-like pore. The interactions with the pore are given by  0 0 0, σ > 0. Some questions that one should answer before one starts programming are the following:


1. Is the potential for the interactions with the walls appropriate for a Molecular Dynamics simulation?
2. What is the volume of the pore as a function of the parameter L?
3. What is the dimension of our problem? Do we have diffusion in 1, 2, or 3 dimensions?

In the first part of the project we study the diffusion in a smooth pore as defined by the above potential as a function of the pore diameter.
1. Compute the diffusion coefficient of a bulk Lennard-Jones liquid for ρ = 0.6 and T = 2.0 and 1.5. Since the program uses an N V E ensemble, it is not possible to simulate at exactly the requested temperature. However, one can ensure to be close to this temperature by an appropriate equilibration of the system (this is also the case for the following two questions) during the first part of the MD simulation.
2. Compute the density as a function of the distance from the center of the pore for ρ = 0.6, T = 2.0 and 1.5, and L = 5 and 2. Interpret the results.
3. Compute the diffusion coefficient for ρ = 0.6, T = 2.0 and 1.5, and L = 5, 2, and 1. Interpret the results. The interpretation is not trivial.
4. The above calculations have been performed using the N V E ensemble. This implies that there is no coupling with the atoms of the walls. In a real system the walls are not smooth and can exchange heat with the adsorbed molecules. A possible way of modeling this is to assume that we have an Andersen thermostat in the boundary layer with the wall. Investigate how the results depend on the thickness of the boundary layer and the constant ν of the Andersen algorithm.

The next step is to model the corrugation caused by the atoms. This corrugation could be a term:

U(z, r) = A sin²(πz/σw) exp[−((r − L)/L0)²],    (O.3.2)

where z is the distance to the wall, σw is a term characterizing the size of the atoms of the wall, and A is the strength of the interaction. The exponential is added to ensure that the potential is localized close to the walls of the cylinder. Investigate the diffusion coefficient as a function of the parameters σw and A, both in the N V E and in the Andersen thermostat cases.

O.4 Multiple-time-step integrators The time step in a Molecular Dynamics simulation strongly depends on the steepness of the potential energy surface. However, most potentials like the Lennard-Jones potential are steep at short distances. As short-range interactions can be computed very fast, it would be interesting to use a multiple-time-step integration algorithm, in which short-range (computationally cheap) interactions


are computed every time step and in which long-range (computationally expensive) interactions are evaluated every n time steps (n > 1). Recently, there has been a considerable effort to construct time-reversible multiple-time-step algorithms [117,126].
1. Why is it important to use time-reversible integration schemes in MD?
2. Modify Case Study 4 in such a way that pairwise interactions are calculated using a Verlet neighbor list. For every particle, a list is made of neighboring particles within a distance of rcut + Δ. All lists have to be updated only when the displacement of a single particle is larger than Δ/2. Hint: the algorithm1 in the book of Allen and Tildesley [21] is a good starting point.
3. Investigate how the CPU time per time step depends on the size of Δ for various system sizes. Compare your results with Table 5.1 from ref. [21].
4. Modify the code in such a way that the N V E multiple-time-step algorithm of ref. [126] is used to integrate the equations of motion. You will have to use separate neighbor lists for the short-range and the long-range parts of the potential.
5. Why does one have to use a switching function in this algorithm? Why is it a good idea to use a linear interpolation scheme to compute the switching function from ref. [126]?
6. Make a detailed comparison between this algorithm and the standard Leap-Frog integrator (with the use of a neighbor list) at the same energy drift.

O.5 Thermodynamic integration The difference in free energy between state A and state B can be calculated by thermodynamic integration:

F_A − F_B = ∫_{λ=0}^{λ=1} dλ ⟨∂U(λ)/∂λ⟩_λ,    (O.5.1)

in which λ = 1 in state A and λ = 0 in state B. In order to calculate the excess chemical potential of a Lennard-Jones system, we might use the following modified potential2 [785]:

U(r, λ) = 4ε[λ⁵(σ/r)¹² − λ³(σ/r)⁶].    (O.5.2)

Recall that the excess chemical potential is the difference in chemical potential between a real gas (λ = 1) and an ideal gas (λ = 0).
1. Make a plot of the modified LJ potential for various values of λ.

1 https://github.com/Allen-Tildesley/examples/blob/master/md_lj_vl_module.f90.
2 As the reader can easily verify, this modified potential is still a normal LJ 12-6 potential, but with different values of ε and σ.


2. Show that ∂²F/∂λ²