Statistical Mechanics and Entropy
Edited by: Olga Moreira
www.arclerpress.com
Statistical Mechanics and Entropy
Olga Moreira
Arcler Press
224 Shoreacres Road
Burlington, ON L7L 2H2
Canada
www.arclerpress.com
Email: [email protected]
eBook Edition 2022
ISBN: (eBook)
This book contains information obtained from highly regarded resources. Reprinted material sources are indicated. Copyright for individual articles remains with the authors as indicated, published under the Creative Commons License. A wide variety of references are listed. Reasonable efforts have been made to publish reliable data; views articulated in the chapters are those of the individual contributors, and not necessarily those of the editors or publishers. Editors or publishers are not responsible for the accuracy of the information in the published chapters or the consequences of their use. The publisher assumes no responsibility for any damage or grievance to persons or property arising out of the use of any materials, instructions, methods or thoughts in the book. The editors and the publisher have attempted to trace the copyright holders of all material reproduced in this publication and apologize to copyright holders if permission has not been obtained. If any copyright holder has not been acknowledged, please write to us so we may rectify it. Notice: Registered trademarks of products or corporate names are used only for explanation and identification without intent to infringe.
© 2022 Arcler Press ISBN: 978-1-77469-178-6 (Hardcover)
Arcler Press publishes a wide variety of books and eBooks. For more information about Arcler Press and its products, visit our website at www.arclerpress.com
DECLARATION Some content or chapters in this book are open-access, copyright-free published research works, published under the Creative Commons License and indicated with their citations. We are thankful to the publishers and authors of this content and these chapters, as without them this book wouldn't have been possible.
ABOUT THE EDITOR
Olga Moreira holds a Ph.D. in Astrophysics and a B.Sc. in Physics and Applied Mathematics. She is an experienced technical writer and researcher whose former fellowships include postgraduate positions at two of the most renowned European institutions in the fields of Astrophysics and Space Science (the European Southern Observatory and the European Space Agency). Presently, she is an independent scientist working on projects involving machine learning and neural network research, as well as peer-reviewing and editing academic books.
TABLE OF CONTENTS
List of Contributors .......................................................................................xv List of Abbreviations .................................................................................... xix Preface ...................................................................................................... xxiii Chapter 1
The Elusive Nature of Entropy and its Physical Meaning ........................... 1 Abstract ..................................................................................................... 1 Introduction ............................................................................................... 2 From Carnot’s Ingenious Reasoning of Reversible Cycles to Clausius’ Definition of Entropy ......................................................... 4 Clausius Equality (Entropy) and Inequality (Entropy Generation) ............. 6 Physical Meaning of Entropy: Thermal Displacement of Thermal Energy (Dynamic Thermal-Volume) .................................................. 9 Physical Meaning of Entropy: A Measure of Thermal Disorder.................. 13 Conclusions: Physical Meaning of Entropy ............................................... 16 References ............................................................................................... 20
Chapter 2
The Second Law and Entropy Misconceptions Demystified..................... 21 Abstract ................................................................................................... 21 Introduction ............................................................................................. 22 The Essence of “Mass-Energy, Entropy and the Second Law Fundamentals”........................................................................ 24 The Three Essential “Entropy and Second Law Misconceptions” ............... 27 Other Typical “Entropy and Second Law Misconceptions” ....................... 31 Further Comments ................................................................................... 37 References and Note................................................................................ 39
Chapter 3
Thermodynamics, Statistical Mechanics and Entropy .............................. 41 Abstract ................................................................................................... 41 Introduction ............................................................................................. 42 Thermodynamic Postulates ...................................................................... 45 The Laws of Thermodynamics .................................................................. 52 Macroscopic Probabilities for Classical Systems....................................... 53 The Boltzmann Entropy, Sb, for Classical Systems .................................... 55 The Gibbs Entropy, Sg, for Classical Systems ............................................. 57 The Canonical Entropy, Sc, for Classical Systems ..................................... 59 The Grand Canonical Entropy, Sgc, for Classical Systems .......................... 61 Quantum Statistical Mechanics................................................................ 63 First-Order Phase Transitions .................................................................... 64 Collections of Non-Interacting Objects .................................................... 67 Negative Temperatures ............................................................................. 70 Discussion ............................................................................................... 71 References and Notes .............................................................................. 72
Chapter 4
Entropy and its Correlations with Other Related Quantities ................... 77 Abstract ................................................................................................... 77 Introduction ............................................................................................. 78 Exergy And Entropy.................................................................................. 79 Unavailable Potential Energy Per Unit Environmental Temperature .......... 81 Conclusions ............................................................................................. 88 Acknowledgments ................................................................................... 89 References ............................................................................................... 90
Chapter 5
A Brief Review of Generalized Entropies................................................. 93 Abstract ................................................................................................... 93 Introduction ............................................................................................. 94 Generalized Entropies.............................................................................. 97 Hanel–Thurner Exponents ...................................................................... 109 Asymptotic Relation Between the HT Exponent C and the Diffusion Scaling Exponent..................................................... 112 Conclusions ........................................................................................... 115 Author Contributions ............................................................................. 116 Funding ................................................................................................. 116
Acknowledgments ................................................................................. 116 Appendix A ........................................................................................... 117 Appendix B............................................................................................ 117 Appendix C ........................................................................................... 118 References ............................................................................................. 120 Chapter 6
Beyond Boltzmann–Gibbs–Shannon in Physics and Elsewhere.............. 125 Abstract ................................................................................................. 125 Introduction ........................................................................................... 126 Non-Boltzmannian Entropy Measures and Distributions ........................ 129 Further Connections .............................................................................. 145 Conclusions and Perspectives ................................................................ 151 Funding ................................................................................................. 153 Acknowledgments ................................................................................. 153 References and Notes ............................................................................ 154
Chapter 7
The Gibbs Paradox: Lessons from Thermodynamics .............................. 173 Abstract ................................................................................................. 173 Introduction ........................................................................................... 174 Formulating the Gibbs Paradox in Thermodynamics ............................... 177 Formulating the Gibbs Paradox in Statistical Mechanics ......................... 186 References ............................................................................................. 197
Chapter 8
The Gibbs Paradox and Particle Individuality ....................................... 199 Abstract ................................................................................................. 199 Introduction ........................................................................................... 200 The Gibbs Paradox in Thermodynamics ................................................. 201 The Gibbs Paradox in Statistical Mechanics ........................................... 203 Proposed Solutions of the Statistical Paradox.......................................... 205 Quantum Mechanics ............................................................................. 215 Conclusions ........................................................................................... 220 References ............................................................................................. 222
Chapter 9
Mixing Indistinguishable Systems Leads to a Quantum Gibbs Paradox . 225 Abstract ................................................................................................. 225 Introduction ........................................................................................... 226
Results ................................................................................................... 228 Discussion ............................................................................................. 245 Methods ................................................................................................ 246 Acknowledgements ............................................................................... 246 Contributions ......................................................................................... 246 Supplementary Information.................................................................... 247 References ............................................................................................. 268 Chapter 10 Thermal Quantum Spacetime................................................................ 273 Abstract ................................................................................................. 273 Introduction ........................................................................................... 274 Background Independent Equilibrium Statistical Mechanics................... 277 Equilibrium Statistical Mechanics in Quantum Gravity .......................... 290 Conclusions and Outlook ...................................................................... 302 Acknowledgments ................................................................................. 303 References ............................................................................................. 306 Chapter 11 Thermodynamics, Stability and Hawking–Page Transition of Black Holes from Non-Extensive Statistical Mechanics in Quantum Geometry .............................................................................. 311 Abstract ................................................................................................. 311 Introduction ........................................................................................... 312 Quantum Theory in LQG ....................................................................... 315 Nonextensive Entropy ............................................................................ 317 Bekenstein-Hawking and Tsallis Entropy for Black Hole ......................... 318 Thermodynamic Stability of Black Holes ................................................ 320 Conclusion ............................................................................................ 324 References ............................................................................................ 325 Chapter 12 Gravitational Entropy and Inflation....................................................... 327 Abstract ................................................................................................. 327 Introduction ........................................................................................... 328 A Flat Universe Model With Ideal Gas and LIVE ..................................... 330 Measures of Gravitational Entropy ......................................................... 333 Gravitational Entropy in the Plane-Symmetric Bianchi Type I Universe... 335 Gravitational Entropy in LTB Models With LIVE ...................................... 336
A Model of Entropy Generation At the End of The Inflationary Era .......... 339 The Concept of a Maximal Entropy For the Universe .............................. 345 Entropy Gap and the Arrow of Time ....................................................... 348 Conclusions ........................................................................................... 350 References ............................................................................................. 352 Chapter 13 Statistical Mechanics and Thermodynamics of Viral Evolution ............. 355 Abstract ................................................................................................. 355 Introduction ........................................................................................... 356 Methods ................................................................................................ 357 This is the Entry Target for Regularcells Way of Entering It ...................... 365 Results and Discussion .......................................................................... 370 Conclusions ........................................................................................... 383 Acknowledgments ................................................................................. 385 References ............................................................................................. 386 Chapter 14 Energy, Entropy and Exergy in Communication Networks .................... 389 Abstract ................................................................................................. 389 Introduction ........................................................................................... 390 Analogies Between Communication and Thermodynamic Systems ........ 392 Exergy in Communication Networks ...................................................... 401 Conclusions ........................................................................................... 409 Acknowledgments ................................................................................. 410 References ............................................................................................. 411 Chapter 15 Statistical Mechanics of Zooplankton .................................................... 415 Abstract ................................................................................................ 415 Introduction ........................................................................................... 416 Statistical Mechanics and Thermodynamics ........................................... 418 Materials and Methods .......................................................................... 419 Results ................................................................................................... 421 Outlook ................................................................................................. 425 Acknowledgments ................................................................................. 427 References ............................................................................................. 428
Chapter 16 Statistical Thermodynamics of Economic Systems ................................. 431 Abstract ................................................................................................. 431 Introduction ........................................................................................... 432 Statistical Thermodynamics .................................................................... 434 Hypothetical Economic Systems ............................................................ 437 Phase Transitions.................................................................................... 444 Discussion and Conclusions .................................................................. 446 Acknowledgments ................................................................................. 448 References ............................................................................................. 449 Index ..................................................................................................... 451
LIST OF CONTRIBUTORS Milivoje M. Kostic Department of Mechanical Engineering, Northern Illinois University, DeKalb, IL 60115, USA Robert H. Swendsen Physics Department, Carnegie Mellon University, Pittsburgh, PA 15213, USA Jing Wu School of Energy and Power Engineering, Huazhong University of Science & Technology, Wuhan 430074, China Zengyuan Guo Key Laboratory for Thermal Science and Power Engineering of Ministry of Education, Department of Engineering Mechanics, Tsinghua University, Beijing 100084, China José M. Amigó Centro de Investigación Operativa, Universidad Miguel Hernández, Avda. de la Universidad s/n, 03202 Elche, Spain Sámuel G. Balogh Department of Biological Physics, Eötvös University, H-1117 Budapest, Hungary Sergio Hernández HCSoft Programación S.L., 30007 Murcia, Spain Constantino Tsallis Centro Brasileiro de Pesquisas Físicas and National Institute of Science and Technology for Complex Systems–Rua Dr. Xavier Sigaud 150, Rio de Janeiro 22290-180, Brazil Santa Fe Institute–1399 Hyde Park Road, Santa Fe, NM 87501, USA Complexity Science Hub Vienna–Josefstädter Strasse 39, 1080 Vienna, Austria Janneke Van Lith Department of Philosophy and Religious Studies, Utrecht University, Janskerkhof 13, 3512 BL Utrecht, The Netherlands
Dennis Dieks History and Philosophy of Science, Utrecht University, 3508 AD Utrecht, The Netherlands Benjamin Yadin School of Mathematical Sciences and Centre for the Mathematics and Theoretical Physics of Quantum Non-Equilibrium Systems, University Park, University of Nottingham, Nottingham, UK Wolfson College, University of Oxford, Oxford, UK Benjamin Morris School of Mathematical Sciences and Centre for the Mathematics and Theoretical Physics of Quantum Non-Equilibrium Systems, University Park, University of Nottingham, Nottingham, UK Gerardo Adesso School of Mathematical Sciences and Centre for the Mathematics and Theoretical Physics of Quantum Non-Equilibrium Systems, University Park, University of Nottingham, Nottingham, UK Isha Kotecha Max Planck Institute for Gravitational Physics (Albert Einstein Institute), Am Mühlenberg 1, 14476 Potsdam-Golm, Germany Institut für Physik, Humboldt-Universität zu Berlin, Newtonstraße 15, 12489 Berlin, Germany K. Mejrhit LHEP-MS, Department of Physics, Faculty of Science, Mohamed V University, Rabat, Morocco S-E. Ennadifi LHEP-MS, Department of Physics, Faculty of Science, Mohamed V University, Rabat, Morocco Øystein Elgarøy Institute of Theoretical Astrophysics, University of Oslo, Oslo N-0315, Norway Øyvind Grøn Oslo and Akershus University College of Applied Sciences, Faculty of Engineering, Olavs plass, Oslo N-0130, Norway Barbara A. Jones Almaden Research Center, IBM, San Jose, California, United States of America
Justin Lessler Department of Epidemiology, Johns Hopkins Bloomberg School of Public Health, Baltimore, Maryland, United States of America Simone Bianco Almaden Research Center, IBM, San Jose, California, United States of America James H. Kaufman Almaden Research Center, IBM, San Jose, California, United States of America Slavisa Aleksic Institute of Telecommunications, Vienna University of Technology, Favoritenstr. 9-11/ E389, 1040 Vienna, Austria Peter Hinow Department of Mathematical Sciences, University of Wisconsin—Milwaukee, Milwaukee, Wisconsin, United States of America Ai Nihongi School of Freshwater Sciences, Global Water Center, University of Wisconsin— Milwaukee, Milwaukee, Wisconsin, United States of America J. Rudi Strickler School of Freshwater Sciences, Global Water Center, University of Wisconsin— Milwaukee, Milwaukee, Wisconsin, United States of America Hernando Quevedo Instituto de Ciencias Nucleares, Universidad Nacional Autónoma de México, AP 70543, 04510 México, DF, Mexico Dipartimento di Fisica and ICRA, Università di Roma La Sapienza, 00185 Roma, Italy María N. Quevedo Departamento de Matemáticas, Universidad Militar Nueva Granada, Carrera 11 No. 101-80, 110111 Bogotá, DE, Colombia
LIST OF ABBREVIATIONS
ASE – Amplified Spontaneous Emission
BHEL – Bekenstein-Hawking entropy law
BH – Black Hole
GHG – Greenhouse Gas
ICT – Information and Communication Technology
LTB – Lemaitre-Tolman-Bondi
LQG – Loop Quantum Gravity
LIVE – Lorentz Invariant Vacuum Energy
MD – Maxwell’s demon
PREFACE
The concept of entropy appears in several contexts and intersects many disciplines: thermodynamics, statistical mechanics, information theory, dynamical systems, quantum mechanics, biology, cosmology and more. It has been defined as a measure of different properties such as disorder, uncertainty, randomness, complexity, etc. Thermodynamic entropy, however, is not related to spatial, form, or functional disorder; it is a measure of thermal disorder. It is related solely to thermal energy and its heat transfer, and not to any other randomness. Thermodynamic entropy is always generated and cannot be destroyed by any means, at any scale, without exception. Although the second law of thermodynamics is often challenged in biology, information theory and the social sciences, the “impossibility of entropy reduction by destruction should not be confused with local entropy decrease due to entropy outflow with heat” (see Chapter 2). This edited book starts with a reflection on the proper definition of thermodynamic entropy, and with the debunking of some claims of second law anomalies and violations. Although thermodynamic entropy is a macro-thermal concept (not a statistical nor probabilistic concept), statistical modelling can be used to describe it. Boltzmann was the first to express entropy as a logarithmic measure of all possible thermal microstates corresponding to a related macro-state. Later, Gibbs generalized entropy as a logarithmic measure of the sum of the uncertainties of all microstates. Other generalized entropies have been defined to quantify other types of disorders; however, these should not be confused with the classical thermodynamic entropy. The book overviews several generalized Boltzmann-Gibbs entropies such as: Kolmogorov–Sinai entropy (dynamical systems), Tsallis entropy (statistical physics), canonical entropy (canonical ensemble), Rényi entropy and Shannon entropy (information theory). It also reviews the laws of thermodynamics as well as postulates such as Callen’s postulates, and axioms such as the Shannon-Khinchin axioms. The Gibbs paradox (entropy of mixing), Shannon entropy, gravitational entropy, and the application of statistical mechanics concepts in other research fields such as cosmology, biology and information theory are also overviewed and discussed in the remaining part of this edited book. To summarize, this book is divided into four dominant topics in thermodynamics and statistical mechanics:
Physical meaning of entropy and its generalization:
– “The Elusive Nature of Entropy and its Physical Meaning” - a reflection on the physical meaning of thermodynamic entropy.
– “The Second Law and Entropy Misconceptions Demystified” - a debunking of claims of second law of thermodynamics violations and anomalies.
– “Thermodynamics, Statistical Mechanics and Entropy” - an overview of thermodynamic entropy, Boltzmann-Gibbs entropy, canonical entropy, and grand canonical entropy for classical and quantum statistical mechanics systems.
– “Entropy and Its Correlations with Other Related Quantities” - an overview of correlations between entropy and other related quantities.
– “A Brief Review of Generalized Entropies” - a review of generalized Boltzmann-Gibbs-Shannon entropies.
– “Beyond Boltzmann-Gibbs-Shannon in Physics and Elsewhere” - a review of non-Boltzmannian entropy measures and generalized entropies.
Gibbs Paradox:
– “The Gibbs Paradox: Lessons from Thermodynamics” - a review of the Gibbs paradox and entropy of mixing.
– “The Gibbs Paradox and Particle Individuality” - a discussion and analysis of the problem of the discontinuous drop in the entropy of mixing when the mixed gases become equal to each other.
– “Mixing Indistinguishable Systems Leads To A Quantum Gibbs Paradox” - an overview of the Gibbs paradox, entropy of mixing, and their quantum counterparts.
Extension of statistical mechanics entropy to quantum and gravitational entropy:
– “Thermal Quantum Spacetime” - an intersection between equilibrium statistical mechanics and thermodynamics, quantum theory, and general relativity.
– “Thermodynamics, Stability and Hawking–Page Transition of Black Holes From Non-extensive Statistical Mechanics In Quantum Geometry” - a calculation of Bekenstein-Hawking entropy in loop quantum gravity based on the formula of Shannon entropy.
– “Gravitational Entropy and Inflation” - an overview of different measures of gravitational entropy related to the Weyl curvature (i.e., Weyl entropy).
Applications of statistical mechanics fundamental concepts to other research fields:
– “Statistical Mechanics and Thermodynamics of Viral Evolution” - an application of statistical mechanics to the study of the life cycle of viruses. It models viral infection and evolution using the grand canonical ensemble entropy.
– “Energy, Entropy and Exergy in Communication Networks” - an analysis of energy, entropy, and exergy flows through communication networks. It provides an analogy between communication networks and thermodynamic systems.
– “Statistical Mechanics of Zooplankton” - an application of statistical mechanics to the study of the crustacean zooplankton Daphnia pulicaria behaviour.
– “Statistical Thermodynamics of Economic Systems” - a formulation of the thermodynamics of economic systems in terms of an arbitrary probability distribution for a conserved economic quantity.
CHAPTER 1
THE ELUSIVE NATURE OF ENTROPY AND ITS PHYSICAL MEANING
Milivoje M. Kostic Department of Mechanical Engineering, Northern Illinois University, DeKalb, IL 60115, USA
ABSTRACT
Entropy is the most used and often abused concept in science, but also in philosophy and society. Further confusions are produced by some attempts to generalize entropy with similar but not the same concepts in other disciplines. The physical meaning of phenomenological, thermodynamic entropy is reasoned and elaborated by generalizing Clausius definition with inclusion of generated heat, since it is irrelevant if entropy is changed due to reversible heat transfer or irreversible heat generation. Irreversible, caloric heat transfer is introduced as complementing reversible heat transfer. It is also reasoned and thus proven why entropy cannot be destroyed but is always generated (and thus over-all increased) locally and globally, at every space and time scale, without any exception. It is concluded that entropy is a thermal displacement (dynamic thermal-volume) of thermal energy due to absolute temperature as a thermal potential (dQ = TdS), and thus associated with thermal heat and absolute temperature, i.e., distribution of thermal energy within thermal micro-particles in space. Entropy is an integral measure of (random) thermal energy redistribution (due to heat transfer and/or irreversible heat generation) within a material system structure in space, per absolute temperature level: dS = dQSys/T = mCSysdT/T, thus a logarithmic integral function, with J/K unit. It may also be expressed as a measure of “thermal disorder,” being related to the logarithm of the number of all thermal, dynamic microstates W (their positions and momenta), S = kB ln W, or to the sum of their logarithmic probabilities, S = −kB Σ pi ln pi, that correspond to, or are consistent with, the given thermodynamic macro-state. The number of thermal microstates, W, is correlated with the macro-properties temperature T and volume V for ideal gases. A system’s form and/or functional order or disorder are not (thermal) energy order/disorder, and the former is not related to thermodynamic entropy. Expanding entropy to any type of disorder or information is a source of many misconceptions. Granted, there are certain benefits of simplified statistical descriptions to better comprehend the randomness of thermal motion and related physical quantities, but the limitations should be stated so the generalizations are not overstretched and the real physics overlooked, or worse, discredited.
Keywords: caloric process; Carnot cycle; Clausius (in)equality; entropy; entropy generation; heat transfer; microstates number; statistical entropy; thermal energy; thermal interactions; thermal motion
Citation: Kostic, M. M. (2014). The elusive nature of entropy and its physical meaning. Entropy, 16(2), 953–967. Copyright: © 2014 Kostic. Licensee MDPI, Basel, Switzerland. This is an open access article distributed under the Attribution 3.0 Unported (CC BY 3.0) license: https://creativecommons.org/licenses/by/3.0/.
INTRODUCTION What is the underlying nature of “entropy” and why does it always increase? Why is entropy so intriguing and mysterious, unique and universal, as if it is a miraculous property of natural, material systems? How does it encompass and quantify all processes at all natural space and time scales, governed by the Second Law of Thermodynamics? And many other elusive and debatable issues, as if entropy is among the deepest unresolved mysteries in nature, defying our common sense. Entropy is the most used and often abused concept in science, but also in philosophy and society. Further confusions are produced by some attempts to generalize entropy with similar, but not the same concepts in other disciplines. Von Neumann once remarked that
“whoever uses the term ‘entropy’ in a discussion always wins since no one knows what entropy really is, so in a debate one always has the advantage.” The historian of science and mathematician, Truesdell, explains in his essay on Method and Taste in Natural Philosophy: “Heads have split for a century trying to define entropy in terms of other things. Entropy, like force, is an undefined object, and if you try to define it, you will suffer the same fate as the force-definers of the seventeenth and eighteenth centuries.” The objective here is to reason and explain the elusive nature and physical meaning of entropy, and to put certain physical and philosophical concepts in perspective. Only two seminal references and two related publications by the author, in addition to three popular references to illustrate certain misconceptions, are cited. The classical, phenomenological thermodynamics is nowadays unjustifiably marginalized, and some modern scientists even consider classical thermodynamics as an obsolete relic. Often, mostly due to a lack of deeper comprehension, thermodynamics is considered as an engineering subject, and thus not as the most fundamental science of energy and nature. The phenomenological thermodynamics has the supremacy over other disciplines, due to its logical reasoning based on the fundamental laws and without regard to the system’s complex dynamic structure, and even more complex related interactions. There are many puzzling issues surrounding thermodynamics and the nature of heat, including subtle definitions and ambiguous meanings of very fundamental concepts. In modern times, there is a tendency by some scientists to unduly discredit thermal energy as being indistinguishable from other internal energy types. Romer [1] argues that “Heat is not a noun,” and proposes to remove it from the dictionary. Ben-Naim [2] titles his book “A Farewell To Entropy,” while Leff [3], in a series of articles entitled “Removing the Mystery of Entropy and Thermodynamics,” argues
surprisingly, that “Entropy can be introduced and understood without ever mentioning heat engines,” and against the “thermal energy” concept in general. Some even consider the tabulated Thermodynamic Internal Energy (u = U/m, per unit mass) to be thermal energy, although it represents all energy types stored as kinetic and potential energy of the constituent microstructure, thus thermal and mechanical elastic energy in simple compressive substances, in addition to chemical and nuclear internal energies. In more complex system structures there may be more energy types. Entropy is related to thermal motion of a system microstructure; the latter gives rise to all thermal phenomena and related properties, namely, temperature, thermal or heat capacity, thermal energy and entropy, among others. Due to conversion of thermal energy to other energy forms, like mechanical work in a heat engine, and also spontaneous and unavoidable dissipation of all other energy forms to thermal energy via so-called heat generation, additional issues and often confusions arise. However, entropy is a well-defined thermodynamic property, accurately measured, tabulated and/or correlated, for practical use in engineering and science. It needs to be further illuminated as needed, but should not be misrepresented as something it might be or is not.
FROM CARNOT’S INGENIOUS REASONING OF REVERSIBLE CYCLES TO CLAUSIUS’ DEFINITION OF ENTROPY
Sadi Carnot (1824) laid ingenious foundations for the Second Law of Thermodynamics and the discovery of Entropy before the First Law of energy conservation was even known (Joule, 1843), and long before thermodynamic concepts were established in the second half of the nineteenth century. In historical context, it is hard to comprehend now how Carnot then, at age 28, ingeniously and fully explained the critical concepts of reversible thermo-mechanical processes and the limits of converting heat to work at the inception of the heat engines’ era, when the nature of heat was not fully understood. No wonder that Sadi Carnot’s “Réflexions sur la puissance motrice du feu” (“Reflections on the Motive Power of Fire” [4]), the original treatise published in 1824, was not noticed at his time; his ingenious reasoning of ideal heat engine reversible cycles is still not fully recognized, and may be truly comprehended by only a few, even nowadays.
It was later comprehended that entropy is a state function, a material property conserved in ideal, reversible cycles (Clausius Equality: entropy property), and that entropy could not be destroyed, but is always generated (locally and globally, thus overall increased) due to dissipation of any and all work potentials to heat, causing generation of entropy in irreversible cycles (Clausius Inequality: entropy generation); thereby quantifying all reversible and irreversible processes and providing a generalization of the Second Law of Thermodynamics. Note that Carnot erroneously assumed that the same caloric (heat) passes through the engine and extracts (produces) work by lowering its temperature, similar to how water passing through a waterwheel produces work by lowering its elevation potential. This error, considering the knowledge at the time, in no way diminishes Carnot’s ingenious reasoning and conclusions about limiting, reversible processes and his accurate limitations of heat to work conversion [5]. The consequence of a process and cycle reversibility is most ingenious and far-reaching, see elsewhere [5]. Carnot’s simple and logical reasoning, that mechanical work is extracted in a heat engine due to the caloric (heat) passing from high to low temperature, led him to a very logical conclusion that any heat transfer from higher to lower temperature (like in a heat exchanger), without extracting possible work (like in a reversible heat engine), will be an irreversible loss of work potential [4,5]. Then he expanded his logical reasoning to conclude that all reversible (ideal) heat engines working between the same heat reservoirs must have equal and maximum efficiency; otherwise, if a more efficient engine were coupled with another, the impossible “perpetual motion” would be achieved. The Carnot conclusion, that this maximum efficiency depends on the two reservoir temperatures only, is functionally expressed by Equation (1):
ηmax = WRev/QH = f(TH, TL)    (1)
What a simple and logical, ingenious reasoning! The maximum, limiting efficiency does not depend on the heat engine medium or its design, but only depends on (and increases with) the temperature difference between the heat source and heat sink, similar to the water wheel output dependence on the waterfall height difference.
After the First Law of energy conservation was later established (Joule, 1843), the maximum Carnot efficiency could be expressed as η = W/QH = (QH − QL)/QH = [f(TH) − f(TL)]/f(TH), which led to a very important correlation deduced for a reversible Carnot cycle between any two temperature levels, say T1 and T2, i.e.,
Q1/Q2 = f(T1)/f(T2), or Q/f(T) = constant    (2)
The above function f(T) could be arbitrary, but must be non-negative and increasing; it defines a thermodynamic temperature scale, say f(T) = T (the simplest such non-negative and increasing function), independent of the system used for measurement (i.e., the thermometer). This simple thermodynamic temperature scale happened to be the same as the absolute temperature scale obtained with the ideal gas law [5]. The Carnot ratio equality, Q/T = constant (named here in Carnot’s honor, Equation (2)), is probably the most important equation in Thermodynamics and among the most important equations in natural sciences. Carnot’s ingenious reasoning opened the way to the generalization of reversible processes, to the definition of absolute thermodynamic temperature, and to a new thermodynamic material property, “Entropy,” as well as the Gibbs Free Energy, one of the most important Thermodynamic functions for the characterization of electro-chemical systems and their equilibriums, thus resulting in the formulation of the universal and far-reaching Second Law of thermodynamics [5,6].
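To make Equations (1) and (2) concrete, here is a worked illustration added for this edition, with f(T) = T and arbitrarily chosen reservoir temperatures and heat quantity (not values from the article):

```latex
\eta_{max} = 1 - \frac{T_L}{T_H} = 1 - \frac{300~\mathrm{K}}{600~\mathrm{K}} = 0.5, \qquad
\frac{Q_H}{T_H} = \frac{1000~\mathrm{J}}{600~\mathrm{K}} = \frac{Q_L}{T_L}
\ \Rightarrow\ Q_L = 500~\mathrm{J}, \quad W_{Rev} = Q_H - Q_L = 500~\mathrm{J}.
```

Even a perfect engine must reject half of the source heat to the 300 K sink; the conserved Carnot ratio QH/TH = QL/TL (here 5/3 J/K) is what the next section identifies as the entropy exchanged in a reversible cycle.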
CLAUSIUS EQUALITY (ENTROPY) AND INEQUALITY (ENTROPY GENERATION)
Another important consequence of the Carnot ratio equality, Equation (2), for a Carnot cycle working between the two different, but constant temperature thermal-reservoirs, TH and TL < TH, is:
QH/TH = QL/TL, i.e., QH/TH − QL/TL = 0    (3)
Or, in general, a reversible cycle working between variable temperature thermal-reservoirs, see Figure 1, could be accomplished with infinitely many infinitesimal Carnot cycles, each exchanging infinitesimal heat δQ at the corresponding reservoir temperature T. Summing Equation (3) over all the infinitesimal Carnot cycles, in the limit, we obtain the following equation:
Figure 1. Variable temperature reservoirs require multi-stage Carnot cycles.
∮ (δQ/T)Rev = 0    (4)
Equation (4) is the well-known Clausius equality (more about the Clausius inequality below), which implies that the integral of δQRev/T is independent of the process path between any two given points A and B, thus defining a new state function, entropy, namely (compare with Equation (4)):
SB − SA = ∫AB (δQ/T)Rev    (5)
Let us reiterate: The reversible process/cycle equivalency, deduced by Sadi Carnot, has resulted in the Clausius equality (Equation (4)) of any reversible cycle between any two given reservoirs’ temperatures,
regardless of the cycle design and working medium. This equivalency has allowed the definition of the absolute Thermodynamic temperature, the deduction, and thus proof based on Carnot’s reasoning, of the Clausius equality, Equation (4), and the definition of a new material property, entropy, up to an arbitrary reference value, Equation (5). An additional consequence of Carnot’s reasoning is that non-reversible (irreversible) cycles must have smaller efficiency than the corresponding reversible cycles: otherwise, if equal they would be reversible, and if bigger they would be impossible, hence allowing deduction (and thus proof) of the Clausius inequality based on the same Carnot’s subtle reasoning. We have to keep in mind that Carnot’s reasoning is based on the impossibility of making an autonomous machine (to spontaneously produce work on its own), i.e., the impossibility of a more efficient cycle than a related reversible cycle, which would be destroying entropy. It, in turn, is equivalent to the deduced Clausius inequality (see next), the latter in limit being an equality for reversible cycles. Namely, as reasoned by Carnot and reemphasized above and elsewhere [5], for real irreversible (Irr) cycles the work output is smaller than for reversible cycles (Rev; otherwise the Irr cycle would be reversible or impossible), i.e., for everything else being the same, WIrr < WRev. Therefore, we can reason the proof of the well-known Clausius inequality as follows:
WIrr < WRev ⇒ QL,Irr > QL,Rev ⇒ ∮ (δQ/T)Irr < 0    (6)
Considering the inequality, Equation (6), and expanding on the derivation of the Clausius equality, Equation (4) (recall that T is the non-negative absolute temperature), we have the following for reversible and irreversible cycles, ∮ (δQ/T)Rev = 0 and ∮ (δQ/T)Irr < 0, or for both:
∮ δQ/T ≤ 0    (7)
Everything reasoned above is deduced from the impossibility of having a more efficient cycle than a related reversible one, equivalent to the “impossibility of heat being transferred spontaneously (without any external influence) from lower to higher temperature,” i.e., the impossibility of constructing any device to achieve it in an autonomous process (a process
without any external influence). The new thermodynamic property, entropy, was thus defined by the Clausius equality, Equations (4) and (5) [6]. The cyclic Clausius inequality, Equation (7), states that the net influx of the new quantity, entropy, within a cyclic process must be negative (i.e., a net entropy outflow), since the system comes back to the initial state (and thus the same all properties) after completing the cycle. This implies that all real, irreversible cyclic processes must generate (produce) the new property entropy, equal to the generated heat from lost cycle work potential per relevant absolute temperature, which in the limit is zero, thus entropy being conserved in reversible processes. Therefore, it would be impossible to have a cyclic process that destroys entropy, since it would be equivalent to the spontaneous heat transfer from a colder to a hotter body against a thermal forcing, or to producing work from heat within a single reservoir, from within an equilibrium without a due forcing. The ‘process forcing’ has to be ‘uniquely directional,’ and so is the process energy transfer. It cannot be one way or the opposite way at will, thus defying directionality of forcing and existence of stable equilibrium. Similar reasoning has been further extended to all types of energy processes, thus establishing universality of the Second Law of entropy generation and energy degradation.
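For a numerical feel of the inequality, Equation (7), consider a purely caloric transfer of Q = 1000 J from a reservoir at TH = 500 K to one at TL = 250 K (illustrative values added here, not from the article):

```latex
S_{gen} = \frac{Q}{T_L} - \frac{Q}{T_H}
        = \frac{1000~\mathrm{J}}{250~\mathrm{K}} - \frac{1000~\mathrm{J}}{500~\mathrm{K}}
        = +2~\mathrm{J/K} > 0.
```

A reversible engine between the same reservoirs would instead extract WRev = Q(1 − TL/TH) = 500 J with no entropy generation; the 2 J/K is exactly that lost work potential per the sink temperature, WRev/TL = 500 J / 250 K, anticipating the caloric-process discussion in the next section.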
PHYSICAL MEANING OF ENTROPY: THERMAL DISPLACEMENT OF THERMAL ENERGY (DYNAMIC THERMAL-VOLUME)
During reversible, boundary heat transfer (δQBry) without work interactions, thus no volume (V) change for a simple compressible substance system (Sys), the system temperature within a volume will be uniform and equal to its boundary (Bry) temperature at any instant (TBry = TSys = T), i.e.:
dSSys = δQBry/TBry = δQSys/TSys    (8)
There is a tacit distinction between entropy transfer across the boundary surface Bry (entropy flux, δSBry = ṠBry dt) and change (e.g., storage) of the entropy property within the material system, dSSys (in time dt), that occupies space volume V; the two are the same only for reversible processes, Equation (8). We also note that process irreversibility is associated with a
material system occupying a volume (it is a volumetric phenomenon) and not with the boundary surface (entropy flux), which has no thickness; thus the distinction between the differentials ‘δ’ for boundary flux and ‘d’ for system property. There is an important peculiarity about heat transfer processes without any work interactions (like within heat exchangers): no heat conversion to work like in a heat engine, and no other than ‘thermal work potential’ dissipation and heat generation; therefore the thermal energy (like the original caloric) is conserved on its own. We like to name such processes, without work interactions, as “caloric processes” or caloric heat transfer. We also define reversible heat transfer at an infinitesimal temperature difference, as an ideal limiting case, achieved by an ideal Carnot cycle, when, instead of being dissipated as heat like in the above caloric processes, the work potential is extracted while the heat is transferred at a diminishing temperature difference at each temperature level (dT → 0) [7]. Therefore, during the reversible heat transfer from higher to lower temperature reservoirs, see Figure 2 (left), the work potential is extracted and less heat is reversibly transferred (QRev,2 = Q1T2/T1) to the reservoir at lower temperature (consider it to be a system, T = TSys = T2), reduced by the amount of the extracted work (WRev = Q1 − QRev,2), so the entropy is conserved since no entropy was generated in the reversible cycle (note QRev/T = constant).
Figure 2. During a caloric heat transfer process the work potential, WRev, is completely dissipated into generated heat at a lower temperature, QDiss, which after being added to the reduced reversible heat at lower temperature, QRev, will result in conserved heat or thermal energy, QCal = QRev + QDiss, with increased, generated entropy in the amount of dissipated work potential per relevant absolute temperature. On the right, a variable system temperature T2 < T < T1 is presented.
However, during a caloric process the work potential is completely dissipated into self-generated heat at the lower temperature (QDiss = WRev), which, after being added to the equally reduced reversible heat at the lower temperature (QRev), will result in conserved heat or thermal energy (QCal = QRev + QDiss = const; i.e., a caloric process) with increased, generated entropy in the amount of dissipated work potential per relevant absolute temperature, see Figure 2 (left). Since entropy is a state function, independent of the process path per the Clausius equality, it may be deduced that the system entropy increase due to irreversible heat generation is the same as if that heat were transferred from an internal (within) boundary, the same as the entropy increase during the corresponding, reversible heat transfer. Therefore, for irreversible caloric heat transfer, the system entropy change may be conveniently measured and calculated in the same manner as for reversible heat transfer, i.e.,
dSSys = δQCal/TSys    (9)
During caloric heating (i.e., a system isochoric process without work interactions), the system thermal or heat capacity, CV,Sys, will govern its temperature and entropy increase, regardless of the heat source temperature; the latter, if higher, will cause the lesser source entropy change, resulting in source-system process irreversibility with entropy generation. Furthermore, based on the above reasoning, since any dissipated work type may be considered as a (generated) heat source for the system, the Clausius equality may be generalized for any process, including work interactions, as follows:
dSSys = (δQBry + δQGen)/TSys    (10)
Note that QGen = QDiss = WLoss also includes the Carnot work-potential loss during ‘caloric’ heat transfer, so that the boundary entropy transfer could be ‘considered to be reversible’ as in
reversible processes (no ‘volumetric-process’ entropy generation through a boundary surface). Therefore, the system entropy change is equal to all thermal heat stored within the system, δQSys = δQBry(Rev) + δQGen, per relevant, absolute system temperature TSys (at the location where the heat is stored within a system), since it is irrelevant if entropy is changed due to reversible heat transfer or irreversible heat generation, i.e.,
dSSys = δQSys/TSys = (δQBry(Rev) + δQGen)/TSys    (11)
Note that for reversible processes, WLoss = QGen = 0, then QSys = QBry = QRev, and for caloric processes QSys = QCal. Furthermore, for a reversible adiabatic process, QSys = QRev + QGen = 0, thus an isentropic process. However, for an irreversible adiabatic process, there will be an irreversible loss of work potential within (instead of being extracted or stored), which will dissipate and convert to heat, i.e., generate heat (and thus entropy) within, and be stored within the system, QSys = WLoss = QGen > 0, thus resulting in irreversible entropy increase (Equation (11)). Accounting for all irreversibly-generated heat may be elaborate, except for reversible processes (being zero) and caloric processes (caloric heat conserved), Equation (9). Entropy is associated with the system heat stored and not with the work transferred, only with the work lost within the system, and thus generated heat, as if it were transferred from some internal, thermal boundary within. As already stated, it is irrelevant if the heat is stored (as thermal energy) within the system by (reversible) boundary heat transfer or generated within. For a compressible ideal gas (PV = mRT), with constant specific heats CV and CP = kCV, and the gas constant R = CP − CV, the entropy change, and thus entropy, is readily obtained from the Clausius equality and the First Law of energy conservation, within a reference (Ref) constant, i.e.,
S − SRef = m[CV ln(T/TRef) + R ln(V/VRef)]    (12)
As seen from Equation (12), entropy increases with an increase of temperature and volume, thus it could be considered as a “thermal dynamic-volume” or general “thermal displacement.” However, during the reversible change of volume due to work transfer, temperature will change accordingly, and there will be no change of entropy, since the latter is not associated with
work, but only due to heat transfer, if any. In Statistical Thermodynamics it is shown that entropy could be derived (for simple systems like a monoatomic ideal gas) as a function of thermal-motion momenta (thus a function of temperature) and space positions (thus a function of volume) of the microscopic particles of the material system. Therefore, δQSys = TSysdSSys = dUThermal is valid in general, and it demonstrates that entropy is a thermal displacement (dynamic thermal-volume) of thermal energy, i.e., “system heat,” due to temperature as a thermal potential. It is similar to physical (mechanical) volume being the displacement for mechanical energy, δWSys(IN) = −δWSys(OUT) = −PSysdVSys = dUMechanical, due to pressure as a mechanical (elastic) potential. Thus, the differential change of internal energy, dU = [TdS − PdV]Sys = dUThermal + dUMechanical, to be elaborated in another manuscript, may be resolved into thermal and mechanical components. Note that a change of temperature alone (without entropy change) will not cause heat transfer (as during isentropic expansion when mechanical energy is extracted, the latter not related to entropy); in the same way, a change of force (pressure) without displacement (volume change) will not cause work transfer. The stored system heat increases the system thermal energy, which is distinguished from the system internal, mechanical (elastic) energy, a subject of another manuscript. For example, heating or compressing an ideal gas with the same amount of energy will result in the same temperature and internal energies, but different states, with different volumes and entropies, and similarly for other material substances. In conclusion, entropy is a thermal displacement, an integral measure of (random) thermal energy redistribution (due to heat transfer and/or irreversible heat generation due to energy degradation) within a system mass and space, per absolute temperature level: dS = dSSys = δQSys/TSys, with J/K unit. Note that the adjective ‘thermal’ is critically important, since similar but non-thermal phenomena are not related to thermodynamic entropy.
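The closing ideal-gas example above can be checked numerically. The short Python sketch below is added here for illustration (it is not part of the original article, and the air-like property values R and cv are assumptions); it evaluates Equation (12) for two paths to the same final temperature, and thus the same internal energy: caloric (isochoric) heating versus reversible adiabatic compression.

```python
import math

# Illustrative ideal-gas properties, roughly those of air (assumed values)
R = 287.0          # gas constant, J/(kg K)
cv = 718.0         # constant-volume specific heat, J/(kg K)
k = (cv + R) / cv  # specific heat ratio cp/cv, since cp = cv + R

def ds_ideal_gas(T1, v1, T2, v2):
    """Specific entropy change per Equation (12): cv*ln(T2/T1) + R*ln(v2/v1)."""
    return cv * math.log(T2 / T1) + R * math.log(v2 / v1)

T1, v1, T2 = 300.0, 1.0, 400.0  # initial state (K, m^3/kg) and final temperature (K)

# Path A: caloric (isochoric) heating; entropy is stored along with the heat
ds_heating = ds_ideal_gas(T1, v1, T2, v1)

# Path B: reversible adiabatic compression to the same final temperature;
# the isentropic relation T*v^(k-1) = const fixes the final specific volume
v2 = v1 * (T1 / T2) ** (1.0 / (k - 1.0))
ds_compression = ds_ideal_gas(T1, v1, T2, v2)

print(f"caloric heating:       ds = {ds_heating:+.1f} J/(kg K)")      # about +206.5
print(f"adiabatic compression: ds = {ds_compression:+.1f} J/(kg K)")  # essentially 0
```

Both paths end at 400 K with identical internal energy, yet only the heating path changes the entropy, illustrating that entropy tracks stored heat, not transferred work.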
PHYSICAL MEANING OF ENTROPY: A MEASURE OF THERMAL DISORDER Statistical Thermodynamics deals with microscopic structures and dynamic interactions of microparticles with an objective to describe thermo-physical properties of a classical system as a statistical average of all possible and relevant microstates, described by positions and momenta of microparticles.
In principle, we can reason and derive Classical Thermodynamics from relevant statistics of microstates’ dynamics, down to the quantum behavior of molecules and atoms. However, in reality, there are many obstacles, ambiguities and limitations due to complex microstructure and interactions at many different micro-, nano-, subatomic- and quantum-levels. Note that the statistical results are feasible for very few simple and idealized systems. The random thermal motion of microparticles gives rise to thermal macro-properties, like temperature, thermal energy, heat capacity, and entropy, among others. Temperature is a measure of the average kinetic energy of relevant microparticles, or the kinetic energy of thermal motion of a representative microparticle, while the thermal energy is the sum of all kinetic and potential energies related to the thermal motion of all microparticles, which also gives rise to thermal heat capacity. When thermal motion ceases, virtually all thermal properties vanish at absolute zero temperature. The entropy may also be expressed as a logarithmic measure of all possible thermal-microstates, W, that “correspond to or are consistent with” a given system macro-state. Thus, all the real complexity is transferred to properly “counting” the microstates W, to match the bulk entropy macro-state, the thermal displacement property, dS = dQSys/T = mCSysdT/T, thus a logarithmic integral function, with J/K unit, as accurately deduced by Clausius and precisely measured in laboratories for any system of interest. Since, for classical systems, the microstates are continuous, and thus uncountable, there is a need for proper discretization (discrete ranging/grouping) of the microstates to obtain a proper, countable set, e.g., if the positions (x) and momenta (p) of ideal gas molecules are within certain, relevant Δx and Δp ranges of each other, respectively. Boltzmann expressed entropy as proportional to the logarithm of the number of microstates corresponding to a related macro-state. Regardless of what Boltzmann thought at the time, the well-known correlation:
S = kB ln W    (13)
was formalized later by Planck and inscribed on Boltzmann’s tombstone in his honor. The value of W, originally intended to be proportional to the Wahrscheinlichkeit (the German word for probability), reduces to “a metaphorical” number of all possible, relevant molecular microstates, W, corresponding to a given bulk macro-state. The kB is a constant with entropy unit, named after Boltzmann.
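To connect Equation (13) with the macroscopic Equation (12), consider the textbook free-expansion sketch (an illustration added here, not from the article): N ideal-gas molecules at constant temperature double their volume, so each molecule has twice as many accessible position cells and W2/W1 = 2^N, giving

```latex
\Delta S = k_B \ln W_2 - k_B \ln W_1 = N k_B \ln 2 = m R \ln\frac{V_2}{V_1},
```

which is exactly Equation (12) at constant T, since N kB = mR for the same amount of gas.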
In Statistical Thermodynamics, “entropy” is also expressed as the uncertainty of all molecular positions due to their thermal random motion (thermal-motion randomness). Again, the adjective ‘thermal’ is critically important, but often overlooked in the literature. At absolute zero temperature the thermal motion ceases, and the position of every molecule in a perfect crystal is uniquely fixed, thus with certainty of the microstate and absolute zero entropy (the Third Law of Thermodynamics). For a not purely crystalline substance (such as ‘randomly frozen’ solid-solution structures), the positions of the molecules are not uniquely determined, and the entropy is not exactly zero at absolute zero temperature (but some residual value); however, there is no uncertainty due to thermal motion. The thermal-motion randomness or thermal disorder, as related to the entropy, was generalized by Gibbs as “a (logarithmic) measure” of the sum of all microstates’ uncertainties, i.e., probabilities pi, i.e.,
S = −kB Σi pi ln pi    (14)
Gibbs’ formulation is more general since it allows for non-uniform probabilities, pi, of the microstates. With an increase of thermal disorder, i.e., an increase of microparticle momenta (temperature) and the space they occupy (volume), more information is required for a system’s characterization than in more ordered systems, which could be described via related probabilities of microstates, thus generalizing disorder as Shannon’s information entropy. However, the Gibbs or von Neumann quantum (based on the number of possible quantum microstates), or Shannon or other probabilistic entropy descriptions, are also statistical, as Boltzmann’s. Actually, they all reduce to the Boltzmann expression for a fully-randomized system with uniform probability of all W microstates, where pi = 1/W = constant. Such statistical entropies should be correlated with the measurable physical quantities, and used as such. Any new approach should be correlated with existing knowledge, with the limitations clearly and objectively presented. The phenomenological thermodynamics has the supremacy due to logical and holistic reasoning based on the fundamental laws and without regard to the systems’ complex dynamic structures and even more complex interactions; thus its importance and supremacy should not be marginalized. Nothing occurs, locally nor globally in the universe, without mass-energy exchange and entropy production/generation. The miracles are until we comprehend and explain them!
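As a one-line check of the reduction claimed above, substituting the uniform probabilities pi = 1/W into the Gibbs formulation, Equation (14), recovers the Boltzmann expression, Equation (13):

```latex
S = -k_B \sum_{i=1}^{W} \frac{1}{W} \ln\frac{1}{W}
  = -k_B \, W \cdot \frac{1}{W} \ln\frac{1}{W}
  = k_B \ln W.
```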
Therefore, entropy is a macro-thermal property related to random thermal motion, i.e., "thermal disorder", and not related to any other disorder [6]. The fundamental Second Law of thermodynamics describes all natural processes where energy is transferred and/or converted due to a natural tendency to spontaneously force mass-energy displacement (redistribution) from higher to lower energy concentration (energy intensive-potential), approaching mass-energy equilibrium (equipartition of mass-energy), and forcing directionality and irreversibility of all natural processes, and impossibility otherwise. Therefore, natural processes (spontaneous forcing of mass-energy displacement, not only thermal, but in general) cannot go spontaneously in opposite to the natural forcing direction. That is, spontaneous heat transfer must go from higher towards lower temperature (thermal potential), fluid must flow (spontaneously) from higher to lower elevation (gravitational potential; downwards, not upwards), space expansion from higher to lower pressure, electricity from higher to lower voltage (electrical potential), and species disperse from higher to lower concentrations (chemical potential), etc. The
essence here is "spontaneously", meaning in a self-driven forcing direction and not in the opposite or reverse direction (irreversible thermodynamic spontaneity). Of course, we may "externally force" (non-spontaneously, at the expense of external non-equilibrium, or from within quasi-equilibrium), by using special devices and processes (including technical, intelligent or life processes), to "trick" heat to net transfer (or net transport) from lower to
higher temperature, pump water to higher elevation, or transport electricity to higher voltage, or concentrate species, or build new, ordered structures, etc. However, all such processes will dissipate work potential and generate heat and entropy (only in the ideal limit conserve entropy), and they will not violate the Second Law (will not destroy entropy), even though they may appear to be magically displacing mass-energy against natural forces, i.e., against spontaneity. Even more elusive are life processes or processes at small space and time scales, which may appear to defy the Second Law due to the limitations of our observation and comprehension, where the accounting of entropy is more subtle and may lead to misleading conclusions.
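A minimal sketch of the entropy bookkeeping for spontaneous heat transfer in the natural forcing direction (the reservoir temperatures and heat amount are assumed values for illustration):

# Spontaneous heat transfer Q from a hot reservoir at T_hot to a cold
# one at T_cold; temperatures in kelvin are assumed illustrative values.
Q, T_hot, T_cold = 1000.0, 600.0, 300.0   # J, K, K

dS_hot = -Q / T_hot      # entropy leaves the hot reservoir
dS_cold = +Q / T_cold    # entropy enters the cold reservoir
S_gen = dS_hot + dS_cold
print(f"S_gen = {S_gen:.3f} J/K")   # positive: 1.667 J/K

# Reversing the direction (heat from cold to hot without external work)
# would make S_gen negative, i.e., destroy entropy: impossible.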
THE THREE ESSENTIAL “ENTROPY AND SECOND LAW MISCONCEPTIONS” Is Entropy a Measure of Any Disorder? Thermodynamic entropy is the thermal displacement (“thermal motion space”, i.e., thermal disorder displacement space). Entropy is not any space disorder, nor form, nor functional disorder. Real thermodynamic entropy is related to thermal energy and its (heat) transfer only (Box 3). Expanding entropy to any type of disorder or information is misleading and a source of many misconceptions, since those disorders are mutually independent concepts. Entropy is a thermal property, not a statistical concept, although it may be described with statistical and probabilistic methods. The modern generalization that “Entropy is a measure of [any] disorder or randomness in the system...” is too general and over-reaching, based on inadequate and misleading analogy, and thus inappropriate [6].
Box 3. Entropy Is Thermal Randomness, Not Any Randomness. Real thermodynamic entropy is the thermal displacement space, related to thermal motion randomness, and not related to any other randomness. Entropy is certainly a thermal concept and not a statistical nor probabilistic concept per se, just because statistical modeling is used to describe it. The modern generalization that “Entropy is a measure of [any] disorder or randomness in the system...” is too general and overreaching, based on inadequate and misleading analogy, and thus inappropriate.
To restate it again: real thermodynamic entropy is the thermal displacement space, related to thermal motion randomness, and not related to all other randomnesses. Entropy is a physical macro-property and certainly not a statistical and probabilistic concept per se, just because statistical modeling is used to describe it. Statistical and probabilistic descriptions may be useful for modeling of thermal motion and related physical quantities, but the limitations should be stated so the generalizations are not over-stretched and the real physics overlooked, or worse, discredited. Natural phenomena are not subordinate to our science laws, but the other way around—our science laws only model and describe natural phenomena. To restate: entropy is a measure of thermal disorder or thermal randomness in a system. For simple systems, such as ideal gases, entropy may be well described by statistical/probabilistic modeling of thermal particle motion, but it is inappropriate to expand it for all other possible disorders and generalize it into a "monster entropy". Entropy, as thermal disorder, is always generated (produced), in all processes without exception, and cannot be destroyed (no "thermal order") by any means. This should not be confused with local entropy change that could increase or decrease due to entropy transfer. However, the other orders or disorders, such as structural (subatomic, atomic, molecular, bulk), form or functional (structures and devices of all kinds), information, intelligence, life, social, or philosophical, can be created or destroyed by physical processes, and are always accompanied with entropy generation. This demonstrates that all other orders and disorders are concepts autonomous from thermodynamic entropy. Entropy-like statistical measures may be devised to
quantify other disorders, but they should not be confused with classical, thermodynamic entropy. Expanding the classical concept of thermodynamic entropy (thermal energy disorder) to abstract or general concepts (for any and all types of disorders) or information is ‘overreaching’ and can be a source of many inconsistencies and misconceptions. A system form and/or functional order or disorder is not (thermal) energy order/disorder, and the former is not described by nor directly related to thermodynamic entropy.
Can the Second Law Be Valid and Entropy Always Increasing in Isolated Systems and the Universe, but (Maybe) Not Necessarily in All Processes and All Space and Time Scales?
The notion that entropy may "decrease" in some open systems or in life processes, or on small space and time scales, and thus violate the Second Law, is misleading and inaccurate. Local entropy decrease, due to entropy outflow with heat (thermodynamic entropy is associated with thermal motion, i.e., thermal energy or heat only), should not be confused with the impossibility of entropy "reduction by destruction". Instead of entropy increase and decrease, it would be more appropriate to account for entropy production (or generation) and destruction (the latter impossible). Entropy is generated everywhere and always (and thus overall increased), at any scale without exception (including infinitesimal open systems, micro-fluctuations, gravity or entanglement, far-field interactions), but entropy cannot be destroyed by any means, at any scale, and thus, entropy cannot overall decrease (Box 4).
Box 4. Entropy Is Always Generated at Any Space and Time Scales and Cannot Be Destroyed by Any Means.
Entropy is generated everywhere and always (and thus overall increased), at any scale without exception (including infinitesimal open systems, micro-fluctuations, gravity, or entanglement). Entropy cannot be destroyed by any means, at any scale, and thus, entropy cannot overall decrease. Instead of entropy increase and decrease, it would be more appropriate to account for entropy production (or generation) and destruction (the latter being impossible).
Furthermore, since there is no way to destroy entropy at any scale, it cannot be destroyed locally or temporarily and then be compensated elsewhere later. Non-thermal (ideal, reversible adiabatic) processes are isentropic, but
in all real processes, due to all kinds of irreversibilities (friction, mixing, chemical and nuclear reactions, conduction heat transfer or thermal friction, etc.), the work potential is always expended, i.e., dissipated into generated heat, and thus entropy is always generated. Therefore, entropy is always generated (produced) with heat generation, and in the limit conserved (in reversible processes); there is no way to destroy entropy, since during heat conversion to work, entropy is at best ideally conserved, and it is also generated due to real process irreversibilities.
Can Self-Sustained Non-Equilibrium, i.e., Structural Quasi-Equilibrium, Violate the Second Law?
The Second Law was originally defined for simple compressible substances, allowing heat–work interactions only (where temperature is uniform at equilibrium), and in general, it describes process conditions during spontaneous directional displacement of mass-energy (cyclic or stationary extraction of work), accompanied with irreversible generation (production) of entropy due to partial dissipation of work potential to thermal heat. The existence and creation of self-sustained, stationary non-equilibrium, i.e., quasi-equilibrium with non-uniform properties (non-uniform temperatures, etc., but without stationary work production) does not destroy entropy, and therefore does not violate the Second Law (Box 5). Making something perpetually 'hotter or cooler' (achieving a temperature difference) does not necessarily imply spontaneous heat transfer from lower to higher temperature, since it can be achieved by adiabatic (ideally isentropic) compression/expansion without heat transfer (e.g., as in refrigeration processes, or a 'vortex tube'), where external work or internal work potential is utilized to create a stationary thermal non-equilibrium, i.e., quasi-equilibrium, as sketched below.
Box 5. A Quasi-Equilibrium with Non-Uniform Properties Does Not Violate the Second Law.
The creation and existence of self-sustained, stationary non-equilibrium, i.e., quasi-equilibrium with non-uniform properties (non-uniform temperatures, etc., but without stationary work production) does not destroy entropy; therefore, it does not violate the Second Law.
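A minimal sketch, assuming an ideal diatomic gas, of how an ideal isentropic pressure change creates a temperature difference with no heat transfer at all:

# Ideal-gas isentropic relation T2 = T1 * (P2/P1)**((gamma-1)/gamma):
# a temperature difference produced without any heat transfer, so no
# "cold-to-hot" heat flow is implied. Values are illustrative.
gamma = 1.4          # diatomic ideal gas (air)
T1 = 300.0           # K, initial temperature

for P_ratio in (0.2, 1.0, 5.0):          # expansion, none, compression
    T2 = T1 * P_ratio ** ((gamma - 1.0) / gamma)
    print(f"P2/P1 = {P_ratio:4.1f} -> T2 = {T2:6.1f} K")
# Expansion cools (T2 < T1), compression heats (T2 > T1), while the
# ideal process conserves entropy (isentropic).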
Dynamic and structural quasi-equilibriums with non-uniform properties, as compared with classical ‘homogeneous equilibrium’, are elusive and may
be construed as non-equilibriums, if they are re-structured and in a transient process some useful work is obtained, to allude to a violation of the Second Law. However, such work potential is limited to the one stored within such a structure (regardless of how small or large) and cannot be utilized as a perpetual (stationary or cyclic) device to continuously generate useful work from within an equilibrium. After all, before Second Law violation claims are hypothesized, a full understanding of the Second Law, including proper evaluation of entropy, should be established based on full comprehension of the fundamental laws.
OTHER TYPICAL "ENTROPY AND SECOND LAW MISCONCEPTIONS"
Does Maxwell's Demon Violate the Second Law?
A demonic being, introduced by Maxwell to miraculously create thermal non-equilibrium by taking advantage of the non-uniform distribution of molecular velocity in equilibrium, and thereby violate the Second Law of thermodynamics, has been among the most intriguing and elusive wishful concepts for over 150 years now [13,14,15]. Maxwell and his followers focused on "intelligent and effortless gating" of a molecule at a time, but overlooked the simultaneous interference of all other thermally-chaotic molecules, while the demon exorcists tried to justify impossible processes with creative but misplaced "compensations" by the work of measurements and gate operation, and information storage and memory erasure with entropy generation. It is reasoned phenomenologically and deduced by this author [15] that a Maxwell's demon operation, against natural forces and without due work effort, is not possible, since it would be against the physics of the chaotic thermal motion, the latter without consistent molecular directional preference for selective timing. Maxwell's demon (MD) would have miraculous useful effects, but also some catastrophic consequences. The most crucial fact—the integral, chaotic and simultaneous interference of all thermal particles with the MD operation—has been overlooked, while focus on a single, opportunistic particle motion is emphasized, as if the other thermal particles would "self-pause to watch" and not interfere during the gate operation. The due effort to suppress such forced interference of other thermal particles would amount to the required, major due work to establish a macro-non-equilibrium, which is independent of, and in addition to, the auxiliary,
minor gate work of the MD to observe molecules and operate a gate (Box 6). The former, thermodynamic due work is unavoidable and substantial, while the latter, MD operational gate work, can be infinitesimally small if the MD operation is perfected to near-reversible actions, thus making claims of Second Law violation incorrect. After all, the wishful Maxwell's demon could never have been realized, and most of the Second Law challenges have been resolved in favor of the Second Law, and never otherwise.
Box 6. Maxwell's Demon Requires More Effort Than Just to Operate a Gate [15].
The most crucial fact—the integral, chaotic and simultaneous interference of all thermal particles with the Maxwell Demon's gate operation—has been overlooked, while focus on a single, opportunistic particle motion is emphasized, as if the other thermal particles would "self-pause to watch" and not interfere during the gate operation. The Maxwell's demon requires more effort than just to operate a gate.
Can a Vortex Tube and Similar Cooling Devices Violate the Second Law?
Air injected tangentially into a pipe becomes refrigerated at the center and heated at the periphery, in apparent violation of the Second Law of thermodynamics. Observed in 1933, this remained unexplained for a long time, during which time much debate was produced in the literature. It surprises this author that the observed vortex tube stayed unexplained for so long. It only confirms that the Second Law is not fully comprehended by many; otherwise, its validity would not have been questioned. The physics of the vortex tube should be very clear: it is just another interesting type of refrigeration device that achieves cooling by using the work potential of compressed air. The same compressed air can be used to run a turbine and produce work, to in turn run a classical refrigeration machine. Similar claims of possible Second Law violations or miraculous effects have been made for other devices and processes utilizing illusive external or internal work potentials [16].
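A back-of-the-envelope sketch of the work potential stored in compressed air, which is what a vortex tube actually spends (supply and ambient conditions are assumed illustrative values):

import math

# Maximum (reversible, isothermal) specific work from compressed air
# expanding to the environment: w_max = R*T0*ln(P1/P0) per kg, drawing
# heat from the surroundings at T0.
R = 287.0            # J/(kg K), gas constant of air
T0 = 300.0           # K, environment temperature
P0 = 100e3           # Pa, environment pressure
P1 = 800e3           # Pa, compressed-air supply (assumed)

w_max = R * T0 * math.log(P1 / P0)
print(f"w_max = {w_max / 1000:.1f} kJ/kg")  # ~179 kJ/kg of work potential
# A vortex tube merely spends part of this work potential to separate
# hot and cold streams; it does not create it from within equilibrium.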
Could the Second Law Be Violated during Animated Life and Intelligent Processes?
The Second Law is often challenged in biology, life, and social sciences, including evolution and information sciences—all of which have a history rich in confusion. There are other types of "organized structures" in addition to the mass-energy non-equilibrium, such as form and functional design structures, or information/algorithm/template structures, including DNA and RNA structures, with different functions and purposes. Even though the functional form structures and energy structures are similar in some regards and may be described with the same or similar simulation methods, they are not the same and do not have the same physical meaning nor the same physical units. Namely, diverse statistical disorders are not the same as thermal-energy disorder, regardless of the fact that they can be described with similar statistical methodology. Therefore, thermodynamic entropy [J/K unit] is not the same as information entropy or other entropy, but it is different in terms of physical and logical concepts. Organization/creation of technical (manmade) and natural (including life) structures, and thus creation of "local non-equilibrium", is possible and is always happening in many technical and natural processes driven by work potential, using other functional structures (tools, hardware/software templates, information/knowledge and/or diverse intelligent algorithms, DNAs, etc.).
However, all such processes everywhere dissipate work potential (useful energy) and generate entropy, and thus do not violate the Second Law, since they are driven by external systems' non-equilibrium or from within structural quasi-equilibrium (Box 7). It may appear that the created non-equilibrium structures are self-organizing from nowhere, from within an equilibrium (and thus violating the Second Law)—due to the lack of proper observations and "accounting" of all mass-energy flows, which may be subtle or below our detection ability due to the state of our technology and comprehension (as history has shown us many times). Entropy can decrease (locally) but cannot be destroyed anywhere. Miracles remain until we comprehend and explain them!
Box 7. Creation of Ordered Structures or Live Species Always Generates Thermal Disorder, i.e., Always Generates Entropy.
It is possible to have water run uphill, heat transported from a colder to a hotter body, build a functional (organized) structure, and yes, have natural and life processes create amazing organization and species, but all due to transferred or within work-potential expenditure with dissipation, accompanied with heat and entropy generation, and thus without Second Law violation.
Similarly, we cannot produce cold or hot or life (or any non-equilibrium) without expending work potential, transferred from the surroundings or from within quasi-equilibrium, the latter sometimes not being obvious. Without such forced mass-energy flows (mass-energy transfer always accompanied with entropy generation), there would be no formation of cyclones, crystals, or life! For example, until a couple of hundred years ago, we did not even know what happens with the energy of a falling stone after it hits the ground, because we could not easily observe or measure its outcome.
Could the Second Law Be Violated during Micro-Fluctuations?
Firstly, the time and spatial integrals of micro-quantities have to result in macro-quantities. Therefore, claiming violation of the Second Law on micro-scales or in special processes is inappropriate. Secondly, micro-fluctuations make up the macro-equilibrium, do not violate the Second Law, and cannot be "demonized" (without due work) into macro-non-equilibrium (by Maxwell's demon or otherwise) (Box 8).
Furthermore, on microscopic and quantum scales, the macroscopic quantities are not well-defined, and may be used only for certain comparisons, without their true meanings. The underlying mass-energy structures and processes at the utmost micro-scales are more complex and undetected at our present state of tooling and intellectual comprehension. However, their integral manifestations at the macroscopic level are more realistically observable and reliable, and are thus the check-and-balance of microscopic and quantum hypotheses. True reversibility is present only in ideal equilibrium processes; for real processes (a process means directional forcing of displacement of mass-energy), some non-equilibrium forcing is necessary, and such forcing is irreversible; thus, all real processes are irreversible, with reversibility being an ideal limiting case (as is the equilibrium). Since the macroscopic
processes are integral outcomes of micro- and quantum processes, then they too must be irreversible overall, even if the micro-irreversibility is not detectable at our present state of tooling and comprehension. Reversibility and equilibrium are only idealizations, as are many other concepts.
Box 8. Near-Reversible Micro-Fluctuations Make Up Macro-Equilibrium and Cannot Be Demonized to Destroy Entropy.
Micro-fluctuations should not be misconstrued in macro-thermal properties as micro-violations of the Second Law; to the contrary, the thermal micro-fluctuations are an integral part of self-sustained, thermal macro-equilibrium with maximum entropy, i.e., minimum Boltzmann's H-value quantity.
Thermal phenomena, temperature and entropy are macroscopic concepts, based on averaged bulk micro-properties, including self-sustained equilibrium processes (macroscopic quantities constant in equilibrium, the essence of thermal equilibrium, made up of randomized and self-sustained, non-uniform micro-motions). For example, the Maxwell–Boltzmann thermal distribution within an ideal gas in equilibrium is a non-uniform spatial and temporal distribution of micro-properties of the molecules' positions and momenta. Therefore, micro-fluctuations in macro-thermal properties should not be misconstrued as micro-violations of the Second Law, but to the contrary, the micro-fluctuations are an integral part of self-sustained, thermal macro-equilibrium with maximum entropy, i.e., minimum Boltzmann's H-value quantity.
Fluctuating phenomena in perfect equilibrium are reversible, and thus isentropic, so that any reduction in entropy reported in the literature may be due to misinterpretations at the micro- and submicro-scales, approximate accounting for thermal contributions, and/or overlooking diverse displacement contributions to entropy. For example, during isothermal free expansion, entropy is generated due to volume displacement, and similarly it could be due to subtle and illusive micro- and submicro-'displacements', including particle 'correlations' and/or quantum effects, as demonstrated by the well-known isentropic compression–expansion, for example [15].
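A minimal sketch of the free-expansion entropy generation just mentioned (the amount of gas and the volumes are assumed values):

import math

# Isothermal free expansion of an ideal gas: no heat, no work, U constant,
# yet entropy is generated by the volume displacement:
# S_gen = n*R*ln(V2/V1).
n = 1.0              # mol
R = 8.314            # J/(mol K)
V1, V2 = 1.0, 2.0    # m^3, doubling the volume

S_gen = n * R * math.log(V2 / V1)
print(f"S_gen = {S_gen:.2f} J/K")   # 5.76 J/K, always > 0 for V2 > V1
# Recompressing isothermally and reversibly to V1 removes this entropy
# with heat to the surroundings: it is transferred out, not destroyed.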
Could the Second Law Be Violated Locally and Temporarily and "Compensated" Elsewhere Later?
Local and temporal destruction of entropy is impossible (as already reasoned) and cannot be "compensated" elsewhere or at a later time, to "post-satisfy" the Second Law. "Entropy of an isolated, closed system (or universe) is always increasing" is a necessary but not sufficient local condition of the Second Law of thermodynamics: entropy cannot be destroyed, locally or at a time, and compensated by generation elsewhere or later (Box 9). It would be equivalent to allowing rivers to "spontaneously flow" uphill and compensate for it by more downhill flow elsewhere or later. Entropy is generated everywhere and always, at any scale without exception, and cannot be destroyed by any means at any scale (as is reasoned above).
Box 9. Entropy Cannot Be Destroyed, Locally or at a Time, and "Compensated" by Generation Elsewhere or Later.
Entropy is generated everywhere and always, at any scale without exception, and cannot be destroyed by any means at any scale. "Entropy of an isolated, closed system (or universe) is always increasing" is a necessary but not sufficient condition of the Second Law of thermodynamics. The impossibility of entropy reduction by destruction should not be confused with local entropy decrease due to entropy outflow.
Thermodynamic "Arrow of Time"
The arrow of time is supposed to be a general concept, independent of a clock design or "personal perception", in the way thermodynamic temperature is independent of a thermometer design, or how light speed is independent of an observer's speed. Time and entropy are always being irreversibly generated and overall irreversibly increased. It is premature to make any definitive relationship between the two before their correlation is well established. However, the thermodynamic arrow of time, as general irreversibility, i.e., the directionality of all processes with entropy generation without exception, may be the answer to "where does our arrow of time come from?" Even if a reversible arrow of time could go backwards (reversibility of the time arrow), it would not be a violation of the Second Law, but a limiting, ideal, reversible case (entropy would have been conserved, not destroyed) (Box 10).
Box 10. The "Thermodynamic Arrow of Time" May Be the Answer to "Where Does Our Arrow of Time Come From?"
If the arrow of time is related to entropy generation, with reversibility being the limiting concept, then the real time could be slowed down, in the limit stopped, but not reversed.
Claims of Second Law anomalies or violations are never ending. However, no Second Law challenge has ever been proven to violate the Second Law, but rather to the contrary.
FURTHER COMMENTS
To summarize again: All processes in nature (cause-and-effect phenomena) are irreversibly driven in a forced direction by non-equilibrium useful energy (work potential), where the mass-energy flux is transferred while conserved (the First Law), but in part (and ultimately in whole at equilibrium), the work potential is dissipated to thermal heat, thus always and irreversibly generating entropy (the Second Law).
Claims by the Second Law challengers that, with creative devices, "challengers' demons", it is possible to embed them into an equilibrium environment and extract perpetual 'useful work' are philosophically and scientifically flawed. Such a 'challenger's demon', if possible and embedded as a 'black box' in a system or environment at equilibrium, to produce a spontaneous, steady-state (stationary or cyclic) work-extraction process from within such an equilibrium, would also be a 'catastrophically unstable' process, with the potential to 'syphon' all existing mass-energy out of such an equilibrium, like a 'monster black energy hole'. If it were ever possible, we would not exist 'as we know it,' here and now!
Box 11. Hypotheses of Second Law Violation Are Still to Be Proven.
All resolved paradoxes and misleading violations of the Second Law, to date, have been resolved in favor of the Second Law and never against. We are still to witness a single, still open 'Second Law violation' hypothesis to be proven.
The challengers misinterpret the fundamental laws, present elusive hypotheses, and perform incomplete and misleading, biased experiments to support their Second Law violation claims. That is why all resolved challengers' paradoxes and misleading violations of the Second Law, to date, have been resolved in favor of the
Second Law and never against (Box 11). We are still to witness a single, still open 'Second Law violation' hypothesis to be proven. In summary, the current frenzy about Second Law violation (a 'perpetual motion of the second kind', obtaining useful energy from within equilibrium) is in many ways similar to the prior frenzy about 'First Law violation' (obtaining energy from nowhere). It is hard to believe that a serious scientist nowadays, who truly comprehends the Second Law and its essence, would challenge it based on incomplete and elusive facts [17].
REFERENCES AND NOTE
1. Carnot, S. Reflections on the Motive Power of Heat; Chapman & Hall, Ltd.: London, UK, 1897; Available online: https://sites.google.com/site/professorkostic/energy-environment/sadi-carnot (accessed on 15 May 2020).
2. Clausius, R. The Mechanical Theory of Heat; McMillan and Co.: London, UK, 1879; Available online: http://www.archive.org/details/londonedinburghd02lond (accessed on 15 May 2020).
3. Bridgman, P.W. The Nature of Thermodynamics; Harvard University Press: Cambridge, MA, USA, 1941.
4. Kostic, M. Irreversibility and Reversible Heat Transfer: The Quest and Nature of Energy and Entropy. In Proceedings of the Innovations in Engineering Education: Mechanical Engineering Education, Mechanical Engineering/Mechanical Engineering Technology Department Heads; ASME International: New York, NY, USA, 2004; pp. 101–106.
5. Kostic, M. Revisiting the Second Law of Energy Degradation and Entropy Generation: From Sadi Carnot's Ingenious Reasoning to Holistic Generalization. AIP Conf. Proc. 2011, 1411, 327.
6. Kostic, M. The Elusive Nature of Entropy and Its Physical Meaning. Entropy 2014, 16, 953–967.
7. Kostic, M. Nature of Heat and Thermal Energy: From Caloric to Carnot's Reflections, to Entropy, Exergy, Entransy and Beyond. Entropy 2018, 20, 584.
8. Kostic, M. Comments on "Removing the Mystery of Entropy and Thermodynamics," Northern Illinois University. Available online: https://www.kostic.niu.edu/kostic/_pdfs/Comments-KeyPointsf-Mystery-Entropy-Leff (accessed on 15 May 2020).
9. Eddington, A.S. The Nature of the Physical World; Cambridge University Press: Cambridge, MA, USA, 1929.
10. Œuvres de Lavoisier. Available online: http://www.lavoisier.cnrs.fr/ (accessed on 15 May 2020).
11. Maxwell, J.C. Theory of Heat; Longmans, Green, and Co.: London, UK, 1871; Chapter 12.
12. Maxwell's demon in condensed matter physics. Sci. Bull. 2018, 63, 1019–1022.
13. Leff, H.S.; Rex, A.F. Maxwell's Demon 2: Entropy, Classical and Quantum Information, Computing; CRC Press: Boca Raton, FL, USA, 2002; ISBN 0-7503-0759-5.
14. Knott, C.G. Life and Scientific Work of Peter Guthrie Tait; Cambridge University Press: London, UK, 1911; pp. 213–215.
15. Kostic, M.M. Maxwell's Demon and Its Fallacies Demystified. arXiv 2020, arXiv:2001.10083.
16. Kostic, M.M. "Heat Flowing from Cold to Hot Without External Intervention" Demystified.

CHAPTER 3

THERMODYNAMICS, STATISTICAL MECHANICS AND ENTROPY

An agreement on thermodynamics and the criteria that the entropy should satisfy is still lacking, which has left room for competing definitions of entropy. The purpose of this paper is to present one view of those basic assumptions, while avoiding direct criticism of alternatives that have been suggested. However, because small effects of the order of 1/N, where N is the number of particles, have been discussed extensively in the literature [1,2,3,4,5,6,7,8,9,10,11], my discussion will also take small effects seriously.
I present the structure of thermodynamics in Section 2 as a development of the views of Tisza [30] and Callen [31,32], starting from Callen's well-known postulates of thermodynamics. I will argue that some of those postulates are unnecessarily restrictive, and I will abandon or modify them
as needed. I then present the postulates in what I believe to be their most general form, along with their consequences. Criteria that the thermodynamic entropy must satisfy arise directly from these postulates. It will become clear that the most general form of thermodynamics is not unique. However, additional criteria do make the entropy unique. There is no need to limit thermodynamics to homogeneous systems or short-ranged interactions (within a system). On the other hand, it should be recognized that when thermodynamics is applied to the calculation of materials properties, it can be useful to assume homogeneity. Statistical calculations for homogeneous systems are then preferred, because the Euler equation is valid for such systems.
Thermodynamics is commonly taught by starting with the four Laws of Thermodynamics. In Section 3, I derive these laws from the postulates, to demonstrate the close connection between the laws and the postulates. The advantage of the postulates is that they make some necessary aspects of the structure of thermodynamics explicit.
For classical statistical mechanics, I start in Section 4 from the assumption that a system is always in some definite microscopic state, but that we have only limited knowledge of what that state is. We can describe our knowledge of the microscopic state by a probability distribution, with the assumption that all states that are consistent with experimental measurements are equally likely. From this assumption, we can, in principle, calculate all thermodynamic properties without needing thermodynamics. A thermodynamic description of these properties must be consistent with calculations in statistical mechanics.
An advantage of basing the entropy on probabilities, rather than on either surfaces or volumes in phase space, is that Liouville's theorem does not prevent the entropy of an isolated system from increasing [33,34].
A striking difference between these two methods of calculating the properties of physical systems is that while statistical mechanics only predicts probability distributions, thermodynamics is deterministic. For this reason, thermodynamics might seem to be limited to large systems. If a system has N particles, measurements of physical properties fluctuate with relative deviations of order 1/√N, while thermodynamics
provides a deterministic prediction. Since typical values of N are in the range of 10^20, such fluctuations are far too small to measure, which justifies the application of thermodynamics to physical experiments. However, I will not ignore these small differences entirely (see the discussion in Section 10).
Some have suggested that thermodynamics should also apply to small systems—even a system consisting of a single particle [9,11]. Demanding that thermodynamics be valid for small systems would put an additional constraint on the structure of the theory, since thermodynamics must still be valid for large systems. Although thermodynamic ideas can be of value in understanding the properties of small systems, I believe that the full application of thermodynamics to small systems must be treated with care as a separate topic.
A minimal requirement is that thermodynamics and statistical mechanics make the same predictions of the results of macroscopic experiments. This consideration gives a candidate for the entropy of thermodynamics, the Boltzmann entropy [35,36,37,38,39]. It correctly predicts the mode of the probability distribution for equilibrium values of the extensive variables. Since the mode differs from the mean by terms of order 1/N for most quantities, it is well within experimental resolution. However, there are other criteria that the entropy might satisfy. These include exact adiabatic invariance and the prediction of the mean of an experimental observable rather than the mode, even though the difference is usually of order 1/N. Indeed, some theorists have insisted that these criteria be satisfied exactly [3,4]. Although I do not think that they are necessary for consistency, I will include these additional criteria in the discussion to give a complete picture.
I will discuss four candidates for the entropy of classical systems in Section 5, Section 6, Section 7 and Section 8. For quantum systems, the discreteness of the spectrum of energy eigenvalues presents a special situation, which will be discussed in Section 9. A probability distribution over many states of the system, including linear combinations of eigenstates, is generated if the system of interest has ever been in thermal contact with another macroscopic system. A description restricted to the energy eigenstates alone, with their discrete eigenvalues, is overly restrictive in that it omits all states that are formed as linear combinations of eigenstates.
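A quick numerical illustration of the 1/√N scaling invoked above (a toy model of independent random contributions; the sample counts are arbitrary):

import math
import random
import statistics

random.seed(0)
# The relative fluctuation of a sum of N independent contributions
# scales as 1/sqrt(N); for uniform(0,1) terms, theory gives 1/sqrt(3N).
for N in (100, 10_000, 1_000_000):
    samples = [sum(random.random() for _ in range(N)) for _ in range(30)]
    rel = statistics.stdev(samples) / statistics.mean(samples)
    print(f"N={N:>9}: relative fluctuation {rel:.1e} "
          f"(theory {1 / math.sqrt(3 * N):.1e})")

With only 30 samples per N the estimates are rough, but the orders of magnitude track the theoretical scaling, suggesting why N of order 10^20 makes thermodynamic predictions effectively deterministic.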
It has been shown that the properties of macroscopic quantum systems are continuous functions of the average energy, as required for consistency with thermodynamics [22]. The question of the form of the thermodynamic entropy in the presence of a first-order phase transition is discussed in Section 10. An exact inequality requires that a plot of entropy vs. energy must be concave. The signature of a first-order transition found in the convexity of the Boltzmann entropy near a
first-order transition [2,40,41,42] is indeed very useful, but it is not present in the thermodynamic entropy.
Section 11 illustrates the differences between the various suggested forms for the entropy with systems that consist of non-interacting objects. In the limit that the components of a thermodynamic system are independent of each other, they should contribute independently to any extensive property. If all objects have the same properties, the entropy should be exactly proportional to their number. Almost all suggested forms for the entropy satisfy this criterion.
The systems considered may be enclosed by adsorbing walls, with condensation onto the walls at low temperatures. The systems can also be inhomogeneous and anisotropic, although care must be taken to generalize the concept of pressure. If the system has a net charge, it will interact with other systems through the long-ranged electrical force, which should be excluded. The condition that the systems be "chemically inert" is not necessary. Although I will present the thermodynamic equations for systems with just one chemical component, this is easily generalized. Chemical changes are described thermodynamically by the chemical potentials of the various components. The condition that the systems "are not acted on by electric, magnetic, or gravitational fields" is also not necessary.
Callen's Postulates
I will give Callen's postulates along with some comments on their range of validity.
Callen's Postulate 1: There exist particular states (called equilibrium states) of simple systems that, macroscopically, are characterized completely by the internal energy U, the volume V, and the mole numbers N1, N2 ... Nr of the chemical components [31,32].
For simplicity, I will only consider one type of particle. The generalization to r different types is straightforward, as is the generalization to include the magnetization, polarization, etc. I also use N to denote the number of particles, instead of the number of moles.
Callen's Postulate 2: There exists a function (called the entropy S) of the extensive parameters of any composite system, defined for all equilibrium states and having the following property: The values assumed by the extensive parameters in the absence of an internal constraint are those which maximize the entropy over the manifold of constrained equilibrium states [31,32].
This postulate is equivalent to the second law of thermodynamics in a very useful form. Since the entropy is maximized when a constraint is released, the total entropy can never decrease. The values of the released variables (energy, volume, or particle number) at the maximized entropy predict the equilibrium values. This postulate also introduces the concept of a state function, although I prefer to make the existence of state functions a separate postulate (see Section 2.3 and Section 2.4).
Callen's Postulate 3: The entropy of a composite system is additive over the constituent subsystems. The entropy is continuous and differentiable and is a monotonically increasing function of the energy [31,32].
The first property, that the entropy is "additive over the constituent subsystems," is quite important. Together with Callen's Postulate 2, it requires two systems, j and k, in thermal equilibrium to satisfy the equation:
∂S_j/∂U_j = ∂S_k/∂U_k (1)

The left side of Equation (1) depends only on the properties of system j, while the right side depends only on the properties of system k. The common value of the derivative ∂S/∂U identifies thermal equilibrium, and the inverse temperature is defined as 1/T = ∂S/∂U [31,32,43]. The limitation to entropy functions that are "continuous and differentiable" would seem to be problematic for quantum systems with discrete spectra [22]. The limitation that the entropy is a "monotonically increasing function of the energy" is unnecessary. It is equivalent to the assumption of positive temperatures. It has the advantage of allowing the inversion of S=S(U,V,N) to obtain U=U(S,V,N). Callen uses Legendre transforms of U=U(S,V,N) to express many of his results. However, the same thermodynamics can be expressed in terms of Legendre transforms of S=S(U,V,N), without the limitation to positive temperatures.
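A small numerical sketch of Equation (1) at work, assuming two monatomic ideal gases in units where k = 1 (an illustration, not from the text): maximizing the total entropy over the energy split equalizes ∂S/∂U, i.e., the temperatures.

import math

# Up to constants, S_j = (3/2) N_j k ln U_j for a monatomic ideal gas,
# and T_j = 2 U_j / (3 N_j k). Brute-force maximization of S1 + S2.
k = 1.0                       # units where k = 1
N1, N2 = 100.0, 300.0         # particle numbers (assumed)
U_T = 1000.0                  # total energy to be shared (assumed)

def S_total(U1):
    U2 = U_T - U1
    return 1.5 * N1 * k * math.log(U1) + 1.5 * N2 * k * math.log(U2)

U1_star = max((U_T * i / 10_000 for i in range(1, 10_000)), key=S_total)
T1 = 2 * U1_star / (3 * N1 * k)
T2 = 2 * (U_T - U1_star) / (3 * N2 * k)
print(f"U1 = {U1_star:.1f}, T1 = {T1:.3f}, T2 = {T2:.3f}")  # T1 equals T2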
Callen's Postulate 4: The entropy of any system vanishes in the state for which ∂U/∂S = 0 (that is, at the zero of temperature) [31,32].
This expression of the Third Law of Thermodynamics requires the entropy as a function of energy to be invertible, which was assumed in Callen's Postulate 3.
In my 2012 textbook on statistical mechanics and thermodynamics, I reformulated Callen’s postulates for mostly pedagogic reasons. They are given here for comparison, but I now think that they are somewhat too restrictive.
Postulate 1: There exist equilibrium states of a macroscopic system that are characterized uniquely by a small number of extensive variables.
Postulate 2: The values assumed by the extensive parameters of an isolated composite system in the absence of an internal constraint are those that maximize the entropy over the set of all constrained macroscopic states.
Postulate 3: The entropy of a composite system is additive over the constituent subsystems.
Postulate 4: The entropy, S, can be shown to be a continuous function of the energy, U, even for quantum systems [22], and to be differentiable wherever the formalism requires it [23,44].
This completes the minimal set of postulates that are necessary for consistent thermodynamics.
Optional Postulates
The remaining postulates are of interest for more restricted, but still important cases.
Postulate 5: Extensivity
The entropy is an extensive function of the extensive variables. Extensivity means that

S(λU, λV, λN) = λS(U, V, N) (3)
This postulate is not generally true, but it is useful for studying bulk material properties. If we are not interested in the effects of the boundaries, we can impose conditions that suppress them. For example, periodic boundary conditions will eliminate the surface effects entirely. There will still be small finite-size corrections, but these become negligible in the thermodynamic limit, except near a phase transition.
If the system is composed of N independent objects, the entropy is expected to be exactly extensive. This would be true of independent oscillators, N two-level objects, or a classical ideal gas of N particles. Unfortunately, most suggested expressions for the entropy in statistical mechanics do not give exactly extensive entropies for these systems, although the deviations from extensivity are small.
Extensivity implies that the Euler equation is valid,

U = TS − PV + μN. (4)

The Euler equation leads to the Gibbs–Duhem relation,

S dT − V dP + N dμ = 0, (5)

which shows that T, P, and μ are not independent for extensive systems.
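A numerical check that an extensive entropy satisfies the Euler Equation (4), using the standard Sackur–Tetrode entropy of a monatomic ideal gas with arbitrary assumed state values (a sketch, not part of the original chapter):

import math

kB = 1.380649e-23; h = 6.62607015e-34; m = 6.6e-27  # helium-like mass, kg

def S(U, V, N):
    # Sackur-Tetrode entropy of a monatomic ideal gas
    return N * kB * (math.log((V / N) * (4 * math.pi * m * U /
                     (3 * N * h ** 2)) ** 1.5) + 2.5)

U, V, N = 3e3, 1e-2, 2e23          # J, m^3, particles (assumed values)
eps = 1e-6
# T, P, mu from finite-difference partial derivatives of S:
T  = 1.0 / ((S(U * (1 + eps), V, N) - S(U, V, N)) / (U * eps))
P  = T * (S(U, V * (1 + eps), N) - S(U, V, N)) / (V * eps)
mu = -T * (S(U, V, N * (1 + eps)) - S(U, V, N)) / (N * eps)

# U and T*S - P*V + mu*N agree to finite-difference accuracy (Euler):
print(U, T * S(U, V, N) - P * V + mu * N)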
Postulate 6: Monotonicity
The entropy is a monotonically increasing function of the energy for equilibrium values of the energy. This postulate allows the inversion of S=S(U,V,N) to obtain U=U(S,V,N), and the derivation of several thermodynamic potentials in a familiar form. The postulate is unnecessary, and Massieu functions can be used for a general treatment [22,23,44].
Postulate 7: Nernst Postulate
The entropy of any system is non-negative. This form accommodates systems, such as glasses, which have non-zero (positive) entropy at zero temperature; it does not demand that S go to zero as T goes to zero.
The Nernst Postulate is also known as the Third Law of Thermodynamics. It only applies to quantum systems, but, since all real systems are quantum mechanical, it ultimately applies to everything. It is listed as optional because classical systems are interesting in themselves and useful in understanding the special properties of quantum systems.
The Nernst postulate is often said to be equivalent to the statement that it is impossible to achieve absolute zero temperature. This is not correct, because reaching absolute zero is even more strongly prohibited by classical mechanics: an infinite number of steps would be required to reach absolute zero if a system really obeyed
classical mechanics [45]. That completes the set of thermodynamic postulates, essential and otherwise. Before going on to discuss the explicit form of the entropy, I will discuss the derivation of the laws of thermodynamics.
THE LAWS OF THERMODYNAMICS
The establishment of the laws of thermodynamics was extremely important to the development of the subject. However, they do not provide a complete basis for practical thermodynamics, which I believe the postulates do. Consequently, I prefer the postulates as providing an unambiguous prescription for the mathematical structure of thermodynamics and the required properties of the entropy.
The first law of thermodynamics expresses the conservation of energy, which is ubiquitous in physics, and, as a result, I have not stressed it, regarding it as obvious. Nevertheless, the recognition that heat is a manifestation of a form of energy was one of the great achievements of the nineteenth century. It is used throughout the thermodynamic formalism, most explicitly in the form

dU = δQ + δW, (6)

where dU is the differential change in the internal energy of a system, δQ is the heat added to the system, and δW is the work done on the system (the use of the symbol δ indicates an inexact differential).
The second law of thermodynamics follows directly from the second essential postulate. It is most concisely written as

ΔS ≥ 0, (7)

where ΔS indicates a change of the total entropy upon release of any constraint.
The third law of thermodynamics is also known as the Nernst postulate. It has been discussed in Section 2.
The zeroth law of thermodynamics was the last to be named. It is found in a book by Fowler and Guggenheim [46] in the form: If two assemblies are each in thermal equilibrium with a third assembly, they are in thermal equilibrium with each other.
It was deemed to be of such importance that it was not appropriate to give it a larger number than the other laws, so it became the zeroth. It is a direct consequence of Equation (1), which follows from Postulates 2 and 3.
In Section 4, I begin the discussion of the relevant statistical mechanics with the calculation of the probability distribution as a function of the energies, volumes, and numbers of particles for a large number of systems that might, or might not, interact. I then discuss the Boltzmann and Gibbs entropies. Taking into account the width of the probability distribution of the energy and number of particles leads to the canonical entropy and the grand canonical entropy.
MACROSCOPIC PROBABILITIES FOR CLASSICAL SYSTEMS
The basic problem of thermodynamics is to predict the equilibrium values of the extensive variables after the release of a constraint between systems [31,43]. The solution to this problem in statistical mechanics does not require any assumptions about the proper definition of entropy.
Consider a collection of M systems that might, or might not, exchange energy, volume, or particles. Denote the phase space for the j-th system by {pj,qj}, where (in three dimensions) pj represents 3Nj momentum variables, and qj represents 3Nj position variables. Neglecting the small interaction energies between the systems [22], the total Hamiltonian of the collection of systems can be written as a sum of contributions from each system,

H_T = Σ_j H_j(p_j, q_j). (8)

The energy, volume, and particle number of system j are denoted as Ej, Vj, and Nj, and are subject to the conditions on the sums,

Σ_j E_j = E_T,  Σ_j V_j = V_T,  Σ_j N_j = N_T, (9)

where ET, VT, and NT are constants. I will only write the equations for a single type of particle. The generalization to a variety of particles is trivial, but requires indices that might obscure the essential argument. The systems
are otherwise independent of each other. I am not restricting the range of the interactions within any of the M systems. I am also not assuming homogeneity, so I do not, in general, expect extensivity [47]. For example, the systems might be enclosed by adsorbing walls. Since I am concerned with macroscopic experiments, I assume that no measurements are made that might identify individual particles, whether or not they are formally indistinguishable [48]. Therefore, there are N_T!/(N_1! N_2! ⋯ N_M!) different permutations for assigning particles to systems, and all permutations are taken to be equally probable. The probability distribution for the macroscopic observables in equilibrium can then be written as

W({E_j, V_j, N_j}) = (N_T!/Ω_T) Π_j [ (1/(h^{3N_j} N_j!)) ∫ d^{3N_j}p_j d^{3N_j}q_j δ(E_j − H_j(p_j, q_j)) ] (10)

The constraint that the Nj particles in system j are restricted to a volume Vj is implicit in Equation (10), and the walls containing the system may have any desired properties. Ω_T is a constant, which is determined by summing or integrating over all values of energy, volume, and particle number that are consistent with the values of ET, VT, and NT in Equation (9). The value of the constant Ω_T does not affect the rest of the argument. Equation (10) can also be written as

W({E_j, V_j, N_j}) = (N_T!/Ω_T) Π_j Ω_j(E_j, V_j, N_j), (11)

where

Ω_j(E_j, V_j, N_j) = (1/(h^{3N_j} N_j!)) ∫ d^{3N_j}p_j d^{3N_j}q_j δ(E_j − H_j(p_j, q_j)), (12)

and Ω_T is a normalization constant. The factor of 1/h^{3N_j}, where h is Planck's constant, is included for agreement with the classical limit of quantum statistical mechanics [43].
THE BOLTZMANN ENTROPY, SB, FOR CLASSICAL SYSTEMS
Consider again the collection of M systems, with the j-th system described by the Hamiltonian H_j(p_j,q_j). The systems are originally isolated, but individual constraints may be removed or imposed, allowing the possibility of exchanging energy, particles, or volume. The number M is intended to be quite large, since all systems that might interact are included. The magnitude of the energy involved in potential interactions between systems is regarded as negligible.
The probability distribution for the extensive thermodynamic variables, energy (Ej), volume (Vj), and number of particles (Nj), is given by the expression in Equations (11) and (12) [49]. The logarithm of Equation (11) (plus an arbitrary constant, C) gives the Boltzmann entropy of the M systems,

S_B({E_j, V_j, N_j}) = k_B ln W({E_j, V_j, N_j}) + C, (13)

where the Boltzmann entropy for the j-th system is

S_{B,j}(E_j, V_j, N_j) = k_B ln Ω_j(E_j, V_j, N_j) + C_j. (14)

Using Equation (12),

S_{B,j} = k_B ln [ (1/(h^{3N_j} N_j!)) ∫ d^{3N_j}p_j d^{3N_j}q_j δ(E_j − H_j(p_j, q_j)) ] + C_j. (15)

Since the total Boltzmann entropy is the logarithm of the probability W({Ej,Vj,Nj}), its maximization predicts the mode of the probability distribution. The mode is not the same as the mean of the probability distribution, but the difference between the mean and the mode is usually of order 1/N. As mentioned in the Introduction, I will take even such small differences seriously.
Strengths of the Boltzmann Entropy If any constraint(s) between any two systems is released, the probability distribution of the corresponding variable(s) is given by W in Equation (11). Since the Boltzmann entropy is proportional to the logarithm of W, it correctly predicts the mode of the probability distribution. Whenever the peak is narrow, the mode is a very good estimate for the mean since the relative difference is of the order 1/N. The mode is really used to estimate
the measured value of some observable, which is within fluctuations of relative order 1/√N of the mode.
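A tiny numerical sketch of how small the difference between mode and mean is, using the standard result that the canonical energy distribution of a monatomic ideal gas is a Gamma distribution with shape 3N/2 (the values of N are arbitrary):

# P(E) ~ E**(3N/2 - 1) * exp(-E/kT) is a Gamma distribution with
# shape a = 3N/2, so mean = a*kT and mode = (a - 1)*kT: the relative
# difference is 1/a = O(1/N), the size of effect discussed here.
kT = 1.0  # energy in units of kT

for N in (10, 1_000, 100_000):
    a = 1.5 * N
    mean, mode = a * kT, (a - 1) * kT
    print(f"N={N:>7}: (mean - mode)/mean = {(mean - mode) / mean:.2e}")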
Weaknesses of the Boltzmann Entropy
The quantity Ω(E,V,N) has units of inverse energy, so that it is not proper to take its logarithm. This issue can easily be resolved by multiplying Ω by a constant with units of energy before taking the logarithm in Equation (15). Such a multiplicative factor adds an arbitrary constant to the individual entropies, but since it does not affect any thermodynamic prediction, it is not a serious problem.
The Boltzmann entropy can be shown not to be adiabatically invariant [3,5,10]. A further candidate, the Gibbs entropy, is based on the form S = −k_B Σ ρ ln ρ, where ρ is the probability of a microscopic state.
53. Hertz, P. Über die mechanischen Grundlagen der Thermodynamik. Ann. Phys. (Leipz.) 1910, 338, 225–274.
54. Khinchin, A.I. Mathematical Foundations of Statistical Mechanics; Dover: New York, NY, USA, 1949.
55. O. Penrose (Heriott-Watt University, Edinburgh, Scotland, UK) and J.-S. Wang (National University of Singapore, Singapore, Singapore) have both separately raised this point. Private communications (2015).
56. Touchette, H.; Ellis, R.S.; Turkington, B. An introduction to the thermodynamic and macrostate levels of nonequivalent ensembles. Physica A 2004, 340, 138–146.
57. Touchette, H. The large deviation approach to statistical mechanics. Phys. Rep. 2009, 478, 1–69.
58. Touchette, H. Methods for calculating nonconcave entropies. J. Stat. Mech. 2010, P05008, 1–22.
59. Touchette, H. Ensemble equivalence for general many-body systems. Europhys. Lett. 2011, 96, 50010.
60. Touchette, H. Equivalence and nonequivalence of ensembles: Thermodynamic, macrostate, and measure levels. J. Stat. Phys. 2015, 159, 987–1016.
CHAPTER 4
ENTROPY AND ITS CORRELATIONS WITH OTHER RELATED QUANTITIES
Jing Wu¹ and Zengyuan Guo²
¹School of Energy and Power Engineering, Huazhong University of Science & Technology, Wuhan 430074, China
²Key Laboratory for Thermal Science and Power Engineering of Ministry of Education, Department of Engineering Mechanics, Tsinghua University, Beijing 100084, China
ABSTRACT
In order to find more correlations between entropy and other related quantities, an analogical analysis is conducted between thermal science and other branches of physics. Potential energy in various forms is the product of a conserved extensive quantity (for example, mass or electric charge) and an intensive quantity which is its potential (for example, gravitational potential or electrical voltage), while energy in a specific form is a dissipative quantity during irreversible transfer processes (for example, mechanical or electrical energy will be dissipated as thermal energy). However, it has been
Citation: Wu, J., & Guo, Z. (2014). Entropy and its correlations with other related quantities. Entropy, 16(2), 1089-1100. (12 pages) Copyright: © Wu & Guo. Licensee MDPI, Basel, Switzerland. This is an open access article distributed under the Attribution 3.0 Unported (CC BY 3.0) license: https://creativecommons.org/licenses/by/3.0/.
shown that heat or thermal energy, like mass or electric charge, is conserved during heat transfer processes. When a heat transfer process is for object heating or cooling, the potential of the internal energy U is the temperature T, and its potential "energy" is UT/2 (called entransy, which is the simplified expression of thermomass potential energy); when a heat transfer process is for heat-work conversion, the potential of the internal energy U is (1 − T0/T), and the available potential energy of a system in reversible heat interaction with the environment is U − U0 − T0(S − S0); then T0/T and T0(S − S0) are the unavailable potential and the unavailable potential energy of a system, respectively. Hence, entropy is related to the unavailable potential energy per unit environmental temperature for heat-work conversion during reversible heat interaction between the system and its environment. Entropy transfer, like other forms of potential energy transfer, is the product of the heat and its potential, the reciprocal of temperature, although it is in the form of the quotient of the heat and the temperature. Thus, the physical essence of entropy transfer is the unavailable potential energy transfer per unit environmental temperature. Entropy is a non-conserved, extensive, state quantity of a system, and entropy generation in an irreversible heat transfer process is proportional to the destruction of available potential energy.
Keywords: potential energy; entransy; entropy; unavailable energy; thermomass
INTRODUCTION
It is well known that entropy has a wide use in the second-law analysis of engineering devices. However, its macroscopic physical meaning is thought to be difficult to understand, even for Prigogine, the Nobel Prize winner, who has indicated that [1] "...entropy is a very strange concept without hoping to achieve a complete description...". People are so confused about this concept that the macroscopic physical meaning of entropy is rarely discussed both in textbooks and the literature. By now, an answer to the physical meaning of entropy is mostly derived by considering the microscopic nature of matter, that is, entropy can be viewed as a measure of molecular disorder [2]. The more disordered a system becomes, the more entropy the system has. However, the microscopic interpretation is not adequate to show the essence of entropy for engineers and researchers who are more interested in the macroscopic significance of entropy related to the efficiency of heat-work conversion of a heat engine or/and the heat transfer irreversibility, etc.
This paper aims to further clarify more correlations between entropy and other related quantities within the framework of classical thermodynamics in terms of introducing the potential and potential energy of heat based on the analogy of thermodynamics/heat transfer with other branches of physics. It is hoped that the discussion will shed some light on the macroscopic physical meaning of entropy.
EXERGY AND ENTROPY
The macroscopic physical meaning of available energy (exergy) is easier to understand than that of entropy. Under the environmental conditions of T0 and p0, the available energy transfer accompanying heat transfer is the maximal possible useful work produced from heat, while the available energy of a system is the maximum useful work obtainable during a process that brings the system into equilibrium with the environment. The concept of entropy is more easily appreciated to some extent with the help of the concept of available energy. For instance, the product of entropy generation and the environmental temperature is the available energy destruction during an irreversible process with regard to the environmental dead state, and thus the entropy generation may represent the available energy destruction per unit environmental temperature.
In previous studies, there is a generally adopted notion that entropy is a measure of the unavailability of a system [3–5]. However, it has been shown that this notion holds only when the volume of the system is kept constant [6]. By separating the reversible interactions between a closed system and the environment into two forms, i.e., the reversible heat interaction and work interaction [7], as shown in Figure 1, the available energy of the system, Ex, can be expressed as [6]:
Figure 1. The sketch of energy exchange during the reversible (a) heat interaction and (b) work interaction between the system and the environment [6].
Ex = (U − U0) − A1 − A2, with A1 = T0(S − S0) and A2 = −p0(V − V0), (1)

where U0, S0, and V0 are respectively the internal energy, entropy and volume of the system at the dead state (a state in equilibrium with the environment), A1 is the unavailable energy during reversible heat interaction, where the volume of the system is kept constant and the useful work is converted from the transferred heat through numerous heat engines operating between the system and the environment indirectly, and A2 is the unavailable energy during reversible work interaction, where the useful work is derived directly by the expansion of the system itself. Note that the process of reversible heat interaction is different from the irreversible heat transfer process at a finite temperature difference.
It can be clearly seen from the term A1 in Equation (1) that the effect of entropy on the unavailability of the closed system displays only in the reversible process of heat interaction. That is, entropy is related to the unavailability of a system during reversible heat interaction between the system and the environment. In general, the value of T0 is taken as a typical value, such as 25 °C. In addition, for a closed system with designated composition, its entropy at the dead state, S0, is fixed. Under this condition, the more entropy a closed system has, the more unavailable energy there will be through reversible heat interaction only. Particularly, for a system with constant volume (for instance, an incompressible substance), there is no work interaction, that is, A2 = 0, and then T0(S − S0) is the unavailable energy of the system. It is shown that the environmental temperature or
the available energy serves the role of an auxiliary line, as in solving geometrical problems, to help clarify the macroscopic physical meaning of entropy.
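A minimal sketch of Equation (1) with assumed (illustrative) state and environment values, separating the unavailable part A1 tied to entropy:

# Closed-system available energy (exergy):
# Ex = (U - U0) + p0*(V - V0) - T0*(S - S0), where A1 = T0*(S - S0)
# is the unavailable energy of the reversible heat interaction.
T0, p0 = 298.15, 101_325.0        # environment: K, Pa

U, V, S = 5.0e5, 0.10, 1.2e3      # system state: J, m^3, J/K (assumed)
U0, V0, S0 = 4.2e5, 0.12, 1.0e3   # dead state: J, m^3, J/K (assumed)

A1 = T0 * (S - S0)                # unavailable part from heat interaction
Ex = (U - U0) + p0 * (V - V0) - A1
print(f"A1 = {A1 / 1e3:.1f} kJ, Ex = {Ex / 1e3:.1f} kJ")
# Entropy generation S_gen destroys available energy at the rate T0*S_gen,
# i.e., S_gen measures the available-energy destruction per unit T0.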
UNAVAILABLE POTENTIAL ENERGY PER UNIT ENVIRONMENTAL TEMPERATURE
Potentials and Potential Energies in Mechanics and Electrics
The term "potential energy" was coined by W. Rankine in the 19th century [8]. At present, as one of the most important forms of energy, the concept of potential energy has wide applications in the areas of mechanics and electrics.
In incompressible fluid mechanics, the gravitational potential energy of a liquid in a vessel with uniform cross section is MgH/2, where M is the mass of liquid and gH is the gravitational potential of the mass. Likewise, for an electric capacitor where the electrical potential increases in parallel with the charge stored in it, the resulting electrostatic potential energy is QeVe/2. It can be seen that the expressions of the gravitational and electrostatic potential energies have some common features. Each of them is the product of an extensive quantity and an intensive quantity. Moreover, in mass or charge transfer processes, each extensive quantity is conserved, and the intensive quantity serves as the transfer potential of its corresponding extensive quantity, as shown in Table 1. The gravitational potential energy transfer rate accompanying the mass transfer rate and the electrostatic potential energy transfer rate accompanying the charge transfer rate are ṀgH and Q̇eVe, respectively.
Table 1. Analogous parameters in mechanics, electrics and thermodynamics/heat transfer.
In both mass and charge transfer processes, there are two different types of extensive quantities: conserved and non-conserved ones. Mass and electric charge are conserved, while the corresponding potential energy is dissipative and non-conserved due to the presence of resistance. Here, “dissipative” means that during irreversible transport processes some part of the energy of higher grade must be dissipated into the same amount of energy of lower grade. For example, mechanical or electrical energy is dissipated into heat during viscous fluid flow or during charge transfer through a conductor. In each transfer process, if the resistance is large enough, the variation of the kinetic energy of the transferred mass or charge is negligible compared with its potential energy. Hence, the dissipation rate of potential energy, as listed in the last column of Table 1, may measure the irreversibility of each transfer process. Moreover, Table 1 shows the transfer law for each transfer process, which can be obtained from the momentum conservation equation in the absence of inertial forces, that is, when the driving force is in equilibrium with the resistive force. For fluid flow through a porous medium at constant velocity driven by the gravitational potential, Darcy’s law gives a linear relation between the flow rate and the potential gradient, which can be derived from the Navier–Stokes equations via homogenization [9] when neglecting the inertial terms. Similarly, Ohm’s law can be obtained from Newton’s second law when the inertial force can be neglected [10]. The common structure of these transfer laws is sketched below.
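The shared structure of the transfer laws in Table 1 can be expressed in a few lines; this sketch (illustrative only, with placeholder coefficients chosen by us) treats Darcy-like, Ohm-like, and Fourier-like transport as one linear flux-gradient law together with its potential-energy dissipation rate:

```python
# Generic linear transfer law (cf. Table 1): flux = -coefficient * gradient,
# valid when the driving force balances the resistive force (inertia negligible).
# Coefficient and gradient values are illustrative placeholders.

def flux(coefficient, potential_gradient):
    """Darcy / Ohm / Fourier all share this flux-gradient form."""
    return -coefficient * potential_gradient

def dissipation_rate(coefficient, potential_gradient):
    """Potential-energy dissipation rate per unit volume, c * (gradient)^2,
    e.g., k * (dT/dx)^2 for heat conduction (the entransy dissipation)."""
    return coefficient * potential_gradient**2

print(flux(400.0, -50.0))              # Fourier-like: k [W/(m K)] times dT/dx [K/m]
print(flux(5.9e7, -0.1))               # Ohm-like: conductivity [S/m] times dV/dx [V/m]
print(dissipation_rate(400.0, -50.0))  # always positive: a measure of irreversibility
```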
Potentials and Potential Energies in Thermodynamics/Heat Transfer

In thermodynamics, the internal energy, enthalpy, Gibbs free energy, and Helmholtz free energy are usually taken as thermodynamic potentials (in fact, they are thermodynamic potential energies), each of which can be expressed
as the product of a generalized force and a generalized displacement, and they are useful in the study of how a system attains thermodynamic equilibrium under various conditions. The thermodynamic potentials are thought to play a role similar, in some sense, to the potentials in mechanics and electrics [11], since a thermodynamic system tends towards lower values of the potential and the potential reaches an unchanging minimum value at equilibrium. Moreover, the changes of the thermodynamic potentials equal the maximum amount of work that can be extracted from a thermodynamic system in certain reversible processes. Notice, however, that unlike the mechanical or electrical potential energy, whose changes may measure the irreversibility of mass or charge transfer processes, the changes in internal energy, enthalpy, Gibbs free energy, and Helmholtz free energy during irreversible heat transfer processes cannot be used to measure the irreversibility of those processes. It is therefore desirable to introduce physical quantities with the physical meaning of potential energy for heat transfer processes.

The often-used method of analogy provides a good deal of guidance in introducing the potential energies in heat transfer and thermodynamics, since the heat transfer process has a transfer law similar to those for mass and charge transfer processes, as shown in Table 1. During a heat transfer process, heat is an extensive, conserved quantity revealing mass-like characteristics. Taking into account that the temperature T is usually taken as the potential of the transferred heat for heat transfer processes not involved in thermodynamic cycles [12–15], the potential energy transfer rate accompanying the heat transfer rate is Q̇hT, a product of the extensive quantity, Q̇h, and the intensive quantity, T. Like a mass capacitor (liquid in a vessel), which stores mass, M, and the gravitational potential energy, MgH/2, or an electric capacitor, which stores electrical charge, Qe, and the electrostatic potential energy, QeVe/2, a closed incompressible system at temperature T can be taken as a thermal capacitor that stores heat, that is, internal thermal energy, and the potential energy of the heat, which has been coined “entransy”, describing the ability of the system to heat or cool an object [16]. For object heating or cooling, the entransy of a closed system is UT/2 or McvT²/2, determined by the integration ∫ from 0 to T of McvT′ dT′, as shown in Table 1. Here, U = McvT is the internal thermal energy of the system, M is the mass, and cv is the specific heat at constant volume. The entransy, or the potential energy of heat, of a closed system is a state quantity (or state function). When the properties of the system are not uniform, the entransy
of the system can be determined by the integral form G = ∫V (ρcvT²/2) dV, where V is the volume of the system and ρ is the density.
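As a small illustration (values assumed by us, not taken from the original), the integral form of the entransy can be evaluated numerically for a nonuniform temperature profile:

```python
import numpy as np

# Entransy of a closed system with nonuniform temperature, on a 1-D slab
# of unit cross-section: G = ∫ rho * c_v * T^2 / 2 dV. Values are illustrative.
rho, c_v, L = 1000.0, 4186.0, 1.0
x = np.linspace(0.0, L, 1001)
T = 300.0 + 50.0 * x / L                   # assumed linear temperature profile [K]

G = np.trapz(rho * c_v * T**2 / 2.0, x)    # entransy [J K]
U = np.trapz(rho * c_v * T, x)             # internal thermal energy [J]
T_mean = U / (rho * c_v * L)
print(G, U * T_mean / 2.0)  # G >= U*T_mean/2, with equality only for uniform T
```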
The entransy of a closed system, UT/2, is in units of J·K, which differs from the expression of the potential energy of thermomass in units of J; that is, UT/2 is related to, but not identical with, the potential energy of thermomass [17]. Like mass transfer through a porous medium, where gravitational potential energy is dissipated, or charge transfer through a conductor, where electrostatic potential energy is dissipated, the potential energy of heat is also dissipated during heat transfer. As shown in Table 1, for a heat transfer process with the purpose of object heating or cooling, the entransy dissipation rate, with the expression k(∇T)², can be taken as a measure of its irreversibility [16–20]. The entransy transfer rate, Q̇hT, the entransy, UT/2, and its dissipation rate can be derived from the energy conservation equation. For heat conduction in a closed system without heat-work conversion and without any heat source, the energy conservation equation per unit of system volume is:

ρcv ∂T/∂t = −∇·q̇h    (2)

where t is the time and q̇h is the heat flux. Multiplying Equation (2) by the temperature, T, leads to:

ρcv T ∂T/∂t = −T ∇·q̇h    (3a)

Integrating Equation (3a) over the whole volume of the closed system, we have:
∫V ρcv T (∂T/∂t) dV = −∫V T ∇·q̇h dV    (3b)

which was called the entransy balance equation in Reference [16]. The left-hand term of Equation (3b) can be rewritten as:
∫V ρcv T (∂T/∂t) dV = d/dt ∫V (ρcvT²/2) dV    (4a)

Furthermore, Gauss’s theorem gives:
−∫V T ∇·q̇h dV = −∮S (T q̇h)·n⃗ dS + ∫V q̇h·∇T dV    (4b)

where S is the boundary surface of the closed system. Substituting Equations (4a) and (4b) as well as Fourier’s law, q̇h = −k∇T, into Equation (3b), we obtain the entransy balance equation as:
d/dt ∫V (ρcvT²/2) dV = −∮S (T q̇h)·n⃗ dS − ∫V k(∇T)² dV    (3c)

where k is the thermal conductivity. The left-hand term in Equation (3c) is the time variation of the potential energy of heat stored in the closed system. The first term on the right is the transfer rate of the potential energy of heat, or the entransy transfer rate, expressed as Q̇hT, across the system boundary. The second term on the right is the total entransy dissipation rate within the system boundary, which resembles the dissipation rate of electrostatic potential energy in an electrical conduction system and that of gravitational potential energy in fluid flow through a porous medium. Equation (3c) indicates that during a heat transfer process without volume variation, entransy is non-conserved and is dissipated due to the irreversibility induced by thermal resistance, although heat is conserved, revealing its mass-like characteristics. A numerical check of this balance is sketched below.

On the other hand, if what concerns us is the amount of useful work we can extract from the transferred heat, the transfer rate of the potential energy of heat accompanying the heat transfer rate is nothing but the available energy transfer rate Q̇h(1 − T0/T) [= −∮S (q̇h(1 − T0/T))·n⃗ dS], where (1 − T0/T) can be regarded as the available potential of heat for doing work. Correspondingly, T0/T is the unavailable potential of heat for doing work, and Q̇hT0/T is the unavailable potential energy transfer rate. Furthermore, 1/T and Q̇h/T are respectively the unavailable potential and the unavailable potential energy transfer rate per unit environmental temperature, as shown in Table 1. Note that the entropy transfer rate Q̇h/T is essentially the product of Q̇h and 1/T, although it is in the form of the quotient of the heat transfer rate Q̇h and the temperature T. From the point of view of nonequilibrium thermodynamics, a thermodynamic force is proportional to the gradient of a potential [21], and ∇(1/T) is the thermodynamic force conjugate to the heat flux, q̇h, in a heat transfer process [22], manifesting that 1/T is a thermodynamic potential.
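A minimal numerical check of Equation (3c), assuming 1-D conduction in an insulated slab (so the boundary term vanishes) and illustrative material properties of our own choosing:

```python
import numpy as np

# Check the entransy balance, Equation (3c), for 1-D conduction:
# d/dt ∫ rho*c_v*T^2/2 dx = -[T*q_h]_boundary - ∫ k*(dT/dx)^2 dx,  q_h = -k dT/dx.
rho, c_v, k = 8000.0, 500.0, 50.0
n, L = 200, 1.0
dx = L / n
x = (np.arange(n) + 0.5) * dx
T = 300.0 + 100.0 * np.exp(-((x - L / 2) ** 2) / 0.01)   # hot spot in the middle

alpha = k / (rho * c_v)
dt = 0.2 * dx**2 / alpha                                  # stable explicit step

def entransy(T):
    return np.sum(rho * c_v * T**2 / 2.0) * dx

G0 = entransy(T)
# one explicit time step with insulated (zero-flux) boundaries:
q = -k * np.diff(T) / dx                                  # interior heat fluxes
dTdt = -np.diff(np.concatenate(([0.0], q, [0.0]))) / (rho * c_v * dx)
G1 = entransy(T + dt * dTdt)

dissipation = np.sum(k * (np.diff(T) / dx) ** 2) * dx     # ∫ k (dT/dx)^2 dx
print((G1 - G0) / dt, -dissipation)  # the two sides agree to discretization error
```

With insulated boundaries the entransy transfer term is zero, so the stored entransy decreases at exactly the dissipation rate, while the total thermal energy stays constant, which is the conserved/non-conserved contrast the text describes.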
Correspondingly, the potential energy for heat-work conversion of an incompressible system is ∫ from U0 to U of (1 − T0/T) dU, where U0 is the internal energy of the system at the dead state. Note that the lower limit of this integral is U0 rather than 0 because the system must be in thermodynamic equilibrium with the environment at the end of the process when calculating the available energy of the system. For the reversible heat interaction only (i.e., dV = 0 and dU = δQh), the integral result is:

∫ from U0 to U of (1 − T0/T) dU = (U − U0) − T0(S − S0)    (5)

which means that U − U0 − T0(S − S0) and T0(S − S0) are respectively the available and unavailable potential energies of a system for heat-work conversion. Hence, (S − S0) is the unavailable potential energy of the system per unit environmental temperature. As mentioned above, when S0 is fixed, the value of the entropy of a system may represent the unavailable potential energy per unit environmental temperature. The destruction rate of the available potential energy, or the generation rate of the unavailable potential energy, can be used to measure the irreversibility of a heat transfer process whose purpose is heat-work conversion. It is worth noting that the unavailable potential energy always increases, since the unavailable potential, i.e., 1/T, always increases in irreversible heat transfer processes. The entropy generation rate is just the generation rate of the unavailable potential energy per unit environmental temperature. In nonequilibrium thermodynamics, Fourier’s law in the entropy form can be written in terms of the thermodynamic force as q̇h = Lqq∇(1/T), where Lqq is a phenomenological coefficient equal to kT². The entropy generation rate is then the product of the flux, q̇h, and the thermodynamic force, ∇(1/T), with the expression q̇h·∇(1/T) = k(∇T)²/T²
[23].
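For completeness, the one-line derivation behind Equation (5) can be spelled out (this reasoning is implicit in the text):

```latex
% Reversible heat interaction only: dV = 0, so dU = \delta Q_h and
% dS = \delta Q_h / T = dU / T.
\int_{U_0}^{U}\left(1-\frac{T_0}{T}\right)\mathrm{d}U
  = (U-U_0) - T_0\int_{U_0}^{U}\frac{\mathrm{d}U}{T}
  = (U-U_0) - T_0\int_{S_0}^{S}\mathrm{d}S
  = (U-U_0) - T_0\,(S-S_0).
```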
It may seem that the concept of unavailable energy is unique to thermodynamic systems, since it is rarely mentioned or used in other kinds of systems, but an analogue exists in mechanics and electrics. Consider a circuit consisting of a switch, S, and two identical capacitors with the same capacitance C but different electrical potentials, where the No. 1 capacitor’s electrical potential, U1, is higher than that of No. 2, U2. When the switch is closed, charge flows from the No. 1 capacitor to the No. 2 capacitor until U1 = U2. The potential energy initially stored in the No. 1 capacitor
is CU1²/2, which cannot be entirely transferred to the No. 2 capacitor when the charging process ends, since U2 ≠ 0; the energy C(U1 + U2)²/8 remaining in each capacitor is in fact the unavailable potential energy. This example shows that the unavailable energy is essentially due to the non-zero reference point of the potentials. Since people are used to taking the reference point as zero when analyzing mechanical and electrical systems, the concept of unavailable energy is of little value and is not needed under those conditions. But for thermodynamic systems, the reference point, that is, the environmental temperature, T0, is non-zero, and the concept of unavailable energy becomes indispensable.

The analogies among mass transfer, charge transfer, and thermodynamics/heat transfer show that heat exhibits mass-like characteristics, since it is conserved in heat transfer processes without volume change. Apart from the method of analogy, the mass nature of heat can be further understood through the recently proposed thermomass theory [24]. Thermomass is the equivalent mass of thermal energy calculated from Einstein’s mass-energy relation. Heat conduction can be treated as the motion of thermomass in the medium driven by the gradient of the thermal potential, i.e., the temperature [24–29]. The momentum equation for the thermomass can be established based on Newtonian mechanics, and it reduces to Fourier’s law when the inertial force can be neglected, revealing that the physical essence of Fourier’s law is the balance between the driving force and the resistive force acting on the thermomass [24], analogous to the other transfer laws for mass and charge transfer processes listed in Table 1.

Figure 2 summarizes the relations among the quantities discussed for understanding the physical meaning of entropy. In thermodynamics we have to distinguish state quantities (pressure p, temperature T, internal energy U, entropy S, etc.) from process quantities (heat Q, work W, etc.). The state quantities can be divided into two kinds: extensive quantities (internal energy U, entropy S, etc.) and intensive quantities (pressure p, temperature T, etc.). The transferred heat Q can be divided into available and unavailable parts (Q(1 − T0/T) and QT0/T, respectively) when the transferred heat is used for heat-work conversion, where QT0/T is the unavailable energy transfer. Thus it can be seen that Q/T, usually referred to as the entropy transfer, is the unavailable potential energy transfer per unit environmental temperature. Therefore, though the entropy transfer is in the form of the quotient of the heat transfer and the temperature, its physical essence, like that of other forms of potential energy transfer, is the product of an extensive quantity, i.e., the heat transfer Q, and its unavailable potential, the reciprocal of temperature, 1/T.
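A quick numerical check of the two-capacitor example above (capacitance and potentials are arbitrary illustrative values):

```python
# Two identical capacitors C at potentials U1 > U2, connected by a switch.
# After equilibration both sit at (U1+U2)/2; each then stores C*(U1+U2)^2/8.
C, U1, U2 = 1.0, 10.0, 2.0                # arbitrary illustrative values

E_initial = C * U1**2 / 2 + C * U2**2 / 2
U_final = (U1 + U2) / 2                   # charge conservation: C*U1 + C*U2 = 2*C*U_final
E_final = 2 * (C * U_final**2 / 2)        # = C*(U1+U2)**2 / 4
E_unavailable = C * (U1 + U2) ** 2 / 8    # energy stranded in each capacitor
E_dissipated = E_initial - E_final        # = C*(U1-U2)**2 / 4, lost in the resistance

print(E_initial, E_final, E_unavailable, E_dissipated)  # 52.0 36.0 18.0 16.0
```

The stranded energy C(U1 + U2)²/8 vanishes only if the reference potential is zero, which is exactly the point of the analogy with the non-zero environmental temperature T0.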
For an incompressible system doing work through the reversible heat interaction only, its potential energy (available energy) is U − U0 − T0(S − S0), where T0(S − S0) is the unavailable potential energy of the system and S − S0 is the unavailable potential energy per unit environmental temperature. Therefore, the entropy of the system can be understood as the unavailable potential energy of the system per unit environmental temperature for heat-work conversion under the reversible heat interaction alone between the system and its environment.

The analogies with mechanics and electrics show that the unavailable energy of heat or of a system is essentially due to the non-zero reference point of the potential, that is, the non-zero environmental temperature, T0 ≠ 0. The concept of available energy can help clarify the macroscopic physical meaning of entropy, like an auxiliary line added to a diagram to help in the proof of a geometrical problem. For instance, the entropy generation is the available energy destruction per unit environmental temperature, and the entropy transfer is the unavailable potential energy transfer per unit environmental temperature, and so on. Entropy, as a state function of a system, is an extensive, non-conserved quantity that is related to the unavailable part of the internal energy during the reversible heat interaction between the system and the environment with T0 ≠ 0.
ACKNOWLEDGMENTS

This research was supported by the National Natural Science Foundation of China (51206079).
REFERENCES

1. Prigogine, I. What is entropy? Naturwissenschaften 1989, 76, 1–8.
2. Schwabl, F.; Brewer, W. Statistical Mechanics, 2nd ed.; Springer: Berlin, Germany, 2006; p. 36.
3. Kirwan, A.D. Mother Nature’s Two Laws.

BEYOND BOLTZMANN–GIBBS–SHANNON IN PHYSICS AND ELSEWHERE

For vast classes of complex systems, the number of admissible microscopic configurations increases non-exponentially with the number of particles or of degrees of freedom. Such systems exhibit generic power laws in their properties, and the index q satisfactorily characterizes them. These power laws emerge over large domains of variation of the external parameters acting on the system, in contrast with, say, traditional critical phenomena, which emerge only at special values of these parameters, whether they are tuned from the outside (like most second-order phase transitions) or self-tuned (like in self-organized criticality). After deeper thinking involving all the elements described above, it is not particularly surprising that most of the applications beyond BG concern the q-entropy and q-statistics. Indeed, most of those applications, in physics and elsewhere, appear to accommodate satisfactorily with nonadditivity, trace-form, composability, and discriminating power, and Sq happens to be the unique (see [204]) entropic functional simultaneously having all of those
properties. As a mathematical curiosity, let us mention that all but two of the entropies mentioned in this review generalize that of BG. Both exceptions are of the exponential form, namely the Curado entropy [30,31] and the entropy (17), which do not recover the BG form for any of their particular instances. They therefore constitute not proper generalizations but rather alternatives to the BG entropy. Their full physical interpretation remains an interesting challenge, especially due to their property that, for equal probabilities, they asymptotically behave like Sq for q > 1.
FURTHER CONNECTIONS

Thermodynamical Background

Some generic considerations are relevant at this point. Even when we face situations where many of the usual thermodynamical quantities scale with size in a non-traditional manner (for example, it is well known that microscopic long-range interacting two-body couplings in a conservative macroscopic system yield a superextensive internal energy), the extensivity of entropy must prevail. This is a nontrivial consequence of at least two interrelated requirements, namely to preserve in all cases the Legendre-transformation structure of classical thermodynamics [37,40] and to conform to the available illustrations of an extended form of large deviation theory in the presence of long-range correlated random variables within a certain class [220,221,222]. To be concrete, let us assume that we have a classical many-body d-dimensional Hamiltonian with two-body interactions whose potential decays with distance r as 1/r^α (α ≥ 0); the corresponding Legendre structure can be written as [37,40,223]
G(N, T, p, μ, H) = U − TS + pV − μN − HM    (21)

where V ∝ L^d; T, p, μ, and H are the temperature, pressure, chemical potential, and external magnetic field; and U, S, V, N, M are the internal energy, entropy, volume, number of particles, and magnetization. L is the linear size of the system, and the exponent θ with which the pseudo-intensive variables scale (as L^θ) depends on α and d (see Figure 7). As we can verify, the total entropy S belongs to the same thermodynamical class as
(V, N, M), and is therefore extensive for arbitrary values of α. A Hamiltonian system that belongs to the above scenario is the α-XY inertial ferromagnetic d-dimensional model [90]. Its non-Boltzmannian one-particle distributions of momenta and of energies are exhibited in Figure 8. In the same figure, we display the numerical results corresponding to an asymptotically scale-free d-dimensional network model [16]. We see here that both the thermal and the geometrical model exhibit the interesting α/d scaling. The same happens for the α-Heisenberg inertial ferromagnet [100], the α-Fermi-Pasta-Ulam model [101,102,103,104], and other asymptotically scale-free networks [17,18], thus exhibiting the ubiquity of this grounding scaling law.
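The superextensivity of the energy for long-range couplings can be made concrete with the standard estimate obtained by integrating 1/r^α over a d-dimensional body of N sites; the factor R(N) in the sketch below is that standard estimate (the function name R is ours, for illustration):

```python
import numpy as np

# R(N) ~ N^{1 - alpha/d}: growth of the energy per particle for two-body
# couplings decaying as 1/r^alpha, from integrating 1/r^alpha over N sites.
def R(N, alpha_over_d):
    if np.isclose(alpha_over_d, 1.0):
        return np.log(N)                                   # marginal case
    return (N**(1.0 - alpha_over_d) - 1.0) / (1.0 - alpha_over_d)

N = 10**6
for a in (0.0, 0.5, 1.0, 2.0):
    print(f"alpha/d = {a}:  U/N ~ R(N) = {R(N, a):.3f}")
# alpha/d < 1: R(N) diverges with N  -> superextensive energy (long range)
# alpha/d > 1: R(N) -> finite        -> standard extensive energy (short range)
```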
Figure 7. Representation of the different size-scaling regimes for classical d-dimensional systems with two-body interactions decaying as 1/r^α, where 0 ≤ α/d ≤ 1 corresponds to the long-range regime. In that regime there are three classes of thermodynamic variables, namely, those scaling with L^θ, named pseudo-intensive (L is a characteristic linear length, θ is a system-dependent parameter), those scaling with L^(d+θ), the pseudo-extensive ones (the energies), and those scaling with L^d (which are always extensive). For short-range interactions (i.e., α/d > 1) we have θ = 0, and the energies recover their standard L^d extensive scaling, falling in the same class as S, N, V, etc., whereas the previously pseudo-intensive variables become truly intensive ones (independent of L); this is the region, with only two classes of variables, that is covered by the traditional textbooks of thermodynamics. From [37,40,50,223].
Figure 8. Top and middle: α-XY d-dimensional ferromagnet (for d = 1, 2, 3). The time averages are done within the intervals indicated in the insets. Top left: illustration of the q-Gaussian distributions of momenta (for comparison, the BG Gaussian is shown by a dashed line). Top right: illustration of the q-exponential distributions of energies (for comparison, the BG weight is shown by a dashed line). Middle left: the α/d-dependence of the index qp. Middle right: the α/d-dependence of the index qE. We verify that above the critical value α/d = 1, a region exists for which we numerically observe q > 1. It is not clear whether this reflects the finiteness of the system size N and/or of the interval within which the time average is performed, and/or of the time t elapsed before starting the time average. Only (up
to now intractable) analytical results or extremely heavy numerical calculations, taking the limits N → ∞ and t → ∞ along appropriately scaled paths, could settle the question. From [97]. Bottom: asymptotically scale-free d-dimensional network (for d = 1, 2, 3, 4). The distribution of degree k is of the q-exponential form ∝ e_q^(−k/κ). Bottom left: the α/d-dependence of the index q. The red dot indicates the Barabási–Albert (BA) universality class q = 4/3 [19,22], which is here recovered as a particular instance. Bottom right: the α/d-dependence of the characteristic degree “temperature” κ. From [16]. In all cases, the BG (q = 1) description naturally emerges numerically for α/d → ∞, and even before.
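The q-exponential entering the degree distribution of Figure 8 is e_q^x ≡ [1 + (1 − q)x]^(1/(1−q)) where 1 + (1 − q)x > 0 (and zero otherwise), recovering e^x for q → 1. A minimal sketch (parameters chosen by us for illustration) showing its power-law tail with the Barabási–Albert exponent:

```python
import numpy as np

def exp_q(x, q):
    """q-exponential: [1 + (1-q)x]^(1/(1-q)) where positive, else 0; e^x at q=1."""
    if np.isclose(q, 1.0):
        return np.exp(x)
    base = 1.0 + (1.0 - q) * x
    return np.where(base > 0.0, base ** (1.0 / (1.0 - q)), 0.0)

k = np.logspace(0, 4, 50)
q, kappa = 4.0 / 3.0, 10.0                 # Barabási-Albert class: q = 4/3
p = exp_q(-k / kappa, q)                   # degree distribution, unnormalized
# For k >> kappa:  p(k) ~ k^(-1/(q-1)) = k^(-3), the familiar BA exponent.
slope = np.polyfit(np.log(k[-10:]), np.log(p[-10:]), 1)[0]
print(f"tail exponent = {slope:.2f}  (expected -1/(q-1) = {-1/(q-1):.2f})")
```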
Before proceeding, let us clarify why the statistical mechanical description of scale-free networks appears as a particular instance of q-statistics. If we associate each link with an effective “energy” ε, we may consider that ε/2 contributes to each of the two nodes connected by that link. Consequently, the total effective energy associated with each node is just half the sum of the effective energies corresponding to all the bonds arriving at that node. That node total energy is therefore proportional to the degree k of that node, and the energy of the entire system is the sum of all the node energies. We can then handle this system energy as a many-body Hamiltonian, and its generalized canonical distribution is therefore given by the q-exponential distribution of the form p(k) ∝ e_q^(−k/κ) for k >> 1. Indeed, in the optimization of the q-entropy, the traditional internal energy constraint is replaced by the constraint on the mean value of the degree, i.e., on ⟨k⟩. Consequently, the quantity noted κ in Figure 8 plays precisely the role of an effective “temperature”. The size N of the network plays, in what concerns the stationary state, the same role as the number of particles of a many-body Hamiltonian (e.g., the number of rotators in the α-XY model, also illustrated in Figure 8). This shows that sentences such as “growth and preferential attachment are needed simultaneously to reproduce the stationary power-law distribution observed in real networks” [20] represent some sort of inadvertence. In contrast with preferential attachment, growth appears not to be indispensable: neat numerical q-exponentials emerge for the degree distribution of non-growing, gas-like networks [21].

Let us focus now on the intriguing facts illustrated in the middle (left and right) plots of Figure 8. The issue is why qp and qE seem to attain the BG regime (q = 1) only for very large values of α/d, and not from α/d = 1 on.
Indeed, the thermodynamical variables indicated in Equation (21), as well as the dynamical sensitivity to the initial conditions (more precisely, the value of α/d below which the largest Lyapunov exponent vanishes [90,91]), strongly indicate α/d = 1 as the frontier between the regions of validity of BG statistics and q-statistics for Hamiltonian systems. This important point still remains open; it presumably involves the finite size (N) of the simulated systems and the finite time (t) of their evolution (in addition to other numerical-precision features related to the algorithm implemented to integrate the Newtonian differential equations). The mathematical validity of q-statistics is expected to require, due to subtle nonuniform convergences, the simultaneous limits N → ∞ and t → ∞, taken along appropriately scaled paths with t diverging with N. Such a scaling would imply that, for the increasingly large values of N (e.g., N ~ 10^6) that we computationally use, unaffordably large times t would be necessary before achieving the virtually stationary thermostatistical state of the system. In any case, for the entire range of α/d, q-Gaussians for the momenta distributions and q-exponentials for the energy distributions appear to be excellent approximations, as we can verify in the left and right insets of Figure 8 (middle), respectively.

In addition to the thermodynamical scalings discussed above in terms of the Legendre-transform structure, another strong indication that the extensivity of the entropy must be preserved in all circumstances (even at the price of abandoning the additivity of the entropic functional; analogously, the passage from Newtonian to Einstein mechanics, in order to achieve a theory consistently unifying classical mechanics and Maxwell electromagnetism, required the small price of abandoning the Galilean additivity of velocities in one-dimensional motion, replacing it by the relativistic composition of velocities v13 = (v12 + v23)/(1 + v12v23/c²)) is consistent with the numerical discussion of a nontrivial example within large deviation theory in the presence of long-range correlated random variables [220,221,222]. In Table 1, we have summarised typical choices of entropic functionals corresponding to classes of non-vanishing-probability occupation of phase space for an increasing number of elements (or number of degrees of freedom) N.
Table 1. Illustration of popular entropic functionals and the classes of systems to which they apply in order to exhibit thermodynamic extensivity. Additivity/nonadditivity depends only on the entropic functional, whereas extensivity/nonextensivity also depends on the system. An example of the power-law class can be seen in [64]; Sδ yields extensivity for the stretched-exponential family; another possibility is SHTc,d [41] with adequate values of (c,d). It is important to also note that the behavior of W with N → ∞ does not univocally determine the class of entropies yielding thermodynamic extensivity: the values indicated here for q and δ are valid under the strong hypothesis of equiprobability. To make this point transparent, we can check, for the quantum critical point of a d = 1 system characterized by the central charge c [108], that the entropy Sq is extensive for q = (√(9 + c²) − 3)/c. However, if we wrongly assumed equal probabilities for that state, we would be led to a different, incorrect value of q.
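The equiprobability entries of Table 1 are easy to verify numerically: for W(N) growing exponentially, the BG entropy is extensive, whereas for power-law W(N) = N^ρ it is Sq, with q = 1 − 1/ρ, that grows linearly with N. A sketch (assuming equal probabilities p_i = 1/W; parameter values are ours):

```python
import numpy as np

def S_BG(W):
    return np.log(W)                       # k_B = 1

def S_q(W, q):
    # equal probabilities: S_q = ln_q W = (W^(1-q) - 1) / (1 - q)
    return (W**(1.0 - q) - 1.0) / (1.0 - q)

rho = 3.0
q = 1.0 - 1.0 / rho                        # the value restoring extensivity
for N in (10.0, 100.0, 1000.0):
    W_exp, W_pow = 2.0**N, N**rho
    print(N, S_BG(W_exp) / N,              # constant: BG extensive for exponential W
          S_q(W_pow, q) / N)               # tends to a constant: S_q extensive for power-law W
```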
q-Triplets

If we have a complex system characterized by some nonadditive entropy (Sq, SHTc,d, or any other), the entropic indices are expected to be different for essentially different properties (they are typically equal only for BG systems, i.e., for q = 1). This fact has indeed been verified for q-systems such as the solar wind [144], the ozone layer [164], El Niño [165], and many others [156,166]. These various values for q appear to be isomorphic to the set of integer numbers; the central elements of this set constitute what is frequently referred to as the q-triplet. For a given system, only a small number of those values are apparently independent, all the others being related to those few through analytic connections that are still only partially known. Although some meaningful progress has been achieved [64,224], the full scenario constitutes an open problem, even if it is by now clear that it is quite similar to the universality classes of critical phenomena.
In principle, the q indices ought to be obtained from first principles, i.e., from the microscopic dynamics of the system. However, such analytical calculations are frequently mathematically intractable. This is the only reason why, in many examples in the literature, the indices q are determined through fitting. Some analytical results are nevertheless available, such as the case indicated in Figure 9.
Figure 9. The index q has been determined [108] from first principles, namely from the universality class of the Hamiltonian. The values c = 1/2 and c = 1 respectively correspond to the Ising and XY ferromagnetic chains in the presence of a transverse field at the T = 0 critical point [225,226]. In the c → ∞ limit, the BG value q = 1 is recovered. For a given central charge c, the subsystem nonadditive entropy Sq is thermodynamically extensive for, and only for, q = (√(9 + c²) − 3)/c (hence q → 1 when c → ∞). This expression attains some special values: for c = 4 we have q = 1/2, and for c = 6 we have q = 1/φ = (√5 − 1)/2, where φ is the golden mean. Let us emphasize that this anomalous value of q occurs only at precisely the zero-temperature second-order quantum critical point; anywhere else, the usual short-range-interaction BG behavior (i.e., q = 1) is valid. From [227].
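The special values quoted in the caption follow directly from q = (√(9 + c²) − 3)/c; a quick check:

```python
from math import sqrt

def q_of_c(c):
    # Entropic index making S_q extensive at the quantum critical point
    # of central charge c [108].
    return (sqrt(9.0 + c**2) - 3.0) / c

phi = (1.0 + sqrt(5.0)) / 2.0             # golden mean
print(q_of_c(4.0))                        # 0.5
print(q_of_c(6.0), 1.0 / phi)             # both ~0.618...
print(q_of_c(1e9))                        # -> 1 (BG) as c -> infinity
```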
CONCLUSIONS AND PERSPECTIVES

We have argued that the additivity of the BG entropy accommodates extremely well, as is extensively known, those many systems whose elements are generically not very strongly correlated and/or whose nonlinear dynamics are governed by strong chaos, meaning that their dynamics are associated with a sensitivity to initial conditions that diverges exponentially with time. But it fails for those complex systems that do not satisfy such requirements, violating, in particular, the thermodynamic extensivity expected (to satisfy the Legendre structure of classical thermodynamics and to yield an appropriately generalized theory of large deviations) for their
entropy, and whose dynamical sensitivity to initial conditions diverges subexponentially with time. In contrast, nonadditive entropies going beyond the BG entropy do solve these difficulties. This is consistent with Einstein’s 1910 remark [227,228] stating that the equal-probability BG entropy is the only reasonable one if we assume additivity, but he said nothing about additivity itself being necessary! (In fact, the Einstein requirement of likelihood factorization, an epistemologically sound principle, is valid for all values of q, a remarkable property related to the fact that the q-entropy is the unique trace-form composable entropy [204].) Notice also that if we do not require the trace form, the Rényi entropy also satisfies, for all values of q, the Einstein likelihood factorization principle. At this point, let us emphasize that the loss of entropic additivity is a rather small price to pay in order not to lose entropic extensivity, thus preserving thermodynamics and its Legendre-transform structure, in agreement with Einstein’s celebrated declaration [229].

Although many entropy-based theories are available in the literature that generalize or extend the BG form, those that have up to now provided neat, verifiable applications to natural, artificial, and social systems are restricted to a few of them. Those few include the q-entropy, Rényi entropy, Kaniadakis entropy, and Beck–Cohen superstatistics. Forms that repeatedly emerge in such applications, thus extending the BG exponential form, are q-exponentials, κ-exponentials, and stretched exponentials, which describe, among other systems, asymptotically scale-free networks.

Let us close by mentioning an ensemble of intriguing, interrelated open problems whose clarification would be welcome in both pure and applied sciences and technologies. We may cite the analytic dependencies on α/d of the q indices associated with the stationary (or quasi-stationary) time-averaged distributions of velocities and of energies, as well as their connection with the dynamical sensitivity to the initial conditions, in long-range interacting many-body classical Hamiltonian systems. Similarly, what characterizes the universality classes associated with low-dimensional conservative maps? Another central question that needs further progress is: What are the values of the various interrelated q-indices (to be obtained from first principles whenever the involved mathematics is tractable) corresponding to a given class of complex systems, and how many of those indices are expected to be independent
[224]? In addition, it would be very valuable to extend the present q-central limit theorem [206] by establishing the necessary and sufficient conditions for q-Gaussians and other deformed Gaussians (e.g., Kaniadakis κ-Gaussians, and the (q,α)-stable distributions [207]) to be attractors in the space of distributions, and analogously in what concerns the large deviation theory directly related to q-exponential and possibly other deformed-exponential distributions. Another deep problem that needs further clarification concerns the first-principle criteria that indicate the applicability, for a given system, of BG statistics, q-statistics, Kaniadakis κ-statistics, or something else. Such highly nontrivial questions have not yet been transparently elucidated, not even for BG statistical mechanics [230]!
FUNDING

Partially supported by the Brazilian agencies Faperj and CNPq.
ACKNOWLEDGMENTS

I have benefited from fruitful discussions with N. Ay, C. Beck, E.P. Borges, E.M.F. Curado, H.J. Jensen, F.D. Nobre, A. Plastino, A.R. Plastino, S. Thurner, and U. Tirnakli. I also thank the courtesy of the authors of the figures reproduced in the present review.
REFERENCES AND NOTES

1. Boltzmann, L.: “… two homogeneous substances: it will be possible to express the energy as the sum of the energies of the two substances only if we can neglect the surface energy of the two substances where they are in contact. The surface energy can generally be neglected only if the two substances are large enough; otherwise the surface energy plays a non-negligible role.”
2. Majorana, E. The value of statistical laws in physics and social sciences. The original manuscript in Italian was published by G. Gentile Jr. in Scientia 36, 58 (1942), and was translated into English by R. Mantegna in 2005: “This is mainly because entropy is an additive quantity as the other ones. In other words, the entropy of a system composed of several independent parts is equal to the sum of entropy of each single part. […] Therefore one considers all possible internal determinations as equally probable. This is indeed a new hypothesis because the universe … is subjected to continuous transformations. We will therefore admit as an extremely plausible working hypothesis, whose far consequences … Under this hypothesis, the statistical ensemble associated to each macroscopic state A …”
3. Cartwright, J. Roll over, Boltzmann. Phys. World 2014, 27, 31–35.
4. A Regularly Updated Bibliography. Available online: http://tsallis.cat.cbpf.br/biblio.htm (accessed on 14 July 2019).
16. Brito, S.G.A.; da Silva, L.R.; Tsallis, C. Role of dimensionality in complex networks. Sci. Rep. 2016, 6, 27992.
17. Nunes, T.C.; Brito, S.; da Silva, L.R.; Tsallis, C. Role of dimensionality in preferential attachment growth in the Bianconi-Barabasi model. J. Stat. Mech. 2017, 2017, 093402.
18. Brito, S.; Nunes, T.C.; da Silva, L.R.; Tsallis, C. Scaling properties of d-dimensional complex networks. Phys. Rev. E 2019, 99, 012305.
19. Barabási, A.L.; Albert, R. Emergence of scaling in random networks. Science 1999, 286, 509.
20. Albert, R.; Barabási, A.L. Statistical mechanics of complex networks. Rev. Mod. Phys. 2002, 74, 47.
21. Thurner, S.; Tsallis, C. Nonextensive aspects of self-organized scale-free gas-like networks. Europhys. Lett. 2005, 72, 197.
22. Emmerich, T.; Bunde, A.; Havlin, S. Structural and functional properties of spatially embedded scale-free networks. Phys. Rev. E 2014, 89, 062806.
23. Borges, E.P.; Roditi, I. A family of non-extensive entropies. Phys. Lett. A 1998, 246, 399.
24. Kaniadakis, G. Non linear kinetics underlying generalized statistics. Phys. A 2001, 296, 405.
25. Kaniadakis, G. Statistical mechanics in the context of special relativity. Phys. Rev. E 2002, 66, 056125.
26. Kaniadakis, G.; Lissia, M.; Scarfone, A.M. Deformed logarithms and entropies. Phys. A 2004, 340, 41.
27. Kaniadakis, G. Statistical mechanics in the context of special relativity. II. Phys. Rev. E 2005, 72, 036108.
28. Abe, S. A note on the q-deformation theoretic aspect of the generalized entropies in nonextensive physics. Phys. Lett. A 1997, 224, 326.
29. Landsberg, P.T.; Vedral, V. Distributions and channel capacities in generalized statistical mechanics. Phys. Lett. A 1998, 247, 211.
30. Curado, E.M.F. General aspects of the thermodynamical formalism. Braz. J. Phys. 1999, 29, 36.
31. Curado, E.M.F.; Nobre, F.D. On the stability of analytic entropic forms. Phys. A 2004, 335, 94.
32. Anteneodo, C.; Plastino, A.R. Maximum entropy approach to stretched exponential probability distributions. J. Phys. A 1999, 32, 1089.
33. Tsekouras, G.A.; Tsallis, C. Generalized entropy arising from a distribution of q-indices. Phys. Rev. E 2005, 71, 046144.
34. Tsallis, C.; Souza, A.M.C. Constructing a statistical mechanics for Beck-Cohen superstatistics. Phys. Rev. E 2003, 67, 026106.
35. Beck, C.; Cohen, E.G.D. Superstatistics. Phys. A 2003, 322, 267.
36. Schwammle, V.; Tsallis, C. Two-parameter generalization of the logarithm and exponential functions and Boltzmann-Gibbs-Shannon entropy. J. Math. Phys. 2007, 48, 113301.
37. Tsallis, C. Introduction to Nonextensive Statistical Mechanics: Approaching a Complex World; Springer: New York, NY, USA, 2009.
38. Shafee, F. Lambert function and a new non-extensive form of entropy. IMA J. Appl. Math. 2007, 72, 785.
39. Ubriaco, M.R. Entropies based on fractional calculus. Phys. Lett. A 2009, 373, 2516.
40. Tsallis, C.; Cirto, L.J.L. Black hole thermodynamical entropy. Eur. Phys. J. C 2013, 73, 2487.
41. Hanel, R.; Thurner, S. A comprehensive classification of complex statistical systems and an axiomatic derivation of their entropy and distribution functions. Europhys. Lett. 2011, 93, 20006.
42. Tempesta, P. Group entropies, correlation laws, and zeta functions. Phys. Rev. E 2011, 84, 021121.
43. Jensen, H.J.; Tempesta, P. Group entropies: From phase space geometry to entropy functionals via group theory. Entropy 2018, 20, 804.
44. Curado, E.M.F.; Tempesta, P.; Tsallis, C. A new entropy based on a group-theoretical structure. Ann. Phys. 2016, 366, 22.
45. Tempesta, P. Beyond the Shannon-Khinchin formulation: The composability axiom and the universal-group entropy. Ann. Phys. 2016, 365, 180.
46. Jensen, H.J.; Pazuki, R.H.; Pruessner, G.; Tempesta, P. Statistical mechanics of exploding phase spaces: Ontic open systems. J. Phys. A: Math. 2018, 51, 375002.
47. Beck, C.; Schlogl, F. Thermodynamics of Chaotic Systems; Cambridge University Press: Cambridge, UK, 1993.
48. Zander, C.; Plastino, A.R.; Casas, M.; Plastino, A. Entropic entanglement criteria for Fermion systems. Eur. Phys. J. D 2012, 66, 14.
49. Tsallis, C.; Lloyd, S.; Baranger, M. Peres criterion for separability through nonextensive entropy. Phys. Rev. A 2001, 63, 042104.
50. Tsallis, C. Approach of complexity in nature: Entropic nonuniqueness. Axioms 2016, 5, 20.
51. Curado, E.M.F.; Tsallis, C. Generalized statistical mechanics: Connection with thermodynamics. J. Phys. A 1991, 24, L69–L72; Corrigenda in 1991, 24, 3187 and 1992, 25, 1019.
52. Tsallis, C.; Mendes, R.S.; Plastino, A.R. The role of constraints within generalized nonextensive statistics. Phys. A 1998, 261, 534.
53. Bediaga, I.; Curado, E.M.F.; Miranda, J. A nonextensive thermodynamical equilibrium approach in e+e− → hadrons. Phys. A 2000, 286, 156.
54. Adare, A.; Afanasiev, S.; Aidala, C.; Ajitanand, N.N.; Akiba, Y.; Al-Bataineh, H.; Alexander, J.; Aoki, K.; Aphecetche, L.; Armendariz, R.; et al. (PHENIX Collaboration). Measurement of neutral mesons in p+p collisions at √s = 200 GeV and scaling properties of hadron production. Phys. Rev. D 2011, 83, 052004.
55. ALICE Collaboration. Enhanced production of multi-strange hadrons in high-multiplicity proton-proton collisions. Nat. Phys. 2017, 13, 535.
56. Wong, C.Y.; Wilk, G.; Cirto, L.J.L.; Tsallis, C. From QCD-based hard-scattering to nonextensive statistical mechanical descriptions of transverse momentum spectra in high-energy pp and pp̄ collisions. Phys. Rev. D 2015, 91, 114027.
57. Marques, L.; Cleymans, J.; Deppman, A. Description of high-energy pp collisions using Tsallis thermodynamics: Transverse momentum and rapidity distributions. Phys. Rev. D 2015, 91, 054025.
58. ALICE Collaboration. Production of π0 and η mesons up to high transverse momentum in pp collisions at 2.76 TeV. Eur. Phys. J. C 2017, 77, 339.
59. ALICE Collaboration. Production of Σ(1385)± and Ξ(1530)0 in p-Pb collisions at √sNN = 5.02 TeV. Eur. Phys. J. C 2017, 77, 389.
60. ALICE Collaboration. K*(892)0 and φ(1020) meson production at high transverse momentum in pp and Pb-Pb collisions at √sNN = 2.76 TeV. Phys. Rev. C 2017, 95, 064606.
61. Rybczynski, M.; Wilk, G.; Wlodarczyk, Z. System size dependence of the log-periodic oscillations of transverse momentum spectra. Eur. Phys. J. Web Conf. 2015, 90, 01002.
62. Wilk, G.; Wlodarczyk, Z. Tsallis distribution decorated with log-periodic oscillation. Entropy 2015, 17, 384–400.
63. Yalcin, G.C.; Beck, C. Generalized statistical mechanics of cosmic rays: Application to positron-electron spectral indices. Sci. Rep. 2018, 8, 1764.
64. Tsallis, C.; Gell-Mann, M.; Sato, Y. Asymptotically scale-invariant occupancy of phase space makes the entropy Sq extensive. Proc. Natl. Acad. Sci. USA 2005, 102, 15377.
65. Plastino, A.R.; Plastino, A. Non-extensive statistical mechanics and generalized Fokker-Planck equation. Phys. A 1995, 222, 347.
66. Schwammle, V.; Nobre, F.D.; Curado, E.M.F. Consequences of the H-theorem from nonlinear Fokker-Planck equations. Phys. Rev. E 2007, 76, 041123.
67. Tsallis, C.; Bukman, D.J. Anomalous diffusion in the presence of external forces: Exact time-dependent solutions and their thermostatistical basis. Phys. Rev. E 1996, 54, R2197.
68. Upadhyaya, A.; Rieu, J.-P.; Glazier, J.A.; Sawada, Y. Anomalous diffusion and non-Gaussian velocity distribution of Hydra cells in cellular aggregates. Phys. A 2001, 293, 549.
69. Rapisarda, A.; Pluchino, A. Nonextensive thermodynamics and glassy behavior. Europhys. News 2005, 36, 202.
70. Combe, G.; Richefeu, V.; Stasiak, M.; Atman, A.P.F. Experimental validation of a nonextensive scaling law in confined granular media. Phys. Rev. Lett. 2015, 115, 238301.
71. Viallon-Galiner, L.; Combe, G.; Richefeu, V.; Atman, A.P.F. Emergence … J. Fluid Mech. 2013, 717, R9.
72. Miah, S.; Beck, C. Lagrangian quantum turbulence model based on alternating superfluid-normal fluid stochastic dynamics. Europhys. Lett. 2014, 108, 40004.
73. Korbel, J.; Hanel, R.; Thurner, S. Classification of complex systems by their sample-space scaling exponents. New J. Phys. 2018, 20, 093007.
74. Komatsu, N.; Kimura, S. Entropic cosmology for a generalized black-hole entropy. Phys. Rev. D 2013, 88, 083534.
75. Anteneodo, C.; Tsallis, C. Breakdown of the exponential sensitivity to the initial conditions: Role of the range of the interaction. Phys. Rev. Lett. 1998, 80, 5313.
76. Campa, A.; Giansanti, A.; Moroni, D.; Tsallis, C. Classical spin systems with long-range interactions: Universal reduction of mixing. Phys. Lett. A 2001, 286, 251.
92. Antoni, M.; Ruffo, S. Clustering and relaxation in Hamiltonian long-range dynamics. Phys. Rev. E 1995, 52, 2361.
93. Pluchino, A.; Rapisarda, A.; Tsallis, C. Nonergodicity and central limit behavior in long-range Hamiltonians. Europhys. Lett. 2007, 80, 26002.
94. Pluchino, A.; Rapisarda, A. Nonergodicity and central limit behavior for systems with long-range interactions. SPIE 2007, 2, 6802–6832.
95. Chavanis, P.H.; Campa, A. Inhomogeneous Tsallis distributions in the HMF model. Eur. Phys. J. B 2010, 76, 581.
96. Cirto, L.J.L.; Assis, V.R.V.; Tsallis, C. Influence of the interaction range on the thermostatistics of a classical many-body system. Phys. A 2014, 393, 286.
97. Cirto, L.J.L.; Rodriguez, A.; Nobre, F.D.; Tsallis, C. Validity and failure of the Boltzmann weight. Europhys. Lett. 2018, 123, 30003.
98. Nobre, F.D.; Tsallis, C. Classical infinite-range-interaction Heisenberg ferromagnetic model: Metastability and sensitivity to initial conditions. Phys. Rev. E 2003, 68, 036115.
99. Cirto, L.J.L.; Lima, L.S.; Nobre, F.D. Controlling the range of interactions in the classical inertial ferromagnetic Heisenberg model: Analysis of metastable states. JSTAT 2015, 2015, P04012.
100. Rodriguez, A.; Nobre, F.D.; Tsallis, C. d-Dimensional classical Heisenberg model with arbitrarily-ranged interactions: Lyapunov exponents and distributions of momenta and energies. Entropy 2019, 21, 31.
101. Christodoulidi, H.; Tsallis, C.; Bountis, T. Fermi-Pasta-Ulam model with long-range interactions: Dynamics and thermostatistics. EPL 2014, 108, 40006.
102. Christodoulidi, H.; Bountis, T.; Tsallis, C.; Drossos, L. Dynamics and statistics of the Fermi–Pasta–Ulam β-model with different ranges of particle interactions. J. Stat. Mech. 2016, 2016, 123206.
103. Bagchi, D.; Tsallis, C. Sensitivity to initial conditions of d-dimensional long-range-interacting quartic Fermi-Pasta-Ulam model: Universal scaling. Phys. Rev. E 2016, 93, 062213.
104. Bagchi, D.; Tsallis, C. Fermi-Pasta-Ulam-Tsingou problems: Passage from Boltzmann to q-statistics. Phys. A 2018, 491, 869.
105. Leo, M.; Leo, R.A.; Tempesta, P. Thermostatistics in the neighborhood of the π-mode solution for the Fermi-Pasta-Ulam β-system: From weak to strong chaos. J. Stat. Mech. 2010, 2010, P04021.
106. Antonopoulos, C.G.; Bountis, T.C.; Basios, V. Quasi-stationary chaotic states in multi-dimensional Hamiltonian systems. Phys. A 2011, 390, 3290.
107. Leo, M.; Leo, R.A.; Tempesta, P.; Tsallis, C. Non-Maxwellian behaviour and quasi-stationary regimes near the modal solutions of the Fermi-Pasta-Ulam β-system. Phys. Rev. E 2012, 85, 031149.
108. Caruso, F.; Tsallis, C. Nonadditive entropy reconciles the area law in quantum systems with classical thermodynamics. Phys. Rev. E 2008, 78, 021102.
109. Saguia, A.; Sarandy, M.S. Nonadditive entropy for random quantum spin-S chains. Phys. Lett. A 2010, 374, 3384.
110. Carrasco, J.A.; Finkel, F.; Gonzalez-Lopez, A.; Rodriguez, M.A.; Tempesta, P. Generalized isotropic Lipkin-Meshkov-Glick models: Ground state entanglement and quantum entropies. J. Stat. Mech. 2016, 2016, 033114.
111. Lourek, I.; Tribeche, M. On the role of the κ-deformed Kaniadakis distribution in nonlinear plasma waves. Phys. A 2016, 441, 215.
112. Bacha, M.; Gougam, L.A.; Tribeche, M. Ion-acoustic rogue waves in magnetized solar wind plasma with nonextensive electrons. Phys. A 2017, 466, 199.
113. Merriche, A.; Tribeche, M. Electron-acoustic rogue waves in a plasma with Tribeche-Tsallis-Cairns distributed electrons. Ann. Phys. 2017, 376, 436.
114. Livadiotis, G. Thermodynamic origin of kappa distributions. EPL 2018, 122, 50001.
115. Oliveira, D.S.; Galvao, R.M.O. Non-extensive transport equations in magnetized plasmas for non-Maxwellian distribution functions. Phys. Plasmas 2018, 25, 102308.
116. Daniels, K.E.; Beck, C.; Bodenschatz, E. Defect turbulence and generalized statistical mechanics. Phys. D 2004, 193, 208.
117. Komatsu, N.; Kimura, S. Evolution of the universe in entropic cosmologies via different formulations. Phys. Rev. D 2014, 89, 123501.
118. Komatsu, N. Cosmological model from the holographic equipartition law with a modified Rényi entropy. Eur. Phys. J. C 2017, 77, 229.
119. … SQUIDs. Sci. Rep. 2016, 6, 28275.
120. Hou, S.Q.; He, J.J.; Parikh, A.; Kahl, D.; Bertulani, C.A.; Kajino, T.; Mathews, G.J.; Zhao, G. Non-extensive statistics solution to the cosmological Lithium problem. Astrophys. J. 2017, 834, 165.
121. Kohler, S. Fixing the Big Bang theory’s Lithium problem. NOVA—Res. Highlights J. Am. Astron. Soc. 2017.
122. Bertulani, C.A.; Shubhchintak; Mukhamedzhanov, A.M. Cosmological Lithium problems. EPJ Web Conf. 2018, 184, 01002.
123. Ruiz, G.; Tsallis, C. Roundoff-induced attractors and reversibility in conservative two-dimensional maps. Phys. A 2007, 386, 720–728.
124. Ruiz, G.; Tsallis, C. Nonextensivity at the edge of chaos of a new universality class of one-dimensional unimodal dissipative maps. Eur. Phys. J. B 2009, 67, 577–584.
125. Ruiz, G.; Bountis, T.; Tsallis, C. Time-evolving statistics of chaotic orbits of conservative maps in the context of the Central Limit Theorem. Int. J. Bifurc. Chaos 2012, 22, 1250208.
126. Bountis, T.; Skokos, H. Complex Hamiltonian Dynamics; Springer Series in Synergetics; Springer: Berlin, Germany, 2012.
127. Ruiz, G.; Tirnakli, U.; Borges, E.P.; Tsallis, C. Statistical characterization of the standard map. J. Stat. Mech. 2017, 2017, 063403.
128. Ruiz, G.; Tirnakli, U.; Borges, E.P.; Tsallis, C. Statistical characterization of discrete conservative systems: The web map. Phys. Rev. E 2017, 96, 042158.
129. Nobre, F.D.; Rego-Monteiro, M.A.; Tsallis, C. Nonlinear generalizations of relativistic and quantum equations with a common type of solution. Phys. Rev. Lett. 2011, 106, 140601.
130. Filho, R.N.C.; Alencar, G.; Skagerstam, B.-S.; Andrade, J.S., Jr. Morse potential derived from first principles. Europhys. Lett. 2013, 101, 10009.
131. Nobre, F.D.; Plastino, A.R. A family of nonlinear Schrodinger equations admitting q-plane wave solutions. Phys. Lett. A 2017, 381, 2457.
132. Plastino, A.R.; Wedemann, R.S. Nonlinear wave equations related to nonextensive thermostatistics. Entropy 2017, 19, 60.
133. Andrade, J.S., Jr.; da Silva, G.F.T.; Moreira, A.A.; Nobre, F.D.; Curado, E.M.F. Thermostatistics of overdamped motion of interacting particles. Phys. Rev. Lett. 2010, 105, 260601.
134. Vieira, C.M.; Carmona, H.A.; Andrade, J.S., Jr.; Moreira, A.A. General continuum approach for dissipative systems of repulsive particles. Phys. Rev. E 2016, 93, 060103R.
135. Zand, J.; Tirnakli, U.; Jensen, H.J. On the relevance of q-distribution functions: The return time distribution of restricted random walker. J. Phys. A: Math. 2015, 48, 425004.
136. Curado, E.M.F.; Souza, A.M.C.; Nobre, F.D.; Andrade, R.F.S. Carnot cycle for interacting particles in the absence of thermal noise. Phys. Rev. E 2014, 89, 022117.
137. Nobre, F.D.; Curado, E.M.F.; Souza, A.M.C.; Andrade, R.F.S. Consistent thermodynamic framework for interacting particles by neglecting thermal noise. Phys. Rev. E 2015, 91, 022135.
138. Puertas-Centeno, D.; Temme, N.M.; Toranzo, I.V.; Dehesa, J.S. Entropic uncertainty measures for large dimensional hydrogenic systems. J. Math. Phys. 2017, 58, 103302.
139. Vignat, C.; Plastino, A.; Plastino, A.R.; Dehesa, J.S. Quantum potentials with q-Gaussian ground states. Phys. A 2012, 391, 1068.
140. … Phys. A 2006, 362, 168.
141. Pickup, R.M.; Cywinski, R.; Pappas, C.; Farago, B.; Fouquet, P. Generalized spin glass relaxation. Phys. Rev. Lett. 2009, 102, 097202.
142. Betzler, A.S.; Borges, E.P. Nonextensive distributions of asteroid rotation periods and diameters. Astron. Astrophys. 2012, 539, A158.
143. Betzler, A.S.; Borges, E.P. Nonextensive statistical analysis of meteor showers and lunar flashes. Mon. Not. R. Astron. Soc. 2015, 447, 765–771.
144. Burlaga, L.F.; Viñas, A.F. Triangle for the entropic index q of non-extensive statistical mechanics observed by Voyager 1 in the distant heliosphere. Phys. A 2005, 356, 375.
145. Naudts, J. Generalised Thermostatistics; Springer: London, UK, 2011.
146. Biro, T.S.; Barnafoldi, G.G.; Van, P. Quark-gluon plasma connected to finite heat bath. Eur. Phys. J. A 2013, 49, 110.
147. Bagci, G.B.; Oikonomou, T. Validity of the third law of thermodynamics for the Tsallis entropy. Phys. Rev. E 2016, 93, 022112.
148. Gell-Mann, M.; Tsallis, C. (Eds.) Nonadditive Entropy—Interdisciplinary Applications; Oxford University Press: New York, NY, USA, 2004.
149. Borland, L. Closed form option pricing formulas based on a non-Gaussian stock price model with statistical feedback. Phys. Rev. Lett. 2002, 89, 098701.
150. Borland, L. A theory of non-gaussian option pricing. Quant. Financ. 2002, 2, 415.
151. Kwapien, J.; Drozdz, S. Physical approach to complex systems. Phys. Rep. 2012, 515, 115.
152. … Eur. Phys. J. B 2018, 91, 1.
153. Borland, L. Financial market models. In Complexity and Synergetics; Springer: London, UK, 2017; p. 257.
154. Xu, D.; Beck, C. Symbolic dynamics techniques for complex systems: Application to share price dynamics. Europhys. Lett. 2017, 118, 30001.
155. … Phys. A 2010, 389, 3382.
156. Celikoglu, A.; Tirnakli, U.; Queiros, S.M.D. Analysis of return distributions in the coherent noise model. Phys. Rev. E 2010, 82, 021124.
157. Vallianatos, F. A Non-extensive statistical mechanics view on Easter island seamounts volume distribution. Geosciences 2018, 8, 52.
158. Carbone, F.; Bruno, A.G.; Naccarato, A.; de Simone, F.; Gencarelli, C.N.; Sprovieri, F.; Hedgecock, I.M.; Landis, M.S.; Skov, H.; Pfaffhuber, K.A.; et al. The superstatistical nature and interoccurrence time of atmospheric mercury concentration fluctuations. J. Geophys. Res. 2018, 123, 764–774.
164. Ferri, G.L.; Savio, M.F.R.; Plastino, A. Tsallis’ q-triplet and the ozone layer. Phys. A 2010, 389, 1829.
165. Ferri, G.L.; Figliola, A.; Rosso, O.A. Tsallis’ statistics in the variability of El Niño/Southern Oscillation during the Holocene epoch. Phys. A 2012, 391, 2154.
166. Pavlos, G.P.; Karakatsanis, L.P.; Xenakis, M.N.; Sarafopoulos, D.; Pavlos, E.G. Tsallis statistics and magnetospheric self-organization. Phys. A 2012, 391, 3069–3080.
167. Amador, C.H.S.; Zambrano, L.S. Evidence for energy regularity in the Mendeleev periodic table. Phys. A 2010, 389, 3866–3869.
168. Morais, S.F.D.A.; Mundim, K.C.; Ferreira, D.A.C. An alternative interpretation of the ultracold methylhydroxycarbene rearrangement mechanism: Cooperative effects. Phys. Chem. Chem. Phys. 2015, 17, 7443.
169. Sekania, M.; Appelt, W.H.; Benea, D.; Ebert, H.; Vollhardt, D.; … Phys. A 2018, 489, 18.
170. Aquilanti, V.; Coutinho, N.D.; Carvalho-Silva, V.H. Kinetics of low-temperature transitions and a reaction rate theory from non-equilibrium distributions. Philos. Trans. R. Soc. A 2017, 375, 20160201.
171. Aquilanti, V.; Borges, E.P.; Coutinho, N.D.; Mundim, K.C.; Carvalho-Silva, V.H. From statistical thermodynamics to molecular kinetics: The change, the chance and the choice. The Quantum World of Molecules. In Rendiconti Lincei; Scienze Fisiche e Naturali; Accademia Nazionale dei Lincei: Rome, Italy, 2018.
172. Singh, V.P. Introduction to Tsallis Entropy Theory in Water Engineering; Taylor and Francis, CRC Press: Boca Raton, FL, USA, 2016.
173. Stavrakas, I.; Triantis, D.; Kourkoulis, S.K.; Pasiou, E.D.; Dakanali, I. Acoustic emission analysis of cement mortar specimens during three point bending tests. Lat. Am. J. Sol. Struct. 2016, 13, 2283.
174. Aifantis, E.C. Towards internal length gradient chemomechanics. Rev. Adv. Mater. Sci. 2017, 48, 112–130.
175. Schafer, B.; Beck, C.; Aihara, K.; Witthaut, D.; Timme, M. Non-Gaussian power grid frequency fluctuations characterized by Lévy-stable laws and superstatistics. Nat. Energy 2018, 3, 119.
176. Yalcin, G.C.; Beck, C. Environmental superstatistics. Phys. A 2013, 392, 5431.
177. Hagiwara, Y.; Sudarshan, V.K.; Leong, S.S.; Vijaynanthan, A.; Ng, K.H. Application of entropies for automated diagnosis of abnormalities in ultrasound images: A review. J. Mech. Med. Biol. 2017, 17, 1740012.
178. Tsigelny, I.F.; Kouznetsova, V.L.; Sweeney, D.E.; Wu, W.; Bush, K.T.; Nigam, S.K. Analysis of metagene portraits reveal distinct transitions during kidney organogenesis. Sci. Signal. 2008, 1, ra16.
179. Sotolongo-Grau, O.; Rodriguez-Perez, D.; Antoranz, J.C.; Sotolongo-Costa, O. Tissue radiation response with maximum Tsallis entropy. Phys. Rev. Lett. 2010, 105, 158105.
180. Bogachev, M.I.; Kayumov, A.R.; Bunde, A. Universal internucleotide statistics in full genomes: A footprint of the DNA structure and packaging? PLoS ONE 2014, 9, e112534.
181. Bogachev, M.I.; Markelov, O.A.; Kayumov, A.R.; Bunde, A. Superstatistical model of bacterial DNA architecture. Sci. Rep. 2017, 7, 43034.
182. Mohanalin, J.; Beenamol; Kalra, P.K.; Kumar, N. A novel automatic microcalcification detection technique using Tsallis entropy and a type II fuzzy index. Comput. Math. Appl. 2010, 60, 2426.
183. Diniz, P.R.B.; Murta, L.O.; Brum, D.G.; de Araujo, D.B.; Santos, A.C. Brain tissue segmentation using q-entropy in multiple sclerosis magnetic resonance images. Braz. J. Med. Biol. Res. 2010, 43, 77.
184. Acharya, U.R.; Hagiwara, Y.; Koh, J.E.W.; Oh, S.L.; Tan, J.H.; Adam, M.; Tan, R.S. Entropies for automated detection of coronary artery disease using ECG signals: A review. Biocybern. Biomed. Eng. 2018, 38, 373.
185. Capurro, A.; Diambra, L.; Lorenzo, D.; Macadar, O.; Martins, M.T.; Mostaccio, C.; Plastino, A.; Perez, J.; Rofman, E.; Torres, M.E.; et al. Human dynamics: The analysis of EEG signals with Tsallis information measure. Phys. A 1999, 265, 235.
186. Acharya, U.R.; Hagiwara, Y.; Deshpande, S.N.; Suren, S.; Koh, J.E.W.; Oh, S.L.; Arunkumar, N.; Ciaccio, E.J.; Lim, C.M. Characterization of focal EEG signals: A review. Future Gener. Comput. Syst. 2018, 91, 290.
187. Briggs, K.; Beck, C. Modelling train delays with q-exponential functions. Phys. A 2007, 378, 498.
188. Picoli, S.; Mendes, R.S.; Malacarne, L.C.; Lenzi, E.K. Scaling behavior in the dynamics of citations to scientific journals. Europhys. Lett. 2006, 75, 673.
189. Anastasiadis, A.D.; de Albuquerque, M.P.; de Albuquerque, M.P.; Mussi, D.B. Tsallis q-exponential describes the distribution of scientific citations—A new characterization of the impact. Scientometrics 2010, 83, 205.
190. Tsallis, C.; Stariolo, D.A. Generalized simulated annealing. Phys. A 1996, 233, 395.
191. … single crystals using generalized simulated annealing. J. Appl. Crystallogr. 2010, 43, 1046.
192. Shang, C.; Wales, D.J. Communication: Optimal parameters for basin-hopping global optimization based on Tsallis statistics. J. Chem. Phys. 2014, 141, 071101.
193. Evangelista, H.B.A.; Thomaz, S.M.; Mendes, R.S.; Evangelista, L.R. Generalized entropy indices to measure α- and β-diversities of macrophytes. Braz. J. Phys. 2009, 39, 396.
194. Harte, J.; Newman, E.A. Maximum information entropy: A foundation for ecological theory. Trends Ecol. Evol. 2014, 29, 384.
195. Puzachenko, Y.G. Rank Distribution in Ecology and Nonextensive Statistical Mechanics; Archives of Zoological Museum of Lomonosov Moscow State University: Moscow, Russia, 2016; Volume 54, p. 42.
196. Hadzibeganovic, T.; Cannas, S.A. A Tsallis’ statistics based neural network model for novel word learning. Phys. A 2009, 388, 732.
197. Takahashi, T.; Hadzibeganovic, T.; Cannas, S.A.; Makino, T.; Fukui, H.; Kitayama, S. Cultural neuroeconomics of intertemporal choice. Neuroendocrinol. Lett. 2009, 30, 185.
198. Siddiqui, M.; Wedemann, R.S.; Jensen, H. Avalanches and generalized memory associativity in a network model for conscious and unconscious mental functioning. Phys. A 2018, 490, 127.
199. Borges, E.P. On a q-generalization of circular and hyperbolic functions. J. Phys. A 1998, 31, 5281.
200. Santos, R.J.V.d. Generalization of Shannon’s theorem for Tsallis entropy. J. Math. Phys. 1997, 38, 4104.
201. Abe, S. Axioms and uniqueness theorem for Tsallis entropy. Phys. Lett. A 2000, 271, 74. 202. Boghosian, B.M.; Love, P.J.; Coveney, P.V.; Karlin, I.V.; Succi, S.; Yepez, J. Galilean-invariant lattice-Boltzmann models with H-theorem. Phys. Rev. E 2003, 68, 025103R. 203. Tsallis, C. Conceptual inadequacy of the Shore and Johnson axioms for wide classes of complex systems. Entropy 2015, 17, 2853. 204. Enciso, A.; Tempesta, P. Uniqueness and characterization theorems for generalized entropies. arXiv 2017, arXiv:1702.01336. 205. Jizba, P.; Korbel, J. Maximum entropy principle in statistical inference: Case for non-Shannonian entropies. Phys. Rev. Lett. 2019, 122, 120601. 206. Umarov, S.; Tsallis, C.; Steinberg, S. On a q-central limit theorem consistent with nonextensive statistical mechanics. Milan J. Math. 2008, 76, 307–328. 207. Umarov, S.; Tsallis, C.; Gell-Mann, M.; Steinberg, S. Generalization of symmetric -stable Lévy distributions for q > 1. J. Math. Phys. 2010, 51, 033502. 208. Hilhorst, H.J. Note on a q# J. Stat. Mech. 2010, 2010, P10023. 209. Jauregui, M.; Tsallis, C. q-generalization of the inverse Fourier transform. Phys. Lett. A 2011, 375, 2085. 210. Jauregui, M.; Tsallis, C.; Curado, E.M.F. q-moments remove the degeneracy associated with the inversion of the q-Fourier transform. J. Stat. Mech. 2011, 2011, P10016. 211. Plastino, A.; Rocca, M.C. Inversion of Umarov-Tsallis-Steinberg’s q-Fourier Transform and the complex-plane generalization. Phys. A 2012, 391, 4740. 212. Plastino, A.; Rocca, M.C. q-Fourier Transform and its inversionproblem. Milan J. Math. 2012, 80, 243. 213. Umarov, S.; Tsallis, C. The limit distribution in the q-CLT for q is unique and can not have a compact support. J. Phys. A 2016, 49, 415204. 214. Sicuro, G.; Tsallis, C. q-Generalized representation of the d-dimensional Dirac delta and q-Fourier transform. Phys. Lett. A 2017, 381, 2583. 215. Rodriguez, A.; Schwammle, V.; Tsallis, C. Strictly and asymptotically scale-invariant probabilistic models of N correlated binary random
170
216.
217.
218. 219.
220. 221. 222. 223.
224. 225. 226. 227.
228.
Statistical Mechanics and Entropy
variables having q-Gaussians as N¬>J. Stat. Mech. 2008, 2008, P09006. Ruiz, G.; Tsallis, C. Emergence of q-statistical functions in a generalized binomial distribution with strong correlations. J. Math. Phys. 2015, 56, 053301. Bergeron, H.; Curado, E.M.F.; Gazeau, J.P.; Rodrigues, L.M.C.S. Symmetric deformed binomial distributions: An analytical example where the Boltzmann-Gibbs entropy is not extensive. J. Math. Phys. 2016, 57, 023301. Amari, S.; Ohara, A. Geometry of q-exponential family of probability distributions. Entropy 2011, 13, 1170–1185. Amari, S.; Ohara, A.; Matsuzoe, H. Geometry of deformed exponential \ ¥ #;>> > not necessary for the likelihood factorization required by Einstein. Europhys. Lett. 2015, 110, 30005. Einstein, A., Ann. Phys. (Leipzig), 33 (1910) 1275: Dass die zwischen S und W in Gleichung (1) [S = R lg W + N konst.] gegebene Beziehung die einzig mogliche ist, kann bekanntlich aus dem Satze abgeleitet werden, dass die Entropie eines aus Teilsystemen bestehenden Gesamtsystems gleich ist der Summe der Entropien der Teilsysteme. [The relation
Beyond Boltzmann–Gibbs–Shannon in Physics and Elsewhere
171
between S and W given in Equation (1) is the only reasonable given the proposition that the entropy of a system consisting of subsystems is equal to the sum of entropies of the subsystems. (free translation by Tobias Micklitz.)] 229. Einstein, A., in P.A. Schilpp, Ed. Autobiographical Notes. A Centennial Edition. Open Court Publishing Company. 1979. p. 31: A theory is the more impressive the greater the simplicity of its premises is, the more different kinds of things it relates, and the more extended is its area of applicability. Therefore the deep impression that classical thermodynamics made upon me. It is the only physical theory of universal content concerning which I am convinced that, within the framework of applicability of its basic concepts, it will never be overthrown. 1949. 230. Frigg, R.; Werndl, C. Can somebody please say what Gibbsian statistical mechanics says? Br. J. Philos. Sci. 2019, in press.
CHAPTER 7
THE GIBBS PARADOX: LESSONS FROM THERMODYNAMICS
Janneke Van Lith Department of Philosophy and Religious Studies, Utrecht University, Janskerkhof 13, 3512 BL Utrecht, The Netherlands
ABSTRACT The Gibbs paradox in statistical mechanics is often taken to indicate that already in the classical domain particles should be treated as fundamentally indistinguishable. This paper shows, on the contrary, how one can recover the thermodynamical account of the entropy of mixing, while treating states that only differ by permutations of similar particles as distinct. By reference to the orthodox theory of thermodynamics, it is argued that entropy differences are only meaningful if they are related to reversible processes connecting the initial and final state. For mixing processes, this means that processes should be considered in which particle number is allowed to vary.
Citation: Van Lith, J. (2018). The Gibbs Paradox: Lessons from Thermodynamics. Entropy, 20(5), 328. (16 pages) Copyright: © 2018 Lith. Licensee MDPI, Basel, Switzerland. This is an open access article distributed under the Creative Commons Attribution 4.0 International (CC BY 4.0) license: http://creativecommons.org/licenses/by/4.0/.
Within the context of statistical mechanics, the Gibbsian grandcanonical ensemble is a suitable device for describing such processes. It is shown how the grandcanonical entropy relates in the appropriate way to changes of other thermodynamical quantities in reversible processes, and how the thermodynamical account of the entropy of mixing is recovered even when treating the particles as distinguishable.

Keywords: Gibbs paradox; thermodynamics; extensivity; foundations of statistical mechanics; indistinguishability; entropy of mixing
INTRODUCTION

There are in fact two distinct paradoxes that go under the heading of the Gibbs paradox. The original one was formulated by Josiah Willard Gibbs in 1875 [1]. It addresses the mixing of two quantities of ideal gas, and the entropy change that occurs as a result of the mixing process. The paradox arises from the difference between two scenarios: one in which two quantities of the same gas are mixed, and one in which the two gases being mixed are of different kinds. First, of course, in the case of mixing two equal gases, thermodynamics tells us that there is no change of entropy. In this case, we could restore the initial thermodynamical state of the mixing process simply by putting the original partitioning back in place, at no entropy cost. It immediately follows that no change in entropy occurs when two quantities of the same kind of gas are mixed. Secondly, in contrast to this, when the two gases are of different kinds, the final state of the mixing process differs from the initial state not only in a microscopic but also in a macroscopic description, and now an entropy increase does occur. It is well known that this entropy increase is equal to $2Nk_B\ln 2$, a number which interestingly depends on the amounts of gases mixed, but not on their nature. The paradoxical point, as noted by Gibbs, is that this leads to a discontinuity in the entropy of mixing, in an imagined sequence of mixing processes in which gases are mixed that are made more and more similar. Gibbs writes: “Now we may without violence to the general laws of gases which are embodied in our equations suppose other gases to exist than such as actually do exist, and there does not appear to be any limit to the resemblance which there might be between two such kinds of gas. However, the increase of entropy due to the mixing of given volumes of the gases at a given temperature and pressure would be independent of the degree of similarity
between them.” [1] (p. 167). Thus, in the imagined limit in which the gases become equal, the entropy of mixing drops to zero discontinuously. This is the original Gibbs paradox. Both in textbooks on statistical physics and in the more recent literature on the subject, another version of the Gibbs paradox figures prominently, which deals with the mixing of equal gases only. The paradox now is the fact that a straightforward calculation of the entropy of mixing leads to the previously found quantity $2Nk_B\ln 2$. This time, however, this value for the entropy increase holds for the mixing of equal gases, contrary to the thermodynamical result according to which the entropy remains constant. Note that a solution of this, second, paradox would therefore amount to the restoration of the first! Both paradoxes apply to classical physics. Nevertheless, in both cases, quantum mechanics has been appealed to for a solution. In the case of the original paradox, it is argued that, due to the impossibility of letting the two substances become equal in a continuous process, the paradox disappears, as the following quote from Pauli shows: “If the two gases are only slightly different, the entropy of mixing retains its full value; if they are equal, the change in entropy is zero. Therefore, it is not allowed to let the difference between two gases gradually vanish. (This is important in quantum theory.)” [2] (p. 48). In a similar vein, it is often argued that the quantized discreteness of nature explains that there cannot exist an arbitrarily small difference between two substances. In a textbook that is still widely used, Reif writes: “Just how different must molecules be before they should be considered distinguishable [...]? In a classical view of nature, two molecules could, of course, differ by arbitrarily small amounts. In a quantum-mechanical description this troublesome question does not arise because of the quantized discreteness of nature [...] Hence, the distinction between identical and non-identical molecules is completely unambiguous in a quantum-mechanical description. The Gibbs paradox thus foreshadowed already in the last [i.e., nineteenth] century conceptual difficulties that were resolved only by the advent of quantum mechanics.” [3] (p. 245). In the case of the second paradox, it is often argued that classical physics gives incorrect results, since it mistakenly takes microstates that differ only by a permutation of particles to be different. Thus, when counting the number
of microstates that give rise to the same macrostate, too many microstates are counted in. Thus, for example, in another once popular textbook, Huang writes: “It is not possible to understand classically why we must divide [the number of states with energy smaller than E] by N! to obtain the correct counting of states. The reason is inherently quantum mechanical.” [4] (p. 141). Similarly, Schrödinger claims that it is quantum mechanics that solves the Gibbs paradox by pointing out that permutations of like particles should not be counted as different states: “It was a famous paradox pointed out for the first time by W. Gibbs that the same increase of entropy must not be taken into account, when the two molecules are of the same gas, although (according to naive gas-theoretical views) diffusion takes place then too, but unnoticeably to us, because all the particles are alike. The modern view (i.e., quantum mechanics) solves this paradox by declaring that, in the second case, there is no real diffusion, because exchange between like particles is not a real event; if it were, we should have to take account of it statistically. It has always been believed that Gibbs’s paradox embodied profound thought. That it was intimately linked up with something so important and entirely new (i.e., quantum mechanics) could hardly be foreseen.” [5] (p. 61). Yet there is something peculiar in these appeals to quantum mechanics. After all, both versions of the paradox appear in a classical setting. It might very well be that quantum physics gives us a better description of physical reality. However, it would be strange indeed if it were needed to solve puzzles that are interior to classical theories. Rather than turning to quantum mechanics for a solution to the Gibbs paradox, I will argue in this paper that we can learn important lessons by reverting to elementary thermodynamics, and especially to the way in which thermodynamical entropy is intimately related to exchanges of heat, energy, work and particles in reversible processes. The second paradox consists in a difference between thermodynamical and statistical mechanical calculations of the entropy increase when equal gases are mixed. I will show how this difference disappears when the statistical mechanical entropy is introduced in a way that does justice to its thermodynamical origin, by paying close attention to the variation of entropy and the other thermodynamical quantities in reversible processes, rather than simply by counting the number of microstates that lead to the same macrostate.
This paper is structured as follows. In Section 2, I will start by presenting an account of the entropy of mixing that shows how confusions and incorrect results arise when one does not pay close enough attention to the connection between entropy differences and reversible processes. In particular, fixing by convention the way in which entropy depends on the number of particles, while neglecting the additive constants in the entropy expressions, turns out to lead to incorrect results. For a correct determination of the entropy difference as a result of a mixing process, one needs to calculate entropy changes during processes in which particle number is actually allowed to vary. Next, I will present three different ways in which one may correctly arrive at the entropy of mixing. Since these derivations explain the discontinuous change in the entropy of mixing when gases are considered that are more and more similar, the original version of the Gibbs paradox is dissolved. In Section 3, I will discuss the way in which entropy can be introduced in statistical mechanics while being faithful to thermodynamics. I will argue that the Gibbsian framework is much better suited than the Boltzmannian framework to give us proper counterparts of thermodynamical quantities. Moreover, within the Gibbsian framework, it is only the grandcanonical ensemble that is a suitable device for describing processes in which particle numbers vary, such as mixing processes. I will present two different ways in which one could motivate the appearance of the factor 1/N! in the grandcanonical distribution, neither of which makes an appeal to indistinguishability of particles. With the grandcanonical ensemble and this factor of 1/N! in place, one easily recovers the thermodynamical results for the entropy of mixing, both for the mixing of equal and unequal gases, whereby the second aspect of the Gibbs paradox is solved.
FORMULATING THE GIBBS PARADOX IN THERMODYNAMICS

Let us start our discussion of the Gibbs paradox in thermodynamics by going over a standard way of deriving the entropy of mixing. This derivation will, surprisingly, turn out to be quite problematic, and a source of confusion. The setup is simple and familiar. We consider the mixing of two equal portions of monatomic ideal gases of equal temperature, volume and number of particles. We want to know the entropy of mixing, that is, the difference between the entropy of the initial state, in which the two portions of gas are confined to their own part of the container of volume V, and the final
state in which the partition is removed and the gas is spread out over the whole container of volume 2V. One arrives at the expression for the entropy of an ideal monatomic gas by starting from the fundamental equation $T\,dS = dU + p\,dV$, and filling in the ideal gas law $pV = Nk_BT$ and $U = \tfrac{3}{2}Nk_BT$. We get

$$dS = \tfrac{3}{2}Nk_B\,\frac{dT}{T} + Nk_B\,\frac{dV}{V}, \qquad (1)$$

$$S(T,V) = \tfrac{3}{2}Nk_B\ln T + Nk_B\ln V + c_1. \qquad (2)$$

A straightforward calculation of the entropy of mixing now is

$$\Delta S = S(T, 2V, 2N) - 2\,S(T, V, N), \qquad (3)$$

$$\Delta S = 2Nk_B\ln 2, \qquad (4)$$

which gives us the well-known result. So far so good. Or really? We can see that something is not quite in order when we repeat the derivation, but start from expressions for the entropy in terms of a different set of variables. In terms of the pressure p and temperature T, we have

$$S(T,p) = \tfrac{5}{2}Nk_B\ln T - Nk_B\ln p + c_2, \qquad (5)$$

$$\Delta S = 0, \qquad (6)$$

and if we take the variables to be pressure and volume, we get

$$S(p,V) = \tfrac{3}{2}Nk_B\ln p + \tfrac{5}{2}Nk_B\ln V + c_3, \qquad (7)$$

$$\Delta S = 5Nk_B\ln 2. \qquad (8)$$

Clearly, these results contradict each other. What is going on here? Well, in all three derivations, the additive constants $c_1$, $c_2$ and $c_3$ have been set to zero. Since these are constants of integration, they obviously cannot depend on the variables, but they may still depend on anything else, including the number of particles! The above expressions for the entropy simply do not fully specify the way in which entropy varies with particle number. They treat particle number as a parameter, not as a variable. In fact, one easily checks that

$$c_2 = c_1 + Nk_B\ln(Nk_B), \qquad c_3 = c_1 - \tfrac{3}{2}Nk_B\ln(Nk_B), \qquad (9)$$

and so it is clear that setting all three constants to zero leads to incompatible results.
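To make the contradiction tangible, the following minimal numerical sketch (added here for illustration; it is not part of the original paper) evaluates the mixing entropy under each of the three conventions $c_1 = 0$, $c_2 = 0$ and $c_3 = 0$, using the reconstructed Equations (2), (5) and (7). All parameter values and variable names are illustrative.

```python
import numpy as np

kB = 1.380649e-23                 # Boltzmann constant (J/K)
N, T, V = 1.0e23, 300.0, 1.0e-3   # particle number, temperature (K), volume (m^3)
p = N * kB * T / V                # ideal-gas pressure

# Entropy expressions (2), (5), (7) with the integration constants set to zero.
def S_TV(N, T, V): return N * kB * (1.5 * np.log(T) + np.log(V))
def S_Tp(N, T, p): return N * kB * (2.5 * np.log(T) - np.log(p))
def S_pV(N, p, V): return N * kB * (1.5 * np.log(p) + 2.5 * np.log(V))

# Mixing: two portions (T, V, N) merge into one portion (T, 2V, 2N) at the same T and p.
dS1 = S_TV(2*N, T, 2*V) - 2 * S_TV(N, T, V)   # convention c1 = 0
dS2 = S_Tp(2*N, T, p)   - 2 * S_Tp(N, T, p)   # convention c2 = 0
dS3 = S_pV(2*N, p, 2*V) - 2 * S_pV(N, p, V)   # convention c3 = 0

# In units of N kB ln 2 the three conventions give 2, 0 and 5, respectively.
unit = N * kB * np.log(2)
print(dS1 / unit, dS2 / unit, dS3 / unit)
```

The same mixing process thus receives three different entropy changes, which is precisely the inconsistency diagnosed in the text.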
In fact, these constants may still depend on the number of particles, and it turns out that they are different functions of particle number. Setting any one of the constants $c_1$, $c_2$ and $c_3$ to zero is thus a conventional choice, and the three conventions are mutually inconsistent. There is another peculiar feature of the above derivations. Have we attempted to derive the entropy change when two different ideal gases were mixed, or two portions of the same gas? No assumption as to the nature of the ideal gas went into the derivations. One way of looking at this is to note that the additive constants may not only still depend on the particle number, but also on the kinds of gases involved. Somehow, the derivation with $c_1=0$ leads us to the correct result for the mixing of two different gases, and the derivation with $c_2=0$ leads us to the correct result for the mixing of two portions of the same gas. This, however, so far seems to be just a coincidence. We need to have a better look at thermodynamical theory, and redo the derivation of the entropy of mixing. Let us go back to the basics (for a more detailed account, I refer to [6]). Entropy is introduced into the orthodox theory of thermodynamics as the state variable one gets when integrating $\delta Q/T$ along a curve in equilibrium space:

$$S(s_2) - S(s_1) = \int_{s_1}^{s_2} \frac{\delta Q}{T}. \qquad (10)$$

Here, $\delta Q$ is the heat absorbed in an infinitesimal step of a quasistatic process. It is an inexact differential, and Q is not a state variable, i.e., it cannot be expressed as a function on equilibrium space. The differential dS, on the other hand, is exact, and S is a state variable. By Equation (10), only entropy differences are defined, so that entropy itself is fixed only up to a constant of integration. Moreover, only entropy differences are defined between states that can be connected by a process that fully takes place in equilibrium space, i.e., by a quasistatic process. This means, for example, that entropy values for non-equilibrium states remain, so far, undefined.
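As a concrete illustration of Equation (10) (a sketch added here, not part of the original text), one can evaluate the Clausius integral numerically along a quasistatic isothermal expansion of an ideal gas, for which $dU = 0$ and hence $\delta Q = p\,dV$; the result reproduces $\Delta S = Nk_B\ln(V_2/V_1)$.

```python
import numpy as np

kB = 1.380649e-23
N, T = 1.0e22, 300.0        # illustrative particle number and temperature
V1, V2 = 1.0e-3, 2.0e-3     # initial and final volume (m^3)

# Quasistatic isothermal expansion: dU = 0, so the heat absorbed is dQ = p dV.
V = np.linspace(V1, V2, 100001)
p = N * kB * T / V
dS_numeric = np.trapz(p / T, V)          # Clausius integral of Equation (10)
dS_exact = N * kB * np.log(V2 / V1)

print(dS_numeric / dS_exact)             # ~1.0: the integral matches N kB ln(V2/V1)
```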
Also, no entropy comparison can be made between, say, one mole of oxygen and one mole of argon, since these states cannot be connected by a quasistatic process. One might try to extend the definition by choosing a conventional reference value for the entropy of each substance. This, however, will not help us out in the case of the Gibbs paradox, since this method does not work for comparing entropy
values of different kinds of gas, again since these cannot be connected by a quasistatic process. Another common convention is to appeal to the third law of thermodynamics, which states that all entropy differences (as far as they are defined) vanish in the limit of temperature going to zero. This invites the conventional choice of setting not only all entropy differences but also all entropy values to zero in this limit. Unfortunately, this again will not be a convenient choice for the setting of the Gibbs paradox, since classical ideal gases do not obey the third law. Another conventional choice is to take the entropy to be extensive, that is, to require that the entropy increases by a factor of q when the system as a whole increases by q. Note that this is a rough characterisation, since it is not always clear what it means for a system to increase by a certain factor, especially not for inhomogeneous systems. The characterisation can be made precise by expressing the entropy as a function of other state variables that themselves are clearly intensive (such as temperature and pressure) or extensive (such as volume and number of particles). We may then require, say, that S(T,qV,qN)=qS(T,V,N). Note, incidentally, that this requirement immediately yields the value zero for the entropy difference in Equation (3). However, another extension of the entropy concept is to require additivity, that is, to take the entropy of a composite system to be the sum of the entropies of the constituents, also when those constituents are not in equilibrium with one another. (Note that additivity indeed differs from extensivity. The composite system one gets when combining two containers of volume V still includes the walls of both containers. Extensivity, on the other hand, applies to extending a single system without internal walls.) Additivity plays an important role in the treatment of non-equilibrium thermodynamics. Conventions that extend the definition of entropy should, however, be handled with care. Some extensions, such as additivity, simply enlarge the domain of applicability of the entropy concept; but one should take care not to fix, by convention, entropy differences or absolute values that are already determined by the basic definition, Equation (10). Thus, what about extensivity of entropy? Can this safely be assumed? In many cases, it is simply stated that entropy in thermodynamics clearly is extensive (see for example [7,8]). However, as we have seen when discussing Equation (10), the basic definition does not fix entropy values, only differences, so that the issue of extensivity is underdetermined. It may be thought that extensivity of entropy is an established empirical fact of thermodynamics. However, one may wonder whether this is actually the case. One interesting account in this respect is given by Landsberg [9],
who turns the claim that it is possible to consider all of the most important thermodynamical quantities as either intensive or extensive into a fourth law of thermodynamics. This, in my view, only highlights the conceptual possibility that entropy is not extensive. Thus, the most careful thing to do
is to refrain from such conventions and to stay close to the original definition of Equation (10), in order to calculate entropy differences along quasistatic processes. Fixing the integration constants by convention would run the risk of clashing with entropy differences that are already determined. Let us return to the entropy of mixing. If we want to improve on the derivations given above, we should not make unwarranted assumptions about the N-dependence of the entropy. A parsimonious derivation is to be preferred over one that makes abundant use of extra assumptions on top of the basic definition; and inconsistent assumptions, such as setting all three constants $c_1$, $c_2$ and $c_3$ to zero, should be avoided at all cost. A further desideratum is that it should be clear from the derivation whether it applies to the mixing of equal or different gases. Fortunately, several such derivations are available, which however interestingly differ with respect to the exact assumptions that are appealed to. I will discuss three of them in turn. The most straightforward thing to do is to make use of expressions for the entropy that truly treat particle number as a variable rather than as a parameter. That is, we add to the fundamental equation terms that take into account varying particle numbers:

$$T\,dS = dU + p\,dV - \sum_i \mu_i\,dN_i, \qquad (11)$$

so that it becomes possible to calculate the way in which entropy varies with varying number of particles of kind i. We follow a derivation given by Denbigh [10] (pp. 111–118), which applies to mixtures of perfect gases. A perfect gas is more general than an ideal gas, since the specific heat may be an arbitrary function of the temperature, rather than the constant $\tfrac{3}{2}Nk_B$. Denbigh defines a perfect gas mixture as one in which each component satisfies

$$\mu_i = \mu_i^{0}(T) + k_BT\ln p_i, \qquad (12)$$

with $p_i$ the partial pressure of component i and $\mu_i^{0}$ a function of temperature alone.
Denbigh further assumes both additivity and extensivity of entropy, and calculates the entropy S(T,V,N_A,N_B). It then straightforwardly follows that, when two different gases are mixed, the entropy increases by

$$\Delta S = N_Ak_B\ln\frac{V_A+V_B}{V_A} + N_Bk_B\ln\frac{V_A+V_B}{V_B}, \qquad (13)$$

$$\Delta S = 2Nk_B\ln 2 \quad (N_A=N_B=N,\ V_A=V_B=V), \qquad (14)$$

and when two equal gases are mixed, the entropy remains constant. We thus have the familiar results, on the basis of assumptions of additivity and extensivity of entropy and a definition of the chemical potential of perfect gases. How about the difference between the cases of mixing different or equal gases? One could say that this difference is introduced by definition: for each kind of gas, a term $\mu_i\,dN_i$ is included in the fundamental equation. Suppose we decided to treat a pure gas as a mixture of two portions of the same kind of gas, and added a term $\mu_i\,dN_i$ for each of those portions. We then would also find the entropy increase of Equation (14) in the case of mixing the same gas! This leaves us with a somewhat unsatisfactory treatment of the difference between the two cases. Another derivation of the entropy of mixing has been given by Planck [11]; it is both sparser in its assumptions, and clearer on the distinction between mixing different or equal gases. Planck does not fully calculate the entropy as a function S(T,V,N_A,N_B) of the state parameters. Instead, he directly calculates entropy differences along quasistatic processes only for the mixing processes of interest. For this, he uses a construction with semipermeable membranes, which each allow particles of only one kind to go through. Suppose we start with a container of volume V that contains a mixture of two gases A and B, and with two membranes: one on the far left side that lets through only particles of kind A, and one on the far right side that lets through only particles of kind B. These containers can now be slowly extended like a telescope, leaving us in the end with two containers of volume V each, one containing only gas A and the other only gas B. Suppose further that Dalton's law holds, that is, the pressure of a gas mixture equals the sum of the partial pressures. Then, it follows that, during the extension, no work is done, and moreover, the total energy and amount of particles remain constant. This extension therefore leaves the entropy constant. Next, the volume of each container is compressed in a quasistatic, isothermal process to V/2, resulting, for each of the two containers, in an entropy change

$$\Delta S = Nk_B\ln\frac{V/2}{V} = -Nk_B\ln 2. \qquad (15)$$
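The bookkeeping of Planck's construction can be summarized in a few lines of arithmetic (an illustrative sketch, not part of the original paper): the telescopic extension costs no entropy, each compression contributes Equation (15), and running the whole separation backwards gives the entropy of mixing.

```python
import math

kB, N = 1.380649e-23, 1.0e23                   # illustrative values

dS_extension = 0.0                             # telescopic extension: no work, no heat
dS_compressions = 2 * (N * kB * math.log(0.5)) # Eq. (15) for each container, V -> V/2

dS_separation = dS_extension + dS_compressions # reversible separation of the mixture
dS_mixing = -dS_separation                     # the mixing process is the reverse

print(dS_mixing / (N * kB * math.log(2)))      # 2.0, i.e. Delta S = 2 N kB ln 2
```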
Now, considering the whole process in reverse, so that we start with the separate gases and end with the mixture, we arrive at an entropy increase of $2Nk_B\ln 2$. It is immediately clear that this construction does not apply to the mixing of two portions of the same gas. After all, a semi-permeable membrane that is transparent to gas A but not to gas A cannot exist. For the mixing of equal gases, we simply appeal to the reasoning given earlier, namely that we can restore the initial thermodynamical state simply by putting back the partition, at no entropy cost. Therefore, there is no entropy of mixing in this case. A third derivation of the entropy of mixing has been given in a wonderful paper by Van Kampen [12]. Now, however, the entropy of mixing is understood to be the absolute value of the entropy of a mixture, not the entropy difference as the result of a mixing process. Van Kampen is careful and explicit in formulating conventions with respect to the entropy value, and takes great care not to compare entropy values “belonging to different N, unless one introduces a new kind of process by which N can be varied in a reversible way” [12]. His first convention is to make the integration constants equal for systems that are identical. The second convention is the additivity of entropy. These conventions make it possible to derive an expression for entropy that still contains an integration constant, which may depend on the kind of gas. For a pure gas, the procedure is simply to remove a partition between containers with gas of the same temperature and pressure, which gives

$$S(T,V,N) = Nk_B\left(\tfrac{3}{2}\ln T + \ln\frac{V}{N}\right) + Nc. \qquad (16)$$

For a mixture, the procedure is to mix or separate the gases by making use of semi-permeable membranes; for two kinds of gas A and B in a common volume V, this gives

$$S = \sum_{i=A,B}\left[N_ik_B\left(\tfrac{3}{2}\ln T + \ln\frac{V}{N_i}\right) + N_ic_i\right], \qquad (17)$$

where also Dalton's law has been used. One notes that the entropy of a mixture in Equation (17) does not reduce to the entropy of a pure gas in Equation (16) when A and B are the same, and $N_A+N_B=N$ (as the sketch below illustrates). This, according to Van Kampen, is the Gibbs paradox. There is an interesting parallel with the remark we made above about the entropy of mixing in Equation (14), where the value of the entropy of mixing also seemed to depend on whether we choose to consider a gas as consisting of one single gas A, or of a mixture of two portions of that very gas.
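A quick numerical comparison (a sketch under the reconstructed forms of Equations (16) and (17) above, with the integration constants dropped) exhibits Van Kampen's discontinuity: the mixture formula, applied to two equal portions of one and the same gas, exceeds the pure-gas formula by $2Nk_B\ln 2$.

```python
import numpy as np

kB = 1.380649e-23
T, V, N = 300.0, 1.0e-3, 1.0e23    # illustrative values

def S_pure(T, V, N):               # Equation (16), integration constants dropped
    return N * kB * (1.5 * np.log(T) + np.log(V / N))

def S_mixture(T, V, NA, NB):       # Equation (17), integration constants dropped
    return (NA * kB * (1.5 * np.log(T) + np.log(V / NA))
            + NB * kB * (1.5 * np.log(T) + np.log(V / NB)))

# Treat a single gas of 2N particles as a "mixture" of two portions of N each:
gap = S_mixture(T, V, N, N) - S_pure(T, V, 2 * N)
print(gap / (2 * N * kB * np.log(2)))   # 1.0: the expressions differ by 2 N kB ln 2
```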
Van Kampen, however, unlike Denbigh, treats the difference between mixtures of different or equal gases with great care. He extends the definition of entropy by an appeal to reversible processes that connect systems of different particle number. The process for doing so in the case of equal gases, namely the removal or addition of a partition, clearly cannot be used for reversibly mixing or separating different gases. Conversely, mixing or separating gases by means of semi-permeable membranes is applicable to the case of unlike gases only. Thus, where Denbigh's derivation introduces the distinction by definition, Van Kampen gives us an explanation of this distinction. How does Van Kampen's treatment of the Gibbs paradox fare with respect to our other desideratum, parsimony of assumptions? In this respect, Planck's treatment is still superior. Both assume Dalton's law, and appeal to semi-permeable membranes that can be used to separate the two kinds of gas. However, on top of this, Van Kampen makes use of two conventions that fix entropy values, rather than just entropy differences in a mixing process. Planck's derivation shows that this is not necessary in order to derive the difference between mixing the same or different gases, and thus in order to arrive at the Gibbs paradox. The core of the original, thermodynamical paradox thus lies in the procedures by which the definition of entropy differences can be extended to cases in which particle number varies. It does not lie in the conventions that can be used in order to fix the value of entropy. About solutions to the original, thermodynamical paradox, I want to be brief. The modern standard response is that there is indeed a discontinuity between mixing the same or different gases, but that there is nothing remarkable about that [12,13]. The construction with the semi-permeable membranes shows that differences between the two cases should not surprise us. Some authors [14] further motivate this viewpoint by an appeal to a subjective interpretation of entropy, according to which entropy measures the amount of information that is available to a subject. On such a view, it is up to the experimenter to regard two gases either as equal or as different, and the entropy of mixing depends on this subjective choice. Other authors [15] further motivate the viewpoint by an appeal to an operationalist approach to thermodynamics, according to which the meaning of thermodynamical notions is given by a set of operations (which may be either physical or mental). Since the operations that determine the entropy of mixing of two portions of the same gas differ from those involved in mixing different gases, a discontinuous change in the entropy of mixing is not considered to be remarkable. We may, however, abstract away from these
two particular motivations. One need not be committed to either subjective approaches to statistical physics or to operationalism to appreciate the general point that there is nothing paradoxical about the discontinuity. The lessons that we should learn from the thermodynamical paradox do not concern the solution to the original paradox, but rather the way in which entropy differences are tied to reversible processes, and the question to what extent further properties, such as extensivity, are a matter of convention. Jaynes [14] makes interesting observations about this in a discussion of Pauli's account (which is similar to Van Kampen's account in that extensivity is invoked in order to fix the N-dependence of the entropy). Jaynes writes: “Note that the Pauli analysis has not demonstrated from the principles of physics that entropy actually should be extensive; it has only indicated the form our equations must take if it is. However, this leaves open two possibilities:
(1) One might conclude that the Clausius definition of entropy (via $dS=\delta Q/T$) indicates that only entropy differences are physically meaningful, so that the N-dependence of the entropy may be fixed in any way we please. […] The variation of entropy with N is not arbitrary; it is a substantive matter with experimental consequences. Therefore, the Clausius definition of entropy, supplemented by conventions that fix entropy values, can easily lead to confusion or even erroneous results, as we have seen. It would lead to two kinds of entropy differences: those to which the Clausius definition applies directly, and those that are fixed by convention. The impression may arise that also entropy differences that have been introduced by convention correspond to heat exchanges in quasistatic processes. This, however, is generally incorrect, since the additive constants in this way are also determined for entropy differences between states that cannot be connected by physical processes, such as one mole of oxygen and one mole of argon. Errors may result in cases where the additive constants are determined by convention, and where also quasistatic processes are possible by means of which the entropy differences can be determined. We have seen examples of this in the above derivations of Equations (4) and (8).
(2) Alternatively, one might conclude that entropy differences between states of different particle number simply remain undefined. However, this is also incorrect: by considering quasistatic processes in which particle number is allowed to vary, entropy differences can be determined. The general lesson is that entropy should be handled in the way it was designed to describe thermodynamical processes. The additive constants should not be neglected, and can still depend on anything but the state parameters of the system under consideration. As we have seen, unwarranted assumptions about these constants may lead to confusion and error.
FORMULATING THE GIBBS PARADOX IN STATISTICAL MECHANICS

Entropy in Statistical Mechanics

In statistical mechanics, discussions about the Gibbs paradox typically center around the correct way of counting microstates. We now encounter a second version of the paradox, which only deals with mixing two portions of the same gas. Standard calculations lead to an entropy increase by the amount of $2Nk_B\ln 2$ also in the case of mixing the same gas, which is in conflict with the thermodynamical result. A standard response is that the way in which the number of microstates that correspond to a given macrostate is calculated should be modified, in order to arrive at the desired result that the entropy does not increase when two portions of the same gas are mixed. Thus, this correction re-introduces the distinction between the mixing of equal and different gases, and re-introduces the first version of the paradox. Here is one such standard calculation of the entropy of mixing, working from the Boltzmannian approach to statistical mechanics (see, for example, [7,8,16]). The Boltzmann entropy is given by $S_B=k_B\ln W$, where W counts the number of microstates corresponding to a certain macrostate. Now, suppose we have an ideal gas, consisting of N molecules that each can be in any of X available one-particle states. Then $W = X^N$, and thus $S=Nk_B\ln X$. This entropy is clearly not extensive: when the gas doubles in size, not only the number of particles, but also the number of available one-particle states doubles, and one finds an entropy of mixing by the amount of $2Nk_B\ln 2$.
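The non-extensivity of this naive Boltzmann count is easy to verify numerically (an illustrative sketch, not from the original paper): doubling the gas doubles both N and the number of available one-particle states X.

```python
import math

kB = 1.380649e-23
N, X = 1.0e23, 1.0e6          # particles; available one-particle states (X grows with V)

def S_boltzmann(N, X):        # S = kB ln(X**N) = N kB ln X
    return N * kB * math.log(X)

dS = S_boltzmann(2 * N, 2 * X) - 2 * S_boltzmann(N, X)
print(dS / (N * kB * math.log(2)))   # 2.0: the naive counting yields 2 N kB ln 2
```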
For another standard calculation (see, for example, [3,17]), the Gibbs entropy of an ideal canonically distributed gas leads us back to the thermodynamical relation Equation (2) with $c_1=0$, and so, again, to the entropy of mixing we also found in Equation (4). It is time to take the lessons from the previous section to heart, and have a more careful look. Neither the Boltzmann entropy, nor the Gibbs entropy of the canonical ensemble, tell us how entropy varies in processes in which the number of particles is allowed to vary. They treat particle number as a parameter, not as a variable. Moreover, the calculations we just gave neglect the additive constants, and do not pay attention to the fact that these may still depend on particle number. Again, in order to do better, we need to attend to reversible processes in which particle number is allowed to vary. Within the Gibbsian framework, the suitable device for this is not the canonical but the grandcanonical ensemble. The standard counterpart of thermodynamical entropy within Gibbsian statistical mechanics is the fine-grained Gibbs entropy, defined in terms of the probability distribution $\rho(x)$ over points x in phase space:

$$S_G[\rho] = -k_B\int \rho(x)\ln\rho(x)\,dx. \qquad (18)$$

Gibbs himself was reluctant to view Equation (18) as the counterpart of thermodynamical entropy for general probability distributions $\rho(x)$. He showed only for special choices of the probability distributions, namely the microcanonical, canonical and grandcanonical ensembles, that the fine-grained entropy shares a number of properties with thermodynamical entropy, notably, that it obeys the fundamental equation $T\,dS=\delta Q$. In fact, for the canonical ensembles, one can do a bit more than this, and show that the fine-grained entropy is the unique function that obeys this fundamental equation (see, for example, [18]). For this, identifications need to be made for all terms in the fundamental equation. Generally, the statistical mechanical counterparts of thermodynamical quantities are not taken to be functions of phase space, with all their microscopic detail and variability. Rather, they are taken to be either expectation values of phase functions, or parameters of the probability distribution, or some combination of those. In this vein, the statistical mechanical counterpart of U is straightforwardly taken to be the expectation value of the Hamiltonian, $\langle H\rangle_\rho$. The work performed by the system in an infinitesimal quasistatic process is

$$\delta W = -\sum_k \left\langle \frac{\partial H}{\partial a_k}\right\rangle da_k, \qquad (19)$$
where the $a_k$ denote external parameters such as the volume. Remember that in thermodynamics the relation $T\,dS=\delta Q$ holds for quasistatic processes. The crucial point is that these are now construed as processes in which the system remains a member of a Gibbsian ensemble. Uhlenbeck and Ford write: “We will now show that for a change in which both the temperature of the heat reservoir and the parameters $a_k$ are changed in such a slow or “reversible” way that the system may always be considered to be canonically distributed, the quantity $\beta\,\delta Q$ is a perfect differential of a function of the state of the system, that is a function of $\beta$ and the $a_k$.” [18] (p. 21). That is, it is assumed that if a system is (micro-; grand-)canonically distributed, then, after small changes in the state parameters, the system will again be a member of a (micro-; grand-)canonical ensemble, albeit a different one. In this respect, the system is in equilibrium all the time during the quasistatic process. Strictly speaking, such processes are not allowed by the dynamics. That is, a curve consisting of (micro-; grand-)canonical distributions only cannot be the solution of the Liouville equation, not even with a time-dependent Hamiltonian. (For a discussion of the sense in which quasistatic processes may nevertheless be approximated by dynamically allowed processes, see [19].) For the canonical ensemble, this procedure leads to

$$S_{can} = k_B\ln Z + \frac{\langle H\rangle}{T} + c_0, \qquad (20)$$

where $c_0$ is an integration constant, and Z is the canonical partition function

$$Z = f(N)\int e^{-\beta H(x)}\,dx, \qquad (21)$$

where $\beta = 1/(k_BT)$ and where f(N) is a function of N that we presently take to be arbitrary, but that is often taken to be 1/N! (more on this below). In order to apply the same procedure to the grandcanonical ensemble, we note that the grandcanonical partition function
$$\Xi = \sum_{N=0}^{\infty} e^{\beta\mu N}\,Z_N, \qquad (22)$$

with $Z_N$ the canonical partition function of Equation (21) for N particles, allows us to write
$$d(\ln\Xi) = -\langle H\rangle\,d\beta + \langle N\rangle\,dq + \beta\,\delta W. \qquad (23)$$
Here, for convenience, we have written $q = \beta\mu$. We now add a term $-\mu\,dN$ to the fundamental equation. That is, we take as the counterpart of the thermodynamical number of particles the grandcanonical expectation value of N, whereas we identify the chemical potential $\mu$ with a parameter in the probability distribution, in parallel to the temperature parameter $\beta$. This leads to the following expression for the grandcanonical entropy:

$$S_{gc} = k_B\ln\Xi + \frac{\langle H\rangle}{T} - \frac{\mu\langle N\rangle}{T}. \qquad (24)$$

This is strikingly similar to Equation (20), but there is an important difference. One might hope to fix the N-dependence of the canonical entropy by fixing the N-dependence of the canonical partition function, that is, by specifying f(N). From Equation (20), we see that

$$S_{can} = k_B\ln f(N) + k_B\ln\int e^{-\beta H(x)}\,dx + \frac{\langle H\rangle}{T} + c_0, \qquad (25)$$

where, as noted before, the constant may still depend on N. We see immediately that fixing the factor f(N) does not fix the N-dependence of the canonical entropy, since any change in this factor may be counterbalanced by the integration constant $c_0$. In addition, the factor f(N) does not affect any expectation values of phase functions, since it cancels out by normalization. For the grandcanonical ensemble, however, things are different. This should not surprise us, since the grandcanonical ensemble applies to systems of which the particle number is allowed to vary. Indeed, the factor $f(N_1,\dots,N_n)$ in the partition function now affects both the value of the grandcanonical entropy, Equation (24), and also the expectation values of phase functions. Thus, it is in the grandcanonical ensemble that the N-dependence becomes important. In the following subsection, we will turn to arguments that fix this N-dependence.
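The claim that f(N) is immaterial for the canonical ensemble can be checked with a toy model (a sketch added here, with a small discrete level spectrum standing in for the phase-space integral; it is not part of the original paper): changing f only shifts the entropy by a constant, exactly what the integration constant $c_0$ can absorb, and it leaves all expectation values untouched.

```python
import numpy as np

kB, T = 1.0, 2.0                       # units with kB = 1
beta = 1.0 / (kB * T)
E = np.array([0.0, 1.0, 3.0, 7.0])     # toy energy levels standing in for H(x)

for f in (1.0, 0.5, 1.0 / 24.0):       # different choices of the factor f(N)
    Z = f * np.sum(np.exp(-beta * E))                     # Eq. (21), discrete analogue
    prob = np.exp(-beta * E) / np.sum(np.exp(-beta * E))  # f cancels by normalization
    U = np.sum(prob * E)                                  # expectation value of energy
    S_gibbs = -kB * np.sum(prob * np.log(prob))           # fine-grained entropy, Eq. (18)
    # Eq. (20): S = kB ln Z + U/T + c0; the difference below equals -kB ln f,
    # a constant offset that the integration constant c0 can absorb.
    print(U, S_gibbs - (kB * np.log(Z) + U / T))
```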
N!

Arguments for adding a factor 1/N! to the partition function roughly fall into two categories. First, one may appeal to the correct way in which microstates need to be counted. Secondly, one may assume extensivity of entropy (or, as we shall see, the intensivity or extensivity of related quantities). I will discuss several arguments in turn. First and foremost, the factor 1/N! is often motivated by the claim that already in the classical domain microstates that only differ by a permutation of particles of the same kind should be treated as the same microstate (see, for example, [3,4,5,17,20,21]). The factor enters the partition function as a correction by which one divides by the number of ways in which N particles can be permuted. This correction is often motivated by an appeal to quantum-mechanical indistinguishability. That this shows up already in the classical domain is then seen as a remarkable and profound result; witness the quote from Schrödinger in the introduction. Saunders [21] gives a different twist to the story, by arguing that classical indistinguishability of particles is not mysterious at all. He writes: “Why not treat permutations just like any other symmetry group, and factor them out accordingly?” This kind of reasoning relates back to a discussion in the 1920s, so even before the rise of quantum mechanics, between Planck [20] on the one hand, and Ehrenfest and Trkal [22] on the other. Here, Saunders' viewpoint is in line with Planck's, who claims that the statistical method does not require one to count the number of states in which the particles can be, but rather the number of states in which the gas as a whole can be. In addition, for the gas as a whole, it does not matter which particle is in which state, since all particles are the same. Ehrenfest and Trkal counter that it is the core of the method of statistical physics that one determines the number of microstates that lead to states that are macroscopically indistinguishable. Permutations of like particles result in states that differ from each other at the microlevel, and therefore should not be treated as the same microstate. Indeed, when truly counting in all microstates that result in the same macrostate and refraining from factoring out symmetries, one stays closer to the spirit of statistical physics (see, for example, [8,12,13,16] for a similar diagnosis). A completely different argument in favour of adding 1/N! to the partition function is given by Van Kampen [12], who in turn makes use of a more general argument given by Ehrenfest and Trkal [22]. Van Kampen shows how this factor in the grandcanonical distribution can be derived by taking a suitable limit of the canonical distribution. Van Kampen considers a
gas consisting of N particles in volume V and with Hamiltonian H(x), which forms part of a much larger, canonically distributed system of N* particles in a volume V*, with V* much larger than V and N* much larger than N; the part of the system outside V functions as a particle reservoir. The probability of finding N random particles in V, at the point in phase space x, is then proportional to a Boltzmann factor, multiplied by the number of ways of choosing the N particles from amongst the total amount of N*. For an ideal reservoir, this gives

$$P_N(x) = \frac{1}{C}\binom{N^*}{N}(V^*-V)^{N^*-N}\,e^{-\beta H(x)}, \qquad (26)$$

where the normalization constant is found by integrating over all possible states of the combined system:

$$C = \sum_{N=0}^{N^*}\binom{N^*}{N}(V^*-V)^{N^*-N}\int e^{-\beta H(x)}\,dx. \qquad (27)$$
When we now take the limit $N^*\to\infty$, $V^*\to\infty$ with the density $\rho = N^*/V^*$ kept fixed, while V remains finite, we find

$$P_N(x) = \frac{1}{C'}\,\frac{z^N}{N!}\,e^{-\beta H(x)}, \qquad (28)$$

where z is a function of $\rho$ and $\beta$. This is the grandcanonical distribution, and it contains the factor 1/N!. A similar calculation applies to systems containing several kinds of particles. The construction now requires several particle reservoirs, each of which is connected by a semi-permeable membrane to the system of interest. This results, as one may expect, in factors $1/N_i!$ for each of the kinds of particles. It is a remarkable fact that, in Van Kampen's construction, it is not the indistinguishability of classical particles that gives rise to the factor 1/N!. Rather, it is the distinguishability of classical particles! Van Kampen does not divide by the number of microstates that lead to the same macrostate, but rather multiplies by the number of possibilities to choose the N particles from among the total amount of N*. Thus, all microstates that lead to the same macrostate are taken into account; states that only differ by a permutation of particles of the same kind are counted as distinct. It is in this way that 1/N! appears in the grandcanonical distribution function.
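For an ideal gas, where the Boltzmann factor does not depend on the particle positions beyond their confinement to V, Van Kampen's limit reduces to the statement that the binomial distribution over N tends to a Poisson distribution, whose factorial is precisely the 1/N!. The sketch below (illustrative parameter values; not from the original paper) checks this numerically.

```python
import numpy as np
from scipy.stats import binom, poisson

V, Vstar = 1.0, 1.0e4          # subvolume and reservoir volume (illustrative units)
Nstar = 100000                 # reservoir particle number; density rho = Nstar/Vstar
rho = Nstar / Vstar

Ns = np.arange(0, 40)
P_binomial = binom.pmf(Ns, Nstar, V / Vstar)  # exact: choosing N of the N* particles
P_poisson = poisson.pmf(Ns, rho * V)          # limit law; the 1/N! sits in this pmf

print(np.max(np.abs(P_binomial - P_poisson)))  # small: the two laws agree in the limit
```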
A different argument for adding 1/N! to the partition function has been given in a short and barely noticed paper by Buchdahl [23]. He presents his argument as a classical argument, in which considerations of quantum mechanical indistinguishability play no role, and in which no convention is required about extensivity of entropy. Rather, he claims that internal consistency of the Gibbsian ensemble formalism demands that 1/N! should be present in both the canonical and grandcanonical partition function. Below, I will criticize parts of his argumentation, but what will survive my criticism is still an interesting conclusion: for the grandcanonical partition function (though not for the canonical partition function), 1/N! can be derived on the basis of this classical argument. Extensivity of entropy is however also required in order to reach this conclusion. By internal consistency of the Gibbs formalism, Buchdahl means that the canonical and grandcanonical partition functions ought to depend in the same way on the number of particles. That is, the grandcanonical partition function should be a weighted sum of canonical partition functions:

$$\Xi = \sum_N g_N\,\zeta^N \int e^{-\beta H_N(x)}\,dx. \qquad (29)$$

Here, $g_N$ is the N-dependence of the partition function that this argument aims to determine, and $\zeta \equiv \exp(\beta\mu)$. Buchdahl now appeals to the following relations for the thermodynamic potential X:

$$X = -pV, \qquad (30)$$

$$X = -k_BT\ln\Xi. \qquad (31)$$

From this, it follows that

$$\Xi = b\,e^{aV}, \qquad (32)$$

where a and b are functions of T and $\mu$ alone. In order to determine $g_N$, it suffices to compare Equations (29) and (32) for a specific case, namely the ideal gas, for which $\int e^{-\beta H_N(x)}\,dx = (V/\lambda^3)^N$, with $\lambda$ a function of temperature. One finds that

$$g_N = b\,\frac{(a\lambda^3/\zeta)^N}{N!}, \qquad (33)$$

where, since $g_N$ may not depend on T or $\mu$, the quantities $a\lambda^3/\zeta$ and b need to be constants, so that we end up with $g_N=1/N!$ (up to factors that do not depend on N). As remarked earlier, Buchdahl presents this argument as an alternative to requiring extensivity of entropy. However, in fact, such a requirement is hiding behind Equation (30)! Buchdahl refers for this relation to his textbook [24], and it turns out that he there derives this relation on the basis
of the assumptions of extensivity of entropy S, extensivity of energy U, and extensivity of the thermodynamic potential X. Now, as a matter of fact, one could be a little sparser in the assumptions and still arrive at Equation (30); this relation also follows from assuming extensivity of the volume V and intensivity of the chemical potential $\mu$. (This follows by applying Euler's theorem to $X(T,V,\mu)$.) Since intensivity of $\mu$ and extensivity of S are intimately related through

$$\mu = -T\left(\frac{\partial S}{\partial N}\right)_{U,V}, \qquad (34)$$

these assumptions are of a similar kind. My other point of criticism is that the requirement expressed by Equation (29) is too strict. Why would internal consistency of the Gibbs formalism demand that the N-dependence of the canonical and grandcanonical ensembles is exactly the same? After all, variation of particle number exactly marks the difference between the two ensembles. In my view, it would be more reasonable to demand that the grandcanonical partition function is indeed a weighted sum of canonical partition functions, but with weights $f_N$ that are allowed to vary with N:

$$\Xi = \sum_N f_N\,g_N\,\zeta^N \int e^{-\beta H_N(x)}\,dx. \qquad (35)$$

The previous argument then only determines the product $f_Ng_N$, and thus the N-dependence of the grandcanonical ensemble, not of the canonical ensemble. However, note that this is all that we were after. As I argued earlier, the N-dependence in the canonical ensemble is immaterial, since it does not affect the value of entropy differences or of ensemble averages. Thus, Buchdahl does provide us with a classical account of the N-dependence in the case where it actually matters.
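Buchdahl's conclusion can be probed numerically (an illustrative sketch under the reconstructed Equations (29)–(33), with arbitrary parameter values): only weights proportional to 1/N! make $\ln\Xi$ strictly proportional to V, as Equation (32) demands; constant weights do not.

```python
import numpy as np
from math import factorial

lam3, zeta = 1.0, 0.1                 # thermal volume and fugacity (illustrative)

def lnXi(V, g):
    # Grandcanonical sum with ideal-gas canonical integrals (V/lam3)**N, cf. Eq. (29)
    return np.log(sum(g(N) * (zeta * V / lam3) ** N for N in range(200)))

for V1, V2 in [(1.0, 2.0), (2.0, 4.0)]:
    r_fact = lnXi(V2, lambda N: 1.0 / factorial(N)) / lnXi(V1, lambda N: 1.0 / factorial(N))
    r_flat = lnXi(V2, lambda N: 1.0) / lnXi(V1, lambda N: 1.0)
    print(r_fact, r_flat)   # with 1/N!: exactly 2.0 (ln Xi ~ V); with flat weights: not
```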
It is instructive to compare the two ways in which we have determined the N-dependence of the grandcanonical partition function. Both, in their own way, are concerned with the internal consistency of the Gibbs formalism. In Van Kampen's case, the grandcanonical distribution is derived from the canonical distribution in a suitable limit, in which part of the canonically distributed system plays the role of a particle reservoir for the rest of the system. In the case of systems containing several kinds of particles, this construction makes use of semi-permeable membranes. In Buchdahl's case, the requirement is that the grandcanonical partition function is a weighted sum of canonical partition functions, supplemented with either the assumption that entropy is extensive, or the assumption that the chemical potential is intensive. It is clear that Van Kampen's derivation is the more parsimonious one. Saunders comments as follows on Van Kampen's derivation: “Why so much work for so little reward? Why not simply assume that the classical description be permutable (i.e., that points of phase space related by a permutation of all N* particles represent the same physical situation)?” [21] (p. 196). I respond by noting that the reward should not be underestimated. What Van Kampen provides is a derivation of the N-dependence of the grandcanonical partition function that stays true to the core of classical statistical mechanics, in which the particles are treated as objects that in principle could be distinguished. Combining his argument with Uhlenbeck and Ford's way of relating the grandcanonical entropy of Equation (24) to quasistatic changes in other thermodynamical quantities, we obtain an entropy function that is suited to describe processes in which particle number is allowed to vary.
The Gibbs Paradox in Statistical Mechanics

We finally turn to the entropy of mixing in a statistical mechanical description of the mixing process. Since we are dealing with processes in which the particle number varies, we appeal to the grandcanonical ensemble. Fortunately, the N-dependence of the partition function, and thus also of the entropy, has now been determined. We can simply calculate the entropy of an ideal gas by plugging the partition function into Equation (24), and we find
$$S = Nk_B\left(\tfrac{3}{2}\ln T + \ln\frac{V}{N} + c\right), \qquad (36)$$

with c a constant that does not depend on T, V or N (N here standing for the grandcanonical expectation value of the particle number). For the mixing of two equal gases, we find

$$\Delta S = S(T,2V,2N) - 2S(T,V,N) = 0, \qquad (37)$$

and, for the mixing of two different gases, we find

$$\Delta S = 2Nk_B\ln 2. \qquad (38)$$

We thus arrive at the same results as in thermodynamics. This means that we have solved the second version of the Gibbs paradox, by restoring the original Gibbs paradox.
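As a numerical cross-check (an illustrative sketch based on the reconstructed Equation (36); the constant c is arbitrary and drops out of all differences), the grandcanonical entropy reproduces exactly the thermodynamical mixing results of Equations (37) and (38).

```python
import numpy as np

kB = 1.380649e-23
T, V, N = 300.0, 1.0e-3, 1.0e23
c = 2.5                              # the N-, T-, V-independent constant in Eq. (36)

def S(N, V):                         # Equation (36)
    return N * kB * (1.5 * np.log(T) + np.log(V / N) + c)

# Equal gases: initial state 2 x (N in V), final state one gas of 2N in 2V -> Eq. (37)
dS_same = S(2 * N, 2 * V) - 2 * S(N, V)
# Different gases: each species separately spreads from V into 2V -> Eq. (38)
dS_diff = 2 * S(N, 2 * V) - 2 * S(N, V)

print(dS_same)                         # ~0 (zero up to floating-point rounding)
print(dS_diff / (N * kB * np.log(2)))  # 2.0
```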
For just the same reasons as in the thermodynamical case, the discontinuous difference between mixing the same or different gases should not be considered paradoxical. In Section 2, we discussed three different ways of correctly arriving at the entropy of mixing. In fact, we can reproduce all of them in the statistical mechanical context. The calculation just presented parallels Denbigh's derivation of Equation (14), even though an important difference is that now we did not have to put in as an assumption that entropy is extensive. Van Kampen's account of the Gibbs paradox can also be paralleled in the context of the grandcanonical ensemble. He himself now does not calculate the value of the entropy, but instead points to a discontinuity in the probability function. The probability function for a mixture of two gases is given by
$$P_{N_A,N_B}(x) = \frac{1}{C'}\,\frac{z_A^{N_A}}{N_A!}\,\frac{z_B^{N_B}}{N_B!}\,e^{-\beta H(x)}, \qquad (39)$$

and this does not reduce to Equation (28) when A and B are equal, and $N=N_A+N_B$. Finally, we could also simply copy Planck's line of reasoning, and conclude that separating a mixture by means of his telescopic construction with semi-permeable membranes leaves the entropy constant. We then again only need to compute the entropy change for an isothermal compression of the two containers, each containing a single gas. Completely analogously to Equation (15), we again arrive at the same result. This time, however, we might as well work with the canonical rather than the grandcanonical ensemble, since the number of particles remains constant during the compression. Statistical mechanics is a far richer theory than thermodynamics, because it brings into the picture the details of microscopic motion, and the interactions between molecules. Such details matter also in the description of mixing processes: arbitrary Hamiltonians can be taken into account, and interactions between the particles may then affect the value of the entropy of mixing. Still, entropy also in the statistical mechanical context is a function of just a handful of macroscopic variables. Stripped down to the simple case of classical ideal monatomic gases, the functional relationships between entropy, volume, temperature and particle number are the same as in thermodynamics.
We thus reach, within statistical mechanics, the same verdict as in thermodynamics. Let me finally comment on alternative stories about the Gibbs paradox. Dieks [13] (p. 371) claims that ‘if the
submicroscopic particle picture of statistical mechanics is taken completely seriously, the original formula S=klogW, without the ad hoc division by N!, gives us correct results’, for we may imagine there to be semi-permeable membranes that are not only sensitive to the kinds of gas A and B, but also to individual particle trajectories and origins. Such membranes can be used to bring back all particles to their original container also after two portions of the same gas have been mixed. Thus, in parallel to Planck's reasoning, we find an entropy of mixing of $2Nk_B\ln 2$ also for two portions of the same gas; and this is now seen as the correct statistical mechanical result. Dieks claims that the practical issue that such membranes do not exist, and that macroscopic separation techniques will not be capable of selecting particles on the basis of their individual trajectories, is irrelevant to the more principled issue of whether particles are distinguishable. To this much I agree; also in the original construction given by Planck, the membranes need not actually be available in practice in order for them to do their conceptual work in shedding light on the difference between mixing equal or unequal gases. However, the question remains which description is the most appropriate. If one regards the final state of the mixing process as different from the initial state because not exactly the same particles are located in each of the containers, then obviously work needs to be done in order to restore the initial state, and there is an entropy of mixing also in the case of mixing the same gas. However, this does not mean that this is the correct or the most natural way of looking at mixing processes, even in a statistical mechanical context. Entropy is, also in the statistical mechanical context, most naturally viewed as a function of a handful of macroscopic variables. Microscopic details only enter in the exact nature of those functional relationships.
REFERENCES

1. Gibbs, J.W. On the equilibrium of heterogeneous substances. In The Scientific Papers of J. Willard Gibbs; Vol. I, Original edition 1875–1878; Dover: New York, NY, USA, 1961.
2. Pauli, W. Pauli Lectures on Physics: Vol. 3. Thermodynamics and the Kinetic Theory of Gases; MIT Press: Cambridge, MA, USA, 1973.
3. Reif, F. Fundamentals of Statistical and Thermal Physics; McGraw Hill: Singapore, 1985; p. 651.
4. Huang, K. Statistical Mechanics; Wiley: New York, NY, USA, 1963; p. 493.
5. Schrödinger, E. Statistical Thermodynamics; Cambridge University Press: Cambridge, UK, 1946; p. 88.
6. Uffink, J. Compendium of the foundations of classical statistical physics. In Handbook for Philosophy of Physics; Butterfield, J., Earman, J., Eds.; Elsevier: Amsterdam, The Netherlands, 2006; pp. 924–1074.
7. Dieks, D. Is there a unique physical entropy? Micro versus macro. In New Challenges to Philosophy of Science; Andersen, H., Dieks, D., Gonzalez, W.J., Uebel, T., Wheeler, G., Eds.; Springer: Dordrecht, The Netherlands, 2013; pp. 23–34.
8. Versteegh, M.A.M.; Dieks, D. The Gibbs paradox and the distinguishability of identical particles. Am. J. Phys. 2011, 79, 741–746.
9. Landsberg, P.T. Thermodynamics and Statistical Mechanics; Oxford University Press: Oxford, UK, 1978; p. 461.
10. Denbigh, K.G. The Principles of Chemical Equilibrium; Cambridge University Press: Cambridge, UK, 1955; p. 491.
11. Planck, M. Treatise on Thermodynamics; Dover: New York, NY, USA, 1945; p. 297.
12. Van Kampen, N.G. The Gibbs paradox. In Essays in Theoretical Physics. In Honour of Dirk ter Haar; Parry, W.E., Ed.; Pergamon Press: Oxford, UK, 1984; pp. 303–312.
13. Dieks, D. The Gibbs paradox revisited. In Explanation, Prediction, and Confirmation; Dieks, D., Gonzalez, W.J., Hartmann, S., Uebel, T., Weber, M., Eds.; Springer: Dordrecht, The Netherlands, 2011; pp. 367–377.

THE GIBBS PARADOX AND PARTICLE INDIVIDUALITY

Applying Equation (7) to the Gibbs situation [2,4], one could try to define the total entropy as suggested in the discussion above, following Equation (7) ([5,9], see also [10]):

$$S = k\ln\left(\frac{V_1^{N_1}}{N_1!}\,\frac{V_2^{N_2}}{N_2!}\right) + C, \qquad (10)$$
with a constant C that can be arbitrarily chosen. Then, it is true that a choice for C can be made such that the resulting formula suggests a total entropy that is extensive: choose $C = -2Nk\ln V$, in which case the total entropy is the sum of two partial entropies (in the more general case of unequal partial volumes $V_1$ and $V_2$, we would arrive at the entropy expressions $S_i = k\ln\left((V_i/V)^{N_i}/N_i!\right)$, with V the total volume). However, making the entropy linear in N in this way would be achieving extensivity by fiat, by the conventional choice of a different constant C for each individual value of N. This extensivity by choice clearly does not give us an explanation on the basis of what physically happens on the microlevel. Of course, it was not to be expected that we could derive a physical N-dependence of the combined system, because this system is isolated and N is constant. Therefore, the factorials in Equation (7) do not imply extensivity of the combined system and do not solve the statistical paradox.
(Swendsen [11] has proposed an approach that formally looks similar, starting from Equation (7) and entropy as the logarithm of the probability, but with the important difference that the probability in Equations (7) and (10) is interpreted in an information theoretic sense, namely as a representation of our uncertainty about where individual particles are located [12]. Swendsen argues that the form of the dependence of the entropy on N can in this case be derived even for closed systems: since we are ignorant about which particles, from all particles of the same kind in the world, are located in the system in question, the probability formula Equation (7), with the desirable factor 1/N!, applies. From a Boltzmannian point of view, this information theoretical argument about particles in other systems cannot yield a physical explanation for what happens in the isolated Gibbs setup. In [13,14], Swendsen responds to criticism, but does not address our concerns.) Summing up, taking account of the individuality of classical particles in the calculation of probabilities naturally gives rise to factors of the form 1/N! in the probabilities; however, this does not imply the extensivity of the entropy when two gases of the same kind are mixed, and does not solve the statistical Gibbs paradox. (This is not to deny, of course, that the grand canonical ensemble, with its factor 1/N! in the probability distribution as derived from the binomial distribution, plays an essential role in problems in which particle numbers can vary, for example in the study of dissociation equilibria [9]. What we deny is that this grand canonical factor is relevant for the solution of the Gibbs paradox.)
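The arithmetic behind these extensivity claims is easily checked (an illustrative sketch, not part of the original): by Stirling's approximation, $k\ln(V^N/N!)$ is extensive (doubling V and N doubles it), whereas $k\ln V^N$ is not, the deficit being exactly the $2Nk\ln 2$ of the statistical paradox.

```python
import numpy as np
from scipy.special import gammaln     # ln(N!) = gammaln(N + 1), stable for large N

k = 1.0
def S_with_factorial(V, N):           # k ln(V**N / N!)
    return k * (N * np.log(V) - gammaln(N + 1))

def S_without(V, N):                  # k ln(V**N)
    return k * N * np.log(V)

V, N = 1.0, 1.0e6                     # fix the density N/V and double the system
print(S_with_factorial(2 * V, 2 * N) / (2 * S_with_factorial(V, N)))  # ~1.0: extensive
print((S_without(2 * V, 2 * N) - 2 * S_without(V, N)) / (N * k * np.log(2)))  # 2.0: not
```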
The Effects of Particle Permutability

The traditional justification for inserting a factor 1/N! in the entropy goes back to Gibbs himself and to Planck (see [5,9,15] for references to the early literature) and relies on the argument that we have overcounted the number of states because a permutation of particles of the same kind does not change the state. In the case of a system consisting of N particles of the same kind, we must accordingly divide the number W—obtained by traditional counting—by N!, the number of permutations of N particles. In order to judge this argument, we need to be clear about the intended sense of "exchanging particles" (cf. [4], Sections 2 and 3). If a permutation of conventionally chosen particle labels is intended, permutability is a truism. Empirical facts concerning a particle system are completely determined by physical particle properties, and names or labels that do not represent such physical features are irrelevant; this holds independently of whether the particles have the same intrinsic properties or not. This trivial character of label permutability makes it irrelevant for the Gibbs paradox. We need a more substantial notion of exchangeability if it is to be physically relevant.
The exchange notion implicit in most permutability arguments seems to be the following. Consider particles of the same kind (i.e., with the same intrinsic properties), suppose particle 1 is in the one-particle state a, and particle 2 is in the state b. Now take, not as a concrete physical process but merely in thought, particle 1 with its intrinsic properties and substitute it for particle 2, in the state b particle 2 was in; and, vice versa, put particle 2 in the state a that particle 1 was in. Because the particles possess the same intrinsic properties, nothing has changed in the physical situation after this swap—except for the interchange of the labels 1 and 2 that attach to the particles. Since these labels are conventional, we can change them back (leaving the particles where they are), so that we end up in exactly the same situation as when we started. In this way, we obtain Ni! permuted states of particles of kind i with exactly the same physical properties. To eliminate this overcounting, so the argument goes, we should divide the number W of a system consisting of Ni particles of kind i by the factor ∏iNi!, so that we obtain a phase volume that is smaller than the volume considered before permutability was taken into account. With Equation (4), the new counting method leads to a reduced value of the entropy: k ln(W/∏iNi!). As we have seen, this is exactly the expression needed for extensivity of the entropy and disappearance of the statistical Gibbs paradox. The argument for dividing by ∏iNi!, and thus for passing from ordinary state space to the "reduced state space", is convincing if the exchanges are not associated with physical differences. The case of a particle interchange as just described is an example of such an unphysical change: here, the swapping of particles of the same kind was a mere mental operation and not a physical process. The intuitive appeal coming from the term "particle exchange" is deceptive in this context: it is obscure if there is anything exchanged at all. In the case of particles of the same kind, placing particle 1 in the state of particle 2, and vice versa, only makes sense if the particles possess an identity over and above their intrinsic physical properties and states. In philosophy, individuating principles of this sort are sometimes discussed ("primitive thisness" or "haecceity"), but such concepts are not recognized in physical theory. In accordance with this, statistical mechanics is not meant to consider situations as different that relate to each other by the permutation of putative non-physical particle identities. This seems to entail that, in the case of particles of the same kind, we should never use the usual unreduced state space but ought to always pass to the reduced state space where all states differing by exchanges of particle labels have been collapsed into one.
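The difference between the unreduced and reduced state spaces, and the fact that dividing by N! only approximates the reduced count, can be made explicit in a small enumeration (a sketch; the values of d and N are arbitrary):

```python
from itertools import product
from math import factorial

d, N = 4, 2  # 4 one-particle states ("cells"), 2 particles of the same kind

labeled = list(product(range(d), repeat=N))      # unreduced: labeled-particle states
reduced = {tuple(sorted(s)) for s in labeled}    # reduced: label permutations collapsed

print(len(labeled))                  # 16 = d**N
print(len(reduced))                  # 10: multisets of occupied one-particle states
print(len(labeled) // factorial(N))  # 8: W/N! undercounts multiply occupied states
```

Dividing by N! agrees with the reduced count only when multiply occupied one-particle states are negligible, i.e., in the dilute regime.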
However, the use of the unreduced state space for particles of the same kind is legitimate if the particle labels (which also number the coordinate axes of the phase space) are assigned in a physically meaningful way, for instance in the following way. Number the particles, in the initial state, according to their positions—for example, from left to right on the basis of their horizontal spatial coordinates. Thus, the particles in the left compartment receive the labels 1,2,…,N, and the particles on the right-hand side are numbered N+1,N+2,…,2N. Now, it is important that, although these assignments of labels are conventional, once given, the labels stick to the particles over time via the trajectory that each particle follows. Thus, given this initial labeling, it makes sense to say that, at some later point in time, particles 1 and N+1 may end up in states x and y, respectively, but that it may also be the other way around (1 in y and N+1 in x); and that these two states differ physically: in the first case, the particle that originated from the left-hand corner of the container occupies state x, and, in the second case, it is another particle that does so. In this labeling scheme, then, there is a physical difference between "particle i at x, particle j at y" and "particle i at y, particle j at x"; and this distinction is in principle (we say in principle because it may be that the difference is practically irrelevant, in which case we may revert to the reduced phase space—see the discussion at the end of this section) relevant to the calculation of W. The numbering of the axes of the phase space now carries physical meaning, and there is accordingly no justification for dividing W by ∏iNi!. For systems that are subject to external manipulation, in the sense that something is done to the system that affects the number of possible particle trajectories, the situation is different, though. In the Gibbs case that starts with two equal volumes of equal gases, both with initial particle number N, the multiplicity of realizations of any microstate with the partition in place is (N!)² because the particles cannot move out of their compartments (the N-particle states localized in each individual compartment can each be realized in N! possible ways). After removal of the partition, the multiplicity of states with N particles on both sides becomes much greater: now, we must take into account that the particles may go from one side to the other. As a result, after the removal, there are C(2N, N) different ways that the particles can distribute themselves over the total volume. There are therefore many more evolutions and states that lead up to macroscopic equilibrium than before: the only originally allowed situations, in which particles stayed in their own compartments, have become statistical oddities. Given any initial state just before the partition was removed, the probability is overwhelming that particles will move out of their original regions and will redistribute themselves approximately uniformly over the total volume. The total amount of phase volume available to the system has grown spectacularly, which is measured by the additional mixing entropy k ln C(2N, N) ≈ 2Nk ln 2. Because classical particles always possess identity over time, the calculation of the mixing entropy remains the same regardless of whether the particles have the same intrinsic properties or not. Summing up the results of these two subsections, the statistics of particles possessing individuality does not entail that the entropy is extensive in the Gibbs set up.
Quite the opposite, individuality is the essential factor responsible for the appearance of an entropy of mixing: particles have their own individual trajectories according to classical physics, and the possibility of physical exchanges of particles leads to a growth of microstates and consequently to an increase of the statistical entropy. Evidently, the growth in accessible phase volume that is at issue here will more often than not be without empirical consequences (there are, by assumption, no chemical differences between the gases). This introduces a notion of pragmatic non-individuality and permutability of particles. If we consider
the difference between gas particles coming from the left and right as irrelevant (because it does not show up in macroscopic quantities), there is no practical point in thinking of an increase of the phase volume. In the numbers of states bookkeeping, we can in this case divide all multiplicities after mixing by C(2N, N) (expressing that it does not matter from which compartment the particles originally came; we factor out the associated multiplicity)—this removes the entropy of mixing. This procedure gives us the right empirical entropy values, given the measurement limitations inherent in thermodynamics (In [8], the authors present an elegant general information theoretic account of how entropies on different levels of description relate to each other. It follows from their treatment that ignoring particle individualities and trajectories leads to the appearance of a factor 1/N! in the entropy expression, in accordance with what we argue). The reduction of the number of states and the transition to the reduced phase space are thus pragmatically justified; but it should not be concluded that the unimportance of trajectory information for the usual phenomenal predictions implies the non-existence of differences on the microlevel (Saunders [16], by contrast, takes the position that microstates really and literally remain the same, as a fundamental microscopic fact, when two particles of the same kind are swapped. According to his analysis, the absence of an entropy of mixing is due to this fact. This is a major difference with our argument).
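A quick numerical check of the multiplicities used above (a sketch; the particle number is arbitrary) confirms that k ln C(2N, N) approaches the familiar 2Nk ln 2 for large N:

```python
from math import comb, log

N = 50                      # particles initially in each compartment
W_mix = comb(2 * N, N)      # multiplicity gained on removing the partition
print(round(log(W_mix), 2))       # k ln C(2N, N) in units of k: ~66.78
print(round(2 * N * log(2), 2))   # Stirling estimate 2N ln 2: ~69.31
```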
QUANTUM MECHANICS

In quantum mechanics, it is a basic principle that states of particles of the same kind (i.e., with the same intrinsic properties) must be completely symmetric or anti-symmetric under permutations. In the case of bosons, we have symmetrical many-particle states that do not change under permutations, whereas the states of fermions are anti-symmetrical, thus incurring a minus sign under uneven permutations. To see exactly what is permuted, and what the consequences are, consider the case of two electrons. In the (unreduced) state space representation as we have discussed and defended it in Section 4.1, if the electrons are in the states φ and ψ, respectively, the total system is in one of two states, φ1ψ2 or φ2ψ1, where the labels refer to some distinguishing property that persists over time. As we have noted, distinctions on the basis of such labels may be practically irrelevant so that we may switch over to the reduced classical phase space. In quantum
mechanics, the (anti-)symmetrization postulates tell us to do something similar, but now not as a pragmatic choice but as a law-like principle: states solely differing by a permutation of the indices are combined into one superposed state. In the case of two quantum electrons, this results in an anti-symmetrical total state, e.g.,:

(11) (1/√2)(|φ⟩1|ψ⟩2 − |ψ⟩1|φ⟩2)

In an N-particles state, the N! product states (The multiplicity is N! if each one-particle state occurs only once, as is the case for fermions; for bosons, we may assume the same in the thermodynamic limit) that are permuted copies of each other are similarly united in one (anti-)symmetrical superposition. Consequently, instead of N! possibilities, we have only one state. This reduction of the number of states seems highly relevant for the Gibbs paradox: it provides us with a factor 1/N! at exactly the right place, as instead of the classical value S = k ln W (with W the number of states of labeled particles) we now obtain S = k ln(W/N!), not as a pragmatic choice but as a matter of principle. According to this line of thought, the interchange of two particles from the left and right, respectively, does not lead to a new state and therefore not to a new physical situation, so that there can be no entropy of mixing in the case of two equal gases. However, there are some caveats here; and taking them into account undermines the just-mentioned conclusion. First, it is important to realize that the permutations considered in the symmetrization postulates permute the indices that occur in the N-particle state. These indices refer to the individual Hilbert spaces whose tensor product forms the N-particles Hilbert space, but do not label what we ordinarily would call "particles" in the classical limit (see [17] for an extensive discussion of this point). For example, in a case in which the states |φ⟩ and |ψ⟩ in Equation (11) represent two narrow and well-separated spatial wave packets, it is these wave packets (so states) that are identified as particles in laboratory practice. These individual wave packets approximately follow classical particle trajectories (In this special case of wave packets with distinguishable trajectories, quantum mechanics describes entities that are like individual classical objects. In [16,18,19], Saunders considers such emerging "individuals" as new objects, which differ from quantum particles conceived as indistinguishable (though possibly "weakly discernible") objects. In our opinion, it is better to say that, in these cases, particles emerge as individual entities, and that the concept of a particle
is not always applicable at the fundamental quantum level. One among several reasons was already noted: the "individuals" correspond to one-particle states rather than to the permuted indices [1,4,17,20]; another is that the "weakly discernible" objects are always in exactly the same state, so that they can never possess individuating properties. For example, all electrons (conceived as indistinguishables) in the universe possess exactly the same spatial and non-spatial characteristics. It seems odd to call such putative entities "particles"). By contrast, the two indices 1 and 2 in Equation (11) are both associated with exactly the same physical state, namely the density operator (1/2)(|φ⟩⟨φ| + |ψ⟩⟨ψ|). Thus, any physical entities labeled by these indices are in exactly the same state, which implies that the permutation of 1 and 2 is without physical meaning. However, that does not follow at all for the particles in the states |φ⟩ and |ψ⟩. These particles can be distinguished, and in special situations they mimic the behavior of individual classical particles with trajectories. There is a second important ingredient to be added to the argument in order to make the analogy with the classical situation closer. In the initial Gibbs situation, it is assumed that the two groups of particles are confined to their respective compartments; for this to be so, there must be interactions between the walls (including the partition) and the particles. In the case of narrow wave packets, these interactions can be visualized as reflections of the wave packets by the walls. This implies that the total quantum state will not be as in Equation (11) (or its many-particles analogue), but must include the environment. In any realistic model of the interactions, the particles will leave their imprints on the walls and this implies that a sort of physical labeling of the particles takes place: the environment keeps a record of the particles. This point is reinforced when we also consider what must happen when we try to separate particles originating from the two compartments by means of the quantum analogues of semi-transparent membranes. These membranes should be devices that are sensitive to the difference between wave packets coming from the left and right, respectively; and should be able to respond differently in these two cases. This would lead to a detection and effectively a physical labeling of the particles. To include this additional structure in the expression for the total Gibbs state, we have to introduce states referring to the environment. Let the interaction at some early stage between electron state |φ⟩ and initial
environment state |E0⟩ lead to the environment state |Ea⟩, and the interaction with |ψ⟩ lead to the environment state |Eb⟩. Thus: |φ⟩|E0⟩ → |φ⟩|Ea⟩ and |ψ⟩|E0⟩ → |ψ⟩|Eb⟩. Then, instead of Equation (11), we obtain:

(12)

In this formula, the different environment states correlated with |φ⟩ and |ψ⟩, respectively, serve as physical markers of these electron states. Now, in order to see whether the removal of the partition in the Gibbs set-up results in a growth of the available state space, we can repeat the argument of Section 4.1. We can write for the initial state instead of Equation (12):
(13)

Now suppose that, after the partition has been removed, there is a possible state with one electron in state |φ′⟩, localized on the left, and one in state |ψ′⟩, on the right. Then, there are two possibilities for the total state (and different evolutions leading up to these states), namely

(14)

and

(15)

These are two different two-particles states (The physical meaning of these states can be made more transparent by using the formalism of "wedge products" [21]. With the wedge product ∧, the states (14) and (15) assume the forms |φ′⟩a ∧ |ψ′⟩b and |ψ′⟩a ∧ |φ′⟩b, respectively, with the natural interpretation of individual particles a and b in switched states). The difference between these states corresponds, just as in the classical case, to a physical interchange between the two particles. The situation is essentially the same as the classical one of physically exchanged positions: either |φ⟩ has developed into |φ′⟩ and |ψ⟩ into |ψ′⟩, or |φ⟩ has become |ψ′⟩ and |ψ⟩ has become |φ′⟩. One of these two states was not possible before the removal of the partition, and has become so afterwards. The number of possible states has thus grown by a factor 2; in the general N-particles case, the multiplicity is N!.
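The orthogonality of the two alternatives in (14) and (15) can be checked with elementary linear algebra (a toy sketch; the small dimensions and the modelling of the environment record as a single marker factor attached to each particle are illustrative assumptions):

```python
import numpy as np

def antisym(u, v):
    """Normalized antisymmetric two-particle state built from one-particle states u, v."""
    s = np.kron(u, v) - np.kron(v, u)
    return s / np.linalg.norm(s)

# One-particle states: spatial wave packet  (tensor)  environment marker
dx, de = 3, 2
phi, psi = np.eye(dx)[0], np.eye(dx)[1]   # orthogonal packets phi', psi'
Ea, Eb = np.eye(de)[0], np.eye(de)[1]     # orthogonal environment records

state_14 = antisym(np.kron(phi, Ea), np.kron(psi, Eb))  # packet from the left ends in phi'
state_15 = antisym(np.kron(psi, Ea), np.kron(phi, Eb))  # the physically swapped alternative

print(abs(np.vdot(state_14, state_15)))  # 0.0: two distinct (anti)symmetric states
```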
In some situations, the (anti-)symmetrical quantum states therefore show characteristics that are very similar to those of classical N-particle states, so that analogous considerations apply concerning the Gibbs paradox. A Maxwellian quantum demon would be able to distinguish wave packets coming from the left from those coming from the right, and verify the existence of an entropy of mixing. Admittedly, situations in which the quantum states mimic the behavior of classical particles are very special cases. However, also in more general situations, a distinction in principle between one-particle states originally coming from the right or left is possible. Each one-particle state that is localized in the right compartment (in the initial Gibbs situation) is orthogonal to all one-particle states in the left compartment. This mutual orthogonality will be maintained by independent unitary evolution of the one-particle states (as in the case of an ideal gas). By virtue of this persisting mutual orthogonality, ideal yes-no measurements can in principle be devised that at a later stage of the evolution determine the origin of the one-particle states—needless to say that these measurements would soon become forbiddingly complicated. However, a superhumanly skillful and knowledgeable demon could exploit them to establish that there is an entropy of mixing; with the help of quantum measuring devices as analogues of semi-permeable membranes, particles originally coming from the left will be allowed to pass, others not, and vice versa, in analogy with the working of the Maxwell demon in the classical case [2]. This analogy shows that the appearance of 1/N! as a result of (anti-)symmetrization does not rule out the appearance of an entropy of mixing in the case of equal gases, and so does not solve the Gibbs paradox. This does not mean that quantum mechanics is irrelevant to the entropy of mixing. If quantum gases are not dilute (as supposed in the above arguments), Fermi-Dirac or Bose-Einstein statistics will have to be used and the multiplicity of states is no longer simply N!. This is one thing that will affect the value of the entropy of mixing. Another thing is the role of measurement interactions: in quantum mechanics, measurements partly determine the post-measurement properties of the measured particles. Interactions between the particles and the outside world therefore can affect the value of the mixing entropy; this is a theme that deserves to be explored elsewhere. There are also other complications that should be taken into account in the quantum case [22].
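That unitary evolution preserves the orthogonality exploited by the demon is immediate, but a one-line numerical confirmation may still be useful (a sketch; the dimension and random seed are arbitrary):

```python
import numpy as np
from scipy.stats import unitary_group

d = 16
left = np.eye(d)[0]    # a one-particle state initially localized on the left
right = np.eye(d)[1]   # an orthogonal state initially localized on the right

U = unitary_group.rvs(d, random_state=1)   # generic independent unitary evolution
print(abs(np.vdot(U @ left, U @ right)))   # ~1e-16: mutual orthogonality persists
```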
CONCLUSIONS

The account given in this paper is motivated by the original Boltzmannian idea to make macroscopic phenomena intelligible by interpreting them in terms of microscopic "mechanical" processes and objective probabilities (as in ergodic theory). According to this approach, the increase in entropy when two classical gases mix is a consequence of the increase in possibilities for the gas particles: they are no longer confined to one compartment but may move freely through the whole volume. Whether or not this microscopic change leads to empirical consequences depends on how discriminating our measuring techniques are. In standard thermodynamics, we only consider macroscopic features of physical systems, like pressure and temperature, and exclude the observation of individual particle paths. By contrast, chemical differences, however minute, are standardly taken to belong to the area of competence of thermodynamics. The standard thermodynamic account therefore says that there is no entropy of mixing in the case of gases of the same kind and that there is such an entropy if the gases are chemically different. However, as pointed out at the end of Section 2, it may well happen that, in an actual experiment, chemical differences are too small to be detectable, in which case they acquire the same status as hidden microscopic differences due to the origin of individual particles. The decision to restrict ourselves to a coarse-grained level of description of what happens when gases mix is pragmatic, and it is possible to go to a deeper, microscopic level. If we do so, we should be able to verify an entropy growth even if the gases that mix are of the same kind: instead of focusing on a macroscopically accessible difference, we should in this case pay attention to a microscopic one, namely "coming from the left" versus "coming from the right" for single particles. Permutability of labels or invariance under mental particle swapping are irrelevant for this conclusion, as are arguments that derive factors 1/N! from the use of the binomial probability distribution. The Boltzmannian microscopic explanation of these results is concretely visualizable and close to physical intuition, and in this way satisfies familiar standards of physical explanation [23]. There are nevertheless other ways of reproducing the thermodynamic predictions, most importantly by purely information theoretic methods. As these are less concerned with concrete physical processes on the microlevel, and use probabilities to quantify our lack of information ("subjective probabilities"), they satisfy different standards of intelligibility. Viewed from this angle, the
Gibbs paradox and statistical physics in general provide an interesting case of plurality of approaches in physics. Finally, quantum mechanics is able to describe situations that closely resemble cases from classical mechanics—a paradigm example is that of N narrow wave packets (coherent states) far apart in space. In such situations, quantum mechanics in very good approximation reproduces the classical predictions, and this includes the presence of an entropy of mixing. This is so regardless of the (anti-)symmetry of the total wave function, which already shows that the symmetrization postulates by themselves do not remove the Gibbs paradox. Semi-classical situations are of course an exception in quantum theory, but also, in more general quantum settings, the presence of an entropy of mixing for gases of the same kind remains possible in principle, as we argued in Section 5. It is true that quantum predictions as a rule differ from their classical counterparts, and that this also applies to the value of the mixing entropy (see the end of the previous section). However, it is wrong to think that the statistical entropy of mixing in the case of equal gases vanishes automatically because of the "identity of quantum particles of the same kind".
REFERENCES

Dieks, D. The Gibbs Paradox Revisited. In Explanation, Prediction, and Confirmation; Dieks, D., González, W.J., Hartmann, S., Uebel, T., Weber, M., Eds.; Springer: Dordrecht, The Netherlands, 2011.

For partially distinguishable spins, it can be shown (see Supplementary Note 3 for details) that S_info is an average of entropy changes for each value of M. For S_igno, all that changes is the probability p_J, which is replaced by its average over the distribution q_M. Importantly, for both observers, the result is a function of θ only via the probability distribution q_M for the spin value M. In Fig. 4, one observes the smooth transition from identical to orthogonal spin states as θ varies from 0 to π.
Figure 4: Results for partially distinguishable spins. Plots of S_info and S_igno as a function of the orthogonality of the spin states, as determined by θ in (32). The results shown are for a bosonic system with initial numbers of particles on either side of the box n = m = 15, in the low-density limit of large d. Also indicated is the classical change in entropy, 2n ln(2). Here, the greatest change in entropy occurs when the spin states are orthogonal at θ = π.
DISCUSSION

In contrast to the classical Gibbs paradox setting, we have shown that quantum mechanics permits the extraction of work from apparently indistinguishable gases, without access to the degree of freedom that distinguishes them. It is notable that the lack of information about this 'spin' does not in principle impede an experimenter at all in a suitable macroscopic limit with large particle number and low density—the thermodynamical value of the two gases is as great as if they had been fully distinguishable. The underlying mechanism is a generalisation of the famous Hong–Ou–Mandel (HOM) effect in quantum optics34,40,41. In this effect, polarisation may play the role of the spin. Then a non-polarising beam splitter plus photon detectors are able to detect whether a pair of incoming photons are similarly polarised. The whole apparatus is polarisation-independent and thus accessible to the ignorant observer. Given this context, it is therefore not necessarily surprising that quantum Gibbs mixing can give different results to the classical case. However, the survival of the effect in the low density limit is not readily apparent. This limit is reminiscent of the result in quantum reference frame theory38 that the lack of a shared reference frame presents no fundamental obstacle to communication42. Two recent papers18,43 have studied Gibbs-type mixing in the context of optomechanics. A massive oscillator playing the role of a work reservoir interacts with the photons via their pressure. This oscillator simultaneously acts as a beam splitter between the two sides of the cavity. In ref. 18, the beam splitter is non-polarising and thus (together with the interaction with the oscillator) accessible to the ignorant observer. The main behaviour there is driven by the HOM effect, which enhances the energy transfer to the oscillator. In ref. 43, which studies Gibbs mixing as a function of the relative polarisation rotations between left and right, bosonic statistics are therefore described as acting oppositely to Gibbs mixing effects—which is different from our conclusions. However, there is no contradiction: we have shown that an advantage is gained by optimising over all allowed dynamics. Moreover, the scheme in ref. 43 uses a polarisation-dependent beam splitter, which is only accessible to the informed observer. Therefore the effect described here cannot be seen in such a set-up. It is an interesting question whether such proposals can be adapted to realise the effects described here.
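For two photons meeting at a balanced, non-polarising beam splitter, the coincidence probability is (1 − |⟨s1|s2⟩|²)/2, interpolating between full bunching and fully distinguishable behaviour. A short sketch, assuming the cos(θ/2) overlap parametrisation used for the spins above:

```python
import numpy as np

def hom_coincidence(theta):
    """Coincidence probability for two bosons at a balanced beam splitter."""
    overlap = np.cos(theta / 2)            # <s1|s2>, parametrisation assumed
    return (1 - abs(overlap) ** 2) / 2     # full bunching (0) at theta = 0

for theta in np.linspace(0, np.pi, 5):
    print(round(float(theta), 2), round(hom_coincidence(theta), 3))
```

The coincidence rate rises smoothly from 0 at θ = 0 to the distinguishable value 1/2 at θ = π, mirroring the smooth transition seen in Fig. 4.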
It is important to determine what implications the thermodynamic enhancements predicted in this paper may have for physical systems. Such an investigation should make use of more practical proposals (such as refs. 16,18,43) to better understand possible realisations of mixing. For example, systems of ultra-cold atoms in optical lattices44 may provide a suitable platform to experimentally realise the thermodynamic effects predicted in this work. The question of the maximal enhancement in the macroscopic limit is particularly compelling given the rapid progress in the manipulation of large quantum systems45.
METHODS

The Supplementary Information contains detailed proofs. Supplementary Note 1 describes the treatment of classical particles, starting from a description akin to first quantisation, and then coarse-graining the state space along with appropriate restrictions on the allowed dynamics. Supplementary Note 2 fills out the derivation for the quantum ignorant observer sketched in the main text. Supplementary Note 3 provides details for the general case of non-orthogonal spins. Supplementary Note 4 computes the dimensions of the spaces H_x^λ from representation theory formulas. Supplementary Notes 5, 6 show how to take the low density and large particle number limits, respectively.
ACKNOWLEDGEMENTS We acknowledge financial support from the European Research Council (ERC) under the Starting Grant GQCOP (Grant No. 637352), the EPSRC (Grant No. EP/N50970X/1) and B.M.’s doctoral prize EPSRC grant (EP/ T517902/1). B.Y. is also supported by grant number (FQXi FFF Grant number FQXi-RFP-1812) from the Foundational Questions Institute and Fetzer Franklin Fund, a donor advised fund of Silicon Valley Community Foundation. We are grateful to Zoë Holmes, Gabriel Landi, Vlatko Vedral, Chiara Marletto, Bernard Kay, Matthew Pusey, Alessia Castellini and Felix Binder for valuable discussions.
CONTRIBUTIONS B.Y. and B.M. contributed equally to the research and writing the paper, supervised by G.A.
SUPPLEMENTARY INFORMATION

Supplementary Note 1 – Classical Treatment

Classical state space and microscopic dynamics

Here, we describe the classical setting with identical particles having an internal spin degree of freedom that is not accessed by the experimenter. The aim is to give a treatment that parallels the quantum one so that the two cases can be compared fairly. Each particle has two degrees of freedom – a position x = 1, . . . , d and a spin s = 1, . . . , S – which are the accessible and hidden degrees of freedom, respectively. (Note that we only require S = 2 in the main text.) We start from the point of view of a hypothetical observer for whom all the particles are fully distinguishable. The effective indistinguishability of the particles will be imposed later by a suitable restriction on the allowed dynamics, in analogy with the first-quantised description of quantum identical particles. The underlying state space of N distinguishable particles is

(1)

with Ω = {1, . . . , d} × {1, . . . , S}, which factorises as the product Ω^N = Ω_x^N × Ω_s^N of the individual spaces for each degree of freedom. A thermodynamical operation involves coupling the particles to a heat bath and work reservoir, the latter two of which we group into a joint system A whose states we designate by a label a. A state of the whole system can therefore be specified by a tuple (x, s, a). We assume the underlying microscopic dynamics to be deterministic and reversible; thus, an evolution of the whole system consists of an invertible mapping (x, s, a) → (x′, s′, a′).
Dynamics independent of spin and particle label

Now we impose the condition that the operation be spin independent. This translates into two features: (i) the spins are all unchanged, so s′ = s; and (ii) x′ and a′ are functions of x and a only, not s. It is clear that s is completely decoupled from the other variables, so that the dynamics of the apparatus are
the same for any value of s. Thus we can drop the redundant information and designate states of the whole system by (x, a). Next, we impose operational indistinguishability of the particles, again by restricting the allowed operations. An allowed operation must be invariant under relabellings of the particles: for a permutation π ∈ S_N, write π(x) = (x_π(1), . . . , x_π(N)). Then we require that

(3)

i.e., the transformation commutes with all permutations. This condition implies that a′ is a function only of a and the type t of x. By this, we mean t = (t1, . . . , td) specifies the number ti of particles in each cell i. It is then clear that, as far as the dynamics of A are concerned, it is sufficient to keep track of just (t, a). The total number of effective microstates of the particles, as seen by the ignorant observer, is then the number of possible types, equal to the multiset coefficient C(N + d − 1, N).
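As a quick sanity check of this counting (a sketch with arbitrary small values of N and d):

```python
from itertools import product
from math import comb

N, d = 3, 4  # 3 particles, 4 cells

def type_of(x):
    return tuple(x.count(i) for i in range(d))

types = {type_of(x) for x in product(range(d), repeat=N)}
print(len(types), comb(N + d - 1, N))  # both 20: types = multisets of N cells
```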
Subtlety with overly constrained dynamics

However, there is a subtlety: one can ask whether all (deterministic and reversible) dynamics in the space of (t, a) are possible under the constraint of Supplementary Eq. (3). Given a desired transformation (t, a) → (t′, a′), we need an underlying mapping (x, a) → (x′, a′) satisfying Supplementary Eq. (3), and this constraint determines how all configurations x of type t evolve. There may be a contradiction here – there are two ways in which a transformation might not be possible:
• If a permutation π leaves x unchanged, π(x) = x, but maps x′ to a different π(x′) ≠ x′, the transformation cannot be deterministic.
• If a permutation π maps x to a different π(x) ≠ x but leaves x′ unchanged, π(x′) = x′, the transformation cannot be reversible.

As an example, consider x = (1, 1), x′ = (1, 2), which have types t = (2, 0), t′ = (1, 1). A swap of the two particles preserves x but maps x′ to (2, 1) ≠ x′, so no such deterministic transformation exists. Intuitively, this is because there is no way of "picking out" a particle from cell 1 and moving it to cell 2 in a way that acts non-preferentially on the particles. In quantum mechanics, this obstacle is avoided because it is possible to map into an equal superposition of the two configurations x′ = (1, 2) and (2, 1).
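The impossibility claimed in this example can be confirmed by brute force: among all deterministic maps on two-particle configurations that commute with the particle swap, none sends (1, 1) to a configuration of type (1, 1). A minimal sketch:

```python
from itertools import product

d, N = 2, 2
configs = list(product((1, 2), repeat=N))   # (1,1), (1,2), (2,1), (2,2)

def swap(x):
    return (x[1], x[0])

# Search all 4**4 deterministic maps f with f(swap(x)) = swap(f(x))
found = False
for image in product(configs, repeat=len(configs)):
    f = dict(zip(configs, image))
    if all(f[swap(x)] == swap(f[x]) for x in configs) and f[(1, 1)] in [(1, 2), (2, 1)]:
        found = True
print(found)  # False: no label-symmetric map moves (1,1) to type (1,1)
```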
The quantum possibility of superposition hints at a way to avoid the problem in the classical case: widening the scope to include stochastic operations. Since it is crucial to require that all dynamics are microscopically deterministic, we introduce stochasticity using additional degrees of freedom containing initial randomness. These couple to the different ways the particles can be permuted, and must necessarily be hidden, i.e., not accessible to the observer, in order to maintain ignorance about the particle labels. The idea is to construct globally deterministic, reversible dynamics such that tracing out the hidden degrees of freedom gives stochastic dynamics on (x, a) via the probabilities p(x′, a′|x, a). Analogously to Supplementary Eq. (3), we impose the condition

(4)

The claim is that such dynamics exist that enable all possible (deterministic, reversible) transformations of (t, a). To see this, consider a desired transformation (t, a) → (t′, a′) and introduce additional variables h1, h2 which respectively contain information about x and x′. h1 starts in a "ready" state 0, while h2 is uniformly distributed over all x′ of type t′. Writing a joint state of all subsystems as (x, a, h1, h2), it is sufficient to consider the deterministic map

(5)

where a′ is of course a function of t only. Here, h1 keeps a record of the initial configuration (to ensure reversibility) and h2 randomises the final configuration to range uniformly over all x′ of type t′. Hence we see that p(x′, a′|x, a) is constant over all x, x′ of interest and thus satisfies condition Supplementary Eq. (4). Note that h1 has to be initialised in a "pure" state of zero entropy such that it can record information. Such a state, being non-thermal, should be regarded as an additional resource which costs work to prepare. (By contrast, the uniformly random variable h2 is thermal and thus free.) The necessary leakage of information into h1 therefore entails dissipation of work into heat. Hence the work extraction formula (2,3)[main text] is technically an upper bound to what can be achieved classically. Note, moreover, that this dissipation is necessary only for those transitions where the set of x of type t is smaller than the set of x′ of type t′, in order to prevent irreversible merging of states. This situation can be avoided, for instance, in the case of the classical analogue of fermions wherein no more than one particle can occupy a cell. Similarly, in the low density limit (discussion of which appears in the main
text), where configurations with more than one particle per cell occur with negligible probability. (One could also argue that this problem is never encountered in reality, as actual dilute gases are in such a parameter regime.) To summarise what we have shown in this section:
• Classical identical particles can be treated, analogously to the quantum case, as (in principle) distinguishable particles whose dynamics are restricted to be independent of particle label.
• An observer with access only to spin-independent operations can treat the system as if the particles were spin-less.
• There is a subtlety with the particle-label-independent operations that blocks certain transitions. This restriction can be lifted with additional degrees of freedom but may require dissipation of work into heat. This extra cost is zero when particles always occupy distinct cells.
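As a concrete illustration of the hidden-register construction just summarised (a sketch; the tuple layout (x, h1, h2), the toy values of d and N, and the omission of the apparatus register a are assumptions made for readability):

```python
from itertools import product
from collections import Counter

d, N = 2, 2
configs = list(product((1, 2), repeat=N))

def type_of(x):
    return tuple(x.count(c) for c in (1, 2))

t_in, t_out = (2, 0), (1, 1)
xs  = [x for x in configs if type_of(x) == t_in]    # [(1, 1)]
xps = [x for x in configs if type_of(x) == t_out]   # [(1, 2), (2, 1)]

# Deterministic, reversible global map (x, h1=0, h2=x') -> (x', h1=x, h2=0),
# with h2 initially uniform over configurations of type t_out
seen = Counter()
for x in xs:
    for h2 in xps:                      # uniform randomness in the hidden register
        x_out, h1_out, h2_out = h2, x, 0
        seen[x_out] += 1

print(seen)  # each x' of type t' is reached equally often: uniform p(x' | x)
```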
Supplementary Note 2 – Details For Quantum Ignorant Observer

In this section, we provide additional details for the entropy change as seen by the ignorant observer. Recall that Schur–Weyl duality [1, Chapter 5] provides the decomposition

(6) H_x^⊗N ≅ ⊕_λ H_x^λ ⊗ K_x^λ,

where the direct sum runs over Young diagrams λ with N boxes and at most d rows, with non-increasing row length from top to bottom. We can equivalently describe λ = (λ1, λ2, . . . , λd), where λi is the number of boxes in row i. For example, the diagram with rows of lengths 3 and 1 would be denoted (3, 1) (where N = 4, d = 2).
H_x^λ and K_x^λ carry irreps of U(d) and S_N respectively, corresponding to irreducible subspaces under the actions of single-particle unitary rotations u^⊗N and of permutations π, each of which acts only on the spatial part. The same decomposition works for the spin part H_s^⊗N; since the spin space here is two-dimensional, the λ correspond to the familiar SU(2) irreps with total angular momentum J, via λ(J) = (N/2 + J, N/2 − J).
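The bookkeeping behind this correspondence can be verified numerically: the SU(2) irrep dimensions (2J + 1), weighted by the multiplicities dim K^J (given here by the standard two-row formula, an assumption not spelled out in the text), must exhaust the 2^N-dimensional spin space. A minimal sketch for even N:

```python
from math import comb

def mult(N, J):
    """dim K^J: multiplicity of the spin-J SU(2) irrep in (C^2)^{tensor N}."""
    k = N // 2 - J
    return comb(N, k) - (comb(N, k - 1) if k >= 1 else 0)

N = 12  # even, so J runs over the integers 0..N/2
total = sum((2 * J + 1) * mult(N, J) for J in range(N // 2 + 1))
print(total == 2 ** N)  # True: the Schur basis spans the full spin space
```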
After putting the spatial and spin decompositions together, projecting onto the overall (anti-)symmetric subspace causes the symmetries of the two components to be linked. For bosons, the overall symmetric subspace (itself a trivial irrep of S_N) occurs exactly once in K_x^λ ⊗ K_s^λ′ if λ = λ′, and otherwise does not [2, Section 7-13]. Thus we have

(7)
(8)

For fermions, the anti-symmetric subspace occurs exactly once if λ′ = λ^T, denoting the transpose of the Young diagram in which rows and columns are interchanged; thus,

(9)

Due to the use of a two-dimensional spin, we employ the correspondence J ↔ λ(J) = (N/2 + J, N/2 − J) and label the relevant subspaces by J instead of λ; analogous statements hold in the fermionic case. Thanks to the decomposition in Supplementary Eq. (7), any state of the system can be written in terms of the basis |J, q⟩_x |J, M⟩_s |J⟩_xs, where |J, q⟩ ∈ H^J_x, |J, M⟩ ∈ H^J_s, |J⟩ ∈ K^J_x ⊗ K^J_s, as described in the main text. The ignorant observer sees the reduced state after tracing out the spin part, of the form
(10) The entropy of this state is
(11)

where H({p_J}) is the Shannon entropy of the probability distribution p_J.
Similarly, the final state after mixing is of the form
(12)
with entropy

(13)

An example of a channel that achieves the mapping from the initial state ρ_x to the final state – albeit without a coupling to a heat bath or work reservoir – is the so-called "twirling" operation. This is a probabilistic average over all single-particle unitary rotations u_x^⊗N:

(14)

where μ is the Haar measure over the group U(d). The entropy change for the ignorant observer is therefore

(15)

(Note that the states ρ^J_x do not enter into the entropy change.) Our goal is therefore to determine the probabilities p_J, the dimensions d_J, and the entropy of the component states ρ^J_x. The case of indistinguishable gases is dealt with in the main text: the state is fully in the subspace J = N/2, corresponding to the spatially symmetric subspace for bosons and spatially antisymmetric for fermions. For gases of different spins, the initial state is such that all particles on the left carry spin up and all on the right spin down, with n_i particles in each cell i on the left, and m_i in each cell i on the right (such that Σ_i n_i = n and Σ_i m_i = m). For a given configuration (n, m), the properly symmetrised wavefunction is
(16)
(17)

where the sum runs over permutations π of the n + m particles, acting simultaneously on the spatial and spin parts. (If two permutations π, π′ have the same effect on |n, m⟩, they must also have the same effect on the full state.) N(n, m) is a normalisation factor (such that N(n, m) is the number of distinct terms in the sum). We determine the p_J via the expectation value of the projector P^J_s onto the subspace H^J_s:
(18)
(19)
(20)

where we have used that permutations π, π′ with different actions on the terms in Supplementary Eq. (17) also have different actions on |n, m⟩. Now we use Clebsch–Gordan coefficients to evaluate each term in this last sum. We regard the n up-spins as a combined spin with J1 = M1 = n/2, and the m down-spins as a spin with J2 = m/2, M2 = −m/2. The Clebsch–Gordan coefficient is precisely the amplitude for this combined state to have total spin J. Being the same for every π, Supplementary Eq. (20) simplifies to
(21)

Now it remains to consider the correct initial state, which is a uniform probabilistic mixture of all states |ψ(n, m)⟩ with the given totals n and m (see the main text):

(22)
(23)

To identify the component states ρ^J_x, we expand |ψ(n, m)⟩ in the basis provided by the Schur–Weyl decomposition, obtaining

(24)

Here, |ψ(n, m, J)⟩_x is some linear combination of the |J, q⟩_x – we will not need its explicit form, only the fact that different |ψ(n, m, J)⟩ are fully distinguishable just by measuring the number of particles in each cell. Hence,
(25)
(26)

From orthogonality of the |ψ(n, m, J)⟩, it follows that

(27)

Inserted into Supplementary Eq. (15), this results in the claimed entropy changes (11,12)[main text].
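The identification of p_J with a squared Clebsch–Gordan coefficient (Supplementary Eq. (21)) can be checked directly with SymPy; the choice n = m = 4 is an arbitrary illustration:

```python
from sympy import Rational, Integer
from sympy.physics.quantum.cg import CG

n, m = 4, 4
j1, j2 = Rational(n, 2), Rational(m, 2)   # |up^n> and |down^m> as maximal combined spins
M = j1 - j2                                # total z-component, here 0

p = {}
for J in range(int(abs(j1 - j2)), int(j1 + j2) + 1):
    amp = CG(j1, j1, j2, -j2, Integer(J), M).doit()
    p[J] = float(amp ** 2)

print(p)                # p_J, cf. Supplementary Eq. (21)
print(sum(p.values()))  # 1.0: a normalised distribution over total spin J
```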
Supplementary Note 3 – Partial Distinguishability

Here, we extend the analysis to include non-orthogonal spin states. As before, the particles on the left all carry the spin state |↑⟩^⊗n, but now on the right we have |s⟩^⊗m, where |s⟩ = cos(θ/2)|↑⟩ + sin(θ/2)|↓⟩.
Informed observer

Let us first discuss the operations allowed to be performed by the informed observer, who knows that the spins are |↑⟩ on the left and |s⟩ on the right. Note that this choice entails a preferred spin basis – this is necessary in order to have a well-defined notion of conditioning dynamics on the value of a spin. We thus require a global unitary of block-diagonal form, where the block structure refers to subspaces with fixed M as defined by the Schur basis. Under a block-diagonal operation, one cannot extract work from coherences between the blocks [4, 5]; that is, the initial state of the spins can be effectively replaced by the dephased state

(28)

where Q^M_s is the projector onto the M block. In other words, the state behaves thermodynamically as a statistical mixture of the different z-spin numbers M. It follows that the overall entropy change is the average

(29)
As for the case of orthogonal spins, the initial state is a uniform mixture of states generalising Supplementary Eq. (17),

(30)

where the sum again runs over permutations acting jointly on the spatial and spin parts. As before, N is simply the number of distinct terms in the sum. Expanding |s⟩^⊗m in the preferred basis, it is easily seen that

(31)

and so

(32)
To evaluate the entropies of the component states, it is sufficient to note that all such states are pure and must be mutually orthogonal, since they can be mutually perfectly distinguished by measuring the occupation number in each cell. The entropy S(ρ^(M)_xs) is therefore just as in Supplementary Eq. (27) for each M. Due to the block-diagonal structure of the global unitary U, the accessible dimension is that of the relevant M block. Counting the configurations with a fixed number of up and down spins gives the dimension of the M block; hence the overall entropy change in the bosonic case is

(33)
In the fermionic case, analogous counting gives (34)
Ignorant observer

For the ignorant observer, we now have to analyse the states ρ^J_x. From Supplementary Eq. (30) (recalling that the permutations to be summed over are those acting distinctly on the configuration), we have
(35)
(36)

using the fact that the projector P^J_s commutes with permutations. In order to simplify this, we examine the relevant overlap coefficients. Using the Schur basis just for the spin part, a general spin state can be expanded in terms of vectors |J, M, r⟩. The r label, representing the part of the basis acted upon by the permutation group, consists of any quantum numbers needed to complete the set along with J and M. We describe a convenient choice of such numbers, denoted j1, j1,2, . . . , j1,...,n and k1, k1,2, . . . , k1,...,m. Here, j1 is the total spin eigenvalue of spin 1, j1,2 of spins 1 and 2 together, and so on. The k1, . . . have the same meaning, but for the remaining spins labelled n + 1, . . . , n + m. That these complete the set of quantum numbers is evident from imagining performing an iterated Clebsch–Gordan procedure. This would involve coupling spins 1 and 2, then adding in spin 3, and so on up to spin n. Spins n + 1 up to n + m would be coupled recursively in the same manner, and then finally the two blocks of spins coupled to give the overall J. For the state under consideration, each of the two blocks of spins is fully symmetric, meaning that each of these spin eigenvalues is maximal: j1 = k1 = 1/2, j1,2 = k1,2 = 1, . . . , j1,...,n = n/2, k1,...,m = m/2. Given this choice of basis, there is only a single value of r = r0 in the expansion, referring to this collection of spin eigenvalues. Therefore we can write

(37)
(38)
(39)

since the overlap is independent of M (and r0 is fixed anyhow). Expanding in the preferred basis and using the Clebsch–Gordan coefficients for coupling the two blocks of spins gives
(40)

where q_M is defined in Supplementary Eq. (32). Putting this into Supplementary Eq. (36), we have

(41)
(42)

Crucially, the state in brackets is independent of M and must therefore be identical to the state we named |ψ(n, m, J)⟩ in Supplementary Eq. (24). Hence the remaining analysis runs exactly as in the orthogonal spin case, apart from the replacement of p_J by its average over the distribution q_M. Thus, all that changes is the probability distribution over J, which depends on the spin overlap only through the q_M.
Supplementary Note 4 – Dimension Counting

Here we compute the dimensions d_J of the spatial subspaces, using the standard dimension formula for U(d) irreps with highest weight λ:

(43)
(44)

First take the bosonic case. Since the Young diagram for the SU(2) spin part has at most two rows, the relevant spatial diagram is λ(J) = (N/2 + J, N/2 − J, 0, . . . , 0). Evaluation of the numerator of Supplementary Eq. (43) is aided by the table below, which lists the differences λ̃i − λ̃j, where i labels the row and j > i labels the column:
(45)
The first row of the table gives

(46)

the second row gives

(47)

and the remaining rows give

(48)

Putting these into Supplementary Eq. (43) results in the expression for d^B_{N,J} in (13)[main text]. For fermions, we instead use the transpose of the Young diagram, with
(49)

An important restriction on λ^T is that the number of rows can never be more than d; since λ^T has N/2 + J rows, this requires N/2 + J ≤ d.
As before, the differences λ̃i − λ̃j can be arranged as follows:
Here, the bold lines indicate the division into the three main index groups. We want to calculate the product of all rows in the table. The bottom group of rows gives
(52) The next group up, being careful to discount the terms lost due to the jump at column j = N/2 + J, gives
(53)

Finally, the top group of rows, again accounting for the jump at column j = N/2 + J + 1, gives
(54)

Inserting into Supplementary Eq. (43), we need to divide the product of the above three terms by the denominator of the dimension formula. This factor cancels all the factorials present in the above three expressions, apart from a few remaining terms. Therefore we have the closed-form expression for the fermionic dimension d^F_{N,J} used in the main text.
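The closed forms assembled above can be cross-checked against a direct evaluation of the generic Weyl product formula for U(d) irrep dimensions (assumed here to be the formula that Supplementary Eq. (43) abbreviates), applied to the two-row bosonic diagrams:

```python
def dim_U(lam, d):
    """Weyl dimension formula for the U(d) irrep with highest weight lam."""
    lam = list(lam) + [0] * (d - len(lam))
    num, den = 1, 1
    for i in range(d):
        for j in range(i + 1, d):
            num *= lam[i] - lam[j] + j - i
            den *= j - i
    return num // den

# Bosonic spatial irreps lambda(J) = (N/2 + J, N/2 - J) for N = 6 particles, d = 4 cells
N, d = 6, 4
for J in range(N // 2 + 1):
    print(J, dim_U((N // 2 + J, N // 2 - J), d))
```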
Supplementary Note 5 – Low Density Limit

Bosons

Here we prove equation (23)[main text] for bosons. For simplicity, we take n = m. The result rests on the observation that, for sufficiently large d, the ratio of the relevant dimension factors approaches a simple limiting form. We have
(59) (60)
(61) Letting xk = [n + k]/d, we have
(62)
(63)
(64)

where the first and second order terms are evaluated to be

(65)
(66)

and the correction is of order n²/d. Similarly, letting y_k = [J + k]/d, we have
(67) (68) with (69) (70) We then have
(71)
(72)
(73)
(74)
(75)
(76)

This is to be compared with the entropy change for the informed observer:
(77) (78)
(79)

where we have used ln(1 + x) = x − x²/2 + · · · for small x.

Supplementary Note 6 – Large Particle Number Limit

Here we consider the limit of large particle number, n ≫ 1. Starting from Supplementary Eq. (23), we can rewrite
(118) (119)
where b follows a binomial distribution with N + 1 trials and a success probability of 1/2. Using Stirling's approximation [7], we have
Using a local version of the central limit theorem [8, Chapter VII, Theorem 1], we can approximate b(n + J + 1) by a normal distribution with mean (2n + 1)/2 and variance (2n + 1)/4, obtaining
where o(f) denotes an error term going to zero strictly faster than f. Then (129) so the entropy is approximated by
For large n, we expect that the sum can be approximated by an integral. To show this, we can use the simplest version of the Euler-Maclaurin formula:
(132)
(133)

Firstly, we have

(134)
(135)

Along these lines, it is not hard to see that shifting the initial point from x = 0 to x = 1/2 leads to an o(1) error, so we change variables to y = x + 1/2 (all other contributions being polynomial in n and ln n). For the remainder integral, we let k = (n + 1/2) and use

(136)
(137)

valid for |y| ≤ 1,
in which the individual integrals can be evaluated, keeping only the highest-order contributions. Overall, therefore,

(140)
(141)
(142)

Combining this with Supplementary Eq. (131),

(143)
(144)
REFERENCES

1. Bennett, C. H. Notes on Landauer's principle, reversible computation, and Maxwell's Demon. Stud. Hist. Philos. Sci. B Stud. Hist. Philos. Mod. Phys. 34, 501–510 (2003).
2. Holian, B. L., Hoover, W. G. & Posch, H. A. Resolution of Loschmidt's paradox: the origin of irreversible behavior in reversible atomistic dynamics. Phys. Rev. Lett. 59, 10 (1987).
3. Binder, F., Correa, L. A., Gogolin, C., Anders, J. & Adesso, G. Thermodynamics in the Quantum Regime. Fundamental Theories of Physics (Springer, 2019).
4. Gibbs, J. W. On the equilibrium of heterogeneous substances. Transactions of the Connecticut Academy of Arts and Sciences, Vol. 3, pp. 108–248 and 343–524 (Connecticut Academy of Arts and Sciences, 1879).
5. Levitin, L. B. Gibbs paradox and equivalence relation between quantum information and work. In Workshop on Physics and Computation, 223–226 (IEEE, 1992).
6. Allahverdyan, A. E. & Nieuwenhuizen, T. M. Explanation of the Gibbs paradox within the framework of quantum thermodynamics. Phys. Rev. E 73, 066119 (2006).
7. Versteegh, M. A. & Dieks, D. The Gibbs paradox and the distinguishability of identical particles. Am. J. Phys. 79, 741–746 (2011).
8. Darrigol, O. The Gibbs paradox: early history and solutions. Entropy 20, 443 (2018).
9. Jaynes, E. T. The Gibbs paradox. In Maximum Entropy and Bayesian Methods, 1–21 (Springer, 1992).
10. Lostaglio, M., Jennings, D. & Rudolph, T. Description of quantum coherence in thermodynamic processes requires constraints beyond free energy. Nat. Commun. 6, 6383 (2015).
11. Vinjanampathy, S. & Anders, J. Quantum thermodynamics. Contemp. Phys. 57, 545–579 (2016).
12. Goold, J., Huber, M., Riera, A., del Rio, L. & Skrzypczyk, P. The role of quantum information in thermodynamics – a topical review. J. Phys. A Math. Theor. 49, 143001 (2016).
13. Uzdin, R., Levy, A. & Kosloff, R. Equivalence of quantum heat machines, and quantum-thermodynamic signatures. Phys. Rev. X 5, 031044 (2015). 14. Plesch, M., Dahlsten, O., Goold, J. & Vedral, V. Maxwell’s daemon: information versus particle statistics. Sci. Rep. 4, 6995 (2014). 15. Bengtsson, J. et al. Quantum Szilard engine with attractively interacting bosons. Phys. Rev. Lett. 120, 100601 (2018). 16. Myers, N. M. & Deffner, S. Bosons outperform fermions: the thermodynamic advantage of symmetry. Phys. Rev. E 101, 012110 (2020). 17. Watanabe, G., Venkatesh, B. P., Talkner, P., Hwang, M.-J. & del Campo, A. Quantum statistical enhancement of the collective performance of multiple bosonic engines. Phys. Rev. Lett. 124, 210603 (2020). 18. Holmes, Z., Anders, J. & Mintert, F. Enhanced energy transfer to an optomechanical piston from indistinguishable photons. Phys. Rev. Lett. 124, 210601 (2020). 19. Killoran, N., Cramer, M. & Plenio, M. B. Extracting entanglement from identical particles. Phys. Rev. Lett. 112, 150501 (2014). 20. Morris, B. et al. Entanglement between identical particles is a useful and consistent resource. Phys. Rev. X 10, 041012 (2020). 21. Braun, D. et al. Quantum-enhanced measurements without entanglement. Rev. Mod. Phys. 90, 035006 (2018). 22. Boltzmann, L. Über die Beziehung zwischen dem zweiten Hauptsatze der mechanischen Wärmetheorie und der Wahrscheinlichkeitsrechnung respektive den Sätzen über das Wärmegleichgewicht. Wiener Berichte 76, 373–435 (1877). 23. Saunders, S. The Gibbs Paradox. Entropy 20, 552 (2018). 24. Dieks, D. The Gibbs Paradox and particle individuality. Entropy 20, 466 (2018). 25. Horodecki, M. & Oppenheim, J. Fundamental limitations for quantum and nanoscale thermodynamics. Nat. Commun. 4, 2059 (2013). 26. Brandao, F. G., Horodecki, M., Oppenheim, J., Renes, J. M. & Spekkens, R. W. Resource theory of quantum states out of thermal equilibrium. Phys. Rev. Lett. 111, 250404 (2013). 27. Bergmann, P. G. & Lebowitz, J. L. New approach to nonequilibrium processes. Phys. Rev. 99, 578 (1955).
28. Niedenzu, W., Huber, M. & Boukobza, E. Concepts of work in autonomous quantum heat engines. Quantum 3, 195 (2019).
29. Åberg, J. Truly work-like work extraction via a single-shot analysis. Nat. Commun. 4, 1–5 (2013).
30. Dahlsten, O. C. O., Renner, R., Rieper, E. & Vedral, V. Inadequacy of von Neumann entropy for characterizing extractable work. New J. Phys. 13, 053015 (2011).
31. Landauer, R. Irreversibility and heat generation in the computing process. IBM J. Res. Dev. 5, 183–191 (1961).
32. Bach, A. Indistinguishable Classical Particles. Lecture Notes in Physics Monographs (1996).
33. Fujita, S. On the indistinguishability of classical particles. Found. Phys. 21, 439–457 (1991).
34. Adamson, R. B. A., Turner, P. S., Mitchell, M. W. & Steinberg, A. M. Detecting hidden differences via permutation symmetries. Phys. Rev. A 78, 033832 (2008).
35. Goodman, R. & Wallach, N. R. Symmetry, Representations, and Invariants, Vol. 255 of Graduate Texts in Mathematics (Springer New York, 2009).
36. Harrow, A. W. Applications of coherent classical communication and the Schur transform to quantum information theory. PhD thesis, Massachusetts Institute of Technology (2005) http://hdl.handle.net/1721.1/34973.
37. Zurek, W. H. Maxwell's demon, Szilard's engine and quantum measurements. In Frontiers of Nonequilibrium Statistical Physics, 151–161 (Springer, 1986).
38. Bartlett, S. D., Rudolph, T. & Spekkens, R. W. Reference frames, superselection rules, and quantum information. Rev. Mod. Phys. 79, 555 (2007).
39. Bacon, D., Chuang, I. L. & Harrow, A. W. Efficient quantum circuits for Schur and Clebsch-Gordan transforms. Phys. Rev. Lett. 97, 170502 (2006).
40. Hong, C. K., Ou, Z. Y. & Mandel, L. Measurement of subpicosecond time intervals between two photons by interference. Phys. Rev. Lett. 59, 2044–2046 (1987).
41. Stanisic, S. & Turner, P. S. Discriminating distinguishability. Phys. Rev. A 98, 043839 (2018).
42. Bartlett, S. D., Rudolph, T. & Spekkens, R. W. Classical and quantum communication without a shared reference frame. Phys. Rev. Lett. 91, 027901 (2003). 43. Holmes, Z., Mintert, F. & Anders, J. Gibbs mixing of partially distinguishable photons with a polarising beamsplitter membrane. New J. Phys. 22, 113015 (2020). 44. Kaufman, A. M. et al. Quantum thermalization through entanglement in an isolated many-body system. Science 353, 794–800 (2016). 45. Fröwis, F., Sekatski, P., Dür, W., Gisin, N. & Sangouard, N. Macroscopic quantum states: measures, fragility, and implementations. Rev. Mod. Phys. 90, 025004 (2018).
CHAPTER 10
THERMAL QUANTUM SPACETIME
Isha Kotecha 1,2

1 Max Planck Institute for Gravitational Physics (Albert Einstein Institute), Am Mühlenberg 1, 14476 Potsdam-Golm, Germany
2 Institut für Physik, Humboldt-Universität zu Berlin, Newtonstraße 15, 12489 Berlin, Germany
ABSTRACT The intersection of thermodynamics, quantum theory and gravity has revealed many profound insights, all the while posing new puzzles. In this article, we discuss an extension of equilibrium statistical mechanics and thermodynamics potentially compatible with a key feature of general relativity, background independence; and we subsequently use it in a candidate quantum gravity system, thus providing a preliminary formulation of a thermal quantum spacetime. Specifically, we emphasise an informationtheoretic characterisation of generalised Gibbs equilibrium that is shown to
Citation: Isha Kotecha, “Thermal Quantum Spacetime,” Universe 2019, 5(8), 187; https://doi.org/10.3390/universe5080187 Copyright: © 2019 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
be particularly suited to background independent settings, and in which the status of entropy is elevated to being more fundamental than energy. We also shed light on its intimate connections with the thermal time hypothesis. Based on this, we outline a framework for statistical mechanics of quantum gravity degrees of freedom of combinatorial and algebraic type, and apply it in several examples. In particular, we provide a quantum statistical basis for the origin of covariant group field theories, shown to arise as effective statistical field theories of the underlying quanta of space in a certain class of generalised Gibbs states. Keywords: background independence; generalised statistical equilibrium; quantum gravity; entropy
INTRODUCTION

Background independence is a hallmark of general relativity that has revolutionised our conception of space and time. The picture of physical reality it paints is that of an impartial dynamical interplay between matter and gravitational fields. Spacetime is no longer a passive stage on which matter performs; it is an equally active performer in itself. Coordinates are gauge, thus losing the physical status they hold in non-relativistic settings. In particular, the notion of time is modified drastically. It is no longer an absolute, global, external parameter uniquely encoding the full dynamics. It is instead a gauge parameter associated with a Hamiltonian constraint.

On the other hand, the frameworks of equilibrium statistical mechanics and thermodynamics have been of immense use in the physical sciences. From early applications to heat engines and study of gases, to modern day uses in condensed matter systems and quantum optics, these powerful frameworks have greatly expanded our knowledge of physical systems. However, a complete extension of them to a background independent setting is still missing [1,2,3]. The biggest challenge is the absence of an absolute notion of time, and thus of energy, which is essential to any statistical and thermodynamical consideration. It is not even clear how to define statistical equilibrium, for the natural reason that the standard concepts of equilibrium and time are tightly linked. In other words, the constrained dynamics of a background independent system lacks a non-vanishing Hamiltonian in general, which makes formulating (equilibrium) statistical mechanics and thermodynamics an especially thorny problem. This is a foundational
This is a foundational issue, and tackling it is important and interesting in its own right, and even more so because it could provide deep insights into the nature of (quantum) gravitational systems. This paper is devoted to addressing precisely these points.

Modern physics has accumulated ample evidence of the deep interplay between thermodynamics, gravity and the quantum theory. The four laws of black hole mechanics [4] were a glimpse into a curious intermingling of thermodynamics and classical gravity, even if originally only at a formal level of analogy. The discovery of black hole entropy and radiation [5,6,7] further brought quantum mechanics into the mix. This directly led to a multitude of new conceptual insights, along with many puzzling questions which continue to be investigated still after decades. The content of the discovery, namely that a black hole must be assigned physical entropy and that it scales with the area of its horizon in Planck units, has birthed several distinct lines of thought, in turn leading to different (even if related) lines of investigation, such as thermodynamics of gravity, analogue gravity and holography. Moreover, early attempts at understanding the physical origin of this entropy [8] made evident the relevance of quantum entanglement, thus also contributing to the current perspective of an intimate link between quantum information theory and gravitational physics.

This discovery further hinted at a quantum microstructure underlying a classical spacetime. This perspective is shared, to varying degrees of detail, by various discrete approaches to quantum gravity, such as loop quantum gravity, spin foams and group field theory. Within discrete non-perturbative approaches, spacetime is replaced by more fundamental entities that are discrete, quantum, and pre-geometric in the sense that no notion of smooth metric geometry and spacetime manifold exists yet. The collective dynamics of such quanta of geometry, governed by some theory of quantum gravity, is then hypothesised to give rise to an emergent spacetime, corresponding to certain phases of the full theory. This would essentially entail identifying suitable procedures to extract a classical continuum from a quantum discretuum, and to reconstruct general relativistic gravitational dynamics coupled with matter (likely with quantum corrections). This emergence in quantum gravity is akin to that in condensed matter systems, in which coarse-grained macroscopic (thermodynamic) properties of the physical system are likewise extracted from the microscopic (statistical and) dynamical theories of the constituent atoms. In this sense
our universe can be understood as an unusual condensed matter system, brought into the existing smooth geometric form by a phase transition of a quantum gravity system of pre-geometric ‘atoms’ of space; in particular, as a condensate [9].

This brings our motivations full circle, and to the core of this article: to illustrate the potential of, and preliminary evidence for, a rewarding exchange between a background independent generalisation of statistical mechanics and discrete quantum gravity; to show that ideas from the former are vital to investigate statistical mechanics and thermodynamics of quantum gravity; and that its considerations in the latter could in turn provide valuable insights into the former. These are the two facets of interest to us here.

In Section 2, we discuss a potential background independent extension of equilibrium statistical mechanics, giving a succinct yet complete discussion of past works in Section 2.1.1, and subsequently focussing on a new thermodynamical characterisation for background independent equilibrium in Section 2.1.2, which is based on a constrained maximisation of information entropy. In Section 2.2, we detail further crucial properties of this characterisation, while placing it within the bigger context of the issue of background independent statistical equilibrium, also in comparison with the previous proposals. Section 2.3 is more exploratory, remarking on exciting new connections between the thermodynamical characterisation and the thermal time hypothesis, wherein information entropy and observer dependence are seen to play instrumental roles. In Section 2.4, we discuss several aspects of a generalised thermodynamics based on the generalised equilibrium statistical mechanics of Section 2.1.

Section 3 is devoted to statistical mechanical considerations of candidate quantum gravity degrees of freedom of combinatorial and algebraic type. After clarifying the framework for many-body mechanics of such atoms of space in Section 3.1, we give an overview of examples in Section 3.2, thus illustrating the applicability of the generalised statistical framework in quantum gravity. The one case for which we give a slightly more detailed account is that of group field theory, shown to arise as an effective statistical field theory under a coarse-graining of a class of generalised Gibbs states of the underlying microscopic system. Finally, we conclude and offer some outlook.
BACKGROUND INDEPENDENT EQUILIBRIUM STATISTICAL MECHANICS

Covariant statistical mechanics [1,2,3] broadly aims at addressing the foundational issue of defining a suitable statistical framework for constrained systems. This issue, especially in the context of gravity, was brought to the fore in a seminal work [1], and developed subsequently in [2,3,10,11]. Valuable insights from these studies on spacetime relativistic systems [1,2,3,11,12,13] have also formed the conceptual backbone of first applications to discrete quantum gravity [14,15,16].

In this section, we present extensions of equilibrium statistical mechanics to background independent systems (note 1), laying out different proposals for a generalised statistical equilibrium, but emphasising one in particular, based on which further aspects of a generalised thermodynamics are considered. The aim here is thus to address the fundamental problem of formulating these frameworks in settings where the conspicuous absence of time and energy is particularly tricky.

Section 2.1 discusses background independent characterisations of equilibrium Gibbs states, of the general form e^{-∑_a β_a O_a}. In Section 2.1.1, we touch upon various proposals for equilibrium put forward in past studies on spacetime covariant systems [1,3,11,17,18]. From Section 2.1.2 onwards, we focus on Jaynes' information-theoretic characterisation [19,20], which was first proposed for background independent equilibrium, and illustrated with an explicit example in the context of quantum gravity, in [14]. Using the terminology of [14], we call this a ‘thermodynamical’ characterisation of equilibrium, to contrast with the customary Kubo–Martin–Schwinger (KMS) [21] ‘dynamical’ characterisation (note 2). We devote Section 2.2 to discussing various aspects of the thermodynamical characterisation, including highlighting many of its favourable features, also compared to the other proposals. In fact, we point out how this characterisation can comfortably accommodate the other proposals for Gibbs equilibrium. Further, as is evident shortly, the thermodynamical characterisation hints at the idea that entropy is a central player, which has been a recurring theme across modern theoretical physics. In Section 2.3, we present a tentative discussion of some of these aspects. In particular, we notice compelling new relations between the thermodynamical characterisation and the thermal
time hypothesis, which further seem to hint at intriguing relations between entropy, observer dependence and thermodynamical time. We further propose to use the thermodynamical characterisation as a constructive criterion of choice for the thermal time hypothesis. Finally, in Section 2.4, we discuss elements of a generalised thermodynamics, including those potentials and laws which can be derived immediately from a generalised equilibrium state, without requiring any additional physical and/or interpretational inputs. We clarify the issue of extracting a single common temperature for the full system, and discuss generalised zeroth and first laws as parts of a generalised thermodynamics.
Generalised Equilibrium

Equilibrium states are a cornerstone of statistical mechanics, which in turn is the theoretical basis for thermodynamics. They are vital in the description of macroscopic systems with a large number of microscopic constituents. In particular, Gibbs states e^{-βH} have a vast applicability across a broad range of fields such as condensed matter physics, quantum information and tensor networks, and (quantum) gravity, to name a few. They are special, being the unique class of states in finite systems satisfying the KMS condition (note 3). Furthermore, usual coarse-graining techniques also rely on the definition of Gibbs measures. In treatments closer to hydrodynamics, one often considers the full (non-equilibrium) system as being composed of many interacting subsystems, each considered to be at local equilibrium. In the context of renormalisation group flow treatments, each phase at a given scale, for a given set of coupling parameters, is also naturally understood to be at equilibrium, described by (an inequivalent) Gibbs measure.

Given this physical interest in Gibbs states, the question then is how to characterise them in background independent settings. Below we recall different proposals, all relying on different principles originating in standard non-relativistic statistical mechanics, extended to a relativistic setting.
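For later reference, the KMS condition mentioned above can be stated in its standard algebraic form [21]; we reproduce it here for convenience rather than quoting the article. A state ω with a one-parameter flow α_t is KMS at inverse temperature β if, for suitable observables A and B, the correlation function F_{AB}(t) = ω(A α_t(B)) extends analytically to the strip 0 < Im t < β with the boundary value

    F_{AB}(t + i\beta) = \omega\big( \alpha_t(B)\, A \big) .

For a finite-dimensional Gibbs state ρ_β ∝ e^{-βH} with α_t(B) = e^{iHt} B e^{-iHt}, this property follows directly from cyclicity of the trace.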
Past Proposals

The first proposal [1,12] was based on the idea of statistical independence of arbitrary (small, but macroscopic) subsystems of the full system. The notion of equilibrium is taken to be characterised by the factorisation property of the state, ρ₁₂ = ρ₁ρ₂, for any two subsystems 1 and 2; and the full system is at equilibrium if any one of its subsystems satisfies this property with all the
rest. We notice that the property of statistical independence is related to an assumption of weak interactions [22]. This same dilute gas assumption is integral also to the Boltzmann method of statistical mechanics, which characterises equilibrium as the most probable distribution, that is, the one with maximum entropy (note 4). This method is used in [11] to study a gas of constrained particles (note 5).

The work in [3] puts forward a physical characterisation of an equilibrium state. The suggestion is that a state ρ (defined on the physical, reduced state space) is said to be a physical Gibbs state if its modular Hamiltonian h = -ln ρ is such that there exists a (local) clock function T(x) on the extended state space (with its conjugate momentum p_T(x)), such that the pull-back of h is proportional to (the negative of) p_T. Importantly, when this is the case, the modular flow of ρ (see Section 2.2) is a geometric (foliation) flow in spacetime, which is why ρ is said to be ‘physical’. Notice that the state is required to be KMS with respect to a one-parameter flow of the system (thus it is an example of using the dynamical characterisation), the flow being generated here by a clock Hamiltonian on the base spacetime manifold.

Another strategy [17] is based on the use of the ergodic principle and the introduction of clock subsystems. Evidently, this characterisation, similar to a couple of the previous ones, relies on the validity of a postulate, even if traditionally a fundamental one. Finally, the proposal of [18] interestingly characterises equilibrium by the vanishing of information flow between interacting systems. The notion of information used is that of Shannon (entropy), I = ln N, where N is the number of microstates traversed in a given history during interaction. Equilibrium is then said to be achieved when the exchanged information balances out, δI = 0. This characterisation of equilibrium is evidently information-theoretic, even if relying on an assumption of interactions. Moreover, it is much closer to our thermodynamical characterisation, because the condition of vanishing δI is nothing but an optimisation of information entropy.

These different proposals, along with the thermal time hypothesis [1,2], have led to some remarkable results, such as recovering the Tolman–Ehrenfest effect [13,23], the relativistic Jüttner distribution [23] and the Unruh effect [24]. However, they all assume the validity of one or more principles, postulates or assumptions about the system. Moreover, none (at least presently) seems
to be general enough, as is the proposal below, to be implemented in a full quantum gravity setup, while also accommodating within it the rest of the proposals.
Thermodynamical Characterisation

This brings us to the proposal of characterising a generalised Gibbs state based on a constrained maximisation of information (Shannon or von Neumann) entropy [14,15,16], along the lines advocated by Jaynes [19,20] purely from the perspective of evidential statistical inference. Jaynes' approach is fundamentally different from other more traditional ones of statistical physics, as is the thermodynamical characterisation compared with the others outlined above, which is exemplified in the following. It is thus a new proposal for background independent equilibrium [14,25], which has the potential of incorporating also the others as special cases, from the point of view of constructing a Gibbs state.

Consider a macroscopic system with a large number of constituent microscopic degrees of freedom. Our (partial) knowledge of its macrostate is given in terms of a set of averages {⟨O_a⟩ = U_a} of the observables we
have access to. The probability distribution that best describes this macrostate (and, once known, will allow us to infer also the other observable properties of the system) is not only one that is compatible with the given observations, but also that which is least biased, in the sense of not assuming any more information about the system than what we actually have at hand (namely, {U_a}). In other words, given a limited knowledge of the system (which is always the case in practice for any macroscopic system), the least-biased probability distribution compatible with the given data should be preferred. As shown below, this turns out to be a Gibbs distribution with the general form e^{-∑_a β_a O_a}.

Let Γ be the state space of the system, and {O_a} (a = 1, 2, …) a finite set of smooth functions on it whose statistical averages are the observed macrostate data U_a. Denote by ρ a smooth statistical density (real-valued, positive and normalised function) on Γ, to be determined. The observational data provide a finite number of constraints,

(1)   ⟨O_a⟩_ρ = ∫_Γ ρ O_a = U_a ,

where the integrals are taken over Γ and assumed to be well-defined. Further, ρ has an associated Shannon entropy
(2)   S[ρ] = -∫_Γ ρ ln ρ .

By understanding S to be a measure of uncertainty, quantifying our ignorance about the details of the system, the corresponding bias is minimised (compatibly with the prior data) by maximising S under the set of constraints in Equation (1), plus the normalisation condition on ρ [19]. The method of Lagrange multipliers then gives a generalised Gibbs distribution of the form,
(3)   ρ_{β} = (1/Z_{β}) exp(-∑_a β_a O_a) ,

where the partition function Z_{β} encodes all thermodynamic properties in principle, and is assumed to be convergent. This can be done analogously for a quantum system [20], giving a Gibbs density operator on a representation Hilbert space,
(4)   ρ̂_{β} = (1/Z_{β}) exp(-∑_a β_a Ô_a) .

These equilibrium states are characterised by the several observables O_a and by their conjugate generalised “inverse temperatures” β_a, which have entered formally as Lagrange multipliers. Given this class of equilibrium states, it should be evident that some thermodynamic quantities (e.g., generalised “energies” U_a) can be defined directly from them, as discussed in Section 2.4.

Finally, we note that the role of entropy is shown to be instrumental in the construction of these (local, note 6) equilibrium states: “…thus entropy becomes the primitive concept with which we work, more fundamental even than energy…” [19]. It is also interesting to notice that Bekenstein's arguments [6] can be seen to be guided by similar information-theoretic insights surrounding entropy, and these same insights have now guided us in the issue of background independent statistical equilibrium.
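As a concrete, if entirely schematic, illustration of the constrained maximisation of Equations (1) and (2), the following Python sketch computes the generalised Gibbs weights of Equation (3) on a toy finite state space; the observable values and target averages are invented for the example.

    import numpy as np
    from scipy.optimize import root

    # Toy microstate space: six states carrying two observables O_1, O_2.
    # All values and targets below are made up for illustration.
    O = np.array([[0.0, 1.0], [1.0, 0.0], [1.0, 1.0],
                  [2.0, 1.0], [1.0, 2.0], [2.0, 2.0]])   # shape (6, 2)
    U_target = np.array([1.2, 1.0])                       # macrostate data U_a

    def gibbs(beta):
        w = np.exp(-O @ beta)        # unnormalised weights exp(-sum_a beta_a O_a)
        return w / w.sum()           # normalised distribution, Equation (3)

    def residual(beta):
        p = gibbs(beta)
        return p @ O - U_target      # constraints <O_a> - U_a from Equation (1)

    beta = root(residual, x0=np.zeros(2)).x
    p = gibbs(beta)
    S = -np.sum(p * np.log(p))       # maximised Shannon entropy, Equation (2)
    print("beta =", beta, "| <O> =", p @ O, "| S =", S)

Solving the constraint equations for the multipliers is equivalent to the entropy maximisation, since the Gibbs form already extremises S for any fixed β.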
Remarks
There are two noteworthy aspects of this characterisation: the first is the use of evidential (or epistemic, or Bayesian) probabilities, thus taking into account the given evidence {U_a}; and the second is a preference for the least-biased (or most “honest”) distribution out
of all the different ones compatible with the given evidence. It is not enough to arbitrarily choose any distribution that is compatible with the prior data. An aware observer must also take into account their own ignorance, or lack of knowledge, honestly, by maximising the information entropy. This notion of equilibrium is inherently observer-dependent because of its use of the macrostate thermodynamic description of the system, which is itself observer-dependent due to having to choose a coarse-graining, that is, the set of macroscopic observables.

Given a generalised Gibbs state, the question arises as to which flow it is stationary with respect to. By the Tomita–Takesaki theorem [21], any faithful algebraic state over a von Neumann algebra is KMS with respect to its own one-parameter modular flow (note 7); a minimal statement of this flow is sketched at the end of these remarks. Given this, ρ_{β} is clearly KMS with respect to the flow generated by its modular Hamiltonian ∑_a β_a O_a. In particular, ρ_{β} is not stationary with respect to the individual flows X_a generated by the O_a, unless these flows satisfy [X_a, X_b] = 0 for all a, b [15]. In fact, this last property shows that the proposal in [1,12] based on statistical independence (that is, [X₁, X₂] = 0 for the subsystem flows) can be understood as a special case of the present characterisation: when the generators commute, the state is stationary with respect to each individual flow, and the state will be said to satisfy statistical independence.

To be clear, the use of the “most probable” characterisation for equilibrium is not new in itself. It was used by Boltzmann in the late 19th century, and utilised (also within a Boltzmann interpretation of statistical mechanics) in a constrained system in [11]. The fact that equilibrium Gibbs states extremise the entropy is also not new: it was well known already in the time of Gibbs (note 8). The novelty here is: in the revival of Jaynes' perspective, of deriving equilibrium statistical mechanics in terms of evidential probabilities, solely as a problem of statistical inference, without depending on the validity of any further conjectures, physical assumptions or interpretations; and in the suggestion that it is general enough to apply to genuinely background independent
systems, including quantum gravity. Below, we list some of its more valuable features.

The procedure is versatile, being applicable to a wide array of cases: all that is required is a state space (classical or quantum), along with a set of observables with dynamically constant averages U_a defining the macrostate of interest (note 9). Moreover, this derivation of equilibrium statistical mechanics (and, from it, thermodynamics) does not lend any fundamental status to energy, nor does it rely on selecting a single, special (energy) observable out of the full set {O_a}. It can thus be crucial in settings where concepts of time and energy are dubious or absent, as is the case in non-perturbative quantum gravity.

It has the technical advantage of not needing any (one-parameter) flow to be defined on the state space from the outset, unlike the dynamical characterisation based on the standard KMS condition. It is independent of any additional physical assumptions, hypotheses or principles that are common to standard statistical physics and, in the present context, to the other proposals of generalised equilibrium recalled in Section 2.1. Some examples of these extra ingredients (not required in the thermodynamical characterisation) that we have already encountered are ergodicity, weak interactions, statistical independence, and often a combination of them. It is also independent of any physical interpretations attached (or not!) to the observables involved, which lends it a particular appeal for use in quantum gravity, where the geometrical (and physical) meanings of the quantities involved may not necessarily be clear from the start.

One of the main features (which helps accommodate the other proposals as special cases of this one) is the generality in the choice of observables O_a allowed naturally by this characterisation. In principle, they need only be mathematically well-defined on the relevant state space (regardless of whether it is kinematic, i.e., working at the extended state space
level, or dynamic, i.e., working with the physical state space), satisfying convexity properties so that the resultant Gibbs state is normalisable. More standard choices include a Hamiltonian in a non-relativistic system, a clock Hamiltonian in a deparameterised system [3,14], and generators of kinematic symmetries such as rotations, or more generally of one-parameter subgroups of Lie group actions [26,27]. Some of the more unconventional choices include geometric observables such as volume [14,28], (component functions of the) momentum map associated with the geometric closure of classical polyhedra [15,16], half-link gluing (or face-sharing) constraints of discrete gravity [15], a projector in group field theory [15,29], and generic gauge-invariant observables (not necessarily symmetry generators) [11]. We refer to [14] for a more detailed discussion. In Section 3.2, we outline some examples of using this characterisation in quantum gravity, while a detailed investigation of its consequences, in particular for covariant systems on a spacetime manifold, is left to future studies.
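As promised, here is a minimal statement of the modular flow invoked in the remarks above; the formulas are standard Tomita–Takesaki expressions specialised to a faithful density operator, not new results. For ρ = e^{-∑_a β_a O_a}/Z,

    \sigma_t(A) = \rho^{it}\, A\, \rho^{-it} = e^{iht} A e^{-iht},
    \qquad h = -\ln\rho = \sum_a \beta_a O_a + \ln Z ,

and ρ is KMS at unit inverse temperature with respect to σ_t; the additive constant ln Z commutes with everything and drops out of the flow.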
Relation to Thermal Time Hypothesis

This section outlines a couple of new, intriguing connections between the thermodynamical characterisation and the thermal time hypothesis, which we think are worthwhile to explore further.

The thermal time hypothesis [1,2] states that the (geometric) modular flow of the (physical, equilibrium) statistical state that an observer happens to be in is the time that they experience. It thus argues for a thermodynamical origin of time [30]. What is this state? Pragmatically, the state of a macroscopic system is that which an observer is able to observe and assigns to the system. It is not an absolute property, since one can never know everything there is to know about the system. In other words, the state that the observer “happens to be in” is the state that they are able to detect. This leads us to suggest that the thermodynamical characterisation can provide a suitable criterion of choice for the thermal time hypothesis. What we mean by this is the following. Consider a macroscopic system that is observed to be in a particular macrostate, in terms of a set of (constant) observable averages. The thermodynamical characterisation then provides the least biased choice for the underlying (equilibrium) statistical state. Given this state, the thermal time hypothesis would imply that the
time experienced by the observer is the modular flow of the observed state.

Jaynes [19,20] turned the usual logic of statistical mechanics upside-down to stress entropy and the observed macrostate as the starting point, deriving equilibrium statistical mechanics from it (and, importantly, permitting a further background independent generalisation, as shown above). Rovelli [1], later with Connes [2], turned the usual logic of the relation between time and equilibrium upside-down, taking the statistical state as primary and deriving a flow of time from it. The suggestion here is to merge the two, and so obtain an operational way of implementing the thermal time hypothesis.

It is interesting to see that the crucial property of observer-dependence of relativistic time arises as a natural consequence of our suggestion, being inherited from the observer-dependence of the statistical state selected by the thermodynamical characterisation. In this way, thermodynamical time is intrinsically “perspectival” [31] or “anthropomorphic” [32]. To be clear, this criterion of choice will not single out a preferred state, by the very fact that it is inherently observer-dependent. It is thus compatible with the basic philosophy of the thermal time hypothesis, namely that there is no preferred physical time.

Presently, the above suggestion is rather conjectural, and certainly much work remains to be done to understand it better and explore its potential consequences for physical systems. Here, it may be helpful to note that the thermal time hypothesis is intimately related to (special and general) relativistic systems, and so might be the thermodynamical characterisation when considered in this context. Thus, for instance, Rindler spacetime or stationary black holes might offer suitable settings in which to begin investigating these aspects in more detail.

The second connection that we observe is much less direct, and concerns the generator of the flow. For the state in Equation (3), the generator of the modular flow, h = -ln ρ, can immediately be observed to be related to the Shannon entropy in Equation (2), since S[ρ] = ⟨h⟩_ρ. Similarly, in the quantum setting, the flow generator is the log of the modular operator of von Neumann algebra theory [2], which in turn is known to provide an algebraic measure of relative entropy [33], a quantity which has in fact seen a recent revival in the context of quantum information and gravity. This is a remarkable feature in our opinion, which compels us to look for the deeper insights it may have to offer, in further studies.
Generalised Thermodynamic Potentials, Zeroth and First Laws

Traditional thermodynamics is the study of energy and entropy exchanges. But what is a suitable generalisation of it for background independent systems? This, like the question of a generalised equilibrium statistical mechanics which we have considered until now, is still open. In the following, we offer some insights gained from the preceding discussions, including identifying certain thermodynamic potentials, and generalised zeroth and first laws.

Thermodynamic potentials are vital, particularly in characterising the different phases of a system. The most important one is the partition function Z_{β}, or equivalently the free energy

(5)   F = -ln Z_{β} .

It encodes complete information about the system, from which other thermodynamic quantities can be derived in principle. Notice that F normally comes with an additional factor of a (single, global) inverse temperature, F = -(1/β) ln Z. For now, however, this factor is left out of the definition of F, since we do not (yet) have a single common temperature for the full system. We return to this point below.

Next is the thermodynamic entropy (which, by use of the thermodynamical characterisation, coincides with the maximised information entropy), obtained straightforwardly as

(6)   S = ∑_a β_a U_a + ln Z_{β}

for generalised Gibbs states of the form in Equation (3). Notice again the lack of a single β scaling the whole equation at this more general level of equilibrium. By varying S such that the variations dU_a and ⟨dO_a⟩ are independent [19], generalised heat terms can be identified,

(7)   dS = ∑_a β_a dQ_a , with dQ_a := dU_a - ⟨dO_a⟩ ,

and, from them, (at least part of the, note 10) work done on the system, dW_a [15], can be identified,

(8)   dW_a := ⟨dO_a⟩ .
From the setup of the thermodynamical characterisation presented in Section 2.1.2, we can immediately identify the U_a as generalised “energies”. Jaynes' procedure allows these quantities to play the role of generalised energies democratically. None had to be selected as being the energy in order to define equilibrium; the symmetry between these quantities can then be broken most easily by preferring one over the others. For instance, if one of the observables generates evolution in some time parameter (relational or not), then this observable can play the role of a dynamical Hamiltonian.

Thermodynamic conjugates to these energies are the several generalised inverse temperatures β_a. By construction, each β_a is the periodicity in the flow of O_a, in addition to being the Lagrange multiplier for the ath constraint in Equation (1). Moreover, these same constraints can determine the β_a, by inverting the equations

(9)   U_a = -∂ ln Z_{β} / ∂β_a ,

or equivalently from

(10)   β_a = ∂S / ∂U_a .

Thus, {β_a} is a multi-variable inverse temperature. In the special case when the O_a are component functions of a dual vector, β⃗ = (β_a) is a vector-valued temperature. For example, this is the case when O⃗ = {O_a} are dual Lie algebra-valued momentum maps associated with Hamiltonian actions of Lie groups, as introduced by Souriau [26,27], and appearing in the context of classical polyhedra in [15].

As shown above, a generalised equilibrium is characterised by several inverse temperatures, so the question of how to extract a single common temperature for the full system is of obvious interest. This can be done as follows [12,15]. A state of the form in Equation (3), with modular Hamiltonian

(11)   h = ∑_a β_a O_a ,

generates a modular flow (with respect to which it is at equilibrium), parameterised by
(12)   t_a = β_a τ   (for each a) ,

where the t_a are the flow parameters of the individual O_a and τ parameterises the modular flow. The strategy now is to reparameterise the same trajectory by a rescaling of τ, singling out one observable, say O_b, of the set {O_a} (assuming that, by extension, the other O_a are functions of this one); then, by standard arguments, the rest of the Lagrange multipliers will be proportional to β_b, which in turn would then be the common inverse temperature for the full system. The point is that this latter instance is a special case of the former.

This suggests a generalised zeroth law. The standard statement refers to a thermalisation resulting in a single temperature being shared by any two systems in thermal contact. This can be extended by the statement that, at equilibrium, all inverse temperatures β_a are equalised. This is in exact analogy with all intensive thermodynamic parameters, such as the chemical potential, being equal at equilibrium.

A first law can likewise be stated energy-wise. In the generalised equilibrium case corresponding to the set of individual constraints in Equation (1), the first law holds ath-energy-wise,

(18)   dU_a = dQ_a + dW_a .

The fact that the law holds a-energy-wise is not surprising, because the separate constraints in Equation (1) for each a mean that the observables O_a do not exchange any information amongst themselves. If they did, then their Lagrange multipliers would no longer be mutually independent, and we would automatically reduce to the special case of having a single β after thermalisation. On the other hand, for the case with a single β, the variation of the entropy in Equation (17) gives

(19)   dS = β ∑_a (dU_a - ⟨dO_a⟩) ,

giving a first law with a more familiar form, in terms of total energy, total heat and total work variations,

(20)   dU = dQ + dW .

As before, in the even more special case where β is conjugate to a single clock Hamiltonian, we recover the standard first law of equilibrium thermodynamics.
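A minimal worked case of the rescaling argument (our own illustration, under the stated proportionality assumption): suppose all observables are proportional to the chosen O_b along the flow, O_a = c_a O_b with constants c_a (and c_b = 1). Then

    \sum_a \beta_a O_a = \Big( \sum_a \beta_a c_a \Big) O_b \equiv \tilde{\beta}\, O_b ,

so the state reduces to an ordinary Gibbs state at a single inverse temperature β̃ conjugate to O_b, with each individual β_a fixed relative to β̃ by the constants c_a.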
Further, the quantities introduced above and the consequences of this setup also need to be investigated in greater detail.
EQUILIBRIUM STATISTICAL MECHANICS IN QUANTUM GRAVITY

The emergence of spacetime is the outstanding open problem in quantum gravity, and it is being addressed from several directions. One such direction is based on modelling quantum spacetime as a many-body system [34], which further complements the view of a classical spacetime as an effective macroscopic thermodynamic system. This formal suggestion allows one to treat extended regions of quantum spacetime as built out of discrete building blocks whose dynamics is dictated by non-local, combinatorial and algebraic mechanical models. Based on this mechanics, a formal statistical mechanics of the quanta of space can be studied [14,15].

Statistical mixtures of quantum gravity states are better suited to describe generic boundary configurations with a large number of quanta, in the sense that, given a region of space with certain known macroscopic properties, a more reasonable modelling of its underlying quantum gravity description would be in terms of a mixed state rather than a pure state, essentially because we cannot hope to know precisely all microscopic details so as to prefer one particular microstate. A simple example is having a region with a fixed spatial volume and wanting to estimate the underlying quantum gravity (statistical) state [11,14].

In addition to the issue of emergence, investigating the statistical mechanics and thermodynamics of quantum gravity systems would be expected to contribute towards untangling the puzzling knot between thermodynamics, gravity and the quantum theory, especially when applied to more physical settings, such as cosmology [28].

In the rest of this article, we use results from the previous sections to outline a framework for equilibrium statistical mechanics for candidate quanta of geometry (along the lines presented in [14,15], but generalising further to a richer combinatorics based on [35]), and within it give an overview of some concrete examples. In particular, we show that a group
field theory arises as an effective statistical field theory, under a coarse-graining of a class of generalised Gibbs states of the underlying quanta. In addition to providing an explicit quantum statistical basis for group field theories, this reinforces their status as field theories for quanta of geometry [36,37,38,39]. As expected, we see that even though the many-body viewpoint makes certain techniques available that are almost
analogous to standard treatments, there are several non-trivialities, such as that of background independence, and the physical (possibly pre-geometric and effective geometric) interpretations of the statistical and thermodynamic quantities involved.
Framework

The candidate atoms of space considered here are geometric (quantum) d-polyhedra (with d faces), or equivalently open d-valent nodes with their half-links dressed by the appropriate algebraic data [40]. This choice is motivated strongly by the loop quantum gravity [41], spin foam [42], group field theory [36,37,38,39] and lattice quantum gravity [43] approaches, in the context of 4d models. Extended discrete space and spacetime can be built out of these fundamental atoms or “particles”, via kinematical compositions (or boundary gluings) and dynamical interactions (or bulk bondings), respectively. In this sense, the perspective innate to a many-body quantum spacetime is a constructive one, which naturally extends also to the statistical mechanics based on this mechanics.

Two types of data specify a mechanical model: combinatorial and algebraic. States and processes of a model are supported on combinatorial structures, here abstract (note 11) graphs and 2-complexes, respectively; algebraic dressings of these structures add discrete geometric information. Thus, different choices of combinatorics and algebraic data give different mechanical models. For instance, the simplest spin foam models (and their group field theory counterparts) are based on simplicial combinatorics, dressed with data drawn from a Lie group such as SU(2) (note 12).
Atoms of Quantum Space and Kinematics

In the following, we make use of some of the combinatorial structures defined in [35]. However, we are content with introducing them in a more intuitive manner, without recalling the rigorous definitions, as that would not be particularly valuable for the present discussion. The interested reader can refer to [35] for details (note 13).

The primary objects of interest to us are boundary patches, which we take as the combinatorial atoms of space. Put simply, a boundary patch is the most basic unit of a boundary graph, in the sense that the set of all boundary patches generates the set of all connected bisected boundary graphs. A bisected boundary graph is simply a directed boundary graph with each of its full links bisected into a pair of half-links, glued at the bivalent nodes (see Figure 1). Different kinds of atoms of space are then the different, inequivalent boundary patches (dressed further with suitable data), and the choice of combinatorics basically boils down to a choice of the set of admissible boundary patches. Moreover, a model with multiple inequivalent boundary patches can be treated akin to a statistical system with multiple species of atoms.
Figure 1. Bisected boundary graph of a 4-simplex, as a result of non-local pairwise gluing of half-links. Each full link is bounded by two 4-valent nodes (denoted here by m,n,…), and bisected by one bivalent node (shown here in green).
The most general types of boundary graphs are those with nodes of arbitrary valence, possibly including loops. A common and natural restriction is to consider loopless structures, as they can be associated with combinatorial polyhedral complexes [35]. As the name suggests, loopless boundary patches are those with no loops; i.e., each half-link is bounded on one end by a unique bivalent node (and on the other by the common, multivalent central node). A loopless patch is thus uniquely specified by its number of incident half-links (or equivalently, by the number of bivalent nodes bounding the central node). A d-patch, with d incident half-links, is simply a d-valent node. Importantly for us, it is the combinatorial atom that supports (quantum) geometric states of a d-polyhedron [40,51,52]. A further common restriction, particularly relevant for simplicial models, is to consider d-regular loopless structures.

Let us take an example. Consider the boundary graph of a 4-simplex, as shown in Figure 1. The fundamental atom or boundary patch is a 4-valent node. The full graph is generated by taking five such nodes (denoted m, n, …, q) and gluing their half-links, or equivalently the faces of the dual tetrahedra, pair-wise, with the non-local combinatorics of a complete graph on five vertices.
The conditions that create extended boundary states from the atoms are precisely the half-link gluing, or face-sharing, constraints on the algebraic data decorating the patches. For instance, in the case of standard loop quantum gravity holonomy-flux data, i.e., elements of the cotangent bundle T*SU(2), the face-sharing gluing constraints are area matching [48], thus lending a notion of discrete classical twisted geometry to the graph. This is much weaker than a Regge geometry, which could have been obtained for the same variables if instead the so-called shape-matching conditions [47] were imposed on the pair-wise gluing of faces/half-links. Thus, the kinematic composition (boundary gluing) that creates boundary states depends on two crucial ingredients: the combinatorial structure of the resultant boundary graph, and the face-sharing gluing conditions on the algebraic data.

From here on, we restrict ourselves to a single type of boundary patch for simplicity: a (gauge-invariant) 4-valent node dressed with SU(2) data, i.e., a quantised tetrahedron [40,51]. However, it should be clear from the brief discussion above (and the extensive study in [35]) that a direct generalisation of the present (statistical) mechanical framework is possible also for these more enhanced combinatorial structures.
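For orientation, the area-matching condition can be written out explicitly in the holonomy-flux variables; the following is its standard form in the twisted geometry literature (cf. [48]), supplied here for convenience. For a full link ℓ obtained by gluing two half-links carrying data (g_ℓ, X_ℓ) and (g̃_ℓ, X̃_ℓ) in T*SU(2),

    C_\ell = \| X_\ell \|^2 - \| \tilde{X}_\ell \|^2 = 0 ,

which matches the areas of the two glued faces while leaving their shapes free; imposing the stronger shape-matching conditions [47] instead would rigidify the twisted geometry into a Regge geometry.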
The phase space of a single classical tetrahedron, encoding both intrinsic and extrinsic degrees of freedom (along with an arbitrary orientation in R³), is

(21)   Γ = T*(SU(2)⁴/SU(2)) ,

where the quotient by SU(2) imposes geometric closure of the tetrahedron. The choice of domain space is basically the choice of algebraic data. For instance, in Euclidean 4d settings a more apt choice would be the group Spin(4), and SL(2,C) for Lorentzian settings. Then, states of a system of N tetrahedra belong to Γ_N = Γ^N, and observables would be smooth (real-valued) functions defined on Γ_N [14,15]. The quantum counterparts are

(22)   H = L²(SU(2)⁴/SU(2))
for the single-particle Hilbert space, and H_N = H^⊗N for an N-particle system. In the quantum setting, we can go a step further and construct a Fock space based on the above single-particle Hilbert space,
(23)   H_F = ⊕_{N≥0} sym(H^⊗N) ,

where the symmetrisation of the N-particle spaces implements a choice of bosonic statistics for the quanta, mirroring the graph automorphism of node exchanges. One choice for the algebra of operators on H_F is the von Neumann algebra of bounded linear operators. A more common choice, though, is the larger *-algebra generated by the ladder operators φ̂, φ̂†, which generate the full H_F by acting on a cyclic Fock vacuum, and satisfy the commutation relations algebra
(24)   [φ̂(g⃗), φ̂†(g⃗′)] = ∫_{SU(2)} dh ∏_{i=1}^{4} δ(g_i h g′_i⁻¹) ,   [φ̂, φ̂] = [φ̂†, φ̂†] = 0 ,

where g⃗ = (g_I) ∈ SU(2)⁴ and the integral on the right ensures SU(2) gauge invariance. In fact, this is the Fock representation of an algebraic bosonic group field theory defined by a Weyl algebra [14,29,53].
Interacting Quantum Spacetime and Dynamics

Coming now to dynamics, the key ingredients here are the specifications of propagators and admissible interaction vertices, including both their combinatorics and their functional dependence on the algebraic data, i.e., their amplitudes.
The combinatorics of propagators and interaction vertices can be specified, following [35], by a bonding map and a bulk map, respectively, both defined in terms of operations on boundary patches. Two patches are bondable if they have the same number of nodes and links. A bonding map between two bondable patches identifies them with one another, such that their respective half-links (attaching them to their respective graphs) are glued pair-wise. The bonding map thus basically bonds two bulk vertices via (parts of) their boundary graphs to form a process (with boundary). This is simply a bulk edge, or propagator.

The interaction vertices are generated by the bulk map. This map augments the set of constituent elements (multivalent nodes, bivalent nodes, and half-links connecting the two) of any bisected boundary graph by one new vertex (the bulk vertex), a set of links joining each of the original boundary nodes to this vertex, and a set of two-dimensional faces each bounded by a triple of the bulk vertex, a multivalent boundary node and a bivalent boundary node. The resulting structure is an interaction vertex with the given boundary graph (note 14). The complete dynamics is then given by the chosen combinatorics, together with amplitude functions encoding the dependence on the algebraic data.

The interaction vertices can in fact be described by vertex operators on the Fock space, in terms of the ladder operators. An example vertex operator, corresponding to the 4-simplex boundary graph shown in Figure 1, is, schematically,

(25)   V̂ = ∫ [dg⃗] V(g⃗_m, …, g⃗_q) φ̂†(g⃗_m) ⋯ φ̂†(g⃗_q) + h.c. ,

where the interaction kernel V encodes the combinatorics of the boundary graph. There are of course other vertex operators associated with the same graph (that is, with the same kernel), but including different combinations of creation and annihilation operators (note 15).

As an example relevant below, consider the gluing constraints corresponding to a given boundary graph Û: in terms of the holonomy-flux data, these are a set of face-sharing conditions involving the data on each half-link (half of a full link) l of node n. We can then choose instead to impose these constraints
weakly, by requiring only their statistical averages in a state to vanish. This gives a Û-dependent generalised Gibbs state on Γ_N, written formally as

(29)   ρ_Û ∝ exp( -∑_l β_l · C_l ) ,

where the multipliers β = (β_l) ∈ R^{3L} (for L links) are generalised inverse temperatures characterising this fluctuating twisted geometric configuration. In fact, one can generalise this state to a probabilistic superposition of such internally fluctuating twisted geometries for an N-particle system (thus defined on Γ_N), which includes contributions from distinct graphs, each composed of a possibly variable number M of nodes. A state of this kind can formally be written as, schematically,

(30)   ρ_N ∝ ∑_{M ≤ M_max} ∑_{Û(M)} ∑_{{i_1,…,i_M}} exp( -∑_l β_l · C_l^{(i_1 … i_M; Û)} ) ,

where the i's are particle indices, and M_max ≤ N and the set of admissible boundary graphs {Û} for a fixed M are model-building choices. The first sum over M includes contributions from all admissible (depending on the model, as determined by M_max) different M-particle subgroups of the full N-particle system, with the gluing combinatorics of various different boundary graphs with M nodes. The second sum is a sum over all admissible boundary graphs Û with a given fixed number of nodes M. Furthermore, the third sum takes into account all M-particle subgroup gluings (according to a given fixed Û) of the full N-particle system. We note that the state in Equation (30) is a further generalisation of that presented in [15]; specifically, the latter is a special case of the former for the case of a single term M = M_max = N in the first sum. Further allowing the system size to vary, that is, considering a variable N, gives the most general configuration, with a set of coupling parameters c_N linked directly to the underlying microscopic model,

(31)   ρ ∝ ∑_{N ≥ 0} c_N ρ_N .

A physically more interesting example is considered in [14], which defines a generalised Gibbs state with respect to the spatial volume operator,

(32)   ρ = (1/Z) e^{-β V̂} ,

where V̂ = ∫ dg⃗ dg⃗′ φ̂†(g⃗) V(g⃗, g⃗′) φ̂(g⃗′) is a positive, self-adjoint operator on H_F, and the state is a well-defined density operator on the same. In fact, with
a grand-canonical extension of it, this system can be shown naturally to support Bose–Einstein condensation to a low-spin phase [14]. Clearly, this state encodes thermal fluctuations in the volume observable, which is an especially important one in the context of cosmology. In fact, the rapidly developing field of condensate cosmology [54], for atoms of space of the kind considered here, is based on modelling the underlying system as a condensate, and subsequently extracting effective physics from it. These are certainly crucial steps in the direction of obtaining cosmological physics from quantum gravity [9]. It is equally crucial to enrich further the microscopic quantum gravity description itself, and extract effective physics for these new cases. One such important case is to consider thermal fluctuations of the gravitational field at early times, during which our universe is expected to be in a quantum gravity regime; that is, to consider thermal quantum gravity condensates using the frameworks laid out in this article (as opposed to the zero temperature condensates that have been used till now), and subsequently derive effective physics from them. This case would then directly reflect thermal fluctuations of gravity as being of a proper quantum gravity origin. This is investigated in [28].
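The low-spin condensation just mentioned can be illustrated with a minimal numerical sketch. The single-mode spectrum, inverse temperature and chemical potentials below are hypothetical stand-ins (a volume-like growth of mode energies with spin), not values taken from [14]; the point is only that, as the chemical potential approaches the lowest mode, the occupation concentrates in the smallest spin j = 1/2.

    import numpy as np

    spins = np.arange(1, 21) / 2.0      # j = 1/2, 1, ..., 10
    eps = spins ** 1.5                  # hypothetical volume-like mode energies
    beta = 2.0                          # fixed inverse temperature

    def occupations(mu):
        # Bose-Einstein occupations per spin mode; requires mu < eps.min()
        return 1.0 / (np.exp(beta * (eps - mu)) - 1.0)

    for mu in (-1.0, 0.0, 0.34):        # mu approaching eps.min() ~ 0.354
        n = occupations(mu)
        print(f"mu = {mu:5.2f}: N = {n.sum():8.2f}, "
              f"fraction in lowest mode j = 1/2: {n[0] / n.sum():.3f}")

As the chemical potential is pushed towards the lowest mode energy, the total particle number grows and an increasingly macroscopic fraction of the quanta sits in the j = 1/2 mode, the hallmark of condensation.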
We turn now to group field theories [36,37,38,39]. These are field theories defined over group manifolds, commonly copies of SU(2), Spin(4) or SO(4). For instance, a complex scalar GFT over SU(2) is defined, at least formally, by a partition function

(33)   Z_GFT = ∫ Dφ Dφ̄ e^{-S_GFT[φ, φ̄]} ,

where Dφ Dφ̄ is a functional measure, which in general is ill-defined, and S_GFT is the GFT action, of the form (for commonly encountered models), schematically,

(34)   S_GFT[φ, φ̄] = ∫ dg dg′ φ̄(g) K(g, g′) φ(g′) + λ ∫ [dg] V(g_1, …; φ, φ̄) ,

where the g's are elements of the group, and the kernel V is generically non-local, convoluting the arguments of several φ and φ̄ fields (written here in terms of a single interaction term). It defines the interaction vertex of the dynamics by enforcing the combinatorics of its corresponding (unique, via the inverse of the bulk map) boundary graph. The covariant dynamics of the theory is encoded in Z_GFT.

Below, we outline a way to derive such covariant dynamics from a suitable quantum statistical equilibrium description of a system of quanta of space of the kind introduced in Section 3.1. Our use of field coherent states is the same as in [15,29], but with the crucial difference that here we do not claim any relation (even if formal) between a canonical dynamics (in terms of a projector operator) and a covariant dynamics (in terms of a functional integral). Here, we simply show a quantum statistical basis for the covariant dynamics of a GFT, and in the process reinterpret the standard form of the GFT partition function in Equation (33) as arising from a coarse-graining and further approximations of the underlying statistical quantum gravity system.

We saw in Section 3.1 that the dynamics of the polyhedral atoms of space is encoded in the choices of propagators and interaction vertices, which can be written in terms of kinetic and vertex operators in the Fock description. In our present considerations with a single type of atom (an SU(2)-labelled 4-valent node), let us then consider the following generic kinetic and vertex operators, written schematically,

(35)   K̂ = ∫ dg⃗ dg⃗′ φ̂†(g⃗) K(g⃗, g⃗′) φ̂(g⃗′) ,   V̂ = λ ∫ [dg⃗] V(g⃗_1, …, g⃗_N) φ̂†(g⃗_1) ⋯ φ̂†(g⃗_N) + h.c. ,

where N > 2 is the number of 4-valent nodes in the boundary graph Û associated with V; that is, K̂ is of the form φ̂†φ̂ and V̂ of the form φ̂† ⋯ φ̂†. As shown above, in principle a generic model can include several distinct vertex operators. Even though what we have considered here is the simple case of having only one, the argument can be extended directly to the general case.

The operators K̂ and V̂ are understood as encoding the dynamical model. Using the thermodynamical characterisation, we can then consider the formal constraints (note 16) ⟨K̂⟩ = constant and ⟨V̂⟩ = constant, to write down a generalised Gibbs state on H_F,
(36)   ρ = (1/Z) e^{-(β₁ K̂ + β₂ V̂)} ,

where a = 1, 2 labels the two multipliers, and the partition function (note 17) is

(37)   Z = Tr e^{-(β₁ K̂ + β₂ V̂)} .

The trace can be evaluated in a basis of coherent states on H_F [15,29,55]. Field coherent states give a
continuous representation on H_F, where the parameter labelling each state is a wave (test) function [55]. For the Fock description mentioned in Section 3.1, the coherent states are

(38)   |ψ⟩ = e^{φ̂†(ψ) - φ̂(ψ)} |0⟩ ,

where |0⟩ is the Fock vacuum (satisfying φ̂(g⃗)|0⟩ = 0 for all g⃗), φ̂(ψ) = ∫_{SU(2)⁴} dg⃗ ψ̄(g⃗) φ̂(g⃗) and its adjoint are smeared operators, and ψ ∈ H. The set of all such states provides an over-complete basis for H_F. The most useful property of these states is that they are eigenstates of the annihilation operator,

(39)   φ̂(g⃗)|ψ⟩ = ψ(g⃗)|ψ⟩ .

The trace in the partition function in Equation (37) can then be evaluated in this basis,

(40)   Z = ∫ Dμ(ψ, ψ̄) ⟨ψ| e^{-(β₁ K̂ + β₂ V̂)} |ψ⟩ ,

where Dμ here is the coherent state measure [55]. The integrand can be treated and simplified along the lines presented in [15] (to which we refer for details), to get an effective partition function,
(41)   Z₀ = ∫ Dψ Dψ̄ e^{-(β₁ K[ψ] + β₂ V[ψ])} ,

where the subscript 0 indicates that we have neglected higher order terms, the measure absorbs the relevant {β_a}-dependent factors, and the functions in the exponent are K = ⟨ψ|K̂|ψ⟩ and V = ⟨ψ|V̂|ψ⟩, by the eigenstate property in Equation (39). Equation (41) has precisely the form of the GFT partition function in Equation (33), with the effective GFT action identified as a function of the inverse temperatures, β₁K + β₂V, a functional on the space of wave functions ψ, i.e., on field space [38,39].
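The passage from Equation (37) to Equation (40) rests on the over-completeness of the field coherent basis; the following identities are standard coherent-state properties, quoted here from textbook material rather than from this article:

    \mathbf{1}_{\mathcal{H}_F} = \int \mathcal{D}\mu(\psi, \bar\psi)\; |\psi\rangle \langle\psi| ,
    \qquad
    \mathrm{Tr}\,\hat{A} = \int \mathcal{D}\mu(\psi, \bar\psi)\; \langle \psi | \hat{A} | \psi \rangle .

At leading order, normal ordering the exponent and using Equation (39) to replace φ̂ by ψ and φ̂† by ψ̄ inside the expectation value then yields Equation (41), with operator-reordering corrections making up the neglected higher-order terms.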
CONCLUSIONS AND OUTLOOK

We have presented an extension of equilibrium statistical mechanics for background independent systems, based on a collection of results and insights from old and new studies. While various proposals for a background independent notion of statistical equilibrium have been summarised, one in particular, based on a constrained maximisation of information entropy, has been stressed. We have argued in favour of its potential by highlighting its many unique and valuable features. We have remarked on interesting new connections with the thermal time hypothesis, in particular suggesting the use of this characterisation of equilibrium as a criterion of choice for the application of the hypothesis. Subsequently, aspects of a generalised framework for thermodynamics have been investigated, including defining the essential thermodynamic potentials, and discussing generalised zeroth and first laws.

We have then considered the statistical mechanics of a candidate quantum gravity system, composed of many atoms of space. The choice of (possibly different types of) these quanta is inspired directly by the boundary structures arising in discrete approaches to quantum gravity: they are combinatorial building blocks (or boundary patches) of graphs, labelled with suitable algebraic data encoding discrete geometric information, with their constrained many-body dynamics dictated by bulk bondings between interaction vertices and amplitude functions. Generic statistical states can then be defined on such many-body systems, including generalised Gibbs states of the kind advocated above [14]. Finally, we have given an overview of applications in quantum gravity [14,15,16,28]. In particular, we have derived an effective group field theory from a coarse-graining, using coherent states, of a class of generalised Gibbs states of the underlying system with respect to dynamics-encoding kinetic and vertex operators; and in this way we have reinterpreted the GFT partition function as arising from a coarse-graining of the
underlying statistical quantum gravity system.

More investigations along these directions will certainly be worthwhile. For example, the thermodynamical characterisation could be applied in a spacetime setting, such as for stationary black holes with respect to the mass, charge and angular momentum observables, to explore further its physical implications. The black hole setting could also help unfold how the selection of a single preferred temperature can occur starting from a generalised Gibbs measure. Moreover, it could offer insights into relations with the thermal time hypothesis, and help better understand some of our more intuitive reasonings presented in Section 2.3; and similarly for the generalised thermodynamics, which requires further development, particularly with regard to its additional potentials and laws, whose physical implications remain to be understood in the context of background independence. For this, Souriau's generalisation of Lie group thermodynamics [26,27] could offer valuable guidance.

There are many avenues to explore also in the context of statistical mechanics and thermodynamics of quantum gravity. In the former, for example, it would be interesting to study potential black hole quantum gravity states [56]. In general, it is important to be able to identify suitable observables to characterise an equilibrium state in physically relevant cases. On the cosmological side, for instance, those phases of the complete quantum gravity system which admit a cosmological interpretation will be expected to have certain symmetries, whose associated generators could then be suitable candidates for the generalised energies. Another interesting cosmological aspect to consider is that of inhomogeneities induced by early-time thermal fluctuations, for instance through an application of the volume Gibbs state [14] (or some extension of it) recalled above. The latter aspect of investigating thermodynamics of quantum gravity systems would also benefit from studies of the thermodynamics of spacetime in semiclassical settings. We may also need to consider explicitly the quantum nature of the degrees of freedom, and use quantum thermodynamics [57], which itself has fascinating links to quantum information [58].
ACKNOWLEDGMENTS

Many thanks are due to Daniele Oriti for valuable discussions and comments on the manuscript. Special thanks are due also to Goffredo Chirco and
Mehdi Assanioussi for insightful discussions. The generous hospitality of the Visiting Graduate Fellowship program at Perimeter Institute, where part of this work was carried out, is gratefully acknowledged. Research at Perimeter Institute is supported in part by the Government of Canada through the Department of Innovation, Science and Economic Development Canada and by the Province of Ontario through the Ministry of Economic Development, Job Creation and Trade.

NOTES
1. In the original works mentioned above, the framework is usually referred to as covariant or general relativistic statistical mechanics. However, we choose to call it background independent statistical mechanics, as our applications to quantum gravity are evidence of the fact that the main ideas and structures are general enough to be used in radically background independent systems devoid of any spacetime manifold or associated geometric structures.
2. For a more detailed discussion of the comparison between these two characterisations, we refer the reader to [14]. The main idea is that the various proposals for generalised Gibbs equilibrium can be divided into these two categories. Which characterisation one chooses to use in a given situation depends on the information/description of the system that one has at hand. For instance, if the description includes a one-parameter flow of physical interest with respect to which the system is at equilibrium, then the KMS condition can be applied. In this sense, the two characterisations are like different ‘recipes’ for constructing a Gibbs state, and which one is more suitable depends on our knowledge of the system.
3. The algebraic KMS condition [21] is well known to provide a comprehensive characterisation of statistical equilibrium in systems of arbitrary size, as long as there exists a preferred one-parameter dynamical flow. This latter point, of the required existence of a preferred time evolution of the system, is exactly the missing ingredient in our case, thus limiting its applicability.
4. Even though this method relies on maximising the entropy, similar to the thermodynamical characterisation, it is more restrictive than the latter, as is made clear in Section 2.2.
5. We remark that, except for this one work, all other studies in spacetime covariant statistical mechanics are carried out from the Gibbs ensemble point of view.
6. Local, in the sense of being observer-dependent (see Section 2.2).
7. This is also the main ingredient of the thermal time hypothesis [1,2], to which we return below.
8. However, as Jaynes points out in [19], these properties were relegated to side remarks in the past, not really considered to be fundamental to the theory or to the derivation of its results.
9. In fact, in hindsight, we could already have anticipated a possible equilibrium description in terms of these constants, whose existence is assumed from the start.
10. By this we mean that the term ⟨dO_a⟩, based on the same reasoning as in standard statistical mechanics, admits the interpretation of work performed on the system. However, naturally, we do not expect or claim that this is all the work that is/can be performed on the system by external agencies. In other words, there could be other work contributions, in addition to the terms dW_a. A better understanding of work terms in this background independent setup will also contribute to a better understanding of the generalised laws.
11. That is, not necessarily embedded into any continuum spatial manifold.
12. In fact, the work in [35] is formulated in a group field theory context, but the structures are general enough to apply elsewhere, such as in spin foams, as evidenced in [44].
13. For clarity, we note that the terminology used here is slightly different from that in [35]; the discrepancy amounts to a minor difference in the purpose for the same combinatorial structures. Here, we are in a setup where the accessible states are boundary states, for which a statistical state is a suitable (amplitude) functional over the boundary state space. On the other hand, the perspective in [35] is more that of a spin foam constructive setting, so that modelling the 2-complexes as built out of fundamental spin foam atoms is more natural there.
14. An interesting aspect is that the bulk map is one-to-one, so that for every distinct bisected boundary graph there is a unique interaction vertex generated from it.
15. This would generically be true for any second quantised operator [29].
16. A proper interpretation of these constraints is left for future work.
17. Convergence of the trace is formally assumed here, notwithstanding the operator norm unboundedness of the ladder operators.
REFERENCES
1. Rovelli, C. Statistical mechanics of gravity and the thermodynamical origin of time. Class. Quantum Gravity 1993, 10, 1549–1566.
2. Connes, A.; Rovelli, C. Von Neumann algebra automorphisms and time thermodynamics relation in general covariant quantum theories. Class. Quantum Gravity 1994, 11, 2899–2918.
3. Rovelli, C. General relativistic statistical mechanics. Phys. Rev. D 2013, 87, 084055.
4. Bardeen, J.M.; Carter, B.; Hawking, S.W. The four laws of black hole mechanics. Commun. Math. Phys. 1973, 31, 161–170.
5. Bekenstein, J.D. Black holes and the second law. Lett. Nuovo Cimento 1972, 4, 737–740.
6. Bekenstein, J.D. Black holes and entropy. Phys. Rev. D 1973, 7, 2333–2346.
7. Hawking, S.W. Particle creation by black holes. Commun. Math. Phys. 1975, 43, 199–220.
8. Bombelli, L.; Koul, R.K.; Lee, J.; Sorkin, R.D. A quantum source of entropy for black holes. Phys. Rev. D 1986, 34, 373–383.
9. Oriti, D. The universe as a quantum gravity condensate. C. R. Phys. 2017, 18, 235–245.
10. Rovelli, C. The statistical state of the universe. Class. Quantum Gravity 1993, 10, 1567.
11. Montesinos, M.; Rovelli, C. Statistical mechanics of generally covariant quantum theories: A Boltzmann-like approach. Class. Quantum Gravity 2001, 18, 555–569.
12. Chirco, G.; Haggard, H.M.; Rovelli, C. Coupling and thermal equilibrium in general-covariant systems. Phys. Rev. D 2013, 88, 084027.
13. Rovelli, C.; Smerlak, M. Thermal time and the Tolman–Ehrenfest effect: Temperature as the “speed of time”. Class. Quantum Gravity 2011, 28, 075007.
14. Kotecha, I.; Oriti, D. Statistical equilibrium in quantum gravity: Gibbs states in group field theory. New J. Phys. 2018, 20, 073009.
15. Chirco, G.; Kotecha, I.; Oriti, D. Statistical equilibrium of tetrahedra from maximum entropy principle. Phys. Rev. D 2019, 99, 086011.
16. Chirco, G.; Kotecha, I. Generalized Gibbs ensembles in discrete quantum gravity. In Geometric Science of Information 2019; Nielsen, F., Barbaresco, F., Eds.; Springer: Cham, Switzerland, 2019.
17. Chirco, G.; Josset, T.; Rovelli, C. Statistical mechanics of reparametrization-invariant systems. It takes three to tango. Class. Quantum Gravity 2016, 33, 045005.
18. Haggard, H.M.; Rovelli, C. Death and resurrection of the zeroth principle of thermodynamics. Phys. Rev. D 2013, 87, 084001.
19. Jaynes, E.T. Information theory and statistical mechanics. Phys. Rev. 1957, 106, 620–630.
20. Jaynes, E.T. Information theory and statistical mechanics. II. Phys. Rev. 1957, 108, 171–190.
21. Bratteli, O.; Robinson, D.W. Operator Algebras and Quantum Statistical Mechanics I, II; Springer: Berlin/Heidelberg, Germany, 1987.
22. Landau, L.D.; Lifshitz, E.M. Statistical Physics, Part 1; Course of Theoretical Physics, Vol. 5; Butterworth-Heinemann: Oxford, UK, 1980.
23. Chirco, G.; Josset, T. Statistical mechanics of covariant systems with multi-fingered time. arXiv 2016, arXiv:1606.04444.
24. Martinetti, P.; Rovelli, C. Diamond's temperature: Unruh effect for bounded trajectories and thermal time hypothesis. Class. Quantum Gravity 2003, 20, 4919–4932.

Thermodynamics, Stability and Hawking–Page Transition of Black ...

This holds for any real q > 0, which goes beyond the discussion of ref. [11], where only q > 1 was allowed in order for the Barbero-Immirzi parameter, and hence the area spectrum, to be admissible. Concerning the Schwarzschild BH, the relevant quantity is the strength of the coupling between the horizon microstates and the bulk geometry.
the CS theory coupled to the punctures on the BH horizon is presented by

(2)

where F indicates the curvature of the horizon CS gauge fields and Σ^I_ab refers to the pull-back of the soldering form built from the bulk tetrads. The level of this CS theory is proportional to the horizon area in Planck units. Therefrom, it is apparent that for a BH with a given area, the strength with which the horizon microstates are coupled to the bulk geometry is controlled by the parameter γ. The microstates estimate is afforded by the horizon Hilbert space, obtained by quantizing this CS theory coupled to the punctures.

In this letter we show that these two points of view quite naturally complement each other if a generalization of the Shannon entropy (from now on to be named the q-entropy) is introduced to incorporate the effect of a bias in the microstates. We use it to calculate the BH entropy and require it to yield the Bekenstein–Hawking entropy law (BHEL). As a result, we obtain the BHEL from the BH microstates in LQG, and this for arbitrary real positive values of the parameter q. The parameter q, however, will be a function of A, which is physically well substantiated for the following reason: the coupling between the horizon and the bulk has a strength that depends on A through the CS level. Since q accounts for the effect of the bias generated in the horizon microstates by this coupling, it has to depend on A as well.

Here, we examine the problem of the Bekenstein–Hawking entropy of a Schwarzschild BH by taking into account the formal-logarithm approach to the Tsallis entropy, see e.g. [17], [18] (considered below). In the present work, we find a Hawking–Page-like [6] phase transition of BHs in the Tsallis model (there is an interesting similarity between the corresponding temperature function and that of a standard BH in AdS space thermodynamics). In the two
situations, the temperature has a minimum, and the heat capacity–mass relation is very similar to that of a BH in AdS space. We show that the corresponding critical temperature depends only on the q-parameter of the Tsallis formula (if γ is constant), just as it depends exclusively on the curvature parameter in AdS space. There are many interesting connections between this phenomenon and other open problems of theoretical physics, for example, the nucleation of cosmic matter into BHs in the early Universe; moreover, because of the resemblance with the corresponding problem in AdS space [6], it can be linked, within string theory and related phenomena, to the AdS/CFT correspondence. The question of possible phase transitions is also discussed here.
QUANTUM THEORY IN LQG

In the framework of LQG, we deal with the statistical mechanical properties of the quantum isolated horizon [5]. A basis for the Hilbert space of canonical gravity is given by spin networks. These are graphs whose edges are labeled by representations of the gauge group of the theory. This group is SU(2) in the case of gravity, and the corresponding representations are therefore labeled by spins jk, taking values in the set {1/2, 1, 3/2, …, s/2}, where jk is the spin related to the kth puncture and s (s = 2jmax) is the maximum number of spins. A surface acquires the quantum area Aqu of the horizon if it is intersected by edges of such a spin network carrying the labels jk [15], [16]:

Aqu = 8πγ lp² Σk √(jk(jk + 1)),  (3)

where lp is the Planck length and γ is the Barbero–Immirzi parameter. A path for the determination of the BH entropy is furnished by LQG, where the statistical mechanical properties of quantum isolated horizons are investigated in [5]. Especially, they are specified by means of quantum states that are constructed by the association of spin variables with punctures on the horizon. More accurately, each such state corresponds to a specific eigenvalue of the area operator. We will deal with a horizon made up of N punctures situated on s different surfaces of quantum area ak with spin jk; if nk is the number of punctures carrying spin jk, then in the area formula, Eq. (3), the sum over the punctures can be replaced by a sum over the spins j:

Aqu = 8πγ lp² Σj nj √(j(j + 1)).  (4)
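Equation (3) is straightforward to evaluate numerically. The following is a minimal sketch (our own illustration, not from the original paper; the value of γ is just a commonly quoted one, and Planck units are assumed):

```python
import math

GAMMA = 0.2375  # Barbero-Immirzi parameter (illustrative value only)

def quantum_area(spins, gamma=GAMMA, lp2=1.0):
    """Eq. (3): A_qu = 8*pi*gamma*l_p^2 * sum_k sqrt(j_k (j_k + 1))."""
    return 8.0 * math.pi * gamma * lp2 * sum(math.sqrt(j * (j + 1)) for j in spins)

# A horizon pierced by four spin-1/2 punctures and one spin-1 puncture:
print(quantum_area([0.5, 0.5, 0.5, 0.5, 1.0]))  # area in units of l_p^2
```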
The Hilbert space associated with the BH horizon is that of a quantized SU(2) CS theory, with level proportional to the horizon area, coupled to the punctures, HS = Inv(⊗l Hjl), l ∈ [1, N], where 'Inv' accounts for the gauge invariance on the horizon. At this stage, for a known set of N punctures with spins j1,…,jN, the number of microstates is determined by [10], [12]
(5) For the calculations we shall use the expression (5). To obtain the total number of conformal blocks, this expression must be summed over all possible spin values at each puncture.
(6) Using Multinomial Expansion [10], the above expression can be recast into the following form
(7)

In the limit of a large number of punctures N and large s (which is ensured for A ≫ lp²), the number of physical microstates corresponding to a given configuration {nj} is approximately given by
(8)
where, for any configuration {nj}, N = Σj nj is the total number of associated punctures. Here the configuration {nj} stands for a set of punctures in which nj punctures carry spin j, while j runs from 1/2 to s/2.
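This configurational counting is easy to explore numerically. The sketch below assumes the familiar multinomial form Ω[{nj}] = N! ∏j (2j+1)^{nj}/nj!, the large-area limit of the conformal-block count; the precise prefactors of Eqs. (5)–(8) are not recoverable from the scan, so treat this as an illustration of the structure, not the chapter's exact formula:

```python
from math import lgamma, log

def log_omega(n_j):
    """log(number of microstates) for a puncture configuration n_j,
    a dict mapping spin j (1/2, 1, 3/2, ...) to the puncture count,
    assuming Omega = N! * prod_j (2j+1)^{n_j} / n_j!  (large-area limit)."""
    N = sum(n_j.values())
    logw = lgamma(N + 1)  # log N!
    for j, n in n_j.items():
        logw += n * log(2 * j + 1) - lgamma(n + 1)
    return logw

# Entropy (in units of k_B) of 100 spin-1/2 punctures vs. a mixed configuration:
print(log_omega({0.5: 100}), log_omega({0.5: 80, 1.0: 20}))
```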
NONEXTENSIVE ENTROPY

Primarily, the idea behind the introduction of the notion of q-entropy [17] was to incorporate, at the level of statistical mechanics, the effect of a bias in the probabilities of the microstates of the underlying quantum mechanical system. The q parameter is designated the entropic index; broadly, one has 0 < q < ∞, with q = 1 recovering the usual extensive case.

Figure 1. Schwarzschild BH temperature as a function of its mass parameter with the standard Boltzmann (blue-continuous, q = 1) and Tsallis (red-dashed, q > 1 and black-dotted, q < 1) entropies.
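For reference, the q-entropy in question is the standard Tsallis form of ref. [17] (quoted here for the reader, since the display equations in the scan are unreadable):

```latex
S_q \;=\; k\,\frac{1-\sum_i p_i^{\,q}}{q-1},
\qquad
\lim_{q\to 1} S_q \;=\; -k\sum_i p_i \ln p_i ,
```

so the Boltzmann–Gibbs–Shannon entropy is recovered as q → 1, while q > 1 biases the ensemble against improbable microstates and 0 < q < 1 biases it towards them.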
The thermal stability of the Tsallis BH can be analyzed through the sign of its heat capacity CBH. The CBH of the Tsallis BH is given by

(26)

From Fig. 2, when 0 < q < 1 the heat capacity is negative for every mass, whereas for q > 1 it is positive for m > mmin. In the case q > 1, the temperature has its minimum at TH = THmin. As described in Fig. 1, the temperature is positive, TH ≥ THmin > 0, for every m. According to the viewpoint of Davies [3], [9], the phase transitions that occur at the divergence points mmin of the heat capacity are of second order.
Figure 2. Schwarzschild BH heat capacity as a function of its mass parameter with the standard Boltzmann (blue-continuous, q = 1) and Tsallis (red-dashed, q > 1 and black-dotted, q < 1) entropies.
To investigate the global thermodynamical stability of the BHs with Tsallis entropy in LQG, we need to consider the Gibbs free energy, GBH = m − TH S, of the BHs, given by

(27)

As described in Fig. 3, the Gibbs free energy reaches its maximum value at m = mmin, the divergent point of CBH. To the right of the critical point mmin, GBH decreases monotonically with the mass m. Thus, the larger the BH, the more thermodynamically stable it is. Therefore, to ensure a positive heat capacity CBH and thermodynamically well-behaved quantities, there must be q > 1 and m > mmin.

Figure 3. Schwarzschild BH Gibbs free energy as a function of its mass parameter with the standard Boltzmann (blue-continuous, q = 1) and Tsallis (red-dashed q > 1 and black-dotted q < 1) entropies.
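The qualitative content of Figs. 1–3 can be reproduced with a few lines of numerics. The entropy function below is an assumption on our part: we take SBH = 4πm² (Planck units) and map it to a Tsallis entropy through the formal logarithm, S_T = [e^{(1−q)S_BH} − 1]/(1−q); the chapter's exact expressions (24)–(27) are not recoverable from the scan. The thermodynamic relations themselves, T_H = (dS/dm)^{-1}, C_BH = dm/dT_H and G_BH = m − T_H S, are standard:

```python
import numpy as np

q = 1.05                                       # entropic index, q > 1 branch
m = np.linspace(0.05, 1.2, 4000)               # BH mass parameter, Planck units
S_BH = 4.0 * np.pi * m**2                      # Bekenstein-Hawking entropy
S_T = np.expm1((1.0 - q) * S_BH) / (1.0 - q)   # assumed Tsallis (formal-log) entropy

T_H = 1.0 / np.gradient(S_T, m)                # temperature: T = (dS/dm)^-1
C_BH = np.gradient(m, T_H)                     # heat capacity: C = dm/dT
G_BH = m - T_H * S_T                           # Gibbs free energy: G = m - T S

i = np.argmin(T_H)
print(f"temperature minimum: m_min ~ {m[i]:.3f}, T_Hmin ~ {T_H[i]:.4f}")
# For q > 1: C_BH < 0 (unstable) below m_min and C_BH > 0 (stable) above it,
# and G_BH peaks at m_min -- the Hawking-Page-like pattern described above.
```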
CONCLUSION

In this letter, we considered the Bekenstein–Hawking entropy of Schwarzschild BH event horizons as a nonextensive Tsallis entropy (assuming that this idea is true), and studied the corresponding equation of state based on the Tsallis non-extensive entropy within LQG. From this study, we concluded that non-extensive thermodynamics should be taken into account in this context. For q > 1, one obtains a temperature minimum THmin at a given mass mmin. BHs with smaller mass, m < mmin, have negative specific heat and are unstable, whereas BHs with m > mmin have positive specific heat and become stable in the Schwarzschild–Tsallis model. In the current approach, the most substantial result is the confirmation of a stability change and of a Hawking–Page-like transition of Schwarzschild BHs in the Tsallis model. These results are analogous to the ones obtained by Hawking and Page in AdS space within the standard BG entropy approach, because the whole curve TH(m) has the same form as that of a BH in AdS space [2], [6]. Therefore, there is a correspondence between the nonextensive q-parameter of the Schwarzschild–Tsallis model and the curvature parameter of AdS space. Because of this similarity with the AdS problem, our present results also bear some relation to the AdS/CFT correspondence and related phenomena.
REFERENCES
1. A. Ashtekar, J. Baez, K. Krasnov, Quantum geometry of isolated horizons and black hole entropy, Adv. Theor. Math. Phys. 4 (2000) 1–94.
2. V.G. Czinner, H. Iguchi, Rényi entropy and the thermodynamic stability of black holes, Phys. Lett. B 752 (2016) 306–310.
3. P.C.W. Davies, Thermodynamic theory of black holes, Proc. R. Soc. Lond. Ser. A 353 (1977) 499–521.
4. A. Ghosh, P. Mitra, An improved estimate of black hole entropy in the quantum geometry approach, Phys. Lett. B 616 (2005) 114.
5. A. Ghosh, A. Perez, Black hole entropy and isolated horizons thermodynamics, Phys. Rev. Lett. 107 (2011) 241301.
6. S.W. Hawking, D.N. Page, Thermodynamics of black holes in anti-de Sitter space, Commun. Math. Phys. 87 (1983) 577.
7. G. Immirzi, Quantum gravity and Regge calculus, Nucl. Phys. B, Proc. Suppl. 57 (1997) 65–72.
8. R.K. Kaul, P. Majumdar, Quantum black hole entropy, Phys. Lett. B 439 (1998) 267–270.
9. Meng-Sen Ma, Ren Zhao, Stability of black holes based on horizon thermodynamics, Phys. Lett. B 751 (2015) 278.
10. A. Majhi, Conformal blocks on a 2-sphere with indistinguishable punctures and implications on black hole entropy, Phys. Lett. B 762 (2016) 243–246.
11. A. Majhi, Non-extensive statistical mechanics and black hole entropy from quantum geometry, Phys. Lett. B 775 (2017) 32–36.
12. A. Majhi, P. Majumdar, 'Quantum hairs' and entropy of the quantum isolated horizon from Chern–Simons theory, Class. Quantum Gravity 31 (2014) 195003.
13. K.A. Meissner, Black hole entropy in loop quantum gravity, Class. Quantum Gravity 21 (2004) 5245–5252.
14. C. Rovelli, Loop quantum gravity, Living Rev. Relativ. (2008).
15. C. Rovelli, L. Smolin, Discreteness of area and volume in quantum gravity, Nucl. Phys. B 442 (1995) 593–622.
16. '%\'~+ > quantum gravity, J. Math. Phys. 36 (1995) 6417–6455.
17. C. Tsallis, Possible generalization of Boltzmann–Gibbs statistics, J. Stat. Phys. 52 (1988) 479.
18. C. Tsallis, L.J.L. Cirto, Black hole thermodynamical entropy, Eur. Phys. J. C 73 (2013) 2487.
19. D.H. Zanette, P.A. Alemany, Thermodynamics of anomalous diffusion, Phys. Rev. Lett. 75 (1995) 366.
CHAPTER 12
GRAVITATIONAL ENTROPY AND INFLATION
Øystein Elgarøy 1 and Øyvind Grøn 2

1 Institute of Theoretical Astrophysics, University of Oslo, Oslo N-0315, Norway
2 Oslo and Akershus University College of Applied Sciences, Faculty of Engineering, Olavs plass, Oslo N-0130, Norway
ABSTRACT

The main topic of this paper is a description of the generation of entropy at the end of the inflationary era. As a generalization of the present standard model of the Universe dominated by pressureless dust and a Lorentz invariant vacuum energy (LIVE), we first present a flat Friedmann universe model, where the dust is replaced with an ideal gas. It is shown that the pressure of the gas is inversely proportional to the fifth power of the scale factor and that the entropy in a comoving volume does not change during the expansion.
Citation: Elgarøy, Ø.; Grøn, Ø. Gravitational Entropy and Inflation. Entropy 2013, 15, 3620–3639. Published under a Creative Commons Attribution License.

…the cosmic entropy during a period dominated by repulsive gravity has not yet been obtained. Before the Planck time, 10⁻⁴³ s, the Universe was probably in a state dominated by quantum gravitational effects. Then followed a brief inflationary era,
lasting about 10⁻³³ s, dominated by dark energy with a huge density, possibly in the form of Lorentz invariant vacuum energy (LIVE). The Universe then evolved exponentially fast towards a smooth and maximally symmetric de Sitter space, which represents the equilibrium end state with maximal entropy when the evolution is dominated by a non-vanishing cosmological constant. This was an essentially adiabatic expansion with small changes of entropy. At the end of the inflationary era, the vacuum energy was transformed to radiation and matter; gravity became attractive, and it became entropically favorable for the Universe to grow clumpy. As pointed out by Veneziano [2], a large part of the present entropy is the result of these dissipative processes. However, not only did the entropy of the matter increase, as calculated in Section 7, the value of the maximum possible entropy also increased, and much more than the actual entropy. Hence, a large gap opened between the actual entropy of the Universe and the maximum possible entropy of the Universe. According to Davies, this accounts for all the observed macroscopic time asymmetry in the physical world and imprints an arrow of time on it.

D. N. Page [3] has disputed this conclusion. He argued that because the de Sitter spacetime, the perturbed form of which is equal to the spacetime of the inflationary era, is a time-symmetric solution of Einstein's equations, for every solution which corresponds to decaying perturbations, there will be a time-reversed solution that describes growing perturbations. Page maintained that what singles out the solutions with decaying perturbations is the absence of correlations in the perturbations in the region. Davies' reply to this [4] was that the perturbations will only grow if they conspire to organize themselves over a large spatial region in a cooperative fashion. Hence, it is necessary to explain why it is reasonable for the universe to have been in a state with no correlations initially. Davies then went on to give the following explanation. Due to repulsive gravity, the de Sitter spacetime may be considered to be a state of equilibrium towards which the Universe evolves, and in such a state the perturbations will be uncorrelated. A randomly chosen perturbed state will almost certainly be one without correlations over large spatial regions. This state is thus one in which the perturbations will decay rather than grow whichever direction of time is chosen as forward.
Smoothing of inhomogeneities lowers the gravitational entropy, and expansion raises it; it is not obvious which should dominate. We shall investigate this question in two simple universe models: a plane-symmetric Bianchi type I universe and a Lemaitre-Tolman-Bondi (LTB) universe, both dominated by LIVE, using measures of gravitational entropy, which we will introduce and motivate. We intend to give a quantitative description of entropy generation at the end of the inflationary era, including the evolution of both the entropy of the material contents of the Universe and the maximum possible entropy of the Universe. For this purpose, we apply a universe model presented by Bonanno and Reuter [5,6,7], which allows for a transition from vacuum energy to radiation. We solve analytically the Friedmann equations of this model for the Hubble parameter as a function of time. The remaining integration for the scale factor must be performed numerically. We use the result of the numerical integration to calculate the time evolution of several physical quantities, in particular, the radiation entropy and the entropy production rate.
A FLAT UNIVERSE MODEL WITH IDEAL GAS AND LIVE

We will first consider a flat, expanding Friedmann universe filled with ideal gas and LIVE. The density of LIVE is represented by a cosmological constant. In this section, we will assume that there is no exchange of energy between the LIVE and the gas. The relativistic equation of continuity for the gas may be written:

dρ/dt + 3(ȧ/a)(ρ + p) = 0,  (1)

where a is the scale factor normalized so that a(t0) = 1 (t0 is the cosmic time at present), i.e., it represents the ratio of the cosmic distance between two galaxy clusters at an arbitrary point of time relative to their present distance. The monoatomic gas inside a comoving surface has N identical atoms, each with rest mass m0. Hence, in the limit that the temperature and the pressure vanish, i.e., in the dust limit, the mass of the gas is M0 = Nm0. When the temperature and pressure of the gas are T and p, the internal energy of the gas is:
U = (3/2) N kB T,  (2)

where kB is Boltzmann's constant, and the mass of the gas is:

M = M0 + (3/2) N kB T,  (3)

using units where the speed of light c = 1. Due to the positive pressure of the gas, the gas will perform work on the environment at a comoving surface; so, the mass of the gas inside the surface will decrease during the expansion, but the dust contribution, M0, is constant. Hence, the pressure is expected to decrease faster than ∝ a⁻³ during the expansion. The density of the gas is:

ρ = M/a³.  (4)

Differentiation with respect to time gives:

dρ/dt = Ṁ/a³ − 3(ȧ/a)(M/a³).  (5)

Inserting the last two equations into the equation of continuity, Equation (1), the two terms containing M0 cancel each other, and we arrive at the remarkably simple equation:

dp/dt + 5(ȧ/a) p = 0.  (6)

Integration with p(t0) = p0 gives:

p = p0 a⁻⁵.  (7)

Hence, the pressure decreases faster than ∝ a⁻³, as expected. The mass of the gas inside a comoving surface decreases as:

M = M0 + (3/2) N kB T0 a⁻²,  (8)

and the temperature of the gas decreases as:

T = T0 a⁻²,  (9)

with T0 = p0/NkB. Hence, for the gas in this universe model, there is a temperature-redshift relation:
T = T0 (1 + z)².  (10)

Substituting Equation (7) in Equation (4) shows the density of the gas decreases with the scale factor as:

ρ = M0 a⁻³ + (3/2) p0 a⁻⁵.  (11)

The Friedmann equation,

(ȧ/a)² = (8πG/3) ρ + Λ/3,  (12)

then leads to:

ȧ² = H0² (Ωm a⁻¹ + Ωp a⁻³ + ΩΛ a²),  (13)

where H0 is the present value of the Hubble parameter, and we have introduced the constants Ωm = 8πG M0/(3H0²), Ωp = 4πG p0/H0² and ΩΛ = Λ/(3H0²). Note that we have by definition Ωm + Ωp + ΩΛ = 1. Even without LIVE, this leads to an elliptic integral; so, we have solved this equation numerically. The results in the case ΩΛ = 0 are shown in Figure 1.
Figure 1. Variation of the scale factor with time for the ideal gas universe model of Section 2, with vanishing cosmological constant, for different values of Ωm and Ωp = 1 − Ωm: Ωm = 0.1 (full line), Ωm = 0.5 (dotted line), Ωm = 0.9 (dashed line) and Ωm = 1 (dot-dashed line).
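Figure 1 can be reproduced with a few lines of numerics. The following is a minimal sketch of the integration of Equation (13) with ΩΛ = 0 (our own tolerances and small-a initial condition, assuming SciPy is available):

```python
import numpy as np
from scipy.integrate import solve_ivp

H0 = 1.0  # time measured in units of the Hubble time 1/H0

def a_dot(t, a, Om, Op, OL):
    # Eq. (13): (da/dt)^2 = H0^2 (Om/a + Op/a^3 + OL a^2)
    return H0 * np.sqrt(Om / a + Op / a**3 + OL * a**2)

for Om in (0.1, 0.5, 1.0):
    Op = 1.0 - Om  # vanishing cosmological constant: Om + Op = 1
    sol = solve_ivp(a_dot, (1e-8, 2.0), [1e-4], args=(Om, Op, 0.0),
                    dense_output=True, rtol=1e-9)
    print(f"Om={Om}: a(t=1/H0) = {sol.sol(1.0)[0]:.4f}")
```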
The entropy of the gas is given by the Sackur-Tetrode equation [8], which here takes the form:

S = K1 ln(a³ T^(3/2)) + K2,  (14)

where K1 and K2 are constants. It follows from this equation and Equation (9) that the entropy of a cosmic ideal gas in an expanding homogeneous universe is constant, independently of the other contents of the universe, as long as there is no exchange of energy between the gas and the other ingredients of the universe. The way that the scale factor depends upon time does not matter in this respect. The constancy of the entropy of the ideal gas is not surprising. In a homogeneous universe, there are no global temperature differences and, hence, no heat flow for a non-interacting gas. It follows that the expansion is adiabatic and that the entropy of the cosmic gas is constant.
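The constancy claimed here follows in one line from Equations (9) and (14):

```latex
S \;=\; K_1 \ln\!\big(a^{3}T^{3/2}\big) + K_2 ,
\qquad T = T_0\,a^{-2}
\;\;\Rightarrow\;\;
a^{3}T^{3/2} \;=\; a^{3}\big(T_0\,a^{-2}\big)^{3/2} \;=\; T_0^{3/2},
```

which is independent of a, so S is constant for any expansion history a(t).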
MEASURES OF GRAVITATIONAL ENTROPY

Since we do not have a theory of quantum gravity, the entropy of the gravitational field cannot be calculated by counting its microstates. In an effort to incorporate the tendency of (attractive) gravity to produce inhomogeneities—and in the most extreme cases, black holes—into a generalized second law of thermodynamics, Penrose [9] made some suggestions about how to define a quantity representing the entropy of a gravitational field. Such a quantity should vanish in the case of a homogeneous field and obtain a maximal value given by the Bekenstein-Hawking entropy for the field of a black hole. In this connection, Penrose formulated what is called the Weyl curvature conjecture, saying that the Weyl curvature should be small near the initial singularity of the Universe. Wainwright and Anderson [10] interpreted this hypothesis in terms of the ratio of the Weyl and Ricci curvature invariants:

P²WA = (C_abcd C^abcd)/(R_ab R^ab).  (15)

According to their formulation of the Weyl curvature hypothesis, PWA should vanish at the initial singularity of the Universe. The physical content of the hypothesis is that the initial state of the Universe is homogeneous and isotropic. Hence, the hypothesis need not be interpreted as a hypothesis about gravitational entropy. Neither need it refer to an unphysical initial
singularity, but should instead be concerned with an initial state, say, at the Planck time. However, it was shown by Grøn and Hervik [11,12] that the quantity PWA, essentially due to its local nature, diverges at the initial singularity, both in the case of the homogeneous, but anisotropic, Bianchi type I universe models and the isotropic, but inhomogeneous, LTB universe models. This means that there are large anisotropies and inhomogeneities near the initial singularities in these universe models. Hence, the classical behavior is not in agreement with the Weyl curvature conjecture, as formulated in terms of Equation (15). If the entropy represents a large number of gravitational microstates, it may be more properly represented by a non-local quantity proportional to PWA. Such a quantity may remain finite even where PWA diverges. Grøn and Hervik [11,12] therefore considered the quantity:

(16)

Here, kS is a constant, gii are the spatial components of the metric and V is the invariant volume corresponding to a unit coordinate volume. It was shown in [11,12] that SG1 behaves in accordance with the Weyl curvature conjecture in the LTB models. However, SG1 diverges in the Schwarzschild spacetime, and so, it cannot reproduce the Bekenstein-Hawking entropy of a Schwarzschild black hole.

Rudjord, Grøn and Hervik [13] therefore proposed a different measure of gravitational entropy. Considering the spacetime of a black hole, they defined:

(17)

where P is a quantity proportional to the Weyl curvature invariant. However, it cannot be given by Equation (15), since the Ricci curvature invariant vanishes in the Schwarzschild spacetime. Therefore, in [13], this expression was replaced by:

P² = (C_abcd C^abcd)/(R_abcd R^abcd),  (18)

where the denominator is the Kretschmann curvature scalar. They proved that P² is well behaved in the spacetime outside the most general Kerr-Newman black hole and
the isotropic and homogeneous Friedmann universe models. The entropy of a black hole is proportional to the area of the event horizon. Hence, the entropy can be written as a surface integral over the horizon, σ:

S = kS ∮σ P dA,  (19)

and the constant, kS, is determined by demanding that the formula reproduces the Bekenstein-Hawking result for a Schwarzschild black hole, which results in kS = kc³/(4Għ), where k is Boltzmann's constant. This means that according to this prescription, all of the entropy of a Schwarzschild black hole is due to the inhomogeneity of the gravitational field. An expression for the entropy density can be found by rewriting Equation (19) as a volume integral by means of the divergence theorem:

(20)

Note the difference between the gravitational entropy in Equation (16) and the one provided by Equations (17)–(19).
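A consistency check one can perform with the standard Schwarzschild curvature invariants (not carried out explicitly in the text): in vacuum the Ricci tensor vanishes, so the Weyl and Riemann tensors coincide and

```latex
C_{abcd}C^{abcd} \;=\; R_{abcd}R^{abcd} \;=\; \frac{48\,G^{2}M^{2}}{c^{4}r^{6}}
\;\;\Rightarrow\;\; P = 1,
\qquad
S \;=\; k_S \oint_{\sigma} P\,dA \;=\; k_S\,A_{\mathrm{hor}} \;=\; \frac{k\,c^{3}A_{\mathrm{hor}}}{4G\hbar},
```

which is precisely the Bekenstein–Hawking entropy, as the normalization of kS was chosen to guarantee.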
GRAVITATIONAL ENTROPY IN THE PLANE-SYMMETRIC BIANCHI TYPE I UNIVERSE

The line element of the LIVE-dominated, plane symmetric Bianchi type I universe is [14]:

(21)

where:

(22)

The comoving volume is:

(23)

For this spacetime, the Ricci scalar, the Weyl scalar and the Kretschmann curvature scalar are, respectively:

(24)

Hence, the quantities, PWA and P, are, respectively:

(25)
This gives
(26)

The time evolutions of SG1 and SG2 during the inflationary era are shown in Figure 2. We see that if SG1 or SG2 were a dominating form of entropy, then the entropy of the Universe decreased during most of the inflationary era.
Figure 2. Time variation of the candidate gravitational entropies, SG1 (left panel) and SG2 (right panel), in a comoving volume during the beginning of the inflationary era.
GRAVITATIONAL ENTROPY IN LTB MODELS WITH LIVE

We next turn our attention to inhomogeneous cosmological models described by the Lemaitre-Tolman-Bondi (LTB) line element:

(27)

where dΩ² = dθ² + sin²θ dφ². We follow [15] and consider a spherically symmetric energy-momentum tensor:

(28)

Here, ρm(r,t) is the density of pressureless matter, and the dark energy is represented by the density, ρDE(r), and the radial and transverse pressures, pr(r) and p⊥(r). In [15], it was shown that the Einstein equations imply:

(29)

with:
(30)

evaluated at the present epoch. We will consider models with ΩD(r) = 1 in order to mimic inflation. In this case,

(31)

The Einstein equations then give the following simple equation for R:

(32)

with solutions of the form:

(33)

The function appearing in the radial part of the metric also follows from the Einstein equations. The functions, R0(r) and H0(r), are arbitrary. Since the measure of entropy should reproduce the Bekenstein-Hawking entropy of a Schwarzschild black hole, we choose to show results for the entropy, SG2, defined by Equations (17)–(19). Calculating the Weyl and Kretschmann invariants analytically is a cumbersome task, and the resulting expressions are not very illuminating, so this was done numerically from the components of the Riemann tensor. In Figure 3, we have plotted the time evolution of the entropy within spheres of different comoving radii. We have chosen H0(r) = Hc cosh(r/r0), R0(r) = Rc sinh(r/r0), where Hc, Rc and r0 are constants. If we choose instead H0(r) = Hc tanh(r/r0), R0(r) = Rc tanh(r/r0), we get the results shown in Figure 4 for the evolution of the entropy. In both cases, we see that the gravitational entropy, SG2, decreases with time during the inflationary era in this model, as in the case of the Bianchi type I model of the previous section.
Figure 3. The variation with time, relative to the initial value, of the total gravitational entropy within spheres of varying radius. The black curve corresponds to r = 2r0, the red curve to r = 3r0 and the blue curve to r = 4r0. Here, H0(r) = Hc cosh(r/r0), R0(r) = Rc sinh(r/r0).
Figure 4. The variation with time, relative to the initial value, of the total gravitational entropy within spheres of varying radius. The black curve corresponds to r = 2r0, the red curve to r = 3r0 and the blue curve to r = 4r0. Here, H0(r) = Hc tanh(r/r0), R0(r) = Rc tanh(r/r0).
A MODEL OF ENTROPY GENERATION AT THE END OF THE INFLATIONARY ERA

A. Bonanno and M. Reuter [5,6,7] have investigated a new mechanism of entropy generation during the inflationary era. The physics behind their model is associated with the renormalization group and results in possible variations of the density of vacuum energy as represented by a time-varying cosmological "constant", Λ(t), and a varying gravitational "constant", G(t). According to their model, the time variation of G is much smaller than that of Λ, and it will be neglected here. Additionally, in the present work, we consider only the transition at the end of the inflationary era. Following Bonanno and Reuter [7], the Friedmann equation and the equation of continuity take the form:

(34)

(35)

where ρ is the density of radiation, and the time variation of the density of the vacuum energy is given in terms of the Hubble parameter by:

(36)

where HT is a constant. From Equations (34) and (35), it follows that:

(37)

Hence, the decay of the vacuum energy acts as a source of radiation energy. From Equation (34), it follows that:

(38)

and from Equation (36), we have:
(39)

Taking time derivatives of Equations (36) and (37) and combining with Equation (38), we obtain:

(40)

which can readily be integrated once, with respect to time, to give:
(41)

where C is a constant of integration. From this result and Equation (37), we get:

(42)

and from Equation (39), we find:

(43)

Therefore, to satisfy the condition (38), we must choose C = HT². Then, Equation (41) becomes:

(44)

where:

(45)

We must have HT sufficiently small to have real solutions, and in that case, H1 > H2 > 0. Furthermore, since the vacuum energy density is positive, the Hubble parameter must be in the interval H2 < H < H1.

THE MAXIMAL ENTROPY OF THE UNIVERSE

Several bounds on the maximal entropy of the Universe have been proposed. Firstly, the Bekenstein bound [17] implies that the maximal entropy inside a comoving surface with standard coordinate radius is:

(55)

For a cosmic fluid with equation of state p = wρ, the density changes during the expansion according to ρ = a⁻³⁽¹⁺ʷ⁾ ρ0, where ρ0 is the present density. Hence, the Bekenstein bound for the cosmic entropy is proportional to a¹⁻³ʷ. In the radiation-dominated phase that lasted for about
the first 50,000 years, the dominating cosmic fluid, i.e., the radiation, had w = 1/3. Hence, the Bekenstein maximum of entropy was constant during the radiation-dominated era. For cosmic matter with w < 1/3, the bound increases during the expansion, as it does in the matter-dominated era. Secondly, the Holographic bound [18,19] may be formulated by asserting that for a given volume, V, the state of maximal entropy is the one of a black hole filling V, and this maximum entropy is proportional to the area of the surface bounding V. Applied to the volume inside the cosmological horizon in a FRW universe, this gives the entropy bound:
(56)

Since the Universe is becoming more and more dominated by dark energy, possibly in the form of LIVE, the holographic bound (56) increases exponentially fast in the future. Several versions of this bound have been discussed by Custadio and Horvath [20]. Frautschi [21] defined the maximal entropy inside the causal horizon of a universe as the entropy that is produced if all matter inside the horizon collapses into a single black hole. For an expanding universe, this leads to:

(57)

For a cosmic fluid with equation of state p = wρ, the time dependence of this bound follows from the evolution of the density. Hence, in an era dominated by cold matter, this upper bound on the entropy has an approximately constant value. The bound decreased in the radiation-dominated era, and it increases in the present and future dark energy-dominated era. However, this bound is ill defined for a universe approaching exponential expansion, a ∝ exp(H1t). As applied to our description of the transition of vacuum energy to radiation at the end of the inflationary era, these expressions for the maximal entropy of the Universe give time variations of the maximal entropy as shown in Figure 12. In Figure 13, we show the radiation entropy
contained within the event horizon during this period. One sees that there is a large gap between the radiation entropy and all the entropy bounds, and that the former is minuscule compared with the latter.
Figure 12. The Bekenstein (full line) and holographic (dashed line) upper bounds on the entropy of the universe as functions of time in the model of Section 6. The entropy is plotted in units of Boltzmann's constant and has also been divided by a factor, C = 1.0295×10¹²².
Figure 13. The radiation entropy contained within the event horizon as a function of time in the model of Section 6. Note that it is minuscule compared to the upper bounds shown in Figure 12.
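For the reader's convenience, the two bounds referred to in Equations (55) and (56) take the standard forms (quoted here; the a-scaling is the argument used in the text above):

```latex
S \;\le\; \frac{2\pi k_B E R}{\hbar c} \;\propto\; \big(\rho\,a^{3}\big)\,a \;\propto\; a^{\,1-3w},
\qquad
S \;\le\; \frac{k_B\,c^{3} A}{4 G\hbar},
```

so the Bekenstein bound is constant for radiation (w = 1/3) and grows as a for dust (w = 0), while the holographic bound tracks the horizon area A.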
ENTROPY GAP AND THE ARROW OF TIME

P. C. W. Davies has summarized some main concepts related to the term "the arrow of time" in his Whitrow lecture [22]. Already, in 1854, Helmholtz introduced the term "the heat death of the universe" as a consequence of the second law of thermodynamics, which had recently been formulated. It says that the entropy of a closed system can never decrease. Since the Universe may be considered to be the supreme closed system, it follows that the entropy of the Universe cannot decrease. The final state is one of thermodynamic equilibrium, where nothing of the thermal energy in the Universe can be utilized to perform work. This is the heat death of the Universe. On the other hand, the entropy of the Universe must have been smaller earlier in the history of the Universe than it is today. However, measurements of the temperature variations in the cosmic microwave background radiation show that the radiation was in a state of nearly thermal equilibrium with maximal thermal entropy, already 400,000 years after the Big Bang. If the Universe contained LIVE, cold matter in the form of dust with practically zero temperature, and radiation, the dominating contribution to the thermal entropy of the Universe would come from the radiation, and the Universe was then in a state of maximal entropy at early times.

In this paper, we have considered two contributions to the thermal entropy of the Universe: an ideal gas and radiation. According to Equation (9), the temperature of the ideal gas depends on the scale factor as T ∝ a⁻². The temperature of the radiation, on the other hand, varies with the scale factor as T ∝ a⁻¹. Hence, even if the gas and the radiation had the same temperature at some early point in time, the expansion of the Universe would force the temperatures of the gas and the radiation to evolve away from each other. There could then exist a cosmic heat engine acting between the gas and the radiation, producing useful work. Then, the thermal entropy of the Universe would be a decreasing function of time. If there is no other form of entropy, this would conflict with the second law of thermodynamics. Hence, there must exist other types of entropy than the thermal one, and the sum of the entropies must have been much less than the maximal value at early periods in the history of the Universe.
A classic article discussing the concept of maximal entropy in an expanding universe, and the significance of the gap between the actual entropy of the Universe and the maximal entropy for the cosmic arrow of time, is that of Frautschi [21]. He introduced the concept of a causal region and defined its maximal entropy as the entropy of a black hole made of the mass in the region. He then calculated the time evolution of the maximal entropy and the actual entropy in an expanding universe and showed how an entropy gap opens up between the maximal entropy and the actual entropy. This has also been discussed by Penrose [9,23], who has estimated the gap quantitatively. He found that the maximum possible value of the entropy of the Universe at a given point in time can be taken to be the entropy of a black hole made up of the mass inside the cosmic event horizon at that time, which is of the order of 10¹²³ in units of Boltzmann's constant at present. On the other hand, the thermal entropy of the radiation inside the horizon is smaller than this by many orders of magnitude. Hence, there is a large entropy gap between the actual entropy of the Universe and the maximum possible entropy. For a gravitating system, homogeneity means low entropy, and inhomogeneity with the matter collected in black holes means large entropy. The large entropy gap means that the Universe is in a very homogeneous state compared to a state where most of the matter is collected in black holes. This requires an explanation. The most probable reason, according to our present understanding, is to be found in the inflationary era at the beginning of the history of the Universe, lasting for about 10⁻³³ s. In this era, gravity was repulsive, and the Universe evolved towards homogeneity. That is how an immensely large entropy gap opened up. At the end of the inflationary era, gravity became attractive, and the further entropic evolution of the Universe was to try to close the entropy gap. This gives a direction for the evolution of the Universe: an arrow of time.

More recently, Aquilano, Castagnino and Eiroa [24] have discussed the future evolution of the entropy gap due to conversion of gravitational and nuclear energy into radiation. An important process in this connection is the production of radiation in the photospheres of stars.
CONCLUSIONS

We have investigated the generation of entropy in different scenarios with matter, radiation and vacuum energy. Three types of entropy have been considered: thermal entropy related to the internal energy of a gas, gravitational entropy related to the inhomogeneity of a gravitational field, as represented by the Weyl curvature tensor, and horizon entropy, which originated from the relativistic theory of the thermodynamics of black holes. We have considered at least one example of each type. There is no thermal entropy associated with dust, but a monoatomic ideal gas has a thermal entropy given by the Sackur-Tetrode equation. As shown in Section 2, the thermal entropy of a non-interacting ideal gas in a comoving volume remains constant during the expansion of the Universe.

The time evolution of the gravitational entropy of the Weyl type has been calculated for a plane-symmetric Bianchi type I universe model and two examples of the class of LTB universe models. In both cases, we found that the gravitational entropy in these universes was decreasing during the expansion. This is as expected. The Weyl entropy is expected to be of relevance mostly in connection with the increasing inhomogeneity of the mass distribution on a more local scale [25].

Finally, we have given a further development of a universe model presented by Bonanno and Reuter [5,6,7], which allows for the transition of vacuum energy to radiation energy. We have calculated the time evolution of the Hubble parameter at the end of the inflationary era. During this brief period, the Hubble parameter changes from a nearly constant high value to a much lower value, as shown in Figure 5. The evolution of the density of the vacuum energy, shown in Figure 6, decreases in a similar way as the Hubble parameter, which is to be expected, since a vacuum dominated universe has a constant Hubble parameter and a constant density of vacuum energy. The variation with time of the radiation energy density is shown in Figure 7. First, it increases, due to a rapid transition of vacuum energy to radiation; then, it decreases, due to the expansion of the Universe and the slowing down of the vacuum energy-radiation transition. Figure 9 and Figure 10 show that although the entropy density of the radiation decreases with time due to the expansion, the thermal entropy in a comoving volume increases, and Figure 11 shows that there is a steady entropy production. This is due to the conversion of vacuum energy to radiation energy. Figure 12 shows the time evolution of the Bekenstein and holographic upper
bounds on the entropy of the Universe at the end of the inflationary era, and Figure 13 shows the thermal radiation entropy within the horizon during this period. Note that this entropy varies in a different way from the radiation entropy in a comoving volume, since the horizon is not a comoving surface. The radiation entropy inside the horizon reaches a maximum value and, then, starts decreasing. Hence, an entropy gap opens up between the actual entropy of the Universe and the maximum possible entropy. This imposes a thermodynamic arrow of time upon the Universe.
REFERENCES
1. Davies, P.C.W. Inflation and time asymmetry in the Universe. Nature 1983, 301, 398–400.
2. Veneziano, G. Entropy bounds and string cosmology. arXiv 1999, arXiv:hep-th/9907012.
3. Page, D.N. Inflation does not explain time asymmetry. Nature 1983, 304, 39–41.
4. Davies, P.C.W. Inflation in the universe and time asymmetry. Nature 1984, 312, 524–527.
5. Bonanno, A.; Reuter, M. Cosmology with self-adjusting vacuum energy density from a renormalization group fixed point. Phys. Lett. B 2002, 527, 9–17.
6. Bonanno, A.; Reuter, M. Entropy signature of the running cosmological constant. J. Cosmol. Astropart. Phys. 2007.
7. Bonanno, A.; Reuter, M. Entropy production during asymptotically safe inflation. Entropy 2011, 13, 274–292.
8. Wallace, D. Gravity, entropy and cosmology: In search of clarity. Br. J. Philos. Sci. 2010, 61, 513–540.
9. Penrose, R. Singularities and Time Asymmetry. In General Relativity, an Einstein Centenary Survey; Hawking, S.W., Israel, W., Eds.; Cambridge University Press: New York, NY, USA, 1979; pp. 581–638.
10. Wainwright, J.; Anderson, P.J. Isotropic singularities and isotropization in a class of Bianchi type VIh cosmologies. Gen. Rel. Grav. 1984, 16, 609–624.
11. Grøn, Ø.; Hervik, S. Gravitational entropy and quantum cosmology. Class. Quant. Grav. 2001, 18, 601–618.
12. Grøn, Ø.; Hervik, S. The Weyl curvature conjecture. Int. J. Theor. Phys. Group Theory Nonlinear Opt. 2003, 10, 29–51.
13. Rudjord, Ø.; Grøn, Ø.; Hervik, S. The Weyl curvature conjecture and black hole entropy. Phys. Scr. 2008, 77, 1–7.
14. Grøn, Ø. Expansion isotropization during the inflationary era. Phys. Rev. D 1985, 32, 2522–2527.
15. Grande, J.; Perivolaropoulos, L. Generalized LTB model with inhomogeneous isotropic dark energy: Observational constraints. Phys. Rev. D 2011, 84, 023514.
16. Grøn, Ø. Entropy and gravity. Entropy 2012, 14, 2456–2477.
17. Bekenstein, J.D. Universal upper bound to entropy-to-energy ratio for bounded systems. Phys. Rev. D 1981, 23, 287–298.
18. 't Hooft, G. Dimensional reduction in quantum gravity. arXiv 1993, arXiv:gr-qc/9310026.
19. Susskind, L. The world as a hologram. J. Math. Phys. 1995, 36, 6377–6396.
20. Custadio, P.S.; Horvath, J.E. Supermassive black holes may be limited by the holographic bound. Gen. Rel. Grav. 2003, 35, 1337–1349.
21. Frautschi, S. Entropy in an expanding universe. Science 1982, 217, 593–599.
22. Davies, P.C.W. The arrow of time. Astron. Geophys. 2005, 46, 1.26–1.29.
23. Penrose, R. The Road to Reality; Jonathan Cape: London, UK, 2004.
24. Aquilano, R.; Castagnino, M.; Eiroa, E. Entropy gap and time asymmetry II. Mod. Phys. Lett. A 2000, 15, 875–882.
25. Amarzguioui, M.; Grøn, Ø. Entropy of gravitationally collapsing matter in FRW universe models. Phys. Rev. D 2005, 71, 083001.
CHAPTER 13
STATISTICAL MECHANICS AND THERMODYNAMICS OF VIRAL EVOLUTION
Barbara A. Jones 1, Justin Lessler 2, Simone Bianco 1, James H. Kaufman 1

1 Almaden Research Center, IBM, San Jose, California, United States of America
2 Department of Epidemiology, Johns Hopkins Bloomberg School of Public Health, Baltimore, Maryland, United States of America
ABSTRACT

This paper uses methods drawn from physics to study the life cycle of viruses. The paper analyzes a model of viral infection and evolution using the "grand canonical ensemble" and formalisms from statistical mechanics and thermodynamics. Using this approach we enumerate all possible genetic states of a model virus and host as a function of two independent pressures–immune response and system temperature. We prove the system has a real
Citation: Jones, B. A., Lessler, J., Bianco, S., & Kaufman, J. H. (2015). Statistical mechanics and thermodynamics of viral evolution. PLoS ONE, 10(9), e0137482. (25 pages) Copyright: © This is an open access article, free of all copyright, and may be freely reproduced, distributed, transmitted, modified, built upon, or otherwise used by anyone for any lawful purpose. The work is made available under the CC0 1.0 Universal (CC0 1.0) Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/.
thermodynamic temperature, and discover a new phase transition between a positive temperature regime of normal replication and a negative temperature “disordered” phase of the virus. We distinguish this from previous observations of a phase transition that arises as a function of mutation rate. From an evolutionary biology point of view, at steady state the viruses naturally evolve to distinct quasispecies. This paper also reveals a universal relationship that relates the order parameter (as a measure of mutational robustness) to evolvability in agreement with recent experimental and theoretical work. Given that real viruses have finite length RNA segments that encode proteins which determine virus fitness, the approach used here could be refined to apply to real biological systems, perhaps providing insight into immune escape, the emergence of novel pathogens and other results of viral evolution.
INTRODUCTION

Viruses are microscopic subcellular objects that infect cells of living organisms across all six kingdoms of life [1]. Because viruses require host cellular machinery to replicate [2], a common set of steps must occur for the reproduction of most viruses. First, the virus must enter the cell, which can occur through membrane fusion, endocytosis, or genetic injection [3]. During the replication process, tens to thousands of progeny are produced [2]. While the fidelity of the replication process varies between viruses, for most, particularly RNA viruses, the mutation rate is quite high [2]. Finally, progeny exit the cell (via budding, apoptosis, or exocytosis), in many cases killing the cell in the process [2]. The generally high levels of genetic variability created during replication lead to rapid "exploration" of genetic sequence space, allowing the virus to evade the host immune system, overcome environmental challenges such as antiviral drugs, and perhaps even adapt to new host species [4–6]. While even single cell organisms have an innate immune response, viral evolution becomes particularly important when viruses attempt to evade the adaptive immune system of humans and other vertebrates [2,7]. Successful viruses all must survive host defense mechanisms, compete to infect host cells, reproduce, and eventually pass to other hosts [2,8], though an immense variety of strategies are used to accomplish these tasks. Recent technical advances in genome sequencing have revealed the enormous genetic diversity of RNA virus populations during infection [9].
Information about mutation distributions during evolution has proven to be helpful in assessing the intricate mechanisms of viral reproduction [10,11]. Moreover, new insights into the tradeoff between mutational robustness, invariance of the phenotype to genetic insults, and evolvability, the capability of the viral species to adapt to new environments, are emerging, supported by a wealth of new data [12,13]. However, it may not always be necessary, or even advisable, to capture the full intricacies of this system in useful models of viral evolution; simplified models can still yield insight into the behavior of viral populations. For example, Alonso and Fort measured thermodynamic observables to analyze a phase transition observed in a model of RNA virus error catastrophe [14–16]. In analogy to Bose condensation they derive an order parameter to characterize two phases separated by the error catastrophe phase transition. The error catastrophe literature demonstrates the importance of mutation rate and reveals a phase transition due to information loss at large rate [14–22]. Nowak and May consider the transition from sustained infection to elimination of infection as a function of basic reproductive ratios for various mutant strains [16]. In the current work we demonstrate a way to compute the steady state solutions for such a model as a function of two independent and competing energy terms, temperature and immunity.

Statistical mechanics allows physicists to describe the macroscopic characteristics of a multi-particle system based on microscopic properties [23–25]. Given a large collection of molecules or atomic particles, it is possible to derive probability distributions for thermodynamic quantities such as system heat, energy, and entropy [23–25]. These macroscopic properties are determined by an "ensemble" of all "microstates" of the collection, along with the probabilities associated with each microstate. If a simple model of viral replication, transmission and evolution can be developed that lends itself to such analysis, it may serve as a foundation on which to develop a powerful theory to describe the general behavior of viral systems using the same key concepts used in statistical mechanics.
METHODS

In this paper we present a model of viral replication and evolution within a single host and the analytic theory required to find steady-state solutions for
this system. We then describe the steps used to solve the analytic equations and the methods used. Finally, we study the thermodynamics and statistical mechanics of the viral evolution model. We calculate thermodynamic quantities such as entropy, an order parameter, specific heat, energy, and properties of viral population dynamics such as host cell occupancy and viral load in the environment.
Viral Infection as Energy Barriers

Viruses replicate and transmit by a complex multi-step process. For an influenza virion to infect a cell and replicate, it must bind to receptors on the cellular membrane; induce endocytosis; release ribonucleoprotein (vRNP) complexes into the cytoplasm; vRNPs must be imported into the nucleus where replication can occur; and viral offspring must leave the cell through viral budding [26]. At each step there is some probability of failure, and the more fit the virus, the lower this probability [27,28]. We abstract this process as crossing two symmetric barriers, one for infecting the cell, and one for replication and exit. The probability of crossing each barrier depends on viral fitness and system "temperature", and is computed using an activated Arrhenius form [23]:

ε_i = e^(f_i/T),  (1)

where ε_i is the probability of successfully crossing the barrier, f_i is viral fitness, and T is the system temperature. At this point one can view T as a parameter that governs how discriminating the barrier is between viruses with different numbers of "matches" to some target receptor. In classical chemistry the barrier height is a function of both reactants and products, while the temperature is a property of the reactants only (viruses in this case). When there is a distribution of energies for the reactants, kBT is the average energy of the most probable distribution. In this model temperature changes the discrimination of the barriers to infection and reproduction. At low temperature viruses with a high match to the target are favored. At very high temperature, virtually all viruses of any match are able to infect and reproduce. As we will see, raising the temperature increases the fraction of viable virions within a quasispecies distribution. We will later demonstrate that temperature in this model is not only a tuning parameter, but also the thermodynamic temperature for the system, providing a distribution of
energies for the viruses, which naturally form quasispecies distributions. We will also derive the effective Boltzmann constant relating temperature and the observed energy scales [23–26].
Figure 1: Model of an Idealized Virus Life Cycle.
The barrier height (BE) is equal to the number of mismatches of virus to a target. Viruses with different numbers of genetic matches will see barriers of different height. The probability of a virus passing is based on an activated Arrhenius model.

In viral entry, a protein creates a receptor on the viral envelope that binds to complementary receptors on a target cell membrane. We do not intend to model the complex interactions between these receptors. Instead, we abstract this process by asking how well a sequence of letters (representing amino acids) in a virus's genome matches an idealized target genetic sequence. The receptor in a real virus is a sub-region of the binding protein (providing many degrees of freedom). To capture these degrees of freedom we require that a sub-region of a virus "gene" matches a complementary genetic target. Specifically, we define a target sequence 50 letters in length associated with the host cell, and each virus is assigned a genome of 100 letters (i.e., 300 bases). As with real amino acids, letters are the phenotypic representation of a codon of three underlying bases (A, C, G, and U/T). We define m as the number of matches between host and target sequences at the alignment that minimizes the total number of mismatches (but still completely overlays the target). This model is not intended to capture the complex interactions between proteins; rather, fitness is characterized by the difference between the number of matches and the length of the genome (i.e., f_i = f(m) = −(ℓ − m), where ℓ is the genome length). Once the number of matches is determined, the fitness is fixed for that virion.
Then, given Eq (1), the probability of a successful barrier crossing is:

ε_m = e^(−(ℓ − m)/T).  (2)

On each replication there is some probability of mutation in a given base, allowing the distribution of viruses to change or evolve over time.
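A short numerical illustration of Eq (2) follows (our own sketch; we measure the barrier from the 50-letter target rather than the genome length, which differs only by a constant offset that rescales all rates by a common factor):

```python
import numpy as np

TARGET_LEN = 50  # letters in the host target sequence (model value)

def epsilon(m, T, target_len=TARGET_LEN):
    """Arrhenius form of Eq (2): probability of crossing one barrier.
    Barrier height = mismatch count (target_len - m); T sets discrimination."""
    return np.exp(-(target_len - m) / T)

matches = np.arange(TARGET_LEN + 1)
for T in (2.0, 10.0, 50.0):
    eps = epsilon(matches, T)
    # low T: only near-perfect matches pass; high T: nearly everything passes
    print(f"T={T:5.1f}  eps(m=25)={eps[25]:.3e}  eps(m=50)={eps[50]:.3f}")
```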
The Viral Life Cycle

In our simplified model of viral infection and replication the system of viruses passes through three stages in discrete generations (Figure 2). Free viruses first infect cells, passing into the post-infection stage, I. Some proportion of infected cells are then "killed" by the immune system, and instantly replaced by uninfected cells, and we enter the post-immunity stage, Ĩ. Finally, viruses replicate and exit the cell, and we enter the post-reproduction/pre-infection stage, R. The system state in each stage can be described completely by two interacting sets of variables: the occupation of the host cells, and the distribution of "free" viruses in the environment. The self-consistent (steady state) solution for the virus life cycle is one in which each state remains unchanged after completing a full cycle.
Figure 2: Virus Life Cycle.
The changing states of all viruses must be computed self-consistently. We assume a fixed number of host cells is available to infect at any one time, with the infection process proceeding as follows:
At any time at most one virus can infect each cell. Each free virus successively attempts to infect the unoccupied cells, with a success probability on each attempt of ε_m. Competition continues until either all cells are occupied or all free viruses have made an attempt. With these criteria we can analytically derive the overall infection rate, Φ(N), as a function of the number of target host cells and the number of free viruses in the environment, N (see below).
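The infection rule above is straightforward to simulate. A minimal Monte Carlo sketch (ours, reading the rule as one attempt per free virus and letting all viruses share a single ε for simplicity) is useful as a sanity check on the analytic rate Φ(N):

```python
import random

def infect_once(n_cells, n_free, eps, rng=random):
    """One infection stage: each free virus attempts an unoccupied cell
    with success probability eps (Eq (2)), until all cells are occupied
    or all free viruses have made their attempt."""
    occupied = 0
    for _ in range(n_free):
        if occupied == n_cells:
            break            # all cells occupied
        if rng.random() < eps:
            occupied += 1    # this virus infected one unoccupied cell
    return occupied

random.seed(1)
mean = sum(infect_once(1000, 5000, 0.3) for _ in range(200)) / 200
print(mean)  # mean number of infected cells; compare with Phi(N)
```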
The immune responses

Vertebrate hosts defend themselves from viral infections using both innate and adaptive immune responses [21]. In our model the innate immune response can be considered to be captured by the barrier that viruses must cross to infect and replicate in cells, while we explicitly model the adaptive immune response. In an adaptive immune response, the immune system develops an increasingly strong and specific response to infecting viruses by producing cells and antibodies which recognize and respond to specific viral epitopes (i.e., short sequences of amino acids that identify the virus) [21]. Here we assume that all parts of the virus are exposed to the immune system. Furthermore, we are interested in a steady state solution where the immune system has learned to recognize the target epitopes (not the entire viral genome). In steady state a virus genome matches some part of the target genome. In analogy to adaptive response to a specific set of epitopes, we use this matching sub-region to determine the efficiency of the immune response in steady state. In particular we represent the ability of the adaptive immune system to kill infected cells as a function of the match between a virion and the target as:

(4)

where A sets the strength of the immune response, ν is the number of amino acids in the targeted epitope, and the maximum response corresponds to A = 1. In real viruses, the length of epitopes targeted by immune effector cells can vary widely, from as few as 3–4 critical amino acids for B-cell conformational epitopes, to 8–11 amino acids for T cell epitopes (for example) [29]. Here we chose an intermediate value and set ν = 6 as a typical epitope length. The results and conclusions are not highly sensitive to this choice. This abstraction is meant to model the steady state response of an adaptive antibody-mediated immunity. In this paper we explore the full range of A and m.
Viral reproduction and mutation

Viral offspring differ from their parent through mutation of individual bases during the replication process. The resulting evolution is an important component of the survival strategy for many viruses, allowing them to evade the immune system and respond to changes in their environment (e.g., the introduction of chemotherapeutic agents). When a virus reproduces,
the actual reproductive rate (number of offspring per parent) defines the "fecundity". In our model replication occurs with a fixed fecundity, and offspring have one, and only one, codon mutation per offspring. Mutation to the same amino acid is allowed. As in the Moran model, a single mutation can either reduce the maximal match length by one (Δm = −1), increase it by one (Δm = +1), or leave the maximal match length unchanged (Δm = 0) [18,19]. Consider a virus with a maximum of m0 codons matching the organism's target. We determine the maximum number of matches, m0, to the target by counting matches for each possible alignment. For a mutation to decrease m, two conditions must be true. First, the mutation must occur at a currently matched position in the matching region and must change the expressed amino acid (i.e., must not be a same-sense or silent mutation). Second, there must not be two or more alignments with the same maximal match. For a mutation to increase m, it must occur at a non-matching codon in a maximally matching alignment and result in a change to the amino acid at that position to one that matches the target. The mutation operators take the distribution of viruses that survive the immune response process, and transform it into a distribution of free viruses that exist after reproduction and mutation (Figure 2). In order to calculate the probability distribution, P(m), for the viral state after mutation, we need to calculate the general "transition probability" matrix for changes in the number of matches as a function of m. Assume a virus with a given number of initial matches, m. The probabilities that a mutation on this virus will cause a +1, −1, or 0 change in the number of matches
can be derived analytically in two limits: no (or few) matches and perfect or near-perfect matches. Depending on the degeneracy of the target, the values for these limiting cases can vary considerably and have the most variability for viruses with low numbers of matches before mutation. In the case of a highly degenerate target, that is, one with a very limited number of distinct or unique codons, the probability of a single mutation keeping the same number of matches approaches zero in the low initial match limit. Conversely, the probability of a mutation increasing the number of matches goes rapidly to one in this limit. In the opposite case of a target with no repeating codons, the results depend on whether the alphabet is much larger even than the size of the
target, or whether the target uses almost all the possible codons. If there are a large number of distinct codons possible in the viruses, beyond the number already in the target, the probability of a mutation keeping the same number of matches approaches 1 in the low-match limit, while the probability of a mutation increasing the number of matches is low for all initial numbers of matches. If almost all the available codons appear in the target, and the viruses contain essentially the same selection of codons that the target does, then the probabilities for keeping the same number of matches and increasing the number of matches by 1 become nearly equal at 1/2 each in the low match limit. The probability that a mutation decreases the number of matches starts at zero for zero initial matches, and typically rises to a value near 0.5 for a complete match with the target, independent of target degeneracy. The probability of a mutation causing no change in the perfectly matching virus is also near 0.5 regardless of target degeneracy. These general cases are shown in Figure 3. It should be noted that these results hold for matches determined by complete overlay of the target binding sites by the virus. We do not consider alignments where the target extends past the end of the virus, giving only partial overlay of the target binding sites. This alternative method would yield different limiting cases, as well as different expressions below for the mutation probabilities. For clarity, in our model there exist only v−t+1 allowed alignments, where v and t are the lengths of virus and target, respectively. (This is in contrast to the 'extended alignment' method, not implemented here, which has v+t−1 alignments.) The limiting cases shown in Figure 3 are for a virus segment exactly twice as long as the target segment (our model system).
Figure 3: Three Limiting Cases of the Effect of Viral Mutation in a Single Codon.
The transition probability (Pmut) as a function of the number of matches, m (pre-mutation). Curves labeled ‘+1’ represent the probability of a mutation increasing the number of matches between the virus and target genomes, ‘−1’ the probability of decreasing the number of matches, and ‘0’ the probability of no change. Given alphabet length a, the panels show the limiting behavior for: (a) small a, i.e., a highly degenerate target; (b) a equal to the target length plus one, i.e., medium degeneracy; (c) large a, i.e., low target degeneracy.
Using these methods, we calculate the transition probabilities, Pmut, of a viral mutation causing an increase, decrease, or stasis in the number of matching codons between the virus and the actual target genome (Δm = +1, 0, −1). We note that our particular target is at the low-degeneracy limit, and is part-way between the cases using all the available alphabet vs. those using only a small fraction of it. It thus can be viewed as fairly “challenging” to the virus. Changing the length of the target, or changing the degeneracies in the codon “alphabet” (without going into trivial regimes or limiting cases of no degeneracy), only rescales temperature without changing the overall behavior or conclusions. In future work it is straightforward to explore other cases in detail. Note also that many viruses not only mutate, but also recombine. Such viruses may infect individual cells multiple times. This model does not capture these processes. In addition, neither matches nor mismatches in this model should be interpreted as “replication error.” In contrast to the quasispecies model(s) of viral evolution used to study viral error catastrophe [14–22], evolution in this model is a function of two independent and competing energy terms.
Self-Consistent Solutions

We solve self-consistently for the probability functions of the virus in the host cells at each stage of the life cycle (infection, immune response and reproduction), the virus distribution in the environment, and the total number of viruses. The 51×51 matrix of inter-related equations for P(m), Eq (3), can be diagonalized analytically to obtain the following:
(6)
with
where the immune-memory term and all other quantities are defined as in Eqs (2–4). Eq (6) gives the probability that a cell is occupied by a virus with m matches at each stage of the life cycle (infection, immune response and reproduction). The stable solutions to Eq (6) hold for any given Pm and infection rate, which must also be solved self-consistently. The limiting case of p = 0 has no virus remaining in the cells after each cycle, so it decouples the solution for the viral occupation at infection, I(m), from that at reproduction, R(m). If R(m) is zero in the equation for the infection rate, the solution for the virus in the cells becomes trivial. We have analyzed the model represented by the solutions in Eqs (6–10) for the full range of the probability p and find that the effect of varying p, the probability that the virus remains in the cell if not cleared by the immune response, is only a slight rescaling of the temperature parameter, demonstrating universality in the solution described below. We also note that the addition of p breaks the symmetry between reproduction and infection but does not change the results. Since a value of p = 1 represents the most strongly coupled case of interaction between cells and virus, we present those results below.
Solution for the viral genetic states

Eq (6) (the solutions to Eq 3) provides the viral occupation (or load) in the cells as a function of the distribution of virus in the environment. We next solve for the steady-state distribution of virus in the environment. Imposing self-consistency on the reproduction and mutation processes, we derive the following equation:
(7)

where M is a matrix of probabilities formed from Eq (5) and:

(8)

Solving Eq (7) gives the steady-state viral probability distributions of the system. Eq (7) can be recognized as an eigenvalue equation in which every valid eigenstate Pm of the matrix M Dm must have a particular eigenvalue. It can be proven that any eigenvector solution of M Dm Pm = λm Pm has an eigenvalue λm equal to the required value, as long as the eigenvectors Pm are normalizable as probability vectors (note that Dm is expressed as a diagonal matrix).
Tripathi et al. (and references therein) modeled RNA virus evolution in a nontrivial fitness landscape [20]. However, the landscape considered here cannot simply be expressed as a sum of independent selection effects with cross terms.
Number of viruses

For each solution Pm of Eq (7), the number, N, of viruses in the environment is equal to the total probability that a cell has a virus that successfully reproduces, times the number of target host cells, c, times the fecundity defined above:

(9)
N can then be found as the solution of a pair of coupled transcendental equations, one for N as a function of the infection rate, and the other for the infection rate as a function of N. Substituting into Eq (9) from Eq (6), and using the Dm from Eq (8), we obtain:
(10a) with
(10b)

where (1 − E) is the probability of a cell not being infected in a single viral pass, given the distribution of virus in the environment, Pm. In the calculations in this paper, the number of target host cells, c, was taken to be 5 and the fecundity was 20, giving a maximum N of 100 viruses replicated from the cells into the environment. These coupled nonlinear equations were solved using Newton-Raphson methods. It can be shown that the functional form above yields one and only one solution for N for each choice of initial parameters and input viral distribution Pm.
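A sketch of this numerical step (the functions g and h below are stand-ins for Eqs (10a) and (10b), whose explicit forms are not reproduced in this excerpt):

```python
import numpy as np

def solve_coupled(g, h, x0, y0, tol=1e-10, max_iter=100):
    """Solve the coupled system x = g(y), y = h(x) by Newton-Raphson
    on F(x, y) = (x - g(y), y - h(x)), using a finite-difference
    Jacobian. Here x plays the role of N and y the infection rate."""
    z = np.array([x0, y0], dtype=float)

    def F(z):
        x, y = z
        return np.array([x - g(y), y - h(x)])

    for _ in range(max_iter):
        f = F(z)
        if np.linalg.norm(f) < tol:
            break
        # Finite-difference Jacobian of F at z
        J = np.empty((2, 2))
        eps = 1e-7
        for j in range(2):
            dz = np.zeros(2)
            dz[j] = eps
            J[:, j] = (F(z + dz) - f) / eps
        z = z - np.linalg.solve(J, f)
    return z  # (N, infection rate)
```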
Numerical solutions

There are 51 roots of the eigenvalue Eq (7), corresponding to the vector size of Pm (which is indexed by the different m, m = 0 to 50). More generally, given a genetic target of length G, there would exist G+1 roots. We employed a number of tests to determine which of the eigenstates are physical. Each eigenvector element represents a probability. The eigenvectors should not have imaginary elements, and after normalization they should not have negative elements. The Perron-Frobenius theorem states that a real square matrix with positive entries has a unique largest real eigenvalue and that the corresponding eigenvector has strictly positive components. Our matrix meets the criteria required by Perron-Frobenius, so the all-positive eigenvector corresponding to the largest eigenvalue provides the equilibrium solution.
The number of viruses corresponding to an eigenstate (Eq 10) must also be greater than or equal to zero. With these conditions, only one nontrivial physical eigenstate was found for any set of initial conditions (temperature, immunity, etc.). The trivial zero state (no virus) is always a stable solution. The numerical solutions agree with the analytically derived steady-state result.
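The eigenstate selection can be sketched as follows (assuming the 51×51 matrix M Dm has been assembled as a NumPy array; this illustrates the Perron-Frobenius criterion, and is not the authors' code):

```python
import numpy as np

def physical_eigenstate(MD):
    """Return the equilibrium distribution P(m): the eigenvector of
    the positive matrix MD with the largest real eigenvalue,
    normalized as a probability vector (Perron-Frobenius guarantees
    it can be made real and all-positive)."""
    eigvals, eigvecs = np.linalg.eig(MD)
    k = np.argmax(eigvals.real)        # largest real eigenvalue
    v = eigvecs[:, k].real
    v = v * np.sign(v.sum())           # fix the overall sign
    assert (v >= -1e-12).all(), "eigenvector is not all-positive"
    return v / v.sum()                 # normalize to probabilities
```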
RESULTS AND DISCUSSION

Evolution of the Virus

The probability of a virus having a given number of matches, m, at a specific temperature and maximum immune response, A (Eq 4), is the normalized eigenstate, Pm (Figure 5). Each Pm can be thought of as the steady-state quasispecies distribution, the peak of which represents the most “robust” virus type in the quasistates [14–16,20–22,30]. The width of each distribution reflects the accessible states and can be viewed as an indicator of evolvability or adaptive genetic diversity [4].
Figure 5: Eigenstates of the System.
The eigenstate distributions of the number of matches, m, as a function of temperature and maximum immune response, A. Each distribution, P(m), represents the quasispecies distribution (i.e., the steady-state distribution of viral genetic states). The temperatures shown here are 0.01, 0.03, 0.05, 0.1, 0.3, 0.5, 1, 2, 3, 4, 5, 10, …,
100*, 110, 120, 130, 140, 150, 200, 250, 300 (*in steps of 5 up to 100) for each immunity indicated. Low temperature is represented by the narrow distribution at high match (at far right). For all temperatures and immunities studied (with the exception of T, A = 0, see below), only one stable (non-trivial, non-zero) eigenstate was found. At very low temperature, the virus must closely match the target genome (m ~ 50). As temperature is raised, the mean number of matches of the quasistate decreases, eventually excluding the perfect match as an important component of the solution (i.e., the state de-pins from m = 50). Two distinct behaviors are observed as a function of immunity. At low immune amplitude (Figure 5A), as T increases, the mean of the distributions moves smoothly from high match to low match (m ~ 14.5). At higher immune amplitude (Figure 5B–5D), the quasistates distribution jumps from higher to lower m with increasing T. This is most pronounced in Figure 5D, where all eigenstates are found only near the higher or lower m regions. At low temperatures the virus must be well adapted to the host in order to overcome the energy barriers to infection and reproduction. At higher T, the barrier is less important (i.e., entry into the cell is thermally “activated”), allowing the viruses to more easily avoid the immune system through greater genetic variation. We call distributions with mean near the perfect match “ordered states” of the virus, and distributions with low mean (m ~ 14.5), dominated by the thermal barrier, “disordered states”. Viruses responding to either of these pressures will have characteristically different quasispecies distributions.
This behavior is reminiscent of the phase transition that occurs in the Eigen and Schuster model, but it is also different from the very large literature on viral error catastrophe [14–22]. Error catastrophe is a phase transition that occurs in a dynamic model as the mutation rate is increased past a critical rate. This is distinct from the phase transition observed above in the current steady-state model, which is a consequence of two competing energy terms. It is also distinct from the dynamic transition discussed by Nowak and May as a function of basic reproductive ratios for various mutant strains [16]. In our model, even for increasing immunity in the disordered phase, the viral population does not collapse.
Figure 6: Order Parameter.
The order parameter, Menv, as determined by sampling virus in the environment to measure the average fraction of matching codons, as a function of temperature and maximum immune response, A. The order parameter measured for virus occupying the cells yields almost exactly the same result.

Figure 7 shows the occupancy of the cells after infection and immune response (before virus reproduction). One can view the occupancy as a quantity bounded between zero and 1. With zero immunity, A = 0, the cell occupancy is 1.0 for all T. As immunity is raised, the occupancy decreases (approximately linearly in A) until reaching the phase boundary separating the regime of normal replication from the disordered phase. At high temperature and immune response the virus is in the disordered phase and cell occupancy plateaus at
a low value. This regime corresponds to infections that never completely clear, but have low occupancy, low match, and evoke a low immune response. At low temperature the virus never enters the disordered phase, and cell occupancy decreases linearly with increasing A, eventually falling to zero. The region of phase space with zero virus (viruses that clear in steady state) appears small, but note that the temperature axis extends only up to the maximum mismatch energy (50) scaled by the effective Boltzmann constant. Likewise, the region of zero virus is bounded by the maximum immune response, A = 0.94. In the regime of large A and low temperature, one would expect a dynamic process with a basic reproductive ratio below unity.
Figure 7: Occupancy of Cells.
The occupancy of the cells, derived from the steady-state solution (Eq 6), is shown as a function of temperature and maximum immune response, A. The same phase transition observed in Figure 6 is evident here. The discontinuity observed in Figs 6 and 7 is also evident in the order parameter measured for virus occupying the cells. These discontinuities suggest one or more phase transitions.
Thermodynamics and Statistical Mechanics

To understand the possible phase transitions (Figs 6 and 7) we now study the thermodynamics of the system. To do so we must first define a temperature. So far, we have used the parameter T as an “effective” temperature. At this point we go further and posit that T is in fact the natural temperature of the system.
Systems at finite “natural” temperature do not stay in one equilibrium microstate. Rather, they sample all accessible states with a probability based on the Boltzmann distribution. We will estimate the effective Boltzmann constant and test the degree to which T acts as a real temperature below. In order to determine the correct statistical thermodynamic ensemble of our system, we must identify the constant thermodynamic variables. In our model, the total number of cells, the size of the genetic alphabet, and the lengths of the virus and the target genomes are all constant. To do thermodynamics, we need conservation of energy, and for this we need to define the energy such that at T = 0 the system enters a zero-energy “ground state”. In our model, due to the Arrhenius form with temperature in the denominator of the exponential, at T = 0 the probabilities of infection and reproduction become delta functions at the maximum number of matches. Only viruses with a perfect match will successfully reproduce. We thus assign an energy E proportional to the number of mismatches, 50 − m. With this definition, at T = 0 only the E = 0 state of the virus will be present. Any multiple of E would also serve. A Boltzmann constant must relate the energy scale to the temperature scale: the temperature in the denominator in Eq (3) is actually kBT. In general, the expectation value of the energy of a viral state with N total viruses in the environment at a given temperature and immunity is:
(12)

The expectation value of the energy vanishes on the T = 0 axis for all immunities (as required), and increases monotonically with temperature. So far we have discussed our model in terms of viruses in the cells and in the environment. It is clear that as temperature and immunity are changed, both the energy and the number of virions change. Energy and number are both conserved only if we imagine that our cells and environment are both in contact with a third reservoir or bath that includes all possible viruses in thermal and “chemical” equilibrium with the rest of the system. Chemical equilibrium governs the exchange of particles between the reservoir and the system. Classically, for particle number to have an associated chemical potential, the particle number of the system must be conserved during the internal dynamics of the system, and only able to change when the system exchanges particles with an external reservoir. This is the grand canonical ensemble
(Figure 8). This ensemble is the natural statistical ensemble for modeling any system of viruses. It ensures conservation of both number and energy. Any viruses not in cells or the environment (e.g., those eliminated by immune response) are in the reservoir, and any new viruses entering the system (e.g., mutated offspring) are drawn from the reservoir.
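As a small illustration of the energy bookkeeping in this ensemble (a sketch assuming the mismatch-energy convention above, i.e., an energy of eps per mismatched codon out of a maximum of 50 matches):

```python
import numpy as np

def expected_energy(P_m, N, eps=1.0, m_max=50):
    """Expectation value of the energy for N viruses whose match
    number m is distributed as P_m, with an assumed energy of
    eps per mismatch: E(m) = eps * (m_max - m)."""
    m = np.arange(m_max + 1)
    E_m = eps * (m_max - m)     # zero-energy ground state at m = 50
    return N * np.dot(P_m, E_m)
```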
Figure 8: The Grand Canonical Ensemble for a System of Viruses.
The three thermodynamic elements of the system are shown. The “Reservoir” of all possible viruses is usually referred to as the “thermal bath” [23]. In this case the bath of possible viral sequences is very large (effectively infinite). The environment is populated from the reservoir with a distribution based on temperature and immunity. In the infection phase, viruses that successfully infect cells are drawn from the environment. Virus that fails to infect is returned to the reservoir. Immunity may remove virus (from the cells back to the reservoir), and reproduction draws new offspring from the reservoir and repopulates the environment (emptying the cells). The double arrows indicate population from and return to the reservoir. The curved arrows show the virus life cycle. Consider an initial (fully occupied) state of the reservoir with no viruses in the cells or the environment. In this state the bath includes enough copies of all possible virus sequences to populate any possible system state. That is, for each of the 26^100 possible viral sequences there must be a number of copies equal to the fecundity times the number of cells. Reproduction is then a process of drawing new viruses from the theoretical reservoir, constrained by the genetics of the parent viruses and the mutation rules.
quantity of interest. For example, given the expectation value of the energy, one can derive the heat capacity, C(T).
Plotting robustness against evolvability, we find that all of the data collapse onto a single universal curve. The evolvability is lowest for values of T and A that lead to quasispecies where m robust closely matches the target, as well as for quasispecies where m robust has almost no matches. The curve has a maximum at m robust/50 = 0.54. The largest evolvability corresponds to quasistates near the phase transition, where the curve breaks apart.
Figure 10: Robustness vs. Evolvability.
The robustness (m robust/50) as a function of “evolvability”, for each quasispecies, at all studied temperatures and immunities, A (red points). Here, m robust is defined as the mean number of matches
for each quasispecies distribution P(m). For a symmetric distribution this corresponds to the most probable m. The curve is nearly universal, breaking apart only near the phase transition. The colored segments and the green arrows indicate the trajectories as a function of increasing immunity (for the temperatures represented). In Figure 10 the segment in yellow represents the trajectory along the universal curve where immunity, A, is increased at fixed T (T = 10 degrees). Counterintuitively, as immune pressure is increased, evolvability decreases and m robust increases. This behavior provides an explanation for the phase transition. In this model viruses must survive two types of pressure. Low temperature selects for phenotypes best adapted to infect the host. Immune response puts pressure on phenotypes that most closely match the target. The direction in which quasistate distributions shift in response to these combined pressures depends on the relative steepness of each energy term as a function of T and A. In the example at T = 10 degrees, with increasing immunity the most robust virus shifts to even better match the target, thus lowering temperature-driven barriers to reproduction. While this leads to a slightly higher immune pressure, the immune response function has nearly saturated at these values of m, so the virus gains little from lowering the match to avoid the immune response. As immune pressure is increased still further, there comes a tipping point where increasing visibility to the immune response becomes too much for the virus, and there is a jump in population characteristics (the phase transition) favoring a much lower match to the target. In general, for all of the states in Figure 10, all trajectories move away from the phase transition observed in Figs 5–8 (as indicated by the arrows in Figure 10). This behavior is further demonstrated by the shift in eigenstates as a function of immunity and temperature. The relationship between viral robustness and evolvability explored in Figure 10 deserves to be understood more in depth. Viruses, especially RNA viruses, use a number of strategies to preserve genetic information during replication. Neutral (synonymous) mutations, large population sizes, coinfections and molecular chaperones are just a few of these mechanisms. On the other hand, adaptation through new mutations to harsh environments is paramount for the evolution of the virus. The apparently antithetic nature of these two necessary mechanisms implies the need for a tradeoff between them. Thus, a relationship like the one obtained in Figure 10 may be necessary for the evolution of the virus. The observation that evolvability is at a minimum when robustness is very low or very high, and at a maximum in between, is consistent with previous work: Stern et al. studied theoretically and experimentally the relationship between evolvability and robustness and observed this proposed universal behavior in polio virus [13].
Thermodynamic Temperature

All of the analyses discussed above relate thermodynamic variables to the strength of the immune system and an effective temperature. The question remains: how does our effective temperature relate to a real thermodynamic temperature? To determine this relationship, we calculate how the (genetic) states of the virus are distributed in energy:

(15)

where Ω(E) is the number of accessible states at energy E. The accessible states represent the entire cohort of N viruses. In the previous sections we calculated the equilibrium viral state and its properties as a function of effective temperature (T) and immune strength (A). At a given effective T and A, each state has a well-defined number of viruses, N, and a probability distribution, Pm, representing the number of genetic matches (and mismatches) between the virus and target. While N and Pm are sufficient to calculate average properties (e.g., average energy), in order to calculate the thermodynamic temperature one must enumerate the complete set of realizations of all systems with N viruses and probability of match distributed as Pm. In order to calculate Ω(E), we need to do a careful counting of states as a function of energy. We transform the probability distribution as a function of matches m, Pm, into a probability distribution as a function of energy E, P(E), using the definition of energy in Eq (12). Note that contributions to the probability of a virus at a given energy can come from several different quasistates. With the determination of P(E), we can count the number of accessible states at each energy as:
(16)

with

(17)
where Ωo(m) is the number of distinguishable configurations of the codons for a virus with m matches. This very large number depends on the number of matches, the length of the virus, the target genome length, the size of the alphabet, and the number of codons used (and not used) in the target. In addition, to be accessible, the states must be connected by permissible mutations. In practice this limits Ωo(m) from reaching the maximal value obtained by permutation alone. We have computed Ωo(m) numerically and find ln Ωo(m) ~ 47 for all m, given our definition of genomes, codon alphabet, and mutations. In thermodynamics, the formal relation between entropy and number of states is:

S = kB ln Ω   (18)

which is consistent with the entropy expressed in terms of probabilities, −Σ p ln p. We show below our calculated thermodynamic temperature as a function of the temperature parameter in our model, T model. From Eq (15), the effective kB T thermo is the inverse slope derived from a plot of ln Ω(E) vs. E. For T model less than the critical temperature (Figs 5 and 6), the system is in a regime of normal replication. In this phase, Figure 11 demonstrates that the thermodynamic temperature is linearly related to T model. The constant of proportionality is the effective Boltzmann constant. Observe that the temperature scale set by the entropy and the number of states is tied to the genetic properties of the virus/host target pair. A different type of virus with a protein receptor of different length or different degeneracy, or a different target (host) receptor, would change the entropy and the energy and, therefore, the corresponding temperature scale. Lowering temperature can cause some virions to fail to infect any cell (in a particular host) that might otherwise have been able to infect a cell. This is analogous to changing immunity, which can cause some virions to die which may not have otherwise died. Changing temperature thus changes the effective fitness landscape seen by the virus.
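The extraction of the thermodynamic temperature can be sketched as follows (assuming ln Ω(E) has already been tabulated on a grid of energies, e.g., via Eqs (16–17)):

```python
import numpy as np

def thermo_temperature(E, lnOmega, kB=1.0):
    """Effective thermodynamic temperature from 1/T = dS/dE with
    S = kB * ln(Omega): fit a line to ln(Omega) vs. E and invert
    the slope. A negative slope yields a negative temperature."""
    slope = np.polyfit(E, lnOmega, 1)[0]  # d ln(Omega) / dE
    return 1.0 / (kB * slope)
```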
Figure 11: Testing for a Thermodynamic Temperature.
T thermo vs. T model. For T model below the phase transition, the relationship is linear with slope kB. From Eq (15), the effective kB T thermo is the inverse slope derived from a plot of ln Ω(E) vs. E. Above the phase transition a negative temperature is observed, as expected. In the inset we plot the slope of T thermo vs. T model for the values of maximum immune response, A, shown in the color legend, and observe that they all fall on a single line. This suggests that the effective kB decreases linearly with increasing immune amplitude, A, and shows that immune strength rescales temperature.

Fitness of a quasispecies distribution is affected by temperature, immunity, and also the number of possible states at each match. There are many more virus configurations at low and intermediate match than at perfect match (which shapes the steady quasispecies distributions). As illustrated in Figure 5A, with near-zero immune response (for example), changing temperature causes the quasispecies distribution to shift. The effective Boltzmann constant also depends on the immune strength, A. In the inset to Figure 11, we plot the slope of the thermodynamic temperature, T thermo, vs. T model for the immunities shown in the color legend, and observe that they all fall on a single line. This suggests that the effective kB decreases linearly with increasing maximum immune response, A, and shows that immune strength rescales temperature in the regime of normal replication. This rescaling is approximately linear, with kB = −5.5A + 7.2, where A is the maximum immune
response. We note with interest that for A = 1 we have kB ~ 2, implying the thermodynamic and model temperature scales are not far apart. At the critical temperature, Tc, there is a phase transition, and the system switches from the regime of normal replication to a disordered phase for T > Tc. For reference, at high temperature and high immunity the order parameter approaches its disordered value. The thermodynamic temperature is negative in the disordered phase. This negative temperature reflects the fact that at high T model there is less advantage to matching the target and a survival penalty for eigenstates with high matches. Although it is possible to increase T model to arbitrarily large values, the number of mismatches can never exceed the length of the target genome, and the degeneracy of a state with maximal mismatch is bounded. In classic textbook examples [23], Ω(E) is a rapidly increasing function of E. Here, by contrast, ln Ω(E) has a maximum near the phase transition and then decreases. This occurs because, as temperature increases past Tc, there are actually fewer accessible states in the disordered phase. This gives a negative slope of ln Ω(E) vs. E and, therefore, a negative temperature at high T model. Such behavior is possible because the system has both an upper and a lower bound to the possible energies, a consequence of the finite genome lengths and the finite codon alphabet. Theoretically this should also be true of real biological viruses, but it remains to be seen if any examples exist. The current biologically inspired model provides an easy-to-understand example of why a state with negative temperature is hotter than a state at positive temperature. Looking at the total entropy of the system, as the energy increases past the critical point, entropy actually decreases because the number of possible states with no matching codons is smaller than the number of possible states at lower energy with (e.g.) one matching codon. A fully disordered state cannot use any of the codons found in the target, so it has lower entropy. In the limit of very high energy (and negative temperature) the disordered phase represents a state with a cohort of viruses, some with no codons that match the target genome. Due to mutation, the cohort must contain some offspring in the environment with some matching codons.
CONCLUSIONS

In this paper we explored a simplified model of viruses and their life cycle. Within the model, the process of viral transmission is characterized by a series of energy barriers. A virus’s ability to cross these barriers is defined by its genetic similarity to an idealized target sequence for the host. The genetic properties of the viruses evolve, through natural selection, to a steady-state distribution of genetic states best adapted to an environment at each fixed temperature and immune response. The immune response represents the host’s ability to clear a virus based on both viral genetics and host immune memory. Viral evolution, in this case, is simply an operation on the genetic code of the multiple offspring of a parent virus. The diversity of viral sequences in the extant population depends on the temperature of the system, T, and the strength of the immune response. Each steady-state solution is a quasi-state, i.e., a diverse distribution of viral sequences. The average of this distribution has a characteristic number of codons which match the target. This average match (M, the order parameter) was shown to be related to the system energy. The width of the quasi-state distributions is a measure of the diversity (and evolvability) of the extant population. We
showed that plotting robustness against evolvability collapses all data, at all temperatures and immune functions, onto a single universal curve, in agreement with previous theoretical and experimental literature [12,13]. We determined all equilibrium states of this model system, as well as the probability distribution describing the matches of those viruses as a function of temperature and immune response. The stable quasi-states and the resulting viral populations reflect the competing pressures of the energy barriers to infection and reproduction and of the immune system. The order parameter based on the number of matches reveals two regimes. To understand these regimes we applied the machinery of thermodynamics and statistical mechanics. Enumerating the states of all possible viruses (those able to infect and reproduce in cells, offspring found in the environment, and a “reservoir” or “bath” of all remaining states with their respective probabilities), we used the grand canonical ensemble to derive all of the thermodynamic variables for the system, including the thermodynamic temperature, energy and entropy. The grand canonical ensemble is the natural statistical ensemble for modeling any system of viruses.
In response to temperature and immune pressure we observe a phase transition between a positive temperature regime of normal replication and a negative temperature “disordered” phase of the virus. In this model viruses must survive two types of pressure. Low temperature selects for phenotypes best adapted to infect the host. Conversely, immune pressure is strongest on phenotypes that most closely match the target. The direction in which the quasistate distribution shifts in response to these combined pressures depends on the relative steepness of each energy term as a function of T and A. At some temperatures and immunities, increasing immunity causes the virus quasistates to shift to even better match the target, thus lowering temperature-driven barriers to reproduction. As immune pressure is increased still further, there comes a tipping point where increasing visibility to the immune response becomes too much for the virus, and there is a jump in population characteristics (a phase transition) favoring a much lower match to the target. The phase transition separates a regime of normal reproduction from a disordered regime with negative temperature. The negative temperature regime requires a scenario wherein a virus with few matching segments is still able to enter the cell. In real viruses there are many cases where we see large genetic diversity in individual genes, often those that are important to the interaction with the host. The action of these genes may be functionally more like a diffusion process, allowing greater diversity in the genes than would be expected in the regime of normal replication. We have demonstrated that this simple model of viral replication has a real thermodynamic temperature linearly related to the effective model temperature; the immune response and its parameters simply rescale the temperature. This suggests that if the model can be extended to capture the dynamics of true biological systems, complex aspects of such systems may similarly be understood using the formalisms of thermodynamics and statistical mechanics, thus greatly simplifying their analysis. Microbiological experiments systematically measuring the functional sensitivity of particular genes to changes in temperature and immune pressure would be an important step in adapting this model to real systems.
ACKNOWLEDGMENTS

We would like to thank Dr. Niina Haiminen, Dr. Charles Stevens, Professor Donald Burke, Dr. Kenneth Clarkson and Judith Douglas.
REFERENCES

1. Koonin EV, Senkevich TG, Dolja VV. The ancient virus world and evolution of cells. Biol Direct. 2006; 1: 1–27. 10.1186/1745-6150-1-29
2. Knipe DM, Howley PM, eds. Fields virology. Philadelphia: Wolters Kluwer Health; 2007.
3. Marsh M, Helenius A. Virus entry: open sesame. Cell. 2006; 124: 729–740.
4. van Nimwegen E, Crutchfield JP, Huynen M. Neutral evolution of mutational robustness. Proc Natl Acad Sci USA. 1999; 96: 9716–9720. 10.1073/pnas.96.17.9716
5. Wilke CO, Wang JL, Ofria C, Lenski RE, Adami C. Evolution of digital organisms at high mutation rates leads to survival of the flattest. Nature. 2001; 412: 331–333. 10.1038/35085569
6. Domingo E, Holland JJ. RNA virus mutations and fitness for survival. Annu Rev Microbiol. 1997; 51: 151–178.
7. Janeway CA, Travers P, Walport M, Shlomchik MJ. Immunobiology. 6th ed. New York, NY: Garland Science; 2004.
8. Anderson RM, May RM. Infectious diseases of humans: dynamics and control. Oxford, UK: Oxford University Press; 1992.
9. Acevedo A, Brodsky L, Andino R. Mutational and fitness landscapes of an RNA virus revealed through population sequencing. Nature. 2014; 505(7485): 686–690. 10.1038/nature12861
10. Schulte MB, Draghi JA, Plotkin JB, Andino R. Experimentally guided models reveal replication principles that shape the mutation distribution of RNA viruses. eLife. 2015; 4: e03753.
11. Schulte MB, Andino R. Single-cell analysis uncovers extensive biological noise in poliovirus replication. J Virol. 2014; 88(11): 6205–6212. 10.1128/JVI.03539-13
12. Draghi JA, Parsons TL, Wagner GP, Plotkin JB. Mutational robustness can facilitate adaptation. Nature. 2010; 463(7279): 353–355. 10.1038/nature08694
13. Stern A, Bianco S, Te Yeh M, Wright C, Butcher K, Tang C, et al. Costs and benefits of mutational robustness in RNA viruses. Cell Rep. 2014; 8(4): 1026–1036. 10.1016/j.celrep.2014.07.011
14. Alonso J, Fort H. Error catastrophe for viruses infecting cells: analysis of the phase transition in terms of error classes. Philos Trans R Soc A. 2010; 368(1933): 5569–5582.
15. Hart GR, Ferguson AL. Error catastrophe and phase transition in the empirical fitness landscape of HIV. Phys Rev E. 2015; 91(3): 032705. 10.1103/PhysRevE.91.032705
16. Nowak M, May RM. Virus dynamics: mathematical principles of immunology and virology. Oxford, UK: Oxford University Press; 2000.
17. Wright S. Evolution and the genetics of populations. Vols. 1–4. Chicago, IL: University of Chicago Press; 1984.
18. Ramsey CL, De Jong KA, Grefenstette JJ, Wu AS, Burke DS. Genome length as an evolutionary self-adaptation. Lecture Notes in Computer Science. 1998; 1498: 345–353.
19. Hartl D, Clark A. Principles of population genetics. 4th ed. Stamford, CT: Sinauer Associates; 2007. p. 112; Tian JP. Evolution algebras and their applications. Lect Notes Math. 2008; 1921. Berlin: Springer-Verlag. 10.1007/978-3-540-74284-5
20. Tripathi K, Balagam R, Vishnoi NK, Dixit NM. Stochastic simulations suggest that HIV-1 survives close to its error threshold. PLoS Comput Biol. 2012; 8(9): e1002684.
21. Summers J, Litwin S. Examining the theory of error catastrophe. J Virol. 2006; 80(1): 20–26.
22. Shekhar K, Ruberman CF, Ferguson AL, Barton JP, Kardar M, Chakraborty AK. Spin models inferred from patient-derived viral sequence data faithfully describe HIV fitness landscapes. Phys Rev E. 2013; 88(6): 062705.
23. Reif F. Fundamentals of statistical and thermal physics. New York, NY: McGraw-Hill; 1965. p. 66.
24. Levine RD. Molecular reaction dynamics. Cambridge, UK: Cambridge University Press; 2005.
25. Landau LD, Lifshitz EM. Statistical physics. Oxford, UK: Pergamon Press; 1980. pp. 9ff.
26. Microbes Infect. 2004; 6: 929–936. 10.1016/j.micinf.2004.05.002
27. Burch CL, Chao L. Evolution by small steps and rugged landscapes in the RNA virus φ6. Genetics. 1999; 151: 921–927.
28. Novella IS, Duarte EA, Elena SF, Moya A, Domingo E, Holland JJ. Exponential increases of RNA virus fitness during large population transmissions. Proc Natl Acad Sci USA. 1995; 92: 5841–5844.
29. Alberts B, Johnson A, Lewis J, Raff M, Roberts K, Walter P. Molecular biology of the cell. New York: Garland Science; 2002.
30. Eigen M, Schuster P. The hypercycle: a principle of natural self-organization. Berlin: Springer-Verlag; 1979.
31. Shannon CE, Weaver W. The mathematical theory of communication. Urbana and Chicago, IL: University of Illinois Press; 1949.
CHAPTER 14
ENERGY, ENTROPY AND EXERGY IN COMMUNICATION NETWORKS
Slavisa Aleksic Institute of Telecommunications, Vienna University of Technology, Favoritenstr. 9-11/ E389, 1040 Vienna, Austria
ABSTRACT

The information and communication technology (ICT) sector is continuously growing, mainly due to the fast penetration of ICT into many areas of business and society. Growth is particularly high in the area of technologies and applications for communication networks, which can be used, among others, to optimize systems and processes. The ubiquitous application of ICT opens new perspectives and emphasizes the importance of understanding the complex interactions between ICT and other sectors. Complex and interacting heterogeneous systems can only properly be addressed by a
Citation: Aleksic, S. (2013). Energy, entropy and exergy in communication networks. Entropy, 15(10), 4484-4503. (20 pages) Copyright: © Aleksic. Licensee MDPI, Basel, Switzerland. This is an open access article distributed under the Creative Commons Attribution 4.0 International (CC BY 4.0) license: http://creativecommons.org/licenses/by/4.0/
holistic framework. Thermodynamic theory, and, in particular, the second law of thermodynamics, is a universally applicable tool to analyze flows of energy. Communication systems and their processes can be seen, similar to many other natural processes and systems, as dissipative transformations that level differences in energy density between participating subsystems and their surroundings. This paper shows how to apply thermodynamics to analyze energy flows through communication networks. Application of the second law of thermodynamics in the context of the Carnot heat engine is emphasized. The use of exergy-based lifecycle analysis to assess the sustainability of ICT systems is shown on an example of a radio access network. Keywords: energy; entropy; exergy; communication networks; lifecycle analysis
INTRODUCTION

In the last two decades, the information and communication technology (ICT) sector has been growing very fast. There is no other example in human history of the development of a technology changing our way of life in such a rapid and fundamental manner. In the meanwhile, ICT has become an integral part of our everyday life, including social interactions, business processes, technology and ecology. Due to the fact that ICT not only promises enormous potentials, but also carries risks, it is extremely important to carefully evaluate and assess ICT systems and applications regarding their sustainability and potential for improving global energy productivity. Advanced ICT applications and services can be used to increase efficiency in many areas, such as transport, building management, water management and manufacturing, as well as in the production, distribution and consumption of electricity (smart grids). In assessing ICT systems, the first-order effects, sometimes referred to as direct effects, relate to the energy consumption and the carbon footprint of the ICT hardware itself. Additionally, environmental impacts resulting from the change in production, transport and consumption processes, due to the application of ICT, are probably even more important. This effect is referred to as the second-order effect. Finally, there are environmental impacts emerging from the medium- or long-term adaptations of behavior and economic structures following from the
availability of ICT applications and services, including the rebound effects, i.e., the third-order effects. In communication networks, there have been a number of studies concentrating on potential energy savings, for example, through energy-efficient transmission and processing systems, adaptive transmission links, dynamic power management and switching off or putting into sleep mode some of the less utilized transmission links and line cards [1,2,3,4,5]. It has been shown that significant energy savings can be achieved when optimizing both the network concept and the architecture of network elements with regard to energy consumption. Additionally, a high-performance network infrastructure able to support rapid development and broad use of applications and services for improved energy productivity can lead to very large indirect energy savings, i.e., the second-order effects. It has been recently estimated that the potential reduction of greenhouse gas (GHG) emissions through the use of advanced ICT applications and services is up to ten times higher than the ICT’s own emissions [6]. Thus, complex interactions between the network and various advanced services and applications should be better understood in order to optimize both network infrastructure and applications for maximizing improvements in global energy productivity [7]. Finally, the whole lifecycle of ICT equipment should be considered in order to properly understand and assess the role of ICT in sustainable development. This paper reviews recent research efforts in treating communication and information processing systems using thermodynamic approaches and tools. Analogies between communication and thermodynamic systems are indicated, and several examples at the device, subsystem and system levels are described. The paper is organized as follows. The next section describes analogies between communication and thermodynamic systems, with a focus on energy flows and entropy generation through communication and information processing systems. A particular emphasis is given to the application of the second law of thermodynamics and approaches that use the concept of the Carnot heat engine. Section 3 discusses the use of the exergy concept to assess the sustainability of ICT systems; results of an exergy-based lifecycle analysis are presented for an exemplary radio access network. Finally, Section 4 presents concluding remarks.
ANALOGIES BETWEEN COMMUNICATION AND THERMODYNAMIC SYSTEMS

In general, one can define communication as the exchange of meaningful information. A communication system consists of at least three components: a sender, a communication channel over which messages can be transmitted and a recipient. Since all the components of a communication system are physical objects that need energy to function properly, the communication itself is always a dissipative process. Therefore, the perceivable information is physical; we are not able to generate, transmit, receive or process information without its physical representation. Thus, the logical consequence of these facts is that any physical communication system consumes and dissipates energy while transmitting and processing information. Indeed, if we take a look at the current communication networks, we will recognize that, from the thermodynamic point of view, all processes within the network are irreversible (dissipative) processes. As can be seen from Figure 1, all elements on an end-to-end path through communication networks consume energy. Thus, an amount of energy (Ec) must be continuously supplied to active network elements, in which a part of the energy is used to perform the desired function, i.e., useful work, while the rest of the supplied energy is lost due to the inefficiencies of the transforming processes (Ed). The information transmitted through the network is physically represented by means of modulated carrier signals. Although passive elements, such as optical fibers, do not directly consume energy, they are responsible for energy losses, due to the gradual attenuation of the optical signal as it propagates through the transmission line, which must be compensated for by applying an active component, such as an amplifier. Amplification is, in general, a dissipative process that leads to an increase in entropy. During the process of amplification, only a part of the supplied energy is converted into the signal at the output, while another significant portion of the energy is either lost, due to inefficiencies, or converted into noise. Current communication networks are very complex systems that comprise a large number of components and are able to provide a huge number of coexisting end-to-end paths. They are used to exchange information between software applications for various purposes and in different areas of business and society. Since the use of ICT services and applications can influence processes and flows of energy in other different areas, there is a need for a holistic approach that is able to deal with complex interdependencies between heterogeneous systems.
Figure 1. Energy consumption and dissipation of elements along an end-to-end path through communication networks.
The fundamental laws of thermodynamics were developed many years ago through a combination of observation and experimentation. Although the laws of thermodynamics were developed solely through observations on thermodynamic systems, the main principles and implications of these laws have found a broad application in many other systems and areas of science, such as mechanics, chemistry, biology, genetics, astrophysics and communications. The first law of thermodynamics expresses the conservation of energy. If it were possible to realize an ideal network element in which no energy is dissipated, then, in this hypothetical case, the internal energy would change only due to the transfer of energy from the surroundings to the system (supply of electricity) and to the work done by the system on its surroundings while performing the desired data processing function and writing the results at the output. However, such an ideal element cannot be realized, because dissipation is an integral part of an irreversible change of state in real systems. The second law of thermodynamics is a basic postulate applicable to any system involving the transfer of energy. It can be expressed as the tendency that over time, differences in temperature, pressure and chemical
potential equilibrate in a physical system. That means that differences in these quantities drive flows of energy in a preferred direction. For example, without applying any external work, heat always flows from hotter to colder regions. Moreover, the second law of thermodynamics introduces the principle of the increase in entropy. Entropy can be seen as the opposite of available energy, i.e., a measure of the energy that is not available for useful work in a thermodynamic process. It can be used to differentiate between the bound and free forms of energy. Rudolf Clausius had already developed the concept of thermodynamic entropy in the early 1850s [8]. According to Clausius, for a reversible exchange of heat δQ at absolute temperature T, the change in entropy is dS = δQ/T. While Clausius considered a reversible process, an increase of entropy is commonly associated with irreversible transformations. For an isolated system undergoing an arbitrary process, entropy can never decrease, i.e., dS ≥ 0. One of the well-known formulations of the second law is that the entropy of the Universe tends to a maximum. There also exist other formulations that can be applied to various systems. For example, Annila et al. [9,10] state that where differences in energy density exist, flows of energy arise to diminish those differences, and that the flow selects the fastest ways. Similarly, for a communication system, we can state that where a difference in information exists, a flow of information can appear to diminish that difference. It should be noticed here that the latter postulation is an intuitive observation by the current author, which is not going to be proven in this correspondence. Here, we should also underline the difference between thermodynamic and logical entropy. Thermodynamic entropy, which is mostly associated with Clausius, is defined as “the quantitative measure of the amount of thermal energy per unit temperature not available to do work in a closed system”. It has the unit of J/K. Logical entropy, on the other hand, a concept associated with Ludwig Boltzmann in the 1880s [11], is “a measure of disorder or randomness in a closed system” (The American Heritage Dictionary). Logical entropy has been used by Shannon to characterize information coding and properties of communication channels [12]. Since its introduction, entropy has found use in many areas, such as in thermodynamics, communications, statistical mechanics, statistics, quantum mechanics, evolutionary biology, genetics
and theoretical astrophysics. Entropy generation analysis has been shown to be an effective tool in optimizing systems and processes [13,14,15,16]. A very useful concept for treating many different systems is that of a heat engine, in particular the Carnot heat engine [17]. Models based on the Carnot heat engine have often been used to describe thermodynamic cycles of systems in which some work may be performed by the system on its surroundings [18]. The system is thereby acting as a heat engine that performs work according to the difference in the temperature between the hot and the cold reservoirs. Both the hot and the cold reservoir are assumed to be at thermodynamic equilibrium. The heat flow (QH) at the constant absolute temperature, TH, is the energy supplied to the engine to perform work (W), while the residual heat (QC) flows to the cold sink at the constant equilibrium temperature, TC, and is the energy not used to perform useful work. The Carnot engine is a reversible
{> > \| | by: max|
In analogy to thermodynamic systems, it is possible to use the concept of the Carnot heat engine to determine the thermodynamic efficiency of communication systems. For example, the approach presented in [15] introduces a new modeling technique that combines rate and propagation equations with thermodynamic approaches in order to evaluate both energy dynamics and the evolution of thermodynamic entropy within an optical amplifier. When observing the amplification process in an optical amplifier, e.g., in an erbium-doped fiber amplifier (EDFA), one can identify analogies with some well-known thermodynamic systems, such as the heat exchanger (see Figure 2). In an EDFA, the high-energy pump signal transfers a part of its energy to the low-energy data signal. As a result of this amplification process, the energy of the data signal is increased at the output of the device. The efficiency of this process is characterized by the energy conversion efficiency of the amplifier. Similarly, energy transfer occurs in the heat exchanger by the transfer of heat from the high-temperature (hot) fluid to the low-temperature (cold) fluid, which results in an increased temperature of the cold fluid at the output of the device. Thermodynamically speaking, the amplification process in an erbium-doped fiber (EDF) requires external energy to be supplied in the form of the optical pump signal, and the work
done on its surroundings is manifested by the increase in energy of the data signal. The internal processes are dissipative and lead to an increase in entropy. In both the EDFA and the heat exchanger, higher efficiency is achievable in the counter-propagating (counter-flow) design. In parallel-flow heat exchangers, the cold fluid cannot reach a higher temperature than the hot fluid towards the exit, while at the output of an EDFA in the co-propagating configuration, the energy of the signal can exceed that of the residual pump, because the EDF is acting as the active medium.
Figure 2. Analogy between the erbium-doped fiber amplifier ((a) and (b)) and the heat exchanger ((c) and (d)). In general, energy transfer between the high-energy pump and the low-energy signal in the erbium-doped fiber resembles the transfer of heat from the hot fluid to the cold fluid in a heat exchanger. Although the background physical processes are different, both systems behave similarly when analyzed from the thermodynamic point of view. One difference is that, in the co-propagating configuration, the signal energy can be higher than the residual pump energy at the output of the EDFA, whereas in a parallel-flow heat exchanger the cold fluid cannot leave at a higher temperature than the hot fluid.
Considering the analogies described above, we can draw a simple thermodynamic system representing the EDFA as a heat pump, as shown in Figure 3. A heat pump usually moves thermal energy in the opposite direction of spontaneous heat flow, absorbing work to accomplish the desired transfer of thermal energy from the heat source to the heat sink. Let us consider a steady-state process in which the pump energy is supplied to the erbium-doped fiber at a flux temperature, Tp. Here, we use the concept of the flux temperature, Tf = Ė/Ṡ [19,20,21,22,23], to introduce the entropy rate via Ṡ = Ė/Tf. This enables describing light-generating and light-converting devices using thermodynamic quantities, such as temperature and heat, which has led to advances in several areas, such as in luminescence [24,25], the thermodynamics of lasers [26], laser cooling systems [27] and photosynthesis [28,29]. The flux temperature is generally not equivalent to the absolute thermodynamic temperature; a detailed discussion of the relation between the flux temperature and the absolute thermodynamic temperature can be found in [22]. The entropy rates associated with the pump and the signal are then Ṡp = Ėp/Tp and Ṡs = Ės/Ts, respectively. The work done by the amplifier on its surroundings corresponds to the increase in the energy flow of the signal (Ės,out − Ės,in). The length of the erbium-doped fiber is chosen such that the energy of the pump is almost completely used up in the amplification process. In this case, the residual pump energy, Ėp,res, becomes very low and can be neglected, so that energy losses in the EDF are mainly due to non-radiative processes. These irreversible internal processes cause an additional entropy generation that is denoted by ṠEDF. The accompanying generation and transfer of heat to the surroundings is represented by the heat flow, Q̇L, at the ambient temperature (TL).
Figure 3. Representation of the erbium-doped fiber amplifier as a thermodynamic heat pump.
Thus, under the steady-state condition, we can write two basic balance equations for the EDFA model presented in Figure 3, expressing the conservation of energy and the second law of thermodynamics, as follows:

(1)

(2)

Eliminating the heat flow to the surroundings from Equation (1), and using Ṡs,p = Ės,p/Ts,p, where Ės,p and Ts,p represent the energies and flux temperatures of the signal and the pump, respectively, one can rewrite Equation (2) in the following form:

(3)

The efficiency of the EDFA acting as a heat pump can be defined as the ratio of the signal energy gain to the supplied pump energy. It can now be obtained from Equation (3) as:
(4)

It is evident from Equation (4) that the efficiency of the EDFA acting as a heat pump can be determined from the temperatures TL, Tp and Ts only. It is of the same form as the efficiency derived in [19]. Through analyzing Equation (4), one finds that the EDFA presented in Figure 3 is equivalent to a tandem arrangement of two heat engines [30]. In the limiting case, the EDFA corresponds to a heat engine in which no entropy can be associated with the output work, i.e., Ṡs = 0. Consequently, the system becomes reversible and Ts → ∞, which is equivalent to the situation where the output signal light is close to perfectly monochromatic and perfectly directional radiation, carrying energy but essentially no entropy. In this limit, Equation (4) reduces to the Carnot efficiency, i.e., ηc = 1 − TL/Tp.

Figure 4. Evolution of the flux temperatures along the erbium-doped fiber for the (a) counter-propagating and (b) co-propagating configurations [15].
A three-level EDFA model based on rate and propagation equations can be used to effectively model the propagation of optical signals and their interaction with Er3+ ions in the erbium-doped fiber [15,31,32]. The entropy rate of the light can be obtained by treating it as a “gas of photons”, as has been done by several authors for systems of n discrete quantum states in the past [20,21,22]. Hence, according to the definition of the flux temperature, Tf = Ė/Ṡ, we can calculate the flux temperatures, Tp and Ts, associated with the optical pump and signal lights as the ratio between the rates of energy and entropy carried by the light. Figure 4 shows the evolution of the flux temperatures along the erbium-doped fiber [15]. As the signal is amplified along the fiber, its flux temperature increases. Thus, the EDF behaves similarly to the heat exchanger.
Data Processing Systems as a Carnot Heat Engine

Even a more complex network element, such as a switch or a data processing module within a router, can be considered as a Carnot subsystem, as depicted in Figure 5 [33]. Here, the input data represent the work performed
on the system by its surroundings, which is characterized by the entropy of the incoming data stream. The entropy of a data stream is associated with the degrees of freedom of the information system used and the corresponding quantity of energy for the particular ensemble. Similarly, the output data represent the work done by the system on its surroundings, which is characterized by its entropy. It has already been shown that the minimum energy required for processing, or, more precisely, deleting, a bit of information in binary systems is kBT ln2 [34], where kB is the Boltzmann constant. Thus, the entropy of a binary ensemble composed of D degrees of freedom is given by S = DkB ln2.
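To give a sense of scale, the Landauer bound is easy to evaluate; a short sketch (room temperature assumed):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def landauer_limit(bits, temperature=300.0):
    """Minimum energy (J) to erase `bits` bits at the given absolute
    temperature, per Landauer's bound E >= N * kB * T * ln 2."""
    return bits * K_B * temperature * math.log(2)

# Erasing 1 Gbit at room temperature:
print(landauer_limit(1e9))  # ~2.9e-12 J, far below real device dissipation
```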
Figure 5. A possible representation of a data processing element using the analogy to the Carnot engine [33].
The data processor draws heat from the hot reservoir (QH) associated with the locally generated array of size DH, which represents the space in which the data processing operation is performed. The size of the ensemble of the internal degrees of freedom (DH) must be larger than or equal to that of the incoming data, i.e., DH ≥ DI. The data processor makes use of the energy, QH, to process input data and to generate results of a length of DO. The minimum energy dissipated by this process is the residual heat flow to the cold reservoir, QC = QH − WO, where WO is the work associated with writing the output; QC is determined by the number of by-product bits of the processing operation. Thus, Figure 5 shows an example of a model based on the Carnot heat engine representing a simple data processor with only one input and one output, whose efficiency is given by the ratio WO/QH.
EXERGY IN COMMUNICATION NETWORKS

A very useful quantity that stems from the second law of thermodynamics is exergy. It can be used to clearly indicate the inefficiencies of a process by locating the degradation of energy. In its essence, exergy is the energy that is available to be used, i.e., the portion of energy that can be converted into useful work. In contrast to energy, it is never conserved for real processes, because of irreversibility. Any exergy loss indicates possible process improvements. The exergy of a macroscopic system is given by:

Ex = U + PrV − TrS − Σi μr,i ni   (5)

where the extensive system parameters are internal energy (U), entropy (S), volume (V) and the number of moles of the different chemical components, i (ni), while the intensive parameters of the reference environment are pressure (Pr), temperature (Tr) and the chemical potential of component i (μr,i). A useful formula for the practical determination of exergy is [35]:

(6)

where the relatively easily determined quantities denoted by “o” in the subscript are related to the equilibrium with the environment. The exergy content of materials, Exmat, at a constant temperature, T = T0, and pressure, P = P0, can be calculated from:
(7)

In Equation (7), ci is the concentration of the element i, R is the gas constant, while μi denotes the chemical potential of the element i, relative to its reference state. The relation of exergy loss to entropy production is given by:

Exloss = T0·ΔSirr,   (8)

where ΔSirr is the entropy (irreversibility) generated in a process or a system. In other words, for processes that do not accumulate exergy, the difference between the total exergy flows into and out of the system is the exergy loss due to internal irreversibilities, which is proportional to the entropy creation. The overall exergy loss of a system is the sum of the exergy losses in all system components, i.e., Exloss,total = Σ Exloss,component. Exergy analyses have been performed in industrial ecology to indicate the potentials for improving
the use of resources and minimizing environmental impact. The higher the exergy efficiency, i.e., the lower the exergy losses, the better the sustainability of the considered system or approach.
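The Gouy-Stodola relation of Equation (8) and the component-wise summation of exergy losses can be expressed compactly; the component names and entropy-generation values in the sketch below are invented for illustration.

```python
T0 = 298.15  # reference environment temperature [K]

def exergy_loss_j(entropy_generated_j_per_k, t_ref=T0):
    """Equation (8): exergy destroyed equals T0 times the entropy generated."""
    return t_ref * entropy_generated_j_per_k

# Hypothetical per-component entropy generation of a small network element [J/K]:
s_gen = {"amplifier": 0.8, "switch": 0.3, "line card": 1.1}
total = sum(exergy_loss_j(s) for s in s_gen.values())
print(f"Total exergy loss: {total:.1f} J")
```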
Exergy-Based Lifecycle Analysis (E-LCA) of ICT Equipment

Since exergy analysis is a universally applicable method to assess process efficiency, it is well suited to investigate the sustainability of heterogeneous systems. Indeed, there are a number of studies that apply exergy-based lifecycle analysis (E-LCA) to assess the sustainability of complex systems and technologies [36,37,38,39,40,41]. More recently, E-LCA has also been used by several research groups to assess the sustainability of ICT infrastructure and applications [39,40,41]. In an E-LCA, the exergy flows are traced over all phases of the device lifecycle. First, the embodied exergy of the materials used to manufacture the device is determined. This embodied material exergy acts as the input into the system. In the study presented in this paper, the material inventory is performed by surveying the raw material composition of the different components. Two examples of a typical decomposition of a smartphone and a tablet PC are given in Table 1.

Table 1. Decomposition of a typical smartphone and tablet PC [42].

Smartphone:
  Glass             40.9 g   (30%)
  Stainless steel   38.7 g   (29%)
  Battery           24.7 g   (18%)
  Circuit boards    15.4 g   (11%)
  Display            7.2 g    (5%)
  Plastic            3.1 g    (2%)
  Other materials    5 g      (4%)
  Total            135 g    (100%)

Tablet PC:
  Glass            140 g    (23%)
  Stainless steel  115 g    (19%)
  Battery          131 g    (21%)
  Circuit boards    40 g     (7%)
  Display          142 g    (23%)
  Plastic           19 g     (3%)
  Other materials   26 g     (4%)
  Total            613 g   (100%)
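Combining the material masses of Table 1 with per-kilogram CExC values from Table 2 gives a rough estimate of the embodied material exergy. Note that Table 2 has no dedicated entries for glass, batteries, circuit boards or displays, so the sketch below maps them (our own simplifying assumption) onto the generic "Other" value of 20 MJ/kg; the result is therefore only an order-of-magnitude illustration.

```python
# Embodied-exergy estimate for the smartphone of Table 1 (masses in kg).
masses_kg = {
    "glass": 0.0409, "stainless steel": 0.0387, "battery": 0.0247,
    "circuit boards": 0.0154, "display": 0.0072, "plastic": 0.0031,
    "other": 0.0050,
}
# CExC values [MJ/kg]; 20.0 is the generic "Other" estimate from Table 2.
cexc_mj_per_kg = {
    "glass": 20.0, "stainless steel": 52.1, "battery": 20.0,
    "circuit boards": 20.0, "display": 20.0, "plastic": 92.3, "other": 20.0,
}

embodied_mj = sum(m * cexc_mj_per_kg[k] for k, m in masses_kg.items())
print(f"Embodied material exergy of the smartphone: ~{embodied_mj:.1f} MJ")
```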
The estimation of the exergy consumption for the raw material extraction phase is performed on a per-mass basis and according to the exergy contents listed in Table 2. The exergy contents of the materials are mainly taken from [39,41,43,44,45]. Then, we calculate the amounts of exergy destroyed during the various LCA phases, including material
extraction, transportation, manufacturing, use and disposal. Furthermore, the exergy efficiencies of the power generation systems are considered [38]. The exergy-related figures used to obtain the results presented in this paper are listed in Table 2. As a result of the E-LCA, one can determine exergy-based sustainability indicators that can be used to easily compare the sustainability of different concepts, technologies and approaches. Two examples of exergy lifecycles for a Universal Mobile Telecommunications System (UMTS) radio base station and a smartphone are presented in Figure 6a,b, respectively. A reutilization of recycled materials of 40% is assumed, while the mix of energy generation sources is chosen according to the values stated in Table 2, which roughly correspond to the current situation in Austria [46]. It is evident from Figure 6c that the main contributors to the exergy losses of the entire device lifecycle are the manufacturing and material extraction processes, in the case of a smartphone, and the high operational energy consumption, in the case of a radio base station. This difference in the relation of embodied to operational exergy for radio base stations and smartphones is mainly due to the fact that radio base stations have several times longer lifecycles than modern mobile devices and that mobile devices are optimized for low energy consumption.

Table 2. Exergy-related figures used in this study [38,39,41,43,44,45].

Embodied exergy in materials (cumulative exergy cost, CExC):
  Aluminum    341.5  MJ/kg   (using the Bayer process and the Hall-Héroult process)
  Steel        52.1  MJ/kg
  Copper       67    MJ/kg   (includes mining, concentrating, …)
  Iron         51.04 MJ/kg   (iron casting)
  Zinc        198.9  MJ/kg   (…, electrolysis)
  Plastic      92.3  MJ/kg   (low-density polyethylene (LDPE) from crude oil)
  Other        20    MJ/kg   (order-of-magnitude estimate)

Exergy consumption of manufacturing processes:
  Metals        0.28 kJ/kg   (machining process)
  Plastic      14.9  kJ/kg   (injection molding)
  Printed circuit boards (PCBs)  238.4 MJ/m²  (FR-4, per area)
  Integrated circuits (ICs)       12.5 MJ/IC  (for an average IC size)
  Complex processor              1,242 MJ/processor

Exergy consumption of different transportation modes (per km and kg of transported goods):
  Air          22.14  kJ/kg-km
  Truck         2.096 kJ/kg-km
  Rail          0.253 kJ/kg-km
  Ship          0.296 kJ/kg-km

Energy source mix for Austria [46] and exergy efficiencies of the power generation systems [38]:
  Hydroelectric power generation   57.1%
  Coal-fired power plant           37.2%
  Wind turbine system               4.2%
  Solar photovoltaic system         1.5%
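The transportation intensities of Table 2 are applied per kilogram and per kilometer; a short sketch for a 135 g smartphone follows, with hypothetical shipping legs that do not correspond to the routes assumed in the study.

```python
# Transport-phase exergy of a 135 g smartphone using Table 2 intensities.
intensity_kj_per_kg_km = {"air": 22.14, "truck": 2.096, "rail": 0.253, "ship": 0.296}
legs = [("truck", 5000.0), ("ship", 9000.0)]  # (mode, distance in km) -- assumed
mass_kg = 0.135

total_kj = sum(mass_kg * dist * intensity_kj_per_kg_km[mode] for mode, dist in legs)
print(f"Transport exergy: ~{total_kj:.0f} kJ")  # ~1774 kJ
```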
To obtain the results presented in Figure 6, it is assumed that the lifecycle of a smartphone is two years, while that of a base station is 10 years. Thus, mobile devices and technologies are launched within short cycles of only a few years. Even though the energy efficiency of new devices improves, their processing power and use intensity increase, which consequently leads to a more or less constant power consumption despite the continuous improvements in energy efficiency. Further concerns are resource depletion and environmental pollution, due to ever-increasing production volumes and decreasing lifetimes, as well as inadequate disposal of ICT hardware. These issues can only be properly addressed using a holistic approach that considers the whole lifecycle of products and services.
Figure 6. Examples of exergy-based lifecycles for (a) a Universal Mobile Telecommunications System (UMTS) base transceiver station and (b) a smartphone. (c) The relation between the embodied and the operational exergy for Node B and the smartphone [39,40].
E-LCA of Radio Access Networks

In order to illustrate the use of E-LCA for evaluating the sustainability of communication networks, we consider an exemplary radio access network (RAN). Here, we assume a system as shown in Figure 7a, which represents a generic architecture of a UMTS RAN. The main components of such a network are the base transceiver station (Node B), the radio network controller (RNC) and the connection to the core network that is referred to as the backhaul. The backhaul can be realized in different ways and using different technologies based on radio links, copper cable or optical fiber. Customers use mobile devices (CMD) to connect to radio base stations in
order to use various services and applications, such as telephony, classical Internet services or newer services like videoconferencing, video on demand, file sharing or any kind of cloud service.
Figure 7. (a) Generic architecture of UMTS radio access network and (b) an exemplary coverage of the City of Vienna by the UMTS macrocell radio access network.
As an example, we model a hypothetical radio access network for the City of Vienna. According to the statistical data on the areas and population densities [47] of the districts of the City of Vienna, we estimate the required number of base station sites (see Figure 7b) and, consequently, the number of network elements, such as Node B, RNC and backhaul equipment, using the tool for the evaluation of the energy efficiency of network deployments described in [48,49]. We also estimate the utilization of the network elements and, based on it, the overall energy consumption. Hence, knowing the operational energy consumption and the required number of network elements, it is possible to obtain the exergy consumptions of the different lifecycle phases. The main assumptions we made for the E-LCA of the UMTS radio access network are summarized in Table 3. Figure 8 shows the estimated values of the embodied material exergy and the exergy consumed during the manufacturing of typical UMTS Node B and RNC racks. For both Node B and RNC, the embodied material exergy is dominated by the housing, i.e., by the material used for racks and enclosures. Differently, the most exergy-intensive components in the manufacturing process of Node B and RNC are the radio frequency (RF) components.
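A back-of-the-envelope version of the site-count estimate reads as follows; the cell radius is an assumed value and not the one used by the dimensioning tool of [48,49], which works district by district.

```python
import math

city_area_km2 = 414.9            # approximate area of the City of Vienna
cell_radius_km = 1.0             # assumed macrocell radius (hypothetical)
cell_area_km2 = 1.5 * math.sqrt(3) * cell_radius_km**2  # hexagonal cell area

n_sites = math.ceil(city_area_km2 / cell_area_km2)
print(f"Estimated number of macrocell sites: {n_sites}")
```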
The lifecycle of a radio access network for the City of Vienna, including the customers' mobile devices, is presented in Figure 9a. Here, we assume a case where all raw materials are available within a radius of 1,000 km from the manufacturing location, while the recycling is taking place in China. A reutilization of recycled materials of 40% is assumed. The assumptions on the network dimensioning and the number of mobile customers are made according to data we obtained from Statistics Austria, the Austrian Regulatory Authority for Broadcasting and Telecommunications (RTR), Austrian network operators and the Forum Mobilkommunikation (FMK); particularly, data on technology penetration, market shares and population statistics. For more detailed information about the assumptions and main modeling parameters, the reader is referred to [39].

Table 3. Main assumptions for the exergy-based lifecycle analysis (E-LCA) of the UMTS radio access network for the City of Vienna (case with 40% material reutilization).

Raw Material Extraction: spatial context: southeast Asia/China; material supply within a radius of 1,000 km; 40% reutilization of recycled materials.
Material Transportation: spatial context: within a radius of 5,000 km; mode of transportation: rail/truck.
Manufacturing and Assembly: spatial context: southeast Asia/China.
Product Transportation: from southeast Asia/China to Austria/Vienna; mode of transportation: rail/truck/ship.
Operation (Network Design Parameters): lifespan: 9 years.

REFERENCES

31. Giles, C.R.; Desurvire, E. Modeling erbium-doped fiber amplifiers. IEEE J. Light. Technol. 1991, 9, 271–283.
32. Becker, P.C.; Olsson, N.A.; Simpson, J.R. Erbium-Doped Fiber Amplifiers: Fundamentals and Technology; Academic Press: San Diego, CA, USA, 1999.
CHAPTER 16
STATISTICAL THERMODYNAMICS OF ECONOMIC SYSTEMS
Hernando Quevedo¹,² and María N. Quevedo³
¹ Instituto de Ciencias Nucleares, Universidad Nacional Autónoma de México, AP 70543, 04510 México, DF, Mexico
² Dipartimento di Fisica and ICRA, Università di Roma La Sapienza, 00185 Roma, Italy
³ Departamento de Matemáticas, Universidad Militar Nueva Granada, Carrera 11 No. 101-80, 110111 Bogotá, DE, Colombia
ABSTRACT We formulate the thermodynamics of economic systems in terms of an arbitrary probability distribution for a conserved economic quantity. As in statistical physics, thermodynamic macroeconomic variables emerge as the mean value of microeconomic variables, and their determination is reduced to the computation of the partition function, starting from an arbitrary function.
Citation: Quevedo, H., & Quevedo, M. N. (2011). Statistical thermodynamics of economic systems. Journal of Thermodynamics, 2011. (9 pages) Copyright: © 2011 Hernando Quevedo and María N. Quevedo. This is an open access article distributed under the Attribution 3.0 Unported (CC BY 3.0) license: https://creativecommons.org/licenses/by/3.0/.
Explicit hypothetical examples are given which include linear and nonlinear economic systems as well as multiplicative systems such as those dominated by a Pareto law distribution. It is shown that the macroeconomic variables can be drastically changed by choosing the microeconomic variables in an appropriate manner. We propose to use the formalism of phase transitions to study severe changes of macroeconomic variables.
INTRODUCTION

Thermodynamics is a phenomenological science that derives its concepts directly from observation and experiment. The laws of thermodynamics can be considered as axioms of a mathematical model, and the fact that they are based upon commonplace observations makes them tremendously powerful and generally valid. In particular, the interest in applying thermodynamics in a systematic manner to describe the behavior of economic and financial systems has a long history [1]. One of the difficulties of this approach is that it is necessary to identify a priori the economic variables that can be identified as thermodynamic variables satisfying the laws of thermodynamics. The results of this identification are very often controversial. For instance, whereas in some studies money is considered as a well-defined thermodynamic variable, other analyses suggest that money is a completely irrelevant economic variable that, consequently, should not be used in any thermodynamic approach to economy [2]. Even the basic thermodynamic assumption that an economic system be in equilibrium has been the subject of numerous discussions. The concept of economic entropy also presents certain difficulties, and different definitions can be formulated [3, 4]. Formal mappings between thermodynamic and economic variables can be formulated [5] which, however, leave the notion of entropy unclear and the range of models in which it holds undefined. On the other hand, econophysics is a relatively new branch of physics [6] in which several methods of the applied theory of probabilities, which have been used with excellent results in statistical physics, are implemented to investigate the behavior of economic systems. One of its most prominent findings is that certain economic variables can be considered as conserved, and their distribution among the agents of an economic system is described by simple probability densities. In fact, it is currently well established that wealth distribution in many societies presents essentially two phases [7–15]. This means that the society can be differentiated into two disjoint populations with two different probability distributions. Various
analyses of real economic data from several countries have shown that one phase, comprising the large majority of the population, possesses a Boltzmann-Gibbs (exponential) probability distribution, whereas the other phase, composed of the few percent of individuals with the highest wealths, shows a Pareto (power law) probability distribution. Similar results are found for the distribution of money and income. In a recent review [16], the concepts of classical thermodynamics are shown to be applicable to economic systems, and a consistent thermodynamic formulation of economics is presented, using the language of calculus and differential forms and the fundamentals of statistics. Moreover, it is shown that thermodynamic concepts like entropy, temperature, pressure, and volume can be related to economic concepts like production, effective costs, capital growth, and capital. Furthermore, a recent work [17] demonstrates in a plausible manner that statistical models based on exponential and power law distributions can be used to reproduce the distribution of money, wealth, and income in modern societies. Additionally, the dynamics of such statistical models was shown to be described by a stochastic Markov process, so that the evolution of the corresponding probability distributions is governed by a Fokker-Planck equation.

In the present work, we propose a different approach in which the concepts of microeconomic and macroeconomic parameters are introduced. This allows us to use the standard formulation of statistical thermodynamics in order to relate, in a systematic and rigorous way, the thermodynamic approach to the statistical properties of complex economic systems found in econophysics. We obtain as a result that the entire properties of economic systems can, in principle, be encoded in a partition function from which all the thermodynamic properties of the system can be derived. It turns out that in certain systems, the macroeconomic variables can be drastically changed by an appropriate choice of the microeconomic variables. Moreover, the drastic changes that occur at the level of the macroeconomic variables can be investigated by using the formalism of phase transitions.

This paper is organized as follows. In Section 2 we formulate the fundamentals of statistical thermodynamics in the case of an arbitrary conserved economic variable. In Section 3.1, we analyze the simplest case of an economic system for which the conserved variable depends linearly on the microeconomic parameters. More general situations are analyzed in Sections 3.2 and 3.3, where quadratic and multiplicative dependences of abstract parameters are considered. The partition function is analyzed in
all the cases, and possible interpretations are presented. In Section 4, we investigate the conditions under which phase transitions can occur. Finally, Section 5 contains a summary and a discussion of our results.
STATISTICAL THERMODYNAMICS

Consider a hypothetical economic system in equilibrium for which a quantity, say M, is conserved. There is a reasonable number of arguments [18] which show that certain current economies can be considered as systems in equilibrium, and some quantities, like the total amount of money in the system, are conserved during certain periods of time. For the sake of concreteness, we will consider in this work that M is the conserved money, although our approach can be applied to any conserved quantity. Suppose that the system is composed of N agents which compete to acquire a participation m of M. In real economic systems, the total number of agents is such a large number that the methods of statistical mechanics can be applied. For such an economic system, the equilibrium probability distribution (density function) of m is given by the Boltzmann-Gibbs distribution ρ(m) ∝ e^(−m/T), where T is an effective temperature equal to the average amount of money per agent. The amount of money m that an agent can earn depends on several additional parameters λ1, λ2, ..., λl, which we call microeconomic parameters. Since the density function can be normalized to 1, we obtain [19]

ρ = (1/Q) e^(−m({λ})/T),   Q(T, x) = ∫ e^(−m({λ})/T) dλ1 ··· dλl,   (1)

where Q(T, x) is the partition function and {λ} represents the set of all microeconomic parameters. Here, we have introduced the notation x = x1, x2, ..., xn to denote the possible set of macroeconomic parameters which can appear after the integration over the entire domain of definition of the microeconomic parameters {λ}. Following the standard procedure of statistical thermodynamics [19], we introduce the concept of the mean value for any function g = g({λ}) as

⟨g⟩ = (1/Q) ∫ g({λ}) e^(−m({λ})/T) dλ1 ··· dλl.   (2)
These are the main concepts which are needed in statistical thermodynamics for the investigation of a system which depends on the temperature T and the macroscopic variables x. Consider, for instance, the mean value ⟨m⟩ of the money function, and let us compute its total differential d⟨m⟩, (3). Using the explicit form of the density function ρ given in (1), we obtain (recall that ∫ ρ dλ1 ··· dλl = 1 and, therefore, ∫ dρ dλ1 ··· dλl = 0)
d⟨m⟩ = T dS − Σi yi dxi,   (4)

where the entropy S and the "intensive" macroscopic variables yi are defined in the standard manner as
S = −⟨ln ρ⟩,   yi = −T ⟨∂ ln ρ/∂xi⟩.   (5)

In this way, we recover the first law of thermodynamics (4) and, as in the case of statistical mechanics, the remaining laws of thermodynamics are also valid. Similar results can be obtained for any quantity that can be shown to be conserved. Moreover, by means of Legendre transformations, different thermodynamic potentials can be used to describe the same thermodynamic system. It is useful to calculate explicitly the entropy S = −∫ ρ ln ρ dλ1 ··· dλl by writing the density function ρ in the form

ρ = e^((f−m)/T),   f ≡ −T ln Q,   (6)

so that
S = (⟨m⟩ − f)/T,   i.e.,   f = ⟨m⟩ − TS,   with   S = −∂f/∂T,   yi = −∂f/∂xi.   (7)
This means that the entire information about the thermodynamic properties of the system is contained in the expression for the "free money" f which, in turn, is completely determined by the partition function Q(T, x). In statistical physics, this procedure is still used, with excellent results, to investigate the properties of thermodynamic systems. We propose to use a similar approach in econophysics. In fact, to investigate a model for an economic system one only needs to formulate the explicit dependence of any conserved quantity, say the money m({λ}), in terms of the microeconomic parameters {λ}. From m({λ}), one calculates the partition function Q(T, x) and the free money f(T, x) which contains all the thermodynamic information about the economic system. As mentioned in Section 1, the concept of economic entropy still presents certain difficulties [3, 4]. In our approach, the entropy can be considered as a measure of the irrevocable degradation of the ability to perform an economic activity. In this sense, the value of the entropy depends entirely on the selected function m({λ}). In the examples to be considered below, we will see that the entropy is a measure of the "price" to be paid to reach a positive change of the money's mean value. This interpretation is in agreement with the notion of entropy per agent [4]. In this case, the free money can be interpreted as an intrinsic money measure that characterizes the potential to deliver money to an external market through voluntary trade. For a different choice of m({λ}), it should be possible to identify the mean value with the total capital of the economic system so that the entropy can be associated with the production function and the free money turns out to be the effective cost function [16]. In the next sections, we present several examples of hypothetical economic systems with relatively simple expressions for m({λ}). In real economic systems, probably very complicated expressions for m({λ}) will appear for which analytical computations are not available. Nevertheless, the calculation of the above integrals for the partition function can always be performed by using numerical methods so that the corresponding thermodynamic properties of the system can be found, at least qualitatively.
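The numerical route mentioned above can be sketched in a few lines: given a money function m(λ), one integrates to obtain Q(T), forms the free money f = −T ln Q, and differentiates to get S and ⟨m⟩. The quadrature settings and the test money function below are illustrative choices, not part of the original paper.

```python
import numpy as np
from scipy.integrate import quad

def partition_function(T, m, lam_max=np.inf):
    # Q(T) = integral of exp(-m(lam)/T) over one microeconomic parameter
    return quad(lambda lam: np.exp(-m(lam) / T), 0.0, lam_max)[0]

def thermo_variables(T, m, dT=1e-5):
    f = lambda t: -t * np.log(partition_function(t, m))  # free money f = -T ln Q
    S = -(f(T + dT) - f(T - dT)) / (2 * dT)              # entropy S = -df/dT
    mean_m = f(T) + T * S                                # <m> = f + T S
    return f(T), S, mean_m

m_linear = lambda lam: 2.0 * lam          # money function m = c1*lam, c1 = 2
print(thermo_variables(1.5, m_linear))    # mean value should equal T = 1.5
```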
HYPOTHETICAL ECONOMIC SYSTEMS

The determination of the quantity m in terms of the microeconomic parameters {λ} is a task that requires the knowledge of very specific conditions and relationships within a given economic system. The first step consists of identifying the microeconomic parameters {λ} which influence the capacity of an individual agent to compete for a share m of the conserved quantity M. Then, it is necessary to establish how this influence should be represented mathematically so that m becomes a well-defined function of {λ}. Another aspect that must be considered is the fact that in a realistic economic system not all agents are equivalent. For instance, an agent whose income belongs to the exponential phase of the wealth distribution would be considerably different from an agent represented by the Pareto tail, in accordance with the different characteristics of the agents involved in the economic model [7]. For the statistical thermodynamic approach we are proposing in this work, this means that we can decompose the quantity m into classes, m = mI + mII + ···, and different classes can be described by different functions of different sets of microeconomic parameters. The formalism of statistical thermodynamics can then be applied to each class separately, with its own function m({λ}). In the following subsections, we will study several hypothetical economic systems in which m is given as simple ordinary functions of the microeconomic parameters. We expect, however, that these simple examples already capture some essential features of more realistic systems. Although the function m can represent any conserved economic quantity, for the sake of concreteness, we will assume that it represents the money, and from now on m({λ}) will be referred to as the money function.
Linear Systems

The simplest model corresponds to the case m = c0 = const. Then, the partition function (1) is given by

Q = e^(−c0/T) x1 x2 ··· xn,   (8)
where the xi = λi, i = 1, 2, ..., n, arising from the upper limits of integration, represent the macroeconomic parameters. The calculation of the thermodynamic variables, according to (7), yields
f = c0 − T Σi ln xi,   S = Σi ln xi,   yi = T/xi,   (9)

so that, in particular, ⟨m⟩ = c0; that is, the mean value of money is a constant, as expected. This economic model is considerably simple. Each agent possesses the same amount of money c0, the entropy does not depend on the mean value of the money c0, and the state equations are ⟨m⟩ = c0 and yi·xi = T. The system is completely homogeneous in the sense that each agent starts with a given amount of money, c0, and ends up with the same amount. Probably, the only way to simulate such an economic system would be by demanding that agents do not interchange money; this is not a very realistic situation. Indeed, the fact that the entropy is a constant that does not depend on the mean value of the money allows us to renormalize the macroscopic parameters in such a way that xi = λi = 1, for each i, so that the total entropy vanishes. In this case, from (9), we see that f = c0, and the corresponding equations of state reduce to those of a system with vanishing entropy, of the kind used in the description of the third law of thermodynamics. This observation indicates that a completely homogeneous economic system is not realizable as a consequence of the third law of thermodynamics. Consider now the function

m = c1 λ1,   (10)

where c1 is a positive constant. The corresponding partition function can be written as
Q = (T/c1) x2 x3 ··· xn.   (11)

The relevant thermodynamic variables follow from (6) and (7) as
f = −T ln(T/c1) − T Σ_{j=2..n} ln xj,   (12)
S = ln(T/c1) + 1 + Σ_{j=2..n} ln xj,   ⟨m⟩ = T,   yj = T/xj,   (13)

and the conservation law (4) becomes
d⟨m⟩ = T dS − Σ_{j=2..n} yj dxj.   (14)

In fact, using the definition of the mean value (2), it can be shown that ⟨m⟩ = T, and so, the fundamental thermodynamic equation in the entropic representation [20] can be written as
S = ln(⟨m⟩/c1) + 1 + Σ_{j=2..n} ln xj.   (15)

This expression relates all the extensive variables of the system, and it can be used to derive all the equations of state in a manner equivalent to that given in (13). Notice that in this case, the entropy grows with the temperature (the mean value of money), so that an increase of the average money per agent is necessarily associated with an increase of entropy. This observation is in agreement with the second law of thermodynamics. Notice, furthermore, that as in the first example given above, it is possible to renormalize the macroeconomic parameters xj such that the last term of the fundamental equation (15) vanishes. Nevertheless, in order to reach the minimum value of the entropy, the temperature would have to vanish, an indication of the validity of the third law of thermodynamics.
Notice, finally, that the integration over the microeconomic parameter λ1 has been performed over the interval [0, ∞), so that the corresponding macroscopic parameter does not appear in the partition function, i.e., y1 = 0. However, it is also possible to consider the interval [λ1_min, λ1_max] so that the macroscopic variables λ1_min and λ1_max reappear in the fundamental equation and can be used as extensive variables which enter the conservation law (4). In this and further examples, the limits of the interval are chosen in order to obtain analytical compact expressions and are not motivated by economic considerations. In a realistic economic system, the interval of integration is determined by the admissible values of λ1. The limits λ1_min and λ1_max will then contain all possible situations that can be realized for different values of λ1. For the sake of simplicity, we choose in this work the former case, in which the corresponding macroeconomic parameter does not enter the analysis. It is interesting to analyze the most general linear system for which
m = c0 + c1 λ1 + c2 λ2 + ··· + cn λn,   (16)

where c0, c1, ... are positive real constants. It is then straightforward to calculate the partition function

Q = e^(−c0/T) T^n/(c1 c2 ··· cn),   (17)

and the relevant thermodynamic variables
f = c0 − nT ln T + T Σi ln ci,   S = n ln T − Σi ln ci + n,   ⟨m⟩ = c0 + nT.   (18)

All the macroscopic parameters vanish, and the system depends only on the temperature. However, the total number of macroscopic parameters n does enter the expression for the entropy, so that to increase the mean value of the money by Δ⟨m⟩ = n(T2 − T1), it is necessary to increase the entropy by ΔS = n ln(T2/T1); both amounts are proportional to the total number of macroscopic parameters n. Another consequence of this analysis is that once the constants c0 and n are fixed, the mean value ⟨m⟩ can be changed only by changing the temperature of the system. This means that an isothermal positive change of ⟨m⟩ is possible only by increasing the total amount of money in the system.
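A quick numerical check of the general linear system, using the partition function reconstructed in (17) (the identity ⟨m⟩ = T² d(ln Q)/dT follows from f = −T ln Q, S = −∂f/∂T and ⟨m⟩ = f + TS) and arbitrary constants, confirms ⟨m⟩ = c0 + nT:

```python
import numpy as np

c0, c = 1.0, [2.0, 0.5, 3.0]   # n = 3 microeconomic parameters, arbitrary constants
n = len(c)

def lnQ(T):
    # ln Q for the general linear system (17): Q = exp(-c0/T) * prod_i (T/c_i)
    return -c0 / T + sum(np.log(T / ci) for ci in c)

T, dT = 2.0, 1e-6
mean_m = T**2 * (lnQ(T + dT) - lnQ(T - dT)) / (2 * dT)  # <m> = T^2 d(ln Q)/dT
print(mean_m, c0 + n * T)  # both ~7.0
```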
Nonlinear Systems

Consider the quadratic function m = c1 λ1², which generates the partition function

Q = (πT/c1)^(1/2) x2 ··· xn,   (19)

where λ1 has been integrated over the interval (−∞, +∞). The corresponding thermodynamic variables are
⟨m⟩ = T/2,   S = (1/2) ln(πT/c1) + 1/2 + Σ_{j=2..n} ln xj,   (20)

which can be put together in the fundamental equation
S = (1/2) ln(2π⟨m⟩/c1) + 1/2 + Σ_{j=2..n} ln xj.   (21)

Again, we see that the effect of considering the extreme values of the interval of integration is that the macroscopic parameter associated with λ1 does not enter the expressions for the thermodynamic variables, and, consequently, the corresponding intensive thermodynamic variable vanishes. Furthermore, the quadratic dependence on λ1 in the money function leads to a decrease of the mean value ⟨m⟩ (from T to T/2). To investigate the general case, we consider the monomial functional dependence m = c1 λ1^η, where η is a positive real constant. A straightforward calculation leads to the following partition function and thermodynamic variables:

Q = Γ(1 + 1/η) (T/c1)^(1/η) x2 ··· xn,   (22)

⟨m⟩ = T/η,   S = (1/η)[ln(T/c1) + 1] + ln Γ(1 + 1/η) + Σ_{j=2..n} ln xj,   (23)

and the intensive variables yj are given as in the previous case. It follows that the mean value decreases as η grows and becomes greater than T for η < 1. In such a hypothetical system, a way to increase the amount of money per agent is to identify the microeconomic parameter λ1 and apply the measures which are necessary for the money function m to become m ∝ λ1^η, with η < 1. If we consider a transition of an economic system from a state characterized by η1 to a state characterized by η2, maintaining the same temperature, the mean value of the money undergoes a change Δ⟨m⟩ = (1/η2 − 1/η1)T, so that for a positive change, we must require η2 < η1. Moreover, if we desire a positive change by an amount greater than the initial mean value, Δ⟨m⟩ > ⟨m⟩, we must demand η2 < η1/2. Thus, even starting from the linear system (η1 = 1), in which no isothermal increase of ⟨m⟩ is possible, we can reach a state of greater mean value ⟨m⟩ by requiring η2 < 1/2. Of course, for a positive change of ⟨m⟩, the "price" to be paid will result in an increase of entropy by an amount determined by η1 and η2. We see this possibility as an advantage of our statistical approach. In fact, we start from an equilibrium state with a lower value of ⟨m⟩ and end up
in a state with a higher value of ⟨m⟩ by choosing appropriately a particular microeconomic parameter. The positive change of the macroeconomic variable is induced by a change of a microeconomic parameter. Once the process is started, the system will naturally evolve into a state characterized by a higher value of ⟨m⟩. This natural evolution occurs because, as has been mentioned, the equilibrium state of the system corresponds to a Boltzmann-Gibbs probability distribution, which is a basic component of the approach proposed in this work.
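The relation ⟨m⟩ = T/η for the monomial money function, as reconstructed in (23), can be verified by direct quadrature; the parameter values below are arbitrary.

```python
import numpy as np
from scipy.integrate import quad

def mean_money(T, c1, eta):
    # Mean value of m = c1*lam**eta under the density exp(-m/T) on [0, inf)
    w = lambda lam: np.exp(-c1 * lam**eta / T)
    Z = quad(w, 0, np.inf)[0]
    return quad(lambda lam: c1 * lam**eta * w(lam), 0, np.inf)[0] / Z

for eta in (0.5, 1.0, 2.0):
    print(eta, mean_money(T=1.0, c1=3.0, eta=eta), 1.0 / eta)  # numeric vs T/eta
```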
Multiplicative Systems

An interesting case follows from the money function m({λ}) = c1 ln λ1. In fact, the partition function is given by

Q = ∫_x^∞ λ1^(−c1/T) dλ1 · x2 ··· xn = [T/(c1 − T)] x^(1−c1/T) x2 ··· xn,   c1 > T,   (24)

where λ1 has been integrated over the interval [x, ∞), with x ≡ λ1_min. This expression is nothing more but the partition function (cumulative probability) of the Pareto distribution density ρP ∝ 1/λ1^α, with α = c1/T, which has been shown to correctly describe the distribution of money (and other conserved economic quantities) in the upper tail of the distribution, that is, for amounts greater than x. A similar derivation of the Pareto law was recently performed in [21]. We now have the possibility to analyze the Pareto distribution in terms of macroeconomic parameters. The computation of the thermodynamic variables using (7) yields
f = −T ln[T/(c1 − T)] + (c1 − T) ln x − T Σ_{j=2..n} ln xj,   (25)

y1 = −(c1 − T)/x,   yj = T/xj,   j = 2, 3, ..., n.   (26)

Notice that the intensive variable y1, conjugate to the lower limit x of Pareto's distribution, appears with a different sign, when compared with the remaining intensive variables yj, j = 2, 3, ..., n. As a consequence of this change of sign, the conservation law is given as
d⟨m⟩ = T dS + [(c1 − T)/x] dx − Σ_{j=2..n} yj dxj,   (27)
so that if we interpret, by analogy with physical systems, the intensive variables as "forces", we can conclude that the "force" corresponding to Pareto's distribution is negative. Moreover, if from the above expressions and that of the free money f, we calculate the mean value ⟨m⟩, we obtain

⟨m⟩ = c1 ln x + c1 T/(c1 − T).   (28)

The origin of the first term is clear, because we have chosen the money function as m = c1 ln λ1, with x as the minimum value of λ1. The second term, however, is new and has the interesting property that it diverges as c1 → T; that is, the amount of money contained in the upper tail of the distribution can grow without limit while maintaining the values of T and x fixed. In the opposite limit, c1 ≫ T, this term takes the value of the average amount of money per agent, T. Perhaps this simple observation could be useful for the understanding of the monetary evolution that takes place in the upper class of the wealth distribution in certain economies. Finally, we analyze the case of the money function

m = c1 ln λ1 + d1 λ1^η,   (29)

which corresponds to the density distribution
ρ ∝ λ1^(−c1/T) e^(−d1 λ1^η/T),   (30)

where c1, d1 and η are constants. This distribution is known as the (generalized) Gamma distribution and has been used in econophysics to investigate models with multiplicative asset exchange [22]. A straightforward computation shows that the corresponding partition function can be cast in the form

Q = (1/η) Γ[(1 − c1/T)/η] (T/d1)^((1−c1/T)/η) x2 ··· xn,   (31)

an expression which is essentially equivalent to the partition function (22) following from the monomial function m ∝ λ1^η discussed in Section 3.2. In particular, the Gamma function in (31) possesses poles; this is not necessarily in disagreement with the third law of thermodynamics, since at the poles the temperature can be made to tend to a constant positive value, Tk = c1/(1 + kη), k = 0, 1, 2, .... The singular pole structure of the partition function leads to a peculiar behavior of the thermodynamic variables which deserves a more detailed and deeper analysis. Outside the poles, however, the thermodynamic behavior is essentially dictated by (23), which corresponds to a Boltzmann-Gibbs distribution with a power law dependence for the money function. This result shows that from a macroeconomic point of view there is no essential difference between the Gamma law distribution and the Boltzmann-Gibbs distribution, as far as the value for the temperature does not correspond to a singular pole of the partition function of the Gamma distribution. This result explains in a simple manner why in concrete examples it is possible to mimic the results of an exponential distribution by choosing appropriately the additional parameters of the Gamma law distribution [23].
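The Pareto-type mean value (28), as reconstructed here, can likewise be checked numerically; c1, T and x below are arbitrary values satisfying c1 > T.

```python
import numpy as np
from scipy.integrate import quad

c1, T, x = 3.0, 1.0, 2.0                      # must satisfy c1 > T
w = lambda lam: lam ** (-c1 / T)              # Pareto density, up to normalization
Z = quad(w, x, np.inf)[0]
mean_m = quad(lambda lam: c1 * np.log(lam) * w(lam), x, np.inf)[0] / Z

print(mean_m, c1 * np.log(x) + c1 * T / (c1 - T))  # both ~3.58
```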
PHASE TRANSITIONS

An important feature of many thermodynamic systems is their capability to exist in different phases with specific interior and exterior characteristics which can drastically change during a phase transition. The thermodynamic approach proposed in this work suggests the possibility of considering the phase structure of economic systems and the conditions under which a particular economic system can undergo a phase transition. In this section, we will perform such an analysis for the hypothetical systems studied above. Recall that a phase transition is usually associated with discontinuities or divergences in the thermodynamic variables or their derivatives. In particular, the behavior of the entropy function is used as a criterion for the analysis of phase transitions. Moreover, the heat capacity

C = ∂⟨m⟩/∂T   (32)

is an important thermodynamic variable which indicates the existence of second-order phase transitions. From the results presented in the preceding sections, one can show that the heat capacity for systems characterized by a monomial money function, m = c1 λ1^η, is a constant, C = 1/η, except in the trivial
case m = c0, for which the heat capacity vanishes. Since η > 0 forces the heat-capacity value to be positive, the temperature of such systems rises under an increase of "economic heat", and vice versa. An inspection of the remaining thermodynamic variables shows that such systems cannot undergo a phase transition. An interesting additional result is that such hypothetical systems are stable. In fact, a positive heat capacity is usually interpreted in statistical mechanics as a condition of stability. Consequently, systems described by a monomial money function with η > 0 are stable. The situation is different in the case of multiplicative systems. For a system characterized by a Pareto law distribution, we obtain from (25)
C = c1²/(c1 − T)².   (33)

First, we notice that this heat capacity is positive, indicating that the system is stable. However, a phase transition occurs in the limit c1 → T, where the heat capacity diverges. This is also the value for which the mean value of the money rises unlimitedly. This shows that a phase transition can be induced by choosing appropriately the parameter c1, and the corresponding economic system undergoes a drastic change with agents possessing more and more money; however, this hypothetical process must end at some stage due to the natural boundary imposed by the fact that the total amount of money in the system is conserved. At the phase transition, several thermodynamic variables diverge, and, consequently, the thermodynamic approach breaks down. As in ordinary thermodynamics, a different (nonequilibrium) approach is necessary to understand the details of the phase transition. This is beyond the scope of the present work. In the case of a system with a Gamma law distribution and partition function given in (31), the phase structure is much more complex. The analytical results are rather cumbersome expressions that cannot be written in a compact form. A preliminary numerical analysis shows that near the poles of the partition function two different scenarios are possible. The first one resembles the phase transition of a system with a Pareto law distribution, with the mean value ⟨m⟩ growing unlimitedly as the temperature approaches a critical value cT1, where the value of the constant c depends on the kind of pole of the partition function. The second scenario is characterized by a rapid reduction of ⟨m⟩, which then approaches a constant value ⟨m⟩0 which depends on the
kind of pole. Unexpectedly, we also found divergences in the corresponding heat capacity which are not related to the poles of the partition function. A deeper analysis will be necessary to understand the complete phase structure of such systems.
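The reconstructed heat capacity (33) makes the divergence at c1 → T easy to visualize; a short scan over c1 at fixed T illustrates it.

```python
def heat_capacity(c1, T):
    """Heat capacity of the Pareto-type system: C = c1^2 / (c1 - T)^2."""
    return c1**2 / (c1 - T) ** 2

T = 1.0
for c1 in (3.0, 2.0, 1.5, 1.1, 1.01):
    # C grows without bound as c1 approaches T from above
    print(f"c1 = {c1:>5}: C = {heat_capacity(c1, T):10.1f}")
```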
DISCUSSION AND CONCLUSIONS

In this work, we propose to apply the standard methods of statistical thermodynamics in order to investigate the structure and behavior of economic systems. The starting point is an economic system in which a conserved quantity is present. In certain current economies, it has been shown that such quantities exist, the money being one of them. We have shown that to any conserved quantity it is possible to associate a function m (the money function, for instance) from which all the thermodynamic variables and properties of the system can be derived. The money function depends on a set of microeconomic parameters which generate macroeconomic parameters at the level of the partition function. The thermodynamic variables depend on the macroeconomic parameters and satisfy the ordinary laws of thermodynamics. Starting from simple money functions, we consider linear, nonlinear, and multiplicative systems as examples of hypothetical economic systems in which it is possible to apply our approach, obtaining analytical results. In all the cases, we computed the most relevant thermodynamic variables and analyzed their behavior. The results show that it is possible to manipulate the microeconomic parameters in order to control the output at the level of the macroeconomic parameters. We see this possibility as an advantage of our statistical approach. One can start from an equilibrium state with some given mean value for the money and raise its value by choosing appropriately a particular microeconomic parameter. Once the process is started, the system will naturally evolve into a state characterized by a greater value of ⟨m⟩. This natural evolution occurs because, as has been mentioned, the equilibrium state of the system corresponds to a Boltzmann-Gibbs probability distribution, which is a basic component of the approach proposed in this work. This evolution, however, must be understood as in standard thermodynamics, that is, the evolution process must be quasistatic so that at each step the economic system is in equilibrium. A more realistic evolutionary model must take into account nonequilibrium states, a task that cannot be treated within the standard approach of statistical thermodynamics. It would be interesting to investigate
if the existing generalizations of nonequilibrium thermodynamics can also be applied in the context of economic systems. We propose to use the formalism of phase transitions to analyze the behavior of economic systems. The relatively simple examples studied in this work show that, in fact, a phase transition can be associated to drastic changes of the mean value of the money. Economic crises are usually accompanied by drastic changes of macroeconomic variables, and so it would be interesting to propose more sophisticated and realistic models for money functions and investigate their behavior during phase transitions. If a crisis could be understood this way, an appealing problem would be to explore the possibility of controlling its consequences. To investigate this problem, it will be necessary to explore and establish the economic meaning of the microeconomic variables {λ} by using an approach based upon concepts of standard economics. This remains an open question that could be the subject of further investigations. In the context of systems with many constituents, thermodynamic interaction is an important concept which can completely modify the interior and exterior structure of the system. In physics, one can introduce thermodynamic interaction into a system by choosing appropriate potentials, because we understand the physical meaning of the Hamiltonian function. In our approach, the analog of the Hamiltonian is the money function; however, establishing the economic meaning of particular interaction terms in the money function is much more sophisticated. Therefore, we propose to use a different method to introduce thermodynamic interaction into an economic system. Recently, the theory of geometrothermodynamics [24] was formulated with the aim of describing thermodynamics in terms of geometric concepts. One of the results of this formalism is that thermodynamic interaction can be interpreted as the curvature of the equilibrium space. This has been shown to be true not only in the case of ordinary thermodynamic systems, like the ideal gas and the van der Waals gas [25, 26], but also in the case of more exotic systems like black holes [27]. This opens the possibility of introducing thermodynamic interaction by just modifying the curvature of the equilibrium space. In fact, it can be shown that all the hypothetical economic systems analyzed in this work have very simple equilibrium spaces for which the manipulation of the curvature is relatively straightforward; we expect to explore this possibility in the near future.
ACKNOWLEDGMENTS

This work was supported in part by DGAPA-UNAM, Grant no. IN106110. H. Quevedo would like to thank G. Camillis for interesting comments and suggestions.
REFERENCES

1. G. Guillaume and E. Guillaume, Sur les Fondements de l'Économique Rationnelle, Gauthier-Villars, Paris, France, 1932.
2. E. Samanidou, E. Zschischang, D. Stauffer, and T. Lux, "Agent-based models of financial markets," Reports on Progress in Physics, vol. 70, no. 3, article R03, pp. 409–450, 2007.
3. E. W. Montroll and M. F. Shlesinger, "Maximum entropy formalism, fractals, scaling phenomena, and 1/f noise: a tale of tails," Journal of Statistical Physics, vol. 32, no. 2, pp. 209–230, 1983.
4. E. Smith and D. K. Foley, "Classical thermodynamics and economic general equilibrium theory," Journal of Economic Dynamics and Control, vol. 32, no. 1, pp. 7–65, 2008.
5. W. M. Saslow, "An economic analogy to thermodynamics," American Journal of Physics, vol. 67, no. 12, pp. 1239–1247, 1999.
6. H. E. Stanley, V. Afanasyev, L. A. N. Amaral et al., "Anomalous fluctuations in the dynamics of complex systems: from DNA and physiology to econophysics," Physica A, vol. 224, no. 1-2, pp. 302–321, 1996.