English, 320 pages, 2021
Complexity and Complex Chemo-Electric Systems
Complexity and Complex Chemo-Electric Systems Stanisław Sieniutycz Warsaw University of Technology, Faculty of Chemical and Process Engineering, Poland
Elsevier Radarweg 29, PO Box 211, 1000 AE Amsterdam, Netherlands The Boulevard, Langford Lane, Kidlington, Oxford OX5 1GB, United Kingdom 50 Hampshire Street, 5th Floor, Cambridge, MA 02139, United States © 2021 Elsevier Inc. All rights reserved. No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording, or any information storage and retrieval system, without permission in writing from the publisher. Details on how to seek permission, further information about the Publisher’s permissions policies and our arrangements with organizations such as the Copyright Clearance Center and the Copyright Licensing Agency, can be found at our website: www.elsevier.com/permissions. This book and the individual contributions contained in it are protected under copyright by the Publisher (other than as may be noted herein). Notices Knowledge and best practice in this field are constantly changing. As new research and experience broaden our understanding, changes in research methods, professional practices, or medical treatment may become necessary. Practitioners and researchers must always rely on their own experience and knowledge in evaluating and using any information, methods, compounds, or experiments described herein. In using such information or methods they should be mindful of their own safety and the safety of others, including parties for whom they have a professional responsibility. To the fullest extent of the law, neither the Publisher nor the authors, contributors, or editors, assume any liability for any injury and/or damage to persons or property as a matter of products liability, negligence or otherwise, or from any use or operation of any methods, products, instructions, or ideas contained in the material herein. 
Library of Congress Cataloging-in-Publication Data A catalog record for this book is available from the Library of Congress British Library Cataloguing-in-Publication Data A catalogue record for this book is available from the British Library ISBN: 978-0-12-823460-0
For information on all Elsevier publications visit our website at https://www.elsevier.com/books-and-journals
Publisher: Susan Dennis Acquisitions Editor: Anita A. Koch Editorial Project Manager: Mariana L. Kuhl Production Project Manager: Bharatwaj Varatharajan Cover Designer: Matthew Limbert Typeset by SPi Global, India
Preface

The book on Complexity and Complex Chemo-Electric Systems is the second volume on complexity and systems written by the same author. The first book, titled Complexity and Complex Thermo-Economic Systems and published at the beginning of 2020, focused on thermodynamic and economic aspects of complexity and complex systems. Although the first book appeared not long ago, our world has since changed significantly. We now attempt to live and work in a world partly fragmented by the coronavirus pandemic, i.e., in a world different from the previous one, full of problems that an ordinary human being could not have expected two years ago. The structure of and changes in human decisions must certainly involve more coordinates related to current health and medical recommendations. We are likely to experience an impact upon organizations, societies, the environment, science, medicine, etc., yet the proportions between these components will differ from before. All of this implies the growing role of biological and medical knowledge, which will, at least for a period of time, dominate over purely scientific information stemming from the exact sciences. Needless to say, however, thermodynamics, as the basic macroscopic science, will invariably remain an indispensable component in the modeling, analysis, and optimization of the majority of practical, biological, and even living systems, in particular those considered in the present book. It is probably familiar to the majority of readers that most thermodynamic models take into account energy and matter balances, the results of the invariance principles of physics. These balances incorporate limitations on the working parameters of real processes, which should be taken into account when formulating constraints on the performance of all macroscopic systems, including living systems.
Thermodynamics, kinetic theory, and/or experiments provide the data on static and transport properties needed in calculations. These data are necessary to express balance or conservation laws for energy and substances in terms of the variables used in systems modeling, analysis, and synthesis, stages that sometimes terminate in an optimization problem (after the selection of state variables, controls, and parameters). In fact, thermodynamic variables frequently constitute state and/or control coordinates in systems optimizations; moreover, some thermodynamic variables or their functions may constitute performance criteria of optimization. Thermodynamics may also influence the formulation of paraeconomic optimization criteria governing the thermoeconomic optimization of systems, as, e.g., those in our first (2020) complexity book.
Briefly, the basic purpose of the present book is to investigate formulations, solutions, and applications describing the optimal or suboptimal performance of selected chemo-electric systems, including those considered from the standpoint of a prescribed optimization criterion (complexity measure, chemical production, power yield, topological structure, economic profit, discretization scheme, iterative strategy, simulation time, convergence test, technical index, etc.). The introductory Chapter 1 sketches the applications of classical thermodynamics to complex disequilibrium systems such as solutions of macromolecules, magnetic hysteresis bodies, viscoelastic fluids, polarizable media, etc. To be properly described, these systems require certain extra variables, the so-called internal variables, introduced into the fundamental Gibbs equation. Irreversible systems belonging to extended irreversible thermodynamics (EIT) are also included and briefly analyzed. In this extended case, dissipative fluxes are independent variables treated on an equal footing with the classical variables of thermostatics. Systems governed by Gibbs equations with additional “internal” variables are studied in order to describe viscoelastic and viscoplastic materials, and fluids with high diffusive fluxes of heat, mass, and momentum. Rational and extended thermodynamic theories are appropriate tools for their analysis. Finally, we consider complex solutions of macromolecules, magnetic hysteresis bodies, viscoelastic fluids, and polarizable media, i.e., all systems which exhibit involved thermodynamic behavior and for which internal variables are essential in their thermodynamic description. Chapter 2 deals with systems characterized by complex states which undergo complex transformations. In this book, they are illustrated by equations of chemical invariants in systems of electroneutral components.
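The notion of reaction invariants can be made concrete with a small computation. Since the mole-number vector changes only along the columns of the stoichiometric matrix S, every invariant is a vector y with yᵀS = 0, i.e., an element of the left null space of S. The sketch below (the two reactions, steam reforming and water-gas shift, are an illustrative assumption of this example, not taken from the book) computes a rational basis of that null space by Gaussian elimination:

```python
from fractions import Fraction

def null_space(rows):
    """Rational basis of the null space of a matrix given as a list of rows."""
    m = [[Fraction(x) for x in row] for row in rows]
    ncols = len(m[0])
    pivots, r = [], 0
    for c in range(ncols):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        m[r] = [x / m[r][c] for x in m[r]]          # normalize pivot row
        for i in range(len(m)):
            if i != r and m[i][c] != 0:             # eliminate column c elsewhere
                f = m[i][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        pivots.append(c)
        r += 1
    free = [c for c in range(ncols) if c not in pivots]
    basis = []
    for fc in free:                                  # one basis vector per free column
        v = [Fraction(0)] * ncols
        v[fc] = Fraction(1)
        for row_i, pc in enumerate(pivots):
            v[pc] = -m[row_i][fc]
        basis.append(v)
    return basis

# Stoichiometric matrix S: rows = species (CH4, H2O, CO, H2, CO2),
# columns = reactions (CH4 + H2O -> CO + 3H2, CO + H2O -> CO2 + H2).
S = [[-1, 0], [-1, -1], [1, -1], [3, 1], [0, 1]]
# Reaction invariants y satisfy y^T S = 0, i.e. y lies in the null space of S^T.
ST = [list(col) for col in zip(*S)]
invariants = null_space(ST)
for y in invariants:
    print(y)
```

With 5 species and 2 independent reactions, the computation returns 3 invariant vectors (in effect, the C, H, and O atom balances); equating y·n before and after reaction for each basis vector y reproduces the material balances discussed in the chapter.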
The chapter synthesizes valuable investigations toward a systematic method for identifying reaction invariants and mole balances for complex chemistries. Reaction invariants are quantities that take the same values before, during, and after a reaction. Dealing with these invariants is usually related to the control aspects of a system, as well as to their use in process design and the planning of experiments. The material balances for chemically reacting mixtures correspond exactly to equating these invariants before and after the reaction. Following the important paper by Gadewar et al. (2001), we present a systematic method for determining the reaction invariants from any postulated set of chemical reactions. The described strategy not only helps in checking the consistency of experimental data and reaction chemistry but also greatly simplifies the task of writing material balances for complex reaction chemistries. One of the important applications of this method is the automation of mole balances in the conceptual design of chemical processes. Further on, the text is devoted to complex states and complex transformations occurring during optical instabilities as well as in growth and aging phenomena. In Chapter 3 we describe many meaningful aspects of Heylighen’s discussion of evolving complex systems. Although the growth of complexity during evolution seems obvious to many
observers, it has sometimes been questioned whether such an increase objectively exists. The chapter tries to clarify the issue by analyzing the idea of complexity as a combination of two factors: variety and dependency. For both living and nonliving systems, it is argued that variation and selection automatically produce differentiation (variety) and integration (dependency). Structural complexification is produced by spatial differentiation and the selection of fit linkages between components. Functional complexification follows from the need to increase the variety of actions in order to cope with more diverse environmental perturbations, and also from the need to integrate actions into higher-order complexes in order to minimize the difficulty of decision-making. Both processes produce a hierarchy of nested supersystems or metasystems and tend to be self-reinforcing. Simplicity is a selective factor, but it does not tend to arrest or reverse overall complexification. The increase in the absolute components of fitness associated with complexification defines a preferred direction for evolution, although the process remains wholly unpredictable. Chapter 4 treats selected aspects of complexity in biological systems. We begin by pointing out the interaction between statistical mechanics, thermodynamics, and nonlinear dynamics in order to stress the importance of Mandelbrot’s discovery that fractals occur widely in nature. This opens the door to the new world of fractals, their origin, “production,” and properties. Of particular interest are the physical aspects of fractals which originate in Nature from the self-organized criticality of dynamical processes. A relevant approach to the understanding of erythrocyte behavior (in particular, erythrocyte sedimentation and its enhancement) is a theory utilizing fractal analysis. Computer results may be compared with experimental values, showing progress in the performance of the operation.
Blood poisoning phenomena are discussed along with basic aspects of the biodynamics involved. The most recent investigations in the field of erythrocytes concentrate on their detrimental changes caused by coronavirus blood infections in the human organism, as shown in Figs. 4.1 and 4.2 of this book. The first figure shows the infected blood flow; the second depicts the coronavirus SARS-CoV-2. A remarkable paper available on combination prevention for COVID-19 is cited. The reader is referred to the current literature on this topic, which grows in time. Chapter 5 reviews in large part the existing dynamic models of microbial fuel cells (MFCs) and microbial electrolysis cells (MECs), as well as the emerging approaches for their optimization and control. The mechanism of catalyst action opens the greatest possibilities in heterogeneous catalysis. Heterogeneous catalysis implies two basic problems: (a) the search for new catalysts, understood as scientifically substantiated substances showing catalytic properties with respect to a given reaction, and (b) finding (or working out) an appropriate support and binding linkage for the catalyst substance on this support, ensuring the required properties of chemical products. Bioelectrochemical systems (BESs), such as microbial fuel cells (MFCs) and microbial electrolysis cells (MECs), are capable of producing energy from renewable organic materials. Over the last decade, extensive experimental work has been dedicated to exploring BES applications for combined energy production and wastewater treatment. These
efforts have led to significant advancement in BES design and electrode materials selection, as well as a deeper understanding of the associated microbiology, which has helped bring BES-based technologies within commercial reach. Further progress toward BES commercialization necessitates the development of model-based optimization and process control approaches. Chapter 6 deals with hierarchical scaling complexities. The complexity measures discussed therein refer explicitly to a hierarchical structure and to the infinite-depth limit of a certain relevant quantity associated with the corresponding tree. The tree structure underlying a hierarchical system can be characterized both topologically, that is, by referring solely to its shape, and metrically, by considering the values of the observables associated with each of its vertices (Badii and Politi, 1997). The former approach is particularly appropriate when self-similarity does not hold. This appears to be the general case, as can be observed in biological organisms, electric discharges, diffusion-limited aggregates, dynamical systems, turbulent structures, and many other cases. Other researchers, e.g., Huberman and Hogg (1986), propose to associate complexity with the lack of self-similarity, evaluated in terms of the diversity of the hierarchy. This proposal seems to be accepted by Badii and Politi, who assert that it is best defined by referring to a tree with a finite depth n and maximal branching ratio b, as shown in Fig. 9.1 of their 1997 book, which is reproduced as Fig. 6.1 in the present volume. Chapter 7 deals with the modeling of power yield in noncatalytic chemical and electrochemical systems. The main objectives of this chapter are power generation limits, which are the basic quality indicators for energy systems. These limits are evaluated via the optimization of various converters, such as chemical or electrochemical engines, but also solar engines.
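As a minimal numerical illustration of such a power limit (a textbook sketch, not a derivation from the book itself), consider the classical endoreversible Chambadal-Novikov engine: heat flows into the converter at a finite rate q = g(T_hot − T) and is converted at Carnot efficiency relative to the intermediate temperature T. Maximizing the power over T recovers the well-known Curzon-Ahlborn efficiency at maximum power, 1 − √(T_cold/T_hot); the temperatures and conductance below are illustrative assumptions:

```python
import math

def power(T, T_hot, T_cold, g=1.0):
    """Power of an endoreversible engine: heat intake q = g*(T_hot - T)
    converted at Carnot efficiency 1 - T_cold/T relative to the
    intermediate (upper working-fluid) temperature T."""
    return g * (T_hot - T) * (1.0 - T_cold / T)

T_hot, T_cold = 600.0, 300.0   # illustrative reservoir temperatures, K

# Brute-force scan of the intermediate temperature for the power maximum.
Ts = [T_cold + i * (T_hot - T_cold) / 100000 for i in range(1, 100000)]
T_star = max(Ts, key=lambda T: power(T, T_hot, T_cold))

eta_mp = 1.0 - T_cold / T_star             # efficiency at maximum power
eta_CA = 1.0 - math.sqrt(T_cold / T_hot)   # Curzon-Ahlborn prediction
print(T_star, eta_mp, eta_CA)
```

The scan locates the optimum at T* = √(T_hot·T_cold), so the efficiency at maximum power agrees with the Curzon-Ahlborn formula; the analogous maximization of power with respect to current or flux underlies the fuel-cell power maxima discussed in Chapter 7.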
Thermodynamic analyses lead to converters’ efficiencies. Efficiency equations serve to solve problems of upgrading and downgrading of resources. While methods of static optimization, i.e., differential calculus and Lagrange multipliers, are sufficient for steady processes, dynamic optimization applies the calculus of variations and dynamic programming to unsteady processes. In reacting systems, chemical affinities constitute the prevailing components of overall efficiency. Therefore flux balances are applied to derive power in terms of the active part of chemical affinities. Examples show power maxima in fuel cells and prove the applicability of the developed theory to chemical and electrochemical systems. Chapter 8 synthesizes the concomitant action of fuels, catalysts, wastes, and poisons in chemo-electric systems. After briefly reviewing a group of valuable papers treating the effect of catalyst decay on the performance of chemical reactions, we pass to the analysis and discussion of selected works that characterize the performance of chemical reactors and SOFCs with poisoning of catalysts or electrodes. Results for catalytic heterogeneous systems are quite representative of the progress in the field of their mathematical modeling and optimization. Considered are the kinetics of contact (catalytic) reactions, surface
processes, external and internal diffusion, chemical networks, and anode-supported SOFCs for the determination of their poisoning limits. Outlined is a generalized approach to processes with chemical reaction and catalyst deactivation, based on kinetic models proposed by N.M. Ostrovskii of the Boreskov Institute of Catalysis in Russia. He demonstrates that many deactivation equations can be derived using Bodenstein’s principle of the quasisteady state approximation. With this principle, he suggests a general equation applicable to any linear mechanism and provides examples for various involved cases of chemical reaction subject to catalyst deactivation. The reader is referred to his works, rich in new ideas and knowledge. Chapter 9 develops the modeling and simulation of chemo-electro-mechanical coupling in the human heart. Computational modeling of the human heart makes it possible to predict how chemical, electrical, and mechanical fields interact throughout a cardiac cycle. Pharmacological treatment of cardiac disease has advanced significantly over the past decades, yet it remains unclear how the local biochemistry of individual heart cells translates into global cardiac function. Following the basic research of Wong et al. (2013), the chapter applies a novel, unified strategy to simulate excitable biological systems across three biological scales: from the molecular level, via the cellular level, to the organ level. To discretize the governing chemical, electrical, and mechanical equations in space, a monolithic finite element scheme is proposed. A global-local split is applied, in which the deformation of the mechanical problem and the transmembrane potential of the electrical problem are introduced globally as nodal degrees of freedom, while the state variables of the chemical problem are treated locally as internal variables at the integration point level.
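The implicit time-stepping idea underlying such simulations can be sketched in miniature on a single excitable cell. The FitzHugh-Nagumo model used below is an assumed stand-in (not the chapter's actual cardiac cell model): each backward Euler step requires solving a small nonlinear residual, which is done by a Newton-Raphson iteration on the 2x2 system, the same pattern the monolithic heart solver applies at vastly larger scale.

```python
def backward_euler_step(v_n, w_n, dt, I=0.5, a=0.7, b=0.8, eps=0.08, tol=1e-12):
    """One implicit (backward Euler) step of the FitzHugh-Nagumo model
        dv/dt = v - v^3/3 - w + I,   dw/dt = eps*(v + a - b*w),
    solved by Newton-Raphson on the 2x2 residual."""
    v, w = v_n, w_n                      # initial Newton guess: previous state
    for _ in range(50):
        # residual of the implicit equations
        f1 = v - v_n - dt * (v - v**3 / 3.0 - w + I)
        f2 = w - w_n - dt * eps * (v + a - b * w)
        # Jacobian of the residual
        j11 = 1.0 - dt * (1.0 - v * v)
        j12 = dt
        j21 = -dt * eps
        j22 = 1.0 + dt * eps * b
        det = j11 * j22 - j12 * j21
        # Newton update via Cramer's rule on the 2x2 system
        dv = (f1 * j22 - f2 * j12) / det
        dw = (j11 * f2 - j21 * f1) / det
        v, w = v - dv, w - dw
        if abs(dv) + abs(dw) < tol:      # quadratic convergence near the root
            break
    return v, w

# integrate one action-potential-like trajectory
v, w = -1.0, 1.0
for _ in range(2000):
    v, w = backward_euler_step(v, w, dt=0.1)
print(v, w)
```

Because the step is implicit, it remains stable for time steps far larger than an explicit scheme would tolerate, and the Newton iteration typically converges in a handful of iterations, mirroring the robustness and quadratic convergence noted for the full cardiac model.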
This particular discretization scheme is highly efficient and inherently modular, since it makes it possible to combine a variety of different cell types through only minor local modifications at the constitutive level. To ensure unconditional algorithmic stability, an implicit backward Euler finite difference scheme is used to discretize the resulting system in time. An incremental, iterative Newton-Raphson scheme is applied to increase algorithmic robustness and to guarantee optimal quadratic convergence. The proposed algorithm makes it possible to simulate the interaction of chemical, electrical, and mechanical fields during a cardiac cycle on a real patient-specific geometry, robustly and stably, with calculation times on the order of 4 days on a standard desktop computer. For real-time clinical applications, the purely electrical problem requires simulation times of less than 10 s. Systems optimization and mathematics in many chapters of our previous books offer a formal way of assuring the best intervention into complicated realities, either by providing limiting values of certain quantities (extremum power, minimum cost, maximum yield, etc.) or by finding economically optimal solutions (optimal trajectories and optimal controls). These solutions ensure optimal feasible profits or costs attributed to economic or exergo-economic models. Optimal solutions, also called “best solutions,” are obtained by calculations applying suitable computational algorithms. Corresponding computer programs use these algorithms, each leading to the optimal solution in its own way. Results of optimization calculations include
optimal trajectories and controls, usually in discrete forms (a discrete representation of the optimization solution). The book has some unique features of benefit to the reader, in view of the still-demanded literature information on solutions of practical optimization problems. The book and its first (2020) part extend the family of previously treated problems by the inclusion of chemical invariants, enriched references to experiments, information on the growth of complexities in evolution, extra information on the deactivation of catalysts, and other factors appropriate for using thermodynamic models and advanced methods of optimization. The literature of optimization in the chemical and economic worlds keeps growing, so that efficient literature searches are still necessary. The book is intended as a collection of chapters addressed to actively working scientists and students (both undergraduate and graduate). Associated professionals are chemical engineers, physiologists, medical engineers, control engineers, environmental engineers, pharmacologists, cardiologists, and others. The author offers basically a textbook equipped with the features of a reference book. Because of the abundance of literature discussion and the citation of numerous references, the book should constitute a valuable and helpful reference volume for any reader, including readers employed in industries and universities. Because of the textbook character of the proposed volume, the names and levels of appropriate courses are listed below. The book can be used in schools, libraries, industries, and internet centers as a directory of guided tours described in the book chapters, each tour representing a research problem. One can ask: What type of knowledge do readers need before reading the book? The book applies analytical reasoning and transfers to the reader a reasonable amount of analytical mathematics. Therefore knowledge of elementary and differential calculus is mandatory.
Basic equations of optimal control (Pontryagin’s maximum principle and the corresponding optimization theory) are a requirement only for reading Chapters 7 and 8. Since, however, satisfying this condition might be difficult for the general reader, Chapter 12 of Sieniutycz and Szwast’s 2018 book Optimizing Thermal, Chemical and Environmental Systems offers a synthesizing text on Pontryagin’s maximum principle and related criteria of dynamic optimization. Given this theoretical information, it is not assumed that using other sources will be necessary. In the most advanced examples, complete analyses are developed to achieve solutions enabling verbal descriptions of the discussed problems. Another question is: What should readers gain (academically/professionally) from reading the book? When the volume is used as a textbook, it can constitute a basic or supplementary text in the following courses, conducted mostly in engineering departments of technical universities:
• Technical thermodynamics and industrial energetics (undergraduate).
• Electrochemical reaction engineering (undergraduate).
• Electrochemical description of biological cells (undergraduate).
• Modeling and optimal control of bioelectrochemical systems (graduate).
• Anode-supported SOFCs for determination of poisoning limits (graduate).
• Complex chemo-electric systems: theory and applications (graduate).
• Computational aspects of chemo-electro-mechanical systems (graduate).
Having read the book, readers will gain the necessary information on what has been achieved to date in the field of complexity and complex systems, what new research problems could be stated, and what kind of further studies should be developed within the specialized modeling and optimization of complex systems. It is expected that the information contained in the book will help improve both the abstract and technical skills of the reader. The present book is especially intended to attract graduate students and researchers in engineering departments (especially chemical, electrochemical, and electrical engineering departments). The author hopes that the book will also be a helpful source for actively working electrochemists, engineers, and students. Finally, we would like to list other related books that target a similar audience with this type of content. Elsevier has recently published the following thermodynamically oriented books:
Variational and Extremum Principles in Macroscopic Systems (ed. by S. Sieniutycz and H. Farkas, Elsevier, Oxford, 2000).
Energy Optimization in Process Systems (by S. Sieniutycz and J. Jezowski, Elsevier, Oxford, 2009).
Energy Optimization in Process Systems and Fuel Cells (by S. Sieniutycz and J. Jezowski, Elsevier, Oxford, 2013 (2nd ed.)).
Thermodynamic Approaches in Engineering Systems (by S. Sieniutycz, Elsevier, Oxford, 2016).
Optimizing Thermal, Chemical and Environmental Systems (by S. Sieniutycz and Z. Szwast, Elsevier, Oxford, 2018).
Energy Optimization in Process Systems and Fuel Cells (by S. Sieniutycz and J. Jezowski, Elsevier, Oxford, 2018 (3rd ed.)).
Complexity and Complex Thermo-Economic Systems (by S. Sieniutycz, Elsevier, Oxford, 2020).
Reading the present book on Complexity and Complex Chemo-Electric Systems will provide the opportunity to approach all the previously mentioned books, which, owing to their unified teaching style, form a teaching cluster.
References
Badii, R., Politi, A., 1997. Complexity: Hierarchical Structures and Scaling in Physics. Cambridge University Press, Cambridge.
Gadewar, S.B., Doherty, M.F., Malone, M.F., 2001. A systematic method for reaction invariants and mole balances for complex chemistries. Comput. Chem. Eng. 25 (9–10), 1199–1217.
Huberman, B.A., Hogg, T., 1986. Complexity and adaptation. Physica D 22, 376.
Wong, J., Göktepe, S., Kuhl, E., 2013. Computational modeling of chemo-electro-mechanical coupling: a novel implicit monolithic finite element approach. Int. J. Numer. Method Biomed. Eng. 29 (10), 1104–1133. https://doi.org/10.1002/cnm.2565.
Acknowledgments

This is the second book on complexity that treats chemo-electric systems. The author started his research by collecting scientific materials on system theory and complexity principles during his 2001 stay at the Chemistry Department of The University of Chicago, and then while lecturing on systems theory to students of the Faculty of Chemical Engineering at the Warsaw University of Technology (Warsaw TU). A part of the relevant materials was obtained in the framework of two national grants, namely Grant 3 T09C 02426 from the Polish Committee of National Research (KBN) and the Hungarian OTKA Grant T-42708, the latter in cooperation with Henrik Farkas of the Department of Physics at the Budapest University of Technology and Economics. In preparing this volume the author received help and guidance from Marek Berezowski (Faculty of Engineering and Chemical Technology, Cracow University of Technology), Andrzej Ziębik (Silesian University of Technology, Gliwice), Andrzej B. Jarzębski (Institute of Chemical Engineering of the Polish Academy of Science and Faculty of Chemistry at the Silesian TU, Gliwice), Elzbieta Sieniutycz (University of Warsaw), Lingen Chen (Naval University of Engineering, Wuhan, PR China), and Piotr Kuran (Faculty of Chemical and Process Engineering at the Warsaw University of Technology). The author is also sincerely grateful to Piotr Juszczyk (Warsaw TU) for his careful and creative work in preparing all the necessary artwork for this book. An important part of preparing any book is the reviewing process; thus the author is much obliged to all researchers who patiently helped him read through subsequent chapters and who made valuable suggestions. The author, furthermore, owes a debt of gratitude to his students who attended his lectures on systems engineering in the period 2010–16.
Finally, appreciation also goes to Anita Koch, Elsevier’s Acquisitions Editor, and the whole production team at Elsevier for their cooperation, help, patience, and courtesy.
S. Sieniutycz
CHAPTER 1
Complexity in abstract and physical systems

1.1 Problem formulation

We begin by referring the reader to our first complexity book (Sieniutycz, 2020) and recalling the Lotka-Volterra oscillatory model (Lotka, 1920, 1922), whose graph is shown in Fig. 1.1. This oscillatory model yields an equation describing the ratio dX2/dX1 in the right part of Fig. 1.1, which characterizes the interactions between the populations of two species: predators (foxes) and prey (rabbits). The graph in Fig. 1.1 shows the general nature of the integral curves of Lotka’s equation, which describes the ratio dX2/dX1 in terms of X1 and X2. In the positive quadrant of the X1X2 plane the curves are closed, contained entirely within the quadrant and intersecting the axes orthogonally. Near-origin curves are very nearly elliptical (Lotka, 1920). The dynamic plot in Fig. 1.2 shows the time variability of the populations of predators (foxes) and prey (rabbits). Lotka-Volterra oscillations (Fig. 2.1 in Sieniutycz’s (2020) book), the Lorenz attractor (Fig. 2.2 therein), and the roll streamlines in Rayleigh-Benard convection (Fig. 2.3 therein) were crucial for the development of the theory of complex self-organizing systems. Consecutive autocatalytic reaction systems of the Lotka-Volterra type play an essential role in the Belousov-Zhabotinsky reaction and other oscillatory reactions. The pictorial characteristics of these models in the present book, Figs. 1.1 and 1.2, illustrate the effect of competition between dissipation and damping with restoring forces, which leads to oscillations and limit cycles. The time variability of both populations provides the ecological interpretation of the model. In this section, the scientific basis for the complexity notion is formulated in general terms, along with stressing the physical motivation for the related research.
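The closed orbits and oscillations just described can be reproduced with a few lines of numerical integration. The sketch below (rate constants and initial populations are illustrative assumptions, not values from the book) integrates the Lotka-Volterra equations dX1/dt = aX1 − bX1X2, dX2/dt = −cX2 + dX1X2 with a Runge-Kutta step, and also evaluates the model's conserved quantity, whose constancy is what makes every orbit in the positive quadrant closed:

```python
import math

def lotka_volterra(x1, x2, a=1.0, b=0.02, c=1.0, d=0.01):
    """Right-hand side of the model: x1 = prey (rabbits), x2 = predators (foxes)."""
    return a * x1 - b * x1 * x2, -c * x2 + d * x1 * x2

def rk4_step(x1, x2, dt):
    """One classical 4th-order Runge-Kutta step."""
    k1 = lotka_volterra(x1, x2)
    k2 = lotka_volterra(x1 + dt * k1[0] / 2, x2 + dt * k1[1] / 2)
    k3 = lotka_volterra(x1 + dt * k2[0] / 2, x2 + dt * k2[1] / 2)
    k4 = lotka_volterra(x1 + dt * k3[0], x2 + dt * k3[1])
    x1 += dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6
    x2 += dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6
    return x1, x2

def invariant(x1, x2, a=1.0, b=0.02, c=1.0, d=0.01):
    """Conserved quantity of the model; constant along every closed orbit."""
    return d * x1 - c * math.log(x1) + b * x2 - a * math.log(x2)

x1, x2 = 200.0, 80.0            # illustrative initial populations
traj = [(x1, x2)]
for _ in range(5000):           # integrate to t = 50
    x1, x2 = rk4_step(x1, x2, dt=0.01)
    traj.append((x1, x2))
print(min(t[0] for t in traj), max(t[0] for t in traj))
```

Plotting traj in the X1X2 plane gives a closed curve around the fixed point (c/d, a/b) = (100, 50), while plotting each component against time gives the out-of-phase prey and predator oscillations of Fig. 1.2.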
The genesis of the notion of “classical complexity,” born in the context of early computer science, is briefly reviewed, in particular by referring the reader to the physical viewpoint presented in Ch. 1 of Badii and Politi’s (1997) book. Next, similarly as in Ch. 1 of their book, different methodological questions arising in the practical realization of effective indicators of complexity are exemplified. Badii and Politi (1997) are convinced that the success of modern science is determined by the success of the experimental method. The present author believes, however, that this opinion is difficult to reconcile with the great achievements of many pure theorists. A good example is provided by Einstein’s vast discoveries, for whom logic and purely intellectual speculation
Fig. 1.1 Lotka-Volterra graph describing the interactions between populations of predators and prey.

Fig. 1.2 Time variability of the populations of predators (foxes) and prey (rabbits).
was sufficient to formulate the physically valid, while distinctively sophisticated, tensorial theory of gravitation (Landau and Lifshitz, 1974). Remarkably, Einstein’s theory was confirmed by experiments performed over the years and, what is more, by strikingly diverse experiments. Another remarkable example is the derivation of the whole of contemporary statistical
physics from a single unifying/governing principle of Gauss’ error law, as shown by B.H. Lavenda in his book Statistical Physics—A Probabilistic Approach (Lavenda, 1991). Closing this commentary, we turn to further considerations. Current measurements reach extreme accuracy and reproducibility, especially in some fields, thanks to the possibility of conducting experiments under well-controlled conditions (Badii and Politi, 1997, Sec. 1.1). Accordingly, the inferred physical laws are “designed” so as to yield unambiguous predictions. Whenever substantial disagreement is found between theory and experiment, this is attributed either to unforeseen external factors or forces, or to incomplete knowledge of the system’s state. In the latter case, the procedure follows a reductionist approach in which the system is observed with increased resolution in the search for elementary constituents (Badii and Politi, 1997). Matter has been split into molecules, atoms, nucleons, and quarks, thus reducing reality to a huge number of bricks, mediated by only three fundamental forces: nuclear, electro-weak, and gravitational interactions (Badii and Politi, 1997). The view that everything can be traced back to such a small number of different types of particles and dynamical laws is certainly gratifying. Can one thereby say, however, that we understand the origins of such phenomena of Nature as earthquakes, weather variations, the growing of trees, or the origin and evolution of life? One could say: in principle, yes (Badii and Politi, 1997). Excluding the possible existence of other unknown forces, we have just to fix the relevant initial conditions for each of the elementary particles and insert them into the governing dynamical equations to determine the solution. However, even without giving numbers, this attempt evidently becomes worthless, at least because of the immense size of the problem.
An even more fundamental objection to this attitude is that real understanding implies the achievement of a synthesis from the observed data, with the elimination of variables irrelevant to a "sufficient" description of the phenomenon. An old, well-known example shows that the equilibrium state of a one-component gas is accurately described by only three macroscopic variables (T, P, V), linked by a state equation. This simple gas is viewed as a collection of essentially independent subregions, where the "internal" degrees of freedom can safely be ignored. The change of descriptive level, from the microscopic to the macroscopic, allows recognition of the "inherent simplicity of this system" (Badii and Politi, 1997, p. 4). However, such an "exaggeratedly viewed synthesis" is no longer possible when it is necessary to investigate motion at a mesoscopic scale determined, e.g., by an impurity. Actually, the trajectory of a Brownian particle (e.g., a pollen grain) in a fluid can be exactly accounted for only with knowledge of the forces exerted by the surrounding molecules. While the problem is again intractable, a partial resolution has been found thanks to the passage from descriptions of single items to descriptions of ensembles. Instead of tracing a single orbit, we evaluate the probability for the particle to be in a given state, which is equivalent to considering a family of orbits with the same initial conditions but experiencing different microscopic configurations of the fluid. This new level of description (after shortcut operations) leads to a Brownian motion
model in which knowledge of the macroscopic variables again suffices. The Brownian particle constitutes an open subsystem evolving erratically, subjected to random fluctuations on the one side and frictional damping on the other (Badii and Politi, 1997). In addition to these effects, a deterministic drift may be present. This drift is shown to be governed by the gradient of the sum of the chemical potential of the Brownian particles, μ, and the gravitational potential, gz, i.e., effectively by the chemo-gravitational potential μ + gz. The composition of all these effects leads to a wave rather than a parabolic model for the macroscopically averaged movement of Brownian clouds (Sieniutycz, 1984). Lavenda (1991) has developed a general probabilistic approach to statistical thermodynamics based on Gauss' law of errors and on the concavity properties of the entropy. Gauss' law has been shown to replace Boltzmann's principle relating the entropy to the thermodynamic probability for isolated systems. This new probabilistic approach incorporates both Bose-Einstein and Fermi-Dirac statistics. Legendre transforms of the entropy lead to dual distributions for the intensities. Their invariance determines the physical state equations. As an illustration, dual intensities are used to derive the thermodynamic uncertainty relations. A kinetic derivation of Gauss' law of errors involves an optimal path describing the growth of fluctuations (Lavenda, 1991). Without correlations between nonequilibrium states, deterministic rates maximize the path probability. In order to take such correlations into account, a diffusional limit should be analyzed; see Lavenda (1985a,b) and Sec. 7.2.3 in Lavenda (1991). Limitations on Gauss' distribution are determined by relating the entropy and its derivative to a quasipotential governing the evolution of diffusion.
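The Brownian picture just described — random fluctuations, frictional damping, and a deterministic drift downhill along a potential — can be sketched numerically. The following minimal example (an illustration only; the mobility, noise strength, and the reduction of the chemo-gravitational potential to its gravitational part are assumptions, not taken from the works cited) integrates an overdamped Langevin equation:

```python
import math
import random

def simulate_brownian(z0, mobility, D, g, n_steps, dt, seed=0):
    """Overdamped Langevin walk of a Brownian particle in a gravity field.

    The drift is -mobility * dPhi/dz with Phi = g*z (the chemical-potential
    part of mu + g*z is taken as spatially uniform here, so only gravity
    drives the drift); fluctuations enter as Gaussian noise of strength
    sqrt(2*D*dt), and friction is implicit in the overdamped limit.
    """
    rng = random.Random(seed)
    z = z0
    for _ in range(n_steps):
        drift = -mobility * g                      # -dPhi/dz for Phi = g*z
        z += drift * dt + math.sqrt(2.0 * D * dt) * rng.gauss(0.0, 1.0)
    return z
```

Averaging the final position over many independent seeds recovers the mean settling drift of the cloud, while a single trajectory remains erratic — the single-orbit versus ensemble distinction made above.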
A number of inaccuracies in the literature on Brownian motion are explained by Lavenda's approaches, both in Lavenda (1985a,b) and in the final Chapter 8 of his statistical physics book (Lavenda, 1991). In summarizing descriptions, as, e.g., in Shlesinger et al. (1999), Brownian motion represents the simplest diffusive random walk process. More complex random walk processes can also occur, when the probability distributions describing the random jump distances and times have infinite moments. Shlesinger et al. (1999) explore the manner in which such distributions can arise and investigate how they underlie various scaling laws that play an important role in both random and deterministic systems.
1.2 Some historical aspects Although the inference of concise models is the primary aim of all science, the first formalization of complexity problems is found in discrete mathematics (Badii and Politi, 1997, Sec. 1.2). An object is represented therein as a sequence of integers, which the investigator attempts to reproduce exactly by detecting internal rules and incorporating them into the model, itself also a sequence of integers. The procedure is successful if a size reduction is obtained. For example, a periodic sequence, such as 011011011…, is readily specified by its "unit cell," 011 in this case, and by the number of repetitions. Such an approach gives rise to two disciplines:
computer science and mathematical logic (Badii and Politi, 1997). In the former, the model is a computer program and the object sequence is its output. In the latter, the model consists of the set of rules of a formal system (e.g., the procedure to extract square roots) and the object is any valid statement within that system (e.g., the statement that the square root of 9 is 3, or, generally, that $(x^2)^{1/2} = |x|$). Compression means that knowledge of the whole formal system permits the deduction of all theorems automatically, i.e., without any external information. In this view, the complexity of symbol strings is called algorithmic and is defined as the size of the minimal program which is able to reproduce the input string (Kolmogorov, 1965; Chaitin, 1966; Badii and Politi, 1997).
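The unit-cell compression idea can be made concrete with a short sketch. This handles only the periodic case — true algorithmic (Kolmogorov) complexity is uncomputable, so this is an illustration of the compression step, not of the general definition:

```python
def unit_cell(s):
    """Smallest block whose repetition (possibly truncated) reproduces s."""
    for k in range(1, len(s) + 1):
        reps, rem = divmod(len(s), k)
        # rem < k, so s[:rem] equals the truncated final copy of the cell
        if s[:k] * reps + s[:rem] == s:
            return s[:k]

def compress(s):
    """Describe s by (unit cell, total length): a size reduction iff s is periodic."""
    return (unit_cell(s), len(s))
```

For the book's example, `unit_cell("011011011")` returns `"011"`, and the pair `("011", 9)` is a shorter description that reproduces the string exactly; for an aperiodic string the "cell" degenerates to the whole string and no compression is achieved.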
1.3 Spontaneously created complexities The perplexity brought about by a symbolic pattern of unknown origin is often attributed to one of two different mechanisms. On the one hand, it may picture in our mind the result of many nearly independent inspirations or stimuli, either internal or external to the system (Badii and Politi, 1997). This is, for example, the routine scheme when the interactions between the system and its environment (bath) need to be taken into consideration. On the other hand, the pattern may be presupposed to come to light from a generic (unformed) initial condition under the action of a simple dynamical rule. It is obviously this second sort of complexity, called self-generated or spontaneously created, that we intend to characterize in this volume. In other words, and perhaps more precisely, we speak of self-generated complexity whenever the infinite iteration of a few basic rules causes the emergence of structures with features that are not shared by the rules themselves. Examples of this variety are provided by various types of symmetry breaking, as occurring in superconductors, and by long-range correlations (phase transitions, cellular automata). The relevance of these phenomena, and their universality properties discovered by methods of statistical mechanics, indicates self-generation as the most promising and meaningful paradigm for the study of complexity. Yet it should be stressed that the concept of self-generation is both too ample and too loosely defined to be transformed into a practical study or classification tool. In fact, it embraces fields as diverse as chaotic dynamics, cellular automata, and combinatorial optimization algorithms (Badii and Politi, 1997). Regardless of the unknown dynamics, the observer is first confronted with reproducing the pattern with a model to be selected in a relevant class. The choice depends on a few basic questions.
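Self-generated complexity of the kind just described can be illustrated with an elementary cellular automaton — a standard textbook example, not taken from Badii and Politi; rule 30 is one simple local rule whose iteration produces patterns far more intricate than the rule itself:

```python
def step(cells, rule=30):
    # One synchronous update of an elementary cellular automaton with
    # periodic boundaries; `rule` is the Wolfram rule number (0-255).
    table = [(rule >> i) & 1 for i in range(8)]
    n = len(cells)
    return [table[(cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]]
            for i in range(n)]

# A single seed cell iterated under rule 30 already produces an intricate,
# aperiodic-looking triangle, although the rule itself is trivially simple.
width = 31
cells = [0] * width
cells[width // 2] = 1
for _ in range(12):
    print(''.join('#' if c else '.' for c in cells))
    cells = step(cells)
```

The eight-entry lookup table is the entire "dynamical law"; everything in the printed pattern beyond that is emergent, in the sense used above.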
The pattern can either be seen as a single item to be reproduced exactly, or as one of many possible experimental outcomes. In the latter case, the model should rather apply to the source itself and describe rules common to all patterns produced by it. The single-item approach is commonly applied in computer science, whereas the ensemble approach is more appropriate in applied physics because it has a physical, statistical motivation. Two extreme cases that may occur as the basis of biological evolution (the model of a large automaton or of a small one) are described by Badii and Politi (1997). Contemporary DNA molecules are thought to be a result
of the repeated application of elementary assembly operations (self-generated complexity) and of random mutation followed by selection, i.e., by verification of the effective fitness of the product to the environment. This stage is equivalent to testing whether a symbolic pattern of DNA is recognized by an extremely complicated automaton: the environment. It must be noticed that an elaborate machine need not always yield a valuable complex output. The earlier considerations suggest that there cannot be a unique indicator of complexity; rather, we need a set of tools from various disciplines (probability and information theory, computer science, statistical mechanics, etc.). Therefore complexity is seen through an open-ended sequence of models and may be expressed by numbers or, possibly, by functions (Badii and Politi, 1997). It would be strange if the "complexity functions" used to appraise many diverse objects were themselves devoid of complexity.
1.4 Complex thermodynamic systems 1.4.1 Introduction This section outlines the theory and applications of classical disequilibrium thermodynamics in systems traditionally regarded as physically complex, such as solutions of macromolecules, magnetic hysteresis bodies, viscoelastic fluids, polarizable media, etc. The complexity of these systems is of a physicochemical and, usually, mathematical nature, because they all require certain extra variables (so-called internal variables or nonequilibrium variables) to be introduced into the fundamental equation of Gibbs. Systems of so-called extended irreversible thermodynamics (EIT) are also included and analyzed. In the standard EIT theory, dissipative fluxes are independent variables treated on an equal footing with the classical variables of thermostatics. Properties and significance of systems with Gibbs equations incorporating additional variables are studied. Such systems include viscoelastic and viscoplastic materials and fluids with high diffusive fluxes of heat, mass, and momentum. Rational and extended thermodynamic theories are the appropriate theoretical tools for the analysis of these systems. The relative merits of these approaches are discussed and the corresponding research papers are reviewed. This review, as opposed to those in the next chapters of the present book, focuses on historical aspects of complex physicochemical processes, ignoring, in principle, the structural aspects of the apparatuses and systems in which these processes run.
1.4.2 Classical and quasiclassical complex systems In this section we shall review the complex behavior of fluids, solids, polarizable media, etc., i.e., of systems which exhibit involved thermodynamic behavior and for which certain extra variables (internal variables or disequilibrium variables) are essential for a proper thermodynamic description.
The concept of an internal state variable, i.e., a variable not controllable through external conditions, originated from the analysis of some models in rheology (Lemaitre and Chaboche, 1988; Maugin, 1987, 1990a) and of electromagnetic bodies (Meixner, 1961; Kluitenberg, 1966, 1977, 1981). The goal was a theoretical framework for studying relaxation processes, such as a shock wave passing through an electrically polarizable substance. However, relaxation phenomena are not the only ones for which the concept of internal variables is useful. Theories of plasticity and fracture, and the description of dissipative effects in electro-deformable solids, can also make good use of this concept (Maugin, 1990a,b). Furthermore, some nontrivial analogies exist between plastic behavior and magnetic hysteresis. Therefore developments in plasticity theory have suitable implications for magnetic hysteresis systems. It follows that hysteresis processes can be described by using the concept of internal variables of state, provided one takes into account some residual fields at vanishing load. However, one should be careful to distinguish between the relaxational recovery of equilibrium and hysteresis processes. In relaxation, the time scale is an essential issue; in hysteresis, the relatively slow response exhibited by a ferromagnetic sample is practically rate independent. Similarly, the low-temperature plastic effects are practically independent of the rate of strain. A detailed comparison of these effects is available (Maugin, 1992). For the theory of internal variables, see also Kestin (1979), Kestin and Rice (1970), Kestin and Bataille (1980), and Muschik (1981, 1986, 1990a,b, 2007, 2008). Maugin (1992) treats magnetic hysteresis as a thermodynamic process. As contrasted with many previous, formal approaches, hysteresis is cast in the framework of nonequilibrium thermodynamics.
The process is described at the phenomenological level, in terms of constitutive equations, internal variables of state, and a dissipation function which is homogeneous of degree one. The model allows one to account for the cumulative effect of residual magnetization and describes magnetic hardening in a natural way. Both local and global stability criteria for hysteresis loops are obtained. The model, being incremental, provides an operational way to construct hysteresis loops from a virgin state by alternate loads of increasing maximum amplitude. The interaction between magnetic properties and stress or temperature variations is taken into account. An interesting analogy between solid mechanics (the viscoplastic Bingham fluid) and magnetism is shown, while a clear distinction is made between relaxation processes and hysteresis-like effects. The continuum modeling of polarizable macromolecules in solutions is the next example showing the role of internal variables (Maugin and Drouot, 1983, 1992; Maugin, 1987). Their use allows one to construct otherwise unknown constitutive equations and evolution equations describing mechanical, electrical, and chemical behavior. Also, the effects that do not contribute to dissipation can be singled out within this general formalism. Flow-induced and electrically induced phase transitions can be studied, as well as diffusion, migration, and mechanochemical effects. In addition, Maugin and Drouot (1983) treat the thermomagnetic behavior of magnetically
nonsaturated fluids, whereas Maugin and Sabir (1990) consider nondestructive testing in the mechanical and magnetic hardening of ferromagnetic bodies. The rheology of viscoelastic media (Leonov, 1976; Lebon et al., 1988), isotropic-nematic transitions in polymer solutions (Sluckin, 1981; Giesekus, 1986), and stress-induced diffusion (Tirrell and Malone, 1977) are other applications of internal variables. A number of reviews describe the requisite nonequilibrium thermodynamic framework (Bampi and Morro, 1984; Morro, 1985; Maugin, 1987). Notably, Bampi and Morro (1984) develop a hidden (internal) variable approach to nonequilibrium thermodynamics. An application of the internal variable concept within rational thermodynamics to describe viscoelasticity is also available. Developing a general scheme for the thermodynamics of simple materials, Morro (1992) embeds the thermodynamics of linear viscoelasticity into the framework of rational thermodynamics and extremum principles. This scheme is applicable to materials with internal variables and to those with fading memory. Morro (1992) provides rigorous definitions for states, processes, cycles, and even the thermodynamic laws. His thermodynamic approach is applied to viscoelastic solids and fluids. Interestingly, his approach is essentially entropy-free; its dissipative nature is assured by the negative definiteness of the half-range Fourier sine transform of a (Boltzmann) relaxation kernel. The fact that such negative definiteness yields all the conditions derived so far by various procedures is a new result. In addition, this condition turns out to be sufficient for the validity of the second law. In effect, necessary and sufficient conditions for the validity of the second law are derived for fluids. Evolution equations are analyzed. The Navier-Stokes model of a viscous fluid follows as a limiting case when some relaxation functions are equal to delta functions.
The extremum principles are shown to be closely related to the thermodynamic restrictions on the relaxation functions. The treatment is quite involved from the mathematical viewpoint, but the reader may reexamine many basic features of the rational theory in a brief and precise way. In this respect, the reader is also referred to the paper by Altenberger and Dahler (1992), which discusses applications of convolution integrals in the entropy-free statistical mechanical theory of multicomponent fluids. Historically, the starting point for extended thermodynamic theories was, roughly speaking, the paradox of infinite propagation rate encountered in the classical phenomenological equations describing the transport of mass, energy, and momentum. Attempts to overcome this paradox have appeared in the form of equations of change that are (damped) wave equations rather than classical parabolic equations. This has been demonstrated in a large number of articles, with topics ranging from theoretical developments (based on phenomenological thermodynamics or nonequilibrium statistical mechanics) to applications in fluid mechanics, chemical engineering, and aerosol science. It has been recognized that, after taking into account local disequilibria and/or inertial effects (resulting from the finite masses of the diffusing particles), the constitutive equations become non-Fourier, non-Fick, and non-Newtonian. A tutorial review of
the historical development of the elimination of the paradox of infinite propagation is available in Sieniutycz (1992a) and as Chap. 3 in TAES (Sieniutycz, 2016). The concept of relaxation time has been substantiated, and the simplicity of its evaluation for heat, mass, and momentum has been shown for ideal gases with a single propagation speed. For nonideal systems, a simple extended theory of coupled heat and mass transfer has been constructed, based on a nonequilibrium entropy. In the approach discussed, the entropy source analysis yields a formula for the relaxation matrix τ. This matrix describes the coupled transfer and corresponds to the well-known values of the relaxation times for pure heat transfer and for isothermal diffusion. Various simple forms of the wave equations for coupled heat and mass transfer appear. Simple extensions of the second differential of entropy and of the excess entropy production can be constructed. They allow one to prove the stability of the coupled heat and mass transfer equations by the second method of Lyapunov. Dissipative variational formulations can be found, leading to approximate solutions by a direct variational method. Applications of the hyperbolic equations to the description of short-time effects and high-frequency behavior can be studied. Some related ideas, dealing with a variational theory of the hyperbolic equations of heat, can be found in Vujanovic's papers, where a class of variational principles of Hamilton's type for transient heat transfer has been formulated and analyzed, and valuable approximate solutions have been found by direct variational methods (Vujanovic, 1971, 1992; Vujanovic and Djukic, 1972; Vujanovic and Jones, 1989). Flux relaxation effects are particularly strong in the diffusion of aerosols containing particles of large mass and in polymers characterized by large molecular weight.
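For pure heat conduction, the elimination of the infinite-propagation paradox can be sketched in a few lines (a standard reduction, stated here for orientation rather than quoted from the works cited). Replacing Fourier's law by a Cattaneo equation with relaxation time $\tau$ and combining it with the energy balance yields a damped wave (telegrapher) equation with a finite propagation speed:

```latex
% Cattaneo equation (relaxational generalization of Fourier's law):
\tau\,\frac{\partial \mathbf{q}}{\partial t} + \mathbf{q} = -\lambda\,\nabla T
% Energy balance for a rigid conductor:
\rho c_p\,\frac{\partial T}{\partial t} = -\nabla\cdot\mathbf{q}
% Eliminating q yields the hyperbolic (telegrapher) equation:
\tau\,\frac{\partial^2 T}{\partial t^2} + \frac{\partial T}{\partial t}
  = \alpha\,\nabla^2 T, \qquad \alpha = \frac{\lambda}{\rho c_p},
% with the finite propagation speed
c_0 = \sqrt{\alpha/\tau},
% the classical parabolic (Fourier) limit being recovered as \tau \to 0.
```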
Experiments with high-frequency laser pulses acting on solid surfaces showed that nonclassical models of heat transfer and of mass and energy diffusion are more appropriate in the case of highly nonstationary transients (Domański, 1978a,b). Acoustic absorption at high frequencies is another example (Bubnov, 1976; Carrasi and Morro, 1972, 1973; Luikov et al., 1976). The phenomenological thermodynamics of diffusion links some relaxation effects with the absence of local thermodynamic equilibrium in systems with a finite kinetic energy of diffusion (Truesdell, 1962, 1969; Sandler and Dahler, 1964; Sieniutycz, 1983, 1984, 1992a). Banach and Piekarski (1992) attempt to give meaning to temperature and pressure beyond local equilibrium and obtain disequilibrium thermodynamic potentials. To achieve these goals, they apply the following maximum principle: among all states having the same values of the conserved variables, the equilibrium state ensures the greatest value of entropy. See, e.g., Chap. 3 in TAES (Sieniutycz, 2016). The classical theory of nonequilibrium thermodynamics (de Groot and Mazur, 1984) predominantly assumes local equilibrium. Consequently, the "classical" Gibbs equation for the entropy differential is used in its well-known form, which contains the derivatives of the internal energy, volume (or density), and concentrations of the chemical constituents. Application of the
conservation laws for energy, mass, and momentum yields the well-known expression for the entropy balance:

$$\rho\,\frac{ds}{dt} = -\nabla\cdot\!\left(\frac{\mathbf{q}}{T}\right) + \mathbf{q}\cdot\nabla T^{-1} - T^{-1}\,\boldsymbol{\Pi}:\nabla\mathbf{u} \tag{1.1}$$

The right-hand side of this equation contains both a source term and a divergence term related to the conductive entropy flow. The considered equation serves to postulate admissible "phenomenological equations." All that can be said at this point is that these postulated forms do not disagree with the "traditional" second law, i.e., that they satisfy the requirement of a nonnegative entropy source. This is a brief account of the problems which arise in the classical theory of nonequilibrium thermodynamics. See also Sec. 1.2 of TAES (Sieniutycz, 2016). However, there are also nontrivial criticisms of the earlier reasoning, for which the reader is referred to the literature (Meixner, 1949, 1974; Müller, 1966, 1967, 1971a,b, 1985; Muschik, 1981, 1986, 1990a,b, 2004, 2007, 2008; Muschik et al., 2001; Lebon and Mathieu, 1983). Muschik (2004) refers to terminology; the papers Muschik (1990a, 2007) refer to the theory and answer the question: why are there so many schools in thermodynamics? Briefly, the answer is explained by the variety of possible disequilibria that can be achieved from a definite equilibrium. Muschik (1990b) considers the role of internal variables in the description of disequilibria. There are at least two solid arguments for regarding Eq. (1.1) as an insufficient representative of the general thermodynamic theory, a claim which has been essential for the development of the "extended" theories. First, the local equilibrium postulate is not valid for continua with complex internal structure, or for simple materials in the presence of high-frequency phenomena and fast transients. Second, the "equi-presence principle," recognized in rational thermodynamics (Jou et al., 1988, 1999, 2001), ensures that if a variable appears in one constitutive equation, it has full right to appear in all remaining equations of the theory.
For example, since the mass (heat) diffusion flux is present in Fick's first (Fourier's) law, this flux can also appear in the Gibbs equation. In fact, the need for such terms was already outlined by the (occasionally criticized) approach of de Groot and Mazur (1984) for diffusive mass flows, in the context of their "kinetic energy of diffusion," by Grad (1958) for the heat flow, by Sieniutycz and Berry (1989) for the diffusive entropy flow, and by a number of other researchers for other flows. The reader is referred to the regular reviews of these approaches by Jou, Casas-Vazquez, and Lebon, e.g., Jou et al. (1988, 1999, 2001). Among the many original, valuable contributions to the contemporary nonequilibrium theory, the results obtained by Baranowski have lasting value. Namely, in Sects. 5.7 and 7.3 of his book, Baranowski (1974, 1975) derives a generalized equation for the internal energy balance of a multicomponent mixture. This equation contains the enthalpy-based heat flux, the diffusion fluxes, and a generalized thermodynamic force for mass diffusion, Eq. (1.2) (Eq. (7.54) in Baranowski's book). In the absence of external fields this generalized force is
$$\mathbf{X}_i = -\left(\nabla\mu_i\right)_T - \frac{1}{\rho_i}\left[\frac{\partial(\rho_i\mathbf{v}_i)}{\partial t} + \mathrm{div}\,(\boldsymbol{\Pi}_i + \rho_i\mathbf{v}_i\mathbf{v}_i)\right] \tag{1.2}$$
where μi is the chemical potential of the ith component, Πi is the related viscous tensor, and vi is the absolute velocity of the ith species. This equation enables Baranowski to derive a generalized equation of internal energy. Next, as usual, he confronts this energy equation with the classical Gibbs equation. Essential at this point is his question: "To which degree shall we extend the corresponding form of the entropy balance?" Next, cooperating with his colleagues, he shows that the use of force (1.2) ensures the invariance of the entropy production in an arbitrary reference frame (Baranowski, 1974, p. 118, and his ref. 7). The generalized thermodynamic force (1.2) also leads to the conclusion that there is a coupling effect between diffusion and viscous flow. Such a cross effect is not predicted by traditional irreversible thermodynamics. In fact, the traditional thermodynamic theory excludes the emergence of mass concentration gradients in viscous flows, despite numerous cases described in the literature which imply the appearance of such gradients in viscous flows (Baranowski, 1974, p. 118, ref. 11). Later, Bartoszkiewicz and Miekisz (1989, 1990) investigated, in a similar context, a generalized thermodynamics founded on the Bearman-Kirkwood equations of statistical origin. They show that Prigogine's and Baranowski's generalized theorems on entropy production invariance also hold when inertia and local stresses contribute to the diffusion-dependent part of the dissipation function. In later works, Baranowski (1991, 1992) presents relevant information on the thermodynamics of elasticity and plasticity in an expanded form. In these papers Baranowski applies nonequilibrium thermodynamics to membrane transport and to diffusion in elastic media with stress fields. He characterizes the framework of linear nonequilibrium thermodynamics (LNET) by tracing the derivation of the phenomenological equations and presenting their applications in membrane transport.
He provides a theory of active transport in the realm of linear NET. He also shows the importance of individual balance equations for the coupling between diffusion and viscous phenomena. Importantly, he illustrates the efficiency of his approach by achieving a generalization of Prigogine's theorem on the invariance of the diffusional entropy production. This approach indicates possible nonlocal phenomena due to stress fields. Moreover, by applying extended irreversible thermodynamics, he outlines a more general approach to membrane transport. Baranowski (1992) explores the influence of stress fields on diffusion in isotropic elastic media. In fact, he develops the thermodynamics of solids with stress fields. He provides an extension of the chemical potential to stress fields under the (experimentally supported) assumption neglecting the role of off-diagonal elements in the stress tensor. His attention is directed to stresses developed in the solid by a diffusing component causing displacements and resulting stresses. Plastic deformations are neglected, i.e., an ideal elastic medium is assumed. Diffusion equations are derived in the one-dimensional case. It is shown that the resulting equation of change for the concentration is a difficult nonlinear integro-differential equation.
Nonlocal diffusion effects are nevertheless verified by an original experiment. Experimental results on hydrogen diffusion in a PdPt alloy are compared with the theory. The general importance of the stress field for diffusion in solids is then substantiated. The effect of stresses on the entropy production is discussed, with the conclusion that a stress-free steady state obeys the minimum entropy production principle as a stationarity criterion.
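The idea of a stress-extended chemical potential can be illustrated with a minimal one-dimensional sketch. The potential form assumed below, μ = μ⁰ + RT ln c − V̄σ_h (with hydrostatic stress σ_h and partial molar volume V̄), is a common Larché-Cahn-type choice, not Baranowski's exact formulation:

```python
R = 8.314  # gas constant, J/(mol K)

def stress_assisted_flux(D, c, dcdx, dshdx, Vbar, T):
    # Flux from j = -(D*c/(R*T)) * dmu/dx with the assumed potential
    # mu = mu0 + R*T*ln(c) - Vbar*sigma_h.
    # First term: Fick's law; second term: drift toward regions of
    # higher (tensile) hydrostatic stress, i.e., a stress-diffusion
    # cross effect of the kind discussed above.
    return -D * dcdx + (D * c * Vbar / (R * T)) * dshdx
```

With `dshdx = 0` the classical Fick flux is recovered, while a tensile stress gradient alone (`dcdx = 0`) drives a mass flow even at uniform concentration — the sort of coupling excluded by the traditional theory.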
1.4.3 Extended thermodynamics of macroscopic systems 1.4.3.1 Basic information on the theory Extended thermodynamics uses disequilibrium (flux-dependent) entropies. Kinetic theory supports the existence of such entropies, first introduced by Grad (1958). The general conservation laws retain their classical form in EIT. When they are substituted into the expression for the differential of the nonequilibrium entropy, they lead to modified expressions for the entropy flux and entropy source. In the simplest version of the extended thermodynamics of a one-component system, the second law inequality can be expressed in the form

$$\rho\,\frac{ds}{dt} + \nabla\cdot\!\left(\frac{\mathbf{q}}{T} + K\,\boldsymbol{\Pi}^{0}\cdot\mathbf{q}\right) = \mathbf{q}\cdot\!\left(\nabla T^{-1} - \mathbf{C}^{-1}\frac{d\mathbf{q}}{dt}\right) - \boldsymbol{\Pi}^{0}:\left(T^{-1}\nabla\mathbf{u} + \frac{1}{2G}\frac{d\boldsymbol{\Pi}^{0}}{dt}\right) \geq 0 \tag{1.3}$$

Phenomena associated with bulk viscosity are neglected here, and the tensor $\boldsymbol{\Pi}^{0}$ is the deviator of the viscous pressure tensor. From Eq. (1.3), the stress $\boldsymbol{\Pi}^{0}$ is related to the symmetrized velocity gradient and to the time derivative of the stress itself, a structure which leads to viscoelastic behavior. G is the shear modulus, the product of the mass density and the square of the propagation speed, $c_0^2$. C is the entropy capacity matrix, which reduces to the scalar $C = c_p T^2$ for pure heat transfer. An extra term with a new variable K appears in the entropy flux expression, in accordance with the kinetic theory. The problem of frame invariance is, however, ignored in Eq. (1.3). Effects of chemical reactions are also ignored. In fact, Eq. (1.3) is still an approximate result. A more exact extended theory has been constructed on the basis of the rational formalism (Liu and Müller, 1983; Müller, 1985; Lebon and Mathieu, 1983; Lebon and Boukary, 1988; Kremer, 1985, 1986, 1987, 1989, 1992) or on some additional postulates added to the kinetic theory (Eu, 1982, 1988). The historical aspects of these ideas, from early simplified descriptions to the contemporary theory, are presented in Müller's (1992) paper, which shows the development of a theory based on hyperbolic field equations. Here only a few introductory remarks are given. Essential for the development of EIT has been a simple and basic form of the entropy inequality introduced by Müller (1966, 1967, 1985). He assumed the second law in the form
$$\rho\,\frac{ds}{dt} + \nabla\cdot\mathbf{j}_s \geq 0 \tag{1.4}$$
containing the entropy flux $\mathbf{j}_s$, which has to be described by a constitutive equation. The idea was improved in Müller's later approaches (Müller, 1985) as well as in works by others. An advanced theory (Liu and Müller, 1983) uses Lagrange multipliers to take into account the entropy inequality (Liu, 1972). As the simplest example we consider a one-component rigid heat conductor. The set of constitutive equations is assumed in the form

$$f = f(T, T_{,i}, q_i), \qquad s = s(T, T_{,i}, q_i), \qquad Q_i = Q_i(T, T_{,i}, q_i), \qquad j_{si} = T^{-1}q_i + K_i(T, T_{,i}, q_i) \tag{1.5}$$
where f is the free energy. These equations are substituted into the entropy inequality (1.4). The resulting inequality yields the classical thermodynamic conditions, e.g., $\partial f/\partial T = -s$, as well as information about the structure of the equations describing the fluxes of entropy and heat. Nonclassical terms appear in the expression for $\mathbf{j}_s$, and the Cattaneo (1958) equation is recovered in the linear case. The free energy formula contains the heat flux as an independent variable. Lebon (1992) and Lebon et al. (1993) develop the thermodynamics of rheological materials in the context of extended irreversible thermodynamics (EIT). They show that EIT has the capability of a general unifying formalism, able to produce a broad spectrum of viscoelastic constitutive equations. In particular, linear viscoelasticity is easily interpreted within EIT. These works show how the classical rheological models of Maxwell, Kelvin-Voigt, Poynting-Thomson, and Jeffreys are obtained as special cases of the formalism developed. Lebon (1992) offers a general Gibbs equation for viscoelastic bodies and discusses its special cases, pertaining to the particular models specified earlier. Then he poses the question: what are the consequences of introducing a whole spectrum of relaxational modes for the pressure tensor, instead of working with one single mode? For that purpose the Rouse-Zimm relaxation models (Rouse, 1953; Zimm, 1956) are analyzed. These models are based on a whole relaxation spectrum; such spectra are frequently predicted by kinetic models. Without doubt, these models have proved their usefulness in describing dilute polymeric solutions. Lebon's (1992) evaluations prove that EIT is capable of coping with the relaxational spectrum of the Rouse-Zimm model. Finally, this analysis is extended to fluids described by nonlinear (non-Newtonian) constitutive equations.
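The single-mode versus multimode distinction raised by Lebon can be made concrete with a short sketch. The closed-form exponential below is the textbook step-strain response of a Maxwell element, and the moduli and relaxation times in the usage note are made-up numbers; none of this is quoted from Lebon (1992):

```python
import math

def maxwell_stress(G, tau, gamma0, t):
    # Stress of a single Maxwell element after a step strain gamma0 at t = 0:
    # tau * dsigma/dt + sigma = 0 with sigma(0) = G * gamma0,
    # hence exponential stress relaxation with relaxation time tau.
    return G * gamma0 * math.exp(-t / tau)

def spectrum_stress(modes, gamma0, t):
    # Rouse-Zimm-type description: the total stress is a sum over a whole
    # spectrum of Maxwell modes (G_k, tau_k) instead of one single mode.
    return sum(maxwell_stress(Gk, tk, gamma0, t) for Gk, tk in modes)
```

For instance, `spectrum_stress([(1000.0, 0.1), (500.0, 0.5), (100.0, 2.0)], 0.01, t)` superposes three relaxation times, producing the stretched, multiscale decay that a single mode cannot reproduce.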
This EIT-based approach to viscoelasticity can be compared with other thermodynamic approaches, based on other theories: classical (Meixner, 1949, 1961, 1966, 1968, 1972, 1974; Kluitenberg, 1966, 1977, 1981; Kestin, 1979; Kestin and
14
Chapter 1
Rice, 1970; Kestin and Bataille, 1980) or rational (Rivlin and Ericksen, 1955; Koh and Eringen, 1963; Huilgol and Phan-Thien, 1986). A series of reviews devoted to extended irreversible thermodynamics (EIT) is available. Müller's (1992) review starts with the analysis of imperfections in Cattaneo's (1958) treatment and then outlines a simplified extended theory based on a nonequilibrium (flux-dependent) entropy (Müller, 1966, 1967, 1985). Its compatibility with the entropy resulting from Grad's 13-moment theory (Grad, 1958) is shown and the problem of material frame indifference is discussed. The role of rational thermodynamics as a starting point for constructing an improved extended theory is also substantiated. This is based essentially on Liu and Müller's approach (Liu and Müller, 1983), which uses Liu's powerful method of Lagrange multipliers to take into account the entropy inequality (Liu, 1972). A nonrelativistic principle of relativity is used to make the fields explicit in velocity (Ruggeri, 1989). Constitutive equations, inertial-frame balance laws, and equilibrium conditions are obtained. A detailed comparison of the results stemming from EIT with those implied by rational thermodynamics is made. The relativistic generalization of the theory is given and relativistic analogs of the phenomenological equations of Fourier and Navier-Stokes are derived. The structure of EIT is analyzed from the viewpoint of pure mathematics (Bass, 1968). In the general case, the approaches outlined earlier result in a nonequilibrium theory of extended irreversible thermodynamics, containing independent variables that include the heat flux q, diffusion fluxes, and viscous stresses (Jou et al., 1988, 1999, 2001). The classical role of q as a (dependent) state variable had already been abandoned in the kinetic theory by Grad (1958); the use of the heat flux q as an independent variable is in complete agreement with his moment analysis.
In the general case, all diffusive fluxes (of heat, mass, and momentum) constitute extra independent variables in EIT. The earlier results were obtained by assuming that the Gibbs equation is modified by the presence of diffusive fluxes, whereas the form of the conservation laws remains unchanged. However, it was later shown (Sieniutycz, 1988, 1989, 1990; Sieniutycz and Berry, 1989) that extended Gibbs equations can be derived from a Lagrangian approach (Hamilton's stationary action) in which an extended internal energy is extracted from the total energy. The extended internal energy is obtained within the time component Gtt of an energy-momentum tensor Gik. The structure of this tensor corresponds to the classical conservation laws only when the sources of mass and entropy play a negligible role. For substantially irreversible processes, where these sources are large, the structure of Gik deviates from the classical one, and new important variables enter Gik and the energy E. These new variables have been called the thermal phase and the matter phase. For substantially irreversible processes, however, the energy formula and the conservation laws cannot be changed to take the sources of mass and entropy into account in a generalized variational statement. These matters are discussed in two papers
Complexity in abstract and physical systems 15

(Sieniutycz, 1990, 1992b), of which the first presents an extended but reversible hydrothermodynamics and the second the irreversible extended theory. The concept of internal variables has successfully been used to analyze relaxation in strained solids (Kestin and Rice, 1970); dielectric and magnetic relaxation (Kluitenberg, 1977, 1981; Meixner, 1961); spin relaxation in ferromagnets (Maugin, 1975); magnetoelasticity (Maugin, 1979); thermo-plastic fracture effects (Maugin, 1990a,b); electromechanical and other couplings (Maugin, 1990a); and magnetically nonsaturated fluids (Maugin and Drouot, 1983). In contrast with previous approaches, magnetic hysteresis (Maugin, 1987; Maugin and Sabir, 1990) has been cast in the framework of nonequilibrium thermodynamics, describing this phenomenon exactly in terms of constitutive equations, internal variables, and a dissipation function. An interesting analogy between solid mechanics, viscoplastic Bingham fluids, and magnetism has been shown (Maugin, 1990a). A clear distinction has been made between relaxation processes and hysteresis effects. A thermodynamic framework incorporating internal variables is summarized in the reviews by Coleman and Gurtin (1964) and Bampi and Morro (1984). Thermodynamics is a core part of most science and engineering curricula. However, most texts available to students still treat thermodynamics very much as it was presented in the 19th century, generally for historical rather than pedagogical reasons. Modern Thermodynamics (Kondepudi and Prigogine, 1998) takes a different approach and deals with the relationship between irreversible processes and entropy. This relationship is introduced early on, enabling the reader to see it at work in such processes as heat conduction and chemical reactions.
This text presents thermodynamics in a contemporary and quite exciting manner, with a wide range of applications and many exercises and examples. Another model based on nonequilibrium thermodynamics is extended so as to account for the new phenomena related to the presence of electrically polarizable macromolecules in solution (Aubert and Tirrel, 1980; Bird et al., 1977; Bird and Öttinger, 1992; Carreau et al., 1986; Drouot and Maugin, 1983, 1985, 1987, 1988; Drouot et al., 1987). The role of dissipative effects is essential, although certain less known phenomena are shown to be gyroscopic in the sense that they do not contribute to dissipation (Maugin, 1990a,b). This is discussed in the paper by Maugin and Drouot (1992), where a relatively simple account of electromechanical couplings and mechano-chemical effects is presented. See also Maugin (1990a). Maugin and Drouot (1992) developed an internal-variable-based thermodynamics of solutions of macromolecules. The relevant internal variables of state are the components of the conformation tensor. This quantity models some properties of macromolecules by treating them as "material" particles undergoing deformations. The dissipation inequality is obtained in the form of the Clausius-Duhem inequality involving the free energy. Two different types of dissipative processes are distinguished. The first type involves only the observable variables of the
thermo-electro-mechanics. The second type comprises relaxation processes (of the kind encountered in chemical kinetics) involving electric and conformational relaxation. Processes of a third type (plasticity), while essential in the work discussed earlier, do not appear here. Evolution equations for conformation, electric polarization, and chemical processes are given. An important effect for gyroscopic-type reversible processes is developed. Applications to equilibrium conformations, flow-induced and electrically induced conformational phase transitions, electric polarization, and mechano-chemical effects are discussed. Possible generalizations accounting for diffusion and for interaction between vortex fields and conformations are given. This paper should be read by everyone who has to deal with complex fluids, such as liquid crystals and emulsions. Viscoelastic bodies (solids and fluids) have been analyzed in the framework of rational thermodynamics using internal variables (Morro, 1985). Restrictions on the relaxation functions have been established (Day, 1971; Bowen and Chen, 1973; Fabrizio and Morro, 1988; Fabrizio et al., 1989). Convexity of the governing functionals has been proved and minimum principles have been established. The relaxation functions satisfy the dissipation inequality, which makes the corresponding functional convex (Fabrizio et al., 1989); hence the thermodynamic limitations imply the convexity of the governing functional. An analogous result has been found for the nonequilibrium energy function of a flowing fluid in the field representation of thermo-hydrodynamics (Sieniutycz and Berry, 1991). Extended thermodynamics of viscoelastic and non-Newtonian bodies has been formulated by Lebon and coworkers (Lebon et al., 1986, 1988, 1990, 1993).
Evren-Selamet (1995) explored both the diffusional and the hysteretic dissipation of mechanical energy into entropy and, consequently, postulated both diffusive and hysteretic origins of entropy production. Examples involve the Kelvin solid and the Maxwell viscoelastic fluid. Lavenda (1995) formulated basic principles in the thermodynamics of extremes and rare events. Statistical thermodynamics is usually concerned with the most probable behavior, which becomes almost a certainty if large enough samples are taken. But sometimes surprises are in store, where extreme behavior becomes the prevalent one. Turning his attention to such rare events, Lavenda (1995) formulates a thermodynamics of earthquakes, a topic gaining increasing attention. By properly defining entropy and energy, a temperature can be associated with an aftershock sequence, giving it an additional means of characterization. A new magnitude-frequency relation is predicted which applies to clustered aftershocks, in contrast to the Gutenberg-Richter law, which treats them as independent and identically distributed random events. Extended irreversible thermodynamics (EIT) has grown to the status of a mature theory. Typical problems investigated include compatibility with Grad's 13-moment method (Grad, 1958), frame invariance, the relation to rational thermodynamics, the role of Lagrange multipliers, and speeds of propagation (Müller, 1985; Boukary and Lebon, 1986; Lebon and Boukary,
1988). Three levels of description have been analyzed: macroscopic (thermodynamic), mesoscopic (fluctuation theory), and microscopic (projection operators). The EIT description has been compared with the memory-function approach based only on the conserved variables (Garcia Colin, 1988; Jou et al., 1988). The superiority of the extended theory over the classical one has been shown. The correspondence of the theory with classical results for ideal and real gases, as regards thermal and caloric equations of state, has been established. Explicit results have been obtained for the classical ideal gas, for degenerate Bose and Fermi gases, and for molecular gases and mixtures (Müller, 1985; Kremer, 1985, 1986, 1987, 1989). The theory was given kinetic-theory foundations by means of the Boltzmann equation for a mixture of dilute monatomic gases and the generalized Boltzmann equation for a mixture of dense simple fluids (Eu, 1980, 1982, 1988). The theory has been applied to rheology, to fluids subject to shear, and to shear-induced phase changes (Eu, 1989). Additional reciprocity conditions, resulting from the introduction of heat, matter, and momentum fluxes as state variables, have been shown to be useful in calculating transport coefficients in liquids and in estimating the nonlinear effects in heat flow and diffusion (Nettleton, 1987). These conditions have also been obtained in the framework of the maximum entropy formalism (Nettleton and Freidkin, 1989). The relation between EIT and fluctuation theory has been discussed (Jou et al., 1988). It has also been shown that EIT provides a natural framework for the interpretation of memory functions and generalized transport coefficients (Perez-Garcia and Jou, 1986). Extended chemical systems have been treated (Garcia Colin et al., 1986; Garcia-Colin and Bhalekar, 1997; Sieniutycz, 1987).
Astrophysical and cosmological applications of EIT have been pursued in the framework of a relativistic theory (Pavon, 1992; Pavon et al., 1983). Eu (1992) analyzes the relation between kinetic theory and irreversible thermodynamics of multicomponent fluids. His approach is based on an axiomatic formulation (Eu, 1980, 1982, 1987) and on a nonequilibrium entropy (Eu, 1987, 1988, 1989), i.e., on exactly those features that are supported by kinetic theories of gases and fluids. His emphasis is on the nonlinear constitutive equations compatible with the statistical mechanical theories of matter in nonequilibrium. The Boltzmann equation of dilute gases and a modified moment method are used to construct an irreversible theory. Approximation methods are employed to obtain the distribution functions. The requirement that the second law be rigorously satisfied by the approximate solution of the Boltzmann equation is the tenet under which the modified moment methods are developed. The equations for nonconserved moments are the constitutive equations. A consistency condition (in the form of a partial differential equation) is coupled to the evolution equations for the Gibbs variables. The Onsager-Rayleigh quadratic dissipation and the classical phenomenological laws follow in the limit of the linear theory close to equilibrium. The theory is extended to dense multicomponent fluids, based on a set of kinetic equations for a dense mixture of charged particles in a solvent. Tests of the constitutive equations are outlined in the framework of rheology (polymer viscoelasticity). The formal
theory of nonequilibrium thermodynamic potentials is given with an extended Gibbs-Duhem relation. Applications to fluids under shear and to the shear-induced depression of the melting point are relevant. Kremer's (1992) review extends the EIT theory to a broader class of processes. He investigates EIT of ideal and real monatomic gases, molecular gases, and mixtures of ideal monatomic gases. For the ideal and real gases, a formalism of 14 fields is developed in which the 13 fields of mass density, velocity, pressure tensor, and heat flux are supplemented by an additional scalar field associated with nonvanishing volume viscosities. Coefficients of shear and volume viscosity, thermal conductivity, three relaxation times, and two thermal-viscous coefficients are identified in the theory. The ideal gas theory results as a limiting case of the real gas approach. Explicit results are given for classical ideal gases and for degenerate Bose and Fermi gases. Then, an extended theory of molecular gases is formulated which contains 17 fields: density, velocity, pressure tensor, heat flux, intrinsic energy, and intrinsic heat flux. The intrinsic energy pertains to the rotational or vibrational energy of the molecules. Coefficients of the thermal and caloric equations of state as well as transport and rate coefficients are identified (shear viscosity, thermal conductivity, self-diffusion, absorption and dispersion of sound waves). Finally, the EIT theory for mixtures of monatomic ideal gases is analyzed. Generalized phenomenological equations follow, corresponding to the classical laws of Fick, Fourier, and Navier-Stokes. Onsager reciprocity follows without statistical arguments. The unusual breadth of the EIT theory and its capability to synthesize static, transport, and rate properties of nonequilibrium systems in one unifying approach are shown in this paper.
Nettleton (1992) treats EIT problems from the viewpoint of the general reciprocity (and antireciprocity) relations encountered in nonequilibrium systems and predicted by extended theories. These reciprocity relations result from the introduction of diffusional fluxes of mass, energy, and momentum as (extra) state variables of the nonequilibrium Gibbs equation. They are shown to be useful in calculating transport coefficients in liquids. Reciprocity reduces the number of coefficients which must be estimated from molecular approaches. By using reciprocity, certain generalizations of the Cattaneo equation of heat and the Maxwell equation of viscoelasticity can be obtained. These equations govern coupled flows of heat and matter and viscoelasticity under several creep mechanisms. The use of reciprocity in the estimation of nonlinear effects associated with the flow of heat and diffusion is shown. Nonlinear reciprocity is derived from the Fokker-Planck equation for a distribution g of values of certain microscopic operators whose averages are variables such as heat and mass fluxes. The equation for g is obtained by applying Zwanzig or Grabert projection operators to the Liouville equation. The maximum entropy formalism provides an ansatz for the solution of the Liouville equation which yields a finite number of moments. From this ansatz one can derive kinetic equations exhibiting nonlinear reciprocity by taking moments of the equation for g. It is shown that a Lagrangian formulation can lead to the reciprocity and antireciprocity properties.
Casas-Vazquez and Jou (1992) review the theory of fluctuations in the framework of EIT. Mutual contributions of the two theories are analyzed. Fluctuation theory enables one to evaluate, from a microscopic basis, the coefficients of the nonclassical terms in the entropy and in the entropy flux. The fluctuation-dissipation relations focus on the time correlation function of fluctuations in the fluxes. Equilibrium fluctuations of dissipative fluxes are considered and the second moments of the fluctuations are obtained. Connections with the Green-Kubo transport formulae are shown. Microscopic evaluations of the coefficients of the nonequilibrium entropy are presented for ideal gases, dilute real gases, and dilute polymer solutions. An expression is given for the entropy flux in terms of fluctuations. Next, the nonequilibrium fluctuations of dissipative fluxes are analyzed. The contribution of the irreversible fluxes to the entropy leads to explicit expressions for the second moments of nonequilibrium fluctuations. EIT predicts nonequilibrium contributions to the fluctuations of variables as a consequence of the nonvanishing relaxation times of the fluxes. An example is given discussing fluctuations of the heat flux in a rigid heat conductor. The second differential of the nonequilibrium entropy (with nonvanishing mixed terms) is introduced into the Einstein formula to compute the second moments of fluctuations in a nonequilibrium steady state. It is shown that the correlations of energy and heat flux and those of volume and heat flux, which vanish in equilibrium on account of the different parities of u and q and of v and q, are different from zero in nonequilibrium. This expresses the breaking of time-reversal symmetry in nonequilibrium states. The second moments obtained from the nonequilibrium entropy and from microscopic (kinetic) models are of the same order but do not coincide.
This open problem indicates the present status of the theory of fluctuations in EIT. Jou and Salhoumi (2001) propose a Legendre transform between a nonequilibrium thermodynamic potential for electrical systems using the electric flux as a nonequilibrium variable and another one using the electric field. The transforms clarify the definition of the chemical potential in nonequilibrium steady states and may also be useful in other contexts where the relaxation times of the fluxes play a relevant role. The relation between their proposal and some Legendre transforms used in the context of dissipative potentials is examined. See Sieniutycz's (1994) book for other examples of the use of the Legendre transform. Frieden et al. (1990) show that the Legendre-transform structure of thermodynamics can be replicated without any change if one replaces the entropy S by Fisher's information measure I (Fisher, 1925). The important thermodynamic property of concavity is also shown to be obeyed by I. Through this use of the Fisher information measure, the authors develop a thermodynamics that seems able to treat equilibrium and nonequilibrium situations in a manner entirely similar to the conventional one. Much effort has been devoted to Fisher's information measure (FIM), shedding much light upon its manifold physical applications (Frieden and Soffer, 1995). It can be shown that the whole field of thermodynamics (both equilibrium and nonequilibrium) can be derived from the minimum Fisher information (MFI) approach. In the paper by Frieden and Soffer (1995), FIM is specialized to
the particular but important case of translation families, i.e., distribution functions whose form does not change under translational transformations. In this case, Fisher's measure becomes shift invariant. Minimizing Fisher's measure then leads to a Schrödinger-like equation for the probability amplitude, where the ground state describes equilibrium physics and the excited states account for nonequilibrium situations. Both nonequilibrium and equilibrium thermodynamics can be obtained from a constrained Fisher extremizing process whose output is a Schrödinger-like wave equation (SWE) (Frieden et al., 2002a). Within this paradigm, equilibrium thermodynamics corresponds to the ground-state (GS) solution, while nonequilibrium thermodynamics corresponds to the excited-state solutions. The SWE appears as an output of the constrained variational process that extremizes Fisher information. Both equilibrium and nonequilibrium situations can thereby be tackled by one formalism that clearly exhibits the fact that thermodynamics and quantum mechanics can both be expressed in terms of a formal SWE, out of a common informational basis. The method is a new and powerful approach to off-equilibrium thermodynamics. Frieden et al. (2002a) discuss an application to viscosity in dilute gases and to electrical conductivity processes and thereby, by construction, show that the following three approaches yield identical results: (1) the conventional Boltzmann transport equation in the relaxation approximation, (2) the Rumer and Ryvkin Gaussian-Hermite algorithm, and (3) the authors' Fisher-Schrödinger technique. Information measures (IM) are the most important tools of information theory. They measure either the amount of positive information or of "missing" information an observer possesses with regard to a system of interest.
The most famous IM is the so-called Shannon entropy (Shannon and Weaver, 1969), which determines how much additional information the observer still requires in order to have all the available knowledge regarding a given system S, when all he or she has is a probability density function (PD) defined on appropriate elements of that system. This is thus a "missing" information measure (Plastino et al., 1996, 2005). The IM is a function of the PD only. If the observer does not have such a PD, but only a finite set of empirically determined mean values of the system, then a fundamental scientific principle called Maximum Entropy (MaxEnt) asserts that the "best" PD is the one that, while reproducing the known expectation values, otherwise maximizes Shannon's IM. Fisher's information measure (FIM), named after Fisher (1925), is a different kind of measure in two respects: (1) it reflects the amount of (positive) information of the observer, and (2) it depends not only on the PD but also on its first derivatives, a property that makes it a local quantity (Shannon's is instead a global one). The corresponding counterpart of MaxEnt is now FIM minimization, since Fisher's measure grows when Shannon's diminishes, and vice versa. The minimization referred to here (MFI) is an important theoretical tool in a manifold of disciplines, beginning with physics. In a sense it is clearly superior to MaxEnt, because the latter procedure always yields an exponential PD as the solution, while the MFI solution is a
differential equation for the PD, which allows for greater flexibility and versatility (Plastino et al., 2005). Plastino et al. (2005) analyze Fisher's variational principle in the context of thermodynamics. Their text, which shows applications of variational approaches in statistical physics and thermodynamics, constitutes Chapter II.1 of Sieniutycz and Farkas's (2005) book. Plastino et al. (2005) prove that standard thermostatistics, usually derived microscopically from Shannon's information measure via Jaynes' Maximum Entropy procedure, can equally be obtained from a constrained extremization of Fisher's information measure that results in a Schrödinger-like wave equation. The new procedure has the advantage of dealing on an equal footing with both equilibrium and off-equilibrium processes. Equilibrium corresponds to the ground-state solution and nonequilibrium to superpositions of the ground state with excited states. As an example, the authors illustrate these properties with reference to material currents in dilute gases, provided that the ground state corresponds to the usual case of a Maxwell-Boltzmann distribution. The great success of thermodynamics and statistical physics depends crucially on certain necessary mathematical relationships involving energy and entropy (the Legendre-transform structure of thermodynamics). It has been demonstrated that these relationships remain valid if one replaces S by Fisher's information measure (FIM). Much effort has recently been focused upon FIM. Frieden and Soffer (1995) have shown that FIM provides a powerful variational principle that yields most of the canonical Lagrangians of theoretical physics. Markus and Gambar (2003) discuss a possible form and meaning of the Fisher bound and of physical information in some special cases. They suppose that a usual choice of bound information may describe the behavior of dissipative processes.
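The reciprocity between the two measures (Fisher's grows when Shannon's diminishes) can be checked numerically. The following sketch (illustrative code, not taken from the source; the function names are ours) discretizes a Gaussian PD of width σ, for which the exact values are H = ½ln(2πeσ²) and I = 1/σ²:

```python
import numpy as np

def shannon_entropy(p, dx):
    """Differential Shannon entropy H = -∫ p ln p dx (global: a functional of p alone)."""
    mask = p > 0
    return -np.sum(p[mask] * np.log(p[mask])) * dx

def fisher_information(p, dx):
    """Fisher information I = ∫ (dp/dx)^2 / p dx (local: involves the derivative of p)."""
    dp = np.gradient(p, dx)
    mask = p > 1e-12          # avoid dividing by the vanishing tails
    return np.sum(dp[mask] ** 2 / p[mask]) * dx

def gaussian(x, sigma):
    return np.exp(-x**2 / (2.0 * sigma**2)) / (sigma * np.sqrt(2.0 * np.pi))

x = np.linspace(-10.0, 10.0, 20001)
dx = x[1] - x[0]

for sigma in (0.5, 1.0, 2.0):
    p = gaussian(x, sigma)
    # Exact values for comparison: H = 0.5*ln(2*pi*e*sigma^2), I = 1/sigma^2.
    print(f"sigma={sigma}: H={shannon_entropy(p, dx):.4f}  I={fisher_information(p, dx):.4f}")
```

Running the loop shows H decreasing while I increases as σ shrinks; the local, derivative-based character of I is visible in the use of np.gradient, whereas H depends on the values of p alone.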
In view of the recognition that equilibrium thermodynamics can be deduced from a constrained Fisher information extremizing process, Frieden et al. (2002b) show that, more generally, both nonequilibrium and equilibrium thermodynamics can be obtained from such a Fisher treatment. Equilibrium thermodynamics corresponds to the ground-state solution, whereas nonequilibrium thermodynamics corresponds to the excited-state solutions of a Schrödinger wave equation (SWE). That equation appears as an output of the constrained variational process that extremizes the Fisher information. Both equilibrium and nonequilibrium situations can therefore be tackled by one formalism that exhibits the fact that thermodynamics and quantum mechanics can both be expressed in terms of a formal SWE, out of a common informational basis. As an application, the authors discuss viscosity in dilute gases. In conclusion, it is becoming increasingly evident that Fisher information I is vital to the fundamental nature of physics. The authors are clearly aware of the fact that the concept of I lays the foundation for both equilibrium and nonequilibrium thermodynamics, a notion which subsumes the horizon envisioned in contemporary theoretical works. The main result of all these works is the establishment, by means of Fisher information, of the above connection. Frieden et al. (2002b) stress the word "connection" and ask why such a link would be of interest. They answer: "Because it clearly shows that thermodynamics and quantum
mechanics can both be expressed by a formal SWE, out of a common informational basis." Applications of these ideas are also visible in some works on variational thermodynamics attempting to set a Hamiltonian formulation as a basis for the quantization of thermal processes (Markus, 2005; Vázquez et al., 2009). The unification of thermodynamics and quantum mechanics is also postulated by Gyftopoulos (1998), Beretta (1986), and in other publications of these authors. Markus (2005) applies a Hamiltonian formulation to accomplish the quantization of dissipative thermal fields, using heat conduction as an example. He introduces the energy and number operators, the fluctuations, the description of Bose systems, and the q-boson approximation. The generalized Hamilton-Jacobi equation and special potentials, classical and quantum-thermodynamical, are also obtained. The author points out an interesting connection of dissipation with the Fisher and the extreme physical information. To accomplish the thermal quantization, the variational description of Fourier heat conduction is examined. The Lagrangian is constructed from a potential function that is expressed by the coefficients of a Fourier series. These coefficients are generalized coordinates, from which the canonical momenta, the Hamiltonian, and the Poisson brackets are calculated. This mathematical construction opens the way toward canonical quantization, meaning that operators can be introduced for physical quantities, e.g., energy and quasiparticle-number operators for thermal processes. The elaborated quantization procedure is applied to weakly interacting boson systems, where the nonextensive thermodynamic behavior and the q-algebra can be taken into account and successfully built into the theory. The Hamilton-Jacobi equation, the action, and the kernel can be calculated in the space of generalized coordinates.
This method, called Feynman quantization, shows that repulsive potentials, similar to a classical and a quantum-thermodynamic potential, act in the heat process. Perez-Garcia and Jou (1992) present the EIT theory as a suitable macroscopic framework for the interpretation of generalized transport coefficients. The classical thermodynamic theory holds only in the hydrodynamic limit of small frequency and small wave vector (vanishing ω and k) of the disturbances. A generalized hydrodynamics (GH) is postulated which replaces the dissipative transport coefficients by general memory functions justified by microscopic models. Memory terms constitute important ingredients of many advanced thermodynamic approaches (Nunziato, 1971; Shter, 1973; Builtjes, 1977; Altenberger and Dahler, 1992; Morro, 1985, 1992, and many others). A phenomenological theory involving higher-order hydrodynamic fluxes is given for the description of GH processes. The additional variables, playing the role of the currents of the traceless pressure tensor and of the viscous pressure, are introduced into the extended entropy and the balance equations. The entropy flux contains the most general vector that can be constructed with all the dissipative variables. As specific applications, velocity autocorrelation functions are calculated around equilibrium, and ultrafast thermometry is analyzed. A parallelism of the macroscopic theory with projection-operator techniques is outlined. Memory functions and continued
fraction expansions for transport coefficients are interpreted in terms of the generalized thermo-hydrodynamics. Llebot (1992) presents extended irreversible thermodynamics of electrical phenomena. He studies a rigid isotropic solid conductor in the presence of flows of heat and electricity. By combining the balance equations for the internal energy and the electron concentration with the nonequilibrium entropy, an expression for the entropy production is obtained containing generalized thermodynamic forces with the time derivatives of the energy and the electron flux. He next considers nonstationary electrical conduction. He shows that only EIT can preserve the positiveness of the entropy production σ in general. For sufficiently large frequencies, the classical theory can yield an erroneous expression allowing negative σ during part of the cycle, whereas EIT always gives the correct result, implying σ > 0 at each time instant. Nonequilibrium fluctuations of the electric current are analyzed. The instability of the electric current in plasma is studied in the framework of EIT. The critical value of the electric field obtained within the EIT formalism is in good agreement with that obtained on the basis of kinetic considerations (the Fokker-Planck equation). Jou and Llebot (1980) offer a treatment of fluctuations of the electric current in extended irreversible thermodynamics. Garcia Colin (1988, 1992) analyzes chemical kinetics, as described by the classical kinetic mass action law, from the viewpoint of the compatibility of this law with EIT. Consistency of the kinetic mass action law with thermodynamics is also discussed by Garcia Colin et al. (1986). The difficulty lies, of course, in the nonlinearity of the kinetics. A single-step homogeneous chemical reaction is considered, with the well-known exponential relation between the chemical flux and the ratio of the affinity to the temperature.
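For orientation, the exponential flux-affinity relation referred to here can be sketched in standard De Donder-type notation (a hedged sketch, not the source's own formula; $r_f$ denotes the forward reaction rate and $A$ the chemical affinity):

$$
J = r_f\left[1 - \exp\!\left(-\frac{A}{RT}\right)\right], \qquad A = -\sum_i \nu_i \mu_i .
$$

Close to equilibrium ($|A| \ll RT$) this reduces to the linear law $J \approx r_f A/(RT)$; far from equilibrium the relation is strongly nonlinear, which is precisely the source of the difficulty just mentioned.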
Nonequilibrium entropy is postulated to depend on the conventional variables, as well as on diffusion fluxes, heat flux, and chemical flux. Using the formalism of extended thermodynamics, an expression for the entropy source is obtained and extended phenomenological relations are postulated. It is shown that a correspondence between classical kinetics and the implications of EIT can be made provided that an ansatz is adopted concerning the appropriate form of a phenomenological function appearing in the generalized relaxation formula. Fluctuations of the chemically reacting fluid around an equilibrium reference state are discussed in the framework of EIT. It is shown that their time decay obeys an exponential rather than a linear equation due to nonvanishing relaxation times in the fluctuation formulae. The open question concerning the existence of general thermodynamic criteria suitable for predicting how a chemical reaction evolves in time is discussed. Ebeling (1983) compared with experiments the theory of hydrodynamic turbulence formulated by Klimontovich (1982, 1986), a theory which yields the effective turbulent viscosity as a linear function of the Reynolds number. Klimontovich's theory seems to be able to describe the observed effective viscosity of hydrodynamic vortices and, after introducing some modifications, the fluid flow in tubes up to very high Reynolds numbers. The good agreement
with experimental data from tube flow measurements demonstrates the usefulness of the theory and the soundness of its assumptions. Yet, in conclusion, the author states that the theory is still semiphenomenological, since phenomenological constants appear in it. The development of a statistical theory would be highly desirable. The entropy of dissipative and turbulent structures, defined by Klimontovich, is discussed by Ebeling and Engel-Herbert (1986). According to Beck (2002a,b), statistical properties of fully developed hydrodynamic turbulence can be well described using methods from nonextensive statistical mechanics. Relativistic thermal inertia systems (leading to EIT effects even in the limiting case of low velocities) have been investigated in terms of Lagrangian and Hamiltonian structures. A theory of thermodynamic transformations has been constructed in which the extended Legendre transformations play the double role characteristic of mechanics and thermodynamics (Sieniutycz, 1986, 1988, 1989, 1990, 1992b). Similar qualitative effects have been found for the theory constructed on the basis of Grad's (1958) solution. Invariant thermodynamic temperatures have been found and a theory of thermodynamic transformations developed in the context of field thermo-hydrodynamics (Sieniutycz and Berry, 1989, 1991, 1992, 1997). Valuable variational principles for irreversible wave systems were found and successfully applied to obtain approximate solutions by direct variational methods (Vujanovic and Jones, 1989; Sieniutycz and Farkas, 2005). Sieniutycz's (1992b) development of extended thermodynamics continues his earlier analysis (Sieniutycz, 1990), which views the EIT theory as a macroscopic consequence of the "micro-thermodynamic" theory of de Broglie (de Broglie, 1964, 1970). The treatment is in the context of Hamilton's action and the conservation laws for multicomponent fluids without source terms in the continuity equations (Sieniutycz, 1994).
The de Broglie theory is microscopic, quantum, and relativistic; however, its nonrelativistic and macroscopic counterpart is pursued. This is sufficient to preserve the effect of thermal inertia, inherent in that theory. In this extended thermodynamics, sources of entropy and matter result as partial derivatives of a generalized energy, E, with respect to the action-type variables ("phases") under the condition of constancy of momenta and other natural variables of E. These sources can also be obtained by differentiation of generalized thermodynamic potentials (Legendre transforms of E) with respect to the phases. Internal symmetries of the theory are shown to yield the global mass conservation law and ensure chemical stoichiometry. Vector external fields and charged systems can be considered in a generalized variational scheme. Quite general equations of motion are obtained. They correspond to the phenomenological equations of irreversible thermodynamics subject to a suitable kinetic potential or generalized energy function. Since the usual assumption allowing the energy-momentum tensor to be independent of the sources of entropy and matter cannot generally be met, the conservation laws of irreversible processes seem to be modified by the presence of dissipation (explicit phase variables). These issues are summarized in Sieniutycz (1994), yet the explicit presence of phase variables cannot be accepted because it would violate internal symmetries in the macroscopic systems considered.
Our present reevaluation of the earlier book, and of the variational formulations obtained for irreversible systems by various researchers to date, shows, moreover, that we must reject our past belief that a general, exact Hamilton's principle can exist for irreversible processes. In fact, formulations attempted over the years for the equations of chemical processes have been unsuccessful, probably because these equations necessarily contain not only four-divergences but also, side by side, certain source terms. Numerous attempts at an effective inclusion of the entropy balance equation with an irreversible source into an exact variational model have turned out to be similarly hopeless. The present author sustains this opinion, while stressing that it has nothing in common with the existence of Hamiltonian principles for non-self-adjoint operators (Vázquez et al., 1996), nor with the still-appearing Lagrangian or Hamiltonian formulations for truncated sets of equations comprising conservation laws, phenomenological equations, and equations of change for macroscopic systems. Selected, mathematically impressive formulations are found, among others, in Gambar and Markus (1993, 1994), Anthony (2000, 2001), Scholle (1994), Scholle and Anthony (1997), and others. Remarkable stochastic approaches and results are provided by Gaveau, Moro, and Toth for reaction-diffusion systems in terms of information potential, path integrals, fluctuations, rate constants, energy dissipation, and stochastic systems far from equilibrium (Gaveau et al., 1999a,b, 2001). Corresponding master equations (Gaveau et al., 2005) are formulated and analyzed in the book edited by Sieniutycz and Farkas (2005), devoted to summarizing Hamiltonian and extremum principles in thermodynamic systems.
Bennett (1982) considers the thermodynamics of computation and defines algorithmic entropy as a microscopic analog of ordinary statistical entropy in the following sense: if a macrostate p is concisely describable, e.g., if it is determined by equations of motion and boundary conditions describable in a small number of bits, then its statistical entropy is nearly equal to the ensemble average of the microstates' algorithmic entropy. Alicki and Fannes (2005) develop a line of ideas centered around entropy production and quantum dynamics, emerging from von Neumann's work on the foundations of quantum mechanics and leading to current research. The concepts of measurement, dynamical evolution, and entropy were central in von Neumann's work. Further developments led to the introduction of generalized measurements in terms of positive operator-valued measures, closely connected with the theory of open systems. Fundamental properties of quantum entropy are derived, as well as Kolmogorov–Sinai-type chaotic properties of classical dynamical systems with asymptotic entropy production. Finally, entropy production in quantum dynamical systems is linked with repeated measurement processes, and a whole research area on nonequilibrium phenomena in quantum dynamical systems seems to emerge. A stochastic parameter which appears to be related to the Kolmogorov entropy has been computed by Casartelli et al. (1976) for a system of N particles on a line with the nearest-neighbor Lennard-Jones interaction. It has been found that the parameter depends on the initial conditions and is equal to zero or to a positive value which depends on the specific energy u.
A limit seems to exist for the parameter at fixed u when N → ∞, as shown by computations from N = 10 to 200. See also Landauer (1961) and Bag and Ray (2000) for further investigations. Altenberger and Dahler (1992) propose an entropy-source-free method which allows one to answer unequivocally questions concerning the symmetry of the kinetic coefficients. It is based on nonequilibrium statistical mechanics and involves a linear functional generalization of Onsagerian thermodynamics. It extends some earlier attempts (Schofield, 1966; Shter, 1973) to generalize fluctuation dynamics and Onsager's relations using convolution integrals to link thermodynamic fluxes and forces. By using the Mori operator identity (Mori, 1965), applied to the initial values of the currents of mass and heat, constitutive equations linking the thermodynamic fluxes and forces are obtained. Correspondence of the results of the functional theory with the classical one is verified by the use of Fourier representations for the fields and for the functional derivatives. This approach provides a consistent description of transport processes with temporal and spatial dimensions and yet enables one to recover the classical results in the long-wave limit, where gradients in the system are small. The possibility of extending this linear theory to situations far from equilibrium is postulated. The extension is based on the probability densities of the phase-space variables determined for stationary nonequilibrium reference states. The roles of external fields and various reference frames are investigated, with the conclusion that the laboratory-frame description is as a rule the most suitable for the theory. Transformations of thermodynamic forces are thoroughly analyzed, focusing on violations of the Onsagerian symmetries. While endoreversible heat-to-power conversion systems operating between two heat reservoirs have been intensely studied, systems with several reservoirs have attracted little attention.
However, Sieniutycz and Szwast (1999) and Amelkin et al. (2005) have analyzed the maximum power processes of the latter systems with stationary temperature reservoirs. Amelkin et al. (2005) have found that, regardless of the number of reservoirs, the working fluid uses only two isotherms and two infinitely fast isentropes/adiabats. One surprising result is that there may be reservoirs that are never used. This feature has been explained for a simple system with three heat reservoirs. However, Gyftopoulos (1999) criticizes finite-time thermodynamics, adducing several reasonable and a number of dubious arguments. Most of Andresen's earlier papers on FTT can be found in his PhD thesis (Andresen, 1983). Andresen et al. (1984b) formulate basic principles of thermodynamics in finite time. Andresen et al. (1984c) criticize the claim that, among the several possible spatial structures, the system tends to prefer the structure producing entropy at the largest rate; in other words, the most stable stationary state is not necessarily associated with the largest rate of entropy production. Andresen (1990) regards finite-time thermodynamics (FTT) as a specific branch of irreversible thermodynamics which uses aggregated macroscopic characteristics (e.g., friction coefficients, heat conductances, reaction rates, etc.) rather than their microscopic counterparts. Treating processes which have explicit time or rate dependences, he discusses finite-time
thermodynamic potentials and the generalization of the traditional availability to the case of finite time, involving entropy production and constraints. Paths for endoreversible engines are analyzed from the viewpoint of various performance criteria, and their properties are compared with those of traditional engines (e.g., Carnot engines). Andresen (2008) postulates the need for disequilibrium entropy in finite-time thermodynamics and elsewhere. In his review of the later advances in the FTT field, Andresen (2011) describes the concept of finite-time thermodynamics which, in his opinion, can be applied not only to optimize chemical and industrial processes but also, when appropriate variables are replaced, to solve economic and possibly even ecological problems. One of the classic problems of thermodynamics has been the determination of the maximum work that might be extracted when a prepared system is allowed to undergo a transformation from its initial state to a designated final state (Andresen et al., 1983). When that final state is defined by the condition of equilibrium between the system and an environment, the maximum extractable work is generally known now as the availability A (also called exergy), a convenient shortening of "available work." In the paper by Andresen et al. (1983) the concept of availability, as an upper bound to the work that can be extracted from a given system in connection with specified surroundings, is extended to processes constrained to operate at nonzero rates or to terminate in finite times. The authors use some generic models which describe a whole range of systems in such a way that the optimal performance of the generic model is an upper bound on the performance of the real systems. The effects of the time constraint are explored in general and in more detail for a generic model in which extraction of work competes with internal relaxation.
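The reversible benchmark that these finite-time bounds tighten can be made concrete with a textbook calculation (a sketch with illustrative numbers, not the generic model of Andresen et al.): for a finite body of constant heat capacity C cooling reversibly to the environment temperature T0, integrating the Carnot factor gives the availability A = C(T1 − T0) − C·T0·ln(T1/T0).

```python
import math

def availability_finite_body(C, T1, T0):
    """Reversible availability (maximum extractable work) of a body with
    constant heat capacity C cooling from T1 to the environment at T0:
    A = C*(T1 - T0) - C*T0*ln(T1/T0), from integrating (1 - T0/T)*C dT."""
    return C * (T1 - T0) - C * T0 * math.log(T1 / T0)

# Illustrative numbers (assumed, not from the source): C in J/K, temperatures in K.
C, T1, T0 = 10.0, 600.0, 300.0
work_max = availability_finite_body(C, T1, T0)
heat_released = C * (T1 - T0)
print(work_max, heat_released)  # the availability is strictly below the heat released
```

Any process forced to finish in finite time extracts still less than this reversible bound, which is precisely the gap that the finite-time availability quantifies.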
Extensions to nonmechanical systems are indicated. The authors wish to establish the finite-time availability as a standard of performance more useful than the traditional availability of reversible processes. For further work along this line, especially in the context of finite-time exergy via HJB equations of dynamic programming, see Sieniutycz (1997, 1998a,b, 1999, 2000a,b,c, 2001a,b,c, 2013a,b), Xia et al. (2010a,b, 2011a,b), and some others. Andresen et al. (1984a) have pointed out the inadequacy of reversible thermodynamics for describing processes which proceed at nonvanishing rates, and have pointed out a number of new methods to treat this situation. The primary goal has been to obtain bounds on performance which are more realistic than the reversible ones. Some of the finite-time procedures consist in generalizations of traditional quantities, like potentials and availability; others are new, like the thermodynamic length. Since the central ideas of reversible thermodynamics are retained in finite-time thermodynamics, the authors attempt to generalize traditional concepts so as to include time in the FTT theory. Andresen and Essex (2003) give an example of the use of disequilibrium thermodynamics in characterizing biological organisms. They worked out a finite-time thermodynamic optimization of mitochondria, which are the fuel cells of the living body (see Figs. 9.14 and 9.15 in
Fig. 1.3 Schematic drawing of a mitochondrion showing the two double membranes and the location of the cytochrome chain embedded in the inner membrane (Andresen and Essex, 2003).
our first book on complexity; Sieniutycz, 2020). One of these figures is shown as Fig. 1.3 in this chapter. In mitochondria, the large free energy of reaction between hydrogen and molecular oxygen is harvested in a number of steps by the cytochrome chain. As reactants and products are exchanged with the matrix fluid, only those steps which produce ATP are included. This process is very interesting because mitochondria are the fuel cells of the body. The context is nonetheless foreign to contemporary fuel cell design; thus insights gained here are valuable for design questions generally and for the energetic optimization of industrial fuel cells specifically. This work aims at demonstrating the use of thermodynamic geometry and optimization. Connections between statistical mechanics and irreversible thermodynamics are important when investigating the finite-time content of Keizer's Σ-function (Keizer, 1987). The results currently emerging are diverse applications of thermodynamic length, which seem to be able to simplify calculations for systems as diverse as lasers, separation by diffusion, and signal encoding. Considering radiation systems, Badescu (2013) concludes that links between lost available work and entropy generation cannot be obtained in the general case, i.e., for systems in contact with arbitrary heat and/or radiation reservoirs. Reservoirs operating in local (full or partial) thermodynamic equilibrium are needed. See also Chap. 13 in Sieniutycz (2016). A main tenet of empirical sciences is the possibility of inferring the basic features of studied systems by means of the outputs generated by the system itself in the form of a measurable
quantity. This implies the possibility of relating some features of the output to other relevant features of the generating system. Working with this premise, Trulla et al. (2003) present the derivation of a general "distance-from-randomness" index for time series, scaling with the relative complexity of an unknown generating system. Their index corresponds to the Shannon entropy of the distribution of the eigenvalues of the correlation matrix derived from the embedding matrix of the series. The efficacy of the proposed index is tested by a number of simulations, demonstrating the possibility of deriving a sensible complexity index without any strong theoretical constraint. Moreover, its direct relation to the relative complexity of the network system producing the series was demonstrated, allowing for a system identification approach in a strictly data-driven manner, without any theoretical assumption. A group of papers deals with entropy and extended relativistic thermodynamics (ERT). The reader interested in causal relativistic theories is referred to the papers by Israel (1976) and Hiscock and Lindblom (1983). Concerning the application-oriented works, Pavon et al. (1980) consider heat conduction in ERT. Pavon et al. (1983) treat equilibrium and nonequilibrium fluctuations in relativistic fluids. Pavon (1992) deals with applications to astrophysics and cosmology, rather than with the general theory. The second moments of fluctuations are determined from Einstein's probability formula involving the second differential of the nonequilibrium entropy. These moments are next used to determine the transport coefficients of heat conductivity, bulk viscosity, and shear viscosity of a radiative fluid. By exploiting correlation formulae, second-order coefficients of transport are determined.
Nonequilibrium fluctuation theory, based on the assumption of the validity of the Einstein formula close to equilibrium, is used to determine the nonequilibrium corrections to the bulk viscous pressure. An analysis of the survival of protogalaxies in an expanding universe completes the presentation of astrophysical applications of ERT. Cosmological applications are discussed next. It is shown that nonequilibrium effects and transport properties (such as, e.g., bulk viscosities) play a certain role in the cosmic evolution of the universe. In that context, the following issues are outlined: cosmological evolution (described by a Bianchi-type model with bulk viscosity), production of entropy in the leptonic period, the inflationary universe, and FRW cosmology. Some cosmological aspects of the second law are also discussed. In a theoretical, general paper, Williams (2001) shows that the classical laws of thermodynamics require that mechanical systems must exhibit energy that becomes unavailable to do useful work; in thermodynamics this type of energy is called entropy. It is further shown that these laws require two metrical manifolds, equations of motion, field equations, and Weyl's quantum principles. Weyl's quantum principle requires quantization of the electrostatic potential of a particle and that this potential be nonsingular. The interactions of particles through these nonsingular electrostatic potentials are analyzed in the low-velocity limit and in the relativistic limit. Extending the gauge principle into a five-dimensional manifold, and then restricting the generality of this manifold by using the conservation principle, shows that the four-dimensional hypersurface embedded within the 5-D manifold is required to obey
Einstein's field equations. The 5-D gravitational quantum equations of the solar system are presented. Gyftopoulos (1998) discusses the statistical interpretation of thermodynamics and finds it inadequate and incomplete to explain all thermodynamic phenomena. He summarizes and compares two alternative approaches to the statistical interpretation, one purely thermodynamic and the other quantum-theoretic and thermodynamic, and then discusses the meaning of a nonstatistical entropy. References are given to past research directed toward nonstatistical interpretations of thermodynamics. According to Gyftopoulos, recent understanding of thermodynamics casts serious doubt on the conviction that, in principle, all physical phenomena obey only the laws of mechanics. The volume edited by Amoroso et al. (2002) has a unique perspective: its chapters, the majority written by world-class physicists and astrophysicists, contrast the mainstream conservative approaches with leading-edge extended fundamental models in physical theory and observation. For example, in the first of the five parts, Astrophysics and Cosmology, papers review Big Bang cosmology along with articles calling for exploration of alternatives to a Big Bang universe. This unique perspective continues through the remaining sections on extended EM theory, gravitation, quantum theory, vacuum dynamics, and space-time. The question of why there are so many schools of thermodynamics is answered concisely as follows (Muschik, 2007): there is no natural extension from thermostatics to thermodynamics. The extension from thermostatics to dynamics seems to be easy: one has to replace the reversible processes of thermostatics by real ones and to extend the balances of continuum mechanics by the appropriate thermal quantities such as heat flux density, internal energy, and entropy.
But it is not as easy as supposed, because thermostatics is usually formulated for discrete systems, whereas thermodynamics can be presented in two forms: as a nonequilibrium theory of discrete systems, or in a field formulation extending the balances of continuum mechanics. Both descriptions are used in practice, and the practice of thermodynamics is widespread: its methods are successfully applied in disciplines as different as Physics and Physical Chemistry, Mechanical and Chemical Engineering, Heat and Steam Engine Engineering, Material Science, Bio-Sciences, Energy Conversion Techniques, and Air Conditioning and Refrigeration. Therefore it is impossible to cover the different terminologies, methods, and schools completely in a brief survey. Presupposing that classical thermostatics is well known, Muschik (2007) proceeds along a way sketched in Fig. 1.7 of the TAES book (Sieniutycz, 2016). We shall end this chapter with an original summary by Jou et al. (1988), who introduced a new formulation of nonequilibrium thermodynamics known as extended irreversible thermodynamics (EIT). The basic features of this formalism and several applications are reviewed by Jou et al. (1988), focusing on points which have attracted increasing attention.
Extended irreversible thermodynamics includes dissipative fluxes (heat flux, viscous pressure tensor, electric current) in the set of basic independent variables of the entropy. Starting from this hypothesis, and by using methods similar to classical irreversible thermodynamics, evolution equations for these fluxes can be obtained. These equations reduce to the classical constitutive laws in the limit of slow phenomena, but may also be applied to fast phenomena, such as second sound in solids, ultrasound propagation, or generalized hydrodynamics (Jou et al., 1988). In contrast with the classical theory, extended thermodynamics leads to hyperbolic equations with finite speeds of propagation for thermal, diffusional, and viscous signals. Supplementary information about the macroscopic parameters is provided by fluctuation theory. The results of the macroscopic theory are confirmed by the kinetic theory of gases and nonequilibrium statistical mechanics (Lebon, 1978). The theory is particularly useful for studying the thermodynamics of nonequilibrium steady states and systems with long relaxation times, such as viscoelastic media or systems at low temperatures. There is no difficulty in formulating the theory in the relativistic context (Israel, 1976). Applications to rigid electrical conductors as well as several generalizations including higher-order fluxes are also presented. A section is devoted to the formulation of extended irreversible thermodynamics within the framework of rational thermodynamics (Perez-Garcia and Jou, 1986, 1992). Further, Jou et al. (1999) review the progress made in extended irreversible thermodynamics during the 10 years that have elapsed since the publication of their first review on the same subject (Jou et al., 1988). During this decade much effort has been devoted to achieving a better understanding of the fundamentals and a broadening of the domain of applications.
The macroscopic formulation of EIT is reviewed and compared with other nonequilibrium thermodynamic theories. The foundations of EIT are discussed on the basis of information theory, kinetic theory, stochastic phenomena, and computer simulations. Several significant applications are presented, some of them of considerable practical interest (nonclassical heat transport, polymer solutions, non-Fickian diffusion, microelectronic devices, dielectric relaxation), and some others of special theoretical appeal (superfluids, nuclear collisions, cosmology). They also outline some basic problems which are not yet completely solved, such as the definitions of entropy and temperature out of equilibrium, the selection of the relevant variables, and the status to be reserved to the H-theorem and its relation to the second law. In writing their review, the authors had four objectives in mind: to show (i) that extended irreversible thermodynamics stands at the frontiers of modern thermodynamics, (ii) that it opens the way to new and useful applications, (iii) that much progress has been achieved during the decade 1988–98, and (iv) that the subject is far from being exhausted (Jou et al., 1999).
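The hyperbolic equations with finite propagation speeds mentioned above take their simplest form for rigid heat conduction. A standard EIT sketch (following the Cattaneo (1958) law cited in the references; τ is the relaxation time, λ the thermal conductivity, ρ the density, and c the specific heat):

```latex
% Maxwell-Cattaneo law replacing Fourier's law:
\tau\,\frac{\partial \mathbf{q}}{\partial t} + \mathbf{q} = -\lambda\,\nabla T .
% Combined with the energy balance \rho c\,\partial T/\partial t = -\nabla\cdot\mathbf{q},
% it yields the hyperbolic (telegraph) equation
\tau\,\frac{\partial^{2} T}{\partial t^{2}} + \frac{\partial T}{\partial t}
  = \frac{\lambda}{\rho c}\,\nabla^{2} T ,
% whose thermal signals propagate with the finite speed
v = \sqrt{\frac{\lambda}{\rho c\,\tau}} ,
% recovering the parabolic Fourier description (infinite speed) as \tau \to 0.
```

As τ → 0 the first term vanishes and the classical diffusion equation, with its paradoxical instantaneous propagation, is recovered.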
1.4.4 Including power yield and power limits into thermodynamics

In Chapter 7 of this volume we shall treat power generation limits, which are the basic quality indicators of energy systems. They are evaluated via optimization for various converters, such as thermal, solar, and chemical (electrochemical) engines. Thermodynamic analyses lead to
converters' efficiencies. While methods of static optimization, i.e., differential calculus and Lagrange multipliers, are sufficient for steady processes, dynamic optimization applies the variational calculus and dynamic programming to unsteady processes. In reacting systems, chemical affinities constitute the prevailing components of an overall efficiency; thus flux balances are applied to derive power in terms of the active part of the chemical affinities. Examples show power maxima in fuel cells and prove the suitability of a thermal machine theory for chemical and electrochemical systems. The basic notions here are: power limits, mass and entropy flows, chemical engines, fuel cells, and the process of electrolysis. For the Stefan-Boltzmann engine, an exact expression at the optimal point cannot be determined analytically, yet the temperature can be found graphically from the chart p = f(T′). A pseudo-Newtonian model, which treats state-dependent energy exchange with a coefficient α(T³), avoids to a considerable extent the analytical difficulties associated with the acceptance of the Stefan-Boltzmann equation. The reader should consult the literature for further information (Sieniutycz and Jezowski, 2018, 3rd ed. of the book) and Chapter 7 in this volume.
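The graphical search on the power chart can also be mimicked numerically. A minimal sketch, assuming an endoreversible Stefan-Boltzmann engine with radiative intake and Carnot conversion at the upper contact temperature T′ (the model form, symbols, and numbers are illustrative, not the book's exact equations):

```python
# Reservoir temperatures [K] and radiative conductance [W/K^4] (illustrative values).
T1, T2 = 1000.0, 300.0
k = 1.0e-8

def power(Tp):
    """Power of the assumed endoreversible Stefan-Boltzmann engine:
    radiative intake q1 = k*(T1^4 - Tp^4) converted with Carnot
    efficiency (1 - T2/Tp) at the upper contact temperature Tp."""
    return k * (T1**4 - Tp**4) * (1.0 - T2 / Tp)

# Scan the admissible range T2 < Tp < T1, as one would trace the chart p = f(T').
grid = [T2 + 1.0 + i * (T1 - T2 - 2.0) / 5000.0 for i in range(5001)]
Tp_opt = max(grid, key=power)
print(Tp_opt, power(Tp_opt))  # the maximum lies strictly inside (T2, T1)
```

The scan reflects the point made in the text: the optimal contact temperature has no closed analytical form and is located by inspecting p as a function of T′.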
1.5 Equipment complexity

In technology, the term "complex equipment" refers to complex configurations and systems composed of various subsystems (Fig. 1.4). Complex equipment is a very intricate business. One piece of real equipment can have from 500 to over 1,000,000 components and require 200 to 500 steps to assemble. Complex equipment manufacturing companies tend to be low-volume businesses with a large product mix. They have traditionally been among the highest gross-margin sectors in high technology, with some running in the 55%–65% range. Recently, however, margins have eroded and many orders have dropped during the current economic downturn. In effect, complex equipment manufacturing companies have struggled to adapt to a changing market. The companies are faced with three basic challenges: managing configuration complexity efficiently, delivering flexibility in their supply chains, and optimally servicing their installed base. The first challenge, configuration complexity, has been a growing issue since the 1980s.
Fig. 1.4 Schemes of various complex equipment systems. See more pictures of this sort in our book Complexity and Complex Thermodynamic Systems (Sieniutycz, 2020).
References

Alicki, R., Fannes, M., 2005. Quantum mechanics, measurement and entropy. Rep. Math. Phys. 55 (1), 47–59.
Altenberger, A., Dahler, J., 1992. Statistical mechanical theory of diffusion and heat conduction in multicomponent systems. In: Flow, Diffusion and Transport Processes. Advances in Thermodynamics Series, vol. 6, pp. 58–81.
Amelkin, S.A., Andresen, B., Burzler, J.M., Hoffmann, K.H., Tsirlin, A.M., 2005. Thermo-mechanical systems with several heat reservoirs: maximum power processes. J. Non-Equilib. Thermodyn. 30, 67–80.
Amoroso, R.L., Hunter, G., Kafatos, M., Vigier, J.P., 2002. Gravitation and Cosmology: From the Hubble Radius to the Planck Scale. In: Proceedings of a Symposium in Honour of the 80th Birthday of Jean-Pierre Vigier. Kluwer Academic Publishers, Dordrecht (Fundamental Theories of Physics 126).
Andresen, B., 1983. Finite-Time Thermodynamics. University of Copenhagen.
Andresen, B., 1990. Finite-time thermodynamics. In: Finite-Time Thermodynamics and Thermoeconomics. Advances in Thermodynamics Series, vol. 4. Taylor and Francis, New York, pp. 66–94.
Andresen, B., 2008. The need for entropy in finite-time thermodynamics and elsewhere. In: Meeting the Entropy Challenge: Intern. Thermodynamics Symposium in Honor and Memory of Professor Joseph H. Keenan, AIP Conf. Proc. vol. 1033, pp. 213–218.
Andresen, B., 2011. Current trends in finite-time thermodynamics. Angew. Chem. 50, 2690–2704.
Andresen, B., Essex, C., 2003. Finite-time thermodynamics optimization of mitochondrial chemistry and fuel cells. In: Proceedings of ECOS 2003, Copenhagen, Denmark, June 30–July 2, 2003.
Andresen, B., Berry, R.S., Ondrechen, M.J., Salamon, P., 1984a. Thermodynamics for processes in finite time. Acc. Chem. Res. 17, 266–271.
Andresen, B., Rubin, M.H., Berry, R.S., 1983. Availability for finite-time processes. General theory and a model. J. Phys. Chem. 87, 2704.
Andresen, B., Salamon, P., Berry, R.S., 1984b. Thermodynamics in finite time. Phys. Today 62.
Andresen, B., Zimmerman, E.C., Ross, J., 1984c. Objections to a proposal on the rate of entropy production in systems far from equilibrium. J. Chem. Phys. 81 (10), 4676–4678.
Anthony, K.-H., 2000. Lagrange-formalism and thermodynamics of irreversible processes: the 2nd law of thermodynamics and the principle of least entropy production as straightforward structures in Lagrange-formalism. In: Sieniutycz, S., Farkas, H. (Eds.), Variational and Extremum Principles in Macroscopic Systems. Elsevier Science, Oxford, pp. 25–56 (Chapter I.2).
Anthony, K.-H., 2001. Hamilton's action principle and thermodynamics of irreversible processes—a unifying procedure for reversible and irreversible processes. J. Non-Newtonian Fluid Mech. 96 (1–2), 291–339.
Aubert, J.H., Tirrell, M., 1980. Macromolecules in nonhomogeneous velocity gradient fields. J. Chem. Phys. 72 (4), 2694–2701.
Badescu, V., 2013. Lost available work and entropy generation: heat versus radiation reservoirs. J. Non-Equilib. Thermodyn. 38, 313–333.
Badii, R., Politi, A., 1997. Complexity: Hierarchical Structures and Scaling in Physics. Cambridge University Press.
Bag, B.C., Ray, D.S., 2000. Fluctuation–dissipation relationship in chaotic dynamics. Phys. Rev. E 62 (2), 1927–1935.
Bampi, F., Morro, A., 1984. Nonequilibrium thermodynamics: a hidden variable approach. In: Casas-Vázquez, J., Jou, D., Lebon, G. (Eds.), Lecture Notes in Physics, vol. 199. Springer, Berlin, pp. 221–232.
Banach, Z., Piekarski, S., 1992. A coordinate-free description of nonequilibrium thermodynamics. Arch. Mech. 44, 191–202.
Baranowski, B., 1974. Nierównowagowa Termodynamika w Chemii Fizycznej (Non-Equilibrium Thermodynamics in Physical Chemistry). Państwowe Wydawnictwo Naukowe, Warszawa.
Baranowski, B., 1975. Nicht-Gleichgewichts-Thermodynamik in der physikalischen Chemie. VEB Deutscher Verlag für Grundstoffindustrie, Leipzig.
Baranowski, B., 1991. Non-equilibrium thermodynamics as applied to membrane transport. J. Membr. Sci. 57 (2–3), 119–159.
Complexity in abstract and physical systems 35 Baranowski, B., 1992. Diffusion in elastic media with stress fields. In: Advances in Thermodynamics. Taylor and Francis, New York, pp. 168–199. Bartoszkiewicz, M., Miekisz, S., 1989. Some properties of generalized irreversible thermodynamics founded by Bearman–Kirkwood equations. J. Chem. Phys. 90, 1787. Bartoszkiewicz, M., Miekisz, S., 1990. Diffusion-viscous flow coupling and global transport coefficients symmetry in matter transport through porous membranes. Ber. Bunsenges. Phys. Chem. 94, 887–893. Bass, J., 1968. Cours de Mathematiques. vol. 1. Masson, Paris. Beck, C., 2002a. Generalized statistical mechanics and fully developed turbulence. Physica A 306, 5189–5198. Beck, C., 2002b. Nonextensive statistical mechanics approach to fully developed turbulence. Chaos, Solitons Fractals 13, 499–506. Bennett, C.H., 1982. The thermodynamics of computation—a review. Int. J. Theor. Phys. 21, 905–940. Beretta, G.P., 1986. A theorem on Lyapounov stability for dynamical systems and a conjecture on a property of entropy. J. Math. Phys. 27 (1), 305–398. Bird, R.B., Öttinger, H.C., 1992. Transport properties of polymeric liquids. Annu. Rev. Phys. Chem. 43, 371–406. Bird, R.B., Armstrong, R.C., Hassager, O., 1977. Dynamics of Polymeric Liquids. Wiley, New York. Boukary, M.S., Lebon, G., 1986. A comparative analysis of binary fluid mixtures by extended thermodynamics and the kinetic theory. Physica 137A, 546–572. Bowen, R., Chen, P.J., 1973. Thermodynamic restriction on the initial slope of the stress relaxation function. Arch. Ration. Mech. Anal. 51, 278–284. Bubnov, V.A., 1976. On the nature of heat transfer in acoustic wave. Inzh. Fiz. Zh. 31, 531–536. Builtjes, P.J.H., 1977. Memory Effects in Turbulent Flows. vol. 97. W.T.H.D., pp. 1–45. Carrassi, M., Morro, A., 1972. A modified Navier-Stokes equation and its consequence on sound dispersion. Il Nuovo Cimento 9B, 321–343. Carrassi, M., Morro, A., 1973.
Some remarks about dispersion and absorption of sound in monoatomic rarefied gases. Il Nuovo Cimento 13B, 249–281. Carreau, P.J., Gmerla, M., Ait-Kadi, A., 1986. A conformational model for polymer solutions. In: Commun. 2nd Conference of European Rheologists, Prague, June 17–20. Casartelli, M., Diana, E., Galgani, L., Scotti, A., 1976. Numerical computations on a stochastic parameter related to the Kolmogorov entropy. Phys. Rev. A 13, 1921. Casas-Vazquez, J., Jou, D., 1992. Extended irreversible thermodynamics and fluctuation theory. In: Extended Thermodynamic Systems. Advances in Thermodynamics Series, vol. 7. Taylor and Francis, New York, pp. 263–288. Cattaneo, C., 1958. Sur une forme de l'équation de la chaleur éliminant le paradoxe d'une propagation instantanée. C. R. Hebd. Seances Acad. Sci. 247, 431–433. Chaitin, G.J., 1966. On the length of programs for computing binary sequences. J. Assoc. Comput. Mach. 13, 547. Coleman, B.D., Gurtin, M., 1964. Thermodynamics with internal variables. J. Chem. Phys. 47, 597–613. Day, W.A., 1971. Restrictions on relaxation functions in linear viscoelasticity. Q. J. Mech. Appl. Math. 24, 487–497. de Broglie, L.V., 1964. La thermodynamique "cachée" des particules. Ann. Inst. Henri Poincare 1, 1–19. de Broglie, L.V., 1970. The reinterpretation of wave mechanics. Found. Phys. 1 (1), 5–15. de Groot, S.R., Mazur, P., 1984. Nonequilibrium Thermodynamics. Dover, New York. Domanski, R., 1978a. Analytical description of temperature field caused by heat pulses. Arch. Termod. Spalania 9, 401–413. Domanski, R., 1978b. Temperature fields caused by high frequency heat pulses. In: 6th Int. Heat Transfer Conf., Paper CO-10: 275. Drouot, R., Maugin, G.A., 1983. Phenomenological theory for polymer diffusion in non homogeneous velocity gradient flows. Rheol. Acta 22, 336–347. Drouot, R., Maugin, G.A., 1985. Continuum modeling of polyelectrolytes in solutions. Rheol. Acta 24, 474–487. Drouot, R., Maugin, G.A., 1987.
Optical and hydrodynamical regimes in dilute solutions of polyelectrolytes. Rheol. Acta 26, 350–357.
Drouot, R., Maugin, G.A., 1988. Polyelectrolytes in solutions: equilibrium conformations of the microstructure without and with external fields. Int. J. Eng. Sci. 26, 225–241. Drouot, R., Maugin, G.A., Morro, A., 1987. Anisotropic equilibrium conformations in solutions of polymers and polyelectrolytes. J. Non-Newtonian Fluid Mech. 23, 295–304. Ebeling, W., 1983. Discussion of the Klimontovich theory of hydrodynamic turbulence. Ann. Phys. 40 (1), 25–33. Ebeling, W., Engel-Herbert, H., 1986. Entropy lowering and attractors in phase space. Acta Physiol. Hung. 66 (1–4), 339–348. Eu, B.C., 1980. A modified moment method and irreversible thermodynamics. J. Chem. Phys. 73, 2858. Eu, B.C., 1982. Irreversible thermodynamics of fluids. Ann. Phys. 140, 341–371. Eu, B.C., 1987. Kinetic theory and irreversible thermodynamics of dense fluids subject to an external field. J. Phys. Chem. 87, 1220–1237. Eu, B.C., 1988. Entropy for irreversible processes. Chem. Phys. Lett. 143, 65. Eu, B.C., 1989. Irreversible thermodynamic theory of scalar induced melting point depression. Physica A 160, 87. Eu, B.C., 1992. Kinetic theory and irreversible thermodynamics of multicomponent fluids. In: Extended Thermodynamic Systems. Advances in Thermodynamics Series, vol. 7. Taylor and Francis, New York, pp. 183–222. Evren-Selamet, E., 1995. Hysteretic foundations of entropy production. In: Proceedings of ECOS’95 July II-15, Paper A1-182, Istanbulpp. 81–88. Fabrizio, M., Morro, A., 1988. Viscoelastic relaxation functions compatible thermodynamics. J. Elast. 19, 63–75. Fabrizio, M., Giorgi, C., Morro, A., 1989. Minimum principles, convexity and thermodynamics in viscoelasticity. Contin. Mech. Thermodyn. 1, 197–211. Fisher, R.A., 1925. Theory of statistical estimation. Math. Proc. Camb. Philos. Soc. 700–725. Frieden, B.R., Soffer, B.H., 1995. Lagrangians of physics and the game of Fisher-information transfer. Phys. Rev. E 52, 2274. Frieden, B.R., Plastino, A., Plastino, A.R., Soffer, B.H., 1990. 
Fisher-based thermodynamics: its Legendre transform and concavity properties. Phys. Rev. 60 (1), 48–53. Frieden, B.R., Plastino, A., Plastino, A.R., Soffer, B.H., 2002a. Non-equilibrium thermodynamics and Fisher information: an illustrative example. Phys. Lett. A 304, 73–78. Frieden, B.R., Plastino, A., Plastino, A.R., Soffer, B.H., 2002b. Schr€ odinger link between nonequilibrium thermodynamics and Fisher information. Phys. Rev. E. 66, 046128. Gambar, K., Markus, F., 1993. On the global symmetry of thermodynamics and Onsager’s reciprocity relations. J. Non-Equilib. Thermodyn. 18, 51–57. Gambar, K., Markus, F., 1994. Hamilton–Lagrange formalism of nonequilibrium thermodynamics. Phys. Rev. E 50, 1227–1231. Garcia Colin, L.S., 1988. Comments on the kinetic mass action law revisited by thermodynamics. J. Phys. Chem. 92, 3017–3018. Garcia Colin, L.S., 1992. Chemically reacting systems and extended ireversible thermodynamics. In: Extended Thermodynamic Systems. Advances in Thermodynamics Series, vol. 7. Taylor and Francis, New York, pp. 364–385. Garcia Colin, L.S., de la Selva, S.M.T., Pina, E., 1986. Consistency of the kinetic mass action law with thermodynamics. J. Phys. Chem. 90, 953–956. Garcia-Colin, L.S., Bhalekar, A.A., 1997. Recent trends in irreversible thermodynamics. Proc. Pakistan Acad. Sci. 50 (4), 295–305. Gaveau, B., Moreau, M., Toth, J., 1999a. Variational nonequilibrium thermodynamics of reaction-diffusion systems, I: the information potential. J. Chem. Phys. 111 (17), 7736–7747. Gaveau, B., Moreau, M., Toth, J., 1999b. Variational nonequilibrium thermodynamics of reaction-diffusion systems, II: path integrals, large fluctuations, and rate constants. J. Chem. Phys. 111 (17), 7748–7757. Gaveau, B., Moreau, M., Toth, J., 2001. Variational nonequilibrium thermodynamics of reaction-diffusion systems, III: progress variables and dissipation of energy and information. J. Chem. Phys. 115 (2), 680–690.
Gaveau, B., Moreau, M., Toth, J., 2005. Master equations and path-integral formulations of variational principles for reaction-diffusion problems. In: Sieniutycz, S., Farkas, H. (Eds.), Variational and Extremum Principles in Macroscopic Systems. Elsevier Science, Oxford, pp. 315–336. Giesekus, H., 1986. Constitutive models of polymer fluids: toward a unified approach. In: Kroner, E., Kirchgassner, K. (Eds.), Trends in Applications of Pure Mathematics to Mechanics. Springer, Berlin, pp. 331–348. Grad, H., 1958. Principles of the theory of gases. In: Flügge, S. (Ed.), Handbuch der Physik, vol. 12. Springer, Berlin. Gyftopoulos, E.P., 1998. Maxwell and Boltzmann's triumphant contributions to and misconceived interpretation of thermodynamics. Int. J. Appl. Thermodyn. 1 (1–4), 9–19. Gyftopoulos, E.P., 1999. Infinite time (reversible) versus finite time (irreversible) thermodynamics: a misconceived distinction. Energy 24 (12), 1035–1039. Hiscock, W.A., Lindblom, L.A., 1983. Stability and causality in dissipative fluids. Ann. Phys. 151, 466–496. Huilgol, R.R., Phan-Thien, N., 1986. Recent advances in the continuum mechanics of viscoelastic liquids. Int. J. Eng. Sci. 24, 161–261. Israel, W., 1976. Nonstationary irreversible thermodynamics: a causal relativistic theory. Ann. Phys. 100 (1/2), 310–331. Jou, D., Llebot, J.E., 1980. Electric current fluctuations in extended irreversible thermodynamics. J. Phys. A 13, 47. Jou, D., Salhoumi, A., 2001. Legendre transforms in nonequilibrium thermodynamics: an illustration in electrical systems. Phys. Lett. A 283, 163–167. Jou, D., Casas-Vazquez, J., Lebon, G., 1988. Extended irreversible thermodynamics. Rep. Prog. Phys. 51, 1105–1172. Jou, D., Casas-Vazquez, J., Lebon, G., 1999. Extended Irreversible Thermodynamics Revisited (1988–98). Jou, D., Casas-Vazquez, J., Lebon, G., 2001. Extended Irreversible Thermodynamics, third ed. Springer, Berlin. Keizer, J., 1987.
Statistical Thermodynamics of Nonequilibrium Processes. Springer, New York. Kestin, J., 1979. Domingos, et al., (Ed.), Foundations of Non-Equilibrium Thermodynamics. Macmillan Press, London. Kestin, J., Bataille, J., 1980. Thermodynamics of solids. In: Continuum Models of Discrete Systems. University of Waterloo Press, Ontario. Kestin, J., Rice, J.M., 1970. Paradoxes in the application of thermodynamics to strained solids. In: Stuart, F.B., Gal Or, B., Brainard, A.J. (Eds.), A Critical Review of Thermodynamics. Mono Book Corp, Baltimore, pp. 275–298. Klimontovich, Y.L., 1982. Kinetic Theory of Nonideal Gases and Nonideal Plasmas. Pergamon, Oxford. Klimontovich, Y.L., 1986. Statistical Physics. Harwood Academic Publishers, Chur. Kluitenberg, G., 1966. Nonequilibrium thermodynamics, variational techniques, stability. In: Donelly, R., Herman, R., Prigogine, I. (Eds.), Application of the Theory of Irreversible Processes to Continuum Mechanics. University Press, Chicago. Kluitenberg, G.A., 1977. On dielectric and magnetic relaxation and vectorial internal degrees of freedom in thermodynamics. Physica A 87, 541–563. Kluitenberg, G.A., 1981. On vectorial internal variables and dielectric and magnetic relaxation phenomena. Physica A 109 (1–2), 91–122. Koh, S., Eringen, C., 1963. On the foundations of nonlinear thermo-viscoelasticity. Int. J. Eng. Sci. 1, 199–229. Kołmogorov, A.N., 1965. Three approaches ton the quantitative definition of information. Probl. Inf. Transm. 1, 4. Kondepudi, D.K., Prigogine, I., 1998. Modern Thermodynamics: From Heat Engines to Dissipative Structures. Wiley, New York. Kremer, G.M., 1985. Erweiterte Thermodynamik Idealer und Dichter Gase. (Diss.)TU Berlin. Kremer, G.M., 1986. Extended thermodynamics of ideal gases with 14 fields. Ann. Inst. Henri Poincare 45, 419–440. Kremer, G.M., 1987. Extended thermodynamics of nonideal gases. Physica A 144, 156–178. Kremer, G.M., 1989. Extended thermodynamic of molecular ideal gases. Contin. Mech. Thermodyn. 
1, 21–45.
Kremer, G.M., 1992. On extended thermodynamics of ideal and real gases. In: Extended Thermodynamic Systems. Adv. Thermodyn. Ser, vol. 7. pp. 140–182. Landau, L., Lifshitz, E., 1974. The Classical Theory of Fields. Pergamon, London. Landauer, R., 1961. Irreversibility and heat generation in the computing process. IBM J. Res. Dev. 5, 183–191. Lavenda, B.H., 1985a. Nonequilibrium Statistical Thermodynamics. Wiley, Chichester, UK (Chapter 2). Lavenda, B.H., 1985b. Brownian motion. Sci. Am. 252 (2), 70–84. Lavenda, B.H., 1991. Statistical Physics—A Probablilistic Approach. Wiley, Chichester, UK. Lavenda, 1995. Thermodynamics of Extremes. Horwood. Lebon, G., 1978. Derivation of generalized Fourier and Stokes–Newton equations based on thermodynamics of irreversible processes. Bull. Acad. Soc. Belg. Cl. Sci. LXIV, 456–460. Lebon, G., 1992. Extended irreversible thermodynamics of rheological materials. In: Extended Thermodynamic Systems. Advances in Thermodynamics, vol. 7. Springer, Berlin, New York, pp. 310–338. Lebon, G., Boukary, M.S., 1988. Objectivity, kinetic theory and extended irreversible thermodynamics. Int. J. Eng. Sci. 26, 471–483. Lebon, G., Mathieu, P., 1983. Comparison of the diverse theories of nonequilibrium thermodynamics. Int. Chem. Eng. 23, 651–662. Lebon, G., Perez-Garcia, C., Casas-Vazquez, J., 1986. On a new thermodynamic description of viscoelastic materials. Physica A 137, 531–545. Lebon, G., Perez Garcia, C., Casas-Vazquez, J., 1988. On the thermodynamic foundations of viscoelasticity. J. Chem. Phys. 88, 5068–5075. Lebon, G., Dauby, P.C., Palumbo, A., Valenti, G., 1990. Rheological properties of dilute polymer solutions: an extended thermodynamic approach. Rheol. Acta 29, 127–136. Lebon, G., Casas-Vazquez, J., Jou, D., Criado-Sancho, M., 1993. Polymer solutions and chemical reactions under flow: a thermodynamic description. J. Chem. Phys. 98, 7434–7439. Lemaitre, J., Chaboche, J.L., 1988. Mechanics of Solid Materials. 
Cambridge University Press, Cambridge, UK. Leonov, A.I., 1976. Nonequilibrium thermodynamics and rheology of viscoelastic polymer media. Rheol. Acta 21, 683–691. Liu, I.-S., 1972. Method of Lagrange multipliers for exploitation of the entropy principle. Arch. Ration. Mech. Anal. 46, 131–148. Liu, I.-S., M€uller, I., 1983. Extended thermodynamics of classical and degenerate gases. Arch. Ration. Mech. Anal. 83, 285–332. Llebot, J.E., 1992. Extended irreversible thermodynamics of electrical phenomena. In: Extended Thermodynamic Systems. Advances in Thermodynamics Series, vol. 7. Taylor and Francis, New York, pp. 339–363. Lotka, A.J., 1920. Undamped oscillations derived from the law of mass action. J. Am. Chem. Soc. 42, 1595–1599. Lotka, A.J., 1922. Contribution to the energetics of evolution. Proc. Natl. Acad. Sci. U. S. A. 8, 147–151. Luikov, A.V., Bubnov, V.A., Soloview, I.A., 1976. On wave solutions of the heat conduction equation. Int. J. Heat Mass Transf. 19, 245–249. Markus, F., 2005. Hamiltonian formulation as a basis of quantized thermal processes. In: Sieniutycz, S., Farkas, H. (Eds.), Variational and Extremum Principles in Macroscopic Systems. Elsevier, Oxford, pp. 267–291 (Chapter I.13). Markus, F., Gambar, K., 2003. Fisher bound and extreme physical information for dissipative processes. Phys. Rev. E. 68, 016121. Maugin, G.A., 1975. On the spin relaxation in deformable ferromagnets. Physica A 81, 454–458. Maugin, G.A., 1979. Vectorial internal variables in magnetoelasticity. J. Mech. 18, 541–563. Maugin, G.A., 1987. Thermodynamique a variables internes et applications. In: Seminar: Thermodynamics of Irreversible Processes. Institut Francais du Petrole, Rueil-Malmaison, France. Maugin, G.A., 1990a. Thermomechanics of Plasticity and Fracture. Cambridge University Press, Cambridge, UK. Maugin, G.A., 1990b. Nonlinear dissipative effects in electrodeformable solids. In: Maugin, G.A., Collet, B., Drouout, R., Pouget, J. (Eds.), Nonlinear Mechanical Couplings. 
Manchester University Press, Manchester, UK (Chapter 6).
Maugin, G.A., 1992. Thermodynamics of hysteresis. In: Extended Thermodynamic Systems. Advances in Thermodynamics Series, vol. 7. Taylor and Francis, New York, pp. 25–52. Maugin, G.A., Drouot, R., 1983. Internal variables and the thermodynamics of macromolecules solutions. Int. J. Eng. Sci. 21, 705–724. Maugin, G.A., Drouot, R., 1992. Nonequilibrium thermodynamics of solutions of macromolecules. In: Advances in Thermodynamics Series. vol. 7. pp. 53–75. Maugin, G.A., Sabir, M., 1990. Mechanical and magnetic hardening of ferromagnetic bodies: influence of residual stresses and applications to nondestructive testing. Int. J. Plast. 6, 573–589. Meixner, J., 1949. Thermodynamik und Relaxationserscheinungen. Z. Naturforsch. 4a, 594–600. Meixner, J., 1961. Der Drehimpulssatz in der Thermodynamik der irreversiblen Prozesse. Z. Phys. 16, 144–165. Meixner, J., 1966. TIP has many faces. In: Parkus, H., Sedov, L. (Eds.), IUTAM Symposia. Springer, Vienna. Meixner, J., 1968. Entropie im Nichtgleichgewicht. Rheol. Acta 7, 8–13. Meixner, J., 1972. The fundamental inequality in thermodynamics. Physica 59, 305–313. Meixner, J., 1974. Coldness and temperature. Arch. Ration. Mech. Anal. 57, 281–286. Mori, H., 1965. Transport, collective motion, and Brownian motion. Progr. Theor. Phys. 33, 423–455. Morro, A., 1985. Thermodynamics and constitutive equations. In: Grioli, G. (Ed.), Relaxation Phenomena via Hidden Variable Thermodynamics. Springer, Berlin. Morro, A., 1992. Thermodynamics and extremum principles in viscoelasticity. In: Extended Thermodynamic Systems. Advances in Thermodynamics Series, vol. 7. Taylor and Francis, New York, pp. 76–106. Müller, I., 1966. Zur Ausbreitungsgeschwindigkeit von Störungen in kontinuierlichen Medien. (Dissertation) TH Aachen. Müller, I., 1967. Zum Paradox der Wärmeleitungstheorie. Z. Phys. 198. Müller, I., 1971a.
Die Kaltenfunction, eine universelle Funktion in der Thermodynamik viscoser warmeleintender Flussigkeiten. Arch. Rat. Mech. Anal. 40, 1–36. M€uller, I., 1971b. The coldness: a universal function in thermodynamic bodies. Arch. Rat. Mech. Anal. 41, 319–329. M€uller, I., 1985. Thermodynamics. Macmillan, London. M€uller, I., 1992. Extended thermodynamics-a theory with hyperbolic field equations. In: Extended Thermodynamic Systems. Advances in Thermodynamics Series, vol. 7. Taylor and Francis, New York, pp. 107–139. Muschik, W., 1981. Thermodynamical theories, survey and comparison. ZAMM 61, T213. Muschik, W., 1986. Thermodynamical theories, survey, and comparison. J. Appl. Sci. 4, 189. Muschik, W., 1990a. Aspects of Non-equilibrium Thermodynamics: Six Lectures on Fundamentals and Methods. World Scientific, Singapore. Muschik, W., 1990b. Internal variables in the non-equilibrium thermodynamics. J. Non-Equilib. Thermodyn. 15, 127–137. Muschik, W., 2004. Remarks on thermodynamical terminology. J. Non-Equilib. Thermodyn. 29, 199–203. Muschik, W., 2007. Why so many “schools” of thermodynamics? Forsch. Ingenieurwes. 71, 149–161. Muschik, W., 2008. Survey of some branches of thermodynamics. J. Non-Equilib. Thermodyn. 33, 165–198. Muschik, W., Papenfuss, C., Ehrentraut, H., 2001. A sketch of continuum thermodynamics. J. Non-Newtonian Fluid Mech. 96, 255. Nettleton, R.E., 1987. Application of reciprocity to nonlinear extended thermodynamics kinetic equations. Physica A 144, 219–234. Nettleton, R.E., 1992. Reciprocity in extended thermodynamics. In: Extended Thermodynamic Systems. Advances in Thermodynamics Series, vol. 7. Taylor and Francis, New York, pp. 223–262. Nettleton, R.E., Freidkin, E.S., 1989. Nonlinear reciprocity and the maximum entropy formalism. Physica A 158, 672–690. Nunziato, J.W., 1971. On the heat conduction in materials with memory. Q. Appl. Math. 29, 187–204. Pavon, D., 1992. 
Astrophysical and cosmological applications of extended relativistic thermodynamics. In: Extended Thermodynamic Systems. Advances in Thermodynamics Series, vol. 7. Taylor and Francis, New York, pp. 386–407.
Pavon, D., Jou, D., Casas-Vazquez, J., 1980. Heat conduction in relativistic extended thermodynamics. J. Phys. A 13, L67–L79. Pavon, D., Jou, D., Casas-Vazquez, J., 1983. Equilibrium and nonequilibrium fluctuations in relativistic fluids. J. Phys. A 16, 775–782. Perez-Garcia, C., Jou, D., 1986. Continued fraction expansion and memory functions for transport coefficients. J. Phys. A 19, 2881. Perez-Garcia, C., Jou, D., 1992. Thermodynamic aspects of memory functions for transport coefficients. In: Extended Thermodynamic Systems. Advances in Thermodynamics Series, vol. 7. Taylor and Francis, New York, pp. 289–309. Plastino, A., Plastino, A.R., Miller, H.G., Khanna, G.C., 1996. A lower bound for Fisher’s information measure. Phys. Lett. A 221, 29–33 (Chapter 1). Plastino, A., Plastino, A.R., Casas, M., 2005. Fisher variational principle and thermodynamics. In: Sieniutycz, S., Farkas, H. (Eds.), Variatpional and Extremum Principles in Macroscopic Systems. Elsevier, Oxford, pp. 379–394 (Chapter II.1). Rivlin, R.S., Ericksen, J.L., 1955. Stress deformation relations for isotropic materials. J. Rational Mech. Anal. 4, 325–425. Rouse, P.E., 1953. A theory of the viscoelastic properties of dilute solutions of coiling polymers. J. Chem. Phys. 21, 1272–1280. Ruggeri, T., 1989. Galilean invariance and entropy principle for system of balance laws. The structure of extended thermodynamics. Cont. Mech. Thermodyn. 1, 3–20. Sandler, S.I., Dahler, J.S., 1964. Nonstationary diffusion. Phys. Fluids 2, 1743–1746. Schofield, P., 1966. Wavelength-dependent fluctuations in classical fluids. Proc. Phys. Soc. 88, 149–170. Scholle, M., 1994. Hydrodynamik im Lagrange formalismus: Untersuchungen zur W€armeleitung in idealen Fl€ussigkeiten, Diplomarbeit. University of Paderborn. Scholle, M., Anthony, K.-H., 1997. Lagrange formalism and complex fields in hydro- and thermodynamics. In: Workshop on Dissipation in Physical Systems, vol. 1, September 9, Kielce, Poland. 
Shannon, C.E., Weaver, W., 1969. The Mathematical Theory of Communication. The University of Illinois Press, Urbana, IL. Shlesinger, M.F., Klafter, J., Zumofen, G., 1999. Above, below and beyond Brownian motion. Am. J. Phys. 67 (12), 1253–1259. Shter, I.M., 1973. The generalized Onsager principle and its application. Inzh. Fiz. Zh. 25, 736–742. Sieniutycz, S., 1983. The inertial relaxation terms and the variational principles of least action type for nonstationary energy and mass diffusion. Int. J. Heat Mass Transf. 26, 55–63. Sieniutycz, S., 1984. The variational approach to nonstationary Brownian and molecular diffusion described by wave equations. Chem. Eng. Sci. 39, 71–80. Sieniutycz, S., 1986. On the relativistic temperature transformations and the related energy transport problem. Int. J. Heat Mass Transf. 29, 651–654. Sieniutycz, S., 1987. From a last action principle to mass action law and extended affinity. Chem. Eng. Sci. 42, 2697–2711. Sieniutycz, S., 1988. Hamiltonian tensor of energy-momentum in extended thermodynamics of one-component fluid. Inz. Chem. Proc. 4, 839–861. Sieniutycz, S., 1989. Energy-momentum tensor and conservation laws for extended multicomponent fluids with the transport of heat, mass and electricity. Research Report of Inst. Chem. Eng. at Warsaw TU (in Polish). Sieniutycz, S., 1990. Thermal momentum, heat inertia and a macroscopic extension of the de Broglie microthermodynamics. I. The multicomponent fluids with the sourceless continuity constraints. In: Advances in Thermodynamics Series. vol. 3. Taylor and Francis, New York, pp. 328–368. Sieniutycz, S., 1992a. Wave equations of heat and mass transfer. In: Flow, Diffusion and Transport Processes. Advances in Thermodynamics Series, vol. 6. Taylor and Francis, New York, pp. 146–167. Sieniutycz, S., 1992b. Thermal momentum, heat inertia and a macroscopic extension of de broglie microthermodynamics, II: the multicomponent fluids with the sources. In: Advances in Thermodynamics Series. 
7, Taylor and Francis, New York, pp. 408–447.
Sieniutycz, S., 1994. Conservation Laws in Variational Thermo-Hydrodynamics. Kluwer Academics, Dordrecht. Sieniutycz, S., 1997. Generalized Carnot problem of maximum work in a finite time via Hamilton–Jacobi–Bellman equation. In: Florence World Energy Research Symposium, FLOWERS '97, Italy, Florence, July 30–August 1, pp. 151–159. Sieniutycz, S., 1998a. Generalized Carnot problem of maximum work in finite time via Hamilton–Jacobi–Bellman theory. Energy Convers. Manage. 39, 1735–1743. Sieniutycz, S., 1998b. Nonlinear thermokinetics of maximum work in finite time. Int. J. Eng. Sci. 36, 577–597. Sieniutycz, S., 1999. Endoreversible modeling and optimization of multistage thermal machines by dynamic programming. In: Wu, C., Chen, L., Chen, J. (Eds.), Recent Advances in Finite Time Thermodynamics. Nova Science Publishers, New York, pp. 189–219. Sieniutycz, S., 2000a. Hamilton–Jacobi–Bellman framework for optimal control in multistage energy systems. Phys. Rep. 326, 165–285. Sieniutycz, S., 2000b. Optimal control of active (work-producing) systems. Inz. Chem. Proc. 21 (1), 29–55. Sieniutycz, S., 2000c. Thermodynamics of development of energy systems with applications to thermal machines and living organisms. In: Eötvös University, Workshop on Recent Developments in Thermodynamics, Budapest, 22–24 VII 01, Periodica Polytechnica. Ser. Chem. Eng. 44 (1), 49–80. Sieniutycz, S., 2001a. Work optimization in continuous and discrete systems with complex fluids. J. Non-Newtonian Fluid Mech. 96, 341–370. Sieniutycz, S., 2001b. Optimal work flux in sequential systems with complex heat exchange. Int. J. Heat Mass Transfer 44 (5), 897–918. Sieniutycz, S., 2001c. Thermodynamic limits for work-assisted and solar-assisted drying operations. Archiv. Thermodyn. 22 (3–4), 17–36. Sieniutycz, S., 2013a. A unified approach to limits on power yield and power consumption in thermo-electrochemical systems. Entropy 15, 650–677.
Sieniutycz, S., 2013b. Power yield and power consumption in thermo-electro-chemical systems—a synthesizing approach. Energy Convers. Manage. 68, 293–304. Sieniutycz, S., 2016. Thermodynamic Approaches in Engineering Systems. Elsevier, Oxford. Sieniutycz, S., 2020. Complexity and Complex Thermo-Economic Systems. Elsevier, Oxford. Sieniutycz, S., Berry, R.S., 1989. Conservation laws from Hamilton’s principle for nonlocal thermodynamic equilibrium fluids with heat flow. Phys. Rev. A 40, 348–361. Sieniutycz, S., Berry, R.S., 1991. Field thermodynamic potentials and geometric thermodynamics with heat transfer. Phys. Rev. A 43, 2807–2818. Sieniutycz, S., Berry, R.S., 1992. Least-entropy generation: variational principle of Onsager’s type for transient hyperbolic heat and mass transfer. Phys. Rev. A 46, 6359–6370. Sieniutycz, S., Berry, R.S., 1997. Thermal mass and thermal inertia: a comparison of hypotheses. Open Sys. Inf. Dyn 4, 15–43. Sieniutycz, S., Farkas, H. (Eds.), 2005. Variational and Extremum Principles in Macroscopic Systems. Elsevier Science, Oxford. Sieniutycz, S., Jezowski, J., 2018. Energy Optimization in Process Systems and Fuel Cells, 3rd ed Elsevier, Oxford. Sieniutycz, S., Szwast, Z., 1999. Optimization of multistage thermal machines by a Pontryagin’s-like discrete maximum principle. In: Wu, C. (Ed.), Advances in Recent Finite Time Thermodynamics. Nova Science, New York, pp. 221–237 (Chapter 12). Sluckin, T.J., 1981. Influence of flow on the isotropic-nematic transition in polymer solutions: a thermodynamic approach. Macromolecules 14, 1676–1680. Tirrell, M., Malone, F., 1977. Stress-induced diffusion of macromolecules. J. Polym. Sci. 15, 1569–1583. Truesdell, C., 1962. Mechanical basis of diffusion. J. Chem. Phys. 37, 2236–2344. Truesdell, C., 1969. Rational Thermodynamics. McGraw-Hill, New York. Trulla, L.L., Zbilut, J.P., Giuliani, A., 2003. Putting relative complexity estimates to work: a simple and general statistical methodology. 
Physica A 319, 591–600. Vázquez, F., del Río, J.A., Gambár, K., Márkus, F., 1996. Comments on the existence of Hamiltonian principles for non-self-adjoint operators. J. Non-Equilib. Thermodyn. 21, 357–360.
Vázquez, F., Márkus, F., Gambár, K., 2009. Quantized heat transport in small systems: a phenomenological approach. Phys. Rev. E. 79, 031113. Vujanovic, B., 1971. An approach to linear and nonlinear heat transfer problem. AIAA J. 9, 131–134. Vujanovic, B., 1992. A variational approach to transient heat transfer theory. In: Flow, Diffusion and Transport Processes. Advances in Thermodynamics Series, vol. 6. Taylor and Francis, New York, pp. 200–217. Vujanovic, B., Djukic, D.J., 1972. On the variational principle of Hamilton's type for nonlinear heat transfer problem. Int. J. Heat Mass Transf. 15, 1111–1115. Vujanovic, B., Jones, S.E., 1989. Variational Methods in Nonconservative Phenomena. Academic Press, New York. Williams, P.E., 2001. Mechanical entropy and its implications. Entropy 3, 76–115. Xia, S., Chen, L., Sun, F., 2010a. Hamilton–Jacobi–Bellman equations and dynamic programming for power optimization of multistage heat engine system with generalized convective heat transfer law. Chin. Sci. Bull. 55 (29), 2874–2884. Xia, S., Chen, L., Sun, F., 2010b. Finite-time exergy with a finite heat reservoir and generalized radiative heat transfer law. Rev. Mex. Fis. 56, 287–296. Xia, S., Chen, L., Sun, F., 2011a. Endoreversible modeling and optimization of multistage heat engine system with a generalized heat transfer law via HJB equations of dynamic programming. Acta Phys. Pol. 6, 747–760. Xia, S., Chen, L., Sun, F., 2011b. Power optimization of non-ideal energy converters under general convective heat transfer law via Hamilton–Jacobi–Bellman theory. Energy 36, 633–646. Zimm, B.H., 1956. Dynamics of polymer molecules in dilute solution: viscoelasticity, flow birefringence and dielectric loss. J. Chem. Phys. 24, 269–278.
Further reading Bubnov, V.A., 1982. Toward the theory of thermal waves. Inzh. Fiz. Zh. 43, 431–438. Garcia-Collin, L.S., Lopez de Haro, M., Rodriguez, R.F., Casas-Vazquez, J., Jou, D., 1984. On the foundations of extended irreversible thermodynamics. J. Stat. Phys. 37, 465–484. Klimontovich, Y.L., 1991. Turbulent Motion and the Structure of Chaos. Kluwer Academics, Dordrecht. Klimontovich, Y.L., 1999. Entropy, information, and criteria of order in open systems. Nonlinear Phenom. Complex Syst. 2 (4), 1–25. Lavenda, B.H., 1990. The statistical foundations of nonequilibrium thermodynamics. In: Nonequilibrium Theory and Extremum Principles. Advances in Thermodynamics Series, vol. 3. Taylor and Francis, New York, pp. 175–211. Sieniutycz, S., Berry, R.S., 1993. Canonical formalism, fundamental equation, and generalized thermomechanics for irreversible fluids with heat transfer. Phys. Rev. E 47, 1765–1783.
CHAPTER 2
Examples of complex states and complex transformations

2.1 Instabilities in liquids

Fluid systems are often regarded as a paradigm of the transition from order to disorder, the process being manifested by turbulence (Badii and Politi, 1997). A classical, well-understood phenomenon in this context is Rayleigh-Benard (thermal) convection (Chandrasekhar, 1961). Experiments on the Benard cell are run in an apparatus consisting of a highly instrumented, insulated container enclosing a fluid. The container must satisfy conditions ensuring that spatial inhomogeneities are negligible in comparison with the irregularity of the time behavior (Badii and Politi, 1997, Ch. 2, p. 13). The bottom of the container is a heat source and the top is a cold reservoir. When the fluid is heated from below, it initially resists the applied temperature difference (ΔT) by dissipating heat through conduction. As the temperature gradient is increased, convection begins and a dissipative structure forms: the fluid develops a pattern of hexagonal Benard convection cells (Chandrasekhar, 1961, p. 11; Capra, 1996, p. 87; Schneider and Kay, 1994, Fig. 1; Sieniutycz, 2016, Fig. 5.9, p. 273; Sieniutycz, 2020, p. 327).
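The conduction-to-convection transition described above is conventionally estimated with the dimensionless Rayleigh number, which compares buoyancy forcing against viscous and thermal dissipation; below a critical value (about 1708 for rigid top and bottom plates, per Chandrasekhar's analysis) conduction prevails, above it the Benard cells appear. A minimal numerical sketch follows; the fluid properties are illustrative values roughly corresponding to water at room temperature, not data taken from the text:

```python
def rayleigh_number(alpha, delta_T, depth, nu, kappa, g=9.81):
    """Ra = g * alpha * dT * d^3 / (nu * kappa)."""
    return g * alpha * delta_T * depth**3 / (nu * kappa)

# Critical value for rigid-rigid boundaries (Chandrasekhar, 1961)
RA_CRITICAL = 1708.0

# Illustrative values, roughly water at 20 degrees C (assumed, not from the text)
Ra = rayleigh_number(alpha=2.1e-4,   # thermal expansion coefficient, 1/K
                     delta_T=1.0,    # applied temperature difference, K
                     depth=5e-3,     # fluid layer depth, m
                     nu=1.0e-6,      # kinematic viscosity, m^2/s
                     kappa=1.4e-7)   # thermal diffusivity, m^2/s

print(f"Ra = {Ra:.0f}; convection expected: {Ra > RA_CRITICAL}")
```

For these values Ra lands just above the threshold, illustrating how a millimeter-scale layer and a one-degree difference already suffice to trigger the instability; halving the depth drops Ra by a factor of eight and restores pure conduction.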
2.2 Turbulence and randomness in fluid mechanics

When the fluid is supplied with increasing energy, it undergoes a sequence of transitions from an ordered state with a uniform velocity field to a state, disordered both in space and time, called turbulent. In the weakly turbulent regime, the fluctuations do not extend over a wide range of frequencies and length scales. Fully developed turbulence, in contrast, involves motion with conspicuous energy transfer down to tiny length scales, so that the fluctuations become fundamentally important (Badii and Politii, 1997). Hydrodynamic modes contribute, over a wide range of wavelengths, to the formation of coherent vortices in the midst of a disordered background (Builtjes, 1977; Monin and Yaglom, 1967, 1971, 1975). Turbulence problems may be approached from the viewpoint of their link with fluctuation theory (Monin and Yaglom, 1967). Memory effects in turbulent flows are treated by Builtjes (1977). Also, Durbin (1993) offers a near-wall turbulence model based on k, ε, and two extra
equations. The model predicts flow and heat transfer in two-dimensional channels and boundary layers. It leads to good agreement with data on skin friction, Stanton number, mean velocity, and turbulent intensities. Solutions to the model show the correct Reynolds number dependence without building it into any of the coefficients. Calculations at zero and adverse pressure gradients show that in both cases the results agree well with experiment. The majority of the interesting features of turbulence can be found in Newtonian incompressible fluids subjected to gravity forces and with small density variations caused by heating. Changes of entropy production are generally unimportant for the low-speed flows that are often assumed. A common approximation in Benard's problem neglects the contribution from pressure variations and assumes an incompressible medium in which density depends only on the temperature but not the pressure, i.e., ρ = ρ0(1 − α(T − T0)) (Landahl and Mollo-Christensen, 1992). The symbol α represents the coefficient of thermal expansion, whereas the subscript 0 denotes reference values. All this leads to the Boussinesq approximation and, next, to averaged equations consisting of a mean part (usually written with an overbar) and a fluctuating part with a zero average, which may be time dependent. The equations for the fluctuating components are found by subtracting the averaged equations from the full equations. The turbulent energy equation, e.g., Eq. (3.28) in Landahl and Mollo-Christensen (1992), illustrates how the Reynolds shear stress can do work against the mean velocity shear and thereby transfer energy from the mean flow to the fluctuating field. A flow is unstable when disturbances grow in time. Details describing convection experiments and associated data on the heat dissipation rate and entropy production rate are available (Silveston, 1957; Brown, 1973; Sieniutycz, 2020).
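The linearized equation of state quoted above can be sketched directly; the reference values below are illustrative assumptions (water near 20 °C), not values taken from the cited works:

```python
# Sketch of the Boussinesq density approximation used in Benard's problem:
# density depends only on temperature, rho = rho0 * (1 - alpha * (T - T0)),
# and pressure dependence is neglected. Reference values are assumptions.

RHO0 = 998.2      # kg/m^3 at the reference temperature T0 (illustrative)
T0 = 293.15       # K
ALPHA = 2.1e-4    # 1/K, coefficient of thermal expansion (illustrative)

def boussinesq_density(T):
    """Linearized equation of state: rho(T) = rho0 * (1 - alpha * (T - T0))."""
    return RHO0 * (1.0 - ALPHA * (T - T0))

def buoyancy_accel(T, g=9.81):
    """Buoyant acceleration g * alpha * (T - T0) of a parcel at temperature T,
    relative to the reference state; this is the term that drives convection."""
    return g * ALPHA * (T - T0)
```

A warm parcel (T > T0) is lighter than its surroundings and experiences an upward acceleration proportional to its temperature excess, which is the mechanism the Boussinesq equations retain while dropping all other density variations.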
2.3 Complexities in chemically reacting systems

2.3.1 Introduction

Here we present a "macroscopic" type of complexity in the chemical world. Working in the realm of electroneutral components, we describe the important synthesizing investigations of Gadewar, Doherty, and Malone (Gadewar et al., 2001) toward the development of a systematic method for reaction invariants and mole balances for complex chemistries. Most work dealing with reaction invariants is related to the process control aspects of a system; here we employ the invariants in process design. The researchers (Gadewar et al., 2001) have also shown that this methodology is useful in planning experiments. With care, the theory can be applied to electrochemical systems provided that they are described not as ionic media (Newman and Thomas-Alyea, 2000), but rather as systems composed of electroneutral components (Sieniutycz and Ratkje, 1996). In the book by Newman and Thomas-Alyea (2000), where a
Examples of complex states and complex transformations 45 whole electrochemical cell explicitly appears, I is the current at the negative (there, left) electrode taken to be positive for an anodic current at that electrode. Three important areas of investigation of a reaction system are stoichiometry, kinetics, and mechanism. Frequently, it is assumed that the stoichiometry of a reaction system is known and, therefore, the main emphasis is laid on determining the mechanism and kinetics. However, getting the correct stoichiometry is equally important and can also be difficult. Smith and Missen (1979) define chemical stoichiometry as the constraints placed on the composition of a closed chemical system by the necessity of conserving the amount of each elemental or atomic species in any physiochemical change of state occurring within the system. Alternatively, Gibbs’ rule of stoichiometry must be satisfied by a chemical system (Aris and Mah, 1963). A classic approach for determining the reaction chemistry for a reaction system on the basis of limited experimental observations was published by Aris and Mah (1963). This approach also helps in determining the stoichiometric degrees of freedom. The information needed to implement the method is an accurate knowledge of all the chemical species present in the reaction system. Some of the earliest work in this subject was published by Jouguet (1921) and Defay (1931). Implications of the presence of isomers in the method of Aris and Mah were discussed by Whitwell and Dartt (1973). This treatment was extended by Happel (1986) to incorporate transient species which may not be observed necessarily in the inlet or outlet of the reaction system. A reaction mechanism involves detailed reactions between reactants, intermediates that may or may not be transient, and the final products in a reaction mixture. The reaction chemistry is determined from a proposed mechanism by eliminating the transient species in the overall scheme. 
The dependent reactions in the reaction chemistry generated are then eliminated. The mechanistic approach is used widely and is frequently successful. Sometimes, the reaction chemistry generated from a reaction mechanism might contain fewer than the maximum number of independent reactions. However, Aris and Mah's method always generates a set containing the maximum number of independent reactions. Aris and Mah's approach, therefore, should be considered complementary to the mechanistic approach. Bonvin and Rippin (1990) and Amrhein et al. (1999) used a method called "Target Factor Analysis" for determination and validation of reaction stoichiometry based on experimentally measured data. Their method is also applicable to systems where the molecular formula of some of the species is not known. For a continuous reactor, the number of moles of component i in the outlet can be represented in terms of the inlet moles as

ni = n0i + νiT ε,  i = 1, …, c,
(2.1)
where νiT is a row vector of the stoichiometric coefficients of component i in the R reactions and ε is a column vector of the extents of reaction. For simple reaction chemistries where each reaction
contains at least one species that is unique to that reaction, it is easy to write each extent in terms of a single component. For example,

ni = n0i + νi1 ε1 + νi2 ε2 + ⋯ + νiR εR
(2.2)
and if component i occurs only in reaction 1, νi2 = νi3 = ⋯ = νiR = 0. Therefore ε1 can be expressed solely in terms of the moles of i. Similarly, the extent of each reaction can be found, and mole balances can be written by expressing these extents in terms of other components and eliminating the extents. However, no systematic methods are available for writing the material balances for reaction systems of arbitrary complexity, where the extent of each reaction cannot be expressed in terms of the mole numbers of a single component. Standard textbooks provide a good introduction for developing intuition and skills for writing mole balances (Nauman, 1987; Himmelblau, 1996). Rosen (1962) published an iterative procedure for solving material balances over a reactor; however, the technique had problems with convergence. Sood and Reklaitis (1979) and Sood et al. (1979) proposed a procedure which required no iterations for solving material balances for flow sheets. Their procedure, however, needed specification of the extents of reaction for solving balances around a reactor. Schneider and Reklaitis (1975) were among the earliest to consider the relationship between mole balances and element balances for steady-state chemically reacting systems. For simple reaction stoichiometries (e.g., involving one, two, or three reactions) it is often possible to write the material balances based on intuition and experience. This task, however, becomes much more difficult for complex reaction chemistries with many reactions. It is, therefore, useful to devise a systematic method to determine the material balances for reaction systems, supposing that the reaction chemistry is known. To facilitate the numerical treatment of complex reaction systems, many authors have proposed linear and nonlinear transformations of variables (e.g., Waller and Mäkilä, 1981).
In their review, Waller and Mäkilä describe various ways to decompose a state vector (consisting of all variables needed to describe the system) into variants and invariants. Denn and Shinnar (1987) used invariants for checking mass balances for coal gasification reactors. These invariants depend on the molar feed ratios to the reactor, but are independent of the type of gasifier used. Reaction invariants are variables that take the same values before, during, and after the reaction. They are independent of the extent of reaction, although they may change with flow rate and other parameters. Asbjørnsen and coworkers (Asbjørnsen and Fjeld, 1970; Asbjørnsen, 1971/1972; Fjeld et al., 1974) demonstrated the use of reaction invariants to reduce the dimensionality of the differential equations describing the process dynamics of continuous stirred tank reactors. Srinivasan et al. (1998) extended this methodology to include flow invariants for such systems. Most of the literature dealing with reaction invariants is related to the process control aspects of a system (Waller and Mäkilä, 1981). Our aim is to employ them for process design. We use linear transformations to determine the reaction invariants, which are easy to understand and straightforward to implement. This transformation can be used
Examples of complex states and complex transformations 47 effectively to go from limited experimental observations to setting up molar balances for a reactor system at steady state. Also, this methodology is useful in planning experiments as it provides an estimate of degrees of freedom necessary for checking data consistency. The use of this systematic methodology in process design applications is demonstrated after the basic method is developed.
2.3.2 Reaction invariants

Consider a reaction system consisting of c components undergoing R independent chemical reactions. A block diagram for such a system is shown in Fig. 2.1. This process block can contain any complex combination of unit operations. The inlet to the process is represented by a c-dimensional column vector of inlet molar flow rates of the species, n0; the outlet of the process is represented by a vector of outlet flow rates of the species, n. The R independent chemical reactions are written as

ν1r A1 + ν2r A2 + ⋯ + νcr Ac ⇌ 0,  r = 1, 2, …, R,
(2.3)
where Ai are the reacting species and νi,r is the stoichiometric coefficient of component i in reaction r. The convention used is νi,r > 0 if component i is a product and νi,r < 0 if it is a reactant.
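The sign convention just stated can be checked mechanically: a proposed reaction conserves elements exactly when the atomic matrix annihilates its stoichiometric vector. The sketch below uses the ethylene oxide/ethylene glycol chemistry discussed in Appendix D of this chapter; the helper function is ours, not the authors':

```python
# Sketch: checking that a proposed reaction chemistry conserves elements.
# Each reaction vector nu_r must satisfy A . nu_r = 0, where A is the atomic
# matrix. Species: C2H4O (EO), H2O, C2H6O2 (EG), C4H10O3 (DEG), as in Appendix D.

A = [               # rows: C, H, O; columns: the four species
    [2, 0, 2, 4],
    [4, 2, 6, 10],
    [1, 1, 2, 3],
]

REACTIONS = {
    "EO + H2O -> EG":  [-1, -1,  1, 0],   # products positive, reactants negative
    "EO + EG  -> DEG": [-1,  0, -1, 1],
}

def conserves_elements(nu):
    """True if every element balance A . nu = 0 is satisfied."""
    return all(sum(a_ei * nu_i for a_ei, nu_i in zip(row, nu)) == 0
               for row in A)

for name, nu in REACTIONS.items():
    print(name, conserves_elements(nu))   # True for both reactions
```

A vector that fails this test (say, one claiming EO simply disappears) cannot belong to any valid reaction chemistry, which is the practical content of the elemental constraints of Smith and Missen quoted earlier.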
2.3.9 Level 3: Recycle structure of the flow sheet

A simplified process flow sheet is shown in Fig. 2.6. The flow sheet consists of a reaction system and a separation system. The determination of recycle flows is critical in evaluating the economic potential of the process. It is also necessary for reactor design, since the unreacted feed is recycled back to the reactor. Comparing Figs. 7 and 10 in Gadewar et al. (2001), we find that the overall molar balances for Level 3 are the same as for Level 2, since the inlet and outlet streams are identical in both cases. However, at this stage, based on process requirements (often determined by reaction constraints or optimization), some new design variables are often imposed on the process; these may include specified values of the molar feed ratios at the reactor inlet, equality of composition of the recycle stream and purge stream for gaseous components in the absence of a gas recovery system in the separation block of the flow sheet, etc. This will lead to the specification of new variables which were not specified in the Level 2 balances. Depending on the overall degrees of freedom at Level 3, this may result in the cancellation of some specifications made at Level 2. This happens when
Fig. 2.6 Block diagram for Level 3 of hierarchical procedure for conceptual design.
Number of restrictions at Level 3 > DOF at Level 3 − DOF at Level 2
(2.47)
For example, this occurs when the selectivities (used at Level 2) depend on the molar feed ratios at the reactor inlet.

2.3.9.1 Degrees of freedom

2.3.9.1.1 Case 1
If the mole fractions of the components in the purge streams are the same as in the corresponding recycle streams, xR is the same as xpurge and yR is the same as ypurge in Fig. 2.10. Therefore, two new variables are introduced at this level compared to Level 2: the recycle flow rates of the liquid and gaseous streams, respectively. Therefore,

DOF = c + l + g + R + 2.

However, since c + g + l + R variables are already specified at Level 2, we must specify two additional degrees of freedom to solve the Level 3 balances. Usually, the molar feed ratios at the reactor inlet are chosen. Also, the conversion of the limiting reactant is an important variable which is known before solving these balances. In order to incorporate this information, we formulate the recycle balances in terms of molar ratios. However, since the degrees of freedom are unchanged by this new formulation, we may no longer have control over some variables we specified at Level 2 (like purge liquid and gas compositions), which can then be calculated once the recycle balances are solved. This will occur if the condition given in Eq. (2.47) is satisfied. The recycle flow rate of the limiting reactant can be written in terms of the feed flow rates by a material balance around the mixing point:

Rlr = FTlr − F0lr
(2.48)
where FTlr is the total molar flow rate of the limiting reactant at the reactor inlet and F0lr is the fresh molar feed rate of the limiting reactant. The recycle flow rates of all the components can be written as

Rl + Rg = F0lr (Xlr)⁻¹ M − F0.
(2.49)
Here, Rl and Rg are the column vectors of the recycle flow rates of the c components in the liquid and gas recycles, respectively; Xlr is the conversion of the limiting reactant; M is a column vector of the molar feed ratios of the components to the limiting reactant at the reactor inlet, where Mi = FTi/FTlr is the molar feed ratio of component i with respect to the limiting reactant; and F0 is the column vector of the fresh molar feed rates of the c components. The molar feed ratios at the reactor inlet are usually kept constant during operation to avoid disturbances in the process conditions. Since there are l components in the liquid recycle, c − l values in the column vector Rl are known to be zero. Similarly, c − g values in the column vector Rg are known to be zero.
Therefore, Gadewar et al. (2001) have introduced c + l + g + 1 new variables, which include c molar ratios, l + g recycle flow rates, and the conversion of the limiting reactant. Gadewar et al. (2001) thus have c + 1 additional equations at Level 3: the c equations given in Eq. (2.49) and FTlr Xlr = F0lr. The variables specified earlier that need not be specified in the new formulation at Level 3 are the l − 1 mole fractions in the liquid purge and the g − 1 mole fractions in the gaseous purge.

2.3.9.1.2 Case 2
If a separation system is used to separate the purge stream from the recycle stream, the component mole fractions in the recycle stream will be different from those in the corresponding purge stream. Therefore, xR is different from xpurge and yR is different from ypurge in Fig. 2.10. If the number of components in the gas and liquid purge streams is g and l, respectively, there are l + g new variables at Level 3 as compared to Level 2. These l + g new variables consist of (l − 1) independent mole fractions in the liquid recycle stream, (g − 1) independent mole fractions in the gas recycle stream, the recycle flow rate of liquid, and the recycle flow rate of gas, given as RL and RG, respectively, in Fig. 2.10. Therefore,

DOF = c + 2l + 2g + R.
(2.50)
However, since c + g + l + R variables are already specified at Level 2, we must specify l + g additional degrees of freedom to solve the Level 3 balances. The balances given in Eq. (2.49), along with the Level 2 balances, can be solved in this case if the molar feed ratios of the components at the reactor inlet are known.
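Eq. (2.49) together with FTlr Xlr = F0lr can be exercised numerically. The sketch below lumps the liquid and gas recycles into one vector, and the feed rates, molar ratios, and conversion are hypothetical illustration values, not taken from Gadewar et al. (2001):

```python
# Numerical sketch of the Level 3 recycle balance, Eq. (2.49):
#   Rl + Rg = (F0_lr / X_lr) * M - F0,
# where F_lr^T = F0_lr / X_lr is the limiting-reactant flow at the reactor inlet.
# All numbers below are hypothetical illustration values.

def recycle_flows(F0, M, X_lr, lr):
    """Total recycle flow of each component (liquid + gas lumped together).

    F0   : fresh feed rates of the c components
    M    : molar feed ratios at the reactor inlet (M[lr] == 1 by definition)
    X_lr : conversion of the limiting reactant
    lr   : index of the limiting reactant
    """
    FT_lr = F0[lr] / X_lr   # reactor-inlet flow of the limiting reactant
    return [FT_lr * Mi - F0i for Mi, F0i in zip(M, F0)]

# Two-component illustration: limiting reactant fed fresh at 100 mol/h with 80%
# per-pass conversion; the co-reactant is held at a 2:1 ratio at the reactor inlet.
F0 = [100.0, 150.0]
M = [1.0, 2.0]
print(recycle_flows(F0, M, X_lr=0.8, lr=0))   # [25.0, 100.0]
```

Lowering the per-pass conversion inflates the recycle flows (at X_lr = 0.5 the limiting-reactant recycle doubles the fresh feed), which is exactly why the recycle structure dominates the economics evaluated at this level.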
2.3.10 Concluding remarks on mole balances in complex chemistries

Following Gadewar et al. (2001), we have developed a systematic treatment of input-output mole balances for complex chemistries by using the concept of reaction invariants to determine mole balances for a complex reaction system. If the reaction chemistry is not known, we employ the systematic method of Aris and Mah (1963) to determine a consistent set of chemical equations. We demonstrate the applicability of this method in data reconciliation for two examples, viz., propane dehydrogenation and the oxidation of formaldehyde and methanol. We prove that the mole balances for the maximum number of independent reactions are in fact the element balances, and that these balances are equivalent for any set containing the maximum number of independent chemical equations. This methodology also gives the degrees of freedom for the experimental analysis of a reaction system and thus provides insight for planning experiments. One of the foremost applications of reaction invariants is in the automation of the hierarchical decision procedure for process synthesis published by Douglas (1985, 1988). This systematic methodology greatly simplifies the task of setting up mole balances for complex chemistries at Levels 2 and 3 of Douglas's hierarchical procedure.
2.3.10.1 Appendix A. Mole number transforms

Consider a reaction system with a total of c components undergoing R independent chemical reactions. The R independent chemical reactions can be written as

ν1r A1 + ν2r A2 + ν3r A3 + ⋯ + νcr Ac ⇌ 0,  r = 1, 2, …, R,
(2.A.1)
Here Ai is the reacting species and νi,r is the stoichiometric coefficient of component i in reaction r. The convention used is νi,r > 0 if component i is a product and νi,r < 0 if it is a reactant. When the number of independent reactions is less than the maximum, c − R is greater than the rank of the atomic matrix, so the number of mole balances is greater than the number of element balances. Therefore, the element balances form a subset of the mole balances for fewer than the maximum number of independent reactions.

2.3.10.4 Appendix D. Independence of element balances

Consider a reaction system consisting of ethylene oxide (C2H4O), water (H2O), ethylene glycol (C2H6O2), and diethylene glycol (C4H10O3). The species are numbered as follows: ethylene oxide (1), water (2), ethylene glycol (3), and diethylene glycol (4). The atomic matrix A for the system is given as
      (1)  (2)  (3)  (4)
C      2    0    2    4
H      4    2    6   10
O      1    1    2    3          (2.D.1)
Each row of the atomic matrix represents an element balance given by the summation of the product of the coefficient in the row and the corresponding species at the top of the column. Performing elementary row operations and reducing the matrix A to its echelon form A*, we get
      (1)  (2)  (3)  (4)
C      1    0    1    2
H      0    1    1    1
O      0    0    0    0          (2.D.2)
The rank of matrix A is R = 2. There are c − R = 2 independent chemical reactions,

C2H4O + H2O ⇌ C2H6O2
(2.D.3a)
C2H4O + C2H6O2 ⇌ C4H10O3
(2.D.3b)
Since R = 2, there are only two independent element balances for the reaction system. The row vectors of the elements in matrix A can be related as

C + 2O − H = 0
(2.D.4)
Physically, this means that the constituent elements of each molecule are related to each other by the stoichiometric relationship (2.D.4), and this relationship applies to each species in the system. Since there is one stoichiometric relationship between the elements for this reaction system, there is one less independent element balance. The concept of the independence of element balances is discussed by Reklaitis (1983, his Ch. 4). Reklaitis (1983, Example 4.5) considers an example with five chemical species (CO2, H2O, NH3, (NH2)2CO, NH2COONH4) involving four elements (C, H, O, N). One of the element balances for this system is dependent, and the stoichiometric relationship between the elements is 4C + H − 2O − 3N = 0, which is satisfied by each of the species.
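The rank computation underlying Appendix D can be sketched without any linear-algebra library; the elimination routine below is a generic utility of ours, not the authors' code:

```python
# Sketch: the number of independent reactions is c - rank(A), and the number of
# independent element balances is rank(A) (Appendix D). A small Gaussian
# elimination suffices for these integer atomic matrices.

def matrix_rank(M):
    """Rank of a small dense matrix via Gaussian elimination (float arithmetic)."""
    M = [row[:] for row in M]              # work on a copy
    rows, cols = len(M), len(M[0])
    rank, col = 0, 0
    while rank < rows and col < cols:
        pivot = next((r for r in range(rank, rows) if M[r][col] != 0), None)
        if pivot is None:                  # column already zero below the pivot row
            col += 1
            continue
        M[rank], M[pivot] = M[pivot], M[rank]
        for r in range(rows):              # clear the pivot column elsewhere
            if r != rank and M[r][col] != 0:
                f = M[r][col] / M[rank][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[rank])]
        rank += 1
        col += 1
    return rank

# Atomic matrix for C2H4O, H2O, C2H6O2, C4H10O3 (rows C, H, O), Eq. (2.D.1):
A = [[2, 0, 2, 4], [4, 2, 6, 10], [1, 1, 2, 3]]
c = len(A[0])
print(matrix_rank(A))        # 2: the relation C + 2O - H = 0 makes one row dependent
print(c - matrix_rank(A))    # 2 independent chemical reactions, as in Eq. (2.D.3)
```

Although A has three rows (one per element), its rank is only two, which is precisely the statement that the element balances are not all independent here.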
2.4 Optical instabilities (Badii and Politii, 1997)

In the following we describe another, quite different, example of complex states and complex transformations. The similarities with hydrodynamics suggest the possibility of a universal (i.e., system-independent) understanding of macroscopic features, although observations of these complex transformations are strongly influenced by the boundaries and by the available amount of data, a circumstance that may be, for certain purposes, insufficient (Badii and Politii, 1997, Sec. 2.4). A wide range of complex phenomena may be observed in nonlinear optics,
including various types of temporal instabilities and disordered patterns, as well as the spontaneous formation of cellular structures and vortices (Harrison and Uppal, 1993). The basic mechanism underlying such behavior is the nonlinear interaction between two linear systems: electromagnetic waves (described by the Maxwell equations) and an atomic medium microscopically described by the Schrödinger equation (Schrödinger, 1967). An exemplary optical instability occurs in lasers, where the atomic medium is confined inside a resonant cavity and supplied with a continuous flow J of energy which pumps the atoms to some excited level. Whenever J is small enough, energy is dissipated by incoherent emission of light, atomic collisions, and dispersion in the optical cavity. At larger values of J, a more effective process of energy removal sets in: the well-known lasing action with emission of coherent light. The laser instability is conceptually equivalent to the onset of Rayleigh-Benard convection. In both cases, a time-periodic behavior emerges spontaneously when the incoming energy flux is strong enough in comparison with the intrinsic energy losses. This analogy has led some researchers to suspect that the mechanism underlying this instability could represent a sort of paradigm for the emergence of increasingly complex, if not even disordered, structures. Nowadays, this phenomenon has been recognized as a Hopf bifurcation, which is just one of the many qualitative changes that may occur in systems far from equilibrium. The optical cavity plays the same role as the fluid container does in hydrodynamics (Badii and Politii, 1997). A "short" cavity is equivalent to a cell with a small aspect ratio, since only one cavity mode (one Fourier mode in the case of plane mirrors) can become excited and substantially influence the dynamics, all others being quickly damped.
Although strongly constraining the possible spatial structures, this geometry allows for a rich time evolution, as shown, e.g., by one of the simplest experimental arrangements, the CO2 laser with periodic modulation of the cavity losses (Arrechi et al., 1982). Lasers are not only important examples of nonlinear optical devices but, because of the strong fields they generate, they also serve as the main energy sources in several experimental arrangements. This is of great importance, for example, for the simultaneous excitation of many modes, a necessary condition for the onset of spatial structures.
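The Hopf bifurcation invoked above can be illustrated with the radial part of its normal form, dr/dt = μr − r³: for μ ≤ 0 the steady state r = 0 is stable (no lasing, no convection), while for μ > 0 an oscillation of amplitude √μ appears. This is a generic sketch of the bifurcation, not a laser model, and the parameter values are arbitrary:

```python
# Sketch of a Hopf bifurcation via the radial normal form dr/dt = mu*r - r**3.
# Below threshold (mu < 0) the amplitude decays to zero; above threshold it
# settles onto the limit cycle of amplitude sqrt(mu). Parameters are arbitrary.

def settle_amplitude(mu, r0=0.1, dt=1e-3, steps=200_000):
    """Euler-integrate dr/dt = mu*r - r**3 and return the final amplitude."""
    r = r0
    for _ in range(steps):
        r += dt * (mu * r - r**3)
    return r

print(settle_amplitude(-0.5))   # decays toward 0: below the instability threshold
print(settle_amplitude(0.5))    # approaches sqrt(0.5) ~ 0.707: finite oscillation amplitude
```

The control parameter μ stands in for the distance from threshold (pump rate J minus losses in the laser, or Ra − Ra_c in convection), which is what makes the two instabilities conceptually equivalent.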
2.5 Growth and aging phenomena (Badii and Politii, 1997, Sec. 2.5, pp. 23–24)

Patterns whose components have quite different shapes, emerging from a dull background, are frequently associated with complexity (Badii and Politii, 1997, p. 23, their Sec. 2.5, on growth). Typical examples of this behavior are aggregates of small, usually elementary objects, e.g., spherical particles or liquid droplets. These objects are formed under the effect of spatially isotropic forces (Family and Landau, 1984). Notwithstanding the simplicity of the constituents and of the dynamic rules, such structures develop a ramified aspect with relevant symmetries
that do not reflect those of the interparticle forces (Arneodo et al., 1992a, 1992b). In addition, they do not have a characteristic length scale; rather, they exhibit self-similar features. Examples include metal colloids, coagulated aerosols, viscous fingering (after, e.g., injecting water into layers of plaster), flows through porous media, spinodal decomposition, dielectric breakdown, and electrodeposition (Aharony and Feder, 1989; Meakin, 1983, 1991). In all these systems, the growth of the pattern starts with an initial nucleus, particle, or droplet, to which other particles eventually stick. The accretion of the cluster may occur at any point of its boundary, although with different probability. The commonly observed shapes closely resemble coral formations, trees, or thunderbolts (Badii and Politii, 1997). Consider a typical gold colloid aggregate whose electron-microscopic image represents a two-dimensional projection of the real cluster (Fig. 2.7, p. 24 in Badii and Politii, 1997). In the diffusion-limited aggregation (DLA) model, the ensemble of all particles performing the random walk simulates a scalar field ϕ that obeys the Laplace equation ∇²ϕ = 0; see Eq. (2.4) in Badii and Politii (1997) and the related probability formula for a walker on a square lattice. In general, the number N of particles in the cluster scales as R^D, where R is the gyration radius and D is a dimension-like quantity which alone cannot account for the multitude of shapes observed in different systems. To discriminate them exactly and identify possible universal structures, the kinetics of the growth phenomena must be considered as well. Two
Fig. 2.7 Two-dimensional wave pattern in the Belousov-Zhabotinsky reaction in a thin solution layer. Based on Müller, S.C., Hess, B., 1989. Spiral order in chemical reactions. In: Goldbeter, A. (Ed.), Cell to Cell Signaling: From Experiments to Theoretical Models. Academic Press, London, p. 503.
characteristic times are involved: the diffusion time td needed by the particles to come into contact, and the reaction time tr needed to form a bond. The description simplifies when the two time scales are very different. The setting up of a complexity theory is considerably facilitated when it is carried out within a discrete framework. However, most mathematical and physical problems find their natural formulation in the complex or real field. As the transformation of continuous quantities into a symbolic form is much easier than the converse, it is expedient to adopt a common representation of complex systems based on integer mathematics. This does not limit the generality of the approach, especially because discrete patterns occur, in fact, both in the physical systems considered and in their mathematical models. Numerous examples may be found in the realm of alloys, crystals, cellular automata, and DNA. Badii and Politii (1997) recall, however, that a proposal for a theory of computational complexity over the reals has been advanced (Blum, 1990). The symbolic representation of continuous systems also helps to elucidate the relation between chaotic phenomena and random processes, although it is by no means restricted to nonlinear dynamics (Badii and Politii, 1997, p. 69; Meakin, 1983, 1991).
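The DLA growth rules described above (walkers released far from a seed, performing random walks and sticking on first contact) can be sketched in a few lines. The launch/kill radii, padding, and cluster size below are illustrative choices, and real studies use far larger clusters:

```python
# Minimal sketch of diffusion-limited aggregation (DLA) on a square lattice.
import math
import random

NEIGHBORS = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def grow_dla(n_particles, pad=5, seed=0):
    """Grow a DLA cluster of n_particles around a seed at the origin."""
    rng = random.Random(seed)
    cluster = {(0, 0)}                       # initial nucleus
    r_max = 0.0                              # current cluster radius
    while len(cluster) < n_particles:
        launch = r_max + pad                 # release walkers outside the cluster
        kill = 2 * launch + pad              # relaunch walkers that stray too far
        ang = rng.uniform(0.0, 2.0 * math.pi)
        x = int(round(launch * math.cos(ang)))
        y = int(round(launch * math.sin(ang)))
        while x * x + y * y <= kill * kill:
            if any((x + dx, y + dy) in cluster for dx, dy in NEIGHBORS):
                cluster.add((x, y))          # sticks on first contact
                r_max = max(r_max, math.hypot(x, y))
                break
            dx, dy = rng.choice(NEIGHBORS)   # one diffusion step
            x, y = x + dx, y + dy
    return cluster

def gyration_radius(cluster):
    """The radius R in the scaling relation N ~ R^D."""
    n = len(cluster)
    cx = sum(x for x, _ in cluster) / n
    cy = sum(y for _, y in cluster) / n
    return math.sqrt(sum((x - cx) ** 2 + (y - cy) ** 2 for x, y in cluster) / n)

cluster = grow_dla(60)
print(len(cluster), round(gyration_radius(cluster), 2))
```

Fitting log N against log R over a sequence of such clusters estimates the dimension-like exponent D mentioned in the text; as noted there, D alone does not distinguish the many growth morphologies, so the kinetics (td versus tr) must be examined too.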
References

Aharony, A., Feder, J. (Eds.), 1989. Fractals in Physics: Essays in Honor of Benoit B. Mandelbrot. North-Holland, Amsterdam.
Amrhein, M., Srinivasan, B., Bonvin, D., 1999. Target factor analysis of reaction data: use of data pretreatment and reaction invariant relationships. Chem. Eng. Sci. 54, 579.
Aris, R., 1965. Introduction to the Analysis of Chemical Reactors. Prentice Hall, New Jersey.
Aris, R., Mah, R.H.S., 1963. Independence of chemical reactions. Ind. Eng. Chem. Fundam. 2, 90.
Arneodo, A., Argoul, F., Bacry, E., Muzy, J.F., Tabard, M., 1992a. Golden mean arithmetic in the fractal branching of diffusion-limited aggregates. Phys. Rev. Lett. 68, 3456.
Arneodo, A., Argoul, F., Muzy, J.F., Tabard, M., 1992b. Structural five-fold symmetry in the fractal morphology of diffusion-limited aggregates. Physica A 188, 217.
Arrechi, E.T., Meuci, R., Puccioni, G., Tredicce, J., 1982. Experimental evidence of subharmonic bifurcations, multistability, and turbulence in a Q-switched gas laser. Phys. Rev. Lett. 49, 1217.
Asbjørnsen, O.A., 1971/1972. Reaction invariants in the control of continuous chemical reactors. Chem. Eng. Sci. 27, 709.
Asbjørnsen, O.A., Fjeld, M., 1970. Response modes of continuous stirred tank reactors. Chem. Eng. Sci. 25, 1627.
Badii, R., Politii, A., 1997. Complexity: Hierarchical Structures and Scaling in Physics. Cambridge University Press, Cambridge.
Blum, L., 1990. Lectures on a theory of computation and complexity over the reals or an arbitrary ring. In: Jen (Ed.), p. 1.
Bonvin, D., Rippin, D.W.T., 1990. Target factor analysis for the identification of stoichiometric models. Chem. Eng. Sci. 45, 3417.
Brown, W., 1973. Heat-flux transitions at low Rayleigh number. J. Fluid Mech. 69, 539–559.
Builtjes, P.J.H., 1977. Memory Effects in Turbulent Flows. W.T.H.D. vol. 97, pp. 1–145.
Capra, F., 1996. The Web of Life: A New Scientific Understanding of Living Systems. Anchor Books, New York, p. 87.
Chandrasekhar, S., 1961. Hydrodynamic and Hydromagnetic Stability. Oxford University Press, London, p. 11.
Cheng, W.H., 1996. Methanol and formaldehyde oxidation study over molybdenum oxide. J. Catal. 158, 477.
Defay, R., 1931. Azeotropisme—Equations nouvelles des etats indifferents. Bull. Cl. Sci. Acad. R. Belg. 17, 940.
Denn, M.M., Shinnar, R., 1987. Coal gasification reactors. In: Carberry, J.J., Varma, A. (Eds.), Chemical Reaction and Reactor Engineering. Marcel Dekker, New York, p. 499.
Douglas, J.M., 1985. A hierarchical decision procedure for process synthesis. Am. Inst. Chem. Eng. J. 31, 353.
Douglas, J.M., 1988. Conceptual Design of Chemical Processes. McGraw-Hill, New York.
Douglas, J.M., 1990. Synthesis of multistep reaction processes. In: Siirola, J.J., Grossmann, I.E., Stephanopoulos, G. (Eds.), Foundations of Computer-Aided Process Design. Elsevier, New York, p. 79.
Douglas, J.M., 1995. Synthesis of separation system flowsheets. Am. Inst. Chem. Eng. J. 41, 2522.
Durbin, P.A., 1993. Application of near-wall turbulence model to boundary layers and heat transfer. Int. J. Heat Fluid Flow 14, 316–323.
Family, F., Landau, D.P. (Eds.), 1984. Kinetics of Aggregation and Gelation. North-Holland, Amsterdam.
Feinberg, M., 2000. Optimal reactor design from a geometric viewpoint. Part II. Critical sidestream reactors. Chem. Eng. Sci. 55, 2455.
Fjeld, M., Asbjørnsen, O.A., Åström, K.J., 1974. Reaction invariants and their importance in the analysis of eigenvectors, state observability and controllability of the continuous stirred tank reactor. Chem. Eng. Sci. 29, 1917.
Forni, L., Miglio, R., 1993. Catalytic synthesis of 2-methylpyrazine over Zn–Cr–O/Pd. A simplified kinetic scheme. In: Guisnet, M., Barbier, J., Barrault, J., Bouchoule, C., Duprez, D., Perot, G., Montassier, C. (Eds.), Heterogeneous Catalysis and Fine Chemicals III. Elsevier, New York, p. 329.
Gadewar, S.B., Doherty, M.F., Malone, M.F., 2001. A systematic method for reaction invariants and mole balances for complex chemistries. Comput. Chem. Eng. 25 (9–10), 1199–1217.
Glasser, D., Hildebrandt, D., Crowe, C., 1987. A geometric approach to steady flow reactors: the attainable region and optimization in concentration space. Ind. Eng. Chem. Res. 26, 1803.
Happel, J., 1986. Isotopic Assessment of Heterogeneous Catalysis. Academic Press, Orlando.
Harrison, R.G., Uppal, J.S. (Eds.), 1993. Nonlinear Dynamics and Spatial Complexity in Optical Systems. SUSSP, Edinburgh and IOP, Bristol.
Himmelblau, D., 1996. Basic Principles and Calculations in Chemical Engineering. Prentice Hall, New Jersey.
Joshi, S.K., Douglas, J.M., 1992. Avoiding accumulation of trace components. Ind. Eng. Chem. Res. 31, 1502.
Jouguet, J., 1921. Notes de mécanique chimique. J. École Polytech. 21, 62.
Landahl, M.T., Mollo-Christensen, E., 1992. Turbulence and Random Processes in Fluid Mechanics. Cambridge University Press, Cambridge.
Meakin, P., 1983. Formation of fractal structures and networks by irreversible diffusion-limited aggregation. Phys. Rev. Lett. 51, 1119.
Meakin, P., 1991. Fractal aggregates in geophysics. Rev. Geophys. 29, 317.
Monin, A.S., Yaglom, A.M., 1967. The Mathematical Problems of Turbulence. Nauka, Moscow.
Monin, A.S., Yaglom, A.M., 1971. Statistical Fluid Mechanics. vol. 1. MIT Press, Cambridge, MA.
Monin, A.S., Yaglom, A.M., 1975. Statistical Fluid Mechanics. vol. 2. MIT Press, Cambridge, MA.
Nauman, E.B., 1987. Chemical Reactor Design. Wiley, New York.
Newman, J., Thomas-Alyea, K.E., 2000. Electrochemical Systems, third ed. John Wiley & Sons, Hoboken, NJ.
Price, G.L., Kanazirev, V., Dooley, K.M., Hart, V.I., 1998. On the mechanism of propane dehydrocyclization over cation-containing, proton-poor MFI zeolite. J. Catal. 173, 17.
Reklaitis, G.V., 1983. Introduction to Material and Energy Balances. Wiley, New York.
Rosen, E.M., 1962. A machine computation method for performing material balances. Chem. Eng. Prog. 58 (10), 69.
Schrödinger, E., 1967. What is Life? The Physical Aspect of the Living Cell. Cambridge University Press, Cambridge, UK (first ed. 1944).
Schneider, E., Kay, J., 1994. Life as a manifestation of the second law of thermodynamics. Math. Comput. Model. 19, 25–48 (special issue on the modeling of complex systems, eds. Mikulecky, D., Whitten, M.).
Schneider, D.R., Reklaitis, G.V., 1975. On material balances for chemically reacting systems. Chem. Eng. Sci. 30, 243.
Sieniutycz, S., 2016. Thermodynamic Approaches in Engineering Systems. Elsevier, Oxford, p. 273, Fig. 5.9.
78
Chapter 2
Sieniutycz, S., 2020. Complexity and Complex Thermo-Economic Systems. Elsevier, Oxford, p. 327. Sieniutycz, S., Ratkje, S.K., 1996. Variational principle for entropy in electrochemical transport phenomena. Int. J. Eng. Sci. 34, 549–560. Silveston, P.L., 1957. Warmedurchange in Horizontalen Flassigkeitschichtem. (Ph.D. thesis)Techn. Hochsch., Muenchen, Germany. Smith, W.R., Missen, R.W., 1979. What is chemical stoichiometry? Chem. Eng. Educ. 13, 26. Solymosi, F., Erdohelyi, A., Szoke, A., 1995. Dehydrogenation of methane on supported molybdenum oxides— formation of benzene from methane. Catal. Lett. 32, 43. Sood, M.K., Reklaitis, G.V., 1979. Solution of material balances for flowsheets modelled with elementary modules: the constrained case. Am. Inst. Chem. Eng. J. 25, 220. Sood, M.K., Reklaitis, G.V., Woods, J.M., 1979. Solution of material balances for flowsheets modelled with elementary modules: the unconstrained case. Am. Inst. Chem. Eng. J. 25, 209. Srinivasan, B., Amrhein, M., Bonvin, D., 1998. Reaction and flow variants/invariants in chemical reaction systems with inlet and outlet streams. Am. Inst. Chem. Eng. J. 44, 1858. Strang, G., 1988. Linear Algebra and Its Applications. Harcourt Brace Jovanovich, San Diego. Tyreus, B.D., Luyben, M.L., 2000. Industrial plantwide design for dynamic operability. In: Malone, M.F., Trainham, J.A. (Eds.), Foundations of Computer-Aided Process Design.In: American Institute of Chemical Engineering Symposium Series, vol. 323(96). p. 113. Ung, S., Doherty, M.F., 1995a. Theory of phase equilibria in multireaction systems. Chem. Eng. Sci. 50, 3201. Ung, S., Doherty, M.F., 1995b. Synthesis of reactive distillation systems with multiple equilibrium chemical reactions. Ind. Eng. Chem. Res. 34, 2555. Waller, K.V., M€akil€a, P.M., 1981. Chemical reaction invariants and variants and their use in reactor modeling, simulation and control. Ind. Eng. Chem. Process. Des. Res. 20, 1. Weissermel, K., Arpe, H.J., 1993. 
Industrial Organic Chemistry. VCH, Weinheim, Germany. Whitwell, J.C., Dartt, S.R., 1973. Independent reactions in the presence of isomers. Am. Inst. Chem. Eng. J. 19, 1114.
Further reading Gadewar, S. B., 2001. Feasible products for reaction with separation. Ph.D. dissertation, University of Massachusetts, Amherst. Gadewar, S.B., Schembecker, G., Doherty, M.F., 2005. Selection of reference components in reaction invariants (short communication). Chem. Eng. Sci. 60 (24), 7168–7171. Goldbeter, A. (Ed.), 1989. Cell to Cell Signaling: From Experiments to Theoretical Models. Academic Press, London, p. 503.
CHAPTER 3
Heylighen’s enlarged view of growing complexities in evolution

3.1 Introduction

The present chapter, which describes properties of complexity, is based extensively on Heylighen’s original publication (Heylighen, 1996), enlarged by the author of the present book with suitable additions: updates, comments, and extra references. The basis of the chapter remains Heylighen’s rigorous reasoning, which is in a sense universal and accessible to every reader. Cases where the author of this volume does not share Heylighen’s viewpoint are discussed in detail. New, suitable references are added, and the questions associated with these additions are discussed. At least since the days of Darwin, the idea of evolution has been associated with the increase of complexity: if we go back in time we see originally only simple systems (elementary particles, atoms, molecules, unicellular organisms), while more and more complex systems (multicellular organisms, vertebrates, mammals, human beings) appear in later stages (Heylighen, 1996). Traditional evolutionary theory, however, had no methods for analyzing complexity, and so this observation remained a purely intuitive impression (Heylighen, 1996). The last decades have seen a proliferation of theories offering new concepts and principles for modeling complex systems: information theory, general systems theory, cybernetics, nonequilibrium thermodynamics, catastrophe theory, deterministic chaos, complex adaptive systems, etc. These have led to the awareness that complexity is a much more important aspect of the world than classical, reductionist science would have assumed (Heylighen, 1996). Paradoxically, this development has also been accompanied by a questioning of the idea that complexity necessarily grows during evolution (Heylighen, 1996). It turns out that complexity is itself a complex concept: difficult to define and to model, and easy to misinterpret.
To a certain extent, complexity is in the eye of the beholder: what is complex for one observer may be simple for another. This awareness is reinforced by the wider intellectual climate, characterized by a “postmodern” philosophy, which stresses the subjectivity or culture dependence of all scientific models. To this must be added the continuing trend away from anthropocentrism, which was started by Copernicus’ insight that the Earth is not the center of the solar system, and Darwin’s discovery that humans and animals have a common origin
(Heylighen, 1996). The “growth of complexity” idea can be and has been used to argue that humanity, though it may no longer be at the center of the universe, is still at the top of the evolutionary ladder (Gould, 1994). The present relativistic ideology, which tends to put all people, theories, cultures, and even species on an equal footing, shies away from the implied idea of a ladder or hierarchy of complexity, and therefore rejects the whole growth-of-complexity argument (Heylighen, 1996). On the other hand, concrete observations in diverse domains seem to confirm in ever more detail the intuitive notion of increasing complexity (Heylighen, 1996). For example, the now generally accepted “Big Bang” model of cosmogenesis and its extensions (Heylighen, 1996) sees the evolution of the universe as one in which simple, homogeneous systems became more differentiated and integrated in subsequent stages: after several rounds of “symmetry breaking” caused by the cooling down of the universe, the primordial energy-matter field condensed into the four basic forces and the many families of elementary particles we know now. These particles got further integrated, first into nucleons, then into hydrogen atoms. About the same time, the more or less homogeneous distribution of matter in space condensed locally, forming a heterogeneous distribution of stars, within galaxies, within clusters of galaxies (Heylighen, 1996). Hydrogen in stars, under the influence of heat and pressure, formed the different elements through nuclear reactions. Heavier elements that left the stellar core could then combine through chemical reactions to form a variety of molecules. Under the right conditions, these molecules would form dissipative cycles of reactions that in a second stage would give rise to primitive life (Heylighen, 1996). Once we enter the biological realm, things become more ambiguous, and examples can be found to illustrate both increase and decrease of complexity.
Yet, the general picture is still one of more complex systems again and again evolving out of simpler ones, e.g., eukaryotes from prokaryotes, sexual reproduction from asexual reproduction, multicellular organisms from unicellular ones (Maynard Smith and Szathmáry, 1995). Once we reach the level of culture, the general growth of complexity again becomes more obvious. Perhaps with temporary exceptions, like the fall of the Roman empire, human history is characterized by an ever more quickly accumulating body of culture, science, technology, and socioeconomic organization. Though the speed at which this happens may not have been noticeable in ancient civilizations, it has increased to such a degree that in our present “age of information” few people would dare to deny the observation that the world becomes more complex every year. These two opposing forces, a philosophically motivated tendency to question the increase of complexity, and the recurrent observation of growing complexity at all stages of the evolutionary process, put researchers studying complex evolution in a strange predicament. Like Maynard Smith and Szathmáry (1995), in their study of the major transitions in evolution, they feel obliged to pay lip service to the current ideology by noting the “fallacy” of believing that
there is something like progress or advance toward increasing complexity, and then continue by describing in detail the instances of such increase they have studied (Heylighen, 1996). The aim of Heylighen’s (1996) paper is to clarify the previously mentioned issue following his original reasoning, i.e., by arguing that there are theoretical grounds for concluding that complexity tends to increase, while noting that that increase is not as inexorable or straightforward as the evolutionists at the beginning of the twentieth century might have believed. This will be done by introducing a number of basic concepts and principles that helped Heylighen to conceptualize evolution and complexity. In a second stage, the main arguments used to criticize the view that complexity grows will be analyzed, also following Heylighen’s original (1996) work. It will be shown how each of these arguments can be countered in the more encompassing conceptual framework. This framework is Heylighen’s extension of the theory of metasystem transitions (Heylighen et al., 1995), which has been developed in the context of the Principia Cybernetica Project. The author of the present volume also needs to stress the role of researchers outside of this project, such as Ingarden (1977), Ebeling (1985), Ebeling and Feistel (1992), and some others to be cited later. Alhumaizi (2000) treats chaotic autocatalytic reactions with mutation.
3.2 What is complexity?

Complexity has turned out to be very difficult to define. The dozens of definitions that have been offered all fall short in one respect or another, classifying something as complex which we intuitively would see as simple, or denying an obviously complex phenomenon the label of complexity (Heylighen, 1996). Moreover, these definitions are either only applicable to a very restricted domain, such as computer algorithms or genomes, or so vague as to be almost meaningless. Edmonds (1996) gives a good review of the different definitions and their shortcomings, concluding that complexity necessarily depends on the language that is used to model the system. Still, there is probably a common, “objective” core in the different concepts of complexity (Heylighen, 1996). Let us go back to the original Latin word complexus, which signifies “entwined,” “twisted together.” This may be interpreted in the following way: in order to have a complex you need two or more components, which are joined in such a way that it is difficult to separate them. Similarly, the Oxford Dictionary defines something as “complex” if it is “made of (usually several) closely connected parts.” Here we find the basic duality between parts which are at the same time distinct and connected (Heylighen, 1996). Intuitively then, a system would be more complex if more parts could be distinguished, and if more connections between them existed. More parts to be represented means more extensive models, which require more time to be searched or computed. Since the components of a complex cannot be separated without destroying it, the method of analysis or decomposition into independent modules
cannot be used to develop or simplify such models. This implies that complex entities will be difficult to model, that eventual models will be difficult to use for prediction or control, and that problems will be difficult to solve. This accounts for the connotation of difficult, which the word “complex” has received in later periods (Heylighen, 1996). The aspects of distinction and connection determine two dimensions characterizing complexity. Distinction corresponds to variety, to heterogeneity, to the fact that different parts of the complex behave differently (Heylighen, 1996). Connection corresponds to constraint, to redundancy, to the fact that different parts are not independent, but that the knowledge of one part allows the determination of features of the other parts. Distinction leads in the limit to disorder, chaos or entropy, like in a gas, where the position of any gas molecule is completely independent of the position of the other molecules. Connection leads to order or negentropy, like in a perfect crystal, where the position of a molecule is completely determined by the positions of the neighboring molecules to which it is bound (Heylighen, 1996). Complexity can only exist if both aspects are present: neither perfect disorder (which can be described statistically through the law of large numbers), nor perfect order (which can be described by traditional deterministic methods) are complex. It thus can be said to be situated in between order and disorder, or, using a recently fashionable expression, “on the edge of chaos” (Waldrop, 1992; Heylighen, 1996). The simplest way to model order is through the concept of symmetry, i.e., invariance of a pattern under a group of transformations (Heylighen, 1996; Sieniutycz, 1994). In symmetric patterns one part of the pattern is sufficient to reconstruct the whole. For example, in order to reconstruct a mirror-symmetric pattern, like the human face, you need to know one half and then simply add its mirror image. 
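To make the reconstruction idea concrete, here is a minimal sketch (ours, not Heylighen's; the function names and toy patterns are invented purely for illustration) of how a symmetric pattern is determined by a small part of itself:

```python
# Reconstructing symmetric patterns from a "seed": the larger the
# group of symmetry transformations, the smaller the seed needed.

def reconstruct_mirror(half):
    """Rebuild a mirror-symmetric sequence from its left half."""
    return half + half[::-1]

def reconstruct_periodic(seed, copies):
    """Rebuild a translation-invariant (periodic) pattern from one cell."""
    return seed * copies

# A 6-element mirror pattern is determined by 3 elements...
print(reconstruct_mirror([1, 2, 3]))   # [1, 2, 3, 3, 2, 1]
# ...while a 12-element periodic pattern with period 2 is determined
# by just 2 elements: a larger symmetry group, a smaller seed.
print(reconstruct_periodic([1, 2], 6))
```

The more transformations leave the pattern invariant, the more redundant the pattern is, which is exactly the notion of order developed in the text.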
The larger the group of symmetry transformations, the smaller the part needed to reconstruct the whole, and the more redundant or “ordered” the pattern. For example, a crystal structure is typically invariant under a discrete group of translations and rotations. A small assembly of connected molecules will be a sufficient “seed,” out of which the positions of all other molecules can be generated by applying the different transformations. Empty space is maximally symmetric or ordered: it is invariant under any possible transformation, and any part, however small, can be used to generate any other part (Heylighen, 1996). It is interesting to note that maximal disorder too is characterized by symmetry, not of the actual positions of the components, but of the probabilities that a component will be found at a particular position. For example, a gas is statistically homogeneous: any position is as likely to contain a gas molecule as any other position. In actuality, the individual molecules will not be evenly spread. But if we look at averages, e.g., the centers of gravity of large assemblies of molecules, because of the law of large numbers the actual spread will again be symmetric or homogeneous (Heylighen, 1996). Similarly, a random process, like Brownian motion (Sieniutycz, 1984), can be defined by the fact that all possible transitions or movements are equally probable. Complexity can then be characterized by lack of symmetry or “symmetry
breaking,” by the fact that no part or aspect of a complex entity can provide sufficient information to actually or statistically predict the properties of the other parts. This again connects to the difficulty of modeling associated with complex systems (Heylighen, 1996). Edmonds (1996) notes that the definition of complexity as midpoint between order and disorder depends on the level of representation: what seems complex in one representation may seem ordered or disordered in a representation at a different scale. For example, a pattern of cracks in dried mud may seem very complex. When we zoom out, and look at the mud plain as a whole, though, we may see just a flat, homogeneous surface. When we zoom in and look at the different clay particles forming the mud, we see a completely disordered array (Heylighen, 1996). The paradox can be elucidated by noting that scale is just another dimension characterizing space or time (Havel, 1995), and that invariance under geometrical transformations, like rotations or translations, can be similarly extended to scale transformations (homotheties). Havel (1995) calls a system “scale-thin” if its distinguishable structure extends only over one or a few scales. For example, a perfect geometrical form, like a triangle or circle, is scale-thin: if we zoom out, the circle becomes a dot and disappears from view in the surrounding empty space; if we zoom in, the circle similarly disappears from view and only homogeneous space remains. A typical building seen from the outside has distinguishable structure on 2 or 3 scales: the building as a whole, the windows and doors, and perhaps the individual bricks (Heylighen, 1996). A fractal (Fig. 3.1) and self-similar shapes (Kudrewicz, 1993, 1996, and Chapter 6) have an infinite scale extension: however deeply we zoom in, we will always find the same recurrent structure.
A fractal is invariant under a discrete group of scale transformations, and, as such, is ordered or symmetric with respect to the scale dimension. The fractal is somewhat more complex than the triangle, in the same sense in which a crystal is more complex than a single molecule: both consist of a multiplicity of parts or levels, but these parts are completely similar (Heylighen, 1996).

Fig. 3.1 A monofractal image of the Sierpinski carpet.

To find real complexity on the scale dimension, we may look at the human body: if we zoom in we encounter complex structures at least at the levels of complete organism, organs, tissues, cells, organelles, polymers, monomers, atoms, nucleons, and elementary particles. Though there may be superficial similarities between the levels, e.g., between organs and organelles, the relations and dependencies between the different levels are quite heterogeneous, characterized by both distinction and connection, and by symmetry breaking. We may conclude that complexity increases when the variety (distinction) and dependency (connection) of parts or aspects increase, and this in several dimensions. These include at least the ordinary three dimensions of spatial, geometrical structure, the dimension of spatial scale, the dimension of time or dynamics, and the dimension of temporal or dynamical scale. In order to show that complexity has increased overall, it suffices to show that, all other things being equal, variety and/or connection have increased in at least one dimension. The process of increase of variety may be called differentiation; the process of increase in the number or strength of connections may be called integration. We will now show that evolution automatically produces differentiation and integration, and this at least along the dimensions of space, spatial scale, time, and temporal scale. The complexity produced by differentiation and integration in the spatial dimension may be called “structural,” in the temporal dimension “functional,” in the spatial scale dimension “structural hierarchical,” and in the temporal scale dimension “functional hierarchical” (Heylighen, 1996). It may still be objected that distinction and connection are in general not given, objective properties. Variety and constraint will depend upon what is distinguished by the observer, and in realistically complex systems determining what to distinguish is a far from trivial matter. What the observer does is pick up those distinctions which are somehow the most important, creating high-level classes of similar phenomena, and neglecting the differences which exist between the members of those classes (Heylighen, 1990). Depending on which distinctions the observer makes, he or she may see their variety and dependency (and thus the complexity of the
model) to be larger or smaller, and this will also determine whether the complexity is seen to increase or decrease. For example, when we noted that a building has distinguishable structure down to the level of bricks, we implicitly ignored the molecular, atomic, and particle structure of those bricks, since it seems irrelevant to how the building is constructed or used. This is possible because the structure of the bricks is independent of the particular molecules out of which they are built: it does not really matter whether they are made out of concrete, clay, plaster, or even plastic. On the other hand, in the example of the human body, the functioning of the cells critically depends on which molecular structures are present, and that is why it is much more difficult to ignore the molecular level when building a useful model of the body. In the first case, we might say that the brick is a “closed” structure: its inside components do not really influence its outside appearance or behavior (Heylighen, 1990). In the case of cells, though, there is no pronounced closure, and that makes it difficult to abstract away the inside parts. Although there will always be a subjective element involved in the observer’s choice of which aspects of a system are worth modeling, the reliability of models will critically depend on the degree of independence between the features included in the model and the ones that were not included. That degree of independence will be determined by the “objective” complexity of the system. Though we are in principle unable to build a complete model of a system, the introduction of the different dimensions discussed earlier helps us at least to get a better grasp of its intrinsic complexity, by reminding us to include at least distinctions on different scales and in different temporal and spatial domains (Heylighen, 1996).
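As a numerical aside (our sketch, not part of Heylighen's text), the scale invariance of the Sierpinski carpet shown in Fig. 3.1 can be checked with the standard base-3 digit construction: a cell is removed exactly when some digit pair of its coordinates is (1, 1).

```python
def in_carpet(x, y):
    # A point belongs to the Sierpinski carpet iff no base-3 digit
    # position of (x, y) is the pair (1, 1), i.e., the point never
    # falls in a removed middle square at any scale.
    while x > 0 or y > 0:
        if x % 3 == 1 and y % 3 == 1:
            return False
        x, y = x // 3, y // 3
    return True

def carpet(n):
    # Boolean grid of the carpet at resolution 3**n x 3**n.
    size = 3 ** n
    return [[in_carpet(x, y) for x in range(size)] for y in range(size)]

# Invariance under a discrete group of scale transformations: zooming
# into the lower-left ninth of the level-3 carpet reproduces the whole
# level-2 carpet.
c2, c3 = carpet(2), carpet(3)
print([row[:9] for row in c3[:9]] == c2)  # True
```

The same digit test holds at every resolution, which is exactly the scale symmetry, and the limited complexity, that the text attributes to fractals.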
3.3 Evolutionary mechanisms

Now that we have analyzed complexity as a static property, we must turn to the concepts underlying dynamics and change (Heylighen, 1996). We will here try to describe evolution in the most general, most abstract way, so that it can be used to analyze the complete development from elementary particles to human culture. Every process of evolution can be conceptualized as an interplay between variation and selection (Heylighen, 1996). Although these concepts originated in biology, their domain of application is much wider, as illustrated by recent evolutionary approaches to computing, economics, design of molecules, or the development of scientific theories. Variation is that aspect of a process that creates configurations different from the previous ones, in other words, that produces diversity or variety. Without variation there can be no change, so we will take variation as a primitive that does not need further explanation (Heylighen, 1996). Variation can be either sequential, a single system passing through a variety of subsequent configurations, or parallel, different systems independently diversifying into different
configurations. Variation can be internal, as when the components of a system or their interrelation are changed, or external, when a system is brought into contact or combined with different other systems (Heylighen, 1991a). The mutation of a chromosome is an example of internal variation; the recombination with another chromosome is an example of external variation. Variation on its own, without further constraints, produces entropy or disorder, by diffusion of existing constraints or dependencies. The equivalent for DNA is called “genetic drift” (Heylighen, 1996). However, variation is generally held in check by selection. Selection is the elimination or reduction of part of the variety of configurations produced by variation. Selection decreases disorder or entropy, by reducing the number of possibilities (Heylighen, 1992). A system that undergoes selection is constrained: it is restricted in the number of variations it can maintain. The existence of selection follows from the fact that in general not all variants are equally stable or capable of (re)production: those that are easier to maintain or generate will become more numerous relative to the others (Heylighen, 1992). If all possible configurations are equally likely to be produced or conserved, there is no selection, and the only possible outcome of the process is maximization of statistical entropy, as in the cloud of gas molecules that diffuses to homogeneously fill its container. Selection too can be internal, as when an unstable system (e.g., a radioactive atom) spontaneously decays, or external, as when a system is eliminated because it is not adapted to its environment (Heylighen, 1991a, 1996). Although variation produces disorder, and selection produces order, it would be simplistic to conclude that their joint product must be complexity, as the midpoint between order and disorder.
Variation and selection could simply annihilate each other’s effects, with the net result that nothing really new is created. (Simple mechanical motion can be interpreted as an instance of such a process, cf. Heylighen, 1991a). For a deeper analysis, we need to introduce the concept of “fitness” (Heylighen, 1996). Fitness is an assumed property of a system that determines the probability that that system will be selected, i.e., that it will survive, reproduce, or be produced. Technically, the fitness of a system can be defined as the average number of instances of that system that can be expected at the next time step or “generation,” divided by the present number of instances. Fitness larger than one means that the number of systems of that type can be expected to increase. Fitness smaller than one means that that type of system can eventually be expected to disappear, in other words that that type of system will be eliminated by selection (Heylighen, 1996). High fitness can be achieved if a system is very stable, so that it is unlikely to disappear, and/or if it is likely that many copies of that system will be produced, by replication or by independent generation of similar configurations (for example, though snowflakes are unstable and cannot reproduce, they are still likely to be recurrently produced under the right circumstances). The fitter a configuration, the more likely it is to be encountered on future occasions (Heylighen, 1994).
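Heylighen's technical definition of fitness (the expected number of instances at the next generation divided by the present number) can be sketched numerically. This is a toy model of ours; the survival probabilities and offspring counts below are invented purely for illustration:

```python
def fitness(survival_prob, offspring):
    # Expected instances at the next "generation" per instance now:
    # survive with some probability, then leave this many copies.
    return survival_prob * offspring

def expected_counts(n0, f, generations):
    # Expected population size over successive generations for fitness f.
    counts = [float(n0)]
    for _ in range(generations):
        counts.append(counts[-1] * f)
    return counts

# A very stable but non-reproducing type still shrinks (fitness < 1),
# while an individually unstable but recurrently (re)produced type,
# like the snowflake of the text, proliferates (fitness > 1).
f_stable = fitness(0.99, 1)     # 0.99 < 1: slowly disappears
f_snowflake = fitness(0.50, 3)  # 1.50 > 1: increasingly encountered
print(expected_counts(100, f_snowflake, 3))  # [100.0, 150.0, 225.0, 337.5]
```

Types with fitness below one are eventually eliminated by selection; types with fitness above one come to dominate, in line with the definition in the text.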
Although this technical interpretation may seem rather far removed from the intuitive notion, the English word “fit” is eminently suited for expressing the underlying dynamic (Heylighen, 1996). Its spectrum of meanings ranges between two poles: (1) “fit” as “strong,” “robust,” “in good condition”; (2) “fit” as “adapted to,” “suited for,” “fitting.” The first sense, which may be called “absolute fitness,” points to the capability to survive internal selection, i.e., intrinsic stability and capacity for (re)production. The second sense, which may be called “relative fitness,” refers to the capability to survive external selection, i.e., to cope with specific environmental perturbations or make use of external resources (Heylighen, 1996). It must be noted that “internal” and “external” merely refer to complementary views of the same phenomenon. What is internal for a whole system may be external for its subsystems or components. For example, the concentration of oxygen in the air is an external selective factor for animals, since in order to survive they need a respiratory system fit to extract oxygen. Similarly, the concentration of carbon dioxide is an external selective factor for plants. However, when we consider the global ecosystem consisting of plants and animals together, we see that the concentrations of both oxygen and carbon dioxide are internally determined, since oxygen is produced out of carbon dioxide by plants, and carbon dioxide out of oxygen by animals. Survival of the global system requires an internal “fit” of the two halves of the carbon dioxide-oxygen cycle: if more oxygen or carbon dioxide were consumed than produced, the whole system would break down (Heylighen, 1996). Similarly, when we look at a crystal as a whole system, we see it as a stable structure that is unlikely to disintegrate, i.e., it is absolutely fit and survives internal selection.
However, when we look at the molecules as the parts that make up the crystal, we see that they must have the right connections or bonds, i.e., fit relations, to form a stable whole. The exact configuration of each molecule is externally selected by the other molecules to which it must fit. In this way, every absolute or intrinsic fitness characterizing a whole can be analyzed as the result of a network of interlocking relational fitnesses connecting the parts. In summary, a system will be selected if: (1) its parts “fit together,” i.e., form an intrinsically stable whole, (2) the whole “fits” its environment, i.e., it can resist external perturbations and profit from external resources to (re)produce (Heylighen, 1996).
3.4 The growth of structural complexity (Heylighen, 1996)

The previously discussed relational interpretation of fitness is sufficient to explain why variation and selection tend to produce complexity. As Heylighen said, variation produces differentiation, by creating a variety of distinct systems. Even if systems started from an initially similar configuration, independent variation processes will make their trajectories diverge, and make the resulting configurations increasingly diverse. The selection of fit relations will simultaneously produce integration, by producing stable bonds or linkages between
distinct systems. Indeed, differentiation and integration together produce complexity (Heylighen, 1996). Following Heylighen (1996) we consider the following example. Let us study this process in more detail by considering an individual system A. Internal variation and selection will have produced an intrinsically stable configuration. Suppose now that A is in contact with another system B. A will play the role of external environment toward B. The relative variation of B with respect to its environment A will undergo selection: some of the configurations through which it passes will be fit, i.e., stable, and therefore retained; others will be unstable and therefore be replaced by other configurations. If none of the configurations through which it varies is fit, nothing happens, and the overall configuration remains at the same level of complexity. However, if B’s configuration fits its environment A, by definition, their mutual configuration will be retained, and a constraint will be imposed on their relative variation. B has "snapped into place" or "discovered a niche." Thus a new, higher order system or supersystem, consisting of the subsystems A and B bound together by their relative constraint, is formed. Note that although we saw A as the environment to which B may adapt, we might as well have reversed the roles and considered B as the environment or selector to which A tries to fit. Intrinsically, fit is a two-way relation, although we do not need to assume that it is perfectly symmetric. For example, a parasite must fit its host strongly, since it is completely dependent on the host. On the other hand, the host would do better without the parasite. The host "fits" the parasite only in the very weak sense that it will survive the interaction. (If the host died, the relation would be eliminated.) Further, more symmetric examples of the emergence of supersystems by the selection of fit relations are easily found (Heylighen, 1996).
The more symmetric form of symbiotic fit is mutualism: organisms interacting in a way that benefits each partner. A classic example is the lichen, which looks like a single organism, but is in reality a symbiotic joining of two different organisms, an alga and a fungus, which support each other by producing substances the other partner is incapable of producing. It is now also generally accepted that eukaryotes, the complex, nucleated cells containing distinct organelles which form the basis of higher order animals and plants, evolved out of the symbiotic assembly of several simpler, prokaryotic cells (Margulis and Fester, 1991; Maynard Smith and Szathmáry, 1995). Similarly, multicellular organisms result from the intricate coadaptation or fit of individual cells. On an even higher level, ecosystems are formed by the symbiotic coupling of a host of interdependent organisms (Heylighen, 1996). Some comments come now from the author of the present volume. It is well known that as ecosystems grow and develop, they increase their total dissipation, create more complex structures with more energy flow, increase their cycling activity, develop greater diversity, and generate more hierarchical levels, all of which help degrade energy. This view is supported by observation of species which can survive in ecosystems. It is now clear that such species funnel
energy into their own production and reproduction and contribute to the autocatalytic processes which increase total degradation (occasionally called "dissipation") of the energy input to the ecosystem. Summing up, ecosystems develop in ways which systematically increase their ability to degrade the incoming energy, usually solar energy. Schneider and Kay (1994) believe that their thermodynamic paradigm has helped the study of ecosystems develop from a descriptive science into a predictive science based on principles of general macroscopic physics. The reader may also note that a suitably correct description of the earlier issues can be achieved in terms of information theory, in view of the link between nonequilibrium thermodynamics and Fisher information (Frieden et al., 2002). Schrödinger (1944) points out that at first glance, living systems seem to defy the second law of thermodynamics, which implies that, within closed systems, entropy should tend to a maximum and disorder should reign. Living systems, however, are the antitheses of such disorder. They display remarkable levels of order created from disorder. For instance, plants are highly ordered structures, which are synthesized from disordered atoms and molecules found in atmospheric gases and soils. By turning to nonequilibrium thermodynamics, Schrödinger recognizes that living systems are well accommodated in a world of energy, materials, and flows (fluxes). An organism stays alive in its highly organized state by taking energy from outside, from a larger encompassing system. This energy is used to maintain within the organism a lowered entropy, corresponding to a more organized state. Schrödinger recognizes that life constitutes a far-from-equilibrium structure that maintains its local level of organization at the expense of an increased global entropy budget.
He claims that studies of living systems from a nonequilibrium perspective would reconcile biological self-organization with thermodynamics and, furthermore, yield new principles of physics (Schrödinger, 1944; Jorgensen, 2001). The symbiotic coupling principle pervades the lower levels of physics and chemistry (Heylighen, 1996). Particles, atoms, or molecules that interact can form a physical or chemical bond, i.e., a collectively constrained configuration that has a lower potential energy than the unbound configuration, and which is therefore more stable. Although bonds can be destroyed, this requires the (a priori not very likely) external input of the right amount of energy (a noble energy, SS), whereas the creation of a bond can happen spontaneously by the emission of the surplus energy. In an environment that is not too rich in such energy (i.e., that has a relatively low temperature), bound configurations are intrinsically more stable than configurations consisting of freely moving particles, and thus will be naturally selected (Heylighen, 1996). Since the second law of thermodynamics implies a natural tendency of energy to dissipate (degrade), it should not surprise us that the history of the physical universe since the Big Bang is characterized by the emergence of ever more numerous bonds between particles. Some examples are the formation of a hydrogen atom by the electromagnetic bonding of a proton and an electron, of a helium atom by the strong nuclear bonding of different
hydrogen atoms, and of the higher elements by different combinations of hydrogen, helium, or other atoms with additional protons, electrons, and neutrons. The weaker electromagnetic bonds formed between atoms produce a virtually infinite variety of molecules, and simple molecules (monomers) may combine to form complex chains or polymers. Different molecules may fit together to form crystals, which provide the underlying structure of rocks. In space, rocks tend to aggregate into asteroids, planets, and planetary systems, held together by gravity (Heylighen, 1996). Again, the configuration where separate rocks have coalesced into a larger system has a lower gravitational energy and is therefore more stable (and thus fit) than a configuration consisting of independently moving pieces. These examples also illustrate the hierarchical architecture of structural complexity (Simon, 1962). Some types of relational fit are only tried out after others have been established. For example, electromagnetic bonds between atoms are only possible because the stronger nuclear forces have overcome the electrostatic repulsion between protons in order to form different types of atomic nuclei. This can be understood by noting that not all relations of fit have the same strength: some are more difficult to produce or to dislodge than others. For example, it requires much more energy to break up an atom than to break up a molecule, and it is easier to disperse a herd of animals than to separate the cells that make up their bodies. By definition, selection will prefer the relations with higher fitness to those with lower fitness (assuming that variation provides the opportunity to try out high fitness configurations). It is only after the available high fitness configurations have formed that the remaining weakly fit linkages get a chance.
Thus electromagnetic bonds will typically become important after the stronger nuclear bonds have stabilized, and before the weaker gravitational ones come into play. The strong linkages will produce tightly bound assemblies or systems, in which internal variation has been strictly constrained. These systems will continue to undergo free external variation and appear in different combinations, until they discover a combination that is itself bound, i.e., in which the different components have established a set of (weakly) fit connections. This determines a less strongly bound higher order system, which has the more strongly bound systems as parts. This supersystem can now again undergo free recombination with other systems until a new, again less fit, type of linkage is discovered, producing a third-order supersystem, which now has two levels of subsystems. Thus a nested hierarchy of systems is formed, where at each lower level smaller and fitter subsystems can be distinguished as components of the system at the level above. This hierarchical structure illustrates our concept of differentiation along the scale dimension: zooming in will make us discover systems at ever smaller scales, which have different, stronger connections that are typically less likely to disintegrate. This greater stability of parts or subsystems, which seems to characterize the physical world, is not a universal rule, though. Remember that our definition of fitness included both difficulty of
destruction (stability) and ease of production ("productivity"). In some systems (e.g., organisms), high (re)productive fitness is achieved in spite of low stability. For example, in ecosystems, relations between populations (e.g., predators and prey) are typically more stable than the individual organisms that make up the populations. Yet, the evolutionary time needed to develop a stable predator-prey relation is much longer than the time needed to produce an individual organism. Similarly, cells in the human body are typically more stable than the polymers that constitute them. However, the polymers are much easier to produce than the cells.
3.5 Self-reinforcing structural complexification

Although the earlier mentioned mechanism explains the emergence of structural complexity in many different cases, it does not seem to guarantee the continuation of complexity growth. We could well imagine that variation might reach a point where it has discovered all fit configurations, and no further complexification occurs (complexification is the act or process of making something more complex, SS). However, it can be argued not only that structural complexification does not stop, but moreover that it has a tendency to accelerate. For example, it is well documented by evolutionary biologists that ecosystems tend to become more complex: the number of different species increases, and the number of dependencies and other linkages between species increases. This has been observed both over the geological history of the Earth and in specific cases such as island ecologies, which initially contained very few species, but where more and more species arose by immigration or by differentiation of a single species specializing in different niches (like Darwin’s famous finches on the Galápagos Islands; Heylighen, 1996). As is well explained by Wilson (1992), not only do ecosystems typically contain lots of niches that will eventually be filled by new species, but there is a self-reinforcing tendency to create new niches. Indeed, a hypothetical new species (let’s call them "bovers") occupying a hitherto empty niche, by its mere presence creates a set of new niches. Different other species can now specialize in somehow using the resources produced by that new species, e.g., as parasites that suck the bover’s blood or live in its intestines, as predators that catch and eat bovers, as plants that grow on the bover’s excrements, as burrowers that use abandoned bover holes, etc. (Heylighen, 1996). Each of those new species again creates new niches that can give rise to even further species, and so on, ad infinitum.
These species all depend on each other: take the bovers away and dozens of other species may go extinct. This same idea can be generalized to other types of evolutionary systems: each new system that appears will become a new selector which provides opportunities for more new systems to “fit in.” A metaphor that may clarify this mechanism is that of an infinite jigsaw puzzle. Every system that is selected can be seen as a piece of the puzzle that has found a place where it fits, locking in with the neighboring pieces. However,
every newly added piece will add a segment to the puzzle’s outward border, where further pieces may find a place to fit. The more pieces are added to the puzzle, the larger the border becomes, and the more opportunities there are for further pieces to be added. Thus every instance of "fit," or niche filled, increases the number of available niches, leading to a runaway, positive feedback process of growing complexity. Kauffman (1995) discusses a similar process for autocatalytic chemical networks, where every type of molecule can function as either a substrate or a catalyst for a chemical reaction that produces further molecule types. He shows that randomly assembled sets of molecules will in general increase in diversity by such processes, and moreover argues that for larger initial diversities (variation) or higher probabilities of catalysis (selection or fit) diversity will increase more strongly, until a critical point is reached, where the system continuously generates new kinds of molecules that in turn catalyze the formation of still further molecule types, and so on, in an endless, accelerating explosion of novelty (Heylighen, 1996; Hordijk and Steel, 2018).
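Kauffman’s qualitative argument can be illustrated with a toy simulation (a minimal sketch, not Kauffman’s actual binary-polymer model; the string alphabet, the catalysis probability `p_catalysis`, and all parameter values below are illustrative assumptions):

```python
import random

def autocatalytic_growth(n_food=20, p_catalysis=0.01, steps=200, seed=1):
    """Toy sketch of a Kauffman-style autocatalytic network: molecules are
    strings over {a, b}; two molecules can ligate into a longer one, but
    only if some already-present molecule happens to catalyze the reaction.
    Returns the diversity (number of distinct molecule types) over time."""
    rng = random.Random(seed)
    # start from a random "food set" of short molecules
    molecules = {"".join(rng.choice("ab") for _ in range(rng.randint(1, 3)))
                 for _ in range(n_food)}
    history = [len(molecules)]
    for _ in range(steps):
        pool = sorted(molecules)              # deterministic order for sampling
        a, b = rng.choice(pool), rng.choice(pool)
        # each existing molecule type independently catalyzes this ligation
        # with probability p_catalysis, so a more diverse soup catalyzes more
        if any(rng.random() < p_catalysis for _ in molecules):
            molecules.add(a + b)              # the product joins the soup
        history.append(len(molecules))
    return history

diversity = autocatalytic_growth()
```

With a larger initial diversity or a higher `p_catalysis`, the chance that at least one present molecule catalyzes a given reaction grows, so new types accumulate faster, reproducing the self-accelerating behavior Kauffman describes.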
3.6 The growth of functional complexity

Our previous arguments were limited to the development of structural complexity, i.e., the differentiation and integration, and associated symmetry breaking, of systems in the static, spatial dimensions. They did not take into account any complexity of the dynamics, behavior, or functioning of these systems. It is the latter type of complexity that seems to distinguish human beings from similarly large and structurally complex systems, like cows, crocodiles, or sharks. The question of why functional complexity too appears to increase so quickly during evolution can be answered by combining the traditional cybernetic principle of the "Law of Requisite Variety" (Ashby, 1958) with the concept of coevolution. Until now we have seen "fit" basically as a static relation: either a system fits its environment, and then it remains in the same relation, or it does not, and then everything can change. However, when the environment itself changes in such a way as to affect the system, no static fitness relation is possible. Nevertheless, an invariant configuration could still be achieved if the system continuously adapted to whatever environmental change impinges on it. The maintenance of an invariant configuration in spite of variable disturbances defines the problem of homeostasis (von Foerster, 1958; Heylighen, 1996). As studied in cybernetics (Ashby, 1964), homeostasis can be achieved by control, i.e., the compensation of perturbations by appropriate counteractions so that a desired goal is reached or maintained. The classic example is the thermostat, which maintains a stable inside temperature in spite of external temperature changes by selectively switching a heating mechanism "on" or "off." All living systems are control systems, actively maintaining a host of internal variables within a restricted domain by anticipating and reacting to all possible deviations from the preferred configuration.
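The thermostat example can be sketched as a minimal bang-bang (on/off) controller; the setpoint, hysteresis band, and heat-transfer coefficients below are hypothetical values chosen only for illustration:

```python
def thermostat_step(inside, outside, heater_on,
                    setpoint=20.0, band=0.5, leak=0.1, heat=3.0):
    """One time step of a bang-bang thermostat. The room leaks heat toward
    the outside temperature; the heater, when on, adds a fixed amount."""
    # switch with a small hysteresis band around the setpoint
    if inside < setpoint - band:
        heater_on = True
    elif inside > setpoint + band:
        heater_on = False
    inside += leak * (outside - inside) + (heat if heater_on else 0.0)
    return inside, heater_on

# despite a freezing outside temperature, the inside settles near the setpoint
temp, on = 15.0, False
for _ in range(200):
    temp, on = thermostat_step(temp, outside=0.0, heater_on=on)
```

The controller compensates one class of perturbation (heat loss) with one counteraction (heating); Ashby’s law then says that coping with a wider variety of disturbances requires a correspondingly wider repertoire of counteractions.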
For example, when the level of sugar in the blood of an animal drops below a certain level, this will produce a feeling of hunger, which will make
the animal search for and ingest food, which will again lead to the required increase in the sugar concentration. Ashby’s (1958, 1964) Law of Requisite Variety states that in order to achieve control, the variety of actions a control system is able to execute must be at least as great as the variety of environmental perturbations that need to be compensated. The larger the variety of available counteractions, the larger the set of disturbances that can be corrected, and the larger the domain of potential environmental situations in which the control system can survive. All other things being equal, greater control variety implies greater fitness. For example, an animal capable of finding and digesting more diverse types of food is likely to survive and thrive in a larger variety of circumstances. Therefore evolution through natural selection will tend to increase control, and thus, because of Ashby’s law, internal variety. This can be interpreted as a functional differentiation, i.e., the appearance of more diverse activities or functions. However, the larger the variety of available options, the more difficult it will become for the control system to select the most adequate one (cf. Heylighen, 1991b, 1994), and the longer it will take to decide which action to take. The resulting difficulty of decision-making becomes especially poignant if we take into account the fact that the solution of most problems (i.e., deviations from the desired situation) requires a series of actions. For example, reducing the feeling of hunger may require locating prey, stalking it, running after it, catching it, killing it, tearing apart the carcass, and swallowing the meat. Sequencing of elementary actions produces a combinatorial explosion in the number of possibilities to be considered.
For example, the number of possible combinations consisting of a sequence of 10 actions selected from a repertoire of 100 available actions is 100^10 = 10^20, an astronomical number that is absolutely unmanageable. The most general way of coping with the complexity of decision-making consists in factorizing the decision problem, i.e., decomposing it into relatively independent subproblems, each characterized by its own subgoal (Simon, 1962). Each subproblem can be solved by a much smaller combination selected from a reduced set of actions. For example, running requires a combination of different movements of the legs, whereas tearing requires a combination of movements of the jaws and neck. Each subgoal thus requires the coordination of a few closely related functions. The linkage between different subproblems is controlled at a higher level, where the goal of "hunting prey" activates a coordinated interplay of subgoals. This may again be controlled at a yet higher level, where the general problem of "reducing hunger" requires a balanced selection or division of labor between activities such as "hunting prey," "gathering fruit," and "digging up roots" (Heylighen, 1996). Thus different lower level activities are linked and integrated in the pursuit of higher order goals, which are themselves integrated at a yet higher level. This results in a functional hierarchy of control levels, which is in many ways similar to, though not directly determined by,
the structural hierarchy we discussed earlier. A detailed model of fundamental control hierarchies characterizing living systems can be found in Powers’ (1973, 1989) Perceptual Control Theory. The larger the variety of the environmental perturbations that need to be compensated, in general the larger the control hierarchy needed (cf. Aulin’s (1979, 1982) law of requisite hierarchy). The emergence of a higher level of control, which may be called a metasystem transition (Turchin, 1977; Heylighen et al., 1995), is a process different from, but to some degree analogous to, the emergence of a supersystem. Thus the postulated evolutionary increase in control variety will necessarily be accompanied by a sequence of metasystem transitions (Heylighen, 1995). The higher level processes in a control hierarchy will necessarily extend over longer time intervals, since they require the preliminary completion of lower level subprocesses. For example, the process of hunting prey will take place at a much slower pace than the subprocess of moving a leg in order to sustain running. The higher the level at which a goal is situated, the longer term the planning involved. For example, human activities may be planned for years ahead, whereas the most complex activities of bacteria are likely to be completed in seconds. Thus the functional hierarchy produces differentiation and integration in the temporal scale dimension, similar to the differentiation and integration in the spatial scale dimension characterizing the structural hierarchy.
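The decision-cost arithmetic behind Simon’s factorization can be made concrete. The decomposition below (10 subgoals, each a sequence of 3 actions from a reduced repertoire of 10) is a hypothetical example, chosen only to show the scale of the reduction:

```python
# Unfactorized search: a sequence of 10 actions, each chosen from a
# repertoire of 100 actions, gives 100^10 = 10^20 candidate sequences.
flat = 100 ** 10

# Factorized search (illustrative decomposition): 10 independent subgoals,
# each solved by a sequence of 3 actions from its own reduced repertoire
# of 10 actions -> 10 subproblems of 10^3 candidates each.
factored = 10 * 10 ** 3

print(flat)      # 100000000000000000000
print(factored)  # 10000
```

The hierarchical decomposition turns one search over 10^20 sequences into ten searches over 10^3 sequences each, which is why control hierarchies make high functional variety manageable at all.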
3.7 Self-reinforcing functional complexification

Again, we must ask whether functional complexification can continue indefinitely. Since we may assume that the environment as a whole always has more variety than the system itself, the evolving system will never be able to achieve complete control (i.e., be capable of thriving under all possible circumstances). Yet we may assume that it will at least be able to gather sufficient variety to more or less control its most direct neighborhood. We might imagine a continuing process where the variety of an evolving system slowly increases toward but never actually matches the infinite variety of the environment. On the other hand, as internal variety increases, decision-making becomes more difficult (even if we assume that decision-making difficulty can be strongly reduced by hierarchical factorization), and so it becomes less and less advantageous to further increase functional variety. The evolving system will asymptotically reach a trade-off level, depending on the variety of perturbations in its environment, where requisite variety is in balance with the difficulty of decision-making and perhaps other factors limiting complexity. For example, for viruses the balance point will be characterized by a very low functional variety, for human beings by a very high one. This analysis assumes that the environment is stable and a priori given. However, the environment of a system A itself consists of evolving systems (say B, C, D…), which are in
general undergoing the same asymptotic increase of variety toward their trade-off points. Since B is in the environment of A, and A in the environment of B, the increase in variety in the one will create a higher need (trade-off point) in variety for the other, since it will now need to control a more complex environment. Thus instead of an increase in complexity characterized by an asymptotic slowing down, we get a positive feedback process, where the increase in variety in one system creates a stronger need for variety increase in the other (cf. Waddington, 1969). This self-reinforcing interaction is an illustration of the "Red Queen Principle" (Van Valen, 1973), which says that a system must continuously develop in order to merely maintain its fitness relative to the systems it coevolves with. The net result is that many evolutionary systems that are in direct interaction with each other will tend to grow more complex, and this with an increasing speed. As an example, in our present society individuals and organizations tend to gather more knowledge and more resources, increasing the range of actions they can take, since this will allow them to cope better with the possible problems appearing in their environment. However, if the people you cooperate or compete with (e.g., colleagues) become more knowledgeable and resourceful, you too will have to become more knowledgeable and resourceful in order to keep up with them. The result is an ever faster race toward more knowledge and better tools, creating the "information explosion" we all know so well. The present argument does not imply that all evolutionary systems will increase in complexity: those (like viruses, snails, or mosses) that have reached a good trade-off point and are not confronted by an environment putting more complex demands on them will maintain their present level of complexity.
But it suffices that some systems in the larger ecosystem are involved in the complexity race to see an overall increase of available complexity.
3.8 Selection for simplicity?

As mentioned in the introduction, many researchers have criticized the idea that complexity grows during evolution. We will now review some of the main arguments proposed by these critics, and show how they can be answered within our framework. The most obvious counterargument to growing complexity is that there are evolutionary costs to complexity: if the same purpose can be achieved by a simpler design, this design is in general preferable to the complex one. We have already discussed the costs in more difficult decision-making and coordination connected with functional complexity. For structural complexity, it can similarly be noted that more complex designs bear the cost of the production and maintenance of more diverse material components. The more parts a system has, the more likely it is that one of the parts will malfunction because of an error or because of lacking resources. This can to some extent be overcome by redundancy or the building up of reserves, but it seems obvious that if a simpler design is proposed by variation, it will in general be preferred by selection. Sometimes, a totally new, much simpler method to achieve the same purposes is discovered.
96
Chapter 3
Examples of such a revolution from the history of science would be the heliocentric model of Copernicus replacing the hopelessly complicated Ptolemaic model for the trajectories of the planets, or the replacement of the multipart propeller mechanism for planes by the much simpler jet engine (Arthur, 1993). Yet, outside the realm of science such revolutionary simplifications seem extremely rare. To start with, if a simple solution were readily available, it is likely that that solution would already have been found, since normally evolution starts with simple configurations before it explores the more complex ones. By definition there are far fewer simple configurations than complex ones, and therefore it should be easier to first discover the simple ones. Second, the fact that a simpler solution exists does not make it likely that that solution will be found by blind variation starting from a very different initial configuration. For a new configuration to evolve, it must not only be fit, but moreover all configurations intermediate between the present one and the new one must be fit (a requirement that does not apply to scientific speculation). For radically new designs, it is not very likely that sufficiently fit intermediate configurations for all transitional steps would be discovered by variation. This general principle may apply with particular strength to changes in complexity, if we believe the argument advanced by Saunders and Ho (1976). They argue that for evolution it is in general much easier to add a component, which is not likely to do much harm, than to take away a component, which is likely to disturb a complex network of interdependencies. This seems to be confirmed by the observation of vestigial organs, like the appendix, which are conserved by evolution even though they are no longer needed.
Thus even though a radically simplified design with far fewer components may exist, it is unlikely that this configuration can evolve by taking away components one by one. For example, imagine a jet engine "evolving" out of a propeller engine by the gradual elimination of unnecessary parts. However, the well-known definition of disorder (Schrödinger, 1967) is not used in the analysis in question. In fact, as explained by Landsberg, entropy itself is not a direct measure of disorder for growing systems, as it is an extensive quantity (Landsberg, 1984a,b). In other words, for growing systems, the classical definitions of disorder D = e^S and order Ω = e^(-S) proposed by Schrödinger are inapplicable. Their flaws are not shared by Landsberg’s definitions of disorder and order (Landsberg, 1984a,b, 1994). According to Landsberg, disorder D = S/S_max and order Ω = 1 - D. As S_max depends on n, in Landsberg’s definition both disorder D and order Ω are functions of the entropy S and the number of states in the system, n; i.e., D = D(S, n) and Ω = Ω(S, n). For evolution processes, a particularly important role is played by the complexity Γ, which is a function of disorder D and order Ω, i.e., Γ = f(D, Ω). In the literature, various functions f are advocated (Shiner et al., 1999; Shiner, 2000). A popular form has the simple structure Γ = 4DΩ = 4D(S, n)[1 - D(S, n)], which was used by Szwast (1997). In this case, the maximum of Γ is attained for D = 0.5 and equals unity. In Szwast’s (1997) and Szwast et al.’s (2002) analyses, two statements of Saunders and Ho (1981)
are confirmed: (1) "While the increase of complexity Γ is more likely and the trend will be in this direction, we must also expect occasional decrease of complexity" and (2) "Only completely reversible changes are processes which develop without change of complexity (isocomplex processes)." This reversibility requires constancy of complexity or entropy, a requirement which restricts motions to those along "entropy isolines" when modifications or specializations occur at a constant number of states, n. This means that the decrease of states is a possible process which is compensated during the evolution by the creation of modified states. This process occurs along an isoline of entropy which is not the maximum entropy for a given n; it is rather the entropy which maximizes the complexity Γ (in fact, probabilities which would maximize entropy would rather minimize complexity). Therefore it is the set of conditions dΓ/dS = 0 and d²Γ/dS² < 0 that determines this complexity-maximizing entropy.
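Landsberg’s definitions and the Szwast form of the complexity can be checked numerically. The sketch below is an illustrative helper (not code from the cited works): it takes a discrete probability distribution over n states, with S the Shannon entropy and S_max = ln n:

```python
import math

def landsberg(probabilities):
    """Disorder D = S/S_max, order Omega = 1 - D, and the complexity
    Gamma = 4*D*Omega used by Szwast (1997), for n = len(probabilities)
    states; S is the Shannon entropy, S_max = ln(n)."""
    n = len(probabilities)
    s = -sum(p * math.log(p) for p in probabilities if p > 0.0)
    d = s / math.log(n)          # Landsberg disorder, 0 <= D <= 1
    omega = 1.0 - d              # Landsberg order
    return d, omega, 4.0 * d * omega

# equiprobable states give S = S_max, hence D = 1, Omega = 0, Gamma = 0;
# a concentrated distribution gives D near 0 and again Gamma near 0;
# Gamma peaks at unity for D = 0.5, i.e., at an intermediate entropy.
print(landsberg([0.25, 0.25, 0.25, 0.25]))
```

This makes the chapter’s point concrete: the entropy isoline that maximizes Γ lies strictly between zero entropy and maximum entropy, which is why entropy-maximizing probabilities minimize rather than maximize complexity.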