Entropy of Complex Processes and Systems 012821662X, 9780128216620

Entropy of Complex Processes and Systems formalizes our understanding of many complex processes, including the developme…


English Pages 308 [303] Year 2020


Table of contents :
Entropy of Complex Processes and Systems
Copyright
Contents
Preface
Introduction
1 Brief history: properties and problems of entropy parameter
1.1 Thermodynamic entropy
1.2 Statistical substantiation of entropy
1.3 Notion of statistical ensemble
1.4 Substantiation of the statistical analysis validity
1.5 Some problematic aspects of entropy
1.6 Gibbs paradox and problems of gaseous systems separation
1.7 Actual separation of gases
1.8 Solution to the Gibbs “paradox”
1.9 Phenomenological problems of the second law
(a) Essence of the problem
(b) Thermodynamic aspects of biological systems
(c) Discussion of the problem and conclusions
2 Statistical entropy component
2.1 Notions of randomness, chaos, and stability
2.2 Probabilistic characteristics
2.3 Random values and distribution functions
2.4 Probabilistic interpretation of granulometric characteristics of the solid phase of a polyfractional mixture of solid particles
2.5 Determining average values of random quantities
2.6 Importance of unambiguous evaluation of complex compositions of various systems
2.7 Uncertainty of mixture compositions
2.8 Separation efficiency
2.9 Separation optimality condition according to entropic criterion for binary mixtures
2.10 Unbiased evaluation of the efficiency of multicomponent mixtures separation
2.11 Example of optimization of separation into four components
2.12 Mathematical model of separation into n components
2.13 Unambiguous evaluation of completeness of a complicated object in the process of construction
2.14 Unambiguous evaluation of complex processing of natural resources
3 Dynamic component of entropy
3.1 Modeling and analogy—the basis of dynamic systems comprehension
3.2 Substantiation of the physical analogy
3.3 Foundations of the statistical model of critical two-phase flows
3.4 Definition of distribution parameters
3.5 Substantiation of the entropy parameter for two-phase flows
3.6 Main properties of dynamic entropy characterizing a two-phase system
3.7 Stationary state as a condition of entropy maximality
3.8 On the issue of entropy parameter formation for dynamic systems
3.9 Entropy and probabilities distribution
3.10 Multidimensional statistical model of a two-phase system
3.11 Two-phase flow mobility
3.12 Other invariants for two-phase flows
3.13 Canonical distributions in the definition of statistical ensembles for two-phase flows
(a) Microcanonical ensemble
(b) Canonical ensemble
(c) Grand canonical ensemble
(d) Relation between canonical and microcanonical ensembles
3.14 Statistical analysis of mass exchange in a two-phase flow
3.15 Statistical parameters of mass transfer
4 Verification of the entropic model adequacy for two-phase flows in separation conditions
4.1 Mathematical model of poly-fractional mixture redistribution in a multistage cascade
4.2 Experimental check of theoretical conclusions
(a) Initial composition
(b) Solid-phase concentration in a flow
(c) Process stability
(d) Flow velocity and particle size
4.3 Unified separation curves
4.4 Generalizing invariant
4.5 Determination of solid-phase distribution coefficients in a two-phase flow
(a) Turbulent flow around particles and turbulent regime of medium motion in the apparatus
(b) Laminar flow regime
(c) Intermediate regime of flow around particles
4.6 Estimation of distribution coefficients
4.7 Development of the methodology of separation processes computation
4.8 Definition of generalizing invariants for all separation regimes
4.9 Correlation of structural and cellular models of the process
4.10 Generalizing criteria
5 Place of the entropy parameter in technical sciences and other branches of knowledge
5.1 Problematic character of entropy
5.2 General properties of entropy
5.3 Development as growth of complexity
(a) Physical complexity
(b) Biological complexities
(c) Development of civilization
5.4 Biological systems and Darwinism
5.5 Principal aspects of entropy
5.6 Certain world-view aspects of entropy
5.7 Conclusion
Bibliography
Index

Entropy of Complex Processes and Systems Eugene Barsky Department of Industrial Engineering, Azrieli College of Engineering, Jerusalem, Israel

Elsevier
Radarweg 29, PO Box 211, 1000 AE Amsterdam, Netherlands
The Boulevard, Langford Lane, Kidlington, Oxford OX5 1GB, United Kingdom
50 Hampshire Street, 5th Floor, Cambridge, MA 02139, United States

Copyright © 2020 Elsevier Inc. All rights reserved.

No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording, or any information storage and retrieval system, without permission in writing from the publisher. Details on how to seek permission, further information about the Publisher's permissions policies and our arrangements with organizations such as the Copyright Clearance Center and the Copyright Licensing Agency, can be found at our website: www.elsevier.com/permissions. This book and the individual contributions contained in it are protected under copyright by the Publisher (other than as may be noted herein).

Notices

Knowledge and best practice in this field are constantly changing. As new research and experience broaden our understanding, changes in research methods, professional practices, or medical treatment may become necessary. Practitioners and researchers must always rely on their own experience and knowledge in evaluating and using any information, methods, compounds, or experiments described herein. In using such information or methods they should be mindful of their own safety and the safety of others, including parties for whom they have a professional responsibility. To the fullest extent of the law, neither the Publisher nor the authors, contributors, or editors, assume any liability for any injury and/or damage to persons or property as a matter of products liability, negligence or otherwise, or from any use or operation of any methods, products, instructions, or ideas contained in the material herein.
British Library Cataloguing-in-Publication Data A catalogue record for this book is available from the British Library Library of Congress Cataloging-in-Publication Data A catalog record for this book is available from the Library of Congress ISBN: 978-0-12-821662-0 For Information on all Elsevier publications visit our website at https://www.elsevier.com/books-and-journals

Publisher: Susan Dennis Acquisitions Editor: Anita A. Koch Editorial Project Manager: Amy Moone Production Project Manager: Bharatwaj Varatharajan Cover Designer: Mark Rogers Typeset by MPS Limited, Chennai, India


Preface

Entropy was introduced into the theory of dynamic processes of modern science as a metrical invariant. This notion arose and became central in classical statistical physics during the development of thermodynamics, and was adopted from there by other branches of knowledge connected with heat transfer. In this book, an attempt is made to use the notion of entropy for describing the statistical properties of complicated systems and processes that are not connected with thermodynamics.

It is universally recognized that entropy is a quantitative measure of the uncertainty of dynamic phenomena whose realization involves ambiguity and randomness. A detailed study of random processes has revealed, side by side with this dynamic uncertainty, a static component; this finding was prompted by the analysis and solution of Gibbs' entropy paradox. The static component of entropy is determined by the composition and ratio of the components of the material objects participating in a concrete process. In the physical sense, the dynamic and static entropy components complement each other, characterizing different aspects of complicated processes; they can be compared and combined with each other.

We cannot say that the static part of entropy was totally ignored in thermodynamics. For example, the analysis of the Gibbs-Duhem equation for the entropy of heterogeneous systems also reveals a static part. But this part of entropy was never singled out as a separate, specific characteristic of processes or systems. The static component of entropy does not remain constant in dynamic processes; it varies together with the transformation of the component composition. This makes it possible to formulate quality criteria for evaluating the completeness of the performed transformations using the ratio of the values of this entropy at the beginning and end of the process.
A change in static entropy reflects a concrete change in compositions or substances in the course of a concrete transformation. This can be illustrated by the following examples. In coal mining, coal composition can be determined by particle size and density. The analysis of such a composition is usually presented as a table (matrix), with particle size shown along the horizontal axis and density along the vertical one. For the mining of multimetal ores, the composition can be characterized not only by a planar, but also by a multidimensional matrix, depending on the composition and the purposes of further processing. It is shown how we can estimate an entire composition of any complexity by a single figure: the uncertainty value, or static entropy.
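The single-figure evaluation described above can be read as a Shannon-type uncertainty computed over the fractional contents of the size-density table. The sketch below is only an illustration of that idea, not the book's exact formula; the composition matrix and its values are hypothetical:

```python
import math

def static_entropy(fractions):
    """Shannon-type uncertainty of a composition given as fractional contents.

    `fractions` is any iterable of non-negative shares summing to 1,
    e.g. a flattened size-by-density matrix from a sieve/float analysis.
    """
    return -sum(p * math.log(p) for p in fractions if p > 0)

# Hypothetical coal analysis: rows = size classes, columns = density classes.
composition = [
    [0.10, 0.05, 0.05],
    [0.20, 0.15, 0.05],
    [0.25, 0.10, 0.05],
]
flat = [p for row in composition for p in row]
H = static_entropy(flat)  # one number characterizing the whole matrix
print(f"static entropy = {H:.3f} nat")
```

The value is zero when the whole mass sits in one cell (full certainty) and maximal when all cells carry equal shares, so it behaves as the uncertainty measure the text describes.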

As a result of beneficiation processes applied to coals or multimetallic ores, target components are extracted, and in each of the extracted products a ranking of compositions occurs. Each of these products can be unambiguously evaluated by its static entropy value. Comparing the initial evaluation with the sum of the final evaluations, weighted by the output share of each product, unambiguously estimates the efficiency of the performed processes. This makes it possible to create a method of control and optimization for complicated processes in practically all branches of industry where any transformation of materials or substances occurs.

Another example is taken from a field far from industry. It is considered that the entropy of the Sun is permanently decreasing because it gives out heat to the environment. This is correct only for the dynamic component of entropy. However, the total entropy, consisting of dynamic and static parts, can not only decrease but also grow. It is known that on the Sun, thermonuclear synthesis with permanent transformation of hydrogen into helium occurs. If at the present moment there is more hydrogen than helium in the composition of the Sun, this transformation leads to the growth of the static entropy component until their contents become equal; at that moment, the value of this component reaches its maximum. This growth can significantly exceed the dynamic entropy decrease.

Thus, entropy parametrizes the change in the system state in the process of its evolution. For natural evolution, this parametrization has allowed formulating a characteristic of system change in the form of the "entropy maximum principle."

While working on this book, the author came across fundamental questions connected with this notion:

1. Is there any connection between entropy definitions used in various branches of natural science, technology, geology, biology, sociology, economics, and thermodynamics?
Is there a unified common entropy parameter applicable to all particular cases, or is this parameter different for different branches of knowledge?
2. How does one compare the entropies of particular processes?
3. Why is the entropy maximal for systems realized in physical nature and minimal in biological systems?

If a phenomenon of any nature can have a probabilistic interpretation, it can be objectively characterized by the entropy parameter. Obviously, the magnitudes of this parameter for processes of various natures can be compared, added, subtracted, and, if necessary, undergo any other mathematical transformations.

The usefulness of this parameter can be illustrated by the following example, characteristic of the mining industry. More than 150 years ago, starting with the works of Rittinger, large groups of scientists in many developed countries worked on the creation of a theory of processes of beneficiation and separation of minerals and other materials, but conclusive results were not obtained. Up to the present time, the creation of new separation equipment and the determination of optimal regime parameters for its operation have been based on an expensive experimental method. For each specific problem, experimental facilities are built to perform numerous experiments. A set of structural and technological parameters is chosen experimentally in such a way as to obtain the optimal separation result. This does not guarantee a successful solution to the
problem, since a transition from a model to an industrial unit entails, as a rule, numerous complications, because simple scaling very rarely leads to a satisfactory result. The situation is aggravated by the absence of generally accepted methods for the optimization of such processes. As is well known, empirical methods do not promote the development of theory. Apparently, this explains the stagnation in separation technology observed for many decades. To this day, the largest manufacturing companies produce separators whose operating principle differs only slightly from that of models patented in the mid-19th century.

As has become clear, all these complexities can be overcome using the methods of classical statistical mechanics for mass chaotic processes, whose analysis is based on the generalizing entropy invariant. From this standpoint, we have managed to understand the physics of separation and to obtain adequate methods for the calculation of these processes. The beneficial influence of this parameter can also be noted in other areas of natural science, for instance, in biology, information theory, crystallography, cosmology, etc. It is also applicable to processes of other natures. It is shown in this book how entropy can be used in the management of complicated construction projects and industries.

Depending on the field of entropy application, the most diverse interpretations of it arise: a measure of uncertainty, of "lack of knowledge" of the true state of a system, of order/disorder, complexity, orderliness, or inaccuracy of system control, and a species of the action functional in Hamiltonian mechanics. All these interpretations are secondary with respect to the meaning that was put into the term "entropy" by its author, Clausius. He interpreted this parameter as "transformation" or "evolution," bearing in mind the role of the second law of thermodynamics.
Based on the development of these ideas, the author has managed to create a methodology for the analytical computation of such a complicated process as the separation of polyfractional mixtures of solid particles in an ascending flow. It may seem improbable, but this methodology needs no empirical coefficients, without which it has so far been impossible to calculate any complicated process. The adequacy of the results of these computations is confirmed by numerous experimental data obtained both at the pilot scale and in industrial facilities.

Eugene Barsky
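The entropy criterion of separation completeness described in this preface, comparing the uncertainty of the feed with the output-weighted uncertainties of the products, might be sketched as follows. The ratio form and all numerical values are illustrative assumptions, not the author's exact criterion:

```python
import math

def entropy(fractions):
    """Shannon-type uncertainty of a component composition."""
    return -sum(p * math.log(p) for p in fractions if p > 0)

def separation_efficiency(feed, products, outputs):
    """Entropy-based completeness of a separation.

    feed     : component fractions of the initial mixture (sums to 1)
    products : per-product component fractions (each sums to 1)
    outputs  : share of total material reporting to each product (sums to 1)
    Returns a value in [0, 1]: 1 = perfect ranking, 0 = no separation.
    """
    h_feed = entropy(feed)
    h_products = sum(g * entropy(p) for g, p in zip(outputs, products))
    return (h_feed - h_products) / h_feed  # fraction of uncertainty removed

# Hypothetical binary mixture split into two imperfect products.
feed = [0.5, 0.5]
products = [[0.9, 0.1], [0.1, 0.9]]
outputs = [0.5, 0.5]
print(separation_efficiency(feed, products, outputs))
```

A perfect split into pure products drives the product entropies to zero and the criterion to 1; returning the feed unchanged leaves the criterion at 0.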

Introduction

Analysis of the state of modern science leads to a clear understanding of its origin from two sources. The first emerged after the development of differential and integral calculus by Newton and Leibniz, which served as a foundation for the development of classical mechanics. This mechanics achieved outstanding success at once and continues developing today. One of its distinctive features is strict determinism. Classical mechanics can predict the behavior of a system with either a small number of elements or many symmetrically located elements. Therefore, if the initial conditions of a system are specified and the forces influencing it are determined, using classical mechanics we can follow changes in this system both into the future and into the past.

An especially important outcome of the successes of this mechanics was that, under the influence of Newton's works, it became clear that nature obeys simple and universal laws that can be perceived and expressed in the precise language of mathematics. Since then, experiment, the quantitative study of various physical quantities in their interdependence, and the mathematical form of the correlations between them have become the basis for grasping the secrets of nature. Using classical mechanics, it was possible to carry to completion the problem of the motion of bodies in celestial mechanics, but it proved totally helpless even for the problem of the motion of three bodies in the general case, not to speak of the mass motion of many bodies.

The second trend arose with the analysis of the mechanism of complex systems consisting of a set of independent elements. This problem was first solved in thermodynamics. In the 17th and 18th centuries, the authority of Newton's works was so high that persistent attempts were made to reduce all laws of nature known at that time to classical mechanics. In the long run, these attempts were not crowned with success.
However, in at least one field of science these attempts gave excellent results. This field is related to the theory of thermal phenomena, but the success came only in the middle of the 19th century. By that time, ideas about the molecular structure of substances had started to gain acceptance. Therefore, efforts were made to clarify how the macroscopic properties of a substance depend on the behavior of the molecules it consists of. The first successes in this direction were achieved for gases, whose molecules interact only weakly. The obtained results made it possible to express pressure, temperature, and other macroscopic parameters of a gas using such an averaged characteristic of its molecules as their kinetic energy. Therefore, this theory was named kinetic theory, or statistical mechanics. It is
based on the fundamental works of Boltzmann and Gibbs, whose speculative analysis allowed establishing statistical correlations between the molecular structure of a substance and its properties at a macroscopic level. Statistical mechanics was based on classical mechanics, but it established new bonds and introduced new concepts. The starting point for Boltzmann was atomistic theory, the idea that a substance consists of a huge number of small moving balls.

At first sight, one could conclude that with an increasing number of particles, the complexity and entanglement of the properties of the system under study must increase incredibly, making it impossible to find even traces of some regularity. However, this is not the case: for a great number of particles, new, peculiar bonds arise. In no case can they be reduced to purely mechanical phenomena. Here, qualitatively new regularities arise that are characteristic of complicated systems only. Their main distinctive feature is the use of probabilistic representations based on the understanding of the ambiguity, randomness, and uncertainty of mass events.

The behavior of each molecule in a complex system is at the same time independent of and dependent on the behavior of the rest of the molecules, since they collide with each other and with the walls enclosing the system. Between collisions, their motion is uniform and rectilinear, and their velocity is determined by temperature. At each specific moment in time, it is impossible to predict the direction of motion of a concrete particle (which shows its independence), while at the same time its trajectory depends on collisions with other particles. Before Boltzmann's works, physicists thought that this uncertainty leads to complete randomness and incognizability, that is, to chaos. In fact, it has turned out that a strictly scientific study of the uncertainty of events is possible.
This started long before Boltzmann's works, with the analysis of the results of roulette playing. This analysis gave rise to the calculus of probabilities, which for a long time was not considered serious mathematics. The main idea of determining a probability is usually illustrated by coin tossing: there is no definiteness at all as to what will fall, heads or tails. However, if such tossing is repeated very many times, the total share of heads or of tails is close to 50%. A similar result is observed at the simultaneous tossing of a great number of identical coins. This transition from total uncertainty to an almost exact certainty, when a long sequence of events is observed or large systems are realized, is the main idea behind the study of randomness.

The principal laws of classical mechanics describe reversible processes; they do not contain the possibility of irreversibility. These laws contain time (t) only in the second power, so the substitution of -t for +t leaves them unchanged, which provides the reversibility. This makes it possible to calculate the future in a respective problem with the same degree of confidence as the past. In thermodynamics, irreversibility was revealed, and it found its substantiation in statistical mechanics. The first step in formalizing the irreversibility was made in the course of entropy determination. There exists a rule known as the second law of thermodynamics, which states that in each physical process entropy remains constant or increases, and if it
increases, the process is irreversible. At first, all this looked rather mysterious and unsatisfactory. What is the sense of entropy? Why does it always increase and never decrease? Boltzmann tried to solve these problems on the basis of the "atomistic" hypothesis, and he obtained amazing results. They proved to be no less important for the physics of the 20th century than the discovery of relativity theory and quantum mechanics. Over more than 100 years, his concepts have shown their scientific strength, and now they are applied to various situations going far beyond the limits of the thermal problems for whose solution they were intended.

As a rough approximation, we can explain the notion of entropy in plain language by the following reasoning. Imagine gas atoms enclosed under normal conditions in a closed vessel with a volume of, for example, one cubic meter, whose temperature and pressure are constant. The atoms in this vessel are in a certain configuration with respect to each other and the vessel walls. However, the atoms are constantly moving, and the system configuration constantly varies. This means that a system consisting of a great number of particles may have a great number of configurations. For an external observer, all these configurations look identical: a cubic meter of gas. For an internal observer, provided that he is able to distinguish separate atoms, this concrete volume of gas becomes rather ambiguous. Boltzmann determined the measure of this ambiguity as the entropy parameter.

Let us continue examining this cubic meter of gas and destabilize the situation: introduce additional heat into a certain small part of the vessel, or increase the pressure in a local part of the volume. As a result, in this part of the vessel the velocities of the gas particles will change, and in interactions with other molecules they will pass on a part of their kinetic energy.
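Boltzmann's measure of this ambiguity, the number of microscopic configurations W compatible with one and the same macrostate, is classically written as S = k ln W. The toy lattice-gas counting below is an illustrative assumption, not a model from this book:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K (exact SI value)

def boltzmann_entropy(cells, particles):
    """S = k * ln(W) for a toy lattice gas, where W is the number of ways
    to place `particles` indistinguishable particles into `cells` sites."""
    w = math.comb(cells, particles)
    return K_B * math.log(w)

# Doubling the available volume (number of sites) increases the number of
# configurations that look identical to an external observer, so the
# entropy of the same 100 particles grows.
s_small = boltzmann_entropy(1000, 100)
s_large = boltzmann_entropy(2000, 100)
print(s_large > s_small)  # → True
```

With a single possible configuration (W = 1), the entropy is zero: the internal observer has no ambiguity left.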
If we then leave the system without any influence, the velocities of the molecules will again reach some average value determined by the new level of temperature or pressure. Temperature and pressure in the vessel will again reach stable values and remain constant, while the chaotic motion of the molecules persists. Therefore, such complicated systems are characterized by two levels: external determinism (determined by temperature, pressure, and volume) and internal chaos (determined by the velocities and directions of the molecules' motion). For the clarification of the general regularities of such systems, the theories called statistical mechanics were developed. A basic distinction of this approach is that it is based on the necessity of determining the state of the entire system at once, irrespective of its size, big or small. Here, the methods of analyzing mass processes are naturally statistical.

The practice of mathematical modeling of technological processes shows that in order to construct a successful model, it is not obligatory to strive for a comprehensive description. It is necessary to find a simple scheme that reflects, to a sufficient degree, an analogy to the essence of the phenomenon to be modeled. The study of such systems is based on various methods of modeling them. A formal model must therefore combine the deterministic character of the observed processes with their chaotic nature. This means that the modeling of the internal transformations of a system must lead to a certain, completely regular process. Methods of such analysis, based on due regard for the interaction between macro- and micro-levels, depend on the concrete type of the complex system. Such a model should take into account the peculiarity of a transition from the
transformation of random motions of the system elements into a general deterministic process. A set of a great number of particles (molecules or atoms) is always taken as a classical example of a macrosystem, as in the kinetic theory of gases. The general character of the regularities in this theory depends only to a small degree on the motion of each separate particle. Here, the individual and collective properties of particles are essentially different.

At first, only the macrostates of such systems, characterized by thermodynamic quantities (temperature, pressure, volume, energy), were analyzed. The connection between these parameters was formulated strictly functionally, as was accepted in classical mechanics, that is, deterministically. Taking the microstate into account in the course of the development of science led to a radical revision of the notions of the process mechanism. Another level of consideration, the mass interaction of a great number of particles, appeared in the model of the process. It became clear that such an interaction is the cause of the concrete macrostate of a system. And although the set of thermodynamic quantities and the interconnections between them did not change because of this, the notions of their nature changed considerably. These quantities proved to be averaged characteristics of random interactions at the micro-level.

Further improvement of the model mainly concerned a more precise definition of the behavior of particles at the micro-level and their distribution over quantum states. This question turned out to be so basic that, depending on the hypotheses about particles occupying different quantum states, different statistics for the distribution functions were developed: those of Gibbs, Boltzmann, Fermi-Dirac, and Bose-Einstein. All these notions of microinteractions are formulated in the form of distribution functions. Models connecting thermodynamic quantities characterize the stationary (equilibrium) state of a macrosystem.
Theoretical results obtained in this field and confirmed by numerous experiments have become classical. Many properties and regularities inherent to physical macrosystems are revealed in complex systems of a totally different nature. Among these, we can mention systems of exchange or distribution of economic resources in the market economy, municipal service, industrial production, etc. Such complicated processes as biological, cosmological, and sociological ones are also considered from such positions. The availability of these developments allows applying methods of statistical physics for studying complicated systems. As a classical example of such complex systems, we can mention such physical phenomenon as a two-phase flow. At present, there is no particular progress in the development of the theory of two-phase flows, everything is limited by only fragmentary empirical relationships as yet. Two-phase flows are widely used in the up-to-date industrial practice. They include systems containing discrete inclusions distributed in a continuous medium. The discrete inclusions represent either solid particles that do not change their shape and size, or drops or gas bubbles capable of changing their sizes in the course of the process. Liquids or gases are used as continuous media. The interest of modern technology in two-phase flows was


caused by the large contact surface between the dispersed and continuous phases, which ensures high rates of mass exchange and other processes in them. Dispersed systems with a solid phase are the simplest; in some respects they can serve as a simplified model for systems with drops and gas bubbles. In the vertical direction, two-phase flows can realize various types of flow depending on the correlation between the continuous-medium velocity and the principal characteristics of the solid-phase particles. At a flow velocity at which all of the solid phase ascends, a so-called transport regime is realized. Its velocity is limited from below by the magnitude at which even the coarsest particles do not settle against the stream. The minimal value of this velocity is called the critical velocity of pneumatic or hydraulic transport, depending on the applied continuous medium (air or water). The opposite regime of two-phase flow, in which a so-called descending bed is formed, is also applied in technology. In this regime, all solid particles settle against the medium flow. In this case, the flow velocity is limited from above. The maximal velocity at which even the finest particles of the solid phase are not transported by the flow is called critical for the descending bed. A particular case of the descending bed is the so-called stationary bed, in which the solid material lies motionless on a grate and is blown through from bottom to top. The range between these two critical velocities is of certain theoretical and practical interest from the standpoint of the mass-exchange processes occurring within it. The importance of studying the regularities of such flow regimes is attested at least by the fact that the velocity regime of such a widespread process in up-to-date technology as the boiling bed lies within this range. Critical regimes of two-phase flows are most widely used in industrial practice for organizing processes of fractionating bulk materials by particle size or density.
Only in these regimes is it possible to organize transportation of fine light particles with the flow and the simultaneous motion of coarse heavy particles against the flow. Until today, these processes have been insufficiently studied and formalized. Critical regimes of two-phase flows can be realized not only in a vertical flow, but also in a centrifugal field, in the field of magnetic and electrostatic forces, and in other cases where a differently directed motion of the solid phase with respect to the flow is realized. Separation of bulk materials in critical regimes of two-phase flows is an extremely complicated physical process. This fact has made a strict analytical description of the process seem totally hopeless. As is already known, no acceptable analytical theory exists so far even for a single-phase turbulent flow. The study of the process under discussion started with the analysis of the behavior of isolated particles. The first publications on this topic appeared in the second half of the 19th century. Rittinger determined the regularities of the simple precipitation of isolated particles of spherical shape in an unrestricted motionless liquid. In numerous subsequent works dealing with the study of precipitation, the influence of various factors, such as medium and material densities, final velocities of particle precipitation, and their drag coefficients, was revealed.


The transition in experiments to real materials with particles of irregular shape has proved to be so complicated that publications devoted to these issues are still appearing nowadays. This testifies to the complexity of the phenomena occurring even in the simple precipitation of particles of real materials in a motionless medium. An attempt to apply the principal regularities obtained in the study of the behavior of isolated particles in a flow to real processes does not give generalizing results. In fact, from this standpoint we would have to conclude that an ascending flow carries out of the separation zone completely only those particles whose final precipitation velocity is lower than the flow velocity and, vice versa, that all particles with a final precipitation velocity exceeding the flow velocity precipitate against the stream. All available experience of bulk-materials fractionating shows that this is far from being so. Thus, it is necessary to admit that despite more than 100 years of development, this scientific field has not reached the level required for a theory suitable for solving practical problems. Production needs are usually satisfied at the level of voluminous empirical research. The enormous number of experimental studies accomplished in recent years has allowed the development of numerous empirical methods of calculating the main parameters of concrete apparatuses. Undoubtedly, this is important and necessary work. However, researchers are not always aware of the fact that empirical methods have too little influence on the development of the theory of the process. As it has turned out, a statistical approach to the analysis of two-phase flow separation regimes provides the basis for a strict analytical calculation of this process. The distribution of fractions in two-phase flow separation regimes has a purely random character.
However, in spite of this, the system acquires an average value of a random parameter, which allows its deterministic evaluation. The statistical approach allows interpreting complex systems as the movement of numerous elements in an ordinary or a certain abstract space. Therefore the system elements themselves can have absolutely different natures. Such elements are atoms or molecules in a gas, solid particles in a two-phase flow, money in economics, elements of units to be erected in complex construction, etc. If there is a possibility to imagine a mass system with probabilistic characteristics based on these elements, methods of statistical analysis can be applied to it. To obtain at least minimal information about the system under study, it is necessary to clarify characteristic features of its components. This means that one must assign one or several characteristic features to each element in such a way that the entire multitude of components could be distributed into groups according to them. For example, loose materials used in industry can be distributed by particle sizes, their densities, chemical composition, charge, shape, color, radioactivity, or other features. For performing this analysis, it makes sense to examine groups in which these features change over the course of the process, that is, some transformations take place in them.

1 Brief history: properties and problems of entropy parameter

1.1 Thermodynamic entropy

A function introduced by Clausius in 1864, which he called entropy, caused an unexpected response in science. It appeared while he was trying to understand the laws of heat transfer and heat transformation into other types of energy, and had a rather simple form:

H = ΔQ/T    (1.1)

where ΔQ is heat consumption and T is the absolute temperature at which this consumption is realized. The idea of Clausius was based on principal conclusions obtained by Carnot. The analysis of heat-machine functioning suggested for the first time by Carnot decisively influenced the development of classical thermodynamics. It served as a basis for the further formation of such fundamental notions as absolute temperature and entropy. Clausius paid considerable attention to the mathematical aspect of the theory and tried to substantiate his theoretical conclusions rigorously. He showed that the Carnot cycle is characterized by such a correlation between any elementary amount of heat dQ and the temperature T at which this amount has been taken from the source, that the integral of dQ/T over the entire cycle of a reversible process equals zero, that is,

∮ dQ/T = 0    (1.2)

The integrand itself characterizes a certain parameter of the system state, called the entropy of this system:

dH = dQ/T    (1.3)

For irreversible processes this integral over a closed cycle is

∮ dQ/T < 0    (1.4)

Entropy of Complex Processes and Systems. DOI: https://doi.org/10.1016/B978-0-12-821662-0.00001-0 © 2020 Elsevier Inc. All rights reserved.

and therefore

dH ≥ dQ/T    (1.5)
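The vanishing of the Clausius integral over a reversible cycle is easy to verify numerically. The following Python sketch is only an illustration (the reservoir temperatures and the expansion ratio are arbitrary assumptions, not taken from the text): in a Carnot cycle of one mole of ideal gas, heat is exchanged only on the two isothermal branches, so the integral (1.2) reduces to two reduced heats that cancel exactly.

```python
from math import log

R = 8.314                      # universal gas constant, J/(mol K)
T_hot, T_cold = 500.0, 300.0   # illustrative reservoir temperatures, K
expansion_ratio = 2.0          # V2/V1 on the isothermal branches (assumed)

# Heat absorbed at T_hot and rejected at T_cold by 1 mol of ideal gas:
Q_hot = R * T_hot * log(expansion_ratio)
Q_cold = R * T_cold * log(expansion_ratio)

# The adiabatic branches exchange no heat, so the Clausius integral over
# the whole reversible cycle reduces to the two reduced heats:
clausius_sum = Q_hot / T_hot - Q_cold / T_cold
print(abs(clausius_sum) < 1e-9)  # True: the sum of reduced heats vanishes
```

For an irreversible cycle, extra heat is dumped into the cold reservoir, which makes the sum negative, in agreement with (1.4).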

where the equality is observed for reversible processes, and the inequality for irreversible ones. In spontaneous heat transfer under the conditions of real systems, the entropy always grows. If a process is organized so that in some system entropy decreases, then in another system connected with the first one the entropy grows to a greater extent. Being defined as a sum of reduced heats, entropy is a function of the state of the system. The fact that in all practical applications the integral of Clausius obeys an inequality having one and the same sign caused misunderstanding among researchers. This was aggravated by the fact that entropy has the property of growing due to the spontaneous conversion of ordered forms of energy into thermal energy. This duality of entropy, a parameter defined owing to reversibility but growing due to irreversibility, caused numerous disputes about the physical meaning of entropy. The operation of any real machine is accompanied by entropy growth, since it inevitably involves irreversible processes. Meanwhile, irreversibility takes place in all natural physical phenomena, since friction, heat losses, etc., are always present in all of them. Hence, the sum of the entropies of the bodies participating in these phenomena always grows. Many researchers believe that, according to the energy conservation law, entropy is a total flow of discharged energy in the form of heat related to a unit volume. Entropy has the same dimension as the specific heat capacity: it is an amount of heat related to a unit of temperature. Owing to this dimension, thermodynamic entropy has for many years, up to the present, been considered as energy.
All this gave a reason to formulate the physical meaning of entropy in the form of the following assertion: “Entropy represents the energy required for a reversible return of the working medium into the initial state after an adiabatic process, completed at the temperature corresponding to this initial state.” It is stipulated that the minimal such energy is implied here. At the same time, the attribution of the dimension of energy to entropy seems insufficiently well-reasoned and requires special consideration. While examining the entropy increment dH during a short time interval dt, two components are usually distinguished:

dH = d_eH + d_iH

The quantity d_eH is caused by an external factor that manifests itself as mass or heat exchange with the environment. This part of the entropy is reversible, which means that it can be both positive and negative. The quantity d_iH denotes the increment of internal entropy produced by the irreversible processes taking place in real thermal processes and engines due to friction, heat losses, diffusion, etc. This part of the entropy, which is internal with respect to the system, never changes its sign.


In the long run, this component reflects irreversible changes occurring inside the system. The main feature of this summarized function is that it changes in one direction only and always grows, that is,

dH = d_eH + d_iH > 0

Even if, for some reason, d_eH decreases, it is assumed that the overall dH value is always positive. It is accepted that if heat is consumed by a system, its increment is positive for this system, that is, dQ > 0; and if it is released by the system, its value is negative, that is, dQ < 0. As noted, for a closed reversible Carnot cycle the following relation is valid:

H = ∮ dQ/T = 0

Another interesting property of entropy is that if this integral is calculated between two sets of heat-engine parameters A and B, its value is independent of the path along which these parameters change between A and B. It depends only on the values of these parameters at the initial point A and the final point B. Hence, for a reversible process we can write

H_B − H_A = ∫_A^B dQ/T

If the process is isothermal, that is, it occurs at a constant temperature, the entropy change for a reversible cycle amounts to

ΔH = Q/T

In the case of an irreversible real cycle, a system gives up some heat to the environment through both heat losses and mechanical friction, whose energy also turns into heat. This heat dissipates in space, that is, it leads to an increase in the entropy of the environment. As a result, any real system that undergoes a cycle of operations and returns to the initial state functions only by increasing the entropy of the environment it is in contact with. The analysis of thermodynamic laws allowed Clausius to draw three fundamental conclusions regarding entropy and energy:

1. The sum of entropy changes in the environment cannot decrease.
2. The energy of the universe is constant.
3. The entropy of the universe tends to a maximum.

After the completion of any process, a system cannot return, as a whole, to the initial state. If a system is isolated, d_eH = 0. In this case, the system entropy continues increasing due to irreversible processes and reaches the highest possible value in the state of


thermodynamic equilibrium. In this state, all irreversible processes cease. When the system starts exchanging energy with the environment, in the general case it goes out of equilibrium, and the entropy generated by the nonequilibrium starts growing. If a system obtains only energy, but no substance, from an external source, the following relation is valid:

d_eH = dQ/T = (dU + p dV)/T  and  d_iH ≥ 0    (1.6)

where dQ is the amount of heat obtained by the system during a certain time dt, dU is the total change in the closed-system energy, and p dV is the work performed during the time dt. In the general case, for open systems that exchange substance and energy with the environment,

dU + p dV ≠ dQ

In this case,

d_eH = (dU + p dV)/T + d_eH0  and  d_iH ≥ 0    (1.7)

where d_eH0 is the entropy change stipulated by the substance flow. This quantity is usually calculated using the chemical potential μ. The notion of “chemical potential” was introduced by Gibbs. He examined a heterogeneous system consisting of several homogeneous parts S1, S2, . . ., Sn with the masses m1, m2, . . ., mn. These substances do not react chemically with each other (no chemical transformations occur). He took into account only the material exchange between these homogeneous parts. Assuming that the energy change dU of a certain homogeneous part must be proportional to the changes in the masses of the substances, dm1, dm2, dm3, . . ., dmn, he suggested an equation that is true in any part of the system:

dU = T dH − p dV + μ1 dm1 + μ2 dm2 + . . . + μn dmn    (1.8)

Gibbs took into account the classical definition of entropy, according to which the system had to be in an equilibrium state, and transformations between equilibrium states had to be reversible, that is, such that

dQ = T dH

In thermodynamics, entropy is a function of the total energy U, the volume V, and the number of molecules or moles of the substance N, that is,

H = f(U, V, N1, N2, . . ., Nn)


A general expression in the explicit form gives

dU = T dH − p dV + μ1 dN1 + μ2 dN2 + . . . + μn dNn

This expression can be rewritten as

dU = T dH − p dV + Σ_{k=1}^{n} μk dNk    (1.9)

It can be obtained from (1.9) that

(∂U/∂H)_{V,Nk} = T,  (∂U/∂V)_{H,Nk} = −p,  (∂U/∂Nk)_{H,V,Nj≠k} = μk
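These identifications of T, −p, and μk as partial derivatives of U can be spot-checked numerically on a concrete model. The Python sketch below is an illustration, not the author's method: it assumes the standard textbook form of the monoatomic ideal-gas internal energy U(H, V, N), works in dimensionless units with k = 1, and uses an arbitrary constant c and state point. Recovering T and p by central finite differences, it confirms that the derivatives reproduce the ideal-gas law pV = NkT.

```python
from math import exp

k = 1.0  # Boltzmann constant; dimensionless units with k = 1 (assumption)

def U(H, V, N, c=1.0):
    # Internal energy of a monoatomic ideal gas as a function of entropy H,
    # volume V, and particle number N (standard textbook form; the constant
    # c merely fixes the zero of entropy).
    return c * N**(5/3) * V**(-2/3) * exp(2*H / (3*N*k))

def deriv(f, x, h=1e-6):
    # Central finite-difference approximation of df/dx.
    return (f(x + h) - f(x - h)) / (2*h)

H0, V0, N0 = 30.0, 2.0, 10.0             # an arbitrary state point
T = deriv(lambda H: U(H, V0, N0), H0)    # (dU/dH)_{V,N}  -> temperature
p = -deriv(lambda V: U(H0, V, N0), V0)   # -(dU/dV)_{H,N} -> pressure
mu = deriv(lambda N: U(H0, V0, N), N0)   # (dU/dN)_{H,V}  -> chemical potential

# Consistency check: the recovered T and p satisfy the ideal-gas law.
print(abs(p*V0 - N0*k*T) / (N0*k*T) < 1e-6)  # True
```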

It should also be noted that the entropy parameter introduced by Clausius is a metrical invariant specifically characterizing a thermotechnical system. An invariant is realized as a system of quantities, parameters, or regularities describing a group of phenomena that remain unchanged under certain transformations inside this system. By way of example, we can mention the similarity criteria in hydrodynamics (Re, Fr, Eu, and others). An infinitely large set of parameters corresponds to each numerical value of such a criterion. Entropy covers all aspects of substance transformations: changes in energy, volume, and composition. Consequently, any system in nature, be it weather, a gas, a solution, a two-phase flow, or a living cell, can be characterized by entropy. Since Newton’s time, it has been established that the laws of nature are simple enough. In addition, nature is disposed to optimization. Rather often, natural phenomena occur in such a way that a certain physical quantity characterizing the phenomenon reaches its minimal or maximal value. This is typical, for instance, of optics, mechanics, heat-transfer processes, etc. As Feynman has shown, all equations of mechanics can be obtained using the principle of least action. Today it has become clear that the entropy introduced by Clausius is a new physical quantity which is as fundamental and universal as energy. It is natural that at first it was received with a lack of understanding and with skepticism. Energy is something tangible and material, while entropy is an exceptionally abstract notion. However, if we thoroughly scrutinize the notion of energy, it is far from being as simple as it seems at first sight. This especially concerns energy transformation and its conservation law. For a moving body, the conservation of the sum of kinetic and potential energies follows from Newton’s laws.
However, this regularity did not extend to the physical phenomena already known at the end of the 18th and beginning of the 19th centuries, which are characterized by specific types of energy, such as electric, magnetic, thermal, and chemical. It was revealed that each type of energy in this list can be transformed into the other types. It was determined that electric current can cause chemical reactions and can be a source of heat and light. Chemical reactions giving rise to electric current were found. It was also determined that electric current gives rise to a magnetic field, and even the conditions of heat transition into


electric current were determined (thermoelectricity). All these discoveries interconnected thermal, chemical, magnetic, and electrical phenomena and put on the agenda the global problem of the correlations between the energies of all these phenomena. Soon it became clear that all relations between the mentioned phenomena reflect the transformation of one parameter: energy. An answer to the problem of correlations between these transformations was found in the course of the development of thermodynamics. It was established that despite the presence of various forms of energy and its eternal transformations, its total amount remains unchanged. Scientists discovered this in the course of clarifying the dependence of the macroscopic properties of a substance on the a priori postulated behavior of its constituent microscopic particles. This was promoted by the fact that at that time the majority of scientists were adherents of the mechanistic approach to nature. They reduced any energy to the kinetic and potential energy of particles. It was not proved at that time, but some scientists believed that a substance in any state (gaseous, liquid, or solid) consists of minute particles. Depending on the energy value, these particles acquire a certain velocity of linear or oscillatory motion. The total kinetic energy of a body then manifests itself macroscopically as its thermal energy. Such an interpretation of the properties of a gas was suggested by Bernoulli. Newton adhered to the same point of view on the nature of heat. Thus, in this case, the energy conservation law means that the total amount of kinetic and potential energy is conserved over all the particles the substance consists of. However, the energy conservation law as such, in spite of its seeming clearness and simplicity, is actually far from trivial and does not go without saying. As a rule, it must express the constancy of a sum of three summands:

1. kinetic energy, depending on the velocity of motion;
2. potential energy, depending on the position of a body or a system;
3. internal molecular energy of a body or system, expressed as thermal, chemical, electric, magnetic, and other energies.

Sometimes it is difficult to distinguish these constituents of energy. For instance, in the case of electrically charged bodies, their electrostatic energy depends not only on the charge value, but also on the body’s velocity of motion and its position with respect to other bodies. In addition, according to the relativity theory, a substance possessing some mass should be considered as a certain form of energy, and any energy should possess a certain mass. Einstein expressed the connection between mass (m) and energy (E) by the dependence

E = mc²
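As a simple numerical illustration of this dependence (the 1-gram mass is an arbitrary example, not from the text), even a tiny mass corresponds to an enormous energy:

```python
c = 2.998e8     # speed of light, m/s
m = 1.0e-3      # illustrative mass: 1 gram, in kg
E = m * c**2    # Einstein's mass-energy relation
print(f"E = {E:.3e} J")  # about 9.0e13 J
```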

where c is the velocity of light. In a general case, this law must cover not only energy conservation, but mass conservation, too. Besides matter, there exist fields. It has been established that electromagnetic radiation is a process connected with energy transfer. At the start of the 20th century, the notions of particles and fields were bound by quantum theory. This asserts that all particles excite fields,


and the electromagnetic field is connected with particles (photons), which possess both a wave and a corpuscular nature. It has been established that the fields of nuclear forces also have corresponding particles. In all these cases, the energy conservation law connecting fields with particles is valid. In this situation, it is hard to realize what exactly is conserved: it is neither substance, field, wave, nor energy alone. It is a certain mathematical function whose physical meaning is not intuitively clear. The belief in energy conservation was shaken only once. While studying β-decay, it was revealed that the energy of the decay products, computed according to Einstein’s formula, is not equal to the initial energy of the nucleus. In 1930, Pauli, who believed in the conservation law, suggested a hypothesis that the lacking energy is removed by some unknown particle interacting weakly with other particles, which does not allow it to be revealed. Only in 1956 did scientists manage to detect a new particle experimentally: the neutrino. Since then, the belief in the conservation law has become stronger and remains unshakeable. Therefore it is clear why each concrete type of energy is usually determined only to within an additive constant. As a rule, only a change in the system energy is fixed, since it seems impossible to determine the absolute energy. As has been established, thermodynamic parameters are ultimately determined by the behavior of atoms or molecules, that is, they are based on the microscopic level of a substance. Meanwhile, these parameters themselves characterize the behavior and state of the substance as a whole, as a single system, that is, at the macroscopic level. The relation between these levels can be demonstrated by the following simple example.
An experiment shows that if a gas in a closed vessel is at first in a turbulent regime, after a certain time interval it will pass into one of the equilibrium states, in which the temperature and pressure in the entire vessel are equalized, that is, they become identical. In general, it is known that a nonequilibrium system always tends to pass into a macroscopically equilibrium state. However, in this case, only the macroscopic parameters characterizing the system as a whole are in equilibrium. At the microscopic level, complete chaos is observed in the motion of the gas particles: they move randomly in all possible directions. One of the initial postulates of thermodynamics consists in the fact that, with the course of time, an isolated macroscopic system comes to the state of thermodynamic equilibrium and cannot go out of it spontaneously. Usually, in this context, equilibrium and nonequilibrium, reversible and irreversible processes are distinguished. Processes described by equations that do not contain time are considered equilibrium; that is, changes in the thermodynamic functions depend only on other parameters. Such processes are sometimes called stationary, or steady-state. Any relaxations of systems are considered nonequilibrium, since their parameters depend on time. The subdivision of processes into reversible and irreversible, according to Planck, is stipulated by the second law of thermodynamics. The transition of a system from the equilibrium state A to the equilibrium state B can be considered reversible if its return from B to the initial state A is realized without any changes in external bodies. Otherwise, the process is irreversible. It is perfectly clear that real processes connected with heat transfer are always


irreversible, and reversible processes represent their idealized versions. The comparison of these processes sometimes causes confusion. Thus, Leontovich (1983) identified equilibrium processes with reversible ones and nonequilibrium processes with irreversible ones. However, this is not true. As early as the start of the 20th century, Caratheodory convincingly proved that the reversibility of a process in no way follows from its being an equilibrium one. He also showed that equilibrium processes per se model real processes in their infinitely slow interpretation, when at any moment of time the internal parameters of a system can be considered equilibrium. At the same time, the existence of equilibrium states, to which a substance left to itself spontaneously tends, does not logically follow from thermodynamics. This also refers to the thermodynamic parameters, which are unambiguously determined by equilibrium states. However, although all this is confirmed at a purely empirical level by all, without exception, experimental results obtained over more than 200 years, it has not, apparently, acquired the character of a global law. A more careful examination of the problem of the origin and physical nature of these regularities comes up against essential difficulties which have not yet been overcome. Here a striking contradiction between theory and experiment arises. According to the second law of thermodynamics, the universe should long ago have reached a state of static equilibrium. However, the properties of nature have nothing in common with the properties of an equilibrium system; the universe permanently evolves. The main extremum principle of thermodynamics consists in the fact that all isolated systems evolve independently to an equilibrium state, where the entropy reaches its maximal value. At the same time, at a constant volume and entropy, any closed system evolves to a state with minimal energy.
As noted, the dimension of entropy coincides with the dimension of heat capacity. Let us try to examine this parameter. If the body temperature grows by dT at the absorption of an amount of heat dQ, then the ratio

C = dQ/dT

is called the heat capacity of the body. There exist the heat capacity at constant volume, C_V, and the heat capacity at constant pressure, C_p. At constant volume, all the heat is consumed to increase the internal energy, and hence we can write

C_V = (dE/dT)_V    (1.10)

If p = const, the heat is consumed not only for increasing the internal energy, but also for performing work. Hence,

Q = E + pV    (1.11)


In these expressions, E is the internal energy of the system, p is the gas pressure, and V is the gas volume. This function is called the heat content, or enthalpy. It is also a function of the state of the body. Hence, at p = const, the heat capacity is

C_p = dQ/dT

It is clear from the comparison of (1.10) and (1.11) that

C_p > C_V

always holds. One might think that this inequality is connected only with the work that a system expanding on heating must perform. However, this is not the case: the inequality equally applies to the few bodies whose volume decreases on heating. Here the so-called Le Chatelier principle comes into play. Its main idea is that external impacts perturbing the thermodynamic equilibrium of a system cause processes tending to weaken the results of this impact. Now we examine one gram-molecule (mole) of a gas, whose respective heat capacities c_V and c_p are called molar. By virtue of the equation pV = RT, the thermal function of 1 mole of gas is connected with its internal energy by the relationship

Q = E + pV = E + RT

Differentiating this equality with respect to temperature, we obtain

c_p = c_V + R

or

c_p − c_V = R

Numerically, R = 8.3 J/(mol K) ≈ 2 cal/(mol K). The heat capacity of a monoatomic gas can be easily found. In this case, the internal energy of the gas is just the sum of the kinetic energies of the translational movements of its particles. For one particle, this energy equals (3/2)kT. The internal energy of 1 mol of gas is

E = (3/2) N_0 kT = (3/2) RT

Hence,

c_V = (3/2) R = 12.5 J/(mol K)


c_p = 20.8 J/(mol K)
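The molar values quoted above follow directly from the last two relations; a quick Python check (using R = 8.314 J/(mol K)):

```python
R = 8.314        # universal gas constant, J/(mol K)

c_V = 1.5 * R    # monoatomic gas: translational energy (3/2)RT per mole
c_p = c_V + R    # Mayer's relation c_p - c_V = R

print(round(c_V, 1))  # 12.5
print(round(c_p, 1))  # 20.8
```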

These magnitudes are temperature independent. This theory was supplemented by Planck. He showed that a general expression of entropy in the form

H = ∫_0^T (Q/T) dT

does not contain any integration constants. In the low-temperature region, the change of the specific heat capacity with temperature obeys the law established by Einstein on the basis of quantum theory. He represented each atom of a solid as an elementary oscillator whose energy varies not continuously, but in portions that are multiples of the product hν (where ν is the eigenfrequency of the elementary oscillator and h is the Planck constant). The average energy in the ensemble of oscillators equals kT, where k is the Boltzmann constant. At very low temperatures, kT can become smaller than the quantum hν, so that the greater part of the oscillators is at rest. This explains the vanishing of the heat capacity. At the same time, according to the theory of heat capacity suggested by Einstein, the mechanism of energy absorption by molecules remained, in his opinion, not yet clear enough.
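The limits described here can be reproduced from Einstein's well-known closed-form molar heat capacity of a solid, C_V(T) = 3R x² eˣ/(eˣ − 1)² with x = hν/kT. The Python sketch below uses an arbitrary illustrative Einstein temperature θ_E = hν/k; it shows the heat capacity vanishing when kT is much smaller than the quantum hν, and approaching the classical value 3R at high temperature.

```python
from math import exp

R = 8.314  # universal gas constant, J/(mol K)

def einstein_c_v(T, theta_E=300.0):
    # Molar heat capacity of an Einstein solid; theta_E = h*nu/k is an
    # assumed, illustrative Einstein temperature in kelvin.
    x = theta_E / T
    return 3 * R * x**2 * exp(x) / (exp(x) - 1)**2

print(einstein_c_v(1.0) < 1e-3)             # True: oscillators frozen out
print(abs(einstein_c_v(1e4) - 3*R) < 0.01)  # True: classical 3R limit
```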

1.2 Statistical substantiation of entropy

Classical thermodynamics deals with directly measurable quantities: pressure, temperature, volume, amount of heat, etc. It represents a complete system. It allowed a complete and comprehensive maintenance of various technological processes and the needs of various industries. By the second half of the 19th century, it already allowed a detailed estimation of furnaces, boilers, turbines, various heat exchangers, and, a bit later, of steam, diesel, and petrol engines. These methods remain relevant today and are widely used without any special improvements. Actually, there was no public necessity to advance the theory of these regularities at that time. However, it is known that scientists are always more interested in finding an answer to the question “Why?” than “How much?” Although the available thermodynamics was perfect, this interest stimulated them to look into the details of its mechanism and, thus, to understand the causes of the observed phenomena. First, owing to the high regard in which Newton was held, attempts were made to reduce all natural phenomena to the laws of classical mechanics. Most of these attempts failed, but in the field of thermodynamics, thanks to the works of Boltzmann, this approach gave excellent results. It was based on the identification of heat with the motion of molecules. The simplest of these theories, developed during the following 35–40 years after Clausius in the works of Schiller, the Ehrenfests, Caratheodory, Born, Minde, Planck, and others, proved the justifiability of the second law of thermodynamics for a broad variety of processes.

Chapter 1 • Brief history: properties and problems of entropy parameter


It became clear that thermodynamic parameters per se represent certain average values for a system. This called for statistical, and not purely mathematical, methods for the perception of the molecular theory of thermal processes. This task was brilliantly solved by Boltzmann and then expanded upon by Gibbs. It was a purely speculative model that could be devoid of real significance, unless it was confirmed by the entire experimental material accumulated up to the present time. It is difficult to imagine today how courageous the idea of identifying heat with the motion of atoms and molecules in the second half of the 19th century was, when the existence of these atoms and molecules had not yet been proved. This theory connects the thermal energy with the mechanical energy of material particles. This theory can be most visually applied to an ideal gas. Gas molecules are considered as vanishingly small balls moving at high linear velocities and permanently colliding with each other and with vessel walls. The kinetic energy of molecules macroscopically manifests itself as the thermal energy of gas under study. Thus the entire internal energy of gas is reduced to the kinetic energy of its molecules. However, it is impossible to take into account the influence of separate molecules on gas parameters. This influence can be the subject of a kind of averaging, that is, statistical generalization. The credit for the successful solution of this problem should be given to Boltzmann. He introduced the notion of discontinuity and probability into the science, which made a revolution in physics at that time. He established, from the standpoint of a statistical approach, rather simple relations between the mechanical energy of molecules and all thermodynamic functions, in particular, entropy. This approach was developed by such representatives of new physics of the early 20th century as Gibbs, Poincare, Lorentz, Planck, and Einstein.

1.3 Notion of statistical ensemble

It is clear that the complexity of systems signifies much more than the impossibility to know the behavior of each particle. In many cases, the complexity leads to the appearance of new qualities, which can be rather unexpected. For example, a gas that consists of atoms interacting by means of simple, well-known forces, under certain conditions can suddenly condense into a liquid. As it turned out, the main reason for the success of this theory was the fact that a large number of particles makes it possible to use statistical methods efficiently, although the complexity of the systems can be frightening. The model of microworld behavior suggested by Boltzmann proved to be, in principle, sufficient for determining the properties of the entire system using several approximate methods. Usually, a multidimensional system consisting of N identical parts is described by an equation whose response parameter is the total energy of the system:

E = mg(x1 + x2 + … + x3N) + (1/2m)(p1² + p2² + … + p3N²)   (1.12)
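As a numeric illustration of Eq. (1.12), the following minimal sketch evaluates the potential and kinetic summands for a hypothetical system; all numeric values are assumed for illustration only.

```python
# A minimal numeric sketch of Eq. (1.12) for a hypothetical system of
# N = 2 particles (so 3N = 6 coordinates and momenta). The values of
# m, g, x, and p below are illustrative assumptions, not from the text.
m, g = 1.0, 9.81
x = [0.0, 1.0, 2.0, 0.5, 1.5, 2.5]    # coordinates x1 ... x3N
p = [1.0, 0.0, -1.0, 2.0, 0.0, -2.0]  # momenta     p1 ... p3N

potential = m * g * sum(x)                    # mg(x1 + x2 + ... + x3N)
kinetic = sum(pi ** 2 for pi in p) / (2 * m)  # (p1^2 + ... + p3N^2) / 2m
E = potential + kinetic
print(E)  # total energy of the system
```

The first summand collects the potential energy of position and the second the kinetic energy, exactly as the text explains below.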


In this relation, the first summand reflects the effect of the potential energy of the system on the formation of E; and the second—the effect of the kinetic energy, since

(m/2)(dxi/dt)² = pi²/2m

where m is the particle mass; xi is the particle coordinate; and pi is the particle momentum. In a general case, three position coordinates and three momentum components correspond to each particle. Eq. (1.12) is called the Hamiltonian equation. Let us repeat once more that the principal idea of statistical mechanics consists in the “averaging” caused by our lack of knowledge of the details of the behavior of the system of particles under study. At the same time, one should keep in mind that there exist states of systems whose statistical properties are no longer of interest from the standpoint of the examined process, since their probability is vanishingly small or equals unity. Probabilistic notions play a determining role in statistical mechanics. It is convenient to use probabilities because they have a numerical value between zero and unity. When the variables under study take a continuous set of values, the probability of obtaining any concrete value from this continuum equals zero. At the same time, the sum of probabilities must equal unity. There is nothing paradoxical in this—it is exactly analogous to the statement that a geometrical point has no length, but a segment consisting of a set of points has a nonzero length. Hence, in this case it is necessary to consider the probability within an interval, and not at a concrete point, that is, it is necessary to introduce the probability density P(z). The product P(z)dz is the probability for the value z to lie within the interval dz. The expression P(z)dⁿz denotes this probability when z = (z1, z2, z3, …, zn) is a vector containing all continuous variables of the set under study, and dⁿz is the volume of an infinitely small cell of n-dimensional space. In this case, the condition for the sum of probabilities to equal unity is written in the form

∫ P(z)dⁿz = 1

where P(z) is the probability density, and the integral is taken over the region of n-dimensional space where z changes. Later we omit, for the sake of convenience, the upper symbol n in the differential denoting an element of volume. The probability density is necessary for calculating averages. To calculate the average value of a function φ(z), it must be integrated over all the values of z with the weighting function equal to the probability density of z:

⟨φ(z)⟩ = ∫ P(z)φ(z)dz
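These two rules, normalization and weighted averaging, can be sketched numerically. The example below discretizes a standard normal density on a fine grid; the particular density and grid are assumed for illustration.

```python
import math

# Sketch of the normalization and averaging rules for a probability
# density: a standard normal P(z) discretized on a fine grid.
# The choice of density, grid range, and step is an assumed example.
dz = 0.001
zs = [i * dz for i in range(-8000, 8001)]  # grid covering z in [-8, 8]
P = [math.exp(-z * z / 2) / math.sqrt(2 * math.pi) for z in zs]

total = sum(p * dz for p in P)                        # ∫ P(z) dz, ~1
mean_sq = sum(p * z * z * dz for p, z in zip(P, zs))  # ⟨z²⟩ = ∫ P(z) z² dz
print(round(total, 6), round(mean_sq, 6))
```

The sum of probabilities over the whole region comes out as unity, and the weighted sum reproduces the average of z², as the text prescribes.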


Further discussion requires an explanation that is as clear and simple as possible. Therefore we start with a simple case of one molecule moving in one dimension. The position of such a particle is specified by a single coordinate x. A complete description of such a system from the standpoint of classical mechanics also requires, besides the coordinate, the momentum ρ of the particle. Such a situation allows a geometrical interpretation. Specifying x and ρ is equivalent to specifying a point in two-dimensional space. Such space is called phase space. The variables x and ρ take on continuous values. To make the states of the system countable, we can subdivide the range of the variables x and ρ into arbitrarily small discrete intervals Δx and Δρ. Thus the entire phase space is divided into elementary cells with the volume (area)

S0 = ΔxΔρ

where S0 is a small constant for the performed subdivision, whose dimension is that of angular momentum (action). For a complete description of the state of a particle, it is now sufficient to indicate that its coordinate lies within a certain cell between x and x + Δx, and its momentum in the interval from ρ to ρ + Δρ. Geometrically, it means that a point with coordinates x and ρ is located in a certain cell of the phase space. This reasoning can be extended to N particles, represented using a set of orthogonal coordinates in 2N-dimensional vectorial or 6N-dimensional ordinary space. Dividing these coordinates into intervals, we divide the 6N-dimensional space into small cells whose volume equals the product Δx1Δx2…Δx3N · Δρ1Δρ2…Δρ3N. This means that we can specify the state of a system if we determine the set of intervals (cells of phase space) containing the particle coordinates x1, x2, …, x3N and momenta ρ1, ρ2, …, ρ3N. Such an approach provides the basis for a general solution of the problem. As established, despite the continuous movement of phase points, their distribution function can be stationary. In a particular case, where f(N) depends on 2N independent variables, the stationarity of the distribution is proved by the Liouville theorem using Hamilton’s function. Note that in Gibbs’ terminology, the distribution function f(N) determines a statistical ensemble, that is, a certain set of copies of the system under study. In this respect, average values obtained using such a distribution function are called “statistical averages.” A peculiar feature of Gibbs’ approach lies in the fact that, instead of considering the changes of a system in time, he considered an ensemble consisting of a large number of identical systems, each of which satisfies the same macroscopic conditions as the original system.
The properties of a statistical ensemble are independent of time if the number of systems corresponding to a specific case is the same at any moment of time. An isolated system in an equilibrium state can be found with equal probability in any of its available states, that is, in any available cell of the phase space. As is known from statistical mechanics, if a system is in equilibrium with a heat reservoir at temperature T = (kβ)⁻¹, the probability of finding this system in the state j with energy Ej is

Pj ∝ e^(−βEj)


where j is a definite cell of the phase space, for which the coordinates and momenta of the system have specified values (x1, x2, …, x3N; ρ1, ρ2, …, ρ3N). As shown, the particle energy is composed of two components:
• kinetic energy

E = (1/2)mv² = ρ²/2m

where m is mass; v is velocity; and ρ = mv is momentum;
• potential energy of position U.
The total energy of particles is usually expressed by the Hamilton function in the form

E = (1/2m) Σj ρj² + U
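The exponential weighting Pj ∝ e^(−βEj) introduced above can be sketched numerically: given the energies of a few states, normalizing the Boltzmann weights yields the state probabilities. The three energies and the value of β below are assumed for illustration.

```python
import math

# Sketch of Pj ∝ exp(-β·Ej): probabilities of three states of a system
# in equilibrium with a heat reservoir. The energies and β are assumed
# illustrative values, not taken from the text.
beta = 1.0
E = [0.0, 1.0, 2.0]                      # energies Ej of the states j
w = [math.exp(-beta * Ej) for Ej in E]   # unnormalized Boltzmann weights
Z = sum(w)                               # normalizer (partition function)
P = [wj / Z for wj in w]

print([round(pj, 4) for pj in P])  # lower-energy states are more probable
```

The probabilities sum to unity, and states of lower energy receive exponentially larger weight, which is the content of the relation above.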

Joint motion of the particles is determined by the equations of motion

dxj/dt = ∂E/∂ρj and dρj/dt = −∂E/∂xj

where t is time. The operator ∂/∂a is regarded here as a vector with coordinates ∂/∂ax, ∂/∂ay, ∂/∂az.
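The equations of motion above can be sketched by integrating a single one-dimensional oscillator numerically. The sketch below assumes m = 1, the potential U = x²/2, and a semi-implicit (symplectic) Euler step; all of these choices are illustrative, not from the text.

```python
# Hedged sketch: integrating dx/dt = ∂E/∂ρ, dρ/dt = -∂E/∂x for one
# harmonic oscillator with U = x²/2 and m = 1, using a symplectic
# Euler step. Parameters and initial state are assumed values.
m, dt = 1.0, 0.001
x, rho = 1.0, 0.0  # initial coordinate and momentum

def energy(x, rho):
    return rho * rho / (2 * m) + x * x / 2  # kinetic + potential

E0 = energy(x, rho)
for _ in range(10000):       # integrate up to t = 10
    rho -= x * dt            # dρ/dt = -∂E/∂x = -x
    x += (rho / m) * dt      # dx/dt =  ∂E/∂ρ = ρ/m
print(abs(energy(x, rho) - E0))  # energy drift stays small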

Let the function f(N) include the sequence (1, 2, 3, …, N; t), that is, it depends on time in 2N-dimensional space with variables x1, x2, …, xN; ρ1, ρ2, …, ρN. This distribution can be presented as a cloud of dust in phase space, whose particles are located in its separate cells with different densities. The trajectory of the phase points making up the dust cloud with changing time is described by Liouville’s equation

∂f(N)/∂t = Σj [ (∂E/∂xj)(∂f(N)/∂ρj) − (ρj/m)(∂f(N)/∂xj) ]

This equation forms the basis of the entire molecular statistical mechanics. In this form, it is analogous to the continuity equation in the hydrodynamics of an incompressible liquid, where ρ/m and −∂E/∂x play the role of “velocity” components of the phase points “in the direction” of coordinates or momenta, respectively.


The total result that can be considered finally established is the fact that the entropy of a certain state is connected with its probability. The connection between entropy and probability is accepted a priori, because these two quantities characterizing a system always vary in the same direction. In fact, according to the Clausius principle, any system evolves in such a way that its entropy grows, and at the same time, such evolution is always directed toward more probable states. This is usually illustrated by a simple example. We conceive a closed space (Fig. 1–1) separated into two parts by an imaginary partition and introduce a great number of gas molecules N, whose mean kinetic energy characterizes the gas temperature, into the left-hand part of the space. We let these molecules move spontaneously and look at their positions after a certain period of time, when there are N1 molecules in one part and N2 in the other. Obviously, each individual molecule has equal chances of being in either part. In this case, the number of outcomes of a random value of the distribution, or the number of complexes, as Landau named them, at N = N1 + N2 is

φ = N!/(N1!N2!)

Boltzmann, who was the first to discover the meaning of entropy as a measure of molecular chaos, came to the conclusion that the law of entropy growth reflects growing disorganization. According to such disorganization, the most probable distribution corresponds to an approximate equality of the numbers of molecules in both parts, that is, finally, to N1 = N2 = N/2. In the process of evolution, φ grows and reaches its maximum at N1 = N2. Boltzmann understood that irreversible entropy growth can be considered as a manifestation of increasing molecular chaos, while distribution asymmetry leads to a decrease in the number of complexes φ. He identified entropy with the number of complexes by the formula

H = k log φ   (1.13)

where k is a universal constant. Irrespective of the initial distribution, its evolution leads, in the long run, to a uniform distribution (N1 = N2).
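The claim that φ, and hence H = k log φ, peaks at the even split can be checked directly for a small system; N = 20 molecules is an assumed toy size.

```python
import math

# Sketch of the number of complexes φ = N!/(N1!·N2!) for an assumed
# small system of N = 20 molecules, confirming that φ (and thus the
# entropy H = k·log φ) is maximal at the even split N1 = N2 = N/2.
N = 20
phi = [math.comb(N, n1) for n1 in range(N + 1)]  # φ for each split N1 = n1
best = max(range(N + 1), key=lambda n1: phi[n1])
print(best, phi[best])  # most probable split and its number of complexes
```

Any asymmetry of the distribution reduces the number of complexes, exactly as the text argues.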

FIGURE 1–1 Closed space with a partition; each half is characterized by its pressure, volume, and temperature (p, V, T).


The magnitude of k was obtained numerically, for maintaining a common dimension with Clausius’ entropy, from the relation

k = R/NA

where R is the gas constant for a gram molecule, and NA is the Avogadro number. Such an ensemble of systems can be visually represented by a distribution function expressing the probability Pf that a system chosen from the ensemble is in some concrete state. After Boltzmann, Planck formulated the principal relations connecting the distribution function Pf with the thermodynamic properties of a macrostate in a more general form:

H = −k Σ Pf ln Pf   (1.14)

under the condition that Σ Pf = 1. This dependence satisfies the ideas of entropy behavior. It allows a more exact measurement of the disorder, which represents, in this sense, the absence of information about the exact state of the system. It should be especially noted that, for systems consisting of a large number of particles, all states differing from a uniform distribution are hardly probable.
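Eq. (1.14) can be sketched with k = 1 for two assumed four-state distributions, one uniform and one sharply peaked, to show that the uniform distribution maximizes H.

```python
import math

# Sketch of Eq. (1.14) with k = 1: entropy of a uniform distribution
# over four states versus a sharply peaked one. Both distributions are
# assumed examples; each satisfies Σ Pf = 1.
def H(P):
    return -sum(p * math.log(p) for p in P if p > 0)

uniform = [0.25] * 4
peaked = [0.97, 0.01, 0.01, 0.01]
print(H(uniform), H(peaked))  # uniform gives ln 4, the maximum for 4 states
```

Concentrating the probability on one state drives H toward zero, matching the reading of entropy as missing information about the exact state.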

1.4 Substantiation of the statistical analysis validity

Boltzmann’s statistical approach caused sharp objections. Equations of mechanics are invariant with respect to the time parameter, that is, they are symmetrical under the substitution of t with −t. At the same time, entropy change singles out a certain time direction. The earliest objection to Boltzmann’s theory was formulated on this basis. It was offered by Loschmidt and is known in science as the “reversibility paradox.” Its essence is as follows. It can be logically assumed that if a certain mass process is possible according to the laws of mechanics, these laws must also allow an opposite process, in which the system passes through the same configurations in the opposite order, and its entropy decreases. This means that if a gas is in a certain state S0 at a certain moment of time, after the lapse of time t it passes into the state St, and the entropy of the system increases, that is, Ht > H0. Loschmidt thought that upon the inversion of the velocities and directions of all gas particles in the state St in a closed system, entropy must revert to its initial value H0 after the same time t. However, according to Boltzmann’s theory this does not occur, since in this case entropy also grows, that is, H2t ≥ Ht. An objection of Zermelo was more essential. It consisted in the fact that, according to Poincaré’s recurrence theorem, a closed mechanical mass system must finally return, as a result of its own evolution, to a state arbitrarily close to the initial one. Thus it turned out that if entropy were a purely dynamic quantity, it could not always grow.


Today we understand that the objections of Loschmidt and Zermelo are inconsistent because of a simplified formulation of the notion of thermodynamic entropy. The formula for the number of particle collisions in a gas, which served as the basis for the derivation of Boltzmann’s equation, must not be considered as a simple consequence of the equations of mechanics. It calls for some additional interpretation. Boltzmann himself interpreted these results statistically, and only after that formulated the notion of statistical entropy. The point is that no particle can contribute to entropy by itself. This parameter is determined by the behavior of their totality, which behaves in a special way. Here we imply only the most probable consequence of a definite state of the whole system. In other words, the principle of entropy increase means only the following: if we consider a certain macroscopic description of a system, the overwhelming majority of all states satisfying this macroscopic description give an entropy increase (or the same entropy) at subsequent moments of time. Generally speaking, this leads to two problems:
1. Is it possible to reconcile reversibility in time, together with the idea of recurrence, with the “observable” irreversibility?
2. Is such reconciliation possible within the frames of classical mechanics?
The first problem is essentially logical. An affirmative answer is obtained if a model possessing all the required properties is indicated. Such a model of the process was suggested by P. and T. Ehrenfest (1907). This model is essentially one of the examples of a finite Markov chain, but it is of great independent interest. Its essence is as follows. Let 2R balls numbered from 1 to 2R be placed into two boxes A and B. Let a random-number generator give a certain integer from the range between 1 and 2R at a discrete moment of time with the time interval t. Then the ball with this number is moved from its box to the other one, and the procedure is repeated many times.
For the sake of simplicity of the analysis, let all the balls stay in box A at the initial moment of time. It is intuitively clear what will happen. Imagine that at a certain moment of time, after S extractions, there are nA(s) balls in box A and, respectively, 2R − nA(s) balls in box B. At the next (s + 1) step, the probability of the appearance of a ball with a number placed in box A is nA/2R, and in box B, respectively, (2R − nA)/2R. If nA > 2R − nA, the probability of the appearance of a ball from box A is higher than that of a ball from box B. Therefore in this situation a transition from A to B is more probable, and the difference between the numbers of balls in the two boxes decreases. This tendency is conserved until the equality nA − (2R − nA) = 0 is attained, and it becomes weaker as this difference tends to zero. Thus while the numbers of balls in the two boxes are equalizing, the probabilities of the appearance of balls from A or B approach each other, and the result becomes less and less predictable. At the extraction of the next ball, further equalization of the numbers of balls in the boxes can take place, but the opposite can also happen. Fig. 1–2 shows the result of a realization of such a process with 40 balls. Obviously, the process seems irreversible at first, but in the vicinity of the “equilibrium position” oscillations (fluctuations) of the difference of ball numbers appear, which points to the fact that the process is not strictly irreversible. (In this figure the fluctuation value is always


FIGURE 1–2 The Ehrenfests distribution. Ordinate: [nA(S) − nB(S)] = 2[nA(S) − R].

positive, since the ordinate axis shows the modulus of the difference of ball numbers in boxes A and B.) We cannot assert that this difference always decreases, but we can be sure that for large numbers of balls 2R, it “almost always” decreases as long as we are far away from the “equilibrium.” This is exactly the behavior of the entropy of a nonequilibrium multiparticle system. The Ehrenfests’ model provides easy answers to all objections against the substantiation of thermodynamic irreversibility on statistical grounds. In particular, according to the microscopic reversibility principle, it is quite possible to imagine a process in which the motion of balls after the “time reversal” will follow back exactly along the same curve. At large R this process is absolutely improbable. The probability of all the balls gathering “at some time” in the same box is not zero, but very small (incredibly small for R ≈ 10²²). This is exactly the sense of thermodynamic irreversibility and the entropy growth law. It certainly follows from this that the sequence of states consecutively passed by an isolated system corresponds to a more and more probable distribution. Therefore processes in a nonequilibrium closed system occur in such a way that the system continuously passes from states with lower entropy to states with higher entropy until the entropy reaches the maximal value possible under the specific conditions, which corresponds to total statistical equilibrium. Speaking about “the most probable” behavior, one should keep in mind that, in reality, the probability of a transition into a state with higher entropy is so overwhelmingly greater than that of its noticeable decrease that the latter actually cannot be observed in nature, apart from small fluctuations. This formulation of the entropy growth law in a purely probabilistic sense was suggested by Boltzmann.
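The urn process described above is easy to simulate. The sketch below uses 2R = 40 balls, a fixed random seed, and 2000 steps; these run parameters are assumptions chosen only to reproduce the qualitative picture of Fig. 1–2.

```python
import random

# Simulation sketch of the Ehrenfests' urn model: 2R = 40 numbered
# balls, all initially in box A; at each step a uniformly drawn ball
# changes boxes. Seed, ball count, and step count are assumed values.
random.seed(1)  # fixed seed so the run is reproducible
R2 = 40                      # total number of balls, 2R
in_A = [True] * R2           # all balls start in box A
diffs = []
for step in range(2000):
    i = random.randrange(R2)        # random-number generator picks a ball
    in_A[i] = not in_A[i]           # move it to the other box
    nA = sum(in_A)
    diffs.append(abs(nA - (R2 - nA)))  # |nA - nB| after this step

# far from equilibrium the difference almost always decreases;
# near nA = nB it only fluctuates around zero
print(diffs[0], sum(diffs[-1000:]) / 1000)
```

The initial difference of 40 collapses quickly, after which only small fluctuations around zero remain, mirroring the curve the text describes.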
According to Landau (1936), the law of entropy growth formulated in this way could hardly be derived on the basis of classical mechanics. For some time, the idea of large spontaneous fluctuations of a system, which could lead to a considerable entropy reduction, was seriously discussed. The Ehrenfests’ model completely rejected such a possibility. However, specialists returned to disputes on this problem for a long time. The validity of Boltzmann’s theory was supported by Kac (1957) on the basis of a mathematical model of the process, and the question was finally resolved by Ruelle (1971). The latter has


shown that at the reversal of the velocities of all the particles of a system, there exists a delicate circumstance that was not taken into account by Boltzmann’s opponents, namely, the initial conditions. When the laws of mechanics were applied to the examination of a reversed motion, an assumption was made that the system under study was completely isolated. However, this is unrealistic. The walls of a vessel containing gas also consist of molecules, and hence the motion of the molecules in them must also be reversed, and they are in contact with the outer space. Hence, for the reversal of the particles inside the system, the motion of all atoms and molecules of the universe would have to be reversed, which is totally absurd even as a mental experiment. If the direction of velocities in some part of the universe is changed to the opposite one, time will not turn back, and entropy will not decrease. In the times of Boltzmann, the role of sensitive dependence on initial conditions for the understanding of reversibility was not duly appreciated. Today there is no doubt that the ideas of Boltzmann fully fit the knowledge acquired later. However, at that time many scientists thought that his kinetic theory of gases was based on a doubtful “atomistic hypothesis” developed using doubtful mathematics. His statements deriving an irreversible time evolution from the laws of classical mechanics, which are obviously reversible per se, caused specific objections. Regarding these debates, we can conclude that an abstract discussion of physical phenomena, detached from the reality it is trying to explain, is misleading and rather useless. Boltzmann was working in the field of thermodynamics—the theory connected with entropy and irreversibility. His great achievement was the interpretation of thermodynamics within the framework of the “atomistic hypothesis” using statistical mechanics.
Boltzmann’s anticipation became a reality, as today it has been irrefutably proven that matter consists of atoms, and Boltzmann’s formula for entropy can be checked experimentally. His statistical mechanics has acquired an immense predictive value in modern science. At the same time, it should be noted that Boltzmann’s ideas on atoms were far from complete. Atoms and molecules are not just small moving balls, as he imagined, but rather possess a complicated internal structure. However, the importance of Boltzmann’s ideas is not connected with the cognition of atomic structure—they reflect one important step in the understanding of nature. In the mid-20th century, Boltzmann’s relation led Shannon to a connection between entropy and information. It was an unexpected discovery, although even the notion of thermodynamic entropy was based on certain aspects of an informational approach. Entropy is interpreted as a measure of the disorder of a complicated system. Hence, the higher the entropy, the less we know about the system. In his basic works, Shannon introduced a quantitative measure of the uncertainty connected with random events. His works influenced a more accurate definition of some statements of statistical mechanics. He formulated informational entropy in the form of a relation similar to Boltzmann’s formula

H = −k log P   (1.15)

where k is the proportionality factor, and P is an event probability.
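Taking k = 1 and a base-2 logarithm in Eq. (1.15) gives the informational entropy in bits; the probabilities below are assumed examples.

```python
import math

# Sketch of Eq. (1.15) with k = 1 and a base-2 logarithm, so H comes
# out in bits: the uncertainty of events with assumed probabilities P.
probs = (0.5, 0.25, 0.125)
bits = [-math.log2(P) for P in probs]   # H = -k log P with k = 1
print(bits)  # each halving of P adds one bit of uncertainty
```

This illustrates why, in practical applications, informational entropy is dimensionless and expressed in bits, as discussed below.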


Some authors see in this relation a connection between information theory and thermodynamics. For this purpose, they assume that the coefficient k in this equation is equal to the coefficient k in Boltzmann’s formula (1.13). In this case, the dimensions of thermodynamic and informational entropy coincide. This is substantiated by the seemingly incontestable fact that the production, conversion, transmission, and receipt of information always require an expenditure of energy. Under different conditions, this expenditure can differ. Therefore, instead of the actual amount of consumed energy, the corresponding entropy is used, since information is measured in entropy units. Apparently, this is caused by the understanding that transmitted information is in no way connected with, for example, the surrounding temperature. It is unlikely that this parameter reflects the work consumed for the information transmission. After all, in practical applications informational entropy becomes dimensionless and is expressed in bits—units that are in no way connected with energy. It is assumed in thermodynamics that entropy is a measure of the energy consumed for returning a system to its initial state after an irreversible change in the system. Clausius formulated the notion of entropy using macroscopic parameters of thermodynamics only, while Boltzmann defined it using microscopic parameters only. There exists at least one field where these two aspects of entropy appear at the same time—the theory of gas mixing and separation. In its modern form, this leads to Gibbs’ paradox, which is physically grounded, and also to some other paradoxes that are not physically grounded and require investigation.
In addition, it is necessary to reconcile the notion of statistical entropy, which uses as a parameter the number φ of complexes in an ensemble (a certain number raised to a power of the order of N ≈ 10²³ even for a mole of gas), with informational entropy, where P is the system probability, which is always equal to or less than unity. Here we outline a number of problems connected with entropy that must be solved for generating a similar notion for complicated systems.

1.5 Some problematic aspects of entropy

One unclear question connected with the dimension of entropy remained. It was totally well-grounded that, while formulating the macroscopic entropy, Clausius ascribed to it a dimension equal to energy divided by temperature according to Eq. (1.3). He could not do anything else. However, for some reason, many scientists perceive entropy as energy, as mentioned above. Recall once more that entropy according to Boltzmann is

H = k log φ and k = R/NA   (1.16)

where R is the universal gas constant, R = 8.31 J/(mol·K), and NA is the Avogadro number, NA = 6.023 × 10²³ mol⁻¹.
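A quick numeric check of k = R/NA with the values just quoted recovers the familiar magnitude of the Boltzmann constant.

```python
# Numeric check of k = R / NA using the values quoted in the text.
R = 8.31          # universal gas constant, J/(mol·K)
NA = 6.023e23     # Avogadro number, 1/mol
k = R / NA
print(k)  # ~1.38e-23 J/K, the Boltzmann constant
```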


Boltzmann’s constant has, in essence, the dimension of specific heat capacity. This reduces Boltzmann’s entropy to Clausius’ entropy, and from the standpoint of physics, these parameters are identical. However, there is a latent drawback in this dimension, which often manifests itself in a paradoxical way. In this connection, we make one more attempt to define the physical meaning of entropy more precisely. According to the already mentioned empirical rule of Dulong and Petit for atomic gases, first, their heat capacity is independent of temperature, and, second, the specific heat capacity of a mole of gas is constant and equals approximately

cp ≈ 21 J/(mol·K)   (1.17)

This rule expresses the energy conservation law, because a mole of any gas contains the same number of particles. However, in each gas, the molecules have individual masses. Light particles gather higher speeds than heavy ones. At the same energy consumption, the temperature of all these gases rises by the same value. We examine the specific heat capacity parameter, whose dimension is analogous to that of thermodynamic entropy:

cp = dQ/dT   (1.18)

where cp is the heat capacity of the gas at constant pressure. According to (1.18), the specific heat capacity per unit mass is expressed as

cp = 21/μ J/(g·K)

where μ is the mass of a mole of the respective gas. This hyperbolic dependence shows that for light gases this parameter is greater than for heavy ones. Hence, raising the temperature of a unit mass of a light gas by one degree requires a greater heat consumption than for a heavy gas. This looks paradoxical. Now we can revert to the entropy expression of Clausius

dH = dQ/T

From here and from (1.18), two equalities follow:

dQ = TdH
dQ = cp dT


Hence,

TdH = cp dT

We separate the variables and integrate:

∫(T1→T2) dT/T = (1/cp) ∫ dH

We obtain

ΔH = cp log(T2/T1)
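As a numeric sketch of the increment ΔH = cp·ln(T2/T1): taking the molar heat capacity cp ≈ 21 J/(mol·K) from the text and an assumed doubling of the temperature gives a modest, always positive increment.

```python
import math

# Numeric sketch of ΔH = cp·ln(T2/T1). The value cp = 21 J/(mol·K) is
# taken from the text; the temperatures are assumed for illustration.
cp = 21.0
T1, T2 = 300.0, 600.0
dH = cp * math.log(T2 / T1)
print(dH)  # the logarithm strongly levels the effect of the T difference
```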

where ΔH is the entropy increment at the temperature change from T1 to T2. The temperature ratio adds nothing to the dimension of cp. The logarithm of the temperature ratio considerably levels the effect of their difference on the entropy increment value, which is, at the same time, always positive. This shows that entropy grows as temperature increases from T1 to T2, but its absolute value is mostly determined by the specific heat capacity of the gas. All these paradoxes can be overcome if only one idea is accepted, namely, the assumption that entropy H is dimensionless not only in the theory of information, but also in thermodynamics. In fact, this is true. Following Clausius’ relation

dH = dQ/T

entropy is interpreted as a parameter connected with energy. Here we come across a somewhat strange logical situation: in this case, the temperature T becomes dimensionless with respect to energy and represents a kind of dimensionless coefficient devoid of physical meaning. This fact obscures the general pattern of thermodynamics and leads to confusion. That temperature is itself a carrier of energy is clear even at an intuitive level, and the laws of physics also point to that. It is sufficient to recall Gay-Lussac’s law stating that the energy of an ideal gas depends on temperature only, being independent of volume and pressure. We can also mention the phenomena arising at the thermal expansion of solids, which are determined by temperature only and develop huge forces. It is perfectly clear that the definition of temperature must be based on a physical quantity characterizing the state of a body, which is automatically the same for any of various bodies that are in thermal equilibrium with each other. It turns out, and it has been known for a long time, that the mean kinetic energy of the translational motion of particles (atoms or molecules) of a body possesses this remarkable property. Because of this, the mean kinetic energy of the translational motion of particles inside a system or a body can be chosen as a measure of temperature. As early as the 19th century, it

Chapter 1 • Brief history: properties and problems of entropy parameter
was known from fundamental physics that temperature depends on the kinetic energy of translational motion of molecules and is defined by the relation

T = (1/3)mv²  (1.19)

where m is the particle mass, and v is its mean velocity. Here the mean value can imply either a value averaged over the velocities of the body's particles at the same moment of time, or a value averaged over the velocities of one and the same particle at different moments of time. Both definitions are absolutely equivalent. According to (1.19), temperature has the dimension of energy and can be measured in energy units, say, in ergs. However, for two reasons, this parameter proves to be extremely inconvenient. First, the energy of thermal motion of particles is negligibly small in comparison with an erg. Second, a direct measurement of temperature as particle energy is extremely difficult. No one would think of measuring temperature in this way when having the ability to use degrees, a simple and practical measurement unit for this parameter. It should be emphasized that a degree of temperature also has the dimension of energy. A conversion factor has been found showing the fraction of an erg contained in one degree of temperature. This factor corresponds to the Boltzmann constant and equals

k = 1.38 × 10⁻¹⁶ erg/K

The total kinetic energy of particles in 1 gram-molecule of a substance per one degree equals

k × NA = 1.38 × 10⁻¹⁶ × 6.02 × 10²³ erg = 8.31 J
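The conversion above is easy to verify numerically; a minimal sketch using the constants quoted in the text and the standard relation 1 J = 10⁷ erg:

```python
# Verify that k × N_A, quoted in erg in the text, equals 8.31 J (the gas constant R).
k_erg_per_K = 1.38e-16   # Boltzmann constant in erg/K, as given above
N_A = 6.02e23            # Avogadro's number
ERG_PER_JOULE = 1e7      # 1 J = 10^7 erg

kNA_joule = k_erg_per_K * N_A / ERG_PER_JOULE
print(round(kNA_joule, 2))  # → 8.31
```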

Now it is clear that if the temperature is expressed in degrees, its magnitude in ergs equals kT, and the relation (1.19) should be written as

kT = (1/3)mv²  (1.20)

Here we have to emphasize that the dimension of energy refers to the temperature parameter, and not to the Boltzmann coefficient. Hence, it becomes clear that entropy in Clausius' formula is a dimensionless parameter, just like the informational entropy. From this standpoint, the specific heat capacity is also dimensionless and most probably characterizes the thermal mobility of a specific system. This removes all the absurdities mentioned in this chapter. Obviously, all misunderstandings arose because the dimension erg/K was assigned to the Boltzmann factor (k). This dimension is perceived as energy, and in most cases this factor is interpreted exactly in this way. In fact, however, it is dimensionless, since both the erg and the degree refer to energy, and thus, finally, the dimension of this factor is zero. Only in combination with temperature, kT, does this factor actually express

Entropy of Complex Processes and Systems
energy. We recall that this factor constitutes a ratio between the temperature expressed in degrees and the energy expressed in ergs. As for Boltzmann's entropy

H = k log φ,
where k in this relation is dimensionless. Hence, a fundamental conclusion follows: entropy according to Clausius and Boltzmann is a dimensionless quantity. This makes it possible to compare and combine it with the entropy of other processes that are not connected with heat transfer, for example, with informational entropy. There is another aspect of this problem that should be discussed before continuing these considerations. By way of example, we revert to the gaseous system shown in Fig. 11. Here the entropy value is determined through the number of complexes defined by the following expression:

φ = (N1 + N2)!/(N1!N2!) = N!/(N1!N2!)  (1.21)

with the entropy being equal to

H = k ln φ  (1.22)

The number of molecules in a gas is enormous. One gram-molecule contains NA ≈ 6 × 10²³ molecules. Multiplying or dividing the argument of (1.22) by NA, we obtain

H = k ln(φ × NA)  (1.23)

Here the value of the factor k is not essential and does not affect the logic of our reasoning. Therefore we concentrate on the expression under the logarithm sign. Since each particle can be located either in the right-hand or the left-hand compartment (Fig. 11), the probability of its state is P = 1/2, and the number of complexes in the ensemble is on the order of 2^N. We expand the logarithm in (1.23):

H = k(ln φ ± ln NA)  (1.24)

and divide both parts of (1.24) by NA:

H/NA = (k ln φ)/NA ± (k ln NA)/NA
The second summand in the right-hand part of this relation is vanishingly small in comparison with the first one. Hence, it does not affect entropy and, therefore, can be discarded. This points to another subtle property of entropy. The values of its parameters located under the sign of the logarithm can comprise countless numbers of complexes of the system, but they can also have the value of probability expressed by fractions of unity. The main issue is the ratio between these values appearing in the expression of entropy.
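The negligibility of the second summand is easy to confirm numerically; a quick sketch, taking N on the order of the Avogadro number and φ ≈ 2^N as in the text:

```python
import math

N = 6.02e23                    # number of particles, of the order of Avogadro's number
ln_phi = N * math.log(2)       # ln(2^N): logarithm of the number of complexes
ln_NA = math.log(6.02e23)      # logarithm of the Avogadro number itself

# The ratio of the two summands in H/N_A is vanishingly small.
print(ln_NA / ln_phi)          # on the order of 1e-22
```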

In conclusion, we emphasize once more that the entropy of any physical system (thermodynamic, informational, or any other) determining its uncertainty degree is a dimensionless value. Therefore entropies of any system can be compared and combined with others. The dimensionless character of entropy was mentioned in the work of Landau and Lifshitz, who asserted it without any proofs and derivations (1934). Apparently, because of that, this assertion has not been consolidated in modern thermodynamics.

1.6 Gibbs paradox and problems of gaseous systems separation

The problem known in physics as the Gibbs "paradox" was formulated in 1879. Since then, it has attracted the attention of many researchers, including first-rate scientists. For more than 130 years, they have been attempting to solve it using various approaches and within the frameworks of different branches of science, including thermodynamics, classical statistics, quantum-mechanical statistics, and the theory of information, from the standpoint of so-called operational analysis, or by combining some of the mentioned methods. Such outstanding scientists as, for example, Boltzmann, Lorentz, Poincaré, Planck, Einstein, Landau, Schrödinger, and many more, have tried to solve this problem. During the 20th century, many publications on this problem appeared in different countries, and they still appear nowadays. However, an unambiguous solution to this problem has not been found yet. The Gibbs "paradox" appears as the entropy change after mixing various ideal gases. It is formulated in a very simple way. Imagine two isolated identical volumes (Fig. 11), each containing the same number of moles of gas, but with the gases being different. We consider the case of the same pressure p and the same temperature T in these identical volumes. From the standpoint of thermodynamics, each gas in the initial state in the volume under study possesses a certain entropy. Before their mixing, the total entropy of both parts of gas is

H1 = N(S1 + C1 ln T + R ln(V/N)) + N(S2 + C2 ln T + R ln(V/N))  (1.25)

where S1 and S2 are constants characterizing a definite gas; C1 and C2 are the heat capacities of each gas; T is the absolute temperature; N is the number of molecules of each gas; R is the universal gas constant; and V is the volume. If we remove the partition and allow the gases to mix, the entropy increment at their mixing amounts to

ΔH = 2NR ln(2V/N) − 2NR ln(V/N) = 2RN[ln(2V/N) − ln(V/N)] = 2RN ln((2V/N)(N/V)) = 2RN ln 2  (1.26)
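For a feel of the magnitude, relation (1.26) can be evaluated per mole; a hedged sketch, assuming R ≈ 8.31 J/(mol·K) and taking one mole of each gas:

```python
import math

R = 8.31                      # universal gas constant, J/(mol K)
n = 1.0                       # moles of each gas

# Eq. (1.26): entropy jump on mixing two different gases, n moles each
delta_H = 2 * n * R * math.log(2)
print(round(delta_H, 2))      # → 11.52 J/K, independent of which gases are mixed
```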

From the standpoint of classical mechanics, it can be admitted, following Lorentz, that in each volume the entropy is zero, since the probability of the particles' state in them is P = 1 and log 1 = 0. In a general case, the number of statistical complexes of the system of particles after the partition removal is

φ = (N1 + N2)!/(N1!N2!)
We expand this expression using the Stirling formula

log N! ≈ N log N − N

Taking this into account, the entropy of the mixture is

ΔH = k[(N1 + N2)log(N1 + N2) − (N1 + N2) − (N1 log N1 − N1) − (N2 log N2 − N2)]
  = k[N1(log(N1 + N2) − log N1) + N2(log(N1 + N2) − log N2)]
If the quantities of the mixed particles are equal, N1 = N2 = N, then

ΔH = 2kN log 2  (1.27)
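The Stirling approximation behind (1.27) can be checked against the exact count of complexes; a sketch using the log-gamma function (N is kept modest here only so the computation is quick; the agreement improves as N grows):

```python
import math

def ln_complexes(n1, n2):
    # Exact ln[(n1 + n2)! / (n1! n2!)] computed via the log-gamma function
    return math.lgamma(n1 + n2 + 1) - math.lgamma(n1 + 1) - math.lgamma(n2 + 1)

N = 10**6
exact = ln_complexes(N, N)          # exact ln(phi) for N1 = N2 = N
stirling = 2 * N * math.log(2)      # the 2N ln 2 of Eq. (1.27), with k = 1

print(exact / stirling)             # very close to 1; the gap shrinks as N grows
```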

This is the value of the entropy increment. We ignore the difference between the expressions (1.26) and (1.27). This difference is explicable and not very fundamental. It is important that in both of these cases, the mixing of different gases leads to a jump in the entropy. These expressions are valid if the particles of the gases being mixed differ in some way. If, however, the particles are totally identical, entropy does not increase at all with their mixing. In this case, 2N particles will occupy the volume 2V, while the entropy increment drops to zero. This is the essence of the "paradox." At the mixing of different gases, entropy increases stepwise by the value ΔH, and this value is independent of the nature of the gases being mixed and of the extent of their similarity. Even if the parameters of the gases are very close, an "entropy jump" takes place if they have at least some insignificant difference. In the analysis of the Gibbs "paradox," attention was concentrated on at least three essential questions:

1. What is the physical essence of the difference between a mixture of different gases and a "mixture" of identical ones?
2. Is a limiting transition from mixing different gases to "mixing" identical ones possible, and how is it provided?
3. Why is the entropy jump value independent of the nature of the gases being mixed and of their extent of similarity?

From the moment of the "paradox" formulation up to now, entropy has been considered as a certain form of energy, and the entropy jump has been interpreted as the work of

diffusion of the gases being mixed. The magnitude of this work is considered as the minimal energy required for separating the mixture of these gases into its original components. It was not simple to answer these questions, and, as follows from the specialized literature, a clear physical explanation remains absent. The Gibbs "paradox" has played an important part in the development of modern physics. First, attempts were made to solve it by thermodynamic methods. At the start of the 20th century, it was realized, for example by Nernst in 1904, that the work of mixing of gases is imperceptible to study by thermodynamic methods and should be determined by some other means. No solution to the "paradox" was obtained using the methods of classical statistics; only a solution via a limiting transition based on the analysis of particle indistinguishability was outlined. As the history of the development of physics shows, quantum statistics appeared, to some extent, due to the Gibbs "paradox." Fermi arrived at his statistics through the problem of the absolute value of entropy. He was developing a statistics that allowed, in his words, obtaining values of "entropy constants consistent with the experiment." In his opinion, the principal criterion of the correctness of his statistics was "its consistency with the Gibbs theorem." The Gibbs "paradox" also played a role in the development of Bose-Einstein statistics. The consideration of this problem conceals a profound issue: these statistics do not distinguish molecules of the same sort, but distinguish those of different types. Springer offered an explanation of the entropy jump on the basis of the quantum dispersibility of a substance. In addition, scientists have sought the solution to the Gibbs "paradox" using operational methods. These are based on the combination of the system under study and the tools for its analysis.
In the case of a certain instrument, gases are indistinguishable, and there is no entropy jump. In the case of a different instrument they become distinguishable, and the jump is present. Here the main object is a tool or even an observer, and not the system of gases. From the standpoint of an informational approach, entropy jump denotes the quantity of information which is necessary for the separation of gases. This is the same as the minimal work of separation, but expressed differently.

1.7 Actual separation of gases

Separation of gases is not only of theoretical interest; it is realized in industrial practice. Usually, this process is accomplished by methods of fractional distillation or centrifugation. According to the Gibbs "paradox," gaseous mixture separation should be independent of the nature of the components. However, as practice shows, this is not the case. It turns out that the amount of energy required for component separation is greater, the smaller the difference between the components. In this respect, data given by Shambodal are of interest. According to computations using the mixture entropy, for the separation of a mixture of gases consisting of 99.3% ²³⁸UF₆ and

0.7% ²³⁵UF₆ during the production of nuclear fuel, an energy of 0.023 kWh is needed per 1 kilogram of the target component. The actual energy consumption was 1.2 × 10⁶ kWh, that is, some fifty million times greater. This means that the efficiency is on the order of η = 10⁻⁹. According to other experimental data, practical energy consumption at isotope separation, in comparison with theoretical estimates, gives an efficiency of η = 10⁻⁶ in the case of the gas diffusion method, η = 10⁻⁷ at mass diffusion, and η = 10⁻⁸ at thermodiffusion. These data give rise to doubts as to the correctness of the interpretation of gas mixing and separation theory. In fact, a jump of mixing entropy in no way characterizes energy consumption for the separation process. Here is a characteristic opinion of Shambodal shared by many scientists: "Classical theory asserts that any mixture involves a certain entropy of diffusion (or mixing entropy), which is independent of the mixture components. However, this conclusion is not applicable to any problem that should be solved by theory. Here the theory completely fails, inevitably leading to the Gibbs 'paradox' problem, which has not been satisfactorily explained yet. It is not surprising that its 'theoretical' results are not related at all to reality. Therefore no Gibbs 'paradox' exists at all despite speculative reasoning aimed at attaching the conspicuity of existence to it, as is done in theory." It seems to us that the problem is due to a somewhat imprecise interpretation of entropy on the whole and of the entropy jump in particular, and not to flaws of the theory. Entropy and the Gibbs "paradox" are real parameters of physical phenomena that need correct understanding, which turns out to lie in a somewhat different sphere of physical notions.
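The "fifty million" figure follows directly from the two energy values quoted above; a trivial check, with the units in kWh as in the text:

```python
theoretical_kwh = 0.023   # computed minimal separation energy per kg, as quoted
actual_kwh = 1.2e6        # actual energy consumption per kg, as quoted

ratio = actual_kwh / theoretical_kwh
print(ratio)              # roughly fifty million times greater
```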

1.8 Solution to the Gibbs "paradox"

The Gibbs "paradox" was formulated within the framework of classical statistics under strict initial conditions. Therefore it was necessary to seek its solution within the same statistics. It seems somewhat incorrect to seek its solution within the framework of other scientific ideas, statistics, and theories and, even more so, to connect it with an experiment, since it is, in essence, a purely speculative problem. The point is that the initial conditions are trivial: we examine two volumes of different gases at the same temperature and pressure. With the mixing of these gases, the joint entropy of the obtained system is characterized by a jump. The questions arise as to why this happens and what the meaning of this jump is. We try to answer these questions from a standpoint that is somewhat different from what has been generally accepted until now. We examine a physical system consisting of the same two different gases placed into two volumes with the same technical parameters, that is, T1 = T2 and p1 = p2. The numbers of gas molecules in each volume can differ. The difference is that we examine a system with a partition which cannot be removed. Such an approach is quite correct, as cellular models of distributions with the number of cells in a system exceeding the Avogadro number (10²³) are often examined in physics.

We assume that this system consists of two cells only. According to classical statistics, the number of complexes in such a system is

φ = (N1 + N2)!/(N1!N2!)  (1.28)

The entropy jump for such a system equals

ΔH = k log φ = k[log(N1 + N2)! − log N1! − log N2!]

Using Stirling's formula, we can obtain

ΔH = k[(N1 + N2)log(N1 + N2) − (N1 + N2) − (N1 log N1 − N1) − (N2 log N2 − N2)]
  = k[N1(log(N1 + N2) − log N1) + N2(log(N1 + N2) − log N2)]  (1.29)

If the number of particles in each cell of the system is the same, N1 = N2 = N, then

ΔH = k[N(log 2N − log N) + N(log 2N − log N)] = 2kN log 2,  (1.30)

which exactly corresponds to the result (1.27) obtained when mixing identical portions of different gases. From the standpoint of the existing notions, the obtained result is rather strange. In this system, there is no mixing of gases and no work of diffusion, but the entropy jump takes place. Hence, it characterizes something different, which is not connected with the work of mixing. To understand it, we have to revert once more to the analysis of the physical meaning of entropy. All the problematic questions that have arisen here can be resolved by accepting one fact. The entropy jump, or the Gibbs "paradox," reflects a change in the state of a system in a somewhat different manner, which is not connected with an energy change. As is known, a general property of entropy is its growth with increasing disorder or complexity of a system. Since the time when the statistical definition of entropy was formulated, it has been considered not only as a parameter of the processes' irreversibility, but also as an uncertainty unambiguously connected with the probability of the system's state. Let us examine the entropy jump of the joint system (1.29) more attentively. After simple transformations, we obtain

ΔH = −kN[(N1/N)log(N1/N) + (N2/N)log(N2/N)]  (1.31)

The ratios N1/N and N2/N are nothing but the shares of the respective particles in the system under study. As is known, such a share represents the probability of the content of these particles, that is,

N1/N = P1,  N2/N = P2,

under the condition that N1 + N2 = N.

Taking this into account, relation (1.31) acquires the form

ΔH = −kN(P1 log P1 + P2 log P2)  (1.32)
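Relation (1.32) is easy to explore numerically; a minimal sketch, with the assumptions that k = N = 1 and that logarithms are taken to base 2, so the value is a normalized per-particle uncertainty:

```python
import math

def mixture_entropy(p1):
    # Compositional entropy -(p1 log p1 + p2 log p2) of a binary mixture,
    # with k = N = 1 and base-2 logarithms (normalized, per-particle form)
    p2 = 1.0 - p1
    h = 0.0
    for p in (p1, p2):
        if p > 0:          # a vanished component contributes nothing
            h -= p * math.log2(p)
    return h

print(mixture_entropy(0.5))   # → 1.0: maximal uncertainty at equal shares
print(mixture_entropy(1.0))   # → 0.0: a "mixture" of one component has no uncertainty
```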

Thus a jump of multicomponent system entropy is an unambiguous characteristic of the mixture composition. For N1 = N2, that is, for P1 = P2, the entropy jump value is

ΔH = −kN[(1/2)log(1/2) + (1/2)log(1/2)] = kN log 2,  (1.33)

which exactly corresponds to the "paradoxical" Gibbs jump at equal amounts of the mixed particles. In principle, in this way we can specifically characterize the uncertainty of the composition of any material systems consisting of various elements, mixtures of multicolored balls, milled polyfractional materials, etc. It is accepted that entropy is a characteristic of dynamic systems only. However, it turns out that static systems can also be characterized by the entropy parameter. A milled material can be motionless, while its composition is characterized by a certain value of compositional entropy according to its particle size and density. It can be easily shown that this parameter is a component of the thermodynamic identity. Such an approach allows for providing a detailed and comprehensive answer to all the questions that arose at the formulation of the Gibbs "paradox."

1. A mixture of different ideal gases basically differs from a "mixture" of identical ones. The probability of some component's content in a mixture is always below unity, and this is a reason for determining the composition uncertainty. A "mixture" of identical gases contains only one component; therefore its probability in this composition equals unity, and there is no uncertainty.
2. A limiting transition from mixing different gases to identical ones is clear from the relationship for the compositional uncertainty. This uncertainty has the maximal value at an equiprobable composition of a binary mixture, where P1 = P2 = 1/2. With a decreasing share of one component, the probability of the other component increases, and the total uncertainty decreases. This decrease is monotonic until the share of one of them becomes zero, the share of the other acquires the value P = 1, and the total uncertainty is zero. In a general case, the entropy of a mixture varies within the limits

0 ≤ ΔH ≤ 1

3. The entropy jump value is independent of the nature of gases being mixed. It takes place for different gases, while for identical gases it is zero. The entropy jump value depends only on the ratio of the component shares in a mixture. Thus we can conclude that the entropy jump at the mixing of gases is not a “paradox.”

The dynamic component of entropy depends on the concrete parameters, such as temperature, pressure, and volume, and therefore its character is explicitly objective. As for the static entropy of composition, the situation is somewhat different.

1.9 Phenomenological problems of the second law

(a) Essence of the problem

A general result that can be considered as finally established, on the basis of all that is stated above for spontaneous processes in abiotic nature, is the existence of a connection between the entropy of a certain state of a system and the probability of this state. (For the convenience of further exposition, we conditionally call elements of abiotic nature physical objects, and elements of living nature biological objects.) We accept a priori the presence of a connection between the entropy and the probability of the state of a system, since these two quantities change in the same direction. The main peculiarity of processes of this kind is a decrease in the free energy of a system observed with growing entropy. This forms the basis of the only law in physics characterizing the direction of changes in nature: the second law of thermodynamics. It predetermines the direction of the evolution of physical systems towards equilibrium, that is, to the maximal possible stability. For a system approaching equilibrium, there exist several physical and mathematical definitions of stability. However, all of them are connected with the equilibrium state. Usually it is considered that the system under study can be stable only if it is characterized by only one equilibrium state and when all others tend to this state. It should be emphasized that the mentioned law, just like other conservation laws in physics, is characteristic of closed systems only. The generalization of the action of these laws leads to some contradictions. Their essence is described in the following. It follows from these laws that the tendency to a uniform energy distribution in the universe is of a general character, such distribution being accompanied by energy degradation from its less stable forms to more stable ones.
After a long series of such transformations, the total energy is finally transformed into thermal energy incapable of further transformations. If we assume the universality of this law, then after the completion of dissipation processes any kind of motion ceases, since such motion needs at least a temperature difference. The notion of "stability" refers to a class of systems tending to equilibrium and, consequently, losing their ability to change after achieving it. Thus the second law of thermodynamics, as it is usually interpreted, comes into contradiction with empirical facts concerning the eternal and indestructible character of motion. Even the ancients understood that "it is impossible to enter one and the same river twice." What had to happen on Earth on the basis of the second law has not happened during four billion years. On the contrary, during this period, the process of transformation of the "impossible" into the "possible," which is incredible from the standpoint of this law, has

32

Entropy of Complex Processes and Systems

been proceeding. While discussing this contradiction, scientists try to explain it by the fact that the character of the second law is not universal. It is supposed that there exist natural phenomena whose tendency is opposite to this law. In doing so, they refer, among other things, to biology and life. At present, interest in the study of the thermodynamic aspects of biological systems is growing significantly. Possibly, their analysis will provide a clue to the solution of this global contradiction. The present state of science cannot give an answer concerning the phenomenological contradiction between global empirical facts referring to nature and the second law of thermodynamics. However, this does not mean that this contradiction is absolute. Certain aspects of the development of complicated systems, such as biological life, show that the establishment of certain concrete relationships fitting into the frames of the second law is possible here. At the end of this book we will revert to the discussion of the possibilities and ways of overcoming such contradictions.

(b) Thermodynamic aspects of biological systems

These systems are undoubtedly complicated. It is noteworthy that almost from the moment of thermodynamics' origination, attempts have been made to analyze biological phenomena from the standpoint of its laws. At first, the notion of energy was absent. All observable natural phenomena were explained by the action of "forces," whose number was approaching the number of observed phenomena. Side by side with "electric," "magnetic," and other forces, a "vital" force explaining the phenomena of biological life was also recognized as existing. Some generalization of these forces was needed, because even at that level of knowledge, an assumption about the unity of all forces in nature and about the existence of a law of force conservation was suggested. For this purpose, a certain generalizing fluid, phlogiston, was introduced into science. The fallibility of this approach was realized later, and the notion of energy appeared instead. Even at that level it became clear that the only source of energy ensuring the existence of all living organisms, including people and plants, is the energy of solar light. One of the first formulations of the second law of thermodynamics was given by Thomson. According to Thomson, the heat of heated bodies dissipates in outer space, and processes that would allow it to concentrate again and start functioning actively cannot exist. After some time, Boltzmann's works appeared, in which this law obtained a statistical interpretation. This law still exists today in the same form, without any corrections. In the continuation of his works, Boltzmann tried to carry out a thermodynamic analysis of the phenomenon of life. His ideas consisted of the following. At that time, Darwin's ideas of the struggle for existence covering the entire organic world were very popular. Boltzmann thought that this struggle was not a competition for substance and energy.
Chemical elements needed for the construction of organic matter are contained in abundance in the ground, air, and water. Energy in the form of heat is scattered in all bodies and is provided in enormous amounts by the sun. Hence, he came to the conclusion that the struggle for existence is a

struggle for entropy, which becomes accessible at the energy transition from the hot sun to the cold Earth. Then a specific thermodynamic function of the chlorophyll contained in green plants was discovered. It plays the role of an accumulator of light radiation scattered in space. In it, this energy becomes the chemical energy of the products of photosynthesis, which is a fundamental basis for all phenomena observed in the world of biology. It is noteworthy that some kinds of bacteria possess a property analogous to chlorophyll, but their contribution to the general development of biology is insignificant. In this respect, green plants are of fundamental importance. They represent a borderline phenomenon between the two great generalizations of the 19th century connected with the names of Thomson and Darwin: the phenomena of energy dissipation and the struggle for existence. Physical objects are heated in the sun, but this heat is not accumulated in them; it is dissipated in space. Green plants are also heated in the sun, but some of this energy is accumulated in them in chlorophyll grains. Then this energy is released in processes that ensure the life of biological objects. From the standpoint of thermodynamics, two laws govern processes of physical nature. It is considered that these laws do not control biological processes. For a long time, many reputable biologists and physicists have made attempts to expand thermodynamics and formulate its third law for this purpose. For over 100 years, debates and discussions on the topic have continued. Attempts have been made to introduce new notions into thermodynamics, such as "extropy," "negentropy," and negative entropy, whose meanings are approximately the same. In addition, attempts have been made to give a new interpretation to Carnot's principle. Life is an empirical fact, but it does not fit into the frames of this principle in the form it is formulated. This manifests itself as follows.
The origination and establishment of such sciences as geochemistry and biogeochemistry led to great progress in the development of the science of living systems. In the course of the development of these sciences, it has become clear that there exist a number of properties of the general life process which are not characteristic of a separate individual. The main one is the mortality of an individual and the infinity of life phenomena in the process of evolution. From the standpoint of thermodynamics, life processes consist of the accumulation of radiant and chemical energy. In doing so, biological objects slow down the transformation of this energy into heat. This special feature of life phenomena is revealed in the most vivid form in the processes of technogenesis, that is, in geochemical changes on the ground surface caused by the industrial activities of humans. Complicated organic compounds with a huge reserve of accumulated energy arise in the processes of technogenesis and biogenesis. It is clear that in this case the law of entropy growth is violated. Even independent human activity on the whole is contrary to the second law of thermodynamics in the sense that technical and technological activities lead to an entropy decrease. In addition, the entire system of the aggregate living substance on the planet is characterized by the growth over time of the free energy accumulated in it. This does not refer to a separate individual; it is valid for all biological objects over the course of time. Here a part of the radiant energy turns into potential energy in biological objects. As an example of such transformations, we can mention

black coal containing such energy accumulated over distant geological epochs. The amount of free energy is constantly increasing due to the functioning of biological objects. Free oxygen produced by green plants, black coal generated from the residues of organic compounds, food for animals and humans: none of this is accompanied by any degradation of the initial solar energy accumulated in green plants. This leads to the conclusion that, owing to the existence of life, a certain part of the entropy of the universe should decrease, and not increase. Today these facts are generally recognized, but they give rise to different interpretations. Some scientists see in them a new principle contradictory to the notion of entropy. Therefore it is asserted that life phenomena do not unambiguously obey Carnot's principle. However, this opinion is far from being shared by everyone. On the contrary, in recent years interest in the thermodynamic analysis of biological phenomena not only has not decreased, but has considerably broadened, leading to numerous publications and discussions still in progress. At one time, a principle of "stable nonequilibrium" was advanced as the main law in biology. This means that a biological system can persistently stay in conditions far from thermodynamic equilibrium, that is, in a highly improbable state, which is difficult to understand. After external perturbations, any physicochemical system passes into an equilibrium state. It then possesses minimal free energy and cannot spontaneously change itself. Biological systems also change their state in changing external conditions, but the direction of this change is directly opposite. These changes under the action of, for instance, sunlight lead to a free energy increase inside the system, which ensures the possibility of performing work on the environment, the realization of metabolism, and the accumulation of a part of the energy in the form of its potential component.
In the mid-20th century, Schrödinger's book on the conformity of life to natural laws from the standpoint of physics attracted broad attention. In this book, he introduced the notion of negative entropy. Schrödinger's main ideas are very interesting and deserve a more thorough examination. His main postulates include the following:
1. Everything that takes place in nature leads to an entropy growth in the part of the world where it is possible.
2. A living organism also produces positive entropy. However, its unlimited growth brings the organism nearer to the state of maximal entropy, which represents the death of this organism.
3. To prevent entropy growth, that is, to remain alive, a biological system must extract negative entropy from the environment. This is the principal food for an organism.
Thus it follows that the second law expresses a statistical tendency of nature toward disorder, energy leveling, and depreciation. This is expressed by the fact of entropy growth. In contrast, not only does entropy growth not occur in organisms; its greatest decrease takes place. Thus it turns out that the main law in physics is a tendency to disorder and entropy growth, and in biology, a tendency to the growth of orderliness and a decrease in entropy.

Chapter 1 • Brief history: properties and problems of entropy parameter


In principle, metabolism takes place everywhere, both in living and in nonliving nature, but it leads to two opposite results. In one case it results in an increase in the free energy of the system, while in the other it leads to its decrease. There is nothing extraordinary in that. Such a contrast was discovered in chemistry long ago. Chemical reactions are subdivided into exothermal and endothermal. Reactions of the first type are accompanied by energy release, while those of the second type involve the absorption of external energy. Exothermal reactions are realized at the expense of their own, previously accumulated energy, while endothermal reactions require energy consumption. It is interesting to illustrate the peculiarities of these opposite kinds of metabolism by a concrete example: the rusting of a piece of iron and the growth of a plant. In both cases, metabolism takes place. As the iron rusts, an exothermal process occurs, in which the free energy decreases. As the plant grows, in contrast, the free energy increases. We emphasize once more that only due to the external energy inflow does an increase in the mass of an organic substance, that is, plant growth, become possible. However, this contrast is not so simple and unambiguous. In living and nonliving nature, both types of chemical reactions take place. The direction of changes is determined by their prevalent type. If both types of reactions proceed in a system at the same rate, the total energy of the system remains unchanged. To realize biological processes, endothermal reactions must prevail over exothermal ones. The animal world is impossible without the consumption of energy accumulated by chlorophyll-containing plants. The totality of animals, including humankind, corresponds to the energy manifestations characteristic of green plants. The account of all biochemical energy produced by living organisms at the expense of food consumption leads to an unexpected result.
The total mass of biological organisms increases. Long ago, the quantity of living substance on the Earth and in the biosphere amounted to grams, while now it is on the order of 10^13 tons. At an average chemical energy content on the order of 4 kcal/g of living substance, a progressive energy increase of aggregate biological objects is being observed. The direction of metabolism between living and physical nature is determined by the fact that the amount of free energy connected with living substances is constantly increasing. All this is provided by radiant energy from the Sun in a redundant amount; it is estimated at about 1.5 kW/m² of the Earth's surface. The accumulated biological mass is an expression of the accumulated energy. Long ago, scientists learned to determine it by the combustion heat of organic substances. In this respect, the efficiency of agricultural production can be interpreted in a somewhat different way. This production can be considered profitable only when the total food energy consumed in performing all kinds of agricultural work by people and machines is lower than the energy accumulated by the crop. Ore production, metal casting, machine building, etc. must be added to the expenditure side of the energy balance. In fact, these expenses must be spread over the number of years of operation of the concrete mechanisms. A violation of this balance, due to a number of reasons including organizational ones, leads to malnutrition and even famine in an entire country. The experience of Soviet rule in Russia eloquently testifies to this. As stated earlier in this section, the principal law of physics is a tendency to disorder accompanied by a change in free energy and entropy growth in a system, while the principal


law of biology is the growth of orderliness with decreasing entropy. It is rather difficult to comprehend this contradiction, but some conclusions can be made even at present.

(c) Discussion of the problem and conclusions

Science is based on empirical facts selected and systematized by human intellect on the basis of the laws of human logic. Obviously, the logic of natural phenomena differs from human logic. Therefore scientific laws are often not completely adequate to natural phenomena. It is considered that the main characteristic of such laws is their complete relativity. However, this is not entirely true. Any generalization based on the analysis of empirical facts contains elements, maybe small ones, of absolute truth. The entire wealth of the content of a created theory or law is not revealed at once. It is extracted step by step as a result of long-term verification, which allows correcting it in order to increase the part of absolute truth. Nature is infinitely complicated. Therefore it should be clear that any description of it is only a model or a conception of a certain fragment of the reality that surrounds us. The second law of thermodynamics was formulated long ago for closed physical systems. For almost 200 years, it has been completely confirmed for such systems in various branches of knowledge. It is considered that a contradiction to this law was revealed in biological systems. We do not set ourselves the task of giving an exact final answer that would overcome this contradiction, but only examine some aspects of this problem that have different natures. We reiterate that a spontaneous evolution of closed physical systems is accompanied by a reduction of their free energy and by entropy growth. Biological systems are always open. The presence of chlorophyll in green plants gives them the opportunity to accumulate the energy coming from outside, which leads to an entropy decrease. In this respect, there are no contradictions here. In both systems, according to the second law, a free energy decrease leads to an entropy growth, and an energy growth to an entropy decrease.
In his time, Boltzmann formulated the struggle for the existence of biological species as a struggle for entropy. Probably, it would be more accurate to describe this as a "struggle against entropy." Schrödinger thought that biological objects feed on negative entropy. This is not entirely correct. Biological objects consume energy, which leads to an entropy decrease. Entropy is an abstract physical parameter having no dimensions, but providing an evaluation of the direction of a system's evolution. This parameter is useful enough for the analysis of physical, chemical, biological, and other processes. It has a precise mathematical definition. In everyday practice, entropy can be compared to the odor of food. The freshness of meat, fish, and some other products is evaluated by the odor intensity. Thus odor is a useful parameter. In recent years it has even acquired a quantitative estimation. In Israel, lung cancer and gastric cancer are now diagnosed by the odor of exhaled air, the odor intensity allowing a diagnosis of the disease stage. Stating that biological objects feed on negative entropy is the same as saying that a human feeds on negative or positive odor. It is totally absurd. One must be very careful when specifying negative values of scalar physical quantities. It is necessary to indicate their reference level.


Negative temperatures are possible with respect to 0°C, but it is impossible to imagine a negative temperature with respect to absolute zero. The notion of a negative absolute temperature appeared in the second half of the 20th century, after the discovery of spin systems. Using the reversal of the magnetic field sign or other methods, scientists managed to create, theoretically, an "inversion of populations" of the energy levels of systems possessing spins of elementary particles, most of them being located at the highest energy level. Therefore it seemed admissible to place spin systems with a negative absolute temperature above the infinitely high absolute temperature, T → ∞. Such a fitting to "quantum classics" leads to a contradiction with the principal laws of physics. This "inversion" logically leads to a statement about the possibility of a complete transformation of heat into work in such systems and the impossibility of a complete transformation of work into heat. In such a system, the thermal efficiency of Carnot's machine,

η = 1 − T2/T1,

becomes negative. Within the region T < 0, a body whose negative temperature is smaller by its absolute value should be considered hot. Here we have to admit that in such conditions even a perpetual motion machine of the second kind is possible, which is absolutely absurd. We can count negative pressure with respect to the atmosphere, but not with respect to absolute zero. The assertion about the presence of negative entropy is also absurd. Energy flow determines everything in the change of the system entropy, and entropy is an estimation parameter characterizing the direction of evolution. In this respect, the second law of thermodynamics is fully applicable to biological systems, although it does not have such a global meaning for them as for physical systems. For biological systems, other laws of development arise, and we will return to the discussion of some of them in the concluding part of this book. Here entropy finds a new confirmation which disproves, in principle, Prigogine's assertion about the existence of a flagrant contradiction between thermodynamics and biological evolution theory. As we can see, all this does not contradict the second law, which, in this way, acquires a universal character for all complicated varying systems. The analysis of the entropy parameter has shown its universal character. It has become clear that entropy can find a new, unexpected applied use. This parameter can be a useful instrument for the analysis of human activities and processes created by humans. Here is a concrete example. Mining operations, minerals beneficiation, metal melting, production of other minerals, fabrication of various articles: all of these are accompanied by energy expenditure. This must lead to a decrease in the initial entropy value characterizing the materials before treatment.
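The sign argument above can be made concrete with a small numeric sketch; the temperature values below are illustrative and not taken from the text.

```python
# Carnot efficiency discussed above: eta = 1 - T2/T1 (kelvin).

def carnot_efficiency(t_hot, t_cold):
    """Thermal efficiency of an ideal Carnot machine."""
    return 1.0 - t_cold / t_hot

# Ordinary positive temperatures: efficiency lies between 0 and 1.
print(carnot_efficiency(600.0, 300.0))    # 0.5

# Formally inserting a "negative absolute temperature" pair (the
# hotter body having the smaller absolute value, as in the text)
# makes the efficiency negative: the absurdity the text points out.
print(carnot_efficiency(-300.0, -600.0))  # -1.0
```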
The ability to find the entropy value characterizing the state of materials and substances before the processing and its change in the course of technological processes with the purpose of their optimization is one of the tasks of our research. Therefore we start the next chapter with the peculiarities of revealing the entropy parameter for some systems of applied importance.

2 Statistical entropy component

2.1 Notions of randomness, chaos, and stability

It has been established that a system's entropy and the probability of its state are additive. Therefore we first analyze the possibility of applying probability to the analysis of complicated systems. For a long time, the notions of randomness, uncertainty, and fate were considered to carry a certain mystical, negative sense for humans. These notions were the province of fortune-tellers, magicians, and shamans, and did not have a defined essential sense. However, it has turned out that a scientific investigation of randomness is possible, and it was started a relatively long time ago. The analysis of randomness arose from the desire to grasp the regularities of roulette behavior. Pascal, Fermat, Huygens, Bernoulli, and others devoted their works to this problem, starting with the theory of games. This analysis gave rise to a new trend, the calculation of probabilities, which was considered for a very long time a secondary part of mathematics. The scientific understanding of randomness started from the moment when the notion of probability was used. The physical concept of probability is based on intuition, mathematically connected with the numerical evaluation of undefined circumstances. However, as it turned out, it was not so simple to systematize and formalize this parameter. A probability evaluation replaces an indefinite "chance" with something more substantial. Probability theory had to be physically substantiated, since it is necessary to ground an adequate comparison of obtained results with physical reality. This is necessary for connecting probability with determinism, which is one of the main goals of the mathematical concept of uncertainty evaluation. When tossing a coin, heads or tails may occur with equal probabilities. However, if a coin is tossed twenty times, the probability of tails occurring every time is less than one chance per million.
Therefore it was necessary to impart a mathematically consistent structure to this parameter. Today, probability theory is one of the constituents of mathematical analysis, and in the narrow sense of the word it can be referred to measure theory. Its most modern part is connected with combinatorics. However, all this far from exhausts the content of probability theory, and it is hardly possible, in general, to define its exact place and boundaries. A strict mathematical basis for the calculation of probabilities was introduced into mathematics by Kolmogorov. In order to avoid confusion later, let us define important parameters characteristic of probabilistic processes:
1. A random value usually implies a numerical parameter whose value is determined by the outcome of tests.

Entropy of Complex Processes and Systems. DOI: https://doi.org/10.1016/B978-0-12-821662-0.00002-2 © 2020 Elsevier Inc. All rights reserved.


2. A random process is an abstract mathematical notion used for describing random phenomena that depend on a certain parameter obeying probabilistic laws.
3. The probabilistic law obeyed by the development of a random process is specified by the joint probabilistic distributions of the random values forming this process.

2.2 Probabilistic characteristics

A phenomenon that can either happen or not is considered a random event. We elucidate it for our consideration by way of the following example. Consider a vertical round pipe whose transversal part of height Δx is transparent (Fig. 2–1). Organize a two-phase flow in a boiling-bed regime in this pipe above a grid located in its lower part. Select a narrow size class of coal particles and create hovering conditions for it in this pipe. This does not mean that the particles will be in static equilibrium. On the contrary, as is known from practice, they move permanently inside the entire volume, and the bed inflates

FIGURE 2–1 Vertical pipe with two nonintersecting windows, Δx1 and Δx2.


up to a certain height. The maximal height of this bed above the grid is denoted by L. We launch one particle made of chalk into this system. Assume that this particle has exactly the same aerodynamic characteristics as the particles of coal. Therefore it can "travel" over the entire height of the boiling bed L without leaving it. If the observation through this window is realized perpendicularly to the pipe axis, the ingress of a white particle into the field of vision is a random event. Assume that observations or photographs were repeated a great number of times M, this particle being observed in the window Δx m times. The probability of a certain random event is the ratio of the number of observations m at which the event occurred to the total number M of performed observations, under the condition that M is sufficiently large. If we denote the event probability by P, then

P = lim (m/M) as M → ∞,

which is often written simply as

P = m/M    (2.1)

The necessity of a large M is clear from the consideration of exact determination of the probability value. If at a fivefold observation (photographing) one managed to observe the particle once, it would have been early to assert that P = 1/5. If four more observations were performed and the event was not repeated, the probability would have amounted to 1/9. To obtain sufficiently exact results, the tests should be performed until the ratios m/M differ from each other by small values. The deviation will be determined by the desirable precision of defining the probability of the event under study. From the experimental standpoint, the criterion of a correct interval between particle observations or photographings is that repeated sets of trials, with time intervals increased with respect to the initial ones, lead to the same limiting value m/M. It follows from the definition of probability that its values lie between zero and unity. Two events are considered incompatible if the realization of one of them excludes the possibility of the realization of the second (Fig. 2–2). Event 1, consisting of the fact that at the moment of time t a particle is found in Δx1, and event 2, consisting of the fact that at the same moment of time t it is found in Δx2, are incompatible. In the case of the intersecting windows Δx3 and Δx4, two events can be compatible, since at a certain moment of time t a particle can be located in the part of the pipe belonging simultaneously to Δx3 and Δx4. The probability of realization of one of two incompatible events equals the sum of the probabilities of realization of each of them. In fact, if at M observations the particle was observed m1 times in the first window and m2 times in the second, then

P1 = m1/M,  P2 = m2/M,


FIGURE 2–2 Vertical pipe with two intersecting windows, Δx3 and Δx4.

then the particle gets into the field of vision in at least one of the windows (m1 + m2) times, and the probability, respectively, is

P = (m1 + m2)/M = m1/M + m2/M = P1 + P2    (2.2)

A complete group of incompatible events is a whole set of them such that the realization of one of them is certain. The incompatible events considered in the previous example do not form a complete group, since a particle can occur outside the fields of vision Δx1 and Δx2, in a different part of the pipe. If these two events are supplemented by a third one, that is, occurrence inside the nontransparent part of the pipe, this group of three events forms a complete group, for which we can write

P1 + P2 + P3 = 1
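A Monte Carlo sketch of the addition rule (2.2) and of the complete-group condition, using the pipe picture; the window sizes below are illustrative assumptions, not values from the text.

```python
# Two disjoint transparent windows plus the nontransparent
# remainder of the pipe form a complete group of events.
import random

random.seed(2)
L = 1.0
w1 = (0.10, 0.25)          # window dx1
w2 = (0.60, 0.80)          # window dx2, disjoint from w1
M = 100_000

m1 = m2 = 0
for _ in range(M):
    x = random.uniform(0.0, L)     # particle position
    if w1[0] <= x < w1[1]:
        m1 += 1
    elif w2[0] <= x < w2[1]:
        m2 += 1

p1, p2 = m1 / M, m2 / M
p3 = 1.0 - p1 - p2          # nontransparent part of the pipe
print(p1 + p2)              # close to (dx1 + dx2)/L = 0.35, Eq. (2.2)
print(p1 + p2 + p3)         # close to 1: complete group
```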

If the events are equally possible and the quantity of their realizations is n, the probability of each event is

P = 1/n


In this case, the probability of observing a particle in the window Δx1 is

P(Δx1) = m × (1/n) = m/n

On the other hand, the probability of particle residence in the window Δx1 at total pipe height L is

P(Δx1) = Δx1/L    (2.3)

Besides incompatible events, one more notion can be defined, called independent random events. These are events such that the realization of one of them does not influence the probability of the realization of the second. Consider the following example. Let the first event consist in the fact that one white particle appears in the window Δx1 at the moment of time t, and the second event in the fact that a second white particle let into the flow at the same time is observed in the window Δx2. These events are independent, because the probability of observing the second particle in Δx2 is independent of whether the first particle is observed in Δx1 or not. Assume that in the case of M tests it has turned out that the first particle was observed m1 times in Δx1, and the second particle m2 times in Δx2, that is,

P1 = m1/M,  P2 = m2/M

We pick out of all m1 tests the cases where the second particle simultaneously got into Δx2. The total number of such cases is m1 × P2. If the found number of events is related to the total number of tests, the probability of the joint event amounts to

P1,2 = P1 × P2    (2.4)

The probability of joint realization of independent events equals the product of the probabilities of each of them. For mutually dependent events, a so-called conditional probability is discerned. We illustrate this by the example of the two intersecting windows Δx3 and Δx4. The probability P(3,4) of simultaneous particle ingress into Δx3 and Δx4 is, in essence, the probability of getting into the intersection of these two windows, whose height is Δx. Hence, the probability

P(3,4) = Δx/L

This formula can be represented in the form

P(3,4) = Δx/L = (Δx/Δx3) × (Δx3/L) = P3(4) × P3    (2.5)


or

P(3,4) = Δx/L = (Δx/Δx4) × (Δx4/L) = P4(3) × P4    (2.6)

2.3 Random values and distribution functions

A quantity that can acquire various numerical values with a certain probability is described as random. If the total number of black particles observed in the transparent window Δx in Fig. 2–1 is fixed, their number is a random value. If this quantity can take on values from the series 0, 1, 2, ..., i, ..., N, and each of them is realized with a different probability Pi, then such a random value has a discrete distribution: it takes on values differing by a finite quantity. In addition, continuous random values are also distinguished. Designate the pipe height as x. Imagine that particles fill the pipe in various concentrations. Their probability distribution along the height is determined by a certain non-negative function of x. We assume that x is a continuous variable and f(x) is a certain analytical function describing the material distribution along the pipe height. It is assumed that the probability distribution function is normalized. This implies that

∫ f(x)dx = 1    (2.7)

This means that it is necessary to divide the total material distribution along the whole pipe by its total weight; then its relative content f(x) is obtained. Multidimensional distribution functions of two and, possibly, more variables are determined in a similar way. They are normalized if

∬ f(x, y)dx dy = 1    (2.8)

Conditional distribution functions are determined as a distribution over the variables of one row at specified definite values of the variables of another row. In the case of two variables, the distribution of y conditional with respect to x is determined as

F(y | x) = f(x, y)/g(x)    (2.9)

where g(x) = ∫ f(x, y)dy and h(y) = ∫ f(x, y)dx; these functions are described as reduced. The variables x and y are statistically independent if the two-dimensional function can be represented as a product:

f(x, y) = g(x)h(y)    (2.10)

In this case, the conditional probability of any variable is independent of the other values. Variables that are not statistically independent are considered statistically correlated.


The mathematical expectation, or mean value, of a certain variable, of its power, or of any other function of this variable is determined as a weighted average over a continuous distribution function:

⟨x⟩ = ∫ x f(x)dx
⟨x^n⟩ = ∫ x^n f(x)dx    (2.11)
⟨p(x)⟩ = ∫ p(x) f(x)dx

It is noteworthy that the mathematical expectation of a random function, in general, does not coincide with the function of the mathematical expectation, that is,

⟨p(x)⟩ ≠ p(⟨x⟩)

Sometimes this difference is neglected, which leads to mistakes. A convenient measure of the distribution function dispersion is the mean squared deviation

⟨x²⟩ − ⟨x⟩² = ⟨(x − ⟨x⟩)²⟩ ≥ 0    (2.12)

Now we briefly describe characteristics of discrete distribution functions. We introduce a large number of variables and assume that they are statistically independent and their probability distribution functions are identical. Denote these variables by x1, x2, ..., xN, and their average values and squared deviations by ⟨x⟩ and ⟨x²⟩ − ⟨x⟩², respectively. The total variable is then

X = x1 + x2 + ... + xN,

and its multidimensional function

F(x1, x2, ..., xN) = f(x1)f(x2)f(x3)...f(xN)    (2.13)

For this function, we can write

⟨X⟩ = N⟨x⟩;  ⟨X²⟩ − ⟨X⟩² = N(⟨x²⟩ − ⟨x⟩²)    (2.14)

Hence, the value of the sum grows proportionally to N, but the spread of the distribution function grows proportionally to √N only. The relations (2.11) obtained above are also preserved for discrete variables, but in this case the integrals should be replaced with sums.

2.4 Probabilistic interpretation of granulometric characteristics of the solid phase of a polyfractional mixture of solid particles All information about probabilistic processes given above will be repeatedly used in this book. Below we make an attempt to demonstrate their usage on an example of applied


importance. Compositions of any mixtures can be interpreted using probabilistic characteristics both formally and inherently. A direct definition of such a fundamental notion as probability is extremely complicated. It is usually defined by roundabout routes, and, by way of example, a set of balls of various sizes or colors is often considered. If one retrieves balls one by one at random from a container, notes their size or color, and then puts them back, at a manifold repetition of this experiment the probability of the content of various balls in the mixture can be determined. If some outcome A is obtained in every test, its probability equals

P(A) = 1

If the mixture consists of balls of two types, and the outcome of the second type is B, then

P(A) = 1 − P(B)

If there are many balls and they are multicolored, then, while taking balls at random out of the container, the probability of the appearance of some specific color is determined, in a prolonged experiment, by the share of this color in the mixture of balls. Thus the enumeration of the probabilities of all colors of balls allows obtaining the function of their distribution by colors. In the case of various ball sizes, such an experiment gives a function of their distribution by sizes, or their granulometric characteristic. Thus the distribution function implies an enumeration of all possible outcomes of the events under study. If in the example under study there were N multicolored balls, then the number of extracted balls of the same color equals an integer from the series from 0 to N. The sum of all probabilities then covers all the possibilities and, hence, must be a certainty, that is,

Σ(N) Pi = 1

It is clear from this example that the granulometric composition can be interpreted through probability. As a concrete example, we examine the data in Table 2–1. In this table, in addition to the distribution of the initial composition, polyfractional mixture compositions after the realization of their separation process are also outlined. Residues on the mesh sieves, expressed in shares of unity, determine the probability of the residence of each concrete class in a mixture. It is an example of a discrete distribution. The average value of this distribution is determined as

⟨R⟩ = (R1x1 + R2x2 + R3x3 + ... + RNxN)/(x1 + x2 + x3 + ... + xN) = Σ(N) Rixi / L    (2.15)

where L is the difference between the maximal and minimal values of the mesh apertures, L = xmax − xmin.


Table 2–1 Principal parameters of crushed quartzite.

Notation: rs = residue on mesh sieve in the initial material; Rs = total (cumulative) residue; Ds = total passes; rf = residue on mesh sieve in the fine product; rc = residue on mesh sieve in the coarse product; Ff(x), Fc(x) = fractional extraction into the fine and coarse products; x = mean size of the narrow class. Residues and extractions in %, sizes in μm.

Mesh   | Aperture, μm | rs, % | Rs, % | Ds, % | rf, % | rc, % | Ff(x), % | Fc(x), % | x, μm
16     | 1000         |  0    |   0   | 100   |  0    |  0    |    -     |    -     |  -
20     |  850         |  4.5  |   4.5 |  95.5 |  4.5  |  0    |  100     |    0     | 925
30     |  600         |  9.2  |  13.7 |  86.3 |  8.9  |  0.3  |   96.74  |    3.26  | 725
45     |  355         | 13.6  |  27.3 |  72.7 | 11.5  |  2.1  |   84.56  |   15.44  | 478
60     |  250         | 15.2  |  42.5 |  57.5 |  8.8  |  6.4  |   57.9   |   42.1   | 300
80     |  180         | 17.3  |  59.8 |  40.2 |  8.2  |  9.1  |   47.4   |   52.6   | 215
120    |  125         | 18.3  |  78.1 |  21.9 |  5.6  | 12.7  |   30.6   |   69.4   | 152
200    |   75         | 13.3  |  91.4 |   8.6 |  4.3  |  9.0  |   32.3   |   67.7   | 100
270    |   53         |  4.0  |  95.4 |   4.6 |  1.1  |  2.9  |   27.5   |   72.5   |  64
400    |   38         |  3.1  |  98.5 |   1.5 |  0.4  |  2.7  |   12.9   |   87.1   |  45
Bottom |    0         |  1.5  | 100   |   0   |  0    |  1.5  |    0     |  100     |  20

FIGURE 2–3 Particles' size distribution: partial residues dR/dx plotted against particle size x; ΔRxi denotes the partial residue of the narrow class Δxi around xi.

Under certain conditions, x can be taken as a continuous quantity. If a mark is made at random on the line of particle sizes of length L, and it can be found with equal probability in any part of this segment, the probability of revealing it between x and x + dx amounts to

P = dx/L

Since the contents of particles of various sizes in a mixture are not the same, the probability of particles of each size is determined by a weighting function, which can be called a function of particle distribution by size (Fig. 2–3).


Taking this into account, we can write for the segment dx

dP(x) = (dR/dx)dx    (2.16)

The quantity dP(x) determines the probability of residence of particles of a certain size within the interval dx and includes both a weighting function and the parameters of a single-row distribution. Since x certainly lies between 0 and L, the following integral relation is valid:

∫(0..L) dP(x) = 1

If the equation of the curve in Fig. 2–3 is known, a distribution function can be introduced instead of the weighting function. According to the above, the average value of a random quantity R(x), if it is specified continuously, amounts to

⟨R(x)⟩ = ∫(0..L) (dR/dx)dx    (2.17)

As we can see, at a discrete assignment of the granulometric composition obtained by screening on a set of sieves, the general distribution can be specified by a series of the following type:

R1, R2, R3, ..., RN

We emphasize once more that, according to the definition, the probability implies the content of particles of a definite size,

P(xi < x < xi + dxi) = ΔRxi,    (2.18)

if this content is expressed in shares of unity. If the particles are classified as coarse or fine with respect to the boundary x0, the probability of the fine fraction content is

P(0 < x ≤ x0) = Σ(0..x0) ΔRx = Ds    (2.19)

The probability of the coarse fraction content is

P(x0 < x < xmax) = Σ(x0..xmax) ΔRx = Rs    (2.20)

If a bulk material has a density distribution in addition to the size distribution, the probabilistic interpretation of its composition requires a conditional expression (Fig. 2–4). In this case, the probability of the content of particles with the density ρi is

P(dρ) = Δgρi

FIGURE 2–4 Particles' density distribution (relative content vs. particle density ρ, kg/m³).

But here it is necessary to determine the intersection of probabilities, proceeding from the fact that each narrow size class contains particles of different densities, that is,

P(Δgρi / ΔRxi) = Δgρi / ΔRxi = ΔRρi    (2.21)

The sum of the probabilities of the components of any mixture is unity, and therefore the determination of the probability of some components does not clarify the contents of the other components, except in the case of a binary mixture. Therefore an uncertainty arises here. It is perfectly clear that the separation effect can be defined only by comparison of the initial and final product compositions. It is clear that this problem is not unambiguously soluble using only probability. Here a different approach is necessary, which must be based on a single-valued characteristic of any arbitrarily complex mixture. This problem is basically of a phenomenological character, because it must correspond not only to the estimation of particle mixtures, but also to all possible mixtures, for example, mixtures of gases, liquids, complex mixtures of liquids and gases, discrimination of isotopes, etc.

2.5 Determining average values of random quantities

A statistical approach determines average parameters of the quantities under study using probabilities instead of certainties. A more precise definition of the probability of any experimental result can be obtained by repeating the experiment on a large number of identical systems. Although it is impossible to predict a reliable result of a specific experiment, the statistical approach allows determining the probability of the appearance of each of the possible results of this experiment.


Naturally, it is impossible to predict the behavior of each particle in a system consisting of a large number of particles N. When using a statistical method, instead of one system, an ensemble of φ analogous systems is examined. Consider in detail the methods of determining averages, because they define the essence of the statistical approach. Assume that a variable u characterizes a certain parameter of a system and takes α possible discrete values u1, u2, u3, ..., uα, with probabilities P1, P2, P3, ..., Pα corresponding to these values. This means that in an ensemble of φ analogous systems, the variable u takes the value uz in the following number of systems:

$$\varphi_z = \varphi \times P_z$$

The average value of the quantity under study, or the average over the ensemble, which we denote by <u>, equals

$$<u> = \frac{\sum_{z=1}^{\alpha} \varphi_z u_z}{\varphi} \qquad (2.22)$$

but φz/φ = Pz, and hence

$$<u> = \sum_{z=1}^{\alpha} P_z u_z$$

Similarly, if f(u) is a certain function of u, its mean value <f(u)> is determined by the following expression:

$$<f(u)> = \sum_{z=1}^{\alpha} P_z f(u_z) \qquad (2.23)$$

Some simple but useful properties for determining mean values follow from this dependence.

1. If f(u) and q(u) are two functions of u, then

$$<f + q> = \sum_{z=1}^{\alpha} P_z\left[f(u_z) + q(u_z)\right] = \sum_{z=1}^{\alpha} P_z f(u_z) + \sum_{z=1}^{\alpha} P_z q(u_z) = <f> + <q> \qquad (2.24)$$

This means that the mean value of a sum of functions equals the sum of the mean values of each function.

2. If c is a certain constant, then

$$<cf> = \sum_{z=1}^{\alpha} P_z\left[c f(u_z)\right] = c\sum_{z=1}^{\alpha} P_z f(u_z) = c<f>$$


3. If f(uz) = const, the obtained dependence shows that the mean value of a constant equals the constant itself.

4. Assume that there are two discretely variable quantities, u and v, which take the values

$$u_1, u_2, \ldots, u_\alpha \qquad v_1, v_2, \ldots, v_\beta$$

We denote by Pz the probability that the variable u takes the value uz, and by Ps the probability that the variable v takes the value vs. If these variables are independent, the joint probability of these events equals

$$P_{zs} = P_z \times P_s$$

Assume that there are two functions of these variables, f(u) and q(v). Then it follows from (2.24) that

$$<f(u)q(v)> = \sum_{z=1}^{\alpha}\sum_{s=1}^{\beta} P_{zs}\, f(u_z)\, q(v_s) = \left[\sum_{z=1}^{\alpha} P_z f(u_z)\right]\left[\sum_{s=1}^{\beta} P_s q(v_s)\right] = <f(u)> \times <q(v)>$$

5. Sometimes it is necessary to measure the deviation of the variable from its mean value, that is,

$$\Delta u = u - <u>$$

The mean value of this quantity is zero, because

$$<\Delta u> = <u - <u>> = <u> - <u> = 0$$

However, the mean square of this quantity does not equal zero:

$$<\Delta u>^2 = \sum_{z=1}^{\alpha} P_z (u_z - <u>)^2$$

This quantity is always positive, that is, <Δu>² ≥ 0, and is called the dispersion. A linear measure of the scatter of parameter values is the expression

$$\Delta u = \sqrt{<\Delta u>^2}$$

This quantity is called the standard deviation.
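The averaging rules above are easy to verify numerically. A minimal sketch (with illustrative values, not taken from the text) computes <u>, the dispersion, and the standard deviation for a small discrete distribution, and checks the linearity property (2.24):

```python
# Ensemble averages for a discrete random variable u with probabilities P_z.
# All numbers are illustrative.
u = [1.0, 2.0, 3.0, 4.0]   # possible values u_z
P = [0.1, 0.2, 0.3, 0.4]   # probabilities P_z (sum to unity)

mean_u = sum(p * x for p, x in zip(P, u))                      # <u> = sum P_z u_z
dispersion = sum(p * (x - mean_u) ** 2 for p, x in zip(P, u))  # <Delta u>^2
std_dev = dispersion ** 0.5                                    # linear measure of scatter

# Linearity, Eq. (2.24): <f + q> = <f> + <q>, here with f(u) = u^2 and q(u) = 3u.
lhs = sum(p * (x ** 2 + 3 * x) for p, x in zip(P, u))
rhs = sum(p * x ** 2 for p, x in zip(P, u)) + 3 * mean_u
assert abs(lhs - rhs) < 1e-12

print(mean_u, dispersion, std_dev)
```

For this particular distribution the mean is 3, and the dispersion and standard deviation both equal 1.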


6. The totality of all probabilities Pz for various values of uz provides complete statistical information on the distribution of the values of the parameter u in the ensemble. In the case of a process with two realizations, N particles can be distributed upwards (+μ) and downwards (−μ). Their potential difference is

$$I = \sum_i \mu_i \qquad (2.25)$$

To obtain the mean value of I, it is sufficient to average both parts of equality (2.25):

$$<I> = \left<\sum_{i=1}^{N} \mu_i\right> = \sum_{i=1}^{N} <\mu_i> = N<\mu_i>$$

This result is obvious: the mean value of the potential extraction I for a system of N particles is N times greater than the mean value of the same parameter for one particle. We determine the deviation for this case:

$$I - <I> = \sum_{i=1}^{N}(\mu_i - <\mu_i>)$$

This formula can be written more simply as

$$\Delta I = \sum_{i=1}^{N}\Delta\mu_i$$

Hence,

$$<\Delta I>^2 = \sum_{i=1}^{N} <\Delta\mu_i>^2 = N\,\Delta\mu_i^2$$

Therefore,

$$\Delta I = \sqrt{N}\,\Delta\mu_i, \qquad \frac{\Delta I}{<I>} = \frac{1}{\sqrt{N}}\cdot\frac{\Delta\mu_i}{<\mu_i>}$$

If we denote the probability of upward orientation in a certain spatial region by a, and of downward orientation by b (clearly, a + b = 1), then

$$<\mu> = a\mu_i + b(-\mu_i) = \mu_i(a - b) = \mu_i(2a - 1)$$

Hence it follows that

$$<I> = N(a - b)\mu_i, \qquad <\Delta I>^2 = 4Nab\,\mu_i^2,$$

and the standard deviation is

$$\Delta I = 2\sqrt{Nab}\,\mu_i$$
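The result for the two-orientation system can be checked by exact enumeration over the binomial distribution of "up" counts; the values of N, a, and μ below are illustrative:

```python
from math import comb

# Exact check of <I> = N(a - b)*mu and <Delta I>^2 = 4*N*a*b*mu^2 for N
# two-state particles (+mu with probability a, -mu with probability b).
N, a, mu = 20, 0.3, 1.5   # illustrative values
b = 1.0 - a

# Enumerate the number m of "up" particles; then I = (2m - N)*mu.
probs = [comb(N, m) * a**m * b**(N - m) for m in range(N + 1)]
I_vals = [(2 * m - N) * mu for m in range(N + 1)]

mean_I = sum(p * I for p, I in zip(probs, I_vals))
var_I = sum(p * (I - mean_I) ** 2 for p, I in zip(probs, I_vals))

assert abs(mean_I - N * (a - b) * mu) < 1e-9   # matches <I> = N(a-b)mu
assert abs(var_I - 4 * N * a * b * mu**2) < 1e-9  # matches <Delta I>^2 = 4Nab mu^2
print(mean_I, var_I)
```

The enumeration reproduces both formulas exactly, up to floating-point rounding.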


2.6 Importance of unambiguous evaluation of complex compositions of various systems

It often becomes necessary to analyze the efficiency of the use of multicomponent raw materials in the production of various materials. As an example of such analysis, we can mention the dressing of multimetallic ores, or suggest another concrete example. As is well known, Israel is a country with a total absence of mineral products. Almost all Israeli processing industries use the Dead Sea water as a raw material. Several separate plants producing potassium salt, metallic magnesium, magnesia refractory materials, cooking salt, bromine, and combined fertilizers use this raw material. How efficiently is this raw material used? Where can latent reserves be found in each concrete case? It is very important to answer these questions, but for this purpose it is necessary to develop a methodology for the unambiguous evaluation of such a complicated system as the totality of all kinds of production based on the Dead Sea water. This will be done at the end of this section. In addition, we will consider the problem of a similar evaluation of the building of a complicated unit composed of many complex parts. However, to formulate the main ideas of this approach, we examine a simpler example of a complicated system: the process of separating a poly-fractional mixture into target products.

In the separation of bulk materials, as in other complicated processes, one deals, as a rule, with multicomponent compositions. Here the initial mixtures can be separated into two or more products, and in each of these products various components of the initial mixture can be present in arbitrary ratios. A complicated problem thus arises: the unequivocal evaluation of the composition of a mixture of any complexity. Without such an evaluation, it is difficult to perform processes such as separation, because the multiplicity of parameters characterizing the final compositions of the products does not allow optimization of the process. For a binary composition of the initial material, ideal separation of one component automatically provides ideal separation of the other. The physical meaning put into the notion of the quality criterion, or efficiency, in the case of binary separation remains necessary, but becomes insufficient for the analysis of the separation of a multicomponent mixture. At a large number of final products, even ideal extraction of one of the components from a poly-fractional mixture does not testify to 100% efficiency of the entire process, since the other components can be separated nonideally. Obviously, it will be possible to estimate the quality of separation of an n-component mixture into m products if we succeed in realizing the following ideas. It is easy to express a binary mixture composition unambiguously, because the content of one component defines, per se, the content of the other, since their sum equals unity (100%). This means that a binary mixture has one degree of freedom. Obviously, an n-component mixture has (n − 1) degrees of freedom, and determining its various components in the ordinary way does not result in unambiguity.


Therefore it is necessary to find a method allowing unambiguous characterization, by a single number, of the general composition of a multicomponent mixture. This method must take into account the share of each component, both in the initial mixture and in the separation products. Unfortunately, successful mathematical approximations of the types f(x) and f(ρ) for a bulk material have not yet been found. In performing calculations, one has to use tables of experimental analyses of powder composition by particle size and density. However, all the relations defined herein are extremely useful and can be applied in the analysis of multicomponent compositions, as well as in further conclusions.

In nature, multicomponent mixtures are encountered most often. Their processing is sometimes connected with the necessity of isolating one product from a mixture and, more rarely, with the simultaneous isolation of two or more components. In this case, the aim of the process is to isolate each of the target components separately and, as far as possible, maximally purified from inclusions of other components. By way of example, we can mention multimetallic ores, coals containing particles of different densities, and other minerals whose target component is mixed with a significant amount of waste rock. For a binary mixture, we can write

$$D_s + R_s = 1$$

where Ds is the fraction of particles in the mixture smaller than x (or lighter than ρ), and Rs is the fraction of particles bigger than x (or heavier than ρ). The sum of the probabilities of the components of any mixture is unity, and therefore determining the probabilities of only some of the components does not clarify the contents of the others (except in the case of binary mixtures). A different approach is needed, based on an unambiguous characteristic of any arbitrarily complicated mixture. We now examine this issue in more detail.

2.7 Uncertainty of mixture compositions

As shown previously, the composition of any mixture can be interpreted using probability. However, probability cannot be used for the evaluation of a multicomponent mixture because of the absence of unambiguity: usually several numbers, or a whole table, are obtained. Therefore we turn to another characteristic of a random quantity, the number of outcomes, or a measure of uncertainty. The meaning of this parameter is as follows. If a certain random quantity xi has k equiprobable outcomes (for a coin, k = 2; for dice, k = 6), then the number of outcomes and the probability are connected by the relation

$$P(x_i) = \frac{1}{k}$$


This means that the greater the number of outcomes of a random event, the smaller its probability. It is clear, even at the intuitive level, that any probabilistic event involves some uncertainty, and that this uncertainty must be a certain function of the number of outcomes, f(k). We now try to suggest a formal definition of the uncertainty involved in a certain random event. This will allow us to determine the parameter of a state with a countable number of outcomes. A function of events depending only on their probabilities and satisfying the conditions given below can be called an uncertainty:

1. An event occurring with a probability of unity has zero uncertainty, f(1) = 0.
2. If one event has a lower probability than another, the uncertainty of the first event is higher than that of the second.
3. The uncertainty of the simultaneous occurrence of two independent events equals the sum of their uncertainties.

Let us try to express these conditions using known functions. Since this function must depend only on the probability of events, it must be defined on the segment [0, 1] and correspond to the formulated conditions. It must be monotonically decreasing on this segment and satisfy the functional equation

$$k(xy) = k(x) + k(y) \qquad (2.26)$$

All three formulated conditions reduce to a logarithmic dependence, that is,

$$f(k) = c\log k = H, \qquad 0 < P \le 1 \qquad (2.27)$$

where c is a proportionality factor; H is the uncertainty of a random event; and log k is a quantity determined to within a constant. The dependence (2.27) possesses all the properties of uncertainty; moreover, it is the only function satisfying all three properties formulated above. In order to pass to computations, it is necessary to define some parameters. The dependence (2.27) corresponds, as it stands, to the entropy formula introduced by Boltzmann for a thermodynamic system. Therefore the parameter f(k) evaluating the heterogeneity of a composition can be called the mixture composition entropy. By its physical meaning it is identical to entropy; that is, for the case under study, f(k) = H. The entropy of a random mass event is the mathematical expectation of the set of uncertainties of this event, that is,

$$H = \sum_A P(A)\,k(A) = -c\sum_A P(A)\log P(A) \qquad (2.28)$$

Here we have introduced the notion of mixture entropy as the average amount of uncertainty contained in a random mass process. Let us check the functioning of this newly formulated parameter.
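That the logarithm is the right shape for an uncertainty measure can be tested directly. The sketch below checks the three conditions for H(P) = −log P (natural logarithm; changing the base only changes the constant c):

```python
from math import log, isclose

# Uncertainty of a single event with probability P, up to the constant c.
H = lambda p: -log(p)

# 1. A certain event (P = 1) carries zero uncertainty.
assert H(1.0) == 0.0

# 2. Monotonicity: a less probable event is more uncertain.
assert H(0.1) > H(0.5) > H(0.9)

# 3. Additivity: for independent events, H(P1 * P2) = H(P1) + H(P2).
assert isclose(H(0.2 * 0.5), H(0.2) + H(0.5))

print("all three uncertainty conditions hold")
```

Any other base (2, 10, the number of components) passes the same checks, since it only rescales H.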


Taking into account our preliminary remarks, we first examine a binary mixture. Let it contain N1 fine particles and N2 coarse particles. The total number of particles in the mixture is

$$G = N_1 + N_2$$

By definition, the probability of a fine particle is

$$P_1 = \frac{N_1}{G},$$

and the probability of a coarse one is

$$P_2 = \frac{N_2}{G}$$

According to statistical physics, the number of ways (outcomes) of randomly extracting coarse and fine particles from the mixture is

$$k = \frac{G!}{N_1!\,N_2!}$$

where k is the number of possible states of the system under study. According to (2.27), the uncertainty can be expressed by the relation

$$H(k) = c\log k = c(\log G! - \log N_1! - \log N_2!) \qquad (2.29)$$

If the values N1, N2, G are sufficiently large, their logarithms can be computed using the approximate Stirling formula

$$\log A! \approx A(\log A - 1)$$

Taking this into account, (2.29) is transformed into the expression

$$H(k) = c\left[G(\log G - 1) - N_1(\log N_1 - 1) - N_2(\log N_2 - 1)\right]$$

Factoring out G and removing the brackets,

$$H(k) = -cG\left(\frac{N_1}{G}\log\frac{N_1}{G} + \frac{N_2}{G}\log\frac{N_2}{G}\right)$$

According to the definition of probability, this is written as

$$H(k) = -cG(P_1\log P_1 + P_2\log P_2) \qquad (2.30)$$

The quantity c in (2.30) is arbitrary, and we assume c = 1.


Then the uncertainty of a concrete system per element of this system is

$$H(x) = \frac{H(k)}{G} = -(P_1\log P_1 + P_2\log P_2) \qquad (2.31)$$

The relation (2.31) expresses the uncertainty, or entropy, determined per particle in a system of G particles (Fig. 2-5).

Now determine the expression of uncertainty for a poly-fractional mixture. Let it consist of n fractions, and denote the contents of particles in the fractions by N1, N2, N3, ..., Ni, ..., Nn. The total number of particles is

$$G = \sum_n N_i$$

The probability of a particle of each class is then

$$P_i = \frac{N_i}{G}$$

By definition,

$$\sum_n P_i = \sum_n \frac{N_i}{G} = 1$$

FIGURE 2–5 Dependence of binary mixture entropy on the components ratio (entropy H versus the content of one of the fractions, in parts of unity).
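The curve of Fig. 2-5 is easily reproduced from Eq. (2.31). The sketch below uses the base-2 logarithm, the base justified later in this section, so that the maximum equals unity:

```python
from math import log2

# Binary mixture entropy per particle, Eq. (2.31), with log base 2 so that
# the maximum equals unity for two components.
def binary_entropy(p1: float) -> float:
    p2 = 1.0 - p1
    if p1 in (0.0, 1.0):   # homogeneous mixture: zero uncertainty
        return 0.0
    return -(p1 * log2(p1) + p2 * log2(p2))

# Character of Fig. 2-5: zero at the edges, maximum 1 at P1 = 0.5.
for p1 in (0.0, 0.1, 0.25, 0.5, 0.75, 1.0):
    print(f"P1 = {p1:4.2f}  H = {binary_entropy(p1):.4f}")
```

The printed values rise from 0 at the pure compositions to exactly 1 at equal contents, matching the figure.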


The total number of outcomes for this mixture of particles is

$$k = \frac{G!}{\prod_{i=1}^{n} N_i!}$$

Hence,

$$H(k) = c\log k = c\left[G(\log G - 1) - \sum_{i=1}^{n} N_i(\log N_i - 1)\right]$$

In compliance with the previous derivation, we obtain

$$H(k) = -cG\sum_{i=1}^{n} P_i\log P_i$$

For c = 1, the uncertainty of the composition of a concrete multicomponent mixture per particle is

$$H(x) = -\sum_{i=1}^{n} P_i\log P_i \qquad (2.32)$$

Consider a more sophisticated variant. Assume that in the previous case each fraction of particles has some internal distribution by density. Then the content of each class by size and density can be expressed by the matrix

$$\begin{pmatrix}
M_{11} & M_{12} & M_{13} & \cdots & M_{1i} & \cdots & M_{1m}\\
M_{21} & M_{22} & M_{23} & \cdots & M_{2i} & \cdots & M_{2m}\\
\vdots & & & & & & \vdots\\
M_{j1} & M_{j2} & M_{j3} & \cdots & M_{ji} & \cdots & M_{jm}\\
\vdots & & & & & & \vdots\\
M_{n1} & M_{n2} & M_{n3} & \cdots & M_{ni} & \cdots & M_{nm}
\end{pmatrix}$$

Clearly, for each line

$$\sum_{i=1}^{m} M_{ji} = N_j,$$

and for the whole matrix

$$\frac{\sum_{j=1}^{n}\sum_{i=1}^{m} M_{ji}}{G} = 1,$$

which is equivalent to the expression

$$\sum_{j=1}^{n}\sum_{i=1}^{m} P_{ji} = 1$$


The total number of outcomes for this mixture is

$$k = \frac{G!}{\prod_{i=1}^{n}\prod_{j=1}^{m} M_{ji}!}$$

Hence,

$$H(k) = c\log k = c\left[G(\log G - 1) - \sum_{i=1}^{n}\sum_{j=1}^{m} M_{ji}(\log M_{ji} - 1)\right]$$

and as a result, we obtain for this case

$$H(x) = -\sum_{i=1}^{n}\sum_{j=1}^{m} P_{ji}\log P_{ji} \qquad (2.33)$$
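Eq. (2.33) compresses a whole size-density table into one number. A minimal sketch with illustrative particle counts:

```python
from math import log

# Entropy of a size-density matrix composition, Eq. (2.33): the counts M_ji
# are normalized by the total G to give probabilities P_ji. All counts are
# illustrative, not taken from the text.
M = [
    [30, 10,  5],   # size fraction 1, split over three density classes
    [20, 15, 10],   # size fraction 2
    [ 5,  3,  2],   # size fraction 3
]
G = sum(sum(row) for row in M)

H = -sum((m / G) * log(m / G) for row in M for m in row if m > 0)
print(H)   # one number characterizing the whole two-way composition
```

The same expression reduces to (2.32) when the matrix has a single density column, and to (2.31) for two classes.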

The uncertainty of composition unambiguously characterizes its heterogeneity. If a mixture consists of identical particles, its uncertainty is zero, and there is no heterogeneity. Thus mixtures of different compositions, even very complicated ones, can be successfully evaluated by one number. By this methodology, different compositions acquire different, but single-valued, estimations. This can serve as a key for estimating changes in mixture compositions during changes in granulometric composition, for example, in grinding or separation.

The formulas of composition heterogeneity involve a logarithm, which can, in principle, have any base. The values of the logarithm of a quantity to various bases differ by a constant factor, since

$$\log_y x = \frac{\lg x}{\lg y}$$

Therefore, the base that makes computations comparable should be taken as the logarithm base for mixtures. First, examine a binary mixture. Its single-valued characteristic of heterogeneity is expressed by formula (2.31):

$$H_s(x) = -\sum_{1}^{2} P_i\log P_i$$

Referring to Fig. 2-5, this expression can be rewritten as

$$H_s(x) = -(R_s\log R_s + D_s\log D_s)$$

The maximal possible heterogeneity of composition is conveniently normalized to unity, that is, Hs(x)max = 1. It is known that Rs + Ds = 1. Therefore we can write


$$H_s(x) = -R_s\log R_s - (1 - R_s)\log(1 - R_s) \qquad (2.34)$$

Taking the derivative of the obtained expression and equating it to zero,

$$\frac{dH_s(x)}{dR_s} = -1 - \log R_s + 1 + \log(1 - R_s) = 0$$

Hence, log Rs = log(1 − Rs), i.e., Rs = 0.5.

The heterogeneity of a binary mixture is maximal at equal contents of both components. Substituting the obtained value of Rs into (2.34),

$$H_s(x)_{\max} = -\frac{1}{2}\log\frac{1}{2} - \frac{1}{2}\log\frac{1}{2} = -\log\frac{1}{2} = \log 2$$

To satisfy the formulated requirement, the logarithm base must therefore be taken as 2. Then

$$H_s(x)_{\max} = \log_2 2 = 1$$

The dependence of Hs(x) on the relation between Ds and Rs is shown in Fig. 2-5. For Rs = 0, Ds = 1; and for Ds = 0, Rs = 1. At these extreme points the mixture composition is homogeneous, and the uncertainty of such compositions is therefore zero.

Now examine a three-component mixture, for which the following relation is valid:

$$P_1 + P_2 + P_3 = 1$$

In compliance with (2.32), for this case

$$H_s(x) = -(P_1\log P_1 + P_2\log P_2 + P_3\log P_3) \qquad (2.35)$$

It can readily be shown that in this case, too, the uncertainty of composition is maximal when all three components are equiprobable, that is,

$$P_1 = P_2 = P_3 = \frac{1}{3}$$

The value of the maximal uncertainty for this case is written, according to (2.35), as

$$H_s(x)_{\max} = -\left(\frac{1}{3}\log\frac{1}{3} + \frac{1}{3}\log\frac{1}{3} + \frac{1}{3}\log\frac{1}{3}\right) = -\log\frac{1}{3} = \log 3$$


In order to reduce the estimation of the composition heterogeneity of a three-component mixture to unity, it is necessary to take the number 3 as the logarithm base. Then

$$H_s(x)_{\max} = \log_3 3 = 1$$

For any multicomponent mixture, the uncertainty likewise acquires its maximal possible value at equal contents of all components in the mixture. For example, for the matrix composition shown above, with all mn components equal, it equals

$$H_s(x)_{\max} = -\sum_{i=1}^{n}\sum_{j=1}^{m}\frac{1}{mn}\log_{mn}\frac{1}{mn} = -\log_{mn}\frac{1}{mn} = \log_{mn} mn = 1$$

Thus the entropy maximum for a mixture of any composition can be reduced to unity. This is important for the analysis of the estimation of the separation effect and for the comparison of mass distribution results obtained in different apparatuses. It is convenient to illustrate this with the example of separation processes.
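The normalization argument can be stated compactly in code: if the number of components is taken as the logarithm base, an equiprobable mixture scores exactly 1, and any other composition scores less. A sketch with illustrative compositions:

```python
from math import log

# Composition entropy with the number of components as the logarithm base,
# so that the maximum (equiprobable mixture) equals unity.
def entropy_normalized(P):
    n = len(P)
    return -sum(p * log(p, n) for p in P if p > 0)

print(entropy_normalized([0.5, 0.5]))        # binary, equal parts: 1 (up to rounding)
print(entropy_normalized([1/3, 1/3, 1/3]))   # ternary, equal parts: 1 (up to rounding)
print(entropy_normalized([0.7, 0.2, 0.1]))   # skewed ternary: below 1
```

This makes entropies of mixtures with different numbers of components directly comparable on a common [0, 1] scale.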

2.8 Separation efficiency

Numerous attempts to find universal formulas for estimating changes in the granulometric composition of powders in separation processes have been made, but they have not been crowned with success. For this purpose, we apply the ideas formulated in the previous section. The uncertainty of an n-component mixture composition is expressed by the dependence

$$H = -\sum_{i=1}^{n} P_i\ln P_i$$

This function objectively reflects the extent of heterogeneity (uncertainty) of a system, being its static entropy. Recall that the criterion of separation quality must satisfy, among other things, the following two boundary conditions:

1. In the case of ideal separation, the criterion must take on its maximal value;
2. In the case of separation without changes in the fractional composition, the criterion must equal zero.

Let Hs be the entropy of the initial composition of a material, and H1, H2 the entropies of each component after separation, respectively. Then it is logical to understand the process efficiency as the difference between the uncertainties of the initial composition and the final products, that is, as a characteristic of the ordering of compositions:

$$E = H_s - \sum_{i=1}^{2}\mu_i H_i \qquad (2.36)$$


where μi is the relative amount of each component after separation, with ∑μi = 1. Verify the fulfillment of the boundary conditions.

1. In the case of ideal separation, consider any component number i. Then

$$H_i = -\frac{N_i}{N_i}\ln\frac{N_i}{N_i} = -1\cdot\ln 1 = 0$$

Hence, E = Hs. Clearly, this is the maximal efficiency that can be obtained for a concrete composition of the initial material.

2. For separation in an absolutely random way, without changing the fractional composition of the material, we obtain

$$H_i = -\sum_k\frac{\mu_i N_k}{\mu_i G}\ln\frac{\mu_i N_k}{\mu_i G} = -\sum_k\frac{N_k}{G}\ln\frac{N_k}{G} = H_s$$

Hence,

$$E = H_s - \sum_{i=1}^{2}\mu_i H_s = 0$$

This means that the function E corresponds to the meaning of the criterion of separation quality estimation. But a question can arise here: how can we compare the separation efficiencies of materials of different compositions? It turns out that the maximal possible efficiency of separating each material is its initial entropy. We therefore attempt to normalize the dependence (2.36) to a unified value. Define a new, normalized function

$$E = \frac{E}{H_s} \qquad (2.37)$$

and check whether it is suitable for estimating the separation efficiency. E is defined when Hs ≠ 0; Hs = 0 when the initial material is either homogeneous or simply absent, and in both cases separation is senseless. For boundary condition 1 (ideal separation), E = 1. For condition 2 (no change of composition at separation), E = 0. Then

$$E = 1 - \frac{\sum_{i=1}^{2}\mu_i H_i}{H_s}, \qquad H_s \neq 0 \qquad (2.38)$$

is suitable as a criterion of separation efficiency estimation. It should be noted that at a constant initial composition, the dependence (2.36) can be used equally well. It is of interest to develop these problematic issues up to their practical application.
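Criterion (2.38) is straightforward to compute. The sketch below (illustrative compositions; natural logarithms, whose base cancels in the ratio) checks the two boundary conditions and evaluates an imperfect split:

```python
from math import log

def entropy(P):
    """Static entropy H = -sum P_i ln P_i of a composition."""
    return -sum(p * log(p) for p in P if p > 0)

def efficiency(initial, products, outputs):
    """Normalized separation efficiency, Eq. (2.38):
    E = 1 - sum(mu_i * H_i) / H_s, defined for H_s != 0."""
    Hs = entropy(initial)
    return 1.0 - sum(mu * entropy(P) for mu, P in zip(outputs, products)) / Hs

# Illustrative binary feed: 60% fine, 40% coarse.
feed = [0.6, 0.4]

# Ideal separation: each product is pure, so E = 1.
print(efficiency(feed, [[1.0], [1.0]], [0.6, 0.4]))

# Random split without composition change: each product repeats the feed, so E = 0.
print(efficiency(feed, [feed, feed], [0.5, 0.5]))

# An imperfect real split falls between the two bounds.
print(efficiency(feed, [[0.9, 0.1], [0.2, 0.8]], [0.5, 0.5]))
```

The two limiting cases reproduce the boundary conditions exactly, and any partial regulation of composition lands strictly between 0 and 1.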

2.9 Separation optimality condition according to entropic criterion for binary mixtures

We define the optimality using the new criterion and compare it with all the conditions examined earlier.

FIGURE 2–6 Character of poly-fractional mixture distribution into two products (partial residues dR/dx versus particle size: the feed curve Q(x) splits into the fine-product curve q(x) and the coarse-product curve n(x), which intersect at the boundary size x0; the areas Df, Rf, Dc, Rc mark the fine and coarse contents of the two products between 0 and xmax).

We represent the initial composition of the solid phase in partial residues by the dependence Q(x) = f(x) shown in Fig. 2-6. After such material is fed into the separator and distributed between the two outputs, the compositions of the products are depicted in the same figure by the curves q(x) and n(x). Since ideal separation processes are, in principle, impossible, these two curves must intersect at some point corresponding to the particle size x0. The absence of ideality means that a part Dc of the fine particles (with respect to the size x0) goes out together with the coarse product, while a part Rf of the coarse particles (with respect to the same boundary size) goes out into the fine product. The content of fine material in the fine output is Df, and the content of coarse material in the coarse output is Rc. According to the plot,

$$\begin{aligned}
D_f + D_c &= D_s\\
R_c + R_f &= R_s\\
R_f + D_f &= \gamma_f\\
R_c + D_c &= \gamma_c
\end{aligned} \qquad (2.39)$$

where γf and γc are, respectively, the outputs of the fine and coarse products. We take the initial product composition as unity. Then

$$R_s + D_s = 1$$

The point x0 is not taken as a separation characteristic by accident. With respect to this size, the concrete separation of the bulk material (Fig. 2-6) is optimal in the sense that the


mutual contamination of products is minimal. In this distribution, the total contamination is represented by the cross-hatched area, that is,

$$E_f = D_c + R_f \qquad (2.40)$$

If the separation boundary is displaced to the left or right of x0, it is evident from the figure that the area of mutual contamination grows, since it includes additional parts besides the cross-hatched area. To determine specific conditions for obtaining a maximal difference in the separation products, it is necessary to minimize (2.40), that is,

$$\frac{dE_f}{dx} = 0$$

Expanding this expression:

$$\frac{dE_f}{dx} = \frac{dD_c}{dx} + \frac{dR_f}{dx} = \frac{d\int_0^{x_0} n(x)\,dx}{dx} + \frac{d\int_{x_0}^{x_{\max}} q(x)\,dx}{dx}$$

As is known, the derivative of a definite integral with respect to a variable limit equals the integrand at that limit point (with a minus sign for a variable lower limit), so that

$$n(x) - q(x) = 0$$

This leads to

$$n(x) = q(x) \qquad (2.41)$$
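Condition (2.41) locates the optimal boundary numerically as the intersection of the two product curves. In the sketch below the two curves are illustrative Gaussian stand-ins for measured partial-residue distributions, and the crossing is found by bisection:

```python
from math import exp

# Numerical illustration of the optimality condition (2.41): the optimal
# boundary x0 is where the product curves intersect, q(x0) = n(x0).
# Both curves here are purely illustrative.
q = lambda x: exp(-((x - 2.0) ** 2))   # fine-product curve, peaked at x = 2
n = lambda x: exp(-((x - 4.0) ** 2))   # coarse-product curve, peaked at x = 4

# Bisection on d(x) = q(x) - n(x), which changes sign between the peaks.
lo, hi = 2.0, 4.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if (q(lo) - n(lo)) * (q(mid) - n(mid)) > 0:
        lo = mid
    else:
        hi = mid

x0 = 0.5 * (lo + hi)
print(x0)   # by symmetry of these two illustrative curves, x0 = 3.0
```

With real tabulated distributions, the same sign-change search on the tabulated values of q(x) − n(x) plays the role of the bisection here.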

Thus we have determined here the particle size at which the separation becomes optimal. This is a necessary, but not sufficient, condition. If we take several separators attaining the same boundary x0, they give different effects, whose values must be determined in order to make the correct choice. As an optimization criterion, one can use a criterion showing the extent of completeness of composition ordering in the separation products in comparison with the initial material:

$$E = H_s - (\gamma_f H_f + \gamma_c H_c) \qquad (2.42)$$

In compliance with Fig. 2-6, we can write

$$H_s = -(R_s\log R_s + D_s\log D_s)$$

We determine the base of the logarithm in this expression from the condition

$$H_{s(\max)} = 1$$


To satisfy these requirements, as has been shown, the logarithm base must be taken equal to 2; in this case Hs(max) = 1. Recall that in the case of multicomponent mixture separation with the purpose of obtaining n products, the logarithm base must be taken equal to n. The quality criterion for estimating the perfection of separation in a continuous process can then be written in the general form, according to (2.42), as

$$E = -\left[(D_s\log_2 D_s + R_s\log_2 R_s) - \gamma_f(\bar{D}_f\log_2\bar{D}_f + \bar{R}_f\log_2\bar{R}_f) - \gamma_c(\bar{R}_c\log_2\bar{R}_c + \bar{D}_c\log_2\bar{D}_c)\right] \qquad (2.43)$$

where the barred quantities are the parameters related to the outputs of the respective products:

$$\bar{R}_f = \frac{R_f}{\gamma_f};\quad \bar{D}_f = \frac{D_f}{\gamma_f};\quad \bar{R}_c = \frac{R_c}{\gamma_c};\quad \bar{D}_c = \frac{D_c}{\gamma_c}$$

Here R̄f + D̄f = 1 and R̄c + D̄c = 1. We write below several simple relations according to Fig. 2-6:

$$\varepsilon_f = \frac{D_f}{D_s};\quad \varepsilon_c = \frac{R_c}{R_s};\quad k_f = \frac{R_f}{R_s};\quad k_c = \frac{D_c}{D_s}$$

where εf is the fine product extraction; εc is the coarse product extraction; kf is the fine product contamination; and kc is the coarse product contamination. In practical conditions, it is difficult to use the dependence (2.43) with base-2 logarithms directly. We therefore try to obtain a simpler relation determining the optimality of the separation process from expression (2.43). To do this, we equate its derivative to zero. We rewrite (2.43) in the form

$$E = -\left[(R_s\log_2 R_s + D_s\log_2 D_s) - \left(R_c\log_2\frac{R_c}{\gamma_c} + D_c\log_2\frac{D_c}{\gamma_c} + D_f\log_2\frac{D_f}{\gamma_f} + R_f\log_2\frac{R_f}{\gamma_f}\right)\right] \qquad (2.44)$$

After certain transformations, the derivative of (2.44) acquires the form

$$\frac{dE}{dx} = Q(x)\log_2 R_s - Q(x)\log_2 D_s - n(x)\log_2\frac{R_c}{\gamma_c} + n(x)\log_2\frac{D_c}{\gamma_c} + q(x)\log_2\frac{D_f}{\gamma_f} - q(x)\log_2\frac{R_f}{\gamma_f} = 0$$

Hence,

$$Q(x)\left(\log_2 R_s - \log_2 D_s\right) = n(x)\left(\log_2\frac{R_c}{\gamma_c} - \log_2\frac{D_c}{\gamma_c}\right) + q(x)\left(\log_2\frac{R_f}{\gamma_f} - \log_2\frac{D_f}{\gamma_f}\right),$$

which leads to the expression

$$\log_2\frac{R_s}{D_s} = \varphi_c(x)\log_2\frac{R_c}{D_c} + \varphi_f(x)\log_2\frac{R_f}{D_f} \qquad (2.45)$$

In this expression,

$$\varphi_c(x) = \frac{n(x)}{Q(x)}, \qquad \varphi_f(x) = \frac{q(x)}{Q(x)}$$

It is clear that φc(x) + φf(x) = 1. We transform (2.45):

$$\log_2\frac{R_s}{D_s} - \log_2\frac{R_c}{D_c} = \varphi_f(x)\left(\log_2\frac{R_f}{D_f} - \log_2\frac{R_c}{D_c}\right)$$

Hence,

$$\log_2\frac{R_s D_c}{D_s R_c} = \varphi_f(x)\log_2\frac{R_f D_c}{R_c D_f}$$

In the notation introduced above, this expression can be written as

$$\log_2\frac{k_c}{\varepsilon_c} = \varphi_f(x)\log_2\frac{k_f k_c}{\varepsilon_f\varepsilon_c} \qquad (2.46)$$

Taking into account that in the optimal regime

$$\varphi_f(x) = \frac{1}{2},$$

the dependence (2.46) is transformed into the form

$$\frac{k_c}{\varepsilon_c} = \sqrt{\frac{k_f k_c}{\varepsilon_f\varepsilon_c}}$$


Since ε varies within the limits 0.5 ≤ ε ≤ 1, this expression can be valid in only one case, εf = εc, which automatically gives kc = kf as well. Thus, from optimality conditions based on a rather complicated entropic criterion, a simple dependence is obtained:

$$\varepsilon_f = \varepsilon_c$$

Clearly, the higher this extraction index, the better the separation performed.

2.10 Unbiased evaluation of the efficiency of multicomponent mixtures separation

The formulated parameter is also of interest because, for the first time in engineering practice, it provides a single-valued evaluation of the quality of separation of an initial material into m products for m > 2. Fig. 2-7 shows a schematic diagram of multiproduct separation. In the presence of m boundaries, m + 1 products are obtained. The total contamination due to neighboring products alone at multiproduct separation, according to Fig. 2-7, can be determined as

$$A = (D_1 + R_1) + (D_2 + R_2) + (D_3 + R_3) + \ldots + (D_m + R_m)$$

FIGURE 2–7 Schematic diagram of multiproduct separation (partial residues dR/dx versus particle size, with the separation boundaries x1, ..., x4 marked).


FIGURE 2–8 Graph of exhaustive search (algorithms of optimization of separation into n components: each letter denotes a separation boundary; edge 1 leads to the fine product and edge 2 to the coarse product).

The minimum of the total contamination is found from

$$\frac{dA}{dx} = \frac{d(D_1 + R_1)}{dx} + \frac{d(D_2 + R_2)}{dx} + \ldots + \frac{d(D_m + R_m)}{dx} = 0$$

Here, just as in binary separation, the optimality condition is obtained by minimizing the absolute value of the contamination of all the products:

$$q(x_1) = n(x_1)$$
$$q(x_2) = n(x_2)$$
$$q(x_3) = n(x_3)$$
$$\ldots$$
$$q(x_m) = n(x_m)$$

The same result can also be obtained for the case of contamination of each product not only by the material of adjacent fractions. In this case, R1, R2, ..., Rn must be understood as the total contamination of the respective material. This means that, in order to obtain an optimum, the narrow class at each separation boundary must be divided half-and-half. As has been demonstrated, the following dependence can be used:

$$E = 1 - \frac{\sum_{i=1}^{n}\mu_i H_i}{H_s}, \qquad H_s \neq 0$$

It is suitable as a criterion of the estimation of separation quality (efficiency) for multicomponent compositions.
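For m > 2 the same criterion still yields a single number. A sketch for a three-component feed split into three products (all numbers illustrative):

```python
from math import log

# Single-number evaluation of a multiproduct separation:
# E = 1 - sum(mu_i * H_i) / H_s, applied to three products.
def H(P):
    return -sum(p * log(p) for p in P if p > 0)

feed = [0.5, 0.3, 0.2]        # three components in the feed
products = [                   # composition of each output
    [0.85, 0.10, 0.05],
    [0.10, 0.80, 0.10],
    [0.05, 0.15, 0.80],
]
mu = [0.5, 0.3, 0.2]           # relative amount of each output

E = 1.0 - sum(m * H(P) for m, P in zip(mu, products)) / H(feed)
print(E)   # one number between 0 and 1 evaluating the whole separation
```

Unlike per-component extractions, this value does not grow when one component is extracted ideally while the others remain mixed; it rewards only overall ordering of the compositions.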


To search for the highest possible efficiency, we examine algorithms of material separation into n components. Let the given material consist of particles with sizes from $a_0$ to $a_n$, and let it be required to separate it into n components along specified boundaries. The separation method is as follows. The material is separated along one of the boundaries into two components; after that, each component is separated along one of its internal boundaries into two more components, and so on, until the material has been separated along all the boundaries. Here the question arises of how to find the order of separation boundaries that yields the maximal separation efficiency. The efficiency is computed using the formula

$$E = 1 - \frac{\sum_{i=1}^{n} \mu_i H_i}{H_n}$$

Algorithm 1. Exhaustive search. First, the results of separation into two components are determined for each boundary at all the parameters of the apparatus and the process (place of material feed into the apparatus i, number of stages in the apparatus z, air flow velocity w). Then the results of separation into two components along all internal boundaries are computed, for all apparatus and process parameters, for each of the separation products obtained, and so on, until all possible ways of separating the given material into n components have been computed for all possible parameters of the apparatus and the process. The separation efficiency of every examined method is then determined and the highest one is chosen. Thus the most efficient method of separating the material into n components is found. This algorithm gives the global maximum of separation efficiency, but its running time is very large, $O(n!)$. A scheme of such separation is shown in the form of a graph (Fig. 2–8) for the case in which the initial material is divided along four boundaries (for any other number of separation boundaries the approach is the same). In this graph, each letter denotes the boundary along which the separation is carried out. After separation along each boundary, fine and coarse products are obtained. If they have internal separation boundaries, the examination of the fine product continues in the graph along edge 1, and that of the coarse product along edge 2. Now consider the second algorithm for finding a separation method of maximal efficiency.
Algorithm 2. "Greedy algorithm." At the first step, the results of separation into two components are computed for all boundaries and apparatus parameters. Each time the efficiency is computed, and the maximal efficiency is chosen.
In this way, the first boundary in the sequence of separation boundaries and the necessary parameters of the process and apparatus are established. Then the same procedure is performed for each of the two obtained components with


FIGURE 2–9 Graph of material separation into four components.

respect to the found boundary until the material is divided into n components. Assume we have found that the maximal efficiency of separating the initial material into two components is obtained along the boundary $a_j$, at the air flow velocity $w_i$, number of apparatus stages $z_i$, and stage number of material feed into the apparatus $i_1$. Each of the obtained components will contain particles of all narrow size classes of the initial material, but some of them in the form of "contamination" intended for other components. In the first of the obtained components, the separation boundary should be sought (for efficiency maximization) between the boundaries $a_0$ and $a_{j-1}$, and in the second component between the boundaries $a_{j+1}$ and $a_n$. This process is continued until the initial material is divided along all required boundaries. This algorithm gives a local maximum of separation efficiency, but its running time is $O(n \ln n)$, much less than that of the first algorithm. If the computation of the order of separation boundaries using the first algorithm takes too long, then, starting from some computation step, we may pass to finding the local maximum using the second algorithm. Consider Fig. 2–9, which shows an example of material separation into four components using the first algorithm (exhaustive search) only at the first step. Starting from the second step, the search for a local maximum of separation efficiency is performed using the second algorithm. The graph in Fig. 2–9 is a subgraph of the graph in Fig. 2–8. By way of a concrete example, we present computations aimed at finding the order of separation boundaries yielding maximal efficiency; they show that the "greedy algorithm" is not bad at all, that is, its results are close to those of an exhaustive search.
We also show that it is possible to start the search for the order of separation boundaries with an exhaustive search, and then pass to the search for a local maximum (Fig. 2–10).
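The two search strategies admit a compact sketch. The Python below is a toy illustration, not the book's implementation: the efficiency function `eff` and its cost table are invented stand-ins for the entropic criterion evaluated over apparatus parameters.

```python
from itertools import permutations

def exhaustive_best(boundaries, efficiency):
    """Algorithm 1: try every order of separation boundaries, O(n!)."""
    best = max(permutations(boundaries), key=efficiency)
    return list(best), efficiency(best)

def greedy_best(boundaries, efficiency):
    """Algorithm 2 ("greedy"): at each step append the boundary that
    maximizes the efficiency of the partial sequence; a local maximum
    found with far fewer evaluations than the exhaustive search."""
    order, remaining = [], list(boundaries)
    while remaining:
        nxt = max(remaining, key=lambda b: efficiency(tuple(order) + (b,)))
        order.append(nxt)
        remaining.remove(nxt)
    return order, efficiency(tuple(order))

# Hypothetical stand-in for the efficiency of a boundary order:
cost = {"a": 3, "b": 1, "c": 2, "d": 4}
def eff(seq):
    return -sum((pos + 1) * cost[b] for pos, b in enumerate(seq))
```

The greedy result is never better than the exhaustive one, and on this toy function it is strictly worse, which mirrors the local-versus-global trade-off described above.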

FIGURE 2–10 Order of separation boundaries.

2.11 Example of optimization of separation into four components

Consider ground phosphates as the initial material for separation. Their density is $\rho_m = 2800\ \mathrm{kg/m^3}$; the air density is $\rho_a = 1.2\ \mathrm{kg/m^3}$. Let the velocity of the air flow entering the apparatus from below be $w = 1.8\ \mathrm{m/s}$, the number of separation stages in the apparatus $z = 9$, and the stage of material feed into the apparatus $i^* = 5$. (Without any loss of generality, the air flow velocity, number of separation stages, and place of material feed into the apparatus are assumed constant; in the general optimization problem these parameters can be varied.) We examine separation along three boundaries at constant apparatus parameters. The granulometric composition of the initial material is specified in Table 2–2. We determine the entropy of the initial material, considering it as two narrow size classes for each of the three separation boundaries in turn. First, we compute the total material residues for each separation boundary, in %.


Table 2–2 Granulometric composition of the initial material.

Sieve size (mm)   Partial residue on sieve (%)   Narrow class number
0.105–0.1275      52.2                           1
0.074–0.0875      23.0                           2
0.053–0.0635      13.4                           3
0–0.0265          11.4                           4

Narrow class number   Qs (%)   rs (%)
1                     52.2     52.2
2                     75.5     23.0
3                     88.6     13.4
4                     100      11.4

The initial entropy is $H_s = -(P_1 \ln P_1 + P_2 \ln P_2)$, where $P_1 = Q_s\%/100\%$ and $P_2 = 1 - P_1$. Denote the separation boundaries by letters as follows, with the numbers of the narrow classes written between them:

4 | a | 3 | b | 2 | c | 1

The initial entropy for each of the three separation boundaries equals:

Boundary   c       b      a
Hs         0.692   0.56   0.355
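These three values can be reproduced directly from the cumulative contents Qs of Table 2–2; a short check, with natural logarithms as in the text:

```python
import math

def binary_entropy(p1):
    """H_s = -(P1 ln P1 + P2 ln P2) for a two-class composition."""
    p2 = 1.0 - p1
    return -(p1 * math.log(p1) + p2 * math.log(p2))

# Cumulative content Qs (%) of the finer side at each boundary (Table 2-2):
Qs = {"c": 52.2, "b": 75.5, "a": 88.6}
Hs = {k: binary_entropy(v / 100.0) for k, v in Qs.items()}
```

Boundary c splits the material nearly half-and-half and therefore has the largest initial entropy.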

The separation coefficient for all narrow size classes (k), in compliance with the formula $k = 1 - 0.4\sqrt{B}$, amounts to:

Class   1         2          3          4
k       0.40018   0.497452   0.576695   0.40018

Determine the fractional extraction degree for each narrow class ($F_f$):

Class   1          2          3          4
Ff      0.116769   0.487262   0.824351   0.992503

The amount of material that came out into the fine and coarse products after separation into two components for each of the three separation boundaries is $r_f\% = F_f\, r_s\%$, $r_c\% = r_s\% - r_f\%$.

Fine product output ($\mu_f$): total 39.67% (classes 1–4: 6.1, 11.21, 11.05, 11.31).


Coarse product output ($\mu_c$): total 60.33% (classes 1–4: 46.1, 11.79, 2.35, 0.09).
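The class-by-class split follows directly from $r_f\% = F_f\, r_s\%$; a brief check against the feed composition and the quoted outputs (totals agree to within rounding):

```python
Ff = {1: 0.116769, 2: 0.487262, 3: 0.824351, 4: 0.992503}  # fractional extraction
rs = {1: 52.2, 2: 23.0, 3: 13.4, 4: 11.4}                  # feed partial residues, %

rf = {j: Ff[j] * rs[j] for j in rs}   # fine-product content of class j, %
rc = {j: rs[j] - rf[j] for j in rs}   # coarse-product content of class j, %
mu_f = sum(rf.values())               # total fine-product yield, %
mu_c = sum(rc.values())               # total coarse-product yield, %
```

By construction the two products together return exactly the feed composition.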

We now find the one of the three separation boundaries for which the highest efficiency of separation into two components is obtained. Below are the tables of total residues for the fine and coarse products.

Fine product: total 39.67% (total residues, classes 1–4: 6.1, 17.31, 28.36, 39.67).

Coarse product: total 60.33% (total residues, classes 1–4: 46.1, 57.89, 60.24, 60.33).

We find the entropy of the fine and coarse products for all separation boundaries.

For the fine product ($H_f$):   a 0.598,  b 0.69,  c 0.429.
For the coarse product ($H_c$): a 0.011,  b 0.0776,  c 0.389.

We find the efficiency of separation into two components along all separation boundaries:

$$E = 1 - \frac{\mu_f H_f + \mu_c H_c}{H_s}, \qquad \mu_f = 0.3967,\ \mu_c = 0.6033$$

The computed efficiencies (E) are:   a 0.313,  b 0.428,  c 0.415.
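These three efficiencies follow directly from the criterion $E = 1 - (\mu_f H_f + \mu_c H_c)/H_s$ with the entropies listed above; a short check:

```python
mu_f, mu_c = 0.3967, 0.6033                  # yields of fine and coarse products
Hs = {"a": 0.355, "b": 0.56, "c": 0.692}     # entropy of the feed per boundary
Hf = {"a": 0.598, "b": 0.69, "c": 0.429}     # fine-product entropies
Hc = {"a": 0.011, "b": 0.0776, "c": 0.389}   # coarse-product entropies

E = {k: 1 - (mu_f * Hf[k] + mu_c * Hc[k]) / Hs[k] for k in Hs}
best = max(E, key=E.get)  # boundary giving the highest two-component efficiency
```

The maximum is obtained on boundary b, as the text concludes below.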

It is evident that the maximal efficiency of separation into two components is achieved along boundary b. However, each of the remaining components has one internal boundary, and to complete our search it remains to separate each of the obtained components along this boundary. (If the maximal efficiency of separation into two components had been found on boundary a or c, we would have had to continue the search for the


separation boundary in the component that still had two internal separation boundaries, until a local maximum of efficiency was found.) In the present case, it remains to separate the two obtained components along boundaries a and c, respectively. The results of separating the first component along boundary a are presented next.

Fine product ($r_f\%$): total 26.514% (classes 1–4: 0.712, 5.462, 9.11, 11.23).

Coarse product ($r_c\%$): total 12.85% (classes 1–4: 5.39, 5.75, 1.94, 0.08).

The results of separating the second component along boundary c follow.

Fine product ($r_f\%$): total 13.156% (classes 1–4: 5.382, 5.745, 1.94, 0.089).

Coarse product ($R_c\%$): total 47.174% (classes 1–4: 40.718, 6.045, 0.41, 0.001).

We calculate the final efficiency of separation into four components. The entropy of the initial material consisting of four narrow classes equals $H_s = 1.2$; the entropy of each component after separation is $H_1 = 1.151$, $H_2 = 1.041$, $H_3 = 1.04$, $H_4 = 0.432$. Then the separation efficiency is

$$E = 1 - \frac{\sum_{i=1}^{4} \mu_i H_i}{H_s} = 0.437$$

We emphasize that the material was separated in the order of boundaries (b, a, c) given by the algorithm for obtaining a local maximum of separation efficiency.

We now find the separation efficiency maximum using an exhaustive search at the first step and a search for a local maximum starting from the second step. The degree of fractional extraction for each narrow class remains unchanged, as the process parameters do not vary. We first consider the initial material separation along boundary a.

Fine product ($r_f\%$): total 39.67% (classes 1–4: 6.1, 11.21, 11.05, 11.31).


Coarse product ($r_c\%$): total 60.33% (classes 1–4: 46.1, 11.79, 2.35, 0.09).

In the obtained coarse product, we search for a boundary providing maximal efficiency of separation into two components. It can be verified that the maximal separation efficiency is achieved along boundary b. We find the result of separating the coarse product into two components using the formulas $r_f\% = F_f\, r_s\%$ and $r_c\% = r_s\% - r_f\%$. We obtain the following two components (separation along boundary b).

Fine product ($r_f\%$): total 13.156% (classes 1–4: 5.382, 5.745, 1.94, 0.089).

Coarse product ($r_c\%$): total 47.174% (classes 1–4: 40.718, 6.045, 0.41, 0.001).

We now calculate the separation of the obtained coarse product along boundary c.

Fine product ($r_f\%$): total 8.04% (classes 1–4: 4.754, 2.945, 0.33798, 0.000993).

Coarse product ($r_c\%$): total 39.13% (classes 1–4: 35.96, 3.1, 0.07202, 0.000007).

Consider the initial material separation for the following order of boundaries (a, c, b). We use the available results of coarse product separation, first along boundary c and then separation of the remainder along boundary b. The result of separation along boundary c is given next.

Fine product ($r_f\%$): total 13.156% (classes 1–4: 5.382, 5.745, 1.94, 0.089).

Coarse product ($r_c\%$): total 47.174% (classes 1–4: 40.718, 6.045, 0.41, 0.001).


We determine the separation of the final fine product along boundary b.

Fine product ($r_f\%$): total 5.1176% (classes 1–4: 0.628, 2.8, 1.6, 0.0883).

Coarse product ($r_c\%$): total 8.04% (classes 1–4: 4.754, 2.945, 0.34, 0.0006).

We calculate the efficiency for the examined order of boundaries (a, c, b):

$$H_1 = 1.36,\ H_2 = 0.433,\ H_3 = 1.022,\ H_4 = 0.812; \qquad E = 1 - \frac{\sum_{i=1}^{4} \mu_i H_i}{H_s} = 0.282$$
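The component entropies and the overall efficiency for this order can be recomputed from the product tables above. Using the raw (unrounded) class contents, the figures agree with the book's values to within rounding:

```python
import math

def composition_entropy(amounts):
    """Entropy of a product treated as a standalone composition (natural log)."""
    total = sum(amounts)
    return -sum(a / total * math.log(a / total) for a in amounts if a > 0)

# Narrow-class contents (%) of the four products for the boundary order (a, c, b):
products = [
    [6.1, 11.21, 11.05, 11.31],    # fine product of boundary a
    [40.718, 6.045, 0.41, 0.001],  # coarse product of boundary c
    [0.628, 2.8, 1.6, 0.0883],     # fine product of boundary b
    [4.754, 2.945, 0.34, 0.0006],  # coarse product of boundary b
]
Hs = 1.2  # entropy of the initial four-class composition
E = 1 - sum(sum(p) / 100 * composition_entropy(p) for p in products) / Hs
```

Each yield $\mu_i$ is the product total divided by 100, so the weighted entropies and E need no extra inputs.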

We examine the initial material separation into two components along boundary c.

Fine product ($r_f\%$): total 39.67% (classes 1–4: 6.1, 11.21, 11.05, 11.31).

Coarse product ($r_c\%$): total 60.33% (classes 1–4: 46.1, 11.79, 2.35, 0.09).

We consider the separation of the obtained fine product along boundary b. We obtain the following two components.

Fine product ($r_f\%$): total 26.514% (classes 1–4: 0.712, 5.46, 9.11, 11.23).

Coarse product ($r_c\%$): total 13.16% (classes 1–4: 5.39, 5.75, 1.94, 0.08).


We consider the separation of the obtained fine product along boundary a.

Fine product ($r_f\%$): total 21.4% (classes 1–4: 0.083, 2.66, 7.51, 11.14).

Coarse product ($r_c\%$): total 5.113% (classes 1–4: 0.629, 2.8, 1.6, 0.084).

We calculate the separation efficiency for the obtained order of boundaries (c, b, a):

$$H_1 = 0.66,\ H_2 = 1.04,\ H_3 = 1.1,\ H_4 = 0.984; \qquad E = 1 - \frac{\sum_{i=1}^{4} \mu_i H_i}{H_s} = 0.333$$

We calculate the results of the initial material separation for the order of boundaries (c, a, b). We take the already obtained results of separation along boundary c and examine the separation of the fine product, first along boundary a and then the separation of what is obtained along boundary b. The result of separation along boundary a:

Fine product ($r_f\%$): total 26.514% (classes 1–4: 0.712, 5.46, 9.11, 11.23).

Coarse product ($r_c\%$): total 13.16% (classes 1–4: 5.39, 5.75, 1.94, 0.08).

Finally, the efficiency for the considered order of boundaries (c, a, b) is

$$H_1 = 0.66,\ H_2 = 1.55,\ H_3 = 1.015,\ H_4 = 0.812; \qquad E = 1 - \frac{\sum_{i=1}^{4} \mu_i H_i}{H_s} = 0.315$$

It follows that if the local-maximum algorithm had been applied within the exhaustive search after the initial material separation along boundary c, the local maximum would have given the order of separation boundaries (a, b, c).


Below we present a table of separation efficiencies for all the considered orders of separation boundaries:

Order of boundaries                                                Efficiency
b, c, a (obtained in the search for the maximal efficiency)        0.347
a, b, c (local maximum starting from the second step)              0.285
a, c, b                                                            0.282
c, b, a (local maximum starting from the second step)              0.333
c, a, b                                                            0.315

This and other examples show that the local-maximum search algorithm performs adequately in comparison with the exhaustive search algorithm.

2.12 Mathematical model of separation into n components

Let an initial material with particle sizes in the range from $a_0$ to $a_n$ be specified, which must be separated along $(n-1)$ boundaries. We assume, without any loss of generality, that the following order of separation boundaries is obtained using the algorithms for maximal separation efficiency:

$$a_{i_1}, a_{i_2}, a_{i_3}, \dots, a_i, \dots, a_{i+1}, \dots, a_{i_{n-2}}, a_{i_{n-1}}$$

(i.e., here separation along the boundary $a_i$ occurs earlier than along the boundary $a_{i+1}$; if it were vice versa, the approach would be exactly the same). In addition, for the separation along each boundary, apparatus parameters were obtained (i.e., z is the number of stages, i is the stage of material feed into the apparatus, and w is the velocity of the air flow entering the apparatus from below; these parameters differ between boundaries). Assume that it is necessary to estimate the separation results in component $i+1$. According to the obtained sequence of separation boundaries, the separation results in component $i+1$ will be known after separation along the boundary $a_{i+1}$. In the graph (Fig. 2–8), denote by C the path connecting vertices $a_{i_1}$ and $a_{i+1}$. Let $r_{s,j}$ be the amount of material of size class j in the initial material. Then, after the initial material is separated along the boundary $a_{i_1}$, the amount $r_{s,j} F_{1,f,j}$ of size class j goes into the fine product, and $r_{s,j}(1 - F_{1,f,j})$ into the coarse product, where $F_{1,f,j}$ is the degree of fractional extraction of size class j in the first apparatus in which separation takes place. The material that came out into the fine product is then separated along its internal separation boundaries, and likewise for the coarse product. The amount of size-class-j material in the coarse and fine products is found after separation along each boundary, similarly to the first separation step. Then

$$r_{i+1,f,j} = r_{s,j}\, F_{i+1,f,j} \prod_{a_k \in C_R} (1 - F_{k,f,j}) \times \prod_{a_k \in C_L} F_{k,f,j} \tag{2.47}$$
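Formula (2.47) says that the amount of class j in a final component is $r_{s,j}$ multiplied by an F or (1 − F) factor for every boundary on the path to that component. A small recursive sketch reproduces this for the four-component example; the stream representation and the boundary numbering a = 1, b = 2, c = 3 are our own assumptions:

```python
def separate(amounts, order, F):
    """Split a stream along the given sequence of boundaries.

    amounts: {class j: amount}; at boundary b a fraction F[b][j] of class j
    goes to the fine stream and 1 - F[b][j] to the coarse one. Boundaries
    numerically below b continue in the fine branch, the rest in the coarse
    branch, so each leaf amount is rs_j times a product of F and (1 - F)
    factors along its path, as in (2.47).
    """
    if not order:
        return [amounts]
    b, rest = order[0], order[1:]
    fine = {j: a * F[b][j] for j, a in amounts.items()}
    coarse = {j: a * (1 - F[b][j]) for j, a in amounts.items()}
    return (separate(fine, [x for x in rest if x < b], F)
            + separate(coarse, [x for x in rest if x > b], F))

# Data from the four-component example (F is the same for every boundary there):
Ff = {1: 0.116769, 2: 0.487262, 3: 0.824351, 4: 0.992503}
rs = {1: 52.2, 2: 23.0, 3: 13.4, 4: 11.4}
F = {b: Ff for b in (1, 2, 3)}
components = separate(rs, [2, 1, 3], F)  # boundary order (b, a, c)
```

Mass is conserved by construction, and the first leaf reproduces the 26.51% fine-of-a product computed earlier.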

Table 2–3 Initial material characteristic.

Mesh size of sieve x (µm):       2800, 2500, 1800, 1000, 750, 630, 400, 315, 200, 160, 100, 80, 63, 50, 40, 0
Mean size of narrow class (µm):  3000, 2650, 2150, 1400, 825, 690, 515, 357.5, 257.5, 180, 130, 90, 71.5, 56.5, 45, 20
Partial residues on sieve r (%): 0.78, 2.24, 6.73, 12.07, 11.21, 10.7, 16.2, 13.73, 10.2, 6.1, 3.31, 2.43, 2.03, 1.28, 0.7, 0.29

Products (boundaries in µm; share of each product in the initial material):
G: +2150, 3.02%;  F: 2150–1400, 6.73%;  E: 1400–690, 23.28%;  D: 690–357.5, 26.9%;  C: 357.5–180, 23.93%;  B: 180–71.5, 11.84%;  A: −71.5, 4.3%

where $r_{i+1,f,j}$ is the quantity of size-class-j material that came out into the (i+1)-th component; $F_{k,f,j}$ is the degree of fractional extraction of class-j material into the k-th component; C is the path between $a_{i_1}$ and $a_{i+1}$; and $a_k \in C_R$ or $a_k \in C_L$ if the edge in path C after the vertex $a_k$ is directed to the right or left, respectively. Fig. 2–7 illustrates a schematic diagram of multiproduct separation. In the presence of m boundaries, we obtain m + 1 products. A shelf classifier was chosen for the study. It consists of seven stages; the initial material is fed to the second stage counting from the top. Ground quartzite with a broad range of particle sizes was chosen for the experiments. The main characteristics of this material are shown in Table 2–3. The narrow size class value was determined as the arithmetic mean of the mesh sizes of two adjacent sieves. The entire size range is divided by six boundaries into seven products. The table shows the amount of each product contained in the initial material and which narrow class must be divided in the optimal way in order to obtain minimal mutual contamination of the products at each separation act. On the basis of this table, the entropy of the initial composition was obtained from (2.38) using natural logarithms: $H_s = 1.71$.
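The value Hs = 1.71 can be checked directly from the product shares in column 6 of Table 2–3:

```python
import math

# Product shares (%) in the feed, from Table 2-3:
shares = {"G": 3.02, "F": 6.73, "E": 23.28, "D": 26.9,
          "C": 23.93, "B": 11.84, "A": 4.3}
Hs = -sum(s / 100 * math.log(s / 100) for s in shares.values())
```

The shares sum to 100%, and the seven-term entropy reproduces the quoted figure.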


Table 2–4 Results of multiproduct separation of the initial material by the fine-to-coarse principle (initial material: 100%).

Stage 1, w = 0.95 m/s. Fine: Product A (γA = 3.545%, γz = 2.065%, γp = 1.48%). Coarse: residue, γ = 96.455%.
Stage 2, w = 1.34 m/s. Fine: Product B (γB = 12.285%, γz = 6.115%, γp = 6.17%). Coarse: residue, γ = 84.17%.
Stage 3, w = 1.89 m/s. Fine: Product C (γC = 20.96%, γz = 15.05%, γp = 5.945%). Coarse: residue, γ = 63.21%.
Stage 4, w = 2.62 m/s. Fine: Product D (γD = 26.705%, γz = 17.81%, γp = 8.895%). Coarse: residue, γ = 36.5%.
Stage 5, w = 3.74 m/s. Fine: Product E (γE = 21.14%, γz = 14.903%, γp = 6.237%). Coarse: residue, γ = 15.365%.
Stage 6, w = 4.63 m/s. Fine: Product F (γF = 10.01%, γz = 3.0%, γp = 7.01%). Coarse: Product G (γG = 5.362%, γz = 2.242%, γp = 3.12%).

A preliminary set of experiments was performed, which allowed determining the optimal air flow velocities (referred to the total classifier cross-section) providing optimal separation of the narrow class values indicated in column 3. It was established that, in this apparatus, this velocity is 0.95 m/s for the class of 71.5 µm; 1.34 m/s for 180 µm; 1.89 m/s for 357.5 µm; 2.62 m/s for 690 µm; 3.74 m/s for 1400 µm; and 4.63 m/s for 2150 µm. Three groups of experiments on sequential separation of the initial product into the seven mentioned products were performed. In the first group, the finest product A was first separated in an optimal way, then product B was separated from the residue, also optimally, then C, and so on, up to product G; the results are tabulated in Table 2–4, which shows the output of each product. Consider, for example, product D. Its output with respect to the initial composition is γD = 26.705%. In this product, the target fractions, that is, particles within the size range from 357.5 to 690 µm, amount to γz = 17.81% of the initial material. In addition, in this


product there are particles coarser than 690 µm and finer than 357.5 µm. Their total amount is the contamination of this product; for product D it is γp = 8.895%. The entropy of each product is determined from the contents of target fractions (γz) and contamination (γp) in it. Here a product is examined independently, that is, the sum of these two components is taken as unity, or 100%. In this case,

$$z = \frac{17.81}{26.705} = 66.69\%, \qquad p = \frac{8.895}{26.705} = 33.31\%$$

The composition entropy for product D is then

$$H_D = -(0.6669 \ln 0.6669 + 0.3331 \ln 0.3331) = 0.6362$$

Similarly calculated composition entropies for all other products are

$$H_A = 0.6795,\ H_B = 0.696,\ H_C = 0.5958,\ H_E = 0.606,\ H_F = 0.6096,\ H_G = 0.6796$$

It was determined that the total efficiency of such multiproduct separation amounts to $E_I = 0.632$.
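Each product entropy above follows from its γz and γp alone; a short check for products D and A of the first group:

```python
import math

def product_entropy(gamma_z, gamma_p):
    """Entropy of one product from its target (gamma_z) and contamination
    (gamma_p) contents; the product is treated as a standalone binary mixture."""
    total = gamma_z + gamma_p
    z, p = gamma_z / total, gamma_p / total
    return -(z * math.log(z) + p * math.log(p))

H_D = product_entropy(17.81, 8.895)  # product D, first group of experiments
H_A = product_entropy(2.065, 1.48)   # product A, first group of experiments
```

Both values reproduce the book's figures to three decimal places.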

In the second group of experiments, the separation of each product from the initial material was carried out, in contrast, from coarse to fine: first product G, then F, and so on, up to product A. The results of this group are given in Table 2–5. Using these data, it was determined for this case that

$$H_G = 0.6718,\ H_F = 0.6052,\ H_E = 0.6313,\ H_D = 0.6361,\ H_C = 0.5756,\ H_B = 0.549,\ H_A = 0.388$$

It was determined that the total efficiency of such multiproduct separation was $E_{II} = 0.6466$.

In the third group of experiments, the principle of efficiency maximization at each separation stage was realized. First, the narrow size class for which the contents of the prospective fine and coarse products were closest to equal was determined in the initial composition. It was the narrow class with mean particle size x = 690 µm: with respect to this class, the total residue is Rs = 43.73% and the total passage Ds = 56.27%. Therefore the first separation stage was performed at the air flow velocity w = 2.62 m/s. In it, 62.04% of the initial material came out into the fine product and 37.96% into the coarse product. The class closest to the 50% composition in the fine product output was that with mean particle size x = 357.5 µm (Rs = 57.5%, Ds = 42.5%); the analogous class in the coarse product output was that with x = 1400 µm (Rs = 31%, Ds = 69%).
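The choice of the first boundary in this group amounts to scanning cumulative residues for the split nearest 50%; a check against Table 2–3:

```python
# Partial residues (%) from Table 2-3, listed from coarse to fine:
partial = [0.78, 2.24, 6.73, 12.07, 11.21, 10.7, 16.2, 13.73,
           10.2, 6.1, 3.31, 2.43, 2.03, 1.28, 0.7, 0.29]

# Cumulative coarse residue Rs after each class; the best first boundary is
# the one whose split is closest to Rs = Ds = 50%.
Rs, cum = [], 0.0
for r in partial[:-1]:
    cum += r
    Rs.append(cum)
best = min(range(len(Rs)), key=lambda i: abs(Rs[i] - 50.0))
```

The winning index corresponds to the 690 µm class, with Rs = 43.73% and Ds = 56.27%, exactly as stated above.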


Table 2–5 Results of multiproduct separation of the initial material by the coarse-to-fine principle (initial material: 100%).

Stage 1, w = 4.63 m/s. Fine: residue, γ = 94.02%. Coarse: Product G (γG = 5.98%, γz = 2.375%, γp = 3.605%).
Stage 2, w = 3.74 m/s. Fine: residue, γ = 83.795%. Coarse: Product F (γF = 10.225%, γz = 3.0%, γp = 7.225%).
Stage 3, w = 2.62 m/s. Fine: residue, γ = 61.74%. Coarse: Product E (γE = 22.056%, γz = 14.868%, γp = 7.188%).
Stage 4, w = 1.89 m/s. Fine: residue, γ = 35.04%. Coarse: Product D (γD = 26.7%, γz = 17.826%, γp = 8.874%).
Stage 5, w = 1.34 m/s. Fine: residue, γ = 14.495%. Coarse: Product C (γC = 20.545%, γz = 15.015%, γp = 5.53%).
Stage 6, w = 0.95 m/s. Fine: Product A (γA = 3.453%, γz = 3.0%, γp = 0.453%). Coarse: Product B (γB = 11.049%, γz = 8.42%, γp = 2.629%).

The second separation was performed on the fine product at the velocity w = 1.89 m/s, which corresponds to optimal separation by the 357.5 µm class. The coarse output of this separation act is product D, which contains 27% of the initial material. The third separation in this group was performed on the coarse material remaining from the first experiment at the air flow velocity w = 3.74 m/s, which is optimal for the narrow class with mean particle size x = 1400 µm. The fine output of this separation act made up product E. The separation was completed in a similar way. A schematic diagram of this separation is shown in Table 2–6. On the basis of this table, the composition entropies of each product were calculated:

$$H_D = 0.641,\ H_E = 0.6396,\ H_C = 0.6183,\ H_F = 0.607,\ H_G = 0.6555,\ H_A = 0.412,\ H_B = 0.3368$$


Table 2–6 Results of multiproduct separation of the initial material by the principle of maximal efficiency at each stage (initial material: 100%).

Stage 1, w = 2.62 m/s. Fine: I fine product, γ = 62.04%. Coarse: I coarse product, γ = 37.96%.
Stage 2, w = 1.89 m/s, from I fine product. Fine: II fine product, γ = 35.05%. Coarse: Product D (γD = 27%, γz = 17.83%, γp = 9.13%).
Stage 3, w = 3.74 m/s, from I coarse product. Fine: Product E (γE = 22.6%, γz = 14.97%, γp = 7.63%). Coarse: III coarse product, γ = 15.34%.
Stage 4, w = 1.34 m/s, from II fine product. Fine: IV fine product, γ = 14.97%. Coarse: Product C (γC = 20.079%, γz = 15.025%, γp = 5.054%).
Stage 5, w = 4.63 m/s, from III coarse product. Fine: Product F (γF = 10.13%, γz = 3%, γp = 7.13%). Coarse: Product G (γG = 5.8%, γz = 2.11%, γp = 3.69%).
Stage 6, w = 0.95 m/s, from IV fine product. Fine: Product A (γA = 3.6%, γz = 3.08%, γp = 0.52%). Coarse: Product B (γB = 11.37%, γz = 10.153%, γp = 1.217%).

It is determined using formula (2.38) that in this case the total efficiency of separation into seven products is $E_{III} = 0.6526$, which exceeds the total separation efficiency in the two previous cases. The following conclusions can be drawn from the performed analysis:

1. The entropic criterion is applicable for the evaluation and optimization of multiproduct separation; it can also be used for unambiguous estimation of complicated layouts of concentrating plants and other multistage processing lines intended for separation of both multicomponent and binary mixtures.
2. The efficiency value for multiproduct separation is determined by the initial material composition.
3. The maximal possible efficiency is also determined by the chosen order of separation boundaries.
4. As the obtained results show, the greatest effect can be achieved if two conditions are observed:


a. Each separation along any boundary must realize the optimality condition through minimization of mutual contamination;
b. The order of separation boundaries must be chosen in such a way that separation is carried out along the boundary for which the ratio between the products under the specified conditions is closest to the equality Rs = Ds = 50%.

In contrast to dynamic processes, this parameter can evaluate stationary states; therefore it is called stationary (static) entropy here. A change in such a system resulting from some process is fixed by the same method, as the difference between the initial and final entropies. This allows unambiguous conclusions about the completeness and perfection of the performed transformations. At the same time, it should be noted that static entropy has a partly subjective character: the estimate is objective only with respect to the chosen set of attributes. For example, a multimetallic rock has some components that count as waste rock according to the requirements of the existing technology. By calculating entropy with allowance for the remaining components, a definite estimation of the composition can be obtained; if one or several additional components of the waste rock are taken into account, the estimation is totally different. Consider another example. More than seven billion people live on the Earth. They can be distinguished by nationality, skin color, age, weight, height, eye color, etc. The entropy of composition provides an objective, unambiguous estimation of mankind by whatever parameter (or combination of parameters) we specify. From this standpoint, production processes can also be evaluated, such as a complicated construction project or the operation of a specific enterprise with a complex, even ramified, technology.

2.13 Unambiguous evaluation of completeness of a complicated object in the process of construction

It is known that the construction of industrial enterprises is characterized by a diversity of erected objects and by their versatility, involving various kinds of works. At the construction stage of such objects, it is usually very difficult to evaluate the extent of completeness of their separate elements. And even if one manages to do this as impartially as possible, it is difficult to draw a conclusion about the state of the entire object at a given moment from the multitude of such evaluations. This problem can be solved using the parameter of static entropy. It is best demonstrated on a definite object; for this purpose we consider, in general terms, the construction of a railway segment. Imagine that, according to the project, several stations are planned along this railway, equipped with railway terminals of various levels depending on the population of the particular locality and some other factors. At some stations it is planned to build repair workshops of various levels and even several depots, other stations being without them. Some sections of the railroad bed will be laid in plain terrain, others in hilly and mountainous terrain. This means that besides excavation works for the railway track along the entire railroad, it



FIGURE 2–11 Schematic diagram of railway construction.

becomes necessary to build bridges and tunnels. All the required objects can be very different. A schematic diagram of such a project is shown in Fig. 2–11. In this diagram, railway terminals are enumerated as $x_1, x_2, x_3, \dots, x_i, \dots, x_n$. Road segments between them are enumerated according to the number of terminals located to the left of the station and denoted by $y_1, y_2, y_3, \dots, y_i, \dots, y_n$. Accordingly, we denote the list of works at the stations by the symbol x and on the road sections by y. An approximate, incomplete list of works at the first station includes: X11, excavation works; X12, building foundation; X13, walls and roof; X14, plasterwork; X15, plasterwork; X16, electrical works; X17, communication means; X18, automation; X19, workshops; etc., up to X1m, arrangement of green spaces. For the first railroad segment track:


Y11, on-site planning; Y12, leveling and earthing; Y13, bridges; Y14, overhead roads; Y15, laying cross-ties; Y16, laying rails; Y17, building of railway points and traffic lights; Y18, automation and block systems; Y19, communication lines and electricity; etc., up to Y1m, land improvement and clearing of rubble and debris. Naturally, the presented list of works is incomplete; otherwise it would take up the volume of an entire book. We only consider the principle of unambiguous evaluation of works in such a complicated case. Before starting the construction, each type of work is estimated, for example, in dollars. In general form, the expenses for the construction of the first station can be written as follows:

$$Q(x_{11}) + Q(x_{12}) + Q(x_{13}) + \dots + Q(x_{1i}) + \dots + Q(x_{1m}) = Q(x_1)$$

Dividing the left-hand side by the right, we obtain the share of expenses for each type of work at the first station:

$$P(x_{11}) + P(x_{12}) + P(x_{13}) + \dots + P(x_{1i}) + \dots + P(x_{1m}) = 1 \tag{2.48}$$

Each of these summands can be interpreted as a characteristic of the relative cost of works. For each of these works at the construction of the first object, the static entropy can be determined using the following formula:

$$H(x_{1i}) = P(x_{1i}) \ln P(x_{1i}) \tag{2.49}$$

As a result, we can write for the entire object

H(x11) + H(x12) + H(x13) + … + H(x1i) + … + H(x1m) = H(x1)    (2.50)

The total value H(x1) determines the complicacy of building the station denoted by index 1 in the schematic diagram. By the change in the quantity (2.50) in the course of construction, we can follow the progress of the erection of this station. Imagine that, after some time, the contractor performing the works x1i has drawn a share P′(x1i) of the funds intended for his object. Naturally, before the completion of works at his site, P′(x1i) < P(x1i). The complicacy of his site then decreases by the value

ΔH(x1i) = P′(x1i) ln P′(x1i) − P(x1i) ln P(x1i)    (2.51)

Chapter 2 • Statistical entropy component


which is equivalent to

H(x1i) − H′(x1i) = ΔH(x1i)

If the funds have been drawn completely, this difference is

ΔH(x1i) = 0

For a single elementary site, this is clear without additional explanation. However, such an approach allows making unambiguous conclusions about the progress of the entire project. In the course of construction, the changes taking place are analogous to those considered for each summand of the relation (2.50). On any specific date of the works' performance, we can calculate

H′(x11), H′(x12), H′(x13), …, H′(x1i), …, H′(x1m)

The sum of the entropies of the drawn funds, i.e., of the performed works, gives

H′(x1) = Σ_{i=1}^{m} H′(x1i)

The incompleteness of works at the project is

ΔH(x1) = H(x1) − H′(x1)    (2.52)

To obtain a habitual estimation in fractions of unity or in percentages, the incompleteness of works can be expressed by a parameter obtained by dividing both sides of (2.52) by H(x1). Then

F = ΔH(x1) / H(x1)    (2.53)

We repeat that the construction incompleteness over all the objects of the first station is determined using the relation (2.53). We can express this relation in percent, which is habitual for builders, by writing

F = [ΔH(x1) / H(x1)] × 100%    (2.54)

The construction completeness can be expressed by

E = [1 − ΔH(x1)/H(x1)] × 100%    (2.55)

It is clear that E + F = 100% is always valid.
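As a sketch of formulas (2.48)–(2.55), the entropy-based completeness of a single station can be computed from its cost breakdown. The work names and all dollar amounts below are hypothetical illustrations, not data from the text.

```python
import math

# Hypothetical cost estimates for works at one station, in dollars
costs = {"excavation": 120_000, "foundation": 260_000, "walls and roof": 340_000,
         "electrical": 150_000, "automation": 130_000}
# Hypothetical funds drawn so far by each contractor
drawn = {"excavation": 120_000, "foundation": 130_000, "walls and roof": 85_000,
         "electrical": 30_000, "automation": 0}

total = sum(costs.values())  # Q(x1), the total estimated cost of the station

def h(p):
    """Static entropy of one cost share, H = -P ln P, as in Eq. (2.49)."""
    return -p * math.log(p) if p > 0 else 0.0

H_full = sum(h(q / total) for q in costs.values())   # H(x1), Eq. (2.50)
H_done = sum(h(q / total) for q in drawn.values())   # H'(x1), entropy of drawn funds
dH = H_full - H_done                                 # residual complicacy, Eq. (2.52)

F = dH / H_full * 100   # incompleteness in percent, Eq. (2.54)
E = 100 - F             # completeness in percent, Eq. (2.55)
print(f"F = {F:.1f}%, E = {E:.1f}%")
```

Note that a work whose funds are fully drawn contributes equally to H_full and H_done, so it adds nothing to the residual complicacy ΔH, in agreement with ΔH(x1i) = 0 above.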


To evaluate the course of construction of all the stations at the same time, we can also obtain a unified estimation using this method. The same method allows evaluating the building of all components of a project of any complicacy and all its constituents. For this purpose, the expenses for all the stations should be expanded into a matrix of the type (rows correspond to types of works, columns to stations):

Q(x11)   Q(x21)   Q(x31)   …   Q(xn1)
Q(x12)   Q(x22)   Q(x32)   …   Q(xn2)
  ⋮        ⋮        ⋮             ⋮
Q(x1m)   Q(x2m)   Q(x3m)   …   Q(xnm)        (2.56)

The sum of the elements of this matrix gives the total cost of the construction of all the stations:

Σ_{i=1}^{n} Σ_{j=1}^{m} Q(xij) = Q(x)    (2.57)

Dividing all the elements of (2.56) by (2.57), we can determine, using the above-described method, the relative complicacy of all the stations in the form of shares of the total cost, and obtain a generalized matrix of static entropies for their characteristics:

H(x11)   H(x21)   H(x31)   …   H(xn1)
H(x12)   H(x22)   H(x32)   …   H(xn2)
  ⋮        ⋮        ⋮             ⋮
H(x1m)   H(x2m)   H(x3m)   …   H(xnm)        (2.58)

The sum of all entropies for the n stations gives the magnitude of the total complicacy of works:

Σ_{i=1}^{n} Σ_{j=1}^{m} H(xij) = H(x)    (2.59)

While works are being accomplished at all the stations at once, the residual complicacies can be determined by performing all the computations with this matrix according to Eqs. (2.49) and (2.50), finding H′(x) and ΔH(x) = H(x) − H′(x), and determining

Ex = [1 − ΔH(x)/H(x)] × 100%    (2.60)
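The matrix computation (2.56)–(2.60) can be sketched as follows. The cost matrix and drawn-funds matrix are invented for illustration (rows are work types, columns are stations).

```python
import math

# Hypothetical cost matrix Q(x_ij): rows = work types, columns = stations
Q = [[50_000, 40_000, 60_000],
     [120_000, 100_000, 90_000],
     [200_000, 180_000, 160_000]]
# Hypothetical funds drawn so far, element by element (same shape)
Q_drawn = [[50_000, 40_000, 0],
           [60_000, 100_000, 45_000],
           [100_000, 0, 160_000]]

total = sum(sum(row) for row in Q)   # Q(x), Eq. (2.57)

def h(p):
    """Static entropy of one share, H = -P ln P, Eq. (2.49)."""
    return -p * math.log(p) if p > 0 else 0.0

H_total = sum(h(q / total) for row in Q for q in row)        # H(x), Eq. (2.59)
H_done = sum(h(q / total) for row in Q_drawn for q in row)   # H'(x)
dH = H_total - H_done                                        # residual complicacy

E_x = (1 - dH / H_total) * 100   # Eq. (2.60)
print(f"E_x = {E_x:.1f}%")
```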

In a similar way, we can unambiguously determine the course of accomplishing works connected with railroad tracks.


The costs for the first section can be written as follows:

Q(y11) + Q(y12) + Q(y13) + … + Q(y1s) = Q(y1)    (2.61)

To obtain a characteristic of the relative complicacy of the performed works, each summand in the left-hand part of (2.61) must be divided by the expression in the right-hand part. Their sum gives

P(y11) + P(y12) + P(y13) + … + P(y1i) + … + P(y1s) = 1    (2.62)

The entropy of each summand of this sum is

H(y1i) = −P(y1i) ln P(y1i)    (2.63)

and the sum of these entropies is

Σ_{i=1}^{s} H(y1i) = H(y1)    (2.64)

As the works proceed, the change of these quantities gives the entropy of the performed works H′(y1i) and the residual complicacy ΔH(y1) = H(y1) − Σ H′(y1i). The estimation of the relative residual complicacy is

F(y) = [ΔH(y1)/H(y1)] × 100%    (2.65)

The completeness of works is determined by the expression

E(y) = 100% − F(y) = [1 − ΔH(y1)/H(y1)] × 100%    (2.66)

If someone needs a general pattern of the current state of construction of all the railroad tracks, it is necessary first to construct a matrix analogous to (2.58) and then to perform the same computation procedure. In a similar way, an overall estimation of the performance of works for the railway stations and railway tracks together can be obtained. For that, the specific weight of costs for each group must be taken into account:

Q = Q(x) + Q(y)    (2.67)

Hence,

1 = φ(x) + φ(y),  where  φ(x) = Q(x)/Q,  φ(y) = Q(y)/Q    (2.68)


Then the total project completion is

E = [1 − Fx·φ(x) − Fy·φ(y)] × 100%    (2.69)

where Fx and Fy are the incompletenesses of the station and track works, respectively, expressed in fractions of unity.
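A sketch of the weighted combination (2.67)–(2.69); the group totals and incompleteness fractions below are invented for illustration.

```python
# Hypothetical totals: Q(x) for stations, Q(y) for tracks, Eq. (2.67)
Qx, Qy = 4_000_000, 6_000_000
Q = Qx + Qy

phi_x, phi_y = Qx / Q, Qy / Q   # cost weights, Eq. (2.68); phi_x + phi_y = 1

# Hypothetical fractional incompleteness of each group, as in (2.53)
Fx, Fy = 0.59, 0.35

# Total project completion, Eq. (2.69)
E = (1 - Fx * phi_x - Fy * phi_y) * 100
print(f"E = {E:.1f}%")   # 55.4% with these assumed numbers
```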

Here we can note the following:
1. In the initial matrices (2.56) and (2.58), certain elements can have a zero value.
2. During the performance of works, some elements of the matrices can remain unchanged if no works were performed at those sites.

Similarly, the state of works of the same type at different objects (for example, assembly, blocking and automation at all the stations, laying of electric lines, earthworks, etc.) can be estimated separately. To do this, one must perform horizontal summation in a matrix of the type (2.56) along the line corresponding to the concrete type of works. Then one computes the fraction of each element by dividing its cost by this sum, determines the entropy of each element as in (2.62) and (2.63), finds the total complicacy as in (2.64), and repeats this analysis as the works are being completed. The method described above allows a global and unambiguous estimation of the status of a construction system of any complicacy. For instance, the Ministry of Transport can build, side by side with the railroad, highways, ports, gas stations, and much more. The accomplishment of these works at any moment can be unambiguously estimated using the proposed method for the entire Ministry. We illustrate the application of this method with a concrete example. All the computations are presented in the form of a table (Table 2–7).

Table 2–7 Example of evaluation of completeness of complicated construction at an intermediate stage.

No.  Name                                                           x1     x2     x3      x4      x5      x6      x7      x8
1    Financing fraction for each object, P(xi)                      0.05   0.125  0.175   0.25    0.08    0.12    0.13    0.07
2    Object complicacy, H(xi) = −P(xi) ln P(xi)                     0.150  0.26   0.364   0.347   0.202   0.254   0.265   0.186
3    Percentage of works fulfillment at objects, k%                 28     37     42      63      45      86      91      12
4    Fraction of fulfilled works, P′(xi) = kP(xi)                   0.014  0.046  0.0735  0.1575  0.036   0.1032  0.1183  0.0084
5    Residual works at the objects, P(x0) = P(xi) − P′(xi)          0.036  0.079  0.1015  0.0925  0.044   0.0168  0.0117  0.0616
6    Residual complicacy of works, H(x0) = −P(x0) ln P(x0)          0.120  0.2    0.232   0.22    0.1374  0.0686  0.052   0.171
7    Relative residual complicacy, F = H(x0)/H(x)                   0.80   0.77   0.64    0.63    0.68    0.27    0.2     0.92
8    Efficiency of works fulfillment, E = [1 − H(x0)/H(x)] × 100%   20     23     36      37      32      73      80      8


By way of example, here we have presented a construction consisting of eight objects. All the computations in each cell of the table are explained by the formulas placed in the left-hand column. On the basis of all the computations, we determine the sums of the second and sixth lines:

Σ_{i=1}^{8} H(xi) = 2.028  and  Σ_{i=1}^{8} H0(xi) = 1.201

Hence, the relative residual complicacy of the entire construction is

F = Σ H0(xi) / Σ H(xi) = 1.201/2.028 = 0.59

A unified estimation of the accomplished works is

E = 1 − F = 0.41 = 41%

This method of unambiguous estimation of the completeness of complicated construction works at any stage of their fulfillment creates conditions for simplifying the operative construction management and saving time and means for its realization.
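The Table 2–7 computation can be reproduced in a few lines. Two caveats: k for x5 is taken here as 45% (the value consistent with the table's rows 4–8, rather than the printed 17%), and the table's printed H(x3) = 0.364 differs from −0.175 ln 0.175 ≈ 0.305, so the recomputed totals (F ≈ 0.61, E ≈ 39%) deviate slightly from the printed F = 0.59 and E = 41%.

```python
import math

P = [0.05, 0.125, 0.175, 0.25, 0.08, 0.12, 0.13, 0.07]   # financing fractions, line 1
k = [0.28, 0.37, 0.42, 0.63, 0.45, 0.86, 0.91, 0.12]     # fulfillment fractions, line 3

def h(p):
    """Static entropy of one share, H = -P ln P."""
    return -p * math.log(p) if p > 0 else 0.0

H = [h(p) for p in P]                         # object complicacies, line 2
P_res = [p - ki * p for p, ki in zip(P, k)]   # residual works, line 5
H_res = [h(p) for p in P_res]                 # residual complicacies, line 6

F = sum(H_res) / sum(H)   # relative residual complicacy of the construction
E = (1 - F) * 100         # unified estimation of the accomplished works
print(f"sum H = {sum(H):.3f}, sum H0 = {sum(H_res):.3f}, F = {F:.2f}, E = {E:.0f}%")
```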

2.14 Unambiguous evaluation of complex processing of natural resources

State-of-the-art processing of natural resources is characterized by the constantly increasing volumes of the mining industry. On the one hand, this leads to the involvement of ever-growing volumes of depleted natural resources in industry, since rich sources have been practically exhausted. On the other hand, the ecological requirements imposed on the processing industry keep growing. These two circumstances make it necessary to advance processing technologies toward the maximal usage of all valuable components of natural raw materials. As examples of such enterprises, we can mention the processing of multicomponent ores of nonferrous metals or the production of various mineral and even metal materials from Dead Sea water. It is as yet impossible to evaluate unambiguously the total efficiency of such combined industries, which makes it difficult to manage and optimize them. Such a situation requires the development of a method allowing an unambiguous estimation of the completeness of the complex usage of raw materials at all stages of the technology, which is sometimes rather ramified. First of all, it is necessary to have data on the number of tons of ore or the number of cubic meters of seawater annually used by the respective plants. In addition, complete data on the material composition of the ore or seawater are necessary. Denote the initial quantity of raw material by F tons/year. If we denote the fraction of useful components in the raw material by k, then their total amount coming into production


is kF tons/year. Besides useful components, this raw material contains waste rock, whose fraction is denoted by m. Clearly,

k + m = 1

The waste rock can contain various substances, sometimes very valuable ones, but at present they are not the target product. Let us determine the uncertainty of the composition of useful components that could be obtained purely theoretically at their ideal extraction. Denote the relative content of each component in this idealized balance by xi; then their sum is

x1 + x2 + x3 + … + xi + … + xn = 1    (2.70)

The components can be calculated in fractions of unity or as percentages. The entropy of this initial composition is determined, by definition, as

Hn = x1 ln x1 + x2 ln x2 + x3 ln x3 + … + xi ln xi + … + xn ln xn    (2.71)

In the numerical example below (Table 2–8), the contents are taken in percent, so each term is positive.

Denote the total yearly output of ready products of the plants under consideration by G tons/year. It is absolutely clear that

G < kF    (2.72)

is always valid. Denote the relative amount of each manufactured product by x′i. It is also clear that x′i < xi is always valid if x′i is calculated as a fraction of the ideal initial composition, and therefore

Σ_{i=1}^{n} x′i < 1 (or 100%)

The entropy of the obtained total production is

H′n = Σ_{i=1}^{n} x′i ln x′i < Hn    (2.73)

The efficiency of production using this complicated technology can be unambiguously determined as

E = (H′n / Hn) × 100%    (2.74)

Besides the total technological evaluation of the entire production complex, the analysis of obtaining each product separately can be performed in a similar way using the relation

Ei = [H′(xi) / H(xi)] × 100%    (2.75)

Table 2–8 Example of calculation of the process results.

No.  Produced product                                                  x1      x2     x3     x4     x5
1    Component content in initial raw material, %, xi                  52      12     16     8      12
2    Entropy of initial composition, H(xi) = xi ln xi                  205.46  29.8   44.36  16.63  29.8
3    Component output with respect to the initial content, x′i, %      43      9      14.85  6.25   10.75
4    Entropy of actual output, H′(xi) = x′i ln x′i                     161     19.78  40.06  11.45  25.8
5    Efficiency for each component, E = [H′(xi)/H(xi)] × 100%          78.3    66.37  90.3   68.85  86.6

However, the table also directly shows the simple percentage extraction of each component separately. This makes it possible to analyze the perfection level of the technology of obtaining each product. It is especially important that, using (2.75), we can analyze the level of the entire combined production. Such an analysis allows revealing technological reserves and deciding where and to what extent the production must be modernized in order to increase the general effect. Such an analysis, confirmed by financial calculations, can sometimes even justify a decrease in the output of one product in favor of increasing that of another in order to raise the total effect both technologically and financially. For the sake of simplicity and clearness, we present a concrete example of the application of the suggested method. Usually the number of products made from the same raw material is not very high, with a maximum of five or six. The calculations are presented in Table 2–8. To clarify the total technological efficiency of all departments of the enterprise, it is necessary to sum up the initial entropies of the raw material in line 2 and the obtained entropies of manufactures in line 4, and to compose a ratio of the obtained numbers:

Σ_{i=1}^{5} H(xi) = 326.05,  Σ_{i=1}^{5} H′(xi) = 258.09

The total efficiency of useful component usage in the production process is

E = [Σ H′(xi) / Σ H(xi)] × 100% = (258.09 × 100)/326.05 = 79.15%

In addition, this shows the presence of unused reserves in the total production. At the same time, the obtained result shows that the general usage of the raw material is satisfactory. A method of unambiguous evaluation of a complicated technological process using complex raw material has thus been developed. It allows an operative analysis of the various production stages and the modernization of lagging ones. This, in turn, allows intensifying the total effect through a targeted allotment of funds for the improvement of sectors possessing reserves for further growth.
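A sketch of the Table 2–8 computation; following the table, the entropies are evaluated as H = x ln x with contents in percent. The recomputed values agree with the printed ones to within the rounding of the table entries.

```python
import math

x_init = [52, 12, 16, 8, 12]           # initial contents, %, line 1 of Table 2-8
x_out = [43, 9, 14.85, 6.25, 10.75]    # outputs w.r.t. initial content, %, line 3

H = [x * math.log(x) for x in x_init]      # entropies of initial composition, Eq. (2.71)
H_out = [x * math.log(x) for x in x_out]   # entropies of actual output

E_each = [ho / hi * 100 for ho, hi in zip(H_out, H)]   # per-component, Eq. (2.75)
E_total = sum(H_out) / sum(H) * 100                    # total efficiency, Eq. (2.74)
print(f"E_total = {E_total:.1f}%")
```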

3 Dynamic component of entropy

3.1 Modeling and analogy—the basis of dynamic systems comprehension

While static entropy characterizes the complexity of the compositions of materials or substances participating in the processes of their transformation, dynamic entropy is caused by the behavior of these processes themselves. It characterizes the degree of irreversibility of all real transformations observed both in nature and in technology. To illustrate the principal characteristics of dynamic entropy, it is necessary to examine in detail an extremely complicated, practically totally chaotic phenomenon that allows obtaining a comprehensive pattern of the origin of dynamic entropy and its main properties. A sufficient number of such processes can be found in numerous branches of knowledge. Traditionally, changes occurring in gaseous systems are considered as such processes. Undoubtedly, a two-phase flow with a polyfractional solid phase in critical, rather than transportation, regimes is more complicated and chaotic. The critical regime of a two-phase flow is characterized by the following: a part of the solid phase moves upwards together with the ascending flow, and a part of it settles downward against the flow. This makes the chaotic character of such a flow considerably more complicated and increases its randomness. Such regimes are rather widely used in modern industry, either in processes of separation of bulk materials by particle size or in processes of beneficiation of natural materials by particle density or other parameters. They are also observed in nature, in the case of winds in canyons, ascending flows in deserts, and in other cases. Critical regimes of two-phase flows are characterized by numerous special features and internal connections. It is impossible to take all of them into account thoroughly and exactly, and therefore, when performing an analysis, the most general features are singled out.
As a result, one gets a simplified, idealized, rather approximate model of the process. Despite all this, a serious scientific model is, to a considerable degree, “synthetic.” It widely uses mathematical instruments and knowledge from various fields of physics, hydraulics, mineralogy, etc. On this basis, a model is developed which must lead to a sufficient exactness of the process description, adequately reflecting its laws. While constructing a mathematical model, one has to abstract away from a large number of the process peculiarities. Because of this, the rigidly specified results of mathematical research of a problem, and constructive solutions based on them, cannot be interpreted as the only possible ones. Today, when the physics of objects consisting of a mass of elements is not sufficiently understood and developed, various methods of problem solving can be explored.

Entropy of Complex Processes and Systems. DOI: https://doi.org/10.1016/B978-0-12-821662-0.00003-4 © 2020 Elsevier Inc. All rights reserved.

However, it is

not always possible to determine the optimal one among the various solutions. Therefore, one has to make a choice by comparing the merits and drawbacks of the available options. The basis of scientific analysis is the finding of quantitative links between the various factors of a process. A rigorous analysis based on physical laws and the analytical methods of mathematics gives the most reliable results. In an ideal case, such an analysis does not need the involvement of any experimental data. However, due to the complexity and diversity of cause-and-effect relationships, such an analysis succeeds only extremely rarely, in the simplest cases. Meanwhile, engineering practice cannot wait for rigorous theoretical solutions and seeks additional possibilities. Experimental data obtained both from the operation of factory equipment and from research using laboratory experimental facilities are generalized, and empirical relations are obtained on their basis. These relations are, as a rule, of a particular character. Their application beyond the range of variation of the parameters in which they were obtained leads to gross errors. Broadening the ranges of parameters in an experiment is not always possible. For instance, it is impossible to study a full-scale facility in laboratory conditions. Therefore, a transition from a laboratory model to an industrial design is generally very complicated and can result in numerous mistakes and corrections. Among the general methods of simplification of the regularities under study, the linearization of relations between the phenomena under study and the process results stands out. The diversity and complexity of such relations predetermines their nonlinear nature, and this leads to serious difficulties in the analysis and mathematical description of the process. A transition to linear relations considerably simplifies the analysis, but at the same time its precision decreases.
Here it is important to remain within certain limits and to admit a precision decrease only within limits that do not considerably distort the final result of the analysis. The diversity of the relations between the properties of an object and the process parameters, complicated by an insufficient understanding of the physics of the occurring phenomena, leads to insuperable difficulties in the establishment of quantitative regularities and to cumbersome and intricate design relations. This is usually passed over in silence, but we have to indicate clearly the gap between the level of the problems of turbulent two-phase flows and their critical regimes, on the one hand, and the possibilities of analytically obtained equations, on the other. The arising difficulties force us to simplify both the construction of the equations themselves and the conditions of unambiguity, which leads to a loss of precision in the obtained results. In such cases, numerical methods are often used. Numerical methods of solving differential equations are linked to concrete parameters of a process and limited by an accepted range of variation. The obtained results do not possess the generality of a solution and can be used only in a particular case. Attempts are made to solve theoretical equations by numerical methods in a sufficiently broad range of change of variables, and then empirical relations are fitted to the obtained results. In the case of a large number of factors, the realization of this method is characterized by high labor intensiveness, and the correctness of the obtained results cannot be guaranteed.


This situation is overcome, to a certain extent, by a transition to generalized variables compiled from elementary factors of the process according to definite rules. These new variables are dimensionless and have a definite physical meaning. With their aid, a connection is established not between the process factors, which are numerous, but between generalizing complexes integrating these factors. The relations obtained in generalizing complexes have the following features:

• They are more compact in comparison with dimensional factors;
• They allow formulating analytical solutions in a more laconic form;
• They are useful for the processing and formalization of experimental data;
• They allow performing calculations in any system of units, since they are dimensionless.

We especially emphasize that the integration of factors is not of formal character. In real processes, the influence of separate factors shows itself jointly, and not individually. Therefore, if these factors are combined into a complex, the latter reflects the process state. The use of generalized variables, each including several factors, increases the generality of the process description, since one value of the complex can be realized, strictly speaking, in an infinitely large quantity of combinations of numerical values of factors that it includes. Hence, it follows that these complexes can characterize not only isolated phenomena or processes, but also a group of similar phenomena or processes for which such complexes have identical numerical values. This forms the basis of the notion of physical similarity. Therefore such complexes are called similarity criteria. They are used for the modeling of complicated processes and apparatuses, for the processing of experimental data, and during analytical solution of technological problems. The formation of similarity criteria was an important stage in the development of science. It has turned out that lengthy differential equations obtained by analytical methods are not of great value per se. When using initial conditions and single-value conditions, their exact solution is practically impossible. However, on the basis of these equations, without solving them, we can formulate similarity criteria correctly, competently perform the experiment, and process the obtained results using these criteria. Basic analytical equations in the science of single-phase flows have been obtained using elementary simple models. Thus, the main equations of hydraulics are based on balance ratios. Here important details of the flow—its structure, turbulence with its developed spectrum of pulsations, situation on channel walls—are not taken into account, only the respective balances. 
And even such a simplified analytical model has led to the determination of principal laws and similarity criteria.
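As an illustration of how a dimensionless complex collapses many combinations of factors into one value, consider the Reynolds number, a standard similarity criterion; the numerical values below are arbitrary.

```python
def reynolds(rho, v, d, mu):
    """Reynolds number Re = rho*v*d/mu: one dimensionless complex
    combining four dimensional factors of the flow."""
    return rho * v * d / mu

# Two different combinations of factors (air flow around particles of
# different sizes, arbitrary values) give the same value of the complex,
# so the two flows are similar in this respect:
re1 = reynolds(rho=1.2, v=2.0, d=100e-6, mu=1.8e-5)   # 2 m/s, 100 um particle
re2 = reynolds(rho=1.2, v=1.0, d=200e-6, mu=1.8e-5)   # 1 m/s, 200 um particle
print(re1, re2)
```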

3.2 Substantiation of the physical analogy

We shall try to apply the methods of statistical mechanics as a basis for modeling two-phase flows. This mechanics, in contrast to classical Newtonian mechanics, was developed for complicated systems consisting of a large number of objects. Its main difference consists in the fact


that it is based on probabilistic notions. At first, statistical mechanics was developed for the analysis of gaseous systems. The main goal of statistical mechanics is the analysis of the process of the irreversible tendency of a complicated system toward the equilibrium state. As it has turned out, the equilibrium in this case can be not only static, but also dynamic. Statistical mechanics appeared as a product of the classical works of Boltzmann and Gibbs. The approaches that they suggested are essentially different. Boltzmann investigated the statistical properties of a system of colliding particles in ordinary three-dimensional space. He derived his famous kinetic equation for the density of the distribution over velocities and coordinates in the one-particle phase space, and showed that a general solution of this equation tends, in the limit t → ∞, to the Maxwell distribution. It can also be noted that, in essence, Boltzmann's method uses the general concept of Markov random processes, where the direction of the evolution of complicated systems is specified a priori at the beginning of the process. This was clearly exhibited in the development of visual models which confirmed Boltzmann's correctness, such as the Ehrenfest urn model or the Kac ring model. On the whole, the kinetic equation for a gaseous system derived by Boltzmann is approximate. It describes the system evolution correctly in the case of rarefied gases within definite intervals. Nowadays, the Boltzmann kinetic equation is a generally accepted basis for the numerical modeling of the kinetics of interacting particles. Gibbs' approach is based on the study of probabilistic distributions in the phase space of a system of interacting particles. The change of these distributions (the Gibbs ensembles) with time is governed by the reversible Liouville equation. Gibbs attempted to prove that any distribution of this type tends with the course of time, in some sense, to a canonical form.
The methods of solving this problem that he outlined did not lead to final results. One of the reasons for this is connected to substantial problems of ergodic theory, some of which remain unsolved today. At present, the development of statistical mechanics has achieved a great deal, but some of its drawbacks have also remained. Even today, a remark expressed long ago by Poincaré remains topical: “We deal with reversibility of prerequisites and irreversibility of consequences.” He thought that many vague aspects remained at the basis of the kinetic theory of gases and that it was far from completion, which is valid for its state even today. Nevertheless, the use of the kinetic theory of gases for about a century and a half after its creation has led to outstanding successes both in the development of thermodynamics and in many other processes connected with heat transformation. We will attempt to use the methods of statistical mechanics for at least an approximate analysis of two-phase flows with solid particles. In such flows we can trace a certain analogy with the axioms of the kinetic theory of gases. And although the differences are obvious and considerable, such an approach gives us some hope. Until now, nothing developed for two-phase flows has resulted in any reassuring theoretical substantiation aimed at the understanding of this complicated flow. Being aware of the conditional character of the analogy between these two complicated phenomena (the kinetic theory of gases and two-phase flows), in the analysis of the main regularities of the latter the results of all conclusions are constantly


compared with experimental results. This allows developing the theory of two-phase flows in the correct direction. The most interesting fact is that, in the long run, this approach has allowed the creation, for the first time, of an analytical theory for the estimation of separation results for various materials without involving any experimental (empirical) coefficients, which previously could never be dispensed with. As is known, one physical phenomenon can serve as an analog or model of another phenomenon, while the coincidence of all aspects of their behavior and characteristics is not obligatory. It is sufficient to reveal at least one common analogous characteristic in two quite different physical phenomena in order to admit that they are analogous in this respect. As a classical example, we can mention a pendulum and an oscillatory electric circuit. There is, indeed, nothing in common between them physically. However, their respective characteristics are analogous. As a pendulum swings, its deviation from the vertical axis is of a sinusoidal character. The current strength in an oscillatory circuit varies in exactly the same way. This has long served as a basis for the successful analysis of one phenomenon by analogy with another. Model hypotheses are intended to determine the physical mechanism and structure of the investigated phenomenon. These hypotheses reflect the level of our notions concerning the essence of a physical phenomenon. Their peculiarity consists in the possibility of their variation in the course of the development of science, because at the moment of their formation they correspond to the current level of knowledge. Therefore, model approximations are always relative with respect to a phenomenon. A characteristic example of their refinement is the model of the structure of the atom. Originally, it was considered as a minute indivisible particle.
Then the Thomson model appeared, representing the atom as a mixture of positive and negative charges (the “plum pudding” model). To replace this model, Rutherford's hypothesis was created, in which a positive nucleus is surrounded by a cloud of negative electrons. This hypothesis transformed into the planetary model of Bohr. Today, more complicated models are being discussed. As a rule, physical modeling is supplemented with a mathematical model. The mathematical model implies one or several formulas connecting the elements of the phenomenon into a comprehensive whole. We can assert in a well-grounded way that the similarity between two-phase flows and the behavior of gases is not less than the similarity between a pendulum and an oscillatory electric circuit. Therefore, in the first approximation, a gaseous system can be accepted as an analog of a two-phase flow. As a rule, even at insignificant concentrations, a huge quantity of particles takes part in two-phase flows. As an example to confirm this statement, we can mention the following data. At separation in an air flow with solid-phase concentration μ = 2.2 kg/m³ and an average particle size x = 30 μm, there are N = 5.7 × 10¹⁶ particles in a cubic meter of air, and at an average particle size x = 100 μm, N = 2 × 10¹⁵. If we take into account that, at an average capacity of various apparatuses with two-phase flows, up to 40,000–50,000 m³ of air pass through them per hour, it becomes clear that the amount of particles taking part in such


flow processes is large enough to consider them as a statistical system. In addition, it should be taken into account that solid particles occupy a relatively small volume in a flow. In both examples, the relative value of this volume reaches only β = 0.0008. This points to the fact that, despite the great number of particles, the distance between them is significant. The kinetic theory of ideal gases is based on the hypothesis that its molecules represent ideal solid balls located, while moving, at relatively large distances from each other. The short time intervals during which molecules interact allow us to consider these interactions as collisions. In a two-phase flow, particles are also sufficiently far from each other. It is experimentally established that solid particles in a flow interact rather intensely. According to experimental data, about 700–800 collisions per second occur per square centimeter of the surface of a control particle. The only hope of expanding the insight into the mechanism of two-phase flow is connected with the fact that its most general laws can be understood only using statistical methods analogous to the kinetic theory of gases. All other approaches applied until recently have shown their total helplessness in the face of the complexity of two-phase flows accompanied by a large range of random factors. As is known, the basic distinction of the statistical approach consists in the fact that it is based on an attempt to determine the state of the entire system at once, and not of the individual objects it consists of. A statistical model of two-phase flows in separation (critical) regimes is aimed at the solution of the following basic problems that remain unsolved as yet.

1. In modern theories based on the so-called “velocity hypothesis,” even the formulation of the problem of the connection between the principal factors of the process (medium velocity, solid phase composition, particle size and density, and apparatus design) and the distribution of various size classes in the separation products is absent. Finding the functional relation between them must become the principal problem of any separation theory, even a rather approximate one. Only the application of such a theory can substantiate the final goal of forecasting: to obtain the compositions of separation products by computational methods.
2. On the basis of existing theories, it is problematic to determine separation results as a function of the solid phase granulometry and consumed concentration.
3. Usually, in theoretical schemes of separation processes, principal attention is paid to the influence of the flow on solid particles. However, the solid phase also affects the moving medium, and such an influence is essential and must be considered when creating a theory of the process.
4. An important aspect of the correct development of a theory is to define the optimal flow velocity. Usually, it is assumed to be the final precipitation velocity of boundary-size particles. At that, everyone understands that separation occurs in suspension regimes, but believes that the velocities of particle hovering and precipitation are identical. However, this is far from the case. First, these parameters are of different physical nature: hovering velocity relates to the flow motion, while precipitation velocity relates to a solid particle.

Chapter 3 • Dynamic component of entropy

101

Second, the drag coefficient of a particle precipitating in a motionless medium differs from that of a hovering particle by a quantity affected by the medium flow regime. And third, a certain structure (velocity profile) is always established in a flowing medium. Local velocity values in the channel cross-section are different and can significantly differ from average velocities serving as a basis of computations. 5. In the “velocity hypothesis,” which is generally accepted today, there is no basic difference between separation processes realized in liquid or gaseous media. Usually, attempts are made to describe them using analogous empirical relations. However, they are essentially different due to a higher density of liquids and higher mobility of gases. Therefore there are many reasons to believe that principal laws of wet and dry separation, probably, somewhat match, but are basically different. 6. It is crucial that the “velocity hypothesis” requires the organization of the separation process so that the mean flow velocity is equal to the final precipitation velocity (or hovering velocity). As shown by the experience of operation and exploration of vertical cascade classifiers, the value of optimal flow velocity is a function of not only the particle size and boundary size, but also depends on the place of material input into the apparatus and the apparatus length. The principal tools of statistical analysis are based on various aspects of probability theory. Taking into account that chaos is a fundamental principle of order in a mass system, we will try to determine the presence of deterministic regularities in such a process. This is the most important goal in the comprehension of this concrete process. The kinetic theory of gases is based on the description of molecule motion using classical Newtonian mechanics. 
On the whole, it is considered that the kinetic theory is a theory of collisions and of the mean free path of particles between two subsequent collisions. It should be noted that the principal results of the kinetic theory of gases have been confirmed by all experimental facts obtained over about 150 years since the creation of this theory. This theory has been successfully applied not only in its own field, but also in many fields of science adjacent, even if remotely, to the kinetic theory of gases. Therefore it is of some interest to analyze the character of two-phase flow from a similar standpoint. The obtained results of such analysis must be compared with experimental data. The extent of their adequacy will allow for making a conclusion about the validity of a statistical approach to two-phase flows. However, in such analysis, a number of difficulties must be overcome. First, the size of dust particles is undoubtedly large in comparison with molecules. Therefore, when modeling the behavior of such particles, it is impossible to use differential calculus. Under certain limitations, one has to use finite-difference parameters. Strictly speaking, as for molecules, it is also problematic to use parameters of differential calculus, since molecules have a finite size. Second, the shape of dust particles is different from ideal balls. The shape of gas molecules is not ideally regular either. However, they were considered as balls, and on the whole, the obtained results of modeling were excellent. Keeping in mind that this is an incommensurably
coarser simplification, we assume in the beginning of the model analysis that solid dust particles are balls of regular shape and various sizes equivalent to particle volumes. In the model under study we are interested only in the projection of particle velocity on the vertical axis, since mass distribution occurs in the vertical direction—upwards and downwards. As further analysis shows, even such a simplified, not to say primitive, model can provide a sufficiently detailed description of the process results. Third, in statistical theory, gaseous systems are examined in a closed volume, where all possible directions of molecule motion are considered equiprobable. In a two-phase flow, a closed volume is out of the question, and in addition, the resultant motion of the flow has a predominant direction. A determining parameter of the statistical approach in the theory of gas systems is the potential energy of a continuum of particles depending on the temperature value. Clearly, in a two-phase flow the motion is provided by energy from other sources. As for temperature, its change does not practically influence the main parameters of such kinds of flows. One more remark should be made. Here we have to reject generally accepted parameters of a system (temperature, energy, heat capacity, work, enthalpy, etc.), being obliged to introduce new parameters determining potential extraction, probability of the particle motion direction, flow mobility, chaotizing factor of the flow, etc. At the same time, it should be noted that while considering a model of two-phase flow, ideas and methods of gas mechanics will be used. In the course of this examination, we obtain certain relations with the structure of their formulas recalling the laws of statistical physics. And although these laws and the parameters they contain have a totally different physical sense, names accepted in statistical mechanics, for example, Boltzmann’s factor, Gibbs’ factor, statistical sum, etc. 
will be assigned to them.

3.3 Foundations of the statistical model of critical two-phase flows

An enormous quantity of solid particles participating in a two-phase flow can be represented, to the first approximation, as solid balls, by analogy with the representation of molecules in the statistical theory of ideal gases. This gives reasons to apply, by analogy, certain methods of the kinetic theory to the process under study. Boltzmann’s theory of gases shows that, in order to develop a successful model of a process, it is not obligatory to strive for a comprehensive description of this process. It is enough to define an elementary scheme of the process, which must reflect its essence. First, we examine the behavior of one narrow size class in a flow. It was clarified in a purely empirical way that, within a certain range of concentrations, each narrow class of particles behaves in a flow autonomously, irrespective of the behavior of other classes. This is confirmed by the invariance of separation curves with respect to the initial supply composition. Here we assume that each class consists of identical particles.


An ordinary and, probably, the only method of formal description of a system consists in the choice of a mathematical model successfully corresponding to the characteristics of this system. When developing the model, we consider the total amount of particles moving together with an ascending flow in both directions, upwards and downwards, through a certain volume of the channel. This volume, or its part limited at different heights by two parallel planes, can be considered, together with the solid particles, as a statistical system. In such a system we are interested in the distribution of the solid-phase mass in both directions. We ignore other characteristics of the process, such as interactions of particles with each other and with the channel walls, local irregularities of concentration fields, the medium motion regime, and other factors characterizing a two-phase flow. We also ignore the true velocities of particles and their directions in space. We consider only the projections of these particles’ velocities on the vertical axis. We are not interested even in the values of these projections; our attention is concentrated on their direction only. The directions of the projections of all particles’ velocities in the volume under study make up a statistical system. Using the ergodic principle, we can examine this situation in a somewhat different aspect. Observation of one fixed particle making N subsequent independent steps up and down along the vertical axis can lead to an analogous result. The final result is a displacement of this particle in one of the directions after N steps. Such a situation is called a random walk. We assume that the size class under study, forming a stationary statistical system, consists of N equivalent particles. Each of these particles has a chance, or probability, to ascend equal to a, or to descend, equal to b. Clearly, at every step of one particle,

$$a + b = 1$$
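The random-walk description above can be sketched numerically. The following is a minimal Monte Carlo illustration, not taken from the book: the step and trial counts are arbitrary, and equal probabilities a = b = 0.5 are assumed.

```python
import random

def net_displacement(n_steps, a=0.5):
    """One random walk: each step is +1 (up, probability a) or -1 (down)."""
    return sum(1 if random.random() < a else -1 for _ in range(n_steps))

random.seed(1)
N = 10_000                                    # steps per walk (illustrative)
walks = [net_displacement(N) for _ in range(1_000)]

# With a = b, the mean displacement stays near zero relative to N,
# and every displacement has the parity of N (here: even).
mean = sum(walks) / len(walks)
print(mean)
```

The parity observation anticipates the later remark that the imbalance in particle orientation is always even.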

We can assert that among N particles, n of them have a probability to ascend and (N − n) to descend. Here a discrete distribution of probabilities takes place, since n can take on any whole value from 0 to N. While considering characteristics of a system, it becomes necessary to use two parameters for the upward (n) and downward (N − n) orientations, which leads to a certain inconvenience. It has turned out that it is possible to decrease the number of such parameters by introducing another indicator for a single-valued account of particle direction in connection with their quantity, as follows:

$$\varepsilon_f = \frac{N}{2} + z, \qquad \varepsilon_c = \frac{N}{2} - z \tag{3.1}$$


where z is a separation factor. It is clear that the fractional extraction imbalance amounts to

$$\varepsilon_f - \varepsilon_c = 2z$$

In the case of equiprobable separation, when ε_f = ε_c (a fraction of 0.5 into each product), the quantity z = 0. In a physical sense, the separation factor z is equal to the number of particles by which the values ε_f and ε_c deviate from the equiprobable distribution. These parameters differ only by a constant, and their derivatives are the same in modulus. Since each particle has two orientations, the number of system states is 2^N. It is clear that for a system of N particles there are N! ways of their placement; however, among them there can be n₁ ways ensuring the acquisition of z₁. Here it is necessary to introduce one more notion for a statistical system: namely, stationary states of the system, or of its part, that have the same separation factor, or a value of it within a narrow interval, are regarded as identical to each other. We call such states self-similar. We emphasize two basic aspects of the definition of self-similarity. First, this definition is applicable only to the separation factor magnitude, and not to the system states, all of which are different. Second, practical evaluation of self-similarity under the conditions of a real process is determined in many respects by the perfection of the experimental method. Using a more perfect method, we can reveal a difference in the extraction where it was seemingly absent, if the particles are distributed into narrower classes. In this respect, we should make two more observations. First, it is perfectly clear that any states of a system which are self-similar with respect to the separation factor are equiprobable. Second, from the standpoint of the results of the process under study, situations can arise where the statistical properties of a system are of no interest any more, since their probability is vanishingly small. Apparently, the fractional extraction value should be taken as the principal parameter for modeling the separation process:

$$F_f(x) = \frac{r_f}{r_s}$$

where r_f is the particle output from a fraction into the fine product, and r_s is the content of these particles in the initial feed. As established, this characteristic in its universal version is the main invariant technological parameter of a process. A special merit of fractional extraction is that, under the conditions of a real process with an immense number of random factors, it acquires an explicitly expressed deterministic character. The dependence of F_f(x) on the process parameters forms the separation curves in respective coordinates.
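As a sketch of how this parameter is computed in practice, the class labels and masses below are hypothetical, not data from the text:

```python
def fractional_extraction(r_f, r_s):
    """F_f(x) = r_f / r_s: share of a size class reporting to the fine product."""
    if r_s <= 0:
        raise ValueError("size class absent from the feed")
    return r_f / r_s

# Hypothetical contents (kg) of three narrow size classes
# in the initial feed and in the fine product.
feed = {"0-30 um": 12.0, "30-60 um": 20.0, "60-100 um": 8.0}
fine = {"0-30 um": 10.8, "30-60 um": 9.0, "60-100 um": 0.4}

curve = {x: fractional_extraction(fine[x], feed[x]) for x in feed}
print(curve)
```

Plotted against particle size, such values form the separation curve mentioned above.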

Chapter 3 • Dynamic component of entropy

105

We should especially note that fluctuations of fractional separation curves are insignificant and have a markedly random character. The stationary state of a system does not appear at once, but after the lapse of a certain time from the beginning of the process or after external perturbations (the relaxation time). Here it is necessary to determine how the parameters describing directly measurable values are interconnected and how they are formed. The number of possible values of potential extraction for the system under study is only (N + 1). For example, for two particles one can obtain three values of the separation factor:
1. ↑↑ both particles are oriented upwards (z = +1);
2. ↓↓ both particles are oriented downwards (z = −1);
3. ↑↓ and ↓↑ the particles have different orientations (z = 0).
Note that the two configurations in the latter case are self-similar. Thus, the number of states exceeds the number of possible values of potential extraction. For example, at N = 10 there are 2^N = 1024 states of a system, which correspond to only 11 different values of potential extraction. We assume that N is an even number; we examine cases where N is very big, and in this situation it does not matter whether N is even or odd. Note that the numerical difference in particle orientation is always even and amounts to

$$\left(\frac{N}{2}+z\right) - \left(\frac{N}{2}-z\right) = 2z$$

This parameter varies within the range

$$-\frac{N}{2} \le z \le +\frac{N}{2}$$

The separation factor is convenient, since its magnitude unambiguously characterizes the fractional distribution into both products, and operations with the two parameters ε_f and ε_c are not needed. In fractional form, dividing both parts of relation (3.1) by N, we obtain

$$F_f(x) = \frac{1}{2} + \frac{z}{N}, \qquad F_c(x) = \frac{1}{2} - \frac{z}{N} \tag{3.2}$$

In this case it is obvious that F_f(x) and F_c(x) differ only by a constant, and their derivatives are the same in modulus.
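Relations (3.2) are easy to verify numerically; a small sketch with arbitrary illustrative values of N and z:

```python
def extraction_fractions(N, z):
    """Fractional extractions (3.2): F_f = 1/2 + z/N, F_c = 1/2 - z/N."""
    return 0.5 + z / N, 0.5 - z / N

N, z = 1000, 120            # illustrative values with |z| <= N/2
F_f, F_c = extraction_fractions(N, z)

# The two curves differ by the constant 2z/N and always sum to unity.
print(F_f, F_c, F_f - F_c, F_f + F_c)
```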


3.4 Definition of distribution parameters

In this section we define more precisely the details of the accepted model. While developing the model, we consider a certain amount of identical solid particles moving up and down in a vertical direction together with an ascending flow through a limited volume of space in the critical flow regime. In such a type of flow, we are interested only in the redistribution of the initial material, more exactly, in the fractional extraction of various size classes into the upper and lower products. Therefore we ignore the magnitude of particle velocity, whose magnitudes and directions in space are different, and consider only the direction of this velocity, that is, its projection on the vertical axis (Fig. 3–1), for each particle. Naturally, it is a very simplified model. However, it is not simpler than the model of a moving medium, on whose basis all principal regularities and similarity criteria for hydraulic single-phase systems have been obtained. The direction of the velocity projection for each particle in this model can be oriented in two ways only: upwards and downwards. Note that the probability of this orientation for each particle is independent of the orientation of the others. Thus, the process model to be considered reflects the main idea of critical flow: oppositely directed motion of particles in a flow. Taking into account only the velocity direction, we introduce the notation a for the upwards direction and b for the downwards direction, according to Fig. 3–1.

FIGURE 3–1 Statistical model of two-phase flow: particles in the channel volume bounded by planes A and B, each marked a (velocity projection upwards) or b (velocity projection downwards).


It can be noted that the probability of zero velocity projection on the vertical axis is vanishingly small, since at such a moment the particle must be either motionless with respect to the walls or moving strictly normal to the vertical. The probability of such a state approximately equals the probability for a tossed coin to land on its edge. From the standpoint of the process under study, this probability is so small that it stops being of interest. In principle, for two-phase flows of a different nature, the principal axis can be located differently, for example, horizontally for horizontal or centrifugal flows, or aslant for inclined flows. For separation in an ascending flow, it is most natural to assume that the principal axis is vertical. A system implies the totality of all particles passing in both directions through a height-restricted space of the flow. In Fig. 3–1 the system is restricted by lines A and B. The object of our study is not any sort of system, but only a system with a steady-state process, that is, a system in a stationary state. Since there is no constant material accumulation in the singled-out space, because the total output of both products in a steady-state process always equals the initial feed, we can assume that the number of particles in the singled-out volume is practically constant at any moment of time. Here it is necessary to use only one notion of statistical mechanics, namely, the notion of a stationary state of a system. A property of the stationary state of a physical system is that its external parameters are time-independent. Such a state of a system, with the velocity of the ascending medium flow being constant in time, is characterized by the fact that fractional separation of various size classes does not explicitly depend on time; it is, in fact, a deterministic value, which is confirmed by all experiments. Its fluctuations therefore are insignificant and have a markedly random character.

Here the most important thing is that such stationary states of the system under study are countable, although their number can be very large. From the statistical point of view, the disorder in a system is determined by the number of various ways of distributing a certain totality of objects. The greater the number of such objects, the higher is the probability of their random distribution rather than of an ordered state. Let us count the number of different ways of distributing particles in the model under study so as to satisfy certain restrictions imposed on the system. To represent some state of the system, one can use a visual image such as that shown in Fig. 3–1, or a symbolic record

$$a_1 b_2 a_3 a_4 b_5 a_6 b_7 \ldots a_i b_j \ldots b_N \tag{3.3}$$

We start computations from the simplest case. In a system of two particles, multiplying (a₁ + b₁)(a₂ + b₂), we obtain four possible states:

$$(a_1 + b_1)(a_2 + b_2) = a_1 a_2 + a_1 b_2 + b_1 a_2 + b_1 b_2$$

Note that the second and third states of the system are self-similar, since one particle in them is oriented upwards and another downwards.


We can equally well perform the multiplication of the expression (↑₁ + ↓₁)(↑₂ + ↓₂). A defining function for three particles gives:

$$(a_1 + b_1)(a_2 + b_2)(a_3 + b_3) = a_1 a_2 a_3 + a_1 a_2 b_3 + a_1 b_2 b_3 + a_1 b_2 a_3 + b_1 a_2 b_3 + b_1 b_2 b_3 + b_1 a_2 a_3 + b_1 b_2 a_3$$

Here we obtain eight states of the system, which corresponds to 2³ = 8. At the same time, these eight states of the system yield only four values of potential extraction. The product of N multipliers in (3.3) can be written without taking into account the ordinal numeration of particles, which is not a matter of principle from the standpoint of the considered process. Each possible distribution of particles will be called a system configuration, or simply a configuration. For N particles, it is possible to obtain 2^N probable configurations, since each particle has a direction probability equal to 1/2. The probability of one such configuration is

$$P = \frac{1}{2^N}$$
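The counting of states and of self-similar configurations can be verified by brute force for small N; a sketch using N = 10, the value discussed in the text:

```python
from itertools import product
from collections import Counter

N = 10
# A configuration is a tuple of +1 (upward) / -1 (downward) orientations.
states = list(product((+1, -1), repeat=N))

# The orientation imbalance of a configuration equals 2z = n - (N - n);
# configurations sharing one value are "self-similar".
by_imbalance = Counter(sum(s) for s in states)

print(len(states))          # 2**10 = 1024 states
print(len(by_imbalance))    # N + 1 = 11 values of the separation factor
print(by_imbalance[0])      # 252 self-similar states with z = 0
```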

In a more general case, we assume that n out of N particles are oriented upwards. In this situation it is necessary to determine the probability P(n) of upward orientation for n particles out of N. As is known, the probabilities of upward and downward orientations for one particle are equal to a and b, respectively. As these orientations are statistically independent, according to the properties of probabilities we can write, for one configuration, that the probability of its state is

$$P(n) = \underbrace{a \times a \times \ldots \times a}_{n} \times \underbrace{b \times b \times \ldots \times b}_{N-n} = a^n b^{N-n} \tag{3.4}$$

However, in the system under study, a concrete orientation of all the particles can be realized in C_N(n) different ways. This parameter is called the number of combinations of N particles with respect to n. We determine C_N(n), the number of possible configurations of a system at which the situation with n particles directed upwards and (N − n) directed downwards is realized. The desired probability P(n) of the case with n out of N particles oriented upwards equals the probability of the realization of either the first, or the second, etc., up to the last of the C_N(n) possible configurations. The desired probability P(n) thus equals the probability (3.4) multiplied by the coefficient C_N(n). Thus, it amounts to


$$P(n) = C_N(n)\, a^n b^{N-n}$$

Now we have to introduce the identification of C_N(n) into this computation. The possible number of ordered arrangements J_N(n) can be written as

$$J_N(n) = N(N-1)(N-2) \ldots (N-n+1)$$

It can be written in a shorter form using factorials:

$$J_N(n) = \frac{N(N-1)(N-2) \ldots (N-n+1) \ldots 2 \cdot 1}{(N-n) \ldots 2 \cdot 1} = \frac{N!}{(N-n)!}$$

All the particles oriented upwards are equivalent; they differ only in the permutation of indices. The number of possible permutations of n indices equals n!. Therefore, we obtain the desired number C_N(n) of different configurations by dividing J_N(n) by n!, that is,

$$C_N(n) = \frac{N!}{n!(N-n)!} \tag{3.5}$$

At that, the probability to be computed takes the form

$$P(n) = \frac{N!}{n!(N-n)!}\, a^n b^{N-n} \tag{3.6}$$

In the case of a = b = 1/2, the probability amounts to

$$P(n) = \frac{N!}{n!(N-n)!} \left(\frac{1}{2}\right)^N$$

The obtained dependence (3.6) is reminiscent of a term of the binomial distribution. Expanding the binomial (a + b)^N in powers of a and b, we can see that the coefficient of a^n b^(N−n) in such an expansion equals the number of terms comprising a as a multiplier n times and b in the same role (N − n) times, that is,

$$(a+b)^N = \sum_{n=0}^{N} \frac{N!}{n!(N-n)!}\, a^n b^{N-n} \tag{3.7}$$

In the case under study, where a + b = 1, this equation is equivalent to the following:

$$\sum_{n=0}^{N} P(n) = 1$$

Thus, it is shown that the sum of probabilities for all possible values of n equals unity, as required by the normalization condition.
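Formulas (3.6) and (3.7) can be exercised with the standard library; a sketch with an arbitrary illustrative a:

```python
from math import comb

def P(n, N, a):
    """Binomial probability (3.6): C_N(n) * a**n * b**(N-n) with b = 1 - a."""
    return comb(N, n) * a**n * (1 - a)**(N - n)

N, a = 50, 0.3
total = sum(P(n, N, a) for n in range(N + 1))
print(total)    # normalization: the probabilities sum to 1
```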


To understand how the probability value P(n) depends on n, we examine the character of the variation of the parameter C_N(n) according to relation (3.5). First, this parameter is symmetrical with respect to the substitution of n with (N − n), that is,

$$C_N(n) = C_N(N-n)$$

In addition, C_N(0) = C_N(N) = 1, since 0! = 1. Second, the relationship

$$\frac{C_N(n+1)}{C_N(n)} = \frac{n!(N-n)!}{(n+1)!(N-n-1)!} = \frac{N-n}{n+1}$$

is valid. It follows from the analysis of this relationship that C_N(n) passes through a maximum near n = N/2, and that the magnitude of C_N(n) at this maximum is very large if, of course, N is large. Now it is possible to understand the character of the dependence of the probability P(n). For a = b = 1/2, the probability P(n) has a maximum in the vicinity of n = N/2. If, however, a > b, the maximum exists all the same; it shifts to values n > N/2 and also becomes rather sharp. This explains, from the standpoint of statistical mechanics, the stability of fractional separation curves not only in the optimal regime at F_f = F_c = 1/2, but also at their other values, which also follows from experiment. For the separation factor we can write the following:

$$2z = n - (N - n) = 2n - N$$

Hence, only one value of z corresponds to a definite value of n, and vice versa:

$$n = \frac{N}{2} + z \tag{3.8}$$

Therefore, the probability P(z) that z takes on a definite value must coincide with the probability P(n) that n takes on the value following from (3.8). Therefore

$$P(z) = P\!\left(\frac{N}{2} + z\right)$$

The most probable situation, that is, the probability maximum, corresponds to z = 0. In a somewhat different form, the expansion (3.7) can be rewritten as a sum over z:

$$(a+b)^N = \sum_{z=-N/2}^{+N/2} \frac{N!}{\left(\frac{N}{2}+z\right)!\left(\frac{N}{2}-z\right)!}\, a^{\frac{N}{2}+z}\, b^{\frac{N}{2}-z}$$
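Both properties of C_N(n) noted above, the recurrence ratio and the maximum near n = N/2, can be checked directly; a minimal sketch with an arbitrary even N:

```python
from math import comb

N = 40

# Ratio of successive coefficients: C_N(n+1)/C_N(n) = (N - n)/(n + 1),
# checked here in cross-multiplied (integer) form.
for n in range(N):
    assert comb(N, n + 1) * (n + 1) == comb(N, n) * (N - n)

# Symmetry C_N(n) = C_N(N - n) and the location of the maximum.
assert all(comb(N, n) == comb(N, N - n) for n in range(N + 1))
n_max = max(range(N + 1), key=lambda n: comb(N, n))
print(n_max)    # N/2 = 20
```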


Consider the dependence (3.7). The product a^(N/2+z) b^(N/2−z) enumerates all possible separation factors within the range −N/2 ≤ z ≤ +N/2, while the binomial coefficients show the number of self-similar states of a system with a fixed number of particles oriented upwards and downwards. If N is large, it is difficult to determine the probability P(z), since the formula contains factorials. One can choose an approximation which brings this dependence to a rather simple form. In a general case, for z ≠ 0 we obtain a dependence

$$P(N, z) = \frac{N!}{\left(\frac{N}{2}+z\right)!\left(\frac{N}{2}-z\right)!}\, a^{\frac{N}{2}+z}\, b^{\frac{N}{2}-z}$$

or

$$P(N, z) = \varphi(N, z)\, a^{\frac{N}{2}+z}\, b^{\frac{N}{2}-z}$$

On the one hand, the φ(N, z) are binomial coefficients. On the other hand, they can be taken as the self-similarity value, since this dependence shows the number of states of a system of particles with the same value of z in a concrete distribution. We perform computations under the conditions that N ≫ 1 and z ≤ N/2; here z is any integer between −N/2 and N/2. Taking the logarithm of both parts, we arrive from φ(N, z) at the expression

$$\ln\varphi = \ln(N!) - \ln\!\left[\left(\frac{N}{2}+z\right)!\right] - \ln\!\left[\left(\frac{N}{2}-z\right)!\right]$$

Examine separate parts of this expression:

$$\ln\!\left[\left(\frac{N}{2}+z\right)!\right] = \ln\!\left(\frac{N}{2}!\right) + \sum_{k=1}^{z} \ln\!\left(\frac{N}{2}+k\right)$$

$$\ln\!\left[\left(\frac{N}{2}-z\right)!\right] = \ln\!\left(\frac{N}{2}!\right) - \sum_{k=1}^{z} \ln\!\left(\frac{N}{2}-k+1\right)$$

Sum up these two expressions:

$$\ln\!\left[\left(\frac{N}{2}+z\right)!\right] + \ln\!\left[\left(\frac{N}{2}-z\right)!\right] = 2\ln\!\left(\frac{N}{2}!\right) + \sum_{k=1}^{z} \ln\frac{\frac{N}{2}+k}{\frac{N}{2}-k+1} \tag{3.9}$$

Proceeding from the suggestion that N/2 − k + 1 is approximately equal to N/2 − k, the second summand in (3.9) amounts to

$$\sum_{k=1}^{z} \ln\frac{\frac{N}{2}+k}{\frac{N}{2}-k} = \sum_{k=1}^{z} \ln\frac{1+\frac{2k}{N}}{1-\frac{2k}{N}} = \sum_{k=1}^{z} \ln\frac{1+x}{1-x} \tag{3.10}$$

where x = 2k/N. It is clear from the definition that x ≪ 1 is always valid.


We perform additional computations to reveal the meaning of (3.10). Remember that

$$e^x = 1 + x + \frac{x^2}{2!} + \ldots$$

Taking into account that x ≪ 1, we can restrict ourselves to the first two terms. Then

$$e^{x} \approx 1 + x, \quad \text{that is,} \quad x \approx \ln(1+x)$$

$$e^{-x} \approx 1 - x, \quad \text{that is,} \quad -x \approx \ln(1-x)$$

This means that

$$\ln\frac{1+x}{1-x} \approx 2x, \qquad \ln\frac{1+\frac{2k}{N}}{1-\frac{2k}{N}} \approx \frac{4k}{N},$$

and the dependence (3.10) takes on the form

$$\sum_{k=1}^{z} \ln\frac{1+x}{1-x} \approx \frac{4}{N}\sum_{k=1}^{z} k = \frac{4\,z(z+1)}{2N} \approx \frac{2z^2}{N} \tag{3.11}$$

Thus, the required dependence is transformed into the following form:

$$\varphi(N, z) \approx \frac{N!}{\frac{N}{2}!\,\frac{N}{2}!}\, e^{-\frac{2z^2}{N}}$$

The obtained result can be written as

$$\varphi(N, z) = \varphi(N, 0)\, e^{-\frac{2z^2}{N}} \tag{3.12}$$

This expression must be understood as the number of states of the system under study at the separation factor value equal to z. The coefficient of the exponential can be simplified using Stirling’s formula:

$$n! \approx (2\pi n)^{\frac{1}{2}}\, n^n\, e^{-n+\frac{1}{12n}+\ldots}$$

Taking this into account,

$$\frac{N!}{\frac{N}{2}!\,\frac{N}{2}!} = \frac{\sqrt{2\pi N}\, N^N e^{-N}}{\pi N \left(\frac{N}{2}\right)^N e^{-N}} = 2^N \sqrt{\frac{2}{\pi N}} \tag{3.13}$$

Hence, the relationship (3.12) is

$$\varphi = 2^N \sqrt{\frac{2}{\pi N}}\, e^{-\frac{2z^2}{N}} \tag{3.14}$$
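The accuracy of this Gaussian form is easy to probe numerically; a sketch comparing the exact count of self-similar states with the approximation for an illustrative N:

```python
from math import comb, sqrt, pi, exp

def phi_exact(N, z):
    """Exact number of configurations with separation factor z (N/2 + z up)."""
    return comb(N, N // 2 + z)

def phi_gauss(N, z):
    """Approximation (3.14): 2**N * sqrt(2/(pi*N)) * exp(-2*z**2/N)."""
    return 2**N * sqrt(2 / (pi * N)) * exp(-2 * z**2 / N)

N = 100
for z in (0, 5, 10):
    print(z, phi_gauss(N, z) / phi_exact(N, z))   # ratios close to 1

# The approximate counts also sum back to about 2**N, the total
# number of system states.
total = sum(phi_gauss(N, z) for z in range(-N // 2, N // 2 + 1))
print(total / 2**N)
```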

Analyze the obtained dependence. The validity of this expression can be checked by summing it up over all z values from −N/2 to +N/2:

$$\sum_{z=-N/2}^{+N/2} 2^N \sqrt{\frac{2}{\pi N}}\, e^{-\frac{2z^2}{N}}$$

Summing over all values of z can be substituted with an integral:

$$\sum_{z=-N/2}^{+N/2} 2^N \sqrt{\frac{2}{\pi N}}\, e^{-\frac{2z^2}{N}} \approx \int_{-\infty}^{+\infty} 2^N \sqrt{\frac{2}{\pi N}}\, e^{-\frac{2z^2}{N}}\, dz \tag{3.15}$$

We introduce a new variable

$$\frac{2z^2}{N} = y^2,$$

so that

$$dy = \sqrt{\frac{2}{N}}\, dz \quad \text{and} \quad dz = \sqrt{\frac{N}{2}}\, dy$$

With regard to this, the dependence under study is transformed into

$$2^N \sqrt{\frac{2}{\pi N}} \int_{-\infty}^{+\infty} e^{-y^2} \sqrt{\frac{N}{2}}\, dy = \frac{2^N}{\sqrt{\pi}} \int_{-\infty}^{+\infty} e^{-y^2}\, dy \tag{3.16}$$

Taking into account that, according to reference data,

$$\int_{-\infty}^{+\infty} e^{-y^2}\, dy = \sqrt{\pi},$$

we obtain

$$F = \int_{-\infty}^{+\infty} 2^N \sqrt{\frac{2}{\pi N}}\, e^{-\frac{2z^2}{N}}\, dz = 2^N \tag{3.17}$$

which exactly corresponds to the total number of the system states. The distribution determined by the right-hand side of the dependence (3.14) is the Gauss distribution. It has a maximum centered at z = 0. For z² = N/2, the quantity φ(N, z) is e times smaller than its maximal value. Hence it follows that

$$\frac{z^2}{N^2} = \frac{1}{2N}, \quad \text{and} \quad \frac{z}{N} = \frac{1}{\sqrt{2N}}$$

Note also that the examined distribution is symmetrical with respect to z; that is, the values of φ(N, z) for +z and −z coincide. For such curves, a measure of the relative distribution width is the root-mean-square deviation. Its value, as known, is


σ5

pffiffiffiffi N

pffiffiffi The ratio of the root-mean-square deviation to the maximal value equals NN 5 p1ffiffiffi : N If the total number of particles constituting a system equals N  1016 ; as was determined for separation conditions, then the relative distribution width is on the order of 1028 . Therefore in this case we obtain a sharp maximum at the average value z 5 0: The physical sense of this is that the separation factor actually attained in a concrete apparatus is not, in principle, the only possible, but the most probable of all possible variants. Here we have managed to show that the probability of this value suppresses greatly any other conceivable distribution by its magnitude that it can be considered the only one possible, that is, deterministic, for the given conditions. This explains reliably enough the constancy of the fractional separation curve for a process in which a huge number of particles participate simultaneously. Such constancy is confirmed by all available experimental material. We introduce one more notion for the system under study. The parameter 2z unambiguously determines the imbalance in the distribution of particles of a narrow size class between two directions. One of the main parameters of the distribution is the ascending flow velocity. Obviously, the separation parameter value (2z) is functionally or correlationally connected with the flow velocity. But the flow velocity reflects only one side of the process—its kinetic component. However, there is another factor in a flow, which balances the kinetic component. Introduce one more parameter reflecting the potential component of this process, which must be proportional to the separation parameter, and denote it by the symbol I: We call this symbol the “lifting factor.”

I 5 2zc;

(3.18)

where c is the proportionality coefficient. By analogy with the kinetic theory of gases, this coefficient must include the gravitational parameter equal, as is known, to gd: In addition, to reflect the potential component, it must include the mass of a particle of a narrow size class. In the final form, I 5 2 2zgdm

(3.19)

The dimension of this parameter is kg·m, that is, it has the dimension of energy. The lifting factor expresses the energy ensuring the imbalance of particles. The minus sign appears because the gravity force is directed downwards, against the flow. On the whole, this parameter estimates the value and direction of fractional extraction, since it includes z. It has a generalizing significance for fields of various natures (centrifugal, magnetic, electric, etc.). For them, its form differs from that in a gravitational field, although the method of its derivation remains the same. The differential of the lifting factor is written as

Chapter 3 • Dynamic component of entropy

115

dI = −2gdm·dz

The differentials of I and of z or N should be understood not as infinitesimal quantities, but as numbers of particles several orders of magnitude smaller than these quantities. There is no other choice here; instead of the sign of the differential, one can equally well apply the sign of a difference, for example, δN. To counteract the potential energy I, kinetic energy must be developed on the part of the flow, whose level is determined by the minimal effort per unit area of the flow cross-section, which can be denoted by f. The dimension of this parameter is kg/m². Potential extraction determines the deviation of particle orientation from equilibrium, for which z = 0. At the same time, one should take into account that the potential energy of a particle in a flow is gdm. In this dependence,

m = V(ρ − ρ₀),

where m is the particle mass, V is the particle volume, and ρ, ρ₀ are the densities of the material and the medium. For air flows, ρ/ρ₀ > 1000; therefore, in this case we can assume that

m = Vρ
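As a quick numeric illustration of the lifting factor I = −2zgdm with m = Vρ, the following Python sketch can be used; the particle size, material density, and the imbalance z below are assumed values chosen only for this example:

```python
import math

# Illustrative evaluation of the lifting factor I = -2*z*g*d*m (Eq. 3.19).
# All input values are assumed, chosen only for this sketch.
g = 9.81          # gravitational acceleration, m/s^2
d = 100e-6        # particle diameter of the narrow size class, m (assumed)
rho = 2650.0      # density of the particle material, kg/m^3 (assumed)
V = math.pi * d**3 / 6.0   # volume of a spherical particle, m^3
m = V * rho                # m = V*rho, valid for air flows (rho/rho0 > 1000)
z = 1.0e6                  # assumed imbalance of the separation factor
I = -2.0 * z * g * d * m
print(I)  # negative, since gravity acts downwards against the flow
```

Only the sign and the proportionality matter here; the minus sign reproduces the convention adopted in the text.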

3.5 Substantiation of the entropy parameter for two-phase flows

Now we revert to the search for an approximation for the entire expression (3.7) written as

P(n) = [N! / (n!(N − n)!)]·a^n·b^(N−n)

(3.20)

where a = 1 − b. It has been shown that P(n) has a maximum whose sharpness rapidly grows with increasing N. This means that P(n) becomes negligibly small if n differs appreciably from the mean ⟨n⟩ corresponding to the maximum. Therefore the values of n for which the probability is of essential importance must be located in the vicinity of ⟨n⟩. We seek a simplified expression for P(n) in this relatively small range of n values, which is the one of interest. If N and ⟨n⟩ are large, the values of n of interest to us are also large. In that case, P(n) can be considered a smooth function of the continuous variable n, although only whole numbers n have physical meaning. The second circumstance facilitating the problem of approximation is that ln P(n) is a much more slowly varying function of n than P(n). Taking the logarithm of (3.20), we obtain

ln P(n) = ln N! − ln n! − ln(N − n)! + n·ln a + (N − n)·ln b

(3.21)

116

Entropy of Complex Processes and Systems

Note that at n = ⟨n⟩, P(n) reaches its maximum, which is determined from the condition

dP(n)/dn = 0,

which is equivalent to

d ln P(n)/dn = [1/P(n)]·dP(n)/dn = 0

It is known that at values of n greatly exceeding unity, we can assume that

d(ln n!)/dn ≈ ln n

Hence, differentiating (3.21), we can write, to a good approximation,

d ln P(n)/dn = −ln n + ln(N − n) + ln a − ln b

(3.22)

To find the maximum of P(n), the expression (3.22) must be equated to zero, that is,

ln[(N − n)a / (nb)] = 0

This means that

(N − n)a / (nb) = 1,   (N − n)a = nb

Hence Na = n(a + b); but a + b = 1, so the value of n at the maximum equals

⟨n⟩ = Na

To examine the behavior of ln P(n) near the maximum, it must be expanded in the vicinity of ⟨n⟩ into a Taylor series, that is,

ln P(n) = ln P(⟨n⟩) + [d ln P/dn]·y + (1/2!)·[d² ln P/dn²]·y² + (1/3!)·[d³ ln P/dn³]·y³ + …

(3.23)

where y = n − ⟨n⟩, and the square brackets denote derivatives taken at the point n = ⟨n⟩. As shown above, the point at which the first derivative vanishes is the mathematical expectation ⟨n⟩, so the linear term drops out. The second derivative gives

d² ln P/dn² = −1/n − 1/(N − n) = −N / [n(N − n)]


The value of this derivative at n = ⟨n⟩ = Na, where N − n = N(1 − a) = Nb, equals

d² ln P/dn² = −1/(Nab)

Now the expression (3.23) takes on the form

ln P(n) = ln P(⟨n⟩) − y²/(2Nab)

Hence,

P(n) = P(⟨n⟩)·e^(−y²/(2Nab)) = P(⟨n⟩)·e^(−(n−⟨n⟩)²/(2Nab))

The quantity P(⟨n⟩) can be expressed through a and b using the normalization condition

Σₙ P(n) = 1    (3.24)

Since P(n) changes little when n changes by unity, we can replace the summation in (3.24) with an integral. Thus, (3.24) can be written as

∫ P(n) dn = ∫₋∞^{+∞} P(⟨n⟩)·e^(−(n−⟨n⟩)²/(2Nab)) dn = 1    (3.25)

Using the known reference integral

∫₋∞^{+∞} e^(−y²) dy = √π

(the substitution y = (n − ⟨n⟩)/√(2Nab) reduces (3.25) to this form), we obtain from (3.25)

P(⟨n⟩)·√(2πNab) = 1,

so the maximal probability amounts to

P(⟨n⟩) = 1/√(2πNab)

Using this result and ⟨n⟩ = Na, we can write an expression for the probability

P(n) = [1/√(2πNab)]·e^(−(n−Na)²/(2Nab))

(3.26)

This probability also constitutes the Gauss distribution. It appears very frequently in statistical derivations when the numbers under study are large.
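The accuracy of the approximation (3.26) is easy to check numerically. The Python sketch below (N and a are assumed example values) compares the exact binomial probability with the Gaussian form, evaluating the factorials in log space via lgamma(x + 1) = ln x!:

```python
import math

# Exact binomial P(n) = N!/(n!(N-n)!) * a^n * b^(N-n) versus the Gaussian
# approximation (3.26); N and a are assumed example values.
N, a = 10_000, 0.3
b = 1.0 - a

def binom_p(n):
    # lgamma(x + 1) = ln(x!), so the factorials never overflow
    logp = (math.lgamma(N + 1) - math.lgamma(n + 1) - math.lgamma(N - n + 1)
            + n * math.log(a) + (N - n) * math.log(b))
    return math.exp(logp)

def gauss_p(n):
    return (math.exp(-(n - N * a) ** 2 / (2 * N * a * b))
            / math.sqrt(2 * math.pi * N * a * b))

for n in (2900, 3000, 3100):          # points around the mean <n> = Na = 3000
    print(n, binom_p(n), gauss_p(n))  # the two columns nearly coincide
```

The agreement improves as N grows, in line with the sharpening of the maximum discussed above.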


We emphasize that the Gauss distribution is fully determined by the mathematical expectation and the dispersion. In our case, the mathematical expectation is

⟨n⟩ = Na,

the dispersion is

(Δn)² = Nab,

and the standard deviation is

Δn = √(Nab)

(3.27)

To determine an invariant of any mass system, it is necessary to know how to determine the number of its admissible states. A state is considered admissible if it is compatible with the characteristics of the system. An interesting situation arises when contact is established between two systems. Let us look into the mechanism of the exchange of separation factors and lifting factors. Here we take into account that the main problem of statistical mechanics consists in the study of the most probable distribution between systems ensuring their mutual equilibrium. We examine a system consisting of particles of the same size class and mark some of the particles in some way, for example, using paint or isotopes, leaving the other part unchanged. We obtain two systems differing in the number of particles. Denote the amount of one type of particles by N₁ and that of the other by N₂. We determine the number of admissible states of the two systems and find the most probable configuration of the combined system. We examine their behavior in the following way. First, we observe the marked particles only, their amount being N₁. After we determine the number of states of this system, we start observing the system of particles of the second type. After that, we examine the characteristics of the system formed by particles of both classes. Two interacting systems, which exchange energy and particles, finally reach equilibrium, at which their energy characteristics become identical. Assume that the realization of a joint system is determined by certain fixed values of the separation factors z₁ and z₂, which are nonzero. The number of admissible self-similar states of the first system is ϕ₁(N₁, z₁), and each can be realized with any of the ϕ₂(N₂, z₂) states of the second system. Therefore the total number of states in the configuration of the combined system is

ϕ₁(N₁, z₁)·ϕ₂(N₂, z₂)

Denote

z = z₁ + z₂


that is,

z₂ = z − z₁

The number of particles in the systems is constant:

N = N₁ + N₂ = const

The realization of a combined system can be characterized fully enough by the product

ϕ₁(N₁, z₁)·ϕ₂[N₂, (z − z₁)]

To obtain the total quantity of all self-similar states, it is sufficient to sum the obtained expression over all values of z₁, that is,

F(N, z) = Σ_{z₁} ϕ₁(N₁, z₁)·ϕ₂[N₂, (z − z₁)]

As is known, such a sum has a sharp maximum at a certain value z₁ ≈ zₘ₁. This value of the parameter determines the most probable realization of the combined system. The number of states in the most probable configuration then equals

ϕ₁(N₁, zₘ₁)·ϕ₂(N₂, z − zₘ₁)

(3.28)

It is clear that if the number of particles is large in at least one of the two systems, this maximum is extremely sharp with respect to changes of z₁. The presence of a sharp maximum means that the statistical properties of a combined system are determined by a relatively small number of configurations. Taking into account the previous conclusion, we can write

ϕ = ϕ₁(N₁, z₁)·ϕ₂(N₂, z₂) = ϕ₁(N₁, 0)·ϕ₂(N₂, 0)·e^(−2z₁²/N₁ − 2z₂²/N₂)

(3.29)

We examine this dependence as a function of z₁. Then (3.29) can be rewritten as

ϕ = A·e^(−[2z₁²/N₁ + 2(z − z₁)²/N₂])

(3.30)

Note that a function ln y(x) reaches its maximum at the same value of x as the function y(x). We obtain from (3.30)

ln ϕ = ln A − 2z₁²/N₁ − 2(z − z₁)²/N₂

This quantity has an extremum when the derivative with respect to z1 is zero.


For the first derivative we obtain

−4z₁/N₁ + 4(z − z₁)/N₂ = 0

The second derivative,

−4·(1/N₁ + 1/N₂),

is negative; hence, the extremum is a maximum. Thus, the most probable configuration is the one for which the relation

z₁/N₁ = (z − z₁)/N₂ = z₂/N₂

is valid. Here we have obtained a very interesting relationship: two systems are in the most probable state when the relative separation factor of the first system equals the relative separation factor of the second system. Thus, we have obtained a result confirming the experimentally determined invariance of the fractional separation value with respect to the initial mixture composition. The performed derivation shows the statistical essence of this empirical result obtained long ago. If z₁ and z₂ at the maximum of the product under study are equal, respectively, to zₘ₁ and zₘ₂, the obtained relation can be written as

zₘ₁/N₁ = zₘ₂/N₂ = z/N

(3.31)

Hence,

(ϕ₁ϕ₂)_max = ϕ₁(N₁, zₘ₁)·ϕ₂(N₂, z − zₘ₁) = ϕ₁(N₁, 0)·ϕ₂(N₂, 0)·e^(−2z²/N)
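The position of this sharp maximum can be verified numerically. A small Python sketch (the values of N₁, N₂, and z are assumed for the example) scans ln(ϕ₁ϕ₂) as a function of z₁ and recovers zₘ₁ = zN₁/N, in agreement with (3.31):

```python
# Scan of ln(phi1*phi2) = const - 2*z1**2/N1 - 2*(z - z1)**2/N2 over z1;
# N1, N2 and z are assumed example values.
N1, N2 = 4.0e5, 6.0e5
N = N1 + N2
z = 1.0e4

def log_phi(z1):
    # the constant prefactor ln[phi1(N1,0)*phi2(N2,0)] is omitted: it does
    # not affect the position of the maximum
    return -2.0 * z1**2 / N1 - 2.0 * (z - z1)**2 / N2

# coarse scan with step 0.5 over 0 <= z1 <= z
best_val, best_z1 = max((log_phi(0.5 * k), 0.5 * k) for k in range(20001))
print(best_z1, z * N1 / N)  # both equal 4000.0, i.e. z_m1/N1 = z/N
```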

We assume that

z₁ = zₘ₁ + ε,   z₂ = zₘ₂ − ε

Here ε serves as a measure of the deviation of z₁ and z₂ from their maximal values zₘ₁ and zₘ₂. Hence, it is clear that

z₁² = zₘ₁² + 2zₘ₁ε + ε²

z₂² = zₘ₂² − 2zₘ₂ε + ε²

Taking this into account,

ϕ₁(N₁, z₁)·ϕ₂(N₂, z₂) = (ϕ₁ϕ₂)_max·e^(−4zₘ₁ε/N₁ − 2ε²/N₁ + 4zₘ₂ε/N₂ − 2ε²/N₂)


In accordance with

zₘ₁/N₁ = zₘ₂/N₂

(3.32)

the linear terms in ε cancel, and the number of states of a configuration characterized by the deviation ε from the maximum equals

ϕ₁(N₁, zₘ₁ + ε)·ϕ₂(N₂, zₘ₂ − ε) = (ϕ₁ϕ₂)_max·e^(−2ε²/N₁ − 2ε²/N₂)

(3.33)
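The severity of this exponential suppression can be evaluated directly. A minimal Python check with the assumed values N₁ = N₂ = 10¹⁰ and ε = 10⁶:

```python
import math

# Exponent of the suppression factor in (3.33) for N1 = N2 = 1e10, eps = 1e6.
N1 = N2 = 1.0e10
eps = 1.0e6
exponent = -2.0 * eps**2 / N1 - 2.0 * eps**2 / N2
print(exponent)                    # -400.0
print(exponent / math.log(10.0))   # about -173.7, so e**-400 ~ 10**-174
```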

To perceive the influence of this dependence, we assume that N₁ = N₂ = 10¹⁰ and ε = 10⁶. Hence, ε/N₁ = 10⁻⁴. For such an insignificant deviation from the equilibrium, we have 2ε²/N₁ = 2×10¹²/10¹⁰ = 200. The product ϕ₁ϕ₂ thus falls to a share equal to e⁻⁴⁰⁰ ≈ 10⁻¹⁷⁴ of its maximal value. Hence, it is clear that the decrease proves to be very strong, and therefore ϕ₁ϕ₂ must be a function of z₁ with a very sharp peak. Therefore the most frequently arising values of z₁ and z₂ are almost always very close to their maximal values zₘ₁ and zₘ₂. It is natural to expect that appreciable relative deviations from these values are possible for a small system; considering a small system in contact with a large one presents no theoretical difficulties. The obtained number of admissible states of two systems in contact can be generalized to the case of two systems exchanging the lifting factor. Using the same reasoning as previously, we obtain an expression for the self-similarity of a combined system

ϕ(N, I) = Σ_{I₁} ϕ₁(N₁, I₁)·ϕ₂(N₂, I − I₁),    (3.34)

where the summation is performed over all values of I₁ smaller than or equal to I. Here ϕ₁(N₁, I₁) is the number of permissible states of the first system at I₁. The configuration of the combined system is determined by the values of I₁ and I₂. The number of permissible states is expressed by the product ϕ₁(N₁, I₁)·ϕ₂(N₂, I₂), and the summation over all configurations gives ϕ(N, I). We define the greatest summand in this sum. For the extremum, it is necessary that the respective differential be equal to zero:

dϕ = (∂ϕ₁/∂I₁)_{N₁}·ϕ₂·dI₁ + (∂ϕ₂/∂I₂)_{N₂}·ϕ₁·dI₂ = 0

(3.35)

We take into account that dI₁ + dI₂ = 0. Dividing this equation by ϕ₁ϕ₂ and taking into consideration that dI₁ = −dI₂, we obtain

(1/ϕ₁)·(∂ϕ₁/∂I₁)_{N₁} = (1/ϕ₂)·(∂ϕ₂/∂I₂)_{N₂}

(3.36)


Proceeding from the fact that

(d/dx)·ln y = dy/(y·dx),

(3.37)

we can rewrite the previous expression as follows:

(∂ ln ϕ₁/∂I₁)_{N₁} = (∂ ln ϕ₂/∂I₂)_{N₂}

(3.38)

Here we have derived a very important relation for the statistical consideration of the discussed issue, one we will revert to more than once. For the time being, the following should be noted. First, the derivative of the logarithm of the number of self-similar states of each system with respect to the lifting factor determines the most probable configuration of the system, and this is the most important property of the dependence (3.38). Second, two systems are in equilibrium when the combined system is in the most probable configuration, that is, when the number of permissible states is maximal. Third, pay attention to the quantity whose derivative enters the expression (3.38):

H = ln ϕ

(3.39)

The obtained expression is surprisingly simple. According to Boltzmann's classical definition, this quantity is none other than entropy. This corresponds to the statement that the more permissible states a system has, the higher its entropy. However, this entropy is not derived for an ideal gas as a function of its temperature; it is obtained to characterize the solid-phase distribution in a two-phase flow. It is established here that for the process under study, entropy is a function of the number of particles in the system and the lifting factor, that is,

H = f(N, I)

We will trace the connection of entropy with other parameters of the process during the further exposition of the material. Here we have used some ideas of statistical mechanics. However, the specific character of two-phase flows required a basic reconsideration of these ideas. The main difficulties consist in the following. A two-phase flow must have a peculiar stationary equilibrium ensuring a maximal possible entropy value. The task is to determine it. Here it is necessary to take into account the following conditions of the stationary state of a two-phase flow, if its invariants have a deterministic character:
1. After the stationary state of the system is reached, the system has no tendency to spontaneous changes. If a violation is introduced into the system from the outside, the latter spontaneously returns to its stationary state, possibly at a different level depending on the violation character. The time of the transition to this state is called the relaxation time.
2. A stationary state of a system can be described by several deterministic parameters, since they are independent of time. This state is independent of the prehistory of the system development, being determined by a set of concrete parameters.
3. The properties of a statistical ensemble are independent of time if the number of systems corresponding to a given stationary state is the same at any moment of time.
4. We consider accessible those configurations in which a system can stay without violating the specified conditions of its existence. A statistical ensemble created in compliance with these conditions consists of stationary systems with admissible states with respect to external parameters.
5. If a system is in any of the accessible states with an equal probability, it is stationary.

If ϕ is the number of accessible states of a stationary system, the probability of the system staying in any of these states is the same and equals 1/ϕ. All five of these conditions are unambiguously confirmed by all available experimental data.

3.6 Main properties of dynamic entropy characterizing a two-phase system

Since the time of Clausius, entropy has been an objective category because it is measurable. It is accepted, following Boltzmann, that entropy is a measure of chaos, while according to Shannon, entropy corresponding to information is a measure of disorder. This contradiction is resolved by the fact that in the first case the greatest chaos is achieved in a system when the maximal possible quantity of different states can be realized in it, whereas in the second case the maximal chaos is achieved when the complete description of a system requires the greatest amount of information. Examine the properties of this entropy:
1. The entropy value is zero when the state of a system is completely defined, that is, unambiguous. In this case, the separation factor acquires the value z = ±N/2, which means that ϕ = 1 and P(n) = 1. Then H = ln ϕ = 0 or, which is the same, H = ln P(n) = 0.
2. We define the physical meaning of the expression (3.39). In essence, the quantity equal to the entropy derivative with respect to the lifting factor is the same for both systems in contact after achieving the stationary state, that is,

∂H/∂I = 1/χ

Taking into account that I and z differ by a constant factor, we can obtain, according to (3.38),


∂H/∂z = 4z/N = 1/χ

By analogy with thermodynamic processes, the physical meaning of χ is defined as the chaotizing factor of the process. Since entropy is dimensionless, the dimension of χ should correspond to the lifting factor dimension, namely kg·m. The magnitude of χ is connected with the character of the moving medium and should express the measure of its kinetic energy. On the whole, this parameter equals

χ = m₀w²/2

where m₀ is the mass of the medium within the particle volume. With respect to the entire process, the chaotizing factor has the meaning of the kinetic energy of the part of the flow equal in volume to the solid phase.
3. When the chaotizing factors of two contacting systems are exactly the same, the contact allows a spontaneous change in the particle direction inside them, although the systems are in a stationary state. If the quantity of states of the first system equals ϕ₁, each can be realized simultaneously with any of the permissible states ϕ₂ of the second system. At the combination of the systems, the entropy of the combined system is not smaller than the sum of the entropies of each system, that is,

H_Σ ≥ H₁ + H₂

The inequality is strict when the particles in the systems are different. If the entropy of the initial state H₀ is determined, the entropy of an arbitrary state amounts to

H_t = H₀ + ∫₀ᵗ dI/χ

Mass processes are usually basically irreversible. This is not obvious, but it is the case: if there existed at least one system whose entropy spontaneously decreased without the application of external efforts, it could be used for decreasing the entropy of some other system. A spontaneous decrease in the entropy of one system would then lead to a spontaneous decrease in the entropy of all systems. Hence, either all mass processes are irreversible, or irreversible processes do not exist at all. Usually the process irreversibility is identified with an expression similar to

dH ≥ dI/χ


Irrespective of whether dI has a positive or negative value, the entropy change in an irreversible process must always be positive.
4. Entropy has the property of additivity. We know that H = ln(ϕ₁ϕ₂); hence, H = ln ϕ₁ + ln ϕ₂ = H₁ + H₂. Let us analyze in what respect the additivity property is valid. We have found the number of states in a configuration characterized by a certain deviation ε. We assume for the sake of convenience that N₁ = N₂ = N/2. Then we can write

ϕ(N, z) = Σ_ε ϕ₁(N/2, zₘ + ε)·ϕ₂[N/2, (zₘ − ε)] = ∫₋∞^{+∞} (ϕ₁ϕ₂)_max·e^(−8ε²/N) dε

where the summation with respect to deviations ε is replaced by an integral. Denote 8ε²/N = x²; then

∫₋∞^{+∞} e^(−8ε²/N) dε = √(N/8)·∫₋∞^{+∞} e^(−x²) dx = √(N/8)·√π = √(πN/8)

and, consequently,

ln ϕ(N, z) = ln(ϕ₁ϕ₂)_max + (1/2)·ln(πN/8),

which differs from the value ln(ϕ₁ϕ₂)_max by a magnitude on the order of ln N. The order of the first summand ln(ϕ₁ϕ₂)_max equals N, since ϕ ≈ 2^N. This means that the magnitude of ln N can be neglected in comparison with N. Hence, we must conclude that the entropy of a compound system consisting of particles of the same size is equal to the sum of the entropies of the systems entering into it, under the condition that the latter possess the most probable configuration, that is, that they are in a stationary state.
5. In a two-phase flow process, the dynamic part of entropy can reach its maximal value while the system approaches the stationary state. Here it is valid to assert that if the entropy has reached the maximal value for specific conditions, the state of the system is stationary, and vice versa.
6. We define the relations between the main parameters. In the case under study, for a narrow size class, the quantity of self-similar states of a system at N = N₁ + N₂ is written as

ϕ = [N! / ((N/2)!)²]·e^(−2z²/N)

Hence,

H = ln ϕ = ln N! − 2·ln(N/2)! − 2z²/N


Taking into account the Stirling formula

ln n! ≈ n(ln n − 1),

this expression can be reduced to

H = N(ln N − 1) − N[ln(N/2) − 1] − 2z²/N,

from which we obtain

H = N·ln 2 − 2z²/N

(3.40)
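The Stirling-based formula (3.40) can be checked against a direct evaluation of H = ln N! − 2·ln(N/2)! − 2z²/N. A minimal Python sketch (the values of N and z are arbitrary examples; lgamma(n + 1) = ln n!):

```python
import math

# H_exact = ln(N!/((N/2)!)^2) - 2*z**2/N versus H = N*ln2 - 2*z**2/N (3.40)
def H_exact(N, z):
    return math.lgamma(N + 1) - 2.0 * math.lgamma(N / 2 + 1) - 2.0 * z**2 / N

def H_stirling(N, z):
    return N * math.log(2.0) - 2.0 * z**2 / N

for N in (100, 10_000, 1_000_000):
    z = N / 100                       # an arbitrary small imbalance
    print(N, H_exact(N, z), H_stirling(N, z))
# the relative difference, of order ln(N)/N, vanishes as N grows
```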

By definition, the potential extraction is

I = −2zgdm

To write entropy in the form H(N, I), we raise both parts of this expression to the second power:

I² = 4z²(gdm)²

Hence,

z² = I²/[4(gdm)²]

Substituting this expression into (3.40),

H(N, I) = H(N, 0) − I²/[2(gdm)²·N]

(3.41)

From the definition of the chaotizing factor, we can obtain

1/χ = ∂H/∂I = −I/[(gdm)²·N]

From this, I can be expressed in terms of χ:

I = −N(gdm)²/χ

Substituting this expression into (3.41), we obtain

H(N, I) = H(N, 0) − N(gdm)²/(2χ²)

(3.42)


It follows from the two latter dependencies that the potential extraction and the entropy grow with an increasing chaotizing factor. The mean value is

⟨I⟩ = −2⟨z⟩mgd

Comparing this expression with I = −N(gdm)²/χ obtained above, we can write

−2⟨z⟩mgd = −N(gdm)²/χ

It follows that

2⟨z⟩/N = gdm/χ = 2gdm/(m₀w²) = 2gd(ρ − ρ₀)/(w²ρ₀)

Hence,

⟨z⟩/N = gd(ρ − ρ₀)/(w²ρ₀) = B
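The criterion B = gd(ρ − ρ₀)/(w²ρ₀) is straightforward to evaluate. In the Python sketch below, all input values (particle size, densities, flow velocity) are assumed, chosen only to show the computation:

```python
# B = g*d*(rho - rho0) / (w**2 * rho0): a dimensionless, Froude-type criterion.
g = 9.81        # m/s^2
d = 200e-6      # particle size, m (assumed)
rho = 2650.0    # density of the particle material, kg/m^3 (assumed)
rho0 = 1.2      # density of air, kg/m^3
w = 5.0         # ascending flow velocity, m/s (assumed)
B = g * d * (rho - rho0) / (w**2 * rho0)
print(B)  # about 0.17 for these inputs
```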

Here we obtain an indication that the parameter B determines the extraction magnitude. The profound physical meaning of this relation will be confirmed later. We revert to the relation (3.42). If the same system of particles is examined in a field of a different nature (centrifugal, electric, magnetic, ultrasonic, etc.), then, in the case of conservation of the entropy value, we can write

H₀ − N(gdm)²/(2χ₁²) = H₀ − N(adm)²/(2χ₂²)

It follows that

g²/χ₁² = a²/χ₂²,   or   g/a = χ₁/χ₂,

where a is the acceleration of the field of a different nature. We determine the value of the specific potential extraction as

i(χ) = (∂I/∂χ)_{N=1}    (3.43)

This is the ratio of potential and kinetic energy attributed to one particle of a narrow size class. In this form, the parameter i is constant for particles of a certain size. For other particles it is also constant, but it has a different value for each size class.
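The field analogy g/a = χ₁/χ₂ derived above means that, for the entropy value to be conserved, the chaotizing factor must scale in proportion to the field acceleration. A minimal sketch (the field strength and χ₁ are assumed values):

```python
import math

# For the same particle system in a field of acceleration a, conserving the
# entropy (3.42) requires g/a = chi1/chi2. Assumed here: a centrifugal field
# with a = 50*g and an arbitrary chi1.
g = 9.81
a = 50.0 * g
chi1 = 2.0e-6           # chaotizing factor in the gravity field (assumed)
chi2 = chi1 * a / g     # chaotizing factor required in the new field
print(chi2 / chi1)      # equals a/g, i.e. about 50
```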


3.7 Stationary state as a condition of entropy maximality

The main relation connecting the probability distribution Pₙ with the properties of the system that the ensemble characterizes was established by Boltzmann and formulated in a more general form by Planck. It connects the system entropy H with the distribution function Pₙ by the equation

H = −k·Σₙ Pₙ·ln Pₙ    (3.44)

under the condition

Σₙ Pₙ = 1    (3.45)

The summation is performed over all the states forming an ensemble, that is, over those for which Pₙ is nonzero. The dependence (3.44) satisfies the statement that the entropy H is a measure of the system disorder. In fact, if a system is in ideal order, then Pᵢ = 1 for one state, and all the other Pₙ are zero. Since ln 1 = 0, the entropy H in this case is also zero. On the other hand, if a system is disordered, it can be in any of its states. The greater the number of such states, the greater the disorder. If Pₙ = 1/M for M states and zero for all other states, then

H = −k·Σ_{n=1}^{M} (1/M)·ln(1/M) = k·ln M

The greater M, the higher is H. Hence, Eq. (3.44) satisfies the notions of entropy behavior. We can use this relation for finding the distribution function of various states of a two-phase system, in particular, of equilibrium states. As is known, entropy tends to increase until it reaches its limiting value in a stationary regime, obeying, at the same time, the limitations imposed on the system. Assume that no limitations are imposed on the probabilities Pₙ except the normalization (3.45), and that the number of states of the system is a certain finite value ϕ. We determine the maximal entropy value under such conditions. The condition (3.45) means that only (ϕ − 1) of the values Pₙ can acquire mutually independent values. The remaining value P_ϕ is connected with the others by

P_ϕ = 1 − Σ_{n=1}^{ϕ−1} Pₙ    (3.46)

However, the entropy is a function of all the Pᵢ values, and it can be represented as

H(P₁, P₂, … Pₙ … P_ϕ),

where P_ϕ can be expressed through the other probabilities, as shown in (3.46).


For H to become maximal, the partial derivatives of H with respect to each independent Pᵢ should equal zero. Taking into account that P_ϕ depends on the other Pᵢ (so that ∂P_ϕ/∂Pᵢ = −1), these equations acquire the form

∂H/∂P₁ + (∂H/∂P_ϕ)·(∂P_ϕ/∂P₁) = ∂H/∂P₁ − ∂H/∂P_ϕ = 0
∂H/∂P₂ − ∂H/∂P_ϕ = 0
……
∂H/∂P_{ϕ−1} − ∂H/∂P_ϕ = 0

The values of the probabilities P that satisfy these equations must also satisfy the relations (3.44) and (3.45). Obviously, for these equations, the partial derivative ∂H/∂P_ϕ equals a certain value. We denote

∂H/∂P_ϕ = −a₀

Then the equations can be written as

∂H/∂P₁ + a₀ = 0,   ∂H/∂P₂ + a₀ = 0,   …,   ∂H/∂P_{ϕ−1} + a₀ = 0    (3.47)

The value a₀ is determined with observance of the condition (3.45). This parameter a₀ is known as the Lagrange factor; hence, here we are dealing with a problem of the calculus of variations. Substituting the entropy expression into (3.47), we obtain for each Pₖ

0 = (∂/∂Pₖ)·[a₀·Σᵢ Pᵢ − k·Σᵢ Pᵢ·ln Pᵢ] = a₀ − k·ln Pₖ − k

Hence, we obtain

Pₖ = e^(a₀/k − 1)

The solution shows that in the equilibrium state of the system all the values Pₖ are equal, since a₀ and k are independent of the index k. Therefore, for this state of the system, the values of a₀ and Pₖ follow from the requirement Σ_{n=1}^{ϕ} Pₙ = 1, giving Pₖ = 1/ϕ.

Hence, H52

ϕ X 1

ϕ k51

ln

1 5 klnϕ ϕ

(3.48)
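That the uniform distribution Pₖ = 1/ϕ indeed maximizes (3.44) under the normalization (3.45) can be checked numerically; the sketch below compares the uniform case with an arbitrary skewed distribution over the same ϕ states (k is set to 1 for convenience):

```python
import math

k = 1.0  # Boltzmann constant set to 1 for the sketch

def entropy(P):
    # H = -k * sum(P_n * ln P_n), Eq. (3.44); zero-probability states drop out
    return -k * sum(p * math.log(p) for p in P if p > 0)

phi = 8
uniform = [1.0 / phi] * phi
skewed = [0.3, 0.2, 0.1, 0.1, 0.1, 0.1, 0.05, 0.05]  # also sums to 1

print(entropy(uniform), k * math.log(phi))  # equal, as in (3.48)
print(entropy(skewed))                      # strictly smaller
```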


Consequently, a system limited by a finite number of states ϕ and having no other restrictions has its maximal entropy when it can be in any of the ϕ states with equal probability. Here we see again that the more states a system has, the higher its entropy. In addition, two more conclusions follow from this. First, we have proved that in the equilibrium state all self-similar states of the system are equiprobable. Second, we can make a seemingly senseless and paradoxical assertion: each point of the response surface for entropy is a maximum. This is a very interesting result. However, it requires a careful analysis and justification of the notion of the equilibrium state in a two-phase flow. What is an equilibrium state in a system of particles of various sizes moving in opposite directions in a medium flow? We try to formulate and justify this complicated phenomenon for the process under study. To gain an insight into all of this, we determine a relation for a two-phase flow system with a polyfractional solid phase by analogy with a gaseous system:

dI = χ·dH − f·dV + Σₖ τₖ·dNₖ    (3.49)

In this expression, the entropy H and the potential extraction I are functions of state, and χ, τ, f, N are active parameters of the system. Using the Euler theorem, we can obtain from this equation the following:

I = χH − fV + Σₖ τₖ·Nₖ

Now we write this expression in complete differential form:

dI = χ·dH + H·dχ − f·dV − V·df + Σₖ (τₖ·dNₖ + Nₖ·dτₖ)

This expression complies with (3.49) under the condition that

H·dχ − V·df + Σₖ Nₖ·dτₖ = 0

This shows that the active variables of the system under study cannot all be mutually independent. This is important for understanding the influence of fluctuations of various parameters on the process stability. According to the notions formulated by Clausius, the entropy of a closed system permanently grows. Boltzmann showed in the kinetic theory of gases that a closed system evolves to a state with the internal entropy maximal for the given conditions. The system thereby acquires an equilibrium state. The equilibrium state according to Boltzmann is a state in which the interactions between gas molecules cannot influence the magnitude of the achieved value of the internal entropy.


Further investigations into this problem, performed by such scientists as Einstein, Gauss, and Prigogine, have shown that this law is also characteristic of other systems, in which a spontaneous change occurs in such a way that the total value of the internal entropy (Hᵢ) permanently grows until it reaches the extreme value under the specified conditions. As has been shown, this regularity is not characteristic of the process under study. At separation, the total internal entropy decreases as a result of the ordering of the separation product compositions in comparison with the composition of the initial supply. In this respect, the flow entropy considered here is basically different from the thermodynamic entropy. Consider the total internal entropy as a sum of the entropies of all narrow size classes:

Hᵢ = Σₙ Hᵢₙ

On the whole, it decreases. However, there is one component among the summands of this sum whose value increases up to the extreme value according to the dependence

H = H₀·e^(−2z²/N)

Here we imply particles of the boundary size class. The characteristic dependence for the boundary class is

z = 0 and H = H₀,

that is, the entropy of this class acquires the maximal value possible under the given conditions. At that, a specific form of equilibrium takes place, since this class is extracted equally into both products, that is, F_f(x) = F_c(x) = 50%. Obviously, it is just the boundary class entropy that should be taken as a basis for the process stability analysis. It is clear that the equilibrium state in a real process is permanently violated by fluctuations of various separation parameters, such as χ, τ, f, N. They fluctuate around their average values. Fluctuations of each parameter cause changes in the equilibrium entropy. Clearly, such changes can be directed only towards its decrease, that is,

ΔH₀ < 0

Nonequilibrium processes occurring in the system level these fluctuations and return the entropy to its initial value, which is extreme for these conditions. Otherwise, the system will lose stability, which does not take place in real practice. Thus, a system is stable with respect to fluctuations, if entropy variations are directed toward its decrease. The problem consists in determining the probability of fluctuations of a specified parameter and determining the conditions at which they become essential.


Boltzmann introduced his famous relation connecting entropy with probability for thermodynamic systems:

H = −Σₙ p·ln p

Einstein suggested a formula for the probability of fluctuations of thermodynamic quantities using Boltzmann's idea in reverse: he took the entropy as a basis and derived the probability from it,

P(ΔH) = Z·e^(ΔH/k),

where ΔH is the entropy change connected with a fluctuation with respect to the equilibrium state, Z is a statistical sum, and k is the Boltzmann constant. Obviously, these two relations are mathematically interconnected, but they are opposite in their meaning. For Boltzmann, the probability of a system state is the determining parameter, and the fluctuation probability is derived from it. To obtain the fluctuation probability here, one should first obtain the entropy change connected with the fluctuation. Thus, the main problem is reduced to deriving the connection of ΔH with fluctuations of such process parameters as δχ, δτ, δf, δN. In the general case, the entropy can be expanded into a series

H = H₀ + δH + (1/2)·δ²H + …,

(3.50)

where $\delta H$ is the term of the first order, containing $\delta I, \delta\chi, \delta N$, etc.; $\delta^2 H$ is the term of the second order, containing $\delta^2 I, \delta^2\chi, \delta^2 N$, etc.; and $H_0$ is the stable-state entropy.
1. First, we examine the simplest situation. Imagine that a fluctuation took place in a small part of a limited system. Its result is a flow of $N, \chi, f$ from one part of the system to another. The entropy of the system under study can be expressed by the sum

$$H = H_1 + H_2, \tag{3.51}$$

where 1 denotes the part of the system where the fluctuation took place, and 2 the remaining part of the system:

$$H_1 = f(N_1, \chi_1, \tau_1, f_1, \ldots), \qquad H_2 = \varphi(N_2, \chi_2, \tau_2, f_2, \ldots)$$

Examine the process stability with respect to fluctuations of the flow velocity $(w, \chi)$. For this purpose, we use (3.51) and express the entropy deviation from the equilibrium state by expansion into a Taylor series

Chapter 3 • Dynamic component of entropy

$$H - H_0 = \Delta H = \frac{\partial H_1}{\partial I_1}\delta I_1 + \frac{\partial H_2}{\partial I_2}\delta I_2 + \frac{\partial^2 H_1}{\partial I_1^2}\frac{\delta^2 I_1}{2} + \frac{\partial^2 H_2}{\partial I_2^2}\frac{\delta^2 I_2}{2} + \cdots \tag{3.52}$$

In this expansion, terms of higher order can be neglected. Note that all the derivatives in (3.52) refer to the equilibrium state. Since in the stable state the potential extraction (or extraction from the zone) remains constant,

$$\delta I_1 = -\delta I_2 = \delta I$$

On the other hand, it was determined that

$$\left(\frac{\partial H}{\partial I}\right)_{V,N} = \frac{1}{\chi}$$

Taking this into account, the relation (3.52) can be rewritten as

$$\Delta H = \delta H + \frac{\delta^2 H}{2} = \left(\frac{1}{\chi_1} - \frac{1}{\chi_2}\right)\delta I + \left(\frac{\partial}{\partial I_1}\frac{1}{\chi_1} + \frac{\partial}{\partial I_2}\frac{1}{\chi_2}\right)\frac{\delta^2 I}{2} \tag{3.53}$$

Now we can determine the deviations of the first and second order in the entropy, $\delta H$ and $\delta^2 H$:

$$\delta H = \left(\frac{1}{\chi_1} - \frac{1}{\chi_2}\right)\delta I$$

$$\delta^2 H = \left(\frac{\partial}{\partial I_1}\frac{1}{\chi_1} + \frac{\partial}{\partial I_2}\frac{1}{\chi_2}\right)\frac{\delta^2 I}{2}$$

In the equilibrium state, $\chi_1 = \chi_2$; hence, $\delta H = 0$. It follows that only fluctuations of the second order introduce a change into the entropy. It has been shown that

$$\frac{\partial}{\partial I}\frac{1}{\chi} = -\frac{1}{\chi^2}\frac{\partial\chi}{\partial I} = -\frac{1}{\chi^2}\frac{1}{i},$$

where $i = \frac{\partial I}{\partial\chi}$ is the specific potential extraction. Hence, we can write

$$\delta I = i\,\delta\chi$$

Taking this into consideration,

$$\delta^2 H = -\frac{i(\delta\chi)^2}{\chi^2} < 0$$
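The sign of this second-order effect is easy to check numerically. The sketch below (an illustration added here, not part of the original derivation) evaluates the entropy of the boundary class split between two products around its equilibrium extraction of 50%, and confirms that the first-order change vanishes while any fluctuation lowers the entropy:

```python
import math

def class_entropy(p):
    # Entropy of a narrow class extracted into two products with fractions p and 1 - p
    return -p * math.log(p) - (1 - p) * math.log(1 - p)

H0 = class_entropy(0.5)  # equilibrium: the boundary class goes 50/50
for dp in (0.1, 0.01, 0.001):
    dH = class_entropy(0.5 + dp) - H0
    # dH is negative and shrinks roughly as dp**2: a second-order effect
    print(f"dp = {dp:<6} dH = {dH:.3e}")
```

The ratio $dH/(\delta p)^2$ tends to a negative constant, which is exactly the statement $\Delta H_0 < 0$ with $\delta H = 0$ at equilibrium.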


This condition requires a positive specific extraction, that is, $i > 0$; in the opposite case, the system loses stability. Therefore, in separation processes, $i > 0$ always holds.
2. Consider the case of fluctuations of the number of particles in a certain part of the system cross-section. As in the first case, for fluctuations of this kind we can write

$$H - H_0 = \Delta H = \frac{\partial H_1}{\partial N_1}\delta N_1 + \frac{\partial H_2}{\partial N_2}\delta N_2 + \frac{\partial^2 H_1}{\partial N_1^2}\frac{\delta^2 N_1}{2} + \frac{\partial^2 H_2}{\partial N_2^2}\frac{\delta^2 N_2}{2} \tag{3.54}$$

Note that for the diffusion of particles from one part of the system to another, $\delta N_1 = -\delta N_2 = \delta N$, and

$$\frac{\partial H}{\partial N} = -\frac{\tau}{\chi}$$

Taking this reasoning into account, Eq. (3.54) can be written as

$$\Delta H = \delta H + \frac{\delta^2 H}{2} = \left(\frac{\tau_2}{\chi} - \frac{\tau_1}{\chi}\right)\delta N - \left(\frac{\partial}{\partial N_1}\frac{\tau_1}{\chi} + \frac{\partial}{\partial N_2}\frac{\tau_2}{\chi}\right)\frac{\delta^2 N}{2}$$

Since the derivatives are taken for the equilibrium state, $\tau_1 = \tau_2$. Hence, here also the first summand is zero, and, on the whole, the dependence for the entropy change is

$$\frac{\delta^2 H}{2} = -\frac{\partial}{\partial N}\left(\frac{\tau}{\chi}\right)\frac{\delta^2 N}{2} < 0$$

If this condition is satisfied, the system is stable with respect to the diffusive exchange of particles. In this way, we can determine the influence of other parameters on the entropy stability. In fact, the character of the impact of fluctuations in a real process is more complicated. Examine the chaotizing factor $\chi$: its changes cause a change not only in the carrying medium flow, but also in other flows, for example, the flow of solid particles $N$, the concentration $\mu$, and the pressure in the flow $f$. Meanwhile, a change in any of these parameters, for example, the concentration, can in turn affect $\chi, I, N, f$. As a result, we obtain a cross effect of the influence. The most visual example is cross-diffusion, where the concentration gradient of particles of one size causes a diffusive flow of particles of another size. It is easy to imagine that in the process of separation the diffusion is permanent, since, undoubtedly, particles of various classes do not have a uniform distribution within the apparatus volume. At the same time, other parameters also undergo permanent disturbances. It seems extremely complicated to determine their mutual impact on the entropy value. Therefore, for an overall assessment of the effect of all cumulative disturbances, we can use a dependence proposed by Prigogine for the fluctuation of thermodynamic entropy, which can be written by analogy for a critical flow in the form

$$\Delta H_i = -\frac{nc}{2},$$


where $\Delta H_i$ is the mean entropy deviation; $n$ is the number of independent variables; and $c$ is a certain constant. The simplicity of this dependence is remarkable but explainable. The point is that the real effect on the entropy deviation from its extreme value is caused by fluctuations of the second order. Obviously, it does not matter which parameter fluctuates: their effects are small and simple to sum up. Each independent parameter introduces a contribution equal to $-\frac{c}{2}$ into $\Delta H_i$. Note once more that all the derivations accomplished in this part relate to the boundary separation size containing $N_0$ particles. Entropic stability refers only to this class of particles. The most interesting point is that all the remaining classes of particles behave stably with respect to this class, since the separation curve is constant. Probably, our study of the stability in a critical flow could be ended at this point. However, the question of how other classes of particles, whose entropy does not reach extreme values, acquire stability remains unsolved. Possibly, some other parameters besides entropy provide stability in this case. Here a continuation of the research is needed; a probabilistic analysis of the process might somewhat clarify this issue.

3.8 On the issue of entropy parameter formation for dynamic systems

Trying to apply the methods of statistical mechanics to the process under study, we take into account the difference between Boltzmann's and Gibbs' approaches to this problem. Boltzmann took the velocity space as the starting point of his statistics. Gibbs built his system proceeding from the concept of ensembles: a novel approach opening large opportunities. Our modeling will be performed under the following restrictions:
1. We examine only stationary or steady flow states.
2. It is necessary to choose one or several parameters of the external manifestation of two-phase flow (macro-level) whose values are predetermined by phenomena occurring at the internal level of the system (micro-level). In thermodynamics, such parameters are, for instance, gas temperature, pressure, and volume. It is desirable for these parameters to have a deterministic character. As established by numerous experiments, in separation processes such a parameter is the fractional extraction value, which is invariant to the content of a narrow class of the solid phase and to the value of its concentration within the working range recommended for this class of processes.
3. The logic of the preceding examination of two-phase motion laws leads to the assertion that the stability state in a dynamic system is achieved at the entropy maximum of this system. This assertion is not new. It is generally known that for complicated mass systems, a higher level of generalization in the course of modeling is connected with the entropy optimization method.
These three postulates form the basis for the modeling to be performed. First of all, it is necessary to determine the state of the system. The issue of the necessary information for


making an acceptable model of such a complicated phenomenon as a two-phase flow is far from simple. Methods of classical mechanics show that the state of a two-phase stationary dynamic system can be completely determined using the coordinates and momenta of all solid particles participating in the flow at any moment in time. We can assume that a particle coordinate is the position of its center of gravity in space, and that the momentum, equal to the product of the particle mass and its velocity, is applied to this center of gravity. We assume that the particles of this narrow class are identical. This is substantiated by the fact that these particles are characterized by a deterministic value of fractional extraction. Within the frames of this derivation, we denote this parameter by E. In principle, methods of classical mechanics are applicable to the totality of solid particles moving in the flow. Here the behavior of the entire continuum can be specified by defining the behavior of each separate particle, for which we can write, according to Newton's second law,

$$\frac{dx_i}{dt} = v_i, \quad \frac{dv_i}{dt} = F_i; \quad \text{hence} \quad \frac{d^2x_i}{dt^2} = F_i$$

where $F_i$ is the force acting on the $i$-th particle; $x_i$ is its coordinate; and $v_i$ is its velocity. This expression shows all the parameters of the $i$-th particle along one axis, $x$. To obtain a complete picture of its motion, analogous equations for the $y$ and $z$ axes should be written. Theoretically, if the coordinates and momenta of all particles were known, everything about the continuum would be known as well. The behavior of a narrow size class consisting of $N$ particles should be described in Cartesian coordinates by a vector of a $6N$-dimensional system. Six components of this vector correspond to each of the $N$ particles: three coordinates of the particle position and three components of its momentum. In the general case, the quantity $F_i$ is composed of gravitational forces and forces developed by the flow, as well as of the $i$-th particle's interaction with other particles and with the walls enclosing the flow. To solve the obtained system, it is necessary to specify $6N$ initial values of the main parameters. It is perfectly clear that this problem cannot be solved, not only because of the large number of particles, but also because all these equations are interconnected, since the force acting on each particle at each specific moment of time is a function of the positions of the rest of the particles of the system, that is,

$$F_i = f(x_j) \quad (j = 1, 2, \ldots, N), \quad i \neq j$$

We can seek the solution to this problem by statistical methods only. Obviously, here we have to average not only the initial conditions of particle locations, but also the details of their interaction. The practical result of such averaging is reduced to the necessity of operating with probabilities instead of certainties. Within the framework of such an approach, we cannot speak about a definite position and velocity of a specific particle, but only about the probability of the realization of its various positions and velocities. A probabilistic approach


to such a system gives a reason to hope that it will be possible to formulate the entropy parameter for it. This would allow considerable progress in understanding the physics of two-phase flows, because entropy has become an invariant of transformations in the theory of dynamic systems having a probabilistic realization. We have taken the fractional extraction value $E$ as a generalizing parameter. Therefore the obtained system must be examined in a $(6N+1)$-dimensional space, where the parameter $E$ plays the part of a response surface under changes in the parameters of the system. Methods of statistical mechanics allow the explanation and prediction of certain macro-properties of a system with a rather high degree of precision without taking into account the behavior of each separate particle at the micro-level. Such macro-analytical methods are connected with micro-situations through the concept of entropy, as shown, for example, by Landau, who demonstrated that in a gaseous system a set of states can exist at the micro-level that lead to the same distribution at the macro-level. In statistical mechanics, the system entropy is an equivalent of the logarithm of the probability of a certain distribution. Here the greatest number of microstates corresponds to the most probable distribution. These statistical mechanics results can be interpreted to mean that the most probable distribution corresponds to the situation in which the highest uncertainty of the system microstate takes place. For the process under study, this means that the system has the maximal possible number of self-similar states.
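The coupled system of particle equations discussed above can be made concrete with a short sketch. The pairwise force law and all numerical values below are purely illustrative assumptions; the point is only that the state is a 6N-component vector and that every force evaluation couples all the equations through $F_i = f(x_j)$:

```python
import numpy as np

def forces(x, k=1e-3, g=9.81):
    # Force on each particle: gravity plus an illustrative linear pairwise term.
    # The coupling F_i = f(x_j), i != j, is what prevents solving particle by particle.
    f = np.zeros_like(x)
    f[:, 2] -= g  # gravity acts along the vertical axis
    for i in range(len(x)):
        for j in range(len(x)):
            if i != j:
                f[i] += k * (x[j] - x[i])
    return f

rng = np.random.default_rng(0)
N, m, dt = 50, 1.0, 1e-3
x = rng.random((N, 3))   # 3N coordinates
p = np.zeros((N, 3))     # 3N momenta: together a 6N-dimensional state vector
for _ in range(10):      # naive explicit Euler integration, for illustration only
    p += m * forces(x) * dt
    x += p / m * dt
print(x.shape, p.shape)
```

Even this toy version requires $O(N^2)$ work per step, which is why the text turns to statistical averaging instead of direct integration.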

3.9 Entropy and probabilities distribution

First, we examine a certain abstract random value $x$, which can be realized by the values $x_1, x_2, x_3, \ldots, x_N$ with the probabilities $P_1, P_2, P_3, \ldots, P_N$. This means that for this random value there exists a discrete distribution function $f(x_i)$. A measure of uncertainty, or entropy, for this case is

$$H(P_1, P_2, P_3, \ldots, P_N) = -\sum_{i=1}^{N} P_i \ln P_i$$

It can be shown that this is the only single-valued measure of uncertainty of the probabilistic distribution $P(x_i)$. The following conditions must be satisfied unambiguously:
1. $H$ is a monotonous function of $P_i$.
2. If all $P_i$ are equal, the total entropy

$$H_P = H\!\left(\frac{1}{n}, \frac{1}{n}, \ldots, \frac{1}{n}\right) \tag{3.55}$$

must be an increasing function of the number $n$.


3. If a different grouping of outputs takes place, that is,

$$c_1 = P_1 + P_2 + P_3 + \cdots + P_k, \quad c_2 = P_{k+1} + P_{k+2} + \cdots + P_n, \ \text{etc.,}$$

then

$$H(P_1, P_2, \ldots, P_n) = H(c_1) + H(c_2)$$
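These conditions are easy to spot-check numerically. For condition 2, the measure of the uniform distribution equals $\ln n$, which indeed grows monotonically with $n$ (a minimal sketch, added for illustration):

```python
import math

def H(probs):
    # Uncertainty measure of a discrete distribution
    return -sum(p * math.log(p) for p in probs if p > 0)

uniform_H = [H([1.0 / n] * n) for n in range(1, 8)]
print([round(v, 4) for v in uniform_H])  # 0.0, ln 2, ln 3, ...: strictly increasing
```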

Therefore, here the situation is as follows: $x_i$ can be realized $n_i$ times out of all existing possibilities, whose number is $\sum_i n_i$. This means that the outputs of a random value $x_1, x_2, \ldots, x_k$ can be interpreted as compound outputs consisting of $n_1, n_2, \ldots, n_k$ identical alternatives. It follows from this condition that

$$H(P_1, P_2, \ldots, P_n) + \sum_i P_i H(n_i) = H\!\left(\sum_i n_i\right) \tag{3.56}$$

If in a particular case all $n_i = m$, then Eq. (3.56) is reduced to the form

$$H(m) + H(n) = H(mn)$$

As shown by Khinchin, the only function satisfying this relation and condition (3.56) is

$$H(n) = k\ln n, \quad \text{where } k \ge 0 \tag{3.57}$$

Substituting (3.57) into (3.56), we obtain

$$H(P_1, P_2, \ldots, P_n) = k\ln\!\left(\sum_i n_i\right) - k\sum_i P_i \ln n_i \tag{3.58}$$

After these preliminary notes, consider a system of identical particles in a flow in a stationary state. As is known, this class of particles is characterized by an almost deterministic value of the fractional extraction $E$. As has already been proved, this parameter can fluctuate within a narrow range and acquire the values $(E \pm \Delta E)$; outside this range its value equals zero. We determine the probability of fractional extraction of an individual particle within the range of variation of the parameter $E$, assuming that the number $N$ of particles belonging to the system is high enough. Let $n_i$ be the number of particles whose probability of being extracted corresponds to $\varepsilon_i$. Clearly,

$$N = \sum_i n_i \tag{3.59}$$

A system of numbers ni specifies particle distribution by fractional extractions εi , and a complete ensemble of particles is described by this distribution.


In a stationary state of the system, the set of values $n_i$ is based on the existence of the mean extraction per particle $\langle\varepsilon\rangle$, defined through the formula

$$\sum_i n_i\varepsilon_i = N\langle\varepsilon\rangle \tag{3.60}$$

In addition, we can assume the following:
1. No limitations besides (3.59) and (3.60) are imposed on the values $n_i$.
2. Transposition of particles inside $n_i$ does not lead to new configurations of the system.
Under these assumptions, the number of various configurations of a system of $N$ particles for a certain set $n_i$ is proportional to the value

$$\varphi = \frac{N!}{\prod_i n_i!} \tag{3.61}$$

As shown previously, the set of numbers $n_i$ at which the number $\varphi$ is maximal corresponds to the most probable distribution. Let us find this distribution, using Stirling's approximation $\ln N! \approx N\ln N - N$:

$$H = \ln\varphi = N\ln N - \sum_i (n_i \ln n_i - n_i) \tag{3.62}$$

Taking (3.59) into account, the dependence (3.62) can be represented, per particle, in the form

$$H = -\sum_i \frac{n_i}{N}\ln\frac{n_i}{N} = -\sum_i P_i \ln P_i \tag{3.63}$$
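The passage from the multiplicity (3.61) to the per-particle entropy (3.63) can be illustrated numerically: with the proportions $P_i$ fixed, $(1/N)\ln\varphi$ approaches $-\sum_i P_i\ln P_i$ as $N$ grows. The counts below are illustrative:

```python
import math

def log_multiplicity(counts):
    # ln(N! / prod(n_i!)) computed exactly through lgamma
    N = sum(counts)
    return math.lgamma(N + 1) - sum(math.lgamma(n + 1) for n in counts)

def entropy(counts):
    # -sum P_i ln P_i with P_i = n_i / N
    N = sum(counts)
    return -sum(n / N * math.log(n / N) for n in counts if n)

for scale in (10, 100, 1000):
    counts = [5 * scale, 3 * scale, 2 * scale]  # fixed proportions P = (0.5, 0.3, 0.2)
    N = sum(counts)
    print(N, round(log_multiplicity(counts) / N, 5), round(entropy(counts), 5))
```

As $N$ grows, the two columns converge, which is the content of the Stirling approximation behind (3.62) and (3.63).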

If there is no additional information about the random value, maximization of the entropy (3.63) under the condition $\sum_i P_i = 1$ gives the optimal probability distribution $P(x_i) = 1/n$, which is equivalent to $P_i = 1/\varphi$. This coincides with the qualitative determination of uncertainty even at an a priori level. Hence,

$$-\sum_i P_i \ln P_i = -\sum_{1}^{\varphi}\frac{1}{\varphi}\ln\frac{1}{\varphi} = \ln\varphi,$$

and the value $H = \ln\varphi$, which totally coincides with the initial definition of

entropy for a two-phase system. If, however, a certain distribution function $f(x)$ is known for the random value, the problem of seeking the optimal distribution from the standpoint of entropy optimization is somewhat simplified. Under the conditions

$$\sum_i P_i f(x_i) = F, \qquad \sum_i P_i = 1 \tag{3.64}$$

this search gives the following result:

$$n_i = N\,\frac{e^{-\beta\varepsilon_i}}{Z} \tag{3.65}$$

Here $Z$ is a constant determined from the normalization condition (3.59), giving

$$Z = \sum_i e^{-\beta\varepsilon_i} \tag{3.66}$$
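Relations (3.65) and (3.66) can be exercised numerically. Given a set of levels $\varepsilon_i$ and a prescribed mean $\langle\varepsilon\rangle$, the constraint (3.60) fixes the Lagrange constant $\beta$; a simple bisection recovers it (all numerical values here are illustrative assumptions):

```python
import math

eps = [0.0, 1.0, 2.0, 3.0]  # illustrative levels eps_i
target = 1.2                # prescribed mean <eps>

def mean_eps(beta):
    # <eps> implied by the distribution n_i/N = exp(-beta*eps_i)/Z, with Z from (3.66)
    Z = sum(math.exp(-beta * e) for e in eps)
    return sum(e * math.exp(-beta * e) for e in eps) / Z

lo, hi = -50.0, 50.0  # mean_eps(beta) decreases monotonically in beta
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if mean_eps(mid) > target:
        lo = mid
    else:
        hi = mid
beta = 0.5 * (lo + hi)
Z = sum(math.exp(-beta * e) for e in eps)
fractions = [math.exp(-beta * e) / Z for e in eps]  # n_i / N from (3.65)
print(round(beta, 4), [round(f, 4) for f in fractions])
```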

$\beta$ is the Lagrange constant, whose physical meaning will be shown later. On the whole, $Z$ is a sum of the expressions $e^{-\beta\varepsilon_i}$ over all the states $i$. It is called a statistical sum, or a sum over the states of the system. The physical sense of $Z$ can be interpreted as a summation over all microstates of the system in the phase space, and therefore it is not a function of one state only. Thus, it is a link connecting the macroscopic quantities of the system with its microscopic states. The dependence (3.63) can also be unambiguously obtained from (3.58):

$$H_i = \ln\!\left(\sum_i n_i\right) - \sum_i P_i \ln n_i$$

Taking into account that it follows from (3.59) that

$$n_i = P_i \sum_i n_i,$$

we substitute it into the second summand of (3.58):

$$\sum_i P_i \ln n_i = \sum_i P_i \ln\!\left(P_i \sum_i n_i\right) = \sum_i P_i\!\left(\ln P_i + \ln\sum_i n_i\right) = \sum_i P_i \ln P_i + \sum_i P_i \ln\sum_i n_i$$

The second summand in this expression is canceled by the first summand in (3.58), and as a result we obtain

$$H_i = -\sum_i P_i \ln P_i$$

A statistical ensemble represents a set of identical systems, whose distribution at a microlevel has a chaotic character in self-similar states. Undoubtedly, an ensemble is a certain mental structure presenting the properties of a real system at a fixed moment of time in the process of its evolution.


An ensemble consists of a large number of systems whose external manifestations are as identical as one can imagine. Therefore each system belonging to the ensemble is an exact copy of the real system and is equivalent to the latter for all practical computations. A proof of the equivalence of Boltzmann's and Gibbs' averaging methods is the subject of the so-called ergodic theory. Within this theory it has been proved that averaging over an ensemble complies with the real properties of a system better than time averaging. The theory of ensembles complements the method of entropy maximization and shows ways of investigating various complicated systems. Our main task consists in formulating the micro (internal) and macro (external) levels for separation regimes. The general theory of ensembles is connected with the notion of phase space and examines equations of motion of the respective systems in it. An ensemble as such represents a set of admissible realizations, wherein each realization corresponds to one of $\varphi$ possible states of the system. We emphasize once more that equal probabilities correspond to all these $\varphi$ states and, besides, these states must be characterized by at least one common deterministic external parameter. As follows from experimental results, the fractional separation index of a narrow size class, or the potential extraction parameter in the model we have examined, can serve as such a parameter. A space with more than three orthogonal axes is called a phase space. For a system of $N$ particles, an $S$-dimensional phase space is usually considered, where $S$ depends on the representation of the system elements. A situation with $S = 2$ is possible, but in most cases $S = 6$ or even 7. The variables describing a multiparticle system are usually specified in the form of canonically conjugated pairs of the type $x_i$ and $\rho_i$, where $i = 1, 2, 3, \ldots, N$; $x_i$ is the coordinate of the position of the respective particle, and $\rho_i$ is the momentum of this particle. Therefore the function $F = f(x, \rho, t)$ can be defined in the general case as the density of points in the phase space, where $x$ denotes all $x_i$, $\rho$ all $\rho_i$, and $t$ the time. Alternatively, an ensemble can be specified by the number of systems

$$\varphi(x, \rho)\,dx\,d\rho,$$

corresponding to an element of the phase space volume $dx\,d\rho$, where for a system of $N$ particles in the Cartesian coordinate system we can write

$$dx = dx_1 dx_2 \ldots dx_{3N}, \qquad d\rho = d\rho_1 d\rho_2 \ldots d\rho_{3N} \tag{3.67}$$

Thus, the ensemble of systems is determined by specifying $\varphi(x, \rho)$, the density of the number of systems (points) in the phase space. In other words, a system is represented in the statistical sense by an ensemble with the density $\varphi(x, \rho)$. The average value of some quantity $\varphi(x, \rho)$ over an ensemble is determined by the relation

$$\bar\varphi = \frac{\int \varphi(x,\rho)\,P(x,\rho)\,dx\,d\rho}{\int P(x,\rho)\,dx\,d\rho} \tag{3.68}$$


A criterion of how exactly an ensemble represents the behavior of a physical system is the equality of the mean value of the desired quantity taken over the ensemble to its mean value for the real system. The general equation of the system motion in the phase space is usually called the Hamiltonian, which we denote by $G$; it represents the response function of the phase system. The Hamiltonian can be written as $G = f(x, \rho)$, and the entire system is then written in a very compact way. The equations of the system motion acquire the form

$$\dot x_i = \frac{\partial G}{\partial \rho_i}, \qquad \dot\rho_i = -\frac{\partial G}{\partial x_i} \tag{3.69}$$

The equation of variation of the parameter $\varphi$ is

$$\frac{\partial\varphi}{\partial t} = \sum_i^{3N}\left(\frac{\partial\varphi}{\partial x_i}\cdot\frac{\partial G}{\partial \rho_i} - \frac{\partial G}{\partial x_i}\cdot\frac{\partial\varphi}{\partial \rho_i}\right) \tag{3.70}$$

When one examines a stationary ensemble,

$$\frac{\partial\varphi}{\partial t} = 0$$

Hence, the value of $\varphi$ can depend only on constant components of the motion. If $E$ is the response of the system of interest to us, then

$$\varphi = f(E)$$

The density of the phase space distribution at the moment of time $t$ is determined from the relation

$$\int \varphi\,dx\,d\rho = 1 \tag{3.71}$$

The average value is then determined, according to (3.68), as

$$\bar f(E) = \int f(x,\rho)\,\varphi(x,\rho)\,dx\,d\rho \tag{3.72}$$

The famous Liouville expression follows from (3.70), (3.71) and the subsequent relations:

$$\frac{d\varphi}{dt} = \frac{\partial\varphi}{\partial t} + \sum_i\left(\frac{\partial\varphi}{\partial x_i}\cdot\frac{\partial G}{\partial \rho_i} - \frac{\partial G}{\partial x_i}\cdot\frac{\partial\varphi}{\partial \rho_i}\right) = 0 \tag{3.73}$$

where $\frac{d\varphi}{dt}$ is the total, or hydrodynamic, derivative. This theorem states that the rate of change of the density $\varphi$ of the number of ensemble systems along a streamline is zero.
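Liouville's statement can be illustrated with a one-particle sketch. Take a toy Hamiltonian $G(x, \rho)$ (purely an assumption for illustration) and a stationary density of the form $\varphi = f(G)$; since $G$ is conserved along a trajectory of Eqs. (3.69), the density carried along the streamline does not change:

```python
import math

def G(x, rho):
    # Toy harmonic Hamiltonian, used only to illustrate (3.73)
    return 0.5 * (x * x + rho * rho)

def phi(x, rho):
    # Stationary density chosen as a function of G alone
    return math.exp(-G(x, rho))

# Integrate dx/dt = dG/drho, drho/dt = -dG/dx with a leapfrog step
x, rho, dt = 1.0, 0.0, 1e-3
phi_start = phi(x, rho)
for _ in range(10_000):
    rho -= 0.5 * dt * x   # half kick: dG/dx = x
    x += dt * rho         # drift: dG/drho = rho
    rho -= 0.5 * dt * x   # half kick
print(phi_start, phi(x, rho))  # the density along the streamline is preserved
```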


Determine the entropy of the system under study:

$$H = -\int \varphi\ln\varphi\,dx\,d\rho \tag{3.74}$$

Here the boundary conditions are

$$\int \varphi\,dx\,d\rho = 1, \qquad \int \varphi E\,dx\,d\rho = \bar E \tag{3.75}$$

The solution of (3.74) with the observance of the conditions (3.75) gives

$$\varphi = e^{(-\lambda-\beta E)} \tag{3.76}$$

From this, we can determine the entropy of such a probability distribution. It amounts to

$$H = -\sum_i P_i \ln P_i \tag{3.77}$$

The problem of maximization of the dependence (3.77) under the boundary conditions (3.75) and (3.76) gives

$$P_i = \frac{\varphi_i}{\varphi} = e^{-\lambda-\beta E_i} \tag{3.78}$$

where

$$e^{\lambda} = Z = \sum_i e^{-\beta E_i} \tag{3.79}$$

and $Z$ is the statistical sum of the system. In the traditional interpretation, the probability density in the phase space is equivalent to determining the volume of this space. It can be shown that the volume corresponding to the state $\varphi_i$ is proportional to the probability value

$$P(N_i) = \frac{\varphi!}{\prod_i \varphi_i!} \tag{3.80}$$

Hence, the most probable state is obtained as a result of the maximization of $\ln P(\varphi_i)$ under the limitations (3.78) and (3.79) expressed using $\varphi_i$:

$$\sum_i \varphi_i\varepsilon_i = E \quad \text{and} \quad \sum_i \varphi_i = \varphi$$


3.10 Multidimensional statistical model of a two-phase system

Having determined a generalizing parameter of two-phase flow in the form of the potential extraction factor, as well as the methods of canonical distribution formation, we can move on to a multidimensional problem. Usually, a multidimensional system consisting of $N$ identical parts is described by an equation with a response parameter representing the total energy of the system. In the system under study, the potential extraction value, which has, as shown, the dimension of energy, can be accepted as a cumulative parameter of all the particles:

$$I = mg(x_1 + x_2 + \cdots + x_{3N}) + \frac{1}{2m}\left(P_1^2 + P_2^2 + \cdots + P_{3N}^2\right) \tag{3.81}$$

In this dependence, the first summand reflects the influence of the potential energy of the system on the formation of $I$, and the second one reflects the influence of the kinetic energy, since

$$\frac{m}{2}\left(\frac{dx_i}{dt}\right)^2 = \frac{P_i^2}{2m}$$

where $m$ is the particle mass; $x_i$ is the coordinate of the particle location; and $P_i$ is the particle momentum. In the general case, three position coordinates and three momentum components correspond to each particle. We repeat once more that the main idea of statistical mechanics is the "averaging" resulting from our lack of knowledge of the details of the behavior of the system of particles under study. At the same time, one should keep in mind the existence of states of the system whose statistical properties are no longer of interest from the standpoint of the process under study, since their probability is vanishingly small or equals unity. By way of example, we can refer to Fig. 3–2, where a separation curve shows the distribution of various particles in both directions. This means that for each size within the distribution range there is a different probability of output in one of the directions. In addition, it is necessary to examine the probability within an interval, and not at a specific point, that is, to take into consideration the probability density $P(z)$. This parameter is the probability of the location of the value $z$ within the interval $dz$. The expression $P(z)\,d^n z$ denotes the probability that $z = (z_1, z_2, z_3, \ldots, z_n)$, a vector containing all continuous variables of the set under study, lies within $d^n z$, the volume of an infinitesimally small cell of the $n$-dimensional space. For the sake of simplicity, we omit hereinafter the upper symbol $n$ in the differential denoting an elementary volume. In this case, the condition that the sum of probabilities equals unity is written as

$$\int_z P(z)\,dz = 1$$
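Returning to the cumulative parameter (3.81): for a small synthetic set of coordinates and momenta (all numbers below are arbitrary illustrative values), the parameter $I$ splits into its potential and kinetic summands:

```python
import random

random.seed(1)
m, g = 1.0, 9.81
n_components = 30  # 3N components for N = 10 particles
x = [random.uniform(0.0, 2.0) for _ in range(n_components)]  # coordinates
p = [random.gauss(0.0, 0.5) for _ in range(n_components)]    # momenta

potential = m * g * sum(x)                      # first summand of (3.81)
kinetic = sum(pi * pi for pi in p) / (2.0 * m)  # second summand of (3.81)
I = potential + kinetic
print(round(potential, 3), round(kinetic, 3), round(I, 3))
```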

FIGURE 3–2 Two-phase flow with a longitudinal partition.

where $P(z)$ is the probability density and $z$ ranges over a region of the $n$-dimensional space. The probability density is necessary for computing averages: in order to compute the average value of a function $\varphi(z)$, it must be integrated over all values of $z$ with a weighting function equal to the probability density of the realization of $z$:

$$\bar\varphi(z) = \int_z P(z)\,\varphi(z)\,dz$$
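The weighted average above can also be estimated by sampling: drawing $z$ from the density $P(z)$ and averaging $\varphi(z)$ over the sample approximates the integral. The one-dimensional density and function below are illustrative assumptions:

```python
import random

random.seed(0)

def phi_of_z(z):
    # Quantity to be averaged; for a standard normal P(z) the exact mean of z^2 is 1
    return z * z

samples = [random.gauss(0.0, 1.0) for _ in range(200_000)]  # z drawn from P(z)
estimate = sum(phi_of_z(z) for z in samples) / len(samples)
print(round(estimate, 3))  # close to the exact value 1.0
```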

Previously, we modeled a continuum of solid particles in the form of a system. Although this model was somewhat schematic, it turned out to be sufficient for obtaining rather interesting results that were confirmed in practice. When it is necessary to determine all possible states for a totality of particles, different approaches can be used; for instance, one can substantiate an ensemble of systems, each representing a copy of the system under study. Such an ensemble contains, at the same moment of time, all possible states of the system united by the highest generality. We can assume that all of them possess a constant number of particles $N$ and the same potential extraction or other parameters whose values are determined under the specific conditions. For understanding an ensemble of systems, it is helpful to apply the geometry of $n$-dimensional space. It is impossible to conceive such a space visually; this is beyond human abilities. However, an abstract analysis of such a space allows solving rather complicated problems for systems consisting of a great number of particles. It is noteworthy that, while operating with $n$-dimensional space, it is helpful to mentally compare the changes in it by analogy with the habitual three-dimensional space. Refer to Fig. 3–1 and consider a simplified model of the process for this case. We emphasize once more that a dynamic system characteristic of a two-phase flow involves a great number of particles $N$ of a narrow size class. Here we abstract away from the real shape of the particles and represent them as ideal balls devoid of internal degrees of freedom. This means that the state of each particle is completely described by one coordinate and one velocity component, because everything is considered in projection onto the vertical axis. We accept one more simplification: assume that the probability density of the state of the system under study, averaged over the characteristic features of the flow, is time-invariant $\left[\frac{\partial P(z)}{\partial t} = 0\right]$ and spatially uniform $\left[\frac{\partial P(z)}{\partial x_j} = 0\right]$. These limitations must be understood not only as a steady-state process, but also as a local fulfillment of the energy conservation law. After these preliminary notes, we proceed to the development of a multidimensional model of the system. In the system under consideration, consisting of $N$ particles, their location in space is characterized by $N$ coordinates of their centers of gravity on the vertical axis, which we denote by $x_i$, where $i = 1, 2, \ldots, N$. Then the state of such a system at some moment of time is determined by the $N$ values of the coordinates $x_i$ and the $N$ respective velocities $\dot x_i$ or, which is practically identical, by the conjugate momenta $p_i$ of each particle. For such a system of $N$ particles, its general state is represented by a point in a $2N$-dimensional phase space whose coordinates are

$$x_1, x_2, \ldots, x_N,\ p_1, p_2, \ldots, p_N$$

The location of particles in a system changes with time; accordingly, the point of phase space representing the state of the system moves in this space, circumscribing a line that can be described as a phase trajectory. A simple global condition of a steady-state process is that the value of potential extraction in it has a certain fixed constant magnitude. According to (3.81), we can write for the system under study

$$I = \sum_i \frac{1}{2}mv_i^2 + \sum_i mg(x_i - x_0) \tag{3.82}$$

In this dependence, the first summand is the kinetic energy of the system, and the second summand is the potential energy with respect to a certain level $x_0$. As a result, instead of a $2N$-dimensional phase space, we obtain a $(2N+1)$-dimensional space, where the potential extraction $I$ has the meaning of a response function. This model is much simpler than Boltzmann's model, since we fix only the particle distribution into two outlets, upper and lower. Of course, at any specified moment of time, each particle can contribute only one value to the total separation factor. We observe a system of $N$ particles of a certain narrow size class at successive moments of time


$t_1, t_2, t_3, \ldots, t_m$, and the number of such observations, $m$, is high. Naturally, at each observation the system is in one of its states. It can happen that in $n(i)$ cases of observation the system has one and the same value of the separation factor, equal to $i$. Then the probability of such a state is

$$P(i) = \frac{n(i)}{m}$$

It follows from the definition of probability that

$$\sum_i P(i) = 1$$

Here the definition of the mean value of a physical quantity for the discrete system under study appears in a natural way:

$$\langle z\rangle = \sum_i z(i)P(i) = \frac{1}{m}\sum_i z(i)n(i)$$
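The empirical construction $P(i) = n(i)/m$ and the two equivalent forms of the mean can be written out directly; the observation data here are invented for illustration:

```python
from collections import Counter

observations = [2, 3, 3, 1, 2, 3, 2, 2, 1, 3]  # separation-factor values in m = 10 observations
m = len(observations)
n = Counter(observations)                      # n(i): number of observations of state i

P = {i: count / m for i, count in n.items()}   # P(i) = n(i) / m
mean_from_P = sum(i * P[i] for i in P)         # sum over z(i) P(i)
mean_from_counts = sum(i * n[i] for i in n) / m  # (1/m) sum z(i) n(i)
print(round(mean_from_P, 6), round(mean_from_counts, 6))  # both 2.2
```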

For a system of $N$ particles, there are $N!$ ways of their placement. However, we can suppose that among these placements there are $n_1$ particles ensuring the obtainment of $z_1$. From this standpoint, systems located in the $n_1$ group become indistinguishable. It has been noted that the states of a system, or of its part, possessing the same separation factor, or values of it lying within a narrow interval, are self-similar to each other. Their number should be taken into account while counting the total number of states. In a stationary state of a system, in which $I$ does not depend explicitly on time, a certain point $s$ of the phase space corresponds to each mechanical state of the system. The position of the initial point $s_0$ at the moment of time $t_0$, together with the Hamiltonian, completely determines the system evolution. The main idea of introducing the notion of an ensemble consists in considering, instead of one dynamic system, a set of systems corresponding to one and the same Hamiltonian. As a rule, this is a very large number of systems of the same nature, which differ in particle velocities and in the mutual configurations of their arrangement in space, so that they cover all conceivable combinations of configurations and velocities. The choice of this set, or ensemble, depends on the limitations imposed on the system and on the initial conditions. If the initial conditions are specified unambiguously, the ensemble is concentrated in some region of the phase space with a clearly distinguishable boundary. Most often, the initial conditions are not specified unambiguously, and therefore an ensemble, as a rule, is distributed over a broad region with fuzzy boundaries. In the general case, an ensemble is represented as a cloud of points on the response surface in the phase space. This cloud can be considered continuous, with the density

$$f(x_1, x_2, \ldots, x_N,\ p_1, p_2, \ldots, p_N)$$

on the response surface in the phase space.

148

Entropy of Complex Processes and Systems

This density is normalized to unity. Each concrete state of a system is thereby represented by a point in (2N + 1)-dimensional space. All states of a system with the potential extraction I have points representing these states on a certain “surface.” The states whose extractions are enclosed between I and I + dI have points located in an infinitesimally thin layer between the surfaces I and I + dI. All the states whose potential extraction is smaller than I make up a region limited by the surface I. It is important to clarify the magnitude (volume) of this region. If a system is steady-state, its I remains unchanged with time. For such a case, Liouville proved a theorem affirming that an (N + 1)-dimensional surface formed in this way has a limited finite area. For the model that we are examining, Eq. (3.81) is somewhat simplified for two reasons. First, we examine a model of the process for which it is sufficient to know only one coordinate xᵢ and one momentum component pᵢ, corresponding to the vertical axis, for each point. Second, the particle position in the volume under study is of no importance, because particles with different potential energies make the same contribution to the parameter I depending on the direction of their velocity or, equivalently, of their momentum. Therefore for each concrete model considered it is enough to write

I = (c/2m)·(p₁² + p₂² + … + p_N²)

where c is a certain proportionality coefficient. Examine all states of the system whose momenta lie within the limits p₁ + dp₁, p₂ + dp₂, p₃ + dp₃ … p_N + dp_N. All the coordinates in the phase space are mutually perpendicular. In this case, an elementary region volume is

dv = dp₁ × dp₂ × dp₃ … dp_N

This expression represents the volume of a multidimensional parallelepiped in the phase space. The total phase space volume can be divided into elementary parallelepipeds. The totality of these elementary volumes and potential extraction values allows determining the quantity of complexes in the investigated system of N particles in the flow. The elementary layer thickness is proportional to dI. If the quantity of elementary volumes is ϕ, the entire volume can be represented as ϕ·dI. We introduce the parameter ϕ into the entropy formula and show that the obtained expression possesses all properties of entropy:

H = ln ϕ

We obtain the dependence for the dynamic entropy in the form

dH = (dI + f·dV)/χ    (3.83)

Chapter 3 • Dynamic component of entropy

149

In addition, we have substantiated the relationship fV/χ = N.

The potential extraction parameter has been determined as I = c₁Nχ, where c₁ is the proportionality coefficient taking into account the two latter relationships. The dependence (3.83) can then be transformed into

dH = dI/χ + (I/(χV))·dV = (I/χ)·(dI/I + dV/V)

The expression (3.83) represents a differential; here I/χ = c₁N is a constant. By choosing the integration constant properly, we can obtain

H = k ln(IV)    (3.84)

In this expression k is a generalized proportionality coefficient. Now we check the entropy value obtained from a multidimensional model. As is already known, the computation of a multidimensional space region can be considerably simplified by dividing the orthogonal coordinates into several groups whose integration limits are independent of each other. In this case, the region size is determined as a product of a certain number of integrals. In this way, phase space can be separated into any number of spaces of lower dimension, down to one particle, that is, into N regions. At that, the space region for each particle should be equal to a certain volume V. It is known that particles occupy an extremely small fraction of this space; therefore each of them can occupy any position within the volume V. Thus, the configurational part of phase space must have a volume equal to V^N. Now determine the phase space volume corresponding to a thin layer comprised between I and I + dI. In N-dimensional space, the volume of a hypersphere is proportional to r^N. Hence, we have to differentiate an expression containing a multiplier (2mI/c)^{N/2}, from which the desired phase space size is proportional to (2m/c)^{N/2}·I^{N/2−1}. Taking this into account, we can write

ϕ·dI = (2m/c)^{N/2}·V^N·I^{N/2−1}·dI

Substitute ϕ into the entropy formula:

H = ln ϕ = ln(2m/c)^{N/2} + N ln V + N(1/2 − 1/N) ln I

The value of N is high, and therefore the term 1/N in brackets is vanishingly small and can be neglected. If ln(2m/c)^{N/2} is rejected as a constant of no interest, we can finally obtain

H = N ln(VI)

This expression corresponds to the relation (3.84). Hence, the entropy determined on the basis of the multidimensional model is correct.
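The effect of neglecting the 1/N term can be illustrated numerically (the values of N, V, and I below are illustrative; the symbols follow the formulas above). Dropping 1/N changes H by exactly ln I, which is negligible against H, which grows proportionally to N:

```python
import math

# Sketch: the term 1/N neglected in H = ln(phi) shifts the entropy by exactly
# ln I, vanishingly small relative to H ~ N for large N. Illustrative values.
N = 1_000_000
V, I = 2.0, 5.0

H_full  = N * math.log(V) + N * (0.5 - 1.0 / N) * math.log(I)  # before truncation
H_trunc = N * math.log(V) + (N / 2) * math.log(I)              # 1/N dropped

assert abs((H_trunc - H_full) - math.log(I)) < 1e-6   # difference is exactly ln I
assert abs(H_trunc - H_full) / abs(H_full) < 1e-5     # negligible relative to H
```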

3.11 Two-phase flow mobility

In the course of the analysis of the main regularities of thermodynamics, it was concluded that various gases have different mobilities. The magnitude of this mobility was determined as a parameter identical with or proportional to heat capacity. Undoubtedly, for two-phase flows it is also appropriate to pose a question about the system mobility. At a purely intuitive level, it is clear that this parameter is determined by the magnitude of the chaotizing factor, the granulometric composition of the solid phase, and its concentration. Let us try to determine this parameter. For this purpose, consider a steady flow in a system whose arrangement is shown in Fig. 3–1. We assume that the potential extraction in the system lies in the range between I and I + dI because of fluctuations. The point representing this system is located in the corresponding thin layer of the phase space. We assume that in this layer all the positions of the representing point are equally probable. This means that separating the layer into small elements of equal volumes gives an equal probability of the point's location in any of these elements. Obviously, the most probable state corresponds to the greatest of these constituents of the phase space. Now we examine a steady flow in the facility shown in Fig. 3–2. A peculiar feature of this facility is the presence of a partition, which is longitudinal with respect to the flow. This partition is made in the form of a grid capable of letting through the molecules of the carrier flow but detaining solid particles. In addition, the cells are so small that particles do not stick in them. Imagine that in each hose of the flow there are particles of a narrow size class with the dimensions x₁ and x₂, and their amounts are also different, N₁ and N₂. Here the flow velocities can be different, that is, w₁ ≠ w₂. At the convergence of these flows into the vertical channel, the velocities of the carrying medium start equalizing.
In one part of the channel the velocity somewhat grows, and in the other it decreases in compliance with cross-section characteristics. At that time, the production of entropy takes place in both parts of the system. It ceases after the establishment of a stationary flow regime in both parts of the channel. If we assume that the part of the channel with a stationary flow is a statistical system, the following relations are valid for it:

N = N₁ + N₂,  I = I₁ + I₂    (3.85)

We specify the total potential extraction not absolutely exactly; let it be contained between I and I + dI. If in this case the change of I₁ for the first flow is between I₁ and I₁ + dI₁, we can write the increment of the second flow as I₂ + dI₂. Here we obtain that for system 1 the phase space region is proportional to dI₁, and for system 2 it is proportional to dI₂. The volumes of these regions, as determined above, can be written as ϕ₁(I₁)dI₁ and ϕ₂(I₂)dI₂. For the volume of the aggregate region, this gives

ϕ₁(I₁)·ϕ₂(I₂)·dI₁dI₂    (3.86)

To obtain the most probable state of the aggregate system, it is necessary to seek the maximum of the expression (3.86) or the maximum of its logarithm. Supposing that

H₁ = k log ϕ₁(I₁),  H₂ = k log ϕ₂(I₂),

we need to seek the maximum of the sum H₁ + H₂.

The number of admissible states of a combined system is maximal at the equality of the chaotizing factor, or of the flow velocity, in both systems. A complete change in the entropy of a combined system in the channel behind the partition occurs at the expense of the chaotizing factor equalization. It will be equal to the sum of the entropy changes of each part caused by the equalizing flows, that is,

dH = −dI/χ₁ + dI/χ₂ = dI·(1/χ₂ − 1/χ₁)
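The sign of this entropy production can be sketched numerically (illustrative values; `entropy_production` is a hypothetical helper name, not from the book):

```python
# Sketch: sign of the entropy production when a small amount of potential
# extraction dI passes from part 1 (chaotizing factor chi1) to part 2 (chi2).
def entropy_production(dI, chi1, chi2):
    """dH = -dI/chi1 + dI/chi2 for a transfer dI from part 1 to part 2."""
    return -dI / chi1 + dI / chi2

# A transfer from the part with the larger chaotizing factor produces entropy...
dH_toward_equilibrium = entropy_production(0.1, chi1=4.0, chi2=2.0)
# ...and at equal chaotizing factors the production vanishes.
dH_at_equilibrium = entropy_production(0.1, chi1=3.0, chi2=3.0)

assert dH_toward_equilibrium > 0
assert dH_at_equilibrium == 0
```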

Hence, it is clear that when the chaotizing factor is equalized in both parts, entropy production becomes zero; before this moment it exceeded zero, that is, the entropy was growing. As we have shown, the most probable configuration of the combined system is the one for which the number of admissible states is maximal at the equality of the chaotizing factor of both systems. This maximality can be determined by analyzing the product of the numbers of admissible states of the separate systems with respect to the independent variables characterizing both systems. From the relation

ϕ = ϕ₁ϕ₂ = ϕ₁(N₁, I₁)·ϕ₂(N − N₁, I − I₁)


the extremum condition is

d(ϕ₁ϕ₂) = (∂ϕ₁/∂N₁·dN₁ + ∂ϕ₁/∂I₁·dI₁)·ϕ₂ + (∂ϕ₂/∂N₂·dN₂ + ∂ϕ₂/∂I₂·dI₂)·ϕ₁ = 0    (3.87)

We can write

dN₂ = d(N − N₁) = −dN₁,  dI₂ = d(I − I₁) = −dI₁

Hence,

∂ϕ₂/∂N₁ = −∂ϕ₂/∂N₂,  ∂ϕ₂/∂I₁ = −∂ϕ₂/∂I₂

Dividing both parts of (3.87) by the product ϕ₁ϕ₂ and taking into account the found relations, we obtain

(1/ϕ₁·∂ϕ₁/∂N₁ − 1/ϕ₂·∂ϕ₂/∂N₂)·dN₁ + (1/ϕ₁·∂ϕ₁/∂I₁ − 1/ϕ₂·∂ϕ₂/∂I₂)·dI₁ = 0

This expression reflects the condition of mutual equalization, or stationarity, of the combined system. This dependence can be somewhat simplified:

(∂ln ϕ₁/∂N₁ − ∂ln ϕ₂/∂N₂)·dN₁ + (∂ln ϕ₁/∂I₁ − ∂ln ϕ₂/∂I₂)·dI₁ = 0

Obviously, the condition of equilibration of the two systems is satisfied when the expressions in both brackets become zero; for the second bracket this equilibrium condition was established earlier. Thus, we obtain

∂H₁/∂N₁ = ∂H₂/∂N₂,  ∂H₁/∂I₁ = ∂H₂/∂I₂    (3.88)
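A minimal numerical sketch, assuming the model H = N ln(IV) of (3.84) for each part, confirms that the maximum of H₁ + H₂ over the split of a fixed total extraction falls where I₁/N₁ = I₂/N₂, that is, where the chaotizing factors (χ proportional to I/N) are equal. All numbers are illustrative:

```python
import math

# Sketch: maximize H1 + H2 with the model H_i = N_i * ln(I_i * V_i) from (3.84),
# splitting a fixed total extraction I between the two parts. Illustrative numbers.
N1, N2 = 300, 700
V1 = V2 = 1.0
I_total = 10.0

def H(I1):
    I2 = I_total - I1
    return N1 * math.log(I1 * V1) + N2 * math.log(I2 * V2)

# grid search for the maximizing split of the total extraction
grid = [I_total * k / 10000 for k in range(1, 10000)]
I1_best = max(grid, key=H)

# at the maximum, I1/N1 = I2/N2, i.e. the chaotizing factors are equal
assert abs(I1_best - I_total * N1 / (N1 + N2)) < I_total / 1000
```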

The second condition is familiar to us. It is disclosed as χ₁ = χ₂; that is, the values of the chaotizing factors in both parts of the system are equalized. The first condition is new. We introduce the notation

∂H/∂N = τ/χ    (3.89)

where τ is a parameter playing the role of a mobility factor. H and N are dimensionless quantities; therefore the right-hand part of (3.89) must also be dimensionless.


Thus, here one more condition of a stationary or steady process is added. At the merging of two systems at the same flow velocity, the additional condition of steady flow acquires the form

τ₁/χ₁ = τ₂/χ₂    (3.90)

that is, two systems come to a dynamic equilibrium when the ratios of their mobility factors to the chaotizing factor become equal. The mobility factor characterizes particle behavior under the specific conditions of a flow. The dimension of this parameter must be equal to the chaotizing factor dimension, that is, kg·m. Such a characteristic for a single particle is the square of the vertical component of its velocity multiplied by the particle mass:

τ = mv²/2    (3.91)

The new parameter determines the presence of particle diffusion. Imagine a nonequilibrium version: let τ₂ > τ₁. At the transition of ΔN particles from system 2 into system 1, the entropy change behind the partition is, according to the condition (3.87),

dH = d(H₁ + H₂) = (∂H₁/∂N₁)·dN − (∂H₂/∂N₂)·dN = (τ₁/χ₁ − τ₂/χ₂)·dN    (3.92)

In this expression, the statistical force of the process is the difference τ₁/χ₁ − τ₂/χ₂, and the statistical flow is the redistribution of particles. This statistical force and flow are the basis of entropy production. When equilibrium comes, that is, at

τ₁/χ₁ = τ₂/χ₂,

a directed flow of particles ceases, and entropy production becomes zero. At that point, stationarity sets in. In the first approximation, the mobility parameter determines such nonequilibrium processes as diffusion and equalization of flow velocities. Its general physical meaning will be explained later.
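This relaxation can be sketched with a toy iteration, assuming the model τᵢ = Iᵢ/Nᵢ (the relation I = Nτ derived later as (3.107)) with Iᵢ and χᵢ held fixed; all numbers and the relaxation step are illustrative:

```python
# Sketch: relaxation of the particle exchange between the two parts of the channel,
# modeling the mobility as tau_i = I_i / N_i with fixed I_i and chi_i.
# Particles flow from part 2 to part 1 while tau1/chi1 > tau2/chi2.
I1, I2 = 8.0, 2.0
chi1, chi2 = 1.0, 1.0
N1, N2 = 100.0, 100.0

def force(n1, n2):
    """Statistical force driving the flow: tau1/chi1 - tau2/chi2."""
    return (I1 / n1) / chi1 - (I2 / n2) / chi2

for _ in range(20000):
    dN = 5.0 * force(N1, N2)   # small relaxation step toward equilibrium
    N1 += dN
    N2 -= dN

# at dynamic equilibrium the ratios of mobility to chaotizing factor are equal,
# i.e. I1/N1 = I2/N2 here, giving N1 = 160 of the 200 particles
assert abs(force(N1, N2)) < 1e-6
```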

3.12 Other invariants for two-phase flows

Consider the interaction of solid particles in the flow in the separation mode. The gravitational field orders the motion of particles, directing them downwards, while the chaotizing factor opposes this. We mentally single out a section limiting a certain volume by two planes perpendicular to the vertical channel axis, at the distance l between them

FIGURE 3–3 Flow section.

(Fig. 3–3). Analyze the pattern of the process in this section. It does not matter how the solid bulk material arrives at this section: with the flow from below, against the flow from above, or through holes in the side of this section. We assume that at a certain moment of time there are N particles of different sizes in this section:

N = Σₙ Nᵢ

where the summation runs over the n narrow size classes.

We select a concrete narrow size class out of the particle flow and examine its behavior. The flow affects this class with a certain specific force f (kg/m²), which allows a certain number of these particles to overcome the force of gravity. The medium flow rate through this volume is V (m³/s). At that, the chaotizing factor is χ (kg·m). In a stationary mode, the quantity of particles in this volume is proportional to the volume size V and the specific force f, and inversely proportional to the chaotizing factor, because the higher the chaotizing factor, the quicker the particles leave this volume. Hence, we can write for a narrow class

fV/χ = σᵢNᵢ    (3.93)

where σᵢ is a certain proportionality coefficient. This coefficient is dimensionless. We will derive this relation later from the parameters of the process. This dependence can be written for all the particles as

Vf/χ = Σₙ σᵢNᵢ    (3.94)

The separation process is provided if this dependence is valid at least for one i-th size class from the total distribution of particles. The quantity f must provide the overcoming of weight for particles of a certain size within their distribution. If for all classes of particles

Vf/χ > σᵢNᵢ,

it is a regime of pneumatic transport. If Vf/χ < σᵢNᵢ is valid for all classes of particles, a descending layer or a motionless layer is realized on the grid blown through from below. Taking these remarks into account, it can be stated that the entropy of the separation process under study is an additive function of the lifting factor, volume, and number of particles (I, V, N). It can be written as a function of many variables:

dH = (∂H/∂I)_{V,N}·dI + (∂H/∂V)_{I,N}·dV + (∂H/∂N)_{I,V}·dN

Recall two determinative relations found previously:

∂I/∂H = χ  and  ∂H/∂N = τ/χ

If the left and right sides of these expressions are multiplied, one obtains ∂I/∂N = τ, and hence

dI = τ·dN    (3.95)

It follows from the first expression that dI = χ·dH.


We attempt to determine the impact of the volume V and flow intensity f on the lifting factor, or potential extraction. Consider a certain number of self-similar systems with the same entropy H, volume V, lifting factor I, and value of f. Mentally perform a slow quasistationary increase of the volume of one of these systems from V to V + ΔV. The change is so slow that the system remains in its initial equilibrium state; that is, its entropy H and the number of particles N in it remain unchanged. Under these conditions, such a change is reversible. As is known, all equations of mechanics can be obtained using the principle of least action (the principle of the potential energy minimum). This is a fact established and commonly recognized long ago. All systems in nature spontaneously evolve to an equilibrium state, in which the entropy reaches its characteristic extreme value. It is known that at constant I and V a system evolves to a state with maximal entropy. At the same time, by analogy with the second law of thermodynamics, at constant H and V a system evolves to a state with minimal I. In our case, with increasing volume at a constant H, the lifting factor changes from I(V) to I(V + ΔV). Expand the lifting factor in a series with precision up to the terms of the first order with respect to ΔV:

I(V + ΔV) = I(V) + (∂I(V)/∂V)·ΔV + …

If I decreases at that point, it is obvious that

∂I(V)/∂V = −f,

and the total expression is written as

I(V + ΔV) = I(V) − ΔV·f

Taking this into account, it follows from the general relation

dI = χdH − fdV + τdN

that

dH = dI/χ + (f/χ)·dV − (τ/χ)·dN

Reducing the latter expression to a common denominator, we can write

χdH = dI − τdN + fdV

In a similar way, we can write the relation for I:

dI = χdH − fdV + τdN    (3.96)


By analogy with thermodynamics, this dependence can be called a statistical identity for a two-phase flow. The physical sense of this identity becomes visual if we separate the kinetic and potential components:

χdH + τdN = dI + fdV    (3.97)

The left-hand part of this identity contains the kinetic energy of the particle flow, and the right-hand part their potential energy. However, this identity is not a conservation law, since it does not include all the energy of the flow and all the potential energy of the solid phase. The kinetic energy refers not to the entire flow, but only to the part equal to the total volume of solid particles. The potential energy of solid particles in this expression refers only to the particles making up the imbalance. This follows from the definitions of the potential extraction and the chaotizing factor of the flow. In separation regimes of two-phase flows, a new, unusual, and interesting property arises: negative production of entropy. When a polyfractional material gets into an ascending flow, a certain ordering of particles by size occurs as a result of the phenomenon of delamination. This phenomenon is opposite to mixing. It is known that a mixture of particles can be characterized by the entropy of composition. The accomplishment of the separation process leads to a decrease in this entropy component. In conclusion, we discuss one more relationship. The specific flow pressure can be found from the dependence

fV/χ = σN

Hence,

f = σNSw²ρ₀/(2V)

We denote

β = NS/V

where β is the specific volume of the solid phase in the flow, and S is the volume of an individual particle. Taking this into account,

f = (1/2)·σβw²ρ₀    (3.98)
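A short consistency sketch (all parameter values illustrative) recovers (3.98) from the balance fV/χ = σN, taking χ = Sw²ρ₀/2 as the per-particle chaotizing factor implied by the two relations above:

```python
# Sketch: the specific flow pressure (3.98) recovered from fV/chi = sigma*N
# with chi = S*w**2*rho0/2 per particle. Illustrative numbers throughout.
sigma = 0.44      # dimensionless proportionality coefficient
N = 1.0e6         # number of particles in the section
S = 1.0e-9        # volume of an individual particle, m^3
V = 0.02          # volume of the section, m^3
w = 6.0           # flow velocity, m/s
rho0 = 1.2        # carrier medium density, kg/m^3

chi = S * w**2 * rho0 / 2                 # chaotizing factor per particle
f_from_balance = sigma * N * chi / V      # from fV/chi = sigma*N

beta = N * S / V                          # specific volume of the solid phase
f_from_drag = 0.5 * sigma * beta * w**2 * rho0   # relation (3.98)

assert abs(f_from_balance - f_from_drag) < 1e-12 * max(f_from_balance, 1.0)
```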

Here we observe an analogy with the square law of drag, since f is proportional to w²ρ₀, which is completely true from the standpoint of the physics of two-phase flows. It is perfectly clear from the above that the number of particles N in a system is an independent statistical quantity. For a narrow class of particles described by Eq. (3.97), the entropy of the flow can be written in the form


dH = dI/χ + (f/χ)·dV − (τ/χ)·dN    (3.99)

For a polyfractional system consisting of k different narrow size classes containing N₁, N₂, N₃ … N_k particles, the following notation suggests itself for the entropy, which is definitely an additive quantity:

dH = dI/χ + (f/χ)·dV − (τ₁dN₁ + τ₂dN₂ + … + τ_k dN_k)/χ    (3.100)

It is unclear whether this notation is complete or incomplete or, even more so, basically incorrect. The dependence (3.100) does not contain in explicit form a very important entropy component arising at the mixing of homogeneous components, in this case different narrow classes of particles. To clarify this issue, we represent the total entropy as consisting of two parts, dynamic (H₁) and static (H₂):

dH = dH₁ + dH₂    (3.101)

The dynamic entropy component is expressed by the relation

dH₁ = dI/χ + (f/χ)·dV    (3.102)

The static part of this parameter, as determined for the entropy of mixing, is written in terms of probabilities. It is unclear which form is true:

dH₂ = −Σ_k P_k ln P_k  or  H₂ = −Σ_k τ_k N_k /χ    (3.103)

According to our definition, the probability of a concrete size class is determined as

Pᵢ = Nᵢ / Σ_k N_k    (3.104)
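A small sketch (the class counts are illustrative) computes these relative concentrations and the mixing-entropy form of (3.103) built from them:

```python
import math

# Sketch: relative concentrations (3.104) of narrow size classes and the
# mixing entropy -sum P ln P built from them. Class counts are illustrative.
counts = [500, 300, 150, 50]          # N_i for k = 4 narrow size classes
total = sum(counts)

P = [Ni / total for Ni in counts]     # P_i = N_i / sum_k N_k

H_mix = -sum(Pi * math.log(Pi) for Pi in P)

assert abs(sum(P) - 1.0) < 1e-12            # the mu_i sum to unity
assert 0.0 < H_mix < math.log(len(counts))  # bounded by ln k (uniform mixture)
```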

The expression (3.104) shows the relative concentration of i-th particles in the mixture, that is,

Pᵢ = μᵢ

Here Σ_k μ_k = 1. To determine the connection between the relative concentration of particles and flow parameters, it is necessary to examine the influence of the number of particles on the character of the principal regularities. Energy, potential extraction, and entropy possess the additivity property. This property allows making a definite conclusion about the character of the dependence of all these


quantities on the number of particles. The additivity of a parameter means that when the number of particles N changes several times, the parameter changes by the same factor. We express the potential extraction in the form of the following function of entropy and volume:

I = N·f(H/N, V/N)    (3.105)

The dependence (3.105) is the most general form of a homogeneous function of the first order of N, H, V. Considering N as another independent variable, we can write according to (3.99) that

dI = χdH − fdV + τdN    (3.106)

where τ = (∂I/∂N)_{H,V} is the system mobility. Hence, it can be readily shown that

I = Nτ    (3.107)

Thus, the mobility of a system of solid particles in separation regimes is none other than the potential extraction related to one particle. Being expressed as a function of χ and f, the flow mobility by itself is independent of N. Its differential can be expressed immediately as follows:

dτ = −h·dχ + v·df    (3.108)

where h and v are the entropy and volume related to one particle, respectively.

3.13 Canonical distributions in the definition of statistical ensembles for two-phase flows

In classical physics, the system energy plays a special part because of its connection with the Hamiltonian. The development of information theory has somewhat changed the situation: it uses entropy, and not energy, as the principal concept. Such a precedent is very useful for the study of two-phase flows, because energy does not play a special part in their statistical model. Taking this remark into account, we can proceed to a consistent discussion of various kinds of ensembles using a discrete formalism. In this case, we have to accept as the main characteristic, as shown previously, the potential extraction of a dynamic system of particles.


(a) Microcanonical ensemble

A microcanonical ensemble is characterized by the consideration of a fixed number of particles with the same aerodynamic characteristic in a narrow interval of potential extractions; that is, N = const, and I varies within a narrow interval ΔI. Here we can construct the required ensemble of systems assuming that their density is zero everywhere except in the chosen narrow interval ΔI. This allows us to accept the following distribution density:

ϕ(I) = const (for extraction in the interval ΔI near I);  ϕ(I) = 0 (outside this interval)

The characteristic of a microcanonical ensemble is determined by the presence of a fixed number N of particles of a narrow size class and a mean probability of the particle extraction ⟨εᵢ⟩. These parameters are constant for this distribution. Hence, the average extraction of particles out of the system amounts to

I = N⟨εᵢ⟩

(3.109)

This means that all the states of a macrosystem acquire identical characteristics. And from the microposition, we can get to know which distribution of Nᵢ is the most probable. Since

Σᵢ ϕᵢ = ϕ,    (3.110)

the probability can be determined using the formula

Pᵢ = ϕᵢ/ϕ    (3.111)

At the same time, the probability normalization gives

Σᵢ Pᵢ = 1    (3.112)

Pᵢ must be understood as the probability of the i-th particle to be extracted. Therefore, the total extraction of these particles is

Σᵢ Pᵢεᵢ = I    (3.113)

But now we can determine the entropy of such a probability distribution. It amounts to

H = −Σᵢ Pᵢ ln Pᵢ    (3.114)


The problem of maximization of the dependence (3.114) under the boundary conditions (3.112) and (3.113) gives

Pᵢ = ϕᵢ/ϕ = e^{−λ−χI(xᵢ)}    (3.115)

where

e^λ = Z = Σᵢ e^{−χI(xᵢ)}    (3.116)

where Z is the statistical sum of the system. In the traditional interpretation, the probability density in the phase space is equivalent to determining the volume of this space. It can be shown that the volume corresponding to the state ϕᵢ is proportional to the probability value

P(Nᵢ) = ϕᵢ / ∏ᵢ ϕᵢ!

Hence, the most probable state is obtained as a result of maximization of ln P(ϕᵢ) under the limitations (3.112) and (3.113) expressed using ϕᵢ:

Σᵢ ϕᵢ = ϕ  and  Σᵢ ϕᵢεᵢ = I

This means that the obtained result is analogous to (3.115) and (3.116). Thus, a microcanonical ensemble characterizes a system of particles of a narrow size class, which has a stable extraction in specific conditions. As will be shown later, this is always confirmed in experiments.
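A minimal numerical sketch of this maximization (three illustrative states): the exponential distribution (3.115) has higher entropy than any perturbed distribution satisfying the same constraints (3.112) and (3.113):

```python
import math

# Sketch: the maximum-entropy distribution under constraints (3.112)-(3.113)
# is the exponential form (3.115). Verified on three states by showing any
# constraint-preserving perturbation lowers the entropy. Illustrative values.
I_vals = [0.5, 1.0, 2.0]     # potential extractions I(x_i) of three states
chi = 0.8

Z = sum(math.exp(-chi * Ii) for Ii in I_vals)
P = [math.exp(-chi * Ii) / Z for Ii in I_vals]

def entropy(Q):
    return -sum(q * math.log(q) for q in Q)

# a direction that keeps both sum(Q) and sum(Q * I) unchanged
d = [I_vals[1] - I_vals[2], I_vals[2] - I_vals[0], I_vals[0] - I_vals[1]]

for eps in (0.01, -0.01):
    Q = [p + eps * di for p, di in zip(P, d)]
    assert all(q > 0 for q in Q)
    assert abs(sum(Q) - 1.0) < 1e-12       # normalization preserved
    assert entropy(Q) < entropy(P)         # entropy strictly lower off-maximum
```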

(b) Canonical ensemble

A canonical ensemble is obtained from a microcanonical one if we assume the possibility of changing the potential extraction I in the system. Such an ensemble is characterized by the dependence

I = N⟨εᵢ⟩

Hence, here we deal with a larger spectrum of particles, for which different extractions are possible. At that, I(x) varies with particle size. In this case, various possible macrostates arise. Now we determine the probability of the situation where all the particles of a certain size i have an extraction equal to I(xᵢ). This is denoted by Pᵢ. Then the following is valid for the entire system:

Σᵢ Pᵢ = 1    (3.117)


Σᵢ Pᵢ·I(xᵢ) = I    (3.118)

H = −Σᵢ Pᵢ ln Pᵢ    (3.119)

Entropy maximization under the indicated limitations gives

Pᵢ = e^{−λ−χI(xᵢ)}    (3.120)

where

e^λ = Z = Σᵢ e^{−χI(xᵢ)}    (3.121)

There is a fundamental difference between the expressions (3.115), (3.116) and (3.120), (3.121). The first two refer to individual extractions, while the final two refer to the extractions of the entire system. It is of interest to make the following step in the analysis of a canonical system. The extractions of Nᵢ particles of the same class take on the values I(xᵢ) with the canonical probability distribution Pᵢ (3.120). If we take the logarithm of this equation and substitute it into (3.119), using the dependence (3.118), we can obtain

H = λ + χ⟨I(x)⟩

(3.122)

It follows that in a canonical ensemble the entropy value depends on the average value of the extraction, the value of λ, and the value of χ. The physical meaning will be clarified in the following part, and the parameter χ can be found by solving Eq. (3.118). Separation results in such a system are completely determined by the assignment of all εᵢ and their average value ⟨I(xᵢ)⟩. These magnitudes are connected with the process variables, the most important of which is the flow velocity. Proceeding from (3.117), we obtain

Σᵢ dPᵢ = 0

Substituting the value of Pᵢ from (3.120), we obtain

Σᵢ e^{−λ−χI(xᵢ)}·[dλ + χ·dI(xᵢ) + I(xᵢ)·dχ] = 0    (3.123)

Assuming that I(xᵢ) is a function of only the flow velocity w, we can determine the parameter

Sᵢ = −∂I(xᵢ)/∂wᵢ    (3.124)


and the quantity Sᵢ should be admitted as a “generalized influence.” We denote the average quantity by ⟨Sᵢ⟩. We can obtain from (3.123)

dλ + ⟨I(xᵢ)⟩·dχ − χ·Σᵢ Sᵢdwᵢ = 0    (3.125)

Hence, we obtain

dH = dλ + χ·d⟨I(xᵢ)⟩ + ⟨I(xᵢ)⟩·dχ    (3.126)

Excluding λ from Eqs. (3.125) and (3.126), we obtain

dH = χ·d⟨I(xᵢ)⟩ + χ·Σᵢ Sᵢdwᵢ    (3.127)

The expression (3.127) can serve as a basis for understanding the behavior of the system under study represented by a canonical ensemble. Such understanding can be attained by analogy with thermodynamic laws. These laws point to the following properties of a developed system.
1. A system represented by a canonical ensemble is in equilibrium, and this equilibrium is characterized by the parameter χ. If an interaction of two systems with the same χ is realized, the combined system will be in equilibrium with the same χ.
2. If two contacting systems have different χ, the extraction of an overwhelming number of particles changes. This is analogous to the impact of an additional heat flow; that is, these systems lose equilibrium.
3. If a system is isolated and wᵢ cannot vary, then, because of their interaction,

dH = 0

Consider a very slow change of w. In this case, ⟨εᵢ⟩ will change, and

χ·d⟨I(xᵢ)⟩ = −χ·Σᵢ Sᵢdwᵢ,

which ensures dH > 0.

(c) Grand canonical ensemble

The grand canonical ensemble represents a further generalization of the canonical ensemble, obtained if we examine a system with a variable number N. Therefore, instead of a fixed N value, we consider the average ⟨N⟩. Let Nᵢ be the total number of particles at


the moment of time t, and I(xᵢ) be the potential extraction at the moment of time t. Now, if Pᵢ is the probability of the realization of the system state at the same moment of time, then

Pᵢ = e^{−λ−αNᵢ−χI(xᵢ)}    (3.128)

where

e^λ = Z = Σᵢ e^{−αNᵢ−χI(xᵢ)}

α plays the role of the system partition into distinguishable elements. By analogy with the previous example, we can obtain for the grand canonical ensemble

H = λ + αN + χ⟨I(x)⟩

Besides handling systems with variable composition, the grand canonical ensemble structure simplifies calculations and allows obtaining results that cannot be obtained with ensembles of a smaller level. It proves to be useful for microstate analysis.

(d) Relation between canonical and microscopic ensembles

We examine the microstate in a system described by a canonical ensemble. Imagine, as previously, that the system contains Nᵢ particles, and the potential extraction of each of them is ⟨εᵢ⟩. Using the expounded method, the total extraction in the system ⟨I(x)⟩ can be determined. The probability of the existence of Nᵢ particles with the extractions εᵢ is

P(Nᵢ) = Σᵢ Pᵢ    (3.129)

where the summation is performed over all the states of the i-th system in which Nᵢ have a potential extraction equal to εᵢ. In this case,

I(x) = Σᵢ Nᵢεᵢ

and the number of ensembles of the system is

φ = N! / ∏ᵢ Nᵢ!    (3.130)

This quantity must be used as a weighting coefficient of e^{−μεᵢ} at the summation in (3.129). After the performed calculations, we can finally obtain

I(Nᵢ) = −(1/χ)·∂lnZ/∂I(xᵢ)    (3.131)

where Z is the statistical sum of a canonical ensemble. It equals

Z = [Σᵢ e^{−χI(xᵢ)}]^N    (3.132)

Substitution of (3.132) into (3.131) gives

I(Nᵢ) = N·e^{−χI(xᵢ)} / Σᵢ e^{−χI(xᵢ)}    (3.133)

The connection of the statistical sums of the microcanonical and canonical ensembles is expressed as

Z_k = Z_mk^N    (3.134)

It is of interest to reveal the connection of the entropies of the microcanonical and canonical ensembles. For a canonical ensemble, the following was obtained:

H_k = −Σᵢ Pᵢ ln Pᵢ = λ + χ·I(x)    (3.135)

The same dependence can be written using the statistical sum:

H_k = ln Z_k − χ·dlnZ_k/dχ

Similarly, for a microcanonical ensemble,

H_mk = ln Z_mk − χ·dlnZ_mk/dχ

Using (3.134), we can obtain

H_k = N·H_mk
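This scaling can be checked directly (illustrative single-particle extractions; the single-state sum Z1 stands in for Z_mk), using H = ln Z − χ·dlnZ/dχ with the derivative evaluated analytically as −⟨I⟩:

```python
import math

# Sketch: with the canonical sum factorizing as Z_k = Z_mk**N (3.134), the
# entropy H = ln Z - chi * dlnZ/dchi scales as H_k = N * H_mk.
# Illustrative single-particle states.
I_vals = [0.2, 0.7, 1.5]   # extractions of the single-particle states
chi = 1.1
N = 50

Z1 = sum(math.exp(-chi * Ii) for Ii in I_vals)
mean_I = sum(Ii * math.exp(-chi * Ii) for Ii in I_vals) / Z1  # equals -dlnZ1/dchi

H_mk = math.log(Z1) + chi * mean_I          # single-particle ensemble entropy
H_k = math.log(Z1**N) + chi * N * mean_I    # N-particle entropy from Z_k = Z1**N

assert abs(H_k - N * H_mk) < 1e-9
```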

The dependence (3.133) shows that even one particle in a two-phase flow can be represented by a canonical ensemble with the same characteristics as an ensemble with many particles. The results of this conclusion are useful when it becomes necessary to obtain models of the behavior of particles with identical characteristics in the process. Here there are at least two principal cases: Fermi–Dirac statistics, where Nᵢ can take on the values 0 or 1, and Bose–Einstein statistics, where Nᵢ can take on 0 or any whole positive value.


3.14 Statistical analysis of mass exchange in a two-phase flow

Consider a two-phase system having a constant number of particles N_a and the volume V in its stationary state. Assume that at a fixed flow velocity w of the medium, the lifting factor of this system is characterized by the quantity I_a. We tentatively divide this system into two parts (Fig. 3–3) and call the larger one an apparatus and the smaller one a zone. The zone is understood as a part of the vertical channel volume that has a small height and overlaps the entire cross-section of the channel. It is assumed that the zone height is small, but sufficient for holding a large number of particles and insufficient for significant changes in composition, particle concentration, and other parameters of the process. For the sake of convenience, we arrange the singled-out zone at the upper edge of the system, although it can be located in any part of the apparatus; this does not affect the character of the obtained results. We introduce another zone height restriction. It is chosen so small that all the particles whose motion is oriented upwards leave its range; that is, they are withdrawn out of the apparatus. Consider the statistical properties of a particle in such a zone taking into account its contact with the apparatus. Such contact means that the zone and the apparatus have an unlimited possibility to exchange particles, and the flow velocities in them are either equal or rigidly connected in compliance with the ratio of the respective flow sections. If the number of particles in the zone equals N (N << N_a), then their number in the apparatus is N_a − N; and if the zone has the lifting factor E, this parameter for the apparatus equals I_a − E. The problem consists in determining the statistical properties of the system consisting of two parts. The probability of the ingress of a certain particle into the zone is

P(1) = ΔV/V,    (3.136)

and the probability of its staying outside this volume is

$$1 - P(1) = 1 - \frac{\Delta V}{V} \tag{3.137}$$

since the sum of probabilities of both events must equal unity. Now we can pass to determining the probability of the presence of N analogous particles in the zone. As one can readily see, in the general case we obtain

$$P(N) = \frac{N_a!}{N!\,(N_a - N)!}\,P(1)^{N}\,\bigl[1 - P(1)\bigr]^{N_a - N} \tag{3.138}$$

Events in which the number of particles that got into ΔV equals 0, 1, 2, 3, … are incompatible and make up a complete group. Hence, the sum of probabilities of these events must equal unity, which is actually true:

$$\sum_{N=0}^{N_a} P(N) = \sum_{N=0}^{N_a} \frac{N_a!}{N!\,(N_a - N)!}\,P(1)^{N}\,\bigl[1 - P(1)\bigr]^{N_a - N} = 1 \tag{3.139}$$

Indeed, using the binomial theorem, we can write

$$\sum_{N=0}^{N_a} \frac{N_a!}{N!\,(N_a - N)!}\,P^{N}(1-P)^{N_a - N} = \bigl[P + (1-P)\bigr]^{N_a} = 1^{N_a} = 1$$

Now we determine the mean statistical value of the number of particles in the zone, ⟨N⟩. Assume that a large number M of measurements have been performed, each of them recording the number of particles in ΔV. Let the value N₁ be recorded m₁ times, the value N₂ m₂ times, the value N₃ m₃ times, etc. If the number of measurements is large, the ratios m₁/M, m₂/M, m₃/M, etc. become equal to the probabilities of the respective values of N. According to the rules for determining the mean value of a discrete random quantity, we can write

$$\langle N \rangle = \frac{m_1 N_1 + m_2 N_2 + \ldots}{m_1 + m_2 + \ldots} = \frac{m_1 N_1 + m_2 N_2 + \ldots}{M} = P(N_1)N_1 + P(N_2)N_2 + \ldots = \sum_i P(N_i)\,N_i$$

Here the summation is performed over all possible values of the random quantity N_i (N_i = 0, 1, 2, …, N_a). Taking into account the computations carried out above, the mean value is determined as

$$\langle N \rangle = \sum_{N_i=0}^{N_a} N_i\,\frac{N_a!}{N_i!\,(N_a - N_i)!}\left(\frac{\Delta V}{V}\right)^{N_i}\left(1 - \frac{\Delta V}{V}\right)^{N_a - N_i} \tag{3.140}$$

When computing this sum, one should note that the summation actually starts from N_i = 1, since the summand with N_i = 0 equals zero (recall that 0! = 1). Cancelling the factor N_i, we can write

$$\langle N \rangle = \sum_{N_i=1}^{N_a} \frac{N_a!}{(N_i - 1)!\,(N_a - N_i)!}\left(\frac{\Delta V}{V}\right)^{N_i}\left(1 - \frac{\Delta V}{V}\right)^{N_a - N_i}$$

If we denote N_i − 1 = k and take the common factor N_a(ΔV/V) outside the summation symbol, this expression acquires the form

$$\langle N \rangle = N_a\,\frac{\Delta V}{V} \sum_{k=0}^{N_a - 1} \frac{(N_a - 1)!}{k!\,(N_a - k - 1)!}\left(\frac{\Delta V}{V}\right)^{k}\left(1 - \frac{\Delta V}{V}\right)^{N_a - k - 1} \tag{3.141}$$

We have shown that such a sum equals unity, and thus, the average number of particles of the narrow size class under study in the zone is

$$\langle N \rangle = N_a\,\frac{\Delta V}{V} \tag{3.142}$$
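These relations are easy to check numerically. Below is a minimal sketch (Python is used only for illustration; the names `Na`, `dV`, `V` stand for N_a, ΔV, and V from the text, with illustrative values):

```python
from math import comb

def p_zone(N, Na, dV, V):
    """Probability (3.138) that exactly N of the Na particles occupy the zone."""
    p1 = dV / V  # single-particle probability (3.136)
    return comb(Na, N) * p1 ** N * (1 - p1) ** (Na - N)

Na, dV, V = 1000, 0.05, 1.0
probs = [p_zone(N, Na, dV, V) for N in range(Na + 1)]

total = sum(probs)                              # equals 1 by (3.139)
mean = sum(N * p for N, p in enumerate(probs))  # equals Na * dV / V by (3.142)
```

Both checks reproduce the binomial identities used above: the probabilities sum to unity, and the mean occupancy is N_a ΔV/V.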

Now we determine the probability of detecting the zone containing N particles in the i-th state with the lifting factor E_i. The probability P(N, E_i) is proportional to the number of permissible states of the apparatus, and not of the zone, because once the zone state is fixed, the number of permissible states of the entire system equals the number of permissible states of the apparatus. The apparatus in this state, as noted, contains N_a − N particles and has the lifting factor I_a − E_i. The sought-for probability of this state is

$$P(N, E_i) \propto \varphi\bigl[(N_a - N),\,(I_a - E_i)\bigr]$$

In this relation, the proportionality coefficient is unknown. To find it, we apply a technique commonly used to overcome this difficulty: we determine the ratio of the probabilities of the zone staying in two states

$$\frac{P(N_1, E_1)}{P(N_2, E_2)} = \frac{\varphi(N_a - N_1,\,I_a - E_1)}{\varphi(N_a - N_2,\,I_a - E_2)} \tag{3.143}$$

According to the definition of entropy for the entire apparatus, we can write

$$\varphi(N_a, I_a) = e^{H(N_a, I_a)}$$

Taking this into account, expression (3.143) can be written as a difference of entropies

$$\frac{P_1[N_1, E_1]}{P_2[N_2, E_2]} = e^{H(N_a - N_1,\,I_a - E_1) - H(N_a - N_2,\,I_a - E_2)} \tag{3.144}$$

The exponent in expression (3.144) can be expanded into a Taylor series

$$H(N_a - N,\,I_a - E) = H(N_a, I_a) - N\left(\frac{\partial H}{\partial N_a}\right)_{I_a} - E\left(\frac{\partial H}{\partial I_a}\right)_{N_a} + \ldots$$

The difference of entropies can be written to within the first order as  ΔH  ½ðNa 2 N1 Þ 2 ðNa 2 N2 Þ

@H @Na

 1 ½ðIa 2 E1 Þ 2 ðIa 2 E2 Þ Ia

    @H @H 2 ðE1 2 E2 Þ 5 2 ðN1 2 N2 Þ @Na Ia @Ia N0

Using the definitions of the factors found previously,

$$\left(\frac{\partial H}{\partial I}\right)_N = \frac{1}{\chi}, \qquad \left(\frac{\partial H}{\partial N}\right)_I = -\frac{\tau}{\chi},$$

we can write the obtained expression in the form

$$\Delta H = \frac{(N_1 - N_2)\,\tau}{\chi} - \frac{(E_1 - E_2)}{\chi} \tag{3.145}$$

It is noteworthy that ΔH applies to the apparatus, while N₁, N₂, E₁, E₂ refer to the zone. Thus, changes occurring in the zone predetermine the entropy change of the entire apparatus. This is clear even at an intuitive level, since everything leaving the zone, and only this, predetermines the sought-for magnitude of fractional separation. Taking this into account, the dependence (3.145) provides a relation that is extremely important from the standpoint of a statistical approach to the problem:

$$\frac{P_1[N_1, E_1]}{P_2[N_2, E_2]} = \frac{e^{(N_1\tau - E_1)/\chi}}{e^{(N_2\tau - E_2)/\chi}}$$

Each exponential term in this expression is analogous, by its structure, to the relation obtained by Gibbs while studying the dynamics of elementary particles of an ideal gas. Although it contains utterly different parameters defining the process under study, we call it the Gibbs factor for a two-phase flow. There is another relation well known in thermodynamics. It is obtained from the Gibbs factor by fixing the number of particles (N₁ = N₂ = N). In this case, the expression is written in the form

$$\frac{P_1(E_1)}{P_2(E_2)} = \frac{e^{-E_1/\chi}}{e^{-E_2/\chi}} = e^{-(E_1 - E_2)/\chi}$$

This dependence gives the ratio of the probabilities of the zone residing in two states with the lifting factors E₁ and E₂ and a constant number of particles N in the zone. A parameter of the type e^{−E/χ} is called the Boltzmann factor. Consider some more parameters that are extremely important in this sense. If we sum the dependence characterizing the Gibbs factor over all the states of the zone and all particles, we obtain an expression called the grand statistical sum

$$Z(\tau, \chi) = \sum_N \sum_i e^{(N\tau - E_i)/\chi}$$

Such a sum is a normalization factor transforming relative probabilities into absolute ones, that is, it plays the role of the proportionality factor which was unknown earlier. Now it is clear that a system in the state with N₁, I₁ has a probability determined by the Gibbs factor divided by the grand sum

$$P(N_1, I_1) = \frac{e^{(N_1\tau - I_1)/\chi}}{Z}$$


One can readily see that the sum of this probability over all N and I equals unity. In chemical kinetics, the grand statistical sum is often expressed using a so-called "absolute activity" parameter. In our case it is written as

$$\lambda = e^{\tau/\chi}$$

and we name it, by analogy, the parameter of "absolute mobility" of the system; the grand sum in this case is

$$Z = \sum_N \sum_i \lambda^N e^{-E_i/\chi}$$

Using this reasoning, we can determine that the probability of the zone residing in the i-th state is defined as

$$P(N_i, I_i) = \frac{\lambda^{N_i}\,e^{-I_i/\chi}}{Z}$$

From this standpoint, a number of important parameters of the process can be determined. The formation of the grand sum provides a possibility of finding the average values of the determining parameters. Suppose we are interested in the average value of a certain physical quantity C taken over an ensemble of systems. Denote the average value of this quantity by ⟨C⟩. If C(N, i) is the value of C for a system of N particles while the zone is in the i-th state, we can write

$$\langle C \rangle = \sum_N \sum_i C(N, i)\,P(N, I_i) = \frac{\sum_N \sum_i C(N, i)\,e^{(N\tau - I_i)/\chi}}{Z} \tag{3.146}$$

Within this approach, a number of important parameters can be determined, for example, the average number of particles in the zone. In principle, the number of particles in the zone can vary, first, because it is in contact with the apparatus, and second, because a part of the particles leaves the zone in the upward direction. To obtain the average, it is necessary to multiply each Gibbs factor in the grand sum by N; in compliance with (3.146), it can be written as

$$\langle N \rangle = \frac{\sum_N \sum_i N\,e^{(N\tau - E_i)/\chi}}{Z} \tag{3.147}$$

The expression (3.147) can be written in a form that is easier to use in computations. According to the definition of the grand sum,

$$\frac{\partial Z}{\partial \tau} = \frac{1}{\chi} \sum_N \sum_i N\,e^{(N\tau - E_i)/\chi}$$

The expression under the summation signs is the numerator of (3.147), and therefore we can write

$$\langle N \rangle = \frac{\chi}{Z}\,\frac{\partial Z}{\partial \tau} = \chi\,\frac{\partial \ln Z}{\partial \tau} \tag{3.148}$$

Taking into account the expression for the grand sum, the average value over the ensemble of states is also written as

$$\langle N \rangle = \lambda\,\frac{\partial \ln Z}{\partial \lambda} = N \tag{3.149}$$
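The identity (3.148) can be verified on a toy grand sum by comparing the direct Gibbs-factor average with a numerical derivative of ln Z; the parameter values below are illustrative only, not taken from the text:

```python
import math

def grand_sum(tau, chi, energies, Nmax):
    """Grand sum Z(tau, chi) = sum over N and states i of exp((N*tau - E_i)/chi)."""
    return sum(math.exp((N * tau - E) / chi)
               for N in range(Nmax + 1) for E in energies)

def mean_N(tau, chi, energies, Nmax):
    """Direct ensemble average <N> weighted by the Gibbs factors."""
    Z = grand_sum(tau, chi, energies, Nmax)
    return sum(N * math.exp((N * tau - E) / chi)
               for N in range(Nmax + 1) for E in energies) / Z

tau, chi, energies, Nmax = 0.3, 1.2, [0.0, 0.5, 1.0], 20

# chi * d(lnZ)/d(tau) of (3.148), estimated by a central difference
h = 1e-6
dlnZ = (math.log(grand_sum(tau + h, chi, energies, Nmax))
        - math.log(grand_sum(tau - h, chi, energies, Nmax))) / (2 * h)
```

The two routes to ⟨N⟩ agree to the accuracy of the finite-difference step.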

The average value of the lifting factor of the zone can be defined as

$$\langle E \rangle = \frac{\sum_N \sum_i E_i\,e^{(N\tau - E_i)/\chi}}{Z} \tag{3.150}$$

We denote α = 1/χ

and find the derivative of the grand sum:

$$\frac{\partial Z}{\partial \alpha} = \sum_N \sum_i (N\tau - E_i)\,e^{\alpha(N\tau - E_i)}$$

Following (3.150), we can write

$$\langle N\tau - E_i \rangle = \frac{1}{Z}\,\frac{\partial Z}{\partial \alpha} = \frac{\partial \ln Z}{\partial \alpha} \tag{3.151}$$

Combining (3.148) and (3.151), we obtain

$$\langle E \rangle = \left(\chi\tau\,\frac{\partial}{\partial \tau} - \frac{\partial}{\partial \alpha}\right)\ln Z \tag{3.152}$$

If the number of particles in the zone is constant, we can take a quantity analogous to the Boltzmann factor

$$Z_s = \sum_i e^{-E_i/\chi} \tag{3.153}$$

as a normalizing sum. It is called the statistical sum, and it also plays the part of a proportionality factor between the probability and the Boltzmann factor, that is,

$$P(E_i) = \frac{e^{-E_i/\chi}}{Z_s} \tag{3.154}$$


At a fixed number of particles, the average value of the lifting factor in the zone is

$$\langle E_i \rangle = \frac{\sum_i E_i\,e^{-E_i/\chi}}{Z_s} = \frac{\chi^2}{Z_s}\,\frac{\partial Z_s}{\partial \chi} = \chi^2\,\frac{\partial \ln Z_s}{\partial \chi} \tag{3.155}$$

Here the averaging is performed over the ensemble of states of the zone, which is in contact with the apparatus but contains a constant number of particles in a stationary process. As established, the lifting factor is a homogeneous function of two parameters, I(χ, N). Therefore, we can write

$$I(\chi, N) = N\,i(\chi),$$

where i(χ) is the potential extraction probability for one particle. Therefore, in order to understand mass exchange with the cell, we examine a borderline case with only one particle of a certain fixed size class permanently staying in the zone, and then consider N identical independent particles of the same class. We determine the statistical sum for one particle. Clearly, one particle has only two possible states, with its velocity oriented upward or downward. The grand sum for these two states is

$$Z = 1 + e^{-E/\chi} \tag{3.156}$$

The average value of the lifting factor for one particle is

$$\langle E \rangle = \frac{0 \cdot 1 + E\,e^{-E/\chi}}{Z} = \frac{E\,e^{-E/\chi}}{1 + e^{-E/\chi}}$$

The average value for a system of N particles is N times greater and amounts to

$$I = \langle E \rangle_N = \frac{N E\,e^{-E/\chi}}{1 + e^{-E/\chi}} = \frac{N E}{e^{E/\chi} + 1} \tag{3.157}$$
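A one-particle sketch of (3.156)–(3.157): the two states (downward, contributing 0; upward, contributing E) are weighted through the grand sum, and N independent particles simply scale the average. The values of E and χ are illustrative:

```python
import math

def lift_factor(N, E, chi):
    """Average lifting factor (3.157) for N identical two-state particles."""
    return N * E / (math.exp(E / chi) + 1)

E, chi = 2.0, 1.0
Z = 1 + math.exp(-E / chi)                      # grand sum (3.156) for one particle
avg_one = (0 * 1 + E * math.exp(-E / chi)) / Z  # <E> over the two states
```

The explicit two-state average coincides with the closed form (3.157) for N = 1 and scales linearly with N.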

These relations lead to the necessity of introducing one more element of the model under study: its cell. Trying to determine the entropy from this standpoint, we write the Boltzmann factor

$$P_i = \frac{e^{-E_i/\chi}}{Z}$$

and take the logarithm of this expression

$$\ln P_i = -\frac{E_i}{\chi} - \ln Z \tag{3.158}$$

Hence, taking into account that $\sum_i E_i\,dP_i = \chi\,dH$, we obtain

$$E_i = -\chi(\ln P_i + \ln Z) \tag{3.159}$$

The latter is valid only for the stationary state of the system. Taking into account (3.158) and (3.159), we can obtain

$$\chi\,dH = \sum_i E_i\,dP_i = -\chi \sum_i (\ln P_i)\,dP_i - \chi \ln Z \sum_i dP_i$$

However, the probabilities are normalized to unity, that is,

$$\sum_i P_i = 1$$

Therefore,

$$\sum_i dP_i = 0$$

As a result, we obtain

$$\chi\,dH = -\chi \sum_i (\ln P_i)\,dP_i \tag{3.160}$$

Now we try to determine the effect of the chaotizing factor and of the probability of extracting a narrow size class into the fine product on the entropy value. The number of system states as a function of the number of particles is

$$\varphi(N_a) = \frac{N_a!}{N!\,(N_a - N)!}$$

According to the definition of entropy,

$$H = \ln \varphi(N_a) = \ln N_a! - \ln(N_a - N)! - \ln N!$$

Using Stirling's approximation, this expression can be written as

$$H = N_a \ln N_a - N_a - (N_a - N)\ln(N_a - N) + N_a - N - N \ln N + N = -N_a\left[\left(1 - \frac{N}{N_a}\right)\ln\left(1 - \frac{N}{N_a}\right) + \frac{N}{N_a}\ln\frac{N}{N_a}\right]$$

The quantity N/N_a = ε_f expresses the extraction of the narrow class upward. Taking this into account,

$$H = -N_a\bigl[(1 - \varepsilon_f)\ln(1 - \varepsilon_f) + \varepsilon_f \ln \varepsilon_f\bigr] \tag{3.161}$$

A plot of the dependence H = f(ε_f) is shown in Fig. 3–4. Its characteristic feature is the presence of an optimum. We determine the value ε* corresponding to the optimal value of H(N_a):

$$\frac{\partial H}{\partial \varepsilon_f} = -N_a\bigl[\ln \varepsilon_f - \ln(1 - \varepsilon_f)\bigr]$$

Hence,

$$\ln(1 - \varepsilon_f) = \ln \varepsilon_f \tag{3.162}$$
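A quick numerical check of (3.161)–(3.162): scanning ε_f over (0, 1) locates the entropy maximum at ε* = 1/2, where H = N_a ln 2 (here N_a is normalized to 1):

```python
import math

def H(eps, Na=1.0):
    """Entropy (3.161) for an extraction fraction eps of the narrow class."""
    if eps <= 0.0 or eps >= 1.0:
        return 0.0
    return -Na * ((1 - eps) * math.log(1 - eps) + eps * math.log(eps))

grid = [i / 1000 for i in range(1, 1000)]
best = max(grid, key=H)   # the optimum epsilon*
```

The grid search reproduces the optimum ε* = 1/2 predicted by setting the derivative to zero.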


The unique value ε* = 1/2 complies with this condition. It is clear that ε_f is proportional to χ. At the point ε_f = 0, all the particles precipitate downward, although χ can be nonzero. With increasing χ, ε_f starts growing. At that, the uncertainty in the particle behavior increases, and the entropy grows. This growth continues with increasing χ until the value ε* = 1/2 is reached, where H(N_a) attains its maximum. Here a specific equilibrium in the particle distribution ensues: half of the particles are oriented upward and the other half downward. At a further increase in χ, the value of ε_f grows, a greater number of particles move in the upward direction, and therefore the total uncertainty starts decreasing and the entropy value drops. At the point ε_f = 0, the quantity χ has the value of the descending layer velocity for the size class under study. At the point ε_f = 1, the parameter χ acquires the value

FIGURE 3–4 Plot of the dependence H = f(ε_f): entropy H versus the parameter ε_f (from 0 to 1.0).


connected with the critical pneumatic transport velocity. Here the entropy reaches its minimal value. These two limiting cases correspond to

$$z = \pm\frac{N_a}{2}$$

and z = 0 corresponds to the optimal value. To determine the value of the optimally attainable entropy, we apply the dependence

$$H(N_a, z) = H(N_a, 0) - \frac{2z^2}{N_a}$$

Hence,

$$H(N_a, 0) = \ln\frac{N_a!}{\left(\frac{N_a}{2}\right)!\left(\frac{N_a}{2}\right)!} = N_a \ln N_a - N_a - N_a \ln\frac{N_a}{2} + N_a = N_a \ln 2$$

At a monotonic increase of χ, the entropy grows up to the optimum. This increase can start at a certain value χ > 0. Consider real conditions that can affect the obtained results. The number of particles in the zone, which is in contact with the apparatus, is not constant. The average number of particles is determined as ⟨N⟩ = (χ/Z)(∂Z/∂τ), and we can show that

$$\langle N^2 \rangle = \frac{\chi^2}{Z}\,\frac{\partial^2 Z}{\partial \tau^2} \tag{3.163}$$

The mean-square deviation ⟨(ΔN)²⟩ of the number of particles N from ⟨N⟩ is determined by the relation N² − 2N⟨N⟩ + ⟨N⟩², whence

$$\langle (\Delta N)^2 \rangle = \langle (N - \langle N \rangle)^2 \rangle = \langle N^2 \rangle - 2\langle N \rangle\langle N \rangle + \langle N \rangle^2 = \langle N^2 \rangle - \langle N \rangle^2$$

The latter relation, taking account of the previous ones, gives

$$\langle (\Delta N)^2 \rangle = \chi^2\left[\frac{1}{Z}\,\frac{\partial^2 Z}{\partial \tau^2} - \left(\frac{1}{Z}\,\frac{\partial Z}{\partial \tau}\right)^2\right]$$

It can also be shown that

$$\langle (\Delta N)^2 \rangle = \chi\,\frac{\partial \langle N \rangle}{\partial \tau}$$

We have already demonstrated that ⟨(ΔN)²⟩/⟨N⟩² = 1/⟨N⟩. The magnitude of ⟨N⟩ is very high, and therefore the fluctuations of the number of particles are very small. Hence, we must conclude that the number of particles in the zone is a well-determined quantity, although it is not maintained strictly constant.
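The smallness of the relative fluctuation, ⟨(ΔN)²⟩/⟨N⟩² = 1/⟨N⟩, can be illustrated by direct sampling of the zone-occupancy model; the parameters here are illustrative, and the binomial sampling reproduces the relation up to the small factor (1 − p):

```python
import random

random.seed(1)
Na, p, M = 2000, 0.05, 1000   # particles, zone probability, number of measurements

# each "measurement" counts how many of the Na particles landed in the zone
counts = [sum(random.random() < p for _ in range(Na)) for _ in range(M)]
mean = sum(counts) / M
var = sum((c - mean) ** 2 for c in counts) / M
rel = var / mean ** 2          # close to 1/<N> = 1/(Na * p), up to the factor (1 - p)
```

With ⟨N⟩ = 100 the relative fluctuation is already of the order of one percent, and it keeps shrinking as 1/⟨N⟩ for realistic particle counts.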


As for the lifting factor fluctuations, we can write, taking into account previously obtained results,

$$\langle (E - \langle E \rangle)^2 \rangle = \chi^2\left(\frac{\partial I}{\partial \chi}\right)$$

As we know,

$$\left(\frac{\partial H}{\partial I}\right)_N = \frac{1}{\chi},$$

where the right-hand part is computed for the most probable, or equilibrium, configuration of the system. Hence, at a constant flow velocity, the lifting factor can fluctuate only insignificantly. Now we assume that the entire volume of the apparatus is divided into rectangular cells in such a way that a cell can hold no more than one particle. If the size class under study contains N particles, we can consider that N cells in the apparatus are occupied and the remaining ones are empty. It is assumed that the number of cells considerably exceeds the number of particles. Consider a system composed of one cell. We assume that the cell is in the zone of the apparatus and possesses the properties of a zone. This means that a particle located in it leaves the limits of the apparatus, that is, it is extracted into the fine product, if its orientation is upward. We assume that all the rest, except this cell, is the apparatus. If the cell is not occupied, E = 0 is valid for it. If it is occupied, its lifting factor has a certain value corresponding to the probability of its orientation upward. From the definition of the grand sum for one cell,

$$Z = 1 + \lambda e^{-E/\chi} \tag{3.164}$$

The first summand corresponds to the case where the cell is not occupied and the lifting factor is zero, while the second summand refers to an occupied cell with n = 1, E ≠ 0. The average cell occupancy equals the ratio of the term in the grand sum with n = 1 to the sum of the summands with n = 0 and n = 1:

$$\langle n(E) \rangle = \frac{\lambda e^{-E/\chi}}{1 + \lambda e^{-E/\chi}} = \frac{1}{\lambda^{-1} e^{E/\chi} + 1} \tag{3.165}$$

We introduce a simpler designation for the average cell occupancy: ⟨n(E)⟩ = f(E). Recall that λ = e^{τ/χ}. Taking this into consideration, (3.165) can be written as

$$f(E) = \frac{1}{e^{(E - \tau)/\chi} + 1}$$

The value of f(E) always lies between zero and unity and has the physical meaning of the cell occupancy probability. In this form, the dependence is similar to the Fermi-Dirac distribution function for a fermion gas, and it allows interesting conclusions from the standpoint of the statistical properties of the process under consideration. As for fine particles, it is necessary to accept the condition that several particles can be located in one cell simultaneously. If a comparable relation of the sizes of fine and coarse particles is introduced, the number of fine particles in one cell can be significant. Consider one cell in the apparatus zone, and let the quantity of fine particles in the cell be n. We emphasize once more that from the standpoint of coarse-particle behavior a cell can be either occupied or empty, while in the case of fine particles it can be either empty or occupied with n varying within a broad range. The grand sum for fine particles is written as follows:

$$Z = \sum_n \lambda^n e^{-nE/\chi} = \sum_n \left(\lambda e^{-E/\chi}\right)^n$$

We denote the expression in brackets

$$\lambda e^{-E/\chi} = y \tag{3.166}$$

Then at y < 1, the following is valid for (3.166):

$$Z = \sum_n y^n = \frac{1}{1 - y} = \frac{1}{1 - \lambda e^{-E/\chi}}$$

By the definition of the average value, and taking this into account, the average number of particles in a cell is

$$\langle n(N) \rangle = \frac{\sum_n n\,y^n}{\sum_n y^n} \tag{3.167}$$

Transforming this dependence, we obtain

$$\langle n(E) \rangle = \frac{y}{1 - y} = \frac{1}{y^{-1} - 1} = \frac{1}{\lambda^{-1} e^{E/\chi} - 1}$$

Hence, we can finally write

$$\langle n(E) \rangle = \frac{1}{e^{(E - \tau)/\chi} - 1} \tag{3.168}$$

This relation is similar to the Bose-Einstein distribution function for a boson gas.


The two distribution functions obtained above differ from one another only in that the +1 in the denominator of the first is replaced with −1 in the second; their physical meaning, however, is basically different. A common statistical dependence for all particles is thus written as

$$f(E) = \frac{1}{e^{(E - \tau)/\chi} \pm 1} \tag{3.169}$$
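The unified form (3.169) can be sketched numerically; the sign selects between the two occupancy laws, and for E − τ ≫ χ both approach the common limit (3.170). Parameter values are illustrative:

```python
import math

def occupancy(E, tau, chi, sign):
    """Unified distribution (3.169): sign=+1 gives the Fermi-Dirac-like form
    (coarse particles, at most one per cell), sign=-1 the Bose-Einstein-like
    form (fine particles, any number per cell)."""
    return 1.0 / (math.exp((E - tau) / chi) + sign)

tau, chi = 0.5, 1.0
fd = occupancy(3.0, tau, chi, +1)
be = occupancy(3.0, tau, chi, -1)
# for E - tau >> chi both tend to the limiting form exp((tau - E)/chi) of (3.170)
limit = math.exp((tau - 3.0) / chi)
```

At E − τ = 2.5χ the two curves already lie within about ten percent of the common exponential limit.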

3.15 Statistical parameters of mass transfer

The distribution function (3.169) applies both to fine and coarse particles. Here it must be taken into account that, formally, the lifting factor E refers only to a cell, and not to the entire apparatus. Since we analyze the averaged behavior of a cell, the results of such an analysis can be extended to all the cells of a zone. A classical separation regime ensues when certain size classes precipitate, totally or partially, counter to the flow. The precipitation of a class of particles against the flow is possible when their hovering velocity considerably exceeds the flow velocity, that is, when

$$w_{50} > w$$

It is characteristic for the conditions of air separation that

$$\frac{\rho}{\rho_0} \gg 1$$

Therefore, a part of the denominator in (3.169) satisfies

$$e^{(E - \tau)/\chi} \gg 1$$

In this case, the relationship (3.169) can be written without the ±1, that is, in the form

$$f(E) \approx e^{(\tau - E)/\chi} \tag{3.170}$$

The dependence (3.170) is a limiting case of the distribution of coarse and fine particles in the cellular model. The physical meaning of this case is that the average probability of the occupancy of any cell, irrespective of the particle size, is always below unity. This condition fully corresponds to fractionating regimes at solid-component concentrations in the gas of up to μ = 2 kg/m³. For instance, at a particle density ρ = 2000 kg/m³, the volume occupancy of space is only

$$\beta = \frac{\mu}{\rho} = 0.001$$
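The occupancy estimate and the cell count used in the example below can be checked with elementary arithmetic; a sketch using the text's illustrative values (cubic-millimeter cells, 1-mm spherical particles, μ = 2 kg/m³, ρ = 2000 kg/m³):

```python
import math

mu, rho = 2.0, 2000.0
beta = mu / rho                               # volume occupancy, 0.001

# cubic-millimetre cells in 1 m^3 of flow; spherical particles with D = 1 mm;
# solids load G = mu * (1 m^3) = 2 kg
cells = 10 ** 9
D, G = 0.001, mu * 1.0
particles = 6 * G / (math.pi * D ** 3 * rho)  # about 2e6 particles
cells_per_particle = cells / particles        # about 500 cells per particle
```

The arithmetic confirms that, on average, roughly five hundred cells fall to each particle, so the cell occupancy is indeed very small.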


If we assume, in compliance with the particle size, that the cell volume is one cubic millimeter, then one cubic meter contains N_α = 10⁹ cells. Assuming that the particles are spherical with a size of 1 mm, their quantity equals

$$N = \frac{6G}{\pi D^3 \rho} \approx 2 \times 10^6$$

Thus,

$$n = \frac{10^9}{2 \times 10^6} = 500 \text{ cells}$$

are due to each particle, that is, the occupancy by particles is actually very small. Thus, a dependence of the (3.170) type is the distribution limit for coarse and fine particles at an average cell occupancy that is very small in comparison with unity. In the statistical theory of gases, such an expression is usually called the statistical limit. It should be noted that the limiting distribution function can be used for finding the mean probable values of such process parameters as the number of particles, their concentration, potential extraction, specific flow pressure, and even the particle velocity distribution. The total number of particles can be obtained from the distribution function by summing over all the cells:

$$N = \sum_i f(E_i),$$

that is, the total number of particles equals the sum of their mean contents in each cell. Substituting the sum with an integral,

$$N = \int_0^\infty e^{(\tau - E)/\chi}\,dE = \lambda \int_0^\infty e^{-E/\chi}\,dE = \lambda\chi,$$

we obtain a compact expression for the parameter of absolute mobility

$$\lambda = \frac{N}{\chi} \tag{3.171}$$
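The integral leading to (3.171) can be reproduced numerically; a sketch with illustrative values of χ and λ, truncating the infinite upper limit where the integrand is negligible:

```python
import math

def trapezoid(f, a, b, n):
    """Simple trapezoidal quadrature of f over [a, b] with n panels."""
    h = (b - a) / n
    return h * (f(a) / 2 + sum(f(a + k * h) for k in range(1, n)) + f(b) / 2)

chi, lam = 0.8, 3.0
# N = lam * integral_0^inf exp(-E/chi) dE = lam * chi, hence lam = N / chi (3.171);
# the upper limit is truncated at 40*chi, where exp(-E/chi) is negligible
N = lam * trapezoid(lambda E: math.exp(-E / chi), 0.0, 40 * chi, 100_000)
```

The quadrature recovers N = λχ, and hence λ = N/χ, to the accuracy of the discretization.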

If we pass to concrete separation conditions, then, as a rule, the concentration parameter in kg/m³ is used rather than the notion of particle quantity. Here a step forward is the possibility of determining the concentration as the quantity of particles per cubic meter. Therefore, we can write

$$N = \mu V \tag{3.172}$$

where μ is the particle concentration, and V is the volume containing the particles.


Let us try to understand which volume is implied here. Clearly, a stationary volume of the apparatus lets through various amounts of particles per unit time depending on the value of the chaotizing factor (or flow velocity). It is necessary to single out a certain volume normalized by the chaotizing factor. In the capacity of such a quantity we can take

$$V_1 = \frac{V g d}{\chi}, \tag{3.173}$$

where V is the apparatus volume, χ is the chaotizing factor, g is the gravity acceleration, and d is the particle size. In such a form, V₁ has the dimension of volume, m³. Taking into account the dependencies (3.171) and (3.172), the dependence (3.173) can be written as

$$\lambda = \mu\,\frac{V_1}{g d}$$

We call the parameter V_Q = V₁/(gd) a discrete volume for the critical two-phase flows. Hence, we can write

$$\lambda = e^{\tau/\chi} = \mu V_Q \tag{3.174}$$

This dependence characterizes the absolute activity of particles in a flow. Taking this into account, the classical distribution function is written as

$$f(E) = \mu V_Q\,e^{-E/\chi}$$

It follows from (3.174) that

$$\frac{\tau}{\chi} = \ln\mu + \ln V_Q \tag{3.175}$$

The concentration parameter in this expression stands under the logarithm sign. This explains the experimentally obtained results. In separation processes, concentration values are small, up to 2–3 kg/m³. The logarithms of these values are small and very close to each other, and therefore the influence of this parameter on the process results is not revealed experimentally; that is, in real conditions we can write with sufficient precision

$$\tau \approx \chi \ln\frac{V_1}{\chi}$$

We finally obtain for τ in a unit volume

$$\tau \approx \chi - \ln\chi \tag{3.176}$$


Hence, the mobility parameter τ is smaller than the chaotizing factor by the value of its logarithm, and it has the same dimension as the chaotizing factor. This differs from the idealized value of the particle velocity. In a real flow, owing to the interaction of coarse and fine particles, their velocities are somewhat leveled, because fine particles accelerate coarse ones and coarse particles slow down fine ones. Therefore the real velocity of both differs from the idealized value

$$v = w - w_{50}$$

The expression (3.176) reflects the real relationship between the particle and flow velocities, which is influenced by a large number of random factors of a two-phase flow, although it also reflects, to a certain extent, their idealized relationship. The total magnitude of the lifting factor in the case under study is

$$I = \sum_i E_i\,f(E_i) = \lambda \sum_i E_i\,e^{-E_i/\chi}$$

We substitute the sum with an integral taken over all the cells:

$$I = \lambda \int_0^\infty E\,e^{-E/\chi}\,dE = \lambda\chi^2$$

Taking (3.171) into account, we obtain

$$I = N\chi = \mu V_Q\,\chi$$

It follows that the lifting factor is directly proportional to the value of the chaotizing factor, as well as to the particle concentration and the discrete volume; in the working interval of concentrations, the two latter parameters affect the separation process only slightly. Let us try to determine the entropy value from this standpoint. By definition, we can write

$$\left(\frac{\partial H}{\partial N}\right)_I = -\frac{\tau}{\chi}$$

On the other hand, we have determined that

$$\frac{\tau}{\chi} = \ln(\mu V_Q) = \ln N$$

In this formula, N is the number of particles in a discrete volume. Taking this into account, we can write

$$\int_0^N dH = \int_0^N \frac{\tau}{\chi}\,dN = \int_0^N \ln N\,dN = N\ln N - N,$$


that is,

$$H = N(\ln N - 1)$$

Hence, we obtain a sufficiently simple dependence for the entropy:

$$H = N(\ln \mu V - 1) = N(\ln V + \ln\mu - 1)$$

The concentration varies in a small range of values, and therefore its logarithm is practically constant. Taking this into account, we can write

$$H = N(\ln V + \mathrm{const}) \tag{3.177}$$

From the expression (3.177), taking into account the relation obtained earlier

$$\frac{\partial H}{\partial V} = \frac{f}{\chi},$$

we can obtain

$$\frac{f}{\chi} = \frac{N}{V_Q}$$

It follows that

$$\frac{V_Q\,f}{\chi} = N \tag{3.178}$$

This dependence has been obtained for a two-phase flow in the separation regime. We have considered this relationship earlier; here we have derived it taking into account some clarifying details, and it has turned out that σ = 1. It has been obtained that for a constant number of particles

$$dI = \chi\,dH - f\,dV$$

Hence,

$$\chi\,dH = dI + f\,dV$$

In this relationship, we express dI as a function of V and χ:

$$\chi\,dH = \left(\frac{\partial I}{\partial V}\right)_\chi dV + \left(\frac{\partial I}{\partial \chi}\right)_V d\chi + f\,dV$$

Clearly, the change in potential extraction is independent of the volume; therefore

$$\left(\frac{\partial I}{\partial V}\right)_\chi = 0$$

The quantity ∂I/∂χ = iN. Taking this into account, the general equation can be written as

$$dH = \frac{iN}{\chi}\,d\chi + \frac{f}{\chi}\,dV$$

The parameter f can be determined from the relation

$$fV = N\chi$$

Taking this into account,

$$dH = \frac{iN}{\chi}\,d\chi + N\,\frac{dV}{V}$$

Integrating this expression, we obtain

$$H = iN\ln\chi + N\ln V + c = N(i\ln\chi + \ln V + s) \tag{3.179}$$

where c is the integration constant and s = c/N. The volume V is not defined here. In this expression, i refers to one particle. To obtain a correct sum, the volume should also be referred to one particle, that is, we must write

$$H = N\left(i\ln\chi + \ln\frac{V}{N} + s\right)$$

The value

$$\frac{V}{N} = \frac{1}{\mu}$$

is the inverse of the particle concentration. Therefore, we finally obtain

$$H = N(i\ln\chi - \ln\mu + s)$$

The impact of the concentration value on the entropy is small, as it appears in this dependence under the logarithm sign; the number of particles exerts the determining influence. As for the potential extraction, proceeding from the relationship

$$\frac{\partial H}{\partial I} = \frac{1}{\chi},$$

we can obtain

$$I = N\chi(i\ln\chi - \ln\mu + s) \tag{3.180}$$


Since the solid-phase concentration in separation processes is not high (the weight concentration is 2–3 kg/m³ and the volume one is β = 0.001), and its influence is expressed through a logarithm, the separation results are independent of the μ value, which has been established in numerous experiments. Two parameters exert a determining influence on the potential extraction: the chaotizing factor and the number of particles. The chaotizing factor varies over the channel cross-section. From the standpoint of energy minimization, all the particles should concentrate near the channel walls, where the velocity is minimal, but this does not take place. Measurements show that particles of all size classes are distributed approximately uniformly over the entire cross-section. This issue is similar to that of a coffee-and-milk mixture: particles of milk, whose density is smaller than that of coffee, do not float up, because the entropy is greater when the particles are uniformly distributed in space, and equilibrium is characterized by entropy maximization. As demonstrated, the potential extraction is mainly determined by the chaotizing factor and the number of particles

$$I(\chi, N) = N\,i(\chi)$$

For a polyfractional mixture, we can write

$$I(\chi, N) = \sum_k i_k(\chi)\,N_k$$

For a narrow size class, the total differential of I is

$$dI = \left(\frac{\partial I}{\partial \chi}\right)_{V,N} d\chi + \left(\frac{\partial I}{\partial V}\right)_{\chi,N} dV + \left(\frac{\partial I}{\partial N}\right)_{V,\chi} dN$$

For a system consisting of many fractions, we can write

$$dI = \left(\frac{\partial I}{\partial \chi}\right)_{V,N_k} d\chi + \left(\frac{\partial I}{\partial V}\right)_{\chi,N_k} dV + \sum_k \left(\frac{\partial I}{\partial N_k}\right)_{V,\chi,N_{j\neq k}} dN_k$$

If the system under study consists of n narrow classes, each of them containing N₁, N₂, N₃, …, Nₙ particles, the potential equation connected with them is

$$dI = \chi\,dH - f\,dV + \tau_1\,dN_1 + \tau_2\,dN_2 + \tau_3\,dN_3 + \ldots + \tau_n\,dN_n$$

In the general form, this relation can be written as

$$dI = \chi\,dH - f\,dV + \sum_k \tau_k\,dN_k$$


As we know, it is of no interest to examine the total extraction of various classes; invariance is characteristic only of the extraction of narrow classes. Therefore we concentrate on the potential extraction of a narrow class, keeping in mind that in critical regimes each narrow class contributes not only to I, but also to the system entropy. For a narrow class we can write

$$dI = \chi\,dH - f\,dV + \tau\,dN$$

Using Euler's theorem for homogeneous functions, we can write

$$I = H\chi - fV + \tau N \tag{3.181}$$

Hence, the entropy in explicit form is

$$H = \frac{I}{\chi} + \frac{fV}{\chi} - \frac{\tau}{\chi}N \tag{3.182}$$

The two latter expressions are defining for the process under study. Now we repeat ourselves a little and recall that it was suggested to distribute the entire volume of the system (or the apparatus) over rectangular cells for coarse particles in such a way that the volume of one cell can, in the general case, accommodate one particle at most. Assume that at such a separation, G cells are obtained. It was determined that solid particles occupy at most 0.001 of the flow volume, and although their quantity is large enough, it is still much less than the quantity of cells, that is, G ≫ N. Placing each of the N particles into one of the G cells, we obtain G^N possible distributions. Among them there are identical distributions, which differ only by permutations of particles. Recall that according to the conditions of the model, all the particles of a narrow class are considered identical. Therefore, the total number of permutations of N particles is N!. Thus, in these conditions, the number of complexes is determined by the relationship

$$\varphi = \frac{G^N}{N!}$$

The entropy of such a distribution is the logarithm of this expression

$$H = \ln\varphi = N\ln G - \ln N!$$

To expand ln N!, we use Stirling's formula in the form

$$\ln N! \approx N\ln N$$

Taking this into account,

$$H = N\ln G - N\ln N \tag{3.183}$$


If we introduce the average cell occupancy

$$k_n = \frac{N}{G},$$

then the dependence (3.183) becomes

$$H = -k_n G\ln k_n$$

For the model under study, k_n ≪ 1. This condition confirms that, at such average occupancy values, no more than one particle is actually located in each unit cell at any moment of time. It is clear that Σ k_n = N. Here we have to substantiate one more definition. In the case of coarse particles, no more than one particle can be found in each cell. In this case, N particles must be distributed over G − 1 cells. We obtain the number of distribution ways

$$\varphi = \frac{(G-1)!}{N!\,(G-N-1)!} \tag{3.184}$$

Taking the logarithm of (3.184), neglecting unity in view of the high values of N and G, and using Stirling's formula, we obtain

$$H = G\ln G - N\ln N - (G - N)\ln(G - N)$$

Introducing the parameter

$$k_n = \frac{N}{G},$$

we finally obtain for the distribution of coarse particles

$$H = -G\bigl[k_n\ln k_n + (1 - k_n)\ln(1 - k_n)\bigr]$$

In the case of fine particles, any number of particles can be found in one cell. In this case, the number of ways of distributing N particles over G cells is
$$\varphi = \frac{(G+N-1)!}{(G-1)!\,N!} \tag{3.185}$$
Taking the logarithm of this expression under the same conditions as for the dependence (3.184), we finally obtain
$$H = G\left[(1+k_n)\ln(1+k_n) - k_n\ln k_n\right] \tag{3.186}$$
The relations (3.185) and (3.186) determine the apparatus entropy. Now we examine the behavior of a separate cell connected with the apparatus.
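As a sanity check on these Stirling approximations, the closed forms can be compared against exact logarithms of the factorial expressions (3.184) and (3.185). The sketch below uses Python's `math.lgamma`; the values G = 10⁶ and N = 10³ are assumed purely for illustration, chosen so that G ≫ N as the model requires:

```python
import math

def h_boltzmann(G, N):
    # Corrected Boltzmann counting: H = ln(G^N / N!)
    return N * math.log(G) - math.lgamma(N + 1)

def h_coarse_exact(G, N):
    # Eq. (3.184): H = ln[(G-1)! / (N! (G-N-1)!)], at most one particle per cell
    return math.lgamma(G) - math.lgamma(N + 1) - math.lgamma(G - N)

def h_coarse_stirling(G, N):
    # Stirling form for coarse particles: H = -G[k ln k + (1-k) ln(1-k)], k = N/G
    k = N / G
    return -G * (k * math.log(k) + (1 - k) * math.log(1 - k))

def h_fine_exact(G, N):
    # Eq. (3.185): H = ln[(G+N-1)! / ((G-1)! N!)], any number of particles per cell
    return math.lgamma(G + N) - math.lgamma(G) - math.lgamma(N + 1)

def h_fine_stirling(G, N):
    # Eq. (3.186): H = G[(1+k) ln(1+k) - k ln k]
    k = N / G
    return G * ((1 + k) * math.log(1 + k) - k * math.log(k))

G, N = 10**6, 10**3  # G >> N, so the average occupancy k_n = N/G << 1
for exact, stirling in ((h_coarse_exact, h_coarse_stirling),
                        (h_fine_exact, h_fine_stirling)):
    print(exact.__name__, exact(G, N), stirling(G, N))
```

For a dilute system all three counting schemes give nearly the same entropy, which is why the limiting form derived below is insensitive to the particle coarseness.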


First, the cell is in the apparatus zone. This means that a particle located in it leaves the apparatus when oriented upwards, that is, it is extracted into the fine product. Second, we consider the rest of the cells as the apparatus. If a cell is not occupied, the probability of extraction from it equals zero, that is,
$$E = 0$$
where E is the potential extraction from a cell. If a cell is occupied, the lifting factor for it has a certain value corresponding to the probability of its orientation upwards. From the definition of the grand sum accepted in statistical mechanics, for one cell it amounts to
$$Z = 1 + \lambda e^{-E/\chi}$$
The first summand in this expression corresponds to an unoccupied cell and a zero lifting factor. The second summand relates to an occupied cell with N = 1 and E ≠ 0. The parameter λ is determined as shown earlier:
$$\lambda = e^{\tau/\chi}$$

At that, the average cell occupancy equals the ratio of the term of the grand sum with N = 1 to the sum of the summands with N = 0 and N = 1:
$$\langle k_n\rangle = \frac{\lambda e^{-E/\chi}}{1+\lambda e^{-E/\chi}} = \frac{1}{\lambda^{-1}e^{E/\chi}+1}$$
Substituting the value of λ into this dependence, we obtain
$$\langle k_n\rangle = \frac{1}{e^{(E-\tau)/\chi}+1}$$
This dependence is characteristic of coarse particles as we have defined them here. For fine particles, this parameter is
$$\langle k_n\rangle = \frac{1}{e^{(E-\tau)/\chi}-1}$$

where ⟨k_n⟩ is a function connected with the probability of extraction of particles from a cell, its value always lying between zero and unity.

For the flow conditions under study, the expression $e^{(E-\tau)/\chi}$ is always much greater than unity. Therefore, irrespective of whether the particle under study is coarse or fine, we can write
$$\langle k_n\rangle = e^{(\tau-E)/\chi} \tag{3.187}$$
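A minimal numerical illustration of this limiting transition; the values of E, τ, and χ below are assumed, chosen only so that $e^{(E-\tau)/\chi} \gg 1$:

```python
import math

def occ_coarse(E, tau, chi):
    # Fermi-type mean occupancy: at most one particle per cell
    return 1.0 / (math.exp((E - tau) / chi) + 1.0)

def occ_fine(E, tau, chi):
    # Bose-type mean occupancy: any number of particles per cell
    return 1.0 / (math.exp((E - tau) / chi) - 1.0)

def occ_limit(E, tau, chi):
    # Common limiting form, Eq. (3.187)
    return math.exp((tau - E) / chi)

E, tau, chi = 10.0, 1.0, 1.0  # illustrative values with (E - tau)/chi >> 1
for f in (occ_coarse, occ_fine, occ_limit):
    print(f.__name__, f(E, tau, chi))
```

The relative deviation of either exact form from the limit is of order $e^{-(E-\tau)/\chi}$, so both collapse onto (3.187) in the dilute regime.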


The dependence (3.187) is a limiting case for the distribution of coarse and fine particles in the cellular model. The total number of particles of a narrow size class equals the sum of their average contents in each cell. We replace the sum with an integral, keeping in mind that a proportionality coefficient arises here, which normalizes the dimension of the obtained relationship:
$$N = c\int_0^{\infty} e^{(\tau-E)/\chi}\,dE = c\,e^{\tau/\chi}\int_0^{\infty} e^{-E/\chi}\,dE = c\chi e^{\tau/\chi}$$
Hence, we obtain
$$N = c\chi e^{\tau/\chi} \tag{3.188}$$
The exponent in the exponential expression is dimensionless, and N is also a dimensionless value, so c must be a constant with the dimension inverse to that of the chaotizing factor. Therefore, the proportionality coefficient can be written as
$$c = \frac{1}{mgd}$$

The parameters g, d, m characterizing a particle of a narrow class are constants. In ordinary conditions, it is difficult to operate with the number of particles when examining two-phase flows; in this case, a weight concentration parameter is applied:
$$Nmg = \mu V \tag{3.189}$$
where μ is the weight concentration of particles related to one cubic meter of the continuous medium, and V is the volume of the continuum in which the solid particles are located. Proceeding from (3.188) and (3.189), we can write
$$e^{\tau/\chi} = \frac{\mu V mgd}{\chi} \tag{3.190}$$

From the statistical definition of the main parameters of a two-phase flow, the potential extraction parameter has been found in the form
$$I = 2zmgd$$
Hence,
$$mgd = \frac{I}{2z}$$
Taking this into account, the dependence (3.190) can be presented in the form
$$e^{\tau/\chi} = \frac{\mu V I}{2z\chi} \tag{3.191}$$

The analysis of (3.190) from the standpoint of flow parameters leads to the dependence
$$\frac{mgd}{\chi} = \frac{V_1(\rho-\rho_0)gd}{V_1\rho_0\frac{w^2}{2}} = \frac{2gd}{w^2}\cdot\frac{\rho-\rho_0}{\rho_0} = 2B$$
where V₁ is the volume of a particle of a narrow size class. Taking this into account, the relation (3.190) can be written as
$$e^{\tau/\chi} = 2B\mu V$$
wherefrom, taking into account (3.191), we obtain
$$B = \frac{I}{4z\chi}$$
Hence, it follows that
$$B = \frac{gd}{w^2}\cdot\frac{\rho-\rho_0}{\rho_0} \tag{3.192}$$

We should pay attention to the fact that the ratio mgd/χ contains the potential energy of the particle in the numerator and its kinetic energy in the denominator. As it turns out, their ratio gives an invariant for fractional separation curves. This leads to a dependence connecting the parameter B, composed of the process characteristics d, w, ρ, g, with the statistical-mechanics parameters I, z, χ. Now we revert to the issue of the physical meaning of the notion of mobility. One parameter in (3.191) is not yet defined: the flow volume V. For computational convenience, we relate all further derivations to one cubic meter of the flow, keeping in mind that when we substitute V = 1, the dimension of this unity is [m³]. Taking this into account, the relation (3.191) can be written as
$$e^{\tau/\chi} = \frac{\mu I}{2z\chi}$$
This expression is the principal relation for a narrow size class. Taking into account the other classes present in the flow, it is necessary to introduce indexation:
$$e^{\tau_k/\chi} = \frac{\mu_k I_k}{2z_k\chi}$$


Besides the k-th particles, there are particles of other size classes in the flow. We denote their total concentration by μ₀. Introducing this parameter into the numerator and denominator of the right-hand side, we obtain
$$e^{\tau_k/\chi} = \frac{\mu_k}{\mu_0}\cdot\frac{\mu_0 I_k}{2z_k\chi}$$
For the convenience of derivations, we denote the second factor by
$$\nu_k = \frac{\mu_0 I_k}{2z_k\chi}$$
We call this expression the discrete parameter of the k-th narrow size class. Finally, we can write
$$e^{\tau_k/\chi} = \frac{\mu_k}{\mu_0}\nu_k$$
Hence,
$$\tau_k = \chi\ln\frac{\mu_k}{\mu_0} + \chi\ln\nu_k$$
When there is only one class of particles in a flow, the ratio μ_k/μ₀ = 1, and ln(μ_k/μ₀) = 0. For this case, the mobility parameter τ_{k0} takes the form
$$\tau_{k0} = \chi\ln\nu_k$$
In the general case of the motion of a polyfractional mixture in a flow, this parameter acquires the form
$$\tau_k = \tau_{k0} + \chi\ln\frac{\mu_k}{\mu_0} \tag{3.193}$$
It becomes clear that the mobility parameter of particles of a narrow size class is determined mainly by the chaotizing factor of this class in the flow, as well as by the main statistical parameters of the two-phase flow; it has a discrete value for each narrow class. For a polyfractional mixture, the mean velocity of particles of each size class somewhat increases due to flow obstruction and the interactions of particles of various classes. However, at that,
$$\tau < \chi$$
is always valid.
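The dependence (3.193) can be sketched numerically; the values of χ, τ_{k0}, and the concentration ratio below are assumed for illustration only:

```python
import math

def mobility(tau_k0, chi, mu_k, mu_0):
    # Eq. (3.193): mobility of the k-th narrow class in a polyfractional mixture
    return tau_k0 + chi * math.log(mu_k / mu_0)

chi = 5.0      # assumed chaotizing factor
tau_k0 = 2.0   # assumed single-class mobility of the k-th class

# a single-class flow (mu_k = mu_0) reduces to tau_k = tau_k0;
# a minority class (mu_k < mu_0) has its mobility reduced by the log term
print(mobility(tau_k0, chi, 1.0, 1.0))
print(mobility(tau_k0, chi, 0.3, 1.0))
```

The logarithmic correction is negative whenever the class is a minority of the solid phase, which is the usual situation for a narrow size class in a polyfractional mixture.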


We examine the dependence
$$dH_2 = -\sum_k \frac{\tau_k\,dN_k}{\chi}$$
Taking (3.193) into account, we can write
$$dH_2 = -\sum_k \frac{\tau_{0k}\,dN_k}{\chi} - \sum_k dN_k\ln\frac{\mu_k}{\mu_0}$$
The first summand of this expression is the dynamic entropy component, while the second represents the mixing entropy. In conclusion, we can write the entropy of a two-phase flow with a polyfractional solid phase as
$$dH = \frac{dI}{\chi} + \frac{f\,dv}{\chi} - \sum_k \frac{\tau_{0k}\,dN_k}{\chi} - \sum_k dN_k\ln P_k$$
It follows from (3.193) that the mobility parameter of a narrow size class is determined by the chaotizing factor χ (or the medium flow velocity), by the concentration of this class, and by the value of its discrete parameter. In a polyfractional mixture, the mobility of each concrete narrow size class is given by the dependence (3.193), which reflects the share of this class in the total composition of the solid phase. Finally, since $dN_k = N_0\,dP_k$, the expression for the total entropy of a polyfractional mixture can be written as
$$dH = \frac{dI}{\chi} + \frac{f\,dv}{\chi} - \sum_k \frac{\tau_{0k}\,dN_k}{\chi} - N_0\sum_k dP_k\ln P_k$$

In this dependence, N₀ is the total number of particles in the mixture. Now we determine the entropy of a two-phase flow in the regimes of pneumatic transport and descending layer. In the pneumatic transport regime, when all the material ascends with the flow, z = +N/2. Taking this into account, we can write for a narrow size class
$$H = N\ln 2 - \frac{2}{N}\left(\frac{N}{2}\right)^2 = N\left(\ln 2 - \frac{1}{2}\right) = 0.19315N \approx 0.2N \tag{3.194}$$

The entropy of a narrow class for a descending layer, where all solid particles precipitate against the flow, has an analogous value; in this case, z = −N/2. For a polyfractional solid phase, it is insufficient to sum (3.194) over all narrow classes in order to obtain the total entropy of its compact displacement in a flow. Another summand appears here, and it is very important not to omit it: the entropy of composition, which is nonzero for a polyfractional mixture. Taking this remark into account, the total entropy is written as

FIGURE 3–5 Plot of the dependence H = F(χ).

$$H_0 = N_0\ln 2 - \sum_k \frac{2z_k^2}{N_k} - \sum_k P_k\ln P_k \tag{3.195}$$

where H₀ is the total entropy of a polyfractional two-phase flow; k numbers the discernible narrow classes; z_k is the separation factor of the k-th class; N_k is the number of particles in the k-th class; P_k is the share of particles of the k-th class in the entire mixture, or their probability. For the pneumatic transport and descending layer regimes in the case of a polyfractional solid phase,
$$H_0 \approx 0.2N_0 - \sum_k P_k\ln P_k \tag{3.196}$$
where $N_0 = \sum_k N_k$.

Now we need to check experimentally whether this parameter possesses real invariant properties for the process under study. Before moving to the next part of this book, we attempt to formulate a definition of the dynamic entropy component. On the whole, the dynamic entropy of a system changing spontaneously, without any external energy intake, is a characteristic of the change of any physical object. It reflects the process of dissipation of active impacts, leading to the restoration of all kinds of equilibrium and uniformity (Fig. 3–5).
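A short numerical sketch of (3.196); the class populations below are assumed for illustration:

```python
import math

def total_entropy(counts):
    # Eq. (3.196): H0 ~ 0.2*N0 - sum_k Pk*ln(Pk), for pneumatic transport or
    # descending layer with a polyfractional solid phase (z_k = +/- N_k/2)
    n0 = sum(counts)
    h_dynamic = (math.log(2) - 0.5) * n0  # exact value of the ~0.2*N0 term
    h_mixing = -sum((nk / n0) * math.log(nk / n0) for nk in counts if nk)
    return h_dynamic + h_mixing

print(total_entropy([100, 100, 100]))  # equal shares: mixing term equals ln 3
print(total_entropy([300]))            # monofraction: mixing term vanishes
```

The composition term is largest for equal shares and vanishes for a monofraction, which is exactly the extra summand the text warns not to omit.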

4 Verification of the entropic model adequacy for two-phase flows in separation conditions

4.1 Mathematical model of polyfractional mixture redistribution in a multistage cascade

The validity of the obtained results can be checked against empirical studies of concrete separators in which critical regimes of two-phase flows are realized. We attempt to do this using the most efficient apparatuses available at present. In recent years, as a result of intensive experimental and theoretical research, it has been established that the multistage organization of separation processes allows for a significant increase in their efficiency. This principle has proved to be more progressive than the existing separation methods in which only one stage of the process is realized in the apparatus. It allows for considerable productivity growth and improved quality of the products obtained in apparatuses of moderate overall dimensions. For instance, we have created an efficient cascade classifier with a productivity of 140 tons/h and a bulk-material separation boundary of 100 μm for the potassium works at the Dead Sea; it has overall dimensions of 0.6 m × 1.1 m × 5 m and has been in operation since 2002 up to the present time. Many technological processes are organized according to the cascade principle, such as petroleum cracking, adsorption, rectification, extraction, and the separation of isotopes and some liquid mixtures. Therefore, the compilation and solution of mathematical models and the determination of invariants for cascade separation are, in principle, of phenomenological character. Taking into account the peculiarities of each concrete process, the results obtained for powder separation can be applied to a group of cascade separation processes of other physical natures. Under the action of an ascending flow of a medium in a vertical channel, a part of the bulk material moves upwards, and another part precipitates downwards.
The character of the distribution of narrow-class particles is shown in Fig. 4–1. It follows from this figure that the lightest particles are all carried upwards, while the coarsest ones all precipitate downwards. Particles of the intermediate classes between a and b are divided between the two outputs, the fine and coarse products.

Entropy of Complex Processes and Systems. DOI: https://doi.org/10.1016/B978-0-12-821662-0.00004-6 © 2020 Elsevier Inc. All rights reserved.


FIGURE 4–1 Example of the distribution of different size classes of particles at separation (fractional extraction F_f(x), %, versus particle size x; the boundary classes are marked a and b).

The quantity characterizing the separation of a narrow size class in an individual element of a cascade can be represented in a simplified form as
$$k = \frac{r_i^*}{r_i}$$
where r_i is the initial content of narrow-size-class particles at the ith stage; r_i^* is the amount of particles of the same size class passing from the ith to the (i − 1)th stage; k is the distribution coefficient.

A proportional model of the distribution of particles of a certain fixed size class over the apparatus height, with feeding to the i*th stage, is shown in Fig. 4–2A (for k = const). The result of the distribution of particles of narrow size classes is determined, in many respects, by the location of the material feed into the apparatus. Let the initial mixture contain a certain amount of particles of the jth size; for convenience, we take their initial content as unity. At invariable regime parameters of classifier operation, the fractional separation of a fixed narrow size class depends on the number of stages and on the place of material feed into the apparatus. We call the fraction $F_f(x) = r_f/r_s$ the fractional extraction of the fine product in the entire channel for a narrow size class, where r_f and r_s are, respectively, the amounts of the narrow size class in the fine product and in the initial material. In the case of one stage, the process pattern is rather simple (see Fig. 4–2B); here, fractional extraction into the fine product corresponds to the distribution coefficient, F_f(x)₁ = k. We examine mass transfer in a cascade as a system of equations characteristic of each stage. According to Fig. 4–2, we denote the fractional extraction of the examined (or fixed) class of particles upwards in each stage by f_i, and the fractional extraction of the same particles downwards in the same stage by c_i. As shown, irrespective of the amount of material fed to a stage, a part of it (k) is extracted from this stage upwards, and the remaining amount (1 − k) goes out downwards.

FIGURE 4–2 (A) Proportional model of distribution of a narrow size class along the apparatus height; (B) distribution of particles for one stage.

In a general case, for a cascade consisting of z identical stages, according to Fig. 4–3, we can write a system of 2z joint equations: z equations for fractional extraction and z equations of material balance for all stages of the cascade
$$\begin{cases}
f_1 = f_2 k\\
f_2 = (c_1 + f_3)k\\
f_3 = (c_2 + f_4)k\\
\cdots\\
f_i = (c_{i-1} + f_{i+1})k\\
\cdots\\
f_{i^*} = (c_{i^*-1} + f_{i^*+1} + 1)k\\
\cdots\\
f_{z-1} = (c_{z-2} + f_z)k\\
f_z = c_{z-1}k
\end{cases} \tag{4.1}$$


FIGURE 4–3 General diagram of mass transfer in a cascade.

$$\begin{cases}
f_1 + c_1 = f_2\\
f_2 + c_2 = c_1 + f_3\\
f_3 + c_3 = c_2 + f_4\\
\cdots\\
f_i + c_i = c_{i-1} + f_{i+1}\\
\cdots\\
f_{i^*} + c_{i^*} = c_{i^*-1} + f_{i^*+1} + 1\\
\cdots\\
f_{z-1} + c_{z-1} = c_{z-2} + f_z\\
f_z + c_z = c_{z-1}
\end{cases} \tag{4.2}$$
In these expressions, k is the coefficient of material distribution at one stage.


At each stage, the total arrival of particles equals their departure from the stage; therefore, for each stage we can write the relation
$$c_i = f_i\,\frac{1-k}{k} \tag{4.3}$$

If we take the equation for stage i from the system (4.2) and substitute for c_i and c_{i−1} their values from (4.3), we obtain a recurrent equation corresponding to the systems (4.1) and (4.2):
$$f_i = f_{i-1}(1-k) + f_{i+1}k \tag{4.4}$$

Eq. (4.4) is a homogeneous finite-difference equation of the second order with constant coefficients. It can be written somewhat differently:
$$f_{i+2} - \frac{1}{k}f_{i+1} + \frac{1-k}{k}f_i = 0 \tag{4.5}$$

We define three boundary conditions for the recurrent Eq. (4.5), using for this purpose the dependence (4.3):
$$\begin{cases}
f_1 = f_2 k\\
f_{i^*} = f_{i^*-1}(1-k) + f_{i^*+1}k + k\\
f_z = f_{z-1}(1-k)
\end{cases} \tag{4.6}$$

The characteristic equation corresponding to Eq. (4.5) can be written as
$$\lambda^2 - \frac{1}{k}\lambda + \frac{1-k}{k} = 0$$
The solution to this equation gives the following roots:
$$\lambda_1 = \frac{1-k}{k},\qquad \lambda_2 = 1$$
Under the condition k = 0.5, we obtain λ₁ = λ₂ = 1. A general solution of (4.5) in the case λ₁ ≠ λ₂ has the following form:
$$f_i = b_1\lambda_1^i + b_2\lambda_2^i$$

Using the boundary conditions (4.6), we find for 1 ≤ i ≤ i*
$$f_i = \frac{k\left[1-\left(\frac{1-k}{k}\right)^{z+1-i^*}\right]\left[1-\left(\frac{1-k}{k}\right)^{i}\right]}{(2k-1)\left[1-\left(\frac{1-k}{k}\right)^{z+1}\right]} \tag{4.7}$$
For stages within the range i* ≤ i ≤ z
$$f_i = \frac{k\left[1-\left(\frac{1-k}{k}\right)^{i^*}\right]\left[\left(\frac{1-k}{k}\right)^{i-i^*}-\left(\frac{1-k}{k}\right)^{z+1-i^*}\right]}{(2k-1)\left[1-\left(\frac{1-k}{k}\right)^{z+1}\right]} \tag{4.8}$$

In the case k = 0.5, that is, for λ₁ = λ₂ = 1, the general solution is
$$f_i = b_1 i\lambda^i + b_2\lambda^i = b_1 i + b_2$$
Using the boundary conditions (4.6), we obtain
$$f_i = i\,\frac{z+1-i^*}{z+1}\qquad(1\le i\le i^*) \tag{4.9}$$
$$f_i = i^*\,\frac{z+1-i}{z+1}\qquad(i^*\le i\le z) \tag{4.10}$$
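The closed-form solutions (4.7)–(4.10) can be checked against a direct numerical solution of the boundary problem (4.6). The sketch below (assumed stage numbers and distribution coefficients; exact rational arithmetic to avoid round-off) solves the finite-difference system by Gauss–Jordan elimination:

```python
from fractions import Fraction

def cascade_solve(z, i_star, k):
    """Solve the boundary problem (4.6) for f_1..f_z (interior feed, 1 < i* < z)."""
    k = Fraction(k)
    A = [[Fraction(0)] * z for _ in range(z)]
    b = [Fraction(0)] * z
    for i in range(1, z + 1):
        row = A[i - 1]
        if i == 1:
            row[0], row[1] = Fraction(1), -k                 # f1 = k*f2
        elif i == z:
            row[z - 1], row[z - 2] = Fraction(1), -(1 - k)   # fz = (1-k)*f_{z-1}
        else:
            row[i - 1], row[i - 2], row[i] = Fraction(1), -(1 - k), -k
            if i == i_star:
                b[i - 1] = k                                 # unit feed at stage i*
    # Gauss-Jordan elimination in exact rational arithmetic
    for col in range(z):
        piv = next(r for r in range(col, z) if A[r][col] != 0)
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(z):
            if r != col and A[r][col] != 0:
                m = A[r][col] / A[col][col]
                A[r] = [a - m * c for a, c in zip(A[r], A[col])]
                b[r] -= m * b[col]
    return [b[r] / A[r][r] for r in range(z)]

def cascade_closed(z, i_star, k, i):
    # Closed forms (4.7)-(4.10)
    k = Fraction(k)
    if k == Fraction(1, 2):
        return (Fraction(i * (z + 1 - i_star), z + 1) if i <= i_star
                else Fraction(i_star * (z + 1 - i), z + 1))
    a = (1 - k) / k
    denom = (2 * k - 1) * (1 - a ** (z + 1))
    if i <= i_star:
        return k * (1 - a ** (z + 1 - i_star)) * (1 - a ** i) / denom
    return k * (1 - a ** i_star) * (a ** (i - i_star) - a ** (z + 1 - i_star)) / denom

z, i_star = 7, 4  # assumed cascade: 7 stages, feed at the 4th
print([str(x) for x in cascade_solve(z, i_star, Fraction(2, 5))])
```

For any interior feed stage, the two computations coincide exactly, including the degenerate case k = 0.5 where both characteristic roots merge.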

Revert once more to Fig. 4–3. This figure clearly shows that the total extraction of the narrow size class under study into the fine product corresponds to the value of its extraction from the first stage, that is, f₁. Respectively, this class is extracted into the coarse product from the last stage, and the value of its extraction is c_z. It is clear that
$$F_f = f_1;\qquad F_c = c_z;\qquad F_f + F_c = 1 \tag{4.11}$$

According to (4.7), at i = 1 we obtain for the upper outlet
$$F_f = \frac{1-\left(\frac{1-k}{k}\right)^{z+1-i^*}}{1-\left(\frac{1-k}{k}\right)^{z+1}}\qquad (k\ne 0.5) \tag{4.12}$$

For k = 0.5, it follows from (4.9) that
$$F_f = \frac{z+1-i^*}{z+1} \tag{4.13}$$

It follows from the analysis of the mechanism of the separation process in cascade apparatuses that a symmetric material feed into the middle of the apparatus does not displace the separation boundary. This corresponds to the condition
$$i^* = \frac{z+1}{2}$$


In this case, the expression (4.12) acquires the form
$$F_f = \frac{1}{1+\left(\frac{1-k}{k}\right)^{\frac{z+1}{2}}} \tag{4.14}$$

The slope of the fractional separation curve shows the extent of process perfection. It can be determined as the tangent of the slope angle at the middle point, that is, at F_f(x) = 0.5:
$$\left(\frac{dF_f}{dk}\right)_{k=0.5} = \left\{\frac{\frac{z+1}{2}\left(\frac{1-k}{k}\right)^{\frac{z-1}{2}}\frac{1}{k^2}}{\left[1+\left(\frac{1-k}{k}\right)^{\frac{z+1}{2}}\right]^2}\right\}_{k=0.5} = \frac{z+1}{2} \tag{4.15}$$
Hence, it follows that with increasing z, the separation efficiency continuously grows. It is noteworthy, however, that the effect of increasing the number of cascade stages decays with growing z, so an excessive increase in the number of stages in a cascade does not make sense.
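The slope formula (4.15) can be checked against a finite-difference derivative of (4.14); the stage counts below are assumed for illustration:

```python
def ff_symmetric(k, z):
    # Eq. (4.14): fractional extraction for symmetric feed, i* = (z+1)/2
    if k == 0.5:
        return 0.5
    return 1.0 / (1.0 + ((1.0 - k) / k) ** ((z + 1) / 2))

def slope_at_half(z):
    # Eq. (4.15): slope of the separation curve at k = 0.5
    return (z + 1) / 2

h = 1e-6
for z in (1, 3, 9, 19):
    numeric = (ff_symmetric(0.5 + h, z) - ff_symmetric(0.5 - h, z)) / (2 * h)
    print(z, numeric, slope_at_half(z))
```

The central-difference estimate reproduces (z + 1)/2, the steepening of the curve with the number of stages that the text describes.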

4.2 Experimental check of theoretical conclusions

Extensive studies of various cascade classifiers have been carried out in laboratory and industrial conditions (Fig. 4–4). These separators are recognized as the most progressive apparatuses, providing high separation efficiency and productivity and smooth regulation of final product compositions. It is noteworthy that in all cases, without exception, the obtained results correspond to certain regularities, with deviations not exceeding the error of the accepted experimental method. The overwhelming majority of the experiments were performed in an air working medium; only a limited number were performed in water, since in the respective literature practically all experimental material has been obtained in water flows. Regularities of separation in water and air media are somewhat different, but, taking into account the different viscosities and densities of these flows, they make up an orderly system. In principle, the processes of bulk-material separation by particle size or density can be realized by various methods: in the field of gravity forces, in centrifugal or electric fields and, probably, in fields of other natures, too. However, no matter how the process is organized, the separation principle remains unchanged. Its essence consists in the oppositely directed motion of particles of different size classes; that is, fine particles move predominantly in one direction, and coarse particles in the other. A graphical illustration shows the material redistribution in the most visual way (refer to Fig. 2–6). In this figure, the ABC curve represents the granulometric composition of a certain bulk material in partial residues. Assume that for some reason it is necessary to separate this material at the size x₀ mm. Note that the area of the graph limited by the curve


FIGURE 4–4 Various separator designs: 1, hollow rectangular; 2, hollow round; 3, cascade; 4, cascade with perforated shelves; 5, "zigzag" rectangular; 6, "zigzag" round; 7, polycascade rectangular; 8, polycascade round; 9, multirow with a distributive grate. Notations: s, initial feed; c, coarse product; f, fine product; w, air flow.

ABC and the x-axis corresponds, on a certain scale, to the total amount of the initial material. In an ideal case, the separation should take place along the line Bx₀. This line divides the initial composition into two parts with respect to the coarseness x₀: D_s, the fine part of the initial product, and R_s, the coarse part of the initial product. In a real process, even if the flow velocity w corresponds to the optimal value, the separation does not proceed ideally, because practically always some of the fine material gets into the coarse product, and a part of the coarse material gets into the fine product.


We assume that the composition of the fine product is depicted by the curve ADE. Then the coarse product is described by the curve KDC, because there is a balance dependence between the initial, fine, and coarse products. Thus, in a general case, the plot area is separated by the lines of the ideal and real processes into four parts: D_f, the amount of fine material extracted into the fine product; R_f, the amount of coarse material extracted into the fine product; R_c, the amount of coarse material extracted into the coarse product; D_c, the amount of fine material extracted into the coarse product. The following equalities follow from Fig. 2–6:
$$D_s + R_s = 1$$
$$D_f + R_f = \gamma_f$$
$$D_f + D_c = D_s \tag{4.16}$$
$$D_c + R_c = \gamma_c$$
$$R_c + R_f = R_s$$
$$\gamma_f + \gamma_c = 1$$

where γ_f, γ_c are, respectively, the outputs of the fine and coarse products. On the basis of the relations (4.16), various parameters characterizing separation processes can be found, for instance, fine product extraction
$$\varepsilon_f = \frac{D_f}{D_s};$$
coarse product extraction
$$\varepsilon_c = \frac{R_c}{R_s};$$
fine product contamination
$$k_f = \frac{R_f}{R_s};$$
and coarse product contamination
$$k_c = \frac{D_c}{D_s}$$
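The balance relations (4.16) and the derived indices can be sketched as follows; the feed composition and recoveries in the example are assumed for illustration:

```python
def separation_indices(Ds, Df, Rf):
    """Indices from the balance relations (4.16); all quantities are expressed
    as fractions of the total initial material (Ds + Rs = 1)."""
    Rs = 1.0 - Ds        # coarse part of the feed
    Dc = Ds - Df         # fine material lost to the coarse product
    Rc = Rs - Rf         # coarse material reporting to the coarse product
    gamma_f = Df + Rf    # fine-product yield
    gamma_c = Dc + Rc    # coarse-product yield
    assert abs(gamma_f + gamma_c - 1.0) < 1e-12
    return {
        "eps_f": Df / Ds,  # fine product extraction
        "eps_c": Rc / Rs,  # coarse product extraction
        "k_f": Rf / Rs,    # fine product contamination
        "k_c": Dc / Ds,    # coarse product contamination
        "gamma_f": gamma_f,
        "gamma_c": gamma_c,
    }

# assumed example: 60% fines in the feed, 90% of them recovered into the fine
# product, and 10% of the coarse material carried into the fine product
print(separation_indices(Ds=0.6, Df=0.54, Rf=0.04))
```

Because all six quantities are tied by the balances, three independent inputs fix every index; this is why, as the text notes next, these indices alone cannot characterize the process independently of the feed.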

These parameters all depend on the initial material composition, the material concentration in the flow, and the value of the boundary separation coarseness, which does not allow general approaches to the analysis of separation processes to be found using them. This can be done using the so-called curves of fractional extraction. It is generally accepted that fractional separation curves were first introduced into the practice of minerals beneficiation by Nagel in 1936. However, they are most often connected with the name of the Dutch engineer Tromp, who published his work in 1937. The idea of


FIGURE 4–5 Plotting of separation curves of various size classes during separation depending on air flow velocity: (A) partial residues of the initial material N(x) and of the products n_f(x) and n_c(x); (B) fractional extraction curves F_f(x) and F_c(x) versus particle size.

fractional separation curves is rather simple. It is based on determining the fractional extraction of various narrow size classes under classification conditions. Fractional separation curves are plotted from the relation of the obtained separation products to the initial composition, without any complicated intermediate calculations. The principle of plotting separation curves is illustrated in Fig. 4–5. Fractional extraction for each narrow size class, expressed as a percentage, is determined by dependencies of the following type:
$$F_f(x) = \frac{r_f}{r_s}\times 100;\qquad F_c(x) = \frac{r_c}{r_s}\times 100 \tag{4.17}$$
Usually, the fractional separation value is correlated with the average size of a narrow class, defined as the arithmetic mean of the adjacent sieve sizes in the assessment of granulometric composition.


These curves apply to the zone between the points x_a and x_b, for which
$$F_f(x_a) = 100;\quad F_f(x_b) = 0;\quad F_c(x_a) = 0;\quad F_c(x_b) = 100 \tag{4.18}$$
Total mirror symmetry of these curves with respect to the horizontal axis passing through the ordinate value of 50% follows from expression (4.18) and the relation
$$F_f(x_i) + F_c(x_i) = 100 \tag{4.19}$$
It should be noted that the fractional separation curve contains complete information on the change of the granulometric composition of the final products in comparison with the initial material. The separation is the more complete, the narrower the zone between x_a and x_b. Clearly, at an ideal separation, x_a = x_b. At a simple division without changes in the fractional composition, the functions F(x) degenerate into lines parallel to the abscissa axis. It can be proved that the size corresponding to the optimal separation is the abscissa x₅₀ of the point of intersection of the curves. The dependence of fractional separation on the air flow is shown in Fig. 4–5B. These curves have a number of very interesting properties that make it possible to find general regularities of the process. These properties are now considered.

(a) Initial composition

Special studies aimed at revealing the influence of the initial composition on the separation results were carried out at air flow velocities of 3.15, 4.35, and 5.45 m/s in a shelf cascade classifier. In the first set of experiments, the initial product had a continuous distribution, and the content of the examined size class was set, in turn, to 3.3, 6.7, 10, 12.5, 30, 50, 76.7, and 100%, the contents of the other classes in each experiment being uniformly distributed. Each of the seven narrow classes acquired these values in turn, and the separation was then performed in the mentioned apparatus at all the listed velocities. The results of one group of these experiments are shown in Fig. 4–6. The character of the experimentally obtained dependencies testifies that each narrow size class behaves independently under separation conditions, as if the other classes did not exist. At first sight, these results seem somewhat unexpected, since intense interaction of particles of various size classes has been experimentally proved in these conditions. This interaction is of a clearly defined random character. The process is also characterized by other random factors, and yet the invariance of this parameter with respect to the initial composition is formed in some way. This completely confirms the conclusion obtained in the statistical analysis of the behavior of particle systems in a flow. The invariance of fractional extraction with respect to the product composition has not been explained by other methods.


FIGURE 4–6 Dependence of fractional separation of various size classes on their contents in the initial composition.

Fractional separation invariance with respect to the initial material composition plays a fundamental role in the development of the general theory of the process. This property of separation curves was described without any explanation in the above-mentioned work by Tromp.

(b) Solid-phase concentration in a flow

The optimal productivity of classifying facilities is closely connected with the overall dimensions of the apparatus and the solid-phase concentration in the flow. Numerous studies have been devoted to the influence of concentration on separation results. However, the absence of well-grounded and precise concepts of the process mechanism did not allow the influence of this factor to be taken into account completely enough; at best, the experimental material was reduced to purely empirical dependencies having no clear physical meaning. The influence of solid particles on the character of two-phase flow motion reveals itself in two ways: first, as the concentration grows, the motion constraint increases; second, the probability of contact interaction of particles grows. An attempt to reveal the influence of the material concentration in the flow on the fractional separation curves led to unexpected results. For this purpose, special experimental studies were performed in a hollow pipe and in cascade apparatuses with attachments. In all cases, qualitatively the same results were obtained. They can be illustrated by the dependence for a hollow pipe (Fig. 4–7). A characteristic feature of the experimentally determined dependencies is the presence of a segment practically parallel to the concentration axis. For a hollow apparatus, this segment is limited by the concentration μ ≈ 2 kg/m³, and for an apparatus with attachments, the region of this regularity extends up to μ ≈ 6 kg/m³.


FIGURE 4–7 Dependence of fractional separation of various size classes on the material concentration in the flow.

Within the limits of this region, the fractional separation value is practically independent of the material concentration in the flow. A similar character of the curves is obtained for cascade apparatuses as well. Such independence of the separation curves from the solid-phase concentration in the flow was predicted by the analysis of the statistical model of the process and fully confirmed by the experiments.

(c) Process stability

Multiple repetition of the experiment in laboratory conditions at a thoroughly verified air flow velocity gives the same separation curve, even at a variation of the granulometric composition of the initial feed and a change of the feed rate by the feeder. This was confirmed by an 11-fold repetition of the experiments. In industrial conditions, on a classifier with a productivity of 35–40 tons/h, the separation curve was determined a week after its setting into operation. It so happened that the reevaluation of the apparatus operation was performed only after 11 months. Exactly the same separation curve was obtained, although the scatter of the experimental points with respect to the former somewhat increased, reaching a maximal deviation of 7.3%. This is surprising, since an immense number of particles take part in the process. At a concentration of μ = 2 kg/m³, up to 10¹⁰ particles of size 30 μm,

206

Entropy of Complex Processes and Systems

whose content in the initial supply is only 10%, fall per each cubic meter of air. At that point, air output through the separator is 20,000 m3 =h. Visually, the process is purely random, and the obtained result is strictly deterministic. Explanation of the phenomenon of such stability can be obtained only from the standpoint of statistical analysis of the parameters and results of the process under consideration.

(d) Flow velocity and particle size
Using separation curves, the most general regularities of the separation process for apparatuses of any configuration and height were revealed experimentally at the separation of various natural and crushed powdery materials. The basis of these regularities is the affinity of fractional separation curves. The theoretically derived invariant (3.193) for two-phase flows, for a solid phase of constant particle density, is proportional to the ratio of potential and kinetic energies in the form $gd/w^2$. This expression resembles the Froude criterion. We assume that this ratio is a peculiar modified Froude criterion, denoted as

$$\mathrm{Fr} = \frac{gd}{w^2} \tag{4.20}$$

Usually, the parameters entering this criterion are related either to the flow ($d$ is the channel diameter, $w$ the flow velocity) or to a solid particle ($d$ is the particle diameter, $w$ the particle velocity). In our case, the criterion connects particle size with flow velocity. Such a criterion cannot be identified with the Froude parameter, so we especially stipulate this modification. Special studies were performed on a cascade separator with pour-over shelves [Fig. 4–4(3)]. To ensure air flow turbulence over the entire experimental range, crushed quartzite of increased size was chosen. Its fractions had the following boundaries: 10–7 mm; 7–5 mm; 5–3 mm; 3–2 mm; 2–1 mm; 1–0.5 mm; and 0.5–0.25 mm. The material was supplied into the apparatus by a feeder ensuring a solid-phase concentration in the flow of μ ≤ 2 kg/m³. Experiments were performed at various flow velocities. The experimentally obtained curves for various narrow size classes are shown in Fig. 4–8. These studies revealed that each fraction behaves independently during separation: within the limits of working concentration, its extraction is independent of its content in a mixture with other classes, which follows from the statistical model. The figure shows that the extraction of different fractions into the fine product begins at different flow velocities. Each fraction has its own velocity at which its extraction is complete, and the curves are arranged so that larger classes of particles are shifted to the region of higher velocities. These experimental points are transferred to the plot shown in Fig. 4–9, where the relation Fr = gd/w², obtained in a purely theoretical way for ρ = const in the next part, is taken as the abscissa axis. In this plot, all experimental points reliably fall on the same curve.
Similar dependencies were experimentally revealed for several dozen types of separators of different heights,


(Axes: Ff(x), %, versus w, m/s; curves for the size classes 0.5–0.2, 1–0.5, 2–1, 3–2, 5–3, 7–5, and 10–7 mm.)

FIGURE 4–8 Dependence of fractional separation on the flow velocity: ○, experimental points; solid line, theoretical curve.

(Axes: Ff(x) versus Fr·10².)

FIGURE 4–9 Affinization of separation curves using the Froude criterion: Ff(x) = f(Fr).

cross-sections, and internal facilities. It is noteworthy that the correlation coefficient for the indicated approximation, for all versions of air separators tested in laboratory conditions, varies from 0.79 to 0.93. This testifies to a sufficiently rigid connection between the parameters under study. In no case was this regularity violated, and it was experimentally confirmed for more than 150 constructions and dimension types of gravitational classifiers (Fig. 4–9).
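As an illustrative numeric sketch (not software from the book), the modified Froude criterion of Eq. (4.20) can be evaluated for pairs of particle size and flow velocity; affinity implies that points reaching the same fractional extraction for different size classes share one value of Fr. All numbers below are hypothetical.

```python
# Hedged sketch: Fr = g*d/w**2 (Eq. 4.20) as the affinizing abscissa.
# The (d, w) pairs are illustrative values, not book data.

G = 9.81  # m/s^2

def froude(d_m: float, w_ms: float) -> float:
    """Modified Froude criterion linking particle size d to mean flow velocity w."""
    return G * d_m / w_ms ** 2

# If a 2 mm class reaches some extraction level at 8 m/s, affinity suggests
# a 1 mm class reaches the same level when w scales as sqrt(d):
fr_coarse = froude(2e-3, 8.0)
fr_fine = froude(1e-3, 8.0 / 2 ** 0.5)
assert abs(fr_coarse - fr_fine) < 1e-12
```

Plotting Ff(x) against this quantity rather than against w is what collapses the family of curves of Fig. 4–8 onto the single curve of Fig. 4–9.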


The physical meaning of this experimentally established fact is as follows:
1. Separation curves obtained on the same apparatus in turbulent regimes of the ascending flow of medium are affine.
2. Despite the visual chaotization of the process, the affinization of the curves points to a general physical regularity, according to which the results of fractional separation by different boundary sizes are in rigid correlation with the medium flow velocities.
3. A curve of the type Ff(x) = f(Fr) is invariant not only to the initial composition, but also to the coarseness of fractions and the medium flow velocity.
4. If we take into account that this curve is also invariant to the material concentration in the flow within a certain working range, we can conclude that it is a unique characteristic of the separator design. In fact, the position of this curve at a constant scale of the plots is independent of the initial feed composition, fraction coarseness, flow velocity, or productivity variations. However, separators differing in height, boundary conditions, or configuration yield curves of the same type arranged differently with respect to the coordinate axes. Hence, they are a characteristic of the separator design only.
5. A converse conclusion follows: separators should be compared on the basis of the characteristics of their affinized separation curves.
6. The very possibility of revealing deterministic regularities of mass transfer in two-phase turbulent flows seems incredible, since even monophasic turbulent flows still resist any distinct physical interpretation. At the same time, such determinism is an established fact, experimentally confirmed over many years. The cause of this phenomenon and its physical aspects can be explained only from the standpoint of statistical analysis of two-phase flow.

4.3 Unified separation curves
Two-phase flows are often used in industrial practice for processes involving particles of different densities; methods of mineral beneficiation are the most visual example. Ground material is subjected to the impact of a moving flow in order to extract valuable components from a mixture. To reveal the influence of material density on the general regularity, special studies were carried out on materials with a broad range of this parameter. Their granulometric characteristics and densities are given in Table 4–1. All the materials listed in the table, except potassium salt and cement clinker, have round-shaped particles. It should be emphasized that this range of densities covers practically the entire region characteristic of the beneficiation of both metallic minerals and rock products. Two types of cascade classifiers, with a height of seven stages (z = 7) and material feeding into the second and fourth stages counting top-down (i = 2; i = 4), were chosen for the experiments.

Table 4–1. Composition of bulk materials in partial residues.

Material name                   Density ρ    Partial residues (%) on sieves with mesh X, mm
                                (kg/m³)       2.5    1.5    1.0   0.75   0.43    0.2      0
Granulated polyvinyl chloride     1070       10.1   20.9   28.5   16.3   15.3   7.97   0.93
Potassium salt                    1980       13.7   34.1   33.9    5.0    5.3    5.0    3.0
Crushed gypsum                    2270        4.1   29.5   23.6   10.3   12.9   11.2    8.4
Quartzite                         2675        7.2   27.8   21.2   10.3   15.8   12.8    4.9
Cement clinker                    3170        0.2   19.4   25.5   11.7   14.5   11.9   16.8
Magnetic ironstone                4350        7.2   26.0   22.8   11.3   13.8   13.3    5.6
Alloy No. 1                       6210        0.6   10.1   26.8   18.6   25.4   15.2    3.3
Granulated cast iron              7810        5.3   34.4   39.7   10.7    6.8    2.4    0.7
Alloy No. 2                       8650       (on meshes 0.25; 0.2; 0.15; 0.12; 0.088; 0 mm)
                                             12.6   41.8    7.7    7.5   29.7    0.7

Separation of each material was performed independently on these apparatuses over a broad range of air flow velocities. For each material, graphs of the type Ff(x) = f(Fr) were plotted; the obtained curves turned out to be affine, but for each density the curves were different, even on the same apparatus. These experiments were supplemented by studies of the separation of mixed powders of various densities. A mixture containing 40% magnetic ironstone and 60% quartzite was prepared. As can be seen from the table, these two materials have rather similar granulometric characteristics, so the impact of differences in granulometric composition on separation results is reduced almost to zero. All experiments were also carried out with this mixture on both separators. Experimental results for the apparatus with z = 7, i = 2 are shown in Fig. 4–10. It follows that the affinization of separation curves for materials of different densities is individual to each one. This circumstance can be considered as independence of the separation of each mixture component from the others. We have shown earlier that different size classes of the same density separate autonomously at consumed concentrations of up to 2–3 kg/m³. Now this statement can be expanded, since it has turned out that materials of different densities also behave independently in a flow, despite each covering a rather broad granulometric range. Attempts were made to find a parameter allowing the affinization of separation curves that takes the difference in densities into account. This parameter is

$$B = \mathrm{Fr}\cdot\frac{\rho - \rho_0}{\rho_0} = \frac{gx}{w^2}\cdot\frac{\rho - \rho_0}{\rho_0} \tag{4.21}$$

Its formulation was based on the statistical analysis of two-phase flow, which gave, as a result, the relation (3.193).
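A minimal sketch of the density-corrected invariant (4.21). The air density ρ0 = 1.2 kg/m³, the particle size, and the velocity are assumed purely for illustration; the two material densities are taken from Table 4–1.

```python
# B = Fr * (rho - rho0)/rho0 = g*x*(rho - rho0)/(w**2 * rho0)  -- Eq. (4.21).
# Assumed: air rho0 = 1.2 kg/m^3; x and w are illustrative values.

G, RHO0 = 9.81, 1.2

def b_param(x_m: float, rho: float, w: float, rho0: float = RHO0) -> float:
    return G * x_m * (rho - rho0) / (w ** 2 * rho0)

# Same size and velocity, different densities (quartzite vs. magnetic
# ironstone, Table 4-1): B separates what Fr alone cannot.
b_quartzite = b_param(1e-3, 2675.0, 8.0)
b_ironstone = b_param(1e-3, 4350.0, 8.0)
assert b_ironstone > b_quartzite
```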


(Axes: Ff(x) versus B.)

FIGURE 4–10 Universal curve of the Ff(x) = f(B) type for the separation of materials of various densities and their mixtures.

The use of this parameter provides reliable affinization of the results of all experiments with materials of various densities and their mixtures, as shown in Fig. 4–11. The somewhat elevated spread of experimental points can be attributed to differences in particle shape and to some difference in the density of the same material in different size classes. Thus, we have found an invariant providing the affinity of fractional extraction curves obtained in a channel of a concrete structure for materials of different compositions and densities in turbulent flows. The essence of this law is that any bulk material is separated in a concrete apparatus equivalently, according to the same curve, which is universal for it. The character of this curve is not affected at all by the regime parameters, granulometric characteristics, solid-phase concentration (within the limits of the working range),


(Axes: Ff(x) versus B, with the point B50 marked.)

FIGURE 4–11 Unified separation curve.

value of the boundary particle size, or material density: at any correlation between all these parameters, the separation follows the same curve. The unified curve contains complete information about all possible regimes of a two-phase flow in concrete conditions. Despite the visual chaos of the process, the unified curve makes clear that no fractional extraction value is obtained by chance; a practically deterministic regularity takes place. This is rather strange, because the process is characterized above all by a great number of random factors varying both in space and time. However, as follows from the statistical analysis, the impact of all these factors is somehow leveled out in the real conditions of the process. The obtained universal relationship reflects, by its position in the coordinate system $F_f(x) = f(B)$,

only the design of the separating device in which the process is realized. To obtain another curve, it is necessary either to introduce a change into the design or to replace the entire apparatus; the separation process parameters can be influenced only by these methods. To understand the physical meaning of the separation curve, we attempt to single out some characteristic points on it (Fig. 2–6). It is clear from the method of plotting these curves that

$$Q(x_i) = q(x_i) + n(x_i)$$

Dividing both parts of this equality by $Q(x_i)$, we obtain

$$1 = \frac{q(x_i)}{Q(x_i)} + \frac{n(x_i)}{Q(x_i)}$$


which is analogous to

$$1 = F_f(x_i) + F_c(x_i) \tag{4.22}$$

where $F_f(x_i)$ is the extraction of a narrow class into the fine product and $F_c(x_i)$ is the extraction of the same class into the coarse product. Note also that $D_f$ and $R_c$ are the target products of separation: the larger their share, the higher the quality of the process organization. $D_c$ and $R_f$ are the contaminating components of the products. To determine the conditions for the maximal difference between separation products, it is necessary to maximize the relation

$$E_I = D_f + R_c$$

or to minimize

$$E_{II} = R_f + D_c$$

The optimality condition for these dependencies is

$$\frac{dE_I}{dx} = 0; \qquad \frac{dE_{II}}{dx} = 0.$$

The first condition is disclosed as

$$\frac{dE_I}{dx} = \frac{dD_f}{dx} + \frac{dR_c}{dx} = \frac{d}{dx}\int_0^{x} q(x)\,dx + \frac{d}{dx}\int_{x}^{x_{max}} n(x)\,dx = 0$$

As is known, the derivative of a definite integral with a variable upper limit and a constant lower limit equals the integrand at the upper limit. Hence, we obtain

$$q(x) - n(x) = 0, \quad \text{that is,} \quad q(x) = n(x)$$

The disclosure of the second condition gives an analogous result. This means that at the optimum point a narrow class is divided into halves, that is, the relation

$$F_f(x) = F_c(x) = 50\% \tag{4.23}$$

corresponds to the optimality condition. Thus, we obtain three characteristic points on the separation curve:


1. $F_f(x) = 100\%$: pneumatic transport condition;
2. $F_f(x) = 0\%$: descending layer condition;
3. $F_f(x) = 50\%$: optimal separation condition.

The remaining points of this curve correspond to the parameters of two-phase flow providing mass distribution of the solid phase between the two products. At the same time, parameters characteristic of an isolated particle, such as the Froude criterion $\mathrm{Fr}$ or the dimensionless relation $B$, are invariants leading to the affinization of separation curves. As is already known, the particle hovering velocity is determined by the relation

$$w_0 = \sqrt{\frac{4gx(\rho - \rho_0)}{3\lambda_0 \rho_0}}$$

where $\lambda_0$ is the particle drag coefficient in hovering conditions. We transform this expression:

$$gx(\rho - \rho_0) = \frac{3\lambda_0}{4}\,w_0^2\,\rho_0 \tag{4.24}$$

It is experimentally established that at a flow velocity in a hollow apparatus equal to the hovering velocity of narrow-class particles, $F_f(x) = 50\%$. It follows from the dependence (4.24) that

$$B_{50} = \frac{3\lambda_0}{4} \tag{4.25}$$

This means that for one apparatus $B_{50} = \mathrm{const}$; that is, in these conditions it acquires the meaning of the drag coefficient of a solitary particle. At the same time, $B = \mathrm{const}$ determines a stable equilibrium state for the boundary class of particles. In this connection a problem arises: to determine the values of other parameters $B$ ensuring equal extraction of various classes (e.g., by 10%, 20%, 40%, 70%, 95%, or any other value) at the same physical level. It follows that in the case of fractionation of a polyfractional mixture in an ascending flow, one can always find a narrow class of particle sizes that is in equilibrium with respect to the flow, that is, one separated half-and-half into both products at $F_f(x) = 50\%$. For this class, equilibrium corresponds to the second law: it is its most probable state. The affinity of separation curves leading to a general universal curve has been experimentally established and repeatedly confirmed. Other values of $F_f(x) \ne 50\%$ are explained by the fact that each size class has different dynamic characteristics (weight, size, shape, etc.) and, obtaining the same energy from the flow, acquires a different stable nonequilibrium state characteristic of that class of particles, irrespective of whether it stays in a mixture of particles or is located independently in the flow.
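The chain from the hovering velocity to Eq. (4.25) can be checked numerically. A hedged sketch, assuming λ0 = 0.5 (turbulent drag of a sphere), air density ρ0 = 1.2 kg/m³, and an illustrative 1 mm quartzite grain:

```python
# Sketch: evaluate w0 and verify that B computed at w = w0 equals
# 3*lambda0/4, i.e. B50 of Eq. (4.25). Assumptions: lambda0 = 0.5,
# air rho0 = 1.2 kg/m^3, illustrative 1 mm quartzite grain.

G, RHO0 = 9.81, 1.2

def hovering_velocity(x_m, rho, lam0=0.5, rho0=RHO0):
    """w0 = sqrt(4*g*x*(rho - rho0) / (3*lambda0*rho0))."""
    return (4 * G * x_m * (rho - rho0) / (3 * lam0 * rho0)) ** 0.5

def b_param(x_m, rho, w, rho0=RHO0):
    return G * x_m * (rho - rho0) / (w ** 2 * rho0)

lam0 = 0.5
w0 = hovering_velocity(1e-3, 2675.0, lam0)   # ~7.6 m/s
b50 = b_param(1e-3, 2675.0, w0)
assert abs(b50 - 3 * lam0 / 4) < 1e-12       # B50 = 3*lambda0/4 = 0.375
```

The identity holds for any size and density, which is exactly why B50 depends only on the drag coefficient and not on the material.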


Hence, a system can be stable, in compliance with the laws of physics, for only one narrow class, at $F_f(x) = 50\%$. Classes different from it acquire a stationary distribution between the two products at other values of $F_f(x)$, in compliance with their cumulative dynamic characteristics. Although this "stable nonequilibrium" is less probable, in the present conditions it provides the stationarity of separation results. Here an interesting situation arises: a complicated system can stay in a state of stable nonequilibrium at the expense of the consumption of external energy contained in the flow of medium. Each narrow size class acquires its own value of fractional extraction in the flow. For $B_n = \mathrm{const}$, the classes $n$ are separated stationarily. As is already known, as a result of such a process, the total cumulative entropy of the separation products' composition decreases in comparison with the entropy of the initial composition. All this strictly corresponds to the second law of thermodynamics, from which it follows that a spontaneous change in a system leads to an entropy increase, and a decrease is possible only at the expense of the absorption of external energy by the system. This important conclusion will be used later while examining complicated systems of a different nature. In an idealized flow, the particle motion velocity, as already known, amounts to

$$v = w - w_0 \tag{4.26}$$

Hence, in the general case, the flow velocity can be represented as the sum of the particle velocity and its hovering velocity, that is, $w = v + w_0$. In this case, a general expression for the parameter $B$ can be written as

$$B = \frac{gx(\rho-\rho_0)}{w^2\rho_0} = \frac{gx(\rho-\rho_0)}{(v+w_0)^2\rho_0} = \frac{3}{4}\lambda \tag{4.27}$$

Here $\lambda$ is the particle drag coefficient for some concrete value of $F_f(x) = \mathrm{const}$. According to (4.27),

$$B_{50} = \frac{gx(\rho-\rho_0)}{w_0^2\rho_0} = \frac{3}{4}\lambda_0 \tag{4.28}$$

Dividing (4.27) by (4.28) gives

$$\frac{B}{B_{50}} = \frac{\lambda}{\lambda_0} = \frac{w_0^2}{w^2} \tag{4.29}$$

It follows that the ratio of the flow velocity to the hovering velocity of particles of a narrow class predetermines the value of equal extractability of particles of various fractions.
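Eq. (4.29) reduces the position of a narrow class on the affinized curve to the single ratio (w0/w)². A small sketch under that reading; the hovering velocity value is illustrative.

```python
# Sketch of Eq. (4.29): B/B50 = w0**2 / w**2 for one narrow class.
# w0 below is an illustrative hovering velocity, not measured data.

def b_ratio(w0: float, w: float) -> float:
    return (w0 / w) ** 2

w0 = 7.6  # m/s, illustrative boundary-class hovering velocity
assert b_ratio(w0, w0) == 1.0        # w = w0: the class splits 50/50
assert b_ratio(w0, 2 * w0) == 0.25   # faster flow: toward the fine product
assert b_ratio(w0, 0.5 * w0) == 4.0  # slower flow: toward the coarse product
```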


If the character of the dependence $F_f(x) = f(B)$ is universal for turbulent flows only, it can be expected that a dependence of the type $F_f(x) = \varphi(B/B_{50})$ will acquire a universal character for any regime of medium motion.

4.4 Generalizing invariant
To determine a generalizing invariant, it was necessary to perform experiments in a velocity range including turbulent, transient, and laminar motion regimes. Clearly, a material with low density and small particle size was needed for such an experiment. For this purpose, aluminum powder with a density of 2700 kg/m³, used for making paints, was chosen. The experiments were carried out on a cascade classifier consisting of nine stages with a middle material input into the apparatus (z = 9; i = 5). Three series of experiments were performed at the flow velocity w = 0.53 m/s with consumed concentrations of the solid phase equal to 2.75, 6.0, and 14.3 kg/m³. It turned out that in this range of concentrations the influence of concentration is insignificant. This is evident from the plot in Fig. 4–12, where all experimental results give the same universal curve. The existing scatter of experimental points is not large and may be due to errors connected with the complexity of determining the granulometric composition of particles in the size range under study. Experiments in this concentration range were performed on the same apparatus at various air flow velocities within the range 0.29–1.46 m/s. All the experiments in this series were repeated three times. On the same apparatus, separation experiments were performed on quartzite powder with a similar density of ρ = 2670 kg/m³ but elevated particle size, from 0.1 to 3.0 mm, in comparison with the aluminum powder. Experiments on quartzite were performed at air flow velocities of 4.7, 5.57, 6.67, 7.3, and 7.89 m/s. All these experiments were processed using the method Ff(x) = f(B) and plotted on the same graph (Fig. 4–13). The results of all experiments with quartzite provide stable affinization, and practically all experimental points obtained at aluminum powder fractionation at the elevated velocities of 1.19 and 1.46 m/s lie on the same curve.

(Axes: Ff(x), %, versus B.)

FIGURE 4–12 Ff(x) = f(B).

(Axes: Ff(x) versus B; curves for flow velocities w = 0.29, 0.31, 0.38, 0.53, 0.65, 0.92, 1.19, and 1.46 m/s.)

FIGURE 4–13 Dependence Ff(x) = f(B) for a mixture of materials with super-fine particles.


At lower flow velocities, curves of analogous shape are obtained, but they diverge from one another, and the divergence grows as the flow velocity decreases. This is natural: it would be strange to expect the laws of solid-particle interaction with laminar or transient flow regimes to be the same as for a turbulent flow. It is necessary to examine the basic physics of two-phase flow motion and to formulate, on this basis, invariant parameters of separation-curve affinization for all flow regimes. Before that, it is necessary to look into the methods of mathematical modeling of separators and their hydrodynamics. We take cascade classifiers as a basis: these are recognized as the most efficient apparatuses, allowing smooth regulation of separation efficiency and of the quality of the obtained products. Before passing to the development of a methodology for detailed calculation of the process parameters, note that the experimental check has completely and unambiguously confirmed all the results obtained by the statistical analysis of the process under study.

4.5 Determination of solid-phase distribution coefficients in a two-phase flow
Distribution coefficients are the main parameter characterizing the results of the separation of various size classes, and they can be taken as a basis for analytical computation of the results of the process under study. To determine them, a statistical approach to the analysis of two-phase flow is very important, but it is not enough to limit oneself to this; it must be supplemented with a physical picture of the real process. In view of the extreme complexity of such flows, a number of assumptions are needed while constructing a model:

1. The particle shape is spherical;
2. The distribution of particles of any narrow size class over the channel cross-section is uniform, owing to their intense interaction with each other, as well as with the apparatus walls and internal facilities;
3. The ascending two-phase flow must be considered as a continuum with an elevated density. As established earlier, the carrying capacity of a dusty flow is higher than that of a clean medium; this can be conventionally taken into account by increasing the effective density of the flow.

The distribution of local velocities of the continuous phase is a function of the geometrical characteristics of the flow section of the channel. In general form, according to Fig. 4–14, it can be written as

$$u_r = w\cdot f\!\left(\frac{r}{R}\right) \tag{4.30}$$

where $f(r/R)$ is a function connected with the cross-section geometry; $r$ is a characteristic coordinate of a point of the apparatus cross-section; $R$ is a characteristic boundary



FIGURE 4–14 Formation of the distribution coefficient in the case of complicated transversal flow structure.

size of the channel cross-section; $u_r$ is the local velocity of the continuous phase at a point with coordinate $r$; and $w$ is the mean flow velocity. Thus, the dependence (4.30) takes into account the shape of the channel cross-section. According to the Newton–Rittinger law, the dynamic influence of a flow on a single particle is determined by the following dependence:

$$F_r = \lambda\,\frac{\pi d^2}{4}\,\rho_n\,\frac{(u_r - v_r)^2}{2}$$

where $\lambda$ is the drag coefficient of the particle; $\pi d^2/4$ is the particle midsection area; $\rho_n$ is the flow density; $v_r$ is the local absolute velocity of the particle motion; and $(u_r - v_r)$ is the particle velocity with respect to the flow.


The difference of absolute velocities is algebraic; the flow direction is chosen as the positive direction of the velocities $u_r$ and $v_r$. If the total number of particles of a given monofraction in the cross-section under study is taken as unity, then the distribution coefficient in critical flow regimes is written as $K = n$, the relative number of particles of the given narrow size class having an absolute velocity greater than or equal to zero ($v \ge 0$). From the equilibrium of an individual particle at distance $r_0$ from the channel axis, we obtain

$$\frac{\pi d^3}{6}\,g(\rho - \rho_0) = \lambda\,\frac{\pi d^2}{4}\,\rho_0\,\frac{(u_r - v_r)^2}{2} \tag{4.31}$$

It follows that

$$u_r - v_r = w\sqrt{\frac{4B}{3\lambda}} \tag{4.32}$$

where $B = \dfrac{gd(\rho - \rho_0)}{w^2\rho_0}$. We examine the regime of turbulent flow around a particle, characterized by the constancy of the drag coefficient $\lambda$. In this case, the Reynolds criterion for the particle is

$$\mathrm{Re}_r = \frac{(u_r - v_r)\,d\,\rho_0}{\mu} \ge 500$$

where $\mu$ is the dynamic viscosity of the medium. Taking (4.32) into account, we obtain

$$\frac{w\sqrt{\frac{4B}{3\lambda}}\;d\,\rho_0}{\mu} \ge 500$$

This condition corresponds to the expression (for $\lambda = 0.5$)

$$\sqrt{\frac{8}{3}\mathrm{Ar}} \ge 500 \tag{4.33}$$

where $\mathrm{Ar}$ is the Archimedes criterion,

$$\mathrm{Ar} = \frac{g d^3 (\rho - \rho_0)\,\rho_0}{\mu^2}$$

Hence, we can determine the limiting particle size above which the flow around the particles is a fortiori turbulent. Thus, for quartzite ($\rho = 2650$ kg/m³), particles with size $d \ge 1$ mm are in the regime of turbulent flow-around.
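The bound (4.33) is easy to check numerically. A sketch assuming standard air properties (ρ0 = 1.2 kg/m³, μ = 1.8·10⁻⁵ Pa·s); with these numbers the d ≥ 1 mm estimate for quartzite is reproduced.

```python
# Sketch of the turbulent flow-around criterion sqrt(8*Ar/3) >= 500,
# i.e. Ar >= 93,750. Assumed air properties: rho0 = 1.2 kg/m^3,
# mu = 1.8e-5 Pa*s.

G, RHO0, MU = 9.81, 1.2, 1.8e-5

def archimedes(d_m: float, rho: float) -> float:
    return G * d_m ** 3 * (rho - RHO0) * RHO0 / MU ** 2

def turbulent_flow_around(d_m: float, rho: float) -> bool:
    return (8 * archimedes(d_m, rho) / 3) ** 0.5 >= 500

# Quartzite, rho = 2650 kg/m^3: a 1 mm grain just clears the bound,
# while a 0.5 mm grain does not.
assert turbulent_flow_around(1.0e-3, 2650.0)
assert not turbulent_flow_around(0.5e-3, 2650.0)
```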


From the condition (4.33) we obtain

$$\frac{w\rho_0}{\mu} \ge \frac{500}{d\sqrt{\frac{4B}{3\lambda}}} \tag{4.34}$$

We write the Reynolds criterion for the medium moving in the channel as

$$\mathrm{Re} = \frac{wD\rho_0}{\mu} \tag{4.35}$$

where $D$ is the channel diameter. Substituting the previous expression into this one, we obtain

$$\mathrm{Re} \ge \frac{D}{d}\,\frac{500}{\sqrt{\frac{4B}{3\lambda}}} \tag{4.36}$$

The condition (4.36) allows estimating the regime of the medium motion in the channel at turbulent flow around the particles. Obviously, the estimation of $(\mathrm{Re})_{min}$ should be performed with respect to $d_{max}$, at which the parameter $B = B_{max}$. It is experimentally established that at the separation of coarse-grained materials $B_{max} = 2.5$–$3$ for all monofractions and depends only insignificantly on the apparatus design. The ratio $D/d$ is on the order of $10^2$ for experimental apparatuses and an order of magnitude greater for industrial ones. Taking $\lambda = 0.5$, we obtain from (4.36)

$$(\mathrm{Re})_{min} \ge 1.7 \times 10^4$$

This is also confirmed experimentally. Thus, at the classification of periclase (ρ = 3600 kg/m³) in equilibrium apparatuses with D = 100 mm, H = 900 mm, and h = 600 mm, at the velocity w = 1.5 m/s, the fine product output was γf < 5%. The consumed concentration was kept at the level of 1.5 kg/m³, and the granulometric composition was characterized by a content of particles smaller than 150 μm on the order of 11%. In such conditions, it is easily checked that the Reynolds criterion amounts to Re = 10⁴. Thus, in the conditions of turbulent flow around the particles, that is, at separation with respect to coarse material, practically the entire process (from γf = 0) occurs in a turbulent regime of medium motion in the apparatus (γf is the fine product output). We now examine the special features of laminar flow around the particles. In this case, the following is valid:

$$\mathrm{Re}_p = \frac{(u_r - v_r)d\rho_0}{\mu} \le 1$$


From the equilibrium conditions we obtain

$$\frac{\pi d^3}{6}\,g(\rho - \rho_0) = 3\pi\mu(u_r - v_r)d$$

Taking the foregoing into account, an expression for this case can be written as

$$\mathrm{Ar} = 18\,\mathrm{Re}_p$$

In an air medium this is true for particles with size

$$d \le 10^{-3}\sqrt[3]{\frac{0.4954}{\rho}}$$

Thus, in the case of quartzite, these are particles with size $d \le 57$ μm. It follows from the laminar condition $\mathrm{Re}_p \le 1$ that

$$\frac{\rho_0}{\mu} \le \frac{1}{(u_r - v_r)d} \tag{4.37}$$

Substituting this result into the expression for $\mathrm{Re}$ gives

$$\mathrm{Re} \le \frac{D}{d}\,\frac{w}{u_r - v_r} \tag{4.38}$$

To express the relation $(u_r - v_r)/w$, we use the ratio (4.32), taking into account that the drag coefficient in laminar flow-around amounts to

$$\lambda = \frac{24}{\mathrm{Re}_p} = \frac{24\mu}{(u_r - v_r)d\rho_0}$$

Then

$$u_r - v_r = w\sqrt{\frac{4B}{3}\cdot\frac{(u_r - v_r)\,d\,\rho_0}{24\mu}}$$

Taking into account that $\mathrm{Re}_p^2 B = \mathrm{Ar}$, we obtain

$$\frac{u_r - v_r}{w} = \frac{\sqrt{\mathrm{Ar}\,B}}{18} \tag{4.39}$$

Substituting (4.39) into (4.38), we obtain

$$\mathrm{Re} \le \frac{D}{d}\,\frac{18}{\sqrt{\mathrm{Ar}\,B}}$$


Since in the laminar regime $\mathrm{Ar} \le 18$, we finally obtain

$$\mathrm{Re} \le \frac{D}{d}\sqrt{\frac{18}{B}} \tag{4.40}$$

At the movement of finely dispersed material in a channel, the ratio $D/d$ is of the order of $10^2$–$10^3$, and it is experimentally established that $B_{max}$ at $F_f(x) = 0$ has different values for different monofractions. Here $F_f(x)$ is the extraction of particles into the fine product,

$$F_f(x) = \gamma_f\,\frac{r_f}{r_c}$$

where $r_c$ is the narrow-class content in the initial material; $r_f$ is the content of the same class in the fine product; and $\gamma_f$ is the fine product output. Thus, laminar flow around particles occurs even at a sufficiently developed flow turbulence, which also overlaps with the region of transient regimes. We evaluate $\mathrm{Re}$ for isolated particles characterized by the criterion value $\mathrm{Ar} = 18$. It is experimentally established for them that

$$B_{max}\,[F_f(x) = 0] = 2.0; \qquad B_{min}\,[F_f(x) = 100\%] = 0.5$$

Then the ratio (4.40) yields values

$$\mathrm{Re} \approx 10^3\text{--}10^4$$

Thus, according to experimental data, at the separation of powdery aluminum (ρ = 2700 kg/m³) in a shelf apparatus with z = 10, i = 5, μp = 1.5–2.2 kg/m³, at the air flow velocity w = 0.27 m/s, the monofraction d = 56.5 μm (Ar ≈ 18) had fractional extraction $F_f(x_0) = 0$, and at the velocity w = 1.7 m/s, $F_f(x) = 100\%$. In this case, the medium motion regime in the apparatus is numerically characterized, respectively, by $\mathrm{Re} = 1.2\times 10^3$ and $\mathrm{Re} = 8\times 10^3$. Thus, at the classification of particles flowed around in laminar regime, the entire process is characterized by medium-flow regimes from laminar and intermediate up to turbulent. In real conditions, a considerable number of monofractions are flowed around in the intermediate regime ($18 < \mathrm{Ar} < 93{,}750$), which must provide intermediate and turbulent regimes of medium motion in the apparatus. Elucidating this issue is extremely important for the prognostic estimation of powder distribution in flows. All these conclusions are made for solitary particles; regularities for the mass motion of particles can be somewhat different.
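The laminar bound Ar ≤ 18 can likewise be inverted for the limiting particle size. A sketch with assumed air properties (ρ0 = 1.2 kg/m³, μ = 1.8·10⁻⁵ Pa·s); it reproduces the ~57 μm figure quoted above for quartzite.

```python
# Sketch: invert Ar = g*d**3*(rho - rho0)*rho0/mu**2 <= 18 for d.
# Assumed air: rho0 = 1.2 kg/m^3, mu = 1.8e-5 Pa*s.

G, RHO0, MU = 9.81, 1.2, 1.8e-5

def stokes_limit_d(rho: float) -> float:
    """Largest particle size (m) still flowed around in the laminar regime."""
    return (18 * MU ** 2 / (G * (rho - RHO0) * RHO0)) ** (1 / 3)

d_quartzite = stokes_limit_d(2650.0)
assert abs(d_quartzite - 57e-6) < 2e-6   # ~57 microns, as in the text
```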


We now revert to the relationship (4.32). For particles of a narrow class having an absolute velocity $v_r \ge 0$, we can write

$$u_r \ge w\sqrt{\frac{4B}{3\lambda}}$$

Substituting this into (4.30), we obtain

$$f\!\left(\frac{r}{R}\right) \ge \sqrt{\frac{4B}{3\lambda}} \tag{4.41}$$

Similarly, for particles with $v_r \le 0$,

$$f\!\left(\frac{r}{R}\right) \le \sqrt{\frac{4B}{3\lambda}} \tag{4.42}$$

Inequalities (4.41) and (4.42) include the following limiting cases:
1. For any coordinate r,
$$ f\left(\frac{r}{R}\right) > \sqrt{\frac{4B}{3\lambda}} $$
In this case, the distribution coefficient K = 1.
2. Respectively, at f(r/R) < √(4B/(3λ)) for any coordinate r, we obtain K = 0.

An intermediate case is characterized by the fact that certain coordinates make up level lines according to the equality

$$ f\left(\frac{r_0}{R}\right) = \sqrt{\frac{4B}{3\lambda}} \qquad (4.43) $$

We assume that this equation has one real root

$$ \frac{r_0}{R} = f^{-1}\left(\sqrt{\frac{4B}{3\lambda}}\right) \qquad (4.44) $$

Taking this into account, we find a corresponding area ω_{r0} for which the following is valid:

$$ f\left(\frac{r_i}{R}\right) \ge f\left(\frac{r_0}{R}\right) $$


Then the distribution coefficient is written as

$$ K = \frac{\omega_{r_0}}{\omega_R} = C\left(\frac{r_0}{R}\right)^{2} \quad\text{for a convex profile } f\left(\frac{r}{R}\right), $$

$$ K = C\left[1 - \left(\frac{r_0}{R}\right)^{2}\right] \quad\text{for a concave profile } f\left(\frac{r}{R}\right). $$

The coefficient C characterizes the shape of the level lines and of the flow cross-section; for example, for a circle C = 1. Substituting the dependence (4.44) into the obtained equations, we finally obtain

$$ K = \varphi\left(\frac{B}{\lambda}\right) $$

An analogous dependence is valid for two and more roots of Eq. (4.43), which takes place in the case of complicated profiles f(r/R). Thus, in the case of the profile shown in Fig. 4-14 (isotachs), for a certain monofraction we have three real roots of Eq. (4.43): r01, r02, r03. The latter form isotachs taking into account the shape of the apparatus flow section. The isotachs determine the respective total area Σω_{r0i}, for which the following is valid:

$$ f\left(\frac{r}{R}\right) \ge f\left(\frac{r_{0i}}{R}\right) $$

Thus, for the given case,

$$ \sum \omega_{r_{0i}} = \omega_{r_{01}} + \omega_{r_{02}} + \omega_{r_{03}} $$

and the distribution coefficient is written as

$$ K = \frac{\sum \omega_{r_{0i}}}{\omega_R} $$

Since Σω_{r0i} is unambiguously expressed through r_{0i}, the roots of Eq. (4.43), the final expression for the distribution coefficient has the form

$$ K = \varphi\left(\frac{B}{\lambda}\right) \qquad (4.45) $$

A concrete expression for the distribution coefficient can be obtained by passing to a concrete velocity profile of the continuous medium over the apparatus cross-section. We now examine, in turn, the possible cases of particle interaction with the flow.


(a) Turbulent flow around particles and turbulent regime of medium motion in the apparatus

In this case, the distribution of continuous-medium velocities over the radius for an equilibrium apparatus of circular cross-section is usually expressed by an empirical dependence of the following type:

$$ u_r = w\,\frac{(n+1)(n+2)}{2}\left(1 - \frac{r}{R}\right)^{n} = w\,f\left(\frac{r}{R}\right) \qquad (4.46) $$

where n is a characteristic depending on the medium motion regime and the roughness of the pipe walls (n < 1). According to (4.43), we find the isotach coordinate at which the absolute velocity of a fixed monofraction equals zero:

$$ \frac{(n+1)(n+2)}{2}\left(1 - \frac{r_0}{R}\right)^{n} = \sqrt{\frac{4B}{3\lambda}} $$

Hence,

$$ \frac{r_0}{R} = 1 - \left[\frac{2}{(n+1)(n+2)}\sqrt{\frac{4B}{3\lambda}}\right]^{1/n} \qquad (4.47) $$

the cross-section area for which

$$ f\left(\frac{r}{R}\right) \ge f\left(\frac{r_0}{R}\right) $$

being ω_{r0} = πr0². Therefore the distribution coefficient is expressed by the ratio

$$ K = \left(\frac{r_0}{R}\right)^{2} $$

Then, taking (4.47) into account, we obtain

$$ K = \left\{1 - \left[\frac{2}{(n+1)(n+2)}\sqrt{\frac{4B}{3\lambda}}\right]^{1/n}\right\}^{2} $$

Instead of the dependence (4.46), we can examine a different profile of the distribution of continuum velocities over the radius:

$$ u_r = \frac{n+2}{n}\,w\left[1 - \left(\frac{r}{R}\right)^{n}\right] \qquad (4.48) $$

where n is the flow turbulization degree (n = 2…∞).


Consider the following cases:
1. Velocity gradient on the flow axis. For the dependence (4.48), the following is valid:
$$ \frac{du}{dr} = -(n+2)\,\frac{w}{R}\left(\frac{r}{R}\right)^{n-1} \qquad (4.49) $$
For the dependence (4.46),
$$ \frac{du}{dr} = -\frac{w}{R}\cdot\frac{n(n+1)(n+2)}{2}\left(1 - \frac{r}{R}\right)^{n-1} \qquad (4.50) $$
Then, according to (4.49), we obtain for the expression (4.48)
$$ \left(\frac{du}{dr}\right)_{r=0} = 0 $$
Respectively, for (4.46) we obtain from (4.50)
$$ \left(\frac{du}{dr}\right)_{r=0} = -\frac{f(0)\,n\,w}{R} $$

Thus, the relationship (4.46), in contrast to (4.48), describes a function whose derivative is discontinuous on the flow axis.
2. Velocity gradient on the pipe wall. For the function (4.48), it follows from (4.49) that
$$ \left(\frac{du}{dr}\right)_{r=R} = -(n+2)\,\frac{w}{R} = -\frac{f(0)\,n\,w}{R}, $$
that is, the higher the flow turbulization degree and the average velocity, the greater the gradient. For the dependence (4.46), according to (4.50) we obtain
$$ \left(\frac{du}{dr}\right)_{r=R} = \infty, $$
which points to the transfer of an infinite momentum (corresponding to an infinite friction force).
3. The expression (4.48), in contrast to (4.46), combines all regimes of motion up to the laminar one.
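As an illustrative check (a sketch, not from the book), both velocity diagrams can be verified numerically for the property that fixes their prefactors: the area-averaged velocity over a circular cross-section must equal the mean flow velocity w. The exponents n = 1/7 for (4.46) and n = 7 for (4.48) are assumed values typical of a developed turbulent regime.

```python
def u_446(x, w=1.0, n=1/7):
    """Empirical profile (4.46): u = w*(n+1)*(n+2)/2*(1 - r/R)**n, n < 1."""
    return w * (n + 1) * (n + 2) / 2 * (1 - x) ** n

def u_448(x, w=1.0, n=7):
    """Alternative profile (4.48): u = ((n+2)/n)*w*(1 - (r/R)**n)."""
    return (n + 2) / n * w * (1 - x ** n)

def mean_over_circle(u, steps=100_000):
    """Area-weighted mean velocity over a circular section (midpoint rule):
    (1/(pi*R**2)) * integral of u(r)*2*pi*r dr from 0 to R, with R = 1."""
    h = 1.0 / steps
    return sum(u((i + 0.5) * h) * 2 * ((i + 0.5) * h) * h for i in range(steps))

# Both diagrams carry the same volumetric flow rate: the mean equals w = 1.
```

The same routine also makes the qualitative difference visible: (4.48) is flat at the axis, while (4.46) has a kink there.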


On the basis of (4.44), the radius forming the distribution coefficient is

$$ \frac{r_0}{R} = \left[1 - \frac{n}{n+2}\sqrt{\frac{4B}{3\lambda}}\right]^{1/n} $$

From here we obtain a relationship for the distribution coefficient.
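A small numerical sketch (assumptions mine: circular cross-section, n = 7, λ = 0.5) of how the distribution coefficient follows from this radius:

```python
import math

def k_from_profile_448(B, lam=0.5, n=7):
    """K for the profile (4.48): isotach radius from the relation above,
    r0/R = [1 - (n/(n+2))*sqrt(4B/(3*lam))]**(1/n), then K = (r0/R)**2
    for a circular cross-section."""
    s = (n / (n + 2)) * math.sqrt(4 * B / (3 * lam))
    if s >= 1:  # the level line leaves the section: nothing is carried up
        return 0.0
    return (1 - s) ** (2 / n)

# K decreases from 1 toward 0 as the parameter B (particle coarseness) grows.
```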

(b) Laminar flow regime

In this case, the regime of medium motion in the channel is also described by the dependence (4.48) for the flow structure. Thus, for n = 2 we obtain a parabolic velocity profile, for n > 8 turbulent motion, and for 2 < n < 8 the transient regime. Then, according to (4.44), we obtain

$$ \frac{n+2}{n}\left[1 - \left(\frac{r_0}{R}\right)^{n}\right] = \sqrt{\frac{4B}{3\lambda}} \qquad (4.51) $$

Using the expression for the drag coefficient at laminar flow around a particle, λ = 24/Re_p, in this equation, we obtain

$$ \frac{n+2}{n}\left[1 - \left(\frac{r_0}{R}\right)^{n}\right] = \sqrt{\frac{n+2}{n}\,w\left[1 - \left(\frac{r_0}{R}\right)^{n}\right]\frac{d\rho_0}{18\mu}\,B} $$

Simplifying, we come to the relationship

$$ \frac{n+2}{n}\left[1 - \left(\frac{r_0}{R}\right)^{n}\right] = \frac{\mathrm{Re}\,B}{18} $$

Taking into account that the distribution coefficient is K = (r0/R)² and that Re·B = √(Ar·B), we finally obtain

$$ K = \left[1 - \frac{n}{n+2}\cdot\frac{\sqrt{Ar\cdot B}}{18}\right]^{2/n} $$

For a parabolic profile (n = 2),

$$ K = 1 - \frac{\sqrt{Ar\cdot B}}{36} $$

(c) Intermediate regime of flow around particles

Using the well-known dependence of the drag coefficient on the criteria Re and Ar,

$$ \lambda = \frac{4}{3}\cdot\frac{Ar}{\mathrm{Re}^2}, $$

and the interpolation formula valid for all flow-around regimes,

$$ \mathrm{Re} = \frac{Ar}{18 + 0.61\sqrt{Ar}}, $$

we obtain

$$ \lambda = \frac{4}{3}\cdot\frac{\left(18 + 0.61\sqrt{Ar}\right)^{2}}{Ar} $$

Substituting the latter into (4.51) and passing to the distribution coefficient, we determine

$$ 1 - K^{n/2} = \frac{n}{n+2}\cdot\frac{\sqrt{Ar\cdot B}}{18 + 0.61\sqrt{Ar}} $$

The generalized dependence for the distribution coefficient in an arbitrary regime of the medium motion and an arbitrary regime of flow around particles takes the form

$$ K = \left[1 - \frac{n}{n+2}\cdot\frac{\sqrt{Ar\cdot B}}{18 + 0.61\sqrt{Ar}}\right]^{2/n} \qquad (4.52) $$
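A direct transcription of (4.52) as a sketch (the guard against a negative bracket is my addition):

```python
import math

def distribution_coefficient(Ar, B, n=2):
    """Generalized distribution coefficient, Eq. (4.52): arbitrary medium
    motion regime (profile exponent n) and arbitrary particle flow-around
    regime (Archimedes number Ar)."""
    core = (n / (n + 2)) * math.sqrt(Ar * B) / (18 + 0.61 * math.sqrt(Ar))
    return max(0.0, 1.0 - core) ** (2 / n)

# For n = 2 and small Ar this reproduces the laminar K = 1 - sqrt(Ar*B)/36;
# for large Ar it approaches the turbulent limit K = 1 - (n/(n+2))*sqrt(8B/3).
```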

4.6 Estimation of distribution coefficients

We analyze the expression (4.52) for turbulent regimes (Ar ≥ 10⁵). In such conditions, the summand 18 in the denominator can be neglected, and the formula reduces to the following expression:

$$ K = \left[1 - \frac{n}{n+2}\sqrt{\frac{8B}{3}}\right]^{2/n} $$

Taking into account the mathematical model of a regular cascade (4.14), this expression agrees well with experimentally obtained dependencies.

Two approaches to the study of suspension-carrying flows are the most widespread. The first considers a two-phase flow as a certain continuous medium (continuum) with averaged properties; the characteristics of such a dispersoid are a certain average velocity, density, etc. As noted above, this approach in its pure form is unacceptable for separation flows: it can be successfully applied, for instance, to describe processes similar to pneumatic transport, but not classification. In the second approach, the behavior of each phase is considered separately. With regard to the classification process, numerous random factors must then be taken into account, which creates insuperable difficulties for a quantitative description of the process results in explicit form. Therefore the applicability of this approach is limited: it can


solve only the simplest problems of two-phase flow behavior and is totally useless for describing the classification process as a whole. With regard to the classification process, it therefore seems reasonable to use a combined method. Its essence consists in the fact that, on the basis of the evaluation of the continuum influence on the discrete phase and of the behavior and interaction of separate monofractions, a transition to a dispersoid with an effective carrying capacity is realized. Thus, the continuous medium and each separate monofraction participate in the formation of the dispersoid, and the latter, in its turn, influences the behavior of particles of each narrow size class. This implicitly reflects the intraphase and interphase interactions, but on the basis of a continuum. To substantiate the transition to a "disjoint" dispersoid, we estimate the density of the monofraction flows. The number of particles of a fixed monofraction passing through the apparatus cross-section in a unit of time can be expressed as

$$ Q_P = \frac{G\,r_s\,r_i}{P} \qquad (4.53) $$

where G is the monofraction mass flow; and P is the mass of one particle of the given narrow size class. An ascending flow of monofraction particles is then written as

$$ Q_a = \frac{G_f\,r_s\,r_i\,K}{P} \qquad (4.54) $$

where r_i·K is the relative flow of the particles under study from section i into the section located above. The resulting ascending flow of a given monofraction has the form

$$ Q_r = \frac{G_f\,r_s\,F_f}{P} \qquad (4.55) $$

Assuming that the particle distribution over the apparatus cross-section is uniform, we obtain expressions for the flow density of particles of a fixed narrow size class:

$$ q_1 = \frac{G_f\,r_s}{F\,P} $$

for an individual relative flow, and

$$ q = q_1\,F_f \qquad (4.56) $$

for the resultant flow. We transform the obtained expressions by multiplying the numerator and the denominator of the right-hand part by the volumetric flow rate of the continuous phase V:

$$ q = \frac{G_f\,r_s\,V}{V\,F\,P} = \frac{\mu\,r_s\,V}{F\,P} = \frac{\mu\,r_s\,w}{P} \qquad (4.57) $$


Table 4–2 Densities of flows of various monofraction particles calculated by formulas (4.56) and (4.57).

Narrow size class (mm)   −0.14        −0.2 + 0.14   −0.3 + 0.2   −0.5 + 0.3
Average size d (mm)      0.07         0.17          0.25         0.40
r_s (%)                  10.93        13.51         15.75        26.59
F_f (%)                  93           45.5          8.0          2.0
q, (cm²·s)⁻¹             71.8 × 10³   6.2 × 10³     2.3 × 10³    0.94 × 10³
q_P, (cm²·s)⁻¹           66.8 × 10³   2.8 × 10³     184          19

The dependencies (4.56) and (4.57) are now examined with a concrete example. Thus, at periclase (ρ = 3600 kg/m³) separation in an equilibrium apparatus in the regime of w = 2.83 m/s and at the consumed concentration μ = 1.5 kg/m³, the fine product output was around 20%. The granulometric composition of the initial material, the fractional extraction degree, and the densities of particle flows calculated using formulas (4.56) and (4.57) are presented in Table 4–2. The data in this table testify that the densities of particle flows, especially those of fine particles, are sufficiently high in the apparatus, although the fine product output is small. It can be interpreted as follows: fine particles, catching up with coarse ones, exert an additional influence on them in comparison with the continuous medium. The high density of particles averages and equalizes this influence in time. This allows us to pass to the carrying capacity of the flow as a whole, that is, to a dispersoid, and to examine its impact on particles of each narrow size class separately (a "disjoint dispersoid").

It is of interest to compare the calculated density of the particle flow with experimental data. In Razumov's work, experimental data are presented on the number of collisions (under the conditions of vertical pneumatic transport) of a suspension-carrying flow containing a monofraction of size d = 2.3 mm with a motionless surface of area 1 cm². The parameters characterizing the experimental conditions were as follows:
• medium density ρ0 = 1.29 kg/m³;
• density of particle material ρ = 1200 kg/m³;
• initial mass concentration 3.5 kg/kg, which corresponds to μ = 4.515 kg/m³;
• r_s = 100%, since the tests were performed with a monofraction;
• mean velocity of the medium flow varied within the limits 10–17.5 m/s.

Three hundred to 1300 collisions per second per 1 cm² of a surface placed into the ascending flow were registered in the experiments. One can easily see that, according to the experimental conditions, the number of collisions is nothing but the density of the particle flow determined by expression (4.56). Assuming, on average, w = 14 m/s, we find N = q = 827 (cm²·s)⁻¹, which is close to the average number of collisions registered in the experiment. For the velocities w = 10 m/s and w = 17.5 m/s, the number of collisions


determined by formula (4.56) is, respectively, 590 and 1033 (cm²·s)⁻¹. Apparently, such results are quite satisfactory as an estimation.

For the transition to a "disjoint" dispersoid, which is different for particles of each narrow size class, it is necessary to evaluate its very important parameter, the density ρ_nj (the dispersoid density for particles of the jth narrow size class). Consider two monofractions with the masses of separate particles m and M. We assume that N fine particles inelastically collide with one coarse particle, passing their momentum to it. The number N can be evaluated, to a first approximation, through the ratio of the flow densities of the monofractions under study:

$$ N = \frac{q_m}{q_M} = \frac{r_m}{r_M}\left(\frac{d_M}{d_m}\right)^{3} \qquad (4.58) $$
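The check against Razumov's collision counts can be reproduced directly from (4.57); the spherical-particle mass P = ρπd³/6 is an assumption of this sketch:

```python
import math

def particle_mass(d, rho):
    """Mass of a spherical particle of diameter d [m] and density rho [kg/m^3]."""
    return rho * math.pi * d ** 3 / 6

def flow_density(mu, r_s, w, d, rho):
    """Particle flow density q = mu*r_s*w/P, Eq. (4.57), in (cm^2*s)^-1."""
    q_si = mu * r_s * w / particle_mass(d, rho)   # per m^2 per s
    return q_si / 1e4                             # per cm^2 per s

# Razumov's pneumatic-transport conditions: d = 2.3 mm, rho = 1200 kg/m3,
# mu = 4.515 kg/m3, monofraction (r_s = 1), w = 10, 14, 17.5 m/s.
for w in (10.0, 14.0, 17.5):
    print(w, round(flow_density(4.515, 1.0, w, 2.3e-3, 1200)))
```

The computed values land within a couple of counts of the 590, 827, and 1033 (cm²·s)⁻¹ quoted in the text.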

As a result of the inelastic collisions, the ensemble of fine particles and the coarse particle acquire the same velocity v_Km = v_KM. At that, the velocity of the fine particles decreases from v_Hm to v_Km, and the velocity of the coarse particle increases from v_HM to v_KM. The change in the momentum of the ensemble of fine particles is

$$ \Delta L_m = N m (v_{Hm} - v_{Km}), $$

and for the coarse particle

$$ \Delta L_M = M (v_{KM} - v_{HM}). $$

Obviously, ΔL_m = ΔL_M. This stipulates

$$ v_{Km} = v_{KM} = \frac{N m v_{Hm} + M v_{HM}}{Nm + M} \qquad (4.59) $$

The conditions of uniform motion of a fine and a coarse particle with the initial velocities have the following form:

$$ (u - v_{Hm})^{2} = \frac{4}{3\lambda}\cdot\frac{\rho}{\rho_0}\,g d_m \qquad (4.60) $$

$$ (u - v_{HM})^{2} = \frac{4}{3\lambda}\cdot\frac{\rho}{\rho_0}\,g d_M \qquad (4.61) $$

The condition of uniform motion of the coarse particle with the final velocity in the dispersoid is, respectively,

$$ (u - v_{KM})^{2} = \frac{4}{3\lambda}\cdot\frac{\rho}{\rho_n}\,g d_M \qquad (4.62) $$


Dividing, term by term, the expression (4.61) by (4.62), we obtain

$$ \frac{\rho_n}{\rho_0} = \left(\frac{u - v_{HM}}{u - v_{KM}}\right)^{2} \qquad (4.63) $$

From the expression (4.59) we obtain

$$ u - v_{KM} = u - \frac{N m v_{Hm} + M v_{HM}}{Nm + M} \qquad (4.64) $$

After simple transformations, this expression acquires the form

$$ u - v_{KM} = \frac{N m (u - v_{Hm}) + M (u - v_{HM})}{Nm + M} \qquad (4.65) $$

We apply the obtained expression in Eq. (4.63):

$$ \frac{\rho_n}{\rho_0} = \frac{(Nm + M)^{2}}{\left[Nm\,\dfrac{u - v_{Hm}}{u - v_{HM}} + M\right]^{2}} \qquad (4.66) $$

or, taking into account (4.60) and (4.61),

$$ \frac{\rho_n}{\rho_0} = \frac{(Nm + M)^{2}}{\left(Nm\sqrt{\dfrac{d_m}{d_M}} + M\right)^{2}} = \left(\frac{N\dfrac{m}{M} + 1}{N\dfrac{m}{M}\sqrt{\dfrac{d_m}{d_M}} + 1}\right)^{2} \qquad (4.67) $$

Using (4.58), with regard to the fact that m/M = (d_m/d_M)³, we obtain

$$ \frac{\rho_n}{\rho_0} = \left(\frac{\dfrac{r_m}{r_M} + 1}{\dfrac{r_m}{r_M}\sqrt{\dfrac{d_m}{d_M}} + 1}\right)^{2} \qquad (4.68) $$

Since ρ_n = ρ_0 + Δρ_n, the relative increase of the dispersoid density is

$$ \frac{\Delta\rho_n}{\rho_0} = \frac{\rho_n}{\rho_0} - 1 $$

Taking into account the n monofractions under study, which transfer their momentum to a coarser particle, the relative increase in the dispersoid density takes the form

$$ \frac{\Delta\rho_n}{\rho_0} = \sum_{j=1}^{n}\left(\frac{\rho_n}{\rho_0}\right)_j - n $$


Taking into account (4.68), the final expression for the dispersoid density has the form

$$ \rho_n = \rho_0\left[\sum_{j=1}^{n}\left(\frac{r_m/r_M + 1}{(r_m/r_M)\sqrt{d_m/d_M} + 1}\right)_j^{2} + 1 - n\right] \qquad (4.69) $$

In order to check the experiment with periclase classification considered earlier using Eq. (4.69), the dispersoid flow densities were calculated for the following monofractions: dm 5 0:40 mm; ρn 5 2:29 kg=m3 dm 5 0:25 mm; ρn 5 2:06 kg=m3 dm 5 0:17 mm; ρn 5 1:70 kg=m3

The obtained results testify to an insignificant change in the dispersoid flow density exerting influence on separate narrow classes of particles. On the average, in the present case, ρn 5 2:0 kg=m3 5 const can be accepted for all monofractions as the first approximation. It should be noted that for materials whose granulometry is not sharply different, the difference in the average flow densities is insignificant. For more precise calculations (or materials with extremely different compositions), it can be recommended to use expression (4.69) for each monofraction. The second important question at the transition to a dispersoid is the evaluation of its structure—the profile of its velocities distribution over the apparatus cross-section. In Eqs. (4.60) and (4.61), a dispersoid with local effective velocities u was implied. A transition to ρn was realized on the basis of the momentum transfer from fine particles to coarse ones. At that point, the flow density ρn was accepted as invariable over the apparatus cross-section (owing to the assumption of a uniform distribution of particles). As far as the local carrying capacity per unit area is characterized by the product ρn u2 ; its diagram profile must be affine or identical to the distribution of squared dispersoid velocities in the cross-section. Taking into account its affine transformation with the scale coefficient 0.5 and nonlinear transformation according to a square root, we obtain the diagram profile of effective velocities of dispersoid close to a parabolic one. Since the volumetric flow rate of a dispersoid must be equal to the volumetric flow rate of a continuous medium, the equation of the dispersoid velocities distribution must have the form f

 r 2 ; 5 2w 1 2 R R

r

(4.70)

which corresponds to the coefficient n 5 2: Obviously, all that has been stated above should be considered only as an approximate estimation, since it is based on a number of assumptions. Substituting n 5 2 into expression (4.58) and replacing ρ0 by ρn ; we obtain K 512

pffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi 0:4 3 B

(4.71)


The obtained expression adequately reflects B_max = 2.5 and agrees rather well with the description of experimental separation curves F_f(d) on the basis of the cascade model (for turbulent regimes). In the case of an arbitrary regime of flow around particles, we obtain from (4.52)

$$ K = 1 - \frac{\sqrt{Ar\cdot B}}{36 + 1.575\sqrt{Ar}} \qquad (4.72) $$

Formulas (4.71) and (4.72) are valid for ρ_0 = 1.2 kg/m³ and ρ_n = 2.0 kg/m³, which enter into the coefficients; the criteria Ar and B are expressed, as before, in terms of ρ_0.

4.7 Development of the methodology of separation processes computation

The totality of the obtained results allows an exhaustive computation of separation processes even for such complicated systems as cascade classifiers, without resorting to additional experimental coefficients. To date, at the level of the existing theories based mainly on empirical relationships, such a statement of the process computation problem could not even be formulated. The computation order is presented by the following scheme.

First, the possibility of plotting the separation curve F_f(x) in any regime follows directly from relations (4.71) and (4.72). The same expressions make it possible to take into account separation results depending on the number of sections of a cascade apparatus (classifier height) and the place of material feeding into the separation column. The influence of structural differences of the apparatuses on the fractionating process is taken into account by using various types of cascade models. In order to check the fractional separation curves for particles of various narrow size classes depending on the classification regime, respective calculations were performed for a cascade shelf apparatus consisting of four sections (z = 4) with the upper input of the initial material (i = 1). Experimental data on quartzite (ρ = 2650 kg/m³) separation in this apparatus are shown in Fig. 4-8. The comparison of the F_f(d_j, w) computation results using formulas (4.71) and (4.53) with experimental data is presented in the same figure.

According to Eq. (4.71), the velocity w_0 of the beginning of extraction of a fixed monofraction (intersection of the curve F_f(d_i, w) with the abscissa axis) is determined from the condition

$$ K = 0 = 1 - \sqrt{0.4\,B} $$

Hence, neglecting the value ρ_0 in comparison with ρ, we obtain

$$ w_0 = \sqrt{0.4\,\frac{\rho}{\rho_0}\,g d} $$


This makes it possible to predict the ratio of the velocity w_0 to the final precipitation velocity v_0 of a solitary particle of the given size class in the air medium. As is already known,

$$ v_0 = \sqrt{\frac{4}{3\lambda}\cdot\frac{\rho}{\rho_0}\,g d} \qquad (4.73) $$

At the coefficient of aerodynamic drag λ = 0.5, we obtain

$$ \frac{v_0}{w_0} = \sqrt{\frac{4}{3\times 0.5\times 0.4}} = 2.59 \qquad (4.74) $$

To check this ratio, the velocities v_0 were calculated using the expression (4.73) for particles of all the narrow size classes considered in the previous example, and the respective experimental values of w_0 were taken. Here the coefficient of aerodynamic drag in Eq. (4.73) was determined from the refined relationship

$$ \lambda = 0.5 + \frac{29.2}{\sqrt{Ar}} + \frac{430}{Ar} $$
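A sketch of this ratio check with the refined drag law (the Ar values used below are arbitrary illustrations, not from the book):

```python
import math

def drag_coefficient(Ar):
    """Refined drag relationship used with Eq. (4.73):
    lambda = 0.5 + 29.2/sqrt(Ar) + 430/Ar."""
    return 0.5 + 29.2 / math.sqrt(Ar) + 430 / Ar

def v0_over_w0(Ar):
    """Ratio of the final precipitation velocity (4.73) to the initial
    extraction velocity w0 = sqrt(0.4*(rho/rho0)*g*d), i.e.
    sqrt(4/(3*lam*0.4)); the density factors cancel out."""
    lam = drag_coefficient(Ar)
    return math.sqrt(4 / (3 * lam * 0.4))

# For large Ar the drag tends to 0.5 and the ratio approaches ~2.58-2.59;
# at smaller Ar the larger drag lowers the ratio.
```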

The comparison of the calculated dependence (4.74) with experimental data is presented in Fig. 4-15. A characteristic property of separation curves is their affinity, in particular, with respect to the regime. A consequence of this fact is the unitary character of the curve F_f(w/w_50) for all monofractions. A combined analysis of the cascade and structural models

FIGURE 4–15 Relation between the final precipitation velocity (v, m/s) and the velocity of initial extraction (2.59·w0, m/s) of various monofractions providing Ff(x) = 0.


confirms this fact. Thus, the distribution coefficient K_50 for any monofraction extractable by 50% is determined from the condition

$$ \frac{1 - \left(\dfrac{1-K_{50}}{K_{50}}\right)^{z+1-i}}{1 - \left(\dfrac{1-K_{50}}{K_{50}}\right)^{z+1}} = 0.5 \qquad (4.75) $$

We substitute the value of K_50 found from (4.75) into Eq. (4.71):

$$ K_{50} = 1 - \sqrt{0.4\,\frac{\rho}{\rho_0}\cdot\frac{g d}{w_{50}^{2}}} $$

Hence,

$$ 0.4\,\frac{\rho}{\rho_0}\,g d = (1 - K_{50})^{2}\, w_{50}^{2} $$

We substitute the obtained value of 0.4(ρ/ρ0)gd into expression (4.71) written for an arbitrary distribution coefficient:

$$ K = 1 - \frac{w_{50}}{w}\,(1 - K_{50}) $$

We use the latter in the regular cascade model:

$$ F_f = \frac{1 - \left[\dfrac{1}{1-K_{50}}\cdot\dfrac{w}{w_{50}} - 1\right]^{-(z+1-i)}}{1 - \left[\dfrac{1}{1-K_{50}}\cdot\dfrac{w}{w_{50}} - 1\right]^{-(z+1)}} \qquad (4.76) $$
  The obtained expression (4.76) satisfies the unitary character of the curve Ff w=w50 ; for whose plotting it is necessary to solve in turn Eq. (4.75), and then (4.76). In particular, for the considered example ðz 5 4; i 5 1Þ; we obtain from Eq. (4.75) K50 5 0:342; 1 2

1 5 1:519: K50

In this case, the dependence (4.76) takes the form  Ff

w w50





4 1 2 1:5191w 21 w50 5  5 1 w 1 2 1:519 w50 21

(4.77)


FIGURE 4–16 Fractional separation (Ff, %) dependence on the relative velocity w/w50 for monofractions 0.375, 0.75, 1.5, 2.5, 4.0, 6.0, and 8.5 mm: O, experimental point; ___, theoretical curve.

The comparison of the results of calculations using Eq. (4.77) with experimental data is presented in Fig. 4-16.

Similarly to the previous result, the unitary character of the curve F_f(d/d_50) for all velocities is a consequence of the affinity of the separation curves F_f(x). Using the solution of Eq. (4.75) with respect to K_50 in (4.71), we obtain for an arbitrary regime

$$ K_{50} = 1 - \sqrt{0.4\,\frac{\rho}{\rho_0}\cdot\frac{g d_{50}}{w^{2}}} \qquad (4.78) $$

$$ \frac{(1-K_{50})^{2}}{d_{50}} = 0.4\,\frac{\rho}{\rho_0}\cdot\frac{g}{w^{2}} \qquad (4.79) $$

Passing to an arbitrary distribution coefficient, we obtain

$$ K = 1 - (1 - K_{50})\sqrt{\frac{d}{d_{50}}} \qquad (4.80) $$

Then the cascade model takes the form

$$ F_f\left(\frac{d}{d_{50}}\right) = \frac{1 - \left[\dfrac{(1-K_{50})\sqrt{d/d_{50}}}{1-(1-K_{50})\sqrt{d/d_{50}}}\right]^{z+1-i}}{1 - \left[\dfrac{(1-K_{50})\sqrt{d/d_{50}}}{1-(1-K_{50})\sqrt{d/d_{50}}}\right]^{z+1}} \qquad (4.81) $$

Experimental verification of the dependence (4.81) for z = 4, i = 1 is presented in Fig. 4-17.
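A minimal sketch of (4.80) combined with the cascade model, using K50 = 0.342 from the worked example:

```python
def ff_size(d_rel, z, i, k50):
    """Unified curve Ff(d/d50), Eq. (4.81): K = 1 - (1 - K50)*sqrt(d/d50),
    then the regular-cascade formula with sigma = (1 - K)/K."""
    k = 1 - (1 - k50) * d_rel ** 0.5
    if k <= 0:                      # coarse enough that nothing is extracted
        return 0.0
    sigma = (1 - k) / k
    if abs(sigma - 1) < 1e-12:
        return (z + 1 - i) / (z + 1)
    return (1 - sigma ** (z + 1 - i)) / (1 - sigma ** (z + 1))

# With K50 = 0.342 (z = 4, i = 1): Ff(1) = 0.5, and Ff falls for coarser d.
```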


FIGURE 4–17 Fractional separation (Ff, %) dependence on the relative coarseness d/d50: O, experimental point; ___, theoretical curve.

FIGURE 4–18 Fractional separation (Ff, %) dependence on the Froude criterion (Fr × 10³): O, experimental point; ___, theoretical curve.

It is noteworthy that in a similar way we can prove the unitary character of the curves F_f(d/d_50) and F_f(w/w_50) for an arbitrary value of fractional separation. The experimentally established universality of the curve F_f(Fr) for various regimes and various monofractions follows directly from the consideration of the structural and cascade models. In fact, for a concrete apparatus (z, i) and density of the material particles (ρ), fractional extraction is unambiguously determined by the parameter gd/w². The comparison of the theoretical curve using Eqs. (4.71) and (4.52) with experimental data (z = 4, i = 1, ρ = 2500 kg/m³) is presented in Fig. 4-18.


To check the structural model correspondence to the empirical dependence, we express the value of the parameter Fr_50 using the relation (4.52):

$$ K_{50} = 1 - \sqrt{0.4\,\mathrm{Fr}_{50}\,\frac{\rho-\rho_0}{\rho_0}} $$

Hence,

$$ \mathrm{Fr}_{50} = \frac{(1-K_{50})^{2}}{0.4}\cdot\frac{\rho_0}{\rho-\rho_0} $$

Taking into account that in the present example K_50 = 0.342, we obtain

$$ \mathrm{Fr}_{50} = 1.08\,\frac{\rho_0}{\rho-\rho_0}, $$

which is close to the experimental correlation. On the whole, all the examined examples convincingly testify to the prevalence of the flow structure and of statistical regularities in the process of gravitational classification. The merits of this approach are its simplicity and its satisfactory agreement with the principal experimental dependencies accumulated so far for the process under study.
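The numerical check of the coefficient, as a one-line sketch:

```python
def fr50(k50, rho, rho0=1.2):
    """Fr50 from K50 = 1 - sqrt(0.4*Fr50*(rho - rho0)/rho0):
    Fr50 = (1 - K50)**2 / 0.4 * rho0/(rho - rho0)."""
    return (1 - k50) ** 2 / 0.4 * rho0 / (rho - rho0)

# (1 - 0.342)**2 / 0.4 = 1.08, i.e. Fr50 = 1.08*rho0/(rho - rho0).
```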

4.8 Definition of generalizing invariants for all separation regimes

Taking into account the flow structure, we can approach more impartially the prediction of fractionating results in separate apparatuses differing in their design features. Without going into the extremely complicated pattern of flow formation in real apparatuses, at the coarsest examination one should single out three peculiarities characteristic of a moving flow that can be connected with the apparatus design:
• the character of the change in the continuous-phase velocity field along the apparatus height;
• the presence of stagnant zones and the degree of filling of the apparatus cross-section by the moving flow;
• the character of the material motion in the first phase of the process (in the feeding section), the intensity of its interaction with the flow, and the internal elements promoting concentration equalization, with a decrease of skips and downfalls.

Thus, for instance, the velocity field of a continuous medium can be either uniform or nonuniform along the height. The operation of hollow (equilibrium) apparatuses with a constant cross-section is the closest to the first case. A uniform velocity field stipulates the same regime of flow interaction with particles at any level of the cross-section and predetermines


the distribution coefficient invariable over the apparatus height. A model of the regular cascade satisfies such conditions of the process organization most completely. The intensity of medium interaction with particles depends not only on the velocity field nonuniformity over the apparatus height, but also on its heterogeneity in the cross-section. Thus, for an apparatus of circular cross-section, the structural model takes into account the influence of the transverse heterogeneity of the flow. At that, it should be evident that the degree of filling of the apparatus cross-section by the moving continuous phase is 100%. The situation is different in apparatuses of square and rectangular cross-section: in their corners, stagnant zones are formed, which reduce to zero the influence of the medium on the removal of particles. If we assume, as an approximate evaluation, that the zero-flow-velocity line in an apparatus with a square cross-section is the circumference inscribed into the square, then the degree of filling of such a cross-section by the ascending flow is

$$ C_{sq} = \frac{F_{cir}}{F_{sq}} = \frac{\pi}{4} $$

Since the distribution coefficient (proceeding from the structural model) is determined through the ratio of areas, it is necessary to introduce the correction coefficient C_sq = π/4 into its calculation for a square cross-section. Taking this into account,

$$ K_{sq} = \frac{\pi}{4}\,K_0, \qquad (4.82) $$

where K_0 is the distribution coefficient for an apparatus of circular cross-section. If we assume that an inscribed ellipse is the zero-velocity line for an apparatus of rectangular cross-section, then expression (4.82) will also be valid for determining K_rec, since

$$ C_{rec} = \frac{F_{el}}{F_{rec}} = \frac{\pi}{4} \qquad (4.83) $$

Expression (4.83) for square and rectangular cross-sections is recommended only as a first approximation, since the actual degrees of filling will be somewhat greater. Finally, the character of the material motion at the first stage of the process, in the absence of intense interaction of particles with internal elements of the apparatus and in the presence of stagnant zones, can lead to an appreciable downfall of the main mass of particles from the initial level of their input. The most favorable conditions for the realization of a skip are, apparently, characteristic of hollow apparatuses of square and rectangular cross-sections. In the light of the above, an attempt was made to perform a quantitative prediction of the results of the fractionating process for a number of apparatuses of various designs. The results of calculations in comparison with experimental data are presented in the respective figures.


FIGURE 4–19 Fractional separation (Ff, %) dependence of materials of various densities on the parameter B (monofractions d = 0.875, 0.625, 0.400, 0.250, 0.170, and 0.070 mm): O, experimental point; ___, theoretical curve.

FIGURE 4–20 Results of periclase (ρ = 3600 kg/m³) separation in an equilibrium apparatus of circular cross-section: O, experimental point; ___, theoretical curve.

Fig. 4-19 shows the theoretical curve and experimental values for the classification of periclase with the density ρ = 3600 kg/m³ in an equilibrium apparatus of circular cross-section. The number of conventional sections is z = 9, and the feeding section is i = 6. The consumed material concentration is μ = 1.5 kg/m³. The computation was performed using the regular cascade model and the structural model according to Eqs. (4.52) and (4.71). Fig. 4-20 shows the dependence F_f(B) during the classification of quartzite with the density ρ = 2650 kg/m³ in an equilibrium apparatus of square cross-section. The number of


conventional sections is z = 6, and the feeding section is i = 3. The consumed material concentration is μ = 2 kg/m³. A regular cascade model with a skip of 1.5 conventional sections was accepted for the computation:

$$ F_f = \frac{1 - \sigma^{2.5}}{1 - \sigma^{7}} $$

The distribution coefficient is determined with a correction for the square cross-section according to (4.82):

$$ K_{sq} = \frac{\pi}{4}\left[1 - \sqrt{0.4\,B}\right] $$

Maximal deviation of the theoretical curve from experimental points does not exceed 15%.
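A sketch of this computation (the clamping of K at zero for B ≥ 2.5 is my addition):

```python
import math

def ff_square_apparatus(B):
    """Fig. 4-20 computation scheme: regular cascade with a skip of 1.5
    sections, Ff = (1 - sigma**2.5)/(1 - sigma**7), with the square-section
    correction K_sq = (pi/4)*(1 - sqrt(0.4*B)), Eq. (4.82)."""
    k = (math.pi / 4) * (1 - math.sqrt(0.4 * B)) if B < 2.5 else 0.0
    if k <= 0:
        return 0.0
    sigma = (1 - k) / k
    if abs(sigma - 1) < 1e-12:
        return 2.5 / 7
    return (1 - sigma ** 2.5) / (1 - sigma ** 7)

# Ff decreases monotonically as the parameter B grows.
```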

4.9 Correlation of structural and cellular models of the process

For a cell, Eq. (4.32) can be written as

u_r − v_r = u_r·√(4B/(3λ)),

since here the hydrodynamic situation in the flow is examined at practically one point. Hence,

1 − v_r/u_r = √(4B/(3λ)),

or

v_r/u_r = 1 − √(4B/(3λ))

According to (4.73), we can write for this case

v_r/u_r = v_r/w = k

Then the limiting expression for the extraction of coarse and fine particles out of the cell can be written as

f(E) = e^((τ − E)/χ) = e^( ρ·v_r²/(ρ0·w²) − g·d·(ρ − ρ0)/(w²·ρ0) )

Hence, we obtain

f(E) = e^( (ρ/ρ0)·k² − B )

Since k = 1 − √(4B/(3λ)), this dependence can finally be written as

f(E) = A·e^(−φ(B)),

where A is a constant, A = e^(ρ/ρ0), and φ(B) is a certain function of the parameter B. This dependence completely corresponds to the empirical dependencies for real separation curves, which have been confirmed many times, but which have found a reliable theoretical substantiation only in this work.

For such processes as the separation of loose materials, similarity criteria are relationships that allow obtaining universal separation curves. Such a curve considerably simplifies the computation of the process results and its optimization, and makes the comparison of different separation facilities unbiased. For turbulent separation regimes, such criteria were found by purely theoretical means. For transient and laminar regimes of the medium motion, no parameters suitable for the generalization of separation curves had been found. Finding such parameters is extremely important for the development of the theory and practice of the separation of loose materials.

We illustrate these properties of separation curves by concrete examples. A set of experiments on fractionating a coarse-grained material, chromium oxide, was performed in an air cascade classifier with pour-over shelves at a consumed material concentration μ = 2.0–2.2 kg/m³. Separation was carried out in an apparatus with five cascade steps (z = 5), with the initial feed supplied to the third step counting from the top down (i = 3). The granulometric composition of the initial material is presented in Table 4–3. The experiments were performed at air flow velocities equal to 2.7, 3.0, 3.25, 4.15, and 4.75 m/s. As a generalizing criterion for this case, the previously found expression was used:

B = g·d·(ρ − ρ0)/(w²·ρ0)

Fig. 4–21 shows the graphic dependence Ff(x) = φ(B),

Table 4–3 Granulometric composition of chromium oxide (ρ = 3600 kg/m³).

Average particle size, d (mm):   0.025  0.0565  0.0815  0.13  0.18  0.258  0.358  0.515  0.815  1.3   2.05  3.75
Narrow class content, ri (%):    10.7   4.3     5.9     6.9   4.1   9.7    7.1    11.4   11.2   11.1  14.3  3.2
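For orientation, the criterion B can be evaluated directly for the narrow classes of Table 4–3 at the experimental air velocities. This is a sketch under stated assumptions: the air density ρ0 = 1.2 kg/m³ is our assumption (it is not given in this excerpt), and the loop merely shows the spread of B values that are then plotted on the universal curve.

```python
G = 9.81      # gravitational acceleration, m/s^2
RHO = 3600.0  # chromium oxide density, kg/m^3 (Table 4-3)
RHO0 = 1.2    # air density, kg/m^3 (assumed)

def criterion_B(d, w, rho=RHO, rho0=RHO0, g=G):
    """Generalizing parameter B = g*d*(rho - rho0) / (w^2 * rho0)
    for a particle of size d (m) in a flow of velocity w (m/s)."""
    return g * d * (rho - rho0) / (w ** 2 * rho0)

# Narrow-class sizes from Table 4-3 (mm) and the velocities used
sizes_mm = [0.025, 0.0565, 0.0815, 0.13, 0.18, 0.258,
            0.358, 0.515, 0.815, 1.3, 2.05, 3.75]
velocities = [2.7, 3.0, 3.25, 4.15, 4.75]  # m/s

# Every (d, w) pair maps onto one abscissa of the curve Ff(B)
for w in velocities:
    B_values = [criterion_B(d * 1e-3, w) for d in sizes_mm]
    print(f"w = {w:>4} m/s: B spans {B_values[0]:.3f} ... {B_values[-1]:.1f}")
```

The design point of the criterion is visible here: B grows with particle size and falls with flow velocity, so fine classes at high velocities and coarse classes at low velocities land on the same curve.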

FIGURE 4–21 Ff(x) = f(B) dependence for chromium oxide: O, experimental point; ___, theoretical curve.

Table 4–4 Granulometric composition of aluminum powder (ρ = 2700 kg/m³).

Average particle size, d (μm):  10   25   40    56.5  71.5  90   112.5  142.5  180  >180
Content of class, ri (%):       3.6  7.2  17.0  8.2   7.8   6.7  6.5    7.5    7.8  27.7

obtained on the basis of the performed experiments for all flow velocities and all narrow classes of particles. On the basis of this figure, we can conclude that the parameter B in this case leads to a universal curve. As another example, we examine the separation of a fine-grained material, aluminum powder, in a cascade shelf apparatus consisting of 10 stages (z = 10; i = 5) at a consumed material concentration μ = 1.5–2.2 kg/m³. The granulometric composition of the aluminum powder is shown in Table 4–4. Experiments were performed at air flow velocities equal to 0.27, 0.36, 0.51, 0.9, 1.32, and 1.7 m/s. An analogous dependence

Ff(x) = φ(B)

for this group of experiments is shown in Fig. 4–22. It follows that the parameter B does not ensure a universal curve in this case. The cause, obviously, is that in the first case turbulent flow around the particles takes place, while in the second case, for smaller particles and lower flow

FIGURE 4–22 Ff(x) = f(B) dependence in the case of aluminum powder separation (curves for w = 0.27, 0.36, 0.51, 0.9, 1.32, and 1.7 m/s).

velocities, the flow around the particles occurs in a laminar or transient regime. Therefore we will also try to find generalizing parameters for these regimes, from the standpoint of the results obtained during the examination of the structural and cascade models of the process. It was obtained that for the laminar flow regime, the value of the distribution coefficient is

K = 1 − √(Ar·B)/36     (4.84)

In the general case, this parameter is determined by the expression

K = 1 − n·√(Ar·B) / [(n + 2)·(18 + 0.61·√Ar)]     (4.85)

As a result of numerous researches, it was established that in transient and turbulent regimes of two-phase flows, the profile of the flow diagram is somewhat stretched in its central region and approaches the parabolic one; that is, under certain assumptions, n = 2 can be accepted for all regimes. Then, for an arbitrary regime of flow around particles, we can write

K = 1 − √(Ar·B) / (36 + 1.22·√Ar)     (4.86)

In turbulent regimes, Ar acquires high values, so the 36 in the denominator can be neglected, and for these regimes

K = 1 − √(0.4·B)     (4.87)

An interesting conclusion can be made from expression (4.87). The generalizing parameter in it is unambiguously connected with the distribution coefficient K determined through the flow structure. From this expression, as well as, by analogy, from (4.84) and (4.86), new criteria for gravitational classification can be formulated:
• for turbulent flow regimes:

H1 = 1 − K = √(0.4·B)     (4.88)

• for laminar flow:

H2 = √(Ar·B)/36     (4.89)

• for an arbitrary regime of flow around particles:

H3 = √(Ar·B) / (36 + 1.22·√Ar)     (4.90)

4.10 Generalizing criteria

We apply the obtained criteria to the processing of experimental data. The experimental data on aluminum powder separation shown in Fig. 4–22, recalculated using Eq. (4.89), lead to the dependence shown in Fig. 4–23, and those recalculated using Eq. (4.91) lead to the dependence shown in Fig. 4–24. The chromium oxide separation presented in Fig. 4–21, recalculated using Eq. (4.89), is shown in Fig. 4–25. It follows from these examples that all these criteria give a generalizing effect.

Consider the dependence (4.89). It is clear that the value 36 in the denominator is a scale factor, and the physical sense of the parameter lies in its numerator. Analyzing it:

√(Ar·B) = g·d²·(ρ − ρ0) / (μ·w),

where μ here is the dynamic viscosity of the medium. We thus obtain a new dimensionless criterion valid for laminar flow-around regimes. These regimes cover most wet separation processes and the dry separation of very fine particles.

FIGURE 4–23 Ff(x) = f(H) dependence in the case of aluminum powder separation.

FIGURE 4–24 Ff(x) = f(Ba) dependence in the case of aluminum powder separation.

Thus, we have managed to formulate generalizing parameters that allow obtaining universal separation curves:
• for turbulent flow around particles:

B = g·d·(ρ − ρ0) / (w²·ρ0)

FIGURE 4–25 Ff(x) = f(H) dependence in the case of chromium oxide separation (curves for d = 0.13, 0.2575, 0.515, 0.815, and 2.05 mm).

• for laminar flow around particles:

Bα = g·d²·(ρ − ρ0) / (μ·w)     (4.91)

• for any regime of flow around particles:

H = √(Ar·B) / (36 + 1.22·√Ar)     (4.92)

To analyze the relationship between the criteria B and Bα, we divide one by the other:

Bα/B = g·d²·(ρ − ρ0)·w²·ρ0 / (μ·w·g·d·(ρ − ρ0)) = d·w·ρ0/μ = Re     (4.93)

This relationship gives the Reynolds number determined for a particle. Thus, we can write

Bα = B·Re     (4.94)

This relationship has a precise physical meaning. In regimes of turbulent flow around particles, the Reynolds criterion degenerates, and the criterion B alone is sufficient. In the laminar region, where the particle drag depends directly on Re, the product B·Re is necessary for the generalization of separation curves. Thus, the similarity criteria for obtaining universal separation curves in the various regimes, laminar, transient, and turbulent, have been determined.
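The identity (4.94) can be checked numerically. The sketch below uses illustrative particle and air properties of our own choosing (a 100-μm aluminum particle in air); the functions simply transcribe the definitions of B, Bα, and Re given above.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def B_turbulent(d, w, rho, rho0):
    """Turbulent criterion B = g*d*(rho - rho0) / (w^2 * rho0)."""
    return G * d * (rho - rho0) / (w ** 2 * rho0)

def B_alpha(d, w, rho, rho0, mu):
    """Laminar criterion B_alpha = g*d^2*(rho - rho0) / (mu * w),
    with mu the dynamic viscosity of the medium."""
    return G * d ** 2 * (rho - rho0) / (mu * w)

def reynolds(d, w, rho0, mu):
    """Particle Reynolds number Re = d*w*rho0 / mu."""
    return d * w * rho0 / mu

# Illustrative values (assumed): 100-um aluminum particle in air
d, w = 100e-6, 0.5             # m, m/s
rho, rho0, mu = 2700.0, 1.2, 1.8e-5

lhs = B_alpha(d, w, rho, rho0, mu)
rhs = B_turbulent(d, w, rho, rho0) * reynolds(d, w, rho0, mu)
assert math.isclose(lhs, rhs)  # Eq. (4.94): B_alpha = B * Re
print(f"B = {B_turbulent(d, w, rho, rho0):.3f}, "
      f"Re = {reynolds(d, w, rho0, mu):.2f}, B_alpha = {lhs:.2f}")
```

The assertion holds identically, since Bα and B·Re are algebraically the same product of factors; the small Re obtained for these values also confirms that such a particle sits in the laminar-transient range, where Bα rather than B is the generalizing parameter.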


The most interesting fact is that each of the universal curves can be obtained from a single experiment. On the basis of the analysis of the initial material and the separation products, we can conclude that each experiment contains all the information about both the process and the separator design. All this has been obtained thanks to the application of statistical analysis to such a complicated physical phenomenon as a two-phase flow in critical regimes. It should be emphasized that the crucial parameter that has allowed describing such a complicated phenomenon is entropy. As it has turned out, its role here is universal.

5 Place of the entropy parameter in technical sciences and other branches of knowledge

5.1 Problematic character of entropy

In modern natural science, entropy is sometimes assigned the role of the only parameter that evaluates the character and depth of the transformations of matter and energy occurring in all material objects and processes. Apparently, this fact, as well as some contradictory aspects of its physical meaning, has led to a situation where in the global scientific literature there is no other notion causing as many debates, discussions, collisions of opinions, and speculation. Thousands of articles and books have been devoted to this notion. The heat of the debate around the notion of entropy periodically calms down, but then restarts with new vigor. However, this has not yet yielded an unambiguous explanation of the physical meaning of entropy. The situation established in the scientific literature has even created a negative attitude toward the use of this parameter. The main objections against entropy reduce to the claim that a theory tolerating internal contradictions and paradoxes contains a certain flaw, and that equivocal definitions are incomplete. Therefore, in some publications, a suggestion to abandon the use of this parameter altogether has been substantiated. Sometimes entropy is even denounced as a "cancerous neoformation of thermodynamics" destroying the orderliness and consistency of the global scientific outlook. But at the same time, the notion of entropy has stepped over the boundaries not only of thermodynamics but also of physics, and has penetrated into many fields of modern natural science. It is successfully applied not only in sciences paying main attention to stability, order, equilibrium, and uniformity, but also in those dominated by diversity, instability, nonequilibrium, and nonlinearity of principal relations.
As opposed to those denying entropy, the notion of it as a generalizing invariant of up-to-date natural science has little by little started taking shape. Apparently, such a contradictory situation with this parameter emerged as a result of an incomplete and inexact understanding and an unintended distortion of the basic physics of entropy, which we have managed to overcome in part in the earlier chapters of this book. A clear account of the obtained results removes most of the ambiguities connected with the physical meaning of entropy. It seems impossible to examine in detail all objections against the impartiality of entropy; a separate book would be necessary for that.

Entropy of Complex Processes and Systems. DOI: https://doi.org/10.1016/B978-0-12-821662-0.00005-8 © 2020 Elsevier Inc. All rights reserved.

Therefore


we will dwell on the main problems emerging in the current technical literature in connection with this parameter.

Entropy usually enters into thermodynamic relations in the form of the product TH, the quantity Helmholtz called bound energy. This parameter is usually interpreted as follows. The entropy H is considered a quantitative measure of chaotic motion, whose relation to the bound energy TH is the same as that of the average modulus of the particle momentum mv to its doubled kinetic energy mv². Hence, it is concluded that entropy is a kind of thermo-momentum, that is, the amount of thermal motion in a system, which has lost its vectorial nature due to the chaotic character of this motion. If this were the case, contradictions would arise in many cases. In particular, an example of cutting metal is considered, where a part of the spent energy passes into the potential energy of the cutting waste. The work in this case therefore turns out to be greater than the released heat. The same phenomenon is observed in the crushing of mineral materials, in which a part of the energy passes into the surface energy of the newly formed particles. In this case, proceeding from the total differential of the bound energy change, one can write

d(TH)/dt = T·(dH/dt) + H·(dT/dt)

It follows that entropy growth reflects only a part of the changes of state connected with irreversibility, and that entropy is only one of the independent energy arguments. On this basis, a conclusion is drawn that it is necessary to substitute the principle of entropy growth with a more general criterion of irreversibility. All this argumentation fails after the elucidation of the fact that entropy, as we have proved, is a dimensionless value, and not energy. Therefore there is no duality in the given examples, and entropy characterizes only the irreversibility of the phenomena taking place. Accounting for the complex properties of entropy revealed in this work removes all argumentation connected with the assertion of its inadequacy as a criterion of the equilibrium of thermodynamic systems, as well as many other misunderstandings arising from the analysis of phenomena without taking into account the difference between the dynamic and static entropy components and its dimensionless form. An attentive analysis of modern thermodynamics shows that the dynamic and static entropy components were taken into account long ago, for example, in the generalized Gibbs-Duhem equation. In the most general case, for a multicomponent thermomechanical system whose equilibrium state is characterized by the volume V, energy U, numbers of moles of substances Nk, and temperature T, the coupling equation is written as follows:

dH = dU/T + (p/T)·dV − Σk (μk/T)·dNk,

where p and μk are the pressure and the chemical potential of the k-th substance, respectively.


In this expression, the last summand is none other than the static entropy component, characterizing the composition and the change in the amount of substances participating in the process. The two other terms on the right-hand side represent the dynamic component, characterizing the transformations occurring in the process. This difference was not emphasized earlier, and no proper attention was paid to it. Therefore, when considering some aspects of entropy, the nonseparated perception of these notions led to contradictions. For example, the analysis of the Gibbs paradox taking this difference into account has shown that the paradox does not exist, since at the mixing of various components the static entropy component grows, while the dynamic component remains constant.

In the debates about entropy, immediately after the Gibbs paradox comes the question of the connection of entropy with the notion of the "thermal death of the Universe," which was formulated by Clausius and was refuted neither by the statistical physics of Boltzmann and Gibbs nor by quantum physics. From the standpoint of pure logic, the problem is extremely confused. The thesis of thermal death was formulated as a consequence of the second law of thermodynamics on the basis of the laws of human logic. It is known that these laws are not adequate to the "logic" of physical and chemical natural phenomena. Otherwise, the laws of nature established by mankind would never have been refined or changed, but this happens permanently. In fact, we often witness incomprehensible processes preventing the growth of the global entropy. In living nature, entropy decreases at the expense of the absorption of external solar energy, which does not contradict the second law of thermodynamics. Such phenomena are observed even in nonliving nature.
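The point about the static component and the Gibbs "paradox" can be illustrated with the ideal mixing entropy. This is a sketch of the standard result (per mole, in units of the gas constant R), not the book's notation: the value depends only on the mole fractions, never on which gases are mixed.

```python
import math

def static_mixing_entropy(fractions):
    """Static (composition) entropy component of an ideal mixture,
    H = -sum_i x_i * ln(x_i), per mole and in units of R.
    It depends only on the mole fractions x_i, not on the physical
    or chemical nature of the components -- which is why the entropy
    jump at mixing is the same for any pair of distinct gases."""
    assert abs(sum(fractions) - 1.0) < 1e-9, "mole fractions must sum to 1"
    return -sum(x * math.log(x) for x in fractions if x > 0.0)

# An equimolar binary mixture gives H = ln 2 whatever the gases are,
# while a single pure component ('mixing' a gas with itself) gives 0.
print(static_mixing_entropy([0.5, 0.5]))  # 0.6931...
print(static_mixing_entropy([1.0]))       # 0.0
```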
As has been shown, a phenomenon of "stable nonequilibrium" in the extraction of particles of different sizes is observed in separation processes under the action of the flow. This nonequilibrium requires only the constancy of the criterion B, which was set forth in detail earlier in this book. The consideration of artificially abstracted processes and the groundless extension of the obtained conclusions to all existing processes has led to the absurd conclusion of the end of the world. The paradox of "heat death," as well as its numerous analogues (the death of the Universe in a "black hole"), looks like the result of antiscientific thinking based on incorrect generalizations. These generalizations can be recognized as correct only for a Universe consisting of elastic balls, which is placed at the basis of statistical physics. However, it is impossible to build the world from such balls, no matter how small they are. The structure of matter is much more complicated than these primitive notions, as is totally clear today. It is not yet clear where the energy comes from for the entropy decrease observed at all levels of the Universe, although the existence of the Universe clearly demonstrates it. Science has no answer to this question. Only one thing is clear: the processes resisting the growth of the world's entropy definitely do not belong to modern thermodynamics. They are obviously ruled by other laws. Therefore the thesis of thermal death represents a false paradox, which will possibly find an explanation in the future in the deepest regularities of the development of the Universe. The force of the entropy parameter consists in the fact that it gives the most complete generalized pattern of a system's state and change, even in areas far from thermodynamics.


The value of this parameter can be explained by way of a simple and clear example: the state of physiological health. Health estimations can be qualitative, quantitative, statistical, specific, or others. A doctor is a specialist in health evaluation. Assume a group of doctors of various specializations must present a quantitative evaluation of the health of a concrete person in a reference system normalized to unity. Naturally, each will be a subjective evaluation by a specialist based on his personal experience, laboratory analyses, and investigations using devices of different levels of perfection. A general united evaluation of a concrete person's level of health can be given by the static entropy obtained from the particular evaluations. With its help, one can follow the changes in the health of this concrete person during a limited time or over all of his life. The usefulness of such an evaluation is undisputable, especially for determining the possibility of this person accomplishing a concrete kind of activity, that of a pilot, astronaut, miner, lifeguard, etc. This does not mean that particular evaluations are not needed. They will be more informative and useful when performing concrete medical treatment. Using the generalizing ability of such entropy, one can evaluate the health state of the population of a city, a country, a separate nationality, or some other category of the population. These evaluations will be extremely useful, because they will allow for making decisions concerning the financing of public health, building additional hospitals, increasing the basic rates of medical staff, etc. In all cases where the entropy parameter is applied, it is possible to obtain a statistically substantiated idea of the system, process, or phenomenon under study as a whole, despite its complexity, chaotic condition, and ambiguity.

5.2 General properties of entropy

Randomness, uncertainty, blind destiny: all these notions carry a certain negative sense. It seems that they are based on mysticism, being instruments in the hands of fortune-tellers, magicians, shamans, and all kinds of charlatans. It was considered that what happens by chance does not have a definite meaning. However, in due time it became clear that scientific research into chance is possible. This analysis led to the calculus of probability, which was for a long time considered a secondary part of mathematics. The main idea in the study of randomness was the transition from uncertainty to an almost complete certainty at the observation of a long series of events or of changes in large systems. The disorder called molecular chaos is, in essence, the randomness enclosed in any volume of gas, liquid, or solid. As for the amount of chaos or randomness in the processes under study, the answer is given by statistical mechanics, created at the turn of the 20th century. The quantity of randomness in any substance or process is measured by entropy. This parameter has made it possible to formalize randomness. It has become a necessary condition for understanding the rather complex and intricate laws governing the behavior of thermal and other complicated processes. Classical thermodynamics was long ago described as a queen of sciences. It is a remarkable scientific system whose details do not yield to Newton's classical mechanics either in beauty or in brilliant perfection. It has acquired such recognition due to the width and universality of


its basis: the first and second laws of thermodynamics are obeyed by practically everything in existence. Therefore thermodynamics was destined to play the role of a "launch pad" in attempts to understand the main laws of nature. The notion of entropy was introduced into science by Clausius in the mid-19th century. Thirty to forty years later, a statistical interpretation of entropy, accomplished by Boltzmann and supplemented by Gibbs, was suggested. It was given the dimension of energy, as thermodynamic entropy. The analysis performed in this book has made it possible to determine that thermodynamic and statistical entropies are dimensionless parameters, by no means connected with energy. The continuation of the research into this parameter has shown that at the mixing of different gases, the "Gibbs paradox" reflects only one thing: a measure of the uncertainty (chaotic condition) of the composition of the components to be mixed, which is unambiguously defined only through the probability of each component's content in the mixture. This explains the condition, strange at first sight, that the entropy jump is in no way connected with the physical, chemical, or other properties of the components of the mixture. It is this nonexistent connection which has been an object of persistent search for many years. Understanding these ideas, as well as an overall analysis of the entropy parameter, allows for interpreting the latter somewhat differently in comparison with generally accepted notions. Clear terminology in science sometimes plays a determining role in the understanding of the physics of a phenomenon. By way of example, we can mention the name of the proportionality coefficient between the system energy and temperature. Strictly speaking, thermodynamic temperature is a natural measure of some abstract unit energy related to a unit mass and unit heat capacity. Having analyzed the properties of the new coefficient, Clausius gave it a new name: entropy.
It is unknown what the fate of this coefficient would have been without this name, and whether it would have occupied its place in science. The well-grounded disentanglement of the Gibbs paradox accomplished in this book is of significant applied importance. First, the duplex character of the internal entropy of any system, accumulating the uncertainty of the dynamic processes and the uncertainty of the composition of their elements, is demonstrated. It can readily be seen that the composition entropy sometimes exceeds the dynamic component by an order of magnitude. Second, it is proved that an entropy jump takes place in complicated systems even without their mixing. Third, on the basis of the solution of the paradox, the possibility of a totally new approach to the evaluation of many complicated systems is formulated, taking into account the change in their composition in dynamic processes. Thereby, the possibility opens up of a quantitative evaluation of many compound systems and objects, as well as of the analysis of their change. At first, great perplexity was caused by the property of entropy to change spontaneously in only one direction: to grow constantly. A broad discussion of this problem was held in scientific circles before the validity of this property of entropy was agreed upon. This means that any system can evolve in a natural way so that the entropy of this system will


grow, while the system itself tends to equilibrium. Therefore it is accepted that in any case, even at a simple leveling of temperatures or pressures in various points of a system, or at the diffusion or mixing of gaseous components, the total entropy of the Universe increases. Classical thermodynamics does not use the notions of time and space. It admits only such notions as equilibrium, for which time does not exist, and uniformity, for which spatial extent is indifferent. To overcome these complications, Onsager suggested the thermodynamics of irreversible processes, which contains both time and space. In addition, this theory takes into account the effects of heat release by friction in irreversible and nonequilibrium processes. It was a revolutionary step of fundamental importance. The property of changing in only one direction makes entropy irreversible. Previously, this property was inherent only in time, but later the theory of an expanding Universe gave another example of irreversibility. A stream of time is an inherent aspect of the perception of the world. Now it has become clear that another inherent aspect of this perception is randomness. At the same time, it is ill-posed to speak about "determinism against randomness." There is no logical incompatibility between determinism and randomness. As a rule, the laws of nature have a deterministic character, but they are often revealed unpredictably. For example, the law of universal gravitation is deterministic, but the behavior of dust particles hovering in the air is unpredictable. In this respect, we should remember an important remark made by Poincaré that long-term unpredictability in complicated systems reconciles randomness and determinism. Science develops due to the creation of new concepts and new ideas in physics and natural science, and of new definitions in mathematics. As far back as Galileo Galilei, mathematics was described as the language of physics.
This can be expanded to the notion that "mathematics is the language of nature." If the world was created, then the Almighty, apart from everything else, is a great mathematician. The beginning of today's science dates to approximately the 17th century, when the problem of knowledge became connected with measurements and calculations. Such an approach radically changed human notions concerning the surrounding world. It was assisted by the generalization of the results of measurements of concrete changes in nature. For instance, on the basis of the performed measurements and calculations, Kepler identified the trajectories of the planets of our solar system as ellipses and formulated on this basis three laws of planetary mechanics. This invariable ellipse became one of the first universally recognized laws of nature in science. Because of Kepler's laws, it became possible to predict solar and lunar eclipses and other phenomena in astronomy. This first meeting of mankind with the laws of nature led to a great many results. The most important of these consisted in the understanding of a new idea: that nature speaks to us in the language of mathematics. Moreover, somewhat later the idea that any branch of knowledge contains as much science as it contains mathematics became firmly established. Then came a clear understanding of the difference between purely mathematical schemes and the objects of the real world.


Such mathematical objects as a line, a circumference, and a square are products of human thought. They are characterized by identity to themselves. The ratios between their elements remain unchanged. The ratio of the circumference length to its radius, or of the diagonal of a square to its side, is independent of the scale of the figure. Even this example shows that there is something observed in nature which can be identified with certain mathematical deductions. What remains unchanged can be accepted as rules for the observed object. There is truth in the idea that nature speaks the language of mathematics with us, but this language is characterized by peculiar and special features. The basis of mathematics is a number of axioms characteristic of each of its sections. The necessary sequence of constructions and conclusions is built on them by the methods of human logic. Within the frames of the accepted axioms, there are no boundaries to the application of mathematical methods. By changing the axioms, we can obtain other results of mathematical theories. Bourbaki formulated a standard for developing any mathematical theory, which consists of three fundamental parts: first, the language of the formal theory; second, the necessary minimal number of axioms; and third, the rules of derivation. The language of mathematics is cardinally different from natural languages. A mathematical language does not change in time, and each term has a single meaning within the frames of a concrete science. In a natural language, objects can be associated with various images under the influence of, for example, the expansion of human horizons. Within each mathematical theory, an artificial world of ideal objects is created, which remain the same over an infinite interval of time. If we come across some object A in derivations, and it then appears again in computations, this is a guarantee that the object A is the same.
This property of mathematical objects is in contradiction with the change of all things in the real world. To obtain such an abstract world, time is excluded from mathematical theories, and we obtain "a world of frozen things." To overcome this, a special mathematical definition of motion was introduced. It is based on changes in the coordinates, while the object A remains invariable inside them. Thus, mathematical motion is reduced to the rules of coordinate transformation. If some parameters are revealed which remain unchanged under various transformations of coordinates, these parameters are considered invariants. In physics, an invariant is a result which is repeated at various times of observation. Invariants of this kind are correlated with invariable laws of nature or are considered to be such laws themselves. Today most specialists agree that all sections of mathematics can be integrated in the form of projective geometry. This opinion became stronger after it was proved that besides integer-dimensional spaces, one can use fractional spaces, including all conceivable types of complicated structures. In recent years, the success in the geometrization of mathematics has been confirmed by research into fractals. At the same time, while deriving mathematical relations for complicated systems, one has to abstract away from many minor details of real processes. As a rule, this leads to incomplete adequacy of the resulting conclusions. In addition, this is promoted by the difference between human logic, by whose laws the entire system of mathematics was created, and the "logic," that is, the internal connection of the main regularities, of concrete real processes. Usually, this

258

Entropy of Complex Processes and Systems

discrepancy is compensated by using various correction coefficients in the methods of complicated systems computation. These coefficients are obtained by comparing computation results with the results of experiments carried out with processes under study. As a rule, to obtain such coefficients, one needs to perform long-term and rather complicated and expensive experiments. We have managed to overcome this problem in a somewhat different way. As shown in the earlier chapters of the book, we have developed a calculation methodology without any empirical correction coefficients for such a complicated process as a two-phase flow. The basis of this methodology was the determination of entropic invariants of the process. The calculation methodology obtained on their basis is almost completely adequate for experimentally obtained results. This is an indisputable achievement of the method we have developed. Up-to-date physical theories consist of separate, frequently mismatching regularities reminding us of the elements of a mosaic that must make up a general finished picture similar to a children’s puzzle. However, for the time being, such a picture is not complete. Describing mathematically diverse aspects of physical reality, physical theories are obtained. These theories are numerous, and they cover an immense variety of phenomena. Very frequently, the discrepancy between them leads to conceptual problems. At the same time, physics and natural science possess a fundamental unity, because they describe a unique physical reality of our world. However, until now there has been no united theory covering all aspects of this uniqueness. The main invariant in these intricate regularities describing natural processes is entropy characterizing any complicated system. It is known that in any spontaneous physical process entropy either remains constant or grows. Entropy increase is evidence of the fact that the process is irreversible. 
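The notion of an invariant under coordinate transformations, used above, can be made concrete with a minimal numeric sketch (a generic textbook illustration, not an example from this book): under a rotation of coordinates the positions of points change, but the distance between them does not.

```python
import math

def rotate(x, y, theta):
    """Rotate the point (x, y) by angle theta around the origin."""
    return (x * math.cos(theta) - y * math.sin(theta),
            x * math.sin(theta) + y * math.cos(theta))

def distance(p, q):
    """Euclidean distance between two points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

p, q = (1.0, 2.0), (4.0, 6.0)            # distance between them is 5
theta = 0.7                               # an arbitrary rotation angle
p2, q2 = rotate(*p, theta), rotate(*q, theta)

# The coordinates change, but the distance is an invariant of the rotation.
print(distance(p, q), distance(p2, q2))
```

Both printed values coincide (up to rounding), which is exactly what qualifies the distance as an invariant of this family of transformations.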
According to Nernst's law, at absolute zero the entropy of any closed system tends to zero. Therefore it is accepted that if the system's temperature and energy decrease, its entropy decreases also. This statement is interpreted unambiguously everywhere. It turns out that it is not so simple. For instance, Kittel believes that since the Sun permanently loses energy through radiation, its entropy constantly decreases. Probably this is true, but maybe it is not the case. As is known, thermonuclear synthesis permanently takes place in the Sun. As a result, helium atoms are formed from hydrogen atoms in immense quantities. Now we understand that besides the dynamic entropy component, there exists a static entropy, which depends on the ratio of components in the system. If at present the amount of hydrogen atoms in the composition of the Sun is much greater than the amount of helium atoms, this entropy component will grow due to the mentioned synthesis, and it is quite possible that the total entropy will also grow despite the radiation. If, however, the amount of helium is greater, this means that our star has started approaching collapse, with a general loss of entropy. We can consider it proved that entropy is not a physical parameter unambiguously connected with energy alone. Today we have a clear notion that entropy is a universal parameter related not only to thermodynamic processes, but characterizing all aspects of the transformations of the material world—changes in energy, volume, concentration, chemical composition, and conversions of the structure of substances.

Chapter 5 • Place of the entropy parameter in technical sciences

Hence, any system in nature—gas, liquid solution, solid, or living cell—is characterized by a certain entropy. Let us try to refine its physical meaning. For this purpose, it is necessary to consider the essence of this parameter from a somewhat different angle than the existing one. An isolated system spontaneously tends to achieve the greatest disorder, that is, the situation with the maximal entropy value. This is a key principle of modern thermodynamics. At the same time, it is typical of any nonequilibrium system that spontaneous evolution surely leads to macroscopic equilibrium. If gas in a closed vessel is at first mechanically or thermally agitated (i.e., its temperature and pressure differ at different points of the volume), then, as follows from experience, it spontaneously tends toward equilibrium. After a certain time interval, a macroscopic dynamic and thermal equilibrium is established in it (temperature and pressure are equalized over the entire volume). At the same time, the entropy of the system constantly grows and reaches its maximum when all macroscopic parameters are equalized over the entire volume. The entropy value shows itself in two ways with respect to the state of global equilibrium. In the example under study, after the establishment of equilibrium at the macroscopic level, equilibrium at the microscopic level does not occur, and total chaos remains—molecules move at random in all directions with mean velocities determined by the temperature and pressure in the system. In the beginning, the probability of the state of the entire system spontaneously grows, then reaches values close to unity, and entropy acquires the maximal value possible for the concrete conditions. Such an equilibrium can be considered disordered, or chaotic. Are global equilibrium systems with an ordered, nonchaotic equilibrium at the elementary level possible?
Obviously, we can give a positive answer to this question. As an example of such equilibrium, we can point to all of living nature. It is considered that the probability of equilibrium at an elementary level is vanishingly small, and the entropy of such a state is minimal. Such equilibrium is possible only at a constant, continuous consumption of a definite amount of energy by the system for its maintenance. Any biological organism consists of simple atoms of carbon, hydrogen, oxygen, nitrogen, and some others. These atoms do not form random mixtures. They are assembled into systems with an extremely high degree of organization. First of all, these are organic molecules—amino acids. Today about 20 different types of these are known. Various large macromolecules are formed from these molecules in strict succession, and each type possesses rather specific properties. For instance, amino acids in a definite connection are used for protein formation. The principle of entropy growth creates the impression that the physical world is approaching a situation characterized by ever-growing disorder. This means that for each spontaneously arising process, the predominant direction is a transition from a more ordered to a less ordered situation. Here a very essential problem arises—how and at what cost can a system be transferred into a less random and more ordered situation? For example, to what extent is the transformation of a random mixture of simple atoms into the complex and highly organized macromolecules constituting animals and plants possible? In other words, to what extent is the existence of living organisms possible? This problem is topical in technological processes, too. It can be formulated as follows: in what way does the internal energy of petroleum, coal, water, and gas transform into energy applied to a goal, for instance, turbine rotation or piston motion? In this case, naturally, an entropy decrease with respect to the initial state of the fuel takes place. The answers to these questions are not obvious, but thermodynamics provides them. The answer consists in the following: a system's entropy can be decreased only if the system interacts with other systems in such a way that, in the process of interaction, their total common entropy increases. Naturally, this is connected with excessive energy consumption with the use of fuel, or simply with the consumption of energy external to the considered system. If we imagine a situation where the working substance (a vapor or gas) in a heat engine is not discharged into the atmosphere after the refrigerator, but returned into the hot tank, this portion of the working substance must be heated up to the temperature corresponding to Q1, and its elevated entropy must be decreased to its initial value at the expense of compensating absorption of external energy. Hence, in this case too, an entropy decrease to the initial value can occur only at the expense of energy consumption. In living nature, equilibrium between two tendencies—synthesis and disintegration—is achieved at every level, and equilibrium of such a kind is extremely unstable. If, for example, for some reason, entropy in a gaseous system fluctuates from its maximal value toward a decrease, the process of chaotic motion of molecules spontaneously returns it to the maximal value.
In living nature, entropy fluctuation in the direction of growth does not occur spontaneously, and energy consumption is needed to return the system to its extremely unstable state with minimal entropy. For a long time, entropy was considered the most abstract physical quantity. After Shannon created information theory, entropy, which had previously been defined as a measure of a system's disorder or a measure of uncertainty, started being interpreted as a lack of our knowledge of the internal structure of a system. At that time, in many publications the idea was asserted that the two sides of entropy—energy and information—are not mutually exclusive but, on the contrary, represent two complementary aspects of the same quantity. However, it was not mentioned at all how these aspects are accumulated in the entropy value—by summation, multiplication, or some other method. Information theory, like statistical mechanics, deals with measuring the amount of chaos, in its case in transmitted messages. Therefore these two theories seem to be tightly bound with one another. As we have established, the internal entropy of a system estimates two characteristics of its state. One entropy component is determined by the totality of dynamic processes occurring in the system at the elementary level. The other component is determined by the content and composition of the system's components, and it is connected only with the probabilistic distribution of the composition. For one-component systems, the composition probability equals unity, and the composition entropy is zero.
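The static (compositional) component just described admits a direct numeric sketch. Assuming it is the Shannon measure −Σ pᵢ ln pᵢ over the component fractions (the function name and the sample mixtures below are mine, for illustration only), a one-component system gives zero, and the value peaks at equal fractions—which is also why fusion driving a hydrogen-rich star toward equal parts hydrogen and helium would raise this component:

```python
import math

def static_entropy(fractions):
    """Composition entropy (nats) of a mixture, given its component fractions."""
    total = sum(fractions)
    h = 0.0
    for f in fractions:
        p = f / total
        if p > 0.0:            # a zero fraction contributes nothing
            h -= p * math.log(p)
    return h

print(static_entropy([1.0]))          # one-component system: 0.0
print(static_entropy([0.5, 0.5]))     # equal binary mixture: ln 2, the maximum
print(static_entropy([0.9, 0.1]))     # strongly skewed binary mixture: below ln 2
```

The book leaves open how this static component combines with the dynamic one, so the sketch deliberately computes only the compositional part.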

The external reality surrounding us—inorganic nature, life, the Universe—is unpredictable in its realization. An idea has arisen that everything changes in a random and very complicated way, and that, therefore, it is practically impossible to judge the observed phenomena adequately. However, the ideas stated earlier allow us to hope that the entropy parameter can play the role of a compass in all this complexity of the surrounding world. Attention should be especially focused on the circumstance that an entropy decrease in any system (whether equilibrium or not, dynamic or static) can only be compulsory, with an expenditure of the amount of energy required for the concrete conditions. This happens, for instance, in the separation of any of the mixtures that we have considered. Other technological examples can be given, but the most visual example is living nature. Taking all this into account, we can try to formulate a somewhat expanded definition of the entropy parameter. Entropy is the most general cumulative numerical invariant of processes and their components of any nature and complexity, which has a single dimensionless measure and combines the uncertainty of the dynamic and static components of the system.

5.3 Development as growth of complexity

Modern scientific notions of nature identify its main components as follows: matter, energy, and their mutual transformations. These transformations generate entropy, which characterizes the probability of any complicated chaotic processes involving material objects. If we examine the development of the Universe over time, we can clearly note the main parameter of its change—a history of permanent growth in complexity. The world is arranged in such a way that the interaction of matter and energy entails the formation of material systems more complicated than the existing ones. The diversity, amount, and interaction of the basic components of concrete systems in their totality determine the complexity value at any level, from an atomic nucleus to a cluster of galaxies. Modern science looks for an explanation of the laws of nature from within nature itself. It has long been clear that scientific concepts on the whole represent a philosophical attitude that cannot be strictly proved. Therefore there exist, and will forever exist, various scientific schools giving various interpretations to the same facts of the surrounding reality. Usually, three kinds of complexity are singled out:
• Physical nonliving nature;
• Biological life;
• Human civilization.
We briefly analyze the main stages of the development of these complexities.

(a) Physical complexity

Physical nature has an immense scale and high significance. The mass of the Earth is on the order of 6 × 10²¹ tons, and the total biological mass on the Earth is estimated at about 10¹³ tons, that is, less than one millionth of a percent of the mass of the Earth. Comparison of the biological mass with the cosmos makes it absolutely insignificant. Physical matter has various levels of complexity, from separate atoms to galactic clusters. According to up-to-date ideas about the formation of the Universe, in the beginning there was an immense accumulation of densely concentrated undifferentiated substance. At the moment of the Universe's formation, this substance was infinitely dense, incredibly incandescent, and homogeneous. After the Big Bang, at first, the forces which became the reason for the formation of matter appeared. Strong nuclear interactions at short distances were the reason for the appearance of the smallest subatomic and atomic particles, which created the first level of this complexity. The second level of complexity was formed from these particles at the expense of the arising electrical and magnetic forces. This level of complexity is characterized by more complicated atoms, molecules, and groups of molecules. The weak nuclear interaction formed a more aggregated structure of the arising matter. The weakest of all the forces was the gravitational one, which was the last to reveal itself, in the formation of the larger structures of the Universe. With the completion of each of the enumerated stages, the complexity of the Universe increased. Most researchers believe that at the moment of the Big Bang the entropy of the primary system was maximal; this assertion is disputable. More likely, on the contrary, the static entropy of the homogeneous system on the eve of the Big Bang was zero, because there were no dynamics in the superdense matter/energy. It is more logical to accept this scheme.
After the Big Bang, entropy started growing monotonically with increasing complexity. This created the basis for the second law of thermodynamics to become valid. If the entropy had actually been maximal, it would have meant that the world was in a state of "thermal death," and any evolution of it would be impossible, since no energy external to this system exists. One of the main factors of complexity growth and entropy increase is time. The various stages of complexity growth required various time periods, from several seconds or minutes to billions of years. The appearance of complexities promoted an expansion of space, which was accompanied by a temperature decrease. This assisted gravity in assembling matter into nebulae, clusters, and solid bodies. According to specialists' estimations, galaxies were formed over two billion years. They started preserving their compositions and sizes within certain limits. As a result, some differentiation was established in the Universe. Over the course of time, complexity of a higher level started arising within the limits of the galaxy. Under the action of gravitation, the temperature in the nuclei of stars increased up to 10⁷ K, and the pressure reached 10⁴ atmospheres. This triggered reactions of nuclear fusion and contributed to the formation of heavier chemical elements. It is considered that all heavy elements arose in the nuclei of star systems. During this process, a certain part of matter transforms into energy. Stars shine at the expense of the energy generated inside them. At first, the complexity level of stars was limited, since they consisted of a small number of elements, mainly hydrogen and helium. Then the variety of elements started growing at the expense of nuclear fusion processes. This occurred at the expense of the heating of stellar nuclei under the action of the attractive force. At certain temperatures, helium is transformed into heavier chemical elements up to iron. At a certain stage of this synthesis, the star strongly shrinks under the attractive force and can explode. At that point, more complex heavy elements up to uranium are formed. Such explosions generate new stars, with new forms of complexity. As a result of all these processes, entropy grows. The situation with planets is more tangled. Inside the planets, heating also takes place, at the expense of gravitational compression and radioactive decay. The external energy received by planets from their central stars in the form of radiation affects the processes occurring on their surfaces.

(b) Biological complexities

Complexities of a higher order can arise and develop on the surfaces of planets at the expense of external energy. Today we can state this for our planet only. Probably, such complexities exist on other planets, too, but these are only conjectures and hypotheses. On the Earth, an incredible coincidence of circumstances came together for the initiation and preservation of life (biological systems):
1. The Earth has an appropriate size. With a smaller size, its weak attraction could not hold the atmosphere and surface water. With a larger size, its attraction would have wiped out the majority of living beings on dry land and in water.
2. An overwhelming majority of the regions of the Earth is characterized by an exceptionally narrow range of seasonal oscillations of temperature suitable for biological life, from −30°C to +50°C, and pressure in the range from 0.6 to 1.2 atmospheres.
3. This is achieved due to the fact that the Earth orbits the Sun at a distance that ensures this range. At a greater distance from the Sun, life on Earth would have been impossible due to insufficient energy flux. At a shorter distance, the energy flux would have been so intense that every living thing would have been destroyed.
4. The Earth has a large satellite which stabilizes the Earth on its axis of rotation. Without this satellite, the axis could unpredictably change direction. This would have caused drastic temperature changes on the Earth's surface, which would have prevented the development and preservation of complicated organisms. In addition, the sizes and orbits of the other planets, especially those of Jupiter, stabilize terrestrial conditions.
The realization of all these conditions together is hardly probable. The general probability of the origin of life on our planet equals, as is known, the product of these probabilities. The vanishingly small value of this product points to the improbability of a spontaneous beginning of life on Earth.
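The closing argument is simply the multiplication rule for independent events. The probabilities below are purely hypothetical placeholders (the book gives no figures); the sketch only shows how quickly the joint probability collapses:

```python
import math

# Hypothetical probabilities for the four conditions above (illustrative only).
conditions = {
    "suitable planet size": 1e-2,
    "narrow temperature and pressure range": 1e-3,
    "right orbital distance": 1e-2,
    "stabilizing large satellite": 1e-3,
}

# Assuming the conditions are independent, the joint probability is the product.
p_total = math.prod(conditions.values())
print(f"joint probability = {p_total:.0e}")
```

Under these assumed values the product is about 10⁻¹⁰, far smaller than any individual factor, which is the point being made about a spontaneous origin.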
At the same time, biological life has already been in existence for more than three billion years. This testifies to the fact that it is governed by some laws unknown as yet, and not by probabilistic ones. The appearance of life means the origination of an essentially new mechanism of development providing a higher level of complexity. The complexity of biological systems is determined by their metabolism, DNA, genome, brain, etc., which are known today. These systems have developed feedback mechanisms striving for the creation and maintenance of the conditions for their own continuous existence. All living organisms exist thanks to the inherited information on the control of the molecular processes inside them and the ability to extract matter and energy from the environment. This ensures not only their existence, but also their reproduction. Living organisms are systems of incredible complexity, which preserve a relatively stable state. The basis of their existence is the use of sunlight energy, which has allowed the formation of various biological systems. During this process, named photosynthesis, free oxygen is liberated. Finally, this led to the formation of an oxygen-enriched atmosphere. The oxygen-rich atmosphere enabled the formation of the stratospheric ozone layer, protecting living organisms from ultraviolet radiation. Before its formation, the solar energy flow suppressed the development of biological systems on dry land. After the formation of the ozone layer, living organisms could leave their protective aqueous medium and start populating dry land. The oxygen-rich atmosphere provided for the appearance of a more complicated type of biological organism which does not participate in the photosynthesis process. In these organisms, the internal processing of organic food, assisted by atmospheric oxygen obtained through breathing, provides the energy necessary for life. The tree of life became more and more ramified due to the increase in the number and kinds of organisms. This paved the way for a higher level of complexity. Approximately 540 million years ago, it led to the so-called Cambrian explosion of the complicated forms of living organisms that we know today.
In this period, two main kinds of complicated organisms arose. On the one hand, there are the predecessors of current plants. They exist thanks to sunlight energy and the availability of the necessary chemical elements from soil and water. On the other hand, there are animal organisms feeding on other organisms, that is, taking possession of their energy and matter. In order to survive and orient themselves purposefully, such animals need intellect. It is no mere chance that the largest and most complicated brain developed in animals combining the characteristic features of herbivores and predators, and that they occupied a dominant position in the world. The human became the summit of complicated biological systems.

(c) Development of civilization

It is considered that the development of the human brain could be a result of numerous, even non-interconnected, geological and biological phenomena, which led to the creation of a species capable of more efficient usage of matter and energy fluxes. A key problem in the history of mankind has always been the search for ways of increasing the consumption of matter and energy necessary for survival and reproduction. Any living system (cell, plant, animal, human) in the course of its existence passes through a certain "life cycle": birth, growth, development, degradation, death. It is considered that at the first three stages any living object behaves as a nonequilibrium system tending to approach an equilibrium state. Its entropy, at that point, tends to increase. However, this is impeded by the external energy consumed by this object, which leads to an entropy decrease and the origination of a "stable nonequilibrium" in the system. Nonequilibrium of such a kind allows the course of mass exchange, system growth, and its overall functioning, up to reproduction. In the final two stages, the ability to absorb external energy in a sufficient amount disappears, exchange processes attenuate, and entropy grows. When its possible maximum is reached, death ensues, and the biological system passes into the class of physical systems. The human community should also be considered as a complex system. The entropy of this system, just as of any other, reflects the level of its uncertainty, that is, of its chaotic condition. The more chaotic a human society is, the higher its entropy. In a human society, entropy represents a multiple-factor parameter covering all aspects of human life. Apparently, because of that, it does not have a clear definition as yet. However, an ordered society can be easily distinguished from a disordered one, where laws and rules are observed poorly or not at all. Clearly, these two communities of people have different characteristics of uncertainty, that is, different entropies. By way of example, we can compare Norway and a central African country, where the entropy difference is obvious. As is already known, two levels are distinguished in the kinetic theory of gases: a microscopic one, where the chaotic motion of molecules occurs, and a macroscopic one, which forms, on the basis of this motion, generalizing parameters of the system, such as temperature, pressure, energy, and entropy for the entire system. An analogous division can also be revealed in human society. However, the personality level analogous to the microscopic one is much more complicated and multilayered.
It includes direct interactions of people starting from kindergarten, followed by school, further studies, army service, rest, casual acquaintances, industrial activities, financial relations, and medical, juridical, and many other connections. The macroscopic level of human society gives a general evaluation of the parameters of the state of this society and the level and quality of life, including the general level of well-being of people, the level of medical service, safety, education, human rights, etc. Among these generalizing parameters there is also the entropy of the state of a separate country. People differ from all preceding kinds of living organisms in that human beings are endowed with an intellect that could attract a great energy flux for their development by means of their labor. This development continued with a constant increase in the complexity of mankind. It was accompanied by a gigantic growth of human possibilities in the course of historical development. This is based on the fact that the human is the only natural force that is able, by volitional actions, to increase the portion of solar energy accumulated on the surface of the Earth and to decrease the amount of energy dispersed into outer space. Humans achieve this by their work: cultivating plants on new lands or expanding the usage of old ones, irrigating arid territories, applying improved varieties of plants, applying new machines and technologies, and breeding domesticated animals. In these ways they have achieved an increase in the total solar energy accumulated on Earth. Intellectual work and any inventions remain fruitless unless they finally contribute to an increase in the amount of energy accumulated by biological systems on Earth. The humanities are also very significant for decreasing the entropy of society.

People make up around 0.005% of the entire biomass of the planet, but govern more than 40% of the surface and interior of the Earth. They have achieved this thanks to the gradual development of civilization. Civilization represents accumulated experience stored in the brains and archives of mankind. The brain controls that incredibly complicated system which is the human organism. Moreover, the brain possesses the ability to learn and improve. This makes the human more adaptable in comparison with other living organisms. Owing to this property, humans have accommodated themselves to the environment, and the environment to themselves. In the history of the establishment and development of human society, the transformation of an arising complexity into chaos was frequently observed. This transformation corresponds to the second law of thermodynamics, since complexity growth was always accompanied at the first stage by disorder growth, that is, entropy growth. To establish order in society, it was necessary to spend the needed amount of energy, that is, to decrease entropy. This also corresponds to the character of the second law. All this occurred as a collective reaction to the problems that humans were faced with. It can be observed at all stages of the establishment and complication of human society. It was extremely difficult to survive single-handed; therefore people started living in families, and then these families started merging into clans. There were no rules regarding behavior. Skirmishes and murders took place inside and between clans. Strong clans forced out weaker ones. Gradually humans started settling in other regions, and then moved across the European and Asian continents. The adaptability of human communities to various climatic conditions considerably increased after they had acquired control of fire. This allowed them to endure cold climates and cook food. Cooked food improved their health and increased their endurance. Lifespans increased.
Even the prehistoric development of mankind, and the further course of history, can be considered as a search for ways of finding, accumulating, and managing energy and material flows. Here energy implies only the energy accumulated in biological objects, and not energy in general. This, and not the activity of outstanding personalities, as idealistic historiography asserts, or the class struggle, according to materialistic Marxist ideas, was the engine of the historical process. Certainly, human behavior is much more complicated and diversified than simple energy consumption. However, the key problem was always the search for ways of obtaining energy (mainly foodstuffs) and matter sufficient for survival and reproduction. To learn efficient management and usage of matter and energy fluxes, the appearance of sciences and cultures was necessary. The necessity of a permanent expansion of the possibilities of obtaining matter and energy led to increased complexity, that is, to an increase in entropy. For its decrease, it is necessary to set aside some part of the acquired energy fluxes. Primordial men could make primitive production instruments. The creation and usage of such instruments gradually led to their increased complexity, which finally allowed the development of thinking and language. Having at their disposal fire and some appliances, men ceased being scavengers only and became hunters. More diversified foods became accessible to them and, hence, new sources of energy and matter. With a growing number of clans and communities, the total consumption of energy and matter grew. Already at that time, a tendency that continues up to now became necessary—establishing order and decreasing ambiguity in the interrelations between people. Inside a clan, people started contributing a part of the procured food and materials to the head of the clan, who established a certain order inside it. Matriarchy appeared, and women no longer took part in hunting. When people started using fire, this allowed them to move into regions where the temperature fell as low as −20°C, and then farther north. The people's way of life was primitive, and cannibalism flourished. Then people realized that a captive, especially in the case of an agrarian way of life, could produce more energy than he himself contained, and slavery was created. Modern man, considered to have appeared about 200,000 years ago, settled in all parts of the world, mastering its microclimates. With growing numbers of people, the general consumption of matter and energy was growing, with people living in numerous tribes. Around 10,000 years ago, our ancestors discovered new ways of extracting previously accumulated energy and matter from the environment. They learned to govern the process of reproduction of the plants and animals that they considered useful. This gave a stimulus to the agrarian revolution—people were progressing to agriculture. At that point, they continued both gathering and hunting, but were moving to a more settled way of life, which entailed considerable changes. As a result of more efficient production of food, a greater number of people survived and reproduced. Food sufficiency allowed people to improve their instruments of production, build dwellings, smelt glass and metals, and fire ceramics. They started producing many things to trade. Connections between clans became more complicated, and a differentiation of labor arose, with ploughmen, hunters, millers, tradespeople, craftsmen, etc., appearing. In this respect, we can outline some amazing parallels between biological evolution and the analogous development of human society.
The interdependency of cells and the division of functions between them correlates with the increasing interdependency of people and the social division of labor between them. In both cases, these connections became more complicated and constructive with time. The transition to an agrarian way of life led to its complication. Instruments of production, seed material, and some agronomy became necessary. To regulate life, it became necessary to appoint a chief and his service staff, that is, to spend energy for this purpose. About 5000 years after the agrarian revolution, the state appeared to manage the arising complications. This was a response to the social necessity of regulating human relationships in many areas of life, since otherwise chaos appeared, that is, entropy grew. With the appearance of states came increased information flows and the problem of information storage (records of taxes, tributes, fees, etc.). This led to the creation of written language (at first primitive, and only after many centuries alphabetic). Since states became larger and more complicated, more energy had to be spent on organizing life within them: establishing bureaucracy, schools, police, army, and so on. All this led to an entropy decrease in human society. The appearance of religion stands somewhat apart in this respect. Various beliefs arose in humans mainly because of the impossibility of grasping the observed natural phenomena. However, around 3500 years ago a monotheistic religion arose in the Middle East. It still exists, in its primary and in somewhat changed forms, and involves an overwhelming majority of the people populating our planet. Numerous expositors of religion and especially its critics


Entropy of Complex Processes and Systems

present its essence inaccurately, attributing to it the role of fooling people, perpetuating slavery, and consecrating the hatred of some groups of people toward others. Actually, the religion of that time, and even of today, is mainly directed at an entropy decrease in a complicated human society. The point is that the Ten Commandments of Moses formed the basis of the development of modern civilization: morals, ethics, aesthetics, and the entire jurisprudence of modern states and world law are based on them. Maintaining religion, like other organizations in a state, requires energy and materials. We can only emphasize that the creation of the state and all its institutions was not accompanied by the creation of new methods of extracting matter and energy from the environment. Up to the Industrial Revolution, the methods of their production and usage did not change greatly. The state arose and developed only as a structure capable of decreasing the complexity of the functioning of a community of people, that is, of decreasing entropy. Modern globalization (the international division of labor) and the formation of alliances of states serve the same purpose. All this involves additional energy expenditure to decrease the total entropy of human society. As we can see, this does not contradict the second law, which thereby acquires a universal character for all complicated changing systems. It has become clear that entropy can find a new and unexpected applied use. This parameter can be a helpful instrument in the analysis of human activity and the processes it has created. Here is a concrete example. Extraction of minerals, their beneficiation, metal smelting, recovery of other useful substances, production of various articles: all of this is accompanied by energy expenditure. This must lead to a decrease in the initial entropy value characterizing the material before its processing.
The ability to find the entropy value characterizing the state of materials and substances before processing, and its change in the course of technological processes, with the purpose of their optimization, was the goal of our research. In conclusion, we can note that, in connection with a significant growth in population and its consumption of various goods and services, modern civilization is characterized above all by an additional entropy growth of two kinds:
1. An unprecedented complication of industrial and cultural human activity. This includes an arms race combined with insufficient self-discipline on the part of humanity, which periodically leads to minor conflicts, with greater ones also possible in the future.
2. An incredible growth in the volume of production and consumption waste; human activity is accompanied by the appearance of a greenhouse effect and other problems.
Overcoming such entropy requires growing expenditures of resources by all of mankind.

5.4 Biological systems and Darwinism

Our ideas of the organization and development of living matter are still imperfect. From the standpoint of modern notions, biological systems are extremely unstable and hardly probable. During the last 50 years, biology has been developing much more quickly than other sciences.


The start of this acceleration was marked by the discovery and understanding of the material nature of heredity. The decoding first of the DNA structure and then of the genetic code was perceived as a solution to the mystery of life. However, it later became clear that these discoveries had not given exhaustive answers to biological problems. They had only slightly opened a door revealing new labyrinths of the unknown. Many scientists believe that, since entropy does not grow unrestrictedly in biological objects, there is a glaring contradiction between thermodynamics and biology. There are enough reasons to doubt the accuracy of such notions. In the absence of external energy, entropy grows; with an inflow of such energy into a biological system, entropy decreases, which corresponds to the second law of thermodynamics. In this respect, it is interesting to examine the situation with Darwin's theory of the evolution of living nature, which causes ardent discussions today. The theory of evolution served as an unshakeable foundation of biology, an indisputable truth, for a century and a half. However, in recent decades the authority of this theory has been shaken. The scientific progress on which Darwin set his hopes has not only failed to bring new proofs of his correctness but, on the contrary, has cast doubt on the main principles of his theory. What is the essence of these contradictions? In 1859, Darwin published his book The Origin of Species by Means of Natural Selection. Based on the achievements of breeders and his own observations of birds on the Galapagos Islands, Darwin came to the conclusion that organisms can undergo small changes, adapting themselves to varying environmental conditions by natural selection. He believed that, over a sufficiently long period of time, a sum of small changes leads to greater changes, that is, to the appearance of new species.
Evolution theory implies a continuous progression of gradually changing living creatures from the most primitive forms up to higher organisms crowned by humans. At the same time, the character of Darwin's theory was purely conceptual, since at that time paleontological evidence did not provide any grounds for his conclusions. In this, the theory of evolution resembled a building erected on sand, without a foundation. Darwin relied on the future progress of paleontology, which would enable the discovery of transitional forms of life and confirm the validity of his theoretical ideas. However, such confirmation has not arrived, despite considerable progress in paleontology. Not a single bridge from one species to another has been discovered. It follows from the theory of evolution that a great number of such bridges must exist, that all of evolutionary history should consist of transitional links. However, the annals of paleontology stubbornly testify to the inconsistency of evolutionism. During the first three and a half billion years of the existence of life, only the simplest single-celled organisms lived on our planet. Then the Cambrian period arrived, when during a relatively short time all the diversity of life arose in its present-day form, without any transitional links. According to Darwin's theory, this Cambrian "explosion" simply could not happen. Thus, paleontologists resolutely deny the concept of macroevolution. Adherents of Darwinism who disagree with this took up a defensive position, suggesting Neo-Darwinism, in which a mutational mechanism took the place of classical adaptation. However,


the deciphering of the genetic code struck a crushing blow to this theory. Mutations occur rarely, and in most cases their character is unfavorable; a new "sign" cannot give its carriers a decisive superiority in the struggle for an ecological niche. Darwin built the theory of evolution on the principle of the gradualness of changes. He believed that over many millions of years sun, wind, and precipitation slowly change the geology of the planet and the fauna on it. This rested on the contemporary notion that all global geological and climatic changes occurred at the end of the glacial age, hundreds of thousands of years ago, and that after that no catastrophic changes happened on Earth at all. Darwin denied the possibility of any catastrophes. He thought that the disappearance of mammoths in Siberia was an unsolved enigma of nature. Paleontological findings nowadays confirm immense changes in nature within the historical epoch, the last of them about 2500 years ago. Along the entire Alaskan seashore, multikilometer accumulations of bones of extinct animals (mammoths, mastodons, superbisons, and horses) were discovered. In this mass of bones, remains of currently existing species were also found: many millions of animals with broken and torn-off limbs, mixed with torn-out tree roots. In California, human bones were discovered along with bones of extinct animals; the skulls found do not differ at all from those of present-day Native Americans. There are many similar places on Earth: on the American continent, in Switzerland, Germany, England, Siberia, China, Taiwan, and elsewhere. In the permafrost of Northern Siberia, ice-bound mammoths were discovered whose defrosted meat was eaten by dogs without any ill effects. They must have been frozen quickly, at the moment of their death; otherwise their bodies would have decomposed. In the stomachs of the frozen animals, fragments of plants that do not now grow in Siberia were found.
Obviously, the animals' deaths were not caused by a struggle for existence or by human hands. Everywhere there are traces of violent death caused by some global natural disasters, which probably happened in various geological periods and, quite possibly, within historical time. In recent decades, the progress of molecular biology has driven evolutionists into a deadlock. It has shown that an organism contains incredibly complicated biochemical systems, biological molecular intracellular mechanisms and processes that are inexplicable from the Darwinist standpoint. It has become clear that these systems consist of components, each of them critically necessary: even if only one is disabled, the entire system fails. Hence, for such a mechanism to accomplish its functions, its components must appear simultaneously, contrary to the main principle of the theory of evolution. The problem of eyesight can serve as an example. The organ of sight becomes functionally significant only at the last moment, when all its components are assembled. If we follow Darwin's logic, any attempt by an organism to start a slow multistage process of creating the mechanism of sight would have been mercilessly stopped by natural selection. We can assume that some complex compounds arose in the primordial ocean at the expense of solar energy. However, a spontaneous increase in the level of organization is absolutely excluded, because entropic decay prevents it. In any circumstances, the self-generation of


life must be accompanied by a complexity increase, first at the molecular level, and entropy prevents it. Chaos per se cannot give rise to order, as this contradicts the laws of nature. In Darwin's time, the scientific community believed that a cell was just a primitive vessel filled with protoplasm. However, in the course of the development of molecular biology, it has become clear that a living cell is a mechanism of incredible complexity carrying a genome with an inscrutably large amount of information. According to the law of conservation of information, its amount in a closed system never increases; its volume either remains unchanged or decreases at the expense of entropy growth. The Sri Lankan-born astronomer Wickramasinghe expressed the idea very aptly: "The probability of the self-generation of life is as negligible as the probability of a hurricane sweeping above a dump assembling a good airliner from the garbage in one rush." This discussion has been going on for many decades. Supporters of Darwinism accuse their opponents of obscurantism, their attitude acquiring an extremely fierce tone. Here is a characteristic example. One English zoologist has written, word for word, the following: "One can assert with absolute certainty that anyone who does not believe in evolution is either an ignoramus or a fool, or else a madman, or possibly a rogue (although the latter is hardly believable)." This reminds us of the ideology of orthodox Marxism that ruled in the Soviet Union, under which one did not argue with opponents but blamed and anathematized them without any discussion. Such a comparison is completely appropriate. Just as with Marxism, today's Darwinism has degenerated, hardened, and become a stagnant pseudoscientific dogma. The more drawbacks are revealed in the theory of evolution, the more furious is the resistance of its adepts.
It seems that when an idea dies, it transforms into an ideology, and as we know, an ideology is absolutely intolerant of competition.

5.5 Principal aspects of entropy

Thus, life is an amazing example of a chemical system in a state of unstable equilibrium, since this system is an extremely improbable structure. Due to its metabolism, living matter, in contrast to inanimate bodies, avoids the tendency of transition into an inert equilibrium state. This metabolism occurs in an organism at the expense of matter and energy and leads to an entropy decrease. The same happens in inanimate systems at the expense of external energy, as the examples in earlier parts of this book show. Life in its many manifestations resists entropy increase. The instrument of this resistance is the energy produced in any living organism from food, water, and solar energy. Reproduction of a biological species also resists the growth of its entropy. Entropy growth in a living organism corresponds to an attenuation of exchange processes, and its maximum, that is, equilibrium, can be reached only when the organism dies. At the same time, it is possible that the external entropy, as is generally accepted, still grows at the expense of vital functions, waste products, and heat released by organisms. It was long considered that such an entropy decrease contradicts the second law of


thermodynamics. Here a question arises: if life is equivalent to a slowdown of degradation, which follows from the second law, can the genesis and evolution of living beings be explained on the basis of the physical laws of the material world alone? There is no unambiguous answer to this question as yet, which is understandable, because there exists some incomprehensible and extremely complicated connection between elementary chaos and the sequence of complicated physicochemical phenomena finally leading to life. These phenomena in their sequence must be directly opposite to chance, because they are barely probable and must lead to an entropy decrease in circumstances where chance usually increases it. We have already noted the negligibly small probability of the coincidence of all the positive parameters and circumstances on Earth that ensured the possibility of the generation of biological systems. When it is necessary to explain the formation of a cell, biologists admit that they cannot adequately explain it. Taking into account that, in addition to the problem of the appearance of cells, there are even more complicated problems connected with their assemblage into systems constituting an individual, with the necessity for this system to live at the expense of the environment, and with the possibility of reproduction, it should be admitted that to this day no distinct scientific explanation of life and its appearance on the Earth's surface has been given. In the middle of the 20th century, computations were performed which convincingly showed that the creation of life from inanimate substances could not happen by chance. These computations, based on the laws of probability theory, showed that for the successful creation of life, a Universe of greater volume and age than Einstein's Universe is needed.
The necessary space is expressed by a sphere of 10^82 light years, whereas the most remote galaxies are located at distances of the order of 10^9 light years. Similarly, the time required for cell formation within the volume of the terrestrial globe is estimated at 10^243 years, while the Earth has existed for no more than 4 billion years. These computations were performed before the decoding of the human genome, which contains up to 3 billion elements. If the genome were written down in books of 1000 pages with 1000 elements on each page, the entire genome would take 3000 volumes. Such a library is in each cell of an organism! And this entire countless number of cells in the organism (according to up-to-date estimates, their number in a human is 10^14) is connected with a governing center, the brain, consisting of about 10^14 neurons, which coincides with the number of stars in a large group of galaxies including the Milky Way. All this cosmic system functions efficiently, controlling each cell of the organism and its total activity according to the program encoded both in the genome and in the brain. It is clear from all of the above that it is impossible to create such a complicated system as a living organism at random, even over millions of centuries and with unlimited material. In addition, under changing environmental conditions, living organisms reveal a capability to adjust to them in an optimal way. Now we understand that this role is performed by the recombination of the parents' genetic codes. Vitally important information is transferred by genetic messages written in the genome using a four-letter alphabet, or, as it were, a four-note octave. Today we understand that heredity consists of long messages (up to three billion signs in a human genome). In the course of cell division, these messages are copied with several


accidentally arising mistakes. Therefore, new individuals differ somewhat from their ancestors in their abilities for survival and reproduction. Thus, the fundamental questions connected with life can be presented as the transfer of genetic information in the presence of randomness. Hence the enormous role of the separation of sexes and of sexuality, built into living nature from the outset, in the process of its permanent perfection and adjustment to external conditions, becomes clear. Although such an understanding does not answer the principal questions connected with the origin of life and the evolution of species, it suggests a point of support for obtaining some definite conclusions.
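The "library in every cell" image used above rests on simple arithmetic; the following minimal sketch only checks the numbers, with the page and volume sizes being the ones assumed in the text:

```python
# Sanity check of the "genome as a library" arithmetic quoted above.
genome_letters = 3_000_000_000   # ~3 billion elements in the human genome
letters_per_page = 1000          # page size assumed in the text
pages_per_volume = 1000          # volume size assumed in the text

volumes = genome_letters // (letters_per_page * pages_per_volume)
print(volumes)  # → 3000 volumes per cell
```

The figure of 3000 volumes quoted in the text thus follows directly from dividing the genome length by the capacity of one volume.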

5.6 Certain world-view aspects of entropy

It is generally accepted that the entropy parameter makes it possible to determine the "arrow of time," that is, to distinguish past from future. This means that entropy becomes a practical measure of the contribution of an "event," that is, of the event probability in a system of a large number of elements, to the disorder predominating in the world. In equilibrium systems, the arrow of time disappears, that is, entropy no longer changes. From this follows the "theory of the heat death of the Universe," which holds that the history of the world's development will be over when it reaches the state of thermodynamic equilibrium. Since the direction of the course of time is determined by disorder growth at the elementary (microscopic) level, going back in time we must conclude that "in the beginning of time" matter and energy were strictly ordered. Boltzmann wrote about this in his theory of gases: "the initial state of the Universe surrounding us was very unlikely." And this state must be considered as a certain absolute beginning of the world. A question arises: why did our Universe initially have such a low entropy, and where did the inexhaustible energy come from to reduce it to such a level? Knowledge about the cosmos is of fundamental importance for science as a whole. Both the most important and the simplest of human events mentally return us to the cosmos and its origin. As early as in the ancient Middle East, the idea that the Earth is a small world in a vast Universe was formulated; by the 6th century BC it had become clearly articulated. This was a period of amazing intellectual and spiritual growth over the entire planet. At that time, sciences were developing in Greece, Zarathustra's ideas were formulated in Persia, those of Confucius and Lao-tzu in China, of Buddha in India, and of the Jewish prophets in Judea and Babylonia.
It is difficult to imagine, but all this was taking place without any connections between these regions. Somewhat later, many other truths were discovered, for example, the fact that the same natural laws are valid throughout the cosmos. Today we know a fundamental law of nature, the law of energy conservation. It asserts that it is impossible to annihilate energy: it transforms from one form into another, while its total amount remains the same. However, this also means that it is impossible to create energy. The main sources of energy for modern industrial civilization are either nuclear power stations or


so-called fossil fuels: coal, petroleum, gas, wood, peat, etc. But these sources do not create new portions of energy; they only release energy previously accumulated in the respective substance. Ten to twenty billion years ago, an amazing event occurred: the Big Bang, from which our Universe started. Why it happened is the greatest of all mysteries. It is considered that before the Big Bang all energy and matter were compressed to an incredibly high density, and everything (matter, energy, and space) fitted into a very small volume. Hence, an incredibly enormous amount of energy was released into our Universe in the act of the Big Bang. Many questions arise. First, how has this energy lasted until now to support stability in the microworld, the macroworld, and the cosmos, namely, the stability of atomic nuclei, of atoms themselves, of living nature on our planet, and the functioning of the cosmic "mechanism" with its supernova explosions and galaxies receding with ever-growing velocities (as known today, the number of galaxies in the Universe is of the order of 10^11, and in each galaxy the number of stellar systems is of the order of 10^11)? Second, where did such an amount of energy come from that it has been sufficient for 15 billion years, with no signs of its exhaustion? By all appearances, during the indicated time interval, as follows from statistical physics, all processes should long ago have settled down, equilibrium should have set in in the Universe, and entropy should have reached its maximum. However, this has not happened. At various levels of the organization of matter, an opposite tendency is observed in nature. In the microworld, atomic nuclei containing positively charged protons do not disintegrate despite the repulsive forces. It is considered that there exists a certain force, neither gravitational nor electrostatic, acting at short distances.
It unites protons and neutrons into a coherent whole and does not allow them to fly apart, overcoming the strong electrostatic repulsion between the protons. We can judge indirectly the huge energy contained in the microworld at this level by the power of a nuclear explosion, when this energy is partially released. An enormous amount of energy is spent in the cosmos, for instance, just on the accelerating recession of galaxies or on the explosions of supernova stars. It is known that in the accomplishment of a certain amount of work, energy degrades from its higher forms to lower ones. Processes opposite to this degradation do not occur spontaneously. A generally accepted opinion is that the very fact of the Universe's expansion leads to the growth of its entropy. Proceeding from the laws of gas dynamics, one could agree with this if the velocity of the galaxies slowed down during the expansion, or at least remained constant, the entire Universe thereby tending to equilibrium. However, the pattern suggested by up-to-date science is just the opposite. Moving with increasing velocities, the receding galaxies move away from equilibrium, each of them possessing a certain acceleration. According to Newton's law, this can be provided only by certain forces applied to the galaxies. Here, references to gravitational fields are unconvincing for several reasons, the main one being that gravitational interaction attenuates with increasing distance, and does not increase. The nature of these forces and of the entire phenomenon of the recession of galaxies is unclear or, at least, controversial.


However, returning to our research area, it is more logical to assert that the entropy of the Universe does not grow with its expansion but permanently decreases, because the Universe moves further and further from equilibrium. And if the Universe is taken as a system, the question of compensating entropy also remains unanswered. This can point only to the following: an unknown mechanism of permanent entropy decrease exists in nature at various levels of the organization of matter. To make ends meet, some scientists, for example Bergson, believe that the energy for this is taken from processes outside space, discreetly passing over in silence what exactly this implies. The question of where the energy for the Big Bang, in such an amount, was taken from also remains unclear. Thus, it turns out that rather often in serious scientific problems a discussion about natural laws passes into a world-view problem. What is at issue is the centuries-old opposition in philosophy over the priority of matter or mind. Over the last 100 to 150 years, a materialistic point of view has mainly prevailed, but nowadays this view has started losing its position. Modern science and philosophy, analyzing recently discovered phenomena and laws of the material world, more and more frequently experience the necessity of explaining them by combining the material and the ideal. As a rule, this is done diffidently, allegorically, with hints, withholding facts. Possibly, the time has come to speak about it directly. Idealism is not confined to religion, in which the laws of life and physics are sometimes vulgarly interpreted. We will not go into the profound philosophical aspects of this problem. It is only of interest to mention that even ancient sources contained a profound understanding of natural laws. Deviations from dogmas were persecuted in Christian Europe not only in questions of theology, but also in questions of the understanding of the surrounding world.
For a very long time, superstitions served as a panacea for various misfortunes. At the end of the Middle Ages, thanks to the works of Copernicus, Kepler, and especially Newton, it became clear that the laws of physics form the basis of natural phenomena even in space. As we can see, the development of science puts on the agenda the issue of the correlation of the material and the ideal in the understanding of the surrounding world and the processes occurring in it. Evidently, Newton was the greatest scientific genius ever to live. Newton's ideas shocked many of his contemporaries. For instance, Descartes could not accept the notion of forces acting between celestial bodies at a distance, in a contactless manner. The diversity of Newton's intellectual interests is admirable, varying from great achievements to doubtful reflections in the area of alchemy. In his desire to grasp the sense of the Universe, the study of prophecies was for him as important as the universal gravitation and differential calculus he had discovered. As his biographer asserts, he was religious, but he thought that the doctrine of the Holy Trinity, traditional for Christianity, was an erroneous interpretation of Holy Writ. According to the same source, Newton was closer, in his views, to the Judaic monotheism of the school of the Rambam. This Talmudist is known as the author of numerous publications; in the late 12th century he wrote "The Guide for the Perplexed," a book that contains tremendous revelations even for a modern reader. For example, the Rambam writes that the Almighty created space, matter, and time, and that the course of time depends


on motion: he was writing about time as a separate substance in the 12th century! Physicists came to understand this only about 100 years ago; this statement about the course of time is a precursor of the theory of relativity, which appeared almost 800 years later. In the same work, the Rambam asserts that the Moon and the Earth are round and that the Sun is a sphere, although it seems to be a disk. In this work he also rejected Aristotle's ideas of the infinity of time and space. It contains many similar revelations. Where is all this from? The Rambam himself writes that he obtained this knowledge from ancient Jewish sages. We can only guess when these sages lived and how they knew such fundamental laws of nature. For us, the Rambam himself is an inhabitant of the ancient world, and Flavius Josephus, who lived 1100 years earlier in Jerusalem, had written a book entitled "Antiquities of the Jews," which has come down to us. The striking convergence of ancient notions and up-to-date science in the most fundamental issues allows us to answer the arising questions in a somewhat different way. Today, as never before, Einstein's assertion that "science without religion is blind" remains topical, not only because the absence of moral restraints allowed the creation of atomic, biological, and chemical weapons, but also in the sense of the understanding of the laws of nature. The process of the cognition of nature in the course of its development can be represented as an infinite series of accumulating knowledge. Here an analogy with a mathematical series is possible, where each subsequent term is added to the accumulated sum of all preceding terms. As is well known, a mathematical series can be either convergent or divergent. In the first case, as new positive summands are added, the sum tends to a certain finite limit, while in the second case such a limit does not exist. This means that the series diverges.
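The distinction between convergent and divergent series drawn above can be made concrete numerically. A minimal sketch, using two standard textbook series not taken from the original text: the sum of 1/k², which approaches the finite limit π²/6, and the harmonic sum of 1/k, which grows without bound:

```python
import math

def partial_sum(term, n):
    """Sum term(k) for k = 1..n."""
    return sum(term(k) for k in range(1, n + 1))

# Convergent series: the partial sums of 1/k^2 approach pi^2/6 ≈ 1.6449.
for n in (10, 100, 10000):
    print(n, partial_sum(lambda k: 1.0 / k**2, n))

# Divergent series: the harmonic partial sums of 1/k grow roughly like ln(n),
# never settling on a finite limit.
for n in (10, 100, 10000):
    print(n, partial_sum(lambda k: 1.0 / k, n))
```

Running the sketch shows the first column of sums stabilizing near a limit while the second keeps climbing, which is exactly the contrast the analogy invokes: accumulating knowledge that closes in on a final answer versus accumulating knowledge whose horizon keeps receding.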
The notion of science as a convergent series was characteristic of Einstein and of the entire physics of the 20th century. The gigantic efforts applied for many years to creating a unified field theory were, in essence, an attempt to approach the limit of a converging series. However, the real development of physical theory has followed the direction of a diverging series. Each new discovery in the area of elementary particles and their interactions, especially after the launch of the Large Hadron Collider, increases our knowledge of the nature of the microworld but does not decrease the unknown. On the contrary, the horizon of new mysteries keeps expanding. Up-to-date physics submerges into ever deeper layers of matter; it has discovered hundreds of kinds of elementary particles, and one can see no end to this expansion of what is known, nor any contraction of what is not known. Nevertheless, beyond the limits of the infinite series of physical cognition, Einstein saw what he called God: the Creator of nature and its laws, merged with his creation. In the case of a converging series, God, according to Einstein, is the limit which human cognition tends to approach but never reaches. In the case of a diverging series, God is a supergiant essence of the universe, in principle inconceivable to humans and available to us only in its limited manifestations. Today it is generally accepted that entropy is an estimation of the state of any system, reflecting the process of dissipation of stored action, leading to the restoration of all kinds of equilibrium and the establishment of uniformity. Ultimately, it represents a qualitative characteristic of the wear, aging, and death of any processes and changes.

Chapter 5 • Place of the entropy parameter in technical sciences


However, this contradicts the surrounding reality at all levels. Processes and phenomena that actually decrease entropy and resist its overall growth through the harmony and beauty of nature are observed everywhere. In living nature, a deep, interrelated logic of development can be seen at all levels. As is well known, the decrease of entropy and the establishment of harmony in living nature occur, in the long run, at the expense of solar energy. The physical world of the universe is also harmonious at all levels, from the atomic structure to the galactic cluster. Where does this unimaginable amount of energy for the development of the physical world come from, and how does the world maintain its harmony? For some reason it is assumed that the act of creation, the Big Bang, was a one-time event, and probably this is correct. Today astronomers assert that space is curved and closed, that is, finite. Space constantly expands; in this respect the act of creation continues. To explain the origin and maintenance of life, the recession of galaxies, and much more, it is necessary to assume with equal justification that this act is constantly reproduced not only in the expansion of space but in other forms, too. Today it is impossible as yet to prove this assumption, or to refute it.

5.7 Conclusion

The tension of passions in the history of the development of the notion of "entropy" is not exceptional in science. Many ideas and achievements became firmly established only as a result of heated discussions and controversies. To confirm this, we can recall the histories of "phlogiston" and the "luminiferous ether." It is known that a universally recognized genius of science, Einstein, was in sharp opposition to the views developed by Bohr and his school. The prolonged polemics of "determinism against probability" between de Broglie and Schrödinger on the one hand and Born, Heisenberg, and the majority of theoretical physicists on the other are also well known.

An exception is pure mathematics, whose structure stands apart from the other sciences: its entire construction is based on the laws of human logic. Natural sciences such as physics, chemistry, and biology are built on other principles; the basis of each of them consists of its natural laws. The correspondence between the laws of human logic and the laws of nature is imperfect, and therefore the understanding of natural phenomena is ambiguous. Science is inherently based on empirical facts, but it is not a simple accumulation of factual material. It represents an attempt to understand and, above all, to order these results and to create on this basis a logical framework of thinking with internal connections between experimental observations. All this relies on a certain logical basis of human knowledge, which, as is well known, is always limited by definite frames and finite precision. Therefore many ideas about nature, especially complicated ones, are accompanied by discussions which develop together with science. They always leave open the possibility of changing these ideas in the light of new evidence. For this reason the development of ideas in science can never be completed.


Entropy of Complex Processes and Systems

In general, the history of the development of scientific ideas is accompanied, practically without exception, by prolonged and heated controversies whose heat of passion sometimes considerably exceeds the plots of popular modern blockbusters, detective novels, and serials. In this respect we can recall the burning of Giordano Bruno, the trial of Galileo, Boltzmann's suicide, and the countless heart attacks, strokes, and premature deaths of supporters of certain scientific views. Among these are also the repressions that specialists in cybernetics, genetics, and some other branches of science suffered in communist countries.

This book is intended for specialists of a broad scope, since the notion of entropy can be used by everybody engaged in any aspect of the natural sciences. It is especially desirable to attract the attention of young scientists and students. The material of the book is set forth without decreasing the precision and generality of the problems and, wherever possible, without invoking complicated sections of mathematics, being explained at a level of physics accessible to students. The author does not know whether he has managed to convey all the stated material to the reader in such a form, but he hopes that this book will fuel interest in the knowledge and grasp of complicated systems through the concrete example of two-phase flows, whose behavior is extremely complicated. It is shown that such flows, too, can be successfully formalized using the entropy parameter.

