Process Engineering: Addressing the Gap between Study and Chemical Industry [3rd, Revised and Extended Edition] 9783111028149, 9783111028118



Language: English. Pages: 574. Year: 2023.




Michael Kleiber Process Engineering

Also of Interest Chemical Reaction Engineering. A Computer-Aided Approach Salmi, Wärnå, Hernández Carucci, de Araújo Filho, 2023 ISBN 978-3-11-079797-8, e-ISBN 978-3-11-079798-5

Sustainable Process Engineering Szekely, 2021 ISBN 978-3-11-071712-9, e-ISBN 978-3-11-071713-6

Process Technology. An Introduction De Haan, Padding, 2022 ISBN 978-3-11-071243-8, e-ISBN 978-3-11-071244-5

Chemical Reaction Technology Murzin, 2022 ISBN 978-3-11-071252-0, e-ISBN 978-3-11-071255-1

Process Systems Engineering. For a Smooth Energy Transition Zondervan (Ed.), 2022 ISBN 978-3-11-070498-3, e-ISBN 978-3-11-070520-1

Michael Kleiber

Process Engineering

Addressing the Gap between Study and Chemical Industry 3rd, Revised and Extended Edition

Author
Dr.-Ing. Michael Kleiber
thyssenkrupp Uhde GmbH
Friedrich-Uhde-Str. 2
65812 Bad Soden
Germany
[email protected]

ISBN 978-3-11-102811-8
e-ISBN (PDF) 978-3-11-102814-9
e-ISBN (EPUB) 978-3-11-102929-0
Library of Congress Control Number: 2023940903
Bibliographic information published by the Deutsche Nationalbibliothek: the Deutsche Nationalbibliothek lists this publication in the Deutsche Nationalbibliografie; detailed bibliographic data are available on the Internet at http://dnb.dnb.de.
© 2024 Walter de Gruyter GmbH, Berlin/Boston
Cover image: Copyright thyssenkrupp
Typesetting: VTeX UAB, Lithuania
Printing and binding: CPI books GmbH, Leck
www.degruyter.com



For Claudia and our adult Timon

Preface

You need only 10 % of the things you learn at university. The problem is: you don't know which 10 %! (University Sapience)

There are so many good textbooks on process engineering. When you start writing a new one, you must wonder what might be its identifying feature, making it different from the others. In fact, I think I have one. After having worked in industry for twenty years in process simulation, design, and development, the author is by no means in a position to give proper answers to every problem. On the contrary, any new process has its own characteristic problems, and you more or less start from scratch. But experience and a good network help to develop an appropriate strategy for finding a solution, and to distinguish between important and less important knowledge in process engineering.

In the academic world, there is a tendency to over-emphasize theoretical concepts without integrating the application aspects. For instance, many phase equilibrium and physical property data specialists have never simulated a distillation column, which would give them a certain feeling for the importance of an activity coefficient at infinite dilution. On the other hand, practitioners tend to believe in a solution which worked once, disregarding that this might have been related to conditions which are not always given. The corresponding pieces of software are simply trusted in any case, while nobody can explain what they are based on and what their limitations are.

Bridging the gap between university and industry is the utmost concern of this book. The intention is not to write a textbook for beginners in process engineering, but to equip the reader with the most essential pieces of knowledge for practical applications. It tries to answer the so-called silly questions, things that many students have learned at the university without understanding their implications. The target of this book is not to produce specialists but to enable the reader to do something reasonable and to keep the overview.
It is not a textbook which gives thorough explanations of every topic listed in the book; for this purpose, some 400 pages are by far not enough. In fact, long mathematical and scientific derivations are avoided, and other existing textbooks are referred to where the reader can acquire in-depth knowledge if needed. Instead, we try to explain the meaning of the topics and formulas so that the reader gets a feeling for the relationships and their interpretation. This should enable the reader to take part in discussions, to know where it is worth extending his or her knowledge with further literature, and to distinguish between important and less important topics.

To give an example: the author has often been asked to explain what an activity coefficient is. People always expect at most two sentences, in the usual manner of engineers. This is simply not possible. Good textbooks need several pages to explain phase equilibria of pure substances, the difference between mixtures and pure substances,

the meaning of the Gibbs energy, and the concepts of excess and partial molar properties. Certainly, this explanation is a requirement for thermodynamicists, but it is not the way for application engineers to understand why they must use activity coefficients (or an equivalent concept) for nonideal mixtures and how to obtain them. Instead, a "recipe" for the usage of a model is required, and in many projects it is important to explain the necessity of a proper evaluation of the model parameters, and to avoid the project manager getting the impression that it is just an accuracy fad.

The author is fully aware that the text reflects his own opinion. For example, equations of state are currently favored by most scientific authors. Nevertheless, the author wants the reader to be capable, not perfect and at the state of the art in each area. For this purpose, activity coefficients are the more pragmatic approach and are taken as the standard in this book.

The author is grateful to Jürgen Gmehling, Michael Benje, and Hans-Heinrich Hogrefe, who eliminated many errors and misprints in my draft. Thanks also to Hristina and Olaf Stegmann, who helped me a lot to write reasonable texts in Chapters 1 and 11.

I would like the reader, probably a process engineer in his or her startup phase, to get a better idea of the 10 % of knowledge which will help in professional life. And of course: always have fun in your job!

And remember:
There is always much more to learn than can ever be taught! (Peter Ustinov)
A fool with a tool is still a fool. (Grady Booch)

Preface to the 2nd edition

Three years have passed since the first edition. I intended to write a book with a special focus on students just after finishing university and getting acquainted with the industrial point of view. The feedback has given me confidence that I succeeded with this purpose.

Now I was told that it is time for a revision. I had waited for this announcement for a long time. As always, once a text is finished, you start to find the first mistakes, and any feedback, although usually positive, reveals room for improvement. My technique is to maintain a file containing all suggestions from readers, so that I have a fast start when the revision comes to the top of the agenda. There were a lot of items to correct, and I have to thank all the readers who contributed to this list, although I am angry with myself about every single mistake.

Furthermore, a few chapters have been extended, and a new one on dynamic simulation has been introduced. As I am not very experienced in dynamic simulation, I had the good idea to ask Mrs. Verena Haas from BASF SE to write this introductory chapter. Verena did an impressive master thesis in our company, and I cannot imagine anyone more appropriate for this job. She was also the one who suggested adding a PID chapter, and after it was written, she made a number of valuable changes to it. When I met her before her master thesis, I regarded her as a possible reader; meanwhile, she has become a valuable partner in writing this book.

Hattersheim, February 2020

And remember: Experience is a thing you claim to have – until you acquire more of it. (Harald Lesch in: α-Centauri)


Michael Kleiber

Preface to the 3rd edition

The second edition had just been printed when the Corona virus appeared and made the world stand still. Two difficult years followed for almost all companies in the chemical industry. Just after Corona was overcome, the Ukraine war began and initiated the energy crisis, with a large impact not only on the chemical industry but also on the methodology of chemical engineering itself. The importance of energy-saving measures has drastically increased, and so have the demands on process simulation, which delivers the constraints for energy consumption. Likewise, the legal constraints on any developed measures set by the patent situation have become increasingly significant. Moreover, digitalization has developed into a new topic in recent years; the ability to make use of large amounts of data and artificial intelligence will lead to changes in the engineering world which none of us can imagine.

Therefore, besides a large number of supplements, two new chapters have been added to the text. After the success of the chapter on dynamic simulation by Verena Haas, I decided to ask two colleagues again to contribute their expertise in two important areas where I am still learning myself. In our department, Gökce Adali is assigned to develop the particular digitalization options. She had already done her master thesis on a digitalization topic, and I think she is the one most capable of introducing the various aspects of digitalization to professional beginners, the target group of this book. Michael Benje, one of my closest colleagues and friends, has been the supervisor of process patents in our company for many years. When he saw my first attempt at a patent chapter, he immediately offered to write one of his own . . .

Hattersheim, September 2023

Michael Kleiber

And remember: You do not need to fear anything, you just need to understand it. And now, it is time to understand more to fear less. (Marie Curie)


Contents

Preface
Preface to the 2nd edition
Preface to the 3rd edition
1 Engineering projects
1.1 Process engineering activities
1.2 Realization of a plant
1.3 Cost estimation
2 Thermodynamic models in process simulation
2.1 Phase equilibria
2.2 φ-φ-approach
2.3 γ-φ-approach
2.3.1 Activity coefficients
2.3.2 Vapor pressure and liquid density
2.3.3 Vapor phase association
2.4 Electrolytes
2.5 Liquid-liquid equilibria
2.6 Solid-liquid equilibria
2.7 φ-φ-approach with gE mixing rules
2.8 Enthalpy calculations
2.9 Model choice and data management
2.10 Binary parameter estimation
2.11 Model changes
2.12 Transport properties
3 Working on a process
3.1 Flowsheet setup
3.2 PID discussion
3.3 Heat integration options
3.4 Batch processes
3.5 Equipment design
3.6 Troubleshooting
3.7 Dynamic process simulation
3.7.1 Basic considerations for dynamic models
3.7.2 Basics of process control for dynamic simulations
3.8 Patents
3.8.1 Novelty
3.8.2 Inventiveness
3.8.3 Industrial applicability
3.8.4 Exceptions from patentability [334, 339]
3.8.5 Patent research and patent monitoring
3.8.6 Inventor's bonus
4 Heat exchange
4.1 Thermal conduction
4.2 Convective heat transfer
4.3 Heat transition
4.4 Shell-and-tube heat exchangers
4.5 Heat exchangers without phase change
4.6 Condensers
4.7 Evaporators
4.8 Plate heat exchangers
4.9 Double pipes
4.10 Air coolers
4.11 Fouling
4.12 Vibrations
4.13 Heat transfer by radiation
5 Distillation and absorption
5.1 Thermodynamics of distillation and absorption columns
5.2 Packed columns
5.3 Maldistribution in packed columns
5.4 Tray columns
5.5 Comparison between packed and tray columns
5.6 Distillation column control
5.7 Constructive issues in column design
5.8 Separation of azeotropic systems
5.9 Rate-based approach
5.10 Dividing wall columns
5.11 Batch distillation
5.12 Troubleshooting in distillation
6 Two liquid phases
6.1 Liquid-liquid separators
6.2 Extraction
6.2.1 Mixer-settler arrangement
6.2.2 Extraction columns
6.2.3 Centrifugal extractors
7 Alternative separation processes
7.1 Membrane separations
7.2 Adsorption
7.3 Crystallization
8 Fluid flow engines
8.1 Pumps
8.2 Compressors
8.3 Jet pumps
8.4 Vacuum generation
9 Vessels and separators
9.1 Agitators
10 Chemical reactions
10.1 Reaction basics
10.2 Reactors
11 Mechanical strength and material choice
12 Piping and measurement
12.1 Pressure drop calculation
12.1.1 Single-phase flow through pipes
12.1.2 Pressure drops in special piping elements
12.1.3 Pressure drop calculation for compressible fluids
12.1.4 Two-phase pressure drop
12.2 Pipe specification
12.3 Valves
12.3.1 Isolation valves
12.3.2 Control valves
12.4 Pressure surge
12.5 Measurement devices
13 Utilities and waste streams
13.1 Steam and condensate
13.2 Heat transfer oil
13.3 Cooling media
13.4 Exhaust air treatment
13.4.1 Condensation
13.4.2 Combustion
13.4.3 Absorption
13.4.4 Biological exhaust air treatment
13.4.5 Exhaust air treatment with membranes
13.4.6 Adsorption processes
13.5 Waste water treatment
13.6 Biological waste water treatment
14 Process safety
14.1 HAZOP procedure
14.2 Pressure relief
14.2.1 Introduction
14.2.2 Mass flow to be discharged
14.2.3 Fire case
14.2.4 Actuation cases
14.2.5 Safety valve peculiarities
14.2.6 Maximum relief amount
14.2.7 Two-phase-flow safety valves
14.3 Flame arresters
14.4 Explosions
15 Digitalization
15.1 Digital transformation
15.2 Digitalization and sustainability
15.3 Digitalization in process industry and green transformation
15.4 Key terms explained
15.5 Models for digitalization
15.6 Data science vs. domain expertise
15.7 Digitalization trends in process industry
15.8 Catching up with the times
Glossary
List of Symbols
Bibliography
A Some numbers to remember
B Pressure drop coefficients
Index

1 Engineering projects

An engineering project is a huge and complex task, usually involving several hundred people. Coming from university, having just finished one's studies, one usually has no clue what is going on beyond one's own desk. In fact, the construction of a chemical plant is often compared to the erection of the pyramids in ancient Egypt. While the weight of a chemical plant is much lower, its complexity is by far greater, and the project can usually be completed in approximately three years instead of twenty. The target for a beginner must be to become an increasingly larger cog in the machine. First, an overview of the particular phases and activities must be obtained.

1.1 Process engineering activities

Plant engineering comprises the conceptual design, the scheduling, and finally the erection of industrial plants. These industrial plants, in most cases built in the chemical industry, are usually very complex, as manifold production steps are involved which are adjusted to each other. A number of specialists from different fields must be coordinated. A plant engineering project finishes with the commissioning and the proof of the guaranteed values.

A plant always belongs to somebody whose target is to quickly earn money by producing the substance the plant is designed for. At the beginning, a feasibility study has to be carried out. A market analysis is performed, which hopefully shows that it is worth starting a more detailed project. For a new process, it has to be checked whether the technical difficulties can be overcome. The legal situation with patents and licenses has to be clarified, and possible locations for the plant are compared, whereby it is often necessary to consider different energy prices or transport costs for raw materials and products. A realistic production capacity and an impression of investment (CAPEX) and operating costs (OPEX) must be available before starting a project (Chapter 1.3). For the production capacity, it must be taken into account that no plant is in operation all the time; usually, 8000 h per year are scheduled, giving approx. 90 % availability. A corresponding overcapacity must be provided in the design.

It is important to know that an engineering project is not a sequential process, where e. g. first the reactors are planned and finished, then the product purification, and so on. This would actually be impossible, because due to recycle streams in the process a complete engineering design of a single part of the plant could never be achieved. Instead, all parts of the plant are worked out simultaneously, with increasing accuracy and degree of detailing.
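The availability figure of 8000 scheduled hours per year quoted above can be checked with a quick back-of-the-envelope calculation. The annual nameplate capacity below is a purely hypothetical number used for illustration:

```python
# Quick check of the availability figure from the text: 8000 scheduled
# operating hours per year. The annual capacity is a hypothetical example.
HOURS_PER_YEAR = 365 * 24            # 8760 h in a non-leap year

scheduled_hours = 8000.0
availability = scheduled_hours / HOURS_PER_YEAR

annual_capacity = 100_000.0          # t/a, hypothetical nameplate capacity
design_rate = annual_capacity / scheduled_hours      # required hourly rate
continuous_rate = annual_capacity / HOURS_PER_YEAR   # rate if running all year
overcapacity = design_rate / continuous_rate - 1.0

print(f"availability ~ {availability:.1%}")    # ~ 91.3 %
print(f"design rate  = {design_rate:.1f} t/h") # 12.5 t/h
print(f"overcapacity ~ {overcapacity:.1%}")    # ~ 9.5 %
```

This is the quantitative meaning of the overcapacity remark: a plant sized for a given annual capacity needs roughly 10 % more hourly capacity than uninterrupted year-round operation would suggest.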
The advantage is that possible bottlenecks and difficulties are detected as early as possible, the interconnections are identified early, and an appropriate number of project participants can be assigned to work on the various parts of the plant. Certainly, this is not the way we are used to in everyday life, and there is often doubt as to whether it makes sense to perform a design of a piece of equipment while it

is clear that the input streams are just preliminary and will change several times during the project. Nevertheless, as mentioned, it is most important to get an overview of the process as soon as possible. And with today's tools, the design from the previous phase is usually an ideal starting point when the preconditions have been subject to change.

The engineering process is divided into certain phases, which are in principle:
– Conceptual Design. Target: the process is fixed, the feasibility is checked, the risks are identified.
– Basic Engineering, also called FEED (Front End Engineering Design). Target: preliminary elaboration of the plant, with all documents available in as good a form as possible.
– Detailed Engineering. Target: complete and accurate description of all parts of the plant and all aspects of building.
Mixed forms ("Extended Basic") and other denominations ("PDP", i. e. Process Design Package) are becoming more and more widespread. Finally, it depends on the contract which activities are included. The process engineer should know what the follow-up activities of his calculations are.

The first phase in a project is the conceptual design, where the first mass and energy balances are prepared, often based on lab trials and estimations. The mass and energy balance is a key issue for all the following activities up to the phase of detailed engineering. A change in the mass balance often has a major impact on all other participants of the project, so it is desirable to make it as exact as possible, and to update it as soon as it makes sense.

There is a certain misunderstanding as to what a mass and energy balance really is. The term "process simulation" is very common, and is also used here, but hardly applies. In fact, what the process does in the steady state for a given set of inlet conditions is calculated, i. e. the streams and the operating conditions of the particular pieces of equipment.
Sometimes, the purpose is in fact to find out how the plant or the equipment behaves, or at least how it reacts, and what the sensitivities are. However, in most cases, its purpose is to generate the data for the design of the equipment, applying conservative cases concerning process conditions or impurities. The exact process conditions that would enable the process engineer to really "simulate" the plant are usually not known, at least not in the Conceptual Design phase. Despite these frequent misunderstandings, "process simulation" is nowadays well acknowledged as a useful tool which requires a well-trained process engineer with a profound knowledge of the process itself, its thermodynamics (Chapter 2), the various pieces of equipment and their peculiarities, and with simulation experience, in order to achieve convergence in the simulation flowsheet, which often turns out to be complex. Nowadays, some well-established commercial (ASPEN, HYSYS, ChemCAD, PRO/II, ProSim) and in-house process simulators (Chemasim at BASF, VTPlan at Bayer) are available, performing calculations that would have been considered absolutely impossible 30 years ago. The genuine process simulation showing the actual plant behavior
with respect to the design of the equipment, the startup behavior, and the process control is called dynamic simulation (Chapter 3.7). Nowadays, its application is becoming more and more popular, and conventional process simulation can be used as a starting point for the dynamic version.

Sometimes, single process steps remain unknown and are represented in the mass balance by simple split blocks. At least, there must be a concept of how to overcome this lack of knowledge and what the effort might be. At the beginning of the basic engineering these points should be completely clarified, and a full mass and energy balance must be available. How this is done is the subject of Chapters 2 and 3. It is desirable that pilot plant activities take place to confirm the mass balance and to assess the influence of the recycle streams. The main purpose of such an activity is to see whether all components are considered and whether none of them accumulates in the process. The particular pieces of equipment are preliminarily designed according to the current knowledge so that it becomes clear which pieces of equipment are critical, either because of their size or because of possible delivery limitations. As well, it must be considered whether the plant can be operated at reduced or increased capacity, which might be necessary for a certain period of time.

Useful tools are the process flow diagrams (PFD), where the whole process is visualized, including the main control loops (Figure 1.1). A PFD is a document for understanding the process; operating data for the important streams and blocks are usually included. The counterpart of the PFD is the process description, which describes the PFD in written form. It should not be excessively detailed, as its main purpose is to enable the reader to understand the essentials of the process.
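The role of recycle streams discussed above, which prevents a sequential part-by-part design and forces the flowsheet to be solved iteratively, can be illustrated with a toy example (not from the book): a single recycle loop solved by successive substitution, the simplest iteration scheme a flowsheet simulator might apply.

```python
# Toy illustration of why recycle streams force an iterative flowsheet
# solution: successive substitution on a single recycle loop.
# The split fraction and feed value are hypothetical numbers.
def solve_recycle(fresh_feed, split_to_recycle=0.3, tol=1e-8):
    """Total reactor feed F satisfies F = fresh_feed + split_to_recycle * F."""
    feed_guess = fresh_feed
    for iteration in range(1000):
        new_guess = fresh_feed + split_to_recycle * feed_guess
        if abs(new_guess - feed_guess) < tol:
            return new_guess, iteration
        feed_guess = new_guess
    raise RuntimeError("recycle loop did not converge")

total_feed, iters = solve_recycle(100.0)
print(round(total_feed, 3))  # 142.857 (analytic value: 100 / (1 - 0.3))
```

No single pass through the flowsheet gives the correct reactor load; the recycle couples the units, so every unit design depends on a stream that is only known after the iteration has converged.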
At the end of the conceptual design phase, equipment and operating costs, and hence the feasibility and its basis, are better defined, often with respect to a possible location.

Figure 1.1: Example of the detailing in a PFD.

In a so-called HAZID (HAZard IDentification), the main issues concerning the safety of the process are discussed and listed for the first time, often with first recommendations. At a later stage, the so-called HAZOP will take place, where all relevant safety issues are discussed (Chapter 14.1). Finally, lists of utilities, raw products, auxiliary substances (e. g. catalysts), and emissions (exhaust air, waste water, solid and organic wastes) are issued.

In the conceptual design phase, the design of the equipment can be done in a preliminary way using rules of thumb. A first optimization of the process should be performed. In process development, optimization is rarely a mathematical problem, where an objective function is defined and somehow minimized. Process simulators offer such a function; however, the author's experience is that in most cases process optimization cannot be translated into an objective function, as many soft factors have to be considered (e. g. danger of fouling, increasing complexity, material issues, ease of startup, etc.). Equipment costs can be estimated by a process engineer as long as only dimension changes are involved; however, it takes a specialist if the type or the material of the equipment changes.

The number of team members in the conceptual design phase is comparatively low, as the process engineering tasks are usually complicated but of limited extent. The complexity of process development is countered by an iterative procedure, where many options are tested to achieve stepwise progress towards an improved process. There is no clear workflow plan; instead, the creativity of the project members is decisive [1]. Nevertheless, it is desirable to compose comprehensible documentation to preserve the process knowledge gained during the assessment of the various options. As there is no special structure available for this purpose, the documentation is done with a final report.
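The remark that a process engineer can scale equipment costs as long as only the dimensions change is commonly implemented with the capacity-exponent ("six-tenths") rule. The book does not name this rule here, and all numbers below are illustrative assumptions; real exponents depend on the equipment type.

```python
# Illustrative cost scaling by the capacity-exponent ("six-tenths") rule.
# All numbers are hypothetical; real exponents depend on the equipment type.
def scaled_cost(known_cost, known_size, new_size, exponent=0.6):
    """Estimate the cost of similar equipment at a different size."""
    return known_cost * (new_size / known_size) ** exponent

# A vessel of 10 m3 costing 50,000 EUR, scaled to 20 m3:
estimate = scaled_cost(50_000, 10.0, 20.0)
print(round(estimate))  # ~ 75,786 EUR, i.e. doubling the size costs ~52 % more
```

The sub-linear exponent captures the economy of scale: doubling the capacity raises the cost by far less than a factor of two, which is why such scaling only works within one equipment type and material.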
A successful and systematic way of optimizing a plant is the so-called value engineering procedure. It starts with a brainstorming session, where all ideas of the process team, both old ones and completely new ones, are formulated, collected, and clustered. Afterwards, these ideas are distributed back to the members of the team. In a standardized procedure, the impact of each idea on CAPEX and OPEX is carefully and comprehensibly evaluated, and the team can finally decide whether the particular ideas are adopted or not.

In the basic engineering phase, the focus of the engineering moves from the process design parameters, like operating temperatures and pressures, flowrates, or compositions, to the geometric dimensions of the process equipment, design temperatures and pressures (Chapter 11) and materials as parameters for the mechanical strength, and the plant layout [2].

First, the design basis must be specified. The design basis is a document which fixes the constraints of the project, e. g. formal things like capacity, operating hours per year, the apportionment of the plant into units, a general description of the plant, consumption figures, and the targeted quality of the product. Discretionary decisions during the project should be avoided as much as possible. The ranges of meteorological data are fixed, i. e. the barometric pressure, the temperature, the air humidity, and other data
such as possible rainfall and its frequency, wind data, data on solar radiation, sea water temperature in coastal regions, and tidal and geotechnical data such as the ground carrying capacity or the frequency and strength of earthquakes. The minimum and maximum conditions of the utilities (steam, cooling water, demineralized water, process water, brine, instrument air, nitrogen for inertization, natural gas, electric current, etc.) are set so that the engineer can choose the design cases with the most unfavorable conditions. The compositions of the raw materials are defined, as are those of the waste streams. The constraints for construction and design are set, such as the allowed tube lengths and shell diameters of heat exchangers, the fouling factors, and the overdesign to be chosen. The engineering standards and guidelines to be applied are listed for the various activities. Often, the client company has its own standards, which are explained in the so-called typicals, where the arrangement of standard equipment is illustrated (Figure 1.2).

The philosophy of backups should be defined, e. g. for pumps. If a pump fails, it is possible that the whole plant must be shut down. A spare pump which is already installed can solve this problem. In less urgent cases it might be sufficient to have a spare pump in the storehouse. Also, a preventive maintenance strategy is often applied, where devices are maintained or even exchanged after a certain time when experience indicates that a failure becomes probable. Any deviation from the design basis must be reported to the client. The design basis is continuously updated until the engineering is finished.

Figure 1.2: Example of a typical for a control valve arrangement.

In the basic engineering, the process equipment is designed both with respect to its function in the process and to the mechanical strength requirements. For each piece of equipment, the first issue of the data sheet is released, containing the data derived from the process, i. e. the process parameters, the dimensions, the nozzles, the materials of construction, and a specification of the insulation [2]. From that point on, the piece of equipment is more or less decoupled from the process itself; the responsibility is taken by the

specialists for construction or machinery, who work out the specifications from the process, especially the mechanical strength. Besides the equipment, specifications are also prepared for the measurement and control devices, the piping, and the safety valves. Detailed lists for equipment, utility consumers, electrical consumers, emissions, control equipment, and instrumentation are compiled.

In collaboration between process and instrumentation engineers, the interlocks are defined. An interlock is an automatic action of the process control system due to safety considerations or equipment protection. Examples are the switch-off of a pump when the liquid level in the vessel downstream falls below the limit (LSLL, level switch low low) to avoid cavitation, or the stop of feed flow and steam to a column after a pressure high-high switch (PSHH) has actuated.

The most important work of the process engineer in the basic engineering is the setup of the piping and instrumentation diagrams (PID), which are the most elaborate documents of the process. While the PFD contains only process-relevant lines and equipment, the PID shows other equipment as well, e. g. auxiliary lines for startup, safety devices, and valves. The instrumentation engineers play a decisive role in fixing the concept for measurement and control, which comprises a large part of the PID work. The PID is continuously updated in the detailed engineering and should finally contain the following information [2]:
– all equipment and machinery, including design data and installed spare parts (e. g. substitute pumps);
– all drives;
– all piping and fittings, nozzles;
– design information for the piping;
– all instruments, control devices and control loops, interlocks with the corresponding signal flow;
– all check valves, safety valves, level gauges, drain lines;
– dimensions and operating data of equipment and machinery, materials of construction, elevations;
– the battery limits, i. e.
an illustration of the agreement on the scope of the project;
– the tie-in information, i. e. how the new plant is linked to existing equipment and utilities.

Figure 1.3 shows the elaborate PID based on the PFD shown in Figure 1.1. Note that only one column is depicted; because of the increased detailing, the second column is on a separate PID. As a rule of thumb, at most 2–3 pieces of equipment can be represented on one PID sheet. Thus, it is normal that a major plant documentation comprises 100–150 PID sheets.

One of the most important tasks of the project management is the coordination of the various teams and the time schedule for the provision of the particular pieces of information on the PID. There should always be a "master" PID, where all the changes

1.1 Process engineering activities

� 7

Figure 1.3: Example of the detailing in a PID.

are supplemented manually. In fact, there is a guideline for these changes: – red color: supplements; – blue color: omissions; – green color: comments. It is a useful agreement to indicate who has made the various changes. At certain stages, the PIDs are frozen, and a new issue is printed out. This happens several times during a project, and minor changes are even made during commissioning. The final PID set is called the “as built” status. The basic engineering should contain all the information in a way that the detailed engineering can be performed without difficulty, and possibly without profound knowledge about the process itself. This is an important requirement, as the detailed engineering is not necessarily carried out by the same company as the basic engineering. One target of the basic engineering is usually a cost estimation of ± 15 %, based on budget price offers for the most important equipment and on scaled prices for standard equipment. During the last two decades, the time provided for engineering and construction has been reduced. Nowadays, the costs for long lead items which are usually expensive must be defined with ± 10 % at the end of the basic phase. Often, even a mandatory order has to be placed. Among others, this explains why ± 30%̇ from former times have been reduced to ± 15 %. While in the basic engineering only the essential information like dimensions, operating conditions or materials is prepared, in the detailed engineering the complete

specification of the whole static equipment and machinery is generated, which enables manufacturers to submit bids. In the detailed engineering, the documents of the basic engineering are further elaborated and finally fixed, so that the following activities can take place [2]:
– preparation of bid invitations for equipment, materials, civil work, and construction;
– selection of manufacturers and vendors;
– quality assurance operations on vendors and manufacturers (“expediting”);
– planning of the transport of plant equipment;
– execution of civil work and plant construction;
– commissioning.

The information from the equipment manufacturers is considered and implemented in the documentation. In this phase, layout, piping construction, process, electrical and other types of engineers work closely together. In the detailed engineering, inconsistencies become less and less permissible. In [1] an example is described: the process engineer fixes the necessary pipe diameter with respect to the requirements of the process. For reasons of cost and accuracy, the instrument engineer chooses a smaller diameter for the flow measurement. This gives a temporary inconsistency, which is tolerable. However, a change is necessary: either the change of the pipe or the instrument diameter, or a reducing adapter. At the end of the project, this change must have been performed; otherwise, money will be wasted due to the ordering of wrong materials, not to mention the possible time delay for the project.

Quality - Time - Cost ... choose two. (Robert Angler)

1.2 Realization of a plant

The engineer’s work is a struggle between the pencil and the eraser. If the pencil wins, there is a chance of getting things finished at some point. (Dimitar Borisov)

With the ongoing project, the plant layout¹ becomes more and more a high-priority item. Plant layout is a procedure which involves knowledge of the space requirements for the facilities and also involves their proper arrangement so that a continuous and steady movement of the products takes place [3].

During recent decades, there has been great progress with regard to tools. Previously, the documents were produced with drawing ink, using special pens with different line widths. Erasing a mistake was a risky procedure; instead of an eraser, a razor blade was used to remove the ink.² The drawings were archived on microfilm. Before the computer evolution began, the only way to convey a 3D impression were isometric drawings. While this seems possible for piping illustrations, one can hardly imagine that this was ever appropriate for equipment drawings, especially if the drawing is subject to changes.

1 The author is grateful to Mrs. Hristina Stegmann, who gave the main part of the input to this section.

Figure 1.4: Example of a detailed plastic model.

Three-dimensional (3D) objects cannot be effectively described in a two-dimensional (2D) space [4]. At least two views of the object are required (see below). From the 1980s on, layout models made of plastics were constructed. They were very useful, as they conveyed a good impression of the final appearance of the plant [2], and they still count as objects worth being shown on guided visitor tours in an engineering company (Figure 1.4). They were set up according to the 2D drawings to verify the concept or to solve special problems concerning piping. Plastic models gave an immediate overall impression, but from the engineering point of view, they had severe disadvantages:
– it was practically impossible to implement major changes;
– a complete representation of the whole plant was a huge effort;
– the accuracy of such a model was limited.

2 Of course, razor blades do not distinguish between the ink and the skin of the user. The blood losses of the author during his studies, usually spread on the drawing, were incredible.

More details can be taken from [4].

In the era of scale models, there were many anecdotal stories of fingers being glued together, inhalation of noxious fumes from plastic solvents, lacerations from cutting tools, and dissolved fingerprints. Today, the 3D CAD modellers suffer from carpal tunnel syndrome and e-mail overload. (found in [4])

With increasing computer capacity, 3D documents have become standard, and their precision is amazing. They can hardly be distinguished from a photograph after the construction is finished. A number of programs are on the market, usually with a quite considerable license fee. While the elementary functions can be operated quite rapidly, it takes a few months until an engineer can claim to use such a system properly. The programs are able to show the plant at any level of detail from any point of view. The required documents can be created automatically, i. e. plot plans, equipment arrangement drawings, piping isometrics, and piping layout drawings. Other documents such as line lists or equipment lists can be consolidated. Creating a 3D model can be as time-consuming as manual drawings, but the saving of time and work occurs downstream in the workflow [4]. Figure 1.5 shows an example.

Figure 1.5: Example of a 3D representation. Courtesy of AVEVA GmbH.

A possible drawback of 3D models is an overconfidence in them, just because of their precision and the amazing visualization technology. It is still the engineer whose abilities are crucial for the quality of the model. The program does not prevent him from making mistakes.

Nowadays, virtual reality (VR) programs offer the next option: a pair of data goggles enables the user to go through the plant and check, for instance, its operability. The system is so close to reality that people like the author, who have no head for heights, are quickly led to the limits of their capability. In recent years, this technology has become more and more accessible and less expensive.


The following report from the ACHEMA 2022 shall illustrate the capability of virtual reality [287]:

As soon as you have understood how to move in the virtual space, you can go up and down the floors in the photorealistic 3D plant. You can have a look at the pumps on the ground floor, follow the column design along its whole height, or watch the course of the piping. It does not help very much to have a good head for heights in the real world; in virtual reality, you will experience the virtual giddiness. But after having overcome this, you will gain a unique and fascinating insight into the structure of the plant, maybe a long time before it is actually erected. Moving in the virtual space as a beginner, you might experience a number of other adventures. Handrails do not stop you from moving further, and neither do column walls. This way, one can end up levitating in the midst of an apparatus or in empty space. Even a deep fall down several floors ends without any injuries or black and blue marks. Just a certain feeling remains which can hardly be described…

Connected to a dynamic process simulation, VR is supposed to be a valuable training tool for the staff, who can be exposed to a large range of training scenarios including hazardous situations. It increases the process and safety procedure knowledge, improves plant reliability, and lowers the accident rate [5]. Whether just 3D or VR is applied will probably be a matter of the cost-benefit ratio. More information can be found in [6].

The layout work already begins in the proposal phase of a project. A first concept must be set up in order to give a first idea of the appearance of the plant and to confirm that the proposed area on the site is sufficient. During the conceptual phase, precise information about the equipment is usually not known, unless reliable data from a reference plant is available. Only the size of key equipment such as reactors, large tanks, and silos can at least be estimated according to their capacity. Therefore, the particular pieces of process equipment are often represented by placeholders with an estimated preliminary size. Their position inside the battery limits is defined with regard to numerous safety and service requirements, considering the natural process flow for an efficient arrangement. When new information is generated, the layout plan is continuously updated. With the first issues of the PIDs, it becomes more and more elaborate, the adjustments to be made become less significant, and it makes sense to begin with the 3D piping work.

There are a number of 2D documents which are often relevant. The genuine documents containing the engineering information are drawings which show the top and the side views. Isometric drawings are often produced to obtain a better impression of the plant. The three axes of space are not perpendicular to each other but angled towards the viewer, with 60° angles between them. The lengths are distorted; they appear foreshortened by a certain factor.
Isometric drawings are more useful in architecture. In engineering, they are only produced for illustration and usually not used as documents containing the information.³ The relevant 2D documents are:

3 For piping description, they are essential.


Figure 1.6: Example for an overall plot plan.

– Overall plot plan: Shows an overview of a complete plant including the battery limits (Figure 1.6).
– Area plot plan: Shows the overview of a part of a plant, e. g. a unit.
– Overall plot plan, isometric view: Shows the view of the complete plant from two opposite directions. It is not really necessary, but gives a good projection of the plant. Nowadays, there are tools which can easily provide this.
– Equipment arrangement drawing: Turns the focus to the particular pieces of equipment. The general rule is that each piece of equipment must be visible from two sides to have a clear definition; usually the top view for defining the ground and the side view to define the tangent line and the center line elevation are required.
– Iso-view of the equipment arrangement drawing: Again, the iso-view does not provide information for the engineer, but it is easy to produce and gives a good idea of the plant.
– Pipe iso (piping isometric drawing): Isometric representation of a pipe with the coordinates for beginning and end and the bends (Figure 1.9). It is automatically generated for all lines; it is used for ordering the material, the manufacturing, and the fitting into the plant. It is also useful for process technology if a pipe must be carefully calculated, e. g. for an exact pressure-drop calculation. Inlet and outlet lines of safety valves are well-known examples.
– Piping layout drawings: Equipment arrangement drawings which show not only equipment and supporting structures but also the pipe lines available in the represented area. As this kind of drawing is often overloaded with information, it is being increasingly replaced by the direct use of the 3D model. Nowadays, viewer programs are available where engineers can use the 3D model without special knowledge about design.

In basic engineering, the main dimensions of the particular pieces of equipment are outlined. The goal is that they are as exact as possible, so that other activities like piping and static calculations for the steel structure can get started in a reasonable way. Other details of the equipment such as nozzles are still omitted. Also, the safety distances between the units or the pieces of equipment are worked out, considering that service and maintenance concepts are set up as well as construction procedures (e. g. necessity of cranes). Taking into account that later on people will spend time in the plant, the concepts for fire and explosion protection, the location of safety showers and escape routes, and the removal of flammable liquids in case of an accident have to be clarified.

For layout, there are a number of basic principles that should be followed, among them:
– Follow the process flow to keep the pipe lengths short. For example, if the condenser of a distillation column is located on the 2nd floor, put the reflux drum on the 1st floor and the corresponding pump on the ground floor. Utility units should be placed near the corresponding tie-in.
– Consider the minimum safety distances. E. g., compressors or units containing a compressor, such as cooling units, need a distance of 9 m to other pieces of equipment, depending on the engineering standard used.
– Estimate the space requirement of the piping around an apparatus and keep it free! At the beginning, there is no information about this, and it is often underestimated. The information is usually generated when it is too late for relocating the apparatus. It is not a solution to keep some space left in a conservative way; the waste of space and the increased length of the pipes are expensive. Nevertheless, some space for unforeseens should always be considered. Insufficient space for pipe lines and instruments mostly leads to poor operability and personnel safety issues. The estimation of the space requirement can best be done according to a reference plant design with a similar capacity. If the reference plant has a significantly different capacity, one should be aware that there is no linear relationship between size and capacity. The process specialists should be able to make a good first guess. The package units are usually the most challenging tasks. The layout information is provided by the vendor, who is chosen at a late stage of the project. Different vendors might use different technologies, and the space demand can vary considerably.
– First create a concept for all escape and service routes and stick to it! For example, provide continuous corridors on each level, arranged in the same way on each level, so that one can orientate oneself even if there is, for example, heavy smoke.
– Peculiarities concerning maintenance and service must be identified and taken into account. A BEU heat exchanger (Chapter 4, Figure 4.12) will probably be regularly dismantled for cleaning. There must be enough space for this operation, and the dismantled tube bundle should point to the road and not to the pipe rack. Especially reboilers which can be dismantled are a delicate issue. As well, dip-pipes (Chapter 9) are useful for directing liquid flow, but the layout engineers have to consider reserved space for dismantling above the apparatus.
– Any machines with movable parts like pumps and compressors need regular maintenance, which is most conveniently done on the ground floor. Furthermore, pumps and compressors cause vibrations, which can be controlled best on the ground floor. For pumps, the location on the ground floor (Figure 1.7) makes sense anyway, as the maximum NPSH value is generated (Chapter 8.1).
– Finally, a sense for symmetry is useful. The vessels should be arranged in a straight line, as well as the outlet nozzles of the pumps (Figure 1.7). The distances are even numbers, e. g. 2 m or 1.5 m. Interconnecting pipes are collected on a pipe rack.

A comparatively new trend is modularization, which means that the whole system is divided into units. These units are called modules, and they are dedicated to a certain process task or unit operation. The modules can be manufactured and assembled in a frame (Figure 1.8). These frames have defined interfaces and can be joined at the site in a relatively easy way. This is useful if only a short time slot for construction and assembly is available. If it takes a long time to get the permission for the construction of the plant, the project can be shortened if at least the modules are ready. Besides the time savings, some other aspects of modularization are:
– Smaller space demand, but the module frames might be so narrow that piping, maintenance and operation might become difficult. In plant engineering, “narrow” is equivalent to “dangerous”, so one has to take care that the safety concept is still fulfilled.
– Lower costs, as assembly does not need to be performed at the site with a large personnel staff.
– Higher quality, as qualified people do the assembly in their own workshop.
– The workshop is roofed, meaning that the assembly is independent of the weather.
– The overall duration of the project can be reduced.
– No concrete foundation necessary.
– Partially, testing and commissioning can be done in the workshop.
– Improved safety due to fewer man-hours at the site.


Figure 1.7: Typical pump arrangement.

Figure 1.8: Fully modularized process blocks.

– Different types of transport limitations (e. g. container sizes).
– Difficult standardization, as customers’ demands are often different and require individual sizing of the equipment.
– Opportunity to test the plant already in the workshop.

Especially in the pharmaceutical and fine chemicals business, more flexibility due to modularization is expected, as units can be rapidly combined or reused in a different application. Even large distillation columns can be pre-fabricated in a horizontal position, which minimizes dangerous installation work at great heights. New developments have the target that even software modules are created which make it easy to integrate a module into the process control system.

Fire protection is one of the most important concepts of a chemical plant, as its target is to protect human life and resources, and namely, in this order. It has three components:
– Fire prevention: Fire prevention comprises the education of the staff, the working procedures for fire and explosion prevention, and emergency case procedures. Layout takes care that the safety distances are kept and that the access for fire engines is ensured [2].
– Passive fire protection: The main target of passive fire protection is to avoid the spreading of the fire across the plant. This can be achieved by the use of fire-resistant walls and floors and the fire protection of equipment supports and steel structures.
– Active fire protection: Active fire protection is the system for the delivery and distribution of fire-fighting water or, alternatively, the foam generation system.

The piping engineering is an iterative procedure, as it depends on more or less all other disciplines. While the first approaches are based on assumptions, the details become more and more available, and at the end of the detailed engineering the line routing is fixed, i. e. all the bends and lengths are specified in isometric drawings (Figure 1.9) so that each pipe can be manufactured. The process engineer must then check whether the exact line routing corresponds to the assumptions made in the basic engineering; e. g. the outlet lines of safety valves (Chapter 14.2) must not generate significantly more

Figure 1.9: Example of an isometric drawing of a line.


pressure drop than calculated before, and the pump specification (Chapter 8.1) has to be rechecked.

In piping engineering, it is an important procedure to classify all components (pipes, fittings, flanges, valves, sealings, insulation, etc.) with respect to the media and the maximum operating conditions. A piping class represents such a set of conditions. In the PID, the corresponding information is included in the identification code of each line. A piping list is composed which indicates the material requirements for the piping, which is closely related to the costs. Extensive mechanical strength calculations take place. The necessary wall thickness can be evaluated (Chapters 11 and 12), and elasticity calculations are performed to make sure that the stress resulting from temperature changes during operation is tolerable.

Further documents which are compiled for the construction are:
– underground coordination plan (foundations, underground lines, pits, channels);
– civil info plan (steel and concrete structures, paving);
– escape and rescue plan (escape routes, eye-showers, alarm buttons, fire and gas detection);
– room data list (surfaces, air condition);
– load data list.

The load data list is one of the most important documents, compiling the weights of the particular pieces of equipment, which are decisive for the design of the supporting structures; failures can lead to serious accidents or at least to significant delays.

For the commissioning, an operating manual is prepared where all activities in startup, operation, and shutdown are described in detail, broken down to the positions of the valves at the various activities. Also, it is checked whether the recommendations from the HAZOP study have been considered. An important tool for this purpose is the so-called cause & effect matrix, which connects failures to the actions of the interlocks and gives an overview of the interlock structure.
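The idea of a cause & effect matrix can be pictured as a simple mapping from initiating failures (causes) to interlock actions (effects). The sketch below reuses the LSLL and PSHH interlock examples from earlier in this chapter; all tag names (B-101, P-101, C-201) and the matrix content are invented for illustration and are not taken from a real interlock specification.

```python
# Illustrative sketch of a cause & effect matrix: rows are causes (trip
# signals), columns are effects (interlock actions). The LSLL/PSHH examples
# follow the interlock discussion earlier in this chapter; all equipment
# tags are invented.

EFFECTS = ["stop pump P-101", "stop feed to C-201", "stop steam to C-201"]

# MATRIX[cause][effect] is True if the cause triggers the effect.
MATRIX = {
    "LSLL vessel B-101": {"stop pump P-101": True},
    "PSHH column C-201": {"stop feed to C-201": True, "stop steam to C-201": True},
}

def actions(cause):
    """Return all interlock actions triggered by a given cause."""
    return [e for e in EFFECTS if MATRIX.get(cause, {}).get(e, False)]

print(actions("PSHH column C-201"))  # ['stop feed to C-201', 'stop steam to C-201']
```

In a real project, such a matrix is of course maintained as a controlled document, not as code; the sketch only shows how it links each failure to its set of interlock actions.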
The electrical demand of the process is specified, concerning voltage levels, locations of the distribution stations, and cable dimensions with respect to the electrical consumer list. Finally, the process control system is set up. It illustrates the operation of the plant. In a central station, all information is gathered and made available to the plant staff. The data are usually stored so that they can be used for later analyses.

The consistency of the documents becomes more and more important for the quality of the engineering, whereas the typical process engineering issues like design, process performance, and economy are regarded as finished. In fact, changes at a late stage of a project are always associated with considerable cost and should be avoided.

It is amazing that even in Germany there is no chair for “administrative process engineering”. (overheard at the evening meeting of the UNIFAC Consortium)


Figure 1.10: Matrix organization [2]. © Wiley-VCH Verlag GmbH & Co. KGaA. Reproduced with permission.

All engineering activities are strongly affected by the organization structures in the company and in the project. Currently, the most common organization form is matrix project management (Figure 1.10). Each project has a number of engineers coming from the various disciplines (here: process, engineering, procurement, construction; additional disciplines may be piping, instrumentation, electrical, layout, etc.). Normally, they report to the head of their division, and for the time of the project they are also members of the particular project teams, where they report to the project manager. The project manager decides which activities have to be executed and when the due date is, while the heads of the divisions delegate appropriate people to the project and supervise their technical decisions.

The project manager is directly responsible for the project and reports to the board of his company; for the customer, he is the main partner in discussions. He ensures compliance with budget costs, time schedules, and quality level [2]. It is neither possible nor necessary that the project manager is a specialist in all disciplines. However, he⁴ should have sufficient knowledge about the activities to be performed in the various subsections of the project, as it is up to him to initiate and control them. Furthermore, he must have a profound understanding of the interplay of the disciplines. Managing a project does not mean checking the cells in the “completed” column of an EXCEL file; one must be able to inquire about the status of an activity and assess the answers [251].

4 The author is grateful to have experienced a number of capable and competent ladies as project managers. However, none of them would use the gender language, and neither do I.


In an experienced engineering team, a bureaucratic schedule, which is paper waste in most cases anyway, is not needed. Everyone knows what to do. In an engineering team without knowledge about the procedures of plant construction and the design of the equipment, project management does not help. As well, project management is useless when the members of the team cannot organize themselves. Either one can organize or one can’t. (Manfred Nitsche [286])

Even after the successful startup of the plant, the engineering work is not yet finished [1], as further projects (e. g. maintenance, revamping, debottlenecking) will take place during the whole life cycle of the plant. For such activities, an accurate documentation of the plant as it was created during basic and detailed engineering is valuable. Not only the current state (“as-built”) is needed, but also the history, which reveals the reasons and presumptions for the design of the plant, e. g. the various operation cases relevant for the dimensioning of a heat exchanger. Documentation is, apart from the content itself, an art of its own, usually associated with various software compatibility problems, as software changes with time. Nowadays, it is clear that a process documentation which has to be updated continuously cannot be handed over in a paper version, but as files with common software. The data exchange is often difficult if different software packages are applied, even for different revisions and configurations of the same software.

1.3 Cost estimation

Cost estimation is certainly one of the most nontransparent things which engineers encounter at the beginning, as it is not really studied at the university. Nevertheless, it is one of the most important parts of any project. Costs are divided into investment costs (CAPEX) and operation costs (OPEX); the latter can be further divided into fixed costs (e. g. salaries of the staff) and variable costs, which are proportional to the production output (e. g. raw materials, utilities). On the other hand, there are revenues due to the sale of the product.

Having a heat and mass balance and the specific prices available, one can perform an estimate of operation costs and revenues. This first approach is certainly inaccurate, but on its basis one can decide whether it is worth continuing with the more labor-consuming investment cost estimation. The revenues must be significantly higher than the operation costs; earning money is the purpose of building a plant, and the payback of the investment costs including interest must take place before anything is in fact earned. During the course of the project, the estimate of operation costs and revenues is continuously updated.

The estimation of the investment costs requires a lot of experience and knowledge about similar or even previous projects of the same kind. In the early stage of the project, until cost estimates for each item are available, the estimation of the investment costs is based on the costs of the major equipment. Additionally, costs for bulk material (piping, instrumentation, etc.) and construction are considered by percentages of the major equipment or the total engineering costs; these percentages depend on the various

constraints of the project (for example, buildings to be erected, use of expensive materials). Moreover, the requirements of the engineering company have to be considered; the project costs are therefore supplemented by the costs for engineering and procurement, license fees, and a profit margin.

The estimation of the equipment costs should be based on costs known from previous projects. Especially the prices for the materials must be considered appropriately. The dimensions can be considered by the six-tenths law:

    P(C1) = P(C0) · (C1/C0)^M ,    (1.1)

where C is the capacity of the equipment and P is its price. For the exponent, M = 0.6–0.7 ≈ 6/10 is often a good approach. This refers to the fact that the volume V of an apparatus is proportional to the capacity, and the surface, which determines the necessary amount of material, is proportional to V^(2/3). More elaborate values for the exponent M and orders of magnitude for prices are given in [7].

Equation (1.1) also illustrates the economy-of-scale effect. While the revenues of the plant operation increase linearly with the capacity of the plant, the investment costs both for equipment and the whole plant increase far more slowly. The larger the capacity is, the relatively smaller the investment costs are.

Once the costs for the equipment are calculated, the costs for the whole project can be estimated by means of factors referring to the costs of the equipment. Great experience is necessary to assess their values. From the literature [7], the factors in Table 1.1 can be taken as a rule of thumb for processes with fluids. The item “contingency” accounts for uncertainties in the process or in project execution. Each of the values listed in Table 1.1 has a certain range, depending on the type of plant. The values can be reliably determined if a plant of the same type or at least a similar one has been analyzed before. The factors become smaller if expensive materials are used; in this case, costs for activities like engineering stay the same, but the costs for the main equipment rise, so that the percentage becomes lower.

Capital costs for a chemical plant are often in the range of 50 million to 1 billion €, and a possible investor must be convinced that he will get his money back within a manageable time period. For this purpose, the revenues from the sale of the products must be higher than the operating costs. The best overview is achieved by referring to 1 t of the main product, as Table 1.2 with its fictitious example shows.
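As a quick numerical illustration of the six-tenths law, Eq. (1.1): the reference price of 100 000 € and the capacities in the sketch below are invented example figures; only the formula itself comes from the text.

```python
# Sketch of the six-tenths law, Eq. (1.1): P(C1) = P(C0) * (C1/C0)**M.
# The 100 000 € reference price and the capacities are made-up example
# figures; the exponent M = 0.6-0.7 is the rule of thumb quoted in the text.

def scaled_price(p0, c0, c1, m=0.6):
    """Estimate the price of equipment of capacity c1 from a known
    reference price p0 at capacity c0, using exponent m."""
    return p0 * (c1 / c0) ** m

# Doubling the capacity raises the cost only by a factor 2**0.6 ≈ 1.52,
# which is the economy-of-scale effect discussed above.
p = scaled_price(p0=100_000.0, c0=10.0, c1=20.0, m=0.6)
print(f"{p:.0f} €")  # about 152 000 €
```

The same relation applied to the whole plant explains why a doubled capacity roughly doubles the revenues but increases the investment costs by only about 50 %.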
The calculation of the production costs is not only used for the decision whether a project is feasible or not but also later for the cost control of the process. One should be aware that Table 1.2 is only an example. Depending on the product, the cost structures can of course differ, and finally, it is much more detailed. It can include site-related costs like license fees, fire brigade, staff canteen, site bus, site streets, staff association, and, of course, taxes. Nevertheless, some typical numbers are worth discussing. It is typical that raw materials make up a large part of the production costs.


Table 1.1: Factors for evaluation of the capital costs [7].

Item                            Factor
Main equipment                  1.0
Equipment stationing            0.4
Piping                          0.7
Instrumentation                 0.2
Electrical                      0.1
Utilities                       0.5
Offsites                        0.2
Buildings                       0.2
Site preparation                0.1
Total equipment costs           3.4
Engineering                     1.0
Contingency                     0.4
Total fixed capital costs       4.8
Working capital (inventories)   0.7
Total capital costs             5.5
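As a sketch of how Equation (1.1) and the factors of Table 1.1 are used together, the following lines scale a main equipment cost to a new capacity with the exponent M and build up the total fixed capital costs. The function names and the base cost/capacity numbers are illustrative assumptions, not values from the book.

```python
# Illustrative sketch: capacity-exponent scaling (Equation (1.1)) combined
# with the capital cost factors of Table 1.1. Base cost and capacities
# are fictitious.

def scaled_equipment_cost(base_cost, base_capacity, new_capacity, m=0.65):
    # P2 = P1 * (C2/C1)**M with M = 0.6-0.7
    return base_cost * (new_capacity / base_capacity) ** m

# Factors from Table 1.1, all referred to the main equipment cost (= 1.0)
FACTORS = {
    "main equipment": 1.0, "equipment stationing": 0.4, "piping": 0.7,
    "instrumentation": 0.2, "electrical": 0.1, "utilities": 0.5,
    "offsites": 0.2, "buildings": 0.2, "site preparation": 0.1,
    "engineering": 1.0, "contingency": 0.4,
}

def total_fixed_capital(main_equipment_cost):
    # Table 1.1: total fixed capital costs = 4.8 x main equipment cost
    return main_equipment_cost * sum(FACTORS.values())

equip = scaled_equipment_cost(10e6, 100_000, 200_000)  # capacity doubled
print(f"main equipment: {equip/1e6:.1f} M$, "
      f"fixed capital: {total_fixed_capital(equip)/1e6:.1f} M$")
```

Doubling the capacity raises the equipment cost only by a factor of 2^0.65 ≈ 1.57 rather than 2, which is exactly the economy-of-scale effect described above.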

Without a reasonable value creation from the raw materials to the product, there is hardly a chance of making the process feasible. This value creation should be checked first, especially the amount of product per t of raw material, which is easily possible even without a detailed mass balance.5 Often, fluctuations of the prices of product and raw materials are decisive even for the feasibility of the process. In comparison, utilities play a minor role; however, the optimization of the energy consumption is often the only thing which can be influenced by good process engineering. An electrical consumption of 1 MW per 100 000 t/a product capacity is a typical value, which can be exceeded if the process contains one or more compressors. Energy costs differ greatly according to the site location; on the Arabian peninsula or in the United States, steam costs are easily below 10 $/t, whereas in Europe or Asia 30 $/t are realistic. The numbers for the product revenue should not be based on single values on a daily basis. Instead, the market and its fluctuations must be understood. It must be clear on what basis the number is founded and how the forecast was performed. Explaining what the numbers in Table 1.2 are based on is also one of the main tasks of a process engineer. Sellable co-products are sometimes generated; in contrast to this, byproducts are often simply losses and have to be disposed of. For sellable co-products, an additional distribution channel must be established, and furthermore, from the process point of view, an additional quality surveillance is necessary. The cost structure should

5 An example is given in Section 3.1.

Table 1.2: Example for the structure of the operation costs.

Capital costs           200 000 000 $
Capacity                200 000 t/a (25 t/h)
Operating time p. a.    8 000 h/a
Product sales           1 500 $/t

Item                    Consumption   Unit     Price   Unit     Costs ($/t)
Raw materials
  Raw material A               40.0   t/h        400   $/t           640.00
  Raw material B               25.0   t/h        100   $/t           100.00
  Ammonia                       5.0   t/h        600   $/t           120.00
Utilities
  Cooling water            10 000.0   m³/h      0.03   $/m³           12.00
  Steam                       200.0   t/h         20   $/t           160.00
  Electrical energy         3 000.0   kW         0.1   $/kWh          12.00
Effluents
  Waste water                   3.0   m³/h        10   $/m³            1.20
  Combustible waste             0.5   m³/h        40   $/m³            0.80
  Co-product for sale           5     t/h       −150   $/t           −30.00
Variable costs                                                      1016.00
Personnel               40 × 70 000 $/a                               14.00
Overhead                15 % of personnel                              2.10
Maintenance, insurance  2 % of capital costs                          20.00
Capital costs           10 years depreciation                        100.00
Fixed costs                                                          136.10
Production costs                                                    1 152.10
Product sales                                                       1 500.00
Margin                                                                347.90
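The arithmetic behind Table 1.2 can be reproduced in a few lines: each specific cost is consumption × price divided by the production rate of 25 t/h, and the fixed cost items are referred to the annual capacity. The snippet below is only a re-calculation of the table; the co-product credit is the value consistent with the printed totals.

```python
# Re-calculation of Table 1.2 ($/t of product).
capital = 200_000_000          # $
capacity_a = 200_000           # t/a
rate = 25.0                    # t/h

# item: (consumption per hour, price per unit)
items = {
    "raw material A":    (40.0, 400.0),     # t/h, $/t
    "raw material B":    (25.0, 100.0),
    "ammonia":           (5.0, 600.0),
    "cooling water":     (10_000.0, 0.03),  # m3/h, $/m3
    "steam":             (200.0, 20.0),
    "electrical energy": (3_000.0, 0.1),    # kW, $/kWh
    "waste water":       (3.0, 10.0),
    "combustible waste": (0.5, 40.0),
    "co-product":        (5.0, -150.0),     # sold, hence a credit
}
variable = sum(cons * price / rate for cons, price in items.values())

personnel = 40 * 70_000 / capacity_a       # 40 persons at 70 000 $/a
overhead = 0.15 * personnel
maintenance = 0.02 * capital / capacity_a
depreciation = capital / 10 / capacity_a   # 10 years of depreciation
fixed = personnel + overhead + maintenance + depreciation

print(f"variable: {variable:.2f} $/t, fixed: {fixed:.2f} $/t, "
      f"production costs: {variable + fixed:.2f} $/t")
# prints variable: 1016.00 $/t, fixed: 136.10 $/t, production costs: 1152.10 $/t
```

Writing the calculation down this way makes it easy to see how a price change of a single raw material propagates into the margin.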

not indicate that the sale of the co-products would tip the scales to make the project feasible. Among the fixed cost items, usually only the depreciation of the plant plays a decisive role. Rules of thumb indicate the following ranges for the operation costs:
– capital costs: 15–30 %;
– energy costs: 10–40 %;
– raw materials: 30–90 %;
– salaries: 5–25 %.
The ratio between fixed and variable costs determines whether there is an economy of scale. The variable costs are proportional to the production rate; per t of product, they do not change in a larger plant. The fixed costs stay constant (e. g. personnel costs) or increase less than


proportional to the capacity (investment costs; see Equation (1.1)). The more the fixed costs determine the cost structure, the larger is the effect of the economy of scale. A good compilation of cost estimation for chemical plants can be found in [8].

There are a number of characteristic values to assess the feasibility of a project. The most widely used one is the so-called net present value [9, 10]. For its evaluation, the period where the project produces positive or negative cash flow is divided into time segments, in most cases years. An interest rate is considered, which takes into account that the investment takes place at the beginning of the project, whereas the revenues and further expenses occur later. With this interest rate, all cash flows are referred to the time when the project starts. The value of the interest rate is chosen in a way that it represents the assumed risk of the project. For the cash flow evaluation, the following constituent parts are considered:

Cash flow = (revenues − fixed costs − variable costs) ⋅ (1 − tax rate) + depreciation ⋅ tax rate − investment costs    (1.2)

where the revenues are calculated as price multiplied by quantity of the product. Fixed and variable costs depend directly on the economic and operating assumptions of the process. Integrated over the lifetime of the project, the net present value (NPV) gives

NPV = Σ[i=0→T] (Cash Flow)i/(1 + r)^i    (1.3)

with T lifetime of the project in years, i number of the year, and r interest rate. The NPV maximizes the value for the company performing the project, as it assumes that the sales really take place as planned, which hardly ever happens in the real world. It depends strongly on the scale of the project, and therefore it is difficult to compare different scenarios. It is more sensitive to the annual revenues than to the investment costs. Constraints like the availability of the initial investment and the market situation must be set [10]. Directly related to the NPV is the internal rate of return (IRR), which is defined as the interest rate which gives a zero NPV in Equation (1.3). The IRR is independent of the scale of the project. From the mathematical point of view, it is not easy to handle, as multiple solutions often exist. It should not be used as the economically decisive criterion without a critical analysis [10]. There are a number of other methods available, which are well explained in [8]. A very simple criterion which is often used for the assessment of a smaller project (e. g. a revamp for heat integration) is the static payback period: one calculates how much time it will take until the investment costs for a project are covered by the revenues. The static payback period should be less than 3 years to make a project feasible.
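Equations (1.2) and (1.3) and the static payback period can be sketched as follows. The cash flow series is made up (invest 100 M$ in year 0, a constant 20 M$/a afterwards), and the simple bisection for the IRR is my own illustration; as noted above, it returns only one root even if several exist.

```python
def npv(cash_flows, r):
    # Equation (1.3): NPV = sum of CF_i / (1 + r)**i over the project life
    return sum(cf / (1.0 + r) ** i for i, cf in enumerate(cash_flows))

def irr(cash_flows, lo=-0.9, hi=10.0, n=200):
    # interest rate giving NPV = 0; plain bisection, finds one root only
    f_lo = npv(cash_flows, lo)
    for _ in range(n):
        mid = 0.5 * (lo + hi)
        f_mid = npv(cash_flows, mid)
        if f_lo * f_mid <= 0:
            hi = mid
        else:
            lo, f_lo = mid, f_mid
    return 0.5 * (lo + hi)

cf = [-100e6] + [20e6] * 10   # made-up example: 100 M$ invest, 20 M$/a return
print(f"NPV at 8 %: {npv(cf, 0.08) / 1e6:.1f} M$")   # positive -> attractive
print(f"IRR: {irr(cf) * 100:.1f} %")
print(f"static payback: {100e6 / 20e6:.0f} years")
```

For this example, the NPV at 8 % interest is positive (about 34 M$) and the IRR is roughly 15 %, while the static payback period of 5 years would already be considered too long by the rule of thumb above.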


Figure 1.11: Example of a sensitivity spider chart.

The so-called sensitivity spider chart is a useful tool to evaluate how the economic situation varies if one of the assumptions varies. This way, one gets a feeling about the most decisive assumptions which have to be continuously watched and updated. An example is shown in Figure 1.11. The whole costing procedure is well described in [252, 253] and [320].

2 Thermodynamic models in process simulation

Without reliable physical properties, a process simulator is just an expensive random number generator. (A. Harvey and A. Laesecke, 2002)

The calculation of vapor–liquid equilibria for multicomponent mixtures will only work for ideal mixtures with an ideal vapor phase. (E. Kirschbaum, 1969)

The understanding of the physical properties of fluid substances and their phase equilibria is one of the main keys to successful process engineering. Although there are more exciting problems to solve in process engineering, almost every larger project starts with the clarification of the physical property situation. Without a sound understanding of it, a reasonable process simulation cannot be carried out. Mainly, the following properties are important for process engineering:
– thermodynamic properties:
  – density;
  – enthalpy;
  – phase equilibrium;
– transport properties:
  – viscosity;
  – thermal conductivity;
  – surface tension;
  – diffusion coefficients.
For the mass and energy balance, only the thermodynamic properties are required, whereas the transport properties play a major role in equipment design, e. g. for columns, heat exchangers, or pumps. Among the thermodynamic properties, the calculation of the phase equilibria plays the key role for the whole process simulation, as it determines the separation steps, which often cause 60–80 % of the total costs of the process. In process simulators, a model has to be chosen which mainly decides in which way the phase equilibria are determined; it also has an influence on the evaluation of densities and enthalpies. It is not necessary but advantageous if one model can be used for the whole process; each model change holds the risk that inconsistencies are introduced (Chapter 2.11). Two kinds of models can be distinguished: the equation-of-state models (φ-φ-approach) and the activity coefficient models (γ-φ-approach). The differences and the advantages and disadvantages of these two approaches should be known for a reasonable choice of the model. In the following sections, the essentials of the most important models are introduced and discussed without any thermodynamic framework. For more details of these models, see [11].
The importance of the particular physical properties has been rated in Table 2.1 [12], where the relationship between the accuracy of the particular properties and the influence on the investment costs is listed.

Table 2.1: Example of a relationship between physical property accuracy and investment costs [12].

Physical property        % error   % error capital cost
Thermal conductivity       20 %                   13 %
Specific heat capacity     20 %                    6 %
Heat of vaporization       15 %                   15 %
Activity coefficient       10 %                  100 %
Diffusion coefficient      20 %                    4 %
Viscosity                  50 %                   10 %
Density                    20 %                   16 %

Certainly, this table should not be taken as the absolute scientific truth but as an exemplary case study for illustration. The outstanding item is the large influence of the activity coefficient.1 The author would agree upon its importance; however, in fact the costs are driven by the separation factors

αij = (psi γi)/(psj γj)    (2.1)

The importance of its accuracy strongly depends on the case. When the separation factors are far away from unity, the influence of the accuracies of the activity coefficients and vapor pressures is limited. When the separation factors are close to unity, their influence is incredibly high, and, moreover, they can even decide whether a separation is possible at all. The heat of vaporization is clearly proportional to the reboiler duty in a distillation and therefore to the size of the reboiler; thus, the proportionality is quite in line with the experience of the author. In comparison to the heat of vaporization, the influence of the specific heat capacity is smaller. Nevertheless, single pieces of equipment can be strongly influenced (e. g. liquid-liquid heat exchangers), and errors occur pretty often (Chapter 2.8). The transport properties, thermal conductivity and viscosity, have an influence on the heat transfer coefficient in heat exchangers. The author would guess that the influence of the viscosity is greater for large viscosities. Moreover, errors in the viscosity occur quite frequently, especially for mixtures, whereas thermal conductivities do not vary too much for the particular liquids, with the exception of water and glycols. Vapor viscosities and thermal conductivities are usually not measured but estimated anyway.

1 The activity coefficient γ will be explained in Chapter 2.1. In the context of this paragraph, it is defined as a factor describing the deviations from Raoult's law. It can be interpreted as a correction factor for the concentration.

The accuracy of estimations of physical properties is often of great interest. This question is not easy to answer, as most of the authors are suspected to claim a higher accuracy than they should. However, it is hard to imagine how objective criteria can be set up. Clearly, the average deviation of the fit to the data available is an indicator, but certainly the fit to unknown data will be worse. Other authors try to leave out certain data sets in the fitting process [13]; however, one does not know according to which criteria they are chosen. Another method is to predict new data sets before they are integrated into the database [14]. This method cannot deliver large amounts of examples, and, moreover, it is not reproducible, as after testing the new data will certainly be added to the database. A thorough examination has been done in Table 2.2.

Table 2.2: Accuracy of physical property predictions [15].

Physical property               Error expected   Error desired
Heat of formation               2.5–4 kJ/mol     4 kJ/mol
Liquid heat capacity            > �� %           10 %
Liquid density                  > � %            2 %
Vapor pressure                  > �� %           10 %
Normal boiling point            6 K              3 K
Transport properties            > �� %           10–20 %
Heat of vaporization            15 %             15 %
Limiting activity coefficient   > �� %           10 %

The conclusions are as follows [15]:
– the accuracy of the methods is not at the industrial target level;
– experimental data for thermal and transport properties are limited;
– group contribution methods seem to have reached their potential; there is hardly room for improvement.
To meet the industrial demand, new approaches are necessary. Perhaps the neural network technology is such an approach (see Chapter 15).
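To make the sensitivity argument around Equation (2.1) concrete, the short sketch below (with made-up numbers) shows that a 10 % error in an activity coefficient is harmless when α is far from unity, but can even reverse the predicted separation direction when α is close to 1.

```python
def separation_factor(ps_i, gamma_i, ps_j, gamma_j):
    # Equation (2.1): alpha_ij = (ps_i * gamma_i) / (ps_j * gamma_j)
    return (ps_i * gamma_i) / (ps_j * gamma_j)

# wide-boiling pair (made-up numbers): a 10 % error in gamma barely matters
a_wide = separation_factor(3.0, 1.2, 1.0, 1.0)
a_wide_err = separation_factor(3.0, 1.2 * 0.9, 1.0, 1.0)

# close-boiling pair: the same 10 % error pushes alpha below 1, i.e. the
# model now predicts the wrong component at the top of the column
a_close = separation_factor(1.0, 1.05, 1.0, 1.0)
a_close_err = separation_factor(1.0, 1.05 * 0.9, 1.0, 1.0)

print(f"wide: {a_wide:.2f} -> {a_wide_err:.2f}, "
      f"close: {a_close:.2f} -> {a_close_err:.3f}")
```

This is the quantitative reason why the 100 % capital cost sensitivity of the activity coefficient in Table 2.1 is plausible for close-boiling separations.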

2.1 Phase equilibria

If anyone claims to have understood phase equilibrium thermodynamics, there is still the option to explain it again.

As mentioned, the knowledge and understanding of the various phase and chemical equilibria is the key to a successful process simulation. Two-phase regions have a great importance in technical applications. Even for a one-component system, phenomena occur which need to be discussed thoroughly. Exemplarily, Figure 2.1 illustrates the isobaric vapor-liquid equilibrium of water when it is heated from t1 = 50 °C to t2 = 150 °C at atmospheric pressure.

Figure 2.1: Temperature change of water at p = 1.013 bar with respect to the heat added [11]. © Wiley-VCH Verlag GmbH & Co. KGaA. Reproduced with permission.

In the two-phase region, vapor and liquid coexist at the same temperature and pressure [11]. In this case, both liquid and vapor are called saturated. If the saturated liquid is further heated, the temperature does not change. Instead of a temperature rise, the liquid is vaporized. After the last drop of liquid has vanished, the temperature rises again. Figure 2.1 clearly indicates that much more heat is consumed for the evaporation of water (enthalpy of vaporization, Δhv) than for the 100 K temperature elevation.

The state of a pure substance in the phase equilibrium region is not characterized by temperature and pressure as in the one-phase region. Temperature and pressure are related; the relationship

ps = f(T)    (2.2)

is the vapor pressure curve. For the complete determination of the two-phase system, the vapor quality x

x = n″/(n′ + n″)    (2.3)

is necessary, where the superscripts ′ and ″ denote the saturated states of liquid and vapor, respectively. x = 0 means a saturated liquid, x = 1 means a saturated vapor.


In the two-phase region, the specific volume v, the specific enthalpy h, and the specific entropy s can be written as

v = x v″ + (1 − x) v′    (2.4)
h = x h″ + (1 − x) h′    (2.5)
s = x s″ + (1 − x) s′    (2.6)

Vapor pressure and enthalpy of vaporization of a pure substance are related by the Clausius–Clapeyron equation

Δhv = T (dps/dT) (v″ − v′)    (2.7)
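Equations (2.4)–(2.7) are straightforward to evaluate once the saturated states are known. In the sketch below, the saturation data are rounded illustrative values for water at 100 °C (they are not meant to replace a steam table), and the slope of the vapor pressure curve is a rounded number as well.

```python
# Quality-weighted two-phase properties (Eqs. (2.4)-(2.6)) and the
# Clausius-Clapeyron equation (2.7). Rounded values for water at 100 C.
T = 373.15                        # K
v_liq, v_vap = 0.001043, 1.673    # m3/kg, saturated specific volumes
dps_dT = 3616.0                   # Pa/K, slope of the vapor pressure curve

def two_phase(x, prop_liq, prop_vap):
    # x = 0: saturated liquid, x = 1: saturated vapor
    return x * prop_vap + (1.0 - x) * prop_liq

v_mix = two_phase(0.5, v_liq, v_vap)          # Eq. (2.4)
dh_v = T * dps_dT * (v_vap - v_liq)           # Eq. (2.7), J/kg
print(f"v(x=0.5) = {v_mix:.3f} m3/kg, dh_v = {dh_v/1000:.0f} kJ/kg")
```

The result of roughly 2256 kJ/kg for Δhv matches the well-known enthalpy of vaporization of water at 1.013 bar, which illustrates why Figure 2.1 shows such a long evaporation plateau.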

For a multicomponent system, we must also consider that both phases have different composition, and the distribution of each component is a central issue to be solved by the thermodynamic model. As will be shown later, the behavior of a multicomponent system is mainly described by binary subsystems. The best way to illustrate the vapor-liquid equilibrium of a binary system is the pxy diagram at constant temperature. Figure 2.2 gives an example for the system ethanol/water.

Figure 2.2: Example for a pxy diagram [11]. © Wiley-VCH Verlag GmbH & Co. KGaA. Reproduced with permission.

In the upper part of this diagram, there is the liquid region above the bubble point curve. Below the dew point curve, there is the vapor region. Between bubble and dew point curve, there is the two-phase region. For a specific pressure, a horizontal tie-line connects the corresponding liquid and vapor points in phase equilibrium, referring to their concentrations on the abscissa. These phase equilibrium diagrams can be obtained by correlating phase equilibrium data. There are several experimental options [11]; the most popular ones are to measure both the vapor and the liquid concentration and the temperature at constant pressure, or to measure the bubble point at constant temperature for certain liquid concentrations. The latter option should be preferred in most cases; isothermal data are much more useful for the adjustment of model parameters [16].

Bubble and dew point curve meet at the ordinates at x = 0 and x = 1, indicating the corresponding pure component vapor pressures. If the temperature is greater than the critical temperature of one of the components, the vapor pressure of this component does not exist, although it can take part in a phase equilibrium in the dilute state. In Figure 2.3, the typical change in the shape of the pxy diagram is shown for the system nitrogen/methane when nitrogen becomes supercritical.

Figure 2.3: pxy diagram with one component becoming supercritical [11]. © Wiley-VCH Verlag GmbH & Co. KGaA. Reproduced with permission.

For these supercritical systems, a strange phenomenon called retrograde condensation occurs (Figure 2.4).

Figure 2.4: Retrograde condensation.

One must be aware that dew point and bubble point lines meet in a maximum, which is a critical point (CP). The solid line on the left of the CP is the bubble point line, separating the liquid from the two-phase region. The dashed line on the right-hand side of the CP and below is the dew point line; it separates the vapor from the two-phase region. Consider a mixture with a light-end concentration slightly larger than at the CP. Coming from a high pressure in the vapor region, it meets the dew point line at point 1 when the pressure is lowered. Although the pressure has been reduced, unexpectedly a first liquid drop is formed. Further reduction of the pressure leads to further formation of liquid (point 2). According to the lever rule (see Glossary), the amount of liquid can be determined from the diagram. After passing a maximum amount of liquid, the liquid evaporates again when the pressure is further reduced, until it has vanished completely (point 3). Retrograde condensation plays a major role in pipeline design, where the unwanted formation of liquid has to be avoided.

Some other diagrams are useful for illustrating a binary vapor-liquid equilibrium. The simple yx diagram shows the relationship between vapor and liquid concentration without indicating the pressure or temperature. For a better overview, the bisecting line is usually depicted (Figure 2.5, first column). Also a Txy diagram is possible for isobaric phase equilibria, as can be seen in the right column of Figure 2.5. In this diagram, the upper line is the dew point curve, and the lower line is the bubble point curve.

Figure 2.5 also shows the various kinds of binary vapor-liquid equilibria. The upper row shows an ideal mixture obeying Raoult's law (Equation (2.42)). There are no interaction forces between the molecules. The phase equilibrium is just determined by the vapor pressures of both components. Typical examples for ideal systems are benzene/toluene or n-hexane/n-heptane. In the pxy diagram, the boiling point curve is a straight line (3rd column). In the Txy diagram, usually no straight lines occur. The activity coefficients (Chapter 2.3.1) are all equal to 1, i. e. their logarithm is 0 (2nd column). Row 2 shows a system having small nonidealities. Molecules of the same kind "prefer" to be together instead of mixing with molecules of a different kind. As a result, a greater pressure is built up than for the ideal mixture. The activity coefficients are greater than 1. A typical example for such a system is methanol/water.
When the activity coefficients become larger, the system can exhibit an azeotrope with a pressure maximum in the pxy diagram and a temperature minimum in the Txy diagram (row 3). At the azeotropic point, the liquid and vapor concentrations are identical. Therefore, an azeotrope cannot be separated by simple distillation. The knowledge of azeotropes is essential for any process development. A typical example for a homogeneous azeotrope is water/1-propanol. The occurrence of azeotropes is also related to the vapor pressures; the closer together the vapor pressures of the components are, the more probable is the occurrence of an azeotrope [11] (Section 2.3.1). If the activity coefficients increase even more, the system exhibits a miscibility gap, and the liquid splits into two phases. A heteroazeotrope is formed (row 4). A typical example is water/n-butanol. Note that a miscibility gap and the occurrence of a heteroazeotrope are not necessarily coupled. If the vapor pressures differ largely, the azeotrope does not occur, whereas the miscibility gap itself is not related to the vapor pressures. In addition, there are also negative deviations from Raoult's law, where the activity coefficients are lower than 1. This means that the molecules "like" each other and prefer to be surrounded by molecules of a different kind. An example for such a system is dichloromethane/2-butanone (not represented in Fig. 2.5) [11]. If the vapor pressures are close to each other and the negative deviations are strong, even an azeotrope with a pressure minimum in the pxy diagram and a temperature maximum in the Txy diagram can occur (row 5). An example is acetone/chloroform. It cannot happen that systems with negative deviations from Raoult's law show a miscibility gap.

Figure 2.5: Kinds of binary vapor-liquid equilibria [11]. 1: ideal mixture; 2: small nonidealities; 3: larger activity coefficients, homogeneous azeotrope; 4: heteroazeotrope; 5: vapor pressures close to each other/strong negative deviations from Raoult's law. © Wiley-VCH Verlag GmbH & Co. KGaA. Reproduced with permission.

In addition to these five types of phase equilibria, there are others. For example, methyl acetate/water shows a miscibility gap with a homoazeotrope [17], and benzene/hexafluorobenzene has a double azeotrope (see Figure 2.13), one with a pressure minimum and one with a pressure maximum [18]. As mentioned, there are two different approaches for the description of phase equilibria. In the following sections, they are explained for vapor-liquid equilibria.

2.2 φ-φ-approach

You cannot just learn thermodynamics. You must love it. (Recommendation to desperate students)

Generally, an equation of state is a relationship between the pressure p, the absolute temperature T, and the specific volume v. In a first step, only pure substances are regarded. It makes sense to focus on equations of state which are pressure-explicit. Hence, the general form of an equation of state can be written as

p = f(T, v)    (2.8)

The simplest equation of state is the ideal gas equation

p = RT/v    (2.9)

Equation (2.9) is exact for a gas where the molecules have no volume and do not exert interaction forces on each other. It is a good approximation for gases at low pressures. At increasing pressures, Equation (2.9) becomes increasingly inaccurate. A well-known modification of Equation (2.9) is the virial equation:

p = (RT/v) (1 + B(T)/v + C(T)/v² + D(T)/v³ + ⋯)    (2.10)

Equation (2.10) is the so-called Leiden form; correspondingly, there is also a volume-explicit Berlin form expressed in form of a polynomial of the pressure p. The virial coefficients B, C, D, … account for the deviations from the ideal gas equation. Although it has a theoretical background, the virial equation is not suitable for practical applications. Usually, it has to be truncated after the second term, as the third virial coefficient C and the ones following are hardly ever known. The consequence is that the virial equation can only be used up to moderate densities (rule of thumb: ρ = 0.5 ρc). The most widely used equations of state in technical applications are modifications of the van-der-Waals equation

p = RT/(v − b) − a/v²    (2.11)

Invented in 1873 [19],2 the van-der-Waals equation was the first equation of state valid for both the vapor and the liquid phase that could at least qualitatively explain the pvT behavior of a pure substance, illustrated by the pv diagram in Figure 2.6.

Figure 2.6: The pvT behavior of a pure substance [11]. © Wiley-VCH Verlag GmbH & Co. KGaA. Reproduced with permission.

The pv diagram for a pure substance is dominated by the large vapor-liquid equilibrium region, where the isotherms are horizontal, e. g. between points B and C, indicating that the pressure during condensation or evaporation remains constant. At the left-hand side of the vapor-liquid equilibrium region, the isotherms are very steep. This is the region of the liquid phase, and the steep slope means that compressing a liquid does not result in a major volume change. At the right-hand side, there is the vapor region, where it is easier to compress the substance. At low pressures, the isotherms obey the ideal gas law (Equation (2.9)). At higher pressures, the ideal gas law becomes more and more inaccurate. In the two-phase region, the boiling point line and the dew point line are connected by horizontal isotherms as mentioned, giving the saturated vapor volume v″ (e. g. point B) and the saturated liquid volume v′ (e. g. point C). With increasing temperature, v″ and v′ get closer to each other. At the critical point, vapor and liquid become identical. Above the critical temperature (and, correspondingly, the critical pressure), no phase equilibria between vapor and liquid exist. Remarkably, one can get from the vapor to the liquid region without crossing the two-phase region, i. e. a vapor can be gradually transformed into a liquid and vice versa. E. g. starting from point C, the saturated liquid can be isochorically (i. e. at v = const.) heated up to a temperature above the critical one, where simultaneously the pressure is increased to a value above the critical pressure. Then, the substance can be heated isobarically, and finally, it can be cooled down isochorically to point B. As a result, a liquid has been turned into a vapor without ever having both phases existing simultaneously.

When the typical application of an equation of state, the calculation of v at given T and p, is performed, Equation (2.11) gives a third-degree polynomial:

v³ − (b + RT/p) v² + (a/p) v − ab/p = 0    (2.12)

2 Just like all authors, I give this citation. However, I will admit that I have not read it: the language is Dutch, and even if I could speak it, I would guess that the way it was used in the 19th century would not be comprehensible for a nonnative.
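Equation (2.12) can be solved analytically, as mentioned below in connection with Cardano's formula. The sketch does this for the van-der-Waals equation, with a and b taken from the critical point (a = 27R²Tc²/(64pc), b = RTc/(8pc)); the critical data are approximate values for propylene, and the routine is only an illustration, not the book's own code.

```python
import math

R = 8.314  # J/(mol K)

def vdw_volumes(T, p, Tc, pc):
    """Real roots of the van-der-Waals cubic, Equation (2.12):
    v^3 - (b + RT/p) v^2 + (a/p) v - ab/p = 0 (Cardano's formula)."""
    a = 27 * R**2 * Tc**2 / (64 * pc)
    b = R * Tc / (8 * pc)
    c2, c1, c0 = -(b + R * T / p), a / p, -a * b / p
    # depressed cubic t^3 + P t + Q = 0 with v = t - c2/3
    P = c1 - c2**2 / 3
    Q = 2 * c2**3 / 27 - c2 * c1 / 3 + c0
    disc = (Q / 2) ** 2 + (P / 3) ** 3
    if disc > 0:          # one real root: one-phase conditions
        s = math.sqrt(disc)
        cbrt = lambda x: math.copysign(abs(x) ** (1 / 3), x)
        ts = [cbrt(-Q / 2 + s) + cbrt(-Q / 2 - s)]
    else:                 # three real roots: trigonometric form
        arg = 3 * Q / (2 * P) * math.sqrt(-3 / P)
        phi = math.acos(max(-1.0, min(1.0, arg)))
        ts = [2 * math.sqrt(-P / 3) * math.cos(phi / 3 - 2 * math.pi * k / 3)
              for k in range(3)]
    return sorted(t - c2 / 3 for t in ts)

# approximate critical data of propylene: Tc = 365 K, pc = 46 bar
vols = vdw_volumes(293.15, 10e5, 365.0, 46.0e5)
print(vols)   # smallest root: liquid-like, largest: vapor-like
```

At these near-saturation conditions, three real solutions appear; the middle one has no physical meaning, in line with the discussion of cubic equations of state that follows.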

Therefore, the van-der-Waals equation (2.11) and its successors are called cubic equations of state. The advantage is that they can be solved analytically using Cardano's formula [11, 20]. One obtains either one real and two complex solutions or three real solutions. In the latter case, the largest solution corresponds to a vapor specific volume, and the smallest one corresponds to a liquid volume. The middle one has no physical meaning. In contrast to Equation (2.9) or the virial equation (2.10) truncated after the second term, the cubic equations can be used both for the vapor and for the liquid phase. To decide whether the liquid or the vapor solution is valid, the vapor pressure is needed. Of course, an equation of state defined by a continuous function like the van-der-Waals equation (2.11) cannot reproduce the dogleg at the transition from the one-phase to the two-phase region (see Figure 2.6). Thermodynamics points out that the equation of state can evaluate the phase equilibrium using the Maxwell criterion (Figure 2.7).

Figure 2.7: Application of the Maxwell criterion (propylene, t = 20 °C, Peng–Robinson equation).

Liquid and vapor phase are in equilibrium at p = ps if the hatched areas between p = ps and the equation of state in Figure 2.7 are equal. Analytically, this can be determined by the equation

ps (v″ − v′) = ∫[v′→v″] p dv    (2.13)

The curvature between v′ and v″ has no physical meaning, including the negative values obtained for the pressure. Equation (2.13) is equivalent to the formulation

f′ = f″    (2.14)

where f is the so-called fugacity, which can be interpreted as a corrected pressure. For a pure substance, the fugacity is the product of the pressure and the fugacity coefficient φ:

p φ′ = p φ″    (2.15)

where the pressure cancels out. However, while its theoretical value is unquestioned, the van-der-Waals equation had limited success in practical applications, where quantitatively correct results are required. In the second half of the 20th century, several modifications of the van-der-Waals equation have been developed. The most successful and most widely used ones are the Soave–Redlich–Kwong equation (SRK or RKS)

p = RT/(v − b) − a(T)/(v(v + b))    (2.16)

with

a(T) = 0.42748 (R² Tc²/pc) α(T)    (2.17)
α(T) = [1 + (0.48 + 1.574 ω − 0.176 ω²)(1 − Tr^0.5)]²    (2.18)
b = 0.08664 RTc/pc    (2.19)

and the Peng–Robinson equation (PR)

p = RT/(v − b) − a(T)/(v(v + b) + b(v − b))    (2.20)

where

a(T) = 0.45724 (R² Tc²/pc) α(T)    (2.21)
α(T) = [1 + m(1 − Tr^0.5)]²    (2.22)
m = 0.37464 + 1.54226 ω − 0.26992 ω²    (2.23)
b = 0.0778 RTc/pc    (2.24)

It is remarkable that both the Soave–Redlich–Kwong and the Peng–Robinson equation need only three substance-specific parameters, i. e. the critical temperature Tc, the critical pressure pc, and the acentric factor ω, which is defined as

ω = −1 − lg(ps/pc) at T = 0.7 Tc    (2.25)

Essentially, ω represents the vapor pressure at T = 0.7 Tc. For most substances, this is a temperature close to the normal boiling point. Thus, the critical point and one specified point of the vapor pressure curve are the only substance-specific information used. This characterization of a substance is called the three-parameter corresponding states principle (Tc, pc, and ω). Equations of state using only this input information are called generalized equations of state.

Equations of state valid for both the vapor and the liquid state can provide all thermodynamic properties needed for process calculations. Their original well-known purpose was to be a relationship between p, v, and T in the vapor phase. As seen, cubic equations of state can also be used to calculate the specific volume in the liquid phase, and with Equations (2.13) and (2.15), the vapor pressure for a given temperature can be evaluated. From thermodynamics, an expression for the enthalpy using a pressure-explicit equation of state can be derived [11]:

h(T, v) = ∫[T0→T] cp^id dT + ∫[∞→v] [T (∂p/∂T)v − p] dv + pv − RT    (2.26)

Inserting the equation of state into Equation (2.26), an expression is obtained which can calculate the specific enthalpy of the substance at any state. Analogously, the specific entropy can be determined [11]. The only additionally required input is the specific isobaric heat capacity in the ideal gas state. The enthalpy of vaporization for a temperature T is then easily determined by

Δhv(T) = h(T, v″) − h(T, v′)    (2.27)

For the Peng–Robinson equation, the most important calculation equations for pure substances are given in the following:
– Cubic equation for the determination of the volume:

    v³ + (b − RT/p) v² + (a/p − 3b² − 2bRT/p) v + b³ + b²RT/p − ab/p = 0    (2.28)

– Specific enthalpy [11]:

    (h − h^id)(T, p) = RT(Z − 1) − (1/(√8 b)) (a − T da/dT) ln[(v + (1 + √2)b)/(v + (1 − √2)b)],    (2.29)

– Specific entropy:

    (s − s^id)(T, p) = R ln((v − b)/v) − (da/dT)/(√8 b) ln[(v + (1 − √2)b)/(v + (1 + √2)b)] + R ln Z    (2.30)

  with

    Z = pv/(RT)    (2.31)

  and

    da/dT = −0.45724 (R² Tc²/pc) (m/√(T Tc)) [1 + m(1 − Tr^0.5)]    (2.32)

– Fugacity coefficient for a pure component [21]:

    ln φ = Z − 1 − ln(Z − bp/(RT)) − (a/(2√2 bRT)) ln[(v + (1 + √2)b)/(v + (1 − √2)b)]    (2.33)
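As a sketch of how this set of equations is applied, the cubic (2.28) can be solved with standard polynomial root finding. The propane constants below are rounded literature values (an assumption, not from this text):

```python
import numpy as np

R = 8.314  # J/(mol K)

def pr_ab(T, Tc, pc, omega):
    """Peng-Robinson a(T) and b from the generalized alpha-function."""
    m = 0.37464 + 1.54226 * omega - 0.26992 * omega**2
    alpha = (1.0 + m * (1.0 - (T / Tc)**0.5))**2
    a = 0.45724 * R**2 * Tc**2 / pc * alpha
    b = 0.07780 * R * Tc / pc
    return a, b

def pr_volumes(T, p, Tc, pc, omega):
    """Solve Equation (2.28) for the molar volumes in m3/mol.

    Returns the (liquid-like, vapor-like) root; in the one-root region both are equal."""
    a, b = pr_ab(T, Tc, pc, omega)
    coeffs = [1.0,
              b - R * T / p,
              a / p - 3.0 * b**2 - 2.0 * b * R * T / p,
              b**3 + b**2 * R * T / p - a * b / p]
    real = sorted(r.real for r in np.roots(coeffs) if abs(r.imag) < 1e-9)
    return real[0], real[-1]

# Propane near saturation (rounded literature values): Tc = 369.8 K, pc = 42.48 bar, omega = 0.152
v_liq, v_vap = pr_volumes(300.0, 10e5, 369.8, 42.48e5, 0.152)
```

Here, v_liq is of the order of 1e-4 m³/mol and v_vap is close to, but noticeably below, the ideal gas value RT/p.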

In Chapter 8.2, there is an example of a compressor calculation using the Peng–Robinson equation of state, where this set of equations is applied. The success of this route is remarkable, taking into account that only three substance-specific parameters are used (Tc, pc, ω), but it is limited. The three-parameter corresponding states principle is especially valid for nonpolar compounds. The generalized cubic equations of state have become standard tools in the oil and gas industries, where the compounds involved are usually nonpolar. In these cases, the vapor densities and the vapor pressures are reproduced very well, especially in the high-pressure region. This can be explained by the fact that the critical point is necessarily reproduced exactly; the closer the distance to the critical point is, the better the results will be. For polar compounds (e. g. water, methanol, ethanol), the results are usually remarkably good for the specific volume in the vapor phase, but the vapor pressure is not reproduced reliably enough for process calculations. The enthalpies of the vapor are usually a good, but not exact, estimate. For the enthalpy of vaporization the same result as for the vapor pressure is obtained; it is remarkably good for nonpolar substances, but hardly reliable for polar ones. The enthalpies of a liquid are usually poor for both polar and nonpolar compounds; the reason for this is explained in Chapter 2.8. For cubic equations of state, the specific volume of the liquid phase is not even intended to be correct. In most cases, only the order of magnitude is met. For technical applications, the liquid volume calculated by cubic equations of state is by far not accurate enough. Instead of taking it as the correct liquid volume, it should only be used as an auxiliary quantity for the calculation of the vapor pressure and the liquid enthalpy.
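The vapor pressure calculation via Equation (2.15), i. e. equal fugacity coefficients of saturated liquid and vapor evaluated with Equation (2.33), can be sketched with a simple successive substitution. The propane constants are again rounded literature values (an assumption), and the start value is deliberately chosen near the expected result:

```python
import math
import numpy as np

R = 8.314  # J/(mol K)

def pr_ab(T, Tc, pc, omega):
    """Peng-Robinson a(T) and b from the generalized alpha-function."""
    m = 0.37464 + 1.54226 * omega - 0.26992 * omega**2
    alpha = (1.0 + m * (1.0 - (T / Tc)**0.5))**2
    return 0.45724 * R**2 * Tc**2 / pc * alpha, 0.07780 * R * Tc / pc

def pr_volumes(T, p, a, b):
    """Liquid-like and vapor-like root of the cubic (2.28)."""
    coeffs = [1.0,
              b - R * T / p,
              a / p - 3.0 * b**2 - 2.0 * b * R * T / p,
              b**3 + b**2 * R * T / p - a * b / p]
    real = sorted(r.real for r in np.roots(coeffs) if abs(r.imag) < 1e-9)
    return real[0], real[-1]

def ln_phi(T, p, v, a, b):
    """Fugacity coefficient of a pure component, Equation (2.33)."""
    s2 = math.sqrt(2.0)
    Z = p * v / (R * T)
    return (Z - 1.0 - math.log(Z - b * p / (R * T))
            - a / (2.0 * s2 * b * R * T) * math.log((v + (1.0 + s2) * b) / (v + (1.0 - s2) * b)))

def pr_vapor_pressure(T, Tc, pc, omega, p_start):
    """Successive substitution p_new = p * phi_liq/phi_vap until the fugacities are equal."""
    a, b = pr_ab(T, Tc, pc, omega)
    p = p_start
    for _ in range(100):
        v_liq, v_vap = pr_volumes(T, p, a, b)
        p_new = p * math.exp(ln_phi(T, p, v_liq, a, b) - ln_phi(T, p, v_vap, a, b))
        if abs(p_new - p) < 1e-3 * p:
            return p_new
        p = p_new
    return p

# Propane at 300 K; start value near the expected vapor pressure (approx. 10 bar)
ps = pr_vapor_pressure(300.0, 369.8, 42.48e5, 0.152, p_start=10e5)
```

For this nonpolar substance, the result is close to the experimental vapor pressure of approx. 10 bar, in line with the statement above.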
In the result view of process simulators, the liquid volume is usually overwritten by default with results from a liquid density correlation (Chapter 2.3). To overcome these limitations, the α-functions of the generalized equations of state can be replaced by individual ones. An example is the PSRK (Predictive Soave–Redlich–Kwong) equation, where the generalized α-function (2.18) has been replaced by

  α(T) = [1 + c1(1 − Tr^0.5) + c2(1 − Tr^0.5)² + c3(1 − Tr^0.5)³]²    (2.34)

The adjustable parameters c1, c2, c3 are usually fitted to vapor pressure data. Tr (reduced temperature) is an abbreviation for T/Tc. This way, polar components can be described as well. In the Volume-Translated Peng–Robinson equation (VTPR), the α-function of the Peng–Robinson equation (2.22) has been replaced by the flexible Twu α-function

  α(Tr) = Tr^{N(M−1)} exp[L(1 − Tr^{MN})],    (2.35)

where the adjustable parameters L, M, N can again be fitted to vapor pressure data. Also, data for cpL can simultaneously be adjusted so that the enthalpy of the liquid can also be described by the equation [22, see also Chapter 2.8]. With the volume translation, an attempt was made to solve the remaining problem of the poor reproduction of the liquid density. The approach is that the specific volume v in the original equation is replaced by a term v + c, where c is a volume translation:

  p = RT/(v + c − b) − a(T)/[(v + c)(v + c + b) + b(v + c − b)]    (2.36)

Therefore, it remains a cubic equation of state, but the results for the specific volumes for both the vapor and the liquid phase are shifted by the value of c. While this is almost negligible for the vapor phase, it causes an improvement for the specific volume of the liquid phase. The parameter c can be fitted to liquid density data or, if not available, calculated by a generalized function. However, an acceptable improvement is restricted to low-pressure data, and the densities of liquids can still not be reproduced with the accuracy required in technical applications. Therefore, in spite of the use of volume translations, it is still recommended to make use of the option to overwrite the liquid volume with a liquid density correlation. It should be mentioned that c cannot be made temperature-dependent. Besides the cubic ones, a lot of other equations of state are in use. Only a few of special importance for process engineering purposes can be mentioned here. In process engineering, cases occur where a much higher accuracy in the physical properties is required. Examples are the power plant processes, the heat pump process, or the pressure drop calculation in a large pipeline. For the description of the complete pvT behavior including the two-phase region, several extensions of the virial equation were suggested. All these extensions have been derived empirically and contain a large number of parameters, which have to be fitted to experimental data. A large database is necessary to obtain reliable parameters. One of the first approaches to equations of state with higher accuracies was made by Benedict, Webb, and Rubin (BWR) in 1940 [23, 24], who used 8 adjustable parameters. Their equation allows reliable calculations of pvT data for nonpolar gases and liquids up to densities of about 1.8 ρc. Bender [25] extended the BWR equation to 20 parameters.
With this large number of parameters it became possible to describe the experimental data for certain substances over a large density range in an excellent way. In the last decades, the so-called technical high-precision equations of state have been developed [26–28]. Their significant improvement was possible due to progress in measurement techniques and the development of mathematical

algorithms for optimizing the structure of equations of state. With respect to the accuracy of the calculated properties, their extrapolation behavior, and their reliability in regions where data are scarce, these equations define the state-of-the-art representation of thermal and caloric properties and their particular derivatives in the whole fluid range. There is also an important demand to get reliable results for derived properties (cp, cv, speed of sound). Technical high-precision equations of state are a remarkable compromise between keeping the accuracy and gaining simplicity. Furthermore, these equations should enable the user to extrapolate safely to the extreme conditions often encountered in industrial processes. For example, in the LDPE3 process ethylene is compressed to approximately 3000 bar, and it is necessary for the simulation of the process and the design of the equipment to have a reliable tool for the determination of the thermal and caloric properties. The complexity and limited availability are no longer an issue. Currently, there are approximately 150 substances for which the data situation has justified the development of a technical high-precision equation of state, e. g. water, methane, argon, carbon dioxide, nitrogen, ethane, n-butane, isobutane, and ethylene. In the FLUIDCAL software [29], these equations of state have been made applicable for users without special knowledge. The successor in this field is TREND [277], which additionally provides a concept for the application to mixtures. A genuine high-precision equation of state for mixtures is GERG [30] for natural gas applications. Table 2.3 illustrates the accuracy demand for technical equations of state.

Table 2.3: Accuracy demand for technical high-precision equations of state.

                ρ(p,T)    w*(p,T)    cp(p,T)    ps(T)    ρ′(T)    ρ″(T)
  p < 30 MPa    0.2 %     1–2 %      1–2 %      0.2 %    0.2 %    0.2 %
  p > 30 MPa    0.5 %     2 %        2 %

The φ-φ-approach can be extended to mixtures. Thermodynamics says that the equilibrium condition Equation (2.14) becomes

  p x_i φ′_i = p y_i φ″_i    (2.37)

for each component involved. Again, the pressure cancels out. The cubic equations of state can be transformed to mixture applications by mixing rules for the parameters a and b. The most common ones are

  a = ∑_i ∑_j z_i z_j (a_ii a_jj)^0.5 (1 − k_ij)    (2.38)

  b = ∑_i z_i b_i    (2.39)

3 low density polyethylene.



for both the Peng–Robinson and the Soave–Redlich–Kwong equation of state. As the mixing rules (2.38) and (2.39) refer to both the vapor and the liquid phase, the neutral variable z is used for both the vapor and the liquid mole fraction. k_ij is an adjustable binary interaction parameter. It is symmetric (k_ij = k_ji, k_ii = k_jj = 0) and usually has small values (−0.1 < k_ij < 0.1). Nevertheless, it has a significant influence on the calculation of phase equilibria and cannot be neglected. The impact on the results for the liquid and vapor volumes is comparably small. Using Equations (2.38) and (2.39), the calculation routes for pure components can be applied to mixtures. For the phase equilibrium calculations, Equation (2.37) looks very easy but is in fact a complicated equation, as the determination of the fugacity coefficients ends up in long equations, e. g. for the Peng–Robinson equation [21]

  ln φ_i = (b_i/b)(Z − 1) − ln[(p/(RT))(v − b)] − (a/(2√2 bRT)) ((2/a) ∑_j z_j a_ij − b_i/b) ln[(v + (1 + √2)b)/(v + (1 − √2)b)]    (2.40)

For the specific enthalpy and entropy, Equations (2.29) and (2.30) can be further used; however, in the mixture a more complicated expression for da/dT is necessary:

  da/dT = −(1/(2√T)) ∑_i ∑_j z_i z_j a_ij (m_i/√(α_i Tci) + m_j/√(α_j Tcj))    (2.41)
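The mixing rules (2.38), (2.39) and the derivative (2.41) translate directly into code. A sketch for an equimolar methane/ethane mixture; the critical constants are rounded literature values and k12 = 0 is a placeholder assumption (in practice it would be fitted to data). The numerical check in the usage below confirms that (2.41) is the exact temperature derivative of (2.38):

```python
import math

R = 8.314  # J/(mol K)

def pr_pure(T, Tc, pc, omega):
    """Pure-component Peng-Robinson parameters: a_ii, b_i, m_i, alpha_i."""
    m = 0.37464 + 1.54226 * omega - 0.26992 * omega**2
    alpha = (1.0 + m * (1.0 - (T / Tc)**0.5))**2
    a = 0.45724 * R**2 * Tc**2 / pc * alpha
    b = 0.07780 * R * Tc / pc
    return a, b, m, alpha

def pr_mixture(T, z, Tc, pc, omega, k):
    """Mixture a, b (Equations (2.38), (2.39)) and da/dT (Equation (2.41))."""
    n = len(z)
    pure = [pr_pure(T, Tc[i], pc[i], omega[i]) for i in range(n)]
    a = sum(z[i] * z[j] * math.sqrt(pure[i][0] * pure[j][0]) * (1.0 - k[i][j])
            for i in range(n) for j in range(n))
    b = sum(z[i] * pure[i][1] for i in range(n))
    dadT = -1.0 / (2.0 * math.sqrt(T)) * sum(
        z[i] * z[j] * math.sqrt(pure[i][0] * pure[j][0]) * (1.0 - k[i][j])
        * (pure[i][2] / math.sqrt(pure[i][3] * Tc[i])
           + pure[j][2] / math.sqrt(pure[j][3] * Tc[j]))
        for i in range(n) for j in range(n))
    return a, b, dadT

# Methane(1)/ethane(2), equimolar; rounded literature constants, k12 = 0 as placeholder
Tc, pc, omega = [190.6, 305.3], [46.0e5, 48.7e5], [0.011, 0.099]
k = [[0.0, 0.0], [0.0, 0.0]]
a, b, dadT = pr_mixture(250.0, [0.5, 0.5], Tc, pc, omega, k)
```

Comparing dadT with a central finite difference of a(T) is a useful consistency test when implementing such mixing rules.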

The mixing rules (2.38) and (2.39) have had considerable success as long as only nonpolar substances were involved. When polar compounds are regarded, poor results are obtained. For an adequate description of systems with polar components using the φ-φ-approach, the so-called gE mixing rules have to be applied (Figure 2.8). They will be explained in Chapter 2.7, as understanding them is not possible without knowledge of the γ-φ-approach.

2.3 γ-φ-approach

2.3.1 Activity coefficients

The activity coefficient is the factor by which I have to miscalculate to get the correct result although I use the wrong equation. (Student in his thermodynamics exam. He passed easily.)

It is sobering to remember that successful oil refineries were built many years before chemical engineers used chemical potentials or fugacities […]. (J. M. Prausnitz, 1989)


Figure 2.8: Experimental and calculated VLE data for the system acetone (1)/water (2) using the Peng–Robinson equation with kij (left-hand side) and gE mixing rules (right-hand side) [11]. © Wiley-VCH Verlag GmbH & Co. KGaA. Reproduced with permission.

The phase equilibrium condition (2.14) can also be elaborated in a different way [11], using different approaches for the vapor and the liquid phase. The simplest solution is Raoult’s law in combination with the ideal gas law for the vapor phase:

  x_i p_i^s = y_i p,    (2.42)

where the bubble point curve in the pxy diagram is a straight line. Only a few binary systems obey this law; an example is the system benzene/toluene (Figure 2.5, upper row). If deviations from Equation (2.42) occur, the activity coefficient γi is introduced, which depends on the concentration as well as on the temperature. One can take the activity coefficient as a factor which corrects the concentration. The equilibrium condition, still not the final one, is then written as

  x_i γ_i p_i^s = y_i p    (2.43)

This equilibrium condition is valid in the low pressure region, as the vapor phase can then be regarded as an ideal gas. While it is still often used in academia, it has become more and more outdated in process simulation applications, as the enthalpy calculations should not be performed using the ideal gas law for the vapor phase (Chapter 2.8). Instead, a real vapor phase is considered using an equation of state:

  x_i γ_i p_i^s Poy_i = y_i p φ_i(T, p, y_i)/φ_i^pure(T, p_i^s),    (2.44)

with Poy_i as the Poynting factor

  Poy_i = exp[v_i^L (p − p_i^s)/(RT)]    (2.45)

φ_i(T, p, y_i) accounts for the nonideality of the vapor phase, whereas φ_i^pure(T, p_i^s) refers to the liquid fugacity coefficient at the vapor pressure. Fortunately, at p = p_i^s the liquid fugacity coefficient is equal to the vapor one (Equation (2.15)); thus, in contrast to Equation (2.37), the equation of state does not need to be valid for both the vapor and the liquid phase. Therefore, the virial equation (2.10) truncated after the second term can also be applied. Nevertheless, in most cases generalized cubic equations of state are used, as they are the most powerful tools for this purpose with easily accessible input parameters (Tc, pc, and ω). The Poynting factor corrects the small error caused by evaluating the fugacity of the pure liquid at its vapor pressure p_i^s instead of the system pressure p. It becomes relevant at high pressures (rule of thumb: p > 50 bar), where the φ-φ-approach is more appropriate anyway.

Physics is too important to leave it to the physicists. (A father of a physics student)
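The order of magnitude of the Poynting correction (2.45) is easy to check numerically. The sketch below uses rounded literature values for water at 25 °C as an assumption (molar liquid volume approx. 1.81e-5 m³/mol, vapor pressure approx. 0.0317 bar):

```python
import math

R = 8.314  # J/(mol K)

def poynting(v_liq, p, ps, T):
    """Poynting factor, Equation (2.45); all quantities in SI units."""
    return math.exp(v_liq * (p - ps) / (R * T))

# Water at 25 degC (rounded literature values, assumption)
poy_1bar = poynting(1.81e-5, 1e5, 3170.0, 298.15)    # practically 1
poy_50bar = poynting(1.81e-5, 50e5, 3170.0, 298.15)  # a few percent
```

At 1 bar the correction is negligible; at 50 bar it reaches a few percent, which is consistent with the rule of thumb above.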

The heart of Equation (2.44) is the activity coefficient γi . The formal character of the γi is a correction of the molar concentration, which, however, hardly explains anything. In fact, the activity coefficient accounts for the intermolecular interactions between the molecules and the entropic effects. Figure 2.9 shows the typical isothermal concentration dependence of the activity coefficient.

Figure 2.9: Typical isothermal concentration dependence of the activity coefficient [11]. © Wiley-VCH Verlag GmbH & Co. KGaA. Reproduced with permission.

Its maximum values (respectively minimum values for systems with negative deviations from Raoult’s law) occur when the component is extremely diluted, i. e. at the concentration x_i → 0. This value is called the activity coefficient at infinite dilution (γ_i^∞) and is characteristic for the illustration of the nonideal behavior.

44 � 2 Thermodynamic models in process simulation Nowadays, mainly three equations for the correlation of the γi are in use; the Wilson [31], the NRTL4 [32], and the UNIQUAC5 equation [33]. They are all based on the so-called principle of local compositions, which is explained by means of the Wilson equation in the following paragraph:

Figure 2.10: Sketch for the explanation of the Local Composition Models [11]. © Wiley-VCH Verlag GmbH & Co. KGaA. Reproduced with permission.

In Figure 2.10 it can be seen that in both cells the concentrations of the molecules 1 and 2 are x1 = 3/7 and x2 = 4/7. Due to the intermolecular forces, the local concentrations can be different. In the left cell, there is a molecule of kind 1 in the center of the cell. Around this molecule, the concentration of the molecules of kind 1 is x11 = 2/6, and the concentration of the molecules of kind 2 is x21 = 4/6. Similarly, in the cell on the right hand side with a molecule of kind 2 in the center the concentrations are x12 = x22 = 3/6. The concentrations around a molecule and the total concentrations are assumed to be related by Boltzmann factors:

  x_ji/x_ii = [x_j exp(−λ_ji/(RT))]/[x_i exp(−λ_ii/(RT))],    (2.46)

where the λ terms account for the intermolecular forces. Defining the local mole fraction as

  ξ_i = x_ii v_i / ∑_j x_ji v_j,    (2.47)

γi can be introduced as a correction factor of the total concentration:

  γ_i = ξ_i/x_i    (2.48)

After several mathematical transformations [11] the following expression for the activity coefficients is obtained:

  ln γ_i = 1 − ln(∑_j x_j Λ_ij) − ∑_k [x_k Λ_ki / ∑_j x_j Λ_kj],    (2.49)

4 nonrandom two-liquid.
5 universal quasi-chemical.

with Λ as an abbreviation representing the intermolecular forces, which are temperature-dependent:

  Λ_ij = exp(A_ij + B_ij/T + C_ij ln(T/K) + D_ij T)    (2.50)

The parameters A, B, C, D can be adjusted to experimental binary phase equilibrium data. The structure of Equation (2.50) often leads to misunderstandings. The first parameters to be adjusted are the B_ij. To account for the temperature dependence, the A_ij parameters can be additionally fitted if corresponding data are available, i. e. phase equilibrium data at significantly different temperatures or hE data. For better illustration, Equation (2.50) should be rewritten as

  Λ_ij = exp[(B_ij + A_ij T + C_ij T ln(T/K) + D_ij T²)/T]    (2.51)

The C_ij or D_ij parameters are used only if it is justified by the data situation, which is not often the case. It should be noted that the specific volumina introduced in Equation (2.47) do not occur in the final equations (2.49) and (2.50). The term A_ij is equal to

  A_ij = ln(v_j/v_i),    (2.52)

so the ratio of the volumina just represents the term for the linear temperature dependence. In process simulation, it would be awkward anyway if the binary parameters depended on the pure component properties, which could be subject to change at any time. A more detailed derivation of the Wilson equation can be found in [11]. The NRTL and UNIQUAC equations are more complicated [11], but for application it is sufficient to understand their structure:
– Wilson:

    ln γ_i = f(x_i, Λ_ij),    (2.53)

  with Λ_ij = f(T) as explained above.
– NRTL:

    ln γ_i = f(x_i, α_ij, τ_ij),    (2.54)

  with τ_ij = f(T) as a temperature function describing the molecular interactions

    τ_ij = exp(A_ij + B_ij/T + C_ij ln(T/K) + D_ij T),    (2.55)

  analogous to Λ_ij in the Wilson equation, and α_ij as an additional symmetric (α_ij = α_ji) adjustable parameter for complicated concentration dependences. Usually, α_ij is set to 0.3; a reasonable range of values is α_ij = 0.2–0.5. In extreme cases, this range can be exceeded or α_ij can be made temperature-dependent.

– UNIQUAC:

    ln γ_i = f(x_i, τ_ij),    (2.56)

  with τ_ij = f(T) analogous to the NRTL equation. UNIQUAC contains a combinatorial part independent of temperature which is useful for systems differing in size. Additional pure component parameters (van-der-Waals volume and surface) are necessary [11].

The coefficients A_ij, B_ij, C_ij, D_ij, and α_ij are called binary interaction parameters (BIPs). They play the key role in the thermodynamic model. The remarkable issue is that it takes only binary parameters to describe a multicomponent system. Therefore, the effort for the model development remains limited. Without this simplifying tool, process simulation would still not be possible. A famous example from Gmehling [34] indicates that it would take approx. 37 years to carry out the measurements necessary for a model capable of describing a 10-component system. There has been a lot of discussion whether the binary interaction parameters have a physical meaning or not. This is a question which cannot be answered simply with “yes” or “no”. The tendency of the author is to say “no”. The local composition models can be mathematically derived [11, 35], where the binary parameters have the character of defined energies for removing single molecules from a cell. However, this energy is not measured; instead, the model with all its assumptions is adjusted to measured phase equilibrium data. The results obtained are not unique; the parameters strongly intercorrelate. Various completely different parameter pairs can yield more or less the same results. This means that the value of a single parameter cannot have any physical significance without the other parameter in the pair. Therefore, a single parameter B_ij has no significant physical meaning. However, it can be shown mathematically that any equation describing the activity coefficients must obey the Gibbs–Duhem equation [11]:

  ∑_i x_i d ln γ_i = 0    (2.57)

The established equations like Wilson, NRTL, or UNIQUAC and several others do so; thus, they are in some way physically justified. They should be taken as empirical approaches to fulfill the Gibbs–Duhem equation, meaning that they are able to represent the correct shape of the course of the activity coefficient as a function of concentration.

An azeotrope occurs when the vapor pressure curves intersect. (Completely wrong but often successful explanation for the occurrence of azeotropes.)
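As a sketch, the Wilson equation (2.49) with a truncated temperature function Λ_ij = exp(A_ij + B_ij/T) can be implemented in a few lines. The parameter values below are arbitrary placeholders, not fitted to any system; the purpose is only to show the structure and to verify numerically that the isothermal Gibbs–Duhem condition (2.57) is fulfilled:

```python
import math

def wilson_gamma(x, T, A, B):
    """Activity coefficients from the Wilson equation (2.49) with
    Lambda_ij = exp(A_ij + B_ij/T), a truncated form of Equation (2.50)."""
    n = len(x)
    Lam = [[math.exp(A[i][j] + B[i][j] / T) for j in range(n)] for i in range(n)]
    gamma = []
    for i in range(n):
        s1 = sum(x[j] * Lam[i][j] for j in range(n))
        s2 = sum(x[k] * Lam[k][i] / sum(x[j] * Lam[k][j] for j in range(n))
                 for k in range(n))
        gamma.append(math.exp(1.0 - math.log(s1) - s2))
    return gamma

# Arbitrary placeholder parameters (assumption, for illustration only)
A = [[0.0, 0.0], [0.0, 0.0]]
B = [[0.0, -200.0], [-150.0, 0.0]]
g = wilson_gamma([0.3, 0.7], 350.0, A, B)  # gamma_1, gamma_2
```

With all Λ_ij = 1, the model reduces to the ideal case γ_i = 1; with the placeholder parameters above, both activity coefficients show positive deviations from Raoult’s law.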

Activity coefficients can be used to obtain an understanding of how azeotropes occur. Figure 2.11 shows the well-defined azeotrope of the system R134a(1)–R218(2) at T = 220 K.


Figure 2.11: The azeotropic system R134a–R218 at T = 220 K [36].

Neglecting the influence of the fugacity coefficients in Equation (2.44), one can add up Equation (2.43) for both components in the following form:

  p = x_1 γ_1 p_1^s + x_2 γ_2 p_2^s    (2.58)

This is the equation of the boiling point line, i. e. the upper one in Figure 2.11. As an azeotropic system it shows a clear maximum at the azeotropic point at approx. x1 = y1 = 0.3. Both pure component vapor pressures are lower than the azeotropic pressure, while R218 is the light end with the higher vapor pressure. On the right hand side of the diagram, it is expected that the addition of R218 to R134a increases the pressure, as R218 is the low-boiling component. It is more surprising that the addition of the high-boiler R134a to the low-boiler R218 on the left hand side of the diagram gives a pressure increase as well. This is a necessary condition, as starting from both pure components the maximum in the azeotropic pressure must be reached. Evaluating Equation (2.58) at the pure R218, one gets for x1 → 0:

  p = x_1 γ_1^∞ p_1^s + (1 − x_1) · 1 · p_2^s > p_2^s    (2.59)

or

  γ_1^∞ p_1^s / p_2^s > 1    (2.60)

Equation (2.60) is the azeotropic condition: If the product of vapor pressure and activity coefficient at infinite dilution of the high-boiler is larger than the vapor pressure of the low-boiler, an azeotrope with a pressure maximum will result. Similarly, for systems with negative deviations from Raoult’s law one gets the following: if the product of vapor pressure and activity coefficient at infinite dilution of

the low-boiler is smaller than the vapor pressure of the high-boiler, an azeotrope with a pressure minimum will result. Note that there are two effects in Equation (2.60) which are decisive for the occurrence of an azeotrope, i. e. the nonideality of the mixture represented by the activity coefficients and the similarity of the vapor pressures; the closer the vapor pressures are, the greater is the probability that an azeotrope is formed. The laws of Konovalov say that boiling and dew point lines always have the same slope in a pxy or Txy diagram [11]. Moreover, they show that the only way for them to form a maximum or a minimum is the occurrence of an azeotrope, where boiling and dew point curves touch each other with the same slope and vapor and liquid have the same concentration.6 Taking the azeotropic condition (Equation 2.60) into account, this means that solely from the continuity of the boiling point line, the occurrence of an azeotrope can be predicted.

Example
Figure 2.12 shows pxy diagrams for three cases where only the border areas of the phase equilibrium diagrams can be seen. Evaluate whether these systems are azeotropic or not.

Figure 2.12: Covered phase diagrams.

Solution
Figure 2.12(a) shows the system water(1)/trioxane(2) at t = 65 °C. Comparing the vapor pressures, one can easily see that water (component 1) is the light end. At the right hand side of the diagram, one can see that adding the heavy end to the light end causes the boiling point curve to rise. However, it must drop again to meet the pure component vapor pressure of the heavy end. Therefore, there must be a maximum in between, meaning that there is an azeotropic point. Figure 2.12(b) shows the system dimethyl ether(1)/methanol(2) at t = 50 °C. Dimethyl ether is the light end. Nothing unusual happens; adding light end to heavy end gives a rise in the boiling point pressure, and

6 far away from the critical region and without miscibility gaps.


Figure 2.13: Uncovered phase diagrams. Data from [327–329].

adding heavy end to light end gives a drop in the boiling point pressure. Thus, there is no indication for an azeotrope. Figure 2.12(c) is a bit sophisticated. It shows the system benzene(1)/hexafluorobenzene(2). Again, component 1 is the light end. As in case (a), adding heavy end to light end gives a rise in boiling point pressure, so there is an azeotrope with a pressure maximum. On the left hand side, adding light end to heavy end gives a drop of the boiling point pressure, indicating that there must be an azeotrope with a pressure minimum. Figure 2.13 shows the full phase diagrams of the three cases. In fact, case (c) has a double azeotrope, one with a pressure maximum and one with a pressure minimum. There are only a handful of systems with a double azeotrope. Note how close the vapor pressures of the two components in case (c) are. The activity coefficients are in the range 0.85–1.25, which is not strongly nonideal.
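The check from Equation (2.60) is a one-liner. The numbers below are rounded literature values (an assumption, not from this text): for ethanol/water at 70 °C, water is the high-boiler with ps ≈ 0.31 bar, ethanol has ps ≈ 0.72 bar, and γ∞ of water in ethanol is of the order of 2.6; benzene/toluene at 80 °C is nearly ideal:

```python
def max_azeotrope_expected(gamma_inf_hb, ps_hb, ps_lb):
    """Azeotropic condition (2.60): a pressure-maximum azeotrope is expected if
    gamma_inf(high-boiler) * ps(high-boiler) > ps(low-boiler)."""
    return gamma_inf_hb * ps_hb > ps_lb

# Ethanol(lb)/water(hb) at 70 degC (rounded literature values, assumption)
az_ethanol_water = max_azeotrope_expected(2.6, 0.31, 0.72)   # azeotrope expected

# Benzene(lb)/toluene(hb) at 80 degC, nearly ideal (gamma_inf approx. 1)
az_benzene_toluene = max_azeotrope_expected(1.0, 0.39, 1.0)  # no azeotrope
```

The two results agree with the well-known behavior of these systems: ethanol/water is azeotropic, benzene/toluene is not.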

However, the first step for checking whether an azeotrope occurs should be a look into the database. There are even special monographs where azeotropic data are collected in a systematic way [37]. A question often asked is which of the three equations, Wilson, NRTL, and UNIQUAC, is the best one. In fact, it is not much use to compare large amounts of fitting results; no clear advantage can be observed. A description of the characteristics might be more helpful. The Wilson equation cannot describe systems with a miscibility gap due to mathematical reasons [11]. For a system without a miscibility gap the Wilson equation is an adequate model. The NRTL equation can describe a miscibility gap. The third parameter α is a valuable tool if complicated systems have to be correlated. The comparison with Wilson and UNIQUAC is not fair in these cases, as these equations do not have this opportunity. Nevertheless, sometimes the third parameter is perceived to be useful and makes NRTL the favorite choice. In a project, all binary subsystems must be described with the same model; if NRTL gives a significant improvement for one of the important subsystems, the decision for NRTL is probable. Furthermore, NRTL has a very popular extension for electrolytes (ELECNRTL), which is fully consistent with the simple NRTL equation for conventional systems. Thus, if electrolytes occur or even if it is possible that they might occur, it is usually a good choice to take NRTL. UNIQUAC is clearly the equation with the strongest physical background. For application, it is not so popular as

it requires the van-der-Waals surfaces and volumina as additional pure component parameters, which cannot be easily assigned for all components due to missing subgroups (e. g. CO2). An advantage of UNIQUAC is that it has a combinatorial part, which takes into account the behavior of molecules differing in size. In comparison with the φ-φ-approach, the γ-φ-approach has the disadvantage that supercritical components cannot be treated with this concept. According to Equation (2.44), there is no reference point in the liquid phase for a pure supercritical component. As a workaround, a supercritical component can be treated as a Henry component, with its reference state at infinite dilution in a pure solvent. The phase equilibrium condition for such a component is

  x_i H_ij = y_i p φ_i(T, p, y_i),    (2.61)

with H_ij as the so-called Henry coefficient of Henry component i in solvent j. Equation (2.61) can be applied for low concentrations of the Henry component in the liquid phase (rule of thumb: x_i < 0.03); otherwise, further corrections are necessary, clearly favoring the φ-φ-approach. The Henry coefficient has the character of a vapor pressure; its meaning becomes clear when Equation (2.61) is applied to a subcritical component at low pressure (i. e. φ_i(T, p, y_i) = φ_i(T, p_i^s) = 1) at infinite dilution of the Henry component in the liquid phase. Compared with Equation (2.43), the Henry coefficient is equal to

  H_ij = p_i^s γ_i^∞    (2.62)

The Henry coefficient is a temperature function; the temperature dependence is usually described using a function like

  ln(H_ij/p_0) = A + B/T + CT + DT²,    (2.63)

where p_0 is simply the pressure unit, necessary to make the argument of the logarithm dimensionless. It should be noted that in contrast to the vapor pressure the Henry coefficient is not necessarily a function monotonically rising with temperature. It can exhibit well-defined maxima (e. g. oxygen in water [11, 35]). For mixed solvents, a mixing rule like

  ln(H_i,mix/p_0) = [∑_j x_j ln(H_ij/p_0)] / ∑_j x_j    (2.64)

can be applied. In process simulators, often even more complicated mixing rules are used. It should be mentioned that the averaging in the mixing rules is only performed with the solvents where Henry coefficients are available. The index j refers only to the solvent components where a Henry coefficient is given. One must be careful, as these solvents are not always representative for the whole liquid. It is often remarked that mixing rules like Equation (2.64) are arbitrary and empirical. From the physical point of view, the application of equations of state, especially with gE mixing rules (Chapter 2.7), is more justified. Nevertheless, one must realize that in process calculations the whole gas solubility calculation has the character of an estimation, giving only a reasonable order of magnitude. The solubility of gases in liquids usually takes a lot of time to reach equilibrium. In experimental setups, several hours are scheduled to get a data point; much more time than a gas has in a process step. Therefore, even relatively large errors in the Henry coefficient are not relevant for the target of the calculation. For example, if the correct solubility of a gas component is 100 ppm, an overestimation of 20 % of the Henry coefficient in Equation (2.61) would yield a solubility of 83 ppm, which is a fully acceptable result in a process calculation. In principle, excess enthalpies (enthalpies of mixing, see Glossary) can also be calculated if a correlation for the activity coefficients is available. hE depends on the temperature derivatives of the activity coefficients:

  hE = −RT² ∑_i x_i (𝜕 ln γ_i/𝜕T)    (2.65)

However, the physical background of the equations for the activity coefficient is not sufficient so that application of Equation (2.65) usually gives wrong results unless the parameters are temperature-dependent and have been fitted to both phase equilibrium and excess enthalpy data. The great advantage of the γ-φ-approach is that all properties are described by individual equations that have no influence on each other. The phase equilibrium is described only by the activity coefficients, which do not need to be varied if any other property is changed. Independent and highly accurate correlations are available for each property.
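Equations (2.61) and (2.64) can be sketched as follows. The Henry coefficient of oxygen in water at 25 °C is a rounded literature value (an assumption, approx. 44000 bar), the vapor-phase fugacity coefficient is set to 1, and all Henry coefficients are assumed to be given in the same unit (p_0 = 1 bar):

```python
import math

def henry_mixed(x_solv, H):
    """Mixing rule (2.64); x_solv are the solvent mole fractions, H the Henry
    coefficients (all in the same unit, here bar, so that p0 = 1 bar)."""
    s = sum(x_solv)
    return math.exp(sum(x * math.log(h) for x, h in zip(x_solv, H)) / s)

def gas_solubility(y_i, p, H_i, phi_i=1.0):
    """Liquid mole fraction of a Henry component from Equation (2.61)."""
    return y_i * p * phi_i / H_i

# Oxygen from air (y = 0.21) in water at 25 degC, p = 1 bar; H approx. 44000 bar (rounded)
x_O2 = gas_solubility(0.21, 1.0, 44000.0)
```

The resulting mole fraction of a few ppm matches the known order of magnitude of oxygen solubility in water, which, as stated above, is all that is required of such an estimation.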

2.3.2 Vapor pressure and liquid density

In contrast to the φ-φ-approach, in the γ-φ-approach the liquid density, the vapor pressure, and the specific enthalpy are determined independently with separate correlations, giving the opportunity of yielding better accuracies and a more convenient workflow, as changes in one property do not affect the others. While the enthalpy calculation needs its own chapter (Chapter 2.8), the correlations and estimations for the liquid density and the vapor pressure can be briefly discussed here.

The density of a pure substance or a mixture is a fundamental quantity in any process calculation. The vapor density as a function of temperature and pressure is determined by the equation of state chosen in Equation (2.44). The liquid density of pure components is treated only as a function of temperature; the pressure effect on the density can usually be neglected. Appropriate correlations are the Rackett equation

  ρ_L = A / B^(1+(1−T/C)^D)    (2.66)

and the PPDS7 equation ρc ρL = + A(1 − Tr )0.35 + B(1 − Tr )2/3 + C(1 − Tr ) + D(1 − Tr )4/3 , 3 kg/m kg/m3

(2.67)

which is usually slightly more accurate. It is important to mention that the liquid density mixing rule should be based on the specific volume, i. e. the reciprocal value of the density:

1/ρL = ∑i xi /ρL,i   (2.68)
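A minimal sketch of this mixing rule (the composition and the pure-component densities are made-up round numbers; with mass fractions the rule yields the mass density, with mole fractions the molar density):

```python
def mix_density(x, rho):
    """Liquid mixture density from Eq. (2.68): 1/rho_mix = sum(x_i / rho_i)."""
    return 1.0 / sum(xi / ri for xi, ri in zip(x, rho))

# hypothetical binary liquid: 40/60 mass-% with pure densities 789 and 998 kg/m3
rho_mix = mix_density([0.4, 0.6], [789.0, 998.0])
rho_linear = 0.4 * 789.0 + 0.6 * 998.0   # naive linear average, NOT recommended
print(round(rho_mix, 1), round(rho_linear, 1))  # 902.4 914.4
```

The difference to the naive linear average shows why the volume-based rule matters; a real mixture would additionally deviate from the volume-based value by its excess volume, as discussed in the text.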

Thermodynamics says that there is a so-called excess volume (see Glossary), i. e. a systematic deviation from Equation (2.68). If the liquid density is calculated using an equation of state, as is the case in the φ-φ-approach (Chapter 2.2), this excess volume is accounted for automatically. Nevertheless, no advantage can be taken from this, as the excess volume is usually not described quantitatively correctly, and in most cases the density of the pure substances is reproduced so badly that even without a correction Equation (2.68) yields better results (p. 39). The maximum error caused by neglecting the excess volume can be quantified by regarding the system exhibiting the largest excess volume: to the knowledge of the author, this is ethanol–water, with a maximum observed excess volume of approx. 3.5 % [11]. Liquid densities can be estimated e. g. with the COSTALD equation [38]. It can be written as

1/ρL = v* ⋅ f(T, Tc, ω),   (2.69)

meaning that the liquid density can be estimated with a generalized function depending only on Tc and ω. Additionally, the parameter v* is used, which can be adjusted to one or more data points. In this case, the procedure is very accurate; at temperatures below 0.95 Tc the error is usually below 2 %. If no data point is available, one can use the critical volume for v*, which of course increases the risk but often yields surprisingly good results. It should be mentioned that the COSTALD equation has weaknesses if polar components are involved. The generalized function is

f(T, Tc, ω) = V0 (1 − ωVδ)   (2.70)

with

V0 = 1 + a(1 − Tr)^(1/3) + b(1 − Tr)^(2/3) + c(1 − Tr) + d(1 − Tr)^(4/3)   (2.71)

Vδ = [e + f Tr + g Tr² + h Tr³]/(Tr − 1.00001)   (2.72)

7 physical property data service.

where Tr = T/Tc. The coefficients a . . . h are

a = −1.52816   e = −0.296123
b = 1.43907    f = 0.386914
c = −0.81446   g = −0.0427258
d = 0.190454   h = −0.0480645

Example
Estimate the liquid density at saturation of o-xylene at t = 44.95 °C using
a) an experimental data point ρL(99.95 °C) = 810.24 kg/m³
b) the critical volume as estimation for v*
Given values are M = 106.165 g/mol, vc = 372.509 cm³/mol, ω = 0.312, Tc = 630.259 K.

Solution
a) Using the experimental data point, we vary v* in a way that we reproduce the data point at 99.95 °C. The result is v* = 367.49 cm³/mol. For t = 44.95 °C, i. e. T = 318.1 K, we get V0 = 0.363 from Equation (2.71) and Vδ = 0.238 from Equation (2.72). Then, Equation (2.70) gives ρ(44.95 °C) = 859.7 kg/m³.
b) Without an experimental data point, we estimate v* = vc. V0 and Vδ stay the same. Then, Equation (2.70) gives ρ(44.95 °C) = 848.1 kg/m³.

The experimental value is ρ(44.95 °C) = 858.5 kg/m³ [326].
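The example can be retraced numerically. The sketch below implements Equations (2.70)–(2.72) with the coefficients listed above and uses the o-xylene data of the example; the results should come out close to the values quoted there:

```python
# COSTALD saturated liquid molar volume, Eqs. (2.70)-(2.72)
A_V0 = (-1.52816, 1.43907, -0.81446, 0.190454)        # a, b, c, d
A_VD = (-0.296123, 0.386914, -0.0427258, -0.0480645)  # e, f, g, h

def costald_volume(T, Tc, omega, v_star):
    """Saturated liquid molar volume v = v* * V0 * (1 - omega * Vdelta)."""
    Tr = T / Tc
    a, b, c, d = A_V0
    e, f, g, h = A_VD
    V0 = 1 + a*(1 - Tr)**(1/3) + b*(1 - Tr)**(2/3) + c*(1 - Tr) + d*(1 - Tr)**(4/3)
    Vd = (e + f*Tr + g*Tr**2 + h*Tr**3) / (Tr - 1.00001)
    return v_star * V0 * (1 - omega * Vd)

# o-xylene data from the example above
M, vc, omega, Tc = 106.165, 372.509, 0.312, 630.259   # g/mol, cm3/mol, -, K

# a) adjust v* to the data point rho(99.95 degC) = 810.24 kg/m3 (T = 373.10 K)
v_star = (M / 0.81024) / costald_volume(373.10, Tc, omega, 1.0)   # cm3/mol
rho_a = 1000 * M / costald_volume(318.10, Tc, omega, v_star)      # kg/m3 at 44.95 degC

# b) use the critical volume as estimate for v*
rho_b = 1000 * M / costald_volume(318.10, Tc, omega, vc)          # kg/m3
# results come out close to the book's 367.49 cm3/mol, 859.7 and 848.1 kg/m3
print(round(v_star, 2), round(rho_a, 1), round(rho_b, 1))
```

Note how little accuracy is lost in case b): the purely predictive variant with v* = vc misses the experimental 858.5 kg/m³ by only about 1 %.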

The vapor pressure is the most important quantity in thermodynamics. It is decisive especially in the simulation of distillation columns. Furthermore, it is directly related to the enthalpy of vaporization via the Clausius–Clapeyron equation (Chapter 2.8). The vapor pressure is an exponential function of temperature, starting at the triple point and ending at the critical point. It spans several orders of magnitude; therefore, a graphical representation usually shows only part of its characteristics. Figure 2.14


Figure 2.14: Typical vapor pressure plots as a function of temperature [11]. © Wiley-VCH Verlag GmbH & Co. KGaA. Reproduced with permission.

Figure 2.15: Deviation plot for the fit of a vapor pressure equation [11]. © Wiley-VCH Verlag GmbH & Co. KGaA. Reproduced with permission.

shows two diagrams, with linear and logarithmic axes for the vapor pressure of propylene. The linear diagram makes it impossible to identify even the qualitative behavior at low temperatures, whereas in the logarithmic diagram only the order of magnitude can be seen on the axis. At least, logarithmic diagrams allow a comparison between the vapor pressures of different substances, e. g. deciding whether they intersect or not. On the other hand, many process calculations require extremely accurate vapor pressure curves, e. g. the separation of isomers by distillation. For the visualization of a fit of a vapor pressure curve, special techniques must be applied; a simple graphical comparison between experimental and calculated data in a diagram is not possible. For this purpose, a deviation plot is useful. Figure 2.15 shows a quite plausible one. The quality of the data is relatively good; most deviations are in the range of ± 0.4 %, which is not excellent but acceptable. At low temperatures, the deviations are larger, which is typical. However, they are still below 1 %. The deviations scatter around the zero line without any systematic trend. This indicates that the errors are random and that the correlation used is sufficiently capable.
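The numbers behind such a deviation plot are simply the percentage deviations of the correlated from the experimental values. A sketch with made-up data points (not real propylene data):

```python
# Percentage deviations between "experimental" and correlated vapor pressures.
T_exp   = [280.0, 300.0, 320.0, 340.0]   # K (hypothetical)
ps_exp  = [0.980, 3.050, 8.100, 18.60]   # bar, measured (hypothetical)
ps_calc = [0.975, 3.060, 8.070, 18.65]   # bar, from the fitted correlation

dev = [100.0 * (pc - pe) / pe for pe, pc in zip(ps_exp, ps_calc)]
for T, d in zip(T_exp, dev):
    print(f"{T:6.1f} K  {d:+6.2f} %")
```

Values scattering around zero without a trend, as here, indicate random errors; a systematic bow or offset would hint at an unsuitable correlation or inconsistent data.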


Most companies use parameter databanks. When a parameter database is set up, e. g. the one in a commercial process simulation program, it cannot be known in advance at which conditions exact vapor pressures will be required. For distillation applications, a high accuracy is often required, especially when components with similar vapor pressures have to be separated in distillation columns. Simple vapor pressure equations cannot be applied in the whole temperature range from the triple point to the critical point. More capable vapor pressure correlations are needed; the most popular ones are the extended Antoine equation and the Wagner equation. In principle, the extended Antoine equation is a collection of various useful terms:

ln(ps /p0) = A + B/(T/K + C) + D ⋅ (T/K) + E ln(T/K) + F (T/K)^G,   (2.73)

where p0 is the pressure unit.8 A very useful correlation is the Wagner equation

ln(ps /pc) = (1/Tr)[A(1 − Tr) + B(1 − Tr)^1.5 + C(1 − Tr)^3 + D(1 − Tr)^6],   (2.74)

where Tr = T/Tc (reduced temperature). It can correlate the whole vapor pressure curve from the triple point to the critical point with excellent accuracy. The Wagner equation has been developed by identifying the most important terms in Equation (2.74) with a structural optimization method [39]. The extraordinary capability of the equation has also been demonstrated by Moller [40], who showed that the Wagner equation can reproduce the difficult term Δhv /(ZV − ZL) reasonably well. Eq. (2.74) is called the 3–6-form, where the numbers refer to the exponents of the last two terms. Some authors [41] prefer the 2.5–5-form, which is reported to be slightly more accurate. For the application of the Wagner equation, accurate critical data are required. As long as the experimental data points involved are far away from the critical point (e. g. only points below atmospheric pressure), estimated critical data are usually sufficient. As the critical point is automatically met due to the structure of the equation, the Wagner equation extrapolates reasonably to higher temperatures, even if the critical point is only estimated. However, like all vapor pressure equations it does not extrapolate reliably to lower temperatures. Sometimes, users of process simulation programs calculate vapor pressures beyond the critical point,9 although this is physically meaningless. If the Wagner equation is applied above the critical temperature, it will yield a mathematical error. Therefore, the simulation program must provide an extrapolation function that continues the vapor pressure line with the same slope. For the particular parameters of the Wagner equation, the following ranges of values are reasonable for both forms:

A = −10 . . . −5
B = −10 . . . 10
C = −10 . . . 10
D = −20 . . . 20   (2.75)

If these ranges are exceeded, one should carefully check the critical data used and the experimental data points for possible outliers. Coefficients for the Wagner equation can be found e. g. in [41–43]. For a vapor pressure correlation, average deviations should be well below 0.5 %. Data points with correlation deviations larger than 1 % should be rejected, as long as there are enough other values available. Exceptions can be made for vapor pressures below 1 mbar, as the accuracy of the measurements is lower in that range. The structure of the deviations should always be carefully interpreted. A guideline is given in [11]. Despite this high accuracy demand for vapor pressures, there is also a need for good estimation methods. Often, a lot of components are involved in a distillation process. Not all of these components are really important; however, one should know whether they end up at the top or at the bottom of a distillation column. In many cases, a measurement would not even be possible, as the effort for the isolation and purification of these components might be too large. Estimation methods are mostly applied at medium and low pressures for molecules with a certain complexity, as small molecules usually have well-established vapor pressure equations, whereas large molecules often have a volatility so low that a purification of the substance for measurement by distillation is not possible. The estimation of vapor pressures is one of the most difficult problems in thermodynamics. Due to the exponential relationship between vapor pressure and temperature, a high accuracy must not be expected. Deviations in the range of 5–10 % have to be tolerated. Thus, estimated vapor pressure correlations should not be used for a main substance in a distillation column to evaluate the final design; however, they can be very useful to decide about the behavior of side components without additional measurements.

8 It is not intended that all parameters are used for correlation. The extended Antoine equation is a compilation of useful terms to avoid different equations for any combinations of parameters.
9 maybe on purpose as a slight extrapolation or during an iteration.
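A short sketch of Equation (2.74); the critical data and coefficients below are hypothetical, chosen within the ranges of Equation (2.75). By construction, the curve ends exactly at the critical point:

```python
from math import exp

def wagner_ps(T, Tc, pc, A, B, C, D):
    """Vapor pressure from the Wagner 3-6 form, Eq. (2.74), in the unit of pc."""
    if T > Tc:
        raise ValueError("the Wagner equation is undefined above Tc")
    Tr = T / Tc
    tau = 1.0 - Tr
    return pc * exp((A*tau + B*tau**1.5 + C*tau**3 + D*tau**6) / Tr)

# hypothetical critical data and coefficients, for illustration only
Tc, pc = 600.0, 40.0                  # K, bar
coeffs = (-7.5, 1.5, -2.0, -3.0)

print(wagner_ps(Tc, Tc, pc, *coeffs))  # 40.0 -- the critical point is met exactly
```

The ValueError branch mirrors the remark above: above Tc the equation is mathematically undefined, so a simulator has to switch to a slope-preserving extrapolation there.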
Different estimation methods are discussed in [11]. At least one data point, usually the normal boiling point, should be known. Most methods use a vapor pressure equation with two adjustable parameters; therefore, a second piece of information is necessary. It can be generated by a group contribution method, where the one of Rarey [40] can be regarded as the most useful one. Another option is to take a second data point, which can be either a genuine data point or a point based on an estimation method itself, e. g. the critical point. An example for the latter case is the application of the Hoffmann–Florin equation [44]

ln(ps /p0) = α + β[1/(T/K) − 7.9151 ⋅ 10⁻³ + 2.6726 ⋅ 10⁻³ lg(T/K) − 0.8625 ⋅ 10⁻⁶ (T/K)],   (2.76)


with the adjustable parameters α and β. It has the advantage that it can easily be transformed to the extended Antoine equation (2.73) by

A = α − 7.9151 ⋅ 10⁻³ β
B = β
D = −0.8625 ⋅ 10⁻⁶ β
E = 2.6726 ⋅ 10⁻³ β/ln 10
C = F = G = 0   (2.77)
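The transformation (2.77) can be verified numerically: both forms must give identical values at every temperature, since lg(T) = ln(T)/ln(10). The α and β values below are hypothetical, for illustration only:

```python
from math import log, log10

def ln_ps_hoffmann_florin(T, alpha, beta):
    """Eq. (2.76): ln(ps/p0) with T in K."""
    return alpha + beta * (1.0/T - 7.9151e-3 + 2.6726e-3 * log10(T) - 0.8625e-6 * T)

def ln_ps_ext_antoine(T, alpha, beta):
    """Eq. (2.73) with the coefficients transformed according to Eq. (2.77)."""
    A = alpha - 7.9151e-3 * beta
    B = beta
    D = -0.8625e-6 * beta
    E = 2.6726e-3 * beta / log(10.0)
    return A + B / T + D * T + E * log(T)   # C = F = G = 0

alpha, beta = 15.0, -4500.0   # hypothetical parameters
for T in (300.0, 350.0, 400.0):
    assert abs(ln_ps_hoffmann_florin(T, alpha, beta) - ln_ps_ext_antoine(T, alpha, beta)) < 1e-9
```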

If no single data point is available, one must estimate even the normal boiling point [11, 45]. In this case, one can hardly rely on the results obtained; however, as long as no better information is available, there is no choice. When a vapor pressure has to be estimated, one should also have a look at vapor pressure curves of components which have a similar structure. Just by defining a constant vapor pressure ratio between the two components, one can at least obtain a reasonable vapor pressure curve. Vapor pressures play the most decisive role in distillation if isomers have to be separated. In this case, the separation factor depends only on the ratio of the vapor pressures, as the activity coefficients between isomers can be set to 1 as an approximation.10 If the vapor pressures of the isomers are close, a large number of theoretical stages is necessary, and its determination is very sensitive to the separation factor. In these cases, it is strongly recommended not to rely on data from the literature, not even on good data. Instead, the vapor pressures of the isomers should be measured as accurately as possible in the same apparatus and on the same day to avoid any systematic measurement errors.

2.3.3 Vapor phase association

Another advantage of the γ-φ-approach is that substances showing association in the vapor phase can be described. These substances are the carboxylic acids like formic acid, acetic acid11 or propionic acid, which form dimers in the vapor phase, and hydrogen fluoride, which forms hexamers. These substances are involved in many chemical processes, and the deviations from the ideal gas law are significant even at low pressures. For example, the compressibility factor of acetic acid at the normal boiling point, which is expected to be close to 1, is ZNBP = 0.6. Up to now, no equation of state has been available that is valid for both the vapor and the liquid phase of these substances. However, this is not needed; with the γ-φ-approach, it is sufficient to cover only the vapor phase. The formation of associates is treated as a chemical reaction in equilibrium. As an illustration, formic acid as a substance forming only dimers is regarded. The association can be described with the law of mass action:

K2 = z2 /[z1² (p/p0)],   (2.78)

10 which is, however, not always correct.
11 For acetic acid, an additional tetramer formation is often considered to obtain more accurate results.

where K2 is the equilibrium constant for the reaction

2 HCOOH ⇌ (HCOOH)2

z1 is the true concentration of the monomer, while z2 denotes the true concentration of the dimers in the mixture. p0 is simply the pressure unit. The equilibrium constant can be correlated by

ln K2 = A2 + B2 /T   (2.79)

The sum of the true mole fractions z is equal to one:

z1 + z2 = 1   (2.80)

Combining Equations (2.78) and (2.80), z1 can be determined by

z1 = [√(1 + 4K2 (p/p0)) − 1]/[2K2 (p/p0)]   (2.81)

Assuming that the ideal gas equation is valid, the specific volume can be determined by

v = (RT/p) ⋅ 1/(z1 + 2z2)   (2.82)
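A minimal sketch of Equations (2.78)–(2.82); the equilibrium constant K2 below is a hypothetical value, not a fitted one for formic acid. The apparent compressibility factor 1/(z1 + 2z2) implied by Equation (2.82) lies between 0.5 (complete dimerization) and 1 (no association):

```python
from math import sqrt

def true_monomer_fraction(K2, p_rel):
    """Eq. (2.81): true monomer fraction z1 of a dimerizing vapor; p_rel = p/p0."""
    return (sqrt(1.0 + 4.0 * K2 * p_rel) - 1.0) / (2.0 * K2 * p_rel)

K2, p_rel = 5.0, 1.0            # hypothetical equilibrium constant and pressure
z1 = true_monomer_fraction(K2, p_rel)
z2 = 1.0 - z1                   # Eq. (2.80)

assert abs(K2 - z2 / (z1**2 * p_rel)) < 1e-9   # law of mass action, Eq. (2.78)

Z = 1.0 / (z1 + 2.0 * z2)       # apparent compressibility factor from Eq. (2.82)
print(round(z1, 4), round(Z, 4))  # 0.3583 0.6091
```

With this illustrative K2, Z ≈ 0.61 results, which is the order of magnitude quoted above for acetic acid at its normal boiling point.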

Figure 2.16 illustrates the difference from conventional equations of state like the cubic ones. Case (a) represents the situation in an ideal gas phase; there are no forces between the molecules. Case (b) represents a real vapor phase, where the molecules attract or repel each other by intermolecular forces. Vapor phases like this are typically modeled with the equations of state described in Chapter 2.2 like the cubic equations. Case (c) denotes the association in the vapor phase. The model (Equations (2.78)–(2.82)) takes into account that associates are formed, but it is still assumed that no intermolecular forces are exerted. The nonideality in case (c) is only achieved by the formation of associates. This approximation is sufficient for low pressures. For higher pressures, a good model for substances showing vapor phase association would have to take the intermolecular forces into account as well (case (d)). However, an appropriate model for this situation has not been introduced yet.


Figure 2.16: Illustration of the modeling of vapor phases.

Figure 2.17: Specific isobaric heat capacity of acetic acid vapor at different pressures [11]. © Wiley-VCH Verlag GmbH & Co. KGaA. Reproduced with permission.

For associating substances, the heat capacity of the vapor phase shows a well-defined maximum (Figure 2.17). At low temperatures, the dimerization, as the exothermic reaction, is preferred in the equilibrium, and all molecules are dimerized. When the temperature rises, the dimers are split. For this endothermic reaction, energy is required which is not used for increasing the temperature. Therefore, the heat capacity increases drastically. With increasing temperature, the number of dimers which can be split decreases. The heat capacity passes a maximum and comes down to the normal value of the ideal gas. For the sizing of heat exchangers, this effect must be considered. The curvature is even more dramatic for HF with its hexamers, where the peak of the maximum can be up to 40 times higher than the ideal gas heat capacity [46]. An extensive discussion of the association in the vapor phase can be found in [283].


2.4 Electrolytes

Life is too short to worry about electrolytes. (Georgios Kontogeorgis)

Water is one of the strangest substances that occurs in chemical engineering. Its physical properties are not comparable to the behavior of other substances. With a low molecular weight of 18 g/mol, its normal boiling point of tb = 100 °C is incredibly high, as well as its critical point (tc = 374 °C, pc = 221 bar). The well-known maximum in the density of liquid water at t = 4 °C is technically not important, but the course of the liquid density as a function of temperature is extraordinarily flat. Between 0 °C and 50 °C, the density decreases by just 1 %. For comparison, n-heptane with a similar normal boiling point (98.4 °C) has a density decrease of more than 4 % in the same temperature range. When water crystallizes, it expands significantly by almost 10 %, whereas other substances reduce their volume as expected. The specific heat capacity of liquid water (approx. 4.2 J/(g K)) is about twice as large as for typical organic substances, but the most remarkable physical property is the enthalpy of vaporization. At t = 100 °C, it is approx. 2250 J/g, more than seven times larger than for n-heptane as a typical organic substance. The reason for this behavior is the strong polar character of the molecule. It is not linear; the two bonds between oxygen and hydrogen form an angle of approx. 105°. The oxygen atom is strongly electronegative; it attracts the electrons of the bonds so that the negative charges concentrate at the oxygen atom, while the hydrogen atoms become positively charged. Therefore, the water molecules arrange in a kind of lattice. Moreover, the strong polarity of the water molecule is the reason why it is an excellent solvent for many electrolytes. An electrolyte is a substance which conducts electric current as a result of its dissociation into positively and negatively charged ions in solutions or melts. Ions with a positive charge are called cations, ions with a negative charge are called anions.
The most typical electrolytes are acids, bases, and salts dissolved in a solvent, very often in water. The total charge of an ion is a multiple of the elementary charge (e = 1.602 ⋅ 10⁻¹⁹ C), given by the number z. Examples are:

H3O⁺: z = 1
Cl⁻: z = −1
Ca²⁺: z = 2
SO4²⁻: z = −2

In a macroscopic solution, the sum of charges is always zero, since the solution is always neutral. Otherwise, an electric current would occur. For electrolyte solutions, the particular ions are formed by dissociation reactions like

2.4 Electrolytes

� 61

NaCl → Na⁺ + Cl⁻
CaSO4 → Ca²⁺ + SO4²⁻
H3PO4 + H2O → H3O⁺ + H2PO4⁻
H2PO4⁻ + H2O → H3O⁺ + HPO4²⁻
HPO4²⁻ + H2O → H3O⁺ + PO4³⁻
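As stated above, any macroscopic solution must be electrically neutral. A minimal sketch of this bookkeeping for a speciation such as the ones produced by these reactions (the ions and amounts are hypothetical):

```python
# Electroneutrality: the charge-weighted sum over all ions must vanish.
ions = {
    "Ca2+":   (+2, 1.0),   # (charge number z, amount of substance in mol)
    "SO4 2-": (-2, 1.0),
    "Na+":    (+1, 2.0),
    "Cl-":    (-1, 2.0),
}
net_charge = sum(z * n for z, n in ions.values())
assert net_charge == 0.0   # otherwise an electric current would occur
```

Electrolyte models and simulators enforce exactly this constraint when generating the ionic species.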

The H+ ion cannot exist as a pure proton; it is always attached to a water molecule H2 O, giving H3 O+ . Strong and weak electrolytes can be distinguished. While strong electrolytes like HCl, H2 SO4 , HNO3 , NaCl, or NaOH dissociate almost completely, weak electrolytes do so only to a small extent. Sometimes, their electrolyte character plays a secondary role and can often be neglected. Examples are formic acid (HCOOH), acetic acid (CH3 COOH), H2 S, SO2 , NH3 , or CO2 [11], when they are the only electrolyte solutes. The molecular structure of an electrolyte solution is significantly determined by the electrostatic interactions between the charged ions (Coulomb-Coulomb interactions) and by the long-range interactions of the charged ions and the dipole moments of the solvent (Coulomb-dipole-interactions). Figure 2.18 illustrates the schematic distribution of water as a strongly polar solvent around a cation and an anion.

Figure 2.18: Structure of an aqueous electrolyte solution [11]. © Wiley-VCH Verlag GmbH & Co. KGaA. Reproduced with permission.

As mentioned, the oxygen atom in the water molecule has a negative partial charge due to its high electronegativity. Therefore, the water molecule in the vicinity of an ion is arranged in a way that the oxygen atom is directed towards the positively charged cations. Vice versa, the hydrogen atoms in the water molecule are partially positively charged and oriented towards the anions. Around the ions, a shell of solvent molecules is formed.

The corresponding procedure is called solvation. It is an exothermic process. On the other hand, the dissociation of the electrolyte is an endothermic process, because the ionic lattice has to be destroyed, which requires energy. The overall heat of solution is the sum of both contributions. It is usually dominated by the solvation and therefore remains exothermic. However, there are many exceptions. Electrolytes in their dissociated form are not volatile and remain completely in the liquid phase. However, it can happen that liquid drops containing electrolytes are subject to entrainment (Chapter 9). The process simulation programs offer models for electrolyte solutions which describe the arrangement above. Not necessarily the best, but the most widely used one is the electrolyte NRTL model, often also called Chen model [47, 48]. Its obvious advantage is the compatibility with the conventional NRTL activity coefficient model. This is the most important property of an electrolyte model, often even more important than its accuracy. During a project, process simulation might start in a part of the plant where electrolytes are not involved. Later, when it is extended, electrolyte components occur, and defining these components simply as heavy ends is always tried first.12 Although of course not state of the art, this is often a feasible approach as long as the electrolyte concentration is low and not a decisive issue in the process. When it finally turns out that electrolytes occur in a way that their character must be correctly described, it is useful if at least the BIP matrix does not need to be revised. The NRTL electrolyte model consists of the theoretically derived Debye–Hückel term for the long-range interactions due to the charges of the ions and an NRTL-based term for the short-range interactions. Still, the local composition concept is applied.
The model is not restricted to systems of electrolytes just with water; other solvents and side components are possible. However, the database is mainly based on aqueous systems. Different kinds of parameters occur:
– the normal binary parameters between molecules;
– pair parameters between ion pairs and molecules;
– pair parameters between different ion pairs.
Nowadays, the possible electrolyte reactions and the species produced are generated by the process simulation program. It is up to the user to neglect them or take them into account. For instance, the dissociation reaction of ammonia

NH3 + H2O → NH4⁺ + OH⁻

12 This can simply be accomplished by assigning the same properties to the component as water, and then overwriting the molecular weight with the correct value for maintaining the mass balance and, also, overwriting the vapor pressure with an equation giving negligible values over the whole temperature range. As well, density and heat capacity should be checked; due to the changed molecular weight the correlations might not apply.


can easily be neglected if just the system NH3/H2O is regarded, as only a small fraction of the ammonia dissociates, which is negligible. In a caustic environment, even this hardly ever happens. However, in an acid environment, the ammonia will dissociate, and in fact completely at low pH values. A famous example is the system NH3/CO2/H2O, where both NH3 and CO2 are weak electrolytes, but they keep the other component in the solution, as NH3 is a caustic and CO2 is an acid component. In the process simulation program, the equilibrium constants for the generated dissociation reactions are usually provided automatically, as well as the pair parameters. Note that the pair parameters really refer to a pair of ions and not to single ones. An example for the application of the NRTL electrolyte model is given in [11]. There are two options for the notation of the electrolytes. In the true component approach, the ions are listed as ions. The advantage is that what really happens is listed; the disadvantage might be that it is more difficult to keep an overview. In the apparent component approach, the ions are recombined into components for the result list. This is easier for discussion but does not always work in a plausible way. It can happen that, according to the balance, components like NaOH and HCl coexist in the aqueous solution, which one can hardly imagine. Currently, the apparent component approach is obligatory when liquid-liquid equilibria (see Chapter 2.5) occur.

2.5 Liquid-liquid equilibria

With increasing activity coefficients, two liquid phases with different compositions can be formed (miscibility gap). The concentration differences of the compounds in the two phases can be used, for example, for separation by extraction. In distillation processes, the liquid-liquid equilibrium (LLE) is often used when a decanter separates the condensate of the top product into two liquid phases. The knowledge of the VLLE (vapor-liquid-liquid equilibrium) behavior is of special importance for the separation of systems by heteroazeotropic distillation. Many engineers are of the opinion that the formation of an LLE does not take place in distillation columns. In fact, it does; however, in contrast to a phase equilibrium arrangement, the two phases do not separate, due to turbulence (tray columns), or they form thin layers which trickle down the internals of the column (packed columns). In both cases, it is useful to treat the two liquid phases as one homogeneous liquid for the determination of the overall liquid composition and the physical properties of the liquid phase. For the phase equilibrium calculations, the liquid phase split must definitely be considered to get the correct vapor composition. Liquid-liquid equilibria can be evaluated by the iso-activity criterion

(xi γi)^I = (xi γi)^II,   (2.83)

where the product xi γi is also called the activity ai.
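A minimal numerical sketch of the iso-activity criterion (2.83). Instead of NRTL or UNIQUAC, a symmetric one-parameter activity model (ln γ1 = A(1 − x1)²) is used here, which produces a miscibility gap for A > 2; by symmetry, the second phase is the mirror image of the first. The interaction parameter is hypothetical:

```python
from math import exp, log

A = 2.5   # hypothetical interaction parameter; A > 2 gives a miscibility gap

def f(x):
    # iso-activity condition (2.83) for component 1, reduced with x_II = 1 - x_I
    return log(x / (1.0 - x)) + A * (1.0 - 2.0 * x)

# bisection for the nontrivial root in (0, 0.5); x = 0.5 is the trivial solution
lo, hi = 1e-6, 0.499
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if f(lo) * f(mid) <= 0.0:
        hi = mid
    else:
        lo = mid
x_I = 0.5 * (lo + hi)
x_II = 1.0 - x_I

# check Eq. (2.83) explicitly for both components
a1_I, a1_II = x_I * exp(A * (1 - x_I)**2), x_II * exp(A * (1 - x_II)**2)
a2_I, a2_II = (1 - x_I) * exp(A * x_I**2), (1 - x_II) * exp(A * x_II**2)
assert abs(a1_I - a1_II) < 1e-8 and abs(a2_I - a2_II) < 1e-8
print(round(x_I, 3), round(x_II, 3))  # 0.145 0.855
```

The same iso-activity condition, solved with the full multicomponent activity model, is what a process simulator evaluates in its 3-phase flash.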

At moderate pressures, the liquid-liquid equilibrium behavior as a function of temperature only depends on the temperature dependence of the activity coefficients. For the calculation of LLEs, again gE models like NRTL, UNIQUAC, or equations of state with a gE mixing rule (Chapter 2.7) can be used, whereas the Wilson model is not appropriate [11]. The formation of two liquid phases can result in binary and higher heteroazeotropes, which can for instance be used for the separation of systems like ethyl acetate–water (Figure 2.22). Ternary LLEs can be illustrated in a triangle diagram (Figure 2.19). It is more difficult to calculate reliable liquid-liquid equilibria (LLE) of a system containing three or more components using binary parameters than to describe vapor-liquid equilibria.

Figure 2.19: Liquid-liquid equilibrium of the ternary system ethyl acetate–water–ethanol at t = 50 °C.

This is the main reason why up to now no reliable prediction of a multicomponent LLE behavior (tie lines) using binary parameters is possible. Fortunately, it is quite easy to measure LLE data of ternary and higher systems at least up to atmospheric pressure. Binary parameters can be fitted to ternary data as well, and in this way LLEs can at least be correlated.


Also, even the fit to a binary mixture is often a bit more laborious, as priorities must be set. It hardly ever happens that a set of binary parameters can represent both the vapor-liquid equilibrium of systems with an LLE (the so-called vapor-liquid-liquid equilibrium VLLE) and the miscibility gap itself. It must be decided which capabilities the parameters should have. Often, two different parameter sets are used, one for LLE and one for VLLE. In most cases, the LLE data are more significant, especially for systems with wide miscibility gaps. Figure 2.20 explains the situation using the binary system water / n-butanol. The blue symbols denote measured mutual solubilities as functions of temperature (LLE data). There is a good agreement among different authors (different shapes of the symbols), giving confidence that the data are trustworthy. The situation is different for the red symbols. These are miscibility gaps obtained by the fit of isothermal data sets for the vapor-liquid-liquid equilibrium (VLLE data). The mutual solubilities are the concentrations where the boiling point curve turns into a horizontal line (see Figure 2.22). While the solubilities of n-butanol in water are pretty well met, there is a severe disagreement for the solubility of water in the organic phase, with differences up to 10 %. It looks like a terrible measurement error, but several well-known authors have the same results, and similar findings are obtained for water / isobutanol and water / 2-butanol. Maybe in VLLE measurement the mass transfer between the two liquid phases is too slow for the equilibrium to form. It is obvious that the calculation itself cannot be the reason; it cannot, of course, give two different results simultaneously. So, the recommended workaround in cases like this is to use the parameters obtained from VLLE measurements when vapor-liquid separations are considered (e. g. distillation) and the parameters obtained from LLE data for liquid-liquid separations (e. g. decanter, extraction).

Figure 2.20: Inconsistency of measured miscibility gaps from VLLE and LLE measurements. Red symbols: VLLE, blue symbols: LLE. Data taken from [403–407].

Moreover, the binary parameters for LLEs cannot simply be transferred from binary to ternary and multicomponent mixtures. Doing so, one obtains more or less an estimation. One should be aware that for a good description of a multicomponent LLE at least data from ternary mixtures are necessary.

Often, when a colleague tells me that his simulation results look strange, the first thing I do is check whether the VLLE option is actually used for systems with a miscibility gap. Forgetting it is a very common error in the simulation of vapor-liquid separations. Simulation results may look strange in many ways. For this case, the trigger is that the temperature is way too low, not just by a few degrees but by 100 K or so. (Jo Sijben)

For calculating a phase equilibrium, there are two options in a process simulator. The default is the common VLE calculation. If it is known that miscibility gaps occur in the system, the option VLLE (3-phase equilibrium) must be chosen so that the simulator checks whether there is an LLE before the equilibrium with the vapor phase is evaluated. If this is forgotten, one gets strange phase equilibrium diagrams (Figure 2.21). The boiling and the dew point curve then appear to be complete nonsense. The correct and reasonable result is obtained with the 3-phase flash (Figure 2.22). In a process simulation, the error is not so easy to detect, but whenever a result is not plausible, it should be checked whether the VLLE option is necessary and chosen. One could easily say that VLLE should always be chosen, but if it turns out to be unnecessary, a lot of calculation time has been wasted, as the VLLE option is quite time-consuming in comparison with a simple VLE.

Figure 2.21: Calculation of the system ethyl acetate–water at t = 80 °C with the 2-phase flash.

Figure 2.22: Calculation of the system ethyl acetate–water at t = 80 °C with the 3-phase flash.


Generally, it can be said that systems with water and nonpolar organic substances have strong intermolecular interactions and often form miscibility gaps. Therefore, at least all the BIPs with water should be assigned in any project (Chapter 2.9).

2.6 Solid-liquid equilibria

Solid-liquid equilibria (SLE) are used for the synthesis and design of crystallization processes, but taking them into account is also important to avoid undesired solids formation. Information about solid-liquid equilibria can also be used for the adjustment of binary parameters. SLEs are more complicated than VLEs or LLEs. Different types of SLEs have to be distinguished, depending on the mutual solubility of the components in the solid and in the liquid phase. However, the most important one, the simple eutectic system, is comparably easy, and it is the only one which does not require a specialist. The eutectic system is characterized by the total immiscibility of the components in the solid phase. This is advantageous for crystallization, as the crystallized phase has a high purity. One theoretical stage is sufficient to obtain the pure compounds [11]. Usually, there are liquid mechanical inclusions so that the crystallization must at least be repeated once, but from thermodynamics alone a pure solid phase is generated. Fortunately, about 80 % of the systems behave in this way. Figure 2.23 shows the solid-liquid equilibrium of the eutectic system benzene–naphthalene. Both solid phases crystallize in pure form. Generally, solids are formed at a low temperature. Consider a mixture with x1 = 0.5. When it is cooled, it reaches the liquidus line at T ≈ 320 K. The first naphthalene crystals are formed. Cooling down further, the amount of solids increases according to the lever rule, while the concentration of the liquid phase moves towards the eutectic one. At 300 K, it is approx. x1 = 0.7. When the eutectic temperature is reached, both components crystallize forming pure solid phases. The eutectic temperature is lower than the melting points of the participating pure components. Thermodynamics does not give

Figure 2.23: Solid-liquid equilibrium of the eutectic system benzene–naphthalene [11]. © Wiley-VCH Verlag GmbH & Co. KGaA. Reproduced with permission.

information about the shape of the crystals and the amount of liquid included. It is the art of crystallization to form the crystals in the desired way (Chapter 7.3). For eutectic systems, the equilibrium condition can be written as [11]

ln (xi γi) = −(Δhm,i/(RT)) (1 − T/Tm,i) + ((cLp,i − cSp,i)/R) ((Tm,i − T)/T − ln (Tm,i/T))   (2.84)

The heat capacities of the solids are often not known, but, fortunately, the difference in the corresponding term has a tendency to cancel out. Therefore, Equation (2.84) is usually simplified to

ln (xi γi) = −(Δhm,i/(RT)) (1 − T/Tm,i)   (2.85)

For an evaluation of Equation (2.85), only the activity coefficient and the melting temperature and enthalpy of fusion as pure component properties must be known. Solid-liquid equilibria are not very sensitive to pressure but much more to temperature. As the activity coefficient γi of the component i is strongly concentration-dependent, the mole fraction xi in the liquid phase must be evaluated iteratively. In process simulation programs, special solid components can be introduced which do not take part in other phase equilibria. For crystallization, reaction blocks can be introduced where the liquid component is transformed into the solid or vice versa. The “reaction equilibrium” is calculated according to Equation (2.84) or (2.85), respectively.
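As a small worked example of Equation (2.85), the sketch below iterates the solubility of naphthalene in benzene. The melting data (Tm ≈ 353.4 K, Δhm ≈ 19.06 kJ/mol) are literature values; γ = 1 is assumed here, as the system is nearly ideal, but any activity coefficient model can be passed in:

```python
import math

R = 8.314  # J/(mol K)

def sle_solubility(T, dh_m, T_m, gamma=lambda x: 1.0, iters=50):
    """Solve Eq. (2.85), ln(x*gamma) = -dh_m/(R T) * (1 - T/T_m), for x
    by direct iteration; gamma(x) is any activity coefficient model."""
    rhs = math.exp(-dh_m / (R * T) * (1.0 - T / T_m))
    x = rhs                   # ideal solution as starting value
    for _ in range(iters):
        x = rhs / gamma(x)    # x = exp(...) / gamma(x)
    return x

# Naphthalene in benzene at 40 °C; with gamma = 1 the solubility is
# approx. x = 0.43, independent of the solvent (ideal solubility).
print(round(sle_solubility(313.15, 19060.0, 353.4), 3))  # 0.434
```

Note that the solute, not the solvent, supplies Tm and Δhm, and that the result with γ = 1 is the same for every solvent; the nonideality enters only through γ.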

2.7 φ-φ-approach with gE mixing rules

Nowadays, in process simulation in the chemical industry the requirement for accuracy and reliability on the one hand and the occurrence of polar components on the other hand make clear that advanced cubic equations of state with gE mixing rule and individual α-functions should be preferred to the generalized equations of state like Peng–Robinson or Soave–Redlich–Kwong with the quadratic mixing rule (Equation (2.38)). The generalized ones might still be tolerable in hydrocarbon processes; however, the advanced cubic equations of state have no drawback there. The gE mixing rule adopts the concept of the activity coefficients for use in the mixing rule for the parameter a. Examples are the Predictive Soave–Redlich–Kwong equation (PSRK) and the Volume-Translated Peng–Robinson equation (VTPR). For PSRK, the gE mixing rule reads

a/(bRT) = ∑i xi aii/(bi RT) − (1/0.64663) (g0E/(RT) + ∑i xi ln (b/bi))   (2.86)

where g0E denotes the Gibbs excess energy at p = 0, i. e. at low pressure:

gE = RT ∑i xi ln γi   (2.87)

To calculate the γi , any appropriate equation like Wilson, NRTL, or UNIQUAC or a predictive approach like UNIFAC (Chapter 2.10) can be used. Note that the corresponding BIPs are different from the ones to be used with the γ-φ-approach. With the g E mixing rules, the general inability of the φ-φ-approach to describe mixtures with polar components can be overcome. There has been a great deal of discussion on whether the φ-φ-approach Equation (2.15) or the γ-φ-approach Equation (2.44) is the more favorable option for phase equilibrium calculation, often in a more or less ideological way. The following items point out the particular pros and cons: – If supercritical components dissolve in the liquid phase to a considerable amount, it is nowadays obligatory to use the φ-φ-approach, preferably with a g E -mixing rule. – Theoretically, the γ-φ-approach is valid in the whole subcritical region. However, in the region close to the critical temperature of a component (approx. T > Tc − 15 K) the φ-φ-approach is more reliable. – As long as supercritical components dissolve in the liquid phase to a minor extent, (e. g. nitrogen in water), there is no real disadvantage if Henry’s law with the corresponding mixing rule is used instead of the φ-φ-approach. Although the mixing rule Equation (2.64) is more or less arbitrary, it leads to the correct order of magnitude of the gas solubility. As in processes these equilibria are usually not reached, this information from process simulation is fully sufficient. – If associating components are involved, the γ-φ-approach is obligatory. So far, there is no established equation of state valid for both vapor and liquid phase for associating components. – One must be aware that with the φ-φ-approach all thermodynamic properties (ρL , ps , Δhv , cpL ) are calculated just by the equation of state. This means that any extension of the database for one property affects all other properties, giving a lot of work. 
Furthermore, due to the limited number of parameters used, the accuracy of the particular quantities is often not sufficient. An exception are, of course, the high-precision equations of state for pure components. There is no option for setting priorities in the enthalpy calculations (Chapter 2.8). With the γ-φ-approach, all properties can be correlated separately and with the required accuracy, as long as enough data are available. – An often cited prejudice is that with the φ-φ-approach only data for a correlation for cpid is required. In fact, this statement is not well founded. From the formal point of view, all the thermodynamic properties as listed above can be calculated without further information. However, for instance, a vapor pressure curve is in fact not used directly in the φ-φ-approach, but the acentric factor or the individual α-function are finally based on vapor pressure data as well, and often they represent the data with a lower accuracy. For process calculations, it is necessary to

compare the values obtained with experimental data and then probably adjust the parameters involved; a procedure with the disadvantage mentioned above that the parameters influence all quantities. Finally, a responsible use of the φ-φ-approach requires presumably more preparation work than the γ-φ-approach. An argument against the γ-φ-approach is that it does not include the pressure dependence of the activity coefficient. At high pressures, even small values of the excess volume could have an influence on the results [49]. There are several approaches to encounter this argument. First, the fitting of data at the corresponding temperatures and pressures should achieve some kind of error compensation so that the data are still represented [49]. Second, a further correction taking the excess volume into account could be included in the phase equilibrium condition (2.44) [50], which is, however, not considered in the process simulators. And finally, it is clear that the φ-φ-approach takes the excess volume into account in a formally correct way; but there is hardly any evidence that the excess volume is represented correctly. For the adjustment of the parameters, there is often a disagreement between the vapor pressure equation used, which is usually well-founded, and the pure component vapor pressure given as a data point in a binary vapor-liquid equilibrium data set. It is a common and successful practice to replace the vapor pressure from the correlation by the pure component vapor pressure given in the particular data set just for the parameter fitting procedure to avoid inconsistencies with the rest of the data and to get more reliable values for the γi∞ -values (vapor pressure shifting). In the process simulation, the vapor pressure correlation is then used again together with the adjusted binary parameters [11]. 
In the γ-φ-formalism, this is an easy change, whereas in the φ-φ-approach the individual α-function or the critical pressure would have to be manipulated, which has also an influence on the other quantities. It is a considerable disadvantage for the project administration that the g E mixing rules use the equations for the determination of the activity coefficients in a different context. The same parameters do not yield the same activity coefficients. This leads to confusion when both approaches are used in a project simultaneously.
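The PSRK mixing rule (2.86) can be transcribed directly; the sketch below uses hypothetical numbers for the pure component parameters, and in a real application g0E/(RT) would come from an activity coefficient model such as UNIFAC rather than being passed in as a constant:

```python
import math

R = 8.314       # J/(mol K)
A0 = -0.64663   # PSRK constant

def psrk_a(x, a_ii, b_i, gE0_RT, T):
    """PSRK g^E mixing rule, Eq. (2.86): returns the mixture parameter a.
    x: mole fractions, a_ii: pure component attraction parameters,
    b_i: pure component covolumes, gE0_RT: g0^E/(RT) from an activity
    coefficient model."""
    b = sum(xi * bi for xi, bi in zip(x, b_i))   # linear mixing rule for b
    a_over_bRT = (sum(xi * ai / (bi * R * T) for xi, ai, bi in zip(x, a_ii, b_i))
                  + (gE0_RT + sum(xi * math.log(b / bi)
                                  for xi, bi in zip(x, b_i))) / A0)
    return a_over_bRT * b * R * T

# Consistency check with hypothetical parameters: for an ideal mixture
# (gE0 = 0) with equal covolumes, a reduces to the linear average of a_ii.
print(psrk_a([0.5, 0.5], [2.0, 4.0], [1e-4, 1e-4], 0.0, 300.0))  # 3.0
```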

2.8 Enthalpy calculations

For the evaluation of heating and cooling duties in a process, a correct description of the specific enthalpies is decisive. As all components are more or less present in both the liquid and the vapor phase, the difficulty is that a continuous enthalpy description in both phases is necessary. The following quantities can contribute to the enthalpy:
– standard enthalpy of formation Δhf0 at t = 25 °C in the ideal gas state, used as reference point for the specific enthalpy;
– ideal gas heat capacity cpid;
– enthalpy of vaporization Δhv;

– liquid heat capacity cpL;
– enthalpy pressure correction of the vapor phase (h − hid);
– excess enthalpy hE.

First, it should be remembered that the absolute value of the enthalpy is normally meaningless; only differences between specific enthalpies can be interpreted. A single value for the enthalpy is only useful if a reference point is given. With an arbitrary choice of the reference point (e. g. h = 0 for t = 0 °C), the calculation of chemical reactions is awkward; it only makes sense if pure components alone are involved. In process simulation programs, the use of the standard enthalpy of formation as reference point for the enthalpy makes sure that the enthalpies of reaction can be correctly calculated. This is further explained in Chapter 10.1. Only the standard enthalpies of formation and the ideal gas heat capacities are explicitly necessary if the φ-φ-approach is used, as the deviations from the ideal gas can be calculated directly. Equations of state with generalized α-functions are usually not accurate enough; therefore, individual α-functions of advanced cubic equations of state with component-specific parameters have to be fitted to Δhv, cpL, and ps to ensure that the equation of state works. It turned out that the adjustment of ps and cpL is sufficient to obtain good results [22]. For the γ-φ-approach, there are more options available, as the particular quantities are not independent of each other. For a pure substance, knowing three of the four quantities cpid, Δhv, cpL, (h − hid), the fourth one can be evaluated. Unfortunately, it turns

out that this does not work well for cpid and cpL . There are two ways for the description of the enthalpy in process simulators, illustrated using hT diagrams (Figures 2.24, 2.28). For a pure substance, they show the bubble point line, the dew point line and the ideal gas enthalpy at p = 0 for guidance.

Figure 2.24: Enthalpy description of a pure liquid using the vapor as the starting phase [11]. © Wiley-VCH Verlag GmbH & Co. KGaA. Reproduced with permission.

(A) Vapor as the starting phase: With this route, the cpL is the quantity which is determined indirectly (Figure 2.24). Liquid enthalpies of the pure components are determined with considerable deviations via

hL,i(T) = Δhf0,i + ∫[T0, T] cpid,i dT + (h − hid)i(T, psi) − Δhvi(T)   (2.88)
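Equation (2.88) can be sanity-checked for water at the reference temperature T0 = 25 °C, where the integral vanishes: neglecting the small pressure correction (h − hid), the route should reproduce the well-known standard enthalpy of formation of liquid water. The numbers below are standard table values:

```python
# Sanity check of route (A), Eq. (2.88), for water at T = T0 = 298.15 K:
# h_L = dh_f0(ideal gas) + 0 (integral) + (h - h_id) (neglected) - dh_v
dh_f0_gas = -241.8e3   # J/mol, standard enthalpy of formation of steam
dh_v_25C = 44.0e3      # J/mol, enthalpy of vaporization of water at 25 °C

h_liquid = dh_f0_gas - dh_v_25C
print(h_liquid)  # -285800.0 J/mol, the formation enthalpy of liquid water
```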

Figure 2.25 shows that even for a well-known pure substance like water the errors can be larger than 10 % (“old fit” in Figure 2.25), which cannot be accepted and which often causes severe problems, for example, in the design of heat exchangers for liquids.

Figure 2.25: Results for cpL of water as a function of temperature using Route A.

The reason for this behavior is the poor representation of the slope of the enthalpy of vaporization. A procedure has been developed [51] to fit the coefficients for the correlation of the enthalpy of vaporization (e. g. Equations (2.95) and (2.96), see below) to cpL data as well, using the temperature derivatives of both sides of Equation (2.88). It is well known that functions giving approximately the same values can differ significantly in their derivatives, especially if the slopes are flat. At temperatures far below the critical point, the enthalpy of vaporization can be regarded as such a function. Figure 2.26 shows two fits for the enthalpy of vaporization of water. In the temperature range 0–200 °C, both fits show comparable results. In fact, the difference is never more than 1 %, and it is hardly visible. The surprising result for the derivatives is shown in Figure 2.27. There are significant differences. The dashed


Figure 2.26: Two fits of the enthalpy of vaporization of water.

Figure 2.27: Slopes of the two fits.

lines represent the best fit to the high-precision data for the enthalpy of vaporization of water. For the full line, the coefficients for the correlation of Δhv have been fitted using the objective function

OF = w1 ∑i ((Δhvi,calc − Δhvi,exp)/Δhvi,exp)² + w2 ∑j ((cLp,j,calc − cLp,j,exp)/cLp,j,exp)²   (2.89)

between 0 and 200 °C. The values for (h − hid) have been calculated with the Peng–Robinson equation of state. The weighting factors w1 and w2 have been chosen in a way that for both Δhv and cpL the results are satisfactory. Figure 2.25 shows that

the results for cpL have improved significantly in the range between 0 and 200 °C, which was used for the adjustment (new fit). The extrapolation is considered to be

acceptable, taking into account that, due to the increasing deviations for (h − hid), high-precision results cannot be expected. In general, the deterioration in the correlation for the enthalpy of vaporization is acceptable. For most substances, the data can still be reproduced within 1 %. In [51], it is also explained how this procedure works for associating substances. The effect of pressure on the liquid enthalpy is considered to be small and therefore neglected. (B) Liquid as the starting phase: The reference enthalpy is set for a liquid state and adjusted in a way that the standard enthalpy of formation in the ideal gas state at t = 25 °C is met. The transition to the vapor phase is performed at a certain arbitrary temperature Ttrans, usually the normal boiling point. Figure 2.28 illustrates the calculation route for an enthalpy of a saturated vapor using route (B). The enthalpy of vaporization is calculated indirectly with this route; the results are usually sufficiently accurate at low temperatures but become even qualitatively wrong in the vicinity of the critical point [11]. Despite the fact that it is calculated directly, it can turn out that cpL is again the problem quantity. It is often only measured for temperatures below the normal boiling point, and its extrapolation to high temperatures is often poor.

Figure 2.28: Calculation of the enthalpy of saturated vapor of a pure substance using the liquid as the starting phase [11]. © Wiley-VCH Verlag GmbH & Co. KGaA. Reproduced with permission.
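The derivative sensitivity discussed for route (A) can be demonstrated numerically: two Δhv correlations that agree within 1 % can differ in their slopes, and thus in the cpL obtained via Equation (2.88), by far more. The Watson-type form and the small perturbation below are hypothetical stand-ins for the two fits of Figures 2.26 and 2.27:

```python
import math

Tc = 647.1  # K, critical temperature of water

def f1(T):
    """Watson-type sketch of dh_v(T) in J/mol (illustrative coefficients)."""
    return 52.0e3 * (1.0 - T / Tc) ** 0.38

def f2(T):
    """A second 'fit': f1 plus a small oscillating perturbation (< 1 %)."""
    return f1(T) * (1.0 + 0.01 * math.sin(T / 20.0))

def ddT(f, T, h=1e-3):
    """Central difference approximation of df/dT."""
    return (f(T + h) - f(T - h)) / (2.0 * h)

Ts = [273.15 + 10.0 * k for k in range(20)]   # 0 ... 190 °C
max_val_dev = max(abs(f2(T) - f1(T)) / f1(T) for T in Ts)
max_slope_dev = max(abs(ddT(f2, T) - ddT(f1, T)) / abs(ddT(f1, T)) for T in Ts)
print(max_val_dev, max_slope_dev)   # values agree closely, slopes do not
```

On this grid the values never deviate by more than 1 %, while the slopes differ by tens of percent, which is exactly the effect that spoils the indirectly calculated cpL.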

For both routes, the change from pure components to the mixture is performed at the system temperature. For gases, the mixing takes place in the ideal gas state, where the excess enthalpy is zero. Liquid enthalpies are mixed linearly; the excess enthalpy is often neglected.

Correlations for cpid, Δhv, and cpL supported by the various process simulation programs are discussed in [11]. The most important ones are as follows:
– for cpid:
  – Aly–Lee equation:

    cpid = A + B ((C/T)/sinh (C/T))² + D ((E/T)/cosh (E/T))²   (2.90)

  – PPDS equation:

    cpid = RB + R(C − B)y² [1 + (y − 1)(D + Ey + Fy² + Gy³ + Hy⁴)],   (2.91)

    with y = T/(A + T).
  – polynomial:

    cpid = A + BT + CT² + DT³   (2.92)

– for cpL:
  – polynomial:

    cpL = A + BT + CT² + DT³ + ET⁴ + FT⁵   (2.93)

  – PPDS equation:

    cpL = R (A/(1 − Tr) + B + C(1 − Tr) + D(1 − Tr)² + E(1 − Tr)³ + F(1 − Tr)⁴)   (2.94)

– for Δhv:
  – DIPPR13 equation:

    Δhv = A(1 − Tr)^(B + C·Tr + D·Tr² + E·Tr³)   (2.95)

  – PPDS equation:

    Δhv = RTc (A(1 − Tr)^(1/3) + B(1 − Tr)^(2/3) + C(1 − Tr) + D(1 − Tr)² + E(1 − Tr)⁶)   (2.96)
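The limiting behavior of the Aly–Lee equation (2.90) is worth remembering when checking fitted coefficients: both hyperbolic terms vanish at low temperature (cpid → A), and at high temperature the sinh term approaches B while the cosh term vanishes again (cpid → A + B). A sketch with hypothetical coefficients, loosely in the usual DIPPR units of J/(kmol K):

```python
import math

def aly_lee(T, A, B, C, D, E):
    """Aly-Lee correlation, Eq. (2.90), for the ideal gas heat capacity."""
    u, v = C / T, E / T
    return A + B * (u / math.sinh(u)) ** 2 + D * (v / math.cosh(v)) ** 2

# Hypothetical coefficients, just to show the limiting behavior:
A, B, C, D, E = 33.0e3, 26.0e3, 2.6e3, 9.0e3, 1.2e3
print(aly_lee(50.0, A, B, C, D, E))    # ~A = 33000: both terms vanish
print(aly_lee(1.0e6, A, B, C, D, E))   # ~A + B = 59000: sinh term -> B
```

The D/E term contributes only at intermediate temperatures, which is what gives the correlation its flexibility there.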

In the particular equations, A, B, . . . , F are the adjustable parameters. R is the general gas constant. It should be mentioned that most data for Δhv are based on the Clausius–Clapeyron equation (2.7). Using a good vapor pressure equation and an appropriate equation of state for v′′ , an error of approximately 2 % can be expected. The application of the

13 Design Institute for Physical Property Data.

Clausius–Clapeyron equation should be avoided at vapor pressures ps < 1 mbar due to inaccuracies in dps/dT and in the vicinity of the critical point, as the description of v′′ becomes weaker and weaker in this area [11].
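The order of magnitude of the error can be reproduced with a short sketch: applying the Clausius–Clapeyron equation with the liquid volume neglected and v′′ taken from the ideal gas law gives Δhv ≈ RT² d ln ps/dT. With the classic Antoine constants for water, the result at 100 °C is about 41.5 kJ/mol, roughly 2 % above the tabulated 40.7 kJ/mol, consistent with the accuracy statement above:

```python
import math

R = 8.314  # J/(mol K)

def dhv_clausius_clapeyron(T, antoine):
    """dh_v ~ R T^2 dln(ps)/dT, i.e. Clausius-Clapeyron with the liquid
    volume neglected and v'' from the ideal gas law.
    antoine = (A, B, C) for log10(ps/mmHg) = A - B/(C + t), t in °C."""
    A, B, C = antoine
    t = T - 273.15
    dlnps_dT = math.log(10.0) * B / (C + t) ** 2   # the units of ps drop out
    return R * T * T * dlnps_dT

# Water, Antoine constants for roughly 1-100 °C (literature values):
water = (8.07131, 1730.63, 233.426)
print(dhv_clausius_clapeyron(373.15, water) / 1000.0)  # ~41.5 kJ/mol
```

A better equation of state for v′′ (e.g. with the second virial coefficient) removes most of the remaining deviation.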

2.9 Model choice and data management

Any problem, no matter how complicated it may be, has a simple, obvious and generally comprehensible wrong solution. (Harald Lesch)

Process simulators offer a vast number of models, which often confuses the user, generating more disorientation than opportunities. The following paragraph should provide some clarification. Despite the fact that new ideas and solutions are frequently presented in the literature, the number of models in use can in principle be restricted to six. These models are the following.
– Standard activity coefficient model combined with an equation of state, e. g. NRTL-PR: A standard activity coefficient model should cover approx. 80–90 % of the cases. An appropriate equation of state should be used for the vapor phase to make full use of the capabilities of Equation (2.44). This equation can be used in principle for temperatures up to the lowest critical point. The Peng–Robinson equation is the author's favorite equation of state; certainly, there are other options. If single critical points are by far exceeded (e. g. presence of an inert gas), the Henry concept can be applied (Equation (2.61)). When they are exceeded by only a small extent, it is theoretically wrong but pragmatic just to extrapolate the vapor pressure curve; however, this should not be done for main components. It does not make sense in process engineering to use the ideal gas equation for the vapor phase, even if pressures are low. With the ideal gas equation, the enthalpy description is inaccurate (Chapter 2.8). Moreover, in a later state of a project the pressure relief devices must be designed (Chapter 14.2). In this case, the thermodynamic model is applied at pressures far above the normal operation pressure, which then requires a reasonably accurate equation of state for the vapor phase anyway. For the density, care should be taken that the mixing rule (Equation (2.68)) is applied. The correct representation of the enthalpies should be ensured as well.
– Standard activity coefficient model combined with an equation of state for vapor phase association, e. g. NRTL-ASS: If carboxylic acids (formic, acetic, propionic, acrylic, butanoic, etc.) or HF play a decisive role in the process, the use of an association model for the vapor phase is important (Chapter 2.3.3). One should be aware that there are no models which consider both association and the nonidealities caused by elevated pressures. The problems concerning cpL via enthalpy route (A) should be considered [51].


– Electrolyte model, e. g. ELECNRTL: Small amounts of electrolytes can be treated as heavy ends as a workaround. If they play a major role, an electrolyte model considering the dissociation reactions and the interactions of the ions should be used [47, 48]. The electrolyte model can as well be combined with equations of state for the vapor phase, e. g. Peng–Robinson or Redlich–Kwong for high pressures or the association model in case HF occurs.
– Equation of state model (PR, PSRK, VTPR): If the process is mainly operated at high pressures, if supercritical components play a major role with high concentrations in the liquid phase so that Henry's law cannot be successfully applied, or if supercritical components change to the subcritical state in one apparatus, the application of the φ-φ-approach with an equation of state valid for both the vapor and the liquid phase is obligatory (Chapter 2.2). Well-known examples are the polypropylene or natural gas conversion processes. The standard generalized equations of state like Peng–Robinson [21], Soave–Redlich–Kwong [52] or Lee–Kesler–Plöcker [53] have the disadvantage that they perform well for nonpolar molecules (ethane, propane, propylene, etc.), but the more the molecules have a polar character, the more inaccurate their description becomes. A solution to this problem is offered by the advanced cubic equations of state like PSRK or VTPR [11]. They involve pure component parameters in the so-called α-function (Equations (2.34), (2.35)). With this option, the vapor pressures, heat capacities and enthalpies of vaporization of any components can be successfully represented. There are still weaknesses in the liquid density description; even the volume translation in the VTPR equation does not give satisfactory results. The mixing rules of the advanced cubic equations of state are based on the activity coefficient approaches (gE mixing rules).
Combinations of these advanced equations of state with electrolyte models are available as well; however, as mentioned above, they can still not be combined with vapor phase association models.
– High-precision equation of state: Unexpectedly, there are a lot of problems in process simulation which are related to pure components, e. g. the utilities steam and cooling water, inertizations with nitrogen or CO2, or major parts of the LDPE14 process, where ethylene is the only component. These problems can be covered very effectively by the use of the so-called high-precision equations of state, which are equations adjusted individually for the particular components [26–28]. They represent all thermodynamic properties within their experimental uncertainty over a wide range of pressures and temperatures and should be used whenever possible. However, except for natural gas no high-precision equations of state for mixtures are available. In these cases, compromises have to be made.15 In the hydrocarbon processes, the Lee–Kesler–Plöcker equation is a good approach, using accurate pure component representations for hydrocarbons and applying corresponding mixing rules. For polar components, this is not an option.
– Polymer model: Polymer applications are often covered by representing the polymer as a high-boiling component. A reasonable representation of the combinatorial part of the activity coefficient, which often shows considerable deviations from Raoult's law for molecules differing largely in size, should be taken into account (Flory–Huggins, UNIQUAC). There are models for polymers which are by far more complicated but represent the character and the effects in polymer mixtures accurately, e. g. PC-SAFT16 [54].

14 low density polyethylene.
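The six-model heuristics above can be condensed into a rough selection sketch; the priority order is a simplification for illustration, and real cases may require combinations of the options:

```python
def suggest_property_model(*, electrolytes=False, carboxylic_acids_or_hf=False,
                           pure_component=False, high_pressure=False,
                           polymers=False):
    """Rough transcription of the model choice heuristics; the priority
    order is a judgment call, not a rigorous decision procedure."""
    if pure_component:
        return "high-precision equation of state"
    if electrolytes:
        return "electrolyte model (e.g. ELECNRTL)"
    if carboxylic_acids_or_hf:
        return "activity coefficient model + association model (e.g. NRTL-ASS)"
    if high_pressure:
        return "advanced cubic equation of state (PSRK, VTPR)"
    if polymers:
        return "polymer model (Flory-Huggins, UNIQUAC, PC-SAFT)"
    return "standard activity coefficient model + EoS (e.g. NRTL-PR)"

print(suggest_property_model())                      # the 80-90 % case
print(suggest_property_model(high_pressure=True))
```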

The choice of a good model does not imply that the simulation will be correct. It is even more important to compile the binary interaction parameters (BIPs) for the various possible combinations. For n components in a process, n(n − 1)/2 binary combinations are possible. For a typical project in a chemical plant with 40 components, this makes 780 binary parameter sets. In the hydrocarbon business, up to 200 components are possible, giving 19 900 binary parameter sets. In both cases it is not possible to provide a perfect matrix in a reasonable time scale. For a chemical process, the following procedure is recommended: an EXCEL table is set up containing a matrix with all the components, showing a color code for the possible options. Figure 2.29 gives an example.

Figure 2.29: Illustration of the binary interaction parameter matrix in a project.

15 Copolymerization in LDPE is a good example. For instance, it has to be decided whether for a mixture of 95 % ethylene and 5 % propylene at p = 100 bar it is more important to describe the pressure effect sufficiently or to describe the deviation caused by the 5 % impurity.
16 perturbed-chain statistical associating fluid theory.
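The pair counts quoted above follow from n(n − 1)/2; the sketch below also shows a minimal text-based status matrix in the spirit of Figure 2.29, with status strings standing in for the color code (the component names and statuses are just examples):

```python
from itertools import combinations

def n_binary_pairs(n):
    """Number of binary combinations for n components: n(n-1)/2."""
    return n * (n - 1) // 2

print(n_binary_pairs(40))    # 780: typical chemical plant project
print(n_binary_pairs(200))   # 19900: hydrocarbon business

# A minimal status matrix in the spirit of Figure 2.29:
components = ["water", "ethanol", "n-hexane"]
status = {pair: "to be revised" for pair in combinations(components, 2)}
status[("water", "ethanol")] = "fitted to experimental data"
status[("water", "n-hexane")] = "estimated (Mod. UNIFAC)"
for pair, s in sorted(status.items()):
    print(pair, "->", s)
```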


To fill this matrix, first the BIPs given by the databanks of the process simulator should be listed. Next, it should be checked which parameters are decisive for the process. This is, of course, not an exact assessment, nevertheless, for components which occur together in any block with considerable concentrations, the corresponding BIPs should be important. For these cases, it is the responsibility of the user either to adopt the parameters from the simulator database or to adjust own ones from experimental data from the established databanks [55, 56]. The latter is highly recommended, as the quality of the parameters can be assured in this way. If the data situation turns out to be not sufficient, own phase equilibrium measurements can be initiated to overcome this situation. The adjustment of BIPs to experimental data is thoroughly described in [11]. Less important parameters can be estimated (Chapter 2.10). Parameters where the current situation is not acceptable should be marked as “to be revised” or similar. It must be clear that the BIPs for component pairs where both components occur only at very low concentrations, e. g. in the ppm region, are not important, it has no influence on the results if they are omitted. All BIPs with water should be assigned, either by the simulation program, by adjustment to experimental data or by estimation, as the nonidealities of water with organic components are usually large. Other parameters can just be set; e. g., the BIPs for n-hexane–n-heptane can be assumed to be zero (ideal mixture), as long as nothing better is available. In this way, a detailed overview about the BIP situation can be obtained, and the quality of the simulation results is easier to assess. For the hydrocarbon business, the situation is different. The number of parameters involved is so large that a thorough check is not possible within a reasonable time. 
Often, one cannot distinguish between important and less important components, as all components occur only in relatively low concentrations. As hydrocarbon mixtures usually do not show major nonidealities and the interactions with water can easily be covered, it might be an option to use Modified UNIFAC with an appropriate equation of state or PSRK (Chapter 2.10).

In this case, process simulation can be skipped! (Hans Haverkamp, after his boss suggested that he should vary the binary parameters until the column behavior is met)

Fitting physical properties to reproduce operation results is something one should not do. A process simulation which meets the data obtained from operation is a strong indication that the process is understood, or it can help to detect errors. Fitting physical properties to operation data will certainly reproduce the data, but the simulation is of no use. Extrapolation to other operation states will simply not work.

2.10 Binary parameter estimation

BIPs can be estimated using group contribution methods, as illustrated in Figure 2.30 for the system ethanol–n-hexane. In group contribution methods it is assumed that the


Figure 2.30: Illustration of the group contribution concept [11]. © Wiley-VCH Verlag GmbH & Co. KGaA. Reproduced with permission.

mixture does not consist of molecules but of functional groups. Ethanol can be divided in a CH3 -, a CH2 -, and an OH-group, whereas n-hexane consists of two CH3 - and four CH2 -groups. It can be shown that the required activity coefficients can be calculated as long as the interaction parameters between the functional groups are known. Furthermore, if the group interaction parameters between the alkane and the alcohol group are known, not only the system ethanol–n-hexane, but also all other alkane-alcohol or alcohol-alcohol systems can be predicted. The great advantage of group contribution methods is that the number of functional groups is much smaller than the number of possible molecules [11]. The UNIFAC (universal quasi-chemical functional group activity coefficients) group contribution method has first been published in 1975 [57]. Like UNIQUAC, it consists of two parts. The combinatorial part is temperature-independent and takes into account the size and form of the molecules, whereas the residual part is temperature-dependent and considers attractive and repulsive forces between the groups. The group interaction parameters refer to the so-called main groups. They often consist of more than one sub-group. For example, in the case of alkanes the subgroups are CH3 -, CH2 -, CH-, and C-groups. The different sub-groups have different size parameters, the so-called van-der-Waals properties, which represent the volume and the surface of the groups. By definition, the group interaction parameters between groups belonging to the same main group are equal to zero. Meanwhile, the UNIFAC method has almost been replaced by the Modified UNIFAC (Dortmund) method [58, 59]. Its main improvements are [11]: – an empirically modified combinatorial part to improve the results for asymmetric systems; – temperature-dependent group interaction parameters; – adjustment of the van-der-Waals properties; – additional main groups, e. g. 
for cyclic alkanes or formic acid;
– an extension of the database, which besides VLE data also includes:
  – activity coefficients at infinite dilution;
  – excess enthalpy data;
  – excess heat capacity data;
  – liquid-liquid equilibrium data;
  – solid-liquid equilibrium data of simple eutectic systems;
  – azeotropic data.
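The group decomposition of Figure 2.30 can be sketched directly; the Rk and Qk values below are the published van der Waals parameters of the original UNIFAC sub-groups, and the resulting r and q are the molecular size parameters used in the combinatorial part:

```python
# Van der Waals volume (R_k) and surface (Q_k) parameters for the
# original UNIFAC sub-groups used in Figure 2.30 (literature values):
RQ = {"CH3": (0.9011, 0.848), "CH2": (0.6744, 0.540), "OH": (1.0000, 1.200)}

def r_q(group_counts):
    """Molecular size parameters r and q as sums of group contributions."""
    r = sum(n * RQ[g][0] for g, n in group_counts.items())
    q = sum(n * RQ[g][1] for g, n in group_counts.items())
    return r, q

# Ethanol: one CH3-, one CH2-, and one OH-group;
# n-hexane: two CH3- and four CH2-groups.
ethanol = {"CH3": 1, "CH2": 1, "OH": 1}
n_hexane = {"CH3": 2, "CH2": 4}
print(tuple(round(v, 4) for v in r_q(ethanol)))    # (2.5755, 2.588)
print(tuple(round(v, 4) for v in r_q(n_hexane)))   # (4.4998, 3.856)
```

The temperature-dependent residual part then needs only the group interaction parameters between the alkane and alcohol main groups, which is exactly why one parameter set covers all alkane-alcohol systems.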

Most important for the application of group contribution methods for the synthesis and design of separation processes is a comprehensive and reliable parameter matrix. Because of the importance of Modified UNIFAC for process development, the range of applicability is continuously extended by filling the gaps in the parameter table and revising the existing parameters with the help of new data. Since 1996, the further revision and extension of the parameter matrix has been carried out by the UNIFAC consortium, and the revised parameter matrix is only available for its sponsors [60]. For the estimation of binary parameters, Mod. UNIFAC is used in a way that artificial data are generated, and the parameters of the current model are adjusted to them. The advantage over the direct use as a model is that the well-known systems do not need to be estimated with Mod. UNIFAC, so their accuracy is not lost. Still, Mod. UNIFAC has some weaknesses:
– Isomer effects cannot be predicted.
– Unreliable results are obtained if a large number of functional groups occurs in the molecule, as is the case for pharmaceuticals. Functional groups which are located closely together are often not represented sufficiently, e. g. the configuration –C(Cl)(F)(Br) in refrigerants (proximity effect).
– Poor results are obtained for the solubilities and activity coefficients at infinite dilution of alkanes or naphthenes in water [11].
– Systems with small deviations from Raoult’s law are difficult to predict, which is a problem if the differences in vapor pressures are also small. In these cases, often qualitatively wrong characteristics for the mixture are obtained.
The advanced cubic equations of state PSRK and VTPR use a g^E mixing rule, which makes them predictive tools if UNIFAC or Modified UNIFAC is used for the calculation of g^E. For PSRK, the original UNIFAC method can be used, whereas Modified UNIFAC yields bad results in this combination.
For VTPR, a separate parameter matrix has been built up [60, 61]. For both equations, additional groups have been introduced so that light gases can be described as well. As equations of state, both can of course be used in the supercritical region. In recent years, the COSMO-RS model17 has been developed [62], which works without adjusted parameters. Its accuracy is somewhat worse than that of UNIFAC, but it is applicable in any case. However, like molecular modeling, it should only be used by experienced users. Molecular modeling can generate data from quantum-mechanical calculations of the interactions of the molecules. This is a fascinating demonstration of

17 COnductor like Screening MOdel for Real Solvents.

how far these interactions are understood. However, in engineering applications more detailed justifications of the model are required; therefore, the author is pretty convinced that the general structure of fitting models to experimental data will remain.
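To make the group-contribution machinery more concrete, the temperature-independent combinatorial part can be sketched in a few lines. The sketch below uses the classical UNIQUAC/UNIFAC combinatorial expression and original UNIFAC van der Waals subgroup parameters as commonly tabulated in the literature; the numbers should be verified against a current parameter table, and the residual (temperature-dependent) part is deliberately omitted.

```python
import math

# Original UNIFAC van der Waals subgroup parameters (R_k, Q_k), quoted from
# common literature tables -- verify against a current parameter matrix.
RQ = {"CH3": (0.9011, 0.848), "CH2": (0.6744, 0.540), "OH": (1.0000, 1.200)}

def r_q(groups):
    """Molecular size parameters r_i, q_i from group counts {group: count}."""
    r = sum(nu * RQ[g][0] for g, nu in groups.items())
    q = sum(nu * RQ[g][1] for g, nu in groups.items())
    return r, q

def ln_gamma_comb(x, rq):
    """Combinatorial part ln(gamma_i^C) in the UNIQUAC/UNIFAC form
    1 - V + ln V - 5 q (1 - V/F + ln(V/F)); residual part not included."""
    out = []
    for r_i, q_i in rq:
        V = r_i / sum(x_j * r_j for x_j, (r_j, _) in zip(x, rq))
        F = q_i / sum(x_j * q_j for x_j, (_, q_j) in zip(x, rq))
        out.append(1.0 - V + math.log(V) - 5.0 * q_i * (1.0 - V / F + math.log(V / F)))
    return out

# Group assignments from the text: ethanol = CH3 + CH2 + OH,
# n-hexane = 2 CH3 + 4 CH2
ethanol = r_q({"CH3": 1, "CH2": 1, "OH": 1})
n_hexane = r_q({"CH3": 2, "CH2": 4})
print(ln_gamma_comb([0.3, 0.7], [ethanol, n_hexane]))
```

The function captures the size asymmetry between the two molecules; for a complete activity coefficient, the residual part with the group interaction parameters would have to be added.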

2.11 Model changes

For the handling of enthalpies in a process simulation program, the change of a model between two blocks is often critical. This problem has much to do with the enthalpy description. Between the two blocks, the simulation program hands over the values for p and h to describe the state of the stream. According to the particular models used in the two blocks, the stream is assigned two different temperatures that may differ significantly [11].

Example
A vapor stream consisting of pure n-hexane (t1 = 100 °C, p1 = 2 bar) is coming out of a block which uses the Peng–Robinson equation with the φ-φ-approach. It is transferred to a “nothing-happens block” (adiabatic, same pressure) working with the ideal gas law. Which error will be produced due to the model change?

Solution
According to the Peng–Robinson equation of state, the specific enthalpy of the stream leaving the first block is determined to be h1 = −1807.3 J/g at p1 = 2 bar. Using an activity coefficient model with an ideal gas phase, the coordinates p2 = p1 and h2 = h1 refer to the vapor state at t2 = 96.6 °C. Using cpL ≈ 2 J/(g K), the corresponding enthalpy difference

Δh ≈ cpL ΔT = 2 J/(g K) ⋅ (100 − 96.6) K = 6.8 J/g

will be missing in the energy balance.

Therefore, care must be taken when the thermodynamic model is changed in a flowsheet. It is recommended that a dummy heat exchanger is introduced between two blocks operating with different models, which is defined in a way that inlet and outlet states are the same in spite of the different models. Another option is to carry out a model change where both models yield at least similar results, maybe in a block operating at low pressure. In general, one should stick to the most comprehensive model in a flowsheet; nevertheless there are cases (e. g. association, electrolytes occurring only in parts of the flowsheet) where a model change cannot be avoided.
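The bookkeeping behind such a dummy heat exchanger can be sketched as follows. Its duty is exactly the enthalpy mismatch of the example above; the mass flow of 1 g/s is an arbitrary illustrative value, and the constant heat capacity is the one quoted in the text.

```python
def dummy_exchanger_duty(m_flow, cp, t_model_a, t_model_b):
    """Duty (W) a dummy heat exchanger must supply so that the stream leaves
    the model change with the temperature it entered with. Linearized via a
    constant heat capacity, as in the example above."""
    return m_flow * cp * (t_model_a - t_model_b)

# Numbers from the n-hexane example: cp ~ 2 J/(g K), 100 degC vs. 96.6 degC;
# a mass flow of 1 g/s is assumed purely for illustration.
duty = dummy_exchanger_duty(1.0, 2.0, 100.0, 96.6)
print(duty)  # the 6.8 J/g otherwise missing from the energy balance
```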


2.12 Transport properties

The transport properties dynamic viscosity, thermal conductivity, and surface tension must be known for the design of many pieces of equipment, e. g. columns or heat exchangers. Their correlations, mixing rules, and further details are thoroughly explained in [11] or [63]. Therefore, only general remarks are given here and the pitfalls are explained.
– Dynamic viscosity of liquids: The dynamic viscosity of liquids is probably the most important transport quantity. Among others, it has an influence on heat exchangers, distillation and extraction columns, and the pressure drop of pipes. Similar to the vapor pressure, it can cover several orders of magnitude, and it is similarly difficult to correlate and extrapolate, although nowadays there are excellent correlation tools which do this easily. The dynamic viscosity of liquids starts at high values at the melting point and then decreases logarithmically with increasing temperature. The curve of the liquid viscosity as a function of temperature is shown in Figure 2.31. The most widely used correlation is the extended Kirchhoff equation

ln (η/η0) = A + B/T + C ln(T/K) + D (T/K)^E   (2.97)

which is usually truncated after the second term. η0 is an arbitrary viscosity unit. An excellent tool for the correlation of liquid viscosities is the PPDS equation

η/(Pa s) = E ⋅ exp[A ((C − T)/(T − D))^(1/3) + B ((C − T)/(T − D))^(4/3)]   (2.98)

Figure 2.31: Dynamic viscosity of saturated liquid water as a function of temperature [11].
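As a small illustration, Equation (2.98) can be evaluated with a guard for the recommended parameter constraint (C > Tc, D < Tm); all parameter values in the snippet are invented for demonstration and are not fitted to any substance.

```python
import math

def eta_ppds(T, A, B, C, D, E, Tc, Tm):
    """Liquid viscosity in Pa*s from the PPDS form, Eq. (2.98).
    Choosing C > Tc and D < Tm keeps the bracketed term (C - T)/(T - D)
    positive over the whole liquid range Tm..Tc."""
    if not (C > Tc and D < Tm):
        raise ValueError("choose C > Tc and D < Tm")
    x = (C - T) / (T - D)
    return E * math.exp(A * x ** (1.0 / 3.0) + B * x ** (4.0 / 3.0))

# Made-up parameters, only to demonstrate the qualitative behavior:
# viscosity falls with rising temperature for positive A and B.
params = dict(A=1.0, B=1.0, C=700.0, D=100.0, E=1e-4, Tc=650.0, Tm=150.0)
print(eta_ppds(300.0, **params), eta_ppds(350.0, **params))
```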

For C and D, C > Tc and D < Tm should be chosen so that the terms in brackets remain positive in the whole range of applicability. Nevertheless, the extrapolation should be checked; it can happen that solids or supercritical substances are dissolved in a liquid, and in this case it must be avoided that unreasonable values are calculated. The pressure influence is comparably small, but it should be considered at large pressures p > 50–100 bar. Correlations refer to the dynamic viscosity at saturation pressure. Correction terms for the pressure influence are available [11]. Recently, a first reasonable estimation method has been developed by Rarey et al. [64]. However, the most crucial thing and probably one of the weakest parts of process simulation in general is the mixing rule. Several are available, but in most cases the logarithmic relationship

ln (ηM/η0) = ∑i xi ln (ηi/η0)   (2.99)

is used, where η0 is the unit used to make the argument of the logarithm dimensionless. Equation (2.99) can only reproduce the order of magnitude of the result, and in extreme cases not even that. It does not claim to give a good result; it is essentially not more than a way to come to a number. Even simple systems like methanol–water can exhibit large maxima, where the deviation of Equation (2.99) can be up to 100 % [11]. At least, it has proven to be superior to the simple linear averaging mixing rule [245]. The lucky circumstance is that most applications are not strongly sensitive to errors in the calculation of small viscosities around or even below 1 mPa s. Things become worse when larger viscosities are involved and differences between the two components occur. In these cases, it is really worth the effort to introduce and fit binary parameters, e. g. according to Grunberg/Nissan [41]:

ln (ηM/η0) = ∑i xi ln (ηi/η0) + (1/2) ∑i ∑j xi xj Gij   (2.100)

Example
Calculate the dynamic viscosity of a brine consisting of 40 mol % ethylene glycol (1,2-ethanediol, EG, M = 62.068 g/mol) and 60 mol % water (W, M = 18.015 g/mol) at t = 20 °C.

Solution
The pure component viscosities are according to [42]

ηW = 1.01 mPa s ,   ηEG = 21.23 mPa s ,

giving a calculated viscosity for the mixture according to Equation (2.99):

ln (ηM/(mPa s)) = 0.6 ⋅ ln 1.01 + 0.4 ⋅ ln 21.23 = 1.228
ηM = 3.415 mPa s

A more probable value can be taken from [65] after recalculation of the EG concentration from 40 mol % to 69.7 mass %. The result is 7.08 mPa s, with a deviation of more than 100 % from the calculated value. In fact, this is a deviation which might have a significant influence on equipment design.
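The example can be reproduced in a few lines; the sketch below also shows how a single Grunberg/Nissan parameter G12, fitted here to the experimental point quoted above (assuming G12 = G21 and Gii = 0), recovers the measured value. The fitted number is illustrative and only valid at this temperature and composition.

```python
import math

# Pure-component viscosities at 20 degC (mPa s), mole fractions and molar
# masses (g/mol) as given in the example.
x = {"water": 0.6, "EG": 0.4}
eta = {"water": 1.01, "EG": 21.23}
M = {"water": 18.015, "EG": 62.068}

# Eq. (2.99): logarithmic mixing rule
ln_eta_m = sum(x[c] * math.log(eta[c]) for c in x)
eta_mix = math.exp(ln_eta_m)                              # ~3.415 mPa s

# Mole fraction -> mass fraction of EG, needed to look up the experimental
# value, which is tabulated on a mass basis.
w_eg = x["EG"] * M["EG"] / sum(x[c] * M[c] for c in x)    # ~0.697

# Eq. (2.100): fit the binary parameter G12 to the experimental 7.08 mPa s,
# assuming G12 = G21 and Gii = 0, so the double sum reduces to x1*x2*G12.
eta_exp = 7.08
g12 = (math.log(eta_exp) - ln_eta_m) / (x["water"] * x["EG"])
eta_fitted = math.exp(ln_eta_m + x["water"] * x["EG"] * g12)
```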



Thermal conductivity of liquids: The thermal conductivity of liquids decreases with increasing temperature almost linearly, except the region in the vicinity of the critical point. The order of magnitude is approx. 0.1–0.2 W/(K m) for almost all liquids. An exception is water; here the thermal conductivity is in the range 0.6–0.7 W/(K m), with a maximum at approx. 150 °C. The most widely used equations for the correlation are the simple polynomial λ = A + BT + CT 2 + DT 3 + ET 4

(2.101)

and the Jamieson equation λ = A(1 + Bτ 1/3 + Cτ 2/3 + Dτ)





(2.102)

with τ = 1 − T/Tc. However, most data situations allow only correlations with two parameters. There is a mixing rule which does not produce major errors [11]. The influence of the pressure is similar to the one for the liquid viscosity; the values usually refer to the saturation line, and it is a good approach to take it as pressure-independent, but at p > 50–100 bar a correction term should be applied [11, 63].
– Dynamic viscosity of gases: In contrast to the viscosity of liquids, the dynamic viscosity of vapors or gases increases almost linearly with increasing temperature. It is usually correlated with a polynomial. The order of magnitude is approx. 5–20 µPa s. The dynamic viscosity of gases is hardly ever measured, and most of the values are calculated from a well-defined estimation method [11]. As for liquids, a pressure correction should be applied when the pressure exceeds p > 50–100 bar [11]. Given values usually refer to the ideal gas state at low pressures. Mixing rules are available [11].
– Thermal conductivity of gases: The qualitative behavior of the thermal conductivity of gases is similar to the dynamic viscosity of gases. The typical order of magnitude is 0.01–0.03 W/(K m). For the pressure dependence, the same statements hold. However, some interesting remarks must be given: The thermal conductivity of hydrogen and helium, the so-called quantum gases, is significantly higher; it is in the range 0.16–0.25 W/(K m) in the usual temperature range 0–250 °C for both substances. This is the reason why helium or hydrogen are used as carrier gases in gas chromatography with thermal conductivity detectors. The high thermal conductivity corresponds to the base line; when the sample passes the detector, the thermal conductivity and therefore the heat removal from the detector decreases. The detector temperature rises and gives a corresponding signal. A second interesting item is that at very low pressures (p < 10⁻⁶ bar), the thermal conductivity of gases is no longer a physical property of the substance but depends more on the dimensions of the vessel where the gas is located [11]. The thermal conductivity of gases can be correlated with a polynomial; mixing rules are available [11].
– Surface tension: The surface tension occurs in various formulas used for the design of equipment in process engineering, but the author is not aware that any of these equations is sensitive to it. The surface tension refers to the phase equilibrium. A typical order of magnitude is 5–20 mN/m. For water, the surface tension is significantly higher (approx. 75 mN/m at room temperature). The surface tension decreases almost linearly with increasing temperature and becomes zero at the critical point. The correlation used for the surface tension is

σ = A (1 − Tr)^(B + C Tr + D Tr² + E Tr³)   (2.103)

with Tr = T/Tc . It is often used with C = D = E = 0, and in this case B = 1.22 is a good choice in most cases. Mixing rules are available, but for systems with water special ones must be applied.

3 Working on a process

The difference between fiction and simulation is that both simulation and fiction deceive and betray, but at least simulation creates an image that is congruent with reality. ([66])

Process simulation can generate a very accurate wrong solution. (Rob Hockley)

A process simulation is the attempt to evaluate the characteristic quantities of a process with well-defined calculations of the particular process steps. It is also a target to identify the sensitivities of a process and to find out how it reacts to disturbances. Process simulation is a tool for the development, design, and optimization of processes in the chemical, petrochemical, pharmaceutical, energy producing, gas processing, environmental, and food industry [11]. It provides a representation of the particular basic operations of the process using mathematical models for the different unit operations, ensuring that the mass and energy balances are maintained. Today simulation models are of extraordinary importance for scientific and technical developments. The development of process simulation started in the 1960s, when appropriate hardware and software became available and could connect the remarkable knowledge about thermophysical properties, phase equilibria, reaction equilibria, reaction kinetics, and the particular unit operations. A number of comprehensive simulation programs have been developed, commercial ones (ASPEN Plus, ChemCAD, HySys, Pro/II, ProSim) as well as in-house simulators in large companies, for example, VTPlan (Bayer AG) or ChemaSim (BASF), not to mention the large number of in-house tools that cover the particular calculation tasks of small companies working in process engineering. Nevertheless, all the simulators have in common that they are only as good as the models and the corresponding model parameters available.

Quite a number of discussions, even philosophical ones, have taken place to specify the character of simulation [298]. By definition, simulation means the prediction of a state of a physical system by calculations based on certain assumptions. In contrast to an experiment, the simulation is fully determined by its assumptions, whereas in experiments unknown processes and errors in measurement can have an influence on the results. This has a logical consequence: unknown properties of the system (e. g. a chemical reaction in a distillation) are not considered in a simulation and cannot be predicted. If the results of a simulation are not in line with experimental findings, the clear conclusion is that we have obviously not understood decisive issues of the process; this can range from wrong physical property data to the occurrence of the above-mentioned unknown chemical reaction or a wrongly considered mass-transfer rate. On the other hand, an adequate model which has proved to be reliable can be successfully used to extrapolate to states

https://doi.org/10.1515/9783111028149-003

for which no observations exist. Another issue is a certain lack of transparency of computer simulations. Due to the complexity and the huge number of calculation steps, one often cannot give a clear reason for a simulation result (opacity of process simulation [298]); the result must be interpreted.

Various degrees of effort can be applied in process simulation. A simple split balance can give a first overview of the process without introducing any physical relationships into the calculation. The user just defines split factors to decide which way the particular components take. In a medium level of complexity, shortcut methods are used to characterize the various process operations. The rigorous simulation with its full complexity can be considered as the most common case. The particular unit operations (reactors, columns, heat exchangers, flash vessels, compressors, valves, pumps, etc.) are represented with their correct physical background and with a model for the thermophysical properties. Different physical modes are sometimes available for the same unit operation. A distillation column can, for example, be modeled on the basis of theoretical stages or using a rate-based model, taking into account the mass transfer on the column internals. A simulation of this kind can be used to extract the data for the design of the process equipment or to optimize the process itself.

During recent years, dynamic simulation has become more and more important. In this context, “dynamic” means that the particular input data can be varied with time so that the time-dependent behavior of the plant can be modeled and the efficiency of the process control can be evaluated. For both steady-state and dynamic simulation, the correct representation of the thermodynamics, i. e. thermophysical properties, phase equilibria, mass transfer, and chemical reactions, mainly determines the quality of the simulation.
However, one must be aware that there are a lot of pitfalls beyond thermodynamics. Unknown components, foam formation, slow mass transfer, fouling layers, decomposition, or side reactions might lead to unrealistic results. The occurrence of solids in general is always a challenge, where only small scale-ups are possible (approx. 1 : 10) in contrast to fluid processes, where a scale-up of 1 : 1000 is nothing unusual. For crystallization, the kinetics of crystal growth is often more important than the phase equilibrium itself. Nevertheless, even under these conditions simulation can yield a valuable contribution for understanding the principles of a process. Today process simulations are the basis for the design of plants and the evaluation of investment and operation costs, as well as for follow-up tasks like process safety analysis, emission lists, or performance evaluation. For process development and optimization purposes, they can effectively be used to compare various options and select the most promising one, which, however, should in general be verified experimentally. Therefore, a state-of-the-art process simulation can make a considerable contribution for both plant contractors and operating companies in reducing costs.
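The simplest level of effort mentioned above, the split balance, can be sketched in a few lines; the components, flows, and split factors below are made-up illustrative numbers, not taken from any real process.

```python
# A split balance in its simplest form: each "block" is just a set of
# user-defined, per-component split factors -- no thermodynamics involved.
feed = {"NH3": 1000.0, "H2O": 1000.0, "inert": 1.0}   # kg/h, illustrative

def split_block(stream, factors):
    """Divide a stream into (top, bottom): the split factor gives the
    fraction of each component routed to the top stream."""
    top = {c: m * factors.get(c, 0.0) for c, m in stream.items()}
    bottom = {c: m - top[c] for c, m in stream.items()}
    return top, bottom

top, bottom = split_block(feed, {"NH3": 0.98, "H2O": 0.001})
print(top, bottom)
```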


3.1 Flowsheet setup

Sometimes, the reality is different from the truth … (Comparison between plant and simulation data)

Figure 3.1 shows the symbols for some of the most important blocks used in the process simulator ASPEN Plus. Their main functions can be briefly explained as follows.

Figure 3.1: Some blocks in ASPEN Plus. Screen images of Aspen Plus® are reprinted with permission by Aspen Technology, Inc. AspenTech® , aspenONE® , Aspen Plus® , and the Aspen leaf logo are trademarks of Aspen Technology, Inc. All rights reserved.

1. Heater block: In a heater block, a process stream is heated up or cooled down. It is not regarded how this is achieved. Usually, it is done with utility (e. g. steam, cooling water, brine), but it can also happen that the heat is exchanged between two process streams. When using the heater block, one should be aware that the simulator does not check whether the driving temperature differences are sufficient to transfer the heat. It should also be clear that the use of the heater block is not connected to a special kind of equipment.
2. Heat exchanger block: In contrast to the heater block, the heat exchanger calculates the heat transfer between two media, and it is checked whether this heat transfer can take place, i. e. whether a minimum driving temperature difference is available. Clearly, this is the better way to model a heat transfer, but only the target conditions of one stream can be specified. The second stream is a result of the specification, and this can lead to an increased effort for convergence.
3. Valve: Performs an adiabatic throttling with hin = hout.
4. Pipe: Evaluates the pressure drop in a specified pipe, considering friction losses, geodetic height influence, and pressure changes due to positive or negative acceleration. Also, the two-phase pressure drop (Figure 12.7) can be considered, and the pipe can be divided into segments to update the vapor fraction continuously. Often, the most significant influence is the one of the geodetic height. The friction pressure drop can often be neglected, as later the pipe diameter will be dimensioned in a reasonable way during the basic engineering anyway. The height influence is independent of the pipe diameter. If one wants to consider the height without already fixing a friction pressure drop, the pipe diameter can be arbitrarily set to d = 2 m. Such a large diameter ensures that the velocities are so low that the friction pressure drop will be negligible, but it will develop the full height influence.1
5. Pump: Elevates the pressure of liquids. In process simulation, the pump symbol and its function are more or less place holders for the pumps in the process. Only for big pumps is the power consumption relevant, e. g. cooling water pumps. The author’s opinion is that the main function of the pumps in a simulation flowsheet is to make the engineer remember that a device for the pressure elevation is necessary. Formally, the pressure elevation and the efficiency of the pump can be specified. However, during process simulation the layout is not available, so that the exact tasks of the pumps cannot be defined, and their efficiency is strongly dependent on the type and on the conditions chosen where the pumps must have their optimum efficiency (Chapter 8.1). This is not really a problem for process simulation, as the change of temperature and enthalpy due to the pressure elevation of a liquid is usually small. In a later stage of the project, usually only the big pumps with large volume flows or pressure elevations are updated.
6. Compressor: Elevates the pressure of gases. In contrast to pumps, the power required for compressors is significant, and the temperature elevation is important (Chapter 8.2).
7. Vessel: A vessel just for the intermediate storage of liquids is often omitted in process simulation, as it does not change the thermodynamic state. If the vessel has a process function (e. g. separation of vapor and liquid, heating the liquid by jacket heating, mixing), it is considered.
8. Splitter: Divides a stream and leads the particular parts to their destinations.
9. Mixer: Unites various streams and evaluates the state of the resulting stream. The outlet pressure after the mixing must be defined. Normally, it is the lowest pressure of the participating streams.
10. Separator: Divides the inlet stream and leads the particular parts to their destinations. In contrast to splitters, it can be defined for each component which fractions are led to the particular streams, no matter whether this is possible or not.
11. Multiplier: Multiplies the mass flow of an inlet stream with a factor. Of course, this is a block with a function for which no real apparatus will be invented very soon, but it is useful for the transfer from batch to continuous operating mode or for dividing the process into lanes and their reunification.

1 However, this will cause trouble when it is transferred to dynamic simulation, where the holdup is important.

There are a number of other blocks (reactors, distillation columns, decanters, absorption columns, extraction columns), which are discussed in detail in the corresponding chapters. There is often a misunderstanding about the meaning of a process simulation.
One must be aware that all blocks described above refer to an equilibrium state, where both inlet and outlet streams are constant. However, equilibria are only reached if the residence time is infinite, and this is of course never the case. The design of the equipment should be performed in a way that the blocks approach this equilibrium in a reasonable way. For example, for a decanter it is necessary that the residence time is long enough for the two liquid phases to form and separate. If this time is not provided, the decanter will perform worse than calculated, which often leads to the conclusion that the equilibrium is not described correctly or process simulation does not make sense at all. The same holds for many other blocks. Sometimes, an efficiency can be introduced, as it is

the case in columns, or kinetic approaches can be applied, as in reactors. The knowledge about the process steps and, of course, the choice of an appropriate thermodynamic model are necessary prerequisites for a useful application of process simulation. Otherwise, the GIGO principle holds, i. e.:

Garbage in, garbage out! (The GIGO principle)

Figure 3.2 shows the typical scheme of a chemical plant. One or more feed streams containing the raw materials pass a preparation step, e. g. a purification, compression, or heating step. Then, the main reaction step takes place, giving the main products and a number of by- or co-products.2 In a separation section, the valuable products are isolated and purified to meet the specification. Nonvaluable by-products are separated and sent to disposal. In most cases, the raw materials will not be converted to a full extent. For economic reasons, it is of course desirable to collect and recycle them back so that they are not lost. When one tries to calculate the process steps in Figure 3.2 sequentially, two major difficulties come to the fore:
1. The recycle stream cannot be known in advance. It is itself a result of the process calculation. An iterative procedure is necessary, where the recycle stream is first estimated. Then, the process calculation is carried out, giving the recycle stream as a result. If the estimated and calculated recycle streams are identical within a certain tolerance, the result can be accepted. Otherwise, it must be estimated again and the procedure must be repeated. In this case, the recycle stream is called a tear stream. This kind of task is a special ability of the above-mentioned process simulators. How they work is explained below.3

Figure 3.2: Typical scheme of a chemical plant [11]. © Wiley-VCH Verlag GmbH & Co. KGaA. Reproduced with permission.

2 A co-product is a product which necessarily occurs when the main reaction takes place. A by-product is a product which occurs due to undesired side reactions. In some cases, co- or even by-products are valuable and can be sold, but usually, they are subject to disposal. 3 Nowadays, this “sequential flowsheeting” is going to become outdated. Most process simulator programs at least offer the option to solve the flowsheet with the equation-oriented (EO) approach, where

2. The other remarkable point is that the recycle stream might contain components other than the ones listed in Figure 3.2. Each component occurring in the process must have an outlet; otherwise it will accumulate in the process. If a component behaves in a way that it is neither concentrated in the product or by-product stream nor removed in a side reaction in the reactor, the only way to get rid of it is a purge stream, where a defined (and hopefully small) amount of substance is split from the recycle stream (or another appropriate one) and led out of the process. In this way, the concentration of the accumulating components will rise up to a certain level, where the removal of the components on one side and the formation and feed on the other side are in equilibrium.
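The purge mechanism can be illustrated with a toy balance on a single accumulating component, iterated like a tear stream; the feed rate and purge fraction are made-up numbers for demonstration only.

```python
# Toy balance for an inert that enters with the feed and leaves the process
# only via a purge split from the recycle stream; numbers are illustrative.
feed_inert = 1.0      # kg/h of inert entering with the feed
purge_frac = 0.02     # fraction of the recycle stream withdrawn as purge

recycle_inert = 0.0   # tear stream estimate, iterated by direct substitution
for _ in range(2000):
    loop_flow = recycle_inert + feed_inert          # inert circulating per pass
    recycle_inert = loop_flow * (1.0 - purge_frac)  # all but the purge returns

# Steady state: purge removal balances the feed, so the recycle level climbs
# to feed_inert * (1 - purge_frac) / purge_frac = 49 kg/h here.
print(recycle_inert)
```

The smaller the purge fraction, the higher the level at which the accumulating component settles, which is exactly the trade-off between raw-material losses and inert build-up described above.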

There are several strategies to achieve convergence of a flowsheet, i. e. to get a result where the estimated compositions and states of all tear streams agree with the calculated ones within a certain tolerance. For this purpose, the simulation programs use specialized methods. The simplest one is the direct substitution method, where the new estimation for the tear stream X is the calculated stream G(X) from the previous flowsheet calculation path:

XDirect,k+1 = G(Xk)   (3.1)

The direct substitution method is slow but sure; it gives results even in cases where other methods are unstable. With few exceptions, the convergence course indicates that the error steadily decreases from step to step. If this is not the case, it makes sense to interrupt the flowsheet convergence and have a look at the tear stream history in order to see whether a component accumulates. In case the iteration switches between two solutions, it makes sense to choose another starting point or change the convergence method.

The standard method for flowsheet convergence is the Wegstein method. In principle, it is an extrapolation of the direct substitution, taking into account the last two iteration steps for the next estimation of the tear stream. It is usually faster than direct substitution, but the calculated error can also increase from one step to the next, and only every few steps an improvement is achieved. In case convergence takes many steps, it is then more difficult to judge whether the convergence target will be met. The calculation is as follows. First, the last two evaluation steps are characterized by

s = (G(Xk+1) − G(Xk)) / (Xk+1 − Xk)   (3.2)

all calculation equations are put into a huge system of nonlinear equations, which are then solved in one step. The remaining problem is that the error messages are more difficult to interpret.

Extrapolation gives an estimate for the next iteration:

G(Xk+2) = G(Xk+1) + s (Xk+2 − Xk+1)   (3.3)

At convergence, Xk+2 = G(Xk+2) is expected; therefore, we get

Xk+2 = G(Xk+1) + s (Xk+2 − Xk+1)   (3.4)

Substituting q = s/(s − 1), the final Wegstein iteration formula is obtained:

Xk+2 = G(Xk+1) ⋅ (1 − q) + Xk+1 ⋅ q   (3.5)

The Newton method is the fastest approach for achieving convergence in a flowsheet. It is the transfer of the well-known solving method for nonlinear equations, where the derivative is used to get the next iteration step:

xk+1 = xk − f(xk)/f′(xk)   (3.6)

For multivariable functions, the Jacobian matrix is used instead of the derivative:

Xk+1 = Xk − J(Xk)⁻¹ G(Xk) ,   (3.7)

with

Jij(Xk) = 𝜕Gi(Xk)/𝜕Xj   (3.8)

For the Newton method, a good starting point is extremely important in order to achieve convergence. Far away from the solution, the Newton method is not really a good choice. The evaluation of the Jacobian matrix is a huge effort. Therefore, as long as progress in the iteration is made, the calculation of derivatives is avoided. The number of components should not be too large. The Broyden method is a useful modification. The Jacobian matrix is only calculated at the first iteration. Therefore, it is faster but maybe less reliable than the Newton method. Although convergence is often the most difficult part of working on a flowsheet, it is strongly recommended to work it out. It is the only way known by the author to find out whether one of the components accumulates in the process. The following example explains that it is necessary even in cases assumed to be self-evident.
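The difference between direct substitution and the Wegstein method can be demonstrated on a scalar toy problem. The sketch below implements Equations (3.1)–(3.5) without the bounds on q that commercial simulators typically apply for stability; the test function cos(x) merely stands in for a tear-stream calculation.

```python
import math

def direct_substitution(G, x0, tol=1e-10, max_iter=1000):
    """Direct substitution, Eq. (3.1): x_{k+1} = G(x_k)."""
    x = x0
    for n in range(1, max_iter + 1):
        g = G(x)
        if abs(g - x) < tol:
            return g, n
        x = g
    return x, max_iter

def wegstein(G, x0, tol=1e-10, max_iter=100):
    """Scalar Wegstein iteration, Eqs. (3.2)-(3.5). Note that simulators
    usually bound q to keep the extrapolation stable; omitted here."""
    x_prev, g_prev = x0, G(x0)
    x, g = g_prev, G(g_prev)                 # one direct step to start
    for n in range(1, max_iter + 1):
        if abs(g - x) < tol:
            return x, n
        s = (g - g_prev) / (x - x_prev)      # Eq. (3.2)
        q = s / (s - 1.0)
        x_prev, g_prev = x, g
        x = g_prev * (1.0 - q) + x_prev * q  # Eq. (3.5)
        g = G(x)
    return x, max_iter

# Toy tear-stream equation with fixed point x = cos(x) ~ 0.739085
x_ds, n_ds = direct_substitution(math.cos, 1.0)
x_ws, n_ws = wegstein(math.cos, 1.0)
print(n_ds, n_ws)  # Wegstein needs far fewer passes through the "flowsheet"
```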


Case Study
For the separation of an ammonia/water mixture (1000 kg/h ammonia, 1000 kg/h water), a two-column system is provided (Figure 3.3). The target is to get both ammonia and water with a high purity so that the ammonia can be reused and the water can be disposed as waste water. In column C1, ammonia is taken overhead in a way that practically all the water goes to the bottom. This means that some of the ammonia is lost and remains in the water at the bottom. In column C2, the remaining ammonia is taken overhead with a certain amount of water. The water withdrawn at the bottom is practically ammonia-free. The overhead stream of column 2 is recycled to column 1. As a result, both the ammonia and the water outlet can be purified to any necessary extent. The question is now: Can one imagine that this arrangement works terribly in practice?

Figure 3.3: Ammonia–water separation with two distillation columns. Screen images of Aspen Plus® are reprinted with permission by Aspen Technology, Inc. AspenTech® , aspenONE® , Aspen Plus® , and the Aspen leaf logo are trademarks of Aspen Technology, Inc. All rights reserved.

A practical system never consists of only two components. In this case, which actually happened, an additional organic component occurred. Consider benzene to be this component; let its amount in the feed be 1 kg/h. The plant manager was desperate, as it was simply impossible to run this arrangement continuously. A similar panic came up when the process was simulated for the first time: there was no way to converge the flowsheet calculation and get a result. Both had clear indications of what went wrong. The plant manager had to empty his plant regularly, and each time huge amounts of the organic substance were found. Analogously, in the simulation the iteration history indicated that benzene was the component which accumulated, which was the reason for the convergence difficulties. What happened, both in the plant and in the simulation, was the following: In the first column C1, the cut was made in a way that relatively large amounts of ammonia remained in the bottom stream. As benzene is by far less volatile than ammonia, it could not get to the top of the column. This is favorable at first, as the ammonia product is not contaminated with the organics. Together with all the water and the rest of the ammonia, the organics were transported to column C2. In C2, it was made sure that no ammonia remained in the waste water by taking part of the water overhead. Benzene and water form a heterogeneous azeotrope. Regarding the separation cut, this azeotrope is a light end, and the benzene goes completely into the overhead stream. Again, this is considered to be fine at first, as the organic compound should not occur in the waste


Figure 3.4: Ammonia–water separation with two distillation columns and a decanter. Screen images of Aspen Plus® are reprinted with permission by Aspen Technology, Inc. AspenTech® , aspenONE® , Aspen Plus® , and the Aspen leaf logo are trademarks of Aspen Technology, Inc. All rights reserved.

water. The strange result is that the benzene has no way left to leave the system, and it accumulates in the cycle until the arrangement ceases to do its job. An exit for the organic component must be provided; in this case, a liquid–liquid separator was chosen (Figure 3.4, B1). Still, the benzene accumulates, but at a certain level a phase split occurs. The organic phase can be removed continuously, while again both the ammonia and the waste water are free of impurities. The conclusion from this incident is: Never forget the toilet! (Process development wisdom)
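The accumulation can be reproduced with a one-line mass balance on the benzene in the recycle loop, iterated by successive substitution just as a simulator would iterate the tear stream. This is only an illustrative sketch: the 1 kg/h benzene feed is taken from the case study, while the removal fraction of the decanter is an assumed value.

```python
def recycle_balance(feed, removal, n_iter=200):
    """Successive substitution on the benzene flow in the recycle stream.
    'removal' is the fraction of benzene leaving the loop per pass
    (0 = no exit, as in the original two-column arrangement)."""
    r = 0.0
    for _ in range(n_iter):
        r = (feed + r) * (1.0 - removal)   # the rest is recycled back
    return r

print(recycle_balance(1.0, 0.0, 50))   # no exit: 50 kg/h after 50 passes, still growing
print(recycle_balance(1.0, 0.2))       # 20 % removed per pass: approaches 4 kg/h
```

Without an exit, the fixed-point iteration diverges linearly, exactly the behavior the iteration history showed; with any finite removal fraction p, the loop settles at feed · (1 − p)/p.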

The most important factor for the feasibility of a process is the value creation of the products in comparison with the raw materials. Before a process is developed or the erection of a plant is decided, it must be clear that the added value of the products is attractive. This check is not really complicated; the only things one must know are the stoichiometry and the prices of the substances involved.

Example
EDC (1,2-dichloroethane) is produced via the reaction

    C2H4 + Cl2 → C2H4Cl2

The prices4 and the molecular weights of the particular substances are listed in Table 3.1.

4 in this example completely fictitious


Table 3.1: Fictitious prices and molecular weights of ethylene, chlorine, and EDC.

Ethylene:  500 €/t   28.053 g/mol
Chlorine:   50 €/t   70.905 g/mol
EDC:       200 €/t   98.959 g/mol

Can this process be feasible?

Solution
Let us consider that 1 kmol ethylene reacts with 1 kmol chlorine. This means that

    28.053 kg C2H4 + 70.905 kg Cl2 → 98.959 kg C2H4Cl2

The corresponding values are

    28.053 kg ⋅ 0.5 €/kg + 70.905 kg ⋅ 0.05 €/kg → 98.959 kg ⋅ 0.2 €/kg

This means that the value of the raw materials is 17.57 €, compared to 19.79 € on the product side. With these assumed prices there is in fact a value generation, but it is comparably small; taking other operating and capital costs into account, the process might not be feasible. Although the product has a lower price per kg than one of the raw materials, a value generation is possible due to the increase of the molecular weight: only approx. 30 % by mass of the EDC molecule comes from the ethylene; the rest comes from the inexpensive chlorine.
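The check is easily scripted; the sketch below merely reproduces the numbers of the example with the fictitious prices of Table 3.1:

```python
# Value creation for C2H4 + Cl2 -> C2H4Cl2 per kmol of reaction,
# using the fictitious prices of Table 3.1
M = {"ethylene": 28.053, "chlorine": 70.905, "EDC": 98.959}   # kg/kmol
price = {"ethylene": 0.50, "chlorine": 0.05, "EDC": 0.20}     # EUR/kg

raw = M["ethylene"] * price["ethylene"] + M["chlorine"] * price["chlorine"]
product = M["EDC"] * price["EDC"]
print(f"raw materials: {raw:.2f} EUR, product: {product:.2f} EUR, "
      f"margin: {product - raw:.2f} EUR per kmol")   # 17.57 vs. 19.79 EUR
```

Such a script is handy for quickly re-running the check when market prices move, which for basic chemicals they constantly do.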

The cost structures of the various kinds of chemical production, i. e.
– basic chemicals,
– fine chemicals,
– specialty chemicals,
– pharmaceuticals, and
– polymers,
are rather different. Basic chemicals and polymers are manufactured in large specialized plants with extremely large capacities (typically 300 000 tons per year or even more). For the economy, the differences between the market prices of products and raw materials are decisive, as in the example above. The costs for energy and for the disposal of the waste products and waste water grow steadily and already have a significant influence. The large capacities are advantageous, as the investment costs of a plant do not rise proportionally to the capacity (economy of scale). Also, appropriate logistics for handling the huge amounts of substances is necessary. It is favorable to work at large chemical sites, where all utilities are available and where the products can be used further directly, without a major transport infrastructure. Because of the strong competition, the margins are low. The profit in basic chemicals production is based on the safe

sales of large amounts, which are ensured by long-term contracts. The same holds for polymers. In contrast, the manufacturing capacity of pharmaceutical products is by far lower, often only several hundred kg per year. The costs for raw materials, energy, and the investment in the plant are not really important. However, before a pharmaceutical product obtains approval, a huge research effort is necessary before the first revenues are obtained, often more than a billion €. This means that the risk of such a development is high. Furthermore, once the product is approved, the reliability of the production has to be ensured. An intermediate situation is encountered for fine, specialty, and agrochemicals; the capacities are by far lower, but the margins are much higher than for basic chemicals. For fine chemicals, the focus is on the molecule. Usually, it has a complex synthesis with many steps. Because of the low capacities, fine chemicals are in most cases produced in batch processes, often in campaigns. There might even be no "dedicated plant", i. e. a plant which is used only for the manufacturing of one product. Instead, the feasibility of manufacturing at existing sites is interesting. Therefore, the optimization of the process is usually not the key issue; the reliable synthesis and purification with stringent quality specifications is more important. For process engineers, there is also a wide variety of jobs in exhaust air purification, waste water treatment, and waste product management. Specialty chemicals are similar to fine chemicals, but the main focus is on the final application. Examples are paints, detergents, de-icing agents, or adhesives. The target is not necessarily the sale of the product but the technical solution for the customer.

3.2 PID discussion

The PID is a structured, simplified view of a plant section and in most cases the decisive document for the design of a plant. It shows the piping including the flow directions, the particular pieces of equipment and their interconnections, the instrumentation, the control devices, the interlocks (s. Glossary), valves, and flanges. To specify the items shown further, not only simplified symbols of the apparatuses and machines are given but also additional information on material, power, manufacturer and type, etc. This information is located in the lower part of the PID; e. g. revision status, date, originator, and company-specific data can be found in the lower right corner. The various symbols used are usually defined in a preamble and shall not be listed here. Instead, we try to illustrate the philosophy behind them and point out connections to other chapters of this book. One of the main reasons why engineering requires PIDs is to have a simple scheme showing the material flow and the connections of the different instruments. The material streams and their flow directions are indicated by solid lines with arrows. Often, the main product route in the process is indicated by a bold line. Various depictions exist to declare the type of insulation or pipe specifications, for example slope or diameter change (Figure 3.5).

Figure 3.5: Pipe with insulation, diameter change from size DN 80 to DN 50, pipe with a slope of 2 %, and hose.

The pipes have to be chosen according to mechanical requirements to withstand a defined temperature and pressure range, specified by the process conditions and the evaluation of potential maximum values. Furthermore, the pipe material is chosen to resist corrosive fluids, for example. Leak tightness is one of the most important issues throughout pipeline construction. Therefore, gaskets must be selected in accordance with the temperature and pressure level and the fluid properties, respectively. All this information is gathered in a sequence called the fluid code. It is an explicit denomination for a pipe, usually including the following information: diameter, pressure stage, pipe material, medium flowing through the pipe, type of gasket system, and type of insulation. A continuous number can be added to distinguish the different pipe sections. To avoid unclear, long names, the information is condensed into coded abbreviations. Generally, the medium is not mentioned with its full name but represented by a number or letter code. The same holds for the material characteristics and the pipe outfit. Each company may define its own catalogue. Close to the numbers 10, 20, and 28 in Figure 3.6, the denomination of the lines can be seen. It starts with the nominal diameter of the line, followed by the fluid code. Then, there is the line number for identification, the material code, and the insulation type, in these cases "HC40" (heat conservation at 40 °C) or "PP40", meaning personal protection at 40 °C (see Chapter 12.2). Material codes and insulation types are usually redundant information, as the fluid code covers them already. The PID does not intend to show the actual course of the pipe – this is provided by an isometric drawing.
Generally, the PID focusses on a defined process stage, including the main instruments and energy streams like steam or cooling water. The different items and pipes are usually not arranged in reality as a first glance at Figure 3.6 may suggest. As the instruments connected by pipes can be located at different geometric levels, small labels indicating the height may be present. This is useful information for understanding the arrangement and the interconnections. Next to the described elements, the PID includes valuable information on the control system, covering both manual and automated systems. Valves and fittings are essential elements of safe operation. The possibility to block or open a path is part of either measuring (e. g. dosage, batch processes), a redundant system (one pump in operation, a second one available in case of failure), or emergency operation (safety valve or venting system). Manual items can be part of everyday operation, for example sampling or dosage. Automated systems are mainly used to maintain process conditions by responding to


Figure 3.6: Example PID of a distillation column.


deviations (see Chapter 3.7), and can be equipped with alarm systems. To give an impression of the signal chain: dashed lines depict analog signals, and lines with dashes and dots represent digital signals. The purpose of this section is to discuss a comparably simple PID in detail, extending the information given before and showing actual possibilities of depiction. The chosen PID can be seen in Figure 3.6. As pieces of equipment, there are a distillation column called 23C002 and an adjacent thermosiphon reboiler 23E006. According to the symbol, the column is a packed one (13). The packing is irrigated by the reflux stream (10); the distributor (11) is only indicated. The nozzles with their denomination, approximate position, and nominal size are depicted (e. g. 19), which is also done for the reboiler (38). For the bottom, a larger diameter compared to the packing section has been chosen to increase the residence time; a cone-shaped transition piece (21) is provided. For inspection, there are 24′′ manholes both at the top and the bottom of the column (12) so that at least slim people can enter it. The two-phase flow (vapor and liquid) coming from the reboiler (28) enters the column via the half-open pipe (25), which is supposed to achieve disengagement of vapor and liquid.5 In the bottom area, various liquid levels (36) are indicated. NL means normal level, which is to be maintained by level control. Major deviations to high or low level (AH and AL, respectively) cause an alarm signal for the operators. Even larger deviations (SAHH and SALL) cause an interlock (SAHH means "switch alarm high high"). The level in the bottom of the column is supervised by three independent features. There is a level measurement connected to a transmitter (35). The signal is forwarded to an LIC, which controls the level, maybe by manipulating a feed or an effluent stream. The LIC also provides the alarms for high and low level.
Obviously, there are two level gauges (30, 35) where the level can be watched directly in the plant. A third, independent system (23, 40), probably realized with a different measurement principle, activates the interlocks (17), which might switch off the feeds or the effluent flows at high-high or low-low level, respectively. The liquid can leave the column at the bottom (41). A vortex breaker (see Chapter 9) is used to avoid the formation of waterspouts. A drain valve is located close to the lowest point of the column for de-inventory, maintenance, and inspection activities. The reboiler is insulated (34) for the purpose of heat conservation. Shell side and tube side can be drained (31) and vented (22), respectively. Furthermore, the condensate line can be drained. The PID instructs to provide a low point in the condensate line to ensure its complete emptying (42). Also, slopes have to be established in the steam line (27) to ensure that condensate formed in the line by accident has a well-defined flow direction. The flow of the steam is controlled by a flow control device FIC-230903, which is itself directed by the temperature controller TIC-230908. The connecting line between the TIC and the FIC with open dots (14) represents a software connection, whereas the

5 No comment on its effectiveness.

dashed line (16) between the FIC and the signal transducer FXV is an electrical signal. The valve itself is operated with instrument air (IA), requiring a pneumatic signal (26). The control valve arrangement (32), with the control valve itself, the bypass with a manually operable ball valve (see Chapter 12.3.1), the drains, and the taper and expansion upstream and downstream of the control valve, has been discussed before (see Figure 1.2). The "FC" at the control valve means "failure closed", i. e. the valve will go to the closed position in case the supply of energy, instrument air, or electricity fails. TSO is the abbreviation of "tight shut-off", which indicates a special tightness class to separate process systems; only a very small leakage rate is allowed when the valve is closed. Another important control loop is the one for the reflux (7), which is flow-controlled. The complete control loop is not visible on this sheet. The valve (33) is a shut-off valve. Undesirable states in the plant, e. g. high pressure in the column (18, I-2334), can activate corresponding interlocks in the DCS, which in turn cause the shut-off valve to close and stop the steam flow to the reboiler. The often complicated signal flow is depicted in the PID, as in this case. In the column, there are a number of pressure and temperature measurements. The most important one is the temperature control loop (14). With a temperature control, the composition of the column product can be regulated (see Chapter 5.6). As explained above, the TIC finally manipulates the steam flow, which in turn strongly influences the temperature profile in the column. According to the vapor–liquid equilibrium, the boiling temperature also depends on the pressure; therefore, a pressure transmitter gives a software signal to the TIC to compensate for pressure changes (14). The column pressure is usually controlled in the condenser, which is not illustrated on this PID.
As for the bottom level, the interlocks for excessive pressure and temperature are activated by different transmitters (15, 18). The temperature interlock can be reset by a hand switch (HS, 15). "TW" refers to a thermowell, which protects the thermocouple from direct contact with the medium. There are also pressure (3, 4) and temperature (8, 24, 37) measurements which only indicate the values for operator information. At position (9), the pressure drop across the packing section is measured (PDI-230903). This information is interesting for detecting fouling layers on the packing: when the pressure drop increases with time, it is a strong indication that the packing is subject to fouling. If the pressure drop decreases, it might be an indication of corrosion or even loss of packing. The lines to the pressure sensors are insulated and sloped to avoid condensate formation, which would cause significant measurement errors. Vent nozzles play a decisive role during the commissioning of a plant. After assembly, the equipment contains air at ambient pressure, and it often cannot be avoided that additional inert gases are introduced with the process streams. In operation, the gas will be displaced by the process streams. It will accumulate at the high points of the pieces of equipment. Therefore, a vent valve must be provided at any high point so that it is possible to get rid of the gas in a defined way (column: 5, reboiler: 22). Especially for condensers these inert vents are extremely important, as the inerts


restrain the heat transfer at condensation (see Chapter 4.6). Of course, the vented gases are not released to the environment but collected if they are hazardous. The PID also provides information about the arrangement of the equipment. For some reference positions (39), the heights are given; the number refers to millimeters above zero level. Strangely, in this case the zero level is referred to as +100000 mm, so that negative numbers do not occur even for plants that are 100 m belowground.6 The column is protected against overpressure by a safety valve (1, see Chapter 14.2). In case the design pressure (0.62 MPag) of the column is exceeded, the safety valve opens and vapor from the top of the column is vented, probably to a flare. The safety valve has a bypass which is NC (normally closed). Bypasses around safety valves frequently occur to enable maintenance of the safety valve. For this purpose, the safety valve can be isolated from the line with two valves upstream and downstream. They are CSO (car sealed open, see Glossary). During the maintenance, the NC valve in the bypass is operated manually; if there is any indication that an unintended pressure buildup might happen, it is opened. The PID refers to a preliminary state, which becomes obvious by having a look at the nominal diameters of the lines to and from the safety valve. They are given as 99′′, which indicates that the lines have not been sized so far. Finally, three minor points shall be regarded.
– The beginning of a line with different properties is indicated by a pin (2) so that it is clearly defined what a line naming refers to.
– Item (29) shows that the diameter of the line changes. The triangle symbol indicates whether it becomes larger or smaller in flow direction; the information 8′′/6′′ does not, as the larger diameter is always given first.
– The pressure indicator (3) is supposed to give the pressure at the top of the column without additional pressure drop.
Therefore, the distance between the pressure sensor and the column should be as small as possible (6).

3.3 Heat integration options

As mentioned, the feasibility of a process is mainly determined by the value creation due to the price difference between raw materials and products. However, the value creation itself can hardly be influenced by the engineering; it is more a question of "yes" or "no". Less important, but susceptible to good process engineering, are the utility costs, especially for steam and electricity. Therefore, heat integration measures are a field where process engineers can develop their strength, i. e. suggest reasonable ways to save energy without significantly increasing the complexity of the plant.

6 The sense of this convention is hardly understandable. Negative numbers would not cause any difficulties; subterraneous plants are rare, and if they actually occur, it is by no means ensured that the 100 m are sufficient.

Before starting with heat integration, one should be aware that the utility costs depend extremely on the location. Energy of any kind is extremely inexpensive in the United States, Saudi Arabia, or Qatar, whereas in Europe or in Asia energy costs are significant. The steam price varies from 4 $/t to 30 $/t; it is clear that the effectiveness of steam saving measures must be considered when deciding where the plant will be built. First, some standard options for heat integration which occur frequently are presented, based on the case that an aqueous solution has to be concentrated. For a better understanding, it should be pointed out that for dilute solutions large amounts of water must be removed to increase the concentration. For example, to increase the concentration slightly from 1 % to 2 %, about half of the water has to be removed. To concentrate the solution further from 10 % to 20 %, only about 5 % of the water originally present has to be evaporated.
– Reverse osmosis7: For very dilute solutions, it is possible to use reverse osmosis as a first step to get rid of a large part of the water without thermal energy. Details and an example can be found in Chapter 7.1. There are membranes available which let only water8 pass, whereas all other components are retained. In this way, water can be removed until the osmotic pressure is reached. As long as the durability of the membrane is sufficient, reverse osmosis should be considered. In the low concentration region, large amounts of water can be removed according to the effect described above, while the energy consumption is just given by the pump energy for the pressure elevation (usually approximately 60 bar).
– Multieffect evaporation: Figure 3.7 shows one of the possible arrangements for multieffect evaporation with four effects. The evaporators I–IV are arranged in series, where the pressure decreases from effect to effect.
The product to be concentrated is fed to the system in the first effect, which is heated with fresh steam. The vapor generated is the heating agent for the second effect, where the product outlet of the first effect is further concentrated. The process proceeds in this way: the vapor generated is the heating agent for the next effect; feed and vapors are in cocurrent flow (forward feed arrangement). The advantage is that no pumps are used; the flow of the product stream is achieved by the pressure difference between the effects. It is the most useful arrangement if the feed is already hot (i. e. not significantly subcooled) or if the concentrated product must not be exposed to high temperatures [67]. Also, a backward feed arrangement is possible, where countercurrent flow is realized. It is the appropriate arrangement if the feed is cold, i. e. strongly subcooled, as the fresh cold feed is evaporated at the lowest temperature and does not need to be heated up

7 Strictly speaking, reverse osmosis is not a heat integration but a heat saving measure. However, it can support the other methods to a great extent.
8 And some other molecules similar to water in size.


Figure 3.7: Forward feed arrangement for multieffect evaporation.



first. It is also the best option if the concentrated product becomes highly viscous. In this arrangement, the concentrated product with the highest viscosity is processed at the highest temperature so that the heat transfer remains acceptable. Of course, a backward feed arrangement needs pumps for the transfer of the product solution. In multieffect evaporation, one must also take care that a sufficient driving temperature difference is available for each effect. It is quite easy to estimate how much fresh steam is saved with multieffect evaporation. With two effects, the first effect takes the steam and generates about the same amount of vapor for heating the second effect. Therefore, about ½ or 50 % of the fresh steam needed with one effect is necessary. The same consideration holds for other numbers of effects: 33 % for 3 effects, 25 % for 4, 20 % for 5, 17 % for 6, and 14 % for 7 effects. These values indicate that more than 3 or 4 effects do not save much more energy, not to mention that it becomes difficult to provide sufficient driving temperature differences and that all effects have almost the same price, independent of their effectiveness. Only for very large plants do more than 4 effects really make sense.
– Mechanical vapor recompression (MVR): For dilute solutions, the boiling point elevation is usually not significant. Therefore, a moderate compression of the generated vapor can elevate its dew point by approx. 8–10 K. The corresponding arrangement is explained in Chapter 8.2 (Figure 8.13), where an example is calculated as well. The energy added to the vapor by the compressor is used to elevate the dew point temperature of the vapor. Even the compressor losses (Chapter 8.2) are not wasted but remain in the system, causing superheating of the vapor. The necessary pressure increase can often be achieved with blowers (Figure 8.14), the simplest form of a compressor, which has even comparatively low investment costs. Mechanical vapor recompression gains the highest possible energy savings. There are cases where a fresh steam consumption of up to 100 t/h can be replaced by a few MW of electrical power, which are by far less expensive [67]. Usually, fresh steam is only necessary for the startup. The drawback is that often more heat transfer area is necessary. The 8–10 K mentioned above are sufficient for heat transfer to take place but not extraordinarily high. With fresh steam, the driving temperature difference can be more or less chosen; of course, it is limited to certain standard ranges (Chapter 4.7). However, in most cases it is several times larger, so that the heat transfer areas can be smaller. Mechanical vapor recompression has disadvantages in vacuum applications [67]. The main ones are:
– in a vacuum, the vapor volumes are larger, so that larger compressors and pipes are required;
– air leaks can badly affect the efficiency of vapor recompression.
However, these drawbacks are not generally prohibitive for vapor recompression in vacuum applications. There seems to be a certain trend towards mechanical vapor recompression, as it is probably the most effective steam saving measure. Concerning the blowers with their limited pressure ratio, a multistage compressor arrangement can be used. Blowers require hardly any maintenance. At first glance, it looks as if someone wants to pull himself up by his own bootstraps, but correctly designed mechanical vapor recompression definitely works and has a lot of successful references.
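The quoted dew point elevation of approx. 8–10 K can be estimated from the vapor pressure curve of water. The sketch below uses the Antoine equation with a common literature constant set (P in mmHg, T in °C); the blower pressure ratios are assumed values, and the short extrapolation above 100 °C is accepted for this estimate:

```python
from math import log10

# Antoine constants for water, a common literature set (P in mmHg, T in degC,
# nominally valid up to approx. 100 degC)
A, B, C = 8.07131, 1730.63, 233.426

def t_sat(p_mmHg):
    """Saturation temperature of water from the Antoine equation."""
    return B / (A - log10(p_mmHg)) - C

p0 = 760.0   # vapor leaving the evaporator at atmospheric pressure
for ratio in (1.3, 1.5):   # assumed blower pressure ratios
    dt = t_sat(ratio * p0) - t_sat(p0)
    print(f"pressure ratio {ratio}: dew point raised by about {dt:.1f} K")
```

A pressure ratio of roughly 1.3 to 1.5, well within the reach of a blower, already lifts the dew point by about 7 to 12 K, which is consistent with the 8–10 K mentioned in the text.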
– Thermal vapor recompression (TVR): In thermal vapor recompression [67], a jet pump (Chapter 8.3) is used instead of a compressor to increase the pressure of the generated vapor for reuse. Figure 3.8 shows a typical arrangement; the technical principle of a jet pump is explained in Chapter 8.3. There are several differences from conventional compression. First, the compression is not driven by electrical energy but by fresh steam ("motive steam"). Therefore, TVR should only be considered if high-pressure steam is available and low-pressure steam is sufficient for the process. Although it depends of course on the particular case, the amount of motive steam is usually of the same order of magnitude as the suction steam. Therefore, a rule of thumb says that TVR can replace one effect. The motive steam does not produce condensate which can be reused but waste water, as it is contaminated by the vapor coming from the product. The unbeatable advantages of TVRs are the low investment costs, their reliability, and the low space demand. However, there are difficulties in operation: TVRs are designed for a single operating point, and changes in operation might result in serious performance breakdowns. In these cases, their behavior is difficult to predict.
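The fresh-steam fractions quoted above for multieffect evaporation (50 % for two effects, 33 % for three, and so on) follow from a simple 1/n estimate, which also shows the diminishing return of each additional effect:

```python
# Fresh-steam demand of an n-effect evaporator relative to a single effect:
# each effect reuses the vapor of the previous one, so roughly 1/n of the
# total evaporation duty must be supplied as fresh steam.
for n in range(1, 8):
    saving_step = 100.0 / n - 100.0 / (n + 1)   # marginal gain of one more effect
    print(f"{n} effect(s): approx. {100.0 / n:.0f} % fresh steam, "
          f"one more effect would save another {saving_step:.0f} points")
```

The marginal saving shrinks as 1/(n(n+1)), which is the quantitative reason why more than 3 or 4 effects rarely pay off.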


Figure 3.8: Thermal vapor recompression arrangement.

Figure 3.9: TH-diagram foundations.

For heat integration, besides the effective reuse of vapor latent heat, the optimization of the heat exchanger network is an important issue. It can be performed in a systematic way, where the network is analyzed as a whole. The Pinch method [68] determines the most efficient heat transfer network just from the point of view that the use of external utility is minimized. The main tool of the Pinch method is the representation of the heat streams in a TH diagram (Figure 3.9). In a process, hot streams require cooling, where their enthalpy is reduced (arrows from right to left). Cold streams require heating (arrows from left to right), where the enthalpy is increased. There are two sorts of streams. The first ones change the temperature during heating and cooling according to

    Q̇ = ΔḢ = ṁ c_p ΔT   (3.9)


Figure 3.10: Construction of a Composite Curve.

giving inclined arrows in the TH diagram.9 The second sort of streams are those which are evaporated or condensed (latent heat), where the temperature remains constant, according to

    Q̇ = ΔḢ = ṁ Δh_v   (3.10)

In this case, the arrows are horizontal.10 The heating and cooling demand of the process can be visualized by drawing the Composite Curve (Figure 3.10). For each temperature, the heat capacity flows of the streams involved are added up; at each point, the slope of the curve is represented by 1/(ṁ c_p). Finally, the latent heats are added for each temperature. In this way, the Composite Curves for the hot streams and the cold streams are built. The curves can be shifted parallel to the H-axis so that the cold Composite Curve is below the hot Composite Curve. The so-called pinch must be defined, i. e. the minimum driving temperature difference ΔTmin (Figure 3.11) at which heat transfer is still considered to make sense, e. g. 10 K. Again, the cold Composite Curve is shifted until the minimum temperature difference between the cold and the hot Composite Curve is equal to ΔTmin. In the region where the hot Composite Curve is above the cold Composite Curve, it is possible that the hot streams are cooled by the cold streams and vice versa. However, at the ends one curve usually extends beyond the other. In these regions, the process does not cover the demand for heating and cooling, respectively; therefore, cold and hot utilities (steam, cooling water, etc.) are necessary. The enthalpy differences represented by these overhanging spaces indicate the minimum hot and cold utility demand, respectively. If the curves are shifted more towards each other

9 In the following section explaining the Pinch method, it is assumed that the heat capacities of the streams remain constant so that straight arrows are produced. In reality, this is not the case, giving slight curvatures. 10 This is an approximation for illustration as well. In fact, for mixtures changing the phase the temperature will change.


Figure 3.11: Hot and cold Composite Curves.

Figure 3.12: Regions below and above the pinch.

so that the overlapping is further reduced, ΔTmin would be underrun (pinch point, Figure 3.11). Hot and cold Composite Curves can then be separated into a region below and a region above the pinch (Figure 3.12). If heat integration is to be applied in the optimum way, heat must not be transferred across the pinch, or, in more detail:
– Don't use steam below the pinch!
– Don't exchange heat between streams on different sides of the pinch!
– Don't use cooling water above the pinch!
– The temperature levels of a thermal engine shouldn't be on different sides of the pinch.
– A heat pump should be operated across the pinch.
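The minimum utility demand and the pinch location can also be determined numerically with the so-called problem table algorithm, a standard tabular formulation of the pinch calculation. The following is a minimal sketch; the four-stream data set is purely illustrative (it is not taken from this chapter's figures), and, as in the text, each stream is assumed to have a constant heat capacity flow rate CP = ṁ cp.

```python
def problem_table(hot, cold, dt_min):
    """Minimum hot/cold utility and pinch location via the problem table.

    hot, cold: lists of (T_supply, T_target, CP), CP assumed constant.
    Temperatures are shifted by dt_min/2 so that any feasible match has
    at least dt_min driving force.
    """
    segs = []  # (T_low, T_high, signed CP): + releases heat, - absorbs it
    for ts, tt, cp in hot:
        lo, hi = sorted((ts - dt_min / 2, tt - dt_min / 2))
        segs.append((lo, hi, cp))
    for ts, tt, cp in cold:
        lo, hi = sorted((ts + dt_min / 2, tt + dt_min / 2))
        segs.append((lo, hi, -cp))

    bounds = sorted({t for lo, hi, _ in segs for t in (lo, hi)}, reverse=True)
    cascade = [0.0]  # heat cascaded downwards from interval to interval
    for upper, lower in zip(bounds, bounds[1:]):
        net = sum(cp for lo, hi, cp in segs if lo <= lower and hi >= upper)
        cascade.append(cascade[-1] + net * (upper - lower))

    q_hot = max(0.0, -min(cascade))      # heat added on top removes all deficits
    q_cold = cascade[-1] + q_hot         # whatever is left at the bottom
    pinch = bounds[cascade.index(min(cascade))]  # shifted pinch temperature
    return q_hot, q_cold, pinch

# illustrative four-stream problem with dt_min = 10 K (hypothetical data)
hot = [(250, 40, 0.15), (200, 80, 0.25)]     # T in °C, CP in MW/K
cold = [(20, 180, 0.20), (140, 230, 0.30)]
q_hot, q_cold, pinch = problem_table(hot, cold, 10)
print(q_hot, q_cold, pinch)   # ≈ 7.5 MW hot utility, ≈ 10 MW cold utility, pinch at 145 °C (shifted)
```

The cascade corresponds directly to the Grand Composite Curve: wherever it touches zero after adding the hot utility, the pinch is located.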


Figure 3.13: Construction of the Grand Composite Curve.

Figure 3.14: Meaning of the pockets.

Furthermore, the Grand Composite Curve can be introduced (Figure 3.13). After shifting the Composite Curves to ΔTmin = 0, the enthalpy differences between hot and cold Composite Curve are represented for each temperature. With this construction, the necessary temperature of the various utilities can be determined. The “pockets” play the key role in this context. Pockets are formed when the process can deliver part of the required utility on its own. As a result, part of the utility can be supplied at a more convenient temperature level, i. e. lower for heating agents and higher for cooling utility (Figure 3.14).11 Having set these constraints, one can set up an appropriate heat exchanger network. Also, distillation columns [69] and other equipment can be described. A comprehensive explanation of the pinch method can be found in [321], where it is also thoroughly elaborated how to take the step from the composite curves back to the heat exchanger network (HEN).

11 Of course, the shifting of the Composite Curves must be reversed when the utilities are finally chosen.

The procedure of assigning the appropriate heat exchangers will be illustrated in the following example, taken from [321]:

Example
From a composite curve, the pinch point of the process has been determined to be 80 °C (cold side) and 90 °C (hot side), respectively. The minimum utility consumptions are Q̇H,min = 20 MW for heating and Q̇C,min = 60 MW for cooling. The streams involved in the HEN are as follows (Figure 3.15):

Figure 3.15: Streams involved in the HEN.

Assign the appropriate heat exchangers to achieve a minimum utility consumption.

Solution
The process is split into a region above the pinch and a region below the pinch (Figure 3.16). They are treated separately.

Figure 3.16: Streams above the pinch and below the pinch.

First, the region above the pinch is regarded. One starts at the pinch. Stream (1) carries 240 MW above the pinch, the same as stream (3) needs. This is the first heat exchanger (Figure 3.17). Stream (2) carries 90 MW, while stream (4) needs 110 MW. Thus, the corresponding heat exchanger can cover the target partially; the rest (20 MW) must be provided by utility, which is the minimum. In the region below the pinch, stream (4) is the only one to be heated up, with 120 MW (Figure 3.18). Stream (1) can provide 90 MW, which is the first heat exchanger, giving 35 °C for stream (4). The rest of 30 MW


Figure 3.17: Heat integration above the pinch.

Figure 3.18: Heat integration below the pinch.

Figure 3.19: Grid diagram.

to reach 20 °C can be provided by stream (2). Stream (2) needs further cooling and takes 60 MW from cooling water, which is also the minimum. Finally, the considerations from above the pinch and below the pinch are joined again (Figure 3.19). Figure 3.19 is called the grid diagram and provides all the information needed to assign appropriate heat exchangers and to at least approximately implement the results of the pinch analysis. Nowadays, modern pinch analysis programs (e. g. Aspen Energy Analyzer) do this reassignment automatically. However, they sometimes do not provide a strict pinch analysis. Instead, they optimize the costs. At first glance, this is reasonable, but one should be aware that both the sizes of the heat exchangers and the corresponding CAPEX are estimated with a considerable lack of knowledge. On the other hand, they do not necessarily maintain the ΔTmin, which is an arbitrary choice anyway. Often, very small heat exchangers are generated, as the duties in the pinch analysis are always calculated with a constant cp for the stream, which is only a rough approximation and leads to these inconsistencies.
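The tick-off procedure used in the example can be sketched as a simple greedy matching of duties. The helper below is hypothetical (it is not an excerpt from any pinch software) and deliberately ignores the CP feasibility rules that a rigorous design must respect at the pinch; it merely matches duties until one side is exhausted, with the leftovers assigned to utilities.

```python
def tick_off(hot_duties, cold_duties):
    """Greedily match hot against cold stream duties on one side of the
    pinch. Returns (matches, cold_utility, hot_utility): leftover hot
    duty must go to cold utility, leftover cold duty to hot utility.
    Note: the CP rules at the pinch are ignored in this sketch."""
    hot, cold = dict(hot_duties), dict(cold_duties)
    matches = []
    for h in hot:
        for c in cold:
            q = min(hot[h], cold[c])
            if q > 0:
                matches.append((h, c, q))
                hot[h] -= q
                cold[c] -= q
    return matches, sum(hot.values()), sum(cold.values())

# duties in MW, stream numbers as in the example
above = tick_off({1: 240, 2: 90}, {3: 240, 4: 110})  # above the pinch
below = tick_off({1: 90, 2: 90}, {4: 120})           # below the pinch
print(above)  # ([(1, 3, 240), (2, 4, 90)], 0, 20) -> 20 MW hot utility
print(below)  # ([(1, 4, 90), (2, 4, 30)], 60, 0) -> 60 MW cooling water
```

With these numbers, the greedy matching reproduces the heat exchangers and the minimum utilities of the grid diagram.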


One should be aware that the pinch method just makes suggestions. Often, heat integration measures increase the complexity of the flowsheet, or material reasons prevent the results from being implemented. Also, the spatial distance between hot and cold streams can be so large that heat integration is awkward. Nevertheless, pinch analysis has proved to be a useful tool to get an overview of the heat integration options. In the author's view it is even superior to the exergy analysis, although the latter is more elaborate from a thermodynamic point of view. The pinch analysis is restricted to heat integration, which can usually be implemented without changing a successfully working process. The exergy analysis covers other issues as well, e. g. dilution or neutralization as contributions to chemical exergy. Exergy analysis does not give an answer as to whether avoiding the losses is in line with the targets of the process or not. The opportunities for improvement often require major changes to the process, which might be linked with additional test phases. Also, the mechanical exergy often yields only minor contributions, and these can easily be detected by having a simple look at the pressure levels in the process.

3.4 Batch processes

Most processes in fine chemicals, specialty chemicals, and pharmaceuticals are operated not as continuous but as batch processes, meaning that a specified amount is produced within a certain time. Often, batch products are not manufactured in dedicated plants but in multipurpose units, where several products can be produced in the same plant, just according to the demand. The engineering of such a plant comprises not only making up the dimensions but also scheduling the charges of the equipment, the duration of the various process steps and the optimization of the load of the plant. Meanwhile, tools are available which enable a comprehensive documentation and visualization of the process. Flowsheets, equipment lists, the mass balance, the contents of the particular pieces of equipment, the emissions and the time schedule can be generated with a batch simulation. The basis of the batch simulation is the recipe, which is in principle a standardized process description. A special language consisting of a limited number of expressions has been developed, covering all the possible steps in a batch process. It contains the whole information about the process. The recipe links the unit operations together; it is similar to a laboratory instruction. As the language consists of standard phrases, an automatic translation into other languages is easily possible. Figure 3.20 shows an example of a batch recipe. As for the continuous process, a simple flowsheet (“equipment diagram”) can be derived from the recipe (Figure 3.21). It visualizes the flow between the pieces of equipment and relates them to the recipe steps. For each piece of equipment, the content as a function of time can be visualized (Figure 3.22), giving valuable advice for its design. Moreover, it can be evaluated when emissions take place and what the amount of emissions is. In particular, this is most relevant for the exhaust air concept (Chapter 13.4).
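The idea of a recipe as a standardized language can be illustrated with a toy sketch: each step is a verb from a small fixed vocabulary plus parameters, so that the recipe is machine-readable, and a rendered text (or a translation) is just a set of templates. The verbs, units and names below are invented for illustration; they do not reproduce the actual recipe language of any batch simulation tool.

```python
# toy recipe: a list of (verb, parameters) pairs; verbs and units invented
RECIPE = [
    ("CHARGE", {"unit": "R1", "material": "water", "amount_kg": 500}),
    ("HEAT", {"unit": "R1", "target_c": 80}),
    ("STIR", {"unit": "R1", "duration_min": 30}),
    ("TRANSFER", {"src": "R1", "dest": "V2"}),
]

# one template per verb; a second template set would yield a translation
TEMPLATES_EN = {
    "CHARGE": "Charge {amount_kg} kg of {material} into {unit}.",
    "HEAT": "Heat {unit} to {target_c} °C.",
    "STIR": "Stir {unit} for {duration_min} min.",
    "TRANSFER": "Transfer the contents of {src} to {dest}.",
}

def render(recipe, templates):
    """Turn the structured recipe into readable instructions."""
    return [templates[verb].format(**params) for verb, params in recipe]

for line in render(RECIPE, TEMPLATES_EN):
    print(line)
```

Because the structure is explicit, the same recipe object could also feed a mass balance or a time schedule, which is exactly what a batch simulator does on a larger scale.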
In contrast to a continuous process, where it is usually well defined, the


Figure 3.20: Example for a batch process recipe. Screen images of Aspen Plus® are reprinted with permission by Aspen Technology, Inc. AspenTech® , aspenONE® , Aspen Plus® , and the Aspen leaf logo are trademarks of Aspen Technology, Inc. All rights reserved.

3.4 Batch processes � 115

Figure 3.21: Example of an equipment diagram. Screen images of Aspen Plus® are reprinted with permission by Aspen Technology, Inc. AspenTech® , aspenONE® , Aspen Plus® , and the Aspen leaf logo are trademarks of Aspen Technology, Inc. All rights reserved.

Figure 3.22: Equipment content visualization. Screen images of Aspen Plus® are reprinted with permission by Aspen Technology, Inc. AspenTech® , aspenONE® , Aspen Plus® , and the Aspen leaf logo are trademarks of Aspen Technology, Inc. All rights reserved.

exhaust air in batch processes is hard to trace, for example, it is generated each time when a vessel is filled or flushed. If the temperature in a vessel rises, vessel breathing takes place due to thermal expansion.


Figure 3.23: Example for a Gantt Chart.

However, the heart of a batch simulation program is the schedule view (“Gantt Chart”). An example is shown in Figure 3.23. It is the most valuable information for the staff to operate and organize the process and to evaluate whether it is feasible at all. There are tools for the optimization of batch schedules; the following example shall illustrate the large potential of such a tool in a very simple case.

Example
A batch process for a specialty chemical consists of the following operation steps:
1. producing an intermediate of product A in vessel 1 in 5 h;
2. producing an intermediate of product B in vessel 1 in 2 h;
3. finalizing product A in vessel 2 in 2 h;
4. finalizing product B in vessel 2 in 4 h.
For producing an appropriate amount for selling, three cycles for each product are necessary. Optimize the makespan, i. e. the time needed for production in the plant. The cleaning of the vessels must take place after each step and is already included in the given durations of the process steps.

Solution
Figure 3.24 shows two options. In Approach 1, vessel 1 first produces the whole amount of the intermediate of product A. After a batch of A has been finished, it is finalized in vessel 2. Then, vessel 1 is used to produce the intermediate of product B. When the first batch of B is ready, the last batch of product A has just been finalized in vessel 2, so one can directly continue with product B. The makespan for Approach 1 is 29 h. It has certain advantages, as both vessels can be operated partly in parallel. Moreover, in both vessels only one product change takes place, so that the cleaning effort is comparably small. However, as mentioned above, the cleaning must be performed anyway and is already considered in the time schedule. In this case, Approach 2 has considerable advantages, as an alternating production of the products A and B takes place. The overlapping times, where both vessels are used in parallel, are much larger. Therefore, the makespan is reduced to 25 h, giving a capacity increase of almost 14 %. This example has been executed more or less by manually rearranging the time frames, as the process is simple enough. For complex cases, capable optimization software is required.
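The two schedules can be reproduced with a few lines of code. The sketch below assumes, as in the example, that vessel 1 runs the intermediates back-to-back and that vessel 2 starts each finalization as soon as both the intermediate and the vessel are available; function and variable names are made up for illustration.

```python
STEP1 = {"A": 5, "B": 2}  # intermediate in vessel 1, duration in h
STEP2 = {"A": 2, "B": 4}  # finalization in vessel 2, duration in h

def makespan(sequence):
    """Makespan of a batch sequence: vessel 1 produces the intermediates
    in the given order; vessel 2 finalizes each batch as soon as the
    intermediate is ready and the vessel is free."""
    t1 = 0  # time when vessel 1 becomes free
    t2 = 0  # time when vessel 2 becomes free
    for product in sequence:
        t1 += STEP1[product]               # intermediate ready
        t2 = max(t1, t2) + STEP2[product]  # finalization finished
    return t2

approach_1 = ["A", "A", "A", "B", "B", "B"]  # campaign-wise production
approach_2 = ["A", "B", "A", "B", "A", "B"]  # alternating production
print(makespan(approach_1))  # 29
print(makespan(approach_2))  # 25
```

For a real multipurpose plant with many products, units and cleaning constraints, the search space explodes, which is why dedicated scheduling optimizers are used instead of such a manual enumeration.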


Figure 3.24: Example for makespan optimization.

3.5 Equipment design

Besides process simulation, equipment design is the second large task for engineering calculations. There is a clear trend in process simulator programs to integrate process simulation and equipment design. For instance, a heat exchanger could be represented in different ways with increasing complexity:
– a simple heater block, which just regards the energy balance of one of the two streams, yielding the necessary duty;
– a heat exchanger block, taking into account the energy balance of the second side of the heat exchanger as well. Especially if both streams are process streams, it often happens that an additional tear stream is introduced, when the conditions of the second stream are a result of the first stream. An advantage is that the temperatures of the two streams are compared in each segment, so that it is supervised that the cold stream has a lower temperature than the hot stream everywhere;
– a block where the heat transfer coefficient is taken into account, either by setting it to a fixed value or even by a calculation, taking into account the specification of the heat exchanger.

In the author's opinion there are advantages in clearly separating process simulation and equipment design. Sticking to the example of the heat exchanger, reducing the complexity is normally very useful for process simulation, and estimated heat transfer coefficients will only give a brief impression of the necessary equipment. It is useful that the process simulation just defines the requirements of the process, which should then be achieved by an adequate equipment design. In the design procedure itself, the engineer gets a certain feeling and experience for the sensitivities of the various adjusting screws. It is, however, necessary to adjust some parameters in the simulation after the design

has been set up, e. g. the pressure drop. Nevertheless, it can still be decided whether the value obtained in the process design or a maximum allowable value is used in the simulation. Another item is the validity of the documents during an engineering project. If the design parameters of a heat exchanger or a column are directly involved in the process simulation, it might happen that minor changes due to new information about the process have to be considered, e. g. a fouling factor in a heat exchanger or the choice of a less expensive packing type. In this case, the outlet streams of this piece of equipment will slightly change, which in turn has an influence on all the connected downstream equipment and recycles. Their specifications are probably only slightly affected, but no longer consistent with the mass balance – a nightmare for quality management. To take the mass balance only as a constraint for the equipment design is more effective and less sensitive to changes. Equipment design comprises the choice of the kind of equipment, the fixing of design options (e. g. for a shell-and-tube heat exchanger: hot fluid on shell side or tube side), the determination of the dimensions including both the process and the mechanical aspects, and the limits of the operating conditions. In process engineering, there is a peculiarity concerning overdesign: a piece of equipment in process engineering can be too small or too large; both incidents can lead to a lack of function. In other engineering disciplines, a piece of equipment which is too large is usually more expensive and therefore uneconomical, but the function is achieved. Therefore, special care must be taken to make sure that the particular pieces of equipment can be used in all required application cases, and, furthermore, the applications themselves should be questioned and discussed.
Due to the limited accuracy of process engineering calculations, safety factors are certainly necessary, but unreasonable ones can be detrimental to the function of the equipment [70]. In the following chapters, the main design aspects of various kinds of equipment are outlined.

3.6 Troubleshooting

Round up the usual suspects! (Claude Rains as Captain Renault in “Casablanca”)

Process simulation is a complex exercise and therefore prone to errors and mistakes. In most cases12 only one single error is responsible for failure in process simulation. It is usually no use to detect even more failures, as they all might refer to the same reason. The most important thing is to keep calm and analyze what happened. In fact, as in the citation above, there are a number of usual suspects, and, in contrast to the movie,

12 but, to my regret, not always.


there is a high probability that they are responsible for the failure. And even if not: the defined work on the process helps to obtain a better understanding of the calculation and the behavior of the simulation, and often it is the key for fixing the failure at the end of the day. If process calculations give strange results or do not match the process, the following things should be done first.
– Check if systems with an LLE are calculated with the three-phase flash (VLLE): If systems with an LLE are calculated without taking the LLE into account (Chapter 2.5), strange results are obtained, and they are not always so obviously wrong as in Figure 2.21. Undoubtedly, this is the error which occurs most often. Unexpected mixing temperatures or crazy phase equilibria are clear hints that the setup of the phase equilibrium calculation might be wrong.
– Model change: Model changes lead to inconsistencies in the enthalpy description (Chapter 2.11). Any region where a model change takes place must be carefully checked, even if nothing suspicious happens.
– Binary parameters: Check whether the decisive binary pairs are reliable.
– Check whether the numbers of theoretical stages in columns are adequate. If small concentrations occur, it might be useful to apply a rate-based calculation (Section 5.9).
– Components are missing: Often the wrong components are being looked at. One must double-check with the plant manager that the component list is correct.
– Enthalpy description: The liquid heat capacity is often a quantity which causes errors (Chapter 2.8). If energy balances of liquid phases look strange, one should carefully check whether the errors in cpL are acceptable.
– Component removal: When strange physical properties or phase equilibria occur and the reason is not identified, it might happen that one of the components is the reason because of a wrong parametrization. First, a plot of the quantity to be checked should be generated, showing all pure components.
If it is still not clear which one it is, the strategy to remove the components one by one can be applied. If the error vanishes after having removed one component, the erroneous component is identified and should be further examined.
– Component accumulation: As in the case study (Section 3.1) with the ammonia–water system, it often happens that no outlet for one or more components has been provided. A simulation cannot converge if one component has no option to leave the system. Process simulation programs usually save the tear history, where it can be checked for which components difficulties occur in fulfilling the mass balance.








– Convergence of column calculation: Distillation is the most crucial unit operation block in a process simulation. For flowsheet convergence, it is not crucial if during flowsheet evaluation a column once does not fully meet the convergence criterion, as long as this is not the case in the final step. If a column calculation has no tendency to converge at all, it is strongly recommended to stop the process simulation. Otherwise, the simulation will be continued with a meaningless result, which is then distributed into the rest of the flowsheet. All the other blocks downstream get wrong input values and yield bad results on their part. Finally, bad starting and input values are spread all over the flowsheet, which makes convergence even worse.
– Wrong plant information: Data coming from the plant staff might not be in the desired form. Mass balances to be reproduced might be inconsistent; especially compositions might be biased. Even technical terms can be used in different ways; a classic error source is a wrong application of the reflux ratio (Equation (5.1)) in distillation columns.
– Column hydraulics: Surprisingly, column hydraulics are rarely the reason for major errors [259]. The amazing reason is that the engineers are aware that there are large uncertainties and therefore do the design carefully.
– Unexpected chemical reactions: Even if experienced chemists swear that a system is chemically stable, unexpected side reactions frequently occur in distillation columns. A GC analysis of a sample of the stream considered might reveal new unidentified peaks that can indicate side reactions.

One question is often encountered, namely that of the accuracy of process simulations. Although this question is not very specific and is often used to discredit process simulation as a tool responsible for higher costs, it has a well-defined answer: the process simulation is as accurate as our understanding of the process. This is true in any case, neglecting those where simply input errors occurred (wrong parameter transfer for physical properties, different operating conditions, misinformation on equipment, etc.). Cases of limited process understanding include:
– bad extrapolation to γ∞ for the decisive binary mixture;
– wrong physical property data where it is important, either due to estimation or lack of relevant data;
– use of an equilibrium model where a mass transfer calculation is required (e. g. HCl absorption from exhaust air);
– bad use of reaction kinetics;
– wrong estimation of tray or packing efficiency;
– bad characterization of membrane or adsorption behavior;
– unknown side reactions;
– unknown components.


In this context, it becomes clear that process simulation is not a truth generator but requires profound process knowledge and methodological competence. Certainly, it is sometimes possible to design the equipment and run it without any calculation. Plant managers sometimes have 30 years of experience; however, their knowledge is hardly transferable to a newcomer. Process simulation can develop transferable knowledge and valuable information for design and operation.

3.7 Dynamic process simulation
Verena Haas

The use of process simulation tools is a widely applied practice in process design and analysis. Steady-state models, see Chapter 3, depict the overall process in a defined condition often referred to as the “design condition”. This is the representation of the plant operation in its working condition, meaning after start-up and without disturbances. Design operation conditions are not necessarily accomplished, due to changes in raw material composition or technical failure of equipment, for instance. In fact, steady-state models are a practical tool for the representation of specified process conditions. Due to the lack of time dependence, steady-state models are a simplification of the realistic problem, and the outputs, meaning flow rates, the necessary energy amount for heating or cooling, pressures and temperatures, represent a snapshot of the process. Assuming steady state, the derivative with respect to time is zero: the mass and energy input matches the output. However, every industrial production process varies over time, and a steady-state specification is not sufficient to cover the complex, transient behavior of real plants. High-performance dynamic process simulators overcome the restriction of assuming only design conditions and allow for reliable real-time simulation of existing plants or planning of new processes. Both steady-state and dynamic models are based on first principles; the essential difference is that time dependence of variables is considered in the latter. Hence, dynamic simulation takes account of mass and energy accumulation. This allows for the simulation of scheduled start-up, shutdown or feedstock changes and of the impact of external disturbances. Dynamic solver engines are designed for efficient calculation and cover transient conditions deviating from equilibrium. Through dynamic simulation, the engineer gains a broad comprehension of the functional interaction of the modeled process units.
Therefore, it enhances process understanding, allows for comprehensive quantitative analysis and supports decisions on investment projects. The digital model is extensible and once created it is a basis for further process

Verena Haas, BASF SE, D-67056 Ludwigshafen am Rhein, Germany

investigation or equipment sizing. The following list summarizes other typical advantages and applications and provides examples of when dynamic simulation is favorable [264, 265]:
– Analysis and optimization of transient behavior: design of inherently transient batch and semi-batch processes. Simulation of start-up, shutdown or load change without the necessity to set up a new simulation.
– Flexible interaction with the simulation: implementation of programmed scenarios and monitoring of the system's response to perturbations. Performance evaluations under conditions differing from the design specification, for example, changes in feedstock composition, become accessible.
– Risk reduction through performance tests: safety analysis and offline modelling of external disturbances, for example, power or equipment failure, abnormal heat input, etc.
– Debottlenecking studies: identification of critical operation conditions, for example, detection of pressure buildup in vessels or hotspots in chemical reactors.
– Sizing of pressure safety valves and flare load calculation: depiction of relief and blowdown scenarios, calculation of relief loads as a function of time and revamp of flare networks.
– Analysis and tuning of control systems, for example, PID controllers: evaluation of control system performance prior to installation.
– Operator training: enabling practice of normal and non-routine plant operation in virtual scenarios. Thereby, the term “digital twin” is used to describe virtual models representing an existing asset by combining simulation tools and relevant real-world data [266] (see Chapter 15).
– Modelling of equipment lifetime: increased insight into the effects of fouling in heat exchangers, time-dependent catalyst performance or the impact of plant modifications.
– Cost savings: avoiding oversizing, enabling energy and emission reduction.
In conclusion, a useful dynamic simulation is capable of accurately representing the operation behavior of the real process but also has a predictive character, to be used for the design of new or the optimization of existing plants. There are different commercially available software packages, including Aspen Plus® Dynamics and Aspen HYSYS® Dynamics from Aspen Technology, gPROMS® by Siemens Process Systems Engineering, Dassault Systèmes' Dymola® and open-source applications, for example, OpenModelica [264]. These software tools contain mathematical solver systems and are based on conservation laws; they include thermodynamics, heat and mass transfer phenomena and kinetics. Thereby, the simulator provides preinstalled equations and solver algorithms that can be expanded by user-defined sequences. Usually, the process simulator offers subroutines and libraries where different options are available. To make successful use of any tool, the user must be aware of the implemented mathematics, the phase equilibrium specification and the accuracy of the declared variables. The theory of these topics and mathematical solver strategies are


discussed thoroughly in the literature, see, for example, [267] and [268]. User manuals and customer hotlines supplied by the software provider give further information and help concerning specific problems as well.

3.7.1 Basic considerations for dynamic models

The results of a simulation and their correctness strongly depend on the input and the model set-up. If available, the dynamic simulation should include engineering design data and specific process data. This is, particularly, a prerequisite for the digital representation of any existing process. Additionally, a complete and reliable thermodynamic package is favorable. Process simulator default databases for thermodynamics may lack important information, especially if the examined process is operated at temperatures or pressures outside the available data source range. Apart from these key requirements, which are also valid for steady-state simulations, there are some essential considerations that need to be accomplished before working on a dynamic simulation. Steady-state simulators assume material streams to flow from one unit to another. This is valid as long as the pressure in the upstream unit is higher than the pressure downstream. Material cannot simply flow; it needs to be transported, and its flow is determined by pressure gradients, friction and flow regimes [264]. Therefore, dynamic models include the pressure–flow relationship by calculating the valve pressure drop instead of using a constant value, for instance. Consequently, dynamic models must allow for reverse flow to calculate and display reversed pressure ratios. Equipment size definition is another requirement for dynamic simulation. Contrary to a steady-state block specification, geometry has a distinct effect on dynamics. Heat and mass transfer operations are directly affected by spatial relationships, and the liquid level is only accessible by including geometry. Basically, there are two different modes of setting up dynamic models. The first technique is based on a complete steady-state simulation equipped with further information to execute the transfer, as used by Aspen Plus® Dynamics, for example.
The mentioned considerations for pressure–flow relationships and equipment sizing are essential for converting an existing steady-state simulation to a dynamic one. As downstream pressure influences flow rates, every block-to-block connection or pipeline needs to be specified concerning the pressure–flow relationship. Therefore, the flowsheet must be prepared by including additional pressure changers, e. g. valves or pumps, or pipeline pressure drops must be specified. Further information on equipment size does not affect the steady-state result, but because of these additional pressure changer elements the steady-state run needs to be repeated. If the dynamic model is created by transferring a steady-state solution, the initial condition of the dynamic simulation equals the steady-state condition. Simulation time is set to zero in dynamic mode, and if there is no perturbation or controller action during the following dynamic run, the output results will not vary with time.
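The pressure–flow specification can be pictured with the usual square-root valve characteristic (cf. Chapter 12.3). The sketch below is a generic illustration, not the implementation of any particular simulator: the flow carries the sign of the pressure difference, so reverse flow is represented, and a marginal residual flow replaces an exact zero, in the spirit of the numerical recommendation mentioned later in this section.

```python
import math

def valve_flow(kv, dp_bar, rho=1000.0, opening=1.0, q_min=1e-7):
    """Volumetric flow in m3/h through a control valve:
    Q = opening * Kv * sqrt(|dp| * rho_water/rho), signed with dp so
    that reverse flow is represented; q_min is a tiny residual flow
    that keeps the solver away from an exact zero."""
    sign = math.copysign(1.0, dp_bar)
    q = opening * kv * math.sqrt(abs(dp_bar) * 1000.0 / rho)
    return sign * max(q, q_min)

print(valve_flow(16.0, 1.0))             # 16.0 m3/h of water at 1 bar drop
print(valve_flow(16.0, -0.25))           # -8.0 m3/h, i.e. reverse flow
print(valve_flow(16.0, 0.5, opening=0))  # marginal flow, valve fully closed
```

In a pressure-driven flowsheet, each such element couples two pressure nodes, and the solver determines pressures and flows simultaneously instead of passing streams downstream.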


Figure 3.25: Time-dependent evolution of the variable value from initialization to stable condition. Adapted from [265].

Other simulator tools do not use an existing steady-state flowsheet as a basis for the dynamic solution. A new flowsheet is built in the dynamic simulator workspace. Blocks and material streams are inserted and connected similarly to steady-state simulations, but additional data required for dynamic procedures are requested right from the start. For the design of new processes, this method is preferably used as long as no steady-state model is available. Initialization requires known values for the state variables or their time derivatives at the initial conditions. State variables arise in the accumulation term of instantaneous material or energy balances. Temperature, for example, is a state variable necessary to solve the energy equation [269]. After initialization, the integration algorithm is performed starting from that specified initial state. This allows the determination of how long it would take to reach a stable mode of operation for the given system, see Figure 3.25. On the other hand, it is possible to skip the start-up period and only use the solver system to calculate the stable condition directly. Regardless of which technique is used, the underlying set of equations and defined variables give the possibility to perform different activities of interest. This underlines the flexibility of equation-based tools for process simulation. Interaction with the simulation is possible by defining “operation modes”. Some tools allow for the integration of programmed sequences. These user-defined programs represent, for example, a scheduled operation (e. g. “add 100 kg of reagent X after 5 min of stirring”) or a controller action when a certain condition is true (e. g. “close valve A when level is lower than 1 m”), or they evoke an equipment malfunction scenario (e. g. “cooling water supply for chiller Y is blocked at run time 20 min”). Additionally, plots and tables allow a descriptive representation of the process variables over time.
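A minimal numerical example of such a dynamic run: a gravity-drained tank integrated with an explicit Euler step, with the feed rate given as a function of time so that a scripted scenario ("halve the feed after one hour") can be injected. All numbers (tank area, outflow constant, time step) are invented for illustration only.

```python
import math

def simulate_tank(q_in, t_end, dt=1.0, area=2.0, c=0.005, h0=1.0):
    """Explicit-Euler integration of  area * dh/dt = q_in(t) - c*sqrt(h),
    a gravity-drained tank. q_in is a function of time, which allows
    scripted operation modes and disturbances to be injected."""
    h, t = h0, 0.0
    while t < t_end:
        h += dt * (q_in(t) - c * math.sqrt(max(h, 0.0))) / area
        t += dt
    return h

# design case: constant feed, the level settles at (q_in/c)^2 = 4 m
h_design = simulate_tank(lambda t: 0.010, t_end=20000)
# scripted disturbance: feed halved after 1 h, new steady level 1 m
h_upset = simulate_tank(lambda t: 0.010 if t < 3600 else 0.005, t_end=40000)
print(round(h_design, 3), round(h_upset, 3))  # 4.0 1.0
```

The steady-state simulator would only deliver the two end points (4 m and 1 m); the dynamic run additionally shows how long the transition takes and what the level does in between, which is exactly the information sketched in Figure 3.25.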
In Chapter 3.1 several flowsheet blocks used for steady-state simulation are introduced. The symbols used for flowsheet set-up are identical, but additional input information for dynamic modelling is required. The following examples list some basic considerations and possible scenarios the user should know before starting dynamic flowsheet design, but they need to be evaluated concerning their relevance to the given problem. The specific requirements for block definition depend on the simulation tool used and the particular information needed to apply the provided method/model.

3.7 Dynamic process simulation

Vessel: The geometry of storage tanks, flash drums or reflux drums must be specified in dynamic mode in order to calculate the liquid level or liquid mass, respectively. This is necessary information for calculating the condition in the vessel, because the pressure depends, for example, on liquid hold-up and temperature. If pressure and/or temperature in a liquid-filled vessel change and the boiling point is exceeded, a gaseous phase will appear – this, of course, is not restricted to vessels, but this simple example shall sharpen the user’s awareness of possible phase transitions.

Valve: Pressure–flow relationship equations are used to calculate the pressure drop depending on fluid flow and valve characteristics. The latter include the valve flow coefficient Kv (see Chapter 12.3). Control valves are commonly used to regulate fluid flow as they allow the positions “open”, “closed” and positions in between. If the valve is fully closed and the measured fluid flow downstream is zero, solver algorithms may fail. This is a challenging numerical operation, and if the solver calculation stalls, it is recommended to allow for a marginal fluid flow, e. g. 0.0001 kg/h. Usually, this does not affect the overall process but keeps the simulation stable. The same holds if the inflow to the valve is zero because there is no upstream fluid flow.

Pump: Performance and efficiency curves can be inserted to represent realistic pump behavior and to define the pump working limits. This may become interesting if the volumetric flow rate changes drastically. If, for some reason, reverse flow occurs, the pump simulation will probably cause the solver to freeze, because pressure elevation with the pump is only possible in one direction. Thus, the simulation is not erroneous itself but reveals a potential pump failure.

Heat exchanger: The exchanger area is calculated from the geometry, if specified, or predefined by input information. The heat capacity of the shell and tubes can be taken into account and represents the energy required to warm up the material.
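The valve pressure–flow relationship mentioned for the valve block can be sketched with the common liquid-service equation Q [m³/h] = Kv · √(Δp [bar] / SG). The linear characteristic, all numbers and the marginal-flow floor are illustrative assumptions, not the internals of any particular simulator:

```python
import math

def valve_mass_flow(kv_full, opening, dp_bar, rho):
    """Liquid mass flow through a control valve (illustrative sketch).

    Uses Q [m3/h] = Kv * sqrt(dp [bar] / SG) with SG relative to water and
    assumes a linear characteristic, i.e. Kv proportional to the opening."""
    sg = rho / 1000.0                           # specific gravity vs. water
    kv = kv_full * opening                      # linear characteristic (assumption)
    q = kv * math.sqrt(max(dp_bar, 0.0) / sg)   # volumetric flow, m3/h
    m = q * rho                                 # mass flow, kg/h
    # Keep the marginal flow recommended in the text when the valve is
    # closed, so the solver never has to handle an exact zero flow.
    return max(m, 1e-4)

flow_half_open = valve_mass_flow(kv_full=10.0, opening=0.5, dp_bar=1.0, rho=1000.0)
flow_closed = valve_mass_flow(kv_full=10.0, opening=0.0, dp_bar=1.0, rho=1000.0)
print(flow_half_open, flow_closed)   # closed valve still returns 0.0001 kg/h
```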
The user should also be aware that the fluid flows of both the hot and the cold stream can vary over time, and dynamic models allow for the manipulation of fluid streams, for instance, by using a controller. In addition, the fluid flow regime and the medium heat capacity are not necessarily constant. This affects the overall heat transfer coefficient and the pressure drop and forces the available exchanger duty to vary. These aspects influence the driving temperature difference between the heating/cooling medium and the process side.

Distillation column: Usually, steady-state models of columns work with a specified feed composition, defined pressure drops and reboiler/condenser specifications. Steady-state distillation simulation assumes phase equilibrium on every stage for this design condition. However, pressure drop, liquid/vapor flow, temperature and composition depend on the operating conditions, as mentioned before. Dynamic simulations make it possible to trace

a change of the operating conditions and its effects on column performance by modelling time-dependent stage equilibrium, pressure drop and stage hydraulics at any time. Overhead and bottom systems are included as well. The pressure drop is calculated from pressure–flow relationships, including the liquid hold-up on the stage given by the geometric definitions and hydraulics. Generally, pressure, level and temperature controllers must be present and configured to ensure a realistic model of how the column system would behave if a perturbation occurred. Examples of distillation column control are given in Chapter 5.6. To set up liquid level control, the geometries of the reflux drum, the column bottom and other adjacent pieces of equipment must be known. In addition, off-spec column performance can be revealed: for example, tray flooding is detected because of the liquid hold-up calculation; reverse flow, possibly arising when the top stage pressure exceeds the bottom stage pressure, can be identified. Condenser and reboiler performance will be affected, too, if the operation mode changes. Here, basically the considerations given for heat exchangers are valid. It is possible to consider the column material heat capacity and the heat output of the whole device to the environment, which completes a thorough energy balance and displays real-world physics.

Chemical reactor: The reaction rate is, among other factors, dependent on temperature, concentration and catalyst performance. As these conditions may change, reactor temperature/pressure profiles, reaction rates and product composition are affected. These data are necessarily calculated to solve the kinetic model equations and are available as simulation results. Therefore, a reliable kinetic model must be defined by the user, and depending on the dynamics of the reaction, the integrator time intervals must be adjusted. Additionally, the liquid and vapor hold-up calculations require geometry data.
Catalyst loading and distribution are necessary information if, for instance, the residence time limits the conversion or catalyst deactivation is considered. Heating/cooling equipment must be specified as well, for example, by implementing cooling water streams and adequate heat transfer models. The model is completed by considering the heat capacities of the reactor shell material and tubes to calculate equipment warm-up and heat output to the environment. Generally, pressure and level controllers should be added, but the control strategies must be defined for the specific case.

Additional remarks: Dynamic simulation is obviously more CPU-intensive than a steady-state approach. To reduce the simulation time, the model shall be kept as simple as possible. In this respect, it is usually not necessary to simulate the overall process in full detail. For operations that need to be simulated accurately, like fast dynamic effects, time step variation can be the method of choice. Decreasing the integration time step usually gives more accurate results but increases the simulation time. Dynamic simulation produces lots of data, and the large amount of information should be clearly documented.
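The trade-off between integration time step, accuracy and CPU time can be made tangible with a first-order lag dT/dt = (T∞ − T)/τ, whose exact solution is known. All numbers are invented for the illustration:

```python
import math

T0, T_inf, tau, t_end = 20.0, 80.0, 5.0, 10.0   # assumed values

def euler_end_value(dt):
    """Integrate dT/dt = (T_inf - T)/tau with explicit Euler up to t_end."""
    T = T0
    for _ in range(int(round(t_end / dt))):
        T += dt * (T_inf - T) / tau
    return T

# Exact solution for comparison: T(t) = T_inf + (T0 - T_inf)*exp(-t/tau)
exact = T_inf + (T0 - T_inf) * math.exp(-t_end / tau)

err_coarse = abs(euler_end_value(1.0) - exact)    # 10 steps, cheap
err_fine = abs(euler_end_value(0.01) - exact)     # 1000 steps, 100x the work
print(f"error with dt=1.0: {err_coarse:.3f} K, with dt=0.01: {err_fine:.3f} K")
```

The fine step costs roughly a hundred times more integration steps but reduces the end-point error by about two orders of magnitude – the trade-off described above.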


3.7.2 Basics of process control for dynamic simulations

Efficient and safe operation of any plant needs to be maintained even if external influences cause off-spec operating conditions. Automated process control is required to compensate for deviations of variables like temperature, pressure, concentration or level from the desired value. Generally, steady-state simulations are specified for these desired conditions, and the input is set to a fixed value. In a real plant, values cannot simply be set – but control elements enable the definition of a set-point that is maintained by implementing an adequate control scheme [264]. These devices are included in dynamic models and are a necessity for a reliable process portrayal. The presence of control elements is a first “visible” difference between steady-state and dynamic simulation. Steady-state simulators, as mentioned before, do not allow for mass or energy accumulation. As this cannot be “seen”, it is more practical to focus on the corresponding measured variables, for example, temperature, pressure or level [264]. For this purpose, the dynamic flowsheet is equipped with sensors and controllers whose performance can be tuned.

A so-called feedback control loop is a common type of control. Its chronology of interaction is depicted in Figure 3.26. The sensor records a measurable variable, and the data is forwarded to a controller. The controller receives the information delivered by the sensor and compares it with the desired value of that variable: the set-point. The deviation from the set-point is translated into an “instruction”, or output signal, for the control device. The control device executes the corresponding action, for example, the opening or closing of a valve [264]. Commonly, control valves are used as control elements to manipulate the fluid flow rate depending on the opening percentage of the valve. Thereby, the specific valve type can be represented by specifying a valve characteristic [270].

Figure 3.26: Depiction of a feedback control loop.

Principally, there are two types of variable declarations concerning process control14 – parameters and free variables. Their characteristics can be explained as follows:

14 Other authors may use the term fixed variable instead of parameter, or further differentiate between input, output and state variables (here: free variables) and parameters/fixed variables, respectively.

– Parameters are physical or chemical properties whose values are known or predefined, for example, reaction rate constants and heat/mass transfer coefficients. Equipment geometry parameters, e. g. the diameter of a vessel or the number of heat exchanger tubes, are fixed as well, but may be varied during design optimization [266]. These parameters are not accessible via process control devices. There is, for example, no sense in declaring a “controller” that varies the vessel diameter to adjust the liquid level. Also, thermal conductivity, heat transfer coefficients and others are not applicable as part of a control scheme (although measurement might be possible).
– Free variables cannot be assigned a constant value, as physical and chemical laws define their dependence on other variables. Typically, inlet flow rates, compositions or temperatures of streams entering the first stage of a process are fixed for initialization. They are predefined by the user. The mathematical solver uses the input variables and calculates the values of the free variables downstream. Input values may be varied as well, in order to model a load change, for example. Thus, both inputs and results of the simulation can be free variables. Control systems are used to measure and then influence free variables specifically [266]. In conclusion, free variables, as declared in this text, include the measured and manipulated variables mentioned in the context of process control schemes before.

The appropriate implementation of control schemes requires the definition of corresponding variable pairs. The interaction of measured and manipulated variable is the basis of a working control system. In addition, it is important to consider the “direction” of the controller action, meaning that it must be predefined whether the connection between measured and manipulated variable is direct or reversed, see Figure 3.27. These terms define the regulating action that needs to be performed by the control device to react to a disturbance in the right way. Direct mode causes a controller action aligned with the direction of change of the measured variable. Reversed mode triggers a contrary operation, meaning that the controller output action opposes the direction of development of the measured variable.

Figure 3.27: Interdependence of measured and manipulated variable for direct or reversed mode.
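The two modes can be condensed into the sign of a proportional controller term. The function below is only a schematic illustration – the gain, bias and clamping limits are invented, and a real controller would include further modes:

```python
def proportional_output(mode, gain, setpoint, measured, bias=50.0):
    """Return a controller output in % valve opening.

    'direct': the output moves in the same direction as the measured
    variable, 'reversed': it moves against it. Bias is the output at
    zero deviation from the set-point."""
    error = measured - setpoint
    sign = 1.0 if mode == "direct" else -1.0
    out = bias + sign * gain * error
    return min(max(out, 0.0), 100.0)   # clamp to 0..100 % opening

# Measured value 0.5 above set-point: a direct controller increases its
# output (e.g. opens the outlet valve further), a reversed one decreases it.
print(proportional_output("direct", 20.0, setpoint=4.0, measured=4.5))
print(proportional_output("reversed", 20.0, setpoint=4.0, measured=4.5))
```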

A simple but descriptive example of direct mode is the implementation of a control valve and controller for level control, see the left picture in Figure 3.28.

Figure 3.28: Depiction of direct mode by using liquid level control (LC) and reversed mode for flow control (FC).

The level of a liquid-filled tank depends on the fluid inlet and outlet flow rate. In that case it is sufficient to control either the inflow or the outflow. Here, the use of a control valve as the control element in the outlet stream is considered. The measured value is the liquid level in the tank, and the operator defines the desired set-point value. The manipulated value is the opening percentage of the valve. A direct mode is necessary if an increase of the measured variable can be compensated by an increase of the manipulated variable. This is applicable for the above-mentioned example of level control: if the level rises, the valve opening must increase. Hence, the defined level set-point can be restored by manipulating the position of the outlet control valve, in this case by extending the valve opening. Correspondingly, a reversed mode of action triggers a decrease of the manipulated variable if the measured value increases. An example of this is flow control: if the fluid flow increases, the control valve is set to a position of reduced orifice, see the right picture in Figure 3.28.

Example
A storage tank shall be equipped with pressure and level control. Set-points are p = 1.35 bar and L = 3.994 m. Both manipulated and measured variables must be chosen, and a visual device for the observation of controller performance shall be implemented.

Solution
The snapshot in Figure 3.29 of a simulation created with Aspen Plus® Dynamics shows the implementation of pressure and level control devices for a storage tank. The measured variables are the pressure in the vessel and the liquid level, respectively. The desired values were defined with set-points of 1.35 bar and 3.994 m, see label “SP”. The measured values, meaning the actual pressure and level, are indicated with the label “PV” (process variable). Control valves are used to adjust the pressure by vapor relief and to hold the liquid level by bottom stream mass flow regulation. Thus, the manipulated variable is in both cases the position of the control valve, indicated with the label “OP” (output). In this example, the valve opening is 50 % of its full opening. The faceplates monitor the actual controller performance and are a useful tool to trace and observe process and controller behavior.

Figure 3.29: Storage drum equipped with pressure (DRUM_PC) and level control (DRUM_LC). Controller faceplates of both control devices are shown.

Generally, the procedure of adding control elements to a flowsheet consists of these steps:
1. Define measured–manipulated variable pairs: identify process variables that can be measured by appropriate instrumentation and determine a source able to influence the process variable in the desired way, see Chapter 12.5. Process variables can be temperature, pressure, mass, level, flow or “quality variables” like pH or concentration. Manipulated variables are the opening position of control valves, heating/cooling duty or rotational speed. There are many different possible pairings depending on the specific problem.
2. Make sure that the manipulated variables can only be varied in a realistic range. Heating or cooling medium flow or temperature are restricted by on-site circumstances, for instance. If cooling water, steam or power supply are limited to a certain amount, it is pointless to extend the available resource. In addition, the manipulated variable must be a free variable. Parameters cannot be varied because of physical or chemical process restrictions.
3. Control valves are characterized by their KV-value and opening characteristic giving the pressure–flow relation, see Chapter 12.3.2. This is necessary information for reliable results if valves are involved in control schemes.
4. Select a proper control mode: is the measured–manipulated action relation direct or reversed? This step is essential for defining how the control system responds to process variable variation.
5. Add controller performance displays and trace the controller action: during a simulation run it is recommended to check the controller behavior. The simplest method is to add visual displays or charts that record process variable variation and controller action over time, see the example “Controller faceplate” and Figure 3.29. That way the user can examine whether the controller is working properly.

PID feedback controllers are frequently used in chemical engineering.15 If required, the controller performance can be tuned by algorithm parameter adjustment. Thorough overviews concerning process control in chemical engineering applications can be found in the corresponding literature, for example, [269–271].
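The PI variant mentioned in the footnote can be sketched as a discrete feedback loop acting on the tank level example from above. The tank model, the tuning constants and all numbers are invented for illustration; real controller tuning has to follow the cited literature:

```python
class PIController:
    """Minimal discrete PI controller (illustrative, no anti-windup etc.)."""

    def __init__(self, kp, ki, setpoint, direct=True, bias=0.5):
        self.kp, self.ki, self.setpoint, self.bias = kp, ki, setpoint, bias
        self.sign = 1.0 if direct else -1.0   # direct vs. reversed mode
        self.integral = 0.0

    def update(self, measured, dt):
        error = self.sign * (measured - self.setpoint)
        self.integral += error * dt
        out = self.bias + self.kp * error + self.ki * self.integral
        return min(max(out, 0.0), 1.0)        # valve opening, 0..1

# Toy tank model: dh/dt = (q_in - opening * q_out_max) / area (assumed)
area, q_in, q_out_max = 4.0, 2.0, 6.0
h, dt = 3.5, 0.05
lc = PIController(kp=0.8, ki=0.2, setpoint=3.994, direct=True)
for _ in range(4000):
    opening = lc.update(h, dt)                # measured level -> valve opening
    h += dt * (q_in - opening * q_out_max) / area
print(f"level after the run: {h:.3f} m")      # settles at the 3.994 m set-point
```

The integral part removes the steady-state offset a purely proportional controller would leave; this is why PI schemes are the usual choice for flow, level, pressure and temperature control.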

Example
The following example [272] was set up in order to simulate a pressure relief scenario of a column equipped with a reboiler, condenser and post-condensation system (Figure 3.30). Pressure relief is a highly dynamic event. The target was to calculate the relief stream over time and to determine how long it takes to build up a pressure that causes the pressure safety valve to open (see Chapter 14.2 for a further description of pressure relief theory).

Solution
The simulation was first built in Aspen Plus® and transferred to Aspen Plus® Dynamics, where the control elements were added. Table 3.2 lists the corresponding measured–manipulated variable pairs used for the simulation. Note the different modes of temperature control depending on whether cooling or heating is regarded. If the measured temperature rises, the cooling medium supply must increase to maintain the set-point temperature (direct, TC_Cond, TC_Chill). In contrast, the heating medium supply must decrease if the measured temperature rises (reversed, TC_Col). Column pressure control is realized through nitrogen addition if the pressure drops below a certain level (reversed, PC_N2) and through control valve opening if the pressure increases (direct, PC_Col). The safety valve PSV actuates when a defined column top stage pressure is reached (direct, PSV).

15 PID is the abbreviation of a feedback control loop mechanism with proportional (P), integral (I) and derivative (D) mode. The usage of different values and tuning constants allows for various combinations of these three modes. For a PI controller, for instance, the derivative part is set to zero – indicated by leaving out the “D” in the term “PID”. Based on the author’s experience, it is recommended to apply PI control schemes for the common flow, level, pressure and temperature control tasks. Further information can be found e. g. in [280].


Figure 3.30: Control scheme of a column and reboiler/condenser system. Adapted from [272].

Table 3.2: Measured–manipulated variable pairs and their corresponding mode of action.

Controller  | Measured variable                     | Manipulated variable    | Mode of action
FC_Feed     | Mass flow feed                        | Rotational speed        | reversed
FC_Reflux   | Mass flow reflux                      | Control valve position  | reversed
PC_Col      | Column pressure top stage             | Control valve position  | direct
PC_N2       | Column pressure top stage             | Control valve position  | reversed
PSV         | Column pressure top stage             | Safety valve position   | direct
LC_Sump     | Level column sump                     | Control valve position  | direct
LC_Drum     | Level reflux drum                     | Control valve position  | direct
TC_Cond     | Temperature overhead stream           | Cooling medium flow     | direct
TC_Chill    | Temperature post-condensation stream  | Cooling medium flow     | direct
TC_Col      | Temperature column stage              | Heating medium flow     | reversed

Cooling system failure is a commonly approached scenario during safety analysis. In the portrayed case it causes the condenser and the post-condensation system, the chiller, to fail while the feed stream still enters the column and the reboiler system keeps working. The cooling system failure can be initiated by implementing user-defined code that evokes a malfunction scenario. In this simulation the cooling system failure arises at t = 1 min, assuming prior undisturbed, stable operating conditions. Figure 3.31 shows the pressure buildup in the column due to the accumulation of vapor. The overhead vapor is not liquefied because of the condenser and chiller failure. The safety valve opens when the pressure exceeds 4.9 bar and remains open to prevent further pressure increase. The safety valve actuates after 12 min and pressure relief takes place. This result is important for assessing the malfunction’s severity, which is rather high in the described case because of the short time period before blowdown. In addition, the time-wise change of the relief stream is simulated and allows process engineers to detect the maximum relief stream (and the point in time of its occurrence), see Figure 3.31.

Figure 3.31: Time-dependent pressure and relief stream evolution as a result of cooling failure. Adapted from [272].
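The logic of such a malfunction scenario can be mimicked with a deliberately crude back-of-the-envelope model: ideal-gas vapor accumulating in a fixed head space at a constant net rate after the cooling failure. All numbers are invented, so the resulting opening time is unrelated to the 12 min obtained with the rigorous model; the sketch only shows how a scripted failure and a set-pressure check interact:

```python
R = 8.314            # gas constant, J/(mol K)
T = 360.0            # K, head space temperature (assumed constant)
V = 10.0             # m3, vapor head space (assumed)
n = 1000.0           # mol vapor initially, i.e. p0 ~ 3 bar
n_dot = 5.0          # mol/s net vapor generation after the failure (assumed)
p_set = 4.9e5        # Pa, safety valve set pressure (as in the example)
dt = 1.0             # s, integration time step

t, failure_time = 0.0, 60.0          # cooling failure scripted at t = 1 min
while n * R * T / V < p_set:         # march until the set pressure is reached
    if t >= failure_time:            # before the failure the condenser keeps n constant
        n += n_dot * dt
    t += dt

print(f"safety valve opens after {t / 60:.1f} min")
```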

3.8 Patents

Dr. Michael Benje, thyssenkrupp Uhde GmbH, 65812 Bad Soden, Germany

This brief section has been written by a person who is not a patent expert but who has encountered the issues described below in the course of his professional life. Its purpose is to stimulate the interest of young professionals in working with patents. The aspects described can only be presented briefly and in general terms within the framework of such a text. For each aspect presented, there is specialist literature for every level of knowledge; introductory literature is available as well, e. g. [304, 305]. The information and search options of the patent offices that are available online free of charge deserve special mention – it is worth checking them, especially for newcomers. Many people regard the patent system as dry matter – in the writer’s experience this is not the case. Dealing with patents is exciting and a great way to stay at the forefront of your field. In addition, patents offer an inspiring pool of ideas for your own developments.

An often underestimated – and during day-to-day routine easily forgotten – aspect of the work of a process engineer is the generation and protection of intellectual property (IP). In the course of his/her work, the engineer may get in touch with the matter of IP as soon as technical solutions or processes are developed or if new process steps are introduced into already existing processes. New developments can significantly contribute to the economic profit of the employer, for example, by saving energy or raw materials, by reducing emissions or by improving product quality, and they can also establish an economic advantage over competitors. Therefore, such developments are often worth protecting by own IP rights.

Today, it is a must at the beginning of every R&D (Research & Development) activity or other technical project to investigate which IP rights of others already exist regarding the intended goal or technical solution. The importance of such an investigation cannot be emphasized enough, for several reasons:
– Before starting time- and resource-consuming projects, it must be assured that a potential result does not turn out to be unusable due to already existing IP rights of others. Otherwise, all the time and money spent on the project can be wasted.
– Violation – on purpose or not – of the IP rights of others can cause serious legal and economic consequences. For example, a manufacturer can be forced to destroy a product which was complained about, to put a plant out of operation or to pay royalties to the owner of the violated IP rights. Furthermore, especially in the case of knowingly violating IP rights, the violation can be an offence and will be prosecuted. These consequences are at least very unpleasant and in the worst case can turn out to be existence-threatening.
– Last but not least, the patent literature gives, better than any other source, deep insight into the state of the art of every technological field one may be interested in. Patent reading is not only necessary in order to avoid the above-mentioned problems but also provides an indispensable basis for own development work. Due to this, it is not only an unloved duty but can be very inspiring in the course of developing own, new ideas.
The legal form which covers the technical aspect of intellectual property is the patent. For other aspects of commercial IP, other legal forms are applied, for example, the design or the brand. Another legal form of IP is the copyright, which protects, for example, texts, movies, photographs and so on. In general, patents are granted for technical inventions. The main requirements for patentability are [334]:
– Novelty
– Inventive step
– Industrial applicability

3.8.1 Novelty

An invention is new if it does not form part of the state of the art. The state of the art comprises all knowledge made available to the public by any means, anywhere in the world, before the date of filing [334].


Notably, the state of the art – even if it is technical in nature – does not necessarily have to be disclosed in the technical literature. A famous example demonstrates this impressively [335]. In 1964, the Danish inventor Karl Kroeyer raised a sunken ship by filling it with balls of expanded polystyrene. He received patents for this method in the UK and Germany. Figure 3.32 shows the sketch and the first claim of GB 1,070,600.

Figure 3.32: Sketch from the Kroeyer patent.

However, when he applied for a Dutch patent, the Dutch patent office found the Donald Duck story “The Sunken Yacht” from 1949 [336]. Although in the story (Figure 3.33) ping-pong balls were used instead of polystyrene spheres, ping-pong balls are also buoyant bodies – therefore the story was considered prior state of the art and the patent was not granted.

3.8.2 Inventiveness

At this point, it makes sense to introduce the “person skilled in the art” [337] in the sense of patent law. The person skilled in the art is a fictitious person who is omniscient in his subject. He knows the state of the art completely, has read everything that has ever been published and is capable of realizing a product or a process if it is adequately described. On the other hand, he is not creative at all and is not inspired by own thoughts beyond simple applications of the written state of the art. This fictitious specialist is the decisive reference for judging whether a patent is granted or not. An invention is considered to be based on an inventive step if it is not obvious to the person skilled in the art in the

Figure 3.33: Buoyancy idea in [336].

light of the state of the art [338]. This can be the case if a new combination of known elements causes a new technical effect or if a technical prejudice is overcome. In the course of the examination procedure, inventiveness is often a matter of discussion between the examiner and the applicant.

3.8.3 Industrial applicability

An invention is industrially applicable if it can be made or used in any kind of industry, including agriculture.

3.8.4 Exceptions from patentability [334, 339]

Not patentable are, among others:
– Discoveries (for example, laws of nature)
– Mathematical methods
– Scientific theories

Before a patent is applied for, some preliminary work has to be done. As soon as an idea is formulated and put on paper, a novelty search should be performed in order to


assure that the idea is really new (refer to the example above) and that no unpleasant and costly surprises occur later in the patent granting process. Of course, the application text itself has to be prepared; it may be necessary to take the results of the novelty search into account when drafting it. At this step, professional help from a patent attorney is not mandatory, but it is a good idea to seek it. However, the patent offices offer good support in the form of documents and instructions for inventors who want to file a patent application on their own [340, 341].

The chronological sequence officially begins with the filing of the application: the application text is sent to the patent office as part of a request for the grant of a patent, which comprises – besides the application text itself – data regarding the applicant, the designation of the invention, the method of payment of the registration fee and so on. Once the documents have been received, the application date and number are determined and the application is classified according to the IPC (International Patent Classification) system [342]. Furthermore, it is checked whether the patent application meets the formal requirements.

The submission of the application documents marks the beginning of the priority year. This means that, for one year from the submission, the applicant can apply for patents in other countries. Additionally or alternatively, during this time a cross-border patent application covered by international treaties, such as a European patent (EP) or an international application under the PCT (Patent Cooperation Treaty), can be filed. For those further applications within the priority year, the applicant can claim the priority date of the first application. After the submission of the patent application, the examination phase begins, during which novelty as well as inventiveness are checked.
For this purpose, the examiner of the patent office carries out a search on which the examination action is based. In the best case – if no documents are found that question novelty or inventive step – the examination action is positive and a patent can be granted within a short time. Sometimes, only the elimination of formal errors is required. In the worst case, documents are found during the search that undoubtedly anticipate the subject matter of the invention in a way that is detrimental to novelty. In this case, the application will be rejected. Most cases fall between these two extremes. An examination report will be issued by the examiner in which patent documents (and also non-patent literature) are cited which, from the examiner’s point of view, question novelty or inventive step. Now, arguments have to be found that convince the examiner of the novelty or the inventive step. In many cases, this is an iterative process, with multiple arguments being exchanged between the examiner and the inventor. Again, it is a good idea for the inventor to seek professional help when formulating the arguments. This can come from a patent attorney or a corresponding specialist department in the inventor’s company.

Many inventors – especially those unfamiliar with the technical language of the patent system – are put off by the tone and the wording of the examination notice and therefore believe that their application has no chance of being granted, but this is usually not the case. The best way to overcome this barrier is through conditioning and practice. Again, it is advisable to seek the help of professionals. In the end – if the examiner can be convinced – a patent will be granted in the full scope claimed. Another possibility is that, in the course of the discussion between the examiner and the inventor, the scope of protection is limited and the patent is then granted with this limited scope. Depending on the number of countries in which the patent is filed, the above procedure may need to be followed multiple times, with examiners from different national patent offices having differing views which need to be countered with arguments.

The invention is published in the form of a disclosure document eighteen months after the filing date. After publication, third parties can formulate so-called third-party objections that question novelty or inventive step. Documents can also be submitted to the patent office to support these objections. In the course of the granting procedure, the patent document may be subject to changes, for example, if the wording has to be changed or the scope of protection has to be restricted. Therefore, the content of the patent specification does not have to be identical to the content of the application specification. The type of document is identified by a letter code [343] after the document number – for example, A means it is an application specification and B means it is a patent specification. After a patent has been granted, a nine-month opposition period begins, during which anyone can object to the granting of the patent.
In opposition proceedings, the patent is then re-examined, taking into account the arguments presented. As a result of this process, the patent can be fully maintained, limited or revoked. Limiting or revoking a patent as part of opposition proceedings is an administrative act. After the opposition period has expired, a granted patent can only be attacked by means of a nullity action. This is a court case, which is associated with the corresponding effort and costs. Therefore, if action against a patent is necessary, it should preferably be taken via third-party objections or opposition proceedings. As soon as a patent is granted, the protective effect applies retroactively from the date of filing. The protection period is 20 years. Due to the complex approval procedures, which can take several years during which the patent cannot be used, there are exceptions in some countries for medicinal products and plant protection products in the form of supplementary protection certificates. These can be used to extend patent protection by up to five years. The patent allows its owner to prohibit anyone from using the claimed device or method. Patent protection applies in the area of validity of the patent, i. e. in the countries in which the patent was granted. The patent owner can also prevent others from importing a product protected by the patent from a country where patent protection
does not exist to a country where patent protection exists. This also applies to products that are manufactured using a patent-protected process. The patent owner can derive economic benefit from the patent either by using it himself to market a product with a unique selling proposition or by allowing others to use it in return for payment of a license fee, which is often the case in chemical plant engineering in particular. Although the patent grants its owner the right to prohibit others from using it, there are cases where the owner is not allowed to use the patent himself without the consent of others. This is the case if it is a dependent patent. This can, for example, be a patent that discloses a special technical solution, the principle of which was already claimed in a general form in another, previous patent but not specifically described there and which is not obvious from the previous patent either. In this case, the owner of the dependent patent may not use it without the permission of the owner of the preceding, more general patent, and vice versa. One way out of this impasse is an agreement between the patent holders in the form of cross-licensing. This means that each party allows the other to use the patent in question. The possible dependence of a patent on another, older patent is one reason why an FTO (Freedom To Operate) analysis should always be carried out before commercial application. All patent specifications have the same formal structure. A patent specification contains the following parts:
– Bibliographic data
– Description
– Claims
– Figures (if any)
As an example, Figure 3.34 shows the first page of a patent document with the bibliographic data. One of the most important parts is the publication number by which the patent can be identified. This number is followed by the above-mentioned letter code (here: B1, meaning that it is a granted European patent specification) which gives information about the type of document.
Besides the date of publication and the filing date, the priority date is of importance. Here, the priority date shows that an earlier patent application has already been filed for the same subject. Each patent is classified according to the IPC system [342]. The IPC classes can be a valuable tool in the patent search when it comes to finding similar documents. The countries in which the patent has been filed are listed by their country codes [344]. The inventors are listed in alphabetical order.


Figure 3.34: First page of a patent.

In the case shown, the applicant and the inventor are not identical. The inventors are employees of a company which applied for the patent. If an inventor files a patent at his own expense, inventor and applicant are of course identical.


The references cited describe the technical background that led the inventors to the inventive idea. The description usually first refers to the technical background or the state of the art on the subject. Documents found during preparatory research representing the state of the art are cited in this part. The description of the technical background should lead to the conclusion that there is a technical problem or defect that can be solved by the invention. Next, the invention itself is described. The description can refer to figures (where the elements of the invention are identified by numbers). Different configurations of the invention can be described by different figures. The description of the invention can be followed by examples. In the case of a test description, first a reference is given which represents the state of the art. Test results can be presented as numbers, tables or diagrams. After the reference, test results according to the invention are presented, which demonstrate the advantages of the method or device according to the invention over the state of the art. The description is then followed by the claims – the most important part of the patent. The claims are divided into the independent claim (Claim 1), which represents the “core idea” of the patent, and the following dependent claims (“Device, method or process, according to claim 1, characterized in that …”) which serve to specify the invention and the claimed scope of protection in more detail. A combination of dependent claims can also refer to the independent claim. In general, the scope of protection of a patent is determined solely by the claims. The description and the examples support the claims and also serve to interpret them. In addition, the examples must enable a person skilled in the art to reproduce the patent. The use of language in patent claims (e. g. “… at least …”, “… at least partially …”, “… preferably …”, “… especially preferred …”) often seems strange and unfamiliar to newcomers at first. The aim of this language use is to achieve the greatest possible scope of protection and, if possible, to anticipate applications that were not yet thought of when the application specification or the claims were drafted. Skillful drafting of the patent claims makes the difference between a good and a bad patent, and there is a body of literature devoted to this subject. It is therefore highly advisable to seek the help of patent professionals in drafting the claims if the inventor or the applicant is not experienced in this matter. As mentioned above, the description should contain all information necessary to support the claims. On the other hand, it is not advisable to include more information than needed for this purpose. While the scope of protection is determined by the claims, all additional information disclosed in the description will be considered as state of the art and may therefore be novelty-destroying if a subsequent patent application regarding the same or a related matter is to be filed later on. Most patent specifications also contain an abstract, in which the aim of the patent is summarized. In order to get a quick, first overview of the aim and the scope of protection


Figure 3.35: Search report.

of the patent, it is advisable to first read the main claim (the independent claim) and the abstract. If the search report has not been published separately, the patent specification can also contain it (Figure 3.35). In the search report, all documents found during the patent office’s search are listed and classified into categories. There is also a letter code for these categories [345]. For example, the classification into X or Y directly questions novelty or inventive step (in category X – the listed document on its own, and in category Y – the listed document in combination with other listed documents), while category A represents the general technical background. The last column lists the claims of the examined document which, in the opinion of the examiner, are affected by the respective document found. The search report is interesting because it can provide a basis for your own research by listing technologically related documents. Of course, the inventor incurs expenses to get the idea into a status ready for filing. This could, for example, be costs for an upstream patent search if this is carried out by a professional service provider. Fees are payable to the patent office, e. g. for the filing, for the examination request and for the examination itself [346]. Most
inventors seek professional help from a patent attorney, for which attorney’s fees are payable accordingly. Once the patent has been granted, maintenance fees must be paid over the entire term of 20 years – otherwise the patent expires automatically. If the patent is also to be registered abroad, there are additional costs. Foreign legal fees and translation costs are then added to the foreign official fees. Due to these facts, the patent costs depend heavily on the number of countries in which the patent is to be registered, and therefore there is no simple answer to the question “How much does a patent cost?”. The cost range (viewed over the entire term) varies from a few thousand euros to several tens of thousands of euros up to six-digit amounts, depending on the number of countries in which the application was made. If several patents are included in the protection of a product, a process or a device, these amounts can be multiplied again, and the patent costs can sum up to a significant part of the entire research and development budget. Therefore, the question of how many patents are registered in which countries should be the subject of careful consideration as part of a patent strategy. First of all – at least in the case of commercial applicants – it should be considered whether a patent has to be registered at all or whether, for example, the manufacturing process for a product should be protected as a trade secret, i. e. the process and/or the devices required for it are kept secret by the company using them. The advantage here is that third parties have no insight into the considerations and the development strategy.
If a third party now patents the same process or device (due to the secrecy it was previously unknown to the public and is therefore considered new for the purposes of patent law), the first user has a right of prior use if he can prove that he was already using the process or the device before the patent application was filed, i. e. he may continue to use the invention even though a third party now owns a patent on it. On the other hand, the holder of the prior user right is not entitled to grant licenses for the invention. Another way to avoid patent costs is publication. For example, a patent application can be pursued until the official publication of the patent application and then abandoned. The invention is now state of the art and can no longer be registered by third parties. The disadvantage is that now everyone can use the invention and also develop it further. On the other hand, this approach can be used as part of a patent strategy to prevent competitors from patenting the invention. In most cases it is more advantageous to protect inventions with patents – nevertheless, the procedures mentioned above are useful in special cases and should therefore be mentioned here. A previously mentioned factor that determines patent costs is the number of countries in which the patent is registered and maintained. Of course, these should be all

countries in which the invention is to be marketed or licensed. If a process or device is protected by several patents and there is a core patent and several supplementary patents, it may make sense not to register at least the less important, supplementary patents in all countries. Another factor that significantly affects patent costs is the term of a patent. Fees for maintaining the patent must be paid in each country over the entire term. It should therefore be checked at regular intervals whether it still makes sense to maintain a patent in a certain country – for example, if it is to be expected that a market will not develop as expected or that no more projects will likely be realized there. A valuable instrument for dealing with all of the above-described issues is regular meetings within the framework of a so-called OPC [347] (Operational Patent Committee), which ideally consists of participants from various disciplines such as product management, technology and patents. The OPC can decide, for example, how to proceed with new invention reports, the country portfolio, the maintenance of patents, how to proceed in the examination procedure or how to proceed in the event of patent infringements by others. As part of a cooperation between different companies, a patent portfolio meeting can be held at regular intervals, at which decisions are made on how to proceed with jointly held patents with regard to the above issues.

3.8.5 Patent research and patent monitoring

Patent research and patent monitoring are closely related and important for all aspects of patent work. First of all – as mentioned above – before applying for a patent, the state of the art should be determined in order to make sure that there are no documents that are detrimental to novelty or that call the inventive step into question. Within the framework of patent monitoring, research is carried out at certain intervals (e. g. monthly) on a specific field of technology and evaluated in order to observe the activities of the competition and, if necessary, to take early action against competitor patents whose potential scope of protection overlaps with that of one’s own patents. Last but not least, patent research plays a crucial role when performing a so-called FTO (Freedom To Operate) analysis. Such an analysis has to be performed when a newly developed process or device is to be used. By an FTO analysis, documents can be identified which may hinder the application of the invention, for example, if such documents establish the dependency of a patent on a third-party patent. A first stage of patent research can be done on one’s own by using the research tools offered by the patent offices [348–353]. The databases of the patent offices are accessible to everyone and free of charge. Most patent office databases offer search tools on a basic level (for example, the name of the inventor or the name of the applicant as input) and on a more advanced level, where different search terms can be combined by Boolean operators (see Glossary)
and the search can be focused on different parts of the text body (e. g. abstract, full text, claims …). With some practice and familiarity, one can produce good results and get a good first overview by searching the databases of the patent offices with the search tools offered. The next step could be the usage of commercial databases, such as Thomson Innovation, PatBase or PATSelect (among others), which combine the information of most patent offices and offer more advanced search tools. Beyond the patent search, the commercial databases also offer advanced data analysis tools which are useful for evaluating development activities in certain technological fields. The search results can be exported in different formats and can also be stored. In order to be able to use the possibilities of these databases optimally, research should ideally be carried out by persons who are familiar with their operation and who regularly carry out research and/or data analysis. Finally, patent research can be performed by professional service providers. If the inventor and/or the potential applicant is not familiar with and experienced in patent research, but quick and reliable results are needed, this can be the best way. By discussion with research specialists, the setup of an efficient search strategy is much easier and more effective than it would be when done on one’s own. The basis of an advanced patent search is the setup of a search key. The search key consists of a number of keywords that are linked as cleverly as possible using Boolean operators. When determining the key terms, the inventor or a specialist for the relevant technology should definitely be involved, as there are special jargon expressions for many technical terms. An example is the naming of chemical compounds by trivial names in patent documents.
A source which should additionally be used when generating a list of keywords is non-patent literature (textbooks, scientific papers). In the case of chemical processes, the list of keywords should be further supported by process flowsheets and a meaningful process description. Based on these materials, the researcher can now construct a search key. Before the actual search begins, the search key can be reviewed with the technical specialists. A good search key should also include a search related to the IPC classes of the process or device in question. The evaluation of the search results should be done both by the specialist who is familiar with the technical background and by a person who is involved in patent work and is particularly familiar with the language of patent documents. Of course, it is also possible for the technical specialist to first make a (broader) preselection of potentially relevant documents, which is then further narrowed down in discussion with a patent specialist. If the search result was obtained as part of a regularly conducted search for patent monitoring, the procedure with regard to potentially relevant documents can be determined within the framework of a suitable committee (e. g. OPC, see above).
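As a schematic illustration of how such a search key might be assembled, a few lines suffice; the syntax below is generic and not tied to the query language of any particular database, and all keywords are hypothetical examples for a distillation-related process:

```python
# Sketch of a Boolean search key: synonyms within one concept group are
# combined with OR, and the concept groups are linked with AND.
# The keywords below are hypothetical examples, not a recommended key.
def build_search_key(concept_groups):
    """Combine each group of synonyms with OR, then link the groups with AND."""
    groups = ["(" + " OR ".join(f'"{kw}"' for kw in group) + ")"
              for group in concept_groups]
    return " AND ".join(groups)

key = build_search_key([
    ["distillation", "rectification", "fractionation"],  # unit operation
    ["dividing wall", "partition"],                      # apparatus feature
    ["energy saving", "heat integration"],               # claimed benefit
])
print(key)
```

In practice, such a key would be refined iteratively together with the technical specialist and supplemented by an IPC-class restriction.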

As soon as one or more patents are to be used to market a new process or a new type of device, an FTO analysis should always be carried out to rule out, for example, that one of the patents used turns out to be dependent or that potentially relevant documents appear which were overlooked in the course of previous research. The research should especially be focused on the countries in which the new process, product or device shall be marketed. The results list should ideally be reviewed by a team of at least two people (four-eyes principle) to reduce the probability of documents being overlooked. Furthermore, in the event of a dispute, it is important that the attacked party can prove through a documented FTO analysis that they have fulfilled their duty of care. In the course of evaluation, the features of the first claim (independent claim) of every document found by the search have to be checked against the features of the patent(s) to be applied or the features of the process, device or product to be newly implemented. Special attention has to be paid to product-by-process claims which claim a product manufactured by a certain process (usually defined by previous claims), since in some cases it may be difficult to distinguish between the same product manufactured using different processes; such cases then need special consideration. It is good practice to organize the documents into categories like “not relevant”, “needs further review” or “potentially relevant”. Once the documents have been categorized, their number can be further narrowed down by discussion between the assessors. The legal status of the documents in the countries concerned should now be examined, since the search results do not reveal whether and in which countries a patent is still in force or whether it may already have been abandoned. The remaining term of a patent is also important.
It may be the case that a patent is relevant in terms of content, but is nevertheless not relevant for the FTO analysis because it is either no longer in force or is at the end of its term. At the latest when all the steps described above have been completed and a short list of potentially relevant documents has been created, the help of a patent attorney should be sought. This can be a member of the IP department of the applying company or an external patent law firm. The patent attorney prepares an expert opinion that covers all documents classified as potentially relevant. The further procedure regarding the individual documents is then also discussed with the patent attorney.

3.8.6 Inventor’s bonus

Of course, if a person makes an invention on a private basis and at his own expense, he is entitled to all economic benefits arising from the invention. Today, however, most inventions are made in the context of employment. Whether and how the employee
inventor participates in the economic success of the invention varies from country to country. The following section is based on the conditions in Germany. The employer is entitled to all inventions that the inventor makes as part of his employment. If the subject matter of the invention falls completely outside the scope of the employer’s field of work or if the employer has no interest in the invention for other reasons, he can release it to the inventor. The inventor can then pursue the invention at his own expense. If the employer pursues the invention further and generates sales with the patent, the employee inventor must share in the economic success in an appropriate manner. The inventor’s bonus is calculated as a fraction of the invention value; this fraction is the proportional factor. The invention value can be determined in different ways, e. g. from a license fee that the employer receives or from the turnover achieved by means of the invention. The proportional factor itself is determined taking into account several boundary conditions:
– Did the employer set a task that aimed at the invention, or did the employee make the invention on his own initiative and outside his area of work?
– Is the employee expected to make inventions due to his position in the company (e. g. head of a development department), or is the employee in a work environment where inventions are not normally expected (e. g. unskilled worker)?
– Did the employee make the invention on the basis of his own professional knowledge and skills, or was he supported by the employer, e. g. by providing resources?
Between the two possibilities for each point there are several gradations. The lower the position of the employee in the company and the further the making of the invention lies outside his work context, the higher the proportional factor.
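The arithmetic described above can be sketched in a few lines; all numbers below are illustrative assumptions, not statutory values, and the actual license rate and proportional factor are determined case by case:

```python
# Illustrative calculation of an inventor's bonus (German practice):
# invention value from a notional license fee on the attributable turnover,
# bonus as a fraction (proportional factor) of the invention value.
# All numbers are assumed for illustration only.
turnover = 2_000_000.0   # EUR per year attributable to the invention (assumed)
license_rate = 0.03      # 3 % notional license fee (assumed)
invention_value = turnover * license_rate     # 60,000 EUR

share_factor = 0.15      # proportional factor from the three criteria (assumed)
bonus = invention_value * share_factor
print(f"Invention value: {invention_value:.0f} EUR, bonus: {bonus:.0f} EUR")
```

A higher share factor would result, for instance, for an unskilled worker inventing outside his work context; a lower one for the head of a development department working on an assigned task.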

4 Heat exchange

Most of the heat exchangers do not work because of, but in spite of our design … (Hans Haverkamp)

Heating and cooling of streams are essential unit operations in any process. They are usually carried out in so-called heat exchangers, and their design is one of the main tasks of process engineering. There are a lot of different aspects that have to be taken into account, and both process and construction engineers must give their input to achieve a good solution. The Second Law of Thermodynamics states that temperature differences between two systems will be balanced by a heat flux in the direction of decreasing temperature. The interesting question is how fast this process is. In principle, there are three mechanisms: heat conduction, convection and radiation [303]. Heat conduction is the dominating mechanism in solids. It is determined by the thermal conductivity as a thermophysical property. For liquids and gases, heat conduction occurs together with convection, i. e. movement of the fluids. A distinction is made between natural and forced convection. The reason for natural convection is that there are density differences in fluids; hot fluids have a tendency to ascend, and vice versa. Forced convection takes place because of external energy input, e. g. by a pump, an agitator or a fan. As the theory is not well founded, the calculation equations used are based on the similitude theory, which attempts to transfer experimental results from the laboratory to a practical scale. If heat is transported through a phase boundary, e. g. from a fluid to a solid, the process is called heat transfer. If the transport proceeds from a fluid through a solid wall to another fluid, it is called heat transition. Finally, heat can be transferred between two surfaces of different temperature by radiation. At high temperatures, this is the dominating mechanism. In contrast to heat conduction and convection, radiation does not need a substance; it is transferred through empty space.
Although the physics is well settled, the heat flux by radiation can be predicted only if the properties of the surfaces are known. In contrast to academia, the fundamentals of heat transfer play a minor role in industrial applications; they are only briefly introduced here. The necessary relationships are already integrated in commercial heat exchanger programs like HTRI or ASPEN Heat Exchanger Design & Rating. Instead, the focus is on the reasonable use of the particular options for the design of the apparatus. It is attempted to give a good explanation of these options in the following chapter; for the fundamentals, there are a lot of other textbooks available, e. g. Baehr/Stephan [71] or the VDI Heat Atlas [72]. There are two main types of apparatuses for heat exchange, the shell-and-tube type and the plate heat exchangers. Other types, e. g. spiral heat exchangers, double pipes etc., are not or only briefly considered in this book. Much information about them is given in [72]. The shell-and-tube type is still the most widely applied one because of its robustness
and flexibility. It will be discussed in detail, as its specification is a standard task of a process engineer. Plate heat exchangers are outlined more briefly; usually, a vendor specialist is necessary to obtain an optimum design.

4.1 Thermal conduction

Heat conduction can be illustrated by collisions of atoms or molecules with each other, whereby energy is transported [303]. In solids, this is the governing mechanism for heat transfer. For liquids and gases, heat conduction is usually superimposed by convection; only in narrow gaps or in laminar boundary layers does it occur in pure form. In the one-dimensional case, the heat flux by heat conduction is represented by the Fourier approach

Q̇ = −λF (∂T/∂x)    (4.1)

with F as cross-flow area and x as coordinate direction. The sign “−” means that the heat flux direction points to decreasing temperatures. The proportionality factor λ is the thermal conductivity with the following properties:
– For gases, collisions between the molecules are the reason for heat conduction. The order of magnitude is λgas = 0.01 … 0.03 W/Km. Exceptions are hydrogen (λH2 ≈ 0.2 W/Km) and helium (λHe ≈ 0.16 W/Km). λgas increases more or less linearly with temperature. The influence of pressure is small; however, for high pressures p > 50 … 100 bar a correction factor should be introduced [11]. At very low pressures, λgas is proportional to the pressure.
– For liquids, the order of magnitude is λliq = 0.1 … 0.2 W/Km. The exception is water (Figure 4.1, λH2O = 0.55 … 0.69 W/Km). Similarly high values occur in electrolyte solutions and polyethers. λliq decreases almost linearly with temperature; for water, there is a maximum. The influence of the pressure can usually be neglected.
– For nonconductive solids, the mechanism of heat conduction is that atoms or molecules oscillate around their lattice sites and hit their respective neighbours. The order of magnitude of the thermal conductivity is λ = 0.1 … 2 W/Km. For porous solids, the thermal conductivity decreases with decreasing solids content, which is the principle of many insulating materials. An important requirement is the protection against mechanical damage and penetration of moisture.
– For metals, the thermal conductivity is mainly caused by the movement of electrons, which exceeds the contribution of the lattice oscillation by far. Therefore, good electric conductors are also excellent heat conductors. Examples are copper (λCu = 390 W/Km) and silver (λAg = 420 W/Km). Heat exchangers are usually made of steel. There are cases where it is important whether the material is carbon steel (λCS = 50 W/Km) or stainless steel (λSS = 15 W/Km).
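The practical effect of the material choice can be quantified with Equation (4.1): for a plane wall of thickness d, the heat flux in finite form is Q̇ = λF ΔT/d. A minimal sketch with assumed wall dimensions:

```python
# Heat flux by conduction through a plane wall, Eq. (4.1) in finite form:
# Qdot = lambda * F * dT / d. All dimensions are assumed for illustration.
def qdot_plane_wall(lam, area, d_t, thickness):
    """Conductive heat flux in W for a plane wall."""
    return lam * area * d_t / thickness

F = 1.0      # m2, heat transfer area (assumed)
dT = 10.0    # K, temperature difference across the wall (assumed)
d = 0.002    # m, wall thickness (assumed)

q_cs = qdot_plane_wall(50.0, F, dT, d)   # carbon steel, lambda = 50 W/Km
q_ss = qdot_plane_wall(15.0, F, dT, d)   # stainless steel, lambda = 15 W/Km
print(q_cs, q_ss)   # carbon steel conducts more than three times as much
```

For thin tube walls, the wall resistance is often small compared to the fluid-side resistances, which is why the material choice only matters in some cases.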


Figure 4.1: Thermal conductivity of water and toluene.

It is often convenient to make use of the fact that heat conduction has an analogy to electric conduction, with the driving temperature difference ΔT corresponding to the voltage U and the heat flux Q̇ to the electric current I. Ohm’s law with the electric resistance R,

I = U/R    (4.2)

can be transferred to heat conduction:

Q̇ = ΔT/Rλ    (4.3)

For the thermal resistances Rλ, the same calculation rules for parallel and series connection apply. Likewise, analogously to the voltage drop across a resistance, the temperature drop across a thermal resistance can be used to evaluate the temperature profile in heat conduction processes. The general calculation rule for the thermal resistance is

Rλ = ∫ dx / (λ F(x))    (4.4)

The most often used applications are
– plane wall, thickness d:

Rλ = d / (λF)    (4.5)

– hollow cylinder with radial heat conduction, length L, diameter d:

Rλ = ln(d_outer/d_inner) / (2πλL)    (4.6)

– spherical shell with radial heat conduction, radius r:

Rλ = (1/r_inner − 1/r_outer) / (4πλ)    (4.7)

More detailed information can be found in [71, 72].
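The resistance analogy can be sketched numerically, for instance for a tube wall (Eq. (4.6)) in series with an assumed fouling layer on the inside; all dimensions and conductivities below are illustrative:

```python
import math

# Thermal resistance of a hollow cylinder, Eq. (4.6), and series connection
# of resistances in analogy to Ohm's law, Eq. (4.3).
# All dimensions and conductivities are assumed for illustration.
def r_cylinder(lam, length, d_outer, d_inner):
    """Radial conduction resistance of a hollow cylinder in K/W."""
    return math.log(d_outer / d_inner) / (2.0 * math.pi * lam * length)

L = 1.0                                       # m, tube length (assumed)
r_wall = r_cylinder(15.0, L, 0.025, 0.021)    # stainless steel tube wall
r_foul = r_cylinder(1.0, L, 0.021, 0.020)     # assumed fouling layer inside

r_total = r_wall + r_foul                     # series connection
q = 10.0 / r_total                            # W, heat flux for dT = 10 K
print(r_wall, r_foul, q)
```

Note that the thin fouling layer dominates the total resistance here because of its low conductivity, which mirrors the temperature-drop argument made above.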

4.2 Convective heat transfer

The heat transfer from a solid wall to a fluid is not only determined by heat conduction. It also depends on the flows in the fluid, which can move energy connected to the fluid (enthalpy) from an area close to a wall to other regions. Both phenomena acting together are called convective heat transfer. The flow through a tube shall illustrate the decisive topics [303]. As long as there is laminar flow, there is a distinctive velocity profile (Figure 4.2). There is hardly any momentum exchange perpendicular to the flow direction. The heat transfer is dominated by conduction. At the tube wall, the velocity is zero (no-slip condition). At a certain velocity, a sudden transition to turbulent flow occurs. For turbulent flow, there is a strong random movement in all directions, resulting in intensive mixing. An energy transport perpendicular to the flow direction is achieved. The velocity profile is much more even compared to laminar flow (Figure 4.2). However, at the wall the no-slip condition is still valid. There is a boundary layer, where the velocity increases from w = 0 to almost the maximum velocity at the tube axis.

Figure 4.2: Velocity profiles for laminar and turbulent flow in a tube [170].

Close to the wall, this boundary layer is laminar, which has a significant impact on the heat transfer. The Fourier equation

Q̇ = −λF (∂T/∂x)_wall    (4.8)

is valid, but the temperature gradient at the wall is unknown. Therefore, the approach

Q̇ = αF (T_wall − T_fluid)    (4.9)

is easier to handle, where α is the heat transfer coefficient and T_fluid is the temperature of the unaffected fluid far away from the wall. A comparison between Equations (4.8) and (4.9) gives

α = −λ (∂T/∂x)_wall / (T_wall − T_fluid)    (4.10)

and demonstrates that the problem has just been renamed. For the determination of α, a system of equations can be set up, consisting of the momentum and energy balances and the description of the energy transport [71]. However, this system of equations can be solved only in a few simple cases. Instead, the similitude theory is applied to get ready-to-use equations. Its principle is that certain quantities can be combined to form characteristic dimensionless numbers. The most important ones are listed in Table 4.1.

Table 4.1: The most important characteristic dimensionless numbers in heat transfer calculation.

Re = wdρ/η                                Reynolds number    characterizes the flow
Pr = ν/a = ηcp/λ                          Prandtl number     summarizes the physical properties
Gr = (gl³/ν²) (ρ_fluid − ρ_wall)/ρ_fluid  Grashof number     characterizes the natural convection
Nu = αl/λ                                 Nußelt number      quantifies the heat transfer

The particular symbols denote
ν – kinematic viscosity [m²/s]
w – flow velocity [m/s]
l – characteristic length [m]
a – thermal diffusivity [m²/s]
g – acceleration of gravity [m/s²]
ρ – density [kg/m³]

The characteristic length describes the geometry of the particular arrangement, e. g. the length in flow direction for the flow across a horizontal plane or the inner diameter of a tube.


As results, one gets relationships between the dimensionless numbers, e. g. Nu = f(Re, Pr) for forced flow or Nu = f(Gr, Pr) for natural convection. Examples are [72]:
– Flow along a horizontal flat plate:

  Nu = 0.664 Pr^(1/3) Re^(1/2)    (4.11)

  Characteristic length: length of the plate
– Turbulent flow through a tube (Re > 2300):

  Nu = [(ξ/8) Re Pr / (1 + 12.7 (ξ/8)^(1/2) (Pr^(2/3) − 1))] ⋅ (1 + (Di/L)^(2/3))    (4.12)

  with

  ξ = (1.8 lg Re − 1.5)^(−2)    (4.13)

  Characteristic length: inner diameter Di
– Laminar flow through a tube (Re < 2300):

  Nu = 0.664 Pr^(1/3) (Re Di/L)^(1/2)    (4.14)

  Characteristic length: inner diameter Di
  A more comprehensive equation which covers all cases is much more complicated [72].
– Natural convection at vertical surfaces:

  Nu = [0.825 + 0.387 (Pr Gr f1(Pr))^(1/6)]²    (4.15)

  with

  f1(Pr) = [1 + (0.492/Pr)^(9/16)]^(−16/9)    (4.16)

  Characteristic length: height h of the surface
  For a vertical cylinder with diameter D, the correction

  Nucyl = Nu + 0.97 h/D    (4.17)

  can be applied.
– Natural convection around horizontal cylinders:

  Nu = [0.6 + 0.387 (Pr Gr f3(Pr))^(1/6)]²    (4.18)

  with

  f3(Pr) = [1 + (0.559/Pr)^(9/16)]^(−16/9)    (4.19)

  Characteristic length: diameter D
– Natural convection around spheres:

  Nu = 0.56 (Pr² Gr / (0.846 + Pr))^(0.25) + 2    (4.20)

Characteristic length: diameter D

Usually, the result for α is an average value for the whole arrangement; however, local values are in use as well. In the VDI Heat Atlas [72], there is a useful compilation of a large number of relationships for the Nußelt number and their range of applicability. Moreover, for rough estimates typical values of α are given for a large number of arrangements.

When a vapor stream is cooled down and the surface temperature of the heat transfer area is below the dew point temperature, condensation occurs. The phase change has a large influence on the heat transfer due to the changes of the flows close to the wall and due to the release of the heat of condensation. One distinguishes between film and dropwise condensation. Film condensation takes place if the heat transfer area can easily be wetted by the condensate. The condensate runs down as a continuous film. The thickness of the film increases in the downward direction, as any condensate formed above a certain point must pass the cross-flow area at this point. At the beginning, the film is laminar, but after reaching a certain thickness it changes to the turbulent regime. The calculation of film condensation according to Nußelt presumes that the heat transfer is determined by heat conduction in the film [71, 72]. The formation of waves, which supports the heat transfer, the influence of the vapor flow, superheating of the vapor, and subcooling of the condensate should also be considered. The order of magnitude of the heat transfer coefficient is α = 4000…7000 W/(m² K). When multicomponent mixtures are considered, it is often not justified to assume that heat conduction in the film is the dominating mechanism. Especially when large amounts of inert gases are present, the mass transfer in the vapor phase towards the condensate film and its interaction with the heat transfer (Ackermann correction [78]) must be considered. Details can again be found in [71, 79].
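For orientation, the single-phase correlations can be evaluated in a few lines. The following sketch applies Equations (4.12) and (4.13) together with the definitions from Table 4.1; the property values (water-like liquid) and the function name nusselt_turbulent_tube are our assumptions, not taken from the book:

```python
from math import log10, sqrt

def nusselt_turbulent_tube(re, pr, d_i, length):
    """Nu for turbulent tube flow, Equations (4.12) and (4.13)."""
    xi = (1.8 * log10(re) - 1.5) ** -2                     # friction factor, Eq. (4.13)
    nu = ((xi / 8) * re * pr
          / (1 + 12.7 * sqrt(xi / 8) * (pr ** (2 / 3) - 1))
          * (1 + (d_i / length) ** (2 / 3)))               # Eq. (4.12)
    return nu

# Assumed, water-like conditions (illustrative only):
w, d_i, length = 1.0, 0.025, 3.0                 # velocity [m/s], inner diameter [m], tube length [m]
rho, eta, lam, cp = 998.0, 1.0e-3, 0.6, 4180.0   # density, viscosity, conductivity, heat capacity

re = w * d_i * rho / eta     # Reynolds number (Table 4.1)
pr = eta * cp / lam          # Prandtl number (Table 4.1)
nu = nusselt_turbulent_tube(re, pr, d_i, length)
alpha = nu * lam / d_i       # heat transfer coefficient from Nu = alpha*l/lambda
print(f"Re = {re:.0f}, Pr = {pr:.2f}, Nu = {nu:.1f}, alpha = {alpha:.0f} W/(m2 K)")
```

With these numbers, Re is well above 2300, so the turbulent correlation applies; the resulting α lands in the few-thousand W/(m² K) range typical for water in tubes.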
Dropwise condensation takes place if the heat transfer area is not or only partially wetted. The reason is a high interfacial tension between fluid and surface. Examples are mercury on glass or water on fatty surfaces. This kind of heat transfer is very effective, as the vapor has direct contact with the heat transfer area and boundary layers cannot form easily. In spite of the huge heat transfer coefficients of α = 12000…50000 W/(m² K), the technical importance of dropwise condensation is low. The wetting properties of surfaces change during operation. For instance, fatty layers are rapidly removed when water is condensed. As well, surface treatments are only temporarily effective. Moreover, the α values are high but difficult to predict, as they depend not only on the process conditions in a complex way but also on properties which are difficult to quantify, i. e. the interfacial tensions, the roughness of the wall, and kind and amount of impurities of the vapor. For the design of condensers, it is therefore always assumed that film condensation occurs to be on the safe side.

Like condensation, evaporation is also connected to a phase change. Although it is one of the oldest processes, it is theoretically not well understood. Among others, one of the reasons is that the surface properties of the heat transfer area are important but can hardly be described by significant parameters. The heat must be transported from the heating surface to the phase boundary between liquid and vapor. In this context, the formation of bubbles plays a key role. Bubbles cannot form inside a liquid. The balance of forces requires an overpressure of Δp ∼ σ/R in a bubble to maintain its existence, where σ and R are the surface tension and the radius of the bubble, respectively. For a forming bubble with R → 0 the Δp would be infinite. Therefore, the nuclei of the bubbles form at the heating surface at defined places where the roughness of the surface is larger. Then, with a radius R > 0, a finite overpressure is sufficient to form and grow a bubble. Because of the density difference between liquid and vapor, the bubbles tear off and rise upward. The number of nuclei increases with increasing difference between heating surface and boiling point temperature. Figure 4.3 shows the qualitative dependence of heat transfer coefficient and heat flux as a function of the heating surface temperature.
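The overpressure argument can be quantified with a short sketch. The text gives the proportionality Δp ∼ σ/R; the factor 2 (mechanical equilibrium of a spherical bubble) and the approximate surface tension of water near 100 °C are our assumptions:

```python
# Overpressure needed to sustain a vapor bubble of radius R:
# Delta p = 2*sigma/R (assumed prefactor 2; the text states Delta p ~ sigma/R).
sigma = 0.059  # N/m, surface tension of water near 100 °C (approximate)

for radius in (1e-3, 1e-5, 1e-7):   # bubble radius in m
    dp = 2 * sigma / radius         # required overpressure in Pa
    print(f"R = {radius:.0e} m -> dp = {dp:.3g} Pa")
```

The required overpressure grows without bound as R → 0, which is exactly why nucleation starts at rough spots of the heating surface, where cavities provide a finite starting radius.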

Figure 4.3: Relationship between heat transfer coefficient and heat flux as a function of the heating surface temperature.

At low driving temperature differences (e. g. for water: 5…7 K) only few bubbles are formed. The heat transfer is mainly caused by natural convection in the liquid. With increasing temperature difference, the number of nuclei rises. The bubbles act like an agitator, and α rises rapidly (nucleate boiling). At a critical temperature difference (water: ∼ 30 K), bubble formation and growth become so intensive that a continuous vapor film is formed at the heating surface. This vapor film acts like a thermal insulation. Repeatedly, the heat flux is lowered, and the vapor film breaks down but is formed again by new bubbles (unstable film boiling). If the wall temperature is high enough (Leidenfrost temperature), the heat can even be transported through the insulating vapor film, and it does not break down any more (stable film boiling). This behavior is important if not the temperature difference but the heat flux is fixed, e. g. for electrical heating. If the critical heat flux is exceeded, the wall temperature jumps up (arrow), and this can destroy the heating surface. In technical arrangements, there is usually a defined flow direction. Because of the increase of the vapor fraction in flow direction, there are zones with different heat transfer mechanisms. They are illustrated in Figure 4.4.

Figure 4.4: Different types of flow for evaporation in vertical tubes.

The liquid usually enters the tube in a subcooled state. If bubbles form at the heating surface, they collapse immediately due to the subcooling. After reaching the boiling temperature, the bubbles can persist (bubble flow). Finally, they merge and form a plug (plug flow). These plugs can also join (churn flow). At the end, there is a liquid film at the wall and vapor with droplets in the center of the tube (annular flow). Additional heat input makes the liquid film vanish (mist flow), before the droplets evaporate as well (superheated vapor). The particular zones show a different heat transfer behavior, which causes great complexity for a predictive calculation. It is not necessary that all these zones actually occur.

4.3 Heat transition

Heat exchangers transfer heat from one fluid to another; the two do not come into contact because of a separating wall. The design of a heat exchanger is a classical heat transition problem, meaning that the heat has to be transferred from one of the fluids to the separating wall, then conducted through the wall, and finally transferred from the wall to the second fluid. Fouling layers can make this heat transfer more difficult; they can be considered in the calculation analogously to the separating wall (Figure 4.5).

Figure 4.5: Steps in the heat transition process.

As introduced in Chapter 4.1, there is a strong analogy to electrical engineering, where electric current and voltage are linked by the resistance in Ohm’s law. As heat transition comprises a series of thermal resistances, the concept must be extended to convective heat transfer. For heat transfer from a fluid to a wall, the standard approach is

Q̇ = αAΔT    (4.21)

Comparing Equations (4.3) and (4.21) and considering that Q̇ and I as well as ΔT and U have the same meaning, a thermal resistance can be defined as

Rα = 1/(αA) = ΔT/Q̇    (4.22)

The heat transition can then be characterized by a series of thermal resistances:

Rk = 1/(kA) = 1/(α1 A1) + Rfouling,1 + Rwall + Rfouling,2 + 1/(α2 A2)    (4.23)

with the overall resistance Rk, defined as 1/(kA) to give it the same structural meaning as α.¹ The heat transfer areas A1 and A2 can differ, e. g. the inner and the outer surface of a tube. The A in the term “kA” is arbitrary; it does not matter to which area it refers unless k itself is evaluated. The thermal resistance of the separating wall Rwall can be calculated according to the rules of heat conduction [71]. The fouling resistances are usually set to empirical values (Table 4.3). The magnitude of these resistances gives a clear guideline on how to improve the performance of a heat exchanger. Good heat exchanger calculation programs indicate the percentages of the particular thermal resistances. The high thermal resistances should be counteracted, as the following simple example shows.

Example
A heat exchanger is calculated to have α1 = 150 W/(m² K) and α2 = 2000 W/(m² K). What improvement is required? The resistances of fouling and separating wall should be neglected. The heat exchange area is 100 m² on both sides.

Solution
The overall resistance can be calculated to be

1/(kA) = 1/(150 W/(m² K) ⋅ 100 m²) + 1/(2000 W/(m² K) ⋅ 100 m²) = 7.17 ⋅ 10⁻⁵ K/W
⇒ k = 139.5 W/(m² K)

Increasing the lower heat transfer coefficient by 10 % gives

1/(kA) = 1/(165 W/(m² K) ⋅ 100 m²) + 1/(2000 W/(m² K) ⋅ 100 m²) = 6.56 ⋅ 10⁻⁵ K/W
⇒ k = 152.4 W/(m² K),

while doubling the higher one gives

1/(kA) = 1/(150 W/(m² K) ⋅ 100 m²) + 1/(4000 W/(m² K) ⋅ 100 m²) = 6.92 ⋅ 10⁻⁵ K/W
⇒ k = 144.6 W/(m² K)

A small increase of the lower heat transfer coefficient is much more efficient than a large change of the higher one.
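The example’s arithmetic can be reproduced with a few lines, evaluating Equation (4.23) with wall and fouling resistances neglected (the helper name overall_k is ours):

```python
def overall_k(alpha1, alpha2, area):
    """Overall heat transition coefficient k in W/(m2 K) for two convective
    resistances in series, Equation (4.23); wall and fouling neglected."""
    r_total = 1 / (alpha1 * area) + 1 / (alpha2 * area)  # K/W
    return 1 / (r_total * area)

area = 100.0  # m2 on both sides
print(overall_k(150, 2000, area))   # base case
print(overall_k(165, 2000, area))   # lower alpha raised by 10 %
print(overall_k(150, 4000, area))   # higher alpha doubled
```

The three printed values reproduce the 139.5, 152.4, and 144.6 W/(m² K) of the example, confirming that the smaller α dominates the series resistance.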

1 Note that in the English and American literature the heat transfer coefficient α is normally denoted as h, and the heat transition coefficient k is normally denoted as U.


Example
A cylindrical carbon steel vessel (outer diameter D = 2 m, height H = 4 m, wall thickness s = 0.02 m) filled with n-decane is heated with steam, which condenses at tshell = 120 °C in an outer shell (sshell = 0.05 m). The content of the vessel is heated up to tstart = 90 °C. Then, the steam is turned off. Estimate the final temperature tend in the vessel. The tank shall be filled by 80 % (Figure 4.6).
– αsteam = 1000 W/(m² K)
– λwall = 50 W/(K m)
– αdecane = 100 W/(m² K)
– csteel = 0.5 J/(g K)
– cp,liq,water = 4.22 J/(g K), average in the range 90…120 °C
– Δhv,water = 2200 J/g at tshell = 120 °C
– cp,liq,decane = 2.51 J/(g K), average in the range 90…120 °C
– ρsteel = 7850 kg/m³
– ρsteam = 1.12 kg/m³ at 120 °C, saturation
– ρdecane = 675.5 kg/m³ at 90 °C
Fouling shall not occur. Head and bottom of the tank shall be neglected.

Figure 4.6: Sketch of the vessel.

Solution
To determine the final temperature, it is assumed that the remaining steam in the outer shell is condensed and subcooled until equilibrium is reached. As well, the wall material will cool down until it has reached the same temperature as the vessel content. The temperature at the beginning of the equilibration can be estimated using Equation (4.23). According to Ohm’s law, the ratios between voltage drops (i. e. temperature drops) and resistances (i. e. thermal resistances) are the same. Thus,

(tshell − twall,in) / (1/(αsteam Fout) + Rλ) = (tshell − tstart) / (1/(αsteam Fout) + Rλ + 1/(αdecane Fin))    (4.24)

With

Fout = πDH = 25.13 m²    (4.25)
Fin = π(D − 2s)H = 24.63 m²    (4.26)
Rλ = ln(D/(D − 2s)) / (2π λwall H) = 1.6077 ⋅ 10⁻⁵ K/W    (4.27)

we get twall,in = 116.37 °C. Analogously, from

(tshell − twall,out) / (1/(αsteam Fout)) = (tshell − tstart) / (1/(αsteam Fout) + Rλ + 1/(αdecane Fin))    (4.28)

we obtain twall,out = 117.42 °C. The average is

twall = (twall,in + twall,out)/2 = 116.89 °C    (4.29)

For the energy balance, the masses of the particular components are evaluated first:

msteam = ρsteam (π/4) [(D + 2sshell)² − D²] H = 1.443 kg    (4.30)
msteel = ρsteel (π/4) [D² − (D − 2s)²] H = 3906.4 kg    (4.31)
mdecane = ρdecane (π/4) (D − 2s)² H ⋅ 0.8 = 8152.4 kg    (4.32)

Subsequently, the energy balance can be set up as

mdecane cp,liq,decane (tend − tstart) = msteam ⋅ (Δhv,water + cp,liq,water (tshell − tend)) + msteel csteel (twall − tend)    (4.33)

The final result is tend = 92.49 °C. The dominating term is the heat capacity of the steel; the contribution of the steam is one order of magnitude lower. Note that steel and steam have almost the same temperature at the beginning.
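Since Equation (4.33) is linear in tend, it can be solved directly. A sketch using the mass and property values of the example (the variable names are ours):

```python
# Energy balance of Equation (4.33), solved for t_end.
# Masses and properties as given in the example above.
m_decane, cp_decane = 8152.4, 2.51        # kg, J/(g K)
m_steam, dh_v, cp_water = 1.443, 2200.0, 4.22
m_steel, c_steel = 3906.4, 0.5
t_start, t_shell, t_wall = 90.0, 120.0, 116.89  # °C

# m_decane*cp_decane*(t - t_start) = m_steam*(dh_v + cp_water*(t_shell - t))
#                                    + m_steel*c_steel*(t_wall - t)
# Collect the terms linear in t into a*t = b:
a = m_decane * cp_decane + m_steam * cp_water + m_steel * c_steel
b = (m_decane * cp_decane * t_start
     + m_steam * (dh_v + cp_water * t_shell)
     + m_steel * c_steel * t_wall)
t_end = b / a
print(f"t_end = {t_end:.2f} °C")   # close to the 92.49 °C of the example
```

Varying the individual terms quickly confirms the statement in the text: the steel heat capacity term dominates, while the steam contribution is an order of magnitude smaller.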

4.4 Shell-and-tube heat exchangers

Shell-and-tube heat exchangers are the most common heat exchanger type in the process industry. One of their advantages is a large ratio of heat transfer area to volume and weight, respectively. They are available in a wide range of sizes, cleaning is at least possible, and wear parts like gaskets can easily be replaced. Shell-and-tube heat exchangers are composed of a shell, which is in principle a pressure vessel, and a tube bundle inside (Figure 4.7). The two fluids which are supposed to exchange heat are on different sides of the tubes: one inside the tubes and one outside the tubes on the shell side. Often, only one of the streams involved is a process stream, whereas the other one (e. g. steam for heating, cooling water) is a utility (Chapter 13). On the other hand, it is desirable to reduce the consumption of utilities and to cover the necessary heating or cooling duty from the process itself so that both streams are process streams. This heat integration can save


Figure 4.7: Transparent shell-and-tube heat exchanger. Test fluid with red dye. Courtesy of Heat Transfer Research, Inc.

operation costs significantly; the Pinch method for the optimization has been described in Chapter 3.3. The heat exchangers can be classified with respect to the phase behavior of the streams; they can maintain their phases (gas-gas or liquid-liquid exchangers), or the product stream can condense (condenser) or evaporate (evaporator). In commercial heat exchanger design programs (e. g. HTRI, HTFS), the principle of the thermal calculation of a shell-and-tube heat exchanger is the so-called cell method [72], which is illustrated in Figure 4.8.

Figure 4.8: Cell method for the thermal calculation of a shell-and-tube heat exchanger. © Springer-Verlag GmbH.

The procedure is illustrated in the next chapter using a simple liquid-liquid heat exchanger.
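To give a flavor of the cell method, the following simplified sketch divides a cocurrent liquid-liquid exchanger into cells and marches an energy balance from cell to cell with the local temperature difference as driving force. All numbers and names are our assumptions for illustration; commercial programs use far more elaborate cell models:

```python
# Simplified cell calculation: a cocurrent exchanger split into n_cells,
# each transferring Q = k * A_cell * (t_hot - t_cold) locally.
def cocurrent_cells(k, area, n_cells, w_hot, w_cold, t_hot_in, t_cold_in):
    """k in W/(m2 K), area in m2, w_* are heat capacity flow rates m*cp in W/K."""
    a_cell = area / n_cells
    t_hot, t_cold = t_hot_in, t_cold_in
    for _ in range(n_cells):
        q = k * a_cell * (t_hot - t_cold)   # W, duty of this cell
        t_hot -= q / w_hot                  # hot stream cools down
        t_cold += q / w_cold                # cold stream heats up
    return t_hot, t_cold

t_hot_out, t_cold_out = cocurrent_cells(
    k=500.0, area=20.0, n_cells=50,
    w_hot=5000.0, w_cold=4000.0, t_hot_in=90.0, t_cold_in=20.0)
print(round(t_hot_out, 1), round(t_cold_out, 1))
```

By construction, each cell conserves energy exactly; refining n_cells makes the marching solution approach the analytical cocurrent profile.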

4.5 Heat exchangers without phase change

Each cell must be assigned the actual state of the stream and its associated physical properties, the geometry of the cell, and the appropriate relationships which describe the heat transfer. In contrast to process simulation programs, commercial heat exchanger design programs do not support physical property models or their

p = … bar
Liquid Properties

t (°C)   | h (J/g)  | Vapor fraction (weight) | ρ (kg/m³) | η (mPa s) | λ (W/(K m)) | cp (J/(g K)) | tc pseudo (°C) | pc pseudo (bar) | M (g/mol)
100.000  | −10 767  | 0 | 882.724 | 0.2953 | 0.2018 | 2.985 | 337.49 | 27.39 | 166.96
94.137   | −10 784  | 0 | 888.618 | 0.3138 | 0.2034 | 2.967 | 337.49 | 27.39 | 166.96
88.240   | −10 802  | 0 | 894.406 | 0.3345 | 0.2049 | 2.950 | 337.49 | 27.39 | 166.96
82.310   | −10 819  | 0 | 900.089 | 0.3578 | 0.2054 | 2.934 | 337.49 | 27.39 | 166.96
76.348   | −10 837  | 0 | 905.669 | 0.3842 | 0.2079 | 2.919 | 337.49 | 27.39 | 166.96
70.356   | −10 854  | 0 | 911.147 | 0.4141 | 0.2084 | 2.904 | 337.49 | 27.39 | 166.96
64.335   | −10 871  | 0 | 916.524 | 0.4482 | 0.2109 | 2.891 | 337.49 | 27.39 | 166.96
58.287   | −10 889  | 0 | 921.802 | 0.4873 | 0.2124 | 2.878 | 337.49 | 27.39 | 166.96
52.214   | −10 906  | 0 | 926.980 | 0.5324 | 0.2139 | 2.867 | 337.49 | 27.39 | 166.96
46.117   | −10 924  | 0 | 932.060 | 0.5847 | 0.2154 | 2.856 | 337.49 | 27.39 | 166.96
40.000   | −10 941  | 0 | 937.041 | 0.6457 | 0.2168 | 2.847 | 337.49 | 27.39 | 166.96

Figure 4.9: Example for a heat curve without phase change. Courtesy of Heat Transfer Research, Inc.

parameters except a number of common heat transfer fluids like steam or water, cooling brines, thermal oils, and some common pure components and ideal mixtures of them. Instead, the physical properties needed are generated before the actual heat transfer calculation takes place. The communication between the physical property model and the heat exchanger design program is achieved with the help of a so-called heat curve. An example for the product side heat curve of a liquid-liquid heat exchanger without phase change is given in Figure 4.9. As can be seen, the particular physical properties, i. e. specific enthalpy, density, dynamic viscosity, thermal conductivity, and molecular weight, are tabulated with respect to temperature. Between these points, the program interpolates. Extrapolations are usually indicated with warning messages. Each heat curve refers to a certain pressure. Normally, several heat curves for different pressures are generated to account for pressure drop effects. For a liquid-liquid exchanger, this is of minor importance. As well, the vapor fraction of the stream is always zero in this case. Alternatively, the enthalpy can be used as an independent variable. The pseudocritical temperatures and pressures are of minor physical significance but required by some correlations. To make a good interpolation behavior possible, the user can vary the step size and take care that the distances between the points are appropriate and that the whole temperature and pressure range occurring in the heat exchanger is covered.

The thermodynamics is completed by the specification of the process in the heat exchanger. Commercial heat exchanger design programs offer three calculation modes, i. e. the rating mode, the simulation mode, and the design mode. They can be distinguished as follows.


– Rating mode: The specified heat exchanger is calculated according to the process. As the main result, it is indicated how much heat exchange area is in excess or missing (overdesign). This is the standard mode for the design of heat exchangers.
– Simulation mode: It is evaluated how the specified heat exchanger would perform with the given input streams, i. e. the actual outlet conditions are calculated.
– Design mode: A heat exchanger design is evaluated which fulfills the requirements of the process (outlet conditions, pressure drop). This is a very tempting approach; the heat exchanger design is achieved by the famous mouse click. However, this mode is time-consuming, and the result is not necessarily satisfactory; it should not be taken as final but as a starting point for further rating mode calculations. Furthermore, the user does not get a feeling for the sensitivities and the potential for further improvement. The design mode is only recommended when the user really has no idea about the design.
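Returning to the heat curve of Figure 4.9: the interpolation between tabulated points can be mimicked in a few lines. A sketch using the viscosity column of that heat curve (the function name viscosity and the hard error on extrapolation are our choices; real programs issue warnings instead):

```python
import numpy as np

# Temperature and dynamic viscosity columns of the heat curve in Figure 4.9,
# reordered so that the temperature grid is ascending.
t = np.array([40.000, 46.117, 52.214, 58.287, 64.335,
              70.356, 76.348, 82.310, 88.240, 94.137, 100.000])   # °C
eta = np.array([0.6457, 0.5847, 0.5324, 0.4873, 0.4482,
                0.4141, 0.3842, 0.3578, 0.3345, 0.3138, 0.2953])  # mPa s

def viscosity(temp_c):
    """Linear interpolation within the heat curve; no extrapolation."""
    if not (t[0] <= temp_c <= t[-1]):
        raise ValueError("outside heat curve range - regenerate the curve")
    return float(np.interp(temp_c, t, eta))

print(viscosity(75.0))   # between the 70.356 and 76.348 °C points
```

This also illustrates why the point spacing matters: for strongly curved properties such as viscosity, too few points make the piecewise-linear interpolation visibly inaccurate.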

For the constructive details of a heat exchanger, the TEMA type (TEMA: Tubular Exchanger Manufacturers Association Inc.) has to be fixed first. The TEMA type determines the general arrangement of the heat exchanger. Figures 4.10, 4.11, and 4.12 explain the TEMA type code. Some popular choices are
– BEM: standard arrangement;
– BEU: U-type heat exchanger;
– BKU: kettle type reboiler (Chapter 4.7);
– AES: floating head, removable tube bundle;
– BJ21T: arrangement for vacuum condensers (Chapter 4.6).

Figure 4.10: TEMA front end stationary head types. A Channel and removable cover; B Bonnet (integral cover); C Channel integral with tubesheet, removable cover, and removable tube bundle; N Channel integral with tubesheet and removable cover; D Special high pressure closure. Courtesy of Mihir Patel


Figure 4.11: TEMA shell types. E One pass shell; F Two pass shell with longitudinal baffle; for maintaining countercurrent flow in temperature-cross situations; G Split flow; for horizontal thermosiphon reboilers; H Double split flow; for horizontal thermosiphon reboilers; J Divided flow; J12: one inlet, two outlets, J21: two inlets, one outlet; K Kettle type reboiler; for reboilers and refrigeration chillers; X Cross flow; for vacuum condensation on the shell side with extremely low pressure drop. Courtesy of Mihir Patel

Figure 4.12: TEMA rear end head types. L Fixed tubesheet like A, stationary head; M Fixed tubesheet like B, stationary head; N Fixed tubesheet like N, stationary head; P Outside packed floating head; S Floating head with backing device; T Pull through floating head; U U-type bundle; W Externally sealed floating tubesheet. Courtesy of Mihir Patel


One of the main reasons to distinguish between all these types is to cope with the problem of thermal stress. In many cases, the shell side will have a significantly different temperature than the tube side, causing different thermal expansion and possible damage like tube bending or loosening of the connections between tube and tubesheet. Fixed tubesheets (Figure 4.12, L, M, N) do not provide any countermeasures against this kind of stress; as a rule of thumb, they should not be chosen if the temperatures of the two sides differ by more than 50 K. Fixed tubesheets are inexpensive, but the outside of the tubes cannot be cleaned, meaning that the shell side service must not be prone to fouling. Floating rear end head types (Figure 4.12, P, S, T, W) provide the possibility for the tubes to give way. However, they can only compensate for differences between tubes and shell; they are not useful when differences between the tubes themselves occur. Furthermore, the clearances between tube bundle and shell are often enlarged, which reduces the heat transfer. Sealing strips (Figure 4.20) can mitigate this disadvantage. Both inside and outside can be cleaned; however, these types are more expensive than the fixed ones. Multi-pass tube arrangement (see below) is still possible for types S and T. In contrast, the U-type configuration, where the tubes are bent in the shape of a U, allows individual expansion of the tubes anyway. There is only one tubesheet. Its drawback is that the inner side of the bend cannot be cleaned. Cleaning generally becomes possible if the covers and/or the tube bundles (Figure 4.10) can be removed. Additional information can be obtained from [73]. If it is essential that the media on shell side and tube side do not get in contact with each other, a double-tubesheet design should be taken into account [294].
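The 50 K rule of thumb quoted above is easy to encode, e. g. as a screening check in a design script (a sketch; the function name is ours):

```python
def fixed_tubesheet_ok(t_shell_c, t_tube_c):
    """Rule of thumb from the text: do not choose fixed tubesheets
    (Figure 4.12, L, M, N) if shell and tube side temperatures differ
    by more than 50 K."""
    return abs(t_shell_c - t_tube_c) <= 50.0

print(fixed_tubesheet_ok(120.0, 90.0))   # 30 K difference: fixed tubesheet acceptable
print(fixed_tubesheet_ok(180.0, 90.0))   # 90 K difference: consider floating head or U-type
```

Such a check only flags the thermal stress criterion; the cleaning and cost arguments discussed above still have to be weighed separately.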
Next, the shell orientation (horizontal, vertical, inclined) must be defined, and it has to be decided which stream is on the shell side and which is on the tube side. This is a strategic decision with often contradictory arguments. Usually, the heat transfer on the shell side is better than in the tubes. Therefore, it is desirable to place the stream with the worse heat transfer on the shell side. On the other hand, the tube side is easier to clean, so that the stream showing more fouling should be placed in the tubes. The latter argument is usually stronger; for example, cooling water as a notoriously dirty fluid is placed in the tubes in almost all cases. A compromise can be found if a removable tube bundle or a U-type exchanger is chosen; in these cases, the shell side can be cleaned as well. Countercurrent flow is the default; cocurrent flow can be specified. Additionally, several identical heat exchangers can be arranged in parallel or in series. Shell diameter and tube length mainly determine the heat transfer area. For a given area, longer tubes result in lower costs. The tube length is often limited to 12.2 m. Several other specification data for the tubes have to be defined. The tubes themselves are specified by their outside diameter (OD) and the wall thickness. A standard value for the tube OD is 1″ (25.4 mm). The most often applied alternatives are ¾″ (19.05 mm) and 1 ½″ (38.1 mm), which make it possible to increase the number of tubes in a given shell or to reduce the velocity inside the tubes, respectively. From the heat transfer point of view, small tube diameters are advantageous, as long as they do not make cleaning uncomfortable. The wall thickness is determined by the mechanical stability. 2 mm is a reasonable value; for high-pressure applications, larger wall thicknesses are probable.


Figure 4.13: Tube pitch patterns. t = pitch. © Springer-Verlag GmbH.

The tube pitch and the tube layout angle define the arrangement of the tubes (Figure 4.13). The pitch is the distance between the centers of adjacent tubes. The smaller the pitch, the more tubes can be put into the shell. Large pitches can be useful to lower the shell velocities for avoiding vibrations. The pitch is often given as a pitch ratio, i. e. the pitch divided by the tube OD. Common pitch ratios are 1.25, 1.33, and 1.5. The tube layout angle defines the pattern of the tubes with respect to the flow direction. The 30° arrangement is the standard one, which provides good turbulence but is difficult to clean. The 60° pattern is useful to avoid vibrations caused by vortex shedding (Chapter 4.12). Besides these triangular patterns, square patterns are used. The 90° arrangement is useful for cleaning purposes but produces low turbulence, while the 45° pattern is a compromise between cleaning requirements and turbulence. The 45° pattern should not be used for gas streams because of possible vibrations. Finally, the number of tube passes can be specified (1, 2, 4, 8 in standard designs). If the tube side velocity is unacceptably low, the tube-side fluid can be led several times through a certain part of the tubes. For this purpose, the tube stream can be divided by partition plates at the front and rear head so that it passes several times through the exchanger (Figure 4.14). The simplest way is the use of a U-type heat exchanger, which automatically has two passes.² As the cross-flow area for the tube stream decreases with the number of passes, the tube velocity and therefore the heat transfer coefficient increase. On the other hand, one direction is in cocurrent flow, and the profile of the temperature difference between hot and cold fluid along the flow path is distorted, often even leading to temperature crosses.
It makes sense to define more tube passes if the main thermal resistance is on the tube side and if the temperature ranges of hot and cold fluid do not overlap. Otherwise, the use of more tube passes can even be a disadvantage. The use of an F shell (Figure 4.11) is possible, but the leakage through the clearance between the longitudinal baffle and the shell makes it often ineffective. The material of the tubes is characterized by standard values (density, thermal conductivity etc.). They can be overwritten if further knowledge is available.

2 For U-type heat exchangers, six passes are usually the maximum; otherwise, the bend radius would become too small.


Figure 4.14: Sketch of a two-pass shell-and-tube heat exchanger. © H Padleckas/Wikimedia Commons/CC BY-SA 3.0. https://creativecommons.org/licenses/by-sa/3.0/deed.de.

Baffles direct the shell flow back and forth across the tubes, which increases the shell-side velocity and the heat transfer coefficient [75]. Furthermore, they support the tubes in their position and prevent vibration of the tubes. Again, there are some options for different types (Figure 4.17). The most common one is the single segmental baffle, which is in principle a circular plate where a segment has been removed. This is defined by the cut (Figure 4.15). A reasonable cut should be in the range 20–35 %. If low-pressure gas flow is involved, the first approach should be 40–45 %.

Figure 4.15: Baffle cut definition. Courtesy of Heat Transfer Research, Inc.


Figure 4.16: Baffle orientations. Courtesy of Heat Transfer Research, Inc.

It can be distinguished whether the cut is perpendicular or parallel to the flow inlet (Figure 4.16). The parallel orientation is preferred for condensing fluids so that the condensate can be collected at the bottom. For liquids, the perpendicular orientation should be preferred, as it mixes existing fluid layers and avoids possible precipitation of solids at the bottom. Single segmental baffles are certainly the cheapest ones because of their easy manufacturing, but they cause a comparably large pressure drop. They are not recommended for viscous fluids [75]. The crossflow heat transfer to the tubes is better than the longitudinal heat transfer. The more baffles are set, the more crossflow is achieved and the more both the pressure drop and the heat transfer coefficient increase. Therefore, the baffle spacing can be varied within certain limits to achieve a satisfactory solution. According to TEMA, the minimum baffle spacing is 20 % of the inner shell diameter. For small heat exchangers it should not go below 2″. The maximum baffle spacing is the inner shell diameter; otherwise, there are large unsupported tube spans, and the cross-flow is not realized. A baffle spacing between 30–60 % of the shell inside diameter is usually a good starting point [75]. Baffle spacing can be varied if vibrations occur; vibrations are less probable with more baffles set, as the tubes get more support. The pressure drop can be significantly reduced if double-segmental baffles are used. As can be seen in Figure 4.17, two kinds of baffles alternate, with one having the cut area in the center (“wing baffle”) and one having two circle segments as cut areas (“center baffle”). From the thermal point of view, they are less effective. Also, the so-called disk-and-donut baffles are in use, with one baffle type in the form of a circular ring and one as a circular disk in the center. They are often used in gas-gas applications to avoid vibrations.
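The baffle spacing rules of thumb quoted above can be bundled into a small helper for first design passes (a sketch; the function name and the millimeter units are our choices):

```python
def baffle_spacing_limits(shell_id_mm):
    """Baffle spacing window from the rules of thumb in the text:
    minimum 20 % of the inner shell diameter (but not below 2 in = 50.8 mm
    for small exchangers), maximum one inner shell diameter;
    30-60 % of the shell inside diameter is a good starting point."""
    b_min = max(0.20 * shell_id_mm, 50.8)
    b_max = shell_id_mm
    start_range = (0.30 * shell_id_mm, 0.60 * shell_id_mm)
    return b_min, b_max, start_range

b_min, b_max, start = baffle_spacing_limits(600.0)  # 600 mm shell as an example
print(b_min, b_max, start)
```

For the 600 mm example shell, this gives a window of 120 mm to 600 mm with 180 to 360 mm as the recommended starting band; within that window, the spacing is then tuned against pressure drop, heat transfer, and vibration.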

Figure 4.17: Common baffle types.

NTIW (“no tubes in window”) baffles (Figure 4.17) are used for mechanical stability reasons. This option ensures that each baffle supports every tube. Tubes with long areas without support are avoided. NTIW is a useful option if vibration problems occur.


Also, the pressure drop is reduced. The disadvantage is that the shell diameter must be increased to obtain the same heat transfer area. A baffle cut of 15 % is most common [75]. With single segmental baffles, most of the overall pressure drop is wasted in changing the direction of flow. Furthermore, dead zones occur with minor fluid movement. A way to overcome these deficiencies is the use of helical baffles [290]. These baffles are shaped like a quadrant. They are placed at a certain angle to the tube axis, which creates a helical flow pattern (Figure 4.18). The helical baffles create a swirl, where the pressure drop is turned into effective heat transfer. The fluid continuously swings from the bundle periphery to the center of the bundle. The flow always has a longitudinal component [289]. The advantages are better heat transfer, a better flow distribution around the tube bundle (see below) and reduced vibration, pressure drop and fouling; the latter also reduces the cleaning frequency and the maintenance costs.

Figure 4.18: Shell-and-tube heat exchanger with helical baffle.

As can be expected, the capital costs are significantly higher. To keep them acceptable, one should consider that the fouling factor on the shell side will be lower than for conventional baffles, giving a more compact design.

Grid baffles are metal lattices that fix the tubes (Figure 4.19). Mainly longitudinal flow is produced instead of cross-flow. Grid baffles protect against tube vibration and produce low pressure drops on the shell side.

To finish the paragraph on baffles, tie-rods and sealing strips should be explained. A small number of tie-rods are placed in the tube bundle instead of normal tubes to gain more mechanical stability. They are not tubes but solid rods and do not take part in the heat exchange. They are located at points around the periphery of the bundle. Sealing strips are rectangular strips placed in the circumferential bypass between bundle and shell. They prevent a leakage flow between bundle and shell (Figure 4.20).

Figure 4.19: Rod-type baffles as an example for grid baffles. Courtesy of TEMA India Ltd.

In the rating mode, the specified heat exchanger design produces an overdesign as a result, i. e. a statement whether the heat exchange area is too small or too large and by how much. A reasonable overdesign is 10–20 %.³ The overdesign must cover the various uncertainties in the calculation, e. g. physical properties, uncertainties of the heat transfer relationships or, if necessary, fouling effects (Chapter 4.11). The design must be varied until the overdesign is in the desired range. However, there are a lot of other items to be checked.
–	Duty comparison: The calculated duty must be equal to the duty reported in the process simulation. This is a quite safe indication of whether the physical properties and the process definition have been defined correctly in the heat exchanger design program.

3 It should be noted that the overdesign in the rating mode refers to the area of the heat exchanger. Often, it is checked whether the overdesign remains slightly positive if the load is increased by 10–20 %. In fact, this is not equivalent, as increased load causes higher velocities and therefore better heat transfer coefficients. This approach is less conservative.
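The area-based overdesign referred to in the footnote is a simple ratio; a minimal sketch (function and argument names are freely chosen here):

```python
def overdesign(installed_area_m2, required_area_m2):
    """Area-based overdesign as reported in rating mode: the margin of the
    installed heat exchange area over the area required for the duty."""
    return installed_area_m2 / required_area_m2 - 1.0

# 115 m² installed vs. 100 m² required -> 15 % overdesign,
# within the reasonable 10-20 % range quoted in the text
od = overdesign(115.0, 100.0)
```

As the footnote points out, this area margin is not the same as checking whether the exchanger still works at a 10–20 % higher load, since higher load also improves the heat transfer coefficients.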

Figure 4.20: Sealing strips. © Springer-Verlag GmbH.



–	Flow fractions: The flow fractions indicate to which extent the fluid on the shell side takes the designated way. Heat exchanger design programs give an estimate of the flow distribution. The percentages of the following fractions are regarded [76]:
	–	A fraction: The A fraction refers to the tube-to-baffle hole leakage stream. It becomes large when the clearances between tubes and baffle holes are large and when the baffle spacing is narrow, especially for single-segmental baffles. At least the A fraction is thermally effective and not lost, as it touches the tube surfaces. Although not desired, the fraction can be added to the B fraction as long as it is below 5 %. The A fraction can be reduced by reducing the clearances between tubes and baffles.
	–	B fraction: The B fraction refers to the main crossflow stream through the bundle, i. e. the desired way. Normally, it is larger than 50 %, preferably more than 60 % of the total flow. If the B fraction is lower, too large clearances and a narrow baffle spacing are probably the reasons.
	–	C fraction: The C fraction is the bundle-to-shell crossflow bypass stream; it flows through the clearance between tube bundle and the inner side of the shell. It should be less than 10 %. Additional sealing strips can decrease this fraction. The C fraction is only partially thermally effective, as it has contact only with the surface of the bundle.
	–	E fraction: The E fraction is the baffle-to-shell leakage stream between the outside of the baffle and the shell. It should be less than 15 %. It is not thermally effective at all. There are hardly any options to manipulate it. Double-segmental baffles are advantageous in comparison with single-segmental baffles. It might also help to increase baffle spacing and baffle cut, or to reduce the baffle-to-shell clearance.
	–	F fraction: The F fraction is the tubepass partition bypass stream; it is the flow between the baffles and the passlanes and occurs only if multiple tubepasses are used. It should be lower than 10 %. Its thermal effectiveness is weak, as it contacts only part of the tube surface. The F fraction can be lowered by additional sealing strips or seal rods.

	Figure 4.21 illustrates the streams discussed above. The flow fractions can be affected by
	–	baffle spacing
	–	baffle cut
	–	tube layout angle
	–	tube pitch
	–	number of sealing strips (for E fractions > 0.15)
	–	clearances

Figure 4.21: Flow fractions. Courtesy of Heat Transfer Research, Inc.

–	Thermal resistance distribution: The percentage ratios of the heat transfer on the shell side, the heat transfer on the tube side, the thermal conductivity resistance of the tube and the thermal conductivity resistance of the fouling layer to the overall thermal resistance of the arrangement are a useful guideline for improving the design. It can be found out which side determines the heat transfer, so that the following design variations should focus on this. Furthermore, the percentage of the fouling resistance should be watched carefully; it can be assessed which impact the fouling factors have on the design. An example is given in Chapter 4.11.
–	Tube side velocities: One should take care that the tube side velocities are sufficiently high, especially if fouling is probable (Chapter 4.11). A range of 1.0–1.2 m/s is recommended for liquids; the VDI Heat Atlas [72] even suggests 1.8 m/s. For vapors, the kinetic energy is more relevant than the velocity, as the density of a gas can cover a wide range. The recommended range is ρw² = 30–270 kg/(m s²), corresponding to velocities of 5–15 m/s for air at p = 1 bar. The tubeside velocity is often a crucial point for the heat exchanger design. In fact, there are examples where the choice of a smaller shell diameter improves the heat exchanger performance, when the velocity in the tubes and, subsequently, the heat transfer coefficient increase significantly. Also, there is the option of providing multiple tube passes to increase the tubeside velocity (see above).
–	Shell side velocities: The velocities on the shell side should not be too large to avoid vibrations. The guidance values are w = 0.3–0.9 m/s for liquids and ρw² = 30–130 kg/(m s²), corresponding to velocities of 5–10 m/s for air at p = 1 bar.
–	Nozzle velocities: For the nozzles of heat exchangers, the kinetic energy determines the limitation for the design. The guide values are
	for liquids:
	ρw² = 700–2250 kg/(m s²) (tube side)
	ρw² = 700–1100 kg/(m s²) (shell side);
	for gases:
	ρw² = 500 kg/(m s²) (tube side)
	ρw² = 300–400 kg/(m s²) (shell side).
–	Vibrations: At least the most important vibration checks should indicate that vibrations do not occur (Chapter 4.12).
–	Physical properties: As mentioned above, it should be checked whether the calculated duty meets the expectation from the process simulation.
	The heat curves should cover the pressure and temperature range of the process with a sufficient number of points for interpolation.
–	Allocation of the tubes: Streams at high pressure and corrosive streams should be placed inside the tubes, as it is easier to increase their design pressure instead of that of the shell. The stream with a fairly lower heat transfer coefficient (e. g. viscous fluids) should be placed on the shell side. Streams showing fouling should be placed where cleaning is possible, in most cases inside the tubes. Often, this applies to cooling water service. On the shell side, cleaning is usually more difficult, and there are more dead spots which are susceptible to fouling. Generally, corrosive streams should be located on the tube side; otherwise, both tubes and shell must be made of the expensive material. As well, the medium with the higher pressure should be on the tube side, as tubes can withstand internal pressure more easily than external pressure. Streams with low flowrates should be placed on the tube side, as the velocity can be increased by providing more tube passes. On the shell side, it is much more difficult to achieve higher velocities.

For shell-and-tube heat exchangers, there is an interesting but not widely applied option to increase the heat transfer coefficient in the tubes, where often not enough turbulence is generated because of low velocities. Turbulence can be increased with wire elements (Figure 4.22), which can be inserted into the tubes [77].

Figure 4.22: hiTRAN element for increase of the heat transfer in the tubes [77]. © Wiley-VCH Verlag GmbH & Co. KGaA. Reproduced with permission.
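The flow-fraction limits quoted in the checklist above lend themselves to a small screening helper for post-processing the output of a design program. The limits and countermeasures are the ones from the text; the function itself, its name and the wording of the warnings are an illustrative sketch.

```python
def check_flow_fractions(a, b, c, e, f):
    """Screen shell-side flow fractions (as fractions of the total flow)
    against the rule-of-thumb limits quoted in the text."""
    warnings = []
    if a > 0.05:
        warnings.append("A > 5 %: reduce tube-to-baffle-hole clearance")
    if b < 0.50:
        warnings.append("B < 50 %: clearances too large or baffle spacing too narrow")
    if c > 0.10:
        warnings.append("C > 10 %: consider additional sealing strips")
    if e > 0.15:
        warnings.append("E > 15 %: consider double-segmental baffles, "
                        "wider baffle spacing or larger baffle cut")
    if f > 0.10:
        warnings.append("F > 10 %: consider sealing strips or seal rods")
    return warnings
```

A healthy design, e.g. B = 62 % with all bypass and leakage streams within their limits, returns an empty list; any violated limit yields the corresponding remedy from the text.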

4.6 Condensers

For condensers, the heat curve on the product side looks slightly different. In contrast to the liquid-liquid or gas-gas heat exchangers, it is important that dew and bubble point of the stream are reproduced well by the heat curves. For the properties, both phases are relevant. Due to the phase change, the influence of the pressure is larger. Also, the calculated heat duty should agree with the one obtained in the process simulation. However, there is often some work to do to prove it. In most cases, the pressure drop in the process simulation is just a guess, and often a very conservative one. In contrast to this, the heat exchanger design program calculates this pressure drop properly. In most cases, it will be lower than the one assumed in the process simulation. Therefore, due to the higher outlet pressure, the condensed fraction will be larger than in the process simulation, and so is the duty. The simple solution is to repeat the process

simulation for the condenser, setting the pressure drop according to the one obtained with the heat exchanger design program. The stream to be condensed is usually placed on the shell side, as good turbulence is required. Moreover, the coolant (e. g. cooling water) is often prone to fouling, so that it is preferable to put it on the tube side. Vertical condensers can have a larger heat transfer coefficient, as baffles disrupt the condensate film. However, their support structure is more expensive, and they are difficult to clean. There are also a lot of examples where the condensing fluid is in the tubes, when larger velocities are required. Condensate and inerts can accumulate in the tubes, so special care must be taken for their removal, as described below. With condensation in the tubes, the vertical arrangement is advantageous if subcooling is required. From the constructive point of view, the condensate removal from the heat exchanger should be well defined. As mentioned above, when condensation takes place on the shell side, the parallel baffle cut is to be preferred. The condensate removal can be supported by an inclination (usually 1–2°), where the nozzle for the condensate removal is the lowest point. The condensate nozzle must be large enough to remove the condensed liquid; otherwise, condensate flooding might be the consequence. In certain cases, the condensate level in the heat exchanger can even be used for controlling the heat duty. The higher the condensate level, the more tubes are flooded, and the less heat transfer area can be used. At the vapor inlet, the tubes must often be protected against erosion by droplets. For this purpose, impingement plates (Figure 4.41) or impingement rods are often used. The criteria are:
–	ρw² > 2232 Pa at the inlet nozzle
–	inlet conditions close to the dew point
–	ρw² > 744 Pa for boiling liquid
A special design is very useful if vacuum vapors have to be condensed.
In normal arrangements like BEM, the calculated pressure drop is often larger than the pressure itself, which is physically impossible. Increasing the shell diameter leads to huge apparatuses. The BJ21T arrangement (Figure 4.23) enables condensation with a very low pressure drop. The two large inlet nozzles guarantee a predistribution of the vapor at low velocities, and the vapor can reach the tubes easily. The dimensioning of a condenser has a special pitfall if inert gases are involved, especially when small amounts of condensables have to be removed from an inert gas stream, e. g. waste air. There is a large difference to the normal case without inert gases, where there is hardly any transport resistance for the vapor to get in contact with the cold surface or, respectively, the boundary layer. If there are large amounts of inert gases, the condensables must get to the boundary layer by means of diffusion, which is usually slow and must be regarded as the step determining the condensation rate. As mentioned in Chapter 4.2, mass and heat transfer have a mutual influence on each other [78]. The state-of-the-art calculation of this combined heat and mass transfer is thoroughly described in [79]. However, this procedure is too complex for an application

Figure 4.23: The BJ21T arrangement. Courtesy of Heat Transfer Research, Inc.

in a multicomponent mixture. Simplified approaches [80] are often used in commercial heat exchanger design programs, but in general it must be recommended to take care when the heat exchanger is designed. Especially for condensers, the deaeration problem is a decisive issue. There must be a clear route for defined and undefined (leakage) inert gas flows to leave the apparatus. In condensers, inert gases would significantly lower the heat transfer coefficient, as described in the last paragraph. For each heat exchanger, the design must be individually checked as to whether noncondensables are adequately removed. Gases accumulate at the top of any volume; therefore, the nozzle for the inert removal must be placed at the highest point of the apparatus. Moreover, a short circuit has to be ruled out. This means that the inert removal nozzle must not be located in the vicinity of the feed nozzle; in that case, it is likely that both condensable and noncondensable components are withdrawn. The process stream must get the opportunity to condense, meaning that it should first get in contact with the cold surfaces in the condenser before reaching the inert removal nozzle. In this way, preferably the noncondensables are removed from the process. It should also be verified that the air which is in the exchanger at the beginning can be removed during operation. A good compilation of the pros and cons of a number of types of condensers can be found in [286].
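Many of the quantitative criteria in this and the preceding section (tube-side and shell-side velocity limits, nozzle loads, impingement protection) reduce to evaluating the kinetic energy ρw². A minimal helper, with the guide values from the text used as illustrative checks:

```python
def rho_w2(rho_kg_m3, w_m_s):
    """Kinetic energy criterion ρw² in kg/(m s²), numerically equal to Pa."""
    return rho_kg_m3 * w_m_s ** 2

# gas in the tubes: recommended range 30-270 kg/(m s²);
# air at 1 bar and ambient temperature has rho ≈ 1.2 kg/m³
tube_gas_ok = 30.0 <= rho_w2(1.2, 10.0) <= 270.0

# liquid in a tube-side nozzle: guide value 700-2250 kg/(m s²)
nozzle_liq_ok = 700.0 <= rho_w2(1000.0, 1.2) <= 2250.0

# impingement protection required above ρw² = 2232 Pa at the inlet nozzle
needs_impingement = rho_w2(15.0, 20.0) > 2232.0
```

The example densities and velocities are freely chosen; in practice, ρ and w come from the heat curves and the nozzle sizes.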

4.7 Evaporators

The design of normal heat exchangers like condensers or liquid-liquid heat exchangers can be regarded as a standard task in process engineering, whereas the design of evaporators is usually not. Before one can encounter the particular difficulties in calculation, one must choose the type of evaporator with respect to the properties of the stream. Most evaporators are used as reboilers for column service; therefore, this special arrangement shall be carefully considered.

The most popular reboiler for distillation columns is the thermosiphon reboiler with natural circulation. It is used in approx. 70 % of the cases where evaporation is required [81]. Figure 4.24 shows the bottom of a column with a thermosiphon reboiler. On the left-hand side there is the removal of the bottom product, which has the same concentration as the circulation flow. The circulation flow enters the heat exchanger at the bottom. Due to the height difference between the surface of the bottom liquid and the inlet of the tube bundle (“static liquid head”), the pressure of the liquid is higher than the saturation pressure of the liquid at the surface, i. e. the liquid is subcooled. Inside the tubes, the liquid rises again. The pressure of the liquid decreases, and it is heated by the heating agent on the shell side, usually steam. Both effects compensate the subcooling after a certain height (preheating zone) has been passed, and boiling of the fluid begins. First, bubbles are formed, which become more and more numerous. A distinctive two-phase flow leaves the heat exchanger through the outlet nozzle and enters the column again, where it is split into a vapor flow going up the column and a liquid flow going back to the bottom, where the cycle begins once again. The driving force of the circulation flow is the density difference between the left leg of the reboiler cycle, where there is only liquid, and the right one, where a two-phase flow of the bottom product occurs. In the latter, the overall density is smaller due to the bubbles; therefore, the static pressure ρgH is larger in the left leg than in the right one, causing the natural circulation without external stimulation.

Figure 4.24: Thermosiphon reboiler with natural circulation.

The temperature difference between the heating agent and the product is the driving force for the heat exchange. For each evaporator type, there is a reasonable range. For thermosiphon reboilers, this range is between 15 and 40 K. At lower temperature differences, especially below 10 K, circulation instabilities are likely to occur [81–84]. In these cases, only a few bubbles are formed, and a significant carry-over of liquid does not take place. The circulation is low, and so is the heat transfer. There is a large preheating zone. Periodically, these bubbles coalesce, and the plugs formed push greater amounts of liquid upward, causing an increased circulation for a short time (geysering). As well, the pressure in the adjacent column fluctuates. Small temperature differences are likely to occur if the reboiler has an overdesign which is too large. According to

Q̇ = kAΔT	(4.34)

and assuming that k remains in the same order of magnitude, a large heat transfer area A means that a small driving temperature difference ΔT is formed, as usually some process control valve maintains the duty Q̇ by adjusting the steam amount. For a large area A, the valve will reduce the steam pressure on the shell side of the reboiler, and the condensation temperature will become lower and closer to the product temperature. Another case is the startup phase. At the beginning, there is no fouling; therefore, the heat transfer coefficient k is larger than expected (Chapter 4.11), and ΔT gets smaller. Circulation reaches a maximum at driving temperature differences of 20–30 K. High driving temperature differences are just as undesirable as low ones. The natural circulation is then reduced by the rising pressure drop. At very high driving temperature differences, there is the risk of burnout. This means that the bubbles form a continuous vapor film, and the heat transfer takes place mainly by radiation, which is comparably ineffective. A temperature difference rising beyond a critical value will therefore lead to a lower vapor generation [82].

The residence time of the product in the bottom and in the reboiler is quite long; therefore, the thermosiphon reboiler is not really gentle towards the product. There is another reason why thermosiphon reboilers are not the first choice when temperature-sensitive substances are involved. Usually, the boiling points of these components are relatively high, and the distillation is carried out in a vacuum to avoid high temperatures at the bottom. In fact, the thermosiphon reboiler has problems in vacuum operation, especially at pressures below p = 150–200 mbar [85]. This is illustrated in Figure 4.24. The static liquid head is the driving force for the thermosiphon circulation. However, as mentioned above, due to hydrostatics the liquid is subcooled at the entrance of the tubes.
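The effect of an oversized area on Eq. (4.34) can be made tangible with a two-line calculation; the duty, k-value and areas below are freely chosen illustration values.

```python
def driving_dT(duty_W, k_W_m2K, area_m2):
    """Driving temperature difference that establishes itself when a control
    valve maintains the duty, from rearranging Eq. (4.34): ΔT = Q / (k A)."""
    return duty_W / (k_W_m2K * area_m2)

# doubling the area at constant duty halves the driving temperature difference:
dT_design = driving_dT(1.0e6, 1000.0, 50.0)      # 20 K, in the comfortable range
dT_oversized = driving_dT(1.0e6, 1000.0, 100.0)  # 10 K, instabilities become likely
```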
In a vacuum, the preheating zone in the tubes with low heat transfer will be much larger, and at a certain point the thermosiphon circulation vanishes due to the low driving force. The following example illustrates the impact of the vacuum.

Example
Compare the subcooling of a thermosiphon reboiler with water as the bottom product at different bottom pressures p1 = 1 bar and p2 = 100 mbar. The static liquid head is assumed to be H = 5 m. For the density of water, ρ = 1000 kg/m³ is used in both cases. For simplification, the gravity acceleration is assumed to be 10 m/s².

Solution
At p1 = 1 bar, the bottom temperature in the column can be calculated to be ts1 = 99.6 °C [29]. Following hydrostatics, the pressure at the tube inlet of the reboiler will be p1,tube = p1 + ρgH = 1.5 bar. This in turn corresponds to a boiling temperature of ts1,tube = 111.3 °C. The subcooling is approx. 12 K.
At p2 = 100 mbar, the bottom temperature can be calculated to be ts2 = 45.8 °C. Considering hydrostatics, the pressure at the tube inlet of the reboiler is p2,tube = 0.6 bar, corresponding to a boiling temperature of ts2,tube = 85.9 °C. The subcooling is considerably higher in this case, approx. 40 K.
It can easily be guessed that the preheating zone will be much larger in the vacuum case. This will result in a worse heat transfer, giving less circulation and, in a vicious circle, a larger preheating zone. For this reason, it is recommended that thermosiphon reboilers should not be used below p = 0.2 bar [85]. The tube length of thermosiphon reboilers in vacuum should be approximately 3 m.
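The example can be reproduced in a few lines of code. The Antoine constants for water used below are an assumption on my part (a common low-pressure parameter set with p in mmHg and t in °C, strictly fitted only up to about 100 °C, but adequate for this estimate); the hydrostatic part follows the example exactly.

```python
from math import log10

# Antoine constants for water, p in mmHg, t in °C (assumed parameter set,
# strictly valid only up to ~100 °C)
A, B, C = 8.07131, 1730.63, 233.426

def t_sat(p_bar):
    """Saturation temperature of water from the inverted Antoine equation."""
    return B / (A - log10(p_bar * 750.062)) - C

rho, g, H = 1000.0, 10.0, 5.0            # kg/m³, m/s², m, as in the example
for p in (1.0, 0.1):                     # bottom pressure in bar
    p_inlet = p + rho * g * H / 1.0e5    # hydrostatic pressure at the tube inlet
    print(f"p = {p} bar: subcooling = {t_sat(p_inlet) - t_sat(p):.1f} K")
```

The script reproduces the subcooling of approx. 12 K at 1 bar and approx. 40 K at 100 mbar.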

The static liquid head mentioned in the example is one of the key quantities for the function of the reboiler. It has to be defined for the heat transfer calculation, and it should be indicated on the arrangement sketch for column and reboiler, although its realization is not a matter of construction but of the level control. It has two effects which partly compensate each other. The higher the static liquid head, the larger is the preheating zone, which reduces the heat transfer, and the larger is the circulation, which increases the heat transfer. Often, these two effects compensate [82]. A reasonable value for the static liquid head is 90 % of the tube length (usually in the range 3–6 m). Furthermore, the connections between the distillation column and the reboiler have to be defined, i. e. length and diameter of the reboiler feed and outlet lines. For the diameters, useful rules of thumb exist. The cross-flow area of the outlet line should be approx. 80–100 % of the total cross-flow area of the tubes of the reboiler, whereas 25–50 % are sufficient for the inlet line. To avoid extremely large throughputs and low evaporation rates, a throttle valve can be placed in the inlet line. The height of the rear end head should be approx. 25 % of the shell diameter if it has an axial nozzle; for a radial nozzle, it is recommended to add the nozzle diameter. For the specification of a thermosiphon reboiler the definition of the process is different from the one for a single-phase heat exchanger or for a condenser. One should be aware that the flow rate of the product is not known, as its depends on arrangement and construction of the equipment. Moreover, its determination by any calculation program will be more a number for the order of magnitude than an exact value. Instead, a reboiler is specified by the heat duty to be transferred. Usually, an estimation for the

outlet vapor fraction is required. A standard value is 0.2; an acceptable range for the result is 0.15–0.25, and for water, evaporation rates down to 0.05 can be accepted. It is worth mentioning that the inlet pressure of a thermosiphon reboiler is not a useful quantity, as it can hardly be determined. Instead, it makes sense to start the calculation with the pressure at the surface of the liquid in the bottom of the distillation column. The local pressures obtained due to the interaction of static height and pressure drops are then evaluated by the program. Table 4.2 gives a summary of the mentioned design recommendations for thermosiphon reboilers.

Table 4.2: Recommended values for the design of thermosiphon reboilers.

Item                                            Recommendation
Tube length                                     3–6 m (vacuum: 3 m)
Static liquid head                              90 % of tube length
Cross-flow area, outlet line                    80–100 % of tube cross-flow area; w < �� m/s, ρw² < ���� Pa
Cross-flow area, inlet line                     25–50 % of tube cross-flow area; w < � m/s, ρw² < ���� Pa
Height of rear head                             25 % of the shell diameter (for radial nozzle: + nozzle diameter)
Outlet vapor fraction                           0.15–0.35, aqueous systems 0.05–0.15 [263]
Driving temperature difference                  10–40 K [81]
Pressure range                                  p > 200 mbar
No. of crosspasses                              ≈ tube length/�.��
Heat flux                                       ��.�–��.� kW/m²
Heat transfer coefficient, clean surfaces [81]  ����–���� W/(m² K)
Heat transfer coefficient, fouling [81]         ���–���� W/(m² K)
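The cross-flow-area rules from Table 4.2 translate directly into line diameters. The sketch below picks 90 % (outlet) and 40 % (inlet) of the total tube cross-flow area as design points within the recommended ranges; tube count and tube inner diameter are example values, not quantities from the text.

```python
from math import pi, sqrt

def reboiler_line_ids(n_tubes, tube_id_m, outlet_frac=0.9, inlet_frac=0.4):
    """Inner diameters of reboiler outlet and inlet lines from the
    cross-flow-area rules in Table 4.2 (80-100 % and 25-50 % of the
    total tube cross-flow area, respectively)."""
    a_tubes = n_tubes * pi / 4.0 * tube_id_m ** 2    # total tube cross-flow area
    d_out = sqrt(4.0 * outlet_frac * a_tubes / pi)
    d_in = sqrt(4.0 * inlet_frac * a_tubes / pi)
    return d_out, d_in

d_out, d_in = reboiler_line_ids(500, 0.021)
```

For 500 tubes with 21 mm inner diameter, this yields roughly 0.45 m for the outlet line and 0.30 m for the inlet line.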

Concerning the accuracy of the calculation of a thermosiphon reboiler, one must be aware that the mutual interaction between heat transfer and the two-phase flow in the tubes is really complex. The accuracy of the pressure drop calculation of the two-phase flow (Chapter 12.1) in changing flow regimes determines the circulation rate, which is in turn decisive for the heat transfer and the vapor generation in the tubes. It is not probable that these dependencies can be accurately determined for both organic and aqueous fluids in various arrangements. Nevertheless, the considerable success of heat exchanger design programs indicates that at least the overall performance of thermosiphon reboilers, i. e. the duty transferred, is predicted in a way that a reliable design of commercial reboilers is possible.

For the startup of a thermosiphon reboiler, some kind of boiling is necessary to start the circulation; on the other hand, effective boiling takes place only if circulation is achieved. Thus, for the startup the unit should be heated up slowly so that boiling can gradually develop. If possible, it is useful to lower the condenser and therefore the

column pressure for a time to help the unit to get started. A high static liquid head might also help.

There are a number of other types of evaporators. For vacuum applications and for systems with high viscosities or a wide boiling range, the falling film evaporator is the usual alternative. Falling film evaporators (Figure 4.25) consist of a vertical tube bundle. Usually, falling film evaporators are effective if the tubes are long, often 8–9 m. The liquid to be evaporated is fed at the top and flows down the tubes as a thin film due to gravity. Special distributors on top of the tubes ensure an even distribution of the liquid into the tubes. On the shell side, the heating agent, usually steam, is condensed. The vapor is generated in the tubes and flows down in cocurrent flow with the liquid, supporting the downflow of the liquid due to shear forces. Vapor and liquid are separated at the bottom in a separator vessel.

Figure 4.25: Falling film evaporator. Courtesy of GEA Group AG.

In a falling film evaporator, a gentle evaporation takes place. The residence time in the heated zone is short; the temperatures can be kept low, as it can be operated in vacuum, and the temperature differences between heating agent and product can be kept low as well (usually 8–20 K). There is a small liquid holdup, giving quick reaction times to changes of the operating conditions. Furthermore, in contrast to thermosiphon and forced circulation reboilers (see below), the falling film evaporator is pretty insensitive against foaming. The k-values of falling film evaporators are in the range k = 700–1200 W/(m² K), where the main heat transfer resistance is usually caused by the heat conduction through the film. Care must be taken that the heat flux does not exceed a critical value; otherwise, there is the danger that the film dries out locally and a hot spot is formed. Another countermeasure is to operate with a liquid recycle so that the film thickness increases due to the increased mass flow. For the so-called coverage, defined as the ratio

between liquid volume flow rate and total wetted circumference of all tubes, a range of 1.2–1.5 m³/(m h) is a good approach. Falling film evaporators can be used up to viscosities of 500 cP, although one cannot expect Newtonian behavior in this range, and the performance at high viscosities will certainly be lower. The limitation for the application is fouling: due to the slow gravity flow of the liquid, there is no abrasive effect.

The specification of a falling film evaporator in a heat exchanger design program has some peculiarities. In contrast to the thermosiphon reboiler, the inlet flow into the heat exchanger at the top of the calandria is quite well defined, either by the process for the single-pass option or by the recirculation pump capability. During the specification, the coverage should be checked. Defining the inlet pressure on the product side of the falling film evaporator is unpleasant. In fact, only the outlet pressure is well defined. The inlet pressure is a result of the pressure generated by the pump, the static height of the tube inlet and the pressure drop of the distributor. Especially in vacuum operation, the performance of the falling film evaporator is quite sensitive to the inlet pressure, and therefore its calculation does not make much sense, especially because the exact performance data of the pump are barely known. It is reasonable to determine it iteratively: with a reasonable guess of the inlet pressure, the design program evaluates an outlet pressure which should match the given value. If this is not the case, the inlet pressure is varied according to the difference between the calculated and the known outlet pressure until the outlet pressure fits.

A typical example for the application of falling film evaporators is the enrichment of fruit juices. The removal of water reduces the transport costs significantly, and the smooth temperature differences prevent the valuable vitamins from being destroyed.
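The coverage check mentioned above is a one-liner; tube count, tube diameter and flow rate below are freely chosen illustration values.

```python
from math import pi

def coverage(vol_flow_m3_h, n_tubes, tube_id_m):
    """Coverage in m³/(m h): liquid volume flow rate per total wetted
    circumference of all tubes, as defined in the text."""
    return vol_flow_m3_h / (n_tubes * pi * tube_id_m)

# 100 m³/h distributed over 500 tubes of 50 mm inner diameter:
cov = coverage(100.0, 500, 0.05)   # about 1.27 m³/(m h)
```

The result lies within the recommended 1.2–1.5 m³/(m h); a lower value would call for a larger liquid recycle.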
Another alternative to the thermosiphon reboiler is the forced circulation reboiler (Figure 4.26), which can be arranged both horizontally and vertically. It is often used for systems with high viscosity and large boiling point elevation. In fact, the forced circulation reboiler is a liquid heater. The heated liquid is then expanded into the recipient,

Figure 4.26: Horizontal forced circulation evaporator. Courtesy of GEA Group AG.

usually a column, through a valve, causing the evaporation. The heat transferred in the forced circulation reboiler becomes sensible heat according to

Q̇ = ṁcp ΔT	(4.35)
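Rearranging Eq. (4.35) shows why forced circulation needs a strong pump; the duty, heat capacity and allowed temperature rise below are example numbers, not values from the text.

```python
def circulation_rate(duty_W, cp_J_kgK, dT_K):
    """Required circulation mass flow from Eq. (4.35): m_dot = Q / (cp ΔT)."""
    return duty_W / (cp_J_kgK * dT_K)

# 1 MW transferred into an aqueous product with only 5 K temperature rise:
m_dot = circulation_rate(1.0e6, 4200.0, 5.0)   # about 48 kg/s of circulation
```

Keeping the temperature rise small for thermal stability thus directly forces a large circulation flow and pump.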

The larger ṁ, the lower is ΔT, and the lower are the thermal stability problems. On the other hand, a strong pump is necessary for large volume flows and relatively low pressure differences. The forced circulation reboiler is relatively insensitive against fouling, as the evaporation takes place outside the apparatus. Due to the relatively high velocities in the tubes (1.5–2 m/s, sometimes even higher), there is an abrasive effect that prevents the start of fouling. The high velocities on the product side are also related to a high throughput, which in turn causes a low temperature change and therefore less thermal stress. However, the forced circulation reboiler is very sensitive against foaming. Further disadvantages of forced circulation reboilers are the considerable power consumption of the circulation pump and the investment costs for its foundation.

The kettle reboiler is a simple and robust alternative to the thermosiphon reboiler, both for vacuum and pressure applications (Figure 4.27). It causes no vibration problems, and evaporation rates up to 80 % are possible. Its drawbacks are possible entrainment and its affinity to fouling, as there is no defined flow which can remove the dirt, and heavy boiling substances have a long residence time in the heat exchanger. In these cases, it makes sense to maintain a continuous liquid draw-off stream. Kettle reboilers are relatively expensive pieces of equipment, as their space and volume requirements are pretty large.

For the design, it has to be taken into account that the kettle reboiler is effective as an evaporator, not as a liquid heater. The heat transfer takes place due to bubble formation. Feeding subcooled liquids must be avoided [86], as in this case the dominating heat

Figure 4.27: Sketch of a kettle reboiler. © Springer-Verlag GmbH.

transfer mechanism is natural convection with low velocities and without the support of baffles, giving a low k-value. To enable the formation of bubbles, a minimum temperature difference between heating agent and product should be maintained, at least 12–15 K. For high pressure applications, the driving temperature difference can be lower. Although they do not belong to the shell-and-tube heat exchangers, two other evaporators should be mentioned which have the main purpose of gentle evaporation and product conservation. Well-known applications are vitamins, flavoring substances or pharmaceuticals. In thin film evaporators (Figure 4.28), the product is distributed on the inner side of a tube with a heating jacket outside. It forms a film. Inside the tube, a drive shaft with an attached wiper rotates and keeps the film thickness constant, usually below 1 mm. The residence time in thin film evaporators is normally less than 1 min. They are appropriate for low pressures down to 1 mbar, giving pretty low boiling temperatures. Pressures below 1 mbar are not possible because of the pressure drop caused by the transport from the evaporator to the condenser. The product is distributed from the top of the apparatus by means of a rotating system. It flows down on the inner wall, and is equally spread and permanently mixed by a wiper system. In Figure 4.28, it is realized as a roller wiper. It prevents the formation of hot spots and provides long operation intervals without maintenance, as the roller wipers do not get in direct contact with the wall so that scratches are avoided. The heating agent (steam or thermal oil) is led through the jacket attached to the wall. The vapor generated can leave the thin film evaporator through a nozzle at the top. Extremely high heat transfer coefficients are possible. However, the prediction capabilities are low. The design should be performed by a vendor who has carried out a pilot trial with a reasonable scale-up.
A classical failure is the use of laboratory data for the determination of the k-value; these laboratory

Figure 4.28: Sketch of a thin film evaporator. © UIC GmbH.


Figure 4.29: Sketch of a short path evaporator. © UIC GmbH.

data have usually been obtained in equipment made of glass, where the low thermal conductivity of the glass determines the heat transfer. For further information, [85] is a good starting point. For applications in rough vacuum (p = 1–10⁻³ mbar) short path evaporators (Figure 4.29) are used for extremely high boiling substances. The principle is to keep the distance between evaporator and condenser as short as possible, usually only a few cm. For this purpose, the condenser is located in the center of the apparatus. Again, in Figure 4.29 the distribution of the liquid on the inner wall is achieved by the roller wiper system. Because of the low pressures, the temperatures can be kept low as well, and the evaporation is extremely gentle towards the product. On the other hand, the smaller the distance between evaporator and condenser is, the greater is the danger of entrainment. The design should again be performed by experienced vendors. It is essential that low boiling substances are completely removed beforehand. From gas theory, the Langmuir–Knudsen equation gives an upper limit for the evaporation capacity per heating area [87]:

(ṁ/A)max / (kg/(m² h)) = 1575 ⋅ (p/mbar) ⋅ √[(M/(g/mol)) ⋅ (K/T)]

(4.36)

For p = 10⁻³ mbar, the order of magnitude is

(ṁ/A)max = 1.5 kg/(m² h)

(4.37)
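Equation (4.36) is easy to evaluate; the following sketch checks the order of magnitude quoted in Equation (4.37). The molar mass and temperature are assumed example values:

```python
import math

def max_evaporation_rate(p_mbar, M_gmol, T_K):
    """Upper limit of the evaporation capacity per heating area in kg/(m2 h),
    Langmuir-Knudsen relation as given in Equation (4.36)."""
    return 1575.0 * p_mbar * math.sqrt(M_gmol / T_K)

# Example: a heavy boiler (M = 400 g/mol, assumed) at T = 450 K and p = 1e-3 mbar
print(round(max_evaporation_rate(1e-3, 400.0, 450.0), 2))  # of order 1.5 kg/(m2 h)
```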

A good compilation of the advantages and disadvantages of various types of evaporators can again be found in [286].


4.8 Plate heat exchangers

Plate heat exchangers are an option for combining very high heat transfer coefficients (3–4 times larger than for shell-and-tube heat exchangers) with a large heat transfer area per volume. Moreover, the driving temperature differences can be smaller, down to 5 K, sometimes even down to 1 K. Because of their smaller size, they have a number of advantages: a smaller plot area is required, the installation costs are lower, and building sizes as well as steel structures for equipment support can be reduced. Plate heat exchangers consist of profiled heat exchanger plates which separate the two media. When they are assembled, the profiles form many parallel connected channels which form the heat transfer area. These channels change their direction regularly, and the flows are directed in a way that the gaps filled with cold and warm medium alternate (Figure 4.30). The corrugation of the plate profiles increases the turbulence of the flows and therefore improves the overall heat transition coefficient k.

Figure 4.30: Principle sketch of a plate heat exchanger. Courtesy of Alfa Laval Mid Europe GmbH (Germany).
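As a rough orientation, the required number of plates follows from the transfer area A = Q̇/(k ⋅ ΔTm). All numbers in the following sketch are assumed example values, not vendor data:

```python
import math

# Rough sizing sketch for a gasketed plate heat exchanger.
# ALL numbers below are assumed example values.
Q_DOT = 500e3        # duty in W
K = 4000.0           # overall heat transition coefficient in W/(m2 K), plate HX range
DT_M = 5.0           # mean driving temperature difference in K
A_PLATE = 0.35       # effective area of a single plate in m2 (assumed)

area = Q_DOT / (K * DT_M)                 # required transfer area A = Q / (k * dTm)
n_plates = math.ceil(area / A_PLATE) + 2  # two end plates transfer on one side only
print(f"area = {area:.1f} m2 -> about {n_plates} plates")
```

A professional design additionally balances the channel velocities against the allowed pressure drop, which is why vendor calculations may deviate considerably from such a shortcut.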

There are further advantages of plate heat exchangers. As can be seen in Figure 4.30, the plates are mounted in a skid between two massive plates. This arrangement is easy to disassemble, giving the opportunity of an easy cleaning procedure. Design corrections or capacity increases can easily be achieved by adding more plates to the skid. The space requirement is lower than for shell-and-tube heat exchangers, as well as the investment costs for comparable exchangers with the same duty. However, the pressure drop is considerably higher. A serious problem is posed by the gaskets between the plates. Their stability is a limitation for the applicability of plate heat exchangers; the maximum pressure and temperature


are 30 barg and 260 °C, respectively. For aggressive media and higher pressures and temperatures, the plates must often be soldered or welded, whereby the strong advantage of easy cleaning is lost again. Furthermore, the heat transfer area cannot be increased by simply adding plates. The k-values of plate heat exchangers can be extremely high. They can also be used for highly viscous media, and they show less fouling due to the high velocities and the high turbulence. Fouling factors used for shell-and-tube heat exchangers are generally too high for plate heat exchangers, at least by a factor of 2. Because of the high k-values, the fouling factors are often not used in the design calculation, as they would dominate the thermal resistance (see Equation 4.23), leading to the situation that arbitrary factors determine the size of the heat exchanger. The disadvantages of plate heat exchangers are the high maintenance costs for the gaskets and the permanent risk of leakage, or, alternatively, the impossibility of cleaning. Plate heat exchangers are not only used as liquid-liquid heat exchangers; nowadays, they are increasingly used as evaporators and condensers as well. Plate condensers have the advantage that asymmetric channels can be formed: wide ones for the vapor side and narrow ones for the cooling water to maintain an appropriate velocity to achieve enough turbulence (Figure 4.31). The same advantage can be claimed for evaporators, where even high-viscosity media can be handled. Driving temperature differences of only 3–4 K can be taken into consideration, which is especially important if mechanical (Chapter 8.2) or thermal (Chapter 8.3) vapor recompression or special materials are involved. Another advantage is the small holdup, giving short startup and shutdown phases and performing a gentle evaporation of temperature-sensitive substances. Several evaporation options can be realized, e. g. the thermosiphon reboiler,

Figure 4.31: Asymmetric channels for condensation in plate heat exchangers. Courtesy of Alfa Laval Mid Europe GmbH (Germany).


Figure 4.32: Plate heat exchanger used as falling film evaporator.

the single pass evaporator, the falling film evaporator (Figure 4.32) or the forced circulation evaporator. The design calculation of plate heat exchangers is usually supported by the various commercial design programs. These programs enable even a nonprofessional to point out the advantages of plate heat exchangers if their application is possible. However, the particularities and the design rules are not as common as they are for shell-and-tube heat exchangers, so that large differences between a quick program call and a professional design performed by a vendor might occur. The principles of the heat transfer calculation are well described in [88]. In recent years, some new developments have occurred which aim to optimize the pressure drop. The pressure drop plays a key role in the design of plate heat exchangers. Essentially, a high pressure drop is a disadvantage because of the increased pumping effort. On the other hand, high pressure drops are usually associated with good heat transfer. An optimized pressure drop can be used for
– saving of energy for pumping;
– increase of throughput;
– reduction of heat transfer area because of higher heat transfer coefficients.
For this purpose, the flow pattern along the plate has been modified [287] by means of changing the shape of the inlet and outlet openings (Figure 4.33). The usual circular shape represents the minimum circumference per area for the fluid entering the space between the plates. The new solution has a larger circumferential length, so that the velocity of the fluid and therefore the pressure drop of the port are lower. The specified maximum pressure drop can be more effectively utilized, in the best case by generating turbulence where the heat transfer takes place. Likewise, the region around the inlet opening has been modified for a more even distribution. A shortcut between inlet and outlet becomes less probable. There are no dead water zones where fouling might be initiated.
Altogether, the efficiency of the plate is improved, which might lead to a lower


Figure 4.33: New shape for the inlet and outlet opening and the plate pattern (OmegaPortTM and CurveFlowTM ). Courtesy of Alfa Laval Mid Europe GmbH.

number of plates. It is estimated that the heat transition coefficients will increase by approx. 30 %. The FlexFlowTM (Figure 4.34) takes into account that most heat transfer problems are asymmetric, meaning that the volume flows differ significantly, e. g. when different phases (vapor/liquid) occur. In FlexFlowTM , the pattern creates channels of different size, which also makes the flow and the pressure drop more even. Fouling is also reduced, and the maintenance effort is lowered.

Figure 4.34: Symmetric (lower part) and asymmetric (upper part) channels (FlexFlowTM ). Courtesy of Alfa Laval Mid Europe GmbH.


4.9 Double pipes

The double pipe is the simplest construction for a heat exchanger (Figure 4.35). It consists of just two concentric pipes, where one stream flows in the inner tube and the other one in the annulus.

Figure 4.35: Double-pipe heat exchanger [89]. © Wiley-VCH Verlag GmbH & Co. KGaA. Reproduced with permission.

Generally, double pipes suffer from the fact that the heat transfer area remains low, so that large duties can hardly be realized. Instead, double pipes are often used for heat tracing to compensate for unintended heat losses.

4.10 Air coolers

If the heat transfer on one side is significantly worse than on the other side, it is determined solely by the bad side. In this case, it makes sense to increase the heat transfer area selectively on this side. An example is the air cooler. The heat transfer on the product side is much better than the one to the environmental air. Therefore, the tubes are equipped with fins (Figure 4.36), and the heat transfer is improved by blowers which create a forced flow with comparably high velocities (Figure 4.37). The heat transfer to the fins takes place by heat conduction. Calculation procedures for this problem are described in [71] and [90]. The investment costs for an air cooler are higher than for a conventional shell-and-tube water cooler because of the large heat transfer area and the costs for the blowers including their drives. On the other hand, operation costs are lower, as there is no cooling water consumption. Also, no piping is necessary on the service side, and no fouling takes place. There are certain other aspects for the assessment of air coolers, e. g. their high noise level or their large space demand. Process control is difficult if the air cooler


Figure 4.36: Finned tubes. Courtesy of Kelvion (www.kelvion.com).

Figure 4.37: Air cooler. Courtesy of Kelvion (www.kelvion.com).

is exposed to rain, snow or sun radiation. But the main criterion is the fouling aspect: cooling water contains hardness components (Chapter 13.3), which might precipitate at wall temperatures higher than 65–75 °C; as long as no other measures are taken, an air cooler should be considered in these cases. The final design of an air cooler should be performed by the vendor.
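The effect of the fins can be estimated with the efficiency of an idealized straight rectangular fin, η = tanh(mH)/(mH) with m = √(2α/(λ t)); real annular finned tubes require the procedures in [71] and [90]. All geometry values below are assumed:

```python
import math

def fin_efficiency(alpha, lam, thickness, height):
    """Efficiency of an idealized straight rectangular fin:
    eta = tanh(m H) / (m H) with m = sqrt(2 alpha / (lambda t))."""
    m = math.sqrt(2.0 * alpha / (lam * thickness))
    mh = m * height
    return math.tanh(mh) / mh

# Assumed example: aluminum fin (lambda = 200 W/(m K), 0.4 mm thick, 15 mm high)
# in an air stream with alpha = 50 W/(m2 K)
eta = fin_efficiency(50.0, 200.0, 0.4e-3, 15e-3)
print(f"fin efficiency: {eta:.2f}")
```

An efficiency close to 1 means the fin surface is almost as effective as bare tube area at the same temperature, which is why thin, highly conductive fins are used on the air side.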

4.11 Fouling

If you consider fouling, you will get fouling. (Andreas Doll)

In general, heat exchangers suffer from a gradual deterioration of the heat transfer caused by so-called fouling. The streams often contain dissolved or suspended materials which may deposit on the surface of the heat transfer areas and form a layer. These

layers usually have a low thermal conductivity; therefore, they form an extra heat transfer resistance which lowers the overall heat transition coefficient. Streams containing hardness components are especially prone to the formation of fouling layers. The solubility of hardness components becomes lower with increasing temperature; their deposition is a typical case of fouling. Other reasons for fouling are microorganisms (bio-fouling) or by-products caused by corrosion or side reactions. Formally, this heat transfer resistance can be calculated by taking into account the thickness of the layer and its thermal conductivity. However, neither of them can be determined in advance. Therefore, the additional fouling resistances used (untruly called “fouling factors”) are more or less just set by the user, in most cases according to the in-house design rules of each particular company. Fouling factors are in the range 0–600 ⋅ 10⁻⁶ m² K/W; beyond this range, they do not really make sense. They can be interpreted according to Table 4.3.

Table 4.3: Typical fouling factors.

Fouling factor (10⁻⁶ m² K/W)   Interpretation                      Example
0                              no fouling                          caustic soda
100                            formal consideration of fouling     steam
200                            low fouling                         overhead products
300–400                        moderate to strong fouling          cooling water
500                            strong fouling                      dirty products
600                            very strong fouling                 very dirty products

A collection of fouling factors can be found in [91]. Fouling also depends on the material. Roughly, in stainless steel tubes the fouling factor is about half as large as in carbon steel tubes. In plate heat exchangers, the velocities are considerably higher so that less fouling occurs. Nevertheless, the fouling consideration is often subject to discussion. In [91] it is argued that the consideration of fouling factors often leads to heat exchangers which are too large, and therefore the velocities are lower. High velocities counteract fouling because of possible abrasion of the fouling layer. Therefore, it frequently happens that fouling only occurs because it had been considered in the design. In [91] it is recommended that fouling should not be considered. A usual safety margin (15–20 %), an appropriate design with a large B fraction (> 65 %), and a reasonable baffle cut (20–25 %) should ensure that fouling is avoided. Auxiliary measures could also help, like a small parallel heat exchanger to compensate fouling if it occurs nonetheless, or recycling part of the cooling water return to keep a sufficient velocity at plant startup when there is no fouling. A similar recommendation is given in [92]. In practice, the situation of an engineer who follows this argumentation is not easy. There is no obvious success story when fouling is avoided by setting the fouling factor


to a low value, as there is no proof that fouling would have been a problem. On the other hand, it is a serious design mistake if the heat transfer area is too small due to fouling when no fouling factor has been considered. One should at least be skeptical if fouling factors are arbitrarily increased or considered, although the medium is known to be clean. When in doubt, there should be a tendency towards the lower fouling factor. Traditions in plant engineering are difficult to overcome. In any case, velocities should be kept relatively high (tube: ≈ 2 m/s, shell: ≈ 1 m/s). Fouling can also be mitigated by the use of twisted tubes (Figure 4.38). Inside the twisted tubes, more shear stress is developed at the inner tube wall, which enables a better fouling removal from the wall [244].

Figure 4.38: Twisted tube section.

As mentioned, the heat exchanger design programs outline how the overall heat transfer resistance is composed. When the heat transfer conditions are very good on both the product and the service side, the fouling factors can account for a large percentage of the heat transfer resistance. Often, this is the case for thermosiphon reboilers with steam as heating agent and water on the product side. In these cases, the size of the heat exchanger is determined by the more or less arbitrary fouling factors. Special care must be taken. If a thermosiphon reboiler is too large, it might not work properly (see Chapter 4.7). Without experience, heat exchangers like this are very difficult to design. For the cleaning of shell-and-tube heat exchangers, a number of procedures are established [297].
– High-pressure water jets are most often applied, but water and energy consumption are considerable. Furthermore, there is noise disturbance, and it is difficult to remove hard fouling layers.
– Mechanical brushes, scrapers and drills can remove layers mildly and even polish the tubes. But only straight tubes can be handled; it is not possible to treat U-tubes.
– If the tube bundle can be removed from the shell, it can be submerged in a cleaning agent. Using ultrasonic sound, vibrations are induced, and fouling layers are destroyed. The investment is pretty high, but the consumption figures are low.

– The tube bundle can also be transferred to a furnace, where a pyrolysis takes place at 450 °C in an oxygen-deficient atmosphere. Thereby, the incrustations decompose.
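How strongly an assumed fouling resistance can dominate the design can be sketched with the series-resistance form behind the overall heat transition coefficient (cf. Equation 4.23). The film coefficients below are assumed example values; the wall resistance is omitted for brevity:

```python
# Sketch of how fouling resistances enter the overall heat transition coefficient:
# 1/k = 1/alpha_hot + 1/alpha_cold + sum of fouling resistances

def overall_k(alpha_hot, alpha_cold, r_foul_total):
    """Overall coefficient in W/(m2 K); r_foul_total in m2 K/W."""
    return 1.0 / (1.0 / alpha_hot + 1.0 / alpha_cold + r_foul_total)

alpha_steam = 8000.0   # W/(m2 K), condensing steam side (assumed)
alpha_water = 6000.0   # W/(m2 K), boiling water side (assumed)

k_clean = overall_k(alpha_steam, alpha_water, 0.0)
k_fouled = overall_k(alpha_steam, alpha_water, 400e-6)  # 'moderate to strong' value

print(f"k clean : {k_clean:6.0f} W/(m2 K)")
print(f"k fouled: {k_fouled:6.0f} W/(m2 K)")
print(f"extra area needed: {k_clean / k_fouled - 1:.0%}")
```

With both film coefficients this good, a fouling resistance of 400 ⋅ 10⁻⁶ m² K/W more than doubles the required area, illustrating why arbitrary fouling factors can dominate the size of such exchangers.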

4.12 Vibrations

Dark Arts is a mandatory subject. (Harry Potter and the Goblet of Fire)

Tube vibrations can cause severe mechanical damage to a heat exchanger, e. g. due to tubes hitting the baffles, fatigue cracking, enhanced corrosion or loosening of the tube joints. They are induced by the shellside flow across the tube bundle. There is a maximum crossflow velocity that should not be exceeded; usually, a 20 % safety margin is left. There are several mechanisms which can create vibrations; the most important ones are fluidelastic instability and vortex shedding. Due to its pressure, the fluid exerts a certain force on the tubes, pushing them apart. On the other hand, the tubes act like a spring, and a restoring force is built up. Fluid force, restoring force, and damping of the fluid form an oscillating system. If the amplitudes exceed a certain level, severe mechanical damage can be the consequence (Figure 4.39). This phenomenon is called fluidelastic instability. Vortex shedding is caused by periodic formation of vortices in the tube bundle (Figure 4.40). Short-term failures can take place if

Figure 4.39: Tube failure caused by fluidelastic instability [93]. © Hydrocarbon Processing.

Figure 4.40: Vortex shedding in tube array [93]. © Hydrocarbon Processing.


the frequency of the vortex formation approaches the resonance frequency of the tubes. Regions which are prone to vibration damage are [93]:
– tubes with large unsupported spans between two baffles;
– tubes located in the baffle window region at the tube bundle periphery;
– U-bend regions;
– tubes beneath the inlet nozzle;
– tubes in the tube bundle bypass area.
The calculation of tube vibration phenomena is not very well founded. There are some rules and plausibility considerations that should be taken into account. Many parameters can affect tube vibrations, and most of them also affect the thermal performance. The proper support of the tubes is essential in avoiding vibrations [93]. If the unsupported span of the tubes is long, its natural frequency is low and resonance phenomena are more probable. Tubes supported by baffles have an unsupported span equal to the baffle spacing. Tubes in the baffle window have much larger unsupported spans and are therefore more susceptible to vibrations. Reducing the spacing L of single-segmental baffles increases the cross-flow velocity proportionally to L⁻¹, while the natural frequency increases proportionally to L⁻², so that resonance becomes less probable. Also, other baffle types can be tried (Figure 4.17). In heat exchangers with double-segmental baffles, the cross-flow velocities are much lower. “No-tubes-in-window” baffles (NTIW) are usually vibration-free, as the unsupported span of the tubes is reduced by 50 % for the same number of cross-passes. However, as the tubes in the window region are omitted, they require a larger shell diameter to maintain the heat transfer area. Another option are the so-called rod-baffled heat exchangers (Figure 4.19). They provide closely spaced support for the tubes, making the tube bundle very tight. However, the flow direction is essentially parallel to the tubes, which causes a worse heat transfer and, subsequently, a larger necessary heat transfer area.
The pressure drop is significantly smaller than in conventional heat exchangers. Up to now, no case of vibration problems has been reported. The RODbaffle is a proprietary design with a registered trademark; for its use, a royalty payment is required. Tubes at entrance and exit areas are exposed to the highest velocities and are therefore susceptible to vibrations. To avoid damage, clearances or impingement plates can be provided (Figure 4.41). Impingement plates divert the incoming stream from directly impacting the first row of tubes. They can prevent erosion or cavitation, but do not reduce vibrations. The tube pitch and the layout pattern can be varied to avoid vibration. The larger the tube pitch, the lower are the cross-flow velocities, and flow-induced vibrations become less probable. However, the heat transfer deteriorates, and a larger diameter has to be chosen. For avoiding vortex shedding, it often proves successful to change the angle of the tube arrangement (60° pattern in Figure 4.13). An angle of 45° can be tried to avoid fluidelastic instability. The arrangement which is most prone to fluidelastic instability is the 90° square pattern.


Figure 4.41: Clearance under inlet nozzle and impingement plate. Courtesy of Heat Transfer Research, Inc.

Finally, an increase of the stiffness of the tubes can be tested, e. g. by choosing a material with a higher elastic modulus or by choosing a larger tube diameter or wall thickness. However, these items are in most cases determined by corrosivity issues or by the heat transfer itself, respectively [93]. Using commercial programs for heat exchanger design, empirical criteria must be applied to estimate whether vibrations are probable. For fluidelastic instability, the vibrations increase with increasing velocity on the shell side. One must take care to stay below 80 % of the critical velocity in the whole shell side. The most critical locations are the ones where the velocity is high, e. g. at the inlet nozzles or at the bundle entrance. To avoid fluidelastic instability, any measure of reducing the shellside velocity might be useful, e. g. increasing the shell diameter, choosing larger nozzles or increasing the pitch. It can also be recommended to increase the clearance between nozzle and tube bundle. Also, increasing the natural frequency by reduction of the baffle spacing or a change of the baffle type is another promising option. Acoustic vibrations occur in gas flows on the shell side. Normally, they do not cause severe damage, but the noise development itself is often a problem. The 45° tube angle is known to be prone to acoustic vibration; thus, it should be avoided for gas flow on the shell side. A detailed description of the tube vibration phenomena is given in [93] and [94].
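The benefit of a reduced baffle spacing can be illustrated with the simple proportionalities mentioned above (cross-flow velocity ∝ L⁻¹, natural frequency ∝ L⁻², and a Connors-type critical velocity growing proportionally with the natural frequency). This is a trend illustration only, not a design calculation:

```python
# Trend illustration only (no real design correlation): relative margin against
# fluidelastic instability when the baffle spacing L is reduced.
# v_cross ~ 1/L, natural frequency f_n ~ 1/L**2, critical velocity v_crit ~ f_n
# => stability ratio v_cross / v_crit ~ L

for scale in (1.0, 0.8, 0.5):          # baffle spacing relative to the base case
    v_cross = 1.0 / scale
    f_n = 1.0 / scale**2
    ratio = v_cross / f_n               # proportional to v / v_crit
    print(f"L x {scale:3.1f}: relative stability ratio {ratio:.2f}")
```

Halving the baffle spacing halves the stability ratio, which is why reducing the spacing makes fluidelastic instability less probable despite the higher cross-flow velocity.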

4.13 Heat transfer by radiation

Heat transfer can be accomplished by three modes, i. e. conduction, convection and radiation. While convection can be interpreted as a special case of conduction with moving phases, radiation in the form of electromagnetic waves must be considered as a completely different mechanism. It does not require any medium for propagation. This can be illustrated by the heat transfer from the sun to the earth through 150 million km of vacuum [300]. In principle, any surface radiates. The extreme case is the so-called black body. At a temperature T, the heat being emitted into the hemisphere over a surface F is

Q̇ = σ F T⁴

(4.38)

with σ as the radiation constant

σ = 5.67 ⋅ 10⁻⁸ W/(m² K⁴)

(4.39)

Equation (4.38) is the Stefan–Boltzmann law. The notation “black” has nothing to do with the optical color impression. For instance, it is a good approximation to regard the above-mentioned sun as a black body radiator with T = 5780 K. In expression (4.38), the heat transfer is proportional to T⁴, which indicates the great sensitivity of heat radiation to temperature and explains why it is extremely important at high temperatures. The energy of the black radiation has a typical frequency distribution according to Planck (Figure 4.42). The location of the maximum can be determined by Wien’s displacement law

λ = 2898 µm ⋅ K / T

(4.40)

where λ is the wavelength. Extensive presentations of the thermodynamics of radiation can be found in [71, 72, 300, 301].
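Equation (4.40) can be checked with the sun as an example (T = 5780 K, as mentioned above):

```python
def wien_lambda_max(T_K):
    """Wavelength of maximum black-body emission in micrometers, Eq. (4.40)."""
    return 2898.0 / T_K   # lambda = 2898 um K / T

# Sun as an approximate black body: the maximum lies in the visible range
print(round(wien_lambda_max(5780.0), 2))   # -> 0.5 um
```

At ambient temperatures (approx. 300 K), the maximum shifts to roughly 10 µm, which is why technical heat radiation takes place in the infrared.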

Figure 4.42: Frequency distribution of black body radiation according to Planck.

Technical surfaces do not behave like black but like gray radiators, where only part of the black radiation is emitted. In Equation (4.38), this is represented by consideration of the emission coefficient ε < 1:

Q̇ = σ ε F T⁴

(4.41)

ε can vary significantly. For nonmetals, ε ≈ 0.8 . . . 0.95 is often valid. For metals, ε is usually smaller and depends not only on the material but also on the surface quality. Values are given in [71], but often it cannot be assured that the actual case is really met. Fortunately, the effort to measure ε is comparably small. More complicated dependencies of ε on the wavelength are possible. Often, there are two regions with different ε, e. g. one for the short-wave range and one for the long-wave range. Transparent substances (e. g. glasses) can have more regions. Gases show certain wavelengths where they interact with radiation; the most well-known example is CO2 , which emits mainly at 2 µm, 4 µm and 15 µm [302]. To complete a heat transfer from one surface to another, the behavior of the receiver surface is also important. Part of the heat radiation impinging on a receiver surface will be absorbed (a), another part will be reflected (r), and, finally, a third part simply passes through the surface (d, e. g. glass). It is

a + r + d = 1

(4.42)

For a black body, a = 1. Gray radiators have a < 1, while d = 0 and r = 1 − a. For a certain wavelength λ, it is

ε(λ) = a(λ)

(4.43)

However, as emitter and receiver have different temperatures, ε and a are not necessarily identical. The heat transfer by radiation from a hot surface 1 to a cold surface 2 is the difference of the radiation emitted by 1 and received by 2 and the radiation emitted by 2 and received by 1. For the calculation it has to be considered that the intensity of the radiation depends on the angle between the radiation and the normal to the surface, both on the emitter and the receiver side. As these angles vary across the surface, a complex double integration over both surfaces is required. The result is summarized in the so-called view factor φ12 , meaning the part of the radiation emitted by surface 1 and received by surface 2. Therefore, the general structure for the calculation of the heat exchange between the two surfaces is

Q̇ 12 = Ẋ 1 φ12 − Ẋ 2 φ21

(4.44)

where Ẋ 1 and Ẋ 2 denote the radiations coming from the surfaces 1 and 2. They consist of three parts: emitted, reflected, and a part which passed through from behind. The formalism is thoroughly explained in [71]. The view factors are given for a number of technically relevant cases in [72], as well as the procedure for their evaluation. Equation (4.44) is often a subject of confusion. The term Ẋ 2 φ21 represents a heat flux from the cold surface 2 to the hot surface 1, which seems to be a violation of the Second Law of Thermodynamics. However, the net heat flux Q̇ 12 from the hot to the cold surface remains positive, and


this is decisive for the Second Law. The view factors are connected by the reciprocity relation

F1 φ12 = F2 φ21

(4.45)

Equation (4.44) can easily be extended to multiple surfaces. For some simple but technically relevant cases, it results in relationships which are easy to use. For the heat exchange between two infinite parallel gray plates, we get

Q̇ 12 = σ F (T1⁴ − T2⁴) / (1/ε1 + 1/ε2 − 1)

(4.46)

In case that surface 2 surrounds surface 1 completely (e. g. two concentric pipes), the result is

Q̇ 12 = σ F1 (T1⁴ − T2⁴) / [1/ε1 + (F1/F2) ⋅ (1/ε2 − 1)]

(4.47)

For F2 ≫ F1 , Equation (4.47) reduces to

Q̇ 12 = σ ε1 F1 (T1⁴ − T2⁴)

(4.48)
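A small sketch (with assumed emission coefficients and temperatures) shows how Equation (4.47) approaches the limit (4.48) when F2 ≫ F1:

```python
SIGMA = 5.67e-8  # radiation constant in W/(m2 K4)

def q_enclosed(eps1, eps2, F1, F2, T1, T2):
    """Radiative exchange, surface 2 completely surrounding surface 1, Eq. (4.47)."""
    denom = 1.0 / eps1 + (F1 / F2) * (1.0 / eps2 - 1.0)
    return SIGMA * F1 * (T1**4 - T2**4) / denom

# Assumed example: small hot surface (F1 = 1 m2) in increasingly large surroundings
eps1, eps2, T1, T2 = 0.8, 0.9, 400.0, 300.0
for F2 in (2.0, 10.0, 1000.0):
    print(F2, round(q_enclosed(eps1, eps2, 1.0, F2, T1, T2), 1))

# Limit F2 >> F1 according to Eq. (4.48): the emissivity of the enclosure drops out
print(round(SIGMA * eps1 * 1.0 * (T1**4 - T2**4), 1))
```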

Example A tank containing a liquid with ttank = 80 °C is located in a hall. The air in the hall, the inner walls, the floor and the ceiling shall have a temperature of thall = 25 °C. The tank has a convex shape, meaning that all of its radiation meets parts of the hall. Ftank = 30 m2 ≪ Fhall , εtank = 0.6. The heat transfer coefficient to nonmoving air shall be α = 5 W/m2 K. Calculate the heat loss of the tank. Is convective heat transfer or radiation dominating?

Solution
The convective heat transfer is evaluated to be

Q̇ conv = α Ftank (ttank − thall ) = 5 W/(m² K) ⋅ 30 m² ⋅ (80 − 25) K = 8.25 kW

Note that the evaluation of the radiation strictly requires absolute temperatures. Using Equation (4.48), one gets

Q̇ rad = σ εtank Ftank (Ttank⁴ − Thall⁴)
= 5.67 ⋅ 10⁻⁸ W/(m² K⁴) ⋅ 0.6 ⋅ 30 m² ⋅ (353.15⁴ − 298.15⁴) K⁴
= 7.81 kW


The total heat loss is

Q̇ = Q̇ conv + Q̇ rad = 8.25 kW + 7.81 kW = 16.06 kW

The two heat transfer mechanisms are of the same order of magnitude. Even at low temperatures, radiation can easily compete with natural convection to gases. Forced convection and natural convection to liquids usually give a significantly higher heat transfer than radiation.
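The example can be reproduced with a few lines (all values as given in the example):

```python
SIGMA = 5.67e-8          # radiation constant in W/(m2 K4)

# Data from the tank example
alpha, F, eps = 5.0, 30.0, 0.6
t_tank, t_hall = 80.0, 25.0              # in degC

q_conv = alpha * F * (t_tank - t_hall)   # convection, in W
q_rad = SIGMA * eps * F * ((t_tank + 273.15)**4 - (t_hall + 273.15)**4)

print(f"convection: {q_conv/1000:.2f} kW")            # 8.25 kW
print(f"radiation : {q_rad/1000:.2f} kW")             # 7.81 kW
print(f"total     : {(q_conv + q_rad)/1000:.2f} kW")  # 16.06 kW
```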

Example In the desert of Arabia, cooling air (ṁ = 100000 kg/h) is supposed to be transported through an air duct (L = 1000 m, D = 1.2 m). The air enters the line with Tenv = 323 K, p = 1 bar. The pressure drop shall be neglected. The air duct is directly exposed to sunlight (S = 1000 W/m²). The specific heat capacity of air shall be cp = 1 J/(g K). a) Calculate the outlet temperature Tout of the air for a stainless steel line. b) Calculate the outlet temperature Tout of the air for a white painted line. Measured values for the reflection coefficients depending on the wavelength can be estimated from Figure 4.43.

Figure 4.43: Measurement of the reflection coefficients.

Solution
The key to this evaluation is the assumption that the pipe wall is in a steady state. However, the temperature of the pipe varies along its length, as the heat transfer to the air flow is reduced due to the increase of the air temperature. Therefore, the line is divided into increments of 10 m length, within which it can be assumed that the air temperature remains approximately constant. Five heat fluxes contribute to the energy balance of a pipe increment (Figure 4.44):

Figure 4.44: Heat fluxes to and from the air duct.

– Heat flux from the pipe wall to the cooling air inside the line. This is forced convection; in a separate calculation, the heat transfer coefficient had been determined to be αin = 60 W/(m² K). With Fin = π ⋅ D ⋅ 10 m = 37.7 m², we get

Q̇ in = αin Fin (Tpipe − Tair )    (4.49)

where Tair is the outlet temperature of the previous increment.
– Heat flux due to natural convection (no wind assumed) from the pipe wall to the environmental air (Tenv = 323 K), with αout = 5 W/(m² K) and Fout = Fin :

Q̇ conv = αout Fout (Tpipe − Tenv )    (4.50)

– Heat radiation from the sun. The absorption coefficient of stainless steel can be estimated as a = 0.45 (Figure 4.43); the projected area is Fproj = D ⋅ 10 m = 12 m². For the white paint, the absorption coefficient is much lower (a = 0.14).

Q̇ sun = a Fproj S    (4.51)

– For the heat radiation exchange of the pipe with the environment, it is assessed that the upper half of the pipe exchanges radiation with the sky and the lower half with the sand. Equation (4.48) shall be valid in both cases. The sky can be treated like a black body with Tsky = 273 K [301], as well as the sand with Tsand = 348 K. The emission coefficient of the pipe can again be taken from Figure 4.43; this time, the long wavelength range must be regarded, with ε = 0.26 as result. ε is much higher (ε = 0.94) for the white paint.

Q̇ sky = σ εpipe (Fout/2) (Tpipe⁴ − Tsky⁴)    (4.52)
Q̇ sand = σ εpipe (Fout/2) (Tpipe⁴ − Tsand⁴)    (4.53)

Subsequently, the energy balance of the pipe wall is

Q̇ sun = Q̇ in + Q̇ conv + Q̇ sky + Q̇ sand    (4.54)

202 � 4 Heat exchange

The only unknown parameter is Tpipe in the increment, which is evaluated iteratively. Subsequently, all heat fluxes can be determined. The temperature of the air inside the pipe is update according to the energy balance Q̇ in = ṁ ⋅ cp ⋅ (Tout − Tenv )

(4.55)

Figure 4.45 illustrates the results. With the stainless steel surface, the wall temperature is slightly higher than the air temperature inside the pipe, which is increased by 10 K at the outlet. The white paint absorbs much less sunlight. Moreover, the radiation heat exchange with the environment is supported by the high emission coefficient. In the long wavelength range, the white paint behaves almost like a black radiator. The air temperature is hardly increased in the pipe, as is the pipe wall temperature itself. Table 4.4 gives an overview of the particular heat fluxes. As the white paint reflects much more of the sunlight, the temperatures of the tube wall and the air inside are closer to the environmental one, so that only a minor heat flux by convection occurs. The radiation in the long wavelength range becomes dominant, and with the white paint it is more effective. Therefore, air ducts should be painted white for cooling purposes (Figure 4.46).
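The incremental scheme described above can be sketched in Python. This is an illustrative reimplementation, not the author's original program; the function and variable names are chosen freely here, and a simple bisection stands in for the iterative evaluation of Tpipe from Equation (4.54).

```python
import math

SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/(m2 K4)

def duct_outlet_temperature(absorptivity, emissivity, m_dot=100000.0 / 3600.0,
                            cp=1000.0, D=1.2, L=1000.0, dx=10.0,
                            alpha_in=60.0, alpha_out=5.0, S=1000.0,
                            T_env=323.0, T_sky=273.0, T_sand=348.0):
    """March along the duct in increments of dx, solving Eq. (4.54) for the
    wall temperature in each increment and updating the air temperature."""
    F_in = math.pi * D * dx  # heat transfer area per increment (F_out = F_in)
    F_proj = D * dx          # projected area hit by the sun
    T_air = T_env
    for _ in range(int(L / dx)):
        def balance(T_p):    # residual of Q_sun - (Q_in + Q_conv + Q_sky + Q_sand)
            q_sun = absorptivity * F_proj * S
            q_in = alpha_in * F_in * (T_p - T_air)
            q_conv = alpha_out * F_in * (T_p - T_env)
            q_sky = SIGMA * emissivity * F_in / 2.0 * (T_p**4 - T_sky**4)
            q_sand = SIGMA * emissivity * F_in / 2.0 * (T_p**4 - T_sand**4)
            return q_sun - q_in - q_conv - q_sky - q_sand
        lo, hi = 250.0, 600.0  # balance() decreases with T_p, so bisect
        for _ in range(60):
            mid = 0.5 * (lo + hi)
            if balance(mid) > 0.0:
                lo = mid
            else:
                hi = mid
        T_pipe = 0.5 * (lo + hi)
        T_air += alpha_in * F_in * (T_pipe - T_air) / (m_dot * cp)
    return T_air

T_steel = duct_outlet_temperature(absorptivity=0.45, emissivity=0.26)
T_white = duct_outlet_temperature(absorptivity=0.14, emissivity=0.94)
# T_steel rises by roughly 10 K; the white duct stays close to 323 K
```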

Figure 4.45: Temperature courses in the air duct.

Table 4.4: Heat fluxes in air duct.

                 Q̇ in/kW   Q̇ conv/kW   Q̇ sun/kW   Q̇ sky/kW   Q̇ sand/kW
steel surface      299        140         −540        177         −76
white paint          6          3         −168        537        −378


Figure 4.46: Air duct with white paint during manufacturing.

In contrast to solid surfaces, the radiation of gases takes place only in certain wavelength ranges. Most of the simple gases like nitrogen, hydrogen, and the noble gases are completely transparent to heat radiation, and therefore they also do not emit. Polyatomic gases like water, carbon dioxide, carbon monoxide, sulphur dioxide, and hydrocarbons are quite effective radiators. Their emission coefficient depends mainly on the thickness of the gas layer and its partial pressure. The geometry of the gas layer must be converted to a hemispherical one. Then, the usual relations for the radiation of solid surfaces can be applied. Details can be found in [71, 72].

5 Distillation and absorption
The most important thermal separation process in technical applications is distillation. The name often causes some confusion. In basic lectures, "distillation" means a single stage comprising evaporation and condensation. The multiple arrangement of separation stages in a column is called "rectification". Although this term is sometimes used, the colloquial term used in industry for such a "rectification" is in fact "distillation", which is used in this book as well. The reason for the wide spread of distillation is the use of simple heat as a utility, which can easily be added and removed. The phase equilibrium between vapor and liquid is the foundation of distillation, where the density difference between the phases is so large that their separation is relatively easy. Distillation can be explained according to the following scheme. A simple separation stage consisting of evaporation and condensation achieves an enrichment of the low-boiling substances in the condensate, but according to the vapor-liquid equilibrium, all components occur in both phases. Generally, pure components are not obtained. Several stages are necessary for further purification. Figure 5.1 shows an example of such an arrangement. Considering a binary mixture, in the upper part the low-boiler is purified by repeatedly sending the overhead stream to a further separation stage. In principle, one can in fact obtain the low-boiling component in an arbitrarily specified concentration, and, accordingly, the heavy end as well, as shown in the lower part of Figure 5.1. However, the drawback of this process is obvious: Only small amounts of both light and heavy ends are obtained, as no use is made of the intermediate fractions where a number of separation stages have already been applied. This

Figure 5.1: Series of separation stages.


Figure 5.2: The prestage of a distillation column.

situation can be improved if these particular fractions are led to the next stage above or, respectively, below. Both condensers and evaporators can be left out if the vapors and condensates are directly forwarded to the next stage. Only at the ends of the sequence are an evaporator and a condenser necessary (Figure 5.2). Vapor and liquid are moving in countercurrent flow. A vertical arrangement of the stages then leads to the well-known distillation columns. The principle of countercurrent flow can be applied to more or less all thermal separation processes like extraction or even adsorption ("simulated moving bed", Chapter 7.2). Figure 5.3 shows the typical terms describing a distillation column. The mixture to be separated is continuously led into the column. It is called "feed". Several feeds are possible. The lower end of the column is called the "bottom". At the bottom, the reboiler generates vapor to provide the necessary heat input into the column. At the upper end of the column ("top"), the overhead stream is led into a condenser, which removes heat from the column at a lower temperature level. Part of the condensate is led back into the column ("reflux"), which forms the essential liquid flow from the top to the bottom of the column, giving countercurrent flow with the vapor. The rest of the condensate is removed from the column as a product, the "distillate". The ratio between reflux flow Ṙ and distillate flow Ḋ is called the reflux ratio ν:

ν = Ṙ/Ḋ   (5.1)

Figure 5.3: Distillation column and the most important terms.

The column part above the feed is called the rectifying section; the part below the feed is the stripping section. In the column there are internals which enable good contact and mass transfer between the vapor and liquid phases. From the mathematical point of view, absorption is no more than a special subset of the distillation arrangements. Absorption means the dissolution of a gas in a liquid [306]. It is supported by low temperatures and high pressures. If the gas consists of several components and if these components dissolve selectively in the liquid, absorption can be used for the separation of the components in the mixture. An example is the separation of SO2 from flue gases. The loaded liquid is usually regenerated and led back to the absorber (Figure 13.17). This is called desorption. It is supported by high temperatures and low pressures. This can be achieved by heating the solution in a reboiler (Figure 13.17), with an intermediate heat exchanger between the cold loaded and the hot unloaded liquid, or by stripping with an inert gas. One can distinguish between physical and chemical absorption. In physical absorption, the absorptive agent is a liquid which absorbs the gas by intermolecular forces. In chemical absorption, the absorbed gas is subject to a chemical reaction, e.g. CO2 in caustic soda solution. Generally, there are two options to perform distillation and absorption processes: packed columns and tray columns. In packed columns, there is continuous mass transfer along the column. Their advantage is the significantly lower pressure drop, which is the decisive criterion in vacuum distillations. In tray columns the mass transfer is performed stage-wise; their advantage is that they have no wetting problems and that they are less sensitive to fouling. Very good monographs describing distillation are the books of Baerns et al. [8], Kister [95, 96], Sattler [97], and Stichlmair and Fair [98].


The material costs of a distillation column depend on the amount of material for the column cylinder, which is proportional to both the diameter D and the height¹ H, and on the costs for the packing or trays, which are proportional to H and to D². H is determined by thermodynamics (number of separation stages), whereas D is determined by hydrodynamics (characteristics of the trays or the packing) and thermodynamics (determination of the internal flows inside the column).² Before starting, the influence of the pressure on a distillation column should be clarified: The higher the column pressure is, the lower are the volume flows. Therefore, higher pressure results in a higher capacity of the column. On the other hand, higher pressure usually (not always) results in a worse separation behavior due to the phase equilibrium, and therefore in lower purities of the products.

5.1 Thermodynamics of distillation and absorption columns
There are two ways for the calculation of distillation columns. The equilibrium calculation uses the presumption of a theoretical stage, which represents full development of the phase equilibrium on this stage. In practice, this assumption is not valid. A tray does not represent a theoretical stage. One can introduce efficiencies (Chapter 5.4), which are usually in the range of 2/3. Alternatively, the number of stages is reduced, e.g. 60 trays represent 40 theoretical stages. For packed columns, a certain packed height is taken for one theoretical stage (HETP value, see Glossary). The concept of the theoretical stage is very widely applied, but there are certain constellations where it leads to qualitatively and quantitatively bad results. In these cases, the mass transfer and the phase equilibrium area must be taken into account by the calculation. Still, the phase equilibrium is most important, as it determines the driving forces for the mass transfer. The application of these so-called rate-based models is obligatory for the calculation of absorber columns if high purities are required. The mass balance on a stage is the foundation of the calculation of distillation columns. Figure 5.4 shows a volume with a theoretical stage. Two streams are entering (Li−1 , Vi+1 ), and two streams are leaving (Li , Vi ). The streams leaving the stage are in phase equilibrium.³ For an equilibrium column with theoretical stages, the determination of the necessary number of stages and the reflux ratio is essential for the development of a distillation process. With the concept of the theoretical stage, this can be achieved by solving the so-called MESH equations (material balance, phase equilibrium, summation condition, heat balance). For a column with N stages and n components, these equations are:

1 Neglecting top and bottom.
2 Furthermore, the pressure has a significant influence on the wall thickness and, subsequently, on the material costs.
3 In process simulation, the numbering of the stages goes from top to bottom.


Figure 5.4: Mass balance on a tray.

– material balance for each component on each stage: n ⋅ N equations;
– phase equilibrium conditions for each component on each stage: n ⋅ N equations;
– summation conditions (∑ xi = 1, ∑ yi = 1) on each stage: 2N equations;
– heat balance on each stage: N equations.

All N ⋅ (2n + 3) equations have to be solved. The corresponding unknowns are the compositions xi and yi of each component (2n variables), the flows of liquid and vapor and the temperature of the two phases in equilibrium (3 variables), all of them on each stage (times N). For example, for a column with 60 stages and 20 components, there are altogether 60 ⋅ (2 ⋅ 20 + 3) = 2580 equations, most of them nonlinear. If chemical reactions occur (e.g. reactive distillation), additional equations have to be considered. The mathematics of solving this system of equations is described in [8] and [95]. Modern process simulators offer very stable and well-established algorithms for the solution. In case the column does not converge, it is often a trial-and-error procedure to test the various options. The convergence history and the error messages can give valuable information. The calculation of the column can only converge if there is both vapor and liquid on each stage. If the error messages or the profile indicate that vapor or liquid is missing on certain stages, one should change the specification in a way that the amount of the missing phase on these stages is increased. A typical error in the setup of a column is that the specified distillate or bottom flow is larger than the feed; in this case, the algorithm has of course no chance to find a solution. If column convergence does not work, it is often useful to change and simplify the column specification until its calculation converges and a profile is available, even if it is a completely wrong one. Slight variations of the specification towards the correct one often give a feeling for the sensitivities, and with a valid column profile as starting point it is easier to achieve convergence. Often, a specification "out of balance" occurs. Consider a mixture of 500 kg/h of component A (light end) and 500 kg/h of component B (heavy end). If e. g.
the bottom stream is defined to be 510 kg/h, one cannot expect it to be pure B, as it contains at least 10 kg/h A, corresponding to approx. 2 %. On the other hand, at least 10 kg/h of component A are lost.
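This feasibility check of a specification can be sketched as a small Python helper (illustrative only; the function name is invented here): given the feed of each component and the specified bottom flow, it returns the minimum achievable light-end fraction in the bottom product.

```python
def min_light_in_bottom(feed_light, feed_heavy, bottom_flow):
    """All flows in kg/h. If the specified bottom flow exceeds the amount
    of heavy end in the feed, the excess must consist of light end, no
    matter how many stages or how much reflux is applied."""
    excess = max(bottom_flow - feed_heavy, 0.0)
    return excess / bottom_flow

# 500 kg/h A + 500 kg/h B with a specified bottom flow of 510 kg/h:
x = min_light_in_bottom(500.0, 500.0, 510.0)
# x ≈ 0.0196, i.e. approx. 2 % light end in the bottom product
```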


One must be aware that a variation of the reflux ratio or the number of stages does not help at all in this case. Essentially there are two ways for the representation of a distillation column in the process simulator: the compact approach and the detailed approach. During process development, it is strongly recommended to use the compact approach (Figure 5.5) to reduce the effort when separation sequences are changed. When the process is fixed, it makes sense to move over to the detailed approach (Figure 5.6), where the results for the streams of the whole condenser system are accessible more easily so that they can directly be used for the design. The definition of so-called “pseudo-streams” to extract this

Figure 5.5: Compact column approach. Screen images of Aspen Plus® are reprinted with permission by Aspen Technology, Inc. AspenTech® , aspenONE® , Aspen Plus® , and the Aspen leaf logo are trademarks of Aspen Technology, Inc. All rights reserved.

Figure 5.6: Detailed column approach. Screen images of Aspen Plus® are reprinted with permission by Aspen Technology, Inc.

information is not necessary. Multiple condensers on different temperature levels can easily be specified. Convergence is certainly a larger effort but usually not more difficult. However, the reflux stream must be estimated first, otherwise the upper stages might dry out, causing column calculation to stop. Another disadvantage is that the reflux ratio cannot be entered directly; it must be converted into a split ratio of the stream COND.

5.2 Packed columns
In packed columns, a packing volume distributes the liquid entering the bed at the top and creates a large mass transfer area between the liquid and the vapor, which are in countercurrent flow. Large packing volumes are divided into several beds. Between the beds, the liquid is collected and redistributed by a collector/distributor unit. This is necessary because the liquid tends to run together within the packing and concentrate near the column wall. Figure 5.7 shows the principal constitution of a packed column.

Figure 5.7: Constitution of a packed column. © Sulzer Chemtech Ltd.

Packed columns are mostly used at low pressures, where many systems exhibit larger separation factors,⁴ making a design with a lower tower height possible. Because of the low pressure drop, a lower pressure can be achieved in the stripping section of the column, and it is possible to limit the bottom temperature. For temperature-sensitive substances, this is often important to avoid decomposition. Packed columns are sensitive to fouling but less sensitive to foaming systems than tray columns (at least random packings). Because of the low holdup, they are more sensitive to operation changes. The separation efficiency depends on the specific surface, the kind of packing, the packing material, and the kind of system. Packings can be made of metal, plastic, glass, graphite, or ceramic. There are two types of packed columns.
– Random packings: The development of random packing elements during recent decades is characterized by the aspiration to create structures that are more and more open (Figure 5.8). Packing elements of the 1st generation were spheres, cylindrical rings ("Raschig rings"), or saddles. They were easy to manufacture, but their pressure drop was still high. Furthermore, the packing elements of the 1st generation often suffered from maldistribution, and their use was limited to small columns with diameters below 500 mm. In the 2nd generation, the shell areas were penetrated, which already had a drastic effect on the pressure drop. The most popular packing element of this generation is the Pall ring, which is still in use. In the 1970s and 1980s, the packing elements of the 3rd generation came up, which consist only of the framework; due to the large free cross-section area, their pressure drop is even lower. The progress of this development is that the vapor load at constant column diameter can be significantly higher for the modern packing elements. Therefore, the change of the packing has become a standard option for a capacity increase. The progress in separation efficiency was comparably low.

4 The reason is that the ratio between the vapor pressures of the components is usually larger at low temperatures.
Figure 5.8: Raschig-Ring, Pall-Ring, and ENVIPAC: packing elements of the 1st, 2nd, and 3rd generation. © Raschig GmbH, © ENVIMAC Engineering GmbH.

The trend was continued in the 1990s with the development of the 4th generation packing elements with completely new shapes, giving extremely low pressure drops and high flooding points (Section 5.2). It must be emphasized that the importance of the distributor performance has increased, as the 4th generation packing elements have hardly any self-distribution. The most well-known example is the Raschig-Super-Ring (Figure 5.9). In 2018, with the Raschig-Super-Ring Plus, a further development was introduced. At first glance, there seems to be no difference; however, they behave differently when it comes to the random placement within the column [254]. The Raschig-Super-Ring usually lies on its side or stands "upright", whereas the Raschig-Super-Ring Plus leans diagonally, due to the different curve sequence (Figure 5.10). Therefore, the random packing layer in the column has a different structure. The free cross-flow area for the vapor flow perpendicular to the flow direction is larger, giving once again a lower pressure drop. Tests showed that the pressure drop could be lowered by 10 %, which gives a further opportunity to increase the throughput.

Figure 5.9: Raschig-Super-Ring as an example for a 4th generation packing element. © Raschig GmbH.

Figure 5.10: Raschig-Super-Ring and Raschig-Super-Ring Plus put on an even board.

The larger the nominal size of a packing element is, the lower is its pressure drop, but also its specific surface and therefore its separation efficiency. The nominal size of a packing element should be lower than 10 % of the column diameter. Otherwise, there are too many empty volumes in the area near the wall, where the liquid has no sufficient contact with the vapor phase.
– Structured packings: Structured packings have a regular geometry. They need appropriate distributors; if this is ensured, there is hardly any stream formation or wall effect, i.e. they are not prone to maldistribution. Moreover, structured packings are equipped with wiper rings, which convey the liquid away from the column wall. However, these wipers are even more important to prevent gas from bypassing the packings by flowing along the wall; the amounts of gas bypassing can be significant. At relatively low liquid loads (< 20 m³/(m² h), see next paragraph), structured packings are more effective than random packings. The liquid films formed are thinner than in random packings. The separation efficiency does not depend on the column diameter. Structured packings have a larger maximum load, a better efficiency, and lower pressure drops (< 3 mbar/m) than random packings. An example is the Sulzer Mellapak (Figure 5.11). At liquid loads < 10 m³/(m² h), wired packings (e.g. Montz A3, Sulzer BX) are another alternative, having an even lower HETP⁵ value (0.1–0.2 m, i.e. 5–8 theoretical stages per m) and even less pressure drop. On the other hand, wired packings must be well wetted, which is often not the case for aqueous systems. Furthermore, they are significantly more expensive and extremely sensitive

Figure 5.11: Sulzer Mellapak as an example for structured packings. © Sulzer Chemtech Ltd.

5 height equivalent of one theoretical plate, see Glossary.

to fouling. At higher liquid loads, their application does not make sense; their advantages do not become effective. It should be noted that structured packings are not more effective in general. Random packing is clearly the better choice at high liquid loads and if a larger holdup is required to generate residence time on the packing. For aqueous systems and low liquid loads < 1 m³/(m² h), poor wetting is expected, but special materials and geometries are available. The Sulzer packing AYPlus DC can cope with these conditions and still yields an acceptable performance. Operation temperatures up to 300 °C are possible. Structured packings have had two generations: In the 1990s, the packing manufacturers found out that the liquid accumulates at the points where the packing layers have contact, which initiates flooding. In the second generation (e.g. Mellapak 252Y), the transitions were designed to be smoother and the contact points were reduced.
Packed columns can fail if the vapor load is too high or the liquid load is too low. The vapor load is represented by the so-called F-factor:

F = w √ρV   (5.2)

where w is the vapor velocity referring to the free cross-flow area and ρV is the vapor density. It represents the square root of the kinetic energy of the vapor. Its unit Pa^0.5 is usually omitted. A reasonable order of magnitude of the F-factor is F = 2. F = 0.5 would represent a relatively low vapor load, F = 3 a relatively high one. Alternatively, the C-factor is often used [95]:

C = w √(ρV /(ρL − ρV ))   (5.3)

The liquid load is the most important parameter for the liquid. It is defined by

B = liquid volume flow / cross-section area   (5.4)
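Equations (5.2)–(5.4) are easily scripted, e.g. in Python. This is only a sketch; the helper names are chosen freely here.

```python
import math

def f_factor(w, rho_v):
    """Vapor load F = w * sqrt(rho_v), Eq. (5.2); w in m/s, rho_v in kg/m3."""
    return w * math.sqrt(rho_v)

def c_factor(w, rho_v, rho_l):
    """C-factor, Eq. (5.3): C = w * sqrt(rho_v / (rho_l - rho_v))."""
    return w * math.sqrt(rho_v / (rho_l - rho_v))

def liquid_load(v_dot_l, diameter):
    """Liquid load B, Eq. (5.4): liquid volume flow in m3/h divided by the
    column cross-section area in m2, giving m3/(m2 h)."""
    return v_dot_l / (math.pi / 4.0 * diameter**2)

# e.g. w = 2 m/s of vapor with rho_v = 1 kg/m3 gives the "reasonable" F = 2
```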

Liquid loads can strongly differ; their range covers 100 m³/(m² h) as a very high load, 10–40 m³/(m² h) as "normal" liquid loads, and 0.5–5 m³/(m² h) as low ones. The liquid load can also be interpreted as the superficial velocity of the liquid, referring to the cross-flow area. The upper limit for the vapor load is the flood point, where the countercurrent flow between vapor and liquid breaks down. In this case, a froth layer is formed, and the liquid is accumulated and finally carried over to the top. The flood point decreases with increasing liquid load, as the free cross-section area for the vapor flow is more and more filled with liquid. The minimum liquid load is a less strict criterion. If it is not reached, the wetting of the packing is so bad that the efficiency decreases significantly. As a rule of thumb, for random packings liquid loads of 10 m³/(m² h) for aqueous systems and 5 m³/(m² h) for organic systems can be considered as limiting values. For structured packings, nowadays very low liquid loads down to 0.2 m³/(m² h) can be realized. The limiting factor is the quality of the distributor. One must always be aware that distributor and packing form a package which should be in the hand of one vendor. A badly chosen distributor type can have a significant influence on the performance of the packing [99]. For the design of packed columns, their separation efficiency has to be regarded first. Manufacturers usually give HETP values as a function of the vapor and the liquid load. Often HETP also depends on the pressure and the kind of the system. Kister [100] gives the following rules of thumb for estimating HETP.

– For random packings:

  HETP = L ⋅ 93/ap   (5.5)

– For structured packings:

  HETP = K ⋅ L ⋅ (0.1 m + 100/ap )   (5.6)

with ap as the specific surface area. Equation (5.5) is valid for modern (i.e. at least Pall rings) random packings with nominal diameters of 1″⁶ or larger. Unexpectedly, it has been found that random packing elements smaller than 1″ do not necessarily have a lower HETP value [100], probably due to maldistribution effects. The factors K and L can be set as:

L = 1     for σ < 25 mN/m (usual organic systems)
L = 1.5   for σ ≈ 40 mN/m (amine and glycol systems)
L = 2     for σ ≈ 70 mN/m (aqueous systems)
K = 1     for Y structured packings (45° inclination to the horizontal, e.g. Mellapak 250 Y)
K = 1.45  for X structured packings (60° inclination to the horizontal, e.g. Mellapak 250 X), ap ≤ 300 m²/m³
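The two rules of thumb can be sketched in Python as follows (illustrative only; the function names are chosen freely here).

```python
def hetp_random(a_p, L=1.0):
    """Kister's rule of thumb, Eq. (5.5), for modern random packings of
    nominal size >= 1 inch; a_p in m2/m3, HETP in m."""
    return L * 93.0 / a_p

def hetp_structured(a_p, K=1.0, L=1.0):
    """Rule of thumb for structured packings, Eq. (5.6)."""
    return K * L * (0.1 + 100.0 / a_p)

# e.g. a Y-type structured packing with a_p = 250 m2/m3 in an organic
# system (K = 1, L = 1) yields HETP = 0.5 m, i.e. 2 theoretical stages per m
```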

The effect of pressure on packing efficiency is widely discussed but poorly understood, as maldistribution effects can always be involved as well. The prevailing opinion [95] is that the effect of pressure is low, at least at pressures p > 100 mbar. Below this value, it is suspected that the efficiency decreases for random packings, whereas for structured packings a decrease at high pressures (p > 15–20 bar) has been observed.

6 ″ = inch, 1″ = 25.4 mm. Another abbreviation is "in".

With the HETP value, one can assign theoretical stages to the particular sections of the column and simulate it with an equilibrium model. One should keep in mind that there are influences on the packing performance which are not covered by any calculation. The HETP value can be larger due to layers on the packing or bad wettability, which is often the case for aqueous systems. After making a first guess, the results should be discussed with the packing vendor to make sure that the required number of stages per m is met. The HETP values are determined by means of test measurements with an almost ideal, narrow-boiling binary mixture, e.g. chlorobenzene/ethyl benzene (Figure 5.12), cyclohexane/n-heptane, or isobutane/n-butane [99, 100]. The separations of these mixtures are sensitive to the number of stages, so that the evaluation is not difficult.

Figure 5.12: Txy diagram for the system chlorobenzene/ethyl benzene at p = 100 mbar.

From process simulation, one gets the liquid and vapor flows which encounter each other between the stages, including their properties like density and viscosity. They are the basis for the hydrodynamic calculation, which determines the column diameter. Usually, several load cases have to be regarded. The hydrodynamic considerations are working with simplified physical models (channels, particles circulated around, particle clusters). These models comprise equations for the flood point, the pressure drop, and the holdup, i. e. the liquid content of the packing during operation. They depend on each other in a complex way. Figure 5.14 illustrates the courses of pressure drop and holdup as functions of vapor and liquid load. The larger the liquid load, the larger is also the holdup of the packing. The holdup is the relative liquid content of the packing; it is a key quantity for the calculation models. It causes the pressure drop to rise, as for the flow of the vapor less cross-flow area is available. One can see that at the load point the drastic rise of pressure drop and holdup starts (Figure 5.14). These courses are difficult to reproduce with the calculation models. Therefore, a number of parameters adjustable to experimental data of the packing must


Figure 5.13: Montz Type S distributor for very low liquid loads. Courtesy of Julius Montz GmbH.

be introduced. The most popular models are the one by Stichlmair [102], its further development by Engel [103], and the one of Billet and Schultes [104]. It is important to note that the correlations used have limited physical value and are optimized for use within the particular model. It makes no sense to mix them, e.g. to calculate the holdup according to Billet/Schultes and the pressure drop according to Engel. The hydrodynamic calculation starts with the thermodynamic calculation of the column, which evaluates the liquid and vapor flows going from stage to stage. Using these loads, it is then checked whether a specified packing type fulfills a number of criteria. The design criteria for a packing are as follows.
– Distance to flood point: The flood point denotes the vapor load where the liquid is accumulated in the packing and finally carried over the top. It depends on the liquid load. The particular calculation models [103, 104] have a built-in flood-point correlation. Their accuracy can be estimated to be ± 30 %. The Kister–Gill correlation [95] might be slightly more accurate but uses a packing-specific parameter. Therefore, the strategy is to set the vapor load in the case with the maximum load to 70 % to be on the safe side. One should try to get close to these 70 % and not stay below them with additional safety margins. Otherwise, the vapor load in the minimum case might be too low. Furthermore, one should have in mind that the uncertainty of ± 30 % could also mean that 130 % of the calculated value could be the true flood point. Therefore, it might happen that the packing does not perform very well if the load is not adequate. Packing vendors often have more experience with the use of their products and can sometimes take the responsibility for a design closer to the flooding point than 70 %.


Figure 5.14: Course of pressure drop and holdup as a function of vapor and liquid load. Courtesy of Prof. Dr. J. Stichlmair.


System flooding: System flooding occurs if even large droplets are carried over the top by the vapor flow without the influence of column equipment, i. e. in the empty column. A correlation is given in [100] and [101]. Normally, system flooding is the last criterion which indicates flooding. It is, however, relevant in packings with very open structures, e. g. Mellapak 125X. Another application is the check of flooding data from vendors, which are definitely too optimistic if they exceed the system flooding limit. Load point: The load point refers to the vapor flow where the vapor starts to influence the shape of the liquid film. At this point, the holdup starts to increase strongly with the vapor flow rate. The efficiency of the packing is at its maximum, and one is far enough away from the flood point. One should try to operate the column at this load point, but as a design criterion it is not useful. Sufficiently low pressure drop: The pressure drop can be a direct criterion for the diameter of the column. In many applications, there is a limitation for the bottom temperature to avoid decomposition. In these cases, a certain pressure drop over the entire column must not be exceeded, as the pressure in the condenser is usually fixed. There are two pressure drops: the dry pressure drop and the pressure drop of the irrigated packing, which is the relevant one for design and where the dry pressure drop has a contribution. The pressure drop increases first linearly with increasing vapor load, with the liquid load as an additional parameter. Beyond the load point, the pressure drop rises more rapidly, and at the flood point theory says that it is infinite. In practice, flooding or, respectively, inoperability of the column is reached already at finite pressure drops. For the Sulzer packings, there is even a special model where flooding is defined as the vapor load where the pressure drop is 12 mbar/m [105]. 
– Minimum liquid load: A minimum liquid load according to packing and distributor should be kept to avoid maldistribution. A lot of packing operation problems are due to maldistribution [106]. An adequate distributor must be chosen. Special distributors can handle very low liquid loads down to 0.04 m³/(m² h) (Figure 5.13). A reference value for the number of droplet holes is 60–150 per m². As a rule of thumb, redistribution should take place after a packed bed length of 6 m, with additional space requirement for collector and distributor. The need for redistribution can vary: for thin columns, the accumulation of liquid at the column wall is more pronounced, so that redistribution must take place earlier; for high liquid loads, the packed bed length might be extended. In any case, it is strongly recommended to ask the manufacturer about an appropriate maximum length of a packing bed. For vacuum applications, the pressure drop of the distributor might be relevant. As correlations do not make too much sense due to the wide variety of distributor types, a value of Δp = 1 mbar is usually reasonable as a first guess.
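The minimum-liquid-load check is easy to carry out numerically. The sketch below is a minimal illustration (the function name and the example numbers are ours, not from the text), assuming the specific liquid load is referred to the empty-column cross-section and compared against the 0.04 m³/(m² h) limit of special low-load distributors mentioned above:

```python
import math

def specific_liquid_load(volume_flow_m3_h: float, column_diameter_m: float) -> float:
    """Specific liquid load in m3/(m2 h), referred to the empty-column cross-section."""
    cross_section = math.pi / 4.0 * column_diameter_m ** 2
    return volume_flow_m3_h / cross_section

# Invented example: 2 m3/h of liquid on a 1.2 m diameter column
load = specific_liquid_load(2.0, 1.2)
print(f"{load:.2f} m3/(m2 h)",
      "OK for a special low-load distributor" if load >= 0.04 else "below distributor minimum")
```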

220 � 5 Distillation and absorption

Figure 5.15: Load diagram for a packing.

One should always keep in mind that hydrodynamic calculations do not claim to be very accurate; an uncertainty of ± 30 % is a reasonable assumption. Figure 5.15 shows the load diagram of a packing, where the flooding line is depicted as a function of the liquid load. It can be seen that a minimum liquid load is necessary to reach the operation region. With increasing liquid load, the vapor load at the flooding point decreases, at high liquid loads even rapidly. It should be mentioned that the design criterion of 70 % of the vapor load at the flood point (flooding factor 70 %) can be interpreted in two ways. It can mean that flooding occurs when both vapor and liquid load are increased simultaneously from 70 % to 100 %; this flooding factor is called FFLG and is usually relevant for the design of distillation columns. Alternatively, a flooding factor can be defined such that flooding occurs when only the vapor load is increased from 70 % to 100 % while the liquid load is kept constant (FFL); this is a reasonable quantity for the design of absorption columns. The engineer must decide which one is more relevant for the particular application case. Some aspects have to be considered when systems tend to foam:
– Generally, packed columns are less sensitive to foaming than tray columns, as the contact between vapor and liquid phase is less intensive.
– Liquid and vapor velocities are smaller in packed columns; both decrease foaming.
– Large dimensions (column diameter, random packing size) should be preferred.
– High pressure reduces foaming.
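The difference between the two flooding factors can be made concrete with a small numerical sketch. The flood line below is a purely hypothetical linear correlation (slope and intercept invented for illustration only); FFL is a simple ratio at constant liquid load, while FFLG requires finding the point where the operating ray from the origin crosses the flood line:

```python
def ffl(f_op: float, l_op: float, flood_line) -> float:
    """Flooding factor FFL: only the vapor load is raised to flooding,
    the liquid load stays constant (absorption design view)."""
    return f_op / flood_line(l_op)

def fflg(f_op: float, l_op: float, flood_line, s_hi: float = 10.0, tol: float = 1e-8) -> float:
    """Flooding factor FFLG: vapor and liquid load are scaled together
    (distillation design view). Bisection finds the scale factor s at
    which s*f_op hits the flood line evaluated at s*l_op."""
    s_lo = 1e-9
    while s_hi - s_lo > tol:
        s = 0.5 * (s_lo + s_hi)
        if s * f_op < flood_line(s * l_op):
            s_lo = s      # still below the flood line
        else:
            s_hi = s
    return 1.0 / s_lo

# Purely hypothetical linear flood line: max. vapor load decreasing with liquid load
flood = lambda liquid_load: 3.0 - 0.05 * liquid_load

print(f"FFL  = {ffl(1.8, 10.0, flood):.3f}")
print(f"FFLG = {fflg(1.8, 10.0, flood):.3f}")
```

Note that FFLG is larger than FFL for the same operating point, since scaling up the liquid load as well reaches the flood line sooner.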


5.3 Maldistribution in packed columns

Rate-based calculations for random or structured packings are not really considered to be trustworthy. Although the theory seems to be well-defined, large uncertainties occur. The larger the packing height, the more the always assumed plug flow pattern for the liquid deviates from the real distribution. This uneven distribution is generally called maldistribution, and it is responsible for a significant deterioration of the mass transfer. Currently, its prediction is hardly possible; there are just qualitative indications about the particular dependencies. While liquid distribution is currently the subject of various investigations, it is widely accepted that the vapor distribution in a packed column is more or less homogeneous. Vapor distribution is a matter of pressure drop: pressure-drop differences over the cross-flow area of the packing will result in less flow in regions with higher pressure drop and more flow in regions with lower pressure drop, ending up in an equal distribution of the vapor as long as the conditions in the particular channels are the same. However, in the case of liquid maldistribution the vapor prefers to take the channels with less liquid, as the liquid occupies part of the free cross-flow area. Channels with more liquid are therefore narrower, causing a larger pressure drop, which is in turn equalized by lowering the flow. Exceptions are columns with large diameters and low heights, where a special vapor distributor might be useful. Mainly, there are two different kinds of maldistribution: rivulet formation and the wall effect [275]. Rivulet formation is the merging of the liquid flow into larger rivulets. The effect increases with larger packing heights. Its reason is the surface tension of the liquid; the rivulet formation decreases the surface but, subsequently, also the mass transfer area between vapor and liquid.
Rivulet formation is considered a local phenomenon at single packing elements (small-scale maldistribution). The wall effect is the tendency of the liquid to accumulate at the column wall, where the flow resistance is lower than in the central region. It is effective in large parts of the column (large-scale maldistribution), especially in randomly packed columns. In structured packings there are wall deflector sheets which prevent the wall effect. Large-scale maldistribution can, of course, also be caused by an inadequate distributor which does not fit the packing. Moreover, after certain packing heights a collector-distributor unit must be installed, achieving a redistribution in the packing. Normally, rules of thumb for appropriate packing bed heights are available in the guidelines of the particular engineering departments, ranging from 6 to 8 m. For small column diameters, lower packing bed heights should be taken into account. Collector-distributor units do not contribute to the mass transfer but cause additional pressure drop, additional column height and further investment costs. There is a large potential for improvement if the maldistribution of the liquid could be predicted; therefore, various attempts have been published [275]. The investigation principle is to install a liquid collector, divided into segments, under a certain packing height (Figure 5.16). For the investigation of the wall ef-


Figure 5.16: Collector for the investigation of the maldistribution.

fect, a thin ring segment is installed at the column wall. Specially shaped segments can be used to examine the small-scale maldistribution. Tracer substances can be used to investigate the radial mixing in the packing and the residence time distribution. A new approach is the attachment of sensors below the packing, with which time-dependent effects can also be investigated [274]. Recent investigations [275] show the following tendencies:
– At constant liquid load, the maldistribution increases with increasing vapor load, especially the wall effect. Beyond the load point, it increases drastically.
– The dependence on the liquid load is not as distinct as on the vapor load. Generally, the maldistribution decreases with increasing liquid load.
These tendencies are illustrated in Figure 5.17, where Mf is the maldistribution factor

Mf = Σ (i = 1 … k) (|Bi − B| / B) (Ai / AK)    (5.7)

Figure 5.17: Maldistribution factor as a function of liquid and vapor load.


with
B … liquid load
Bi … liquid load in segment i
Ai … cross-flow area of segment i
AK … column cross-flow area
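Equation (5.7) can be evaluated directly from collector measurements. The sketch below is a minimal illustration (the function name is ours; as one possible convention, the mean load B is computed here as the area-weighted mean of the segment loads):

```python
def maldistribution_factor(segment_loads, segment_areas):
    """Maldistribution factor Mf according to Eq. (5.7):
    Mf = sum_i (|B_i - B| / B) * (A_i / A_K)."""
    a_k = sum(segment_areas)
    # one possible convention: B as the area-weighted mean of the segment loads
    b = sum(bi * ai for bi, ai in zip(segment_loads, segment_areas)) / a_k
    return sum(abs(bi - b) / b * ai / a_k
               for bi, ai in zip(segment_loads, segment_areas))

# Invented example: four equal-area collector segments, one running above the others
mf = maldistribution_factor([10.0, 10.0, 10.0, 14.0], [1.0, 1.0, 1.0, 1.0])
print(f"Mf = {mf:.3f}")   # Mf = 0 would be a perfectly uniform distribution
```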

5.4 Tray columns The lack of a reliable calculation method does not cancel the necessity of providing a reasonable design for a tray. (Volker Engel)

In tray columns, a stage-wise mass transfer takes place by means of horizontal trays (Figure 5.18). A weir causes the accumulation of liquid coming from the tray above. Over this weir, the liquid leaving the tray goes into the downcomer to the tray below. The main function of the downcomer is to collect the liquid from the active area, degas the liquid, lead it to the next tray below and seal the path of the liquid against vapor bypass [311]. The vapor can rise from tray to tray through openings characterizing the type of tray. When it passes through the liquid, it is split into small bubbles with a large mass transfer area and distributed in the liquid. A so-called froth area is formed, giving good preconditions for an intensive mass transfer. In fact, there are three ways in which a tray can be operated:

Figure 5.18: Constitution of a tray column. © Sulzer Chemtech Ltd.

– In the bubble regime the liquid is the continuous phase. The vapor as the disperse phase is rising through the liquid as bubbles. The bubble regime often occurs in high-pressure applications.
– The froth regime is the preferred one. A froth layer with a large interphase between vapor and liquid is formed. Both phases are more or less continuous.
– At high vapor loads, low liquid loads or in vacuum, trays often operate in the spray regime; in this case, the vapor phase is the continuous one. Note that the application range of many correlations does not cover the spray regime. In the spray regime, weeping (Section 5.4) and unsealed downcomers (Section 5.4) should be avoided.7

In [95], the limiting formulas for the occurrence of the particular regimes are given. One should always try to obtain the froth regime; the spray regime should be avoided if possible. Essentially, there are three classical types of trays.
– Sieve trays: Through a sieve tray, the vapor from the tray below gets into the liquid through holes in the plate of the tray and is dispersed in the liquid. Today, relatively large holes up to d = 5–13 mm are used to avoid blocking by fouling. The fractional hole area is in the range 5–15 %. When the fractional hole area is kept constant, small holes are preferable because of the lower pressure drop, but their manufacturing costs are higher. Of course, there are exceptions to these rules, e. g. cryogenic cases where the hole diameters are less than 1 mm and absorption trays with 3 % fractional hole area [310]. The vapor prevents the liquid from leaving the tray through the holes (weeping); however, a sufficient vapor velocity is necessary for this purpose. After shutdown, sieve trays are emptied through the holes. Sieve trays (Figure 5.19) are quite simple to manufacture and have a relatively low pressure drop (5–8 mbar). They are also easy to clean. As the bubbles entering the tray from below are not diverted, sieve trays are prone to entrainment and jet flood, meaning that droplets from the froth layer get to the tray above. As this liquid is transported in the wrong direction, the efficiency of the tray can be significantly lowered. Therefore, the tray spacing is at least 400 mm. Sieve trays are not well suited for large load variations. The turndown, i. e. the ratio of maximum and minimum load, is approx. 2 : 1 [107]. Due to their simplicity, sieve trays are easy to specify and to calculate. The tray efficiency of sieve trays is as good as for other tray types. However, they are rarely suggested by vendors, as they are not proprietary solutions [310].
– Bubble cap trays: The principle of a bubble cap tray is that the rising vapor entering the tray above is diverted by the bubble cap located above the holes (Figure 5.20). It enters the liquid parallel to the tray, in contrast to the sieve tray. Therefore, the entrainment is

7 The guideline in the spray regime is: Hold on to your liquid!


Figure 5.19: Sieve tray. Courtesy of Ludwig Michl GmbH.



comparably low. Bubble cap trays are not emptied after shutdown. They have a good efficiency and a wide load range. However, the pressure drop and the tendencies to fouling and corrosion are relatively high, the manufacturing is expensive, and the cleaning is difficult. Because of the large number of types, bubble cap trays are also difficult to specify. The hydrodynamic calculations are complicated [108–111, 288] and require a large number of input parameters. Bubble cap trays are nowadays mainly used for handling low liquid loads, where they are often the only alternative [307]. Also, some special constructions like the Bayer-Flachglocke (Figure 5.21) are still considered to be indispensable. Using extremely low weir heights, the pressure drop and the holdup on the tray can be kept very low. Bubble cap trays usually have a relative free area of 5–10 %. The tray spacing should be 500 mm and more, especially for large tower diameters.
– Valve trays: On valve trays, the holes of the tray are covered by movable valves (Figure 5.22). Similarly to the bubble cap tray, the vapor enters the liquid parallel to the tray, giving less entrainment. Moreover, valve trays do not tend to weep [308]. The lift of the valve determines the opening area for the vapor; it is self-adjusting to the vapor load. Figure 5.24 shows the typical pressure drop characteristics of a valve tray. At low vapor

226 � 5 Distillation and absorption

Figure 5.20: Bubble cap tray. Courtesy of Ludwig Michl GmbH.

Figure 5.21: Bayer-Flachglocke. Courtesy of Ludwig Michl GmbH.

loads, the valves are fully closed, and the vapor enters the tray through the open crevices or by opening single valves. The pressure drop rises with increasing vapor load. At the point CBP (closed balance point), the valves begin to open. A further increase of the vapor load leads to a wider opening of the valves, and the pressure drop stays approximately constant. After they are fully open (point OBP, open balance point), the pressure drop rises again with increasing vapor load. The operation point should be above the OBP, as the movement of the valves, which causes high abrasion, should not take


Figure 5.22: Valve tray. Courtesy of Ludwig Michl GmbH.

place in normal operation [308]. Thus, the valves are in some way self-adjusting to the vapor load. This is the main advantage of valve trays. Their turndown is much higher than that of sieve trays (approx. 4.5 : 1 [107]). In fact, in Figure 5.22 there are two kinds of valves. One sort is equipped with an additional plate on the top, giving extra weight. At low loads only the light valves will open, and at high loads the heavy ones work as well. This option gives even more flexibility at differing loads. The disadvantage of valve trays arises from their advantage, i. e. there are movable parts which are subject to attrition. Therefore, a higher maintenance effort is necessary. There are several valve types. The most common ones are the legged type and the caged type (Figure 5.23). The V1 valve is of the legged type; three legs guide the valve in the opening and limit its lift. A caged valve like the T valve consists of a moving plate and a static cage (Figure 5.23). It causes more turbulence; therefore, it is used in fouling services. Moreover, its pressure drop is lower. There are a number of other valve types, e. g. rectangular valves, mini valves or double disk valves [308]. The pressure drop of valve trays is between those of bubble cap and sieve trays. Weeping can usually be avoided. Valve trays are sensitive to fouling. The maximum fractional hole area is 13–14 %. The tray spacing is usually 450–550 mm; lower values like 300 mm are possible. Valve trays are 20–100 % more expensive than sieve trays [107, 308]. There are many types of valve trays available. The data of various manufacturers have been collected in professional software programs for


Figure 5.23: V1-valve (left) and T-valve (right).

Figure 5.24: Typical course of the pressure drop of a valve tray. Courtesy of WelChem GmbH.



hydrodynamic calculations. Valve trays are the most common trays; their market share is approximately 70 % [95].
– Fixed valve trays: Fixed valve trays are, in principle, sieve trays with a roof over the sieve holes (Figure 5.26). The valves do not move during operation, as the name suggests. The roofs prevent some entrainment, as the vapor is at least diverted to the side openings. Therefore, fixed valve trays reduce entrainment and increase the capacity compared to sieve trays [309], and they are often used for capacity increase in revamps. The disadvantage is that, due to the large openings, the weeping limit is similar to or even worse than that of sieve trays. There are two main types of fixed valve trays: the round ones like the VG0 and the trapezoid-shaped ones like the MVG (Figure 5.25). The maximum fractional hole area is 12–13 %. Compared to sieve trays, the investment costs are approx. 10 % higher, and the main operational advantage is a better turndown (approx. 2.5 : 1) [107].


Figure 5.25: Round fixed valve VG0 (left) and trapezoid fixed valve MVG (right).

Figure 5.26: Fixed valve tray. Courtesy of Ludwig Michl GmbH.



– Dualflow trays: The dualflow tray is more or less a sieve tray without downcomer and weir. Vapor and liquid go through the holes in countercurrent flow. The liquid gets down to the tray below by weeping, which limits the vapor load. Dualflow trays are often used when the capacity of an existing column has to be increased; this is achieved by using the cross-flow area of the downcomer as active area as well. Dualflow trays are well suited for systems tending to polymerization. Due to the missing downcomer, they are slightly less expensive than sieve trays, but their turndown is supposed to be lower (approx. 1.5 : 1 [107]). The efficiency of a dualflow tray is approximately 80 % of that of a normal tray.
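The turndown ratios and cost indications quoted above for the different tray types can be collected in a small screening table. The numbers below are the ones cited in the text ([107] and related passages); the helper function and the midpoint chosen for the valve-tray cost are our own illustration:

```python
# Turndown ratios and cost indications quoted in the text [107, 308, 309]
TRAY_TYPES = {
    "sieve":       {"turndown": 2.0, "relative_cost": 1.0},
    "fixed valve": {"turndown": 2.5, "relative_cost": 1.1},   # approx. +10 % vs. sieve
    "valve":       {"turndown": 4.5, "relative_cost": 1.5},   # +20..100 % vs. sieve, midpoint ours
    "dualflow":    {"turndown": 1.5, "relative_cost": 0.95},  # slightly cheaper than sieve
}

def trays_for_turndown(required: float) -> list:
    """Tray types whose quoted turndown covers the required load ratio."""
    return sorted(name for name, data in TRAY_TYPES.items()
                  if data["turndown"] >= required)

print(trays_for_turndown(2.5))
```

Such a table is only a first filter; regime, fouling tendency and pressure drop considerations from the text still decide the actual selection.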

In recent years, several vendors have developed trays with significantly improved performance, the so-called high-performance trays. Well-known examples are the SUPERFRAC™ XT tray from Koch-Glitsch (Figure 5.27) or the UFMPlus™ tray from Sulzer ChemTech. Both capacity and efficiency are superior to conventional trays, and it is possible to reduce both column height and diameter drastically. Especially for large


Figure 5.27: 8-pass SUPERFRAC® XT tray. Courtesy of Koch-Glitsch LP, Wichita, Kansas.

columns like C3 splitters, with heights of approx. 100 m and diameters of 8–10 m, high-performance trays are an interesting and meanwhile well-established option for revamps or for reducing investment costs. For modern high-performance trays, the downcomer, active area and inlet area have been optimized. Areas where the liquid is not moving are often equipped with so-called push valves. Push valves are fixed valves where the opening area leads the vapor into a defined direction so that the liquid is set in motion and no dead zone can form where fouling could develop. A similar effect is achieved by the so-called multi-chordal downcomer, where the shape of the weir is designed in a way that dead zones are avoided (Figure 5.28).

Figure 5.28: Flow profiles for a conventional tray and a SUPERFRAC® tray with multi-chordal downcomer. Courtesy of Koch-Glitsch LP, Wichita, Kansas.

Umbrella-shaped valves like UFM™ (Figure 5.29) from Sulzer can minimize the liquid entrainment, while vapor and liquid still achieve excellent contact for an effective


Figure 5.29: UFM™ valve. © Sulzer Chemtech Ltd.

Figure 5.30: UFMPlus™ tray with prisma downcomer, UFM™ valves and push valves. © Sulzer Chemtech Ltd.

mass transfer. A turndown of 5 : 1 illustrates the flexibility of the tray. The prisma downcomer (Figure 5.30) increases the vapor capacity and therefore reduces the pressure drop and the probability of downcomer flooding.8 Because of the hydrostatic pressure of the liquid on a tray, which has to be passed by the vapor, tray columns have a significantly higher pressure drop than packed ones. This is a disadvantage especially for vacuum distillations. The elevated pressure leads to higher temperatures at the bottom of the column, and the residence times on the trays are larger because of the greater holdup. This gives higher decomposition rates; usually, the substances exposed to vacuum distillation are sensitive to high tem-

8 Explanation in the text below Equation (5.12) in this section.


Figure 5.31: Cartridge column. Courtesy of Ludwig Michl GmbH.

peratures. Tray columns are relatively insensitive to liquid load variations and fouling, but susceptible to foaming. For small column diameters (< 800 mm), manholes do not make much sense. Tray columns are then designed as so-called cartridge columns (Figure 5.31), which often have sealing problems at the column wall. For the specification of trays, the main geometric data are defined in Figure 5.32. In this context, the particular abbreviations mean:
IDcol … inner column diameter
AA … active area
TS … tray spacing
LW,out … outlet weir length
LW,in … inlet weir length
ADC … downcomer area, top
WDC … downcomer width
HCL … downcomer clearance
FPL … flow path length
HW,in … height of inlet weir
HW,out … height of outlet weir
LCL … apron length
WRSP … width of inlet weir
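Some of these quantities are geometrically linked. For a single-pass tray with a straight (chordal) downcomer, the top downcomer area ADC follows from the downcomer width WDC and the inner column diameter IDcol via the circular-segment formula; the sketch below is our own illustration with invented numbers, not a formula given in the text:

```python
import math

def downcomer_area(id_col: float, w_dc: float) -> float:
    """Area of the circular segment cut off by a straight chordal downcomer
    of width W_DC in a column of inner diameter ID_col (single-pass tray)."""
    r = id_col / 2.0
    theta = 2.0 * math.acos(1.0 - w_dc / r)       # central angle of the segment
    return r * r * (theta - math.sin(theta)) / 2.0

id_col = 2.0                                      # m, invented example
a_dc = downcomer_area(id_col, 0.30)               # m2, for W_DC = 300 mm
a_col = math.pi / 4.0 * id_col ** 2               # empty-column cross-section
print(f"downcomer area fraction: {a_dc / a_col:.1%}")
```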


Figure 5.32: Dimensions in tray geometry. Courtesy of WelChem GmbH.

Generally, the residence time on trays is not large enough to reach equilibrium between vapor and liquid phase. For the thermodynamic calculation of a column, one of the standard procedures is to take three trays for two stages, e. g. to simulate a 60-tray column with 40 theoretical stages. Although this is often successful, this approach has difficulties with widely boiling systems, where the column is far away from equilibrium. For the equilibrium calculation, it is better to characterize the quality of a tray by the so-called Murphree efficiency:

EM = (yn − yn+1) / (yeq(xn) − yn+1)    (5.8)

where yeq is the equilibrium concentration and n + 1 refers to the tray below tray n. The Murphree efficiency always refers to a certain component. Normally it is in the range EM = 0.6–0.7, corresponding to the approach with the reduction of stages described above. There is a problem with the product streams of the column: if Murphree efficiencies are used, these product streams are not in equilibrium as expected. To avoid simulation errors with pieces of equipment that require a single phase at the inlet (compressors, pumps), an efficiency of EM = 1 should be assigned to the trays where product streams leave the column, especially to the condenser and the reboiler. The approach of taking three trays for two theoretical stages is certainly too simple to be fully correct. High viscosities of the liquid and high separation factors have a negative effect on the Murphree efficiency. Duss and Taylor [273] have recently modified the widely used O'Connell correlation [112]; their result is

EM = 0.503 (η/mPa s)^(−0.226) σ^(−0.08)    (5.9)

with

σ = mG/L    for mG/L > 1    (5.10)

or

σ = (mG/L)^(−1)    for mG/L < 1    (5.11)

At high pressures (p > 10 bar), there is a negative effect due to increased entrainment [112]. Of course, the strongest negative effect on tray efficiency is caused by maldistribution. The use of the Murphree efficiency has another advantage: the numbering of trays is different in simulation and in construction. In process simulation, the trays are numbered from top to bottom, and reboiler and condenser are counted as stages. In construction, the trays are numbered from bottom to top. Therefore, a confusing and error-prone renumbering procedure has to be performed. It is much easier to assign the trays in the construction when the Murphree efficiency is used. If the reduction of stages had been applied, it is much more complicated, not to mention that the factor is often forgotten when the overall pressure drop of the column is evaluated. The hydrodynamic calculation of tray columns has a different philosophy than the one for packed columns. While for packed columns self-contained models have been developed with certain physical foundations, only single and incoherent correlations are available for the particular failure criteria of tray columns. These criteria are in most cases empirical but quite well confirmed due to the great experience with the use of distillation trays. The correlations have limited accuracy and applicability. If different ones are compared, contradictions often occur. Often, the correlations are based on measurements with the water/air system. The particular correlation equations can be found in [95, 97, 98]. Figure 5.33 shows the load diagram of a sieve tray, which illustrates the range where the sieve tray can be operated with a good efficiency. It is typical for trays that the range for the vapor load is comparatively small, while the liquid load can be varied widely. The upper limits can be interpreted as absolute limitations; when they are exceeded, the tray cannot be operated. The lower limits are recommendations; falling below them

might cause a bad efficiency of the tray. The criteria for limiting the operation range of a tray are as follows.
– Froth height: The height of the froth layer should be lower than the tray spacing, with a certain margin.
– Entrainment and jet flood: Entrainment is the carry-over of liquid droplets to the tray above. This would lower the tray efficiency significantly and has a drastic impact if high purities of the overhead product are required. The entrainment usually determines the maximum vapor load of the column. There are correlations available which can evaluate the order of magnitude of the entrainment. Besides the column diameter and the diameter of the tray holes, the tray spacing has a decisive influence on the entrainment (Figure 5.34). Common tray spacings are 350–600 mm; in North America, 600 mm is more or less generally used. The entrainment reacts sensitively to variations of

Figure 5.34: Influence of tray spacing on liquid entrainment.10 Courtesy of Prof. Dr. J. Stichlmair [113].

10 Read the diagram as follows: if the abscissa value drops below 0.08 (transition from bubble to froth regime), take wG/wBl as coordinate. wBl is the rising velocity of a bubble.



these parameters; for example, an increase of the vapor load by a factor of 10 can increase the entrainment rate by 4–5 orders of magnitude. For a new tray design, it is best to keep the entrainment as small as possible. For a check of an existing column, 10 % entrainment might be tolerable as long as very high purities are not required. If the entrainment is so high that not only droplets but larger parts of the liquid are carried over, the jet flood case is reached as the limit of operability. Massive liquid entrainment has the consequence that liquid is impounded on the trays, and finally, the column floods. While normal entrainment just leads to a bad tray efficiency, jet flood makes the operation of the tray impossible. The measures to avoid jet flood are to increase the active area or to use smaller holes, as long as there is no danger of plugging. Countermeasures are the reduction of the vapor velocity, the increase of the tray spacing and the reduction of the weir height.
– Pressure drop: The pressure drop is an indication of the vapor load. It is not a direct limitation; the increase of the bottom temperature plays a large role in vacuum columns, where trays are rarely used. If the pressure drop is too large, the downcomer can show flooding (see below). The pressure drop can be split into three parts. As for packed columns, a tray has a dry pressure drop, which would occur even if no liquid were on the tray. The second part is the static pressure drop due to the height of the liquid on the tray, which is at least equal to the weir height. The third part takes the side effects into account, like the liquid height over the weir, bubble formation or spraying. A typical pressure drop of a tray is in the range 5–10 mbar. From this structure, a minimum pressure drop of the tray can be estimated from the second contribution:

Δpmin,tray = ρL g hW    (5.12)

If a calculated pressure drop is below that limit, there is a strong suspicion that something is wrong;11 either the pressure drop calculation or the assumption that all trays are operating properly (e. g. strong weeping). Pressure drops larger than 15 mbar per tray might be an indication of flooding, while even higher pressure drops can lead to the destruction of the column.12
– Downcomer choke flooding: If the friction losses in the downcomer are too large, the liquid cannot go down to the tray below and will accumulate. The highest friction losses occur at the downcomer entrance. The reason is vapor formation from the degassing in the downcomer. The vapor carried into the downcomer must separate from the liquid and disengage in countercurrent flow to the liquid entering the downcomer. When the combination of the vapor exiting and the liquid entering becomes excessive, the downcomer en-

11 Exception: spray regime.
12 The tray spacing should remain constant during operation.


Figure 5.35: Sloped downcomer.





trance is choked, causing the liquid to back up on the tray. Therefore, one common measure is the increase of the downcomer width. As a rule of thumb, the downcomer area should be at least 10 % of the column cross-section area; however, this depends strongly on the individual situation. A sloped downcomer is often useful, as it influences the friction losses at the downcomer entrance selectively (Figure 5.35). Its background is as follows: the density of the vapor-liquid mixture increases from top to bottom of the downcomer due to the degassing. Therefore, less cross-flow area is needed for the almost degassed liquid at the bottom [311]. For the ratio between upper and lower downcomer width, 2 : 1 is a reasonable value. Downcomer choke flooding preferentially occurs at relatively high pressures (p > 7 bar) and/or high liquid rates.
– Minimum downcomer residence time: The residence time in the downcomer is strongly related to the downcomer choke flooding. It should be large enough so that the liquid has enough time for degassing. There are various recommendations; they refer to the apparent residence time, which is defined by the ratio of the downcomer volume and the clear liquid flow in the downcomer [95]. The minimum is 3 s, which should really be kept. In this context, it should also be mentioned that the clear liquid height in the downcomer, i. e. the height really filled with liquid and not with a froth layer, is usually less than 50 % of the tray spacing. There are correlations for the clear liquid height; the difference to the apparent liquid height, which takes the froth layer into account as well, is often significant and must be considered when the residence time in the downcomer is evaluated. A similar criterion to the minimum residence time is the maximum downcomer velocity. Calculated as the clear liquid velocity in the cross-flow area of the downcomer inlet, it should be less than 0.06–0.18 m/s [95].
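Both downcomer criteria reduce to one-line checks. The sketch below is a minimal illustration (function names and example values are ours) of the apparent residence time and the clear liquid inlet velocity:

```python
def apparent_residence_time(downcomer_volume_m3: float, clear_liquid_flow_m3_h: float) -> float:
    """Apparent downcomer residence time in s: downcomer volume divided by
    the clear liquid flow [95]; at least 3 s should be kept for degassing."""
    return downcomer_volume_m3 / (clear_liquid_flow_m3_h / 3600.0)

def downcomer_inlet_velocity(clear_liquid_flow_m3_h: float, inlet_area_m2: float) -> float:
    """Clear liquid velocity in the downcomer inlet cross-flow area in m/s."""
    return clear_liquid_flow_m3_h / 3600.0 / inlet_area_m2

# Invented example values
t = apparent_residence_time(0.12, 80.0)
v = downcomer_inlet_velocity(80.0, 0.15)
print(f"{t:.1f} s", "OK" if t >= 3.0 else "too short -> enlarge the downcomer")
print(f"{v:.3f} m/s", "OK" if v <= 0.18 else "too fast -> enlarge the downcomer inlet")
```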
– Downcomer backup flooding: As Figure 5.36 indicates, there is a certain condition for the discharge of the liquid from the downcomer:

p2 − p1 = Δptray < ρL g hL,DC − Δpfriction    (5.13)


Figure 5.36: Hydrostatic and pressure drop on a tray.



If this condition is not fulfilled, no more liquid can be discharged to the tray below. Certainly, the contribution of Δpfriction is caused by the liquid load and is related to the downcomer choke flooding, but often an excessively high tray pressure drop is the reason for this type of flooding. This means that, finally, the downcomer failure is caused by the vapor load. Therefore, in this case the increase of the downcomer width, as the typical measure against downcomer failure, makes things even worse: the increase of the downcomer cross-flow area reduces the active area of the tray, and therefore the vapor velocity and the pressure drop of the tray increase. If possible, a reduction of the downcomer width might help or, of course, an increase of the tray diameter. In many cases, the pressure drop for the passing of the apron is the reason for downcomer backup flooding. In this case, the downcomer clearance should be increased, i. e. the distance between the tray and the lower end of the apron (HCL in Figure 5.32).13 Often, the outlet weir height has to be increased as well to ensure a liquid seal. If the downcomer clearance were larger than the outlet weir height, vapor could bypass the tray above by choosing the way through the downcomer. In fact, this is not a strict criterion. The liquid seal usually persists during operation even if the outlet weir height is slightly lower, as a liquid height above the weir builds up.14 A small tray spacing can also be responsible for downcomer backup flooding. An increase in tray spacing might be useful; for large tray spacings like TS = 600 mm, downcomer backup flooding is hardly possible. Downcomer backup flooding is not likely in the spray regime.
– Weir load: To ensure a uniform flow of liquid over the weir, the height above the weir should be in the range

5 mm < liquid height above the weir < 38 mm    (5.14)

13 The outlet velocity of the liquid in the downcomer cross-flow area should be less than 0.45 m/s.
14 For additional measures to ensure liquid sealing, see Chapter 5.7.
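Criterion (5.14) can be linked to the weir load with a weir formula. The Francis-type relation used in the following sketch is a common approximation that is not given in the text; treat the coefficient 1.84 as an assumption.

```python
# Hedged sketch: estimate the height above a straight outlet weir and
# check it against the 5-38 mm range of Equation (5.14).
# The Francis weir formula h_ow = (q/1.84)**(2/3) is an assumed
# approximation, NOT taken from this text.

def height_over_weir_mm(weir_load_m3_per_mh):
    """Height above the weir in mm for a weir load in m3/(m h)."""
    q = weir_load_m3_per_mh / 3600.0           # m3/(m s) per m weir length
    return (q / 1.84) ** (2.0 / 3.0) * 1000.0  # m -> mm

for load in (4.5, 30.0, 60.0):
    h = height_over_weir_mm(load)
    print(f"weir load {load:5.1f} m3/(m h): h_ow = {h:5.1f} mm, "
          f"in range: {5.0 < h < 38.0}")
```

With this approximation, the often-quoted weir load range of 4.5–60 m3/(m h) roughly reproduces the height range of criterion (5.14), illustrating that the two criteria are similar but not identical.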


Figure 5.37: Two-pass column. Courtesy of WelChem GmbH.

The weir length should be larger than half the column diameter. A common measure to reduce the weir loads is the use of multipass trays for column diameters d > 2000 mm (Figure 5.37). Increasing the column diameter is not effective, as the weir length increases only linearly with the diameter. The capacity of the weirs can then become the limiting factor; large gradients could be the consequence, giving excessive weeping (see below) on one side of the tray and complete entrainment on the other side (Figure 5.39). The multipass option provides more weir length on a tray and can overcome this difficulty. A criterion similar to Equation (5.14) is the condition that the weir load should be in the range 4.5–60 m3/(m h) [113, 311]. For large columns, weir loads up to 100 m3/(m h) can be accepted.

If the liquid load and, subsequently, the height above the weir is low, any leveling problems of the distillation tower (installation, attachments to the tower and even wind) can cause part of the weir to remain unused, leaving the corresponding part of the active area with stagnant flow without mass transfer. At low liquid loads, the liquid flow on a tray will become inhomogeneous. The weir load can be used as an indicator according to Equation (5.14).15 If it is below the lower limit, the use of notched weirs might be an alternative, especially if large fluctuations of the liquid load occur (Figure 5.38). At low liquid loads (e. g. in the spray regime), only the lower part of the notches is charged, giving a lower effective weir length. For the V-notches, the weir length increases steadily with the liquid load. They are used when large ranges for the liquid load have

Figure 5.38: Weirs with rectangular and V-notches. Courtesy of WelChem GmbH.

15 Another criterion is the height over the weir; it should be more than 5 mm.


to be covered. The blocked weirs with rectangular notches (picket-fence weirs) are used if the weirs are too long, e. g. the central weirs in a two-pass column. In this case, the height of the spikes is about as high as the froth layer on the tray.

Minimum vapor load: If the vapor load is too low, the liquid on the trays comes into contact with the vapor in an irregular way. The vapor will prefer the area near the weir, as there is less liquid there due to the gradient on the tray. On the other side of the tray, the height of the liquid is larger and the vapor will avoid passing the tray there. The liquid is no longer prevented from going through the holes, which is called “weeping”. Weeping is less dramatic than entrainment; in contrast to entrainment, the liquid goes to the tray below as intended. A decrease of the tray efficiency can be the consequence, as residence time on the tray is lost. Normally, 10 % weeping can be tolerated. There are correlations for the weeping rate [95] or for the minimum vapor velocity to avoid weeping at all. If weeping occurs, one must be careful, as the correlations do not take into account that the tray openings for the vapor are occupied by the weeping liquid. Therefore, the pressure drop can be larger than calculated. The first measure to avoid weeping is the reduction of the hole diameter.

The worst form of weeping is the so-called vapor cross-flow channeling (Figure 5.39). At high liquid loads (weir load > 50 m3/(m h)), large fractional hole areas, large tray diameters and pressures p < 7 bar, the liquid can build up a hydrostatic gradient after entering the tray at the downcomer apron. At the outlet weir, there is much less liquid height than at the downcomer inlet. The vapor chooses the way which

Figure 5.39: Vapor cross-flow channeling. Courtesy of WelChem GmbH.




has the lowest pressure drop and passes the liquid on the tray near the outlet weirs, maybe with significant entrainment. On the other hand, liquid accumulates at the downcomer inlet, and weeping occurs as the vapor avoids going to this region. Entrainment and weeping occur on the tray simultaneously. The weeping is detrimental to the efficiency in this case; the weeping liquid bypasses two trays, as it is directly transported from the tray inlet to the outlet of the tray below [107].

System flooding: For system flooding, the same statements can be given as for packed columns (Section 5.2). System flooding might occur for very large tray spacings (TS > 1000 mm), or for dualflow trays with TS = 600 mm.
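The risk factors for vapor cross-flow channeling named above can be collected in a small checklist. Only the weir load and pressure limits are from the text; the numeric thresholds for "large" hole area and tray diameter are assumptions for illustration.

```python
# Hedged sketch: count the cited risk factors for vapor cross-flow
# channeling. Thresholds marked "assumed" are illustrative only.

def crossflow_channeling_risk(weir_load, frac_hole_area, tray_diameter,
                              pressure_bar):
    """weir_load in m3/(m h), frac_hole_area as a fraction,
    tray_diameter in m, pressure in bar. Returns 0..4 risk factors."""
    factors = [
        weir_load > 50.0,        # high liquid load (text: > 50 m3/(m h))
        frac_hole_area > 0.11,   # large fractional hole area (assumed)
        tray_diameter > 2.0,     # large tray diameter (assumed)
        pressure_bar < 7.0,      # low pressure (text: p < 7 bar)
    ]
    return sum(factors)

print(crossflow_channeling_risk(60.0, 0.14, 3.0, 1.0))  # → 4
```

A result of 4 means all cited risk factors are present at once; such a tray design would deserve a closer look.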

As a design strategy, the following guideline might be helpful:
– Estimate the column diameter with wG = 1 m/s.
– Downcomer choke flooding → enlarge downcomer.
– Downcomer backup flooding → enlarge active area.
– Entrainment/jet flood → enlarge tray spacing and/or active area.
– Always check sensitivities.
– If nothing succeeds → increase/decrease column diameter.

For the first guesses, Table 5.1 could be useful.

Table 5.1: Reasonable guesses for tray data [113].

                            Vacuum         Ambient pressure   High pressure
Tray spacing                0.4–0.6 m      0.4–0.6 m          0.3–0.4 m
Weir length (Lw)            0.5–0.6 dB     0.6–0.75 dB        0.85 dB
Weir height (hw)            0.02–0.03 m    0.03–0.07 m        0.04–0.1 m
Downcomer clearance         0.7 hw         0.8 hw             0.9 hw
Bubble cap diameter (dbc)   0.08–0.15 m    0.08–0.15 m        0.08–0.15 m
Bubble cap distance         1.25 dbc       1.25–1.4 dbc       1.5 dbc
Valve diameter (dv)         0.04–0.05 m    0.04–0.05 m        0.04–0.05 m
Valve distance              1.5 dv         1.7–2.2 dv         2–3 dv
Sieve hole diameter (dh)    0.01 m         0.01 m             0.01 m
Sieve hole distance         2.5–3 dh       3–4 dh             3.5–4.5 dh
Fractional hole area        10–15 %        6–10 %             4.5–7.5 %
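The first step of the design guideline above, estimating the column diameter with wG = 1 m/s, amounts to a one-liner; the vapor flow value in the example is an illustrative assumption.

```python
import math

# Hedged sketch: first-guess column diameter from the guideline
# "estimate the column diameter with wG = 1 m/s".
def column_diameter(vapor_flow_m3_s, w_g=1.0):
    """Diameter in m for a given vapor volume flow (m3/s) and an
    assumed superficial vapor velocity w_g (m/s)."""
    area = vapor_flow_m3_s / w_g               # required cross-section, m^2
    return math.sqrt(4.0 * area / math.pi)     # circle diameter

# Illustrative: 3 m3/s of vapor -> roughly a 2 m column.
print(round(column_diameter(3.0), 2))  # → 1.95
```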

Beyond the conventional design, tray columns can cause trouble which is usually not expected. Regularly, foaming turns out to be a knock-out criterion for tray columns [114]. There are few countermeasures:
– The consideration of a system factor in the design just scales the residence time in the downcomer. It could only be useful if the foaming tendency could be assigned numerically. This is still a research topic [115]. Empirical system factors for a number of typical distillation tasks are given in [95].


– Mechanical foam deletion should be considered as an academic fantasy. To have movable parts on a tray sounds expensive, and the author is not aware of any success stories.
– Replacing a thermosiphon reboiler by a falling film evaporator or, respectively, exchanging trays by packing might be successful, but it is quite an expensive trial without guarantee.
– Finding the reason (usually an unknown substance in the ppb-region) or trying to make use of the theory of foaming [116] has almost never been reported to have solved a problem.
Foam formation on the trays is only possible at low vapor loads in the bubble regime. The foam is rapidly destroyed at the transition to the froth regime, after the vapor load has been increased [113]. However, foam formation in the downcomer is not affected in this way. The only way to fight foaming right on the spot seems to be the use of an antifoaming agent. As long as it can be tolerated in the bottom product, this is the only strategy which has turned out to be successful in the long run. Finding an appropriate antifoaming agent is a science of its own, but vendors usually have specialists who can give valuable advice.

Another kind of problem with tray columns is vibration, which has been observed especially on one-pass sieve and valve trays with large diameters (> 2000 mm) at comparably low vapor loads. Within hours, trays have been seriously damaged. Just increasing the mechanical stability of the tray usually does not help. Instead, an increase of the vapor load or the use of notched weirs should additionally be taken into account. There are a few explanations in the literature [117–120]. It seems that the vapor enters the tray above discontinuously as jet pulses through the holes. When these pulses synchronize and act in phase, the resonance frequency of the tray might be struck [95]. Certainly, a thorough understanding of this phenomenon has not yet been achieved.

5.5 Comparison between packed and tray columns

The criteria for a comparison between packed and tray columns can be set up in the following way.
– Market share: While packed and tray columns each have a market share of approx. 50 %, random packings have about half the share (17 %) compared to structured packings (33 %) [314].
– Cleaning: While sieve trays and random packings are relatively easy to clean, the cleaning of bubble cap trays is difficult, and for structured packings it is more or less impossible.

High-pressure steam will probably destroy the packing. Ultrasound might work, but it takes a lot of time [314].
– Pressure drop: The pressure drop in packed columns is significantly lower than in tray columns, as the vapor does not have to pass a liquid layer and the narrow holes, which have a cross-flow area of max. 14 % of the active area. For well-designed packings, the pressure drop can be expected to be 1–2 mbar/m, whereas it is 5–10 mbar per tray. Considering approximately two theoretical stages per m in a packed column and a Murphree efficiency of 67 % (i. e. three trays are two theoretical stages), the pressure drop per theoretical stage is lower by a factor of 7.5–30 in a packed column. Lower pressure drops give lower pressures in the bottom of the column, and at lower pressures the relative volatility is in most cases higher. Furthermore, lower pressures correspond to lower temperatures, meaning less degradation of the products, and it might be possible to use steam at a lower pressure. For high vacuum columns, the use of packed columns is more or less obligatory.
– Theoretical stages per m: The first guess is that there are two theoretical stages per m in a packed column; usually there are slightly more. For a tray spacing of 500 mm, there are two trays per m, which, however, represent only about 1.3 theoretical stages due to their efficiency. The end tray has to be counted as well, meaning that for the above-mentioned tray spacing of 500 mm there are 11 trays to be placed on 5 m column height. On the other hand, in packed columns some space is always lost for collectors and distributors (10–20 % acc. to [260]). In fact, it must be examined case by case where finally more theoretical stages per m are obtained. The author would guess that in most cases the packed column is advantageous.
– Flexibility: Tray columns have a good turndown (2 : 1 – 4.5 : 1, according to tray type). So do packings, but the distributors usually do not.
Therefore, packed columns are definitely inferior to valve trays from that point of view. In tray columns, side draws are easy to provide. Usually, they are designed on several trays, and optimization is performed during operation.16 For packed columns, side draws are a major constructive issue; they require a collector and a distributor.
– Fouling: Packed columns are more sensitive to fouling, especially structured packings and, even more so, wired-mesh packings. Sieve trays with large holes (> 10 mm) can cope with fouling quite well. They can even handle certain amounts of solids.
– Foaming: Foaming is a strong argument against tray columns. Columns with random packing can at least cope with limited foaming. Opinions differ whether structured packings perform worse than random ones [107] or better [314]. For systems which are prone to foaming (aldehyde systems, caustic absorptions), this is a major item for the equipment choice.
– Column diameter: Tray columns are well established for large column diameters. Multipass trays are often used. There is also no objection to using packed columns for large diameters. Sieve trays with large diameters can be subject to vibrations. For small column diameters (< 800 mm) packed columns are advantageous; tray columns can only be realized as cartridge columns (Figure 5.31).
– Loads: Tray columns can cope with large variations of the liquid load. They are relatively sensitive to variations of the vapor load. Random packed columns have difficulties with low liquid loads, whereas structured packings have difficulties with high liquid loads, as gas bubbles cannot be released in the narrow channels. If they are captured, they will go down a long way. In contrast, random packings can easily get rid of gas bubbles, and tray columns transport them one tray down in the downcomer. Aqueous systems are difficult for any kind of packing due to the high surface tension. In contrast, trays have no problems with wetting at low liquid loads or with aqueous systems.
– Residence time: Trays provide a well-defined residence time, so they are advantageous when desired chemical reactions are to occur in the column. On the other hand, packed columns are advantageous to avoid undesired reactions. In this case, one should take care that the reboiler has a small holdup and residence time as well.
– Heterogeneous systems: For systems showing liquid-liquid equilibria, the design of packings is weakly founded. Trays have no problem, as the separation of the two phases does not take place in the froth layer.
– Safety: In recent years, a lot of fires in structured packings have happened; the reason for this is the large surface per mass, as the thickness of the metal is just about 0.1 mm [121]. These fires can hardly be extinguished [314].

16 Side draws must not be taken at the bottom of the downcomer, as gas bubbles might lead to choking or pump cavitation.
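The pressure-drop comparison made in the list above can be reproduced numerically; the figures (1–2 mbar/m, ~2 theoretical stages per m, 5–10 mbar per tray at a Murphree efficiency of 67 %) are those quoted in the text.

```python
# Hedged sketch of the pressure-drop-per-stage comparison:
# packed column vs. tray column, using the text's numbers.

def dp_per_stage_packed(dp_per_m, stages_per_m=2.0):
    """mbar per theoretical stage for a packed column."""
    return dp_per_m / stages_per_m

def dp_per_stage_tray(dp_per_tray, efficiency=2.0 / 3.0):
    """mbar per theoretical stage for a tray column
    (three trays = two theoretical stages)."""
    return dp_per_tray / efficiency

lo = dp_per_stage_tray(5.0) / dp_per_stage_packed(2.0)   # best tray vs. worst packing
hi = dp_per_stage_tray(10.0) / dp_per_stage_packed(1.0)  # worst tray vs. best packing
print(f"trays need {lo:.1f} to {hi:.1f} times the pressure drop per stage")
```

The result reproduces the factor of 7.5–30 stated in the text.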

A comprehensive comparison of trays and packings can be found in [260].

5.6 Distillation column control

In process engineering, typical control-oriented modeling methods have not become widely accepted, as they have in electrical and aerospace engineering. Control engineering is not a phase of its own; instead, it is distributed over the life cycle of a plant [122]. The main part of the control strategy is already fixed in the conceptual phase and

tested during the piloting. The draft of the control scheme is done according to the experience from other projects. In-depth knowledge of control engineering is requested if the proposed strategy fails in operation or some optimization potential is presumed. During the engineering, a systematic analysis of the control strategy hardly ever takes place. Things might change with the upcoming training simulators and advanced process control projects.

The strongest reason is that plant models are usually specific for a given plant. Their development is a complex, large, and expensive effort and must be supposed to be justified. Often, during the model development the knowledge of the process increases in such a way that the targets can be met with conventional process engineering methods as well. The process models are not only extraordinarily complex but also time-dependent due to catalyst aging or fouling on heat exchangers. During the life cycle of a plant, many changes take place, such as capacity increases, heat integration, improved catalysts, or changed product specifications. In these cases it would be necessary to adjust the model and update the control parameters. The discrepancy between the large effort for an adequate control engineering and the often simple but successful solutions is one of the main reasons for the skepticism of practitioners toward a theoretically based control engineering.

Especially for distillation plants, a simple, intuitive control strategy is usually chosen to maintain the demand for a sufficient and constant quality. Generally, there are two types of variables to be controlled in distillation [96]. The control of column pressure and liquid levels ensures that no accumulation of liquid (levels) and vapor (pressure) occurs. Otherwise, a steady-state operation of the column would not be possible. Maintaining the levels and the pressure just keeps the column in stable operation.
For the product quality, the composition of the products has to be kept within the specification limits. For this purpose, some kind of composition control must take place. The special difficulty in distillation control is that the product concentration can hardly be measured effectively and fast enough. The operator usually has no clue what the feed concentration might be and what distillate rate is reasonable. Even gas chromatography, a relatively fast analyzing method with a wide application range, has too long response times because of the distance between sample point and detector. In a control cycle, it would be a large dead-time element. Further problems are high investment and maintenance costs and a possible phase separation of the sample, which could spoil the result. In fact, there are few constructions of this kind,17 but the usual way for the quality control of the product uses the temperature profile as an indicator for the product composition.

The sensitivity of the temperature profile to the product concentration is exemplarily illustrated in Figure 5.40. The diagram shows the simulation of a distillation column separating the binary system n-hexane/n-heptane on 50 theoretical stages at ambient pressure, where the reflux ratio has been varied. The three cases refer to the overhead concentrations of


Figure 5.40: Temperature profiles at various overhead concentrations [89]. © Wiley-VCH Verlag GmbH & Co. KGaA. Reproduced with permission.

n-hexane of 99.98 wt. %, 99.9988 wt. %, and 99.99984 wt. %, respectively. These purities each differ by one order of magnitude. The boiling points of these overhead products can hardly be distinguished by measurement in column operation and are not appropriate as a control signal. However, the temperature profiles are significantly different. The largest differences seem to occur on stage 20. The temperature of this stage could therefore be used as a control variable for the overhead concentration.18 There have been discussions on whether vapor or liquid temperatures should be used for composition control. In fact, both options work. The control by vapor temperatures shows the faster response but might be erroneous if weeping occurs.

There are a lot of options for the control of a distillation column. Figure 5.41 shows an often used one, which is discussed in the following. In this example, the purity of the overhead product is maintained by the control of the temperature (TC) on a certain stage by manipulating the reflux amount. The distillate stream is the difference between overhead and reflux stream. The outlet valve maintains the level in the reflux drum (LC). This way, a steady state can be achieved. At the bottom of the column, the steam flow is fixed (FC). The control of the bottom product flow is analogous to the distillate flow control. The column pressure is controlled (PC) by the cooling water flow to the condenser. If the pressure is too high, the cooling water flow will be increased to condense more vapor and to lower the pressure again.19

18 Unfortunately, this stage is relatively far away from the top of the column. Using this temperature as a control variable might lead to a slow response behavior.
19 This kind of pressure control is relatively sluggish; alternatives are discussed below.
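The selection of the control stage described above, i. e. the stage whose temperature reacts most strongly to a change in product purity, can be sketched as follows. The two profiles are synthetic stand-ins, not the simulation results of Figure 5.40.

```python
# Hedged sketch: pick the control stage as the one where two temperature
# profiles (for different product purities) differ most. Profiles are
# synthetic example data, NOT the book's n-hexane/n-heptane simulation.

def most_sensitive_stage(profile_a, profile_b):
    """Return the (1-based) stage number with the largest |delta T|."""
    diffs = [abs(a - b) for a, b in zip(profile_a, profile_b)]
    return diffs.index(max(diffs)) + 1

# Synthetic 10-stage profiles in degC, differing most around stage 6:
t_high_purity = [69, 69, 70, 71, 74, 80, 90, 95, 97, 98]
t_low_purity  = [69, 70, 71, 73, 78, 88, 94, 96, 97, 98]
print(most_sensitive_stage(t_high_purity, t_low_purity))  # → 6
```

Note that, as footnote 18 points out, the most sensitive stage is not automatically a good choice if it lies far from the product draw; response time matters as well.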


Figure 5.41: Control scheme of a distillation column with top product quality control.

The control strategy in Figure 5.41 is not very fast anyway. To understand a control scheme, it is useful to follow the response of the column after something is varied. Consider the case where the concentration of the feed varies, e. g. the light ends fraction increases. The light ends will accumulate in the top section of the column. The level controller of the reflux drum will open the distillate outlet. The control temperature in the profile will drop. Then, the system answers with a reduction of the reflux amount. This reflux must go down the column stage by stage, which takes some time. For frequently occurring load variations, this control scheme is not really appropriate. Its strength is to safely maintain the quality of the top product at constant load.

If the bottoms product quality has to be maintained, the arrangement in Figure 5.42 is advantageous. It can be considered as the most common distillation column control scheme [282]. In this case, the temperature control manipulates the steam flow, and the change in the vapor phase affects the rest of the column rapidly, much faster than the reflux change does in Figure 5.41. This scheme has a huge advantage in stripping columns, where the distillate rate is small compared to the feed. It should be mentioned that the considerations about the response times mainly refer to tray columns, whereas packed

Figure 5.42: Control scheme of a distillation column with bottoms product quality control.


columns have a relatively small holdup and show fairly fast control responses even for the option in Figure 5.41. In both schemes, the temperature control can also be connected to the distillate or the bottom product outlet valve, while the levels are maintained by the reflux flow or by the steam flow, respectively. All these options have their pros and cons depending on the particular case; however, the most important rule is that small streams should not control a level, neither in the reflux drum nor at the column bottom, as this results in extremely slow responses (Richardson’s law). Generally, we can distinguish two types of control schemes: the mass balanced configuration (MB, Figures 5.41 and 5.42) and the energy balanced configuration (EB, Figure 5.43) [96, 123].

Figure 5.43: Example for an energy balanced control scheme.

The mass balance control is the common way to control a column. The most important principle is that none of the product streams is flow-controlled. Otherwise, the column would easily run out of balance. If the flow of a product is fixed, it must exactly match the corresponding fraction of the desired overhead components in the feed. Consider a feed flow with 5000 kg/h low-boiler and 5000 kg/h high-boiler. If the top product flow is fixed, it would have to match the low-boiler content in the feed, i. e. 5000 kg/h. If it is fixed to another value, the impurity of one product stream is unavoidable. In practice, it is neither possible to determine the exact composition of a feed nor are fluctuations avoidable. A coincidental 2 % deviation, e. g. 5100 kg/h low-boiler and 4900 kg/h high-boiler, would not be compensated, and 100 kg/h low-boiler would end up in the bottom fraction. With a mass balance control scheme, both product streams are controlled by other information, e. g. by the levels (Figures 5.41 and 5.42) or by a temperature in the column profile. A number of configurations are possible as shown above; they are well explained and discussed in [96].

The explanation above seems to rule out a control scheme fixing a product flow, but in fact it makes sense for small product streams where the composition does not matter. Consider a case where there are 10 000 kg/h high-boiler and 10 kg/h low-boiler, which are

to be separated. The small overhead product stream will not have an influence on a level to be used for control purposes. An easy way to control this column is to set the distillate flow to 20 kg/h. This implies that 0.1 % of the bottom product is lost, but the low-boiler should be completely removed, including some fluctuations in the feed. The reflux ratio is a result of the steam flow, which can be related to a temperature in the profile or fixed as in Figure 5.43. This is an example of an energy balanced configuration.

It might be surprising that the reflux ratio, which is one of the most important parameters in column design, is seldom controlled directly. Usually, either the reflux or the distillate flow is controlled, and the reflux ratio is only an operand. It is also remarkable that there is no chance to control both top and bottom compositions to maintain them within their specification [96]. It seems obvious to set one control temperature each in the stripping and in the rectifying section (Figure 5.44). In fact, these control loops would seriously interact with each other. If, for example, the fraction of low-boilers in the feed rises, both control temperatures decrease and cause the corresponding reactions, maybe more steam flow to the reboiler for the bottom concentration and a decrease of the reflux for the top product. However, these two reactions have completely different dynamic responses. While the reaction to the steam increase is relatively fast, the effect of the decrease of the reflux is pretty slow; it is transported with the liquid and therefore coupled with the residence times of the liquid on the particular trays. When the desired effect occurs, the increased steam flow already requires an increased reflux, and a never-ending cycle starts. Therefore, in conventional control schemes it is avoided to control both compositions.
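The 2 % feed fluctuation example above can be written out in a few lines; the flows and the idealized assumption that a fixed distillate carries pure low-boiler are taken from the text's example.

```python
# Hedged sketch of the text's numerical example: a flow-controlled
# distillate cannot follow feed fluctuations, so off-spec bottoms result.

def bottoms_impurity(feed_low, feed_high, fixed_distillate):
    """Low-boiler ending up in the bottoms (kg/h) if the distillate flow
    is fixed and idealized as pure low-boiler. All flows in kg/h."""
    return max(feed_low - fixed_distillate, 0.0)

# Design case: 5000/5000 kg/h, distillate fixed at 5000 kg/h -> clean split.
print(bottoms_impurity(5000.0, 5000.0, 5000.0))  # → 0.0
# 2 % fluctuation: 5100/4900 kg/h -> 100 kg/h low-boiler in the bottoms.
print(bottoms_impurity(5100.0, 4900.0, 5000.0))  # → 100.0
```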

Figure 5.44: Unstable control scheme using two composition controllers for bottom and distillate products.

In all cases it often makes sense to relate streams fixed by flow control to the flow of the feed stream (e. g. fixed reflux, fixed steam flow). There are various options for pressure control. As in Figures 5.41 and 5.42, the coolant flow through the condenser can be controlled by the pressure. An easy and fast but not very elegant way is the feeding and venting of inert gases with a split range


control. Together with the inert gas, a certain amount of top product is always vented; therefore, the column should be expected to operate steadily or with low product concentrations in the vapor phase so that the losses are not harmful. More options are explained in [96].

There is also an option to use pressure drops or temperature differences along the column as input signals. It might happen that they have a larger significance than conventional signals, especially if the temperature itself is determined by undefined components. However, these options can show instabilities in malfunction cases. If, for example, the cooling water service fails, the temperature differences along the column will become smaller. In Figure 5.42, the steam valve would fully open, which is the worst reaction of all and will probably lead to safety valve actuation. A behavior like this should at least be prevented by appropriate interlocks.

5.7 Constructive issues in column design

There are also some simple considerations in column design apart from component separation. Similar to vessels, there are certain requirements for the liquid level at the bottom. There must be several minutes of time for the operators to react when the liquid level goes up and down between the particular levels (Chapter 9). If there is a thermosiphon reboiler, it has to be checked whether the circulation takes place at minimum liquid level. The nozzle for the vapor inlet from the reboiler must not be flooded, and a sufficient distance must be kept to the lowermost tray or the packing, respectively. Usually, the particular companies have their own design guidelines where recommendations for the various issues are given; otherwise, they can be taken from [96].

There are several options for the bottoms design when a thermosiphon reboiler is involved (Figure 5.45). While the first one is the simplest and the most common one, a baffle can be placed in the bottoms region for different purposes. Option (2) ensures

Figure 5.45: Three options for the column bottom design. Screen images of Aspen Plus® are reprinted with permission by Aspen Technology, Inc. AspenTech® , aspenONE® , Aspen Plus® , and the Aspen leaf logo are trademarks of Aspen Technology, Inc. All rights reserved.

that there is always enough liquid height to maintain an appropriate NPSH value for the pump removing the bottoms product. The thermosiphon circulation is well ensured with option (3), which keeps its driving force constant.

All feed nozzles require sufficient space above and below the feed to distribute the vapor, separate vapor and liquid and avoid disturbing the bottom liquid level [299]. Special care must be taken when a feed consists of vapor and liquid or when a superheated feed flashes inside a column, meaning that pressurized liquid generates vapor when it is expanded into a low-pressure column. A simple nozzle is only acceptable at low velocities with low vapor fractions. In other cases, some constructive provisions must be made to let the liquid escape downwards, while the vapor is smoothly directed to the top.

Popular options are flash chambers (Figure 5.46). In a flash chamber, the vapor can disengage in a defined way, while the liquid is collected and guided to a liquid distributor below. Flash chambers can be located inside (Type IN) or outside (Type A) the column. Type IN is appropriate for small flash vapor amounts. For large ones, Type A is the better choice, which is in principle a small vessel with a demister. For high-velocity feeds where the vapor is the continuous phase, vapor horns are one of the favorite solutions. A tangential helical baffle forces the vapor to follow the contour.

Figure 5.46: Flash chambers for use inside (Type IN, (a)) and outside (Type A, (b)) the column. Courtesy of Julius Montz GmbH.


It is closed at the top and open at the bottom. The liquid drops hit the wall and run downward as requested [100]. Another well-known device for separating vapor and liquid from a column feed is the “Schoepentoeter” [281], which was developed by Shell. Its principle is to dissipate part of the energy of the feed stream by dividing it into many parts. The Schoepentoeter is often subject to erosion and therefore a wear part, but its effectiveness is beyond doubt. More information about nozzle locations can be taken from [299].

To ensure liquid sealing of the downcomer, inlet weirs (Figure 5.47 (a)) or seal pans (Figure 5.47 (b)) can be used [96]. They are often chosen in cases where the clearance under the downcomer is limited. Both arrangements ensure liquid sealing of the downcomer. For high liquid loads, seal pans can increase the capacity of the column. They can permit lower outlet weirs, which reduces the pressure drop, the froth height and the downcomer backup. In contrast, an inlet weir consumes some of the downcomer height and therefore often increases the downcomer backup. This is one of the reasons why seal pans are generally preferred [96]. The disadvantage of both arrangements is that they can act as a dirt trap due to zones with stagnant liquid.

Figure 5.47: Downcomer supporting arrangements.

For weir loads below 1 m3/(m h), so-called splash baffles are recommended [96]. Splash baffles (Figure 5.47 (c)) are vertical plates parallel and upstream to the outlet weir with a gap to the tray floor so that liquid can pass underneath. They can increase the holdup and the froth height on the tray and can prevent a tray from drying up.

The placement of the column condenser is always subject to discussion. In most cases, it is located on the ground to make maintenance easy. This has, as always, a number of disadvantages. For the reflux line, an extra pump is necessary which has to overcome a large static head. The overhead line is not only long but also has a considerable diameter (Figure 5.48), especially in vacuum applications, where the pressure drop has an impact on the dew point. Also, the reflux line will be long.

254 · 5 Distillation and absorption

Figure 5.48: Overhead line for a condenser located at the bottom. Von Cephas – Eigenes Werk, CC BY-SA 3.0.

Locating the condenser close to the top of the column leads to short lines for overhead and reflux. Furthermore, the reflux can reenter the column just by gravity, without a reflux pump. On the other hand, a significant structure is required to support the condenser, and maintenance and cleaning are difficult. While the line for the reflux is much shorter, the ones for cooling water supply and return are long; often, a booster pump is necessary to overcome the height. Shell-and-tube heat exchangers and air coolers are difficult to accommodate due to their large dimensions, whereas compact plate heat exchangers have advantages.

Alternatively, the condenser can be placed inside the column in the top section. The column itself can then provide the support structure. An overhead line is not necessary, but the cooling water lines are again long, and maintenance and cleaning are even more difficult. There are two options: the overhead vapor can enter the condenser from the bottom (upflow), or it can be directed in a way that it enters at the top (downflow). The latter is more familiar; flooding is not possible. In the upflow case, vapor and condensate are in countercurrent flow, leading to an enhanced separation, which is, however, difficult to predict. Flooding of the condenser can occur when the vapor holds up the liquid so that it cannot flow downwards. This must be ruled out. There are several correlations for the prediction of the maximum possible vapor velocity. The most capable one seems to be the McQuillan–Whalley equation [293]

$$ w_G\, \rho_G^{0.5}\, \left[ g \sigma (\rho_L - \rho_G) \right]^{-0.25} = 0.286\, \mathrm{Fr}^{-0.22}\, \mathrm{Bo}^{0.26} \left[ 1 + \frac{\eta_L}{\mathrm{mPa\,s}} \right]^{-0.18} \tag{5.15} $$

with the Kutateladze number

$$ K_G = w_G\, \rho_G^{0.5}\, \left[ g \sigma (\rho_L - \rho_G) \right]^{-0.25} \tag{5.16} $$

the Froude number

$$ \mathrm{Fr} = L_d \left[ \frac{g (\rho_L - \rho_G)^3}{4 \rho_L \sigma^3} \right]^{0.25} \tag{5.17} $$

and the Bond number

$$ \mathrm{Bo} = \frac{d^2\, g (\rho_L - \rho_G)}{\sigma} \tag{5.18} $$

The average relative error of Equation (5.15) is still estimated to be 26 %; therefore, some margin should be considered. Retrofit is hardly possible for a condenser inside the column. Inserting hiTRAN elements (Figure 4.22) may be worth a try.
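As a numerical illustration, the correlation can be evaluated as follows. This is a sketch under the assumption that Fr and Bo are formed as in Equations (5.17) and (5.18), with L_d as the liquid load in m3/(m s); the function name and all sample numbers are purely illustrative:

```python
import math

G = 9.81  # gravitational acceleration in m/s2

def max_vapor_velocity(d, rho_l, rho_g, sigma, eta_l_mpas, l_d):
    """Maximum vapor velocity for an upflow condenser according to the
    McQuillan-Whalley correlation, Eqs. (5.15)-(5.18).
    d: tube diameter in m, densities in kg/m3, sigma in N/m,
    eta_l_mpas: liquid viscosity in mPa s, l_d: liquid load in m3/(m s)."""
    bo = d ** 2 * G * (rho_l - rho_g) / sigma                                  # Eq. (5.18)
    fr = l_d * (G * (rho_l - rho_g) ** 3 / (4 * rho_l * sigma ** 3)) ** 0.25   # Eq. (5.17)
    kg = 0.286 * fr ** -0.22 * bo ** 0.26 * (1 + eta_l_mpas) ** -0.18          # Eq. (5.15)
    # resolve Eq. (5.16) for the vapor velocity w_G:
    return kg * (G * sigma * (rho_l - rho_g)) ** 0.25 / rho_g ** 0.5

# illustrative numbers for an organic vapor condensing against its own liquid
w_max = max_vapor_velocity(d=0.05, rho_l=960.0, rho_g=0.6,
                           sigma=0.06, eta_l_mpas=0.3, l_d=2.0e-3)
```

Given the 26 % average error stated above, a design margin should of course be applied to the resulting velocity.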

5.8 Separation of azeotropic systems

Azeotropic systems cannot be separated by conventional distillation. Using applied thermodynamics [11], there are a number of options to break an azeotrope and obtain the components in their pure form. It is the art of the process engineer to suggest the most appropriate one. A good description of the various separation processes for azeotropes can be found in [8]. The easiest case is the separation of a heteroazeotrope. It splits into two phases in a decanter, and the two phases can be worked up separately [8].

Pressure-swing distillation is useful if the azeotropic concentration strongly depends on the pressure. This is the case for relatively few binary systems. The most well-known one is tetrahydrofurane–water (Figure 5.49). In the first column, the azeotrope THF–water with xTHF ≈ 0.8 is taken overhead at low pressure (p ≈ 1 bar), while pure water can be removed from the process at the bottom. The overhead stream is condensed and compressed to a significantly higher pressure (p ≈ 10 bar). At this pressure, the THF concentration of the azeotrope is significantly lower (xTHF ≈ 0.6). Pure THF will remain at the bottom of the second column when the azeotrope is taken overhead at high pressure. It is recycled to the first column. Overall, the outlets of this arrangement are water and THF with arbitrary purity.

Figure 5.49: Pressure swing distillation for the separation of the tetrahydrofurane–water azeotrope [89]. © Wiley-VCH Verlag GmbH & Co. KGaA. Reproduced with permission.

Other examples are acetonitrile–water, methanol–acetone, ethanol–benzene, HCl–water and even ethanol–water [124]. The great advantage is that no additional substances have to be introduced into the process.

There are four other main principles of azeotropic separation, which are illustrated using the azeotrope ethanol–water (Figure 5.50). Ethanol–water is the azeotrope which is split most often worldwide, with a capacity of approx. 40 million tons per year. It is quite an unpleasant one, as on the branch between the azeotrope and pure ethanol hardly any separation via distillation is possible (Figure 5.50).
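The mechanism behind pressure-swing distillation can be sketched numerically: at the azeotrope, γ1 p1s = γ2 p2s holds, and the composition fulfilling this condition shifts with temperature (i.e. with column pressure) because the two vapor pressure curves have different slopes. The sketch below uses the van Laar model with made-up interaction parameters and a made-up Antoine set for component 1 (only the water constants are the usual ones); it illustrates the principle and is not a model of the THF–water system:

```python
import math

def p_sat_mmhg(a, b, c, t_c):
    """Antoine equation, log10(p / mmHg) = A - B / (C + t / degC)."""
    return 10.0 ** (a - b / (c + t_c))

def ln_gamma_van_laar(x1, a12, a21):
    """van Laar activity coefficients for a binary mixture."""
    x2 = 1.0 - x1
    d = a12 * x1 + a21 * x2
    return a12 * (a21 * x2 / d) ** 2, a21 * (a12 * x1 / d) ** 2

def azeotrope_x1(t_c, ant1, ant2, a12, a21):
    """Azeotropic composition at t_c from gamma1*p1s = gamma2*p2s,
    found by bisection on x1 (assumes exactly one azeotrope in (0, 1))."""
    ratio = math.log(p_sat_mmhg(*ant1, t_c) / p_sat_mmhg(*ant2, t_c))
    def f(x1):
        lg1, lg2 = ln_gamma_van_laar(x1, a12, a21)
        return lg1 - lg2 + ratio
    lo, hi = 1e-4, 1.0 - 1e-4
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

ANT1 = (7.0, 1200.0, 220.0)           # made-up Antoine constants, component 1
ANT2 = (8.07131, 1730.63, 233.426)    # water, p in mmHg, t in degC
A12, A21 = 2.0, 1.6                   # made-up van Laar parameters

x_low_p = azeotrope_x1(60.0, ANT1, ANT2, A12, A21)    # low-pressure column
x_high_p = azeotrope_x1(140.0, ANT1, ANT2, A12, A21)  # high-pressure column
```

With these parameters, the azeotrope is leaner in component 1 at the higher temperature (pressure), which is exactly the shift the two-column arrangement exploits.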

Figure 5.50: Ethanol–water azeotrope at t = 100 °C.



Azeotropic distillation (Figure 5.51): After the azeotrope is obtained at the top of column K1, a substance is added which forms a ternary azeotrope with ethanol and water. The most common options are benzene and cyclohexane (CHX), where the latter is nowadays preferred due to the toxicity of benzene. With this ternary azeotrope, all the water can be taken at the top in column K2, while pure ethanol is obtained at the bottom. The ternary azeotrope can be split into two phases in the decanter (Chapter 6.1). The upper phase consists of cyclohexane with small amounts of ethanol, which can be directly recycled to column 2. The lower phase can be worked up in a further distillation and recycled to the decanter or column 1, respectively.

Figure 5.51: Azeotropic distillation of ethanol–water using cyclohexane. Screen images of Aspen Plus® are reprinted with permission by Aspen Technology, Inc. AspenTech®, aspenONE®, Aspen Plus®, and the Aspen leaf logo are trademarks of Aspen Technology, Inc. All rights reserved.

Extractive distillation (Figure 5.52): The advantage of extractive distillation is that it is not necessary to start with the azeotropic concentration; an easily achievable preconcentration to approx. 90 % is sufficient. In the first column K1, the water is washed down to the bottom with a solvent in which the activity coefficients of water are lower than those of ethanol. A widely used one is ethylene glycol (1,2-ethanediol). At the top of column K1, the ethanol is obtained with the desired purity. The bottom product is a mixture of water and ethylene glycol. These components are separated in the second column K2. Ethylene glycol as the bottom product can be recycled and used again as extractive agent in column K1.

Adsorption: Especially for ethanol, an adsorption process has been developed where the remaining water in the azeotrope is removed with a molecular sieve. The process is described in Chapter 7.2. Its advantages are robustness and simplicity, two major items especially in the ethanol business. Disadvantages are the fact that it is necessary to start with the azeotropic concentration, and the complicated control.


Figure 5.52: Extractive distillation of ethanol–water using ethylene glycol. Screen images of Aspen Plus® are reprinted with permission by Aspen Technology, Inc. AspenTech® , aspenONE® , Aspen Plus® , and the Aspen leaf logo are trademarks of Aspen Technology, Inc. All rights reserved.



The azeotrope contains 4 % water, which is quite a lot. It makes it necessary to change the bed after a few minutes of operation. The adsorber bed must then be regenerated.

Membrane: Similar to adsorption, the water in the azeotrope can be removed with a membrane (Chapter 7.1). The process strategy is the same: first, the azeotropic composition must be achieved, and then the water is removed using a multistep membrane separation [125].

5.9 Rate-based approach

In a conventional simulation of a distillation or an absorption, it is assumed that the liquid and the vapor phases which are leaving a stage are at complete equilibrium, i. e. phase equilibrium, thermal equilibrium and mechanical equilibrium. Furthermore, complete mixing and complete separation of the phases is assumed. In reality, these assumptions are, of course, never fulfilled. To take this into account, efficiency factors or HETP values are introduced so that realistic results can be obtained. However, one must be aware that the HETP might depend on the column diameter, the properties of the substances, or the liquid and vapor flow rates. Tray efficiencies and HETP values are never accurate; instead, they should more or less be interpreted as good guesses.

The so-called rate-based approach is an alternative. It considers the heat and mass transfer between the phases which encounter each other in the column. It accounts for the influences of throughput, equipment size, packing or tray properties and physical properties of the fluids so that extrapolations are more reliable [126]. The heat and mass transfer rates are determined by quantifying the temperature and concentration20 differences between the phases, which are the driving forces of the separation. The characteristics of the contacting device, i. e. the generated transfer areas, are also taken into account. Thermodynamic equilibrium is still a very important piece of information for the calculation, but it is only assumed at the interface between the phases, referred to as vapor and liquid film in Figure 5.53. The mathematical details can be found in [127].

Figure 5.53: Rate-based approach. Screen images of Aspen Plus® are reprinted with permission by Aspen Technology, Inc. AspenTech®, aspenONE®, Aspen Plus®, and the Aspen leaf logo are trademarks of Aspen Technology, Inc. All rights reserved.

Applying a rate-based approach, one should be aware of some peculiarities which are often unexpected:
– The temperatures of vapor and liquid on a stage are generally different, as the two phases do not reach equilibrium.
– In a multicomponent mixture, a component can diffuse in the opposite direction to its concentration gradient. This can happen in situations where the fluxes of the particular components are strongly coupled. The phenomenon has been thoroughly described and experimentally proved in [128] and [129].
– The term theoretical stage is still used but no longer considered in the final calculation. For packed columns, the packing is divided into so-called segments, which have nothing to do with equilibrium stages or HETP; however, there are rules of thumb to choose useful values for their height which are related to the HETP values. The segment height should always be lower than the HETP. For random packings, 10–12 times the size of the packing elements is a good approach; for structured packings, HETP/2 is a reasonable choice. In general, the number of segments should have no major influence on the calculation result, as long as the choice is reasonable. For tray columns, the trays themselves are the entity.

20 to be correct: chemical potential differences.
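The rules of thumb for the segment height can be condensed into a small helper. The function name, the 11-fold element size (the middle of the 10–12 range) and the 0.95 cap are my own choices for illustration:

```python
def segment_layout(packed_height, hetp, packing_type, element_size=None):
    """Suggest a segment height and segment count for a rate-based model.
    Random packings: 10-12 times the element size (11 used here);
    structured packings: HETP/2. The segment height is kept below the HETP."""
    if packing_type == "random":
        seg = 11.0 * element_size
    elif packing_type == "structured":
        seg = hetp / 2.0
    else:
        raise ValueError("packing_type must be 'random' or 'structured'")
    seg = min(seg, 0.95 * hetp)            # always stay below the HETP
    n = max(1, round(packed_height / seg))
    return n, seg

# 4 m bed of structured packing with HETP = 0.4 m -> 20 segments of 0.2 m
n_seg, h_seg = segment_layout(4.0, 0.4, "structured")
```

As the text notes, the exact segment count should not change the result much, so such a rough layout is usually sufficient as a starting point.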

Besides the phase equilibrium and the enthalpy description, the transport properties are necessary for the calculation, i. e. viscosities, thermal conductivities, surface tensions, and diffusion coefficients. As mentioned in Chapter 2.12, the calculation of the viscosity of liquid mixtures is not really accurate. The diffusion coefficients can be estimated very well for gases, but for the liquid at most the correct order of magnitude can be determined [11]. Moreover, the mass transfer models are not accurate in every case; it is a matter of experience to choose the best one, and even this is no guarantee for a correct representation of the system.

The rate-based approach is not generally more accurate than the equilibrium calculation. It connects a number of uncertain quantities for the representation of the column, whereas the equilibrium model mixes all of these influences together and represents them with one single uncertain value, i. e. the HETP or the efficiency. Nevertheless, there are a number of cases where the rate-based approach gives significantly different results, and one should know when it makes sense to go beyond equilibrium thermodynamics. One should be aware that the effort to switch to the rate-based model in commercial process simulation programs is actually limited and is not a reason for refusing this attempt; convergence has also been substantially improved in recent years.

Rate-based calculations are more or less obligatory for absorption and desorption processes, which are in most cases mass transfer limited. The efficiencies vary greatly from component to component and from stage to stage, as well as in strongly nonideal systems. In absorption, the efficiencies are usually only 10–20 %, but even values like 5 % are possible. In reactive distillation, the efficiency does not make sense at all when the main progress on a stage is the proceeding of the reaction and not of the separation.
Reactions with fast reaction rates might be mass-transfer limited. Trace components, which have low mass transfer rates due to their low concentration, can often not be adequately treated with an equilibrium calculation. Likewise, systems with large gaps between vapor and liquid temperature can be heat transfer limited.

For illustration, here is a classical example. The absorption of traces of HCl from exhaust air with water requires one single theoretical stage with an equilibrium model, as HCl is an electrolyte and will completely and immediately dissociate in water. In this case, it becomes clear that the use of a rate-based model is not a matter of accuracy. The step which determines the removal of the HCl from the air is the mass transfer in the gas phase by diffusion. It takes much more effort than the absorption itself once the HCl has reached the boundary layer. For the correct dimensioning, the application of a rate-based model is obligatory; it will result in a by far larger packing height.21 Other measures are not appropriate, neither the increase of the water amount nor the use of caustic soda, which just makes the chemical absorption more irreversible. The HCl does not know that the NaOH is waiting for it in the liquid!

21 often several m of packing, if TA Luft must be reached.


Even the rate-based approach does not represent the total truth. There are a lot of systems which form aerosols in the vapor phase, e. g. sulfuric acid–water or the system HCl–water which was just mentioned above. Aerosols are formed when, due to oversaturation, condensation takes place directly in the vapor phase and not in contact with the liquid phase. The droplets formed are not large enough to settle down into the liquid, and they are not small enough to take part in the diffusion process. They are entrained with the vapor phase, distorting the principle of distillation and absorption. To explain the theory would go beyond the scope of this book. Good explanations can be found in [130] and [131].

5.10 Dividing wall columns

Conventional distillation columns can separate a feed mixture into two product streams with the desired concentrations, as long as the design of the column is appropriate. In the case of a binary mixture, two pure component streams can be obtained. In Figure 5.54, a side draw is additionally taken from the column. However, there is no way to obtain a third product with any desired concentration. Consider a three-component mixture with the light-end A, the heavy-end B, and the middle-boiler C. As the feed is located below the side-draw, it can easily be achieved that there is no heavy-end B in the side-product. However, light-end A will pass the stage where the side-stream is taken, and part of it will inevitably end up in the side stream. Vice versa, if the side-draw is located below the feed, it will contain a certain amount of component B.

Figure 5.54: Column with side draw. Screen images of Aspen Plus® are reprinted with permission by Aspen Technology, Inc. AspenTech® , aspenONE® , Aspen Plus® , and the Aspen leaf logo are trademarks of Aspen Technology, Inc. All rights reserved.

Dividing wall columns (Figure 5.55) are an alternative which is increasingly becoming established. Part of the column is divided by a separation wall which prevents lateral mixing. The feed enters the column on the left-hand side and is split into the components A + C at the upper and B + C at the lower end of the separation wall. At the top of the column there is a rectifying section, giving pure component A as top product. Also, at the bottom the pure heavy-end B can be obtained in the conventional stripping section. The right part of the column is fed with a mixture of A + C from the top and B + C from the bottom. At an appropriate stage in the middle of the right-hand side, pure product C can be withdrawn.

Figure 5.55: Dividing wall column principle. Screen images of Aspen Plus® are reprinted with permission by Aspen Technology, Inc. AspenTech®, aspenONE®, Aspen Plus®, and the Aspen leaf logo are trademarks of Aspen Technology, Inc. All rights reserved.

There is usually a distributor at the top of the divided section where it can be controlled how much liquid is fed from the top to the particular column partitions. Without such a device, the vapor coming from the bottom is split in a way that the pressure drop in both partitions is the same. Therefore, for a proper design the pressure drop correlation used should work sufficiently well. Dividing wall columns represent the highest degree of heat integration between columns; it is estimated that the energy savings amount to 20–35 % in comparison with an adequate conventional distillation arrangement [132].

From the process simulation point of view, the dividing wall column can be represented by two independent columns which are linked by streams entering and leaving the partition on the right-hand side (Figure 5.56). The stripping section, the partition on the left-hand side and the rectifying section form the first column (C1LEFT). At the upper end of the dividing wall, a liquid stream (LL) is withdrawn and led to the right partition, while the complete vapor flow of the right partition enters the rectifying section (VR). Analogously, part of the vapor from the left partition is taken as a side stream to the bottom of the right partition (VL), while the whole liquid from the bottom of the right partition is fed to the stripping section from the top (LR).
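Before setting up the two linked columns in a simulator, the overall mass balance of the three-product split can be fixed by hand. The sketch below solves the three balance equations (two components plus the total balance) for an assumed feed and assumed product purities; all numbers are invented for illustration:

```python
def three_product_split(feed, z, x_top, x_side, x_bottom):
    """Solve the overall mass balance F*z_i = D*xD_i + S*xS_i + B*xB_i
    (components A and B plus the total balance) for the three product
    flows by Cramer's rule. z and x_* are (A, B, C) mole fractions."""
    a = [[x_top[0], x_side[0], x_bottom[0]],   # component A balance
         [x_top[1], x_side[1], x_bottom[1]],   # component B balance
         [1.0,      1.0,       1.0]]           # total balance
    rhs = [feed * z[0], feed * z[1], feed]

    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    d0 = det3(a)
    sols = []
    for col in range(3):
        m = [row[:] for row in a]
        for row in range(3):
            m[row][col] = rhs[row]
        sols.append(det3(m) / d0)
    return tuple(sols)  # (D, S, B)

# invented example: 100 kmol/h feed with 30/30/40 % of A/B/C
D, S, Bm = three_product_split(100.0, (0.30, 0.30, 0.40),
                               x_top=(0.95, 0.00, 0.05),
                               x_side=(0.02, 0.02, 0.96),
                               x_bottom=(0.00, 0.95, 0.05))
```

The C balance closes automatically because the mole fractions of each stream sum to one; the resulting flows can then be used as specifications for the linked-column model.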
The following items should be observed when a dividing wall column is considered:
– For a long time, column hydrodynamics in dividing wall columns were not covered by the commercially available programs. The only way to evaluate hydrodynamics was to define a circular cross-section with an equal area. Meanwhile, some programs already support dividing wall columns.


Figure 5.56: Dividing wall column in process simulation. Screen images of Aspen Plus® are reprinted with permission by Aspen Technology, Inc. AspenTech® , aspenONE® , Aspen Plus® , and the Aspen leaf logo are trademarks of Aspen Technology, Inc. All rights reserved.















– Dividing wall columns have strong advantages if the feed contains substantial amounts of middle boiler C, typically 20–60 %. The larger the concentration of the middle boiler, the more effective is the dividing wall column option compared to a conventional design [243].
– It is clear that both sides of the dividing wall have essentially the same pressure. If it turns out that operating the two sides at different pressures has a considerable advantage, the dividing wall column is not appropriate.
– The partition wall should be thermally insulated to avoid heat transfer across it. Heat transfer will have a negative influence on the performance of the column. Moreover, if there are large temperature differences on both sides of the wall, it will probably cause mechanical stress. This must be taken into account by the mechanical design [243].
– Side reactions which cause the formation of light ends at the bottom or heavy ends at the top thwart the principle of the dividing wall column; it is not useful in these cases [243].
– Due to the hydrodynamic constraint of having the same pressure drop on both sides of the wall, it is at least difficult to provide a significantly different number of theoretical stages on both sides of the wall.
– The dividing wall column has a larger diameter and more stages than each of the columns it represents (Figures 5.55 and 5.56). Likewise, if there are different material requirements on both sides of the wall, the more expensive material will have to be chosen.
– The vapor split is fixed by the design of the column, i. e. by the location of the wall and the pressure drops across the sections. It cannot be adjusted during operation [243].


5.11 Batch distillation

Distillation can be performed either in the continuous mode or as a batch distillation [124]. A continuous distillation is operated at steady state, meaning that the state variables do not change with time. The feed is continuously entering the column, while top and bottom products are continuously withdrawn. In batch distillation, the feed is filled into the bottom vessel of the column at the beginning of the operation (Figure 5.57 (a), regular configuration). Depending on the time, various products can be withdrawn at the top of the column. Side stream products and continuous feeds are optional. There is usually no bottom product; the residue in the bottom vessel can be removed from the column at the end of the distillation process. The state variables in the column change with time; the process is inherently unsteady [124].

Batch distillation is often preferred to continuous distillation if relatively small amounts of material which occur irregularly and possibly with changing composition have to be separated. It is used extensively in laboratory separations and in the production of fine and specialty chemicals, pharmaceuticals, polymers and biochemical products. Batch distillation units are very flexible; they can usually handle different kinds of products, and as a matter of principle, only one column is necessary to split a mixture into its components unless azeotropes occur. As well, hydrodynamic calculations are not as important as for continuous columns; if the column diameter does not fit, the throughput can simply be distributed over a longer time, as long as it is in line with the time schedule of the process (Chapter 3.4). Moreover, a batch product has its own identity, i. e. it can strictly be controlled which feedstock a product comes from, which is often important for quality control in the production of pharmaceuticals [124].
Essentially, there are two different kinds of batch distillation [133]:
– Operation with a constant reflux ratio, where the distillate composition changes continuously. The final product concentration is an average value. At the beginning, it is usually higher so that the product purity is above specification. At the end, the reflux ratio is lower than required, and care has to be taken that operation is stopped while the accumulated product concentration is still in line with the specification. This is a considerable disadvantage; the successful operation can be proved only at the end of the batch. If the final product turns out to be off-spec, the batch has to be blended or rerun [124].
– Operation with a constant distillate composition, where the reflux ratio is varied. This is normally the better approach; however, it requires a control mechanism and is more complex. It might happen that the controller settings must be adjusted during the process due to large changes of the process conditions.

Of course, both the reflux ratio and the product composition can also be varied to optimize the decisive criterion, e. g. batch time, product amount, or minimum cost.

While in the traditional batch distillation the feed is charged to the bottom, other configurations might be useful. In an inverted batch column, the feed is charged to the reflux drum at the top of the column. With this approach, it is easier to remove large amounts of high-boiling components from the final product. Also, the middle vessel configuration is generally more effective in terms of energy efficiency and product rate [124]. All three configurations are illustrated in Figure 5.57.

Figure 5.57: Batch distillation configurations; (a) regular, (b) inverted, (c) middle vessel [124].

Batch distillations are more complicated to calculate than continuous ones, as the MESH equations (Section 5.1) must be supplemented by a term describing the time-dependent holdup on the various stages. Moreover, the change of the holdup on the stages varies drastically with time22 so that the solution of this system of equations is much more difficult. Alternatively, the batch distillation can be represented as a series of continuous distillations where the holdups at the beginning and at the end of the steps are taken as feed streams to or side products from the particular stages, respectively [133].
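For a first feel of the constant-reflux mode, the limiting case of a simple single-stage batch still can be integrated with the classical Rayleigh equation, ln(B0/B) = ∫ dx/(y − x). The constant relative volatility and all numbers below are illustrative assumptions, not data from the text:

```python
import math

def rayleigh_batch(x0, x_end, alpha, b0=100.0, n=2000):
    """Single-stage batch distillation with constant relative volatility:
    integrate ln(B0/B) = int_{x_end}^{x0} dx / (y - x), with the equilibrium
    vapor y = alpha*x / (1 + (alpha - 1)*x), by the midpoint rule.
    Returns remaining charge B, distillate amount D and its average
    composition."""
    dx = (x0 - x_end) / n
    integral = 0.0
    for i in range(n):
        x = x_end + (i + 0.5) * dx                 # midpoint of the slice
        y = alpha * x / (1.0 + (alpha - 1.0) * x)  # equilibrium vapor
        integral += dx / (y - x)
    b = b0 * math.exp(-integral)
    d = b0 - b
    x_d_avg = (b0 * x0 - b * x_end) / d            # overall component balance
    return b, d, x_d_avg

# boil a 100 kmol charge with 50 mol-% light ends down to 20 mol-% in the still
b, d, xd = rayleigh_batch(x0=0.5, x_end=0.2, alpha=2.5)
```

The average distillate composition is richer than the initial charge but far from pure, which illustrates why the reflux policy discussed above matters for meeting a specification.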

5.12 Troubleshooting in distillation

A number of excellent papers and books are available to get support for troubleshooting in distillation, among them [96, 106, 258, 261]. This chapter focuses on γ-ray scanning as a technique to detect areas in the column where blocking, foaming, maldistribution or damage, the most common reasons for column malfunction, prevent regular operation of the column. Its advantage is that the examination can be performed

22 At the beginning, the holdup is built up, whereas at the end there are only slight changes in the composition.

during operation when the problem occurs, without opening the column, which always causes a major interruption and often does not reveal the reason. The principle is as follows: a radioactive source emits γ-radiation. When passing any material, the γ-radiation is attenuated according to

$$ I = I_0 \exp(-\mu \rho x) \tag{5.19} $$

with I as radiation intensity, I0 as the original radiation intensity of the source, μ as the absorption coefficient, ρ as the density and x as the path length through the medium. At high γ-ray energies, the absorption coefficient μ becomes independent of the material, and the absorption process depends only on the product of density and the thickness of the medium. Therefore, the attenuation between source and detector is a direct measure of the average density along the path.

For tray columns, a typical zigzag pattern develops when the source/detector arrangement is moved up and down the column (Figure 5.58 (a)). Alternatingly, the radiation passes the clear vapor space with the lowest density and the solid tray in horizontal direction with the highest density. In the vapor space, attenuation can be detected when there is foam, weeping, liquid entrainment or flooding (Figure 5.58 (b)), each of them having a typical shape which can be identified by a specialist, as well as tray damage or missing trays. Source and detector can be arranged centrally across the tray or, alternatively, across the downcomer to examine its behaviour.
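Equation (5.19) directly explains the zigzag signal: a chord through froth attenuates the beam far more than one through clear vapor. A small sketch with invented densities and an invented absorption coefficient:

```python
import math

def transmitted_fraction(mu, layers):
    """I/I0 according to Eq. (5.19), I = I0*exp(-mu*rho*x), for a beam
    passing a sequence of layers given as (density in kg/m3, length in m).
    mu in m2/kg; at high gamma energies it is material-independent."""
    return math.exp(-mu * sum(rho * x for rho, x in layers))

MU = 0.0055  # m2/kg, invented value for illustration

clear_vapor = transmitted_fraction(MU, [(5.0, 2.0)])     # 2 m of clear vapor
froth_chord = transmitted_fraction(MU, [(300.0, 2.0)])   # 2 m of aerated liquid
```

The strong contrast between the two readings is what makes foam, entrainment, flooding or missing trays visible in the scan.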

Figure 5.58: γ-ray scanning signals for a normally working tray column (a) and a tray column with increasing entrainment up to jet flood from tray to tray (b). Courtesy of IBE-Ingenieurbüro Bulander & Esper GmbH.


In a column with random packing, the packing bed can be found as a region of high density. If the density rises further, one can conclude that plugging or flooding occurs. Maldistribution can be detected by evaluating the attenuation across a number of secants in different directions (Figure 5.59). Surprisingly, there are no success stories for structured packings [96].

Figure 5.59: γ-ray scanning for a packed column with and without maldistribution. Courtesy of IBE-Ingenieurbüro Bulander & Esper GmbH.

6 Two liquid phases

A thermodynamic curiosity is that more than two liquid phases can coexist in equilibrium. Figure 6.1 shows a sketch of seven liquids forming an equilibrium with the same vapor.

Figure 6.1: Seven liquid phases in equilibrium [134].

Fortunately, in technical applications these multi-liquid-liquid equilibria do not play a major role, but there are a number of processes where two liquid phases occur. Especially in extraction processes the two liquid phases are essential, as well as in heteroazeotropic distillations, where two liquid phases form after condensation which must be carefully separated for the continuation of the process.

6.1 Liquid-liquid separators

Usually, in a process the two liquid phases form a dispersion where one phase (the discontinuous phase) is distributed in a continuous phase as droplets [135]. In technical applications, in most cases an aqueous phase and an organic phase occur, where the aqueous phase is usually the heavy one.1 In a liquid-liquid separator, the dispersion should be transformed into two homogeneous phases. Figures 6.2 and 6.5 show horizontal liquid-liquid separators. On the left-hand side, the dispersion enters the separator. As the velocity decreases due to the enlargement of the diameter, turbulence and kinetic energy are reduced. A layer of droplets is formed, where the droplets do not yet coalesce. On the right-hand side, the coalescence takes place; small droplets slowly coalesce. This can be improved by internals. There are two kinds of these:

1 There are exceptions, for instance halogenated organic substances are often heavier than water.


Figure 6.2: Horizontal liquid-liquid separator without internals.

– internals which reduce the kinetic energy and distribute the liquid over the whole cross-flow area;
– internals with a large surface where the droplets can coalesce (e. g. plates, random packing, wire-mesh).

Fiber layers are recommended for droplet diameters between 1 and 100 µm. Plate internals are relatively expensive but useful if solid particles or surfactants are involved or if the pressure drop should be minimized.

The routing of the liquid phases is an important item of the design. While in Figure 6.2 the apparatus is completely filled with the two liquid phases, it is also possible that space for the vapor is left. A siphon can be used to adjust the height of the phase boundary between the two phases (Figure 6.3), where the top height of the siphon can be varied if it is designed as a spool piece [136]. For small amounts of the heavy phase, a special collector can be placed at the bottom (Figure 6.4).

Figure 6.3: Adjusting the phase boundary with a siphon [136].

Figure 6.4: Liquid-liquid separator with a collector for the heavy phase.

A theoretically founded design of liquid-liquid separators is not possible. Some influences are clear: a large density difference, a low viscosity of the continuous phase, and large droplet sizes are favorable for the sedimentation. However, there are many phenomena which are still unclear, e. g. the sedimentation of droplet clusters, the droplet size distribution, and the coalescence behavior. But the most unpredictable issue is the influence of surfactants. Even just traces can significantly change the separation behavior, which often turns the design procedure into a lottery. Also, solid particles often tend to form a layer (crud) which can disturb the separation of the two phases.

The generally acknowledged procedure for the design of liquid-liquid separators driven by gravity is the one of Henschke [137, 138], which describes the transfer of the results of a batch settling experiment in a standardized cell into the design of a separator. Figure 6.6 shows the course of such an experiment, where the light phase is the dispersed one. After the mixing of the two phases has stopped, the droplets start rising upwards. The sedimentation curve indicates the extent of the lower part which is free of droplets. If the sedimentation of the droplets is faster than the coalescence at the phase boundary, a layer of droplets is formed. The droplets coalesce at the phase boundary, which is continuously shifted downwards. The extent of the clear light phase is represented by the coalescence curve. The experiment is finished when only half of the phase boundary is covered with droplets; this definition is necessary to become independent of statistical effects caused by single droplets which coalesce late. The course of the particular curves can be used to adjust the Henschke model. New approaches which also consider internals are under development.

Often, when the phase separation is relatively fast, the dimensions of the separator are determined by its function as a vessel, giving the plant operators time to react (Chapter 9). For the designer, this is a lucky situation, but it has to be proved by experience in any case.
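Even though a rigorous design is not possible, the sedimentation influences noted above (density difference, continuous-phase viscosity, droplet size) can at least be ranked with Stokes' law for a single rigid sphere. This is a first orientation only, since droplet swarms, coalescence and surfactants are not captured; all numbers are illustrative:

```python
def stokes_velocity(d_droplet, rho_disp, rho_cont, eta_cont, g=9.81):
    """Terminal settling/rising velocity of a single rigid sphere in
    creeping flow (Stokes' law), v = g * d^2 * |drho| / (18 * eta).
    SI units throughout; eta_cont in Pa s."""
    return g * d_droplet ** 2 * abs(rho_disp - rho_cont) / (18.0 * eta_cont)

# a 200 um organic droplet (850 kg/m3) rising in water (1000 kg/m3, 1 mPa s)
v_rise = stokes_velocity(200e-6, 850.0, 1000.0, 1.0e-3)  # roughly mm/s
```

The quadratic dependence on the droplet diameter shows why coalescence-promoting internals, which enlarge the droplets, are so effective.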


Figure 6.5: Horizontal liquid-liquid separator in chemical industry.

Figure 6.6: Course of a settling experiment with the light phase as the dispersed one [139]. © Wiley-VCH Verlag GmbH & Co. KGaA. Reproduced with permission.

6.2 Extraction

In liquid-liquid extraction, a substance (extractive) is removed from a solvent by an extracting agent in the liquid phase which is not completely miscible with the solvent [89]. Extraction has advantages in comparison with distillation if
– the separation factors are small, in the worst case at the azeotropic point;
– there are several components with significantly different boiling points which can be separated simultaneously;
– substances with extremely high or low boiling points occur;
– the concentration of the high-boiling substance is low so that a very large part of the mixture has to be evaporated;
– sensitive fluids must not be heated up.

The stream containing mainly the selective agent and the extractive is called the extract. The solution that has been cleaned from the substance is called the raffinate. The physical foundation of the extraction is the liquid-liquid equilibrium between the substances involved. A single equilibrium step is usually not sufficient. As for distillation and absorption, the separation effect can be increased by providing a number of separation stages in a row. Again, columns are possible, where the solution and the extracting agent are introduced at opposite ends of the column. The driving force of the countercurrent flow is the density difference between the two liquid phases; therefore, the light phase inlet is at the bottom, and the heavy phase inlet is at the top.

The design of extraction equipment should provide good mass transfer conditions, i. e. a large contact area of the phases at a high degree of turbulence. One of the two phases is split into droplets, forming the disperse phase. There are many criteria for the choice of the disperse phase, which are sometimes contradictory. Often, the phase with the larger mass flow is dispersed to get a large contact area. In packed columns, the phase with the better wettability should be the continuous one; if the disperse phase wets the packing, the droplets could coalesce and become larger, with less surface for mass transfer. Furthermore, the mass transfer direction should be from the continuous to the disperse phase. Moreover, flammable or poisonous substances should be dispersed to lessen the hazardous potential. The final decision should be based on experiments.

Criteria for the choice of the extracting agent are the extent of the miscibility gap with the solvent, a high selectivity, and a large capacity for the extractive. The separation of the extracting agent from the extract should be as easy as possible; the extractive and the selective agent should have a large boiling point difference and should not form an azeotrope. The density difference between the extracting agent and the solvent should be large so that the separation of the two liquid phases is easy; otherwise, there is also the option to achieve the phase separation by centrifugation. The surface tension between the two phases is a relevant quantity: if it is too large, the formation of small droplets is difficult; if it is too small, the separation of the two phases becomes hard. There are also practical items like a low price, a small vapor pressure so that the losses by evaporation are small, high thermal and chemical stability, and low viscosity, flammability, and toxicity.

Analogous to distillation and absorption, extraction can be described with an equilibrium model or a rate-based model considering mass transfer. A comprehensive description can be found in [97]. Compared with distillation and absorption, the computational modeling of liquid-liquid extraction processes has many more uncertainties. The dimensioning of equipment for extraction is hardly possible without performing a pilot scale test.


If the extraction is performed in a column, the main purpose of a pilot scale test is the measurement of the flooding point for the determination of the column diameter. Flooding occurs when the two liquids cannot maintain the countercurrent flow. The reason can be explained by the velocities of the two phases. Consider the case where the light phase is the dispersed one. Dispersed in a nonmoving heavy phase, the droplets will move upward, as their buoyancy is larger than their weight. The flow resistance determines the rising velocity [8]:

w_rise = (ρ_heavy − ρ_light) ⋅ g ⋅ d² / (18 η)   (6.1)
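Equation (6.1) lends itself to a quick plausibility check. The following Python sketch evaluates the rise velocity for an assumed droplet; all property values are illustrative, not from the book:

```python
# Stokes rise velocity of a droplet of the dispersed light phase,
# Equation (6.1). Property values below are illustrative assumptions.
def rise_velocity(rho_heavy, rho_light, d, eta_cont, g=9.81):
    """Rise velocity in m/s; d is the droplet diameter in m, eta_cont the
    dynamic viscosity of the continuous phase in Pa s."""
    return (rho_heavy - rho_light) * g * d**2 / (18.0 * eta_cont)

# Assumed: 1 mm organic droplet (800 kg/m3) rising in water (1000 kg/m3)
w = rise_velocity(1000.0, 800.0, 1e-3, 1e-3)
print(f"w_rise = {w*100:.1f} cm/s")
# A droplet only reaches the top of the column if w_rise exceeds the
# downward velocity u of the continuous phase; otherwise the column floods.
```

Note that Equation (6.1) assumes creeping flow around the droplet; for millimeter-sized droplets the actual rise velocity is lower than this estimate.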

When the continuous phase is moving with a velocity u, its downward velocity must be lower than the rising velocity of the droplets of the dispersed phase. If this is not the case, the droplets will move downwards and do not reach the top of the column. They accumulate at a certain position, coalesce, and form a continuous phase at this location. There are a number of reasons for flooding. The simplest one is that the flowrates, and therefore u as the velocity of the continuous phase, are too high. Another reason could be an energy input which is too high; this might result in smaller droplets, which have a lower rising velocity according to Equation (6.1). The third reason might be a rapid change in the physical properties due to the mass transfer. A pilot scale test in a column similar to the one to be used in the industrial application, with a realistic feed mixture, is simply obligatory [319].

Even the calculation of the phase equilibria causes problems. Unlike the representation of vapor-liquid equilibria, the binary interaction parameters (BIPs) for the NRTL or the UNIQUAC equation obtained from binary phase equilibrium data cannot simply be transferred to ternary and multicomponent mixtures, as already mentioned in Chapter 2.5. Usually, they yield results which are only qualitatively correct. For a reliable description of liquid-liquid equilibria, the BIPs have to be adjusted not only to binary, but also to LLE data of ternary mixtures.2 Moreover, the temperature dependence of the BIPs, which is especially distinct for systems showing strongly non-ideal behavior, must be carefully regarded.

Example
A water stream contains 400 wt. ppm of tetrachloromethane (CCl4). The CCl4 content shall be reduced significantly. Does it make sense to use a hexane stream as extracting agent, which already contains 1 wt. % of CCl4? The extraction process consists of a one-stage mixer-settler arrangement (Chapter 6.2.1) and happens at t = 30 °C and p = 1 bar.

2 Of course, data from quaternary and higher mixtures would be useful, but there is little available.


Solution
In fact, this option had been disregarded in a similar practical case, as it seemed that diffusion cannot happen against the concentration gradient. However, there are two mistakes. First, the concentration which accounts for mass transfer is the mole concentration. It does not change much, but the comparison must be made between 47 mol ppm in the aqueous phase and 0.6 mol % in the organic phase. Second, the concentration measure which is decisive refers to the chemical potentials, i. e. the activities. For easy illustration, the activity coefficients at infinite dilution are taken. For CCl4, they are:

γ∞_aq = 10200,  γ∞_org = 1.23

Thus, it becomes clear that the product x_CCl4 ⋅ γ_CCl4 is larger in the aqueous phase, and in the equilibrium stage the CCl4 will be transported from the aqueous to the organic phase. An evaluation of the proposed mixer-settler equilibrium stage yields quite a good result: the CCl4 content of the aqueous phase is reduced to 6 wt. ppm or, respectively, 0.7 mol ppm. However, note that afterwards the aqueous phase is also saturated with hexane (33 wt. ppm, corresponding to 7 mol ppm).
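The comparison above can be written out numerically; a minimal sketch, with all numbers taken from the example and the product x⋅γ standing in for the activity:

```python
# Compare x*gamma (proportional to the activity) of CCl4 in both phases,
# using the infinite-dilution activity coefficients quoted above.
a_aq = 47e-6 * 10200.0   # 47 mol ppm in water, gamma_inf = 10200
a_org = 0.006 * 1.23     # 0.6 mol % in hexane, gamma_inf = 1.23
print(a_aq, a_org)       # the aqueous value is about 65 times larger
```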

But even if the phase equilibria are well-known, a number of issues exist which will not be solved in the foreseeable future. The surface tension between two liquid phases, which determines the effort for the phase separation, cannot be predicted; small amounts of impurities can have a significant influence. To determine the residence time necessary for the phase separation, experimental tests are necessary. If rate-based models are applied, the droplet sizes and their distribution for the calculation of the phase boundary surface, the wettability of the internals, and the optimum velocities cannot be predicted. Also, the diffusion coefficients in the liquid phase have a large influence on the results, but their estimation is inaccurate [11]. Finally, the choice of the disperse phase can be supported by theoretical considerations, but at the end of the day, an experimental confirmation is required.

The scale-up of extraction columns is more difficult than for distillation or absorption columns. Columns with large diameters do not show the same separation efficiency as laboratory or pilot scale columns. The reason is supposed to be a wider distribution of the residence time of the droplets, which has in principle a negative influence. For the design of an extraction, it is obligatory to perform a series of experiments [139]. These experiments comprise shaking trials to verify the phase equilibrium and to characterize the dispersion and coalescence behavior, as well as laboratory, miniplant, or pilot plant runs as the basis for the scale-up.

In distillation, absorption, and extraction, it is essential that the phases which participate in the separation process are exposed to each other intensively. Subsequently, they have to be separated again. This is easy for distillation and absorption due to the large density differences between liquid and vapor, but much more difficult in extraction with its small density differences between the two liquid phases.
Therefore, the equipment used for extraction must enable the two phases to separate after a certain contact time.


6.2.1 Mixer-settler arrangement

The simplest concept is the mixer-settler arrangement, where mixing and separation take place at two different locations. In the simplest case, the mixer is a stirred vessel, and the separator is another vessel providing residence time for the settling of the phases. Wire-mesh or packing elements can support the separation. If phase equilibrium and complete separation are achieved, one theoretical stage is realized. Mixer and settler can as well be arranged in a more compact way (Figure 6.7), where countercurrent flow also takes place.

Figure 6.7: Mixer-Settler arrangement [89]. © Wiley-VCH Verlag GmbH & Co. KGaA. Reproduced with permission.

The advantage of the mixer-settler principle is the easy scale-up by numbering-up. The load range is large, and mixer-settler units are appropriate for extreme mass flow ratios of the two phases. The height of mixer-settler units is low; however, the required floor space is very large, as is the liquid holdup.

6.2.2 Extraction columns

For extraction, sieve tray columns and both random and structured packing columns are used [97, 139]. In contrast to distillation, the sieve trays do not have a weir. There is a downcomer or a pipe so that the heavy phase can get to the tray below. The light phase accumulates below the tray above. The heavy phase coming down displaces the light phase, which is forced to pass to the tray above through the sieve holes. Therefore, the light phase is the disperse one, whereas the heavy phase is the continuous phase. If the heavy phase should be the disperse one, a different construction must be chosen, with pipes leading to the tray above. In extraction, these sieve trays have an efficiency of η = 10–30 %. The load range is relatively small. Another disadvantage is the fact that a relatively large part of the column is used for accumulating the light phase below the next tray. On the other hand, the construction is quite simple, and the backmixing by dispersion is effectively prevented. Also, the scale-up is easy. The density difference between the liquid phases should be quite large (> 100 kg/m³).


Figure 6.8: Pulsated sieve tray column [89]. © Wiley-VCH Verlag GmbH & Co. KGaA. Reproduced with permission.

A significant improvement of the efficiency can be achieved by pulsation. The liquid in the column is vibrated by means of a piston pump (Figure 6.8). The amplitudes of these vibrations are 6–10 mm, and the frequencies are between 50 and 150 min⁻¹. The light phase passes the holes during the upstroke, and the heavy phase passes during the downstroke. In this way, new phase contact areas are continuously formed. Another option for the pulsation is the movement of the trays themselves. Pulsated columns have a good separation efficiency, but like the nonpulsated tray columns, their load range is small.

For packed columns, it is important that the packing is easily wetted by the continuous phase. The disperse phase should not wet the packing; otherwise, the droplets might coalesce, which lowers the interfacial area.

Often, extraction columns with rotating elements are used. The principle is that both phases are thoroughly mixed by means of the input of mechanical energy. Many small droplets with a large surface are formed, and the mass transfer is improved. The two liquid phases are separated in designated zones, and the axial backmixing, one of the key problems of extraction columns, is restricted. The drawbacks of these columns are the high price and that they are prone to malfunction and attrition. One of the most popular extraction column types with rotating elements is the Kühni column (Figure 6.9). A turbine agitator produces a circulation flow with a high interfacial area between the liquid phases. Perforated discs provide for the separation of the phases. The separation efficiency is high; there are up to 10 stages per m [306]. Again, the small load range is the main drawback. The rotating disc contactor (RDC) has horizontal rotating discs on a shaft, which provide the dispersion of the phases (Figure 6.10). A minimum viscosity of the phases is necessary, as the dispersion is caused by shear forces. The backmixing is restricted by the stator rings linked to the wall.
However, for assembly reasons the inner diameter of these stator rings is larger than the outer diameter of the rotating discs, so that the backmixing is not really prevented.

Figure 6.9: Kühni column. © Sulzer Chemtech Ltd.

Figure 6.10: Rotating disc contactor (RDC). © Sulzer Chemtech Ltd.

RDCs can realize large throughputs but only 0.5–1 stages per m [306]. The asymmetric rotating disc contactor (ARDC) is a further development. The shaft with the rotating discs is placed non-concentrically with the column axis (Figure 6.11). Separation and transport of the phases take place in dedicated zones at the column wall which are separated from the mixing zone by vertical plates. Compared to the RDC, the maximum throughput is a bit lower, but the separation efficiency is much better (1–3 stages per m) [306].


Figure 6.11: Asymmetric rotating disc contactor (ARDC) [89]. © Wiley-VCH Verlag GmbH & Co. KGaA. Reproduced with permission.

The hydraulic design of extraction columns is difficult. Details can be found in [97] and [139]. The most important criterion for the determination of the column diameter is the flooding point. Flooding is reached when the countercurrent flow can no longer be maintained, e. g. if the buoyancy of the dispersed light phase is not sufficient to overcome the flow resistance of the droplets in the continuous heavy phase coming down. The droplets of the light dispersed phase can be entrained downward to the bottom outlet, or a phase inversion can take place if the droplets accumulate and coalesce. A reasonable calculation of these phenomena is hardly possible, as the necessary information, the droplet size distribution, is not accessible. As a strongly simplifying consideration, the layer approach can be used: the two phases cover a part of the cross-flow area according to their holdup and move in opposite directions in countercurrent flow. The velocities are in the order of magnitude of 1 cm/s.
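As a rough illustration of the layer approach, a hypothetical sizing sketch; the throughput of 20 m³/h is an assumed value, only the 1 cm/s order of magnitude is taken from the text:

```python
import math

# Rough column cross-section from the layer approach: the total liquid
# throughput (assumed here as 20 m3/h) divided by a phase velocity of the
# order of 1 cm/s gives the required cross-flow area.
V_dot = 20.0 / 3600.0      # m3/s, assumed total throughput
u = 0.01                   # m/s, order of magnitude from the text
A = V_dot / u              # m2, required cross-sectional area
D = math.sqrt(4.0 * A / math.pi)
print(f"D ≈ {D:.2f} m")
```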

6.2.3 Centrifugal extractors

Centrifugal extractors are a third type of equipment for liquid-liquid extraction. The countercurrent flow and the phase separation are not achieved by gravity but by centrifugal forces. The internals and the liquid routing provide an intensive mixing and the subsequent phase separation after a residence time of a few seconds, giving high throughputs and low holdups. As both investment and operation costs are high, centrifugal extractors are mainly used in the pharmaceutical industry or if expensive solvents are involved. The principle of centrifugal extractors is that the extractor rotates at a high speed. By means of the centrifugal forces, the heavy phase is forced to the outer wall, whereas the light phase is displaced towards the rotation axis. The Podbielniak extractor (Figure 6.12) is an example. The phase separation is supported by concentric perforated sheets.


Figure 6.12: Podbielniak extractor [89]. © Wiley-VCH Verlag GmbH & Co. KGaA. Reproduced with permission.

Countercurrent flow is achieved by feeding the light phase at the wall and the heavy phase at the rotating shaft. Podbielniak extractors can have 3–5 theoretical stages.

7 Alternative separation processes

For thermal separations, the alternative processes membrane separation, adsorption, and crystallization can solve some problems where the standard operations fail. They are explained in the following sections.

7.1 Membrane separations

At the inlet of the membrane there is one stream you do not like. Downstream of the membrane, there are two of them. (Hans Haverkamp)

Fortunately, this does not always apply. Membranes can be used successfully for thermal separation problems, especially in combination with other processes. Figure 7.1 shows the principle of the membrane separation process and the designation of the streams. The membrane separates two spaces from each other. However, substances can pass through the membrane and get to the other side. The stream having passed the membrane is called the permeate; the stream which has not is the retentate. For the various substances, the permeability of the membrane is different, which is the basis of the separation effect.

Figure 7.1: Principle of the membrane separation process [11]. © Wiley-VCH Verlag GmbH & Co. KGaA. Reproduced with permission.

There are two different principles of membrane separation. The first type acts like a sieve or a filter; small molecules can pass the membrane (permeate), whereas larger molecules cannot (retentate). As membrane materials, glass-like polymers like polyetherimide or polysulfone are used. Depending on the size of the retained particles, a distinction is made between microfiltration, ultrafiltration, and nanofiltration.

Nanofiltration is normally used for the treatment of aqueous systems. It separates particles down to a particle size of 1 nm. The driving force is a pressure difference of up to 40 bar between both sides of the membrane. In many cases, nanofiltration is also ion-selective. While monovalent ions can pass the membrane easily, bi- or multivalent ions are held back. For noncharged components, the MWCO value (molecular weight cut-off) is often used for the first characterization of a membrane process. It is defined as the molecular weight of a solute which is rejected from passing the membrane by 90 %. For commercial membranes, values of 150–900 g/mol are typical [315]. Well-known applications are the removal of water hardness (Ca ions), the decoloring of waste waters from the textile and pulp industry, and the desalination of waste waters.


Ultrafiltration is operated with a pressure difference across the membrane between 3 and 10 bar. It can be used for separating highly molecular substances from a liquid. Microfiltration is used for removing particles between 0.1 and 10 µm. The mass transfer through these porous membranes can be explained with the pore model. For a porous membrane, the size of the molecules or ions to be separated and the pore size of the membrane are of the same order of magnitude. In this case, the membrane separation is comparable to a sieve filtration.

Solubility membranes act in a different way. The mechanism for this separation is the combination of solution and diffusion. On the high pressure side of the membrane, a component is dissolved in the membrane polymer. It is then transported to the other side of the polymer by diffusion and desorbs at the low pressure side of the membrane. The driving force of the procedure is the partial pressure difference between the two sides of the membrane. For the permeability of a component through the membrane, the product of the solubility in the membrane polymer and the diffusion coefficient is decisive, according to

ṅ_i = (D_i ⋅ S_i / l) ⋅ Δp_i   (7.1)

where ṅ is the mole flow [mol/(m² s)], D is the diffusion coefficient [m²/s], S is the solubility parameter [mol/(m³ Pa)], and l is the thickness of the polymer layer [m]. For example, pentane has a higher permeability through a silicone membrane than nitrogen. Its diffusion coefficient in silicone is three times lower than that of nitrogen, but its solubility is 200 times larger. Therefore, the permeabilities differ by a factor of approx. 60. The quality of the separation depends mainly on the selectivity of the membrane, which is defined as the ratio (D1 S1)/(D2 S2) of the two components to be separated. Figure 7.2 illustrates the permeation behavior of various substances in different membranes, with some remarkable and unexpected results.

To achieve high fluxes through a solubility membrane at sufficient separation efficiency, it is necessary that the active layer for the separation is extremely thin [89]. The handling of thin materials is difficult; the solution is a so-called asymmetric membrane. These membranes consist of a thin active layer (approx. 0.01–0.05 µm) connected to a porous supporting layer (approx. 100 µm). The supporting layer provides the mechanical stability without contributing significantly to the mass transfer resistance. The supporting layer can be made of the same material (phase inversion membrane) or a different one (composite membrane). In some cases, an additional layer made of polyacrylonitrile fibres is used to further increase the mechanical stability. Examples of membrane materials are polyvinyl alcohol, cellulose, or its derivatives for organic membranes, whereas inorganic membranes can be made of sintered metal powder, glass in a spongy structure, carbon, or ceramic. Organic membranes are more widely used because of their low price and their mechanical stability. However, inorganic membranes are thermally and chemically stable and have long durabilities.

Figure 7.2: Permeation behavior of various substances in different membranes.

Elastomer membranes preferentially let organic substances pass and have lower permeabilities for low-boiling gases like nitrogen, oxygen, or hydrogen. Membranes are used as modules which provide relatively high mass transfer areas per volume. The most established ones are pipe modules, coil modules, and plate modules (Table 7.1, Figure 7.3).

Table 7.1: Specific mass transfer areas of the particular membrane modules [8].

Module                  Specific area [m²/m³]
Pipe module             25
Plate module            100–600
Spiral wound module     500–1000
Capillary module        > 1000
Hollow fibre module     approx. 10000
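A minimal sketch of the solution-diffusion flux of Equation (7.1); the pentane/nitrogen comparison uses only the relative factors quoted in the text, all absolute values being illustrative assumptions:

```python
# Solution-diffusion flux through a dense membrane, Equation (7.1).
def flux(D, S, l, dp):
    """Mole flux n_i in mol/(m2 s): D [m2/s], S [mol/(m3 Pa)],
    l: active layer thickness [m], dp: partial pressure difference [Pa]."""
    return D * S * dp / l

# Permeability P = D*S. Pentane vs. nitrogen in a silicone membrane:
# D is three times lower, the solubility is 200 times larger.
ratio = (1.0 / 3.0) * 200.0
print(round(ratio))   # about 67, i.e. "a factor of approx. 60"
```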

A prediction of the membrane separation behavior is difficult and not yet state of the art. The flow conditions on both sides of the membrane play an important role, as they have an influence on the concentration profile. Empirical or semiempirical models are needed for the modeling of the mass transfer. Any extrapolation of these models is difficult, so that experiments are definitely needed for both the choice of the membrane and the design of the membrane separation process. From the qualitative point of view, it can be stated that high fluxes are only possible if the solubility in the membrane is high. Therefore, polar membranes (e. g. polyvinyl alcohol) are appropriate for the separation of water, while hydrophobic membranes (e. g. polydimethylsiloxane, PDMS) can be used for the separation of organic components from aqueous solutions.

Figure 7.3: Different kinds of membrane modules. Courtesy of Prof. Dr. J. Gmehling.

Besides the design problems, there is always the question about the durability of the membrane. The experience is that in multicomponent mixtures there is usually at least one substance which is detrimental to the membrane. Proof that the membrane is stable can only be achieved by a long-term test. This issue and the design effort are the reason why membrane processes are only used if distillation or other unit operations are not appropriate. But membrane separations are an option in combination with other operations, e. g. with distillation to overcome azeotropic points. For waste water treatment, where small amounts of organic substances have to be removed, membrane separations are a very popular choice. Also, membrane separations are used for the separation of gas mixtures, the recovery of salts from diluted aqueous solutions, the desalination of sea water, or dialysis for patients with a kidney disease. Table 7.2 gives an overview of the most important membrane processes, the phases involved, and the membrane types. For reverse osmosis, pervaporation, and vapor or gas permeation, the same membrane type is used; the difference is just the phases involved.

We can distinguish between dead-end and crossflow filtration. In dead-end filtration, the flow goes through the membrane in a perpendicular direction. The filtered particles are collected at the surface of the membrane and form a filter cake. In crossflow filtration, the flow direction is parallel to the membrane surface. If particles occur, they might deposit on the membrane. A sufficient flow velocity must be provided to reach an equilibrium between deposition and abrasion.

For reverse osmosis, semipermeable membranes are used, where in the ideal case no transport of dissolved components (e. g. salts) takes place. On the other hand, the membrane should be fully permeable for the solvent itself.
Because of the concentration difference, the solvent (e. g. water) goes through the membrane until equilibrium is reached. This is the case when the hydrostatic pressure is equal to the osmotic pressure (osmotic equilibrium). If the pressure on the side containing the dissolved components is increased above this osmotic pressure, the process is inverted, i. e. the solvent concentration on this side is even decreased (reverse osmosis). The most well-known application is sea water desalination. In any case where a heavy end component has to be removed from water, reverse osmosis should be taken into account to avoid the evaporation of large amounts of water. A good rule of thumb for the pressure used is 60 bar. Reverse osmosis as explained above is illustrated in Figure 7.4.

Table 7.2: Technically important membrane processes [8].

Membrane process   Type           Driving force    Phases   Application
Microfiltration    porous         Δp < 5 bar       S/L      removal of solid particles from suspensions
Ultrafiltration    porous         Δp < 10 bar      L/L      waste water treatment, drinking water purification
Nanofiltration     porous/dense   Δp < 40 bar      L/L      treatment of aqueous solutions and oil fractions
Reverse Osmosis    porous/dense   Δp < 60 bar      L/L      waste water treatment, drinking water purification
Dialysis           porous/dense   conc. diff.      L/L      kidney dialysis, acid recycling
Electrodialysis    dense          electric field   L/L      removal of ions from aqueous solutions
Pervaporation      dense          fugacity diff.   L/V      separation of azeotr. systems, removal of unwanted traces
Vapor Permeation   dense          fugacity diff.   V/V      separation of azeotr. systems, water removal in reactions
Gas Permeation     porous/dense   fugacity diff.   G/G      separation of gas mixtures

Figure 7.4: Osmosis, osmotic equilibrium, and reverse osmosis [11]. © Wiley-VCH Verlag GmbH & Co. KGaA. Reproduced with permission.

The equation for the osmotic pressure can be derived from chemical potentials [11]. Referring to Figure 7.4 and setting the activity coefficient for the solvent to 1, one gets

Π = p_B − p_A = −(R T / v_L,j) ⋅ ln x_j   (7.2)


where the index j denotes the solvent. Equation (7.2) is very convenient to use, as mass balances from process simulation indicate the molar concentration x_j of the solvent. Nevertheless, Equation (7.2) is often rewritten as

Π = p_B − p_A = R T ∑_i c_i   (7.3)

where the index i denotes the respective solutes. The temptation to use Equation (7.3) manually is great. However, one must take into account that the ionic species dissociate. Therefore, the mole number of dissolved species is higher than expected, and so are the osmotic pressures.

Example
10000 kg/h of a 1 wt. % solution of sodium chloride is to be concentrated to 10 wt. % by a single evaporation step without vapor recompression. Alternatively, a reverse osmosis unit can be inserted upstream, where the pressure is restricted to p = 60 bar. The temperature is set at 300 K. Estimate whether the reverse osmosis unit can save operation costs. The electricity rate shall be 10 ct/kWh; the steam costs shall be 20 €/t. M_water = 18.015 g/mol, M_NaCl = 58.4425 g/mol.

Solution
First, the steam demand without the reverse osmosis unit is estimated. The mixed stream consists of 100 kg/h sodium chloride and 9900 kg/h water. To increase the concentration to 10 %, the water amount must be reduced to 900 kg/h. Therefore, 9000 kg/h of water, more than 90 %, have to be evaporated, requiring approximately the same amount of steam. A large effort is necessary to change small concentrations. For the membrane consideration, the mole compositions of the stream are considered:

n_water = 9900 kg/h / (18.015 g/mol) = 549.54 kmol/h
n_NaCl = 100 kg/h / (58.4425 g/mol) = 1.711 kmol/h

The latter splits into 1.711 kmol/h Na+ and 1.711 kmol/h Cl− ions, giving the concentrations

x_water = 549.54 / (549.54 + 2 ⋅ 1.711) = 0.9938
x_Na+ = 1.711 / (549.54 + 2 ⋅ 1.711) = 0.00309
x_Cl− = 1.711 / (549.54 + 2 ⋅ 1.711) = 0.00309

Applying Equation (7.2) with Π ≈ 60 bar (a bit less, as the permeate must have a remaining overpressure to be transported out of the membrane) and a specific liquid volume of v_L,water ≈ 0.001 m³/kg = 0.018 m³/kmol, one gets after solving for x:

x_RO,water = exp[−Π ⋅ v_L,j / (R T)] = exp[−60 ⋅ 10⁵ Pa ⋅ 0.018 m³/kmol / (8.31446 J/(mol K) ⋅ 300 K)] = 0.9576


This is the minimum mole concentration of the solvent one can achieve in the retentate with reverse osmosis. The mole concentrations of the ions are x_RO,Na+ = x_RO,Cl− = (1 − 0.9576)/2 = 0.0212, corresponding to the mass concentration

x_ROw,water = 0.9576 ⋅ 18.015 / (0.9576 ⋅ 18.015 + 0.0212 ⋅ 22.99 + 0.0212 ⋅ 35.453) = 0.933

The retentate contains the 100 kg/h NaCl and, correspondingly, 1392.5 kg/h water.1 Therefore, another 492.5 kg/h of water have to be removed by evaporation. For the pressure elevation, assuming a pump efficiency of η = 0.7, the power can be calculated to be

P ≈ ṁ v_L Π / η = 10000 kg/h ⋅ 0.001 m³/kg ⋅ 60 bar / 0.7 = 23.8 kW

Without reverse osmosis, the operation costs are

C_evap = 9000 kg/h ⋅ 20 €/t = 180 €/h   (7.4)

For the option with reverse osmosis and evaporation, the operation costs

C_RO+evap = 492.5 kg/h ⋅ 20 €/t + 23.8 kW ⋅ 10 ct/kWh = 12.23 €/h   (7.5)

can be assigned. Over one year (≈ 8000 h), the difference amounts to 1.34 million €, which should rapidly pay off the investment costs of the reverse osmosis.
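The arithmetic of this example can be reproduced with a short script; all numbers are taken from the example above:

```python
import math

# Reproduces the reverse-osmosis example: feed 10000 kg/h of 1 wt. % NaCl,
# target 10 wt. %, membrane pressure 60 bar, T = 300 K.
R, T = 8.31446, 300.0                   # J/(mol K), K
n_water = 9900.0 / 18.015               # kmol/h
n_nacl = 100.0 / 58.4425                # kmol/h, dissociating into two ions
x_water = n_water / (n_water + 2.0 * n_nacl)

# Equation (7.2): minimum water mole fraction in the retentate at 60 bar
v_L = 0.018e-3                          # m3/mol (0.018 m3/kmol)
x_ro = math.exp(-60e5 * v_L / (R * T))

# Evaporation duties and pump power
evap_no_ro = 9900.0 - 900.0             # kg/h without reverse osmosis
evap_with_ro = 1392.5 - 900.0           # kg/h downstream of the membrane
P = 10000.0 / 3600.0 * 0.001 * 60e5 / 0.7 / 1000.0   # kW, eta = 0.7

cost_no_ro = evap_no_ro * 20.0 / 1000.0              # €/h, steam only
cost_ro = evap_with_ro * 20.0 / 1000.0 + P * 0.10    # €/h, steam + power
print(round(x_water, 4), round(x_ro, 4), round(cost_no_ro), round(cost_ro, 2))
# → 0.9938 0.9576 180 12.23
```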

Pervaporation is different from the other membrane processes, as not only the membrane separation but also a phase change takes place. A liquid feed stream enters the membrane module and is split into a liquid retentate stream and a permeate stream in the vapor state. By lowering the partial pressure on the permeate side, the fugacity difference as the driving force is increased. Nevertheless, the enthalpy of vaporization has to be added; otherwise, the temperature on the permeate side would be significantly lowered, especially in multiple-stage membrane modules. Besides the removal of organic compounds from aqueous solutions (or vice versa, according to the polarity of the membrane), pervaporation is an attractive option for the separation of azeotropes in combination with distillation. As mentioned above, the separation in the membrane depends mainly on the solubility and on the diffusion through the membrane, so that the separation characteristics can differ significantly from the vapor-liquid equilibrium.

In gas permeation, in contrast to pervaporation, the inlet stream is gaseous as well. The mass transfer is proportional to the fugacity difference across the membrane. Porous and dense membranes can be used. The main application is the recycling of hydrogen in the ammonia and methanol manufacturing processes. Moreover, gas permeation is used for nitrogen enrichment of ambient air, natural gas drying, the separation of ethylene and carbon dioxide, and the separation of helium from natural gas.

1 Check: 1392.5/(1392.5 + 100) = 0.933.

Figure 7.5: Illustration of thermal equilibrium between permeate and retentate.

Applying gas permeation, the temperatures of the outlet streams are often surprising. Figure 7.5 tries to illustrate this effect by splitting the continuous flow through the membrane into just two sections. Passing the membrane, the permeate is subject to a considerable pressure drop, which might cause a significant temperature decrease due to the Joule–Thomson effect. The pressure of the retentate stream decreases only slightly because of normal friction, giving only a slightly lower temperature. However, permeate and retentate flow in parallel along both sides of the membrane, where they are only separated by the thin membrane. Because of the large cross-flow area of the membrane, it can be assumed that they reach thermal equilibrium, where the retentate is cooled down and the permeate is warmed up (dashed rectangles in Figure 7.5), before they leave the section. In the next section, the permeate temperature decreases to an even lower value, as the starting temperature on the feed side is already lower. Again, retentate and permeate come into thermal equilibrium (dashed rectangle). As a result, the retentate outlet temperature corresponds to the lowest temperature reached with the Joule–Thomson effect, whereas the permeate temperature is a mixing temperature of the permeates of the particular sections. Due to this internal heat exchange, the retentate outlet temperature is actually lower than the permeate one, although the retentate is not strongly affected by the Joule–Thomson effect.

In electrodialysis, the potential difference on both sides of the membrane can be increased by applying an electric field if electrolytes have to be separated [140]. Ion-selective membranes can support this process. Much more information on membranes can be obtained from [141], [142] and [256].
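The staged picture of Figure 7.5 can be mimicked numerically. The sketch below is purely illustrative — the Joule–Thomson coefficient, pressure drop, flow split, and temperatures are assumed values, not data from the text — but it reproduces the counterintuitive result that the retentate leaves colder than the mixed permeate:

```python
# Illustrative staged membrane model: in each section, a permeate fraction
# undergoes a Joule-Thomson temperature drop, then permeate and retentate
# reach thermal equilibrium across the membrane (equal heat capacities assumed).
mu_jt = 0.5        # K/bar, assumed Joule-Thomson coefficient of the gas
dp = 20.0          # bar, assumed pressure drop across the membrane
n_sections = 5
m_ret = 100.0      # kg/h flow entering the unit on the retentate side
t = 40.0           # degC feed temperature
permeate_out = []  # (mass, temperature) of each section's permeate

for _ in range(n_sections):
    m_perm = 0.05 * m_ret          # assume 5 % of the local flow permeates
    t_perm = t - mu_jt * dp        # JT cooling of the permeate
    m_ret -= m_perm
    # thermal equilibrium across the membrane before the streams move on:
    t = (m_ret * t + m_perm * t_perm) / (m_ret + m_perm)
    permeate_out.append((m_perm, t))

t_ret_out = t                      # lowest temperature reached
t_perm_out = (sum(m * T for m, T in permeate_out)
              / sum(m for m, _ in permeate_out))  # mixing temperature
print(round(t_ret_out, 1), round(t_perm_out, 1))
assert t_ret_out < t_perm_out  # retentate leaves colder than the mixed permeate
```

Each section equilibrates at a lower temperature than the previous one, so the retentate outlet takes the coldest value while the permeate outlet is a mix of all section temperatures.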


7.2 Adsorption

When solids get into contact with gaseous or liquid substances, interactive forces occur which can result in these substances being bonded to the solid. This effect is called adsorption. The strength of these bonds can differ from component to component, which can be sufficient to achieve a selective separation. Especially microporous solids with a high specific surface, corresponding to a high capacity, can be used as adsorptive agents. For the separation, besides the different equilibrium loads, steric (sieve effect) and kinetic effects (different diffusion coefficients) can be used as well. Especially the development of improved adsorptive agents (e.g. molecular sieves) and better regeneration techniques increased the relevance of adsorption as a thermal separation process [89].

Whenever the separation factor of the vapor-liquid equilibrium is close to 1 (azeotropes, isomers), when difficult process conditions would have to be realized (high or low temperatures) or only small amounts of impurities have to be removed (waste water, exhaust air), adsorption has advantages in comparison with distillation. On the other hand, adsorption always means that the process becomes discontinuous. The adsorption unit is saturated after a time, and a regeneration has to take place. During this time, a second bed (twin plant, see Figure 7.8) must take over until regeneration of the first bed has been finished.

Adsorption is used for the drying of gases and solvents, for the removal of condensable components (CO2, H2O, hydrocarbons) upstream the air separation, for natural gas conditioning, separation of nitrogen or oxygen from air, separation of hydrocarbon mixtures, and the treatment of waste water and exhaust air (Chapter 13.4.6).

A number of adsorbents have been developed for various applications. Due to pores in their structure, they have enormous specific surfaces (up to 1000–1500 m²/g), leading to a correspondingly large adsorption capacity.
These materials are manufactured by degradation reactions of solids, where fluid reaction products are formed and removed immediately. If the reaction temperature is below the melting point of the solid, the crystal cannot sinter together, and the holes and pores remain. The diffusion of the adsorbed substance inside the pores is usually the step which determines the necessary residence time. Examples of common adsorbent materials are activated carbon, silica gel, clay gel and zeolites, where the latter act as molecular sieves.

The adsorptive agent (adsorbent) should have a high selectivity and a high capacity. Adsorption of water (except for drying purposes) and of polymerizing components must be avoided. A low effort for the regeneration is desirable as well. There are hydrophilic (e.g. silica gel, aluminium oxide, zeolites) and hydrophobic adsorptive agents (e.g. activated carbon, carbon molecular sieves). Activated carbon is a very inexpensive adsorbent which is used especially for the removal of hydrocarbons or nonpolar components in general from waste water. Its mechanical stability is limited, and it has a tendency to cause fires. On the other hand, activated carbon is so cheap that regeneration can often be omitted; it can be sent directly to incineration.


Zeolites (molecular sieves) are crystalline aluminosilicates of alkali or alkaline earth metals. They have defined cavities and pore diameters; the pore diameters are between 0.3–0.8 nm. The well-defined structure can be used for the separation of molecules of different size or shape, e.g. the separation of linear and branched alkanes or of m- and p-substituted aromatics. A frequently applied option is the removal of water from gases and solvents with a so-called KA zeolite (cation = K = potassium, pore diameter 3 or 4 Å). A 3 Å molecular sieve has the advantage that only water is adsorbed and coadsorption of other components hardly takes place. A very common application is the separation of the ethanol/water azeotrope (see below).

To keep the dimensions of the adsorber low, adsorptive agents must have a large surface. It is the inner surface which is decisive. An adsorbent particle is porous; one distinguishes between macropores (d > 50 nm), mesopores (d = 2–50 nm) and micropores (d < 2 nm). The large specific surface is mainly caused by the great number and good accessibility of the micropores. In Table 7.3, ranges for the specific surface of the various adsorptive agents are given.

Table 7.3: Specific surfaces of various adsorptive agents [89].

Adsorptive agent                  Specific surface (m²/g)
Activated carbon, general         300–2500
Activated carbon, narrow pores    750–850
Silica gel, wide pores            300–350
Aluminium oxide                   300–350
Zeolites (molecular sieves)       500–800
Carbon molecular sieves           250–350

Adsorption processes for a particular component can be characterized by their adsorption isotherms, i.e. the relationship between the adsorbed amount and the concentration in the fluid phase at a certain temperature. There are five particular types, which can be physically interpreted. These adsorption isotherms can hardly be estimated, which means that the quality of an adsorption process cannot be predicted without references or experiments. The adsorption equilibrium between the concentrations of a component in the fluid phase (adsorptive) and in the phase at the surface of the adsorbent (adsorbate) is decisive for the choice of the adsorbent and the design of the adsorption column. The amount adsorbed per g adsorbent depends on the temperature, on the partial pressure or, respectively, the concentration, and on the kind of the adsorbent, including the manufacturing process (size of the inner surface) and the history (aging, regeneration).

There are a number of equations for the isothermal adsorption equilibrium of pure substances. Many of them are based on the Langmuir approach, where some simplifications were made (homogeneous surface, no interaction between the adsorbed molecules):

ni/ni,mon = Ki pi/(1 + Ki pi),    (7.6)

where ni,mon is the load for the limiting case of a monomolecular layer.

For the correlation of multicomponent adsorption isotherms, there is at least some theory, which is similar to the procedure for correlating VLE. However, a prediction method like UNIFAC is still missing. Furthermore, for technical applications the parameters characterizing the adsorbent, like specific surface, pore distribution, crystal irregularities, and interactions with the adsorbed species, are often not reproducible, not to mention the kinetics and mass transfer effects. More detailed information about adsorption isotherms is given and derived in [143]. It is obligatory that adsorption equilibria be measured, and this is usually a large effort.

There are five types of adsorption isotherms (Figure 7.6) [89]. For type I, a monomolecular layer is formed. This behavior can be described with Equation (7.6).
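Equation (7.6) is easy to evaluate; the sketch below uses made-up parameter values (K and the monolayer load are illustrative assumptions, since real parameters must be fitted to measured equilibria):

```python
def langmuir_load(p, K, n_mon):
    """Adsorbed amount per Eq. (7.6): n = n_mon * K*p / (1 + K*p);
    p partial pressure, K equilibrium constant, n_mon monolayer load."""
    return n_mon * K * p / (1.0 + K * p)

# assumed parameters: K = 0.8 1/bar, monolayer load 4 mol/kg
for p in (0.1, 1.0, 10.0, 100.0):
    print(p, round(langmuir_load(p, 0.8, 4.0), 3))
# the load saturates towards n_mon at high pressure (type I isotherm)
```

The printed sequence shows the characteristic type I behavior: nearly linear (Henry-like) at low pressure, approaching the monolayer load asymptotically at high pressure.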

Figure 7.6: Adsorption isotherm types [89]. © Wiley-VCH Verlag GmbH & Co. KGaA. Reproduced with permission.


Figure 7.7: Course of an adsorption with time [89]. © Wiley-VCH Verlag GmbH & Co. KGaA. Reproduced with permission.

For types II and IV, more layers are formed, and condensation in the pores takes place. For types III and V, there is no tendency to form a monomolecular layer.

In technical applications, there are usually multicomponent mixtures, where the components involved compete for the space on the adsorbent surface. There are phase equilibrium diagrams similar to those for vapor-liquid equilibria [89]. Even azeotropic points occur. Adsorption is exothermic; as a first guess, it is a good approach to assume that the enthalpy of adsorption is approx. 1.5 times the enthalpy of vaporization of the adsorptive [143]. Shortcut approaches and recommended constraints for the design of adsorbers are explained in [143].

In Figure 7.7, the adsorption procedure is illustrated. In principle, there are four phases that can be distinguished. In the first phase, the adsorption bed is exposed to the process stream. Gas phase applications can be designed both for upflow and downflow, whereas for liquid applications upflow operation is preferable, as gravity can support the process during the adsorption cycle itself by promoting fluid distribution and during desorption by assisting the draining during heating. The adsorption itself takes place in the adsorption zone, which proceeds through the column towards the outlet of the adsorber with time. Different components have different adsorption zones. The stronger a component is adsorbed, the slower its saturation zone moves towards the outlet of the adsorber. The component with the weakest adsorption can pass the adsorption bed and be obtained in pure form. When the saturation zone of the component to be removed approaches the end of the adsorption bed, saturation is reached and it is necessary to stop and switch over to a second adsorption bed to avoid a breakthrough. Therefore, adsorption units usually consist of twin columns (Figure 7.8).
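The 1.5 rule of thumb for the enthalpy of adsorption mentioned above translates directly into a first guess; the water value used below is a common round number, taken here as an assumption:

```python
def h_ads_estimate(h_vap):
    """First-guess enthalpy of adsorption as approx. 1.5 times the
    enthalpy of vaporization of the adsorptive [143]."""
    return 1.5 * h_vap

# e.g. water with an enthalpy of vaporization of about 2257 kJ/kg at 100 degC
print(h_ads_estimate(2257.0))  # roughly 3400 kJ/kg released on adsorption
```

Such an estimate is useful for a quick check of the temperature rise of the bed before any measured heats of adsorption are available.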
Figure 7.8: Typical arrangement of an adsorption twin plant.

Meanwhile, as long as the second column is in operation, the loaded column must be regenerated in a second phase. For the regeneration, the flow direction is reversed. Regeneration can be done by increasing the temperature (temperature-swing adsorption, TSA) or lowering the pressure (pressure-swing adsorption, PSA), by replacement with another component, or by lowering the partial pressure of the pollutant in the gas being in contact with the adsorbent. The latter is possible by flushing the adsorber with an unloaded gas (Figure 7.8). Combinations are possible; one of the most often applied procedures is flushing with steam, where the partial pressure is lowered and the temperature is elevated. Of course, the adsorptives then have to be removed from the steam in a further step, which makes the whole process more complex. As usual, it takes a large effort to remove the last traces of the adsorptives from the bed; therefore, a residual load after regeneration is accepted, which in turn reduces the capacity of the next adsorption cycle.

After regeneration, in a third phase, some time should be taken into account to get back to adsorption conditions, e.g. by cooling or repressurizing the bed after TSA or PSA, respectively. Usually, the cycle is planned in a way that regeneration is faster than saturation to ensure continuous operation. Therefore, it might happen that the regenerated bed is not brought back into service immediately, giving a fourth "standby" phase.

On a technical scale, adsorption has the disadvantage of being in principle a discontinuous process. The aim is to operate the adsorption continuously in countercurrent flow like other thermal separation processes. However, countercurrent flow can hardly be realized with a solid because of its attrition. Several attempts have been made to overcome this difficulty. The most popular one is the so-called "simulated moving bed" (SMB), invented by UOP (Universal Oil Products Inc., Des Plaines, Illinois). The principle is explained in the following paragraphs.
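The cycle-planning rule for a twin plant — regeneration plus reconditioning of one bed must be finished before the other bed is saturated — can be written as a minimal feasibility check. All durations below are assumed, illustrative values:

```python
def twin_plant_feasible(t_saturation, t_regeneration, t_recondition):
    """Continuous operation of a twin adsorber plant requires that
    regeneration plus cooling/repressurizing of one bed is finished
    before the other bed is saturated; any surplus is standby time."""
    standby = t_saturation - (t_regeneration + t_recondition)
    return standby >= 0, standby

# assumed durations in hours: 8 h to saturation, 5 h regeneration, 2 h cooldown
ok, standby = twin_plant_feasible(8.0, 5.0, 2.0)
print(ok, standby)   # True, 1.0 h of standby per cycle
```

If the check fails, either the bed size (saturation time) must be increased or a third bed must be added, as mentioned for the ethanol dehydration below.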


SMB is a continuous chromatography process in countercurrent flow with a binary system. The difference between adsorption and chromatography is that in chromatography the mobile phase achieves the desorption. Both components appear at the outlet of the column; they are separated due to the different times they need for passing the column. In fact, in SMB the adsorbent is not really moved. The movement of the solid phase is replaced by changing the position of the inlet and outlet streams in a cyclic way. An inlet stream is continuously split into two outlet streams, which consist of purified components if the SMB is adequately designed. The difficulties of the SMB are the mechanical complexity and the complicated design.

Figure 7.9 shows a case where the column is fixed. The feed enters the column in the middle. Both components pass the column with different velocities and separate. If the column itself moved in the opposite direction of the mobile phase with a velocity between the velocities of the two components, the components would appear to move in different directions (Figure 7.10). To get the SMB arrangement, the mobile phase has to flow in a closed loop. The products are withdrawn at defined places with the exact volume flow. As mentioned above, the thought experiment of the moving column is replaced by the movement of the inlet and outlet nozzles (Figure 7.11).
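The thought experiment can be put into numbers: the simulated solid velocity, which is set by the port switching, must lie between the migration velocities of the two components. All values below are assumed for illustration:

```python
def switch_time(u_strong, u_weak, section_length):
    """Port switching period for a simulated moving bed. The simulated
    solid velocity must lie between the migration velocities of the
    strongly and the weakly adsorbed component, so that the two appear
    to move in opposite directions relative to the 'moving' bed."""
    u_solid = 0.5 * (u_strong + u_weak)   # place it mid-interval (a design choice)
    assert u_strong < u_solid < u_weak
    return section_length / u_solid       # time after which the ports shift one section

# assumed migration velocities: strongly adsorbed 0.5 m/h, weakly adsorbed 2.0 m/h;
# assumed section length 1.0 m
print(round(switch_time(0.5, 2.0, 1.0), 2), "h between port switches")
```

In a real SMB design, the solid velocity is not simply placed mid-interval but optimized zone by zone; the sketch only illustrates the velocity window that makes the separation possible.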

Figure 7.9: Normal chromatography arrangement with a fixed adsorbent.

Figure 7.10: Separation due to the movement of the column.

Figure 7.11: Principle of the simulated moving bed.


Figure 7.12: 4A molecular sieve pellets. © Smokefoot/Wikimedia Commons/CC BY-SA 4.0. https://creativecommons.org/licenses/by-sa/4.0/.

One of the most popular applications of adsorption is the dehydration of ethanol in bioethanol production. Ethanol and water have an azeotrope (Figure 5.50), which cannot be split into the pure components by simple distillation. On the other hand, there is a strict specification for bioethanol concerning its water content. Therefore, other techniques must be applied (Chapter 5.8). Pressure-swing adsorption (PSA) can be a way, using a molecular sieve. The ceramic pellets are shown in Figure 7.12. The water molecules can diffuse through the pores, whereas the larger ethanol molecules are retained [144]. At the outlet of the adsorption bed, the water is more or less completely removed. Due to the heat of adsorption, the temperature of the bed is strongly elevated. The effect is even used for process control, as the temperature indicates where the saturated zone is. Once the mass transfer zone approaches the outlet of the bed, regeneration starts, and operation is switched over to a second bed. Desorption is done by first applying a vacuum to the tower. To remove the remaining water, the adsorbent bed is purged with purified ethanol vapor in the opposite flow direction, i.e. the vapor enters the column from the opposite side at the bottom.

Figure 7.13 shows the block diagram of the process [144]. In column K1, distillation of the raw ethanol is performed. A vapor stream close to azeotropic concentration (approx. 96 wt. %) is taken at the top of the column. It passes the adsorber A1 from the top to the bottom. Downstream the adsorber, it is condensed at the shell side of the falling film evaporator W2, serving as a heating agent. The generated steam is transformed to a higher pressure by a jet pump (Chapter 8.3) and used for direct steam heating (Chapter 13.1) in column K1. The mixture of ethanol and water vapor from the regeneration of the adsorber bed A2 is condensed in heat exchanger W1 and led back to the distillation column.
Figure 7.13: Block diagram for ethanol dehydration [144].

The dehydration of the ethanol–water azeotrope is a popular, but not really typical application of molecular sieves. The amount of water being handled is quite large. Normally, the water concentration of the streams being treated is a few hundred ppm; here, it is approx. 4 %. Therefore, the bed is loaded rapidly, and the cycle times are pretty short, just in the range of minutes. Often, additional beds are used to provide enough time for regeneration.
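The short cycle times can be made plausible with a rough bed loading estimate. All numbers below are illustrative assumptions, not design data from the text:

```python
# Rough loading-time estimate for a 3A molecular sieve bed drying
# near-azeotropic ethanol vapor (all numbers assumed for illustration).
m_feed = 10000.0         # kg/h ethanol vapor with ~4 wt% water
w_water = 0.04
m_adsorbent = 2000.0     # kg molecular sieve in the bed
working_capacity = 0.05  # kg water per kg adsorbent and cycle (assumed)

m_water = m_feed * w_water                        # kg/h water to be removed
t_cycle = m_adsorbent * working_capacity / m_water
print(round(t_cycle * 60, 1), "min to saturation")
```

With these numbers the bed is saturated within a fraction of an hour, in line with the cycle times "in the range of minutes" mentioned above; for a trace-drying duty with a few hundred ppm water, the same bed would last for many hours.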

7.3 Crystallization

Crystallization should in fact not be called an alternative separation process, as it is actually the oldest one. What the alternative processes in this chapter have in common is more or less that they cannot be designed on a purely theoretical basis, and that "something solid" is involved. In crystallization, we can distinguish between crystallization from a solution, which is often applied for purifying inorganic salts, and crystallization from a melt, which is often used for purifying organic substances.

Like in distillation, in crystallization energy for the cooling or the evaporation of the solvent is necessary to create a second phase. Because of the low density difference, the separation of the two phases solid-liquid is not as easy as it is for the vapor-liquid separation in distillation. Also, the transport of the solid phase is difficult. Often, the viscosity is high, making in turn the mass transfer of the crystallizing component difficult.

Crystallization has advantages in comparison with the other thermal separation options, especially distillation, if the components to be separated have a low thermal stability or a low (or even no) vapor pressure. As well, it can have advantages if the separation factor is close to 1, e.g. for azeotropes or for the separation of isomers. Crystallization can be used to get extremely pure products. In most cases, it takes place in form of eutectic systems, and in this case the crystallizing component is pure and can be obtained by melt crystallization with one separation stage. In practical applications, the separation of the solid and the liquid phase is not perfect, so that inclusions of mother liquor will occur in the solid phase.2 This phenomenon depends mainly on the crystal formation and growth. Regularly formed crystals are useful to achieve a good separation of the two phases.

The decisive thermodynamic issue for the description of crystallization is the solid-liquid equilibrium (SLE, Chapter 2.6). In the case of eutectic systems, a pure solid phase is obtained, but it is a disadvantage that only part of this component can crystallize. The remaining mother liquor has eutectic concentration and leads to mixed crystals if crystallization is continued. This limitation can be overcome if crystallization is combined with other thermal separation processes.

For the design of crystallizers, an exact knowledge of the solid-liquid equilibria with respect to temperature is necessary. For most of the salts, the solubility increases with temperature. For some inorganic salts, the solubility can decrease with temperature. These are the so-called hardness components. Examples are gypsum (CaSO4) or calcium carbonate (CaCO3). Also, the kinetics of the seed crystal formation and of the crystal growth are important for the equipment design in crystallization.

An oversaturation is necessary for the formation and the growth of crystals. It can be achieved in different ways, e.g. by cooling, evaporation of the solvent or depressurization, which is another way of evaporating the solvent. Furthermore, crystallization can be forced by the addition of a new component. Oversaturation by cooling has advantages if the solubility increases strongly with temperature. If the temperature dependence is less significant, an oversaturation by evaporation might be favorable.
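The eutectic limitation mentioned above can be quantified with a simple mass balance: pure crystals are withdrawn until the mother liquor reaches the eutectic composition. The feed and eutectic compositions below are illustrative assumptions:

```python
def max_crystal_yield(F, w_feed, w_eutectic):
    """Maximum amount of pure crystals C obtainable from a eutectic system.
    Component balance: F*w_feed = C*1 + (F - C)*w_eutectic, solved for C."""
    return F * (w_feed - w_eutectic) / (1.0 - w_eutectic)

# e.g. 1000 kg feed with 60 wt% of the crystallizing component,
# eutectic at 25 wt% (assumed values)
C = max_crystal_yield(1000.0, 0.60, 0.25)
print(round(C, 1))   # 466.7 kg of pure crystals at most
```

Even with a feed well on the "right" side of the eutectic, less than half of the feed mass is recoverable as pure crystals in this sketch, which is why crystallization is often combined with another separation step that shifts the mother liquor composition.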
For the seed crystal formation, there are several mechanisms. Crystals can be formed at rough surfaces or impurities or by abrasion of small crystals from larger ones. Many constraints like oversaturation and flow velocity have an influence. The more the solution is subcooled, the more seed crystals are formed. However, due to the increase of the viscosity with decreasing temperature, the rate of seed crystal formation decreases after passing through a maximum. The following relationship between the seed crystal formation rate r and the oversaturation Δc has been found:

r = Δc^b,    (7.7)

where b = 3–6. A similar equation can be set up for the crystal growth:

r = Δc^w,    (7.8)

where w = 1–2.

2 Melting and recrystallization is a countermeasure.
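Since b > w, raising the oversaturation boosts nucleation much more than growth. A quick numerical comparison (exponents picked from the stated ranges, proportionality constants set to 1 for illustration):

```python
def nucleation_rate(dc, b=4.0):
    """Seed crystal formation rate per Eq. (7.7), b = 3-6."""
    return dc ** b

def growth_rate(dc, w=1.5):
    """Crystal growth rate per Eq. (7.8), w = 1-2."""
    return dc ** w

for dc in (0.5, 1.0, 2.0):
    ratio = nucleation_rate(dc) / growth_rate(dc)
    print(dc, round(ratio, 3))
# the nucleation/growth ratio rises steeply with the oversaturation,
# giving many small crystals; hence the oversaturation is kept small
# when large crystals are the target
```

This is exactly the statement made in the following paragraph of the text: the control of the oversaturation decides the crystal size distribution.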


The size of the crystals strongly depends on the degree of the oversaturation. Seed crystal formation and crystal growth are competing processes. A large oversaturation promotes the seed crystal formation, giving small crystals. Therefore, the oversaturation has to be kept small if large crystals are the target. The control of the oversaturation is decisive for crystallization processes.

There are several options for the choice of equipment for industrial crystallization processes. In suspension crystallizers, the crystals are dispersed in the solvent or in the melt, respectively. The heat of fusion is transferred to the liquid. Suspension crystallizers are operated continuously. One tries to get separate zones for the oversaturation and the crystal growth. All oversaturation mechanisms can be applied; in the following, evaporation is taken as an example.

Usually, crystals are heavier than the mother liquor. To keep them in suspension, there must be an upward flow in the crystallizer so that the crystals are located in definite layers according to their size. There are various types of crystallizers to realize this principle. The one most widely used is the forced circulation crystallizer (FC). It is normally operated under vacuum conditions. As can be seen in Figure 7.14, the suspension is circulated with a pump through a heater, causing evaporation in the upper part of the vessel. The concentration of the dissolved solids rises, and precipitation takes place. The slurry can be continuously removed from the vessel. The FC crystallizer is appropriate if crystal size is not an issue; there is no mechanism to redissolve small crystals.

Larger crystals can be obtained with the DTB (draft tube baffle) crystallizer [145]. Like the FC, it is operated under vacuum or a slight overpressure. It is provided with a skirt baffle which forms a partitioned settling zone. Inside the baffle there is a vertical draft tube, to which the feed and the recycle are directed (Figure 7.15). Outside the skirt baffle, the mother liquor containing the small crystals is withdrawn and led to a heater, where the small crystals have a chance to be dissolved again. At the top of the crystallizer, vapor is generated, giving the desired oversaturation. The formed crystals can settle down to the product discharge at the bottom. In comparison with the FC crystallizer, the internal loop causes less attrition and crystal breakage; large crystals which have formed are maintained.

The largest crystals are obtained in the Oslo type crystallizer, where the crystals are grown in a fluidized bed (Figure 7.16). The growth is limited by the residence time. There is again an external recirculation loop with a heat exchanger, where the temperature is elevated. The loop reenters the crystallizer near the top. Evaporation can take place, giving the oversaturation. The oversaturated solution is led to the bottom of the crystallizer, where it first comes into contact with the larger crystals, so that these crystals can grow further instead of forming small new ones. At the bottom, the product is withdrawn. A classified bed is formed above, with the lowest concentration at the nozzle for the recirculation outlet. In the Oslo type crystallizer, hardly any attrition and crystal breakage occurs [145].


Figure 7.14: Forced circulation crystallizer.

Figure 7.15: DTB crystallizer.

Figure 7.16: Oslo type crystallizer.


Layer crystallizers operate discontinuously. They have a cooled wall where the crystallization takes place. They are used for melt crystallization, either as falling film crystallizers or as static crystallizers.

Falling film crystallizers work analogously to falling film evaporators (Figure 4.25). The liquid runs down the inner side of the tube bundle, which is cooled with a heat transfer fluid on the shell side. The crystallization begins at the tube wall. The melt is recycled until the required amount of liquid has been crystallized. After the liquid has run out of the apparatus, the solid layer is slightly heated up to remove impurities in the surface layer. Finally, the whole crystal layer is melted and removed from the heat exchanger. In static crystallization, cooling elements dip into the melt; by varying the temperature, the particular steps as described above can be carried out.

All these crystallizers work very reliably, as there are no moving parts or mechanical devices for the removal of the liquid. However, the residence times are quite long, giving large equipment volumes. More information about crystallization can be obtained from [146] and [147].

8 Fluid flow engines

8.1 Pumps

There is no doubt that pumping is a science of its own. In most engineering units, there is a "rotating equipment" department which is dedicated to the selection of the appropriate pump in a thankworthy way. Otherwise, vendor companies usually give assistance. However, a process engineer must be able to specify what the pump should do in the process. The following chapter will introduce the fundamental terms for the specification of a pump from a process engineering point of view. The explanations refer to centrifugal pumps, which are the most common ones (80–90 % of all pump applications).

In process simulation, pumps usually do not play a decisive role. The pressure dependence of the enthalpy of a liquid is neglected anyway, so the power consumption of the pump is transferred into a slight temperature elevation, normally less than 1 K. The exact arrangement of the source and the target vessel is determined later in the project, as well as the necessary pressures and the pump characteristic, which is decisive for the pump efficiency. Therefore, the power consumption of the pump is not accurately calculated; at most, the result shows the correct order of magnitude. The calculation of the power is performed according to the same scheme as for compressors, where it is much more important. It is explained in Chapter 8.2.

A pump conveys a liquid from one piece of equipment, usually a vessel, to another one. Between the two locations, a pressure and/or a height difference and the pressure drop in the connecting line have to be overcome. Figure 8.1 shows an example of a principle sketch of this situation. It is a so-called open system, meaning that the pump has to overcome a static head. In contrast, in a closed system, e.g. a cooling water circulation loop, the pump must mainly overcome friction losses caused by the movement of the fluid.
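The statement that the pump power ends up as a temperature elevation of normally less than 1 K can be checked quickly; the fluid data and pressure elevation below are assumed, typical values:

```python
def pump_temperature_rise(dp, v_liq, cp, eta):
    """Temperature elevation of the conveyed liquid if the whole specific
    pump work v*dp/eta is charged to the liquid enthalpy (the pressure
    dependence of the liquid enthalpy being neglected, as in simulation).
    dp in Pa, v_liq in m3/kg, cp in J/(kg K)."""
    return v_liq * dp / (eta * cp)

# water, 5 bar pressure elevation, pump efficiency 0.7 (assumed values)
dT = pump_temperature_rise(5e5, 0.001, 4180.0, 0.7)
print(round(dT, 2), "K")   # about 0.17 K
```

Even for a fairly large pressure elevation the temperature rise stays well below 1 K, confirming the order of magnitude given in the text.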
For the arrangement in Figure 8.1, the pressure elevation by the pump Δppump can be determined by the Bernoulli equation

p0 + ρgh0 − Δpinlet line + Δppump − Δpoutlet line − ρgh1 = p1,    (8.1)

where the indices 0 and 1 refer to the start and the end of the whole line, respectively. The pressure drops of the inlet and outlet line comprise the line pressure drops (Equation (12.1)), the pressure drops caused by special piping elements (Equation (12.19)) and the pressure drops through control valves, which can often only be more or less set arbitrarily (e.g. ΔpCV = 1 bar).1 The terms for the kinetic energy are neglected, as the velocities upstream and downstream the pump do not change very much.

1 as long as the valve is not fully specified, which takes place in the detailed engineering phase.

Figure 8.1: Example for a setup of a pumping process.

The pressure elevation by the pump is usually converted into the delivery head Hpump:

Hpump = Δppump/(ρg)    (8.2)

Analogously, the remaining terms in Equation (8.1) can be converted into heads. The delivery head Hpump is a function of the volume flow; the pump characteristics curve shows how the delivery head decreases with increasing volume flow (Figure 8.2). On the other hand, with increasing volume flow the pressure drops of inlet and outlet line increase, whereas the differences in head and the pressures p0 and p1 in the equipment remain constant, which is called the plant characteristics. The situation can be illustrated by drawing the pump and the plant characteristics in one diagram (Figure 8.2). In a given arrangement, the operation point is defined by the intersection between the

Figure 8.2: Typical curvatures of pump and plant characteristics.

curves of pump and plant characteristics. Usually, this operation point does not fit the requirements of the process. In this case, the plant characteristics can be manipulated by throttling the flow with a control valve (Figure 8.2). If this is not possible, the pump characteristics can be changed as well, e. g. by changing the blade wheel diameter or by changing the number of revolutions per minute with a frequency converter. It should be emphasized that this well-known construction is just an illustration; in practical applications, it is sufficient to define the requirements (volume flow, delivery head) for the pump and provide a control device to adjust the plant characteristics. Besides V̇ and Hpump, the so-called NPSH value (net positive suction head) is an important operating parameter of a pump. It is relevant to avoid cavitation, which is the worst failure of a circulation pump. Cavitation means that vapor bubbles occur in the pump because of a local reduction of the static pressure below the saturation pressure of the process liquid. These bubbles violently implode when they are transported into regions in the pump with higher pressures and therefore cause erosion, often leading to the mechanical destruction of the pump. Moreover, the mechanical stress on the pump impeller, the shaft, the seals and the bearings is increased. Cavitation is in most cases caused by a suction pressure which is too low. To avoid it, a minimum suction pressure must be kept, represented by the NPSH value. The NPSH value indicates how far the medium inside a pump is away from its saturation pressure in a static case, i. e. without movement in the pump. In the case of Figure 8.1, the NPSH value would be calculated to be

NPSH = (p0 + ρg h0 − Δpinlet line − ρw²/2 − ps) / (ρg) ,   (8.3)

meaning that NPSH is the difference between the total pressure inside the pump and the saturation pressure of the liquid, transformed into a height. This NPSH value must be greater than a minimum value,² which has to be determined experimentally by the vendor; it depends on the type and construction of the pump and on the operating conditions. A safety margin of 0.5 m should be kept. For boiling liquids with low flow velocities (i. e. pressure drop and kinetic term are negligible), Equation (8.3) reduces to

NPSH = h0   (8.4)

In case of serious difficulties in maintaining the necessary NPSH value, special pumps are available which need very low NPSH values and are even capable of conveying liquids at their boiling point [148]. From the process engineering point of view, the pump can be specified without considering the pump itself, by just defining the states upstream and downstream of the pump

2 called “NPSH value of the pump”.


using Equation (8.1).³ Then the pump specialist can choose an appropriate pump according to his knowledge about the necessary Δppump and the corresponding mass flow with its various physical properties. Usually, we distinguish between normal, maximum, and minimum case. The minimum case should be defined by the most favorable conditions for the pump, i. e. the maximum level in the vessel upstream the pump, the lowest mass flow (i. e. lowest pressure drop in the line) and the minimum level in the target vessel, whereas the maximum case is just the other way round. It is then up to the pump specialist to decide which pump type can cover this load range, and which pump efficiencies are achieved.

Example
In the exemplary arrangement in Figure 8.1, methanol (ṁ = 12000 kg/h, t = 30 °C, ρ = 782 kg/m³) is transferred from a vessel to a distillation column. Some items of the specification shall be evaluated:
(a) the necessary NPSH value of the pump;
(b) the normal pressure difference to be built up by the pump;
(c) the maximum pressure difference to be built up by the pump;
(d) the maximum power consumption of the pump.
In the sketch, the equivalent lengths of inlet and outlet line are given, meaning that all the bends, elbows etc. are already included. The pipe diameters are d1 = 4′′ for the inlet line and d2 = 3′′ for the outlet line. The pressure drop of the pipe can be calculated with Equation (12.1):

Δp = λ ⋅ (ρw²/2) ⋅ (L/d) ,

where, for simplicity, a standard value of λ = 0.03 is used for the friction factor. For the valve, a pressure drop of Δp = 1 bar shall be assumed. The efficiency of the pump is η = 0.7.

Solution
First, the velocities in the pipes and the pressure drops are calculated:

w1 = 4ṁ/(ρπd1²) = 4 ⋅ 12000 kg/h / (782 kg/m³ ⋅ π ⋅ (4 ⋅ 25.4 mm)²) = 0.526 m/s
w2 = 4ṁ/(ρπd2²) = 4 ⋅ 12000 kg/h / (782 kg/m³ ⋅ π ⋅ (3 ⋅ 25.4 mm)²) = 0.935 m/s

The pressure drops in the lines are

Δp1 = λ ⋅ (ρw1²/2) ⋅ (Leq1/d1) = 0.03 ⋅ (782 kg/m³ ⋅ (0.526 m/s)²/2) ⋅ (6 m/4′′) = 192 Pa
Δp2 = λ ⋅ (ρw2²/2) ⋅ (Leq2/d2) = 0.03 ⋅ (782 kg/m³ ⋅ (0.935 m/s)²/2) ⋅ (30 m/3′′) = 4035 Pa

3 Unfortunately, it often occurs that these states are not available when the pump must be specified. In these cases, the process engineer must guess as well as possible.


(a) For the NPSH value, the low liquid level (LLL in Figure 8.1⁴) is relevant. The static liquid head is the sum of the height of the tangent line H1 and the low liquid level LLL. The saturation pressure of methanol at t = 30 °C is ps = 0.219 bar. According to Equation (8.3) we get

NPSH = h0 + (p0 − Δpinlet line − ρw²/2 − ps) / (ρg)
= 5 m + (1 bar − 192 Pa − 782 kg/m³ ⋅ (0.526 m/s)²/2 − 0.219 bar) / (782 kg/m³ ⋅ 9.81 m/s²)
= 15.1 m

(b) The normal pressure elevation by the pump is calculated acc. to Equation (8.1), using the normal liquid level (NLL) and the normal operation pressure of the column pnorm. For the outlet line, the pressure drop of the valve must be added to the value obtained above. Solved for Δppump, Equation (8.1) reads

Δppump = p1 − p0 + Δpinlet line + Δpoutlet line + Δpvalve + ρg(h1 − h0) ,

giving

Δppump,norm = 9 bar − 1 bar + 192 Pa + 4035 Pa + 1 bar + 782 kg/m³ ⋅ 9.81 m/s² ⋅ (18 − 5.5) m = 10.0 bar

(c) For the maximum pressure elevation, the low liquid level (LLL) and the maximum operation pressure of the column pmax are taken as input. One gets

Δppump,max = 10 bar − 1 bar + 192 Pa + 4035 Pa + 1 bar + 782 kg/m³ ⋅ 9.81 m/s² ⋅ (18 − 5) m = 11.04 bar

(d) The maximum power consumption of the pump is

Pmax = V̇ ⋅ Δppump,max/η = ṁ ⋅ Δppump,max/(ρη) = 12000 kg/h ⋅ 11.04 bar / (782 kg/m³ ⋅ 0.7) = 6.7 kW
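The example calculation is easy to reproduce with a short script. The sketch below follows the same simplifications as above (constant friction factor λ = 0.03, equivalent lengths, 1 bar valve pressure drop); all function and variable names are illustrative only.

```python
import math

def pipe_velocity(m_dot, rho, d):
    """Flow velocity in m/s from mass flow (kg/s), density (kg/m3), diameter (m)."""
    return 4.0 * m_dot / (rho * math.pi * d**2)

def pipe_dp(lam, rho, w, L, d):
    """Line pressure drop in Pa, Eq. (12.1): dp = lambda * rho*w^2/2 * L/d."""
    return lam * rho * w**2 / 2.0 * L / d

# data of the methanol example
m_dot = 12000.0 / 3600.0          # kg/s
rho   = 782.0                     # kg/m3
lam   = 0.03                      # friction factor (assumed constant)
d1, d2 = 4 * 0.0254, 3 * 0.0254   # inlet/outlet diameters in m
L1, L2 = 6.0, 30.0                # equivalent lengths in m
g     = 9.81                      # m/s2

w1 = pipe_velocity(m_dot, rho, d1)    # 0.526 m/s
w2 = pipe_velocity(m_dot, rho, d2)    # 0.935 m/s
dp1 = pipe_dp(lam, rho, w1, L1, d1)   # approx. 192 Pa
dp2 = pipe_dp(lam, rho, w2, L2, d2)   # approx. 4035 Pa

# (a) NPSH according to Eq. (8.3): h0 = 5 m, p0 = 1 bar, ps = 0.219 bar
npsh = 5.0 + (1e5 - dp1 - rho * w1**2 / 2 - 0.219e5) / (rho * g)   # approx. 15.1 m

# (b)/(c) pressure elevation, Eq. (8.1) solved for dp_pump
dp_valve = 1e5
dp_norm = 9e5 - 1e5 + dp1 + dp2 + dp_valve + rho * g * (18.0 - 5.5)   # approx. 10.0 bar
dp_max  = 10e5 - 1e5 + dp1 + dp2 + dp_valve + rho * g * (18.0 - 5.0)  # approx. 11.04 bar

# (d) maximum power consumption with eta = 0.7
p_max = m_dot * dp_max / (rho * 0.7)   # approx. 6.7 kW
```

Running the script returns the values derived above within rounding accuracy.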

Most companies have their own guideline for the installation of a pump, depending on reliability demands, the control philosophy, and the type of the pumps. Figure 8.3 shows an example with the most important features. Two pumps are installed in parallel, so

4 Numbers in technical drawings are in mm if no unit is given.


Figure 8.3: Typical arrangement for a pump installation.

that in case of failure of the operating pump a switch to the additional one can be performed immediately, perhaps even automatically. It should be mentioned that in this arrangement even the inlet pipe can be exposed to the high outlet pressure generated by the pump. If the nonoperating pump is not isolated by closing the valves up- and downstream of the pump, the operating pump will convey liquid backwards through the nonoperating one and pressurize even its inlet line, which is normally exposed only to the low inlet pressure. There is the so-called minimum bypass line branching from the product line, ending in the vessel containing the feed of the pump (not on the drawing). The reason is that most pump types should not operate against a closed valve. If the pressure in the outlet line exceeds a certain value, the control valve in the bypass line opens so that further pressure build-up is inhibited. Furthermore, a temperature rise of the system is prevented; without flow, the power input of the pump would no longer be removed, which could also lead to damage of the system. Instead of the control valve, an orifice can act as the restriction in the bypass line. This is only acceptable for pumps with relatively low power consumption. The minimum bypass is certainly an energy waste, as it simply reduces the pressure of a stream which has just been built up. The orifice admits

a bypass stream even at normal operation. For large pumps, this would correspond to a significant energy waste, whereas for small pumps it might be acceptable, as the cost of a control valve can be saved. Furthermore, the typical safety sensors can be seen in Figure 8.3, i. e. vibration or, in this case, temperature sensors, which are linked to an interlock that switches off the pump and, in most cases, switches on the substitute pump. The flow of a pump can in principle be controlled in two ways: First, the minimum bypass can lead some of the flow back to the source vessel. As discussed above, this option consumes electrical energy, as the pump simply provides its maximum flow according to its characteristics. The second option, the use of a frequency converter, is more elegant but also more expensive. It makes it possible to set the rotation speed of the pump according to the demand for the volume flow. However, frequency converters consume additional energy themselves. They are a good solution for frequently changing run cases. For fixed operating conditions, they often cause problems, as they can compensate for a performance deteriorated by erosion; the operators do not notice anything until total damage occurs [279].



There are three types of pumps.

Centrifugal pumps: Centrifugal pumps (Figure 8.4) have been thoroughly discussed above. The operation of centrifugal pumps is illustrated in Figure 8.5. The rotating impeller transfers its rotational energy to the liquid, which is accelerated and discharged into the casing due to centrifugal forces. When the casing area increases, the kinetic energy of the liquid is converted to pressure. Centrifugal pumps are used for large volume flows with moderate pressure heads. They are appropriate for low to moderate viscosities. An undissolved vapor fraction up to 5–7 vol. % can be tolerated in the liquid; however, with increasing vapor fraction the efficiency and the NPSH value (cavitation!) decrease. Also, the solid content should be limited; 8 % can be regarded as the maximum.

Figure 8.4: Centrifugal pump in standard arrangement [149]. © Hydrocarbon Processing.


Figure 8.5: Functional principle of a centrifugal pump [149]. © Hydrocarbon Processing.



Oscillating displacement pumps: The most frequently used oscillating displacement pumps are piston pumps and membrane pumps (Figure 8.6). In principle, they work discontinuously; for the piston pump there is a well-defined intake stroke, where the piston generates an underpressure to suck in the liquid to be conveyed. For the outlet stroke, the piston is moved back and generates an overpressure on the liquid which pushes it out of the pump. Membrane pumps work in an analogous way, but the piston does not get in direct contact with the conveyed liquid. Instead, the piston is actuating a working fluid which moves a membrane to and fro. Membrane pumps are especially useful for corrosive fluids. Unlike centrifugal pumps, oscillating displacement pumps are appropriate for moderate volume flows at high pressure generation. The discontinuous operation can be overcome if necessary. The use of several pump stages where

Figure 8.6: Operation modes of piston and membrane pump.


Figure 8.7: Basic sketch of a gear pump. © Duk/Wikimedia Commons/CC BY-SA 3.0. https:// creativecommons.org/licenses/by-sa/3.0/deed.en.



the phases are displaced can yield a quasi-continuous flow. Another option is the installation of a pressure vessel filled with pressurized gas connected to the outlet line. The gas will be further compressed when the outlet stroke takes place; during the inlet stroke, the gas expands and represents an additional pressure source. The pump characteristics are completely different from those of a centrifugal pump. After setting the repetition frequency, the volume flow is determined, and the pressure obtained depends only on the plant characteristics.

Rotating displacement pumps: Well-known rotating displacement pumps are gear pumps (Figure 8.7). The cogs represent small compartments which are continuously filled with liquid at low pressure and moved to the high pressure level. Comparatively high pressure elevations up to 40 bar are possible. As for oscillating displacement pumps, the volume flow is directly proportional to the rotation speed. Gear pumps are especially appropriate for highly viscous fluids.

8.2 Compressors

Compressors, vacuum pumps, fans, and other fluid flow engines for the pressure elevation of gases are widely applied in industry for the transport of fluids or for establishing a certain pressure to perform a reaction or a separation. In process simulation, compressors cannot be regarded as a simple flash, as they cannot be specified by two outlet variables. Instead, more information about the course of the change of state is necessary. For most types of compressors, it can be assumed that they are adiabatic, i. e. the heat exchange with the environment does not play a major role. Proceeding from this assumption, the calculation route is illustrated by the adiabatic compression of a vapor. The changes in kinetic energy can be neglected in the energy balance. The calculation is divided into the reversible adiabatic calculation and the integration of losses.
1. Reversible calculation: The reversible case characterizes the process that requires the lowest power consumption. It is specified by the outlet pressure p2 at constant entropy. According to


the Second Law, the outlet temperature is calculated by the isentropic condition

s2(T2rev, p2) = s1(T1, p1) ,   (8.5)

where the indices 1 and 2 denote the inlet and the outlet state, respectively. In process simulation calculations, Equation (8.5) is directly evaluated with the corresponding equation of state to determine T2rev. A simplified calculation using the ideal gas equation and assuming a constant heat capacity yields [11]

T2rev/T1 = (p2/p1)^((κ−1)/κ) ,   (8.6)

with

κ = cpid/cvid = cpid/(cpid − R)   (8.7)

The specific power consumption for the reversible case is given by

wt12rev = h2rev(T2rev, p2) − h1(T1, p1)   (8.8)

2. Integration of losses: The actual specific power consumption required is calculated with the isentropic and mechanical efficiency:

wt12 = wt12rev/(ηth ηmech) ,   (8.9)

while the power consumption of the process is

P12 = ṁ wt12   (8.10)

The isentropic efficiency ηth is an empirical factor which summarizes all the effects about the irreversibility of the process. ηth = 0.8 is often a reasonable choice. It usually decreases with increasing pressure ratio. ηmech is the efficiency of the energy transformation of the compressor engine (electrical to mechanical energy), which is not related to the process flow. For large drives, ηmech = 0.95 can be used. The outlet conditions of the flow are then calculated backwards via

h2(T2, p2) = h1(T1, p1) + [h2rev(T2rev, p2) − h1(T1, p1)]/ηth   (8.11)

Note that ηmech has been omitted. Knowing h2, the outlet temperature T2 = f(h2, p2) can be calculated by an iterative procedure.


Example
Steam (5 t/h, p1 = 1 bar, t = 110 °C) is compressed adiabatically to p2 = 5 bar. The efficiency of the compressor is given by ηth = 0.8, while the mechanical efficiency is supposed to be ηmech = 0.9. Use a high-precision equation of state, e. g. [29].

Solution
The first step is always the reversible calculation using Equation (8.5). It gives the condition

s2(T2rev, 5 bar) = s1(383.15 K, 1 bar) = 7.4155 J/(g K)

The solution is T2rev = 560.55 K = 287.4 °C. The power required for the reversible case is

Wt12,rev = ṁ(h2rev − h1) = ṁ[h(560.55 K, 5 bar) − h(383.15 K, 1 bar)] = 5000 kg/h ⋅ (3038.50 − 2696.34) J/g = 475.2 kW

To obtain the real consumption of the compressor, the efficiencies must be considered. One gets

Wt12 = Wt12,rev/(ηth ηmech) = 660.04 kW

For the outlet state of the steam, the mechanical efficiency has no influence; only the thermal efficiency must be taken into account:

h2 − h1 = (h2rev − h1)/ηth = (3038.50 − 2696.34) J/g / 0.8 = 427.70 J/g ,

giving

h2 = 427.70 J/g + 2696.34 J/g = 3124.04 J/g

At p2 = 5 bar, the outlet temperature T2 can be determined to be T2 = 601.91 K or t2 = 328.76 °C.

Example
A natural gas stream (1000 kg/h, 80 mole-% methane(1), 20 mole-% nitrogen(2), T1 = 300 K, p1 = 60 bar) is compressed to p2 = 100 bar. The compression is adiabatic; the isentropic efficiency is ηth = 0.8, the mechanical efficiency ηmech = 0.9. Calculate the compressor outlet temperature and the power consumption
a) assuming ideal gas behavior
b) using the Peng-Robinson equation of state
The physical properties are:

– for methane: Tc = 190.564 K, pc = 45.992 bar, ω = 0.0114, M = 16.043 g/mol, cpid = 36.257 J/(mol K) = const.
– for nitrogen: Tc = 126.192 K, pc = 33.958 bar, ω = 0.0372, M = 28.014 g/mol, cpid = 29.135 J/(mol K) = const.

The binary interaction parameter is kCH4,N2 = 0.0311.

Solution
a) ideal gas
The heat capacity of the mixture is

cpid = y1 cpid,1 + y2 cpid,2 = (0.8 ⋅ 36.257 + 0.2 ⋅ 29.135) J/(mol K) = 34.832 J/(mol K)

cpid is only a function of temperature and not of pressure. However, the temperature dependence is neglected in this example. Therefore, Equation (8.6) can be applied. With Equation (8.7), we get

κ = cpid/cvid = cpid/(cpid − R) = 34.832/(34.832 − 8.31446) = 1.3135

and Equation (8.6) gives the compressor outlet temperature

T2rev = T1 (p2/p1)^((κ−1)/κ) = 300 K ⋅ (100/60)^(0.3135/1.3135) = 338.9 K = 65.75 °C

h2rev − h1 = cpid (T2rev − T1) = 34.832 J/(mol K) ⋅ (338.9 − 300) K = 1355 J/mol

Considering the isentropic efficiency and using

M = y1 M1 + y2 M2 = (0.8 ⋅ 16.043 + 0.2 ⋅ 28.014) g/mol = 18.437 g/mol

as average molar mass, the enthalpy difference is

h2 − h1 = (h2rev − h1)/ηth = 1355 J/mol / 0.8 = 1694 J/mol = 91.88 J/g   (Eqs. (8.8), (8.9))

and

T2 = T1 + (h2 − h1)/cpid = 300 K + 1694 J/mol / 34.832 J/(mol K) = 348.63 K = 75.48 °C

giving the power consumption

P = ṁ(h2 − h1)/ηmech = 1000 kg/h ⋅ 91.88 J/g / 0.9 = 102089 kJ/h = 28.36 kW
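The route of part a) — Equations (8.6)–(8.11) with constant cpid — can be condensed into a small function. This is a sketch for the ideal gas case only; function and variable names are illustrative:

```python
R = 8.31446  # gas constant, J/(mol K)

def ideal_gas_compression(T1, p1, p2, cp_id, M, m_dot, eta_th, eta_mech):
    """Adiabatic compression of an ideal gas with constant cp.

    T1 in K, pressures in any consistent unit, cp_id in J/(mol K),
    M in g/mol, m_dot in g/s. Returns the reversible and the real outlet
    temperature (K) and the power consumption (W), Eqs. (8.6)-(8.11)."""
    kappa = cp_id / (cp_id - R)                      # Eq. (8.7)
    T2rev = T1 * (p2 / p1) ** ((kappa - 1) / kappa)  # Eq. (8.6)
    dh_rev = cp_id * (T2rev - T1)                    # J/mol, Eq. (8.8)
    dh = dh_rev / eta_th                             # Eq. (8.11), isentropic losses
    T2 = T1 + dh / cp_id
    P = m_dot * dh / M / eta_mech                    # Eqs. (8.9), (8.10)
    return T2rev, T2, P

# natural gas example: 80 mole-% CH4, 20 mole-% N2
cp_id = 0.8 * 36.257 + 0.2 * 29.135        # 34.832 J/(mol K)
M     = 0.8 * 16.043 + 0.2 * 28.014        # 18.437 g/mol
m_dot = 1000.0 * 1000.0 / 3600.0           # 1000 kg/h in g/s
T2rev, T2, P = ideal_gas_compression(300.0, 60.0, 100.0, cp_id, M, m_dot, 0.8, 0.9)
# T2rev approx. 338.9 K, T2 approx. 348.6 K, P approx. 28.4 kW
```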

b) Peng-Robinson equation of state
First, accounting for the real gas requires a more detailed setup of the specific enthalpy and entropy. A reference point must be defined. In contrast to the explanations in Chapter 2.8, it is not necessary to


use the standard enthalpy of formation, as no chemical reactions are involved. Instead, one can choose it according to convenience. Arbitrarily, we define our reference conditions as
– h (300 K; 1 bar; ideal gas state) = 0
– s (300 K; 1 bar; ideal gas state) = 0
Then, the procedure is as follows:
1. Evaluate s1 and h1
2. Estimate T2rev
3. Check if s2rev = s1
4. Vary estimated T2rev until s2rev = s1
5. Calculate h2rev
6. Integrate the losses as in the previous example
7. Calculate the power consumption
First, s1 and h1 are calculated according to Equations (2.29) and (2.30), respectively. For T1 = 300 K and p1 = 60 bar, one gets
– a = 0.17402 Jm³ mol⁻² (Equations (2.21)–(2.23), (2.38))
– b = 2.62 ⋅ 10⁻⁵ m³/mol (Equation (2.24))
– v = 3.792 ⋅ 10⁻⁴ m³/mol (Equation (2.28) with Cardano's formula)
– Z = 0.91208 (Equation (2.31))
– da/dT = −3.42917 ⋅ 10⁻⁴ Jm³ mol⁻² K⁻¹ [see (2.41)]
– hid = 0
– sid = −R ln 60 = −34.042 J/(mol K)
– and, subsequently,

h1 = 0 − 219.308 J/mol − 3730 J/mol ⋅ ln 1.20159 = −904.2 J/mol   (Eq. (2.29))
s1 = −34.042 − 0.5965 + 4.6187 ⋅ ln 0.83223 − 0.7652 = −36.252 J/(mol K)   (Eq. (2.30))

In the reversible case, s2rev = s1. T2rev is estimated to be 349 K, according to the ideal gas case. We get
– a = 0.15823 Jm³ mol⁻² (Equations (2.21)–(2.23), (2.38))
– b = 2.62 ⋅ 10⁻⁵ m³/mol (Equation (2.24), as before)
– v = 2.72 ⋅ 10⁻⁴ m³/mol (Equation (2.28) with Cardano's formula)
– Z = 0.93746 (Equation (2.31))
– da/dT = −3.0316 ⋅ 10⁻⁴ Jm³ mol⁻² K⁻¹ [see (2.41)]
– sid = cpid ln(349/300) − R ln 100 = −33.0197 J/(mol K)
– and, subsequently,

s2rev = −33.0197 − 0.8437 + 4.08325 ⋅ ln 0.77864 − 0.53698 = −35.422 J/(mol K)   (Eq. (2.30))

The calculated s2rev > s1; therefore, a lower T2rev must be tried next. With the usual numerical techniques (e. g. regula falsi), one finally gets T2rev = 342.03 K = 68.88 °C, with

s2rev = −36.252 J/(mol K)   (Eq. (2.30))

and

h2rev = cpid ⋅ (342.03 − 300) K − 196.663 J/mol − 3580 J/mol ⋅ ln 1.29249 = 348.8 J/mol   (Eq. (2.29))

Taking into account the efficiencies can be done in the same way as for the ideal gas case:

h2 − h1 = (h2rev − h1)/ηth = (348.8 + 904.2) J/mol / 0.8 = 1566.2 J/mol ⇒ h2 = 662 J/mol   (Eqs. (8.8), (8.9))


The corresponding temperature is again found iteratively; it is T2 = 349.65 K = 76.5 °C. The power consumption is calculated to be

P = ṁ(h2 − h1)/ηmech = 1000 kg/h ⋅ 1566.2 J/mol / (18.437 g/mol ⋅ 0.9) = 94387.5 kJ/h = 26.22 kW
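Instead of Cardano's formula, the molar volume from Equation (2.28) can also be found numerically. The sketch below solves the pressure-explicit Peng-Robinson form p = RT/(v − b) − a/(v² + 2bv − b²) by bisection for state 1 of the example, with the a and b values listed above; at this supercritical temperature there is only one real root, so a simple bracket suffices. Names are illustrative.

```python
R = 8.31446  # J/(mol K)

def pr_pressure(v, T, a, b):
    """Peng-Robinson equation, pressure-explicit form (Pa)."""
    return R * T / (v - b) - a / (v * v + 2.0 * b * v - b * b)

def pr_vapor_volume(T, p, a, b):
    """Vapor-phase root of the PR equation found by bisection on v (m3/mol)."""
    lo, hi = 1.01 * b, 10.0 * R * T / p   # bracket between b and ~10x ideal gas volume
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        # p(v) decreases with v on the vapor branch
        if pr_pressure(mid, T, a, b) > p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# state 1 of the example: T = 300 K, p = 60 bar, a and b as listed above
v = pr_vapor_volume(300.0, 60.0e5, 0.17402, 2.62e-5)   # approx. 3.79e-4 m3/mol
Z = 60.0e5 * v / (R * 300.0)                           # approx. 0.912
```

The result agrees with the values obtained via Cardano's formula (v = 3.792 ⋅ 10⁻⁴ m³/mol, Z = 0.91208).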

Remark. This example gives only an impression of the complexity of real gas phase calculations. Moreover, the assumption that cpid is not a function of temperature is a rough approximation. Normally, it is, and cpid and cpid/T have to be integrated to get the ideal part. Antiderivatives for the usual function types are given in [11].

There are a number of different compressor types available, which differ in pressure range, volumetric flow rate, and other requirements at operating conditions like process safety, physical properties, or environmental conditions. Moreover, it has to be considered whether the fluid contains drops or particles and whether there are components which tend to polymerize. Certainly, if the achievable compression ratio is too low, several compressors can be combined into a “multistage compressor”, usually with intermediate coolers or direct liquid injection to reduce the gas temperatures and, correspondingly, the volume flows. The main compressor types are:
– Piston compressors: Piston compressors work according to the same principle as piston pumps (Figure 8.6). Compression ratios up to 6 : 1 per stage can be achieved. There are no limitations for the volume flow, but relatively small ones are preferred (≈ 200 m³/h). The lubrication of the compressor is always a major item. Care should be taken that no process components can accumulate in the lubricant, which would reduce its effectiveness and make it necessary to exchange the lubricant often. Alternatively, dry-running compressors can be used if appropriate. Of course there are special requirements for the inlet and outlet valves. The main disadvantage of piston compressors is their high maintenance effort. The flow pulsation can cause vibration and structural problems due to the unbalanced forces, making heavy foundations necessary. Figure 8.8 shows a so-called hyper compressor, used in the LDPE process for compressing ethylene from approx. 300 bar to 3000 bar in two stages. The space demand is huge; a hyper compressor often takes up a whole hall and needs extremely strong foundations. In the middle of the picture there is the driving shaft with its coupling to the motor and the lubrication unit, causing the movement of the plungers on the right hand and the left hand side.
The arrangement of the plungers is symmetric to avoid unbalanced forces as far as possible. The nozzles below the plungers are the gas inlets and outlets.
– Membrane compressors: Membrane compressors have a principle similar to membrane pumps (Figure 8.6). They are appropriate for small volume flows and achieve compression ratios up to


Figure 8.8: Two-stage hyper compressor. © 2016. Burckhardt Compression AG.



20 : 1 per stage. Like piston compressors, they suffer from flow pulsation; in addition, the membrane has to be replaced regularly.
– Screw compressors: Screw compressors are displacement compressors, where the medium is enclosed in a chamber which is continuously shortened, causing the compression of the gas (Figure 8.9). Compression ratios of 4.5–7 : 1 can be achieved; the range of the volume flows is reported to be 300–60 000 m³/h. Screw compressors are not sensitive to small amounts of liquid or dirt in the gas stream; on the contrary, liquid is often introduced to reduce the temperature elevation. No valves are involved, which is a great advantage in comparison to piston compressors. Furthermore, screw compressors have a high efficiency and a wide range of applications.

Figure 8.9: Screw compressor rotor. © MAN Diesel & Turbo SE.


– Rotary piston compressors: Rotary piston compressors are displacement compressors which are usually applied for vacuum generation. The rotary pistons and the shell form moving chambers which force the gas to the pressure side. The compression ratios are comparably low (1.8–2 : 1), while the volume flows are not restricted (100–80 000 m³/h). The function principle is analogous to the rotary vane pump (Figure 8.18).
– Turbo compressors:
1. Radial turbo compressors: Radial turbo compressors are the counterparts of centrifugal pumps (Figure 8.5). The compression ratio is limited (2–4 : 1), while the volume flows (5000–150 000 m³/h) can be considerably high. However, larger or smaller volume flows cause construction problems which are not easily solved. The wide operation range and the high reliability are the advantages of radial turbo compressors; their drawbacks are their sensitivity to reduced flow rates and changes in gas composition, and their weaknesses in the dynamic behavior.
2. Axial turbo compressors: Axial turbo compressors (Figure 8.10) are appropriate for very large volume flows up to 1 200 000 m³/h and compression ratios up to 8 : 1 [150]. The extremely large capacity and the high reliability are the main advantages of this compressor type; the main disadvantage is the limited turndown. The smaller the volume flow, the more ineffective this compressor type is. A lower bound for axial compressors is approximately 60 000 m³/h.

Figure 8.10: Axial turbo compressor rotor. © MAN Diesel & Turbo SE.


Figure 8.11: Liquid ring compressor arrangement. Courtesy of Sterling Fluid Systems Holding GmbH.



– Liquid ring compressors (Figure 8.11): Liquid ring compressors (LRCs) are compressors in which the service fluid forms a liquid ring, which itself acts both as a compressor and as a seal between suction and discharge side. The shaft of the impeller is placed eccentrically so that the cells containing the gas, formed by the impeller and the liquid, become smaller and smaller, which results in the compression. Moreover, the compressor power which heats up the gas is transferred to the liquid, and thus the compression can be regarded as isothermal. The peculiarity of LRCs when compared to other compressor types is that the medium to be compressed gets into direct contact with the service liquid. Normally, liquid ring compressors operate as displacement compressors. If the vapor condenses due to the compression or due to the temperature decrease after contact with the liquid, the compression is supported by absorption or condensation. The liquid is circulated; cooling is necessary, as the heat generated by the compression (and possibly by the condensation/absorption) increases the temperature of the liquid. The strength of LRCs is that any medium can be compressed without restriction; the medium can be explosive, toxic or carcinogenic [151]. The max. compression ratio for one stage is 3 : 1 (in vacuum operation up to 7 : 1); the volume flows range from 1–20 000 m³/h [151]. LRCs are very robust and inexpensive, but they have a low efficiency (ηth ≈ 0.2). From vendor data, the following correlation has been set up:


ηth = 0.09100238 + 0.00391931 ⋅ (p2/p1) − 0.00159871 ⋅ (p2/p1)² + 6.2392 ⋅ 10⁻⁵ ⋅ (p1/bar) + 0.00011975 ⋅ (V̇/(m³/h))

It indicates that the efficiency mainly depends on the pressure ratio. In case nothing else is known, one could try this correlation, which is, however, only tested in the ranges p2/p1 < 11, p1 > 0.7 bar and V̇ > 775 m³/h. In case the liquid absorbs components from the gas stream, it has to be continuously worked up to avoid accumulation. The characteristic curves of axial and radial turbo compressors and displacement compressors are outlined in Figure 8.12. Mechanical vapor recompression is one of the main applications of compressors; here, the way the compressor is used is different. Figure 8.13 shows a typical arrangement.
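The vendor-data fit can be sketched as a small function, including a guard for the tested range stated above; the function name and the range check are illustrative additions.

```python
def lrc_efficiency(p1_bar, p2_bar, V_m3h):
    """Isentropic efficiency estimate for liquid ring compressors (vendor-data fit).

    p1_bar: suction pressure in bar, p2_bar: discharge pressure in bar,
    V_m3h: suction volume flow in m3/h. Raises ValueError outside the
    tested range (p2/p1 < 11, p1 > 0.7 bar, V > 775 m3/h)."""
    ratio = p2_bar / p1_bar
    if not (ratio < 11.0 and p1_bar > 0.7 and V_m3h > 775.0):
        raise ValueError("outside the tested range of the correlation")
    return (0.09100238 + 0.00391931 * ratio - 0.00159871 * ratio**2
            + 6.2392e-5 * p1_bar + 0.00011975 * V_m3h)
```

For instance, `lrc_efficiency(1.0, 3.0, 1000.0)` yields a value of roughly 0.2, in line with the typical LRC efficiency mentioned above.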

Figure 8.12: Qualitative characteristic curves of different compressor types.

Figure 8.13: Mechanical vapor recompression arrangement.

The vapor used as heating agent for evaporation is the generated vapor itself. However, this vapor would condense at most at the same temperature at which the evaporation itself takes place, and only if a pure substance is vaporized.⁵ Because of pressure drops

5 which would not make any sense.

in the line and in the nozzles and because of the boiling point elevation, the condensation temperature of the vapor is certainly lower than the boiling temperature of the liquid. Nevertheless, the vapor can in fact be used as a heating agent if its condensation temperature is elevated. This can be achieved by increasing its pressure, and this is what mechanical vapor recompression can do. Fresh steam, as depicted in Figure 8.13, is usually only necessary for the startup. In terms of thermodynamics, the power required for the compression is used to elevate the temperature level of the heat of condensation of the vapor. This means that the mechanical power is at least not lost; it is used for heating the medium as well. However, the power from the electric current is turned into heat, which is a devaluation. The blowers used for mechanical recompression are usually simple industrial fans (Figure 8.14). They accomplish a compression ratio of approx. 1.3–1.4 per stage. In case of water vapor, this corresponds to an elevation of the dew point of 8–10 K. Usually, this

Figure 8.14: Blower for mechanical vapor recompression. © Piller Blowers & Compressors GmbH.


should be sufficient to cover the boiling point elevation of the product, especially if plate heat exchangers are used, which need only low driving temperature differences. If not, there is also the option to use an arrangement of two or even three blowers in series. Blowers are highly standardized. The power can be adjusted by manipulating the rotation speed, which is usually achieved with a frequency converter. The costs for maintenance are low, especially compared to other compressor types. Special care must be taken for the design of the bearings and for the shaft seals.

Example
Saturated water vapor coming from an evaporator (0.8 bar, 93.5 °C) is compressed by a blower with a pressure ratio of 1.35. What will be its condensation temperature?

Solution
The outlet pressure of the blower is pnew = 0.8 bar ⋅ 1.35 = 1.08 bar. The corresponding condensation temperature of the outlet stream is then 101.8 °C, meaning that the compression has achieved a boiling point elevation of 8.3 K.
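The numbers of this example can be reproduced with a simple vapor pressure correlation. The sketch below uses an Antoine equation for water with literature constants — an assumption, since the book example was presumably evaluated with steam tables; the constants are strictly valid only up to about 100 °C, so the value at 1.08 bar is a slight extrapolation.

```python
import math

# Antoine constants for water, log10(p/bar) = A - B/(T/K + C)
# (literature values for roughly 70-100 °C; an assumption here)
A, B, C = 5.08354, 1663.125, -45.622

def t_sat(p_bar):
    """Saturation temperature of water in °C for a given pressure in bar."""
    return B / (A - math.log10(p_bar)) - C - 273.15

p_in = 0.8                        # bar, saturated vapor from the evaporator
p_out = p_in * 1.35               # 1.08 bar after the blower
dT = t_sat(p_out) - t_sat(p_in)   # boiling point elevation, approx. 8.3 K
```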

The energy consumption of the blower is usually moderate. Many t/h of water vapor can be recompressed, saving the same amount of fresh steam (e. g. 100 t/h in the example above with a power consumption of approx. 2 MW). The inlet of a compressor should in general have a vapor fraction of 1, because liquid droplets can cause erosion to the impeller. It is necessary to carefully check what the demands of the compressor are in this area. The vapor-liquid separator can often be designed for a maximum droplet size (Chapter 9); however, there is no way to predict the amount of droplets. The various types of compressors have different susceptibilities to droplets; the least sensitive ones are in fact the blowers for vapor recompression, where liquid is even injected beyond the saturation level on purpose to continuously clean the impeller and avoid deposits on the impeller, which could lead to imbalances. Getting into the two-phase region during the compression becomes more probable the larger the molecule is [11].⁶ According to Equation (8.6) for the ideal gas case, the temperature rises during compression. As

(κ − 1)/κ = 1 − 1/κ = 1 − cvid/cpid = R/cpid ,   (8.12)

one can easily see that the exponent decreases with increasing molar isobaric heat capacity, which means that the temperature elevation during compression is lower, the larger the molecule is.⁷ If the rise of the boiling temperature is larger than the temperature elevation during compression, drops are formed inside the compressor, which might be seriously detrimental to the impeller [11]. If the molecule has at least four C-atoms, one can almost be sure that droplet formation will occur (“wet fluids”); for molecules with fewer C-atoms, the feed stream will remain gaseous (“dry fluids”) [152].

6 This paragraph refers to pure components as representatives.

8.3 Jet pumps

A jet pump is an alternative to a compressor for the compression of gases, the generation of vacuum or for the transportation of liquids or even bulk materials. Its advantage is its high reliability, caused by the fact that a jet pump has no moving parts. Also, it is not sensitive to fouling or corrosion and appropriate for large volume flows [153]. The energy is transferred by a fluid under high pressure, e. g. steam, compressed air or water and other liquids. The principle of jet pumps is explained in Figure 8.15. The energy is supplied by the motive steam on the left-hand side. It makes use of the principle of the Laval nozzle, where supersonic velocities can be reached in a tube [154]. The motive steam passes a minimum cross-section area (Chapter 12.1.3), where the speed of sound is reached. Downstream, it is further accelerated in the diffusor to supersonic velocity. Due to the acceleration, the static pressure is lowered below the pressure of the suction stream. The suction stream is taken in and mixed with the motive steam in the first part of the diffusor. In the second part, the flow is slowed down again, and the pressure rises. Finally, at the outlet the flow has a pressure between the pressures of the suction and the motive steam. The motive steam has expanded while the suction stream has been compressed; the jet pump works like an equivalent system where the motive stream is expanded in a turbine, while the power obtained is used to run a compressor for the suction stream. One of the most popular applications of jet pumps is the compression of low-pressure steam to a more useful pressure using high-pressure steam as the motive steam. As long as the motive steam has a pressure which has to be reduced anyway, no operation

Figure 8.15: Scheme of a jet pump. © 2015, Körting Hannover AG.

7 cpid on molar basis increases with the size of the molecule, as more vibration options can be activated.

8.3 Jet pumps

� 321

costs are related to the compression. For assessing whether a jet pump might be useful it is necessary to know how much of the motive steam is needed. The use of jet pumps in thermal vapor recompression has been shown in Chapter 3.3. Clearly, the final decision on the necessary amount of the motive stream should be made by the vendor. Nevertheless, some first guesses can easily be made. The minimum possible amount of the motive stream is determined by the entropy balance. According to the Second Law, the entropy of the outlet stream (3) must be larger than the sum of the entropies of the suction (2) and the motive stream (1): ṁ 1 s1 + ṁ 2 s2 ≤ (ṁ 1 + ṁ 2 )s3 ,

(8.13)

while at the same time the First Law

ṁ1 h1 + ṁ2 h2 = (ṁ1 + ṁ2) h3    (8.14)

must be obeyed, provided that the velocities at the nozzles do not have a major contribution to Equation (8.14). From Equations (8.13) and (8.14), one can get a first idea of the order of magnitude of the motive steam ṁ1. More realistic values can be obtained considering the efficiency [153]

ηjet = ṁ2 [h(T3, p3) − h(p2, s3)] / (ṁ1 [h(T1, p1) − h(p3, s1)]) ≈ 0.2 … 0.4 ,    (8.15)

which refers to the above mentioned turbine/compressor analogy. The efficiency can be estimated to be

ηjet = 0.3774283 − 0.0588682 ⋅ (p3/p2) + 0.00373807 ⋅ (p3/p2)² − 1.9 ⋅ 10⁻⁶ ⋅ (p1/p2)    (8.16)

The correlation has been found by the evaluation of typical vendor nomograms.⁸ This efficiency refers to a reference state at tref = 150 °C and the suction pressure pref = p2. The equivalent mass flow at reference state is related to the mass flow of the suction stream by

ṁeq = ṁ2 ⋅ 0.89764 exp(0.00072 t/°C)    (8.17)
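Equations (8.16) and (8.17) are plain arithmetic and can be checked directly; the short Python sketch below (code is not from the book) evaluates them for the pressures and suction temperature of the following example:

```python
import math

def eta_jet(p1, p2, p3):
    """Jet pump efficiency estimate, Equation (8.16); pressures in bar."""
    r = p3 / p2
    return 0.3774283 - 0.0588682 * r + 0.00373807 * r**2 - 1.9e-6 * (p1 / p2)

def m_equivalent(m2, t):
    """Equivalent suction mass flow at the reference state, Equation (8.17).
    m2 in kg/h, suction temperature t in deg C."""
    return m2 * 0.89764 * math.exp(0.00072 * t)

# Conditions of the worked example: p1 = 11 bar, p2 = 2 bar, p3 = 4.5 bar
eta = eta_jet(11.0, 2.0, 4.5)        # approx. 0.264
m_eq = m_equivalent(1000.0, 130.0)   # approx. 985.7 kg/h
```

Both values agree with the numbers obtained by hand in part (b) of the example.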

Example
1000 kg/h water vapor at t2 = 130 °C, p2 = 2 bar shall be used at p3 = 4.5 bar. How much motive steam (t1 = 190 °C, p1 = 11 bar) is necessary (a) using Equation (8.13) for the reversible case and (b) using Equation (8.16) for a more realistic case?

⁸ The author is grateful to Sonali Ahuja and Jonas Jaske, who performed this work with enthusiasm.


Solution
First, the specific enthalpies and entropies are determined. Using the high-precision equation of state [29], one obtains

h1 = 2796.6 J/g    s1 = 6.5868 J/(g K)
h2 = 2727.3 J/g    s2 = 7.1797 J/(g K)

(a) Estimating ṁ1 = 1000 kg/h, h3 can be determined to be h3 = 2762.0 J/g according to Equation (8.14). The corresponding specific entropy s3(p3, h3) turns out to be 6.8997 J/(g K), which is larger than the value obtained from Equation (8.13), 6.8832 J/(g K). Thus, the estimation for ṁ1 was too high. After several iterations, the result is ṁ1 = 916.3 kg/h, giving h3 = 2760.4 J/g and s3,rev(p3, h3) = 6.8962 J/(g K), which is obtained with Equation (8.13) as well.

(b) The enthalpies involved in Equation (8.15) are

h(T1, p1) = 2796.6 J/g
h(p3, s1) = 2630.0 J/g

The equivalent suction stream is (Equation (8.17)) 985.7 kg/h. With Equation (8.16), the efficiency can be estimated to be ηjet = 0.264. Again, an iterative solution is necessary. Estimating ṁ1 = 2000 kg/h, one gets h(T3, p3) = 2787.55 J/g according to Equation (8.14) and h(p2, s3) = 2640.0 J/g. ηjet is then determined to be 0.436, indicating that the estimation for ṁ1 was too low. After some iterations, ηjet = 0.264 is obtained with ṁ1 = 3313.5 kg/h, giving h(T3, p3) = 2790.33 J/g and h(p2, s3) = 2642.51 J/g.

The operational characteristics of a jet pump can be summarized in the diagram according to [153] (Figure 8.16).

Figure 8.16: Operational characteristics of a jet pump [153]. © Wiley-VCH Verlag GmbH & Co. KGaA. Reproduced with permission.

In this diagram, the suction stream for a given jet pump is depicted as a function of suction pressure and outlet pressure, both as coordinates on the abscissa. The pressure p1 of the motive steam remains constant. The left border line refers to the suction pressure p2. There is a minimum suction pressure to obtain a suction flow at all. Above this suction pressure, the suction flow increases continuously with the suction pressure, with a typical sharp bend in the curve shape. The suction stream is proportional to the suction pressure, meaning that the suction volume flow remains constant. Moving to the right-hand side of the diagram, the pressure on the abscissa refers to the outlet pressure. The outlet pressure can be varied over wide ranges without an effect on the suction stream (see horizontal lines). At a certain outlet pressure plim, which slightly depends on the suction pressure, the suction flow is seriously affected and drops down rapidly. The goal is to avoid getting into this region. For a given jet pump, the flow of the motive steam can only be increased by increasing its pressure. This measure has little effect on the suction stream, but the critical outlet pressure is moved to higher values of the outlet pressure [153].

Jet pumps are very often used as vacuum pumps, which is analogous to their use as vapor compressors. The pressure ratio is usually between 3–10, typically 6. The main pressure range where jet pumps are used is between p = 0.1–100 mbar. For this purpose, several stages are used [155]. Between the stages there are condensers to remove the condensables as far as possible to reduce the load of the jet pumps. A very good explanation of the design criteria of jet pumps can be found in [156]. For the control of the vacuum, either the motive or the suction fluid can be controlled by a valve. Another option is the addition of false air or gas via a control valve to spoil the vacuum to an appropriate extent.

8.4 Vacuum generation

Vacuum is divided into the following pressure ranges:
– rough vacuum: 1–1000 mbar;
– medium vacuum: 10⁻³–1 mbar;
– high vacuum: 10⁻⁷–10⁻³ mbar;
– ultra-high vacuum: < 10⁻⁷ mbar.

Before introducing the particular options for vacuum generation, some general thoughts might be useful. In process engineering, the main part of vacuum generation is covered by condensation. This works in the following way [155]. Consider a vessel which contains water vapor at p = 1000 mbar, t = 110 °C (Figure 8.17). At the beginning, the piston at the top is completely movable so that the content of the vessel is in mechanical equilibrium with the environment. Next, the piston is fixed, and the whole vessel is cooled down to t1 = 20 °C, while the steam partially condenses. In this case, the pressure will drop down to the vapor pressure at the new temperature, ps(t1) ≈ 23 mbar. Thus, using simple condensation, a vacuum of 23 mbar has been generated. A better vacuum can be generated if the temperature is further lowered, for instance 12 mbar at 10 °C.


Figure 8.17: Condenser used as vacuum generator.
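The vapor pressures quoted here (≈ 23 mbar at 20 °C, ≈ 12 mbar at 10 °C) can be reproduced with an Antoine correlation; the constants below are the commonly tabulated values for water (p in mmHg, t in °C, valid roughly 1–100 °C) and are an assumption, not taken from this book:

```python
# Vacuum attainable by condensation: vapor pressure of water from an
# Antoine correlation (constants are an external assumption)
A, B, C = 8.07131, 1730.63, 233.426   # p in mmHg, t in deg C

def p_sat_mbar(t):
    """Water vapor pressure in mbar at temperature t in deg C."""
    return 10 ** (A - B / (C + t)) * 1.33322   # mmHg -> mbar

p20 = p_sat_mbar(20.0)   # approx. 23 mbar, as stated in the text
p10 = p_sat_mbar(10.0)   # approx. 12 mbar
```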

Things are different when inert gases are involved. Consider the case above, where the partial pressure of water is only pwater = 950 mbar and the remaining 50 mbar are caused by an inert gas, e. g. nitrogen. This causes the vacuum to deteriorate significantly. While the water is still condensed until it reaches its vapor pressure at t1 = 20 °C, the inert gas will remain gaseous. Neglecting the volume of the condensate and the small solubility of the inert gas, its partial pressure will only decrease due to the temperature decrease according to the ideal gas law, giving

pinert ≈ 50 mbar ⋅ (293.15 K / 383.15 K) ≈ 38 mbar

The overall vacuum will then be p = 23 mbar + 38 mbar = 61 mbar, by far worse than the 23 mbar previously obtained. This is a typical problem in vacuum process engineering. Most vapor streams to be condensed contain a certain fraction of inert gas. After condensation, the fraction of the inert gas will have increased, as in the example above from xinert = 50/1000 = 0.05 to xinert = 38/61 ≈ 0.62. To avoid accumulation of the inert gas, the remaining vapor mixture has to be removed, and this is essentially what vacuum pumps are for. As mentioned above, the fraction of condensables decreases with decreasing temperature. Therefore, it is useful to remove the inert gas at the coldest part of the condenser, i. e. at the inlet of the cooling agent. This is another reason to realize countercurrent flow in the condenser, apart from maintaining the driving temperature difference. The condenser is the more economic part of vacuum generation; in [155] it is demonstrated with an example that condensation should be used as far as possible, and the vacuum pump should only remove the remaining gas.

It should be noted that the condensate is usually transported through a pipe to a collecting vessel by gravity. There is a minimum height difference between the condenser and this vessel, which is in most cases operated at a higher pressure. The pressure caused by the static liquid height in the pipe must be higher than the pressure in the vessel; otherwise, the condensate could not leave the condenser. Therefore, the minimum height difference between condenser and condensate collecting vessel is

Hmin = (pvessel − pcondenser) / (ρcond g)    (8.18)
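Both the inert-gas bookkeeping and Equation (8.18) are quick to script; the partial-pressure numbers are those of the text, while the condenser and vessel pressures and the condensate density for the height calculation are assumed illustrative values:

```python
# Partial pressure of the inert gas after cooling from 383.15 K to 293.15 K
# (ideal gas at constant volume), as in the example above
p_inert = 50.0 * 293.15 / 383.15   # approx. 38 mbar
p_total = 23.0 + p_inert           # approx. 61 mbar overall vacuum

# Minimum height difference condenser/collecting vessel, Equation (8.18)
# (pressure levels and condensate density are assumed for illustration)
def h_min(p_vessel, p_condenser, rho_cond, g=9.81):
    """Pressures in Pa, density in kg/m3; returns the minimum height in m."""
    return (p_vessel - p_condenser) / (rho_cond * g)

h = h_min(1.0e5, 0.1e5, 960.0)     # approx. 9.6 m barometric leg
```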

The pipe should be designed as a dip pipe to ensure the liquid seal between the condenser and the vessel atmosphere.

There are several types of vacuum pumps. Jet pumps have been explained above in Chapter 8.3. They are the most inexpensive alternative, as they contain no moving parts. On the other hand, their flexibility is limited. If multiple stages are applied, pressures down to 1 mbar can be achieved. Jet pumps have the largest capacity of all vacuum devices and can handle condensable products. However, noise development is an issue, and the process gas and the motive steam contaminate each other.

Liquid ring compressors (Chapter 8.2) can also be used as vacuum pumps and are the most widely used type of vacuum pump in the chemical industry. They are robust, simple and inexpensive, but not very efficient concerning energy consumption. The lowest operating pressure is the vapor pressure of the sealing fluid. This is approx. 20–40 mbar for water at ambient temperature but can go down to 3–5 mbar if, for example, ethylene glycol is used. The advantage of liquid ring compressors is the almost isothermal compression (“cool running”). Contamination of the sealing fluid is an issue.

Rotary vane pumps are the most common type if it is ensured that all gases sucked in can be transported by the vacuum pump without condensation. Otherwise, besides a worse performance, the lubrication oil of the pumps will be spoilt by dilution or by forming an emulsion with insoluble liquids such as water. Compatibility with condensing substances is a strong criterion to decide whether an application is appropriate for rotary vane or for liquid ring pumps. The function principle of a rotary vane pump is illustrated in Figure 8.18. Inside the stator, an eccentric rotor is rotating. In the center of the rotor, a spring pushes two vanes apart so that they make contact with the wall of the stator and form two separated chambers at inlet and outlet. At the inlet side, the volume of the corresponding chamber increases, so that substance from the recipient is sucked in. At the outlet, the volume of the chamber has decreased, which pushes the gas to the outlet line. Rotary vane pumps can achieve a high vacuum down to 10⁻³ mbar. Their advantage compared with other types is that they run dry, without an auxiliary fluid. On the other hand, they cannot handle liquid slugs, and their operating temperatures are relatively high, which prevents condensation but promotes possible chemical reactions. There are similar types working according to the displacement principle (rotary piston pumps, roots blowers, membrane pumps [155]).

For the generation of high and ultra-high vacuum, oil diffusion and turbomolecular pumps are used. Both need a considerable prevacuum to keep the load low.

Figure 8.18: Rotary vane pump. 1: stator, 2: rotor, 3: vanes, 4: spring. © Rainer Bielefeld/Wikimedia Commons/CC BY-SA-3.0. https://creativecommons.org/licenses/by-sa/3.0/deed.en.

Figure 8.19: Scheme of an oil diffusion pump.

Oil diffusion pumps (Figure 8.19) have the same functional principle as jet pumps. A high-boiling oil with a low vapor pressure is heated up electrically so that it finally evaporates. The vapor exits the pump through a special system of nozzles with velocities above the speed of sound and, correspondingly, low local pressures. The gases rapidly diffuse into the oil jet. After the oil has been condensed at the wall, they are removed by the pre-vacuum pump. The advantage of oil diffusion pumps is their reliability, as they have no moving parts. They can produce even an ultra-high vacuum down to 10⁻¹⁰ mbar. Care should be taken that no oil can flow into the recipient. Turbomolecular pumps consist of a series of rotor/stator pairs which act as small compressors. The rotation speed is in the range 300–400 m/s or 10 000–90 000 rpm. It is possible to achieve an ultra-high vacuum down to 10⁻¹⁰ mbar.


For the dimensioning of vacuum pumps, reasonable values for their capacity are necessary. The capacity is mainly determined by the leakage rate, which can be estimated according to empirical rules of thumb. There are several approaches:
– For flange connections, the leakage rate can be estimated to be 200–400 g/h per m gasket length. Special measures (tongue and groove face flanges, surface treatment of the gasketing areas, use of special gaskets) can reduce the leakage rate down to 50–100 g/(h m).
– According to the maximum mass flux density (Chapter 14.2), approx. 0.83 kg/h can flow through an opening of 1 mm². Surprisingly, this value does not depend on the vacuum pressure. As long as the pressure ratio (Section 14.2.6) between environmental pressure and recipient is above the critical pressure ratio of approx. 2, meaning that the vacuum is at least below p = 500 mbar, the air intake is independent of pressure, and the maximum flow is determined by the speed of sound in the minimum cross-section area.
– Typical leakage air flows can be determined according to Table 8.1 [157].

Table 8.1: Recommended values for leakage rates (kg/h) as a function of the connection type [157].

Equipment volume (m³)    Flange        Flange and welded    Welded or special gaskets
0.2                      0.15–0.3      0.1–0.2              < 0.1
1                        0.5–1         0.25–0.5             0.15–0.25
3                        1–2           0.5–1                0.25–0.5
5                        1.5–3         0.7–1.5              0.35–0.7
10                       2–4           1–2                  0.6–1.2
25                       4–8           2–4                  1–2
50                       6–12          3–6                  1.5–3
100                      10–20         5–10                 2.5–5
200                      16–32         8–16                 4–8
500                      30–60         15–30                8–15
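The 0.83 kg/h rule of thumb for a 1 mm² opening can be cross-checked with the critical (choked) mass flux of an ideal gas; the sketch below assumes ambient air at 1 bar and 20 °C with κ = 1.4 (these conditions are an assumption, not from the text):

```python
import math

# Choked (critical) mass flux of an ideal gas through an opening:
# G = p0 * sqrt(kappa/(R_s*T0)) * (2/(kappa+1))^((kappa+1)/(2*(kappa-1)))
kappa = 1.4           # isentropic exponent of air
R_s = 287.06          # specific gas constant of air, J/(kg K)
p0, T0 = 1.0e5, 293.15  # assumed ambient conditions: 1 bar, 20 deg C

G = p0 * math.sqrt(kappa / (R_s * T0)) \
    * (2.0 / (kappa + 1.0)) ** ((kappa + 1.0) / (2.0 * (kappa - 1.0)))

# Leakage through a 1 mm^2 opening, converted to kg/h:
# comes out close to the quoted 0.83 kg/h
m_leak = G * 1.0e-6 * 3600.0
```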

The leakage rate can be determined experimentally by measuring the pressure increase during 24 hours in a hermetically closed and evacuated system with the volume V:

Leakage rate = Δp24h V Mair / (R Tamb ⋅ 24 h)    (8.19)

For industrial plants, the pressure increase per 24 h should be less than 10 % from the base pressure. For high vacuum systems with special seals or welded connections it should even be less than 1 %.
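Equation (8.19) in code form; the system volume, pressure rise and ambient temperature below are assumed example values:

```python
R = 8.314        # universal gas constant, J/(mol K)
M_air = 0.02896  # molar mass of air, kg/mol

def leakage_rate(dp_24h, V, T_amb):
    """Equation (8.19): pressure rise dp_24h in Pa over 24 h, volume V in m3,
    ambient temperature in K; returns the leakage rate in kg/h."""
    return dp_24h * V * M_air / (R * T_amb) / 24.0

# Assumed example: 10 m3 system, 20 mbar rise in 24 h at 293.15 K
rate = leakage_rate(2000.0, 10.0, 293.15)   # approx. 0.0099 kg/h
```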

9 Vessels and separators

Vessels are normally the simplest pieces of equipment in a process, as long as they are not used as reactors. They have several functions in a process. The most important one is the decoupling of two process parts, which is especially important in the startup phase when different parts of a plant start operation independently from each other. Other functions are the separation of vapor and liquid by gravity or the provision of suction head to achieve a sufficient NPSH value for a pump (Chapter 8.1). If the only function of a vessel is the storage of raw materials or product, it is called a tank.

Figure 9.1 shows a typical PID representation of a horizontal vessel. The vessel is equipped with several nozzles for inlet and outlet streams and measurements for temperature, pressure and liquid level. There is a manhole on the upper side (M1). A vortex breaker at the liquid outlet nozzle prevents waterspouts from being formed. One of the feed inlets is designed as a dip-pipe to prevent electrostatic charging. Furthermore, a dip-pipe ensures that vapor backflow is not possible. The liquid in the vessel forms a liquid seal so that vapor from the vessel cannot flow back through these lines if no liquid is delivered.

The calculation of the volume of vessels is a bit complicated, due to some apparently well-meant simplifications. In fact, some guidelines define that the volume of the vessel is just the cylindrical volume calculated with the outer diameter of the vessel and the length of the cylindrical section. Specially shaped heads ensure that there are no sharp edges where solids could deposit. Half-sphere heads need too much space and have a bad accessibility. Flat heads would provide more space for nozzles, but the stresses require larger wall thicknesses. Therefore, flat heads are rarely used for pressure vessels or for large vessels. The volume of the two heads is often neglected, which is conservative, but partly compensated for by the fact that the outer diameter is used instead of the inner one. Also, it is possible to calculate an exact volume, e. g. using the volume of an ellipsoidal head

Vel.head = 0.1298 di³    (9.1)

Before beginning to evaluate the vessel volume, it should be clarified which convention is to be used in the particular project. For the definition of the liquid level, the relationship between liquid volume and liquid level is trivial for vertical but sophisticated for horizontal vessels. With the liquid height h shown in Figure 9.2, the relationship is

VL = (L D²/4) arccos(1 − 2h/D) − L (D/2 − h) √(Dh − h²) ,    (9.2)

where L is the cylindrical length of the vessel. When the volume demand of a vessel is calculated, it should be considered that only part of it can be used as working volume, usually 65–80 %. The vessel must not be completely filled with liquid, due to the sudden pressure rise when the temperature slightly increases.

https://doi.org/10.1515/9783111028149-009
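Equations (9.1) and (9.2) translate directly into a small helper; the diameter, length and fill height below are assumed for illustration:

```python
import math

def v_ellipsoidal_head(d_i):
    """Volume of one ellipsoidal head, Equation (9.1); d_i = inner diameter in m."""
    return 0.1298 * d_i**3

def v_liquid_horizontal(D, L, h):
    """Liquid volume in the cylindrical part of a horizontal vessel,
    Equation (9.2). D = diameter, L = cylindrical length, h = liquid height."""
    return (L * D**2 / 4.0) * math.acos(1.0 - 2.0 * h / D) \
        - L * (D / 2.0 - h) * math.sqrt(D * h - h**2)

# Assumed example: D = 2 m, L = 5 m
half = v_liquid_horizontal(2.0, 5.0, 1.0)   # half-full: L*pi*D^2/8, approx. 7.854 m3
full = v_liquid_horizontal(2.0, 5.0, 2.0)   # full: L*pi*D^2/4, approx. 15.708 m3
head = v_ellipsoidal_head(2.0)              # approx. 1.038 m3 per head
```

A quick plausibility check: at h = D/2 the formula gives exactly half of the cylinder volume, at h = D the full cylinder volume.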


Figure 9.1: PID representation of a typical vessel.



Figure 9.2: Cross-section of a horizontal vessel partly filled with liquid.

The dimensions of a vessel are usually determined by providing response time for the operator. In Figure 9.1, various levels are indicated. LN is the normal level, which the level controller aims to maintain. If the level drops or rises too much, an alarm goes off to attract the attention of the operator. In Figure 9.1, these levels are called LAL (level alarm low) and LAH (level alarm high), respectively. The level switch LSHH (level switch high high) usually actuates an interlock which prevents the vessel from being overfilled. LSHH is chosen in such a way that a certain part of the vessel volume remains free, e. g. 10 %. Similarly, the LSLL (level switch low low) sets off another interlock, which e. g. might protect the pump transferring the liquid from the vessel by switching it off. Between the alarm and the automatic response of the process control system there must be enough time for the operator to react, e. g. 1 min for an activity in the control room or 5 min for an activity in the plant area. This defined period of time mainly determines the size of the vessel; the volumes between LAL and LSLL, and LAH and LSHH, respectively, must correspond to the feed and the withdrawal during the specified response time for the operator.

The last line of defense against excessive vessel filling is the overflow nozzle, which is often used in atmospheric tanks [158]. Usually, there is a pipe attached leading to the bottom. In case the tank is blanketed at the top, this overflow nozzle must not be an opportunity for the blanket gas to escape from the tank. This is the reason why the pipe is led down below the liquid level on the internal side (Figure 9.3). Also, a siphon breaker makes sense for preventing the overflow stream from emptying the vessel after the liquid level has dropped below the threshold for the overflow nozzle [158]. A siphon breaker is a piece of pipe located at the highest point of the piping and connected to atmosphere.

Figure 9.3: Example of an overflow nozzle.

For the determination of the nozzle sizes, the following rules of thumb can be applied. In the general case with a two-phase entry, the inlet nozzle should obey the condition

ρav wav² ≤ 1400 Pa ,    (9.3)

where the average velocity and the average density can be calculated according to

ρav = (ṁV + ṁL) / V̇    (9.4)
wav = V̇ / A    (9.5)
V̇ = ṁV/ρV + ṁL/ρL    (9.6)

with A as the cross-flow area of the nozzle. The vapor outlet nozzle should have the same size as the adjacent pipe as long as the condition

ρV wV² ≤ 4800 Pa    (9.7)

is maintained. The recommended velocity is wV = 10 m/s. For the liquid outlet nozzle, the criterion is

ρL wL² ≤ 400–900 Pa    (9.8)

The velocity should be kept below wL = 1 m/s. Even for low flows, the minimum diameter should be 50 mm. To avoid spouts at the liquid outlet, vortex breakers should be installed, as can be seen in Figure 9.1. These spouts can lead to vapor entrainment in the liquid discharge. The consequences might be an over-pressurization of the tank downstream or the accumulation of vapor in the pockets of the adjacent pipe. If this pipe is the suction line of a pump, it can lead to damage. If the tank is continuously operated, pulsation can occur [313]. More details about vortex breakers are given in [313]. There are a number of equations which define conditions where spouts are likely to occur [312]; however, they do not indicate relevance for normal outlet designs. Kister [96] recommends installing vortex breakers by default, whereas the Liebermanns “have a personal dislike of this device” [318], especially if the nozzle exit velocity is below 1 m/s (see Chapter 9).
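The nozzle criteria, Equations (9.3)–(9.6), yield a minimum inlet nozzle diameter in a few lines; the stream data below are borrowed from the separator example later in this chapter and the code is an illustrative sketch:

```python
import math

def inlet_nozzle_diameter(m_V, m_L, rho_V, rho_L, rho_w2_max=1400.0):
    """Minimum two-phase inlet nozzle diameter from rho_av * w_av^2 <= 1400 Pa.
    Mass flows in kg/h, densities in kg/m3; returns the diameter in m."""
    V_dot = (m_V / rho_V + m_L / rho_L) / 3600.0   # Equation (9.6), m3/s
    rho_av = (m_V + m_L) / 3600.0 / V_dot          # Equation (9.4), kg/m3
    w_max = math.sqrt(rho_w2_max / rho_av)         # limit from Equation (9.3)
    A = V_dot / w_max                              # Equation (9.5), rearranged
    return math.sqrt(4.0 * A / math.pi)

# Assumed stream data (taken from the separator example below)
d = inlet_nozzle_diameter(3000.0, 35000.0, 2.5, 900.0)   # approx. 0.25 m
```

In practice the next larger standard nominal diameter would be chosen.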

Another function of vessels is the separation between the vapor and the liquid phase. There are several kinds of vapor-liquid separators; the vessel is a so-called gravity separator, making use of the principle that vapor and liquid droplets have different densities. Separators of this kind are usually built as vertical vessels, although horizontal ones are possible [159]. Vertical separators have advantages when the gas-liquid ratio is high or total gas volumes are low, whereas horizontal ones are efficient when large amounts of gas are dissolved in the liquid. Of course, the space demand in the particular situation is also an issue. A simple equilibrium of forces between weight, buoyancy and flow resistance leads to the limiting gas velocity for a given droplet diameter d [159, 160]

wV = √(4 g (ρL − ρV) dlim / (3 ρV cw)) ,    (9.9)

where the cw correlation according to Brauer [161]

cw = 24/Re + 4/√Re + 0.4    (9.10)

can be used. The Reynolds number is defined as

Re = wV d ρV / ηV    (9.11)

The recommended velocity is

wV,rec = 0.44 wV    (9.12)

It should be noted that in Equation (9.10) the values for the physical properties must be taken for the flowing phase, that is, the gas which is flowing around the droplets. The velocity must be determined iteratively, as it is also part of the Reynolds number. The iteration is quite easy with the mathematical methods of current computers; nevertheless, Equation (9.9) has always been subject to simplifications. Some are usually justified; there is no major objection to using only the first summand in Equation (9.10) for the laminar region where Re < 0.25, which gives a direct relationship between droplet diameter and limiting velocity. However, experience shows that many companies derive their own correlations, which are restricted to their special application cases and filled with empirical factors. This cannot be recommended, as the applications are changed more often than the correlation.

The meaning of Equation (9.9) is that for a given vapor velocity wV, which results from the diameter of the vessel, droplets larger than dlim are separated, while smaller ones are not. In fact, any application case has a droplet distribution, and the separation will not be performed in such a rigorous way. Therefore, the interpretation that 50 % of the droplets with the limiting droplet size will be separated from the gas flow makes more sense. From process engineering experience, rules of thumb can be given for what a reasonable limiting droplet size might be in a particular case (Table 9.1).

Table 9.1: Recommended values for the limiting droplet diameter [160].

Application                              Limiting droplet diameter
Standard                                 0.2 mm
Compressor or turbine inlet              0.15 mm
Dryer inlet, prevent loss of solvent     0.1 mm
Not decisive for process                 0.35 mm

Nevertheless, one should be aware that the limiting droplet diameter is not exactly what the engineer needs. Although the concept is more or less accurate from the mathematical point of view, the recommended values are just a rough guideline. Its limitation becomes clear when it has to be specified how much liquid is entrained, e. g. to specify the COD value (chemical oxygen demand, Section 13.5) of a condensate. There is currently no way to determine the amount of liquid droplets and their size distribution for a given arrangement.

Figure 9.4: Sketch of a vertical separator without a demister.


Figure 9.5: Wire mesh demister. © ENVIMAC Engineering GmbH.

The efficiency of the droplet separation can be increased with a so-called demister, a wire mesh layer placed in the vapor space of the separation vessel (Figure 9.5). In contrast to the gravity separator, a high vapor velocity is advantageous so that the droplets hit the wire mesh and do not pass around the wires. Therefore, demisters need lower vessel diameters, making them less expensive than gravity separators. However, one must check that no fouling or even polymerization of the separated droplets occurs. The height of a demister is between 100 and 150 mm. Larger heights only slightly improve the separation but cause an additional pressure drop proportional to their height. The design velocity can be set as

weff = 0.7 K* [(ρL − ρV)/ρV]^0.5 ,    (9.13)

where the default value for the constant is K* = 0.11 m/s. For high pressure or high vacuum, K* = 0.06 m/s should be used. The velocity should not go below wmin = 0.3 weff. It can be expected that the limiting droplet size is between 3 and 5 µm.

Example
Calculate the diameter of a vertical vessel which shall act as a droplet separator by gravity. As the vapor is the inlet stream of a compressor, the limiting droplet diameter is set to dlim = 0.15 mm. For the same purpose, a demister diameter shall be evaluated. The input data are:
– ṁL = 35000 kg/h
– ṁV = 3000 kg/h
– ρL = 900 kg/m³
– ρV = 2.5 kg/m³
– ηV = 0.014 mPas
– g = 9.81 m/s²

Solution
We start with an estimated maximum velocity of wV = 1 m/s. Using Equations (9.9)–(9.11), the subsequent results are as listed in Table 9.2.

Table 9.2: Iteration history for the limiting vapor velocity.

wV,est (m/s)    Re       cw      wV (m/s)
1               26.76    2.07    0.58
0.58            15.52    2.96    0.49
0.49            13.11    3.33    0.46
0.46            12.31    3.49    0.45
0.45            12.04    3.55    0.45 (converged)

Equation (9.12) then yields wV,rec = 0.44 ⋅ 0.45 m/s = 0.2 m/s. The corresponding vessel diameter is

dvessel = √(4 ṁV / (ρV wV,rec π)) = 1.46 m

For the design of a demister, K* = 0.11 m/s is chosen. With Equation (9.13), the design velocity can be determined to be

weff = 0.7 ⋅ 0.11 m/s ⋅ √((ρL − ρV)/ρV) = 1.46 m/s

giving

dvessel = √(4 ṁV / (ρV weff π)) = 0.54 m

A single vessel diameter can hardly ever be used for both gravity and demister droplet separation.
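The hand iteration above can be automated; the sketch below implements Equations (9.9)–(9.13) with the example data (small deviations from the tabulated values are due to rounding in the hand calculation):

```python
import math

def settling_velocity(d_lim, rho_L, rho_V, eta_V, g=9.81, n_iter=50):
    """Limiting gas velocity from Equations (9.9)-(9.11) by fixed-point iteration."""
    w = 1.0  # starting estimate, m/s
    for _ in range(n_iter):
        Re = w * d_lim * rho_V / eta_V                      # Equation (9.11)
        cw = 24.0 / Re + 4.0 / math.sqrt(Re) + 0.4          # Equation (9.10)
        w = math.sqrt(4.0 / 3.0 * g * (rho_L - rho_V) * d_lim / (rho_V * cw))
    return w

# Data of the example
m_V, rho_L, rho_V, eta_V = 3000.0, 900.0, 2.5, 0.014e-3     # kg/h, kg/m3, Pas
V_dot = m_V / rho_V / 3600.0                                 # vapor flow, m3/s

w_V = settling_velocity(0.15e-3, rho_L, rho_V, eta_V)        # approx. 0.44 m/s
w_rec = 0.44 * w_V                                           # Equation (9.12)
d_gravity = math.sqrt(4.0 * V_dot / (w_rec * math.pi))       # approx. 1.47 m

w_eff = 0.7 * 0.11 * math.sqrt((rho_L - rho_V) / rho_V)      # Equation (9.13)
d_demister = math.sqrt(4.0 * V_dot / (w_eff * math.pi))      # approx. 0.54 m
```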

From experience, there are some recommendations for the design of vessels which are used as vapor-liquid separators with or without a demister. They should be applied together with the recommendations concerning the residence time given above. Recommended values for the dimensions are given in Figures 9.4 and 9.6.

Figure 9.6: Sketch of a vertical separator with a demister.

Several other types of droplet separators are used in chemical industry, such as baffle separators (“knock-out drum”) or cyclones (Figure 9.7), where the droplets are settled out by centrifugal forces. More details can be found in [162].

Figure 9.7: Knock-out-drum and cyclone [162]. © Wiley-VCH Verlag GmbH & Co. KGaA. Reproduced with permission.


Many applications require heating or cooling of the vessel content. The standard approach is to perform the heat transfer across the vessel wall. The simplest way is to transport the heating or cooling agent in half-pipe coils. This is inexpensive, but in pressurized vessels the heat transfer is poor because of the wall thickness. Because of the welding seams, only approx. 2/3 of the vessel wall can be used as heat transfer area. With a jacket, more area can be provided, but the design is a bit more complicated. Often, it is not effective to transfer the heat across the vessel wall. Coils or other internals can be placed inside the vessel, providing significantly more heat transfer area and, possibly, a better heat transfer coefficient. The most effective option is an external heat exchanger, which is not restricted in its dimensions by the vessel itself. In this case, however, a pump is necessary to operate the cycle.

9.1 Agitators

Agitating is one of the standard tasks to be performed in a vessel [8]. It comprises:
– homogenization of a liquid and improvement of the heat exchange with the vessel wall;
– stirring of a gas-liquid mixture to get small and well-distributed gas bubbles for a better mass transfer (e. g. oxygen intake for waste water treatment, see Chapter 13.6);
– stirring of a liquid-solid mixture to get a homogeneous suspension (e. g. distribution of a solid catalyst);
– stirring of a liquid-liquid mixture with the target to get an even distribution of the disperse phase (see Chapter 6).

The viscosity of the liquid and its sensitivity to shear forces have to be considered. The installation of baffles is necessary to prevent the fluid in the vessel from simply rotating and finally forming a spout. With baffles, effective mixing is possible. Agitated vessels are standardized. Figure 9.8 shows the arrangement and typical measures. Various types of agitators are shown in Figure 9.9.

Figure 9.8: Agitated vessel with power unit and baffles. hl – liquid level, dR – vessel diameter, hA – distance between agitator and bottom, hB – height of agitator (projection), dA – diameter of agitator.

Figure 9.9: Agitator types with their classification.

Axial agitators generate a flow inside the vessel which is primarily vertical to the ground. At the bottom and at the top, it is diverted, and a circular flow is formed (Figure 9.10). The propeller mixer is the best-known example, which is used for turbulent flow at relatively low viscosities. Inclined plate agitators introduce a radial flow component, which enhances dispersing but makes the agitator susceptible to cavitation so that the agitator speed is limited. For higher viscosities, the Intermig and the spiral agitator are used. Radial agitators generate a horizontal flow inside the vessel, which is diverted at the vessel wall (Figure 9.10). Disk agitators are often used for the aeration of liquids at relatively low viscosities, while at high viscosities anchor agitators are often the best choice. One should notice that the viscosity unit in Figure 9.9 is Pas, not mPas.

Figure 9.10: Axial and radial flow pattern. Copyright EKATO [333].

For the design of an agitator, the mixing time and the power consumption have to be considered. The mixing time tmix denotes the time required for achieving a certain mixing quality, which is based on statistical methods [325]. The product of revolution speed N and mixing time tmix is depicted (Figure 9.11) as a function of a modified Reynolds number

ReA = dA² N ρ / η    (9.14)

Figure 9.11: Diagram for the evaluation of the mixing time. Copyright EKATO [333].

The power consumption P of an agitator can be determined with the so-called Newton number

Ne = P / (dA⁵ N³ ρ) = f(ReA)    (9.15)

This is a remarkable relationship; it means that, in a first approximation, the power consumption is proportional to the fifth power of the agitator diameter and to the third power of the revolution speed. The relationship between Newton and Reynolds numbers is shown in Figure 9.12.


Figure 9.12: Relationship between Newton and Reynolds numbers for some agitator types. Copyright EKATO [333].

Example
A propeller mixer with dA = 1 m is operated at a revolution speed of N = 200/min. Calculate the mixing time and the power consumption during the mixing time. How does the power consumption change if the agitator is run at N = 100/min? The other numbers are:
– ρ = 1000 kg/m³
– η = 0.3 Pas
– dR = 3.3 m

Solution
According to Equation (9.14), the Reynolds number is

ReA = (1 m)² ⋅ (200/60 s) ⋅ 1000 kg/m³ / 0.3 Pas = 11111    (9.16)

From Figure 9.11, with dA/dR ≈ 0.3 we can read off N tmix = 90, giving tmix = 0.45 min. From Figure 9.12, we take Ne ≈ 0.4. With Equation (9.15), we get

P = Ne dA⁵ N³ ρ = 0.4 ⋅ (1 m)⁵ ⋅ (200/60 s)³ ⋅ 1000 kg/m³ = 14.8 kW    (9.17)


The energy consumption during tmix is

W = P tmix = 14.8 kW ⋅ 0.45 min ≈ 400 kJ    (9.18)

If we run the agitator at N = 100/min, we can recalculate:

ReA = (1 m)² ⋅ (100/60 s) ⋅ 1000 kg/m³ / 0.3 Pas = 5556    (9.19)

From Figure 9.11, we can read off N tmix = 100, giving tmix = 1 min. From Figure 9.12, we again take Ne ≈ 0.4. With Equation (9.15), we get

P = Ne dA⁵ N³ ρ = 0.4 ⋅ (1 m)⁵ ⋅ (100/60 s)³ ⋅ 1000 kg/m³ = 1.85 kW    (9.20)

The energy consumption during tmix is

W = P tmix = 1.85 kW ⋅ 1 min ≈ 111 kJ    (9.21)

This means that only ≈ 28 % of the energy is needed if the mixing time is doubled.
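The calculation scheme of this example can be sketched in a few lines of Python. The chart lookups (Figures 9.11 and 9.12) cannot be automated here; the values Ne ≈ 0.4 and the N tmix readings are therefore taken over as assumptions:

```python
# Sketch of the agitator example above; Ne = 0.4 and the N*tmix values
# are assumed as read from the charts in Figures 9.11 and 9.12.

def reynolds(d_a, n, rho, eta):
    """Agitator Reynolds number, Equation (9.14); n in 1/s."""
    return d_a**2 * n * rho / eta

def power(ne, d_a, n, rho):
    """Agitator power consumption, Equation (9.15); n in 1/s."""
    return ne * d_a**5 * n**3 * rho

d_a, rho, eta = 1.0, 1000.0, 0.3          # m, kg/m3, Pas
for n_rpm, nt_mix in [(200.0, 90.0), (100.0, 100.0)]:
    n = n_rpm / 60.0                      # revolution speed in 1/s
    t_mix = nt_mix / n                    # mixing time in s
    p = power(0.4, d_a, n, rho)           # W
    w = p * t_mix                         # energy during mixing, J
    print(f"N = {n_rpm:.0f}/min: Re = {reynolds(d_a, n, rho, eta):.0f}, "
          f"tmix = {t_mix:.0f} s, P = {p/1000:.2f} kW, W = {w/1000:.0f} kJ")
```

The numbers reproduce the hand calculation above (14.8 kW/400 kJ and 1.85 kW/111 kJ).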

10 Chemical reactions

10.1 Reaction basics

For the author, this will be the most difficult chapter of the book. The topic easily justifies books of its own, e.g. [8] or [163], and can definitely not be covered in only a few pages. The goal of this chapter is only to explain the most important technical terms so that the reader is able to follow the discussions of practitioners.

The extent of a chemical reaction is characterized by the conversion (X). It must be defined which of the reactant components the conversion refers to. Then, the conversion is

X = (number of reacted moles of reference reactant) / (number of moles of reference reactant at the beginning of the reaction)

In contrast, the yield (Y) of a reaction refers to a product of the reaction. It must be taken into account that the maximum number of product moles depends on the stoichiometry:

Y = (number of product moles formed) / (number of reactant moles ⋅ stoichiometric ratio)

The stoichiometric ratio is defined as

stoichiometric ratio = (number of product moles in reaction equation) / (number of reactant moles in reaction equation)

The quality of a reaction can be characterized by the selectivity (S). It is defined as

S = (number of product moles formed) / (number of converted reactant moles ⋅ stoichiometric ratio),

where again the numbers refer to a specific product and reactant, respectively.

Example
In the process of oxychlorination of ethylene with hydrogen chloride and oxygen, giving 1,2-dichloroethane and water, there is a reaction network of several competing reactions. The most important ones are:
(1) C2H4 + 0.5 O2 + 2 HCl → CH2Cl–CH2Cl + H2O
(2) C2H4 + HCl → CH3–CH2Cl
(3) C2H4 + 2 O2 → 2 CO + 2 H2O
(4) C2H4 + 3 O2 → 2 CO2 + 2 H2O
The (fictitious) conversions of ethylene with respect to the particular reactions shall be 90 % (1), 3 % (2), 2 % (3), 3 % (4). The rest of the ethylene will remain.¹ The stream entering the reactor consists of 100 mol/h C2H4, 190 mol/h HCl and 200 mol/h O2.

¹ In the real oxychlorination, the selectivity for 1,2-dichloroethane and the surplus of ethylene are by far larger [164].

https://doi.org/10.1515/9783111028149-010


Calculate the overall conversions of ethylene, HCl, and O2 and the yields and selectivities of CH2 Cl–CH2 Cl with respect to ethylene and to HCl.

Solution
In Table 10.1, the composition of the stream after the reaction is calculated. Any stoichiometric calculation should be checked for consistency in the atom balance. In this case, at the inlet there are
– 100 ⋅ 2 = 200 C-atoms;
– 100 ⋅ 4 + 190 = 590 H-atoms;
– 190 Cl-atoms;
– 200 ⋅ 2 = 400 O-atoms.
At the outlet there are
– 2 ⋅ 2 + 90 ⋅ 2 + 3 ⋅ 2 + 4 + 6 = 200 C-atoms;
– 2 ⋅ 4 + 7 + 90 ⋅ 4 + 3 ⋅ 5 + 100 ⋅ 2 = 590 H-atoms;
– 7 + 90 ⋅ 2 + 3 = 190 Cl-atoms;
– 142 ⋅ 2 + 4 + 6 ⋅ 2 + 100 = 400 O-atoms.
→ OK

The overall conversions are

XC2H4 = (100 − 2)/100 = 98 %
XHCl = (190 − 7)/190 = 96.3 %
XO2 = (200 − 142)/200 = 29 %

The yields of 1,2-dichloroethane referring to ethylene and HCl, respectively, can be calculated to be

YC2H4Cl2,C2H4 = 90/100 = 90 %
YC2H4Cl2,HCl = (90/190) ⋅ 2 = 94.7 %

Finally, the selectivities with respect to ethylene and HCl are

SC2H4Cl2,C2H4 = 90/98 = 91.84 %
SC2H4Cl2,HCl = (90/183) ⋅ 2 = 98.36 %
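The stoichiometric bookkeeping of this example can be reproduced with a short script. The component names and the normalization of the stoichiometric coefficients to one mole of converted ethylene are choices made for this sketch:

```python
# Sketch of the conversion/yield/selectivity bookkeeping of the example,
# reproducing Table 10.1 from the four reaction extents (in mol/h).

inlet = {"C2H4": 100.0, "HCl": 190.0, "O2": 200.0}
# stoichiometric coefficients per mole of converted C2H4 (negative = consumed)
reactions = [
    {"C2H4": -1, "O2": -0.5, "HCl": -2, "C2H4Cl2": 1, "H2O": 1},   # (1)
    {"C2H4": -1, "HCl": -1, "C2H5Cl": 1},                          # (2)
    {"C2H4": -1, "O2": -2, "CO": 2, "H2O": 2},                     # (3)
    {"C2H4": -1, "O2": -3, "CO2": 2, "H2O": 2},                    # (4)
]
extents = [90.0, 3.0, 2.0, 3.0]   # mol/h ethylene converted per reaction

outlet = dict(inlet)
for nu, xi in zip(reactions, extents):
    for comp, coeff in nu.items():
        outlet[comp] = outlet.get(comp, 0.0) + coeff * xi

X_C2H4 = (inlet["C2H4"] - outlet["C2H4"]) / inlet["C2H4"]
X_HCl = (inlet["HCl"] - outlet["HCl"]) / inlet["HCl"]
# yield and selectivity of 1,2-dichloroethane w.r.t. HCl (stoich. ratio 1/2)
Y = outlet["C2H4Cl2"] / (inlet["HCl"] * 0.5)
S = outlet["C2H4Cl2"] / ((inlet["HCl"] - outlet["HCl"]) * 0.5)
print(outlet, X_C2H4, X_HCl, Y, S)
```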

The speeds of chemical reactions vary widely. While the corrosion of iron is slow and can take years, other reactions take minutes or hours, and reactions like the neutralization of acids and bases take place instantaneously. Performing reactions in industrial processes often requires accelerating or slowing down the reaction rates. Therefore, knowledge of the influences on reaction kinetics is essential for a process engineer. An in-depth treatment of reaction kinetics can be found in the above-mentioned textbooks; for the explanation of the basics, only homogeneous reactions are considered.

Table 10.1: Calculation of the mole numbers at the reactor outlet.

Comp.          Inlet   Reac. (1)   Reac. (2)   Reac. (3)   Reac. (4)   Outlet
C2H4            100       −90         −3          −2          −3          2
HCl             190      −180         −3           0           0          7
O2              200       −45          0          −4          −9        142
CH2Cl–CH2Cl       0        90          0           0           0         90
CH3–CH2Cl         0         0          3           0           0          3
CO                0         0          0           4           0          4
CO2               0         0          0           0           6          6
H2O               0        90          0           4           6        100

The reaction of the formation of ammonia can be used as an example [165]:

N2 + 3 H2 → 2 NH3

The reaction rate in this case can be set up as

dcNH3/dτ = k cN2 cH2³ ,    (10.1)

where c is the volume concentration (c-concentration)

ci = ni/V = xi ⋅ ρ    (10.2)

The c-concentration is not popular in chemical engineering, as the density and, subsequently, the c-concentration itself are temperature-dependent. Nevertheless, for the calculation of reaction rates it is essential.² Equation (10.1) is called a formal kinetics equation, which means that the stoichiometric coefficients in the reaction equation are the exponents of the c-concentrations. This is the easiest approach; however, it is not necessarily correct. In fact, for ammonia synthesis the kinetics is much more complicated [163]. The factor k in Equation (10.1) is the reaction rate factor. A usual approach for its temperature dependence is

k = k0 exp(−EA/RT) ,    (10.3)

where EA is the activation energy of the reaction. The often cited rule of thumb that a temperature increase of 10 K increases the reaction rate by a factor of 2–3 should not be used for calculations, but rather to illustrate how temperature-sensitive reaction rates are.

² Note that c, n, x, and ρ all refer to moles, i.e. the units are mol/l or mol/m³, mol, mol/mol, and mol/m³, respectively.
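A short calculation illustrates this sensitivity. The activation energy EA = 80 kJ/mol and the temperature level of 350 K are illustrative assumptions only:

```python
import math

# Ratio of the rate factors of Equation (10.3) for a 10 K temperature
# increase; the pre-exponential factor k0 cancels out.
R = 8.314  # J/(mol K)

def rate_factor_ratio(ea, t1, t2):
    """k(t2)/k(t1) from the Arrhenius-type approach of Equation (10.3)."""
    return math.exp(-ea / R * (1.0 / t2 - 1.0 / t1))

print(rate_factor_ratio(80e3, 350.0, 360.0))  # roughly a factor of 2
```

For these assumed numbers, the rate roughly doubles, in line with the rule of thumb quoted above.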


In principle, any reaction has a reverse reaction which actually takes place to a certain extent. In many cases, the reverse reaction can be neglected, e.g. for combustion reactions. One can hardly imagine that CO2 and H2O really react back and form a hydrocarbon and oxygen. However, a lot of cases can also be found where the reverse reaction is important, e.g. for all esterification reactions. At a certain stage, the concentrations of the participants of the reaction stay the same, as long as temperature and pressure are kept constant. From the overall view, it seems that no reactions take place any more. In fact, reaction and reverse reaction both happen, but with the same reaction rate, so that an equilibrium is formed. Again, the ammonia reaction is a good example, where all the particular aspects can be explained:

N2 + 3 H2 ⇌ 2 NH3

In equilibrium, both reaction rates³ are equal, i.e.

dcNH3/dτ = k1 cN2 cH2³ − k−1 cNH3² = 0    (10.4)

After rearranging Equation (10.4), an equilibrium constant K can be defined as

K = k1/k−1 = cNH3² / (cN2 cH2³)    (10.5)

Equation (10.5) is called the law of mass action. The condition for the reaction equilibrium can also be derived from thermodynamics, starting from the chemical potential. A comprehensive explanation can be found in [11]. It ends up with a slightly different expression for the equilibrium constant. For the ammonia reaction, one would find

K = (fNH3/f⁰NH3)² / [(fN2/f⁰N2)(fH2/f⁰H2)³] ,    (10.6)

with f as the fugacity and f⁰ as the fugacity at a standard state. The fugacity itself is

fi = p yi φi ,    (10.7)

with φ as the fugacity coefficient. The fugacity coefficients can be calculated using the equations of state proposed in Chapter 2. Equation (10.6) can also be written in the form

K = [yNH3² / (yN2 yH2³)] ⋅ [φNH3² / (φN2 φH2³)] ⋅ (p/p⁰)⁻² ,    (10.8)

³ Written with formal kinetics for illustration.

where the exponent of the pressure term is calculated from the stoichiometric coefficients (2 − 1 − 3 = −2). Discussing Equations (10.5)–(10.8), the following statements can be made:
1. Le Chatelier's principle: In Equation (10.8), it can be seen from the exponent of the pressure term that there is a tendency for the ammonia concentration to increase with increasing pressure.⁴ According to the principle of Le Chatelier, the system counteracts the effect of the pressure increase by lowering the mole number in the mixture.
2. There is a formal discrepancy between Equation (10.5) and Equation (10.8) as long as the term with the fugacity coefficients is not negligible. In fact, the kinetic approach does not consider nonideal behavior. This means that one must be aware that reaction kinetics calculated with the c-concentration do not reach the thermodynamic equilibrium defined in Equation (10.8). It would be helpful if the fugacities were used as the concentration measure in reaction kinetics, but this is not really widespread in chemistry.
3. Even if the fugacity term can be neglected, the equilibrium constant should be evaluated using Equation (10.8). The determination of the coefficients from the reaction rate will not really be accurate enough.
4. The importance of taking the vapor phase nonideality into account is illustrated in Figure 10.1, where an attempt is made to reproduce the equilibrium conversion at t = 450 °C with two equations of state of different quality as a function of pressure. As can be seen, it is advantageous to use an accurate equation of state instead of the ideal gas law. The VTPR equation of state can represent the influence of the pressure more or less exactly, whereas the ideal gas law produces unacceptably large deviations. It should be noted that the data is more than 80 years old; however, the agreement of the two data sources and their plausibility indicate that they can be considered reliable.
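The pressure effect can be made quantitative for the idealized case that all fugacity coefficients are unity. The sketch below solves Equation (10.8) for the equilibrium extent of a stoichiometric N2/H2 feed by bisection; the value K = 5·10⁻⁵ is an arbitrary illustrative assumption, not a fitted equilibrium constant:

```python
# Le Chatelier sketch for N2 + 3 H2 = 2 NH3, ideal gas (all phi = 1).
# Feed: 1 mol N2 + 3 mol H2; xi = moles of N2 converted.

def k_expression(xi, p_ratio):
    """y_NH3^2 / (y_N2 y_H2^3) * (p/p0)^-2 for extent xi in (0, 1)."""
    total = 4.0 - 2.0 * xi            # total moles shrink as NH3 forms
    y_nh3 = 2.0 * xi / total
    y_n2 = (1.0 - xi) / total
    y_h2 = 3.0 * (1.0 - xi) / total
    return y_nh3**2 / (y_n2 * y_h2**3) * p_ratio**-2

def equilibrium_extent(k, p_ratio, tol=1e-12):
    """Bisection on xi; k_expression grows monotonically from 0 to inf."""
    lo, hi = 1e-9, 1.0 - 1e-9
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if k_expression(mid, p_ratio) < k:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

x100 = equilibrium_extent(5e-5, 100.0)   # p = 100 p0
x200 = equilibrium_extent(5e-5, 200.0)   # p = 200 p0
print(x100, x200)  # the higher pressure gives the higher conversion
```

The higher pressure forces a larger extent of reaction, exactly as the negative pressure exponent in Equation (10.8) suggests.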
Analogously to Equation (10.6), the equilibrium constant for a reaction in the liquid phase like

CH3COOH + CH3OH ⇌ CH3COOCH3 + H2O

can be derived as [11]

K = (xCH3COOCH3 γCH3COOCH3)(xH2O γH2O) / [(xCH3COOH γCH3COOH)(xCH3OH γCH3OH)]    (10.9)

⁴ A bit more slowly: with increasing pressure, the pressure term decreases due to the negative exponent. Neglecting the pressure dependence of the φ-term, the concentration term must increase to keep K constant. This is only possible by increasing the NH3 concentration and, correspondingly, decreasing the concentrations of the reactants N2 and H2.


Figure 10.1: Influence of the real gas phase behavior on the equilibrium conversion of the ammonia reaction. Courtesy of Prof. Dr. J. Gmehling.

Again, a kinetic approach like Equation (10.5) cannot reach this equilibrium. This problem can only be overcome if the activities are used to describe the concentration, which is not widespread. Heterogeneous reactions can also be described with a corresponding equilibrium approach. In this case, the mass transfer can also have a significant influence on the reaction rate [8, 163]. The equilibrium constant K can be estimated from thermodynamics using the standard Gibbs energies of formation Δgf⁰, again according to the stoichiometric coefficients, e.g. for the ammonia reaction Equation (10.8)

RT ln K = −(2 Δg⁰NH3 − 3 Δg⁰H2 − Δg⁰N2) ,    (10.10)

where the Δg⁰ values for the various components refer to the ideal gas and can be obtained from [11]:

Δgi⁰(T, p⁰ = 1 atm) = (T/T0) Δg⁰f,i + (1 − T/T0) Δh⁰f,i + ∫[T0→T] cp^id dT − T ∫[T0→T] (cp^id/T) dT    (10.11)
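As a sketch of Equation (10.10), the equilibrium constant of the ammonia reaction at 25 °C can be estimated from the standard Gibbs energy of formation of ammonia. The value Δgf⁰ ≈ −16.4 kJ/mol is an approximate literature value for the ideal gas state; the values for the elements N2 and H2 are zero by definition:

```python
import math

# Equilibrium constant of N2 + 3 H2 = 2 NH3 at 298.15 K from
# Equation (10.10); Delta gf of the elements is zero.
R, T = 8.314, 298.15
dg_nh3 = -16.4e3                   # J/mol, approximate literature value
dg_reaction = 2.0 * dg_nh3         # 2 dg(NH3) - dg(N2) - 3 dg(H2)
K = math.exp(-dg_reaction / (R * T))
print(f"K(298 K) = {K:.2e}")       # very large: equilibrium far on the NH3 side
```

At room temperature, K is of the order 10⁵ to 10⁶; the equilibrium is strongly on the ammonia side, and it is the reaction rate that forces the high temperatures discussed below.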

The effect of temperature on a chemical reaction is complex. First, the equilibrium prefers the endothermic reaction at high temperatures and the exothermic reaction at low temperatures. On the other hand, the reaction rates strongly increase with increasing temperature, often in such a way that a reaction can take place at all only at high temperatures. To mention the ammonia reaction again: it is an exothermic reaction, and therefore low temperatures are preferred. However, to obtain useful reaction rates the temperature must be sufficiently high, so a compromise must be found. For the ammonia reaction, a temperature of 400–500 °C is used in the chemical industry. As the number of moles decreases when ammonia is formed, high pressures (150–250 bar) drive the equilibrium to the ammonia side.

A catalyst is a substance which increases the rate of a chemical reaction without being consumed. Often, very low amounts of a catalyst are sufficient to achieve a significant effect. If more than one reaction is possible, a catalyst can promote the desired one. A catalyst does not change the chemical equilibrium of a reaction; this means that it promotes the reaction itself as well as the reverse one. For reaction kinetics, the catalyst concentration must often be regarded as well. The most popular approach is Michaelis–Menten kinetics [163], which assumes that the reactant and the catalyst form a complex that further reacts to the product and the catalyst. The reaction rate is

r = k ccat ⋅ creactant / (kMM + creactant) ,    (10.12)

with kMM as the Michaelis–Menten constant.

Besides the equilibrium, the enthalpy of reaction is also often of interest. It is defined as the heat released when a reaction takes place at constant temperature and pressure. The equation is pretty simple:

Δh⁰R = Σ hproducts − Σ hreactants    (10.13)

The reason why this equation is so simple is that the real work has been done before with the rigorous calculation of the enthalpy, using the standard enthalpy of formation as the starting point (Chapter 2.8). Equation (10.13) automatically covers all the influences of temperature, pressure, and real phase behavior. An often cited approach for the calculation of the temperature dependence, using polynomials for cp^id and summarizing the coefficients of the various temperature terms (e.g. [11]), is mainly an academic one; in practice, its application is awkward. It is restricted to ideal gas applications, and, moreover, it requires that all cp^id functions are given as polynomials, which is rarely the case and which is by far not the optimum choice for the correlation. Δhf⁰ and Δgf⁰ can be taken from data tables (e.g. [41, 42]) or be estimated using group contribution methods [11]. However, the problem of differences between large numbers must always be taken into account. Relatively small errors in determining Δhf⁰ and especially Δgf⁰ can lead to significant errors in estimating the enthalpy of reaction or the equilibrium constant, respectively. Therefore, the results of estimation methods should be handled with care. The following example gives an impression of the sensitivity of Δhf⁰.


Example
Calculate the enthalpy of reaction for the hypothetical isomerization reaction of 1,1-dichloroethane (11DCE) and 1,2-dichloroethane (12DCE)

CHCl2–CH3 ⇌ CH2Cl–CH2Cl

at t = 25 °C in the ideal gas state. The enthalpies of formation are [41]

Δhf⁰(11DCE) = −130120 J/mol
Δhf⁰(12DCE) = −126780 J/mol

Consider that these values might have an error of ±1 %. What is the possible range of the enthalpy of reaction?

Solution
The expected enthalpy of reaction is

ΔhR = −126780 J/mol + 130120 J/mol = 3340 J/mol

The maximum possible enthalpy of reaction is

ΔhR = −126780 ⋅ 0.99 J/mol + 130120 ⋅ 1.01 J/mol = 5909 J/mol ,

whereas the minimum one is

ΔhR = −126780 ⋅ 1.01 J/mol + 130120 ⋅ 0.99 J/mol = 771 J/mol

This means that in this case an error of ±1 % in the enthalpies of formation causes a deviation of ±77 % in the enthalpy of reaction.
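The error range of this example can be reproduced directly; the extreme values result from combining the ±1 % bounds of the two enthalpies of formation in opposite directions:

```python
# Difference-of-large-numbers sketch for the isomerization example:
# both formation enthalpies are large and negative, the reaction
# enthalpy is their small difference.
dh_11dce = -130120.0  # J/mol, reactant
dh_12dce = -126780.0  # J/mol, product
rel_err = 0.01

dh_r = dh_12dce - dh_11dce
# extreme values: scale product and reactant in opposite directions
dh_r_max = dh_12dce * (1 - rel_err) - dh_11dce * (1 + rel_err)
dh_r_min = dh_12dce * (1 + rel_err) - dh_11dce * (1 - rel_err)
print(round(dh_r), round(dh_r_min), round(dh_r_max))  # 3340 771 5909
```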

Chemical reactions are often considerably exothermic. To control the temperature, the heat removal must be sufficient even in the worst case. If the heat removal cannot compensate the heat of reaction, the temperature in the reactor will rise. With increasing temperature, the reaction rates increase exponentially, causing more heat of reaction which again cannot be removed, leading to a further temperature rise, and so on (runaway reaction). According to Equation (10.3), the heat generation increases exponentially with temperature, while the heat removal due to cooling increases only linearly with temperature. In a short time, large amounts of heat can be generated, which can possibly end up in an explosion or in the destruction of the reactor. Often, degradation reactions are promoted which spoil the product. It is essential to avoid the occurrence of temperature peaks [8]. Especially in fixed bed reactors, temperature peaks can occur locally and then show the behavior described above.
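The argument can be visualized with a minimal sketch comparing an exponentially growing heat generation with a linear heat removal. All numbers below (EA, UA, coolant temperature, reference duty) are illustrative assumptions, not design values:

```python
import math

# Exponential heat generation (cf. Equation (10.3)) vs. linear cooling.
R = 8.314
EA = 80e3          # J/mol, assumed activation energy
UA = 2000.0        # W/K, assumed cooling capability
T_COOL = 350.0     # K, coolant temperature
Q_REF = 10e3       # W, assumed heat generation at the reference point
T_REF = 360.0      # K

def q_generated(t):
    """Heat release, growing exponentially with temperature."""
    return Q_REF * math.exp(-EA / R * (1.0 / t - 1.0 / T_REF))

def q_removed(t):
    """Cooling duty, growing only linearly with temperature."""
    return UA * (t - T_COOL)

for t in (360.0, 380.0, 400.0):
    print(t, q_generated(t), q_removed(t), q_generated(t) > q_removed(t))
```

With these assumed numbers, the cooling still dominates at 360 K, but at 400 K the exponential generation has overtaken the linear removal: beyond this point the temperature can only run away.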

For reactor design, it must be known how sensitive the reactor is when operating conditions like reactant concentration or temperature are slightly changed. Criteria have been developed [8] which make it possible to assess whether a runaway reaction is possible or not. Usually, multiple reactions take place in a reactor in parallel. The design must be performed in a way that the desired ones are supported. For this purpose, there are some options available.

10.2 Reactors

The reactor is always the heart of a chemical plant, and the choice of the reactor type has a great influence on the amount and the quality of the product. On the other hand, the number of choices is manifold, and only the most typical ones can be discussed here. Comprehensive discussions of reactor types can be found in [8] and [163].

First, one can distinguish between discontinuous and continuous reactors. In the discontinuous mode (batch), all the reactants, solvents and catalysts are fed into the reactor, usually a vessel. The reactor is agitated to keep the holdup homogeneous concerning temperature and composition. The composition of the holdup changes with time. Batch reactors are flexible; they can handle different products, and the residence time can be chosen arbitrarily. Their disadvantage is that there are dead times for filling and emptying the reactor and for the adjustment of the temperature. Batch reactors are useful for multipurpose plants and for products with small plant capacities. It is difficult to handle fast or strongly exothermic reactions.

Continuous reactors have constant feed and product streams. All parameters like temperature or pressure are kept constant. The product quality can be kept constant due to the constant conditions and the option of automation. The reactor volumes tend to be smaller, as there are no dead times. Contrary to batch reactors, their flexibility is limited, and only small variations of temperature, feed flow, and feed quality can be tolerated. In construction, care must be taken that inlet and outlet nozzles are not too close together. Otherwise, short-circuit flows will occur, and a larger part of the reactants does not take part in the reaction.

Reactors can also be operated in a semicontinuous mode. This can mean that in a continuous reactor one of the reactants is fed batchwise, or that in a batch reactor one of the products is continuously removed. For the latter case, esterification reactions are a well-known example, e.g.

C6H13COOH + C10H21OH ⇌ C6H13COOC10H21 + H2O

Water is the substance with the lowest boiling point and can easily be removed from the reactor by evaporation. The advantage is that the equilibrium is shifted to the ester side, and 100 % conversion can be achieved. In the semicontinuous mode, strongly exothermic reactions can be handled by adjusting the feed of one of the reactants in a way that the heat of reaction can be removed.

Many reactor types can be characterized by three simplified ideal reactors (Figure 10.2).

Figure 10.2: Stirred tank reactor and tubular reactor.





– Ideally mixed batch stirred tank reactor: The mixture of reactants is filled into the reactor and perfectly mixed during the entire reaction time. During the reaction time, no substance is removed or added. The reactor can be heated or cooled.
– Ideally mixed continuous stirred tank reactor (CSTR): The continuously operated stirred tank reactor is a continuously operated vessel where the reaction takes place. There is an average residence time of the reaction mixture, simply given by

  τres = V/V̇    (10.14)

  The actual residence time can differ for the various components. In the ideal continuous stirred tank reactor, it is assumed that reactants and products are mixed instantaneously. There is no concentration or temperature gradient. This is usually not an advantageous assumption, as reactant molecules which have not reacted are instantaneously transported from the inlet to the product stream. In contrast to intuition, a real continuous stirred tank reactor is superior to the ideal one, as the transport of non-reacted reactants from the inlet to the outlet of the reactor takes some time, so that all reactant molecules have the chance to react. The CSTR is appropriate for fast reactions, where the time needed for the reaction is considerably smaller than the average residence time.
– Plug-flow tubular reactor (PFR): The tubular reactor is a line with a continuous flow, meaning that the mass flow does not change over the length of the reactor. For simplification, it is essential to assume plug flow. Temperature and concentration are constant over the cross-flow area, but vary continuously along the length coordinate due to the progress of the reaction. The characteristics of the plug-flow tubular reactor are analogous to those of the batch stirred tank reactor; in both cases, the reaction takes place in a given volume, where all substances are completely mixed, without interacting with other volumes or volume elements.⁵ In real tubular reactors, dispersion causes axial mixing of the substances, and heat conduction and the influence of possible radial heating or cooling cause a temperature profile.
– Mixed forms: The ideal continuous stirred tank reactor and the plug-flow reactor represent the limiting cases concerning reactor behavior. In the CSTR, there is complete mixing of reactants and products without a temperature profile. The reactant concentration is always low due to the instantaneous mixing, whereas the product concentration is high. In the tubular reactor, the reactant concentration decreases from high values at the inlet to low values at the outlet, with the products showing the opposite profile. There are combinations of CSTR and PFR showing intermediate behavior, i.e. the CSTR cascade (Figure 10.3) and the PFR with recycle (Figure 10.4). With an increasing number of reactor elements, the CSTR cascade approaches the behavior of the tubular reactor, whereas the plug-flow reactor with recycle can show the same properties as a single CSTR.

Figure 10.3: CSTR cascade.

Figure 10.4: Plug-flow reactor with recycle.
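The statement that the CSTR cascade approaches the tubular reactor can be illustrated with the textbook conversion formulas for a first-order reaction; the Damköhler number k·τ = 2 used below is an assumed example value:

```python
import math

# First-order reaction A -> B with overall Damkoehler number k*tau = 2:
# conversion of n equal CSTRs in series vs. the plug-flow limit.

def conversion_cstr_cascade(k_tau, n):
    """Total conversion of n equal ideal CSTRs in series, first order."""
    return 1.0 - (1.0 + k_tau / n) ** (-n)

def conversion_pfr(k_tau):
    """Plug-flow (or ideal batch) conversion, first order."""
    return 1.0 - math.exp(-k_tau)

for n in (1, 2, 5, 20):
    print(n, conversion_cstr_cascade(2.0, n))
print("PFR", conversion_pfr(2.0))
```

A single CSTR reaches about 67 % conversion, while the PFR reaches about 86 %; with a growing number of stages, the cascade converges towards the plug-flow value, as stated above.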

There are also reactor types available for heterogeneous reactions (Figure 10.5). To illustrate which aspects have to be considered in reactor design, gas-liquid reactions will be exemplarily reviewed. The other types are thoroughly discussed in [8].

⁵ The statement is only valid as long as the volume does not change during the reaction.

Figure 10.5: Reactor types for heterogeneous reactions. Courtesy of Prof. Dr. J. Gmehling.

For gas-liquid reactions, it is assumed that the reaction takes place in one of the phases. Therefore, one of the components has to change the phase, and it must be transported from the phase boundary to the bulk of the phase where it can take part in the reaction. There is a complex dependence of the reaction rate on mass transfer, heat transfer, reaction kinetics, solids distribution, mixing, and gas solubility. Usually, a complex modeling performed by a specialist takes place before the reactor is designed. It is a key question how the energy needed for the distribution of the gas is introduced into the reactor. Mainly, three types can be distinguished:
– By agitating the reactor content, where the agitator disperses the gas and mixes it with the liquid. Most effective is the hollow stirrer, where the gas is sucked in through the hollow shaft, as a low-pressure region is formed when the agitator is rotating. Note that the power demand of the agitator strongly depends on the rotation speed, but even more on the diameter of the agitator:

  P = Ne ⋅ n³ ⋅ d⁵ ⋅ ρL ,    (10.15)

  where Ne is the so-called Newton number, characterizing the particular agitator (see Chapter 9.1).
– By compressing the gas. An example is the bubble column (Figure 10.5). The gas is led into the reactor at the bottom through a nozzle holder, which disperses the gas into small bubbles to increase the mass transfer area between gas and liquid. Different types of circulation can be set up. By installing perforated plates, even a cascade can be realized.
– By a liquid pumparound. The dispersion is achieved in a two-component jet. By accelerating the liquid in the jet, again a low-pressure region is formed which sucks in the gas.

Concerning the energy input, agitators and liquid pumparounds are more effective than gas compression. For all options, high viscosities can cause problems.

An interesting option for performing reactions determined by the equilibrium is reactive distillation. Its great advantage is that reaction and separation are carried out in one piece of equipment. If the reaction is exothermic, the enthalpy of reaction can easily be used for vapor generation. Because of the vapor-liquid equilibrium, there is a given limit for the temperature, so the danger of undesired side reactions or even runaway reactions is small. However, the main and outstanding advantage is that the conversion can exceed the conversion given by the reaction equilibrium. If the boiling points are appropriate, one of the products can be removed from the reactants due to the distillation effect, driving the equilibrium to the product side. A well-known example is the esterification reaction of methanol with acetic acid, giving methyl acetate. As methyl acetate forms azeotropes with water and with methanol, the purification of the ester produced is rather complex and requires several columns in the conventional path [8]. With reactive distillation, only one single column is necessary (Figure 10.6).

Figure 10.6: Reactive distillation for the production of methyl acetate.

The design of reactive distillation columns is much less certain than for conventional distillations. There is a complex interplay between phase equilibria and reaction kinetics, where the kinetic equations must be formulated using activities instead of concentrations (Chapter 10.1). The performance of trays and packings with respect to the chemical reaction can hardly be predicted. Laboratory tests should be performed, and if there are doubts about the scale-up, experiments in larger equipment might be useful. One of the drawbacks of reactive distillation is that heterogeneous catalysts are difficult to exchange. Figure 10.7 shows the Katapak packing (Sulzer). It is a structured packing where the catalyst is put into small bags to remain stationary. The exchange of the catalyst is a great effort and must be done completely manually. Homogeneous catalysts usually cause much less trouble and can often be removed by distillation. However, the column diameter is still determined by the residence time for the reaction. The simultaneous optimization of the hydrodynamics cannot take place, and from that point of view the column often has a severe underload. Another option is the use of external reactors (Figure 10.8). They can be used for slow reactions as well, as the residence time in the reactive zone can be adjusted by conventional reactor design. The scale-up is easy, as there is no coupling between reaction and hydrodynamics. The catalyst exchange is easier, as well as the design. An extensive piloting is usually not necessary. Reactive distillation is more thoroughly discussed in [316].


Figure 10.7: Sulzer Katapak. © Sulzer Chemtech Ltd.

Figure 10.8: Reactive distillation with an external reactor.

For standard reactors, process simulators offer several options for the representation:
– The simplest and most widely used one is the stoichiometric reactor. As input, the stoichiometry of the various reactions and the conversion referring to one of the reactants are necessary. There is the option that the conversion factors refer to the amount at the reactor inlet or that the reactions take place in a sequence, where the conversion refers to the amount at the beginning of this reaction.
– The yield reactor is an even simpler one. The outlet concentration can be specified; the mass flow is kept constant. The disadvantage is that alchemy can be simulated with this option; it is no problem to specify that water is turned into gold. It is often used when the reactions taking place are more or less unknown, e.g. in fermenters in biotechnology.
– There are several options to define an equilibrium reactor. In the first option, the reactions taking place must be specified. Either the equilibrium constants can be given, or the simulator calculates the equilibrium using the Gibbs energies (Equation (10.11)). Multiple phases can be considered as well. Alternatively, only the possible reaction products can be listed without defining the reactions themselves. The program can then find the corresponding minimum of the Gibbs energy.
– Tubular reactors (as plug-flow reactors) and continuous and batch stirred tank reactors can be specified using reaction kinetics.

11 Mechanical strength and material choice

Even a process engineer should know that the crack in the sausage on the grill is in longitudinal direction. (Olaf Stegmann)

The statement refers to a discussion where it was assumed that the worst case for a pipe was a sudden break through the cross-flow section. In fact, this is very unlikely to happen, as Figure 11.1 indicates. When the pipe is pressurized, the equilibrium of forces in longitudinal direction is¹

σa ⋅ π D s = p ⋅ π D²/4 ,    (11.1)

giving

σa = p D/(4 s) ,    (11.2)

where σa is the mechanical tension in the axial or longitudinal direction. In circumferential direction, the equilibrium of forces yields (Figure 11.1)

σt ⋅ 2 s L = p D L ,    (11.3)

giving

σt = p D/(2 s) ,    (11.4)

with σt as the mechanical tension in the tangential or circumferential direction. Thus, the tension in the circumferential direction is twice as large as in the longitudinal direction, and as long as there is no predetermined breaking point, the pipe would burst in the circumferential direction at overpressure, causing a longitudinal crack (Figure 14.11). If it bursts at all. There is a guideline statement called “leak before burst”. The established pressure vessel standards require designs which favor “leak before burst”, allowing the fluid to escape and reducing the pressure before the damage is growing so large that a complete fracture takes place. Usually, the weakest parts against pressure load are the gaskets, and one can easily imagine that these will show leakage first. Equation (11.4) is the so-called boiler formula. It is the foundation of many formulas for mechanical stability calculations for vessels, where the elastic limit is inserted for the tension. In the particular technical guidelines, Equation (11.4) is supplemented by terms describing manufacturing uncertainties, corrosion allowances, safety factors and

1 The wall thickness s is assumed to be small compared to the diameter. https://doi.org/10.1515/9783111028149-011


Figure 11.1: Illustration of the vessel formula.

the influence of welding seams. Also, different ratios between outer and inner diameter are regarded [150]. Before the mechanical stability can be considered, the design temperatures and design pressures² have to be assigned for each piece of equipment and the adjacent piping. These are the conditions at which safe operation is definitely possible without restrictions, even if they are both reached at the same time. In fact, as standard wall thicknesses are used in the manufacturing process, the equipment will be able to cope with even higher pressures, since not the required but the next larger standard wall thickness will be taken. It has to be proved that the equipment can withstand the design pressure, usually by pressurizing with liquid water. In most cases, the test cannot realize both design pressure and design temperature at the same time; therefore, a conservative temperature correction is performed, giving a test pressure higher than the design pressure at the lower temperature. Also, for tall pieces of equipment (e.g. columns) this kind of test systematically yields higher pressures than requested, as the whole apparatus is pressurized to the test pressure, but the lower part is additionally exposed to the hydrostatic pressure. In operation, pressure vessels are in most cases protected by pressure relief devices such as safety valves or rupture discs (Chapter 14.2), which actuate at the design pressure. Again, the definition of design pressure turns out to be soft, as a safety valve starts to open at the design pressure but fully opens at a pressure which is 10 % above, so that the design pressure is again systematically exceeded. In the design basis, a rule must be set up for how the values for design temperature and design pressure are determined. From process simulation, the normal or maximum operating conditions are known.
The design conditions are related to these values, often by a factor (1.1–1.25) for the design pressure and by an offset for the design temperature. In some cases, the operating conditions are not decisive. For example, the design temperature of heat exchangers must refer to the highest possible temperature of the hot side. In case the apparatus is cleaned by steamout,³ it can happen that the corresponding steam determines the design conditions.

² Design pressures refer to overpressures, and the unit is therefore “MPag” or “barg”, where the “g” stands for “gauge”, i.e. overpressure. In contrast, the letter “a” for “absolute” indicates that absolute pressures are meant, e.g. MPaa or bara.
³ See Glossary.
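The boiler formula (11.4) and its longitudinal counterpart (11.2) can be sketched in a few lines. The pressure, diameter, and allowable stress below are hypothetical illustration values, and the sketch contains none of the weld factors, tolerances, or corrosion allowances required by the actual guidelines [150].

```python
def stresses(p, d, s):
    """Tangential and axial stress of a thin-walled pipe, Eqs. (11.4) and (11.2).
    p: overpressure, d: diameter, s: wall thickness (consistent units)."""
    sigma_t = p * d / (2 * s)   # circumferential (tangential) direction
    sigma_a = p * d / (4 * s)   # longitudinal (axial) direction
    return sigma_t, sigma_a

def min_wall_thickness(p_design, d, sigma_allow):
    """Minimum wall thickness from the boiler formula, s = p d / (2 sigma)."""
    return p_design * d / (2 * sigma_allow)

# hypothetical vessel: 16 bar(g) = 1.6 N/mm2, D = 1000 mm, 120 N/mm2 allowable stress
s_min = min_wall_thickness(1.6, 1000.0, 120.0)   # about 6.7 mm before allowances
```

The factor of two between the stresses reproduces the statement above that an overpressurized pipe bursts with a longitudinal crack.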


Figure 11.2: Ductile fracture and brittle fracture. © BradleyGrillo/Wikimedia Commons/CC BY-SA 3.0. © Sigmund/Wikimedia Commons/CC BY-SA 3.0. https://creativecommons.org/licenses/by-sa/3.0/deed.de.

Process equipment can also be exposed to low temperatures. The materials of construction can undergo a transition from ductile to brittle behavior at low temperatures, which increases the risk of brittle fracture [166, 167], one of the most critical damages. Material failure should be of the ductile type, meaning that plastic deformation takes place before complete destruction. This gives at least some time to react. In brittle fracture, the material failure occurs suddenly. Figure 11.2 shows the difference between brittle fracture and ductile fracture. It can easily be seen that for the brittle fracture no deformation takes place. Therefore, the specification of the equipment should include an indication of the minimum design metal temperature (MDMT). The case which is most often relevant for the determination of the MDMT is the so-called auto-refrigeration case. If a low-boiling substance is stored in a vessel under pressure in the liquid state, it will evaporate in case of pressure relief. The temperature of the liquid will follow the boiling point curve. For example, a vessel containing liquid propylene at p = 10 bar will cool down to t = −48 °C, the boiling temperature of propylene at p = 1 bar, if a complete pressure relief down to ambient pressure takes place. If there is a mixture in the vessel, the boiling temperature varies according to the vapor-liquid equilibrium, possibly ending up at the boiling temperature of the highest-boiling component or azeotrope. However, it usually cannot be guaranteed that at pressure relief the component with the highest boiling point is actually in the vessel. Therefore, in most cases the lowest boiling point is relevant for the design. Certainly, one should not relate the MDMT to the design pressure of the vessel. Especially for the case of pressure relief described above, it becomes clear that the pressure to which the vessel

is exposed strongly decreases with the temperature. Therefore, the minimum allowable temperature (MAT) should be defined as a function of pressure. Construction engineers should at least be provided with single temperature-pressure pairs for the various cases to avoid overdesign. It is often underestimated that an overpressure from outside can be critical for the mechanical stability of equipment or piping. This occurs if the equipment is evacuated according to the process conditions or if the equipment is emptied with a vacuum pump. If such conditions occur, it should be indicated in the specification, e.g. by an additional design pressure pDes = −1 barg. Things become more complicated if the pressure exposure from outside is higher than 1 bara. In double pipes, steam on the annular side might have a considerably higher pressure than the product in the inner tube. This should be indicated in some way on the datasheet, e.g. by specifying the overpressure from outside with a negative sign as in the vacuum case. An indication like pDes = −16 barg will hopefully attract the attention of the vendor or at least cause a further inquiry, so that a damage as shown in Figure 11.3 can be prevented.

Figure 11.3: Damage of the inner tube of a double pipe due to wrong specification.

Mechanical stability is highly related to the use of appropriate materials. Also, the chemical stability must be guaranteed. To withstand aggressive chemicals, a number of special materials are available. Often, the quality of the surface is also an issue, e. g. in bio-processes, where microorganisms tend to stick to rough surfaces. In chemical plants, the materials used can be divided into metals (steel, aluminium, nickel, titanium), nonmetallic materials (ceramics, graphite, glass), and polymers. The choice of the materials should be performed by a specialist. Critical issues for the choice of the material are [74]: – strong organic or inorganic acids; – sour gases, especially bromine and chlorine; – fluoride, chloride and bromide ions; – caustics, especially at high temperatures;


– hydrogen, which can diffuse through many materials and is explosive over a wide concentration range.

Corrosion data can often be found in the literature. If sufficient information is not available, a material test can be helpful, where a piece of metal is exposed to the medium for a certain time. This procedure is easy but time-consuming. The worst kind of corrosion is stress corrosion, caused e.g. by chlorides in stainless steel pipes after very short exposure times. Long-term corrosion can be considered by a corrosion allowance in the design phase [255]. The terms used for the identification of certain steels are a bit confusing, as different guidelines use completely different systems. For example, the well-known V2A steel is called 1.4301 according to the material number (DIN EN 10088-1), X5CrNi18-10 according to the chemical composition, and 304 according to the AISI standard. A comprehensive compilation of all issues concerning the material choice can be found in [168].

12 Piping and measurement

A tube has an excavation which is as long as the tube itself. (Reportedly from a handbook of construction)

For an economist, a tube is nothing else than an inverse submarine. The difference is just that the liquid is inside and the people are outside. (Felix Petersen)

The transport of gaseous and liquid substances between the different pieces of equipment is achieved in pipes, which form a major part of a chemical plant. The piping engineers are the ones with the most interaction with other activities; for instance, detailed piping cannot be performed unless the nozzles of the vessels have been put in place. A stepwise working procedure with continually increasing generation of information is typical for piping. While piping activities are more or less concentrated in the detailed engineering, the basis is in fact provided by process engineering, as the piping dimensions are mainly determined by the process. This chapter focuses on the process aspects of piping and on related items like valves and measurement devices. The important terms concerning the planning of the piping are explained.

12.1 Pressure drop calculation

12.1.1 Single-phase flow through pipes

Pressure drop calculations in pipes are standard tasks in process engineering. The following chapter explains the common equations which are widely used in process engineering. It should be mentioned that all equations refer to Newtonian fluids. For non-Newtonian flow, where the viscosity depends on the shear forces, other procedures are necessary [169]. With the tube length L and the friction number λ, the relationship between pressure drop and velocity is given by Equation (12.1):

Δp = λ (L/dh) ρw²/2 ,  (12.1)

where dh is the hydraulic diameter. For different geometries, it is defined by

dh = 4 A/U ,  (12.2)

where A is the cross-flow area and U the circumference. For circular tubes, Equation (12.2) yields of course the inner tube diameter dh = d. For circular rings, Equation (12.2) gives

dh = da − di ,  (12.3)


Figure 12.1: Moody diagram. © S. Beck and R. Collins, University of Sheffield/Wikimedia Commons/CC BY-SA-3.0. https://creativecommons.org/licenses/by-sa/3.0/deed.en.

with da as the outer and di as the inner diameter. For a rectangular cross-section with two different side lengths s and w, the hydraulic diameter is

dh = 2sw/(s + w)  (12.4)

Using the Reynolds number

Re = w dh ρ/η ,  (12.5)

the friction factor λ as a function of the Reynolds number can be determined with the well-known Moody diagram (Figure 12.1). For laminar flow (Re < 2320), there is a strict theoretical relationship independent of the surface roughness of the pipe according to the Hagen–Poiseuille law [150],

λ = (64/Re) φ  (12.6)

The factor φ is 1 for circular tubes. For noncircular cross-sections, the factor can be determined using Tables 12.1 and 12.2 [150].

Table 12.1: Circular ring.

da/di   1      5      10     20     50     100
φ       1.5    1.45   1.4    1.35   1.28   1.25

Table 12.2: Rectangular cross-section.

s/w     0      0.1    0.3    0.5    0.8    1.0
φ       1.5    1.34   1.1    0.97   0.9    0.88

For the turbulent flow, the roughness of the inner pipe surface plays a major role and is represented by the roughness k, which can be interpreted as the size of sand grains on the surface. Guide values are given in [150]. For smooth pipes made of steel, k can be set to k = 0.02–0.06 mm; for other materials (Cu, Al, glass, polymer) k = 0.001–0.0015 mm can be achieved. One should be aware that k increases with time. For the turbulent flow in a smooth pipe (Re ≥ 2320, k Re < 65 dh), the friction factor can be determined by the formula of Prandtl and v. Kármán [150]:

λ = [2 lg(Re √λ / 2.51)]⁻² ,  (12.7)

which has a theoretical background [170] and is valid in the whole range. Its disadvantage is that it must be solved iteratively. There are two popular approximations, the Blasius equation

λ = 0.3164/Re^0.25 ,  (12.8)

valid for 2320 < Re < 100000, and the formula

λ = 0.309/[lg(Re/7)]² ,  (12.9)

which can also be applied in the whole range [150]. Figure 12.2 shows that it is really worth taking care of the application ranges.

Figure 12.2: Comparison between Equations (12.8) (· · ·), (12.7) (—), and (12.9) (- - -).


For rough surfaces, the formulas of Colebrook [150]:

λ = [−2 lg(2.51/(Re √λ) + 0.27 k/dh)]⁻²  (12.10)

for Re ≥ 2320, 65 dh < k Re < 1300 dh; and Nikuradse [150]:

λ = [2 lg(3.71 dh/k)]⁻²  (12.11)

for Re ≥ 2320, k Re > 1300 dh can be used. Equation (12.12) covers any case in the turbulent region without iteration. It has been obtained by inserting Equation (12.9) into Equation (12.7):

λ = [−2 lg((5.02/Re) lg(0.22075 Re/lg(Re/7)) + 0.27 k/dh)]⁻²  (12.12)
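Equation (12.12) lends itself to a short helper function. The sketch below assumes turbulent flow (Re ≥ 2320) and takes the relative roughness k/dh as input (0 for a hydraulically smooth pipe).

```python
import math

def friction_factor(re, k_over_dh=0.0):
    """Explicit turbulent friction factor according to Eq. (12.12)."""
    arg = 5.02 / re * math.log10(0.22075 * re / math.log10(re / 7.0)) \
          + 0.27 * k_over_dh
    return (-2.0 * math.log10(arg)) ** -2

# smooth pipe at Re = 1e5: close to the iterative Prandtl/v. Karman value
lam_smooth = friction_factor(1.0e5)
# rough pipe at high Re: approaches the Nikuradse limit, Eq. (12.11)
lam_rough = friction_factor(1.0e7, k_over_dh=0.001)
```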

During the process engineering of a whole plant, the diameters of a large number of lines have to be sized, which requires excellent documentation and workflow. It is a good approach to implement the friction factor equations (12.6)–(12.11) in an EXCEL file and make a list of the lines to be sized. A quantity which can easily be interpreted is the pressure drop per 100 m tube length according to Equation (12.1). Varying the tube diameter, the calculated pressure drops can be compared with the corresponding values which are usually defined in the guidelines of the particular companies. Also, it can be checked whether the velocities in the pipe are reasonable, e.g. 1–2 m/s for liquids and 10–20 m/s for gases. Figure 12.3 gives a rough orientation. All cases regarded have in common that for larger diameters higher velocities can be allowed, as in this case the

Figure 12.3: Recommended velocities in a pipe.

friction with the tube wall plays only a minor role. It should also be mentioned that diameters below 2″ are rarely applied for static reasons.

It is worth thinking over the meaning of this procedure. It should be clear that the guideline values for the pressure drop are recommended values. They are compromises between the additional investment costs for pipes with larger diameters and higher operation costs due to larger pressure drops for pipes with smaller diameters. Anyway, the tube does not fail if the recommended values are exceeded. It makes sense to accept them if no other requirements for the pipe exist, e.g. special pressure drop constraints if the compressor or the pump is limited or if the guidelines for safety valves must be obeyed (Chapter 14.2). They can be completely ignored if the line leads to a valve where the pressure is significantly lowered anyway. There is often an overreaction when the recommended value is slightly exceeded. When the next diameter is chosen, it often happens that the pressure drop goes down to very low values, indicating that the tube is by far overdesigned. This becomes clear when Equation (12.1) is written with the mass flow and, for simplicity, the Blasius Equation (12.8):

Δp = 0.3164 Re⁻⁰·²⁵ (ρ/2) (ṁ/(ρA))² L/d  (12.13)

With A = πd²/4 one gets

Re = wdρ/η = 4ṁ/(πdη) = (4ṁ/(πη)) d⁻¹  (12.14)

and, after summarizing constant terms,

Δp = C₁ d^0.25 C₂ d⁻⁴ L/d = C d⁻⁴·⁷⁵  (12.15)

This illustrates the dramatic dependence of the pressure drop on the tube diameter, as does the following example. In this context, it should be mentioned that a given tube diameter refers neither to the inner nor to the outer diameter. It refers to the piping class, and for important pressure drop calculations the exact inner diameter should be inquired.

Example
A cooling water stream (ṁ = 50000 kg/h, t = 28 °C, p = 6 bar) is pumped to a consumer unit. The recommended maximum pressure drop is Δp = 0.2 bar per 100 m. Calculate the appropriate tube diameter. The tube shall be hydraulically smooth.


Solution
First, the corresponding physical property data are ρ = 996.46 kg/m³ and η = 0.8324 mPa s [29]. The first approach will be a 4″ tube, giving d ≈ 101.6 mm (1″ = 25.4 mm). Then, the Reynolds number is (Equation (12.14))

Re = 4ṁ/(πηd) = (4 ⋅ 50000 kg/h)/(π ⋅ 0.8324 mPa s ⋅ 101.6 mm) = 209099  (12.16)

According to Prandtl/v. Kármán (Equation (12.7)), the friction factor is evaluated iteratively to be λ = 0.0155. Then, using the velocity

w = ṁ/(ρA) = 50000 kg/h / (996.46 kg/m³ ⋅ (π/4) (101.6 mm)²) = 1.72 m/s ,  (12.17)

the pressure drop per 100 m is equal to (Equation (12.1))

Δp100 m = 0.0155 ⋅ (100 m/101.6 mm) ⋅ 996.46 kg/m³ ⋅ (1.72 m/s)²/2 = 0.225 bar  (12.18)

This is slightly larger than the recommended 0.2 bar/100 m, at a reasonable velocity. Increasing the tube diameter to 6″, which is the next standard nominal diameter, the resulting pressure drop per 100 m becomes Δp = 0.03 bar, which is far below the limit. Probably, it is the more reasonable decision to stay at d = 4″.
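The sizing check of this example can be reproduced in a few lines. The sketch iterates Equation (12.7) directly and uses the example's property data.

```python
import math

def prandtl_von_karman(re, lam=0.02, tol=1e-12):
    """Iterative friction factor for smooth pipes, Eq. (12.7)."""
    for _ in range(100):
        lam_new = (2.0 * math.log10(re * math.sqrt(lam) / 2.51)) ** -2
        if abs(lam_new - lam) < tol:
            break
        lam = lam_new
    return lam

mdot = 50000.0 / 3600.0        # kg/s
rho, eta = 996.46, 0.8324e-3   # kg/m3, Pa s [29]
d = 0.1016                     # 4" tube, m
area = math.pi * d**2 / 4
re = 4 * mdot / (math.pi * eta * d)           # Eq. (12.14)
lam = prandtl_von_karman(re)
w = mdot / (rho * area)                       # velocity, Eq. (12.17)
dp_100m = lam * 100.0 / d * rho * w**2 / 2    # Pa per 100 m, Eq. (12.1)
```

dp_100m evaluates to roughly 0.22 bar, matching Equation (12.18) within rounding.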

12.1.2 Pressure drops in special piping elements

For special piping elements, the pressure drop is usually calculated via

Δp = ζ ρw²/2  (12.19)

A number of ζ-values according to [150] is listed in Appendix B.

12.1.3 Pressure drop calculation for compressible fluids

For gas flows, the procedure described in Chapter 12.1.1 is only valid if the flow can be regarded as incompressible. This can be checked with the Mach number (Ma), the ratio between the actual velocity and the speed of sound. The velocity should stay below 30 % of the speed of sound:

Ma = w/w* < 0.3  (12.20)

The speed of sound can be expressed as [11]

(w*)² = −v² (∂p/∂v)s  (12.21)

With a pressure-explicit equation of state, it can be determined via [11]

(w*)² = v² [(T/cv) ((∂p/∂T)v)² − (∂p/∂v)T]  (12.22)

with

cv = cv^id + T ∫∞^v (∂²p/∂T²)v dv  (12.23)

For ideal gases, Equation (12.21) gives

(w*)² = κRT  (12.24)

with κ = cp^id/cv^id, where cp^id and cv^id are assumed to be constant and not a function of temperature. The speed of sound is a limiting velocity for a gas flow in a pipe. Except for the case that the pipe has the shape of a Laval nozzle with a minimum in the cross-flow area (Figure 12.4), thermodynamics ensures that the speed of sound is not exceeded [154].

Figure 12.4: Shape of a Laval nozzle.
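For a quick check of the criterion (12.20), Equation (12.24) is easily evaluated. Nitrogen at ambient temperature serves as a hypothetical example, with κ and the molar mass taken as rough handbook values.

```python
import math

R = 8.314462   # universal gas constant, J/(mol K)

def speed_of_sound_ideal(kappa, molar_mass, temp):
    """Eq. (12.24) for an ideal gas; kappa assumed constant.
    molar_mass in kg/mol, temp in K, result in m/s."""
    return math.sqrt(kappa * R / molar_mass * temp)

w_star = speed_of_sound_ideal(1.4, 0.028, 293.15)   # nitrogen, about 20 degC
w_limit = 0.3 * w_star   # above this, the flow should be treated as compressible
```

w_star comes out near 350 m/s, so a nitrogen line at ambient temperature can be treated as incompressible only up to roughly 105 m/s.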

Due to the relatively high velocities for compressible flows, large pressure drops occur, often changing the state variables of the stream significantly. Due to the pressure loss, the density is reduced. This means that at constant mass flow the volume flow and therefore the velocity increase, which in turn produces further increased pressure drops. The density decreases further, and finally the pressure drop escalates in a vicious circle. This behavior can be summarized in the following striking formula:

“Pressure drop causes pressure drop”

This is an important issue, especially for the design of the outlet lines of pressure relief devices (Chapter 14.2). For compressible flow, the pressure drop calculation is demonstrated in the following example. Because of the changes of the fluid state variables, it is useful to divide the tube into small increments and calculate them sequentially one by one, where the state is updated at the inlet of each segment. The procedure can be performed in an EXCEL file or with the help of a process simulator, which usually offers the incremental calculation as an option. It is important to know that a conventional pressure drop calculation with a given mass flow usually yields a solution which is not realistic. At the end of the pipe, an


incompressible fluid must end up with the given outlet pressure. If the pressure drop is too large, the conclusion is that the mass flow cannot be realized; due to choking, it will be less. If the pressure drop is too low, the fluid will expand in a way that the inlet pressure is lowered to an extent where the outlet pressure is met. The same holds in principle for a compressible flow; however, the mentioned expansion is coupled with a significant change of the state. Furthermore, another restriction is that the speed of sound cannot be exceeded in a pipe. If it is reached upstream of the pipe outlet, it is clear that the assumed mass flow is too high and choking takes place. At most, the speed of sound can be reached directly at the pipe outlet. In this case, the outlet pressure will not be met; instead, the fluid will expand directly after it has left the pipe. The following example should illustrate this. It might look a bit exotic; however, cases like this occur in outlet lines of rupture discs (Chapter 14.2).

Example
A nitrogen flow (ṁ = 30000 kg/h, p = 100 bar, t = 20 °C) enters a line (L = 50 m) which ends at a header with p = 1 bar. Choose an appropriate diameter for the line. The tube shall be hydraulically smooth.

Solution
Various diameters are tested with a tube increment length of ΔL = 0.1 m. The results are illustrated in Figures 12.5 and 12.6.
– The 1″ line is too narrow for the given mass flow. At the inlet, the Mach number is already 0.38. Due to the “pressure drop causes pressure drop” effect, the fluid continuously expands, and the velocity rises increasingly. After almost 14 m, the speed of sound is reached and choking takes place (Figure 12.5). The given mass flow cannot pass the pipe as assumed.
– For a 2″ line, things are more difficult. Starting with p = 100 bar at the inlet, the pressure drop of the line is Δp = 6 bar, giving p = 94 bar at the outlet, and the Mach number increases very smoothly from Ma = 0.09 at the inlet to Ma = 0.1 at the outlet (Figure 12.5). The conclusion is that the pipe diameter is sufficient; however, one should imagine what really happens in the pipe; especially, the design pressure of the pipe (Chapter 11) might be interesting. For this purpose, it is assumed that an adiabatic expansion happens at the inlet, where the velocity change is not neglected in the First Law:

  h₁ + w₁²/2 = h₂ + w₂²/2  (12.25)

  In an iterative procedure, it is found that an expansion to p = 33.7 bar at the inlet gives a possible result. At the outlet of the pipe, the speed of sound is reached, while the pressure only drops to 9.1 bar (Figure 12.6). The fluid will rapidly expand after leaving the pipe.
– While the 4″ and the 6″ lines show a similar behavior, the 8″ line works in a different way. Again, the inlet pressure of p = 100 bar would yield a pressure drop which is by far too low (Δp = 7 mbar, Figure 12.5), and expansion will take place. Iteratively, an expansion to p = 1.49 bar at the inlet can be determined. At the pipe outlet, p = 1 bar is reached, while the Mach number Ma = 0.6 indicates that the velocity is below the speed of sound (Figure 12.6).


Figure 12.5: Pressure courses without expansion at the inlet.

Figure 12.6: Pressure courses with expansion at the inlet.
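The incremental procedure can be sketched for the much simpler case of an isothermal ideal gas with Blasius friction. The example above additionally uses real-gas properties and an adiabatic inlet expansion, so the numbers produced by this sketch are illustrative only; the gas data and pipe dimensions below are hypothetical.

```python
import math

R_S = 296.8    # specific gas constant of nitrogen, J/(kg K) (ideal gas assumed)
KAPPA = 1.4    # isentropic exponent, assumed constant
ETA = 1.8e-5   # dynamic viscosity, Pa s (rough value)

def march(p_in, mdot, d, length, temp, n_seg=500):
    """Segment-wise pressure course for isothermal ideal-gas pipe flow.
    Returns a list of (position, pressure, velocity, Mach) and a choking flag."""
    area = math.pi * d**2 / 4
    dx = length / n_seg
    w_sound = math.sqrt(KAPPA * R_S * temp)   # Eq. (12.24)
    re = mdot * d / (area * ETA)              # constant along the pipe
    lam = 0.3164 / re**0.25                   # Blasius, Eq. (12.8), as a sketch
    p, profile = p_in, []
    for i in range(n_seg):
        rho = p / (R_S * temp)                # state update at segment inlet
        w = mdot / (rho * area)
        profile.append((i * dx, p, w, w / w_sound))
        if w >= w_sound:
            return profile, True              # choking: assumed flow impossible
        p -= lam * dx / d * rho * w**2 / 2    # Eq. (12.1) for one segment
        if p <= 0.0:
            return profile, True
    return profile, False
```

Rerunning with increasing mass flow shows the “pressure drop causes pressure drop” effect: the velocity grows along the pipe until the sonic limit terminates the march.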

12.1.4 Two-phase pressure drop

The evaluation of the pressure drop of a one-phase flow is a quite exact one with a well-defined theory behind it. Things become much more complicated when a second phase comes into play. Especially vapor-liquid flows have a great technical importance. The pressure drop of a two-phase flow is characterized by the friction between the phases, which is hardly predictable. This friction causes the pressure drop to be higher than expected; it is usually underestimated, as even the apparently most conservative assumption of a pure vapor flow is not on the safe side, as Figure 12.7 shows. The pressure drop of water in the two-phase region at p = 1.1 bar is considered for various vapor fractions. It can be seen that the obvious approach of averaging the pressure drops of the vapor and the liquid flow systematically underpredicts the two-phase pressure drop. For high vapor fractions, the two-phase pressure drop exhibits a well-defined maximum; even the assumption of a pure vapor flow as mentioned above yields lower pressure drops. As explained below, the horizontal and the vertical upward and downward flows have to be distinguished for the calculation of the two-phase pressure drop. The best calculations for the two-phase pressure drop are probably the ones used in the commercial heat transfer programs, as they are decisive for the design of thermosiphon reboilers. However, to the knowledge of the author, they have not been published. The most popular published correlations are the ones of Lockhart-Martinelli [171], Friedel [172–174], and Beggs-Brill [175]. In the following, the Friedel method is exemplarily explained, which is considered to be the most reliable one because of its large


Figure 12.7: Typical curvature of the two-phase pressure drop with respect to the vapor fraction. Calculated with the Friedel equation.

database. However, errors up to 50 % might still occur. The method determines the two-phase factor R2Ph, which represents the ratio of the pressure drops of the two-phase flow and of a one-phase liquid flow with the same mass flow:

Δp2Ph = R2Ph ΔpL  (12.26)

ΔpL is the pressure drop according to Equation (12.1), where the total mass flow of the two-phase stream is replaced by a fully liquid stream:

ΔpL = λL (L/dh) (ṁ/A)²/(2 ρL)  (12.27)

With the Reynolds numbers for both phases j = L, G

Rej = ṁ dh/(A ηj)  (12.28)

and the vapor mass fraction

x = ṁG/(ṁG + ṁL) ,  (12.29)

an auxiliary quantity A* can be calculated as

A* = (1 − x)² + x² (ρL λG)/(ρG λL)  (12.30)

It must be emphasized that for ṁ in fact the total mass flow has to be used. In contrast to the single-phase flow, there is no discontinuity for λj at the laminar/turbulent transition, but a continuous transition region.

For a circular cross-flow area, the friction factor is

λj = 64/Rej  for Rej ≤ 1055  (12.31)

and

λj = [0.86859 ln(Rej/(1.964 ln Rej − 3.8215))]⁻²  for Rej > 1055  (12.32)

For geometries differing from the circular cross-flow area, the following changes have to be considered:
– For a rectangular cross-flow area, the hydraulic diameter (Equation (12.2)) must be used:

  Rej,rectangle = Ψ (ṁ/A) dh/ηj ,  (12.33)

  with

  Ψ = 2/3 + (11/24)(s/w)(2 − s/w) ,  (12.34)

  where s is the length of the shorter and w the length of the larger side. The other steps are analogous to the circular cross-flow area.
– For circular ring cross-flow areas, there are the relationships

  λj = 64/Rej  for Rej ≤ 1055  (12.35)

  and

  λj = [2 lg(Rej √λj) − E]⁻²  for Rej > 1055  (12.36)

  Equation (12.36) must be solved iteratively. E can be determined according to the following table:

  di/da   0      0.05    0.3     0.8     1.0
  E       0.8    0.932   0.961   0.968   0.97

  Between the values for di/da, linear interpolation can take place. For the evaluation of the Reynolds number Re, again the hydraulic diameter (Chapter 12.1) must be used.

With the Froude number

FrL = ṁ²/(A² g dh ρL²)  (12.37)


and the Weber number

WeL = ṁ² dh/(A² σ ρL) ,  (12.38)

R2Ph can be determined to be
– for the horizontal and the vertical upward flow:

  R2Ph = A* + 3.43 x^0.685 (1 − x)^0.24 (ρL/ρG)^0.8 (ηG/ηL)^0.22 (1 − ηG/ηL)^0.89 FrL^−0.047 WeL^−0.0334 ,  (12.39)

– and for the vertical downward flow:

  R2Ph = A* + 38.5 x^0.76 (1 − x)^0.314 (ρL/ρG)^0.86 (ηG/ηL)^0.73 (1 − ηG/ηL)^6.84 FrL^−0.0001 WeL^−0.087  (12.40)

The Friedel equations are valid for the whole vapor fraction range 0 < x < 1. As limiting cases, one obtains the corresponding single-phase equations for vapor and liquid, except in the transition region from laminar to turbulent flow. In case the influence of the roughness of the tube wall is not negligible (k ReG > 65 dh), the result should be compared with the one for the pure vapor flow. One should always be aware that due to the relatively high pressure changes the vapor fraction along the tube might vary significantly. In these cases, it makes sense to divide the tube into small increments and evaluate the pressure drop increment by increment, as is often necessary for compressible flow as well (Chapter 12.1.3). As for the compressible flow, the statement “Pressure drop causes pressure drop” holds. Current process simulators usually offer an option to specify such a calculation.

For the flow through piping elements, only the pipe elbow is sufficiently discussed. For the 90°-elbow, Muschelknautz [176] specifies the following procedure:

B = 1 + 2.2/[λ (L/d) (2 + r/d)]  (12.41)

R2Ph = 1 + (ρL/ρG − 1) [B x (1 − x) + x²]  (12.42)

with r as the elbow radius. Accordingly, one obtains for the pressure drop

Δp2Ph = λ (L/d) (ṁ/A)²/(2 ρL) R2Ph  (12.43)

Equations (12.41) and (12.42) are only valid for the 90°-elbow. For other piping elements, the only remaining option is to define an equivalent tube length according to

L = ζ dh/λ  (12.44)

and calculate the pressure drop along this artificially defined pipe. λ is then calculated using Equations (12.6), (12.7), (12.10), or (12.11), depending on the conditions.

Besides the pressure drop, the flow pattern of a vapor-liquid flow is important. The patterns are shown in Figure 12.8, taken from [177]. For vertical upward flow, the patterns are:
– Bubble flow (A): A large quantity of bubbles is present which are almost homogeneously mixed with the liquid. The liquid phase is still wetting the whole tube wall.

Figure 12.8: Flow patterns of vapor-liquid two-phase flow for horizontal (upper pictures) and vertical upward flow (lower pictures) [177]. © Springer-Verlag GmbH.

– Slug flow (B): Very large bubbles are formed which have a length that is by far larger than the diameter of the tube. When they end, they are followed by liquid flow with low vapor content. At a tube bend or a transition piece, this liquid will hit the tube wall and cause mechanical damage with time. Slug flow should be avoided.
– Chaotic flow (C): Large and small bubbles are randomly distributed.
– Wispy annular flow (D): The liquid is predominantly distributed around the tube wall. Vapor and swarms of droplets are in the tube core.
– Annular flow (E): The liquid is almost entirely distributed around the tube wall, only few droplets are suspended in the vapor flow in the tube core.

For horizontal flow, the patterns are (Figure 12.8):
– Bubble flow (a): The vapor phase forms small bubbles. Due to the influence of gravity, they are distributed in the liquid in the upper part of the tube.
– Stratified flow (b): The vapor phase is in the upper part, the liquid phase is in the lower part of the tube. There are no waves at the phase boundary.
– Wavy flow (c): Similar to stratified flow (b), but with waves at the phase boundary.
– Slug flow (d): Similar to wavy flow, but the waves can occupy the whole cross-section. There is an increased occurrence of bubbles in the liquid phase and droplets in the vapor phase. Again, there is the danger of mechanical damage to the tube.
– Annular flow (e): The tube wall is fully wetted, but the liquid ring formed in the cross-section is asymmetric, with more liquid at the bottom than at the top of the tube. The vapor phase is in the center of the tube, with many liquid droplets.

In the diagrams in Figure 12.8, the coordinates of the X- and Y-axes are defined as follows. For the vertical flow:

i_l,0 = ṁ² (1 − x)² / (A² ρ_L)  (12.45)

i_g,0 = ṁ² x² / (A² ρ_V)  (12.46)

and for the horizontal flow:


X = [(Δp/Δx)_L]^0.5 / [(Δp/Δx)_V]^0.5,  (12.47)

where the Δp values refer to the situation where, respectively, the vapor and the liquid phase would occur alone and occupy the whole cross-section area. ṁ is the total mass flow. For the horizontal flow, the boundary between bubble and slug flow refers to the ordinate

T_D = [(Δp/Δx)_L / ((ρ_L − ρ_V) g)]^0.5  (12.48)

The coordinate for the boundary between annular and wavy flow is

F_D = ṁ x / (A ((ρ_L − ρ_V) ρ_V d g)^0.5)  (12.49)

The coordinate for the boundary between stratified and wavy flow is

K_D = ṁ³ x² (1 − x) / (A³ (ρ_L − ρ_V) ρ_V g η_L)  (12.50)

For the choice of the diameter, one should take care that periodic shocks due to slug flow are avoided. Slug flow can be suppressed by choosing smaller pipe diameters; however, this option is often limited, and the diagrams are not excessively accurate. Furthermore, small diameters give large pressure drops, which is often the limitation. Thus, the usual strategy is to avoid two-phase flow as far as possible, e. g. by placing the expansion valves directly upstream the vessel or, if possible, at a low point.

Example
4000 kg/h water and 1000 kg/h air shall be transported through a horizontal 6-in line (d = 152.4 mm, L = 50 m). Calculate the pressure drop and estimate the flow pattern. Given values:
– η_L = 1 mPa s
– ρ_L = 1000 kg/m³
– σ_L = 70 mN/m
– ρ_G = 1.3 kg/m³
– η_G = 0.018 mPa s

Solution
We calculate the following values:
– A = 1.82 ⋅ 10⁻² m² (circular cross-flow area)
– Re_G = 644644 (Equation (12.28))
– Re_L = 11604 (Equation (12.28))


Note that the total mass flow is used.
– λ_L = 0.0297 (Equation (12.32))
– Δp_L = 28.24 Pa (Equation (12.27))
– x = 0.2 (Equation (12.29))
– λ_G = 0.0126 (Equation (12.32))
– A* = 13.673 (Equation (12.30))
– Fr_L = 0.00388 (Equation (12.37))
– We_L = 12.62 (Equation (12.38))
– and finally R_2Ph = 120.28, giving Δp_2Ph = 33.97 mbar

To estimate the flow pattern, we need the following steps for the horizontal flow:
– Re_G = 128929 (Equation (12.28))
– Re_L = 9283 (Equation (12.28)). Note that the Re numbers are now taken with the gas and liquid mass flows, respectively, not with the total flow.
– λ_G = 0.017 (Equation (12.9))
– λ_L = 0.0317 (Equation (12.9))

(Δp/Δx)_L = λ_L (ṁ_L/A)² / (2 ρ_L d) = 0.3876 Pa/m

(Δp/Δx)_G = λ_G (ṁ_G/A)² / (2 ρ_G d) = 10 Pa/m

X = (Δp/Δx)_L^0.5 / (Δp/Δx)_G^0.5 = 0.197

The value X refers to the abscissa of the upper diagram in Figure 12.8. The ordinates are
– F_D = 0.3457 (Equation (12.49)) and
– K_D = 1109.36 (Equation (12.50))

K_D distinguishes between stratified and wavy flow. The value is clearly above the border line ⇒ wavy flow. F_D distinguishes between annular and wavy flow (ordinate on the right-hand side). The value is pretty close to the border line; both options are considered to be possible. Of course, this is a simplified example, as in the water-air system the ratio between the phases does not change due to the pressure drop. Otherwise, a sectionwise calculation would be necessary.
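The flow pattern coordinates of the example can be scripted in a few lines. The sketch below is a rough reproduction, not the book's calculation procedure: the Blasius friction factor λ = 0.3164/Re^0.25 for smooth pipes is used as an assumption instead of Equation (12.9), so the friction-based values deviate slightly from the text; the coordinates F_D and K_D, which do not depend on λ, match.

```python
import math

# Example data: 4000 kg/h water + 1000 kg/h air in a horizontal 6-in line
d = 0.1524                      # pipe diameter, m
A = math.pi / 4 * d**2          # cross-flow area, ~1.82e-2 m2
m_l = 4000.0 / 3600.0           # liquid mass flow, kg/s
m_g = 1000.0 / 3600.0           # gas mass flow, kg/s
rho_l, rho_g = 1000.0, 1.3      # densities, kg/m3
eta_l, eta_g = 1e-3, 0.018e-3   # viscosities, Pa s
g = 9.81                        # m/s2
x = m_g / (m_l + m_g)           # vapor mass fraction = 0.2

def lam_blasius(re):
    # smooth-pipe friction factor (assumption, stands in for Eq. (12.9))
    return 0.3164 / re**0.25

# phase-wise pressure drop gradients, each phase taken alone in the full pipe
re_l = (m_l / A) * d / eta_l
re_g = (m_g / A) * d / eta_g
dpdx_l = lam_blasius(re_l) * (m_l / A)**2 / (2 * rho_l * d)
dpdx_g = lam_blasius(re_g) * (m_g / A)**2 / (2 * rho_g * d)

m_tot = m_l + m_g
X = (dpdx_l / dpdx_g)**0.5                                     # Eq. (12.47)
FD = m_tot * x / (A * ((rho_l - rho_g) * rho_g * d * g)**0.5)  # Eq. (12.49)
KD = (m_tot**3 * x**2 * (1 - x)
      / (A**3 * (rho_l - rho_g) * rho_g * g * eta_l))          # Eq. (12.50)
```

Running this gives X ≈ 0.20, F_D ≈ 0.346, and K_D ≈ 1109, in line with the values above.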

12.2 Pipe specification

Besides the inner diameter of a pipe, which is decisive for the pressure drop calculation, there is of course a large number of other items to be specified for a pipe. To keep the overview, so-called piping classes are defined, which differ from company to company. In these piping classes, a large part of the information about a pipe can be predefined, such as:
– Pressure rating: The design conditions of a pipe are normally predefined by the adjacent pieces of equipment. In the pipe class denomination there is usually a code which ensures sufficient design conditions. Also, the influence of the temperature on the mechanical stability is considered and defined.
– Fluid code: A certain abbreviated code gives qualitative information about the fluid going through the pipe, both concerning the substances involved and the design conditions. There are sophisticated pipe class systems where the fluid code carries the complete information about the pipe.
– Piping material: The piping material is indicated in the piping denomination code, either explicitly or via the fluid code mentioned above.
– Gasket type: The gasket type can be indicated in the fluid code. Also, the necessary pipe connections (flange, welded connection) can be defined.
– Insulation: There are a number of insulation types for a pipe which have to be distinguished:
  – None: Insulation is not needed for streams where no exorbitant temperatures or other dangers occur. An example is cooling water.
  – Heat insulation: An insulation must be defined (material, thickness) which avoids heat losses in the pipe. The insulation material must be thermally stable and effective at the required temperature.
  – Cold insulation: An insulation must be defined (material, thickness) which can keep the stream in the pipe at the cold temperature required. The insulation material must be effective at the required temperature. Often the ingression of air humidity into the insulation material must be avoided.
  – Personnel protection insulation: An insulation is provided which is thermally not effective, but prevents staff members from touching the pipe, which could cause injuries if the pipe is hotter than approx. 60–70 °C. Most companies have guidelines defining where personnel protection insulation is required.
  – Electrical tracing: Electrical tracing is required if there is a danger that the fluid is no longer pumpable at cold ambient temperatures, e. g. water at temperatures below 0 °C. Electrical tracing is relatively expensive.


  – Jacket tracing: A double pipe is manufactured where a heating agent (steam, hot water) is used to keep the temperature in the inner part of the pipe.

Pipes are assembled at environmental conditions. During operation, they will often be exposed to elevated temperatures, and their lengths will increase due to thermal expansion. If these expansions are not compensated for, large mechanical tensions will occur which can possibly cause damage to the gaskets, the pipe fittings, and the pipe itself. Small changes in length can be compensated by the elasticity of the material; for major expansions, special compensating elements are necessary. For pipes operating at high pressure, bend elements are used. The most popular one is the U-bend (Figure 12.9), which can compensate the tensions by deformation. In application, one should not forget the high-point vent or the low-point drain to avoid accumulation of gases or liquids, respectively. Another option is the bellow expansion joint (Figure 12.10). Between two flanges, a bellow pipe can equalize the pipe expansion. A guide tube inside can prevent the bellow from being polluted; however, in this case only axial expansions can be compensated.

Figure 12.9: Expansion loop.

Figure 12.10: Bellow expansion joint [178]. © Hydrocarbon Processing.


12.3 Valves

Valves are used in piping systems to control flow rates, pressure, or temperature, to simply turn a flow off or on, or to separate two pieces of equipment [179]. Regarding their function, they can be divided into isolation valves and control valves. The difference is that isolation valves are actuated by an operator, and their states are "open" or "closed", whereas control valves are operated automatically, and it is decisive that a certain intermediate state between "open" and "closed" can be continuously maintained.

12.3.1 Isolation valves

Isolation valves must reliably isolate two sections of the pipe against each other, even after a long operation time [180]. Leakage to environment must be avoided due to fire danger or emission control. There are several kinds of valves which have their particular pros and cons. They are further explained in [179] and [180].
1. Globe valves (Figure 12.11): Globe valves can be used for a precise flow control. They do not have a dead storage and close tightly. Their disadvantage is the high pressure drop across the valve, caused by two 90° turns inside the valve.
2. Ball valves (Figure 12.11): Ball valves can be fully opened with practically no additional pressure drop. They can handle solids and are appropriate for automation for use as a control valve. The leakage is very low, and ball valves can be operated at high temperatures and pressures. The disadvantage is the remaining liquid holdup in the valve due to a large dead storage. Electrostatic problems might occur, so some precautions should be taken if flammable liquids are handled.
3. Gate valves (Figure 12.11): Gate valves are designed to be fully open or fully closed. In case they are fully open, they do not show an additional pressure drop. Lubricants are not necessary. Gate valves are tight, and they open and close slowly so that fluid hammering is avoided. Their disadvantage is that gate valves do not have a gradual valve characteristic. They are more or less either open or closed; they are not appropriate for use as a control valve. In the partially open state, the valve can start vibrating, which leads to damage with time [179].
4. Membrane valves (diaphragm valves, Figure 12.11): Membrane valves are completely tight; however, their pressure drop is considerable, and they are not appropriate for high temperatures and pressures or for dirt. The mass flow control is not gradual. Due to their tightness, they are considered to be suitable for special cleanliness demands, so they are very popular in pharmaceutical applications.


Figure 12.11: Valve types. 1 = Globe valve, 2 = Ball valve, 3 = Gate valve, 4 = Membrane valve. © KSB Aktiengesellschaft.

5. Plug valves (Figure 12.12): Plug valves also have a dead storage; the additional pressure drop is very low, as is the leakage. The disadvantages are the high turning moment for operation and the possible contamination of the product with lubricant. They are usually not used for control purposes.
6. Butterfly valves (Figure 12.12): Butterfly valves have a low pressure drop. They are tight, have no leakage to environment and no dead storage. They open gradually and are appropriate for use in control applications. Maintenance is easy. Eccentric butterfly valves are even appropriate for high pressures and temperatures. The main disadvantage is that the disc and the shaft are in the flowpath of the fluid. Highly abrasive media will erode the disc, and it is difficult to clean the valve.

Figure 12.12: Valve types. 5 = Plug valve [181]. © Hydrocarbon Processing. 6 = Butterfly valve. © Heather Smith/Wikimedia Commons/CC BY-3.0. https://creativecommons.org/ licenses/by/3.0/deed.en.

7. Check valves (Figure 12.13): Check valves ensure that flow can only take place in one direction. They prevent backflow from higher lines and vessels or from high-pressure regions to low-pressure regions. The construction must be carried out in a way that they have no flow resistance in one direction and block completely in the reverse direction. There are different types [179]. In most guidelines, check valves are ignored for safety purposes to be on the safe side, even if this is obviously wrong.

Figure 12.13: Examples of check valves. © KSB Aktiengesellschaft.

In contrast to their normal function, these valve types can also be used as shut-off valves, where they are operated automatically by the DCS system to realize the action of an interlock. In this case, they shall have no additional pressure drop and close completely. To prevent manually operated valves from maloperation, they can be secured. One option is the car sealed valve, where a simple seal made of plastic must be broken on purpose before the valve can be actuated. The valve is then protected against accidental maloperation. A more rigorous measure is the locked valve, which is secured with a padlock or a chain and can only be actuated after it is unlocked with a key. The key can e. g. only be obtained after a signing procedure. The costs for normal isolation valves are usually not a decisive issue. However, for large pipe diameters it should be carefully checked whether they are really necessary, as an isolation valve in a 32′′ line can easily exceed the price of a medium-sized car.

12.3.2 Control valves

Control valves are used to control quantities like flow, pressure, temperature, or liquid level by fully or partially opening or closing in response to signals received from controller devices that compare a "setpoint" to a "process variable". The opening or closing of control valves is usually done automatically by electrical, hydraulic, or pneumatic actuators. In Figure 1.2, a standard arrangement around a control valve has been shown. The valve is actuated by an electric signal with instrument air. To save one stage in the nominal size, the line to be controlled is restricted upstream and expanded again downstream of the valve. The control valve can be isolated by two gate valves in case maintenance is needed. For this purpose, there are also two drain valves on both sides of the control valve so that the line can be emptied completely. There is a bypass line with a ball valve around the control valve, so that during maintenance the flow can be controlled manually. The valve in Figure 1.2 has been defined to be failsafe closed (FC). This means that in case the instrument air or other necessary utilities fail


the valve takes the closed position, in contrast to failsafe open (FO). It must be defined in advance during basic engineering which position is the safe one. The usual way for the characterization of valves for liquids is the KV-value, which is the amount of water in m³/h that flows through the valve at a pressure drop of 1 bar. It can be written as

KV/(m³/h) = (V̇/(m³/h)) ⋅ ((ρ/(g/cm³)) / (Δp/bar))^0.5  (12.51)

The KV-value for a fully opened valve is called KVs. Alternatively, the cV-value is in use, which has a similar definition but uses American units. The relationship with KV is simply cV = KV/0.865.

Example
17 m³/h liquid ethanol (ρ = 790 kg/m³) pass a valve with a pressure drop of Δp = 3 bar. Which KVs value is necessary?

Solution
The necessary KV value can be determined according to Equation (12.51):

KV/(m³/h) = 17 ⋅ (0.79/3)^0.5 = 8.7

The KVs value should be 30 % larger, i. e. KVs = 8.7 m³/h ⋅ 1.3 = 11.3 m³/h.
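The sizing calculation of Equation (12.51) and the example above can be expressed as a small helper; the function name `kv_liquid` and the 30 % margin on KVs are taken from this example, not a general standard.

```python
import math

def kv_liquid(v_dot_m3h, rho_gcm3, dp_bar):
    # Eq. (12.51): KV = V̇ * sqrt(rho / Δp),
    # with KV and V̇ in m3/h, rho in g/cm3, Δp in bar
    return v_dot_m3h * math.sqrt(rho_gcm3 / dp_bar)

kv = kv_liquid(17.0, 0.79, 3.0)  # ethanol example: ~8.7 m3/h
kvs = 1.3 * kv                   # 30 % margin for the fully opened valve
```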

The KV-value can be transferred into a ζ-value according to the following procedure: the combination of Equations (12.19) and (12.51) gives

10⁵ Pa = ζ (ρ/2) w² = ζ ⋅ 500 kg/m³ ⋅ KV²/A²  (12.52)

with A as the cross-flow area of the pipe. Solving for ζ yields

ζ = 10⁵ Pa ⋅ A² / (500 kg/m³ ⋅ KV²)  (12.53)

or

ζ = 1.6 ⋅ 10⁻³ (d/mm)⁴ (KV/(m³/h))⁻²,  (12.54)

referring to d as the pipe diameter. For gases, we distinguish between subcritical and supercritical flow (Chapter 14.2). The KV value is written as

KV/(m³/h) = (V̇_N/(m³/h)) / 514 ⋅ ((ρ_N/(kg/m³)) ⋅ (T₁/K) / ((p₁ − p₂) p₂/bar²))^0.5  (12.55)

for subcritical flow with (p₁ − p₂) < p₁/2 and

KV/(m³/h) = (V̇_N/(m³/h)) / (257 ⋅ p₁/bar) ⋅ ((ρ_N/(kg/m³)) ⋅ (T₁/K))^0.5  (12.56)

for supercritical flow with (p₁ − p₂) > p₁/2. The indices denote: N standard state (p = 1.01325 bar, T = 273.15 K); 1 valve inlet; 2 valve outlet. The relevance of p₁/2 is discussed in Chapter 14.2.

There are mainly two opening characteristics, i. e. the linear one and the equal-percentage one. A linear characteristic is simply

KV/KVs = H/H_full  (12.57)

where H is the lift of the valve. Often, the equal-percentage characteristic

KV/KVs = exp[n (H/H_full − 1)]  (12.58)

is preferred, as one can open the valve more carefully at the beginning, and at the end further opening of the valve has a strong effect on the flow. Equal-percentage means that equal changes in the lift lead to equal relative changes in the flow. Figure 12.14 compares a linear and two equal-percentage characteristics with n = 3.2 and n = 4.0. Note that a valve with an equal-percentage characteristic is never considered to be tight when the lift is 0.
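The valve relations of this section can be collected in a short sketch. The function names are ours, pressures are absolute and in bar, and the standard-state convention of Equations (12.55)/(12.56) is assumed:

```python
import math

def zeta_from_kv(d_mm, kv_m3h):
    # Eq. (12.54): pressure drop coefficient from KV and pipe diameter
    return 1.6e-3 * d_mm**4 / kv_m3h**2

def kv_gas(vn_m3h, rho_n, t1_k, p1_bar, p2_bar):
    # Eqs. (12.55)/(12.56); V̇N and ρN at standard state (1.01325 bar, 273.15 K)
    if (p1_bar - p2_bar) < p1_bar / 2:          # subcritical flow
        return vn_m3h / 514 * math.sqrt(rho_n * t1_k / ((p1_bar - p2_bar) * p2_bar))
    # supercritical flow
    return vn_m3h / (257 * p1_bar) * math.sqrt(rho_n * t1_k)

def kv_ratio(h_rel, n=None):
    # Eqs. (12.57)/(12.58): linear (n=None) or equal-percentage characteristic
    if n is None:
        return h_rel
    return math.exp(n * (h_rel - 1.0))
```

At zero lift, `kv_ratio(0.0, n=3.2)` returns exp(−3.2) ≈ 0.04 rather than 0, which mirrors the remark above that an equal-percentage valve is never considered tight.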

Figure 12.14: Linear and equal-percentage valve characteristics.

12.4 Pressure surge

Pressure surge has frequently been the reason for serious damage on piping and substance release. It occurs in long pipes with liquid flow when a valve is closed too fast. Due to the reduction of velocity, a pressure pulse develops downstream of the valve. It propagates at the speed of sound in the opposite direction of the previous flow of the liquid. At the end of the pipeline the pulse is reflected. If this end is open, e. g. an open vessel, the sign of the pulse and its direction change, meaning that the pressure pulse turns into an underpressure pulse traveling towards the valve. It extinguishes the pressure pulse, and the incident has come to an end. From this description, it can be concluded that the maximum power of the pulse can only occur when the valve has completely closed before the reflected pulse reaches the valve [295]. The order of magnitude of the pressure pulse can be estimated with the Joukowsky equation [296]

Δp = ρ w* Δw_liq  (12.59)

Upstream of the valve, things are mirror-inverted; an underpressure pulse develops and is finally compensated by a reflected pressure pulse. For pipes with gas flow, pressure surge plays a minor role, as both the speed of sound and the density are by far lower (see Equation (12.59)). Moreover, in contrast to liquids, gases can be compressed by a pressure elevation, which mitigates the pressure pulse. To avoid safety problems, the following precautions can be taken [296]:
– The pipe should be designed according to the maximum pressure that can occur.
– The simultaneous closing of numerous valves should be avoided.
– Valves should close sufficiently slowly; it is recommended that the time required for closing the valve be 10–20 times longer than the reflection time of the pulse.
– Reduction of the flow velocity by increasing the diameter of the pipe or by addition of an orifice, if possible.
Further information can be taken from [402].
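An order-of-magnitude check with Equation (12.59) shows why slow valve closure matters; the speed of sound in the liquid-filled pipe (1200 m/s) and the velocity change (2 m/s) are illustrative assumptions, not values from the text:

```python
rho = 1000.0      # liquid density, kg/m3 (water)
w_star = 1200.0   # effective speed of sound in the liquid-filled pipe, m/s (assumed)
dw_liq = 2.0      # change of the liquid velocity at valve closure, m/s (assumed)

dp = rho * w_star * dw_liq  # Joukowsky pulse, Eq. (12.59), in Pa
dp_bar = dp / 1e5           # = 24 bar on top of the operating pressure
```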


12.5 Measurement devices

To keep all the process quantities controlled, it is certainly obligatory to evaluate them by measurement with the appropriate accuracy. As a rule of thumb, 10–20 % of the investment costs of a plant are spent on measurement, control, and process automation. The implementation of the concepts is usually done by designated specialists, while some foundations of measurement should be considered by any process engineer. An extensive description of a large number of measurement devices can be found in [286]. A well-known principle of measurement in chemical plants is that the measured quantity has to be transformed into an electrical signal which can be sent to the process control system, where it can be visualized to the operators and possibly be used for control applications. The most important process quantities to be measured are temperatures, pressures, pressure differences, flows, levels, and concentrations. They are briefly discussed in the following paragraphs.
– Temperature: For the temperature, the most important thermometers are resistance thermometers and thermocouples. Resistance thermometers use the temperature dependence of the electrical resistance. They are the most accurate devices and can be used in the temperature range −250–1000 °C. A well-known thermometer is the Pt-100, measuring the resistance of a platinum wire, which is 100 Ω at 0 °C. Alternatively, thermocouples are used in an even wider temperature range of −200–2000 °C. The principle is that a voltage builds up between a soldering point of two wires of different materials and the free ends if the soldering point is exposed to a different temperature. In the process, care must be taken that the thermometers are placed in a way that they take representative temperatures; placing them in dead zones, where they are more or less isolated from the process, has to be avoided.
– Pressure: In contrast to the temperature, the pressure is uniform in a certain area unless there is a defined reason for change, e. g. hydrostatic effects or pressure drops due to friction losses. In most cases, the pressure is transformed into the elastic deformation of a spring. The movement of the spring is transformed into an electrical signal, often by means of the deformation of a metal membrane, which is turned into a signal by a piezo element. Using different manometers, the range from a few mbar up to more than 1000 bar can be covered. It is often amazing how much confusion is caused when it has to be clearly indicated whether the absolute pressure or the gauge pressure, the difference between absolute pressure and ambient pressure, is meant. While people from plant operation stick to the gauge pressure, scientists and simulation people can hardly imagine that anything else than absolute pressure could be meant. The only way to overcome this is to clearly indicate it by writing "g" for gauge (e. g. "barg") and "a" for absolute


Figure 12.15: Illustration of the Coriolis flow meter principle. © Cleontuni/Wikimedia Commons/CC BY-SA 2.5 https://creativecommons.org/licenses/by-sa/2.5/deed.en.





(e. g. "bara"). The latter abbreviation is unknown to most people; at least it causes a further inquiry, and the possible misunderstanding is overcome.
– Pressure difference: The measurement of pressure differences is important to get information about hydrostatic pressures or pressure drops. It is measured in a similar way as the pressure itself; both pressures are connected to different sides of a spring. Pressure differences cannot be evaluated by measuring both absolute pressures and taking the difference, as the difference of large numbers can be considerably erroneous (Section 10.1).
– Flow: Today the dominating measurement principles for the flow are the Coriolis type and the vortex type flow meters. The Coriolis flow meter is more expensive, but its accuracy is remarkable. It measures the mass flow with an uncertainty of approx. 0.2 %, covering the range from 60 g/h–120 t/h, at pressures up to 900 bar [182]. Figure 12.15 illustrates the principle, although it must be emphasized that a number of arrangements are possible. When vibrations are initiated in the tube bends, the two pipe branches stay parallel in case there is no flow, as on the left-hand side. When there is flow, the pipe branches are differently affected by the Coriolis force and distorted, as on the right-hand side of the figure. The extent of the distortion is strongly related to the mass flow through the device [182]. The amplitudes of the induced vibrations are too small to be seen (approximately 30 µm), but they can be detected tactually. The great advantage of Coriolis flow meters is that they measure the mass flow directly; they are independent of other properties of the flow and of inlet or profile effects of the flow. Apart from the price, the disadvantages are that Coriolis flow meters need a homogeneous flow and that fouling causes errors in the measurement.
In contrast to the Coriolis flow meter, the vortex flow meter evaluates a volume flow, which can be converted into a mass flow by an additional temperature measurement and an appropriate density relationship. It counts the number of vortices formed behind an obstacle in the flow path, using a piezoelectric crystal. The accuracy can be estimated to be 0.75 %. It needs a certain inlet zone. Also, it is not necessary to know other properties of the stream, such as viscosity. Vortex flow meters have a






huge turndown (approx. 1 : 50 from the lowest to the highest value) and can be used in a wide temperature range (approx. −200–400 °C). They are not appropriate for low flows, for fouling media, and in the case that vibrations occur in the plant.
A third type which is often used is the magnetic flow meter. The physical principle is that a magnetic field is applied to the metering tube. Charged particles like ions are diverted perpendicular to the flow. This results in a potential difference proportional to the flow velocity. For application, the fluid must have a minimum electrical conductivity (> 0.5 µS/cm), and the tube must be electrically isolated. Magnetic flow meters have no movable parts and no additional pressure drop, and they are appropriate for aggressive and corrosive fluids. It is an application for liquids; solids or gas bubbles do not matter. The temperature is limited to 200 °C, and the minimum flow velocity is 0.5 m/s. There are many other types of flow meters, which are well and briefly described in [183].
– Level: There are a lot of measurement principles for the liquid level. One must distinguish between a continuous level indicator and a level switch, which detects when the level reaches a certain value. The simplest and safest device is the inspection glass; however, the transformation into an electrical signal does of course not work. Level indicators can be based on the buoyancy principle. The more a displacing piston is dipped into a liquid, the more it is exposed to buoyancy forces, which can be transformed into an electrical signal by resistance strain gauges. The drawbacks are the mechanical equipment, which might be sensitive to dirt, and that the density of the liquid, which depends on temperature and composition, must be known. Air bubblers work in a similar way. Air is bubbled through a dipped pipe into the liquid. A pressure sensor measures the pressure necessary to overcome the hydrostatics. This method is not sensitive to dirt, but a disadvantage is that the liquid is contaminated with the gas, which might not be desirable in all cases. Also, the hydrostatic pressure can directly be measured and transferred into a liquid level, taking the liquid density into account. A number of other electrical signals can be used for liquid level detection or measurement, like electrical conductivity, capacity, radar sensors, microwave or infrared sensors, or radiometric signals. A useful option is the so-called liquiphant, which is in principle an oscillating tuning fork. If the liquid level reaches or drops below the liquiphant, its resonance frequency changes. This is detected and transformed into a signal for a high-level or low-level alarm.
– Analytical measurement: Analytical measurements are of course done with GC (gas chromatography), HPLC (high performance liquid chromatography), Karl Fischer titration (determination of the water content), and so on. All these methods are the tasks of designated specialists and should not be covered in one short paragraph. However, all analytical methods considerably depend on a good design of the sampling, where it has to


Figure 12.16: A bad (A) and a good (B) example for sampling.

be ensured that the sample is representative. Figure 12.16 shows an inappropriate and an appropriate example for taking a sample. In example A there is a parallel branch to the line. Flow through it can be initiated by opening the two valves. However, when the valves are closed again, it is not ensured that a representative sample is obtained. The question is whether the previous content in the sample container (e. g. air, residues from a previous sample) has really been replaced. There is no real motivation for the flow to pass the sample container; the short cut through the main line probably has a lower pressure drop than the way through the sample container. In example B the sample container is connected to both the pressure and the suction side of the pump. Therefore, there is a considerable pressure difference across the sample container, resulting in a well-defined flow back to the suction side of the pump. The previous content of the sample container is rapidly replaced. The disadvantage is that it will go through the pump again, and part of it will again enter the sample container. However, with time the old content will be more and more diluted, and finally the sample will be representative.

13 Utilities and waste streams

13.1 Steam and condensate

In process industry, steam is the most widely used heating agent. Most chemical sites provide a steam net where steam at several pressure levels is available. The cost of steam is an important criterion for the choice of a site; however, as the costs of a process are usually determined by the costs of the raw materials, it is rarely decisive. Low-pressure steam is not necessarily cheaper than high-pressure steam; usually, the steam generators produce steam at very high pressure (e. g. 400 °C, 50 bar), which is then throttled down in a valve to the lower pressure levels (e. g. 210 °C, 17 bar as medium-pressure steam and 170 °C, 6 bar as low-pressure steam). For use, steam should not be superheated too far so that condensation can take place rapidly with an extraordinarily high heat transfer coefficient in the equipment. For desuperheating, condensate is often injected into the steam by means of specially designed nozzles.

Example
How much steam condensate (100 °C, 20 bar) must be added to reduce the superheating to 5 K if a stream of 1000 kg/h high-pressure steam (superheated at 400 °C, 50 bar) has been throttled down to a middle pressure level of 20 bar?

Solution
According to the steam table [184] or using a high-precision equation of state (e. g. [29]), we can set up the energy balance of this desuperheating process:

ṁ_HP h_HP + ṁ_Cond h_Cond = (ṁ_HP + ṁ_Cond) ⋅ h(T_s(20 bar) + 5 K, 20 bar)  (13.1)

with
h_HP = 3196.7 J/g
h_Cond = 420.6 J/g
T_s(20 bar) = 212.4 °C
h_final = h(212.4 °C + 5 K, 20 bar) = h(217.4 °C, 20 bar) = 2813.8 J/g

Solving Equation (13.1) for ṁ_Cond gives

ṁ_Cond = ṁ_HP (h_HP − h_final)/(h_final − h_Cond) = 1000 kg/h ⋅ (3196.7 − 2813.8)/(2813.8 − 420.6) = 160 kg/h  (13.2)
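The balance can be checked quickly in a few lines; the enthalpies are the steam-table values quoted in the example above:

```python
h_hp = 3196.7     # enthalpy of the throttled HP steam at 20 bar, J/g
h_cond = 420.6    # enthalpy of the injected condensate (100 degC, 20 bar), J/g
h_final = 2813.8  # target enthalpy at Ts(20 bar) + 5 K, J/g
m_hp = 1000.0     # steam flow, kg/h

# Eq. (13.2), obtained from the energy balance Eq. (13.1)
m_cond = m_hp * (h_hp - h_final) / (h_final - h_cond)  # ~160 kg/h
```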

One should be aware that the heat transfer of superheated steam does not take place in a two-step sequence consisting of cooling down the steam as a vapor to condensation temperature and subsequent condensation. For the heat transfer, this would be a disaster, as the heat transfer coefficient for cooling of a vapor would be poor in comparison with the steam condensation and might determine the size of the condenser. In fact, for moderate superheating the condensation remains essentially the same as for a saturated vapor; the only thing which changes is the larger heat to be transferred due to the superheating [185]. People who wear glasses intuitively know this, as during winter time the glasses grow damp immediately after entering a building; it is not necessary that the whole air in the building is cooled down to dew point temperature. For moderate superheating, condensation takes place immediately in a technical condenser as well. In [79], a criterion is set up to decide in which cases a superheating can be considered as moderate; however, it is again pointed out that it is definitely disadvantageous to regard a part of the condenser as a gas cooler for design.

For each sort of steam, a so-called header is provided, carrying the whole steam from the tie-in point or from battery limits to the plant and branching off to the various consumers of the plant. The condensates are collected as well in a condensate header and usually pumped back as boiler feed water to the steam generator. In some cases, the steam is used as direct steam, meaning that it is introduced directly into the process, e. g. into the bottom vessel of a column where the bottom product is water anyway. In this case, no condensate can be returned, and often additional costs occur, as the whole amount of steam might end up as waste water. When steam is directly used, it should also be considered that it contains small amounts of caustic substances, e. g. ammonia or amines. One should make sure that this has no detrimental influence on the process. The great advantage of using direct steam is that a reboiler can be omitted.
The use of steam as a heating agent has some remarkable advantages. In contrast to heating agents making use of sensible heat (e. g. hot oil), the temperature stays constant, as water condenses as a pure substance. It is not necessary to convey the heating agent to the consumer; it is delivered at a certain pressure which is higher than the pressure at condensation. The condensation itself is the conveying mechanism for the steam. The specific volume of steam is by far larger than that of the condensate: at p = 2 bar, v″ = 0.8857 m³/kg compared to v′ = 0.00106 m³/kg, corresponding to a factor of 835. When steam condenses, the volume decreases drastically, and fresh steam can follow to maintain the pressure where the heat is consumed. The heating agent flows to the area where it is required without any stimulation. The only thing which has to be provided are lines with a sufficient cross-flow area for conveying the steam without major pressure loss. Of course, it is important that there are no inerts in the steam. Moreover, exactly the appropriate amount of steam will flow to the heat exchange area. Figure 13.1 shows an arrangement with a heat exchanger where the steam flow is controlled. The heat flux through the heat exchanger is given by

Q̇ = kA (Tcond − Tproduct) = ṁsteam Δhv    (13.3)
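The balance in equation (13.3) is straightforward to evaluate. The following sketch estimates the required steam mass flow; the values for k, A and the temperatures are assumptions chosen for illustration, while Δhv at the condensation state is a steam-table value:

```python
# Required steam mass flow from Q = k*A*(T_cond - T_product) = m_steam * dh_v
k = 1500.0        # overall heat transfer coefficient in W/(m2 K), assumed
A = 50.0          # heat exchange area in m2, assumed
T_cond = 130.0    # condensation temperature of the steam in degC
T_product = 100.0 # product-side temperature in degC

Q = k * A * (T_cond - T_product)   # transferred heat flux in W

dh_v = 2173.7e3   # enthalpy of vaporization at 130 degC (2.7 bar) in J/kg, steam table
m_steam = Q / dh_v                 # required steam mass flow in kg/s

print(f"Q = {Q/1e6:.2f} MW, steam flow = {m_steam*3600:.0f} kg/h")
```

With these assumptions, Q = 2.25 MW and a steam flow of roughly 3.7 t/h results; the point of the calculation is that the steam flow follows directly from the duty and the condensation state.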

Figure 13.1: Control of the steam flow as the heating agent.

Following this simple equation, one should be aware that the k value is mainly determined by the product side, as the heat transfer coefficient on the steam side is very high and does not affect k very much. Also, within a certain range the enthalpy of vaporization as a physical property of the steam can be regarded as constant, and A as the heat transfer area of the heat exchanger does not change anyway. Therefore, the transferred heat flux is mainly determined by the condensation temperature Tcond of the steam, which has to be adjusted in an appropriate way for the control of the product temperature. As the steam is a pure substance, the condensation temperature is directly related to a pressure according to the vapor pressure line. The signal for the control valve varies the discharge opening until it causes a pressure drop which yields the desired condensation pressure of the steam. For comfortable control, the pressure drop should not be too low; the rule of thumb is 10–20 % or 0.5–1 bar. This pressure drop should be taken into account when the heat exchanger is designed. The steam conditions given in the design basis indicate the state of the steam upstream of the control valve; for the design of the heat exchanger, the state of the steam downstream of the control valve, after adiabatic throttling, is relevant, including the resulting superheating. This steam state must fit the desired state in the heat exchanger.

Example
In a thermosiphon reboiler (Chapter 4.7), the bottoms product is to evaporate at t = 100 °C. Low-pressure steam (LPS) at t = 160 °C, p = 5 bar will be used for heating. For the driving temperature difference, a value of 30 K is targeted. Calculate the steam state relevant for the heat exchanger design. Is enough pressure drop for the steam control available?

Solution
The condensation temperature of the steam shall be tcond = 100 °C + 30 K = 130 °C. According to the steam table, the corresponding condensation pressure is p = 2.7 bar. Adiabatic throttling of the LPS to this pressure gives the steam state t = 151.8 °C, p = 2.7 bar. The pressure drop across the valve is sufficiently high (Δp = 2.3 bar or 46 %). The superheating of 21.8 K is acceptable. Otherwise, steam saturation would have to be applied.
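The key step of the example is reading the saturation line. As a sketch, the saturation pressure can be represented by a few steam-table points with logarithmic interpolation in between; the tabulated values are standard steam-table data, and the interpolation is for illustration only, not a substitute for a full steam-table routine:

```python
import math

# Saturation pressure of water from steam-table points (t in degC, p in bar),
# with linear interpolation of ln(p) between the nodes.
SAT = [(100, 1.0142), (110, 1.4338), (120, 1.9867),
       (130, 2.7026), (140, 3.6154), (150, 4.7616), (160, 6.1823)]

def p_sat(t):
    """Saturation pressure in bar for 100 degC <= t <= 160 degC."""
    for (t1, p1), (t2, p2) in zip(SAT, SAT[1:]):
        if t1 <= t <= t2:
            w = (t - t1) / (t2 - t1)
            return math.exp((1 - w) * math.log(p1) + w * math.log(p2))
    raise ValueError("temperature outside tabulated range")

t_cond = 100.0 + 30.0          # targeted condensation temperature in degC
p_cond = p_sat(t_cond)         # required condensation pressure in bar
dp_valve = 5.0 - p_cond        # pressure drop across the control valve in bar

print(f"p_cond = {p_cond:.2f} bar, valve pressure drop = {dp_valve:.2f} bar "
      f"({dp_valve/5.0*100:.0f} %)")
```

For t = 130 °C the script returns p = 2.70 bar and therefore Δp ≈ 2.3 bar (46 %), in line with the example.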


The symbol in the condensate outlet line in Figure 13.1 represents a so-called steam trap, a device which lets liquid pass and closes if vapor is about to leave the system without condensing. Thus, it is ensured that any steam entering the heat exchanger is condensed, as it cannot leave the system as vapor. There are several functional principles [186]. The mechanical one is the simplest: a float rises if liquid arrives and sinks if it is surrounded by vapor (Figure 13.3). It is connected to a lever which opens and closes an opening, respectively. Steam traps are often considered to be inappropriate for fouling and dirty media. An alternative scheme without a steam trap is shown in Figure 13.2 [96]. An extra vessel with level control can take over the function; the pressure-equalizing line is necessary, as otherwise noncondensed steam might accumulate in the vessel. If the pressure of the condensate must be increased, a pump can be installed below the vessel.

Figure 13.2: Steam control without using a steam trap [96].

Figure 13.3: Sketch of a ball float steam trap with air cock for venting [187]. © 2016 Spirax Sarco Limited.

The alternative to the control scheme in Figure 13.1 is the control of the condensate flow (Figure 13.4). The advantage is that the control valve can be smaller, as the condensate has a far lower volume. The steam condenses at its delivery pressure. To reduce the heat flux, the control valve throttles the flow. The condensate accumulates in the heat exchanger and covers part of the heat exchange area.

Figure 13.4: Control of the condensate flow.

The heat transfer to the liquid is much lower than for a condensing vapor, and furthermore, the temperature of the condensate goes down when it is used as heating agent. Therefore, part of the heat exchange area is not used, and the heat flux is reduced as requested. Increasing the heat flux works less satisfactorily. It is only possible if part of the heat exchanger is already flooded. This means that a control in both directions can only be performed if the heat exchanger is designed in a way that part of the tubes are flooded at normal operation, meaning that possible heat exchange area is wasted. Further disadvantages are [186]:
– The control valve does not necessarily prevent uncondensed steam from passing the heat exchanger. Therefore, an additional steam trap is necessary, as shown in Figure 13.4. Otherwise steam could be lost, and the pressure in the condensate header might increase, making the condensate removal of other heat exchangers in the process more difficult.
– The controllability of the process is worse than with the steam flow control. For example, if the heat flux is to be reduced from full power to a very low value, the steam only stops entering the heat exchanger when the whole apparatus has been flooded with liquid. If the volume of the shellside is 1 m³, approx. 1 t of steam is consumed after the control valve has been closed. With the steam inlet control (Figure 13.1), only the steam already in the shellside condenses. At p = 2 bar, the density of the saturated vapor is ρ = 1.13 kg/m³, giving an undesired steam consumption of only 1.13 kg, which is approx. 1/900 of the value for the condensate flow control. This means that the condensate flow control is much slower.
– At the phase boundary between steam and condensate, increased corrosion might be observed.
– For horizontal reboilers, thermal stress is an important issue.
The upper and the lower tubes are exposed to different temperatures, as hot fresh steam enters the heat exchanger at the top, while the condensate at the bottom might be significantly subcooled.

For heat-integrated columns, the control of the condensate flow is the preferred option, as the pressure drop of the inlet valve would reduce the driving temperature difference for the evaporation, which is the critical issue in heat integration. It is also preferred if the heat load must be controlled over wide ranges, or if fouling on the product side is assumed. In this case, the heat transfer area is clean at the beginning, so that the heat exchange area is effectively too large. If the steam inlet is controlled, the condensing temperature will drop, which might lead to circulation problems in reboilers (see "geysering" in Chapter 4.7). With the condensate outlet control, the performance can be adjusted.

It might happen that due to the throttling in the valve and the pressure drop of the steam trap the condensate outlet pressure of the consumer becomes too low to convey the condensate to the condensate header. To avoid an additional pump, a so-called condensate lifter can be used. Figure 13.5 shows a possible arrangement.

Figure 13.5: Condensate lifter process.

Figure 13.6: Sectional view of a condensate lifter [188]. © 2016 Spirax Sarco Limited.

The function is as follows. Coming from the steam trap, the condensate is collected in a vessel with three nozzles, A, B, and C. Inside the vessel there is a float, which is mechanically connected with the three nozzles (Figure 13.6). With rising liquid level, the float moves upward. The connections then close the condensate inlet B and open the condensate outlet C and the steam inlet A. Through A, the condensate in the vessel is pressurized by the steam and can therefore flow to the condensate header. The float sinks again, closing the nozzles A and C and opening nozzle B. The remaining steam in the condensate lifter can be vented. Condensate lifters work automatically and without maintenance. The steam as auxiliary energy is available anyway, and the additional steam consumption is approx. 0.1–1 %, as the gaseous steam has to replace the liquid volume of the condensate, and the volume ratio between steam and condensate is in this range, depending on the pressure.

Although essentially only water, condensate is a valuable substance, as it has been purified so that no more salts or other substances are present. There is a certain value just from its energy content, as the sensible heat of liquid water is approx. 10 % of the heat transported as steam.1 In European terms, this corresponds to 2–3 €/m³, without the additional costs for purification, conditioning and waste water disposal. Therefore, it is collected from the particular consumers and recycled to the steam generator, where it is again preconditioned as boiler feed water.

A condensate network has in most cases a number of consumers, often at large distances from each other. One must be aware that the condensate production is not constant in time, and each consumer delivers its own condensate outlet pressure. Thus, there are always fluctuations in the condensate header, and the pressure in this line should in general be low so that none of the consumers has difficulties getting rid of the condensate. The condensate line will in most cases end up in a vessel having a pressure slightly above the ambient one (e. g. p = 1.2–1.3 bar), and the condensate line is usually operated a bit higher (e. g. p = 1.5 bar). As the condensate is close to the saturation state when it enters the condensate line, vapor will be generated in the condensate line due to the expansion. From the mass fraction point of view, it is not too much; however, it has a considerable volume, as the following example shows:

Example
A saturated low-pressure steam condensate at p1 = 6 bar is expanded into the condensate line to p2 = 1.5 bar. How much vapor is generated?

Solution
Calculating an adiabatic throttling, the vapor generation in the condensate line is 9.13 %. The densities of vapor and liquid at the saturation state p2 = 1.5 bar are

ρ″ = 0.863 kg/m³, ρ′ = 949.92 kg/m³

Therefore, per kg of condensate one must expect a vapor volume of 0.106 m³ and a liquid volume of 0.001 m³. This means that from the volume point of view only approx. 1 % of the condensate is liquid.
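The adiabatic flash behind this solution can be sketched with a few steam-table values (h′ at 6 bar as well as h′ and Δhv at 1.5 bar are standard steam-table data):

```python
# Adiabatic flash of saturated condensate from 6 bar into a 1.5 bar condensate line.
h1 = 670.5      # enthalpy of saturated liquid at 6 bar in kJ/kg, steam table
h2_liq = 467.1  # enthalpy of saturated liquid at 1.5 bar in kJ/kg
dh_v2 = 2226.0  # enthalpy of vaporization at 1.5 bar in kJ/kg

# Energy balance h1 = h2_liq + x * dh_v2 gives the vapor fraction x:
x = (h1 - h2_liq) / dh_v2

rho_vap = 0.863    # kg/m3, saturated vapor at 1.5 bar
rho_liq = 949.92   # kg/m3, saturated liquid at 1.5 bar

V_vap = x / rho_vap          # vapor volume per kg of condensate in m3
V_liq = (1 - x) / rho_liq    # liquid volume per kg of condensate in m3

print(f"vapor fraction: {x*100:.2f} %")
print(f"vapor volume: {V_vap:.3f} m3/kg, liquid volume: {V_liq:.4f} m3/kg")
```

The simple energy balance reproduces the 9.13 % vapor fraction and the roughly 100:1 volume ratio between vapor and liquid.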

This is a quite typical result. A condensate line is not a water line but a steam line with a certain amount of liquid. The condensate might remain liquid if some subcooling takes place, e. g. in a long condensate line during winter, but in general it should be designed as a two-phase line (Chapter 12.1.4).

1 However, on a low temperature level. Calculated as (hL(100 °C, 1.1 bar) − hL(30 °C, 1.1 bar))/(hV(250 °C, 20 bar) − hL(30 °C, 1.1 bar)).

The exact dimensioning of the condensate line is difficult, as all the different operation modes of the particular consumers and the significant influences of insulation, ambient temperature and roughness of the inner surface of the line can hardly be determined [186]. The following items are recommended [186]:
– The line should be designed as short as possible and have a base slope of at least 1 %, ensuring that the line drains itself in a shutdown.
– For the pressure drop, Δp = 0.1 bar/100 m is recommended.
– The tie-in of the consumers should be done from the top.
– An injection for rapid mixing makes sense if the condensate temperatures differ significantly. Otherwise, water hammering might occur if flashed steam bubbles from hot condensate instantly condense in colder condensate. Their volume collapses immediately, and liquid water fills it with high velocities.
– It must be possible to drain each section of the condensate header completely. As with any long line, the condensate header will have low points where special care should be taken.

The steam generation from hot water when it is expanded plays the main role in so-called steam boiler explosions [249]. Steam generators contain a large amount of liquid boiler feed water at high temperature and high pressure. When a shell rupture happens, the pressure is suddenly reduced to atmospheric, and the water evaporates violently. The damages caused are often huge.

13.2 Heat transfer oil

At very high temperatures (> 250 °C) the use of steam becomes more and more difficult, as the condensation pressures and therefore the design pressures become large, making the heat exchanger expensive. In these cases, it makes sense to use a heat transfer oil at comparably low pressures. Heat transfer oils can withstand very high temperatures and have low vapor pressures. Also, they remain liquid at very low temperatures. One must be aware that a heat transfer oil makes use of sensible heat, meaning that the temperature of the heating agent changes and that much larger mass flows are necessary, leading to large circulation pumps. Moreover, the heat transfer coefficients are pretty low compared to steam. A compilation of common heat transfer oils is given in [189]. Hot water and molten salts are used as heating agents as well.
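The consequence of using sensible heat instead of latent heat can be quantified with a small comparison; the oil properties and the allowed temperature change are typical order-of-magnitude values assumed for illustration:

```python
# Mass flow of heating agent required for a duty of 1 MW:
# steam (latent heat) vs. heat transfer oil (sensible heat).
Q = 1.0e6        # duty in W

dh_v = 2.0e6     # enthalpy of vaporization of steam in J/kg, order of magnitude
m_steam = Q / dh_v

cp_oil = 2200.0  # specific heat of a typical heat transfer oil in J/(kg K), assumed
dT_oil = 30.0    # allowed temperature change of the oil in K, assumed
m_oil = Q / (cp_oil * dT_oil)

print(f"steam: {m_steam:.2f} kg/s, oil: {m_oil:.1f} kg/s, "
      f"ratio: {m_oil/m_steam:.0f}")
```

With these assumptions, the oil mass flow is roughly 30 times the steam mass flow for the same duty, which explains the large circulation pumps.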

13.3 Cooling media

Cooling is much more sophisticated than it seems. It requires the infrastructure of a site, and each cooling medium has its own restrictions.


Figure 13.7: Plate heat exchangers for sea water cooling.

Sites are usually located at a natural water reservoir. Its temperature limits the lowest achievable temperature in the process, which can only be underrun with additional technical equipment. If the site is located at the sea, a sea water cooling cycle is operated. However, sea water is one of the most aggressive media due to its salt content. Using it in the process is practically impossible, as valuable materials of construction like Hastelloy would have to be chosen. The only economic way to make use of sea water is the indirect way of operating a secondary cycle of demineralized water for the whole site. Figure 13.7 shows a number of huge plate heat exchangers which cool the water cycle of a site with several plants with sea water. Plate heat exchangers for sea water must be made of a material which can withstand the corrosion attacks by the salt. The incoming sea water must be carefully filtered, and the cycle must be treated with biocides regularly.

Cooling water is usually taken from a river. Its supply temperature is usually between 28–32 °C, with a supply pressure of approx. 5–6 bara. Usually, its return temperature can be 10 K higher, i. e. approx. 40 °C. Before it is returned to the environment, it is cooled down again with a cooling tower, as the oxygen content of the water becomes unacceptably low at temperatures beyond 26–28 °C, causing suffocation of the fish in the river. The use of normal cooling water has its pitfalls, as it contains water hardness components. When it is heated up, these components (CaSO4, CaCO3) tend to precipitate.


This kind of fouling must be avoided. Therefore, cooling water can only be used up to a certain temperature level of the product side, usually 65–75 °C, where the wall temperature on the cooling water side is the decisive criterion. Cooling water costs are approx. 0.05 €/m³. Considering the above mentioned difference between supply and return temperature, a cooling water flow of approx. 86 m³/h represents a cooling power of 1 MW, as

86 m³/h ⋅ 1000 kg/m³ ⋅ 4200 J/(kg K) ⋅ 10 K / (3600 s/h) = 1.003 MW

The distribution of cooling water from a header to various consumers often turns out to be a challenging procedure with unexpected outcomes. While one would expect the first outlet in the row to get the highest flow, it often happens that the very opposite is true, with the last outlet in the row as the preferred one [248]. This can easily be explained by having a look at the drag coefficients for flow splitting (Figure 13.8). For the stream branching off, the drag coefficient is significantly higher than for the stream remaining in the header. The situation will be the same when the next stream branches off, but due to the removal of flow the velocities and, subsequently, the stagnation pressures are lower. The overall pressure drop has a tendency to be lower for the streams that are branched off from the header later.
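The relation between cooling water flow and duty can be wrapped into a small helper (water properties as above; a sketch for illustration):

```python
# Cooling duty of a water stream: Q = V_dot * rho * cp * dT
RHO = 1000.0   # density of water in kg/m3
CP = 4200.0    # specific heat of water in J/(kg K)

def cooling_duty(v_dot_m3h, dT):
    """Duty in MW for a volume flow in m3/h and a temperature rise dT in K."""
    return v_dot_m3h / 3600.0 * RHO * CP * dT / 1e6

def required_flow(duty_MW, dT=10.0):
    """Cooling water flow in m3/h for a given duty in MW."""
    return duty_MW * 1e6 / (RHO * CP * dT) * 3600.0

print(f"{cooling_duty(86.0, 10.0):.3f} MW")   # approx. 1 MW
print(f"{required_flow(1.0):.1f} m3/h")       # approx. 86 m3/h
```

The rule of thumb of 86 m³/h per MW at ΔT = 10 K follows directly.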

Figure 13.8: Drag coefficients for flow divergence [72].

This maldistribution can reduce the system performance. A modified design, e. g. with a tapered header line where the velocities towards the end of the header are increased by lowering the pipe diameter, can equalize the pressure profile and create a uniform distribution. However, this is costly, and often the consumption of the particular outlets varies. Restricting orifices in the consumer lines are an alternative.

If cooling water cannot be used due to a temperature level on the product side higher than mentioned above, there are mainly two options. First, one can install a secondary water cycle ("jacket water") operated with demineralized water as cooling agent. This cycle itself is then cooled by the original cooling water at moderate temperatures. A jacket water cycle means additional investment costs, as one huge heat exchanger for cooling the jacket water is necessary. This heat exchanger needs a certain temperature difference as driving force; thus, the supply temperature of the jacket water is about 5 K higher than the cooling water supply temperature. The second option is the use of air coolers, which usually cause a larger investment due to the poor heat transfer on the service side (Chapter 4.10).

With cooling water, temperatures down to approximately 35 °C can be achieved on the process side. To realize lower temperatures in a process, refrigerators must be used. Refrigerated water can be supplied with cold water units at temperatures down to 2 °C. Below this temperature, ice formation must be considered. In most cases, brine is used, often a mixture of propylene glycol/water or ethylene glycol/water, which can be used below 0 °C as well. A compilation of heat transfer fluids can be found in [190].

There are a number of options to cool down the used cooling water again. Most often, cooling towers are used (Figure 13.9).

Figure 13.9: Sketch of a cooling tower arrangement.


The warm cooling water coming from the consumers is cooled down by means of direct contact with ambient air. It is distributed above a certain packing layer, often random packing as used in distillation. From the bottom, the air is led into the cooling tower, either by natural flow (humid air is lighter than dry air) or by ventilation. Because of the concentration difference in the vapor phase, water evaporates into the air. The heat of vaporization is taken from the enthalpy of the liquid water, whereby its temperature is lowered [323]. As the effect is dominated by the mass transfer, it is not necessary that the temperature of the ambient air is lower than the water temperature; it is more important that the air is dry so that the mass transfer can take place.

The advantage of cooling towers is their low investment cost. A disadvantage is the water consumption. Due to the loss of water by evaporation, the dissolved salts, especially calcium salts, are enriched in the water, which might cause corrosion and fouling. To control the salt concentrations, part of the water must be continuously removed. Therefore, fresh make-up water is necessary to replace the losses by evaporation and the salt purge. Furthermore, due to the intensive aeration the cooling water is exposed to oxygen ingress, which also supports corrosion and bio-fouling. To prevent this, chemical additives are used. Finally, during winter time mist is formed at the top of cooling towers, which can result in glazed frost on the adjacent streets [286].

The quality criterion of a cooling tower is not the duty transferred but the achieved cold water temperature. The duty itself is not significant; if a target value is not achieved, the heat accumulates in the cooling water cycle until the temperature becomes so high that the specified duty is met.
The calculation of cooling towers is very similar to the rate-based calculations for distillation and absorption (Chapter 5.9), using the so-called Merkel number [324]. A very convenient tool for estimating the performance of a cooling tower is the cooling tower characteristics2 [322].

The performance of a cooling tower is limited by the cooling limit temperature. It is a property of a given state of the air (temperature, humidity). It is defined as the lowest temperature that can be achieved by mixing liquid water into this air. The cooling limit temperature can be explained as follows [50]: If humid (but not saturated) air is flowing over a water layer with the same temperature, water will evaporate into the air until the partial pressure is equal to the water vapor pressure. As mentioned above, the energy for the evaporation is taken from the liquid water, which lowers its temperature. As well, a heat transfer from the air to the water takes place, whereby the air is also cooled down. Finally, both water and air have cooled down and end up at the same temperature, while the partial pressure of water in the vapor phase is at saturation state, equal to the vapor pressure. From a heat and mass balance, an equation for the cooling limit temperature tcl can be derived [50]:

hA(tA,in) + xin hV(tA,in) + (xs(tcl) − xin) hW(tcl) − hA(tcl) − xs(tcl) hV(tcl) = 0    (13.4)

2 In memoriam Prof. Dr. Werner Klenke, one of my teachers, who passed away in 2023.
with the indices A (dry air), W (liquid water), V (water vapor), cl (cooling limit) and s (saturation). Note that the inlet condition of the liquid water does not occur. People working in the humid air business use the "1 + x" concept. This means that all caloric quantities are composed of those for dry air, water vapor and liquid water. In the vapor phase, the concentration is expressed as a load x, defined as

x = ṁV / ṁA    (13.5)

The vapor pressure is converted into a load at saturation by

xs = 0.622 ⋅ ps / (p − ps)    (13.6)

Equation (13.4) must be solved iteratively.

Example
Calculate the cooling limit temperature of humid air with tA,in = 16 °C and xin = 0.001. The ambient pressure shall be p = 1 bar.

Solution
Using the TREND software [277], all the enthalpies can be calculated. Note that the enthalpy hV should be calculated at a very low pressure to make sure that a vapor enthalpy is obtained. The error made in comparison to the procedure explained in Chapter 2.8 is negligible. In the procedure, the inlet temperature of 16 °C is taken as a starting value. After checking an arbitrary second temperature, the next tries are carried out according to the regula falsi

tn+1 = tn − (tn − tn−1) ⋅ f(tn) / (f(tn) − f(tn−1))    (13.7)
Equation (13.4) can be evaluated with
– tA,in = 16 °C
– xin = 0.001
– hA(tA,in) = 413.25 J/g
– hV(tA,in) = 2531.21 J/g
The cooling limit temperature is 4.97 °C. This is also the lowest possible cold water temperature which can be reached in a cooling tower with an infinite mass transfer area. The iteration history is given in Table 13.1.
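The iteration can also be reproduced with simplified correlations instead of TREND: a Magnus-type vapor pressure equation and linear heat capacities (cpA = 1.006, cpV = 1.86, cW = 4.19 kJ/(kg K), Δhv(0 °C) = 2500.9 kJ/kg). The absolute enthalpies and the sign of the residual then differ from the table values because of the different reference state, but the root of equation (13.4) is almost unaffected:

```python
import math

P = 1000.0  # ambient pressure in hPa

def p_s(t):
    """Magnus-type vapor pressure of water in hPa (t in degC)."""
    return 6.112 * math.exp(17.62 * t / (243.12 + t))

def x_s(t):
    """Saturation load, equation (13.6)."""
    return 0.622 * p_s(t) / (P - p_s(t))

# Simplified enthalpies in kJ/kg, reference state 0 degC:
h_A = lambda t: 1.006 * t             # dry air
h_V = lambda t: 2500.9 + 1.86 * t     # water vapor
h_W = lambda t: 4.19 * t              # liquid water

def f(t_cl, t_in=16.0, x_in=0.001):
    """Residual of equation (13.4)."""
    return (h_A(t_in) + x_in * h_V(t_in) + (x_s(t_cl) - x_in) * h_W(t_cl)
            - h_A(t_cl) - x_s(t_cl) * h_V(t_cl))

# Regula falsi, equation (13.7):
t0, t1 = 0.0, 16.0
for _ in range(50):
    d = f(t1) - f(t0)
    if d == 0.0:
        break
    t0, t1 = t1, t1 - (t1 - t0) * f(t1) / d
    if abs(f(t1)) < 1e-6:
        break

print(f"cooling limit temperature: {t1:.2f} degC")
```

The simplified property model converges to approx. 5.0 °C, in good agreement with the 4.97 °C obtained with TREND.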

Knowing the cooling limit temperature, the cooling tower characteristics can be set up as follows [322]:

ηc = (tW,in − tW,out) / (tW,in − tcl) = cK (1 − exp(−Λ))    (13.8)


Table 13.1: Iteration history for the cooling limit temperature evaluation.

tcl (°C)   ps(tcl) (bar)   xs(tcl)   hW(tcl) (J/g)   hA(tcl) (J/g)   hV(tcl) (J/g)   Right-hand side (J/g)
16         0.0181          0.0115    67.26           413.25          2531.22         25.93
1          0.0066          0.0041    4.28            398.16          2503.30         −7.34
4.308      0.0083          0.0052    18.21           401.49          2509.45         −1.28
5.009      0.0087          0.0055    21.16           402.20          2510.76         0.075
4.971      0.0087          0.0055    20.99           402.16          2510.68         −0.001 ok.
where ηc is the cooling efficiency and Λ is the so-called air ratio

Λ = ṁA / (ṁW ⋅ λmin)    (13.9)

with

λmin = (hW(tW,in) − hW(tcl)) / (hA(tW,in) + xs(tW,in) hV(tW,in) − hA(tcl) − xs(tcl) hV(tcl) − hW(tcl) (xs(tW,in) − xs(tcl)))    (13.10)

cK is an empirical factor which describes the performance of the cooling tower. It can be determined by evaluating one performance point. Without this data point, the cooling tower characteristics can still be used for general estimations. cK = 1 means that the mass transfer area is large enough to asymptotically reach the cooling limit temperature. More realistic might be cK = 0.8.
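Equations (13.8)–(13.10) can be combined into a short estimation. The sketch below reuses simplified humid-air property functions (Magnus vapor pressure, linear heat capacities) as an approximation for the enthalpies; the water inlet temperature, the cooling limit temperature, the air-to-water ratio and cK are assumptions for illustration:

```python
import math

P = 1000.0  # ambient pressure in hPa

def p_s(t):
    """Magnus-type vapor pressure of water in hPa (t in degC)."""
    return 6.112 * math.exp(17.62 * t / (243.12 + t))

def x_s(t):
    """Saturation load, equation (13.6)."""
    return 0.622 * p_s(t) / (P - p_s(t))

# Simplified enthalpies in kJ/kg, reference state 0 degC:
h_A = lambda t: 1.006 * t
h_V = lambda t: 2500.9 + 1.86 * t
h_W = lambda t: 4.19 * t

tW_in = 40.0   # warm water inlet temperature in degC, assumed
t_cl = 5.0     # cooling limit temperature in degC (see example above)
cK = 0.8       # empirical performance factor, assumed

# Minimum air demand, equation (13.10):
lam_min = (h_W(tW_in) - h_W(t_cl)) / (
    h_A(tW_in) + x_s(tW_in) * h_V(tW_in)
    - h_A(t_cl) - x_s(t_cl) * h_V(t_cl)
    - h_W(t_cl) * (x_s(tW_in) - x_s(t_cl)))

mA_mW = 1.2                                # air-to-water mass flow ratio, assumed
Lam = mA_mW / lam_min                      # air ratio, equation (13.9)
eta_c = cK * (1 - math.exp(-Lam))          # cooling efficiency, equation (13.8)
tW_out = tW_in - eta_c * (tW_in - t_cl)    # achieved cold water temperature

print(f"lam_min = {lam_min:.3f}, eta_c = {eta_c:.3f}, tW_out = {tW_out:.1f} degC")
```

With these assumptions, ηc ≈ 0.56 and a cold water temperature of approx. 20 °C result, illustrating how the characteristics translate an air ratio into an achievable water outlet temperature.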

13.4 Exhaust air treatment

Exhaust air is defined as the sum of gases, vapors, smokes, dusts, soots, and aerosols released to atmosphere that have an impact on the composition of natural air [191]. The definition might be a bit weak; it means that the release of components like carbon dioxide or hydrogen, which are not regarded as air pollutants, does not need to be restricted.3 Nevertheless, some of the components occurring in natural air are considered to be pollutants, as is the case e. g. for methane or methyl chloride, which are natural air components due to the metabolism of animals or plants.

3 Carbon dioxide is a special issue. The vast amounts of CO2 emissions from the combustion of coal and natural gas are regarded to be responsible for the continuous rise of the CO2 concentration in the air, currently considered to be resulting in a future climate change. CO2 emissions coming directly from chemical processes occur in relatively small amounts.

Many of the most interesting fine chemicals, specialty chemicals, and pharmaceuticals are produced batchwise. This implies that it has to be carefully evaluated how frequently the particular streams occur and what their compositions and amounts are. In batch processes, exhaust air can be produced in many ways (Figure 13.10).

Figure 13.10: Ways for the production of exhaust air in batch processes.

The most common one is the charging of vessels. If a liquid is stored in a vessel, there will be a saturated gas phase above the liquid, filling up the rest of the vessel volume. When additional liquid is filled into the vessel, an equivalent vapor volume is displaced upward, usually directly into the exhaust air line. A similar mechanism is the so-called vessel breathing. If a vessel has an open connection to the environment, air will be sucked in when the content of the vessel cools down and contracts, for instance at night. When the vessel is heated up again, its content expands, and the air, now saturated with the vapor being in equilibrium with the liquid in the vessel, is displaced into the exhaust air line. Other well-known mechanisms are the depressurization of vessels, the flushing of vessels in order to dilute and remove gaseous substances from a vessel, or simply the removal of gaseous by-products out of a reactor by pressure relief.

Exhaust air has to comply with certain restrictions depending on the country where the plant is located. For example, in Germany, the TA Luft [192] is the decisive regulation. Table 13.2 gives an overview of the particular limitations. The pollutants are assigned to certain classes according to their hazardous potential. Each class has a bagatelle limit, i. e. any industrial unit has the right to release this amount without being prosecuted. Beyond this value, a limiting concentration has to be complied with. Carcinogenic substances are treated in an extraordinarily strict way. In some cases, it is defined how the particular pollutants are counted, for example just the carbon content for simple organic substances or the typical combustion products for chlorine and bromine compounds (HCl and HBr).
The limiting values are generally based on the state-of-the-art in removing the particular substances from gas streams. They can be regarded as very strict.

There are several options for exhaust air treatment. They can be categorized in two ways. Combustion processes and biological treatments destroy the pollutants, whereas condensation, adsorption, absorption, or membrane processes separate them from the air; if the pollutants are valuable and their purity is sufficient, they can then be recovered. Also, the load of pollutants and the amount of exhaust air are decisive for the choice of the process.

Table 13.2: Limiting concentrations and bagatelle limits according to TA Luft [192].

Substance                 Lim. conc.   Bagatelle lim.   Counted as   Examples
                          (mg/Nm³)     (g/h)
Organic substances        50           500              C            methanol, ethyl acetate
Org. subst., cl. I        20           100                           acetaldehyde, vinyl acetate
Org. subst., cl. II       100          500                           acetic acid, nitromethane
Carcinogenic substances
Carc. subst., cl. I       0.05         0.15                          As, Cd
Carc. subst., cl. II      0.5          1.5                           acrylonitrile, ethylene oxide
Carc. subst., cl. III     1            2.5                           benzene, vinyl chloride
Inorganic substances
Ammonia                   30           150
Hydrogen cyanide          3            15
Chlorine compounds        30           150              HCl
Nitrogen oxides           350          1800             NO2
Sulfur oxides             350          1800             SO2
Bromine compounds         3            15               HBr

Condensation can only be used for comparably small exhaust air streams. As will be shown, only cryo-condensation can normally fulfill the TA Luft, where the commercial units have a defined size; the load is not important. Membrane and absorption processes can also handle only comparably small amounts of exhaust air due to the size of the common commercial units. For absorption, high loads are advantageous to make it worthwhile to recover the pollutants. For membrane processes, complete separations can hardly be achieved; therefore, they only make sense for low concentrations. Combustion is an effective but expensive exhaust air treatment. Therefore, a high exhaust air stream is required to make it worthwhile. Thermal combustion works for all concentrations, but it is especially economical for high pollutant contents, which save fuel for achieving the high temperatures (850–1200 °C). For low pollutant concentrations, catalytic combustion can be used, as only approx. 400 °C is necessary. For higher concentrations (> 10 g/Nm³), the catalyst might be damaged. Adsorption processes can handle large exhaust air streams, but for high concentrations it is difficult to remove the heat of adsorption from the bed. Biological treatment is only possible for substances that can easily be dissolved in water. It is the best process for large exhaust air streams with low pollutant concentration, but it needs careful maintenance.

Another important issue is the predictability of the processes. For the combustion, absorption and condensation processes, a mass balance can in principle be predicted without experiments. Adsorption and membrane processes usually need experiments even for the design, unless references are available. The lifetime of membranes for a new task can hardly be estimated, as often many different components are involved and come into question for spoiling the membrane. This causes great uncertainties for the final investment costs. Therefore, in most cases membranes are unlikely to be the best choice. Biological exhaust air treatment processes cannot be predicted at all. Long-term trials must be carried out with changing operation conditions to get a feeling about their performance. In fact, most exhaust air problems require an immediate solution where someone can give a warranty for the performance. Therefore, it is often not appropriate to suggest adsorption or biological treatment, even if the process itself might be interesting. An excellent compilation of possible errors in exhaust air projects is listed in [286].

13.4.1 Condensation

If three or more people are sitting together in a meeting to discuss a condensation project, at least one of them will have the glorious idea that it is sufficient to cool down to the lowest normal boiling point. (Engineering wisdom, possibly a law of nature)

Condensation looks very simple but is not, as we have no feeling for even basic thermodynamics. Thus, as cited above, many people think that a component condenses completely when the temperature falls below its normal boiling point. That would make things pretty easy: for methanol, 65 °C would be a sufficient cooling temperature, for benzene 80 °C, for pentane 36 °C, and so on. Simple cooling water with t = 30 °C would be an appropriate cooling agent. In fact, things are much more difficult, as the following example shows.

Example
What is the minimum condensation temperature to guarantee that the methanol concentration is in line with TA Luft (50 mg/Nm3 C; see Table 13.2)?

Solution
The carbon fraction of methanol is approximately 12/32 = 0.375; therefore, the concentration limit is 50 mg/Nm3 / 0.375 = 133 mg/Nm3. From the ideal gas law, we get the corresponding partial pressure:

pMeOH = mRT/(MV) = (133 ⋅ 10⁻⁶ kg ⋅ 8.31446 J/(mol K) ⋅ 273.15 K) / (32 ⋅ 10⁻³ kg/mol ⋅ 1 m³) = 9.4 Pa   (13.11)
This partial pressure corresponds to a saturation temperature of t = −70 °C! Similar results are obtained for other substances (toluene: −75 °C, n-hexane: −96 °C, ammonia: −145 °C, vinyl chloride: −154 °C, n-pentane: −116 °C). Even for the removal of a heavy boiling substance like n-dodecane a condensation temperature of −13 °C is required.4

4 The extrapolation of the vapor pressure curves to these low temperatures is not very reliable. The values must be considered as estimations.
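The estimate above can be sketched numerically. The Antoine constants below are illustrative textbook-style values for methanol (pressure in mmHg, temperature in °C), not taken from this book; as the footnote warns, extrapolating them to about −70 °C is far outside their stated validity range, so the result is only an order-of-magnitude check.

```python
# Sketch: minimum condensation temperature for the TA Luft methanol limit.
# Assumptions: ideal gas, illustrative Antoine constants for methanol.

limit_c = 50e-6                         # kg organic carbon per Nm3 (TA Luft)
carbon_fraction = 12.011 / 32.042       # carbon mass fraction of methanol
c_meoh = limit_c / carbon_fraction      # kg MeOH per Nm3 (~133 mg/Nm3)

# ideal gas law: p = m R T / (M V) at normal conditions (273.15 K, 1 m3)
R, M, T0 = 8.31446, 32.042e-3, 273.15
p_max = c_meoh * R * T0 / M             # Pa, allowed partial pressure (~9.4 Pa)

# Antoine equation, log10(p/mmHg) = A - B/(t + C); illustrative constants
A, B, C = 8.07240, 1574.99, 238.87

def p_sat(t_celsius):
    """Vapor pressure of methanol in Pa (1 mmHg = 133.322 Pa)."""
    return 133.322 * 10 ** (A - B / (t_celsius + C))

# bisection for the temperature where p_sat equals the allowed partial pressure
lo, hi = -120.0, 0.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if p_sat(mid) > p_max:
        hi = mid
    else:
        lo = mid

print(f"allowed partial pressure: {p_max:.1f} Pa")
print(f"required condensation temperature: {mid:.0f} degC")
```

The bisection lands near the −70 °C quoted in the text, confirming that ordinary cooling water is hopeless for meeting the limit by condensation.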


Therefore, in most cases TA Luft can only be met by cryo-condensation using a cooling agent like liquid nitrogen [193]. At ambient pressure, liquid nitrogen has a temperature of t = −196 °C, where almost all substances have a vapor pressure far below the limit of TA Luft. On the other hand, most of the substances are solid at that temperature (water!), which makes their handling complicated. As water is hardly avoidable, it is important to remove it before the stream enters the cryo-unit, e. g. by an adsorption bed. Often, two twin plants for cryo-condensation are operated alternately; while one is used for air cleaning, the other one is being defrosted. Substances like cyclohexane form a kind of snow on the cooling coils. After a short time, the cooling coils are practically insulated, and the heat exchange breaks down.

The most complicated problem in cryo-condensation is the formation of aerosols. Its mechanism is as follows. If the temperature difference between cooling agent and bulk fluid is too large, a temperature profile perpendicular to the flow direction will develop where the temperature falls below the condensation temperature far away from the wall in the gaseous phase. Spontaneous condensation takes place, and very small droplets with a size of about 1 µm are formed. These droplets have a rate of descent (Chapter 9) that is too small for them to be separated within the equipment. At the same time, they are too large to take part in molecular diffusion towards the cooling area. As a result, these droplets can pass through the apparatus, although they actually have been condensed. Behind the condenser, they will be evaporated again and can be detected as pollutants. Therefore, care must be taken that an appropriate temperature of the cooling agent is used and controlled. In practical applications, the liquid nitrogen is never used directly as cooling agent. Instead, it is just taken to control the temperature of a secondary cooling cycle, operating with gaseous nitrogen.
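A quick Stokes' law estimate shows why such droplets cannot settle out within the equipment. The property values below are rough round numbers for illustration, not design data.

```python
# Stokes terminal velocity of a small droplet: v = d^2 (rho_p - rho_g) g / (18 mu)
g = 9.81            # m/s2
d = 1e-6            # m, aerosol droplet diameter (~1 um, as in the text)
rho_p = 800.0       # kg/m3, assumed condensate density
rho_g = 1.3         # kg/m3, gas density (order of magnitude)
mu = 1.5e-5         # Pa s, assumed gas viscosity at low temperature

v = d**2 * (rho_p - rho_g) * g / (18 * mu)
print(f"settling velocity: {v*1000:.4f} mm/s")
```

The settling velocity comes out in the range of hundredths of a millimeter per second, which is negligible compared with typical gas velocities in a condenser.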
Figure 13.11 shows a typical liquid nitrogen condenser plant with a cooling cycle (Cryosolv process).

Figure 13.11: Principle scheme of a cryo-condensation unit (Cryosolv process) [193]. © Messer SE & Co. KGaA.

The liquid nitrogen is evaporated in the cycle gas cooler, where the cycle gas is cooled down to an appropriate temperature. The pollutants are liquefied on the cooling coil of the cryo-condenser. The condensate can be collected at the bottom of the condenser. If it is a pure substance, it can be used again. The purified gas can be released into the environment. For a better utilization of the liquid nitrogen, the cooling of the cycle gas is supported by the cold purified exhaust gas and the evaporated nitrogen in the recuperator. The condensate could be used as well, but normally the amounts are too small.

Cryo-condensers can be purchased as units. Normally, they are designed for 700–1000 Nm3/h, which can be considered a relatively small exhaust air stream. Investment costs for cryo-condensers are low in comparison with other exhaust air treatment processes. The tank can even be rented. A strong point is the further application of the gaseous nitrogen. In principle, it can be released to the environment as well, but experience shows that the process can only be operated economically if the gaseous nitrogen can be applied otherwise, for instance for inertization in the plant itself. In these cases, cryo-condensation is the favored option for exhaust air treatment. The operation costs are then comparably low, as the plant manager has to purchase the nitrogen anyway.

The Cryosolv process has been further improved in recent years; currently, the DuoCondex process [194] is considered to be the most effective one. The heart of the DuoCondex process is the so-called thermo-controller (Figure 13.12). In steady-state operation, the liquid nitrogen is evaporated by gaseous nitrogen which

Figure 13.12: DuoCondex process scheme. © Messer SE & Co. KGaA.


has already been used. The more the nitrogen vapor is heated up due to condensation of the pollutants, the more nitrogen is obviously needed, and the more fresh liquid nitrogen is evaporated. Moreover, the gaseous nitrogen stream is cooled down again to a temperature close to the boiling temperature of nitrogen and used in the cryo-condenser for a second time. In a third tube bundle, the cooled purified gas is led back through the cryo-condenser for additional cooling of the exhaust gas. The temperature differences between the exhaust gas and the various cooling media are usually low enough to avoid freezing of the pollutants and the formation of aerosols. The liquid nitrogen consumption is further reduced and close to the theoretical optimum.

To summarize the aspects of cryo-condensation, the following statements can be given:
– Cryo-condensation uses standard units with low investment costs, but has high operation costs.
– To keep operation costs low, the evaporated nitrogen must be used elsewhere in the plant to compensate for part of the operation costs.
– Only liquid nitrogen and electric current are needed as utilities. The tank necessary for the storage of the liquid nitrogen can be rented.
– The predictability is restricted due to the possibility of aerosol formation and the uncertainty of the vapor pressure line when extrapolated to low temperatures.
– Cryo-condensation systems in general react very slowly to changes in load. Therefore, the exhaust air stream should be as steady as possible.
– Only relatively small exhaust air streams can be treated.
– Due to solid or snow formation (water and cyclohexane are the most troublesome components), twin plants are usually necessary for alternating operation and defrosting.

13.4.2 Combustion

If a combustion process is used for exhaust air purification, the pollutants are destroyed by chemical reaction. Combustion is simple if the pollutants only consist of C, H, and O.
In these cases, only carbon dioxide and water are formed as combustion products, e. g.

C2H5OH + 3 O2 → 2 CO2 + 3 H2O

Both can simply be released to the environment, as they are natural air components. More difficulties come up if other elements occur, as new pollutants can be formed that must be removed. Chlorine is one of the most widely used elements in the chemical industry. In a combustion process, it will be completely transformed to HCl, e. g.

CH2Cl2 + O2 → CO2 + 2 HCl

This clear statement might be a bit surprising, as one could easily think of water formation, for example according to the Deacon reaction:

2 HCl + 0.5 O2 ⇌ H2O + Cl2

The Deacon reaction is an equilibrium reaction. At the high temperatures that occur in the combustion chamber (≥ 900 °C), the equilibrium is more or less completely on the HCl side. When the flue gas is cooled down in the steam generator (see below) and in the environment, the equilibrium shifts to the Cl2 side, but the reaction kinetics are relatively slow so that only small amounts of Cl2 are formed. Therefore, a rapid cooling procedure must be performed. Details can be found in [11], [191] and [195]. HCl can be removed from the flue gas with a scrubber, usually using water or sodium hydroxide solution as absorptive agents. If significant amounts of chlorine are formed, water is no longer appropriate for scrubbing. Caustic soda can transform the chlorine to hypochlorite according to

Cl2 + 2 OH⁻ ⇌ OCl⁻ + Cl⁻ + H2O

The hypochlorite anion can be transformed to chloride in the presence of the bisulfite ion:

OH⁻ + OCl⁻ + HSO3⁻ ⇌ H2O + Cl⁻ + SO4²⁻

The other halogen elements (F, Br, and I) behave analogously. Sulfur will be transformed into sulfur dioxide SO2. SO2 can hardly be absorbed with pure water, but a caustic soda solution is quite efficient. For large sulfur loads in flue gases, other processes like adsorption on activated carbon or reaction with calcium hydroxide to gypsum are known and well established [196, 197]. Chemically bonded nitrogen (ammonia, amines, etc.) will be transformed into nitrogen oxides to an extent that can hardly be determined by theoretical predictions [198]. To be conservative, it is often assumed that chemically bonded nitrogen is converted to NO completely (fuel NO), e. g. according to

2 C2H5NH2 + 8.5 O2 → 7 H2O + 4 CO2 + 2 NO

Fuel NO is formed already at temperatures as low as 800 °C. NO can be formed by reactions of elementary nitrogen and oxygen from the air as well (thermal NO), especially at high temperatures.
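The elemental bookkeeping described above (C → CO2, Cl → HCl, S → SO2, remaining H → H2O) can be sketched as a small oxygen-demand calculator. The function and the dictionary-based element input are illustrative, not from the book; the Cl-to-HCl convention consumes one hydrogen atom per chlorine, as in the CH2Cl2 example.

```python
def o2_demand(elements):
    """Moles of O2 per mole of pollutant for complete combustion,
    assuming C -> CO2, Cl -> HCl (consuming one H each), S -> SO2,
    remaining H -> H2O, and crediting oxygen already in the molecule."""
    c = elements.get("C", 0)
    h = elements.get("H", 0)
    o = elements.get("O", 0)
    cl = elements.get("Cl", 0)
    s = elements.get("S", 0)
    h_to_water = h - cl          # hydrogen left over after HCl formation
    return c + h_to_water / 4 + s - o / 2

# ethanol C2H5OH: matches the 3 O2 in the reaction above
print(o2_demand({"C": 2, "H": 6, "O": 1}))
# dichloromethane CH2Cl2: matches the 1 O2 in the reaction above
print(o2_demand({"C": 1, "H": 2, "Cl": 2}))
```

Both results reproduce the stoichiometric coefficients of the reactions given in the text, which is a quick way to check hand-balanced combustion equations.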
The amount of thermal NO increases dramatically with temperature; for example, the equilibrium concentration of NO in air is 35 ppm at 1000 K and 1300 ppm at 1500 K [198]. When the flue gas is cooled down, NO forms an equilibrium with NO2 and other nitrogen oxides. At ambient temperatures, NO2 is the dominating component. Therefore, all the nitrogen oxides are counted as NO2 in mass balances and referred to as NOx. For NOx removal, so-called DeNOx processes have to be integrated. They operate with the reaction

4 NO + 4 NH3 + O2 → 6 H2O + 4 N2


The “denoxation” can be performed at high temperatures (approximately 900 °C) without a catalyst (SNCR process, selective noncatalytic reduction) or at low temperatures (approximately 300 °C), using metal oxides like V2O5 as catalysts (SCR process, selective catalytic reduction). Most plants operate with ammonia as reductive agent. As an alternative, urea can be used and decomposed to ammonia according to

NH2–CO–NH2 + H2O → 2 NH3 + CO2

Care must be taken to ensure that there is no excess ammonia, in order to avoid simply replacing the NOx problem with an ammonia problem.

N2O (laughing gas, well known as an anesthetic) is also a very critical component that occurs as a by-product in some syntheses. It is not listed in TA Luft, but it is regarded as one of the most critical greenhouse gases. Therefore, authorities usually do not accept a defined N2O emission. Nowadays, N2O can be decomposed to nitrogen and oxygen at comparably low temperatures (approx. 425–600 °C) with appropriate catalysts [199, 200].

Chemically bonded Si is converted into solid SiO2, which is simple quartz or sand. From the environmental point of view, this is not critical at all, but the small particles that are formed plug and erode the combustion unit. Therefore, special constructions and operation conditions have to be chosen when Si occurs.

The thermodynamic calculation of combustion reactions is pretty easy. The necessary combustion temperature is determined by the degradation temperature of the pollutants. Usually, for a number of pollutants the manufacturers of combustion units define the necessary residence time in the combustion chamber and the corresponding temperature for which they give a warranty to comply with TA Luft. This temperature is usually not achieved by the combustion of the pollutants themselves. Instead, natural gas must be injected and burnt up as well.
The combustion temperature can be calculated with an adiabatic energy balance, using the specific heat capacities and the enthalpies of formation of the participating substances. The necessary amount of natural gas is evaluated in an iteration procedure, checking whether the temperature obtained equals the required one. The natural gas itself is often characterized by its lower heating value; however, for process simulation it is advantageous to describe it by its composition.

In general, we can distinguish between thermal and catalytic combustion. Thermal combustion takes place at temperatures from 850–1200 °C. It is appropriate for high pollutant loads. The high temperatures make sure that complex molecules degrade and the simple combustion products can be formed. For cost reasons, the heat of the flue gas has to be used, e. g. by steam generation. Figure 13.13 shows a typical thermal combustion unit. The supplemental fuel and air are mixed and fired in the main burner. The combustion chamber is refractory-lined, and the exhaust air is introduced into the combustion chamber in the flame zone. Supplemental combustion air can also be added, if necessary.

Figure 13.13: Typical incinerator scheme [201].

The dimensioning of such a combustion unit can be done according to the residence time of the exhaust air stream. Depending on the kind of pollutants, the residence time should be between τ = 0.6–2 s. It has to be pointed out that the physical volume flow at combustion conditions is decisive, not the standardized volume flow in Nm3. The pollutant concentrations are not relevant for the dimensions of the unit, as for the combustion itself the pollutants are supplemented by natural gas to keep the necessary combustion temperature anyway. It is also important to note that combustion units have a limited capacity range. The ratio between lower and upper bound of the capacity is approx. 1 : 5. At the lower bound, the danger is that there is not enough turbulence to get an adequate mixing of the pollutants with air. At the upper bound, the residence time in the combustion chamber might not be long enough.

The scheme for catalytic combustion is shown in Figure 13.14. The polluted air enters a heat exchanger, where it is preheated by the hot flue gas stream. The gases then enter the catalyst bed. A noble metal catalyst is used to promote the desired oxidation reactions at relatively low temperatures (250–400 °C) and at faster conversion rates. Therefore, smaller units can often be specified, and less costly construction materials can be used. The catalyst bed can be designed in the form of structured or random packing, made of ceramic. Its volume is determined by the required destruction efficiency of the particular pollutants, the flowrate, and the properties of the vapor stream. As a rule of thumb, 5000–20 000 Nm3/h per m3 catalyst bed can be processed. Catalytic combustion makes sense for low pollutant loads (< 10 g/Nm3 or 25 % LEL (lower explosion

Figure 13.14: Catalytic combustion scheme [201].


limit, Chapter 14.4) [201]), as in these cases the heating value of the pollutants is low, and the consumption of natural gas to obtain the temperatures required for thermal combustion would therefore be high. The oxygen concentration in the waste gas should be < 2 mol % [201]. The advantage of catalytic combustion is that smaller equipment and less costly materials can be used due to the lower temperatures. However, catalysts are in general sensitive, and the kind and amount of pollutants should be clearly defined when a catalytic combustion is chosen. Phosphorus, heavy metals, and silicon are catalyst poisons, and occasional high pollutant loads lead to high temperatures and, subsequently, deactivation. Another problem, called “the classical design mistake”, can come up when ammonia and chlorinated compounds are concurrently present in the exhaust air stream. In the combustion unit, ammonium chloride will be formed, consisting of small particles that block the catalyst after a remarkably short time.

Regenerative thermal oxidizers (RTO) are a third kind of combustion unit (Figure 13.15), appropriate for very diluted exhaust air streams. In a typical RTO unit, there are three ceramic beds for heat recovery. The contaminated gas enters one of the beds (for simplicity: bed 1) and is effectively preheated by passing the hot ceramic bed, so that the burner itself only needs to cover the last part of the preheating of the exhaust air. After having been incinerated, the clean exhaust gas stream exits the combustion chamber through bed 3. Its sensible heat is transferred to the bed, where it can be used in the next cycle. Part of the clean gas is led to bed 2, which was the preheating bed during the last cycle, to purge it; in this way, the clean air is contaminated again and therefore led back to bed 1.

Figure 13.15: Typical RTO unit [201].

At the completion of each cycle, the task of the beds is changed by switching the valves at the inlet and outlet lines. Cooling below the dew point in the heat recovery section should be avoided because of corrosion. The control system which switches between the beds is comparably complex.
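The iterative natural-gas estimate and the residence-time sizing described above can be sketched with a deliberately crude constant-cp model. All numbers (cp, lower heating value, air demand of methane, inlet temperature) are illustrative assumptions, not design values from the book.

```python
def natural_gas_demand(m_exhaust, t_in=25.0, t_target=850.0,
                       cp=1200.0, lhv=50e6, air_per_fuel=17.2):
    """Fixed-point iteration for the natural gas flow (kg/s) that lifts the
    whole flue gas stream to the target combustion temperature.
    Crude model: one constant cp in J/(kg K), heating value of the
    pollutants neglected, combustion air for the gas included in the mass."""
    m_gas = 0.0
    for _ in range(100):
        m_flue = m_exhaust + m_gas * (1.0 + air_per_fuel)
        m_new = m_flue * cp * (t_target - t_in) / lhv
        if abs(m_new - m_gas) < 1e-9:
            break
        m_gas = m_new
    return m_gas

def chamber_volume(v_norm, t_comb=850.0, tau=1.0):
    """Combustion chamber volume (m3) from the residence time tau (s).
    The actual volume flow at combustion temperature is decisive,
    not the standardized flow v_norm in Nm3/h."""
    v_actual = v_norm / 3600.0 * (t_comb + 273.15) / 273.15   # m3/s at ~1 bar
    return v_actual * tau

m_gas = natural_gas_demand(1.0)      # kg/s gas per 1 kg/s exhaust air
vol = chamber_volume(1000.0)         # 1000 Nm3/h, tau = 1 s
print(f"natural gas: {m_gas*1000:.0f} g/s per kg/s exhaust air")
print(f"chamber volume: {vol:.2f} m3")
```

Note how the chamber volume is roughly four times larger than a naive calculation with the Nm3 flow would suggest, exactly the pitfall the text warns about.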

The advantages of combustion processes are the low operation costs due to steam production (see below). No trials are necessary to predict the outlet streams of the combustion. The main disadvantages are the high investment costs and the complex safety concept that is necessary, as the flame must be regarded as a permanent ignition source. The availability is high for thermal combustion; for catalytic combustion, it is limited by the lifetime of the catalyst.

After passing a typical combustion unit (Figure 13.16), there are a few other necessary steps before the exhaust air is released into the environment. After “denoxation” (see above), the heat of the stream has to be used for economic reasons. In the steam generator, the stream is cooled down to approx. 270 °C. In many cases, the benefit of steam production can fully compensate for the expenses for the natural gas. Downstream of the steam generator, the flue gas is still too hot for the scrubber. It is first cooled down by direct injection of water in a so-called quench. As part of the water evaporates, the temperature can be lowered to approx. 70 °C. In the scrubber, acid gases like HCl, HBr, or SO2 are finally chemically absorbed, usually with caustic soda solution due to possible chlorine or bromine formation according to the Deacon reaction.

Figure 13.16: Thermal combustion scheme.


13.4.3 Absorption

Absorption is another exhaust air purification process whose performance and design can in most cases be evaluated without experiments. Possible absorptive agents can be found theoretically as well. The demands on a suitable absorbent are high capacity; low volatility, viscosity, corrosivity, and toxicity; high thermal and chemical stability; and a high flash point. The recycling of valuable solvents from the exhaust air is possible, especially for exhaust air streams that contain only one component as a pollutant. In these cases, the relatively high operation costs can be significantly reduced. Nevertheless, not many such cases are known. Absorption is also a promising alternative if the loaded absorbent can be sold. Aqueous ammonia solution is one of the few examples. If water is used as an absorbent, it might be useful to send the loaded water directly to the biological waste water treatment. However, water is usually a bad solvent if organic substances have to be absorbed. Often, organic solvents are taken, for example glycol ethers for chlorinated compounds [202]. Other options for the removal of organic pollutants are heavy alkanes or even biodiesel (fatty acid methyl esters) [203], which can be used as fuel afterwards. One should avoid stupid combinations like taking water as an absorbent for toluene (“authority scrubber”). Otherwise, a desorption step to remove the load from the absorptive agent cannot be avoided. Figure 13.17 shows a typical absorption/desorption unit, where the absorbent is cooled down for the absorption step and heated up for the desorption step. The desorbed gas can usually be condensed and sent to a liquid waste incineration, which should be available at any chemical site. The desorber column can also be designed as a conventional distillation with condenser and reflux, if the losses of the absorbent due to its volatility are too high, or as a stripping column.
It is worth mentioning that at least the absorption column should be simulated with a rate-based calculation (Chapter 5), as in most cases the mass transfer resistance in the vapor phase is decisive for the final design of the column. As absorption equipment, in addition to packed and tray columns

Figure 13.17: Scheme of an absorption/ desorption unit.

also spray towers, bubble columns, venturi scrubbers, and many other types come into consideration. Absorption has severe disadvantages for highly volatile components and when hydrophobic and hydrophilic substances must be absorbed simultaneously, which is often hardly possible with one absorbent. The investment costs can be considerable if high-quality construction materials must be used to avoid corrosion. On the other hand, absorption is not sensitive to unsteady operating conditions like exhaust air flow, load, or concentrations. The capacity can be adjusted, and the pressure drop across an absorption column is relatively small, so that the blower for the exhaust air has a low energy consumption.
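For a first estimate of the number of theoretical stages, the classical Kremser equation can be used before a rate-based simulation refines the design. The pollutant concentrations and the absorption factor in the example are purely illustrative.

```python
from math import log

def kremser_stages(y_in, y_out, x_in, m, A):
    """Number of theoretical stages for countercurrent absorption
    (Kremser equation), with absorption factor A = L/(m V), equilibrium
    slope m (y* = m x), and solvent inlet loading x_in."""
    if abs(A - 1.0) < 1e-12:
        return (y_in - y_out) / (y_out - m * x_in)
    ratio = (y_in - m * x_in) / (y_out - m * x_in)
    return log(ratio * (1 - 1 / A) + 1 / A) / log(A)

# example: reduce a pollutant mole fraction from 0.001 to 0.00001 (99 % removal)
# with clean solvent (x_in = 0) and an assumed absorption factor A = 1.5
n = kremser_stages(1e-3, 1e-5, 0.0, m=1.0, A=1.5)
print(f"theoretical stages: {n:.1f}")
```

With A = 1.5, the 99 % removal requires roughly nine theoretical stages; raising the solvent rate (larger A) reduces the stage count but increases the desorption duty.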

13.4.4 Biological exhaust air treatment

Biological processes for exhaust air treatment have become more and more important. They are appropriate for huge exhaust air streams with low pollutant concentration. The pollutants must be soluble in water and biodegradable. The exhaust air should have a temperature in the range 5–60 °C and must not contain toxic substances. If these requirements are fulfilled, biological processes are in general the best processes due to their low investment and operation costs. However, their effectiveness cannot be predicted. If a vendor has no references for a defined exhaust air problem, long-term experiments, usually lasting several months, are necessary to prove that the targets can be met.

Biological degradation is performed by microorganisms like bacteria or fungi [196]. All of these microorganisms are surrounded by a water film which they need for their metabolism. Therefore, it is necessary for the pollutants to be soluble in water so that they can get in contact with the microorganisms. Furthermore, nutrients and trace elements (nitrogen, potassium, phosphorus) must be provided for the microorganisms. The degradation itself yields carbon dioxide and water as products. Other elements like chlorine, nitrogen or sulfur will be transformed into inorganic compounds (HCl, H2SO4, nitrates), which may accumulate in the water, where they have a detrimental effect. Increasing temperatures accelerate the degradation process but decrease the solubility of the gases in water. Thus, an optimum temperature has to be evaluated experimentally. The mass transfer from the vapor phase into the liquid and finally to the microorganisms is decisive for the effectiveness of the process. Therefore, contact areas between the water and the exhaust air should be as large as possible. A severe disadvantage of biological processes is that the microorganisms of a specific plant can specialize in degrading the most common substances in the exhaust air.
Components that occur only occasionally can then be ignored, and they remain dissolved in the water. There are several process options for biological exhaust air treatment. In biofilters, the microorganisms are located on a solid filter material, which is sprinkled with water. The pollutants are absorbed by the liquid as well as adsorbed by the filter material. As filters, compost materials, turf, brush-wood, bark, wood, coconut fibers, foams, and other


porous materials can be used. Inorganic nutrients (nitrogen, phosphorus, etc.) can be delivered by the filter material itself or supplied with the sprinkling water. For the design, it has to be taken into account that the exhaust air coming out of the biofilter is always saturated with water due to the intensive contact with the filter material. Therefore, biofilters are prone to drying out, which leads to worse conditions for the microorganisms, so the humidity has to be controlled carefully. The exhaust air is often humidified before it is led into the biofilter. As a rule of thumb, for the calculation of the volume of the filter layer it can be assumed that the exhaust air load should be in the range 100–250 Nm3/h per m3 filter material. The degradation capacity for pollutants can be 10–100 g/h per m3 filter material [196]. This leads to considerable dimensions for biofilters.

Bioscrubbers are scrubbers where liquid from an activated sludge tank is used as absorbent. The packing is inert. For the evaluation of the dimensions, bioscrubbers can be treated like normal scrubbers. They are considerably smaller than biofilters. Biotrickling filters combine the principles of bioscrubbers and biofilters. The microorganisms settle on the packing so that the absorbed pollutants are degraded right on the spot.

New developments in biological exhaust air treatment aim to reduce the dimensions, especially for biofilters. Bioscrubbers could also be implemented as tray columns, where the concentration of microorganisms could be much higher. It is estimated that a degradation rate of approx. 1300 g/(m2 h) can be realized. A second, hydrophobic solvent in addition to water could form a second liquid phase in a scrubber, which could absorb hydrophobic pollutants from the exhaust air. It could be recycled in the activated sludge tank. Finally, membranes on which the microorganisms can settle could be used.
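The two rules of thumb above translate into a simple sizing check: whichever criterion demands the larger filter volume governs. The function below is an illustrative sketch using mid-range values from the quoted intervals.

```python
def biofilter_volume(v_exhaust, pollutant_load,
                     air_spec=200.0, deg_spec=50.0):
    """Biofilter volume (m3) from the two rules of thumb quoted in the text:
    air_spec = allowable exhaust air load, Nm3/h per m3 filter (100-250)
    deg_spec = degradation capacity, g/h per m3 filter (10-100)
    v_exhaust in Nm3/h, pollutant_load in g/h; the larger volume governs."""
    v_air = v_exhaust / air_spec
    v_deg = pollutant_load / deg_spec
    return max(v_air, v_deg)

# 10 000 Nm3/h of exhaust air with 1 g/Nm3 pollutant -> 10 000 g/h load
v = biofilter_volume(10_000.0, 10_000.0)
print(f"required filter volume: {v:.0f} m3")
```

In this example the degradation capacity, not the air load, dictates the volume, illustrating the "considerable dimensions" mentioned in the text.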
A few hundred bioprocesses for exhaust air treatment are operated in Germany. Most of them are biofilters, used for agriculture, fish industry, and sewage plants. The application makes sense for pollutant loads of 1000–1500 mg/m3 of organic carbon [196].

13.4.5 Exhaust air treatment with membranes

Membrane applications also have potential for exhaust air cleaning, especially in combination with adsorption. The advantages of membranes are the simple, modular construction and the low space demand. On the other hand, the predictability of their performance is very low, and even if references are available, there is still doubt about their mechanical, thermal, and chemical stability as well as about their sensitivity to fouling. Furthermore, their properties vary during operation, as many membrane materials swell when they are exposed to the pollutants. There are relatively few references for membrane separations. They refer to simple cases like the removal of toluene or a hydrocarbon from exhaust air. Figure 13.18 shows an example. As the partial pressure difference is the driving force for the flow through the membrane, a compressor is used on the pressure side and a vacuum pump on the suction


Figure 13.18: Process scheme for gas permeation.

side. The membrane only achieves an enrichment of the pollutants on the suction side. The pollutants in the permeate are then partially removed by condensation, whereas the rest is fed back to the pressure side. In comparison to the simplicity of the problem, this is a quite complex and expensive process. At least the pollutants can be recycled if this seems useful.
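A very rough feel for the required membrane area can be obtained from a permeance-based flux estimate, J = Π (p_feed y_feed − p_perm y_perm). The permeance, pressures, and compositions below are purely illustrative assumptions; real modules need the experimental characterization the text calls for.

```python
def membrane_area(n_dot, permeance, p_feed, y_feed, p_perm, y_perm):
    """Membrane area (m2) to permeate n_dot (mol/s) of pollutant at a
    constant partial-pressure driving force. Crude: no concentration
    profile along the module, constant permeance (no swelling)."""
    driving_force = p_feed * y_feed - p_perm * y_perm   # Pa
    flux = permeance * driving_force                    # mol/(m2 s)
    return n_dot / flux

# example: 0.01 mol/s pollutant, assumed permeance 1e-7 mol/(m2 s Pa),
# feed at 3 bar with 1 mol % pollutant, permeate at 0.1 bar with 10 mol %
area = membrane_area(0.01, 1e-7, 3e5, 0.01, 1e4, 0.10)
print(f"membrane area: {area:.0f} m2")
```

Even this optimistic constant-driving-force estimate shows why compressor, vacuum pump, and membrane area together make the process expensive for a dilute stream.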

13.4.6 Adsorption processes

Like membrane processes, adsorption can also be an option for exhaust air treatment. The foundations and the terms have already been explained in Chapter 7.2. There are several kinds of adsorbers that differ in the way the adsorbent is treated. It can be fixed in a packed bed or in a moving bed, or it can be implemented as a fluidized bed. The most popular way is the fixed bed because of its simplicity and the low abrasion of the particles. However, the big disadvantage is that the adsorption process is transient. For a continuous process, a second apparatus is necessary to take over the task when the first is being regenerated, and vice versa (Chapter 7.2).

Adsorption processes are favorable if very low pollutant contents in the exhaust air are targeted. Compared to absorption, the investment and operation costs for adsorption are considerably higher, by a factor of up to 3 [196]. However, adsorption is often used for relatively small exhaust air streams, as no major costs for recycling of liquids occur. Another advantage of adsorption is the option of recycling the pollutants, which often easily compensates for the disadvantages [196]. A serious disadvantage is that an extended safety concept is necessary due to the danger of fire. Activated carbon with huge surface areas, the presence of oxygen, and the release of the heat of adsorption provide good conditions for fire. In fact, it has been found in many cases that smoldering fires were active inside the adsorber bed which were not detected by the operator team. Numerous examples are known of the so-called “Monday fires” [143]. On Friday, machines and vessels are often cleaned with large amounts of organic solvents, which remain in


the adsorber for the weekend and cause the fire after the new startup on Monday. Nowadays, these Monday fires can be avoided by means of modern CO sensor technology, which initiates flooding of the adsorber with nitrogen or carbon dioxide [239].

13.5 Waste water treatment

Water is one of the most often used substances in the chemical industry. It is used as solvent, raw material, medium for chemical reactions, and as washing agent for products, gases, and equipment. Therefore, it can be loaded with substances and particles. Before returning it to the environment, it has to be cleaned according to the governing rules (e. g. Germany: Wasserhaushaltsgesetz). Any discharge of waste water needs a permit, which is subject to strict limiting values. A permit is given only if the water is cleaned according to the current state of the art. For the treatment of waste water streams, one can distinguish between measures to remove solids and measures to remove dissolved impurities. Solids can be removed by
– Sedimentation: The solids must have a larger density than water. The density difference and the particle size must be sufficiently large.
– Flotation: The solids must have a lower density than water, so that they move up to the surface. If the density difference is not large enough, auxiliary substances can be used. For example, gas can be introduced into the water. The bubbles attach to the particles, which lowers their apparent density.
– Filtration: The waste water can be filtered over flint, sand, or industrial filters, where the large particles are caught as they cannot pass meshes which are smaller than the particles themselves. The smaller the particles, the smaller the meshes must be. Beyond conventional filters, membranes are used in the different applications microfiltration, ultrafiltration, and nanofiltration (Chapter 7.1).

For dissolved impurities, the typical cleaning of waste water differs from other separation tasks, as it is in most cases not well defined, i. e. the loads vary and the polluting components are often not known. Therefore, a waste water treatment process cannot be predicted but must be experimentally demonstrated in a piloting unit.
Often, the particular vendors have miniplants where a test amount of genuine waste water can be processed to check the performance. The load of a waste water is characterized by the TOC (total organic carbon), the COD (chemical oxygen demand), and the BOD (biochemical oxygen demand) values. These parameters are decisive for the operating costs of a waste water treatment, as they indicate the amount and the kind of the waste water load. The TOC value is the concentration

of carbon atoms of organic molecules in the waste water. It can be measured with good accuracy by determining the carbon dioxide after oxidation. The COD value is less accurate. It refers to the amount of oxygen necessary to convert the organic substances into CO2, H2O, and NH3. It is determined by mixing the water sample with potassium dichromate, 50–70 % sulfuric acid, and silver ions as catalyst and keeping it at boiling temperature for two hours. The biochemical oxygen demand is more complicated and can only be determined experimentally. It is measured how much oxygen is consumed by microorganisms in contact with the waste water during a five-day period (often termed BOD5) at t = 20 °C [7]. It is a very useful quantity, but it takes five days to determine. The BOD is always lower than the COD, as the microorganisms often use only parts of the molecules for combustion, while the rest is used for growth. The ratio between BOD and COD is between BOD/COD = 0.05–0.8, i. e. it is completely unpredictable. If nothing better is known, one can use BOD/COD ≈ 0.35.

Example
A waste water stream contains 500 wt. ppm methyl tert-butyl ether (MTBE). Determine TOC, COD, and BOD.

Solution
The chemical formula of MTBE is C5H12O, with the molecular weight M = 88.148 g/mol. With MC = 12.011 g/mol as the molecular weight of carbon, the TOC can be determined to be

TOC = 500 wt. ppm ⋅ (5 ⋅ 12.011)/88.148 = 341 wt. ppm

The oxidation reaction of MTBE is given by

C5H12O + 7.5 O2 → 5 CO2 + 6 H2O

and therefore, using MO = 15.9998 g/mol as the molecular weight of oxygen, one gets

COD = 500 wt. ppm ⋅ (7.5 ⋅ 2 ⋅ 15.9998)/88.148 = 1361 wt. ppm

Without experimental information, the BOD can only be estimated to be

BOD = 0.35 COD = 476 wt. ppm
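The example above can be generalized for any pollutant of the type CaHbOc. The following short sketch reproduces the MTBE numbers; the function name and interface are illustrative, not from the book:

```python
# TOC and COD (in wt. ppm) for a single pollutant C_aH_bO_c, plus the
# rough BOD estimate BOD ~ 0.35 COD used in the text when no measurement
# is available. Function name and layout are illustrative assumptions.

M_C = 12.011    # g/mol, carbon
M_H = 1.008     # g/mol, hydrogen
M_O = 15.9998   # g/mol, oxygen

def toc_cod_bod(ppm, n_c, n_h, n_o, bod_cod_ratio=0.35):
    """TOC, COD, and estimated BOD in wt. ppm.

    Full oxidation: C_aH_bO_c + (a + b/4 - c/2) O2 -> a CO2 + b/2 H2O
    """
    m = n_c * M_C + n_h * M_H + n_o * M_O        # molecular weight, g/mol
    toc = ppm * n_c * M_C / m
    o2_stoich = n_c + n_h / 4.0 - n_o / 2.0      # mol O2 per mol pollutant
    cod = ppm * o2_stoich * 2.0 * M_O / m
    bod = bod_cod_ratio * cod                    # rough estimate only
    return toc, cod, bod

# MTBE (C5H12O), 500 wt. ppm:
toc, cod, bod = toc_cod_bod(500.0, 5, 12, 1)
print(round(toc), round(cod), round(bod))  # 341 1361 476
```

Note that the BOD value is only a placeholder for the missing five-day measurement; the BOD/COD ratio of a real waste water can be anywhere between 0.05 and 0.8.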

For dissolved substances, there are a number of different processes for waste water cleaning that are often used in combination:
– Evaporation: Often, the pollutants are heavy boiling substances which cannot be vaporized. In this case, the waste water can be concentrated by evaporation. The remaining


residue can be sent to incineration or to disposal if possible. Of course, waste water evaporation is very energy-intensive. It is more or less obligatory to use at least one of the heat integration options described in Chapter 3.3, i. e. multieffect evaporation or vapor recompression. The assumption that all the pollutants are heavy boiling substances is usually not fully justified. In most cases, the condensate is not pure water but contains components which are light ends or form low-boiling azeotropes with water. Then, evaporation must be supplemented by a condensate polishing measure, either reverse osmosis, chemical destruction, or adsorption (see below). The target is always to return as much non-contaminated water as possible back to the environment.
– Reverse osmosis: Reverse osmosis can be used for the cleaning of waste water as a stand-alone or as a supplementary measure. The method is restricted to low-concentrated pollutants, as the osmotic pressure should be limited to 40–100 bar. Membranes are used which do not work as a filter but by means of solubility and diffusion (Chapter 7.1). Reverse osmosis can recover large amounts of pure water which can be returned to the environment. The biggest problem is the durability of the membrane, which must be tested before an application takes place. A regular exchange of the membrane is often necessary but not too expensive.
– Adsorption: Pollutants can also be removed by adsorption. Due to the wide variety of possible pollutants, an adsorbent like activated carbon is one of the first choices, as it removes organic components quite reliably. As described in Chapter 7.2, a twin plant (Figure 7.8) is necessary to ensure a continuous operation. As activated carbon is quite inexpensive, the loaded adsorbent can be sent to incineration or be regenerated by specialized service providers.
The activated carbon is not removed as a powder; instead, a whole filter unit is removed and replaced, which is fast and clean at the site [204].
– Chemical destruction: An interesting procedure for the reduction of TOC and COD are the advanced oxidation processes (AOP). A photochemical reaction due to absorption of ultraviolet radiation can activate the pollutant molecules. The activated state can cause a further reaction to products which are more easily biodegradable. Additionally, oxidants like ozone or hydrogen peroxide [205] can be split into oxidizing radicals, which rapidly react with the pollutants and decompose them into CO2 and H2O, as long as no elements other than C, H, and O are involved. Ozone is taken for processes where the pollutant concentration is low, as the solubility of ozone in water is poor. If low-boiling pollutants are present in the waste water, it can happen that they are stripped into the offgas. The activation of the molecules can also be increased by ultrasound. In case the oxidation stops at some intermediate products, the AOP can be supported by a biological treatment.






Maintenance of these systems is hardly necessary, and another great advantage is that the waste water stream is not split into two streams, where one of them contains the pollutants and has to be further processed. Even highly concentrated solutions up to 250 g/l COD have been successfully treated [206]. The disadvantage is that an infrastructure for the oxidant must be provided.
– Waste water incineration: Waste water can also be incinerated, which is reliable, but probably the most unsatisfactory process, as the often large amounts of water do not have a heating value but must be evaporated in the incinerator. Therefore, the waste water is first concentrated to reduce the amount of water.
– Pressure hydrolysis: With this option, water is kept under pressure for some time at high temperatures (200–250 °C), where hopefully the pollutants decompose to substances which are easier to handle.

13.6 Biological waste water treatment

The treatment of waste water by microorganisms is probably the most frequently used final treatment. For the microorganisms, the organic pollutants are raw materials for their metabolism. One can distinguish between aerobic and anaerobic waste water treatment processes.

The more established way is the aerobic treatment, meaning that the microorganisms need oxygen in their metabolism to digest the pollutants. In principle, it is a gas-liquid reaction (see Chapter 10.2) of the oxygen with the organic pollutants, where the microorganisms as a suspended solid act like a catalyst. Approximately half of the organic carbon is oxidized to CO2, while the other half is used for building up additional biomass. This means that additional sludge is generated, which has to be disposed of in some way. Other items to consider are the addition of nutrients (Ca, K, Mg) necessary for the metabolism of the microorganisms and ammonia to avoid nitrate formation [7].

As the solubility of oxygen in water is extremely low (approx. 8 mg/l at p = 0.2 bar partial pressure [8], i. e. ambient air conditions), reaction kinetics are determined by the mass transfer of the oxygen into the liquid. This means that the equipment must provide a large surface between water and air, which is achieved by dispersing one of the two phases. Both options are applied: for trickle filters and activated sludge plants, the liquid phase is dispersed, while for highly contaminated waste waters it is the gas phase. Trickle filters are random packing beds, irrigated by the waste water. The surface of the packing elements is porous so that the microorganisms can develop a biologically active layer. For normal waste waters (BOD5 = 100–200 mg O2/l) the degree of degradation is 75–95 %, whereas only 50 % is achieved for highly contaminated ones (BOD5 = 1000–3000 mg O2/l).
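The quoted oxygen solubility can be reproduced with Henry's law, c = kH ⋅ pO2. The Henry constant used below is a typical literature value for O2 in water at 25 °C and is an assumption, not a number from this text:

```python
# Dissolved oxygen at saturation via Henry's law, c = kH * pO2.
# kH for O2 in water at 25 C: approx. 1.3e-3 mol/(l*atm) (assumed
# literature value).

K_H = 1.3e-3          # mol/(l*atm), O2 in water at 25 C
M_O2 = 32.0           # g/mol

p_o2_bar = 0.2        # O2 partial pressure in ambient air, bar
p_o2_atm = p_o2_bar / 1.01325

c_sat = K_H * p_o2_atm * M_O2 * 1000.0   # saturation concentration, mg/l
print(f"{c_sat:.1f} mg/l")               # approx. 8 mg/l, as quoted above
```

This confirms that even at full saturation only a few mg/l of oxygen are available, so the oxygen transfer rate, not the biological reaction, limits the overall conversion.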


In the classical activated sludge process, the waste water is processed in a concrete basin. A rotating spinner disperses the water into droplets and distributes them along the water surface. As the basins are open to the atmosphere, they produce a lot of noise and smell, which is a serious drawback, as is the low efficiency of the oxygen intake. As an alternative to the classical activated sludge process, bubble column reactors have been developed. An example is the Biohoch reactor of the former Hoechst AG (Figures 13.19, 13.20). The gassing of the sludge is performed by two-component jets, where the air is sucked in by a liquid stream with a high velocity (see Chapter 8.3). The air is dispersed into small bubbles, which stay in contact with the liquid for a comparably long time during their ascent to the surface because of the considerable height of the reactor (approx. 25 m). Up to 80 % of the oxygen flow can be dissolved. Odorous substances in the exhaust air can be removed separately, as the bubble columns are not open to the atmosphere. The noise problem is also significantly reduced. The oxygen intake requires less energy than in the classical technology; furthermore, there is less space demand.

Figure 13.19: Biohoch reactor in the Industrial Park Höchst, Frankfurt/Main. © Infraserv GmbH & Co. Höchst KG.

The separation of the cleaned water from the activated sludge is performed in a cone-shaped decanter, which is designed as a ring at the top of the Biohoch reactor (see Figure 13.20). The overflow is clean water; the sludge at the bottom is recycled to the reactor or removed as excess sludge. As approx. 90 % of the sludge is recycled, the concentration of microorganisms in the reactor can be kept at a high level so that high conversion rates can be maintained. There is always a mixed population of microorganisms, so that it can react to changes in the kind and amount of pollutants in the waste water, as happens frequently in practical applications [8]. Natural evolution can adjust the population of microorganisms to


Figure 13.20: Sketch of the Biohoch reactor internals. © Infraserv GmbH & Co. Höchst KG.

the changing conditions. However, for fast changes in waste water conditions this mechanism is too slow. Therefore, a large storage volume of waste water is provided so that fast changes are mitigated. Another way to even out fluctuations is the addition of activated carbon powder. Aromatic components with phenol, amino, nitro, and chlorine substituents are often toxic for the microorganisms even at low concentrations. There are specially trained microorganisms that can cope with these components. They are used in an immobilized way, e. g. fixed on a layer of activated carbon. However, in general these components should be removed by other pre-cleaning measures, e. g. adsorption.

The disposal of the excess sludge remains a challenge that has not been solved in a satisfactory way so far. Its solid content is approx. 5–20 g/l. As described above, the usual concentration measures (sedimentation, centrifugation, filtration, and drying) can be applied to get solid contents of 25–50 % [8]. Then, the sludge can be transferred to a waste disposal site or incinerated; however, in the latter case the ashes have to be disposed of as well. Here, methane fermentation can give support. The excess sludge is concentrated to approx. 5 % and fed to the so-called digestion tower, where it is consumed by anaerobic microorganisms. Essentially, there are three steps in the methane fermentation of the sludge, requiring different microorganisms:
1. Carbohydrates, proteins, and different kinds of fat are hydrolyzed to fatty acids and alcohols.

2. Fatty acids and alcohols are converted to acetic acid, hydrogen, and carbon dioxide.
3. The latter substances from step 2 are converted to methane.

As a result, a gas mixture (biogas) of methane and carbon dioxide is formed with methane concentrations up to 70 mol-%. Only 5 % of the carbon remains in the sludge. The biogas can be used as a natural gas substitute. The process is highly sensitive to process conditions. Step 3 takes place only at pH = 7, tolerating only small variations. If step 1 is too slow, the pH drops due to the accumulation of fatty acids, and methane formation stops. The methane fermentation is generally slow and requires 15–20 days residence time; therefore, digestion towers often have huge volumes.

For waste waters with very high pollutant loads (BOD5 > 1000–2000 mg O2/l), the anaerobic waste water treatment can even replace the aerobic process, as the oxygen intake becomes more and more difficult. The anaerobic treatment has two main advantages:
– Due to the production of biogas, the anaerobic process has a positive energy balance, whereas the aerobic process needs energy for the provision of oxygen for the microorganisms.
– In the anaerobic process, only 5 % of the organic carbon remains as sludge for disposal, whereas 50 % ends up as sludge in aerobic treatment.

On the other hand, anaerobic processes are slow and sensitive to process conditions, i. e. pH (see above), throughput and composition of the waste water, and the occurrence of toxic substances. The temperature must be maintained between 35–40 °C. The production of biomass is slow, causing a low operational reliability. The microorganisms have to be grown externally and seeded to the process [276]. In general, anaerobic processes do not produce high-quality effluents, so that further treatment of the waste water is necessary [7]. Typically, only 75–85 % of the COD is removed.
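The positive energy balance of the anaerobic route can be made plausible with a stoichiometric upper bound: since oxidizing 1 mol CH4 (16 g) consumes 2 mol O2 (64 g), each kg of COD removed corresponds to at most 22.414/64 ≈ 0.35 Nm3 of methane. A minimal sketch with an illustrative load figure:

```python
# Theoretical methane yield from COD removal in anaerobic treatment.
# Basis: CH4 + 2 O2 -> CO2 + 2 H2O, i.e. 64 kg COD per kmol CH4.
# The throughput below is an illustrative assumption, not book data.

MOLAR_VOLUME = 22.414   # Nm3/kmol at 0 C, 1 atm
COD_PER_CH4 = 64.0      # kg COD per kmol CH4

def max_methane_nm3(cod_removed_kg):
    """Upper bound of methane production for a given COD removal."""
    return cod_removed_kg * MOLAR_VOLUME / COD_PER_CH4

# Example: 10 t/day of COD removed anaerobically
ch4 = max_methane_nm3(10000.0)
print(f"{ch4:.0f} Nm3/day CH4")  # prints "3502 Nm3/day CH4"
```

The real yield is lower, since part of the carbon goes into biomass and the biogas contains CO2 as well; still, the order of magnitude shows why the process is a net energy producer.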

14 Process safety

In the chemical industry there is certainly a hazardous potential, as combustible and poisonous substances are involved. In the past, a number of serious accidents have happened, some of which are listed below.
– Ludwigshafen-Oppau (1921): An explosion in a fertilizer storage of the BASF company killed 561 people. More than 2000 were hurt. Even in Heidelberg, 30 km away, roofs were stripped of their tiles. The crater in Oppau is 125 m long, 90 m wide, and 19 m deep (Figure 14.1). Because of the extent of the catastrophe, its cause could never be reconstructed exactly. The main components involved were ammonium nitrate, a well-known explosive agent, and ammonium sulfate. In the scheduled mixing ratio, the fertilizer was not explosive. Probably, a demixing had taken place. The attempt to loosen up the densely packed bed by disrupting it with dynamite led to a booster detonation of the ammonium nitrate, which caused a knock-on effect on the entire storage with 4500 t of fertilizer.

Figure 14.1: The Oppau crater after the explosion in 1921. © BASF Corporate History, Ludwigshafen/Rhein.



– Texas City (1947): Like in Oppau, ammonium nitrate was also involved here. A cargo ship containing 2500 t of ammonium nitrate caught fire and blew up. As a consequence, the Monsanto site nearby and several oil refineries also caught fire, and a large number of



explosions took place. It took several days to get the situation under control. There were more than 600 casualties and 3000 injured.
– Ludwigshafen (1948): On a hot summer day, a tank wagon with 30 t of dimethyl ether detonated on the BASF site in Ludwigshafen with great violence. It was the worst explosion in Germany since the Second World War. There were 207 casualties, almost 4000 injured, and more than 7000 damaged houses in Ludwigshafen and Mannheim.
– Bitterfeld (1968): In order to exchange a sealing, 4 t of vinyl chloride were relieved from an autoclave. A violent detonation took place. 42 people lost their lives, and more than 200 were injured. A large part of the site was destroyed. This accident appears especially absurd from today's point of view. Vinyl chloride is highly carcinogenic and poisonous; today, according to TA Luft [192], only an extremely low emission of 2.5 g/h or a concentration of 1 mg/Nm3 is allowed in Germany. Even the release of a noncritical substance into the environment would be unthinkable, and only a controlled discharge into an exhaust air system (usually an incineration) would be possible. A plant without such equipment would not be licensed.
– Flixborough (1974): One reactor in a row of five had to be bridged because of a failure. The connecting line was not strong enough for the process conditions and broke. In 50 s, approx. 40 t of cyclohexane vapor escaped into the environment. An ignition followed, and the whole site was destroyed. There were 28 casualties and 88 injured. The adjacent storage tank containing 1600 t of combustible substances caught fire as well. Even after three days, explosions were still taking place. The number of casualties would have been considerably higher if the explosion had happened during normal working hours and not on a Saturday [207]. For the bridging of the reactor, which was at least a manipulation of a process operating at t = 150 °C and p = 10 bar, no design work was performed.
The construction drawing had been made with chalk on the floor. There was no static calculation, and valves, which would have made it possible to isolate the reactors against each other, were not provided.
– Seveso (1976): In an autoclave producing 2,4,5-trichlorophenol, the agitator was switched off by mistake after the reaction and the shift were finished. The heat removal from the reactor became much worse, and the product was involved in further reactions. Because of the missing heat removal, the reactions were accelerated. One of the follow-up products was dioxin (2,3,7,8-tetrachlorodibenzodioxin), on a kg scale. Finally, the safety valve of the reactor actuated. There was neither a collecting vessel nor a defined line to a flare. The safety valve opened for 30 min, and the vapor went straight into the environment. It took hours before an experienced crew arrived with the next shift. The reactor could then be shut down. 18 km2 were contaminated.






Plants wilted, and more than 3000 animal carcasses were found. Approximately 200 people suffered from chloracne. The number of casualties as a consequence of the accident is not known. The cancer rate rose significantly. It took eight years before all decontamination measures were finished.
– Bhopal (1984): In Bhopal (Central India), the Union Carbide company manufactured carbaryl, an insecticide, via the intermediate methyl isocyanate, which is an extremely poisonous substance. Accidentally, water intruded into a tank filled with 40 t of methyl isocyanate and caused a chemical reaction. Carbon dioxide was formed and built up pressure. The methyl isocyanate, which is quite volatile anyway (normal boiling point: 39 °C), was evaporated by the heat of reaction. Within two hours, the whole content of the tank was released through the safety valve. More than 2000 people were killed. Probably about 200 000 people were hurt. To date, a decontamination of the area around the plant has not been performed. The safety devices provided had not worked at all. The cooling system of the storage tank and the gas flare had been switched off months previously, an emergency scrubber was not ready to operate, and the tanks were overfilled. The staff had been reduced and was not sufficiently trained. The alarm system was switched off in order not to disturb anyone, and no emergency plan existed; many people died when trying to escape directly through the poisonous cloud. This long list of failures and unlucky circumstances is supplemented by the bad process concept [207]. While methyl isocyanate as a poisonous substance was produced continuously, its further processing took place batchwise, so that large amounts of this substance had to be stored. A completely continuous process would have drastically reduced the methyl isocyanate inventory in the plant. The circumstances of the accident and the background of the plant are described in [278].
– Toulouse (2001): Exactly 80 years after the Oppau accident, a violent explosion happened at the TotalFinaElf site in Toulouse, again caused by ammonium nitrate. There were 31 casualties and more than 1000 injured. The cause was never clarified.

Accidents with much less damage can also receive a lot of publicity. One example is the so-called Carnival Monday accident in 1993 at the Griesheim site of the former Hoechst AG. Again, an autoclave was involved. The reactants, methanolic caustic soda and o-nitrochlorobenzene, have a broad miscibility gap. Therefore, the methanolic caustic soda was slowly added to the organic phase, while sufficient mixing and thereby the reaction should have been achieved by an agitator. The staff did not notice that the agitator was switched off, and a large amount of the reactant mixture accumulated in the reactor, forming two liquid phases. Because the reaction did not take place, the usual temperature increase was missing. Therefore, the reactor was heated up, contrary to the manual. Then the error was noticed, and the stirrer was switched on, with the large inventory of reactants at high temperature. The reaction started immediately, the cooling system


could not remove the large heat of reaction, and the safety valves opened. 10 t of the product o-nitroanisole were relieved to the environment. Due to the cold weather, the product condensed as an aerosol. A greasy yellow layer covered large parts of the neighboring district [207]. It is clear that the reactor was not sufficiently protected against maloperation. The dosing of the caustic soda while the agitator is not running and the external heating of the reactor should have been prevented by interlocks. Although no one was killed or injured, the accident was often cited in an unjustified way in connection with Bhopal or Seveso. The loss of image due to the bad information management of the Hoechst AG was very serious.

In the next section, we shall outline that many improvements concerning safety have been introduced. Nevertheless, severe accidents are definitely not just a thing of the past. While this book was being written, several major accidents took place. On April 17th, 2013, a serious explosion took place in West, close to Waco/Texas. There has still been no official statement about the cause. It is clear that a fire broke out where 240 t of ammonium nitrate were stored without sufficient fire protection. An explosion took place. 14 people were killed, among them 12 firemen. More than 250 people were injured [208].

On August 12th, 2015, a fire broke out in the bulk storage of the harbor of Tianjin/China. After 45 min, two detonations took place within one minute. There were at least 165 casualties, and approx. 800 people were injured. In the harbor, about 3000 t of hazardous substances were stored, among them sodium cyanide, calcium carbide, and, again, ammonium nitrate. The current theory for the cause is that acetylene was formed due to contact of the calcium carbide with the firefighting water [209]. In April 2016, 28 people died in an explosion in a vinyl chloride plant in Coatzacoalcos/Mexico.
In March 2019, there was a huge explosion at the Yancheng site in China, where mainly pesticides and fertilizers are produced. At least 47 people were killed, and more than 90 were seriously hurt. In July 2019, at least 12 people were killed in Yima/China, when an explosion in an air separation unit took place. Finally, so far, on August 4th, 2020, a terrible explosion in an ammonium nitrate storage happened in the harbor of Beirut/Lebanon, with an incredible vehemence illustrated by a large number of cell phone clips.

The main reasons for accidents are [246]:
– lack of focus on process safety; instead, focus is put on minimization of the LTIR (lost time incident rate);
– belief that major incidents will not occur;
– production priority;
– ignoring warning signals;
– disregarding standard operating procedures (SOP);
– insufficient focus on proactive issues (work methods, lessons-learned system, safe practices, etc.);
– cognitive biases [247], e. g. the selective search for information that confirms one's own opinion and ignores others.

On the occasion of the 100th anniversary of the Oppau catastrophe, a broad discussion took place as to whether it could still happen today. Of course, people have learned. The blend of ammonium nitrate and ammonium sulfate is granulated and protected against compaction with an anticaking agent. Nevertheless, every case is unique [292].

14.1 HAZOP procedure

The safety of its process facilities is the most important target of a chemical company. Although it is not possible to avoid accidents completely, the chemical industry has learned a lot. Most hazards happen because of flaws in design or material, or due to human error, whereby the latter reason is considered to be the most important one. In a chemical plant, there is a large potential for human error during design, procurement, construction, and operation. It is desirable that these errors are anticipated before the plant is commissioned.

There are a number of procedures for safety analysis [210]. The most established one in the chemical industry is the so-called HAZOP analysis (HAZard and OPerability). It was developed in the 1970s after the Seveso accident. It is important to note that HAZOP looks for incidents with the potential for severe impacts. The minor ones ("slips, trips, and falls") are the subject of the company's general safety requirements. A HAZOP session is recommended not only for new plants but also for changes in the process that are supposed to be minor ones, as the implications are often underestimated.

Basically, HAZOP is a communication technique. Information is presented, discussed, analyzed, and recorded [210]. The safety aspects are systematically identified to check the measures taken to prevent major accidents. The HAZOP procedure is quite time-consuming and requires a number of skills of the participants. It is recommended that the team consist of approx. 5 people. More participants dilute the effectiveness, as there are too many communication routes between the people. All of them should be able to communicate fluently in the language applied (usually English); otherwise, too much effort is necessary to keep everyone up to date. The participating team members should be familiar with the chemical process under examination. The participants can be members of different organizations, e. g.
from the engineering company, from the future plant owner, or from a consultant company. They should cover the following items or, respectively, areas:
– HAZOP leader: The HAZOP is guided by a HAZOP leader, who takes care that the meeting keeps its target focus. The HAZOP leader is the person who is essential for the success of the team. He leads the team through the procedure and brings out the concerns of the process. He announces the key words to be discussed and, in case there is no minute taker, notes down the deviations, causes, and countermeasures. It is not intended that he take part in the discussion, but it is also not forbidden. One of his most important tasks is to keep the discussion under control, which sometimes turns


out to be an engineering review or a personal dispute between two participants. For HAZOP leaders, a certain authoritative personality is necessary. Preferably, the HAZOP leader has not been involved in the design of the process, so that he is not biased. He also takes care that the necessary documents used in the HAZOP are available. The team leader must be aware that the attendance of the team members in the sessions is the most costly part of the HAZOP process. Therefore, it is up to him to avoid unnecessary discussions and meet the schedule.
– Process engineer, responsible for the unit in the project: The people who have designed the process know it very well and can explain how the particular measures interact. They should be able to give first answers to the various key word items.
– Operations representative: The operations representative is more experienced in operating the plant and focuses on items which are caused by operational errors rather than by the design.
– Safety expert: The safety expert knows the impact of the items and is familiar with the rules and standards.
– Instrumentation & control representative: The representative of instrumentation & control knows the cause & effect matrix and how the control loops work. His special expertise ensures that a number of rules are maintained which other participants are not familiar with; e. g. an indicator involved in a control loop must not be used as a safeguard as long as the control loop itself can be the cause of the deviation.
– Consultant: The consultant is often someone who is familiar with similar processes but not with the one to be examined. This is useful, as consultants have the advantage of being unbiased and independent. They did not participate in the project, and hopefully they will ask questions that the others, who have become blind to the shortcomings in the project, will not.

Other people able to give important input are maintenance representatives and material specialists. The HAZOP team should have a certain experience. If the majority of the team has never participated in a HAZOP, the HAZOP leader will be completely preoccupied with instructing the team members rather than having them contribute to the review. The team members should be encouraged to ask "stupid" questions [210].

For the whole plant, line by line and for each piece of equipment, it is examined what the consequences of deviations might be and which additional countermeasures should be applied. All necessary documents should be available, i. e. (amongst others) PIDs (including the PIDs for vendor packages) and PFDs, material balances, plot plans, cause & effect charts, interlock descriptions, fire protection measures, the list of safety valves/rupture discs with relief case descriptions, data sheets of equipment, instruments and control valves, ambient data, the utility list, and the properties of the main components.

First, the plant should be divided into parts, the so-called nodes, which have a well-defined objective (e. g. separation, reaction, heating/cooling, pressure increase). There is a list of guidewords (temperature, pressure, flow, etc.) which covers at least a large number of possible deviations systematically. A number of computer programs are available which can support the procedure. For each node, every guideword is considered with the following work flow [210]:
– Definition of the node.
– Short process description, usually given by the process engineer.
– Selection of the process parameter and assignment of the deviations, one by one. Go through all the streams of the node with the process parameter selected before changing it. The process parameters and the deviations are:
– Flow (no flow, less flow, more flow, reverse flow)
– Temperature (low temperature, high temperature). Double-check of the design temperatures of the particular pieces of equipment with respect to the scenarios.
– Pressure (low pressure, high pressure). Double-check of the design pressures and the pressure relief cases. Ruptures or leakages can be the reason for deviations.
– Level (high level, low level)
– Identification of the causes and hazards of the deviation.
– Identification of the consequences of the hazard without regarding the safeguards.
– Specification of the appropriate safeguards and recommendations to control the hazards.
– List of recommendations in the order of their priority.
– Ensure that the actions proposed are implemented and documented and that for each action a responsible person is assigned.

Certainly, the HAZOP is no guarantee that nothing can happen in a process. Often, cognitive biases have an influence on decisions and can hinder rationality [242]. Examples are the so-called groupthink, where a group of people shares common but possibly false beliefs, and mindsets, which are assumptions so established that doubt is not allowed.
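The systematic part of this work flow, bringing up every parameter/deviation combination for every node, can be sketched as a simple checklist generator. The node names and the data layout below are illustrative assumptions, not a real HAZOP tool:

```python
# Minimal sketch of the systematic HAZOP loop: for each node, every
# parameter/deviation combination is raised for discussion exactly once.
# Node names and dict layout are illustrative assumptions.

DEVIATIONS = {
    "flow": ["no flow", "less flow", "more flow", "reverse flow"],
    "temperature": ["low temperature", "high temperature"],
    "pressure": ["low pressure", "high pressure"],
    "level": ["low level", "high level"],
}

def hazop_checklist(nodes):
    """Yield (node, parameter, deviation) items to be discussed one by one."""
    for node in nodes:
        for parameter, deviations in DEVIATIONS.items():
            for deviation in deviations:
                yield (node, parameter, deviation)

items = list(hazop_checklist(["distillation column", "reboiler"]))
print(len(items))  # 2 nodes x 10 deviations = 20
```

Commercial HAZOP software essentially manages such a matrix and additionally records causes, consequences, safeguards, and action items per deviation.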
Another phenomenon is group polarization, meaning the tendency of a group to make decisions which are more extreme than the initial opinions of its members. Whenever the expression "generally believed" occurs, a cognitive bias is probably at work. A way to overcome cognitive biases is the use of a "devil's advocate", a team member whose role it is to challenge common views on purpose in order to test their validity. It does not matter what the real opinion of the devil's advocate is. The achievable target is to lower the probability of accidents, especially of serious ones, as many scenarios are reflected on and people become sensitive to possible consequences.

Risks can be categorized into equipment failures (ruptures, control valve failures, etc.), operational errors (wrong valve state), external events (corrosion, fire), and product deviations (e. g. off-spec). The assessment of the risk takes place based on common sense, personal experience, knowledge, and intuition of the participating people. Non-reasonable risks should be ruled out. A rupture of a line has a real possibility of occurring, whereas a meteorite striking the facility is not really probable, and its consequences can neither be avoided nor mitigated by better engineering. Similarly, the "double jeopardy" principle should be kept in mind, meaning that two independent events at the same time do not need to be considered, e. g. a tube rupture as mentioned above and a simultaneous malfunction of a control valve. The probability of the simultaneous occurrence of two independent events is negligibly low. However, two simultaneous events are not necessarily independent, e. g. a failure of the reflux pump and a loss of cooling in the condenser of a distillation column. In this case, the failure of the reflux pump might cause accumulation of liquid in the condenser, with the consequence of the cooling loss. In principle, there is one first error, and the second one is the consequence of the first.

The notes should clearly indicate the exact naming of the equipment and the instrumentation devices. Otherwise, the review of the report will take time and might be erroneous. Minor spelling errors are not important and can easily be corrected later. The costs of the countermeasures should not be taken into account or discussed during the HAZOP session. Of course, for the final decision the practicability and the costs must be examined. A ranking of countermeasures is useful for the acceleration of the decision process afterwards. The criteria are life safety, protection of the environment, protection of the equipment, and, finally, continuation of production. Countermeasures for high risks tend to be more costly and complex than those for low risks and should be considered first.
The proposed recommendations must be forwarded to the acknowledged experts for evaluation. Recent developments take into account that part of the HAZOP could also be automated, to avoid the HAZOP team wasting time on routine work where the outcomes are often evident [284]. It will be interesting to see how these concepts find their way into daily work.

Currently, there is a trend to address safety issues by means of process control systems, meaning that instruments are used to prevent or mitigate a hazardous situation. In general, electronic components behave differently in comparison with mechanical components. Regarding the probability of error, the error rate of electronic components is high at the beginning of the lifetime in operation. With a bit of dark humor, this is called "infant mortality". After a short period of time it drops down and remains constant at a low level. At the end of the lifetime it rises again strongly. In contrast, mechanical devices also have a high error rate at the beginning, but after a period with a low error rate it rises linearly with time [211].

The measuring and control devices which have a safety-relevant function are classified by the so-called SIL ("safety integrity level") analysis, where it is rated which certificates for the probability of failure are necessary for the various devices, according to the risk associated when the device function fails. There are four SIL numbers; the higher the SIL number, the lower the permitted probability of failure of the device. Table 14.1 gives the average probabilities of failure on demand in the low demand mode, which is defined such that a safety-relevant function is demanded not more than once per year. The greater the consequences and the higher the probability of occurrence, the higher the required SIL number. Figure 14.2 is based on the international standard IEC/EN 61508 and connects these "risk parameters" with the SIL class.

Table 14.1: Probabilities of failure on demand for the various SIL classes.

Safety Integrity Level (SIL)    Probability of failure on demand
4                               10⁻⁵ – 10⁻⁴
3                               10⁻⁴ – 10⁻³
2                               10⁻³ – 10⁻²
1                               10⁻² – 10⁻¹

Figure 14.2: Relationship between the risk parameters and the SIL number required. The risk parameters are:
– Consequence/severity: S1 minor injury or damage; S2 serious injury or one death, temporary serious damage; S3 several deaths, long-term damage; S4 many deaths, catastrophic effects.
– Frequency/exposure time: A1 rare to quite often; A2 frequent to continuous.
– Possibility of avoidance: G1 avoidance possible; G2 unavoidable, scarcely possible.
– Probability of occurrence: W1 very low, rarely; W2 low; W3 high, frequent.


Example
A shell-and-tube heat exchanger is operated continuously with two fluids which can react with each other in case of a tube rupture. The reaction would be exothermic. If the reaction takes place quantitatively, it might happen that the gaskets of the apparatus fail. If people are working in the vicinity, they might be seriously injured. With a surveillance of all pressures and temperatures involved, even small deviations from the set point are indicated. If two signals are out of tolerance, an interlock closes a shut-off valve in the feed line to limit the extent of the reaction. Which SIL classification is necessary?

Solution
According to Figure 14.2, the case can be categorized as follows: S2 (serious injury and temporary serious damage); A2 (continuous operation and risk exposure); G2 (avoidance not possible); W2 (low probability). The chosen SIL classification is SIL = 2, with a probability of failure on demand of 0.001–0.01 (Table 14.1).
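For quick checks like this, the Table 14.1 bands can be put into a small lookup. As a minimal sketch, the dictionary below encodes the standard low demand mode PFD ranges of IEC/EN 61508 as reproduced in Table 14.1:

```python
# Average probability of failure on demand (PFD) per SIL class,
# low demand mode (Table 14.1, based on IEC/EN 61508)
SIL_PFD = {
    4: (1e-5, 1e-4),
    3: (1e-4, 1e-3),
    2: (1e-3, 1e-2),
    1: (1e-2, 1e-1),
}

def pfd_band(sil):
    """Return the (lower, upper) PFD range for a given SIL class."""
    return SIL_PFD[sil]

# The heat exchanger example above: SIL 2 -> PFD between 0.001 and 0.01
low, high = pfd_band(2)
```

The function names here are hypothetical helpers, not part of any standard library; the point is only that the SIL-to-PFD assignment is a fixed lookup once the risk graph has yielded the SIL class.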

14.2 Pressure relief

Safety valve design is like Christmas: You know that it will happen soon, but when it is time you are not prepared. (Anke Schneider)

14.2.1 Introduction

Chemical plants often work at pressures far from the ambient one, and corrosive, toxic, flammable, and even explosive substances are handled. These pressures must always remain under control in closed systems which are constructed in a way that they can certainly withstand the design pressure and the design temperature. However, the design values are not infinite, and undesired scenarios can happen where they are exceeded. The plant design must make sure that these scenarios do not have an impact on the safety of the plant [212]. As long as the control system works, the plant is usually operated in a safe way. If the control system fails as well, a common method to protect the process equipment and keep it in a safe state is the emergency pressure relief or blowdown, which is a topic where an understanding of the process and knowledge of thermodynamics can contribute to avoiding accidents and damage to the plant. The pressure relief device is the last line of defense and must be capable of actuating at all times and under all circumstances. It removes the potentially dangerous contents of the process equipment and transfers them to a safe and lower-pressure location, i. e. the environment for nonhazardous contents or, in the usual case, a flare system. Furthermore, it decreases the pressures exerted on the walls of the equipment and possibly prevents an escalation due to an explosion or a major relief of toxic substances [213]. Nevertheless, the pressure relief itself is often a hazardous operation. During the depressurization, the fluid expands and low temperatures can rapidly be generated, possibly causing brittle fracture of the vessel walls (Chapter 11). In distillation columns, the flow through the packing or the trays can be much higher than designed for normal operation, and the equipment can be seriously damaged during a pressure relief (Section 14.2.5).

There is a large to-do list for the engineer when a pressure relief device must be designed [213]:
– fix the rate of relief from the piece of equipment to ensure that the design pressure¹ is not exceeded;
– determine the restriction orifice to make sure that the corresponding equipment is protected;
– make sure that the relief flow can be transferred to the low-pressure destination;
– evaluate the minimum temperatures the particular materials should be designed for;
– evaluate the total flare capacity;
– evaluate the repulsive forces generated by the relief flow for the design of the various fixtures;
– organize the inquiry.²

Two kinds of equipment are used for pressure relief: safety valves and rupture discs.

¹ More exactly: relief pressure; see below.
² In a large plant, 150–200 safety valves and rupture discs are not unusual, so it takes a lot of effort to gather the information in an appropriate form.

Figures 14.3 and 14.4 show a picture and a sketch of a safety valve, respectively. A safety valve opens gradually at a certain pressure and closes again after relief. The spring characteristic is adjusted in a way that the safety valve opens at the actuation pressure, i. e. at the design pressure of the adjacent piece of equipment. It opens gradually; it is fully open when the pressure reaches the maximum allowable overpressure, corresponding to 110 % of the design pressure.³

³ It is a bit disturbing that the design pressure can be exceeded on purpose. In fact, this case and the following ones are covered by the definition of the design pressure; exceeding the design pressure does not mean destruction of the equipment.

If there are several safety valves available to


Figure 14.3: Safety valve protecting a heat exchanger. © Markus Schweiss/Wikimedia Commons/CC BY-SA 3.0, https://creativecommons.org/licenses/by-sa/3.0/deed.en.

protect the equipment, the maximum allowable overpressure is 116 % of the design pressure, and in the case of fire it is even 121 %. It should be emphasized that these pressures, including the design pressure, refer to overpressures. Safety valves have a hysteresis: if the pressure decreases again to the value of the design pressure, the valve will not yet be fully closed. For this purpose, the pressure must decrease to 90 % of the design pressure. The only intention of a safety valve is the protection of the adjacent apparatus or device. It must not be misused as a pressure regulation valve.

Working with safety valves, it is most important to distinguish between the particular pressure terms, which are illustrated in Figure 14.5 with exemplary pressure values. First, the vessel to be protected shall be explained. As indicated, it is normally working at a pressure of pnormOp = 3 barg; the maximum operating pressure is expected to be pmaxOp = 4 barg. The design pressure of the vessel is pDes = 6 barg, which is relatively far above the maximum operating value. A possible reason is that 6 barg is a standard value for the design pressure of low-pressure vessels. The vessel is protected by a safety valve, which actuates when the design pressure of the equipment is reached (set pressure). It is fully open at a pressure which is 10 % higher, i. e. at 6.6 barg, corresponding to 7.6 bara. This pressure is the maximum allowable overpressure; it must not be exceeded. The relief amount is transferred to the safety valve via the inlet line. The pressure drop in the inlet line must be below 3 % of the actuation pressure. Just downstream of the safety valve, in the outlet line, there is the back pressure. It is the sum of the superimposed back pressure and the built-up back pressure. The superimposed back pressure is the


Figure 14.4: Sketch of a safety valve. © Rasi57/Wikimedia Commons/CC BY-SA 3.0 https://creativecommons.org/licenses/by-sa/3.0/deed.en.

pressure at the end of the outlet line, in Figure 14.5 the pressure in the header to which the relief stream is disposed. The built-up back pressure is caused by the pressure drop of the relief stream in the outlet line. It should be less than 10–15 % of the actuation pressure. The values can vary slightly, depending on the guideline used in the project (e. g. DIN, ASME) and the vendor specifications. A safety valve is fully closed again after actuation when the pressure decreases below approx. 85 % of the actuation pressure [255].

In contrast to a safety valve, a rupture disc (Figure 14.6) opens completely⁴ at a certain pressure and does not close again, since it is destroyed on being actuated. The advantages of rupture discs are their rapid actuation at fast pressure buildups, their low costs, and their small space demand. As they have no moving parts, they are very robust, and almost all materials can be used. Rupture discs are used for large relief streams with huge cross-flow areas and for fouling or viscous media. Also, they can be placed

⁴ Depending on the type of the rupture disc. Some types do not open the whole cross-flow area.


Figure 14.5: Example case for the illustration of the safety valve pressure terms.

Figure 14.6: Rupture disc. © Jens Huckauf/Wikimedia Commons/CC BY-SA 3.0, https://creativecommons.org/licenses/by-sa/3.0/deed.en.

upstream of a safety valve, with a slightly lower actuation pressure than the safety valve itself. In this way, the safety valve is protected against corrosion, fouling, and dirt until it actuates. Furthermore, the tightness of the pressure relief arrangement is improved. It is recommended to monitor the pressure between rupture disc and safety valve, both to check whether the rupture disc is still intact and to avoid that the rupture disc fails to actuate properly because of an uncontrolled pressure buildup behind it. It should be mentioned that the actuation pressure of a rupture disc decreases with increasing temperature, depending on the chosen material. For the design, this significant effect must be taken into account. The pressure terms are just the same as for safety

valves, whereas the pressure drop limitations for inlet and outlet line are not relevant. More details can be found in [255].
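The pressure terms of this section can be collected in a small helper. This is only a sketch using the typical percentages quoted above (set at the design pressure; fully open at 110 %, 116 % with several valves, 121 % in the fire case; reseat at approx. 90 %; max. 3 % inlet line pressure drop); the exact values depend on the applicable code (DIN, ASME) and the vendor:

```python
def safety_valve_pressures(p_design_barg, n_valves=1, fire=False):
    """Typical safety valve pressure terms for a protected vessel (sketch).

    Set pressure equals the design pressure of the equipment; the
    maximum allowable overpressure is 110 % of it (116 % for several
    valves, 121 % in the fire case); the valve is fully closed again
    at about 90 % of the actuation pressure.
    """
    factor = 1.21 if fire else (1.16 if n_valves > 1 else 1.10)
    return {
        "set_barg": p_design_barg,
        "max_overpressure_barg": factor * p_design_barg,
        "reseat_barg": 0.90 * p_design_barg,
        "max_inlet_dp_bar": 0.03 * p_design_barg,
    }

terms = safety_valve_pressures(6.0)  # the 6 barg vessel of Figure 14.5
```

For the Figure 14.5 vessel this gives a set pressure of 6 barg and a maximum allowable overpressure of 6.6 barg (7.6 bara), matching the values in the text.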

14.2.2 Mass flow to be discharged

For the design of a pressure relief device, the first step is to derive from the process knowledge which mass flow has to be discharged. One or more scenarios have to be fixed in which the pressure relief device actuates. In most cases, the standard scenarios (Chapter 14.2.4) can be taken. Applying these scenarios, the mass flow to be discharged can be determined. It is useful to do this always in the same way. The "volume balance" method can help a lot to understand what is happening. It requires the use of a process simulation program. The following three steps (Figure 14.7) are performed:
1. Calculate the state just before actuation: Starting from normal operation, the pressure buildup is traced until the actuation pressure is reached. If the vessel can be assumed to be closed, i. e. no inlet and outlet flows occur, the volume of the content and therefore the overall density remain constant. The safety valve characteristic, where the maximum pressure relief is obtained when the safety valve is fully opened, is neglected; instead, for simplicity it is assumed that the safety valve opens abruptly at the maximum allowable pressure (Section 14.2.1). At this pressure, the physical properties are evaluated as a co-product of the calculation in the process simulator.
2. Calculate the flow just after actuation: A differential consideration is made to evaluate what will happen just in the moment of actuation. For example, in the case of fire (see below) a small amount of heat ΔQ is added. Due to the safety valve, the pressure stays constant, and the new state of the content of the vessel, especially its new volume, can be evaluated. The difference between the new volume and the volume of the vessel is the relief volume ΔV, and the relief amount is simply Δm = ρΔV. Assigning a time step Δτ (e. g. Δτ = ΔQ/Q̇), the relief stream just at actuation is (Figure 14.7):

   ṁ = ρΔV/Δτ = (ρ/ΔQ) ΔV Q̇   (14.1)

   The fire case example (Section 14.2.2) will illustrate this procedure. Note that for a system in vapor-liquid equilibrium the density of the vapor must be inserted instead of the overall density, as only vapor is considered to be relieved.
3. Track the blowdown process: Often, the first relief stream is the largest, and normally, a safety valve should actuate only for a short time. However, an increase of the relief stream is at least possible. Therefore, one should regard different points of time during relief and check with a differential step what happens. According to the tendency, it should be possible to estimate where the maximum occurs. In fact, for this purpose dynamic simulation would be the appropriate tool [214], especially if additional inlet and outlet streams occur. On the other hand, an averaging of the relief stream over a long period of time or even over the whole pressure relief process is not advisable. Per definition, "averaging" means that the value obtained is smaller than the maximum value. However, the pressure relief device must be able to govern any state during pressure relief, including the maximum one. An averaging would lead to a systematic underestimation of the cross-flow area needed.

Figure 14.7: Explanation of the volume balance.

The advantages of the volume balance procedure are:
– The temperature elevation of the liquid (and of the vapor as well) is correctly taken into account, which is important for mixtures with a wide boiling range.
– The volume increase of both vapor and liquid due to the temperature increase is correctly represented.
– The change of the equilibrium during the relief is taken into account.
– The "vanishing" of liquid volume due to the evaporation (Equation (14.3), see below), which is important for actuation near the critical point,⁵ is correctly represented.
– The procedure is suitable as well for the safety valve design of packed columns, where the holdup of the packing is small. Tray columns are more difficult and require dynamic simulation [214].
– If necessary, the time until actuation takes place can be evaluated.⁶ Often, this time is considerably long, and many actuation cases turn out to be non-realistic.

⁵ In fact, such actuations are not as exotic as one would guess. For a vessel with a pure substance, an actuation pressure close to the critical pressure is sufficient to run into this problem. See also Section 14.2.2.
⁶ Be careful: the heat balance in a closed system has to be performed with the internal energy, not with the enthalpy as in a process simulation program (Q12 = U2 − U1).

14.2.3 Fire case

The fire case is the simplest and most frequently occurring case. Its calculation can be taken as the basis for how to deal with phase equilibria in relief cases. It is subject to discussion whether the fire case is actually relevant or not. In fact, one should consider that the heat input with steam is usually higher than the heat input by an external pool fire, so that the fire case is often not the governing case. At the sites, in most cases the fire brigade needs less than ten minutes to arrive at a plant. This time is normally shorter than the time it takes to reach the actuation case with an external pool fire. If the equipment does not reach actuation conditions within a certain time frame (e. g. 2 h), the fire case can be omitted. Moreover, most plants have a sprinkler system, which further reduces the probability of an actuation case. Nevertheless, the design of safety valves against external fire has more or less become standard.

Figure 14.8 illustrates what is assumed when fire causes a pressure relief. A vessel is partly filled with liquid. It is exposed to the heat generated by a pool fire. The vessel is completely closed, i. e. the valves shown in the adjacent lines are shut. The vessel is protected by a safety valve. Without liquid in the vessel, there would be hardly any heat removal to the vessel content. The walls of the vessel would be heated in an uncontrolled way, and the design temperature would probably soon be exceeded. The vessel might even be destroyed. With liquid in the vessel, there is a better heat transfer from the vessel walls to the liquid. The temperature of the walls is limited due to a working heat removal. However, the removed heat causes a partial evaporation and a temperature increase of the liquid. Thus, the pressure in the vessel increases, and after the design pressure of the vessel is reached, pressure relief is necessary.
The safety valve must be designed in a way that the vapor generated by the heat transferred to the liquid can be

Figure 14.8: Sketch for the fire case. The valves are assumed to be closed.


removed by the safety valve without further pressure increase.

According to the established standards, the pool fire reaches a height of 25 ft, i. e. 7.62 m. With A as the wetted surface up to a height of 25 ft, the commonly used fire formula is given in API-521 [216]:

Q̇_H/kW = 43.2 F (A/m²)^0.82   (14.2)

where F is a factor considering the influence of the insulation of the vessel. In the design basis of many projects, it is instructed that the insulation should be neglected, i. e. F = 1; otherwise, one would have to prove that the insulation is still working properly at temperatures up to 700 °C. Often, there is also a recommendation to which extent the volume of the adjacent piping shall be considered (normally an additional 10 %). For assigning the 25 ft fire height to the arrangement, it has to be considered whether and where the combustible medium can be collected and develop the pool fire. Often, the floors of the upper platforms in a plant are made of steel grating, so that liquid distributed on the floor simply drops down onto the platform beneath.

The use of Equation (14.2) is often mandatory and hardly ever questioned. Nobody expects it to be an exact equation, but in fact, one should know some details about its background. Equation (14.2) refers to large vessels, like the ones which occur in the petrochemical industry. For smaller vessels, the heat transfer by the fire is different, as the flames have the opportunity to surround the vessel. In the API-2000 [215], a number of different fire formulas depending on the size of the vessels are given. Figure 14.9 clearly indicates that they fit much better to the data, which in turn are given in API-521 [216]. Note that the diagram is logarithmic, so deviations which seem to be small might indicate a relatively large error. Moreover, Equation (14.2) is not conservative; it calculates values which are systematically too low. Nevertheless, it is nearly always used.

Figure 14.9: Fire formulas in API-2000 [215] and API-521 [216].
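Equation (14.2) is easy to script. The sketch below recomputes the wetted area and fire duty of the worked example later in this section (vertical cylinder, flat bottom assumed, adjacent piping neglected); it is an illustration, not a substitute for the applicable standard:

```python
import math

def fire_heat_input_kw(wetted_area_m2, f_ins=1.0):
    """API-521 pool fire heat input, Eq. (14.2): Q_H/kW = 43.2*F*(A/m2)**0.82."""
    return 43.2 * f_ins * wetted_area_m2 ** 0.82

# Wetted area of a vertical cylinder up to the liquid level:
# flat bottom head assumed, adjacent piping neglected
D, H_L = 2.5, 1.91                           # m, diameter and liquid height
A = math.pi * D**2 / 4 + math.pi * D * H_L   # m2, approx. 19.91
Q_H = fire_heat_input_kw(A)                  # kW, approx. 502
```

The function name is a hypothetical helper; the numbers reproduce the example values given below in the text.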

To evaluate the pressure relief stream, consider a closed vessel with a liquid and a vapor phase which is heated and protected by a safety valve (Figure 14.10). Assuming that it is filled with a pure substance, one can derive that the ratio between the given


Figure 14.10: Sketch of a closed vessel.

heat flow Q̇ and the relief stream ṁ_out to maintain the relief pressure is

Q̇/ṁ_out = ρ_L/(ρ_L − ρ_V) ⋅ Δh_v   (14.3)

where the temperature is the boiling temperature at relief pressure. This relation is not applicable at or close to the critical point. Both Δh_v and ρ_L − ρ_V would become zero at the critical point, and in the vicinity they are inaccurate. At least, Equation (14.3) indicates that the relief amount does not become infinite at the critical point. Far away from the critical point, one can assume that ρ_L ≫ ρ_V and

Q̇/ṁ_out = Δh_v   (14.4)
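The size of the correction in Equation (14.3) relative to Equation (14.4) can be illustrated numerically; the property values below are invented for illustration only:

```python
def q_per_kg_exact(dh_vap, rho_liq, rho_vap):
    """Eq. (14.3): heat input per kg of relieved vapor, pure substance."""
    return rho_liq / (rho_liq - rho_vap) * dh_vap

def q_per_kg_far_from_critical(dh_vap):
    """Eq. (14.4): limiting case rho_liq >> rho_vap."""
    return dh_vap

# Illustrative values: dh_vap in kJ/kg, densities in kg/m3
exact = q_per_kg_exact(300.0, 900.0, 20.0)   # about 306.8 kJ/kg
approx = q_per_kg_far_from_critical(300.0)   # 300.0 kJ/kg
```

The closer the state is to the critical point (ρ_V approaching ρ_L), the larger the correction; directly at the critical point the relation is not applicable, as stated above.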

The difference between Equations (14.3) and (14.4) is that the generated vapor was liquid before and leaves some empty volume after evaporation, which will be filled by the vapor again so that the relief amount is reduced. Far away from the critical point, Equation (14.4) is accurate; however, this is the reason for many users to apply it in any possible case. As said, this is acceptable for pure components, but it is not at all justified to apply it to mixtures. An enthalpy of vaporization can be assigned if a liquid evaporates at constant temperature and pressure. This is only the case for a pure substance or for an azeotrope, where, however, the composition changes with the pressure. If a mixture is evaporated, more low-boilers than high-boilers will be evaporated; thus, more high-boilers remain in the liquid, and the boiling temperature rises. The heating of the liquid requires part of the energy, so less liquid will be evaporated than expected by Equation (14.4). But it is even more important to find out what is really evaporated, as vapor and liquid concentration can differ significantly in a mixture. In [11], an example is given which shows that averaging the enthalpies of vaporization with respect to their liquid concentrations can


lead to large errors. The only way to account for both the vapor concentrations and the liquid temperature increase is a flash calculation using a process simulation program. The coordinates to be specified are the pressure and the differential heat flow, and the ratio Q̇/ṁ_out can easily be determined.

Example
In a cylindrical vessel (D = 2.5 m, H = 5 m, height of lower tangent line 5 m) there are 5000 kg of 1,2-dichloroethane and 5000 kg of vinyl chloride at t = 20 °C. The head of the vessel and the adjacent piping will be neglected. The design pressure of the vessel is pDes = 8 barg, and the ambient pressure is pU = 1 bar. The vessel is exposed to a pool fire. Calculate the relief amount using a process simulation program. Determine the relief amount and its state.

Solution
The following steps are performed to obtain the solution:
1. Determination of the wetted area up to 25 ft and the state at the beginning: First, the volume of the vessel is determined. It is

   V = H π D²/4 = 24.54 m³

   Using a process simulation program, the state in the vessel is calculated in an iterative procedure. The pressure in the vessel is repeatedly estimated. The volumes of the vapor and the liquid phase are determined. If their sum is equal to the volume of the vessel, the pressure estimate was correct. In this case, the result at t = 20 °C is p = 2.15 bar, with VL = 9.38 m³ and VV = 15.16 m³. Thus, the liquid height inside the vessel is evaluated to be

   HL = VL/(π D²/4) = 9.38 m³/4.909 m² = 1.91 m

   Adding the height of the tangent line, we get HL,wetted = 6.91 m < 7.62 m. This means that all the wetted part of the wall is influenced by the fire. The wetted area is

   A = π D²/4 + π D ⋅ HL = 19.91 m²,

   where the contribution of the adjacent piping is neglected.
2. Heat stream transferred by the fire: According to Equation (14.2), one gets with F = 1

   Q̇_H = 43.2 ⋅ 19.91^0.82 kW = 502 kW

3. State of safety valve actuation: With the process simulation program, the state just before actuation is calculated. It is assumed that actuation takes place at 121 % of the design pressure (fire case), i. e. at p = 1.21 ⋅ 8 barg = 9.68 barg = 10.68 bara. The vessel temperature must be found where the volume of the content is equal to the volume of the vessel at pressure p. It turns out that the actuation temperature is t = 87.99 °C. At this temperature, the liquid volume is 10.3166 m³ and the vapor volume is 14.2234 m³, which adds up to the vessel volume.⁷ The vapor composition is already 90.5 wt. % vinyl chloride and 9.5 wt. % 1,2-dichloroethane,⁸ being far away from the liquid concentration at the beginning. The heat required, calculated with a process simulator as the difference of the internal energies, is

   Q = U2 − U1 = (H2 − p2 V) − (H1 − p1 V) = (H2 − H1) − (p2 − p1)V
     = 1.142 ⋅ 10⁶ kJ − (10.68 − 2.15) ⋅ 10⁵ Pa ⋅ 24.54 m³ = 1.121 ⋅ 10⁶ kJ,

   meaning that the time until actuation is

   τ = Q/Q̇_H = 1.121 ⋅ 10⁶ kJ/502 kW = 2233 s ≈ 37 min

4. Determination of relief amount: A differentially small amount of heat (500 kJ) is added at constant pressure, corresponding to the heat input by the fire during approx. 1 s. The volumes of vapor and liquid are now VL = 10.315 m³ and VV = 14.286 m³, giving altogether 24.60 m³. This exceeds the vessel volume by 0.061 m³. This volume, coming certainly from the vapor phase, has to be relieved through the safety valve. The properties of this stream are part of the stream report of the process simulation program. For the density of the relief stream, 26.72 kg/m³ is obtained. The relief flow can then be determined using Δτ = ΔQ/Q̇_H = 500 kJ/502 kW = 0.996 s to be (Equation (14.1)):

   ṁ_out = ρΔV/Δτ = 26.72 kg/m³ ⋅ 0.061 m³/0.996 s = 1.636 kg/s = 5891 kg/h

   For comparison with a fictive enthalpy of vaporization according to Equation (14.4), the value for Q̇/ṁ_out is determined to be

   Q̇/ṁ_out = 502 kW/1.636 kg/s = 307 kJ/kg

5. Averaging of the enthalpies of vaporization would have yielded a value of 281 J/g. An example with a much more drastic difference is given in [11]. Strictly, the whole calculation would have to be repeated for different times, as it is possible that the relief stream increases with time. For the current example, with more than half an hour until actuation, this is not really relevant. If it is, the application of dynamic simulation is strongly recommended.
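The plain arithmetic of steps 3 and 4 can be reproduced in a few lines; the thermodynamic inputs (enthalpies, volumes, densities) must of course come from the process simulator, and the function names are hypothetical helpers. The numbers are those of the example above:

```python
def heat_to_actuation_kj(dH_kj, p1_pa, p2_pa, V_m3):
    """Closed-system heat via internal energies: Q = (H2 - H1) - (p2 - p1)*V."""
    return dH_kj - (p2_pa - p1_pa) * V_m3 / 1000.0   # Pa*m3 = J, converted to kJ

def relief_flow_kg_s(rho_vap, dV_m3, dQ_kj, Q_dot_kw):
    """Eq. (14.1): m_dot = rho*dV/d_tau with d_tau = dQ/Q_dot."""
    d_tau = dQ_kj / Q_dot_kw                         # s, time for the heat step
    return rho_vap * dV_m3 / d_tau

Q = heat_to_actuation_kj(1.142e6, 2.15e5, 10.68e5, 24.54)   # kJ, approx. 1.121e6
tau_s = Q / 502.0                                           # s until actuation
m_dot = relief_flow_kg_s(26.72, 0.061, 500.0, 502.0)        # kg/s, approx. 1.636
```

This reproduces the approx. 37 min until actuation and the relief flow of approx. 5900 kg/h from the example.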

⁷ The accuracy of the numbers seems to be unreasonable; however, one must keep in mind that the target is to evaluate the difference between large numbers, which is always sensitive. See the example in Section 10.1.
⁸ Using the γ-φ-approach with NRTL and PR for the vapor phase.

A strong difficulty for this procedure is the occurrence of inert components like N2, O2, H2, etc. If inerts are dissolved in the liquid, the relief pressure is often already reached at


comparably low temperatures. Then the inert will be transferred into the vapor phase, and the heat added will be mainly used for heating up the liquid. Despite specifying a differential flash, the temperature then rises significantly by several K. For the quantity Q̇/ṁ_out, very large values (e. g. 50 000 J/g) can be obtained, leading to low design loads for the pressure relief device. To get a reasonable procedure, one should take into account what would really happen in such a case. It will take some time until the liquid is really heated up. During this time, the safety valve will actuate at the calculated Q̇/ṁ_out, giving a very low relief amount. Once the inerts are removed, normal values will be obtained again. In principle, the standard procedure (p. 440) works, but item 3, the tracking of the blowdown process with time, has an even larger meaning. The following procedure is pragmatic for the design of the safety valves: one could leave out the inert components and pretend that they were already blown off. This can be justified by the fact that gas solubility equilibria are reached very slowly. In a process, one cannot be sure that the equilibria calculated in the downstream steps have really been reached and that the light gases are really dissolved in the mixture, not to mention the uncertainty of the mixing rules for the Henry coefficient (Chapter 2.3.1). The safety valve must not be designed in a way that the assumed inert gas concentration is decisive for the evaluation of the relief flow.

14.2.4 Actuation cases

There are a number of reasons why the pressure in a piece of equipment can exceed the corresponding design pressure [217]. The following itemization can be used as a checklist for each safety valve, deciding for each item whether it is relevant or can be ruled out.
1. Fire case (Chapter 14.2.3)
2. Blocked discharge line: If the outlet of a vessel is blocked while the feed is still in operation, substance accumulates in the vessel, which finally leads to a pressure buildup. The most frequent case is the filling of liquid into a blocked vessel with a pump. If the pump can, according to its characteristics, build up a pressure higher than the design pressure of the vessel and if other safety measures are already exploited (e. g. minimum bypass, Section 8.1), a pressure buildup will be the consequence. The relief device must then be designed in a way that the feed according to the pump characteristics can be safely removed. It is always useful to calculate how much time passes before an actuation case really develops; often, this time is unrealistically long. Furthermore, one should take into account that the operator probably gets an alarm after the liquid level range is exceeded.
3. Thermal expansion of the vessel content: Thermal expansion can become a safety issue when liquids are blocked up in a closed vessel or a pipe. When there is no gas blanket, meaning that the closed volume is completely filled with liquid, large pressures build up when the temperature of the liquid increases even slightly. For example, consider a liquid volume of water at t = 20 °C, p = 1 bar. During the heating, the density remains constant at ρ = 998.21 kg/m3 , assuming that the thermal expansion of the vessel is negligible. A temperature increase of just 5 K to t1 = 25 °C at constant density gives a pressure of p2 = 26.8 bar, which may exceed the design pressure by far. Therefore, all volumes or pieces of pipe that can be blocked must be protected with a safety valve which can release part of the liquid. It is called a thermal expansion valve. Usually, a design calculation is not necessary; the smallest safety valve should be sufficient. For gases, thermal expansion is more gradual, so that extreme pressure buildups do not happen. Often, the relief stream is led to the flare, so it is interesting to quantify it. For the design, it is also likely that the smallest safety valve is sufficient. A frequently occurring case is the thermal expansion of a gas in a vessel which is exposed to solar radiation. Usually, the maximum solar radiation is given in the design basis. If not, the solar constant (S = 1370 W/m2 ) can be taken, and the maximum area which the sunlight can hit can be considered.9 Usually, even this by far conservative assumption will lead to the selection of the smallest safety valve. If not, drop the conservative assumptions one by one:
– introduce an absorption coefficient “a” for solar radiation (normally a < 0.6);
– consider heat removal to ambient air by free convection (α = 4–5 W/(m2 K));
– consider radiation exchange with the environment.

Example
A gas storage vessel in the form of a sphere (D = 10 m) is filled with natural gas (for simplicity: methane). During normal operation, it is operated at t = 50 °C and p = 100 bar.
For the design of the safety valve, it is assumed that the vessel is blocked and exposed to solar radiation. In the design basis, the maximum solar radiation is specified to be Smax = 800 W/m2 . To be far on the conservative side, the heat removal to the environment by convection and radiation shall be neglected.10 The design pressure of the vessel is pDes = 110 bar. Calculate how long it will take until the safety valve actuates and the mass flow to be discharged.

Solution
First, the volume of the sphere is calculated to be

V = π D^3/6 = 523.599 m3

The projection of the surface exposed to the solar radiation is

A = π D^2/4 = 78.5 m2 ,

giving the heat flux

Q̇ = Smax ⋅ A = 800 W/m2 ⋅ 78.5 m2 = 62.8 kW

With a high-precision equation of state [29], the mass of the content in the vessel and the heat necessary for reaching the actuation state can be calculated to be

m = ρ(50 °C, 100 bar) ⋅ V = 66.596 kg/m3 ⋅ 523.599 m3 = 34870 kg

and

Q = m ⋅ [u(110 bar, 66.596 kg/m3 ) − u(100 bar, 66.596 kg/m3 )]
  = 34870 kg ⋅ (−135.846 J/g − (−178.289 J/g)) = 1480 MJ ,

as the density during the heating phase remains constant. It takes at least

τ = Q/Q̇ = 1480 MJ / 62.8 kW = 23567 s = 6.55 h

until the safety valve actuates. The temperature at actuation is t1 = 72.7 °C. The safety valve is fully open at p2 = 121 bar. In the following, the calculation assumes that the valve opens fully and immediately after reaching p2 . The heat flux during the first 100 seconds after actuation is regarded; as the safety valve is open, the process is isobaric, and the specific enthalpy is decisive for the energy balance. The relatively long time interval is chosen to make sure that the differences between the two states are significant. The gas expands to the new density ρ100 s due to the heat flux:

Q100 s = m ⋅ [h(121 bar, ρ100 s ) − h(121 bar, 66.596 kg/m3 )]
62.8 kW ⋅ 100 s = 34870 kg ⋅ (h(121 bar, ρ100 s ) − 94.367 J/g) ,

giving h(121 bar, ρ100 s ) = 94.547 J/g. The corresponding density can be evaluated to be ρ100 s = 66.581 kg/m3 , giving a volume after the 100 s of

V100 s = m/ρ100 s = 34870 kg / 66.581 kg/m3 = 523.720 m3

To maintain the pressure, the difference between the new volume and the volume of the vessel must be relieved through the safety valve:

ΔV = V100 s − V = 523.720 m3 − 523.599 m3 = 0.121 m3 ,

giving the relief amount

Δm = ΔV ⋅ ρ100 s = 0.121 m3 ⋅ 66.581 kg/m3 = 8.07 kg ,

which corresponds to a very low mass flow to be discharged of

ṁ discharge = Δm/100 s = 8.07 kg/100 s = 290.5 kg/h

9 Only the projection area perpendicular to the solar radiation is relevant.
10 This is not justified if a realistic value is targeted.
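The arithmetic of the first part of this example can be reproduced in a few lines. The sketch below covers only the energy balance up to actuation; the density and internal-energy values quoted above, which come from a high-precision equation of state, are hard-coded as inputs:

```python
# Sketch of the gas-storage-sphere example: time until the safety valve
# actuates under solar heating. Property values (density, internal energies)
# are taken from the text; only the arithmetic is reproduced here.
from math import pi

D = 10.0          # sphere diameter, m
S_max = 800.0     # specified solar radiation, W/m2
rho = 66.596      # kg/m3 at 50 °C, 100 bar (from the text)
u1 = -178.289e3   # J/kg at 100 bar, 66.596 kg/m3 (from the text)
u2 = -135.846e3   # J/kg at 110 bar, 66.596 kg/m3 (from the text)

V = pi * D**3 / 6        # vessel volume, m3
A = pi * D**2 / 4        # projection area hit by the sun, m2
Q_dot = S_max * A        # heat input, W
m = rho * V              # gas inventory, kg
Q = m * (u2 - u1)        # heat to reach actuation, J (isochoric heating)
tau_h = Q / Q_dot / 3600 # time until actuation, h

print(f"V = {V:.1f} m3, Q_dot = {Q_dot/1000:.1f} kW")
print(f"m = {m:.0f} kg, Q = {Q/1e6:.0f} MJ, tau = {tau_h:.2f} h")
```

The result reproduces the roughly 6.5 h until actuation, which illustrates how slowly such a scenario develops.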

4. Chemical reaction: Runaway chemical reactions are undoubtedly the most serious actuation cases that can occur. Usually, safety valves do not react fast enough, and the pressure relief is performed with rupture discs. One of the best-known actuation cases was the so-called Carnival Monday accident (Chapter 14). The consideration of chemical reactions for pressure relief requires at least a rough knowledge about the reaction kinetics and the heat removal from the vessel where the reaction takes place. An increase of the conversion of an exothermic reaction develops more heat, which in turn increases the temperature, and in a vicious circle the reaction kinetics are further accelerated. As discussed in Chapter 10.2, for runaway reactions the heat removal from the reactor has to be examined.
5. Tube rupture in a heat exchanger: In a shell-and-tube heat exchanger, shell side and tube side might have different design pressures. In case of a tube rupture, the side with the lower design pressure will be exposed to the higher pressure from the other side. The applied design code determines whether such an event must be regarded as a pressure relief case or not. The latest ASME code requires that equipment and piping are tested at 130 % of the design pressure. Thus, if the lower design pressure is less than 10/13 of the higher one, tube rupture must be considered as a relief case. Previous revisions of the ASME code required a test pressure of 150 % of the design pressure; for these cases, the 2/3 rule was the criterion. In Chapter 11, it was pointed out that a tube rupture will happen in the longitudinal direction (Figure 14.11) or at locations which are weakened anyway, like welding seams. Through the hole formed, substance passes from the high-pressure to the low-pressure side and might cause a relief case there. However, the size of the tube rupture hole is undefined and cannot be predicted at all. The following consideration leads to a scenario which is conservative. The substance passing to the low-pressure side must pass the two circular cross-flow areas at the ends of the tube as well. Thus, these cross-flow areas can be regarded as the critical ones. If the hole created by the tube rupture is smaller, the assumption is conservative, and if it is even larger, the cross-flow areas are in fact the critical ones. In fact, it is very unlikely that a hole of this size develops. Most damages are caused by incipient cracks at the welding seams of the top plates where the tubes are fixed. Even there, a complete demolition is not probable [255]. In the chemical and petrochemical industry, relatively small cracks have been observed in the past. Therefore, it has become common practice that smaller cross-flow areas are accepted. A usual approach is to consider an equivalent leakage hole diameter of 5 mm, corresponding to a cross-flow area of approx. 20 mm2 [262]. The calculation of the relief streams is described in detail in [219], using the ω-method [220, 221]. The most dramatic pressure relief case is generated if the tube rupture takes place between a gas at high pressure and a liquid at low pressure. The critical flow through the cross-flow areas is a gas flow which corresponds


Figure 14.11: Tube rupture in a steam reformer tube [218]. Courtesy of IfW Essen GmbH.

to a large volume flow due to the low density of gases. After having passed the cross-flow areas, the gas expands to the low pressure and further increases its volume. To maintain the pressure on the low-pressure side, an equivalent liquid volume must be removed through the pressure relief device, which in turn corresponds to a huge mass flow due to the high densities of liquids. For this case, the calculation is comparably easy.

Example
In a shell-and-tube heat exchanger there is nitrogen (tmax = 100 °C) in the tubes (d = 1′′ ) and cooling water (tmin = 30 °C) on the shell side. The design pressures are pDes,1 = 50 bar on the tube side and pDes,2 = 5 bar on the shell side. Determine the necessary relief amount for the safety valve on the shell side, assuming that the tube rupture takes place at p = pDes,1 .

Solution
At tube rupture, two circular cross-flow areas at the ends of the tube form the minimum cross-section area, where the nominal diameter of the tubes is taken as the approximate inner diameter of the tubes:

Atube rupture = 2 ⋅ (π/4) ⋅ d^2 = 2 ⋅ (π/4) ⋅ 25.4^2 mm2 = 1013 mm2


Assuming design pressure and tmax on the nitrogen side, the maximum mass flow can be determined using the algorithm described in Chapter 14.2.6 (Figure 14.16) to be ṁ = 37750 kg/h. This stream is expanded to p = 5 bar on the shell side. The temperature after the expansion is calculated as an adiabatic expansion via

hN2 ,tube = hN2 ,shell

The heat transfer due to the direct contact with the liquid is neglected, as the relief with the maximum load is considered to be very fast. The result is tN2 ,shell = 94.9 °C, with the corresponding density of ρN2 ,shell = 4.57 kg/m3 . The volume flow is then determined to be

V̇ = ṁ/ρ = 37750 kg/h / 4.57 kg/m3 = 8260.4 m3 /h

This volume flow will enter the shell side and try to displace the cooling water. Neglecting the small compressibility of the water, this volume flow must be removed from the shell side; otherwise, the nitrogen would build up a pressure which rapidly exceeds the design pressure of the shell side. As the cooling water is probably next to the safety valve on the shell side, this volume will at first be displaced as cooling water, with a density of ρ(30 °C, 5 bar) = 995.83 kg/m3 [29]. The resulting water mass flow is then

ṁ = V̇ ⋅ ρ = 8260.4 m3 /h ⋅ 995.83 kg/m3 = 8225948 kg/h

Assuming a standard leakage hole of 20 mm2 , the result will be approx. 162400 kg/h, which is much easier to digest.
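As a sketch, the displaced-water relief flow of this example and the ASME-based 10/13 criterion from item 5 can be written as follows. The gas flow (from the algorithm of Chapter 14.2.6) and the property values are taken from the example text; the helper function name is only illustrative:

```python
# Tube-rupture sketch: 10/13 relief-case criterion and displaced-water
# relief flow. Gas flow and densities are quoted from the worked example.

def tube_rupture_is_relief_case(p_des_low, p_des_high, test_factor=1.30):
    """ASME-based criterion: a relief case exists if the low-side design
    pressure is below p_des_high / test_factor (10/13 for a 130 % test)."""
    return p_des_low < p_des_high / test_factor

m_dot_gas = 37750.0   # kg/h, critical N2 flow through the 1013 mm2 opening
rho_gas = 4.57        # kg/m3, N2 after expansion to 5 bar, 94.9 °C
rho_water = 995.83    # kg/m3, cooling water at 30 °C, 5 bar

V_dot = m_dot_gas / rho_gas        # displaced volume flow, m3/h
m_dot_relief = V_dot * rho_water   # water flow through the safety valve, kg/h

print(tube_rupture_is_relief_case(5.0, 50.0))  # relief case, since 5 < 50*10/13
print(f"V_dot = {V_dot:.0f} m3/h, relief = {m_dot_relief:.0f} kg/h")
```

The sketch reproduces the enormous water relief flow of roughly 8.2 million kg/h for the full two-opening scenario, which makes clear why the 20 mm2 standard leakage hole is such a relief for the flare design.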

6. Abnormal heat input: If the heating agent in a heat exchanger is not or no longer flow-limited, large temperatures and, subsequently, large pressures on the product side can be the consequence. A well-known case is the full opening of the steam control valve so that the largest possible steam flow enters the reboiler of a distillation column (see “control valve failure” below). A simplified assumption like “steam flow is proportional to heat input” is not adequate. At relief pressure, one must take into account that the temperatures on the product side both in the reboiler and in the condenser are higher than in the design case. The scenario must be evaluated in an iterative procedure. The column has to be recalculated in the process simulator at relief pressure with estimated reboiler and condenser duties. Then, the duties have to be verified in the heat exchanger design program, taking into account that the driving temperature difference is lower in the reboiler and larger in the condenser. To be conservative, it must additionally be regarded that there is no fouling in the reboiler for maximum heat input and a fully developed fouling layer in the condenser to minimize the heat removal, as the relief case can occur just after the reboiler has been cleaned, while the condenser has been left as it was. The procedure has to be repeated until the estimated duties in the process simulation are in line with the ones


obtained in the heat exchanger design program. The procedure can be simplified if the heat transfer coefficient is assumed to be constant.11
7. Cooling system failure: A deficit or a full breakdown of the cooling system can cause a pressure buildup, similar to the scenario “abnormal heat input” described above. For columns,12 an analogous calculation must be performed, taking into account that the reflux of the column is also affected as soon as the storage amount in the reflux drum has been consumed. For this purpose, the application of dynamic simulation is desirable [214].
8. Power failure: The power failure is the third scenario which is especially interesting for distillation columns. The scenario is often weaker than the “cooling system failure”, as not only the cooling but also the feed pumps and, in some cases, the supply of the heating agent fail. It has to be clearly pointed out in the scenario which electrically driven pieces of equipment are affected.
9. Control valve failure: A failure of a control valve is presumed, where the control valve does not reach a fail-safe closed position (Section 12.3.2) but remains fully open. According to the pressures involved, more flow than usual will enter the piece of equipment which is to be protected. During basic engineering, the control valves have not yet been specified, so a preliminary assumption must give an idea of the pressure relief case. As long as no better knowledge is available, the assumption that the maximum flow through the valve is twice as large as the maximum flow specified might be useful. During detailed engineering, this assumption should be replaced by actual information about the valve. Another option is to place a restriction orifice in the line of the control valve. A rule of thumb is to design it for 130–150 % of the maximum process flow. It should be placed at least 20× tube diameter upstream of the control valve [255].
10. Pump failure: Similar to the control valve failure, it is assumed that the pump flow control is out of order. According to the pump characteristics, it has to be found out which maximum flow can be conveyed by the pump at relief conditions when the pressure on the discharge side is higher. If the pump has not been specified, some simplifying assumptions have to be used. Again, a revision during detailed engineering should take place.

All these scenarios are not necessarily independent of each other. If two of them occur at the same time, it must be distinguished whether one is the consequence of the other or whether both are assumed to occur together by coincidence (“double jeopardy”). For
11 In this case, the clear separation of process simulation and equipment design (Section 3.5) is in fact not advantageous. 12 For distillation columns, “cooling system failure” can also mean that the overhead line is blocked.

the latter case, the probability is generally too low. Unrelated failures should not occur simultaneously. If they are nevertheless considered, it might lead to an unreasonably high capacity of the flare system. Control devices or interlocks as substitutes for pressure relief devices are in general not allowed unless they have a SIL classification. If design cases with significantly different relief amounts occur, it might be an option to place different safety valves for the different cases in parallel. The actuation pressures must be chosen in a way that the smaller the relief case, the lower the actuation pressure. For example, for two different actuation cases and a design pressure of 6 barg, the first safety valve could actuate at 5.9 barg without bothering the one for the larger relief case, which actuates at 6 barg.

14.2.5 Safety valve peculiarities

In general, things become even more confusing if the pressure relief does not take place in a vessel with a well-defined content but in a column with a concentration profile across the apparatus. To start with this topic, the column is at first considered to be closed during the pressure relief, meaning that all inlet and outlet streams are blocked. This might happen during the fire case, when the entire column is blocked in by quick-acting shut-off valves. The formalism for closed vessels as described above can then be applied to the column; however, in the column there are various liquid reservoirs as holdups on the particular separation stages, each of them having a different temperature and composition and being more or less in equilibrium with its own vapor phase. The simple method of neglecting these holdups and considering only the bottom reservoir might be useful for packed columns with a small holdup and a large bottom area, but in case of larger holdups it might lead to absurd results, for instance that no light ends are relieved due to their removal in the stripping area, which is usually the purpose of the column. This is often coupled with a wrong specification of the safety valves. Avoiding this problem by taking the concentration of the reflux, which has the highest concentration of light ends, consequently leads to an overdesign. Many of these shortcut approaches are in use and give results, but they are neither correct nor even useful. If some holdup remains on the stages, the column will perform as a distillation column at pressure relief. When heavy ends evaporate and go upwards, they will tend to get into equilibrium on the upper stages, where they condense and evaporate light ends. As light ends tend to have a lower molecular weight and a lower molar heat of vaporization, it might happen that a larger volume flow has to be relieved through the safety valve than was generated at the bottom.
However, there are some types of columns where the assumption that the rectification effect can be neglected is justified.
– Sieve trays: When a sieve tray column is blocked, the liquid flow from above caused by the reflux is stopped. The pressure profile will break down, as there is no directed flow any more. As the vapor generated can no longer leave at the top, the pressure rises

in the whole column. The trays will be emptied one by one, starting from the top. This will take some time. If all trays are emptied before actuation, the column can actually be treated like a vessel. However, to make sure that this presumption is fulfilled, a separate consideration has to be made. If it is reasonable, the final composition of the liquid in the bottom can be determined by adding up all the holdups on the trays with their particular concentrations. The amount of the holdup can be estimated using the weir height,13 while the compositions are available from the thermodynamic modeling of the column. Also, the vapor holdups can be added up. The vapor and the liquid obtained in this way are certainly not in equilibrium, but they can be used as a starting point for simulating the further pressure buildup of the column (Section 14.2.3).
– Random and structured packings: Packed columns are emptied after the reflux and the feed are blocked; the entire holdup of the column, which is comparably small (< 5 %), goes down to the bottom. The considerations discussed for the sieve tray column above hold for packed columns as well. The holdups can be evaluated by the hydrodynamic models (Engel [103], Billet/Schultes [104]). Furthermore, it is important that the holdups in the collectors and distributors are taken into account.
– Bubble cap and valve tray columns: Bubble cap trays are more difficult to assess. After reflux and feeds are blocked, the trays do not completely drain off. Depending on the construction details of the bubble caps, the vapor will at least partly pass the liquid on the trays and perform heat and mass transfer, so that a rectification effect will occur. On valve trays, the valves close the holes completely after the pressure profile has been equalized. The tray will be filled with liquid up to the weir height.
To reach pressure equilibrium when the column content is heated up, the valves will partly open, and through the holes part of the liquid will drop down onto the next tray. It cannot be said whether the trays are completely emptied; this depends on the whole valve construction. When the safety valve actuates, the vapor will probably pass at least a small liquid layer, and a certain mass transfer giving a rectification effect will result.
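The holdup-summation step described for sieve trays can be sketched as a simple mixing balance. The tray holdups and light-end fractions below are hypothetical illustration values; for simplicity, mixing is done on a mass basis, while a rigorous calculation would work with molar amounts:

```python
# Minimal sketch: the bottom charge after the trays have drained is the
# holdup-weighted average of tray and bottom compositions.
# All numbers are hypothetical illustration values.

trays = [
    # (liquid holdup in kg, mass fraction of light ends) per tray
    (120.0, 0.95),   # top tray
    (120.0, 0.80),
    (130.0, 0.55),
    (130.0, 0.30),   # bottom tray
]
bottom = (2000.0, 0.05)  # bottom reservoir before blocking

m_total = bottom[0] + sum(m for m, _ in trays)
x_mixed = (bottom[0] * bottom[1]
           + sum(m * x for m, x in trays)) / m_total

print(f"mixed bottom charge: {m_total:.0f} kg, x_light = {x_mixed:.3f}")
```

Even with a small total tray holdup compared to the bottom reservoir, the light-end content of the mixed charge increases markedly, which is exactly why neglecting the tray holdups can lead to a wrong relief composition.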

To summarize: even for the comparably easy case of a column which is fully blocked there are a number of assumptions involved which cannot be clarified just by setting up a plausible scenario. The favorite tool is the dynamic process simulation, which can quantify the assumptions made above for the particular cases. There is hardly any chance to calculate scenarios where feed, reflux or product stream are still active without dynamic simulation. Some further considerations about dynamic simulation for the pressure relief are explained in [214, 222, 272].

13 Reasonable assumption: clear liquid height on the tray up to 120 % of the weir height.

Especially for columns, but also in other cases, there is another important aspect. For an oversized safety valve, the pressure relief will achieve its target of reducing the pressure in the piece of equipment to be protected. On the other hand, the resulting stream will exert a larger load on the equipment and the piping. A larger relief stream in a column might destroy the packing or the tray fixings. For example, tray fixings can withstand a tray pressure drop of approx. 20 mbar; therefore, it is quite probable that they will be damaged during pressure relief.14 Oversizing a safety valve can also lead to severe oscillations. After reaching the actuation pressure, the safety valve opens and relieves an amount which cannot be continuously delivered by the process (dischargeable mass flow, see below). Therefore, the pressure drops and the safety valve closes. Then, the pressure rises again, and so on, giving oscillations and hammering of the safety valve. In the worst case, the welding seam at the inlet flange can break, followed by an uncontrolled release of the process medium into the environment [255]. The only measure against these oscillations is to design the safety valve “correctly”. But this is easier said than done. For the design, worst-case scenarios are defined as actuation cases, giving some safety margin to reality anyway. From that point on, several other safety margins are added for the final design of the safety valve. Often, the vendor is involved in the design process, adding his own safety margins. Finally, the pressure relief devices often become too large, sometimes far away from reality. As a countermeasure, friction brakes can slow down the closing of the valve. This does not prevent the valve from chattering but minimizes the energy released by the impact so that damage is limited.

Many safety valves only work by coincidence rather than by design. And hopefully, they will never actuate. (Robert Angler)

Other points caused by oversizing of the safety valve are that the flare loads can be higher than expected and that the repulsive forces on the safety valve and the adjacent piping are underestimated, which is illustrated in Figure 14.12. During the pressure relief, the relief flow enters the safety valve from below and leaves it to the right-hand side. The momentum balances for the x- and the y-direction indicate that the reaction forces Rx and Ry are induced, giving the resulting reaction force R by vector addition. In Figure 14.12, some exemplary numbers are given. The result for the repulsive force is 21 kN, equivalent to a weight of approx. 2.1 t. It is clear that pipe engineers must be aware of the forces exerted on the pipes and their fixings. An underestimation of the flows involved could lead to mechanical damage during pressure relief. There are many misunderstandings concerning the expression “conservative assumption”. Frequently, this term is used as a phrase for a “simplifying assumption” [223],

14 It is possible to provide stronger tray fixings; however, in case of an explosion the trays should be destroyed to protect the shell.

14.2 Pressure relief

� 457

Figure 14.12: Illustration of repulsive forces. © Rasi57/Wikimedia Commons/CC BY-SA 3.0. https://creativecommons.org/licenses/by-sa/3.0/deed.en.

which has often hardly anything to do with being “conservative”. Simplifying assumptions are necessary to save time, but they should be applied in a consistent way. For example, just taking the overhead flow of a column during normal operation as the pressure relief stream is simplifying. It is neither ensured that it is larger than the correct relief stream nor that it is at least in the correct order of magnitude. This assumption cannot replace an adequate simulation of the pressure relief. In contrast, taking the normal flowrate of a pump in an overfilling relief case is conservative, as the pump flow will in fact be reduced according to the pump characteristics due to the elevated back pressure. On the other hand, the value obtained with this assumption can be far too large, and it might make sense to apply a reasonable estimate of the real relief flowrate. A remark should be documented to ensure that this estimate is checked as soon as the characteristics of the pump are known. One must distinguish between the mass flow to be discharged and the dischargeable mass flow. From the process calculation of the particular scenarios, one gets the mass flow to be discharged, i. e. the stream to be relieved to maintain the pressure in the apparatus at an acceptable level. However, pressure relief devices like safety valves are produced with defined sizes. One cannot choose a safety valve which fits exactly the calculated relief stream, but the next one in the list where the certified capacity is sufficient. This means that the relief stream obtained with this safety valve could be higher than requested. This is the dischargeable mass flow. It must be noted that according to most of the standards, all pressure drop calculations (inlet line, outlet line) must be based on this dischargeable mass flow, regardless of whether this is possible from the process point of view [224]. According to a rule of thumb, it is reasonable if the dischargeable

mass flow is about 10–20 % above the mass flow to be discharged. More overdesign leads to chattering in the actuation case. More peculiarities of safety valves and their arrangement can be found in [255].
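As an illustration of the 10–20 % guideline, the following sketch picks the smallest standard orifice that covers a required relief area and reports the resulting overdesign. The orifice letters and areas follow API 526, converted here from the in2 figures; treat them as assumptions to be confirmed against the vendor datasheet:

```python
# Sketch: select the smallest standard safety-valve orifice covering the
# required relief area and quantify the overdesign.
# Letter designations and areas per API 526 (approx. values in mm2,
# converted from in2); confirm against the vendor datasheet.

API526_ORIFICES = {
    "D": 71, "E": 126, "F": 198, "G": 325, "H": 506, "J": 830,
    "K": 1186, "L": 1841, "M": 2323, "N": 2800, "P": 4116,
    "Q": 7129, "R": 10323, "T": 16774,
}

def select_orifice(area_required_mm2):
    """Return (letter, area, overdesign %) for the smallest adequate orifice."""
    for letter, area in sorted(API526_ORIFICES.items(), key=lambda kv: kv[1]):
        if area >= area_required_mm2:
            return letter, area, 100.0 * (area / area_required_mm2 - 1.0)
    raise ValueError("required area exceeds largest standard orifice")

letter, area, over = select_orifice(950.0)
print(f"orifice {letter}: {area} mm2, overdesign {over:.0f} %")
# An overdesign well above the 10-20 % guideline indicates a chattering risk;
# then the scenario should be reviewed, or two parallel valves considered.
```

For a required area of 950 mm2, the sketch selects a "K" orifice with roughly 25 % overdesign, i.e. already slightly above the guideline; this is exactly the kind of borderline case where the dischargeable mass flow must be checked against the process.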

14.2.6 Maximum relief amount

Having determined the mass flow to be discharged, the design of the pressure relief device can take place, i. e. mainly the choice of the opening area at actuation. This area must be large enough to let the mass flow to be discharged pass in order to maintain the pressure. The maximum relief amount through an opening area is determined by the critical flow phenomenon, which must be thoroughly understood. The open cross-flow area can generally be treated as an orifice. Figure 14.13 shows the relationship between the mass flux density through an orifice and the outlet pressure. Consider a pressure vessel filled with nitrogen at p1 = 200 bar, which has to be relieved through an orifice to an environment at a pressure p2 . If p2 = p1 , there is no driving force for a flow, and the mass flux density will be zero. The mass flux density increases when the pressure p2 is lowered. However, at a certain level of p2 (rule of thumb: ≈ 0.5 p1 ) the mass flux density starts decreasing in the calculation (dashed line in Figure 14.13). It can be shown with the Second Law that a further acceleration is not possible. In fact, the mass flux density stays constant. This value is called the critical mass flux density. Its existence does not depend on the state: it can be liquid, vapor, or two-phase. However, it is most important and significant for vapor flow, where it can be shown that the speed of sound will occur in the minimum cross-section area (throat) [11]. It is a common misunderstanding that the calculation of the pressure relief is equal to

Figure 14.13: Mass flux density through an orifice as a function of the outlet pressure. Calculated for nitrogen, p1 = 200 bar, t1 = 20 °C.


Figure 14.14: Throttle valve.

the one for an adiabatic throttling (Figure 14.14). The well-known condition for adiabatic throttling

h1 = h2   (14.5)

is correct, but it refers to states 1 and 2 far away from the minimum cross-section area, where the velocities are relatively small or at least of the same order of magnitude, so that the kinetic energy term can be neglected. h1 and h2 are not equal to the enthalpy hc directly in the orifice, as the velocity is much larger there and should be taken into account in the energy balance:

w1^2/2 + h1 = wc^2/2 + hc   (14.6)

Equation (14.6), the continuity equation, the isentropic change of state, the speed of sound, and the equation of state are used to derive a procedure for the determination of the maximum mass flux density. For an ideal gas, we get [11]

ṁc /A = √( 2 p1 /v1 ⋅ κ/(κ − 1) ⋅ [(pc /p1 )^(2/κ) − (pc /p1 )^((κ+1)/κ)] )   (14.7)

with

pc /p1 = (2/(κ + 1))^(κ/(κ−1))   (14.8)

Equation (14.7) should be supplemented by a factor KD , which considers the fact that the cross-flow area directly in the orifice is not necessarily the minimum cross-section area. Instead, the typical flow pattern is that the flow is further constricted downstream of the orifice (Figure 14.15), and the effective cross-flow area is smaller than the orifice itself. As well, KD considers the losses for the flow from 1 to c. The lack of knowledge is summarized in the KD value, which is usually in the range 0.8 < KD < 0.975:

ṁc /A = KD ⋅ √( 2 p1 /v1 ⋅ κ/(κ − 1) ⋅ [(pc /p1 )^(2/κ) − (pc /p1 )^((κ+1)/κ)] )   (14.9)


Figure 14.15: Flow pattern through an orifice. Courtesy of Dana Saas.

The vendor of a safety valve or the piping element should be able to deliver more details about KD; otherwise, some advice is given in [228]. Equation (14.9) is sometimes given in a form in which it can barely be recognized, and furthermore, different results are obtained [229]. So far, it could always be shown that the equations and procedures are equivalent to Equation (14.9); the differences in the results occur due to different default values for KD. For a real gas, it is often suggested to use the real specific volume for v1 in Equation (14.7) instead of the ideal gas one v1 = RT1/p1, and furthermore, to determine κ from the real values for cp and cv as κ = cp/cv, which often leads to unreasonable values. This is contradictory to the thermodynamic derivation and leads to an inconsistent result. For example, at t = 50 °C, p = 100 bar the ratio for ethylene is cp/cv = 3.1, whereas the largest theoretically possible value is κ = 1.67 for a monatomic ideal gas (He, Ne, Ar, ...). It is not possible for a real gas to assign a κ which fulfills both the ideal gas condition for an isentropic expansion

$$\frac{T_2}{T_1} = \left(\frac{p_2}{p_1}\right)^{(\kappa-1)/\kappa} \tag{14.10}$$

and the equation for the speed of sound of an ideal gas

$$w^* = \sqrt{\kappa R T} \tag{14.11}$$

For the above-mentioned condition, the κ for the isentropic expansion which yields the correct value would be κ ≈ 1.2, whereas κ = 0.998 would be required to get the correct speed of sound. Both values are obviously different and far away from the ratio cp /cv . κ < 1 is even impossible, as cp is always larger than cv . In fact, one is often lucky in using Equation (14.7), as long as the pressures are not too high. In [230] a number of examples have been compared. The result was that Equation (14.7) is in fact a good approximation of the exact solution. The maximum mass flux density and, correspondingly, the necessary free cross-flow area are often met quite well. However, it is also pointed out that temperatures and pressures along the line cannot be reproduced, leading to possibly inadequate design conditions. When applying Equation (14.7), one should take care that κ is in a reasonable range (1 < κ < 1.67), otherwise, something might go wrong. At least, one should check how sensitive the results are to variations of κ.


Often the dependence is not strong. If one is in doubt, one should prefer the real gas calculation [11]. For example, in the LDPE process¹⁵ at p = 3000 bar, where the density looks more like a liquid density than like a gas density, the ideal gas calculation (Equation (14.7)) does not make sense anymore. For a correct consideration of real gas behavior, an analogous procedure has been set up in [11]. It is not a formula, as the equation of state is not standardized, but an iterative procedure. The rule of thumb that the critical pressure is approx. 50 % of the inlet pressure p1 is no longer valid in these cases. The calculation scheme is listed in Figure 14.16.

Figure 14.16: Calculation procedure for the mass flux density of a real fluid.
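The scheme of Figure 14.16 is essentially a one-dimensional search: for each trial pressure ratio, the state in the minimum cross-section is found isentropically from the equation of state, and the ratio maximizing ρc·wc is taken. The sketch below uses ideal-gas property functions as stand-ins for a real equation of state (an assumption made here purely for illustration; in practice FLUIDCAL, TREND or similar tools would supply h, s and ρ). For an ideal gas, the optimum must reproduce Equation (14.8):

```python
import math

# Ideal-gas stand-in property functions; data refer to the ethylene example.
R = 8.31446            # universal gas constant, J/(mol K)
M = 0.028053           # molar mass of ethylene, kg/mol
cp = 50.302 / M        # specific isobaric heat capacity, J/(kg K)
Rs = R / M             # specific gas constant, J/(kg K)

def T_isentropic(T1, p1, pc):
    """Temperature after isentropic expansion (ideal gas, s = const)."""
    return T1 * (pc / p1) ** (Rs / cp)

def mass_flux(T1, p1, pr, KD=1.0):
    """KD * rho_c * w_c for a trial pressure ratio pr = pc/p1 (Figure 14.16)."""
    Tc = T_isentropic(T1, p1, pr * p1)
    wc = math.sqrt(2.0 * cp * (T1 - Tc))   # First Law with w1 = 0, Eq. (14.6)
    rho_c = pr * p1 / (Rs * Tc)            # density from the equation of state
    return KD * rho_c * wc

def find_critical_ratio(T1, p1):
    """Grid search for the pressure ratio that maximizes the mass flux."""
    ratios = [0.30 + 0.001 * i for i in range(500)]   # 0.300 ... 0.799
    return max(ratios, key=lambda pr: mass_flux(T1, p1, pr))

pr_opt = find_critical_ratio(373.15, 300e5)
flux_opt = mass_flux(373.15, 300e5, pr_opt, KD=0.8)
```

With real-gas property functions instead of the stand-ins, the same search structure yields the lower pressure ratio (0.3967 in the ethylene example) and the higher mass flux density of the real-gas solution.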

Having calculated the mass flux density, one can easily choose the cross-flow area for the safety valve:

$$A_{\min} = \frac{\dot m}{(\dot m_c/A)} \tag{14.12}$$

Then, from the standard row of safety valves (Table 14.2), the appropriate one can be chosen which just covers the necessary cross-flow area.

Example
Choose a standard orifice from Table 14.2 for a mass flow to be discharged of 40000 kg/h. The substance is ethylene, the inlet conditions are p1 = 300 bar, t1 = 100 °C. Further input data:
– M = 28.053 g/mol
– cpid = 1.7931 J/(g K) = 50.302 J/(mol K)
– KD = 0.8
– p2 = 1 bar
Use a) the ideal gas model and b) a high-precision equation of state.

15 LDPE–low density polyethylene.

Table 14.2: Standard row of safety valves.

Orifice   A/mm²
D         71
E         126
F         198
G         324
H         506
J         830
K         1185
L         1840
M         2322
N         2800
P         4116
Q         7129
R         10322
T         16774

Solution
a) With κ = cpid/(cpid − R) = 1.198, we first determine the pressure ratio from Equation (14.8):

$$\frac{p_c}{p_1} = \left(\frac{2}{\kappa+1}\right)^{\kappa/(\kappa-1)} = (2/2.198)^{1.198/0.198} = 0.5649$$

With v1 = RT1/p1 = 3.6865 · 10⁻³ m³/kg, Equation (14.9) yields

$$\frac{\dot m_c}{A} = 0.8\sqrt{\frac{2 \cdot 300 \cdot 10^5\,\text{Pa}}{3.6865 \cdot 10^{-3}\,\text{m}^3/\text{kg}} \cdot \frac{1.198}{0.198}\left[0.5649^{2/1.198} - 0.5649^{2.198/1.198}\right]} = 46776\,\frac{\text{kg}}{\text{m}^2\,\text{s}}$$

giving

$$A_{\min} = \frac{40000\,\text{kg/h}}{46776\,\text{kg/(m}^2\,\text{s)}} = \frac{40000}{3600 \cdot 46776} \cdot 10^6\,\text{mm}^2 = 238\,\text{mm}^2$$

From Table 14.2, standard orifice G with 324 mm² is chosen.

b) The real gas solution is essentially the same as Equation (14.9) [11], but, as always, more complicated in detail. Tools like FLUIDCAL [29] or TREND [277] enable the user to perform the corresponding calculations easily within EXCEL. With a given pressure ratio pc/p1, one can evaluate the particular quantities in the minimum cross-section area:
– pc
– Tc(pc, s1)
– hc(pc, s1)
– ρc(pc, s1)
– wc = √(2(h1 − hc)) (First Law with w1 = 0, Equation (14.6))
– ṁc/A = KD ρc wc


For gas flows, the velocity in the minimum cross-section area should be equal to the speed of sound w*(Tc, pc). The pressure ratio is optimized, either iteratively or with an optimization routine, as available in EXCEL. The results are:
– pc/p1 = 0.3967
– ṁc/A = 69630 kg/(m² s)

giving

$$A_{\min} = \frac{40000\,\text{kg/h}}{69630\,\text{kg/(m}^2\,\text{s)}} = \frac{40000}{3600 \cdot 69630} \cdot 10^6\,\text{mm}^2 = 160\,\text{mm}^2$$

From Table 14.2, standard orifice F with 198 mm² is chosen. Considering the real gas behavior not only gives more accurate results for the size of the safety valve, but also the correct state properties in the minimum cross-section area. This is important especially for the temperature, which is often extremely low.
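The final step, picking the smallest standard orifice that covers the required area, can be automated directly from Table 14.2. A minimal sketch (function name is illustrative):

```python
# Standard orifice letters and cross-flow areas in mm^2 (Table 14.2)
STANDARD_ORIFICES = [
    ("D", 71), ("E", 126), ("F", 198), ("G", 324), ("H", 506),
    ("J", 830), ("K", 1185), ("L", 1840), ("M", 2322), ("N", 2800),
    ("P", 4116), ("Q", 7129), ("R", 10322), ("T", 16774),
]

def select_orifice(a_min_mm2):
    """Return the smallest standard orifice that covers the required area."""
    for letter, area in STANDARD_ORIFICES:
        if area >= a_min_mm2:
            return letter, area
    raise ValueError("required area exceeds largest standard orifice")
```

For the example above, `select_orifice(238)` returns orifice G (324 mm²) and `select_orifice(160)` returns orifice F (198 mm²), matching the ideal-gas and real-gas results.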

For the design of a safety valve or a rupture disc, the inlet line, the pressure relief device, and the outlet line are a unit which should be designed together in one step. The obvious advantage is that the properties of the relief stream must be entered only once. Figure 14.17 illustrates the requirements which the calculation has to fulfill, with the simplified assumption of a gas flow.

Figure 14.17: Inlet line, safety valve and outlet line as a unit.

First, a safety valve is considered as pressure relief device, simplified as an orifice in the center of the drawing. From the vessel with the relief pressure p0, the relief stream enters the inlet line on the left-hand side. The pressure drop of this line, which is usually short (1–2 m), can easily be calculated with the conventional formula, Equation (12.1). The requirement is that this pressure drop shall be below 3 % of the driving pressure difference¹⁶ (p0 − pU), where pU is the pressure at the destination of the relief stream (usually flare or environment, if possible). The safety valve itself cannot be subject to a conventional pressure drop calculation, e. g. according to Equation (12.19). Instead, what we know is that the maximum mass flux density and the critical pressure are reached in the minimum cross-section area, and for a vapor flow we get the speed of sound. According to the design guidelines, the pressure drop in the outlet line shall not exceed a certain value, e. g. 10 % of the total pressure difference between the vessel and the lower-pressure location.¹⁷ The pressure drops of both outlet and inlet line must be calculated with the dischargeable mass flow through the safety valve, not with the mass flow to be discharged. The whole procedure for the dimensioning of the outlet pipe has been described in Chapter 12.1.3. As the thermodynamic state of the fluid may vary considerably along the line, the outlet pipe is divided into increments so that the pressure drop of each increment can be determined with the updated state variables. In an iterative procedure, the pressure after expansion downstream of the safety valve is estimated, and the outlet state of the pipe is calculated, until the estimated pressure after the safety valve yields the speed of sound at the outlet or expansion to environmental pressure. If the pressure drop exceeds the 10 %, the outlet line diameter must be increased. For rupture discs, there is no limitation like this, and in most cases the limitation is the speed of sound at the pipe outlet. The pressure drop can again not be evaluated by a single pressure drop calculation, as the state of the relieved fluid varies significantly along the pipe. Instead, an increment-wise calculation must take place. This is especially important for compressible gas flows and flashing fluids, where the pressure drop causes further evaporation.

16 The requirement can vary slightly, according to the guideline used.
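The increment-wise outlet-line calculation can be sketched as follows. This is a strongly simplified illustration, assuming isothermal ideal-gas flow, a constant Darcy friction factor and a straight pipe of constant diameter; a real calculation updates the full thermodynamic state (including two-phase behavior) and the fittings per increment. All function names and numbers are illustrative:

```python
import math

def pressure_profile(p_in, T, G, D, L, lam=0.02, Rs=296.4, n=200):
    """March along the outlet pipe in n increments.

    p_in : pressure downstream of the relief device, Pa
    G    : mass flux m_dot/A in kg/(m^2 s); lam : Darcy friction factor
    Rs   : specific gas constant in J/(kg K)
    Stops if the local velocity reaches the (isothermal) sonic limit,
    i.e. the flow would choke at the pipe outlet.
    """
    dx = L / n
    p = p_in
    profile = [p]
    for _ in range(n):
        rho = p / (Rs * T)            # local density (ideal gas)
        w = G / rho                   # local velocity from continuity
        if w >= math.sqrt(Rs * T):    # isothermal speed-of-sound limit
            break
        p -= lam / D * rho * w ** 2 / 2.0 * dx   # friction pressure drop
        profile.append(p)
    return profile

# illustrative numbers only: 5 bar after the valve, 50 m of DN100 pipe
prof = pressure_profile(p_in=5e5, T=300.0, G=500.0, D=0.1, L=50.0)
```

The profile shows the characteristic behavior: as the pressure falls, the density falls and the velocity rises, so the pressure drop per meter steepens toward the pipe outlet.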
The procedure can again be taken from Chapter 12.1.3 and can be transferred to the two-phase flow pressure drop as well. In contrast to the inlet line, the course of the outlet line can only be guessed during basic engineering, as the locations of the particular pieces of equipment and the tie-in points to the flare system are not known, not to mention the number and kinds of the bends. There must always be a note that the outlet lines have to be updated in detailed engineering. Some remarks should be given about the pressure drop of the inlet line. According to the established guidelines, the pressure drop in a safety valve inlet line should not exceed 3 % of the actuation pressure, referring to the dischargeable mass flow [231]. Although there is no physical background for this rule, Figure 14.18 shows that it makes some sense. The blue lines show a case where the 3 %-criterion has been kept.18 After the actuation pressure has been reached, the safety valve starts to open, and within a few

17 10–15 %; in the following, 10 % is used for simplicity.
18 At first glance, it seems that the pressure drop might be larger than 3 % (10.5 bar at safety valve inlet, 12 bar in the vessel). The reason is that the dynamic pressure due to the velocity is not considered in the diagram.


Figure 14.18: Valve lift and pressure at safety valve inlet flange as a function of time for 3 % (blue) and 6 % pressure drop (red) in the inlet line. Qualitative remake from [231].

milliseconds the full valve lift is obtained. The pressure in the vessel remains almost constant, and the pressure at the safety valve inlet drops slightly. The red lines show a case where the same safety valve has been used with an increased inlet pipe length so that the pressure drop in the line amounts to approximately 6 % of the actuation pressure. A completely different situation arises in this case. After actuation, the pressure at the inlet flange of the safety valve drops significantly due to the pressure drop, causing the valve lift to drop as well. Then both valve lift and pressure start a high-frequency oscillation at approximately 50 Hz. Due to this oscillation, the valve is hardly ever fully opened, and there are even periods when it is almost closed. The relief stream will be much lower than the specified one, and the protection of the vessel will probably fail. Compliance with the 3 % rule is often complicated by the fact that the dischargeable mass flow has to be considered, and not the mass flow to be discharged. In these cases, it might be useful to limit the valve lift in a way that the opening of the safety valve limits the relief stream to the design value. However, this requires an exact knowledge of the possible relief scenarios, and furthermore, one should have a look into the guidelines and see whether this lift stop is applicable or not. Often, a joint effort of process and layout engineers can achieve a better location of the safety valve so that the inlet line

pressure drop is reduced. Some examples are described in [231]. Meanwhile, even the 3 % criterion is no longer regarded as conservative and is supposed to be replaced [240]. The 10 % pressure drop criterion for the outlet line is an arbitrary choice. Certainly, when the back pressure increases, there will be a point where it is no longer possible for the safety valve to operate properly. The safety valve will start chattering [232]. It depends on the safety valve construction at which point this behavior occurs. The 10 % criterion can be taken as a conservative value given in the particular guidelines. Cases can occur where it is more or less not possible to keep the 10 % pressure drop in the outlet line, e. g. at relatively low actuation pressures. Using bellows, it is possible to extend the pressure drop limit in the outlet line to 30 %. Figure 14.19 illustrates how the bellows work. On the left-hand side, a conventional safety valve is shown. The back pressure directly acts on the sealing face, which counteracts the simple opening of the safety valve due to the pressure load of the protected equipment. On the right-hand side, the safety valve has a bellows, which is in principle a gasket that prevents the back pressure from acting on the sealing face. Instead, the pressure to be overcome is the ambient pressure (see the open hole in the upper part), which is in most cases much lower and therefore improves the situation.

Figure 14.19: Safety valve sketch with and without bellows.

Bellows are also recommended if a significant part (≈ 5 %) of a liquid relief flow will flash. Furthermore, the moving parts of the safety valve are protected from the potentially corrosive process fluid. Another option is again the above-mentioned lift stop. It should be mentioned that in case of liquid relief the outlet line should be designed with a slope and, if possible, without pockets, to avoid the formation of a liquid column downstream of the safety valve.


It is not always the case that only vapor is relieved without any phase change. In these cases, there is a maximum mass flux, but speed of sound does not occur in the minimum cross-section area. The algorithm described in Figure 14.16 can still be applied, but the sensitivities of the pressure ratio are much larger. The following cases can be distinguished:
– Two-phase flow in the safety valve, already at the inlet: If there are both vapor and liquid in the relief stream upstream of the safety valve, the necessary cross-flow area of the safety valve could in the past be determined by simple addition of the cross-flow areas necessary for the vapor and the liquid phase alone. At present, most of the guidelines recommend the ω-method [220, 221].
– The two-phase flow occurs in the outlet line due to flashing: If the evaporated mass flow downstream of the safety valve is less than 50 % of the entire mass flow, the safety valve should be designed for liquid relief. Often, the pressure in the minimum cross-section area is the boiling pressure of the liquid, as the first bubble would occupy much more space than the same mass as a liquid. Therefore, it is clear that the maximum mass flow occurs at conditions where the whole flow is liquid. The flashing in the outlet line makes it necessary to apply the pressure drop correlations for two-phase flow, maybe even as an EXCEL file where the line is divided into segments, if the pressure drop is so large that changes due to the phase equilibrium take place. The use of bellows is strongly recommended in this case.
– Condensation in the safety valve: In Chapter 8.2 it is explained that there are substances, especially "large" ones with more than three C-atoms, which form liquid droplets in the compressor, as the compression yields a temperature which is lower than the boiling temperature of the stream.
The other way round, this means that the substances which do not show liquid formation in a compressor are prone to form droplets at pressure relief. Astonishingly, water belongs to these substances when a saturated vapor is expanded: starting at the dew point line, the temperature drop during expansion is larger than the drop of the dew point temperature. Also, speed of sound is not reached in these cases.
– There are also cases where liquid is discharged without flashing, which frequently takes place when pressure relief occurs due to thermal expansion in a vessel completely filled with liquid. The calculation of this case can be done with the simple Bernoulli equation, giving

$$w_2 = \sqrt{\frac{2(p_1 - p_2)}{\rho}}$$

where the velocity in the vessel has been set to w1 = 0. In this case, w2 is the velocity in the minimum cross-section area. The maximum mass flux can be calculated with the equation of continuity

$$\left(\frac{\dot m}{A}\right)_{\max} = \alpha\,\rho\,w_2$$

where α is the constriction coefficient, considering the conditions outlined in Figure 14.15. For liquids, α = 0.6 is the usual approach. Finally, the outlet line must be thoroughly considered, as it has the potential for a number of serious mistakes. One of them is that its design is postponed to the last possible due date. After the piping of the plant has been almost finished, the only way is that the outlet line meanders through the plant to somehow reach the desired location. The assumptions made to calculate the built-up back pressure are then obsolete, and often the safety valve itself must be redesigned. As well, the repulsive forces must be checked for the whole course of the outlet line. Moreover, the material for the outlet line has to be carefully chosen because of the expansion of gases and the related Joule–Thomson effect. Often, there is a significant drop of the temperature, leading to brittleness of the material. The mechanical design with respect to thermal expansion of the pipe should be double-checked. If crystallizing substances are relieved, heat tracing should be considered [255].
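The non-flashing liquid relief relations above (Bernoulli equation plus constriction coefficient) can be sketched in a few lines; the example numbers for a water-like liquid are illustrative:

```python
import math

def liquid_relief_flux(p1, p2, rho, alpha=0.6):
    """Maximum mass flux for non-flashing liquid relief in kg/(m^2 s).

    p1, p2 : upstream / downstream pressure in Pa
    rho    : liquid density in kg/m^3
    alpha  : constriction coefficient (0.6 is the usual value for liquids)
    """
    w2 = math.sqrt(2.0 * (p1 - p2) / rho)   # velocity in the minimum cross-section
    return alpha * rho * w2                  # (m_dot/A)_max = alpha * rho * w2

# e.g. a water-like liquid, 10 bar relieved against 1 bar (illustrative)
flux = liquid_relief_flux(10e5, 1e5, 1000.0)
```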

14.2.7 Two-phase-flow safety valves

Things become even more complicated when there is a two-phase flow through the safety valve. Many safety valve designs neglect the fact that the pressure relief is not a smooth equilibrium process as assumed if only vapor is relieved. If vapor is relieved from a vapor-liquid equilibrium, a bubbling-up of the liquid will take place, as it might happen that more vapor is formed than can escape through the surface of the liquid. The reason is that the rising velocity is limited [225], especially when the viscosity of the liquid is high (> 100 mPa s) or if there is a foam layer on the liquid. The liquid rises, and in case the liquid level at pressure relief conditions was high enough, it will be partially relieved through the safety valve as well. For the mentioned systems with high viscosity or foam, this happens even at low liquid levels [226]. In everyday life, this effect is known as the champagne effect.¹⁹ For the design of the safety valve itself, one should be aware that the entrained liquid covers part of the opening area. Therefore, less gas can be relieved, and less energy can be removed by evaporation of the liquid. Larger opening areas are necessary. In Figure 14.20, a criterion is given to decide whether only vapor or a two-phase flow goes through the safety valve [225]. The decisive quantity is the ratio between the

19 After having wasted large amounts of this valuable beverage due to underestimation of the champagne effect, the author claims that from the pressure relief point of view, the opening of a champagne bottle is by far too large.


Figure 14.20: Limiting level to avoid two-phase flow through the safety valve [225]. © Wiley-VCH Verlag GmbH & Co. KGaA. Reproduced with permission.

superficial vapor velocity uG0, calculated with the maximum dischargeable gaseous relief stream (Equation (14.13)), and the rising velocity of the bubbles u∞ (Equation (14.14)):

$$u_{G0} = \frac{\dot m_{out}}{\rho_V A} \tag{14.13}$$

$$u_\infty = K_\infty\,\frac{\left[\sigma_L\,g\,(\rho_L - \rho_V)\right]^{0.25}}{\rho_L^{0.5}} \tag{14.14}$$

A is the cross-flow area for the gas in the vessel, and the factor K∞ can be taken from Figure 14.20 according to the model chosen for the description of the bubbling-up. The diagram is valid for vertical vessels with 1 < H/D < 3. If it is applied to horizontal vessels, it can be estimated that the result for the limiting liquid level is approximately 5 % too low. Typical values for u∞ are 0.2 m/s for low viscosities and 0.05 m/s for high viscosities. Reasonable values for the limiting liquid level φlim are approx. 70 % for systems with low viscosity, approx. 20 % for systems with high viscosity and approx. 10 % for systems with foam.²⁰ If the liquid level is below the limiting one at the moment just before the actuation, it can still be assumed that vapor flows through the safety valve. If the liquid level is above the limiting one, things are more sophisticated. A two-phase flow consisting of both vapor and liquid is relieved; to avoid further pressure rise, its volume flow must be equal to the one calculated as vapor flow only. It is difficult and a bit arbitrary to assign the fractions of vapor and liquid. The simplest way is to set the volume fractions according to the vessel content at actuation [225]. This is very conservative; it is clear that the volume fraction of the vapor will be somewhat larger due to its lower density. In [226] a relationship is discussed which can mitigate this conservative assumption.

20 φ = VL/Vvessel at the beginning of the pressure relief.
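Equations (14.13) and (14.14) can be scripted directly. Note that K∞ must be read from Figure 14.20 for the bubbling-up model chosen; the default of 1.53 used below is the familiar churn-turbulent constant from the literature and is an assumption here, as are the water-like example properties:

```python
def superficial_vapor_velocity(m_dot_out, rho_V, A):
    """Eq. (14.13): superficial vapor velocity u_G0 in the vessel, m/s."""
    return m_dot_out / (rho_V * A)

def bubble_rise_velocity(sigma_L, rho_L, rho_V, K_inf=1.53, g=9.81):
    """Eq. (14.14): rising velocity u_inf of the bubbles, m/s.

    K_inf depends on the bubbling-up model (Figure 14.20);
    1.53 (churn-turbulent) is assumed here for illustration.
    """
    return K_inf * (sigma_L * g * (rho_L - rho_V)) ** 0.25 / rho_L ** 0.5

# water-like liquid: sigma = 0.06 N/m, rho_L = 1000 kg/m^3, rho_V = 5 kg/m^3
u_inf = bubble_rise_velocity(0.06, 1000.0, 5.0)
```

For these properties, `u_inf` comes out near the typical 0.2 m/s quoted above for low-viscosity systems; comparing it with uG0 then decides whether two-phase flow through the safety valve must be expected.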

According to Grolmes [257], the maximum vapor volume fraction of the relief stream αmax is related to the average vapor volume fraction in the vessel ᾱ via

$$\alpha_{\max} = \frac{2\bar\alpha}{1 + \bar\alpha} \tag{14.15}$$

This maximum vapor volume fraction is a good approximation as long as the drift velocity between the two phases is not too large. The equation can be interpreted as follows. For a one-phase vessel content, ᾱ = 0 or ᾱ = 1, it turns out that αmax = ᾱ, as expected. For 0 < ᾱ < 1, it is always αmax > ᾱ, e. g. for ᾱ = 0.5 one gets αmax = 0.667. The size of the pressure relief device can then be determined with the ω-method mentioned above [220, 221]. Again, the solution can only be obtained iteratively. After choosing a certain size, the calculation must be repeated with the new dischargeable mass flow. The calculation is only finished when the phase ratio no longer changes from step to step. The procedure is thoroughly described in [225, 227]. Surprisingly, the two-phase flow procedure does not need to be applied in the fire case [250]. Often, there is equipment which is completely filled with liquid (e. g. filters). In this case, the first actuation is caused by liquid expansion. After a time, the boiling point at relief pressure is reached with the equipment still completely filled with liquid. Then, vapor and liquid have to be relieved simultaneously, giving two-phase flow through the safety valve with a large necessary cross-flow area. The reason is that the generated vapor volume must be relieved as liquid to a large extent, until enough vapor space is available for phase separation. In fact, according to [250] the time required to heat the system from the first actuation to the full opening of the safety valve at the relieving conditions (121 % of the actuation pressure) is large enough to cover the interim time with two-phase flow. In this way, full disengagement of vapor and liquid is realized at relieving conditions, and the assumption of one-phase vapor venting is justified for the design.
Moreover, in the fire case the boiling occurs close to the walls of the vessel (wall heating) and not inside the vessel (volumetric heating), as is the case for exothermic chemical reactions. Therefore, the bubbles are by far not homogeneously distributed, making disengagement easier. For foaming or reactive systems, this simplifying consideration should not be applied.
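The Grolmes relation, Equation (14.15), is a one-liner; a minimal sketch:

```python
def alpha_max(alpha_bar):
    """Eq. (14.15): maximum vapor volume fraction of the relief stream
    from the average vapor volume fraction in the vessel (Grolmes)."""
    return 2.0 * alpha_bar / (1.0 + alpha_bar)
```

It reproduces the interpretation given above: the one-phase limits map onto themselves, and any intermediate vessel content gives an enriched vapor fraction in the relief stream, e. g. `alpha_max(0.5)` ≈ 0.667.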

14.3 Flame arresters

Flame arresters are devices which prevent flames from propagating through nozzles or pipes (Figure 14.21). They are used to stop the spread of open fires, to limit the spread of explosions and to confine fires within a certain location. Flame arresters leave only small channels open where the flow can pass. Their large surface absorbs and distributes the heat transported by the flow. The gas is cooled down below its ignition temperature so that the flame cannot proceed. For the dimensioning of a flame arrester, the Maximum Experimental Safe Gap plays a key role. It is defined


Figure 14.21: Internal structure of the cross-flow area of a flame arrester from Braunschweiger Flammenfilter GmbH.

as the largest width of a gap of 25 mm length which surely prevents an ignited mixture of gas and air from igniting a second gas mixture through the gap. It depends on the properties of the gas; therefore, the gases are divided into so-called explosion groups. For instance, propane and many other petrochemical products belong to the explosion group IIA, whereas hydrogen is assigned to group IIC, requiring a much smaller Maximum Experimental Safe Gap (see Table 14.3).

Table 14.3: Examples for the Maximum Experimental Safe Gap. More values for pure components can be found in [317].

Explosion Group   Component Example       Maximum Experimental Safe Gap/mm
I                 methane                 ≥ 1.1
IIA               propane, heptane        1.1 … 0.9
IIB               ethanol                 0.9 … 0.5
IIC               hydrogen, acetylene     < 0.5

A gap which is too small just causes a larger pressure drop, whereas channels which are too large have practically no effect. A short description of demonstrations can be taken from [287]. Flame arresters should be located as close as possible to the place where the ignition is expected so that the flame front cannot accelerate. If it is possible that liquid or condensate occur in the vent line, one should take into account that a flame arrester might act as a low point. In this case, an installation in a vertical line makes sense. Moreover, a flame arrester must be easily accessible for inspection and maintenance.


14.4 Explosions

If an exothermic reaction is very fast, it might happen that the heat of reaction cannot be removed. The temperature elevation causes the reaction rate to further increase, which in turn gives a higher temperature again. Finally, an explosion takes place, which is in principle a reaction of a substance in a very short period of time (usually within milliseconds). Starting from a certain point, the reaction finally covers any amount of the substance which is in close contact. In contrast to a comparably slow fire event, there is no chance to act and mitigate the consequences; the only thing an engineer can do is to take all precautions to prevent an explosion under all circumstances. Concerning the initiation of the ignition, we distinguish between induced ignition, where energy is supplied from an external source (e. g. sparks, hot surfaces, flames, hot particles), and self-ignition, where the substance is heated due to chemical reactions without sufficient heat removal. In most cases, explosions happen due to combustion reactions. Therefore, it takes three factors for an explosion to happen:
– flammable material
– oxygen (air)
– ignition source
However, oxygen can be replaced by other gases, e. g. chlorine for the combustion of hydrogen. An explosion is coupled with a rapid expansion of gases, which has a large destructive potential. One can distinguish between flash fires, deflagrations, and detonations. Flash fires are defined as combustion reactions with rapidly moving flame fronts. The flame velocity is below the speed of sound. Glass panes break, and people are injured. There is a continuous transition to deflagrations, where the flame front is also slower than the speed of sound but can be heard. A deflagration can destroy buildings, and there are often people who are seriously injured or even die. In detonations, the flame front velocity is faster than the speed of sound [233]. There is extensive damage in a wide area, as well as casualties.
For the discussion of possible explosions in a plant, we distinguish between the following zones:
– Zone 0: an explosive atmosphere occurs more than 50 % of the operation time;
– Zone 1: an explosive atmosphere occurs frequently, at least 30 min per year;
– Zone 2: during normal operation, an explosive atmosphere does not occur. By accident it is possible, but less than 30 min per year.
Flammable materials can be gases, liquids or solids. Flammable gases often consist of carbon and hydrogen. They only require small amounts of energy to react with oxygen.


Flammable liquids are often hydrocarbon compounds like ethers, ketones or alcohols [291]. For being ignited, they must first be evaporated. This is of course related to the vapor pressure or, respectively, the liquid saturation pressure for mixtures. For this reason, the flash point is one of the most important characteristic numbers which indicate and classify the hazardousness of a substance. The flash point is the lowest temperature where the vapor pressure of a flammable liquid is large enough to generate ignitable mixtures in ambient air. When the ignition source is removed, the substance stops burning. The flash point of mixtures can be calculated using a concept of Gmehling and Rasmussen [285]. The principle is that the flash point of a pure substance corresponds to its partial pressure Li in the ambient air. The flash point of a component i is obtained at a temperature TFP,i where the vapor pressure psi(T) equals Li:

$$p_i^s(T)/L_i(T) = 1 \tag{14.16}$$

For a mixture, an analogous relationship is set up, adding up the terms pi/Li for the particular flammable components, where pi denotes the partial pressure of component i at saturation state:

$$\sum_i p_i(T)/L_i(T) = 1 \tag{14.17}$$

Note that Equation (14.17) reduces to Equation (14.16) for a pure component. Non-combustible components are not counted in Equation (14.17); however, they have an influence by reducing the relevant partial pressures. There is a certain temperature dependence of Li(T). It can be expressed by

$$\frac{L_i(T)}{\text{kPa}} = \frac{L_i(T_{FP})}{\text{kPa}} - 0.182\,\frac{\text{kJ}}{\text{mol K}}\,\frac{T - T_{FP}}{H_{u,i}} \tag{14.18}$$

where the index FP denotes the flash point and Hu,i is the lower heating value²¹ of component i. The procedure is as follows:
1. Inquire the flash points of the combustible components and their lower heating values.
2. Convert them each to an Li using Equation (14.16).
3. Estimate the flash point temperature T for the mixture.
4. Calculate Li(T).
5. Evaluate the partial pressures in the vapor of the various components at the boiling point.
6. Check Equation (14.17); if the sum is > 1, start over with a lower T estimate and vice versa.

21 See Glossary.

It should be clear that any appropriate phase equilibrium calculation can be used, not only UNIFAC as used in [285]. The following example shall illustrate the procedure and the capability of the method.

Example
Estimate the flash point of a ternary system consisting of ethanol (1) – toluene (2) – ethyl acetate (3). The given values are

No.   Component       xi/(mol/mol)   TFP,i/K   LFP,i/kPa   Hu,i/(kJ/mol)
1     Ethanol         0.494          285.95    3.526       1278.6
2     Toluene         0.247          278.71    1.365       3774.4
3     Ethyl Acetate   0.259          267.59    2.353       2009.9

Solution
Note that for a solution a thermodynamic model and a simulation program are necessary. The evaluation of the partial pressures at saturation is not performed below; only the results are given. Using pi = p · yi (y in mol/mol), the iteration history is

TFP,mix/K   p/kPa    y1         y2         y3         L1/kPa     L2/kPa     L3/kPa     Σ pi/Li
273         2.924    0.34727    0.180898   0.471832   3.527843   1.365275   2.352510   1.26171
268         2.15     0.330357   0.183054   0.486589   3.528555   1.365516   2.352963   0.934125
269.06      2.297    0.333949   0.182615   0.483436   3.528404   1.365465   2.352867   0.996556
269.12      2.306    0.334152   0.18259    0.483258   3.528396   1.365462   2.352861   1.000378
269.11      2.304    0.334118   0.182594   0.483288   3.528397   1.365463   2.352862   0.999525
269.116     2.3051   0.334138   0.182592   0.483270   3.528396   1.365463   2.352862   0.999994

The experimental value for the flash point of the mixture is 270.37 K [285].
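The temperature correction, Equation (14.18), and the flash point criterion, Equation (14.17), can be checked against the last iteration row. The sketch below takes the saturation pressure and the vapor mole fractions as given (in practice they come from the thermodynamic model); function names are illustrative:

```python
def L_i(T, T_FP, L_FP, Hu):
    """Eq. (14.18): temperature-corrected L_i in kPa.

    T, T_FP in K; L_FP in kPa; Hu (lower heating value) in kJ/mol.
    """
    return L_FP - 0.182 * (T - T_FP) / Hu

def flash_point_sum(T, p, y, comps):
    """Eq. (14.17): sum of p_i/L_i; equals 1 at the mixture flash point.

    p in kPa; y: vapor mole fractions; comps: (T_FP, L_FP, Hu) per component.
    """
    return sum(p * yi / L_i(T, *c) for yi, c in zip(y, comps))

# Data of the ethanol (1) - toluene (2) - ethyl acetate (3) example:
comps = [(285.95, 3.526, 1278.6), (278.71, 1.365, 3774.4), (267.59, 2.353, 2009.9)]
# last iteration row: T = 269.116 K, p = 2.3051 kPa
s = flash_point_sum(269.116, 2.3051, [0.334138, 0.182592, 0.483270], comps)
```

At the converged temperature, `s` reproduces the final table entry of Σ pi/Li ≈ 1.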

In contrast, the fire point is essentially the same, but the substance continues to burn after removal of the ignition source. At the autoignition temperature, the substance burns even without an external source. According to this value, the particular substances are classified in groups, as indicated in Table 14.4. Also, mists can explode. They consist of small droplets distributed in the vapor with a large surface area. They behave similarly to flammable gases. Flammable solids are, in most cases, dusts [291]. They can occur as layers or as clouds. Dust layers are often smouldering on hot surfaces. When such a layer is stirred up, it can explode. A dust cloud can explode immediately after contact with an ignition source.

Table 14.4: Temperature classes.

Temperature class   Range of autoignition temperature   Example
T1                  > 450 °C                            hydrogen 536 °C
T2                  300–450 °C                          ethanol 363 °C
T3                  200–300 °C                          diesel 205 °C
T4                  135–200 °C                          diethyl ether 160 °C, acetaldehyde 140 °C
T5                  100–135 °C                          no examples
T6                  85–100 °C                           carbon disulfide 90 °C (only one)

Explosions take place under certain conditions; there is a lower and an upper explosion limit. Between these limiting concentrations, a gas can ignite and explode. The explosion limits refer to air as the oxygen-containing gas. They are temperature- and pressure-dependent. Moreover, there are substances where the upper explosion limit is missing, meaning that they are explosive even without the presence of oxygen. Examples are ethylene and ethylene oxide. Special care must be taken for dust.

The lower explosion limit (LEL) is the lowest concentration of a gas or a vapor in air where an ignition source (e. g. flame, heat) causes a flash of fire. Below the LEL there is not enough fuel to develop an explosion, meaning that concentrations lower than the LEL are too lean to burn. The LEL generally decreases with increasing temperature and increases slightly with increasing pressure [241]. For example, methane has an LEL of 4.4 vol. %²² at t = 138 °C. At t = 20 °C, the LEL is 5.1 vol. %. Below these concentrations, an explosion cannot take place. The upper explosion limit (UEL) is the highest concentration of a gas or a vapor where an ignition or explosion is possible. Mixtures with concentrations higher than the UEL are too rich to burn. The UEL increases with increasing temperature and increasing pressure [241]. There is a mixing rule for the LEL according to Le Chatelier:

    LEL_mix = (∑_i x_i / LEL_i)^(−1) ,                    (14.19)

22 vol. % as a concentration unit is simply annoying for engineering purposes. It is not exactly defined how it is to be interpreted, and in principle it is temperature-dependent. For gases, it can be assumed that they behave ideally, so that the volume concentration is equal to the mole concentration. For liquids, it can be assumed that amounts corresponding to the volume concentrations of the particular components are mixed and that the excess volume is negligible.

where x is the mole concentration. For the UEL, it is sometimes recommended to use Equation (14.19) as well, but in principle there is no assured mixing rule. Values for LEL and UEL are given in [317], as well as some more calculation examples. In practical applications like exhaust air lines, a safety margin (usually 50 % of the LEL) must be considered, and analytical surveillance must ensure that this limit is kept. The detector most often used is the flame ionization detector (FID). However, it only detects C-atoms in the mixture; it does not distinguish between the components. Therefore, one reference component is chosen to set a conservative standard, as the following example shows.

Example
A pollutant flow (t = 20 °C, p = 1 bar) of 2.4 kg/h acetaldehyde (1, C2H4O, M1 = 44.053 g/mol, LEL1 = 4 mol % = 73.25 g/m³) and 3.8 kg/h cyclohexanol (2, C6H12O, M2 = 100.161 g/mol, LEL2 = 1 mol % = 41.64 g/m³) is transported with an exhaust air stream to the exhaust air treatment. It must be taken into account that a flame ionization detector cannot distinguish between components which contain only C, H, and O, as the combustion products are the same. How much exhaust air flow is necessary to make sure that the resulting stream contains less than 50 % of the LEL according to a conservative standard?

Solution
Considering the C-atoms, the LELs of the two components can be interpreted as follows:

    LEL1 = 2 ⋅ 12.01 g C/mol / 44.053 g/mol ⋅ 73.25 g/m³ = 39.94 g C/m³
    LEL2 = 6 ⋅ 12.01 g C/mol / 100.161 g/mol ⋅ 41.64 g/m³ = 29.96 g C/m³

Thus, component 2 (cyclohexanol) should be the reference component with the lowest LEL per C-atom. In the mixture, the detector will identify

    ṁ_C = 2 ⋅ 12.01 g C/mol / 44.053 g/mol ⋅ 2.4 kg/h + 6 ⋅ 12.01 g C/mol / 100.161 g/mol ⋅ 3.8 kg/h = 4.04 kg C/h

Therefore, the stream must be diluted with air to a volume flow of

    V̇_air = ṁ_C / (0.5 ⋅ LEL2) = 4.04 kg C/h / (0.5 ⋅ 29.96 g C/m³) = 270 m³/h
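The calculation above can be reproduced with a short script. The component data are taken from the example; the generalization to an arbitrary component list is only a sketch and no substitute for a validated safety calculation:

```python
# Conservative dilution air flow for a FID-monitored exhaust air line.
# Reproduces the worked example above; NOT a validated safety calculation.

M_C = 12.01  # molar mass of carbon, g/mol

# component: (C atoms per molecule, molar mass g/mol, LEL g/m3, mass flow kg/h)
components = {
    "acetaldehyde": (2, 44.053, 73.25, 2.4),
    "cyclohexanol": (6, 100.161, 41.64, 3.8),
}

# LEL expressed per carbon mass, g C/m3 (what the FID effectively "sees")
lel_c = {name: n_c * M_C / M * lel
         for name, (n_c, M, lel, _) in components.items()}

# conservative reference: the component with the lowest LEL per C-atom
reference = min(lel_c, key=lel_c.get)

# total carbon mass flow detected by the FID, kg C/h
m_dot_c = sum(n_c * M_C / M * m_dot
              for (n_c, M, lel, m_dot) in components.values())

# required air flow to stay below 50 % of the reference LEL, m3/h
# (kg/h divided by g/m3 gives 1000 m3/h, hence the factor 1000)
v_dot_air = m_dot_c * 1000.0 / (0.5 * lel_c[reference])

print(reference)          # cyclohexanol
print(round(v_dot_air))   # ~270 m3/h
```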

Most chemical processes do not operate with air but rather with mixtures that can contain arbitrary amounts of oxidizing substances. Figure 14.22 shows an example of a flammability diagram for the system methane/nitrogen/oxygen. For a given temperature and pressure, the region where explosions can happen is dark-colored. Additionally, some useful straight lines are shown.


Figure 14.22: Explosion regions of the system methane/nitrogen/oxygen. © Power.corrupts/Wikimedia Commons/CC BY-SA-3.0 https://creativecommons.org/licenses/by-sa/3.0/deed.en.
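As a quick sanity check, the stoichiometric methane concentration shown in Figure 14.22 can be computed directly from the combustion stoichiometry; the script below is a back-of-the-envelope sketch:

```python
# Stoichiometric fuel concentration for CH4 + 2 O2 -> CO2 + 2 H2O.
# Back-of-the-envelope check of the stoichiometric line in Figure 14.22.

o2_per_fuel = 2.0          # mol O2 per mol CH4

# no nitrogen present: mixture of 1 mol CH4 and 2 mol O2
x_ch4_pure_o2 = 1.0 / (1.0 + o2_per_fuel)

# in air: each mol O2 brings along approx. 79/21 mol N2
x_o2_air = 0.21
x_ch4_air = 1.0 / (1.0 + o2_per_fuel / x_o2_air)

print(f"{x_ch4_pure_o2:.1%}")  # ~33.3 % (no nitrogen)
print(f"{x_ch4_air:.1%}")      # ~9.5 % (stoichiometric in air)
```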

First, there is the stoichiometric line, where there is as much oxygen in the mixture as necessary for the complete oxidation of the methane. According to the reaction

    CH4 + 2 O2 → CO2 + 2 H2O ,

2 mol oxygen are needed for the combustion of 1 mol methane, meaning that the mole concentration of methane is approximately 33 % in case there is no nitrogen. The line shows all mixtures where this ratio is kept. Explosions close to the stoichiometric line are the most violent ones. Second, the air line is shown, where the ratio between nitrogen and oxygen is approx. 79/21, as it is in air, independent of the methane concentration. The intersections of this line with the limits of the explosion region indicate the LEL and the UEL, respectively. For process control, the LOC (limiting oxygen concentration) line is the decisive one. It indicates the lowest oxygen concentration where the explosion region is touched. Independent of the concentrations of the other substances, staying below the LOC ensures that an explosion does not take place. The oxygen concentration is relatively easy to supervise. The addition of an inert gas (usually nitrogen) increases the LEL and lowers the UEL. The knowledge of the explosion limits enables the engineer to determine the necessary flow of inert gas. It has to be assured that this flow can be delivered in any case; e. g. the failure of a compressor or the breakdown of the electrical energy supply must not cause a lack of inert gas delivery. Often, gas cylinders filled with nitrogen under pressure are provided as an independent emergency supply for a limited time. However, it must be carefully determined how much gas is really in the gas cylinders, so the filling conditions must be well defined. When the temperature decreases, for instance on a cold winter day, the pressure inside the cylinders will drop. The coldest winter day is therefore the basis for the dimensioning of the inert gas supply. At actuation, the temperature in the cylinders decreases further due to the expansion, which in turn causes the pressure in the cylinders to drop even more rapidly. On the other hand, the temperature decreases more slowly than thermodynamics suggests, as the steel of the vessel itself, with its large mass, acts as a heat storage and transfers heat to the gas by natural convection. With time, the wall temperature decreases as well, and this is a decisive issue for choosing the material of the cylinders. It must be evaluated in advance how much inert gas can be delivered in an emergency case. Furthermore, it has to be considered that the temperature downstream of the outlet valve decreases further due to the Joule–Thomson effect. But the lowest temperature occurs inside the valve because of the enthalpy loss due to the acceleration to the speed of sound. Figure 14.23 shows the course of the various temperatures of interest [234].

Figure 14.23: Temperature course of a nitrogen gas cylinder at emergency inertization. Courtesy of Wystrach GmbH.
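A first orientation for the dimensioning of such a cylinder bank can be obtained with the ideal gas law. The cylinder size, filling pressure and temperatures below are assumed example values; real-gas behavior, the heat storage of the steel wall and the Joule–Thomson effect discussed above are deliberately neglected:

```python
# Rough ideal-gas estimate for a nitrogen cylinder used for emergency
# inertization. All numbers are assumed example values; real-gas
# compressibility and the effects discussed in the text are neglected,
# so this is only a first orientation.

R = 8.314     # J/(mol K)
M_N2 = 0.028  # kg/mol, molar mass of nitrogen (approx.)

# assumed filling conditions of one cylinder
p_fill = 200e5    # Pa, filled to 200 bar
t_fill = 293.15   # K, filled at 20 degC
v = 0.05          # m3, 50 L cylinder

n = p_fill * v / (R * t_fill)   # mol N2 in the cylinder (ideal gas)
mass = n * M_N2                 # stored nitrogen mass, kg

# on a cold winter day the inventory is unchanged, but the pressure drops,
# which is why the coldest day governs the dimensioning
t_cold = 253.15                 # K, -20 degC
p_cold = n * R * t_cold / v     # Pa

print(round(mass, 2))           # -> 11.49 kg
print(round(p_cold / 1e5, 1))   # -> 172.7 bar
```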

15 Digitalization

Gökce Adali

15.1 Digital transformation

In recent years, digital transformation has gained strategic significance as a critical agenda item for top-level management. Digital transformation refers to the incorporation and utilization of digital technologies to fundamentally transform an organization’s operations, business models and customer experiences. It involves integrating digital technologies such as artificial intelligence, machine learning, the Internet of Things (IoT), big data analytics and cloud computing into all aspects of an organization’s operations, from its core business processes to its customer interactions as well as supply chain management (see Glossary). The key objectives that an organization seeks to achieve with digital transformation are to enhance business capabilities, also by creating new business models, to advance the customer experience, to improve efficiency by optimizing operations and consequently to reduce costs [354, 355]. Digital transformation requires not only a shift in mindset and an eagerness to adopt new technologies and approaches to business, but also significant changes to an organization’s culture, structure and processes to fully realize the benefits of digital technologies. If an organization resists digital transformation, several important benefits and opportunities are likely to be missed in the following key areas:
– Organizations that embrace digital transformation are often able to gain a competitive advantage by leveraging technologies to streamline their operations and create innovative products and services. By resisting digital transformation, an organization may fall behind its competitors and struggle to keep up with changing market demands.
– Digital tools and platforms can contribute significantly to improving the efficiency of an organization’s processes and workflows.
Automating repetitive tasks, reducing manual data entry and enabling real-time communication and collaboration allow organizations to operate more efficiently and to make better use of their resources.
– Digital transformation is also key to enhancing the customer experience by enabling organizations to better understand and engage with their customers. By leveraging data analytics, organizations are able to personalize their interactions with customers, provide more targeted marketing messages and create a more seamless customer journey across multiple touchpoints.

Gökce Adali, thyssenkrupp Uhde GmbH, 65812 Bad Soden, Germany https://doi.org/10.1515/9783111028149-015

Digital transformation should be regarded as a process that brings radical changes to organizations, resulting in the identification and creation of further improvement opportunities, rather than as just a single step for upgrading specific functions of an organization. Additionally, digital transformation is a phenomenon that directly affects industry and society; it is not limited to being just an organization-centric process. It is also worth noting how digital transformation differs from digitization and digitalization. While digitization is concerned with automated routines and tasks, such as the conversion of analog into digital information, digitalization is described as the addition of digital components to product or service offerings [356].

15.2 Digitalization and sustainability

As mentioned earlier, digitalization refers to the integration of digital tools, systems and processes into every aspect of an organization, from production and supply chain management to customer service and marketing, in order to transform business operations and services. For instance, digitalization enables organizations to collect and analyze large amounts of data in real time, which greatly supports decision making, automates processes and enhances operational efficiency. This leads to innovative business models, products and services that were not previously feasible or even possible to realize with conventional approaches. One crucial point to be regarded is that digitalization has significant implications for the workforce. Organizations need to empower their employees to acquire new skills and expertise in areas such as data analytics, software development and digital marketing by investing in upskilling and reskilling programs, ensuring that the employees are well prepared for the shift towards digitalization.

Why does digitalization go hand in hand with sustainability? Sustainability and digitalization are often seen as complementary and mutually reinforcing for several reasons:
– Digitalization plays an important role in improving efficiency and reducing emissions in various industries, which is an essential aspect of sustainability. As an example, digitizing supply chain management can reduce transportation costs and emissions, while smart building systems optimize energy use and reduce waste.
– Digitalization generates vast amounts of data that can be used to monitor and optimize resource consumption, emissions and other sustainability metrics by identifying inefficiencies, tracking progress and making data-driven decisions.
– Digitalization drives innovation in sustainable technologies such as renewable energy, energy storage and electric vehicles.
By digitizing and automating production processes, organizations are able to limit or better control emissions, improve performance and quality, and develop new products that are more sustainable.




– Digitalization facilitates collaboration and information sharing between stakeholders, which is critical for implementing sustainability initiatives. As an example, digital platforms can connect suppliers, manufacturers and customers to share data and insights that improve supply chain sustainability [357, 358].

For further reading, Feroz et al. [356] list numerous related articles involving case studies that show how digital technologies are able to transform the different aspects of environmental sustainability such as pollution control, waste management, sustainable production and urban sustainability.

15.3 Digitalization in process industry and green transformation

The process industry involves the production of goods through chemical, physical or biological processes. The chemical, petrochemical and energy industries have traditionally been associated with high levels of resource consumption, waste generation and environmental impact. Digital technologies such as sensors, automation systems, machine learning, artificial intelligence and data analytics are promising means in today’s process industry to improve and optimize industrial processes. By capturing and analyzing data from industrial processes, digital technologies help to identify inefficiencies, enable predictive maintenance and support real-time decision making, consequently enabling greater efficiency and productivity, optimizing energy and resource use, reducing waste and emissions, contributing to enhanced circularity and improving the sustainability performance of the process industry. In addition to improving sustainability performance, digitalization also supports the transition to a low-carbon economy by enabling the integration of renewable energy sources, such as solar and wind power, into industrial processes [359, 360].

Digitalization contributes greatly to the green transformation of the process industry by providing tools and technologies that enable more efficient, sustainable and environmentally friendly operations. A few examples are listed below:
– Digitalization helps to improve energy efficiency in the process industry by providing real-time monitoring and control of energy consumption. By using sensors, automation and data analytics, organizations can identify areas of energy waste and optimize processes to reduce consumption and emissions.
– Digitalization also enables the process industry to reduce waste and improve resource management.
For instance, by using digital tools to monitor and control production processes, companies can reduce material waste and minimize the need for resource-intensive cleaning and maintenance.
– Digitalization supports the transition of the process industry to renewable energy sources. By using digital tools to monitor and manage renewable energy systems,




companies can optimize energy production and storage and reduce reliance on fossil fuels.
– Digitalization promotes the circular economy by facilitating the reuse, recycling and repurposing of materials and products [357, 360].

15.4 Key terms explained

Many readers may already be aware of them, but it is worth briefly exploring some of the key buzzwords that have been gaining increasing attention in recent years within the technology and business worlds. While each of these concepts can be complex and versatile, this section aims to provide clear and concise explanations of what they mean and how they are relevant in today’s rapidly evolving landscape of technological innovation and digital transformation.

Industry 4.0

Industrial revolutions refer to the groundbreaking changes in the manufacturing, transportation and communication sectors that occurred during the past few centuries (Figure 15.1). These changes were characterized by significant advancements in technology, such as the invention of the steam engine, the development of the assembly line and the widespread use of automation. Industry 4.0, also known as the Fourth Industrial Revolution, represents a new era mainly characterized by the integration of advanced digital technologies such as artificial intelligence, robotics, big data analytics, the Internet of Things (IoT) and cloud

Figure 15.1: Four industrial revolutions. (Reproduced from Ref. [361].)


computing into the manufacturing and production processes in order to create “smart factories” that are more efficient, flexible and sustainable. Cyber-physical systems (CPS), which combine physical components with digital systems to monitor and control production processes in real time, are the core elements of Industry 4.0. These systems enable improvements in productivity, efficiency and product quality while reducing environmental impact and allowing for greater automation.

(Industrial) internet of things ((I)IoT)

The internet of things (IoT) refers to the network of physical objects, devices and sensors that are connected to the internet and can collect and exchange data. These objects can range from everyday devices such as smartphones, watches and home appliances to industrial equipment and vehicles. The IoT enables these objects to communicate with each other and with other systems, allowing for real-time monitoring, control and analysis of data. This data can be used to improve efficiency, reduce costs and enhance user experiences. One of the key features of the IoT is its ability to collect and analyze large amounts of data from a variety of sources. This data can be used to identify patterns and trends, make predictions and support decision-making. The IoT has significant implications for a wide range of industries, including healthcare, manufacturing, transportation and agriculture. In healthcare, for example, the IoT can be used to monitor patient health remotely, while in manufacturing it can be used to optimize production processes and improve supply chain management. While the IoT mainly addresses the consumer and commercial sector, the Industrial Internet of Things (IIoT) is aimed at industrial applications such as manufacturing and energy management; it refers to interrelated sensors, instruments and other devices that are connected with industrial computer systems. As the number of connected devices continues to grow, there is a need for strong cybersecurity measures to protect against potential data breaches and attacks. The development of new standards and protocols for IoT devices is also an ongoing area of research and development [362, 363].

Big data

In the context of digitalization, data is defined as the raw information acquired from various sources, such as sensors, control systems, machines and other digital devices. In the process industry, this data can include a wide range of information such as temperature, pressure, flow rate and other process parameters.

The often-heard saying “data is the new gold” or “data is the new oil” suggests that data has become as valuable as gold or oil in today’s world, due to its potential to generate insights and enable informed decision-making. Big data refers to massively large and complex data sets, which consist of structured (e. g. data in spreadsheets), semi-structured (e. g. graphs and trees) and unstructured data (e. g. images, audio and video) from a variety of sources that cannot be effectively processed or analyzed using traditional data processing methods. Big data is characterized by its volume, velocity, variety and value, hence the 4Vs. Volume refers to the large amount of data being generated and collected, while velocity refers to the speed at which data is generated and to the need for real-time analysis. Variety refers to the diversity of data types, such as text, images, video and audio. Value, finally, expresses that the significance of big data lies in the value that can be extracted from it rather than in its sheer volume, since raw data typically has a low value density [364]. To effectively manage and analyze big data, specialized tools and technologies are required, including distributed computing systems, machine learning algorithms and data visualization tools. These tools allow organizations to extract insights from big data and make data-driven decisions, leading to an improved competitive advantage.

But what happens with the unused data? The term “data graveyard” is often used to refer to a repository of data or information that is no longer relevant or useful to an organization, but is still being stored, taking up valuable storage space and resources. This data may be outdated, redundant or simply no longer necessary and may have been generated from previous projects, systems or processes that are no longer in use. Despite its lack of value, the data may be stored because of a lack of clear policies or procedures for deleting or archiving it.
The accumulation of unused or obsolete data can create a number of problems for organizations, including increased storage costs, decreased system performance and potential security risks if the data contains sensitive or confidential information. To avoid a data graveyard, it is important for organizations to establish clear policies and procedures for data management, including regularly reviewing and archiving or deleting data that is no longer needed. This helps to ensure that valuable storage resources are used efficiently and that sensitive information is properly secured and protected.

Moreover, the storage of big data can contribute to environmental pollution. The storage and processing of large amounts of data require significant amounts of energy, which can lead to increased carbon emissions and other environmental impacts. The primary source of energy consumption associated with data storage and processing is the electricity needed to power the data centers and servers that house and process the data. These data centers require a large amount of energy to maintain a constant temperature as well as to power the servers and cooling systems. This energy use can lead to increased greenhouse gas emissions and other environmental impacts, such as water usage and waste generation [398]. In addition to energy consumption, the disposal of electronic waste generated by data centers can also contribute to environmental pollution. This waste includes old servers, storage devices and other equipment that are no longer needed or have become obsolete. To minimize the environmental impact of big data storage and processing, organizations can adopt a number of strategies, such as using energy-efficient hardware, consolidating data centers and implementing virtualization and cloud computing technologies. Additionally, recycling and proper disposal of electronic waste can help to minimize the environmental impact of data storage and processing.

Smart data

A transition from large, complex datasets is often necessary in order to obtain more refined data, known as smart data, that contains a meaningful and actionable set of information which supports decision making. Such a transition requires intelligent processing, realized by employing advanced analytical tools supported by machine learning algorithms and artificial intelligence [365]. Smart data differs from big data in that it emphasizes data quality over quantity. Rather than attempting to process all available data, it involves selecting and evaluating only the most relevant data points. In the process industry, smart data can be used for a variety of purposes, including the creation of digital twins, predictive maintenance, process simulation, process performance optimization and advanced process control, which will be further detailed in the upcoming sections.

Data mining

Data mining is the process of discovering patterns, trends and insights in large and complex datasets. It involves using statistical and machine learning techniques to analyze data and identify relationships between variables. The data mining process typically involves the following steps [366]:
– Data collection and preparation: Data is collected from various sources, such as databases, spreadsheets and text files, and the most appropriate data samples are selected for modelling.
– Data preprocessing: The collected data is preprocessed to remove any noise, inconsistencies or irrelevant information that could affect the analysis.
– Data exploration: The preprocessed data is explored using visualization and statistical techniques to identify patterns and trends.
– Model building: A statistical or machine learning model is built using the preprocessed data to identify relationships between variables.
– Deployment: The model is deployed in the production environment and the results are monitored and analyzed to ensure its effectiveness.

In the process industry, data mining and analytics make it possible to monitor and improve process performance and to reduce costs by identifying opportunities for process optimization, predictive maintenance and process control.
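As a minimal illustration of the exploration step, the following sketch computes the Pearson correlation coefficient between two process variables; the readings are invented example values, not plant data:

```python
# Minimal data-exploration sketch: Pearson correlation between two
# process variables. The readings below are invented example values.
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient between two equally long series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# hypothetical readings: reactor temperature (degC) vs. product impurity (ppm)
temperature = [120, 122, 125, 128, 130, 133, 135]
impurity    = [ 40,  42,  47,  55,  60,  68,  75]

r = pearson(temperature, impurity)
print(round(r, 3))  # strong positive correlation, close to 1
```

A correlation found this way is only a hint for the model-building step; it does not by itself establish a causal relationship.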

Cloud computing and storage

Cloud computing is a type of computing that allows users to access and use a variety of computing resources (such as servers, storage, applications and services) over the internet, without the need for their own physical infrastructure or hardware. In other words, cloud computing allows users to run their applications or store their data on remote servers that are owned and maintained by third-party service providers. Cloud storage, on the other hand, is a type of service that allows users to store and access their data (such as files, documents, photos and videos) on remote servers over the internet. This data is typically stored in a secure and scalable manner and can be accessed from anywhere and on any device that has an internet connection (Figure 15.2). Both cloud computing and storage have become increasingly popular in recent years, as they offer a number of benefits to users, including cost savings, scalability, flexibility and accessibility. They are used by individuals, businesses and organizations of all sizes and types and are rapidly transforming the way we use and interact with technology.

Figure 15.2: Cloud Storage. www.CartoonStock.com.


Data storage and management in a cloud system are realized through a combination of hardware and software components. The hardware components include servers, storage devices and networking equipment, while the software components include operating systems, database management systems and other software applications. When data is stored in a cloud system, it is typically stored in a distributed manner across multiple servers and storage devices. This helps to ensure that the data is always available and that there is no single point of failure that could cause data loss. The data in a cloud system is typically managed using a database management system, which provides a set of tools and interfaces for storing, querying and analyzing data. These systems are designed to handle large volumes of data and provide high levels of performance and scalability. Cloud systems also provide a range of data management tools and services, such as backup and disaster recovery, data encryption and access controls. These tools help to ensure the security and integrity of the data stored in the cloud and enable users to manage and control access to their data.

Additionally, edge devices are used in cloud computing and storage to extend the reach of cloud services beyond the traditional data center or server. By processing and storing data locally on edge devices rather than solely in the cloud, edge computing can improve the speed and efficiency of data processing, reduce latency and bandwidth usage and provide greater resilience in the face of network disruptions or outages. This approach can be particularly useful for applications that require real-time processing or low-latency communication, such as autonomous vehicles, remote monitoring systems or industrial automation. Edge devices can also help to reduce the cost and complexity of cloud computing by offloading some of the processing and storage burden from centralized data centers [367, 368].

Artificial intelligence (AI)

Artificial intelligence (AI) is a broad field that includes a variety of technologies and techniques aiming to develop intelligent machines capable of performing tasks that would otherwise require human intelligence. In general, AI systems are designed to learn, reason and adapt to new situations in much the same way as humans do. The integration of AI-powered tools has shown the potential to revolutionize the process industry by optimizing operations, enhancing efficiency, reducing costs and improving safety. Below, some application areas are listed [369, 370]:
– Predictive maintenance: AI is used to monitor equipment, detect changes in process variables that may indicate impending equipment failure and predict equipment failures based on sensor data, allowing maintenance to be performed before a failure occurs, which helps to prevent downtime and reduce maintenance costs.

– Process optimization: AI-aided tools are used to estimate variables that are difficult or expensive to measure directly, such as product quality, and to use this information to optimize process parameters in order to improve efficiency and minimize resource consumption in chemical processes.
– Process and quality control: AI-assisted sensors are used to monitor process variables such as temperature, pressure and flow rate and provide feedback to control systems. This helps to improve process control and reduce variability. The use of such sensors also makes it possible to estimate product quality based on process variables, allowing manufacturers to detect anomalies and take corrective action in real time, before low-quality products are produced.
– Supply chain optimization: AI helps to optimize the supply chain by predicting demand, identifying bottlenecks and optimizing logistics.
– Energy management: AI supports optimizing energy consumption, predicting energy demand and adjusting production schedules accordingly.
– Safety: AI-aided digital tools can be used to detect and prevent accidents by analyzing sensor data and alerting operators to potential hazards.

Machine learning

Machine learning is a type of artificial intelligence that allows computer systems to automatically improve their performance on a specific task over time by learning from data. In the process industry, machine learning makes it possible to analyze large amounts of data from sensors and other sources to identify patterns, detect anomalies and make predictions about future outcomes. Machine learning algorithms can also be used to construct software-based models, so-called soft sensors, that estimate process variables based on data from available sensors without the need for additional hardware sensors [371]. Another application area of machine learning is process performance and resource optimization, realized by identifying the most efficient operating parameters and making real-time adjustments to maximize output and minimize resource utilization. Successful integration of AI-assisted controllers and optimizers into existing process operations can lead to significant improvements in efficiency and productivity as well as to reductions in energy consumption, helping to achieve sustainability goals.

How exactly does a machine learn? A machine learns through a process known as training, which involves a combination of data processing, statistical analysis and optimization techniques. By means of training, the machine becomes capable of making predictions or decisions by analyzing large amounts of data and identifying patterns or relationships within the data. The machine


improves its performance over the course of this training process and adapts to new data or scenarios. For training purposes, different types of machine learning methodologies are used, such as supervised, unsupervised and reinforcement learning. In the case of supervised learning, a labeled dataset is used to train the machine, meaning that the correct answer or output is provided along with the input data. Hence, as the name implies, the machine is supervised. The machine then uses this labeled data to learn how to predict the correct output for new, unseen data. Unlike supervised learning, the machine in unsupervised learning is trained on an unlabeled dataset, meaning that the machine is expected to identify patterns and relationships within the data without any guidance. This type of learning is often used for clustering or segmentation tasks where the goal is to group similar data points together. In reinforcement learning, the machine learns through trial and error. With a given goal or objective, the machine interacts with its environment to learn which actions will lead to a positive outcome. The machine receives feedback in the form of rewards or penalties for its actions and uses this feedback to adjust its behavior over time [371].
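Supervised learning can be sketched in a few lines: a linear model is fitted to labeled data by gradient descent. The training pairs are invented for illustration; think of x as an easily measured process variable and y as a lab-analyzed quality parameter (a toy soft sensor):

```python
# Supervised learning in miniature: fit y = w*x + b by gradient descent.
# The labeled training pairs below are invented; x could be an easily
# measured process variable, y a quality parameter from lab analysis.

data = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]  # labeled (x, y)

w, b = 0.0, 0.0          # model parameters, started from zero
lr = 0.05                # learning rate

for _ in range(2000):    # training loop (stochastic gradient descent)
    for x, y in data:
        error = (w * x + b) - y      # prediction error on one sample
        w -= lr * error * x          # gradient step for the weight
        b -= lr * error              # gradient step for the bias

# the data were generated from y = 2x + 1, so training should recover that
print(round(w, 2), round(b, 2))   # -> 2.0 1.0
```

After training, the model generalizes to unseen inputs, which is exactly the point of supervised learning: for x = 10 it predicts approximately 21.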

Deep learning
Deep learning is a subset of machine learning that involves training neural networks with multiple layers to recognize patterns in data. It is a type of artificial intelligence inspired by the structure and function of the human brain (Figure 15.3). Deep learning algorithms use multiple layers of artificial neurons to learn representations of data that can be used for classification, prediction and other tasks. These networks, known as artificial neural networks (ANNs), are computational models designed to simulate the behavior of biological neurons and the networks of neurons in the brain. They can be trained on large datasets, allowing them to automatically extract complex features from the data and make accurate predictions or decisions.
Artificial neurons typically consist of several input nodes, a processing unit and an output node. Each input node receives a signal, which is multiplied by a weight value that represents the strength of the connection between the input node and the processing unit. The processing unit then sums the weighted inputs and applies a nonlinear activation function to produce an output value. The output value is then sent to other neurons in the network or used as the final output of the network. The weights of the connections between the input nodes and the processing unit are adjusted during training using algorithms such as backpropagation (see Glossary) in order to optimize the performance of the network for a specific task.
The term “deep” in deep learning refers to the fact that these neural networks are typically composed of many layers of interconnected nodes, or “neurons”, that process and transmit information, which allows them to learn complex representations of the data. These networks can be trained using a variety of optimization techniques, such as stochastic gradient descent, to adjust the weights and biases of the neurons and minimize the error between the predicted output and the actual output [373]. Beyond the process industry, deep learning is used in a wide range of applications, including computer vision, natural language processing, speech recognition and robotics. It has achieved state-of-the-art performance in many tasks and has revolutionized the field of artificial intelligence in recent years.

Figure 15.3: Analogy between a biological neuron and an artificial neural network (ANN). Reproduced from [372].
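The neuron computation described above (weighted sum, bias, nonlinear activation) can be sketched in a few lines. The weights below are made up for illustration, not the result of training:

```python
import math

def neuron(inputs, weights, bias):
    # weighted sum of the inputs plus a bias, then a sigmoid activation
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))

x = [0.5, -1.2, 3.0]                          # input signals

# Hidden layer: two neurons, each with its own weights and bias.
hidden = [neuron(x, [0.4, 0.1, -0.2], 0.1),
          neuron(x, [-0.3, 0.8, 0.05], -0.2)]

# Output layer: one neuron combining the hidden activations.
y = neuron(hidden, [1.5, -0.7], 0.3)
print(round(y, 3))                            # a value between 0 and 1
```

In a real deep network there would be many more neurons and layers, and the weights and biases would be set by a training algorithm such as backpropagation rather than chosen by hand.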

15.5 Models for digitalization

Process models can take many different forms depending on the type of process being simulated. For example, a process model might include equations that describe how materials flow through a system, how energy is transferred or how chemical reactions occur to produce a desired output. The model might also incorporate data on the physical properties of the materials being used or the environmental conditions in which the process takes place. A model can be used to simulate and predict the behavior of these processes, enabling operators to make informed decisions about how best to manage them. For instance, a model could be used to predict how a particular chemical reaction will proceed under different conditions, allowing operators to adjust parameters to optimize the process and ensure consistent output quality. Digital tools such as process simulation software, machine learning algorithms and predictive maintenance systems rely on models to provide accurate insights into complex processes. By developing and refining these models over time, organizations can improve their understanding of the underlying processes and develop better strategies for managing them to stay ahead of the competition. Ultimately, a model-based approach to process management can lead to increased efficiency, improved product quality and reduced costs, making it an essential component of any modern process industry operation.

Model types
The common types of models used for digitalization applications such as digital twins, advanced process control, process simulators and performance optimizers are described as follows [374]:
Physics-based (first principle) models: A physics-based or first principle model, often referred to as a “white box” model, is a type of model that is based on the fundamental laws of physics or engineering that govern the behavior of the system being modelled. The parameters of the model are typically based on physical constants and the model is designed to be highly accurate and predictive. The term “first principles” refers to basic axioms, laws and equations, as opposed to empirical or phenomenological relationships that are based purely on observations or data. First principle models are typically constructed using a combination of analytical techniques, numerical simulations and experimental data and may include differential equations, linear regression (see Glossary) models and decision trees. First principle models are typically used when a deep understanding of the system or process is required or when the available data is limited or unreliable. These models are usually more complex than data-based or surrogate (see Glossary) models and often require significant expertise in the field of study to build and interpret; in some cases this can lead to a time-consuming and costly development process. On the other hand, they may also be less flexible or accurate than data-based models, especially in applications with complex or uncertain relationships between variables. Examples of first principle models include models of fluid dynamics, thermodynamics, heat transfer, chemical reaction kinetics and materials science. These models are

widely used in many fields, including aerospace, automotive, energy and chemical engineering, to design and optimize systems and processes.
Data-based models: A data-based model, also known as a “black box” model, is a mathematical or statistical model that is built using data rather than first principles or physical laws. Data-based models use machine learning, artificial intelligence or statistical techniques to identify patterns and relationships in the data and use these insights to predict future behavior or identify anomalies in the system. Data-driven models are particularly useful for systems that are too complex or uncertain to model using physics-based models alone. The complexity and the number of parameters in these black box models can be very high and it may be difficult or impossible to understand how the model arrives at its predictions or decisions, as it takes in input variables and produces output variables without knowledge of the underlying processes or relationships. Examples of black box models include artificial neural networks, decision trees and support vector machines (see Glossary). These models can be very accurate and powerful, but they may be less interpretable and explainable than other types of models. The lack of transparency in black box models can be a concern in some applications, such as medical diagnosis or legal decision-making, where it is important to understand how a model arrived at its conclusion. The choice between white box and black box models depends on the specific application and the goals of the modelling exercise. White box models are generally preferred when a deep understanding of the underlying processes is required or when explainability is important, while black box models are often preferred when accuracy or predictive power is the primary concern.
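To make the white-box idea concrete, here is a minimal first-principles model: Newton's law of cooling, a simple energy balance, integrated with the explicit Euler method. The parameter values are illustrative, not from a real plant.

```python
# dT/dt = -k * (T - T_amb): Newton's law of cooling, explicit Euler.
k = 0.1        # cooling constant, 1/min (illustrative)
T_amb = 25.0   # ambient temperature, degC
T = 90.0       # initial temperature, degC
dt = 0.1       # time step, min

for _ in range(int(60 / dt)):      # simulate 60 minutes
    T += dt * (-k * (T - T_amb))

print(round(T, 2))   # close to the analytic value 25 + 65*exp(-6)
```

Every quantity here has a physical meaning; a black box model of the same system would instead be fitted to measured temperature trajectories without reference to the underlying energy balance.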
Surrogate models: A surrogate model is a simplified mathematical or statistical model that is created to approximate the behavior of a complex computer simulation or physical system. Surrogate models are commonly used in engineering, physics and other fields where simulations are expensive or time-consuming to run, in order to quickly explore the behavior of the system under different conditions or to optimize design parameters. A surrogate model and a data-based model are both mathematical or statistical models that are built to approximate the behavior of a complex system. However, there are some key differences between the two. A surrogate model is built using information from a complex computer simulation or physical system, while a data-based model is built using information from a dataset of input and output variables. The purpose of a surrogate model is to replace the complex simulation or physical system with a simpler model that can be used for prediction or optimization. The purpose of a data-based model, on the other hand, is to predict outcomes or make decisions based on patterns in the data. Moreover, a surrogate model is typically simpler than the original simulation or physical system, while a data-based
model can be as complex or simple as necessary to accurately represent the data. The choice between these two types of models depends on the nature of the problem and the available data [374, 375].
Hybrid (grey box) models: These are models that combine physics-based and data-driven modeling techniques to create a more comprehensive representation of the system. Hybrid models can capture both the fundamental physics of the system and the complex interactions and dynamics that emerge from real-world conditions and environmental factors. A hybrid (grey box) model incorporates some knowledge of the internal workings of the system while treating other aspects of the system as a black box. Hybrid models are typically used when some knowledge of the system is available, but not enough to fully model the system with a white box approach. In such cases, a hybrid approach can help to improve the accuracy and reliability of the model. Hybrid models are commonly used in fields such as engineering, finance and economics, where the underlying systems are difficult to fully understand, but where some information is available that can inform the model [376].
Overall, the choice of modelling technique for a digitalization application will depend on the specific characteristics of the system being modeled, the availability and quality of data and the goals of the application. A combination of different modeling techniques may be necessary to create an accurate and useful digital tool.
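A minimal grey-box sketch under stated assumptions: a simplified first-principles relation (here an assumed linear law y = 2x) combined with a constant data-driven correction fitted to the residuals between plant data and the physics prediction. All numbers are invented for illustration.

```python
def physics(x):
    return 2.0 * x          # simplified white-box part (assumed law)

# "Plant" measurements that the pure physics model misses slightly.
plant = [(1.0, 2.4), (2.0, 4.5), (3.0, 6.6), (4.0, 8.3)]

# Data-driven part: fit a constant bias to the residuals.
residuals = [y - physics(x) for x, y in plant]
bias = sum(residuals) / len(residuals)

def hybrid(x):
    return physics(x) + bias    # grey box = physics + learned correction

print(round(bias, 3))           # average plant-model mismatch
print(round(hybrid(2.5), 2))    # corrected prediction at a new point
```

In practice the data-driven part is usually richer than a constant offset (e.g. a regression or neural network on the residuals), but the structure is the same: the physics carries the known behavior, the data fills in what the physics misses.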

Data-based model building
In this section, a general step-wise approach to creating data-based models is briefly summarized:
– Problem definition: The starting point of data-based modelling is to clearly define the problem and the objectives of the model by identifying the input and output variables, as well as the performance metrics to be used for evaluating the model.
– Data collection: Next, a dataset containing examples of the input and the corresponding output variables is collected. This step involves design of experiments (DoE) and sampling to generate data points that are representative of the population the model will be applied to.
– Data processing and preparation: The collected data may need to be cleaned, transformed or normalized before it can be used for model training. This includes handling missing values, removing outliers and scaling the data to a consistent range.
– Model selection: The nature of the problem and the availability of data directly influence the model selection. Various modelling approaches can be used to
build a data-based model, including linear regression, logistic regression, decision trees, random forests, support vector machines and neural networks (see Glossary).
– Training: To perform model training, the available data is split into three independent sets: a training, a validation and a test set. The model is then trained on the training set, aiming to estimate the parameters that minimize the difference between the predicted outputs and the actual outputs in the training data.
– Model validation: As the next step, the validation set is employed to validate the model and ensure that it accurately represents the behavior of the data. This is typically done by calculating the performance metrics on the validation set and comparing them to the performance metrics on the training set. An independent validation dataset, separate from the training data, is needed to avoid overfitting during model tuning.
– Testing the model: Once the model is validated, it is tested on another independent dataset to evaluate its performance, by calculating the performance metrics on a test set that has not been seen by the model before. If the model performs well on the test set, it can be utilized in a real-world application to make predictions or decisions based on new data.

The model may need to be retrained or updated periodically to account for changes in the data or the environment.
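The steps above can be sketched end-to-end with synthetic data and a deliberately simple one-parameter model; the split ratios, noise level and data are arbitrary choices for illustration.

```python
import random

# Data collection (synthetic): y = 3x plus measurement noise.
random.seed(0)
data = [(x, 3.0 * x + random.gauss(0, 0.2)) for x in range(30)]
random.shuffle(data)

# Split into independent training, validation and test sets (60/20/20).
train, valid, test = data[:18], data[18:24], data[24:]

def fit(samples):
    # least-squares slope of y = w*x through the origin
    return sum(x * y for x, y in samples) / sum(x * x for x, y in samples)

def rmse(w, samples):
    return (sum((w * x - y) ** 2 for x, y in samples) / len(samples)) ** 0.5

w = fit(train)                # training step
err_valid = rmse(w, valid)    # validation: guards against overfitting
err_test = rmse(w, test)      # final evaluation on unseen data
print(round(w, 2), round(err_valid, 3), round(err_test, 3))
```

A large gap between training and validation error would indicate overfitting; here the model family is so simple that the three errors stay close together.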

15.6 Data science vs. domain expertise

With the explosive growth of data science and artificial intelligence in recent years, the importance or necessity of domain expertise in the field is being hotly debated. While leaders keep promoting the idea of involving domain experts in all stages of machine learning system design, development and deployment, it has been demonstrated in multiple cases that solutions can also be built and tested for performance without the involvement of domain experts [377]. This section discusses why the ideal approach could be “Data Science and Domain Expertise” rather than “Data Science vs. Domain Expertise”.
Data science and domain expertise are two distinct fields that are often intertwined in practice. Data scientists specialize in using statistical and computational techniques to analyze and interpret data, while domain experts possess extensive knowledge and experience in a particular industry or subject area. While it is possible for data scientists to work without the support of domain experts, it is often not ideal. Without domain expertise, data scientists may lack a deep understanding of the context in which the data was collected and the nuances of the subject matter. This can lead to inaccurate or irrelevant insights, as well as difficulty in interpreting the results of the analysis. On the other hand, domain experts who lack data science skills may struggle to extract meaningful
insights from large, complex datasets. Data scientists can bring valuable technical expertise and analytical skills to help domain experts make sense of their data and identify patterns that may not be immediately apparent. Despite being two distinct areas, data science and domain expertise are thus often complementary and mutually rewarding. By combining the skills and knowledge of data scientists and domain experts, organizations are more likely to gain deeper insights into what their data tells them and to make better-informed decisions.

Role of domain experts
Domain experts play a critical role in digitalization by bringing their specialized knowledge and expertise to bear on the digital transformation process. Here are a few ways in which domain experts provide valuable contributions [377, 378]:
– Contextualizing the data: Domain experts provide context for the data by understanding the industry or domain in which the data is being used. They can identify relevant benchmarks, trends and best practices that help put the data in context and provide a basis for comparison.
– Interpreting the data: Domain experts help to interpret the data by understanding the nuances and complexities of the industry or domain. They also identify patterns, correlations and cause-and-effect relationships that help explain the data and provide insights into the underlying factors that are driving it.
– Validating the data: Domain experts also help to validate the data by checking it against their own knowledge and experience. They identify outliers, inconsistencies and errors that may indicate data quality issues or other problems.
– Understanding business processes: Domain experts have deep knowledge of the specific industry or domain in which they operate. This understanding is critical for identifying areas where digital technologies can improve efficiency, reduce costs and increase competitiveness.
– Identifying opportunities for innovation: Domain experts also support the identification of opportunities for innovation by applying their knowledge to the development of new products, services and business models. By understanding the needs of customers and the capabilities of digital technologies, domain experts can help drive innovation and create new value propositions.
– Translating technical concepts: Digitalization often involves complex technical concepts and terminology.
Domain experts help to translate these concepts into language that is easily understood by nontechnical stakeholders such as business leaders and customers. – Guiding implementation: Domain experts can also guide the implementation of digital technologies by providing input on requirements, functionality and user experience. This ensures that digital solutions are aligned with the needs of the business and the domain.

– Driving action: Ultimately, the role of domain experts is to help drive action based on the data. They can provide insights and recommendations that support informed decision-making and drive business outcomes.

15.7 Digitalization trends in process industry

Digital twins
A digital twin is a virtual representation of a physical asset, process or system that is used to simulate, monitor and optimize its performance. It is built using data from sensors and machine learning algorithms, as well as fundamental laws of physics or engineering where available, to create a digital model that is updated in real time based on the physical asset’s behavior [379]. In the process industry, a digital twin can represent a single piece of equipment, a process unit or a whole process plant, depending on the application purpose and model availability (Figure 15.4).

Figure 15.4: Digital Twin Application in Chemical Process Industry. Copyright thyssenkrupp.

The very first digital twin application was realized at NASA in the 1960s as a “living model” of the Apollo mission. After Apollo 13’s oxygen tank explosion and the subsequent damage to the main engine, NASA used multiple simulators to evaluate the malfunction and extended a physical model of the vehicle to include digital components. This “digital twin” was the prototype, allowing continuous data ingestion to model the sequence of events that resulted in the incident, both for forensic analysis and for the exploration of next steps [380].


From the product lifecycle management point of view, a digital twin can exist along the product’s lifecycle and support decision making during product development, from the conceptual design stage to the product launch. Simulations enable real-time monitoring and quality evaluation even before the final product is manufactured, helping to shorten product development time and to achieve advanced products that are fit for purpose. Even after the product launch, digital twins can be continuously improved through real-time data updates from the physical assets, which leads to further efficiency improvements and optimization [364, 379, 381]. In the process industry, digital twins can be used as core elements for a variety of applications, including:
– Predictive maintenance: Digital twins are employed to predict equipment failures by monitoring the behavior of the physical asset in real time and comparing it to the digital twin model. This allows for early recognition of a possible malfunction, so that maintenance can be performed before the failure occurs, resulting in reduced downtime and maintenance costs.
– Process performance optimization: Digital twins are widely used to optimize process parameters by simulating the behavior of the process in real time and identifying opportunities for improvement.
– Quality control: Another application of digital twins is to monitor product quality by comparing the behavior of the physical asset to the digital twin model and identifying deviations that could indicate a quality issue.
– Training and simulation: Digital twins are important tools for training operators on how to respond to different process scenarios and emergencies.
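The comparison of asset behavior against the twin model can be sketched in its simplest form: flag the operating points where the residual between measurement and twin prediction exceeds a tolerance. The model, data and threshold below are hypothetical.

```python
def twin_prediction(load):
    # digital twin's expected bearing temperature as a function of load
    return 40.0 + 0.5 * load

# Live measurements as (load, measured temperature) pairs.
measurements = [(50, 65.2), (60, 70.1), (70, 75.3), (80, 92.0)]
threshold = 5.0     # tolerated model-plant residual, degC

alarms = [load for load, temp in measurements
          if abs(temp - twin_prediction(load)) > threshold]
print(alarms)       # → [80]
```

The last measurement deviates strongly from the twin's prediction, which in a real system would trigger an early maintenance investigation before an actual failure occurs.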

Dynamic simulation
Dynamic simulation is a technique used in the process industry to model and simulate the behavior of a process over time. It involves employing a dynamic process model, which can be first principle, data-based or hybrid, to predict how the process will respond to changes in input variables such as temperature, pressure or flow rate. By simulating the behavior of a process over time, manufacturers can identify potential problems, optimize process parameters and develop effective control strategies. Since dynamic simulation is one of the key elements for creating digital twins, its areas of use are largely identical to the application fields of digital twins, such as process design, development of operator training simulators, quality and advanced process control, as briefly mentioned in the previous section. For a deep dive into the topic, please refer to Chapter 3.7 Dynamic Process Simulation.
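As a minimal illustration, the step response of a first-order process dy/dt = (K·u − y)/τ can be computed with explicit Euler integration; gain, time constant and step size are illustrative values.

```python
# First-order lag: dy/dt = (K*u - y) / tau, explicit Euler integration.
K, tau, dt = 2.0, 5.0, 0.01   # gain, time constant (min), time step (min)
y, t = 0.0, 0.0
u = 1.0                        # step change in the input at t = 0

while t < 25.0:                # simulate five time constants
    y += dt * (K * u - y) / tau
    t += dt

print(round(y, 3))             # approaches the steady state K*u = 2.0
```

After five time constants the output has essentially settled at the new steady state; the same integration loop, applied to a full plant model, is the core of the dynamic simulators used for digital twins and operator training.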

Operator training simulators (OTS)
Operator training simulators (OTS) are computer-based systems that simulate the operation of industrial processes by means of a dynamic simulation model, allowing operators to train and develop their skills in a safe and controlled environment, ahead of plant start-up and throughout the plant lifecycle. OTS systems may also include grading methodologies that allow operators to be certified before they encounter routine (normal operation, start-up and shutdown) and nonroutine (equipment malfunction, emergency shutdown) scenarios in real plants. The use of OTS in the process industry provides a range of benefits, including [382]:
– Improved safety: Operators can practice handling emergency situations without risking injury or damage to equipment, allowing them to develop the skills and the knowledge needed to respond effectively in real-world scenarios.
– Reduced downtime: OTS is used to simulate maintenance and repair scenarios, allowing operators to test and refine their skills without disrupting actual production processes.
– Improved productivity: By training operators on a virtual model of the process, OTS helps operators learn to optimize the process and improve production efficiency.
– Cost savings: The use of OTS reduces the need for expensive and time-consuming physical training exercises as well as the risk of accidents and equipment damage.
– Improved environmental performance: OTS is used to train operators in best practices for reducing waste and emissions, helping to improve the environmental performance of the process.

Model accuracy in an OTS application
In the context of operator training simulators (OTS) for the chemical process industry, high-fidelity refers to a simulation that closely replicates the behavior of the actual process being simulated, while low-fidelity refers to a simulation that simplifies or abstracts the behavior of the process.
A high-fidelity OTS incorporates realistic process models, accurate data inputs and sophisticated software and hardware to create an immersive and realistic training environment. This type of simulation can provide a very accurate representation of the process and is particularly useful for training operators in complex, high-risk scenarios where safety and accuracy are paramount. On the other hand, a low-fidelity OTS is a simplified version of the process that is designed to be less resource-intensive and easier to use. These simulations may use less detailed process models, less accurate data inputs and simpler software and hardware. They are useful for training operators in more routine scenarios or as an initial introduction to the process before moving on to more complex and realistic simulations [382, 383].


The choice of high-fidelity or low-fidelity for an OTS depends on the specific needs of the organization and the training goals for the operators.

Advanced process control
Digitalization and automation play a significant role in achieving operational excellence by enabling safe and secure process optimization of production facilities. Advanced Process Control (APC) is an essential element of the digital transformation that is driving the industry when it comes to process control and optimization. APC is a practice that uses models and mathematical algorithms for the optimization and control of industrial processes in real time. APC systems analyze process data in order to predict process behavior and adjust process variables, resulting in optimized process performance and reduced variability. Dynamic models that capture the process and its constraints are used to design these controller systems, which run in a continuous and autonomous manner, performing optimization based on real-time data. Successful integration of APC systems brings benefits such as enhanced production, plant stability and energy efficiency.
Model Predictive Control (MPC) is a type of advanced control strategy that uses a mathematical model of the system being controlled to predict future behavior and optimize control actions over a defined time horizon. In MPC, the control actions are calculated by iteratively solving an optimization problem based on the model predictions and a specified performance objective. The controller then applies the calculated control actions to the system and the procedure is repeated at each time step [384, 385]. MPC is widely used in various industries, including chemical, automotive, aerospace and power generation, to improve process control, increase efficiency and reduce costs.
Some specific applications of MPC include:
– Process control: MPC is used to control complex industrial processes such as chemical and petrochemical plants, where accurate control of multiple variables is essential for maintaining product quality and maximizing production rates.
– Automation and robotics: MPC is utilized in robotics and autonomous systems to optimize control actions based on predictions of future system behavior and environmental conditions.
– Traffic control: Traffic control systems use MPC to optimize traffic flow and reduce congestion by adjusting traffic signal timings based on real-time traffic data.
– Building control: MPC is also included in building automation systems to optimize energy consumption and indoor comfort by adjusting heating, ventilation and air conditioning (HVAC) systems based on occupancy and weather predictions.
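The receding-horizon idea behind MPC can be illustrated with a deliberately tiny example: an assumed first-order process model, a three-step prediction horizon and a brute-force search over a few candidate control moves. Real MPC implementations use structured optimization solvers rather than enumeration; the model, cost weights and candidate set here are made up.

```python
import itertools

def cost(y0, moves, setpoint=1.0):
    # predicted tracking error plus a small control-effort penalty
    y, total = y0, 0.0
    for u in moves:
        y = 0.8 * y + 0.2 * u          # assumed first-order process model
        total += (y - setpoint) ** 2 + 0.01 * u ** 2
    return total

candidates = [0.0, 0.5, 1.0, 1.5, 2.0]  # discretized control moves
y = 0.0
for _ in range(10):                     # closed-loop simulation
    best = min(itertools.product(candidates, repeat=3),
               key=lambda seq: cost(y, seq))
    u = best[0]                         # receding horizon: apply first move only
    y = 0.8 * y + 0.2 * u               # "plant" response (same as model here)
print(round(y, 2))                      # settles near the set-point
```

At every step the controller re-optimizes the whole horizon but applies only the first move; this constant re-planning is what lets MPC react to disturbances and model errors.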

Real-time optimization
In the chemical industry, it is of paramount interest to advance key business drivers such as process performance and product quality while fulfilling safety requirements and environmental regulations in order to maintain a competitive advantage in the global market. Real-time optimization (RTO) is a technique used in the process industry to continuously adjust process parameters in real time to achieve optimal performance. It involves the use of mathematical models and optimization algorithms to calculate the optimal set-points for process variables based on real-time data. RTO is applied mainly in the field of process and quality control in order to tune the process variables in real time, with the aim of maintaining the desired product quality in the presence of fluctuations or disturbances related to raw material supply or other factors while minimizing resource consumption. Another application area of RTO is supply chain optimization: predicting demand, identifying bottlenecks and optimizing logistics. Real-time optimizers are also powerful tools for optimizing energy consumption by predicting energy demand and adjusting production schedules accordingly. The ultimate aim of RTO systems is to ensure that the process operates as close as possible to its true optimum despite uncertainties, while satisfying the various constraints.
The overall control hierarchy, composed of the planning and scheduling, RTO and regulatory control layers, is shown in Figure 15.5. An iterative model-based optimization routine is run in closed loop based on objective functions and constraints that are defined by the planning and scheduling layer. RTO acts as a bridging layer, incorporating information from both the regulatory control and planning layers to provide optimal, updated operating conditions or set-points to the low-level controllers, which range from simple linear controllers to nonlinear model predictive controllers (NMPC) employed in the control layer.

Figure 15.5: Disturbances acting on different automation levels. Adapted from [386].

Since an ideal system simply does not exist, process disturbances are expected to influence all levels of the process control architecture. Market fluctuations are regarded as slow disturbances, which typically influence decision making at the planning and scheduling layer. Fast disturbances, such as fluctuations in process parameters like pressure and composition, are handled at the regulatory control level. The middle layer, where RTO operates, commonly encounters medium-term disturbances such as changes in raw material quality. Process measurements can be used in all layers, being compared to set-points to compute appropriate control actions for set-point tracking and disturbance rejection [386].
Despite the many opportunities offered by process optimization, some important challenges are also commonly addressed. The difficulty of constructing an accurate mathematical formulation that represents reality well enough can be considered one of the main challenges, which is also common to creating digital twins. A decrease in the quality of this representation reflects the presence of uncertainty, arising from the process disturbances explained above, from an insufficiently structured process model due to unknown phenomena or neglected dynamics, or from model parameters not corresponding to the reality of the process. In that case, one speaks of a so-called “plant-model mismatch”, which means that the optimization cannot be performed at its best, leading to unsatisfactory results or infeasible operation in the presence of constraints.
To handle plant-model mismatches in RTO, different approaches have been demonstrated in the literature, including online parameter estimation and measurement-based (also called adaptive) real-time optimization methodologies [387, 388, 401].
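A toy sketch of the two-step idea (parameter estimation followed by re-optimization): the uncertain model parameter is estimated from plant measurements via a finite-difference gradient, and the updated model is then re-optimized. The plant, model structure and step sizes are invented for illustration.

```python
# Model structure: y = -(u - theta)**2 + 10, optimum at u = theta.
# The true parameter of the plant is unknown to the optimizer.

def plant(u):                       # "true" plant, theta = 3.0
    return -(u - 3.0) ** 2 + 10.0

u, h = 1.0, 0.01                    # initial set-point, probe step
for _ in range(5):                  # RTO iterations
    # estimation step: measure the plant gradient by finite differences
    grad = (plant(u + h) - plant(u - h)) / (2 * h)
    theta = u + grad / 2.0          # back out the model parameter
    u = theta                       # optimization step: argmax of updated model
print(round(u, 2), round(plant(u), 2))
```

Because the model structure happens to match the plant here, the loop finds the true optimum immediately; under a structural plant-model mismatch, adaptive schemes such as modifier adaptation additionally correct the model's gradients from measurements.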

Predictive maintenance
In the process industry, predictive maintenance is becoming increasingly important as companies seek to optimize their maintenance practices and reduce costs. Predictive maintenance is a proactive maintenance approach that uses data analytics and machine learning algorithms to predict when maintenance is required on a piece of industrial equipment. By analyzing process data, early signs of equipment failure can be detected and the maintenance personnel can be warned in advance. In the process industry, predictive maintenance is used to optimize maintenance schedules, reduce downtime and improve overall equipment reliability. It involves collecting data on various aspects of equipment performance, such as vibration, temperature and pressure, and analyzing that data to identify patterns and anomalies. Once a predictive maintenance system detects a potential issue, maintenance personnel can investigate it and take appropriate action, such as replacing a worn-out part or scheduling maintenance before a failure occurs. By identifying issues before

502 � 15 Digitalization they become serious, predictive maintenance can reduce the need for costly emergency repairs, decrease downtime and extend the lifespan of equipment by minimizing the need for emergency repairs and by reducing the frequency of routing maintenance activities [389, 390]. Benefits of using predictive maintenance include: – Improved equipment reliability and availability – Reduced maintenance costs and downtime – Improved safety by identifying potential equipment failures before they occur – Optimized maintenance schedules, leading to more efficient use of resources – Increased productivity and efficiency by reducing the impact of unplanned downtime on production schedules
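The pattern and anomaly detection mentioned above can be sketched in its simplest form. The vibration readings and the three-sigma threshold below are invented for illustration: a mean and standard deviation are computed from a healthy baseline, and new readings are flagged when they deviate too strongly.

```python
import statistics

def detect_anomalies(baseline, new_readings, n_sigma=3.0):
    """Flag readings deviating more than n_sigma standard deviations
    from the healthy-baseline mean (a minimal z-score test)."""
    mean = statistics.fmean(baseline)
    std = statistics.stdev(baseline)
    return [abs(x - mean) > n_sigma * std for x in new_readings]

# Hypothetical vibration amplitudes (mm/s) of a pump in healthy operation:
baseline = [2.1, 2.0, 2.2, 1.9, 2.1, 2.0, 2.2, 2.1]
# New measurements; the last one hints at e.g. bearing wear:
new_readings = [2.1, 2.3, 4.8]

flags = detect_anomalies(baseline, new_readings)
print(flags)  # only the last reading is flagged
```

Industrial systems use far more elaborate models (trend analysis, machine learning on multiple sensors), but this z-score test captures the basic principle of comparing live data against a learned healthy state.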

Computer vision
Computer vision is a technology that enables computers to interpret and analyze images and video data. In the chemical industry, computer vision is used as a digitalization tool to improve safety, efficiency and quality in a number of applications, where it can be used to automate and optimize various manufacturing and industrial processes [391, 392]. The common application areas of computer vision in the process industry are:
– Quality control and inspection: Computer vision is used to inspect and identify defects in products and components, helping to ensure that the products meet the required quality standards and specifications.
– Robotics and automation: Computer vision enables robots to perform tasks that require visual perception, such as object recognition and tracking. This can help increase the efficiency and accuracy of manufacturing processes.
– Monitoring and surveillance: Computer vision is utilized to monitor and analyze processes in real time, allowing for early detection of faults or deviations from the desired operating conditions.
– Safety and security: Computer vision helps to detect and prevent safety hazards in the workplace, such as identifying unsafe working conditions or detecting intruders. This can include monitoring for leaks, spills and other safety hazards as well as identifying unauthorized access or security issues.
– Predictive maintenance: Computer vision is used to monitor equipment and identify potential maintenance issues before they become critical. For example, computer vision is used to monitor the condition of pumps, valves and other equipment to identify wear and tear and to predict when maintenance or replacement will be required.
One common approach in the process industry is to use thermal imaging cameras to capture images of a furnace and then use image processing algorithms to analyze the images and identify any potential issues or defects. Typically, the steps of this process are:


– Image capture: The first step is to capture images of the furnace using cameras. The type of camera used can vary depending on the application, but thermal imaging cameras are commonly used to capture images that show the temperature distribution across the surface of the furnace.
– Image processing: Once the images have been captured, they are processed using computer vision algorithms to identify any potential issues or defects. For example, the algorithms are used to identify areas of the furnace that are hotter or cooler than expected, which could indicate a problem with the furnace's heating elements or insulation.
– Defect detection: Once potential issues or defects have been identified, the computer vision system can classify them based on their severity and provide recommendations for corrective action. For example, if the furnace is found to have a damaged heating element, the computer vision system might recommend replacing the element to prevent further damage.
– Real-time monitoring: In some cases, computer vision systems are used to monitor furnaces in real time, providing immediate feedback on the furnace's performance and alerting operators to any potential issues as they arise. This can help to improve the overall efficiency of the furnace.
– Machine learning: Machine learning techniques can also be applied to the data generated by the computer vision system to identify patterns and predict future issues before they occur. This can help to optimize the furnace's performance and prevent costly downtime.
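The image-processing step can be sketched in a very reduced form. The temperature grid and the acceptable maximum below are invented for illustration: a thermal image is represented as a 2D array of surface temperatures, and pixels exceeding the expected maximum are flagged as hot spots.

```python
# Minimal sketch of hot-spot detection in a thermal image,
# represented as a 2D list of surface temperatures in degrees Celsius.

def find_hot_spots(image, t_max):
    """Return (row, column) indices of pixels hotter than t_max."""
    return [(i, j)
            for i, row in enumerate(image)
            for j, t in enumerate(row)
            if t > t_max]

# Hypothetical 3x4 thermal image of a furnace wall; 450 °C is assumed
# to be the highest acceptable surface temperature.
image = [
    [380.0, 395.0, 410.0, 400.0],
    [390.0, 470.0, 480.0, 405.0],  # local overheating, e.g. damaged insulation
    [385.0, 400.0, 415.0, 395.0],
]

hot_spots = find_hot_spots(image, t_max=450.0)
print(hot_spots)  # [(1, 1), (1, 2)]
```

Real computer vision pipelines additionally segment, denoise and classify the image, but the core of the defect-detection step is exactly this comparison of measured pixel values against an expected range.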

Virtual commissioning
Virtual commissioning is a simulation-based approach, typically involving computer-aided design (CAD) and simulation tools, to create a virtual model of a chemical plant or process in order to test and optimize it before it is physically built and put into operation. In the chemical process industry, virtual commissioning is used to simulate the behavior of chemical processes, test control strategies and identify potential issues before they occur in the actual production environment. This can save time and money by allowing engineers and operators to identify and resolve problems early on, before they become more difficult and expensive to address [393, 394]. Here is how virtual commissioning typically works in the chemical process industry:
– Process modelling: The first step is to create a detailed model of the chemical process that includes all of the equipment, sensors and control systems that will be used in the actual production environment. This model is typically created using specialized process simulation software that can accurately simulate the behavior of the process under different conditions.

– Control system modelling: Once the process model has been created, a detailed model of the control system is developed. This includes the programmable logic controllers (PLCs), distributed control systems (DCS) and other control components that will be used to monitor and control the process.
– Integration and testing: The process and control system models are then integrated and tested using specialized software tools that can simulate the behavior of the system under different operating conditions. This allows engineers to test and optimize the control strategies, identify potential issues and fine-tune the process parameters to improve performance.
– Validation and verification: Once the virtual commissioning process is complete, the system is validated and verified to ensure that it meets the requirements of the actual production environment. This includes testing the system under different scenarios and verifying that it can respond appropriately to changes in the process conditions.
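The integration-and-testing step can be illustrated with a minimal sketch. The tank size, controller gain, flows and set-point below are invented for illustration: a simple proportional level controller is tested against a simulated buffer tank before any hardware exists, which is the core idea of virtual commissioning.

```python
# Minimal virtual-commissioning sketch: a proportional controller is
# tested against a simulated buffer tank (mass balance dV/dt = F_in - F_out).

def simulate(setpoint, level0, kp, f_out=2.0, dt=0.1, steps=600):
    """Simulate the closed loop and return the final tank level in m3."""
    level = level0
    for _ in range(steps):
        error = setpoint - level
        # Controller output in m3/h: nominal outflow plus proportional
        # correction; negative inflow is not physically possible.
        f_in = max(0.0, f_out + kp * error)
        level += (f_in - f_out) * dt  # explicit Euler integration
    return level

# "Commissioning test": does the loop settle at the set-point of 5 m3?
final_level = simulate(setpoint=5.0, level0=2.0, kp=1.5)
print(round(final_level, 2))
```

In practice, such tests are run with the real control logic (PLC/DCS code) against a rigorous dynamic process model, but the workflow (define a scenario, run the coupled simulation, check the response) is the same.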

Drone applications
Drones are widely used for inspection purposes in chemical plants in several ways, including [395, 396]:
– Aerial inspection: Drones equipped with cameras are used to capture aerial images and videos of chemical plant facilities (Figure 15.6). This helps to identify potential issues such as leaks, cracks or other damage to buildings, tanks or pipelines.
– Remote inspection: Drones are used to inspect areas of the chemical plant that are difficult to access, such as tall tanks and columns or other hard-to-reach areas. The use of drones reduces the need for operators to climb ladders or scaffolding to perform inspections, improving safety and reducing the risk of accidents.
– Thermal imaging: Drones equipped with thermal imaging cameras are used to detect temperature changes that may indicate issues with equipment or processes, such as overheating or leaks.
– Gas detection: Drones can also be equipped with gas detection sensors to identify leaks or other hazards that may be difficult to detect from the ground.
The advantages of using drones for inspection purposes in chemical plants include:
– Improved safety: By using drones, operators can avoid hazardous areas and reduce the risk of accidents or injuries.
– Increased efficiency: Drones can perform inspections more quickly and efficiently than traditional methods, reducing downtime and increasing productivity.
– Cost savings: Using drones for inspections reduces the need for costly equipment or scaffolding.


Figure 15.6: Drone applications in chemical process industry. Copyright thyssenkrupp.



– Enhanced data collection: Drones can collect high-quality images, videos and other data that can be analyzed to identify trends or patterns that may be difficult to detect using traditional methods.

15.8 Catching up with the times
In the era of data, process/chemical engineers can benefit greatly from equipping themselves with statistical tools and coding skills. Here are some areas in which possessing such skills can bring great value:
– Statistical analysis: Learning statistical tools and techniques allows engineers to effectively analyze and interpret large datasets. Statistical analysis can help identify patterns, trends and correlations in process data, enabling engineers to optimize processes, identify root causes of problems and make data-driven decisions.
– Design of Experiments (DoE): Knowledge of statistical tools such as Design of Experiments can aid engineers in efficiently exploring process variables and identifying optimal conditions. DoE techniques help in minimizing the number of experiments required while maximizing the amount of information obtained, leading to improved process understanding and optimization.

– Data visualization: Learning how to visually represent data can be immensely beneficial for process engineers. Visualization techniques enable engineers to communicate complex data effectively, facilitating better decision making and collaboration. Tools like Python's Matplotlib or R's ggplot (see Glossary) [399, 400] help in creating insightful visualizations.
– Machine learning and predictive analytics: Understanding machine learning concepts empowers engineers to build models that can predict process behavior, identify anomalies and optimize process parameters. Familiarity with algorithms like regression, classification, clustering and time-series analysis is valuable when extracting actionable insights from process data.
– Programming and data manipulation: Proficiency in coding, particularly in languages like Python or R, helps engineers in data manipulation, automation and performing complex calculations efficiently. Coding skills enable engineers to work with large datasets, perform data cleaning, transformation and integration, and develop custom tools for data analysis.
– Process monitoring and control: Data analysis skills are instrumental in implementing advanced process monitoring and control techniques. Engineers can develop algorithms to detect process deviations, predict equipment failure or optimize control strategies based on real-time data, improving overall process performance and safety.
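As a small taste of the statistical skills listed above, the following sketch fits a simple linear regression to process data with nothing but the Python standard library. The temperature/yield data are invented for illustration; slope and intercept are estimated by ordinary least squares.

```python
import statistics

def linear_fit(x, y):
    """Ordinary least-squares fit of y = a + b*x; returns (a, b)."""
    x_mean = statistics.fmean(x)
    y_mean = statistics.fmean(y)
    b = sum((xi - x_mean) * (yi - y_mean) for xi, yi in zip(x, y)) \
        / sum((xi - x_mean) ** 2 for xi in x)
    a = y_mean - b * x_mean
    return a, b

# Hypothetical process data: reactor temperature (°C) vs. product yield (%)
temperature = [150.0, 160.0, 170.0, 180.0, 190.0]
yield_pct = [71.0, 73.0, 75.0, 77.0, 79.0]

a, b = linear_fit(temperature, yield_pct)
print(a, b)  # intercept 41.0, slope 0.2 %/K for these data
```

Libraries such as NumPy, pandas or R of course provide this and much more out of the box; the point is that the underlying statistics are simple enough for every engineer to understand and verify.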

It is highly recommended to consider the following strategies as a process/chemical engineering student or graduate:
– Continuous learning: Commit to lifelong learning and stay updated with the latest developments in data analytics, machine learning and process optimization. Stay engaged with industry publications, attend conferences, participate in webinars and take advantage of online courses and certifications.
– Develop data analysis skills: Strengthen your data analysis skills by mastering statistical tools, data visualization techniques and programming languages commonly used in data analytics, such as Python or R. Practice analyzing real-world datasets and familiarize yourself with different data analysis methods and algorithms.
– Stay technologically proficient: Stay up to date with the latest technological advancements in process and chemical engineering. Explore emerging technologies like IIoT, cloud computing and advanced automation systems. Familiarize yourself with relevant software and tools used for process simulation, optimization and data analysis. For instance, Hedengren (Brigham Young University, Chemical Engineering Department) provides numerous online courses and tutorials at [397] as well as on his YouTube channel (APMonitor.com), focusing on programming, optimization, process dynamics and control, and machine learning.
– Seek industry-relevant experience: Look for opportunities to apply your data-driven skills in internships, co-op programs or research projects. Gain practical


experience by working on projects involving data analysis, process optimization or implementing advanced control strategies. Seek out mentors who can guide you in applying data analytics techniques in industrial settings.
– Collaborate and network: Engage with professionals in the field, both online and offline. Participate in industry forums, join professional organizations and network with experts and peers. Collaborate on projects or research initiatives that involve data analytics and learn from experienced professionals.
– Problem-solving approach: Cultivate a problem-solving mindset that combines your process/chemical engineering knowledge with data analysis skills. Look for opportunities to apply data-driven techniques to solve real-world problems by optimizing processes and improving efficiency.
– Stay ethical and secure: Understand the ethical considerations related to data analysis, data privacy and cybersecurity. Stay informed about regulations, guidelines and best practices in handling and protecting sensitive data. Maintain confidentiality and ensure compliance with applicable laws and industry standards.
– Stay curious and adaptable: Embrace a mindset of curiosity and adaptability. Be open to learning new tools, techniques and methodologies as technology evolves. Embrace interdisciplinary collaboration and be willing to work with data scientists, computer scientists and other experts to leverage their skills in data-driven projects.

By actively pursuing these strategies, you can stay relevant and excel in today's data-driven process industry as a process/chemical engineer. Continuously expanding your skill set and staying informed about the latest advancements will enable you to seize opportunities and contribute to solving complex industrial challenges.

Glossary
Active area: The area on a distillation tray where the mass transfer takes place.
Activity coefficient (γ): A factor describing the deviations from Raoult's law. It can be interpreted as a correction factor for the concentration. γ is a function of temperature and concentration. It is not particularly dependent on the pressure.
Adsorbate: Phase on the surface of the adsorbent.
Adsorbent: Adsorptive agent; a solid which can develop bonds to one or more fluid substances to remove them from a liquid or a gas.
Adsorptive: One or more components in a gas or a liquid which can be adsorbed by the adsorbent.
Advanced process control: Number of measures for improvement of process economics with sophisticated control strategies, e. g. feedforward control or simulation-based predictive control.
Aerosol: An aerosol consists of liquid droplets in the vapor phase which are so small that they do not precipitate (Equation (9.9)) but large enough that they do not take part in molecular diffusion. Their occurrence is caused by oversaturation in the vapor phase. Well-known examples occur when sulfuric acid or hydrogen chloride are absorbed in aqueous phases or if the cooling in cryo-condensation is too strong (Chapter 13.4.1).
Autoignition temperature: The lowest temperature where a substance ignites in normal atmosphere without an external source of ignition.
Azeotrope: Phase equilibrium where vapor and liquid concentration of all components are identical, while the equilibrium pressure at constant temperature or, respectively, the equilibrium temperature at constant pressure shows a maximum or minimum. The separation of the components of an azeotrope is not possible with simple distillation. The closer the vapor pressures of two components are, the more probable is the occurrence of an azeotrope. We distinguish between homogeneous and heterogeneous azeotropes, where the latter show a miscibility gap in the liquid phase. Azeotropes are also possible for ternary mixtures.
Also, there are a few examples of quaternary azeotropes. There is no evidence for azeotropes consisting of more than four components.
Backpropagation: Short for "backward propagation of errors"; the method of fine-tuning the weights of a neural network based on the error rate obtained in the previous iteration.
Battery limits: The physical boundaries of a plant. Usually, flow meters are installed at this location in order to determine the economic performance of a plant.
BOD: Biochemical oxygen demand, Section 13.5.
Boiler feed water: Demineralized and pretreated water suitable for generating steam. Metal ions, salts, organics, oxygen, carbon dioxide, and hydrogen sulfide have been

removed to an acceptable level. Often, nitrogen-based weak caustics are added (ammonia, amines).
Boolean operator: Elementary operation in the Boolean algebra, i. e. conjunction (AND), disjunction (OR) and negation (NOT).
Brainstorming: A technique for solving problems in a group. It is based on spontaneous contributions of the particular members of the group. During the brainstorming phase, the proposals must not be subject to criticism.
By-product: A product formed due to undesired side reactions.
CAPEX: Capital Expenditure. Investment costs for a plant.
Car Sealed Open (CSO): Protection of a valve against accidental maloperation. A seal made of plastic must be broken on purpose before the valve can be actuated.
Cause & effect matrix: A matrix which gives an overview of the actions caused by interlocks in a process. For each deviation, the actions caused by the interlocks are marked so that the often complex interlock description can be interpreted more easily.
Check valve: A valve which is fully open in one flow direction. For reverse flow, it closes due to its mechanical construction.
Coalescer: An apparatus where droplets unify to a single phase.
COD: Chemical oxygen demand, Section 13.5.
Compressibility factor: Deviation from the ideal gas behavior, defined as Z = pv/RT. For an ideal gas, Z = 1. At the critical point, Z is in the range Z = 0.23–0.29. At very high pressures, Z can show large values, e. g. Z = 4.57 for ethylene at t = 100 °C, p = 3000 bar.
Contingency: Cost estimation item to account for uncertainties in the process or in project execution.
Cooling water: Water used for cooling purposes, usually taken from natural sources like rivers, wells, or sea water. In open cooling water cycles, it is used with no further treatment so that it might contain salts which can lead to fouling. The supply temperature ranges from 25–35 °C. Usually, the return temperature is 10 K higher.
In most applications, the returned cooling water is cooled down again in a cooling tower.
Co-product: A product generated because it occurs in the reaction equation of the desired main reaction.
Depreciation: Depreciation is the value loss of an asset with time. At the end of its lifetime, the value of the asset becomes zero. During this time, its value continuously decreases from the purchase price to zero. In the easiest case, this happens linearly with time; other courses are possible. Depreciations are costs which can be assigned each year, and therefore they have an impact on the amount of taxes the company has to pay due to the income statement. The higher the depreciation is, the less taxes have to be paid by the company. It is legally required that the depreciation is spread over the estimated lifetime. The course of the depreciation with time is defined by the government. The lifetime itself is also fixed by law; often 10 years for ISBL items and 20 years for OSBL items.


Design basis: A document which collects all the facts and assumptions known in advance before the project starts. It defines the boundary conditions of the project (environmental conditions, physical state and composition of raw materials, utilities, products, etc.).
Design pressure: Chapter 11.
Design temperature: Chapter 11.
Deviation: Departure from design and operating intention [210].
Dip-pipe: A feed line to a vessel which does not end at the nozzle but is elongated to the bottom inside the vessel so that it dips into the liquid during operation. The purpose is to prevent backflow of the vapor phase of the vessel.
Double jeopardy: Double jeopardy scenarios are two unrelated failures occurring simultaneously. As the probability of simultaneous independent errors is low, they should not be considered.
Enthalpy: An item explained so well in many thermodynamic textbooks [154]. To make an own short try: Enthalpy is defined as internal energy plus potential energy in the pressure field: h = u + pv. It is the usual quantity for the thermal energy of a flowing substance, whereas the internal energy is relevant for static systems.
Entropy: No reasonable explanation in just a few sentences possible. Again a short try: The entropy represents the experience associated with the behavior of a system. For a closed system where neither mass nor heat can pass the system border, the entropy reaches a maximum according to the Second Law of thermodynamics. Such a system will end up in a state which is the most probable one. An example: Two gases separated by a wall will mix when the wall is removed, until the concentration is the same in every volume element. Better and more extensive explanations can be found in [154] and [235].
Equation of state: A mathematical relationship between pressure (p), specific volume (v), and temperature (T).
Excess enthalpy: Enthalpy change when two or more liquid or gaseous components are mixed at constant temperature and pressure.
The mixture is supposed to remain liquid or gaseous, respectively.
Excess volume: Change of the specific volume which occurs when two or more liquid or gaseous components are mixed at constant temperature and pressure. The mixture is supposed to remain liquid or gaseous, respectively.
Expediting: Regular auditing of vendors to maintain quality and delivery dates.
Fixed costs: Operation costs which occur independently of the production, e. g. personnel costs.
Flash point: The lowest temperature where the vapor pressure is large enough to generate ignitable mixtures in ambient air. When the ignition source is removed, the substance stops burning.
Froude number (Fr): Ratio between gravity force and inertia force. The general calculation formula is Fr = w²/(g l), where l is a characteristic length.

Fuel gas: Natural gas. Normally, methane is the dominating component; the rest of the composition depends on the case. Ethane, propane, butanes, higher hydrocarbons, nitrogen, hydrogen, oxygen, carbon dioxide, carbon monoxide, helium, argon, hydrogen sulfide, and water can occur as components.
Fugacity coefficient (φ): A correction representing the deviation of the chemical potential from ideal gas behavior. It is thoroughly explained in [11].
Grashof number (Gr): Ratio between buoyancy force and friction force.
Guideword: A simple phrase used to identify possible deviations during a HAZOP procedure.
HAZID: Hazard Identification. Meeting where the main safety issues are discussed and listed. First recommendations can be given.
HAZOP: Hazard and Operability review. A formal and systematic approach for the identification of the potential of hazards and operating problems caused by deviations from the intended design and operation [210].
Heating value, lower: The heating value is the heat released when a substance is incinerated. Both temperatures before and after the incineration are 25 °C. The supplement "lower" means that no condensation (e. g. of water) takes place in the flue gas.
HETP: Height equivalent of one theoretical plate. The height of a layer of random or structured packing which corresponds to a theoretical stage. It is a measure of the separation efficiency of the packing. The reciprocal value is the number of theoretical stages per m.
Holdup: Relative liquid content of the packing during operation.
Internal energy: Quantity for the description of the thermal energy of a substance which is not flowing but encased in a vessel. A better explanation can be found in [154].
Interlock: A defined automatic intervention of the process control system. Reaction of the control system to encounter unacceptable deviations from normal process conditions.
ISBL: All items which are in the scope of the engineering company and which are directly related to the process.
Joule–Thomson effect: The temperature change (∂T/∂p)h, especially relevant for gases being throttled adiabatically. In most cases, the Joule–Thomson effect gives a temperature decrease, but an increase is also possible.
Lever rule: The lever rule says that in phase equilibrium diagrams showing the concentration on the x-axis, the ratio of the amounts of the phases corresponds to the ratio of the opposite lever arms of the tie-line.
Linear Regression: Statistical method used to model the relationship between a dependent variable and one (simple linear regression) or more (multiple linear regression) independent variables. In its simplest form, linear regression assumes a linear relationship between the dependent variable and the independent variables.
Liquidus line: The liquidus line is a limiting line in a solid-liquid phase equilibrium diagram. Above the liquidus line, there is no solid present.


Logistic (Logit) Regression: Statistical modelling technique used to predict the probability of a binary outcome based on one or more independent variables. Unlike linear regression, which predicts a continuous outcome, logistic regression is specifically designed for binary (yes/no, 0/1, true/false) or categorical outcomes limited to two possible categories.
Lower explosion limit (LEL): The lowest concentration of a substance in air where an explosion can take place after ignition.
Mach number (Ma): The Mach number is the ratio between the actual velocity and the speed of sound.
Makespan: Time needed for producing a product in a batch plant.
Node: A part of the process which covers a dedicated task, e. g. a distillation step.
NPSH value: Net positive suction head; see Chapter 8.1.
Nußelt number (Nu): Ratio between heat transfer by convection and heat transfer by conduction. The general formula is Nu = α l/λ, with l as a characteristic length.
Objective Function: Function describing the targets of an optimization process. Usually, it contains deviations which shall be minimized.
OPEX: Operational Expenditure. Operation costs of a plant, referring to a certain production rate.
OSBL: Outside battery limits. Auxiliary units which are required for the functioning of the production unit, but are not directly involved in the process. In contrast to the main units, they can be shared among different production units. Examples: steam generator, cooling water system, inert gas supply, refrigeration unit, instrument air unit.
Osmotic pressure: Equilibrium pressure difference across a semipermeable membrane, where the mass flow through the membrane comes to a stop. In this case, for example, a salt solution at high pressure can be in chemical equilibrium with the pure solvent on the other side of the membrane at low pressure.
Package unit: A package unit is a compilation of pieces of equipment that fulfill a certain task in the project.
It is delivered as a unit with defined inlets and outlets by one vendor. Examples are compressors, refrigeration units, crystallizers, or adsorber units.
PFD (process flow diagram): An engineering drawing illustrating the process without showing details which are not necessary for the understanding. It is often supplemented by process and equipment data.
pH value: Quantity describing acid/alkaline behavior and the strength of an electrolyte solution. It is defined as the negative of the logarithm of the H3O+ ion concentration in mol/l to the basis of 10. In fact, it would be more correct to replace concentration by activity, which is actually performed in process simulation. The pH of neutral water is 7. Acids have lower pHs, caustic solutions have higher ones.
PID (piping and instrumentation diagram): An engineering drawing showing the arrangement of piping and equipment like vessels, heat exchangers, columns, pumps, compressors, and the associated measurement and control devices. The pipes are

depicted together with information about nominal diameter, design pressure, medium, piping class, and an identification number. The function of the control loops should be clarified on the PID, together with other documents (cause and effect matrix, data sheets for measurement devices), as well as the installation height of the apparatuses.
Prandtl number (Pr): Ratio between kinematic viscosity and thermal diffusivity. Pr = η cp/λ.
Process control system: Computer system to enable the operators to keep the overview of the state of the whole process and to control it from a central measuring station. A "PFD" of the plant is displayed and controllers are represented as software, enabling the operator to change control parameters. Any measured data can be monitored and stored so that arbitrary trend lines can be visualized; furthermore, any actions can be carried out centrally from this station (e. g. vary the set point of a pressure or switch off a pump).
Process water: Water which has to be pretreated in a way that it can be used in the process.
Pseudo-critical pressure/temperature: For mixtures, a genuine critical point does not exist. If it is required in correlations, the pseudo-critical temperature/pressure can be evaluated as a corresponding quantity.
Python: An interpreted, object-oriented, high-level programming language with dynamic semantics.
R: A programming language for statistical computing and graphics.
RStudio: An integrated development environment for R.
Random Forests: Machine learning algorithm used for classification, regression and other tasks that involve predicting an output variable based on a set of input variables. It is an ensemble method that combines multiple decision trees to improve the accuracy and generalization of the model.
R&D: Research & Development.
Rectifying section: Column region between feed and top, where the light ends are enriched.
Regula falsi: A method to find the root of a nonlinear equation where an analytical solution is not possible. The derivatives are not needed. The principle is that the secant through two points is constructed. The intersection with the abscissa is an estimation for the root; together with a point of the function in the vicinity, a new secant can be constructed. The procedure is repeated until the root found is accurate enough.
Reynolds number (Re): Ratio between inertia force and friction force. The general calculation formula is Re = w l ρ/η, where l is a characteristic length, for instance the inner diameter of a pipe.
Safety valve: A valve which opens automatically when the pressure in an apparatus exceeds an acceptable value. In case of actuation, substance is released from the equipment so that the pressure is lowered.
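The regula falsi procedure from the entry above can be written down in a few lines of Python; the example function x² − 2 and the bracketing interval are chosen only for illustration.

```python
def regula_falsi(f, a, b, tol=1e-10, max_iter=100):
    """Find a root of f in [a, b] by regula falsi (false position);
    f(a) and f(b) must have opposite signs."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must bracket a root")
    for _ in range(max_iter):
        # Intersection of the secant through (a, fa), (b, fb) with the abscissa:
        c = b - fb * (b - a) / (fb - fa)
        fc = f(c)
        if abs(fc) < tol:
            return c
        if fa * fc < 0:  # root lies in [a, c]
            b, fb = c, fc
        else:            # root lies in [c, b]
            a, fa = c, fc
    return c

# Example: root of f(x) = x**2 - 2 in [1, 2], i.e. the square root of 2
root = regula_falsi(lambda x: x * x - 2.0, 1.0, 2.0)
print(root)
```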


Safeguard: Countermeasure to prevent or mitigate the risk of a deviation [210].
Separation factor: The ratio (y1/y2)/(x1/x2). If it is far away from 1, the separation is easy. The closer it is to 1, the more difficult the distillation becomes.
Speed of sound: The speed at which a pressure disturbance can be transported through a substance. It can be measured with a remarkable accuracy and is therefore a key quantity for the development of equations of state. Flows in pipes cannot exceed the speed of sound, which is a key fundamental for all pressure relief calculations.
Split block or separation block: A block in process simulation which just defines how a stream is split by a certain unit operation. The split can be different for the particular components. It is just a book-keeping function; the physical background is not questioned, and there is no physical check whether the suggested separation is possible or not.
Steamout: Steamout means that the vessel is cleaned by exposing the surfaces to steam, where high temperatures are applied. Polymer deposits and other solids might be melted due to the high temperature or dissolved and therefore removed from the wall.
Stripping section: Column region between bottom and feed, where the heavy ends are enriched.
Supply Chain Management (SCM): Refers to the management of the flow of goods, services and information from the suppliers of raw materials to the end-users of finished products. It involves the coordination and integration of activities such as sourcing, procurement, production, transportation (logistics), warehousing and distribution.
Support Vector Machines: SVMs are supervised machine learning algorithms used for both classification and regression tasks which are particularly effective when dealing with complex, high-dimensional datasets. The goal of SVM is to find the optimal hyperplane that separates different classes or regression targets with the maximum margin.
Surrogate: In the context of modelling, a surrogate is a simplified and computationally efficient representation of a complex system or process, often used when the original system is too complex, computationally expensive or difficult to interpret.
TA Luft: German guideline for limiting concentrations in exhaust air streams.
Tangent line: Level in a vessel which indicates the position of the cylindrical part; the bottom is left out.
Tear stream: In a chemical process there are usually recycle streams. They are a challenge for process simulation, as they cannot be known in advance. To come to a solution, they are first estimated, and after recalculation it is checked whether the estimate was accurate enough. If not, the estimate of this stream is revised in a certain manner depending on the convergence algorithm. These streams are called tear streams.
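The tear-stream handling described above can be illustrated with a minimal direct-substitution sketch in Python. The flowsheet (one mixer, one hypothetical unit that returns a fixed fraction of its inlet as recycle) and all numbers are invented for illustration; real process simulators use more sophisticated convergence algorithms:

```python
def converge_tear(feed, recycle_fraction, tol=1e-8, max_iter=200):
    """Successive substitution on a tear stream for a toy flowsheet.

    The recycle flow is first estimated (here: zero), the flowsheet is
    recalculated, and the estimate is revised with the recalculated value
    until two successive values agree within `tol`.
    """
    recycle = 0.0                                # initial estimate of the tear stream
    for _ in range(max_iter):
        inlet = feed + recycle                   # mixer
        new_recycle = recycle_fraction * inlet   # hypothetical unit operation
        if abs(new_recycle - recycle) < tol:
            return new_recycle                   # converged
        recycle = new_recycle                    # revise the estimate
    raise RuntimeError("tear stream did not converge")

# Analytic steady state for comparison: recycle = f * feed / (1 - f)
r = converge_tear(feed=100.0, recycle_fraction=0.3)
```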

Tie-in points: Defined points where a new part of a plant is connected to an existing one.
TOC: Total organic carbon, Section 13.5.
Turndown: Ratio between maximum and minimum load.
Typical: Depiction of an example for the arrangement of the standard equipment, i. e. valves, pumps, etc.
Upper explosion limit: The highest concentration of a substance in air where an explosion can take place after ignition.
Value engineering: An engineering procedure which generates suggestions for the economic and technical improvement of a process and assesses whether these suggestions should be realized or not. Usually, it starts with a brainstorming session, where new ideas are developed. In a second phase, people are assigned to evaluate the economic improvement of the particular measures. These people compile standardized reports so that their assessment becomes comprehensible for both current and future colleagues.
Vapor pressure: The pressure of a pure substance exerted by the vapor which is in equilibrium with its condensate in a closed system. The vapor pressure is a key quantity for the estimation of pure component properties and for evaporation, condensation and distillation processes.
Variable costs: Costs which are directly related to the production amount (raw materials, auxiliary chemicals, utilities).
Weber number (We): Measure of the relative importance of inertia forces compared to the surface tension. We = w² l ρ/σ, where l is a characteristic length.
Weeping: On a distillation tray, the liquid is supposed to leave the tray via outlet weir and downcomer to enter the tray below. If part of it leaves the tray through the sieve holes or valves, this is called weeping.
Working capital: Working capital comprises inventories of raw and auxiliary materials, catalysts, stores of products and intermediate products, debt claims, and liquid assets.
ω-method: The ω-method is a simplified model to consider two-phase flow through a pressure relief device. It assumes equilibrium between vapor and liquid phase, and friction between the phases is neglected. The liquid phase is assumed to be incompressible; the vapor phase obeys the ideal gas equation of state. The method defines a simplified equation of state for the two-phase flow, where the whole input information can be taken from the state at the inlet. There should be sufficient distance to the critical point. The critical pressure ratio and the maximum mass flux density can be calculated iteratively with a simple EXCEL file. The ω-method is considered to be conservative. Thorough information can be found in [220, 221].
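Assuming Leung's form of the implicit equation for the critical pressure ratio η_c = p_c/p_0 (cf. [220, 221]), the iterative solution mentioned above can be sketched in Python. This is a sketch only: the full sizing procedure additionally needs the maximum mass flux density and the validity checks of the method, and the equation form is quoted from the cited literature, not from this book:

```python
import math

def critical_pressure_ratio(omega, tol=1e-10):
    """Solve Leung's implicit equation for the critical pressure ratio
    eta_c = p_c/p_0 of the omega-method by simple bisection on (0, 1):

        eta^2 + (omega^2 - 2*omega)*(1 - eta)^2
              + 2*omega^2*ln(eta) + 2*omega^2*(1 - eta) = 0
    """
    def g(eta):
        return (eta ** 2 + (omega ** 2 - 2.0 * omega) * (1.0 - eta) ** 2
                + 2.0 * omega ** 2 * math.log(eta)
                + 2.0 * omega ** 2 * (1.0 - eta))

    lo, hi = 1e-8, 1.0 - 1e-12   # g(lo) < 0 (log term), g(hi) ~ +1
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Check: for omega = 1 the equation reduces to 1 + 2*ln(eta) = 0,
# i.e. eta_c = exp(-0.5), the ideal-gas value for kappa = 1
eta_c = critical_pressure_ratio(1.0)
```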

List of Symbols

Symbol (unit): explanation

a (N m⁴/mol²): attractive parameter in cubic equations of state
a: absorption coefficient
ai: activity
A (m²): area
Aij, Bij, Cij, Dij: interaction parameters for Wilson, NRTL, UNIQUAC
b (m³/mol): repulsive parameter in cubic equations of state
B, C, D: virial coefficients
c (m³/mol): parameter for volume translation
C (kg/h): capacity
cp (J/(mol K), J/(g K)): spec. isobaric heat capacity
cv (J/(mol K), J/(g K)): spec. isochoric heat capacity
D (m): diameter
d (m): diameter
dh (m): hydraulic diameter
f (Pa): fugacity
g (J/mol): spec. Gibbs energy
g (9.81 m/s²): gravity acceleration
G (J): Gibbs energy
gE (J/mol): spec. excess Gibbs energy
h (J/mol, J/g): specific enthalpy
hW (m): weir height
H (m): delivery height
H (J): enthalpy
Hij (Pa): Henry coefficient of component i in solvent j
Hu,i (J/mol): lower heating value of component i
hE (J/mol): excess enthalpy
I (A): electric current
k (W/(m² K)): heat transition coefficient
kij: interaction parameter in cubic equations of state
k (m): roughness
K: chemical equilibrium constant
KV (m³/h): valve characterization value
Li (kPa): vapor pressure at flash point
M (g/mol): molecular weight
Ma: Mach number
ṁ (kg/h): mass flow
n (mol): number of moles
p (Pa): pressure
P (€): price
P (W): power
Poyj: Poynting correction
ps (Pa): vapor pressure
Q̇ (W): heat flow
R (8.31446 J/(mol K)): universal gas constant

https://doi.org/10.1515/9783111028149-017

R (Ω): electrical resistance
R (K/W): thermal resistance
Re: Reynolds number
s (J/(mol K), J/(g K)): specific entropy
s (m): wall thickness
t (°C): Celsius temperature
T (K): absolute temperature
Tb (K): normal boiling point
Tr: reduced temperature, Tr = T/Tc
U (J): internal energy
U (V): voltage
u (m/s): velocity
u (J/mol, J/g): specific internal energy
v (m³/mol, m³/kg): specific volume
V (m³): volume
V̇ (m³/h): volume flow
wt (J/g): technical work
w (m/s): velocity
w∗ (m/s): speed of sound
wBl (m/s): bubble rising velocity
x (mol/mol, g/g): liquid concentration
x (mol/mol): vapor quality
xij (mol/mol): local concentration of molecule i around molecule j
y (mol/mol, g/g): vapor concentration
z (mol/mol): vapor or liquid concentration
Z: compressibility factor

Greek symbols

α (W/(m² K)): heat transfer coefficient
αij: separation factor
γj: activity coefficient
Δhm (J/mol, J/g): enthalpy of fusion
Δhv (J/mol, J/g): enthalpy of vaporization
ε: emission coefficient
η (Pa s): dynamic viscosity
η: efficiency
ηM: Murphree efficiency
κ: isentropic exponent
λ (W/(K m)): thermal conductivity
λ: friction factor
λ (m): wavelength
ν (m²/s): kinematic viscosity
Π (Pa): osmotic pressure
ρ (mol/m³, kg/m³): density
σ (N/m²): mechanical tension
σ (N/m): surface tension
σ (5.67⋅10⁻⁸ W m⁻² K⁻⁴): radiation constant
τ (s): time
φ: relative free hole area on a tray
φj: fugacity coefficient of component j
ω: acentric factor

Subscripts

a: axial
DC: downcomer
G: gas
i, j, k: components, molecules
jet: jet pump
L: liquid
m: melting
M: mixture
r: reduced (divided by the critical property)
res: residence
rev: reversible case
s: saturation
SV: safety valve
t: tangential
t: technical
U: ambient
V: vapor
w: weight

Superscripts

id: ideal gas
L: liquid
S: solid
V: vapor
∞: infinite dilution
′: saturated liquid
′′: saturated vapor
I: 1st liquid phase
II: 2nd liquid phase

Bibliography

[1] Franke A, Kussi J, Richert H, Rittmeister M, von Wedel L, Zeck S. Offene Standards verbinden. CITplus 2013;16(7/8):23–27.
[2] Mosberger E. Chemical Plant Design and Construction. Weinheim: Wiley-VCH; 2012. (Ullmann's Encyclopedia of Industrial Chemistry).
[3] Available at: www.mbaofficial.com/mba-courses/operations-management/what-are-theobjectives-principles-and-types-of-plant-layout/.
[4] Bowers P, Khangura B, Noakes K. Process plant engineering models; Available at: http://spedweb.com/index.php/component/content/article/392.html.
[5] Rovaglio M, Scheele T. Immersive virtual plant reality; Available at: http://software.schneiderelectric.com/pdf/white-paper/immersive-virtual-reality-plant-a-comprehensive-plant-crew-trainingsolution/.
[6] Borissova H. Produktlebensphasenorientierte Informationsvisualisierung mit graphischen Metaphern. Diploma thesis, University of Karlsruhe; 2005.
[7] Smith R. Chemical Process Design and Integration. West Sussex: John Wiley & Sons; 2005.
[8] Baerns M, Behr A, Brehm A, Gmehling J, Hinrichsen KO, Hofmann H, Kleiber M, Kockmann N, Onken U, Palkovits R, Renken A, Vogt D. Technische Chemie. 3rd ed. Wiley-VCH; 2023.
[9] Buskies U. Economic process optimization strategies. Chem Eng Technol 1997;20:63–70.
[10] Cie A, Lantz S, Schlarp R, Tzakas M. Renewable acrylic acid. Tech. rep., University of Pennsylvania; 2012.
[11] Gmehling J, Kleiber M, Kolbe B, Rarey J. Chemical Thermodynamics for Process Simulation. Weinheim: Wiley-VCH; 2019.
[12] Salerno D. Data on demand. Benefits of NIST TDE in Aspen Plus; 2014. Presentation ASPEN V8.4.
[13] Nannoolal Y, Rarey J, Ramjugernath D. Estimation of pure component properties, Part 3: Estimation of the vapor pressure of non-electrolyte organic compounds via group contributions and group interactions. Fluid Phase Equilibria 2008;269(1/2):117–133.
[14] Kleiber M, Axmann JK. Evolutionary algorithms for the optimization of Modified UNIFAC parameters. Computers and Chemical Engineering 1998;23:63–82.
[15] Krooshof G. Can molecular modeling meet the industrial need for robust and quick predictions?; 2014. Presentation ESAT, Eindhoven.
[16] van Ness HC. Thermodynamics in the treatment of vapor/liquid equilibrium (VLE) data. Pure Appl Chem 1995;67(6):859–872.
[17] Loehe JR, van Ness HC, Abbott MM. Vapor/liquid/liquid equilibrium. Total-pressure data and GE for water/methyl acetate at 50 degree C. J Chem Eng Data 1983;28(4):405–407.
[18] Gaw WJ, Swinton FL. Thermodynamic properties of binary systems containing hexafluorobenzene, Part 4: Excess Gibbs free energies of the three systems hexafluorobenzene + benzene, toluene, and p-xylene. Trans Faraday Soc 1968;64:2023–2034.
[19] van der Waals JD. Over de Continuiteit van den Gas- en Vloeistoftoestand. Thesis, Leiden; 1873.
[20] Bronstein IN, Semendjajew KA. Taschenbuch der Mathematik, 21st ed. Thun/Frankfurt a. M.: Verlag Harri Deutsch; 1984.
[21] Peng DY, Robinson DB. A new two-constant equation of state. Ind Eng Chem Fundam 1976;15(1):59–64.
[22] Diedrichs A, Rarey J, Gmehling J. Prediction of liquid heat capacities by the group contribution equation of state VTPR. Fluid Phase Equilibria 2006;248:56–69.
[23] Benedict M, Webb GB, Rubin LC. An empirical equation for thermodynamic properties of light hydrocarbons and their mixtures. I: Methane, ethane, propane and n-butane. J Chem Phys 1940;8:334–345.

https://doi.org/10.1515/9783111028149-018


[24] Benedict M, Webb GB, Rubin LC. An empirical equation for thermodynamic properties of light hydrocarbons and their mixtures. II: Mixtures of methane, ethane, propane, and n-butane. J Chem Phys 1942;10:747–758.
[25] Bender E. Equations of state for ethylene and propylene. Cryogenics 1975;667–673.
[26] Span R, Wagner W. Equations of state for technical applications. I: Simultaneously optimized functional forms for nonpolar and polar fluids. Int J Thermophys 2003;24(1):1–39.
[27] Span R, Wagner W. Equations of state for technical applications. II: Results for nonpolar fluids. Int J Thermophys 2003;24(1):41–109.
[28] Span R, Wagner W. Equations of state for technical applications. III: Results for polar fluids. Int J Thermophys 2003;24(1):111–162.
[29] Wagner W. FLUIDCAL. Software for the calculation of thermodynamic and transport properties of several fluids. Tech. rep., Ruhr-Universität Bochum; 2005.
[30] Kunz O, Wagner W. The GERG-2008 wide-range equation of state for natural gases and other mixtures: An expansion of GERG-2004. J Chem Eng Data 2012;57:3032–3091.
[31] Wilson GM. Vapor-liquid equilibrium. XI: A new expression for the excess free energy of mixing. J Am Chem Soc 1964;20:127–130.
[32] Renon H, Prausnitz JM. Local compositions in thermodynamic excess functions for liquid mixtures. AIChE Journal 1968;14(1):135–145.
[33] Abrams DS, Prausnitz JM. Statistical thermodynamics of liquid mixtures: A new expression for the excess Gibbs energy of partly or completely miscible systems. AIChE Journal 1975;21:116–128.
[34] Gmehling J, Brehm A. Lehrbuch der Technischen Chemie. Band 2: Grundoperationen. Stuttgart/New York: Georg Thieme Verlag; 1996.
[35] Prausnitz JM, Lichtenthaler RN, de Azevedo EG. Molecular thermodynamics of fluid-phase equilibria. Prentice-Hall; 1986.
[36] Anisimov VM, Zelesnyi VP, Semenjuk JV, Cerniak JA. Thermodynamic properties of the mixture FC218-HFC134a. (Russ) Inzenernyi fiziceskij zyrnal 1996;69(5):756–760.
[37] Gmehling J, Menke J, Krafczyk J, Fischer K. Azeotropic Data, 2nd ed. Weinheim: Wiley-VCH; 2004. 3 volumes.
[38] Hankinson RW, Thomson GH. A new correlation for saturated densities of liquids and their mixtures. AIChE Journal 1979;25(4):653–663.
[39] Wagner W. New vapour pressure measurements for argon and nitrogen and a new method of establishing rational vapour pressure equations. Cryogenics 1973;13:470–482.
[40] Moller B, Rarey J, Ramjugernath D. Estimation of the vapor pressure of non-electrolyte organic compounds via group contributions and group interactions. Journal of Molecular Liquids 2008;143:52–63.
[41] Poling BE, Prausnitz JM, O'Connell JP. The Properties of Gases and Liquids. McGraw-Hill; 2001.
[42] Kleiber M, Joh R. Liquids and gases. VDI Heat Atlas, 2nd ed., chap. D3.1. Berlin/Heidelberg: Springer-Verlag; 2010.
[43] McGarry J. Correlation and prediction of the vapor pressures of pure liquids over large pressure ranges. Ind Eng Chem Proc Des Dev 1983;22:313–322.
[44] Hoffmann W, Florin F. Zweckmäßige Darstellung von Dampfdruckkurven. Verfahrenstechnik, Z VDI-Beiheft 1943;2:47–51.
[45] Cordes W, Rarey J. A new method for the estimation of the normal boiling point of non-electrolyte organic compounds. Fluid Phase Equilibria 2002;201:409–433.
[46] Franck EU, Meyer F. Fluorwasserstoff III, Spezifische Wärme und Assoziation im Gas bei niedrigem Druck. Z Elektrochem, Ber Bunsenges Phys Chem 1959;63(5):571–582.
[47] Chen CC, Britt HI, Boston JF, Evans LB. Local composition model for excess Gibbs energy of electrolyte systems, Part I: Single solvent, single completely dissociated electrolyte systems. AIChE Journal 1982;28(4):588–596.

[48] Chen CC, Evans LB. A local composition model for excess Gibbs energy of aqueous electrolyte systems. AIChE Journal 1986;32(3):444–454.
[49] de Hemptinne JC, Ledanois JM, Mougin P, Barreau A. Select Thermodynamic Models for Process Simulation. Paris: Edition Technip; 2012.
[50] Löffler HJ. Thermodynamik, Band 2: Gemische und chemische Reaktionen. Berlin/Heidelberg: Springer-Verlag; 1969.
[51] Kleiber M. The trouble with cpliq. Ind Eng Chem Res 2003;42:2007–2014.
[52] Soave G. Equilibrium constants from a modified Redlich–Kwong equation of state. Chem Eng Sc 1972;27:1197–1203.
[53] Plöcker U, Knapp H, Prausnitz JM. Calculation of high-pressure vapor-liquid equilibria from a corresponding-states correlation with emphasis on asymmetric mixtures. Ind Eng Chem Proc Des Dev 1978;17(3):324–332.
[54] Enders S. Polymer thermodynamics. In: Gmehling J, Kleiber M, Kolbe B, Rarey J. Chemical Thermodynamics for Process Simulation. Weinheim: Wiley-VCH; 2019.
[55] Available at: www.ddbst.de.
[56] Available at: www.nist.gov.
[57] Fredenslund Å, Jones RL, Prausnitz JM. Group-contribution estimation of activity coefficients in nonideal liquid mixtures. AIChE Journal 1975;21(6):1086–1099.
[58] Gmehling J, Li J, Schiller M. A modified UNIFAC model. 2. Present parameter matrix and results for different thermodynamic properties. Ind Eng Chem Res 1993;32:178–193.
[59] Weidlich U, Gmehling J. A modified UNIFAC model. 1. Prediction of VLE, hE, and γ∞. Ind Eng Chem Res 1987;26:1372–1381.
[60] Available at: www.unifac.org.
[61] Schmid B, Schedemann A, Gmehling J. Extension of the VTPR group contribution equation of state: Group interaction parameters for 192 group contributions and typical results. Ind Eng Chem Res 2014;53(8):3393–3405.
[62] Klamt A, Eckert F. COSMO-RS: A novel and efficient method for the a priori prediction of thermophysical data of liquids. Fluid Phase Equilibria 2000;172:43–72.
[63] Kleiber M, Joh R. Calculation methods for thermophysical properties. VDI Heat Atlas, 2nd ed., chap. D1. Berlin/Heidelberg: Springer-Verlag; 2010.
[64] Nannoolal Y, Rarey J, Ramjugernath D. Estimation of pure component properties, Part 2: Estimation of the saturated liquid viscosity of non-electrolyte organic compounds via group contributions and group interactions. Fluid Phase Equilibria 2009;281(2):97–119.
[65] Technical University of Denmark DoEE. Heat transfer fluid calculator; Version 2.01, Copyright 2000.
[66] Matejovski D. Die Modernität der Industrie und die Ästhetisierung des Ökonomischen. Priddat BP, West KW, editors. Die Modernität der Industrie. Marburg: Metropolis-Verlag; 2012.
[67] Minton PE. Handbook of Evaporation Technology. Westwood, NJ: Noyes Publications; 1986.
[68] Linnhoff B. Pinch technology training course. Frankfurt: Linnhoff March Ltd.; 1995.
[69] Dhole VR, Linnhoff B. Distillation column targets. Comp Chem Eng 1993;27(5/6):549–560.
[70] Toghraei M. Wide design margins do not improve engineering. Hydrocarb Process 2014;93(1):69–71.
[71] Baehr HD, Stephan K. Wärme- und Stoffübertragung. 9th ed. Berlin/Heidelberg: Springer-Verlag; 2016.
[72] Verein Deutscher Ingenieure. VDI Heat Atlas. Berlin/Heidelberg: Springer-Verlag; 2010.
[73] Frankel M. Facility Piping Systems Handbook, 2nd ed. New York: McGraw-Hill; 2002.
[74] Perea E. Mitigate heat exchanger corrosion with better construction materials. Hydrocarb Process 2013;92(12):49–51.
[75] Bouhairie S. Selecting baffles in shell-and-tube heat exchangers. Chemical Engineering Progress 2012;27–33.
[76] HTRI manual; HTRI Xchanger Suite 6.0.

[77] Drögemüller P. The use of hiTRAN wire matrix elements to improve the thermal efficiency of tubular heat exchangers in single and two-phase flow. Chem-Ing-Tech 2015;87(3):188–202.
[78] Ackermann G. Wärmeübergang und molekulare Stoffübertragung im gleichen Feld bei großen Temperatur- und Partialdruckdifferenzen. VDI-Forschungsheft 1937;8(382).
[79] Schlünder EU. Film condensation of binary mixtures with and without inert gas. VDI Heat Atlas, 2nd ed., chap. J2. Berlin/Heidelberg: Springer-Verlag; 2010.
[80] Colburn AP, Drew TB. The condensation of mixed vapors. Trans Am Inst Chem Engrs 1937;33:197–215.
[81] Arneth S. Dimensionierung und Betriebsverhalten von Naturumlaufverdampfern. Thesis, TU München, München; 1999.
[82] Arneth S, Stichlmair J. Characteristics of thermosiphon reboilers. Int J Therm Sci 2001;40:385–392.
[83] Baars A, Delgado A. Non-linear effects in a natural circulation evaporator: Geysering coupled with manometer oscillations. Heat Mass Transfer 2007;43:427–438.
[84] Dialer K. Die Wärmeübertragung beim Naturumlaufverdampfer. Thesis, ETH Zürich; 1983.
[85] Scholl S, Rinner M. Verdampfung und Kondensation. Goedecke R, editor. Fluid-Verfahrenstechnik. Weinheim: Wiley-VCH; 2011.
[86] Das T. Achieve optimal heat recovery in a kettle exchanger. Hydrocarb Process 2012;91(3):87–88.
[87] Bethge D. Kurzweg- und Molekulardestillation. Jorisch W, editor, Vakuumtechnik in der Chemischen Industrie. Weinheim: Wiley-VCH; 1999.
[88] Martin H. Pressure drop and heat transfer in plate heat exchangers. VDI Heat Atlas, 2nd ed., chap. N6. Berlin/Heidelberg: Springer-Verlag; 2010.
[89] Gmehling J, Kleiber M, Steinigeweg S. Thermische Verfahrenstechnik. Chemische Technik: Prozesse und Produkte, 5th ed. Weinheim: Wiley-VCH; 2006.
[90] Schmidt KG. Heat transfer to finned tubes. VDI Heat Atlas, 2nd ed., chap. M1. Berlin/Heidelberg: Springer-Verlag; 2010.
[91] Müller-Steinhagen H. Fouling of heat exchanger surfaces. VDI Heat Atlas, 2nd ed., chap. C4. Berlin/Heidelberg: Springer-Verlag; 2010.
[92] Zhenlu F, Dengfeng L, Xiangling Z, Xing Z. Determine fouling margins in tubular heat exchanger design. Hydrocarb Process 2015;94(9):79–82.
[93] Dole RH, Vivekanand S, Sridhar S. Mitigate vibration issues in shell-and-tube heat exchangers. Hydrocarb Process 2015;94(12):57–60.
[94] Gelbe H, Ziada S. Vibration of tube bundles in heat exchangers. VDI Heat Atlas, 2nd ed., chap. O2. Berlin/Heidelberg: Springer-Verlag; 2010.
[95] Kister HZ. Distillation -Design-. McGraw-Hill; 1992.
[96] Kister HZ. Distillation -Operation-. McGraw-Hill; 1990.
[97] Sattler K. Thermische Trennverfahren, 2nd ed. Weinheim: VCH Verlagsgesellschaft; 1995.
[98] Stichlmair J, Fair JR. Distillation: Principles and Practice. New York: Wiley-VCH; 1998.
[99] Schultes M. The impact of tower internals on packing performance. Chem-Ing-Tech 2014;86(5):658–665.
[100] Kister HZ, Mathias P, Steinmeyer DE, Penney WR, Crocker BB, Fair JR. Equipment for distillation, gas absorption, phase dispersion, and phase separation. Green DW, Perry RH, editors, Perry's Chemical Engineers' Handbook, 8th ed., chap. 14. McGraw-Hill.
[101] Stupin WJ, Kister HZ. System limit: The ultimate capacity of fractionators. TransIChemE 2003;81(A):136–146.
[102] Stichlmair J, Bravo JL, Fair JR. General model for prediction of pressure drop and capacity of countercurrent gas/liquid packed columns. Gas Separation & Purification 1989;3:19–28.
[103] Engel V. Fluiddynamik in Packungskolonnen für Gas-Flüssig-Systeme. Fortschritt-Berichte, Reihe 3: Verfahrenstechnik. VDI-Verlag; 1999.
[104] Billet R, Schultes M. Prediction of mass transfer columns with dumped and arranged packings. TransIChemE 1999;77(Part A):498–504.
[105] Spiegel L, Meier W. Structured packings. Chem Plants + Processing 1995;28(1):36–38.

[106] Kister HZ. Distillation -Troubleshooting-. Hoboken, NJ: Wiley-Interscience; 2006.
[107] Kister HZ. Practical distillation technology; 2013. Course Notes.
[108] Bolles WL. Optimum bubble-cap tray design. Part I: Tray dynamics. Petroleum Processing 1956;65–80.
[109] Bolles WL. Optimum bubble-cap tray design. Part II: Design standards. Petroleum Processing 1956;82–95.
[110] Bolles WL. Optimum bubble-cap tray design. Part III: Design technique. Petroleum Processing 1956;72–95.
[111] Bolles WL. Optimum bubble-cap tray design. Part IV: Design example. Petroleum Processing 1956;109–120.
[112] Kister HZ. Effects of design on tray efficiency in commercial towers. Chem Eng Prog 2008;104(6):39–47.
[113] Stichlmair J. Grundlagen der Dimensionierung des Gas/Flüssigkeit-Kontaktapparates Bodenkolonne. Weinheim/New York: verlag chemie; 1978.
[114] Rennie J, Evans F. The formation of froths and foams above sieve plates. British Chemical Engineering 1962;7(7):498–502.
[115] Senger G, Wozny G. Experimentelle Untersuchung von Schaum in Packungskolonnen. Chem-Ing-Tech 2011;83(4):503–510.
[116] Pahl MH, Franke D. Schaum und Schaumzerstörung – ein Überblick. Chem-Ing-Tech 1995;67(3):300–312.
[117] Brierly RJP, Whyman PJM, Erskine JB. Flow induced vibration of distillation and absorption column trays. I. Chem. E. Symp. Ser 1979;56:2.4/45–2.4/63.
[118] Priestman GH, Brown DJ. The mechanism of pressure pulsations in sieve-tray columns. Trans IChemE (London) 1981;59:279–282.
[119] Priestman GH, Brown DJ. Pressure pulsations and weeping at elevated pressures in a small sieve-tray column. I. Chem. E. Symp. Ser. 1987;104:B407–B422.
[120] Wijn EF. Pulsation of the two-phase layer on trays. I. Chem. E. Symp. Ser. 1982;73:D79–D101.
[121] Fractionation Research Inc. Causes and prevention of packing fires. Chemical Engineering; 2007, Jul. p. 34–42.
[122] Schuler H. Was behindert den praktischen Einsatz moderner regelungstechnischer Methoden in der Prozeßindustrie? Automatisierungstechnische Praxis 1992;34(3):116–123.
[123] Friedman YZ, Kane L. Two DCS control configurations: Mass balance and heat balance; 2010. Webinar, Hydrocarbon Processing.
[124] Sorensen E. Design and operation of batch distillation. Gorak A, Sorensen E, editors. Distillation: Fundamentals and Principles. Elsevier; 2014.
[125] Brinkmann T, Ebert K, Pingel H, Wenzlaff A, Ohlrogge K. Prozessalternativen durch den Einsatz organisch-anorganischer Kompositmembranen für die Dampfpermeation. Chem-Ing-Tech 2004;76(10):1529–1533.
[126] Berascola N, Eisele P. Aspen rate-based distillation; 2011. Seminar, March 21st.
[127] Taylor R, Kooijman HA. Mass transfer in distillation. Gorak A, Sorensen E, editors. Distillation: Fundamentals and Principles. Elsevier; 2014.
[128] Duncan JB, Toor HL. An experimental study of three component gas diffusion. AIChE J 1962;8(1):38–41.
[129] Krishna R. Uphill diffusion in multicomponent mixtures. Chem Soc Rev 2015;44:2812.
[130] Schaber K. Aerosolbildung durch spontane Phasenübergänge bei Absorptions- und Kondensationsvorgängen. Chem-Ing-Tech 1995;67(11):1443–1452.
[131] Schaber K. Aerosolbildung bei der Absorption und Partialkondensation. Chem-Ing-Tech 1990;62(10):793–804.
[132] Kaibel G. Distillation columns with vertical partitions. Chem Eng Technol 1987;10:92–98.
526 � Bibliography

[133] Galindez H, Fredenslund Å. Simulation of multicomponent batch distillation processes. Computers and Chem Eng 1988;12(4):281–288. [134] Hildebrand JH, Scott RL. The solubility of nonelectrolytes. J Phys Coll Chem 1949;53:944–947. [135] Arlt W. Thermische Grundoperationen der Verfahrenstechnik. Lecture notes, Technical University of Berlin; 1999. [136] Bocangel J. Design of liquid-liquid gravity separators. Chemical Engineering; 1986, Feb. p. 133–135. [137] Henschke M. Dimensionierung liegender Flüssig-Flüssig-Abscheider anhand diskontinuierlicher Absetzversuche. VDI Fortschritt-Berichte. (Reihe 3; no. 379), Düsseldorf: VDI-Verlag. [138] Henschke M, Schlieper LH, Pfennig A. Determination of a coalescence parameter from batch-settling experiments. Chem Eng J 2002;85:369–378. [139] Pfennig A, Pilhofer T, Schröter J. Flüssig-Flüssig-Extraktion. Goedecke R, editor, Fluid-Verfahrenstechnik. Weinheim: Wiley-VCH; 2011. [140] Huang C, Xu T, Zhang Y, Xue Y, Chen G. Application of electrodialysis to the production of organic acids: State-of-the-art and recent developments. J of Membrane Science 2007;288:1–12. [141] Kucera J. Reverse Osmosis – Industrial Applications and Processes. Salem, MA: Scrivener Publishing; 2010. [142] Nunes SP, Peinemann KV, editors. Membrane Technology in the Chemical Industry. Weinheim/New York: Wiley-VCH; 2001. [143] Bathen D, Breitbach M. Adsorptionstechnik. Berlin/Heidelberg: Springer-Verlag; 2001. [144] Bethge D. Energy-saving concepts for the dehydration of alcohol. Zuckerindustrie 2005;130(3):213–214. [145] Samant KD, O’Young L. Understanding crystallization and crystallizers. CEP; 2006, Oct. p. 28–37. [146] Beckmann W, editor. Crystallization – Basic Concepts and Industrial Application. Weinheim: Wiley-VCH; 2013. [147] Mersmann A, Kind M, Stichlmair J. Thermische Verfahrenstechnik, 2nd ed. Springer-Verlag; 2005. [148] Available at: www.bungartz.de. [149] Kernan D, Choung E. 
Run your pumps like a pro: Tips for boosting production and reducing risk at the refinery. Hydrocarbon Processing Webcast; 2010. [150] Beitz W, Grote KH, editors. Dubbel – Taschenbuch für den Maschinenbau, 19th ed. Berlin/Heidelberg: Springer-Verlag; 1997. Ch. B. [151] Sterling Fluid Systems Group. Liquid vacuum pumps and liquid ring compressors; Available at: www.sterlingfluidsystems.com. [152] Deiters UK, Imre AR, Quiñones-Cisneros SE. Isentropen von Fluiden im Zweiphasengebiet. ProcessNet, Thermodynamic Colloquium. Stuttgart; 2014. [153] Grave H. Dampfstrahl-Vakuumpumpen. Jorisch W, editor, Vakuumtechnik in der Chemischen Industrie. Weinheim: Wiley-VCH; 1999. [154] Baehr HD, Kabelac S. Thermodynamik. Berlin/Heidelberg: Springer-Verlag; 2006. [155] Jorisch W, editor. Vakuumtechnik in der Chemischen Industrie. Weinheim: Wiley-VCH; 1999. [156] Vogel HH. Die Finite-Elemente-Methode am Beispiel des Strahlapparates. Chem-Ing-Tech 2006;78(1/2):124–133. [157] GEA Wiegand GmbH. Überlegungen bei der Projektierung einer Dampfstrahl-Vakuumpumpe. [158] Toghraei M. Overflow systems are the last line of defense. Hydrocarb Process 2013;92(5):T92–T94. [159] Marr R, Moser F, Husung G. Schwerkraft- und Strickabscheider – Berechnung liegender Gas-Flüssig-Abscheider. verfahrenstechnik 1976;10(1):34–37. [160] Marr R, Moser F. Die Auslegung von stehenden Gas-Flüssig-Abscheidern – Schwerkraft- und Gestrickabscheider. verfahrenstechnik 1975;9(8):379–382. [161] Brauer H. Grundlagen der Einphasen- und Mehrphasenströmungen. Verlag Sauerländer; 1971. [162] Bürkholz A. Droplet Separation. Weinheim: Wiley-VCH; 1989. [163] Jess A, Wasserscheid P. Chemical Technology. Weinheim: Wiley-VCH; 2013.

[164] Montebelli A, Tronconi E, Orsenigo C, Ballarini N. Kinetic and modeling study of the ethylene oxychlorination to 1,2-dichloroethane in fluidized-bed reactors. Ind Eng Chem Res 2015;54(39):9513–9524.
[165] Riedel E. Allgemeine und Anorganische Chemie, 10th ed. Berlin/New York: Walter de Gruyter; 2010.
[166] Macejko B. Is your plant vulnerable to brittle fracture? Hydrocarb Process 2014;93(11):67–78.
[167] Sims JR. Improve evaluation of brittle-fracture resistance for vessels. Hydrocarb Process 2013;92(1):59–62.
[168] Rähse W. Praktische Hinweise zur Wahl des Werkstoffs von Maschinen und Apparaten. Chem-Ing-Tech 2014;86(8):1163–1179.
[169] Wagner MH. Heat transfer to non-newtonian fluids. VDI Heat Atlas, 2nd ed., chap. M4. Berlin/Heidelberg: Springer-Verlag; 2010.
[170] Truckenbrodt E. Lehrbuch der angewandten Fluidmechanik. Berlin/Heidelberg/New York/Tokyo: Springer-Verlag; 1983.
[171] Lockhart RW, Martinelli RC. Proposed correlation of data for isothermal two-phase, two-component flow in pipes. Chem Eng Prog 1949;45(1):39–48.
[172] Friedel L. Druckabfall bei der Strömung von Gas/Dampf-Flüssigkeits-Gemischen in Rohren. Chem-Ing-Tech 1978;50(3):167–180.
[173] Friedel L. Eine dimensionslose Beziehung für den Reibungsdruckabfall bei Zweiphasenrohrströmung zwischen Wasser und R12. vt Verfahrenstechnik 1979;13(4):241–246.
[174] Friedel L. Improved friction pressure drop correlations for horizontal and vertical two phase pipe flow. 3R international 1979;18(7):485–491.
[175] Beggs HD, Brill JP. A study of two-phase flow in inclined pipes. J Petrol Technol 1973;607–617.
[176] Muschelknautz S. Druckverlust in Rohren und Rohrkrümmern bei Gas-Flüssigkeit-Strömung. VDI-Wärmeatlas, 8th ed., chap. Lgb. Berlin/Heidelberg: Springer-Verlag; 1997.
[177] Schmidt H. Two-phase gas-liquid flow. VDI Heat Atlas, 2nd ed., chap. L2. Berlin/Heidelberg: Springer-Verlag; 2010.
[178] Lee S, Seok W. Major accident and failure of stationary equipment in the RFCCU. Hydrocarb Process 2016;95(1):65–70.
[179] Sahoo T. Pick the right valve. Chemical Engineering; 2004, Aug. p. 34–39.
[180] Nitsche M. Industriearmaturen. Chemie-Technik 1985;14(3):99–102.
[181] Johnson RD, Lee B. Valve design reduces costs and increases safety for US refineries. Hydrocarb Process 2010;89(8):37–40.
[182] Stepanek D. Was den Betreiber von Massedurchflussmessern nach dem CORIOLIS-Prinzip interessiert. Tech. rep., Schwing Verfahrenstechnik GmbH; 2004. Corporate Publication.
[183] Ignatowitz E. Chemietechnik, 7th ed. Haan-Gruiten: Verlag Europa-Lehrmittel; 2003.
[184] Wagner W, Kruse A. Properties of Water and Steam. Berlin/Heidelberg/New York: Springer-Verlag; 1998.
[185] Numrich R, Müller J. Filmwise condensation of pure vapors. VDI Heat Atlas, 2nd ed., chap. J1. Berlin/Heidelberg: Springer-Verlag; 2010.
[186] Spirax Sarco GmbH. Grundlagen der Dampf- und Kondensattechnologie; 2014. Available at: www.spiraxsarco.de.
[187] Available at: www.spiraxsarco.com/Resources/Pages/steam-engineering-tutorials.aspx.
[188] Spirax Sarco GmbH. Kavitation ade! CALORIE 2011;79:10–11.
[189] Glück A, Hunold D. Oil-based and synthetic heat transfer media. VDI Heat Atlas, 2nd ed., chap. D4.3. Berlin/Heidelberg: Springer-Verlag; 2010.
[190] Krakat G. Cryostatic bath fluids, aqueous solutions, and glycols. VDI Heat Atlas, 2nd ed., chap. D4.2. Berlin/Heidelberg: Springer-Verlag; 2010.
[191] Kleiber M. Exhaust air treatment in chemical industry. Gierycz P, Malanowski SK, editors, Thermodynamics for Environment. Warszawa: Information Processing Centre; 2004.

[192] www.verwaltungsvorschriften-im-internet.de. Neufassung der Ersten Allgemeinen Verwaltungsvorschrift zum Bundes-Immissionsschutzgesetz (Technische Anleitung zur Reinhaltung der Luft – TA Luft). August 18th, 2021.
[193] Herzog F, Schulte M. Abluftreinigung durch Kryokondensation. UMWELT 1998;1/2:49–53.
[194] Messer Group GmbH. DuoCondex-Process; Available at: www.messergroup.com.
[195] Domschke T, Steinebrunner K, Christill M, Seifert H. Verbrennung chlorierter Kohlenwasserstoffe – Die Deacon-Reaktion in Rauchgasen während der Abkühlung. Chem-Ing-Tech 1996;68(5):575–579.
[196] Görner K, Hübner K. Gasreinigung und Luftreinhaltung. Berlin/Heidelberg/New York: Springer-Verlag; 2002.
[197] Kugeler K, Phlippen PW. Energietechnik. Berlin/Heidelberg/New York: Springer-Verlag; 1993.
[198] Kolar J. Stickstoffoxide und Luftreinhaltung. Berlin/Heidelberg/New York: Springer-Verlag; 1990.
[199] Hevia MAG, Perez-Ramirez J. Assessment of the low-temperature EnviNOx variant for catalytic N2O abatement over steam-activated FeZSM-5. Appl Catal B 2008;77(3/4):248–254.
[200] Schwefer M, Siefert R, Groves MCE, Maurer R. Verfahren zur gemeinsamen Beseitigung von N2O und NOx – Erste großtechnische Installation im Abgas der HNO3-Produktion. Chem-Ing Tech 2003;75(8):1048–1049.
[201] Venkatesh M, Woodhull J. Pick the right thermal oxidizer. Chemical Engineering 2003;67–70.
[202] Müller G. Absorption organischer Lösemittel mit Glykolethern. VDI-Berichte 1989;730:373–394.
[203] Bay K, Wanko H, Ulrich J. Biodiesel – Hoch siedendes Absorbens für die Gasreinigung. Chem-Ing-Tech 2004;76(3):328–333.
[204] Available at: www.desotec.com.
[205] Sörensen M, Zegenhagen F, Weckenmann J. State of the art wastewater treatment in pharmaceutical and chemical industry by advanced oxidation. Pharm Ind 2015;77(4):594–607.
[206] Available at: www.enviolet.com.
[207] Onken U, Behr A. Chemische Prozesskunde. Stuttgart: Georg Thieme Verlag; 1996.
[208] Available at: https://en.wikipedia.org/wiki/West_Fertilizer_Company_explosion.
[209] Available at: https://en.wikipedia.org/wiki/2015_Tianjin_explosions.
[210] Nolan DP. Application of HAZOP and What-If-Safety Reviews to the Petroleum, Petrochemical and Chemical Industries. Park Ridge, NJ: Noyes Publications; 1994.
[211] Stephan D. Sicher ist Sicher: Warum SIL keine Pflicht, aber trotzdem ein Muss ist; Available at: www.process.vogel.de/sicherheit/articles/483503/.
[212] Bozoki G. Überdrucksicherungen für Behälter und Rohrleitungen. Verlag TÜV Rheinland GmbH; 1977.
[213] Renfro J, Stephenson G, Marques-Riquelme E, Vandu C. Use dynamic models when designing high-pressure vessels. Hydrocarb Process 2014;93(5):71–76.
[214] Feuerstein A. Dynamische Berechnung von Abblasevorgängen. Master thesis, TU Darmstadt; 2015.
[215] Venting Atmospheric and Low-Pressure Storage Tanks. American Petroleum Institute, 5th ed.; 1998. API Standard 2000.
[216] Pressure relieving and depressuring systems. American Petroleum Institute, 5th ed.; 2007. ANSI/API Standard 521.
[217] LESER. Engineering handbook; Available at: https://www.leser.com/en/support-and-tools/engineering/.
[218] Yeh G, Griman J, Najrani M. Recover from a steam reformer tube rupture. Hydrocarb Process 2013;92(6):85–88.
[219] Elliott B. Using DIERS two-phase equations to estimate tube rupture flowrates. Hydrocarb Process 2001;8:49–54.
[220] Leung JC, Grolmes MA. A generalized correlation for flashing choked flow of initially subcooled liquid. AIChE J 1988;34(4):688–691.
[221] Leung JC. A generalized correlation for one-component homogeneous equilibrium flashing choked flow. AIChE J 1986;32(10):1743–1746.


[222] Staak D, Repke JU, Wozny G. Simulation von Entlastungsvorgängen bei Rektifikationskolonnen. Chem-Ing-Tech 2008;80(1/2):129–135. [223] Smith D, Burgess J. Relief valve and flare action items: What plant engineers should know. Hydrocarb Process 2012;91(11):41–46. [224] ISO 4126. Safety devices for protection against excessive pressure. Beuth Verlag, Berlin; 2010. [225] Schmidt J, Westphal F. Praxisbezogenes Vorgehen bei der Auslegung von Sicherheitsventilen und deren Abblaseleitungen für die Durchströmung mit Gas/Dampf-Flüssigkeitsgemischen – Teil 1. Chem-Ing-Tech 1997;69(3):312–319. [226] Fründt J. Untersuchungen zum Einfluß der Flüssigkeitsviskosität auf die Druckentlastung. Aachen: Shaker-Verlag; 1997. Thesis. [227] Schmidt J, Westphal F. Praxisbezogenes Vorgehen bei der Auslegung von Sicherheitsventilen und deren Abblaseleitungen für die Durchströmung mit Gas/Dampf-Flüssigkeitsgemischen – Teil 2. Chem-Ing-Tech 1997;69(8):1074–1091. [228] Brodhagen A, Schmidt F. Berechnen von kritischen Massenströmen. VDI-Wärmeatlas. 10th ed., chap. Lbd. Berlin/Heidelberg: Springer-Verlag; 2006. [229] Sizing, Selection, and Installation of Pressure-Relieving Devices in Refineries. American Petroleum Institute, 7th ed.; 2000. API Recommended Practice 520. [230] Bauerfeind K, Friedel L. Berechnung der dissipationsbehafteten kritischen Düsenströmung realer Gase. Forschung im Ingenieurwesen 2003;67(6):227–235. [231] Westphal F, Christ M. Erfahrungen aus der Praxis mit dem 3 %-Kriterium für die Zuleitung von Sicherheitsventilen. Technische Sicherheit 2014;4(3):28–31. [232] LESER. Chattering safety valve; Available at: https://www.leser.com/en-us/the-company/ neuigkeiten/news/fehlfunktion-schlagen/. [233] Klapötke TM. Chemistry of High-Energy Materials. Berlin/New York: Walter de Gruyter; 2011. [234] Pfenning D. Inertisieren im Sekundentakt; 2014. Presentation, FH Aachen. [235] Thess A. The Entropy Principle. Berlin/Heidelberg: Springer-Verlag; 2010. [236] Kittredge CP, Rowley DS. 
Resistance coefficients for laminar and turbulent flow through one-half-inch valves and fittings. Trans ASME 1957;79:1759–1766. [237] Gersten K. Einführung in die Strömungsmechanik, 3rd ed. Braunschweig: Vieweg; 1984. [238] Kast W. Druckverlust bei der Strömung durch Leitungen mit Querschnittsänderungen. VDI-Wärmeatlas. 8th ed. Berlin/Heidelberg: Springer-Verlag; 1997. Abschnitt Lc. [239] Kleiber M. Prozesstechnik auf der ACHEMA 2018. Chem-Ing-Tech 2018;90(12):1897–1909. [240] Dannenmaier T, Schmidt J, Denecke J, Odenwald O. European Program on Evaluation of Safety Valve Stability. Chemical Engineering Transactions 2016;48:625–630. [241] Markus D, Maas U. Die Berechnung von Explosionsgrenzen mit detaillierter Reaktionskinetik. Chem-Ing-Tech 2004;76(3):289–292. [242] Baybutt P. Process safety incidents, cognitive biases and critical thinking. Hydrocarbon Processing 2017;96(4):81–82. [243] Patidar P, Gupta A. Savings using divided wall columns. PTQ 2018;Q4:79–85. [244] Müller A, Kropp A, Köster R, Fazzini M. Improve energy efficiency with enhanced tube bundles in tubular heat exchangers. Hydrocarbon Processing 2017;96(5):75–81. [245] Kliemann C, Kleiber M, Müller K. Rheological Behavior of Mixtures of Ionic Liquids with Water. Chem Eng Technol 2018;41(4):819–826. [246] Dutta H. Building blocks of process safety. Hydrocarbon Processing 2018;97(10):33–36. [247] Hanik P, Hausmann R. High-reliability organizing for a new paradigm in safety. Hydrocarbon Processing 2018;97(10):51–54. [248] Jain S, Patil R, Gupta A. Phenomenon of flow distribution in manifolds. Hydrocarbon Processing 2018;97(11):75–77. [249] Sofronas T. Case 104: Energy in steam boiler explosions. Hydrocarbon Processing 2018;97(11):17–18.


[250] Pressure relieving and depressuring systems. American Petroleum Institute, 6th ed.; 2014. ANSI/API Standard 521. [251] Bernecker G. Das kleine Einmaleins des Anlagenbaus. CITplus 2016;19(5):6–9. [252] Rähse W. Vorkalkulation chemischer Anlagen. Chem-Ing-Tech 2016;88(8):1068–1081. [253] Rähse W. Ermittlung eines kompetitiven Marktpreises für neue Produkte über die Herstellkosten. Chem-Ing-Tech 2017;89(9):1142–1158. [254] Ahuja S. Comparison of Commercial Tools for Distillation Column Design. Bachelor thesis, April 2019. [255] Angler R. Sicherheitseinrichtungen auslegen ..... aber richtig. tredition GmbH, 2022. [256] Herdegen V, Werner A, Milew K, Haseneder R, Aubel T. ACHEMA 2018: Membranen und Membranverfahren. Chemie-Ing-Tech 2018;90(12):1964–1971. [257] Grolmes MA, Fisher HG. Vapor-liquid Onset/Disengagement Modeling for Emergency Relief Discharge Evaluation. Presentation at the AIChE 1994 Summer National Meeting. [258] Kister HZ. What caused tower malfunctions in the last 50 years? Trans IChemE, Vol. 81, Part A, January 2003. [259] Kister HZ. Can We Believe the Simulation Results? Chem Eng Prog 2002;98(10):52–58. [260] Kister HZ, Larson KF, Yanagi T. How Do Trays and Packings Stack Up? Chem Eng Progr 1994;90(2):23–32. [261] Duarte Pinto R, Perez M, Kister HZ. Combine temperature surveys, field tests and gamma scans for effective troubleshooting. Hydrocarbon Processing 2003;82(4):69–76. [262] Schmidt J. Auslegung von Schutzeinrichtungen für wärmeübertragende Apparate. VDI-Wärmeatlas, 11th ed., chap. L2.3. Berlin/Heidelberg: Springer-Verlag; 2013. [263] HTRI Design Manual, October 2006. [264] Georgiadis MC, Banga JR, Pistikopoulos EN, Dua V. Process Systems Engineering: Vol. 7: Dynamic Process Modeling. Weinheim: Wiley-VCH; 2011. [265] da Silva FJ. Dynamic Process Simulation: When do we really need it? Available at: http://processecology.com/articles/dynamic-process-simulation-when-do-wereally-need-it. [266] Berutti M. 
Understanding the digital twin; Available at: https://www.chemengonline.com/ understanding-the-digital-twin/?pagenum=1. [267] Bird RB, Stewart WE, Lightfoot EN. Transport Phenomena, Revised 2nd ed. New York: John Wiley & Sons; 2007. [268] Greenspan D. Numerical Solution of Ordinary Differential Equations for Classical, Relativistic and Nano Systems. Weinheim: Wiley-VCH; 2006. [269] Bequette BW. Process Control: Modeling, Design, and Simulation. Prentice Hall; 1998. [270] Gil Chaves ID, López JRG, Garcia Zapata JL, Leguizamón Robayo A, Rodríguez Niño G. Process Analysis and Simulation in Chemical Engineering. Berlin/Heidelberg/New York: Springer-Verlag; 2015. [271] Stephanopoulos G. Chemical Process Control: An Introduction to Theory and Practice. Upper Saddle River: Prentice-Hall; 1984. [272] Haas V. Simulation von Abblaseszenarien am Beispiel eines Industrieprozesses. Master thesis, Karlsruher Institut für Technologie, August 2018. [273] Duss M, Taylor R. Predict Distillation Tray Efficiency. CEP, July 2018, 24–30. [274] Grünewald M, Zheng G, Kopatschek M. Auslegung von Absorptionskolonnen – Neue Problemstellungen für eine altbekannte Aufgabe. Chem-Ing-Tech 2011;83(7):1026–1035. [275] Hanusch F, Rehfeldt S, Klein H. Flüssigkeitsmaldistribution in Füllkörperschüttungen: Experimentelle Untersuchung der Einflussparameter. Chem-Ing Tech 2017;89(11):1550–1560. [276] Ottow JCG, Bidlingmaier W, editors. Umweltbiotechnologie. Stuttgart: Gustav Fischer Verlag; 1997. [277] Span R, Beckmüller R, Eckermann T, Herrig S, Hielscher S, Jäger A, Mickoleit E, Neumann T, Pohl SM, Semrau B, Thol M. TREND. Thermodynamic Reference and Engineering Data 4.0. Lehrstuhl für Thermodynamik, Ruhr-Universität Bochum, 2019.


[278] Lapierre D, Moro J. Fünf nach zwölf in Bhopal. Europa Verlag Leipzig; 2004.
[279] Bittermann HJ. Kein Ärger mit der Pumpe. Process 2019;26(10):52–54.
[280] Lutz H, Wendt W. Taschenbuch der Regelungstechnik. Verlag Harri Deutsch, Frankfurt; 2003.
[281] www.sulzer.com/en/shared/products/shell-schoepentoeter-and-schoepentoeter-plus.
[282] Zak Friedman Y. Distillation Column DCS control configuration. Hydrocarbon Processing 2022;101(6):17–18.
[283] Mehling T, Kleiber M. Vapor phase association of pure components. ChemTexts 2020;6:11.
[284] Denecke J, Single J. Automatisierte HAZOPs – Stand der Technik und künftige Entwicklung. Presentation, 6. CSE-Sicherheitstage, April 25th to 27th, 2022, Wangerooge, Germany.
[285] Gmehling J, Rasmussen P. Flash Points of Flammable Liquid Mixtures Using UNIFAC. I&EC Fundamentals 1982;21:186–188.
[286] Nitsche M. Nitsche-Planungs-Atlas. Berlin: Springer-Vieweg; 2020.
[287] Mehling T, Kleiber M. Prozesstechnik auf der ACHEMA 2022. Submitted to Chem-Ing-Tech.
[288] Hoppe K, Mittelstraß M. Grundlagen der Dimensionierung von Kolonnenböden. Dresden: Theodor Steinkopf; 1967.
[289] Pramanik R, Srinath NR. Case Study: Challenges in the selection of a helical baffled exchanger. Hydrocarbon Processing 2022;101(2):72–74.
[290] De D, Pal TK, Bandyopadhyay S. Helical baffle design in shell and tube type heat exchanger with CFD analysis. International Journal of Heat and Technology 2017;35(2):378–383.
[291] Linström HJ, Buhn J. Basic concepts for explosion protection. 12th ed. BARTEC Gruppe; 2016.
[292] Jung W, Stark A. Wiederholung ausgeschlossen? 100. Jahrestag von BASF-Explosion mit über 500 Toten. PROCESS, September 17th, 2021.
[293] McQuillan KW, Whalley PB. A comparison between flooding correlations and experimental flooding data for gas-liquid flow in vertical circular tubes. Chem Eng Sc 1985;40(8):1425–1440.
[294] Misal PM. Double-tubesheet heat exchangers: Necessities and challenges. Hydrocarbon Processing 2021;100(5):63–67.
[295] Westphal F, Feldhaus U. Auf Nummer sicher. CITplus 2008;11(1–2):50–52.
[296] Hahn M. Druckstöße in Rohrleitungen: Störungsszenarien, Sicherheitsbetrachtungen und Gegenmaßnahmen. Chem-Ing-Tech 2009;81(1–2):127–136.
[297] Stephan D. Mission: Rohr frei! PROCESS 2021;12:38–40.
[298] Beisbart C. Was ist eine Computersimulation? Physik Journal 2022;21(4):35–41.
[299] Dixit P. Fix nozzle elevations and orientations for distillation columns. Hydrocarbon Processing 2021;100(7):39–42.
[300] Balaji C. Essentials of Radiation Heat Transfer. Cham, Switzerland: Springer Nature; 2021.
[301] Kabelac S. Thermodynamik der Strahlung. Braunschweig/Wiesbaden: Verlag Vieweg; 1994.
[302] Moore WJ, Hummel DO. Physikalische Chemie. 4th ed. Berlin: Walter de Gruyter; 1986.
[303] Gmehling J, Kleiber M, Steinigeweg S. Thermische Verfahrenstechnik. 5th ed. In: Dittmeyer R, Keim W, Kreysa G, Oberholz A, editors. Winnacker-Küchler: Chemische Technik. Weinheim: Wiley-VCH; 2004.
[304] Götting HP, Schwipps K. Grundlagen des Patentrechts. Stuttgart: B.G. Teubner; 2004.
[305] Sonn H, Pawloy P, Alge D. Patentwissen leicht gemacht. 3rd ed. Frankfurt: Redline Wirtschaft; 2005.
[306] Schlünder EU, Thurner F. Destillation Absorption Extraktion. Stuttgart: Georg Thieme Verlag; 1986.
[307] Engel V. How to .. Bubble Cap Tray. WelChem GmbH; 2020.
[308] Engel V. How to .. Float Valve Tray. WelChem GmbH; 2020.
[309] Engel V. How to .. Fixed Valve Tray. WelChem GmbH; 2020.
[310] Engel V. How to .. Sieve Tray. WelChem GmbH; 2020.
[311] Engel V. How to .. Downcomers. WelChem GmbH; 2020.
[312] McDuffie NG. Vortex Free Downflow in Vertical Drains. AIChE J 1977;23(1):37–40.

[313] Rochelle SG, Briscoe MT. Predict and prevent air entrainment in draining tanks. Chemical Engineering, November 2010:37–43. [314] Engel V. Private communication, Jan 26th, 2023. [315] Schmidt P. OSN-assisted reaction and distillation process. In: Lutze P, Gorak A, editors. Reactive and Membrane-Assisted Separations. Berlin: de Gruyter; 2016. [316] Niesbach A. Reactive distillation. In: Lutze P, Gorak A, editors. Reactive and Membrane-Assisted Separations. Berlin: de Gruyter; 2016. [317] Hauptmanns U. Prozess- und Anlagensicherheit. Berlin: Springer-Vieweg; 2013. [318] Liebermann NP, Liebermann ET. A Working Guide to Process Equipment. 3rd ed. McGraw-Hill; 2008. [319] Jäggy M, Koch J. External Report, March 2022. [320] Strauch U. Modulare Kostenschätzung in der chemischen Industrie. Thesis, TU Berlin, 2008. [321] Pirozfar V, Eftekhari Y, Su C-H. Pinch Technology. Berlin: de Gruyter; 2022. [322] Klenke W. Zur einheitlichen Beurteilung und Berechnung von Gegenstrom- und Kreuzstromkühltürmen. Kältetechnik - Klimatisierung 1970;22(10):322–330. [323] Vauck WRA, Müller HA. Grundoperationen chemischer Verfahrenstechnik. 8th ed. Leipzig: VEB Deutscher Verlag für Grundstoffindustrie; 1989. [324] Harting PE. Zur einheitlichen Berechnung von Kühltürmen. Thesis, TU Braunschweig; 1977. [325] Henzler HJ. Untersuchungen zum Homogenisieren von Flüssigkeiten oder Gasen. VDI-Forschungsheft Nr. 587, 1978. [326] Garg SK, Banipal TS, Ahluwalia JC. Heat capacities and densities of liquid o-xylene, m-xylene, p-xylene, and ethylbenzene, at temperatures from 318.15 K to 373.15 K and at pressures up to 10 MPa. J Chem Thermodynamics 1993;25:57–62. [327] Brandani S, Brandani V, Flammini D. Isothermal vapor-liquid equilibria for the water-1, 3,5-trioxane system. J Chem Eng Data 1994;39:184–185. [328] Park SJ, Han KJ, Gmehling J. 
Vapor-liquid equilibria and HE for binary systems of Dimethyl Ether (DME) with C1-C4 Alkan-1-ols at 323.15 K and liquid-liquid equilibria for ternary system of DME + Methanol + Water at 313.15 K. J Chem Eng Data 2007;52:230–234. [329] Gaw WJ, Swinton FL. Thermodynamic properties of binary systems containing hexafluorobenzene. Part 4. – Excess Gibbs free energies of the three systems hexafluorobenzene + benzene, toluene, and p-xylene. Trans Faraday Soc 1968;64:2023–2034. [330] Wagner W. Strömung und Druckverlust. 4th ed. Vogel Buchverlag; 1990. [331] Bohl W. Technische Strömungslehre. 10th ed. Vogel Buchverlag; 1971. [332] Glück B. Hydrodynamische und gasdynamische Rohrströmung, Druckverluste. Berlin: VEB Verlag für Bauwesen; 1988. [333] EKATO – The Book, Firmenschrift der EKATO Rühr- und Mischtechnik, Schopfheim, 2012. [334] https://www.dpma.de/english/patents/patent_protection/protection_requirements/index.html. [335] https://www.iusmentis.com/patents/priorart/donaldduck/ The “Donald Duck as prior art” case (in Patents > When is something prior art iusmentis.com). [336] “The Sunken Yacht”, © 1949, Walt Disney Corporation. [337] https://www.epo.org/law-practice/legal-texts/html/guidelines/e/g_vii_3.htm. [338] https://www.epo.org/applying/european/Guide-for-applicants/html/e/ga_c3_4.html. [339] https://www.epo.org/law-practice/legal-texts/html/caselaw/2019/e/clr_i_e_1.htm 1. Notion of ‘industrial application’ – Case Law of the Boards of Appeal, I. PATENTABILITY, E. The requirement of industrial application under Article 57 EPC (epo.org). [340] https://www.epo.org/applying/european/Guide-for-applicants_de.html. [341] https://www.wipo.int/pct/en/guide/index.html. [342] https://www.dpma.de/english/search/classifications/patents_and_utility_models/ipc/index.html. [343] https://register.epo.org/help?lng=en&topic=kindcodes. [344] https://register.epo.org/help?lng=de&topic=countrycodes. [345] https://www.epo.org/law-practice/legal-texts/html/guidelines/e/b_x_9_2.htm.


[346] https://www.epo.org/applying/fees/fees_de.html.
[347] Grassmann O, Bader A. Patentmanagement. Berlin: Springer-Verlag; 2006.
[348] https://depatisnet.dpma.de/DepatisNet/depatisnet?action=basis.
[349] https://worldwide.espacenet.com/?locale=en_EP.
[350] https://ppubs.uspto.gov/pubwebapp/static/pages/landing.html.
[351] https://pss-system.cponline.cnipa.gov.cn/conventionalSearch.
[352] https://www.j-platpat.inpit.go.jp/.
[353] https://www.kipo.go.kr/en/MainApp?c=1000.
[354] Chamorro-Premuzic T. https://hbr.org/2021/11/the-essential-components-of-digital-transformation.
[355] Schallmo D, Williams CA, Boardman L. Digital transformation of business models-best practice, enablers, and roadmap. International Journal of Innovation Management 2017;21(8):1740014.
[356] Feroz AK, Zo H, Chiravuri A. Digital transformation and environmental sustainability: A review and research agenda. Sustainability 2021;13(3):1530.
[357] Vacchi M, Siligardi C, Cedillo-González EI, Ferrari AM, Settembre-Blundo D. Industry 4.0 and smart data as enablers of the circular economy in manufacturing: Product re-engineering with circular eco-design. Sustainability 2021;13(18):10366.
[358] International Energy Agency. Digitalization & Energy, p. 146–151. Publication 2017.
[359] Ge W, Guo L, Li J. Toward greener and smarter process industries. Engineering 2017;3(2):152–153.
[360] Mao S, Wang B, Tang Y, Qian F. Opportunities and challenges of artificial intelligence for green manufacturing in the process industry. Engineering 2019;5(6):995–1002.
[361] Yang T, Yi X, Lu S, Johansson KH, Chai T. Intelligent manufacturing for the process industry driven by industrial artificial intelligence. Engineering 2021;7(9):1224–1230.
[362] Boyes H, Hallaq B, Cunningham J, Watson T. The industrial internet of things (IIoT): An analysis framework. Computers in Industry 2018;101:1–12.
[363] Veneri G, Capasso A. Hands-on industrial Internet of Things: create a powerful industrial IoT infrastructure using industry 4.0. Packt Publishing Ltd; 2018. p. 18–22.
[364] Qi Q, Tao F. Digital twin and big data towards smart manufacturing and industry 4.0: 360 degree comparison. IEEE Access 2018;6:3585–3593.
[365] Breuer O. From Big Data to Smart Data. https://publications.rwth-aachen.de/record/724087/files/724087.pdf.
[366] Ge Z, Song Z, Ding SX, Huang B. Data mining and analytics in the process industry: The role of machine learning. IEEE Access 2017;5:20590–20616.
[367] Prajapati AG, Sharma SJ, Badgujar VS. All about cloud: A systematic survey. 2018 International Conference on Smart City and Emerging Technology (ICSCET). IEEE. p. 1–6.
[368] Caiza G, Saeteros M, Oñate W, Garcia MV. Fog computing at industrial level, architecture, latency, energy, and security: A review. Heliyon 2020;6(4):e03706.
[369] Venkatasubramanian V. The promise of artificial intelligence in chemical engineering: Is it here, finally? AIChE Journal 2019;65(2):466–478.
[370] Sircar A, Yadav K, Rayavarapu K, Bist N, Oza H. Application of machine learning and artificial intelligence in oil and gas industry. Petroleum Research 2021;6(4):379–391.
[371] Mowbray M, Vallerio M, Perez-Galvan C, Zhang D, Chanona ADR, Navarro-Brull FJ. Industrial data science – a review of machine learning applications for chemical and process industries. Reaction Chemistry & Engineering 2022;7:1471–1509.
[372] Kim Y, Kang N, Kim S, Kim H. Evaluation for snowfall depth forecasting using neural network and multiple regression models. Journal of the Korean Society of Hazard Mitigation 2013;13(2):269–280.
[373] Sharma V, Rai S, Dev A. A comprehensive study of artificial neural networks. International Journal of Advanced Research in Computer Science and Software Engineering 2012;2(10):278–284.
[374] Bárkányi Á, Chovan T, Nemeth S, Abonyi J. Modelling for digital twins – potential role of surrogate models. Processes 2021;9(3):476.


[375] Bröcker S, Benfer R, Bortz M, Engell S, Knösche C, Kröner A. Process Simulation – Fit for the future. Position Paper, ProcessNet-Arbeitsausschuss “Modellgestützte Prozessentwicklung und -optimierung”, March 2021. [376] Bikmukhametov T, Jäschke J. Combining machine learning and process engineering physics towards enhanced accuracy and explainability of data-driven models. Computers & Chemical Engineering 2020;138:106834. [377] https://www.dataversity.net/data-science-vs-domain-expertise-who-can-best-deliver-solutions/. [378] Graetsch UM, Khalajzadeh H, Shahin M, Hoda R, Grundy J. Dealing with data challenges when delivering data-intensive software solutions. IEEE Transactions on Software Engineering 2023. [379] da Silva Mendonça R, de Oliveira Lins S, de Bessa IV, de Carvalho Ayres Jr FA, de Medeiros RLP, de Lucena Jr VF. Digital twin applications: A survey of recent advances and challenges. Processes 2022;10(4):744. [380] https://ntrs.nasa.gov/citations/20210023699. [381] Bamberg A, Urbas L, Bröcker S, Bortz M, Kockmann N. The digital twin – your ingenious companion for process engineering and smart production. Chemical Engineering & Technology 2021;44(6):954–961. [382] Kallakuri R, Bahuguna PC. Role of operator training simulators in hydrocarbon industry – a review. International Journal of Simulation Modelling 2021;20(4):649–660. [383] Komulainen TM, Sannerud AR. Learning transfer through industrial simulator training: Petroleum industry case. Cogent Education 2018;5(1):1554790. [384] Smith CL. Advanced Process Control: Beyond Single Loop Control. Hoboken, New Jersey: John Wiley & Sons; 2010. [385] Katz J, Pappas I, Avraamidou S, Integrating PEN. Deep Learning and Explicit MPC for Advanced Process Control. In: 2020 American Control Conference (ACC). IEEE; 2020. p. 3559–3564. [386] Francois G, Bonvin D. Measurement-Based Real-Time Optimization of Chemical Processes. In: Pushpavanam S, editor. Control and Optimisation of Process Systems. 
Advances in Chemical Engineering. vol. 43. Amsterdam: Elsevier; 2013. [387] Krishnamoorthy D, Skogestad S. Real-time optimization as a feedback control problem – a review. Computers & Chemical Engineering 2022;161:107723. [388] Mendoza DF, Graciano JEA, dos Santos Liporace F, Le Roux GAC. Assessing the reliability of different real-time optimization methodologies. The Canadian Journal of Chemical Engineering 2016;94(3):485–497. [389] Zonta T, Da Costa CA, da Rosa Righi R, de Lima MJ, da Trindade ES, Li GP. Predictive maintenance in the Industry 4.0: A systematic literature review. Computers & Industrial Engineering 2020;150:106889. [390] Pech M, Vrchota J, Bednář J. Predictive maintenance and intelligent sensors in smart factory. Sensors 2021;21(4):1470. [391] Zhu W, Ma Y, Benton MG, Romagnoli JA, Zhan Y. Deep learning for pyrolysis reactor monitoring: From thermal imaging toward smart monitoring system. AIChE Journal 2019;65(2):582–591. [392] Huang P, Chen M, Chen K, Zhang H, Yu L, Liu C. A combined real-time intelligent fire detection and forecasting approach through cameras based on computer vision method. Process Safety and Environmental Protection 2022;164:629–638. [393] Krause H. Virtual commissioning of a large LNG plant with the DCS 800XA by ABB. In: 6th EUROSIM Congress on Modelling and Simulation. Ljubljana, Slovénie. 2007. [394] Schenk T, Botero Halblaub A, Rosen R, Heinzerling T, Mädler J, Klose A, Hensel S, Urbas L, Schäfer C, Bröcker S. Co-Simulation-based virtual Commissioning for modular process plants – Requirements, Framework and Support -Toolchain for a Virtual Automation Testing Environment. VDI-Berichte 2351, Automation 2019, pp. 229–242. [395] Nooralishahi P, Ibarra-Castanedo C, Deane S, López F, Pant S, Genest M, Avdelidis NP, Maldague XP. Drone-based non-destructive inspection of industrial sites: A review and case studies. Drones 2021;5(4):106.


[396] Restas A. Drone applications for preventing and responding HAZMAT disaster. World Journal of Engineering and Technology 2016;4:76–84. [397] https://apm.byu.edu/prism/index.php/Site/OnlineCourses. [398] https://www.tab-beim-bundestag.de/english/projects_energy-consumption-of-ict-infrastructure.php. [399] https://www.python.org/. [400] https://posit.co/download/rstudio-desktop/. [401] Mandur JS, Budman HM. Simultaneous model identification and optimization in presence of model-plant mismatch. Chemical Engineering Science 2015;129:106–115. [402] Lüdecke HJ, Kothe B. KSB Know-how Band 1: Der Druckstoß. https://docplayer.org/8286127-Ksbknow-how-band-1-der-druckstoss-2-d-i-dn-2-150-c-cw-0-75-d-i-d-i-c-cp-c-w-c-o-c-b.html. [403] Fischer K. Unpublished Data. DDB Nos. 14159-14160. [404] Stephenson R, Stuart J. Mutual Binary Solubilities: Water-Alcohols and Water-Esters. J. Chem. Eng. Data 1986;31:56–70. [405] Kato M, Konishi H, Hirata M. New apparatus for isobaric dew and bubble point method. Methanol – water, ethyl acetate – ethanol, water – 1-butanol, and ethyl acetate – water systems. J. Chem. Eng. Data 1970;15(3):435–439. [406] Marongiu B, Ferino I, Monaci R, Solinas V, Torraza S. Thermodynamic properties of aqueous non-electrolyte mixtures. Alkanols + water systems. J. Mol. Liq. 1984;28(4):229–247. [407] Butler JAV, Thomson DW, Maclennan WH. The free energy of the normal aliphatic alcohols in aqueous solution. Part I. The partial vapour pressures of aqueous solutions of methyl, n-propyl, and n-butyl alcohols. Part II. The solubilities of some normal aliphatic alcohols in water. Part III. The theory of binary solutions, and its application to aqueous-alcoholic solutions. J. Chem. Soc. London 1933;674.

A Some numbers to remember

It may seem a bit outdated to know simple numbers by heart. Nevertheless, many projects are launched through quick ideas in open talks with colleagues, a plant manager, or a plant engineer. Often, complicated process simulations must be made plausible to practitioners by rough calculations. Knowing some frequently used numbers makes you a candidate for the pole position in process engineering meetings. Without claiming completeness, here are some numbers considered worth knowing by heart. On purpose, only even approximations rather than exact values are given, as the point of learning numbers by heart is to apply them without any tools.

Molecular weights

Nitrogen         28 g/mol
Air              29 g/mol
Water            18 g/mol
Chlorine         71 g/mol
Methanol         32 g/mol
Ethylene         28 g/mol
Oxygen           32 g/mol
Propylene        42 g/mol
Hydrogen          2 g/mol
Ammonia          17 g/mol
Methane          16 g/mol
Carbon Dioxide   44 g/mol
Ethanol          46 g/mol

Standard cubic meter

Essentially, the standard cubic meter is not a volume but a mass unit. It refers to the amount of gaseous substance in 1 m³ at standard conditions p = 1.01325 bar, t = 0 °C. It can be calculated with the ideal gas equation of state. For nitrogen with M = 28.013 g/mol, one gets

mN = pVM/(RT) = 101325 Pa · 1 m³ · 0.028013 kg/mol / (8.31446 J/(mol K) · 273.15 K) = 1.2498 kg ≈ 1.25 kg

It is easy to perform this calculation for other substances as well, but undoubtedly, it is difficult without at least a pocket calculator. However, the only quantity that refers to the substance in this calculation is the molecular weight. Thus, knowing the even number for nitrogen by heart, the mass of a standard cubic meter can easily be scaled with the molecular weight, e.g.

https://doi.org/10.1515/9783111028149-019

– for air (M = 29 g/mol): mN,air = 1.25 kg · 29/28 = 1.295 kg;
– for hydrogen (M = 2 g/mol): mN,H2 = 1.25 kg · 2/28 = 89 g;
– for oxygen (M = 32 g/mol): mN,O2 = 1.25 kg · 32/28 = 1.43 kg;
– for carbon dioxide (M = 44 g/mol): mN,CO2 = 1.25 kg · 44/28 = 1.96 kg.
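Both the exact ideal-gas calculation and the scaling rule can be put into a few lines of code. A minimal sketch in Python (the function names are ours, chosen for illustration):

```python
# Mass of one standard cubic meter (1 m^3 at p = 1.01325 bar, t = 0 °C),
# first exactly via the ideal gas law, then via the rule-of-thumb scaling
# with the molecular weight described in the text.

R = 8.31446  # universal gas constant, J/(mol K)

def mass_per_normal_m3(M_g_mol, p=101325.0, T=273.15):
    """Exact mass in kg of 1 m^3 of an ideal gas at standard conditions."""
    return p * 1.0 * (M_g_mol / 1000.0) / (R * T)

def mass_scaled(M_g_mol):
    """Rule of thumb: scale the 1.25 kg of nitrogen with M/28."""
    return 1.25 * M_g_mol / 28.0

m_N2 = mass_per_normal_m3(28.013)   # ≈ 1.25 kg

for name, M in [("air", 29.0), ("H2", 2.0), ("O2", 32.0), ("CO2", 44.0)]:
    print(name, round(mass_scaled(M), 2), "kg")
```

For air and carbon dioxide this reproduces the 1.295 kg and 1.96 kg quoted above within rounding.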

Some other useful physical property data

cpid   Nitrogen          1 J/(g K)
cpid   Water             1.9 J/(g K)
cpL    Water             4.2 J/(g K)
Δhv    Water             2250 J/g (at t = 100 °C)
ρL     Water             1000 kg/m³
κ      Nitrogen          1.4
cp     Steel             0.4–0.5 J/(g K)
ρ      Steel             7800 kg/m³
λ      Carbon Steel      50 W/(m K)
λ      Stainless Steel   15 W/(m K)

Critical temperatures

Methanol          240 °C
Ethanol           241 °C
Ethylene            9 °C
Propylene          91 °C
Propane            97 °C
Nitrogen         −147 °C
Ammonia           132 °C
Methane           −83 °C
Water             374 °C
Carbon Dioxide     31 °C

Normal boiling points

Methanol           64 °C
Acetone            56 °C
Benzene            80 °C
Toluene           111 °C
Ethylene         −104 °C
Propylene         −48 °C
Propane           −42 °C
Chlorine          −34 °C
Water             100 °C (in fact, the exact value is 99.97 °C according to ITS-90)
Nitrogen         −196 °C
Ammonia           −33 °C
Methane          −161 °C
Carbon Dioxide    none; the triple point is at t = −56.6 °C, p = 5.2 bar
Ethanol            78 °C

Rough values for the vapor pressure of water

30 °C    0.04 bar
40 °C    0.075 bar
60 °C    0.2 bar
70 °C    0.3 bar
80 °C    0.5 bar
90 °C    0.7 bar
100 °C   1 bar
120 °C   2 bar
150 °C   5 bar
160 °C   6 bar
170 °C   8 bar
180 °C   10 bar
190 °C   12.5 bar
210 °C   19 bar
230 °C   28 bar
250 °C   40 bar
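Between the table points, a quick estimate can be interpolated. The sketch below (our own helper, not from the text) assumes that ln(p) is linear in 1/T between neighboring entries, i.e., Clausius–Clapeyron behavior:

```python
import math

# Rough water vapor-pressure table from above: temperature in °C, pressure in bar.
T_C   = [30, 40, 60, 70, 80, 90, 100, 120, 150, 160, 170, 180, 190, 210, 230, 250]
p_bar = [0.04, 0.075, 0.2, 0.3, 0.5, 0.7, 1, 2, 5, 6, 8, 10, 12.5, 19, 28, 40]

def p_sat_water(t_C):
    """Interpolate ln(p) linearly in 1/T between the table points."""
    T = t_C + 273.15
    Ts = [t + 273.15 for t in T_C]
    if not Ts[0] <= T <= Ts[-1]:
        raise ValueError("temperature outside table range")
    for i in range(len(Ts) - 1):
        if Ts[i] <= T <= Ts[i + 1]:
            f = (1/T - 1/Ts[i]) / (1/Ts[i + 1] - 1/Ts[i])
            return math.exp((1 - f) * math.log(p_bar[i]) + f * math.log(p_bar[i + 1]))

print(round(p_sat_water(135), 2), "bar")  # between 2 bar (120 °C) and 5 bar (150 °C)
```

At 135 °C this gives about 3.2 bar, close to the steam-table value of roughly 3.1 bar.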

Some values for heat transfer

Heat transfer by natural convection to air           α = �–� W/(m² K)
Heat transfer by wind                                α = �� W/(m² K)
Plate heat exchanger liquid/liquid                   k = ���� W/(m² K)
Maximum possible solar radiation (solar constant)    S = 1360 W/m²




B Pressure drop coefficients

For the evaluation of the ζ-values in Equation (12.19), the following instructions according to [150] can be applied. In the tables, one can interpolate between the given values. If the cross-flow area changes, the velocity always refers to the outlet of the element, i.e., to the large cross-flow area for expansions and to the small cross-flow area for restrictions. Pictures of the particular elements can be found in [150]. For laminar flow, the ζ-values listed here cannot be used; for small Reynolds numbers, they can be up to 1000-fold higher. The problem is described in [236]. Further information about pressure drop coefficients can be taken from [72, 330–332].

90° bend

r/d                          1     2     4     6     10
ζ90, smooth (k·Re < 65 d)    0.21  0.14  0.11  0.09  0.11
ζ90, rough (k·Re > 65 d)     0.51  0.3   0.23  0.18  0.20
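As a usage illustration (our own sketch, assuming the usual single-phase form Δp = ζ · ρ · w²/2 of Equation (12.19)), the table can be combined with the linear interpolation suggested above:

```python
# Pressure drop of a smooth 90° bend from the tabulated ζ-values,
# with plain linear interpolation in r/d.

r_d         = [1, 2, 4, 6, 10]
zeta_smooth = [0.21, 0.14, 0.11, 0.09, 0.11]

def zeta_90(rd):
    """Linearly interpolate ζ90 (smooth) inside the table."""
    for i in range(len(r_d) - 1):
        if r_d[i] <= rd <= r_d[i + 1]:
            f = (rd - r_d[i]) / (r_d[i + 1] - r_d[i])
            return (1 - f) * zeta_smooth[i] + f * zeta_smooth[i + 1]
    raise ValueError("r/d outside table")

def dp_bend(rd, rho, w):
    """Pressure drop in Pa: Δp = ζ · ρ · w²/2."""
    return zeta_90(rd) * rho * w**2 / 2

# water (ρ = 1000 kg/m³) at w = 2 m/s through a bend with r/d = 3
print(dp_bend(3, 1000.0, 2.0), "Pa")  # 250.0 Pa
```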

Bend with arbitrary angle ϕ ≠ 90°

ϕ        0°   30°   60°   90°   120°   150°   180°
ζ/ζ90    0    0.4   0.7   1.0   1.25   1.5    1.7

Elbow pipe with circular cross-flow area

ϕ          0°   22.5°   30°    45°    60°    90°
ζ smooth   0    0.07    0.11   0.24   0.47   1.13
ζ rough    0    0.11    0.17   0.32   0.68   1.27

Elbow pipe with rectangular cross-flow area

ϕ    0°   30°    45°    60°    75°    90°
ζ    0    0.15   0.52   1.08   1.48   1.6

Corrugated expansion joint

ζ = 0.2 n, with n as the number of corrugations.

https://doi.org/10.1515/9783111028149-020


U-bend

a/d    0      2      5      10
ζ      0.33   0.21   0.21   0.21

a is the length of the straight lines [150].

Sharp-edged tube entrance

ζ = 0.5

Smooth tube entrance

– ζ = 0.01 (smooth)
– ζ = 0.03 (transition smooth–rough)
– ζ = 0.05 (rough)

Tube entrance with orifice

(d/dorifice)²    1     1.25   2      5    10
ζ                0.5   1.17   5.45   54   245

Discontinuous transition from A1 to A2 > A1

ζ = (A2/A1 − 1)²

Continuous cross-flow area expansion (diffusor) ϕ/� ϕ ζ(d� /d� ζ(d� /d� ζ(d� /d� ζ(d� /d� ζ(d� /d�

= �.�) = �.�) = �.�) = �.�) = �.�)

4° 8°

6° 12°

8° 16°

10° 20°

12° 24°

0.0 0.0 0.25 0.8 1.25

0.0 0.15 0.6 1.15 2.0

0.0 0.25 0.85 1.75 2.75

0.1 0.3 1.05 2.15 3.5

0.2 0.7 1.65 3.1 5.0

The values have been taken from diagrams in [150]. ϕ is the opening angle of the diffusor. The cross-flow area expansion is characterized by the diameter at the inlet of the expansion piece d1 , the diameter at the outlet of the expansion piece d2 and its length, which yields the opening angle ϕ. Because of the extremely steep gradients, the extrap-

Discontinuous cross-flow area restriction from A1 to A2 < A1

� 543

olation beyond 24° should be omitted. Instead, the discontinuous transition should be used.

Discontinuous cross-flow area restriction from A1 to A2 < A1

The relationship between the area ratio A2/A1 and ζ can be evaluated according to the relationship in [170], which reproduces the curve in [150] and the tabulated values in [237] sufficiently well:

ζ = 1.5 ((1 − μ)/μ)²  (B.1)

with the restriction coefficient

μ = (0.39309023 (A2/A1)² − 0.86544149 (A2/A1) + 0.61790739) / (1 − 1.4837414 (A2/A1) + 0.62929722 (A2/A1)²)  (B.2)
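Equations (B.1) and (B.2) translate directly into code. In the sketch below (function names are my own), ζ again refers to the velocity in the smaller outlet cross-flow area A2, as stated in the introduction:

```python
# Discontinuous restriction, Eqs. (B.1)/(B.2): zeta from the area
# ratio A2/A1 (< 1). Names are illustrative.

def mu_restriction(area_ratio: float) -> float:
    """Restriction coefficient mu, Eq. (B.2); area_ratio = A2/A1."""
    x = area_ratio
    num = 0.39309023 * x**2 - 0.86544149 * x + 0.61790739
    den = 1.0 - 1.4837414 * x + 0.62929722 * x**2
    return num / den

def zeta_restriction(area_ratio: float) -> float:
    """Pressure drop coefficient, Eq. (B.1)."""
    if not 0.0 < area_ratio < 1.0:
        raise ValueError("expects 0 < A2/A1 < 1 (restriction)")
    mu = mu_restriction(area_ratio)
    return 1.5 * ((1.0 - mu) / mu) ** 2

# Halving the cross-flow area gives a zeta of roughly 0.3
print(round(zeta_restriction(0.5), 3))
```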

Continuous cross-flow area restriction

The pressure drop is comparatively low and can be described by ζ = 0.05 with sufficient accuracy and in a conservative way [150]. If the angle of the restriction is > 40°, the formula for the discontinuous restriction should be applied. For very small angles, the pressure drop of the pipe itself should not be neglected [238].

Vessel outlet

According to [150], the volume flow of a liquid at the vessel outlet is characterized by the Bernoulli equation, supplemented by a flow coefficient μ:

V̇ = μ A √(2 g h + 2 (p1 − p2)/ρ)

where
A     cross-flow area of the vessel outlet
μ     outlet flow coefficient
g     gravity acceleration
h     height of liquid in the vessel above the outlet
p1    pressure inside the vessel
p2    pressure outside the vessel
ρ     liquid density

For the outlet flow coefficient, the following values can be taken or, respectively, calculated from [150]:
(a) sharp-edged outlet: μ = 0.59 … 0.62
(b) rounded-edged outlet: μ = 0.97 … 0.99
(c) outlet with a short pipe with l/d = 2 … 3: μ = 0.82
(e) outlet with a conical short pipe:

(d2/d1)²    0.1     0.2     0.4     0.6     0.8     1.0
μ           0.80    0.81    0.84    0.86    0.90    0.96

where d2 is the smaller inner diameter of the cone at the outlet.
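The outlet equation can be evaluated as follows (a sketch; names and the example figures are my own, with μ = 0.6 taken from the sharp-edged range (a) above):

```python
import math

# Liquid volume flow at a vessel outlet: Bernoulli equation with an
# outlet flow coefficient mu, as given above. SI units throughout;
# names are illustrative.

def vessel_outlet_flow(mu: float, area: float, h: float,
                       p_in: float, p_out: float, rho: float,
                       g: float = 9.81) -> float:
    """V_dot = mu * A * sqrt(2*g*h + 2*(p_in - p_out)/rho) in m3/s."""
    return mu * area * math.sqrt(2.0 * g * h + 2.0 * (p_in - p_out) / rho)

# Example: sharp-edged outlet (mu ~ 0.6), d = 50 mm, 2 m liquid head,
# unpressurized vessel (p_in = p_out)
area = math.pi / 4.0 * 0.05**2
v_dot = vessel_outlet_flow(0.6, area, 2.0, 1.0e5, 1.0e5, 1000.0)
print(round(v_dot * 3600.0, 1), "m3/h")
```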

Index 10 % pressure drop criterion 466 1st generation random packing 211 2-phase flash 66 2nd generation random packing 211 3-phase flash 66, 119 3 % pressure drop criterion 464 3D models 10 3rd generation random packing 211 4th generation random packing 211 γ-φ-approach 25, 50, 69, 70 γ-ray scanning 265–267 N2 O (laughing gas) 411 φ-φ-approach 25, 50, 52, 68–70 ζ-value 367, 383, 541 abnormal heat input 452 abrasive effect 182 absolute pressure 386 absorption 204, 260, 404, 405, 415 absorption column 220 absorption/desorption 405, 415 absorptive agent 415 acentric factor 36 acetylene 429 activated carbon 288, 410, 421, 424 activated sludge process 423 activation energy 344 active area 229, 232, 239, 509 active fire protection 16 activity 63 activity coefficient VII, VIII, 26, 31, 41, 46, 63, 509 activity coefficient at infinite dilution 80 actuation case 447 actuation pressure 436, 439 adiabatic throttling 90, 459 adsorbate 509 adsorbent 509 adsorption 120, 205, 257, 280, 288, 404, 405, 410, 417, 418, 421, 424 adsorption capacity 288 adsorption equilibrium 290 adsorption isotherm type 290 adsorption twin plant 292 adsorptive 509 adsorptive agent 288 advanced cubic equation of state 77, 81 https://doi.org/10.1515/9783111028149-021

advanced oxidation processes (AOP) 421 advanced process control 509 Advanced Process Control (APC) 499 aerobic treatment 422 aerosol 261, 407, 429, 509 AES 163 agitator 353 agro chemicals 98 air cooler 190, 400 air line 477 alpha function 39 amine 391 ammonia 391, 411 ammonia reaction 344, 345, 347 ammonium nitrate 426, 428, 429 ammonium sulfate 426 anaerobic waste water treatment 424, 425 annular flow 375 antifoaming agent 243 API-2000 443 API-521 443 apparent component approach 63 apron 232, 239 aqueous phase 268 aqueous system 213, 215, 245 area plot plan 12 artificial intelligence 487 artificial neural networks 489 association 57–59 asymmetric rotating disc contactor (ARDC) 277 auto-refrigeration case 359 autoignition temperature 474, 475 axial turbo compressor 315 azeotrope 31, 32, 46, 47, 255, 264, 272, 354, 444, 509 azeotropic condition 47 azeotropic distillation 256 azeotropic point 31 back mixing 276 back pressure 437, 466 backward feed arrangement 104 baffle 167, 195 baffle cut 167–169, 192 baffle orientation 168 baffle separator 336 baffle spacing 168, 195, 196


baffle types 168, 169, 195 bagatelle limit 404 ball valve 380, 382 BASF 426 basic chemicals 97 basic engineering 4, 5, 7, 453 batch distillation 264 batch process 98, 113, 404 batch process recipe 114 batch reactor 350 batch simulation 113 batch stirred tank reactor 351 batchwise 403 battery limit 6, 509 Bayer-Flachglocke 226 Beggs-Brill correlation 370 bellow 466 bellow expansion joint 379 bellow pipe 379 BEM 163, 175 Bender equation 39 Benedict–Webb–Rubin equation (BWR) 39 benzene 256 Bernoulli equation 300 BEU 14, 163 Bhopal 428 big data 484 Billet and Schultes model 217 binary interaction parameter (BIP) 46, 78, 79, 119, 273 binary parameter estimation 79 biochemical oxygen demand (BOD) 419 biochemical product 264 biodiesel 415 bioethanol 294 biofilter 416, 417 Biohoch reactor 423, 424 biological exhaust air treatment 404, 405, 416, 417 biological waste water treatment 415, 422–425 bioscrubber 417 biotrickling filter 417 Bitterfeld 427 BJ21T 163, 175, 176 BKU 163 “black box” model 492 Blasius equation 364 blocked discharge line 447 blocks in ASPEN Plus 89 blowdown 436

blowdown process 441 BOD 509 boiler feed water 396, 509 boiler formula 357 brainstorming 510 brine 400 brittle 359 Broyden method 94 bubble cap tray 224, 455 bubble column 416 bubble column reactor 423 bubble flow 374, 375 bubble point curve 29 bubble regime 224 bubbling-up 468 built-up back pressure 437 butterfly valve 381 by-product 21, 92, 510 c-concentration 344, 346 calcium carbide 429 CAPEX 1, 19 capillary module 282 car sealed valve 382 carboxylic acid 57, 76 Carnival Monday accident 428, 450 cartridge column 232, 245 cascade reactor 352 catalyst 348 catalyst exchange 354 catalytic combustion 405, 411, 412 cause & effect matrix 17, 431, 510 cavitation 6, 302, 306 cell method 161 centrifugal extractor 278 centrifugal pump 300, 306 centrifugation 272, 278, 424 certified capacity 457 champagne effect 468 chaotic flow 375 chattering 458, 466 check valve 6, 381, 510 chemical oxygen demand (COD) 419 chlorine 409 choking 369 chromatography 293 circulation pump 302 Clausius–Clapeyron equation 29, 53, 76 clear liquid height 238


closed balance point 226 cloud 486 cloud computing 486 cloud storage 486 co-product 21, 92, 510 coalescence 268, 269 coalescer 510 COD 510 cognitive biases 429, 432 coil module 282 cold water aggregate 400 Colebrook 365 collector 210, 219, 221, 244 collector/distributor unit 210 combinatorial part 80 combined heat and mass transfer 175 combustion 404, 405, 409 compact approach 209 Composite Curve 108 composite membrane 281 composition control 246 compressibility factor 57, 510 compressible fluid 367 compressor 14, 91, 106, 308, 320, 366 computer vision 502 conceptual design phase 2, 3, 245 condensate 390, 396, 397 condensate lifter 395 condensate line 396 condensate outlet control 394 condensate polishing 421 condensation 204 condenser 161, 174, 176, 187 conservative assumption 456 contingency 20, 510 continuity equation 459 continuous reactor 350 continuous stirred tank reactor (CSTR) 351, 352 control 381 control cycle 431 control engineering 246 control valve 300, 380, 382, 433 control valve failure 453 convergence 208 conversion 342 cooling brine 162 cooling system failure 453 cooling water 108, 109, 190, 192, 398, 510 core patent 144

Coriolis flow meter 387 corresponding states principle 37, 38 corrosion 99, 173, 192, 196, 225, 308, 320, 343, 361, 388, 394, 398, 413, 416, 432, 439 corrosion allowance 357 corrosivity 196 COSMO-RS 81 cost estimation 19 COSTALD equation 52 coverage 181 critical flow phenomenon 458 critical mass flux density 458 critical point 34, 53, 55, 74, 444 critical pressure 34 critical temperature 34 crossflow filtration 283 crud 269 cryo-condensation 404, 407, 408 Cryosolv process 407, 408 crystal growth 296 crystallization 67, 280, 295 CSTR cascade 352 cubic equations of state 35 cyber-physical systems 483 cyclohexane 256, 407, 427 cyclone 336 data 483 data graveyard 484 data management 76 data mining 485 data science 494 data sheet 431 data-based models 492 databank 79 Deacon reaction 409, 414 dead-end filtration 283 deaeration problem 176 Debye–Hückel term 62 decanter 63 decomposition 211, 231 dedicated plant 113 deep learning 489 deflagration 472 degradation 244, 349, 411 delivery head 301 demineralized water 398, 400 demister 334 denoxation 411, 414


density 162, 214, 216 dependent patent 139 depreciation 510 depressurization 404, 436 design basis 4, 5, 358, 392, 443, 448, 511 design mode 162 design pressure 358, 397, 435–437, 442, 445, 447, 448, 451, 511 design temperature 358, 435, 442, 511 desuperheating 390 detailed approach 209 detailed engineering 6, 8, 453 detonation 472 deviation 511 devil’s advocate 432 dew point curve 29 differences between large numbers 348, 349, 387 diffusion 175, 260, 281, 407 diffusion coefficient 260, 274, 281, 288 digestion tower 424 digital transformation 479 digital twin 122, 496 digitalization 480 dimethylether 427 dioxine 427 dip-pipe 14, 328, 511 direct mode 128, 129 direct steam 391 direct substitution method 93 dischargeable mass flow 457, 464, 465, 470 discontinuous mode (batch) 350 disk-and-donut baffle 168 disperse phase 272 dispersion 268 distillation 83, 204 distillation column control 245 distributor 211, 215, 219, 244 dividing wall column 261 domain expertise 494 double azeotrope 33 double jeopardy 433, 453, 511 double pipe 148, 190, 360, 379 double segmental baffle 168, 195 double tubesheet 165 downcomer 224, 238, 243 downcomer backup flooding 238, 242 downcomer choke flooding 237, 242 downcomer clearance 232, 239 downcomer cross-flow area 239

draft tube baffle crystallizer (DTB) 297, 298 drain 101, 379, 382 driving pressure difference 463 driving temperature difference 105, 106, 108, 125, 178, 181, 184, 187, 319, 394, 452 drones 504 droplet formation 320 droplet size 319 dry pressure drop 219 dualflow tray 229 ductile 359 DuoCondex process 408 durability 283 Duss–Taylor correlation 234 duty of care 146 dynamic simulation 3, 88, 121–133, 441, 446, 453, 455, 497 dynamic viscosity 83, 85, 162, 260, 272, 415, 438 economy-of-scale 20 edge devices 487 effect of temperature on a chemical reaction 347 efficiency 207 electricity 103 electrodialysis 287 electrolyte 49, 60, 61, 77, 260 electrolyte model 77 electrolyte NRTL model 62 electronegativity 61 elementary charge 60 ellipsoidal head 328 energy balanced configuration 249 Engel model 217 enthalpy 70, 119, 348, 511 enthalpy of adsorption 291 enthalpy of fusion 68 enthalpy of mixing 51 enthalpy of reaction 71, 348, 353 enthalpy of vaporization 28, 29, 38, 53, 70, 74, 77, 444 entrainment 183, 185, 224, 225, 235, 236, 240, 242, 266 entropy 511 equation of state 25, 33, 511 equation-oriented approach (EO) 92 equilibrium 350 equilibrium calculation 207 equilibrium constant 345, 348 equilibrium reaction 345


equilibrium reactor 355 equipment arrangement drawing 10, 12 equipment content visualization 115 equipment diagram 113, 115 error message 208 esterification 345, 350, 354 estimation of vapor pressures 56 ethanol 256, 294 ethanol–water 256, 294 ethylene 77, 475 ethylene glycol 257 ethylene oxide 475 eutectic 81 eutectic system 67, 296 evaporation 204, 420 evaporator 161, 187 excess enthalpy 51, 71, 74, 80, 511 excess heat capacity 80 excess volume 52, 511 exergy analysis 113 exhaust air 113, 115, 403 exhaust air treatment 403, 408 exhaust air treatment with membranes 417 exothermic reaction 351 expansion loop 379 expediting 8 explosion 436, 472, 475 explosion limit 475 explosion zone 472 extended Antoine equation 55 external heat exchanger 337 external reactor 355 extraction 83, 271 extraction column 274 extractive 271 extractive distillation 257 extrapolation function 55 F-factor 214 faceplate 130 failsafe position 382 falling film crystallizer 299 falling film evaporator 181, 182, 188 fan 308 feasibility 96, 103 feedback control loop 127 filtration 419, 424 fin 190 fine chemicals 97, 113, 264, 403

� 549

fire 432 fire case 442, 447 fire formula 443 fire point 474 fire prevention 16 fire protection 16 fixed bed reactor 349 fixed costs 19, 511 fixed valve tray 228 flame ionization detector (FID) 476 flammability diagram 476 flare load 122, 456 flare system 436 flash chamber 252 flash fire 472 flash point 415, 473 flashing 467 flood point 214, 216, 217, 220 flooding 211, 214, 266 Flory–Huggins 78 flotation 419 flow 432 flow fractions 171, 172, 192 flow measurement 387 flow path length 232, 234 flow pattern 374 fluid code 99, 378 fluidelastic instability 194, 196 flushing 404 foaming 181, 183, 211, 220, 232, 242, 244, 265, 468 forced circulation crystallizer (FC) 297, 298 forced circulation evaporator 182, 182, 188 formal kinetics 344 forward feed arrangement 104 fouling 102, 122, 157, 158, 170, 172, 178, 182, 183, 187, 190–192, 206, 211, 214, 224, 225, 227, 232, 244, 246, 320, 334, 393, 399, 417, 438, 439, 510 fouling factor 5, 118, 192 fouling layer 192 fouling resistance 192 fractional hole area 224, 228, 234 fracture 359 free cross-flow area 460 free variables 127, 128 frequency converter 302, 306, 319 friction factor 363, 365 friction pressure drop 90 Friedel equation 371, 373 froth 223


froth height 236 froth regime 224 Froude number 372, 511 FTO (Freedom To Operate) 139, 144 fuel gas 512 fuel NO 410 fugacity 36, 345 fugacity coefficient 36, 43, 512 Gantt Chart 116 gas chromatography (GC) 246, 388 gas cylinder 477, 478 gas permeation 283, 286, 287, 418 gas solubility 69, 447 gas-gas exchanger 161, 168, 174 gas-liquid reaction 352, 422 gaskets 99, 186, 378 gate valve 380, 382 gauge pressure 386 gE mixing rules 41, 51, 64, 68, 77, 81 gear pump 308 geysering 178 Gibbs–Duhem equation 46 GIGO principle 92 globe valve 380 glycol ether 415 Grand Composite Curve 110 Grashof number 512 grid baffle 169 group contribution method 27, 79 group polarization 432 groupthink 432 guideword 432, 512 half-open pipe 101 half-pipe coil 337 hardness component 191, 192, 296, 398 hastelloy 398 HAZID 4 HAZOP 4, 17, 433, 512 HAZOP analysis 430 HAZOP leader 430, 431 HCl 260 heat adsorption 405 heat capacity 59, 77 heat conduction 158 heat curve 162 heat exchanger 14, 83, 117, 118, 148 heat exchanger block 90

heat integration 103, 109, 246, 394, 421 heat of formation 27 heat of reaction 351 heat of vaporization 26, 27 heat pump 109 heat transfer 176, 352 heat transfer oil 397 heat transition 157 heater block 90 height of inlet weir 232 height of outlet weir 232 height over the weir 240 helium 85 Henry coefficient 50, 447 Henry component 50 Henry concept 76 Henry’s law 69 Henschke model 270 heteroazeotrope 31, 64, 255 heterogeneous reaction 352 HETP 207, 215, 258, 259, 512 high performance liquid chromatography (HPLC) 388 high vacuum 323 high-fidelity OTS 498 high-precision equation of state 40, 77 hiTRAN elements 174 Hoechst AG 423, 428, 429 Hoffmann–Florin equation 56 holdup 216, 455, 512 hollow fibre module 282 hollow stirrer 353 hot spot 182, 184 hybrid (grey box) models 493 hydrodynamics 207, 216, 217 hydrogen 85, 361 hydrogen fluoride (HF) 57, 77 hydrogen peroxide 421 hyper compressor 314 hypochlorite 410 hysteresis 437 ideal gas 34, 70, 459 ideal gas equation 33 ideal gas heat capacity 70, 411 ideally mixed batch stirred tank reactor 351 ideally mixed continuous stirred tank reactor (CSTR) 351 ignition 472, 475


ignition source 414 impingement plate 175 inclination 175 inclusion 296 individual α-function 68 induced ignition 472 industrial applicability 134, 136 industrial fan 318 Industry 4.0 482 inert gas 175, 447 inert gas supply 478 inert vent 102 infinite dilution 50 influence of the pressure on a distillation column 207 inlet weir 232, 253 inner surface 289 insulation 378, 397 interlock 6, 512 interlock description 431 internal energy 512 internal rate of return 23 inventive step 134 inventor’s bonus 146 inverted batch column 264 investment 19 IPC (International Patent Classification) 137 ISBL 512 isentropic change of state 459 isentropic efficiency 309 isentropic expansion 460 iso-activity criterion 63 isolation valve 380 isometrics 10–12, 16 jacket 337 jacket water 400 Jacobian matrix 94 jet flood 236, 237, 242 jet pump 106, 320, 325 Joule–Thomson effect 287, 478, 512 Karl-Fischer-titration 388 Katapak 354, 355 kettle reboiler 183 Kister–Gill correlation 217 knock-out drum 336 Kühni column 276 KV -value 131, 383

laminar flow 363 Langmuir approach 289 Langmuir–Knudsen equation 185 laughing gas 411 Laval nozzle 320, 368 law of mass action 345 layer crystallizer 299 layout pattern 195 LDPE 40, 77, 313, 461 Le Chatelier mixing rule for LEL 475 Le Chatelier’s principle 346 leak before burst 357 leakage 176, 187 leakage rate 327 Lee–Kesler–Plöcker 77, 78 level 432 level alarm 330 level control 179 level indicator 388 level switch 330, 388 lever rule 67, 512 limiting activity coefficient 27 limiting droplet diameter 333 limiting gas velocity 332 limiting oxygen concentration 477 liquid density 27, 52 liquid heat capacity 27, 71 liquid load 213, 214 liquid nitrogen 407, 408 liquid ring compressor 316, 325 liquid seal 239 liquid-liquid equilibrium 63, 81, 119, 272 liquid-liquid exchanger 161, 162, 174, 176 liquid-liquid separator 268 liquidus line 67, 512 liquiphant 388 LLE 119 load data list 17 load diagram 220, 235 load point 219 LOC 477 local composition 44 local composition model 46 locked valve 382 Lockhart-Martinelli correlation 370 lost time incident rate (LTIR) 429 low-fidelity OTS 498 lower explosion limit (LEL) 475 lubricant 381


lubrication oil 325 Ludwigshafen 427 Ludwigshafen-Oppau 426 Mach number 367, 369, 513 machine learning 488 magnetic flow meter 388 maintenance fees 143 makespan 116, 117 maldistribution 211, 215, 219, 221, 235, 267 man hole 232, 328 mass balance 2 mass balanced configuration 249 mass flow to be discharged 440, 448, 457, 458, 464, 465 mass transfer 260, 352 material 360, 378, 398, 416 material balance 431 material test 361 materials of construction 359 matrix project management 18 maximum allowable overpressure 436, 437 maximum allowable pressure 440 maximum downcomer velocity 238 maximum flux density 459 maximum mass flux density 327, 458, 459, 516 maximum operating pressure 437 maximum relief amount 458 Maxwell criterion 35 measurement 362 mechanical efficiency 309 mechanical foam deletion 243 mechanical stability 378 mechanical strength 4, 357–360 mechanical tension 357 mechanical vapor recompression (MVR) 105, 106, 317, 421 medium vacuum 323 melt crystallization 299 melting temperature 68 membrane 120, 258, 280, 417, 421 membrane compressor 313 membrane process 404, 405 membrane pump 307 membrane separation 104, 280 membrane valve 380 MESH equation 207, 265 methane fermentation 424 methyl isocyanate 428

Michaelis–Menten kinetics 348 microfiltration 280, 419 microorganisms 416, 417, 424 micropore 289 middle vessel 265 mindset 432 minimum allowable temperature (MAT) 360 minimum bypass 305, 447 minimum cross-section area 458, 467 minimum design metal temperature (MDMT) 359 minimum downcomer residence time 238 minimum liquid load 214, 219 minimum safety distance 13 minimum vapor load 241 minimum vapor velocity 241 miniplant 274 miscibility gap 31, 49, 63, 272 mixer 91 mixer-settler arrangement 275 mixing rule 50, 52, 84, 85 model change 82, 119 model choice 76 Model Predictive Control (MPC) 499 Modified UNIFAC 79–81 modularization 14 molecular modeling 81 molecular sieve 257, 288, 289, 294 Monday fire 418 Montz A3 213 Moody diagram 363 motive steam 106 multieffect evaporation 104, 421 multipass tray 240 multiple condensers 210 multiplier 91 multipurpose plant 350 multipurpose unit 113 Murphree efficiency 233, 244, 258 nanofiltration 280, 419 natural circulation 177 natural frequency 196 net present value 23 Newton method 94 Newton number 353 Nikuradse 365 nitrous oxides (NOx ) 410 no-tubes-in-window baffle (NTIW) 168, 195 node 432, 513


noise 190 non-Newtonian flow 362 non-reasonable risk 433 normal boiling point 27 normal operating condition 358 notched weir 240 novelty 134 nozzle 173, 196 nozzle size 331 NPSH 14, 252, 302, 303, 306, 328 NRTL 44–46, 49, 62, 64, 69, 273 NRTL electrolyte model 63 nutritient 416 Nußelt number 513 o-nitroanisol 429 objective function 4 O’Connell correlation 234 Ohm’s law 157 oil diffusion pump 326 ω-method 450, 467, 470, 516 OPC (Operational Patent Committee) 144 open balance point 226 operating manual 17 operation costs 19 operation modes 124 operational characteristics of a jet pump 322 operator training simulators 498 OPEX 1, 19 orifice 458 OSBL 513 oscillating displacement pump 307 oscillation 456 Oslo type crystallizer 298 osmotic pressure 283 out of balance 208, 250 outlet line 464, 466, 467 outlet vapor fraction 180 outlet weir 232, 239 overall plot plan 12 overdesign 170, 178 overflow nozzle 330 overpressure from outside 360 oversaturation 296 oversizing a safety valve 456 ozone 421 package unit 513 packed column 206, 210, 455

� 553

pair parameter 62 Pall ring 211, 215 parallel baffle cut 175 parallel orientation 168 passive fire protection 16 patent monitoring 144 patent strategy 143 patentability 134 PC-SAFT 78 Peng–Robinson equation 36, 68, 76, 77 permeability 280, 281 permeate 280 perpendicular orientation 168 “person skilled in the art” 135 pervaporation 283, 286 PFR with recycle 352 pH value 513 pharmaceuticals 97, 113, 264, 403 phase equilibrium 354 phase inversion membrane 281 phosphorus 413 physics-based (first principle) models 491 picket-fence weir 241 PID 98–103, 305 pilot plant 274 pinch 108 Pinch method 107 pipe 90 pipe insulation 99, 378 pipe module 282 piping 13, 16, 17, 98, 362 piping and instrumentation diagram (PID) 6, 17, 328, 431, 513 piping class 17, 378 piping element 300, 373 piping isometric drawing 12 piping layout drawing 10, 13 piston compressor 313 piston pump 307 plant characteristics 301 plant layout 4, 8 plant-model mismatch 501 plate heat exchanger 148, 186 plate module 282 plot plan 10, 431 plug valve 381 plug-flow reactor with recycle 352 plug-flow tubular reactor (PFR) 351, 356 Podbielniak extractor 278


polymer 78, 97, 264 polymerization 229, 334 porous membranes 281 power consumption 309 power failure 453 Poynting factor 42, 43 Prandtl number 514 Prandtl/v. Karman 367 predictive maintenance 501 preheating zone 177, 178 pressure 432 pressure control 250 pressure difference measurement 387 pressure drop 174, 216, 237, 244 pressure drop calculation 362 pressure drop correlations for two-phase flow 467 pressure drop of the irrigated packing 219 pressure hydrolysis 422 pressure measurement 386 pressure rating 378 pressure relief 131–133, 404, 435, 436 pressure relief device 358, 436 pressure-swing adsorption (PSA) 292, 294 pressure-swing distillation 255 preventive maintenance 5 principle of countercurrent flow 205 priority year 137 process control system 386, 433, 514 process description 3, 432 process flow diagram (PFD) 3, 431, 513 process simulation 2, 87, 120 process water 514 product-by-process claims 146 profile 208 project manager 18 pseudo-critical pressure 162 pseudo-critical temperature 162 pseudo-stream 209 PSRK 38, 77, 79, 81 Pt-100 386 pulsation 276 pump 14, 90, 300, 366, 433, 447, 457 pump characteristics 301, 308, 447, 453, 457 pump efficiency 300 pump failure 453 purge stream 93 pv diagram 34 pxy diagram 29

Rackett equation 52 radial turbo compressor 315 random packing 211, 215, 259 Raoult’s law 31, 42 Raschig ring 211 Raschig Super-Ring 212 Raschig Super-Ring Plus 212 rate of relief 436 rate-based 88, 119, 207, 221, 258, 261, 272, 274, 415 rating mode 162, 170 reaction equilibrium 345, 354 reaction factor 344 reaction force 456 reaction kinetics 120, 343, 346, 348, 352, 354, 422 reaction rate 343, 347, 352 reactive distillation 260, 353–355 real continuous stirred tank reactor 351 real-time optimization (RTO) 500 reboiler 177 recipe 113 recommended velocities in a pipe 365 rectangular notches 241 rectification 204 rectifying section 262, 514 recycle stream 92 redistribution 219 reflux 205 reflux ratio 206, 209, 250, 264 refrigerated water 400 regeneration 288 regenerative thermal oxidizers (RTO) 413 reinforcement learning 489 relief amount 446, 447 relief pressure 446 repulsive forces 436, 456 residence time 178, 181, 184, 231, 233, 238, 241, 242, 245, 278, 299, 350, 351, 354, 411, 412 resistance thermometer 386 restriction orifice 436 retentate 280 reverse flow 123 reverse osmosis 104, 283, 421 reverse reaction 345 reversed mode 128, 129 Reynolds number 332, 371, 514 Richardson’s law 249 right of prior use 143 rigorous simulation 88 risk parameter 434


rod-baffled exchanger 195 rod-type baffle 170 rotary piston compressor 315 rotary vane pump 315, 325 rotating disc contactor (RDC) 276 rotating displacement pump 308 rough vacuum 185, 323 roughness 363 runaway reaction 349, 350, 354, 450 rupture disc 358, 431, 436, 438, 464 safeguard 432, 515 safety integrity level (SIL) analysis 433 safety valve 6, 16, 99, 122, 358, 366, 431, 436, 440, 445, 447, 454–458, 464, 468, 514 safety valve inlet line 464 safety valve lift stop 465, 466 safety valve outlet line 464 safety valve two-phase flow 467, 468 sampling 389 saturation zone 291 scale-up 88, 274, 354 schedule view 116 SCR process 411 screw compressor 314 sea water 398 sea water desalination 284 seal pan 253 sealing strip 169, 171 search key 145 search report 142 Second Law 458 sedimentation 419, 424 sedimentation curve 269 seed crystal 296 segment 259 segment height 259 selectivity 288, 342 self-ignition 472 semicontinuous mode 350 semipermeable membrane 283 sensitivity spider 24 separation block 515 separation factor 26, 210, 234, 515 separation sequence 209 separator 91 separator vessel 181 sequential flowsheeting 92 Seveso accident 427, 430

� 555

shaking trials 274 shell orientation 165 shell-and-tube heat exchanger 118, 148, 160–187, 190, 450, 451 shell-and-tube type 148 short path evaporator 185 shut-off valve 382 sieve tray 224, 454 signal transducer 102 SIL classification 433, 434, 454 silicium 411, 413 simplifying assumption 456 simulated moving bed 205, 292, 293 simulation mode 162 single segmental baffle 167 siphon 269 siphon breaker 331 six-tenth-law 20 sloped downcomer 238 sludge 423 slug flow 375, 376 smart data 485 SMB 293 SNCR process 411 Soave–Redlich–Kwong equation 36, 68, 77 sodium cyanide 429 soft sensors 488 solid-liquid equilibrium 67, 296 solubility 281 solubility membranes 281 solubility of gases 51 solvation 62 space demand 190 specialty chemicals 97, 113, 264, 403 specific enthalpy 70, 162 specific surface 289 speed of sound 367, 369, 458, 460, 464, 467, 472, 478, 513, 515 spiral heat exchanger 148 spiral wound module 282 splash baffle 253 split balance 88 split block 3, 515 splitter 91 spool piece 269 spray regime 224, 237 spray tower 416 standard enthalpy of formation 70, 71, 348, 411 standard Gibbs energy of formation 347


startup 121 state of the art 134 static crystallizer 299 static liquid head 177–179 static payback period 23 steam 104, 109, 390 steam boiler explosion 397 steam control valve 452 steam inlet control 392 steam trap 393 steamout 358, 515 Stichlmair model 217 stirrer 353 stoichiometric coefficient 344, 346 stoichiometric line 477 stoichiometric ratio 342 stoichiometric reactor 355 stoichiometry 342 stratified flow 375 stripping section 211, 262, 515 structured packing 213, 259 subcooling 177 subcritical flow 384 sulfur 410 sulfuric acid 261 Sulzer BX 213 sun radiation 448 supercritical 50 supercritical flow 384 superheating 391, 392 superimposed back pressure 437 supervised learning 489 supplementary protection certificates 138 surface tension 25, 86, 260, 272 surfactant 269 surrogate models 492 suspension crystallizer 297 system factor 242 system flooding 219, 242 TA Luft 260, 404, 405, 407, 411 tangent line 515 tear stream 92, 515 technical high-precision equations of state 40 TEMA 163, 168 temperature 432 temperature class 475 temperature measurement 386 temperature peak 349

temperature profile 246 temperature-swing adsorption (TSA) 291 ternary azeotrope 256 tetrahydrofurane–water 255 Texas City 426 TH diagram 107 the internet of things 483 theoretical stage 207, 216 thermal combustion 405, 411 thermal conductivity 25, 26, 85, 162, 192, 260 thermal engine 109 thermal expansion 165, 263, 379, 447 thermal expansion valve 448 thermal NO 410 thermal oil 162 thermal resistance 157, 172 thermal stability 183 thermal stress 183 thermal vapor recompression (TVR) 106, 321 thermocouple 386 thermodynamics 207 thermosiphon reboiler 177–179, 182, 187, 193, 251, 370, 392 THF 255 thin film evaporator 184 third-party objections 138 throttle valve 179 Tianjin accident 429 tie-in points 516 tie-rod 169, 171 total organic carbon (TOC) 419, 516 Toulouse accident 428 trace element 416 training simulator 246 transport property 83 tray column 206, 223 tray efficiency 208, 258 tray fixing 456 tray pressure drop 237 tray spacing 232, 236, 238 trickle filter 422 triple point 53 troubleshooting 118 true component approach 63 tube arrangement 195 tube layout angle 166 tube passes 166, 167, 173 tube pattern 165, 166 tube pitch 166, 195


tube rupture 433, 450 tubesheet 165 turbo compressor 315 turbomolecular pump 326 turbulent flow 363 turndown 224, 244 twin plant 292, 407 twisted tubes 193 two-pass tray 234 two-phase factor 371 two-phase flow 101, 177, 180, 370 two-phase flow in the safety valve 467 two-phase flow pressure drop 464 two-phase pressure drop 370 two-phase through the safety valve 468 Twu-α-function 39 Txy diagram 31 typical 5, 516 U-bend 379 U-type 195 U-type heat exchanger 166 ultra-high vacuum 323, 326 ultrafiltration 280, 419 ultrasound 421 ultraviolet radiation 421 underground 17 UNIFAC 80, 81 UNIFAC consortium 81 UNIQUAC 44–46, 50, 64, 69, 78, 80, 273 unsupervised learning 489 unsupported span 195 upper explosion limit (UEL) 475 urea 411 V-notches 240 vacuum 178 vacuum application 219 vacuum column 237, 244 vacuum distillation 206, 231 vacuum pump 308, 323 value creation 21, 96, 103 value engineering 4, 516 valve 90, 366, 380 valve tray 225, 455 van-der-Waals equation 33 van-der-Waals property 80 vapor cross-flow channeling 241 vapor horn 252

vapor inlet from reboiler 251, 252 vapor phase association 76, 77 vapor pressure 27–29, 53, 77, 83, 392, 516 vapor pressure shifting 70 vapor quality 28 vapor recompression 187, 421 vapor-liquid equilibrium 204 vapor-liquid separator 319, 332 vapor-liquid-liquid equilibrium 65 variable costs 19, 516 vent 379 vent nozzle 102 venturi scrubber 416 vessel 91 vessel breathing 115, 404 vibration 168, 183, 194, 195, 243, 306, 380, 387 vibrations 14, 167, 173 virial equation 33 virtual commissioning 503 virtual reality 10 viscosity 25, 26, 182, 216, 260, 295, 362, 468, 469 viscous 438 VLLE 119 volume balance 440, 441 volume concentration 344 volume translation 39, 77 volume-translated Peng–Robinson equation (VTPR) 39 vortex breaker 101, 328 vortex flow meter 387 vortex shedding 166, 194, 195 VTPR 39, 68, 77, 81, 346 VTPR equation state 346 Waco accident 429 Wagner equation 55 Wasserhaushaltsgesetz 419 waste water evaporation 420 waste water incineration 422 waste water treatment 97, 419 water 60 water hammering 397 waterspout 328 wavy flow 375 Weber number 373, 516 weeping 224, 227, 240–242, 247, 266 weeping rate 241 Wegstein method 93 weir 223


weir height 225, 455 weir length 240 weir load 239, 241 wetted area 445 “white box” model 491 Wilson 44–46, 49, 64, 69 wire mesh layer 334 wired packing 213, 244

wispy annular flow 375 working capital 516 yield 342 yield reactor 355 zeolite 289