Process Engineering: Addressing the Gap between Study and Chemical Industry [2nd, revised and extended Edition] 9783110657685, 9783110657647

This book provides a comprehensive introduction to chemical process engineering, linking the fundamental theory and concepts …

Michael Kleiber Process Engineering

Also of Interest

Chemical Reaction Engineering. A Computer-Aided Approach
Salmi, Wärnå, Hernández Carucci, de Araújo Filho, 2020
ISBN 978-3-11-061145-8, e-ISBN 978-3-11-061160-1

Product-Driven Process Design. From Molecule to Enterprise
Zondervan, Almeida-Rivera, Carmada, 2020
ISBN 978-3-11-057011-3, e-ISBN 978-3-11-057013-7

Process Intensification. Design Methodologies
Gómez-Castro, Segovia-Hernández, 2019
ISBN 978-3-11-059607-6, e-ISBN 978-3-11-059612-0

Engineering Catalysis
Murzin, 2020
ISBN 978-3-11-061442-8, e-ISBN 978-3-11-061443-5

Product and Process Design. Driving Innovation
Harmsen, De Haan, Swinkels, 2018
ISBN 978-3-11-046772-7, e-ISBN 978-3-11-046774-1

Reviews in Chemical Engineering
Luss, Dan and Brauner, Neima (Editors-in-Chief)
ISSN 0167-8299, e-ISSN 2191-0235

Michael Kleiber

Process Engineering |

Addressing the Gap between Study and Chemical Industry 2nd edition

Author
Dr.-Ing. Michael Kleiber
thyssenkrupp Industrial Solutions AG
Friedrich-Uhde-Str. 2
65812 Bad Soden
Germany
[email protected]

ISBN 978-3-11-065764-7
e-ISBN (PDF) 978-3-11-065768-5
e-ISBN (EPUB) 978-3-11-065807-1

Library of Congress Control Number: 2019953833

Bibliographic information published by the Deutsche Nationalbibliothek
The Deutsche Nationalbibliothek lists this publication in the Deutsche Nationalbibliografie; detailed bibliographic data are available on the Internet at http://dnb.dnb.de.

© 2020 Walter de Gruyter GmbH, Berlin/Boston
Cover image: Propylene oxide plant in Ulsan / South Korea / thyssenkrupp Industrial Solutions AG
Typesetting: VTeX UAB, Lithuania
Printing and binding: CPI books GmbH, Leck

www.degruyter.com


For Claudia and our no longer little Timon

Preface

You need only 10 % of the things you learn at the university. The problem is: You don't know which 10 %! (University Sapience)

There are so many good textbooks on process engineering. When you start writing a new one, you must wonder what might be its identifying feature, making it different from other textbooks. In fact, I think I have got one. Even after having worked in industry for twenty years in process simulation, design, and development, the author is by no means in a position to give proper answers to every problem. On the contrary, any new process has its own characteristic problems, and you more or less start from scratch. But experience and a good network help to develop an appropriate strategy for getting to a solution, and to distinguish between important and less important knowledge in process engineering.

In the academic world, there is a tendency to over-emphasize theoretical concepts without integrating the application aspects. For instance, many phase equilibrium and physical property data specialists have never simulated a distillation column, which would give a certain feeling for the importance of an activity coefficient at infinite dilution. On the other hand, practitioners have a tendency to believe in a solution which worked once, disregarding that it might have been tied to conditions which are not always given. The corresponding pieces of software are simply trusted in any case, even though nobody can explain what they are based on and what their limitations are. Bridging the gap between university and industry is the utmost concern of this book.

The intention is not to write a textbook for beginners in process engineering, but to equip the reader with the most essential pieces of knowledge for practical applications. It tries to answer the so-called silly questions, things that many students have learned at university without understanding their implications. The target of this book is not to generate specialists but to enable the reader to do something reasonable and to keep the overview. It is not a textbook which gives thorough explanations for every topic listed in the book; for this purpose, some 400 pages are far from enough. In fact, long mathematical and scientific derivations are avoided, and other existing textbooks are referred to where the reader can acquire in-depth knowledge if needed. Instead, we try to explain the meaning of the topics and formulas so that the reader gets a feeling for the relationships and their interpretation. It should enable the reader to take part in discussions, to know where it is worth increasing his knowledge with further literature, and to distinguish between important and less important topics.

To give an example: the author has often been asked to explain what an activity coefficient is. People always expect at most two sentences, in the usual manner of engineers. This is simply not possible. Good textbooks need several pages to explain phase equilibria of pure substances, the difference between mixtures and pure substances, the meaning of the Gibbs energy, and the concepts of excess and partial molar properties. Certainly, this explanation is a requirement for thermodynamicists, but it is not the way for application engineers to understand why they must use activity coefficients (or an equivalent concept) for nonideal mixtures and how to get them. Instead, a "recipe" for the usage of a model is required, and in many projects it is important to explain the necessity of a proper evaluation of the model parameters, and to prevent the project manager from getting the impression that it is just an accuracy fad.

The author is fully aware that the text reflects his own opinion. For example, equations of state are currently favored by most scientific authors. Nevertheless, the author wants the reader to be capable, not perfect and at the state of the art in every area. For this purpose, activity coefficients are the more pragmatic approach and are taken as the standard in this book.

The author is grateful to Jürgen Gmehling, Michael Benje, and Hans-Heinrich Hogrefe, who eliminated many errors and misprints in my draft. Thanks also to Hristina and Olaf Stegmann, who helped me a lot to write reasonable texts in Chapters 1 and 11.

I would like the reader, probably a process engineer in his startup phase, to have a better idea about the 10 % of knowledge which will help him in his professional life. And of course: always have fun in your job!

And remember:
There is always much more to learn than can ever be taught! (Peter Ustinov)
A fool with a tool is still a fool. (Grady Booch)

Preface to the 2nd edition

Three years have passed since the first edition. I intended to write a book with a special focus on students who have just finished university and are getting acquainted with the industrial point of view. The feedback has given me confidence that I succeeded with this purpose.

Now I was told that it is time for a revision. I had waited for this announcement for a long time. As always, after a text is finished, you start to find the first mistakes, and any feedback, although usually positive, reveals room for improvement. My technique is to keep a file containing all suggestions from readers, so that I have a fast start when the revision comes to the top of the agenda. There were a lot of items to correct, and I have to thank all the readers who contributed to this list, although I am angry with myself about every single mistake.

Furthermore, a few chapters have been extended, and a new one on dynamic simulation has been introduced. As I am not very experienced in dynamic simulation, I had the good idea to ask Mrs. Verena Haas from BASF SE to write this introductory chapter. Verena did an impressive master thesis in our company, and I cannot imagine anyone more appropriate for this job. She was also the one who suggested adding a PID chapter, and after it was written, she made a number of valuable changes to it. When I met her before her master thesis, I regarded her as a possible reader; meanwhile, she has become a valuable partner in writing this book.

Hattersheim, February 2020

And remember:
Experience is a thing you claim to have – until you acquire more of it. (Harald Lesch in: α-Centauri)



Contents

Preface
Preface to the 2nd edition

1 Engineering projects
1.1 Process engineering activities
1.2 Realization of a plant
1.3 Cost estimation

2 Thermodynamic models in process simulation
2.1 Phase equilibria
2.2 φ-φ-approach
2.3 γ-φ-approach
2.3.1 Activity coefficients
2.3.2 Vapor pressure and liquid density
2.3.3 Association
2.4 Electrolytes
2.5 Liquid-liquid equilibria
2.6 Solid-liquid equilibria
2.7 φ-φ-approach with gE mixing rules
2.8 Enthalpy calculations
2.9 Model choice and data management
2.10 Binary parameter estimation
2.11 Model changes
2.12 Transport properties

3 Working on a process
3.1 Flowsheet setup
3.2 PID discussion
3.3 Heat integration options
3.4 Batch processes
3.5 Equipment design
3.6 Troubleshooting
3.7 Dynamic process simulation
3.7.1 Basic considerations for dynamic models
3.7.2 Basics of process control for dynamic simulations

4 Heat exchangers
4.1 Something general
4.2 Shell-and-tube heat exchangers
4.3 Heat exchangers without phase change
4.4 Condensers
4.5 Evaporators
4.6 Plate heat exchangers
4.7 Double pipes
4.8 Air coolers
4.9 Fouling
4.10 Vibrations

5 Distillation and absorption
5.1 Thermodynamics of distillation and absorption columns
5.2 Packed columns
5.3 Maldistribution in packed columns
5.4 Tray columns
5.5 Comparison between packed and tray columns
5.6 Distillation column control
5.7 Constructive issues in column design
5.8 Separation of azeotropic systems
5.9 Rate-based approach
5.10 Dividing wall columns
5.11 Batch distillation
5.12 Troubleshooting in distillation

6 Two liquid phases
6.1 Liquid-liquid separators
6.2 Extraction
6.2.1 Mixer-settler arrangement
6.2.2 Extraction columns
6.2.3 Centrifugal extractors

7 Alternative separation processes
7.1 Membrane separations
7.2 Adsorption
7.3 Crystallization

8 Fluid flow engines
8.1 Pumps
8.2 Compressors
8.3 Jet pumps
8.4 Vacuum generation

9 Vessels and separators

10 Chemical reactions
10.1 Reaction basics
10.2 Reactors

11 Mechanical strength and material choice

12 Piping and measurement
12.1 Pressure drop calculation
12.1.1 Single-phase flow through pipes
12.1.2 Pressure drops in special piping elements
12.1.3 Pressure drop calculation for compressible fluids
12.1.4 Two-phase pressure drop
12.2 Pipe specification
12.3 Valves
12.3.1 Isolation valves
12.3.2 Control valves
12.4 Measurement devices

13 Utilities and waste streams
13.1 Steam and condensate
13.2 Heat transfer oil
13.3 Cooling media
13.4 Exhaust air treatment
13.4.1 Condensation
13.4.2 Combustion
13.4.3 Absorption
13.4.4 Biological exhaust air treatment
13.4.5 Exhaust air treatment with membranes
13.4.6 Adsorption processes
13.5 Waste water treatment
13.6 Biological waste water treatment

14 Process safety
14.1 HAZOP procedure
14.2 Pressure relief
14.2.1 Introduction
14.2.2 Mass flow to be discharged
14.2.3 Fire case
14.2.4 Actuation cases
14.2.5 Safety valve peculiarities
14.2.6 Maximum relief amount
14.2.7 Two-phase-flow safety valves
14.3 Explosions

Glossary
List of Symbols
Bibliography
A Some numbers to remember
B Pressure drop coefficients
Index

1 Engineering projects

An engineering project is a huge and complex task, usually involving several hundred people. Coming from university, having just finished one's studies, one usually has no clue what is going on beyond one's own desk. In fact, the construction of a chemical plant is often compared to the erection of the pyramids in ancient Egypt. While the weight of a chemical plant is much lower, its complexity is by far greater, and the project can usually be completed in approx. three years instead of twenty. The target for a beginner must be to become an increasingly larger cog in the machine. First, an overview of the particular phases and activities must be obtained.

1.1 Process engineering activities

A plant always belongs to somebody whose target is to quickly earn money by producing the substance the plant is designed for. At the beginning, a feasibility study has to be done. A market analysis is performed, which hopefully shows that it is worth starting a more detailed project. For a new process, it has to be checked whether it is possible to overcome the technical difficulties. The legal situation with patents and licenses has to be clarified, and possible locations for the plant are compared, whereby it is often necessary to consider different energy prices or transport costs for raw materials and products. A realistic production capacity and an impression of the investment (CAPEX) and operation costs (OPEX) must be available before starting a project (Chapter 1.3). For the production capacity, it must be taken into account that no plant is in operation all the time; usually, 8000 h per year are scheduled, giving approx. 90 % availability. A corresponding overcapacity must be provided in the design (see the short calculation below).

It is important to know that an engineering project is not a sequential process, where e. g. first the reactors are planned and finished, then the product purification, and so on. This would actually be impossible, because due to the recycle streams in the process a complete engineering design of a single part of the plant could never be achieved. Instead, all parts of the plant are worked out simultaneously, with increasing accuracy and degree of detail. The advantage is that possible bottlenecks and difficulties are detected as early as possible, the interconnections are identified early, and an appropriate number of project participants can be assigned to work on the various parts of the plant. Certainly, this is not the way we are used to in everyday life, and there is often doubt as to whether it makes sense to perform a design of a piece of equipment while it is clear that the input streams are just preliminary and will change several times during the project. Nevertheless, as mentioned, it is most important to get an overview of the process as soon as possible. And with today's tools, the design from the previous phase is usually an ideal starting point when the preconditions have changed.
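To make the availability figures above concrete, a minimal Python sketch is shown below. The 200 000 t/a capacity is taken from the fictitious example in Table 1.2 at the end of this chapter; the rest is plain arithmetic and not a statement from the text.

```python
# Illustrative only: hourly design rate and availability for an example plant
# (200 000 t/a and 8000 h/a as in the fictitious Table 1.2 of this chapter).
annual_capacity = 200_000   # t/a
scheduled_hours = 8_000     # h/a actually on stream
calendar_hours = 8_760      # h in a year

availability = scheduled_hours / calendar_hours      # ~0.91, i.e. approx. 90 %
design_rate = annual_capacity / scheduled_hours      # 25.0 t/h needed to meet the annual target
continuous_rate = annual_capacity / calendar_hours   # 22.8 t/h if the plant never stopped

print(f"availability : {availability:.1%}")
print(f"design rate  : {design_rate:.1f} t/h")
print(f"overcapacity : {design_rate / continuous_rate - 1:.1%} vs. continuous operation")
```

In other words, a plant that must deliver its annual capacity within 8000 operating hours needs roughly 10 % more hourly capacity than a hypothetical plant running all 8760 h of the year.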


The engineering process is divided into certain phases, which are in principle:
– Conceptual Design
  Target: The process is fixed, the feasibility is checked, the risks are identified.
– Basic Engineering, also called FEED (Front End Engineering Design)
  Target: Preliminary elaboration of the plant, all documents available as well as possible.
– Detailed Engineering
  Target: Complete and accurate description of all parts of the plant and all aspects of building.

The process engineer should know what the follow-up activities of his calculations are. The first phase in a project is the conceptual design, where the first mass and energy balances are prepared, often based on lab trials and estimations. The mass and energy balance is a key issue for all the following activities up to the phase of detailed engineering. A change in the mass balance often has a major impact on all other participants of the project, so it is desirable to make it as exact as possible, and to update it as soon as it makes sense.

There is a certain misunderstanding as to what a mass and energy balance really is. The term "process simulation" is very common, and is also used here, but it hardly applies. In fact, what the process does in the steady state for a given set of inlet conditions is calculated, i. e. the streams and the operating conditions of the particular pieces of equipment. Sometimes, the purpose is in fact to find out how the plant or the equipment behaves, at least how it reacts, and what the sensitivities are. However, in most cases, its purpose is to generate the data for the design of the equipment, applying conservative cases concerning process conditions or impurities. The exact process conditions that would enable the process engineer to really "simulate" the plant are usually not known, at least not in the Conceptual Design phase.

Despite these frequent misunderstandings, "process simulation" is nowadays well acknowledged as a useful tool. It requires a well-trained process engineer with a profound knowledge of the process itself, its thermodynamics (Chapter 2), the various pieces of equipment and their peculiarities, and the simulation experience needed to achieve convergence in the simulation flowsheet, which often turns out to be complex. Nowadays, some well-established commercial (ASPEN, HYSYS, ChemCAD, PRO/II, ProSim) and inhouse process simulators (Chemasim at BASF, VTPlan at Bayer) are available, performing calculations that would have been considered absolutely impossible 30 years ago. The genuine process simulation showing the actual plant behavior with respect to the design of the equipment, the startup behavior, and the process control is called dynamic simulation (Chapter 3.7). Its application is becoming more and more popular, and conventional process simulation can be used as a starting point for the dynamic version.

Sometimes, single process steps remain unknown and are represented in the mass balance by simple split blocks. At least, there must be a concept of how to overcome this lack of knowledge and what the effort might be. At the beginning of the basic engineering, these points should be completely clarified, and a full mass and energy balance must be available. How this is done is the subject of Chapters 2 and 3. It is desirable that pilot plant activities take place to confirm the mass balance and to assess the influence of the recycle streams. The main purpose of such an activity is to see whether all components are accounted for and whether none of them accumulates in the process.

The particular pieces of equipment are preliminarily designed according to the current knowledge so that it becomes clear which pieces of equipment are critical, either because of their size or because of possible delivery limitations. It must also be considered whether the plant can be operated at reduced or increased capacity, which might be necessary for a certain period of time.

Useful tools are the process flow diagrams (PFD), where the whole process is visualized, including the main control loops (Figure 1.1). A PFD is a document for understanding the process; operating data for the important streams and blocks are usually included. The counterpart of the PFD is the process description, which describes the PFD in written form. It should not be excessively detailed, as its main purpose is to enable the reader to understand the essentials of the process. At the end of the conceptual design phase, equipment and operation costs, and hence the feasibility and their basis, are better defined, often with respect to a possible location.

Figure 1.1: Example for the detailing in a PFD.

In a so-called HAZID (HAZard IDentification) the main issues concerning the safety of the process are first discussed and listed, often with first recommendations.

At a later stage, the so-called HAZOP will take place, where all relevant safety issues are discussed (Chapter 14.1). Finally, lists of utilities, raw products, auxiliary substances (e. g. catalysts) and emissions (exhaust air, waste water, solid and organic wastes) are issued.

In the conceptual design phase, the design of the equipment can be done in a preliminary way using rules of thumb. A first optimization of the process should be performed. In process development, optimization is rarely a mathematical problem where an objective function is defined and somehow minimized. Process simulator programs offer such a function; however, the author's experience is that in most cases process optimization cannot be translated into an objective function, as many soft factors have to be considered (e. g. danger of fouling, increasing complexity, material issues, ease of startup etc.). Equipment costs can be estimated by a process engineer as long as only dimension changes are involved; however, it takes a specialist if the type or the material of the equipment changes.

The number of team members in the conceptual design phase is comparatively low, as the process engineering tasks are usually complicated but of a limited extent. The complexity of the process development is addressed by an iterative procedure, where many options are tested to achieve stepwise progress towards an improved process. There is no clear workflow plan; instead, the creativity of the project members is decisive [1]. Nevertheless, it is desirable to compose a comprehensible documentation to save the process knowledge which was gained during the assessment of the various options. As there is no special structure available for this purpose, the documentation is done with a final report.

A successful and systematic way of optimizing a plant is the so-called value engineering procedure. It starts with a brainstorming session, where all ideas of the process team are formulated, collected, and clustered, both old ones and completely new ideas. Afterwards, these ideas are distributed back to the members of the team. In a standardized procedure, the impact of an idea on CAPEX and OPEX is carefully and comprehensibly evaluated, and the team can finally decide whether the particular ideas are adopted or not.

In the basic engineering phase, the focus of the engineering moves from the process design parameters like operating temperatures and pressures, flowrates or compositions to the geometric dimensions of the process equipment, design temperatures and pressures (Chapter 11) and materials as parameters for the mechanical strength, and the plant layout [2]. First, the design basis must be specified. The design basis is a document which fixes the constraints of the project, e. g. formal things like capacity, operating hours per year, the apportionment of the plant into units, a general description of the plant, consumption figures, and the targeted quality of the product. Discretionary decisions during the project should be avoided as much as possible. The ranges of meteorological data are fixed, i. e. the barometric pressure, the temperature, the air humidity, and other data such as possible rainfall and its frequency, wind data, data on solar radiation, sea water temperature in coastal regions, tidal data, and geotechnical data such as the ground carrying capacity or the frequency and strength of earthquakes.


The minimum and maximum conditions of the utilities (steam, cooling water, demineralized water, process water, brine, instrument air, nitrogen for inertization, natural gas, electric current, etc.) are set so that the engineer can choose the design cases with the most unfavorable conditions. The compositions of the raw materials are defined, as well as those of the waste streams. The constraints for construction and design are set, such as the allowed tube lengths and shell diameters of heat exchangers, the fouling factors, and the overdesign to be chosen. The engineering standards and guidelines to be applied are listed for the various activities. Often, the client company has its own standards, which are explained in the so-called typicals, where the arrangement of standard equipment is illustrated (Figure 1.2).

The philosophy of backups should be defined, e. g. for pumps. If a pump fails, it is possible that the whole plant must be shut down. A spare pump which is already installed can solve this problem. In less urgent cases it might be sufficient to have a spare pump in the storehouse. Also, a preventive maintenance strategy is often applied, where devices are maintained or even exchanged after a certain time when experience indicates that a failure becomes probable. Any deviation from the design basis must be reported to the client. The design basis is continuously updated until the engineering is finished.

Figure 1.2: Example for a typical of a control valve arrangement.

In the basic engineering, the process equipment is designed both with respect to its function in the process and to the mechanical strength requirements. For each piece of equipment, the first issue of the data sheet is released, containing the data derived from the process, i. e. the process parameters, the dimensions, the nozzles, the materials of construction, and a specification of the insulation [2]. From that point on, the piece of equipment is more or less decoupled from the process itself; the responsibility is taken over by the specialists for construction or machinery, who work out the specifications from the process, especially the mechanical strength. Besides the equipment, specifications are also prepared for the measurement and control devices, the piping, and the safety valves.

Detailed lists for equipment, utility consumers, electrical consumers, emissions, control equipment, and instrumentation are compiled. In collaboration between process and instrumentation engineers, the interlocks are defined. An interlock is an automatic action of the process control system due to safety considerations or equipment protection. Examples are the switch-off of a pump when the liquid level in the vessel upstream falls below the limit (LSLL, level switch low low) to avoid cavitation, or the stop of feed flow and steam to a column after a pressure high-high switch (PSHH) has actuated.

The most important work of the process engineer in the basic engineering is the setup of the piping and instrumentation diagrams (PID), which are the most elaborate documents of the process. While the PFD contains only process-relevant lines and equipment, the PID also shows other equipment, e. g. auxiliary lines for startup, safety devices, and valves. The instrumentation engineers play a decisive role in fixing the concept for measurement and control, which comprises a large part of the PID work. The PID is continuously updated in the detailed engineering and should finally contain the following information [2]:
– all equipment and machinery, including design data and installed spare parts (e. g. substitute pumps);
– all drives;
– all piping and fittings, nozzles;
– design information for the piping;
– all instruments, control devices and control loops, interlocks with the corresponding signal flow;
– all check valves, safety valves, level gauges, drain lines;
– dimensions and operating data of equipment and machinery, materials of construction, elevations;
– the battery limits, i. e. an illustration of the agreement on the scope of the project;
– the tie-in information, i. e. how the new plant is linked to existing equipment and utilities.

Figure 1.3 shows the elaborate PID based on the PFD shown in Figure 1.1. Note that only one column is depicted; because of the increased detailing, the second column is on a separate PID. As a rule of thumb, at most 2–3 pieces of equipment can be represented on one PID sheet. Thus, it is normal that a major plant documentation comprises 100–150 PID sheets. One of the most important tasks of the project management is the coordination of the various teams and the time schedule for the provision of the particular pieces of information on the PID. There should always be a "master" PID, where all the changes are supplemented manually. In fact, there is a guideline for these changes:
– red color: supplements;
– blue color: omissions;
– green color: comments.


It is a useful agreement to indicate who has made the various changes. At certain stages, the PIDs are frozen, and a new issue is printed out. This happens several times during a project, and minor changes are even made during commissioning. The final PID set is called the “as built” status.

Figure 1.3: Example of the detailing in a PID.

The basic engineering should contain all the information in such a way that the detailed engineering can be performed without difficulty, and possibly without profound knowledge about the process itself. This is an important requirement, as the detailed engineering is not necessarily carried out by the same company as the basic engineering. One target of the basic engineering is usually a cost estimation of ± 30 %, based on budget price offers for the most important equipment and on scaled prices for standard equipment. While in the basic engineering only the essential information like dimensions, operating conditions or materials is prepared, in the detailed engineering the complete specification of the whole static equipment and machinery is generated, which enables manufacturers to submit bids. In the detailed engineering the documents of the basic engineering are further elaborated and finally fixed, so that the following activities can take place [2]:
– preparation of bid invitations for equipment, materials, civil work, and construction;
– selection of manufacturers and vendors;
– quality assurance operations on vendors and manufacturers ("expediting");
– planning of the transport of plant equipment;
– execution of civil work and plant construction;
– commissioning.

The information from the equipment manufacturers is considered and implemented in the documentation. In this phase, layout, piping construction, process, electrical and other types of engineers work closely together. In the detailed engineering, inconsistencies become less and less permissible. In [1] an example is described: the process engineer fixes the necessary pipe diameter with respect to the requirements of the process. For reasons of cost and accuracy, the instrument engineer chooses a smaller diameter for the flow measurement. This gives a temporary inconsistency, which is tolerable. However, a change is necessary: either the pipe or the instrument diameter must be changed, or a reducing adapter must be used. At the end of the project, this change must have been performed; otherwise, money will be wasted due to the ordering of wrong materials, not to mention the possible time delay for the project.

1.2 Realization of a plant

The engineer's work is a struggle between the pencil and the eraser. If the pencil wins, there is a chance of getting things finished at some point. (Dimitar Borisov)

With the ongoing project, the plant layout¹ becomes more and more a high-priority item. Plant layout is a procedure which involves knowledge of the space requirements for the facilities and also involves their proper arrangement so that continuous and steady movement of the products takes place [3].

During recent decades, there has been great progress with regard to tools. Previously, the documents were produced with drawing ink, using special pens with different line widths. Erasing a mistake was a risky procedure; instead of an eraser, a razor blade was used to remove the ink.² The drawings were archived on microfilm. Before the computer evolution began, the only way to convey a 3D impression was isometric drawings. While this seems possible for piping illustrations, one can hardly imagine that this was ever appropriate for equipment drawings, especially if the drawing is subject to changes. Three-dimensional (3D) objects cannot be effectively described in a two-dimensional (2D) space [4]. At least two views of the object are required (see below). From the 1980s on, layout models made of plastics were constructed.

1 The author is grateful to Mrs. Hristina Stegmann, who gave the main part of the input to this section.
2 Of course, razor blades do not distinguish between the ink and the skin of the user. The blood losses of the author during his studies, usually spread on the drawing, were incredible.


Figure 1.4: Example of a detailed plastic model.

These plastic models were very useful, as they conveyed a good impression of the final appearance of the plant [2], and they still count as objects worth being shown on guided visitor tours in an engineering company (Figure 1.4). They were set up according to the 2D drawings to verify the concept or to solve special problems concerning piping. Plastic models gave an immediate overall impression, but from the engineering point of view, they had severe disadvantages:
– it was practically impossible to implement major changes;
– complete representation of the whole plant was a huge effort;
– the accuracy of such a model was limited.
More details can be found in [4].

In the era of scale models, there were many anecdotal stories of fingers being glued together, inhalation of noxious fumes from plastic solvents, lacerations from cutting tools, and dissolved fingerprints. Today, the 3D CAD modellers suffer from carpal tunnel syndrome and e-mail overload. (found in [4])

With increasing computer capacity, 3D documents have become standard, and their precision is amazing. They can hardly be distinguished from a photograph after the construction is finished. A number of programs are on the market, usually with a quite considerable license fee.

While the elementary functions can be operated quite rapidly, it takes a few months until an engineer can claim to use such a system properly. The programs are able to show the plant with any level of detail from any point of view. The required documents can be created automatically, i. e. plot plans, equipment arrangement drawings, piping isometrics, and piping layout drawings. Other documents such as line lists or equipment lists can be consolidated. Creating a 3D model can be as time-consuming as manual drawings, but the saving of time and work occurs downstream in the workflow [4]. Figure 1.5 shows an example. A possible drawback of 3D models is an overconfidence in them, just because of their precision and the amazing visualization technology. It is still the engineer whose abilities are crucial for the quality of the model. The program does not prevent him from making mistakes.

Nowadays, virtual reality (VR) programs offer the next option: a pair of data goggles enables the user to walk through the plant and check, for instance, its operability. The system is so close to reality that people like the author who have no head for heights are quickly led to the limits of their capability. In recent years, this technology has become more and more accessible and less expensive. Connected to a dynamic process simulation, VR is supposed to be a valuable training tool for the staff, who can be exposed to a wide range of training scenarios, including hazardous situations. It increases the process and safety procedure knowledge, improves plant reliability, and lowers the accident rate [5]. Whether just 3D or VR is applied will probably be a matter of the cost-benefit ratio. More information can be found in [6].

Figure 1.5: Example of a 3D representation. Courtesy of AVEVA GmbH.

The layout work already begins in the proposal phase of a project. A first concept must be set up in order to give a first idea about the appearance of the plant and to confirm that the proposed area on the site is sufficient. During the conceptual phase, precise information about the equipment is usually not known, unless reliable data from a reference plant are available. Only the size of key equipment such as reactors, large tanks, and silos can at least be estimated according to their capacity.


Therefore, the particular pieces of process equipment are often represented by placeholders with an estimated preliminary size. Their position inside the battery limits is defined with regard to numerous safety and service requirements, considering the natural process flow for an efficient arrangement. When new information is generated, the layout plan is continuously updated. With the first issues of the PIDs, it becomes more and more elaborate, the adjustments to be made become less significant – and it makes sense to begin with the 3D piping work.

There are a number of 2D documents which are often relevant. The genuine documents containing the engineering information are drawings which show the top and the side views. Isometric drawings are often produced to convey a better impression of the plant. The three axes of space are not perpendicular to each other but angled towards the viewer with 60° angles between them. The lengths are distorted; they appear foreshortened by a certain factor. Isometric drawings are more useful in architecture. In engineering, they are only produced for illustration and usually not used as documents containing the information.³ The relevant 2D documents are:
– Overall plot plan: Shows an overview of a complete plant including the battery limits (Figure 1.6).
– Area plot plan: Shows the overview of a part of a plant, e. g. a unit.
– Overall plot plan, isometric view: Shows the view of the complete plant from two opposite directions. It is not really necessary, but gives a good projection of the plant. Nowadays, there are tools which can easily provide this.
– Equipment arrangement drawing: Turns the focus to the particular pieces of equipment. The general rule is that each piece of equipment must be visible from two sides to have a clear definition; usually the top view for defining the ground and the side view to define the tangent line and the center line elevation are required.
– Iso-view of the equipment arrangement drawing: Again, the iso-view does not provide information for the engineer, but it is easy to produce and gives a good idea of it.
– Pipe iso (piping isometric drawing): Isometric representation of a pipe with the coordinates of beginning and end and the bends (Figure 1.9). It is automatically generated for all lines; it is used for ordering the material, the manufacturing, and the fitting into the plant. It is also useful for process technology if a pipe must be carefully calculated, e. g. for an exact pressure-drop calculation. Inlet and outlet lines of safety valves are well-known examples.

3 For piping description, they are essential.


Figure 1.6: Example for an overall plot plan.



– Piping layout drawings: Piping layout drawings are equipment arrangement drawings which show not only equipment and supporting structures but also the pipe lines available in the represented area. As this kind of drawing is often overloaded with information, it is being increasingly replaced by the direct use of the 3D model. Nowadays, viewer programs are available where engineers can use the 3D model without special knowledge about design.

In basic engineering, the main dimensions of the particular pieces of equipment are outlined. The goal is that they are as exact as possible, so that other activities like piping and static calculations for the steel structure can get started in a reasonable way. Other details of the equipment such as nozzles are still omitted. Also, the safety distances between the units or the pieces of equipment are worked out, considering that service and maintenance concepts are set up as well as construction procedures (e. g. necessity of cranes). Taking into account that later on people will spend time in the plant, the concepts for fire and explosion protection, the location of safety showers and escape routes and the removal of flammable liquids in case of an accident have to be clarified.


For layout, there are a number of basic principles that should be followed, among them:
– Follow the process flow to keep the pipe lengths short. For example, if the condenser of a distillation column is located on the 2nd floor, put the reflux drum on the 1st floor and the corresponding pump on the ground floor. Utility units should be placed near the corresponding tie-in.
– Consider the minimum safety distances. E. g., compressors or units containing a compressor, such as cooling units, need a distance of 9 m to other pieces of equipment, depending on the engineering standard used.
– Estimate the space requirement of the piping around an apparatus and keep it free! At the beginning, there is no information about this, and it is often underestimated. The information is usually generated when it is too late for relocating the apparatus. It is not a solution to conservatively leave some space; the waste of space and the increased length of the pipes are expensive. Nevertheless, some space for unforeseen items should always be considered. Insufficient space for pipe lines and instruments mostly leads to poor operability and personnel safety issues. The estimation of the space requirement is best done according to a reference plant design with a similar capacity. If the reference plant has a significantly different capacity, one should be aware that there is no linear relationship between size and capacity. The process specialists should be able to make a good first guess. The package units are usually the most challenging tasks. The layout information is provided by the vendor, who is chosen at a late stage of the project. Different vendors might use different technologies, and the space demand can vary considerably.
– First create a concept for all escape and service routes and stick to it! For example, provide continuous corridors on each level, arranged in the same way on every level, so that one can orientate oneself even if there is, for example, heavy smoke.
– Peculiarities concerning maintenance and service must be identified and taken into account. A BEU heat exchanger (Chapter 4, Figure 4.7) will probably be regularly dismantled for cleaning. There must be enough space for this operation, and the dismantled tube bundle should face the road and not the pipe rack. Especially reboilers which can be dismantled are a delicate issue. Likewise, dip-pipes (Chapter 9) are useful for directing liquid flow, but the layout engineers have to reserve space for dismantling above the apparatus.
– Any machines with movable parts like pumps and compressors need regular maintenance, which is most conveniently done on the ground floor. Furthermore, pumps and compressors cause vibrations, which are best controlled at ground level. For pumps, the location on the ground floor (Figure 1.7) makes sense anyway, as the maximum NPSH value is generated (Chapter 8.1).
– Finally, a sense for symmetry is useful. The vessels should be arranged in a straight line, as should the outlet nozzles of the pumps (Figure 1.7). The distances are round numbers, e. g. 2 m or 1.5 m.


Figure 1.7: Typical pump arrangement.

Figure 1.8: Fully modularized process blocks.



– Interconnecting pipes are collected on a pipe rack.

A comparatively new trend is modularization, which means that the whole system is divided into units. These units are called modules, and they are dedicated to a certain process task or unit operation. The modules can be manufactured and assembled in a frame (Figure 1.8). These frames have defined interfaces and can be joined at the site in a relatively easy way. This is useful if only a short time slot for construction and assembly is available.


If it takes a long time to get the permission for the construction of the plant, the project can be shortened if at least the modules are ready. Besides the time savings, some other aspects of modularization are:
– Smaller space demand, but the module frames might be so narrow that piping, maintenance and operation might become difficult. In plant engineering, "narrow" is equivalent to "dangerous", so one has to take care that the safety concept is still fulfilled.
– Lower costs, as assembly does not need to be performed at the site with a large personnel staff.
– Higher quality, as qualified people do the assembly in their own workshop.
– Different types of transport limitations (e. g. container sizes).
– Difficult standardization, as customers' demands often differ and require individual sizing of the equipment.
– Opportunity to test the plant already in the workshop.
Especially in the pharmaceutical and fine chemicals business, more flexibility due to modularization is expected, as units can be rapidly combined or reused in a different application. New developments even aim at software modules which make it easy to integrate a module into the process control system.

Fire protection is one of the most important concepts of a chemical plant, as its target is to protect human life and resources, in this order. It has three components:
– Fire prevention: Fire prevention comprises the education of the staff, the working procedures for fire and explosion prevention, and emergency case procedures. Layout takes care that the safety distances are kept and that access for fire engines is ensured [2].
– Passive fire protection: The main target of passive fire protection is to avoid the spreading of the fire across the plant. This can be achieved by the use of fire-resistant walls and floors and the fire protection of equipment supports and steel structures.
– Active fire protection: Active fire protection is the system for the delivery and distribution of fire-fighting water or, alternatively, the foam generation system.

The piping engineering is an iterative procedure, as it depends on more or less all other disciplines. While the first approaches are based on assumptions, the details become more and more available, and at the end of the detailed engineering the line routing is fixed, i. e. all the bends and lengths are specified in isometric drawings (Figure 1.9) so that each pipe can be manufactured.

The process engineer must then check whether the exact line routing corresponds to the assumptions made in the basic engineering; e. g., the outlet lines of safety valves (Chapter 14.2) must not generate significantly more pressure drop than calculated before, and the pump specification (Chapter 8.1) has to be rechecked. In piping engineering, it is an important procedure to classify all components (pipes, fittings, flanges, valves, sealings, insulation, etc.) with respect to the media and the maximum operating conditions. A piping class represents such a set of conditions. In the PID, the corresponding information is included in the identification code of each line. A piping list is composed which indicates the material requirements for the piping, which are closely related to the costs. Extensive mechanical strength calculations take place. The necessary wall thickness can be evaluated (Chapters 11 and 12), and elasticity calculations are performed to make sure that the stress resulting from temperature changes during operation is tolerable.

Figure 1.9: Example of an isometric drawing of a line.

Further documents which are compiled for the construction are:
– underground coordination plan (foundations, underground lines, pits, channels);
– civil info plan (steel and concrete structures, paving);
– escape and rescue plan (escape routes, eye-showers, alarm buttons, fire and gas detection);
– room data list (surfaces, air conditioning);
– load data list.
The load data list is one of the most important documents, compiling the weights of the particular pieces of equipment, which are decisive for the design of the supporting structures; failures can lead to serious accidents or at least to significant delays.

For the commissioning, an operating manual is prepared where all activities in startup, operation, and shutdown are described in detail, broken down to the positions of the valves for the various activities.


Also, it is checked whether the recommendations from the HAZOP study have been considered. An important tool for this purpose is the so-called cause & effect matrix, which connects failures to the actions of the interlocks and gives an overview of the interlock structure. The electrical demand of the process is specified, concerning voltage levels, locations of the distribution stations, and cable dimensions with respect to the electrical consumer list. Finally, the process control system is set up. It illustrates the operation of the plant. In a central station, all information is gathered and made available to the plant staff. The data are usually stored so that they can be used for later analyses.

The consistency of the documents becomes more and more important for the quality of the engineering, whereas the typical process engineering issues like design, process performance, and economy are regarded as finished. In fact, changes at a late stage of a project are always associated with considerable cost and should be avoided.

It is amazing that even in Germany there is no chair for "administrative process engineering". (heard at an evening meeting of the UNIFAC Consortium)

All engineering activities are strongly affected by the organization structures in the company and in the project. Currently, the most common organization form is matrix project management (Figure 1.10). Each project has a number of engineers coming from the various disciplines (here: process, engineering, procurement, construction; additional disciplines may be piping, instrumentation, electrical, layout etc.).

Figure 1.10: Matrix organization [2]. © Wiley-VCH Verlag GmbH & Co. KGaA. Reproduced with permission.

Normally, they report to the head of their division, and for the time of the project they are also members of the particular project teams, where they report to the project manager. The project manager decides which activities have to be executed and when the due date is, while the heads of the divisions delegate appropriate people to the project and supervise their technical decisions. The project manager is directly responsible for the project and reports to the board of his company; for the customer, he is the main partner in discussions. He ensures compliance with budget costs, time schedules, and quality level [2]. It is neither possible nor necessary that the project manager is a specialist in all disciplines. However, he⁴ should have sufficient knowledge about the activities to be performed in the various subsections of the project, as it is up to him to initiate and control them. Furthermore, he must have a profound understanding of the interplay of the disciplines. Managing a project does not mean checking the cells in the "completed" column of an EXCEL file; one must be able to inquire about the status of an activity and assess the answers [251].

Even after the successful startup of the plant, the engineering work is not yet finished [1], as further projects (e. g. maintenance, revamping, debottlenecking) will take place during the whole life cycle of the plant. For such activities, an accurate documentation of the plant as it was created during basic and detailed engineering is valuable. Not only the current state ("as-built") is needed, but also the history, which reveals the reasons and presumptions for the design of the plant, e. g. the various operation cases relevant for the dimensioning of a heat exchanger. Documentation is, apart from the content itself, an art of its own, usually associated with various software compatibility problems, as software changes with time. Nowadays, it is clear that a process documentation which has to be updated continuously cannot be handed over as a paper version, but only as files created with common software. The data exchange is often difficult if different software packages are applied, even for different revisions and configurations of the same software.

4 The author is grateful to have experienced a number of capable and competent ladies as project managers. However, none of them would use the gender language, and neither do I.

1.3 Cost estimation

Cost estimation is certainly one of the most nontransparent things which engineers encounter at the beginning, as it is not really studied at university. Nevertheless, it is one of the most important parts of any project.

Costs are divided into investment (CAPEX) and operation costs (OPEX); the latter can be further divided into fixed costs (e. g. salaries of the staff) and variable costs, which are proportional to the production output (e. g. raw materials, utilities). On the other hand, there are revenues due to the sale of the product.


Having a heat and mass balance and the specific prices available, one can perform an estimate of the operation costs and revenues. This first approach is certainly inaccurate, but on its basis one can decide whether it is worth continuing with the more labor-consuming investment cost estimation. The revenues must be significantly higher than the operation costs; earning money is the purpose of building a plant, and the payback of the investment costs including interest must take place before anything is in fact earned. During the course of the project, the estimate of operation costs and revenues is continuously updated.

The estimation of the investment costs requires a lot of experience and knowledge about similar or even previous projects of the same kind. In the early stage of the project, until cost estimates for each item are available, the estimation of the investment costs is based on the costs of the major equipment. Additionally, costs for bulk material (piping, instrumentation etc.) and construction are considered as percentages of the major equipment or the total engineering costs; these percentages depend on the various constraints of the project (for example, buildings to be erected, use of expensive materials). E. g., the piping costs are reported to be 20–40 % of the total engineering [2]. Moreover, the requirements of the engineering company have to be considered; the project costs are therefore supplemented by the costs for engineering and procurement, license fees, and a profit margin.

The estimation of the equipment costs should be based on costs known from previous projects. Especially the prices for the materials must be considered appropriately. The dimensions can be accounted for by the six-tenths law

    P(C1) = P(C0) ⋅ (C1/C0)^M ,                                   (1.1)

where C is the capacity of the equipment and P is its price. For the exponent, M = 0.6–0.7 ≈ 6/10 is often a good approach. This refers to the fact that the volume V of an apparatus is proportional to the capacity, while the surface, which determines the necessary amount of material, is proportional to V^(2/3). More elaborate values for the exponent M and orders of magnitude for prices are given in [7]. Equation (1.1) also illustrates the economy-of-scale effect. While the revenues of the plant operation increase linearly with the capacity of the plant, the investment costs both for the equipment and for the whole plant increase much more slowly. The larger the capacity, the smaller the investment costs are in relation to it.

Once the costs for the equipment are calculated, the costs for the whole project can be estimated by means of factors referring to the costs of the equipment. Great experience is necessary to assess their values. From the literature [7], the factors in Table 1.1 can be taken as a rule of thumb for processes with fluids. The item "contingency" accounts for uncertainties in the process or in project execution. Each of the values listed in Table 1.1 has a certain range, depending on the type of plant. The values can reliably be determined if a plant of the same type or at least a similar one has been analyzed before. Both eq. (1.1) and the factor method are illustrated with short calculation sketches below.
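As a brief numerical sketch of eq. (1.1): the reference price and capacities below are invented figures for illustration only, and only the exponent range of 0.6–0.7 comes from the text.

```python
# Six-tenths law, eq. (1.1): P(C1) = P(C0) * (C1/C0)**M
# The reference price and capacities are invented example figures.
def scaled_price(p0: float, c0: float, c1: float, m: float = 0.6) -> float:
    return p0 * (c1 / c0) ** m

p0 = 100_000.0   # $ for a reference apparatus of capacity c0 = 10 (arbitrary units)
for m in (0.6, 0.7):
    print(f"M = {m}: doubling the capacity -> {scaled_price(p0, 10.0, 20.0, m):,.0f} $")
# M = 0.6: doubling the capacity -> 151,572 $
# M = 0.7: doubling the capacity -> 162,450 $
```

Doubling the capacity thus raises the equipment price by only about 50–60 %, which is exactly the economy-of-scale effect described above.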

Table 1.1: Factors for evaluation of the capital costs.

Item                             Factor
Main equipment                   1.0
Equipment stationing             0.4
Piping                           0.7
Instrumentation                  0.2
Electrical                       0.1
Utilities                        0.5
Offsites                         0.2
Buildings                        0.2
Site preparation                 0.1
Total equipment costs            3.4
Engineering                      1.0
Contingency                      0.4
Total fixed capital costs        4.8
Working capital (inventories)    0.7
Total capital costs              5.5

The values can reliably be determined if a plant of the same type or at least a similar one has been analyzed before. The factors become smaller if expensive materials are used; in this case, costs for activities like engineering stay the same, but the costs for the main equipment rise so that the percentage becomes lower.

Capital costs for a chemical plant are often in the range of 50 million to 1 billion €, and a possible investor must be convinced that he will get his money back within a manageable time period. For this purpose, the revenues from the sale of the products must be higher than the operating costs. The best overview is achieved by referring to 1 t of the main product, as Table 1.2 with its fictitious example shows. The calculation of the production costs is not only used for the decision whether a project is feasible or not but also later for the cost control of the process. One should be aware that Table 1.2 is only an example. Depending on the product, the cost structures can of course differ, and in reality the calculation is much more detailed. It can include site-related costs like license fees, fire brigade, staff canteen, site bus, site streets, staff association, and, of course, taxes. Nevertheless, some typical numbers are worth discussing. It is typical that raw materials make up a large part of the production costs. Without a reasonable value creation from the raw materials to the product, there is hardly a chance of making the process feasible. This value creation should be checked first; especially, the amount of product per t of raw material should be checked, which is easily possible even without a detailed mass balance (an example is given in Section 3.1).
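Coming back to the investment estimate: the following minimal Python sketch combines the six-tenths rule (Equation (1.1)) with the factors of Table 1.1. The reference price, the capacities, and the exponent are hypothetical illustration values, not data from this book.

```python
# Hedged sketch: capital cost estimate from the main equipment costs,
# using the six-tenths rule (Equation (1.1)) and the factors of Table 1.1.
# Reference cost, capacities, and exponent are illustrative assumptions.

def scaled_price(p_ref, cap_ref, cap_new, exponent=0.6):
    """Equation (1.1): P(C1) = P(C0) * (C1/C0)**M."""
    return p_ref * (cap_new / cap_ref) ** exponent

# Factors from Table 1.1 (rule of thumb for fluid processes)
factors = {
    "main equipment": 1.0, "equipment stationing": 0.4, "piping": 0.7,
    "instrumentation": 0.2, "electrical": 0.1, "utilities": 0.5,
    "offsites": 0.2, "buildings": 0.2, "site preparation": 0.1,
    "engineering": 1.0, "contingency": 0.4, "working capital": 0.7,
}

# Hypothetical example: main equipment cost known for a 100 000 t/a reference plant
main_equipment = scaled_price(p_ref=20e6, cap_ref=100_000, cap_new=200_000)

total_capital = main_equipment * sum(factors.values())   # 5.5 x main equipment
print(f"main equipment: {main_equipment/1e6:.1f} M$, "
      f"total capital: {total_capital/1e6:.1f} M$")
```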

Table 1.2: Example for the structure of the operating costs.

Capital costs            200 000 000 $
Capacity                 200 000 t/a (25 t/h)
Operating time p. a.     8 000 h/a
Product sales            1 500 $/t

Item                     Consumption   Unit      Price   Unit     Costs ($/t)
Raw materials
  Raw material A              40.0     t/h         400   $/t          640.00
  Raw material B              25.0     t/h         100   $/t          100.00
  Ammonia                      5.0     t/h         600   $/t          120.00
Utilities
  Cooling water           10 000.0     m3/h       0.03   $/m3          12.00
  Steam                      200.0     t/h          20   $/t          160.00
  Electrical energy        3 000.0     kW          0.1   $/kWh         12.00
Effluents
  Waste water                  3.0     m3/h         10   $/m3           1.20
  Combustible waste            0.5     m3/h         40   $/m3           0.80
  Co-product for sale          5       t/h        −150   $/t          −30.00
Variable costs                                                       1016.00
  Personnel                     40     persons   70 000  $/a           14.00
  Overhead                   15 % of personnel                          2.10
  Maintenance, insurance      2 % of capital costs                     20.00
  Capital costs              10 years depreciation                    100.00
Fixed costs                                                           136.10
Production costs                                                     1152.10
Product sales                                                        1500.00
Margin                                                                347.90
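The per-tonne entries of Table 1.2 follow directly from the stream data; the short sketch below recomputes the most important ones. All numbers are taken from the fictitious example above.

```python
# Hedged sketch: recompute the per-tonne cost entries of Table 1.2.
# Consumptions are per hour; dividing by the product rate (25 t/h) gives $/t.
product_rate = 25.0          # t/h
capacity = 200_000           # t/a
capital = 200_000_000        # $

items = {                      # name: (consumption per hour, price per unit)
    "raw material A": (40.0, 400.0),
    "raw material B": (25.0, 100.0),
    "ammonia": (5.0, 600.0),
    "cooling water": (10_000.0, 0.03),
    "steam": (200.0, 20.0),
    "electricity (kWh)": (3_000.0, 0.1),
    "waste water": (3.0, 10.0),
    "combustible waste": (0.5, 40.0),
    "co-product for sale": (5.0, -150.0),
}
variable = sum(c * p for c, p in items.values()) / product_rate

personnel = 40 * 70_000 / capacity            # $/t
fixed = personnel + 0.15 * personnel + 0.02 * capital / capacity \
        + capital / 10 / capacity             # overhead, maintenance, depreciation

print(f"variable costs: {variable:7.2f} $/t")   # 1016.00 $/t
print(f"fixed costs:    {fixed:7.2f} $/t")      #  136.10 $/t
print(f"margin:         {1500 - variable - fixed:7.2f} $/t")
```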

Often, fluctuations of the prices of product and raw materials are decisive even for the feasibility of the process. In comparison, utilities play a minor role; however, the optimization of energy consumption is often the only thing which can be influenced by good process engineering. An electrical consumption of 1 MW per 100 000 t/a product capacity is a typical value, which can be exceeded if the process contains one or more compressors. Energy costs differ greatly according to the site location; on the Arabian peninsula or in the United States steam costs are easily below 10 $/t, whereas in Europe or Asia 30 $/t are realistic.

The numbers for product revenue should not be based on single values on a daily basis. Instead, the market and its fluctuations must be understood. It must be clear on what basis the number is founded and how the forecast has been performed.

Also, one of the main tasks of a process engineer is to explain what the numbers in Table 1.2 are based on.

Co-products which can be sold are sometimes generated; in contrast to this, by-products are often simply losses and have to be disposed of. An additional distribution channel must be established for sellable co-products, and furthermore, from the process point of view an additional quality surveillance is necessary. The cost structure should not indicate that the sale of the co-products would tip the scales to make the project feasible.

Of the fixed cost items, usually only the depreciation of the plant plays a decisive role. Rules of thumb indicate the following ranges for the operating costs:
– capital costs: 15–30 %;
– energy costs: 10–40 %;
– raw materials: 30–90 %;
– salaries: 5–25 %.

The ratio between fixed and variable costs determines whether there is an economy of scale. The variable costs are proportional to the production rate; per unit of product, they do not change in a larger plant. The fixed costs stay constant (e.g. personnel costs) or increase less than proportionally to the capacity (investment costs; see Equation (1.1)). The more the fixed costs determine the cost structure, the larger is the effect of the economy of scale. A good compilation of cost estimation for chemical plants can be found in [8].

There are a number of characteristic values to assess the feasibility of a project. The most widely used one is the so-called net present value [9, 10]. For its evaluation, the period where the project produces positive or negative cash flow is divided into time segments, in most cases years. An interest rate is considered, which takes into account that the investment takes place at the beginning of the project, whereas the revenues and further expenses occur later. With this interest rate, all cash flows refer to the time when the project starts. Its value is chosen in a way that it represents the assumed risk of the project. For the cash flow evaluation, the following constituent parts are considered:

Cash flow = (revenues − fixed costs − variable costs) · (1 − tax rate)
            + depreciation · tax rate − investment costs ,   (1.2)

where the revenues are calculated as price multiplied by quantity of the product. Fixed and variable costs depend directly on the economic and operating assumptions of the process. Integrated over the lifetime of the project, the net present value (NPV) gives

NPV = ∑_{i=0}^{T} (Cash Flow)_i / (1 + r)^i ,   (1.3)

with
T   lifetime of the project in years;
i   number of the year;
r   interest rate.

Figure 1.11: Example of a sensitivity spider chart.

The NPV maximizes the value for the company performing the project. However, it also has some disadvantageous properties. It depends strongly on the scale of the project, and therefore it is difficult to compare different scenarios. It is more sensitive to the annual revenues than to the investment costs. Constraints like the availability of the initial investment and the market situation must be set [10]. Directly related to the NPV is the internal rate of return (IRR), which is defined as the interest rate which gives a zero NPV in Equation (1.3). The IRR is independent of the scale of the project. From the mathematical point of view it is not easy to handle, as multiple solutions often exist. It should not be used as the economically decisive criterion without a critical analysis [10]. There are a number of other methods available which are well explained in [8]. A very simple criterion which is often used for the assessment of a smaller project (e.g. a revamp for heat integration) is the static payback period: it indicates how much time it will take until the investment costs for a project are covered by the revenues. The static payback period should be less than 3 years to make a project feasible. The so-called sensitivity spider chart is a useful tool to evaluate how the economic situation varies if one of the assumptions varies. This way, one gets a feeling for the most decisive assumptions, which have to be continuously watched and updated. An example is shown in Figure 1.11. The whole costing procedure is well described in [252] and [253].
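As an illustration of Equations (1.2) and (1.3) and of the payback criterion, the following Python sketch evaluates a hypothetical cash flow series; the numbers, interest rate, and project lifetime are made-up assumptions, and the IRR is found by simple bisection.

```python
# Hedged sketch: NPV (Equation (1.3)), IRR, and static payback period
# for a hypothetical cash flow series (year 0 = investment, then annual surpluses).

def npv(cash_flows, r):
    """Equation (1.3): sum of discounted cash flows, year 0 first."""
    return sum(cf / (1.0 + r) ** i for i, cf in enumerate(cash_flows))

def irr(cash_flows, lo=-0.9, hi=1.0, tol=1e-8):
    """Interest rate giving NPV = 0, found by bisection (assumes a sign change)."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if npv(cash_flows, lo) * npv(cash_flows, mid) <= 0.0:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

def static_payback(investment, annual_surplus):
    """Years until the undiscounted surpluses cover the investment."""
    return investment / annual_surplus

# Hypothetical example: 200 M$ investment, 70 M$ surplus per year for 10 years
flows = [-200e6] + [70e6] * 10
print(f"NPV at 10 %:    {npv(flows, 0.10)/1e6:8.1f} M$")
print(f"IRR:            {irr(flows)*100:8.1f} %")
print(f"static payback: {static_payback(200e6, 70e6):8.1f} years")
```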

2 Thermodynamic models in process simulation

Without reliable physical properties, a process simulator is just an expensive random number generator. (A. Harvey and A. Laesecke, 2002)

The calculation of vapor–liquid equilibria for multicomponent mixtures will only work for ideal mixtures with an ideal vapor phase. (E. Kirschbaum, 1969)

The understanding of the physical properties of fluid substances and their phase equilibria is one of the main keys to successful process engineering. Although there are more exciting problems to solve in process engineering, almost every larger project starts with the clarification of the physical property situation. Without a sound understanding of this, a reasonable process simulation cannot be carried out. Mainly, the following properties are important for process engineering:
– thermodynamic properties:
  – density;
  – enthalpy;
  – phase equilibrium;
– transport properties:
  – viscosity;
  – thermal conductivity;
  – surface tension;
  – diffusion coefficients.

For the mass and energy balance, only the thermodynamic properties are required, whereas the transport properties play a major role for the equipment design, e.g. for columns, heat exchangers, or pumps. For the thermodynamic properties, the calculation of the phase equilibria plays the key role for the whole process simulation, as the phase equilibria determine the separation steps in the process, which often cause 60–80 % of the total costs of the process. In process simulators, a model has to be chosen which decides in which way the phase equilibria are determined; it also has an influence on the evaluation of densities and enthalpies. It is not necessary but advantageous if one model can be used for the whole process; each model change holds the risk that inconsistencies are introduced (Chapter 2.11). Two kinds of models can be distinguished: the equation-of-state models (φ-φ-approach) and the activity coefficient models (γ-φ-approach). The difference and the advantages and disadvantages between these two approaches should be known for a reasonable choice of the model. In the following sections, the essentials of the most important models are introduced and discussed without any thermodynamic framework. For more details of these models, see [11].

The importance of the particular physical properties has been rated in Table 2.1 [12], where the relationships between the accuracy of the particular properties and the influence on the investment costs are listed.

Table 2.1: Example of a relationship between physical property accuracy and investment costs [12].

Physical property         % error    % error capital cost
Thermal conductivity        20 %          13 %
Specific heat capacity      20 %           6 %
Heat of vaporization        15 %          15 %
Activity coefficient        10 %         100 %
Diffusion coefficient       20 %           4 %
Viscosity                   50 %          10 %
Density                     20 %          16 %

Certainly, this table should not be taken as the absolute scientific truth but as an exemplary case study for illustration. The outstanding item is the large influence of the activity coefficient (the activity coefficient γ will be explained in Chapter 2.1; in the context of this paragraph, it is defined as a factor describing the deviations from Raoult’s law and can be interpreted as a correction factor for the concentration). The author would agree upon its importance; however, in fact the costs are driven by the separation factors

αij = (psi γi) / (psj γj)   (2.1)

The importance of its accuracy strongly depends on the case. When the separation factors are far away from unity, the influence of the accuracies of the activity coefficients and vapor pressures is limited. When the separation factors are close to unity, their influence is incredibly high, and, moreover, they can even decide whether a separation is possible at all. The heat of vaporization is clearly proportional to the reboiler duty in a distillation and therefore to the size of the reboiler; thus, the proportionality is quite in line with the experience of the author. In comparison to the heat of vaporization, the influence of the specific heat capacity is smaller. Nevertheless, single pieces of equipment can be strongly influenced (e.g. liquid-liquid heat exchangers), and errors occur pretty often (Chapter 2.8). The transport properties, thermal conductivity and viscosity, have an influence on the heat transfer coefficient in heat exchangers. The author would guess that the influence of the viscosity is greater for large viscosities. Moreover, errors in the viscosity occur quite frequently, especially for mixtures, whereas thermal conductivities do not vary too much for the particular liquids, with the exception of water and glycols. Vapor viscosities and thermal conductivities are usually not measured but estimated anyway.

The accuracy of estimations of physical properties is often of great interest. This question is not easy to answer, as most of the authors are suspected to claim a higher accuracy than they should.
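Returning to Equation (2.1): the sensitivity of the separation factor to the activity coefficients can be illustrated with a small sketch. The vapor pressures and activity coefficients used here are hypothetical numbers, chosen only to show the effect for a close-boiling pair.

```python
# Hedged sketch: influence of an activity coefficient error on the separation
# factor of Equation (2.1), alpha_ij = (ps_i * gamma_i) / (ps_j * gamma_j).
# All numbers are illustrative assumptions for a close-boiling pair.

def alpha(ps_i, gamma_i, ps_j, gamma_j):
    return (ps_i * gamma_i) / (ps_j * gamma_j)

ps_i, ps_j = 1.05, 1.00                 # bar, nearly identical vapor pressures
base = alpha(ps_i, 1.00, ps_j, 1.00)
low  = alpha(ps_i, 0.90, ps_j, 1.00)    # 10 % error in gamma_i

print(f"alpha (exact):      {base:.3f}")   # 1.050 -> separation still possible
print(f"alpha (10 % error): {low:.3f}")    # 0.945 -> wrong side of unity
```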


Table 2.2: Accuracy of physical property predictions [15].

Physical property              Error expected    Error desired
Heat of formation              2.5–4 kJ/mol      4 kJ/mol
Liquid heat capacity           > 10 %            10 %
Liquid density                 > 2 %             2 %
Vapor pressure                 > 10 %            10 %
Normal boiling point           6 K               3 K
Transport properties           > 10 %            10–20 %
Heat of vaporization           15 %              15 %
Limiting activity coefficient  > 10 %            10 %

However, it is hard to imagine how objective criteria could be set up. Clearly, the average deviation of the fit to the data available is an indicator, but certainly the fit to unknown data will be worse. Other authors try to leave out certain data sets in the fitting process [13]; however, one does not know according to which criteria they are chosen. Another method is to predict new data sets before they are integrated into the database [14]. This method cannot deliver large amounts of examples, and, moreover, it is not reproducible, as after testing they will certainly be added to the database. A thorough examination has been done in Table 2.2. The conclusions are as follows [15]:
– the accuracy of the methods is not at the industrial target level;
– experimental data for thermal and transport properties are limited;
– group contribution methods seem to have reached their potential; there is hardly room for improvement.

To meet the industrial demand, new approaches are necessary.

2.1 Phase equilibria

Life is too short to worry about phase equilibria. (Based loosely on a comment of Georgios Kontogeorgis on electrolytes.)

The knowledge and understanding of the various phase and chemical equilibria is the key to a successful process simulation. Two-phase regions have a great importance in technical applications. Even for a one-component system, phenomena occur which need to be discussed thoroughly. As an example, Figure 2.1 illustrates the isobaric vapor-liquid equilibrium of water when it is heated from t1 = 50 °C to t2 = 150 °C at atmospheric pressure. In the two-phase region, vapor and liquid coexist at the same temperature and pressure [11]. In this case, both liquid and vapor are called saturated. If the saturated liquid is further heated, the temperature does not change.


Figure 2.1: Temperature change of water at p = 1.013 bar with respect to the heat added [11]. © Wiley-VCH Verlag GmbH & Co. KGaA. Reproduced with permission.

Instead of a temperature rise, the liquid is vaporized. After the last drop of liquid vanishes, the temperature rises again. Figure 2.1 clearly indicates that much more heat is consumed for the evaporation of water (enthalpy of vaporization, Δhv) than for the 100 K temperature elevation.

The state of a pure substance in the phase equilibrium region is not characterized by temperature and pressure as in the one-phase region. Temperature and pressure are related; the relationship

ps = f(T)   (2.2)

is the vapor pressure curve. For the complete determination of the two-phase system, the vapor quality x

x = n″ / (n′ + n″)   (2.3)

is necessary, where the superscripts ′ and ″ denote the saturated states of liquid and vapor, respectively. x = 0 means a saturated liquid, x = 1 means a saturated vapor.


In the two-phase region, the specific volume v, the specific enthalpy h, and the specific entropy s can be written as

v = x v″ + (1 − x) v′   (2.4)
h = x h″ + (1 − x) h′   (2.5)
s = x s″ + (1 − x) s′   (2.6)

Vapor pressure and enthalpy of vaporization of a pure substance are related by the Clausius–Clapeyron equation

Δhv = T (dps/dT) (v″ − v′)   (2.7)
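To illustrate Equations (2.3)–(2.7), the following sketch evaluates two-phase properties of water at atmospheric pressure. The saturation values and the vapor pressure slope are approximate steam-table numbers quoted from memory, so they should be checked against a property database before any real use.

```python
# Hedged sketch: two-phase properties from the vapor quality x (Eqs. (2.4)/(2.5))
# and an estimate of the enthalpy of vaporization from the Clausius-Clapeyron
# equation (2.7). Saturation data for water at 1.013 bar are approximate values.

T = 373.15                          # K, saturation temperature at 1.013 bar
v_liq, v_vap = 1.04e-3, 1.673       # m3/kg, saturated liquid / vapor volume
h_liq, h_vap = 419e3, 2676e3        # J/kg, saturated liquid / vapor enthalpy
dps_dT = 3.6e3                      # Pa/K, approximate vapor pressure slope

x = 0.5                                        # vapor quality
v = x * v_vap + (1 - x) * v_liq                # Eq. (2.4)
h = x * h_vap + (1 - x) * h_liq                # Eq. (2.5)
dhv = T * dps_dT * (v_vap - v_liq)             # Eq. (2.7)

print(f"v(x=0.5) = {v:.3f} m3/kg, h(x=0.5) = {h/1e3:.0f} kJ/kg")
print(f"Clausius-Clapeyron estimate: dhv = {dhv/1e3:.0f} kJ/kg (tabulated: approx. 2257)")
```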

For a multicomponent system, we must also consider that both phases have different compositions, and the distribution of each component is a central issue to be solved by the thermodynamic model. As will be shown later, the behavior of a multicomponent system is mainly described by its binary subsystems. The best way to illustrate the vapor-liquid equilibrium of a binary system is the pxy diagram at constant temperature. Figure 2.2 gives an example for the system ethanol/water.

Figure 2.2: Example for a pxy diagram [11]. © Wiley-VCH Verlag GmbH & Co. KGaA. Reproduced with permission.

In the upper part of this diagram, there is the liquid region above the bubble point curve. Below the dew point curve, there is the vapor region. Between bubble and dew point curve, there is the two-phase region. For a specific pressure, a horizontal tie-line connects the corresponding liquid and vapor points in phase equilibrium, referring to their concentration on the abscissa. These phase equilibrium diagrams can be obtained by correlating phase equilibrium data. There are several experimental options; the most popular ones are to measure both the vapor and the liquid concentration and the temperature at constant pressure, or to measure the bubble point at constant temperature for certain liquid concentrations. The latter option should be preferred in most cases; isothermal data are much more useful for the adjustment of model parameters [16].


Figure 2.3: pxy diagram with one component becoming supercritical [11]. © Wiley-VCH Verlag GmbH & Co. KGaA. Reproduced with permission.

Bubble and dew point curve meet at the ordinates at x = 0 and x = 1, indicating the corresponding pure component vapor pressures. If the temperature is greater than the critical temperature of one of the components, the vapor pressure of this component does not exist, although it can take part in a phase equilibrium in the dilute state. In Figure 2.3, the typical change in the shape of the pxy diagram is shown for the system nitrogen/methane when nitrogen becomes supercritical.

Some other diagrams are useful for illustrating a binary vapor-liquid equilibrium. The simple yx diagram shows the relationship between vapor and liquid concentration without indicating the pressure or temperature. For a better overview, the bisecting line is usually depicted (Figure 2.4, first column). Also a Txy diagram is possible for isobaric phase equilibria, as can be seen in the right column of Figure 2.4. In this diagram, the upper line is the dew point curve, and the lower line is the bubble point curve.

Figure 2.4 also shows the various kinds of binary vapor-liquid equilibria. The upper row shows an ideal mixture obeying Raoult’s law (Equation (2.40)). There are no interaction forces between the molecules. The phase equilibrium is just determined by the vapor pressures of both components. Typical examples for ideal systems are benzene/toluene or n-hexane/n-heptane. In the pxy diagram, the boiling point curve is a straight line. In the Txy diagram, usually no straight lines occur. The activity coefficients (Chapter 2.3.1) are all equal to 1, i.e. their logarithm is 0. Row 2 shows a system having small nonidealities. Molecules of the same kind “prefer” to be together instead of mixing with molecules of a different kind. As a result, a greater pressure is built up than for the ideal mixture. The activity coefficients are greater than 1. A typical example for such a system is methanol/water. When the activity coefficients become larger, the system can exhibit an azeotrope with a pressure maximum in the pxy diagram and a temperature minimum in the Txy diagram (row 3). At the azeotropic point, the liquid and vapor concentrations are identical. Therefore, an azeotrope cannot be separated by simple distillation. The knowledge of azeotropes is essential for any process development. A typical example for a homogeneous azeotrope is water/1-propanol. The occurrence of azeotropes is also related to the vapor pressures; the closer together the vapor pressures of the components are, the more probable is the occurrence of an azeotrope [11] (Section 2.3.1).


Figure 2.4: Kinds of binary vapor-liquid equilibria [11]. 1 ideal mixture; 2 small nonidealities; 3 larger activity coefficients, homogeneous azeotrope; 4 heteroazeotrope; 5 vapor pressures close to each other/strong negative deviations from Raoult’s law. © Wiley-VCH Verlag GmbH & Co. KGaA. Reproduced with permission.


If the activity coefficients increase even more, the system exhibits a miscibility gap, and the liquid splits into two phases. A heteroazeotrope occurs (row 4). A typical example is water/n-butanol. Note that a miscibility gap and the occurrence of a heteroazeotrope are not necessarily coupled. If the vapor pressures largely differ, the azeotrope does not occur, whereas the miscibility gap itself is not related to the vapor pressures. In addition, negative deviations from Raoult’s law can occur, where the activity coefficients are lower than 1. This means that the molecules “like” each other and prefer to be surrounded by molecules of a different kind. An example for such a system is dichloromethane/2-butanone [11]. If the vapor pressures are close to each other and the negative deviations are strong, even an azeotrope with a pressure minimum in the pxy diagram and a temperature maximum in the Txy diagram can occur (row 5). An example is acetone/chloroform. It cannot happen that systems with negative deviations from Raoult’s law show a miscibility gap. In addition to these five types of phase equilibria there are others. For example, methyl acetate/water shows a miscibility gap with a homoazeotrope [17], and benzene/hexafluorobenzene has a double azeotrope, one with a pressure minimum and one with a pressure maximum [18].

As mentioned, there are two different approaches for the description of phase equilibria. In the following sections, they are explained using vapor-liquid equilibria.

2.2 φ-φ-approach

You cannot just learn thermodynamics. You must love it. (Recommendation to desperate students)

Generally, an equation of state is a relationship between the pressure p, the absolute temperature T, and the specific volume v. In a first step, only pure substances are regarded. It makes sense to focus on equations of state which are pressure-explicit. Hence, the general form of an equation of state can be written as

p = f(T, v)   (2.8)

The simplest equation of state is the ideal gas equation

p = RT / v   (2.9)

Equation (2.9) is exact for a gas where the molecules have no volume and do not exert interaction forces on each other. It is a good approximation for gases at low pressures. At increasing pressures, Equation (2.9) becomes increasingly inaccurate. A well-known modification of Equation (2.9) is the virial equation:

p = (RT/v) (1 + B(T)/v + C(T)/v² + D(T)/v³ + ⋯)   (2.10)

Equation (2.10) is the so-called Leiden form; correspondingly, there is also a volume-explicit Berlin form expressed in the form of a polynomial of the pressure p. The virial coefficients B, C, D, … account for the deviations from the ideal gas equation. Although it has a theoretical background, the virial equation is not suitable for practical applications. Usually, it has to be truncated after the second term, as the third virial coefficient C and the ones following are hardly ever known. The consequence is that the virial equation can only be used up to moderate densities (rule of thumb: ρ = 0.5ρc).

The most widely used equations of state in technical applications are modifications of the van-der-Waals equation

p = RT / (v − b) − a / v²   (2.11)

Invented in 1873 [19], the van-der-Waals equation was the first equation of state valid for both the vapor and the liquid phase that could at least qualitatively explain the pvT behavior of a pure substance, illustrated by the pv diagram in Figure 2.5. (Just like all authors, I give this citation. However, I will admit that I have not read it: the language is Dutch, and even if I could speak it, I would guess that the way it was used in the 19th century would not be comprehensible for a nonnative.) The pv diagram for a pure substance is dominated by the large vapor-liquid equilibrium region, where the isotherms are horizontal, e.g. between points B and C, indicating that the pressure during condensation or evaporation remains constant. At the left-hand side of the vapor-liquid equilibrium region, the isotherms are very steep, meaning that large pressures are needed to lower the specific volume. This is the region of the liquid phase, and the steep slope means that compressing a liquid does not result in a major volume change. At the right-hand side there is the vapor region, where it is easier to compress the substance. At low pressures, the isotherms obey the ideal gas equation (2.9). At higher pressures, the ideal gas law becomes more and more inaccurate. In the two-phase region, the boiling point line and the dew point line are connected by horizontal isotherms as mentioned, giving the saturated vapor volume v″ (e.g. point B) and the saturated liquid volume v′ (e.g. point C). With increasing temperature, v″ and v′ are getting closer to each other. At the critical point, vapor and liquid become identical. Above the critical temperature (and, correspondingly, the critical pressure), no phase equilibria between vapor and liquid exist. Remarkably, one can get from the vapor to the liquid region without crossing the two-phase region, i.e. a vapor can be gradually transformed into a liquid and vice versa.


Figure 2.5: The pvT behavior of a pure substance [11]. © Wiley-VCH Verlag GmbH & Co. KGaA. Reproduced with permission.

For example, starting from point C, the saturated liquid can be isochorically (i.e. at v = const.) heated up to a temperature above the critical one, where simultaneously the pressure is increased to a value above the critical pressure. Then, the substance can be heated isobarically, and finally, it can be cooled down isochorically to point B. As a result, a liquid has been turned into a vapor without ever having both phases existing simultaneously.

When the typical application of an equation of state, the calculation of v at given T and p, is performed, Equation (2.11) gives a third-degree polynomial:

v³ − (b + RT/p) v² + (a/p) v − ab/p = 0   (2.12)

Therefore, the van-der-Waals equation (2.11) is called a cubic equation of state. The advantage is that it can be solved analytically [11, 20]. One obtains either one real and two complex or three real solutions. In the latter case, the largest solution corresponds to a vapor specific volume, and the smallest one corresponds to a liquid volume. The middle one has no physical meaning. In contrast to Equation (2.9) or the virial equation (2.10) truncated after the second term, the cubic equations can be used both for the vapor and for the liquid phase. To decide whether the liquid or the vapor solution is valid, the vapor pressure is needed.

Of course, an equation of state defined by a continuous function like the van-der-Waals equation (2.11) cannot reproduce the dogleg at the transition from the one-phase to the two-phase region. Thermodynamics points out that the equation of state can evaluate the phase equilibrium using the Maxwell criterion (Figure 2.6). Liquid and vapor phase are in equilibrium at p = ps if the hatched areas between p = ps and the equation of state in Figure 2.6 are equal. Analytically, it can be determined by the equation

ps (v″ − v′) = ∫_{v′}^{v″} p dv   (2.13)


Figure 2.6: Application of the Maxwell criterion (propylene, t = 20 °C, Peng–Robinson equation).

The curvature between v′ and v″ has no physical meaning, including the negative values obtained for the pressure. Equation (2.13) is equivalent to the formulation

f′ = f″ ,   (2.14)

where f is the so-called fugacity, which can be interpreted as a corrected pressure. For a pure substance, the fugacity is the product of the pressure and the fugacity coefficient φ:

p φ′ = p φ″ ,   (2.15)

where the pressure cancels out. However, while its theoretical value is unquestioned, the van-der-Waals equation had limited success in practical applications, where quantitatively correct results are required. In the second half of the 20th century, several modifications of the van-der-Waals equation have been developed. The most successful and most widely used ones are the Soave–Redlich–Kwong equation (SRK or RKS)

p = RT / (v − b) − a(T) / (v(v + b)) ,   (2.16)

with

a(T) = 0.42748 (R² Tc² / pc) α(T)   (2.17)
α(T) = [1 + (0.48 + 1.574 ω − 0.176 ω²)(1 − Tr^0.5)]²   (2.18)
b = 0.08664 RTc / pc ,   (2.19)

and the Peng–Robinson equation (PR)

p = RT / (v − b) − a(T) / (v(v + b) + b(v − b)) ,   (2.20)

where

a(T) = 0.45724 (R² Tc² / pc) α(T)   (2.21)
α(T) = [1 + m(1 − Tr^0.5)]²   (2.22)
m = 0.37464 + 1.54226 ω − 0.26992 ω²   (2.23)
b = 0.0778 RTc / pc   (2.24)

It is remarkable that both the Soave–Redlich–Kwong and the Peng–Robinson equation need only three substance-specific parameters, i.e. the critical temperature Tc, the critical pressure pc, and the acentric factor ω, which is defined as

ω = −1 − lg (ps/pc) at T = 0.7 Tc   (2.25)

Essentially, ω represents the vapor pressure at T = 0.7 Tc. For most substances, this is a temperature close to the normal boiling point. Thus, the critical point and one specified point of the vapor pressure curve are the only substance-specific information used. This characterization of a substance is called the three-parameter corresponding states principle (Tc, pc, and ω). Equations of state using only this input information are called generalized equations of state.

Equations of state valid for both the vapor and the liquid state can provide all thermodynamic properties needed for process calculations. Their original well-known purpose was to be a relationship between p, v, and T in the vapor phase. As seen, cubic equations of state can also be used to calculate the specific volume in the liquid phase, and as seen in Equations (2.13) and (2.15), the vapor pressure for a given temperature can be evaluated. From thermodynamics, an expression for the enthalpy using a pressure-explicit equation of state can be derived [11]:

h(T, v) = h0 + ∫_{T0}^{T} cp^id dT + ∫_{∞}^{v} [T(∂p/∂T)v − p] dv + pv − RT   (2.26)

Inserting the equation of state into Equation (2.26), an expression is obtained which can calculate the specific enthalpy of the substance at any state. The only additionally required input is the specific isobaric heat capacity in the ideal gas state. The enthalpy of vaporization for a temperature T is then easily determined by

Δhv = h(T, v″) − h(T, v′)   (2.27)

For the Peng–Robinson equation, the most important calculation equations for pure substances are given in the following (a short numerical sketch follows after the list):
– Cubic equation for the determination of the volume:

  v³ + (b − RT/p) v² + (a/p − 3b² − 2bRT/p) v + b³ + b²RT/p − ab/p = 0   (2.28)


– Specific enthalpy [11]:

  h − h^id(T, v) = RT(Z − 1) − (1/(√8 b)) (a − T da/dT) ln[(v + (1 + √2)b) / (v + (1 − √2)b)] ,   (2.29)

  with

  Z = pv / (RT)   (2.30)

  and

  da/dT = −0.45724 (R² Tc² / pc) m [1 + m(1 − Tr^0.5)] / √(T Tc)   (2.31)

– Fugacity coefficient [21]:

  ln φ = Z − 1 − ln(Z − bp/(RT)) − (a/(2√2 bRT)) ln[(v + (1 + √2)b) / (v + (1 − √2)b)]   (2.32)
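As announced above, the following Python sketch strings these working equations together for a pure substance: it builds a and b from Tc, pc, and ω, solves the cubic (2.28) for the liquid and vapor volumes, and iterates the pressure until the two fugacity coefficients of Equation (2.32) become equal, i.e. until Equation (2.15) is fulfilled. The critical data for propylene and the Wilson-type initial guess for the pressure are approximate literature values and assumptions, not values from this book.

```python
# Hedged sketch: pure-component vapor pressure from the Peng-Robinson equation
# (Eqs. (2.20)-(2.24), cubic (2.28), fugacity coefficient (2.32)).
# Critical data for propylene are approximate literature values.
import math
import numpy as np

R = 8.314  # J/(mol K)

def pr_parameters(T, Tc, pc, omega):
    m = 0.37464 + 1.54226 * omega - 0.26992 * omega**2           # Eq. (2.23)
    alpha = (1.0 + m * (1.0 - math.sqrt(T / Tc)))**2             # Eq. (2.22)
    a = 0.45724 * R**2 * Tc**2 / pc * alpha                      # Eq. (2.21)
    b = 0.0778 * R * Tc / pc                                     # Eq. (2.24)
    return a, b

def volumes(T, p, a, b):
    """Real roots of the cubic (2.28); assumes three real roots exist near the
    solution. Smallest root = liquid volume, largest root = vapor volume."""
    coeff = [1.0,
             b - R * T / p,
             a / p - 3.0 * b**2 - 2.0 * b * R * T / p,
             b**3 + b**2 * R * T / p - a * b / p]
    roots = [r.real for r in np.roots(coeff) if abs(r.imag) < 1e-10 and r.real > b]
    return min(roots), max(roots)

def ln_phi(T, p, v, a, b):
    """Eq. (2.32) for a pure substance."""
    Z = p * v / (R * T)
    s2 = math.sqrt(2.0)
    return (Z - 1.0 - math.log(Z - b * p / (R * T))
            - a / (2.0 * s2 * b * R * T)
            * math.log((v + (1.0 + s2) * b) / (v + (1.0 - s2) * b)))

def vapor_pressure(T, Tc, pc, omega):
    a, b = pr_parameters(T, Tc, pc, omega)
    p = pc * math.exp(5.373 * (1.0 + omega) * (1.0 - Tc / T))   # Wilson-type guess
    for _ in range(50):                                         # successive substitution
        v_liq, v_vap = volumes(T, p, a, b)
        p_new = p * math.exp(ln_phi(T, p, v_liq, a, b) - ln_phi(T, p, v_vap, a, b))
        if abs(p_new - p) < 1e-6 * p:
            return p_new
        p = p_new
    return p

# Approximate critical data for propylene (assumed): Tc = 365 K, pc = 46 bar, omega = 0.14
print(f"ps(20 C) = {vapor_pressure(293.15, 365.0, 46.0e5, 0.14)/1e5:.1f} bar")
```

The result should come out in the order of magnitude of 10 bar, which is consistent with the propylene example of Figure 2.6.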

The success of this route is remarkable, taking into account that only three substance-specific parameters (Tc, pc, ω) are used, but it is limited. The three-parameter corresponding states principle is especially valid for nonpolar compounds. The generalized cubic equations of state have become standard tools in the oil and gas industries, where the compounds involved are usually nonpolar. In these cases, the vapor densities and the vapor pressures are reproduced very well, especially in the high-pressure region. This can be explained by the fact that the critical point is necessarily reproduced exactly; the closer the distance to the critical point is, the better the results will be. For polar compounds (e.g. water, methanol, ethanol), the results are usually remarkably good for the specific volume in the vapor phase, but the vapor pressure is not reproduced reliably enough for process calculations. The enthalpies of the vapor are usually a good, but not exact, estimate. For the enthalpy of vaporization the same result as for the vapor pressure is obtained; it is remarkably good for nonpolar substances, but hardly reliable for polar ones. The enthalpies of a liquid are usually poor for both polar and nonpolar compounds; the reason for this is explained in Chapter 2.8. The specific volume of the liquid phase is not even intended to be correct. In most cases, only the order of magnitude is met. For technical applications, the liquid volume calculated by cubic equations of state is by far not accurate enough. Instead of the correct liquid volume, it should only be taken as an auxiliary quantity for the calculation of the vapor pressure and the liquid enthalpy. In the result view of process simulators, the liquid volume is usually overwritten by default with results from a liquid density correlation (Chapter 2.3).

To overcome these limitations, the α-functions of the generalized equations of state can be replaced by individual ones.

An example is the PSRK (Predictive Soave–Redlich–Kwong) equation, where the generalized α-function (2.18) has been replaced by

α(T) = [1 + c1 (1 − Tr^0.5) + c2 (1 − Tr^0.5)² + c3 (1 − Tr^0.5)³]²   (2.33)

The adjustable parameters c1, c2, c3 are usually fitted to vapor pressure data. Tr (reduced temperature) is an abbreviation for T/Tc. This way, polar components can be described as well. In the volume-translated Peng–Robinson equation (VTPR), the α-function of the Peng–Robinson equation (2.22) has been replaced by the flexible Twu-α-function

α(Tr) = Tr^(N(M−1)) exp[L(1 − Tr^(MN))] ,   (2.34)

where the adjustable parameters L, M, N can again be fitted to vapor pressure data. Also, data for cpL can simultaneously be adjusted so that the enthalpy of the liquid can also be described by the equation [22]. With the volume translation, an attempt was made to solve the remaining problem with the bad reproduction of the liquid density. The approach is that the specific volume v in the original equation is replaced by a term v + c, where c is a volume translation:

p = RT / (v + c − b) − a(T) / [(v + c)(v + c + b) + b(v + c − b)]   (2.35)

Therefore, it remains a cubic equation of state, but the results for the specific volumes for both the vapor and the liquid phase are shifted by the value of c. While this is almost negligible for the vapor phase, it causes an improvement for the specific volume of the liquid phase. The parameter c can be fitted to liquid density data or, if not available, calculated by a generalized function. However, an acceptable improvement is restricted to low pressure data, and the densities of liquids can still not be reproduced with the accuracy required in technical applications. Therefore, in spite of the use of volume translations, it is still recommended to make use of the option to overwrite the result with a liquid density correlation.

Besides the cubic ones, a lot of other equations of state are in use. Only a few of them with a special importance for process engineering purposes can be mentioned here. In process engineering, cases occur where a much higher accuracy in the physical properties is required. Examples are the power plant processes, the heat pump process, or the pressure drop calculation in a large pipeline. For the description of the complete pvT behavior including the two-phase region, several extensions of the virial equation were suggested. All these extensions have been derived empirically and contain a large number of parameters, which have to be fitted to experimental data. A large database is necessary to obtain reliable parameters.


One of the first approaches to equations of state with higher accuracies was made by Benedict, Webb, and Rubin (BWR) in 1940 [23, 24], who used 8 adjustable parameters. Their equation allows reliable calculations of pvT data for nonpolar gases and liquids up to densities of about 1.8ρc. Bender [25] extended the BWR equation to 20 parameters. With this large number of parameters it became possible to describe the experimental data for certain substances over a large density range in an excellent way.

In the last two decades, the so-called technical high-precision equations of state have been developed [26–28]. Their significant improvement was possible due to progress in measurement techniques and the development of mathematical algorithms for optimizing the structure of equations of state. With respect to the accuracy of the calculated properties, their extrapolation behavior and their reliability in regions where data are scarce, these equations define the state-of-the-art representation of thermal and caloric properties and their particular derivatives in the whole fluid range. There is also an important demand to get reliable results for derived properties (cp, cv, speed of sound). Technical high-precision equations of state are a remarkable compromise between keeping the accuracy and gaining simplicity. Furthermore, these equations should enable the user to extrapolate safely to the extreme conditions often encountered in industrial processes. For example, in the LDPE (low density polyethylene) process, ethylene is compressed to approximately 3000 bar, and it is necessary for the simulation of the process and the design of the equipment to have a reliable tool for the determination of the thermal and caloric properties. The complexity and limited availability is no longer an issue. Currently, there are approximately 80 substances for which the data situation has justified the development of a technical high-precision equation of state, e.g. water, methane, argon, carbon dioxide, nitrogen, ethane, n-butane, isobutane, and ethylene. In the FLUIDCAL software [29], these equations of state have been made applicable for users without special knowledge. The successor in this field is TREND [277], which additionally provides a concept for the application of mixtures. A genuine high-precision equation of state for mixtures is GERG [30], which describes natural gas components. Table 2.3 illustrates the accuracy demand for technical equations of state.

Table 2.3: Accuracy demand for technical high-precision equations of state.

              ρ(p,T)   w*(p,T)   cp(p,T)   ps(T)    ρ′(T)    ρ″(T)
p < 30 MPa    0.2 %    1–2 %     1–2 %     0.2 %    0.2 %    0.2 %
p > 30 MPa    0.5 %    2 %       2 %

The φ-φ-approach can be extended to mixtures. Thermodynamics says that the equilibrium condition Equation (2.14) becomes

p xi φi′ = p yi φi″   (2.36)

for each component involved. Again, the pressure cancels out. The cubic equations of state can be transformed to mixture applications by mixing rules for the parameters a and b. The most common ones are

a = ∑i ∑j zi zj (aii ajj)^0.5 (1 − kij)   (2.37)
b = ∑i zi bi   (2.38)

for both the Peng–Robinson and the Soave–Redlich–Kwong equation of state. As the mixing rules (2.37) and (2.38) refer to both the vapor and the liquid phase, the neutral variable z is used for both the vapor and the liquid mole fraction. kij is an adjustable binary interaction parameter. It is symmetric (kij = kji, kii = kjj = 0) and usually has small values (−0.1 < kij < 0.1). Nevertheless, it has a significant influence on the calculation of phase equilibria and cannot be neglected. The impact on the results for the liquid and vapor volumes is comparably small. Using Equations (2.37) and (2.38), the calculation routes for pure components can be applied to mixtures. For the phase equilibrium calculations, Equation (2.36) looks very easy but is in fact a complicated equation, as the determination of the fugacity coefficients ends up in long equations, e.g. for the Peng–Robinson equation [21]

ln φi = (bi/b)(Z − 1) − ln[(p/(RT))(v − b)]
        − (a/(2√2 bRT)) ((2/a) ∑j zj aij − bi/b) ln[(v + (1 + √2)b) / (v + (1 − √2)b)]   (2.39)
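A compact sketch of the mixing rules (2.37) and (2.38) is given below; the pure-component a and b values and the binary parameter kij are placeholder numbers for a hypothetical binary mixture.

```python
# Hedged sketch: quadratic mixing rules (2.37)/(2.38) for a cubic equation of state.
# a_pure, b_pure and the kij matrix are placeholder values for a hypothetical binary.
import numpy as np

z = np.array([0.4, 0.6])               # mole fractions (vapor or liquid)
a_pure = np.array([0.45, 1.20])        # Pa m6/mol2, hypothetical
b_pure = np.array([2.7e-5, 5.1e-5])    # m3/mol, hypothetical
k = np.array([[0.00, 0.03],
              [0.03, 0.00]])           # symmetric binary interaction parameters

a_ij = np.sqrt(np.outer(a_pure, a_pure)) * (1.0 - k)
a_mix = z @ a_ij @ z                   # Eq. (2.37)
b_mix = z @ b_pure                     # Eq. (2.38)

print(f"a_mix = {a_mix:.4f} Pa m6/mol2, b_mix = {b_mix:.2e} m3/mol")
```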

The mixing rules (2.37) and (2.38) have had considerable success as long as only nonpolar substances were involved. When polar compounds are regarded, poor results are obtained. For an adequate description of systems with polar components using the φ-φ-approach, the so-called gE mixing rules have to be applied (Figure 2.7). They will be explained in Chapter 2.7, as understanding them is not possible without knowledge of the γ-φ-approach.

2.3 γ-φ-approach

2.3.1 Activity coefficients

The activity coefficient is the factor by which I have to miscalculate, to get the correct result although I use the wrong equation. (Student in his thermodynamics exam. He passed easily.)

It is sobering to remember that successful oil refineries were built many years before chemical engineers used chemical potentials or fugacities […]. (J. M. Prausnitz, 1989)


Figure 2.7: Experimental and calculated VLE data for the system acetone (1)/water (2) using the Peng–Robinson equation with kij (left-hand side) and gE mixing rules (right-hand side) [11]. © Wiley-VCH Verlag GmbH & Co. KGaA. Reproduced with permission.

The phase equilibrium condition (2.14) can also be elaborated in a different way [11], using different approaches for the vapor and the liquid phase. The simplest solution is Raoult’s law:

xi psi = yi p ,   (2.40)

where the bubble point curve in the pxy diagram is a straight line. Only few binary systems obey this law; an example is the system benzene/toluene (Figure 2.4, upper row). If deviations from Equation (2.40) occur, the activity coefficient γi is introduced, which depends on the concentration as well as on the temperature. One can take the activity coefficient as a factor which corrects the concentration. The equilibrium condition, still not the final one, is then written as

xi γi psi = yi p   (2.41)

This equilibrium condition is valid in the low pressure region, as the vapor phase is regarded as an ideal gas. While it is still often used in academics, it has become more and more outdated in process simulation applications, as the enthalpy calculations should not be performed using the ideal gas law for the vapor phase (Chapter 2.8). Instead, a real vapor phase is considered using an equation of state:

xi γi psi Poyi = yi p φi(T, p, yi) / φi^pure(T, psi) ,   (2.42)

with Poyi as the Poynting factor

Poyi = exp[viL (p − psi) / (RT)]   (2.43)

φi(T, p, yi) accounts for the nonideality of the vapor phase, whereas φi^pure(T, psi) refers to the liquid fugacity coefficient at the vapor pressure. Fortunately, at p = psi the liquid fugacity coefficient is equal to the vapor one (Equation (2.15)); thus, in contrast to Equation (2.36), the equation of state does not need to be valid for both the vapor and the liquid phase. Therefore, the virial equation (2.10) truncated after the second term can also be applied. Nevertheless, usually generalized cubic equations of state are used, as they are the most powerful tools for this purpose with easily accessible input parameters (Tc, pc, and ω). The Poynting factor corrects the small error caused by evaluating the fugacity of the pure liquid at its vapor pressure psi instead of the system pressure p. It can usually be neglected.

The heart of Equation (2.42) is the activity coefficient γi. The formal character of the γi is a correction of the molar concentration, which, however, hardly explains anything. In fact, the activity coefficient accounts for the intermolecular interactions between the molecules and the entropic effects. Figure 2.8 shows the typical isothermal concentration dependence of the activity coefficient. Its maximum values (respectively minimum values for systems with negative deviations from Raoult’s law) occur when the component is extremely diluted, i.e. at the concentration xi → 0. This value is called the activity coefficient at infinite dilution (γi∞) and is characteristic for the illustration of the nonideal behavior.

Nowadays, mainly three equations for the correlation of the γi are in use: the Wilson [31], the NRTL (nonrandom two-liquid) [32], and the UNIQUAC (universal quasi-chemical) equation [33].

Figure 2.8: Typical isothermal concentration dependence of the activity coefficient [11]. © Wiley-VCH Verlag GmbH & Co. KGaA. Reproduced with permission.


They are all based on the so-called principle of local compositions, which is explained by means of the Wilson equation in the following paragraph:

Figure 2.9: Sketch for the explanation of the Local Composition Models [11]. © Wiley-VCH Verlag GmbH & Co. KGaA. Reproduced with permission.

In Figure 2.9 it can be seen that in both cells the concentrations of the molecules 1 and 2 are x1 = 3/7 and x2 = 4/7. Due to the intermolecular forces, the local concentrations can be different. In the left cell, there is a molecule of kind 1 in the center of the cell. Around this molecule, the concentration of the molecules of kind 1 is x11 = 2/6, and the concentration of the molecules of kind 2 is x21 = 4/6. Similarly, in the cell on the right hand side with a molecule of kind 2 in the center, the concentrations are x12 = x22 = 3/6. The concentrations around a molecule and the total concentrations are assumed to be related by Boltzmann factors:

xji / xii = [xj exp(−λji/(RT))] / [xi exp(−λii/(RT))] ,   (2.44)

where the λ terms account for the intermolecular forces. Defining the local mole fraction as

ξi = xii vi / ∑j xji vj ,   (2.45)

γi can be introduced as a correction factor of the total concentration:

γi = ξi / xi   (2.46)

After several mathematical transformations [11], the following expression for the activity coefficients is obtained:

ln γi = 1 − ln(∑j xj Λij) − ∑k [xk Λki / (∑j xj Λkj)] ,   (2.47)

with Λ as an abbreviation representing the intermolecular forces, which are temperature-dependent:

Λij = exp(Aij + Bij/T + Cij ln(T/K) + Dij T)   (2.48)
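A direct implementation of Equations (2.47) and (2.48) is straightforward; in the sketch below, the binary parameters Bij are hypothetical illustration values, not fitted parameters from any database.

```python
# Hedged sketch: multicomponent Wilson equation (2.47) with the temperature
# function (2.48). The Bij parameters below are hypothetical illustration values.
import numpy as np

def wilson_gamma(x, T, A, B):
    """Activity coefficients from Eq. (2.47); Lambda_ij from Eq. (2.48) with Cij = Dij = 0."""
    Lam = np.exp(A + B / T)          # Lambda_ii = 1 because A_ii = B_ii = 0
    S = Lam @ x                      # S_i = sum_j x_j * Lambda_ij
    ln_gamma = 1.0 - np.log(S) - (x / S) @ Lam
    return np.exp(ln_gamma)

# Hypothetical binary system
A = np.zeros((2, 2))
B = np.array([[   0.0, -250.0],
              [-400.0,    0.0]])     # K, illustrative only
x = np.array([0.3, 0.7])
print("gamma:", wilson_gamma(x, 350.0, A, B))
# limiting activity coefficient of component 1 (x1 -> 0):
print("gamma1_inf ~", wilson_gamma(np.array([1e-8, 1.0 - 1e-8]), 350.0, A, B)[0])
```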

The parameters A, B, C, D can be adjusted to experimental binary phase equilibrium data. The structure of Equation (2.48) often leads to misunderstandings. The first parameters to be adjusted are the Bij. To account for the temperature dependence, the Aij parameters can be additionally fitted if corresponding data are available, i.e. phase equilibrium data at significantly different temperatures or hE data. For better illustration, Equation (2.48) should be rewritten as

Λij = exp[(Bij + Aij T + Cij T ln(T/K) + Dij T²) / T]   (2.49)

The Cij or Dij parameters are used only if it is justified by the data situation, which is not often the case. It should be noted that the specific volumina introduced in Equation (2.45) do not occur in the final equations (2.47) and (2.48). The term Aij is equal to

Aij = ln(vj / vi) ,   (2.50)

so the ratio of the volumina just represents the term for the linear temperature dependence. In process simulation, it would be awkward anyway if the binary parameters depended on the pure component properties, which could be subject to change any time. A more detailed derivation of the Wilson equation can be found in [11].

The NRTL and UNIQUAC equations are more complicated [11], but for application it is sufficient to understand their structure:
– Wilson:

  ln γi = f(xi, Λij) ,   (2.51)

  with Λij as explained above.
– NRTL:

  ln γi = f(xi, αij, τij) ,   (2.52)

  with τij as a temperature function describing the molecular interactions

  τij = exp(Aij + Bij/T + Cij ln(T/K) + Dij T) ,   (2.53)

  analogous to Λij in the Wilson equation, and αij as an additional symmetric (αij = αji) adjustable parameter for complicated concentration dependences. Usually, αij is set to 0.3; a reasonable range of values is αij = 0.2–0.5. In extreme cases, this range of values can be exceeded or αij can be made temperature-dependent. The coefficients Aij, Bij, Cij, Dij, and αij are called binary interaction parameters (BIPs). They play the key role in the thermodynamic model.
– UNIQUAC:

  ln γi = f(xi, τij) ,   (2.54)

(2.55)

The established equations like Wilson, NRTL, or UNIQUAC and several other do so; thus, they are in some way physically justified. They should be taken as empirical approaches to fulfill the Gibbs–Duhem equation, meaning that they are able to represent the correct shape of the course of the activity coefficient as a function of concentration. An azeotrope occurs when the vapor pressure curves intersect. (Completely wrong but often successful explanation for the occurrence of azeotropes.)

Activity coefficients can be used to obtain an understanding how azeotropes occur. Figure 2.10 shows the well-defined azeotrope of the system R134a(1)–R218(2) at T = 220 K. Neglecting the influence of the fugacity coefficients in Equation (2.42), one can add up Equation (2.41) for both components in the following form: p = x1 γ1 ps1 + x2 γ2 ps2

(2.56)

46 | 2 Thermodynamic models in process simulation

Figure 2.10: The azeotropic system R134a–R218 at T = 220 K [36].

This is the equation of the boiling point line, i. e. the upper one in Figure 2.10. As an azeotropic system it shows a clear maximum at the azeotropic point at approx. x1 = y1 = 0.3. Both pure component vapor pressures are lower than the azeotropic pressure, while R218 is the light end with the higher vapor pressure. On the right hand side of the diagram, it is expected that the addition of R218 to R134a increases the pressure, as R218 is the low-boiling component. It is more surprising that the addition of the highboiler R134a to the low-boiler R218 on the left hand side of the diagram gives a pressure increase as well. This is a necessary condition, as starting from both pure components the maximum in the azeotropic pressure must be reached. Evaluating Equation (2.56) at the pure R218, one gets for x1 → 0: p = x1 γ1∞ ps1 + (1 − x1 ) ⋅ 1 ⋅ ps2 > ps2

(2.57)

γ1∞ ps1 >1 ps2

(2.58)

or

Equation (2.58) is the azeotropic condition: if the product of vapor pressure and activity coefficient at infinite dilution of the high-boiler is larger than the vapor pressure of the low-boiler, an azeotrope with a pressure maximum will result. Similarly, for systems with negative deviations from Raoult’s law one gets the following: if the product of vapor pressure and activity coefficient at infinite dilution of the low-boiler is smaller than the vapor pressure of the high-boiler, an azeotrope with a pressure minimum will result. Solely by the continuity of the boiling point line, the occurrence of an azeotrope can thus be predicted just from the slopes of the boiling point line caused by the addition of traces to the pure components.
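The azeotropic condition (2.58) lends itself to a one-line check; the vapor pressures and the limiting activity coefficient in the sketch below are hypothetical values for a generic high-boiler (1)/low-boiler (2) pair.

```python
# Hedged sketch: check for a pressure-maximum azeotrope with Equation (2.58).
# ps1, ps2, gamma1_inf are hypothetical values (component 1 = high-boiler).
def has_pressure_maximum_azeotrope(ps1, ps2, gamma1_inf):
    return gamma1_inf * ps1 / ps2 > 1.0

ps1, ps2 = 0.8, 1.0          # bar, vapor pressures of high- and low-boiler
for gamma1_inf in (1.1, 1.5, 2.0):
    print(gamma1_inf, has_pressure_maximum_azeotrope(ps1, ps2, gamma1_inf))
# 1.1 -> False (0.88), 1.5 -> True (1.20), 2.0 -> True (1.60)
```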


However, the first step for checking whether an azeotrope occurs should be a look into the database. There are even special monographs where azeotropic data are collected in a systematic way [37].

A question often asked is which of the three equations, Wilson, NRTL, and UNIQUAC, is the best one. In fact, it is not much use to compare large amounts of fitting results; no clear advantage can be observed. A description of the characteristics might be more helpful. The Wilson equation cannot describe systems with a miscibility gap due to mathematical reasons [11]. For a system without a miscibility gap, the Wilson equation is an adequate model. The NRTL equation can describe a miscibility gap. The third parameter α is a valuable tool if complicated systems have to be correlated. The comparison with Wilson and UNIQUAC is not fair in these cases, as these equations do not have this opportunity. Nevertheless, sometimes the third parameter is perceived to be useful and makes NRTL the favorite choice. In a project, all binary subsystems must be described with the same model; if NRTL gives a significant improvement for one of the important subsystems, the decision for NRTL is probable. Furthermore, NRTL has a very popular extension for electrolytes (ELECNRTL), which is fully consistent with the simple NRTL equation for conventional systems. Thus, if electrolytes occur or even if it is possible that they might occur, it is usually a good choice to take NRTL. UNIQUAC is clearly the equation with the strongest physical background. For application, it is not so popular as it requires the van-der-Waals surfaces and volumina as additional pure component parameters, which cannot be assigned for all components due to missing subgroups (e.g. CO2). An advantage of UNIQUAC is that it has a combinatorial part, which takes into account the behavior of molecules differing in size.

In comparison with the φ-φ-approach, the γ-φ-approach has the disadvantage that supercritical components cannot be treated with this concept. According to Equation (2.42), there is no reference point in the liquid phase for a pure supercritical component. As a workaround, a supercritical component can be treated as a Henry component, with its reference state at infinite dilution in a pure solvent. The phase equilibrium condition for such a component is

xi Hij = yi p φi(T, p, yi) ,   (2.59)

with Hij as the so-called Henry coefficient of Henry component i in solvent j. Equation (2.59) can be applied for low concentrations of the Henry component in the liquid phase (rule of thumb: xi < 0.03); otherwise, further corrections are necessary, clearly favoring the φ-φ-approach. The Henry coefficient has the character of a vapor pressure; its meaning becomes clear when Equation (2.59) is applied to a subcritical component at low pressure (i.e. φi(T, p, yi) = 1) at infinite dilution of the Henry component in the liquid phase. Compared with Equation (2.41), the Henry coefficient is equal to

Hi = psi γi∞   (2.60)

The Henry coefficient is a temperature function; the temperature dependence is usually described using a function like

ln(Hi / p0) = A + B/T + CT + DT² ,   (2.61)

where p0 is an arbitrary pressure unit, necessary to make the argument of the logarithm dimensionless. It should be noted that in contrast to the vapor pressure, the Henry coefficient is not necessarily a function monotonically rising with temperature. It can exhibit well-defined maxima (e.g. oxygen in water [11, 35]). For mixed solvents, a mixing rule like

ln(Hi / p0) = [∑j xj ln(Hij / p0)] / ∑j xj   (2.62)

can be applied. In process simulators, often even more complicated mixing rules are used. It should be mentioned that the averaging in the mixing rules is only performed with the solvents where Henry coefficients are available. The index j refers only to the solvent components where a Henry coefficient is given. One must be careful, as these solvents are not always representative for the whole liquid. It is often remarked that mixing rules like Equation (2.62) are arbitrary and empirical. From the physical point of view, the application of equations of state, especially with gE mixing rules (Chapter 2.7), is more justified. Nevertheless, one must realize that in process calculations the whole gas solubility calculation has the character of an estimation, giving only a reasonable order of magnitude. The dissolution of gases in liquids usually takes a lot of time to reach equilibrium. For experimental setups, several hours are scheduled to get a data point; much more time than a gas has available in a process step. Therefore, even relatively large errors in the Henry coefficient are not relevant for the target of the calculation. For example, if the correct solubility of a gas component is 100 ppm, an overestimation of 20 % of the Henry coefficient in Equation (2.59) would yield a solubility of 83 ppm, which is a fully acceptable result in a process calculation.

In principle, excess enthalpies (enthalpies of mixing, see Glossary) can also be calculated if a correlation for the activity coefficients is available. hE depends on the temperature derivatives of the activity coefficients:

hE = −R T² ∑i xi (∂ ln γi / ∂T)    (2.63)

However, the physical background of the equations for the activity coefficient is not sufficient so that application of Equation (2.63) usually gives wrong results unless the parameters are temperature-dependent and have been fitted to both phase equilibrium and excess enthalpy data.
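Equation (2.63) can be evaluated numerically for any activity coefficient routine by a central temperature difference. The sketch below assumes a generic placeholder function ln_gamma(x, T) (e.g. a fitted NRTL routine with temperature-dependent parameters); the demo model and its parameter are invented for illustration only.

```python
# Numerical evaluation of Eq. (2.63): hE = -R*T^2 * sum_i x_i * d(ln gamma_i)/dT.
# ln_gamma(x, T) is a placeholder for any activity coefficient routine; it must
# return a list of ln(gamma_i) for composition x at temperature T.

R = 8.314  # J/(mol K)

def excess_enthalpy(x, T, ln_gamma, dT=0.01):
    """hE in J/mol via a central temperature difference."""
    lg_plus = ln_gamma(x, T + dT)
    lg_minus = ln_gamma(x, T - dT)
    dlng_dT = [(a - b) / (2.0 * dT) for a, b in zip(lg_plus, lg_minus)]
    return -R * T**2 * sum(xi * d for xi, d in zip(x, dlng_dT))

# Demo: a one-parameter model ln(gamma_1) = (A/T)*x2^2, ln(gamma_2) = (A/T)*x1^2,
# for which Eq. (2.63) gives hE = R*A*x1*x2 analytically.
def demo_ln_gamma(x, T, A=150.0):
    return [A / T * x[1]**2, A / T * x[0]**2]

print(excess_enthalpy([0.4, 0.6], 350.0, demo_ln_gamma))   # approx. 299 J/mol
```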


The great advantage of the γ-φ-approach is that all properties are described by individual equations that have no influence on each other. The phase equilibrium is described only by the activity coefficients, which do not need to be varied if any other property is changed. Independent and highly accurate correlations are available for each property.

2.3.2 Vapor pressure and liquid density

In contrast to the φ-φ-approach, in the γ-φ-approach the liquid density, the vapor pressure and the specific enthalpy are determined independently with separate correlations, giving the opportunity of yielding better accuracies and having a more convenient workflow, as changes in one property do not affect the others. While the enthalpy calculation needs its own chapter (Chapter 2.8), the correlations and estimations for the liquid density and the vapor pressure can be briefly discussed here.

The density of a pure substance or a mixture is a fundamental quantity in any process calculation. The vapor density as a function of temperature and pressure is determined by the equation of state chosen in Equation (2.42). The liquid density of pure components is treated only as a function of temperature; the pressure effect on the density can usually be neglected. Appropriate correlations and their coefficients are the Rackett equation

ρL = A / B^(1+(1−T/C)^D)    (2.64)

and the PPDS (Physical Property Data Service) equation

ρL/(kg/m³) = ρc/(kg/m³) + A (1 − Tr)^0.35 + B (1 − Tr)^(2/3) + C (1 − Tr) + D (1 − Tr)^(4/3) ,    (2.65)

which is usually slightly more accurate. It is important to mention that the liquid density mixing rule is based on the specific volume, i. e. the reciprocal value of the density:

1/ρL = ∑i xi/ρL,i    (2.66)
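A minimal sketch of Equation (2.66), neglecting the excess volume discussed below; the pure component densities used are placeholders standing in for values from the Rackett or PPDS correlations, Eqs. (2.64)/(2.65).

```python
# Mixing rule for the liquid density, Eq. (2.66): the reciprocal densities
# (specific volumes) are additive. Fractions and pure component densities
# must be on a consistent basis (e.g. mass fractions with mass densities).
# The pure component values below are placeholders.

def liquid_mixture_density(x, rho_pure):
    """x: fractions summing to 1 (-), rho_pure: pure liquid densities (kg/m3)."""
    return 1.0 / sum(xi / ri for xi, ri in zip(x, rho_pure))

print(liquid_mixture_density([0.3, 0.7], [790.0, 998.0]))  # approx. 925 kg/m3
```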

Thermodynamics says that there is a so-called excess volume (see Glossary), i. e. a systematic deviation from Equation (2.66). If the liquid density is calculated using an equation of state, as is the case in the φ-φ-approach (Chapter 2.2), this excess volume is accounted for automatically. Nevertheless, no advantage can be taken, as it is usually not described quantitatively correctly, and in most cases the density of the pure substances is badly reproduced so that even with a correction Equation (2.66) yields better results (p. 38). The maximum error caused by neglecting the excess volume can be quantified regarding the system exhibiting the largest excess volume: To the knowledge of the author, it is in fact ethanol–water, and the maximum excess volume observed is approx. 3.5 % [11].

Liquid densities can be estimated e. g. with the COSTALD equation [38]. It can in principle be written as

1/ρL = v* · f(T, Tc, ω) ,    (2.67)

meaning that the liquid density can be estimated with a generalized function depending only on Tc and ω. Additionally, the parameter v∗ is used, which can be adjusted to one or more data points. In this case, the procedure is very accurate; up to temperatures below 0.95 Tc the error is usually below 2 %. If no data point is available, one can use the critical volume for v∗ , which of course increases the risk but often yields surprisingly good results. It should be mentioned that the COSTALD equation has weaknesses if polar components are involved. The vapor pressure is the most important quantity in thermodynamics. It is decisive especially in the simulation of distillation columns. Furthermore, it is directly related to the enthalpy of vaporization via the Clausius–Clapeyron equation (Chapter 2.8). The vapor pressure is an exponential function of temperature, starting at the triple point and ending at the critical point. It comprises several orders of magnitude, therefore, a graphical representation usually represents only part of its characteristics. Figure 2.11 shows two diagrams, with linear and logarithmic axes for the vapor pressure of propylene. The linear diagram makes it impossible to identify even the qualitative behavior at low temperatures, whereas in the logarithmic diagram only the order of magnitude can be identified on the axis. At least, logarithmic diagrams allow the comparison between vapor pressures of different substances, e. g. deciding whether they intersect or not. On the other hand, many process calculations require

Figure 2.11: Typical vapor pressure plots as a function of temperature [11]. © Wiley-VCH Verlag GmbH & Co. KGaA. Reproduced with permission.


Figure 2.12: Deviation plot for the fit of a vapor pressure equation [11]. © Wiley-VCH Verlag GmbH & Co. KGaA. Reproduced with permission.

extremely accurate vapor pressure curves, e. g. the separation of isomers by distillation. For the visualization of a fit of a vapor pressure curve, special techniques must be applied; the simple graphical comparison between experimental and calculated data in a diagram is not possible. For this purpose, a deviation plot is useful (Figure 2.12).

Most companies use parameter databanks. When a parameter database is set up, e. g. the one in a commercial process simulation program, it cannot be known in advance at which conditions exact vapor pressures will be required. For distillation applications, a high accuracy is often required, especially when components with similar vapor pressures have to be separated in distillation columns. Simple vapor pressure equations cannot be applied for the whole temperature range from the triple point to the critical point. More capable vapor pressure correlations are needed; the most popular ones are the extended Antoine equation and the Wagner equation. In principle, the extended Antoine equation is a collection of various useful terms:

ln (ps/p0) = A + B/(T/K + C) + D·(T/K) + E·ln(T/K) + F·(T/K)^G ,    (2.68)

where p0 is an arbitrary reference pressure, usually the pressure unit. (It is not intended that all parameters are used for correlation; the extended Antoine equation is a compilation of useful terms to avoid different equations for any combination of parameters.) A very useful correlation is the Wagner equation

ln (ps/pc) = 1/Tr · [A (1 − Tr) + B (1 − Tr)^1.5 + C (1 − Tr)^3 + D (1 − Tr)^6] ,    (2.69)

where Tr = T/Tc is the reduced temperature.
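The following sketch evaluates Equation (2.69) in the 3–6 form. The critical data and coefficients are placeholders within the typical ranges of Equation (2.70), not data for a real substance; the first call only demonstrates that the critical point is met by construction.

```python
import math

# Wagner equation in the 3-6 form, Eq. (2.69). All numbers below are
# placeholders, chosen only to illustrate the functional form.

def wagner_ps(T, Tc, pc, A, B, C, D):
    """Vapor pressure in the unit of pc; defined only for T <= Tc."""
    if T > Tc:
        raise ValueError("Wagner equation is not defined above Tc")
    Tr = T / Tc
    tau = 1.0 - Tr
    return pc * math.exp((A * tau + B * tau**1.5 + C * tau**3 + D * tau**6) / Tr)

Tc, pc = 500.0, 40.0                      # K, bar (placeholders)
A, B, C, D = -7.5, 1.5, -3.0, -2.0        # placeholders within Eq. (2.70) ranges
print(wagner_ps(Tc, Tc, pc, A, B, C, D))  # returns pc: the critical point is met
print(wagner_ps(350.0, Tc, pc, A, B, C, D))
```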

The Wagner equation can correlate the whole vapor pressure curve from the triple point to the critical point with an excellent accuracy. It has been developed by identifying the most important terms in Equation (2.69) with a structural optimization method [39]. The extraordinary capability of the equation has also been demonstrated by Moller [40], who showed that the Wagner

equation can reproduce the difficult term Δhv/(zV − zL) reasonably well. Eq. (2.69) is called the 3–6-form, where the numbers refer to the exponents of the last two terms. Some authors [41] prefer the 2.5–5-form, which is reported to be slightly more accurate.

For the application of the Wagner equation, accurate critical data are required. As long as the experimental data points involved are far away from the critical point (e. g. only points below atmospheric pressure), estimated critical data are usually sufficient. As the critical point is automatically met due to the structure of the equation, the Wagner equation extrapolates reasonably to higher temperatures, even if the critical point is only estimated. However, like all vapor pressure equations it does not extrapolate reliably to lower temperatures. Sometimes, users of process simulation programs calculate vapor pressures beyond the critical point (maybe on purpose as a slight extrapolation, or during an iteration), although it is physically meaningless. If the Wagner equation is applied above the critical temperature, it will yield a mathematical error. Therefore, the simulation program must provide an extrapolation function that continues the vapor pressure line with the same slope. For the particular parameters of the Wagner equation, the following ranges of values are reasonable for both forms:

A = −9 … −5
B = −10 … 10
C = −10 … 10
D = −20 … 20    (2.70)

If these ranges are exceeded, one should carefully check the critical data used and the experimental data points for possible outliers. Coefficients for the Wagner equation can be found e. g. in [41–43]. For a vapor pressure correlation, average deviations should be well below 0.5 %. Data points with correlation deviations larger than 1 % should be rejected, as long as there are enough other values available. Exceptions can be made for vapor pressures below 1 mbar, as the accuracy of the measurements is lower in that range. The structure of the deviations should always be carefully interpreted. A guideline is given in [11].

Despite this high accuracy demand for vapor pressures, there is also a need for good estimation methods. Often, a lot of components are involved in a distillation process. Not all of these components are really important; however, one should know whether they end up at the top or at the bottom of a distillation column. In many cases, a measurement would not even be possible, as the effort for the isolation and purification of these components might be too large. Estimation methods are mostly applied at medium and low pressures for molecules with a certain complexity, as small molecules usually have well-established vapor pressure equations, whereas large molecules often have a volatility which is so low that a purification of the


substance for measurement by distillation is not possible. The estimation of vapor pressures is one of the most difficult problems in thermodynamics. Due to the exponential relationship between vapor pressure and temperature, a high accuracy must not be expected. Deviations in the range of 5–10 % have to be tolerated. Thus, estimated vapor pressure correlations should not be used for a main substance in a distillation column to evaluate the final design; however, they can be very useful to decide about the behavior of side components without additional measurements. Different estimation methods are discussed in [11]. At least one data point, usually the normal boiling point, should be known. Most methods use a vapor pressure equation with two adjustable parameters; therefore, a second piece of information is necessary. It can be generated by a group contribution method (e. g. Rarey method [40]) or by a second data point, which can be either a genuine data point or a point based on an estimation method itself, e. g. the critical point. An example for the latter case is the application of the Hoffmann–Florin equation [44]

ln (ps/p0) = α + β [1/(T/K) − 7.9151·10⁻³ + 2.6726·10⁻³ lg(T/K) − 0.8625·10⁻⁶ (T/K)] ,    (2.71)

with the adjustable parameters α and β. It has the advantage that it can easily be transformed to the extended Antoine equation (2.68) by

A = α − 7.9151·10⁻³ β
B = β
D = −0.8625·10⁻⁶ β
E = 2.6726·10⁻³ β / ln 10
C = F = G = 0    (2.72)
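A minimal sketch of this procedure: the two parameters α and β of Equation (2.71) are fitted to two vapor pressure points and then converted to extended Antoine coefficients via Equation (2.72). The two (T, ps) points below are placeholders, not measured data.

```python
import math

# Hoffmann-Florin equation (2.71) fitted to two data points and converted
# to extended Antoine coefficients according to Eq. (2.72).

def f_hf(T):
    """Temperature function of the Hoffmann-Florin equation (T in K)."""
    return 1.0 / T - 7.9151e-3 + 2.6726e-3 * math.log10(T) - 0.8625e-6 * T

def fit_hoffmann_florin(T1, p1, T2, p2, p0=1.0):
    """Return (alpha, beta) from two vapor pressure points (p in units of p0)."""
    beta = (math.log(p1 / p0) - math.log(p2 / p0)) / (f_hf(T1) - f_hf(T2))
    alpha = math.log(p1 / p0) - beta * f_hf(T1)
    return alpha, beta

def to_extended_antoine(alpha, beta):
    """Coefficients A..G of Eq. (2.68) according to Eq. (2.72)."""
    return {"A": alpha - 7.9151e-3 * beta, "B": beta, "C": 0.0,
            "D": -0.8625e-6 * beta, "E": 2.6726e-3 * beta / math.log(10),
            "F": 0.0, "G": 0.0}

# Placeholder points: a normal boiling point (1.013 bar) and a second point
alpha, beta = fit_hoffmann_florin(353.0, 1.013, 500.0, 15.0)
print(to_extended_antoine(alpha, beta))
```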

If no single data point is available, one must estimate even the normal boiling point [11, 45]. In this case, one can hardly rely on the results obtained; however, as long as no better information is available, there is no choice. When a vapor pressure has to be estimated, one should also have a look at vapor pressure curves of components which have a similar structure. Just by defining a constant vapor pressure ratio between the two components, one can at least obtain a reasonable vapor pressure curve.

Vapor pressures play the most decisive role if isomers have to be separated by distillation. In this case, the separation factor depends only on the ratio between the vapor pressures, as the activity coefficients between isomers can be set to 1 as an approximation (which is, however, not always correct). If the vapor pressures of the isomers are close, a large number of theoretical stages is necessary, and its determination is very sensitive to the separation factor. In these cases, it is strongly recommended not to rely on data

from the literature, not even on good data. Instead, the vapor pressure of the isomers should be measured as accurately as possible in the same apparatus and on the same day to avoid any systematic measurement errors.

2.3.3 Association

Another advantage of the γ-φ-approach is that substances showing association in the vapor phase can be described. These substances are the carboxylic acids like formic acid, acetic acid (for acetic acid, an additional tetramer formation is often considered to obtain more accurate results) or propionic acid, which form dimers in the vapor phase, and hydrogen fluoride, which forms hexamers. These substances are involved in many chemical processes, and the deviations from the ideal gas law are significant even at low pressures. For example, the compressibility factor of acetic acid at the normal boiling point, which is expected to be close to 1, is ZNBP = 0.6. Up to now, no equation of state valid for both the vapor and the liquid phase has been available in this case. However, the corresponding equation of state does not need to be valid for both the vapor and the liquid phase; with the γ-φ-approach, it is sufficient to cover only the vapor phase.

The formation of associates is treated as a chemical reaction in equilibrium. As an illustration, formic acid as a substance forming only dimers is regarded. The association can be described with the law of mass action:

K2 = z2 / (z1² (p/p0)) ,    (2.73)

where K2 is the equilibrium constant for the reaction

2 HCOOH ⇌ (HCOOH)2

z1 is the true mole fraction of the monomer, while z2 denotes the true mole fraction of the dimers in the mixture. p0 is simply the pressure unit. The equilibrium constant can be correlated by

ln K2 = A2 + B2/T    (2.74)

The sum of the true mole fractions z is equal to one:

z1 + z2 = 1    (2.75)

Combining Equations (2.73) and (2.75), z1 can be determined by

z1 = (√(1 + 4 K2 (p/p0)) − 1) / (2 K2 (p/p0))    (2.76)



Figure 2.13: Illustration of the modeling of vapor phases.

Assuming that the ideal gas equation is valid, the specific volume can be determined by

v = (RT/p) · 1/(z1 + 2 z2)    (2.77)
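The sketch below chains Equations (2.73)–(2.77) for a dimerizing vapor. The constants A2 and B2 of Equation (2.74) are placeholders, not fitted dimerization data of a real acid; they merely produce an apparent compressibility factor of the right order of magnitude.

```python
import math

# Vapor-phase dimerization, Eqs. (2.73)-(2.77): true monomer fraction z1,
# dimer fraction z2 and the apparent molar volume of the vapor.
# A2 and B2 are placeholder values for illustration only.

R = 8.314  # J/(mol K)

def association_state(T, p, A2=-17.0, B2=7000.0, p0=1.0):
    """T in K, p and p0 in bar; returns (z1, z2, v) with v in m3 per apparent mol."""
    K2 = math.exp(A2 + B2 / T)                                             # Eq. (2.74)
    z1 = (math.sqrt(1.0 + 4.0 * K2 * p / p0) - 1.0) / (2.0 * K2 * p / p0)  # Eq. (2.76)
    z2 = 1.0 - z1                                                          # Eq. (2.75)
    v = R * T / (p * 1e5) / (z1 + 2.0 * z2)                                # Eq. (2.77), p in Pa
    return z1, z2, v

z1, z2, v = association_state(T=391.0, p=1.013)
print(z1, z2, v)
print("apparent compressibility factor:", 1.013e5 * v / (R * 391.0))  # well below 1
```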

Figure 2.13 illustrates the difference from conventional equations of state like the cubic ones. Case (a) represents the situation in an ideal gas phase; there are no forces between the molecules. Case (b) represents a real vapor phase, where the molecules attract or repel each other by intermolecular forces. Vapor phases like this are typically modeled with the equations of state described in Chapter 2.2 like the cubic equations. Case (c) denotes the association in the vapor phase. The model (Equations (2.73)–(2.77)) takes into account that associates are formed, but it is still assumed that no intermolecular forces are exerted. The nonideality in case (c) is only achieved by the formation of associates. This approximation is sufficient for low pressures. For higher pressures, a good model for substances showing vapor phase association would have to take the intermolecular forces into account as well (case (d)). However, an appropriate model for this situation has not yet been introduced.

For associating substances, the heat capacity of the vapor phase shows a well-defined maximum (Figure 2.14). At low temperatures, the dimerization as the exothermic reaction is preferred in the equilibrium, and all molecules are dimerized. When the temperature rises, the dimers are split. For this endothermic reaction, energy is required which is not used for increasing the temperature. Therefore, the heat capacity increases drastically. With increasing temperature, the number of dimers which can be split decreases. The heat capacity passes a maximum and comes down to the normal value of the ideal gas. For the sizing of heat exchangers, this effect must be considered.


Figure 2.14: Specific isobaric heat capacity of acetic acid vapor at different pressures [11]. © Wiley-VCH Verlag GmbH & Co. KGaA. Reproduced with permission.

The curvature is even more dramatic for HF with its hexamers, where the peak of the maximum can be up to 40 times higher than the ideal gas heat capacity [46].

2.4 Electrolytes

Water is one of the strangest substances that occur in chemical engineering. Its physical properties are not comparable to the behavior of other substances. With a low molecular weight of 18 g/mol, its normal boiling point of tb = 100 °C is incredibly high, as well as its critical point (tc = 374 °C, pc = 221 bar). The well-known maximum in the density of liquid water at t = 4 °C is technically not important, but the course of the liquid density as a function of temperature is extraordinarily flat. Between 0 °C and 50 °C, the density decreases just by 1 %. For comparison, n-heptane with a similar normal boiling point (98.4 °C) has a density decrease of more than 4 % in the same temperature range. When water crystallizes, it expands significantly by almost 10 %, whereas other substances reduce their volume as expected. The specific heat capacity of liquid water (approx. 4.2 J/(g K)) is about twice as large as for typical organic substances, but the most remarkable physical property is the enthalpy of vaporization. At t = 100 °C, it is approx. 2250 J/g, more than seven times larger than for n-heptane as a typical organic substance.

The reason for this behavior is the strong polar character of the molecule. It is not linear; the two bonds between oxygen and hydrogen form an angle of approx. 105°. The oxygen atom is strongly electronegative; it attracts the electrons of the bonds with their negative charges so that they concentrate at the oxygen atom, while the hydrogen atoms become positively charged. Therefore, the water molecules arrange in a kind of lattice.


Moreover, the strong polarity of the water molecule is the reason why it is an excellent solvent for many electrolytes. An electrolyte is a substance which conducts electric current as a result of its dissociation into positively and negatively charged ions in solutions or melts. Ions with a positive charge are called cations, ions with negative charge anions, respectively. The most typical electrolytes are acids, bases, and salts dissolved in a solvent, very often in water. The total charge of an ion is a multiple of the elementary charge (e = 1.602·10⁻¹⁹ C), given by the number z. Examples are:

H3O+:    z = 1
Cl−:     z = −1
Ca2+:    z = 2
SO4^2−:  z = −2

In a macroscopic solution, the sum of charges is always zero, since the solution is always neutral. Otherwise, an electric current would occur. For electrolyte solutions, the particular ions are formed by dissociation reactions like

NaCl → Na+ + Cl−
CaSO4 → Ca2+ + SO4^2−
H3PO4 + H2O → H3O+ + H2PO4−
H2PO4− + H2O → H3O+ + HPO4^2−
HPO4^2− + H2O → H3O+ + PO4^3−

The H+ ion cannot exist as a pure proton; it is always attached to a water molecule H2 O, giving H3 O+ . Strong and weak electrolytes can be distinguished. While strong electrolytes like HCl, H2 SO4 , HNO3 , NaCl, or NaOH dissociate almost completely, weak electrolytes do so only to a small extent. Sometimes, their electrolyte character plays a secondary role and can often be neglected. Examples are formic acid (HCOOH), acetic acid (CH3 COOH), HF, H2 S, SO2 , NH3 , or CO2 [11]. The molecular structure of an electrolyte solution is significantly determined by the electrostatic interactions between the charged ions (Coulomb-Coulomb interactions) and by the long-range interactions of the charged ions and the dipole moments of the solvent (Coulomb-dipole-interactions). Figure 2.15 illustrates the schematic distribution of water as a strongly polar solvent around a cation and an anion. As mentioned, the oxygen atom in the water molecule has a negative partial charge due to its high electronegativity. Therefore, the water molecule in the vicinity of an ion is arranged in a way that the oxygen atom is directed towards the positively charged cations. Vice versa, the hydrogen atoms in the water molecule are partially


Figure 2.15: Structure of an aqueous electrolyte solution [11]. © Wiley-VCH Verlag GmbH & Co. KGaA. Reproduced with permission.

positively charged and oriented towards the anions. Around the ions, a shell of solvent molecules is formed. The corresponding procedure is called solvation. It is an exothermic process. On the other hand, the dissociation of the electrolyte is an endothermic process, because the ionic lattice has to be destroyed, which is connected to a need of energy. The overall heat of solution is the sum of both contributions. It is usually dominated by the solvation and therefore remains exothermic. However, there are many exceptions. Electrolytes in their dissociated form are not volatile and remain completely in the liquid phase. However, it can happen that liquid drops containing electrolytes are subject to entrainment (Chapter 9).

The process simulation programs offer models for electrolyte solutions which describe the arrangement above. Not necessarily the best, but the most widely used one is the electrolyte NRTL model, often also called Chen model [47, 48]. Its obvious advantage is the compatibility with the conventional NRTL activity coefficient model. This is the most important property of an electrolyte model, often even more important than its accuracy. During a project, process simulation might start in a part of the plant where electrolytes are not involved. Later, when it is extended, electrolyte components occur, and defining these components simply as heavy ends is always tried first (this can simply be accomplished by assigning the same properties to the component as water, and then overwriting the molecular weight with the correct value for maintaining the mass balance and, also, overwriting the vapor pressure with an equation giving negligible values over the whole concentration range). Although of course not state of the art, this is often a feasible approach as long as the electrolyte concentration is low and not a decisive issue in the process. When it


finally turns out that electrolytes occur in a way that their character must be correctly described, it is useful if at least the BIP matrix does not need to be revised.

The NRTL electrolyte model consists of the theoretically derived Debye–Hückel term for the long-range interactions due to the charges of the ions and an NRTL based term for the short-range interactions. Still, the local composition concept is applied. The model is not restricted to systems of electrolytes just with water; other solvents and side components are possible. However, the database is mainly based on aqueous systems. Different kinds of parameters occur:
– the normal binary parameters between molecules;
– pair parameters between ion pairs and molecules;
– pair parameters between different ion pairs.

Nowadays, the possible electrolyte reactions and the species produced are generated by the process simulation program. It is up to the user to neglect them or take them into account. For instance, the dissociation reaction of ammonia

NH3 + H2O → NH4+ + OH−

can easily be neglected if just the system NH3/H2O is regarded, as only a small fraction of the ammonia dissociates, which is negligible. In a caustic environment, even this hardly ever happens. However, in an acid environment, the ammonia will dissociate, and in fact completely at low pH values. A famous example is the system NH3/CO2/H2O, where both NH3 and CO2 are weak electrolytes, but they keep the other component in the solution, as NH3 is a caustic and CO2 is an acid component. In the process simulation program, the equilibrium constants for the generated dissociation reactions are usually provided automatically, as well as the pair parameters. Note that the pair parameters really refer to a pair of ions and not to single ones. An example for the application of the NRTL electrolyte model is given in [11].

There are two options for the notation of the electrolytes. In the true component approach, the ions are listed as ions. The advantage is that it is listed what really happens; the disadvantage might be that it is more difficult to keep the overview. In the apparent component approach, the ions are recombined to components for the result list. This is easier for discussion but does not always work in a plausible way. It can happen that according to the balance components like NaOH and HCl coexist in the aqueous solution, which one can hardly imagine.
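When working with the true component approach, a basic consistency check is electroneutrality. The following sketch is illustrative only; the ion inventory is invented.

```python
# Electroneutrality check for the true component approach: the sum of the
# charge numbers z_i weighted with the ion amounts must be zero.
# The ion amounts below are illustrative only.

charges = {"Na+": 1, "Cl-": -1, "Ca2+": 2, "SO4^2-": -2, "H3O+": 1, "OH-": -1}

def charge_balance(moles):
    """moles: dict ion -> kmol; returns the net charge in kmol of elementary charges."""
    return sum(charges[ion] * n for ion, n in moles.items())

stream = {"Na+": 0.10, "Cl-": 0.08, "Ca2+": 0.01, "SO4^2-": 0.02}
print(charge_balance(stream))   # 0.0 means the solution is electroneutral
```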

2.5 Liquid-liquid equilibria

With increasing activity coefficients, two liquid phases with different compositions are formed (miscibility gap). The concentration differences of the compounds in the different phases can be used for example for the separation by extraction. In distillation

processes, the liquid-liquid equilibrium (LLE) is often used when a decanter separates the condensate of the top product into two liquid phases. The knowledge of the VLLE (vapor-liquid-liquid equilibrium) behavior is of special importance for the separation of systems by heteroazeotropic distillation. Many engineers are of the opinion that the formation of an LLE does not take place in distillation columns. In fact, it does; however, in contrast to a phase equilibrium arrangement, the two phases do not separate due to turbulences (tray columns) or form thin layers which trickle down the internals of the column (packed columns). In both cases, it is useful to treat the two liquid phases as one homogeneous liquid for the determination of the overall liquid composition and the physical properties of the liquid phase. For the phase equilibrium calculations, there is no other way than to consider the liquid phase split to get the correct vapor composition. Liquid-liquid equilibria can be evaluated by the isoactivity criterion:

(xi γi)^I = (xi γi)^II ,    (2.78)

as the product xi γi is also called activity ai. At moderate pressures, the liquid-liquid equilibrium behavior as a function of temperature depends only on the temperature dependence of the activity coefficients. For the calculation of LLE, again gE models like NRTL, UNIQUAC, or equations of state with a gE mixing rule (Chapter 2.7) can be used, whereas Wilson is not appropriate [11]. The formation of two liquid phases can result in the formation of binary and higher heteroazeotropes, which can for instance be used for the separation of systems like ethyl acetate–water (Figure 2.18).

Ternary LLEs can be illustrated in a triangle diagram (Figure 2.16). It is more difficult to calculate reliable liquid-liquid equilibria (LLE) of a system containing three or more components using binary parameters than to describe vapor-liquid or solid-liquid equilibria (Chapter 2.6). The reason is that in the case of LLE the activity coefficients have to describe the whole concentration dependence correctly, whereas in the case of the other phase equilibria (VLE, SLE) the activity coefficients primarily have to account for the deviation from ideal behavior (Raoult's law resp. ideal solid solubility). This is the main reason why up to now no reliable prediction of a multicomponent LLE behavior (tie lines) using binary parameters is possible. Fortunately, it is quite easy to measure LLE data of ternary and higher systems at least up to atmospheric pressure. Binary parameters can be fitted to ternary data as well, and in this way LLEs can at least be correlated.

Also, even the fit to a binary mixture is often a bit more laborious, as priorities must be set. It hardly ever happens that a set of binary parameters can represent both the vapor-liquid equilibrium of systems with an LLE (the so-called vapor-liquid-liquid equilibrium VLLE) and the miscibility gap itself. It must be decided which capabilities the parameters should have. Often, two different parameter sets are used, one for


Figure 2.16: Liquid-liquid equilibrium of the ternary system ethyl acetate–water–ethanol at t = 50 °C.

LLE and one for VLLE. In most cases, the LLE data are more significant, especially for systems with wide miscibility gaps. Moreover, the binary parameters for LLEs cannot be simply transferred from binary to ternary and multicomponent mixtures. Doing so, one obtains more or less an estimation. One should be aware that for a good description of multicomponent LLE at least data from ternary mixtures are necessary.

Often, when a colleague tells me that his simulation results look strange, the first thing I do is check whether the VLLE option is actually used for systems with a miscibility gap. Forgetting it is a very common error in the simulation of vapor-liquid separations. Simulation results may look strange in many ways. For this case, the trigger is that the temperature is way too low, not just by a few degrees but by 100 K or so. (Jo Sijben)

Calculating a phase equilibrium, there are two options in a process simulator. One is the common VLE calculation. If it is known that miscibility gaps occur in the system, the option VLLE (3-phase equilibrium) must be chosen so that the simulator checks


Figure 2.17: Calculation of the system ethyl acetate–water at t = 80 °C with the 2-phase flash.

Figure 2.18: Calculation of the system ethyl acetate–water at t = 80 °C with the 3-phase flash.

whether there is an LLE before the equilibrium with the vapor phase is evaluated. If this is forgotten, one gets strange phase equilibrium diagrams (Figure 2.17). The boiling and the dew point curve then appear to be complete nonsense. The correct and reasonable result is obtained with the 3-phase flash (Figure 2.18). In a flash in a process simulation, the error is not so easy to detect, but whenever the result is not plausible, it should be checked whether the VLLE option is necessary and chosen. One could easily say that VLLE should always be chosen, but if it turns out to be not necessary, a lot of calculation time has been wasted, as the VLLE option is quite time-consuming in comparison with a simple VLE. Generally, it can be said that systems with water and nonpolar organic substances have strong intermolecular interactions and often form miscibility gaps. Therefore, at least all the BIPs with water should be assigned in any project (Chapter 2.9).
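To illustrate the isoactivity criterion (2.78), the sketch below solves the binary liquid-liquid split for a simple symmetric one-parameter model ln γ1 = A x2², ln γ2 = A x1² (a placeholder model, not one of the gE models discussed above; a miscibility gap only exists for A > 2). By symmetry, the two phases are mirror images of each other, so a single equation in one unknown remains.

```python
import math

# Binary liquid-liquid split from the isoactivity criterion, Eq. (2.78),
# for the symmetric placeholder model ln(gamma_1) = A*x2^2, ln(gamma_2) = A*x1^2.
# By symmetry, x1 in phase II equals 1 - x1 in phase I.

def isoactivity_residual(x, A):
    """x1^I * gamma_1^I - x1^II * gamma_1^II with x1^II = 1 - x."""
    g1_I = math.exp(A * (1.0 - x) ** 2)
    g1_II = math.exp(A * x ** 2)
    return x * g1_I - (1.0 - x) * g1_II

def binodal_composition(A, lo=1e-6, hi=0.45, tol=1e-10):
    """Bisection for the phase-I composition x1^I in (0, 0.5); requires A > 2."""
    flo = isoactivity_residual(lo, A)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        fmid = isoactivity_residual(mid, A)
        if flo * fmid <= 0.0:
            hi = mid
        else:
            lo, flo = mid, fmid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

x1_I = binodal_composition(A=2.5)
print(x1_I, 1.0 - x1_I)   # approx. 0.145 and 0.855 for A = 2.5
```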

2.6 Solid-liquid equilibria

Solid-liquid equilibria (SLE) are used for the synthesis and design of crystallization processes, but taking them into account is also important to avoid undesired solids formation. Information about solid-liquid equilibria can also be used for the adjustment of binary parameters. SLEs are more complicated than VLEs or LLEs. Different types of SLEs have to be distinguished, depending on the mutual solubility of the components in the solid and in the liquid phase. However, the most important one, the simple eutectic system, is comparably easy, and it is the only one which does not require a specialist.


Figure 2.19: Solid-liquid equilibrium of the eutectic system benzene–naphthalene [11]. © Wiley-VCH Verlag GmbH & Co. KGaA. Reproduced with permission.

The eutectic system is characterized by the total immiscibility of the components in the solid phase. This is advantageous for crystallization, as the crystallized phase has a high purity. One theoretical stage is sufficient to obtain the pure compounds [11]. Usually, there are liquid mechanical inclusions so that the crystallization must at least be repeated once, but from thermodynamics alone a pure solid phase is generated. Fortunately, about 80 % of the systems behave in this way. Figure 2.19 shows the solid-liquid equilibrium of the eutectic system benzene–naphthalene. Both solid phases crystallize in pure form. Generally, solids are formed at a low temperature. Consider a mixture with x1 = 0.5. When it is cooled, it reaches the liquidus line at T ≈ 320 K. The first naphthalene crystals are formed. Cooling down further, the amount of solids increases according to the lever rule, while the concentration of the liquid phase moves towards the eutectic one. At 300 K, it is approx. x1 = 0.7. When the eutectic temperature is reached, both components crystallize forming pure solid phases. The eutectic temperature is lower than the melting points of the participating pure components. Thermodynamics does not give information about the shape of the crystals and the amount of liquid included. It is the art of crystallization to form the crystals in the desired way (Chapter 7.3).

For eutectic systems, the equilibrium condition can be written as [11]

ln (xi γi) = −Δhm,i/(RT) · (1 − T/Tm,i) + (cp,i^L − cp,i^S)/R · ((Tm,i − T)/T − ln(Tm,i/T))    (2.79)

The heat capacities of the solids are not often known, and, fortunately, the difference in the corresponding term has a tendency to cancel out. Therefore, Equation (2.79) is usually simplified to

ln (xi γi) = −Δhm,i/(RT) · (1 − T/Tm,i)    (2.80)

For an evaluation of Equation (2.80), only the activity coefficient as well as the melting temperature and the enthalpy of fusion as pure component properties must be known. Solid-liquid

equilibria are not very sensitive to pressure but much more to temperature. As the activity coefficient γi of the component i is strongly concentration-dependent, the mole fraction xi in the liquid phase must be evaluated iteratively. In process simulation programs, special solid components can be introduced which do not take part in other phase equilibria. For crystallization, reaction blocks can be introduced where the liquid component is transformed into the solid or vice versa. The “reaction equilibrium” is calculated according to Equation (2.79) or (2.80), respectively.
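The iteration mentioned above can be done by simple successive substitution, as sketched below for Equation (2.80). With γ = 1, the classical ideal solubility is obtained directly; the naphthalene data are approximate literature values and should be verified before use.

```python
import math

# Solubility in a simple eutectic system from Eq. (2.80):
# ln(x_i * gamma_i) = -dh_m/(R*T) * (1 - T/T_m).
# Because gamma_i depends on x_i, the equation is solved by successive
# substitution; with gamma = 1 the ideal solubility results immediately.

R = 8.314  # J/(mol K)

def solubility(T, T_m, dh_m, gamma=lambda x: 1.0, x0=0.5, iterations=50):
    """Mole fraction of the crystallizing component in the saturated liquid."""
    rhs = math.exp(-dh_m / (R * T) * (1.0 - T / T_m))
    x = x0
    for _ in range(iterations):
        x = rhs / gamma(x)
    return x

# Naphthalene in benzene at 25 °C, approximate pure component data:
# T_m = 353.4 K, dh_m = 19.0 kJ/mol; gamma = 1 for this nearly ideal system
print(solubility(T=298.15, T_m=353.4, dh_m=19.0e3))   # approx. 0.30
```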

2.7 φ-φ-approach with gE mixing rules

Nowadays, in process simulation in the chemical industry the requirement for accuracy and reliability on the one hand and the occurrence of polar components on the other hand make clear that advanced cubic equations of state with gE mixing rule and individual α-functions should be preferred to the generalized equations of state like Peng–Robinson or Soave–Redlich–Kwong with the quadratic mixing rule (Equation (2.37)). The generalized ones might still be tolerable in hydrocarbon processes; however, the advanced cubic equations of state have no drawback there. The gE mixing rule adopts the concept of the activity coefficients for use in the mixing rule for the parameter a. Examples are the Predictive Soave–Redlich–Kwong equation (PSRK) and the Volume-Translated Peng–Robinson equation (VTPR). For PSRK, the gE mixing rule is

a/(b R T) = ∑i xi aii/(bi R T) − 1/0.64663 · (g0E/(R T) + ∑i xi ln(b/bi)) ,    (2.81)

where g0E denotes the Gibbs excess energy at p = 0, i. e. at low pressure:

gE = R T ∑i xi ln γi    (2.82)

To calculate the γi, any appropriate equation like Wilson, NRTL, or UNIQUAC or a predictive approach (Chapter 2.10) can be used. With the gE mixing rules, the general inability of the φ-φ-approach to describe mixtures with polar components can be overcome.

There has been a great deal of discussion on whether the φ-φ-approach Equation (2.15) or the γ-φ-approach Equation (2.42) is the more favorable option for phase equilibrium calculation, often in a more or less ideological way. The following items point out the particular pros and cons:
– If supercritical components dissolve in the liquid phase to a considerable amount, it is nowadays obligatory to use the φ-φ-approach, preferably with a gE mixing rule.
– Theoretically, the γ-φ-approach is valid in the whole subcritical region. However, in the region close to the critical temperature of a component (approx. T > Tc − 15 K) the φ-φ-approach is more reliable.
– As long as supercritical components dissolve in the liquid phase to a minor extent (e. g. nitrogen in water), there is no real disadvantage if Henry's law with the corresponding mixing rule is used instead of the φ-φ-approach. Although the mixing rule Equation (2.62) is more or less arbitrary, it leads to the correct order of magnitude of the gas solubility. As in processes these equilibria are usually not reached, this information from process simulation is fully sufficient.
– If associating components are involved, the γ-φ-approach is obligatory. So far, there is no established equation of state valid for both vapor and liquid phase for associating components.
– One must be aware that with the φ-φ-approach all thermodynamic properties (ρL, ps, Δhv, cpL) are calculated just by the equation of state. This means that any extension of the database for one property affects all other properties, giving a lot of work. Furthermore, due to the limited number of parameters used, the accuracy of the particular quantities is often not sufficient. Exceptions are, of course, the high-precision equations of state for pure components. There is no option for setting priorities in the enthalpy calculations (Chapter 2.8). With the γ-φ-approach, all properties can be correlated separately and with the required accuracy, as long as enough data are available.
– An often cited prejudice is that with the φ-φ-approach only data for a correlation for cpid are required. In fact, this statement is not well founded. From the formal point of view, all the thermodynamic properties as listed above can be calculated without further data. However, in this case one has no information about the accuracy of the equation of state. For process calculations, it is necessary to compare the values obtained with experimental data and then probably adjust the parameters involved; a procedure with the disadvantage mentioned above that the parameters influence all quantities. Finally, a responsible use of the φ-φ-approach presumably requires more preparation work than the γ-φ-approach.
– An argument against the γ-φ-approach is that it does not include the pressure dependence of the activity coefficient. At high pressures, even small values of the excess volume could have an influence on the results [49]. There are several approaches to counter this argument. First, the fitting of data at the corresponding temperatures and pressures should achieve some kind of error compensation so that the data are still represented [49]. Second, a further correction taking the excess volume into account could be included in the phase equilibrium condition (2.42) [50], which is, however, not considered in the process simulators. And finally, it is clear that the φ-φ-approach takes the excess volume into account in a formally correct way; but there is hardly any evidence that the excess volume is represented correctly.

– For the adjustment of the parameters, there is often a disagreement between the vapor pressure equation used, which is usually well-founded, and the pure component vapor pressure given as a data point in a binary vapor-liquid equilibrium data set. It is a common and successful practice to replace the vapor pressure from the correlation by the pure component vapor pressure given in the particular data set just for the parameter fitting procedure to avoid inconsistencies with the rest of the data and to get more reliable values for the γi∞ values (vapor pressure shifting). In the process simulation, the vapor pressure correlation is then used again together with the adjusted binary parameters [11]. In the γ-φ-formalism, this is an easy change, whereas in the φ-φ-approach the individual α-function would have to be manipulated, which also has an influence on the other quantities.
– It is a considerable disadvantage for the project administration that the gE mixing rules use the equations for the determination of the activity coefficients in a different context. The same parameters do not yield the same activity coefficients. This leads to confusion when both approaches are used in a project simultaneously.

2.8 Enthalpy calculations

For the evaluation of heating and cooling duties in a process, a correct description of the specific enthalpies is decisive. As all components are more or less present in both the liquid and the vapor phase, the difficulty is that a continuous enthalpy description in both phases is necessary. The following quantities can contribute to the enthalpy:
– standard enthalpy of formation Δhf0 at t = 25 °C in the ideal gas state, used as reference point for the specific enthalpy;
– ideal gas heat capacity cpid;
– enthalpy of vaporization Δhv;
– liquid heat capacity cpL;
– enthalpy pressure correction of the vapor phase (h − hid);
– excess enthalpy hE.

First, it should be remembered that the absolute value of the enthalpy is normally meaningless; only differences between specific enthalpies can be interpreted. A single value for the enthalpy is only useful if a reference point is given. With an arbitrary choice of the reference point (e. g. h = 0 for t = 0 °C), the calculation of chemical reactions is awkward; it only makes sense if only pure components are involved. In process simulation programs, the use of the standard enthalpy of formation as reference point for the enthalpy makes sure that the enthalpies of reaction can be correctly calculated. This is further explained in Chapter 10.1. Only the standard enthalpies of formation and the ideal gas heat capacities are explicitly necessary if the φ-φ-approach is used, whereas the deviations from the ideal


gas can be calculated directly. Equations of state with generalized α-functions are usually not accurate enough, therefore, individual α-functions of advanced cubic equations of state with component-specific parameters have to be fitted to Δhv , cpL , and

(h − hid ) to ensure that the equation of state works. It turned out that the adjustment of cpL is sufficient to obtain good results [22].

Figure 2.20: Enthalpy description of a pure liquid using the vapor as the starting phase [11]. © Wiley-VCH Verlag GmbH & Co. KGaA. Reproduced with permission.

For the γ-φ-approach, there are more options available, as the particular quantities are not independent of each other. For a pure substance, knowing three of the four quantities cpid , Δhv , cpL , (h − hid ), the fourth one can be evaluated. Unfortunately, it

turns out that this does not work for cpid and cpL. The reason is that the slope of Δhv is not accurate enough if only values of this quantity are adjusted [51]. There are two ways for the description of the enthalpy in process simulators, illustrated using hT diagrams (Figures 2.20, 2.21). For a pure substance, they show the bubble point line, the dew point line and the ideal gas enthalpy at p = 0 for guidance.

(A) Vapor as the starting phase: With this route, cpL is the quantity which is determined indirectly (Figure 2.20). Liquid enthalpies of the pure components are determined with considerable deviations via

hL,i(T) = Δhf,i0 + ∫_{T0}^{T} cp,iid dT + (h − hid)i(T, psi) − Δhv,i(T)    (2.83)

A way to overcome this difficulty is to fit the correlations for the enthalpy of vaporization simultaneously to both enthalpy of vaporization and cpL data [51]. The effect of pressure on the liquid enthalpy is considered to be small and therefore neglected.
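A minimal sketch of route (A), Equation (2.83): all correlations below are placeholders (a constant cpid, the vapor pressure correction set to zero, a Watson-type expression for Δhv with invented parameters), intended only to show how the pieces are combined, not to represent any real component.

```python
# Enthalpy route (A), Eq. (2.83): the liquid enthalpy is built from the ideal
# gas enthalpy, the residual term and the enthalpy of vaporization.
# All correlations and numbers are placeholders.

T0 = 298.15  # K, reference temperature of the standard enthalpy of formation

def h_liquid(T, dh_f0, cp_id, dh_v, h_residual=lambda T: 0.0):
    """Specific liquid enthalpy in J/mol along route (A)."""
    h_ideal_gas = dh_f0 + cp_id * (T - T0)       # constant cp_id as a placeholder
    return h_ideal_gas + h_residual(T) - dh_v(T)

# Placeholder component: dh_f0 = -200 kJ/mol, cp_id = 80 J/(mol K),
# dh_v from a Watson-type expression with Tc = 600 K and dh_v(350 K) = 35 kJ/mol
def dh_v(T, Tc=600.0, dh_ref=35.0e3, T_ref=350.0):
    return dh_ref * ((1.0 - T / Tc) / (1.0 - T_ref / Tc)) ** 0.38

print(h_liquid(T=350.0, dh_f0=-200.0e3, cp_id=80.0, dh_v=dh_v))  # J/mol
```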


Figure 2.21: Calculation of the enthalpy of saturated vapor of a pure substance using the liquid as the starting phase [11]. © Wiley-VCH Verlag GmbH & Co. KGaA. Reproduced with permission.

(B) Liquid as the starting phase: The reference enthalpy is set for a liquid state and adjusted in a way that the standard enthalpy of formation in the ideal gas state at t = 25 °C is met. The transition to the vapor phase is performed at a certain arbitrary temperature, usually the normal boiling point. Figure 2.21 illustrates the calculation route for an enthalpy of a saturated vapor using route (B). The enthalpy of vaporization is calculated indirectly with this route; the results are usually sufficiently accurate at low temperatures. They become even qualitatively wrong in the vicinity of the critical point [11]. Despite the fact that it is calculated directly, it can turn out that cpL is again the problem quantity. It is often only measured for temperatures below the normal boiling point, and its extrapolation to high temperatures is often bad.

For both routes, the change from pure components to the mixture is performed at the system temperature. For gases, the mixing takes place in the ideal gas state, where the excess enthalpy is zero. Liquid enthalpies are linearly mixed; the excess enthalpy is often neglected.

Correlations for cpid, Δhv, and cpL supported by the various process simulation programs are discussed in [11]. The most important ones are as follows:
– for cpid:
  – Aly-Lee equation:
    cpid = A + B ((C/T) / sinh(C/T))² + D ((E/T) / cosh(E/T))²    (2.84)
  – PPDS equation:
    cpid = RB + R(C − B) y² [1 + (y − 1)(D + Ey + Fy² + Gy³ + Hy⁴)] ,    (2.85)
    with y = T/(A + T).
  – polynomial:
    cpid = A + BT + CT² + DT³    (2.86)
– for cpL:
  – polynomial:
    cpL = A + BT + CT² + DT³ + ET⁴ + FT⁵    (2.87)
  – PPDS equation:
    cpL = R (A/(1 − Tr) + B + C(1 − Tr) + D(1 − Tr)² + E(1 − Tr)³ + F(1 − Tr)⁴)    (2.88)
– for Δhv:
  – DIPPR (Design Institute for Physical Property Data) equation:
    Δhv = A (1 − Tr)^(B + C·Tr + D·Tr² + E·Tr³)    (2.89)
  – PPDS equation:
    Δhv = R Tc (A(1 − Tr)^(1/3) + B(1 − Tr)^(2/3) + C(1 − Tr) + D(1 − Tr)² + E(1 − Tr)⁶)    (2.90)

In the particular equations, A, B, . . . , F are the adjustable parameters. R is the general gas constant. It should be mentioned that most data for Δhv are based on the Clausius–Clapeyron equation (2.7). Using a good vapor pressure equation and an appropriate equation of state for v󸀠󸀠 , an error of approximately 2 % can be expected. The application of the Clausius–Clapeyron equation should be avoided at vapor pressures ps < 1 mbar due to inaccuracies in dps /dT and in the vicinity of the critical point, as the description of v󸀠󸀠 becomes more and more weak in this area [11].
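Two of these correlations are sketched below. The coefficients are placeholders, not fitted to any real component; the functions only show how the equations are evaluated and in which units the results appear.

```python
import math

# Evaluation of the Aly-Lee equation (2.84) for cp_id and the PPDS
# equation (2.90) for the enthalpy of vaporization, with placeholder
# coefficients for illustration only.

R = 8.314  # J/(mol K)

def cp_id_aly_lee(T, A, B, C, D, E):
    """Eq. (2.84); the unit follows the units of the coefficients A, B, D."""
    return (A + B * ((C / T) / math.sinh(C / T)) ** 2
              + D * ((E / T) / math.cosh(E / T)) ** 2)

def dh_v_ppds(T, Tc, A, B, C, D, E):
    """Eq. (2.90) in J/mol."""
    tau = 1.0 - T / Tc
    return R * Tc * (A * tau ** (1 / 3) + B * tau ** (2 / 3) + C * tau
                     + D * tau ** 2 + E * tau ** 6)

# Placeholder coefficients
print(cp_id_aly_lee(400.0, A=33.0e3, B=80.0e3, C=1500.0, D=60.0e3, E=700.0))
print(dh_v_ppds(400.0, Tc=600.0, A=8.0, B=4.0, C=-2.0, D=1.0, E=0.5))
```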

2.9 Model choice and data management

Any problem, no matter how complicated it may be, has a simple, obvious and generally comprehensible wrong solution. (Harald Lesch)

Process simulators offer a vast amount of models, which often confuses the user, generating more disorientation than opportunities. The following paragraph should provide some clarification. Despite the fact that new ideas and solutions are frequently presented in the literature, the number of models in use can in principle be restricted to six. These models are the following.

– Standard activity coefficient model combined with an equation of state, e. g. NRTL-PR: A standard activity coefficient model should cover approx. 80–90 % of the cases. An appropriate equation of state should be used for the vapor phase to make full use of the capabilities of Equation (2.42). This equation can be used in principle for temperatures up to the lowest critical point. The Peng–Robinson equation is the favorite one of the author; definitely, there are other options. If single critical points are by far exceeded (e. g. presence of an inert gas), the Henry concept can be applied (Equation (2.59)). When they are exceeded only by a small extent, it is theoretically wrong but pragmatic just to extrapolate the vapor pressure curve; however, this should not be done for main components. It does not make sense in process engineering to use the ideal gas equation for the vapor phase, even if pressures are low. With the ideal gas equation, the enthalpy description is inaccurate (Chapter 2.8). Moreover, in a later state of a project the pressure relief devices must be designed (Chapter 14.2). In this case, the thermodynamic model is applied at pressures far above the normal operation pressure, which then requires a reasonably accurate equation of state for the vapor phase anyway. For the density, it should be taken care that the mixing rule (Equation (2.66)) is applied. As well, the correct representation of the enthalpies should be considered.
– Standard activity coefficient model combined with an equation of state for vapor phase association, e. g. NRTL-ASS: If carboxylic acids (formic, acetic, propionic, acrylic, butanoic, etc.) or HF play a decisive role in the process, the use of an association model for the vapor phase is important (Chapter 2.3.3). One should be aware that there are no models which consider both association and the nonidealities caused by elevated pressures. The problems concerning cpL via enthalpy route (A) should be considered [51].
– Electrolyte model, e. g. ELECNRTL: Small amounts of electrolytes can be treated as heavy ends as a workaround. If they play a major role, an electrolyte model considering the dissociation reactions and the interactions of the ions should be used [47, 48]. The electrolyte model can as well be combined with equations of state for the vapor phase, e. g. Peng–Robinson or Redlich–Kwong for high pressures or the association model in case HF occurs.
– Equation of state model (PR, PSRK, VTPR): If the process is mainly operated at high pressures, if supercritical components play a major role with high concentrations in the liquid phase so that Henry's law cannot be successfully applied or if supercritical components change to the subcritical state in one apparatus, the application of the φ-φ-approach with an equation of state valid for both the vapor and the liquid phase is obligatory (Chapter 2.2). Well-known examples are the polypropylene or natural gas conversion processes. The standard generalized equations of state like Peng–Robinson [21], Soave–Redlich–Kwong [52] or Lee–Kesler–Plöcker [53] have the disadvantage that they perform well for nonpolar molecules (ethane, propane, propylene, etc.), but the more the molecules have a polar character the more inaccurate they become. A solution to this problem is provided by the advanced cubic equations of state like PSRK or VTPR [11]. They involve pure component parameters in the so-called α-function (Equations (2.33), (2.34)). With this option, the vapor pressures, heat capacities and enthalpies of vaporization of any components can be successfully represented. There are still weaknesses in the liquid density description; even the volume translation in the VTPR equation does not give satisfactory results. The mixing rules of the advanced cubic equations of state are based on the activity coefficient approaches (gE mixing rules). Combinations of these advanced equations of state with electrolyte models are available as well; however, as mentioned above, they can still not be combined with vapor phase association models.
– High-precision equation of state: Unexpectedly, there are a lot of problems in process simulation which are related to pure components, e. g. the utilities steam and cooling water, inertizations with nitrogen or CO2 or major parts of the LDPE (low density polyethylene) process, where ethylene is the only component. These problems can be covered very effectively by the use of the so-called high-precision equations of state, which are equations adjusted individually for the particular components [26–28]. They represent all thermodynamic properties within their experimental uncertainty over a wide range of pressures and temperatures and should be used whenever possible. However, except for natural gas no high-precision equations of state for mixtures are available. In these cases, compromises have to be made. (Copolymerization in LDPE is a good example: for instance, it has to be decided whether for a mixture of 95 % ethylene and 5 % propylene at p = 100 bar it is more important to describe the pressure effect sufficiently or to describe the deviation caused by the 5 % impurity.) In the hydrocarbon processes, the Lee–Kesler–Plöcker equation is a good approach, using accurate pure component representations for hydrocarbons and applying corresponding mixing rules. For polar components, this is not an option.
– Polymer model: Polymer applications are often covered by representing the polymer as a high-boiling component. A reasonable representation of the combinatorial part of the activity coefficient, which often shows negative deviations from Raoult's law for molecules differing largely in size, should be considered (Flory–Huggins, UNIQUAC). There are models for polymers which are by far more complicated but represent the character and the effects in polymer mixtures accurately, e. g. PC-SAFT (perturbed-chain statistical associating fluid theory) [54].


The choice of a good model does not imply that the simulation will be correct. It is even more important to compile the binary interaction parameters (BIPs) for the various possible combinations. For n components in a process, n(n − 1)/2 binary combinations are possible. For a typical project in a chemical plant with 40 components this makes 780 binary parameter sets. In the hydrocarbon business, up to 200 components are possible, giving 19 900 binary parameter sets. In both cases it is not possible to provide a perfect matrix in a reasonable time scale. For a chemical process, the following procedure is recommended: An EXCEL table is set up containing a matrix with all the components, showing a color code for the possible options. Figure 2.22 gives an example.

Figure 2.22: Illustration of the binary interaction parameter matrix in a project.
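The bookkeeping behind such a matrix can of course also be scripted. The sketch below enumerates the n(n − 1)/2 binary pairs and tracks a status per pair, analogous to the color code of Figure 2.22; the components and status assignments are purely illustrative.

```python
from itertools import combinations

# Bookkeeping sketch for the binary interaction parameter (BIP) matrix:
# enumerate all n*(n-1)/2 pairs and track the status of each pair.
# Components and statuses below are illustrative only.

components = ["water", "ethanol", "ethyl acetate", "acetic acid", "nitrogen"]
status = {}   # (comp1, comp2) -> e.g. "fitted", "simulator default",
              # "estimated", "assumed ideal", "to be revised"

for pair in combinations(components, 2):
    status[pair] = "to be revised"                      # default until reviewed

# Examples of assignments during a project
status[("water", "ethanol")] = "fitted"                 # own fit to VLE data
status[("water", "nitrogen")] = "simulator default"     # Henry coefficient
status[("ethanol", "ethyl acetate")] = "estimated"      # e.g. Mod. UNIFAC

print(f"{len(status)} binary pairs")                    # n(n-1)/2 = 10
for pair, s in sorted(status.items()):
    print(f"{pair[0]:>13s} - {pair[1]:<13s}: {s}")
```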

To fill this matrix, first the BIPs given by the databanks of the process simulator should be listed. Next, it should be checked which parameters are decisive for the process. This is, of course, not an exact assessment, nevertheless, for components which occur together in any block with considerable concentrations, the corresponding BIP should be important. For these cases, it is the responsibility of the user either to adopt the parameters from the simulator or to adjust own ones from experimental data from the established databanks [55, 56]. The latter is highly recommended, as the quality of the parameters can be assured in this way. If the data situation turns out to be not sufficient, own phase equilibrium measurements can be initiated to overcome this situation. The adjustment of BIPs to experimental data is thoroughly described in [11]. Less important parameters can be estimated (Chapter 2.10). Parameters where the current situation is not acceptable should be marked as “to be revised” or similar. It must be clear that the BIPs for component pairs where both components occur only at very low concentrations, e. g. in the ppm region, are not important, it has no influence on the results if they are omitted. All BIPs with water should be assigned, either by the simulation program, by adjustment to experimental data or by estimation, as the


nonidealities of water with organic components are usually large. Other parameters can just be set; e. g., the BIPs for n-hexane–n-heptane can be assumed to be zero (ideal mixture), as long as nothing better is available. In this way, a detailed overview of the BIP situation can be obtained, and the quality of the simulation results is easier to assess. For the hydrocarbon business, the situation is different. The number of parameters involved is so large that a thorough check is not possible within a reasonable time. Often, one cannot distinguish between important and less important components, as all components occur only in relatively low concentrations. As hydrocarbon mixtures usually do not show major nonidealities and the interactions with water can easily be covered, it might be an option to use Modified UNIFAC with an appropriate equation of state or PSRK (Chapter 2.10).

In this case, process simulation can be skipped! (Hans Haverkamp, after his boss suggested that he should vary the binary parameters until the column behavior is met)

Fitting physical properties to reproduce operation results is something one should not do. A process simulation which meets the data obtained from operation is a strong indication that the process is understood, or it can help to detect errors. Fitting physical properties to operation data will certainly reproduce these data, but the simulation is of no use; extrapolation to other operation states will simply not work.
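The bookkeeping behind such a BIP matrix is, in the end, quite simple. The following minimal Python sketch illustrates the classification described above; the component names, stream compositions, significance threshold and the labels are freely invented for illustration and do not replace the engineering judgment discussed in the text.

```python
from itertools import combinations

# Hypothetical component list and stream compositions (mole fractions);
# in a real project these would come from the simulation flowsheet.
components = ["water", "methanol", "acetone", "n-hexane", "toluene"]
streams = {
    "FEED":    {"water": 0.50, "methanol": 0.30, "acetone": 0.20},
    "COLUMN1": {"methanol": 0.60, "acetone": 0.39, "n-hexane": 0.01},
    "PURGE":   {"water": 0.001, "toluene": 0.999},
}

SIGNIFICANT = 0.02  # mole fraction above which a component counts as relevant (assumed)

def classify_pairs(components, streams):
    """Classify every binary pair: 'decisive' if both components occur together
    with considerable concentration in at least one stream, 'water pair'
    (always to be assigned because of large nonidealities), otherwise 'minor'."""
    matrix = {}
    for a, b in combinations(components, 2):
        decisive = any(
            s.get(a, 0.0) >= SIGNIFICANT and s.get(b, 0.0) >= SIGNIFICANT
            for s in streams.values()
        )
        if decisive:
            matrix[(a, b)] = "decisive - adjust to experimental data"
        elif "water" in (a, b):
            matrix[(a, b)] = "water pair - assign (databank or estimation)"
        else:
            matrix[(a, b)] = "minor - estimate or set to ideal"
    return matrix

for pair, status in classify_pairs(components, streams).items():
    print(f"{pair[0]:10s} / {pair[1]:10s}: {status}")

# Number of binary pairs for n components is n(n-1)/2, e.g. 780 for n = 40:
n = 40
print("pairs for 40 components:", n * (n - 1) // 2)
```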

2.10 Binary parameter estimation

BIPs can be estimated using group contribution methods, as illustrated in Figure 2.23 for the system ethanol–n-hexane. In group contribution methods it is assumed that the mixture does not consist of molecules but of functional groups. Ethanol can be divided into a CH3-, a CH2-, and an OH-group, whereas n-hexane consists of two CH3- and four CH2-groups. It can be shown that the required activity coefficients can be calculated as long as the interaction parameters between the functional groups are

Figure 2.23: Illustration of the group contribution concept [11]. © Wiley-VCH Verlag GmbH & Co. KGaA. Reproduced with permission.

known. Furthermore, if the group interaction parameters between the alkane and the alcohol group are known, not only the system ethanol–n-hexane but also all other alkane–alcohol or alcohol–alcohol systems can be predicted. The great advantage of group contribution methods is that the number of functional groups is much smaller than the number of possible molecules [11].

The UNIFAC (universal quasi-chemical functional group activity coefficients) group contribution method was first published in 1975 [57]. Like UNIQUAC, it consists of two parts. The combinatorial part is temperature-independent and takes into account the size and form of the molecules, whereas the residual part is temperature-dependent and considers attractive and repulsive forces between the groups. The interacting groups are called main groups. They often consist of more than one subgroup. For example, in the case of alkanes the subgroups are CH3-, CH2-, CH-, and C-groups. The different subgroups have different size parameters, the so-called van der Waals properties, which represent the volume and the surface of the groups. By definition, the group interaction parameters between groups belonging to the same main group are equal to zero.

Meanwhile, the UNIFAC method has almost been replaced by the Modified UNIFAC (Dortmund) method [58, 59]. Its main improvements are [11]:
– an empirically modified combinatorial part to improve the results for asymmetric systems;
– temperature-dependent group interaction parameters;
– adjustment of the van der Waals properties;
– additional main groups, e. g. for cyclic alkanes or formic acid;
– an extension of the database: besides VLE data, also activity coefficients at infinite dilution, excess enthalpies, excess heat capacities, liquid–liquid equilibrium data, solid–liquid equilibrium data of simple eutectic systems, and azeotropic data are used for the adjustment of the group interaction parameters.

Most important for the application of group contribution methods for the synthesis and design of separation processes is a comprehensive and reliable parameter matrix. Because of the importance of Modified UNIFAC for process development, the range of applicability is continuously extended by filling the gaps in the parameter table and revising the existing parameters with the help of new data. Since 1996, the further revision and extension of the parameter matrix has been carried out by the UNIFAC consortium, and the revised parameter matrix is only available to its sponsors [60].

For the estimation of binary parameters, Mod. UNIFAC is used in a way that artificial data are generated, and the parameters of the current model are adjusted to them. The advantage over using Mod. UNIFAC directly as the model is that well-known systems do not need to be estimated with Mod. UNIFAC, and their accuracy is not lost.
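The central idea — that a handful of group interaction parameters covers a huge number of systems — can be pictured with a few lines of code. The subgroup decomposition of ethanol and n-hexane follows the text; the simplified main-group mapping and the additional molecule are only illustrative and are not taken from an actual UNIFAC parameter table.

```python
from itertools import combinations

# Subgroup counts per molecule, following the decomposition in the text;
# 1-propanol is added only as a further illustration.
molecules = {
    "ethanol":    {"CH3": 1, "CH2": 1, "OH": 1},
    "n-hexane":   {"CH3": 2, "CH2": 4},
    "1-propanol": {"CH3": 1, "CH2": 2, "OH": 1},
}

# Simplified mapping of subgroups to their interacting main groups
# (illustrative only, not an actual UNIFAC table).
main_group = {"CH3": "CH2 (alkane)", "CH2": "CH2 (alkane)", "OH": "OH (alcohol)"}

def required_main_group_pairs(mixture):
    """Main-group interaction parameters needed for a given mixture.
    Parameters within one main group are zero by definition."""
    groups = set()
    for molecule in mixture:
        groups |= {main_group[g] for g in molecules[molecule]}
    return {tuple(sorted(p)) for p in combinations(sorted(groups), 2)}

# One single alkane/alcohol parameter set covers all alkane-alcohol systems:
print(required_main_group_pairs(["ethanol", "n-hexane"]))
print(required_main_group_pairs(["1-propanol", "n-hexane"]))
```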


Still, Mod. UNIFAC has some weaknesses:
– Isomer effects cannot be predicted.
– Unreliable results are obtained for group contribution methods if a large number of functional groups occurs in the molecule, as is the case for pharmaceuticals.
– Functional groups which are located closely together are often not represented sufficiently, e. g. the configuration –C(Cl)(F)(Br) in refrigerants (proximity effect).
– Poor results are obtained for the solubilities and activity coefficients at infinite dilution of alkanes or naphthenes in water [11].
– Systems with small deviations from Raoult's law are difficult to predict, which is a problem if the differences in vapor pressures are also small. In these cases, often qualitatively wrong characteristics for the mixture are obtained.

The advanced cubic equations of state PSRK and VTPR use a g E mixing rule, which can be used as a predictive tool if UNIFAC or Modified UNIFAC is used for the calculation of g E. For PSRK, the original UNIFAC method can be used, whereas Modified UNIFAC yields bad results in this combination. For VTPR, a separate matrix has been built up [60, 61]. For both equations, additional groups have been introduced so that light gases can be described as well. As equations of state, both can of course be used in the supercritical region.

In recent years, the COSMO-RS model16 has been developed [62], which works without adjusted parameters. Its accuracy is a bit worse compared to UNIFAC, but it is applicable in any case. However, like molecular modeling, it should only be used by a professional user. Molecular modeling can generate data from quantum-mechanical calculations of the interactions of the molecules. This is a fascinating demonstration of how far these interactions are understood. However, in engineering applications more detailed justifications of the model are required; therefore, the author is pretty convinced that the general approach of fitting models to experimental data will remain.

2.11 Model changes

For the handling of enthalpies in a process simulation program, the change of a model between two blocks is often critical. This problem has much to do with the enthalpy description. Between the two blocks, the simulation program hands over the values for p and h to describe the state of the stream. According to the particular models used in the two blocks, the same (p, h) pair is assigned two different temperatures, which may differ significantly [11].

16 COnductor like Screening MOdel for Real Solvents.


Example
A vapor stream consisting of pure n-hexane (t1 = 100 °C, p1 = 2 bar) is coming out of a block which uses the Peng–Robinson equation with the φ-φ-approach. It is transferred to a "nothing-happens" block (adiabatic, same pressure) working with the ideal gas law. Which error will be produced by the model change?

Solution
According to the Peng–Robinson equation of state, the specific enthalpy of the stream leaving the first block is determined to be h1 = −1807.3 J/g at p1 = 2 bar. Using an activity coefficient model with an ideal gas phase, the coordinates p2 = p1 and h2 = h1 refer to the vapor state at t2 = 96.6 °C. Using cpL ≈ 2 J/(g K), the corresponding enthalpy difference

Δh ≈ cpL ΔT = 2 J/(g K) ⋅ 3.4 K = 6.8 J/g

will be missing in the energy balance.

Therefore, care must be taken when the thermodynamic model is changed in a flowsheet. It is recommended that a dummy heat exchanger is introduced between two blocks operating with different models, which is defined in a way that inlet and outlet states are the same in spite of the different models. Another option is to carry out a model change where both models yield at least similar results, maybe in a block operating at low pressure. In general, one should stick to the most comprehensive model in a flowsheet; nevertheless there are cases (e. g. association, electrolytes occurring only in parts of the flowsheet) where a model change cannot be avoided.
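The effect can be mimicked with two deliberately crude enthalpy functions. The numbers below (a constant heat capacity and a constant residual term) are invented and do not reproduce the Peng–Robinson calculation of the example; the sketch only shows how a temperature offset appears when the same (p, h) pair is interpreted by two different models.

```python
# Two toy enthalpy models for the same vapor stream, h(T) in J/g, T in degC.
# Model A includes a (constant) residual correction, model B is ideal gas.
# All numbers are invented for illustration.
CP = 1.9           # J/(g K), assumed constant vapor heat capacity
H_RESIDUAL = -6.5  # J/g, assumed residual enthalpy of model A at these conditions
H_REF = -1800.0    # J/g, arbitrary common reference value at T_REF
T_REF = 100.0      # degC

def h_model_a(t):
    return H_REF + CP * (t - T_REF) + H_RESIDUAL

def h_model_b(t):
    return H_REF + CP * (t - T_REF)

def t_from_h(h, h_model, t_guess=100.0):
    """Invert h(T) for a model with constant cp (closed form in this sketch)."""
    return t_guess + (h - h_model(t_guess)) / CP

# Stream leaves block 1 (model A) at 100 degC; (p, h) is handed over.
h_handover = h_model_a(100.0)
# Block 2 (model B) reinterprets the same enthalpy value:
t_in_block2 = t_from_h(h_handover, h_model_b)
print(f"temperature seen by block 2: {t_in_block2:.1f} degC")  # about 96.6 degC
print(f"enthalpy offset between the models: {h_model_a(100.0) - h_model_b(100.0):.1f} J/g")
```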

2.12 Transport properties

The transport properties dynamic viscosity, thermal conductivity and surface tension must be known for the design of many pieces of equipment, e. g. columns or heat exchangers. Their correlations and mixing rules and further details are thoroughly explained in [11] or [63]. Therefore, only general remarks are given here and the pitfalls are explained.
– Dynamic viscosity of liquids: The dynamic viscosity of liquids is probably the most important transport quantity. Among others, it has an influence on heat exchangers, distillation and extraction columns, and the pressure drop of pipes. Similar to the vapor pressure, it can cover several orders of magnitude, and it is similarly difficult to correlate and extrapolate, although nowadays there are excellent correlation tools which do this easily. The dynamic viscosity of liquids starts at high values at the melting point and then decreases logarithmically with increasing temperature. The


pressure influence is comparably small, but it should be considered at large pressures p > 50–100 bar. Correlations refer to the dynamic viscosity at saturation pressure. Correction terms for the pressure influence are available [11]. Also, just recently the first reasonable estimation method has been developed [64]. However, the most crucial point, and probably the weakest part of process simulation in general, is the mixing rule. Several ones are available, but in most cases the logarithmic relationship

ln (ηM/η0) = ∑i xi ln (ηi/η0)   (2.91)

is used, where η0 is the unit used to make the argument of the logarithm dimensionless. Equation (2.91) can only reproduce the order of magnitude of the result, and in extreme cases not even that. It does not claim to give a good result; it is essentially not more than a way to come to a number. Even simple systems like methanol–water can exhibit large maxima, where the deviation of Equation (2.91) can be up to 100 % [11]. At least, it has proven to be superior to the simple linear averaging mixing rule [245]. The lucky circumstance is that most applications are not strongly sensitive to errors in the calculation of small viscosities around or even below 1 mPa s. Things become worse when larger viscosities are involved and differences between the two components occur. In these cases, it is really worth the effort to introduce and fit binary parameters, e. g. according to Grunberg and Nissan [41] (a short computational sketch of both mixing rules is given at the end of this section):

ln (ηM/η0) = ∑i xi ln (ηi/η0) + 1/2 ∑i ∑j xi xj Gij   (2.92)

Example Calculate the dynamic viscosity of a brine consisting of 40 mol % ethylene glycol (1,2-ethanediol, EG, M = 62.068 g/mol) and 60 mol % water (W, M = 18.015 g/mol) at t = 20 °C.

Solution
The pure component viscosities according to [42] are ηW = 1.01 mPa s and ηEG = 21.23 mPa s, giving a calculated viscosity for the mixture according to Equation (2.91):

ln (ηM/(mPa s)) = 0.6 ⋅ ln 1.01 + 0.4 ⋅ ln 21.23 = 1.228

ηM = 3.415 mPa s


A more probable value can be taken from [65] after recalculation of the EG concentration from 40 mol % to 69.7 mass %. The result is 7.08 mPa s, with a deviation of more than 100 % from the calculated value. In fact, this is a deviation which might have a significant influence on equipment design.









– Thermal conductivity of liquids: The thermal conductivity of liquids decreases almost linearly with increasing temperature, except in the region in the vicinity of the critical point. The order of magnitude is approx. 0.1–0.2 W/(K m) for almost all liquids. An exception is water; here the thermal conductivity is in the range 0.6–0.7 W/(K m), with a maximum at approx. 150 °C. There is a mixing rule which does not produce major errors. The influence of the pressure is similar to the one for the liquid viscosity; the values usually refer to the saturation line, and it is a good approach to take the thermal conductivity as pressure-independent, but at p > 50–100 bar a correction term should be applied [11, 63].
– Dynamic viscosity of gases: In contrast to the viscosity of liquids, the dynamic viscosity of vapors or gases increases almost linearly with increasing temperature. The order of magnitude is approx. 5–20 µPa s. The thermal conductivity of gases is hardly ever measured, and most of the values are calculated from a well-defined estimation method [11]. As for liquids, a pressure correction should be applied when the pressure exceeds p > 50–100 bar [11]. Given values usually refer to the ideal gas state at low pressures. Mixing rules are available.
– Thermal conductivity of gases: The qualitative behavior of the thermal conductivity of gases is similar to that of the dynamic viscosity of gases. The typical order of magnitude is 0.01–0.03 W/(K m). The same statements hold for the pressure dependence. However, some interesting remarks must be given: the thermal conductivity of hydrogen and helium, the so-called quantum gases, is significantly higher; it is in the range 0.16–0.25 W/(K m) in the usual temperature range 0–250 °C for both substances. This is the reason why helium or hydrogen are used as carrier gases in gas chromatography with thermal conductivity detectors. The high thermal conductivity corresponds to the base line; when the sample passes the detector, the thermal conductivity and therefore the heat removal from the detector decreases. The detector temperature rises and gives a corresponding signal. A second interesting item is that for very low pressures (p < 10⁻⁶ bar), the thermal conductivity of gases is no longer a physical property of the substance but depends more on the dimensions of the vessel where the gas is located [11].
– Surface tension: The surface tension occurs in various formulas used for the design of equipment in process engineering, but the author is not aware that any of these equations is sensitive to it. The surface tension refers to the phase equilibrium. A typical order of magnitude is 5–20 mN/m. For water, the surface tension is significantly higher


(approx. 75 mN/m at room temperature). The surface tension decreases almost linearly with increasing temperature and becomes zero at the critical point. Mixing rules are available, but for systems with water special ones must be applied.
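The computational sketch announced in the discussion of the liquid viscosity mixing rules is given below. It implements Equations (2.91) and (2.92) in a few lines of Python and reproduces the 3.4 mPa s of the brine example; the Grunberg–Nissan parameter used afterwards is an assumed value, chosen only to show the order of magnitude of the correction, not a fitted one.

```python
import math

def eta_mix_log(x, eta):
    """Logarithmic mixing rule, Eq. (2.91); x mole fractions, eta in mPa s."""
    return math.exp(sum(xi * math.log(ei) for xi, ei in zip(x, eta)))

def eta_mix_grunberg_nissan(x, eta, G):
    """Grunberg/Nissan mixing rule, Eq. (2.92); G is a symmetric parameter
    matrix with zero diagonal, G[i][j] fitted to binary data."""
    ln_eta = sum(xi * math.log(ei) for xi, ei in zip(x, eta))
    ln_eta += 0.5 * sum(x[i] * x[j] * G[i][j]
                        for i in range(len(x)) for j in range(len(x)))
    return math.exp(ln_eta)

# Brine example from the text: 40 mol% ethylene glycol (1), 60 mol% water (2), 20 degC
x = [0.4, 0.6]
eta = [21.23, 1.01]                   # mPa s
print(round(eta_mix_log(x, eta), 2))  # ~3.41 mPa s, as in the example

# A binary parameter of roughly G12 = G21 = 3.0 (assumed, not a fitted value)
# would be needed to come close to the experimental 7.08 mPa s:
G = [[0.0, 3.0], [3.0, 0.0]]
print(round(eta_mix_grunberg_nissan(x, eta, G), 2))  # ~7.0 mPa s
```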

3 Working on a process

The difference between fiction and simulation is that both simulation and fiction deceive and betray, but at least simulation creates an image that is congruent with reality. ([66])

Process simulation can generate a very accurate wrong solution. (Rob Hockley)

A process simulation is the attempt to evaluate the characteristic quantities of a process with well-defined calculations of the particular process steps. It is also a target to identify the sensitivities of a process and to find out how it reacts to disturbances. Process simulation is a tool for the development, design and optimization of processes in the chemical, petrochemical, pharmaceutical, energy producing, gas processing, environmental, and food industry [11]. It provides a representation of the particular basic operations of the process using mathematical models for the different unit operations, ensuring that the mass and energy balances are maintained. Today simulation models are of extraordinary importance for scientific and technical developments. The development of process simulation started in the 1960s, when appropriate hardware and software became available and could connect the remarkable knowledge about thermophysical properties, phase equilibria, reaction equilibria, reaction kinetics, and the particular unit operations. A number of comprehensive simulation programs have been developed, commercial ones (ASPEN Plus, ChemCAD, HySys, Pro/II, ProSim) as well as in-house simulators in large companies, for example, VTPlan (Bayer AG) or ChemaSim (BASF), not to mention the large number of in-house tools that cover the particular calculation tasks of small companies working in process engineering. Nevertheless, all the simulators have in common that they are only as good as the models and the corresponding model parameters available. Various degrees of effort can be applied in process simulation. A simple split balance can give a first overview of the process without introducing any physical relationships into the calculation. The user just defines split factors to decide which way the particular components take. In a medium level of complexity, shortcut methods are used to characterize the various process operations. The rigorous simulation with its full complexity can be considered as the most common case. The particular unit operations (reactors, columns, heat exchangers, flash vessels, compressors, valves, pumps, etc.) are represented with their correct physical background and with a model for the thermophysical properties. Different physical modes are sometimes available for the same unit operation. A distillation column can, for example, be modeled on the basis of theoretical stages or using a rate-based model, taking into account the mass transfer on the column internals. A simulation of this kind can be used to extract the data for the design of the process equipment or to optimize the process itself. During recent years, dynamic simulation has become more and more important. In this context, “dynamic” means that the particular input data can be varied with time so that https://doi.org/10.1515/9783110657685-003

the time-dependent behavior of the plant can be modeled and the efficiency of the process control can be evaluated. For both steady-state and dynamic simulation, the correct representation of the thermodynamics, i. e. thermophysical properties, phase equilibria, mass transfer, and chemical reactions, mainly determines the quality of the simulation. However, one must be aware that there are a lot of pitfalls beyond thermodynamics. Unknown components, foam formation, slow mass transfer, fouling layers, decomposition, or side reactions might lead to unrealistic results. The occurrence of solids in general is always a challenge, where only small scale-ups are possible (approx. 1 : 10), in contrast to fluid processes, where a scale-up of 1 : 1000 is nothing unusual. For crystallization, the kinetics of crystal growth is often more important than the phase equilibrium itself. Nevertheless, even under these conditions simulation can yield a valuable contribution to understanding the principles of a process. Today process simulations are the basis for the design of plants and the evaluation of investment and operation costs, as well as for follow-up tasks like process safety analysis, emission lists, or performance evaluation. For process development and optimization purposes, they can effectively be used to compare various options and select the most promising one, which, however, should in general be verified experimentally. Therefore, a state-of-the-art process simulation can make a considerable contribution for both plant contractors and operating companies in reducing costs.
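The lowest level of effort mentioned above, the split balance, is easy to picture with a small sketch: component flows are simply routed by user-defined split factors, without any physical model behind them. All streams, components and factors below are freely invented.

```python
# Split-factor balance: no thermodynamics, just bookkeeping of component flows.
# Feed flows in kg/h and split factors are freely invented for illustration.
feed = {"ethylene": 300.0, "chlorine": 750.0, "EDC": 950.0, "inerts": 20.0}

# Fraction of each component that the user sends to the product stream of a
# hypothetical separation step; the rest goes to the recycle/off-gas stream.
split_to_product = {"ethylene": 0.02, "chlorine": 0.01, "EDC": 0.99, "inerts": 0.05}

product = {c: f * split_to_product[c] for c, f in feed.items()}
offgas  = {c: f * (1.0 - split_to_product[c]) for c, f in feed.items()}

for name, stream in (("product", product), ("off-gas/recycle", offgas)):
    total = sum(stream.values())
    print(f"{name}: {total:8.1f} kg/h  " +
          ", ".join(f"{c} {m:.1f}" for c, m in stream.items()))

# The mass balance closes by construction:
assert abs(sum(feed.values()) - sum(product.values()) - sum(offgas.values())) < 1e-9
```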

3.1 Flowsheet setup

Sometimes, the reality is different from the truth … (Comparison between plant and simulation data)

Figure 3.1 shows the symbols for some of the most important blocks used in the process simulator ASPEN Plus. Their main functions can be briefly explained as follows. 1. Heater block: In a heater block, a process stream is heated up or cooled down. It is not regarded how this is achieved. Usually, it is done with utility (e. g. steam, cooling water, brine), but it can also happen that the heat is exchanged between two process streams. When using the heater block, one should be aware that the simulator does not check whether the driving temperature differences are sufficient to transfer the heat. It should also be clear that the use of the heater block is not connected to a special kind of equipment. 2. Heat exchanger block: In contrast to the heater block, the heat exchanger calculates the heat transfer between two media, and it is checked whether this heat transfer can take place or not, if a minimum driving temperature difference is not available. Clearly, this is the better way to model a heat transfer, but only the target conditions of one stream can be specified. The second stream is a result of the specification, and this can lead to an increased effort for convergence.


Figure 3.1: Some blocks in ASPEN Plus. Screen images of Aspen Plus® are reprinted with permission by Aspen Technology, Inc. AspenTech® , aspenONE® , Aspen Plus® , and the Aspen leaf logo are trademarks of Aspen Technology, Inc. All rights reserved.

3. Valve: Performs an adiabatic throttling with hin = hout.
4. Pipe: Evaluates the pressure drop in a specified pipe, considering friction losses, geodetic height influence and pressure changes due to positive or negative acceleration. Also, the two-phase pressure drop (Figure 12.7) can be considered, and the pipe can be divided into segments to update the vapor fraction continuously. Often, the most significant influence is the one of the geodetic height.

The friction pressure drop can often be neglected, as the pipe diameter will later be dimensioned in a reasonable way during the basic engineering anyway. The height influence is independent of the pipe diameter. If one wants to consider the height without already fixing a friction pressure drop, the pipe diameter can be arbitrarily set to d = 2 m. Such a large diameter ensures that the velocities are so low that the friction pressure drop will be negligible, but it will develop the full height influence.1
5. Pump: Elevates the pressure of liquids. In process simulation, the pump symbol and its function are more or less placeholders for the pumps in the process. The power consumption is relevant only for big pumps, e. g. cooling water pumps. The author's opinion is that the main function of the pumps in a simulation flowsheet is to remind the engineer that a device for the pressure elevation is necessary. Formally, the pressure elevation and the efficiency of the pump can be specified. However, during process simulation the layout is not available, so the exact tasks of the pumps cannot be defined, and their efficiency is strongly dependent on the type and on the conditions chosen at which the pumps must have their optimum efficiency (Chapter 8.1). This is not really a problem for process simulation, as the change of temperature and enthalpy due to the pressure elevation of a liquid is usually small. In a later stage of the project, usually only the big pumps with large volume flows or pressure elevations are updated.
6. Compressor: Elevates the pressure of gases. In contrast to pumps, the power required for compressors is significant, and the temperature elevation is important (Chapter 8.2).
7. Vessel: A vessel just for the intermediate storage of liquids is often omitted in process simulation, as it does not change the thermodynamic state. If the vessel has a process function (e. g. separation of vapor and liquid, heating the liquid by jacket heating, mixing), it is considered.
8. Splitter: Divides a stream and leads the particular parts to their destinations.
9. Mixer: Unites various streams and evaluates the state of the resulting stream. The outlet pressure after the mixing must be defined. Normally, it is the lowest pressure of the participating streams.
10. Separator: Divides the inlet stream and leads the particular parts to their destinations. In contrast to splitters, it can be defined for each component which fractions are led to the particular streams, no matter whether this is physically possible or not.

1 However, this will cause trouble when it is transferred to dynamic simulation, where the holdup is important.


11. Multiplier: Multiplies the mass-flow of an inlet stream with a factor. Of course, this is a block with a function that will not be invented very soon, but it is useful for the transfer from batch to continuous operating mode or for dividing the process into lanes and their reunification. Their are a number of other blocks (reactors, distillation columns, decanters, absorption columns, extraction columns), which are discussed in detail in the corresponding chapters. There is often a misunderstanding about the meaning of a process simulation. One must be aware that all blocks described above refer to an equilibrium state, where both inlet and outlet streams are constant. However, equilibria are only reached if the residence time is infinite, and this is of course never the case. The design of the equipment should be performed in a way that the blocks approach this equilibrium in a reasonable way. For example, for a decanter it is necessary that the residence time is long enough for the two liquid phases to form and separate. If this time is not provided, the decanter will perform worse than calculated, which often leads to the conclusion that the equilibrium is not described correctly or process simulation does not make sense at all. The same holds for many other blocks. Sometimes, an efficiency can be introduced, as it is the case in columns, or kinetic approaches can be applied, as in reactors. The knowledge about the process steps and, of course, the choice of an appropriate thermodynamic model are necessary presumptions for a useful application of process simulation. Otherwise, the GIGO principle holds, i. e.: Garbage in, garbage out! (The GIGO principle)

Figure 3.2 shows the typical scheme of a chemical plant. One or more feed streams containing the raw materials are passing a preparation step, e. g. a purification, compression or a heating step. Then, the main reaction step takes place, giving the main products and a number of by- or co-products.2 In a separation section, the valuable products are isolated and purified to meet the specification. Nonvaluable by-products are separated and sent to disposal. In most cases the raw materials will not be converted to a full extent. Due to economic reasons, it is of course desirable to collect and recycle them back so that they are not lost. When it is tried to calculate the process steps in Figure 3.2 sequentially, two major difficulties come to the fore: 1. The recycle stream cannot be known in advance. It is itself a result of the process calculation. An iterative procedure is necessary, where the recycle stream is first 2 A co-product is a product which necessarily occurs when the main reaction takes place. A by-product is a product which occurs due to undesired side reactions. In some cases, co- or even by-products are valuable and can be sold, but usually, they are subject to disposal.


Figure 3.2: Typical scheme of a chemical plant [11]. © Wiley-VCH Verlag GmbH & Co. KGaA. Reproduced with permission.

estimated. Then, the process calculation is carried out, giving the recycle stream as a result. If the estimated and the calculated recycle stream are identical within a certain tolerance, the result can be accepted. Otherwise, it must be estimated again and the procedure must be repeated. In this case, the recycle stream is called a tear stream. This kind of task is a special ability of the above-mentioned process simulators. How they work is explained below.3
2. The other remarkable point is that the recycle stream might contain components other than the ones listed in Figure 3.2. Each component occurring in the process must have an outlet; otherwise it will accumulate in the process. If a component behaves in a way that it is neither concentrated in the product or by-product stream nor removed in a side reaction in the reactor, the only way to get rid of it is a purge stream, where a defined (and hopefully small) amount of substance is split from the recycle stream (or another appropriate one) and led out of the process. In this way, the concentration of the accumulating components will rise up to a certain level, where the removal of the components on one side and the formation and feed on the other side are in equilibrium.
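The second point can be quantified with a one-line steady-state balance: whatever amount of an otherwise inert component enters with the feed must leave through the purge, so the circulating amount settles at the feed rate divided by the purge fraction. The numbers in the sketch below are invented.

```python
# Steady-state inert balance around a recycle loop with a purge.
# At steady state: inert_in_feed = purge_fraction * inert_in_recycle
# (assuming the inert leaves neither with the products nor by reaction).
def steady_state_recycle_inert(inert_feed_kg_h, purge_fraction):
    return inert_feed_kg_h / purge_fraction

inert_feed = 1.0  # kg/h of e.g. an organic impurity entering with the feed (invented)
for purge in (0.10, 0.05, 0.01, 0.001):
    level = steady_state_recycle_inert(inert_feed, purge)
    print(f"purge fraction {purge:6.3f} -> circulating inert at steady state: {level:8.0f} kg/h")
# Without any purge (purge_fraction -> 0) the steady-state level goes to infinity,
# i.e. the component accumulates and neither the flowsheet nor the plant can reach
# a steady state.
```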

There are several strategies to achieve convergence of a flowsheet, i. e. to get a result where the estimated compositions and states of all tear streams agree with the calculated ones within a certain tolerance. For this purpose, the simulation programs use specialized methods. The simplest one is the direct substitution method, where the new estimate for the tear stream X is the calculated stream G(X) from the previous flowsheet calculation pass:

XDirect,k+1 = G(Xk)   (3.1)

The direct substitution method is slow but sure; it gives results even in cases where other methods are unstable.

3 Nowadays, this "sequential flowsheeting" is going to become outdated. Most process simulator programs at least offer the option to solve the flowsheet with the equation-oriented (EO) approach, where all calculation equations are put into a huge system of nonlinear equations, which are then solved in one step. The remaining problem is that the error messages are more difficult to interpret.


With few exceptions, the convergence course indicates that the error steadily decreases from step to step. If this is not the case, it makes sense to interrupt the flowsheet convergence and have a look at the tear stream history in order to see whether a component accumulates. In case the iteration switches between two solutions, it makes sense to give another starting point or to change the convergence method.
The standard method for flowsheet convergence is the Wegstein method. In principle, it is an extrapolation of the direct substitution, taking into account the last two iteration steps for the next estimate of the tear stream. It is usually faster than direct substitution, but the calculated error can also increase from one step to the next, and an improvement is achieved only every few steps. In case convergence takes many steps, it is therefore more difficult to judge whether the convergence target will be met. The calculation is as follows. First, the last two evaluation steps are characterized by the slope

s = (G(Xk+1) − G(Xk)) / (Xk+1 − Xk)   (3.2)

Extrapolation gives an estimate for the next iteration: G(Xk+2 ) = G(Xk+1 ) + s(Xk+2 − Xk+1 )

(3.3)

At convergence, Xk+2 = G(Xk+2 ) is expected; therefore, we get Xk+2 = G(Xk+1 ) + s(Xk+2 − Xk+1 )

(3.4)

Substituting q = s/(s − 1), the final Wegstein iteration formula is obtained: Xk+2 = G(Xk+1 ) ⋅ (1 − q) + Xk+1 ⋅ q

(3.5)
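A minimal sketch of the two update schemes is given below. The "flowsheet" function G is an invented linear toy function, deliberately chosen so that the component-wise Wegstein extrapolation hits the solution almost immediately, while direct substitution needs many passes; the bounds on q mimic the limits typically imposed by simulators.

```python
import numpy as np

def G(x):
    """Invented 'flowsheet' response: calculated tear stream for estimate x."""
    return np.array([0.80 * x[0] + 2.0,
                     0.50 * x[1] + 3.0])

def direct_substitution(x0, tol=1e-8, max_iter=200):
    x = np.asarray(x0, float)
    for k in range(max_iter):
        gx = G(x)
        if np.max(np.abs(gx - x)) < tol:
            return x, k
        x = gx                                    # Eq. (3.1)
    return x, max_iter

def wegstein(x0, tol=1e-8, max_iter=200, q_min=-5.0, q_max=0.0):
    x_old = np.asarray(x0, float)
    x = G(x_old)                                  # first step: direct substitution
    for k in range(max_iter):
        gx, gx_old = G(x), G(x_old)
        if np.max(np.abs(gx - x)) < tol:
            return x, k + 1
        s = (gx - gx_old) / (x - x_old)           # Eq. (3.2), component-wise
        q = np.clip(s / (s - 1.0), q_min, q_max)  # bounded acceleration factor
        x_old, x = x, (1.0 - q) * gx + q * x      # Eq. (3.5)
    return x, max_iter

print(direct_substitution([0.0, 0.0]))  # converges, but needs roughly 85-90 passes
print(wegstein([0.0, 0.0]))             # reaches the solution after the first accelerated step
```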

The Newton method is the fastest approach for achieving convergence in a flowsheet. It is the transfer of the well-known solving method for nonlinear equations, where the derivative is used to get the next iteration step:

xk+1 = xk − f(xk) / (df(x)/dx)   (3.6)

For multivariable functions, instead of the derivative the Jacobian matrix is used:

Xk+1 = Xk − J(Xk)⁻¹ G(Xk)   (3.7)

with

Jij(Xk) = ∂G(Xi)/∂Xj   (3.8)
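As a sketch of how such a step looks in code, the following lines perform Newton iterations on the tear-stream residual F(X) = G(X) − X — the quantity that must vanish at convergence — with a finite-difference Jacobian. The toy function G is invented; in a real simulator the Jacobian evaluation is the expensive part, which is why it is reused or only updated whenever possible.

```python
import numpy as np

def G(x):
    """Invented 'flowsheet' response, analogous to the previous sketch."""
    return np.array([0.80 * x[0] + 0.10 * x[1] + 2.0,
                     0.05 * x[0] + 0.50 * x[1] + 3.0])

def residual(x):
    return G(x) - x                     # zero when the tear stream is converged

def jacobian_fd(f, x, eps=1e-6):
    """Finite-difference Jacobian J[i, j] = dF_i/dx_j (the expensive part)."""
    n = len(x)
    J = np.empty((n, n))
    f0 = f(x)
    for j in range(n):
        xp = x.copy()
        xp[j] += eps
        J[:, j] = (f(xp) - f0) / eps
    return J

def newton(x0, tol=1e-9, max_iter=20):
    x = np.asarray(x0, float)
    for k in range(max_iter):
        F = residual(x)
        if np.max(np.abs(F)) < tol:
            return x, k
        J = jacobian_fd(residual, x)
        x = x - np.linalg.solve(J, F)   # Newton step
    return x, max_iter

print(newton([0.0, 0.0]))  # converges within one or two iterations for this nearly linear toy case
```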

For the Newton method, a good starting point is extremely important in order to achieve convergence. Far away from the solution, the Newton method is not really

a good choice. The evaluation of the Jacobian matrix is a huge effort. Therefore, as long as progress in the iteration is made, the calculation of derivatives is avoided. The number of components should not be too large. The Broyden method is a useful modification; the Jacobian matrix is only calculated at the first iteration. Therefore, it is faster but maybe less reliable than the Newton method.
Although convergence is often the most difficult part of working on a flowsheet, it is strongly recommended to work it out. It is the only way known by the author to find out whether one of the components accumulates in the process. The following case study explains that this is necessary even in cases assumed to be self-evident.

Case Study
For the separation of an ammonia/water mixture (1000 kg/h ammonia, 1000 kg/h water), a two-column system is provided (Figure 3.3). The target is to get both ammonia and water with a high purity so that the ammonia can be reused and the water can be disposed of as waste water. In column C1, ammonia is taken overhead in a way that practically all the water goes to the bottom. This means that some of the ammonia is lost and remains in the water at the bottom. In column C2, the remaining ammonia is taken overhead with a certain amount of water. The water withdrawn at the bottom is practically ammonia-free. The overhead stream of column C2 is recycled to column C1. As a result, both the ammonia and the water outlet can be purified to any necessary extent. The question is now: can one imagine that this arrangement works terribly in practice?

Figure 3.3: Ammonia–water separation with two distillation columns. Screen images of Aspen Plus® are reprinted with permission by Aspen Technology, Inc. AspenTech® , aspenONE® , Aspen Plus® , and the Aspen leaf logo are trademarks of Aspen Technology, Inc. All rights reserved.

A practical system never consists of only two components. In this case, which actually happened, an additional organic component occurred. Consider benzene to be this component, and its amount in the feed shall be 1 kg/h. The plant manager was


desperate, as it was simply impossible to run this arrangement continuously. A similar panic came up when it was first tried to simulate the process and to converge the flowsheet calculation: there was no way to get a result. Both had clear indications of what went wrong. The plant manager had to empty his plant regularly, and each time huge amounts of organic substance were found. Analogously, in the simulation the iteration history indicated that benzene was the component which accumulated, which was the reason for the convergence difficulties. What happened both in the plant and in the simulation was the following: in the first column C1, the cut was made in a way that relatively large amounts of ammonia remained in the bottom stream. As benzene is by far less volatile than ammonia, it could not get to the top of the column. This is at first favorable, as the ammonia product is not contaminated with the organics. Together with all the water and the rest of the ammonia, the organics were transported to column C2. In C2, it was made sure that no ammonia remains in the waste water by taking part of the water overhead. Benzene and water form a heterogeneous azeotrope. Regarding the separation cut, this azeotrope is a light end, and the benzene goes completely into the overhead stream. Again, this is considered to be fine at first, as the organic compound should not occur in the waste water. The strange result is that no way is left for the benzene to leave the system, and it accumulates in the cycle until the arrangement ceases to do its job. An exit for the organic component must be provided; in this case, a liquid–liquid separator was chosen (Figure 3.4, B1). Still, the benzene accumulates, but at a certain level a phase split occurs. The organic phase can be removed continuously, while again both the ammonia and the waste water have no impurities.

Figure 3.4: Ammonia–water separation with two distillation columns and a decanter. Screen images of Aspen Plus® are reprinted with permission by Aspen Technology, Inc. AspenTech® , aspenONE® , Aspen Plus® , and the Aspen leaf logo are trademarks of Aspen Technology, Inc. All rights reserved.

The conclusion from this incident is: Never forget the toilet! (Process development wisdom)

The most important thing for the feasibility of a process is the value creation of the products in comparison with the raw materials. Before a process is developed or the erection of a plant is decided, it must be clear that the added value of the products is attractive. This check is not really complicated; the only things one must know are the stoichiometry and the prices of the substances involved.

Example
EDC (1,2-dichloroethane) is produced via the reaction

C2H4 + Cl2 → C2H4Cl2

The prices4 and the molecular weights of the particular substances are listed in Table 3.1.

Table 3.1: Fictitious prices and molecular weights of ethylene, chlorine, and EDC.

Ethylene:  500 €/t   28.053 g/mol
Chlorine:   50 €/t   70.905 g/mol
EDC:       200 €/t   98.959 g/mol

Can this process be feasible?

Solution
Let us consider that 1 kmol ethylene reacts with 1 kmol chlorine. This means that

28.053 kg C2H4 + 70.905 kg Cl2 → 98.959 kg C2H4Cl2

The corresponding values are

28.053 kg ⋅ 0.5 €/kg + 70.905 kg ⋅ 0.05 €/kg → 98.959 kg ⋅ 0.2 €/kg

This means that the value of the raw materials is 17.57 €, compared to 19.79 € for the product side. In fact, there is a value generation in this case with these assumed prices, but it is comparably small. Taking into account other operation and capital costs, the process might not be feasible.

4 in this example completely fictitious


Although the product has a lower price per kg than one of the raw materials, it is possible that there is a value generation due to the increase of the molecular weight. Only approx. 30 % by mass of the EDC molecule come from the ethylene; the rest comes from the inexpensive chlorine.
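The whole check fits into a few lines of code; the numbers are the fictitious ones of Table 3.1.

```python
# Value creation check for C2H4 + Cl2 -> C2H4Cl2 (EDC), per kmol of reaction.
# Prices are the fictitious values of Table 3.1, molar masses in kg/kmol.
M = {"ethylene": 28.053, "chlorine": 70.905, "EDC": 98.959}   # kg/kmol
price = {"ethylene": 0.50, "chlorine": 0.05, "EDC": 0.20}     # EUR/kg

# stoichiometric coefficients: negative for raw materials, positive for products
stoichiometry = {"ethylene": -1, "chlorine": -1, "EDC": 1}

raw_cost = sum(-nu * M[c] * price[c] for c, nu in stoichiometry.items() if nu < 0)
product_value = sum(nu * M[c] * price[c] for c, nu in stoichiometry.items() if nu > 0)

print(f"raw materials: {raw_cost:6.2f} EUR/kmol")       # 17.57
print(f"products:      {product_value:6.2f} EUR/kmol")  # 19.79
print(f"added value:   {product_value - raw_cost:6.2f} EUR/kmol")
```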

The cost structures of the various kinds of chemical production, i. e. – basic chemicals, – fine chemicals, – specialty chemicals, – pharmaceuticals, and – polymers. are rather different. Basic chemicals and polymers are manufactured in large specialized plants with extremely large capacities (typically: 300 000 tons per year or even more). For the economy, the differences between the market prices of products and raw materials are decisive, as in the example above. The costs for energy and for the disposal of the waste products and waste water grow steadily and have already a significant influence. The large capacities are advantageous, as the invest costs of a plant do not rise proportionally to the capacity (economy of scale). Also, appropriate logistics for the handling with the huge amounts of substances is necessary. It is favorable to work at large chemical sites, where all utilities are available and where the products can be further used directly without a major transport infrastructure. Because of the strong competition, the margins are low. The profit in basic chemicals production is based on the safe sales of large amounts which are ensured by long-term contracts. The same holds for polymers. In contrast, the manufacturing capacity of pharmaceutical products is by far lower, often only several hundred kg per year. The costs for raw materials, energy, and the investment costs for the plant are not really important. However, before a pharmaceutical product gets permission, a huge research effort is necessary before the first revenues are obtained, often more than a billion €. This means that the risk for such a development is high. Furthermore, once the product has permission, the reliability of the production has to be ensured. An intermediate situation is encountered for fine, specialty, and agro chemicals; the capacities are by far lower, but the margins are much higher than for basic chemicals. For fine chemicals, the focus is on the molecule. Usually, it has a complex synthesis with many steps. Because of the low capacities, fine chemicals are in most cases produced in batch processes, often in campaigns. There might even be no “dedicated plant”, i. e. a plant which is used only for the manufacturing of one product. Instead, the feasibility of manufacturing in existing sites is interesting. Therefore, the optimization of the process is usually not the key issue; the reliable synthesis and purification with stringent quality specifications is more important. For process engineers, there is also a wide of variety of jobs in exhaust air purification, waste water treatment, and waste product management.

Specialty chemicals are similar to fine chemicals, but the main focus is on the final application. Examples are paints, detergents, de-icing agents, or adhesives. The target is not necessarily the sale of the product but the technical solution for the customer.

3.2 PID discussion The PID is a structured, simplified view on a plant section and in most cases, it is the decisive document for the design of a plant, showing the piping including the flow directions, the particular pieces of equipment and their interconnections, the instrumentation, the control devices, the interlocks, valves, and flanges. To further specify the items shown, not only simplified symbols of the apparatuses and machines are shown but also additional information on material, power, manufacturer and type, etc. This information is located in the lower part of the PID. The various symbols used are usually defined in a preamble and should not be listed here. Instead, it is tried to illustrate the philosophy behind and point out connections to other chapters in this book. Information on revision status, date, originator and company specific data can be found in the lower right corner. One of the main intentions why engineering requires PIDs is to have a simple scheme showing the material flow and the connections of different instruments. The material streams and their direction of flow is indicated by solid lines with arrows. Often, the main product route in the process is indicated by a bold line. Various depictions exist to declare the type of insulation or pipe specifications, for example, slope or diameter change (Figure 3.5).

Figure 3.5: Pipe with insulation, diameter change from size DN 80 to DN 50, pipe with a slope of 2 % and hose.

The pipes have to be chosen according to mechanical requirements to withstand a defined temperature and pressure range, specified by process conditions and the evaluation of potential maximum values. Next to this, the pipe material is chosen to resist corrosive fluids, for example. Leak tightness is one of the most important things to consider throughout pipeline construction. Therefore, gaskets must be selected in accordance to the temperature and pressure level and fluid properties, respectively. All this information is gathered in a sequence called fluid code. It is an explicit denomination for a pipe, usually including the following information: diameter, pressure stage, pipe material, medium flowing through the pipe, type of gasket system and type of insulation. A continuous number can be added to distinguish the different pipe sections. To


prevent from unclear, long names, the information is simplified by using encrypting abbreviations. Generally, the medium is not mentioned with its full name but represented by a numerical order or letter. The same holds for the material characteristics and pipe outfit. Each company may define their own catalogue. Close to numbers 10, 20 and 28 in Figure 3.6 the denomination of the lines can be seen. It starts with the nominal diameter of the line, followed by the fluid code. Then, there is the line number for identification, the material code and the insulation type, in these cases “HC40” (heat conservation at 40 °C) or, respectively, “PP40”, meaning personal protection at 40 °C (see Chapter 12.2). Material codes and insulation types are usually redundant information, as the fluid code covers it already. The PID does not intend to show the actual course of the pipe – this is provided by an isometric drawing. Generally, the PID focusses on a defined process stage, including the main instruments and energy streams like steam or cooling water. These different items and pipes are usually not located in a separate space, as a first glimpse on Figure 3.6 may indicate. As the instruments connected by pipes can have different geometric levels, small indices with the indication on the height may be present. This is a useful information to understand the arrangement and interconnections. Next to the described elements, the PID includes valuable information on the control system – both manual and automated systems. Armatures are essential elements of safe operation. The possibility to block or open some path is part of either measuring (e. g. dosage, batch processes), a redundant system (one pump in operation, a second is available in case of failure) or emergency operation (safety valve or venting system). Manual items can be a part of everyday operation, for example sampling or dosage. Automated systems are mainly used to maintain process conditions by responding to deviations, see Chapter 3.7, and can be equipped with alarm systems. To give an impression on the signal chain, dashed lines depict the analog signal and lines with dashes and dots represent digital signals. The purpose of this chapter is to discuss a comparably simple PID in detail extending the given information before and showing actual possibilities of depiction. The chosen PID can be seen in Figure 3.6. As pieces of equipment, there are a distillation column called 23C002 and an adjacent thermosiphon reboiler 23E006. According to the symbol, the column is a packed one (13). The packing is irrigated by the reflux stream (10), the distributor (11) is only signified. The nozzles with their denomination, approx. position and nominal size are depicted (e. g. 19), which is also done for the reboiler (38). For the bottom, a larger diameter compared to the packing section has been chosen to increase the residence time. A cone-shaped transition piece (21) is provided. For inspection, there are 24󸀠󸀠 manholes both at top and bottom of the column (12) so that at least slim people can enter it. The two-phase flow (vapor and liquid) coming from the reboiler (28) enters the column via the half-open pipe (25), which is supposed to achieve disengagement

Figure 3.6: Example PID of a distillation column. CSO – Car Sealed Open, NC – Normally Closed.


of vapor and liquid.5 In the bottom area, various liquid levels (36) are indicated. NL means normal level, which is to be maintained by level control. Major deviations to high or low level (AH and AL, respectively) cause an alarm signal for the operators. Even larger deviations (SAHH and SALL) cause an interlock (SAHH means “switch alarm high high”). The level in the bottom of the column is supervised by three independent features. There is a level measurement connected to a transmitter (35). The signal is forwarded to an LIC, which controls the level, maybe by manipulating a feed or an effluent stream. The LIC also provides the alarm for level high and level low. There are obviously two level gauges (30,35), where the level can be watched directly at the plant. A third, independent system (23,40), probably realized using a different measurement principle, activates the interlocks (17), which might switch off the feeds or, respectively, the effluent flows at high-high or low-low level. The liquid can leave the column at the bottom (41). A vortex breaker is used to avoid waterspouts to be formed. A drain valve is located close to the lowest point of the column for deinventory, maintenance and inspection activities. The reboiler is insulated (34) with the purpose of heat conservation. Shell side and tube side can be drained (31) and vented (22), respectively. Furthermore, the condensate line can be drained. The PID instructs to provide a low point in the condensate line to ensure its complete emptying (42). Also, slopes have to be established in the steam line (27) to ensure that condensate being formed in the line by accident has a well-defined flow direction. The flow of the steam is controlled by a flow control device FIC-230903, which is itself directed by the temperature controller TIC-230908. The connecting line between the TIC and the FIC with empty dots (16) represents a software connection, whereas the dashed line (16) between the FC and the signal transducer FXV is an electrical signal. The valve itself is operated with instrument air (IA), requiring a pneumatic signal (26). The control valve arrangement (32) with the control valve itself, the bypass with a ball valve (see Chapter 12.3.1), which can be operated manually, the drains, the taper and the expansion upstream and downstream the control valve have been discussed before (see Figure 1.2). The “FC” at the control valve means “failure closed”, that is, the valve will go to closed position in case energy, instrument air or electricity is not available. TSO is the abbreviation of “tight shut-off”, which indicates a special tightness class to separate process systems and allow only a very small leakage rate when the valve is closed. Another important control loop is the one for the reflux (7), which is flow-controlled. The complete control loop is not visible on this sheet. The valve (33) is a shut-off valve. Undesirable states in the plant, e. g. high pressure in the column (18, I-2334), can activate corresponding interlocks in the DCS, which in turn cause the shut-off valve to close and stop the steam flow to the reboiler. In the PID, often the complicated signal flow is depicted, as in this case. 5 No comment on its effectiveness.

96 | 3 Working on a process In the column, there are a number of pressure and temperature measurements. The most important one is the temperature control cycle (14). With a temperature control, the composition of the column product can be regulated (see Chapter 5.6). As explained above, the TIC finally manipulates the steam flow, which in turn strongly influences the temperature profile in the column. According to the vapor–liquid equilibrium, the boiling temperature also depends on the pressure; therefore, a pressure transmitter gives a software signal to the TIC to compensate for pressure changes. The column pressure is usually controlled in the condenser, which is not illustrated on this PID. As for the bottom level, the interlocks for excessive pressure and temperature are activated by different transmitters (15, 18). The temperature interlock can be reset by a handswitch (HS,15). The “TW” refers to a thermowell, which protects the thermocouple from getting into contact with the medium. There are also pressure (3, 4) and temperature (8, 24, 37) measurements which only indicate the values for operator information. At position (9), the pressure drop across the packing section is measured (PDI-230903). This information is interesting for detecting fouling layers on the packing; when the pressure drop increases with time, it is a strong indication that the packing is subject to fouling. If the pressure drop decreases, it might be an indication for corrosion or even loss of packing. The lines to the pressure sensors are insulated and sloped to avoid condensate formation, which causes significant measurement errors. Vent nozzles play a decisive role when the commissioning of a plant takes place. After assembly, the equipment contains air at ambient pressure, and it can often not be avoided that additional inert gases are introduced with the process streams. In operation, the gas will be displaced by the process streams. It will accumulate at the high points of the pieces of equipment. Therefore, a vent valve must be provided at any high point so that it is possible to get rid of the gas in a defined way (column: 5, reboiler: 22). Especially for condensers these inert vents are extremely important, as the inerts restrain the heat transfer at condensation (see Chapter 4.4). Of course, the vented gases are not released to environment but collected if they are hazardous. The PID also provides information about the arrangement of the equipment. For some reference positions (39), the heights are given. The number refers to millimeters above zero level. Strangely, in this case the zero level is referred to as +100000 mm, so that negative number do not occur even for plants that are 100 m belowground.6 The column is protected against overpressure by a safety valve (1, see Chapter 14.2). In case the design pressure (0.62 MPag) of the column is exceeded, the safety valve opens and vapor from top of the column is vented, probably to a flare. The safety valve has a bypass which is NC (normally closed). Bypasses around safety valves fre6 The sense of this convention is hardly understandable. Negative numbers would not cause any difficulties, and subterraneous plants are rare, and in case they actually occur, it is hardly ensured that the 100 m are sufficient.


quently occur to enable maintenance of the safety valve. For this purpose, the safety valve can be isolated from the line with two valves upstream and downstream. They are CSO (car sealed open, see Glossary). During the maintenance, the NC valve in the bypass is operated manually; if there is any indication that a unintended pressurebuildup might happen, it is opened. The PID refers to a preliminary state, which becomes obvious by having a look at the nominal diameter of the lines to and from the safety valve. They are given as 99󸀠󸀠 , which indicates that the line has not been sized so far. Finally, three minor points shall be regarded. – The beginning of a line with different properties is indicated by a pin (2) so that it is clearly defined what a line naming refers to. – Item (29) shows that the diameter of the line changes. The triangle symbol indicates whether it becomes larger or smaller in flow direction; however, the information 8󸀠󸀠 /6󸀠󸀠 does not, the larger diameter is always given first. – The pressure indicator (3) is supposed to give the pressure at the top of the column without additional pressure drop. Therefore, the distance between the pressure sensor and the column should be as small as possible (6).

3.3 Heat integration options As mentioned, the feasibility of a process is mainly determined by the value creation due to the price difference between raw materials and the product. However, the value creation itself can hardly be influenced by the engineering, it is more a question of “yes” or “no”. Less important, but susceptible to good process engineering are the utility costs, especially for steam and electricity. Therefore, heat integration measures are a field where process engineers can develop their strength, i. e. suggesting reasonable ways to save energy without significantly increasing the complexity of the plant. Before starting with heat integration, one should be aware that the utility costs extremely depend on the location. Currently (2016), energy of each kind is extremely inexpensive in the United States, Saudi Arabia, or Qatar, whereas in Europe or in Asia energy costs are significant. The steam price varies from 4 $/t to 30 $/t; it is clear that the effectiveness of steam saving measures must be considered when deciding where the plant will be built. First, some standard options for heat integration are presented which occur frequently, based on the case that an aqueous solution has to be concentrated. For a better understanding, it should be pointed out that for dilute solutions large amounts of water must be removed to increase the concentration. For example, to increase the concentration slightly from 1 % to 2 %, about half of the water has to be removed. To concentrate the solution from 10 % to 20 %, only 5 % of the water at the beginning has to be evaporated.
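The 50 %/5 % statement follows from a simple solute balance, as the short sketch below shows (basis: 100 kg of a 1 % solution, i. e. 99 kg of water at the very beginning).

```python
def water_removed(w1, w2, solute_kg=1.0):
    """Water to evaporate to concentrate a solution containing solute_kg of
    solute from mass fraction w1 to w2 (the solute mass stays constant)."""
    return solute_kg * (1.0 / w1 - 1.0 / w2)

solute = 1.0                               # kg solute in the original 1 % solution
initial_water = solute * (1.0 / 0.01 - 1)  # 99 kg water at the beginning
print(water_removed(0.01, 0.02) / initial_water)  # ~0.51: about half of the initial water
print(water_removed(0.10, 0.20) / initial_water)  # ~0.05: only 5 % of the initial water
```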




Reverse osmosis7 : For very dilute solutions, it is possible to use reverse osmosis as a first step to get rid of a large part of the water without thermal energy. Details and an example can be found in Chapter 7.1. There are membranes available which let only water8 pass, whereas all other components are retained. In this way, water can be removed until the osmotic pressure is reached. As long as the durability of the membrane is sufficient, reverse osmosis should be considered. In the low concentration region, large amounts of water can be removed according to the effect described above while the energy consumption is just given by the pump energy for the pressure elevation (usually approximately 60 bar). Multieffect evaporation: Figure 3.7 shows one of the possible arrangements for multieffect evaporation with four effects. The evaporators I–IV are arranged in series, where the pressure is decreasing from effect to effect. The product to be concentrated is fed to the system in the first effect, which is heated with fresh steam. The vapor generated is the heating agent for the second effect, where the product outlet of the first effect is further concentrated. The process proceeds in this way: the vapor generated is the heating agent for the next effect; feed and vapors are in cocurrent flow (forward feed arrangement). The advantage is that no pumps are used, the flow of the product stream is achieved by the pressure difference between the effects. It is the most useful arrangement if the feed is already hot (i. e. not significantly subcooled) or if the concentrated product must not be exposed to high temperatures [67]. Also, a backward feed arrangement is possible, where countercurrent flow is realized. It is the appropriate arrangement if the flow is cold, i. e. strongly subcooled, as the fresh cold feed is evaporated at the lowest temperature and does not need to be heated up first. It is also the best option if the concentrated product becomes highly viscous. In this arrangement, the concentrated product with the highest viscosity is processed at the highest temperature so that the heat transfer remains acceptable. Of course, backward feed arrangement needs pumps for the transfer of the product solution. Of course, in multieffect evaporation one must also take care that for each effect a sufficient driving temperature difference is available. It is quite easy to estimate how much fresh steam is saved with multieffect evaporation. With two effects, the first effect takes the steam and generates about the same amount of vapor for heating the second effect. Therefore, about 21 or 50 % of the fresh steam with one effect is necessary. The same considerations hold for other numbers of effects: 33 % for 3 effects, 25 % for 4, 20 % for 5, 17 % for 6, and

7 Strictly speaking, reverse osmosis is not a heat integration but a heat saving measure. However, it can support the other methods to a great extent.
8 and some other molecules similar to water in size.


Figure 3.7: Forward feed arrangement for multieffect evaporation.



   These values indicate that more than 3 or 4 effects do not save much additional energy, not to mention that it becomes difficult to provide sufficient driving temperature differences and that all effects have almost the same price, independent of their effectiveness. Only for very large plants do more than 4 effects really make sense.
–  Mechanical vapor recompression (MVR): For dilute solutions, the boiling point elevation is usually not significant. Thus, a moderate compression of the generated vapor can elevate its dew point by approx. 8–10 K. The corresponding arrangement is explained in Chapter 8.2 (Figure 8.13), where an example is calculated as well. The energy added to the vapor by the compressor is used to elevate the dew point temperature of the vapor. Even the compressor losses (Chapter 8.2) are not wasted but remain in the system, causing superheating of the vapor. The necessary pressure increase can often be achieved with blowers (Figure 8.14), the simplest form of a compressor, which even has relatively low investment costs.
   Mechanical vapor recompression gives the highest possible energy savings. There are cases where up to 100 t/h of fresh steam consumption can be replaced by a few MW of electrical power, which are by far less expensive [67]. Usually, fresh steam is only necessary for the startup. The drawback is that often more heat transfer area is necessary. The 8–10 K mentioned above are sufficient for heat transfer to take place but not extraordinarily high. With fresh steam, the driving temperature difference can be more or less chosen; of course, it is limited to certain standard ranges (Chapter 4.5).




   However, in most cases it is several times larger so that the heat transfer areas can be smaller.
   Mechanical vapor recompression has disadvantages in vacuum applications [67]. The main ones are:
   – in a vacuum, the vapor volumes are larger, so that larger compressors and pipes are required;
   – air leaks can badly affect the efficiency of vapor recompression.
   These drawbacks, however, are not generally prohibitive for vapor recompression in vacuum applications. There seems to be a certain trend towards mechanical vapor recompression, as it is probably the most effective steam saving measure. Concerning the blowers with their limited pressure ratio, a multistage compressor arrangement can be used. Blowers require hardly any maintenance. At first glance, it looks as if someone wants to pull himself up by his own bootstraps, but correctly designed mechanical vapor recompression definitely works and has a lot of successful references.
–  Thermal vapor recompression (TVR): In thermal vapor recompression [67], a jet pump (Chapter 8.3) is used instead of a compressor to increase the pressure of the generated vapor for reuse. Figure 3.8 shows a typical arrangement; the technical principle of a jet pump is explained in Chapter 8.3. There are several differences to conventional compression. First, the compression is not driven by electrical energy but by fresh steam (“motive steam”). Therefore, TVR should only be considered if high-pressure steam is available and low-pressure steam is sufficient for the process.

Figure 3.8: Thermal vapor recompression arrangement.


   Although it depends of course on the special case, the amount of motive steam is usually of the same order of magnitude as the suction steam. Therefore, a rule of thumb says that TVR can replace one effect. The motive steam does not produce condensate which can be reused but waste water, as it is contaminated by the vapor coming from the product. The unbeatable advantages of TVRs are the low investment costs, their reliability, and the low space demand. However, there are difficulties in operation: TVRs are designed for a single operation point, and changes in operation might result in serious performance breakdowns. In these cases, their behavior is difficult to predict.

For heat integration, besides the effective reuse of the vapor latent heat, the optimization of the heat exchanger network is an important issue. It can be performed in a systematic way, where the network is analyzed as a whole. The Pinch method [68] determines the most efficient heat transfer network just from the point of view that the use of external utility is minimized.

Figure 3.9: TH-diagram foundations.

The main tool of the Pinch method is the representation of the heat streams in a TH diagram (Figure 3.9). In a process, hot streams require cooling, where their enthalpy is reduced (arrows from right to left). Cold streams require heating (arrows from left to right), where the enthalpy is increased. There are two sorts of streams. The first ones change their temperature during heating and cooling according to

Q̇ = ΔḢ = ṁ cp ΔT   (3.9)

giving inclined arrows in the TH diagram.9 The second sort of streams are those which are evaporated or condensed (latent heat), where the temperature remains constant, according to

Q̇ = ΔḢ = ṁ Δhv   (3.10)

9 In the following section explaining the Pinch method, it is assumed that the heat capacities of the streams remain constant so that straight arrows are produced. In reality, this is not the case, giving slight curvatures.


Figure 3.10: Construction of a Composite Curve.

In this case, the arrows are horizontal.10 The heating and cooling demand of the process can be visualized by drawing the Composite Curve (Figure 3.10). For each temperature, the heat capacities of the streams involved are added up; at each point, the slope of the curve is given by 1/(ṁ cp). Finally, the latent heats are added for each temperature. In this way, the Composite Curves for the hot streams and the cold streams are built.
The curves can be shifted parallel to the H-axis so that the cold Composite Curve is below the hot Composite Curve. The so-called pinch must be defined, i. e. the minimum driving temperature difference ΔTmin (Figure 3.11) where heat transfer is still considered to make sense, e. g. 10 K. The cold Composite Curve is then shifted until the minimum temperature difference between the cold and the hot Composite Curve is equal to ΔTmin. In the region where the hot Composite Curve is above the cold Composite Curve, it is possible to cool the hot streams with the cold streams and vice versa. At the ends, however, one of the curves usually extends beyond the other. In these regions, the process cannot cover its own demand for heating and cooling, so hot and cold utilities (steam, cooling water, etc.) are necessary. The enthalpy differences represented by these overhanging sections indicate the minimum hot and cold utility demand. If the curves were shifted further towards each other to reduce the utility demand even more, ΔTmin would be underrun (pinch point, Figure 3.11). A small computational sketch of this targeting step is given after the list below.
Hot and cold Composite Curves can then be separated into a region below and a region above the pinch (Figure 3.12). If heat integration is to be applied in the optimum way, heat must not be transferred across the pinch, or, in more detail:
– Don't use steam below the pinch!
– Don't exchange heat between streams on different sides of the pinch!
– Don't use cooling water above the pinch!

10 This is an approximation for illustration as well. In fact, for mixtures changing the phase the temperature will change.


Figure 3.11: Hot and cold Composite Curves.

Figure 3.12: Regions below and above the pinch.

– The temperature levels of a thermal engine shouldn't be on different sides of the pinch.
– A heat pump should be operated across the pinch.
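The targeting illustrated by the Composite Curves can also be done numerically with the classic problem table (heat cascade) algorithm. The following minimal Python sketch is not taken from the text; the four streams, their heat capacity flows and ΔTmin = 10 K are invented purely for illustration. It returns the minimum hot and cold utility demand and the shifted pinch temperature.

# Minimal problem-table sketch for pinch targeting (illustrative data only).
hot_streams  = [(250.0,  40.0, 0.15), (200.0,  80.0, 0.25)]   # (T_supply, T_target, m*cp in MW/K)
cold_streams = [( 20.0, 180.0, 0.20), (140.0, 230.0, 0.30)]
dT_min = 10.0

half = dT_min / 2.0
shifted = ([(ts - half, tt - half, cp) for ts, tt, cp in hot_streams] +
           [(ts + half, tt + half, cp) for ts, tt, cp in cold_streams])

# shifted temperature boundaries, hottest first
bounds = sorted({t for ts, tt, _ in shifted for t in (ts, tt)}, reverse=True)

def net_cp(t_hi, t_lo):
    """Sum of hot minus cold heat capacity flows active between t_lo and t_hi."""
    total = 0.0
    for ts, tt, cp in shifted:
        if min(ts, tt) <= t_lo and max(ts, tt) >= t_hi:   # stream spans the interval
            total += cp if ts > tt else -cp               # hot streams have ts > tt
    return total

# heat cascade: accumulate the surplus/deficit of each temperature interval
cascade, acc = [0.0], 0.0
for t_hi, t_lo in zip(bounds, bounds[1:]):
    acc += net_cp(t_hi, t_lo) * (t_hi - t_lo)
    cascade.append(acc)

q_hot_min = max(0.0, -min(cascade))      # minimum hot utility [MW]
q_cold_min = cascade[-1] + q_hot_min     # minimum cold utility [MW]
t_pinch_shifted = bounds[cascade.index(min(cascade))]
print(q_hot_min, q_cold_min, t_pinch_shifted)

For the data chosen here, the sketch yields a minimum hot utility of 7.5 MW, a minimum cold utility of 10 MW and a pinch at a shifted temperature of 145 °C, i. e. 150 °C on the hot and 140 °C on the cold side.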

Furthermore, the Grand Composite Curve can be introduced (Figure 3.13). After shifting the Composite Curves to ΔTmin = 0, the enthalpy difference between the hot and the cold Composite Curve is plotted for each temperature. With this construction, the necessary temperature levels of the various utilities can be determined. The “pockets” play the key role in this context. Pockets are formed where the process can deliver part of the required heating or cooling on its own. As a result, part of the utility can be supplied at a more convenient temperature level, i. e. lower for heating agents and higher for cooling utility (Figure 3.14).11


Figure 3.13: Construction of the Grand Composite Curve.

Figure 3.14: Meaning of the pockets.

Having set these constraints, one can set up an appropriate heat exchanger network. Distillation columns [69] and other equipment can be described as well. One should be aware that the pinch method just makes suggestions. Often, heat integration measures increase the complexity of the flowsheet, or material reasons prevent the results from being implemented. Also, the spatial distance between hot and cold streams can be so large that heat integration is awkward. Nevertheless, pinch analysis has proved to be a useful tool to get an overview of the heat integration options. In the author's view it is even superior to exergy analysis, although the latter is more thoroughly elaborated in terms of thermodynamics.

11 Of course, the shifting of the Composite Curves must be reversed when the utilities are finally chosen.


The pinch analysis restricts itself to heat integration, which can usually be implemented without changing a successfully working process. The exergy analysis covers other issues as well, e. g. dilution or neutralization as contributions to the chemical exergy. Exergy analysis does not answer the question whether avoiding the losses is in line with the targets of the process or not. The opportunities for improvement often require major changes of the process, which might be linked with additional test phases. Also, the mechanical exergy often yields only minor contributions, and these can easily be detected by simply looking at the pressure levels in the process.

3.4 Batch processes

Most processes in fine chemicals, specialty chemicals, and pharmaceuticals are operated not continuously but as batch processes, meaning that a specified amount is produced within a certain time. Often, batch products are not manufactured in dedicated plants but in multipurpose units, where several products can be produced in the same plant, according to demand. The engineering of such a plant comprises not only determining the equipment dimensions but also scheduling the charges of the equipment, the duration of the various process steps, and the optimization of the load of the plant.

Meanwhile, tools are available which enable a comprehensive documentation and visualization of the process. Flowsheets, equipment lists, the mass balance, the contents of the particular pieces of equipment, the emissions, and the time schedule can be generated with a batch simulation. The basis of the batch simulation is the recipe, which is in principle a standardized process description. A special language consisting of a limited number of expressions has been developed, covering all the possible steps in a batch process. It contains the whole information about the process. The recipe links the unit operations together; it is similar to a laboratory instruction. As the language consists of standard phrases, an automatic translation into other languages is easily possible. Figure 3.15 shows an example of a batch recipe.

As for a continuous process, a simple flowsheet (“equipment diagram”) can be derived from the recipe (Figure 3.16). It visualizes the flow between the pieces of equipment and relates them to the recipe steps. For each piece of equipment, the content as a function of time can be visualized (Figure 3.17), giving valuable advice for its design. Moreover, it can be evaluated when emissions take place and how large they are. This is most relevant for the exhaust air concept (Chapter 13.4). In contrast to a continuous process, where it is usually well defined, the exhaust air in batch processes is hard to trace; for example, it is generated each time a vessel is filled or flushed. If the temperature in a vessel rises, vessel breathing takes place due to thermal expansion.

However, the heart of a batch simulation program is the schedule view (“Gantt Chart”). An example is shown in Figure 3.18. It is the most valuable information for


Figure 3.15: Example for a batch process recipe. Screen images of Aspen Plus® are reprinted with permission by Aspen Technology, Inc. AspenTech® , aspenONE® , Aspen Plus® , and the Aspen leaf logo are trademarks of Aspen Technology, Inc. All rights reserved.


Figure 3.16: Example of an equipment diagram. Screen images of Aspen Plus® are reprinted with permission by Aspen Technology, Inc. AspenTech® , aspenONE® , Aspen Plus® , and the Aspen leaf logo are trademarks of Aspen Technology, Inc. All rights reserved.

Figure 3.17: Equipment content visualization. Screen images of Aspen Plus® are reprinted with permission by Aspen Technology, Inc. AspenTech® , aspenONE® , Aspen Plus® , and the Aspen leaf logo are trademarks of Aspen Technology, Inc. All rights reserved.


Figure 3.18: Example for a Gantt Chart.

the staff to operate and organize the process and to evaluate whether it is feasible at all. There are tools for the optimization of batch schedules; the following example shall illustrate the large potential of such a tool in a very simple case.

Example
A batch process for a specialty chemical consists of the following operation steps:
1. producing an intermediate of product A in vessel 1 in 5 h;
2. producing an intermediate of product B in vessel 1 in 2 h;
3. finalizing product A in vessel 2 in 2 h;
4. finalizing product B in vessel 2 in 4 h.
For producing an appropriate amount for selling, three cycles for each product are necessary. Optimize the makespan, i. e. the time needed for production in the plant. The cleaning of the vessels must take place after each step and is already included in the given durations of the process steps.

Solution
Figure 3.19 shows two options. In Approach 1, vessel 1 first produces the whole amount of the intermediate of product A. After a batch of A has been finished, it is finalized in vessel 2. Then, vessel 1 is used to produce the intermediate of product B. When the first batch of B is ready, the last batch of product A has just been finalized in vessel 2, so one can directly continue with product B. The makespan for Approach 1 is 29 h. This approach has certain advantages, as both vessels can be operated partly in parallel. Moreover, in both vessels only one product change takes place, so that the cleaning effort is comparably small. However, as mentioned above, the cleaning must be performed anyway and is already considered in the time schedule. In this case, Approach 2 has considerable advantages, as the products A and B are produced alternately. The overlapping times, where both vessels are used in parallel, are much larger. Therefore, the makespan is reduced to 25 h, i. e. by almost 14 %. This example has been solved more or less by manually rearranging the time frames, as the process is simple enough. For complex cases, a capable optimization software is required.
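The two makespans can be reproduced with a few lines of code. The sketch below is not part of the original example; it simply treats the two vessels as a two-stage flow shop in which vessel 2 can start a batch only after vessel 1 has finished it and after its own previous batch.

# Minimal sketch: makespan of a given batch sequence on two vessels in series.
def makespan(sequence):
    end_v1 = end_v2 = 0.0
    for t1, t2 in sequence:
        end_v1 += t1                       # vessel 1 works back to back
        end_v2 = max(end_v1, end_v2) + t2  # vessel 2 waits for the batch and for itself
    return end_v2

A = (5, 2)   # intermediate of A: 5 h in vessel 1, finalization: 2 h in vessel 2
B = (2, 4)   # intermediate of B: 2 h in vessel 1, finalization: 4 h in vessel 2

approach_1 = [A, A, A, B, B, B]   # block-wise production
approach_2 = [A, B, A, B, A, B]   # alternating production

print(makespan(approach_1))   # 29 h
print(makespan(approach_2))   # 25 h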


Figure 3.19: Example for makespan optimization.

3.5 Equipment design

Besides process simulation, equipment design is the second large task for engineering calculations. There is a clear trend in process simulator programs to integrate process simulation and equipment design. For instance, a heat exchanger could be represented in different ways with increasing complexity:
– a simple heater block, which just regards the energy balance of one of the two streams, yielding the necessary duty;
– a heat exchanger block, which takes the energy balance of the second side of the heat exchanger into account as well. Especially if both streams are process streams, it often happens that an additional tear stream is introduced, since the conditions of the second stream are a result of the first one. An advantage is that the temperatures of the two streams are compared in each segment, so that it is ensured that the cold stream has a lower temperature than the hot stream everywhere;
– a heat exchanger block where the heat transfer coefficient is taken into account as well, either by setting it to a fixed value or even by a calculation based on the specification of the heat exchanger.
In the author's opinion, there are advantages in clearly separating process simulation and equipment design. Sticking to the example of the heat exchanger, reducing the complexity is normally very useful for process simulation, and estimated heat transfer coefficients will only give a rough impression of the necessary equipment. It is useful that the process simulation just defines the requirements of the process, which should then be achieved by an adequate equipment design. In the design procedure itself, the engineer gets a certain feeling and experience for the sensitivities of the various adjusting screws. It is, however, necessary to adjust some parameters in the simulation after the

design has been set up, e. g. the pressure drop. Nevertheless, it can still be decided whether the value obtained in the process design or a maximum allowable value is used in the simulation.
Another item is the validity of the documents during an engineering project. If the design parameters of a heat exchanger or a column are directly involved in the process simulation, it might happen that minor changes due to new information about the process have to be considered, e. g. a fouling factor in a heat exchanger or the choice of a less expensive packing type. In this case, the outlet streams of this piece of equipment will change slightly, which in turn has an influence on all the connected downstream equipment and recycles. Their specifications are probably only slightly affected, but no longer consistent with the mass balance – a nightmare for quality management. Taking the mass balance only as a constraint for the equipment design is more effective and less sensitive to changes.
Equipment design comprises the choice of the kind of equipment, the fixing of design options (e. g. for a shell-and-tube heat exchanger: hot fluid on shell side or tube side), the determination of the dimensions including both the process and the mechanical aspects, and the limits of the operating conditions. In process engineering, there is a peculiarity concerning overdesign: a piece of equipment in process engineering can be too small or too large; both can lead to a lack of function. In other engineering disciplines, a piece of equipment which is too large is usually more expensive and therefore uneconomical, but the function is achieved. Therefore, special care must be taken to make sure that the particular pieces of equipment can be used in all required application cases, and, furthermore, the applications themselves should be questioned and discussed. Due to the limited accuracy of process engineering calculations, safety factors are certainly necessary, but unreasonable ones can be detrimental to the function of the equipment [70]. In the following chapters, the main design aspects of various kinds of equipment are outlined.

3.6 Troubleshooting

Round up the usual suspects! (Claude Rains as Captain Renault in “Casablanca”)

Process simulation is a complex exercise and therefore prone to errors and mistakes. In most cases12 only one single error is responsible for failure in process simulation. It is usually no use to detect even more failures, as they all might refer to the same reason. The most important thing is to keep calm and analyze what happened. In fact, as in the citation above, there are a number of usual suspects, and, in contrast to the movie,

12 but, to my regret, not always.


there is a high probability that they are responsible for the failure. And even if not: the systematic work on the process helps to obtain a better understanding of the calculation and the behavior of the simulation, and often it is the key for fixing the failure at the end of the day. If process calculations give strange results or do not match the process, the following things should be checked first.
– Check if systems with an LLE are calculated with the three-phase flash (VLLE): If systems with an LLE are calculated without taking the LLE into account (Chapter 2.5), strange results are obtained, and they are not always as obviously wrong as in Figure 2.17. Undoubtedly, this is the error which occurs most often. Unexpected mixing temperatures or crazy phase equilibria are clear hints that the setup of the phase equilibrium calculation might be wrong.
– Model change: Model changes lead to inconsistencies in the enthalpy description (Chapter 2.11). Any region where a model change takes place must be carefully checked, even if nothing suspicious happens.
– Binary parameters: Check whether the decisive binary pairs are reliable.
– Check whether the numbers of theoretical stages in columns are adequate. If small concentrations occur, it might be useful to apply a rate-based calculation (Section 5.9).
– Components are missing: Often the wrong components are being looked at. One must double-check with the plant manager that the component list is correct.
– Enthalpy description: The liquid heat capacity is often a quantity which causes errors (Chapter 2.8). If energy balances of liquid phases look strange, one should carefully check whether the errors in cpL are acceptable.
– Component removal: When strange physical properties or phase equilibria occur and the reason is not identified, it might be that one of the components is the culprit because of a wrong parametrization. First, a plot of the quantity to be checked should be generated, showing all pure components. If it is still not clear which one it is, the strategy of removing the components one by one can be applied. If the error vanishes after one component has been removed, the erroneous component is identified and should be further examined.
– Component accumulation: As in the case study (Section 3.1) with the ammonia–water system, it often happens that no outlet for one or more components has been provided. A simulation cannot converge if a component has no option to leave the system. Process simulation programs usually save the tear history, where it can be checked for which components difficulties occur in fulfilling the mass balance.








– Convergence of column calculation: Distillation is the most crucial unit operation block in a process simulation. For flowsheet convergence, it is not crucial if a column does not fully meet the convergence criterion once during flowsheet evaluation, as long as this is not the case in the final step. If a column calculation has no tendency to converge at all, it is strongly recommended to stop the process simulation. Otherwise, the simulation will be continued with a meaningless result, which is then distributed into the rest of the flowsheet. All the other blocks downstream get wrong input values and yield bad results on their part. Finally, bad starting and input values are spread all over the flowsheet, which makes convergence even worse.
– Wrong plant information: Data coming from the plant staff might not be in the desired form. Mass balances to be reproduced might be inconsistent; especially compositions might be biased. Even technical terms can be used in different ways; a classic error source is a wrong application of the reflux ratio (Equation (5.1)) in distillation columns.
– Column hydraulics: Surprisingly, column hydraulics are rarely the reason for major errors [259]. The amazing reason is that the engineers are aware that there are large uncertainties and therefore do the design carefully.
– Unexpected chemical reactions: Even if experienced chemists swear that a system is chemically stable, unexpected side reactions frequently occur in distillation columns. A GC analysis of a sample of the stream considered might reveal new unidentified peaks that can indicate side reactions.

One question which is often encountered concerns the accuracy of process simulations. Although this question is not very specific and is often used for discrediting process simulation as a tool responsible for higher costs, it has a well-defined answer: the process simulation is as accurate as our understanding of the process. This is true in any case, neglecting those where simply input errors occurred (wrong parameter transfer for physical properties, different operating conditions, misinformation on equipment, etc.). Such cases include:
– bad extrapolation to γ∞ for the decisive binary mixture;
– wrong physical property data where they are important, either due to estimation or lack of relevant data;
– use of an equilibrium model where a mass transfer calculation is required (e. g. HCl absorption from exhaust air);
– bad use of reaction kinetics;
– wrong estimation of tray or packing efficiency;
– bad characterization of membrane or adsorption behaviour;
– unknown side reactions;
– unknown components.


In this context, it becomes clear that process simulation is not a truth generator but requires profound process knowledge and methodological competence. Certainly, it is sometimes possible to design the equipment and run it without any calculation. Plant managers sometimes have 30 years of experience; however, their knowledge is hardly transferable to a newcomer. Process simulation can develop transferable knowledge and valuable information for design and operation.

3.7 Dynamic process simulation
Verena Haas

The use of process simulation tools is a widely applied practice in process design and analysis. Steady-state models, see Chapter 3, depict the overall process in a defined condition often referred to as the “design condition”. This is the representation of the plant operation in its working condition, meaning after start-up and without disturbances. Design operating conditions are not necessarily maintained, for instance due to changes in raw material composition or technical failure of equipment. In fact, steady-state models are a practical tool for the representation of specified process conditions. Due to the lack of time dependence, steady-state models are a simplification of the real problem, and the outputs – flow rates, the necessary energy amounts for heating or cooling, pressures and temperatures – represent a snapshot of the process. Assuming steady state, the derivative with respect to time is zero: the mass and energy input matches the output. However, every industrial production process varies over time, and a steady-state specification is not sufficient to cover the complex, transient behavior of real plants.
High-performance dynamic process simulators overcome the restriction of assuming only design conditions and allow for reliable real-time simulation of existing plants or planning of new processes. Both steady-state and dynamic models are based on first principles; the essential difference is that the time dependence of the variables is considered in the latter. Hence, dynamic simulation takes account of mass and energy accumulation. This allows for the simulation of scheduled start-up, shutdown or feedstock changes and of the impact of external disturbances. Dynamic solver engines are designed for efficient calculation and cover transient conditions deviating from equilibrium.
Due to dynamic simulation, the engineer gains a broad comprehension of the functional interaction of the modeled process units. Therefore, it enhances process understanding, allows for comprehensive quantitative analysis and supports decisions on investment projects. The digital model is extensible, and once created it is a basis for further process investigation or equipment sizing. The following list summarizes other typical advantages and applications and provides examples of when dynamic simulation is favorable [264, 265]:

Verena Haas, BASF SE, D-67056 Ludwigshafen am Rhein, Germany






– Analysis and optimization of transient behavior: design of inherently transient batch and semi-batch processes. Simulation of start-up, shutdown or load change without the necessity to set up a new simulation.
– Flexible interaction with the simulation: implementation of programmed scenarios and monitoring of the system's response to perturbations. Performance evaluations under conditions differing from the design specification, for example, changes in feedstock composition, become accessible.
– Risk reduction through performance tests: safety analysis and offline modelling of external disturbances, for example, power or equipment failure, abnormal heat input, etc.
– Debottlenecking studies: identification of critical operating conditions, for example, detection of pressure buildup in vessels or hotspots in chemical reactors.
– Sizing of pressure safety valves and flare load calculation: depiction of relief and blowdown scenarios, calculation of relief loads as a function of time and revamp of flare networks.
– Analysis and tuning of control systems, for example, PID controllers: evaluation of control system performance prior to installation.
– Operator training: enabling practice of normal and non-routine plant operation in virtual scenarios. In this context, the term “digital twin” is used to describe virtual models representing an existing asset by combining simulation tools and relevant real-world data [266].
– Modelling of equipment lifetime: increased insight into the effects of fouling in heat exchangers, time-dependent catalyst performance or the impact of plant modifications.
– Cost savings: avoiding oversizing, enabling energy and emission reduction.

In conclusion, a useful dynamic simulation is capable of accurately representing the operating behavior of the real process but also has a predictive character, so that it can be used for the design of new or the optimization of existing plants. There are different commercially available software packages, including Aspen Plus® Dynamics and Aspen HYSYS® Dynamics from Aspen Technology, gPROMS® by Process Systems Enterprise, Dassault Systèmes's Dymola® and open source applications, for example, OpenModelica [264]. These software tools contain mathematical solver systems and are based on conservation laws; they include thermodynamics, heat and mass transfer phenomena and kinetics. The simulator provides preinstalled equations and solver algorithms that can be expanded by user-defined sequences. Usually, the process simulator offers subroutines and libraries where different options are available. To make successful use of any tool, the user must be aware of the implemented mathematics, the phase equilibrium specification and the accuracy of the declared variables. The theory of these topics and mathematical solver strategies are discussed thoroughly in the literature, see, for example, [267] and [268]. User manuals or customer hotlines supplied by the software provider give further information and help concerning specific problems as well.


3.7.1 Basic considerations for dynamic models

The results of a simulation and their correctness strongly depend on the input and the model set-up. If available, the dynamic simulation should include engineering design data and specific process data. This is particularly a prerequisite for the digital representation of any existing process. Additionally, a complete and reliable thermodynamic package is favorable. Process simulator default databases for thermodynamics may lack important information, especially if the examined process is operated at temperatures or pressures outside the range of the available data sources. Apart from these key requirements, which are also valid for steady-state simulations, there are some essential considerations that need to be addressed before working on a dynamic simulation.
Steady-state simulators assume material streams to flow from one unit to another. This is valid as long as the pressure in the upstream unit is higher than the pressure downstream. Material cannot simply flow, it needs to be transported, and its flow is determined by pressure gradients, friction and flow regimes [264]. Therefore, dynamic models include the pressure–flow relationship by calculating the valve pressure drop instead of using a constant value, for instance. Consequently, dynamic models must allow for reverse flow to calculate and display reversed pressure ratios. Equipment size definition is another requirement for dynamic simulation. Contrary to a steady-state block specification, geometry has a distinct effect on dynamics. Heat and mass transfer operations are directly affected by spatial relationships, and the liquid level is only accessible by including geometry.
Basically, there are two different modes of setting up dynamic models. The first technique is based on a complete steady-state simulation equipped with further information to execute the transfer, used by Aspen Plus® Dynamics, for example. The mentioned considerations for pressure–flow relationships and equipment sizing are essential for converting an existing steady-state simulation to a dynamic one. As the downstream pressure influences the flow rates, every block-to-block connection or pipeline needs to be specified concerning the pressure–flow relationship. Therefore, the flowsheet must be prepared by including additional pressure changers, e. g. valves or pumps, or pipeline pressure drops must be specified. Further information on equipment size does not affect the steady-state result, but because of these additional pressure changer elements the steady-state run needs to be repeated. If the dynamic model is created by transferring a steady-state solution, the initial condition of the dynamic simulation equals the steady-state condition. Simulation time is set to zero in dynamic mode, and if there is no perturbation or controller action during the following dynamic run, the output results will not vary with time.
Other simulator tools do not use an existing steady-state flowsheet as a basis for the dynamic solution. A new flowsheet is built in the dynamic simulator workspace. Blocks and material streams are inserted and connected similarly to steady-state simulations, but additional data required for dynamic procedures are requested right from the start. For the design of new processes, this method is preferably used as long as


Figure 3.20: Time-dependent evolution of the variable value from initialization to stable condition. Adapted from [265].

no steady-state model is available. Initialization requires known values for the state variables or their time derivatives at the initial conditions. State variables arise in the accumulation term of instantaneous material or energy balances; temperature, for example, is a state variable necessary to solve the energy equation [269]. After initialization, the integration algorithm is performed starting from that specified initial state. This allows the determination of how long it would take to reach a stable mode of operation for the given system, see Figure 3.20. On the other hand, it is possible to skip the start-up period and only use the solver system to calculate the stable condition directly.
Regardless of which technique is used, the underlying set of equations and defined variables gives the possibility to perform different activities of interest. This underlines the flexibility of equation-based tools for process simulation. Interaction with the simulation is possible by defining “operation modes”. Some tools allow for the integration of programmed sequences. These user-defined programs represent, for example, a scheduled operation (e. g. “add 100 kg of reagent X after 5 min of stirring”) or a controller action when a certain condition is true (e. g. “close valve A when level is lower than 1 m”), or they evoke an equipment malfunction scenario (e. g. “cooling water supply for chiller Y is blocked at run time 20 min”). Additionally, plots and tables allow a descriptive representation of the process variables over time.
In Chapter 3.1 several flowsheet blocks used for steady-state simulation are introduced. The symbols used for the flowsheet set-up are identical, but additional input information for dynamic modelling is required. The following examples list some basic considerations and possible scenarios the user should know before starting the dynamic flowsheet design, but they need to be evaluated concerning their relevance for the given problem. Specific requirements for the block definition depend on the simulation tool used and the particular information needed to apply the provided method/model, respectively.
– Vessel: The geometry of storage tanks, flash drums or reflux drums must be specified in dynamic mode in order to calculate the liquid level or liquid mass, respectively. This information is necessary to calculate the condition in the vessel because the pressure depends on the liquid hold-up and the temperature, for example. If pressure








| 117

and/or temperature in a liquid-filled vessel change and the boiling point is exceeded, a gaseous phase will appear – this, of course, is not restricted to vessels, but this simple example shall sharpen the user's awareness of possible phase transitions.
– Valve: Pressure–flow relationship equations are used to calculate the pressure drop depending on the fluid flow and the valve characteristics. The latter include the valve flow coefficient Kv (see Chapter 12.3). Control valves are commonly used to regulate fluid flow as they allow the positions “open”, “closed” and positions in between. If the valve is fully closed and the measured fluid flow downstream is zero, solver algorithms may fail. This is a challenging numerical operation, and if the solver calculation lags, it is recommended to allow for a marginal fluid flow, e. g. 0.0001 kg/h. Usually, this does not affect the overall process but keeps the simulation stable. The same holds if the inflow to the valve is zero because there is no upstream fluid flow.
– Pump: Performance and efficiency curves can be inserted to represent realistic pump behavior and to define the pump working limitations. This may become interesting if the volumetric flow rate changes drastically. If, for some reason, reverse flow occurs, the pump simulation will probably cause the solver to freeze because pressure elevation with the pump is only possible in one direction. Thus, the simulation is not erroneous itself but reveals a potential pump failure.
– Heat exchanger: The exchanger area is calculated from the geometry, if specified, or predefined by input information. The heat capacity of the shell and tubes can be included and considers the energy required to warm up the material. The user should also be aware that the fluid flows of both the hot and the cold stream can vary over time, and dynamic models allow for manipulation of the fluid streams, for instance, by using a controller. Next to this, the fluid flow regime and the medium heat capacity are not necessarily constant. This affects the overall heat transfer coefficient and the pressure drop and forces the available exchanger duty to vary. These aspects influence the driving temperature difference between the heating/cooling medium and the process side.
– Distillation column: Usually, steady-state models of columns work with a specified feed composition, defined pressure drops and a reboiler/condenser specification. Steady-state distillation simulation assumes phase equilibrium on every stage for this design condition. However, pressure drop, liquid/vapor flow, temperature and composition depend on the operating conditions, as mentioned before. Dynamic simulations allow tracing a change of the operating conditions and its effects on column performance by modelling time-dependent stage equilibria, pressure drops and stage hydraulics at any time. Overhead and bottom systems are included as well. The pressure drop is calculated from pressure–flow relationships including the liquid hold-up on the stage, given by geometric definitions and hydraulics. Generally, pressure, level and temperature controllers must be present and configured to ensure a






realistic model of how the column system would behave if a perturbation occurred. Examples for distillation column control are given in Chapter 5.6. To access liquid level control, the geometries of the reflux drum, the column bottom and other adjacent pieces of equipment must be known. In addition, off-spec column performance can be revealed: for example, tray flooding is detected because of the liquid hold-up calculation; reverse flow, possibly arising when the top stage pressure exceeds the bottom stage pressure, can be identified. Condenser and reboiler performance will be affected, too, if the operation mode changes; here, basically the considerations given for heat exchangers are valid. It is possible to consider the column material heat capacity and the heat output of the whole device to the environment, which completes a thorough energy balance and displays real-world physics.
– Chemical reactor: The reaction rate is, among other factors, dependent on temperature, concentration and catalyst performance. As these conditions may change, the reactor temperature/pressure profiles, reaction rates and product composition are affected. These data are necessarily calculated to solve the kinetic model equations and are available as simulation results. Therefore, a reliable kinetic model must be defined by the user, and depending on the dynamics of the reaction, the integrator time intervals must be adjusted. Additionally, the liquid and vapor hold-up calculations require geometry data. Catalyst loading and distribution are necessary information if, for instance, the residence time limits the conversion or catalyst deactivation is regarded. Heating/cooling equipment must be specified as well, for example, by implementing cooling water streams and adequate heat transfer models. The model is completed by considering the heat capacities of the reactor shell material and tubes to calculate the equipment warm-up and the heat output to the environment. Generally, pressure and level controllers should be added, but the control strategies must be defined for the specific case.
– Additional remarks: Dynamic simulation is obviously more CPU-intensive than a steady-state approach. To reduce the simulation time, the model should be kept as simple as possible; it is usually not necessary to simulate the overall process in full detail. For operations that need to be simulated accurately, like fast dynamic effects, time step variation can be the method of choice. Decreasing the integration time step usually gives more accurate results but increases the simulation time. Dynamic simulation produces lots of data, and the large amount of information should be documented clearly.
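The pressure–flow relation mentioned for the valve above can be illustrated with a minimal Python sketch. It is not taken from the text; the linear opening characteristic, Kv,max and all numbers are assumptions. Kv is commonly defined as the flow of water in m³/h through the valve at a pressure drop of 1 bar:

def kv(opening, kv_max):
    """Effective Kv [m3/h] for a linear valve characteristic; opening between 0 and 1."""
    return max(0.0, min(1.0, opening)) * kv_max

def volume_flow(opening, dp_bar, rho, kv_max):
    """Liquid flow [m3/h]; Kv refers to water (1000 kg/m3) at dp = 1 bar."""
    if dp_bar <= 0.0:
        return 0.0            # no forward flow without a positive pressure difference
    return kv(opening, kv_max) * (dp_bar * 1000.0 / rho) ** 0.5

# 50 % open valve with Kv_max = 25 m3/h, 0.4 bar pressure drop, water-like liquid
print(volume_flow(0.5, 0.4, 998.0, 25.0))   # approx. 7.9 m3/h

A dynamic simulator evaluates such a relation in every time step, which is why the valve data and a small allowed minimum flow matter for solver stability.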

3.7.2 Basics of Process Control for Dynamic Simulations

Efficient and safe operation of any plant needs to be maintained even if external influences cause off-spec operating conditions. Automated process control is a required


operation to compensate deviations of variables like temperature, pressure, concentration or level from the desired value. Generally, steady-state simulations are specified for these desired conditions, and the input is set to a fixed value. In a real plant, values cannot simply be set – but control elements enable the definition of a set-point that is maintained by implementing an adequate control scheme [264]. These devices are included in dynamic models and are a necessity for reliable process portrayal.
The presence of control elements is a first “visible” difference between steady-state and dynamic simulation. Steady-state simulators, as mentioned before, do not allow for mass or energy accumulation. As this cannot be “seen”, it is more practical to focus on the corresponding measured variables, for example, temperature, pressure or level [264]. For this purpose, the dynamic flowsheet is equipped with sensors and controllers whose performance can be tuned.
A so-called feedback control loop is a common type of control. Its chronology of interaction is depicted in Figure 3.21. The sensor records a measurable variable, and the data is forwarded to a controller. It receives the information delivered by the sensor and compares it with the desired value of that variable: the set-point. The deviation from the set-point is translated into an “instruction”, or output signal, for the control device. The control device executes the corresponding action, for example, the opening or closing of a valve [264]. Commonly, control valves are used as control elements to manipulate the fluid flow rate depending on the opening percentage of the valve. Thereby, the distinct valve type can be represented by specifying a valve characteristic [270].

Figure 3.21: Depiction of a feedback control loop.

Principally, there are two types of variable declarations concerning process control14 – parameters and free variables. Their characteristics can be explained as follows:
– Parameters are physical or chemical properties whose values are known or predefined, for example, reaction rate constants and heat/mass transfer coefficients.

14 Other authors may use the term fixed variable instead of parameter or further differentiate between input, output, state variables (here: free variables) and parameters/fixed variables, respectively.




Equipment geometry parameters, e. g. the diameter of a vessel or the number of heat exchanger tubes, are fixed as well, but may be varied during design optimization [266]. These parameters are not accessible via process control devices. There is, for example, no sense in declaring a “controller” that varies the vessel diameter to adjust the liquid level. Also, thermal conductivity, heat transfer coefficients and others are not applicable as part of a control scheme (although their measurement might be possible).
– Free variables cannot be assigned a constant value, as physical and chemical laws define their dependence on other variables. Typically, inlet flow rates, compositions or temperatures of streams entering the first stage of a process are fixed for initialization; they are predefined by the user. The mathematical solver process uses the input variables and calculates the values of the free variables downstream. Input values may be varied as well, in order to model a load change, for example. Thus, both inputs and results of the simulation can be free variables. Control systems are used to measure and then influence free variables specifically [266]. In conclusion, free variables, as declared in this text, include the measured and manipulated variables mentioned in the context of process control schemes before.

The appropriate implementation of control schemes requires the definition of corresponding variable pairs. The interaction of measured and manipulated variable is the basis of a working control system. Besides, it is important to consider the “direction” of controller action, meaning that it is required to predefine if the connection of measured and manipulated variable is direct or reversed, see Figure 3.22. These terms define the regulation action that needs to be performed by the control device to react to the disturbance in the right way. Direct mode causes a controller action aligned with the direction of change of the measured variable. Reversed mode performance triggers a contrary operation, meaning that the controller output action opposes the direction of development of the measured variable.

Figure 3.22: Interdependence of measured and manipulated variable for direct or reversed mode.

A simple but descriptive example for direct mode is the implementation of a control valve and controller for level control, see the left picture in Figure 3.23.


Figure 3.23: Depiction of direct mode by using liquid level control (LC) and reversed mode for flow control (FC).

The level of a liquid-filled tank depends on the fluid inlet and outlet flow rates. In that case it is sufficient to control either the inflow or the outflow. Here, the use of a control valve as control element in the outlet stream is considered. The measured value is the liquid level in the tank, and the operator defines the desired set-point value. The manipulated value is the opening percentage of the valve. A direct mode is necessary if an increase of the measured variable can be compensated by an increase of the manipulated variable. This is applicable for the above-mentioned example of level control: if the level rises, the valve opening must increase. Hence, the defined level set-point can be restored by manipulating the position of the outlet control valve, and in that case the valve opening is extended. Correspondingly, a reversed mode of action triggers a decrease of the manipulated variable if the measured value increases. An example for this is flow control: if the fluid flow increases, the control valve is set to a position of reduced orifice, see the right picture in Figure 3.23.

Example
A storage tank shall be equipped with pressure and level control. The set points are p = 1.35 bar and L = 3.994 m. Both manipulated and measured variables must be chosen, and a visual device for observation of the controller performance shall be implemented.

Solution
The snapshot in Figure 3.24 of a simulation created with Aspen Plus® Dynamics shows the implementation of pressure and level control devices for a storage tank. The measured variables are the pressure in the vessel and the liquid level, respectively. The desired values were defined with set-points of 1.35 bar and 3.994 m, see the label “SP”. The measured values, meaning the actual pressure and level, are indicated with the label “PV” (process variable). Control valves are used to adjust the pressure by vapor relief and to hold the liquid level by regulating the bottom stream mass flow. Thus, the manipulated variable is in both


Figure 3.24: Storage drum equipped with pressure (DRUM_PC) and level control (DRUM_LC). Controller faceplates of both control devices are shown.

cases the position of the control valve, indicated with the label “OP” (output). In this example, the valve opening is 50 % of its full opening. The faceplates monitor the actual controller performance and are a useful tool to trace and observe process and controller behavior.
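To make the direct mode of such a level controller tangible, here is a minimal time-stepping sketch of a direct-acting PI level controller on a buffer tank. It is not related to the Aspen example above; the tank area, tuning constants, flow rates and the linear valve law are all assumptions for illustration.

# Minimal sketch of a direct-acting PI level controller (all values invented).
A_tank = 2.0            # m2 tank cross-section
setpoint = 1.5          # m  desired level
Kc, tau_i = 2.0, 300.0  # proportional gain [1/m], integral time [s]
q_out_max = 0.02        # m3/s through the fully open outlet valve

level, integral, op_bias = 1.5, 0.0, 0.5
dt = 1.0                                     # s
for t in range(3600):
    q_in = 0.010 if t < 600 else 0.014       # feed disturbance after 10 min
    error = level - setpoint                 # direct mode: level up -> open more
    integral += error * dt
    op = min(1.0, max(0.0, op_bias + Kc * (error + integral / tau_i)))
    q_out = op * q_out_max                   # simple linear valve
    level += (q_in - q_out) * dt / A_tank
print(round(level, 3), round(op, 2))         # level back near 1.5 m, valve ~0.7 open

After the feed disturbance, the integral action drives the level back to the set-point while the valve settles at a larger opening, which is exactly the direct-mode behavior described above.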

Generally, the procedure of adding control elements to a flowsheet consists of these steps:
1. Define measured-manipulated variable pairs: identify process variables that can be measured by appropriate instrumentation and determine a source able to influence the process variable in the desired way, see Chapter 12.4. Process variables can be temperature, pressure, mass, level, flow or “quality variables” like pH or concentration. Manipulated variables are the opening position of control valves, heating/cooling duty or rotational speed. There are many different possible pairings depending on the specific problem.
2. Make sure that the manipulated variables can only be varied within a realistic range. Heating or cooling medium flow or temperature are restricted by on-site circumstances, for instance. If cooling water, steam or power supply are limited to a certain amount, it is pointless to extend the available resource. In addition, the manipulated variable must be a free variable; parameters cannot be varied because of physical or chemical process restrictions.


3. Control valves are characterized by their Kv-value and opening characteristic, giving the pressure–flow relation, see Chapter 12.3.2. This information is necessary for reliable results if valves are involved in control schemes.
4. Select a proper control mode: Is the measured-manipulated action relation direct or reversed? This step is essential for defining how the control system responds to process variable variation.
5. Add controller performance displays and trace controller action: During a simulation run it is recommended to check the controller behavior. The simplest method is to add visual displays or charts that record process variable variation and controller action over time, see the example “Controller faceplate” and Figure 3.24. That way the user can examine whether the controller is working properly.
PID feedback controllers are frequently used in chemical engineering.15 If required, the controller performance can be tuned by adjusting the algorithm parameters. Thorough overviews concerning process control in chemical engineering applications can be found in the corresponding literature, for example, [269–271].

Example
The following example [272] was set up in order to simulate a pressure relief scenario of a column equipped with reboiler, condenser and post-condensation system (Figure 3.25). Pressure relief is a highly dynamic event. The target was to calculate the relief stream over time and define how long it takes to build up a pressure that causes the pressure safety valve to open (see Chapter 14.2 for further description of pressure relief theory).

Solution
The simulation was first built in Aspen Plus® and transferred to Aspen Plus® Dynamics, where the control elements were added. Table 3.2 lists the corresponding measured-manipulated variable pairs used for the simulation. Note the different modes of temperature control depending on whether cooling or heating is regarded. If the measured temperature rises, the cooling medium supply must increase to maintain the set-point temperature (direct; TC_Cond, TC_Chill). Conversely, the heating medium supply must decrease if the measured temperature rises (reversed, TC_Col). Column pressure control is realized through nitrogen addition if the pressure drops below a certain level (reversed, PC_N2) and through control valve

15 PID is the abbreviation of a feedback control loop mechanism with proportional (P), integral (I) and derivative (D) mode. The usage of different values and tuning constants allows for various combinations of these three modes. For a PI controller, for instance, the derivative part is set to zero – expressed by leaving out the “D” in the term “PID”. Based on the author's experience, it is recommended to apply PI control schemes for the common flow, level, pressure and temperature control schemes. Further information can be found e. g. in [280].


Figure 3.25: Control scheme of a column and reboiler/condenser system. Adapted from [272].

Table 3.2: Measured-manipulated variable pairs and their corresponding mode of action.

Controller   Measured Variable                      Manipulated Variable     Mode of action
FC_Feed      Mass flow feed                         Rotational speed         reversed
FC_Reflux    Mass flow reflux                       Control valve position   reversed
PC_Col       Column pressure top stage              Control valve position   direct
PC_N2        Column pressure top stage              Control valve position   reversed
PSV          Column pressure top stage              Safety valve position    direct
LC_Sump      Level column sump                      Control valve position   direct
LC_Drum      Level reflux drum                      Control valve position   direct
TC_Cond      Temperature overhead stream            Cooling medium flow      direct
TC_Chill     Temperature post-condensation stream   Cooling medium flow      direct
TC_Col       Temperature column stage               Heating medium flow      reversed

opening if the pressure increases (direct, PC_Col). The safety valve PSV actuates when a defined column top stage pressure is reached (direct, PSV). Cooling system failure is a commonly considered scenario during safety analysis. In the portrayed case it causes the condenser and the post-condensation system (the chiller) to fail while the feed stream still enters the column and the reboiler system keeps working. The cooling system failure can be initiated by implementing user-defined code that evokes a malfunction scenario. In this simulation the cooling system failure arises at t = 1 min, assuming prior undisturbed, stable operating conditions. Figure 3.26 shows the pressure build-up in the column due to the accumulation of vapor. The overhead vapor is not liquified because of the condenser and chiller failure. The safety valve opens when the pressure exceeds 4.9 bar and remains open to prevent a further pressure increase.


Figure 3.26: Time-dependent pressure and relief stream evolution as a result of cooling failure. Adapted from [272].

The safety valve actuates after 12 min, and pressure relief takes place. This result is important for assessing the severity of the malfunction, which is rather high in the described case because of the short time period before blowdown. In addition to that information, the time-wise change of the relief stream is simulated, which allows process engineers to detect the maximum relief stream (and the point in time of its occurrence), see Figure 3.26.

4 Heat exchangers

Most of the heat exchangers do not work because of, but in spite of our design … (Hans Haverkamp)

Heating and cooling of streams are essential unit operations in any process. They are usually carried out in so-called heat exchangers, and their design is one of the main tasks of process engineering. There are a lot of different aspects that have to be taken into account, and both process and construction engineers must give their input to achieve a good solution. In contrast to academia, the fundamentals of heat transfer play a minor role in industrial applications. The necessary relationships are already integrated in commercial heat exchanger programs like HTRI, HTFS, or ASPEN Heat Exchanger Design & Rating. Instead, the focus is on the reasonable use of the particular options for the design of the apparatus. It is attempted to give a good explanation of these options in the following chapter; for the fundamentals, there are a lot of other textbooks available, e. g. Baehr/Stephan [71] or the VDI Heat Atlas [72].

There are two main types of apparatuses for heat exchange, the shell-and-tube type and the plate heat exchangers. Other types, e. g. spiral heat exchangers, double pipes, etc., are not or only briefly considered in this book. Much information about them is given in [72]. The shell-and-tube type is still the most widely applied one because of its robustness and flexibility. It will be discussed in detail, as its specification is a standard task of a process engineer. Plate heat exchangers are outlined more briefly; usually, a vendor specialist is necessary to obtain an optimum design.

4.1 Something general

Heat exchangers transfer heat from one fluid to another one, the two of which do not come into contact because of a separating wall. The design of a heat exchanger is a classical heat transition problem, meaning that the heat has to be transferred from one of the fluids to the separating wall, then go through the wall and finally be transferred from the wall to the second fluid. Fouling layers can make this heat transfer more difficult; they can be considered in the calculation analogously to the separating wall (Figure 4.1). There is a strong analogy to electrical engineering, where electric current and voltage are linked by the resistance in Ohm's law:

R = U/I    (4.1)

A series of resistances can be described by adding them up:

Rtotal = R1 + R2 + R3 + ⋅ ⋅ ⋅    (4.2)


Figure 4.1: Steps in the heat transfer process.

For heat transfer from a fluid to a wall, the standard approach is

Q̇ = αAΔT    (4.3)

Comparing Equations (4.1) and (4.3) and considering that Q̇ and I as well as ΔT and U have the same meaning, a thermal resistance can be defined as

Rα = 1/(αA) = ΔT/Q̇    (4.4)

The heat transition can then be characterized by a series of thermal resistances:

Rk = 1/(kA) = 1/(α1 A1) + Rfouling,1 + Rwall + Rfouling,2 + 1/(α2 A2),    (4.5)

with the overall resistance Rk defined as 1/(kA) to give it the same structural meaning as α.1 The heat transfer areas A1 and A2 can differ, e. g. the inner and the outer surface of a tube. The A in the term “kA” is arbitrary; it does not matter to which area it refers unless k itself is evaluated. The thermal resistance of the separating wall Rwall can be calculated according to the rules of heat conduction [71]. The fouling resistances are usually set to empirical values (Table 4.2). It should be clear that the magnitude of these resistances gives a clear guideline on how to improve the performance of a heat exchanger. Good heat exchanger calculation programs indicate the percentages of the particular thermal resistances. The high thermal resistances should be counteracted, as the following simple example shows.

1 Note that in the English and American literature the heat transfer coefficient α is normally denoted as h, and the heat transition coefficient k is normally denoted as U.


Example
A heat exchanger is calculated to have α1 = 150 W/(m2 K) and α2 = 2000 W/(m2 K). Which is the more effective way to improve the heat transfer? The resistances of fouling layers and of the separating wall are neglected. The heat exchange area is 100 m2 on both sides.

Solution
The overall resistance can be calculated to be

1/(kA) = 1/(150 W/(m2 K) ⋅ 100 m2) + 1/(2000 W/(m2 K) ⋅ 100 m2) = 7.17 ⋅ 10−5 K/W
⇒ k = 139.5 W/(m2 K)

Increasing the lower heat transfer coefficient by 10 % gives

1/(kA) = 1/(165 W/(m2 K) ⋅ 100 m2) + 1/(2000 W/(m2 K) ⋅ 100 m2) = 6.56 ⋅ 10−5 K/W
⇒ k = 152.4 W/(m2 K),

while doubling the higher one gives

1/(kA) = 1/(150 W/(m2 K) ⋅ 100 m2) + 1/(4000 W/(m2 K) ⋅ 100 m2) = 6.92 ⋅ 10−5 K/W
⇒ k = 144.6 W/(m2 K)

A small increase of the lower heat transfer coefficient is much more efficient than a large change of the higher one.
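The same comparison is easy to script. The following minimal Python sketch simply evaluates Equation (4.5) without wall and fouling resistances, using the numbers from the example; it is an illustration, not a design tool:

```python
def overall_k(alpha1, alpha2, A1=100.0, A2=100.0, A_ref=100.0):
    """Overall heat transition coefficient k referred to A_ref according to
    Eq. (4.5), neglecting wall and fouling resistances.
    Units: alpha in W/(m2 K), areas in m2, result in W/(m2 K)."""
    R_total = 1.0 / (alpha1 * A1) + 1.0 / (alpha2 * A2)   # series resistance, K/W
    return 1.0 / (R_total * A_ref)

print(overall_k(150.0, 2000.0))   # base case, approx. 139.5 W/(m2 K)
print(overall_k(165.0, 2000.0))   # lower alpha increased by 10 %, approx. 152.4
print(overall_k(150.0, 4000.0))   # higher alpha doubled, approx. 144.6
```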

4.2 Shell-and-tube heat exchangers

Shell-and-tube heat exchangers are the most common heat exchanger type in the process industry. One of their advantages is a large ratio of heat transfer area to volume and weight, respectively. They have a wide range of sizes, cleaning is at least possible, and wear parts like gaskets can easily be replaced. Shell-and-tube heat exchangers are composed of a shell, which is in principle a pressure vessel, and a tube bundle inside (Figure 4.2). The two fluids which are supposed to exchange heat are on different sides of the tubes: one inside the tubes and one outside the tubes on the shell side. Often, only one of the streams involved is a process stream, whereas the other one (e. g. steam for heating, cooling water) is a utility (Chapter 13). On the other hand, it is desirable to reduce the consumption of utilities and to cover the necessary heating or cooling duty from the process itself so that both streams are process streams. This heat integration can save operation costs significantly; the Pinch method for the optimization has been described in Chapter 3.3.


Figure 4.2: Transparent shell-and-tube heat exchanger. Test fluid with red dye. Courtesy of Heat Transfer Research, Inc.

The heat exchangers can be classified with respect to the phase behavior of the streams: they can maintain their phases (gas-gas or liquid-liquid exchangers), or the product stream can condense (condenser) or evaporate (evaporator). In commercial heat exchanger design programs (e. g. HTRI, HTFS), the principle of the thermal calculation of a shell-and-tube heat exchanger is the so-called cell method [72], which is illustrated in Figure 4.3.

Figure 4.3: Cell method for the thermal calculation of a shell-and-tube heat exchanger. © Springer-Verlag GmbH.

Using a simple liquid-liquid heat exchanger, the procedure is illustrated in the next chapter.
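Independently of that, a deliberately simplified toy version of the cell method may help to illustrate the idea. The Python sketch below is my own illustration and not the algorithm of the commercial programs: it assumes a purely countercurrent liquid-liquid exchanger with constant kA and constant heat capacity flows, splits it into a chain of cells, and iterates the temperature profiles; all numbers are hypothetical.

```python
def cell_method_countercurrent(kA, W_hot, W_cold, T_hot_in, T_cold_in,
                               n_cells=20, n_iter=300):
    """Toy cell method for a countercurrent liquid-liquid exchanger.

    kA     : overall heat transition coefficient times area, W/K (assumed constant)
    W_hot  : heat capacity flow of the hot stream (m_dot * cp), W/K
    W_cold : heat capacity flow of the cold stream, W/K
    Returns (hot outlet temperature, cold outlet temperature).
    """
    kA_cell = kA / n_cells
    T_cold = [T_cold_in] * (n_cells + 1)          # initial guess for the cold profile
    for _ in range(n_iter):
        T_hot = [T_hot_in] + [0.0] * n_cells
        for i in range(n_cells):                  # march the hot stream forward
            Q = kA_cell * (T_hot[i] - T_cold[i + 1])
            T_hot[i + 1] = T_hot[i] - Q / W_hot
        T_new = [0.0] * n_cells + [T_cold_in]
        for i in range(n_cells - 1, -1, -1):      # march the cold stream backward
            Q = kA_cell * (T_hot[i] - T_new[i + 1])
            T_new[i] = T_new[i + 1] + Q / W_cold
        T_cold = T_new
    return T_hot[-1], T_cold[0]

# hypothetical numbers: kA = 14 kW/K, both streams 10 kW/K, 100 degC against 20 degC
print(cell_method_countercurrent(14_000.0, 10_000.0, 10_000.0, 100.0, 20.0))
# hot and cold outlet temperatures, here roughly 52 and 68 degC
```

In the real cell method, each cell additionally gets its own physical properties and heat transfer correlations from the heat curves, which is exactly what the following section is about.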

4.3 Heat exchangers without phase change

Each cell must be assigned the actual state of the stream and its associated physical properties, the geometry of the cell, and the appropriate relationships which describe the heat transfer. In contrast to process simulation programs, commercial heat exchanger design programs do not support physical property models or their parameters, except for a number of common heat transfer fluids like steam or water, cooling brines, thermal oils, and some common pure components and ideal mixtures of them. Instead, the physical properties needed are generated before the actual heat transfer calculation takes place. The communication between the physical property model and the heat exchanger design program is achieved with the help of a so-called heat curve. An example for the product side heat curve of a liquid-liquid heat exchanger without phase change is given in Figure 4.4.

Figure 4.4: Example for a heat curve without phase change: at p = 6 bar, the liquid properties t, h, vapor fraction, η, λ, cp, and ρ as well as the pseudo-critical temperature and pressure and the molecular weight are tabulated over eleven temperature points between 40 °C and 100 °C. Courtesy of Heat Transfer Research, Inc.

As can be seen, the particular physical properties with respect to temperature, i. e. specific enthalpy, density, dynamic viscosity, thermal conductivity, and molecular weight, are tabulated. Between these points, the program interpolates. Extrapolations are usually indicated with warning messages. Each heat curve refers to a certain pressure. Normally, several heat curves for different pressures are generated to account for pressure drop effects. For a liquid-liquid exchanger, this is of minor importance. As well, the vapor fraction of the stream is always zero in this case. Alternatively, the enthalpy can be used as an independent variable. The pseudo-critical temperatures and pressures are of minor physical significance but required by some correlations. To make a good interpolation behavior possible, the user can vary the step size and take care that the distances between the points are appropriate and that the whole temperature and pressure range occurring in the heat exchanger is covered.

The thermodynamics is completed by the specification of the process in the heat exchanger. Commercial heat exchanger design programs distinguish between three calculation modes, i. e. the rating mode, the simulation mode, and the design mode. They can be distinguished as follows.

– Rating mode: The specified heat exchanger is calculated according to the process. As the main result, it is indicated how much heat exchange area is in excess or missing (overdesign). This is the standard mode for the design of heat exchangers.
– Simulation mode: It is evaluated how the specified heat exchanger would perform with the given input streams, i. e. the actual outlet conditions are calculated.
– Design mode: A heat exchanger design is evaluated which fulfills the requirements of the process (outlet conditions, pressure drop). This is a very tempting approach; the heat exchanger design is achieved by the famous mouse click. However, this mode is time-consuming, and the result is not necessarily satisfactory; it should not be taken as final but as a starting point for further rating mode calculations. Furthermore, the user does not get a feeling for the sensitivities and the potential for further improvement. The design mode is only recommended when the user has really no idea about the design.

For the constructive details of a heat exchanger, the TEMA type (TEMA: Tubular Exchanger Manufacturers Association Inc.) has to be fixed first. The TEMA type determines the general arrangement of the heat exchanger. Figures 4.5, 4.6, and 4.7 explain the TEMA type code. Some popular choices are
– BEM: standard arrangement;
– BEU: U-type heat exchanger;
– BKU: kettle type reboiler (Chapter 4.5);
– AES: floating head, removable tube bundle;
– BJ21T: arrangement for vacuum condensers (Chapter 4.4).

Figure 4.5: TEMA front end stationary head types. A Channel and removable cover; B Bonnet (integral cover); C Channel integral with tubesheet, removable cover, and removable tube bundle; N Channel integral with tubesheet and removable cover; D Special high pressure closure. Courtesy of Tubular Exchanger Manufacturers Association, Inc.


Figure 4.6: TEMA shell types. E One pass shell; F Two pass shell with longitudinal baffle; G Split flow; H Double split flow; J Divided flow; K Kettle type reboiler; X Cross flow. Courtesy of Tubular Exchanger Manufacturers Association, Inc.

Figure 4.7: TEMA rear end head types. L Fixed tubesheet like A, stationary head; M Fixed tubesheet like B, stationary head; N Fixed tubesheet like N, stationary head; P Outside packed floating head; S Floating head with backing device; T Pull through floating head; U U-type bundle; W Externally sealed floating tubesheet. Courtesy of Tubular Exchanger Manufacturers Association, Inc.

One of the main reasons to distinguish between all these types is to cope with the problem of thermal stress. In many cases, the shell side will have a significantly different temperature than the tube side, causing different thermal expansion and possible damage like tube bending or loosening of the connections between tube and tube sheet. Fixed tube sheets (Figure 4.7, L, M, N) do not provide any countermeasures to this kind of stress; as a rule of thumb, they should not be chosen if the temperatures of the two sides differ by more than 50 K.

Floating rear end head types (Figure 4.7, P, S, T, W) provide the possibility for the tubes to give way. However, they can only compensate for differences between tubes and shell; they are not useful when differences between the tubes themselves occur. Furthermore, the clearances between tube bundle and shell are often enlarged, which reduces the heat transfer. Sealing strips (Figure 4.14) can mitigate this disadvantage. In contrast, the U-type configuration allows individual expansion of the tubes anyway. Its drawback is that the inner side of the bend cannot be cleaned. Cleaning generally becomes possible if the covers and/or the tube bundles (Figure 4.5) can be removed. Additional information can be obtained from [73].

Next, the shell orientation (horizontal, vertical, inclined) must be defined, and it has to be decided which stream is on the shell side and which is on the tube side. This is a strategic decision with often contradictory arguments. Usually, the heat transfer on the shell side is better than in the tubes. Therefore, it is desirable to place the stream with the worse heat transfer on the shell side. On the other hand, the tube side is easier to clean, so that the stream showing more fouling should be placed in the tubes. The latter argument is usually stronger; for example, cooling water as a notoriously dirty fluid is placed in the tubes in almost all cases. A compromise can be found if a removable tube bundle or a U-type exchanger is chosen; in these cases, the shell side can be cleaned as well. Countercurrent flow is the default; cocurrent flow can be specified. Additionally, several identical heat exchangers can be arranged in parallel or in series.

Shell diameter and the length of the tubes mainly determine the heat transfer area. For a given heat transfer area, longer tubes result in lower costs. The tube length is often limited to 12.2 m. Several other specification data for the tubes have to be defined. The tubes themselves are specified by their outside diameter (OD) and the wall thickness. A standard value for the tube OD is 1″ (25.4 mm). The most often applied alternatives are ¾″ (19.05 mm) and 1 ½″ (38.1 mm), where it is possible to increase the number of tubes in a given shell or to reduce the velocity inside the tubes, respectively. From the heat transfer point of view, small tube diameters are advantageous, as long as they do not make cleaning too uncomfortable. The wall thickness is determined by the mechanical stability; 2 mm is a reasonable value, while for high-pressure applications larger wall thicknesses are probable.

The tube pitch and the tube layout angle define the arrangement of the tubes (Figure 4.8). The pitch is the distance between the centers of the tubes. The smaller the pitch, the more tubes can be put into the shell. Large pitches can be useful to lower the shell velocities for avoiding vibrations. The pitch is often given as a pitch ratio, i. e. the pitch divided by the tube OD. Common pitch ratios are 1.25, 1.33, and 1.5. The tube layout angle defines the pattern of the tubes with respect to the flow direction. The 30° arrangement is the standard one. The 60° pattern is useful to avoid vibrations caused by vortex shedding (Chapter 4.10). Besides these triangular patterns, the square patterns are used.


Figure 4.8: Tube pitch patterns. t = pitch. © Springer-Verlag GmbH.

Figure 4.9: Sketch of a two-pass shell-and-tube heat exchanger. © H Padleckas/Wikimedia Commons/CC BY-SA 3.0. https://creativecommons.org/licenses/by-sa/3.0/deed.de.

The 45° pattern should not be used for gas streams because of possible vibrations; the 90° arrangement is useful for cleaning purposes. Finally, the number of tube passes can be specified (1, 2, 4, 8 in standard designs). At the front and rear head, the tube stream can be divided by partition plates so that it passes several times through the exchanger (Figure 4.9). The simplest way is the use of a U-type heat exchanger, which automatically has two passes.2 As the cross-flow area for the tube stream decreases with the number of passes, the tube velocity and therefore the heat transfer coefficient increase. On the other hand, one direction is in cocurrent flow, and the profile of the temperature difference between hot and cold fluid along the flow path is distorted, often even leading to temperature crosses. It makes sense to define more tube passes if the main thermal resistance is on the tube side and if the temperature ranges of hot and cold fluid do not overlap. Otherwise, the use of more tube passes can even be a disadvantage.

2 For U-type heat exchangers, six passes are usually the maximum; otherwise, the bend radius would become too small.

The material of the tubes is characterized by standard values (density, thermal conductivity, etc.). They can be overwritten if further knowledge is available.

Baffles direct the shell flow back and forth across the tubes, which increases the shell-side velocity and the heat transfer coefficient [75]. Furthermore, they support the tubes in their position and prevent vibration of the tubes. Again, there are some options for different types (Figure 4.12). The most common one is the single segmental baffle, which is in principle a circular plate from which a segment has been removed. This is defined by the cut (Figure 4.10). A reasonable cut should be in the range of 20–25 %. If low-pressure gas flow is involved, the first approach should be 40–45 %.

Figure 4.10: Baffle cut definition. Courtesy of Heat Transfer Research, Inc.

It can be distinguished whether the cut is perpendicular or parallel to the flow inlet (Figure 4.11). The parallel orientation is preferred for condensing fluids so that the condensate can be collected at the bottom. For liquids, the perpendicular orientation should be preferred, as it mixes existing fluid layers and avoids possible precipitation of solids at the bottom. Single segmental baffles are certainly the cheapest ones because of their easy manufacturing, but they cause a comparably large pressure drop. They are not recommended for viscous fluids [75]. The crossflow heat transfer to the tubes is better than the longitudinal heat transfer. The more baffles are set, the more this crossflow is achieved, and both the pressure drop and the heat transfer coefficient increase. Therefore, the baffle spacing can be varied within certain limits to achieve a satisfactory solution. According to TEMA, the minimum baffle spacing is 20 % of the inner shell diameter.


Figure 4.11: Baffle orientations. Courtesy of Heat Transfer Research, Inc.

For small heat exchangers it should not go below 2″. A baffle spacing between 30–60 % of the shell inside diameter is usually a good starting point [75]. The baffle spacing can be varied if vibrations occur; vibrations are less probable with more baffles set, as the tubes get more support.

The pressure drop can be significantly reduced if double-segmental baffles are used. As can be seen in Figure 4.12, two kinds of baffles alternate, one having the cut area in the center (“wing baffle”) and one having two circle segments as cut areas (“center baffle”). From the thermal point of view, they are less effective. Also, the so-called disk-and-donut baffles are in use, with one baffle type in the form of a circular ring and one as a circular disk in the center. They are often used in gas-gas applications to avoid vibrations.

Figure 4.12: Common baffle types.

NTIW (“no tubes in window”) baffles (Figure 4.12) are used for mechanical stability reasons. This option ensures that each baffle supports every tube. Tubes with long unsupported spans are avoided. NTIW is a useful option if vibration problems occur. Also, the pressure drop is reduced. The disadvantage is that the shell diameter must be increased to obtain the same heat transfer area. A baffle cut of 15 % is most common [75].

Grid baffles are metal lattices that fix the tubes (Figure 4.13). Mainly, longitudinal flow is produced instead of cross-flow. Grid baffles protect against tube vibration and produce low pressure drops on the shell side.

To finish the discussion of baffles, tie-rods and sealing strips should be explained. A small number of tie-rods are placed in the tube bundle instead of normal tubes to gain more mechanical stability. They are not tubes but solid rods, and they do not take part in the heat exchange. Sealing strips are placed on the circumference of the inner diameter. They prevent a leakage flow between bundle and shell (Figure 4.14).


Figure 4.13: Rod-type baffles as an example for grid baffles. Courtesy of TEMA India Ltd.

In the rating mode, the specified heat exchanger design produces an overdesign as a result, i. e. a statement of whether the heat exchange area is too small or too large and by how much. A reasonable overdesign is 10–20 %.3 The overdesign must cover the various uncertainties in the calculation, e. g. physical properties, uncertainties of the heat transfer relationships or, if necessary, fouling effects (Chapter 4.9). The design must be varied until the overdesign is in the desired range. However, there are a lot of other items to be checked.
– Duty comparison: The calculated duty must be equal to the duty reported in the process simulation. This is a quite safe indication of whether the physical properties and the process definition have been defined correctly in the heat exchanger design program.

3 It should be noted that the overdesign in the rating mode refers to the area of the heat exchanger. Often, it is checked whether the overdesign remains slightly positive if the load is increased by 10–20 %. In fact, this is not equivalent, as increased load causes higher velocities and therefore better heat transfer coefficients. This approach is less conservative.


Figure 4.14: Sealing strips. © Springer-Verlag GmbH.





– Flow fractions: The flow fractions indicate to which extent the fluid on the shell side takes the designated way. Heat exchanger design programs give an estimation of the flow distribution. The percentages of the following fractions are regarded [76]:
  – A fraction: The A fraction refers to the tube-to-baffle hole leakage stream. It becomes large when the clearances between tubes and baffle holes are large and when the baffle spacing is narrow, especially for single-segmental baffles. At least, the A fraction is thermally effective and not lost.
  – B fraction: The B fraction refers to the main crossflow stream through the bundle, i. e. the desired way. Normally, it is approx. 60 % of the total flow. If the B fraction is lower, too large clearances and a narrow baffle spacing are probably the reasons.
  – C fraction: The C fraction is the bundle-to-shell crossflow bypass stream. It should be less than 10 %. Additional sealing strips can decrease this fraction. The C fraction is thermally not effective.
  – E fraction: The E fraction is the baffle-to-shell leakage stream. It is thermally not effective. There are hardly any options to manipulate it. Double-segmental baffles are advantageous in comparison with single-segmental baffles.
  – F fraction: The F fraction is the tubepass partition bypass stream; it should be lower than 10 %. The F fraction can be lowered by additional sealing strips.
– Thermal resistance distribution: The percentage ratios of the heat transfer on the shell side, the heat transfer on the tube side, the thermal conductivity resistance of the tube, and the thermal conductivity resistance of the fouling layer to the overall thermal resistance of the arrangement are a useful guideline for improving the design. It can be found out which side determines the heat transfer, so that the following design variations should focus on this. Furthermore, the percentage of the fouling resistance should be watched carefully; it can be assessed which impact the fouling factors have on the design. An example is given in Chapter 4.9.

– Tube side velocities: One should take care that the tube side velocities are sufficiently high, especially if fouling is probable (Chapter 4.9). A range of 0.4–1.2 m/s is recommended for liquids; the VDI Heat Atlas [72] even suggests 1.8 m/s. For vapors, the kinetic energy is more relevant than the velocity, as the density of a gas can cover a wide range. The recommended range is ρw2 = 30–270 kg/(m s2), corresponding to velocities of 5–15 m/s for air at p = 1 bar. The tube side velocity is often a crucial point for the heat exchanger design. In fact, there are examples where the choice of a smaller shell diameter improves the heat exchanger performance, because the velocity in the tubes and, subsequently, the heat transfer coefficient increase significantly. Also, there is the option of providing multiple tube passes to increase the tube side velocity (see above).
– Shell side velocities: The velocities on the shell side should not be too large in order to avoid vibrations. The guidance values are w = 0.3–0.9 m/s for liquids and ρw2 = 30–130 kg/(m s2), corresponding to velocities of 5–10 m/s for air at p = 1 bar.
– Nozzle velocities: For the nozzles of heat exchangers, the kinetic energy determines the limitation for the design (see the sketch after this list). The guide values are
  for liquids: ρw2 = 700–2250 kg/(m s2) (tube side) and ρw2 = 700–1100 kg/(m s2) (shell side);
  for gases: ρw2 = 500 kg/(m s2) (tube side) and ρw2 = 300–400 kg/(m s2) (shell side).
– Vibrations: At least, the most important vibration checks should indicate that vibrations do not occur (Chapter 4.10).
– Physical properties: As mentioned above, it should be checked whether the calculated duty meets the expectation from the process simulation. The heat curves should cover the pressure and temperature range of the process with a sufficient number of points for interpolation.
– Allocation of the tubes: Streams at high pressure and corrosive streams should be placed inside the tubes, as it is easier to increase their design pressure instead of that of the shell. The stream with a fairly lower heat transfer coefficient should be placed on the shell side. Streams showing fouling should be placed where cleaning is possible.
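As a quick illustration of the kinetic-energy criterion ρw2 used above, the following Python sketch computes the velocity and ρw2 in a nozzle from the mass flow, the fluid density, and the nozzle inner diameter. The numbers are purely hypothetical and only serve to show the order of magnitude:

```python
import math

def nozzle_check(m_dot, rho, d_inner):
    """Return velocity (m/s) and rho*w^2 (kg/(m s2) = Pa) in a circular nozzle.

    m_dot   : mass flow in kg/s
    rho     : fluid density in kg/m3
    d_inner : nozzle inner diameter in m
    """
    area = math.pi * d_inner**2 / 4.0   # cross-sectional area, m2
    w = m_dot / (rho * area)            # velocity, m/s
    return w, rho * w**2

# hypothetical shell side liquid nozzle: 30 kg/s of a liquid with 900 kg/m3,
# nozzle inner diameter 0.20 m
w, rho_w2 = nozzle_check(30.0, 900.0, 0.20)
print(f"w = {w:.2f} m/s, rho*w2 = {rho_w2:.0f} kg/(m s2)")
# approx. 1.1 m/s and 1010 kg/(m s2), i.e. within the 700-1100 kg/(m s2) guideline
```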

For shell-and-tube heat exchangers, there is an interesting but not widely applied option to increase the heat transfer coefficient in the tubes, where often not enough turbulence is generated because of low velocities. Turbulence can be increased with wire elements (Figure 4.15), which can be inserted into the tubes [77].


Figure 4.15: hiTRAN element for increase of the heat transfer in the tubes [77]. © Wiley-VCH Verlag GmbH & Co. KGaA. Reproduced with permission.

4.4 Condensers

For condensers, the heat curve on the product side looks slightly different. In contrast to the liquid-liquid or gas-gas heat exchangers, it is important that the dew and bubble points of the stream are reproduced well by the heat curves. For the properties, both phases are relevant. Due to the phase change, the influence of the pressure is larger. Also, the calculated heat duty should agree with the one obtained in the process simulation. However, there is often some work to do to prove it. In most cases, the pressure drop in the process simulation is just a guess, and often a very conservative one. In contrast to this, the heat exchanger design program calculates this pressure drop properly. In most cases it will be lower than the one assumed in the process simulation. Therefore, due to the higher outlet pressure the condensed fraction will be larger than in the process simulation, and so is the duty. The simple solution is to repeat the process simulation for the condenser, setting the pressure drop to the one obtained with the heat exchanger design program.

From the constructive point of view, the condensate removal from the heat exchanger should be well defined. As mentioned above, the parallel baffle cut is to be preferred. The condensate removal can be supported by an inclination (usually 1–2°), where the nozzle for the condensate removal is the lowest point. In certain cases, the condensate level in the heat exchanger can even be used for controlling the heat duty: the higher the condensate level, the more tubes are flooded, and the less heat transfer area can be used. At the vapor inlet, the tubes must often be protected against erosion by droplets. For this purpose, impingement plates (Figure 4.32) are often used.

A special design is very useful if vacuum vapors have to be condensed. In normal arrangements like BEM the calculated pressure drop is often larger than the pressure itself, which is impossible. Increasing the shell diameter leads to huge apparatuses. The BJ21T arrangement (Figure 4.16) enables condensation with a very low pressure drop. The two large inlet nozzles guarantee a predistribution of the vapor at low velocities, and the vapor can reach the tubes easily.


Figure 4.16: The BJ21T arrangement. Courtesy of Heat Transfer Research, Inc.

The dimensioning of a condenser has a special pitfall if inert gases are involved, especially when small amounts of condensables have to be removed from an inert gas stream, e. g. waste air. There is a large difference to the normal case without inert gases, where there is hardly any transport resistance for the vapor to get in contact with the cold surface or, respectively, the boundary layer. If there are large amounts of inert gases, the condensables must get to the boundary layer by means of diffusion, which is usually slow and which must be regarded as the step determining the condensation rate. Mass and heat transfer have a mutual influence on each other [78]. The state-of-the-art calculation of this combined heat and mass transfer is thoroughly described in [79]. However, this procedure is too complex for an application to a multicomponent mixture. Simplified approaches [80] are often used in commercial heat exchanger design programs, but in general it must be recommended to take care when the heat exchanger is designed.

Especially for condensers, the deaeration problem is a decisive issue. There must be a clear route for defined and undefined (leakage) inert gas flows to leave the apparatus. For condensers, inert gases would significantly lower the heat transfer coefficient, as described in the last paragraph. For each heat exchanger, it must be individually checked whether noncondensables are adequately removed. Gases accumulate at the top of any volume; therefore, the nozzle for the inert removal must be placed at the highest point of the apparatus. Moreover, a short circuit has to be ruled out. This means that the inert removal nozzle must not be located in the vicinity of the feed nozzle. Otherwise, it is likely that both condensable and noncondensable components are relieved. The process stream must get the opportunity to condense, meaning that it should first get in contact with the cold surfaces in the condenser before reaching the inert removal nozzle. In this way, preferably the noncondensables are removed from the process. It should also be verified that the air which is in the exchanger at the beginning can be removed during operation.


4.5 Evaporators

The design of normal heat exchangers like condensers or liquid-liquid heat exchangers can be regarded as a standard task in process engineering, whereas the design of evaporators usually cannot. Before one can encounter the particular difficulties in the calculation, one must choose the type of evaporator with respect to the properties of the stream. Most evaporators are used as reboilers for column service; therefore, this special arrangement shall be carefully considered. The most popular reboiler for distillation columns is the thermosiphon reboiler with natural circulation. It is used in approx. 70 % of the cases where evaporation is required [81].

Figure 4.17: Thermosiphon reboiler with natural circulation.

Figure 4.17 shows the bottom of a column with a thermosiphon reboiler. On the left hand side there is the removal of the bottom product, which has the same concentration as the circulation flow. The circulation flow enters the heat exchanger at the bottom. Due to the height difference between the surface of the bottom liquid and the inlet of the tube bundle (“static liquid head”) the pressure of the liquid is higher than the saturation pressure of the liquid at the surface, i. e. the liquid is subcooled. Inside the tubes, the liquid rises again. The pressure of the liquid decreases and it is heated by the heating agent on the shell side, usually steam. Both effects compensate the subcooling after a certain height (preheating zone) has been passed, and boiling of the fluid begins. First, bubbles are formed, which become more and more numerous.

A distinct two-phase flow leaves the heat exchanger through the outlet nozzle and enters the column again, where it is split into a vapor flow going up the column and a liquid flow going back to the bottom, where the cycle begins once again. The driving force of the circulation flow is the density difference between the left leg of the reboiler cycle, where there is only liquid, and the right one, where a two-phase flow of the bottom product occurs. In the latter, the overall density is smaller due to the bubbles, and therefore the static pressure ρgH is larger in the left leg than in the right one, causing the natural circulation without external stimulation.

The temperature difference between the heating agent and the product is the driving force for the heat exchange. For each evaporator type, there is a reasonable range. For thermosiphon reboilers, this range is between 15 and 40 K. At lower temperature differences, especially below 10 K, circulation instabilities are likely to occur [81–84]. In these cases, only a few bubbles are formed, and a significant carry-over of liquid does not take place. The circulation is low, and so is the heat transfer. There is a large preheating zone. Periodically, these bubbles coalesce, and the plugs formed push greater amounts of liquid upward, causing an increased circulation for a short time (geysering). As well, the pressure in the adjacent column is fluctuating. Small temperature differences are likely to occur if the reboiler has an overdesign which is too large. According to

Q̇ = kAΔT    (4.6)

and assuming that k remains in the same order of magnitude, a large heat transfer area A means that a small driving temperature difference ΔT is formed, as usually some process control valve maintains the duty Q̇ by adjusting the steam amount. For a large area A, the valve will reduce the steam pressure on the shell side of the reboiler, and the condensation temperature will become lower and closer to the product temperature. Another case is the startup phase. At the beginning, there is no fouling; therefore, the heat transfer coefficient k is larger than expected (Chapter 4.9), and the ΔT gets smaller. Circulation reaches a maximum at driving temperature differences of 20–30 K. High driving temperature differences are just as undesirable as low ones. The natural circulation is then reduced by the rising pressure drop. At very high driving temperature differences there is the risk of burnout. This means that the bubbles form a continuous vapor film, and the heat transfer takes place mainly by radiation, which is comparably ineffective. A temperature difference rising beyond a critical value will therefore lead to a lower vapor generation [82].

The residence time of the product in the bottom and in the reboiler is quite long, and therefore the thermosiphon reboiler is not really gentle towards the product. There is another reason why thermosiphon reboilers are not the first choice when temperature-sensitive substances are involved. Usually the boiling points of these components are relatively high, and the distillation is carried out in a vacuum to avoid high temperatures at the bottom.


In fact, the thermosiphon reboiler has problems in vacuum operation, especially at pressures below p = 150–200 mbar [85]. This is illustrated in Figure 4.17. The static liquid head is the driving force for the thermosiphon circulation. However, as mentioned above, due to hydrostatics the liquid is subcooled at the entrance of the tubes. In a vacuum, the preheating zone in the tubes with low heat transfer will be much larger, and at a certain point the thermosiphon circulation vanishes due to the low driving force. The following example illustrates the impact of the vacuum.

Example
Compare the subcooling of a thermosiphon reboiler with water as the bottom product at the two bottom pressures p1 = 1 bar and p2 = 100 mbar. The static liquid head is assumed to be H = 5 m. For the density of water, ρ = 1000 kg/m3 is used in both cases. For simplification, the gravity acceleration is assumed to be 10 m/s2.

Solution
At p1 = 1 bar, the bottom temperature in the column can be calculated to be ts1 = 99.6 °C [29]. Following hydrostatics, the pressure at the tube inlet of the reboiler will be p1,tube = p1 + ρgH = 1.5 bar. This in turn corresponds to a boiling temperature of ts1,tube = 111.3 °C. The subcooling is approx. 12 K.
At p2 = 100 mbar, the bottom temperature can be calculated to be ts2 = 45.8 °C. Considering hydrostatics, the pressure at the tube inlet of the reboiler is p2,tube = 0.6 bar, corresponding to a boiling temperature of ts2,tube = 85.9 °C. The subcooling is considerably higher in this case, approx. 40 K.
It can easily be guessed that the preheating zone will be much larger in the vacuum case. This will result in a worse heat transfer, giving less circulation and, in a vicious circle, a larger preheating zone. For this reason, it is recommended that thermosiphon reboilers should not be used below p = 0.2 bar [85]. The tube length of thermosiphon reboilers in vacuum should be approximately 3 m.
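The example can be reproduced with a few lines of Python, using an approximate Antoine correlation for the vapor pressure of water instead of the steam table; the constants below are common literature values and only valid as a rough estimate:

```python
import math

# Antoine constants for water, log10(p/mmHg) = A - B/(C + t/degC);
# approximate values, strictly valid only between about 1 and 100 degC
A, B, C = 8.07131, 1730.63, 233.426

def t_sat_water(p_bar):
    """Saturation temperature of water in degC for a given pressure in bar."""
    p_mmHg = p_bar * 750.062
    return B / (A - math.log10(p_mmHg)) - C

def subcooling(p_top_bar, H=5.0, rho=1000.0, g=10.0):
    """Subcooling at the reboiler tube inlet caused by the static liquid head H (m)."""
    p_inlet_bar = p_top_bar + rho * g * H / 1e5   # hydrostatic pressure increase
    return t_sat_water(p_inlet_bar) - t_sat_water(p_top_bar)

print(subcooling(1.0))   # approx. 12 K at atmospheric pressure
print(subcooling(0.1))   # approx. 40 K at 100 mbar
```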

The static liquid head mentioned in the example is one of the key quantities for the function of the reboiler. It has to be defined for the heat transfer calculation, and it should be indicated on the arrangement sketch for column and reboiler, although its realization is not a matter of construction but of the level control. It has two effects which partly compensate each other: the higher the static liquid head, the larger is the preheating zone, which reduces the heat transfer, and the larger is the circulation, which increases the heat transfer. Often, these two effects compensate [82]. A reasonable value for the static liquid head is 90 % of the tube length (usually in the range 3–6 m).

Furthermore, the connections between the distillation column and the reboiler have to be defined, i. e. length and diameter of the reboiler feed and outlet lines. For the diameters, useful rules of thumb exist. The cross-flow area of the outlet line should be approx. 80–100 % of the total cross-flow area of the tubes of the reboiler, whereas 25–50 % are sufficient for the inlet line. To avoid extremely large throughputs and low evaporation rates, a throttle valve can be placed in the inlet line.

The height of the rear end head should be approx. 25 % of the shell diameter if it has an axial nozzle; for a radial nozzle, it is recommended to add the nozzle diameter.

For the specification of a thermosiphon reboiler the definition of the process is different from the one for a single-phase heat exchanger or for a condenser. One should be aware that the flow rate of the product is not known, as it depends on the arrangement and construction of the equipment. Moreover, its determination by any calculation program will be more a number for the order of magnitude than an exact value. Instead, a reboiler is specified by the heat duty to be transferred. Usually, an estimation of the outlet vapor fraction is required. A standard value is 0.2; an acceptable range for the result is 0.15–0.25; for water, evaporation rates down to 0.05 can be accepted. It is worth mentioning that the inlet pressure of a thermosiphon reboiler is not a useful quantity, as it can hardly be determined. Instead, it makes sense to start the calculation with the pressure at the surface of the liquid in the bottom of the distillation column. The local pressures obtained due to the interaction of static height and pressure drops are then evaluated by the program. Table 4.1 gives a summary of the mentioned design recommendations for thermosiphon reboilers.

Table 4.1: Recommended values for the design of thermosiphon reboilers.

Item                                             Recommendation
Tube length                                      3–6 m (vacuum: 3 m)
Static liquid head                               90 % of tube length
Cross-flow area, outlet line                     80–100 % of tube cross-flow area; w < 10 m/s, ρw2 < 9000 Pa
Cross-flow area, inlet line                      25–50 % of tube cross-flow area; w < 2 m/s, ρw2 < 9000 Pa
Height of rear head                              25 % of the shell diameter (for radial nozzle: + nozzle diameter)
Outlet vapor fraction                            0.15–0.35, aqueous systems 0.05–0.15 [263]
Driving temperature difference                   10–40 K [81]
Pressure range                                   p > 200 mbar
No. of crosspasses                               ≈ tube length/0.45
Heat flux                                        31.5–47.3 kW/m2
Heat transfer coefficient, clean surfaces [81]   1000–4000 W/(m2 K)
Heat transfer coefficient, fouling [81]          500–2000 W/(m2 K)
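The line sizing rule from Table 4.1 is easy to evaluate. The following Python sketch uses purely illustrative numbers (tube count and tube inner diameter are assumptions) to estimate the diameter of the reboiler outlet line so that its cross-section is about 90 % of the total tube cross-flow area:

```python
import math

def outlet_line_diameter(n_tubes, d_tube_inner, area_fraction=0.9):
    """Outlet line diameter (m) so that its cross-section equals area_fraction
    of the total cross-flow area of the reboiler tubes (d_tube_inner in m)."""
    a_tubes = n_tubes * math.pi * d_tube_inner**2 / 4.0
    return math.sqrt(4.0 * area_fraction * a_tubes / math.pi)

# e.g. 400 tubes of 25.4 mm OD with 2 mm wall thickness -> 21.4 mm inner diameter
print(outlet_line_diameter(400, 0.0214))   # approx. 0.41 m
```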

Concerning the accuracy of the calculation of a thermosiphon reboiler, one must be aware that the mutual interaction between the heat transfer and the two-phase flow in the tubes is really complex. The accuracy of the pressure drop calculation of the two-phase flow (Chapter 12.1) in changing flow regimes determines the circulation rate, which is in turn decisive for the heat transfer and the vapor generation in the tubes.


It is not probable that these dependencies can be accurately determined for both organic and aqueous fluids in various arrangements. Nevertheless, the considerable success of heat exchanger design programs indicates that at least the overall performance of thermosiphon reboilers, i. e. the duty transferred, is predicted in a way that a reliable design of commercial reboilers is possible.

For the startup of a thermosiphon reboiler, some kind of boiling is necessary to start the circulation; on the other hand, effective boiling takes place only if circulation is achieved. Thus, for the startup the unit should be heated up slowly so that boiling can gradually develop. If possible, it is useful to lower the condenser and therefore the column pressure for a time to help the unit get started. A high static liquid head might also help.

There are a number of other types of evaporators. For vacuum applications and for systems with high viscosities or a wide boiling range, the falling film evaporator is the usual alternative. Falling film evaporators (Figure 4.18) consist of a vertical tube bundle. Usually, falling film evaporators are effective if the tubes are long, often 8–9 m. The liquid to be evaporated is fed at the top and flows down the tubes as a thin film due to gravity. Special distributors on top of the tubes ensure that there is an even distribution of the liquid into the tubes. On the shell side, the heating agent, usually steam, is condensed. The vapor is generated in the tubes and goes down in cocurrent flow with the liquid, supporting the downflow of the liquid due to shear forces. Vapor and liquid are separated at the bottom in a separator vessel.

Figure 4.18: Falling film evaporator. Courtesy of GEA Group AG.

In a falling film evaporator, a gentle evaporation takes place. The residence time in the heated zone is short, the temperatures can be kept low, as it can be operated in a vacuum, and the temperature differences between heating agent and product can be kept low as well (usually 8–20 K). There is a small liquid holdup, giving quick reaction times on changes of the operating conditions. Furthermore, in contrast to thermosiphon and forced circulation reboilers (see below), the falling film evaporator is pretty insensitive against foaming.

The k-values of falling film evaporators are in the range k = 700–1200 W/(m2 K), where the main heat transfer resistance is usually caused by the heat conduction through the film. Care must be taken that the heat flux does not exceed a critical value; otherwise there is the danger that the film dries out locally and a hot spot is formed. Another countermeasure is to operate with a liquid recycle so that the film thickness increases due to the increased mass flow. For the so-called coverage, defined as the ratio between liquid volume flow rate and total wetted circumference of all tubes, a range of 1.2–1.5 m3/(m h) is a good approach. Falling film evaporators can be used up to viscosities of 500 cP, although one cannot expect Newtonian behavior in this range, and the performance at high viscosities will certainly be lower. The limitation for the application is fouling. Due to the slow gravity flow of the liquid, there is no abrasive effect.

The specification of a falling film evaporator in a heat exchanger design program has some peculiarities. In contrast to the thermosiphon reboiler, the inlet flow into the heat exchanger at the top of the calandria is quite well defined, either by the process for the single-pass option or by the recirculation pump capability. During the specification, the coverage should be checked. Defining the inlet pressure on the product side of the falling film evaporator is unpleasant. In fact, only the outlet pressure is well defined. The inlet pressure is a result of the pressure generated by the pump, the static height of the tube inlet, and the pressure drop of the distributor. Especially in vacuum operation, the performance of the falling film evaporator is quite sensitive to the inlet pressure, and therefore its calculation does not make much sense, especially because the exact performance data of the pump are barely known. It is reasonable to determine it iteratively: with a reasonable guess of the inlet pressure, the design program evaluates an outlet pressure which should match the given value. If this is not the case, the inlet pressure is varied according to the difference between the calculated and the known outlet pressure until the outlet pressure fits. A typical example for the application of falling film evaporators is the enrichment of fruit juices. The removal of water reduces the transport costs significantly, and the smooth temperature differences prevent the valuable vitamins from being destroyed.

Another alternative to the thermosiphon reboiler is the forced circulation reboiler (Figure 4.19), which can be arranged both horizontally and vertically. It is often used for systems with high viscosity and large boiling point elevation. In fact, the forced circulation reboiler is a liquid heater. The heated liquid is then expanded into the recipient, usually a column, through a valve, causing the evaporation. The heat transferred in the forced circulation reboiler becomes sensible heat according to

Q̇ = ṁ cp ΔT    (4.7)
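A quick plausibility check of Equation (4.7) is shown below; the duty, heat capacity, and allowed temperature rise are made-up numbers used only to illustrate the order of magnitude of the required circulation flow:

```python
def circulation_flow(duty_kW, cp_kJ_per_kgK, delta_T_K):
    """Required circulation mass flow in kg/s for a forced circulation reboiler
    that transfers the duty as sensible heat, Eq. (4.7)."""
    return duty_kW / (cp_kJ_per_kgK * delta_T_K)

# e.g. 2000 kW with cp = 2.5 kJ/(kg K) and an allowed temperature rise of 5 K
print(circulation_flow(2000.0, 2.5, 5.0))   # 160 kg/s, i.e. a rather large pump
```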


Figure 4.19: Horizontal forced circulation evaporator. Courtesy of GEA Group AG.

The larger ṁ, the lower is ΔT, and the lower are the thermal stability problems. On the other hand, a strong pump is necessary for large volume flows and relatively low pressure differences. The forced circulation reboiler is relatively insensitive against fouling, as the evaporation takes place outside the apparatus. Due to the relatively high velocities in the tubes (1.5–2 m/s, sometimes even higher), there is an abrasive effect that prevents the onset of fouling. The high velocities on the product side are also related to a high throughput, which in turn causes a low temperature change and therefore less thermal stress. However, the forced circulation reboiler is very sensitive against foaming. The disadvantages of forced circulation reboilers are the considerable power consumption of the circulation pump and the investment costs for its foundation.

The kettle reboiler is a simple and robust alternative to the thermosiphon reboiler, both for vacuum and pressure applications (Figure 4.20). It causes no vibration problems, and evaporation rates up to 80 % are possible. Its drawbacks are possible entrainment and its affinity to fouling, as there is no defined flow which can remove the dirt, and heavy boiling substances have a long residence time in the heat exchanger.

Figure 4.20: Sketch of a kettle reboiler. © Springer-Verlag GmbH.

In these cases, it makes sense to maintain a continuous liquid draw-off stream. Kettle reboilers are relatively expensive pieces of equipment, as their space and volume requirements are pretty large. For the design, it has to be taken into account that the kettle reboiler is effective as an evaporator, not as a liquid heater. The heat transfer takes place due to bubble formation. Feeding subcooled liquids must be avoided [86], as in this case the dominating heat transfer mechanism is natural convection with low velocities and without the support of baffles, giving a low k-value. To enable the formation of bubbles, a minimum temperature difference between heating agent and product should be maintained, at least 12–15 K. For high-pressure applications, the driving temperature difference can be lower.

Figure 4.21: Sketch of a thin film evaporator. © UIC GmbH.

Although they do not belong to the shell-and-tube heat exchangers, two other evaporators should be mentioned which have the main purpose of gentle evaporation and product conservation. Well-known applications are vitamins, flavoring substances, or pharmaceuticals. In thin film evaporators (Figure 4.21), the product is distributed on the inner side of a tube with a heating jacket outside, where it forms a film. Inside the tube, a drive shaft with an attached wiper rotates and keeps the film thickness constant, usually below 1 mm. The residence time in thin film evaporators is normally less than 1 min. They are appropriate for low pressures down to 1 mbar, giving pretty low boiling temperatures. Pressures below 1 mbar are not possible because of the pressure drop caused by the transport from the evaporator to the condenser. The product is distributed from the top of the apparatus by means of a rotating system. It flows down on the inner wall and is equally spread and permanently mixed by a wiper system. In Figure 4.21, it is realized as a roller wiper. It prevents the formation of hot spots and provides long operation intervals without maintenance, as the roller wipers do not get in direct contact with the wall, so that scratches are avoided. The heating agent (steam or thermal oil) is led through the jacket attached to the wall. The vapor generated can leave the thin film evaporator through a nozzle at the top. Extremely high heat transfer coefficients are possible. However, the prediction capabilities are low. The design should be performed by a vendor who has carried out a pilot trial with a reasonable scale-up. A classical failure is the use of laboratory data for the determination of the k-value; these laboratory data have usually been obtained in equipment made of glass, where the low thermal conductivity of the glass determines the heat transfer. For further information, [85] is a good starting point.

Figure 4.22: Sketch of a short path evaporator. © UIC GmbH.

For applications in rough vacuum (p = 1–10−3 mbar), short path evaporators (Figure 4.22) are used for extremely high boiling substances. The principle is to keep the distance between evaporator and condenser as short as possible, usually only a few cm. For this purpose, the condenser is located in the center of the apparatus. Again, in Figure 4.22 the distribution of the liquid on the inner wall is achieved by the roller wiper system. Because of the low pressures, the temperatures can be kept low as well, and the evaporation is extremely gentle towards the product. On the other hand, the smaller the distance between evaporator and condenser, the greater is the danger of entrainment. The design should again be performed by experienced vendors. It is essential that low boiling substances are completely removed before. From gas theory, the Langmuir–Knudsen equation gives an upper limit for the evaporation capacity per heating area [87]:

(ṁ/A)max / (kg/(m2 h)) = 1575 ⋅ (p/mbar) ⋅ √((M/(g/mol)) ⋅ (K/T))    (4.8)

For p = 10−3 mbar, the order of magnitude is

(ṁ/A)max = 1.5 kg/(m2 h)    (4.9)
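For a quick evaluation of Equation (4.8), the following Python sketch can be used; the molar mass and temperature are arbitrary illustrative values for a high boiler:

```python
import math

def max_evaporation_flux(p_mbar, M_g_per_mol, T_K):
    """Upper limit of the evaporation capacity per heating area in kg/(m2 h)
    according to the Langmuir-Knudsen equation (4.8)."""
    return 1575.0 * p_mbar * math.sqrt(M_g_per_mol / T_K)

# e.g. a high boiler with M = 350 g/mol at 200 degC (473 K) and p = 1e-3 mbar
print(max_evaporation_flux(1e-3, 350.0, 473.0))   # approx. 1.4 kg/(m2 h)
```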

4.6 Plate heat exchangers

Plate heat exchangers are an option for combining very high heat transfer coefficients with a large heat transfer area per volume. They consist of profiled heat exchanger plates which separate the two media. When they are assembled, the profiles form many parallel connected channels which constitute the heat transfer area. These channels change their direction regularly, and the flows are directed in a way that gaps filled with cold and warm medium alternate (Figure 4.23). The corrugation of the plate profiles increases the turbulence of the flows and therefore improves the overall heat transition coefficient k.

Figure 4.23: Principle sketch of a plate heat exchanger. Courtesy of Alfa Laval Mid Europe GmbH (Germany).

There are further advantages of plate heat exchangers. As can be seen in Figure 4.23, the plates are mounted in a skid between two massive plates. This arrangement is easy to disassemble, giving the opportunity of an easy cleaning procedure.


Design corrections or capacity increases can easily be achieved by adding more plates to the skid. The space requirement is lower than for shell-and-tube heat exchangers, as are the investment costs for comparable exchangers with the same duty. However, the pressure drop is considerably higher. A serious problem is the gaskets between the plates. For aggressive media and many organic liquids the plates must often be soldered or welded, whereby the strong advantage of the easy cleaning opportunity gets lost again. The k-values of plate heat exchangers can be extremely high. They can also be used for highly viscous media, and they show less fouling due to the high velocities. Fouling factors used for shell-and-tube heat exchangers are generally too high for plate heat exchangers. The disadvantages are the high maintenance costs for the gaskets and the permanent risk of leakage or, alternatively, the impossibility of cleaning.

Plate heat exchangers are not only used as liquid-liquid heat exchangers. Meanwhile, they are increasingly used as evaporators and condensers as well. Plate condensers have the advantage that asymmetric channels can be formed: wide ones for the vapor side and narrow ones for the cooling water to maintain an appropriate velocity to achieve enough turbulence (Figure 4.24). The same advantage can be claimed for evaporators, where even high-viscosity media can be handled. Driving temperature differences of only 3–4 K can be taken into consideration, which is especially important if mechanical (Chapter 8.2) or thermal (Chapter 8.3) vapor recompression or special materials are involved. Another advantage is the small holdup, giving short startup and shutdown phases and performing a gentle evaporation of temperature-sensitive substances. Several evaporation options can be realized, e. g. the thermosiphon reboiler, the single-pass evaporator, the falling film evaporator (Figure 4.25), or the forced circulation evaporator.

Figure 4.24: Asymmetric channels for condensation in plate heat exchangers. Courtesy of Alfa Laval Mid Europe GmbH (Germany).


Figure 4.25: Plate heat exchanger used as falling film evaporator.

the thermosiphon reboiler, the single pass evaporator, the falling film evaporator (Figure 4.25) or the forced circulation evaporator. The design calculation of plate heat exchangers is usually supported by the various commercial design programs. However, the particularities and the design rules are not as well established as they are for shell-and-tube heat exchangers, so that large differences between a quick program call and a professional design performed by a vendor might occur. The commercial programs enable even a nonprofessional to point out the advantages of plate heat exchangers where their application is possible. The principles of the heat transfer calculation are well described in [88].

4.7 Double pipes

The double pipe is the simplest construction for a heat exchanger (Figure 4.26). It consists of just two concentric pipes, where one stream flows in the inner tube and the other one in the annular gap. Generally, double pipes suffer from the fact that the heat transfer area remains low, so that large duties can hardly be realized. Instead, double pipes are often used for heat tracing to compensate for unintended heat losses.

4.8 Air coolers

If the heat transfer on one side is significantly worse than on the other side, the overall heat transfer is determined essentially by the bad side. In this case, it makes sense to increase the heat transfer area selectively on this side. An example is the air cooler. The heat transfer on the product side is much better than the one to the environmental air. Therefore, the tubes are equipped with fins (Figure 4.27), and the heat transfer is improved by blowers which create forced flow with comparably high velocities (Figure 4.28). The heat transfer to


Figure 4.26: Double-pipe heat exchanger [89]. © Wiley-VCH Verlag GmbH & Co. KGaA. Reproduced with permission.

Figure 4.27: Finned tubes. Courtesy of Kelvion (www.kelvion.com).

the fins takes place by heat conduction. Calculation procedures for this problem are described in [71] and [90]. The investment costs for an air cooler are higher than for a conventional shell-andtube water cooler because of the large heat transfer area and the costs for the blowers including their drives. On the other hand, operation costs are lower, as there is no cooling water consumption. Also, no piping is necessary on the service side, and no fouling takes place. There are certain other aspects for the assessment of air coolers, e. g. their high noise level or their large space demand. Process control is difficult if the air cooler is exposed to rain, snow or sun radiation. But the main criterion is the fouling aspect: Cooling water contains hardness components (Chapter 13.3), which might precipitate at wall temperatures higher than 65–75 °C, and as long as no other measures are taken, the air cooler should be taken into account in these cases. The final design of an air cooler should be performed by a vendor.


Figure 4.28: Air cooler. Courtesy of Kelvion (www.kelvion.com).

4.9 Fouling

If you consider fouling, you will get fouling. (Andreas Doll)

In general, heat exchangers suffer from a gradual deterioration of the heat transfer caused by so-called fouling. The streams often contain dissolved or suspended materials which may deposit on the surface of the heat transfer areas and form a layer. These layers usually have a low thermal conductivity; therefore, they form an extra heat transfer resistance which lowers the overall heat transition coefficient. Streams containing hardness components are especially prone to the formation of fouling layers. The solubility of hardness components becomes lower with increasing temperature; their deposition is a typical case of fouling. Other reasons for fouling are microorganisms (bio-fouling) or by-products caused by corrosion or side reactions. Formally, this heat transfer resistance can be calculated by taking into account the thickness of the layer and its thermal conductivity. However, neither of them can be determined in advance. Therefore, the additional fouling resistances used (somewhat misleadingly called "fouling factors") are more or less just set by the user, in most cases according to the in-house design rules of each particular company. Fouling factors are in the range 0–600 · 10⁻⁶ m² K/W; beyond this range, they do not really make sense. They can be interpreted according to Table 4.2. A collection of fouling factors can be found in [91]. Fouling also depends on the material. Roughly, in stainless steel tubes the fouling factor is about half as large as in carbon steel tubes. Shell-and-tube fouling factors cannot be directly applied in the plate heat exchanger design. In plate heat exchangers, the velocities are considerably higher so that less fouling occurs.


Table 4.2: Typical fouling factors.

Fouling factor (10⁻⁶ m² K/W)   Interpretation                      Example
0                              no fouling                          caustic soda
100                            formal consideration of fouling     steam
200                            low fouling                         overhead products
300–400                        moderate to strong fouling          cooling water
500                            strong fouling                      dirty products
600                            very strong fouling                 very dirty products

Nevertheless, the fouling consideration is often subject to discussion. In [91] it is argued that the consideration of fouling factors often leads to heat exchangers which are too large, and therefore the velocities are lower. High velocities counteract fouling because of possible abrasion of the fouling layer. Therefore, it frequently happens that fouling only occurs because it had been considered in the design. In [91] it is recommended that fouling should not be considered. A usual safety margin (15–20 %), an appropriate design with a large B fraction (> 65 %), and a reasonable baffle cut (20–25 %) should ensure that fouling is avoided. Auxiliary measures like a small parallel heat exchanger to compensate fouling if it occurs nonetheless or recycling of part of the cooling water return to keep a sufficient velocity at plant startup when there is no fouling could also help. A similar recommendation is given in [92]. In practice, the situation of an engineer who follows this argumentation is not easy. There is no obvious success story when fouling is avoided by setting the fouling factor to a low value, as there is no proof that fouling would have been a problem. On the other hand, it is a serious design mistake if the heat transfer area is too small due to fouling when no fouling factor has been considered. One should at least be skeptical if fouling factors are arbitrarily increased or considered, although the medium is known to be clean. When in doubt, there should be a tendency towards the lower fouling factor. Traditions in plant engineering are difficult to overcome. In any case, velocities should be kept relatively high (tube: ≈ 2 m/s, shell: ≈ 1 m/s). Fouling can also be mitigated by the use of twisted tubes (Figure 4.29). Inside the twisted tubes, more shear stress is developed at the inner tube wall, which enables a better fouling removal from the wall [244]. As mentioned, the heat exchanger design programs outline how the overall heat transfer resistance is composed. When the heat transfer conditions are very good on both the product and the service side, the fouling factors can account for a large percentage of the heat transfer resistance. Often, this is the case for thermosiphon reboilers with steam as heating agent and water on the product side. In these cases, the size of the heat exchanger is determined by the more or less arbitrary fouling factors.
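How strongly an assumed fouling factor distorts the design can be illustrated with a simple resistances-in-series estimate. The sketch below is not from the book; the two film coefficients are arbitrary example values for a steam-heated reboiler with water on the product side, and the wall resistance is neglected for brevity.

def overall_k(alpha_product: float, alpha_service: float, fouling: float = 0.0) -> float:
    """Overall heat transfer coefficient in W/(m2 K) from two film coefficients
    and an additional fouling resistance (plane wall, wall resistance neglected)."""
    resistance = 1.0 / alpha_product + 1.0 / alpha_service + fouling
    return 1.0 / resistance

# Assumed film coefficients in W/(m2 K)
alpha_water, alpha_steam = 6000.0, 8000.0
for r_f in (0.0, 200e-6, 400e-6):                  # fouling factors in m2 K/W
    k = overall_k(alpha_water, alpha_steam, r_f)
    print(f"fouling factor {r_f*1e6:5.0f}e-6 m2K/W -> k = {k:5.0f} W/(m2 K)")

With good film coefficients on both sides, the more or less arbitrary fouling factor quickly dominates the total resistance and therefore the required heat transfer area, which is exactly the situation described below for thermosiphon reboilers.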


Figure 4.29: Twisted tube section.

Special care must be taken. If a thermosiphon reboiler is too large, it might not work properly (see above). Without experience, heat exchangers like this are very difficult to design.

4.10 Vibrations

Dark Arts is a mandatory subject. (Harry Potter and the Goblet of Fire)

Tube vibrations can cause severe mechanical damage to a heat exchanger, e. g. due to tubes hitting the baffles, fatigue cracking, enhanced corrosion or loosening of the tube joint. They are induced by the shellside flow across the tube bundle. There is a maximum crossflow velocity that should not be exceeded; usually, a 20 % safety margin is left. There are several mechanisms which can create vibrations; the most important ones are fluidelastic instability and vortex shedding. Due to its pressure, the fluid exerts a certain force on the tubes, pushing them apart. On the other hand, the tubes act like a spring, and a restoring force is built up. Fluid force, restoring force, and damping of the fluid form an oscillating system. If the amplitudes exceed a certain level, severe mechanical damage can be the consequence (Figure 4.30). This phenomenon is called fluidelastic instability. Vortex shedding is caused by periodic formation of vortices in the tube bundle (Figure 4.31). Short-term failures can take place if the frequency of the vortex formation approaches the resonance frequency of the tubes. Regions which are prone to vibration damage are [93]:
– tubes with large unsupported spans between two baffles;
– tubes located in the baffle window region at the tube bundle periphery;
– U-bend regions;
– tubes beneath the inlet nozzle;
– tubes in the tube bundle bypass area.


Figure 4.30: Tube failure caused by fluidelastic instability [93]. © Hydrocarbon Processing.

Figure 4.31: Vortex shedding in tube array [93]. © Hydrocarbon Processing.

The calculation of tube vibration phenomena is not very well founded. There are some rules and plausibility considerations that should be taken into account. Many parameters can affect tube vibrations, and most of them also affect the thermal performance. The proper support of the tubes is essential in avoiding vibrations [93]. If the unsupported span of the tubes is long, their natural frequency is low and resonance phenomena are more probable. Tubes supported by baffles have an unsupported span equal to the baffle spacing. Tubes in the baffle window have much larger unsupported spans and are therefore more susceptible to vibrations. Reducing the spacing L of single-segmental baffles increases the cross-flow velocity proportionally to L⁻¹, while the natural frequency increases proportionally to L⁻², so that resonance becomes less probable. Also, other baffle types can be tried (Figure 4.12). In heat exchangers with double-segmental baffles, the cross-flow velocities are much lower. "No-tubes-in-window" baffles (NTIW) are usually vibration-free, as the unsupported span of the tubes is reduced by 50 % for the same number of cross-passes. However, as the tubes in the window region are omitted, they require a larger shell diameter to maintain the heat transfer area. Another option is the so-called rod-baffled heat exchanger (Figure 4.13). It provides closely spaced support for the tubes, making the tube bundle very tight. However, the flow direction is essentially parallel to the tubes, which causes a worse heat transfer and, subsequently, a larger necessary heat transfer area. The pressure drop is significantly smaller than in conventional heat exchangers. Up to now, no vibration problems have been reported for this design. The RODbaffle is a proprietary design with a registered trademark; a royalty payment is required for its use.
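The scaling argument for single-segmental baffles can be made concrete with a two-line estimate: changing the spacing by a factor L_new/L_old changes the cross-flow velocity with the inverse of that factor and the natural frequency with its inverse square. The sketch below only evaluates these proportionalities; it is not a substitute for a vibration analysis.

def rescale_for_baffle_spacing(spacing_ratio: float) -> tuple[float, float]:
    """Relative change of cross-flow velocity (~ 1/L) and tube natural frequency (~ 1/L^2)
    when the single-segmental baffle spacing is changed by spacing_ratio = L_new / L_old."""
    velocity_factor = 1.0 / spacing_ratio
    frequency_factor = 1.0 / spacing_ratio**2
    return velocity_factor, frequency_factor

# Halving the baffle spacing (illustrative numbers only)
v_fac, f_fac = rescale_for_baffle_spacing(0.5)
print(f"cross-flow velocity x {v_fac:.1f}, natural frequency x {f_fac:.1f}")
# -> the velocity doubles, but the natural frequency quadruples, so the margin against resonance grows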


Figure 4.32: Clearance under inlet nozzle and impingement plate. Courtesy of Heat Transfer Research, Inc.

Tubes at entrance and exit areas are exposed to the highest velocities and therefore susceptible to vibrations. To avoid damage, clearances or impingement plates can be provided (Figure 4.32). Impingement plates divert the incoming stream from direct impacting on the first row of tubes. They can prevent erosion or cavitation, but do not reduce vibrations. The tube pitch and the layout pattern can be varied to avoid vibration. The larger the tube pitch, the lower are the cross-flow velocities, and flow-induced vibrations become less probable. However, the heat transfer deteriorates, and a larger diameter has to be chosen. For avoiding vortex shedding, it often proves to be successful to change the angle of the tube arrangement (60° pattern in Figure 4.8). An angle of 45° can be tried to avoid fluidelastic instability. The arrangement which is most prone to fluidelastic instability is the 90° square pattern. Finally, an increase of the stiffness of the tubes can be tested, e. g. by choosing a material with a higher elastic modulus or by choosing a larger tube diameter or wall thickness. However, usually these items are in most cases determined by corrosivity issues or by the heat transfer itself, respectively [93]. Using commercial programs for heat exchanger design, empirical criteria must be applied to estimate whether vibrations are probable. For fluidelastic instability, the vibrations increase with increasing velocity on the shell side. One must take care to stay below 80 % of the critical velocity in the whole shell side. The most critical locations are the ones where the velocity is high, e. g. at the inlet nozzles or at the bundle entrance. To avoid fluidelastic instability, any measure of reducing the shellside velocity might be useful, e. g. increasing the shell diameter, choosing larger nozzles or increasing the pitch. It can also be recommended to increase the clearance between nozzle and tube bundle. Also, increasing the natural frequency by reduction of the baffle spacing or the change of the baffle type is another promising option. To avoid vortex shedding, one must stay away from the resonance frequency. Besides reducing the velocities, the change of the tube arrangement has often proved to be successful. Acoustic vibrations occur in gas flows on the shell side. Normally, they do not cause severe damages, but the noise development itself is often a problem. The 45°


tube angle is known to be prone to acoustic vibration; thus, it should be avoided for gas flow on the shell side. A detailed description of the tube vibration phenomena is given in [93] and [94].

5 Distillation and absorption

The most important thermal separation process in technical applications is distillation. The name often causes some confusion. In basic lectures, "distillation" means a single stage comprising evaporation and condensation. The multiple arrangement of separation stages in a column is called "rectification". Although this is sometimes used, the colloquial term used in industry for such a "rectification" is in fact "distillation", which is used in this book as well. The reason for the widespread use of distillation is the use of simple heat as a utility which can easily be added and removed. The phase equilibrium between vapor and liquid is the foundation of distillation, where the density difference between the phases is so large that their separation is relatively easy. Distillation can be explained according to the following scheme. A simple separation stage consisting of evaporation and condensation achieves an enrichment of the low-boiling substances in the condensate, but according to the vapor-liquid equilibrium all components occur in both phases. Generally, pure components are not obtained. Several stages are necessary for further purification. Figure 5.1 shows an example for such an arrangement. Considering a binary mixture, in the upper part the low-boiler is purified by repeatedly sending the overhead stream to a further separation stage. In principle, one can in fact obtain the low-boiling component in an arbitrarily specified concentration, and, accordingly, the heavy end as well, as shown in the lower part of Figure 5.1. However, the drawback of this process is obvious: Only small amounts of both light and heavy ends are obtained, as no use is made of the intermediate fractions where a number of separation stages have

Figure 5.1: Series of separation stages.


Figure 5.2: The prestage of a distillation column.

already been applied. This situation can be improved if these particular fractions are led to the next stage above or, respectively, below. Both condensers and evaporators can be left out if the vapors and condensates are directly forwarded to the next stage. Only at the ends of the sequence are an evaporator and a condenser necessary (Figure 5.2). Vapor and liquid are moving in countercurrent flow. A vertical arrangement of the stages then leads to the well-known distillation columns. The principle of countercurrent flow can be applied to more or less all thermal separation processes like extraction or even adsorption ("simulated moving bed", Chapter 7.2). Figure 5.3 shows the typical terms describing a distillation column. The mixture to be separated is continuously led into the column. It is called "feed". Several feeds are possible. The lower end of the column is called the "bottom". At the bottom, the reboiler generates vapor to provide the necessary heat input into the column. At the upper end of the column ("top"), the overhead stream is led into a condenser, which removes heat from the column at a lower temperature level. Part of the condensate is led back into the column ("reflux"), which forms the essential liquid flow from the top to the bottom of the column, giving countercurrent flow with the vapor. The rest of the condensate is removed from the column as a product, the "distillate". The ratio


Figure 5.3: Distillation column and the most important terms.

between reflux flow Ṙ and distillate flow Ḋ is called the reflux ratio ν:

$$\nu = \dot{R}/\dot{D} \qquad (5.1)$$

The column part above the feed is called the rectifying section; the part below the feed is the stripping section. In the column there are internals which enable a good contact and mass transfer between vapor and liquid phase. Generally, there are two options to perform distillation and absorption processes: packed columns and tray columns. In packed columns, there is a continuous mass transfer along the column. Their advantage is the significantly lower pressure drop, which is the decisive criterion in vacuum distillations. In tray columns the mass transfer is performed stage-wise; their advantage is that they have no wetting problems and that they are less sensitive to fouling. Very good monographs describing distillation are the books of Baerns et al. [8], Kister [95, 96], Sattler [97], and Stichlmair and Fair [98]. The material costs of a distillation column depend on the amount of material for the column cylinder, which is proportional to both the diameter D and the height¹ H, and on the costs for the packing or trays, which are proportional to H and to D². H is determined by thermodynamics (number of separation stages), whereas D is determined by hydrodynamics (characteristics of the trays or the packing) and thermodynamics (determination of the internal flows inside the column).² Before starting, the influence of the pressure on a distillation column should be clarified: The higher the column pressure is, the lower are the volume flows. Therefore, higher pressure results in a higher capacity of the column. On the other hand, higher pressure usually (not always) results in a worse separation behavior due to the phase equilibrium, and therefore in lower purities of the products.
¹ Neglecting top and bottom.
² Furthermore, the pressure has a significant influence on the wall thickness and, subsequently, on the material costs.


5.1 Thermodynamics of distillation and absorption columns

There are two ways for the calculation of distillation columns. The equilibrium calculation uses the assumption of a theoretical stage, which represents full development of the phase equilibrium on this stage. In practice, this assumption is not valid. A tray does not represent a theoretical stage. One can introduce efficiencies (Chapter 5.4), which are usually in the range of 2/3. Alternatively, the number of stages is reduced, e. g. 60 trays represent 40 theoretical stages. For packed columns, a certain packed height is taken for one theoretical stage (HETP value, see Glossary). The concept of the theoretical stage is very widely applied, but there are certain constellations where it leads to qualitatively and quantitatively bad results. In these cases, the mass transfer and the phase interface area must be taken into account by the calculation. Still, the phase equilibrium is most important, as it determines the driving forces for the mass transfer. The application of these so-called rate-based models is obligatory for the calculation of absorber columns if high purities are required.

Figure 5.4: Mass balance on a tray.

The mass balance on a stage is the foundation of the calculation of distillation columns. Figure 5.4 shows a volume with a theoretical stage. Two streams are entering (Li−1, Vi+1), and two streams are leaving (Li, Vi). The streams leaving the stage are in phase equilibrium. In process simulation, the numbering of the stages goes from top to bottom. For an equilibrium column with theoretical stages, the determination of the necessary number of stages and the reflux ratio are essential for the development of a distillation process. With the concept of the theoretical stage, this can be achieved by solving the so-called MESH equations (material balance, phase equilibrium, summation condition, heat balance). For a column with N stages and n components, these equations are
– material balance for each component on each stage: n ⋅ N equations;
– phase equilibrium conditions for each component on each stage: n ⋅ N equations;
– summation conditions (∑ xi = 1, ∑ yi = 1) on each stage: 2N equations;
– heat balance on each stage: N equations.


All N ⋅ (2n + 3) equations have to be solved. The corresponding unknowns are the compositions xi and yi of each component (2n variables), the flows of liquid and vapor and the temperature of the two phases in equilibrium (3 variables), all of them on each stage (times N). For example, for a column with 60 stages and 20 components, there are altogether 60 ⋅ (2 ⋅ 20 + 3) = 2580 equations, most of them nonlinear. If chemical reactions occur (e. g. reactive distillation), additional equations would have to be considered. The mathematics of solving this system of equations are described in [8] and [95]. Modern process simulators offer very stable and well-established algorithms for the solution. In case the column does not converge, it is often a trial-and-error procedure to test the various options. The convergence history and the error messages can give valuable information. The calculation of the column can only converge if there is both vapor and liquid on each stage. If the error messages or the profile indicate that vapor or liquid are missing on certain stages, one should change the specification in a way that the amount of the missing phase on these stages is increased. A typical error in the setup of a column is that the specified distillate or bottom flow is larger than the feed, in this case, the algorithm has of course no chance to get a solution. If column convergence does not work, it is often useful to change and simplify the column specification until its calculation converges and a profile is available, even if it is a completely wrong one. Slight variations of the specification towards the correct one often give a feeling about the sensitivities, and with a valid column profile as starting point it is easier to achieve convergence. Often, a specification “out of balance” occurs. Consider a mixture of 500 kg/h of component A (light end) and 500 kg/h of component B (heavy end). If e. g. the bottom stream is defined to be 510 kg/h, one cannot expect it to be pure B, as it contains at least 10 kg/h A, corresponding to approx. 2 %. On the other hand, at least 10 kg/h of component A are lost. One must be aware that a variation of the reflux ratio or the number of stages does not help at all in this case. Essentially there are two ways for the representation of a distillation column in the process simulator: the compact approach and the detailed approach. During process development, it is strongly recommended to use the compact approach (Figure 5.5) to reduce the effort when separation sequences are changed. When the process is fixed, it makes sense to move over to the detailed approach (Figure 5.6), where the results for the streams of the whole condenser system are accessible more easily so that they can directly be used for the design. The definition of so-called “pseudo-streams” to extract this information is not necessary. Multiple condensers on different temperature levels can easily be specified. Convergence is certainly a larger effort but usually not more difficult. However, the reflux stream must be estimated first, otherwise the upper stages might dry out, causing column calculation to stop. Another disadvantage is that the reflux ratio cannot be entered directly; it must be converted into a split ratio of the stream COND.
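The size of the equation system and the "out of balance" pitfall can both be checked with a few lines before the column is even set up in the simulator. The sketch below is not from the book; it merely repeats the numerical examples of the text, and the reflux ratio used for the split-ratio conversion at the end is an assumed value.

def mesh_equation_count(n_stages: int, n_components: int) -> int:
    """Number of MESH equations for an equilibrium-stage column: N * (2n + 3)."""
    return n_stages * (2 * n_components + 3)

print(mesh_equation_count(60, 20))   # 2580, as in the text

# Plausibility check of a bottom-flow specification against the feed (mass basis)
feed = {"A": 500.0, "B": 500.0}      # kg/h, light end A and heavy end B
bottom_spec = 510.0                  # kg/h
if bottom_spec > feed["B"]:
    slip = bottom_spec - feed["B"]
    print(f"Bottom product must contain at least {slip:.0f} kg/h of A "
          f"({slip / bottom_spec * 100:.1f} %), no matter how many stages are used.")

# Reflux ratio -> split fraction of the condensate stream COND (detailed column approach)
nu = 3.0                             # reflux ratio R/D, assumed
reflux_fraction = nu / (1.0 + nu)    # fraction of COND routed back to the column as reflux
print(f"Split fraction to reflux: {reflux_fraction:.2f}")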


Figure 5.5: Compact column approach. Screen images of Aspen Plus® are reprinted with permission by Aspen Technology, Inc. AspenTech® , aspenONE® , Aspen Plus® , and the Aspen leaf logo are trademarks of Aspen Technology, Inc. All rights reserved.

Figure 5.6: Detailed column approach. Screen images of Aspen Plus® are reprinted with permission by Aspen Technology, Inc. AspenTech® , aspenONE® , Aspen Plus® , and the Aspen leaf logo are trademarks of Aspen Technology, Inc. All rights reserved.

5.2 Packed columns

In packed columns, a packing volume distributes the liquid entering the bed at the top and creates a large mass transfer area between the liquid and the vapor, which are in countercurrent flow. Large packing volumes are divided into several beds. Between the beds, the liquid is collected and redistributed by a collector/distributor unit. This is necessary because the liquid tends to run together within the packing and concentrate near the column wall. Figure 5.7 shows the principal constitution of a packed column.


Figure 5.7: Constitution of a packed column. © Sulzer Chemtech Ltd.

Packed columns are mostly used at low pressures, where many systems exhibit larger separation factors,³ making a design with a lower tower height possible. Because of the low pressure drop, a lower pressure can be achieved in the stripping section of the column, and it is possible to limit the bottom temperature. For temperature-sensitive substances, this is often important to avoid decomposition. Packed columns are sensitive to fouling but less sensitive to foaming systems than tray columns (at least random packings). Because of the low holdup, they are more sensitive to operation changes. The separation efficiency depends on the specific surface, the kind of packing, the packing material and the kind of system. Packings can be made of metal, plastic, glass, graphite, or ceramic. There are two types of packed columns.
– Random packings: The development of random packing elements during recent decades is characterized by the aspiration to create structures that are more and more open (Figure 5.8). Packing elements of the 1st generation were spheres, cylindrical rings ("Raschig rings") or saddles. They were easy to manufacture, but their pressure drop was still high. Furthermore, the packing elements of the 1st generation often suffered from maldistribution, and their use was limited to small columns with
³ The reason is that the ratio between the vapor pressures of the components is usually larger at low temperatures.


Figure 5.8: Raschig-Ring, Pall-Ring, and ENVIPAC: packing elements of the 1st, 2nd, and 3rd generation. © Raschig GmbH, © ENVIMAC Engineering GmbH.

diameters below 500 mm. In the 2nd generation, the shell areas have been penetrated, which had already a drastic effect on the pressure drop. The most popular packing element of this generation is the Pall ring, which is still in use. In the 1970s and the 1980s, the packing elements of the 3rd generation came up, which only consist of the framework, and due to the large free cross-section area their pressure drop is even lower. The progress of this development is that the vapor load at constant column diameter can be significantly higher for the modern packing elements. Therefore, the change of the packing has become a standard option for a capacity increase. The progress in the separation efficiency was comparably low. The trend was continued in the 1990s with the development of the 4th generation packing elements with completely new shapes, giving extremely low pressure drops and high flooding points (Section 5.2). It must be emphasized that the importance of the distributor performance has increased, as the 4th generation packing elements have hardly any self-distribution. The most well-known example is the Raschig-Super-Ring (Figure 5.9). In 2018, with the Raschig- SuperRing Plus a further development was introduced. At a first glance, there seems to be no difference, however, they are different when it comes to the random placement within the column [254]. The Raschig Super Ring usually lies on its side or stands “upright”, whereas the Raschig Super Ring Plus leans diagonally, due to the different curve sequence (Figure 5.10). Therefore, the random packing layer in the column has a different structure. The free cross-flow area for the vapor flow perpendicular to the flow direction is larger, giving once again a lower pressure drop. Tests turned out that the pressure drop could be lowered by 10 %, which gives a further opportunity to increase the throughput. The larger the nominal size of a packing element is, the lower is its pressure drop, but also its specific surface and therefore its separation efficiency. The nominal size of a packing element should be lower than 10 % of the column diameter. Oth-


Figure 5.9: Raschig-Super-Ring as an example for a 4th generation packing element. © Raschig GmbH.

Figure 5.10: Raschig-Super-Ring and Raschig-Super-Ring Plus put on an even board.



erwise there are too many empty volumes at the area near the wall, where the liquid has no sufficient contact to the vapor phase. Structured packings: Structured packings have a regular geometry. They need appropriate distributors; if this is ensured there is hardly any stream formation or wall effect. At relatively low liquid loads (< 20 m3 /(m2 h), see next paragraph) structured packings are more effective than random packings. The separation efficiency does not depend on the column diameter. Structured packings have a larger maximum load, a better efficiency and lower pressure drops than random packings. An example is the Sulzer Mellapak (Figure 5.11). At liquid loads < 10 m3 /(m2 h) wired packings


Figure 5.11: Sulzer Mellapak as an example for structured packings. © Sulzer Chemtech Ltd.

(e. g. Montz A3, Sulzer BX) are another alternative, having an even lower HETP⁴ value (0.1–0.2 m, i. e. 5–8 theoretical stages per m) and even less pressure drop. On the other hand, wired packings must be well wetted, which is often not the case for aqueous systems. Furthermore, they are significantly more expensive and extremely sensitive to fouling. At higher liquid loads, their application does not make sense; their advantages do not become effective. It should be noted that structured packings are not more effective in general. Random packing is clearly the better choice at high liquid loads and if a larger holdup is required to generate residence time on the packing. Packed columns can fail if the vapor load is too high or the liquid load is too low. The vapor load is represented by the so-called F-factor:

$$F = w\sqrt{\rho}, \qquad (5.2)$$

where w is the vapor velocity referring to the free cross-flow area and ρ is the vapor density. It represents the square root of the kinetic energy of the vapor. Its unit Pa^0.5 is usually omitted. A reasonable order of magnitude of the F-factor is F = 2. F = 0.5 would represent a relatively low vapor load, F = 3 a relatively high one. The liquid load is the parameter in the set of curves. It is defined by

$$B = \frac{\text{liquid volume flow}}{\text{cross-section area}} \qquad (5.3)$$

⁴ Height equivalent of one theoretical plate, see Glossary.
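Both loads can be evaluated directly from the simulator results for the internal streams. The minimal sketch below assumes that the vapor mass flow, the vapor density, the liquid volume flow, and the column diameter are already known; the numbers are arbitrary, and for simplicity the velocity is referred to the empty column cross-section.

from math import pi, sqrt

def column_loads(vapor_mass_flow_kg_h: float, vapor_density_kg_m3: float,
                 liquid_volume_flow_m3_h: float, diameter_m: float) -> tuple[float, float]:
    """F-factor (eq. 5.2) in Pa^0.5 and liquid load B (eq. 5.3) in m3/(m2 h)."""
    area = pi / 4.0 * diameter_m**2
    w = vapor_mass_flow_kg_h / 3600.0 / vapor_density_kg_m3 / area   # superficial vapor velocity, m/s
    f_factor = w * sqrt(vapor_density_kg_m3)
    liquid_load = liquid_volume_flow_m3_h / area
    return f_factor, liquid_load

f, b = column_loads(vapor_mass_flow_kg_h=12000.0, vapor_density_kg_m3=2.0,
                    liquid_volume_flow_m3_h=25.0, diameter_m=1.2)
print(f"F = {f:.2f} Pa^0.5, B = {b:.1f} m3/(m2 h)")   # F close to 2, a "normal" liquid load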


Liquid loads can strongly differ; their range covers 100 m3/(m2 h) as a very high load, 10–40 m3/(m2 h) as "normal" liquid loads and 0.5–5 m3/(m2 h) as low ones. The liquid load can also be interpreted as the superficial velocity of the liquid, referring to the cross-flow area. The upper limit for the vapor load is the flood point, where the countercurrent flow between vapor and liquid breaks down. In this case, a froth layer is formed, the liquid is accumulated and finally carried over to the top. The flood point decreases with increasing liquid load, as the free cross-section area for the vapor flow is more and more filled with liquid. The minimum liquid load is a less strict criterion. If it is not reached, the wetting of the packing is so bad that the efficiency decreases significantly. As a rule of thumb, for random packings liquid loads of 10 m3/(m2 h) for aqueous systems and 5 m3/(m2 h) for organic systems can be considered as limiting values. For structured packings, nowadays very low liquid loads down to 0.2 m3/(m2 h) can be realized. The limiting factor is the quality of the distributor. One must always be aware that distributor and packing form a package which should be in the hand of one vendor. A badly chosen distributor type can have a significant influence on the performance of the packing [99]. For the design of packed columns their separation efficiency has to be regarded first. Manufacturers usually give HETP values as a function of the vapor and the liquid load. Often HETP also depends on the pressure and the kind of the system. Kister [100] gives the following rules of thumb for estimating HETP.
– For random packings:

$$\mathrm{HETP} = L \cdot \frac{93}{a_p} \qquad (5.4)$$

– For structured packings:

$$\mathrm{HETP} = K \cdot L \cdot \left(0.1\ \mathrm{m} + \frac{100}{a_p}\right) \qquad (5.5)$$

with ap as the specific surface area. Equation (5.4) is valid for modern (i. e. at least Pall rings) random packings with nominal diameters of 1″⁵ or larger. Unexpectedly, it has been found out that random packing elements smaller than 1″ do not necessarily have a lower HETP value [100], probably due to maldistribution effects. The factors K and L can be set as

L = 1      for σ < 25 mN/m (usual organic systems)
L = 1.5    for σ ≈ 40 mN/m (amine and glycol systems)
L = 2      for σ ≈ 70 mN/m (aqueous systems)
K = 1      for Y structured packings (45° inclination to the horizontal, e. g. Mellapak 250 Y)
K = 1.45   for X structured packings (60° inclination to the horizontal, e. g. Mellapak 250 X), ap ≤ 300 m2/m3

⁵ ″ = inch, 1″ = 25.4 mm. Another abbreviation is "in".
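The rules of thumb (5.4) and (5.5) translate directly into a few lines of code. The sketch below is only an illustration of these correlations; the specific surface areas are typical catalogue values used as placeholders.

def hetp_random(a_p: float, L: float = 1.0) -> float:
    """HETP in m for modern random packings, eq. (5.4); a_p in m2/m3."""
    return L * 93.0 / a_p

def hetp_structured(a_p: float, L: float = 1.0, K: float = 1.0) -> float:
    """HETP in m for structured packings, eq. (5.5); a_p in m2/m3."""
    return K * L * (0.1 + 100.0 / a_p)

# Aqueous system (L = 2) on 50 mm Pall rings (a_p ~ 110 m2/m3, catalogue value assumed)
print(f"Random packing:     HETP = {hetp_random(110.0, L=2.0):.2f} m")
# Organic system (L = 1) on a 250Y-type structured packing (a_p = 250 m2/m3)
print(f"Structured packing: HETP = {hetp_structured(250.0, L=1.0, K=1.0):.2f} m")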

The effect of pressure on packing efficiency is widely discussed but poorly understood, as maldistribution effects can always be involved as well. The prevailing opinion [95] is that the effect of pressure is low, at least at pressures p > 100 mbar. Below this value, it is suspected that the efficiency decreases for random packings, whereas for structured packings a decrease at high pressures (p > 15–20 bar) has been observed. With the HETP value, one can assign theoretical stages to the particular sections of the column and simulate it with an equilibrium model. One should have in mind that there are influences on the packing performance which are not covered by any calculation. The HETP value can be larger due to layers on the packing or bad wettability, which is often the case for aqueous systems. After making a first guess, the results should be discussed with the packing vendor to make sure that the required number of stages per m is met. The HETP values are determined by means of test measurements with an almost ideal, narrow-boiling binary mixture, e. g. chlorobenzene/ethyl benzene (Figure 5.12), cyclohexane/n-heptane, or isobutane/n-butane [99, 100]. The separations of these mixtures are sensitive to the number of stages, so that the evaluation is not difficult. From process simulation, one gets the liquid and vapor flows which encounter each other between the stages, including their properties like density and viscosity. They are the basis for the hydrodynamic calculation, which determines the column diameter.

Figure 5.12: Txy diagram for the system chlorobenzene/ethyl benzene at p = 100 mbar.


Figure 5.13: Montz Type S distributor for very low liquid loads. Courtesy of Julius Montz GmbH.

Usually, several load cases have to be regarded. The hydrodynamic considerations are working with simplified physical models (channels, particles circulated around, particle clusters). These models comprise equations for the flood point, the pressure drop, and the holdup, i. e. the liquid content of the packing during operation. They depend on each other in a complex way. Figure 5.14 illustrates the courses of pressure drop and holdup as functions of vapor and liquid load. The larger the liquid load, the larger is also the holdup of the packing. The holdup is the relative liquid content of the packing; it is a key quantity for the calculation models. It causes the pressure drop to rise, as for the flow of the vapor less cross-flow area is available. One can see that at the load point the drastic rise of pressure drop and holdup starts (Figure 5.14). These courses are difficult to reproduce with the calculation models. Therefore, a number of parameters adjustable to experimental data of the packing must be introduced. The most popular models are the one by Stichlmair [102], its further development by Engel [103] and the one of Billet and Schultes [104]. It is important to note that the correlations used have limited physical value and are optimized for use within the particular model. It makes no sense to mix them, e. g. calculate the holdup according to Billet/Schultes and the pressure drop according to Engel. The hydrodynamic calculation starts with the thermodynamic calculation of the column, which evaluates the liquid and vapor flows going from stage to stage. Using these loads, it is then checked whether a specified packing type fulfills a number of criteria. The design criteria for a packing are as follows.


Figure 5.14: Course of pressure drop and holdup as a function of vapor and liquid load. Courtesy of Prof. Dr. J. Stichlmair.


– Distance to flood point: The flood point denotes the vapor load where the liquid is accumulated in the packing and finally carried over the top. It depends on the liquid load. The particular calculation models [103, 104] have a built-in flood-point correlation. Their accuracy can be estimated to be ± 30 %. The Kister–Gill correlation [95] might be slightly more accurate but uses a packing-specific parameter. Therefore, the strategy is to set the vapor load in the case with the maximum load to 70 % to be on the safe side. One should try to get close to these 70 % rather than staying below them and setting additional safety margins. Otherwise, the vapor load in the minimum case might be too low. Furthermore, one should have in mind that the uncertainty of ± 30 % could also mean that 130 % of the calculated value could be the true flood point. Therefore, it might happen that the packing does not perform very well if the load is not adequate. Packing vendors often have more experience with the use of their products and can sometimes take the responsibility to make a design closer to the flooding point than 70 % (a small numerical sketch of this check follows after this list).
– System flooding: System flooding occurs if even large droplets are carried over the top by the vapor flow without the influence of column equipment, i. e. in the empty column. A correlation is given in [100] and [101]. Normally, system flooding is the last criterion which indicates flooding. It is, however, relevant in packings with very open structures, e. g. Mellapak 125X. Another application is the check of flooding data from vendors, which are definitely too optimistic if they exceed the system flooding limit.
– Load point: The load point refers to the vapor flow where the vapor starts to influence the shape of the liquid film. At this point, the holdup starts to increase strongly with the vapor flow rate. The efficiency of the packing is at its maximum, and one is far enough away from the flood point. One should try to operate the column at this load point, but as a design criterion it is not useful.
– Sufficiently low pressure drop: The pressure drop can be a direct criterion for the diameter of the column. In many applications, there is a limitation for the bottom temperature to avoid decomposition. In these cases, a certain pressure drop over the entire column must not be exceeded, as the pressure in the condenser is usually fixed. There are two pressure drops: the dry pressure drop and the pressure drop of the irrigated packing, which is the relevant one for design and where the dry pressure drop has a contribution. The pressure drop increases first linearly with increasing vapor load, with the liquid load as an additional parameter. Beyond the load point, the pressure drop rises more rapidly, and at the flood point theory says that it is infinite. In practice, flooding or, respectively, inoperability of the column is reached already at finite pressure drops. For the Sulzer packings, there is even a

special model where flooding is defined as the vapor load where the pressure drop is 12 mbar/m [105].
– Minimum liquid load: A minimum liquid load according to packing and distributor should be kept to avoid maldistribution. A lot of packing operation problems are due to maldistribution [106]. An adequate distributor must be chosen. Special distributors can handle very low liquid loads down to 0.04 m3/(m2 h) (Figure 5.13). A reference value for the number of droplet holes is 60–150/m2. As a rule of thumb, after a packed bed length of 6 m redistribution should take place, with additional space requirement for collector and distributor. The need for redistribution can vary. For thin columns, the accumulation of liquid at the column wall is more pronounced, so that redistribution must take place earlier. For high liquid loads, the packed bed length might be extended. Anyway, it is strongly recommended to ask the manufacturer about an appropriate maximum length of a packing bed. For vacuum application, the pressure drop of the distributor might be relevant. As correlations do not make too much sense due to the wide variety of distributor types, a value of Δp = 1 mbar is usually reasonable as a first guess.
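The 70 % design criterion and the ± 30 % model uncertainty mentioned in the first item can be combined into a simple check once a flood-point prediction is available. The sketch below is only an illustration; the flood-point F-factor is a placeholder that would normally come from one of the cited correlations or from the vendor.

def flooding_check(f_operating: float, f_flood_predicted: float,
                   max_flooding_factor: float = 0.70, model_uncertainty: float = 0.30) -> None:
    """Compare the operating vapor load with a predicted flood point (both as F-factors)."""
    flooding_factor = f_operating / f_flood_predicted
    print(f"Flooding factor: {flooding_factor:.0%} (design target <= {max_flooding_factor:.0%})")
    # The flood-point correlation itself is only good to about +/- 30 %:
    worst_case = f_operating / (f_flood_predicted * (1.0 - model_uncertainty))
    print(f"Flooding factor if the correlation overpredicts by {model_uncertainty:.0%}: {worst_case:.0%}")

flooding_check(f_operating=2.1, f_flood_predicted=3.0)   # assumed example values

For the example values, the nominal flooding factor is exactly 70 %, but if the true flood point were 30 % lower than predicted, the column would already be at its limit, which is the reason for not adding further arbitrary margins on top of the 70 % criterion.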

One should always have in mind that hydrodynamic calculations do not claim to be very accurate. An uncertainty of ± 30 % is a reasonable assumption. Figure 5.15 shows the load diagram of a packing, where the flooding line is depicted as a function of the liquid load. It can be seen that a minimum liquid load is necessary to reach the operation region. With increasing liquid load, the vapor load at

Figure 5.15: Load diagram for a packing.


the flooding point decreases, at high liquid load even rapidly. It should be mentioned that the design criterion of 70 % of the vapor load at the flood point (flooding factor 70 %) can be interpreted differently. It can mean that flooding occurs when both vapor and liquid load are increased simultaneously from 70 % to 100 %. This flooding factor is called FFLG and is usually relevant for the design of distillation columns. Another flooding factor can be defined if flooding occurs when only the vapor load is increased from 70 % to 100 %, maintaining the liquid load constant (FFL). This would be a reasonable quantity for the design of absorption columns. The engineer must decide which one is more relevant for the particular application case. Some aspects have to be considered when systems tend to foam:
– Generally, packed columns are less sensitive to foaming than tray columns, as the contact between vapor and liquid phase is less intensive.
– Liquid and vapor velocities are smaller in packed columns; both of them decrease foaming.
– Large dimensions (column diameter, random packing size) should be preferred.
– High pressure reduces foaming.

5.3 Maldistribution in packed columns

Rate-based calculations for random or structured packings are not really considered to be trustworthy. Although theory seems to be well-defined, large uncertainties occur. The larger the packing height, the more the always assumed piston flow pattern for the liquid deviates from the real distribution. This uneven distribution is generally called maldistribution, and it is responsible for a significant deterioration of the mass transfer. Currently, its prediction is hardly possible; there are just qualitative indications about the particular dependencies. While liquid distribution is currently subject to various investigations, it is widely accepted that the vapor distribution in a packed column is more or less homogeneous. Vapor distribution is a matter of pressure drop, and pressure-drop differences over the cross-flow area of the packing will result in less flow for regions with higher pressure drop and higher flow in regions with less pressure drop, ending up in an equal distribution of the vapor as long as the conditions in the particular channels are the same. However, in case of liquid maldistribution the vapor prefers to take the channels with less liquid, as the liquid occupies part of the free cross-flow area. Therefore, channels with more liquid are narrower, causing a larger pressure drop, which is in turn equalized by lowering the flow. Exceptions are columns with large diameters and low heights, where a special vapor distributor might be useful. Mainly, there are two different kinds of maldistribution: rivulet formation and the wall effect [275]. Rivulet formation is the merging of the liquid flow to larger rivulets. The effect increases with larger packing heights. Its reason is the surface tension of


Figure 5.16: Collector for the investigation of the maldistribution.

the liquid; the rivulet formation decreases the surface but, subsequently, also the mass transfer area between vapor and liquid. Rivulet formation is considered as a local phenomenon at single packing elements (small scale maldistribution). The wall effect is the tendency of the liquid to accumulate at the column wall, where the flow resistance is lower than in the central region. It is effective in large parts of the column (large scale maldistribution), especially in random packed columns. In structured packings there are wall deflector sheets, which prevent the wall effect. Large scale maldistribution can of course also be caused by an inadequate distributor which does not fit the packing. Moreover, after certain packing heights a collector-distributor unit must be installed, achieving a redistribution in the packing. Normally, there are rules of thumb for appropriate packing bed heights available in the guidelines of the particular engineering departments, ranging from 6 to 8 m. For small column diameters, lower packing bed heights should be taken into account. Collector-distributor units do not contribute to the mass transfer but cause additional pressure drop, additional column height and further investment costs. There is a large potential for improvement if the maldistribution of the liquid could be predicted; therefore, various attempts have been published [275]. The investigation principle is to install a liquid collector under a certain packing height which is divided into segments (Figure 5.16). For the investigation of the wall effect, a thin ring segment is installed at the column wall. Specially shaped segments can be used to examine the small scale maldistribution. Tracer substances can be used to investigate the radial mixing in the packing and the residence time distribution. A new approach is the attachment of sensors below the packing, with which time-dependent effects can also be investigated [274]. Recent investigations [275] show the following tendencies:
– At constant liquid load, the maldistribution increases with increasing vapor load, especially the wall effect. Beyond the load point, it increases drastically.
– The dependence on the liquid load is not as distinct as on the vapor load. Generally, the maldistribution decreases with increasing liquid load.


Figure 5.17: Maldistribution factor as a function of liquid and vapor load.

These tendencies can be illustrated in Figure 5.17, where Mf is the maldistribution factor

$$M_f = \sum_{i=1}^{k}\left(\frac{|B_i - B|}{B}\cdot\frac{A_i}{A_K}\right) \qquad (5.6)$$

with
B … liquid load
Bi … liquid load in segment i
Ai … cross-flow area of segment i
AK … column cross-flow area
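Equation (5.6) applied to a segmented liquid collector looks as follows. The segment areas and the measured liquid loads below are invented for illustration, and the overall liquid load B is taken here as the area-weighted mean of the measured segment loads.

def maldistribution_factor(segment_loads: list[float], segment_areas: list[float]) -> float:
    """Maldistribution factor Mf according to eq. (5.6).
    segment_loads: measured liquid loads B_i; segment_areas: collector segment areas A_i."""
    total_area = sum(segment_areas)
    mean_load = sum(b * a for b, a in zip(segment_loads, segment_areas)) / total_area
    return sum(abs(b - mean_load) / mean_load * a / total_area
               for b, a in zip(segment_loads, segment_areas))

# Four inner segments plus a thin wall ring collecting noticeably more liquid (wall effect)
loads = [9.0, 10.0, 9.5, 10.5, 16.0]      # m3/(m2 h), assumed measurements
areas = [0.23, 0.23, 0.23, 0.23, 0.08]    # m2, assumed collector geometry
print(f"Mf = {maldistribution_factor(loads, areas):.3f}")   # 0 would be a perfectly even profile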

5.4 Tray columns

The lack of a reliable calculation method does not cancel the necessity of providing a reasonable design for a tray. (Volker Engel)

In tray columns, a stage-wise mass transfer takes place by means of horizontal trays (Figure 5.18). A weir causes the accumulation of liquid coming from the tray above. Over this weir, the liquid leaving the tray goes into the downcomer to the tray below. The vapor can rise from tray to tray through openings characterizing the type of tray. When it passes the liquid, it is split into small bubbles with a large mass transfer area and distributed in the liquid. A so-called froth area is formed, providing good conditions for an intensive mass transfer. In fact, there are three ways in which a tray can be operated:


Figure 5.18: Constitution of a tray column. © Sulzer Chemtech Ltd.

– In the bubble regime the liquid is the continuous phase. The vapor as the disperse phase is rising through the liquid as bubbles. The bubble regime often occurs at high pressure applications.
– The froth regime is the preferred one. A froth layer with a large interphase between vapor and liquid is formed. Both phases are more or less continuous.
– At high vapor loads, low liquid loads or in vacuum, trays often operate in the spray regime; in this case, the vapor phase is the continuous one. Note that the application range of many correlations does not cover the spray regime. In the spray regime, weeping (Section 5.4) and unsealed downcomers (Section 5.4) should be avoided.⁶

⁶ The guideline in the spray regime is: hold on to your liquid!

In [95], the limiting formulas for the occurrence of the particular regimes are given. One should always try to get froth regime; spray regime should be avoided if possible. Essentially, there are three types of trays.
– Sieve trays: Through a sieve tray, the vapor from the tray below gets into the liquid through holes in the plate of the tray. It is dispersed in the liquid. Today relatively large holes up to d = 20 mm are used to avoid blocking by fouling. The maximum fractional hole area is 11–12 %. The vapor prevents the liquid from leaving the tray through the holes (weeping); nevertheless, for this purpose a sufficient vapor velocity is


necessary. After shutdown, sieve trays are emptied through the holes. Sieve trays (Figure 5.19) are quite simple to manufacture and have a relatively low pressure drop. They are also easy to clean. As the bubbles entering the tray from below are not diverted, sieve trays are prone to entrainment, meaning that droplets from the froth layer get to the tray above. As this liquid is transported in the wrong direction, the efficiency of the tray can be significantly lowered. Sieve trays are not well suited for large load variations. The turndown, i. e. the ratio of max. and min. load, is approx. 2 : 1 [107]. Due to their simplicity, sieve trays are easy to specify and to calculate. The tray efficiency of sieve trays is as good as for other tray types.

Figure 5.19: Sieve tray. Courtesy of Ludwig Michl GmbH.



– Bubble cap trays: The principle of a bubble cap tray is that the rising vapor entering the tray above is diverted by the bubble cap located above the holes (Figure 5.20). It enters the liquid parallel to the tray, in contrast to the sieve tray. Therefore, the entrainment is comparably low. Bubble cap trays are not emptied after shutdown. They have a good efficiency and a wide load range. However, the pressure drop and tendencies to fouling and corrosion are relatively high, the manufacturing is expensive, and the cleaning is difficult. Because of the large number of types,

bubble cap trays are also difficult to specify. The hydrodynamic calculations are complicated [108–111] and require a large number of input parameters. Nevertheless, some special constructions like the Bayer-Flachglocke (Figure 5.21) are still considered to be indispensable. Using extremely low weir heights, the pressure drop and the holdup on the tray can be kept very low.

Figure 5.20: Bubble cap tray. Courtesy of Ludwig Michl GmbH.

Figure 5.21: Bayer-Flachglocke. Courtesy of Ludwig Michl GmbH.


Figure 5.22: Valve tray. Courtesy of Ludwig Michl GmbH.

Figure 5.23: Typical course of the pressure drop of a valve tray. Courtesy of WelChem GmbH.



– Valve trays: On valve trays, the holes of the tray are covered by movable valves (Figure 5.22). Similarly to the bubble cap tray, the vapor enters the liquid parallel to the tray, giving less entrainment. The lift of the valve determines the opening area for the vapor; it is self-adjusting to the vapor load. Figure 5.23 shows a typical pressure drop characteristic of a valve tray. At low vapor loads, the valves are fully closed, and the vapor enters the tray through the open crevices of the valves. The pressure drop rises with increasing vapor load. At point CBP (closed balance point), the valves begin to open. Further increase of the vapor load leads to a wider opening of the valves, and the pressure drop stays constant. After they are fully open (point OBP, open balance point), the pressure drop at increasing vapor load rises again.




Thus, the valves are in some way self-adjusting to the vapor load. This is the main advantage of valve trays. Their turndown is much higher than the one of sieve trays (approx. 4.5 : 1 [107]). In fact, in Figure 5.22 there are two kinds of valves. One sort is equipped with an additional plate on the top, giving extra weight. At low loads only the light valves will open, and at high loads the heavy ones are working as well. This option gives even more flexibility at differing loads. The pressure drop of valve trays is between the ones of bubble cap and sieve trays. Weeping can usually be avoided. Valve trays are sensitive to fouling. The maximum fractional hole area is 13–14 %. Valve trays are about 20 % more expensive than sieve trays [107]. There are many types of valve trays available. The data of various manufacturers have been collected in professional software programs for hydrodynamic calculations. Valve trays are the most common trays; their market share is approximately 70 % [95].
– Fixed valve trays: Fixed valve trays are, in principle, sieve trays with a roof over the sieve holes (Figure 5.24). They do not move during operation, as the name suggests. The roofs might prevent some entrainment, as the vapor is at least diverted to the side openings. Nevertheless, the performance of fixed valve trays is pretty similar to that of sieve trays. The maximum fractional hole area is 12–13 %. The investment costs are approx. 10 % higher, and the main operation advantage is a slightly better turndown (approx. 2.5 : 1) [107].

Figure 5.24: Fixed valve tray. Courtesy of Ludwig Michl GmbH.



– Dualflow trays: The dualflow tray is more or less a sieve tray without downcomer and weir. Vapor and liquid go through the holes in countercurrent flow. The liquid goes down to the tray below by weeping. This is limiting for the vapor load. Dualflow trays are


often used when the capacity has to be increased for an existing column. This is achieved by using the cross-flow area of the downcomer as active area as well. Dualflow trays are well-suited for systems tending to polymerization. Due to the missing downcomer, they are slightly less expensive than sieve trays, but their turndown is supposed to be lower (approx. 1.5 : 1 [107]). The efficiency of a dualflow tray is approximately 80 % of a normal tray. In the recent years, several vendors developed trays with a significantly improved performance, the so-called high-performance trays. Well-known examples are the SUPERFRAC XT tray from Koch-Glitsch (Figure 5.25) or the UFMPlus™ tray from Sulzer ChemTech. Both capacity and efficiency are superior to conventional trays, and it is possible to reduce both column height and diameter drastically. Especially for large columns like C3 splitters with heights of approx. 100 m and diameters of 8–10 m high performance trays are an interesting and meanwhile well-established option for revamps or for reducing investment costs.

Figure 5.25: Eight-pass SUPERFRAC XT tray.

For modern high-performance trays, the downcomer, the active area and the inlet area have been optimized. Areas where the liquid is not moving are often equipped with so-called push-valves. Push-valves are fixed valves whose opening directs the vapor into a defined direction so that the liquid is pushed along and no dead zones can form where fouling could develop. A similar effect is achieved by the so-called multi-chordal downcomer, where the shape of the weir is designed in a way that dead zones are avoided (Figure 5.26). Umbrella-shaped valves like UFM™ (Figure 5.27) from Sulzer can minimize the liquid entrainment, while vapor and liquid still achieve excellent contact for an effective mass transfer. A turndown of 5 : 1 illustrates the flexibility of the tray.


Figure 5.26: Flow profiles for a conventional tray and a SUPERFRAC tray with multi-chordal downcomer.

Figure 5.27: UFM™ valve. © Sulzer Chemtech Ltd.

The prisma downcomer (Figure 5.28) increases the vapor capacity and therefore reduces the pressure drop and the probability of downcomer flooding.7

7 See the explanation in the text below Equation (5.11) in this section.

Because of the hydrostatic pressure of the liquid on a tray, which has to be passed by the vapor, tray columns have a significantly higher pressure drop than packed ones. This is a disadvantage especially for vacuum distillations. The elevated pressure leads to higher temperatures at the bottom of the column, and the residence times on the trays are larger because of the greater holdup. This gives higher decomposition rates; usually, the substances exposed to vacuum distillation are sensitive to high temperatures. Tray columns are relatively insensitive to liquid load variations and fouling, but susceptible to foaming. For small column diameters (< 800 mm), manholes do not make much sense. Tray columns are then designed as


Figure 5.28: UFMPlus™ tray with prisma downcomer, UFM™ valves and push valves. © Sulzer Chemtech Ltd.

so-called cartridge columns (Figure 5.29), which often have sealing problems at the column wall.
For the specification of trays, the main geometric data are defined in Figure 5.30. In this context, the particular abbreviations mean:
IDcol    inner column diameter
AA       active area
TS       tray spacing
LW,out   outlet weir length
LW,in    inlet weir length
ADC      downcomer area, top
WDC      downcomer width
HCL      downcomer clearance
FPL      flow path length
HW,in    height of inlet weir
HW,out   height of outlet weir
LCL      apron length
WRSP     width of inlet weir
Generally, the residence time on trays is not large enough to reach equilibrium between the vapor and the liquid phase. For the thermodynamic calculation of a column, one of the standard procedures is to take three trays for two stages, e. g. to simulate a 60-tray column with 40 theoretical stages. Although this is often successful, this approach has difficulties with widely boiling systems, where the column is far away from equilibrium.


Figure 5.29: Cartridge column. Courtesy of Ludwig Michl GmbH.

For the equilibrium calculation, it is better to characterize the quality of a tray by the so-called Murphree efficiency:

EM = (yn − yn+1) / (yeq(xn) − yn+1)    (5.7)

where yeq is the equilibrium concentration and n + 1 refers to the tray below tray n. The Murphree efficiency always refers to a certain component. Normally it is in the range EM = 0.6–0.7, corresponding to the approach with the reduction of stages described above. There is a problem with the product streams of the column. If Murphree efficiencies are used, these product streams are not in equilibrium as expected. To avoid simulation errors with pieces of equipment that require a single phase at the inlet (compressors, pumps), an efficiency of EM = 1 should be assigned to the trays where product streams leave the column, especially to the condenser and the reboiler. The approach of taking three trays for two theoretical stages is certainly too simple to be fully correct. High viscosities of the liquid and high separation factors have a negative effect on the Murphree efficiency. Duss and Taylor [273] have recently modified the widely used O'Connell correlation [112]; their result is

EM = 0.503 (η/mPas)^−0.226 σ^−0.08    (5.8)


Figure 5.30: Dimensions in tray geometry. Courtesy of WelChem GmbH.

with

σ = mG/L    for mG/L > 1    (5.9)

or

σ = (mG/L)^−1    for mG/L < 1    (5.10)

where m is the slope of the equilibrium line and G and L are the vapor and liquid flows. At high pressures (p > 10 bar), there is a negative effect due to increased entrainment [112]. Of course, the strongest negative effect on tray efficiency is caused by maldistribution.

8 Although the effect on m representing the vapor-liquid equilibrium might spoil the result.
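To make the use of Equations (5.7)–(5.10) concrete, the following minimal Python sketch evaluates the modified O'Connell correlation for an assumed liquid viscosity and stripping factor. All numbers are illustrative only and not taken from a real column.

```python
# Minimal sketch: estimate a tray efficiency with the modified O'Connell
# correlation, Equations (5.8)-(5.10). All numbers are illustrative only.

def stripping_parameter(m: float, G: float, L: float) -> float:
    """sigma according to Eqs. (5.9)/(5.10): mG/L or its reciprocal,
    whichever is larger than 1."""
    lam = m * G / L
    return lam if lam > 1.0 else 1.0 / lam

def murphree_efficiency(eta_mPas: float, sigma: float) -> float:
    """Eq. (5.8): EM = 0.503 * eta^-0.226 * sigma^-0.08 (eta in mPa s)."""
    return 0.503 * eta_mPas ** -0.226 * sigma ** -0.08

# Assumed example values: liquid viscosity 0.3 mPa s, slope of the
# equilibrium line m = 1.3, molar vapor-to-liquid ratio G/L = 1
sigma = stripping_parameter(m=1.3, G=1.0, L=1.0)
EM = murphree_efficiency(eta_mPas=0.3, sigma=sigma)
print(f"sigma = {sigma:.2f}, estimated Murphree efficiency EM = {EM:.2f}")
```

With these assumed inputs the result is EM ≈ 0.65, i. e. within the typical range of 0.6–0.7 quoted above.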

Figure 5.31: Load diagram of a sieve tray. Courtesy of Prof. Dr. J. Stichlmair.

The use of the Murphree efficiency has another advantage: the numbering of trays is different in simulation and in construction. In process simulation, the trays are numbered from top to bottom. Reboiler and condenser are counted as stages. In construction, the trays are numbered from bottom to top. Therefore, a confusing and error-prone renumbering procedure has to be performed. It is much easier to assign the trays in the construction when the Murphree efficiency is used. If the reduction of stages had been applied, it is much more complicated, not to mention that the factor is often forgotten when the overall pressure drop of the column is evaluated.

The hydrodynamic calculation of tray columns follows a different philosophy than that of packed columns. While for packed columns self-contained models with certain physical foundations have been developed, only individual and incoherent correlations are available for the particular failure criteria of tray columns. These criteria are in most cases empirical but quite well confirmed due to the great experience with the use of distillation trays. The correlations have limited accuracy and applicability. If different ones are compared, contradictions often occur. Often, the correlations are based on measurements with the water/air system. The particular correlation equations can be found in [95, 97, 98]. Figure 5.31 shows the load diagram of a sieve tray, which illustrates the range where the sieve tray can be operated with a good efficiency. It is typical for trays that the range for the vapor load is comparatively small, while the liquid load can be widely varied. The upper limits can be interpreted as absolute limitations; when they are exceeded, the tray cannot be operated. The lower limits are recommendations; falling below them might cause a bad efficiency of the tray. The criteria for limiting the operation range of a tray are as follows.
– Froth height: The height of the froth layer should be lower than the tray spacing, with a certain margin.
– Entrainment and jet flood: Entrainment is the carry-over of liquid droplets to the tray above. This would lower the tray efficiency significantly and has a drastic impact if high purities of the overhead product are required. The entrainment usually determines the maximum vapor load of the column. There are correlations available which can evaluate the order of magnitude of the entrainment. Besides the column diameter and the diameter of the tray holes, the tray spacing has a decisive influence on the entrainment (Figure 5.32). Common tray spacings are 350–600 mm; in North America, 600 mm is more or less generally used. The entrainment reacts sensitively to variations of these parameters; for example, an increase of the vapor load by a factor of 10 can increase the entrainment rate by 4–5 orders of magnitude. For a new tray design, it is best to keep the entrainment as small as possible. For a check of an existing column, 10 % entrainment might be tolerable as long as very high purities are not required. If the entrainment is so high that not only droplets but larger parts of the liquid are carried over, the jet flood case is reached as the limit of operability. Massive liquid entrainment has the consequence that liquid is impounded on the trays, and finally, the column floods. While normal entrainment just leads to a bad tray efficiency, jet flood makes the operation of the tray

impossible. The measures to avoid jet flood are to increase the active area or to use smaller holes, as long as there is no danger of plugging. As mentioned above, more tray spacing helps.

Figure 5.32: Influence of tray spacing on liquid entrainment.9 Courtesy of Prof. Dr. J. Stichlmair [113].



– Pressure drop: The pressure drop is an indication of the vapor load. It is not a direct limitation; the increase of the bottom temperature plays a large role in vacuum columns, where trays are rarely used. If the pressure drop is too large, the downcomer can show flooding (see below). The pressure drop can be split into three parts. As for packed columns, a tray has a dry pressure drop, which would occur even if no liquid is on the tray. The second part is the static pressure drop due to the height of the liquid on the tray, which is at least equal to the weir height. The third part takes the side effects into account, like liquid height over the weir, bubble formation

9 Read the diagram as follows: If the abscissa value drops below 0.08 (transition from bubble to froth regime), take wG /wBl as coordinate.


or spraying. A typical pressure drop of a tray is in the range 5–10 mbar. From this structure, a minimum pressure drop of the tray can be estimated from the second contribution:

Δpmin,tray = ρL g hW    (5.11)

If a calculated pressure drop is below that limit, there is a strong suspicion that something is wrong;10 either the pressure drop calculation is in error or the assumption that all trays are operating properly does not hold (e. g. strong weeping). Pressure drops larger than 15 mbar per tray might be an indication of flooding, while even higher pressure drops can lead to the destruction of the column.11 A simple numerical check of this limit, together with some other rules of thumb from this section, is sketched after Table 5.1.
– Downcomer choke flooding: If the friction losses in the downcomer are too large, the liquid cannot go down to the tray below and will accumulate. The highest friction losses occur at the downcomer entrance. The reason is vapor formation from the degassing in the downcomer. The vapor carried into the downcomer must separate from the liquid and disengage in countercurrent flow to the liquid entering the downcomer. When the combination of exiting vapor and entering liquid becomes excessive, the downcomer entrance is choked, causing the liquid to back up on the tray. Therefore, one common measure is the increase of the downcomer width. As a rule of thumb, it should be at least 10 % of the column cross-section area; however, this depends strongly on the individual situation. A sloped downcomer is often useful, as it influences the friction losses at the downcomer entrance selectively (Figure 5.33). For the ratio between upper and lower downcomer width, 2 : 1 is a reasonable value. Downcomer choke flooding preferably occurs at relatively high pressures (p > 7 bar) and/or high liquid rates.

Figure 5.33: Sloped downcomer.

10 Exception: spray regime. 11 The tray spacing should remain constant during operation.




– Minimum downcomer residence time: The residence time in the downcomer is strongly related to downcomer choke flooding. It should be large enough so that the liquid has enough time for degassing. There are various recommendations; they refer to the apparent residence time, which is defined by the ratio of the downcomer volume and the clear liquid flow in the downcomer [95]. The minimum value is 3 s, which should really be kept. In this context, it should also be mentioned that the clear liquid height in the downcomer, i. e. the part really filled with liquid and not with a froth layer, is usually less than 50 % of the tray spacing. There are correlations for the clear liquid height; the difference from the apparent liquid height, which also includes the froth layer, is often significant and must be considered when the residence time in the downcomer is evaluated. A similar criterion to the minimum residence time is the maximum downcomer velocity. Calculated as the clear liquid velocity in the cross-flow area of the downcomer inlet, it should be less than 0.06–0.18 m/s [95].
– Downcomer backup flooding: As Figure 5.34 indicates, there is a certain condition for the discharge of the liquid from the downcomer:

p2 − p1 = Δptray < ρL g hL,DC − Δpfriction    (5.12)

If this condition is not fulfilled, no more liquid can be discharged to the tray below. Certainly, the contribution of Δpfriction is caused by the liquid load and is related to downcomer choke flooding, but often an excessively high tray pressure drop is the reason for this type of flooding. This means that finally the downcomer failure is caused by the vapor load. Therefore, in this case the increase of the downcomer width, the typical measure against downcomer failure, makes things even worse; the increase of the downcomer cross-flow area reduces the active area of the tray, and therefore the vapor velocity and the pressure drop of the tray increase. If possible, a reduction of the downcomer width might help or, of course, an increase of the tray diameter. In many cases, the pressure drop for the

Figure 5.34: Hydrostatic and pressure drop on a tray.




passing of the apron is the reason for downcomer backup flooding. In this case, the downcomer clearance should be increased, i. e. the distance between the tray and the lower end of the apron (HCL in Figure 5.30). Often, the outlet weir height has to be increased as well to ensure a liquid seal. If the downcomer clearance is larger than the outlet weir height, vapor could bypass the tray above by taking the path through the downcomer. In fact, this is not a strict criterion, as the liquid seal usually persists during operation even if the outlet weir height is slightly lower, but design programs might give a warning in this case.12 A small tray spacing can also be responsible for downcomer backup flooding. An increase in tray spacing might be useful; for large tray spacings like TS = 600 mm, downcomer backup flooding is hardly possible. Downcomer backup flooding is not likely in the spray regime.
– Weir load: To ensure a uniform flow of liquid over the weir, the weir load should be in the range

2 m3/(m h) < weir load < 60 m3/(m h)    (5.13)

For large columns, weir loads up to 100 m3 /(m h) can be accepted. The weir length should be larger than half the column diameter. A common measure to reduce the weir loads is the use of multipass trays for column diameters d > 2000 mm (Figure 5.35). The increase of the column diameter is not effective; the weir length only increases linearly. The capacity of the weirs can then become the limiting factor; large gradients could be the consequence, giving excessive weeping (see below) on one side of the tray and complete entrainment on the other side (Figure 5.37). The multipass option provides more weir length on a tray and can overcome this difficulty. A criterion similar to Equation (5.13) is the condition that the liquid height above the weir should be in the range 5 mm … 38 mm.

Figure 5.35: Two-pass column. Courtesy of WelChem GmbH.

12 For additional measures to ensure liquid sealing, see Chapter 5.7.


Figure 5.36: Weirs with rectangular and V-notches. Courtesy of WelChem GmbH.



At low liquid loads, the liquid flow on a tray will become inhomogeneous. The weir load can be used as an indicator according to Equation (5.13).13 If it is below the lower limit, the use of notched weirs might be an alternative, especially if large fluctuations of the liquid load occur (Figure 5.36). At low liquid loads (e. g. in the spray regime), only the lower part of the notches is charged, giving a lower effective weir length. For the V-notches, the weir length is increased steadily according to the increase of the liquid load. They are used when large ranges for the liquid load have to be covered. The blocked weirs with rectangular notches (picket-fence weir) are used if the weirs are too long, e. g. the central weirs in a two-pass column. In this case, the height of the spikes is about as high as the froth layer on the tray.
– Minimum vapor load: If the vapor load is too low, the liquid on the trays comes in contact with the vapor in an irregular way. The vapor will then prefer the area near the weir, as there is less liquid due to the gradient on a tray. On the other side of the tray, the height of the liquid is larger and the vapor will avoid passing the tray there. The liquid is no longer prevented from going through the holes, which is called "weeping". Weeping is less dramatic than entrainment. In contrast to entrainment, the liquid goes to the tray below as intended. A decrease of the tray efficiency can be the consequence, as residence time on the tray is lost. Normally, 10 % weeping can be tolerated. There are correlations for the weeping rate [95] or for the minimum vapor velocity to avoid weeping at all. If weeping occurs, one must be careful, as the correlations do not take into account that the tray openings for the vapor are occupied by the weeping liquid. Therefore, the pressure drop can be larger than calculated. The first measure to avoid weeping is the reduction of the hole diameter. The worst form of weeping is the so-called vapor cross-flow channeling (Figure 5.37). At high liquid loads (weir load > 50 m3/(m h)), large fractional hole areas, large tray diameters and pressures p < 7 bar, the liquid can build up a hydrostatic gradient after entering the tray at the downcomer apron. At the outlet weir, there is much less liquid height than at the downcomer inlet. The vapor chooses the way which has the lowest pressure drop and passes the liquid on

13 Another criterion is the height over the weir; it should be more than 5 mm.


Figure 5.37: Vapor cross-flow channeling. Courtesy of WelChem GmbH.



the tray near the outlet weirs, maybe with significant entrainment. On the other hand, there is accumulation of liquid at the downcomer inlet, and weeping occurs as the vapor avoids going to this region. Entrainment and weeping occur on the tray simultaneously. The weeping is detrimental to the efficiency in this case: the weeping liquid bypasses two trays, as it is directly transported from the tray inlet to the outlet of the tray below [107].
– System flooding: For system flooding, the same statements can be given as for packed columns (Section 5.2). System flooding might occur for very large tray spacings (TS > 1000 mm), or for dualflow trays with TS = 600 mm.
As a design strategy, the following guideline might be helpful:
– Estimate the column diameter with wG = 1 m/s.
– Downcomer choke flooding → enlarge downcomer.
– Downcomer backup flooding → enlarge active area.
– Entrainment/jet flood → enlarge tray spacing and/or active area.
– Always check sensitivities.
– If nothing succeeds → increase/decrease column diameter.

For the first guesses, Table 5.1 could be useful.

Table 5.1: Reasonable guesses for tray data [113].

                             Vacuum           Ambient pressure    High pressure
Tray spacing                 0.4–0.6 m        0.4–0.6 m           0.3–0.4 m
Weir length (Lw)             0.5–0.6 dB       0.6–0.75 dB         0.85 dB
Weir height (hw)             0.02–0.03 m      0.03–0.07 m         0.04–0.1 m
Downcomer clearance          0.7 hw           0.8 hw              0.9 hw
Bubble cap diameter (dbc)    0.08–0.15 m      0.08–0.15 m         0.08–0.15 m
Bubble cap distance          1.25 dbc         1.25–1.4 dbc        1.5 dbc
Valve diameter (dv)          0.04–0.05 m      0.04–0.05 m         0.04–0.05 m
Valve distance               1.5 dv           1.7–2.2 dv          2–3 dv
Sieve hole diameter (dh)     0.01 m           0.01 m              0.01 m
Sieve hole distance          2.5–3 dh         3–4 dh              3.5–4.5 dh
Fractional hole area         10–15 %          6–10 %              4.5–7.5 %
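As announced above, the following minimal Python sketch combines a few of the rules of thumb quoted in this section (the minimum tray pressure drop according to Equation (5.11), the weir load window of Equation (5.13), and the minimum downcomer residence time of 3 s) into quick plausibility checks. All input values are assumed example numbers, not a real design.

```python
# Rough plausibility checks for a tray design, based on rules of thumb quoted
# in this section. All input values below are assumed example numbers.
g = 9.81  # m/s2

def min_tray_pressure_drop_mbar(rho_L: float, h_weir: float) -> float:
    """Static contribution according to Eq. (5.11), returned in mbar (1 mbar = 100 Pa)."""
    return rho_L * g * h_weir / 100.0

def check_weir_load(liq_flow_m3h: float, weir_length_m: float) -> str:
    """Eq. (5.13): the weir load should lie between 2 and 60 m3/(m h)."""
    load = liq_flow_m3h / weir_length_m
    if load < 2.0:
        return f"weir load {load:.1f} m3/(m h): too low, consider notched weirs"
    if load > 60.0:
        return f"weir load {load:.1f} m3/(m h): too high, consider multipass trays"
    return f"weir load {load:.1f} m3/(m h): within the recommended range"

def downcomer_residence_time_s(dc_volume_m3: float, clear_liq_flow_m3h: float) -> float:
    """Apparent residence time; the recommendation above is at least 3 s."""
    return dc_volume_m3 / (clear_liq_flow_m3h / 3600.0)

# Assumed example: water-like liquid, 50 mm outlet weir, 40 m3/h clear liquid,
# 1.4 m outlet weir length, 0.12 m3 downcomer volume
print(f"minimum tray pressure drop: {min_tray_pressure_drop_mbar(1000.0, 0.05):.1f} mbar")
print(check_weir_load(40.0, 1.4))
tau = downcomer_residence_time_s(0.12, 40.0)
print(f"downcomer residence time: {tau:.1f} s ({'ok' if tau >= 3.0 else 'too short'})")
```

Such checks cannot replace the vendor correlations mentioned above; they merely indicate whether a calculated design is in a physically plausible range.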

Beyond the conventional design, tray columns can cause trouble which is usually not expected. Regularly, foaming turns out to be a knock-out criterion for tray columns [114]. There are few countermeasures:
– The consideration of a foam factor in the design just scales the residence time in the downcomer. It could only be useful if the foaming tendency could be assigned numerically. This is still a research topic [115].
– Mechanical foam destruction should be considered as an academic fantasy. Having movable parts on a tray sounds expensive, and the author is not aware of any success stories.
– Replacing a thermosiphon reboiler by a falling film evaporator or, respectively, exchanging trays by packing might be successful, but it is quite an expensive trial without guarantee.
– Finding the reason (usually an unknown substance in the ppb region) or trying to make use of the theory of foaming [116] has almost never been reported to have solved a problem.
– Foam formation on the trays is only possible for low vapor loads in the bubble regime. The foam is rapidly destroyed at the transition to the froth regime, after the vapor load has been increased [113]. However, foam formation in the downcomer is not affected in this way.
– The only way to fight foaming right on the spot seems to be the use of an antifoaming agent. As long as it can be tolerated in the bottom product, this is the only strategy which has turned out to be successful in the long run. Finding an appropriate antifoaming agent is a science of its own, but vendors usually have specialists who can give valuable advice.
Another kind of problem with tray columns is vibration, which has been observed especially on one-pass sieve and valve trays with large diameters (> 2000 mm) at


comparatively low vapor loads. Within hours, trays have been seriously damaged. Just increasing the mechanical stability of the tray usually does not help. Instead, an increase of the vapor load or the use of notched weirs should additionally be taken into account. There are a few explanations in the literature [117–120]. It seems that the vapor enters the tray above discontinuously, as jet pulses through the holes. When these pulses synchronize and act in phase, the resonance frequency of the tray might be excited [95]. Certainly, a thorough understanding of this phenomenon has not yet been achieved.

5.5 Comparison between packed and tray columns

The criteria for a comparison between packed and tray columns can be set up in the following way.
– Pressure drop: The pressure drop in packed columns is significantly lower than in tray columns, as the vapor does not have to pass a liquid layer and the narrow holes, which have a cross-flow area of max. 14 % of the active area. For well-designed packings, the pressure drop can be expected to be 1–2 mbar/m, whereas it is 5–10 mbar per tray. Considering approximately two theoretical stages per m in a packed column and a Murphree efficiency of 67 % (i. e. three trays are two theoretical stages), the pressure drop per theoretical stage is lower by a factor of 7.5–30 in a packed column (the arithmetic is reproduced in the short sketch at the end of this section). Lower pressure drops give lower pressures in the bottom of the column, and at lower pressures the relative volatility is in most cases higher. Furthermore, lower pressures correspond to lower temperatures, meaning less degradation of the products, and it might be possible to use steam with a lower pressure. For high vacuum columns, the use of packed columns is more or less obligatory.
– Theoretical stages per m: The first guess is that there are two theoretical stages per m in a packed column; usually there are slightly more. For a tray spacing of 500 mm, there are two trays per m, which, however, represent about 1.3 theoretical stages due to their efficiency. The end tray has to be counted as well, meaning that for the above-mentioned tray spacing of 500 mm there are 11 trays to be placed on 5 m column height. On the other hand, in packed columns some space is always lost for collectors and distributors (10–20 % acc. to [260]). In fact, it must be examined case by case where finally more theoretical stages per m are obtained. The author would guess that in most cases the packed column is advantageous.
– Flexibility: Tray columns have a good turndown (2 : 1 – 4.5 : 1, according to tray type). So do packings, but the distributors usually do not. Therefore, packed columns are definitely inferior to valve trays from that point of view. In tray columns, side draws














are easy to provide. Usually, they are designed on several trays, and optimization is performed during operation.14 For packed columns, side draws are a major constructive issue; they are related to the presence of a collector and a distributor.
– Fouling: Packed columns are more sensitive to fouling, especially structured packings and, even more so, wired-mesh packings. Sieve trays with large holes (> 10 mm) can cope with fouling quite well. They can even handle certain amounts of solids.
– Foaming: Foaming is a strong argument against tray columns. Columns with random packing can at least cope with limited foaming, whereas columns with structured packing perform worse [107]. For systems which are prone to foaming (aldehyde systems, caustic absorptions), this is a major item for the equipment choice.
– Column diameter: Tray columns are well established for large column diameters. Multipass trays are often used. There is also no objection to using packed columns for large diameters. Sieve trays with large diameters can be subject to vibrations. For small column diameters (< 800 mm), packed columns are advantageous; tray columns can only be realized as cartridge columns (Figure 5.29).
– Loads: Tray columns can cope with large variations of the liquid load. They are relatively sensitive to variations of the vapor load. Random packed columns have difficulties with low liquid loads, whereas structured packings have difficulties with high liquid loads, as gas bubbles cannot be released in the narrow channels. If they are captured, they will go down a long way. In contrast, random packings can easily get rid of gas bubbles, and tray columns transport them one tray down in the downcomer. Aqueous systems are difficult for any kind of packing due to the high surface tension. In contrast, trays have no problems with wetting at low liquid loads or with aqueous systems.
– Residence time: Trays provide a well-defined residence time, so they are advantageous when desired chemical reactions are to occur in the column. On the other hand, packed columns are advantageous to avoid undesired reactions. In this case, one should take care that the reboiler has a small holdup and residence time as well.
– Heterogeneous systems: For systems showing liquid-liquid equilibria, the design of packings is weakly founded. Trays have no problem, as in the froth layer the separation of the two phases does not take place.

14 Side draws must not be taken at the bottom of the downcomer, as gas bubbles might lead to choking or pump cavitation.




– Safety: In recent years, a number of fires in structured packings have occurred; the reason for this is the large surface per mass, as the thickness of the metal is just about 0.1 mm [121].

A comprehensive comparison of trays and packings can be found in [260].
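The factor of 7.5–30 quoted in the pressure drop item above follows directly from the numbers given there; the short Python sketch below merely reproduces the arithmetic and introduces no additional data.

```python
# Reproduces the pressure drop comparison quoted above: packings with
# 1-2 mbar/m and about two theoretical stages per m, trays with 5-10 mbar
# each and a Murphree efficiency of about 67 % (three trays = two stages).
packed_per_stage = (1.0 / 2.0, 2.0 / 2.0)     # mbar per theoretical stage
tray_per_stage = (5.0 * 1.5, 10.0 * 1.5)      # mbar per theoretical stage

factor_low = tray_per_stage[0] / packed_per_stage[1]    # best case for trays
factor_high = tray_per_stage[1] / packed_per_stage[0]   # worst case for trays
print(f"packed: {packed_per_stage[0]:.1f}-{packed_per_stage[1]:.1f} mbar per stage")
print(f"trays : {tray_per_stage[0]:.1f}-{tray_per_stage[1]:.1f} mbar per stage")
print(f"ratio : roughly {factor_low:.1f} to {factor_high:.0f}")
```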

5.6 Distillation column control

In process engineering, typical control-oriented modeling methods have not become as widely accepted as they are in electrical and aerospace engineering. Control engineering is not a phase of its own; instead, it is distributed over the life cycle of a plant [122]. The main part of the control strategy is already fixed in the conceptual phase and tested during the piloting. The draft of the control scheme is done according to the experience from other projects. In-depth knowledge of control engineering is required if the proposed strategy fails in operation or some optimization potential is presumed. During the engineering, a systematic analysis of the control strategy hardly ever takes place. Things might change with the upcoming training simulators and advanced process control projects. The strongest reason for this is that plant models are usually specific to a given plant. Their development is a complex, large, and expensive effort which has to be justified. Often, during the model development the knowledge of the process increases to a point where ways are found to meet the targets with conventional process engineering methods as well. The process models are not only extraordinarily complex but also time-dependent due to catalyst aging or fouling on heat exchangers. During the life cycle of a plant, many changes take place, such as capacity increases, heat integration, improved catalysts, or changed product specifications. In these cases it would be necessary to adjust the model and update the control parameters. The discrepancy between the large effort for an adequate control engineering and the often simple but successful solutions is one of the main reasons for the skepticism of practitioners toward theoretically based control engineering. Especially for distillation plants, a simple, intuitive control strategy is usually chosen to meet the demand for a sufficient and constant quality. Generally, there are two types of variables to be controlled in distillation [96]. The control of column pressure and liquid levels ensures that no accumulation of liquid (levels) or vapor (pressure) occurs. Otherwise, a steady-state operation of the column would not be possible. Maintaining the levels and the pressure just keeps the column in stable operation. For the product quality, the composition of the products has to be kept within the specification limits. For this purpose, some kind of composition control must take place. The special difficulty in distillation control is that the product concentration can hardly be measured effectively and fast enough. Even gas chromatography as a


Figure 5.38: Temperature profiles at various overhead concentrations [89]. © Wiley-VCH Verlag GmbH & Co. KGaA. Reproduced with permission.

relatively fast analyzing method with a wide application range has too long response times because of the distance between sample point and detector. In a control cycle, it would be a great dead-time element. Further problems are high investment and maintenance costs and a possible phase separation of the sample, which could spoil the result. In fact, there are few constructions of this kind,15 but the usual way for the quality control of the product uses the temperature profile as an indicator for the product composition. The sensitivity of the temperature profile to the product concentration is exemplarily illustrated in Figure 5.38. The diagram shows the simulation of a distillation column separating the binary system n-hexane/n-heptane on 50 theoretical stages at ambient pressure, where the reflux ratio has been varied. The three cases refer to overhead concentrations of n-hexane of 99.98 wt. %, 99.9988 wt. %, and 99.99984 wt. %, respectively. These purities each differ by one order of magnitude. The boiling points of these overhead products can hardly be distinguished by measurement in column operations and are not appropriate as a control signal. However, the temperature profiles are significantly different. The largest differences seem to occur on stage 20. The temperature of this stage could therefore be used as a control variable for the overhead concentration.16 There have been discussions on whether vapor or liquid temperatures should be used for composition control. In fact, both options work. The control by vapor temperatures shows the faster response but might be erroneous if weeping occurs.

15 e. g. oxygen analyzers.
16 Unfortunately, this stage is relatively far away from the top of the column. Using this temperature as a control variable might lead to a slow response behavior.
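As a small illustration of how such a control stage can be selected, the following Python sketch compares two simulated temperature profiles and picks the stage with the largest difference. The profiles below are made-up placeholders, not the n-hexane/n-heptane results of Figure 5.38.

```python
# Minimal sketch: choose the control stage as the one where two simulated
# temperature profiles (base case vs. changed reflux) differ the most.
# The profiles below are made-up placeholders, not the simulation results
# of Figure 5.38.

def most_sensitive_stage(profile_a, profile_b):
    """Return the 1-based stage number with the largest temperature difference."""
    diffs = [abs(a - b) for a, b in zip(profile_a, profile_b)]
    return diffs.index(max(diffs)) + 1

# Placeholder profiles for a 10-stage column, temperatures in deg C, top to bottom
base  = [69.0, 69.1, 69.3, 69.8, 71.0, 73.5, 77.8, 83.0, 88.5, 93.0]
purer = [68.9, 68.9, 69.0, 69.2, 69.8, 71.5, 75.5, 82.0, 88.2, 92.9]

stage = most_sensitive_stage(base, purer)
print(f"largest temperature sensitivity on stage {stage} (counted from the top)")
```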


Figure 5.39: Control scheme of a distillation column with top product quality control.

There are a lot of options for the control of a distillation column. Figure 5.39 shows a frequently used one, which is discussed in the following. In this example, the purity of the overhead product is maintained by the control of the temperature (TC) on a certain stage by manipulating the reflux amount. The distillate stream is the difference between overhead and reflux stream. The outlet valve maintains the level in the reflux drum (LC). This way, a steady state can be achieved. At the bottom of the column, the steam flow is fixed (FC). The control of the bottom product flow is analogous to the distillate flow control. The column pressure is controlled (PC) by the cooling water flow to the condenser. If the pressure is too high, the cooling water flow will be increased to condense more and to lower the pressure again.17 The control strategy in Figure 5.39 is not very fast anyway. To understand a control scheme, it is useful to follow the response of the column after something is varied. Consider the case where the concentration of the feed varies, e. g. the light ends fraction increases. The light ends will accumulate in the top section of the column. The level controller of the reflux drum will open the distillate outlet. The control temperature in the profile will drop. Then, the system answers with a reduction of the reflux amount. This reflux must go down the column stage by stage, which takes some time. For frequently occurring load variations, this control scheme is not really appropriate. Its strength is to safely maintain the quality of the top product at constant load. If the bottoms product quality has to be maintained, the arrangement in Figure 5.40 is advantageous. In this case, the temperature control manipulates the steam, and the change in the vapor phase affects the rest of the column rapidly, much faster than the reflux change does in Figure 5.39. This scheme has a huge advantage in stripping columns, where the distillate rate is small compared to the feed.

17 This kind of pressure control is relatively sluggish; alternatives are discussed below.


Figure 5.40: Control scheme of a distillation column with bottoms product quality control.

Figure 5.41: Example for an energy balanced control scheme.

It should be mentioned that the considerations about the response times mainly refer to tray columns, whereas packed columns have a relatively small holdup and show pretty fast control responses even for the option in Figure 5.39. In both schemes, the temperature control can also be connected to the distillate or the bottom product outlet valve, while the levels are maintained by the reflux flow or by the steam flow, respectively. All these options have their pros and cons depending on the particular case; however, the most important rule is that small streams should not control a level, neither in the reflux drum nor at the column bottom, as this results in extremely slow responses (Richardson's law). Generally, we can distinguish two types of control schemes: the mass balanced configuration (MB, Figures 5.39 and 5.40) and the energy balanced configuration (EB, Figure 5.41) [96, 123]. The mass balance control is the common way to control a column. The most important principle is that none of the product streams is flow-controlled. Otherwise, the column would easily run out of balance. If the flow of the product is fixed, it must exactly fit the corresponding fraction of the desired overhead components in the


feed. Consider a feed flow with 5000 kg/h low-boiler and 5000 kg/h high-boiler. If the top product flow is fixed, it would have to match the low-boiler content in the feed, i. e. 5000 kg/h. If it is fixed to another value, the impurity of one product stream is unavoidable. In practice, it is neither possible to determine the exact fraction of a feed nor are fluctuations avoidable. A coincidental 2 % deviation, e. g. 5100 kg/h low-boiler and 4900 kg/h high-boiler, would not be compensated, and 100 kg/h low-boiler would end up in the bottom fraction. With a mass balance control scheme, both product streams are controlled by other information, e. g. by the levels (Figures 5.39 and 5.40) or by a temperature in the column profile. A number of configurations are possible as shown above; they are well explained and discussed in [96]. The explanation above should rule out a control scheme fixing a product flow, but in fact it makes sense for small product streams where the composition does not matter. Consider a case where there are 10 000 kg/h high-boiler and 10 kg/h low-boiler, which are to be separated. The small overhead product stream will not have an influence on a level to be used for control purposes. An easy way to control this column is to set the distillate flow to 20 kg/h. This implies that 0.1 % of the bottom product is lost, but the low-boiler should be completely removed, including some fluctuations in the feed. The reflux ratio is a result of the steam flow, which can be related to a temperature in the profile or fixed as in Figure 5.41. This is an example of an energy balanced configuration. It might be surprising that the reflux ratio, which is one of the most important parameters in column design, is seldom controlled directly. Usually, either the reflux or the distillate flow is controlled, and the reflux ratio is only an operand. It is also remarkable that there is no chance to control both the top and the bottom composition to maintain them within their specification [96]. It seems obvious to set one control temperature each in the stripping and in the rectifying section (Figure 5.42). In fact, these control loops would seriously interact with each other. If, for example, the fraction of low-boilers in the feed rises, both control temperatures decrease and cause the corresponding reactions, maybe more steam flow to the reboiler for the bottom concentration and a decrease of the reflux for the top product one. However, these two reactions have completely different dynamic responses. While the response to the steam increase is relatively fast, the effect of the decrease of the reflux is pretty slow; it is transported with the liquid and therefore coupled with the residence times of the liquid on the particular trays. When the desired effect occurs, the increased steam flow already requires an increased reflux, and a never-ending cycle starts. Therefore, in conventional control schemes controlling both compositions is avoided. In all cases it often makes sense that streams fixed by flow control are related to the flow of the feed stream (e. g. fixed reflux, fixed steam flow). There are various options for pressure control. As in Figures 5.39 and 5.40, the coolant flow through the condenser can be controlled by the pressure. An easy and fast but not very elegant way is the feeding and venting of inert gases with a split range control. Together with the inert gas, a certain amount of top product


Figure 5.42: Unstable control scheme using two composition controllers for bottom and distillate products.

is always vented; therefore, the column should be expected to operate steadily or with low product concentrations in the vapor phase so that the losses are not harmful. More options are explained in [96]. There is also an option to use pressure drops or temperature differences along the column as input signals. It might happen that they have a larger significance than conventional signals, especially if the temperature itself is determined by undefined components. However, these options can show instabilities in malfunction cases. If, for example, the cooling water service fails, the temperature differences along the column will become smaller. In Figure 5.40, the steam valve would then fully open, which is the worst reaction of all and will probably lead to safety valve actuation. A behavior like this should at least be prevented by appropriate interlocks.

5.7 Constructive issues in column design

There are also some simple considerations in column design apart from component separation. Similar to vessels, there are certain requirements for the liquid level at the bottom. There must be several minutes of time for the operators to react while the liquid level goes up and down between the particular levels (Chapter 9). If there is a thermosiphon reboiler, it has to be checked whether the circulation takes place at minimum liquid level. The nozzle for the vapor inlet from the reboiler must not be flooded, and a sufficient distance must be kept to the lowermost tray or the packing, respectively. Usually, the particular companies have their own design guidelines where recommendations for the various issues are given; otherwise, recommendations can be found in [96]. There are several options for the bottoms design when a thermosiphon reboiler is involved (Figure 5.43). While the first one is the simplest and the most common one, a baffle can be placed in the bottoms region for different purposes. Option (2) ensures that there is always enough liquid height to maintain an appropriate NPSH value for


Figure 5.43: Three options for the column bottom design. Screen images of Aspen Plus® are reprinted with permission by Aspen Technology, Inc. AspenTech® , aspenONE® , Aspen Plus® , and the Aspen leaf logo are trademarks of Aspen Technology, Inc. All rights reserved.

the pump removing the bottoms product. The thermosiphon circulation is well ensured with option (3), which keeps its driving force constant. Special care must be taken when a feed consists of vapor and liquid or when a superheated feed flashes inside a column, meaning that pressurized liquid generates vapor when it is expanded into a low-pressure column. A simple nozzle is only acceptable at low velocities with low vapor fractions. In other cases, some constructive provisions must be made to let the liquid flow off toward the bottom, while the vapor is smoothly directed to the top. Popular options are flash chambers (Figure 5.44). In a flash chamber, the vapor can disengage in a defined way, while the liquid is collected and guided to a liquid distributor below. Flash chambers can be located inside (Type IN) or outside (Type A) the column. Type IN is appropriate for small flash vapor amounts. For large ones, Type A is the better choice, which is in principle a small vessel with a demister. For high-velocity feeds where the vapor is the continuous phase, vapor horns are one of the favorite solutions. A tangential helical baffle forces the vapor to follow the contour. It is closed at the top and open at the bottom. The liquid drops hit the wall and run downward as requested [100]. Another well-known device for separating vapor and liquid from a column feed is the "Schoepentoeter" [281], which was developed by Shell. Its principle is to dissipate part of the energy of the feed stream by dividing it into many parts. The Schoepentoeter is often subject to erosion and is therefore a wear part, but its effectiveness is beyond doubt. To ensure liquid sealing of the downcomer, inlet weirs (Figure 5.45 (a)) or seal pans (Figure 5.45 (b)) can be used [96]. They are often used in cases where the clearance under the downcomer is limited. Both arrangements ensure liquid sealing of the downcomer. For high liquid loads, seal pans can increase the capacity of the column. They can permit lower outlet weirs, which reduces the pressure drop, the froth height and the downcomer backup. In contrast, an inlet weir consumes some of the downcomer height and therefore often increases downcomer backup. This is one of the reasons


Figure 5.44: Flash chambers for use inside (Type IN, (a)) and outside (Type A, (b)) the column. Courtesy of Julius Montz GmbH.

Figure 5.45: Downcomer supporting arrangements.


why seal pans are generally preferred [96]. The disadvantage of both arrangements is that they can act as a dirt trap due to zones with stagnant liquid. For weir loads below 1 m3/(m h), so-called splash baffles are recommended [96]. Splash baffles (Figure 5.45 (c)) are vertical plates parallel to and upstream of the outlet weir with a gap to the tray floor so that liquid can pass underneath. They can increase the holdup and the froth height on the tray and can prevent a tray from drying up.

5.8 Separation of azeotropic systems

Azeotropic systems cannot be separated by conventional distillation. Using applied thermodynamics [11], there are a number of options to break an azeotrope and obtain the components in their pure form. It is the art of the process engineer to suggest the most appropriate one. A good description of the various separation processes for azeotropes can be found in [8]. The easiest case is the separation of a heteroazeotrope. It splits into two phases in a decanter, and the two phases can be worked up separately [8]. Pressure-swing distillation is useful if the azeotropic concentration depends strongly on the pressure. This is the case for relatively few binary systems. The most well-known one is tetrahydrofurane–water (Figure 5.46). In the first column, the azeotrope THF–water with xTHF ≈ 0.8 is taken overhead at low pressure (p ≈ 1 bar), while pure water can be removed from the process at the bottom. The overhead stream is condensed and compressed to a significantly higher pressure (p ≈ 10 bar). At this pressure, the THF concentration of the azeotrope is significantly lower (xTHF ≈ 0.6). Pure THF will remain at the bottom of the second column when the azeotrope is taken overhead at high pressure. It is recycled to the first column. Overall, the outlets of this arrangement are water and THF with arbitrary purity. Other examples are acetonitrile–water, methanol–acetone, ethanol–benzene, HCl–water and even ethanol–water [124]. The great advantage is that no additional substances have to be introduced into the process.

Figure 5.46: Pressure swing distillation for the separation of the tetrahydrofurane–water azeotrope [89]. © Wiley-VCH Verlag GmbH & Co. KGaA. Reproduced with permission.
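The recycle structure of Figure 5.46 can be checked with a simple overall mass balance. The Python sketch below assumes ideally sharp splits and uses the approximate azeotropic compositions quoted above (xTHF ≈ 0.8 at 1 bar, ≈ 0.6 at 10 bar); the feed rate and feed composition are assumed example values.

```python
# Back-of-the-envelope balance around the pressure-swing scheme of Figure 5.46,
# assuming ideally sharp splits: column 1 (low pressure) yields pure water at
# the bottom and the low-pressure azeotrope overhead; column 2 (high pressure)
# yields pure THF at the bottom and recycles the high-pressure azeotrope.
x_az_lp = 0.80   # THF fraction of the azeotrope at approx. 1 bar (from the text)
x_az_hp = 0.60   # THF fraction of the azeotrope at approx. 10 bar (from the text)

F = 100.0        # assumed feed flow, kmol/h
z_thf = 0.50     # assumed THF fraction in the feed

P_thf = F * z_thf                 # all THF leaves at the bottom of column 2
W = F - P_thf                     # all water leaves at the bottom of column 1
# Column 2 balances: D1 = P_thf + R and x_az_lp * D1 = P_thf + x_az_hp * R
R = P_thf * (1.0 - x_az_lp) / (x_az_lp - x_az_hp)   # recycled azeotrope
D1 = P_thf + R                    # overhead of column 1 (low-pressure azeotrope)

print(f"water product  : {W:6.1f} kmol/h")
print(f"THF product    : {P_thf:6.1f} kmol/h")
print(f"column 1 tops  : {D1:6.1f} kmol/h at x_THF = {x_az_lp}")
print(f"recycle stream : {R:6.1f} kmol/h at x_THF = {x_az_hp}")
```

The smaller the shift of the azeotropic composition with pressure, the larger this recycle stream becomes, which is one reason why pressure-swing distillation is attractive only for the relatively few systems mentioned above.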


Figure 5.47: Ethanol–water azeotrope at t = 100 °C.

Figure 5.48: Azeotropic distillation of ethanol–water using cyclohexane. Screen images of Aspen Plus® are reprinted with permission by Aspen Technology, Inc. AspenTech® , aspenONE® , Aspen Plus® , and the Aspen leaf logo are trademarks of Aspen Technology, Inc. All rights reserved.

There are four other main principles of azeotropic separation, which are illustrated using the azeotrope ethanol–water (Figure 5.47). Ethanol–water is the azeotrope which is split most often worldwide, with a capacity of 40 million tons per year. It is quite an unpleasant one, as on the branch between the azeotrope and pure ethanol hardly any separation via distillation is possible (Figure 5.47).
– Azeotropic distillation (Figure 5.48): After the azeotrope is obtained at the top of column K1, a substance is added which forms a ternary azeotrope with ethanol and water. The most common options are benzene and cyclohexane (CHX), where the latter is nowadays preferred due to the toxicity of benzene. With this ternary azeotrope, all the water can be taken overhead in column K2, while pure ethanol is obtained at the bottom. The ternary azeotrope can be split into two phases in the decanter (Chapter 6.1). The upper phase consists of cyclohexane with small amounts of ethanol, which can be directly recycled to column K2. The lower phase can be worked up in a further distillation and recycled to the decanter or column K1, respectively.




– Extractive distillation (Figure 5.49): The advantage of extractive distillation is that it is not necessary to start with azeotropic concentration; an easily achievable preconcentration to approx. 90 % is sufficient. In the first column K1 the water is washed down to the bottom with a solvent which has lower activity coefficients with water than with ethanol. A widely used one is ethylene glycol (1,2-ethanediol). At the top of column K1, the ethanol is obtained with the desired purity. The bottom product is a mixture of water and ethylene glycol. These components are separated in the second column K2. Ethylene glycol as the bottom product can be recycled and used again as extractive agent in column K1.

Figure 5.49: Extractive distillation of ethanol–water using ethylene glycol. Screen images of Aspen Plus® are reprinted with permission by Aspen Technology, Inc. AspenTech® , aspenONE® , Aspen Plus® , and the Aspen leaf logo are trademarks of Aspen Technology, Inc. All rights reserved.



Adsorption: Especially for ethanol, an adsorption process has been developed where the remaining water in the azeotrope is removed with a molecular sieve. The process is described in Chapter 7.2. The advantages are its robustness and simplicity. Especially in the ethanol business these are two major items. A disadvantage is the fact that it is necessary to start with azeotropic concentration and the complicated control. The azeotrope contains 4 % water, with is quite a lot. It makes it necessary to change the bed after a few minutes operation. The adsorber bed must then be regenerated.


– Membrane: Similar to adsorption, the water in the azeotrope can be removed with a membrane (Chapter 7.1). The process strategy is the same: first, azeotropic composition must be achieved, and then the water is removed using a multistep membrane separation [125].

5.9 Rate-based approach

In a conventional simulation of a distillation or an absorption, it is assumed that the liquid and the vapor phases which are leaving a stage are at complete equilibrium, i. e. phase equilibrium, thermal equilibrium and mechanical equilibrium. Furthermore, complete mixing and complete separation of the phases are assumed. In reality, these assumptions are, of course, never fulfilled. To take this into account, efficiency factors or HETP values are introduced so that realistic results can be obtained. However, one must be aware that the HETP might depend on the column diameter, the properties of the substances, or the liquid and vapor flow rates. Tray efficiencies and HETP values are never accurate; instead, they should more or less be interpreted as good guesses. The so-called rate-based approach is an alternative. It considers the heat and mass transfer between the phases which encounter each other in the column. It accounts for the influences of throughput, equipment size, packing or tray properties and physical properties of the fluids so that extrapolations are more reliable [126]. The heat and mass transfer rates are determined by quantifying the temperature and concentration18 differences between the phases, which are the driving forces of the separation. The characteristics of the contacting device, i. e. the generated transfer areas, are also taken into account. Thermodynamic equilibrium is still a very important piece of information for the calculation, but it is only assumed at the interface between the phases, referred to as vapor and liquid film in Figure 5.50. The mathematical details can be found in [127]. Applying a rate-based approach, one should be aware of some peculiarities which are often unexpected:
– The temperatures of vapor and liquid on a stage are generally different, as the two phases do not reach equilibrium.
– In a multicomponent mixture, a component can diffuse in the opposite direction to its concentration gradient. This can happen in situations where the fluxes of the particular components are strongly coupled. The phenomenon has been thoroughly described and experimentally proved in [128] and [129].
– The term of a theoretical stage is still used but no longer considered in the final calculation. For packed columns, the packing is divided into so-called segments, which have nothing to do with equilibrium stages or HETP; however, there are

18 To be correct: chemical potential differences.


Figure 5.50: Rate-based approach. Screen images of Aspen Plus® are reprinted with permission by Aspen Technology, Inc. AspenTech® , aspenONE® , Aspen Plus® , and the Aspen leaf logo are trademarks of Aspen Technology, Inc. All rights reserved.

rules of thumb to choose useful values for their height which are related to the HETP values. The segment height should always be lower than the HETP. For random packings, 10–12 times the size of the packing elements is a good approach; for structured packings, HETP/2 is a reasonable choice. In general, the number of segments should have no major influence on the calculation result, as long as the choice is reasonable. For tray columns, the trays themselves are the entity. Besides the phase equilibrium and the enthalpy description, the transport properties are necessary for the calculation, i. e. viscosities, thermal conductivities, surface tensions, and diffusion coefficients. As mentioned in Chapter 2.12, the calculation of the viscosity of liquid mixtures is not really accurate. The diffusion coefficients can be estimated very well for gases, but for the liquid probably only the correct order of magnitude can be determined [11]. Moreover, the mass transfer models are not accurate in any case; it is a matter of experience to choose the best one, and even this is not a guarantee for a correct representation of the system. The rate-based approach is not generally more accurate than the equilibrium calculation. It connects a number of uncertain quantities for the representation of the column, whereas the equilibrium model mixes all of these influences together and represents them with one single uncertain value, i. e. the HETP or the efficiency. Nevertheless, there are a number of cases where the rate-based approach gives significantly different results, and one should know when it makes sense to go beyond equilibrium thermodynamics. One should be aware that the effort to switch to the rate-based model in commercial process simulation programs is actually limited and is not a reason for refusing this attempt; convergence has also been substantially improved in recent years.
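The core idea can be made concrete with a small sketch of the two-film picture in Figure 5.50: equilibrium is assumed only at the interface, and the flux through the vapor film must equal the flux through the liquid film. The following Python snippet is a minimal illustration under these assumptions; the mass transfer coefficients, bulk compositions, and the equilibrium ratio K are made-up example values.

```python
# Minimal sketch of the two-film idea behind the rate-based approach:
# equilibrium (y_i = K * x_i) is assumed only at the interface, and the flux
# through the vapor film must equal the flux through the liquid film.
# All numbers below are illustrative assumptions.

def interface_state(y_bulk: float, x_bulk: float, K: float, kG: float, kL: float):
    """Solve kG*(y_bulk - y_i) = kL*(x_i - x_bulk) with y_i = K*x_i.
    kG and kL are film mass transfer coefficients in kmol/(m2 s)."""
    x_i = (kG * y_bulk + kL * x_bulk) / (kL + kG * K)
    y_i = K * x_i
    N = kG * (y_bulk - y_i)        # kmol/(m2 s), equal to kL*(x_i - x_bulk)
    return x_i, y_i, N

x_i, y_i, N = interface_state(y_bulk=0.10, x_bulk=0.02, K=2.0, kG=2.0e-3, kL=1.0e-3)
print(f"interface: x_i = {x_i:.4f}, y_i = {y_i:.4f}, flux N = {N:.2e} kmol/(m2 s)")
```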

Rate-based calculations are more or less obligatory for absorption and desorption processes, which are in most cases mass transfer limited. The efficiencies vary greatly from component to component and from stage to stage, as well as in strongly nonideal systems. In absorption, the efficiencies are usually only 10–20 %, but also values like 5 % are possible. In reactive distillation, the efficiency does not make sense at all when the main progress on a stage is the proceeding of the reaction and not of the separation. Reactions with fast reaction rates might be mass-transfer limited. Trace components, which have low mass transfer rates due to their low concentration, can often not be adequately treated with an equilibrium calculation. Likewise, systems with large gaps between vapor and liquid temperature can be heat transfer limited.

For illustration, here is a classical example. The absorption of traces of HCl from exhaust air with water requires one single theoretical stage with an equilibrium model, as the HCl is an electrolyte and will completely and immediately dissociate in water. In this case, it becomes clear that the use of a rate-based model is not a matter of accuracy. The step which determines the removal of the HCl from the air is the mass transfer in the gas phase by diffusion. It takes much more effort than the absorption itself once the HCl has reached the boundary layer. For the correct dimensioning, the application of a rate-based model is obligatory; it will result in a by far larger packing height.19 Other measures are not appropriate, neither the increase of the water amount nor the use of caustic soda, which just makes the chemical absorption more irreversible. The HCl does not know that the NaOH is waiting for it in the liquid!

19 Often several m of packing, if TA Luft must be reached.
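A rough idea of the resulting packing height can be sketched as follows: if the equilibrium back-pressure of the absorbed component is negligible (as for HCl in water), the number of overall gas-phase transfer units reduces to ln(y_in/y_out), and the packing height is the product of HTU and NTU. In the Python sketch below, the HTU value and the gas concentrations are assumed placeholders; in a real design the HTU would come from the rate-based model or from vendor data.

```python
import math

# Rough sizing logic for a gas-film controlled absorption such as the HCl
# example: with negligible equilibrium back-pressure of the absorbed
# component, the number of overall gas-phase transfer units reduces to
# ln(y_in / y_out), and the packing height is HTU * NTU. The HTU and the
# concentrations below are assumed placeholders.

def packing_height(y_in: float, y_out: float, HTU_m: float) -> float:
    NTU = math.log(y_in / y_out)
    return HTU_m * NTU

y_in = 1000e-6    # 1000 ppm HCl in the raw gas (assumed)
y_out = 3e-6      # 3 ppm HCl allowed in the clean gas (assumed)
HTU = 0.5         # m, assumed height of an overall gas-phase transfer unit

Z = packing_height(y_in, y_out, HTU)
print(f"NTU = {math.log(y_in / y_out):.1f}, required packing height approx. {Z:.1f} m")
```

Even with these benign assumptions, several meters of packing result, in line with the footnote above, whereas the equilibrium model would suggest a single stage.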

Even the rate-based approach does not represent the total truth. There are a lot of systems which form aerosols in the vapor phase, e. g. sulfuric acid–water or the system HCl–water which was just mentioned above. Aerosols are formed when, due to oversaturation, condensation takes place directly in the vapor phase and not in contact with the liquid phase. The droplets formed are not large enough to settle down into the liquid, and they are not small enough to take part in the diffusion process. They are entrained with the vapor phase, distorting the principle of distillation and absorption. To explain the theory would go beyond the scope of this book. Good explanations can be found in [130] and [131].

5.10 Dividing wall columns

Conventional distillation columns can separate a feed mixture into two product streams with the desired concentration, as long as the design of the column is appropriate. In case of a binary mixture, two pure component streams can be obtained. In Figure 5.51, a side draw is additionally taken from the column. However, there is no way to obtain a third product with any desired concentration. Consider a three-component mixture with the light end A, the heavy end B, and the middle boiler C. As the feed is located below the side draw, it can easily be achieved that there is no heavy end B in the side product. However, light end A will pass the stage where the side stream is taken, and part of it will inevitably end up in the side stream.

Figure 5.51: Column with side draw. Screen images of Aspen Plus® are reprinted with permission by Aspen Technology, Inc. AspenTech® , aspenONE® , Aspen Plus® , and the Aspen leaf logo are trademarks of Aspen Technology, Inc. All rights reserved.

Dividing wall columns (Figure 5.52) are an alternative, which is increasingly becoming established. Part of the column is divided by a separation wall which prevents lateral mixing. The feed enters the column on the left hand side and is split into the components A + C at the upper and B + C at the lower end of the separation wall. At the top of the column there is a rectifying section, giving pure component A as top product. Also, at the bottom the pure heavy-end B can be obtained in the conventional stripping section. The right part of the column is fed with a mixture of A + C from the top and B + C from the bottom. At an appropriate stage in the middle of the right-hand side, pure product C can be withdrawn. There is usually a distributor at the top of the divided section where it can be controlled how much liquid is fed from the top to the particular column partitions. Without a device, the vapor coming from the bottom is split in a way that the pressure drop in both partitions is the same. Therefore, for a proper design the pressure drop correlation used should work sufficiently well. Dividing wall columns represent the highest degree of heat integration between columns; it is estimated that the energy savings amount to 20–35 % in comparison with an adequate conventional distillation arrangement [132].


Figure 5.52: Dividing wall column principle. Screen images of Aspen Plus® are reprinted with permission by Aspen Technology, Inc. AspenTech® , aspenONE® , Aspen Plus® , and the Aspen leaf logo are trademarks of Aspen Technology, Inc. All rights reserved.

Figure 5.53: Dividing wall column in process simulation. Screen images of Aspen Plus® are reprinted with permission by Aspen Technology, Inc. AspenTech® , aspenONE® , Aspen Plus® , and the Aspen leaf logo are trademarks of Aspen Technology, Inc. All rights reserved.

From the process simulation point of view, the dividing wall column can be represented by two independent columns which are linked by streams entering and leaving the partition on the right-hand side (Figure 5.53). The stripping section, the partition on the left-hand side and the rectifying section form the first column (C1LEFT). At the upper end of the dividing wall, a liquid stream (LL) is withdrawn and led to the right partition, while the complete vapor flow of the right partition enters the rectifying section (VR). Analogously, part of the vapor from the left partition is taken as a side stream to the bottom of the right partition (VL), while the whole liquid from the bottom of the right partition is fed to the stripping section from the top (LR).


The following items should be observed when a dividing wall column is considered:
– For a long time, column hydrodynamics in dividing wall columns were not covered by the commercially available programs. The only way to evaluate hydrodynamics was to define a circular cross-section with an equal area. Meanwhile, some programs already support dividing wall columns.
– Dividing wall columns have strong advantages if the feed contains substantial amounts of middle boiler C, typically 20–60 %. The larger the concentration of the middle boiler, the more effective is the dividing wall column option compared to a conventional design [243].
– It is clear that both sides of the dividing wall have essentially the same pressure. If it turns out that operating the two sides at different pressures has a considerable advantage, the dividing wall column is not appropriate.
– The partition wall should be thermally insulated to avoid heat transfer across it. Heat transfer will have a negative influence on the performance of the column. Moreover, if there are large temperature differences on both sides of the wall, it will probably cause mechanical stress. This must be taken into account by the mechanical design [243].
– Side reactions which cause the formation of light ends at the bottom or heavy ends at the top thwart the principle of the dividing wall column; it is not useful in these cases [243].
– Due to the hydrodynamic constraint of having the same pressure drop on both sides of the wall, it is at least difficult to provide a significantly different number of theoretical stages on both sides of the wall.
– The dividing wall column has a larger diameter and more stages than each of the columns it represents (Figures 5.52 and 5.53). As well, if there are different material requirements on both sides of the wall, the more expensive one will have to be chosen.
– The vapor split is fixed by the design of the column, i. e. by the location of the wall and the pressure drops across the sections. It cannot be adjusted during operation [243].

5.11 Batch distillation

Distillation can be performed either in the continuous mode or as a batch distillation [124]. A continuous distillation is operated at steady state, meaning that the state variables do not change with time. The feed is continuously entering the column, while top and bottom products are continuously withdrawn. In batch distillation, the feed is filled into the bottom vessel of the column at the beginning of the operation (Figure 5.54 (a), regular configuration). Depending on the time, various products can be withdrawn at the top of the column. Side stream products and continuous feeds are

optional. There is usually no bottom product; the residue in the bottom vessel can be removed from the column at the end of the distillation process. The state variables in the column change with time; the process is inherently unsteady [124]. Batch distillation is often preferred to continuous distillation if relatively small amounts of material which occur irregularly and possibly with changing composition have to be separated. It is used extensively in laboratory separations and in the production of fine and specialty chemicals, pharmaceuticals, polymers and biochemical products. Batch distillation units are very flexible; they can usually handle different kinds of products, and as a matter of principle, only one column is necessary to split a mixture into its components unless azeotropes occur. As well, hydrodynamic calculations are not as important as for continuous columns; if the column diameter does not fit, the throughput can simply be distributed over a longer time, as long as it is in line with the time schedule of the process (Chapter 3.4). Moreover, a batch product has its own identity, i. e. it can strictly be controlled which feedstock a product comes from, which is often important for quality control in the production of pharmaceuticals [124]. Essentially, there are two different kinds of batch distillation [133]:
– Operation with a constant reflux ratio, where the distillate composition changes continuously. The final product concentration is an average value. At the beginning, it is usually higher so that the product purity is above specification. At the end, the reflux ratio is lower than requested, and care has to be taken that operation is stopped while the product concentration is still in line with the specification. This is a considerable disadvantage; the successful operation can be proven only at the end of the batch. If the final product turns out to be off-spec, the batch has to be blended or rerun [124].
– Operation with a constant distillate composition, where the reflux ratio is varied. This is normally the better approach; however, it requires a control mechanism and is more complex. It might happen that the controller settings must be adjusted during the process due to large changes of the process conditions.
Of course, even both the reflux ratio and the product composition can be varied to optimize the decisive criterion, e. g. batch time, product amount, or minimum cost. While in the traditional batch distillation the feed is charged to the bottom, other configurations might be useful. In an inverted batch column, the feed is charged to the reflux drum at the top of the column. With this approach, it is easier to remove large amounts of high-boiling components from the final product. Also, the middle vessel configuration is generally more effective in terms of energy efficiency and product rate [124]. All three configurations are illustrated in Figure 5.54. Batch distillations are more complicated to calculate than continuous ones, as the MESH equations (Section 5.1) must be supplemented by a term describing the time-dependent holdup on the various stages.


Figure 5.54: Batch distillation configurations; (a) regular, (b) inverted, (c) middle vessel [124].

Moreover, the change of the holdup on the stages varies drastically with time (at the beginning, the holdup is built up, whereas at the end there are only slight changes in the composition), so that the solution of this system of equations is much more difficult. Alternatively, the batch distillation can be represented as a series of continuous distillations where the holdups at the beginning and at the end of the steps are taken as feed streams to or side products from the particular stages, respectively [133].

5.12 Troubleshooting in distillation

A number of excellent papers and books are available to get support for troubleshooting in distillation, among them [96, 106, 258, 261]. This chapter focuses on γ-ray scanning as a technique to detect areas in the column where blocking, foaming, maldistribution or damage, which belong to the most common reasons for column malfunction, prevent regular operation of the column. Its advantage is that the examination can be performed during operation when the problem occurs, without opening the column, which always causes a major interruption and often does not reveal the reason. The principle is as follows: a radioactive source emits γ-radiation. When passing any material, the γ-radiation is attenuated according to

I = I0 exp(−μρx) (5.14)

with I as radiation intensity, I0 as the original radiation intensity of the source, μ as the absorption coefficient, ρ as the density, and x as the path length through the medium. At high γ-ray energies, the absorption coefficient μ becomes independent of the material, and the absorption process depends only on the product of density and the thickness of the medium. Therefore, the attenuation between source and detector is a direct measure for the average density on the way. For tray columns, the typical zigzag pattern develops when the source/detector arrangement is moved up and down the column (Figure 5.55 (a)). Alternately, the radiation passes the clear vapor space with the lowest density and the solid tray in horizontal direction with the highest density. In the vapor space, attenuation can be determined when there is foam, weeping, liquid entrainment or flooding (Figure 5.55 (b)), each of them having a typical shape which can be identified by a specialist, as well as tray damage or missing trays. Source and detector can be arranged centrally across the tray or, alternatively, across the downcomer to examine its behavior.
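As a plausibility check of Equation (5.14), the following short Python sketch compares the transmitted fraction I/I0 for different average densities along the beam. The mass attenuation coefficient and the path length are hypothetical example values, not data for a specific source or column.

```python
import math

# Minimal sketch of Equation (5.14): I = I0 * exp(-mu * rho * x).
# mu (mass attenuation coefficient) and x (path length) are hypothetical values.

def transmitted_fraction(mu, rho, x):
    """Fraction I/I0 of the gamma radiation that reaches the detector."""
    return math.exp(-mu * rho * x)

mu = 0.0077   # m^2/kg, hypothetical mass attenuation coefficient
x = 2.0       # m, hypothetical column diameter (path length)

for label, rho in [("clear vapor space", 5.0),       # kg/m^3
                   ("froth layer", 300.0),
                   ("liquid-filled section", 800.0)]:
    print(f"{label:22s}: I/I0 = {transmitted_fraction(mu, rho, x):.3f}")

# The measured I/I0 can be inverted to an average density along the beam:
# rho_avg = -ln(I/I0) / (mu * x)
```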

Figure 5.55: γ-ray scanning signals for a normally working tray column (a) and a tray column with increasing entrainment up to jet flood from tray to tray (b).

In a column with random packing, the packing bed can be found as a region of high density. If density further rises, one can conclude that plugging or flooding occurs. Maldistribution can be detected by evaluating the attenuation across a number of secants in different directions (Figure 5.56). Surprisingly, there are no success stories for structured packings [96].


Figure 5.56: γ-ray scanning for a packed column with and without maldistribution.


6 Two liquid phases

A thermodynamic subtlety is that more than two liquid phases can coexist in equilibrium. Figure 6.1 shows a sketch of seven liquid phases forming an equilibrium with the same vapor.

Figure 6.1: Seven liquid phases in equilibrium [134].

Fortunately, in technical applications these multi-liquid-liquid equilibria do not play a major role, but there are a number of processes where two liquid phases occur. Especially in extraction processes the two liquid phases are essential, as well as in heteroazeotropic distillations, where two liquid phases form after condensation which must be carefully separated for the continuation of the process.

6.1 Liquid-liquid separators

Usually, in a process the two liquid phases form a dispersion where one phase (discontinuous phase) is distributed in a continuous phase as droplets [135]. In technical applications, in most cases an aqueous phase and an organic phase occur, where the aqueous phase is usually the heavy one (there are exceptions; for instance, halogenated organic substances are often heavier than water). In a liquid-liquid separator, the dispersion should be transformed into two homogeneous phases. Figure 6.2 shows a horizontal liquid-liquid separator. On the left-hand side, the dispersion enters the separator. As the velocity decreases due to the enlargement of the diameter, turbulence and kinetic energy are reduced. A layer of droplets is formed, where droplets do not coalesce. On the right-hand side, the coalescence takes place. Small droplets slowly coalesce. This can be improved by internals. There are two kinds of these:

– internals which reduce the kinetic energy and distribute the liquid over the whole cross-flow area;
– internals with a large surface where the droplets can coalesce (e. g. plates, random packing, wire-mesh).

Fiber layers are recommended for droplet diameters between 1 and 100 µm. Plate internals are relatively expensive but useful if solid particles or surfactants are involved or if the pressure drop should be minimized.

Figure 6.2: Horizontal liquid-liquid separator without internals.

The routing of the liquid phases is an important item of the design. While in Figure 6.2 the apparatus is completely filled with the two liquid phases, it is also possible that space for the vapor is left. A siphon can be used to adjust the height of the phase boundary between the two phases (Figure 6.3), where the top height of the siphon can be varied if it is designed as a spool piece [136]. For small amounts of the heavy phase, a special collector can be placed at the bottom (Figure 6.4). A theoretically founded design of liquid-liquid separators is not possible. Some influences are clear; a large density difference, a low viscosity of the continuous phase, and large droplet sizes are favorable to the sedimentation. However, there are many phenomena which are still unclear, e. g. the sedimentation of droplet clusters, the droplet size distribution, and the coalescence behavior. But the most unpredictable issue is the influence of surfactants. Even just traces can significantly change the separation behavior, which often just turns the design procedure into a lottery. Also, solid particles often tend to form a layer (crud) which can disturb the separation of the two phases. The generally acknowledged procedure for the design of liquid-liquid separators driven by gravity is that of Henschke [137, 138], which describes the transfer of the results of a batch settling experiment in a standardized cell into a design of a separator. Figure 6.6 shows the course of such an experiment, where the light phase is the dispersed one. After the mixing of the two phases has stopped, the droplets start


Figure 6.3: Adjusting the phase boundary with a siphon [136].

Figure 6.4: Liquid-liquid separator with a collector for the heavy phase.

rising upwards. The sedimentation curve indicates the range of the lower part which is free of droplets. If the sedimentation of the droplets is faster than the coalescence at the phase boundary, a layer of droplets is formed. The droplets coalesce at the phase boundary, which is continuously shifted downwards. The extent of the clear light phase is represented by the coalescence curve. The experiment is finished when only half of the phase boundary is covered with droplets; this definition is necessary to make the result independent of statistical effects caused by single droplets which coalesce late. The course of the particular curves can be used to adjust the Henschke model. New approaches are under development, which consider internals as well. Often, when the phase separation is relatively fast, the dimensions of the separator are determined by its function as a vessel, giving the plant operators time to react


Figure 6.5: Horizontal liquid-liquid separator in chemical industry.

Figure 6.6: Course of a settling experiment with the light phase as the dispersed one [139]. © Wiley-VCH Verlag GmbH & Co. KGaA. Reproduced with permission.

(Chapter 9). For the designer, this is a lucky situation, but it has to be proved by experience in any case.

6.2 Extraction

In liquid-liquid extraction, a substance (extractive) is removed from a solvent by an extracting agent in the liquid phase which is not completely miscible with the solvent [89]. Extraction has advantages in comparison with distillation if
– the separation factors are small, in the worst case at the azeotropic point;
– there are several components with significantly different boiling points which can be separated simultaneously;
– substances with extremely high or low boiling points occur;
– the concentration of the high-boiling substance is low so that a very large part of the mixture has to be evaporated;
– sensitive fluids must not be heated up.

The stream containing mainly the selective agent and the extractive is called the extract. The solution that has been cleaned from the substance is called the raffinate. The physical foundation of the extraction is the liquid-liquid equilibrium between the substances involved. A single equilibrium step is usually not sufficient. As for distillation and absorption, the separation effect can be increased by providing a number of separation stages in a row. Again, columns are possible, where the solution and the extracting agent are introduced at opposite ends of the column. The driving force of the countercurrent flow is the density difference between the two liquid phases, and therefore the light phase inlet is at the bottom, and the heavy phase inlet is at the top. The design of extraction equipment should provide good mass transfer conditions, i. e. a large contact area of the phases at a high degree of turbulence. One of the two phases is split into droplets, forming the disperse phase. There are many criteria for the choice of the disperse phase, which are sometimes contradictory. Often, the phase with the larger mass flow is dispersed to get a large contact area. In packed columns, the phase with the better wettability should be the continuous one. If the disperse phase is wetting the packing, the droplets could coalesce and become larger with less surface for mass transfer. Furthermore, the mass transfer direction should be from the continuous to the disperse phase. Moreover, flammable or poisonous substances should be dispersed to lessen the hazardous potential. The final decision should be based on experiments. Criteria for the choice of the extracting agent are the extent of the miscibility gap with the solvent, a high selectivity and a large capacity for the extractive. The separation of the extracting agent from the extract should be as easy as possible; extractive and selective agent should have a large difference of the boiling points and should not form an azeotrope. The density difference between the extracting agent and the solvent should be large so that the separation of the two liquid phases is easy; otherwise, there is also the option to achieve the phase separation by centrifugation. The surface tension between the two phases is a relevant quantity; if it is too large, the formation of small droplets is difficult, if it is too small, the separation of the two phases becomes hard. There are also practical items like low price, small vapor pressure, so that the losses by evaporation are small, high thermal and chemical stability, and low viscosity, flammability, and toxicity. Analogous to distillation and absorption, extraction can be described with an equilibrium model or a rate-based model considering mass transfer. A comprehensive description can be found in [97]. Compared with distillation and absorption, the

computational modeling of liquid-liquid extraction processes has many more uncertainties. The dimensioning of equipment for extraction is hardly possible without performing a pilot scale test. Even the calculation of the phase equilibria causes problems. Unlike the representation of vapor-liquid equilibria, the binary interaction parameters (BIPs) for the NRTL or the UNIQUAC equation obtained from binary phase equilibrium data cannot be simply transferred to ternary and multicomponent mixtures, as already mentioned in Chapter 2.5. Usually, they yield results which are only qualitatively correct. For a reliable description of liquid-liquid equilibria, the BIPs have to be adjusted not only to binary, but also to LLE data of ternary mixtures (of course, data from quaternary and higher mixtures would be useful, but there is little available). Moreover, the temperature dependence of the BIPs, which is especially distinct for systems showing strongly non-ideal behavior, must be carefully regarded.

Example
A water stream contains 400 wt. ppm of tetrachloromethane (CCl4). The CCl4 content shall be reduced significantly. Does it make sense to use a hexane stream as extractive agent, which already contains 1 wt. % of CCl4? The extraction process consists of a one-stage mixer-settler (Chapter 6.2.1) arrangement and takes place at t = 30 °C and p = 1 bar.

Solution
In fact, this option had been disregarded in a similar practical case, as it seemed that diffusion cannot happen against the concentration gradient. However, there are two mistakes. First, the concentration which accounts for mass transfer is the mole concentration. It does not change much, but the comparison must be made between 47 mol ppm in the aqueous phase and 0.6 mol % in the organic phase. Second, the concentration measure which is decisive refers to the chemical potentials, i. e. the activities. For easy illustration, the activity coefficients at infinite dilution are taken. For CCl4, they are

γ∞aq = 10200,   γ∞org = 1.23

Thus, it becomes clear that the product xCCl4 ⋅ γCCl4 is larger in the aqueous phase, and in the equilibrium stage the CCl4 will be transported from the aqueous to the organic phase. An evaluation of the proposed mixer-settler equilibrium stage yields quite a good result: The CCl4 content of the aqueous phase is reduced to 6 wt. ppm or, respectively, 0.7 mol ppm. However, note that afterwards the aqueous phase is also saturated with hexane (33 wt. ppm, corresponding to 7 mol ppm).
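The comparison of the driving forces can be written down in a few lines of Python; the sketch below simply multiplies the mole fractions and the infinite-dilution activity coefficients quoted above (using γ∞ over the whole concentration range is of course a simplification):

```python
# Minimal sketch: compare the activity x*gamma of CCl4 in both phases.
# The mole fractions and infinite-dilution activity coefficients are the
# values quoted in the example above.

x_aq,  gamma_aq  = 47e-6,  10200.0   # aqueous phase: 47 mol ppm
x_org, gamma_org = 0.006,  1.23      # organic phase: 0.6 mol %

activity_aq  = x_aq * gamma_aq       # ~0.48
activity_org = x_org * gamma_org     # ~0.0074

print(f"activity in aqueous phase: {activity_aq:.3f}")
print(f"activity in organic phase: {activity_org:.4f}")
if activity_aq > activity_org:
    print("CCl4 is transferred from the aqueous to the organic phase,")
    print("although its mole fraction in the organic phase is higher.")
```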

But even if the phase equilibria are well-known, a number of issues exist which will not be solved in the foreseeable future. The surface tension between two liquid phases, which determines the effort for the phase separation, cannot be predicted.


Small amounts of impurities can have a significant influence. To determine the residence time necessary for the phase separation, experimental tests are necessary. If rate-based models are applied, the droplet sizes and their distribution, which are needed for the calculation of the phase boundary area, the wettability of the internals, and the optimum velocities cannot be predicted. Also, the diffusion coefficients in the liquid phase have a large influence on the results, but their estimation is inaccurate [11]. Finally, the choice of the disperse phase can be supported by theoretical considerations, but at the end of the day, an experimental confirmation is required. The scale-up of extraction columns is more difficult than for distillation or absorption columns. Columns with large diameters do not show the same separation efficiency as laboratory or pilot scale columns. The reason is supposed to be a wider distribution of the residence time of the droplets, which has in principle a negative influence. For the design of an extraction, it is obligatory to perform a series of experiments [139]. These experiments comprise shaking trials to characterize the dispersion and coalescence behavior, and laboratory, miniplant, or pilot plant runs as the basis for the scale-up. In distillation, absorption and extraction, it is essential that the phases which participate in the separation process are exposed to each other intensively. Subsequently, they have to be separated again. This is easy for distillation and absorption due to the large density differences between liquid and vapor but much more difficult in extraction with small density differences of the two liquid phases. Therefore, the equipment used for extraction must enable the two phases to separate after a certain contact time.

6.2.1 Mixer-settler arrangement

The simplest concept is the mixer-settler arrangement, where mixing and separation take place at two different locations. In the easiest way, the mixer is a stirred vessel, and the separator is another vessel providing residence time for the settlement of the phases. Wire-mesh or packing elements can support the separation. If phase equilibrium and complete separation are achieved, one theoretical stage is realized. Mixer and settler can as well be arranged in a more compact way (Figure 6.7), where also countercurrent flow takes place. The advantage of the mixer-settler principle is the easy scale-up by numbering-up. The load range is large, and mixer-settler units are appropriate for extreme mass flow ratios of the two phases. The height of mixer-settler units is low; however, the floor space required is very large, as well as the liquid holdup.


Figure 6.7: Mixer-Settler arrangement [89]. © Wiley-VCH Verlag GmbH & Co. KGaA. Reproduced with permission.

6.2.2 Extraction columns

For extraction, sieve tray columns and both random and structured packing columns are used [97, 139]. In contrast to distillation, the sieve tray columns do not have a weir. There is a downcomer or a pipe so that the heavy phase can get to the tray below. The light phase accumulates below the tray above. The heavy phase coming down displaces the light phase, which is forced to go to the tray above across the sieve holes. Therefore, the light phase is the disperse one, whereas the heavy phase is the continuous phase. If the heavy phase should be the disperse one, a different construction must be chosen with pipes leading to the tray above. In extraction, these sieve trays have an efficiency of η = 10–30 %. The load range is relatively small. Another disadvantage is the fact that a relatively large part of the column is used for accumulating the light phase below the next tray. On the other hand, the construction is quite simple, and the backmixing by dispersion is effectively prohibited. Also, the scale-up is easy. The density difference between the liquid phases should be quite large (> 100 kg/m³). A significant improvement of the efficiency can be achieved by pulsation. The liquid in the column is vibrated by means of a piston pump (Figure 6.8). The amplitudes of these vibrations are 6–10 mm, and the frequencies are between 50–150 min⁻¹. The light phase passes the holes during the upstroke, and the heavy phase passes during the downstroke. In this way, new phase contact areas are continuously formed. Another option for the pulsation is the movement of the trays themselves. Pulsated columns have a good separation efficiency, but like the nonpulsated tray columns the load range is small. For packed columns, it is important that the packing is easily wetted by the continuous phase. The disperse phase should not wet the packing; otherwise, the droplets might coalesce, which lowers the interfacial area. Often, extraction columns with rotating elements are used. The principle is that both phases are thoroughly mixed by means of input of mechanical energy. Many small droplets with a large surface are formed, and the mass transfer is improved. The two liquid phases are separated in designated zones, and the axial back mixing as one of the key problems of extraction columns is restricted. The drawbacks of these columns are the high price and that they are prone to malfunction and attrition.


Figure 6.8: Pulsated sieve tray column [89]. © Wiley-VCH Verlag GmbH & Co. KGaA. Reproduced with permission.

Figure 6.9: Kühni column. © Sulzer Chemtech Ltd.

One of the most popular extraction column types with rotating elements is the Kühni column (Figure 6.9). A turbine agitator produces a circulation flow with a high interfacial area between the liquid phases. Perforated discs provide for the separation of the phases. The separation efficiency is high, there are up to 10 stages per m. Again, the small load range is the main drawback. The rotating disc contactor (RDC) has horizontal rotating discs on a shaft, which provide the dispersion of the phases (Figure 6.10). A minimum viscosity of the phases is necessary, as the dispersion is caused by shear forces. The back mixing is restricted


Figure 6.10: Rotating disc contactor (RDC). © Sulzer Chemtech Ltd.

Figure 6.11: Asymmetric rotating disc contactor (ARDC) [89]. © Wiley-VCH Verlag GmbH & Co. KGaA. Reproduced with permission.

by the stator rings linked to the wall. However, for assembly reasons the inner diameter of these stator rings is larger than the outer diameter of the rotating discs so that the back mixing is not really prevented. RDCs can realize large throughputs but only 0.5–1 stages per m. The asymmetric rotating disc contactor (ARDC) is a further development. The shaft with the rotating discs is placed non-concentrically with the column axis (Figure 6.11). Separation and transport of the phases takes place in dedicated zones at the column wall which are separated from the mixing zone by vertical plates. Compared to the RDC, the maximum throughput is a bit lower, but the separation efficiency is much better (1–3 stages per m). The hydraulic design of extraction columns is difficult. Details can be found in [97] and [139]. The most important criterion for the determination of the column diameter is the flooding point. Flooding is reached when the countercurrent flow can


no longer be maintained, e. g. if the buoyancy of the dispersed light phase is not sufficient to overcome the flow resistance of the droplets in the continuous heavy phase coming down. The droplets of the light dispersed phase can be entrained downward to the bottom outlet, or a phase inversion can take place if the droplets accumulate and coalesce. A reasonable calculation of these phenomena is hardly possible, as the necessary information, the droplet size distribution, is not accessible. As a strongly simplifying consideration, the layer approach can be used. The two phases cover a part of the cross-flow area according to their holdup and move in opposite directions in countercurrent flow. The velocities are in the order of magnitude of 1 cm/s.
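On the basis of the layer approach and the order of magnitude of 1 cm/s, a very rough first estimate of the required cross-flow area can be sketched as follows; the volumetric flows and the assumed superficial velocity are hypothetical examples, not design values:

```python
import math

# Very rough sizing sketch based on the layer approach described above:
# both phases are assumed to move with a superficial velocity of the order
# of 1 cm/s. All numbers are hypothetical examples, not design values.

V_light = 6.0 / 3600.0    # m^3/s, volumetric flow of the light phase (6 m^3/h)
V_heavy = 4.0 / 3600.0    # m^3/s, volumetric flow of the heavy phase (4 m^3/h)
w = 0.01                  # m/s, assumed superficial velocity of each phase

# Each phase needs its own share of the cross-flow area.
area = V_light / w + V_heavy / w          # m^2
diameter = math.sqrt(4.0 * area / math.pi)

print(f"required cross-flow area: {area:.2f} m^2")
print(f"corresponding column diameter: {diameter:.2f} m")
# A real design has to account for holdup, droplet size distribution and
# the flooding behavior of the specific column type.
```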

6.2.3 Centrifugal extractors

Centrifugal extractors are a third type of equipment for liquid-liquid extraction. The countercurrent flow and the phase separation are not achieved by gravity but by centrifugal forces. The internals and the liquid routing provide an intensive mixing and the subsequent phase separation after a residence time of a few seconds, giving high throughputs and low holdups. Both investment and operation costs are high; centrifugal extractors are therefore mainly used in the pharmaceutical industry or if expensive solvents are involved. The principle of centrifugal extractors is that the extractor is rotating at a high speed. By means of the centrifugal forces, the heavy phase is forced to the outer wall, whereas the light phase is displaced towards the rotation axis. The Podbielniak extractor (Figure 6.12) is an example. The phase separation is supported by concentric perforated sheets. Countercurrent flow is achieved by feeding the light phase at the wall and the heavy phase at the rotating shaft. Podbielniak extractors can have 3–5 theoretical stages.

Figure 6.12: Podbielniak extractor [89]. © Wiley-VCH Verlag GmbH & Co. KGaA. Reproduced with permission.

7 Alternative separation processes

For thermal separations, the alternative processes membrane separation, adsorption, and crystallization can solve some problems where the standard operations fail. They are explained in the following sections.

7.1 Membrane separations

At the inlet of the membrane there is one stream you do not like. Downstream the membrane, there are two of them. (Hans Haverkamp)

Fortunately, this does not always apply. Membranes can be used successfully for thermal separation problems, especially in combination with other processes. Figure 7.1 shows the principle of the membrane separation process and the denomination of the streams. The membrane separates two spaces from each other. However, substances can pass through the membrane and get to the other side. The stream having passed the membrane is called permeate, the stream which has not is the retentate. For the various substances, the permeability of the membrane is different, which is the basis of the separation effect.

Figure 7.1: Principle of the membrane separation process [11]. © Wiley-VCH Verlag GmbH & Co. KGaA. Reproduced with permission.

There are two different principles of membrane separation. The first type acts like a sieve or a filter; small molecules can pass the membrane (permeate), whereas larger molecules cannot (retentate). As membrane materials, glass-like polymers like polyetherimide or polysulfone are used. Depending on the size of the retained particles, one distinguishes between microfiltration, ultrafiltration, and nanofiltration. Nanofiltration is normally used for the treatment of aqueous systems. It separates particles down to a particle size of 1 nm. The driving force is a pressure difference of up to 40 bar between both sides of the membrane. In many cases, nanofiltration is also ion-selective. While monovalent ions can pass the membrane easily, bi- or multivalent ions are held back. Well-known applications are the removal of water hardness (Ca ions), the decoloring of waste waters from the textile and pulp industry, and the desalination of waste waters. Ultrafiltration is operated with a pressure difference across the membrane between 3–10 bar. It can be used for separating high molecular weight substances from a liquid. Microfiltration is used for removing particles between 0.1–10 µm.

The mass transfer through these porous membranes can be explained with the pore model. For a porous membrane, the size of the molecules or ions to be separated and the pore size of the membrane are of the same order of magnitude. In this case, the membrane separation is comparable to a sieve filtration. Solubility membranes act in a different way. The mechanism for this separation is the combination of solution and diffusion. On the high pressure side of the membrane, a component is dissolved in the membrane polymer. It is then transported to the other side of the polymer by diffusion, and desorbs at the low pressure side of the membrane. The driving force of the procedure is the partial pressure difference between the two sides of the membrane. For the permeability of a component through the membrane, the product of the solubility in the membrane polymer and the diffusion coefficient is decisive, according to

ṅi = Di Si Δpi / l (7.1)

where ṅ is the mole flux [mol/(m² s)], D is the diffusion coefficient [m²/s], S is the solubility parameter [mol/(m³ Pa)], and l is the thickness of the polymer layer [m]. For example, pentane has a higher permeability through a silicone membrane than nitrogen. Its diffusion coefficient in silicone is three times lower than that of nitrogen, but its solubility is 200 times larger. Therefore, the permeabilities differ by a factor of approx. 60. The quality of the separation depends mainly on the selectivity of the membrane, which is defined as the ratio (D1 S1)/(D2 S2) of two components to be separated. Figure 7.2 illustrates the permeation behavior of various substances in different membranes, with some remarkable and unexpected results.
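Equation (7.1) can be illustrated with a short Python sketch. The numbers below are normalized, hypothetical values that only reproduce the pentane/nitrogen ratios mentioned above (diffusion coefficient three times lower, solubility 200 times higher); they are not measured data for a real membrane.

```python
# Minimal sketch of Equation (7.1): n_i = D_i * S_i * dp_i / l.
# The values are normalized, hypothetical numbers that only reproduce the
# pentane/nitrogen ratios quoted in the text.

def molar_flux(D, S, dp, thickness):
    """Mole flux through a solubility membrane, Eq. (7.1)."""
    return D * S * dp / thickness

l = 1.0e-7          # m, thickness of the active layer (hypothetical)
dp = 1.0e5          # Pa, partial pressure difference (hypothetical)

D_N2, S_N2 = 3.0, 1.0          # normalized reference values for nitrogen
D_C5, S_C5 = 1.0, 200.0        # pentane: D three times lower, S 200 times higher

flux_N2 = molar_flux(D_N2, S_N2, dp, l)
flux_C5 = molar_flux(D_C5, S_C5, dp, l)
print(f"selectivity pentane/nitrogen = {flux_C5 / flux_N2:.0f}")   # ~67
```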

Figure 7.2: Permeation behavior of various substances in different membranes.


To achieve high fluxes through a solubility membrane at sufficient separation efficiency, it is necessary that the active layer for the separation is extremely thin [89]. The handling of thin materials is difficult; the solution is a so-called asymmetric membrane. These membranes consist of a thin active layer (approx. 0.01–0.05 µm) connected to a porous supporting layer (approx. 100 µm). The supporting layer achieves the mechanical stability without contributing significantly to the mass transfer resistance. The supporting layer can be made of the same material (phase inversion membrane) or a different one (composite membrane). In some cases, an additional layer made of polyacrylonitrile fibres is used to further increase the mechanical stability. Examples of membrane materials are polyvinyl alcohol, cellulose or its derivatives for organic membranes, whereas inorganic membranes can be made of sintered metal powder, glass in a spongy structure, carbon, or ceramic. Organic membranes are more widely used because of their low price and their mechanical stability. However, inorganic membranes are thermally and chemically stable and have long durabilities. Elastomer membranes preferentially let organic substances pass and have lower permeabilities for low-boiling gases like nitrogen, oxygen, or hydrogen. Membranes are used as modules which provide relatively high mass transfer areas per volume. The most established ones are pipe modules, coil modules, and plate modules (Table 7.1, Figure 7.3).

Table 7.1: Specific mass transfer areas of the particular membrane modules [8].

Module               | Specific area [m²/m³]
Pipe module          | 25
Plate module         | 100–600
Spiral wound module  | 500–1000
Capillary module     | > 1000
Hollow fibre module  | approx. 10000

A prediction of the membrane separation behavior is difficult and not state of the art yet. The flow conditions on both sides of the membrane play an important role, as they have an influence on the concentration profile. Empirical or semiempirical models are needed for the modeling of the mass transfer. Any extrapolation of these models is difficult, so that experiments are definitely needed for both the choice of the membrane and the design of the membrane separation process. From the qualitative point of view, the statement can be given that high fluxes are only possible if the solubility in the membrane is high. Therefore, polar membranes (e. g. polyvinyl alcohol) are appropriate for the separation of water, while hydrophobic membranes (e. g. polydimethyl siloxane, PDMS) can be used for the separation of organic components from aqueous solutions.


Figure 7.3: Different kinds of membrane modules. Courtesy of Prof. Dr. J. Gmehling.

Besides the design problems, there is always the question about the durability of the membrane. The experience is that in multicomponent mixtures there is usually at least one substance which is detrimental to the membrane. Proof that the membrane is stable can only be achieved by a long-term test. This issue and the design effort are the reason why membrane processes are only used if distillation or other unit operations are not appropriate. But membrane separations are an option in combination with other operations, e. g. with distillation to overcome azeotropic points. For waste water treatment, where small amounts of organic substances have to be removed, membrane separations are a very popular choice. Also, membrane separations are used for the separation of gas mixtures, the recovery of salts from diluted aqueous solutions, the desalination of sea water, or dialysis for patients with a kidney disease. Table 7.2 gives an overview of the most important membrane processes, the phases involved, and the membrane types. For reverse osmosis, pervaporation and vapor or gas permeation the same membrane type is used; the difference is just the phases involved.

We can distinguish between dead-end and crossflow filtration. In dead-end filtration, the flow goes through the membrane in a perpendicular direction. The filtered particles are collected at the surface of the membrane and form a filter cake. In crossflow filtration, the flow direction is parallel to the membrane surface. If particles occur, they might deposit on the membrane. A sufficient flow velocity must be provided to reach an equilibrium between deposition and abrasion.

For reverse osmosis, semipermeable membranes are used, where in the ideal case no transport of dissolved components (e. g. salts) takes place. On the other hand, the membrane should be fully permeable for the solvent itself. Because of the concentration difference, the solvent (e. g. water) goes through the membrane until equilibrium is reached. This is the case when the hydrostatic pressure is equal to the osmotic pressure (osmotic equilibrium). If the pressure on the side containing the dissolved components is increased above this osmotic pressure, the process is inverted, i. e. the solvent concentration on this side is even decreased (reverse osmosis). The most well-known application is sea water desalination.

Table 7.2: Technically important membrane processes [8].

Membrane process  | Type          | Driving force  | Phases | Application
Microfiltration   | porous        | Δp < 3 bar     | S/L    | removal of solid particles from suspensions
Ultrafiltration   | porous        | Δp < 10 bar    | L/L    | waste water treatment, drinking water purification
Nanofiltration    | porous/dense  | Δp < 40 bar    | L/L    | treatment of aqueous solutions and oil fractions
Reverse Osmosis   | porous/dense  | Δp < 80 bar    | L/L    | waste water treatment, drinking water purification
Dialysis          | porous/dense  | conc. diff.    | L/L    | kidney dialysis, acid recycling
Electrodialysis   | dense         | electric field | L/L    | removal of ions from aqueous solutions
Pervaporation     | dense         | fugacity diff. | L/V    | separation of azeotr. systems, removal of unwanted traces
Vapor Permeation  | dense         | fugacity diff. | V/V    | separation of azeotr. systems, water removal in reactions
Gas Permeation    | porous/dense  | fugacity diff. | G/G    | separation of gas mixtures

Figure 7.4: Osmosis, osmotic equilibrium, and reverse osmosis [11]. © Wiley-VCH Verlag GmbH & Co. KGaA. Reproduced with permission.

In any case, where a heavy end component has to be removed from water, reverse osmosis should be taken into account to avoid the evaporation of large amounts of water. A good rule of thumb for the pressure used is 60 bar. Reverse osmosis as explained above is illustrated in Figure 7.4. The equation for the osmotic pressure can be derived from chemical potentials [11]. Referring to Figure 7.4 and setting the activity coefficient for the solvent to 1, one gets

Π = pB − pA = −(RT/vL,j) ln xj (7.2)

where the index j denotes the solvent. Equation (7.2) is very convenient to use, as mass balances from process simulation indicate the molar concentration xj of the solvent. Nevertheless, Equation (7.2) is often rewritten as

Π = pB − pA = RT ∑i ci (7.3)

where the index i denotes the respective solutes. The temptation to use Equation (7.3) manually is great. However, one must take into account that the ionic species dissociate. Therefore, the mole number of dissolved species is higher than expected and so are the osmotic pressures.

Example
10000 kg/h of a 1 wt. % solution of sodium chloride is to be concentrated to 10 wt. % by a single evaporation step without vapor recompression. Alternatively, a reverse osmosis unit can be inserted upstream, where the pressure is restricted to p = 60 bar. The temperature is set at 300 K. Estimate whether the reverse osmosis unit can save operation costs. The electric rate shall be 10 ct/kWh, the steam costs shall be 20 €/t.

Solution
First, the steam demand without the reverse osmosis unit is estimated. The mixed stream consists of 100 kg/h sodium chloride and 9900 kg/h water. To increase the concentration to 10 %, the water amount must be reduced to 900 kg/h. Therefore, 9000 kg/h of water, more than 90 %, have to be evaporated, requiring approximately the same amount of steam. A large effort is necessary to change small concentrations. For the membrane consideration, the mole compositions of the stream are considered:

nwater = 9900 kg/h / 18.015 g/mol = 549.54 kmol/h
nNaCl = 100 kg/h / 58.4425 g/mol = 1.711 kmol/h

The latter splits into 1.711 kmol/h Na+ and 1.711 kmol/h Cl− ions, giving the concentrations

xwater = 549.54/(549.54 + 2 ⋅ 1.711) = 0.9938
xNa+ = 1.711/(549.54 + 2 ⋅ 1.711) = 0.00309
xCl− = 1.711/(549.54 + 2 ⋅ 1.711) = 0.00309

Applying Equation (7.2) with Π ≈ 60 bar (a bit less, as the permeate must have a remaining overpressure to be transported out of the membrane) and a specific liquid volume of vL,water ≈ 0.001 m³/kg = 0.018 m³/kmol = 1.8 ⋅ 10⁻⁵ m³/mol, one gets after solving for the mole fraction

xRO,water = exp[−Π vL,j/(RT)] = exp[−60 ⋅ 10⁵ Pa ⋅ 1.8 ⋅ 10⁻⁵ m³/mol/(8.31447 J/(mol K) ⋅ 300 K)] = 0.9576


This is the minimum mole concentration of the solvent one can achieve in the retentate with reverse osmosis. The mole concentrations of the ions are xRO,Na+ = xRO,Cl− = (1 − 0.9576)/2 = 0.0212, corresponding to the mass concentration

xROw,water = 0.9576 ⋅ 18.015/(0.9576 ⋅ 18.015 + 0.0212 ⋅ 22.99 + 0.0212 ⋅ 35.453) = 0.933

The retentate contains the 100 kg/h NaCl and, correspondingly, 1392.5 kg/h water (check: 1392.5/(1392.5 + 100) = 0.933). Therefore, another 492.5 kg/h water have to be removed by evaporation. For the pressure elevation, assuming a pump efficiency of η = 0.7, the power can be calculated to be P ≈ ṁ vL Π/η = 10000 kg/h ⋅ 0.001 m³/kg ⋅ 60 bar/0.7 = 23.8 kW. Without reverse osmosis, the operation costs are

Cevap = 9000 kg/h ⋅ 20 €/t = 180 €/h (7.4)

For the option with reverse osmosis and evaporation, the operation costs

CRO+evap = 492.5 kg/h ⋅ 20 €/t + 23.8 kW ⋅ 10 ct/kWh = 12.23 €/h (7.5)

can be assigned. Over one year (≈ 8000 h), the difference amounts to 1.34 million €, which should rapidly pay off the investment costs of the reverse osmosis.
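The estimate of this example can be reproduced with a few lines of Python; the sketch below only restates the numbers given above (feed, prices, 60 bar, pump efficiency 0.7) and is not a general design tool:

```python
import math

# Sketch reproducing the reverse osmosis example above.
# All inputs are the example values from the text; R is the gas constant.

R, T = 8.31447, 300.0          # J/(mol K), K
m_feed, w_salt = 10000.0, 0.01 # kg/h feed, 1 wt. % NaCl
M_w = 18.015                   # g/mol, water
pi = 60e5                      # Pa, applied pressure
v_L = 1.8e-5                   # m^3/mol, molar volume of liquid water

# Minimum water mole fraction in the retentate, Eq. (7.2)
x_water = math.exp(-pi * v_L / (R * T))                    # ~0.9576
x_ion = (1.0 - x_water) / 2.0
w_water_ret = x_water * M_w / (x_water * M_w + x_ion * (22.99 + 35.453))

m_salt = m_feed * w_salt                                   # 100 kg/h
m_water_ret = m_salt * w_water_ret / (1.0 - w_water_ret)   # ~1392 kg/h
evap_with_RO = m_water_ret - m_salt * 0.90 / 0.10          # water left to evaporate
evap_without_RO = m_feed * (1 - w_salt) - m_salt * 0.90 / 0.10

P_pump = m_feed * 0.001 * pi / 0.7 / 3600.0 / 1000.0       # kW
cost_without = evap_without_RO / 1000.0 * 20.0             # €/h, steam 20 €/t
cost_with = evap_with_RO / 1000.0 * 20.0 + P_pump * 0.10   # €/h, power 10 ct/kWh
print(f"{cost_without:.0f} €/h vs. {cost_with:.1f} €/h "
      f"-> savings ~{(cost_without - cost_with) * 8000 / 1e6:.2f} million €/a")
```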

Pervaporation is different from the other membrane processes, as not only the membrane separation but also a phase change takes place. A liquid feed stream enters the membrane module and is split into a liquid retentate stream and a permeate stream in the vapor state. By lowering the partial pressure on the permeate side the fugacity difference as the driving force is increased. Nevertheless, the enthalpy of vaporization has to be added; otherwise the temperature on the permeate side would be significantly lowered, especially in multiple-stage membrane modules. Besides the removal of organic compounds from aqueous solutions (or vice versa, according to the polarity of the membrane), pervaporation is an attractive option for the separation of azeotropes in combination with distillation. As mentioned above, the separation in the membrane depends mainly on the solubility and on the diffusion through the membrane, so that the separation characteristics can differ significantly from the vapor-liquid equilibrium. In gas permeation, in contrast to pervaporation the inlet stream is gaseous as well. The mass transfer is proportional to the fugacity difference across the membrane. Porous and dense membranes can be used. The main application is the recycling of hydrogen in the ammonia and methanol manufacturing processes. Moreover, gas per1 Check: 1392.5/(1392.5 + 100) = 0.933.


Figure 7.5: Illustration of thermal equilibrium between permeate and retentate.

Applying gas permeation, the temperatures of the outlet streams are often surprising. Figure 7.5 tries to illustrate this effect by splitting the continuous flow through the membrane into just two sections. Passing the membrane, the permeate is subject to a considerable pressure drop, which might cause a significant temperature decrease due to the Joule–Thomson effect. The pressure of the retentate stream decreases only slightly because of normal friction, giving only a slightly lower temperature. However, permeate and retentate flow in parallel along both sides of the membrane, separated only by the thin membrane itself. Because of the large cross-flow area of the membrane, it can be assumed that they reach thermal equilibrium, where the retentate is cooled down and the permeate is warmed up (dashed rectangles in Figure 7.5) and leaves the membrane afterwards. In the next section, the permeate temperature decreases to an even lower value, as the starting temperature on the feed side is already lower. Again, retentate and permeate come into thermal equilibrium (dashed rectangle). As a result, the retentate outlet temperature corresponds to the lowest temperature reached with the Joule–Thomson effect, whereas the permeate temperature is a mixing temperature of the permeates of the particular sections. Due to internal heat exchange, the retentate outlet temperature is actually lower than the permeate one, although the retentate is not strongly affected by the Joule–Thomson effect. In electrodialysis, the potential difference on both sides of the membrane can be increased by applying an electric field if electrolytes have to be separated [140]. Ion-selective membranes can support this process. Much more information on membranes can be obtained from [141], [142] and [256].


7.2 Adsorption

When solids get into contact with gaseous or liquid substances, interactive forces occur which can cause these substances to be bonded to the solid. This effect is called adsorption. The strength of these bonds can differ from component to component, which can be sufficient to achieve a selective separation. Especially microporous solids with a high specific surface, corresponding to a high capacity, can be used as adsorptive agents. For the separation, besides the different equilibrium loads, steric effects (sieve effect) and kinetic effects (different diffusion coefficients) can be used as well. Especially the development of improved adsorptive agents (e. g. molecular sieves) and better regeneration techniques increased the relevance of adsorption as a thermal separation process [89]. Whenever the separation factor of the vapor-liquid equilibrium is close to 1 (azeotropes, isomers), when difficult process conditions would have to be realized (high or low temperatures) or only small amounts of impurities have to be removed (waste water, exhaust air), adsorption has advantages in comparison with distillation. On the other hand, adsorption always means that the process becomes discontinuous. The adsorption unit is saturated after a time, and a regeneration has to take place. During this time, a second bed (twin plant, see Figure 7.8) must take over until regeneration of the first bed has been finished. Adsorption is used for the drying of gases and solvents, for the removal of condensable components (CO2, H2O, hydrocarbons) upstream of the air separation, for natural gas conditioning, separation of nitrogen or oxygen from air, separation of hydrocarbon mixtures, and the treatment of waste water and exhaust air (Chapter 13.4.6). A number of adsorbents have been developed for various applications. Due to pores in their structure, they have enormous specific surfaces (up to 1000–1500 m²/g), leading to a correspondingly large adsorption capacity. These materials are manufactured by degradation reactions of solids, where fluid reaction products are formed and removed immediately. If the reaction temperature is below the melting point of the solid, the crystal cannot sinter together, and the holes and pores remain. The diffusion of the adsorbed substance inside the pores is usually the step which determines the necessary residence time. Examples of common adsorbent materials are activated carbon, silica gel, clay gel and zeolites, where the latter ones act as molecular sieves. The adsorptive agent (adsorbent) should have a high selectivity and a high capacity. Adsorption of water (except for drying purposes) and polymerizing components must be avoided. As well, a low effort for the regeneration is desirable. There are hydrophilic (e. g. silica gel, aluminium oxide, zeolites) and hydrophobic adsorptive agents (e. g. activated carbon, carbon molecular sieves). Activated carbon is a very inexpensive adsorbent which is used especially for the removal of hydrocarbons or nonpolar components in general from waste water. Its mechanical stability is limited,

and it has a tendency to cause fires. On the other hand, activated carbon is so cheap that regeneration can often be omitted; it can be directly sent to incineration. Zeolites (molecular sieves) are crystalline aluminosilicates of alkali or alkaline earth metals. They have defined cavities and pore diameters. The pore diameters are between 0.3–0.8 nm. The well-defined structure can be used for the separation of molecules of different size or shape, e. g. the separation of linear and branched alkanes or m- and p-substituted aromates. A frequently applied option is the removal of water from gases and solvents with a so-called KA zeolite (cation = K = potassium, pore diameter is 3 or 4 Å). A 3 Å molecular sieve has the advantage that only water is adsorbed and co-adsorption of other components hardly takes place. A very common application is the separation of the ethanol/water azeotrope (see below). To keep the dimensions of the adsorber low, adsorptive agents must have a large surface. It is the inner surface which is decisive. An adsorbent particle is porous; one distinguishes between macropores (d > 50 nm), mesopores (d = 2–50 nm) and micropores (d < 2 nm). The large specific surface is mainly caused by the great number and good accessibility of the micropores. In Table 7.3, ranges for the specific surface of the various adsorptive agents are given.

Table 7.3: Specific surfaces of various adsorptive agents [89].

Adsorptive agent               | Specific surface (m²/g)
Activated carbon, general      | 300–2500
Activated carbon, narrow pores | 750–850
Silica gel, wide pores         | 300–350
Aluminium oxide                | 300–350
Zeolites (molecular sieves)    | 500–800
Carbon molecular sieves        | 250–350

Adsorption processes for a particular component can be characterized by their adsorption isotherms, i. e. the relationship between the adsorbed amount and the concentration in the fluid phase at a certain temperature. There are five particular types, which can be physically interpreted. These adsorption isotherms can hardly be estimated, which means that the quality of an adsorption process cannot be predicted without references or experiments. The adsorption equilibrium between the concentrations of a component in the fluid phase (adsorptive) and in the phase at the surface of the adsorbent (adsorbate) is decisive for the choice of the adsorbent and the design of the adsorption column. The amount adsorbed per g adsorbent depends on the temperature, on the partial pressure or, respectively, the concentration, and the kind of the adsorbent, including the manufacturing process (size of the inner surface) and the history (aging, regeneration).


There are a number of equations for the isothermal adsorption equilibrium of pure substances. Many of them are based on the Langmuir approach, where some simplifications were made (homogeneous surface, no interaction between the adsorbed molecules):
$$\frac{n_i}{n_{i,\mathrm{mon}}} = \frac{K_i\, p_i}{1 + K_i\, p_i} \tag{7.6}$$

where $n_{i,\mathrm{mon}}$ is the load for the limiting case of a monomolecular layer.

For the correlation of multicomponent adsorption isotherms, there is at least some theory, which is similar to the procedure for correlating VLE. However, a prediction method like UNIFAC is still missing. Furthermore, for technical applications the parameters characterizing the adsorbent, like specific surface, pore distribution, crystal irregularities, and interactions with the adsorbed species, are often not reproducible, not to mention the kinetics and mass transfer effects. More detailed information about adsorption isotherms is given and derived in [143]. It is obligatory that adsorption equilibria be measured, and this is usually a large effort.
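The following minimal sketch evaluates Equation (7.6). It is only an illustration: the Langmuir constant K and the monolayer loading n_mon used here are assumed example values, not data from the text; in practice they must be fitted to measured adsorption equilibria, as emphasized above.

```python
# Minimal sketch of the Langmuir isotherm, Equation (7.6).
# K and n_mon are hypothetical example values; real parameters must be
# fitted to measured adsorption equilibria.

def langmuir_loading(p, K, n_mon):
    """Adsorbed amount n_i at partial pressure p.

    p     : partial pressure of the adsorptive in the fluid phase (bar)
    K     : Langmuir constant (1/bar), temperature-dependent
    n_mon : loading of a complete monomolecular layer (e.g. mol/kg)
    """
    return n_mon * K * p / (1.0 + K * p)

if __name__ == "__main__":
    K = 2.5        # 1/bar, assumed
    n_mon = 3.0    # mol/kg, assumed
    for p in (0.01, 0.1, 0.5, 1.0, 5.0):
        print(f"p = {p:5.2f} bar  ->  n = {langmuir_loading(p, K, n_mon):.3f} mol/kg")
```

The loading approaches n_mon for large partial pressures, which corresponds to the type I isotherm discussed below.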

Figure 7.6: Adsorption isotherm types [89]. © Wiley-VCH Verlag GmbH & Co. KGaA. Reproduced with permission.


Figure 7.7: Course of an adsorption with time [89]. © Wiley-VCH Verlag GmbH & Co. KGaA. Reproduced with permission.

There are five types of adsorption isotherms (Figure 7.6) [89]. For type I, a monomolecular layer is formed. This behavior can be described with Equation (7.6). For types II and IV, more layers are formed, and condensation in the pores takes place. For types III and V there is no tendency to form a monomolecular layer. In technical applications, there are usually multicomponent mixtures, where the components involved compete for the space on the adsorbent surface. There are phase equilibrium diagrams similar to those for vapor-liquid equilibria [89]. Even azeotropic points occur. Adsorption is exothermic; as a first guess, it is a good approach to assume that the enthalpy of adsorption is approx. 1.5 times the enthalpy of vaporization of the adsorptive [143]. Shortcut approaches and recommended constraints for the design of adsorbers are explained in [143].

In Figure 7.7, the adsorption procedure is illustrated. In principle, there are four phases that can be distinguished. In the first phase, the adsorption bed is exposed to the process stream. Gas phase applications can be designed both for upflow and downflow, whereas for liquid applications upflow operation is preferable, as gravity can support the process during the adsorption cycle itself by promoting fluid distribution and during desorption by assisting the draining during heating. The adsorption itself takes place in the adsorption zone, which proceeds through the column towards the outlet of the adsorber with time. Different components have different adsorption zones. The stronger the component is adsorbed, the slower the saturation zone moves towards the outlet of the adsorber. The component with the weakest adsorption can pass the adsorption bed and be obtained in pure form. When the saturation zone of the component to be removed approaches the end of the adsorption bed, saturation is reached, and it is necessary to stop and switch over to a second adsorption bed to avoid a breakthrough. Therefore, adsorption units usually consist of twin columns (Figure 7.8). While the second column is in operation, the loaded column must be regenerated in a second phase. For the regeneration the flow direction is reversed. Regeneration can be done by increasing the


Figure 7.8: Typical arrangement of an adsorption twin plant.

temperature (temperature-swing adsorption, TSA) or lowering the pressure (pressure-swing adsorption, PSA), by replacement with another component or by lowering the partial pressure of the pollutant in the gas being in contact with the adsorbent. The latter is possible by flushing the adsorber with an unloaded gas (Figure 7.8). Combinations are possible; one of the most often applied procedures is flushing with steam, where the partial pressure is lowered and the temperature is elevated. Of course, the adsorptives then have to be removed from the steam in a further step, which makes the whole process more complex. As usual, it takes a large effort to remove the last traces of the adsorptives in the bed; therefore, a residual load after regeneration is accepted, which in turn reduces the capacity of the next adsorption cycle. After regeneration, in a third phase some time should be taken into account to get back to adsorption conditions, e. g. by cooling or repressurizing the bed after TSA or PSA, respectively. Usually, the cycle is planned in a way that regeneration is faster than saturation to ensure continuous operation. Therefore, it might happen that the regenerated bed is not brought back into service immediately, giving a fourth “standby” phase.

On a technical scale, adsorption has the disadvantage of being in principle a discontinuous process. It would be desirable to operate the adsorption continuously in countercurrent flow like other thermal separation processes. However, countercurrent flow can hardly be realized with a solid because of its attrition. Several attempts have been made to overcome this difficulty. The most popular one is the so-called “simulated moving bed” (SMB), invented by UOP (Universal Oil Products Inc., Des Plaines, Illinois). The principle is explained in the following paragraphs.

SMB is a continuous chromatography process in countercurrent flow with a binary system. The difference between adsorption and chromatography is that in chromatography the mobile phase achieves the desorption. Both components appear at the outlet of the column; they are separated due to the different times they need for passing the column. In fact, in SMB the adsorbent is not really moved. The movement of the solid phase is replaced by changing the position of the inlet and outlet streams in a cyclic way. An inlet stream is continuously split into two outlet streams, which consist of purified components if the SMB is adequately designed. The difficulties of the SMB are the mechanical complexity and the complicated design.

Figure 7.9 shows a case where the column is fixed. The feed enters the column in the middle. Both components pass the column with different velocities and separate. If the column itself moved in the opposite direction of the mobile phase with a velocity that is between the velocities of the two components, the components would appear to move in different directions (Figure 7.10). To get the SMB arrangement, the mobile phase has to flow in a closed loop. The products are withdrawn at defined places with the exact volume flow. As mentioned above, the thought experiment of the moving column is replaced by the movement of the inlet and outlet nozzles (Figure 7.11).

Figure 7.9: Normal chromatography arrangement with a fixed adsorbent.

Figure 7.10: Separation due to the movement of the column.

Figure 7.11: Principle of the simulated moving bed.


Figure 7.12: 4A molecular sieve pellets. © Smokefoot/Wikimedia Commons/CC BY-SA 4.0. https://creativecommons.org/licenses/by-sa/4.0/.

One of the most popular applications of adsorption is the dehydration of ethanol in bioethanol production. Ethanol and water have an azeotrope (Figure 5.47), which cannot be split into the pure components by simple distillation. On the other hand, there is a strict specification for bioethanol concerning its water content. Therefore, other techniques must be applied (Chapter 5.8). Pressure-swing adsorption (PSA) can be a way, using a molecular sieve. The ceramic pellets are shown in Figure 7.12. The water molecules can diffuse through the pores, whereas the larger ethanol molecules are retained [144]. At the outlet of the adsorption bed, the water is more or less completely removed. Due to the heat of adsorption, the temperature of the bed is strongly elevated. The effect is even used for process control, as the temperature indicates where the saturated zone is. Once the mass transfer zone approaches the outlet of the bed, regeneration starts, and it is switched over to a second bed. Desorption is done by first applying a vacuum to the tower. To remove the remaining water, the adsorbent bed is purged with purified ethanol vapor in opposite flow direction, i. e. the vapor enters the column from the opposite side at the bottom. Figure 7.13 shows the block diagram of the process [144]. In column K1, distillation of the raw ethanol is performed. A vapor stream close to azeotropic concentration (approx. 96 wt. %) is taken at the top of the column. It passes the adsorber A1 from the top to the bottom. Downstream the adsorber, it is condensed at the shell side of the falling film evaporator W2, serving as a heating agent. The generated steam is transformed to a higher pressure by a jet pump (Chapter 8.3) and used for direct steam heating (Chapter 13.1) in column K1. The mixture of ethanol and water vapor from the regeneration of the adsorber bed A2 is condensed in heat exchanger W1 and led back to the distillation column. The dehydration of the ethanol–water azeotrope is a popular, but not really a typical application of molecular sieves. The amount of water being handled is quite large.


Figure 7.13: Block diagram for ethanol dehydration [144].

Normally, the water concentration of the streams being treated is a few hundred ppm; here, it is approx. 4 %. Therefore, the bed is loaded rapidly, and the cycle times are pretty short, just in the range of minutes. Often additional beds are used to provide enough time for regeneration.

7.3 Crystallization

Crystallization should in fact not be called an alternative separation process, as it is the oldest one. What the alternative processes in this chapter have in common is more or less that they cannot be designed on a purely theoretical basis, and that “something solid” is involved. In crystallization, we can distinguish between crystallization from a solution, which is often applied for purifying inorganic salts, and crystallization from a melt, which is often used for purifying organic substances.

As in distillation, in crystallization energy for the cooling or the evaporation of the solvent is necessary to create a second phase. Because of the low density difference, the separation of the solid and the liquid phase is not as easy as the vapor-liquid separation in distillation. Also, the transport of the solid phase is difficult. Often, the viscosity is high, making in turn the mass transfer of the crystallizing component difficult.

Crystallization has advantages in comparison with the other thermal separation options, especially distillation, if the components to be separated have a low thermal stability or a low (or even no) vapor pressure. As well, it can have advantages if the separation factor is close to 1, e. g. for azeotropes or for the separation of isomers. Crystallization can be used to get extremely pure products. In most cases, it takes place


in form of eutectic systems, and in this case the crystallizing component is pure and can be obtained by melt crystallization with one separation stage. In practical applications, the separation of the solid and the liquid phase is not perfect, so that inclusions of mother liquor will occur in the solid phase.2 This phenomenon depends mainly on the crystal formation and growth. Regularly formed crystals are useful to achieve a good separation of the two phases.

The decisive thermodynamic issue for the description of crystallization is the solid-liquid equilibrium (SLE, Chapter 2.6). In the case of eutectic systems, a pure solid phase is obtained, but it is a disadvantage that only part of this component can crystallize. The remaining mother liquor has eutectic concentration and leads to mixed crystals if crystallization is continued. This limitation can be overcome if crystallization is combined with other thermal separation processes. For the design of crystallizers an exact knowledge of the solid-liquid equilibria with respect to temperature is necessary. For most of the salts, the solubility increases with temperature. For some inorganic salts, the solubility can decrease with temperature. These are the so-called hardness components. Examples are gypsum (CaSO4) or calcium carbonate (CaCO3).

Also, the kinetics of the seed crystal formation and of the crystal growth are important for the equipment design in crystallization. An oversaturation is necessary for the formation and the growth of crystals. It can be achieved in different ways, e. g. by cooling, evaporation of the solvent or depressurization, which is another way of evaporating the solvent. Furthermore, crystallization can be forced by the addition of a new component. Oversaturation by cooling has advantages if the solubility increases strongly with temperature. If the temperature dependence is less significant, an oversaturation by evaporation might be favorable.

For the seed crystal formation, there are several mechanisms. Crystals can be formed at rough surfaces or impurities or by abrasion of small crystals from larger ones. Many constraints like oversaturation and flow velocity have an influence. The more the solution is subcooled, the more seed crystals are formed. However, due to the increase of the viscosity with decreasing temperature the rate of seed crystal formation decreases after passing through a maximum. The following relationship between the seed crystal formation rate r and the oversaturation Δc has been found:
$$r = \Delta c^{\,b} \tag{7.7}$$
where b = 3–6. A similar equation can be set up for the crystal growth:
$$r = \Delta c^{\,w} \tag{7.8}$$
where w = 1–2.

2 Melting and recrystallization is a countermeasure.
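The two power laws (7.7) and (7.8) can be compared with a short sketch. The exponents below are taken from the stated ranges (b = 3–6, w = 1–2); the proportionality constants are set to 1 for illustration only, so only the trend with oversaturation is meaningful, not the absolute numbers.

```python
# Sketch of the competing rate laws, Equations (7.7) and (7.8).
# Exponents from the stated ranges; prefactors set to 1 (illustration only).

def nucleation_rate(dc, b=4.0):
    """Seed crystal formation rate, r ~ (delta c)^b, Equation (7.7)."""
    return dc ** b

def growth_rate(dc, w=1.5):
    """Crystal growth rate, r ~ (delta c)^w, Equation (7.8)."""
    return dc ** w

if __name__ == "__main__":
    for dc in (0.5, 1.0, 2.0, 4.0):
        ratio = nucleation_rate(dc) / growth_rate(dc)
        print(f"oversaturation {dc:4.1f}: nucleation/growth ratio = {ratio:8.2f}")
    # Because b > w, the ratio rises steeply with oversaturation:
    # strong oversaturation favors many new (small) crystals,
    # low oversaturation favors growth of the existing ones.
```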

The size of the crystals strongly depends on the degree of the oversaturation. Seed crystal formation and crystal growth are competing processes. A large oversaturation promotes the seed crystal formation, giving small crystals. Therefore, the oversaturation has to be kept small if large crystals are the target. The control of the oversaturation is decisive for crystallization processes.

There are several options for the choice of equipment for industrial crystallization processes. In suspension crystallizers, the crystals are dispersed in the solvent or in the melt, respectively. The heat of fusion is transferred to the liquid. Suspension crystallizers are operated continuously. The aim is to get separate zones for the oversaturation and the crystal growth. All oversaturation mechanisms can be applied. In the following, evaporation is taken as example. Usually, crystals are heavier than the mother liquor. To keep them in the suspension, there must be an upward flow in the crystallizer so that the crystals are located in definite layers according to their size. There are various types of crystallizers to realize this principle.

The one most widely used is the forced circulation crystallizer (FC). It is normally operated under vacuum conditions. As can be seen in Figure 7.14, the suspension is circulated with a pump through a heater, causing evaporation in the upper part of the vessel. The concentration of the dissolved solids rises, and precipitation takes place. The slurry can be continuously removed from the vessel. The FC crystallizer is appropriate if crystal size is not an issue. There is no mechanism to redissolve small crystals.

Larger crystals can be obtained with the DTB (draft tube baffle) crystallizer [145]. Like the FC, it is operated under vacuum or a slight overpressure. It is provided with a skirt baffle which forms a partitioned settling zone. Inside the baffle there is a vertical draft tube, to which the feed and the recycle are directed (Figure 7.15). Outside the skirt baffle, the mother liquor containing the small crystals is withdrawn and led to a heater, where the small crystals have a chance to be dissolved again. At the top of the crystallizer, vapor is generated, giving the desired oversaturation. The formed crystals can settle down to the product discharge at the bottom. In comparison with the FC crystallizer, the internal loop shows less attrition and crystal breakage; large crystals which were formed are maintained.

The largest crystals are obtained in the Oslo type crystallizer, where the crystals are grown in a fluidized bed (Figure 7.16). The growth is limited by the residence time. There is again an external recirculation loop with a heat exchanger, where the temperature is elevated. The loop reenters the crystallizer near the top. Evaporation can take place, giving the oversaturation. The oversaturated solution is led to the bottom of the crystallizer, where it first comes into contact with the larger crystals, so that these crystals can further grow instead of forming small new ones. At the bottom, the product is withdrawn. A classified bed is formed above, with the lowest concentration at the nozzle for the recirculation outlet. In the Oslo type crystallizer, hardly any attrition and crystal breakage occurs [145].


Figure 7.14: Forced circulation crystallizer.

Figure 7.15: DTB crystallizer.

Figure 7.16: Oslo type crystallizer.

Layer crystallizers operate discontinuously. They have a cooled wall where the crystallization takes place. They are used for melt crystallizations, either as falling film crystallizers or as static crystallizers. Falling film crystallizers work analogously to falling film evaporators (Figure 4.18). The liquid runs down the inner side of the tube bundle. The tube bundle is cooled with a heat transfer fluid on the shell side. The crystallization begins at the tube wall. The melt is recycled until the required amount of liquid has been crystallized. After the liquid has run out of the apparatus, the solid layer is slightly heated up to remove impurities in the surface layer. Finally, the whole crystal layer is melted and removed from the heat exchanger. In static crystallization, cooling elements dip into the melt. By varying the temperature, the particular steps as described above can be carried out. All these crystallizers work very reliably, as there are no moving parts or mechanical devices for the removal of the liquid. However, the residence times are quite long, giving large equipment volumes.

More information about crystallization can be obtained from [146] and [147].

8 Fluid flow engines

8.1 Pumps

There is no doubt that pumping is a science of its own. In most engineering units, there is a “rotating equipment” department which is dedicated to the selection of the appropriate pump, a task that deserves gratitude. Otherwise, vendor companies usually give assistance. However, a process engineer must be able to specify what the pump should do in the process. The following chapter will introduce the fundamental terms for the specification of a pump from a process engineering point of view. The explanations refer to centrifugal pumps, which are the most common ones (80–90 % of all pump applications).

In process simulation, pumps usually do not play a decisive role. The pressure dependence of the enthalpy of a liquid is neglected anyway, so the power consumption of the pump is transferred into a slight temperature elevation, normally less than 1 K. The exact arrangement of the source and the target vessel is determined later in the project, as well as the necessary pressures and the pump characteristics, which are decisive for the pump efficiency. Therefore, the power consumption of the pump is not accurately calculated; at most, the result shows the correct order of magnitude. The calculation of the power is performed according to the same scheme as for compressors, where it is much more important. It is explained in Chapter 8.2.

A pump conveys a liquid from one piece of equipment, usually a vessel, to another one. Between the two locations, a pressure and/or a height difference and the pressure drop in the connecting line have to be overcome. Figure 8.1 shows an example of a principle sketch of this situation. For the arrangement in Figure 8.1, the pressure elevation by the pump Δppump can be determined by the Bernoulli equation
$$p_0 + \rho g h_0 - \Delta p_\text{inlet line} + \Delta p_\text{pump} - \Delta p_\text{outlet line} - \rho g h_1 = p_1 \tag{8.1}$$

where the indices 0 and 1 refer to start and end of the whole line, respectively. The pressure drops of inlet and outlet line comprise the line pressure drops (Equation (12.1)), the pressure drops caused by special piping elements (Equation (12.18)) and the pressure drops through control valves, which can often only be more or less set arbitrarily (e. g. ΔpCV = 1 bar).1 The terms for the kinetic energy are neglected, as the velocities upstream and downstream the pump do not change very much. The pressure elevation by the pump is usually converted into the delivery head Hpump:
$$H_\text{pump} = \frac{\Delta p_\text{pump}}{\rho g} \tag{8.2}$$

1 as long as the valve is not fully specified, which takes place in the detailed engineering phase.


Figure 8.1: Example for a setup of a pumping process.

Analogously, the remaining terms in Equation (8.1) can be converted into heads. The delivery head Hpump is a function of the volume flow; the pump characteristics curve shows how the delivery head decreases with increasing volume flow (Figure 8.2). On the other hand, with increasing volume flow the pressure drops of inlet and outlet line increase, whereas the differences in head and the pressures p0 and p1 in the equipment remain constant, which is called the plant characteristics. The situation can be illustrated by drawing the pump and the plant characteristics in one diagram (Figure 8.2). In a given arrangement, the operation point is defined by the intersection between the curves of pump and plant characteristics. Usually, this operation point does not fit the requirements of the process. In this case, the plant characteristics can be manipulated by throttling the flow with a control valve (Figure 8.2). If this is not possible, the pump characteristics can be changed as well, e. g. by changing the blade wheel diameter

Figure 8.2: Typical curvatures of pump and plant characteristics.


or by changing the number of revolutions per minute with a frequency converter. It should be emphasized that this well-known construction is just an illustration; in practical applications, it is sufficient to define the requirements (volume flow, delivery head) for the pump and provide a control device to adjust the plant characteristics.

Besides V̇ and Hpump the so-called NPSH value (net positive suction head) is an important operating parameter of a pump. It is relevant to avoid cavitation, which is the worst failure of a circulation pump. Cavitation means that vapor bubbles occur in the pump because of a local reduction of the static pressure below the saturation pressure of the process liquid. These bubbles violently implode when they are transported into regions in the pump with higher pressures and therefore cause erosion, often leading to the mechanical destruction of the pump. Moreover, the mechanical stress on the pump impeller, the shaft, the seals and the bearings is increased. The NPSH value indicates how far the medium inside a pump is away from its saturation pressure in a static case, i. e. without movement in the pump. In the case of Figure 8.1 the NPSH value would be calculated to be
$$\mathrm{NPSH} = \frac{p_0 + \rho g h_0 - \Delta p_\text{inlet line} - \frac{\rho w^2}{2} - p_s}{\rho g} \tag{8.3}$$

meaning that NPSH is the difference between the total pressure inside the pump and the saturation pressure of the liquid, transformed into a height. This NPSH value must be greater than a minimum value,2 which has to be determined experimentally by the vendor. A safety margin of 0.5 m should be kept. It depends on the type and construction of the pump and on the operating conditions. For boiling liquids with low flow velocities (i. e. pressure drop and kinetic term are negligible), Equation (8.3) reduces to NPSH = h0

(8.4)

In case of serious difficulties in maintaining the necessary NPSH value, special pumps are available which need very low NPSH values and are even capable of conveying liquids at their boiling point [148]. From the process engineering point of view, the specification of the pump can exclude the pump itself, by just defining the states upstream and downstream the pump using Equation (8.1).3 Then the pump specialist can choose an appropriate pump according to his knowledge about the necessary Δppump and the corresponding mass flow with its various physical properties. Usually, we distinguish between normal, maximum, and minimum case. The minimum case should be defined by the most favorable conditions for the pump, i. e. the maximum level in the vessel upstream the pump, the lowest mass flow (i. e. lowest pressure drop in the line) and the minimum level in the target vessel, whereas the maximum case is just the other way round. It is then up to the pump specialist to decide which pump type can cover this load range, and which pump efficiencies are achieved.

2 called “NPSH value of the pump”.
3 Unfortunately, it often occurs that these states are not available when the pump must be specified. In these cases, the process engineer must guess as well as possible.

Example
In the exemplary arrangement in Figure 8.1, methanol (ṁ = 12000 kg/h, t = 30 °C, ρ = 782 kg/m³) is transferred from a vessel to a distillation column. Some items of the specification shall be evaluated:
(a) the necessary NPSH value of the pump;
(b) the normal pressure difference to be built up by the pump;
(c) the maximum pressure difference to be built up by the pump;
(d) the maximum power consumption of the pump.
In the sketch, the equivalent lengths of inlet and outlet line are given, meaning that all the bends, elbows etc. are already included. The pipe diameters are d1 = 4″ for the inlet line and d2 = 3″ for the outlet line. The pressure drop of the pipe can be calculated with Equation (12.1):
$$\Delta p = \lambda\,\frac{\rho w^2}{2}\,\frac{L}{d}$$
where, for simplicity, a standard value of λ = 0.03 is used for the friction factor. For the valve, a pressure drop of Δp = 1 bar shall be assumed. The efficiency of the pump is η = 0.7.

Solution
First, the velocities in the pipes and the pressure drops are calculated:
$$w_1 = \frac{4\dot m}{\rho \pi d_1^2} = \frac{4 \cdot 12000\ \text{kg/h}}{782\ \text{kg/m}^3 \cdot \pi (4 \cdot 25.4\ \text{mm})^2} = 0.526\ \text{m/s}$$
$$w_2 = \frac{4\dot m}{\rho \pi d_2^2} = \frac{4 \cdot 12000\ \text{kg/h}}{782\ \text{kg/m}^3 \cdot \pi (3 \cdot 25.4\ \text{mm})^2} = 0.935\ \text{m/s}$$
The pressure drops in the lines are
$$\Delta p_1 = \lambda\,\frac{\rho w_1^2}{2}\,\frac{L_\text{eq1}}{d_1} = 0.03 \cdot \frac{782\ \text{kg/m}^3 \cdot (0.526\ \text{m/s})^2}{2} \cdot \frac{6\ \text{m}}{4''} = 192\ \text{Pa}$$
$$\Delta p_2 = \lambda\,\frac{\rho w_2^2}{2}\,\frac{L_\text{eq2}}{d_2} = 0.03 \cdot \frac{782\ \text{kg/m}^3 \cdot (0.935\ \text{m/s})^2}{2} \cdot \frac{30\ \text{m}}{3''} = 4035\ \text{Pa}$$

(a) For the NPSH value, the low liquid level (LLL in Figure 8.1⁴) is relevant. The static liquid head is the sum of the height of the tangent line H1 and the low liquid level LLL. The saturation pressure of methanol at t = 30 °C is ps = 0.219 bar. According to Equation (8.3) we get

4 Numbers in technical drawings are in mm if no unit is given.

$$\mathrm{NPSH} = h_0 + \frac{p_0 - \Delta p_\text{inlet line} - \frac{\rho w^2}{2} - p_s}{\rho g} = 5\ \text{m} + \frac{1\ \text{bar} - 192\ \text{Pa} - \frac{782\ \text{kg/m}^3 \cdot (0.526\ \text{m/s})^2}{2} - 0.219\ \text{bar}}{782\ \text{kg/m}^3 \cdot 9.81\ \text{m/s}^2} = 15.1\ \text{m}$$

= 10.0 bar (c)

For the maximum pressure elevation, the low liquid level (LLL) and the maximum operation pressure of the column pmax are taken as input. One gets Δppump,max = 10 bar − 1 bar + 192 Pa + 4035 Pa + 1 bar + 782 kg/m3 ⋅ 9.81 m/s2 ⋅ (18 − 5) m

= 11.04 bar (d) The maximum power consumption of the pump is Pmax = V̇ Δppump,max /η = =

̇ pump,max mΔp

12000 kg/h ⋅ 11.04 bar

ρη

782 kg/m3 ⋅ 0.7

= 6.7 kW

Most companies have their own guideline for the installation of a pump, depending on reliability demands, the control philosophy, and the type of the pumps. Figure 8.3 shows an example with the most important features. Two pumps are installed in parallel, so that in case of failure of the operating pump a switch to the additional one can be performed immediately, perhaps even automatically. It should be mentioned that in this arrangement even the inlet pipe can be exposed to the high outlet pressure generated by the pump. In case the nonoperating pump is not isolated by closing the valves up- and downstream the pump, the operating pump will convey liquid

262 | 8 Fluid flow engines

Figure 8.3: Typical arrangement for a pump installation.

backwards through the nonoperating one and pressurize even its inlet line, which is normally exposed only to the low inlet pressure. There is the so-called minimum bypass line branching from the product line, ending in the vessel containing the feed of the pump. The reason is that most pump types should not operate against a closed valve. If the pressure in the outlet line exceeds a certain value, the control valve in the bypass line opens so that further pressure build-up is inhibited. Furthermore, it is prevented that the temperature of the system increases, as the motor power of the pump is no longer removed, which could as well lead to damage of the system. Instead of the control valve, an orifice can act as the restriction in the bypass line. This is only acceptable for pumps with relatively low power consumption. The minimum bypass is certainly an energy waste, as it simply reduces the pressure of a stream which has just been built up. The orifice admits a bypass stream even at normal operation. For large pumps, this would correspond to a significant energy waste, whereas for small pumps it might be acceptable, as the cost of a control valve can be saved. Furthermore, the typical safety sensors can be seen in Figure 8.3, i. e. vibration or, in this case, tem-


perature sensors, which are linked to an interlock that switches off the pump and, in most cases, switches on the substitute pump. The flow of a pump can in principle be controlled in two ways: First, the minimum bypass can lead some of the flow back to the source vessel. As discussed above, this option consumes electrical energy, as the pump simply provides its maximum flow according to its characteristics. The second option, the use of a frequency converter, is more elegant but also more expensive. It makes it possible to set the rotation speed of the pump according to the demand for the volume flow. However, a frequency converter consumes additional energy. They are a good solution for frequently changing run cases. For fixed operating conditions, they often cause problems as they can compensate for a deteriorated performance due to erosion. The operators do not notice, until total damage occurs [279]. There are three types of pumps. – Centrifugal pumps: Centrifugal pumps (Figure 8.4) have been thoroughly discussed above. The operation of centrifugal pumps is illustrated in Figure 8.5. The rotating impeller trans-

Figure 8.4: Centrifugal pump in standard arrangement [149]. © Hydrocarbon Processing.

Figure 8.5: Functional principle of a centrifugal pump [149]. © Hydrocarbon Processing.




fers its rotational energy to the liquid, which is accelerated and discharged into the casing due to centrifugal forces. When the casing area increases, the kinetic energy of the liquid is converted to pressure. Centrifugal pumps are used for large volume flows with moderate pressure heads. They are appropriate for low to moderate viscosities. An undissolved vapor fraction up to 5–7 vol. % can be tolerated in the liquid, however, with increasing vapor fraction the efficiency and the NPSH value (cavitation!) are decreasing. Also, the solid content should be limited, 8 % can be regarded as the maximum. Oscillating displacement pumps: The most frequently used oscillating displacement pumps are piston pumps and membrane pumps (Figure 8.6). In principle, they work discontinuously; for the piston pump there is a well-defined intake stroke, where the piston generates an underpressure to suck in the liquid to be conveyed. For the outlet stroke, the piston is moved back and generates an overpressure on the liquid which pushes it out of the pump. Membrane pumps work in an analogous way, but the piston does not get in direct contact with the conveyed liquid. Instead, the piston is actuating a working fluid which moves a membrane to and fro. Membrane pumps are especially useful for corrosive fluids. Unlike centrifugal pumps, oscillating displacement pumps are appropriate for moderate volume flows at high pressure generation. The discontinuous operation can be overcome if necessary. The use of several pump stages where the phases are displaced can yield a quasi-continuous flow. Another option is the installation of a pressure vessel filled with pressurized gas connected to the outlet line. The gas will be further compressed when the outlet stroke takes place; during the inlet stroke, the gas expands and represents

Figure 8.6: Operation modes of piston and membrane pump.


Figure 8.7: Basic sketch of a gear pump. © Duk/Wikimedia Commons/CC BY-SA 3.0. https:// creativecommons.org/licenses/by-sa/3.0/deed.en.



an additional pressure source. The pump characteristics are completely different from that of a centrifugal pump. After setting the repetition frequency, the volume flow is determined, and the pressure obtained depends only on the plant characteristics. Rotating displacement pumps: Well-known rotating displacement pumps are gear pumps (Figure 8.7). The cogs represent small compartments which are continuously filled with liquid at low pressure and moved to the high pressure level. Comparably high pressure elevations up to 40 bar are possible. As for oscillating displacement pumps, the volume flow is directly proportional to the rotation speed. Gear pumps are especially appropriate for highly viscous fluids.

8.2 Compressors

Compressors, vacuum pumps, fans, and other fluid flow engines for the pressure elevation of gases are widely applied in industry for the transport of fluids or for establishing a certain pressure to perform a reaction or a separation. In process simulation, compressors cannot be regarded as a simple flash, as they cannot be specified by two outlet variables. Instead, more information about the course of the change of state is necessary. For most types of compressors, it can be assumed that they are adiabatic, i. e. the heat exchange with the environment does not play a major role. Proceeding from this assumption, the calculation route is illustrated by the adiabatic compression of a vapor. The changes in kinetic energy can be neglected in the energy balance. The calculation is divided into the reversible adiabatic calculation and the integration of losses.
1. Reversible calculation: The reversible case characterizes the process that requires the lowest power consumption. It is specified by the outlet pressure p2 at constant entropy. According to the Second Law, the outlet temperature is calculated by the isentropic condition
$$s_2(T_{2,\text{rev}}, p_2) = s_1(T_1, p_1) \tag{8.5}$$

where the indices 1 and 2 denote the inlet and the outlet state, respectively. In process simulation calculations, Equation (8.5) is directly evaluated with the corresponding equation of state to determine T2rev. A simplified calculation using the ideal gas equation and assuming a constant heat capacity yields [11]
$$\frac{T_{2,\text{rev}}}{T_1} = \left(\frac{p_2}{p_1}\right)^{\frac{\kappa - 1}{\kappa}} \tag{8.6}$$

with
$$\kappa = \frac{c_p^{\text{id}}}{c_v^{\text{id}}} \tag{8.7}$$

The specific power consumption for the reversible case is given by
$$w_{t12,\text{rev}} = h_{2,\text{rev}}(T_{2,\text{rev}}, p_2) - h_1(T_1, p_1) \tag{8.8}$$
2. Integration of losses: The actual specific power consumption required is calculated with the isentropic and mechanical efficiency:
$$w_{t12} = \frac{w_{t12,\text{rev}}}{\eta_\text{th}\,\eta_\text{mech}} \tag{8.9}$$

while the power consumption of the process is
$$P_{12} = \dot m\, w_{t12} \tag{8.10}$$

The isentropic efficiency ηth is an empirical factor which summarizes all the effects of the irreversibility of the process. ηth = 0.8 is often a reasonable choice. It usually decreases with increasing pressure ratio. ηmech is the efficiency of the energy transformation of the compressor engine (electrical to mechanical energy), which is not related to the process flow. For large drives, ηmech = 0.95 can be used. The outlet conditions of the flow are then calculated backwards via
$$h_2(T_2, p_2) = h_1(T_1, p_1) + \frac{h_{2,\text{rev}}(T_{2,\text{rev}}, p_2) - h_1(T_1, p_1)}{\eta_\text{th}} \tag{8.11}$$

Note that ηmech has been omitted. Knowing h2 , the outlet temperature T2 = f (h2 , p2 ) can be calculated by an iterative procedure. Example Steam (5 t/h, p1 = 1 bar, t = 110 °C) is compressed adiabatically to p2 = 5 bar. The efficiency of the compressor is given by ηth = 0.8, while the mechanical efficiency is supposed to be ηmech = 0.9. Use a high-precision equation of state, e. g. [29].

8.2 Compressors | 267

Solution The first step is always the reversible calculation using Equation (8.5). It gives the condition s2 (T2rev , 5 bar) = s1 (383.15 K, 1 bar) = 7.4155 J/(g K) The solution is T2rev = 560.55 K = 287.4 °C. The power required for the reversible case is ̇ 2rev − h1 ) = m[h(560.55 ̇ Wt12,rev = m(h K, 5 bar) − h(383.15 K, 1 bar)] = 5000 kg/h ⋅ (3038.50 − 2696.34) J/g = 475.2 kW To obtain the real consumption of the compressor, the efficiencies must be considered. One gets Wt12 =

Wt12,rev = 660.04 kW ηth ηmech

For the outlet state of the steam, the mechanical efficiency has no influence; only the thermal efficiency must be taken into account: h2 − h1 =

h2rev − h1 3038.50 − 2696.34 = J/g = 427.70 J/g , ηth 0.8

giving h2 = 427.71 J/g + 2696.34 J/g = 3124.04 J/g At p2 = 5 bar, the outlet temperature T2 can be determined to be T2 = 601.91 K or t2 = 328.76 °C.
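The calculation route of Equations (8.5)–(8.11) can also be sketched in a few lines of code. The CoolProp package is used here purely as a stand-in for the high-precision equation of state [29]; the choice of package is an assumption, and any accurate water property routine could be substituted. The printed results should approximately reproduce the example above.

```python
# Sketch of the adiabatic compression calculation, Equations (8.5)-(8.11),
# using CoolProp as an assumed substitute for the equation of state [29].
from CoolProp.CoolProp import PropsSI

m_dot = 5000.0 / 3600.0          # kg/s
p1, T1, p2 = 1.0e5, 383.15, 5.0e5
eta_th, eta_mech = 0.8, 0.9

h1 = PropsSI('H', 'T', T1, 'P', p1, 'Water')      # J/kg
s1 = PropsSI('S', 'T', T1, 'P', p1, 'Water')      # J/(kg K)

# 1. Reversible (isentropic) step, Equations (8.5) and (8.8)
T2_rev = PropsSI('T', 'P', p2, 'S', s1, 'Water')
h2_rev = PropsSI('H', 'P', p2, 'S', s1, 'Water')
W_rev = m_dot * (h2_rev - h1)

# 2. Integration of losses, Equations (8.9)-(8.11)
W_real = W_rev / (eta_th * eta_mech)
h2 = h1 + (h2_rev - h1) / eta_th
T2 = PropsSI('T', 'P', p2, 'H', h2, 'Water')

print(f"T2_rev = {T2_rev:.2f} K, W_rev = {W_rev/1000:.1f} kW")
print(f"W_real = {W_real/1000:.1f} kW, T2 = {T2:.2f} K")
```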

There are a number of different compressor types available, which differ in pressure range, volumetric flow rate, and other requirements at operating conditions like process safety, physical properties, or environmental conditions. Moreover, it has to be considered whether the fluid contains drops or particles and whether there are components which tend to polymerize. Certainly, if the achievable compression ratio is too low, several compressors can be combined into a “multistage compressor”, usually with intermediate coolers or direct liquid injection to reduce the gas temperatures and, correspondingly, the volume flows. The main compressor types are: – Piston compressors: Piston compressors work according to the same principle as piston pumps (Figure 8.6). Compression ratios up to 6 : 1 per stage can be achieved. There are no limitations for the volume flow, but relatively small ones are preferred (≈ 200 m3 /h). The lubrication of the compressor is always a major item. Care should be taken that no process components can accumulate in the lubricant, which would reduce the effectiveness and make it necessary for the lubricant to be exchanged often. Alternatively, dry-running compressors can be used if appropriate. Of course

268 | 8 Fluid flow engines

Figure 8.8: Two-stage hyper compressor. © 2016. Burckhardt Compression AG.





there are special requirements for the inlet and outlet valves. The main disadvantage of piston compressors is their high maintenance effort. The flow pulsation can cause vibration and structural problems due to the unbalanced forces, making heavy foundations necessary. Figure 8.8 shows a so-called hyper compressor, used in the LDPE process for compressing ethylene from approx. 300 bar to 3000 bar in two stages. The space demand is huge; a hyper compressor often takes up a whole hall and needs extremely strong fundamentals. In the middle of the picture there is the driving shaft with its coupling to the motor and the lubrication unit, causing the movement of the plungers on the right hand and the left hand side. The arrangement of the plungers is symmetric to avoid unbalanced forces as far as possible. The nozzles below the plungers are the gas inlets and outlets. Membrane compressors: Membrane compressors have a principle similar to membrane pumps (Figure 8.6). They are appropriate for small volume flows and achieve compression ratios up to 20 : 1 per stage. Like piston compressors, the flow pulsation causes problems, which makes it necessary to replace the membrane regularly. Screw compressors: Screw compressors are displacement compressors, where the medium is enclosed in a chamber which is continuously shortened, causing the compression of the gas (Figure 8.9). Compression ratios of 4.5–7 : 1 can be achieved, the range of the volume flows is reported to be 300–60 000 m3 /h. Screw compressors are not sensible to small amounts of liquid or dirt in the gas stream; on the contrary, liquid is often introduced to reduce the temperature elevation. No valves are involved, which is a great advantage in comparison to piston compressors. Furthermore, screw compressors have a high efficiency and a wide range of applications.

8.2 Compressors | 269

Figure 8.9: Screw compressor rotor. © MAN Diesel & Turbo SE.





Rotary piston compressors: Rotary piston compressors are displacement compressors which are usually applied for vacuum generation. The rotary pistons and the shell form moving chambers which force the gas to the pressure side. The compression ratios are comparably low (1.8–2 : 1), while the volume flows are not restricted (100–80 000 m3 /h). The function principle is analogous to the rotary vane pump (Figure 8.18). Turbo compressors: 1. Radial turbo compressors: Radial turbo compressors are the corresponding compressor to centrifugal pumps (Figure 8.5). The compression ratio is limited (2–4 : 1), while the volume flows (5000–150 000 m3 /h) can be considerably high. However, larger or smaller volume flows cause construction problems which are not easily solved. The wide operation range and the high reliability are the advantages of radial turbo compressors; their drawbacks are their sensitivity to reduced flow rates and changes in gas composition, and their weaknesses in the dynamic behavior. 2. Axial turbo compressors: Axial turbo compressors (Figure 8.10) are appropriate for very large volume flows up to 1 200 000 m3 /h and compression ratios up to 8 : 1 [150]. The extremely large capacity and the high reliability are the main advantages of this compressor type, and the main disadvantage is the limited turndown. The smaller the volume flow, the more ineffective this compressor type is. A lower bound for axial compressors is approximately 60 000 m3 /h.

270 | 8 Fluid flow engines

Figure 8.10: Axial turbo compressor rotor. © MAN Diesel & Turbo SE.

Figure 8.11: Liquid ring compressor arrangement. Courtesy of Sterling Fluid Systems Holding GmbH.



Liquid ring compressors (Figure 8.11): Liquid ring compressors (LRCs) are compressors in which the service fluid forms a liquid ring, which itself both acts as a compressor and as a seal between suction

8.2 Compressors | 271

and discharge side. The shaft of the impeller is placed eccentrically so that the cells containing the gas formed by the impeller and the liquid become smaller and smaller, which results in the compression. Moreover, the compressor power which heats up the gas is transferred to the liquid, and thus the compression can be regarded as isothermal. The peculiarity of LRCs when compared to other compressor types is that the medium to be compressed gets into direct contact with the service liquid. Normally liquid ring compressors operate as displacement compressors. If the vapor condenses due to the compression or due to the temperature decrease after contact with the liquid, the compression is supported by absorption or condensation. The liquid is circulated; cooling is necessary as the heat generated by the compression (and possibly by the condensation/absorption) increases the temperature of the liquid. The strength of LRC compressors is that any medium can be compressed without restriction; the medium can be explosive, toxic or carcinogenic [151]. The max. compression ratio for one stage is 3 : 1 (in vacuum operation up to 7 : 1), the volume flows range from 1–20 000 m³/h [151]. LRCs are very robust and inexpensive, but they have a low efficiency (ηth ≈ 0.2). From vendor data, the following correlation has been set up:
$$\eta_\text{th} = 0.09100238 + 0.00391931\,\frac{p_2}{p_1} - 0.00159871\left(\frac{p_2}{p_1}\right)^2 + 6.2392 \cdot 10^{-5}\,\frac{p_1}{\text{bar}} + 0.00011975\,\frac{\dot V}{\text{m}^3/\text{h}}$$

It indicates that the efficiency mainly depends on the pressure ratio. In case nothing else is known, one could try this correlation, which is, however, only tested in the ranges p2 /p1 < 11, p1 > 0.7 bar and V̇ > 775 m3 /h. In case the liquid absorbs components from the gas stream it has to be continuously worked up to avoid accumulation. The characteristic curves of axial and radial turbo compressors and displacement compressors are outlined in Figure 8.12. Mechanical vapor recompression is one of the main tasks of compressors. For this purpose, the thinking for application is different. Figure 8.13 shows a typical arrangement. The vapor used as heating agent for evaporation is the generated vapor itself. However, this vapor would condense at most at the same temperature as the evaporation itself takes place, and only if a pure substance is vaporized.5 Because of pressure drops in the line and in the nozzles and because of the boiling point elevation the condensation temperature of the vapor is certainly lower than the boiling temperature of 5 which would not make any sense.


Figure 8.12: Qualitative characteristic curves of different compressor types.

Figure 8.13: Mechanical vapor recompression arrangement.

the liquid. Nevertheless, the vapor can in fact be used as a heating agent if its condensation temperature is elevated. This can be achieved by increasing its pressure, and this is what mechanical vapor recompression can do. Fresh steam, as depicted in Figure 8.13, is usually only necessary for the startup. In terms of thermodynamics, the power required for the compression is used to elevate the temperature level of the heat of condensation of the vapor. This means that the mechanical power is at least not lost; it is used for heating the medium as well. However, the power from the electric current is turned into heat, which is a devaluation. The blowers used for mechanical recompression are usually simple industrial fans (Figure 8.14). They accomplish a compression ratio of approx. 1.3–1.4 per stage. In case of water vapor, this corresponds to an elevation of the dew point of 8–10 K. Usually, this should be sufficient to cover the boiling point elevation of the product, especially if plate heat exchangers are used, which need only low driving temperature differences. If not, there is also the option to use an arrangement of two or even three blowers in series. Blowers are highly standardized. The power can be adjusted by manipulating the rotation speed, which is usually achieved with a frequency converter. The costs for maintenance are low, especially compared to other compressor types. Special care must be taken for the design of the bearings and for the shaft seals.


Figure 8.14: Blower for mechanical vapor recompression. © Piller Blowers & Compressors GmbH.

Example Saturated water vapor coming from an evaporator (0.8 bar, 93.5 °C) is compressed by a blower with a pressure ratio of 1.35. What will be its condensation temperature?

Solution The outlet pressure of the blower is pnew = 0.8 bar ⋅ 1.35 = 1.08 bar. The corresponding condensation temperature of the outlet stream is then 101.8 °C, meaning that the compression has achieved a boiling point elevation of 8.3 K.
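The saturation temperatures in this small example can be looked up with a property package; the snippet below uses CoolProp, which is only an assumed choice and not the tool used in the text. A steam table gives the same result.

```python
# Condensation temperature before and after the blower (pressure ratio 1.35),
# using CoolProp as an assumed water property package.
from CoolProp.CoolProp import PropsSI

p_in = 0.8e5                      # Pa
p_out = 1.35 * p_in               # Pa

T_sat_in = PropsSI('T', 'P', p_in, 'Q', 0, 'Water') - 273.15
T_sat_out = PropsSI('T', 'P', p_out, 'Q', 0, 'Water') - 273.15

print(f"{T_sat_in:.1f} degC -> {T_sat_out:.1f} degC "
      f"(boiling point elevation {T_sat_out - T_sat_in:.1f} K)")
```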

The energy consumption of the blower is usually moderate. Many t/h of water vapor can be recompressed, saving the same amount of fresh steam (e. g. 100 t/h in the example above with a power consumption of approx. 2 MW).

The inlet of a compressor should in general have a vapor fraction of 1, because liquid droplets can cause erosion to the impeller. It is necessary to carefully check what the demands of the compressor are in this area. The vapor-liquid separator can often be designed for a maximum droplet size (Chapter 9); however, there is no way to predict the amount of droplets. The various types of compressors have different susceptibilities to droplets; the least sensitive ones are in fact the blowers for vapor recompression, where liquid is even injected beyond the saturation level on purpose to continuously clean the impeller and avoid deposits on it, which could lead to imbalances. Getting into the two-phase region during the compression becomes more probable the larger the molecule is [11].6 According to Equation (8.6) for the ideal gas case, the temperature rises during compression. As
$$\frac{\kappa - 1}{\kappa} = 1 - \frac{1}{\kappa} = 1 - \frac{c_v^{\text{id}}}{c_p^{\text{id}}} = \frac{R}{c_p^{\text{id}}} \tag{8.12}$$

one can easily see that the exponent decreases with increasing molar isobaric heat capacity, which means that the temperature elevation during compression is lower, the larger the molecule is.7 If the rise of the boiling temperature is larger than the temperature elevation during compression, drops are formed inside the compressor, which might be seriously detrimental to the impeller [11]. If the molecule has at least four C-atoms, one can almost be sure that droplet formation will occur (“wet fluids”), for molecules with less C-atoms, the feed stream will remain gaseous (“dry fluids”) [152].

6 This paragraph refers to pure components as representatives.
7 cpid on a molar basis increases with the size of the molecule, as more vibration options can be activated.

8.3 Jet pumps

A jet pump is an alternative to a compressor for the compression of gases, the generation of vacuum or for the transportation of liquids or even bulk materials. Its advantage is its high reliability, caused by the fact that a jet pump has no moving parts. Also, it is not sensitive to fouling or corrosion and is appropriate for large volume flows [153]. The energy is transferred by a fluid under high pressure, e. g. steam, compressed air or water and other liquids. The principle of jet pumps is explained in Figure 8.15. The energy is supplied by the motive steam on the left-hand side. It makes use of the principle of the Laval nozzle, where supersonic velocities can be reached in a tube [154]. The motive steam passes a narrowest crossflow area (Chapter 12.1.3), where the speed

8.3 Jet pumps | 275

Figure 8.15: Scheme of a jet pump. © 2015, Körting Hannover AG.

of sound is reached. Downstream it is further accelerated in the diffusor to supersonic velocity. Due to the acceleration, the static pressure is lowered below the pressure of the suction stream. The suction stream is taken in and mixed with the motive steam in the first part of the diffusor. In the second part, the flow is slowed down again, and the pressure rises. Finally, at the outlet the flow has a pressure between the pressure of the suction and the motive steam. The motive steam has expanded while the suction stream has been compressed; the jet pump works like an equivalent system where the motive stream is expanded in a turbine, while the power obtained is used to run a compressor for the suction stream. One of the most popular applications of jet pumps is the compression of low pressure steam to a more useful pressure using high-pressure steam as the motive steam. As long as the motive steam has a pressure which has to be reduced anyway, no operation costs are related to the compression. For assessing whether a jet pump might be useful it is necessary to know how much of the motive steam is needed. The use of jet pumps in thermal vapor recompression has been shown in Chapter 3.3. Clearly, the final decision on the necessary amount of the motive stream should be made by the vendor. Nevertheless, some first guesses can easily be made. The minimum possible amount of the motive stream is determined by the entropy balance. According to the Second Law, the entropy of the outlet stream (3) must be larger than the sum of the entropies of the suction (2) and the motive stream (1): ṁ 1 s1 + ṁ 2 s2 ≤ (ṁ 1 + ṁ 2 )s3 ,

(8.13)

while at the same time the First Law ṁ 1 h1 + ṁ 2 h2 = (ṁ 1 + ṁ 2 )h3

(8.14)

must be obeyed, provided that the velocities at the nozzles do not have a major contribution to Equation (8.14). From Equations (8.13) and (8.14), one can get a first idea of the order of magnitude of the motive steam ṁ1. More realistic values can be obtained considering the efficiency [153]
$$\eta_\text{jet} = \frac{\dot m_2\,[h(T_3, p_3) - h(p_2, s_3)]}{\dot m_1\,[h(T_1, p_1) - h(p_3, s_1)]} \approx 0.2 \ldots 0.4 \tag{8.15}$$
which refers to the above-mentioned turbine/compressor analogy.

The efficiency can be estimated to be
$$\eta_\text{jet} = 0.3774283 - 0.0588682 \cdot \frac{p_3}{p_2} + 0.00373807 \cdot \left(\frac{p_3}{p_2}\right)^2 - 1.9 \cdot 10^{-6} \cdot \frac{p_1}{p_2} \tag{8.16}$$

The correlation has been found by the evaluation of the typical vendor nomograms.8 Example 1000 kg/h water vapor at t2 = 130 °C, p2 = 2 bar shall be used at p3 = 4.5 bar. How much motive steam (t1 = 190 °C, p1 = 11 bar) is necessary (a) using Equation (8.13) for the reversible case and (b) using Equation (8.16) for a more realistic case?

Solution
(a) First, the specific enthalpies and entropies are determined. Using the high-precision equation of state [29] one obtains
h1 = 2796.6 J/g, s1 = 6.5868 J/(g K)
h2 = 2727.3 J/g, s2 = 7.1797 J/(g K)

Estimating ṁ 1 = 1000 kg/h, h3 can be determined to be h3 = 2762.0 J/g according to Equation (8.14). The corresponding specific entropy s3 (p3 , h3 ) turns out to be 6.8997 J/(g K), which is larger than the value obtained from Equation (8.13), 6.8832 J/(g K). Thus, the estimation for ṁ 1 was too high. After several iterations, the result is ṁ 1 = 916.3 kg/h, giving h3 = 2760.4 J/g and s3rev (p3 , h3 ) = 6.8962 J/(g K), which is obtained with Equation (8.13) as well. (b) The enthalpies involved in Equation (8.15) are h(T1 , p1 ) = 2796.6 J/g h(p3 , s1 ) = 2630.0 J/g With Equation (8.16) the efficiency can be estimated to be ηjet = 0.264. Again, an iterative solution is necessary. Estimating ṁ 1 = 2000 kg/h, one gets h(T3 , p3 ) = 2773.5 J/g according to Equation (8.14) and h(p2 , s3 ) = 2627.4 J/g. ηjet is then determined to be 0.438, indicating that the estimation for ṁ 1 was too low. After some iterations, ηjet = 0.264 is obtained with ṁ 1 = 3337 kg/h, giving h(T3 , p3 ) = 2780.65 J/g and h(p2 , s3 ) = 2633.848 J/g.
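The iteration of part (a) lends itself to a short numerical sketch. The code below solves Equations (8.13) and (8.14) with equality in (8.13) for the minimum motive steam; CoolProp and scipy's brentq root finder are assumptions made here for illustration, standing in for the high-precision equation of state [29] and the manual iteration of the example.

```python
# Reversible (minimum) motive steam of part (a): Equations (8.13)/(8.14),
# with equality in (8.13). CoolProp replaces the equation of state [29].
from CoolProp.CoolProp import PropsSI
from scipy.optimize import brentq

m2 = 1000.0                       # kg/h suction steam
p1, T1 = 11.0e5, 463.15           # motive steam, 190 degC
p2, T2 = 2.0e5, 403.15            # suction steam, 130 degC
p3 = 4.5e5                        # outlet pressure

h1 = PropsSI('H', 'T', T1, 'P', p1, 'Water')
s1 = PropsSI('S', 'T', T1, 'P', p1, 'Water')
h2 = PropsSI('H', 'T', T2, 'P', p2, 'Water')
s2 = PropsSI('S', 'T', T2, 'P', p2, 'Water')

def entropy_residual(m1):
    """Outlet entropy from the energy balance minus the mixed inlet entropy."""
    h3 = (m1 * h1 + m2 * h2) / (m1 + m2)            # Equation (8.14)
    s3 = PropsSI('S', 'P', p3, 'H', h3, 'Water')
    s_in = (m1 * s1 + m2 * s2) / (m1 + m2)          # Equation (8.13), equality
    return s3 - s_in

m1_min = brentq(entropy_residual, 100.0, 5000.0)
print(f"minimum motive steam: {m1_min:.0f} kg/h")    # approx. 916 kg/h
```

The same structure, with the residual built from Equations (8.15) and (8.16) instead of the entropy balance, reproduces the more realistic result of part (b).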

The operational characteristics of a jet pump can be summarized in the diagram according to [153] (Figure 8.16). In this diagram, the suction stream for a given jet pump is depicted as a function of suction pressure and outlet pressure, both as coordinates on the abscissa. The pressure p1 of the motive steam remains constant. The left border line refers to the

8 The author is grateful to Ms. Sonali Ahuja, who performed this work with enthusiasm.

8.4 Vacuum generation

| 277

Figure 8.16: Operational characteristics of a jet pump [153]. © Wiley-VCH Verlag GmbH & Co. KGaA. Reproduced with permission.

suction pressure. There is a minimum suction pressure to obtain a suction flow at all. Above this suction pressure, the suction flow increases continuously with the suction pressure, with a typical sharp bend in the curve shape. Moving to the right-hand side of the diagram, the pressure on the abscissa refers to the outlet pressure. The outlet pressure can be varied over wide ranges without an effect on the suction stream (see horizontal lines). At a certain outlet pressure, which slightly depends on the suction pressure, the suction flow is seriously affected and drops down rapidly. The goal is to avoid getting into this region. For a given jet pump, the flow of the motive steam can only be increased by increasing its pressure. This measure has little effect on the suction stream, but the critical outlet pressure is moved to higher values of the outlet pressure [153]. Jet pumps are very often used as vacuum pumps, which is analogous to the use as vapor compressors. The pressure ratio is usually between 15–20. The main pressure range where jet pumps are used is between p = 0.1–100 mbar. For this purpose, several stages are used [155]. Between the stages there are condensers to get rid of the condensables as far as possible to reduce the load of the jet pumps. A very good explanation of the design criteria of jet pumps can be found in [156].

8.4 Vacuum generation

Vacuum is divided into the following pressure ranges:
– rough vacuum: 1–1000 mbar;
– medium vacuum: 10⁻³–1 mbar;
– high vacuum: 10⁻⁷–10⁻³ mbar;
– ultra-high vacuum: < 10⁻⁷ mbar.

Before introducing the particular options for vacuum generation, some general thoughts might be useful. In process engineering, the main part of vacuum generation is covered by condensation. This works in the following way [155].

Consider a vessel which contains water vapor at p = 1000 mbar, t = 110 °C (Figure 8.17). At the beginning, the piston at the top is completely movable so that the content of the vessel is in mechanical equilibrium with the environment. Next, the piston is fixed, and the whole vessel cooled down to t1 = 20 °C, while the steam partially condenses. In this case, the pressure will drop down to the vapor pressure of the new temperature ps (t1 ) ≈ 23 mbar. Thus, using simple condensation, a vacuum of 23 mbar has been generated. A better vacuum can be generated if the temperature is further lowered, for instance 12 mbar at 10 °C.

Figure 8.17: Condenser used as vacuum generator.

Things are different when inert gases are involved. Consider the case above, where the partial pressure of water is only pwater = 950 mbar and the remaining 50 mbar are caused by an inert gas, e. g. nitrogen. This causes the vacuum to deteriorate significantly. While the water is still condensed until it reaches its vapor pressure at t1 = 20 °C, the inert gas will remain gaseous. Neglecting the volume of the condensate and the small solubility of the inert gas, its partial pressure will only decrease due to the temperature decrease according to the ideal gas law, giving

pinert ≈ 50 mbar ⋅ (293.15 K / 383.15 K) ≈ 38 mbar

The overall vacuum will then be p = 23 mbar + 38 mbar = 61 mbar, by far worse than the 23 mbar previously obtained. This is a typical problem in vacuum process engineering. Most vapor streams to be condensed contain a certain fraction of inert gas. After condensation, the fraction of the inert gas will have increased, as in the example above from xinert = 50/1000 = 0.05 to xinert = 38/61 ≈ 0.62. To avoid accumulation of the inert gas, the remaining vapor mixture has to be removed, and this is essentially what vacuum pumps are for. As mentioned above, the fraction of condensables decreases with decreasing temperature. Therefore, it is useful to remove the inert gas at the coldest part of the condenser, i. e. at the inlet of the cooling agent. This is another reason to realize countercurrent flow in the condenser, apart from maintaining the driving temperature difference. The condenser is the more economic part of vacuum generation; in [155] it is demonstrated with an example that condensation should be used as far as possible, and the vacuum pump should only remove the remaining gas.

There are certain types of vacuum pumps:

Jet pumps have been explained above in Chapter 8.3. They are the most inexpensive alternative, as they contain no movable parts. On the other hand, their flexibility is limited.

Liquid ring compressors (Chapter 8.2) can also be used as vacuum pumps and are the most widely used types of vacuum pumps in the chemical industry. They are robust, simple, and inexpensive, but not very efficient concerning the energy consumption. They can achieve a vacuum down to 30 mbar. Piston and membrane compressors can also be applied as vacuum pumps [155].

Rotary vane pumps are the most common types if it is ensured that all gases sucked in can be transported by the vacuum pump without condensation. Otherwise, besides a worse performance, the lubrication oil of the pumps will be spoilt by dilution or by forming an emulsion with insoluble liquids such as water. Compatibility with condensing substances is a strong criterion to distinguish whether an application is appropriate for rotary vane or for liquid ring pumps. The functional principle of a rotary vane pump is illustrated in Figure 8.18. Inside the stator, an eccentric rotor is rotating. In the center of the rotor, a spring pushes two vanes apart so that they are in contact with the wall of the stator and form two separated chambers at inlet and outlet. At the inlet side, the volume of the corresponding chamber increases, so that substance from the recipient is sucked in. At the outlet, the volume of the chamber has decreased, which pushes the gas to the outlet line. Rotary vane pumps can achieve high vacuum down to 10⁻³ mbar. There are similar types working according to the displacement principle (rotary piston pump, roots blower).

For the generation of high and ultra-high vacuum, oil diffusion and turbomolecular pumps are used. Both need a considerable pre-vacuum to keep the load low. Oil diffusion pumps (Figure 8.19) have the same functional principle as jet pumps. A high-boiling oil with a low vapor pressure is heated up electrically so that it finally evaporates. The vapor exits the pump through a special system of nozzles with velocities above the speed of sound and, correspondingly, low local pressures. The gases rapidly diffuse into the oil. After the oil has been condensed at the wall, they are removed by the pre-vacuum pump. The advantage of oil diffusion pumps is their reliability, as they have no moving parts. They can produce even an ultra-high vacuum down to 10⁻¹⁰ mbar. Care should be taken that no oil can flow into the recipient.


Figure 8.18: Rotary vane pump. 1: stator, 2: rotor, 3: vanes, 4: spring. © Rainer Bielefeld/Wikimedia Commons/CC BY-SA-3.0. https://creativecommons.org/licenses/by-sa/3.0/deed.en.

Figure 8.19: Scheme of an oil diffusion pump.

Turbomolecular pumps consist of a series of rotor/stator pairs which act as small compressors. The rotation speed is in the range 300–400 m/s or 10 000–90 000 rpm. It is possible to achieve ultra-high vacuum down to 10⁻¹⁰ mbar.

For the dimensioning of vacuum pumps, reasonable values for their capacity are necessary. The capacity is mainly determined by the leakage rate, which can be estimated according to empirical rules of thumb. There are several approaches:
– For flange connections, the leakage rate can be estimated to be 200–400 g/h per m gasket length. Special measures (tongue and groove face flange, surface treatment of the gasketing areas, use of special gaskets) can reduce the leakage rate down to 50–100 g/(h m).
– According to the maximum mass flux density (Chapter 14.2), through an opening of 1 mm² approx. 0.83 kg/h can flow. Surprisingly, this value does not depend on the vacuum pressure. As long as the pressure ratio (Section 14.2.6) between environmental pressure and recipient is above the critical pressure ratio of approx. 2, meaning that the vacuum is at least below p = 500 mbar, the air intake is independent of pressure, and the maximum flow is determined by the speed of sound in the narrowest cross-flow area.

The leakage air flows can be determined according to Table 8.1 [157].

Table 8.1: Recommended values for leakage rates (kg/h) [157].

Equipment volume (m³)    Flange       Flange and welded    Welded or special gaskets
0.2                      0.15–0.3     0.1–0.2              < 0.1
1                        0.5–1        0.25–0.5             0.15–0.25
3                        1–2          0.5–1                0.25–0.5
5                        1.5–3        0.7–1.5              0.35–0.7
10                       2–4          1–2                  0.6–1.2
25                       4–8          2–4                  1–2
50                       6–12         3–6                  1.5–3
100                      10–20        5–10                 2.5–5
200                      16–32        8–16                 4–8
500                      30–60        15–30                8–15
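A small helper for specifying the vacuum pump capacity from Table 8.1 might look as follows. The convention of taking the upper end of each range and the next larger tabulated volume is an assumption of this sketch (chosen for conservatism), not a rule from the table itself, and the dictionary keys are illustrative names.

```python
# Hedged sketch: conservative leakage estimate from Table 8.1.
from bisect import bisect_left

VOLUMES = [0.2, 1, 3, 5, 10, 25, 50, 100, 200, 500]             # m3
LEAK_UPPER = {                                                   # kg/h, upper end of range
    "flange":            [0.3, 1, 2, 3, 4, 8, 12, 20, 32, 60],
    "flange_and_welded": [0.2, 0.5, 1, 1.5, 2, 4, 6, 10, 16, 30],
    "welded_or_special": [0.1, 0.25, 0.5, 0.7, 1.2, 2, 3, 5, 8, 15],
}

def leakage_rate(volume_m3, connection="flange"):
    """Conservative leakage air rate in kg/h for one piece of equipment."""
    i = min(bisect_left(VOLUMES, volume_m3), len(VOLUMES) - 1)   # next larger tabulated volume
    return LEAK_UPPER[connection][i]

# Example: a 12 m3 flanged vessel plus a 3 m3 welded receiver
total = leakage_rate(12, "flange") + leakage_rate(3, "welded_or_special")
print(f"design leakage for the vacuum pump: {total} kg/h")   # 8 + 0.5 = 8.5 kg/h
```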

9 Vessels and separators

Vessels are normally the simplest pieces of equipment in a process, as long as they are not used as reactors. They have several functions in a process. The most important one is the decoupling of two process parts, which is especially important in the startup phase when different parts of a plant are starting operation independently from each other. Other functions are the separation of vapor and liquid by gravity or the provision of suction head to achieve a sufficient NPSH value for a pump (Chapter 8.1). If the only function of a vessel is the storage of raw materials or product, it is called a tank. For mixing the content of the vessel, agitators are used. For a quick overview on this topic, the book of Baerns et al. [8] is recommended.

Figure 9.1 shows a typical PID representation of a horizontal vessel. The vessel is equipped with several nozzles for inlet and outlet streams and measurements for temperature, pressure, and liquid level. There is a manhole on the upper side (M1). A vortex breaker at the liquid outlet nozzle prevents waterspouts from being formed. One of the feed inlets is designed as a dip-pipe to prevent electrostatic charging. Furthermore, a dip-pipe ensures that vapor backflow is not possible. The liquid in the vessel forms a liquid seal so that vapor from the vessel cannot flow back through these lines if no liquid is delivered.

Figure 9.1: PID representation of a typical vessel.

The calculation of the volume of vessels is a bit complicated, due to some apparently well-meant simplifications. In fact, some guidelines define that the volume of the vessel is just the cylindrical volume calculated with the outer diameter of the vessel and the length of the cylindrical section. Specially shaped heads ensure that there are no sharp edges where solids could deposit. Half-sphere heads need too much space and have a bad accessibility. Flat heads would provide more space for nozzles, but the tensions require larger wall thicknesses. Therefore, flat heads are rarely used for pressure vessels or for large vessels. The volume of the two heads is often neglected, which is conservative, but partly compensated for by the fact that the outer diameter is used instead of the inner one. Also, it is possible to calculate an exact volume, e. g. using the volume of an ellipsoidal head

Vel.head = 0.1298 di³   (9.1)

Before beginning to evaluate the vessel volume, it should be clarified which convention is to be used in the particular project. For the definition of the liquid level, the relationship between liquid volume and liquid level is trivial for vertical but sophisticated for horizontal vessels. With the liquid height h shown in Figure 9.2, the relationship is

VL = (L D²/4) ⋅ arccos(1 − 2h/D) − L ⋅ (D/2 − h) ⋅ √(D h − h²) ,   (9.2)

where L is the cylindrical length of the vessel. When the volume demand of a vessel is calculated, it should be considered that only part of it, usually 65–80 %, can be used as working volume. The vessel must not be completely filled with liquid because of the sudden pressure rise that occurs when the temperature increases slightly.

Figure 9.2: Cross-section of a horizontal vessel partly filled with liquid.
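Equation (9.2) is straightforward to evaluate numerically; the following small function (the names are chosen here, not taken from the book) returns the liquid volume of the cylindrical section of a horizontal vessel as a function of the liquid level, with the heads neglected as discussed above.

```python
# Liquid volume of the cylindrical section of a horizontal vessel, Eq. (9.2).
from math import acos, sqrt, pi

def liquid_volume(h, D, L):
    """h: liquid level, D: inner diameter, L: cylindrical length (all in m) -> m3"""
    if not 0 <= h <= D:
        raise ValueError("liquid level must be between 0 and D")
    return L * D**2 / 4 * acos(1 - 2 * h / D) - L * (D / 2 - h) * sqrt(D * h - h**2)

# quick checks: half full and completely full cylinder
D, L = 2.0, 6.0
print(liquid_volume(D / 2, D, L), pi / 8 * D**2 * L)   # both approx. 9.42 m3
print(liquid_volume(D, D, L), pi / 4 * D**2 * L)       # both approx. 18.85 m3
```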

The dimensions of a vessel are usually determined by providing response time for the operator. In Figure 9.1 various levels are indicated. LN is the normal level, which the level controller aims to maintain. If the level drops or rises too much, an alarm goes off to attract the attention of the operator. In Figure 9.1, these levels are called LAL (level alarm low) and LAH (level alarm high), respectively. The level switch LSHH (level switch high high) usually actuates an interlock which prevents the vessel from being overfilled. LSHH is chosen in such a way that a certain part of the vessel volume remains free, e. g. 10 %. Similarly, the LSLL (level switch low low) sets off another interlock, which e. g. might protect the pump transferring the liquid from the vessel by switching it off. Between the alarm and the automatic response of the process control system there must be enough time for the operator to react, e. g. 1 min for an activity in the control room or 5 min for an activity in the plant area. This defined period of time mainly determines the size of the vessel; the volumes between LAL and LSLL, and LAH and LSHH, respectively, must correspond to the feed and the withdrawal during the specified response time for the operator.

The last line of defense against excessive vessel filling is the overflow nozzle, which is often used in atmospheric tanks [158]. Usually, there is a pipe attached leading to the bottom. In case the tank is blanketed at the top, this overflow nozzle must not be an opportunity for the blanket gas to escape from the tank. This is the reason why the pipe is led down below the liquid level on the internal side (Figure 9.3). Also, a siphon breaker makes sense for preventing the overflow stream from emptying the vessel after the liquid level has dropped below the threshold for the overflow nozzle [158]. A siphon breaker is a piece of pipe located at the highest point of the piping and connected to atmosphere.


Figure 9.3: Example of an overflow nozzle.

For the determination of the nozzle sizes, the following rules of thumb can be applied: In the general case with a two-phase entry, the inlet nozzle should obey the condition

ρav wav² ≤ 1500 Pa ,   (9.3)

where the average velocity and the average density can be calculated according to

ρav = (ṁV + ṁL)/V̇ ,   (9.4)
wav = V̇/A ,   (9.5)
V̇ = ṁV/ρV + ṁL/ρL ,   (9.6)

with A as the cross-flow area of the nozzle. The vapor outlet nozzle should have the same size as the adjacent pipe as long as the condition

ρV wV² ≤ 3750 Pa   (9.7)

is maintained. The recommended velocity is wV = 10 m/s. For the liquid outlet nozzle, the criterion is

ρL wL² ≤ 400–900 Pa   (9.8)

The velocity should be kept below wL = 1 m/s. Even for low flows, the minimum diameter should be 50 mm.

Another function of vessels is the separation between the vapor and the liquid phase. There are several kinds of vapor-liquid separators; the vessel is a so-called gravity separator, making use of the principle that vapor and liquid droplets have different densities. Separators of this kind are usually built as vertical vessels; although horizontal ones are possible [159], they hardly occur in the chemical industry. A simple equilibrium of forces between weight, buoyancy and flow resistance leads to the limiting gas velocity for a given droplet diameter d [159, 160]:

wV = √[4 g (ρL − ρV) dlim / (3 ρV cw)] ,   (9.9)

where the cw correlation according to Brauer [161]

cw = 24/Re + 4 Re^(−0.5) + 0.4   (9.10)

can be used. The Reynolds number is defined as

Re = wV d ρV / ηV   (9.11)
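Equations (9.9)–(9.11) have to be solved iteratively, since wV also appears in the Reynolds number. A minimal sketch with simple successive substitution is shown below; the physical property values are assumed example numbers (air/water-like), not design data from the book.

```python
# Iterative solution of Eqs. (9.9)-(9.11) for the limiting gas velocity.
from math import sqrt, pi

g = 9.81
rho_L, rho_V = 1000.0, 1.2       # kg/m3, assumed example values
eta_V = 1.8e-5                   # Pa s
d_lim = 0.2e-3                   # m, "standard" case from Table 9.1

def cw(Re):                      # Brauer correlation, Eq. (9.10)
    return 24 / Re + 4 / sqrt(Re) + 0.4

w = 1.0                          # starting value, m/s
for _ in range(100):
    Re = w * d_lim * rho_V / eta_V                                         # Eq. (9.11)
    w_new = sqrt(4 / 3 * g * (rho_L - rho_V) * d_lim / (rho_V * cw(Re)))   # Eq. (9.9)
    if abs(w_new - w) < 1e-6:
        break
    w = w_new
print(f"limiting velocity: {w:.2f} m/s")        # roughly 0.7 m/s for these numbers

# The vessel diameter then follows from the actual vapor flow, e.g. 0.5 m3/s:
V_dot = 0.5
print(f"required vessel diameter: {sqrt(4 * V_dot / (pi * w)):.2f} m")
```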

It should be noted that in Equation (9.10) the values for the physical properties must be taken for the flowing phase, that is, the gas which is flowing around the droplets. The velocity must be determined iteratively, as it is also part of the Reynolds number. The iteration is quite easy with the mathematical methods of current computers; nevertheless, Equation (9.9) has always been subject to simplifications. Some are usually justified; there is no major objection to using only the first summand in Equation (9.10) for the laminar region where Re < 0.25, which gives a direct relationship between droplet diameter and limiting velocity. However, experience shows that many companies derive their own correlations, which are restricted to their special application cases and filled with empirical factors. This cannot be recommended, as the applications change more often than the correlation. The meaning of Equation (9.9) is that for a given vapor velocity wV, which results from the diameter of the vessel, droplets larger than dlim are separated, while smaller ones are not. In fact, any application case has a droplet distribution, and the separation will not be performed in such a rigorous way. Therefore, the interpretation that 50 % of the droplets with the limiting droplet size will be separated from the gas flow makes more sense. From process engineering experience, rules of thumb can be given for what a reasonable limiting droplet size might be in a particular case (Table 9.1).

Table 9.1: Recommended values for the limiting droplet diameter [160].

Application                              Limiting droplet diameter
Standard                                 0.2 mm
Compressor or turbine inlet              0.15 mm
Dryer inlet, prevent loss of solvent     0.1 mm
Not decisive for process                 0.35 mm

Nevertheless, one should be aware that the limiting droplet diameter is not exactly what the engineer needs. Although the concept is more or less accurate from the mathematical point of view, the recommended values are just a rough guideline. Its limitation becomes clear when it has to be specified how much liquid is entrained, e. g. to specify the COD value (chemical oxygen demand, Section 13.5) of a condensate. There is currently no way to determine the amount of liquid droplets and their size distribution for a given arrangement. Figure 9.4 shows the main dimensions of a vessel without demister.

Figure 9.4: Sketch of a vertical separator without a demister.

The efficiency of the droplet separation can be increased with a so-called demister, a wire mesh layer placed in the vapor space of the separation vessel (Figure 9.5). In contrast to the gravity separator, a high vapor velocity is advantageous so that the droplets hit the wire mesh and do not pass around the wires. Therefore, demisters need lower vessel diameters, making them less expensive than gravity separators. However, one must check that no fouling or even polymerization of the separated droplets occurs.

Figure 9.5: Wire mesh demister. © ENVIMAC Engineering GmbH.

From experience, there are some recommendations for the design of vessels which are used as vapor-liquid separators with or without demister. They should be applied together with the recommendations concerning the residence time given above. Recommended values for the dimensions are given in Figures 9.4 and 9.6. The height of a demister is between 100 and 150 mm. Larger heights only slightly improve the separation but also cause an additional pressure drop proportional to their height. The design velocity can be set as

weff = 0.7 K* [(ρL − ρV)/ρV]^0.5 ,   (9.12)

where the default value for the constant is K* = 0.11 m/s. For high pressure or high vacuum, K* = 0.06 m/s should be used. The velocity should not go below wmin = 0.3 weff. It can be expected that the limiting droplet size is between 3 and 5 µm. Figure 9.6 shows the main dimensions of a vessel with demister.

Figure 9.6: Sketch of a vertical separator with a demister.

Several other types of droplet separators are used in the chemical industry, such as baffle separators (“knock-out drum”) or cyclones (Figure 9.7), where the droplets are settled out by centrifugal forces. More details can be found in [162].

Figure 9.7: Knock-out drum and cyclone [162]. © Wiley-VCH Verlag GmbH & Co. KGaA. Reproduced with permission.

In case the vessel has an agitator, it is strongly recommended to provide baffles to prevent the liquid from rotating as a whole and forming spouts due to the centrifugal forces. The H/D ratio for stirred vessels is normally in the range 1–1.5; for large volumes there is a tendency to use slender and high vessels, as the power required for agitation is proportional to d⁵.

Many applications require heating or cooling of the vessel content. The standard approach is to perform the heat transfer across the vessel wall. The simplest way is that the heating or cooling agent is transported in half-pipe coils. This is inexpensive, but in pressurized vessels the heat transfer is poor because of the wall thickness. Because of the welding seams, only approx. 2/3 of the vessel wall can be used as heat transfer area. With a jacket, more area can be provided, but the design is a bit more complicated. Often, it is not effective to transfer the heat across the vessel wall. Coils or other internals can be placed inside the vessel, providing significantly more heat transfer area and, possibly, a better heat transfer coefficient. The most effective option are external heat exchangers, which are not restricted in their dimensions by the vessel itself. In this case, however, a pump is necessary to operate the cycle.

10 Chemical reactions

10.1 Reaction basics

For the author, this will be the most difficult chapter of the book. The topic easily justifies books of its own, e. g. [8] or [163], and can definitely not be covered in only a few pages. The goal of this chapter is only to explain the most important technical terms so that the reader is able to follow the discussions of practitioners.

The extent of a chemical reaction is characterized by the conversion (X). It must be defined which of the reactant components the conversion refers to. Then, the conversion is

X = (number of reacted moles of reference reactant) / (number of moles of reference reactant at the beginning of the reaction)

In contrast, the yield (Y) of a reaction refers to a product of the reaction. It must be taken into account that the maximum number of product moles depends on the stoichiometry:

Y = (number of product moles formed) / (number of reactant moles ⋅ stoichiometric ratio)

The stoichiometric ratio is defined as

stoichiometric ratio = (number of product moles in reaction equation) / (number of reactant moles in reaction equation)

The quality of a reaction can be characterized by the selectivity (S). It is defined as

S = (number of product moles formed) / (number of converted reactant moles ⋅ stoichiometric ratio) ,

where again the numbers refer to a special product and reactant, respectively.

Example
In the process of oxychlorination of ethylene by hydrogen chloride and oxygen, giving 1,2-dichloroethane and water, there is a reaction network of several competing reactions. The most important ones are:
(1) C2H4 + 0.5 O2 + 2 HCl → CH2Cl–CH2Cl + H2O;
(2) C2H4 + HCl → CH3–CH2Cl;
(3) C2H4 + 2 O2 → 2 CO + 2 H2O;
(4) C2H4 + 3 O2 → 2 CO2 + 2 H2O.
The (fictitious) conversions of ethylene with respect to the particular reactions shall be 90 % (1), 3 % (2), 2 % (3), 3 % (4). The rest of the ethylene will remain.1 The stream entering the reactor consists of 100 mol/h C2H4, 190 mol/h HCl and 200 mol/h O2.

1 In the real oxychlorination, the selectivity for 1,2-dichloroethane and the surplus of ethylene are by far larger [164].


Calculate the overall conversions of ethylene, HCl, and O2 and the yields and selectivities of CH2 Cl–CH2 Cl with respect to ethylene and to HCl.

Solution
In Table 10.1, the composition of the stream after the reaction is calculated. Any stoichiometric calculation should be checked for consistency in the atom balance. In this case, at the inlet there are
– 100 ⋅ 2 = 200 C-atoms;
– 100 ⋅ 4 + 190 = 590 H-atoms;
– 190 Cl-atoms;
– 200 ⋅ 2 = 400 O-atoms.
At the outlet there are
– 2 ⋅ 2 + 90 ⋅ 2 + 3 ⋅ 2 + 4 + 6 = 200 C-atoms;
– 2 ⋅ 4 + 7 + 90 ⋅ 4 + 3 ⋅ 5 + 100 ⋅ 2 = 590 H-atoms;
– 7 + 90 ⋅ 2 + 3 = 190 Cl-atoms;
– 142 ⋅ 2 + 4 + 6 ⋅ 2 + 100 = 400 O-atoms.
→ o. k.

The overall conversions are

XC2H4 = (100 − 2)/100 = 98 %
XHCl = (190 − 7)/190 = 96.3 %
XO2 = (200 − 142)/200 = 29 %

The yields of 1,2-dichloroethane referring to ethylene and, respectively, HCl can be calculated to be

YC2H4Cl2,C2H4 = 90/100 = 90 %
YC2H4Cl2,HCl = (90/190) ⋅ (2/1) = 94.7 %

Finally, the selectivities with respect to ethylene and HCl are

SC2H4Cl2,C2H4 = 90/98 = 91.84 %
SC2H4Cl2,HCl = (90/183) ⋅ (2/1) = 98.36 %
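The bookkeeping of such stoichiometric calculations is easily automated. The following sketch reproduces the example above with a stoichiometric matrix; the data structures and variable names are of course chosen here and not part of the book.

```python
# Stoichiometric bookkeeping for the oxychlorination example.
import numpy as np

components = ['C2H4', 'HCl', 'O2', 'C2H4Cl2', 'C2H5Cl', 'CO', 'CO2', 'H2O']
inlet = np.array([100.0, 190.0, 200.0, 0, 0, 0, 0, 0])      # mol/h

nu = np.array([                                   # stoichiometric coefficients, one row per reaction
    [-1, -2, -0.5, 1, 0, 0, 0, 1],                # (1) oxychlorination to 1,2-dichloroethane
    [-1, -1,  0,   0, 1, 0, 0, 0],                # (2) ethyl chloride formation
    [-1,  0, -2,   0, 0, 2, 0, 2],                # (3) combustion to CO
    [-1,  0, -3,   0, 0, 0, 2, 2],                # (4) combustion to CO2
])
X_reac = np.array([0.90, 0.03, 0.02, 0.03])       # ethylene conversion per reaction

xi = X_reac * inlet[0]                            # extents of reaction (mol/h of C2H4 converted)
outlet = inlet + nu.T @ xi                        # gives 2, 7, 142, 90, 3, 4, 6, 100 mol/h

X_C2H4 = (inlet[0] - outlet[0]) / inlet[0]                    # 0.98
X_HCl = (inlet[1] - outlet[1]) / inlet[1]                     # 0.963
Y_DCE_HCl = outlet[3] / (inlet[1] * 0.5)                      # 0.947 (stoich. ratio 1:2)
S_DCE_HCl = outlet[3] / ((inlet[1] - outlet[1]) * 0.5)        # 0.9836
print(dict(zip(components, outlet)), X_C2H4, X_HCl, Y_DCE_HCl, S_DCE_HCl)
```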

The speed of chemical reactions varies. While the corrosion of iron is slow and can take years, other reactions take minutes or hours, and reactions like the neutralization of acids and bases take place instantaneously. Performing reactions in industrial processes often requires increasing or slowing down the reaction rates. Therefore, knowledge of the influences on reaction kinetics is essential for a process engineer. An in-depth treatment of reaction kinetics can be found in the above-mentioned textbooks; for the explanation of the basics, only homogeneous reactions are regarded. The reaction of the formation of ammonia can be used as an example [165]:

N2 + 3 H2 → 2 NH3



Table 10.1: Calculation of the mole numbers at the reactor outlet.

Comp.           Inlet   Reac. (1)   Reac. (2)   Reac. (3)   Reac. (4)   Outlet
C2H4            100     −90         −3          −2          −3          2
HCl             190     −180        −3          0           0           7
O2              200     −45         0           −4          −9          142
CH2Cl–CH2Cl     0       90          0           0           0           90
CH3–CH2Cl       0       0           3           0           0           3
CO              0       0           0           4           0           4
CO2             0       0           0           0           6           6
H2O             0       90          0           4           6           100

The reaction rate in this case can be set up as

dcNH3/dτ = k ⋅ cN2 ⋅ cH2³ ,   (10.1)

where c is the volume concentration (c-concentration)

ci = ni/V = xi ⋅ ρ   (10.2)

The c-concentration is not popular in chemical engineering, as the density and, subsequently, the c-concentration itself are temperature-dependent. Nevertheless, for the calculation of reaction rates it is essential.2 Equation (10.1) is called a formal kinetics equation, which means that the stoichiometric coefficients in the reaction equation are the exponents of the c-concentration. This is the easiest approach; however, it is not necessarily correct. In fact, for ammonia synthesis the kinetics is much more complicated [163]. The factor k in Equation (10.1) is the reaction rate factor. A usual approach for its temperature dependence is

k = k0 ⋅ exp(−EA/(RT)) ,   (10.3)

where EA is the activation energy of the reaction. The often cited rule of thumb that a temperature increase of 10 K gives an increase in the reaction rate by a factor of 2–3 should not be used for calculations, but rather to illustrate how temperature-sensitive reaction rates are. In principle, any reaction has a reverse reaction which actually takes place to a certain extent. In many cases, the reverse reaction can be neglected, e. g. for combustion reactions. One can hardly imagine that CO2 and H2O really react back and form a hydrocarbon and oxygen.
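Coming back to the temperature dependence (10.3): the factor of 2–3 per 10 K follows directly from the Arrhenius approach for typical activation energies. A quick check with an assumed EA = 80 kJ/mol (an illustrative value, not a figure from the book):

```python
# Ratio of reaction rate constants for a 10 K temperature increase, Eq. (10.3).
from math import exp

R = 8.314          # J/(mol K)
EA = 80000.0       # J/mol, assumed typical value
T1, T2 = 300.0, 310.0
ratio = exp(-EA / (R * T2)) / exp(-EA / (R * T1))
print(f"k(T2)/k(T1) = {ratio:.2f}")   # about 2.8, i.e. within the 2-3 rule of thumb
```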

294 | 10 Chemical reactions a hydrocarbon and oxygen. However, a lot of cases can also be found where the reverse reaction is important, e. g. for all esterification reactions. At a certain stage, the concentrations of the participants of the reaction stay the same, as long as temperature and pressure keep constant. From the overall view, it seems that no reactions take place any more. In fact reaction and reverse reaction both happen, but with the same reaction rate so that an equilibrium is formed. Again, the ammonia reaction is a good example, where all the particular aspects can be explained: N2 + 3 H2 󴀕󴀬 2 NH3 In equilibrium, both reaction rates3 are equal, i. e. dcNH3 dτ

3 2 = k1 cN2 cH − k−1 cNH =0 2 3

(10.4)

After rearranging Equation (10.4), an equilibrium constant K can be defined as K=

2 cNH k1 3 = 3 k−1 cN2 cH

(10.5) 2

Equation (10.5) is called the law of mass action. The condition for the reaction equilibrium can also be derived from thermodynamics, starting from the chemical potential. A comprehensive explanation can be found in [11]. It ends up with a slightly different expression for the equilibrium constant. For the ammonia reaction, one would find K=

0 (fNH3 /fNH )2 3

(fN2 /fN0 )(fH2 /fH0 )3 2

(10.6)

,

2

0

with f as the fugacity and f for the fugacity at a standard state. The fugacity itself is fi = pyi φi ,

(10.7)

with φ as the fugacity coefficient. The fugacity coefficients can be calculated using the equations of state proposed in Chapter 2. Equation (10.6) can also be written in the form K=(

−2 y 2 φ2NH3 p NH3 , ) p0 yN yH3 φN φ3H 2

2

2

(10.8)

2

where the exponent of the pressure term is calculated from the stoichiometric coefficients (2 − 1 − 3 = −2). Discussing Equations (10.5) and (10.6), the following statements can be given: 3 written with formal kinetics for illustration.

10.1 Reaction basics |

295

1.

Le Chatelier’s principle: In Equation (10.8), it can be seen from the exponent of the pressure term that there is a tendency that the ammonia concentration increases with increasing pressure.4 According to the principle of Le Chatelier, the system counteracts the effect of the pressure increase by lowering the mole number in the mixture. 2. There is a formal discrepancy between Equation (10.5) and Equation (10.8) as long as the term with the fugacity coefficients is not negligible. In fact, the kinetic approach does not consider nonideal behavior. This means that one must be aware that reaction kinetics calculated with the c-concentration do not reach the thermodynamic equilibrium defined in Equation (10.8). It would be helpful if the fugacities were used as the concentration measure in reaction kinetics, but this is not really wide-spread in chemistry. 3. Even if the fugacity term can be neglected, the equilibrium constant should be evaluated using Equation (10.8). The determination of the coefficients from the reaction rate will not really be accurate enough. 4. The importance of taking the vapor phase nonideality into account is illustrated in Figure 10.1, where an attempt is made to reproduce the equilibrium conversion at t = 450 °C with two equations of state of different quality as a function of pressure. As can be seen, it is advantageous to use an accurate equation of state instead of the ideal gas law. The VTPR equation of state can represent the influence of the pressure more or less exactly, whereas the ideal gas law produces unacceptably large deviations. It should be noted that the data is more than 80 years old; however, the agreement of the two data sources and their plausibility indicate that they can be considered to be reliable. Analogously to Equation (10.6) the equilibrium constant for a reaction in the liquid phase like CH3 COOH + CH3 OH 󴀕󴀬 CH3 COOCH3 + H2 O can be derived as [11] K=

(xCH3 COOCH3 γCH3 COOCH3 ) (xH2 O γH2 O )

(xCH3 COOH γCH3 COOH ) (xCH3 OH γCH3 OH )

(10.9)

Again, a kinetic approach like Equation (10.5) cannot reach equilibrium. This problem can only be overcome if the activities are used to describe the concentration, which is 4 A bit more slowly: With increasing pressure, the pressure term decreases due to the negative exponent. Neglecting the pressure dependence of the φ-term, the concentration term must increase to keep K constant. This is only possible by increasing the NH3 concentration and, correspondingly, decrease the concentrations of the reactants N2 and H2 .

296 | 10 Chemical reactions

Figure 10.1: Influence of the real gas phase behavior on the equilibrium conversion of the ammonia reaction. Courtesy of Prof. Dr. J. Gmehling.

Also, heterogeneous reactions can be described with a corresponding equilibrium approach. In this case, the mass transfer can also have a significant influence on the reaction rate [8, 163]. The equilibrium constant K can be estimated from thermodynamics using the standard Gibbs energies of formation Δg⁰f, again according to the stoichiometric coefficients, e. g. for the ammonia reaction Equation (10.8)

RT ln K = −(2 Δg⁰NH3 − 3 Δg⁰H2 − Δg⁰N2) ,   (10.10)

where the Δg⁰ values for the various components refer to the ideal gas and can be obtained from [11]:

Δg⁰i(T, p⁰ = 1 atm) = (T/T0) Δg⁰f,i + (1 − T/T0) Δh⁰f,i + ∫(T0 to T) cp^id dT − T ∫(T0 to T) (cp^id/T) dT   (10.11)
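As a quick plausibility check of Equation (10.10): with a standard Gibbs energy of formation of ammonia of roughly −16.4 kJ/mol at 298.15 K (a common literature value, not taken from this book), the equilibrium constant of the ammonia reaction at ambient temperature becomes:

```python
# Equilibrium constant of N2 + 3 H2 <=> 2 NH3 at 298.15 K from Eq. (10.10).
from math import exp

R, T = 8.314, 298.15
dg_NH3 = -16.4e3     # J/mol, standard Gibbs energy of formation (ideal gas), literature value
dg_N2 = dg_H2 = 0.0  # elements in their reference state
K = exp(-(2 * dg_NH3 - dg_N2 - 3 * dg_H2) / (R * T))
print(f"K(298 K) = {K:.2e}")   # on the order of 5e5 - equilibrium far on the ammonia side
```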

The effect of temperature on a chemical reaction is complex. First, the equilibrium prefers the endothermic reaction at high temperatures and the exothermic reaction at low temperatures. On the other hand, the reaction rates strongly increase with increasing temperature, often in a way that a reaction can take place at all only at high temperatures. To mention the ammonia reaction again, it is an exothermic reaction, and therefore low temperatures are preferred. However, to obtain useful reaction rates the temperature must be sufficiently high, so a compromise must be found.



For the ammonia reaction, a temperature of 400–500 °C is used in the chemical industry. As the number of moles decreases when ammonia is formed, high pressures (150–250 bar) drive the equilibrium to the ammonia side.

A catalyst is a substance which increases the rate of a chemical reaction without being consumed. Often, only very low amounts of a catalyst are sufficient to achieve a significant effect. If more than one reaction is possible, a catalyst can promote the desired one. A catalyst does not change the chemical equilibrium of a reaction; this means that it promotes the reaction itself as well as the reverse one. For reaction kinetics, the catalyst concentration must often be regarded as well. The most popular approach is Michaelis–Menten kinetics [163], which assumes that the reactant and the catalyst form a complex that further reacts to the product and the catalyst. The reaction rate is

r = k ⋅ ccat ⋅ creactant/(kMM + creactant) ,   (10.12)

with kMM as the Michaelis–Menten constant. Besides the equilibrium, the enthalpy of reaction is also often of interest. It is defined as the heat released when a reaction takes place at constant temperature and pressure. The equation is pretty simple:

ΔhR = ∑ hproducts − ∑ hreactants   (10.13)

The reason why this equation is so simple is that the real work has been done before with the rigorous calculation of the enthalpy, using the standard enthalpy of formation as the starting point (Chapter 2.8). Equation (10.13) automatically covers all the influences of temperature, pressure, and real phase behavior. An often cited approach for the calculation of the temperature dependence using polynomials for the cpid and summarizing the coefficients of the various temperature terms (e. g. [11]) is mainly an academic approach; in practice, its application is awkward. It is restricted to ideal gas applications and, moreover, it is required that all cpid functions are given as polynomials, which is rarely the case and which is by far not the optimum choice for the correlation. Δh0f and Δgf0 can be taken from data tables (e. g. [41, 42]) or be estimated using group contribution methods [11]. However, the problem of differences between large numbers must always be taken into account. Relatively small errors in determining Δh0f and especially Δgf0 can lead to significant errors in estimating the enthalpy of reaction or the equilibrium constant, respectively. Therefore, the results of estimation methods should be handled with care. The following example gives an impression about the sensitivity of Δh0f .


Example
Calculate the enthalpy of reaction for the hypothetical isomerization reaction of 1,1-dichloroethane (11DCE) and 1,2-dichloroethane (12DCE)

CHCl2–CH3 ⇌ CH2Cl–CH2Cl

at t = 25 °C in the ideal gas state. The enthalpies of formation are [41]

Δh⁰f (11DCE) = −130120 J/mol
Δh⁰f (12DCE) = −126780 J/mol

Consider that these values might have an error of ± 1 %. What is the possible range of the enthalpy of reaction?

Solution
The expected enthalpy of reaction is

ΔhR = −126780 J/mol + 130120 J/mol = 3340 J/mol

The maximum possible enthalpy of reaction is

ΔhR = −126780 ⋅ 0.99 J/mol + 130120 ⋅ 1.01 J/mol = 5909 J/mol ,

whereas the minimum one is

ΔhR = −126780 ⋅ 1.01 J/mol + 130120 ⋅ 0.99 J/mol = 771 J/mol

This means that in this case an error of ± 1 % causes a deviation of ± 77 %.

Chemical reactions are often considerably exothermic. To control the temperature, the heat removal must be sufficient even in the worst case. Otherwise, things will continue to spiral downward: if the heat removal cannot compensate the heat of reaction, the temperature in the reactor will rise. With increasing temperature, the reaction rates increase exponentially, causing more heat of reaction which cannot be removed again, leading to a further temperature rise … and so on (runaway reaction). According to Equation (10.3), the heat generation increases exponentially with temperature, while the heat removal due to cooling increases with temperature only linearly. In a short time, large amounts of heat can be generated, which can possibly end up in an explosion or in the destruction of the reactor. Often, degradation reactions are promoted which spoil the product. It is essential that the occurrence of temperature peaks is avoided [8]. Especially in fixed-bed reactors, temperature peaks can occur locally and then lead to the behavior described above.


For reactor design, it must be known how sensitive the reactor is when the operation conditions like reactant concentration or temperature are slightly changed. Criteria have been developed [8] which make it possible to assess whether a runaway reaction is possible or not. Usually, multiple reactions take place in a reactor in parallel. The design must be performed in a way that the desired ones are supported. For this purpose, there are some options available.

10.2 Reactors

The reactor is always the heart of a chemical plant, and the choice of the type of the reactor has a great influence on the amount and the quality of the product. On the other hand, the number of choices is manifold, and only the most typical ones can be discussed here. Comprehensive discussions on reactor types can be found in [8] and [163].

First, one can distinguish between discontinuous and continuous reactors. In the discontinuous mode (batch), all the reactants, solvents and catalysts are fed into the reactor, usually a vessel. The reactor is agitated so that the holdup is homogeneous concerning temperature and composition. The composition of the holdup changes with time. Batch reactors are flexible; they can handle different products, and the residence time can be chosen arbitrarily. Their disadvantage is that there are dead times for filling and emptying the reactor and for the adjustment of the temperature. Batch reactors are useful for multipurpose plants and for products with small plant capacities. It is difficult to handle fast or strongly exothermic reactions.

Continuous reactors have constant feed and product streams. All parameters like temperature or pressure are kept constant. The product quality can be kept constant due to the constant conditions and the option of automation. There is a tendency for the reactor volume to become smaller, as there are no dead times. Contrary to batch reactors, their flexibility is limited, and only small variations of temperature, feed flow, and feed quality can be tolerated. In construction, care must be taken that inlet and outlet nozzles are not too close together. Otherwise, shortcut flows will occur, and a larger part of the reactants does not take part in the reaction.

Also, reactors can be operated in a semicontinuous mode. This can mean that in a continuous reactor one of the reactants is fed batchwise, or that in a batch reactor one of the products is continuously removed. For the latter case, esterification reactions are a well-known example, e. g.

C6H13COOH + C10H21OH ⇌ C6H13COOC10H21 + H2O

Water is the substance with the lowest boiling point and can be easily removed from the reactor by evaporation. The advantage is that the equilibrium is shifted to the ester side, and 100 % conversion can be achieved.

In the semicontinuous mode, strongly exothermic reactions can be handled by adjusting the feed of one of the reactants in a way that the heat of reaction can be removed. Many reactor types can be characterized by three simplified ideal reactors (Figure 10.2).

Figure 10.2: Stirred tank reactor and tubular reactor.





Ideally mixed batch stirred tank reactor: The mixture of reactants is filled into the reactor and perfectly mixed during the entire reaction time. During the reaction time no substance is removed or added. The reactor can be heated or cooled. Ideally mixed continuous stirred tank reactor (CSTR): The continuously operated stirred tank reactor is a continuously operated vessel where the reaction takes place. There is an average residence time of the reaction mixture, simply given by τres =



V V̇

(10.14)

The actual residence time can differ for the various components. In the ideal continuous stirred tank reactor, it is assumed that reactants and products are mixed instantaneously. There is no concentration or temperature gradient. This is usually not an advantageous assumption, as reactant molecules which have not reacted are instantaneously transported from the inlet to the product stream. In contrast to intuition, a real continuous stirred tank reactor is superior to the ideal one, as the transportation of non-reacted reactants from the inlet to the outlet of the reactor takes some time, so that all reactant molecules have the chance to react. The CSTR is appropriate for fast reactions, where the time needed for the reaction is considerably smaller than the average residence time. Plug-flow tubular reactor (PFR): The tubular reactor is a line with a continuous flow, meaning that the mass flow does not change over the length of the reactor. It is essential to assume plug flow for simplification. Temperature and concentration are constant over the crossflow area, but vary continuously along the length coordinate due to the progress of the reaction. The characteristics of the plug-flow tubular reactor are analogous to

10.2 Reactors | 301

Figure 10.3: CSTR cascade.

Figure 10.4: Plug-flow reactor with recycle.



the batch stirred tank reactor; in both cases, the reaction takes place in a given volume, where all substances are completely mixed, without interacting with other volumes or, respectively, volume elements.5 In real tubular reactors, dispersion causes axial mixing of the substances, and heat conduction and the influence of possible radial heating or cooling cause a temperature profile. Mixed forms: The ideal continuous stirred tank reactor and the plug-flow reactor represent the limiting cases concerning reactor behavior. In the CSTR, there is complete mixing of reactants and products without a temperature profile. The reactant concentration is always low due to the instantaneous mixing, whereas the product concentration is high. In the tubular reactor, the reactant concentration decreases from high values at the inlet to low values at the outlet, with the products showing the opposite profile. There are combinations of CSTR and PFR showing the intermediate behavior, i. e. the CSTR cascade (Figure 10.3) and the PFR with recycle (Figure 10.4). With an increasing number of reactor elements, the CSTR cascade approaches the behavior of the tubular reactor, whereas the plug-flow reactor with recycle can show the same properties as a single CSTR.
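The approach of the CSTR cascade towards the plug-flow limit can be illustrated with an assumed first-order reaction; the rate constant and residence time below are purely illustrative numbers, not design data.

```python
# Conversion of a first-order reaction A -> B in a cascade of N ideal CSTRs,
# compared with the plug-flow reactor (PFR) limit. Assumed k*tau = 3.
from math import exp

k_tau = 3.0                                    # Damkoehler number k * tau_total
X_pfr = 1 - exp(-k_tau)                        # plug-flow limit, about 0.95
for N in (1, 2, 5, 10, 50):
    X_cascade = 1 - (1 + k_tau / N) ** (-N)    # N equal tanks, each with tau/N
    print(f"N = {N:2d}:  X = {X_cascade:.3f}   (PFR: {X_pfr:.3f})")
# The conversion rises monotonically from 0.75 (single CSTR) towards the PFR value.
```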

There are also reactor types available for heterogeneous reactions (Figure 10.5). To illustrate which aspects have to be considered in reactor design, gas-liquid reactions will be exemplarily reviewed. The other types are thoroughly discussed in [8]. For gas-liquid reactions, it is assumed that the reaction takes place in one of the phases. Therefore, one of the components has to change the phase, and it must be transported from the phase boundary to the bulk of the phase where it can take part in the reaction. There is a complex dependence of the reaction rate on mass transfer, heat transfer, reaction kinetics, solids distribution, mixing, and gas solubility. Usually, a complex modeling performed by a specialist takes place before the reactor is designed. It is a key question how the energy needed for the distribution of the gas is introduced into the reactor. Mainly, three types can be distinguished, i. e.:

5 The statement is only valid as long as the volume does not change during the reaction.

– by agitating the reactor content, where the agitator disperses the gas and mixes it with the liquid. Most effective is the hollow stirrer, where the gas is sucked in through the hollow shaft, as a low-pressure region is formed when the agitator is rotating. Note that the power demand for the agitator strongly depends on the rotation speed, but even more on the diameter of the vessel:

  P = Ne ⋅ n³ ⋅ d⁵ ⋅ ρL ,   (10.15)

  where Ne is the so-called Newton number, characterizing the particular agitator (a short numerical example follows after this list).
– by compressing the gas. An example is the bubble column (Figure 10.5). The gas is led into the reactor at the bottom through a nozzle holder, which disperses the gas to small bubbles to increase the mass transfer area between gas and liquid. Different types of circulation can be set up. Installing perforated plates, even a cascade can be realized.
– by a liquid pumparound. The dispersion is achieved in a two-component jet. By accelerating the liquid in the jet, again a low-pressure region is formed which sucks in the gas.
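For a first orientation, Equation (10.15) can be evaluated quickly; the Newton number, rotation speed, and diameter below are assumed example values only.

```python
# Power demand of an agitator, Eq. (10.15), with illustrative numbers.
Ne = 5.0        # Newton number of the agitator type (assumed)
n = 2.0         # rotation speed in 1/s
d = 1.0         # diameter in m
rho_L = 1000.0  # liquid density in kg/m3

P = Ne * n**3 * d**5 * rho_L
print(f"P = {P / 1000:.1f} kW")   # 40 kW; doubling d would give 32 times the power
```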

Concerning the energy input, agitators and liquid pumparounds are more effective than gas compression. For all options, high viscosities can cause problems. An interesting option for performing reactions determined by the equilibrium is reactive distillation. Its great advantage is that reaction and separation are carried out in one piece of equipment. If the reaction is exothermic, the enthalpy of reaction can easily be used for vapor generation. Because of the vapor-liquid equilibrium, there is a given limit for the temperature, so the danger of undesired side reactions or even runaway reactions is small. However, the main and outstanding advantage is that the conversion can exceed the conversion given by the reaction equilibrium. If the boiling points are appropriate, one of the products can be removed from the reactants due to

Figure 10.5: Reactor types for heterogeneous reactions. Courtesy of Prof. Dr. J. Gmehling.


Figure 10.6: Reactive distillation for the production of methyl acetate.

the distillation effect, driving the equilibrium to the product side. A well-known example is the esterification reaction of methanol with acetic acid, giving methyl acetate. As methyl acetate forms azeotropes with water and with methanol, the purification of the ester produced is rather complex and requires several columns in the conventional path [8]. With reactive distillation, only one single column is necessary (Figure 10.6). The design of reactive distillation columns is much less certain than it is for conventional distillations. There is a complex interplay between phase equilibria and reaction kinetics, where the kinetic equations must be formulated using activities instead of concentrations (Chapter 10.1). The performance of trays and packings with respect to the chemical reaction can hardly be predicted. Laboratory tests should be performed, and if there are doubts about the scale-up, experiments in larger equipment might be useful. One of the drawbacks of reactive distillation is that heterogeneous catalysts are difficult to exchange. Figure 10.7 shows the Katapak packing (Sulzer). It is a structured packing where the catalyst is put into small bags to remain stationary. The exchange of the catalyst is a great effort and must completely be done manually. Homogeneous catalysts usually cause much less trouble and can often be removed by distillation. However, the column diameter is still determined by the residence time for the reaction. The simultaneous optimization of hydrodynamics cannot take place, and from that point of view the column has often a severe underload. Another option is the use of external reactors (Figure 10.8). They can be used for slow reactions as well, as the residence time in the reactive zone can be adjusted by conventional reactor design. The scale-up is easy, as there is no coupling between reaction and hydrodynamics. The catalyst exchange is easier, as well as the design. An extensive piloting is usually not necessary. For standard reactors, process simulators offer some options for the representation.


Figure 10.7: Sulzer Katapak. © Sulzer Chemtech Ltd.

Figure 10.8: Reactive distillation with an external reactor.







The simplest and most widely used one is the stoichiometric reactor. As input, the stoichiometry of the various reactions and the conversion referring to one of the reactants are necessary. There is the option that the conversion factors refer to the amount at the reactor inlet or that the reactions take place in a sequence, where the conversion refers to the amount at the beginning of this reaction. The yield reactor is an even more simple one. The outlet concentration can be specified, the mass flow is kept constant. The disadvantage is that alchemy can be simulated with this option, it is no problem to specify that water is turned into gold. It is often used when the reactions taking place are more or less unknown, e. g. in fermenters in biotechnology. There are several options to define an equilibrium reactor. In the first option, the reactions taking place must be specified. Either the equilibrium constants can be

10.2 Reactors | 305





given, or the simulator calculates the equilibrium using the Gibbs energies (Equation (10.11)). Multiple phases can be considered as well. Alternatively, only the possible reaction products can be listed without defining the reactions theirselves. The program can then find the corresponding minimum of the Gibbs energy. Tubular reactors (as plug flow reactors), continuous and batch stirred tank reactors can be specified using reaction kinetics.

11 Mechanical strength and material choice

Even a process engineer should know that the crack in the sausage on the grill is in longitudinal direction. (Olaf Stegmann)

The statement refers to a discussion where it was assumed that the worst case for a pipe was a sudden break through the cross-flow section. In fact, this is very unlikely to happen, as Figure 11.1 indicates. When the pipe is pressurized, the equilibrium of forces in the longitudinal direction is1

σa ⋅ π D s = p ⋅ π D²/4 ,   (11.1)

giving

σa = p D/(4 s) ,   (11.2)

where σa is the mechanical tension in the axial or longitudinal direction. In circumferential direction, the equilibrium of forces yields (Figure 11.1)

σt ⋅ 2 s L = p D L ,   (11.3)

giving

σt = p D/(2 s) ,   (11.4)

with σt as the mechanical tension in the tangential or circumferential direction. Thus, the tension in the circumferential direction is twice as large as in the longitudinal direction, and as long as there is no predetermined breaking point, the pipe would burst in the circumferential direction at overpressure, causing a longitudinal crack (Figure 14.11). If it bursts at all. There is a guideline statement called “leak before burst”. The established pressure vessel standards require designs which favor “leak before burst”, allowing the fluid to escape and reducing the pressure before the damage grows so large that a complete fracture takes place. Usually, the weakest parts against pressure load are the gaskets, and one can easily imagine that these will show leakage first. Equation (11.4) is the so-called boiler formula. It is the foundation of many formulas for mechanical stability calculations for vessels, where the elastic limit is inserted for the tension. In the particular technical guidelines, Equation (11.4) is supplemented by terms describing manufacturing uncertainties, corrosion allowances, safety factors

1 The wall thickness s is assumed to be small compared to the diameter.


Figure 11.1: Illustration of the vessel formula.

and the influence of welding seams. Also, different ratios between outer and inner diameter are regarded [150]. Before the mechanical stability can be considered, for each piece of equipment and the adjacent piping the design temperatures and design pressures2 have to be assigned. These are the conditions where a safe operation is definitely possible without restrictions, even if they are both reached at the same time. In fact, as standard wall thicknesses are used in the manufacturing process, the equipment will be able to cope with even higher pressures, since not the required one but the next larger standard wall thickness will be taken. It has to be proved that the equipment can withstand the design pressure, usually by pressurizing with liquid water. The test can in most cases not realize both design pressure and design temperature at the same time; therefore, a conservative temperature correction is performed, giving a test pressure higher than the design pressure at the lower temperature. Also, for high pieces of equipment (e. g. columns) this kind of test systematically yields higher pressures than requested, as the whole apparatus is pressurized to the test pressure, but the lower part is additionally exposed to the hydrostatic pressure. In operation, pressure vessels are in most cases protected by pressure relief devices such as safety valves or rupture discs (Chapter 14.2), which actuate at the design pressure. Again, the definition of design pressure turns out to be soft, as a safety valve starts to open at the design pressure but fully opens at a pressure which is 10 % above, so that again the design pressure is systematically exceeded. In the design basis, a rule must be set up for how the values for design temperature and design pressure can be determined. From process simulation, the normal or maximum operating conditions are known. The design conditions are related to these values, often by a factor (1.1–1.25) for the design pressure, and by an offset for the design temperature. In some cases, the operating conditions are not decisive. For example, the design temperature of heat exchangers must refer to the highest possible temperature of the hot side. In case the apparatus is cleaned by steamout,3 it can happen that the corresponding steam determines the design conditions.

2 Design pressures refer to overpressures, and the unit is therefore “MPag” or “barg”, where the “g” stands for “gauge”, i. e. overpressure. In contrast, the letter “a” for “absolute”, e. g. in MPaa or bara, indicates that absolute pressures are meant.
3 See Glossary.



Figure 11.2: Ductile fracture and brittle fracture. © BradleyGrillo/Wikimedia Commons/CC BY-SA 3.0. © Sigmund/Wikimedia Commons/CC BY-SA 3.0. https://creativecommons.org/licenses/by-sa/3.0/deed.de.

Process equipment can also be exposed to low temperatures. The materials of construction can undergo a transition from ductile to brittle behavior at low temperatures, which increases the risk of brittle fracture [166, 167], one of the most critical damages. Material failure should be of the ductile type, meaning that plastic deformation takes place before complete destruction. This gives at least some time to react. In brittle fracture, the material failure occurs suddenly. Figure 11.2 shows the difference between brittle fracture and ductile fracture. It can easily be seen that for the brittle fracture no deformation takes place. Therefore, the specification of the equipment should include an indication of the minimum design metal temperature (MDMT). The case which is most often relevant for the determination of the MDMT is the so-called auto-refrigeration case. If a low-boiling substance is stored in a vessel under pressure in the liquid state, it will evaporate in case of pressure relief. The temperature of the liquid will follow the boiling point curve. For example, a vessel containing liquid propylene at p = 10 bar will cool down to t = −48 °C, the boiling temperature of propylene at p = 1 bar, if a complete pressure relief down to ambient pressure takes place. If there is a mixture in the vessel, the boiling temperature varies according to the vapor-liquid equilibrium, possibly ending up at the boiling temperature of the highest boiling component or azeotrope. However, it usually cannot be guaranteed that at pressure relief the component with the highest boiling point is actually in the vessel. Therefore, in most cases the lowest boiling point is relevant for the design. Certainly, one should not relate the MDMT with the design pressure of the vessel. Especially for the case of pressure relief described above, it becomes clear that the pressure to

which the vessel is exposed strongly decreases with the temperature. Therefore, the minimum allowable temperature (MAT) should be defined as a function of pressure. Construction engineers should at least be provided with single temperature-pressure pairs for the various cases to avoid overdesign. It is often underestimated that an overpressure from outside can be critical for the mechanical stability of equipment or piping. This occurs if the equipment is evacuated according to the process conditions or if the equipment is emptied with a vacuum pump. If such conditions occur, it should be indicated in the specification, e. g. by an additional design pressure pDes = −1 barg. Things become more complicated if the pressure exposure from outside is higher than 1 bara. In double pipes, steam on the annular side might have a considerably higher pressure than the product in the inner tube. This should be indicated in some way on the datasheet, e. g. by specifying the overpressure from outside with a negative sign as in the vacuum case. An indication like pDes = −16 barg will hopefully attract the attention of the vendor or cause at least a further inquiry so that a damage as shown in Figure 11.3 can be prevented.

Figure 11.3: Damage of the inner tube of a double pipe due to wrong specification.
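For the auto-refrigeration case described above, the minimum design metal temperature essentially follows from the temperature at which the vapor pressure of the stored substance equals the relief back pressure. The following minimal sketch illustrates this with the Antoine equation; the propylene coefficients are illustrative values only and should be checked against a reliable data compilation before use.

```python
import math

# Antoine equation: log10(p / bar) = A - B / (T / K + C)
# Illustrative coefficients for propylene - verify against a data bank before use
A, B, C = 3.97, 795.8, -24.9

def boiling_temperature(p_bar):
    """Invert the Antoine equation to obtain the boiling temperature in K."""
    return B / (A - math.log10(p_bar)) - C

# complete pressure relief of liquid propylene down to ambient pressure
t_relief = boiling_temperature(1.013) - 273.15
print(f"Auto-refrigeration temperature: {t_relief:.0f} degC")  # roughly -48 degC
```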

Mechanical stability is highly related to the use of appropriate materials. Also, the chemical stability must be guaranteed. To withstand aggressive chemicals, a number of special materials are available. Often, the quality of the surface is also an issue, e. g. in bio-processes, where microorganisms tend to stick to rough surfaces. In chemical plants, the materials used can be divided into metals (steel, aluminium, nickel, titanium), nonmetallic materials (ceramics, graphite, glass), and polymers. The choice of the materials should be performed by a specialist. Critical issues for the choice of the material are [74]: – strong organic or inorganic acids; – sour gases, especially bromine and chlorine; – fluoride, chloride and bromide ions; – caustics, especially at high temperatures;

– hydrogen, which can diffuse through many materials and is explosive over a wide concentration range.

Corrosion data can often be found in the literature. If sufficient information is not available, a material test can be helpful, where a piece of metal is exposed to the medium for a certain time. This procedure is easy but time-consuming. The terminology for the identification of certain steels is a bit confusing, as different guidelines use completely different systems. For example, the well-known V2A steel is called 1.4301 according to the material number (DIN EN 10088-1), X5CrNi18-10 according to the chemical composition, and 304 according to the AISI standard. A comprehensive compilation of all the issues concerning the material choice can be found in [168].

12 Piping and measurement

A tube has an excavation which is as long as the tube itself. (Reportedly from a handbook of construction)

The transport of gaseous and liquid substances between the different pieces of equipment is achieved in pipes, which form a major part of a chemical plant. The piping engineers are the ones with the most interaction with other activities; for instance, a detailed piping cannot be performed unless the nozzles of the vessels have been put in place. A stepwise working procedure with continually increasing generation of information is typical for piping. While piping activities are more or less concentrated in the detailed engineering, the basis is in fact provided by process engineering, as the piping dimensions are mainly determined by the process. This chapter focuses on the process aspects of piping and other items like valves and measurement devices which are related. The important terms concerning the planning of the piping are explained.

12.1 Pressure drop calculation

12.1.1 Single-phase flow through pipes

Pressure drop calculations in pipes are standard tasks in process engineering. The following section explains the common equations which are widely used for this purpose. It should be mentioned that all equations refer to Newtonian fluids. For non-Newtonian flow, where the viscosity depends on the shear forces, other procedures are necessary [169]. With the tube length L and the friction number λ, the relationship between pressure drop and velocity is given by Equation (12.1):

Δp = λ ⋅ (ρw²/2) ⋅ (L/dh) ,    (12.1)

where dh is the hydraulic diameter. For different geometries, it is defined by

dh = 4 A/U ,    (12.2)

where A is the cross-flow area and U the circumference. For circular tubes, Equation (12.2) yields of course the inner tube diameter dh = d. For circular rings, Equation (12.2) gives

dh = da − di ,    (12.3)

with da as the outer and di as the inner diameter. For a rectangular cross-section with two different side lengths s and w, the hydraulic diameter is

dh = 2 s w/(s + w)    (12.4)


Figure 12.1: Moody diagram. © S. Beck and R. Collins, University of Sheffield/Wikimedia Commons/CC BY-SA-3.0. https://creativecommons.org/licenses/by-sa/3.0/deed.en.

Using the Reynolds number

Re = w dh ρ/η ,    (12.5)

the friction factor λ as a function of the Reynolds number can be determined with the well-known Moody diagram (Figure 12.1). For laminar flow (Re < 2320), there is a strict theoretical relationship independent of the surface roughness of the pipe according to the Hagen–Poiseuille law [150],

λ = (64/Re) ⋅ φ    (12.6)

The factor φ is 1 for circular tubes. For noncircular cross-sections, the factor can be determined using Tables 12.1 and 12.2 [150]. For the turbulent flow, the roughness of the inner pipe surface plays a major role and is represented by the roughness k, which can be interpreted as the size of sand grains on the surface. Guide values are given in [150]. For smooth pipes made of steel, k can be set to k = 0.02 . . . 0.06 mm, for other materials (Cu, Al, glass, polymer) k = 0.001 . . . 0.0015 mm can be achieved.

Table 12.1: Circular ring.
da/di:  1      5      10     20     50     100
φ:      1.5    1.45   1.4    1.35   1.28   1.25

Table 12.2: Rectangular cross-section.
s/w:    0      0.1    0.3    0.5    0.8    1.0
φ:      1.5    1.34   1.1    0.97   0.9    0.88

For the turbulent flow in a smooth pipe (Re ≥ 2320, k Re < 65 dh), the friction factor can be determined by the formula of Prandtl and v. Kármán [150]:

λ = [2 lg (Re √λ/2.51)]^(−2) ,    (12.7)

which has a theoretical background [170] and is valid in the whole range. Its disadvantage is that it must be solved iteratively. There are two popular approximations, the Blasius equation

λ = 0.3164/Re^0.25 ,    (12.8)

valid for 2320 < Re < 100000, and the formula

λ = 0.309/[lg (Re/7)]² ,    (12.9)

which can also be applied in the whole range [150]. Figure 12.2 shows that it is really worth taking care of the application ranges. For rough surfaces, the formulas of Colebrook [150]:

λ = [−2 lg (2.51/(Re √λ) + 0.27/(dh/k))]^(−2)    (12.10)

for Re ≥ 2320, 65 dh < k Re < 1300 dh; and Nikuradse [150]:

λ = [2 lg (3.71 dh/k)]^(−2)    (12.11)

for Re ≥ 2320, k Re > 1300 dh can be used. During the process engineering of a whole plant, the diameters of a large number of lines have to be sized, which requires excellent documentation and workflow. It is a good approach to compute the friction factor with Equations (12.6)–(12.11) in an EXCEL file and make a list of the lines to be sized. A quantity which can easily be interpreted is the pressure drop per 100 m tube length according to Equation (12.1). Varying the tube diameter, the calculated pressure drops can be compared with the corresponding values which are usually defined in the guidelines of the particular companies. Also, it can be checked whether the velocities in the pipe are reasonable, e. g. 1–2 m/s


Figure 12.2: Comparison between Equations (12.8) (· · ·), (12.7) (—), and (12.9) (- - -).

Figure 12.3: Recommended velocities in a pipe.

for liquids and 10–20 m/s for gases. Figure 12.3 gives a rough orientation. All cases regarded have in common that for larger diameters higher velocities can be allowed, as in this case the friction with the tube wall plays only a minor role. It should also be mentioned that diameters below 2󸀠󸀠 are rarely applied due to static reasons. It is worth thinking over the meaning of this procedure. It should be clear that the guideline values for the pressure drop are recommended values. They are compromises between the additional investment costs for pipes with larger diameters and higher operation costs due to larger pressure drops for pipes with smaller diameters. Anyway, the tube does not fail if the recommended values are exceeded. It makes sense to accept them if no other requirements for the pipe exist, e. g. special pressure drop constraints if the compressor or the pump is limited or if the guidelines for safety valves must be obeyed (Chapter 14.2). They can be completely ignored if the line leads to a valve where the pressure is significantly lowered anyway. There is often an overreaction when the recommended value is slightly exceeded. When the next diameter



is chosen, it often happens that the pressure drop goes down to very low values, indicating that the tube is by far overdesigned. This becomes clear when Equation (12.1) is written with the mass flow and, for simplicity, the Blasius Equation (12.8):

Δp = 0.3164 Re^(−0.25) ⋅ ρ (ṁ/(ρA))²/2 ⋅ L/d    (12.12)

With A = πd²/4 one gets

Re = w d ρ/η = 4ṁ/(π d η) = (4ṁ/(π η)) ⋅ d^(−1)    (12.13)

and, after summarizing constant terms,

Δp = C1 d^0.25 ⋅ C2 d^(−4) ⋅ L/d = C d^(−4.75)    (12.14)

This illustrates the dramatic dependence of the pressure drop on the tube diameter, as does the following example.

Example
A cooling water stream (ṁ = 50000 kg/h, t = 28 °C, p = 6 bar) is pumped to a consumer unit. The recommended maximum pressure drop is Δp = 0.2 bar per 100 m. Calculate the appropriate tube diameter. The tube shall be hydraulically smooth.

Solution
First, the corresponding physical property data are ρ = 996.46 kg/m³ and η = 0.8324 mPa s [29]. The first approach will be a 4″ tube, giving d ≈ 101.6 mm (1″ = 25.4 mm). Then, the Reynolds number is (Equation (12.13))

Re = 4ṁ/(π η d) = 4 ⋅ 50000 kg/h/(π ⋅ 0.8324 mPa s ⋅ 101.6 mm) = 209099    (12.15)

According to Prandtl/v. Kármán (Equation (12.7)), the friction factor is evaluated iteratively to be λ = 0.0155. Then, using the velocity

w = ṁ/(ρA) = 50000 kg/h/(996.46 kg/m³ ⋅ π/4 ⋅ (101.6 mm)²) = 1.72 m/s ,    (12.16)

the pressure drop per 100 m is equal to (Equation (12.1))

Δp100 m = 0.0155 ⋅ 996.46 kg/m³ ⋅ (1.72 m/s)²/2 ⋅ 100 m/101.6 mm = 0.225 bar    (12.17)

This is slightly larger than the recommended 0.2 bar/100 m, at a reasonable velocity. Increasing the tube diameter to 6″, which is the next standard nominal diameter, the resulting pressure drop per 100 m becomes Δp = 0.03 bar, which is by far lower. Probably, it is the more reasonable decision to stay at d = 4″.
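Such a sizing study lends itself to a small script instead of a spreadsheet. The following sketch reproduces the procedure of the example for hydraulically smooth pipes; the property values are the ones quoted above, and the function names are arbitrary.

```python
import math

def friction_factor_smooth(Re):
    """Friction factor after Prandtl/v. Karman, Eq. (12.7), solved iteratively."""
    lam = 0.02  # starting value
    for _ in range(50):
        lam_new = (2.0 * math.log10(Re * math.sqrt(lam) / 2.51)) ** -2
        if abs(lam_new - lam) < 1e-10:
            break
        lam = lam_new
    return lam

def dp_per_100m(m_dot, rho, eta, d):
    """Pressure drop per 100 m of straight pipe, Eq. (12.1); all inputs in SI units."""
    A = math.pi * d ** 2 / 4.0
    w = m_dot / (rho * A)               # velocity
    Re = w * d * rho / eta              # Eq. (12.5)
    lam = friction_factor_smooth(Re)
    return lam * rho * w ** 2 / 2.0 * 100.0 / d, w, Re

# cooling water example: 50000 kg/h in a 4" pipe (d = 101.6 mm)
dp, w, Re = dp_per_100m(50000 / 3600.0, 996.46, 0.8324e-3, 0.1016)
print(f"Re = {Re:.0f}, w = {w:.2f} m/s, dp = {dp / 1e5:.3f} bar per 100 m")
```

Run for a list of candidate diameters, this immediately reproduces the comparison between the 4″ and the 6″ line.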

318 | 12 Piping and measurement 12.1.2 Pressure drops in special piping elements For special piping elements, the pressure drop is usually calculated via Δp = ζ

ρw2 2

(12.18)

A number of ζ-values according to [150] is listed in Appendix B.

12.1.3 Pressure drop calculation for compressible fluids

For gas flows, the procedure described in Chapter 12.1.1 is only valid if the flow can be regarded as incompressible. This can be checked with the Mach number (Ma), the ratio between the actual velocity and the speed of sound. The velocity should stay below 30 % of the speed of sound:

Ma = w/w* < 0.3    (12.19)

The speed of sound can be expressed as [11]

(w*)² = −v² (∂p/∂v)s    (12.20)

With a pressure-explicit equation of state, it can be determined via [11]

(w*)² = v² [(T/cv) (∂p/∂T)v² − (∂p/∂v)T]    (12.21)

with

cv = cv^id + T ∫∞^v (∂²p/∂T²)v dv    (12.22)

For ideal gases, Equation (12.20) gives

(w*)² = κ R T    (12.23)

with κ = cp^id/cv^id, where cp^id and cv^id are assumed to be constant and not a function of temperature. The speed of sound is a limiting velocity for a gas flow in a pipe. Except for the case that the pipe has the shape of a Laval nozzle with a minimum in the cross-flow area (Figure 12.4), thermodynamics ensures that the speed of sound is not exceeded [154].

Figure 12.4: Shape of a Laval nozzle.

Due to the relatively high velocities for compressible flows, large pressure drops occur, often changing the state variables of the stream significantly. Due to the pressure loss, the density is reduced. This means that at constant mass flow the volume flow and therefore the velocity increases, which in turn produces further increased pressure drops. The density further decreases, and finally the pressure drop is by far increased in a vicious circle. This behavior can be summarized in the following striking formula:

“Pressure drop causes pressure drop”

This is an important issue, especially for the design of the outlet lines of pressure relief devices (Chapter 14.2). For compressible flow, the pressure drop calculation is demonstrated in the following example. Because of the changes of the fluid state variables, it is useful to divide the tube into small increments and calculate them sequentially one by one, where the state is updated at the inlet of each segment. The procedure can be performed in an EXCEL file or with the help of a process simulator, which usually offers the incremental calculation as an option. It is important to know that a conventional pressure drop calculation with a given mass flow usually yields a solution which is not realistic. At the end of the pipe, an incompressible fluid must end up with the given outlet pressure. If the pressure drop is too large, the conclusion is that the mass flow cannot be realized; due to choking, it will be less. If the pressure drop is too low, the fluid will expand in a way that the inlet pressure is lowered to an extent where the outlet pressure is met. The same holds in principle for a compressible flow; however, the mentioned expansion is coupled with a significant change of the state. Furthermore, another restriction is that the speed of sound cannot be exceeded in a pipe. If it is reached upstream of the pipe outlet, it is clear that the mass flow assumed is too high and choking takes place. At most, the speed of sound can be reached directly at the pipe outlet. In this case, the outlet pressure will not be met; instead, the fluid will expand directly after it has left the pipe. The following example should illustrate this. It might look a bit exotic; however, cases like this occur in outlet lines of rupture discs (Chapter 14.2).

Example
A nitrogen flow (ṁ = 30000 kg/h, p = 100 bar, t = 20 °C) enters a line (L = 50 m) which ends at a header with p = 1 bar. Choose an appropriate diameter for the line. The tube shall be hydraulically smooth.


Figure 12.5: Pressure courses without expansion at the inlet.

Figure 12.6: Pressure courses with expansion at the inlet.

Solution
Various diameters are tested with a tube increment length of ΔL = 0.1 m. The results are illustrated in Figures 12.5 and 12.6.
– The 1″ line is too narrow for the given mass flow. At the inlet, the Mach number is already 0.38. Due to the “pressure drop causes pressure drop” effect, the fluid continuously expands, and the velocity rises increasingly. After almost 14 m, speed of sound is reached and choking takes place (Figure 12.5). The given mass flow cannot pass the pipe as assumed.
– For a 2″ line, things are more difficult. Starting with p = 100 bar at the inlet, the pressure drop of the line is Δp = 6 bar, giving p = 94 bar at the outlet, and the Mach number increases very smoothly from Ma = 0.09 at the inlet to Ma = 0.1 at the outlet (Figure 12.5). The conclusion is that the pipe diameter is sufficient; however, one should imagine what really happens in the pipe; especially, the design pressure of the pipe (Chapter 11) might be interesting. For this purpose, it is assumed that an adiabatic expansion happens at the inlet, where the velocity change is not neglected in the First Law:

h1 + w1²/2 = h2 + w2²/2    (12.24)

In an iterative procedure, it is found that an expansion to p = 33.7 bar at the inlet gives a possible result. At the outlet of the pipe, the speed of sound is reached, while the pressure only drops to 9.1 bar (Figure 12.6). The fluid will rapidly expand after leaving the pipe.





– While the 4″ and the 6″ lines show a similar behavior, the 8″ line works in a different way. Again, the inlet pressure of p = 100 bar will yield a pressure drop which is by far too low (Δp = 7 mbar, Figure 12.5), and expansion will take place. Iteratively, an expansion to p = 1.49 bar at the inlet can be determined. At the pipe outlet, p = 1 bar is reached, while the Mach number Ma = 0.6 indicates that the velocity is below the speed of sound (Figure 12.6).
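The incremental procedure used in this example can be sketched in a few lines. The script below is deliberately simplified (isothermal ideal gas, friction factor from Equation (12.9), acceleration pressure drop neglected), so its numbers deviate somewhat from the rigorous real-gas calculation above; the inner diameter of 52.5 mm assumed for the 2″ line is only illustrative.

```python
import math

R = 8.314462  # J/(mol K)

def march_gas_line(m_dot, d, L, p_in, T, M, eta, kappa=1.4, dL=0.1):
    """March along the pipe in increments of dL (isothermal ideal-gas sketch).
    Returns the outlet pressure, or None if the flow chokes inside the pipe."""
    A = math.pi * d ** 2 / 4.0
    w_sound = math.sqrt(kappa * R * T / M)          # Eq. (12.23)
    p, x = p_in, 0.0
    while x < L:
        rho = p * M / (R * T)                       # ideal-gas density at the local pressure
        w = m_dot / (rho * A)
        if w >= w_sound:                            # speed of sound reached: choking
            return None
        Re = w * d * rho / eta
        lam = 0.309 / math.log10(Re / 7.0) ** 2     # Eq. (12.9), hydraulically smooth
        p -= lam * rho * w ** 2 / 2.0 * dL / d      # Eq. (12.1) for one increment
        if p <= 0.0:
            return None
        x += dL
    return p

# nitrogen, 30000 kg/h, 50 m line, inlet at 100 bar and 20 degC
p_out = march_gas_line(30000 / 3600.0, 0.0525, 50.0, 100e5, 293.15, 0.028, 18e-6)
print("choked" if p_out is None else f"outlet pressure: {p_out / 1e5:.1f} bar")
```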

12.1.4 Two-phase pressure drop

The evaluation of the pressure drop of a one-phase flow is quite exact, with a well-defined theory behind it. Things become much more complicated when a second phase comes into play. Vapor-liquid flows in particular have a great technical importance. The pressure drop of a two-phase flow is characterized by the friction between the phases, which is hardly predictable. This friction causes the pressure drop to be higher than expected; it is usually underestimated, as even the apparently most conservative assumption of a pure vapor flow is not on the safe side, as Figure 12.7 shows. The pressure drop of water in the two-phase region at p = 1.1 bar is considered for various vapor fractions. It can be seen that the obvious approach of averaging the pressure drops of the vapor and the liquid flow systematically underpredicts the two-phase pressure drop. For high vapor fractions, the two-phase pressure drop exhibits a well-defined maximum; even the assumption of a pure vapor flow as mentioned above yields lower pressure drops. As explained below, the horizontal and the vertical upward and downward flows have to be distinguished for the calculation of the two-phase pressure drop.

Figure 12.7: Typical curvature of the two-phase pressure drop with respect to the vapor fraction. Calculated with the Friedel equation.

The best calculations for the two-phase pressure drop are probably the ones used in the commercial heat transfer programs, as they are decisive for the design of thermosiphon reboilers. However, to the knowledge of the author, they have not been published. The most popular published correlations are the ones of Lockhart-Martinelli [171], Friedel [172–174], and Beggs-Brill [175]. In the following, the Friedel method is explained as an example; it is considered to be the most reliable one because of its large database. However, errors up to 50 % might still occur. The method determines the two-phase factor R2Ph, which represents the ratio of the pressure drops of the two-phase flow and of a one-phase liquid flow with the same mass flow:

Δp2Ph = R2Ph ⋅ ΔpL    (12.25)

ΔpL is the pressure drop according to Equation (12.1), where the total mass flow of the two-phase stream is replaced by a fully liquid stream:

ΔpL = λL ⋅ (ṁ/A)²/(2 ρL) ⋅ L/dh    (12.26)

With the Reynolds numbers for both phases j = L, G

Rej = ṁ dh/(A ηj)    (12.27)

and the vapor mass fraction

x = ṁG/(ṁG + ṁL)    (12.28)

an auxiliary quantity A* can be calculated as

A* = (1 − x)² + x² ⋅ (ρL λG)/(ρG λL)    (12.29)

It must be emphasized that in fact for ṁ the total mass flow has to be used. In contrast to the single-phase flow, there is no discontinuity for λj at the transition laminar/turbulent but a continuous transition region. For a circular cross-flow area, the friction factor is

λj = 64/Rej   for Rej ≤ 1055    (12.30)

and

λj = [0.86859 ln (Rej/(1.964 ln Rej − 3.8215))]^(−2)   for Rej > 1055    (12.31)

For geometries differing from the circular cross-flow area the following changes have to be considered:





For a rectangular cross-flow area, the hydraulic diameter (Equation (12.2)) must be used: ṁ d /η , A h j

(12.32)

s 2 11 s + (2 − ) , 3 24 w w

(12.33)

Rej,rectangle = Ψ with Ψ=



where s is the length of the shorter and w the length of the larger side. The other steps are analogous to the circular cross-flow area. For circular ring cross-flow areas there are the relationships λj = 64/Rej

for Rej ≤ 1055

(12.34)

and λj = [2 lg (Rej √λj ) − E]

−2

for Rej > 1055

(12.35)

Equation (12.35) must be solved iteratively. E can be determined according to the following table. di /da E

0 0.8

0.05 0.932

0.3 0.961

0.8 0.968

1.0 0.97

Between the values for di /da , linear interpolation can take place. For the evaluation of the Reynolds number Re, again the hydraulic diameter (Chapter 12.1) must be used. With the Froude number Fr L =

ṁ 2 A2 gdh ρ2L

(12.36)

ṁ 2 dh A2 σρL

(12.37)

and the Weber number WeL =

R2Ph can be determined to be – for the horizontal and the vertical upward flow: R2Ph = A∗ + 3.43 x0.685 (1 − x)0.24 (ρL /ρG )0.8 (ηG /ηL )0.22 (1 − ηG /ηL )0.89 Fr −0.047 We−0.0334 , L L

(12.38)

324 | 12 Piping and measurement –

and for the vertical downward flow: R2Ph = A∗ + 38.5 x0.76 (1 − x)0.314 (ρL /ρG )0.86 (ηG /ηL )0.73 (1 − ηG /ηL )6.84 Fr −0.0001 We−0.087 L L

(12.39)

The Friedel equations are valid for the whole vapor fraction range 0 < x < 1. As limiting cases, one obtains the corresponding equations for the single-phase equations for vapor and liquid, except the transition region from laminar to turbulent flow. In case the influence of the roughness of the tube wall is not negligible (k ReG < 65 dh ), the result should be compared with the one for the pure vapor flow. One should always be aware that due to the relatively high pressure changes the vapor fraction along the tube might vary significantly. In these cases, it makes sense to divide the tube into small increments and evaluate the pressure drop increment by increment, as it is often necessary for compressible flow as well (Chapter 12.1.3). As for the compressible flow, the statement “Pressure drop causes pressure drop” holds. Current process simulators usually offer an opportunity to specify such a calculation. For the flow through piping elements, only the pipe elbow is sufficiently discussed: For the 90°-elbow, Muschelknautz [176] specifies the following procedure: B=1+ R2Ph = 1 + (

λ dL

2.2

(2 + r/d)

ρL − 1)[B x (1 − x) + x 2 ] ρG

(12.40) (12.41)

with r as the elbow radius. Accordingly, one obtains for the pressure drop Δp2Ph = λ

2 ̇ L (m/A) R d 2 ρL 2Ph

(12.42)

Equations (12.40) und (12.41) are only valid for the 90°-elbow. For other piping elements, the only remaining option is to define an equivalent tube length according to L=ζ

dh λ

(12.43)

and calculate the pressure drop along this artificially defined pipe. λ is then calculated using Equations (12.6), (12.7), (12.10), or (12.11), depending on the conditions. Besides the pressure drop, the flow pattern of a vapor-liquid flow is important. They are shown in Figure 12.8, taken from [177]. For vertical upward flow, the patterns are – Bubble flow (A): A large quantity of bubbles is present which are almost homogeneously mixed with the liquid. The liquid phase is still wetting the whole tube wall.



Figure 12.8: Flow patterns of vapor-liquid two-phase flow for horizontal (upper pictures) and vertical upward flow (lower pictures) [177]. © Springer-Verlag GmbH.



– –

Slug flow (B): Very large bubbles are formed which have a length that is by far larger than the diameter of the tube. When they end, they are followed by liquid flow with low vapor content. At a tube bend or a transition piece, this liquid will hit the tube wall and cause mechanical damage with time. Slug flow should be avoided. Chaotic flow (C): Large and small bubbles are randomly distributed. Wispy annular flow (D): The liquid is predominantly distributed around the tube wall. Vapor and swarms of droplets are in the tube core.

326 | 12 Piping and measurement –

Annular flow (E): The liquid is almost entirely distributed around the tube wall, only few droplets are suspended in the vapor flow in the tube core.

For horizontal flow, the patterns are (Figure 12.8) – Bubble flow (a): The vapor phase forms small bubbles. Due to the influence of gravity, they are distributed in the liquid in the upper part of the tube. – Stratified flow (b): The vapor phase is in the upper part, the liquid phase is in the lower part of the tube. There are no waves at the phase boundary. – Wavy flow (c): Similar to stratified flow (b), but with waves at the phase boundary. – Slug flow (d): Similar to wavy flow, but the waves can occupy the whole cross-section. There is an increased occurrence of bubbles in the liquid phase and droplets in the vapor phase. Again, there is the danger of mechanical damage for the tube. – Annular flow (e): The tube wall is fully wetted, but the liquid ring formed in the cross-section is asymmetric, with more liquid at the bottom than at the top of the tube. The vapor phase is in the center of the tube, with many liquid droplets. In the diagrams in Figure 12.8 the coordinates of the X- and Y-axis are defined as follows. For the vertical flow: il,0 = ig,0 =

ṁ 2 (1 − x)2 A2 ρL

(12.44)

ṁ 2 x 2 , A2 ρV

(12.45)

and for the horizontal flow: X=

)0.5 ( Δp Δx L

)0.5 ( Δp Δx V

(12.46)

,

where the Δp values refer to the situation where, respectively, the vapor and the liquid phase would occur alone and occupy the whole cross-section area. ṁ is the total mass flow. For the horizontal flow, the boundary between bubble and slug flow refers to the ordinate TD = [

( Δp ) Δx L

(ρL − ρV ) g

0.5

]

(12.47)



The coordinate for the boundary between annular and wavy flow is FD =

̇ mx A((ρL − ρV )ρV dg)0.5

(12.48)

The coordinate for the boundary between stratified and wavy flow is KD =

ṁ 3 x2 (1 − x) − ρV )ρV gηL

A3 (ρL

(12.49)

For the choice of the diameter, one should take care that periodical shocks due to slug flow should be avoided. It can be avoided by choosing lower pipe diameters, however, this option is often limited and the diagrams are not excessively accurate. Furthermore, small diameters give large pressure drops, which is often the limitation. Thus, the usual strategy is to avoid two-phase flow as far as possible, e. g. by placing the expansion valves directly upstream the vessel or, if possible, at a low point.
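The Friedel procedure above (Equations (12.25)–(12.39)) is straightforward to script. The following sketch is written for a circular pipe with all quantities in SI units; the water/steam property values in the example call are only illustrative and would normally come from a property data bank.

```python
import math

G = 9.81  # m/s2

def lam_transition(re):
    """Friction factor with continuous laminar/turbulent transition, Eqs. (12.30)/(12.31)."""
    if re <= 1055.0:
        return 64.0 / re
    return (0.86859 * math.log(re / (1.964 * math.log(re) - 3.8215))) ** -2

def dp_two_phase_friedel(m_dot, A, dh, L, x, rho_l, rho_g, eta_l, eta_g, sigma,
                         downward=False):
    """Two-phase pressure drop after Friedel; m_dot is the total mass flow of both phases."""
    re_l = m_dot * dh / (A * eta_l)                                   # Eq. (12.27)
    re_g = m_dot * dh / (A * eta_g)
    lam_l, lam_g = lam_transition(re_l), lam_transition(re_g)

    a_star = (1 - x) ** 2 + x ** 2 * rho_l * lam_g / (rho_g * lam_l)  # Eq. (12.29)
    fr_l = (m_dot / A) ** 2 / (G * dh * rho_l ** 2)                   # Eq. (12.36)
    we_l = (m_dot / A) ** 2 * dh / (sigma * rho_l)                    # Eq. (12.37)

    if not downward:  # horizontal and vertical upward flow, Eq. (12.38)
        r_2ph = (a_star + 3.43 * x ** 0.685 * (1 - x) ** 0.24
                 * (rho_l / rho_g) ** 0.8 * (eta_g / eta_l) ** 0.22
                 * (1 - eta_g / eta_l) ** 0.89
                 * fr_l ** -0.047 * we_l ** -0.0334)
    else:             # vertical downward flow, Eq. (12.39)
        r_2ph = (a_star + 38.5 * x ** 0.76 * (1 - x) ** 0.314
                 * (rho_l / rho_g) ** 0.86 * (eta_g / eta_l) ** 0.73
                 * (1 - eta_g / eta_l) ** 6.84
                 * fr_l ** -0.0001 * we_l ** -0.087)

    dp_l = lam_l * (m_dot / A) ** 2 / (2.0 * rho_l) * L / dh          # Eq. (12.26)
    return r_2ph * dp_l                                               # Eq. (12.25)

# boiling water at about 1.1 bar in a 52.5 mm pipe, 10 m, 20 % vapor mass fraction
A = math.pi * 0.0525 ** 2 / 4.0
print(dp_two_phase_friedel(0.5, A, 0.0525, 10.0, 0.2,
                           950.0, 0.65, 0.27e-3, 12e-6, 0.059))
```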

12.2 Pipe specification Besides the inner diameter of a pipe, which is decisive for the pressure drop calculation, there are of course a large number of other items to be specified for a pipe. To keep the overview, so-called piping classes are defined, which differ from company to company. In these piping classes, a large part of the information about a pipe can be predefined, such as: – Pressure rating: The design conditions of a pipe are normally predefined by the adjacent pieces of equipment. In the pipe class denomination there is usually a code which ensures sufficient design conditions. Also, the influence of the temperature on the mechanical stability is considered and defined. – Fluid code: A certain abbreviated code gives qualitative information about the fluid going to the pipe, both concerning the substances involved and the design conditions. There are sophisticated pipe class systems where the fluid code carries the complete information about the pipe. – Piping material: The piping material is indicated in the piping denomination code, either explicitly or via the fluid code mentioned above. – Gasket type: The gasket type can be indicated in the fluid code. Also, the necessary pipe connections (flange, welded connection) can be defined. – Insulation: There are a number of insulation types for a pipe which have to be distinguished:

328 | 12 Piping and measurement –











None: Insulation is not needed for streams where no exorbitant temperatures or other dangers occur. An example is cooling water. Heat insulation: An insulation must be defined (material, thickness) which avoids heat losses in the pipe. The insulation material must be thermally stable and effective at the required temperature. Cold insulation: An insulation must be defined (material, thickness) which can keep the stream in the pipe at the cold temperature required. The insulation material must be effective at the required temperature. Often the ingression of air humidity into the insulation material must be avoided. Personnel protection insulation: An insulation is provided which is thermally not effective, but prevents that members of the staff touch the pipe. This could cause injuries if it is hotter than approx. 60–70 °C. Most companies have guidelines where personnel protection insulation is required. Electrical tracing: Electrical tracing is required if there is a danger that the fluid is no more pumpable at cold environmental temperatures, e. g. water at temperatures below 0 °C. Electrical tracing is relatively expensive. Jacket tracing: A double pipe is manufactured where a heating agent (steam, hot water) is used to keep the temperature in the inner part of the pipe.

Pipes are assembled at environmental conditions. During operation, they will often be exposed to elevated temperatures, and their lengths will increase due to thermal expansion. If these expansions are not compensated for, large mechanical tensions will occur which can possibly cause damage to the gaskets, the pipe fittings, and the pipe itself. Small changes in length can be compensated by the elasticity of the material, for major expansions special compensating elements are necessary. For pipes operating at high pressure, bend elements are used. The most popular one is the U-bend (Figure 12.9), which can compensate the tensions by deformation. In application, one should not forget the high-point vent or the low-point drain to avoid accumulation of gases or, respectively, liquids. Another option is the bellow expansion joint (Figure 12.10). Between two flanges, a bellow pipe can equalize the pipe expansion. A guide tube inside can prevent the bellow from being polluted, however, in this case only axial expansions can be compensated.

12.3 Valves | 329

Figure 12.9: Expansion loop.

Figure 12.10: Bellow expansion joint [178]. © Hydrocarbon Processing.

12.3 Valves Valves are used in piping systems to control flowrates, pressure or temperature, to simply turn a flow off or on, or to separate two pieces of equipment [179]. Regarding their function, they can be divided into isolation valves and control valves. The difference is that isolation valves are actuated by an operator, and their states are “open” or “closed”, whereas control valves are operated automatically and it is decisive that a certain intermediate state between “open” and “closed” can be continually maintained. 12.3.1 Isolation valves Isolation valves must reliably isolate two sections of the pipe against each other, even after a long operation time [180]. Leakage to environment must be avoided due to fire danger or emission control. There are several kinds of valves which have their particular pros and cons. They are further explained in [179] and [180].

330 | 12 Piping and measurement 1.

Globe valves (Figure 12.11): Globe valves can be used for a precise flow control. They do not have a dead storage and close tightly. Their disadvantage is the high pressure drop across the valve, caused by two 90° turns inside the valve. 2. Ball valves (Figure 12.11): Ball valves can be fully opened with practically no additional pressure drop. They can handle solids and are appropriate for automation for use as control valve. The leakage is very low, and ball valves can be operated at high temperatures and pressures. The disadvantages are the remaining liquid holdup in the valve due to a large dead storage. Electrostatic problems might occur, so some precautions should be taken if flammable liquids are handled. 3. Gate valves (Figure 12.11): Gate valves are designed to be fully open or fully closed. In case they are fully open, they do not show an additional pressure drop. Lubricants are not necessary. Gate valves are tight and open and close slowly so that fluid hammering is avoided. Their disadvantage is that gate valves do not have a gradual valve characteristics. They are more or less either open or closed, they are not appropriate for use as control valve. In the partially open state, the valve can start vibrating, which leads to damage with time [179]. 4. Membrane valves (diaphragm valves, Figure 12.11): Membrane valves are completely tight; however, their pressure drop is considerable, and they are not appropriate for high temperatures and pressures or dirt. The mass flow control is not gradual. Due to its tightness, it is considered to be suitable for special cleanliness demands; so it is very popular in pharmaceutical applications.

Figure 12.11: Valve types. 1 = Globe valve, 2 = Ball valve, 3 = Gate valve, 4 = Membrane valve. © KSB Aktiengesellschaft.

5.

Plug valves (Figure 12.12): Plug valves have also a dead storage, and the additional pressure drop is also very low as well as the leakage. The disadvantages are the high turning moment for

12.3 Valves | 331

6.

operation and the possible contamination of the product with lubricant. They are usually not used for control purposes. Butterfly valves (Figure 12.12): Butterfly valves have a low pressure drop. They are tight, have no leakage to environment and no dead storage. They open gradually and are appropriate for use in control applications. Maintenance is easy. Excentric butterfly valves are even appropriate for high pressures and temperatures. The main disadvantage is that the disc and the shaft are in the flowpath of the fluid. Highly abrasive media will erode the disc, and it is difficult to clean the valve.

Figure 12.12: Valve types. 5 = Plug valve [181]. © Hydrocarbon Processing. 6 = Butterfly valve. © Heather Smith/Wikimedia Commons/CC BY-3.0. https:// creativecommons.org/licenses/by/ 3.0/deed.en.

7.

Check valves (Figure 12.13): Check valves ensure that flow can only take place in one direction. They prevent backflow from higher lines and vessels or from high-pressure regions to lowpressure regions. The construction must be carried out in a way that they have no flow resistance in one direction and complete block for the reverse one. There are different types [179]. In most guidelines, for safety purposes check valves are ignored to be on the safe side, even if it is obviously wrong.

Figure 12.13: Examples of check valves. © KSB Aktiengesellschaft.

In contrast to the normal function, these valve types can also be used as shut-off valves, where they are operated automatically by the DCS system to realize the action

332 | 12 Piping and measurement of an interlock. In this case, they shall have no additional pressure drop and close completely. To prevent manually operated valves from maloperation, they can be secured. One option is the car sealed valve, where a simple seal made of plastic must be broken on purpose before the valve can be actuated. The valve is then protected against accidental maloperation. A more rigorous measure is the locked valve, which is secured with a padlock or a chain and can only be actuated after it is unlocked with a key. The key can e. g. only be obtained after a signing procedure. The costs for normal isolation valves are usually not a decisive issue. However, for large pipe diameters it should be carefully checked whether they are really necessary, as an isolation valve in a 32󸀠󸀠 line can easily exceed the price of a medium-sized car.

12.3.2 Control valves Control valves are used to control quantities like flow, pressure, temperature or liquid level by fully or partially opening or closing in response to signals received from controller devices that compare a “setpoint” to a “process variable”. The opening or closing of control valves is usually done automatically by electrical, hydraulic or pneumatic actuators. In Figure 1.2, a standard arrangement around a control valve had been shown. The valve is actuated by an electric signal with instrument air. To save one stage in the nominal size, the line to be controlled is restricted upstream and expanded again downstream the valve. The control valve can be isolated by two gate valves in case maintenance is needed. For this purpose, there are also two drain valves on both sides of the control valve so that the line can be emptied completely. There is a bypass line with a ball valve around the control valve, so that during maintenance the flow can be controlled manually. The valve in Figure 1.2 has been defined to be failsafe closed (FC). This means that in case the instrument air or other necessary utilities fail the valve takes the closed position, in contrast to failsafe open (FO). It must be defined in advance during basic engineering which position is the safe one. The usual way for the characterization of valves for liquids is the KV -value, which is the amount of water in m3 /h that flows through the valve at a pressure drop of 1 bar. It can be written as KV ρ bar V̇ = 3 √ 3 m /h m /h g/cm3 Δp The KV -value for a fully opened valve is called KVs .

(12.50)

12.3 Valves | 333

Example
17 m³/h of liquid ethanol (ρ = 790 kg/m³) pass a valve with a pressure drop of Δp = 3 bar. Which KVs value is necessary?

Solution
The necessary KV value can be determined according to Equation (12.50): KV/(m³/h) = 17 ⋅ √(0.79/3) = 8.7. The KVs value should be 30 % larger, i. e. KVs = 8.7 m³/h ⋅ 1.3 = 11.3 m³/h.
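The same calculation as a small helper function (Equation (12.50) for liquids); the 30 % margin is the one used in the example above.

```python
import math

def kv_liquid(vdot_m3h, rho_kg_m3, dp_bar):
    """Required KV value for a liquid according to Eq. (12.50)."""
    return vdot_m3h * math.sqrt((rho_kg_m3 / 1000.0) / dp_bar)

kv = kv_liquid(17.0, 790.0, 3.0)
kvs = 1.3 * kv  # 30 % design margin as in the example
print(f"KV = {kv:.1f} m3/h, selected KVs = {kvs:.1f} m3/h")
```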

The KV-value can be transferred into a ζ-value according to the following procedure: the combination of Equations (12.18) and (12.50) gives

10⁵ Pa = ζ ⋅ ρ/2 ⋅ w² = ζ ⋅ 500 kg/m³ ⋅ KV²/A²

(12.51)

with A as the cross-flow area of the pipe. Solving for ζ yields ζ =

105 Pa A2 2 500 kg3 KV

(12.52)

m

or ζ = 1.6 ⋅ 10−3 (

4

K d ) ( 3V ) , mm m /h −2

(12.53)

referring to d as the pipe diameter. For gases, we distinguish between subcritical and supercritical flow (Chapter 14.2). The KV value is written as KV =

−1 V̇ N √ ρN T1 (p1 − p2 )p2 ( ) 514 m3 /h kg/m3 K bar2

(12.54)

for subcritical flow with (p1 − p2 ) < p1 /2 and KV =

−1 V̇ N p1 ρ T ( ) √ N 3 1 3 257 m /h bar kg/m K

for supercritical flow with (p1 − p2 ) > p1 /2. The indices denote for

(12.55)

334 | 12 Piping and measurement N standard state (p = 1.01325 bar, T = 273.15 K); 1 valve inlet; 2 valve outlet. The relevance of p1 /2 is discussed in Chapter 14.2.

12.4 Measurement devices To keep all the process quantities controlled, it is certainly obligatory to evaluate them by measurement with the appropriate accuracy. As a rule of thumb, 10–20 % of the investment costs of a plant are spent for measurement, control, and process automation. The implementation of the concepts is usually done by designated specialists, while some foundations of measurement should be considered by any process engineer. The most important process quantities to be measured are temperatures, pressures, pressure differences, flows, levels and concentrations. They are briefly discussed in the following paragraphs. – Temperature: One of the most important principles of measurement in chemical plants is that the signal has to be transformed into an electrical signal which can be sent to the process control system, where it can be visualized to the operators and possibly be used for control applications. For the temperature, the most important thermometers are resistance thermometers and thermocouples. Resistance thermometers use the temperature dependence of the electrical resistance. They are the most accurate devices and can be used in the temperature range −250–1000 °C. A wellknown thermometer is the Pt-100, measuring the resistance of a platinum wire, which is 100 Ω at 0 °C. Alternatively, thermocouples are used in an even wider temperature range of −200–2000 °C. The principle is that a voltage is built up between a soldering point of two wires of different materials and the free ends, if the soldering point is exposed to a different temperature. In the process, care must be taken that the thermometers are placed in a way that they take representative temperatures, it has to be avoided to place them in dead zones, where they are more or less isolated from the process. – Pressure: In contrast to the temperature, the pressure is uniform in a certain area unless there is a defined reason for change, e. g. hydrostatic effects or pressure drops due to friction losses. In most cases, the pressure is transformed into the elastic deformation of a spring. The movement of the spring is transformed in an electrical signal, often by means of the deformation of a metal membrane, which is turned into a signal by a piezo element. Using different manometers, the range from a few mbar up to more than 1000 bar can be covered.

12.4 Measurement devices | 335





It is often amazing how much confusion is caused when it has to be clearly indicated whether the absolute pressure or the gauge pressure, the difference between absolute pressure and ambient pressure, is meant. While people from plant operation stick to the gauge pressure, scientists and simulation people can hardly imagine that anything else than absolute pressure could be meant. The only way to overcome this is to clearly indicate it writing “g” for gauge (e. g. “barg”) and “a” for absolute (e. g. “bara”). The latter abbreviation is unknown to most people; at least it causes a further inquiry, and the possible misunderstanding is overcome. Pressure difference: The measurement of pressure differences is important to get information about hydrostatic pressures or pressure drops. It is measured in a similar way as the pressure itself, both pressures are connected to different sides of a spring. Pressure differences cannot be evaluated by measuring both absolute pressures and taking the difference, as the difference of large numbers can be considerably erroneous (Section 10.1). Flow: Today the dominating measurement principles for the flow are the Coriolis type and the Vortex type flow meters. The Coriolis flow meter is more expensive, but its accuracy is remarkable. It measures the mass flow with an uncertainty of approx. 0.2 %, covering the range from 60 g/h–120 t/h, at pressures up to 900 bar [182]. Figure 12.14 illustrates the principle, although it must be emphasized that a number of arrangements are possible. When vibrations are initiated to the tube bends, the two pipe branches stay in parallel in case there is no flow as on the left-hand side. When there is flow, the pipe branches are differently affected by the Coriolis force and distorted, as on the right-hand side of the figure. The extent of the distortion is strongly related to the mass flow through the device [182]. The amplitudes of the induced vibrations are too small to be seen (approximately 30 µm), but they can be detected tactually. The great advantage of Coriolis flow meters is that they measure the mass flow directly; they are independent from other properties of the flow and from inlet or profile effects of the flow. Apart from the price, the disadvantages are that Coriolis flow meter need a homogeneous flow and that fouling causes errors in the measurement. In contrast to the Coriolis flow meter, the vortex flow meter evaluates a volume flow, which can be converted into a mass flow by an additional temperature measurement and an appropriate density relationship. It counts the number of vortices formed after an obstacle in the flow path, using a piezoelectric crystal. The accuracy can be estimated to be 0.75 %. It needs a certain inlet zone. Also, it is not necessary to know other properties of the stream, such as viscosity. Vortex flow meters have a huge turndown (approx. 1 : 50 from the lowest to highest value) and can be used in a wide temperature range (approx. −200–400 °C). They are not ap-

336 | 12 Piping and measurement

Figure 12.14: Illustration of the Coriolis flow meter principle. © Cleontuni/Wikimedia Commons/CC BY-SA 2.5 https://creativecommons.org/licenses/by-sa/2.5/deed.en.



propriate for low flows, fouling media, and in the case that vibrations occur in the plant. A third type which is often used is the magnetic flow meter. The physical principle is that a magnetic field is applied to the metering tube. Charged particles like ions will be diverted perpendicular to the flow. This results in a potential difference proportional to the flow velocity. For application, the fluid must have a minimum electrical conductivity (> 0.5 µS/cm), and the tube must be electrically isolated. Magnetic flow meters have no movable parts and no additional pressure drop, and they are appropriate for aggressive and corrosive fluids. It is an application for liquids; solids or gas bubbles do not matter. The temperature is limited to 200 °C, and the minimum flow velocity is 0.5 m/s. There are many other types of flow meters, which are well and briefly described in [183]. Level: There are a lot of measurement principles for the liquid level. One must distinguish between a continuous level indicator and a level switch, which detects when the level reaches a certain value. The simplest and safest device is the inspection glass; however, the transformation into an electrical signal does of course not work. Level indicators can be based on the buoyancy principle. The more a displacing piston is dipped into a liquid, the more it is exposed to buoyancy forces which can be transformed into electrical signal by resistance strain gauges. The drawback is the mechanical equipment, which might be sensitive to dirt, and that the density of the liquid, which depends on temperature and composition, must be known. Air bubblers work in a similar way. Air is bubbled through a dipped pipe into the liquid. A pressure sensor measure the pressure necessary to overcome the hydrostatics. This method is not sensitive to dirt, but a disadvantage is that the liquid is contaminated with the gas, which might not be desirable in all cases. Also, the hydrostatic pressure can directly be measured and transferred into a liquid level, taking the liquid density into account. A number of other electrical signals can be used for liquid level detection or measurement, like electrical conductivity, capacity, radar sensors, microwave or in-

12.4 Measurement devices | 337

Figure 12.15: A bad (A) and a good (B) example for sampling.



frared sensors or radiometric signals. A useful option is the so-called liquiphant, which is in principle an oscillating tuning fork. If the liquid level reaches or drops below the liquiphant, its resonance frequency changes. This is detected and transformed into a signal for a high-level or low-level alarm. Analytical measurement: Analytical measurements are of course done with GC (gas chromatography), HPLC (high performance liquid chromatography), Karl-Fischer-titration (determination of the water content), and so on. All these methods are the tasks of designated specialists and should not be covered in one short paragraph. However, all analytical methods considerably depend on a good design of the sampling, where it has to be ensured that the sample is representative. Figure 12.15 shows an inappropriate and an appropriate example for taking a sample. In example A there is a parallel branch to the line. It can be initiated by opening the two valves. However, when the valves are closed again, it is not ensured that a representative sample is obtained. The question is how it is ensured that the previous content in the sample container (e. g. air, rests from previous sample) has really been replaced. There is no real motivation for the flow to pass the sample container; the short cut through the main line has probably a lower pressure drop than the way through the sample box. Furthermore, the velocity in the sample container is low due to the larger cross-flow area. In example B the sample container is connected to both the pressure and the suction side of the pump. Therefore, there is a considerable pressure difference across the sample container, resulting in a well-defined backflow to the suction side of the pump. The previous content of the sample container is rapidly replaced. The disadvantage is that it will go through the pump again, and part of it will again enter the sample container. However, with time the old content will be more and more diluted, and finally, the sample will be representative.

13 Utilities and waste streams

13.1 Steam and condensate

In process industry, steam is the most widely used heating agent. Most chemical sites provide a steam net where steam at several pressures is provided. The cost of steam is an important criterion for the choice of a site; however, as the costs of a process are usually determined by the costs of the raw materials, it is rarely decisive. Low-pressure steam is not necessarily cheaper than high-pressure steam; usually, the steam generators produce steam at very high pressure (e. g. 400 °C, 50 bar), which is then throttled down in a valve to the lower pressure levels (e. g. 210 °C, 17 bar as medium-pressure steam and 170 °C, 6 bar as low-pressure steam). For use, steam should not be too far superheated so that condensation can take place rapidly with an extraordinarily high heat transfer coefficient in the equipment. For desuperheating, condensate is often injected into the steam by means of specially designed nozzles.

Example
How much steam condensate (100 °C, 20 bar) must be added to reduce the superheating to 5 K if a stream of 1000 kg/h high-pressure steam (superheated at 400 °C, 50 bar) has been throttled down to a middle pressure level of 20 bar?

Solution
According to the steam table [184] or using a high-precision equation of state (e. g. [29]), we can set up the energy balance of this desuperheating process

ṁHP hHP + ṁCond hCond = (ṁHP + ṁCond) ⋅ h(Ts(20 bar) + 5 K, 20 bar)

(13.1)

with
hHP = 3196.7 J/g
hCond = 420.6 J/g
Ts(20 bar) = 212.4 °C
hfinal = h(212.4 °C + 5 K, 20 bar) = h(217.4 °C, 20 bar) = 2813.8 J/g
Solving Equation (13.1) for ṁCond gives

ṁCond = ṁHP ⋅ (hHP − hfinal)/(hfinal − hCond) = 1000 kg/h ⋅ (3196.7 − 2813.8)/(2813.8 − 420.6) = 160 kg/h

(13.2)
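The balance is quickly checked with a few lines of code, using the enthalpy values quoted in the solution:

```python
# Desuperheating balance, Eq. (13.1), solved for the condensate flow (Eq. (13.2))
h_hp = 3196.7     # J/g, HP steam throttled down to 20 bar
h_cond = 420.6    # J/g, condensate at 100 degC, 20 bar
h_final = 2813.8  # J/g, steam at Ts(20 bar) + 5 K

m_hp = 1000.0     # kg/h
m_cond = m_hp * (h_hp - h_final) / (h_final - h_cond)
print(f"Condensate to be injected: {m_cond:.0f} kg/h")  # about 160 kg/h
```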

One should be aware that the heat transfer of superheated steam does not take place in a two-step sequence consisting of cooling down the steam as a vapor to condensation temperature and subsequent condensation. For the heat transfer, this would

340 | 13 Utilities and waste streams be a disaster, as the heat transfer coefficient for cooling of a vapor would be poor in comparison with the steam condensation and may determine the size of the condenser. In fact, for moderate superheating the condensation remains essentially the same as for a saturated vapor, the only thing which changes is the larger heat to be transferred due to the superheating [185]. People who wear glasses intuitively know this, as during winter time the glasses grow damp immediately after entering a building; it is not necessary that the whole air in the building is cooled down to dew point temperature. For moderate superheating, condensation takes place immediately in a technical condenser as well. In [79], a criterion is set up to decide in which cases a superheating can be considered as moderate; however, it is again pointed out that it is definitely disadvantageous to regard a part of the condenser as a gas cooler for design. For each sort of steam, a so-called header is provided, carrying the whole steam from the tie-in point or from battery limits to the plant and branching off to the various consumers of the plant. The condensates are collected as well in a condensate header and usually pumped back as boiler feed water to the steam generator. In some cases, the steam is used as direct steam, meaning that it is introduced directly into the process, e. g. into the bottom vessel of a column where the bottom product is water anyway. In this case, no condensate can be returned, and often additional costs occur as the whole amount of steam might end up as waste water. When steam is directly used, it should also be considered that it contains small amounts of caustic substances, e. g. ammonia or amines. One should make sure that this has no detrimental influence on the process. The great advantage of using direct steam is that a reboiler can be omitted. The use of steam as heating agent has some remarkable advantages. In contrast to heating agents making use of sensible heat (e. g. hot oil), the temperature stays constant, as water condenses as a pure substance. It is not necessary to convey the heating agent to the consumer; it is delivered with a certain pressure which is higher than the pressure at condensation. The condensation is the conveying mechanism for the steam. The specific volume of steam is by far larger than the one of the condensate: At p = 2 bar, v󸀠󸀠 = 0.8857 m3 /kg compared to v󸀠 = 0.00106 m3 /kg, corresponding to a factor of 835. When steam condenses, the volume decreases drastically, and fresh steam can follow to maintain the pressure where the heat is obviously consumed. The heating agent is flowing to the area where it is required without any stimulation. The only thing which has to be provided are lines with a sufficient cross-flow area for conveying the steam without major pressure loss. Of course, it is important that there are no inerts in the steam. Moreover, even the appropriate amount of steam will flow to the heat exchange area. Figure 13.1 shows an arrangement with a heat exchanger where the steam flow is controlled.



Figure 13.1: Control of the steam flow as the heating agent.

The heat flux through the heat exchanger is given by Q̇ = kA(Tcond − Tproduct ) = ṁ steam Δhv

(13.3)

Following this simple equation, one should be aware that the k value is mainly determined by the product side, as the heat transfer coefficient on the steam side is very high and does not affect k very much. Also, within a certain range the enthalpy of vaporization as a physical property of the steam can be regarded as constant, and A as the heat transfer area of the heat exchanger does not change anyway. Therefore, the transferred heat flux is mainly determined by the condensation temperature Tcond of the steam, which has to be adjusted in an appropriate way for the control of the product temperature. As the steam is a pure substance, the condensation temperature is directly related to a pressure according to the vapor pressure line. The signal for the control valve varies the discharge opening until it causes a pressure drop which yields the desired condensation pressure of the steam. For a comfortable controlling, the pressure drop should not be too low; the rule of thumb is 10–20 % or 0.5–1 bar. This pressure drop should be taken into account when the heat exchanger is designed. The steam conditions given in the design basis indicate the state of the steam upstream the control valve, for the design of the heat exchanger, the state of the steam downstream the control valve, after adiabatic throttling, is relevant, including the resulting superheating. This steam state must fit to the desired state in the heat exchanger. Example In a thermosiphon reboiler (Chapter 4.5), the bottoms product is to evaporate at t = 100 °C. Lowpressure steam (LPS) at t = 160 °C, p = 5 bar will be used for heating. For the driving temperature difference, a value of 30 K is targeted. Calculate the steam state relevant for the heat exchanger design. Is enough pressure drop for the steam control available?

Solution The condensation temperature of the steam shall be tcond = 100 °C + 30 K = 130 °C. According to the steam table, the corresponding condensation pressure is p = 2.7 bar. Adiabatic throttling of the LPS to this pressure gives a steam state t = 151.8 °C, p = 2.7 bar. The pressure drop across the valve is


sufficiently high (Δp = 2.3 bar or 46 %). The superheating of 21.8 K is acceptable. Otherwise, steam saturation would have to be applied.
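If a property library such as CoolProp is available, the steam-side states of this example can be checked in a few lines; the script below is a sketch under that assumption.

```python
from CoolProp.CoolProp import PropsSI

t_cond = 100.0 + 30.0 + 273.15                        # required condensation temperature in K
p_cond = PropsSI("P", "T", t_cond, "Q", 0, "Water")   # corresponding saturation pressure

h_lps = PropsSI("H", "P", 5e5, "T", 160.0 + 273.15, "Water")  # LPS upstream the control valve
t_in = PropsSI("T", "P", p_cond, "H", h_lps, "Water")         # state after adiabatic throttling

print(f"Condensation pressure: {p_cond / 1e5:.2f} bar")                 # about 2.7 bar
print(f"Steam temperature after throttling: {t_in - 273.15:.1f} degC")  # about 152 degC
print(f"Remaining superheating: {t_in - t_cond:.1f} K")
```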

The symbol in the condensate outlet line in Figure 13.1 represents a so-called steam trap, a device which lets liquid pass and closes if vapor is about to leave the system without condensing. Thus, it is ensured that any steam entering the heat exchanger is condensed, as it cannot leave the system as vapor. There are several function principles [186]. The mechanical one is the simplest. A lever gauge rises if liquid comes and lowers if it is located in vapor (Figure 13.3). It is connected to a lever which opens and closes an opening, respectively. Steam traps are often supposed to be not appropriate for fouling and dirty media. An alternative scheme without using a steam trap is shown in Figure 13.2 [96]. An extra vessel with level control can take over the function; the pressure-equalizing line is necessary, otherwise, noncondensed steam might accumulate in the vessel. If the pressure of the condensate must be increased, a pump can be installed below the vessel.

Figure 13.2: Steam control without using a steam trap [96].

Figure 13.3: Sketch of a ball float steam trap with air cock for venting [187]. © 2016 Spirax Sarco Limited.



Figure 13.4: Control of the condensate flow.

The alternative to the control scheme in Figure 13.1 is the control of the condensate flow (Figure 13.4). The advantage is that the control valve can be smaller, as the condensate has by far a lower volume. The steam condenses at its delivery pressure. To reduce the heat flux, the control valve throttles the flow. The condensate accumulates in the heat exchanger and covers part of the heat exchange area. The heat transfer to the liquid is much lower than for a condensing vapor, and furthermore, the temperature of the condensate goes down when it is used as heating agent. Therefore, part of the heat exchange area is not used and the heat flux is reduced as requested. Increasing the heat flux is unsatisfactory. It is only possible if part of the heat exchanger is already flooded. This means that a control in both directions can only be performed if the heat exchanger is designed in a way that part of the tubes are flooded at normal operation, meaning that possible heat exchange area is wasted. Further disadvantages are [186]: – The control valve does not necessarily prevent uncondensed steam from passing the heat exchanger. Therefore, an additional steam trap is necessary, as shown in Figure 13.4. Otherwise steam could be lost, and the pressure in the condensate header might increase, making the condensate removal of other heat exchangers in the process more difficult. – The controllability of the process is worse than with the steam flow control. For example, if the heat flux should be reduced from full power to a very low value, the steam stops entering the heat exchanger when the whole apparatus has been flooded with liquid. For example, if the volume of the shellside is 1 m3 , approx. 1 t of steam is consumed after the control valve has been closed. With the steam inlet control (Figure 13.1), only the steam already being in the shellside condenses. At p = 2 bar, the density of the saturated vapor is ρ = 1.13 kg/m3 , giving an undesired 1 steam consumption of 1.13 kg, which is approx. 900 of the condensate flow control. This means that the condensate flow control is much slower. – At the phase boundary between steam and condensate increased corrosion might be observed. – For horizontal reboilers, thermal stress is an important issue. The upper and the lower tubes are exposed to different temperatures, as hot fresh steam enters the heat exchanger at the top and the condensate at the bottom might be significantly subcooled. For heat-integrated columns, the control of the condensate flow is the preferred option, as the pressure drop of the inlet valve reduces the driving temperature difference for the evaporation, which is the critical issue in heat integration.

It might happen that, due to the throttling in the valve and the pressure drop of the steam trap, the condensate outlet pressure of the consumer becomes too low to convey the condensate to the condensate header. To avoid an additional pump, a so-called condensate lifter can be used. Figure 13.5 shows a possible arrangement. The function is as follows: coming from the steam trap, the condensate is collected in a vessel with three nozzles, A, B, and C. Inside the vessel there is a float, which is mechanically connected with the three nozzles (Figure 13.6). With rising liquid level, the float is taken upward. The connections then close the condensate inlet B and open the condensate outlet C and the steam inlet A. Through A, the condensate in the vessel is pressurized by the steam and can therefore flow to the condensate header. The float sinks again, closing the nozzles A and C and opening nozzle B. The remaining steam in the condensate lifter can be vented. Condensate lifters work automatically and without maintenance. The steam as auxiliary energy is available anyway, and the additional steam consumption is only approx. 0.1–1 %, as the gaseous steam has to replace the liquid volume of the condensate and the volume ratio between steam and condensate is in this range, depending on the pressure.

Figure 13.5: Condensate lifter process.

Figure 13.6: Sectional view of a condensate lifter [188]. © 2016 Spirax Sarco Limited.



Although essentially only water, condensate is a valuable substance, as it has been purified so that no more salts or other substances are present. There is a certain value just from its energy content, as the sensible heat of liquid water is approx. 10 % of the heat transported as steam.¹ In European terms, this corresponds to 2–3 €/m³, without the additional costs for purification, conditioning, and waste water disposal. Therefore, it is collected from the particular consumers and recycled to the steam generator, where it is again preconditioned as boiler feed water. A condensate net has in most cases a number of consumers, often at large distances from each other. One must be aware that the condensate production is not constant in time, and each consumer delivers his own condensate outlet pressure. Thus, there are always fluctuations in the condensate header, and the pressure in this line should in general be low so that none of the consumers has difficulties getting rid of the condensate. The condensate line will in most cases end up in a vessel having a pressure slightly above the ambient one (e. g. p = 1.2–1.3 bar), and the condensate line is usually operated a bit higher (e. g. p = 1.5 bar). As the condensate is close to the saturation state when it enters the condensate line, vapor will be generated in the condensate line due to the expansion. From the mass fraction point of view, it is not too much; however, it has a considerable volume, as the following example shows:
¹ However, on a low temperature level. Calculated as (hL(100 °C, 1.1 bar) − hL(30 °C, 1.1 bar))/(hV(250 °C, 20 bar) − hL(30 °C, 1.1 bar)).

Example
A saturated low-pressure steam condensate at p1 = 6 bar is expanded into the condensate line to p2 = 1.5 bar. How much vapor is generated?

Solution
Calculating an adiabatic throttling, the vapor generation in the condensate line is 9.13 %. The densities of vapor and liquid in the saturation state are
ρ″ = 0.863 kg/m³,  ρ′ = 949.92 kg/m³

Therefore, per kg condensate one must expect a vapor volume of 0.106 m3 and a liquid volume of 0.001 m3 . This means that from the volume point of view only approx. 1 % of the condensate is liquid.
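The flash result can be reproduced with a simple enthalpy balance for adiabatic throttling (h remains constant). The saturation enthalpies below are rounded steam-table values inserted by hand for illustration; they are not part of the original example.

    # Adiabatic flash of saturated condensate from 6 bar to 1.5 bar.
    # Rounded steam-table values (assumed for illustration):
    h_liq_6bar = 670.5      # kJ/kg, saturated liquid enthalpy at 6 bar
    h_liq_15 = 467.1        # kJ/kg, saturated liquid enthalpy at 1.5 bar
    h_vap_15 = 2693.4       # kJ/kg, saturated vapor enthalpy at 1.5 bar
    rho_vap_15 = 0.863      # kg/m3, saturated vapor density at 1.5 bar
    rho_liq_15 = 949.92     # kg/m3, saturated liquid density at 1.5 bar

    # Adiabatic throttling: h = const, so the vapor fraction x follows from
    # h_liq_6bar = x * h_vap_15 + (1 - x) * h_liq_15
    x = (h_liq_6bar - h_liq_15) / (h_vap_15 - h_liq_15)

    V_vap = x / rho_vap_15          # m3 vapor per kg condensate
    V_liq = (1.0 - x) / rho_liq_15  # m3 liquid per kg condensate

    print(f"Vapor fraction: {x*100:.1f} %")                       # approx. 9.1 %
    print(f"Vapor volume:   {V_vap:.3f} m3/kg")                   # approx. 0.106
    print(f"Liquid volume:  {V_liq:.4f} m3/kg")                   # approx. 0.001
    print(f"Liquid volume fraction: {V_liq/(V_vap + V_liq)*100:.1f} %")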

This is a quite typical result. A condensate line is not a water line but a steam line with a certain amount of liquid. The condensate might remain liquid if some subcooling takes place, e. g. in a long condensate line during winter, but in general it should be designed as a two-phase line (Chapter 12.1.4).

The exact dimensioning of the condensate line is difficult, as all the different operation modes of the particular consumers and the significant influences of insulation, ambient temperature, and roughness of the inner surface of the line can hardly be determined [186]. The following items are recommended [186]:
– The line should be designed as short as possible and have a base slope of at least 1 %, ensuring that the line drains itself in a shutdown.
– For the pressure drop, Δp = 0.1 bar/100 m is recommended.
– The tie-in of the consumers should be done from the top.
– An injection for rapid mixing makes sense if the condensate temperatures differ significantly. Otherwise, water hammering might occur if flashed steam bubbles from hot condensate instantly condense in colder condensate. Their volume collapses immediately, and liquid water fills it with high velocities.
– It must be possible to drain each section of the condensate header completely. Like any long line, the condensate header will have low points where special care should be taken.
The steam generation of hot water when it is expanded plays the main role in so-called steam boiler explosions [249]. Steam generators contain a large amount of liquid boiler feed water at high temperature and high pressure. When a shell rupture happens, the pressure is suddenly reduced to atmospheric, and the water evaporates violently. The damages caused are often huge.

13.2 Heat transfer oil

At very high temperatures (> 250 °C) the use of steam becomes more and more difficult, as the condensation pressures and therefore the design pressures become large, making the heat exchanger expensive. In these cases, it makes sense to use a heat transfer oil at comparably low pressures. Heat transfer oils can withstand very high temperatures and have low vapor pressures. Also, they remain liquid at very low temperatures. One must be aware that sensible heat is used, meaning that the temperature of the heating agent changes and that much larger mass flows are necessary. A compilation of common heat transfer oils is given in [189]. Molten salts are used as heating agents as well.

13.3 Cooling media

Cooling is much more sophisticated than it seems. It requires the infrastructure of a site, and each cooling medium has its own restrictions. Sites are usually located at a natural water reservoir. Its temperature limits the lowest achievable temperature in the process, which can only be undercut with additional technical equipment.



Figure 13.7: Plate heat exchangers for sea water cooling.

If the site is located at the sea, a sea water cooling cycle is operated. However, sea water is one of the most aggressive media due to its salt content. Using it in the process is practically impossible, as valuable materials of construction like Hastelloy would have to be chosen. The only economic way to make use of sea water is the indirect way of operating a secondary cycle of demineralized water for the whole site. Figure 13.7 shows a number of huge plate heat exchangers, cooling the water cycle of a site operating several plants with sea water. Plate exchangers for sea water must be made of a material which can withstand the corrosion attacks of the salt. The incoming sea water must be carefully filtered, and the cycle must be treated with biocides regularly. Cooling water is usually taken from a river. Its supply temperature is usually between 28–32 °C, with a supply pressure of approx. 5–6 bara. Usually, its return temperature can be 10 K higher, i. e. approx. 40 °C. Before returning it to the environment, it is cooled down again with a cooling tower, as the oxygen content of the water becomes unacceptably low at temperatures beyond 26–28 °C, causing suffocation of the fish in the river. The use of normal cooling water has its pitfalls, as it contains water hardness components. When it is heated up, these components (CaSO4, CaCO3) tend to precipitate. This kind of fouling must be avoided. Therefore, cooling water can only be used up to a certain temperature level on the product side, usually 65–75 °C,

referring to the wall temperature on the cooling water side. Cooling water costs are approx. 0.05 €/m³. Considering the above-mentioned difference between supply and return temperature, a cooling water flow of approx. 86 m³/h represents a cooling power of 1 MW, as
86 m³/h ⋅ 1000 kg/m³ ⋅ 4200 J/(kg K) ⋅ 10 K = 1.003 MW
The distribution of cooling water from a header to various consumers often turns out to be a challenging task with unexpected outcomes. While one would expect the first outlet in the row to get the highest flow, it often happens that the very opposite is true, with the last outlet in the row as the preferred one [248]. This maldistribution can reduce the system performance. A modified design, e. g. with a tapered header line where the velocities towards the end of the header are increased by lowering the pipe diameter, can equalize the pressure profile and create a uniform distribution. However, this is costly, and often the consumption of the particular outlets varies. Restricting orifices in the consumer lines are an alternative. If cooling water cannot be used due to a temperature level on the product side higher than mentioned above, there are mainly two options. First, one can install a secondary water cycle (“jacket water”) operated with demineralized water as cooling agent. This cycle itself is then cooled by original cooling water at moderate temperatures. A jacket water cycle means additional investment costs, as one huge heat exchanger for cooling the jacket water is necessary. This heat exchanger needs a certain temperature difference as driving force; thus, the supply temperature of the jacket water is about 5 K higher than the cooling water supply temperature. The second option is the use of air coolers, which usually cause a larger investment due to the bad heat transfer on the service side (Chapter 4.8). With cooling water, temperatures down to approximately 35 °C can be achieved on the process side. To realize lower temperatures in a process, refrigerators must be used. Refrigerated water can be supplied with chilled water units at temperatures down to 2 °C. Below this temperature, ice formation must be considered. In most cases, brine is used, often a mixture of propylene glycol/water or ethylene glycol/water, which can be used below 0 °C as well. A compilation of heat transfer fluids can be found in [190].
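The 86 m³/h per MW rule can be generalized to a quick estimate of the required cooling water flow for a given duty by inverting the energy balance Q = ṁ ⋅ cp ⋅ ΔT. The sketch below uses the rounded property values from above; the duties are assumed example values.

    # Quick estimate of the cooling water demand for a given heat duty.
    def cooling_water_flow(duty_MW, dT=10.0, rho=1000.0, cp=4200.0):
        """Return the cooling water flow in m3/h for a duty in MW.

        dT  : temperature rise of the cooling water in K
        rho : water density in kg/m3
        cp  : specific heat capacity in J/(kg K)
        """
        duty_W = duty_MW * 1.0e6
        m_dot = duty_W / (cp * dT)        # kg/s
        return m_dot / rho * 3600.0       # m3/h

    print(f"{cooling_water_flow(1.0):.0f} m3/h per MW")    # approx. 86 m3/h
    print(f"{cooling_water_flow(5.0):.0f} m3/h for 5 MW")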

13.4 Exhaust air treatment

Exhaust air is defined as the sum of gases, vapors, smokes, dusts, soots, and aerosols released to atmosphere that have an impact on the composition of natural air [191]. The definition might be a bit weak; it means that the release of components like carbon dioxide or hydrogen, which are not regarded as air pollutants, does not need to be restricted.²


Figure 13.8: Ways for the production of exhaust air in batch processes.

Nevertheless, some of the components occurring in air are considered to be pollutants, as is the case e. g. for methane or methyl chloride, which are natural air components due to the metabolism of animals or plants. Many of the most interesting fine chemicals, specialty chemicals, and pharmaceuticals are produced batchwise. This implies that it has to be carefully evaluated how frequently the particular streams occur and what their compositions and amounts are. In batch processes, exhaust air can be produced in many ways (Figure 13.8). The most common one is the charging of vessels. If a liquid is stored in a vessel, there will be a saturated gas phase above the liquid, filling up the rest of the vessel volume. When additional liquid is filled into the vessel, an equivalent vapor volume is displaced upward, usually directly into the exhaust air line. A similar mechanism is the so-called vessel breathing. If a vessel has an open connection to the environment, air will be sucked in when the content of the vessel cools down and contracts, for instance at night. When the vessel is heated up again, its content expands, and the air, now saturated with the vapor being in equilibrium with the liquid in the vessel, is displaced into the exhaust air line. Other well-known mechanisms are the depressurization of vessels, the flushing of vessels in order to dilute and remove gaseous substances from a vessel, or simply the removal of gaseous by-products out of a reactor by pressure relief. Exhaust air has to obey certain restrictions depending on the country where the plant is located. For example, in Germany, the TA Luft [192] is the decisive regulation. Table 13.1 gives an overview of the particular limitations. The pollutants are assigned to certain classes according to their hazardous potential. Each class has a bagatelle limit, i. e. any industrial unit has the right to release this amount without being prosecuted. Beyond this value, a limiting concentration has to be complied with. Carcinogenic substances are treated in an extraordinarily strict way. In some cases, it is defined how the particular pollutants are counted, for example just the carbon content for simple organic substances or the typical combustion products for chlorine and bromine compounds (HCl and HBr).
² Carbon dioxide is a special issue. The vast amounts of CO2 emissions from the combustion of coal and natural gas are regarded to be responsible for the continuous rise of the CO2 concentration in the air, currently considered to be resulting in a future climate change. CO2 emissions coming directly from chemical processes occur in relatively small amounts.

Table 13.1: Limiting concentrations and bagatelle limits according to TA Luft [192].

|                         | Lim. conc. (mg/Nm³) | Bagatelle lim. (g/h) | Counted as | Examples                      |
| Organic substances      | 50                  | 500                  | C          | methanol, ethyl acetate       |
| Org. subst., cl. I      | 20                  | 100                  |            | acetaldehyde, vinyl acetate   |
| Org. subst., cl. II     | 100                 | 500                  |            | acetic acid, nitromethane     |
| Carcinogenic substances |                     |                      |            |                               |
| Carc. subst., cl. I     | 0.05                | 0.15                 |            | As, Cd                        |
| Carc. subst., cl. II    | 0.5                 | 1.5                  |            | acrylonitrile, ethylene oxide |
| Carc. subst., cl. III   | 1                   | 2.5                  |            | benzene, vinyl chloride       |
| Inorganic substances    |                     |                      |            |                               |
| Ammonia                 | 30                  | 150                  |            |                               |
| Hydrogen cyanide        | 3                   | 15                   |            |                               |
| Chlorine compounds      | 30                  | 150                  | HCl        |                               |
| Nitrogen oxides         | 350                 | 1800                 | NO2        |                               |
| Sulfur oxides           | 350                 | 1800                 | SO2        |                               |
| Bromine compounds       | 3                   | 15                   | HBr        |                               |

The limiting values are generally based on the state of the art in removing the particular substances from gas streams. They can be regarded as very strict; especially the latest amendment from December 2001 lowered many limiting values by a factor of 2 or more. There are several options for exhaust air treatment. They can be categorized in two ways. Combustion processes and biological treatments destroy the pollutants, whereas condensation, adsorption, absorption, or membrane processes separate them from the air, and if they are valuable and their purity is sufficient, they can be recovered. Also, the load of pollutants and the amount of exhaust air are decisive for the choice of the process. Condensation can only be used for comparably small exhaust air streams. As will be shown, only cryo-condensation can normally fulfill the TA Luft, where the commercial units have a defined size. The load is not important. Membrane and absorption processes can also handle only comparably small amounts of exhaust air due to the size of the common commercial units. For absorption, high loads are advantageous to make it worthwhile to recover the pollutants. For membrane processes, complete separations can hardly be achieved; therefore, they only make sense for low concentrations. Combustion is an effective but expensive exhaust air treatment. Therefore, a high exhaust air stream is required to make it worthwhile. Thermal combustion works for all concentrations, but it is especially effective for high pollutant contents to save fuel for achieving the high temperatures (850–1200 °C). For low pollutant concentrations, catalytic combustion can be used, as only approx. 400 °C are necessary. For higher concentrations (> 10 g/Nm³), the catalyst might be damaged. Adsorption processes can handle large exhaust air streams, but for high concentrations it is difficult to remove the heat of adsorption out of the bed.


Biological treatment is only possible for substances that can easily be dissolved in water. It is the best process for large exhaust air streams with low pollutant concentration, but it needs careful maintenance. Another important issue is the predictability of the processes. For combustion, absorption, and condensation processes, a mass balance can in principle be predicted without experiments. Adsorption and membrane processes usually need experiments even for the design, unless references are available. The lifetime of membranes for a new task can hardly be estimated, as often many different components are involved and come into question for spoiling the membrane. This causes great uncertainties for the final investment costs. Therefore, in most cases membranes are unlikely to be the best choice. Biological exhaust air treatment processes cannot be predicted at all. Long-term trials must be carried out with changing operation conditions to get a feeling for their performance. In fact, most exhaust air problems require an immediate solution where someone can give a warranty for the performance. Therefore, it is often not appropriate to suggest adsorption or biological treatment, even if the process itself might be interesting.

13.4.1 Condensation

If three or more people are sitting together in a meeting to discuss a condensation project, at least one of them will have the glorious idea that it is sufficient to cool down to the lowest normal boiling point. (Engineering wisdom, possibly a law of nature)

Condensation looks very simple but is not, as we have no feeling for even basic thermodynamics. Thus, as cited above, many people think that a component completely condenses when the temperature falls below its normal boiling point. That would make things pretty easy: for methanol, 65 °C would be a sufficient cooling temperature, for benzene 80 °C, for pentane 36 °C, and so on. Simple cooling water with t = 30 °C would be an appropriate cooling agent. In fact, things are much more difficult, as the following example shows.

Example
What is the minimum condensation temperature to guarantee that the methanol concentration is in line with TA Luft (50 mg/Nm³ C; see Table 13.1)?

Solution
The carbon fraction of methanol is approximately 12/32 = 0.375; therefore, the concentration limit is 50 mg/Nm³ / 0.375 = 133 mg/Nm³. From the ideal gas law, we get the corresponding partial pressure:
pMeOH = mRT/(MV) = (133 ⋅ 10⁻⁶ kg ⋅ 8.31447 J/(mol K) ⋅ 273.15 K) / (32 ⋅ 10⁻³ kg/mol ⋅ 1 m³) = 9.4 Pa    (13.4)

This partial pressure corresponds to a saturation temperature of t = −70 °C! Similar results are obtained for other substances (toluene: −75 °C, n-hexane: −96 °C, ammonia: −145 °C, vinyl chloride: −154 °C, n-pentane: −116 °C). Even for the removal of a heavy boiling substance like n-dodecane a condensation temperature of −13 °C is required.³
³ The extrapolation of the vapor pressure curves to these low temperatures is not very reliable. The values must be considered as estimations.
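This kind of estimate can be sketched in a few lines: the TA Luft limit is converted to a partial pressure with the ideal gas law, and the corresponding saturation temperature is obtained by inverting an Antoine-type vapor pressure equation. The Antoine constants for methanol used below are approximate literature values inserted as an assumption; as noted above, the extrapolation to such low temperatures is rough, so the result only indicates the order of magnitude.

    from math import log10

    # TA Luft limit for methanol, counted as carbon (see Table 13.1)
    limit_C = 50e-6            # kg C per Nm3
    M_MeOH = 32.042e-3         # kg/mol, methanol
    M_C = 12.011e-3            # kg/mol, carbon atom
    R = 8.31447                # J/(mol K)
    T_norm = 273.15            # K, normal conditions (Nm3 basis)

    # Allowed methanol concentration and the corresponding partial pressure
    limit_MeOH = limit_C * M_MeOH / M_C          # kg methanol per Nm3
    p_MeOH = limit_MeOH * R * T_norm / M_MeOH    # Pa, ideal gas law

    # Antoine equation log10(p/bar) = A - B/(T/K + C); the constants for
    # methanol are approximate literature values (assumption), strictly
    # valid only well above the temperature found here.
    A, B, C = 5.20409, 1581.341, -33.50
    T_sat = B / (A - log10(p_MeOH / 1e5)) - C    # K, inverted Antoine equation

    print(f"Allowed partial pressure: {p_MeOH:.1f} Pa")          # approx. 9.4 Pa
    print(f"Required condensation temperature: {T_sat - 273.15:.0f} degC")

With these constants the result is approx. −68 °C, consistent with the −70 °C quoted above within the extrapolation uncertainty.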

Therefore, in most cases TA Luft can only be met by cryo-condensation using a cooling agent like liquid nitrogen [193]. At ambient pressure, liquid nitrogen has a temperature of t = −196 °C, where almost all substances have a vapor pressure far below the limit of TA Luft. On the other hand, most of the substances are solid at that temperature (water!), which makes their handling complicated. As water is hardly avoidable, it is important to remove it before the stream enters the cryo-unit, e. g. by an adsorption bed. Often, two identical plants for cryo-condensation are operated alternately; while one is used for air cleaning, the other one is being defrosted. Substances like cyclohexane form a kind of snow on the cooling coils. After a short time, the cooling coils are practically insulated, and the heat exchange breaks down. The most complicated problem in cryo-condensation is the formation of aerosols. The mechanism of the formation is as follows. If the temperature difference between cooling agent and bulk fluid is too large, a temperature profile perpendicular to the flow direction will develop where the temperature falls below the condensation temperature far away from the wall in the gaseous phase. Spontaneous condensation takes place, and very small droplets of about 1 µm in size are formed. These droplets have a rate of descent (Chapter 9) that is too small for them to be separated within the equipment. On the other hand, they are too large to take part in molecular diffusion towards the cooling area. Finally, these droplets can pass the apparatus, although they actually have been condensed. Behind the condenser, they will be evaporated again and can be detected as pollutants. Therefore, care must be taken that an appropriate temperature of the cooling agent is used and controlled. In practical applications, the liquid nitrogen is never used directly as cooling agent. Instead, it is just taken to control the temperature of a secondary cooling cycle, operating with gaseous nitrogen. Figure 13.9 shows a typical liquid nitrogen condenser plant with a cooling cycle (Cryosolv process). The liquid nitrogen is evaporated in the cycle gas cooler, where the cycle gas is cooled down to an appropriate temperature. The pollutants are liquefied on the cooling coil of the cryo-condenser. The condensate can be collected at the bottom of the condenser. If it is a pure substance, it can be used again. The purified gas can be led into the environment.


Figure 13.9: Principle scheme of a cryo-condensation unit [193]. © Messer Group.

For a better utilization of the liquid nitrogen, the cooling of the cycle gas is supported by the cold purified exhaust gas and the evaporated nitrogen in the recuperator. The condensate could be used as well, but normally the amounts are too small. Cryo-condensers can be purchased as units. Normally, they are designed for 700–1000 Nm³/h, which can be considered as relatively small exhaust air streams. Investment costs for cryo-condensers are low in comparison with other exhaust air treatment processes. The tank can even be rented. A strong point is the further application of the gaseous nitrogen. In principle, it can be released to the environment as well, but experience shows that the process can only be operated economically if the gaseous nitrogen can be applied otherwise, for instance for inertization in the plant itself. In these cases, cryo-condensation is the favorite way for exhaust air treatment. The operation costs are then comparably low, as the plant manager has to purchase the nitrogen anyway. The Cryosolv process has been further improved during recent years; currently, the DuoCondex process [194] is considered to be the most effective one. To summarize the aspects of cryo-condensation, the following statements can be given:
– Cryo-condensation has low investment costs due to standard units but high operation costs.
– To keep operation costs low, the evaporated nitrogen must be used elsewhere in the plant to compensate for part of the operation costs.
– Only liquid nitrogen and electric current are needed as utilities. The tank necessary for the storage of the liquid nitrogen can be rented.


– The predictability is restricted due to the possibility of aerosol formation and the performance of the vapor pressure line at extrapolation to low temperatures.
– The cryo-condensation systems in general react very slowly to changes in load. Therefore, the exhaust air stream should be as steady as possible.
– Only relatively small exhaust air streams can be treated.
– Due to solid or snow formation, twin plants are usually necessary for operation and defrosting.

13.4.2 Combustion

If a combustion process is used for exhaust air purification, the pollutants are destroyed by chemical reaction. Combustion is simple if the pollutants only consist of C, H, and O. In these cases, only carbon dioxide and water are formed as combustion products, e. g.
C2H5OH + 3 O2 → 2 CO2 + 3 H2O
Both can simply be released to the environment, as they are natural air components. More difficulties come up if other elements occur, as new pollutants can be formed that must be removed. Chlorine is one of the most widely used elements in the chemical industry. In a combustion process, it will be completely transformed to HCl, e. g.
CH2Cl2 + O2 → CO2 + 2 HCl
This clear statement might be a bit surprising, as one could easily think of water formation, for example according to the Deacon reaction:
2 HCl + 0.5 O2 ⇌ H2O + Cl2
The Deacon reaction is an equilibrium reaction. At the high temperatures occurring in the combustion chamber (≥ 900 °C), the equilibrium is more or less completely on the HCl side. When the flue gas is cooled down in the steam generator (see below) and in the environment, the equilibrium shifts to the Cl2 side, but the reaction kinetics are relatively slow so that only small amounts of Cl2 are formed. Therefore, a rapid cooling procedure must be performed. Details can be found in [11], [191] and [195]. HCl can be removed from the flue gas with a scrubber, usually using water or sodium hydroxide solution as absorptive agents. If significant amounts of chlorine are formed, water is no longer appropriate for scrubbing. Caustic soda can transform the chlorine to hypochlorite according to
Cl2 + 2 OH⁻ ⇌ OCl⁻ + Cl⁻ + H2O


The hypochlorite anion can be transformed to chloride in the presence of the bisulfite ion:
OH⁻ + OCl⁻ + HSO3⁻ ⇌ H2O + Cl⁻ + SO4²⁻
The other halogen elements (F, Br, and I) behave analogously. Sulfur will be transformed into sulfur dioxide SO2. SO2 can hardly be absorbed with pure water, but a caustic soda solution is quite efficient. For large sulfur loads in flue gases, other processes like adsorption on activated carbon or reaction with calcium hydroxide to gypsum are known and well established [196, 197]. Chemically bonded nitrogen (ammonia, amines, etc.) will be transformed to nitrogen oxides to an extent that can hardly be determined by theoretical predictions [198]. To be conservative, it is often assumed that chemically bonded nitrogen is converted to NO completely (fuel NO), e. g. according to
2 C2H5NH2 + 8.5 O2 ⇌ 7 H2O + 4 CO2 + 2 NO
Fuel NO is formed already at temperatures as low as 800 °C. NO can also be formed by reactions of elementary nitrogen and oxygen from the air (thermal NO), especially at high temperatures. The amount of thermal NO increases dramatically with temperature; for example, the equilibrium concentration of NO in air is 35 ppm at 1000 K and 1300 ppm at 1500 K [198]. When the flue gas is cooled down, NO forms an equilibrium with NO2 and other nitrogen oxides. At ambient temperatures, NO2 is the dominating component. Therefore, all the nitrogen oxides are counted as NO2 in mass balances and referred to as NOx. For NOx removal, so-called DeNOx processes have to be integrated. They operate with the reaction
4 NO + 4 NH3 + O2 ⇌ 6 H2O + 4 N2
The “denoxation” can be performed at high temperatures (approximately 900 °C) without a catalyst (SNCR process, selective noncatalytic reduction) or at low temperatures (approximately 300 °C), using metal oxides like V2O5 as catalysts (SCR process, selective catalytic reduction). Most plants operate with ammonia as reductive agent. As an alternative, urea can be used and decomposed to ammonia according to
NH2–CO–NH2 + H2O ⇌ 2 NH3 + CO2
Care must be taken to ensure that there is no excess ammonia in order to avoid that the NOx problem is just replaced by an ammonia problem. N2O (laughing gas, well-known as an anesthetic) is also a very critical component that occurs as a by-product in some syntheses. It is not listed in TA Luft, but it is regarded as one of the most critical greenhouse gases. Therefore, authorities usually do not accept a defined N2O emission. Nowadays, N2O can be decomposed to nitrogen and oxygen at comparably low temperatures (approx. 425–600 °C) with appropriate catalysts [199, 200].

Chemically bonded Si is converted into solid SiO2, which is simple quartz or sand. From the environmental point of view, this is not critical at all, but the small particles that are formed plug and erode the combustion unit. Therefore, special constructions and operation conditions have to be chosen when Si occurs. The thermodynamic calculation of combustion reactions is pretty easy. The necessary combustion temperature is determined by the degradation temperature of the pollutants. Usually, for a number of pollutants the manufacturers of combustion units define the necessary residence time in the combustion chamber and the corresponding temperature for which they give a warranty to keep the TA Luft. This temperature is usually not achieved by the combustion of the pollutants themselves. Instead, natural gas must be injected and burnt as well. The combustion temperature can be calculated with an adiabatic energy balance, using the specific heat capacities and the enthalpies of formation of the participating substances. The necessary amount of natural gas is evaluated in an iteration procedure, checking whether the temperature obtained equals the required one. The natural gas itself is often characterized by its heating value; however, for process simulation it is advantageous to describe it by its composition. In general, we can distinguish between thermal and catalytic combustion. Thermal combustion takes place at temperatures from 850–1200 °C. It is appropriate for high pollutant loads. The high temperatures make sure that complex molecules degrade and the simple combustion products can be formed. For cost reasons, the heat of the flue gas has to be used, e. g. by steam generation. Figure 13.10 shows a typical thermal combustion unit. The supplemental fuel and air are mixed and fired in the main burner. The combustion chamber is refractory-lined, and the exhaust air is introduced into the combustion chamber in the flame zone. Supplemental combustion air can also be added, if necessary. The dimensioning of such a combustion unit can be done according to the residence time of the exhaust air stream. Depending on the kind of pollutants, the residence time should be between τ = 0.6–2 s. It has to be pointed out that the physical volume flow at combustion conditions is decisive, not the standardized volume flow in Nm³.
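The adiabatic energy balance mentioned above can be sketched in strongly simplified form: the flue gas is treated with a constant average heat capacity, rounded heating values are used, and the combustion air for the supplemental fuel is neglected. All numbers below are assumptions for illustration only; a rigorous calculation would work with enthalpies of formation and temperature-dependent heat capacities and iterate.

    # Strongly simplified adiabatic estimate of the natural gas demand needed
    # to bring an exhaust air stream to the required combustion temperature.
    # Combustion air for the fuel is neglected; all values are assumptions.

    m_air = 2.0            # kg/s exhaust air (assumed)
    H_pollutant = 50.0e3   # J/s heat release from the pollutants (assumed, low load)
    T_in = 25.0            # degC exhaust air inlet temperature
    T_target = 900.0       # degC required combustion temperature
    cp_flue = 1200.0       # J/(kg K), average flue gas heat capacity (assumed)
    LHV_gas = 50.0e6       # J/kg, lower heating value of natural gas (rounded)

    # Energy balance: (m_air + m_gas) * cp_flue * (T_target - T_in)
    #                 = m_gas * LHV_gas + H_pollutant
    dT = T_target - T_in
    m_gas = (m_air * cp_flue * dT - H_pollutant) / (LHV_gas - cp_flue * dT)

    print(f"Natural gas demand: approx. {m_gas*3600:.0f} kg/h")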

Figure 13.10: Typical incinerator scheme [201].


The pollutant concentration is not relevant for the dimensions of the unit, as for the combustion itself the pollutants are supplemented by natural gas to keep the necessary combustion temperature anyway. It is also important to note that combustion units have a limited capacity range. The ratio between the lower and upper bound of the capacity is approx. 1 : 5. At the lower bound, the danger is that there is not enough turbulence to get an adequate mixing of the pollutants with air. At the upper bound, the residence time in the combustion chamber might not be long enough. The scheme for catalytic combustion is shown in Figure 13.11. The polluted air enters a heat exchanger, where it is preheated by the hot flue gas stream. The gases then enter the catalyst bed. A noble metal catalyst is used to promote the desired oxidation reactions at relatively low temperatures (250–400 °C) and at faster conversion rates. Therefore, smaller units can often be specified, and less costly construction materials can be used. The catalyst bed can be designed in the form of structured or random packing, made of ceramic. Its volume is determined by the required destruction efficiency of the particular pollutants, the flowrate, and the properties of the vapor stream. As a rule of thumb, 5000–20 000 Nm³/h per m³ catalyst bed can be processed. Catalytic combustion makes sense for low pollutant loads (< 10 g/Nm³ or 25 % LEL (lower explosion limit, Chapter 14.3) [201]), as in these cases the heating value of the pollutants is low and the consumption of natural gas to obtain the temperatures required for thermal combustion would therefore be high. The oxygen concentration in the waste gas should be < 2 mol % [201]. The advantage of catalytic combustion is that smaller equipment and less costly materials can be used due to the lower temperatures. However, catalysts are in general sensitive, and the kind and amount of pollutants should be clearly defined when a catalytic combustion is chosen. Phosphorus, heavy metals, and silicon are catalyst poisons, and occasional high pollutant loads lead to high temperatures and, subsequently, deactivation. Another problem, called “the classical design mistake”, can come up when ammonia and chlorinated compounds are concurrently present in the exhaust air stream. In the combustion unit, ammonium chloride will be formed, consisting of small particles that block the catalyst after a remarkably short time.
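For a first guess of the catalyst bed size, the rule of thumb quoted above (5000–20 000 Nm³/h per m³ of bed) can be applied directly; the flow rate in the sketch is only an assumed example value.

    # First guess of a catalytic combustion bed volume from the rule of thumb
    # of 5000-20000 Nm3/h exhaust air per m3 of catalyst bed.
    V_flow = 30000.0   # Nm3/h exhaust air (assumed example value)

    for space_velocity in (5000.0, 20000.0):   # Nm3/h per m3 bed
        V_bed = V_flow / space_velocity
        print(f"Space velocity {space_velocity:>7.0f} 1/h -> bed volume {V_bed:.1f} m3")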

Figure 13.11: Catalytic combustion scheme [201].


Figure 13.12: Typical RTO unit [201].

Regenerative thermal oxidizers (RTO) are a third kind of combustion unit (Figure 13.12), appropriate for very diluted exhaust air streams. In a typical RTO unit, there are three ceramic beds for heat recovery. The contaminated gas enters one of the beds (for simplicity: bed 1) and is effectively preheated by passing the hot ceramic bed, so that the burner itself only needs to cover the last part of the preheating of the exhaust air. After having been incinerated, the clean exhaust gas stream exits the combustion chamber through bed 3. Its sensible heat is transferred to the bed, where it can be used in the next cycle. Part of the clean gas is led to bed 2, which was the preheating bed during the last cycle, to purge it; in this way, the clean gas is contaminated again and therefore led back to bed 1. At the completion of each cycle, the task of the beds is changed by switching the valves at the inlet and outlet lines. Cooling below the dew point in the heat recovery section should be avoided because of corrosion. The control system which switches between the beds is comparably complex. The advantages of combustion processes are the low operation costs due to steam production (see below). No trials are necessary to predict the outlet streams of the combustion. The main disadvantages are the high investment costs and the complex safety concept that is necessary, as the flame must be regarded as a permanent ignition source. The availability is high for thermal combustion. For catalytic combustion, it is limited by the lifetime of the catalyst. After passing a typical combustion unit (Figure 13.13), there are a few other necessary steps before the exhaust air is released into the environment. After “denoxation” (see above), the heat of the stream has to be used for economic reasons. In the steam generator, the stream is cooled down to approx. 270 °C. In many cases, the benefit of steam production can fully compensate for the expenses for the natural gas.


Figure 13.13: Thermal combustion scheme.

Behind the steam generator, the flue gas is still too hot for the scrubber. It is first cooled down by direct injection of water in a so-called quench. As part of the water evaporates, the temperature can be lowered to approx. 70 °C. In the scrubber, acid gases like HCl, HBr, or SO2 are finally chemically absorbed, usually with caustic soda solution because of the chlorine or bromine formation according to the Deacon reaction.

13.4.3 Absorption

Absorption is another exhaust air purification process where the performance and the design can in most cases be evaluated without experiments. Possible absorptive agents can be found theoretically as well. The demands on a suitable absorbent are a high capacity, low volatility, viscosity, corrosivity, and toxicity, high thermal and chemical stability, and a high flash point. The recycling of valuable solvents from the exhaust air is possible, especially for exhaust air streams that contain only one component as a pollutant. In these cases, the relatively high operation costs can be significantly reduced. Nevertheless, there are not many cases known. Absorption is also a promising alternative if the loaded absorbent can be sold. Aqueous ammonia solution is one of the few examples. If water is used as an absorbent, it might happen that it is useful to give the loaded water directly to the biological waste water treatment.

However, water is usually a bad solvent if organic substances have to be absorbed. Often, organic solvents are taken, for example glycol ethers for chlorinated compounds [202]. Other options for the removal of organic pollutants are heavy alkanes or even biodiesel (fatty acid methyl esters) [203], which can be used as fuel afterwards. One should avoid stupid combinations like taking water as an absorbent for toluene (“authority scrubber”). Otherwise, a desorption step to remove the load from the absorptive agent cannot be avoided. Figure 13.14 shows a typical absorption/desorption unit, where the absorbent is cooled down for the absorption step and heated up for the desorption step. The desorbed gas can usually be condensed and given to a liquid waste incineration, which should be available at any chemical site. The desorber column can also be designed as a conventional distillation with condenser and reflux if the losses of the absorbent are too high due to its volatility. It is worth mentioning that at least the absorption column should be simulated with a rate-based calculation (Chapter 5), as in most cases the mass transfer resistance in the vapor phase is decisive for the final design of the column. As absorption equipment, in addition to packed and tray columns, also spray towers, bubble columns, venturi scrubbers, and many other types come into consideration.

Figure 13.14: Scheme of an absorption/desorption unit.

Absorption has severe disadvantages for highly volatile components and when hydrophobic and hydrophilic substances must be absorbed simultaneously, which is often hardly possible with one absorbent. The investment costs can be considerable if high-quality construction materials must be used to avoid corrosion. On the other hand, absorption is not sensitive to unsteady operating conditions like exhaust air flow, load, or concentrations. The capacity can be adjusted, and the pressure drop across an absorption column is relatively small so that the blower for the exhaust air has a low energy consumption.

13.4.4 Biological exhaust air treatment

Biological processes for exhaust air treatment have become more and more important. They are appropriate for huge exhaust air streams with low pollutant concentration. The pollutants must be soluble in water and biodegradable. The exhaust air should have a temperature in the range 5–60 °C and must not contain toxic substances. If these requirements are fulfilled, biological processes are in general the best processes due to their low investment and operation costs. However, their effectiveness cannot be predicted. If a vendor has no references for a defined exhaust air problem, long-term experiments, usually over several months, are necessary to prove that the targets can be met. Biological degradation is performed by microorganisms like bacteria or fungi [196]. All of these microorganisms are surrounded by a water film which they need for their metabolism. Therefore, it is necessary for the pollutants to be soluble in water so that they can get in contact with the microorganisms. Furthermore, nutrients and trace elements (nitrogen, potassium, phosphorus) must be provided for the microorganisms. The degradation itself yields carbon dioxide and water as products. Other elements like chlorine, nitrogen, or sulfur will be transformed into inorganic compounds (HCl, H2SO4, nitrates) which may accumulate in the water, where they have a detrimental effect. Increasing temperatures accelerate the degradation process but decrease the solubility of the gases in water. Thus, an optimum temperature has to be evaluated experimentally. The mass transfer from the vapor phase into the liquid and finally to the microorganisms is decisive for the effectiveness of the process. Therefore, contact areas between the water and the exhaust air which are as large as possible should be provided. A severe disadvantage of biological processes is that the microorganisms of a specific plant can specialize in degrading the most common substances in the exhaust air. Components that occur only occasionally can then be ignored, and they remain dissolved in the water. There are several process options for biological exhaust air treatment. In biofilters, the microorganisms are located on a solid filter material, which is sprinkled with water. The pollutants are absorbed by the liquid as well as adsorbed by the filter material. As filter materials, compost, turf, brush-wood, bark, wood, coconut fibers, foams, and other porous materials can be used. Inorganic nutrients (nitrogen, phosphorus, etc.) can be delivered by the filter material itself or supplied with the sprinkling water. For the design, it has to be taken into account that the exhaust air coming out of the biofilter is always saturated with water due to the intensive contact with the filter material. Therefore, biofilters are prone to drying out, which leads to worse conditions for the microorganisms. Therefore, the humidity has to be controlled carefully. The exhaust air is often humidified before it is led into the biofilter.

As a rule of thumb, for the calculation of the volume of the filter layer it can be assumed that the exhaust air load should be in the range 100–250 Nm³/h per m³ filter material. The degradation capacity for pollutants can be 10–100 g/h per m³ filter material [196]. This leads to considerable dimensions for biofilters. Bioscrubbers are scrubbers where liquid from an activated sludge tank is used as absorbent. The packing is inert. For the evaluation of the dimensions, bioscrubbers can be treated like normal scrubbers. They are considerably smaller than biofilters. Biotrickling filters try to combine the principles of bioscrubbers and biofilters. The microorganisms settle on the packing so that the absorbed pollutants are degraded right on the spot. New developments in biological exhaust air treatment have the target to reduce the dimensions, especially for biofilters. Bioscrubbers could also be implemented as tray columns, where the concentration of microorganisms could be much higher. It is estimated that a degradation rate of approx. 1300 g/(m² h) can be realized. A second, hydrophobic solvent in addition to water could form a second liquid phase in a scrubber, which could absorb hydrophobic pollutants from the exhaust air. It could be recycled to the activated sludge tank. Finally, membranes on which the microorganisms can settle could be used. A few hundred bioprocesses for exhaust air treatment are operated in Germany. Most of them are biofilters, used for agriculture, the fish industry, and sewage plants. The application makes sense for pollutant loads of 1000–1500 mg/m³ of organic carbon [196].
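With the rules of thumb above, the order of magnitude of a biofilter can be estimated from the exhaust air flow and the pollutant load. The numbers in the following sketch are assumed example values; the mid-range values of the quoted intervals are used.

    # Rough biofilter sizing from the rules of thumb in the text:
    #   exhaust air load:      100-250 Nm3/h per m3 filter material
    #   degradation capacity:  10-100 g/h per m3 filter material
    V_flow = 20000.0     # Nm3/h exhaust air (assumed)
    load = 0.5           # g pollutant per Nm3 (assumed)

    V_air_basis = V_flow / 150.0                 # m3 filter, mid-range air load
    V_degradation_basis = V_flow * load / 50.0   # m3 filter, mid-range capacity

    V_filter = max(V_air_basis, V_degradation_basis)
    print(f"Air load criterion:      {V_air_basis:.0f} m3 filter material")
    print(f"Degradation criterion:   {V_degradation_basis:.0f} m3 filter material")
    print(f"Governing filter volume: approx. {V_filter:.0f} m3")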

13.4.5 Exhaust air treatment with membranes

Membrane applications also have potential for exhaust air cleaning, especially in combination with adsorption. The advantages of membranes are the simple, modular construction and the low space demand. On the other hand, the predictability of their performance is very low, and even if references are available, there is still doubt about their mechanical, thermal, and chemical stability as well as about their sensitivity to fouling. Furthermore, their properties vary during operation, as many membrane materials swell when they are exposed to the pollutants. There are relatively few references for membrane separations. They refer to simple cases like the removal of toluene or a hydrocarbon from exhaust air. Figure 13.15 shows an example. As the partial pressure difference is the driving force for the flow through the membrane, a compressor is used on the pressure side and a vacuum pump on the suction side. The membrane only achieves an enrichment of the pollutants on the suction side. The pollutants in the permeate are then partially removed by condensation, whereas the rest is fed back to the pressure side. In comparison to the simplicity of the problem, this is a quite complex and expensive process. After all, the pollutants can be recycled if it seems useful.


Figure 13.15: Process scheme for gas permeation.

13.4.6 Adsorption processes

Like membrane processes, adsorption can also be an option for exhaust air treatment. The foundations and the terms have already been explained in Chapter 7.2. There are several kinds of adsorbers that differ in the way the adsorbent is treated. It can be fixed in a packed bed or in a moving bed, or it can be implemented as a fluidized bed. The most popular way is the fixed bed because of its simplicity and the low abrasion of the particles. However, the big disadvantage is that the adsorption process is transient. For a continuous process, a second apparatus is necessary to take over the task when the first is being regenerated and vice versa (Chapter 7.2). Adsorption processes are favorable if very low pollutant contents of the exhaust air are aspired to. Compared to absorption, the investment and operation costs for adsorption are considerably higher, up to a factor of 3 [196]. However, it is often used for relatively small exhaust air streams, as no major costs for recycling of liquids occur. Another advantage of adsorption is the option of recycling the pollutants, which often easily compensates for the disadvantages [196]. A serious disadvantage is that an extended safety concept is necessary due to the danger of fire. Activated carbon with huge surface areas, the presence of oxygen, and the release of the heat of adsorption provide good conditions for fire. In fact, it has been found in many cases that smoldering fires were active inside the adsorber bed which were not detected by the operator team. Numerous examples are known of the so-called “Monday fires” [143]. On Friday, machines and vessels are often cleaned with large amounts of organic solvents, which remain in the adsorber over the weekend and cause the fire after the new startup on Monday. Meanwhile, these Monday fires can be avoided by means of modern CO sensor technology, which initiates flooding of the adsorber with nitrogen or carbon dioxide [239].


13.5 Waste water treatment

Water is one of the most often used substances in the chemical industry. It is used as solvent, raw material, medium for chemical reactions, and as washing agent for products, gases, and equipment. Therefore, it can be loaded with substances and particles. Before returning it to the environment, it has to be cleaned according to the governing rules (e. g. Germany: Wasserhaushaltsgesetz). Any discharge of waste water needs a permit, which is subject to strict limiting values. A permit is given only if the water is cleaned according to the current state of the art. For the treatment of waste water streams, one can distinguish between measures to remove solids and measures to remove dissolved impurities. Solids can be removed by
– Sedimentation: The solids must have a larger density than water. The density difference and the particle size must be sufficiently high.
– Flotation: The solids must have a lower density than water, so that they move up to the surface. If the density difference is not large enough, auxiliary substances can be used. For example, gas can be introduced into the water. The bubbles attach to the particles, which lowers their apparent density.
– Filtration: The waste water can be filtered over flint, sand, or industrial filters, where the large particles are caught as they cannot pass meshes which are smaller than the particles themselves. The smaller the particles, the smaller the meshes must be. Beyond conventional filters, membranes are used in the different applications microfiltration, ultrafiltration, and nanofiltration (Chapter 7.1).
For dissolved impurities, the typical cleaning of waste water is different from other separation tasks, as it is in most cases not well defined, i. e. the loads vary and the polluting components are often not known. Therefore, a waste water treatment process cannot be predicted but must be experimentally demonstrated in a piloting unit. Often, the particular vendors have miniplants where a test amount of genuine waste water can be processed to check the performance. The load of a waste water is characterized by the TOC (total organic carbon), the COD (chemical oxygen demand), and the BOD (biochemical oxygen demand) values. These parameters are decisive for the operation costs of a waste water treatment, as they indicate the amount and the kind of the waste water load. The TOC value is the concentration of carbon atoms of organic molecules in the waste water. It can be measured with good accuracy by determining the carbon dioxide after oxidation. The COD value is less accurate. It refers to the amount of oxygen necessary to convert the organic substances into CO2, H2O, and NH3. It is determined by mixing the water sample with potassium dichromate, 50–70 % sulfuric acid, and silver ions as catalyst and keeping it at boiling temperature for two hours.


The biochemical oxygen demand is more complicated and can only be determined experimentally. It is measured how much oxygen is consumed by microorganisms in contact with the waste water during a five-day period (often termed BOD5) at t = 20 °C [7]. It is a very useful quantity, but it takes five days to determine. The BOD is always lower than the COD, as the microorganisms often use only parts of the molecules for combustion, while the rest is used for growth. The ratio between BOD and COD is in the range BOD/COD = 0.05–0.8, i. e. it is completely unpredictable. If nothing better is known, one can use BOD/COD ≈ 0.35.

Example
A waste water stream contains 500 wt. ppm methyl tert-butyl ether (MTBE). Determine TOC, COD, and BOD.

Solution
The chemical formula of MTBE is C5H12O, with the molecular weight M = 88.148 g/mol. With MC = 12.011 g/mol as the molecular weight of carbon, the TOC can be determined to be
TOC = 500 wt. ppm ⋅ (5 ⋅ 12.011)/88.148 = 341 wt. ppm
The oxidation reaction of MTBE is given by
C5H12O + 7.5 O2 → 5 CO2 + 6 H2O
and therefore, using MO = 15.9998 g/mol as the molecular weight of oxygen, one gets
COD = 500 wt. ppm ⋅ (7.5 ⋅ 2 ⋅ 15.9998)/88.148 = 1361 wt. ppm
Without experimental information, the BOD can only be estimated to be
BOD = 0.35 COD = 476 wt. ppm
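The same bookkeeping can be written as a short function that works for any pollutant of the form CxHyOz; the MTBE numbers above are reproduced as a check (the BOD factor of 0.35 is the rough default mentioned in the text, and the function is restricted to pollutants containing only C, H, and O).

    # TOC and COD of a waste water loaded with a single pollutant C_x H_y O_z.
    M_C, M_H, M_O = 12.011, 1.008, 15.9998   # g/mol

    def toc_cod_bod(conc_ppm, nC, nH, nO, bod_factor=0.35):
        """conc_ppm: pollutant concentration in wt. ppm."""
        M = nC * M_C + nH * M_H + nO * M_O
        # Oxygen demand for full oxidation to CO2 and H2O:
        # C_x H_y O_z + (x + y/4 - z/2) O2 -> x CO2 + y/2 H2O
        nO2 = nC + nH / 4.0 - nO / 2.0
        toc = conc_ppm * nC * M_C / M
        cod = conc_ppm * nO2 * 2.0 * M_O / M
        return toc, cod, bod_factor * cod

    # MTBE, C5H12O, 500 wt. ppm (example above)
    toc, cod, bod = toc_cod_bod(500.0, 5, 12, 1)
    print(f"TOC = {toc:.0f} wt. ppm, COD = {cod:.0f} wt. ppm, BOD = {bod:.0f} wt. ppm")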

For dissolved substances, there are a number of different processes for waste water cleaning that are often used in combination:
– Evaporation: Often, the pollutants are heavy boiling substances which cannot be vaporized. In this case, the waste water can be concentrated by evaporation. The remaining residue can be sent to incineration or to disposal if possible. Of course, waste water evaporation is very energy-intensive. It is more or less obligatory to use at least one of the heat integration options described in Chapter 3.3, i. e. multieffect evaporation or vapor recompression. The statement that all the pollutants might be

366 | 13 Utilities and waste streams







heavy boiling substances is usually weak. In most cases, the condensate is not pure water but contains components which are light ends or form low-boiling azeotropes with water. Then, evaporation must be supplemented by a condensate polishing measure, either reverse osmosis, chemical destruction or adsorption (see below). The target is always to return as much non-contaminated water as possible back to environment. Reverse osmosis: Reverse osmosis can be used for the cleaning of waste water as stand-alone or as a supplementary measure. The method is restricted to low-concentrated pollutants, as the osmotic pressures should be limited to 40–100 bar. Membranes are used which do not work as a filter but by means of solubility and diffusion (Chapter 7.1). Reverse osmosis can remove large amounts of pure water which can be returned to environment. The biggest problem is the durability of the membrane which is necessary to test before an application takes place. A regular exchange of the membrane is often necessary but not too expensive. Adsorption: Pollutants can also be removed by adsorption. Due to the wide variety of possible pollutants, an adsorbent like activated carbon is one of the first choices, as it removes organic components quite reliably. As described in Chapter 7.2, a twin plant (Figure 7.8) is necessary to ensure a continuous operation. As activated carbon is quite inexpensive, the loaded adsorbent can be sent to incineration or be regenerated by specialized service providers. The activated carbon is not removed as a powder, instead, a whole filter unit is removed and replaced, which is quite fast and clean at the site and easy [204]. Chemical destruction: An interesting procedure for the reduction of TOC and COD are the advanced oxidation processes (AOP). A photochemical reaction due to absorption of ultraviolet radiation can activate the pollutant molecules. The activated state can cause a further reaction to products which are easier biodegradable. Additionally, oxidants like ozone or hydrogen peroxide [205] can be split into oxidizing radicals, which rapidly react with the pollutants and decompose them into CO2 and H2 O, as long as no other elements than C, H, and O are involved. Ozone is taken for processes where the pollutant concentration is low, as the solubility of ozone in water is bad. If low boiling pollutants are present in the waste water, it happens that they are stripped into the offgas. The activation of the molecules can also be increased by ultrasound. In case the oxidation has stopped at some intermediate products, the AOP can be supported by a biological treatment. Maintenance of these systems is hardly necessary, and another great advantage is that the waste water stream is not split into two streams, where one of them contains the pollutants and has to be further processed. Even highly concentrated solutions up to 250 g/l COD have been successfully treated [206]. The disadvantage is that an infrastructure for the oxidant must be provided.

13.6 Biological waste water treatment | 367





Waste water incineration: Waste water can also be incinerated, which is reliable, but probably the most unsatisfactory process, as the often large amounts of water do not have a heating value but must be evaporated in the incinerator. Therefore, the waste water is first concentrated to reduce the amount of water. Pressure hydrolysis: With this option, water is kept under pressure for some time at high temperatures (200–250 °C), where hopefully the pollutants decompose to substances which are easier to handle.

13.6 Biological waste water treatment The waste water treatment by microorganisms is probably the most often used final treatment for waste water. For the microorganisms, the organic pollutants are raw materials for their metabolism. It can be distinguished between aerobic and anaerobic waste water treatment processes. The more established way is the aerobic treatment, meaning that the microorganisms need oxygen in their metabolism to digest the pollutants. In principle, it is a gasliquid reaction (see Chapter 10.2) of the oxygen with the organic pollutants, where the microorganisms as a suspended solid act like a catalyst. Approximately half the organic carbon is oxidized to CO2 , while the other half is used for building up additional biomass. This means that additional sludge is generated, which has to be disposed in some way. Other items to consider are the addition of nutrients (Ca, K, Mg) necessary for the metabolism of the microorganisms and ammonia to avoid nitrate formation [7]. As the solubility of oxygen in water is extremely low (approx. 8 mg/l at p = 0.2 bar partial pressure [8], i. e. ambient air conditions), reaction kinetics are determined by the mass transfer of the oxygen into the liquid. This means that the equipment must provide a large surface between water and air, which is achieved by dispersing one of the two phases. Both options vapor and liquid phase are applied; for trickle filters and activated sludge plants, the liquid phase is dispersed, while for highly contaminated waste waters it is the gas one. Trickle filters are random packing beds, irrigated by the waste water. The surface of the packing elements is porous so that the microorganisms can develop a biologically active layer. For normal waste waters (BOD5 = 100–200 mg O2 /l) the degree of degradation is 75–95 %, whereas only 50 % are achieved for highly contaminated ones (BOD5 = 1000–3000 mg O2 /l). In the classical activated sludge process, the waste water is processed in a concrete basin. A rotating spinner disperses the water into droplets and distributes them along the water surface. As the basins are open to atmosphere, they develop a lot of noise and smell, which is a serious drawback, as well as the low efficiency in the oxygen intake.


Figure 13.16: Biohoch reactor in the Industrial Park Höchst, Frankfurt/Main.

As an alternative to the classical activated sludge process, bubble column reactors have been developed. An example is the Biohoch reactor of the former Hoechst AG (Figures 13.16, 13.17). The gassing of the sludge is performed by two-component jets, where the air is sucked in by a liquid stream with a high velocity (see Chapter 8.3). The air is dispersed into small bubbles, which stay in contact with the liquid for a comparatively long time during their ascent to the surface because of the considerable height of the reactor (approx. 25 m). Up to 80 % of the oxygen flow can be dissolved. Odorous substances in the exhaust air can be removed separately, as the bubble columns are not open to the atmosphere. The noise problem is also significantly reduced. The oxygen intake requires less energy than in the classical technology; furthermore, the space demand is lower. The separation of the cleaned water from the activated sludge is performed in a cone-shaped decanter, which is designed as a ring at the top of the Biohoch reactor (see Figure 13.17). The overflow is clean water; the sludge at the bottom is recycled to the reactor or removed as excess sludge. As approx. 90 % of the sludge is recycled, the concentration of microorganisms in the reactor can be kept at a high level so that high conversion rates can be maintained. There is always a mixed population of microorganisms, so that it can react to changes in the kind and amount of pollutants in the waste water, which happen frequently in practical applications [8]. Natural evolution can adjust the population of microorganisms to the changing conditions. However, for fast changes in the waste water conditions this mechanism is too slow. Therefore, a large storage volume for waste water is provided so that fast changes are mitigated. Another way to even out fluctuations is the addition of activated carbon powder. Aromatic components with phenol, amino, nitro, and chlorine substituents are often toxic for the microorganisms even at low concentrations. There are specially adapted microorganisms that can cope with these components. They are used in an immobilized way, e.g. fixed on a layer of activated carbon.


Figure 13.17: Sketch of the Biohoch reactor internals.

However, in general such components should be removed by other pre-cleaning measures, e.g. adsorption. The disposal of the excess sludge remains a challenge that has not been solved in a satisfactory way so far. Its solid content is approx. 5–20 g/l. As described above, the usual concentration measures (sedimentation, centrifugation, filtration, and drying) can be applied to get solid contents of 25–50 % [8]. Then the sludge can be transferred to a waste disposal site or incinerated; however, in the latter case the ashes have to be disposed of as well. Here, methane fermentation can help. The excess sludge is concentrated to approx. 5 % and fed to the so-called digestion tower, where it is consumed by anaerobic microorganisms. Essentially, there are three steps in the methane fermentation of the sludge, requiring different microorganisms:
1. Carbohydrates, proteins, and different kinds of fat are hydrolyzed to fatty acids and alcohols.
2. Fatty acids and alcohols are converted to acetic acid, hydrogen, and carbon dioxide.
3. The latter substances from step 2 are converted to methane.
As a result, a gas mixture (biogas) of methane and carbon dioxide is formed with methane concentrations up to 70 mol-%. Only 5 % of the carbon remains in the sludge. The biogas can be used as a natural gas substitute. The process is highly sensitive to

process conditions. Step 3 takes place only at pH = 7, tolerating only small variations. If step 1 is too slow, the pH drops due to the accumulation of fatty acids, and methane formation stops. The methane fermentation is generally slow and requires 15–20 days of residence time; therefore, digestion towers often have huge volumes. For waste waters with very high pollutant loads (BOD5 > 1000–2000 mg O2/l), the anaerobic waste water treatment can even replace the aerobic process, as the oxygen intake becomes more and more difficult. The anaerobic treatment has two main advantages:
– Due to the production of biogas, the anaerobic process has a positive energy balance, whereas the aerobic process needs energy for the provision of oxygen for the microorganisms.
– In the anaerobic process, only 5 % of the organic carbon remains as sludge for disposal, whereas 50 % ends up as sludge in aerobic treatment.
On the other hand, anaerobic processes are slow and sensitive to the process conditions, i.e. pH (see above), throughput and composition of the waste water, and the occurrence of toxic substances. The temperature must be maintained at 35–40 °C. The production of biomass is slow, causing low operational reliability. The microorganisms have to be grown externally and seeded into the process [276]. In general, anaerobic processes do not produce high-quality effluents, so that further treatment of the waste water is necessary [7]. Typically, only 75–85 % of the COD is removed.

14 Process safety

In the chemical industry there is certainly a hazard potential, as combustible and poisonous substances are involved. In the past, a number of serious accidents have happened, some of which are listed below.
– Ludwigshafen-Oppau (1921): An explosion in a fertilizer storage of the BASF company killed 561 people. More than 2000 were injured. Even in Heidelberg, 30 km away, roofs were stripped of their tiles. The crater in Oppau is 125 m long, 90 m wide, and 19 m deep (Figure 14.1). Because of the extent of the catastrophe, its cause could never be reconstructed exactly. The main components involved were ammonium nitrate, a well-known explosive agent, and ammonium sulfate. In the scheduled mixing ratio, the fertilizer was not explosive. Probably, a demixing had taken place. The attempt to loosen up the densely packed bed by disrupting it with dynamite led to a booster detonation of the ammonium nitrate, which caused a knock-on effect on the entire storage with 4500 t of fertilizer.

Figure 14.1: The Oppau crater after the explosion in 1921. © BASF Corporate History, Ludwigshafen/Rhein.



– Texas City (1947): As in Oppau, ammonium nitrate was also involved here. A cargo ship containing 2500 t of ammonium nitrate caught fire and blew up.











As a consequence, the Monsanto site nearby and several oil refineries also caught fire, and a large number of explosions took place. It took several days to get the situation under control. There were more than 600 casualties and 3000 injured.
– Ludwigshafen (1948): On a hot summer day, a tank wagon with 30 t of dimethyl ether detonated on the BASF site in Ludwigshafen with great violence. It was the worst explosion in Germany since the Second World War. There were 207 casualties, almost 4000 injured, and more than 7000 damaged houses in Ludwigshafen and Mannheim.
– Bitterfeld (1968): In order to exchange a seal, 4 t of vinyl chloride were relieved from an autoclave. A violent detonation took place. 42 people lost their lives, more than 200 were injured. A large part of the site was destroyed. This accident appears especially absurd from today's point of view. Vinyl chloride is highly carcinogenic and poisonous; today, according to TA Luft [192], only an extremely low emission of 2.5 g/h or a concentration of 1 mg/Nm³ is allowed in Germany. Even the release of a noncritical substance into the environment would be unthinkable, and only a controlled discharge into an exhaust air system (usually an incineration) would be possible. A plant without such equipment would not be licensed.
– Flixborough (1974): One reactor in a row of five had to be bridged because of a failure. The connecting line was not strong enough for the process conditions and broke. In 50 s, approx. 40 t of cyclohexane vapor escaped into the environment. An ignition followed, and the whole site was destroyed. There were 28 casualties and 88 injured. The adjacent storage tank containing 1600 t of combustible substances caught fire as well. Even after three days explosions were still taking place. The number of casualties would have been considerably higher if the explosion had happened during normal working hours and not on a Saturday [207]. For the bridging of the reactor, which is at least a manipulation of a process operating at t = 150 °C and p = 10 bar, no design work was performed. The construction drawing had been made with chalk on the floor. There was no structural calculation, and valves, which would have made it possible to isolate the reactors from each other, were not provided.
– Seveso (1976): In an autoclave producing 2,4,5-trichlorophenol, the agitator was switched off by mistake after the reaction and the shift were finished. The heat removal from the reactor became much worse, and the product was involved in further reactions. Because of the missing heat removal, the reactions were accelerated. One of the follow-up products was dioxin (2,3,7,8-tetrachlorodibenzodioxin), on a kg scale. Finally, the safety valve of the reactor actuated. There was neither a collecting vessel nor a defined line to a flare. The safety valve opened for 30 min,






and the vapor went straight into the environment. It took hours before an experienced crew arrived with the next shift. The reactor could then be shut down. 18 km² were contaminated. Plants wilted, and more than 3000 animal carcasses were found. Approximately 200 people suffered from chloracne. The number of casualties as a consequence of the accident is not known. The cancer rate rose significantly. It took eight years before all decontamination measures were finished.
– Bhopal (1984): In Bhopal (Central India), the Union Carbide company manufactured carbaryl, an insecticide, via the intermediate methyl isocyanate, which is an extremely poisonous substance. Accidentally, water intruded into a tank filled with 40 t of methyl isocyanate and caused a chemical reaction. Carbon dioxide was formed and built up pressure. The methyl isocyanate, which is quite volatile anyway (normal boiling point: 39 °C), was evaporated by the heat of reaction. Within two hours, the whole content of the tank was released through the safety valve. More than 2000 people were killed. Probably about 200 000 people were injured. A decontamination of the area around the plant has to date not been performed. The safety devices provided had not worked at all. The cooling system of the storage tank and the gas flare had been switched off months previously, an emergency scrubber was not ready to operate, and the tanks were overfilled. The staff had been reduced and was not sufficiently trained. The alarm system was switched off in order not to disturb anyone, and no emergency plan existed; many people died when trying to escape directly through the poisonous cloud. To this long list of failures and unlucky circumstances, the poor process concept must be added [207]. While methyl isocyanate as a poisonous substance was produced continuously, its further processing took place batchwise, so that large amounts of this substance had to be stored. A completely continuous process would have drastically reduced the methyl isocyanate inventory in the plant. The circumstances of the accident and the background of the plant are described in [278].
– Toulouse (2001): Exactly 80 years after the Oppau accident, a violent explosion happened at the TotalFinaElf site in Toulouse, also caused by ammonium nitrate. There were 31 casualties and more than 1000 injured. The cause was never clarified.

Accidents with much less damage can also receive a lot of publicity. One example is the so-called Carnival Monday accident in 1993 at the Griesheim site of the former Hoechst AG. Again, an autoclave was involved. The reactants, methanolic caustic soda and o-nitrochlorobenzene, have a broad miscibility gap. Therefore, the methanolic caustic soda was slowly added to the organic phase, while an agitator was supposed to provide sufficient mixing and thereby the reaction. The staff did not notice that the agitator was switched off, and a large amount of the reactants accumulated in the reactor, forming two liquid phases. Because the reaction did not take place, the usual

temperature increase was missing. Therefore, the reactor was heated up, contrary to the manual. Then the error was noticed, and the stirrer was switched on, with the large inventory of reactants at high temperature. The reaction started immediately, the cooling system could not remove the large heat of reaction, and the safety valves opened. 10 t of the product o-nitroanisole were relieved to the environment. Due to the cold weather, the product condensed as an aerosol. A greasy yellow layer covered large parts of the neighboring district [207]. It is clear that the reactor was not sufficiently protected against maloperation. The dosing of the caustic soda while the agitator is not running and the external heating of the reactor should have been prevented by interlocks. Although no one was killed or injured, the accident was often cited in an unjustified way in connection with Bhopal or Seveso. The loss of image due to the poor information management of the Hoechst AG was very serious.
In the next section, we shall outline that many improvements concerning safety have been introduced. Nevertheless, severe accidents are definitely not just a thing of the past. While this book was being written, several major accidents took place. On April 7th, 2013, a serious explosion took place in West, close to Waco/Texas. There has still been no official statement about the cause. It is clear that a fire broke out where 240 t of ammonium nitrate were stored without sufficient fire protection. An explosion took place. 14 people were killed, among them 12 firemen. More than 250 people were injured [208]. On August 12th, 2015, a fire broke out in the bulk storage of the harbor of Tianjin/China. After 45 min, two detonations took place within one minute. There were at least 165 casualties, and approx. 800 people were injured. In the harbor, about 3000 t of hazardous substances were stored, among them sodium cyanide, calcium carbide, and, again, ammonium nitrate. The current theory is that acetylene was formed by contact of the calcium carbide with the firefighting water [209]. In April 2016, 28 people died in an explosion in a vinyl chloride plant in Coatzacoalcos/Mexico. In March 2019 there was a huge explosion at the Yancheng site in China, where mainly pesticides and fertilizers are produced. At least 47 people were killed, and more than 90 were seriously injured. Finally, in July 2019 at least 12 people were killed in Yima/China, when an explosion in an air separation unit took place.
The main reasons for accidents are [246]:
– lack of focus on process safety; instead, focus is put on minimization of the LTIR (lost time incident rate);
– belief that major incidents will not occur;
– production priority;
– ignoring warning signals;
– disregarding standard operation procedures (SOP);
– insufficient focus on proactive issues (work methods, lessons learned system, safe practices etc.);





– cognitive biases [247], e.g. the selective search for information that confirms one's own opinion and ignores other information.

14.1 HAZOP procedure

The safety of its process facilities is the most important target of a chemical company. Although it is not possible to avoid accidents completely, the chemical industry has learned a lot. Most hazards happen because of flaws in design or material or due to human error, the latter being considered the most important cause. In a chemical plant, there is a large potential for human error during design, procurement, construction, and operation. It is desirable that these errors are anticipated before the plant is commissioned. There are a number of procedures for the safety analysis [210]. The most established one in the chemical industry is the so-called HAZOP analysis (HAZard and OPerability). It was developed in the 1970s after the Seveso accident. It is important to note that HAZOP looks for incidents with the potential for severe impacts. The minor ones (“slips, trips, and falls”) are the subject of the company's general safety requirements. A HAZOP session is recommended not only for new plants but also for process changes supposed to be minor, as their implications are often underestimated. Basically, HAZOP is a communication technique. Information is presented, discussed, analyzed, and recorded [210]. The safety aspects are systematically identified to check the measures taken to prevent major accidents. The HAZOP procedure is quite time-consuming and requires a number of skills from the participants. It is recommended that the team consist of approx. 5 people. More participants dilute the effectiveness, as there are too many communication routes between the people. All of them should be able to communicate fluently in the language applied (usually English); otherwise, too much effort is necessary to keep everyone up to date. The participating team members should be familiar with the chemical process under examination. The participants can be members of different organizations, e.g. from the engineering company, from the future plant owner, or from a consultant company. They should cover the following roles or, respectively, areas:
– HAZOP leader: The HAZOP is guided by a HAZOP leader, who takes care that the meeting keeps its focus. The HAZOP leader is the person who is essential for the success of the team. He leads the team through the procedure and brings out the concerns of the process. He announces the key words to be discussed and, in case there is no minute taker, notes down the deviations, causes, and countermeasures. It is not intended that he takes part in the discussion, but it is also not forbidden. One of his most important tasks is to keep the discussion under control, as it sometimes turns into an engineering review or a personal dispute between two












participants. For HAZOP leaders, a certain authoritative personality is necessary. Preferably, the HAZOP leader should not have been involved in the design of the process, so that he is not biased. He also takes care that the necessary documents used in the HAZOP are available. The team leader must be aware that the attendance of the team members in the sessions is the most costly part of the HAZOP process. Therefore, it is up to him to avoid unnecessary discussions and to meet the schedule.
– Process engineer, responsible for the unit in the project: The people who have designed the process know it very well and can explain how the particular measures interact. They should be able to give first answers to the various key word items.
– Operations representative: The operations representative is more experienced in operating the plant and focuses on items which are caused by operational errors rather than by the design.
– Safety expert: The safety expert knows the impact of the items and is familiar with the rules and standards.
– Instrumentation & control representative: The representative of instrumentation & control knows the cause & effect matrix and how the control loops work. His special expertise ensures that a number of rules are maintained which other participants are not familiar with; e.g. an indicator involved in a control loop must not be used as a safeguard as long as the control loop itself can be the cause of the deviation.
– Consultant: The consultant is often someone who is familiar with similar processes but not with the one to be examined. This is useful, as consultants have the advantage of being unbiased and independent. They did not participate in the project, and hopefully they will ask the questions that the others, who have become blind to the shortcomings of the project, will not ask.

Other people able to give important input are maintenance representatives and material specialists. The HAZOP team should have a certain level of experience. If the majority of the team has never participated in a HAZOP, the HAZOP leader will be completely preoccupied with instructing the team members rather than having them contribute to the review. The team members should be encouraged to ask “stupid” questions [210]. For the whole plant, line by line and for each piece of equipment, it is examined what the consequences of deviations might be and which additional countermeasures should be applied. All necessary documents should be available, i.e. (amongst others) PIDs (including the PIDs for vendor packages) and PFDs, material balances, plot plans, cause & effect charts, interlock descriptions, fire protection measures, the list of safety valves/rupture discs with relief case descriptions, data sheets of equipment,



instrument and control valves, ambient data, the utility list, and the properties of the main components.
First, the plant should be divided into parts, the so-called nodes, which have a well-defined objective (e.g. separation, reaction, heating/cooling, pressure increase). There is a list of guidewords (temperature, pressure, flow, etc.) which covers at least a large number of possible deviations systematically. A number of computer programs are available which can support the procedure. For each node, every guideword is considered with the following work flow [210]:
– Definition of the node.
– Short process description, usually given by the process engineer.
– Selection of the process parameter and assignment of the deviations, one by one. Go through all the streams of the node with the process parameter selected before changing it. The process parameters and the deviations are:
  – Flow (no flow, less flow, more flow, reverse flow)
  – Temperature (low temperature, high temperature). Double-check of the design temperatures of the particular pieces of equipment with respect to the scenarios.
  – Pressure (low pressure, high pressure). Double-check of the design pressures and the pressure relief cases. Ruptures or leakages can be the reason for deviations.
  – Level (high level, low level)
– Identification of the causes and hazards of the deviation.
– Identification of the consequences of the hazard without regarding the safeguards.
– Specification of the appropriate safeguards and recommendations to control the hazards.
– List of recommendations in the order of their priority.
– Ensure that the actions proposed are implemented and documented and that for each action a responsible person is assigned.
Certainly, the HAZOP is no guarantee that nothing can happen in a process. Often, cognitive biases have an influence on decisions and can hinder rationality [242]. Examples are the so-called groupthink, where a group of people shares common but possibly false beliefs, and mindsets, which are assumptions that are so established that doubt is not allowed. Another phenomenon is group polarization, meaning that there is a tendency of a group to make decisions which are more extreme than the initial opinions of the members. Whenever the expression “generally believed” occurs, a cognitive bias is probably at work. A way to overcome cognitive biases is the use of a “devil's advocate”, a team member whose role it is to challenge common views on purpose in order to test their validity. It does not matter what the real opinion of the devil's advocate is. What can be achieved is to lower the probability of accidents, especially of serious ones, as many scenarios are reflected on and people become

sensitive to possible consequences.
Risks can be categorized into equipment failures (ruptures, control valve failures, etc.), operational errors (wrong valve state), external events (corrosion, fire), and product deviations (e.g. off-spec). The assessment of the risk takes place based on common sense, personal experience, knowledge, and intuition of the participating people. Non-reasonable risks should be ruled out. A rupture of a line has a real possibility of occurring, whereas a meteorite striking the facility is not really probable, and its consequences can neither be avoided nor mitigated by better engineering. Similarly, the “double jeopardy” rule should be kept in mind, meaning that two independent events at a time do not need to be considered, e.g. a tube rupture as mentioned above and a simultaneous malfunction of a control valve. The probability of the simultaneous occurrence of two independent events is negligibly low. However, two simultaneous events are not necessarily independent, e.g. a failure of the reflux pump and a loss of cooling in the condenser of a distillation column. In this case, the failure of the reflux pump might cause accumulation of liquid in the condenser, with the consequence of the cooling loss. In principle, there is one first error, and the second one is the consequence of the first.
The notes should clearly indicate the exact naming of the equipment and the instrumentation devices. Otherwise, the review of the report will take time and might be erroneous. Minor spelling errors are not important and can easily be corrected later. The costs of the countermeasures should not be taken into account or discussed during the HAZOP session. Of course, for the final decision the practicability and the costs must be examined. A ranking of countermeasures is useful for the acceleration of the decision process afterwards. The criteria are life safety, protection of the environment, protection of the equipment, and, finally, continuation of production. Countermeasures for high risks tend to be more costly and complex than those for low risks and should be considered first. The proposed recommendations must be forwarded to the acknowledged experts for an evaluation.
Currently, there is a trend to address safety issues by means of process control systems, meaning that instruments are used to prevent or mitigate a hazardous situation. In general, electronic components behave differently from mechanical components. Regarding the probability of error, the error rate of electronic components is high at the beginning of the lifetime in operation. With a bit of dark humor, this is called “infant mortality”. After a short period of time it drops and remains constant at a low level. At the end of the lifetime it rises again strongly. In contrast, mechanical devices also have a high error rate at the beginning, but after a period with a low error rate it rises linearly with time [211]. The measuring and control devices which have a safety-relevant function are classified by the so-called SIL (“safety integrity level”) analysis, where it is rated which certificates for the probability of failure are necessary for the various devices, according to the risk associated with a failure of the device function. There are four SIL numbers; the higher the SIL number, the lower the risk if the device fails. Table 14.1 gives the average probabilities



Table 14.1: Probabilities for failure on demand for the various SIL classes.

Safety Integrity Level (SIL)    Probability of failure on demand
4                               10⁻⁵ – 10⁻⁴
3                               10⁻⁴ – 10⁻³
2                               10⁻³ – 10⁻²
1                               10⁻² – 10⁻¹

for failure on demand in the low demand mode, which is defined such that a safety-relevant function is demanded not more often than once per year. The greater the consequences and the higher the probability of occurrence, the higher the required SIL number. Figure 14.2 is based on the international standard IEC/EN 61508 and connects these “risk parameters” with the SIL class.

Figure 14.2: Relationship between the risk parameters and the SIL number required. The risk parameters and their levels are:
– Consequence/severity: C = 1 (minor injury or damage), C = 2 (serious injury or one death, temporary serious damage), C = 3 (several deaths, long-term damage), C = 4 (many deaths, catastrophic effects);
– Frequency/exposure time: F = 1 (rare to quite often), F = 2 (frequent to continuous);
– Possibility of avoidance: P = 1 (avoidance possible), P = 2 (unavoidable, scarcely possible);
– Probability of occurrence: W = 1 (very low, rarely), W = 2 (low), W = 3 (high, frequent).
For C = 1, no special safety requirements apply. For C > 1, the required level is calculated as SIL = C + F + P + W − 6; SIL ≤ 0 means that no special safety requirements apply, and SIL > 4 means that the process concept is not reasonable.


Example
A shell-and-tube heat exchanger is operated continuously with two fluids which can react with each other in case of a tube rupture. The reaction would be exothermic. If the reaction takes place quantitatively, it might happen that the gaskets of the apparatus fail. If people are working in the vicinity, they might be seriously injured. All pressures and temperatures involved are monitored, so that even small deviations from the set point are indicated. If two signals are out of tolerance, an interlock closes a shut-off valve so that the feed line is closed and the extent of the reaction is limited. Which SIL classification is necessary?

Solution
According to Figure 14.2, the case can be categorized as follows: C = 2 (serious injury and temporary serious damage); F = 2 (continuous operation and risk exposure); P = 2 (avoidance not possible); W = 1 (low probability). The chosen SIL classification is

SIL = C + F + P + W − 6 = 2 + 2 + 2 + 1 − 6 = 1,

with a probability of failure on demand of 0.01–0.1 (Table 14.1).
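The risk graph of Figure 14.2 can be captured in a few lines of code. The following Python sketch is only an illustration of the relationship given above; the function name and the handling of the limiting cases are my own choices and not part of any standard.

def sil_class(C: int, F: int, P: int, W: int) -> str:
    """Required SIL according to the risk parameters of Figure 14.2 (sketch).

    C: consequence/severity (1-4), F: frequency/exposure time (1-2),
    P: possibility of avoidance (1-2), W: probability of occurrence (1-3).
    """
    if C == 1:
        return "no special safety requirements"
    sil = C + F + P + W - 6
    if sil <= 0:
        return "no special safety requirements"
    if sil > 4:
        return "process concept not reasonable"
    return f"SIL {sil}"

# Heat exchanger example above: C = 2, F = 2, P = 2, W = 1
print(sil_class(2, 2, 2, 1))   # -> SIL 1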

14.2 Pressure relief

Safety valve design is like Christmas: You know that it will happen soon, but when it is time you are not prepared. (Anke Schneider)

14.2.1 Introduction

Chemical plants often work at pressures far from the ambient one, and corrosive, toxic, flammable, and even explosive substances are handled. These pressures must always remain under control in closed systems which are constructed so that they can reliably withstand the design pressure and the design temperature. However, the design values are not infinite, and undesired scenarios can happen where they are exceeded. The plant design must make sure that these scenarios do not have an impact on the safety of the plant [212]. As long as the control system works, the plant is usually operated in a safe way. If the control system fails, a common method to protect the process equipment and keep it in a safe state is the emergency pressure relief or blowdown, a topic where an understanding of the process and knowledge



of thermodynamics can contribute to avoiding accidents and damage to the plant. The pressure relief device is the last line of defense and must be capable of actuating at all times and under all circumstances. It removes the potentially dangerous contents of the process equipment and transfers them to a safe, lower-pressure location, i.e. the environment for nonhazardous contents or, in the usual case, a flare system. Furthermore, it decreases the pressures exerted on the walls of the equipment and possibly prevents an escalation due to an explosion or a major release of toxic substances [213]. Nevertheless, the pressure relief itself is often a hazardous operation. During the depressurization, the fluid expands, and low temperatures can rapidly be generated, possibly causing brittle fracture of the vessel walls (Chapter 11). In distillation columns, the flow through the packing or the trays can be much higher than designed for normal operation, and the equipment can be seriously damaged during a pressure relief (Section 14.2.5). There is a long to-do list for the engineer when a pressure relief device must be designed [213]:
– fix the rate of relief from the piece of equipment to ensure that the design pressure1 is not exceeded;
– determine the restriction orifice to make sure that the corresponding equipment is protected;
– make sure that the relief flow can be transferred to the low-pressure destination;
– evaluate the minimum temperatures the particular materials should be designed for;
– evaluate the total flare capacity;
– evaluate the repulsive forces generated by the relief flow for the design of the various fixtures;
– organize the inquiry.2
Two kinds of equipment are used for pressure relief: safety valves and rupture discs. Figures 14.3 and 14.4 show a picture and a sketch of a safety valve, respectively. A safety valve opens gradually at a certain pressure and closes again after relief. The spring characteristic is adjusted so that the safety valve opens at the actuation pressure, i.e. at the design pressure of the adjacent piece of equipment. It opens gradually; it is fully open when the pressure reaches the maximum allowable overpressure, corresponding to 110 % of the design pressure.3 If there are several safety valves available to protect the equipment, the maximum allowable overpressure is 116 % of
Footnotes:
1 More exactly: the relief pressure; see below.
2 In a large plant, 150–200 safety valves and rupture discs are not unusual, so it takes a lot of effort to gather the information in an appropriate form.
3 It is a bit disturbing that the design pressure can be exceeded on purpose. In fact, this case and the following ones are covered by the definition of the design pressure; exceeding the design pressure does not mean destruction of the equipment.


Figure 14.3: Safety valve protecting a heat exchanger. © Markus Schweiss/Wikimedia Commons/ CC BY-SA 3.0 https://creativecommons.org/licenses/bysa/3.0/deed.en.

the design pressure, and in the case of fire it is even 121 %. It should be emphasized that these pressures, including the design pressure, refer to overpressures. Safety valves have a hysteresis; if the pressure decreases again to the value of the design pressure, the valve will not yet be fully closed. For this, the pressure must decrease to 90 % of the design pressure. The only purpose of a safety valve is the protection of the adjacent apparatus or device. It must not be misused as a pressure regulation valve. Working with safety valves, it is most important to distinguish between the particular pressure terms, which are illustrated in Figure 14.5 with exemplary pressure values. First, the vessel to be protected shall be explained. As indicated, it normally works at a pressure of pnormOp = 3 barg, and the maximum operating pressure is expected to be pmaxOp = 4 barg. The design pressure of the vessel is pDes = 6 barg, which is relatively far above the maximum operating value. A possible reason is that 6 barg is a standard value for the design pressure of low-pressure vessels. The vessel is protected by a safety valve, which actuates when the design pressure of the equipment is reached. It is fully open at a pressure which is 10 % higher, i.e. at 6.6 barg, corresponding to 7.6 bara. This pressure is the maximum allowable overpressure; it must not be exceeded. The relief amount is transferred to the safety valve via the inlet line. The pressure drop in the inlet line must be below 3 % of the actuation pressure. Just downstream of the safety valve, in the outlet line, there is the back pressure. It is the sum of the superimposed back pressure and the built-up back pressure. The superimposed back pressure is the pressure at the end of the outlet line, in Figure 14.5 the pressure in the
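For the example values of Figure 14.5, the pressure limits quoted above can be reproduced with a few lines. The Python sketch below simply applies the percentages mentioned in the text (110 % full-open pressure for a single valve, 3 % inlet line pressure drop, 10 % built-up back pressure, 90 % reseat pressure); the variable names are arbitrary.

# Pressure terms for a single safety valve (illustration with the values of Figure 14.5).
p_design_barg = 6.0      # design pressure of the vessel = actuation (set) pressure
p_ambient_bar = 1.0

p_full_open_barg = 1.10 * p_design_barg       # maximum allowable overpressure, single valve
p_full_open_bara = p_full_open_barg + p_ambient_bar
dp_inlet_max = 0.03 * p_design_barg           # 3 % rule for the inlet line pressure drop
dp_builtup_max = 0.10 * p_design_barg         # 10 % rule for the built-up back pressure
p_reseat_barg = 0.90 * p_design_barg          # valve fully closed again at 90 %

print(f"fully open at {p_full_open_barg:.1f} barg = {p_full_open_bara:.1f} bara")
print(f"inlet line pressure drop limit: {dp_inlet_max:.2f} bar")
print(f"built-up back pressure limit:   {dp_builtup_max:.2f} bar")
print(f"reseat pressure:                {p_reseat_barg:.1f} barg")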



Figure 14.4: Sketch of a safety valve. © Rasi57/Wikimedia Commons/CC BY-SA 3.0 https://creativecommons.org/licenses/by-sa/3.0/deed.en.

header into which the relief stream is discharged. The built-up back pressure is caused by the pressure drop of the relief stream in the outlet line. It should be less than 10–15 % of the actuation pressure. The values can vary slightly, depending on the guideline used in the project (e.g. DIN, ASME) and the vendor specifications. In contrast to a safety valve, a rupture disc (Figure 14.6) opens completely4 at a certain pressure and does not close again, since it is destroyed on being actuated. The advantages of rupture discs are their rapid actuation at fast pressure buildups, their low costs, and their small space demand. As they have no moving parts, they are very robust, and almost all materials can be used. Rupture discs are used for large relief streams with huge cross-flow areas and for fouling or viscous media. Also, they can be placed upstream of a safety valve with a slightly lower actuation pressure than the safety valve itself. In this way, the safety valve is protected against corrosion, fouling, and dirt until it actuates. Furthermore, the tightness of the pressure relief arrangement is improved.
Footnote:
4 Depending on the type of the rupture disc; some types do not open the whole cross-flow area.


Figure 14.5: Example case for the illustration of the safety valve pressure terms.

Figure 14.6: Rupture disc. © Jens Huckauf/Wikimedia Commons/CC BY-SA 3.0. https://creativecommons.org/licenses/bysa/3.0/deed.en.

It should be mentioned that the actuation pressure of a rupture disc decreases with increasing temperature, depending on the chosen material. For the design, this significant effect must be taken into account. The pressure terms are just the same as for safety valves, whereas the pressure drop limitations for the inlet and outlet lines are not relevant.

14.2.2 Mass flow to be discharged

For the design of a pressure relief device, the first step is to derive from the process knowledge which mass flow has to be discharged. One or more scenarios have to be fixed where the pressure relief device actuates. In most cases, the standard scenarios


Figure 14.7: Explanation of the volume balance.

(Chapter 14.2.4) can be taken. Applying these scenarios, the mass flow to be discharged can be determined. It is useful to always do this in the same way. The “volume balance” method can help a lot to understand what is happening. It requires the use of a process simulation program. The following three steps (Figure 14.7) are performed:
1. Calculate the state just before actuation: Starting from normal operation, the pressure buildup is traced until the actuation pressure is reached. If the vessel can be assumed to be closed, i.e. no inlet and outlet flows occur, the volume of the content and therefore the overall density remain constant. The safety valve characteristic, where the maximum pressure relief is obtained when the safety valve is fully opened, is neglected; instead, for simplicity it is assumed that the safety valve opens abruptly at the maximum allowable pressure (Section 14.2.1). At this pressure, the physical properties are evaluated as a co-product of the calculation in the process simulator.
2. Calculate the flow just after actuation: A differential consideration is made to evaluate what will happen just at the moment of actuation. For example, in the case of fire (see below) a small amount of heat ΔQ is added. Due to the safety valve, the pressure stays constant, and the new state of the content of the vessel, especially its new volume, can be evaluated. The difference between the new volume and the volume of the vessel is the relief volume ΔV, and the relief amount is simply Δm = ρΔV. Assigning a time step Δτ (e.g. Δτ = ΔQ/Q̇), the relief stream just at actuation is (Figure 14.7):

ṁ = ρΔV/Δτ = ρ ΔV Q̇/ΔQ   (14.1)

The fire case example (Section 14.2.3) will illustrate this procedure; a short numerical sketch of Equation (14.1) also follows after this list. Note that for a system in vapor–liquid equilibrium the density of the vapor must be inserted instead of the overall density, as only vapor is considered to be relieved.
3. Track the blowdown process: Often, the first relief stream is the largest, and normally a safety valve should actuate only for a short time. However, an increase of the relief stream is at least possible. Therefore, one should consider different points in time during the relief and check with a differential step what happens. From the tendency, it should be possible to estimate where the maximum occurs. In fact, for this purpose dynamic simulation would be the appropriate tool [214], especially if additional inlet and outlet streams occur. On the other hand, averaging the relief stream over a long period of time or even over the whole pressure relief process is not advisable. By definition, “averaging” means that the value obtained is smaller than the maximum value. However, the pressure relief device must be able to handle any state during the pressure relief, including the maximum one. Averaging would lead to a systematic underestimation of the required cross-flow area.
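The arithmetic of Equation (14.1) is trivial once the flash results are available; the effort lies in the flash calculations themselves. The following Python sketch only evaluates Equation (14.1) from given flash results (here the numbers of the fire-case example in Section 14.2.3); the flash itself would still have to be done in a process simulator.

def relief_flow(rho_vapor: float, dV: float, dQ: float, Q_dot: float) -> float:
    """Relief mass flow according to Equation (14.1).

    rho_vapor: vapor density at relief pressure [kg/m3]
    dV:        volume excess created by the differential heat input dQ [m3]
    dQ:        differential heat added in the flash step [kJ]
    Q_dot:     heat input rate to the vessel [kW]
    """
    d_tau = dQ / Q_dot              # time step corresponding to dQ [s]
    return rho_vapor * dV / d_tau   # [kg/s]

m_dot = relief_flow(rho_vapor=26.72, dV=0.061, dQ=500.0, Q_dot=502.0)
print(f"{m_dot:.2f} kg/s = {m_dot * 3600:.0f} kg/h")   # approx. 1.64 kg/s = 5891 kg/h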

The advantages of the volume balance procedure are:
– The temperature elevation of the liquid (and of the vapor as well) is correctly taken into account, which is important for mixtures with a wide boiling range.
– The volume increase of both vapor and liquid due to the temperature increase is correctly represented.
– The change of the equilibrium during the relief is taken into account.
– The “vanishing” of liquid volume due to the evaporation (Equation (14.3), see below), which is important for actuation near the critical point,5 is correctly represented.
– The procedure is also suitable for the safety valve design of packed columns, where the holdup of the packing is small. Tray columns are more difficult and require dynamic simulation [214].
– If necessary, the time until actuation takes place can be evaluated.6 Often, this time is considerably long, and many actuation cases turn out to be unrealistic.
Footnotes:
5 In fact, such actuations are not as exotic as one would guess. For a vessel with a pure substance, an actuation pressure close to the critical pressure is sufficient to run into this problem. See also Section 14.2.2.
6 Be careful: the heat balance in a closed system has to be performed with the internal energy, not with the enthalpy as in a process simulation program (Q12 = U2 − U1).

14.2.3 Fire case

The fire case is the simplest and most frequently occurring case. Its calculation can be taken as the basis for how to deal with phase equilibria in relief cases. It is subject to discussion whether the fire case is actually relevant or not. In fact, one should consider that the heat input with steam is usually higher than the heat input by an external pool



fire, so that the fire case is often not the governing case. At the plant sites, in most cases the fire brigade needs less than ten minutes to arrive. This time is normally shorter than the time it takes to reach the actuation case with an external pool fire. Moreover, most plants have a sprinkler system, which further reduces the probability of an actuation case. Nevertheless, the design of safety valves against external fire has more or less become standard. Figure 14.8 illustrates what is assumed when fire causes a pressure relief. A vessel is partly filled with liquid. It is exposed to the heat generated by a pool fire. The vessel is completely closed, i.e. the valves shown in the adjacent lines are shut. The vessel is protected by a safety valve. Without liquid in the vessel, there would be hardly any heat removal from the walls to the vessel content. The walls of the vessel would be heated uncontrollably, and the design temperature would probably soon be exceeded. The vessel might even be destroyed. With liquid in the vessel, there is a better heat transfer from the vessel walls to the liquid. The temperature of the walls is limited due to a working heat removal. However, the removed heat causes a partial evaporation and a temperature increase of the liquid. Thus, the pressure in the vessel increases, and after reaching the design pressure of the vessel, pressure relief is necessary. The safety valve must be designed so that the vapor generated by the heat transferred to the liquid can be removed without further pressure increase. According to the established standards, the pool fire reaches a height of 25 ft, i.e. 7.62 m.

Figure 14.8: Sketch for the fire case. The valves are assumed to be closed.

With A as the wetted surface up to a height of 25 ft, the commonly used fire formula is given in API-521 [216]:

Q̇H/kW = 43.2 ⋅ F ⋅ (A/m²)^0.82   (14.2)

where F is a factor considering the influence of the insulation of the vessel. In the design basis of many projects, it is instructed that the insulation should be neglected, i.e. F = 1; otherwise, one would have to prove that the insulation still works properly at temperatures up to 700 °C. Often, there is also a recommendation on the extent to which the volume of the adjacent piping shall be considered (normally an additional 10 %). For assigning the 25 ft fire height to the arrangement, it has to be considered whether and where the combustible medium can collect and develop a pool fire. Often, the floors of the upper platforms in a plant are made of steel grating, so that liquid distributed on the floor simply drops down onto the platform beneath. The use of Equation (14.2) is often mandatory and hardly ever questioned. Nobody expects it to be an exact equation, but in fact one should know some details about its background. Equation (14.2) refers to large vessels, like the ones which occur in the petrochemical industry. For smaller vessels, the heat transfer by the fire is different, as the flames have the opportunity to surround the vessel. In API-2000 [215], a number of different fire formulas depending on the size of the vessel are given. Figure 14.9 clearly indicates that they fit the data much better; the data themselves are given in API-521 [216]. Note that the diagram is logarithmic, so deviations which seem to be small might indicate a relatively large error. Moreover, Equation (14.2) is not conservative; it calculates values which are systematically too low. Nevertheless, it is nearly always used.
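Numerically, Equation (14.2) is straightforward; the short Python sketch below evaluates it for the wetted area of the example in this section (A = 19.91 m², F = 1). It implements only the correlation as written above and is of course no substitute for the applicable standard.

def fire_heat_input_kW(wetted_area_m2: float, F: float = 1.0) -> float:
    """Pool fire heat input according to Equation (14.2) (API-521 form used in the text)."""
    return 43.2 * F * wetted_area_m2 ** 0.82

print(f"{fire_heat_input_kW(19.91):.0f} kW")   # approx. 502 kW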

Figure 14.9: Fire formulas in API-2000 [215] and API-521 [216].

To evaluate the pressure relief stream, consider a closed vessel with a liquid and a vapor phase which is heated and protected by a safety valve (Figure 14.10). Assuming that it is filled with a pure substance, one can derive that the ratio between the given heat flow Q̇ and the relief stream ṁout to maintain the relief pressure is

Q̇/ṁout = ρL/(ρL − ρV) ⋅ Δhv ,   (14.3)

where the temperature is the boiling temperature at relief pressure. This relation is not applicable at or close to the critical point. Both Δhv and ρL − ρV would become zero at the critical point, and in the vicinity they are inaccurate. At least, Equation (14.3) indicates that the relief amount does not become infinite at the critical point. Far away



Figure 14.10: Sketch of a closed vessel.

from the critical point, one can assume that ρL ≫ ρV and

Q̇/ṁout = Δhv   (14.4)
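A quick way to see the size of the density correction in Equation (14.3) is to evaluate both expressions side by side. The following Python sketch does this for purely illustrative property values; the numbers are hypothetical and do not belong to any particular substance.

def q_per_m_relief(dh_v: float, rho_L: float, rho_V: float) -> float:
    """Heat input per relieved mass according to Equation (14.3) [kJ/kg]."""
    return rho_L / (rho_L - rho_V) * dh_v

dh_v, rho_L, rho_V = 300.0, 700.0, 30.0   # kJ/kg, kg/m3, kg/m3 (illustrative values)
print(f"Eq. (14.4), without correction: {dh_v:.0f} kJ/kg")
print(f"Eq. (14.3), with correction:    {q_per_m_relief(dh_v, rho_L, rho_V):.0f} kJ/kg")
# The relief flow for a given heat input is correspondingly smaller with Eq. (14.3).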

The difference between Equations (14.3) and (14.4) is that the generated vapor was liquid before and leaves some empty volume after evaporation, which will be filled by the vapor again, so that the relief amount is reduced. Far away from the critical point, Equation (14.4) is accurate; this tempts many users to apply it in every possible case. As said, this is acceptable for pure components, but it is not at all justified to apply it to mixtures. An enthalpy of vaporization can be assigned if a liquid evaporates at constant temperature and pressure. This is only the case for a pure substance or for an azeotrope, where, however, the composition changes with the pressure. If a mixture is evaporated, more low boilers than high boilers will be evaporated; thus, more high boilers remain in the liquid, and the boiling temperature rises. The heating of the liquid requires part of the energy, so less liquid will be evaporated than expected from Equation (14.4). But it is even more important to find out what is really evaporated, as vapor and liquid concentrations can differ significantly in a mixture. In [11], an example is given which shows that averaging the enthalpies of vaporization with respect to the liquid concentrations can lead to large errors. The only way to account for both the vapor concentrations and the liquid temperature increase is a flash calculation using a process simulation program. The coordinates to be specified are the pressure and the differential heat flow, and the ratio Q̇/ṁout can easily be determined.

Example
In a cylindrical vessel (D = 2.5 m, H = 5 m, height of the lower tangent line 5 m) there are 5000 kg of 1,2-dichloroethane and 5000 kg of vinyl chloride at t = 20 °C. The head of the vessel and the adjacent


piping will be neglected. The design pressure of the vessel is pDes = 8 barg, and the ambient pressure is pU = 1 bar. The vessel is exposed to a pool fire. Determine the relief amount and its state using a process simulation program.

Solution
The following steps are performed to obtain the solution:
1. Determination of the wetted area up to 25 ft and the state at the beginning: First, the volume of the vessel is determined. It is

V = H ⋅ πD²/4 = 24.54 m³

Using a process simulation program, the state in the vessel is calculated in an iterative procedure. The pressure in the vessel is repeatedly estimated, and the volumes of the vapor and the liquid phase are determined. If their sum is equal to the volume of the vessel, the pressure estimate was correct. In this case, the result at t = 20 °C is p = 2.15 bar, with VL = 9.38 m³ and VV = 15.16 m³. Thus, the liquid height inside the vessel is evaluated to be

HL = VL/(πD²/4) = 9.38 m³/4.909 m² = 1.91 m

Adding the height of the tangent line, we get HL,wetted = 6.91 m < 7.62 m. This means that the entire wetted part of the wall is influenced by the fire. The wetted area is

A = πD²/4 + πD ⋅ HL = 19.91 m²,

where the contribution of the adjacent piping is neglected.
2. Heat stream transferred by the fire: According to Equation (14.2), one gets with F = 1

Q̇H = 43.2 ⋅ 19.91^0.82 kW = 502 kW

3. State of safety valve actuation: With the process simulation program, the state just before actuation is calculated. For the calculation, it is assumed that actuation takes place at 121 % of the design pressure (fire case), i.e. at p = 1.21 ⋅ 8 barg = 9.68 barg = 10.68 bara. The vessel temperature must be found where the volume of the content is equal to the volume of the vessel at pressure p. It turns out that the actuation temperature is t = 87.99 °C. At this temperature, the liquid volume is 10.3166 m³ and the vapor volume is 14.2234 m³, which adds up to the vessel volume.7 The vapor composition is already 90.5 wt. % vinyl chloride and 9.5 wt. % 1,2-dichloroethane,8 which is far away from the liquid concentration at the beginning. The heat required, calculated with a process simulator as the difference of the internal energies, is

Q = U2 − U1 = (H2 − p2V) − (H1 − p1V) = (H2 − H1) − (p2 − p1)V

= 1.142 ⋅ 10⁶ kJ − (10.68 − 2.15) ⋅ 10⁵ Pa ⋅ 24.54 m³ = 1.121 ⋅ 10⁶ kJ ,

Footnotes:
7 The accuracy of the numbers seems to be unreasonable; however, one must keep in mind that the target is to evaluate the difference between large numbers, which is always sensitive. See the example in Section 10.1.
8 Using the γ-φ approach with NRTL and PR for the vapor phase.



meaning that the time until actuation is

τ = Q/Q̇H = 1.121 ⋅ 10⁶ kJ/502 kW = 2233 s ≈ 37 min

4. Determination of the relief amount: A differentially small amount of heat (500 kJ) is added at constant pressure, corresponding to the heat input by the fire during approx. 1 s. The volumes of vapor and liquid are now VL = 10.315 m³ and VV = 14.286 m³, giving altogether 24.60 m³. This exceeds the vessel volume by 0.061 m³. This volume, coming certainly from the vapor phase, has to be relieved through the safety valve. The properties of this stream are part of the stream report of the process simulation program. For the density of the relief stream, 26.72 kg/m³ is obtained. The relief flow can then be determined using Δτ = ΔQ/Q̇H = 500 kJ/502 kW = 0.996 s to be (Equation (14.1)):

ṁout = ρΔV/Δτ = 26.72 kg/m³ ⋅ 0.061 m³/0.996 s = 1.636 kg/s = 5891 kg/h

For comparison with a fictive enthalpy of vaporization according to Equation (14.4), the value of Q̇/ṁout is determined to be

Q̇/ṁout = 502 kW/1.636 kg/s = 307 kJ/kg

5. Averaging of the enthalpies of vaporization would have yielded a value of 281 J/g. An example with a much more drastic difference is given in [11]. Strictly, the whole calculation would have to be repeated for different points in time, as it is possible that the relief stream increases with time. For the current example, with more than half an hour until actuation, this is not really relevant. If it is, the application of dynamic simulation is strongly recommended.
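The purely geometric part of this example (steps 1 and 2) can be reproduced without a process simulator. The Python sketch below covers only the wetted area and the fire heat input; the phase equilibrium results (liquid volume at 20 °C) are taken over as the simulator values quoted above.

import math

# Step 1: geometry and wetted area (D = 2.5 m, H = 5 m, lower tangent line at 5 m)
D, H, h_tangent = 2.5, 5.0, 5.0
V_vessel = H * math.pi * D**2 / 4                        # 24.54 m3
V_liquid = 9.38                                          # m3, flash result at 20 degC (from the simulator)
h_liquid = V_liquid / (math.pi * D**2 / 4)               # 1.91 m
A_wetted = math.pi * D**2 / 4 + math.pi * D * h_liquid   # flat bottom + wetted shell = 19.91 m2
assert h_tangent + h_liquid < 7.62                       # entire wetted wall lies below the 25 ft fire height

# Step 2: fire heat input, Equation (14.2) with F = 1
Q_dot = 43.2 * A_wetted**0.82                            # approx. 502 kW
print(f"V = {V_vessel:.2f} m3, A = {A_wetted:.2f} m2, Q = {Q_dot:.0f} kW")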

A serious difficulty for this procedure is the occurrence of inert components like N2, O2, H2, etc. If inerts are dissolved in the liquid, the relief pressure is often already reached at comparably low temperatures. Then the inert will be transferred into the vapor phase, and the heat added will be mainly used for heating up the liquid. Despite specifying a differential flash, the temperature then rises significantly by several K. For the quantity Q̇/ṁout, very large values (e.g. 50 000 J/g) can be obtained, leading to low design loads for the pressure relief device. To get a reasonable procedure, one should take into account what would really happen in such a case. It will take some time until the liquid is really heated up. During this time, the safety valve will actuate at the calculated Q̇/ṁout, giving a very low relief amount. Once the inerts are removed, normal values will be obtained again. In principle, the standard procedure (Section 14.2.2) works, but item 3, the tracking of the blowdown process with time, becomes even more important. The following procedure is pragmatic for the design of the safety valves: one could leave out the inert components and pretend that they were already blown off. It can be

justified by the fact that gas solubility equilibria are reached very slowly. In a process, one cannot assure that the equilibria calculated in the downstream steps have really been reached and that the light gases are really dissolved in the mixture, not to mention the uncertainty of the mixing rules for the Henry coefficient (Chapter 2.3.1). The safety valve must not be designed in a way that the assumed inert gas concentration is decisive for the evaluation of the relief flow.

14.2.4 Actuation cases

There are a number of reasons why the pressure in a piece of equipment can exceed the corresponding design pressure [217]. The following itemization can be used as a checklist for each safety valve, where it is decided whether the particular item is relevant or can be ruled out.
1. Fire case (Chapter 14.2.3)
2. Blocked discharge line: If the outlet of a vessel is blocked while the feed is still in operation, substance accumulates in the vessel, which finally leads to a pressure buildup. The most frequent case is the filling of liquid into a blocked vessel with a pump. If the pump can build up a pressure higher than the design pressure of the vessel according to its characteristic curve and if other safety measures are already exploited (e.g. minimum bypass, Section 8.1), a pressure buildup will be the consequence. The relief device must then be designed so that the feed according to the pump characteristic can be safely removed. It is always useful to calculate how much time is necessary before an actuation case is really created; often, this time is unrealistically long. Furthermore, one should take into account that the operator probably gets an alarm after the liquid level range is exceeded.
3. Thermal expansion of the vessel content: Thermal expansion can become a safety issue when liquids are blocked in a closed vessel or a pipe. When there is no gas blanket, meaning that the closed volume is completely filled with liquid, large pressures are built up when the temperature of the liquid increases even slightly. For example, consider a liquid volume of water at t = 20 °C, p = 1 bar. During the heating, the density remains constant at ρ = 998.21 kg/m³, assuming that the thermal expansion of the vessel is negligible. A temperature increase of just 5 K to t1 = 25 °C at constant density gives a pressure of p2 = 26.8 bar, which may exceed the design pressure by far (see the calculation sketch after this list). Therefore, all volumes or pieces of pipe that can be blocked must be protected with a safety valve which can release part of the liquid. It is called a thermal expansion valve. Usually, a design calculation is not necessary; the smallest safety valve should be sufficient. For gases, thermal expansion is more gradual, so that extreme pressure buildups do not happen. Often, the relief stream is led to the flare, so it is interesting


to quantify it. For the design, it is also likely that the smallest safety valve is sufficient. A frequently occurring case is the thermal expansion of a gas in a vessel which is exposed to the sun radiation. Usually, the maximum solar radiation is given in the design basis. If not, the solar constant (S = 1370 W/m2 ) can be taken, and the maximum area which the sunlight can hit can be considered.9 Usually, even this by far conservative assumption will certainly lead to the selection of the smallest safety valve. If not, drop the conservative assumptions one by one:
– introduce an absorption coefficient “a” for solar radiation (normally a < 0.6);
– consider heat removal to ambient air by free convection (α = 4–5 W/(m2 K));
– consider radiation exchange with environment.
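The blocked-in liquid case from item 3 above can be checked quickly with a property library. The following minimal sketch (assuming the CoolProp package as the property source; any accurate steam table works equally well) reproduces the order of magnitude of the water example, i. e. heating at constant density from 20 °C to 25 °C:

```python
# Sketch: pressure buildup of blocked-in liquid water heated at constant density.
# Assumption: CoolProp is installed; the vessel is rigid (no thermal expansion of the shell).
from CoolProp.CoolProp import PropsSI

T1, p1 = 293.15, 1e5                               # 20 °C, 1 bar
rho = PropsSI('D', 'T', T1, 'P', p1, 'Water')      # approx. 998.2 kg/m3

T2 = 298.15                                        # 25 °C, same density
p2 = PropsSI('P', 'T', T2, 'D', rho, 'Water')

print(f"rho = {rho:.2f} kg/m3, p2 = {p2/1e5:.1f} bar")   # in the range of the 26.8 bar quoted above
```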

Example A gas storage vessel in form of a sphere (D = 10 m) is filled with natural gas (for simplicity: methane). During normal operation, it is operated at t = 50 °C and p = 100 bar. For the design of the safety valve, it is assumed that the vessel is blocked and exposed to sun radiation. In the design basis, the maximum sun radiation is specified to be Smax = 800 W/m2 . To be by far conservative, the heat removal to the environment by convection and radiation shall be neglected.10 The design pressure of the vessel is pDes = 110 bar. Calculate how long it will take until the safety valve actuates and the mass flow to be discharged.

Solution
First, the volume of the sphere is calculated to be

V = πD³/6 = 523.599 m3

The projection of the surface exposed to the sun radiation is

A = πD²/4 = 78.5 m2 ,

giving the heat flux

Q̇ = Smax A = 800 W/m2 ⋅ 78.5 m2 = 62.8 kW

With a high-precision equation of state [29], the mass of the content in the vessel and the heat necessary for reaching the actuation state can be calculated to be

m = ρ(50 °C, 100 bar) ⋅ V = 66.596 kg/m3 ⋅ 523.599 m3 = 34870 kg

9 Only the projection area perpendicular to the sun radiation is relevant. 10 which is not justified if a realistic value is targeted.


and

Q = m ⋅ [u(110 bar, 66.596 kg/m3 ) − u(100 bar, 66.596 kg/m3 )] = 34870 kg ⋅ (−135.846 J/g − (−178.289 J/g)) = 1480 MJ ,

as the density during the heating phase remains constant. It takes at least τ = Q/Q̇ = 1480 MJ/62.8 kW = 23567 s = 6.55 h until the safety valve actuates. The temperature at actuation is t1 = 72.7 °C. The safety valve is fully open at p2 = 121 bar. In the following, the calculation assumes that the valve opens fully and immediately after reaching p2 . The heat flux during the first 100 seconds after actuation is regarded; as the safety valve is open, the procedure is isobaric, and the specific enthalpy is decisive for the energy balance. The relatively long time is chosen to make sure that the differences between the two states are significant. The gas expands to the new density ρ100 s due to the heat flux:

Q100 s = m ⋅ [h(121 bar, ρ100 s ) − h(121 bar, 66.596 kg/m3 )]
62.8 kW ⋅ 100 s = 34870 kg ⋅ (h(121 bar, ρ100 s ) − 94.367 J/g) ,

giving h(121 bar, ρ100 s ) = 94.547 J/g. The corresponding density can be evaluated to be ρ100 s = 66.581 kg/m3 , giving a volume after the 100 s of

V100 s = m/ρ100 s = 34870 kg/66.581 kg/m3 = 523.720 m3

To maintain the pressure, the difference of the new volume and the volume of the vessel must be relieved through the safety valve:

ΔV = V100 s − V = 523.720 m3 − 523.599 m3 = 0.121 m3 ,

giving the relief amount

Δm = ΔV ⋅ ρ100 s = 0.121 m3 ⋅ 66.581 kg/m3 = 8.07 kg ,

which corresponds to a very low mass flow to be discharged of

ṁ discharge = Δm/100 s = 8.07 kg/100 s = 290.5 kg/h
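A rough cross-check of this example can be scripted with a property library. The sketch below uses CoolProp as a stand-in for the high-precision equation of state [29] (an assumption); the absolute internal energies differ by the reference state, but only their difference enters the balance.

```python
# Sketch: blocked gas sphere under solar radiation - time until actuation (cf. example above).
from math import pi
from CoolProp.CoolProp import PropsSI

D, S_max = 10.0, 800.0                              # sphere diameter in m, solar radiation in W/m2
V = pi * D**3 / 6                                   # vessel volume, m3
Qdot = S_max * pi * D**2 / 4                        # absorbed heat flux on the projected area, W

T1, p1, p_act = 323.15, 100e5, 110e5                # 50 °C, 100 bar operating, 110 bar set pressure
rho = PropsSI('D', 'T', T1, 'P', p1, 'Methane')     # density stays constant (rigid, blocked vessel)
m = rho * V

u1 = PropsSI('U', 'T', T1, 'P', p1, 'Methane')      # internal energy at the start
u2 = PropsSI('U', 'P', p_act, 'D', rho, 'Methane')  # state at actuation: same density, 110 bar
tau = m * (u2 - u1) / Qdot                          # isochoric heating time, s

print(f"m = {m:.0f} kg, time to actuation = {tau/3600:.1f} h")   # order of 6-7 h, as above
```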

4. Chemical reaction: Runaway chemical reactions are undoubtedly the most serious actuation cases that can occur. Usually, safety valves do not react fast enough, and the pressure relief is performed with rupture discs. One of the best known actuation cases was the so-called Carnival Monday accident (Chapter 14). The consideration of chemical reactions for pressure relief requires at least a rough knowledge about the reaction kinetics and the heat removal from the vessel where the reaction takes place. An increase of the conversion of an exothermic reaction develops more heat, which in turn increases the temperature, and in a vicious circle the reaction kinetics are further accelerated. As discussed in Chapter 10.2, for runaway reactions the heat removal from the reactor has to be examined.


5. Tube rupture in a heat exchanger: In a shell-and-tube heat exchanger, shell side and tube side might have different design pressures. In case of a tube rupture, the side with the lower design pressure will be exposed to the higher pressure from the other side. The applied design code determines whether such an event must be regarded as a pressure relief case or not. The latest ASME code requires that equipment and piping are tested at 130 % of the design pressure. Thus, if the lower design pressure is less than 10/13 of the higher one, tube rupture must be considered as a relief case. Previous revisions of the ASME code required a test pressure of 150 % of the design pressure; for these cases, the 2/3 rule was the criterion. In Chapter 11, it was pointed out that a tube rupture will happen in longitudinal direction (Figure 14.11) or at locations which are weakened anyway, like welding seams. Through the hole formed, substance passes from the high pressure to the low pressure side and might cause a relief case there. However, the size of the tube rupture hole is undefined and cannot be predicted at all. The following consideration leads to a scenario which is conservative. The substance passing to the low-pressure side must pass the two circular cross-flow areas at the ends of the tube as well. Thus, these cross-flow areas can be regarded as the critical ones. If the hole created by the tube rupture is smaller, it is conservative, and if it is even larger, the cross-flow areas are in fact the critical ones.

Figure 14.11: Tube rupture in a steam reformer tube [218]. Courtesy of IfW Essen GmbH.

In fact, it is very unlikely that a hole of this size develops. Most damages are caused by incipient cracks at the welding seams of the top plates where the tubes are fixed. Even there, a complete demolition is not probable [255]. In chemical and petrochemical industry, relatively small cracks have been observed in the past. Therefore, it has become common practice that smaller cross-flow areas are accepted. A usual approach is to consider an equivalent leakage hole diameter of 5 mm, corresponding to a cross-flow area of approx. 20 mm2 [262]. The calculation of the relief streams is described in detail in [219], using the ω-method [220, 221]. The most dramatic pressure relief case is generated if the tube rupture takes place between a gas at high pressure and a liquid at low pressure. The critical flow through the cross-flow areas is a gas flow which corresponds to a large volume flow due to the low density of gases. After having passed the cross-flow areas, the gas expands to the low pressure and further increases its volume. To maintain the pressure on the low-pressure side, an equivalent liquid volume must be removed through the pressure relief device, which in turn corresponds to a huge mass flow due to the high densities of liquids. For this case, the calculation is comparably easy.

Example
In a shell-and-tube heat exchanger there is nitrogen (tmax = 100 °C) in the tubes (d = 1″) and cooling water (tmin = 30 °C) on the shell side. The design pressures are pDes,1 = 50 bar on the tube side and pDes,2 = 5 bar on the shell side. Determine the necessary relief amount for the safety valve on the shell side.

Solution
At tube rupture, the two circular cross-flow areas at the ends of the broken tube form the narrowest cross-flow area, where the nominal diameter of the tubes is taken as the approximate inner diameter of the tubes:

Atube rupture = 2 ⋅ (π/4) d² = 2 ⋅ (π/4) ⋅ (25.4 mm)² = 1013 mm2

Assuming design pressure and tmax on the nitrogen side, the maximum mass flow can be determined using the algorithm described in Chapter 14.2.6 (Figure 14.16) to ṁ = 37750 kg/h. This stream is expanded to p = 5 bar on the shell side. The temperature after the expansion is calculated via the adiabatic condition

hN2 ,tube = hN2 ,shell

The heat transfer due to the direct contact to the liquid is neglected, as the relief with the maximum load is considered to be very fast. The result is tN2 ,shell = 94.9 °C, with the corresponding density of


ρN2 ,shell = 4.57 kg/m3 . The volume flow is then determined to

V̇ = ṁ /ρ = (37750 kg/h)/(4.57 kg/m3 ) = 8260.4 m3 /h

This volume flow will flow onto the shell side and try to displace the cooling water. Neglecting the small compressibility of the water, this volume flow must be removed from the shell side, otherwise the nitrogen would build up a pressure which rapidly exceeds the design pressure of the shell side. As the cooling water is probably next to the safety valve on the shell side, this volume will in the first moment be displaced as cooling water, with a density of ρ(30 °C, 5 bar) = 995.83 kg/m3 [29]. The resulting water mass flow is then

ṁ = V̇ ⋅ ρ = 8260.4 m3 /h ⋅ 995.83 kg/m3 = 8225948 kg/h

Assuming a standard leakage hole of 20 mm2 , the result will be approx. 162400 kg/h, which is much easier to digest.
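The volume-displacement logic of this example can be reproduced in a few lines. In the sketch below (again CoolProp-based, an assumption), the critical gas flow through the rupture area is taken as given from the text and converted into the water flow the shell-side safety valve has to handle:

```python
# Sketch: cooling water displaced by nitrogen entering the shell side after a tube rupture.
from CoolProp.CoolProp import PropsSI

mdot_gas = 37750.0 / 3600.0                               # kg/s, critical flow from the text
p_shell = 5e5                                             # shell-side relief pressure, Pa

# adiabatic (isenthalpic) expansion of the nitrogen from 50 bar / 100 °C to the shell side
h_tube = PropsSI('H', 'T', 373.15, 'P', 50e5, 'Nitrogen')
rho_gas = PropsSI('D', 'H', h_tube, 'P', p_shell, 'Nitrogen')   # approx. 4.6 kg/m3

Vdot = mdot_gas / rho_gas                                 # gas volume flow to be compensated, m3/s
rho_water = PropsSI('D', 'T', 303.15, 'P', p_shell, 'Water')
mdot_water = Vdot * rho_water * 3600.0                    # displaced cooling water, kg/h

print(f"displaced water flow: {mdot_water:.3e} kg/h")     # order of 8e6 kg/h, as in the text
```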

6. Abnormal heat input: If the heating agent in a heat exchanger is not or no longer flow-limited, large temperatures and, subsequently, large pressures on the product side can be the consequence. A well-known case is the full opening of the steam control valve so that the largest possible steam flow enters the reboiler of a distillation column (see “control valve failure” below). A simplified assumption like “steam flow is proportional to heat input” is not adequate. At relief pressure, one must take into account that the temperatures on the product side both in the reboiler and in the condenser are higher than in the design case. The scenario must be evaluated in an iterative procedure. The column has to be recalculated in the process simulator at relief pressure with estimated reboiler and condenser duties. Then, the duties have to be verified in the heat exchanger design program, taking into account that the driving temperature difference is lower in the reboiler and larger in the condenser. To be conservative, it must be additionally regarded that there is no fouling in the reboiler for maximum heat input and a fully developed fouling layer in the condenser to minimize the heat removal, as the relief case can occur just after the reboiler has been cleaned, while the condenser has been left as it was. The procedure has to be repeated until the estimated duties in the process simulation are in line with the ones obtained in the heat exchanger design program. The procedure can be simplified if the heat transfer coefficient is supposed to be constant.11
7. Cooling system failure: A deficit or a full breakdown of the cooling system can cause pressure buildup, similar to the scenario “abnormal heat input” described above. For columns,12

11 In this case, the clear separation of process simulation and equipment design (Section 3.5) is in fact not advantageous. 12 For distillation columns, “cooling system failure” can also mean that the overhead line is blocked.

an analogous calculation must be performed, taking into account that the reflux of the column is also affected, as soon as the storage amount in the reflux drum has been consumed. For this purpose, the application of dynamic simulation is desirable [214].
8. Power failure: The power failure is the third scenario which is interesting especially for distillation columns. The scenario is often weaker than the “cooling system failure”, as not only the cooling but also the feed pumps and, in some cases, the supply for the heating agent fail. It has to be clearly pointed out in the scenario which electrically driven pieces of equipment are affected.
9. Control valve failure: A failure of a control valve is presumed, where the control valve does not reach a failsafe closed position (Section 12.3.2) but remains fully open. According to the pressures involved, more flow than usual will enter the piece of equipment which is to be protected. During basic engineering, the control valves have not been specified so far so that a preliminary solution must give an idea about the pressure relief case. As long as no better knowledge is available, the assumption that the maximum flow through the valve is twice as large as the maximum flow specified might be useful. During detailed engineering, this assumption should be replaced by actual information about the valve. Another option is to place a restriction orifice in the line of the control valve. A rule of thumb is to design it for 130–150 % of the maximum process flow. It should be placed at least 20× tube diameter upstream the control valve [255].
10. Pump failure: Similar to the control valve failure, it is assumed that the pump flow control is out of order. According to the pump characteristics, it has to be found out which maximum flow can be conveyed by the pump at relief conditions when the pressure on the discharge side is higher. If the pump has not been specified, some simplifying assumptions have to be used. Again, a revision during detailed engineering should take place.
All these scenarios are not necessarily independent of each other. If two of them occur at the same time, it must be distinguished whether one is the consequence of the other or whether both are assumed to occur together by coincidence (“double jeopardy”). For the latter case, the probability is generally too low; unrelated failures are not assumed to occur simultaneously. If they are nevertheless considered, it might lead to an unreasonably high capacity of the flare system. Control devices or interlocks as substitutes for pressure relief devices are in general not allowed, unless they have a SIL classification. If design cases with significantly different relief amounts occur, it might be an option to place different safety valves for the different cases in parallel. The actuation


pressures must be chosen in a way that the smaller the relief case, the lower the actuation pressure is chosen. For example, for two different actuation cases and a design pressure of 6 barg the first safety valve could actuate at 5.9 barg without bothering the one for the larger relief case, which actuates at 6 barg.

14.2.5 Safety valve peculiarities

In general, things become even more confusing if the pressure relief does not take place in a vessel with a well-defined content but in a column with a concentration profile across the apparatus. To start with this topic, the column is closed during the pressure relief at first, meaning that all inlet and outlet streams are blocked. This might happen during the fire case, when the entire column is blocked in by quick-acting shutoff valves. The formalism for closed vessels as described above can then be applied for the column; however, in the column there are various liquid reservoirs as holdups on the particular separation stages, each of them having a different temperature and composition and being more or less in equilibrium with its own vapor phase. The simple method to neglect these holdups and consider only the bottom reservoir might be useful for packed columns with a small holdup and a large bottom area, but in case of larger holdups it might lead to absurd results, for instance that no light ends are relieved due to their removal in the stripping area, which is usually the purpose of the column. This is often coupled with a wrong specification of the safety valves. Getting rid of this problem by taking the concentration of the reflux, which has the highest concentration of light ends, consequently leads to an overdesign. Many of these shortcut approaches are in use and give results, but they are neither correct nor even useful. If some holdup remains on the stages, the column will perform as a distillation column at pressure relief. When heavy ends evaporate and go upwards, they will tend to get into equilibrium on the upper stages, where they condense and evaporate light ends. As light ends tend to have a lower molecular weight and a lower molar heat of vaporization, it might happen that a larger volume flow has to be relieved through the safety valve than it was generated at the bottom. However, there are some types of columns where the assumption that the rectification effect can be neglected is justified.
– Sieve trays: When a sieve tray column is blocked, the liquid flow from above caused by the reflux is stopped. The pressure profile will break down, as there is no directed flow any more. As the vapor generated can no longer leave at the top, the pressure rises in the whole column. The trays will be emptied one by one, starting from the top. This will take some time. If all trays are emptied before actuation, the column can actually be treated like a vessel. However, to make sure that this presumption is fulfilled, a separate consideration has to be made. If it is reasonable, the final composition of the liquid in the bottom can be determined by adding






up all the holdups on the trays with their particular concentration. The amount of the holdup can be estimated using the weir height,13 while their compositions are available from the thermodynamic modeling of the column. Also, the vapor holdups can be added up. The vapor and the liquid obtained in this way are certainly not in equilibrium, but they can be used as a starting point for simulating the further pressure buildup of the column (Section 14.2.3). Random and structured packings: Packed columns are emptied after the reflux and the feed are blocked; the entire holdup of the column, which is comparably small (< 5 %), goes down to the bottom. The considerations discussed for the sieve tray column above hold for packed columns as well. The holdups can be evaluated by the hydrodynamic models (Engel [103], Billet/Schultes [104]). Furthermore, it is important that the holdups in the collectors and distributors are taken into account. Bubble cap and valve tray columns: Bubble cap trays are more difficult to assess. After reflux and feeds are blocked, the trays do not completely drain off. Depending on the construction details of the bubble caps, the vapor will at least partly pass the liquid on the trays and performs a heat and mass transfer, so that a rectification effect will occur. On valve trays, the valves close the holes completely after the pressure profile has been equalized. The tray will be filled with liquid up to the weir height. To reach pressure equilibrium when the column content is heated up, the valves will partly open, and through the holes part of the liquid will drop down onto the next tray. It cannot be said whether the trays are completely emptied; this depends on the whole valve construction. When the safety valve actuates, the vapor will probably pass at least a small liquid layer, and a certain mass transfer giving a rectification effect will result.

To summarize: even for the comparably easy case of a column which is fully blocked there are a number of assumptions involved which cannot be clarified just by setting up a plausible scenario. The favorite tool is the dynamic process simulation, which can quantify the assumptions made above for the particular cases. There is hardly any chance to calculate scenarios where feed, reflux or product stream are still active without dynamic simulation. Some further considerations about dynamic simulation for the pressure relief are explained in [214, 222]. Especially for columns, but also in other cases there is another important aspect. For an oversized safety valve, the pressure relief will achieve its target to reduce the pressure in the piece of equipment to be protected. On the other hand, the resulting stream will exert a larger load on the equipment and the piping. A larger relief stream in a column might destroy the packing or the tray fixings. For example, tray fixings can 13 Reasonable assumption: clear liquid height on the tray up to 120 % of the weir height.


withstand a tray pressure drop of approx. 20 mbar; therefore, it is quite probable that they will be damaged during pressure relief.14 Oversizing a safety valve can also lead to severe oscillations. After reaching the actuation pressure, the safety valve opens and relieves an amount which cannot be further delivered by the process (dischargeable mass flow, see below). Therefore, the pressure drops down and the safety valve closes. Then, the pressure rises again and so on, giving oscillations and hammering of the safety valve. In the worst case, the welding seam at the inlet flange can break, followed by an uncontrolled release of the process medium into the environment [255]. The only measure against these oscillations is to design the safety valve “correctly”. But this is easier said than done. For the design, worst case scenarios are defined as actuation cases, giving some safety margin to reality anyway. From that point on, several other safety margins are added for the final design of the safety valve. Often, the vendor is involved in the design process, claiming their own safety margins. Finally, the pressure relief devices often become too large, sometimes far away from reality. Many safety valves only work by coincidence rather than by design. And hopefully, they will never actuate. (Robert Angler)

Other points caused by oversizing of the safety valve are that the flare loads can be higher than expected and that the repulsive forces on the safety valve and the adjacent piping are underestimated, which is illustrated in Figure 14.12. During the pressure relief, the relief flow enters the safety valve from below and leaves it to the right hand side. The momentum balances for the x- and the y-direction indicate that the reaction forces Rx and Ry are induced, giving the resulting reaction force R by vector addition. In Figure 14.12, some exemplary numbers are set. The result for the repulsive force is 21 kN, equivalent to a weight of approx. 2.1 t. It is clear that pipe engineers must be aware of the forces exerted on the pipes and their fixings. An underestimation of the flows involved could lead to possible mechanical damage during pressure relief. There are many misunderstandings concerning the expression “conservative assumption”. Frequently, this term is used as a phrase for a “simplifying assumption” [223], which has often hardly anything to do with being “conservative”. Simplifying assumptions are necessary to save time, but they should be applied in a consistent way. For example, just taking the overhead flow of a column during normal operation for the pressure relief stream is simplifying. It is neither ensured that it is larger than the correct relief stream nor that it is at least in the correct order of magnitude. This assumption cannot replace an adequate simulation of the pressure relief. In contrast, taking the normal flowrate of a pump in an overfilling relief case is conservative, as the pump flow will in fact be reduced according to the pump characteristics due to the 14 It is possible to provide stronger tray fixings; however, in case of an explosion the trays should be destroyed to protect the shell.


Figure 14.12: Illustration of repulsive forces. © Rasi57/Wikimedia Commons/CC BY-SA 3.0. https://creativecommons.org/licenses/by-sa/3.0/deed.en.

elevated back pressure. On the other hand, the value obtained with this assumption can be by far too large, and it might make sense to apply a reasonable estimate of the real relief flowrate. A remark should be documented with regard to making sure that this estimate is checked as soon as the characteristics of the pump are known. A distinction must be made between the mass flow to be discharged and the dischargeable mass flow. From the process calculation of the particular scenarios, one gets the mass flow to be discharged, i. e. the stream to be relieved to maintain the pressure in the apparatus at an acceptable level. However, pressure relief devices like safety valves are produced with defined sizes. One cannot choose a safety valve which fits exactly the calculated relief stream but the next one in the list where the certified capacity is sufficient. This means that the relief stream obtained with this safety valve could be higher than requested. This is the dischargeable mass flow. It must be noted that according to most of the standards all pressure drop calculations (inlet line, outlet line) must be based on this dischargeable mass flow, regardless of whether this is possible from the process point of view [224]. According to a rule of thumb, it is reasonable if the dischargeable mass flow is about 10–20 % above the mass flow to be discharged. More overdesign leads to chattering in the actuation case.

14.2.6 Maximum relief amount

Having determined the mass flow to be discharged, the design of the pressure relief device can take place, i. e. mainly the choice of the opening area at actuation. This area


must be large enough to let the mass flow to be discharged pass in order to maintain the pressure. The maximum relief amount through an opening area is determined by the critical flow phenomenon, which must be thoroughly understood.

Figure 14.13: Mass flux density through an orifice as a function of the outlet pressure. Calculated for nitrogen, p1 = 200 bar, t1 = 20 °C.

The open cross-flow area can be generally treated as an orifice. Figure 14.13 shows the relationship between the mass flux density through an orifice and the outlet pressure. Consider a pressure vessel at p1 = 200 bar, which has to be relieved through an orifice to an environment at a pressure p2 . If p2 = p1 , there is no driving force for a flow and the mass flux density will be zero. The mass flux density increases when the pressure p2 is lowered. However, at a certain level of p2 (rule of thumb: ≈ 0.5 p1 ) the mass flux density starts decreasing in the calculation (dashed line in Figure 14.13). It can be shown with the Second Law that a further acceleration is not possible. In fact, the mass flux density stays constant. This value is called the critical mass flux density. Its existence does not depend on the state: it can be liquid, vapor, or two-phase. However, it is most important and significant for vapor flow, where it can be shown that the speed of sound will occur in the narrowest cross-flow area [11]. It is a common misunderstanding that the calculation of the pressure relief is equal to the one for an adiabatic throttling (Figure 14.14). The well-known condition for adiabatic throttling h1 = h2

(14.5)

is correct, but it refers to states 1 and 2 far away from the narrowest cross-flow area, where the velocities are relatively small or at least in the same order of magnitude so that the kinetic energy term can be neglected. h1 and h2 are not equal to the enthalpy


Figure 14.14: Throttle valve.

directly in the orifice hc , as the velocity is much larger there and should be taken into account in the energy balance:

h1 + w1²/2 = hc + wc²/2   (14.6)

Equation (14.6), the continuity equation, the isentropic change of state, the speed of sound and the equation of state are used to derive a procedure for the determination of the maximum mass flux density. For an ideal gas, we get [11]

ṁ c /A = √{(2 p1 /v1 ) ⋅ κ/(κ − 1) ⋅ [(pc /p1 )^(2/κ) − (pc /p1 )^((κ+1)/κ)]}   (14.7)

with

pc /p1 = (2/(κ + 1))^(κ/(κ−1))   (14.8)

Figure 14.15: Flow pattern through an orifice. Courtesy of Dana Saas.

Equation (14.7) should be supplemented by a factor KD , which considers the fact that the cross-flow area directly in the orifice is not necessarily the narrowest cross-flow area. Instead, the typical flow pattern is that the flow is more constricted downstream the orifice (Figure 14.15), and the effective cross-flow area is smaller than the orifice itself. The lack of knowledge is summarized in the KD value, which is usually in the range15 0.6 < KD < 0.85:

ṁ c /A = KD √{(2 p1 /v1 ) ⋅ κ/(κ − 1) ⋅ [(pc /p1 )^(2/κ) − (pc /p1 )^((κ+1)/κ)]}   (14.9)

15 If nothing else is known, one can use KD = 0.65 for liquids and KD = 0.8 for gases.
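Equations (14.7)–(14.9) translate directly into a short function. The sketch below computes the maximum mass flux density for an ideal gas and picks the next sufficient orifice from a standard row; the letter orifice areas are typical API 526 catalog values and are used here only for illustration (an assumption — always check the vendor data and the certified capacities).

```python
# Sketch: ideal-gas critical mass flux density, Eqs. (14.7)-(14.9), and orifice preselection.
from math import sqrt

R = 8.314  # universal gas constant, J/(mol K)

def critical_mass_flux(p1, T1, M, kappa, KD=0.8):
    """Maximum mass flux density in kg/(m2 s); p1 in Pa, T1 in K, M in kg/mol."""
    v1 = R * T1 / (M * p1)                                   # ideal-gas specific volume, m3/kg
    pr = (2.0 / (kappa + 1.0)) ** (kappa / (kappa - 1.0))    # pc/p1, Eq. (14.8)
    term = pr ** (2.0 / kappa) - pr ** ((kappa + 1.0) / kappa)
    return KD * sqrt(2.0 * p1 / v1 * kappa / (kappa - 1.0) * term)   # Eq. (14.9)

# Typical API 526 letter orifices, areas in mm2 (illustrative values only)
ORIFICES = [('D', 71), ('E', 126), ('F', 198), ('G', 324), ('H', 506), ('J', 830),
            ('K', 1186), ('L', 1841), ('M', 2323), ('N', 2800), ('P', 4116),
            ('Q', 7129), ('R', 10323), ('T', 16774)]

# Example: nitrogen at 200 bar / 20 °C (cf. Figure 14.13), 10 kg/s to be discharged
G = critical_mass_flux(200e5, 293.15, 0.028, 1.4)
A_req = 10.0 / G * 1e6                                       # required cross-flow area, mm2
letter, area = next((l, a) for l, a in ORIFICES if a >= A_req)
print(f"G = {G:.0f} kg/(m2 s), required area {A_req:.0f} mm2 -> orifice {letter} ({area} mm2)")
```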


Having calculated the mass flux density, one can easily get the necessary cross-flow area for the safety valve. Then, from the standard row of safety valves, the appropriate one can be chosen which just covers the necessary cross-flow area. The vendor of a safety valve or the piping element should be able to deliver more details about KD , otherwise some advice is given in [228]. Equation (14.9) is sometimes given in a way that it can barely be recognized, and furthermore, different results are obtained [229]. So far, it could always be shown that the equations and procedures are equivalent to Equation (14.9). The differences in the results occur due to different default values for KD . For a real gas, it is often considered to use the real specific volume for v1 in Equation (14.7) instead of the ideal gas one v1 = RT1 /p1 , and furthermore, the real values for cp and cv are used to determine κ as κ = cp /cv , which often leads to unreasonable values. This is contradictory to the thermodynamic derivation and leads to an inconsistent result. For example, at t = 50 °C, p = 100 bar the ratio for ethylene is cp /cv = 3.1, whereas the largest theoretically possible value is κ = 1.67 for a monatomic ideal gas (He, Ne, Ar, …). It is not possible for a real gas to assign a κ which fulfills both the ideal gas condition for an isentropic expansion

T2 /T1 = (p2 /p1 )^((κ−1)/κ)   (14.10)

and the equation for the speed of sound of an ideal gas

w∗ = √(κRT)   (14.11)

For the above-mentioned condition, the κ for the isentropic expansion which yields the correct value would be κ ≈ 1.2, whereas κ = 0.998 would be required to get the correct speed of sound. Both values are obviously different and far away from the ratio cp /cv . κ < 1 is even impossible, as cp is always larger than cv . In fact, one is often lucky in using Equation (14.7), as long as the pressures are not too high. In [230] a number of examples have been compared. The result was that Equation (14.7) is in fact a good approximation of the exact solution. The maximum mass flux density and, correspondingly, the necessary free cross-flow area are often met quite well. However, it is also pointed out that temperatures and pressures along the line cannot be reproduced, leading to possibly inadequate design conditions. When applying Equation (14.7), one should take care that κ is in a reasonable range (1 < κ < 1.67), otherwise, something might go wrong. At least, one should check how sensitive the results are to variations of κ. Often the dependence is not strong. If one is in doubt, one should prefer the real gas calculation [11]. For example, in the LDPE process16 at p = 3000 bar, where the 16 LDPE–low density polyethylene.

density looks more like a liquid than like a gas density, the ideal gas calculation (Equation (14.7)) does not make sense anymore. For a correct consideration of real gas behavior, an analogous procedure has been set up in [11]. It is not a formula, as the equation of state is not standardized, but an iterative procedure. The rule of thumb that the critical pressure p2 is approx. 50 % of the inlet pressure p1 is no longer valid in these cases. The calculation scheme is listed in Figure 14.16.

Figure 14.16: Calculation procedure for the mass flux density of a real fluid.
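The idea behind such an iterative real-gas procedure can be sketched as follows: along the isentrope from the stagnation state, the mass flux density ρ(p) ⋅ w(p) is evaluated for a range of throat pressures and the maximum is taken. This is a simplified illustration only, not a reproduction of the scheme in Figure 14.16 or [11]; CoolProp is assumed as the equation of state.

```python
# Sketch: maximum mass flux density of a real fluid by scanning the throat pressure
# along the isentrope from the stagnation state (simplified, homogeneous equilibrium).
import numpy as np
from CoolProp.CoolProp import PropsSI

def real_gas_max_flux(fluid, p1, T1, p_back, n=400):
    h1 = PropsSI('H', 'P', p1, 'T', T1, fluid)
    s1 = PropsSI('S', 'P', p1, 'T', T1, fluid)
    G_max, p_crit = 0.0, p_back
    for p in np.linspace(p_back, 0.999 * p1, n):        # candidate throat pressures
        h = PropsSI('H', 'P', p, 'S', s1, fluid)        # isentropic expansion to p
        rho = PropsSI('D', 'P', p, 'S', s1, fluid)
        w = np.sqrt(max(2.0 * (h1 - h), 0.0))           # velocity from the energy balance
        if rho * w > G_max:
            G_max, p_crit = rho * w, p
    return G_max, p_crit      # kg/(m2 s) and Pa; multiply G_max by KD for a design value

G, pc = real_gas_max_flux('Nitrogen', 200e5, 293.15, 1e5)
print(f"G_max = {G:.0f} kg/(m2 s) at p_c = {pc/1e5:.1f} bar")
```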

For the design of a safety valve or a rupture disc, the inlet line, the pressure relief device, and the outlet line are a unit which should be designed together in one step. The obvious advantage is that the properties of the relief stream must be entered only once. Figure 14.17 illustrates the requirements which the calculation has to fulfill, with the simplified assumption of a gas flow.

Figure 14.17: Inlet line, safety valve and outlet line as a unit.


First, a safety valve is considered as pressure relief device, simplified as an orifice in the center of the drawing. From the vessel with the relief pressure p0 the relief stream enters the inlet line on the left hand side. The pressure drop of this line, which is usually short (1–2 m), can easily be calculated with the conventional formula Equation (12.1). The requirement is that this pressure drop shall be below 3 % of the driving pressure difference17 (p0 − pU ), where pU is the pressure of the destination of the relief stream (usually flare or environment if possible). The safety valve itself cannot be subject of a conventional pressure drop calculation, e. g. according to Equation (12.18). Instead, what we know is that the maximum mass flux density and the critical pressure are reached in the narrowest cross-flow area, and for a vapor flow we get the speed of sound. According to the design guidelines, the pressure drop in the outlet line shall not exceed a certain value, e. g. 10 % of the total pressure difference between the vessel and the lower-pressure location.18 The pressure drops of both outlet and inlet line must be calculated with the dischargeable mass flow through the safety valve, not with the mass flow to be discharged. The whole procedure for the dimensioning of the outlet pipe has been described in Chapter 12.1.3. As the thermodynamic state of the fluid may vary considerably along the line, the outlet pipe is divided into increments so that the pressure drop of an increment can be determined with the updated state variables. In an iterative procedure the pressure after expansion downstream the safety valve is estimated, and the outlet state of the pipe is calculated, until the estimated pressure after the safety valve yields the speed of sound at the outlet or expands to environmental pressure. If the pressure drop exceeds the 10 %, the outlet line diameter must be increased. For rupture discs, there is no limitation like this, and in most cases the limitation is the speed of sound at the pipe outlet. The pressure drop can again not be evaluated by a single pressure drop calculation, as the state of the relieved fluid varies significantly along the pipe. Instead, an increment-wise calculation must take place. This is especially important for compressible gas flows and flashing fluids, where the pressure drop causes further evaporation. The procedure has been described in Chapter 12.1.3 and can be transferred to the two-phase flow pressure drop as well. In contrast to the inlet line, the course of the outlet line can only be guessed during basic engineering, as the locations of the particular pieces of equipment and the tie-in points to the flare system are not known, not to mention the number and kinds of the bends. There must always be a note that the outlet lines have to be updated in detailed engineering. Some remarks should be given about the pressure drop of the inlet line. According to the established guidelines, the pressure drop in a safety valve inlet line should not exceed 3 % of the actuation pressure, referring to the dischargeable mass flow [231]. Al17 The requirement can vary slightly, according to the guideline used. 18 10–15 %; in the following, 10 % are used for simplicity.


Figure 14.18: Valve lift and pressure at safety valve inlet flange as a function of time for 3 % (blue) and 6 % pressure drop (red) in the inlet line. Qualitative remake from [231].

though there is no physical background for this rule, Figure 14.18 shows that it makes some sense. The blue lines show a case where the 3 %-criterion has been kept.19 After the actuation pressure has been reached, the safety valve starts to open, and within a few milliseconds the full valve lift is obtained. The pressure in the vessel remains almost constant, and the pressure at the safety valve inlet slightly drops. The red lines show a case where the same safety valve has been used with an increased inlet pipe length so that the pressure drop in the line amounts to approximately 6 % of the actuation pressure. A completely different situation arises in this case. After actuation, the pressure at the inlet flange of the safety valve drops significantly due to the pressure drop, causing the valve lift to drop as well. Then both valve lift and pressure start high-frequency oscillation with approximately 50 Hz. Due to this oscillation, the valve is hardly ever fully opened, and there are even periods when it is almost closed. The relief stream will be much lower than the specified one, and the protection of the vessel will probably fail. 19 At first glance, it seems that the pressure drop might be larger than 3 % (10.5 bar at safety valve inlet, 12 bar in the vessel). The reason is that the dynamic pressure due to the velocity is not considered in the diagram.
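A first single-phase check of the 3 % criterion can be scripted as below, using the Darcy–Weisbach form of Equation (12.1) with an assumed friction factor and fitting coefficients; all numbers are placeholders, and the exact reference value (set pressure or driving pressure difference) depends on the guideline applied.

```python
# Sketch: rough check of the 3 % inlet-line criterion for a safety valve
# (single-phase gas, dischargeable mass flow!). All numerical inputs are placeholders.
from math import pi

def inlet_line_check(mdot, rho, d, L, lam=0.02, sum_K=1.0, p_ref=12e5, limit=0.03):
    A = pi * d**2 / 4
    w = mdot / (rho * A)                              # flow velocity in the inlet line, m/s
    dp = (lam * L / d + sum_K) * rho * w**2 / 2       # Darcy-Weisbach plus fitting losses
    return dp, dp <= limit * p_ref

dp, ok = inlet_line_check(mdot=10.0, rho=14.0, d=0.15, L=1.5)
print(f"dp = {dp/1e5:.3f} bar -> {'criterion fulfilled' if ok else 'inlet line must be revised'}")
```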


For the compliance with the 3 % rule, it often causes trouble that the dischargeable mass flow has to be considered, and not the mass flow to be discharged. In these cases, it might be useful to limit the valve lift in a way that the opening of the safety valve limits the relief stream to the design value. However, this requires an exact knowledge of the possible relief scenarios, and furthermore, one should have a look into the guidelines and see whether this lift stop is applicable or not. Often, a joint effort of process and layout engineers can achieve a better location of the safety valve so that the inlet line pressure drop is reduced. Some examples are described in [231]. Meanwhile, even the 3 % criterion is no more regarded as conservative and is supposed to be replaced [240]. The 10 % pressure drop criterion for the outlet line is an arbitrary choice. Certainly, when the back pressure increases, there will be a point where it is no longer possible for the safety valve to operate properly. The safety valve will start chattering [232]. It depends on the safety valve construction at which point this behavior occurs. The 10 % criterion can be taken as a conservative value given in the particular guidelines. Cases can occur where it is more or less not possible to keep the 10 % pressure drop in the outlet line, e. g. at relatively low actuation pressures. Using bellows, it is possible to extend the pressure drop limit in the outlet line to 30 %. Figure 14.19 illustrates how the bellows work. On the left hand side, a conventional safety valve is shown. The back pressure directly acts on the sealing face, which counteracts the simple opening of the safety valve due to the pressure load of the protected equipment. On the right hand side, the safety valve has a bellow, which is in principle a gasket which prevents the back pressure to act on the sealing face. Instead, the pressure to be overcome is the ambient pressure (see the open hole in the upper part), which is in most cases much lower and therefore improves the situation.

Figure 14.19: Safety valve sketch with and without bellows.

Bellows are also recommended if a significant part of a liquid relief (≈ 5 %) flow will flash. Another option is again the above-mentioned lift stop. It should be mentioned that in case of liquid relief the outlet line should be designed with a slope and, if possible, without pockets to avoid that a liquid column is formed downstream the safety valve. It is not always the case that only vapor is relieved without any phase change. In these cases, there is a maximum mass flux, but in the narrowest cross-flow area speed of sound does not occur. The algorithm described in Figure 14.16 can be further applied, but the sensitivities of the pressure ratio are much larger. The following cases can be distinguished:
– Two-phase flow in the safety valve, already at the inlet: If there are both vapor and liquid in the relief stream upstream the safety valve, the necessary cross-flow area of the safety valve could in the past be determined by simple addition of the cross-flow areas necessary for the vapor and the liquid phase alone. At present most of the guidelines recommend the ω-method [220, 221].
– The two-phase flow occurs in the outlet line due to flashing: If the evaporated mass flow downstream the safety valve is less than 50 % of the entire mass flow, the safety valve should be designed for liquid relief. Often, the pressure in the narrowest cross-flow area is the boiling pressure of the liquid, as the first bubble would occupy much more space than the same mass as a liquid. Therefore, it is clear that the maximum mass flow occurs at conditions where the whole flow is liquid. The flashing in the outlet line makes it necessary that the pressure drop correlations for two-phase flow are applied, maybe even as an EXCEL file where the line is divided into segments, if the pressure drop is so large that changes due to the phase equilibrium take place. The use of bellows is strongly recommended in this case.
– Condensation in the safety valve: In Chapter 8.2 it is explained that there are substances, especially “large” ones with more than three C-atoms, which form liquid droplets in the compressor, as the compression yields a lower temperature than the boiling temperature of the relief stream due to the compression. The other way round, this means that the substances which do not show liquid formation in a compressor are prone to form droplets at pressure relief. Astonishingly, water belongs to these substances when a saturated vapor is expanded. Starting at the dew point line, the temperature drop during expansion is larger than the drop of the dew point temperature. Also, speed of sound is not reached in these cases.
– There are also cases where liquid is discharged without flashing, which frequently takes place when pressure relief occurs due to thermal expansion in a vessel completely filled with liquid. The calculation of this case can be done with the simple


Bernoulli equation, giving

w2 = √(2(p1 − p2 )/ρ)

where the velocity in the vessel has been set to w1 = 0. In this case, w2 is the velocity in the narrowest cross-flow area. The maximum mass flux can be calculated with the equation of continuity

(ṁ /A)max = α ρ w2

where α is the constriction coefficient, considering the conditions outlined in Figure 14.15. For liquids, α = 0.6 is the usual approach (a short calculation sketch is given at the end of this subsection). Finally, the outlet line must be thoroughly considered, as it has the potential for a number of serious mistakes. One of them is that its design is postponed to the last possible due date. After the piping of the plant has been almost finished, the only way is that the outlet line meanders through the plant to somehow reach the desired location. The assumptions made to calculate the built-up back pressure are then obsolete, and often the safety valve itself must be redesigned. The repulsive forces must also be checked for the whole course of the outlet line. Moreover, the material for the outlet line has to be carefully chosen because of the expansion of gases and the related Joule–Thomson effect. Often, there is a significant drop of the temperature, leading to brittleness of the material. The mechanical design with respect to thermal expansion of the pipe should be double-checked. If crystallizing substances are relieved, heat tracing should be considered [255].
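For the non-flashing liquid relief mentioned in the last bullet above, the two relations reduce to a one-liner; the sketch below uses the usual α = 0.6, and the pressures are placeholders.

```python
# Sketch: maximum mass flux density for non-flashing liquid relief (Bernoulli + continuity).
from math import sqrt

def liquid_max_flux(p1, p2, rho, alpha=0.6):
    """Maximum mass flux density in kg/(m2 s) through the narrowest cross-flow area."""
    w2 = sqrt(2.0 * (p1 - p2) / rho)        # velocity in the narrowest cross section
    return alpha * rho * w2

G = liquid_max_flux(p1=12e5, p2=2e5, rho=998.0)
print(f"G = {G:.0f} kg/(m2 s)")             # roughly 2.7e4 kg/(m2 s) for a 10 bar differential
```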

14.2.7 Two-phase-flow safety valves

Things become even more complicated when there is a two-phase flow through the safety valve. Many safety valve designs neglect the fact that the pressure relief is not a smooth equilibrium process as assumed if only vapor is relieved. If vapor is relieved from a vapor-liquid equilibrium, a bubbling-up of the liquid will take place, as it might happen that more vapor is formed than can escape through the surface of the liquid. The reason is that the rising velocity is limited [225], especially when the viscosity of the liquid is high (> 100 mPa s) or if there is a foam layer on the liquid. The liquid rises, and in case the liquid level at pressure relief conditions was high enough, it will be partially relieved through the safety valve as well. For the mentioned system with high viscosity or foam, this happens even at

low liquid levels [226]. In everyday life, this effect is known as the champagne effect.20 For the design of the safety valve itself, one should be aware that the entrained liquid covers part of the opening area. Therefore, less gas can be relieved and less energy can be removed by evaporation of the liquid. Larger opening areas are necessary. In Figure 14.20, a criterion is given to decide whether only vapor or a two-phase flow goes through the safety valve [225]. The decisive quantity is the ratio between the superficial vapor velocity uG0 , calculated with the maximum dischargeable gaseous relief stream (Equation (14.12)), and the rising velocity of the bubbles u∞ (Equation (14.13)):

uG0 = ṁ out /(ρV A) ,   (14.12)

u∞ = K∞ [σL g(ρL − ρV )]^0.25 /ρL ^0.5   (14.13)

A is the cross-flow area for the gas in the vessel, and the factor K∞ can be taken from Figure 14.20 according to the model chosen for the description of the bubbling-up. The diagram is valid for vertical vessels with 1 < H/D < 3. If it is applied to horizontal vessels, it can be estimated that the result for the limiting liquid level is approximately 5 % too low. Typical values for u∞ are 0.2 m/s for low viscosities and 0.05 m/s for high viscosities. Reasonable values for the limiting liquid level φlim are approx. 70 % for systems with low viscosity, approx. 20 % for systems with high viscosity and approx. 10 % for systems with foam.21 If the liquid level is below the limiting one at the moment just before the actuation, it can be continued to assume a vapor flow through the safety valve. If the liquid level is above the limiting one, things are more sophisticated. A twophase flow consisting of both vapor and liquid is relieved; to avoid further pressure rise, its volume flow must be equal to the one calculated as vapor flow only. It is difficult and a bit arbitrary to assign the fractions of vapor and liquid. The simplest way is to set the volume fractions according to the vessel content at actuation [225]. This is very conservative; it is clear that the volume fraction of the vapor will be somewhat larger due to its lower density. In [226] a relationship is discussed which can mitigate this conservative assumption. According to a relationship of Grolmes [257] the maximum vapor volume fraction of the relief stream αmax is related to the average vapor volume fraction in the vessel ᾱ 20 After having wasted large amounts of this valuable beverage due to underestimation of the champagne effect, the author claims that from the pressure relief point of view, the opening of a champagne bottle is by far too large. 21 φ = VL /Vvessel at the beginning of the pressure relief.

14.2 Pressure relief | 413

Figure 14.20: Limiting level to avoid two-phase flow through the safety valve [225]. © Wiley-VCH Verlag GmbH & Co. KGaA. Reproduced with permission.

via αmax =

2ᾱ 1 + ᾱ

(14.14)

This maximum vapor volume fraction is a good approximation as long as the drift velocity between the two phases is not too large. The equation can be interpreted as follows. For a one-phase vessel content ᾱ = 0 or ᾱ = 1, it turns out that αmax = α,̄ as expected. For 0 < ᾱ < 1, it is always αmax > α,̄ e. g. for ᾱ = 0.5 one gets αmax = 0.667. The size of the pressure relief device can then be determined with the ω-method mentioned above [220, 221]. Again, the solution can only be obtained iteratively. After choosing a certain size, the calculation must be repeated with the new dischargeable mass flow. The calculation is only finished if the phase ratio does no more change from step to step. The procedure is thoroughly described in [225, 227]. Surprisingly, the two-phase flow procedure does not need to be applied in the fire case [250]. Often, there is equipment which is completely filled with liquid (e. g. filters). In this case, the first actuation is caused by liquid expansion. After a time, the boiling point at relief pressure is reached with the equipment still completely filled with liquid. Then, vapor and liquid have to be relieved simultaneously, giving two-phase flow through the safety valve with a large necessary cross-flow area. The reason is that the generated vapor volume must be relieved as liquid to a large extent, until enough vapor space is available for phase separation. In fact, according to [250] the time required to heat the system from the first actuation to the full opening of the safety valve to the relieving conditions (121 % of the actuation pressure) is large enough to cover the interim time with two-phase flow. In this way, full disengagment of vapor and liquid is realized at relieving conditions and the assumption of one-phase vapor venting is justified for the design. Moreover, in the fire case the boiling occurs close to the walls of the vessel (wall-heating) and not inside the vessel (volumetric heating) as it is the case for exothermic chemical reactions. Therefore, the bubbles are by far not homo-

414 | 14 Process safety geneously distributed, making disengagement easier. For foaming or reactive systems this simplifying consideration should not apply.

14.3 Explosions If an exothermic reaction is very fast, in might happen that the heat of reaction cannot be removed. The temperature elevation causes the reaction rate to further increase, which in turn gives a higher temperature again. Finally, an explosion takes place, which is in principle a reaction of a substance in a very short period of time (usually within milliseconds). Starting from a certain point, the reaction finally covers any amounts of the substance which is in close contact. In contrast to a comparably slow fire event, there is no chance to act and mitigate the consequences; the only thing an engineer can do is to take all precautions to prevent an explosion under all circumstances. Concerning the initiation of the ignition, we distinguish between induced ignition, where energy is supplied from an external source (e. g. sparks, hot surfaces), and self-ignition, where the substance is heated due to chemical reactions without sufficient heat removal. An explosion is coupled with a rapid expansion of gases, which has a large destructive potential. One can distinguish between flash fires, deflagrations, and detonations. Flash fires are defined as combustion reactions with rapidly moving flame fronts. The flame velocity is below the speed of sound. Glass panes break, and people are injured. There is a continuous transition to deflagrations, where the flame front is also slower than the speed of sound but can be heard. It can destroy buildings, and there are often people who are seriously injured or even die. In detonations, the flame front velocity is faster than the speed of sound [233]. There is extensive damage in a wide area, as well as casualties. For the discussion of possible explosions in a plant, we distinguish between the following zones: – Zone 0: an explosive atmosphere occurs more than 50 % of the operation time; – Zone 1: an explosive atmosphere occurs frequently, at least 30 min per year; – Zone 2: during normal operation, an explosive atmosphere does not occur. By accident it is possible, but less than 30 min per year. There are some characteristic numbers which indicate and classify the hazardousness of a substance. The flash point is the lowest temperature where the vapor pressure is large enough to generate ignitable mixtures in ambient air. When the ignition source is removed,

14.3 Explosions | 415

the substance stops burning. In contrast, the fire point is essentially the same, but the substance continues to burn after removal of the ignition source. At the autoignition temperature, the substance burns even without an external source. According to this value, the particular substances are classified in groups, as indicated in Table 14.2. Table 14.2: Temperature classes. Temperature class

Range of autoignition temperature

T1 T2 T3

> 450 °C 300–450 °C 200–300 °C

T4

135–200 °C

T5 T6

100–135 °C 85–100 °C

Example hydrogen 536 °C ethanol 363 °C diesel 205 °C diethyl ether 160 °C acetaldehyde 140 °C no examples carbon disulfide 90 °C (only one)

Explosions take place under certain conditions; there is a lower and an upper explosion limit. Between these limiting concentrations, a gas can ignite and explode. The explosion limits refer to air as oxygen containing gas. They are temperature- and pressure-dependent. Moreover, there are substances where the upper explosion limit is missing, meaning that they are explosive even without the presence of oxygen. Examples are ethylene and ethylene oxide. Special care must be taken for dust. The lower explosion limit (LEL) is the lowest concentration of a gas or a vapor in air where an ignition source (e. g. flame, heat) causes a flash of fire. Below the LEL there is not enough fuel to develop an explosion, meaning that concentrations lower than the LEL are too lean to burn. The LEL generally decreases with increasing temperature and increases slightly with increasing pressure [241]. For example, methane has an LEL of 4.4 vol. %22 at t = 138 °C. At t = 20 °C, the LEL is 5.1 vol. %. Below these concentrations, an explosion cannot take place. The upper explosion limit (UEL) is the highest concentration of a gas or a vapor where an ignition or explosion is possible. Mixtures with concentrations higher than UEL are too rich to burn. The UEL increases with increasing temperature and increasing pressure [241]. 22 vol. % as concentration unit is simply annoying for engineering purposes. It is not exactly defined how it can be interpreted, and in principle it is temperature-dependent. For gases, it can be assumed that the gases are ideal, so the volume concentration is equal to the mole concentration. For liquids, it can be assumed that amounts corresponding to the volume concentration of the particular components are mixed, and the excess volume is neglected.

416 | 14 Process safety There is a mixing rule for the LEL according to Le Chatelier: LELmix = (∑ i

xi ) , LELi −1

(14.15)

where x is the mole concentration. For the UEL, it is sometimes recommended to use Equation (14.15) as well, but in principle, there is no assured mixing rule. In practical applications like exhaust air lines, a safety margin (usually 50 % LEL) must be considered, and there must be an analytical surveillance that this limit is kept. The detector most often used is the flame ionization detector (FID). However, it only detects C-atoms in the mixture; it does not distinguish between the components. Therefore, one component is identified to set a conservative standard, as the following example shows. Example A pollutant flow (t = 20 °C, p = 1 bar) of 2.4 kg/h acetaldehyde (1, C2 H4 O, M1 = 44.053 g/mol, LEL1 = 4 mol % = 73.25 g/m3 ) and 3.8 kg/h cyclohexanol (2, C6 H12 O, M2 = 100.161 g/mol, LEL2 = 1 mol % = 41.64 g/m3 ) is transported with an exhaust air stream to the exhaust air treatment. It must be taken into account that a flame ionization detector cannot distinguish between components which contain only C, H, and O, as the combustion products are the same. How much exhaust air flow is necessary to make sure that the resulting stream has less than 50 % LEL according to a conservative standard?

Solution Considering the C-atoms, the LELs of the two components can be interpreted as follows: 2 ⋅ 12.01 g C/mol ⋅ 73.25 g/m3 = 39.94 g C/m3 44.053 g/mol 6 ⋅ 12.01 g C/mol LEL2 = ⋅ 41.64 g/m3 = 29.96 g C/m3 100.161 g/mol LEL1 =

Thus, component 2 (cyclohexanol) should be the reference component with the lowest LEL per C-atom. In the mixture, the detector will identify ṁ C =

2 ⋅ 12.01 g C/mol 6 ⋅ 12.01 g C/mol ⋅ 2.4 kg/h + ⋅ 3.8 kg/h = 4.04 kg C/h 44.053 g/mol 100.161 g/mol

Therefore, the stream must be diluted with air to a volume flow of V̇air =

ṁ C 4.04 kg C/h = = 270 m3 /h 0.5 LEL2 0.5 ⋅ 29.96 g C/m3

Most chemical processes do not operate with air but rather with any mixtures which can contain arbitrary amounts of oxidizing substances. Figure 14.21 shows an example for the flammability diagram of the system methane/nitrogen/oxygen. For a given

14.3 Explosions | 417

Figure 14.21: Explosion regions of the system methane/nitrogen/oxygen. © Power.corrupts/Wikimedia Commons/CC BY-SA-3.0 https://creativecommons.org/licenses/by-sa/3.0/deed.en.

temperature and pressure, the region where explosions can happen is dark colored. Additionally, some useful straight lines are shown. First, there is the stochiometric line, where there is as much oxygen in the mixture as necessary for the complete oxidization of the methane. According to the reaction CH4 + 2 O2 󳨀→ CO2 + 2 H2 O , 2 mol oxygen are needed for the combustion of 1 mol methane, meaning that the mole concentration of methane is approximately 33 % in case there is no nitrogen. The line shows all mixtures where this ratio is kept. Second, the air line is shown, where the ratio between nitrogen and oxygen is apas it is in air, independent of the methane concentration. The intersections prox. 79 21 of this line with the limits of the explosion region indicate the LEL and, respectively, the UEL. For process control, the LOC (limiting oxygen concentration) line is the decisive one. It indicates the lowest oxygen concentration where the explosion region is touched. Independent of the concentrations of the other substances, staying below the LOC ensures that an explosion does not take place. The oxygen concentration is relatively easy to supervise. The addition of an inert gas (usually nitrogen) increases the LEL and lowers the UEL. The knowledge of the explosion limits enables the engineer to determine the necessary flow of inert gas. It has to be assured that this flow can be delivered in any case; e. g. the failure of a compressor or the breakdown of the electrical energy supply
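A typical task following from the LOC concept is the estimation of the required inert gas flow. The following sketch shows the underlying mole balance for the continuous dilution of a process gas below a target oxygen concentration. The gas flow, its oxygen content, and the target value (LOC minus a safety margin) are assumptions for illustration only; the actual LOC must be taken from the safety assessment of the system.

# Nitrogen flow required to dilute a process gas below a target O2 fraction.
# All numbers are illustrative assumptions.
n_gas = 100.0   # kmol/h process gas
y_O2 = 0.12     # mole fraction O2 in the process gas (assumed)
y_max = 0.08    # target, e.g. LOC minus safety margin (assumed)

n_O2 = n_gas * y_O2
n_N2 = max(0.0, n_O2 / y_max - n_gas)   # kmol/h nitrogen to be added
print(n_N2)             # 50 kmol/h in this example
print(n_N2 * 22.414)    # approx. 1120 Nm3/h (22.414 m3/kmol at 0 °C, 1.013 bar)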

The addition of an inert gas (usually nitrogen) increases the LEL and lowers the UEL. The knowledge of the explosion limits enables the engineer to determine the necessary flow of inert gas. It has to be assured that this flow can be delivered in any case; e.g. the failure of a compressor or the breakdown of the electrical energy supply must not cause a lack of inert gas delivery. Often, gas cylinders filled with nitrogen under pressure are provided as an independent emergency supply for a limited time. However, it must be carefully determined how much gas is really available in the cylinders, so the filling conditions must be well defined. When the temperature decreases, for instance on a cold winter day, the pressure inside the cylinders will drop; the coldest winter day is therefore the basis for the dimensioning of the inert gas supply. At actuation, the temperature in the cylinders decreases further due to the expansion, which in turn causes the pressure in the cylinder to drop even more rapidly. On the other hand, the temperature decreases more slowly than thermodynamics suggests, as the steel of the cylinder itself with its large mass acts as a heat storage and transfers heat to the gas by natural convection. With time, the wall temperature decreases as well, and this is a decisive issue for choosing the material for the cylinders. It must be evaluated in advance how much inert gas can be delivered in an emergency case. Furthermore, it must be considered that the temperature downstream of the outlet valve decreases further due to the Joule–Thomson effect. The lowest temperature, however, occurs inside the valve because of the enthalpy loss due to the acceleration to the speed of sound. Figure 14.22 shows the course of the various temperatures of interest [234].

Figure 14.22: Temperature course of a nitrogen gas cylinder at emergency inertization. Courtesy of Wystrach GmbH.
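A lower bound for the gas temperature in the cylinders can be estimated by treating the gas remaining in the cylinder as if it expanded reversibly and adiabatically; the real temperature stays above this value because of the heat input from the steel wall described above. The pressures, the initial temperature, and the ideal gas assumption (questionable at 200 bar) in the sketch below are illustrative only.

# Lower-bound estimate for the temperature of the gas remaining in a nitrogen
# cylinder after discharge: reversible adiabatic expansion of an ideal gas,
# T2 = T1 * (p2/p1)**((kappa-1)/kappa). Heat input from the steel wall is
# neglected, therefore the real temperature is higher.
kappa = 1.4            # isentropic exponent of nitrogen (ideal gas)
T1 = 273.15 - 20.0     # K, cold winter day as storage condition (assumed)
p1, p2 = 200.0, 50.0   # bar, initial and remaining cylinder pressure (assumed)

T2 = T1 * (p2 / p1) ** ((kappa - 1.0) / kappa)
print(T2 - 273.15)     # approx. -103 °C without heat input from the wall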

Glossary

Active area: The area on a distillation tray where the mass transfer takes place.
Activity coefficient (γ): A factor describing the deviations from Raoult's law. It can be interpreted as a correction factor for the concentration. γ is a function of temperature and concentration. It is not particularly dependent on the pressure.
Adsorbate: Phase on the surface of the adsorbent.
Adsorbent: Adsorptive agent; a solid which can develop bonds to one or more fluid substances to remove them from a liquid or a gas.
Adsorptive: One or more components in a gas or a liquid which can be adsorbed by the adsorbent.
Advanced process control: Number of measures for improvement of process economics with sophisticated control strategies, e.g. feedforward control or simulation-based predictive control.
Aerosol: An aerosol consists of liquid droplets in the vapor phase which are so small that they do not precipitate (Equation (9.9)) but large enough that they do not take part in molecular diffusion. Their occurrence is caused by oversaturation in the vapor phase. Well-known examples occur when sulfuric acid or hydrogen chloride are absorbed in aqueous phases or if the cooling in cryo-condensation is too strong (Chapter 13.4.1).
Autoignition temperature: The lowest temperature where a substance ignites in normal atmosphere without an external source of ignition.
Azeotrope: Phase equilibrium where vapor and liquid concentration of all components are identical, while the equilibrium pressure at constant temperature or, respectively, the equilibrium temperature at constant pressure shows a maximum or minimum. The separation of the components of an azeotrope is not possible with simple distillation. The closer the vapor pressures of two components are, the more probable is the occurrence of an azeotrope. We distinguish between homogeneous and heterogeneous azeotropes, where the latter show a miscibility gap in the liquid phase. Azeotropes are also possible for ternary mixtures. Also, there are a few examples of quaternary azeotropes. There is no evidence for azeotropes consisting of more than four components.
Battery limits: The physical boundaries of a plant. Usually, flow meters are installed at this location in order to determine the economic performance of a plant.
BOD: Biochemical oxygen demand, Section 13.5.
Boiler feed water: Demineralized and pretreated water suitable for generating steam. Metal ions, salts, organics, oxygen, carbon dioxide, and hydrogen sulfide have been removed to an acceptable level. Often, nitrogen-based weak caustics are added (ammonia, amines).
Brainstorming: A technique for solving problems in a group. It is based on spontaneous contributions of the particular members of the group. During the brainstorming phase, the proposals must not be subject to criticism.
By-product: A product formed due to undesired side reactions.
CAPEX: Capital Expenditure. Investment costs for a plant.
Car Sealed Open: Protection of a valve against accidental maloperation. A seal made of plastic must be broken on purpose before the valve can be actuated.
Cause & effect matrix: A matrix which gives an overview on the actions caused by interlocks in a process. For each deviation, the actions caused by the interlocks are marked so that the often complex interlock description can be interpreted more easily.
Check valve: A valve which is fully open in one flow direction. For reverse flow, it closes due to its mechanical construction.
Coalescer: An apparatus where droplets unify to a single phase.
COD: Chemical oxygen demand, Section 13.5.
Compressibility factor: Deviation from the ideal gas behavior, defined as Z = pv/RT. For an ideal gas, Z = 1. At the critical point, Z is in the range Z = 0.23–0.29. At very high pressures, Z can show large values, e.g. Z = 4.57 for ethylene at t = 100 °C, p = 3000 bar.
Contingency: Cost estimation item to account for uncertainties in the process or in project execution.
Cooling water: Water used for cooling purposes, usually taken from natural sources like rivers, wells, or sea water. In open cooling water cycles, it is used with no further treatment so that it might contain salts which can lead to fouling. The supply temperature ranges from 25–35 °C. Usually, the return temperature is 10 K higher. In most applications, the returned cooling water is cooled down again in a cooling tower.
Co-product: A product generated because it occurs in the reaction equation of the desired main reaction.
Depreciation: Depreciation is the value loss of an asset with time. At the end of its lifetime, the value of the asset becomes zero. During this time, its value continuously decreases from the purchase price to zero. In the easiest case, this happens linearly with time; other courses are possible. Depreciations are costs which can be assigned each year, and therefore they have an impact on the amount of taxes the company has to pay due to the income statement. The higher the depreciation is, the less taxes have to be paid by the company. It is legally required that the depreciation is spread over the estimated lifetime. The course of the depreciation with time is defined by the government. The lifetime itself is also fixed by law; often 10 years for ISBL items and 20 years for OSBL items.
Design basis: A document which collects all the facts and assumptions known in advance before the project starts. It defines the boundary conditions of the project (environmental conditions, physical state and composition of raw materials, utilities, products, etc.).
Design pressure: Chapter 11.
Design temperature: Chapter 11.
Deviation: Departure from design and operating intention [210].
Dip-pipe: A feed line to a vessel which does not end at the nozzle but is elongated to the bottom inside the vessel so that it dips into the liquid during operation. The purpose is to prevent backflow of the vapor phase of the vessel.
Double jeopardy: Double jeopardy scenarios are two unrelated failures occurring simultaneously. As the probability of simultaneous independent errors is low, they should not be considered.
Enthalpy: An item explained so well in many thermodynamic textbooks [154]. To make a short attempt of our own: Enthalpy is defined as internal energy plus potential energy in the pressure field: h = u + pv. It is the usual quantity for the thermal energy of a flowing substance, whereas the internal energy is relevant for static systems.
Entropy: No reasonable explanation is possible in just a few sentences. Again, a short try: The entropy represents the experience associated with the behavior of a system. For a closed system where neither mass nor heat can pass the system border, the entropy reaches a maximum according to the Second Law of thermodynamics. Such a system will end up in a state which is the most probable one. An example: Two gases separated by a wall will mix when the wall is removed, until the concentration is the same in every volume element. Better and more extensive explanations can be found in [154] and [235].
Equation of state: A mathematical relationship between pressure (p), specific volume (v), and temperature (T).
Excess enthalpy: Enthalpy change when two or more liquid or gaseous components are mixed at constant temperature and pressure. The mixture is supposed to remain liquid or gaseous, respectively.
Excess volume: Change of the specific volume which occurs when two or more liquid or gaseous components are mixed at constant temperature and pressure. The mixture is supposed to remain liquid or gaseous, respectively.
Expediting: Regular auditing of vendors to maintain quality and delivery dates.
Fixed costs: Operation costs which occur independently from the production, e.g. personnel costs.
Flash point: The lowest temperature where the vapor pressure is large enough to generate ignitable mixtures in ambient air. When the ignition source is removed, the substance stops burning.
Froude number (Fr): Ratio between gravity force and inertia force. The general calculation formula is Fr = w2/(g l), where l is a characteristic length.
Fuel gas: Natural gas. Normally, methane is the dominating component; the rest of the composition depends on the case. Ethane, propane, butanes, higher hydrocarbons, nitrogen, hydrogen, oxygen, carbon dioxide, carbon monoxide, helium, argon, hydrogen sulfide, and water can occur as components.
Fugacity coefficient (φ): A correction representing the deviation of the chemical potential from ideal gas behavior. It is thoroughly explained in [11].
Grashof number (Gr): Ratio between buoyancy force and friction force.
Guideword: A simple phrase used to identify possible deviations during a HAZOP procedure.
HAZID: Hazard Identification. Meeting where the main safety issues are discussed and listed. First recommendations can be given.
HAZOP: Hazard and Operability review. A formal and systematic approach for the identification of the potential of hazards and operating problems caused by deviations from the intended design and operation [210].
HETP: Height equivalent of one theoretical plate. The height of a layer of random or structured packing which corresponds to a theoretical stage. It is a measure of the separation efficiency of the packing. The reciprocal value is the number of theoretical stages per m.
Holdup: Relative liquid content of the packing during operation.
Internal energy: Quantity for the description of the thermal energy of a substance which is not flowing but encased in a vessel. A better explanation can be found in [154].
Interlock: A defined automatic intervention of the process control system. Reaction of the control system to encounter unacceptable deviations from normal process conditions.
ISBL: All items which are in the scope of the engineering company and which are directly related to the process.
Joule–Thomson effect: The temperature change (∂T/∂p)h, especially relevant for gases being throttled adiabatically. In most cases, the Joule–Thomson effect gives a temperature decrease, but an increase is also possible.
Lever rule: The lever rule says that in phase equilibrium diagrams showing the concentration on the x-axis, the ratio of the amounts of the phases corresponds to the ratio of the opposite lever arms of the tie-line.
Liquidus line: The liquidus line is a limiting line in a solid-liquid phase equilibrium diagram. Above the liquidus line, there is no solid present.
Lower explosion limit (LEL): The lowest concentration of a substance in air where an explosion can take place after ignition.
Mach number (Ma): The Mach number is the ratio between the actual velocity and the speed of sound.
Makespan: Time needed for producing a product in a batch plant.
Node: A part of the process which covers a dedicated task, e.g. a distillation step.
NPSH value: Net positive suction head; see Chapter 8.1.
Nußelt number (Nu): Ratio between heat transfer by convection and heat transfer by conduction. The general formula is Nu = α l/λ, with l as a characteristic length.
Objective Function: Function describing the targets of an optimization process. Usually, it contains deviations which shall be minimized.
OPEX: Operational Expenditure. Operation costs of a plant, referring to a certain production rate.
OSBL: Outside battery limits. Auxiliary units which are required for the functioning of the production unit, but are not directly involved in the process. In contrast to the main units, they can be shared among different production units. Examples: steam generator, cooling water system, inert gas supply, refrigeration unit, instrument air unit.
Osmotic pressure: Equilibrium pressure difference across a semipermeable membrane, where the mass flow through the membrane comes to a stop. In this case, for example, a salt solution at high pressure can be in chemical equilibrium with the pure solvent on the other side of the membrane at low pressure.
Package unit: A package unit is a compilation of pieces of equipment that fulfill a certain task in the project. It is delivered as a unit with defined inlets and outlets by one vendor. Examples are compressors, refrigeration units, crystallizers, or adsorber units.
PFD (process flow diagram): An engineering drawing illustrating the process without showing details which are not necessary for the understanding. It is often supplemented by process and equipment data.
pH value: Quantity describing acid/alkaline behavior and the strength of an electrolyte solution. It is defined as the negative of the logarithm of the H3O+ ion concentration in mol/l to the base of 10. In fact, it would be more correct to replace concentration by activity, which is actually performed in process simulation. The pH of neutral water is 7. Acids have lower pHs, caustic solutions have higher ones.
PID (piping and instrumentation diagram): An engineering drawing showing the arrangement of piping and equipment like vessels, heat exchangers, columns, pumps, compressors, and the associated measurement and control devices. The pipes are depicted together with information about nominal diameter, design pressure, medium, piping class, and an identification number. The function of the control loops should be clarified on the PID, together with other documents (cause and effect matrix, data sheets for measurement devices), as well as the installation height of the apparatuses.
Prandtl number (Pr): Ratio between kinematic viscosity and thermal diffusivity. Pr = η cp/λ.
Process control system: Computer system to enable the operators to keep the overview on the state of the whole process and to control it from a central measuring station. A "PFD" of the plant is displayed and controllers are represented as software, enabling the operator to change control parameters. Any measured data can be monitored and stored so that arbitrary trend lines can be visualized; furthermore, any actions can be carried out centrally from this station (e.g. vary the set point of a pressure or switch off a pump).
Process water: Water which has to be pretreated in a way that it can be used in the process.
Pseudo-critical pressure/temperature: For mixtures, a genuine critical point does not exist. If it is required in correlations, the pseudo-critical temperature/pressure can be evaluated as a corresponding quantity.
Rectifying section: Column region between feed and top, where the light ends are enriched.
Reynolds number (Re): Ratio between inertia force and friction force. The general calculation formula is Re = w l ρ/η, where l is a characteristic length, for instance the inner diameter in a pipe.
Safety valve: A valve which opens automatically when the pressure in an apparatus exceeds an acceptable value. In case of actuation, substance is released from the equipment so that the pressure is lowered.
Safeguard: Countermeasure to prevent or mitigate the risk of a deviation [210].
Separation factor: The ratio (y1/y2)/(x1/x2). If it is far away from 1, the separation is easy. The closer it is to 1, the more difficult is distillation.
Speed of sound: The speed at which a pressure disturbance can be transported through a substance. It can be measured with a remarkable accuracy and is therefore a key quantity for the development of equations of state. Flows in pipes cannot exceed the speed of sound, which is a key fundamental for all pressure relief calculations.
Split block or separation block: A block in process simulation which just defines how a stream is split by a certain unit operation. The split can be different for the particular components. It is just a book-keeping function; the physical background is not questioned, and there is no physical check whether the suggested separation is possible or not.
Steamout: Steamout means that the vessel is cleaned by exposing the surfaces to steam, where high temperatures are applied. Polymer deposits and other solids might be melted due to the high temperature or dissolved and therefore removed from the wall.
Stripping section: Column region between bottom and feed, where the heavy ends are enriched.
TA Luft: German guideline for limiting concentrations in exhaust air streams.
Tangent line: Level in a vessel which indicates the position of the cylindrical part; the bottom is left out.
Tear stream: In a chemical process there are usually recycle streams. They are a challenge for process simulation, as they cannot be known in advance. To come to a solution, they are first estimated, and after recalculation it is checked whether the estimation was accurate enough. If not, the estimation of this stream is revised in a certain manner depending on the convergence algorithm. These streams are called tear streams.
Tie-in points: Defined points where a new part of a plant is connected to an existing one.
TOC: Total organic carbon, Section 13.5.
Turndown: Ratio between maximum and minimum load.
Typical: Depiction of an example for the arrangement of the standard equipment, i.e. valves, pumps, etc.
Upper explosion limit: The highest concentration of a substance in air where an explosion can take place after ignition.
Value engineering: An engineering procedure which generates suggestions for the economic and technical improvement of a process and gives an assessment whether these suggestions should be realized or not. Usually, it starts with a brainstorming session, where new ideas are developed. In a second phase, people are assigned to evaluate the economic improvement of the particular measures. These people compile standardized reports so that their assessment becomes comprehensible for both current and future colleagues.
Vapor pressure: The pressure of a pure substance exerted by the vapor which is in equilibrium with its condensate in a closed system. The vapor pressure is a key quantity for the estimation of pure component properties and for evaporation, condensation, and distillation processes.
Variable costs: Costs which are directly related to the production amount (raw materials, auxiliary chemicals, utilities).
Weber number (We): Measure of the relative importance of inertia forces compared to the surface tension. We = w2 l ρ/σ, where l is a characteristic length.
Weeping: On a distillation tray, the liquid is supposed to leave the tray via outlet weir and downcomer to enter the tray below. If part of it leaves the tray through the sieve holes or valves, it is called weeping.
Working capital: Working capital comprises inventories of raw and auxiliary materials, catalysts, stores of products and intermediate products, debt claims, and liquid assets.
ω-method: The ω-method is a simplified model to consider two-phase flow through a pressure relief device. It assumes equilibrium between vapor and liquid phase. There shall be no friction between the phases. The liquid phase is assumed to be incompressible; the vapor phase obeys the ideal gas equation of state. The method defines a simplified equation of state for a two-phase flow, where the whole input information can be used from the state at the inlet. There should be sufficient distance to the critical point. The critical pressure ratio and the maximum mass flux density can be calculated iteratively with a simple EXCEL file. The ω-method is considered to be conservative. Thorough information can be found in [220, 221].

List of Symbols

a (N m4/mol2): attractive parameter in cubic equations of state
ai: activity
A (m2): area
Aij, Bij, Cij, Dij: interaction parameters for Wilson, NRTL, UNIQUAC
b (m3/mol): repulsive parameter in cubic equations of state
B, C, D: virial coefficients
c (m3/mol): parameter for volume translation
C (kg/h): capacity
cp (J/(mol K), J/(g K)): spec. isobaric heat capacity
cv (J/(mol K), J/(g K)): spec. isochoric heat capacity
D (m): diameter
d (m): diameter
dh (m): hydraulic diameter
f (Pa): fugacity
g (J/mol): spec. Gibbs energy
g (9.81 m/s2): gravity acceleration
G (J): Gibbs energy
gE (J/mol): spec. excess Gibbs energy
h (J/mol, J/g): specific enthalpy
hW (m): weir height
H (m): delivery height
H (J): enthalpy
Hij (Pa): Henry coefficient of component i in solvent j
hE (J/mol): excess enthalpy
I (A): electric current
k (W/(m2 K)): heat transition coefficient
kij: interaction parameter in cubic equations of state
k (m): roughness
K: chemical equilibrium constant
KV (m3/h): valve characterization value
M (g/mol): molecular weight
Ma: Mach number
ṁ (kg/h): mass flow
n (mol): number of moles
p (Pa): pressure
P (€): price
P (W): power
Poyj: Poynting correction
ps (Pa): vapor pressure
Q̇ (W): heat flow
R (8.31447 J/(mol K)): universal gas constant
R (Ω): electrical resistance
R (K/W): thermal resistance
Re: Reynolds number
s (J/(mol K), J/(g K)): specific entropy
s (m): wall thickness
t (°C): Celsius temperature
T (K): absolute temperature
Tb (K): normal boiling point
Tr: reduced temperature, Tr = T/Tc
U (J): internal energy
U (V): voltage
u (m/s): velocity
u (J/mol, J/g): specific internal energy
v (m3/mol, m3/kg): specific volume
V (m3): volume
V̇ (m3/h): volume flow
wt (J/g): technical work
w (m/s): velocity
w* (m/s): speed of sound
wBl (m/s): bubble rising velocity
x (mol/mol, g/g): liquid concentration
x (mol/mol): vapor quality
xij (mol/mol): local concentration of molecule i around molecule j
y (mol/mol, g/g): vapor concentration
z (mol/mol): vapor or liquid concentration
Z: compressibility factor

Greek symbols
α (W/(m2 K)): heat transfer coefficient
αij: separation factor
γj: activity coefficient
Δhm (J/mol, J/g): enthalpy of fusion
Δhv (J/mol, J/g): enthalpy of vaporization
η (Pa s): dynamic viscosity
η: efficiency
ηM: Murphree efficiency
κ: isentropic exponent
λ (W/(K m)): thermal conductivity
λ: friction factor
ν (m2/s): kinematic viscosity
Π (Pa): osmotic pressure
ρ (mol/m3, kg/m3): density
σ (N/m2): mechanical tension
σ (N/m): surface tension
τ (s): time
φ: relative free hole area on a tray
φj: fugacity coefficient of component j
ω: acentric factor

Subscripts
a: axial
DC: downcomer
G: gas
i, j, k: components, molecules
jet: jet pump
L: liquid
m: melting
M: mixture
r: reduced (divided by the critical property)
res: residence
rev: reversible case
s: saturation
SV: safety valve
t: tangential
t: technical
U: ambient
V: vapor
w: weight

Superscripts
id: ideal gas
L: liquid
S: solid
V: vapor
∞: infinite dilution
′: saturated liquid
″: saturated vapor
I: 1st liquid phase
II: 2nd liquid phase

Bibliography [1] [2] [3] [4] [5]

[6] [7] [8] [9] [10] [11] [12] [13]

[14] [15] [16] [17] [18]

[19] [20] [21] [22]

Franke A, Kussi J, Richert H, Rittmeister M, von Wedel L, Zeck S. Offene Standards verbinden. CITplus 2013;16(7/8):23–27. Mosberger E. Chemical Plant Design and Construction. Weinheim: Wiley-VCH; 2012. (Ullmann’s Encyclopedia of Industrial Chemistry). Available at: www.mbaofficial.com/mba-courses/operations-management/what-are-theobjectives-principles-and-types-of-plant-layout/. Bowers P, Khangura B, Noakes K. Process plant engineering models; Available at: http://spedweb.com/index.php/component/content/article/392.html. Rovaglio M, Scheele T. Immersive virtual plant reality; Available at: http://software. schneider-electric.com/pdf/white-paper/immersive-virtual-reality-plant-a-comprehensiveplant-crew-training-solution/. Borissova H. Produktlebensphasenorientierte Informationsvisualisierung mit graphischen Metaphern. Diploma thesis, University of Karlsruhe; 2005. Smith R. Chemical Process. Design and Integration. West Sussex: John Wiley & Sons; 2005. Baerns M, Behr A, Brehm A, Gmehling J, Hinrichsen KO, Hofmann H et al.. Technische Chemie. 2nd ed. Wiley-VCH; 2013. Buskies U. Economic process optimization strategies. Chem Eng Technol 1997;20:63–70. Cie A, Lantz S, Schlarp R, Tzakas M. Renewable acrylic acid. Tech. rep., University of Pennsylvania; 2012. Gmehling J, Kleiber M, Kolbe B, Rarey J. Chemical Thermodynamics for Process Simulation. Weinheim: Wiley-VCH; 2019. Salerno D. Data on demand. Benefits of NIST TDE in Aspen Plus; 2014. Presentation ASPEN V8.4. Nannoolal Y, Rarey J, Ramjugernath D. Estimation of pure component properties, Part 3: Estimation of the vapor pressure of non-electrolyte organic compounds via group contributions and group interactions. Fluid Phase Equilibria 2008;269(1/2):117–133. Kleiber M, Axmann JK. Evolutionary algorithms for the optimization of Modified UNIFAC parameters. Computers and Chemical Engineering 1998;23:63–82. Krooshof G. Can molecular modeling meet the industrial need for robust and quick predictions?; 2014. Presentation ESAT, Eindhoven. van Ness HC. Thermodynamics in the treatment of vapor/liquid equilibrium (VLE) data. Pure Appl Chem 1995;67(6):859–872. Loehe JR, van Ness HC, Abbott MM. Vapor/liquid/liquid equilibrium. Total-pressure data and GE for water/methyl acetate at 50 degree C. J Chem Eng Data 1983;28(4):405–407. Gaw WJ, Swinton FL. Thermodynamic properties of binary systems containing hexafluorobenzene, Part 4: Excess Gibbs free energies of the three systems hexafluorobenzene + benzene, toluene, and p-xylene. Trans Faraday Soc 1968;64:2023–2034. van der Waals JD. Over de Continuiteit van den Gas- en Vloeistoftoestand. Thesis, Leiden; 1873. Bronstein IN, Semendjajew KA. Taschenbuch der Mathematik, 21st ed. Thun/Frankfurt a. M.: Verlag Harri Deutsch; 1984. Peng DY, Robinson DB. A new two-constant equation of state. Ind Eng Chem Fundam 1976;15(1):59–64. Diedrichs A, Rarey J, Gmehling J. Prediction of liquid heat capacities by the group contribution equation of state VTPR. Fluid Phase Equilibria 2006;248:56–69.



[23]

[24]

[25] [26] [27] [28] [29] [30] [31] [32] [33]

[34] [35] [36] [37] [38] [39] [40]

[41] [42] [43] [44] [45]

Benedict M, Webb GB, Rubin LC. An empirical equation for thermodynamic properties of light hydrocarbons and their mixtures. I: Methane, ethane, propane and n-butane. J Chem Phys 1940;8:334–345. Benedict M, Webb GB, Rubin LC. An empirical equation for thermodynamic properties of light hydrocarbons and their mixtures. II: Mixtures of methane, ethane, propane, and n-butane. J Chem Phys 1942;10:747–758. Bender E. Equations of state for ethylene and propylene. Cryogenics 1975;667–673. Span R, Wagner W. Equations of state for technical applications. I: Simultaneously optimized functional forms for nonpolar and polar fluids. Int J Thermophys 2003;24(1):1–39. Span R, Wagner W. Equations of state for technical applications. II: Results for nonpolar fluids. Int J Thermophys 2003;24(1):41–109. Span R, Wagner W. Equations of state for technical applications. III: Results for polar fluids. Int J Thermophys 2003;24(1):111–162. Wagner W. FLUIDCAL. Software for the calculation of thermodynamic and transport properties of several fluids. Tech. rep., Ruhr-Universität Bochum; 2005. Kunz O, Wagner W. The GERG-2008 wide-range equation of state for natural gases and other mixtures: An expansion of GERG-2004. J Chem Eng Data 2012;57:3032–3091. Wilson GM. Vapor-liquid equilibrium. XI: A new expression for the excess free energy of mixing. J Am Chem Soc 1964;20:127–130. Renon H, Prausnitz JM. Local compositions in thermodynamic excess functions for liquid mixtures. AIChE Journal 1968;14(1):135–145. Abrams DS, Prausnitz JM. Statistical thermodynamics of liquid mixtures: A new expression for the excess Gibbs energy of partly or completely miscible systems. AIChE Journal 1975;21:116–128. Gmehling J, Brehm A. Lehrbuch der Technischen Chemie. Band 2: Grundoperationen. Stuttgart/New York: Georg Thieme Verlag; 1996. Prausnitz JM, Lichtenthaler RN, de Azevedo EG. Molecular thermodynamics of fluid-phase equilibria. Prentice-Hall; 1986. Anisimov VM, Zelesnyi VP, Semenjuk JV, Cerniak JA. Thermodynamic properties of the mixture FC218-HFC134a. (Russ) Inzenernyi fiziceskij zyrnal 1996;69(5):756–760. Gmehling J, Menke J, Krafczyk J, Fischer K. Azeotropic Data, 2nd ed. Weinheim: Wiley-VCH; 2004. 3 volumes. Hankinson RW, Thomson GH. A new correlation for saturated densities of liquids and their mixtures. AIChE Journal 1979;25(4):653–663. Wagner W. New vapour pressure measurements for argon and nitrogen and a new method of establishing rational vapour pressure equations. Cryogenics 1973;13:470–482. Moller B, Rarey J, Ramjugernath D. Estimation of the vapor pressure of non-electrolyte organic compounds via group contributions and group interactions. Journal of Molecular Liquids 2008;143:52–63. Poling BE, Prausnitz JM, O’Connell JP. The Properties of Gases and Liquids. McGraw-Hill; 2001. Kleiber M, Joh R. Liquids and gases. VDI Heat Atlas, 2nd ed., chap. D3.1. Berlin/Heidelberg: Springer-Verlag; 2010. McGarry J. Correlation and prediction of the vapor pressures of pure liquids over large pressure ranges. Ind Eng Chem Proc Des Dev 1983;22:313–322. Hoffmann W, Florin F. Zweckmäßige Darstellung von Dampfdruckkurven. Verfahrenstechnik, Z VDI-Beiheft 1943;2:47–51. Cordes W, Rarey J. A new method for the estimation of the normal boiling point of non-electrolyte organic compounds. Fluid Phase Equilibria 2002;201:409–433.


[46] [47]

[48] [49] [50] [51] [52] [53]

[54] [55] [56] [57] [58] [59] [60] [61]

[62] [63] [64]

[65] [66] [67] [68] [69] [70] [71]

Franck EU, Meyer F. Fluorwasserstoff III, Spezifische Wärme und Assoziation im Gas bei niedrigem Druck. Z Elektrochem, Ber Bunsenges Phys Chem 1959;63(5):571–582. Chen CC, Britt HI, Boston JF, Evans LB. Local composition model for excess Gibbs energy of electrolyte systems, Part I: Single solvent, single completely dissociated electrolyte systems. AIChE Journal 1982;28(4):588–596. Chen CC, Evans LB. A local composition model for excess Gibbs energy of aqueous electrolyte systems. AIChE Journal 1986;32(3):444–454. de Hemptinne JC, Ledanois JM, Mougin P, Barreau A. Select Thermodynamic Models for Process Simulation. Paris: Edition Technip; 2012. Löffler HJ. Thermodynamik, Band 2: Gemische und chemische Reaktionen. Berlin/Heidelberg: Springer-Verlag; 1969. liq Kleiber M. The trouble with cp . Ind Eng Chem Res 2003;42:2007–2014. Soave G. Equilibrium constants from a modified Redlich–Kwong equation of state. Chem Eng Sc 1972;27:1197–1203. Plöcker U, Knapp H, Prausnitz JM. Calculation of high-pressure vapor-liquid equilibria from a corresponding-states correlation with emphasis on asymmetric mixtures. Ind Eng Chem Proc Des Dev 1978;17(3):324–332. Enders S. Polymer thermodynamics. In: Gmehling J, Kleiber M, Kolbe B, Rarey J. Chemical Thermodynamics for Process Simulation. Weinheim: Wiley-VCH; 2019. Available at: www.ddbst.de. Available at: www.nist.gov. Fredenslund Å, Jones RL, Prausnitz JM. Group-contribution estimation of activity coefficients in nonideal liquid mixtures. AIChE Journal 1975;21(6):1086–1099. Gmehling J, Li J, Schiller M. A modified UNIFAC model. 2. Present parameter matrix and results for different thermodynamic properties. Ind Eng Chem Res 1993;32:178–193. Weidlich U, Gmehling J. A modified UNIFAC model. 1. Prediction of VLE, hE , and γ ∞ . Ind Eng Chem Res 1987;26:1372–1381. Available at: www.unifac.org. Schmid B, Schedemann A, Gmehling J. Extension of the VTPR group contribution equation of state: Group interaction parameters for 192 group contributions and typical results. Ind Eng Chem Res 2014;53(8):3393–3405. Klamt A, Eckert F. COSMO-RS: A novel and efficient method for the a priori prediction of thermophysical data of liquids. Fluid Phase Equilibria 2000;172:43–72. Kleiber M, Joh R. Calculation methods for thermophysical properties. VDI Heat Atlas. 2nd ed., chap. D1. Berlin/Heidelberg: Springer-Verlag; 2010. Nannoolal Y, Rarey J, Ramjugernath D. Estimation of pure component properties, Part 2: Estimation of the saturated liquid viscosity of non-electrolyte organic compounds via group contributions and group interactions. Fluid Phase Equilibria 2009;281(2):97–119. Technical University of Denmark DoEE. Heat transfer fluid calculator; Version 2.01, Copyright 2000. Matejovski D. Die Modernität der Industrie und die Ästhetisierung des Ökonomischen. Priddat BP, West KW, editors. Die Modernität der Industrie. Marburg: Metropolis-Verlag; 2012. Minton PE. Handbook of Evaporation Technology. Westwood, NJ: Noyes Publications; 1986. Linnhoff B. Pinch technology training course. Frankfurt: Linnhoff March Ltd.; 1995. Dhole VR, Linnhoff B. Distillation column targets. Comp Chem Eng 1993;27(5/6):549–560. Toghraei M. Wide design margins do not improve engineering. Hydrocarb Process 2014;93(1):69–71. Baehr HD, Stephan K. Wärme- und Stoffübertragung. Berlin/Heidelberg: Springer-Verlag; 1994.


[72] [73] [74] [75] [76] [77] [78] [79] [80] [81] [82] [83] [84] [85] [86] [87] [88] [89] [90] [91] [92] [93] [94] [95] [96] [97] [98] [99]

Verein Deutscher Ingenieure. VDI Heat Atlas. Berlin/Heidelberg: Springer-Verlag; 2010. Frankel M. Facility Piping Systems Handbook, 2nd ed. New York: McGraw-Hill; 2002. Perea E. Mitigate heat exchanger corrosion with better construction materials. Hydrocarb Process 2013;92(12):49–51. Bouhairie S. Selecting baffles in shell-and-tube heat exchangers. Chemical Engineering Progress 2012;27–33. HTRI manual; HTRI Xchanger Suite 6.0. Drögemüller P. The use of hiTRAN wire matrix elements to improve the thermal efficiency of tubular heat exchangers in single and two-phase flow. Chem-Ing-Tech 2015;87(3):188–202. Ackermann G. Wärmeübergang und molekulare Stoffübertragung im gleichen Feld bei großen Temperatur- und Partialdruckdifferenzen. VDI-Forschungsheft 1937;8(382). Schlünder EU. Film condensation of binary mixtures with and without inert gas. VDI Heat Atlas. 2nd ed., chap. J2. Berlin/Heidelberg: Springer-Verlag; 2010. Colburn AP, Drew TB. The condensation of mixed vapors. Trans Am Inst Chem Engrs 1937;33:197–215. Arneth S. Dimensionierung und Betriebsverhalten von Naturumlaufverdampfern. Thesis, TU München, München; 1999. Arneth S, Stichlmair J. Characteristics of thermosiphon reboilers. Int J Therm Sci 2001;40:385–391. Baars A, Delgado A. Non-linear effects in a natural circulation evaporator: Geysering coupled with manometer oscillations. Heat Mass Transfer 2007;43:427–438. Dialer K. Die Wärmeübertragung beim Naturumlaufverdampfer. Thesis, ETH Zürich; 1983. Scholl S, Rinner M. Verdampfung und Kondensation. Goedecke R, editor. Fluid-Verfahrenstechnik. Weinheim: Wiley-VCH; 2011. Das T. Achieve optimal heat recovery in a kettle exchanger. Hydrocarb Process 2012;91(3):87–88. Bethge D, Kurzweg- und Molekulardestillation. Jorisch W, editor, Vakuumtechnik in der Chemischen Industrie. Weinheim: Wiley-VCH; 1999. Martin H. Pressure drop and heat transfer in plate heat exchangers. VDI Heat Atlas, 2nd ed., chap. N6. Berlin/Heidelberg: Springer-Verlag; 2010. Gmehling J, Kleiber M, Steinigeweg S. Thermische Verfahrenstechnik. Chemische Technik: Prozesse und Produkte, 5th ed. Weinheim: Wiley-VCH; 2006. Schmidt KG. Heat transfer to finned tubes. VDI Heat Atlas, 2nd ed., chap. M1. Berlin/Heidelberg: Springer-Verlag; 2010. Müller-Steinhagen H. Fouling of heat exchanger surfaces. VDI Heat Atlas, 2nd ed., chap. C4. Berlin/Heidelberg: Springer-Verlag; 2010. Zhenlu F, Dengfeng L, Xiangling Z, Xing Z. Determine fouling margins in tubular heat exchanger design. Hydrocarb Process 2015;94(9):79–82. Dole RH, Vivekanand S, Sridhar S. Mitigate vibration issues in shell-and-tube heat exchangers. Hydrocarb Process 2015;94(12):57–60. Gelbe H, Ziada S. Vibration of tube bundles in heat exchangers. VDI Heat Atlas, 2nd ed., chap. O2. Berlin/Heidelberg: Springer-Verlag; 2010. Kister HZ. Distillation -Design-. McGraw-Hill; 1992. Kister HZ. Distillation -Operation-. McGraw-Hill; 1990. Sattler K. Thermische Trennverfahren, 2nd ed. Weinheim: VCH Verlagsgesellschaft; 1995. Stichlmair J, Fair JR. Distillation: Principles and Practice. New York: Wiley-VCH; 1998. Schultes M. The impact of tower internals on packing performance. Chem-Ing-Tech 2014;86(5):658–665.


[100] Kister HZ, Mathias P, Steinmeyer DE, Penney WR, Crocker BB, Fair JR. Equipment for distillation, gas absorption, phase dispersion, and phase separation. Green DW, Perry RH, editors, Perry’s Chemical Engineers’ Handbook, 8th ed., chap. 14. McGraw-Hill. [101] Stupin WJ, Kister HZ. System limit: The ultimate capacity of fractionators. TransIChemE 2003;81(A):136–146. [102] Stichlmair J, Bravo JL, Fair JR. General model for prediction of pressure drop and capacity of countercurrent gas/liquid packed columns. Gas Separation & Purification 1989;3:19–28. [103] Engel V. Fluiddynamik in Packungskolonnen für Gas-Flüssig-Systeme. Fortschritt-Berichte, Reihe 3: Verfahrenstechnik. VDI-Verlag; 1999. [104] Billet R, Schultes M. Prediction of mass transfer columns with dumped and arranged packings. TransIChemE 1999;77(Part A):498–504. [105] Spiegel L, Meier W. Structured packings. Chem Plants + Processing 1995;28(1):36–38. [106] Kister HZ. Distillation -Troubleshooting-. Hoboken, NJ: Wiley-Interscience; 2006. [107] Kister HZ. Practical distillation technology; 2013. Course Notes. [108] Bolles WL. Optimum bubble-cap tray design. Part I: Tray dynamics. Petroleum Processing 1956;65–80. [109] Bolles WL. Optimum bubble-cap tray design. Part II: Design standards. Petroleum Processing 1956;82–95. [110] Bolles WL. Optimum bubble-cap tray design. Part III: Design technique. Petroleum Processing 1956;72–95. [111] Bolles WL. Optimum bubble-cap tray design. Part IV: Design example. Petroleum Processing 1956;109–120. [112] Kister HZ. Effects of design on tray efficiency in commercial towers. Chem Eng Prog 2008;104(6):39–47. [113] Stichlmair J. Grundlagen der Dimensionierung des Gas/Flüssigkeit-Kontaktapparates Bodenkolonne. Weinheim/New York: verlag chemie; 1978. [114] Rennie J, Evans F. The formation of froths and foams above sieve plates. British Chemical Engineering 1962;7(7):498–502. [115] Senger G, Wozny G. Experimentelle Untersuchung von Schaum in Packungskolonnen. Chem-Ing-Tech 2011;83(4):503–510. [116] Pahl MH, Franke D. Schaum und Schaumzerstörung – ein Überblick. Chem-Ing-Tech 1995;67(3):300–312. [117] Brierly RJP, Whyman PJM, Erskine JB. Flow induced vibration of distillation and absorption column trays. I. Chem. E. Symp. Ser 1979;56:2.4/45–2.4/63. [118] Priestman GH, Brown DJ. The mechanism of pressure pulsations in sieve-tray columns. Trans IChemE (London) 1981;59:279–282. [119] Priestman GH, Brown DJ. Pressure pulsations and weeping at elevated pressures in a small sieve-tray column. I. Chem. E. Symp. Ser. 1987;104:B407–B422. [120] Wijn EF. Pulsation of the two-phase layer on trays. I. Chem. E. Symp. Ser. 1982;73:D79–D101. [121] Fractionation Research Inc. Causes and prevention of packing fires. Chemical Engineering; 2007, Jul. p. 34–42. [122] Schuler H. Was behindert den praktischen Einsatz moderner regelungstechnischer Methoden in der Prozeßindustrie? Automatisierungstechnische Praxis 1992;34(3):116–123. [123] Friedman YZ, Kane L. Two DCS control configurations: Mass balance and heat balance; 2010. Webinar, Hydrocarbon Processing. [124] Sorensen E. Design and operation of batch distillation. Gorak A, Sorensen E, editors. Distillation: Fundamentals and Principles. Elsevier; 2014.


[125] Brinkmann T, Ebert K, Pingel H, Wenzlaff A, Ohlrogge K. Prozessalternativen durch den Einsatz organisch-anorganischer Kompositmembranen für die Dampfpermeation. Chem-Ing-Tech 2004;76(10):1529–1533. [126] Berascola N, Eisele P. Aspen rate-based distillation; 2011. Seminar, March 21st. [127] Taylor R, Kooijman HA. Mass transfer in distillation. Gorak A, Sorensen E, editors. Distillation: Fundamentals and Principles. Elsevier; 2014. [128] Duncan JB, Toor HL. An experimental study of three component gas diffusion. AIChE J 1962;8(1):38–41. [129] Krishna R. Uphill diffusion in multicomponent mixtures. Chem Soc Rev 2015;44:2812. [130] Schaber K. Aerosolbildung durch spontane Phasenübergänge bei Absorptions- und Kondensationsvorgängen. Chem-Ing-Tech 1995;67(11):1443–1452. [131] Schaber K. Aerosolbildung bei der Absorption und Partialkondensation. Chem-Ing-Tech 1990;62(10):793–804. [132] Kaibel G. Distillation columns with vertical partitions. Chem Eng Technol 1987;10:92–98. [133] Galindez H, Fredenslund Å. Simulation of multicomponent batch distillation processes. Computers and Chem Eng 1988;12(4):281–288. [134] Hildebrand JH, Scott RL. The solubility of nonelectrolytes. J Phys Coll Chem 1949;53:944–947. [135] Arlt W. Thermische Grundoperationen der Verfahrenstechnik. Lecture notes, Technical University of Berlin; 1999. [136] Bocangel J. Design of liquid-liquid gravity separators. Chemical Engineering; 1986, Feb. p. 133–135. [137] Henschke M. Dimensionierung liegender Flüssig-Flüssig-Abscheider anhand diskontinuierlicher Absetzversuche. VDI Fortschritt-Berichte. (Reihe 3; no. 379), Düsseldorf: VDI-Verlag. [138] Henschke M, Schlieper LH, Pfennig A. Determination of a coalescence parameter from batch-settling experiments. Chem Eng J 2002;85:369–378. [139] Pfennig A, Pilhofer T, Schröter J. Flüssig-Flüssig-Extraktion. Goedecke R, editor, Fluid-Verfahrenstechnik. Weinheim: Wiley-VCH; 2011. [140] Huang C, Xu T, Zhang Y, Xue Y, Chen G. Application of electrodialysis to the production of organic acids: State-of-the-art and recent developments. J of Membrane Science 2007;288:1–12. [141] Kucera J. Reverse Osmosis – Industrial Applications and Processes. Salem, MA: Scrivener Publishing; 2010. [142] Nunes SP, Peinemann KV, editors. Membrane Technology in the Chemical Industry. Weinheim/New York: Wiley-VCH; 2001. [143] Bathen D, Breitbach M. Adsorptionstechnik. Berlin/Heidelberg: Springer-Verlag; 2001. [144] Bethge D. Energy-saving concepts for the dehydration of alcohol. Zuckerindustrie 2005;130(3):213–214. [145] Samant KD, O’Young L. Understanding crystallization and crystallizers. CEP; 2006, Oct. p. 28–37. [146] Beckmann W, editor. Crystallization – Basic Concepts and Industrial Application. Weinheim: Wiley-VCH; 2013. [147] Mersmann A, Kind M, Stichlmair J. Thermische Verfahrenstechnik, 2nd ed. Springer-Verlag; 2005. [148] Available at: www.bungartz.de. [149] Kernan D, Choung E. Run your pumps like a pro: Tips for boosting production and reducing risk at the refinery. Hydrocarbon Processing Webcast; 2010. [150] Beitz W, Grote KH, editors. Dubbel – Taschenbuch für den Maschinenbau, 19th ed. Berlin/Heidelberg: Springer-Verlag; 1997. Ch. B.


[151] Sterling Fluid Systems Group. Liquid vacuum pumps and liquid ring compressors; Available at: www.sterlingfluidsystems.com. [152] Deiters UK, Imre AR, Quiñones-Cisneros SE. Isentropen von Fluiden im Zweiphasengebiet. ProcessNet, Thermodynamic Colloquium. Stuttgart; 2014. [153] Grave H. Dampfstrahl-Vakuumpumpen. Jorisch W, editor, Vakuumtechnik in der Chemischen Industrie. Weinheim: Wiley-VCH; 1999. [154] Baehr HD, Kabelac S. Thermodynamik. Berlin/Heidelberg: Springer-Verlag; 2006. [155] Jorisch W, editor. Vakuumtechnik in der Chemischen Industrie. Weinheim: Wiley-VCH; 1999. [156] Vogel HH. Die Finite-Elemente-Methode am Beispiel des Strahlapparates. Chem-Ing-Tech 2006;78(1/2):124–133. [157] GEA Wiegand GmbH. Überlegungen bei der Projektierung einer Dampfstrahl-Vakuumpumpe. [158] Toghraei M. Overflow systems are the last line of defense. Hydrocarb Process 2013;92(5):T92–T94. [159] Marr R, Moser F, Husung G. Schwerkraft- und Strickabscheider – Berechnung liegender Gas-Flüssig-Abscheider. verfahrenstechnik 1976;10(1):34–37. [160] Marr R, Moser F. Die Auslegung von stehenden Gas-Flüssig-Abscheidern – Schwerkraft- und Gestrickabscheider. verfahrenstechnik 1975;9(8):379–382. [161] Brauer H. Grundlagen der Einphasen- und Mehrphasenströmungen. Verlag Sauerländer; 1971. [162] Bürkholz A. Droplet Separation. Weinheim: Wiley-VCH; 1989. [163] Jess A, Wasserscheid P. Chemical Technology. Weinheim: Wiley-VCH; 2013. [164] Montebelli A, Tronconi E, Orsenigo C, Ballarini N. Kinetic and modeling study of the ethylene oxychlorination to 1, 2-dichloroethane in fluidized-bed reactors. Ind Eng Chem Res 2015;54(39):9513–9524. [165] Riedel E. Allgemeine und Anorganische Chemie, 10th ed. Berlin/New York: Walter de Gruyter; 2010. [166] Macejko B. Is your plant vulnerable to brittle fracture? Hydrocarb Process 2014;93(11):67–78. [167] Sims JR. Improve evaluation of brittle-fracture resistance for vessels. Hydrocarb Process 2013;92(1):59–62. [168] Rähse W. Praktische Hinweise zur Wahl des Werkstoffs von Maschinen und Apparaten. Chem-Ing-Tech 2014;86(8):1163–1179. [169] Wagner MH. Heat transfer to non-newtonian fluids. VDI Heat Atlas, 2nd ed., chap. M4. Berlin/Heidelberg: Springer-Verlag; 2010. [170] Truckenbrodt E. Lehrbuch der angewandten Fluidmechanik. Berlin/Heidelberg/New York/Tokyo: Springer-Verlag; 1983. [171] Lockhart RW, Martinelli RC. Proposed correlation of data for isothermal two-phase, two-component flow in pipes. Chem Eng Prog 1949;45(1):39–48. [172] Friedel L. Druckabfall bei der Strömung von Gas/Dampf-Flüssigkeits-Gemischen in Rohren. Chem-Ing-Tech 1978;50(3):167–180. [173] Friedel L. Eine dimensionslose Beziehung für den Reibungsdruckabfall bei Zweiphasenrohrströmung zwischen Wasser und R12. vt Verfahrenstechnik 1979;13(4):241–246. [174] Friedel L. Improved friction pressure drop correlations for horizontal and vertical two phase pipe flow. 3R international 1979;18(7):485–491. [175] Beggs HD, Brill JP. A study of two-phase flow in inclined pipes. J Petrol Technol 1973;607–617. [176] Muschelknautz S. Druckverlust in Rohren und Rohrkrümmern bei Gas-Flüssigkeit-Strömung. VDI-Wärmeatlas, 8th ed., chap. Lgb. Berlin/Heidelberg: Springer-Verlag; 1997. [177] Schmidt H. Two-phase gas-liquid flow. VDI Heat Atlas, 2nd ed., chap. L2. Berlin/Heidelberg: Springer-Verlag; 2010.


[178] Lee S, Seok W. Major accident and failure of stationary equipment in the RFCCU. Hydrocarb Process 2016;95(1):65–70. [179] Sahoo T. Pick the right valve. Chemical Engineering; 2004, Aug. p. 34–39. [180] Nitsche M. Industriearmaturen. Chemie-Technik 1985;14(3):99–102. [181] Johnson RD, Lee B. Valve design reduces costs and increases safety for US refineries. Hydrocarb Process 2010;89(8):37–40. [182] Stepanek D. Was den Betreiber von Massedurchflussmessern nach dem CORIOLIS-Prinzip interessiert. Tech. rep., Schwing Verfahrenstechnik GmbH; 2004. Corporate Publication. [183] Ignatowitz E. Chemietechnik, 7th ed. Haan-Gruiten: Verlag Europa-Lehrmittel; 2003. [184] Wagner W, Kruse A. Properties of Water and Steam. Berlin/Heidelberg/New York: Springer-Verlag; 1998. [185] Numrich R, Müller J. Filmwise condensation of pure vapors. VDI Heat Atlas, 2nd ed., chap. J1. Berlin/Heidelberg: Springer-Verlag; 2010. [186] Spirax Sarco GmbH. Grundlagen der Dampf- und Kondensattechnologie; 2014. Available at: www.spiraxsarco.de. [187] Available at: www.spiraxsarco.com/Resources/Pages/steam-engineering-tutorials.aspx. [188] Spirax Sarco GmbH. Kavitation ade! CALORIE 2011;79:10–11. [189] Glück A, Hunold D. Oil-based and synthetic heat transfer media. VDI Heat Atlas, 2nd ed., chap. D4.3. Berlin/Heidelberg: Springer-Verlag; 2010. [190] Krakat G. Cryostatic bath fluids, aqueous solutions, and glycols. VDI Heat Atlas. 2nd ed., chap. D4.2. Berlin/Heidelberg: Springer-Verlag; 2010. [191] Kleiber M. Exhaust air treatment in chemical industry. Gierycz P, Malanowski SK, editors, Thermodynamics for Environment. Warszawa: Information Processing Centre; 2004. [192] Bundesministerium für Umwelt NuR. Technische Anleitung zur Reinhaltung der Luft. Köln: Carl Heymanns Verlag KG; 2002. [193] Herzog F, Schulte M. Abluftreinigung durch Kryokondensation. UMWELT 1998;1/2:49–53. [194] Messer Group GmbH. The DuoCondex process; Available at: www.messergroup.com/the-duocondex-process. [195] Domschke T, Steinebrunner K, Christill M, Seifert H. Verbrennung chlorierter Kohlenwasserstoffe – Die Deacon-Reaktion in Rauchgasen während der Abkühlung. Chem-Ing-Tech 1996;68(5):575–579. [196] Görner K, Hübner K. Gasreinigung und Luftreinhaltung. Berlin/Heidelberg/New York: Springer-Verlag; 2002. [197] Kugeler K, Phlippen PW. Energietechnik. Berlin/Heidelberg/New York: Springer-Verlag; 1993. [198] Kolar J. Stickstoffoxide und Luftreinhaltung. Berlin/Heidelberg/New York: Springer-Verlag; 1990. [199] Hevia MAG, Perez-Ramirez J. Assessment of the low-temperature EnviNOx variant for catalytic N2 O abatement over steam-activated FeZSM-5. Appl Catal B 2008;77(3/4):248–254. [200] Schwefer M, Siefert R, Groves MCE, Maurer R. Verfahren zur gemeinsamen Beseitigung von N2 O und NOx – Erste großtechnische Installation im Abgas der HNO3 -Produktion. Chem-Ing Tech 2003;75(8):1048–1049. [201] Venkatesh M, Woodhull J. Pick the right thermal oxidizer. Chemical Engineering 2003;67–70. [202] Müller G. Absorption organischer Lösemittel mit Glykolethern. VDI-Berichte 1989;730:373–394. [203] Bay K, Wanko H, Ulrich J. Biodiesel – Hoch siedendes Absorbens für die Gasreinigung. Chem-Ing-Tech 2004;76(3):328–333. [204] Available at: www.desotec.com.


[205] Sörensen M, Zegenhagen F, Weckenmann J. State of the art wastewater treatment in pharmaceutical and chemical industry by advanced oxidation. Pharm Ind 2015;77(4):594–607. [206] Available at: www.enviolet.com. [207] Onken U, Behr A. Chemische Prozesskunde. Stuttgart: Georg Thieme Verlag; 1996. [208] Available at: https://en.wikipedia.org/wiki/West_Fertilizer_Company_explosion. [209] Available at: https://en.wikipedia.org/wiki/2015_Tianjin_explosions. [210] Nolan DP. Application of HAZOP and What-If-Safety Reviews to the Petroleum, Petrochemical and Chemical Industries. Park Ridge, NJ: Noyes Publications; 1994. [211] Stephan D. Sicher ist Sicher: Warum SIL keine Pflicht, aber trotzdem ein Muss ist; Available at: www.process.vogel.de/sicherheit/articles/483503/. [212] Bozoki G. Überdrucksicherungen für Behälter und Rohrleitungen. Verlag TÜV Rheinland GmbH; 1977. [213] Renfro J, Stephenson G, Marques-Riquelme E, Vandu C. Use dynamic models when designing high-pressure vessels. Hydrocarb Process 2014;93(5):71–76. [214] Feuerstein A. Dynamische Berechnung von Abblasevorgängen. Master thesis, TU Darmstadt; 2015. [215] Venting Atmospheric and Low-Pressure Storage Tanks. American Petroleum Institute, 5th ed.; 1998. API Standard 2000. [216] Pressure relieving and depressuring systems. American Petroleum Institute, 5th ed.; 2007. ANSI/API Standard 521. [217] LESER. Engineering handbook; Available at: www.leser.com/en/tools/engineering.html. [218] Yeh G, Griman J, Najrani M. Recover from a steam reformer tube rupture. Hydrocarb Process 2013;92(6):85–88. [219] Elliott B. Using DIERS two-phase equations to estimate tube rupture flowrates. Hydrocarb Process 2001;8:49–54. [220] Leung JC, Grolmes MA. A generalized correlation for flashing choked flow of initially subcooled liquid. AIChE J 1988;34(4):688–691. [221] Leung JC. A generalized correlation for one-component homogeneous equilibrium flashing choked flow. AIChE J 1986;32(10):1743–1746. [222] Staak D, Repke JU, Wozny G. Simulation von Entlastungsvorgängen bei Rektifikationskolonnen. Chem-Ing-Tech 2008;80(1/2):129–135. [223] Smith D, Burgess J. Relief valve and flare action items: What plant engineers should know. Hydrocarb Process 2012;91(11):41–46. [224] ISO 4126. Safety devices for protection against excessive pressure. Beuth Verlag, Berlin; 2010. [225] Schmidt J, Westphal F. Praxisbezogenes Vorgehen bei der Auslegung von Sicherheitsventilen und deren Abblaseleitungen für die Durchströmung mit Gas/Dampf-Flüssigkeitsgemischen – Teil 1. Chem-Ing-Tech 1997;69(3):312–319. [226] Fründt J. Untersuchungen zum Einfluß der Flüssigkeitsviskosität auf die Druckentlastung. Aachen: Shaker-Verlag; 1997. Thesis. [227] Schmidt J, Westphal F. Praxisbezogenes Vorgehen bei der Auslegung von Sicherheitsventilen und deren Abblaseleitungen für die Durchströmung mit Gas/Dampf-Flüssigkeitsgemischen – Teil 2. Chem-Ing-Tech 1997;69(8):1074–1091. [228] Brodhagen A, Schmidt F. Berechnen von kritischen Massenströmen. VDI-Wärmeatlas. 10th ed., chap. Lbd. Berlin/Heidelberg: Springer-Verlag; 2006. [229] Sizing, Selection, and Installation of Pressure-Relieving Devices in Refineries. American Petroleum Institute, 7th ed.; 2000. API Recommended Practice 520.


[230] Bauerfeind K, Friedel L. Berechnung der dissipationsbehafteten kritischen Düsenströmung realer Gase. Forschung im Ingenieurwesen 2003;67(6):227–235. [231] Westphal F, Christ M. Erfahrungen aus der Praxis mit dem 3 %-Kriterium für die Zuleitung von Sicherheitsventilen. Technische Sicherheit 2014;4(3):28–31. [232] LESER. Chattering safety valve; Available at: www.leser.com/en/news-about-leser/media-center/videos.html. [233] Klapötke TM. Chemistry of High-Energy Materials. Berlin/New York: Walter de Gruyter; 2011. [234] Pfenning D. Inertisieren im Sekundentakt; 2014. Presentation, FH Aachen. [235] Thess A. The Entropy Principle. Berlin/Heidelberg: Springer-Verlag; 2010. [236] Kittredge CP, Rowley DS. Resistance coefficients for laminar and turbulent flow through one-half-inch valves and fittings. Trans ASME 1957;79:1759–1766. [237] Gersten K. Einführung in die Strömungsmechanik, 3rd ed. Braunschweig: Vieweg; 1984. [238] Kast W. Druckverlust bei der Strömung durch Leitungen mit Querschnittsänderungen. VDI-Wärmeatlas. 8th ed. Berlin/Heidelberg: Springer-Verlag; 1997. Abschnitt Lc. [239] Kleiber M. Prozesstechnik auf der ACHEMA 2018. Chem-Ing-Tech 2018;90(12):1897–1909. [240] Dannenmaier T, Schmidt J, Denecke J, Odenwald O. European Program on Evaluation of Safety Valve Stability. Chemical Engineering Transactions 2016;48:625–630. [241] Markus D, Maas U. Die Berechnung von Explosionsgrenzen mit detaillierter Reaktionskinetik. Chem-Ing-Tech 2004;76(3):289–292. [242] Baybutt P. Process safety incidents, cognitive biases and critical thinking. Hydrocarbon Processing 2017;96(4):81–82. [243] Patidar P, Gupta A. Savings using divided wall columns. PTQ 2018;Q4:79–85. [244] Müller A, Kropp A, Köster R, Fazzini M. Improve energy efficiency with enhanced tube bundles in tubular heat exchangers. Hydrocarbon Processing 2017;96(5):75–81. [245] Kliemann C, Kleiber M, Müller K. Rheological Behavior of Mixtures of Ionic Liquids with Water. Chem Eng Technol 2018;41(4):819–826. [246] Dutta H. Building blocks of process safety. Hydrocarbon Processing 2018;97(10):33–36. [247] Hanik P, Hausmann R. High-reliability organizing for a new paradigm in safety. Hydrocarbon Processing 2018;97(10):51–54. [248] Jain S, Patil R, Gupta A. Phenomenon of flow distribution in manifolds. Hydrocarbon Processing 2018;97(11):75–77. [249] Sofronas T. Case 104: Energy in steam boiler explosions. Hydrocarbon Processing 2018;97(11):17–18. [250] Pressure relieving and depressuring systems. American Petroleum Institute, 6th ed.; 2014. ANSI/API Standard 521. [251] Bernecker G. Das kleine Einmaleins des Anlagenbaus. CITplus 2016;19(5):6–9. [252] Rähse W. Vorkalkulation chemischer Anlagen. Chem-Ing-Tech 2016;88(8):1068–1081. [253] Rähse W. Ermittlung eines kompetitiven Marktpreises für neue Produkte über die Herstellkosten. Chem-Ing-Tech 2017;89(9):1142–1158. [254] Ahuja S. Comparison of Commercial Tools for Distillation Column Design. Bachelor thesis, April 2019. [255] Angler R. Eine kleine Fibel zur Auslegung von Sicherheitseinrichtungen. Version 0.92, private communication. [256] Herdegen V, Werner A, Milew K, Haseneder R, Aubel T. ACHEMA 2018: Membranen und Membranverfahren. Chemie-Ing-Tech 2018;90(12):1964–1971. [257] Grolmes MA, Fisher HG. Vapor-liquid Onset/Disengagement Modeling for Emergency Relief Discharge Evaluation. Presentation at the AIChE 1994 Summer National Meeting.


[258] Kister HZ. What caused tower malfunctions in the last 50 years? Trans IChemE, Vol. 81, Part A, January 2003.
[259] Kister HZ. Can We Believe the Simulation Results? Chem Eng Prog 2002;98(10):52–58.
[260] Kister HZ, Larson KF, Yanagi T. How Do Trays and Packings Stack Up? Chem Eng Progr 1994;90(2):23–32.
[261] Duarte Pinto R, Perez M, Kister HZ. Combine temperature surveys, field tests and gamma scans for effective troubleshooting. Hydrocarbon Processing 2003;82(4):69–76.
[262] Schmidt J. Auslegung von Schutzeinrichtungen für wärmeübertragende Apparate. VDI-Wärmeatlas, 11th ed., chap. L2.3. Berlin/Heidelberg: Springer-Verlag; 2013.
[263] HTRI Design Manual, October 2006.
[264] Georgiadis MC, Banga JR, Pistikopoulos EN, Dua V. Process Systems Engineering: Vol. 7: Dynamic Process Modeling. Weinheim: Wiley-VCH; 2011.
[265] da Silva FJ. Dynamic Process Simulation: When do we really need it? Available at: http://processecology.com/articles/dynamic-process-simulation-when-do-we-really-need-it.
[266] Berutti M. Understanding the digital twin. Available at: https://www.chemengonline.com/understanding-the-digital-twin/?pagenum=1.
[267] Bird RB, Stewart WE, Lightfoot EN. Transport Phenomena, Revised 2nd ed. New York: John Wiley & Sons; 2007.
[268] Greenspan D. Numerical Solution of Ordinary Differential Equations for Classical, Relativistic and Nano Systems. Weinheim: Wiley-VCH; 2006.
[269] Bequette BW. Process Control: Modeling, Design, and Simulation. Prentice Hall; 1998.
[270] Gil Chaves ID, López JRG, Garcia Zapata JL, Leguizamón Robayo A, Rodríguez Niño G. Process Analysis and Simulation in Chemical Engineering. Berlin/Heidelberg/New York: Springer-Verlag; 2015.
[271] Stephanopoulos G. Chemical Process Control: An Introduction to Theory and Practice. Upper Saddle River: Prentice-Hall; 1984.
[272] Haas V. Simulation von Abblaseszenarien am Beispiel eines Industrieprozesses. Master thesis, Karlsruher Institut für Technologie, August 2018.
[273] Duss M, Taylor R. Predict Distillation Tray Efficiency. CEP, July 2018, 24–30.
[274] Grünewald M, Zheng G, Kopatschek M. Auslegung von Absorptionskolonnen – Neue Problemstellungen für eine altbekannte Aufgabe. Chem-Ing-Tech 2011;83(7):1026–1035.
[275] Hanusch F, Rehfeldt S, Klein H. Flüssigkeitsmaldistribution in Füllkörperschüttungen: Experimentelle Untersuchung der Einflussparameter. Chem-Ing-Tech 2017;89(11):1550–1560.
[276] Ottow JCG, Bidlingmaier W, editors. Umweltbiotechnologie. Stuttgart: Gustav Fischer Verlag; 1997.
[277] Span R, Beckmüller R, Eckermann T, Herrig S, Hielscher S, Jäger A, Mickoleit E, Neumann T, Pohl SM, Semrau B, Thol M. TREND. Thermodynamic Reference and Engineering Data 4.0. Lehrstuhl für Thermodynamik, Ruhr-Universität Bochum, 2019.
[278] Lapierre D, Moro J. Fünf nach zwölf in Bhopal. Europa Verlag Leipzig; 2004.
[279] Bittermann HJ. Kein Ärger mit der Pumpe. Process 2019;26(10):52–54.
[280] Lutz H, Wendt W. Taschenbuch der Regelungstechnik. Verlag Harri Deutsch, Frankfurt; 2003.
[281] Available at: www.sulzer.com/en/shared/products/shell-schoepentoeter-and-schoepentoeter-plus.

A Some numbers to remember

It seems to be a bit outdated to know simple numbers by heart. Nevertheless, many projects are initiated by quick ideas in open discussions with colleagues, a plant manager, or a plant engineer. Often, complicated process simulations must be made plausible to practitioners by rough calculations. Knowing some frequently used numbers makes you a candidate for the pole position in process engineering meetings. Without claiming to be complete, here are some numbers considered worth knowing by heart. On purpose, only rounded approximations and not the exact values are given, as the point of learning numbers by heart is to apply them without any tools.

Molecular weights

Nitrogen          28 g/mol
Air               29 g/mol
Water             18 g/mol
Chlorine          71 g/mol
Methanol          32 g/mol
Ethylene          28 g/mol
Oxygen            32 g/mol
Propylene         42 g/mol
Hydrogen           2 g/mol
Ammonia           17 g/mol
Methane           16 g/mol
Carbon Dioxide    44 g/mol
Ethanol           46 g/mol

Standard cubic meter

Essentially, the standard cubic meter is not a volume but a mass unit. It refers to the amount of gaseous substance in 1 m³ at standard conditions p = 1.01325 bar, t = 0 °C. It can be calculated with the ideal gas equation of state. For nitrogen with M = 28.013 g/mol, one gets

mN = pVM/(RT) = (101325 Pa ⋅ 1 m³ ⋅ 0.028013 kg/mol) / (8.31447 J/(mol ⋅ K) ⋅ 273.15 K) = 1.2498 kg ≈ 1.25 kg

This calculation is easy to perform for other substances as well, but hardly without at least a pocket calculator. However, the only substance-specific number in this calculation is the molecular weight. Thus, knowing the rounded value for nitrogen by heart, the mass of a standard cubic meter can easily be scaled with the molecular weight, e.g.

–  for air (M = 29 g/mol): mN,air = 1.25 kg ⋅ 29/28 = 1.295 kg;
–  for hydrogen (M = 2 g/mol): mN,H2 = 1.25 kg ⋅ 2/28 = 89 g;
–  for oxygen (M = 32 g/mol): mN,O2 = 1.25 kg ⋅ 32/28 = 1.43 kg;
–  for carbon dioxide (M = 44 g/mol): mN,CO2 = 1.25 kg ⋅ 44/28 = 1.96 kg.
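If a quick check with more digits is needed, this scaling is easy to script. The following Python sketch (function and variable names are chosen freely here for illustration) evaluates m = pVM/(RT) for the rounded molecular weights listed above:

```python
# Mass of one standard cubic meter (1 m³ at 1.01325 bar, 0 °C) via the ideal gas law
R = 8.31447      # J/(mol K)
P_N = 101325.0   # Pa, standard pressure
T_N = 273.15     # K, standard temperature

def standard_cubic_meter_mass(molar_mass_g_per_mol):
    """Return the mass in kg of 1 m³ of an ideal gas at standard conditions."""
    M = molar_mass_g_per_mol / 1000.0   # kg/mol
    return P_N * 1.0 * M / (R * T_N)    # m = pVM/(RT) with V = 1 m³

# Rounded molecular weights from the table above
for name, M in [("nitrogen", 28), ("air", 29), ("hydrogen", 2),
                ("oxygen", 32), ("carbon dioxide", 44)]:
    print(f"{name:15s} {standard_cubic_meter_mass(M):6.3f} kg")
```

For nitrogen, this reproduces the 1.25 kg used as the basis for the scaling above.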

Some other useful physical property data

cpid    Nitrogen           1 J/(g K)
cpL     Water              4.2 J/(g K)
cpid    Water              1.9 J/(g K)
Δhv     Water              2250 J/g (at t = 100 °C)
ρL      Water              1000 kg/m³
κ       Nitrogen           1.4
cp      Steel              0.4–0.5 J/(g K)
ρ       Steel              7800 kg/m³
λ       Carbon Steel       50 W/(m K)
λ       Stainless Steel    15 W/(m K)

Critical temperatures

Methanol          240 °C
Ethanol           241 °C
Ethylene            9 °C
Propylene          91 °C
Propane            97 °C
Nitrogen         −147 °C
Ammonia           132 °C
Methane           −83 °C
Water             374 °C
Carbon Dioxide     31 °C


Normal boiling points

Methanol           64 °C
Acetone            56 °C
Benzene            80 °C
Toluene           111 °C
Ethylene         −104 °C
Propylene         −48 °C
Propane           −42 °C
Chlorine          −34 °C
Water             100 °C (in fact, the exact value is 99.97 °C according to ITS-90)
Nitrogen         −196 °C
Ammonia           −33 °C
Methane          −161 °C
Carbon Dioxide    None (no boiling point at atmospheric pressure; the triple point is at t = −56.5 °C, p = 5.2 bar)
Ethanol            78 °C

Rough values for the vapor pressure of water

 30 °C     0.04 bar
 40 °C     0.075 bar
 60 °C     0.2 bar
 70 °C     0.3 bar
 80 °C     0.5 bar
 90 °C     0.7 bar
100 °C     1 bar
120 °C     2 bar
150 °C     5 bar
160 °C     6 bar
170 °C     8 bar
180 °C     10 bar
190 °C     12.5 bar
210 °C     19 bar
230 °C     28 bar
250 °C     40 bar

Some values for heat transfer

Heat transfer by natural convection to air             α = 4–5 W/(m² K)
Heat transfer by wind                                  α = 10 W/(m² K)
Plate heat exchanger liquid/liquid                     k = 2000 W/(m² K)
Maximum possible solar radiation (solar constant)      S = 1367 W/m²

B Pressure drop coefficients

For the evaluation of the ζ-values in Equation (12.18), the following instructions according to [150] can be applied. In the tables, one can interpolate between the given values. If the cross-flow area changes, the velocity always refers to the outlet of the element, i.e. to the large cross-flow area for expansions and to the small cross-flow area for restrictions. Pictures of the particular elements can be found in [150]. For laminar flow, the ζ-values listed here cannot be used; for small Reynolds numbers, they can be up to 1000-fold higher. The problem is described in [236].

90° bend

r/d                           1       2       4       6       10
ζ90, smooth (k Re < 65 d)     0.21    0.14    0.11    0.09    0.11
ζ90, rough (k Re > 65 d)      0.51    0.3     0.23    0.18    0.20
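To illustrate how the tabulated ζ-values are typically applied, the following Python sketch interpolates in the table above and evaluates the resistance pressure drop, assuming the usual form Δp = ζ ⋅ ρ/2 ⋅ w² behind Equation (12.18); the function names are chosen here for illustration only.

```python
import numpy as np

def zeta_90_bend(r_over_d, rough=False):
    """Interpolate the 90° bend table above (smooth or rough wall)."""
    x = [1, 2, 4, 6, 10]
    y = [0.51, 0.3, 0.23, 0.18, 0.20] if rough else [0.21, 0.14, 0.11, 0.09, 0.11]
    return np.interp(r_over_d, x, y)

def pressure_drop(zeta, rho, w):
    """Resistance pressure drop in Pa, assuming dp = zeta * rho/2 * w**2."""
    return zeta * 0.5 * rho * w**2

# Example: water (rho = 1000 kg/m³) at w = 2 m/s through a smooth bend with r/d = 3
zeta = zeta_90_bend(3.0)                 # ≈ 0.125 by linear interpolation
print(pressure_drop(zeta, 1000.0, 2.0))  # ≈ 250 Pa
```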

Bend with arbitrary angles ϕ ≠ 90°

ϕ         0°     30°    60°    90°    120°   150°   180°
ζ/ζ90     0      0.4    0.7    1.0    1.25   1.5    1.7

Elbow pipe with circular cross-flow area

ϕ          0°     22.5°   30°    45°    60°    90°
ζsmooth    0      0.07    0.11   0.24   0.47   1.13
ζrough     0      0.11    0.17   0.32   0.68   1.27

Elbow pipe with rectangular cross-flow area

ϕ     0°     30°    45°    60°    75°    90°
ζ     0      0.15   0.52   1.08   1.48   1.6

Corrugated expansion joint

ζ = 0.2 n, with n as the number of corrugations.



U-bend

a/d    0      2      5      10
ζ      0.33   0.21   0.21   0.21

a is the length of the straight piece between the bends [150].

Sharp-edged tube entrance

ζ = 0.5

Smooth tube entrance

–  ζ = 0.01 (smooth)
–  ζ = 0.03 (transition smooth–rough)
–  ζ = 0.05 (rough)

Tube entrance with orifice

(d/dorifice)²    1      1.25    2      5      10
ζ                0.5    1.17    5.45   54     245

Discontinuous transition from A1 to A2 > A1

ζ = (A2/A1 − 1)²

Continuous cross-flow area expansion (diffusor)

ϕ/2                  4°      6°      8°      10°     12°
ϕ                    8°      12°     16°     20°     24°
ζ (d2/d1 = 1.2)      0.0     0.0     0.0     0.1     0.2
ζ (d2/d1 = 1.4)      0.0     0.15    0.25    0.3     0.7
ζ (d2/d1 = 1.6)      0.25    0.6     0.85    1.05    1.65
ζ (d2/d1 = 1.8)      0.8     1.15    1.75    2.15    3.1
ζ (d2/d1 = 2.0)      1.25    2.0     2.75    3.5     5.0

The values have been taken from diagrams in [150]. ϕ is the opening angle of the diffusor. The cross-flow area expansion is characterized by the diameter at the inlet of the expansion piece d1, the diameter at the outlet of the expansion piece d2, and its length, which yields the opening angle ϕ. Because of the extremely steep gradients, the extrapolation beyond 24° should be omitted; instead, the discontinuous transition should be used.
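For a straight conical expansion piece, the opening angle follows from the two diameters and the length by simple geometry, tan(ϕ/2) = (d2 − d1)/(2L). A minimal Python sketch under this assumption (names are illustrative):

```python
import math

def diffusor_opening_angle(d1, d2, length):
    """Opening angle phi in degrees of a conical expansion from d1 to d2 over the given length."""
    return 2.0 * math.degrees(math.atan((d2 - d1) / (2.0 * length)))

# Example: expansion from 0.1 m to 0.15 m inner diameter over 0.3 m
phi = diffusor_opening_angle(0.1, 0.15, 0.3)
print(f"phi = {phi:.1f} deg")  # ≈ 9.5°, i.e. within the tabulated range
```

With the angle known, ζ can be read or interpolated from the table above for the given d2/d1.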

Discontinuous cross-flow area restriction from A1 to A2 < A1

The relationship between the area ratio (A2/A1) and ζ can be evaluated according to the relationship in [170], which reproduces the curve in [150] and the tabulated values in [237] sufficiently well:

ζ = 1.5 ((1 − μ)/μ)²                                                  (B.1)

with the restriction coefficient

μ = (0.39309023 (A2/A1)² − 0.86544149 (A2/A1) + 0.61790739) / (1 − 1.4837414 (A2/A1) + 0.62929722 (A2/A1)²)                                                  (B.2)
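Equations (B.1) and (B.2) are easy to implement directly; the following Python sketch (function names chosen here for illustration) evaluates them:

```python
def restriction_coefficient(area_ratio):
    """mu according to Eq. (B.2); area_ratio = A2/A1 with A2 < A1."""
    a = area_ratio
    num = 0.39309023 * a**2 - 0.86544149 * a + 0.61790739
    den = 1.0 - 1.4837414 * a + 0.62929722 * a**2
    return num / den

def zeta_discontinuous_restriction(area_ratio):
    """zeta according to Eq. (B.1), referred to the velocity in the small cross-flow area A2."""
    mu = restriction_coefficient(area_ratio)
    return 1.5 * ((1.0 - mu) / mu)**2

# Example: restriction to half the cross-flow area
print(zeta_discontinuous_restriction(0.5))  # ≈ 0.33
```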

Continuous cross-flow area restriction

The pressure drop is comparatively low and can be described by ζ = 0.05 with sufficient accuracy and in a conservative way [150]. If the angle of the restriction is > 40°, the formula for the discontinuous restriction should be applied. For very small angles, the pressure drop of the pipe itself should not be neglected [238].

Vessel outlet

According to [150], the volume flow of a liquid at the vessel outlet is characterized by the Bernoulli equation, supplemented by a flow coefficient μ:

V̇ = μ A √(2gh + 2(p1 − p2)/ρ)

where
A  ...  cross-flow area of the vessel outlet
μ  ...  outlet flow coefficient
g  ...  gravity acceleration
h  ...  height of liquid in the vessel above the outlet
p1 ...  pressure inside the vessel
p2 ...  pressure outside the vessel
ρ  ...  liquid density

450 | B Pressure drop coefficients For the outlet flow coefficient, the following values can be taken or, respectively, calculated from [150]: (a) sharp-edged outlet: μ = 0.59 . . . 0.62 (b) rounded-edged outlet: μ = 0.97 . . . 0.99 (c) outlet with short pipe with l/d = 2 . . . 3: μ = 0.82 (e) outlet with conical short pipe (d2 /d1 )2 μ

0.1

0.2

0.4

0.6

0.8

1.0

0.80

0.81

0.84

0.86

0.90

0.96

where d2 ist the smaller inner diameter of the conus at the outlet.
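A minimal Python sketch of this outlet flow calculation (variable and function names are chosen here for illustration):

```python
import math

def vessel_outlet_flow(mu, area, height, p_inside, p_outside, rho, g=9.81):
    """Volume flow (m³/s) of a liquid through a vessel outlet according to the
    Bernoulli equation with outlet flow coefficient mu."""
    return mu * area * math.sqrt(2.0 * g * height + 2.0 * (p_inside - p_outside) / rho)

# Example: sharp-edged outlet (mu ≈ 0.6), nozzle with 0.05 m inner diameter,
# 2 m liquid head, no overpressure in the vessel
area = math.pi / 4.0 * 0.05**2
v_dot = vessel_outlet_flow(0.6, area, 2.0, 1.0e5, 1.0e5, 1000.0)
print(f"{v_dot * 3600.0:.1f} m³/h")  # ≈ 26.6 m³/h for this example
```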

Index 10 % pressure drop criterion 409 1st generation random packing 169 2-phase flash 62 2nd generation random packing 170 3-phase flash 62, 111 3 % pressure drop criterion 408 3D models 10 3rd generation random packing 170 4th generation random packing 170 γ-φ-approach 25, 47, 64, 65 γ-ray scanning 221–223 N2 O (laughing gas) 355 φ-φ-approach 25, 47, 49, 64, 65 ζ -value 318, 333, 447 abnormal heat input 397 abrasive effect 148 absolute pressure 335 absorption 163, 216, 350, 359 absorption column 179 absorption/desorption 350, 351, 360 absorptive agent 359 acentric factor 36 acetylene 374 activated carbon 245, 355, 366, 368, 369 activated sludge process 367 activation energy 293 active area 187, 189, 196, 419 active fire protection 15 activity 60 activity coefficient VII, VIII, 26, 30, 40, 45, 59, 419 activity coefficient at infinite dilution 74 actuation case 392 actuation pressure 381, 384 adiabatic throttling 83, 403 adsorbate 419 adsorbent 419 adsorption 112, 164, 213, 237, 245, 350, 355, 362, 363, 366, 369 adsorption capacity 245 adsorption equilibrium 247 adsorption isotherm type 247 adsorption twin plant 249 adsorptive 419 adsorptive agent 245

advanced cubic equation of state 71, 75 advanced oxidation processes (AOP) 366 advanced process control 419 aerobic treatment 367 aerosol 216, 352, 374, 419 AES 132 agitator 283, 289, 302 agro chemicals 91 air cooler 154, 348 air line 417 alpha function 38 amine 340 ammonia 340, 355 ammonia reaction 294, 296 ammonium nitrate 371, 373, 374 ammonium sulfate 371 anaerobic waste water treatment 369, 370 annular flow 326 antifoaming agent 200 API-2000 388 API-521 388 apparent component approach 59 apron 189, 197 aqueous phase 225 aqueous system 172, 174, 202 area plot plan 11 association 54–56 asymmetric rotating disc contactor (ARDC) 234 auto-refrigeration case 309 autoignition temperature 415 axial turbo compressor 269 azeotrope 30, 31, 45, 46, 211, 220, 229, 303, 389, 419 azeotropic condition 46 azeotropic distillation 212 azeotropic point 30 back mixing 232 back pressure 382, 409 backward feed arrangement 98 baffle 136, 158 baffle cut 136, 137, 157 baffle orientation 136 baffle separator 289 baffle spacing 136, 137, 159, 160 baffle types 136, 137, 159


bagatelle limit 349 ball valve 330, 332 BASF 371 basic chemicals 91 basic engineering 4, 5, 7, 398 batch distillation 219 batch process 91, 105, 349 batch process recipe 106 batch reactor 299 batch simulation 105 batch stirred tank reactor 300 batchwise 349 battery limit 6, 419 Bayer-Flachglocke 184 Beggs-Brill correlation 322 bellow 409 bellow expansion joint 328, 329 bellow pipe 328, 329 BEM 132, 141 Bender equation 39 Benedict–Webb–Rubin equation (BWR) 38, 39 benzene 211 Bernoulli equation 257 BEU 13, 132 Bhopal 373 Billet and Schultes model 175 binary interaction parameter (BIP) 44, 72, 73, 111, 230 binary parameter estimation 73 biochemical oxygen demand (BOD) 364 biochemical product 220 biodiesel 360 bioethanol 251 biofilter 361, 362 Biohoch reactor 368, 369 biological exhaust air treatment 350, 351, 361, 362 biological waste water treatment 360, 367–370 bioscrubber 362 biotrickling filter 362 Bitterfeld 372 BJ21T 132, 141, 142 BKU 132 Blasius equation 315 blocked discharge line 392 blocks in ASPEN Plus 83 blowdown 380 blowdown process 386 BOD 419

boiler feed water 345, 419 boiler formula 307 brainstorming 420 brine 348 brittle 309 Broyden method 88 bubble cap tray 183, 400 bubble column 360 bubble column reactor 368 bubble flow 324, 326 bubble point curve 29 bubble regime 182 bubbling-up 411 built-up back pressure 382 butterfly valve 331 by-product 22, 85, 420 c-concentration 293, 295 calcium carbide 374 CAPEX 1, 18 capillary module 239 car sealed valve 332 carboxylic acid 54, 70 Carnival Monday accident 373, 394 cartridge column 189, 202 cascade reactor 301 catalyst 297 catalyst exchange 303 catalytic combustion 350, 356, 357 cause & effect matrix 17, 376, 420 cavitation 6, 259, 264 cell method 130 centrifugal extractor 235 centrifugal pump 257, 263 centrifugation 369 certified capacity 402 champagne effect 412 chaotic flow 325 chattering 402, 409 check valve 6, 331, 420 chemical oxygen demand (COD) 364 chlorine 354 choking 319 chromatography 250 circulation pump 259 Clausius–Clapeyron equation 29, 50, 69 clear liquid height 196 closed balance point 185 co-product 22, 85, 420


coalescence 225, 226 coalescer 420 COD 420 cognitive biases 375, 377 coil module 239 cold water aggregate 348 Colebrook 315 collector 168, 169, 178, 180, 201, 202 collector/distributor unit 168 combinatorial part 74 combined heat and mass transfer 142 combustion 350, 354 compact approach 167 Composite Curve 102 composite membrane 239 composition control 203 compressibility factor 54, 420 compressible fluid 318 compressor 13, 84, 100, 265, 274, 316 conceptual design phase 2, 3, 203 condensate 339, 344–346 condensate lifter 344 condensate line 345 condensate outlet control 343 condensate polishing 366 condensation 163 condenser 130, 141, 143, 153 conservative assumption 401 contingency 19, 420 continuity equation 404 continuous reactor 299 continuous stirred tank reactor (CSTR) 300, 301 control 331 control cycle 376 control engineering 203 control valve 257, 329, 330, 332, 378 control valve failure 398 convergence 167 conversion 291 cooling brine 131 cooling system failure 397 cooling water 102, 155, 157, 347, 420 Coriolis flow meter 335 corresponding states principle 36, 37 corrosion 92, 140, 156, 160, 183, 265, 274, 292, 311, 336, 343, 347, 358, 360, 378, 383 corrosion allowance 307 corrosivity 160 COSMO-RS 75

cost estimation 18 COSTALD equation 50 coverage 148 critical flow phenomenon 403 critical mass flux density 403 critical point 33, 50, 52, 68, 388 critical pressure 33 critical temperature 33 crossflow filtration 240 crud 226 cryo-condensation 350, 352, 353 Cryosolv process 352, 353 crystal growth 253 crystallization 62, 237, 252 CSTR cascade 301 cubic equation of state 34 cyclohexane 212, 352, 372 cyclone 289 data management 69 data sheet 376 databank 72 Deacon reaction 354, 359 dead-end filtration 240 deaeration problem 142 Debye–Hückel term 59 decanter 60 decomposition 169, 188 dedicated plant 105 deflagration 414 degradation 201, 298, 356 delivery head 257, 258 demineralized water 347, 348 demister 288 denoxation 355, 358 density 131, 172, 174 depreciation 420 depressurization 349, 381 design basis 4, 5, 308, 341, 388, 393, 420 design mode 131 design pressure 308, 346, 380–382, 387, 390, 392, 393, 396, 421 design temperature 308, 380, 387, 421 desuperheating 339 detailed approach 167 detailed engineering 6, 8, 398 detonation 414 deviation 421 devil’s advocate 377


dew point curve 29 differences between large numbers 297, 298, 335 diffusion 142, 216, 238, 352 diffusion coefficient 215, 231, 238, 245 digestion tower 369 digital twin 114 dimethylether 372 dioxine 372 dip-pipe 13, 283, 421 direct mode 120, 121 direct steam 340 direct substitution method 86 dischargeable mass flow 402, 407, 409, 413 discontinuous mode (batch) 299 disk-and-donut baffle 137 disperse phase 229 dispersion 225 distillation 76, 163 distillation column control 203 distributor 170, 173, 178, 201, 202 dividing wall column 216 double azeotrope 32 double jeopardy 378, 398, 421 double pipe 127, 154, 310, 328 double-segmental baffle 137, 159 downcomer 182, 196, 200 downcomer backup flooding 196, 199 downcomer choke flooding 195, 199 downcomer clearance 189, 197 downcomer cross-flow area 196 draft tube baffle crystallizer (DTB) 254, 255 drain 95, 328, 332 driving pressure difference 407 driving temperature difference 98, 99, 102, 117, 144, 147, 150, 153, 272, 343, 397 droplet formation 274 droplet size 274 dry pressure drop 177 dualflow tray 186 ductile 309 DuoCondex process 353 durability 240 Duss–Taylor correlation 190 dynamic simulation 2, 81, 113–125, 386, 391, 398, 400 dynamic viscosity 76, 78, 131, 215, 229, 359, 383

economy-of-scale 19 effect of temperature on a chemical reaction 296 efficiency 166 electricity 97 electrodialysis 244 electrolyte 47, 56, 57, 70, 216 electrolyte model 70 electrolyte NRTL model 58 electronegativity 57 elementary charge 57 ellipsoidal head 283 energy balanced configuration 206 Engel model 175 enthalpy 66, 111, 297, 421 enthalpy of adsorption 248 enthalpy of fusion 63 enthalpy of mixing 48 enthalpy of reaction 66, 297, 302 enthalpy of vaporization 28, 29, 37, 50, 66, 68, 71, 389 entrainment 150, 151, 183, 185, 192, 193, 197, 199, 222 entropy 421 equation of state 25, 32, 421 equation-oriented approach (EO) 86 equilibrium 299 equilibrium calculation 166 equilibrium constant 294, 297 equilibrium reaction 294 equilibrium reactor 304 equipment arrangement drawing 10, 11 equipment content visualization 107 equipment diagram 105, 107 error message 167 estimation of vapor pressures 53 ethanol 212, 251 ethanol–water 211, 251 ethylene 71, 415 ethylene glycol 213 ethylene oxide 415 eutectic 74 eutectic system 62, 253 evaporation 163, 365 evaporator 130, 153 excess enthalpy 48, 66, 68, 74, 421 excess heat capacity 74 excess volume 49, 421 exergy analysis 104


exhaust air 105, 348 exhaust air treatment 348, 353 exhaust air treatment with membranes 362 exothermic reaction 300 expansion loop 329 expediting 7 explosion 381, 414, 415 explosion limit 415 explosion zone 414 extended Antoine equation 51 external heat exchanger 290 external reactor 304 extraction 76, 228 extraction column 231 extractive 228 extractive distillation 213 extrapolation function 52 F-factor 172 faceplate 122 failsafe position 332 falling film crystallizer 256 falling film evaporator 147, 148, 154 fan 265 feasibility 90, 97 feedback control loop 119 filtration 364, 369 fin 154 fine chemicals 91, 105, 220, 349 fire 378 fire case 386, 392 fire formula 387, 388 fire point 415 fire prevention 15 fixed bed reactor 298 fixed costs 18, 421 fixed valve tray 186 flame ionization detector (FID) 416 flammability diagram 416 flare load 114, 401 flare system 381 flash chamber 209 flash fire 414 flash point 359, 414 flashing 410 flood point 173, 175, 177, 179 flooding 170, 173, 222 Flory–Huggins 71 flotation 364

flow 377 flow fractions 139, 157 flow measurement 335 flow path length 189, 192 flow pattern 324 fluid code 92, 93, 327 fluidelastic instability 158, 160 flushing 349 foam factor 200 foaming 148, 149, 169, 179, 188, 200, 202, 221, 411 forced circulation crystallizer (FC) 254, 255 forced circulation evaporator 148, 149, 154 formal kinetics 293 forward feed arrangement 98 fouling 96, 114, 127, 128, 138, 139, 144, 148–150, 153, 155, 156, 165, 169, 172, 182, 183, 186, 188, 202, 203, 274, 288, 342, 347, 362, 383, 420 fouling factor 5, 110, 156 fouling layer 157 fouling resistance 156 fractional hole area 182, 186, 192 fracture 309 free cross-flow area 405 free variables 119, 120 frequency converter 259, 263, 272 friction factor 314, 315 friction pressure drop 84 Friedel equation 321, 324 froth 181 froth height 193 froth regime 182 Froude number 323, 421 fuel gas 421 fuel NO 355 fugacity 35, 294 fugacity coefficient 35, 42, 422 Gantt Chart 105 gas chromatography (GC) 203, 337 gas cylinder 418 gas permeation 240, 243, 244, 363 gas solubility 65, 392 gas-gas exchanger 130, 137, 141 gas-liquid reaction 301, 367 gaskets 92, 153, 327 gate valve 330, 332 gauge pressure 335


gE mixing rules 40, 48, 60, 64, 71, 75 gear pump 265 geysering 144 Gibbs–Duhem equation 45 GIGO principle 85 globe valve 330 glycol ether 360 Grand Composite Curve 103 Grashof number 422 grid baffle 137 group contribution method 27, 73 group polarization 377 groupthink 377 guideword 377, 422 half-open pipe 93 half-pipe coil 289 hardness component 155, 156, 253, 347 hastelloy 347 HAZID 3 HAZOP 4, 17, 378, 422 HAZOP analysis 375 HAZOP leader 375, 376 HCl 216 heat adsorption 351 heat capacity 55, 71 heat conduction 128 heat curve 131 heat exchanger 13, 76, 109, 110, 127 heat exchanger block 82 heat integration 97, 102, 203, 343, 365 heat of formation 27 heat of reaction 300 heat of vaporization 26, 27 heat pump 103 heat transfer 128, 142, 301 heat transfer oil 346 heat transition 128 heater block 82 height of inlet weir 189 height of outlet weir 189 height over the weir 198 helium 78 Henry coefficient 47, 392 Henry component 47 Henry concept 70 Henry’s law 65 Henschke model 227 heteroazeotrope 32, 60, 211

heterogeneous reaction 301 HETP 166, 173, 214, 422 high performance liquid chromatography (HPLC) 337 high vacuum 277 high-precision equation of state 39, 71 hiTRAN elements 140, 141 Hoechst AG 368, 373, 374 Hoffmann–Florin equation 53 holdup 175, 400, 422 hollow fibre module 239 hollow stirrer 302 hot spot 148, 151 hydrodynamics 165, 174, 175 hydrogen 78, 311 hydrogen fluoride (HF) 54, 70 hydrogen peroxide 366 hyper compressor 268 hypochlorite 354, 355 hysteresis 382 ideal gas 33, 66, 404 ideal gas equation 32 ideal gas heat capacity 66, 356 ideally mixed batch stirred tank reactor 300 ideally mixed continuous stirred tank reactor (CSTR) 300 ignition 414, 415 ignition source 358 impingement plate 141 inclination 141 inclusion 253 individual α-function 64 induced ignition 414 industrial fan 272 inert gas 142, 392 inert gas supply 418 inert vent 96 infinite dilution 47 influence of the pressure on a distillation column 165 inlet weir 189, 209 inner surface 246 insulation 327, 346 interlock 6, 422 interlock description 376 internal energy 422 internal rate of return 23 inverted batch column 220


investment 18 ISBL 422 isentropic change of state 404 isentropic efficiency 266 isentropic expansion 405 iso-activity criterion 60 isolation valve 329 isometrics 10, 11, 15, 16 jacket 289 jacket water 348 Jacobian matrix 87 jet flood 193, 199 jet pump 100, 274, 279 Joule–Thomson effect 244, 418, 422 Karl-Fischer-titration 337 Katapak 303, 304 kettle reboiler 149 Kister–Gill correlation 177 knock-out drum 289 Kühni column 233 KV -value 123, 332 laminar flow 314 Langmuir approach 247 Langmuir–Knudsen equation 152 laughing gas 355 Laval nozzle 274, 318 law of mass action 294 layer crystallizer 256 layout pattern 160 LDPE 39, 71, 268, 405 Le Chatelier mixing rule for LEL 416 Le Chatelier’s principle 295 leak before burst 307 leakage 142, 153 leakage rate 280 Lee–Kesler–Plöcker 71 level 377 level alarm 285 level control 145 level indicator 336 level switch 285, 336 lever rule 63, 422 limiting activity coefficient 27 limiting gas velocity 287 limiting oxygen concentration 417 liquid density 27, 49

liquid heat capacity 27, 66 liquid load 171, 172 liquid nitrogen 352, 353 liquid ring compressor 270, 279 liquid seal 197 liquid-liquid equilibrium 59, 74, 111, 229 liquid-liquid exchanger 130, 131, 141, 143 liquid-liquid separator 225 liquidus line 63, 422 liquiphant 337 LLE 111 load data list 16 load diagram 178, 193 load point 177 LOC 417 local composition 43 local composition model 45 locked valve 332 Lockhart-Martinelli correlation 322 lost time incident rate (LTIR) 374 lower explosion limit (LEL) 415 lubricant 331 lubrication oil 279 Ludwigshafen 372 Ludwigshafen-Oppau 371 Mach number 318, 320, 422 magnetic flow meter 336 makespan 108, 109 maldistribution 169, 173, 174, 178, 179, 192, 222 man hole 188, 283 mass balance 2 mass balanced configuration 206 mass flow to be discharged 384, 393, 402, 407, 409 mass transfer 216, 301 material 310, 327, 347, 360 material balance 376 material test 311 materials of construction 309 matrix project management 17 maximum allowable overpressure 381, 382 maximum allowable pressure 385 maximum downcomer velocity 196 maximum flux density 404 maximum mass flux density 280, 403–405, 425 maximum operating pressure 382 maximum relief amount 402 Maxwell criterion 34


measurement 313 mechanical efficiency 266 mechanical foam deletion 200 mechanical stability 327 mechanical strength 4, 307–310 mechanical tension 307 mechanical vapor recompression (MVR) 99, 271, 365 medium vacuum 277 melt crystallization 256 melting temperature 63 membrane 112, 214, 237, 362, 366 membrane compressor 268 membrane process 350 membrane pump 264 membrane separation 98, 237 membrane valve 330 MESH equation 166, 220 methane fermentation 369 methyl isocyanate 373 Michaelis–Menten kinetics 297 microfiltration 237, 364 microorganisms 361, 362, 369 micropore 246 middle vessel 220 mindset 377 minimum allowable temperature (MAT) 310 minimum bypass 262, 392 minimum design metal temperature (MDMT) 309 minimum downcomer residence time 196 minimum liquid load 173, 178 minimum safety distance 13 minimum vapor load 198 minimum vapor velocity 198 miniplant 231 miscibility gap 32, 47, 59, 229 mixer 84 mixer-settler arrangement 231 mixing rule 48, 49, 77, 78 model change 75, 111 model choice 69 Modified UNIFAC 73–75 modularization 14 molecular modeling 75 molecular sieve 213, 245, 246, 251 Monday fire 363 Montz A3 172 Moody diagram 314

motive steam 100 multieffect evaporation 98, 365 multipass tray 197 multiple condensers 167 multiplier 85 multipurpose plant 299 multipurpose unit 105 Murphree efficiency 190, 201, 214 nanofiltration 237, 364 narrowest cross-flow area 403, 410 natural circulation 143, 144 natural frequency 160 net present value 22 Newton method 87 Newton number 302 Nikuradse 315 nitrous oxides (NOx ) 355 no-tubes-in-window baffle (NTIW) 137, 159 node 377, 422 noise 155 non-Newtonian flow 313 non-reasonable risk 378 normal boiling point 27 normal operating condition 308 notched weir 198 nozzle 140, 160 nozzle size 286 NPSH 13, 208, 259, 260, 264, 283 NRTL 42, 44, 45, 47, 58, 60, 64, 230 NRTL electrolyte model 59 nutritient 361 Nußelt number 422 o-nitroanisol 374 objective function 4 O’Connell correlation 190 Ohm’s law 127 oil diffusion pump 279 ω-method 396, 410, 413, 425 open balance point 185 operating manual 16 operation costs 18 operation modes 116 operational characteristics of a jet pump 276 OPEX 1, 18 orifice 403 OSBL 423 oscillating displacement pump 264


oscillation 401 Oslo type crystallizer 255 osmotic pressure 240 out of balance 167, 208 outlet line 407, 409, 410 outlet vapor fraction 146 outlet weir 189, 197 overall plot plan 11 overdesign 138, 144 overpressure from outside 310 oversaturation 253 oversizing a safety valve 401 ozone 366 package unit 423 packed column 165, 168, 400 pair parameter 59 Pall ring 170, 173 parallel baffle cut 141 parallel orientation 136 passive fire protection 15 PC-SAFT 71 Peng–Robinson equation 35, 64, 70 permeability 237, 238 permeate 237 perpendicular orientation 136 pervaporation 240, 243 PFR with recycle 301 pH value 423 pharmaceuticals 91, 105, 220, 349 phase equilibrium 303 phase inversion membrane 239 phosphorus 357 picket-fence weir 198 PID 92–97, 262 pilot plant 231 pinch 102 Pinch method 101 pipe 83 pipe insulation 93, 327, 328 pipe module 239 piping 13, 15, 16, 92, 313 piping and instrumentation diagram (PID) 6, 16, 283, 376, 423 piping class 16, 327 piping element 257, 324 piping isometric drawing 11 piping layout drawing 10, 12 piston compressor 267

piston pump 264 plant characteristics 258 plant layout 4, 8 plate heat exchanger 127, 152, 156 plate module 239 plot plan 10, 376 plug valve 330 plug-flow reactor with recycle 301 plug-flow tubular reactor (PFR) 300, 305 Podbielniak extractor 235 polymer 71, 91, 220 polymerization 187, 288 porous membranes 238 power consumption 266 power failure 398 Poynting factor 42 Prandtl number 423 Prandtl/v. Karman 317 preheating zone 143, 145 pressure 377 pressure control 207 pressure difference measurement 335 pressure drop 141, 175, 194, 201 pressure drop calculation 313 pressure drop correlations for two-phase flow 410 pressure drop of the irrigated packing 177 pressure hydrolysis 367 pressure measurement 334 pressure rating 327 pressure relief 123–125, 349, 380 pressure relief device 308, 381 pressure-swing adsorption (PSA) 249, 251 pressure-swing distillation 211 preventive maintenance 5 principle of countercurrent flow 164 process control system 334, 378, 423 process description 3, 377 process flow diagram (PFD) 3, 376, 423 process simulation 2, 81, 112 process water 424 profile 167 project manager 18 pseudo-critical pressure 131 pseudo-critical temperature 131 pseudo-stream 167 PSRK 38, 71, 73, 75 Pt-100 334 pulsation 232


pump 13, 84, 257, 316, 378, 392, 401 pump characteristics 258, 265, 392, 398, 401 pump efficiency 257 pump failure 398 purge stream 86 pv diagram 33 pxy diagram 29 Rackett equation 49 radial turbo compressor 269 random packing 169, 173, 215 Raoult’s law 30, 41, 60 Raschig ring 169 Raschig Super-Ring 170 Raschig Super-Ring Plus 170 rate of relief 381 rate-based 81, 111, 166, 179, 214, 216, 229, 231, 360 rating mode 131, 138 reaction equilibrium 294, 302 reaction factor 293 reaction force 401 reaction kinetics 112, 292, 295, 297, 301, 303, 367 reaction rate 292, 296, 301 reactive distillation 216, 302–304 real continuous stirred tank reactor 300 reboiler 143 recipe 105 recommended velocities in a pipe 316 rectangular notches 198 rectification 163 rectifying section 218, 424 recycle stream 85 redistribution 178 reflux 164 reflux ratio 165, 167, 207, 220 refrigerated water 348 regeneration 245 regenerative thermal oxidizers (RTO) 358 relief amount 391 relief pressure 391 repulsive forces 381, 401 residence time 144, 147, 150, 188, 189, 196, 198, 200, 202, 235, 256, 299, 300, 303, 356, 357 resistance thermometer 334 restriction orifice 381 retentate 237

reverse flow 115 reverse osmosis 98, 240, 366 reverse reaction 293 reversed mode 120, 121 Reynolds number 287, 322, 424 Richardson’s law 206 rigorous simulation 81 risk parameter 379 rod-baffled exchanger 159 rod-type baffle 138 rotary piston compressor 269 rotary vane pump 269, 279 rotating disc contactor (RDC) 233 rotating displacement pump 265 rough vacuum 151, 277 roughness 314 runaway reaction 298, 299, 302, 394 rupture disc 308, 376, 381, 383, 407 safeguard 377, 424 safety integrity level (SIL) analysis 378 safety valve 6, 15, 93, 114, 308, 316, 376, 381, 385, 390–392, 399–402, 407, 408, 412, 424 safety valve inlet line 407 safety valve lift stop 409, 410 safety valve outlet line 407 safety valve two-phase flow 410, 411 sampling 337 saturation zone 248 scale-up 82, 231, 303 schedule view 105 SCR process 355 screw compressor 268 sea water 347 sea water desalination 241 seal pan 209 sealing strip 137, 139 Second Law 403 sedimentation 364, 369 sedimentation curve 227 seed crystal 253 segment 214 segment height 215 selectivity 245, 291 self-ignition 414 semicontinuous mode 299 semipermeable membrane 240 sensitivity spider 23


separation block 424 separation factor 26, 169, 190, 424 separation sequence 167 separator 84 separator vessel 147 sequential flowsheeting 86 Seveso accident 372, 375 shell orientation 134 shell-and-tube heat exchanger 110, 127, 129–153, 155, 395, 396 shell-and-tube type 127 short path evaporator 151 shut-off valve 331 sieve tray 182, 399 signal transducer 95 SIL classification 378, 379, 398 silicium 356, 357 simplifying assumption 401 simulated moving bed 164, 249, 250 simulation mode 131 single segmental baffle 136 siphon 226 siphon breaker 285 six-tenth-law 19 sloped downcomer 195 sludge 368 slug flow 325, 327 SMB 250 SNCR process 355 Soave–Redlich–Kwong equation 35, 64, 71 sodium cyanide 374 solid-liquid equilibrium 62, 253 solubility 238 solubility membranes 238 solubility of gases 48 solvation 58 space demand 155 specialty chemicals 91, 105, 220, 349 specific enthalpy 66, 131 specific surface 246 speed of sound 318, 321, 403, 405, 407, 410, 414, 418, 422, 424 spiral heat exchanger 127 spiral wound module 239 splash baffle 211 split balance 81 split block 424 splitter 84 spool piece 226

spout 289 spray regime 182, 195 spray tower 360 standard enthalpy of formation 66, 297, 356 standard Gibbs energy of formation 296 startup 113 static crystallizer 256 static liquid head 143, 145 static payback period 23 steam 97, 102, 339 steam boiler explosion 346 steam control valve 397 steam inlet control 341 steam trap 342 steamout 308, 424 Stichlmair model 175 stirrer 302 stochiometric line 417 stoichiometric coefficient 293, 294 stoichiometric ratio 291 stoichiometric reactor 304 stoichiometry 291 stratified flow 326 stripping section 169, 218, 424 structured packing 171, 215 subcooling 143 subcritical flow 333 sulfur 355 sulfuric acid 216 Sulzer BX 172 sun radiation 393 supercritical 47 supercritical flow 333 superheating 340, 341 superimposed back pressure 382 surface tension 25, 78, 215, 229 surfactant 226 suspension crystallizer 254 system flooding 177, 199 TA Luft 216, 349, 350, 352, 355, 356 tangent line 424 tear stream 86, 424 technical high-precision equations of state 39 TEMA 132, 136 temperature 377 temperature class 415 temperature measurement 334 temperature peak 298


temperature profile 204 temperature-swing adsorption (TSA) 249 ternary azeotrope 212 tetrahydrofurane–water 211 Texas City 371 TH diagram 101 theoretical stage 166, 174 thermal combustion 350, 356 thermal conductivity 25, 26, 78, 131, 156, 215 thermal engine 103 thermal expansion 133, 219, 328, 392 thermal expansion valve 392 thermal NO 355 thermal oil 131 thermal resistance 128, 139 thermal stability 149 thermal stress 149 thermal vapor recompression (TVR) 100, 275 thermocouple 334 thermodynamics 165 thermosiphon reboiler 143, 144, 146, 148, 154, 158, 208, 322, 341 THF 211 thin film evaporator 150 throttle valve 146 Tianjin accident 374 tie-in points 425 tie-rod 137, 139 total organic carbon (TOC) 364, 425 Toulouse accident 373 trace element 361 training simulator 203 transport property 76 tray column 165, 181 tray efficiency 166, 214 tray fixing 400 tray pressure drop 194 tray spacing 189, 193, 196 trickle filter 367 triple point 50 troubleshooting 110 true component approach 59 tube arrangement 160 tube layout angle 134 tube passes 135, 136, 140 tube pattern 134, 135 tube pitch 134, 160 tube rupture 378, 395 turbo compressor 269

turbomolecular pump 280 turbulent flow 314 turndown 183, 201 twin plant 249, 352 twisted tubes 157, 158 two-pass tray 192 two-phase factor 322 two-phase flow 93, 144, 146, 321 two-phase flow in the safety valve 410 two-phase flow pressure drop 407 two-phase pressure drop 321, 322 two-phase through the safety valve 411 Twu-α-function 38 Txy diagram 30 typical 5, 425 U-bend 328 U-type 158 U-type heat exchanger 135 ultra-high vacuum 277, 280 ultrafiltration 237, 364 ultrasound 366 ultraviolet radiation 366 underground 16 UNIFAC 74, 75 UNIFAC consortium 74 UNIQUAC 42, 44, 45, 47, 60, 64, 71, 74, 230 unsupported span 159 upper explosion limit (UEL) 415 urea 355 V-notches 198 vacuum 144 vacuum application 178 vacuum column 194, 201 vacuum distillation 165, 188 vacuum pump 265, 277 value creation 20, 90, 97 value engineering 4, 425 valve 83, 316, 329 valve tray 185, 400 van-der-Waals equation 33 van-der-Waals property 74 vapor cross-flow channeling 198, 199 vapor horn 209 vapor inlet from reboiler 208, 209 vapor phase association 70, 71 vapor pressure 27–29, 50, 71, 76, 341, 425 vapor pressure shifting 66


vapor quality 28 vapor recompression 153, 365 vapor-liquid equilibrium 163 vapor-liquid separator 274, 286 vapor-liquid-liquid equilibrium 60 variable costs 18, 425 vent 328 vent nozzle 96 venturi scrubber 360 vessel 84 vessel breathing 105, 349 vibration 137, 149, 158, 160, 200, 262, 330, 335 vibrations 13, 135, 136, 140 virial equation 33 virtual reality 10 viscosity 25, 26, 148, 174, 215, 252, 313, 411, 412 viscous 383 VLLE 111 volume balance 385, 386 volume concentration 293 volume translation 38, 71 volume-translated Peng–Robinson equation (VTPR) 38 vortex breaker 95, 283 vortex flow meter 335 vortex shedding 134, 158, 160 VTPR 38, 64, 71, 75, 295 VTPR equation state 295

Waco accident 374 Wagner equation 51 Wasserhaushaltsgesetz 364 waste water evaporation 365 waste water incineration 367 waste water treatment 91, 364 water 56 water hammering 346 waterspout 283 wavy flow 326 Weber number 323, 425 weeping 182, 186, 197–199, 204, 222 weeping rate 198 Wegstein method 87 weir 181 weir height 184, 400 weir length 197 weir load 197, 198 wetted area 390 Wilson 42, 44, 45, 47, 60, 64 wire mesh layer 288 wired packing 171, 202 wispy annular flow 325 working capital 425 yield 291 yield reactor 304 zeolite 246